---
abstract: 'We have recently argued that if one introduces a relational time in quantum mechanics and quantum gravity, the resulting quantum theory is such that pure states evolve into mixed states. The rate at which states decohere depends on the energy of the states. There is therefore the question of how this can be reconciled with Galilean invariance. More generally, since the relational description is based on objects that are not Dirac observables, the issue of covariance is of importance in the formalism as a whole. In this note we work out an explicit example of a totally constrained, generally covariant system of non-relativistic particles that shows that the formula for the relational conditional probability is a Galilean scalar and therefore the decoherence rate is invariant.'
author:
- 'Rodolfo Gambini$^{1}$, Rafael A. Porto$^{2}$ and Jorge Pullin$^{3}$'
date: August 14th 2004
title: |
Fundamental decoherence from relational time in discrete\
quantum gravity: Galilean covariance
---
Introduction
============
We have recently introduced a new technique for discretizing physical theories [@DiGaPu]. When applied to general relativity it yields a discrete theory that is constraint-free yet approximates the continuum theory well under certain circumstances [@GaPu; @cosmo]. The lack of constraints allows one to tackle some of the fundamental open problems of canonical quantum gravity. For instance, one can introduce a relational time [@greece; @deco1; @njp] à la Page–Wootters [@PaWo]. That is, one promotes all quantities in the theory to quantum operators, chooses one of them as a “clock”, and then computes conditional probabilities for the other variables to take given values when the “clock” variable shows a certain “time”. The resulting quantum theory approximates ordinary quantum mechanics well when the clock variable chosen behaves in a semi-classical fashion with small quantum fluctuations. If one chooses as clock a variable that is in a quantum regime, the resulting theory is still valid but it will not resemble ordinary quantum mechanics.
We have also argued that, due to the fact that one cannot have a perfectly classical clock in nature, the resulting theory will have small but non-vanishing departures from ordinary quantum mechanics. In particular a pure state does not remain pure forever but evolves into a mixed state.
Since one is approximating a constrained continuum theory with a discrete theory that is unconstrained, the resulting relational discrete theory is formulated in terms of variables that are observables for the discrete theory. But they are not necessarily the discrete counterparts of Dirac observables of the continuum theory. Therefore the issue of how to reconcile the predictions of the discrete relational theory with the covariance of the continuum theory is of importance. In particular, the conditional probabilities must remain invariant when one changes coordinates and both the clock variable and the observed variable change values. To tackle the covariance problem in complete generality is beyond the scope of this paper. What we intend to do here is to analyze a simple model where calculations can be worked out concretely, and in particular to probe the following issue: since the prediction for the time of decoherence of pure states results in a formula that involves the energy of the states, it may not be immediately apparent in what sense it is Galilean invariant. We would like to discuss, in a simple model, how to interpret the formula in a way that makes the invariance manifest.
The organization of this paper is as follows. In section II we present the model we will study and in section III we will show the emergence of Galilean invariance. We end with a discussion.
The model
=========
We consider the following model. It consists of two non-interacting particles moving in separate potentials in $1+1$ dimensions. One of the particles we will assume is much more massive than the other, and it will determine the variable we choose as a clock. The other particle we will call the “system” particle. The potential affecting the clock particle will be a constant force field. We will assume the particle is far away from the turning point, since in this regime we know our discrete approach approximates the continuum well [@cosmo]. For the system particle we will assume it behaves quantum mechanically and is in a potential that gives rise to bound states. As is well known [@brho], the best way to understand Galilean transformations in quantum mechanics is to study them as a limit of Lorentz transformations. We will therefore choose a Lorentz-invariant, reparameterization-invariant action for the particles, $$S =\int d\tau \left[
-\left(M c +{U(q^0,q)\over c}\right)\sqrt{(\dot{q}^0)^2-\dot{q}^2}
-\left(m c+ {V(\phi^0,\phi)\over c}\right)\sqrt{(\dot{\phi}^0)^2-\dot{\phi}^2}\right]$$ where $q^0,q$ are the space-time coordinates of the “clock” particle and $\phi^0,\phi$ are the space-time coordinates of the “system” particle. We have kept the speed of light explicit in order to consider later the non-relativistic limit. We start by assuming that the time-like coordinates of both particles can be synchronized (since we will work in the Newtonian limit this poses no conceptual problem) $q^0=\phi^0$, and the reference system has been chosen such that $\dot{q}^0\gg \dot{q}$ and $\dot{\phi}^0\gg \dot{\phi}$, and also $Mc^2\gg U(q^0,q)$ and $mc^2\gg V(\phi^0,\phi)$ (non-relativistic limit). For concreteness we assume that in the reference frame given, $U(q^0,q)=\alpha q$ and $V(\phi^0,\phi)=V(\phi)$ and the latter has bound states. With these assumptions the action becomes, $$S =\int d\tau \left[
-M c \dot{q}^0 -{\alpha q \over c} \dot{q}^0+M c {\dot{q}^2 \over 2
\dot{q}^0} -m c \dot{\phi}^0-{V(\phi)\over c} \dot{\phi}^0+
mc {\dot{\phi}^2 \over 2 \dot{\phi}^0}+\lambda(q^0-\phi^0)\right]$$ where $\lambda$ is a Lagrange multiplier associated with the constraint that imposes the synchronization. It is immediate to see that choosing $q^0=\phi^0=ct$ yields the ordinary action for two non-relativistic particles, with $t$ the ordinary non-relativistic time. We will not do this here: we are interested in handling a totally constrained system, since it is in such systems, which have no preferred notion of time, that the introduction of a relational time is meaningful.
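The non-relativistic expansion leading to this action can be verified with a short computer algebra sketch (the symbol names are ours; `qd0`, `qd` stand for $\dot{q}^0$, $\dot{q}$). The check confirms that the only term dropped at this order in $\dot{q}/\dot{q}^0$ is the one suppressed by $U/(Mc^2)$:

```python
import sympy as sp

# Expansion of the clock-particle piece of the action in powers of qd/qd0.
M, c, U, qd0, qd = sp.symbols('M c U qd0 qd', positive=True)

L_exact = -(M*c + U/c) * sp.sqrt(qd0**2 - qd**2)
L_series = sp.series(L_exact, qd, 0, 4).removeO()

# Terms kept in the action above: -Mc qd0 - (U/c) qd0 + Mc qd^2/(2 qd0)
L_kept = -M*c*qd0 - U*qd0/c + M*c*qd**2/(2*qd0)

# The residual is U qd^2/(2 c qd0), smaller than the kept kinetic term
# by the ratio U/(M c^2), which is negligible in the Newtonian limit.
residual = sp.simplify(L_series - L_kept)
```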
To understand better the constraint structure of the theory, we will rewrite the action in first-order form. We define the canonical momenta, $$\begin{aligned}
p_0 &=& {\partial L \over \partial \dot{q}^0} =-M c-{\alpha q \over c}
-{M c \over 2} {\dot{q}^2 \over (\dot{q}^0)^2},\\
p &=& {\partial L \over \partial \dot{q}} = M c{\dot{q}\over
\dot{q}^0},\\
\pi_0&=&{\partial L \over \partial \dot{\phi}^0} =-m c-
{V(\phi) \over c} -{m c \over 2} {\dot{\phi}^2 \over (\dot{\phi}^0)^2},\\
\pi &=& {\partial L \over \partial \dot{\phi}} = m c{\dot{\phi}\over
\dot{\phi}^0}.\end{aligned}$$
From these we obtain two constraints, in addition to the one we had before, $\phi^0-q^0=0$, $$\begin{aligned}
p_0&= & -M c-{\alpha q\over c} -{p^2 \over 2 Mc}= -{1 \over c}
H_1(p,q),\\ \pi_0 &=&-m c -{V(\phi)\over c} - {\pi^2 \over 2 mc}=-{1
\over c} H_2(\phi,\pi).\end{aligned}$$
If we rearrange the latter two constraints into their sum and difference, $$\begin{aligned}
\pi_0+p_0 &+& {H_1 \over c} +{H_2 \over c}=0, \label{conssum}\\
\pi_0-p_0 &-& {H_1 \over c} +{H_2 \over c}=0, \end{aligned}$$ one readily sees that the last constraint and $q^0-\phi^0=0$ form a second-class pair, whereas they both commute with (\[conssum\]). One imposes the second-class constraints strongly and is left with a theory with a single constraint, whose action is, $$\begin{aligned}
S&=&\int d\tau \left(\left(p_0+\pi_0\right)\dot{q}^0
+p \dot{q} +\pi \dot{\phi} + N \left[p_0+\pi_0
+{H_1 \over c} +{H_2 \over c}\right]\right)\\
&=&\int d\tau \left(\tilde{p}_0\dot{q}^0
+p \dot{q} +\pi \dot{\phi} + N \left[\tilde{p}_0
+{H_1 \over c} +{H_2 \over c}\right]\right)\end{aligned}$$ where we introduced the shorthand $\tilde{p}_0\equiv p_0+\pi_0$. This action is very natural for the system under study (in fact we could have started the calculation simply by considering this action from the outset).
Galilean invariance
===================
To probe the invariance of the decoherence effect of interest, we would like to study two different situations: one in which the system particle is in a potential $V(\phi)$, and another in which the potential is of the form $V(\phi-\beta q^0)$. These represent a system bound by a potential around a minimum that is fixed or moving, respectively, which corresponds to adopting the active point of view of the Galilean transformation. We will do this with our consistent discretization techniques [@GaPu; @DiGaPu; @cosmo]; we refer the reader to our previous papers for details. We start by discretizing the action in the first of the two cases of interest. The integral in the action is replaced by a discrete sum $S=\sum_{n=0}^N L(n,n+1)$, where the time interval $\epsilon=\tau_{n+1}-\tau_n$ has been absorbed into the Lagrange multiplier and, $$L(n,n+1) = \tilde{p}_{0n} \left(q^0_{n+1}-q^0_n\right)
+p_n \left(q_{n+1}-q_n\right) +\pi_n \left(\phi_{n+1}-\phi_n\right)
- N_n \left[\tilde{p}_{0n}
+{H_1(p_n,q_n) \over c} +{H_2(\pi_n,\phi_n) \over c}\right].$$ We now implement the canonical transformation that materializes the time evolution between instant $n$ and $n+1$ with the Lagrangian $-L(n,n+1)$ playing the role of generating function of a type I canonical transformation, $$\begin{aligned}
P^{\tilde{p}_0}_{n+1} &=&
{\partial L(n,n+1) \over \partial \tilde{p}_{0n+1}} =0,\\
P^{\tilde{p}_0}_n &=& -{\partial L(n,n+1) \over
\partial \tilde{p}_{0n}} = -\left(q^0_{n+1}-q^0_n\right) +N_n,\\
P^{q^0}_{n+1} &=&
{\partial L(n,n+1) \over \partial {q}^0_{n+1}}
=\tilde{p}_{0n},\\
P^{q^0}_{n} &=&
-{\partial L(n,n+1) \over \partial {q}^0_{n}}
=\tilde{p}_{0n},\\
P^{p}_{n+1} &=&
{\partial L(n,n+1) \over \partial p_{n+1}} =0,\\
P^{p}_{n} &=&
-{\partial L(n,n+1) \over \partial p_{n}} =
-\left(q_{n+1}-q_n\right) +N_n{p_n \over M},\\
P^{q}_{n+1} &=&
{\partial L(n,n+1) \over \partial q_{n+1}} =p_n,\\
P^{q}_{n} &=&
-{\partial L(n,n+1) \over \partial q_{n}} =
p_n+\alpha N_n,\\
P^{\pi}_{n+1} &=&
{\partial L(n,n+1) \over \partial \pi_{n+1}} =0,\\
P^{\pi}_{n} &=&
-{\partial L(n,n+1) \over \partial \pi_{n}} =
-\left(\phi_{n+1}-\phi_n\right)+N_n{\pi_n \over m},\\
P^{\phi}_{n+1} &=&
{\partial L(n,n+1) \over \partial \phi_{n+1}} =\pi_n,\\
P^{\phi}_{n} &=&
-{\partial L(n,n+1) \over \partial \phi_{n}} =
\pi_n+N_n{\partial V(\phi_n) \over \partial \phi_n},\\
P^{N}_{n+1} &=&
{\partial L(n,n+1) \over \partial N_{n+1}} =0,\\
P^{N}_{n} &=&
-{\partial L(n,n+1) \over \partial N_{n}} =
\tilde{p}_{0n}
+Mc +\alpha {q_n\over c} + {(p_n)^2 \over 2 M c}
+mc +{V(\phi_n)\over c} + {(\pi_n)^2 \over 2 m c}.\end{aligned}$$
The system has constraints and we will use them to eliminate some of the variables and yield a system of evolution equations in a more explicit form. The resulting system is the following, $$\begin{aligned}
q^0_{n+1}&=&q^0_n+N_n,\\
P^{q^0}_{n+1} &=& P^{q^0}_n,\\
q_{n+1}&=&q_n+N_n {P^q_{n+1} \over M},\\
P^q_{n+1}&=& P^q_n-\alpha N_n,\\
\phi_{n+1} &=& \phi_n+N_n {P^\phi_{n+1}\over m},\\
P^\phi_{n+1} &=& P^\phi_n-N_n {\partial V(\phi_n)\over \partial
\phi_n},\\
0&=&P^{q^0}_{n+1}
+Mc +\alpha {q_n\over c} + {(P^q_{n+1})^2 \over 2 M c}
+mc +{V(\phi_n)\over c} + {(P^\phi_{n+1})^2 \over 2 m c}.
\label{34}\end{aligned}$$
The last equation determines the Lagrange multiplier $N_n$. To see this, we first rewrite it entirely in terms of variables at $n+1$, $$P^{q^0}_{n+1}
+Mc +{\alpha \over c}\left(q_{n+1} -{N_n P^q_{n+1}\over M}\right) +
{(P^q_{n+1})^2 \over 2 M c}
+mc +{1 \over c}V\left(\phi_{n+1}-{N_n P^\phi_{n+1}\over m}\right) +
{(P^\phi_{n+1})^2 \over 2 m c}=0.$$
Since we are ultimately interested in studying the system in a regime close to the continuum limit, we make the assumption that the lapse $N_n$ is small and expand the term involving the potential to first order in $N_n$, $$P^{q^0}_{n+1}
+Mc +{\alpha q_{n+1}\over c} -{\alpha N_n P^q_{n+1}\over Mc} +
{(P^q_{n+1})^2 \over 2 M c}
+mc +{V(\phi_{n+1})\over c}
-{N_nP^\phi_{n+1}\over mc}
{\partial V(\phi_{n+1})\over \partial \phi_{n+1}}
+{(P^\phi_{n+1})^2 \over 2 m c}=0.$$
We can now solve explicitly for the Lagrange multiplier, $$N_n=\left(\alpha {P^q_{n+1} \over Mc} +{P^\phi_{n+1} \over mc} V'(\phi_{n+1})\right)^{-1}C_{n+1},$$ where $C_{n+1}$ is the constraint of the continuum theory discretized with all variables evaluated at $n+1$, $$C_{n+1} = P^{q^0}_{n+1}
+Mc +\alpha {q_{n+1}\over c} + {(P^q_{n+1})^2 \over 2 M c}
+mc +{V(\phi_{n+1})\over c} + {(P^\phi_{n+1})^2 \over 2 m c}.$$
We now assume that $\alpha \gg V'(\phi)$. This is due to the fact that we are assuming the clock to be classical and large and $\alpha$ is therefore associated with a macroscopic force whereas $V(\phi)$ is the potential in which the system is bound, and the latter is microscopic in nature. With this assumption we make sure there are no singularities in the computation of the Lagrange multiplier. Recall that the discrete description departs from the continuum one close to the turning point of the orbit.
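As an illustration, the discrete evolution equations above can be iterated numerically. The sketch below uses illustrative values of our own (with $c=1$) and a harmonic system potential $V(\phi)=k\phi^2/2$, for which the constraint (\[34\]) is exactly quadratic in the lapse $N_n$ and can be solved at each step; away from the turning point the lapse stays small and nearly constant, and the continuum constraint remains approximately enforced:

```python
import numpy as np

# Toy iteration of the discrete evolution equations, with c = 1 and a
# harmonic "system" potential V(phi) = k phi^2 / 2. All numbers here
# are illustrative choices, not the physical scales discussed below.
M, m, alpha, k = 1.0, 0.1, 1.0, 1.0

def V(phi):  return 0.5 * k * phi**2
def dV(phi): return k * phi

# Initial data; the clock momentum p is kept away from the turning point.
q0, q, p, phi, pi = 0.0, 5.0, -2.0, 0.3, 0.0
# A slight mismatch in the pure constraint is what drives the evolution.
P0 = -(M + alpha*q + p**2/(2*M) + m + V(phi) + pi**2/(2*m)) - 0.02

lapses, constraints = [], []
for n in range(200):
    # Constraint (34), with momenta at n+1, is quadratic in the lapse N:
    # P0 + M + alpha q_n + (p_n - alpha N)^2/(2M)
    #    + m + V(phi_n) + (pi_n - N V'(phi_n))^2/(2m) = 0.
    a2 = alpha**2/(2*M) + dV(phi)**2/(2*m)
    a1 = -(alpha*p/M + dV(phi)*pi/m)
    a0 = P0 + M + alpha*q + p**2/(2*M) + m + V(phi) + pi**2/(2*m)
    N = min(np.roots([a2, a1, a0]), key=abs).real  # branch near the continuum
    # Discrete evolution equations: momenta first, then coordinates.
    p, pi = p - alpha*N, pi - N*dV(phi)
    q, phi, q0 = q + N*p/M, phi + N*pi/m, q0 + N
    lapses.append(N)
    constraints.append(P0 + M + alpha*q + p**2/(2*M)
                       + m + V(phi) + pi**2/(2*m))
```

The small root of the quadratic is the branch that connects to the continuum limit; the other root is a spurious, large-lapse solution.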
One can now substitute the expression for the Lagrange multiplier in the evolution equations. The resulting system of equations can be viewed as a canonical transformation between instants $n$ and $n+1$ for the remaining variables of the problem. The next step consists in quantizing the system by representing the discrete evolution through a unitary operator, i.e. $\hat{z}^i_n=\hat{U}^\dagger \hat{z}^i_{n+1} \hat{U}$, where the $z^i$’s are all the phase space variables of the problem. All these calculations can be worked out explicitly for a simple system like the one we are considering; for reasons of space we will not show all the details here. The reader can find similar treatments in [@greece; @smolin].
Since we are interested in the continuum limit, a shortcut can be taken by considering the Hamiltonian associated with the unitary transformation $\hat{U}=e^{i\hat{H}}$ [@smolin]. The Hamiltonian is obtained by taking the logarithm of the unitary operator as a power series. This power series converges at all points in phase space except for a small region around the turning point of the orbit of the clock system. The Hamiltonian is obviously conserved upon evolution (except at the turning point). To simplify notation, from now on we write $P^q_n=p_n$ and $P^{q^0}_n=p_{0n}$. The first term in the expansion of the Hamiltonian is, $$H_n \sim {M c (C_n)^2 \over \alpha p_n}\left[1 +O\left(
{M c\, C_n\over (p_n)^2}\right)\right].$$
For the quantization we consider wavefunctions $\psi_n(q^0,q,\phi)$ forming a Hilbert space at the “instant” $n$. Isomorphic Hilbert spaces exist at all other discrete instants. With this Hamiltonian we study, in the Schrödinger representation, the evolution operator $\hat{U}(n,n_0) =e^{i\hat{H} (n-n_0)}$ and its action on the states, $\psi_n(q^0,q,\phi)=\hat{U}(n,n_0)\psi_{n_0}(q^0,q,\phi)$. The explicit form of the quantum Hamiltonian is, $$\hat{H}={M c \over \alpha \hat{p}}\left(\hat{p}_0
+\hat{H}_1+\hat{H}_2\right)^2.$$
It is to be noted that the expression in parenthesis is the constraint that one has in the continuum theory. In the consistent discretization approach the constraint of the continuum theory is not enforced exactly (what is enforced is equation (\[34\]) which corresponds to the constraint of the continuum theory but with the momenta evaluated one instant after the configuration variables). In the continuum limit, it nevertheless is enforced quite approximately and therefore the norm of $\hat{H}$ is going to be small.
We consider a quantum state in which the clock has a semiclassical behavior, so it is described by a coherent state peaked at $<\hat{H}_1>_{n_0}=\bar{E}$, $<\hat{q}^0>_{n_0}=0$, $<\hat{q}>_{n_0}=\bar{q}$, $<\hat{p}>_{n_0}=\bar{p}$ and $
<\hat{p}_0>_{n_0}=\bar{p}_0$. We then have for the wavefunction, $$\Psi_{n_0} = \psi_{n_0}(q,q^0) \varphi_{n_0}(\phi)$$ with $$\psi_{n_0}(q,q^0)=\left(2\pi\sigma_1^2\right)^{-1/4}
\exp\left[-\frac{\left(q-\bar{q}\right)^2}{4 \sigma_1^2}+i \bar{p} q\right]
\left(2\pi\sigma_0^2\right)^{-1/4}
\exp\left[-\frac{\left(q^0\right)^2}{4 \sigma_0^2}+i \bar{p}_0 q^0\right]$$ where $\sigma_1$ is the dispersion in the variable $q$ and $\sigma_0$ is the dispersion in the variable $q^0$. We have also assumed that $\bar{E}\gg |\bar{p}_0+\bar{E}| \gg\, <\hat{H}_2>_{n_0}$. The first inequality ensures that we are in the continuum limit. The second is imposed to simplify the calculations; it means that we accept as “continuum limit” a regime where the constraint of the continuum theory is well enforced with respect to the scale of energies relevant for the “clock” system, while the error in its enforcement is still large with respect to the energies of the system under study. It would be desirable to extend the results of this paper to regimes that approximate the continuum theory even further, but the calculations would be more involved.
The fundamental equation to be studied is the conditional probability, $$P(\phi \in \Delta \phi|q^0 \in \Delta t) =
{\sum_n {\rm Tr}\left(\hat{U}^\dagger(n) \hat{P}_{\phi,{q}^0} \hat{U}(n)
\rho_{q^0}\times\rho_q \times \rho_\phi\right)
\over \sum_n {\rm Tr}\left(\hat{U}^\dagger(n) \hat{P}_{q^0} \hat{U}(n)
\rho_{q^0}\times\rho_q \times \rho_\phi\right)}$$ where $\hat{P}_{\phi,q^0}$ is the projector onto the eigenstate labeled by the values $\phi,q^0$ and $\rho_{q^0}, \rho_q,\rho_\phi$ are the density matrices associated with the state $\Psi_{n_0}$. From now on we will use natural units where $c=\hbar=1$.
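In the idealized limit in which the clock behaves perfectly classically, this conditional probability reduces to ordinary Schrödinger evolution. A minimal finite-dimensional sketch of that limit (a toy clock of our own choosing, not the clock system of this paper) is:

```python
import numpy as np

# Toy Page-Wootters check: a 32-state "clock" perfectly correlated with
# the discrete step, and a two-level "system". Conditioning on the clock
# reading reproduces ordinary unitary evolution of the system.
n_clock, dt = 32, 0.1
H2 = np.array([[0.0, 1.0], [1.0, 0.0]])       # system Hamiltonian (our choice)
evals, evecs = np.linalg.eigh(H2)
U = evecs @ np.diag(np.exp(-1j * evals * dt)) @ evecs.conj().T

psi0 = np.array([1.0, 0.0], dtype=complex)
# "History" state: sum_n |n>_clock (x) U^n |psi0>_system
Psi = np.zeros((n_clock, 2), dtype=complex)
psi = psi0.copy()
for n in range(n_clock):
    Psi[n] = psi
    psi = U @ psi

# P(system in |1> | clock reads n) = |<n,1|Psi>|^2 / sum_s |<n,s|Psi>|^2
n_read = 20
p_conditional = abs(Psi[n_read, 1])**2 / (abs(Psi[n_read])**2).sum()
# Compare with ordinary quantum mechanics at t = n_read * dt
p_schrodinger = abs((np.linalg.matrix_power(U, n_read) @ psi0)[1])**2
```

The interesting departures studied in the text arise precisely when the clock is not of this idealized type and has quantum fluctuations of its own.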
Let us analyze the denominator of this expression. Taking the trace on the $\phi,q$ spaces by integrating, we get, $$\begin{aligned}
{\rm Den}&=&\sum_n {\rm Tr}\left(\hat{U}^\dagger(n) \hat{P}_{q^0} \hat{U}(n)
\rho_{q^0}\times\rho_q \times \rho_\phi\right)\\ &=&
\sum_n {\rm Tr}\left[\exp\left(-i\frac{\left(\hat{p}_0
+\bar{E}\right)^2}{\frac{\alpha \bar{p}}{M}}\left(
n-n_0\right)-2i\frac{\hat{p}_0 +\bar{E}}{\frac{\alpha
\bar{p}}{M}}<\hat{H}_2>\left(n-n_0\right)\right)\times\right.\nonumber\\
&\times&\left.\hat{P}_{q^0}
\exp\left(i\frac{\left(\hat{p}_0
+\bar{E}\right)^2}{\frac{\alpha \bar{p}}{M}}\left(
n-n_0\right)+2i\frac{\hat{p}_0 +\bar{E}}{\frac{\alpha
\bar{p}}{M}}<\hat{H}_2>\left(n-n_0\right)\right)
\rho_{q^0}\right],\nonumber\end{aligned}$$ and the term involving $\hat{H}_2^2$ from $U$ cancels with that of $\hat{U}^\dagger$ since $\hat{H}_2^2$ commutes with $\hat{P}_{q^0}$. We have also replaced $\hat{H}_1$ by $\bar{E}$ and $\hat{p}$ by $\bar{p}$ since the trace implies taking the expectation value of quantities depending on $q$ and $p$. Since $\rho_{q^0}$ represents a state very peaked at $<\hat{p}_0>=\bar{p}_0$ and $<\hat{q}^0>=0$ and since $|\bar{p}_0+\bar{E}|\gg\,<\hat{H}_2>$, we have that, $${\rm Den} = \sum_n {\rm Tr}\left[
\hat{P}_{q^0}
\exp\left( i \frac{\left(\hat{p}_0+\bar{E}\right)^2}
{\frac{\alpha \bar{p}}{M}}(n-n_0)\right)
\rho_{q^0}
\exp\left( -i \frac{\left(\hat{p}_0+\bar{E}\right)^2}
{\frac{\alpha \bar{p}}{M}}(n-n_0)\right) \right]
\equiv
\sum_n {\rm Tr}\left(\rho_n(q^0)\right)\label{45}$$ where $\rho_n(q^0)\equiv \hat{P}_{q^0} \rho_{n,q^0}\equiv
\hat{P}_{q^0} \hat{U}(n)\rho_{q^0} \hat{U}^\dagger(n)$ represents the wavepacket of a “free particle” which evolves with the effective Hamiltonian $$\hat{H}_{\rm eff} ={(\hat{p}_0+\bar{E})^2\over
{\alpha \bar{p} \over M}}$$
It is instructive to realize that one can write, $${\rm Tr}\left[\rho_n(q^0)\right] =
\left(2 \pi \sigma_0^2(n)\right)^{-1/2} \exp \left[-\frac{(q^0
-\bar{q}^0(n))^2}{2 \sigma_0^2(n)}\right],$$ where $$\bar{q}^0(n) = 2 {(\bar{p}_0+\bar{E})(n-n_0)\over {\alpha \bar{p}\over M}}
\equiv t_{\rm max}(n),\label{48}$$ which shows that the clock “displays a time” in the neighborhood of $\bar{q}^0$ when we are at the level $n$ of the discrete theory. We have defined $t_{\rm
max}(n)$, the most likely value of the clock “time” for a given $n$ level in the discrete theory, and we have chosen the clock in such a way that $t_{\rm max}$ grows linearly with $n$.
The width of the packet grows with $n$ as, $$\sigma_0^2(n)=\sigma_0^2\left(1 + \frac{1}{4 \sigma_0^4} \left({M \over
\alpha \bar{p}}\right)^2(n-n_0)^2\right).\label{49}$$
We should now introduce some relevant scales. We will assume the characteristic mass of the clock system is about a kilogram. The potential of the clock system is characterized by the macroscopic constant $\alpha$, which we will assume is of the order of $10^{-3}$ newtons, corresponding in natural units to $\alpha \sim 10^{22} m^{-2}$. If we take $\sigma_0 \sim 3\times 10^{-9} s \sim 1 m$, then, $$\frac{1}{4 \sigma_0^2}\left({M \over \alpha \bar{p}}\right)^2 \sim
10^{-34} m^2,$$ where we have assumed $\bar{p}/M\sim 10^{-5}$ so that we are in a non-relativistic regime.
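The unit conversions behind these estimates can be verified directly. The sketch below uses standard SI values for $\hbar$ and $c$; the check is only at the level of orders of magnitude:

```python
# Order-of-magnitude check of the scales quoted above, using standard
# SI values of hbar and c (our arithmetic; the text keeps powers of ten).
hbar, c = 1.0546e-34, 2.9979e8      # J s, m / s

alpha_SI = 1.0e-3                   # clock force, newtons
alpha = alpha_SI / (hbar * c)       # natural units: m^{-2}, of order 10^22
M = 1.0 * c / hbar                  # 1 kg in natural units: m^{-1}, ~10^42
pbar = 1.0e-5 * M                   # non-relativistic clock momentum

one_metre_in_s = 1.0 / c            # sigma_0 ~ 1 m is a few nanoseconds
q0_hours = 1.0e4 / 3600.0           # 10^4 s is indeed roughly 3 hours
```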
As we discussed in [@greece], the sums that appear in the numerator and denominator of the conditional probability should be large enough to cover the complete evolution of interest for the system, but they should not be infinite, since otherwise one gets an indeterminate quotient of two diverging quantities. Given the value computed above for the quantity multiplying $(n-n_0)^2$, it is natural to bound $n-n_0\ll
10^{17}$; that is, we assume that the sums run from $n_0$ to a maximum value $N\ll 10^{17}$, say $N\sim 10^{14}$, since otherwise the packet representing the clock would spread too much and we would leave the semiclassical regime. Notice that we also have $\bar{E}\geq
10^{26} m^{-1}$, and recalling that $|\bar{p}_0+\bar{E}|$ has to be smaller than $\bar{E}$, choosing it to be $10^{17} m^{-1}$ yields $\bar{q}^0\sim 10^4 s\sim 3$ hours, which is a reasonable number. Summarizing, by bounding the number of steps we find that the denominator is a quantity of order unity. Its precise value is not of great interest, since we can fix it through the normalization of the probability.
Let us analyze the numerator, $$\label{numerator}
{\rm Numer} = \sum_n {\rm Tr} \left[ \hat{P}_{\phi,q^0}
\exp\left( i {\left[(\hat{p}_0+\bar{E})^2 +2 (\hat{p}_0+\bar{E})\hat{H}_2
+\hat{H}_2^2\right](n-n_0)\over {\alpha \bar{p}\over M}}\right)
\rho_{q^0} \rho_q \rho_\phi
\exp\left( -i {\left[(\hat{p}_0+\bar{E})^2 +2 (\hat{p}_0+\bar{E})\hat{H}_2
+\hat{H}_2^2\right](n-n_0)\over {\alpha \bar{p}\over M}}\right)\right].$$ Observing that the projector is independent of $q$, one can substitute $\hat{H}_1$ by its expectation value $\bar{E}$. Using now that the clock is semiclassical and $\bar{p}_0+\bar{E} \gg\, <\hat{H}_2>$ to neglect terms quadratic in $\hat{H}_2$ we can write, $$\begin{aligned}
P(\phi \in \Delta \phi| q^0 \in \Delta t) &=&
\sum_n {\rm Tr}\left[
\hat{P}_{\phi,q^0}
\exp\left(i {\left[(\hat{p}_0+\bar{E})^2 +2
(\bar{p}_0+\bar{E})\hat{H}_2\right](n-n_0) \over {\alpha \bar{p}\over
M}}\right)
\rho_\phi(n_0)\rho_{q^0}(n_0)\right.\nonumber\\
&\times&\left.\exp\left(-i {\left[(\hat{p}_0+\bar{E})^2 +2
(\bar{p}_0+\bar{E})\hat{H}_2\right](n-n_0) \over {\alpha \bar{p}\over
M}}\right) \right] {\rm Den}^{-1}\\
&=& \sum_n {\rm Tr}\left[
\hat{P}_\phi
\exp\left(i\hat{H}_2 t_{\rm max}(n)\right)
\rho_\phi \exp\left(-i\hat{H}_2 t_{\rm max}(n)\right)\right] \times
\nonumber\\
&\times&
{\rm Tr} \left[\hat{P}_{q^0}
\exp\left(i {(\hat{p}_0+\bar{E})^2 \over
{\alpha \bar{p} \over M}}(n-n_0)\right)
\rho_{q^0}
\exp\left(-i {(\hat{p}_0+\bar{E})^2 \over
{\alpha \bar{p} \over M}}(n-n_0)\right)\right] {\rm Den}^{-1},\end{aligned}$$ where we have replaced $\hat{p}_0$ with $\bar{p}_0$ in the cross term, since the clock is approximately classical and its energy dominates in $\bar{p}_0$, and we have separated the expression into two pieces, one dependent on the $\phi$ variable and one dependent on the $q^0$ variable, to make the separation between clock and system more explicit.
The last trace divided by ${\rm Den}$ can be written as ${\cal P}_n(q^0)$ and satisfies that $\sum_n {\cal P}_n(q^0)=1$.
Following the discussion in [@njp], in order to make contact with ordinary quantum mechanics we assume the spacing in $n$ is small compared with the values of $n$ and introduce a continuous variable $v=n \epsilon$. We choose $\epsilon$ such that $$\epsilon = 2 {\bar{p}_0+\bar{E} \over {\alpha \bar{p} \over M}},$$ so that $\epsilon$ is of order $1 m$ with the choice of scales we made for the problem. We choose $n_0=0$ and we can then write a good continuum limit approximation for ${\cal P}_n(q^0)$, as in [@njp], $${\cal P}_v(q^0) = \delta(v-q^0)+\sigma_0^2(q^0) \delta''(v-q^0),$$ with $\sigma_0^2(q^0)$ given by (\[48\],\[49\]), $$\sigma_0^2(q^0) = \sigma_0^2 \left(1 + \frac{1}{4 \sigma_0^4}
\frac{(q^0)^2}{4 (\bar{p}_0+\bar{E})^2}\right)$$ and with $t_{\rm max}(n)=\epsilon n =v$, so we can write, $$P(\phi \in \Delta \phi|t \in \Delta t) = \int dv {\rm Tr}\left[\hat{P}_\phi
\exp\left(i \hat{H}_2 v\right)
\rho_\phi
\exp\left(-i \hat{H}_2 v\right) \right] \left(\delta(v-q^0)+
\sigma_0^2(q^0) \delta''(v-q^0)\right)={\rm
Tr}\left[\tilde{\rho}_2(q^0) \hat{P}_\phi\right]$$ where $$\tilde{\rho}_2(q^0)= \int dv {\cal P}_v(q^0)
\hat{U}_v \rho_\phi(v=0) \hat{U}^\dagger_v,$$ and this density matrix satisfies a Schrödinger equation modified due to the fact that we are considering a quantum clock as shown in detail in [@njp], $${\partial \tilde{\rho}_2 \over \partial q^0} =
i[\hat{H}_2,\tilde{\rho}_2] - \sigma(q^0)
[\hat{H}_2,[\hat{H}_2,\tilde{\rho}_2]], \label{59}$$ with $\sigma(q^0) = d\sigma_0^2(q^0)/d q^0$. This expression is just the first two terms in a power series in terms of the dispersion of the quantum clock, which for realistic systems is a very small quantity.
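The content of this modified Schrödinger equation can be made concrete in a small numerical sketch: populations in the $\hat{H}_2$ eigenbasis are preserved, while off-diagonal elements decay. The two-level system and the constant value taken for $\sigma$ below are illustrative choices of ours:

```python
import numpy as np

# Euler integration of equation (59) for a two-level system, taking
# sigma(q0) constant for simplicity; all numerical values are toy choices.
omega, sigma, dt, steps = 1.0, 0.1, 1.0e-3, 5000
H2 = np.diag([0.0, omega])

def comm(A, B):
    return A @ B - B @ A

rho = 0.5 * np.ones((2, 2), dtype=complex)   # pure state (|0> + |1>)/sqrt(2)
for _ in range(steps):
    rho = rho + dt * (1j * comm(H2, rho) - sigma * comm(H2, comm(H2, rho)))

# Populations (diagonal in the H2 eigenbasis) are untouched, while the
# coherence decays as exp(-sigma * omega^2 * t): fundamental decoherence.
coherence = abs(rho[0, 1])
expected = 0.5 * np.exp(-sigma * omega**2 * steps * dt)
```

The decay rate depends only on the energy difference $\omega$ of the two levels, which is the Galilean-invariance question examined in the remainder of the section.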
To try to get a handle on a rough value for this quantity in the case of a realistic system, we note that the macroscopic clock particle is subject to decoherence due to interaction with the environment. If we characterize such decoherence by a time $t_D$, we have, $$\sigma \sim \frac{1}{4 \sigma_0^2} \frac{q^0}
{2 (\bar{p}_0+\bar{E})^2}|_{q^0=t_D}.$$
If $t_D\sim 1s \sim 10^{10}m$, which is a rather large decoherence time for a macroscopic system, then $\sigma \sim 10^{-24}m$. In reference [@bh] we have estimated theoretical limits as to how small a dispersion is attainable with optimal realistic clocks.
In order to study the Galilean covariance of the conditional probability, the procedure is simple. We have to repeat the calculation assuming a boost with velocity $-\beta$ has been performed on system 2 with respect to system 1, in such a way that the potential it now sees is of the form $V(\phi-\beta q^0)$. For instance, system 2 can be an electron in the central potential of a nucleus. The relational analysis goes along exactly as before, with two differences. The Hamiltonian for the second system becomes, $$H'_2 = V(\phi-\beta q^0)+{\pi^2 \over 2 m},$$ and the initial state of the system is given by $\rho_{q^0}'\times \rho_\phi' =\hat{U}_G\left(\rho_{q^0}\times \rho_\phi\right) \hat{U}_G^\dagger$ with $$\label{62}
\hat{U}_G = \exp\left[i \beta \hat{\pi} \hat{q}^0 -i m \beta \hat{\phi}\right].$$ In other words, the initial state is the one corresponding to the Galilean boost $\hat{U}_G$ to the original state [@brho]. Notice however that in traditional treatments of Galilean invariance in quantum mechanics the variable that we here take as $\hat{q}^0$ is a classical parameter $t$. Our treatment can be considered a relational generalization of the usual Galilean transformations of quantum mechanics. In ordinary quantum mechanics Schrödinger’s equation has a time derivative that acts on the parameter in $\hat{U}_G$. In the relational treatment the equation has a term involving $\hat{p}_0$ instead of the time derivative. Notice that $\hat{p}_0$ is minus the total energy instead of just the “system energy”. Therefore the presence of the operator $\hat{q}^0$ in $\hat{U}_G$ has the same effect in the relational treatment as the derivative with respect to the parameter has in ordinary quantum mechanics: they both induce a change in the energy of the system due to the boost, $\hat{p}_0\rightarrow \hat{p}_0+\beta \hat{\pi}$.
To study the changes in the conditional probability we go back in the derivation to equation (\[numerator\]), $$\begin{aligned}
P'\left(\phi \in \Delta \phi | q^0 \in \Delta t\right) &=&
\sum_n {\rm Tr}\left[
\hat{P}_{\phi,q^0} \exp\left(i{\left[\left(\hat{p}_0+\bar{E}\right)^2+
2 \left(\bar{p}_0+\bar{E}\right)\hat{H}'_2\right]\over
\frac{\alpha \bar{p}}{M}}
\left(n-n_0\right)\right)\right. \\
&\times&\left.
\hat{U}_G \rho_\phi(n_0)\rho_{q^0}(n_0) \hat{U}_G^\dagger
\exp\left(-i{\left[\left(\hat{p}_0+\bar{E}\right)^2+
2 \left(\bar{p}_0+\bar{E}\right)\hat{H}'_2\right]\over
\frac{\alpha \bar{p}}{M}}
\left(n-n_0\right)\right) \right]{\rm Den}^{-1}.\nonumber\end{aligned}$$
The value of the denominator actually does not change due to the boost, although its form changes. We will address this point later on.
To understand the covariance it is convenient to commute $\hat{U}_G$ with the exponential; let us therefore analyze the product, $$B= \exp\left(i{\left[\left(\hat{p}_0+\bar{E}\right)^2+
2 \left(\bar{p}_0+\bar{E}\right)\hat{H}'_2\right]\over
\frac{\alpha \bar{p}}{M}}
\left(n-n_0\right)\right)
\exp\left(i \frac{m \beta^2}{2} \hat{q}^0\right)
\exp\left(i\beta\hat{\pi}\hat{q}^0\right)\exp\left(-im\beta\hat{\phi}\right),$$ where we have used the fact that, $$\hat{U}_G=\exp\left(i \frac{m \beta^2}{2} \hat{q}^0\right)
\exp\left(i\beta\hat{\pi}\hat{q}^0\right)\exp\left(-im\beta\hat{\phi}\right),
\label{65}$$ which can be shown using the Baker–Campbell–Hausdorff formula.
We wish to commute $\hat{U}_G$ to the left. We start by noting that in the subspace of $\phi,\pi$, the variable $q^0$ behaves as an external parameter, as if it were a classical time $t$. Following the calculations of [@brho] for an ordinary quantum system one has, $$\exp\left(i\hat{H}'(t-t_0)\right) \hat{U}_{t_0}\psi(t_0) =
\hat{U}_t \exp\left(i \hat{H} (t-t_0)\right)\psi(t_0),$$
\exp\left(i m {\beta^2 \over 2} (t-t_0)\right)\exp\left(i \beta
\hat{\pi}(t-t_0)\right) \hat{U}_{t_0} \exp\left(i \hat{H}
(t-t_0)\right)\psi(t_0).$$
We can therefore write for our system, $$\begin{aligned}
P'\left(\phi \in \Delta \phi|q^0\in \Delta t\right)&=&\sum_n
{\rm Tr}\left[\hat{P}_{\phi,q^0}
\exp\left(i \left[ \frac{m \beta^2}{2} t_{\rm max}(n) +\beta \hat{\pi}
t_{\rm max}(n)\right]\right)
\exp\left(i {\left(\hat{p}_0+\bar{E}\right)^2\over \frac{\alpha
\bar{p}}{M}}\left(n-n_0\right)\right) \hat{U}_G\right.
\nonumber\\
&\times&
\exp\left(i \hat{H}_2 t_{\rm max}(n)\right) \rho_\phi(n_0) \rho_{q^0}(n_0)
\exp\left(-i \hat{H}_2 t_{\rm max}(n)\right)\hat{U}_G^\dagger
\exp\left(-i \frac{\left(\hat{p}_0+\bar{E}\right)^2}{\frac{\alpha
\bar{p}}{M}}\left(n-n_0\right)\right)\nonumber\\
&\times& \left.
\exp\left(-i
\left[ \frac{m \beta^2}{2} t_{\rm max}(n) +\beta \hat{\pi}
t_{\rm max}(n)\right]\right)
\right]{\rm
Den}^{-1}.\label{68}\end{aligned}$$
We still need to commute $\exp\left(i
{\left(\hat{p}_0+\bar{E}\right)^2\over \frac{\alpha
\bar{p}}{M}}\left(n-n_0\right)\right)$ with $\hat{U}_G$. Noting that the expression for $\hat{U}_G$ (\[65\]) can be written as, $$\hat{U}_G = \exp\left(i \left[\frac{\beta^2 m}{2} +\beta \hat{\pi}\right] \hat{q}^0 \right)\exp \left(-im\beta \hat{\phi}\right),$$ we see that only the first exponential has a non-trivial commutator.
To proceed we note that if one has two operators $\hat{A},\hat{B}$ such that $[[\hat{A},\hat{B}],\hat{A}]=0$, one has that, $$e^{\hat{A}} e^{\hat{B}} = e^{\frac{1}{2} [\hat{A},\hat{B}]}
e^{\hat{A}+\hat{B}}.$$
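The familiar special case of this identity in which $[\hat{A},\hat{B}]$ is itself central (so that it commutes with both operators) can be checked directly with nilpotent matrices; the $3\times 3$ Heisenberg-type representation below is an illustrative choice of ours:

```python
import numpy as np

# Check of the special case in which [A,B] is central, using strictly
# upper-triangular (nilpotent) 3x3 matrices as a Heisenberg-type toy.
A = np.zeros((3, 3)); A[0, 1] = 1.0
B = np.zeros((3, 3)); B[1, 2] = 1.0
C = A @ B - B @ A                    # [A,B]; here it commutes with A and B

def expm3(X):
    # Exact matrix exponential: every matrix used here satisfies X^3 = 0.
    return np.eye(3) + X + X @ X / 2

lhs = expm3(A) @ expm3(B)
rhs = expm3(C / 2) @ expm3(A + B)
```

When $[\hat{A},\hat{B}]$ fails to commute with $\hat{B}$, as for the operators used below, additional central phase factors of the $ab^2$ type appear, which is why the identities that follow carry such terms.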
If we now take $A = a(\hat{p}_0+\bar{E})^2$ and $B=b \hat{q}^0$ we have the following identities, $$\exp\left(ia \left(\hat{p}_0+\bar{E}\right)^2\right) \exp\left(i b\hat{q}_0\right) =
\exp\left(i a b \left(\hat{p}_0+\bar{E}\right)\right) \exp\left(ia
\left(\hat{p}_0+\bar{E}\right)^2 +i b \hat{q}_0\right),$$ and, $$\exp\left(ib \hat{q}_0\right)
\exp\left(ia\left(\hat{p}_0+\bar{E}\right)^2\right) =
\exp\left(-iab\left(\hat{p}_0+\bar{E}\right)\right)
\exp\left(-\frac{i}{6} ab^2\right)
\exp\left(ia \left(\hat{p}_0+\bar{E}\right)^2+ib \hat{q}_0\right),$$ therefore, $$\exp\left(ia\left(\hat{p}_0+\bar{E}\right)^2\right)
\exp\left(ib\hat{q}_0\right) =
\exp\left(2iab\left(\hat{p}_0+\bar{E}\right)\right)
\exp\left(\frac{i}{6} ab^2\right)
\exp\left(ib\hat{q}_0\right)
\exp\left(ia\left(\hat{p}_0+\bar{E}\right)^2\right).$$
Taking $a={(n-n_0) \over \frac{\alpha \bar{p}}{M}}$ and $b=
\frac{m}{2} \beta^2 +\beta\hat{\pi}$, and replacing $\hat{p}_0+\bar{E}$ with $\bar{p}_0+\bar{E}$ in (\[68\]), we finally have, $$\begin{aligned}
P'\left(\phi \in \Delta \phi|q^0\in \Delta t\right)&=&\sum_n
{\rm Tr}\left[\hat{P}_{\phi,q^0} \hat{U}_G
\exp\left( i { \left[ \left(\hat{p}_0+ \bar{E}\right)^2 +2
\left(\bar{p}_0 +\bar{E}\right) \hat{H}_2\right]\over
\frac{\alpha \bar{p}}{M}}(n-n_0)\right)
\rho_\phi \rho_{q_0} \right.\\
&\times&\left.
\exp\left(- i { \left[ \left(\hat{p}_0+ \bar{E}\right)^2 +2
\left(\bar{p}_0 +\bar{E}\right) \hat{H}_2\right]\over
\frac{\alpha \bar{p}}{M}}(n-n_0)\right)
\hat{U}_G^\dagger\right]
{\rm Den}^{-1}.\end{aligned}$$
It is remarkable that all the terms involving $t_{\rm max}$ in (\[68\]) have cancelled against the terms stemming from the commutation we just performed. We can now address the point we postponed before, namely the change in the denominator of the expression. A calculation similar to the one above, starting from (\[45\]) and performing the commutations, shows that the denominator is actually invariant; this uses the cyclicity of the trace and the fact that, unlike the numerator, the denominator does not involve the projector on the $\phi$ space.
Now since $$\hat{U}^\dagger_G \hat{P}_{\phi,q^0} \hat{U}_G = \hat{P}_{\phi -\beta q^0,q^0},$$ we therefore have, $$P'\left(\phi \in \Delta \phi|q^0 \in \Delta t\right)
=P\left(\phi -\beta q^0 \in \Delta \phi|q^0 \in \Delta t\right),$$ which shows that the conditional probability is Galilean invariant.
Conclusions
===========
We have shown in a simple model that using a quantum clock in quantum mechanics leads to a modification of the Schrödinger equation, and that the resulting probabilities are Galilean invariant. Since the probabilities are invariant, the physical predictions of this framework are invariant as well. In particular, the rate of decoherence predicted in [@deco1; @njp; @bh] should be invariant. This is an interesting point, since the predicted decoherence rate is proportional to the difference of energies of states of the system in a basis of energy eigenstates. One could ask how this formula can be Galilean invariant when the energy is not. The answer is that in order to have an energy basis of the kind assumed in the calculation (with discrete spectrum), one has to consider systems analogous to a particle in a potential. In such systems, at least if they are isolated, the difference between energy levels is a Galilean invariant, and therefore so is the decoherence rate.
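A schematic way to see this (an illustrative argument, not part of the derivation above): for an isolated bound system of total mass $M$, all internal energy levels of common total momentum $P$ are shifted by the same state-independent amount under a Galilean boost of velocity $\beta$, $$E'_n = E_n + \beta P + \frac{1}{2} M \beta^2,$$ so the level differences, and hence a decoherence rate that depends only on them, are unchanged: $E'_n - E'_m = E_n - E_m$. This is the same pattern of boost-induced phases, $\exp(i m \beta^2 t/2)\exp(i\beta\hat{\pi} t)$, that appeared in the transformation law used above.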
It remains to be studied how the decoherence presented here transforms under Lorentz transformations. Milburn [@Milburn] recently studied decoherence in a Lorentz-invariant setting, and his treatment could provide a framework to analyze our proposal in some detail. The relativistic case is more problematic, since we are considering corrections to quantum mechanics, and in the relativistic domain one first has to contend with the usual difficulties of defining a relativistic quantum mechanics. Although it appears that the use of a relational time could yield a well-defined theory, the details remain to be studied.
Acknowledgments
===============
This work was inspired by questions by Ted Jacobson and Daniel Sudarsky at the Ladek Zdroj meeting on theoretical physics; JP thanks the organizers, Jerzy Kowalski-Glikman and Giovanni Amelino-Camelia, for their hospitality. This work was supported by grant NSF-PHY0244335 and funds from the Horace Hearne Jr. Laboratory for Theoretical Physics and the Abdus Salam International Center for Theoretical Physics.
[10]{} C. Di Bartolo, R. Gambini, J. Pullin, Class. Quan. Grav. [**19**]{}, 5475 (2002) \[arXiv:gr-qc/0205123\]; C. Di Bartolo, R. Gambini, R. Porto, J. Pullin, “Dirac-like approach for consistent discretizations of classical constrained theories,” arXiv:gr-qc/0405131. R. Gambini, J. Pullin, Class. Quan. Grav. [**20**]{}, 3341 (2003) \[arXiv:gr-qc/0212033\]. R. Gambini, J. Pullin, Phys. Rev. Lett. [**90**]{}, 021301, (2003) \[arXiv:gr-qc/0206055\]. R. Gambini, R.A. Porto and J. Pullin, In “Recent developments in gravity,” K. Kokkotas, N. Stergioulas, editors, World Scientific, Singapore (2003) \[arXiv:gr-qc/0302064\]. R. Gambini, R. Porto, J. Pullin, Class. Quant. Grav. [**21**]{}, L51 (2004) \[arXiv:gr-qc/0305098\]. R. Gambini, R. Porto, J. Pullin, New J. Phys. [**6**]{}, 45 (2004) \[arXiv:gr-qc/0402118\]. D. N. Page and W. K. Wootters, Phys. Rev. D [**27**]{}, 2885 (1983); W. Wootters, Int. J. Theor. Phys. [**23**]{}, 701 (1984); D. N. Page, “Clock time and entropy” in “Physical origins of time asymmetry,” J. Halliwell, J. Perez-Mercader, W. Zurek (editors), Cambridge University Press, Cambridge UK, (1992). H. Brown, P. Hollands, Am. J. Phys. [**67**]{}, 204 (1999). R. Gambini and J. Pullin, Int. J. Mod. Phys. D [**12**]{}, 1775 (2003) \[arXiv:gr-qc/0306095\]. R. Gambini, R. A. Porto and J. Pullin, “Realistic clocks, universal decoherence and the black hole information paradox,” arXiv:hep-th/0406260. G. J. Milburn, “Lorentz invariant intrinsic decoherence,” arXiv:gr-qc/0308021.
---
author:
- 'Bo Chen, Radu V. Craiu, Lisa J. Strug and Lei Sun'
bibliography:
- 'mybib.bib'
title: '**Supplementary Materials for The X factor: A Robust and Powerful Approach to X-chromosome-Inclusive Whole-genome Association Studies**'
---
Appendix A: Proof of Theorem 1 {#appendix-a-proof-of-theorem-1 .unnumbered}
==============================
In Appendix A, we prove Theorem 1 stated in Section 3.1.
[**Case A: Linear Regression**]{}\
We start from the special case of the linear model. For notational simplicity we first rewrite the null hypotheses in matrix form: $H_0: L\boldsymbol{\beta_1}=0$ under $\mathcal{M}_1$ and $H_0: L\boldsymbol{\beta_2}=0$ under $\mathcal{M}_2$, where $L=(0_{(q, p-q)}, I_q)$ is the horizontal concatenation of the $q \times (p-q)$ zero matrix and the $q \times q$ identity matrix.
In the case of the linear model, it is well known [@vandaele81] that the Wald, Score and LRT test statistics for $H_0$ are all functions of the F-statistic. So it is sufficient to show that the F-statistic is the same for $\mathcal{M}_1$ and $\mathcal{M}_2$. Specifically, the F-statistic under $\mathcal{M}_j$ for $j=1, 2$ is $$F_j=\frac{Q_j/q}{(Y-X_j(X_j'X_j)^{-1}X_j'Y)'(Y-X_j(X_j'X_j)^{-1}X_j'Y)/(n-p)} \sim F(q,n-p),$$ where $$Q_j=Y'X_j(X_j'X_j)^{-1}L'(L(X_j'X_j)^{-1}L')^{-1}L(X_j'X_j)^{-1}X_j'Y.$$ If $X_j$ denotes the covariate matrix used in model $\mathcal{M}_j$, consider a partition of its columns into $X_j=(X_{j1}, X_{j2})$ such that the effect of $X_{jk}$ on the response is $\beta_{jk}$ for $j,k=1,2$. We partition $(X_j'X_j)^{-1}$ into 4 blocks: $(X_j'X_j)^{-1}=\left(\begin{array}{cc}
X_j^{11} & X_j^{12} \\
X_j^{21} & X_j^{22} \\
\end{array}\right)$. Then $L'(L(X_1'X_1)^{-1}L')^{-1}L$ is simplified to $\left(\begin{array}{cc}
0 & 0 \\
0 & (X_1^{22})^{-1} \\
\end{array}\right)$, which implies $$Q_1=Y'X_1(X_1'X_1)^{-1}\left(\begin{array}{cc}
0 & 0 \\
0 & (X_1^{22})^{-1} \\
\end{array}\right)(X_1'X_1)^{-1}X_1'Y$$ Next, $X_2=X_1T$ implies $L'(L(X_2'X_2)^{-1}L')^{-1}L=\left(\begin{array}{cc}
0 & 0 \\
0 & T_2'(X_1^{22})^{-1}T_2 \\
\end{array}\right),$ and $$\begin{aligned}
&&(T')^{-1}L'(L(X_2'X_2)^{-1}L')^{-1}LT^{-1} \\
&=&\left(\begin{array}{cc}
(T_1')^{-1} & 0 \\
-(T_2')^{-1}T_{12}'(T_1')^{-1} & (T_2')^{-1} \\
\end{array}\right)\left(\begin{array}{cc}
0 & 0 \\
0 & T_2'(X_1^{22})^{-1}T_2 \\
\end{array}\right)\left(\begin{array}{cc}
T_1^{-1} & -T_1^{-1}T_{12}T_2^{-1} \\
0 & T_2^{-1} \\
\end{array}\right) \\
&=&\left(\begin{array}{cc}
0 & 0 \\
0 & (X_1^{22})^{-1} \\
\end{array}\right).\end{aligned}$$ Hence, $$\begin{aligned}
Q_2&=&Y'X_1TT^{-1}(X_1'X_1)^{-1}(T')^{-1}L'(L(X_2'X_2)^{-1}L')^{-1}LT^{-1}(X_1'X_1)^{-1}(T')^{-1}T'X_1'Y \\
&=&Y'X_1(X_1'X_1)^{-1}\left(\begin{array}{cc}
0 & 0 \\
0 & (X_1^{22})^{-1} \\
\end{array}\right)(X_1'X_1)^{-1}X_1'Y=Q_1.\end{aligned}$$ On the other hand, $X_2(X_2'X_2)^{-1}X_2'=X_1TT^{-1}(X_1'X_1)^{-1}(T')^{-1}T'X_1'=X_1(X_1'X_1)^{-1}X_1'$. Therefore, $F_1=F_2$. Finally, the Wald, Score and LRT statistics are $$Wald=\frac{nqF}{n-p}; \quad Score=\frac{nqF}{qF+n-p}; \quad LRT=n\log\left(1+\frac{qF}{n-p}\right).$$ Since $F$ does not change, all three are invariant to the linear transformation $T$ between $\mathcal{M}_1$ and $\mathcal{M}_2$.
[**Case B: Generalized Linear Regression.**]{}\
In the generalized linear model, the three test statistics usually do not have closed forms; they are calculated from $\boldsymbol{\hat\beta_j}$ and $\boldsymbol{\tilde{\beta_j}}$, the unconstrained and constrained MLEs of $\boldsymbol{\beta_j}$, which are usually obtained by numerical methods. Throughout the proof, we use $\hat\cdot$ and $\tilde\cdot$ to denote unconstrained and, respectively, constrained (under $H_0$) estimators. For sample size $n$, we use standard GLM notation, where $$\boldsymbol{\mu}=(\mu_{1},...,\mu_{n})=g^{-1}(X\boldsymbol{\beta}),$$ $$V(\mu_i)=Var(Y_i)/\phi, V(\boldsymbol{\mu})=diag[V(\mu_1),...,V(\mu_n)],$$ $$w(\mu_i)=1/(V(\mu_i)[g'(\mu_i)]^2), W(\boldsymbol{\mu})=diag[w(\mu_1),...,w(\mu_n)],$$ $$z(\mu_i)=g(\mu_i)+g'(\mu_i)(Y_i-\mu_i), z(\boldsymbol{\mu})=[z(\mu_1),...,z(\mu_n)].$$ The proof under the generalized linear model relies on the following two assumptions:
1. $\boldsymbol{\hat{\beta_j}}$ and $\boldsymbol{\tilde{\beta_j}}$ are estimated by the iteratively reweighted least squares method with initial value zero, i.e., $\boldsymbol{\hat{\beta_j}}^{(0)}=\boldsymbol{\tilde{\beta_j}}^{(0)}=\mathbf{0}$.
2. If the dispersion parameter $\phi$ is unknown, it is estimated using $\hat{\phi}=h(\boldsymbol{\hat{\mu}})$ and $\tilde{\phi}=h(\boldsymbol{\tilde{\mu}})$ for some function $h$.
These two assumptions are commonly satisfied in the GLM framework. For assumption 1, there is no prior information on whether the effect size is positive or negative, so an initial value of zero is a reasonable choice. For assumption 2, several estimators of $\phi$ exist in practice, but the most commonly used ones are all functions of $\boldsymbol{\mu}$, e.g., $\hat{\phi}=\frac{1}{n}\sum_{i=1}^n \frac{(Y_i-\hat{\mu_i})^2}{V(\hat{\mu_i})}$.
We first show that $X_1\boldsymbol{\hat{\beta_1}}=X_2\boldsymbol{\hat{\beta_2}}$ and $X_1\boldsymbol{\tilde{\beta_1}}=X_2\boldsymbol{\tilde{\beta_2}}$. Under assumption 1, $X_1\boldsymbol{\hat{\beta_1}}^{(0)}=X_2\boldsymbol{\hat{\beta_2}}^{(0)}$. If we further assume $X_1\boldsymbol{\hat{\beta_1}}^{(k)}=X_2\boldsymbol{\hat{\beta_2}}^{(k)}$, then $\boldsymbol{\hat{\mu_1}}^{(k)}=\boldsymbol{\hat{\mu_2}}^{(k)}$, $V(\boldsymbol{\hat{\mu_1}}^{(k)})=V(\boldsymbol{\hat{\mu_2}}^{(k)})$, $W(\boldsymbol{\hat{\mu_1}}^{(k)})=W(\boldsymbol{\hat{\mu_2}}^{(k)})$ and $z(\boldsymbol{\hat{\mu_1}}^{(k)})=z(\boldsymbol{\hat{\mu_2}}^{(k)})$. At the $(k+1)$th iteration, $$\begin{aligned}
X_2\boldsymbol{\hat{\beta_2}}^{(k+1)}&=&X_2[X_2'W(\boldsymbol{\hat{\mu_2}}^{(k)})X_2]^{-1}
X_2'W(\boldsymbol{\hat{\mu_2}}^{(k)})z(\boldsymbol{\hat{\mu_2}}^{(k)}) \\
&=& X_1TT^{-1}[X_1'W(\boldsymbol{\hat{\mu_1}}^{(k)})X_1]^{-1}
(T')^{-1}T'X_1'W(\boldsymbol{\hat{\mu_1}}^{(k)})z(\boldsymbol{\hat{\mu_1}}^{(k)}) \\
&=& X_1\boldsymbol{\hat{\beta_1}}^{(k+1)}.\end{aligned}$$ Therefore, mathematical induction and a simple limiting argument lead to $X_1\boldsymbol{\hat{\beta_1}}=X_2\boldsymbol{\hat{\beta_2}}$. Under the null hypothesis, we use the same argument on the submatrices $X_{11}$, $X_{21}$ and the transformation matrix $T_1$ to show $X_{11}\tilde{\beta_{11}}=X_{21}\tilde{\beta_{21}}$, which leads to $X_{1}\boldsymbol{\tilde{\beta_{1}}}=X_{2}\boldsymbol{\tilde{\beta_{2}}}$. It immediately follows that $\boldsymbol{\hat{\beta_1}}=T\boldsymbol{\hat{\beta_2}}$, $\boldsymbol{\tilde{\beta_1}}=T\boldsymbol{\tilde{\beta_2}}$, $\boldsymbol{\hat{\mu_1}}=\boldsymbol{\hat{\mu_2}}$ and $\boldsymbol{\tilde{\mu_1}}=\boldsymbol{\tilde{\mu_2}}$.
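The induction argument can be checked numerically. The sketch below is a minimal IRLS implementation for the logistic model (dispersion $\phi=1$), with simulated data chosen only for illustration; it verifies that $X_1\boldsymbol{\hat{\beta_1}}=X_2\boldsymbol{\hat{\beta_2}}$ and $\boldsymbol{\hat{\beta_1}}=T\boldsymbol{\hat{\beta_2}}$ when both fits start from $\boldsymbol{\beta}^{(0)}=\mathbf{0}$ (assumption 1):

```python
import numpy as np

def irls_logistic(X, Y, iters=25):
    """IRLS for the logistic model, started from beta = 0 (assumption 1)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = mu * (1 - mu)            # W(mu) for the logit link
        z = eta + (Y - mu) / w       # working response z(mu)
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * z))
    return beta

rng = np.random.default_rng(1)
n, p = 300, 4
X1 = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
Y = rng.binomial(1, 0.5, size=n).astype(float)
T = np.triu(rng.normal(size=(p, p))) + 3 * np.eye(p)   # invertible upper-triangular T
X2 = X1 @ T

b1, b2 = irls_logistic(X1, Y), irls_logistic(X2, Y)
assert np.allclose(X1 @ b1, X2 @ b2)   # equal fitted linear predictors
assert np.allclose(b1, T @ b2)         # beta_1 = T beta_2
```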
Depending on the type of GLM, the dispersion parameter is either known (e.g., $\phi=1$ in logistic model) or unknown (e.g., $\phi=\sigma^2$ in linear model). However, the estimators of $\beta$ remain the same regardless of whether the dispersion parameter $\phi$ is known or unknown. When $\phi$ is known, it is trivial that $\phi$ remains equal under $\mathcal{M}_1$ and $\mathcal{M}_2$. When $\phi$ is unknown, we replace $\phi$ by its estimator. Because $\boldsymbol{\hat{\mu_1}}=\boldsymbol{\hat{\mu_2}}$ and $\boldsymbol{\tilde{\mu_1}}=\boldsymbol{\tilde{\mu_2}}$, under assumption 2, $\hat{\phi}$ and $\tilde{\phi}$ also remain unchanged under $\mathcal{M}_1$ and $\mathcal{M}_2$. We discuss below each of the three tests, Wald, Score and LR in detail.\
**(i) The Wald statistic** is $$Wald_j=\frac{n}{\hat{\phi}}\{\boldsymbol{\hat{\beta_j}}'L'[L(X_j'W(\hat{\mu_j})X_j)^{-1}L']^{-1}L\boldsymbol{\hat{\beta_j}}\}.$$ Because $\boldsymbol{\hat{\beta_2}}=T^{-1}\boldsymbol{\hat{\beta_1}}$ and $W(\boldsymbol{\hat{\mu_2}})=W(\boldsymbol{\hat{\mu_1}})$, $$Wald_2=\frac{n}{\hat{\phi}}(\boldsymbol{\hat{\beta_1}}'(T^{-1})'L'[LT^{-1}(X_1'W(\boldsymbol{\hat{\mu_1}})X_1)^{-1}(T')^{-1}L']^{-1}LT^{-1}\boldsymbol{\hat{\beta_1}}).$$ We consider a partition $(X_1'W(\boldsymbol{\hat{\mu_1}})X_1)^{-1}$ and follow the approach used for [**Case A**]{} to show $$(T^{-1})'L'[LT^{-1}(X_1'W(\boldsymbol{\hat{\mu_1}})X_1)^{-1}(T')^{-1}L']^{-1}LT^{-1}=L'[L(X_1'W(\boldsymbol{\hat{\mu_1}})X_1)^{-1}L']^{-1}L.$$ Therefore, $Wald_1=Wald_2$.\
**(ii) The Score statistic** is defined by @cordeiro93 as $$Score_j=\frac{1}{\tilde{\phi}}(Y-\boldsymbol{\tilde{\mu_j}})'V(\boldsymbol{\tilde{\mu_j}})^{-1/2}W(\boldsymbol{\tilde{\mu_j}})^{1/2}X_{j2}(\tilde{R_j}'W(\boldsymbol{\tilde{\mu_j}})\tilde{R_j})^{-1}X_{j2}'W(\boldsymbol{\tilde{\mu_j}})^{1/2}
V(\boldsymbol{\tilde{\mu_j}})^{-1/2}(Y-\boldsymbol{\tilde{\mu_j}}),$$ where $$R_j=X_{j2}-X_{j1}(X_{j1}'W(\boldsymbol{\mu_j})X_{j1})^{-1}X_{j1}'W(\boldsymbol{\mu_j})X_{j2}.$$ First, we note that $$(X_{21}, X_{22})=(X_{11}, X_{12})\left(\begin{array}{cc}
T_1 & T_{12} \\
0 & T_2 \\
\end{array}\right)
=(X_{11}T_1, X_{11}T_{12}+X_{12}T_2)$$ and $W(\boldsymbol{\tilde{\mu_2}})=W(\boldsymbol{\tilde{\mu_1}})$. Thus $$\begin{aligned}
\tilde{R_2}&=&X_{11}T_{12}+X_{12}T_2-X_{11}T_1(T_1'X_{11}'W(\boldsymbol{\tilde{\mu_2}})X_{11}T_1)^{-1}T_1'X_{11}'W(\boldsymbol{\tilde{\mu_2}})(X_{11}T_{12}+X_{12}T_2) \\
&=& X_{12}T_2-X_{11}(X_{11}'W(\boldsymbol{\tilde{\mu_1}})X_{11})^{-1}X_{11}'W(\boldsymbol{\tilde{\mu_1}})X_{12}T_2=\tilde{R_1}T_2.\end{aligned}$$ From the estimating equations for the constrained MLE of $\boldsymbol{\beta}$, $$(Y-\boldsymbol{\tilde{\mu_j}})'V(\boldsymbol{\tilde{\mu_j}})^{-1/2}W(\boldsymbol{\tilde{\mu_j}})^{1/2}X_{j1}=0.$$ Hence, $$\begin{aligned}
(Y-\boldsymbol{\tilde{\mu_2}})'V(\boldsymbol{\tilde{\mu_2}})^{-1/2}W(\boldsymbol{\tilde{\mu_2}})^{1/2}X_{22}&=&
(Y-\boldsymbol{\tilde{\mu_1}})'V(\boldsymbol{\tilde{\mu_1}})^{-1/2}W(\boldsymbol{\tilde{\mu_1}})^{1/2}(X_{11}T_{12}+X_{12}T_2) \\
&=&(Y-\boldsymbol{\tilde{\mu_1}})'V(\boldsymbol{\tilde{\mu_1}})^{-1/2}W(\boldsymbol{\tilde{\mu_1}})^{1/2}X_{12}T_2.\end{aligned}$$ Therefore, $$Score_2=\frac{1}{\tilde{\phi}}(Y-\boldsymbol{\tilde{\mu_1}})'V(\boldsymbol{\tilde{\mu_1}})^{-1/2}W(\boldsymbol{\tilde{\mu_1}})^{1/2}X_{12}T_2
(T_2'\tilde{R_1}'W(\boldsymbol{\tilde{\mu_1}})\tilde{R_1}T_2)^{-1} \cdotp$$ $$T_2'X_{12}'W(\boldsymbol{\tilde{\mu_1}})^{1/2}
V(\boldsymbol{\tilde{\mu_1}})^{-1/2}(Y-\boldsymbol{\tilde{\mu_1}}) =Score_1.$$
**(iii) The LRT statistic** is $$LRT_j=2\sum_{i=1}^n[\log f(Y_i, \boldsymbol{\hat{\beta_j}})-\log f(Y_i,\boldsymbol{\tilde{\beta_j}})].$$ The density function of $Y_i$ belongs to the exponential family $$f(Y_i, \boldsymbol{\beta_j})=\exp\left [\frac{Y_iX_{ij}'\boldsymbol{\beta_j}-b(X_{ij}'\boldsymbol{\beta_j})}{\phi}+c(Y_i, \phi)\right],$$ so $X_1\boldsymbol{\hat{\beta_1}}=X_2\boldsymbol{\hat{\beta_2}}$ and $X_1\boldsymbol{\tilde{\beta_1}}=X_2\boldsymbol{\tilde{\beta_2}}$ imply $f(Y_i, \boldsymbol{\hat{\beta_1}})=f(Y_i, \boldsymbol{\hat{\beta_2}})$ and $f(Y_i, \boldsymbol{\tilde{\beta_1}})=f(Y_i, \boldsymbol{\tilde{\beta_2}})$. Therefore, $LRT_1=LRT_2$.
Appendix B: Non-centrality Parameter Computation for Correctly Specified Genetic Models {#appendix-b-non-centrality-parameter-computation-for-correctly-specified-genetic-models .unnumbered}
=======================================================================================
We provide the details for computing non-centrality parameters for the tests under different genetic models. When the model is correctly specified, $ncp$ may be computed using equation $$\begin{aligned}
ncp&=&\beta_2'[H_{22}(\beta_1, 0)-H_{21}(\beta_1,0)H_{11}^{-1}(\beta_1, 0)H_{12}(\beta_1, 0)]\beta_2.\end{aligned}$$ as described in Section 2.2.
Equation (1) above computes the exact $ncp$ as a function of the design matrix $X$. In order to disentangle (1) from the sample-specific observed genotypes, we consider the asymptotic behaviour of $ncp$ as $n \to \infty$. To avoid the uninteresting case in which $ncp \rightarrow \infty$ as $n$ grows, we assume $\boldsymbol{\beta}=\boldsymbol{c}/\sqrt{n}$ [see also @cox74; @begg92; @begg93; @neuhaus98] for a fixed vector $\boldsymbol{c}$, so that $\boldsymbol{\beta} \to \mathbf{0}$ and $ncp$ converges to a finite number as $n \to \infty$.
In the case of a linear model with covariate matrix $X$, $H=\frac{X'X}{\sigma^2}$ regardless of $\boldsymbol{\beta}$. Let $P$ be the limit of $\frac{X'X}{n}$: $$\begin{aligned}
\frac{X'X}{n} &\overset{p}{\to} P.\end{aligned}$$
Corresponding to the split $\boldsymbol{\beta}=(\beta_1,\beta_2)$, $P$ is partitioned as $P=\left[ \begin{array}{cc}
P_{11} & P_{12} \\
P_{21} & P_{22} \\
\end{array} \right]$. The asymptotic value of $ncp$ is then computed following equation (1): $$ncp_{(linear)} \overset{p}{\to} \frac{1}{\sigma^2}c_{2}'[P_{22}-P_{21}(P_{11})^{-1}P_{12}]c_{2},$$ where $c_2=\beta_2\sqrt{n}$.
In the logistic model, $H(\boldsymbol{\beta})=X'W(\boldsymbol{\beta})X$, where $W(\boldsymbol{\beta})$ is the $n \times n$ diagonal matrix with $i$th diagonal element equal to $\mu_i(\boldsymbol{\beta})(1-\mu_i(\boldsymbol{\beta}))$, and $\mu(\boldsymbol{\beta})=\frac{\exp(X\boldsymbol{\beta})}{1+\exp(X\boldsymbol{\beta})}$. As $n \to \infty$, $\boldsymbol{\beta} \to \mathbf{0}$ so that $\mu_i(\boldsymbol{\beta})(1-\mu_i(\boldsymbol{\beta})) \overset{p}{\to} \frac{1}{4}$, which implies $$\frac{X'W(\boldsymbol{\beta})X}{n} \overset{p}{\to} \frac{P}{4}.$$ Hence, the asymptotic non-centrality parameter under the logistic model is $$ncp_{(logistic)} \overset{p}{\to} \frac{1}{4}c_{2}'[P_{22}-P_{21}(P_{11})^{-1}P_{12}]c_{2}.$$ Note that if $\sigma^2=4$, the linear and logistic models have equal asymptotic $ncp$ as long as $X$ and $\boldsymbol{\beta}$ are the same. Under this scenario, in both models $$\begin{aligned}
ncp \overset{p}{\to} \frac{1}{\sigma^2}c_{2}'[P_{22}-P_{21}(P_{11})^{-1}P_{12}]c_{2}. \end{aligned}$$ This observation allows a convenient derivation of $ncp$ in logistic models by plugging $\sigma^2=4$ in the $ncp$ formula for linear models.
[*Remark 1*]{}: Assume that the generative model is genotypic (for autosome SNPs) or model $M_4$ in Table 3 (for X-chromosome SNPs). If the additive model or one of the models $M_1, M_2, M_3$ is used for estimation, then the above derivation of $ncp$ is not valid, since the estimators for $\boldsymbol{\beta}$ may be biased due to model misspecification.
However, when the true model is additive or one of models $M_1, M_2, M_3$, the derivation for $ncp$ remains valid when using either the genotypic model or $M_4$ for estimation, as the MLE estimators of $\boldsymbol{\beta}$ remain unbiased. Therefore, $ncp$ under the genotypic or $M_4$ model may be computed by equation (1) or (2) regardless of the true model.
[*Remark 2*]{}: Under genotypic model, $\beta_2=(\beta_A, \beta_D)$, and $$\begin{aligned}
P &=&\left( \begin{array}{ccc}
1 & E(G_A) & E(G_D) \\
E(G_A) & E(G_A^2) & E(G_A \cdot G_D) \\
E(G_D) & E(G_A \cdot G_D) &E(G_D^2) \\
\end{array} \right).\end{aligned}$$ Under $M_4$, $\beta_2=(\beta_A, \beta_D, \beta_{GS})$ and $$\begin{aligned}
P &=&\left( \begin{array}{ccccc}
1 & E(S) & E(G_A) & E(G_D) & E(GS) \\
E(S) & E(S^2) & E(S \cdot G_A) & E(S\cdot G_D) & E(S \cdot GS) \\
E(G_A) & E(S \cdot G_A) &E(G_A^2) & E(G_A \cdot G_D) & E(G_A \cdot GS) \\
E(G_D) & E(S \cdot G_D) & E(G_A \cdot G_D) & E(G_D^2) & E(G_D \cdot GS) \\
E(GS) & E(S \cdot GS) & E(G_A \cdot GS) & E(G_D \cdot GS) & E(GS^2) \\
\end{array} \right). \end{aligned}$$ Assuming equal population frequencies of females and males, $E(S)=0.5$. The other expected values are computed from the ‘risk’ allele frequencies ($f_{female}$ and $f_{male}$). Although different codings of $G_A$ and $G_I$ may lead to different expected values, the test statistics coincide (by Theorem 1), implying that the $ncp$ is asymptotically coding-invariant.
Appendix C: Non-centrality Parameter Computation for Misspecified Genetic models {#appendix-c-non-centrality-parameter-computation-for-misspecified-genetic-models .unnumbered}
================================================================================
Under model misspecification, the derivations in Appendix B may not be applicable. In this section, we provide an alternative approach for deriving $ncp$ by reparametrizing the covariates without changing the test statistics. The approach is illustrated with a series of examples.\
[**Example 1: Additive model is misspecified when dominant effect is present**]{}\
The following four steps are used to compute the correct $ncp$:
- We reparametrize $G_A$ and $G_D$ as $G_A^*$ and $G_D^*$ such that the test statistic for the null $H_0: \beta_A^*=\beta_D^*=0$ is the same as that for $H_0: \beta_A=\beta_D=0$ under the original genotypic model. From Theorem 1 it is sufficient that $(1, G_A^*, G_D^*)$ is a linear transformation of $(1, G_A, G_D)$.
- We next test $\beta_A^*=0$ under the reparametrized genotypic model $Y \sim G_A^*+G_D^*$. Because the reparametrized genotypic model is correctly specified, the asymptotic $ncp$ for this test can be computed following equation (2).
- We show that when $\mbox{corr}(G_A^*, G_D^*)=0$, the re-parametrized additive model $Y \sim G_A^*$ and the genotypic model $Y \sim G_A^*+G_D^*$ have asymptotically equal $ncp$s for testing $\beta_A^*=0$.
- We require $(1, G_A^*)$ to be a linear transformation of $(1, G_A)$. Then by Theorem 1, testing $\beta_A^*=0$ under $Y \sim G_A^*$ has the same test statistic as testing $\beta_A=0$ under $Y \sim G_A$. Therefore, the correct $ncp$ for testing $\beta_A=0$ under original additive model $Y \sim G_A$ is asymptotically equal to the $ncp$ computed in step 1.
[**S1:**]{} We define $G_A^*=(-1, 0, 1)$ and $G_D^*=(-2f^2, 2f(1-f), -2(1-f)^2)$ for genotype $rr$, $rR$ and $RR$. Direct verification shows $\mbox{corr}(G_A^*, G_D^*)=0$. Also note that $(1, G_A^*, G_D^*)$ and $(1, G_A^*)$ are linear transformations of $(1, G_A, G_D)$ and $(1, G_A)$. Under the new codings, $\boldsymbol{\beta}$ is also re-parametrized so that $\beta_A^*=\beta_A+\beta_D(1-2f)$ and $\beta_D^*=\beta_D$.\
[*Remark 3*]{} Note that the new codings are hard to interpret and we do not recommend using them for effect estimates. Their sole purpose is to facilitate the asymptotic calculation of $ncp$.
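The orthogonality claimed in [**S1**]{} can be checked numerically under Hardy–Weinberg genotype frequencies; the value $f=0.3$ below is an arbitrary example:

```python
import numpy as np

f = 0.3                                                      # example 'risk' allele frequency
probs = np.array([(1 - f) ** 2, 2 * f * (1 - f), f ** 2])    # HWE: rr, rR, RR
GA = np.array([-1.0, 0.0, 1.0])                              # G_A^* coding
GD = np.array([-2 * f ** 2, 2 * f * (1 - f), -2 * (1 - f) ** 2])  # G_D^* coding

E = lambda g: probs @ g          # expectation under the genotype distribution
cov = E(GA * GD) - E(GA) * E(GD)
assert np.isclose(E(GD), 0.0)    # E[G_D^*] = 0
assert np.isclose(cov, 0.0)      # hence corr(G_A^*, G_D^*) = 0
```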
[**S2:**]{} See the previous section.
[**S3:**]{} For logistic models, @begg92 showed the equivalence, and we apply their conclusion directly. Here we provide the proof for the linear model.
Because the re-parametrized genotypic model $Y \sim G_A^*+G_D^*$ is correctly specified, the asymptotic $ncp$ for testing $\beta_A^*=0$ can be computed following equation (2). With the new coding of $G_A^*$ and $G_D^*$, we have $$P^*= \left( \begin{array}{ccc}
1 & -1+2f & 0 \\
-1+2f & 1-2f+2f^2 & 0 \\
0 & 0 & 4f^2(1-f)^2 \\
\end{array} \right).$$ For testing $\beta_A^*=0$, $\boldsymbol{\beta^*}$ is partitioned as $\beta_1^*=(\beta_0^*, \beta_D^*)$ and $\beta_2^*=\beta_A^*$. $P^*$ is partitioned accordingly so that $$P_{11}^*= \left( \begin{array}{cc}
1 & 0 \\
0 & 4f^2(1-f)^2 \\
\end{array} \right), P_{21}^*={P_{12}^*}'=\left( \begin{array}{cc}
-1+2f & 0 \\
\end{array} \right), P_{22}^*=1-2f+2f^2.$$ Therefore, $$ncp \overset{p}{\to} \frac{1}{\sigma^2}{c_{2}^*}'[P_{22}^*-P_{21}^*(P_{11}^*)^{-1}P_{12}^*]c_{2}^*=2f(1-f)\frac{n{\beta_A^*}^2}{\sigma^2}.$$
Because the re-parametrized additive model $Y \sim G_A^*$ is misspecified, we must compute its $ncp$ directly. The chi-squared statistic is $$W=\frac{\boldsymbol{\hat{\beta_A^*}}'L'(L({X_A^*}'X_A^*)^{-1}L')^{-1}L\boldsymbol{\hat{\beta_A^*}}}{\sigma^2},$$ where $\boldsymbol{\hat{\beta_A^*}}=(\hat{\beta_0^*}, \hat{\beta_A^*})'$ is the least squares estimator of $(\beta_0^*, \beta_A^*)'$, $X_A^*=(1, G_A^*)$ and $L=\left[\begin{array}{cc}
0 & 1 \\
\end{array}\right]$.
Because the genotypic model is the true model, $Y \sim N(X^*\boldsymbol{\beta^*}, \sigma^2I_n)$, where $X^*=(1, G_A^*, G_D^*)$ and $\boldsymbol{\beta^*}=(\beta_0^*, \beta_A^*, \beta_D^*)'$. It implies that $$\boldsymbol{\hat{\beta_A^*}}=({X_A^*}'X_A^*)^{-1}{X_A^*}'Y \sim N(({X_A^*}'X_A^*)^{-1}{X_A^*}'X^*\boldsymbol{\beta^*}, ({X_A^*}'X_A^*)^{-1}\sigma^2),$$ and thus $$L\boldsymbol{\hat{\beta_A^*}}\sim N(L({X_A^*}'X_A^*)^{-1}{X_A^*}'X^*\boldsymbol{\beta^*}, L({X_A^*}'X_A^*)^{-1}L'\sigma^2).$$ Therefore, $W \sim \chi^2_{(1, ncp_A)}$ where $$ncp_A=\frac{1}{\sigma^2}\boldsymbol{\beta^*}'{X^*}'X_A^*({X_A^*}'X_A^*)^{-1}L'(L({X_A^*}'X_A^*)^{-1}L')^{-1}
L({X_A^*}'X_A^*)^{-1}{X_A^*}'X^*\boldsymbol{\beta^*}.$$ Next, $X^*\boldsymbol{\beta^*}=X_A^*(\beta_0^*, \beta_A^*)'+\beta_D^*G_D^*$, so we may decompose $ncp_A$ into three parts such that $ncp_A=a_1+a_2+a_3$, where $$\begin{aligned}
a_1&=&\frac{1}{\sigma^2}(\beta_0^*, \beta_A^*)L'(L({X_A^*}'X_A^*)^{-1}L')^{-1}L(\beta_0^*, \beta_A^*)', \\
a_2&=&\frac{2}{\sigma^2}\beta_D^*{G_D^*}'X_A^*({X_A^*}'X_A^*)^{-1}L'(L({X_A^*}'X_A^*)^{-1}L')^{-1}L
(\beta_0^*, \beta_A^*)', \\
a_3&=&\frac{1}{\sigma^2}\beta_D^*{G_D^*}'X_A^*({X_A^*}'X_A^*)^{-1}L'(L({X_A^*}'X_A^*)^{-1}L')^{-1}L
({X_A^*}'X_A^*)^{-1}{X_A^*}'G_D^*\beta_D^*.\end{aligned}$$ Because $$\frac{1}{n}{G_D^*}'X_A^* \overset{p}{\to} (E[G_D^*], E[G_A^*G_D^*])=(0,0)$$ and $$\boldsymbol{\beta^*}=\frac{\boldsymbol{c^*}}{\sqrt{n}} \overset{p}{\to} \mathbf{0},$$ we have $a_2 \overset{p}{\to} 0$ and $a_3 \overset{p}{\to} 0$. To compute $a_1$, $$\frac{1}{n}{X_A^*}'X_A^*
=\frac{1}{n}\left( \begin{array}{cc}
n & \sum G_A^* \\
\sum G_A^* & \sum {G_A^*}^2 \\
\end{array} \right)
\overset{p}{\to} \left( \begin{array}{cc}
1 & -1+2f \\
-1+2f & 1-2f+2f^2 \\
\end{array} \right),$$ which implies $$nL({X_A^*}'X_A^*)^{-1}L' \overset{p}{\to} \frac{1}{2f(1-f)}.$$ Therefore, $$ncp_A \overset{p}{\to} a_1 \overset{p}{\to} 2f(1-f)\frac{n{\beta_A^*}^2}{\sigma^2},$$ which completes the proof that the two asymptotic non-centrality parameters are equal.
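The finite-$n$ population version of this computation (sample moments replaced by their exact Hardy–Weinberg expectations, so that $a_2=a_3=0$ holds exactly) can be checked numerically; the values of $f$, $n$, $\sigma^2$ and $\boldsymbol{\beta^*}$ below are arbitrary example choices:

```python
import numpy as np

f, n, sigma2 = 0.3, 1000, 1.0
beta = np.array([0.1, 0.25, 0.15])    # example (beta0*, betaA*, betaD*)
probs = np.array([(1 - f) ** 2, 2 * f * (1 - f), f ** 2])   # HWE weights
GA = np.array([-1.0, 0.0, 1.0])
GD = np.array([-2 * f ** 2, 2 * f * (1 - f), -2 * (1 - f) ** 2])

Xs = np.column_stack([np.ones(3), GA, GD])   # one row per genotype class
W = np.diag(probs)
XX_A = n * Xs[:, :2].T @ W @ Xs[:, :2]       # population version of X_A*' X_A*
XAX = n * Xs[:, :2].T @ W @ Xs               # population version of X_A*' X*
L = np.array([[0.0, 1.0]])

# ncp_A from the formula in the text, with expected cross-product matrices.
G = np.linalg.inv(XX_A)
M = G @ L.T @ np.linalg.inv(L @ G @ L.T) @ L @ G
ncp_A = beta @ XAX.T @ M @ XAX @ beta / sigma2

assert np.isclose(ncp_A, 2 * f * (1 - f) * n * beta[1] ** 2 / sigma2)
```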
[**Example 2: $\mathbf{M_1}$, $\mathbf{M_2}$ and $\mathbf{M_3}$ are misspecified models when $\mathbf{M_4}$ is the true model**]{}.
As in the previous example, the reparametrized coding $(1, S^*, G_A^*, G_D^*, GS^*)$ must be a linear transformation of $(1, S, G_A, G_D, GS)$, and $(1, S^*)$ must also be a linear transformation of $(1, S)$.
The choice of coding for $G_A$ and $GS$ is not an issue, because we can show the equivalence of $(1, S, G_A, G_D, GS)$ under each coding by applying Theorem 1, as summarized in Figure S4. The requirements in [**S3**]{} and [**S4**]{} are discussed below for models $M_1-M_3$.
- To compute $ncp_1$ for testing $\beta_A=0$ under $M_1$, we require $$\mbox{corr}(G_D^*, G_A^*)=\mbox{corr}(GS^*, G_A^*)=0,$$ and that $(1, S^*, G_A^*)$ is a linear transformation of $(1, S, G_A)$.
- To compute $ncp_2$ for testing $\beta_A=\beta_D=0$ under $M_2$, we require $$\mbox{corr}(GS^*, G_A^*)=\mbox{corr}(GS^*, G_D^*)=0,$$ and that $(1, S^*, G_A^*, G_D^*)$ is a linear transformation of $(1, S, G_A, G_D)$.
- To compute $ncp_3$ for testing $\beta_A=\beta_{GS}=0$ under $M_3$, we require $$\mbox{corr}(G_D^*, G_A^*)=\mbox{corr}(G_D^*, GS^*)=0,$$ and that $(1, S^*, G_A^*, GS^*)$ is a linear transformation of $(1, S, G_A, GS)$.
We can show that $ncp_1$, $ncp_2$ and $ncp_3$ are asymptotically equal to the $ncp$s for testing $\beta_A^*=0$, $\beta_A^*=\beta_D^*=0$ and $\beta_A^*=\beta_{GS}^*=0$ under the correctly specified re-parametrized model $M_4$: $Y \sim S^*+G_A^*+G_D^*+GS^*$, which can be computed using equation (2). The proof under the logistic model is a direct application of @begg92’s result. The proof under the linear model is omitted because it is similar to [**Example 1**]{} above but considerably lengthier.
The remaining question is to find the re-parametrized codings satisfying the above conditions. We provide such codings in Table \[coding2\].
\[coding2\]
--------- ------------------ ----------------------------- ---------------------- -------------------------------------------------- -----------------------------------------------
Coding $rr$ $rR$ $RR$ $r$ $R$
$G_A^*$ -1 0 1 -1 1
$G_D^*$ $-2f_{female}^2$ $2f_{female}(1-f_{female})$ $-2(1-f_{female})^2$ 0 0
$GS^*$ $-f_{female}$ $\frac{1}{2}-f_{female}$ $1-f_{female}$ $\frac{f_{female}(1-f_{female})}{2(1-f_{male})}$ $-\frac{f_{female}(1-f_{female})}{2f_{male}}$
$S^*$ -1 -1 -1 1 1
--------- ------------------ ----------------------------- ---------------------- -------------------------------------------------- -----------------------------------------------
: Re-parametrized codings of additive, dominant, interaction and sex effect
[*Remark 4*]{}: If the missing covariates in the misspecified model are uncorrelated with the covariate being tested, it is well known that, in finite samples, the estimator from the misspecified model is unbiased but less efficient. However, we find that the $ncp$s are asymptotically equal under the true model and under the misspecified model with re-parametrized codings. This suggests that the misspecified model is asymptotically as efficient as the true model, in contradiction with the finite-sample result. The tension arises because we assume that the nuisance parameter $\beta_1$ also converges to 0. For instance, @begg92 considered the situation in which only $\beta_2$ converges to 0, and their derivations showed that the asymptotic relative efficiency of the misspecified model relative to the true model is less than 1, in agreement with the finite-sample results.
[*Remark 5*]{}: The $ncp$ computation in our paper focuses on linear and logistic regressions, but it is possible to extend it to other generalized linear models (GLMs). Although no existing result applies directly to GLMs in general, @neuhaus98 extended @begg92’s relative efficiency calculation to GLMs, which can be used to extend the asymptotic equivalence of $ncp$s to other types of GLM.
Appendix D: Additional Figures {#appendix-d-additional-figures .unnumbered}
==============================
![[**Heat plots of power and power loss for chi-squared distributions with 1, 2 and 3 degrees of freedom.**]{} **Upper panels:** Power of $W_1 \sim \chi^2_{(1,ncp)}$, $W_2 \sim \chi^2_{(2,ncp)}$ and $W_3 \sim \chi^2_{(3,ncp)}$ as a function of $-\log_{10} \alpha$ (type I error) and the non-centrality parameter. **Lower panels:** Power loss of $W_2$ vs. $W_1$, $W_3$ vs. $W_1$ and $W_3$ vs. $W_2$ as a function of $-\log_{10} \alpha$ and the non-centrality parameter, assuming equal non-centrality parameters within each pair. Black dots mark the maximum power loss: $\alpha=0.0025$ and $ncp=10.6$ for the left panel, $\alpha=0.0008$ and $ncp=13.4$ for the middle panel, and $\alpha=9.12 \times 10^{-5}$ and $ncp=19$ for the right panel.[]{data-label="heat"}](heat1.jpeg)
![[**Heat plots of power and power loss for chi-squared distributions with 1, 2 and 3 degrees of freedom.**]{} **Upper panels:** Power of $W_1 \sim \chi^2_{(1,ncp)}$, $W_2 \sim \chi^2_{(2,ncp)}$ and $W_3 \sim \chi^2_{(3,ncp)}$ as a function of $-\log_{10} \alpha$ (type I error) and the non-centrality parameter. **Lower panels:** Power loss of $W_2$ vs. $W_1$, $W_3$ vs. $W_1$ and $W_3$ vs. $W_2$ as a function of $-\log_{10} \alpha$ and the non-centrality parameter, assuming equal non-centrality parameters within each pair. Black dots mark the maximum power loss: $\alpha=0.0025$ and $ncp=10.6$ for the left panel, $\alpha=0.0008$ and $ncp=13.4$ for the middle panel, and $\alpha=9.12 \times 10^{-5}$ and $ncp=19$ for the right panel.[]{data-label="heat"}](heat2.jpeg)
![[**Test power comparison between $\mathbf{W_1 \sim \chi^2_{(1,ncp_1)}}$ and $\mathbf{W_2 \sim \chi^2_{(2,ncp_2)}}$,**]{} when $ncp_1=5, 10$ or 15 and $\Delta_{12}=ncp_2-ncp_1$ varies from 0 to 10. Black dashed curves show the power of $W_1$; [ red solid curves]{} show the power of $W_2$. The type I error is $\alpha=0.0025$.[]{data-label="gain"}](gain1.jpeg)
![[**Non-centrality parameter comparison between the additive and genotypic tests for association analyses of autosome SNPs across a range of dominant effects, including no dominant effect.**]{} The additive effect is fixed at $\beta_A=0.3$, while the dominant effect $\beta_D$ ranges from $-0.6$ to $0.6$. The allele frequency is $f=0.2, 0.5,$ and $0.8$ for the three plots, from left to right; the sample size is $n=1,000$, and the size of the test is $\alpha=0.0025$. The black dashed curves show the power of testing $\beta_A=0$ using the additive model, and the [ red solid curves]{} show the power of testing $\beta_A=\beta_D=0$ using the genotypic model.[]{data-label="Ancp"}](Ancp.jpeg)
(Schematic figure: nested regression models and null hypotheses compared in the paper. Autosomes: (a) $g(E(Y))=\beta_0+\beta_AG_A$ with $H_0: \beta_A=0$; (b) $g(E(Y))=\beta_0+\beta_AG_A+\beta_DG_D$ with $H_0: \beta_A=\beta_D=0$. X-chromosome with sex as a covariate: (a) $g(E(Y))=\beta_0+\beta_SS+\beta_AG_A$ with $H_0: \beta_A=0$; (b) $g(E(Y))=\beta_0+\beta_SS+\beta_AG_A+\beta_DG_D$ with $H_0: \beta_A=\beta_D=0$. X-chromosome with gene-sex interaction: (a) $g(E(Y))=\beta_0+\beta_SS+\beta_AG_A+\beta_{GS}GS$ with $H_0: \beta_A=\beta_{GS}=0$; (b) $g(E(Y))=\beta_0+\beta_SS+\beta_AG_A+\beta_DG_D+\beta_{GS}GS$ with $H_0: \beta_A=\beta_D=\beta_{GS}=0$. Boxes indicate the genotype codings $G_{A,R,I}$, $G_{A,r,I}$, $G_{A,R,N}$, $G_{A,r,N}$, their sex-adjusted counterparts $S,G_{A,\cdot,\cdot}$, and the gene-sex interaction codings $S,G_{A,\cdot,\cdot},GS_{\cdot}$.)
**A.** $\mathbf{f_{female}=0.2, f_{male}=0.5}$


**B.** $\mathbf{f_{female}=0.2, f_{male}=0.8}$


**C.** $\mathbf{f_{female}=0.5, f_{male}=0.2}$


**D.** $\mathbf{f_{female}=0.5, f_{male}=0.8}$


**E.** $\mathbf{f_{female}=0.8, f_{male}=0.2}$


**F.** $\mathbf{f_{female}=0.8, f_{male}=0.5}$


**G.** $\mathbf{f_{female}=0.8, f_{male}=0.8}$
![[**Power comparisons for analyzing X-chromosome SNPs.**]{} Additional values of $f$ not presented in Figure 2 are specified in parts A to G. [ Black dashed curves]{} are for testing $\beta_A=0$ based on model $M_1$, [ green dot-dash curves]{} for testing $\beta_A=\beta_D=0$ based on model $M_2$, [ orange dotted curves]{} for testing $\beta_A=\beta_{GS}=0$ based on model $M_3$, and [ red solid curves]{} for testing $\beta_A=\beta_D=\beta_{GS}=0$ based on the proposed model $M_4$. **Upper panels** examine power as a function of the dominant effect (or the skewness of XCI). **Lower panels** examine power as a function of the gene-sex interaction effect (or XCI status).](88-1.jpeg)
![[**Power comparisons for analyzing X-chromosome SNPs.**]{} Additional values of $f$ not presented in Figure 2 are specified in parts A to G. [ Black dashed curves]{} are for testing $\beta_A=0$ based on model $M_1$, [ green dot-dash curves]{} for testing $\beta_A=\beta_D=0$ based on model $M_2$, [ orange dotted curves]{} for testing $\beta_A=\beta_{GS}=0$ based on model $M_3$, and [ red solid curves]{} for testing $\beta_A=\beta_D=\beta_{GS}=0$ based on the proposed model $M_4$. **Upper panels** examine power as a function of the dominant effect (or the skewness of XCI). **Lower panels** examine power as a function of the gene-sex interaction effect (or XCI status).](88-2.jpeg)
![[**Results of re-analyses of the 60 autosomal, presumably associated, SNPs selected by @wt05 from 41 association studies.**]{} The X-axis shows the p-values, on the $-\log_{10}$ scale, obtained from the standard 1 d.f. additive test, and the Y-axis shows those from the recommended 2 d.f. genotypic test.[]{data-label="pp60"}](pp60.jpeg)
![[**QQ-plots of the 556,445 autosome SNPs from the cystic fibrosis study in Section 4.3.**]{} [**Left panel:**]{} p-values of the additive test on the $-\log_{10}$ scale. [**Right panel:**]{} p-values of the genotypic test on the $-\log_{10}$ scale. The QQ-plots imply that the p-values are approximately Uniform(0,1) distributed for either test.[]{data-label="qq"}](qq.jpeg)
![[**p-values of the additive test vs. the genotypic test on the $\mathbf{-\log_{10}}$ scale for the 556,445 autosome SNPs from the cystic fibrosis study in Section 4.3.**]{} Because of the capped maximal power loss computed in Section 2.1, p-values under the genotypic model can be much smaller than under the additive model, but not much greater, which clearly demonstrates the benefit of the genotypic model. Note that the capped power loss does not contradict the fact that the overall p-values under both the additive and genotypic models follow the same Uniform(0,1) distribution, as shown in Figure \[qq\].[]{data-label="pp"}](ppplot.jpeg)
---
abstract: 'In this paper, the optimal trajectory and deployment of multiple unmanned aerial vehicles (UAVs), used as aerial base stations to collect data from ground Internet of Things (IoT) devices, is investigated. In particular, to enable reliable uplink communications for IoT devices with a minimum energy consumption, a new approach for optimal mobility of the UAVs is proposed. First, given a fixed ground IoT network, the total transmit power of the devices is minimized by properly clustering the IoT devices with each cluster being served by one UAV. Next, to maintain energy-efficient communications in time-varying mobile IoT networks, the optimal trajectories of the UAVs are determined by exploiting the framework of optimal transport theory. Simulation results show that by using the proposed approach, the total transmit power of IoT devices for reliable uplink communications can be reduced by $56\%$ compared to the fixed Voronoi deployment method. Moreover, our results yield the optimal paths that will be used by UAVs to serve the mobile IoT devices with a minimum energy consumption.'
author:
- '\'
bibliography:
- 'references.bib'
title: ' Mobile Internet of Things: Can UAVs Provide an Energy-Efficient Mobile Architecture?'
---
Introduction
============
The use of unmanned aerial vehicles (UAVs) as flying wireless communication platforms has received significant attention recently [@mozaffari2; @HouraniModeling; @Irem]. On the one hand, UAVs can be used as wireless relays for improving the connectivity of ground wireless devices and extending network coverage. On the other hand, UAVs can act as mobile aerial base stations to provide reliable downlink and uplink communications for ground users and boost the capacity of wireless networks [@Irem] and [@Letter]. Compared to terrestrial base stations, UAV-based aerial base stations have the advantage of being able to move quickly and easily. Furthermore, the high altitude of UAVs can enable line-of-sight (LoS) communication links to the ground users. Due to their adjustable altitude and mobility, UAVs can move towards potential ground users and establish reliable connections with a low transmit power. Hence, they can provide a cost-effective and energy-efficient solution for collecting data from ground mobile users that are spread over a geographical area with limited terrestrial infrastructure.
Indeed, UAVs can play a key role in the *Internet of Things (IoT)* which is composed of small, battery-limited devices such as sensors, and health monitors [@dawy; @lien; @Eragh]. These devices are typically unable to transmit over a long distance due to their energy constraints [@lien]. In such IoT scenarios, UAVs can dynamically move towards IoT devices, collect the IoT data, and transmit it to other devices which are out of the communication ranges of the transmitting IoT devices [@lien]. In this case, the UAVs play the role of moving aggregators for IoT networks. However, to effectively use UAVs for IoT communications, several challenges must be addressed such as optimal deployment and energy-efficient use of UAVs [@mozaffari2].
In [@mozaffari2], we investigated the optimal deployment and movement of a single UAV for supporting downlink wireless communications. However, this work was restricted to a single UAV and focused on the downlink. The work in [@Han] analyzed the optimal trajectory of UAVs to enhance the connectivity of ad-hoc networks. Nevertheless, this work did not study the optimal movement of multiple UAVs acting as aerial base stations. The work in [@MozaffariTransport] studied the optimal deployment of UAVs and the UAV-user association for static ground users with the goal of meeting the users’ rate requirements. In [@pang], the authors used UAVs to efficiently collect data and recharge the cluster heads in a wireless sensor network partitioned into multiple clusters. However, this work is limited to a static sensor network and does not investigate the optimal deployment of the UAVs. While the energy efficiency of uplink data transmission in a machine-to-machine (M2M) communication network was investigated in [@Nof] and [@Tu], the presence of UAVs was not considered. In fact, none of the prior studies [@mozaffari2; @HouraniModeling; @Irem], and [@Han; @MozaffariTransport; @Nof; @pang; @Tu], addressed the problem of optimal deployment and mobility of UAVs for enabling reliable and energy-efficient communications for mobile IoT devices. The main contribution of this paper is to propose a novel approach for deploying multiple, mobile UAVs for energy-efficient uplink data collection from mobile, ground IoT devices. First, to minimize the total transmit power of the IoT devices, we create multiple clusters, each served by one of the UAVs. Next, to guarantee energy-efficient communications for the IoT devices in mobile and time-varying networks, we determine the optimal paths of the UAVs by exploiting dynamic clustering and optimal transport theory [@villani2003].
Using the proposed method, the total transmit power of the IoT devices required for successful transmissions is minimized by the dynamic movement of the UAVs. In addition, the proposed approach will minimize the total energy needed for the UAVs to effectively move. The results show that, using our proposed framework, the total transmit power of the devices during the uplink transmissions can be reduced by $56\%$ compared to the fixed Voronoi deployment method. Furthermore, given the optimal trajectories for UAVs, they can serve the mobile IoT devices with a minimum energy consumption.
The rest of this paper is organized as follows. In Section II, we present the system model and problem formulation. Section III presents the optimal devices’ clustering approach. In Section IV, we address the mobility of the UAVs using discrete transport theory. In Section V, we provide the simulation results, and Section VI draws some conclusions.
System Model and Problem Formulation
====================================
Consider an IoT system consisting of a set $\mathcal{L}=\{1,2,...,L\}$ of $L$ IoT devices deployed within a geographical area. In this system, a set $\mathcal{K}=\{1,2,...,K\}$ of $K$ UAVs must be deployed to collect the data from the ground devices in the uplink. The locations of device $i\in \mathcal{L}$ and UAV $j\in \mathcal{K}$ are, respectively, given by $(x_i,y_i)$ and $(x_{u,j},y_{u,j},h_j)$ as shown in Figure 1. We assume that the devices transmit in the uplink using orthogonal frequency division multiple access (OFDMA) and that UAV $j$ can support at most $M_j$ devices simultaneously. Note that we consider a centralized network in which the locations of the devices and UAVs are known to a control center such as a central cloud server. The ground IoT devices can be mobile (e.g., smart cars) and their data availability can be intermittent (e.g., sensors). Therefore, to effectively respond to the network mobility, it is essential that the UAVs move optimally so as to establish reliable and energy-efficient communications with the devices. Note that our analysis can accommodate, without loss of generality, any mobility model.
For ground-to-air communications, each device will typically have a LoS view towards a specific UAV with a given probability. This LoS probability depends on the environment, location of the device and the UAV, and the elevation angle between the device and the UAV [@HouraniModeling]. One suitable expression for the LoS probability is given by [@HouraniModeling]: $$\label{PLoS}
{P_{{\rm{LoS}}}} = \frac{1}{{1 + \psi \exp ( - \beta\left[ {\theta - \psi} \right])}},$$ where $\psi$ and $\beta$ are constant values which depend on the carrier frequency and type of environment such as rural, urban, or dense urban, and $\theta$ is the elevation angle. Clearly, ${\theta} = \frac{{180}}{\pi } \times {\sin ^{ - 1}}\left( {{\textstyle{{{h_j}} \over { {d_{ij}}}}}} \right)$, where $ {d_{ij}} = \sqrt {(x_i-x_{u,j})^2+(y_i-y_{u,j})^2+h_j^2 }$ is the distance between device $i$ and UAV $j$.
From (\[PLoS\]), we can observe that increasing the elevation angle or the UAV altitude increases the LoS probability. We assume that the necessary condition for connecting a device to a UAV is that the LoS probability exceeds a threshold $\varepsilon$ (close to 1). In other words, ${P_{\text{LoS}}}(\theta ) \ge \varepsilon$, and hence $\theta \ge P_{\text{LoS}}^{ - 1}(\varepsilon )$, leading to: $$\label{dmin}
d_{ij} \le \frac{h_j}{{\sin \left( {P_{\text{LoS}}^{ - 1}(\varepsilon )} \right)}}.$$ Note that (\[dmin\]) shows the necessary condition for connecting the device to the UAV. Therefore, a device will not be assigned to UAVs which are located at distances greater than $\frac{h_j}{{\sin \left( {P_{\text{LoS}}^{ - 1}(\varepsilon )} \right)}}$.
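The LoS model and the resulting coverage condition can be checked numerically. In the sketch below, $\psi$ and $\beta$ are the urban-environment values quoted in Section V ($\psi=11.95$, $\beta=0.14$ at a 2 GHz carrier), while the threshold $\varepsilon=0.95$ and the altitude $h=500$ m are illustrative assumptions of ours.

```python
# Sketch of the LoS model above; psi and beta are the urban values used in
# Section V, eps = 0.95 and h = 500 m are illustrative assumptions.
import math

def p_los(theta_deg, psi=11.95, beta=0.14):
    """LoS probability as a function of the elevation angle (degrees)."""
    return 1.0 / (1.0 + psi * math.exp(-beta * (theta_deg - psi)))

def max_distance(h, eps, psi=11.95, beta=0.14):
    """Largest device-UAV 3D distance that still gives P_LoS >= eps."""
    theta = psi - math.log((1.0 / eps - 1.0) / psi) / beta  # invert P_LoS
    return h / math.sin(math.radians(theta))

print(p_los(30.0), p_los(60.0))   # LoS probability grows with the elevation angle
print(max_distance(500.0, 0.95))  # coverage radius of a UAV hovering at 500 m
```

Inverting $P_{\text{LoS}}$ gives $\theta_\varepsilon = \psi - \ln\big((1/\varepsilon - 1)/\psi\big)/\beta$, which is the angle used in the distance bound.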
Now, considering the LoS link, the received signal power at UAV $j$ from device $i$ is given by [@HouraniModeling] (in dB): $$\label{Pr}
P_r^{ij} = {P_{t,i}} - 10\alpha \log \left( {\frac{{4\pi {f_c}}}{c}{d_{ij}}} \right) - \eta,$$ where $P_{t,i}$ is the transmit power of device $i$ in dB, $f_c$ is the carrier frequency, $\alpha=2$ is the path loss exponent for LoS propagation, $\eta$ is an excessive path loss added to the free space propagation loss, and $c$ is the speed of light.
In our model, the transmit power of the devices must satisfy the minimum signal-to-noise-ratio (SNR) required for a successful decoding at UAVs. For quadrature phase shift keying (QPSK) modulation, the minimum transmit power of device $i$ needed to reach a bit error rate requirement of $\delta$ is: $$\label{Pt_min}
P_t^{ij} = {\left[ {{Q^{ - 1}}\left( \delta \right)} \right]^2}\frac{{{R_b}{N_o}}}{2}{10^{\eta /10}}{\left( {\frac{{4\pi {f_c}{d_{ij}}}}{c}} \right)^2},$$ where $Q^{ - 1}(.)$ is the inverse $Q$-function, $N_o$ is the noise power spectral density, and $R_b$ is the transmission bit rate. Note that, to derive (\[Pt\_min\]) using (\[Pr\]), we use the bit error rate expression for QPSK modulation as ${P_e} = Q(\sqrt{\frac{{2P_r^{ij}}}{{{R_b}{N_o}}}})$ [@Goldsmith].
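A minimal numerical sketch of the transmit-power expression above follows; all parameter values (BER target $\delta$, bit rate $R_b$, noise PSD $N_o$, excess loss $\eta$, carrier frequency $f_c$) are illustrative assumptions of ours, not values taken from the paper.

```python
# Minimal sketch of the minimum QPSK transmit power expression; all parameter
# values here are illustrative assumptions, not the paper's.
import math
from scipy.stats import norm

def p_t_min(d, delta=1e-6, Rb=100e3, N0=4e-21, eta=3.0, fc=2e9, c=3e8):
    """Minimum transmit power (W) of a device at distance d (m) for BER <= delta."""
    q_inv = norm.isf(delta)  # inverse Q-function, since Q(x) = P(Z > x)
    return q_inv**2 * Rb * N0 / 2 * 10**(eta / 10) * (4 * math.pi * fc * d / c) ** 2

print(p_t_min(500.0), p_t_min(1000.0))  # quadruples when the distance doubles
```

The quadratic dependence on $d_{ij}$ is what allows the deployment problem to be reduced to a minimum sum of squared distances below.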
Our first goal is to optimally move and deploy the UAVs in a way that the total transmit power of devices to reach the minimum SNR requirement for a successful decoding at the UAVs is minimized. In fact, the objective function is: $$\begin{aligned}
\label{opt1}
&\min \limits_{{\mathcal{C}_j},{\boldsymbol{\mu_j}}} \sum\limits_{j = 1}^K {\sum\limits_{i \in {\mathcal{C}_j}} {P_t^{ij}} }, \,\,\,j\in \mathcal{K},\end{aligned}$$ where $P_t^{ij}$ is the transmit power of device $i$ to UAV $j$, and $K$ is the number of UAVs. Also, $\mathcal{C}_j$ is the set of devices assigned to UAV $j$, and $\boldsymbol{\mu_j}$ is the 3D location of UAV $j$.
From (\[Pt\_min\]), we can observe that the transmit power is directly proportional to the distance squared. Hence, minimizing the power is equivalent to minimizing the distance squared. Then, using (\[dmin\]), (\[Pt\_min\]), and (\[opt1\]), and considering the constraint on the maximum number of devices that can connect to each UAV, our optimization problem can be formulated as: $$\begin{aligned}
&\left\{ {\mathcal{C}_j^*,\boldsymbol{\mu_j^*}} \right\} = \mathop {\arg \min }\limits_{{\mathcal{C}_j},{\boldsymbol{\mu_j}}} \sum\limits_{j = 1}^K {\sum\limits_{i \in {\mathcal{C}_j}} {d_{ij}^2} } \,\,,\,\,j\in \mathcal{K}, \label{opt_main}\\
{\text{s.t.}}\,\,&{\mathcal{C}_j} \cap {\mathcal{C}_m} = \emptyset ,\,\,j \ne m,\,\,\,\, j,m\in \mathcal{K}, \label{cons1} \\
&\sum\limits_{j = 1}^K {|{\mathcal{C}_j}|} = L, \label{cons2}\\
&{d_{ij}} \le \frac{h_j}{{\sin \left( {P_{\text{LoS}}^{ - 1}(\varepsilon )} \right)}}, \\
&|{\mathcal{C}_j}| \le {M_j},\end{aligned}$$ where $|\mathcal{C}_j|$ is the number of devices assigned to UAV $j$, $L$ is the total number of devices, and $M_j$ is the maximum number of devices that UAV $j$ can support. Constraints (\[cons1\]) and (\[cons2\]) guarantee that each device connects to exactly one UAV.
Clearly, we can consider the set of devices assigned to a UAV as a cluster, and place the corresponding UAV at the center of the cluster. Placing a UAV at the cluster center ensures that the UAV has a minimum total squared distance to all the cluster members. Hence, to solve problem (\[opt\_main\]), we need to find $K$ clusters and their centers, which effectively correspond to the locations of the UAVs. Note that, in a time-varying network in which the locations of the IoT devices change, the clusters will also change. Consequently, the locations of the UAVs, as the centers of the clusters, must be updated accordingly. However, moving the UAVs to the new cluster centers should be done with a minimum energy consumption. Therefore, in the mobile scenario, while finding the optimal paths of the UAVs, we need to determine which UAV must go to which cluster center. Next, we present the IoT devices’ clustering approach for minimizing the total transmit power of the IoT devices.
![ System model.[]{data-label="Nu"}](System_model.pdf){width="8.45cm"}
Clustering IoT Devices
======================
Our first step is to optimally cluster the devices and deploy the UAVs at the centers of the formed clusters so as to minimize the transmit power of the ground IoT devices. We solve the problem in (\[opt\_main\]) by exploiting the constrained $K$-means clustering approach [@Hoppner]. In the $K$-means clustering problem, given $L$ points in $\mathds{R}^n$, the goal is to partition the points into $K$ disjoint clusters such that the sum of squared distances of the points to their corresponding cluster centers is minimized. Hence, considering (\[opt1\]) and (\[opt\_main\]), the total transmit power of the devices is minimized by placing the UAVs at the centers of the optimal clusters. This problem can be solved iteratively in two steps: an assignment step and an update step.
Assignment Step
---------------
In the assignment step, given the location of the clusters’ center (given all $ \boldsymbol{\mu_j}$), we find the optimal clusters for which the total distance squared between the cluster members and their center is minimized. Therefore, in our problem, given the location of the UAVs, we will first determine the optimal assignment of the devices to UAVs which can be written as: $$\begin{aligned}
\label{assign}
&\min\limits_{A_{ij}} \sum\limits_{j = 1}^K {\sum\limits_{i = 1}^L {{A_{ij}}||{\boldsymbol{v_i}} - {\mu _j}|{|^2}} }, \\
{\rm{s}}{\rm{.t}}{\rm{.}}&\sum\limits_{i = 1}^L {{A_{ij}}} \le {M_j}, \,\,\,j\in\mathcal{K},\\
&\sum\limits_{j = 1}^K {{A_{ij}}} = 1, \,\,\,i\in\mathcal{L},\\
&{A_{ij}}||{\boldsymbol{v_i}} - {\boldsymbol{\mu_j}}|| \le \frac{h_j}{{\sin \left( {P_{\text{LoS}}^{ - 1}(\varepsilon )} \right)}},\\
&{A_{ij}} \in \{ 0,1\}, \end{aligned}$$ where $\boldsymbol{v_i}=(x_i,y_i)$ is the two-dimensional location vector of device $i$, and $A_{ij}$ is equal to 1 if device $i$ is assigned to UAV $j$, and 0 otherwise. The problem in (\[assign\]) is an integer linear program, which is solved using the cutting plane method [@garf].
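The assignment step can also be sketched as a transportation-type linear program. This is our illustration, not the paper's cutting-plane implementation: the LoS-feasibility constraint is folded into the cost by giving far device-UAV pairs a prohibitive cost, and since the constraint matrix of a transportation polytope is totally unimodular, a vertex solution of the LP relaxation is already integral.

```python
# Sketch of the assignment step as a transportation LP (our illustration; the
# paper solves the integer program with a cutting-plane method). The LoS
# condition is enforced by giving far device-UAV pairs a prohibitive cost.
import numpy as np
from scipy.optimize import linprog

def assign(devices, uavs, capacity, d_max):
    """devices: (L,2), uavs: (K,2) ground coordinates; returns a UAV index per device."""
    L, K = len(devices), len(uavs)
    d2 = ((devices[:, None, :] - uavs[None, :, :]) ** 2).sum(-1)  # squared distances
    cost = np.where(np.sqrt(d2) <= d_max, d2, 1e12).ravel()       # forbid far pairs
    A_eq = np.zeros((L, L * K))                                   # each device: one UAV
    for i in range(L):
        A_eq[i, i * K:(i + 1) * K] = 1.0
    A_ub = np.zeros((K, L * K))                                   # UAV capacities
    for j in range(K):
        A_ub[j, j::K] = 1.0
    res = linprog(cost, A_ub=A_ub, b_ub=np.full(K, capacity),
                  A_eq=A_eq, b_eq=np.ones(L), bounds=(0, 1))
    return res.x.reshape(L, K).argmax(axis=1)

rng = np.random.default_rng(0)
labels = assign(rng.uniform(0, 1200, (20, 2)), rng.uniform(0, 1200, (3, 2)),
                capacity=8, d_max=2000.0)
print(np.bincount(labels, minlength=3))  # cluster sizes, each at most the capacity
```

Here 20 hypothetical devices are split among 3 UAVs with a capacity of 8 each; the returned labels are the clusters $\mathcal{C}_j$ of the assignment step.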
Update Step
-----------
In the update step, given the clusters obtained in the assignment step, we update the location of the UAVs which is equivalent to updating the center of the clusters. Thus, the update location of UAVs is the solution to the following optimization problem: $$\begin{aligned}
\label{update}
&\mathop {\min }\limits_{({x_{u,j}},{y_{u,j}},h_j)} \sum\limits_{i \in {\mathcal{C}_j}} {{{({x_{u,j}} - {x_i})}^2} + {{({y_{u,j}} - {y_i})}^2}}+{h_j}^2 ,\\
&\text{s.t.}\,\,{({x_{u,j}} - {x_i})^2} + {({y_{u,j}} - {y_i})^2} + h_j^2\left( {1 - \frac{1}{{{{\sin }^2}\left( {P_{\text{LoS}}^{ - 1}(\varepsilon )} \right)}}} \right) \le 0, \label{consupdate} \nonumber\\
& \text{for all} \,\,\, i \in {\mathcal{C}_j},\,\, \text{and}\,\, j\in\mathcal{K}.\end{aligned}$$
The solution to (\[update\]) is $\boldsymbol{{s^*}} = (x_{u,j}^*,y_{u,j}^*,h_j^*) = - \boldsymbol{P{(\lambda )^{ - 1}}Q(\lambda )}$, with the vector $\boldsymbol{\lambda}$ that maximizes the following concave function: $$\begin{aligned}
&\mathop {\max {\rm{ }}}\limits_{\boldsymbol{\lambda}} \frac{1}{2} \boldsymbol{Q{(\lambda )^T} P{(\lambda )^{ - 1}}Q(\lambda )} + r(\boldsymbol{\lambda}),\\
&\textnormal {s.t.} \,\,\,\boldsymbol{\lambda} \ge 0,\end{aligned}$$ where $\boldsymbol{P(\lambda)}= \boldsymbol{P_o} + \sum\limits_i {{\lambda _i}{P_i}}$, $\boldsymbol{Q(\lambda)}=\boldsymbol{Q_o} + \sum\limits_i {{\lambda _i}{Q_i}}$ and $r(\boldsymbol{\lambda})={r_o} + \sum\limits_i {{\lambda _i}{r_i}}$, with $\boldsymbol{P_o}$, $\boldsymbol{Q_o}$, $r_o$, $\boldsymbol{P_i}$, $\boldsymbol{Q_i}$, and $r_i$ given in the proof.
As we can see from (\[update\]), the optimization problem is a quadratically constrained quadratic program (QCQP) whose general form is given by: $$\begin{aligned}
\label{QCQP}
&\mathop {\min }\limits_{\boldsymbol{s}} \,{\rm{ }}\frac{1}{2}\boldsymbol{{s^T}{P_o}s} + \boldsymbol{Q_o^Ts} + {r_o},\\
&\text{s.t.}\,\,\frac{1}{2}\boldsymbol{{s^T}{P_i}s} + \boldsymbol{Q_i^Ts} + {r_i}\le 0,\,\,\,i = 1,...,|{\mathcal{C}_j}|.\end{aligned}$$ Given (\[update\]) and (\[consupdate\]), we have:
$\boldsymbol{{P_o}} = \left[ {\begin{array}{*{20}{c}}
{2|{\mathcal{C}_j}|}&0&0\\
0&{2|{\mathcal{C}_j}|}&0\\
0&0&{2|{\mathcal{C}_j}|}
\end{array}} \right]$, $\boldsymbol{{P_i}} = \left[ {\begin{array}{*{20}{c}}
2&0&0\\
0&2&0\\
0&0&\omega
\end{array}} \right]$,\
$\omega = 1 - \frac{1}{{{{\sin }^2}\left( {P_{\text{LoS}}^{ - 1}(\varepsilon )} \right)}}$, $\boldsymbol{Q_o}={\left[ {\begin{array}{*{20}{c}}
{ - 2\sum\limits_{i = 1}^{|{\mathcal{C}_j}|} {{x_i}} }&{ - 2\sum\limits_{i = 1}^{|{\mathcal{C}_j}|} {{y_i}} }&0
\end{array}} \right]^T}$,\
$\boldsymbol{{Q_i}} = {\left[ {\begin{array}{*{20}{c}}
{ - 2{x_i}}&{ - 2{y_i}}&0
\end{array}} \right]^T}$, ${r_o} = \sum\limits_{i = 1}^{|{\mathcal{C}_j}|} {x_i^2} + \sum\limits_{i = 1}^{|{\mathcal{C}_j}|} {y_i^2}$, and\
${r_i} = x_i^2 + y_i^2$. Note that, $\boldsymbol{P_o}$ and $\boldsymbol{P_i}$ are positive semidefinite matrices, and, hence, the QCQP problem in (\[QCQP\]) is convex. Now, we write the Lagrange dual function as: $$\begin{aligned}
f(\lambda ) = \mathop \text{inf}\limits_{\boldsymbol s} \biggl[&\frac{1}{2}\boldsymbol{{s^T}{P_o}s} + \boldsymbol{Q_o^Ts} + {r_o}\nonumber \\
&+ \sum\limits_i {{\lambda _i}\left( {\frac{1}{2}\boldsymbol{{s^T}{P_i}s} + \boldsymbol{Q_i^Ts} + {r_i}} \right)}\biggr]\nonumber \\
&= \mathop \text{inf}\limits_{\boldsymbol s} \left[ {\frac{1}{2}\boldsymbol{{s^T}P(\lambda )s} + \boldsymbol{Q{{(\lambda )}^T}s} + r(\boldsymbol{\lambda} )} \right].\nonumber\end{aligned}$$ Clearly, by taking the gradient of the function inside the infimum with respect to $s$, we find $\boldsymbol{{s^*} = - P{(\lambda )^{ - 1}}Q(\lambda )}$. As a result, using $\boldsymbol{{s^*}}$, $f(\boldsymbol{\lambda} ) = \frac{1}{2}\boldsymbol{Q{(\lambda )^T}P{(\lambda )^{ - 1}}Q(\lambda )} + {r(\boldsymbol{\lambda} )}$. Finally, the dual of problem (\[QCQP\]) or (\[update\]) will be: $$\begin{aligned}
\text{max}\,\, f(\boldsymbol{\lambda}), \,\,\textnormal {s.t.} \,\,\,\boldsymbol{\lambda} \ge 0,\end{aligned}$$ which proves Theorem 1.
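As an illustrative cross-check of the update step (not the dual method of Theorem 1), the convex QCQP can be handed directly to a general-purpose solver. The LoS constraint in (\[consupdate\]) is equivalent to $h \ge \tan\!\big(P_{\text{LoS}}^{-1}(\varepsilon)\big)\,\|\boldsymbol{v_i} - (x_{u,j},y_{u,j})\|$ for every cluster member $i$; the angle of 50 degrees below is an illustrative assumption.

```python
# Cross-check of the update step with a general NLP solver (our sketch; the
# paper uses the dual form of Theorem 1). theta_eps = P_LoS^{-1}(eps) is taken
# as 50 degrees for illustration.
import numpy as np
from scipy.optimize import minimize

def update_center(pts, theta_eps_deg):
    """pts: (n,2) cluster members; returns the UAV position (x, y, h)."""
    t = np.tan(np.radians(theta_eps_deg))
    def obj(s):  # sum over members of the squared 3D distance to the UAV
        return ((pts - s[:2]) ** 2).sum() + len(pts) * s[2] ** 2
    cons = [{'type': 'ineq',  # LoS: h >= tan(theta_eps) * horizontal distance
             'fun': lambda s, p=p: s[2] - t * np.linalg.norm(p - s[:2])}
            for p in pts]
    s0 = np.r_[pts.mean(axis=0), 200.0]  # start at the centroid with a feasible h
    return minimize(obj, s0, constraints=cons, method='SLSQP').x

pts = np.array([[0.0, 0.0], [200.0, 0.0], [100.0, 150.0]])
x, y, h = update_center(pts, theta_eps_deg=50.0)
print(x, y, h)  # (x, y) near the centroid, h as small as the LoS constraint allows
```

The objective pushes the altitude down while the LoS constraint pushes it up, so the solver settles at the smallest altitude that keeps every member within the LoS cone.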
The assignment and update steps are applied iteratively until the locations no longer change in the update step. Clearly, at each iteration, the total transmit power is reduced, and the objective function is monotonically non-increasing. Hence, the solution converges after several iterations. In summary, for given locations of the ground IoT devices, we determined the optimal locations of the UAVs (the cluster centers) for which the transmit power that the IoT devices use for reliable uplink communications is minimized. In a mobile IoT network, the UAVs must update their locations and follow the cluster centers as they evolve due to the time-varying dynamics. Next, we investigate how to optimally move the UAVs to the centers of the clusters.
Mobility of UAVs: Optimal Transport Theory
==========================================
Here, we find the optimal trajectories of the UAVs that guarantee reliable uplink transmissions of the mobile IoT devices. To move along the optimal trajectories, the UAVs must spend a minimum total energy on their mobility so as to remain operational for a longer time. In the considered mobile ground IoT network, the locations of the devices and their availability might change, and hence, the clusters will change. Consequently, the UAVs must frequently update their locations accordingly. Now, given the locations of the cluster centers obtained in Section III, and the initial locations of the UAVs, we determine which UAV should fly to which cluster center such that the total energy consumed for their mobility is minimized. In other words, given $\mathcal{I}$ and $\mathcal{J}$, the initial and new sets of UAVs’ locations, one needs to find the optimal mapping between these two sets such that the energy used for transportation between the two sets is minimized.
This problem can be modeled using discrete *optimal transport theory* [@villani2003]. In its general form, optimal transport theory deals with finding an optimal transportation plan between two sets of points that leads to a minimum transportation cost [@villani2003]. These sets can be either discrete or continuous, with arbitrary distributions, and can have a general transportation cost function. Optimal transport theory originated with the following Monge problem [@villani2003]: given piles of sand and holes of the same total volume, find the best move (transport map) that completely fills up the holes with the minimum total transportation cost. In general, this problem does not necessarily have a solution, since each point must be mapped to only one location. However, Kantorovich relaxed this problem by using transport plans instead of maps, in which one point can go to multiple points [@villani2003]. In our model, the UAVs need to move from their initial locations to new destinations. The transportation cost for each move is the energy used by the UAV for its mobility. We model this problem based on the discrete Monge-Kantorovich problem as follows [@xia]: $$\begin{aligned}
\label{transport1}
&\min\limits_{Z_{kl}} \sum\limits_{l \in \mathcal{J}} {\sum\limits_{k \in \mathcal{I}} {{E_{kl}}{Z_{kl}}} }, \\
\text{s.t.}\,&\sum\limits_{l \in \mathcal{J}} {{Z_{kl}}} = {m_k},\\
&\sum\limits_{k \in \mathcal{I}} {{Z_{kl}}} = {m_l},\\
&{Z_{kl}} \in \{ 0,1\},\end{aligned}$$ where $\mathcal{I}$ and $\mathcal{J}$, are the initial and new sets of UAVs’ locations. $\boldsymbol{Z}$ is the $\mathcal{|J|\times|I|}$ transportation plan matrix with each element $Z_{kl}$ being 1 if UAV $k$ is assigned to location $l$, and 0 otherwise. $E_{kl}$ is the energy used for moving a UAV from its initial location with index $k \in \mathcal{I}$ to a new location with index $l \in \mathcal{J}$. $m_l$ and $m_k$ are the number of points (UAVs) at the locations with indices $l$ and $k$. The energy consumption of a UAV moving with a constant speed as a function of distance is given by [@di]: $$\label{energy}
E(D,v) = \int\limits_{t = 0}^{t = D/v} {p(v)dt} = \frac{{p(v)}}{v}D,$$ where $D$ is the travel distance of the UAV, $v$ is the constant speed, $t$ is the travel time, and $p(v)$ is the power consumption as a function of speed. As we can see from (\[energy\]), energy consumption for mobility is linearly proportional to the travel distance. Using the Kantorovich-duality, the discrete optimal transport problem in (\[transport1\]) is equivalent to: $$\begin{aligned}
\label{dual}
&\max\limits_{\varphi,\xi} \left[ {\sum\limits_{l \in \mathcal{J}} {{m_l}\varphi (l)} - \sum\limits_{k \in \mathcal{I}} {{m_k}\xi (k)} } \right],\\
& \text{s.t.}\,\, \varphi (l)-\xi (k) \le {E_{kl}},\end{aligned}$$ where $\xi :\mathcal{I} \to \mathds{R}$ and $\varphi :\mathcal{J} \to \mathds{R}$ are the unknown functions of the maximization problem. The dual problem in (\[dual\]) is used to solve the primal problem in (\[transport1\]) by applying the complementary slackness theorem [@villani2003]. In this case, the optimal solution, including the optimal transport plan between $\mathcal{I}$ and $\mathcal{J}$, is achieved when $\varphi (l)-\xi (k) = {E_{kl}}$ [@villani2003]. Here, to find the optimal mapping between the initial set of locations and the destination set, we use the revised simplex method [@ford]. The result is the transportation plan ($\boldsymbol{Z}$) that optimally assigns the UAVs to the destinations. Hence, the locations of the UAVs are updated according to the new destinations. Subsequently, having the destination of each UAV at different time instants, we can find the optimal trajectory of each UAV. As a result, given the optimal paths, the UAVs are able to serve the mobile IoT devices in an energy-efficient way.
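Since each destination here receives exactly one UAV ($m_k = m_l = 1$), the discrete Monge-Kantorovich problem above reduces to a linear assignment problem. The following sketch solves it with the Hungarian method rather than the revised simplex used in the paper; the energy per metre is taken from the Section V model at an illustrative constant speed.

```python
# Sketch of the UAV-to-destination matching (our illustration using the
# Hungarian method; the paper applies the revised simplex). The energy per
# metre uses the Section V model E(D, v) = D (0.95 v^2 - 20.4 v + 130) at an
# illustrative constant speed v.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match(initial, targets, v=10.7):
    """initial, targets: (K,2) positions; returns a destination per UAV and total energy."""
    e_per_m = 0.95 * v**2 - 20.4 * v + 130                 # energy per metre at speed v
    D = np.linalg.norm(initial[:, None] - targets[None, :], axis=-1)
    E = e_per_m * D                                        # cost matrix E_kl
    rows, cols = linear_sum_assignment(E)
    return cols, E[rows, cols].sum()

init = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0]])
dest = np.array([[100.0, 0.0], [900.0, 100.0], [100.0, 900.0]])
plan, total = match(init, dest)
print(plan, total)  # each UAV flies to its nearest new cluster center in this example
```

Since the energy cost is linear in distance, minimizing total energy and minimizing total travel distance produce the same plan for a common speed.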
Simulation Results and Analysis
===============================
In our simulations, the IoT devices are deployed within a geographical area of size $1.2\,\text{km}\times 1.2 \,\text{km}$. We consider UAV-based communications in an urban environment with $\psi=11.95$ and $\beta=0.14$ at a 2 GHz carrier frequency [@HouraniModeling]. Moreover, we use the energy consumption model for the UAVs’ mobility given by $E(D,v) = D\left( {0.95{v^2} - 20.4v + 130} \right)$ [@di]. Furthermore, in a time-varying network, to capture the mobility and availability of the ground IoT devices, we generate the new devices’ locations by adding zero-mean Gaussian random variables to the initial devices’ locations. Table I lists the simulation parameters. Note that all statistical results are averaged over a large number of independent runs.
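A quick worked check of the mobility-energy model quoted above: with $E(D,v) = D\,(0.95v^2 - 20.4v + 130)$, the energy per unit distance is quadratic in the speed, so the most economical cruise speed follows directly. The paper does not state the units; we assume speeds in m/s.

```python
# Worked check of the simulation energy model E(D, v) = D (0.95 v^2 - 20.4 v + 130):
# the per-metre energy is quadratic in v and minimized at v* = 20.4 / (2 * 0.95).
def energy(D, v):
    return D * (0.95 * v**2 - 20.4 * v + 130)

v_star = 20.4 / (2 * 0.95)  # ~10.7, assuming speeds in m/s
print(v_star, energy(1000.0, v_star))
```

Flying faster or slower than this speed increases the energy spent per metre, which is why a common constant speed is a reasonable assumption for the matching step.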
Figure \[cluster\] shows a snapshot of the optimal devices’ clustering as well as the optimal UAVs’ locations resulting from our proposed approach. In this example, 5 UAVs are used to support 100 IoT devices. We assume that each UAV has a limited number of resource blocks which can be allocated to at most 30 devices. Therefore, we have 5 clusters with a maximum size of 30 devices and 5 cluster centers corresponding to the locations of the UAVs. As we can see from Figure \[cluster\], the location of IoT devices significantly impacts the number of devices per cluster and also the optimal locations of the UAVs. In this figure, the minimum and maximum cluster sizes are 3 and 27.
![ Optimal clusters and UAVs’ locations in one snapshot.[]{data-label="cluster"}](Clusters.eps){width="7.6cm"}
Figure \[Pt\_voronoi\] shows the total transmit power of devices versus the number of UAVs, averaged over multiple simulation runs. In this figure, the performance of the proposed approach is compared with the fixed Voronoi case, which is known to be a typical deployment method for static base stations. Note that, for a fair comparison, we assume that the total number of resource blocks is fixed ($L$), and hence, the number of resources per UAV is $\left\lceil {\frac{L}{K}} \right\rceil$. In other words, the maximum size of each cluster will decrease as the number of UAVs increases. In the Voronoi method, assuming a uniform distribution of devices, we fix the locations of the UAVs at an altitude of 500\,m and then assign each device to the closest UAV. However, in the proposed clustering algorithm, we find the optimal clusters and deploy the UAVs at the centers of the clusters. As shown in Figure \[Pt\_voronoi\], the proposed method outperforms the classical Voronoi scheme as the UAVs can be placed closer to the devices. As expected, increasing the number of UAVs reduces the total transmit power of IoT devices. For instance, when the number of UAVs increases from 4 to 8, the total transmit power decreases from 77\,mW to 38\,mW for the proposed method, and from 115\,mW to 95\,mW for the Voronoi case. Figure \[Pt\_voronoi\] shows that our approach results in about a 56% reduction in the transmit power of the IoT devices.
![ Average of total transmit power vs. number of UAVs.[]{data-label="Pt_voronoi"}](Pt_numb_UAVs_MonteCarlo.eps){width="6.8cm"}
Figure \[Trajectory\] shows the trajectory of one of the UAVs in a mobile IoT scenario derived from optimal transport theory. Here, we consider 8 UAVs and 400 devices whose locations are updated at each time step by adding zero-mean Gaussian random variables with a standard deviation of $50\,\text{m}$ to the previous locations. Clearly, since the locations of the devices may change over time, the optimal clusters must be updated accordingly. In Figure \[Trajectory\], the red dots correspond to the optimal destinations of the UAV at different times. In fact, as the clusters change over time, the UAV uses the proposed scheme to optimally move to one of the new cluster centers.
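The mobility step described above can be sketched as follows. The clipping of devices to the simulation area is our assumption, since the paper does not specify how boundaries are handled.

```python
import random

def update_device_locations(devices, sigma=50.0, area=1200.0):
    """One mobility step: add zero-mean Gaussian displacements (std sigma,
    in meters) to each device location, clipped to the square simulation area.
    The clipping is an assumption; the paper leaves boundary handling open."""
    step = lambda z: min(max(z + random.gauss(0.0, sigma), 0.0), area)
    return [(step(x), step(y)) for x, y in devices]
```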
![ Trajectory of a UAV in a mobile IoT network.[]{data-label="Trajectory"}](Trajectory.eps){width="7.2cm"}
Figure \[EnergyConsumption\] shows the energy consumed by each UAV during its mobility. In this case, we use 8 UAVs for supporting 400 ground IoT devices. We consider the network at 10 time instances during which the UAVs move at a speed of 10\,m/s while updating their locations. As shown in Figure \[EnergyConsumption\], for the given scenario, the total amount of energy that the UAVs use for mobility is around $10^6$\,J. Note that this is the minimum total energy consumption that can be achieved via the optimal transport of the UAVs. As shown, different UAVs spend different amounts of energy on mobility. Depending on the optimal clustering of devices over time, different UAVs might have different travel distances to the cluster centers. For instance, UAV 1 consumes 1.8 times more energy than UAV 3. Hence, the number of operational UAVs may also change over time, as their batteries deplete at different rates.
![ Energy consumption of each UAV.[]{data-label="EnergyConsumption"}](EnergyConsumption.eps){width="6.8cm"}
Figure \[UAV\_loss3\] shows the energy consumption per UAV when the number of UAVs changes. Here, we assume that the UAVs are initially deployed optimally for a given IoT system; however, after a while, some of the UAVs ($q$ UAVs) are no longer operational due to battery depletion. Consequently, the number of UAVs decreases and the remaining UAVs must update their locations to maintain the power efficiency of the ground devices. In Figure \[UAV\_loss3\], for the average case, we take the average of the energy over all possible combinations of removing $q$ UAVs out of the total number of UAVs. However, in the worst-case scenario, we remove the $q$ UAVs whose loss leads to the highest energy consumption for the remaining UAVs. Clearly, as more UAVs become inoperative, the energy consumption of the functioning UAVs increases. For example, when the number of lost UAVs increases from 2 to 4, the average energy consumption per UAV increases from 1520\,J to 2510\,J.
![ Energy consumption vs. number of battery depleted UAVs.[]{data-label="UAV_loss3"}](UAV_loss_MonteCarlo.eps){width="6.6cm"}
Conclusions
===========
In this paper, we have proposed a novel framework for efficiently deploying and moving UAVs to collect data from ground IoT devices. In particular, we have determined the optimal clustering of IoT devices as well as the optimal deployment and mobility of the UAVs such that the total transmit power of IoT devices is minimized while meeting a required bit error rate. To perform clustering given the limited capacity of each UAV, we have adopted the constrained size clustering approach. Furthermore, we have obtained the optimal trajectories that are used by the UAVs to serve the mobile IoT devices with a minimum energy consumption. The results have shown that by carefully clustering the devices and deploying the UAVs, the total transmit power of devices significantly decreases compared to the classical Voronoi-based deployment. Moreover, we have shown that, by intelligently moving the UAVs, they can remain operational for a longer time while serving the ground devices.
---
abstract: 'We study the $a$-numbers and $p$-ranks of Kummer covers of the projective line, and we give bounds for these numbers.'
address: 'Universiteit van Amsterdam, Amsterdam, The Netherlands'
author:
- Otto Johnston
date: 'October 10, 2007'
title: 'A note on the $a$-numbers and $p$-ranks of Kummer covers'
---
Introduction
============
For some special curves, explicit formulas exist for the $p$-rank in terms of $p$, the degree of $C$, and the degree of the ramification divisor. One of the most famous of these formulas is due to Deuring and Shafarevich and dates back to the 1940s (see [@safarevic]). However, as Crew pointed out much later in [@crew], such a formula is impossible for Kummer covers since even for elliptic curves the $p$-rank can vary with the other numbers fixed. The same argument works equally well for the $a$-number.
We will study the $a$-numbers and $p$-ranks of Kummer covers. Our method uses Cech cohomology to produce a natural basis of $H^1(C, {\mathscr{O}}_C)$, and we calculate the action of Frobenius on this basis. Using this action, we give bounds for the $a$-number and the $p$-rank. This extends recent results of Elkin, who used a similar method for a more specialized class of Kummer covers (see [@elkin]).
As an application, we recover Ekedahl’s bound $g(C) < \frac{p + 1}{2}$ for superspecial hyperelliptic curves (see [@ekedahl]). We show that there are numbers less than this upper bound that do not occur as the genus of such a curve.
Kummer Covers
=============
In this section, the main result is a decomposition theorem for the induced action of Frobenius on the first cohomology group of a Kummer cover.
An irreducible projective smooth curve $C$ over a field $k$ is a Kummer cover of degree $n$ if there exists a finite separable morphism $\psi: C \rightarrow {\mathbb{P}}^1_k$ of degree $n$ such that $K(C)/K({\mathbb{P}}^1_k)$ is a Kummer extension.
This definition automatically assumes that the characteristic of $k$ does not divide $n$ and that $k$ contains the $n$th roots of unity. For example, hyperelliptic curves over algebraically closed fields of characteristic not equal to $2$ are the Kummer covers of degree $2$. We will need the following algebraic fact.
Let $R$ be a noetherian unique factorization domain, and let $R[\alpha]$ be the cyclic extension of $R$ defined by a root $\alpha$ of the irreducible polynomial $z^n - u\prod_{j = 1}^{n - 1} f_j^j$, where $f_j \in R$ is square-free and $u \in R^*$. The integral closure of $R[\alpha]$ is generated as an $R$-module by $$\frac{\alpha^k}{\prod_{j = 1}^{n - 1} f_j^{\lfloor jk/n \rfloor}}, \;\;\;\; k = 0,1,\dots,n-1.$$ \[generators\]
See [@EV].
We use the previous result to find an affine cover of our curve.
Let $C$ be a Kummer cover of degree $n$ over a field $k_0$. After a base extension $k/k_0$, we can find a generator $y$ for the cyclic extension $K(C)/K({\mathbb{P}}^1_k)$ such that $y^n= f = u\prod_{j = 1}^{n - 1} f_j^j$ for $u \in k^*$ and $f_j \in k[x]$ separable. Then $C$ has an affine cover consisting of two parts $U^\prime = \emph{{\text{Spec}}}\;A$ and $V^\prime = \emph{{\text{Spec}}}\;B$, where $$A = \sum_{i = 0}^{n - 1} \frac{y^i}{\prod_{j = 1}^{n-1} f_j^{\lfloor ji/n \rfloor}} \cdot k[x],\;\;\;\;\;\; B = \sum_{i = 0}^{n - 1} \frac{y^i}{\prod_{j = 1}^{n-1} f_j^{\lfloor ji/n \rfloor}} \cdot \frac{1}{x^{m_i}} \cdot k[1/x],$$ and $$m_i = \lceil i\deg(f)/n \rceil - \sum_{j = 1}^{n - 1} \deg(f_j) \lfloor ji/n \rfloor.$$ \[cover\]
Let $K({\mathbb{P}}^1_{k_0}) = k_0(x)$. Since $K(C)/k_0(x)$ is a Kummer extension, we can find a generator $\alpha$ such that $\alpha^n = q \in k_0[x]$ and $Z^n - q \in k_0(x)[Z]$ is irreducible. We can also find a field extension $k/k_0$ such that all square-free factors of $q$ in $k[x]$ are separable. Base extending $C$ and ${\mathbb{P}}^1_{k_0}$ by $k$, we get a Kummer extension $K(C)/k(x)$ with a generator $y$ such that $y^n = f$, $f$ divides $q$, and $Z^n - f \in k(x)[Z]$ is irreducible. We have the square-free factorization $f = u\prod_{j = 1}^{n - 1} f_j^j \in k[x]$, where each $f_j$ is separable since it divides $q$.
Write ${\mathbb{P}}^1_k = {\text{Proj}}\;[t_0, t_1]$ and cover it by $U = {\text{Spec}}\;k[x]$ and $V = {\text{Spec}}\;k[1/x]$ with $x = t_1/t_0$. Using the finite morphism $\psi: C \rightarrow {\mathbb{P}}^1_k$, we form a cover of $C$ consisting of two affine open sets $U^\prime = \psi^{-1}(U)$ and $V^\prime = \psi^{-1}(V)$.
We know that $A = \Gamma(U^\prime, {\mathscr{O}}_C)$ is the integral closure of $k[x]$ in $K(C)$ and $B = \Gamma(V^\prime, {\mathscr{O}}_C)$ is the integral closure of $k[1/x]$ in $K(C)$ since $C$ is isomorphic to its normalization. Lemma \[generators\] immediately gives us the generators for $A$. To find the generators for $B$, let $\alpha = (y/x^s)$ be the root of the irreducible polynomial $Z^n - u (\prod_{j = 1}^{n - 1} f_j)/x^{n s} \in k[1/x, Z]$, where $s = \lceil \deg(f)/n \rceil$. We can use Lemma \[generators\] to compute the integral closure of $k[1/x, \alpha]$. Since the integral closure of $k[1/x]$ in $K(C)$ is the smallest integrally closed ring in $K(C)$ that contains $k[1/x]$, this computation is all we need. Rearranging the basis elements for $B$ using elementary algebra gives us the desired form where $m_i = i\;s - \lfloor i\;s - i\;\deg(f)/n \rfloor - \sum_{j = 1}^{n - 1} \deg(f_j) \lfloor ji/n \rfloor$. We get the definition of $m_i$ used above by the equality $i\;s - \lfloor i\;s - i\;\deg(f)/n \rfloor = \lceil i\;\deg(f)/n \rceil$.
Using the same notation, let $C$ be a Kummer cover of degree $n$ over $k$.
1. $H^1(C, {\mathscr{O}}_C) = \sum_{i = 1}^{n - 1} \sum_{t = 1}^{m_i - 1} \frac{y^i}{\prod_{j = 1}^{n-1} f_j^{\lfloor ji/n \rfloor}} \cdot \frac{1}{x^t} \cdot k$.
2. The genus of $C$ is $g(C) = \left( \sum_{i = 1}^{n - 1} m_i \right) - n + 1$. Moreover, $$0 \leq g(C) \leq \frac{1}{2}(n - 1)(\deg(f) - 1).$$
3. Let $\emph{{\text{char}}}(k) = p$. The induced Frobenius map $F$ on $H^1(C, {\mathscr{O}}_C)$ is determined by $$\frac{y^i}{\prod_{j = 1}^{n-1} f_j^{\lfloor ji/n \rfloor}}\cdot \frac{1}{x^t} \longmapsto \sum_{w = 1}^{m_{(pi\bmod{n})} - 1} c_w \cdot \frac{y^{(pi\bmod{n})}}{\prod_{j = 1}^{n-1} f_j^{\lfloor j \cdot (pi\bmod{n})/n \rfloor}} \cdot \frac{1}{x^w},$$ where $c_w$ is the coefficient of $x^{pt - w}$ in $f^{\lfloor pi/n \rfloor}/\prod_{j = 1}^{n - 1} f_j^{p \lfloor ji/n \rfloor - \lfloor j \cdot (pi\bmod{n})/n \rfloor}$.
\[Kummercovers\]
\(1) Let $R = \Gamma(U^\prime \cap V^\prime, {\mathscr{O}}_C)$ and note that $R$ is the integral closure of $k[x, 1/x]$ in $K(C)$. Lemma \[generators\] tells us that $R$ is generated as a $k[x, 1/x]$-module by the same set of generators that formed $A$ as a $k[x]$-module. We form the standard Cech complex $$\xymatrix{
A \oplus B \ar[r]^{\;\;d} & R \ar[r] & 0
}$$ and the result is immediate after taking the quotient.
\(2) The genus formula is immediate from part (1). The upper bound for $g(C)$ comes from considering $f$ to be square-free: it is clear we obtain the largest possible $m_i$ in this case, and hence the largest possible $g(C)$ for fixed $n$ and $\deg(f)$. An obvious lower bound for the genus of a Kummer cover with a square-free $f$ is obtained by replacing $m_i$ with $i\cdot \deg(f)/n$, which gives us $\frac{1}{2}(n - 1)(\deg(f) - 2) \leq g(C)$. The upper bound then follows from the elementary numerical fact that $\sum_{i = 1}^{n - 1} (m_i - i\deg(f)/n) \leq (n - 1)/2$, which, added to the lower bound, gives $g(C) \leq \frac{1}{2}(n - 1)(\deg(f) - 1)$.
\(3) We can determine the action of $F$ on $H^1(C, {\mathscr{O}}_C)$ by the action of Frobenius on the Cech complex $A \oplus B \rightarrow R$. Since $F$ is semi-linear on $k$, it is completely determined by its action on the basis vectors of $H^1(C, {\mathscr{O}}_C)$. To determine the action of $F$ on a basis vector, let ${\text{Frob}}$ denote the absolute Frobenius map on $C$ and look at the following commutative diagram. $$\xymatrix{
R \ar[d]_{{\text{coker}}(d)} \ar[r]^{{\text{Frob}}} & R \ar[d]^{{\text{coker}}(d)} \\
H^1(C, {\mathscr{O}}_C) \ar[r]^F & H^1(C, {\mathscr{O}}_C)
}$$ We have already computed the basis vectors of $H^1(C, {\mathscr{O}}_C)$ as the images of elements of $R$ under ${\text{coker}}(d)$ of the form $$\frac{y^i}{\prod_{j = 1}^{n-1} f_j^{\lfloor ji/n \rfloor}} \cdot \frac{1}{x^t}.$$ To compute the action of $F$ on a basis vector of $H^1(C, {\mathscr{O}}_C)$, we will simply apply ${\text{Frob}}$ to the above term of $R$ and then apply ${\text{coker}}(d)$. Applying ${\text{Frob}}$ obviously gives us $$\frac{y^i}{\prod_{j = 1}^{n-1} f_j^{\lfloor ji/n \rfloor}} \cdot \frac{1}{x^t} \mapsto \left( \frac{y^i}{\prod_{j = 1}^{n-1} f_j^{\lfloor ji/n \rfloor}} \cdot \frac{1}{x^t} \right)^p,$$ and we have that the image is equal to $$\frac{f^{\lfloor p i/n \rfloor}}{\prod_{j = 1}^{n-1} f_j^{p\lfloor ji/n \rfloor - \lfloor j (pi \bmod{n})/n \rfloor}} \cdot \frac{y^{(pi\bmod{n})}}{\prod_{j = 1}^{n-1} f_j^{\lfloor j \cdot (pi\bmod{n})/n \rfloor}} \cdot \frac{1}{x^{pt}}$$ by elementary algebra. If we let $Q_i$ denote the leftmost term, we see that $Q_i \in k[x]$ since $j \lfloor p i/n \rfloor \geq p\lfloor ji/n \rfloor - \lfloor j (pi\bmod{n})/n \rfloor$. To finish the calculation, we take the image of the above expression under ${\text{coker}}(d)$, which is clearly $0$ if $m_{(pi\bmod{n})} \leq 1$. If $m_{(pi\bmod{n})} > 1$, the image is the sum of the terms $$\left[ c_w \cdot \frac{y^{(pi\bmod{n})}}{\prod_{j = 1}^{n-1} f_j^{\lfloor j \cdot (pi\bmod{n})/n \rfloor}} \cdot \frac{1}{x^w} \right]$$ for $w = 1, \dots, m_{(pi\bmod{n})} - 1$ and $c_w$ the coefficient of $x^{pt - w}$ as a term of $Q_i$.
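The quantities appearing in the theorem are easy to compute mechanically. The following sketch (helper names are ours) evaluates the $m_i$ of Lemma \[cover\] and the genus formula of part (2) directly from the square-free factorization data.

```python
from math import ceil, floor

def m_values(n, factor_degs):
    """m_i from Lemma [cover] for f = u * prod_j f_j^j, where factor_degs[j-1]
    is deg(f_j): m_i = ceil(i*deg(f)/n) - sum_j deg(f_j) * floor(j*i/n)."""
    degs = list(factor_degs) + [0] * (n - 1 - len(factor_degs))
    deg_f = sum(j * d for j, d in enumerate(degs, start=1))
    return [ceil(i * deg_f / n)
            - sum(d * floor(j * i / n) for j, d in enumerate(degs, start=1))
            for i in range(1, n)]

def genus(n, factor_degs):
    # Part (2) of the theorem: g(C) = (sum_i m_i) - n + 1.
    return sum(m_values(n, factor_degs)) - n + 1
```

For instance, $y^{11} = x^2(x+1)$ has $f_1 = x+1$ and $f_2 = x$, and the sketch reproduces the values $m_i$ and $g(C) = 5$ used in the example below.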
The bounds given for $g(C)$ in (2) are sharp. The lower bound occurs for curves with affine equations $y^n = x^j$ for $j > 0$. The upper bound occurs for all Kummer covers with an affine equation of the form $y^n = f(x)$, where $f$ is separable and $\deg(f)$ is coprime to $n$. Also, we have seen that the computation of the Cech map involves the polynomial $$Q_i = f^{\lfloor pi/n \rfloor}/\prod_{j = 1}^{n - 1} f_j^{p \lfloor ji/n \rfloor - \lfloor j \cdot (pi\bmod{n})/n \rfloor} \in k[x].
\label{Q}$$ It is important to note that the exponents of the $1/f_j$ terms may be negative.
We now turn our attention to the $a$-number and $p$-rank of a Kummer cover. To define these numbers, we will need some facts about semi-linear maps. Recall that a semi-linear map $L: V \rightarrow V$ of a $k$-vector space $V$ is an additive map satisfying $L(\lambda x) = \theta(\lambda) L(x)$ for some $\theta \in {\text{End}}(k)$. For any semi-linear map $L$, the set $\ker(L)$ is a vector space over $k$ and ${\text{im}}(L)$ is a vector space over $\theta(k)$. Since it is more desirable to view the image of $L$ as a vector space over $k$, we define ${\text{im}}_k(L) = {\text{im}}(L) {\otimes}_{\theta(k)} k$.
Many of the decomposition theorems from linear algebra carry over to semi-linear maps. Recall that Rank-Nullity holds for $L$ in the sense that $\dim_k \ker(L) = r$ if and only if $\dim_k {\text{im}}_k(L) = n - r$, where $n = \dim_k V$. We also have that $\ker(L^m)$ stabilizes for some $m \geq 0$, where the smallest such $m$ is denoted by $i(L)$ and called the index of $L$. Finally, the Range-Nullspace decomposition tells us that $L|_{\ker(L^m)}$ is nilpotent and $\dim_k {\text{im}}_k(L^m) = \dim_k {\text{im}}_k(L^{m + 1})$ for $m \geq i(L)$. Of course, the semi-linear map we are interested in is $F$ acting on $H^1(C, {\mathscr{O}}_C)$, where $\theta$ is $\lambda \mapsto \lambda^p$ on $k$.
From this point on, we assume that ${\text{char}}(k) = p > 0$. The semi-simple rank of $F$ is ${\text{rk}}(F) = \dim_k {\text{im}}_k (F)$. The $a$-number $a(C)$ of a curve $C$ over $k$ is $a(C) = \dim_k \ker(F)$. Rank-Nullity gives us the relation ${\text{rk}}(F) = g(C) - a(C)$. The $p$-rank $f(C)$ of $C$ is $f(C) = {\text{rk}}(F^m)$ for any $m \geq i(F)$. This is well-defined because $\ker(F^m)$ stabilizes. Moreover, it is easy to see that $i(F) \leq g(C)$, so we can always take $m$ to be $g(C)$ in the definition of $f(C)$. The integers ${\text{rk}}(F)$, $a(C)$, and $f(C)$ are all between $0$ and $g(C)$. The curve $C$ is called *superspecial* if $F = 0$.
The partition of ${\mathbb{Z}}/n{\mathbb{Z}}$ into subsets via the action of multiplication by $p$ plays an important role in our next result. We fix the notation for this as follows.
Let $S = {\mathbb{Z}}/n{\mathbb{Z}}- \{0\}$ and let $G$ be the cyclic group $\{p^q: q \geq 0\} \subset ({\mathbb{Z}}/n{\mathbb{Z}})^*$. Consider the group action of $G$ on $S$ given by $p^q \cdot s = p^q\;s \bmod n$. Let $S/G$ be the set of distinct orbits of this action. \[notationone\]
Using the same notation, let $C$ be a Kummer cover over $k$ of degree $n$. Set $B_i = \emph{{\text{span}}}_k\{ \frac{y^i}{\prod_{j = 1}^{n-1} f_j^{\lfloor ji/n \rfloor}}\cdot \frac{1}{x^t}\}_{t = 1}^{m_i - 1}$.
1. $F^q(B_i) \subset B_{(i p^q\bmod n)}$ for $q > 0$.
2. $\emph{{\text{rk}}}(F) = \sum_{i = 1}^{n - 1} \emph{{\text{rk}}}(F|_{B_i})$.
3. $f(C) = \sum_{\Omega \in S/G} \emph{{\text{rk}}}(F^m|_{B_i})$, where $m \geq i(F)$ and $i \in \Omega$ is any element.
\[decomposition\]
Since $\sum_{i = 1}^{n - 1} B_i = H^1(C, {\mathscr{O}}_C)$ by the first part of Lemma \[Kummercovers\], (2) and (3) follow from (1), so we prove (1). Part (3) of Lemma \[Kummercovers\] tells us that the action of $F$ maps the $m_{i} - 1$ basis vectors of $B_i$ into the span of the $m_{(pi\bmod{n})} - 1$ basis vectors of $B_{(pi\bmod{n})}$. Since multiplication by $p$ defines a bijection from ${\mathbb{Z}}/n{\mathbb{Z}}$ to itself, these are the only basis vectors mapped into $B_{(pi\bmod{n})}$. This proves $F(B_i) \subset B_{(p i\bmod n)}$. Iterating $F$ finishes the proof.
Let $C$ be the Kummer cover defined by $y^{11} = x^2 (x + 1)$ over a field of characteristic $13$ that contains the $11$th roots of unity. We will show that $a(C) = 1$ and $f(C) = 0$ using the theorem. The orbit of $1$ under the action of $G$ on $S$ is $\{1, 2, 4, 8, 5, 10, 9, 7, 3, 6\}$. Thus, $S/G$ consists of the single orbit $S$. Moreover, the set $\{4, 5, 8, 9, 10\}$ consists of all values of $i < 11$ where $m_i > 1$; since $m_i = 2$ for these values, $g(C) = 5$. On the other hand, since $S/G$ consists of a single orbit and there is some $j$ for which $m_j = 1$, the image of $F^q(B_i)$ passes through a zero dimensional $B_j$ for some iteration $q$ for any $i$. Hence, $f(C) = 0$. Since $m_7 = 1$ and $9$ maps to $7$ in one iteration, ${\text{rk}}(F|_{B_9}) = 0$, so we can compute ${\text{rk}}(F)$ by taking the sum of ${\text{rk}}(F|_{B_i})$ for $i \in \{4, 5, 8, 10\}$. To determine ${\text{rk}}(F|_{B_i})$, all we need to know is if the coefficient $a_{i, 12}$ of $x^{12}$ in $Q_i$ is zero or not. A simple computation reveals $a_{4, 12} = 4$, $a_{5, 12} = 5$, $a_{8, 12} = 10$, and $a_{10, 12} = 3$. Thus, ${\text{rk}}(F) = 4$ and $a(C) = 1$. We see that $C$ is an example of a curve of genus $5$ with $a$-number $1$ and $p$-rank $0$.
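The orbit computation in such examples is mechanical. A small sketch of Notation \[notationone\] (helper name ours):

```python
def orbits(n, p):
    """Orbits of S = (Z/nZ) - {0} under multiplication by p (Notation [notationone]).
    Assumes gcd(p, n) = 1, as in the Kummer setting where char(k) does not divide n."""
    seen, result = set(), []
    for s in range(1, n):
        if s in seen:
            continue
        orbit, x = [], s
        while x not in orbit:
            orbit.append(x)
            x = (x * p) % n
        seen.update(orbit)
        result.append(orbit)
    return result
```

With $n = 11$ and $p = 13$ this returns the single orbit $\{1, 2, 4, 8, 5, 10, 9, 7, 3, 6\}$ of the example above; with $n = 6$ and $p = 5$ it returns the three orbits used in the next section.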
Bounds for the invariants {#bounds-for-the-invariants .unnumbered}
=========================
Using Theorem \[decomposition\], we can easily produce the following bounds. The group action plays an important role in the calculation of the $p$-rank.
Using the notation introduced in Lemma \[cover\] and Notation \[notationone\], let $C$ be a Kummer cover of degree $n$ over $k$.
1. $a(C) \geq 1 - n + \sum_{i = 1}^{n - 1} \max\{1, m_i - m_{(pi\bmod{n})} + 1\}$.
2. $f(C) \leq \sum_{\Omega \in S/G} \min_{i \in \Omega}\{m_i - 1\}$.
\[lowerbound\]
\(1) Using part (1) of Theorem \[decomposition\] and Rank-Nullity, we have the bound ${\text{rk}}(F|_{B_i}) \leq \min\{\dim_k B_i, \dim_k B_{pi \bmod n}\} = \min\{m_i - 1, m_{(pi\bmod{n})} - 1\}$. We get the lower bound for $a(C)$ by subtracting the upper bound of ${\text{rk}}(F)$ from $g(C)$ given in part (2) of Lemma \[Kummercovers\].
\(2) Taking iterations in part (1) and using the Range-Nullspace decomposition, we have ${\text{rk}}(F^m|_{B_i}) \leq \min\{\dim_k B_i, \dim_k B_{pi \bmod n}, \dim_k B_{p^2 i\bmod n}, \dots \} = \min_{i \in \Omega}\{m_i - 1\}$, where $\Omega$ is the orbit of $i$ under the action of $G$.
The upper bounds are sharp. For instance, take $C$ to be the curve $y^6 = x^3 + x^2 + 1$ over a field $k$ of characteristic $5$ that contains the $6$th roots of unity. In this case, $G = \{1, 5\}$ and $S/G$ consists of the orbits $\{3\}$, $\{1, 5\}$, and $\{2, 4\}$. Only $i$ in $\{3, 4, 5\}$ satisfies $m_i > 1$, where $m_3 = m_4 = 2$ and $m_5 = 3$. From this information alone, we obtain the following: $g(C) = 4$, $f(C) \leq 1$, ${\text{rk}}(F) \leq 1$, $a(C) \geq 3$, and $F = F|_{B_3}$. The action of $F$ on $B_3$ is easy to determine: it is multiplication by the coefficient $a_4$ of $Q_3$, which is $1$. Thus, our bounds are all met. We see that $C$ is an example of a curve of genus $4$ with $a$-number $3$ and $p$-rank $g - 3$.
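For square-free $f$ (so that $m_i = \lceil i\deg(f)/n \rceil$), the $p$-rank bound of part (2) can be evaluated mechanically. A sketch with a hypothetical helper name:

```python
from math import ceil

def p_rank_upper_bound(n, p, deg_f):
    """Sketch of the bound f(C) <= sum over orbits of min_i (m_i - 1),
    specialized to square-free f, where m_i = ceil(i * deg_f / n)."""
    m = {i: ceil(i * deg_f / n) for i in range(1, n)}
    seen, bound = set(), 0
    for s in range(1, n):
        if s in seen:
            continue
        orbit, x = [], s
        while x not in orbit:
            orbit.append(x)
            x = (x * p) % n
        seen.update(orbit)
        bound += min(m[i] - 1 for i in orbit)
    return bound
```

For $y^6 = x^3 + x^2 + 1$ in characteristic $5$ this returns $1$, matching the example; for a hyperelliptic curve $y^2 = f$ with $\deg(f) = 2g + 1$ it returns $g$, the trivial bound.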
Using the notation introduced in Lemma \[cover\], let $C$ be a Kummer cover of degree $n$ over $k$. $$a(C) \leq 1 - n + \sum_{i = 1}^{n - 1} \min\{m_i, \max\{m_i - q_i + v_i, 1 + v_i \} \}.$$ where $$q_i = \lfloor (\deg(Q_i) + m_{(pi\bmod{n})} - 1)/p\rfloor\;\;\;\text{and}\;\;\;v_i = \lfloor \deg(Q_i)/p \rfloor. \vspace{0.1in}$$ \[upperbound\]
Our task is to compute a lower bound for ${\text{rk}}(F|_{B_i})$. The entries in $F|_{B_i}$ come from the coefficients of the polynomial $Q_i$ as described by Lemma \[Kummercovers\]. Let $c$ denote the leading coefficient of $Q_i$. We will exploit the following fact: when $c$ is used in $F|_{B_i}$, we can use row-reduction to easily see that it must contribute $1$ to the rank (indeed, any coefficient of $Q_i$ can be used at most once on any given row and all entries below those coming from $c$ are zero). This means we get a lower bound for ${\text{rk}}(F|_{B_i})$ by counting the minimal number of rows where $c$ must occur; we compute this number as follows. The integer $v_i$ is the largest possible row of $F|_{B_i}$ where $c$ may not occur since $p v_i \leq \deg(Q_i)$. The largest row of $F|_{B_i}$ where $c$ must occur is $q_i$ since $p\;(q_i + 1) - m_{(pi\bmod{n})} + 1 > \deg(Q_i)$. Thus, $$\max\{0, \min\{q_i - v_i, m_i - 1 - v_i\}\} \leq {\text{rk}}(F|_{B_i}).$$ We conclude by taking the sum over $i$ of this lower bound for ${\text{rk}}(F|_{B_i})$ and subtracting it from $g(C)$ as we did before.
This bound can be made much stronger for superelliptic curves; see [@elkin].
Hyperelliptic curves {#hyperelliptic-curves .unnumbered}
====================
In this section, we look at hyperelliptic curves over an algebraically closed field $k$. Since $a(C)$ and $f(C)$ are invariants under separable base extension, the assumption that $k$ is algebraically closed is no loss of generality for our purposes. Hyperelliptic curves are Kummer covers in every characteristic except $2$, so we only need to extend our results to characteristic $2$.
Let $C$ be a hyperelliptic curve of genus $g = g(C)$ over an algebraically closed field $k$ of characteristic $2$. Assume that $C$ is ramified at infinity.
1. $C$ has an affine cover consisting of two parts $U^\prime = \emph{{\text{Spec}}}\;A$ and $V^\prime = \emph{{\text{Spec}}}\;B$, where $$A = k[x, y]/(y^2 + Qy - P),$$ $$B = k[\frac{1}{x}, \frac{y}{x^{g + 1}}]/(\frac{y^2}{x^{2g + 2}} + \frac{Qy}{x^{2g + 2}} - \frac{P}{x^{2g + 2}}),$$ and where $Q, P \in k[x]$ satisfy $\deg(Q) \leq g$, $\deg(P) = 2g + 1$, and $Q$ is coprime to $(Q^\prime)^2 P + (P^\prime)^2$.
2. $H^1(C, {\mathscr{O}}_C) = \sum_{i = 1}^{g} k \cdot y/x^i$. The induced action of Frobenius is given by $$y/x^i \mapsto \sum_{j = 1}^{g} c_{i,j} y/x^j,$$ where $c_{i, j}$ is the coefficient of $x^{2i - j}$ as a term of $Q$.
3. If $f(C) = 0$, then $a(C) = \lfloor \frac{g + 1}{2} \rfloor$.
\(1) See Proposition 7.4.24 of [@qing].
\(2) We have that $R = \Gamma(U^\prime \cap V^\prime, {\mathscr{O}}_C) = k[x, 1/x, y]/(y^2 + Qy - P)$. The result follows by forming the standard Cech complex $A \oplus B \rightarrow R$ and passing to the quotient. As for the action induced by Frobenius, if we square $y/x^i$ in $H^1(C, {\mathscr{O}}_C)$, we have the coset relation $[(Qy - P)/x^{2i}] = [Qy/x^{2i}] = \sum_{j = 1}^{g} c_{i,j} y/x^j$, where $c_{i, j}$ is the coefficient of $Q$ as stated.
\(3) View $F$ as the $g \times g$ matrix $(c_{i, j})$ and use the notation $F[i, j]$ to denote $c_{i, j}$. Part (2) tells us that the $c_{i, j}$ are coefficients of the polynomial $Q = \sum a_i x^i$ of degree at most $g$. Since $f(C) = 0$, we also know that $F$ is nilpotent. We have that $(F^n)[g, g]$ is a power of $a_g$, which forces $a_g = 0$. Using this, we continue our elimination: $(F^n)[g - 1, g - 1]$ being a power of $a_{g - 1}$ forces $a_{g - 1} = 0$, then $a_{g - 2} = 0$, and so on, until we have $a_1 = 0$. Hence, the only $Q$ that yields a nilpotent $F$ is the constant $Q = a_0$. It must be non-zero because $0$ is not coprime to $(P^\prime)^2$. If $g$ is even, $a_0$ appears on $g/2$ rows; if $g$ is odd, $a_0$ appears on $(g - 1)/2$ rows. In either case ${\text{rk}}(F) = \lfloor g/2 \rfloor$, so $a(C) = g - \lfloor g/2 \rfloor = \lfloor \frac{g + 1}{2} \rfloor$.
A much stronger version of (3) has been proved by G. van der Geer (see Lemma 11.1 of [@vdg]).
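The matrix $(c_{i,j})$ of part (2) is straightforward to build. The sketch below (helper name ours, coefficients taken in the prime field $\mathds{F}_2$ for illustration) also checks part (3) for a constant $Q$, where the only nonzero entries sit at $j = 2i$, giving ${\text{rk}}(F) = \lfloor g/2 \rfloor$ and $a(C) = \lfloor (g+1)/2 \rfloor$.

```python
def frobenius_matrix(Q_coeffs, g):
    """Matrix (c_{i,j}) of F on H^1(C, O_C) in characteristic 2, where
    c_{i,j} is the coefficient of x^{2i - j} in Q (part (2) above).
    Q_coeffs[k] holds the coefficient of x^k."""
    c = lambda k: Q_coeffs[k] if 0 <= k < len(Q_coeffs) else 0
    return [[c(2 * i - j) for j in range(1, g + 1)] for i in range(1, g + 1)]
```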
Ekedahl’s bound $g(C) < (p + 1)/2$ for superspecial hyperelliptic curves is an immediate consequence of part (3) and Corollary \[upperbound\] when we take $C \rightarrow {\mathbb{P}}^1_k$ to be ramified over infinity. It is well-known that this bound is sharp. What we want to know is if all the numbers below Ekedahl’s bound occur as the genus of some superspecial hyperelliptic curve in characteristic $p$. For $g(C) = 2$ and $p > 3$, such curves exist by a result of Ibukiyama, Katsura, and Oort in [@ibu-kat-oort]. The case $g(C) = 3$ and $p > 5$ follows from a result in [@brock]. Despite these early successes, we will show that there are gaps for genus $4$ in the next example by showing that there is no superspecial hyperelliptic curve of genus $4$ in characteristic $11$.
Assume that $C$ is a superspecial hyperelliptic curve of genus $4$ over an algebraically closed field of characteristic $11$. Apply a fractional linear transformation to force $0$ and infinity to be ramification points. Using Lemma \[cover\], $C$ has an affine equation of the form $$y^2 = f(x) = a_1 x + \dots + a_9 x^9,$$ with $a_1 \neq 0$ and $a_9 \neq 0$. Since $C$ is superspecial, Lemma \[Kummercovers\] tells us, in particular, that $f(x)^5 = \sum b_i x^i$ has $b_j = 0$ for $j \in \{ 7, 8, 9, 10, 18, 19, 20, 21\}$. Since $0 = b_7 = 10 a_1^3 a_2^2 + 5a_1^4 a_3$, we have the relation $a_3 = -2a_2^2/a_1$. Likewise, $a_4 = -3a_2^3/5a_1^2$, $a_5 = -6a_2^4/5a_1^3$, and $a_6 = -8a_2^5/5a_1^4$. For $b_{18}$, we have $$\frac{7 a_2^{13}}{a_1^8} + \frac{8 a_2^7 a_7}{a_1^3} + 8a_1^2 a_2 a_7^2 + \frac{4a_2^6 a_8}{a_1^2} + 9a_1^3 a_7 a_8 = 0.$$ This breaks down into three possible statements that we enumerate and eliminate below.
I. $a_8 = 0$ and $-7a_2^{13} - 8a_1^5 a_2^7 a_7 - 8a_1^{10} a_2 a_7^2 = 0$. This gives us $$0 = b_{19} = \frac{9 a_2^{14}}{a_1^9} + \frac{4a_2^8 a_7}{a_1^4} + 3a_1 a_2^2 a_7^2 + \frac{4a_2^6 a_9}{a_1^2} + 9a_1^3 a_7 a_9.$$ Since $a_9 \neq 0$, we have two subcases from the condition above. We enumerate and eliminate them.
I.a. $a_7 = -4a_2^6/9a_1^5$ and $a_{2}^{14} = 0$. This case is eliminated because $0 = b_{21} = 10a_1^4 a_9^2$, so either $a_1$ or $a_9$ is zero, which is not possible.
I.b. $a_9 = -(9a_2^{14} + 4a_1^5 a_2^8 a_7 + 3a_1^{10} a_2^2 a_7^2)/a_1^7(4 a_2^6 + 9a_1^5 a_7)$. Returning to $b_{18} = 0$, this forces $a_2 = 0$, which in turn forces $a_9 = 0$.
II\. $a_7 = -4a_2^6/9a_1^5$ and $a_2 = 0$. We have that the relation $b_{19} = 0$ yields $a_8 = 0$, and then $b_{21} = 0$ forces either $a_1$ or $a_9$ to be zero.
III\. $a_8 = -(7a_2^{13} + 8a_1^5 a_2^7 a_7 + 8a_1^{10}a_2 a_7^2)/(a_1^6 (4a_2^6 + 9a_1^5 a_7))$. The relation $b_{19} = 0$, gives us the following subcases.
III.a. $a_9 = 0$ and $-5a_2^{14} - 7a_1^{10}a_2^2 a_7^2 = 0$. This case is impossible because $a_9 \neq 0$.
III.b. $a_7 = -4a_2^6/9a_1^5$ and $-5a_2^{14} - 7a_1^{10} a_2^2 a_7^2 = 0$. This case is impossible because this definition of $a_7$ conflicts with the definition of $a_8$ (it causes a division by $0$).
III.c. $a_9 = -(5a_2^{14} + 7a_1^{10} a_2^2 a_7^2)/(a_1^7 (4a_2^6 + 9a_1^5 a_7))$. We have that $$0 = b_{20} = \frac{7a_1^{25} a_2^{15} + 8a_1^{30} a_2^9 a_7 + 8a_1^{35} a_2^3 a_7^2}{a_1^{35}}.$$ On the other hand, $$0 = b_{21} = \frac{5a_1^{24} a_2^{16} + a_1^{29} a_2^{10} a_7 + a_1^{34} a_2^4 a_7^2}{a_1^{35}}.$$ Combining the two yields $a_2 = 0$, which forces $a_9 = 0$.
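Coefficient relations such as $b_7 = 10 a_1^3 a_2^2 + 5 a_1^4 a_3$ can be spot-checked numerically modulo $11$ with naive polynomial arithmetic (helper names ours):

```python
def poly_mul(a, b, p):
    # Multiply coefficient lists (a[k] = coefficient of x^k) modulo p.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_pow(a, e, p):
    # Naive repeated multiplication; fine for the tiny degrees used here.
    out = [1]
    for _ in range(e):
        out = poly_mul(out, a, p)
    return out
```

For instance, with $a_1 = a_2 = 1$ the relation $a_3 = -2a_2^2/a_1$ gives $a_3 = 9$ in $\mathds{F}_{11}$, and the coefficient of $x^7$ in $f^5$ indeed vanishes, while $a_3 = 1$ gives $b_7 = 10 + 5 = 4$.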
[9]{} B. Brock. *Superspecial curves of genera two and three*, Ph.D. dissertation, Princeton University (1993). R. Crew. *Etale $p$-covers in characteristic $p$*, Comp. Math. 52 (1984), p. 31-45. T. Ekedahl. *On supersingular curves and abelian varieties*, Math. Scand. 60 (1987), p. 151-178. A. Elkin. *The rank of the Cartier operator on cyclic covers of the projective line*, arXiv:0708.0431v1 (2007). H. Esnault, E. Viehweg. *Lectures on Vanishing Theorems*, DMV seminar, Band 20, Birkhäuser, Basel (1992). G. van der Geer. *Cycles on the Moduli Space of Abelian Varieties*, Aspects Math., Vieweg, Braunschweig (1999), p. 65-89. T. Ibukiyama, T. Katsura, and F. Oort. *Supersingular curves of genus two and class numbers*, Comp. Math. 57 (1986), p. 127-152. Q. Liu. *Algebraic Geometry and Arithmetic Curves*, Oxford Grad. Texts Math., Oxford University Press Inc., New York, 2002. I. R. Shafarevich. *On $p$-extensions*, AMS Transl., Ser. 2, 4 (1956), p. 59-72.
---
author:
- 'Y. Lyanda-Geller'
- 'I. A. Shelykh'
- 'N.T. Bagraev'
- 'N.G. Galkin'
title: 'Comment on “Experimental Demonstration of the Time Reversal Aharonov-Casher Effect”'
---
In a recent Letter [@Nitta], Bergsten et al. have studied the resistance oscillations with gate voltage $V_g$ and magnetic field $B$ in arrays of semiconductor rings, and interpreted the oscillatory $B$-dependence as Altshuler-Aronov-Spivak (AAS) oscillations and the oscillatory $V_g$-dependence as the time reversal Aharonov-Casher (AC) effect. This comment shows (i) that the authors of [@Nitta] incorrectly identified the AAS effect as the source of the resistance oscillations with $B$, (ii) that spin relaxation in [@Nitta] is strong enough to destroy oscillatory effects of spin origin, e.g., the AC effect, and (iii) that the oscillations in [@Nitta] are caused by gate-induced changes in the carrier density and the Fermi energy, and are unrelated to spin.
The AAS effect consists of $h/2e$ oscillations of the conductance with $B$ in disordered diffusive rings. The oscillations occur because the interference of the two electron trajectories passing around the whole ring clockwise and counterclockwise survives disorder averaging in the diffusive regime $l \ll L_{\phi},L$, where $l$ is the mean free path, $L_{\phi}$ is the phase breaking length, and $L$ is the circumference of the ring.
\(i) The mean free path in the samples of [@Nitta] is $l\sim1.5{-}2\,\mu{m}$. From the ratio of the $h/2e$ and $h/4e$ signal amplitudes [@Nitta], $L_{\phi}$ is between 2.8 and 3.5 $\mu{m}$. (Note that the $h/2e$ signal is due to interference of clockwise and counterclockwise paths, with magnitude defined by $\exp(-2L/L_{\phi})$, while the $h/4e$ oscillations are due to interference of paths going twice clockwise and twice counterclockwise, defined by $\exp(-4L/L_{\phi})$. The calculation of $L_{\phi}$ in [@Nitta] misses a factor of two.) Thus, the samples of [@Nitta] are not in the diffusive regime relevant to AAS oscillations, but in the quasi-ballistic regime $l \lesssim L$. Then the $h/2e$ oscillations are determined not only by interference of time-reversed paths, but also, e.g., by the interference of the amplitude of propagation through the right arm clockwise with the amplitude of propagation via the three-segment path: the left arm, the right arm (counterclockwise), and again the left arm. With all interference processes included, the $h/2e$ oscillations depend on the Fermi wave-vector and $n_s$ [@Buttiker],[@Aronov]. Averaging over a few resistance curves does not eliminate the contributions of non-time-reversed processes (certainly not beyond $0.3{\%}$ of the overall signal for the oscillations in [@Nitta]). Their importance is crucial but missed in [@Nitta].
\(ii) Another mistake in [@Nitta] is the neglect of spin relaxation. For the spin-orbit constant $\alpha=5\,\mathrm{peV}{\cdot}\mathrm{m}$, the parameter $\alpha ml\sim2.5$ ($m$ is the effective mass), and the spin simply flips in a single scattering event. The spin-flip length is $L_S=l< L$. Thus, oscillations of spin origin are rather unlikely in [@Nitta]. The feasible setting closest to [@Nitta] requires the ballistic regime $l \gg L$ [@Aronov], i.e., a mobility an order of magnitude higher. Note that $L_{\phi}>L_S$, so oscillations with $B$ originating from charge coherence can plausibly be observed.
\(iii) The key to understanding the $h/2e$ oscillations with $V_g$ in [@Nitta] is its Fig. 4. It can be seen clearly that resistance oscillations are present only when $n_s$ changes with $V_g$, and are absent when $n_s$ saturates. Therefore the reason for the observed oscillations is the variation of $n_s$. Oscillations of spin origin, particularly the AC effect, must persist when $n_s$ is constant while $\alpha$ varies with $V_g$. No such evidence is present in [@Nitta].
The origin of the oscillations with $n_s$ is the contribution to the $h/2e$ signal from interference of non-time-reversed paths. These are independent of $L_S$ and are governed by $L_{\phi}>L_S$, which makes this effect dominant over any spin oscillations. Taking into account the role of the contacts connecting the ring and the leads [@Buttiker],[@Aronov], in the absence of spin-orbit interactions and for strong coupling of leads and rings, the conductance of a single ring is $$\label{conductance}
G=\frac{2e^2}{h}\left[1-\left|\frac{1-\cos\left(\pi\Phi/\Phi_0\right)}{1-e^{ik_FL}\cos^2\left(\pi\Phi/\Phi_0\right)}\right|^2\right]$$
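As a sanity check, Eq. (\[conductance\]) is easy to evaluate numerically. The sketch below is our own illustration, not code from the Letter; the function name `ring_conductance` and the parametrization by $\Phi/\Phi_0$ and $k_F L$ are assumptions made here for clarity. It returns $G$ in units of $2e^2/h$.

```python
import numpy as np

def ring_conductance(phi_ratio, kF_L):
    """Conductance of a single strongly coupled ballistic ring,
    Eq. (1), in units of 2e^2/h.

    phi_ratio -- Phi / Phi_0 (flux through the ring over the flux quantum)
    kF_L      -- k_F * L (Fermi wave-vector times ring circumference)
    """
    c = np.cos(np.pi * phi_ratio)
    # complex transmission-suppression amplitude inside |...|^2 in Eq. (1)
    amplitude = (1.0 - c) / (1.0 - np.exp(1j * kF_L) * c**2)
    return 1.0 - np.abs(amplitude) ** 2
```

At $\Phi=0$ the ring transmits fully ($G=2e^2/h$), while at $\Phi=\Phi_0/2$ it is fully reflecting; the explicit $k_F L$ dependence of the harmonics is what produces the density-driven oscillations discussed in the text.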
![\[fig1\] The amplitudes of the second harmonics in a single ring (solid curve) and four consecutively connected rings (dashed curve). The spin-orbit interaction is absent.](Fig1.eps){width="1.0\linewidth"}
We note that disregarding transmission and reflection at the contacts is yet another critical omission in [@Nitta], whose equation for the conductance is incorrect in the ballistic/quasi-ballistic regime. (It is also incorrect for the AAS and AC effects in the diffusive regime.) The second harmonic in (\[conductance\]) depends on $k_F$ and $n_s$ in an oscillatory manner, leading to oscillations of the conductance with $V_g$. A system of $n$ interconnected rings can be described similarly to the setting in [@Shelykh]. In Fig. 1, we show the dependence of the amplitude of the second harmonic on $k_F$ for one and four rings. The conductance oscillates with electron density even though no spin effects are involved. To summarize, the conclusions of [@Nitta] on the observation of the AC effect are unfounded.
[10]{} T. Bergsten et al., Phys. Rev. Lett. [**97**]{}, 196803 (2006). M. Buttiker, Y. Imry, and M. Ya. Azbel, Phys. Rev. A [**30**]{}, 1982 (1984). A. G. Aronov and Y. B. Lyanda-Geller, Phys. Rev. Lett. [**70**]{}, 343 (1993). I. A. Shelykh et al., Fiz. Tekhn. Polupr. [**34**]{}, 477 (2000) \[Semiconductors [**34**]{}, 462 (2000)\].
|
---
abstract: 'Universal Lesion Detection (ULD) in computed tomography plays an essential role in computer-aided diagnosis systems. Many detection approaches achieve excellent results for ULD using possible bounding boxes (or anchors) as proposals. However, empirical evidence shows that using anchor-based proposals leads to a high false-positive (FP) rate. In this paper, we propose a **box-to-map** method to represent a bounding box with three soft continuous maps with bounds in $x$-, $y$- and $xy$- directions. The **bounding maps (BMs)** are used in two-stage anchor-based ULD frameworks to reduce the FP rate. In the $1^{st}$ stage of the region proposal network, we replace the sharp binary ground-truth label of anchors with the corresponding $xy$-direction BM, so that the positive anchors are now graded. In the $2^{nd}$ stage, we add a branch that takes our continuous BMs in $x$- and $y$- directions for extra supervision of detailed **locations**. Our method, when embedded into three state-of-the-art two-stage anchor-based detection methods, brings a free detection accuracy improvement (e.g., a 1.68% to 3.85% boost of sensitivity at 4 FPs) without extra inference time.'
author:
- Han Li
- Hu Han
- 'S. Kevin Zhou'
bibliography:
- 'egbib.bib'
title: Bounding Maps for Universal Lesion Detection
---
Introduction
============
Universal Lesion Detection (ULD) in computed tomography (CT) images [[@zhang2019anchor_free; @zhang2019lesion; @zhang2020Agg_Fas; @zlocha2019one-stage; @tao2019improving; @tang2019uldor; @yan20183DCE; @li2019mvp]]{}, which aims to localize different types of lesions instead of identifying lesion types [[@liao2019evaluate; @lin2019automated; @wang2019volumetric; @yan2019mulan; @astaraki2019normal; @tang2019nodulenet; @shao2019attentive; @liu20193dfpn; @wang2018automated; @zhu2018deepem; @li2020high; @liu20183d]]{}, plays an essential role in computer-aided diagnosis (CAD) systems. Recently, deep learning-based detection approaches have achieved excellent results for ULD [@zhou2015medical][@zhou2017deep] using possible bounding boxes (BBoxs) (or anchors) as proposals. However, empirical evidence shows that using anchor-based proposals leads to severe data imbalance (e.g., class and spatial imbalance) [@oksuz2020imbalance], which in turn leads to a high false-positive (FP) rate in ULD. Therefore, there is an urgent need to reduce the FP proposals and improve the lesion detection performance.
Most existing ULD methods are mainly inspired by the successful deep models in object detection from natural images. Tang et al. [@tang2019uldor] constructed a pseudo mask for each lesion region as the extra supervision information to adapt a Mask-RCNN [@he2017maskrcnn] for ULD. Yan et al. [@yan20183DCE] proposed a 3D Context Enhanced (3DCE) R-CNN model based on the model [@deng2009imagenet] pre-trained from ImageNet for 3D context modeling. Li et al. [@li2019mvp] proposed the so-called MVP-Net, which is a multi-view feature pyramid network (FPN) [@lin2017fpn] with position-aware attention to incorporate multi-view information for ULD. Han et al. [@8606226] leveraged cascaded multi-task learning to jointly optimize object detection and representation learning.
![The sharp GT BBox of an image is represented by three continuous 2D bounding maps (BMs) in (along) different directions (axes): (a) $BM_x$, (b) $BM_y$, (c) $BM_{xy}$.[]{data-label="fig:fig1_generate_BM"}](fig1_generate_BM.pdf)
All the above approaches proposed for ULD are designed based on a two-stage anchor-based framework, i.e., proposal generation followed by classification and regression like Faster R-CNN [@ren2015fasterrcnn]. They achieve good performance because: i) The anchoring mechanism is a good reception-field initialization for limited-data and limited-lesion-category datasets. ii) The two-stage mechanism is a coarse-to-fine mechanism for the CT lesion dataset that only contains two categories (‘lesion’ or not), i.e., it first finds lesion proposals and then removes the FP proposals. However, such a framework has two main limitations for effective ULD: (i) *The imbalanced anchors in stage-1* (e.g., class and spatial imbalance [@oksuz2020imbalance]). In the first stage, anchor-based methods first find the positive (lesion) anchors and use them as the region of interest (ROI) proposals according to the intersection over union (IoU) between anchors and ground-truth (GT) BBoxs. Hence, the number of positive anchors is determined by the IoU threshold and the number of GT BBoxs per image. Specifically, an anchor is considered positive if its IoU with a GT BBox is greater than the IoU threshold and negative otherwise. This works for natural images, which may contain many GT BBoxs per image, but it is not suitable for ULD. Most CT slices only have one or two GT lesion BBox(s), so the number of positive anchors is rather limited. This limitation can cause severe data imbalance and harm the training convergence of the whole network. Using a lower IoU threshold is a simple way to get more positive anchors, but labeling many low-IoU anchors as positive can also lead to a high FP rate in ULD. (ii) *The insufficient supervision in stage-2.* In the second stage, each ROI proposal (selected anchor) from the first stage has one corresponding classification score representing the probability of containing a lesion.
The ROI proposals with high classification scores are chosen to obtain the final BBox prediction. ULD is a challenging task due to the similar appearances (e.g., intensity and texture) of lesions and other tissues, so non-lesion regions can also get very high scores. Hence, a single classification score can easily lead to FPs in ULD.
To address the anchor-imbalance problem, anchor-free methods [[@tian2019fcos; @zhou2019objectsaspotints]]{} solve detection in a per-pixel prediction manner and achieve success on natural images with sufficient data and object categories. But for lesion detection (lesion or not) with limited data, they lack the needed precision. To overcome the supervision-insufficiency problem, Mask R-CNN-like [@he2017maskrcnn] methods add a mask branch to introduce extra segmentation supervision and hence improve the detection performance, but they need training segmentation masks, which are costly to obtain.
![The network architecture of the proposed ULD approach, in which, three proposed BMs in $x$-, $y$- and $xy$- directions are used in its two stages.[]{data-label="fig:fig2_network_architecture"}](fig2_network_architecture.pdf)
In this paper, we present a continuous bounding map (BM) representation to enable per-pixel prediction in the 1st stage and to introduce extra supervision in the 2nd stage of any anchor-based detection method. Our **first contribution** is a new box-to-map representation, which represents a BBox by three 2D bounding maps (BMs) in (along) three different directions (axes): $x$-direction ($BM_x$), $y$-direction ($BM_y$), and $xy$-direction ($BM_{xy}$), as shown in Fig. \[fig:fig1\_generate\_BM\]. The pixel values in $BM_x$ and $BM_y$ decrease linearly from the centerline to the BBox borders in the $x$ and $y$ directions, respectively, while the pixel values in $BM_{xy}$ decrease along both directions. Compared with a sharp binary representation (e.g., binary anchor labels in RPN, binary segmentation masks in Mask R-CNN [@he2017maskrcnn]), such a soft continuous map provides a more detailed representation of **location**. This per-pixel, continuous representation encourages the network to learn more contextual information [@zhou2019objectsaspotints], thereby reducing the FPs. Our **second contribution** is to expand the capability of a two-stage anchor-based detection method using our BM representation in a lightweight way. First, we use $BM_{xy}$ as the GT of a positive anchor in the first stage as in Fig. \[fig:fig2\_network\_architecture\] and choose a proper IoU threshold to deal with the anchor-imbalance problem. Second, we add an additional branch, called the BM branch, in parallel with the BBox branch [@ren2015fasterrcnn] in the second stage as in Fig. \[fig:fig3\_BM\_branch\]. The BM branch introduces extra supervision of detailed location to the whole network in a pixel-wise manner and thus decreases the FP rate in ULD. We conduct extensive experiments on the DeepLesion dataset [@yan18deeplesion] with four state-of-the-art ULD methods to validate the effectiveness of our method.
Method
======
As shown in Fig. \[fig:fig2\_network\_architecture\], we utilize BMs to reduce the ULD FP rate by replacing the original positive anchor class labels in stage-1 and adding a BM branch to introduce extra pixel-wise location supervision in stage-2. Section \[sec:bm\] details the BM representation and Section \[sec:bmanchor\] defines the anchor labels for RPN training based on our BMs. Section \[sec:bmbranch\] explains the newly introduced BM branch.
Bounding maps {#sec:bm}
-------------
Motivated by [@zhou2019objectsaspotints], the BMs are constructed from all-zero maps by changing only the values of the pixels located within the BBox(s), as in Fig. \[fig:fig1\_generate\_BM\]. Let $(x^{(i)}_1,y^{(i)}_1,x^{(i)}_2,y^{(i)}_2)$ be the $i^{th}$ lesion GT BBox of a CT image $I_{ct}\in \mathcal{R}^ {W\times H}$; the set of coordinates within the $i^{th}$ BBox can be denoted as $$S^{(i)}_{BBox}=\{(x,y){|}x^{(i)}_1 \leq x \leq x^{(i)}_2 ~\&~ y^{(i)}_1 \leq y \leq y^{(i)}_2 \},$$ and the center point of this BBox lies at $(x^{(i)}_{ctr},y^{(i)}_{ctr})=(\frac{x^{(i)}_1+x^{(i)}_2}{2},\frac{y^{(i)}_1+y^{(i)}_2}{2})$.
Within each BBox $S^{(i)}_{BBox}$, the pixel values in $BM^{(i)}_x \in \mathcal{R}^ {W\times H}$ and $BM^{(i)}_y \in \mathcal{R}^ {W\times H}$ decrease from 1 (center line) to 0.5 (border) in a linear fashion: $$BM^{(i)}_x(x,y)=\left\{
\begin{array}{lp{8mm}<{\centering}l}
0& & {(x,y) \notin S^{(i)}_{BBox}}\\
1-k^{(i)}_x\left|x-x^{(i)}_{ctr}\right| & &{(x,y) \in S^{(i)}_{BBox}}
\end{array} \right. ,$$
$$BM^{(i)}_y(x,y)=\left\{
\begin{array}{lp{8mm}<{\centering}l}
0& & {(x,y) \notin S^{(i)}_{BBox}}\\
1-k^{(i)}_y\left|y-y^{(i)}_{ctr}\right| & &{(x,y) \in S^{(i)}_{BBox}}
\end{array} \right. ,$$
where $k^{(i)}_x$ and $k^{(i)}_y$ are the slopes of the linear functions in the $x$- and $y$-directions, calculated from the GT BBox’s width ($x^{(i)}_2-x^{(i)}_{1}$) and height ($y^{(i)}_2-y^{(i)}_{1}$): $$k^{(i)}_x= \frac{1}{ x^{(i)}_2-x^{(i)}_{1}}, ~~k^{(i)}_y= \frac{1}{ y^{(i)}_2-y^{(i)}_{1}}.$$ We take the sum of all the $BM^{(i)}_x$s and $BM^{(i)}_y$s, clipped at 1, to obtain the total $BM_x \in \mathcal{R}^ {W\times H}$ and $BM_y \in \mathcal{R}^ {W\times H}$ of one input image, respectively:
$$BM_x=\min \Big [ \sum\limits_{i=1}^I BM^{(i)}_x,1 \Big ], BM_y=\min \Big [\sum\limits_{i=1}^I BM^{(i)}_y, 1 \Big ],$$
where $I$ is the number of GT BBox(s) of one CT image. Then the $xy$-direction BM $BM_{xy}\in \mathcal{R}^ {W\times H}$ can be generated by calculating the square root of the element-wise product of $BM_x$ and $BM_y$: $$BM_{xy}=\sqrt{BM_{x}\odot BM_{y}},$$ where $\odot$ denotes element-wise multiplication.
We expect the BMs to promote network training and reduce FPs because they offer a soft continuous map of the lesion rather than a sharp binary mask, conveying more contextual information: not only the lesion’s location but also a graded confidence. These properties are favourable for detection tasks with irregular shapes and a limited number of GT BBox(s), like ULD.
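The construction above can be summarized in a short numpy sketch. This is a minimal illustration under our own naming (`bounding_maps` is not from the paper’s code); boxes are given as $(x_1, y_1, x_2, y_2)$ in pixel coordinates.

```python
import numpy as np

def bounding_maps(bboxes, W, H):
    """Build BM_x, BM_y and BM_xy (each H x W) from GT boxes (x1, y1, x2, y2)."""
    bm_x = np.zeros((H, W))
    bm_y = np.zeros((H, W))
    xs = np.arange(W)[None, :]   # column index = x coordinate
    ys = np.arange(H)[:, None]   # row index    = y coordinate
    for (x1, y1, x2, y2) in bboxes:
        kx, ky = 1.0 / (x2 - x1), 1.0 / (y2 - y1)          # slopes k_x, k_y
        xc, yc = (x1 + x2) / 2.0, (y1 + y2) / 2.0          # box center
        inside = (xs >= x1) & (xs <= x2) & (ys >= y1) & (ys <= y2)
        bm_x += np.where(inside, 1.0 - kx * np.abs(xs - xc), 0.0)
        bm_y += np.where(inside, 1.0 - ky * np.abs(ys - yc), 0.0)
    # clip the sums at 1, then combine via the element-wise geometric mean
    bm_x, bm_y = np.minimum(bm_x, 1.0), np.minimum(bm_y, 1.0)
    bm_xy = np.sqrt(bm_x * bm_y)
    return bm_x, bm_y, bm_xy
```

For a box $(10, 10, 30, 30)$, the resulting $BM_x$ equals 1 on the vertical centerline, 0.5 on the left and right borders, and 0 outside the box, matching the piecewise-linear definitions above.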
Anchor label in RPN {#sec:bmanchor}
-------------------
In the original two-stage anchor-based detection frameworks, RPN is trained to produce object bounds and objectness classification scores $\hat{C}_{ct} \in \mathcal{R}^ {\frac{W}{R}\times \frac{H}{R} \times 1}$ at each position (or anchors’ centerpoint), where $R$ is the output stride. During training, all the anchors are first divided into three categories of positive (lesion), negative and drop-out anchors based on their IoUs with the GT BBoxs. Then the GT labels of positive, negative and drop-out anchors are set as 1, 0, -1 respectively and only the positive and negative anchors are used for loss calculation in RPN training.
In our proposed method, we still use 0 and -1 as the GT class labels of negative and drop-out anchors, but we set the class label of positive anchors as their corresponding value in $BM_{xy}\in \mathcal{R}^ {W\times H} $. For size consistency, we first resize $BM_{xy}\in \mathcal{R}^ {W\times H} $ to $BM^r_{xy}\in \mathcal{R}^ {\frac{W}{R}\times \frac{H}{R} \times 1} $ to match the size of $\hat{C}_{ct} \in \mathcal{R}^ {\frac{W}{R}\times \frac{H}{R} \times 1}$. Therefore, the GT label of anchor $C_{anc}$ is given as:
$$C_{anc}(x,y,IoU_{anc})=\left\{
\begin{array}{lp{8mm}<{\centering}c}
0&&{IoU_{anc} \leq IoU_{min}}\\
-1& & {IoU_{min}<IoU_{anc}<IoU_{max}}\\
BM^{r}_{xy}[x,y] & &{IoU_{anc}\geq IoU_{max}}
\end{array} \right.,$$
where $(x,y)$ is the centerpoint coordinates of an anchor in $\hat{C}_{ct} \in \mathcal{R}^ {\frac{W}{R}\times \frac{H}{R} \times 1}$ and $IoU_{anc}$ denotes the IoU between the anchor and GT BBox.
**Anchor classification loss function:** For each anchor, the original RPN loss is the sum of the anchor classification loss and the BBox regression loss. However, the number of GT BBoxs in one CT slice is usually much smaller than in one natural image, so a proper RPN IoU threshold is hard to find in the ULD task: a higher IoU threshold aggravates the imbalanced-anchor problem, while a lower IoU threshold sets the GT labels of too many low-IoU anchors to $1$ and can lead to a high FP rate. Therefore, we replace the original anchor classification loss with our proposed anchor classification loss:
$$\mathcal{L}_{anc}=\left\{
\begin{array}{lp{1mm}<{\centering}c}
\mathcal{L}_2(\hat{C}_{anc},C_{anc})&&{IoU_{anc} \leq IoU_{min}\text{ or }IoU_{anc}\geq IoU_{max}}\\
0& & {IoU_{min}<IoU_{anc}<IoU_{max}}
\end{array} \right.,$$
where $\mathcal{L}_2$ is the norm-2 loss, $IoU_{min}$ and $IoU_{max}$ denote the negative and positive anchor thresholds, respectively.
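The graded anchor labeling and the masked $\mathcal{L}_2$ loss of this subsection can be sketched as follows. This is our own minimal illustration: the function names and the default thresholds (0.3 and 0.5) are assumptions for the example, not values stated here.

```python
import numpy as np

def anchor_gt_label(x, y, iou, bm_xy_r, iou_min=0.3, iou_max=0.5):
    """GT label C_anc of an anchor centred at grid cell (x, y).

    bm_xy_r is BM_xy resized to the RPN output grid (H/R x W/R).
    """
    if iou <= iou_min:
        return 0.0                  # negative anchor
    if iou < iou_max:
        return -1.0                 # drop-out anchor, excluded from the loss
    return float(bm_xy_r[y, x])     # positive anchor: graded soft label

def anchor_cls_loss(pred, label):
    """Per-anchor L2 classification loss; drop-out anchors contribute zero."""
    return 0.0 if label == -1.0 else (pred - label) ** 2
```

A positive anchor near a box border thus receives a label well below 1, which is how the scheme tolerates a lower IoU threshold without flooding the RPN with full-confidence low-IoU positives.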
![BM branch is in parallel with BBox branch and applied separately to each ROI.[]{data-label="fig:fig3_BM_branch"}](fig3_BM_branch.pdf)
Bounding map branch {#sec:bmbranch}
-------------------
As shown in Fig. \[fig:fig3\_BM\_branch\], the BM branch is similar to the mask branch in Mask R-CNN [@he2017maskrcnn]. It is parallel to the BBox branch and applied separately to each ROI. The branch consists of four $3\times 3$ convolution, two $2 \times 2$ deconvolution and one $1 \times 1$ convolution layers. It takes the ROI proposal feature map $F_{ROI} \in \mathcal{R}^ {32\times 32 \times 256}$ as input and predicts the $BM_x$ and $BM_y$ proposals, denoted by $\hat{BM}^{ROI}_x \in \mathcal{R}^ {128\times 128 \times 1}$ and $\hat{BM}^{ROI}_y\in \mathcal{R}^ {128\times 128 \times 1}$, respectively:$$[ \hat{BM}^{ROI}_x , \hat{BM}^{ROI}_y ] = H_{BM}(F_{ROI}),$$ where $H_{BM}$ denotes the BM branch. **BM branch loss function:** For each ROI, $BM_x$ and $BM_y$ are first cropped based on the ROI BBox and resized to the size of the BM branch output to obtain $BM^{ROI}_x \in \mathcal{R}^ {128\times 128 \times 1}$ and $BM^{ROI}_y\in \mathcal{R}^ {128\times 128 \times 1}$. Then we concatenate the two BMs into a multi-channel map and use it as the ground truth for the BM branch. Therefore, the loss function of the BM branch for each ROI can be defined as a norm-2 loss:
$$\mathcal{L}_{BM}=\mathcal{L}_2([\hat{BM}^{ROI}_x,\hat{BM}^{ROI}_y],[BM^{ROI}_x,BM^{ROI}_y]).$$
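The construction of the per-ROI ground truth (crop $BM_x$/$BM_y$ to the ROI box, resize to the $128\times128$ branch output, stack as two channels) can be sketched as below. The nearest-neighbour resize and the function name `roi_bm_targets` are our own simplifications of whatever interpolation the actual implementation uses.

```python
import numpy as np

def roi_bm_targets(bm_x, bm_y, roi, out=128):
    """Crop BM_x / BM_y to an ROI (x1, y1, x2, y2) and resize each crop to
    out x out with nearest-neighbour sampling; stack as a 2-channel GT map."""
    x1, y1, x2, y2 = roi

    def crop_resize(bm):
        patch = bm[y1:y2, x1:x2]
        # nearest-neighbour index maps from the output grid back to the patch
        ry = np.arange(out) * patch.shape[0] // out
        rx = np.arange(out) * patch.shape[1] // out
        return patch[np.ix_(ry, rx)]

    return np.stack([crop_resize(bm_x), crop_resize(bm_y)], axis=-1)
```

The resulting $128\times128\times2$ array plays the role of $[BM^{ROI}_x, BM^{ROI}_y]$ in the $\mathcal{L}_{BM}$ loss above, compared element-wise against the branch prediction.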
**Full loss function:** The full loss function of our method is given as: $$\mathcal{L}_{full}=\frac{1}{M}\sum\limits_{m=1}^M(\mathcal{L}^{(m)}_{reg}+\mathcal{L}^{(m)}_{anc})+\frac{1}{N}\sum\limits_{n=1}^N(\mathcal{L}^{(n)}_{B}+\mathcal{L}^{(n)}_{BM}),$$ where $\mathcal{L}^{(m)}_{reg}$ is the original box regression loss (in RPN) of the $m^{th}$ training (negative $\&$ positive) anchor and $\mathcal{L}^{(n)}_{B}$ is the BBox branch loss of the $n^{th}$ positive ROI in Faster R-CNN [@ren2015fasterrcnn]. $\mathcal{L}^{(m)}_{anc}$ and $\mathcal{L}^{(n)}_{BM}$ denote our anchor classification loss for the $m^{th}$ training anchor and our BM branch loss for the $n^{th}$ positive ROI. $M$ and $N$ are the numbers of training anchors and positive ROIs, respectively.
[p[55mm]{}p[25mm]{}<p[25mm]{}<p[25mm]{}<p[25mm]{}<p[25mm]{}<p[25mm]{}<]{} **FPPI**&**$@0.5$**&**$@1$**&**$@2$**&**$@3$**&**$@4$**\
Faster R-CNN [@ren2015fasterrcnn]&57.17 & 68.82 &74.97 &78.48&82.43\
Faster R-CNN w/ Ours&63.96 (**6.79**$\uparrow$) & 74.43 (**5.61**$\uparrow$) &79.80 (**4.83**$\uparrow$) &82.55 (**4.07**$\uparrow$)&86.28 (**3.85**$\uparrow$)\
3DCE (9 slices) [@yan20183DCE] &59.32 &70.68&79.09&-&84.34\
3DCE (9 slices) w/ Ours &64.38 (5.06$\uparrow$)&75.55 (4.87$\uparrow$) &82.74 (3.65$\uparrow$)& 83.77 (-)& 87.78 (3.44$\uparrow$)\
3DCE (27 slices) [@yan20183DCE] &62.48&73.37&80.70 &-&85.65\
3DCE (27 slices) w/ Ours &66.75 (4.27$\uparrow$)&76.71 (3.34$\uparrow$) &83.75 (3.05$\uparrow$)& 86.01 (-)& 88.59 (2.94$\uparrow$)\
FPN-3DCE (9 slices) [@lin2017fpn]& 64.25&74.41&81.90&85.02&87.21\
FPN-3DCE (9 slices) w/ Ours&69.09 (4.84$\uparrow$)&78.02 (3.61$\uparrow$) &85.35 (3.45$\uparrow$)& 88.59 (3.57$\uparrow$)& 90.49 (3.28$\uparrow$)\
MVP-Net (3 slices) [@li2019mvp]&70.01&78.77&84.71&87.58&89.03\
MVP-Net (3 slices) w/ Ours&**73.32** (3.31$\uparrow$)&**81.24** (2.47$\uparrow$) &**86.75** (2.04$\uparrow$)& **89.54** (1.96$\uparrow$)& **90.71** (1.68$\uparrow$)\
FCOS (anchor-free) [@tian2019fcos]&37.78&54.84&64.12&69.41&77.84\
Objects as points (anchor-free) [@zhou2019objectsaspotints]&34.87&43.58&52.41&59.13&64.01\
Experiments
===========
Dataset and setting
-------------------
We conduct experiments using the DeepLesion dataset [@yan18deeplesion], a large-scale CT dataset with 32,735 lesions on 32,120 axial slices from 10,594 CT studies of 4,427 unique patients. Different from existing datasets that typically focus on one type of lesion, DeepLesion contains a variety of lesions with a wide range of diameters (from 0.21 to 342.5 mm). We rescale the 12-bit CT intensity range to \[0,255\] with the different window ranges proposed in the different frameworks. Every CT slice is resized to $800\times800$, and the slice intervals are interpolated to 2 mm.[^1] We conduct experiments on the official training ($70\%$), validation ($15\%$), and testing ($15\%$) sets. The number of FPs per image (FPPI) is used as the evaluation metric, and we mainly compare the sensitivity at 4 FPPI for brevity, as in [@li2019mvp].
We only use horizontal flipping for training data augmentation and train all models with stochastic gradient descent (SGD) for $15$ epochs. The base learning rate is set to $0.002$ and decreased by a factor of $10$ after the $12^{th}$ and $14^{th}$ epochs. The models with our method use a lower positive-anchor IoU threshold of 0.5; the other network settings are the same as in the corresponding original models.
Detection performance
---------------------
We perform experiments with three state-of-the-art two-stage anchor-based detection methods to evaluate the effectiveness of our approach. We also use two state-of-the-art anchor-free natural-image detection methods for comparison.
- **3DCE.** The 3D context enhanced region-based CNN (3DCE) [@yan20183DCE] is trained with 9 or 27 CT slices to form the 9-slice or 27-slice 3DCE.
- **FPN-3DCE.** The 3DCE [@yan20183DCE] is re-implemented with the FPN backbone [@lin2017fpn] and trained with 9 CT slices to form the 9-slice FPN-3DCE. The other network setting is consistent with the baseline 3DCE.
- **MVP-Net.** The multi-view FPN with position-aware attention network (MVP-Net) [@li2019mvp] is trained with 3 CT slices to form the 3-slice MVP-Net.
- **Faster R-CNN.** Similar to MVP-Net [@li2019mvp], we rescale an original 12-bit CT image with window ranges of \[50,449\], \[-505,1980\] and \[446,1960\] to generate three rescaled CT images. Then we concatenate the three rescaled CT images into three channels to train a Faster R-CNN. The other network settings are the same as the baseline MVP-Net.
- **FCOS & Objects as points.** The experiment settings for the anchor-free methods, namely Fully Convolutional One-Stage Object Detection (FCOS) [@tian2019fcos] and objects as points [@zhou2019objectsaspotints], are the same as the baseline Faster R-CNN.
As shown in Table \[results\], our method brings promising detection performance improvements for all baselines uniformly at different FPPIs. The improvements for Faster R-CNN [@ren2015fasterrcnn], 9-slice 3DCE, 27-slice 3DCE and 9-slice FPN-3DCE are more pronounced than that for MVP-Net. This is because MVP-Net is designed for reducing the FP rate in ULD and has already achieved relatively high performance. Also, the anchor-free methods yield unsatisfactory results; we think the main reason is that they completely discard the anchor and two-stage mechanisms. Fig. \[fig:fig4\_vis\_results\] presents a case illustrating the effectiveness of our method in improving the performance of Faster R-CNN.
![ High-classification-score results (above 0.9) of Faster R-CNN with or without our method on a test image. Green and red boxes correspond to the GT BBox and predicted BBoxs, respectively. The classification scores are marked in the images.[]{data-label="fig:fig4_vis_results"}](fig4_vis_results.pdf)
Ablation study
--------------
We provide an ablation study of the two key components of the proposed approach, i.e., with vs. without using $BM_{xy}$ in stage-1 and with vs. without using the BM branch ($BM_{x} ~\&~ BM_{y}$) in stage-2. We also compare the effectiveness of linear BMs and Gaussian BMs. As shown in Table \[ablation\_study\], using $BM_{xy}$ as the class label for positive anchors, we obtain a 2.27% improvement over the Faster R-CNN [@ren2015fasterrcnn] baseline. Further adding a BM branch to introduce extra pixel-wise supervision accounts for another 1.14% improvement. Using both $BM_{xy}$ and the BM branch gives the best performance. Using Gaussian BMs instead of linear BMs does not bring improvement. Our method has only a minor influence on the inference time, measured on a Titan XP GPU.
[p[30mm]{}<p[20mm]{}<p[20mm]{}<p[25mm]{}<|p[20mm]{}<p[20mm]{}<|p[30mm]{}<]{} Faster R-CNN [@ren2015fasterrcnn]&$BM_{xy}$&$BM_x \& BM_y$&Gaussian $BM$ &$FPPI=2$&$FPPI=4$&Inference (s/img)\
$\checkmark$&&&&74.97&78.48&0.3912\
$\checkmark$&$\checkmark$&&&77.47&80.69&0.3946\
$\checkmark$&$\checkmark$&$\checkmark$&&**79.80**&**82.55**&0.3946\
$\checkmark$&$\checkmark$&$\checkmark$&$\checkmark$&78.44&[82.37]{}&0.4004\
Conclusion
==========
In this paper, we study how to overcome two limitations of two-stage anchor-based ULD methods: the imbalanced anchors in the first stage and the insufficient supervision in the second stage. We first propose BMs to represent a BBox along three different directions, and then use them to replace the original binary GT labels of positive anchors in stage-1 and to introduce additional supervision through a new BM branch in stage-2. We conduct experiments with several state-of-the-art baselines on the DeepLesion dataset, and the results show that the performance of all the baselines is boosted by our method.
[^1]: We use a CUDA toolkit in [@huang20193d] to speed up this process.
|
---
abstract: 'The maximal subalgebras of the finite dimensional simple special Jordan superalgebras over an algebraically closed field of characteristic $0$ are studied. This is a continuation of a previous paper by the same authors about maximal subalgebras of simple associative superalgebras, which is instrumental here.'
author:
- |
Alberto Elduque[^1]\
[Departamento de Matemáticas]{}\
[Universidad de Zaragoza]{}\
[50009, Zaragoza. Spain]{}
- |
Jesús Laliena[${}^*$]{} and Sara Sacristán[${}^*$]{}\
[Departamento de Matemáticas y Computación]{}\
[Universidad de La Rioja]{}\
[26004, Logroño. Spain]{}
title: 'Maximal subalgebras of Jordan superalgebras.'
---
Introduction.
=============
Finite dimensional simple Jordan superalgebras over an algebraically closed field of characteristic zero were classified by V. Kac in 1977 [@Kac], with one missing case that was later described by I. Kantor in 1990 [@Kan]. More recently M. Racine and E. Zelmanov [@RaZe] gave a classification of finite dimensional simple Jordan superalgebras over arbitrary fields of characteristic different from 2 whose even part is semisimple. Later, in 2002, C. Martínez and E. Zelmanov [@Ma-Ze] completed the remaining cases, where the even part is not semisimple.
Here we are interested in describing the maximal subalgebras of the finite dimensional simple special Jordan superalgebras with semisimple even part over an algebraically closed field of characteristic zero. Precedents of this work are the papers of E. Dynkin in 1952 (see [@Dy1], [@Dy2]), where the maximal subgroups of some classical groups and the maximal subalgebras of semisimple Lie algebras are classified, the papers of M. Racine (see [@Ra1], [@Ra2]), who classifies the maximal subalgebras of finite dimensional central simple algebras belonging to one of the following classes: associative, associative with involution, alternative and special and exceptional Jordan algebras; and the paper by the first author in 1986 (see [@El]), solving the same question for central simple Malcev algebras.
In a previous work [@El-La-Sa], the authors described the maximal subalgebras of finite dimensional central simple superalgebras which are either associative or associative with superinvolution. The results obtained there will be useful in the sequel. The maximal subalgebras of the ten dimensional Kac Jordan superalgebra are determined in [@El-La-SaKAC].
First of all, let us recall some basic facts. A *superalgebra* over a field $F$ is just a ${{\mathbb Z}}_2$-graded algebra $A=A{_{\bar 0}}\oplus A{_{\bar 1}}$ over $F$ (so $A_\alpha
A_\beta\subseteq A_{\alpha+\beta}$ for $\alpha,\beta\in{{\mathbb Z}}_2$). An element $a$ in $A_\alpha$ ($\alpha=\bar 0,\bar 1$) is said to be *homogeneous* of degree $\alpha$ and the notation $\bar
a=\alpha$ is used. A superalgebra is said to be *nontrivial* if $A{_{\bar 1}}\ne 0$ and *simple* if $A^2\ne 0$ and $A$ contains no proper graded ideal.
An *associative superalgebra* is just a superalgebra that is associative as an ordinary algebra. Here are some important examples:
1. $A=M_n(F)$, the algebra of $n\times n$ matrices over $F$, where
$A_{\bar{0}} =\left\{ \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix}
: a\in M_r(F), b\in M_s(F)\right\},$
$ A_{\bar{1}}= \left\{ \begin{pmatrix} 0 & c\\ d & 0\end{pmatrix}
: c\in M_{r\times s}(F), d\in M_{s\times r}(F)\right\},$
with $r+s =n.$ This superalgebra is denoted by $M_{r,s}(F).$
2. The subalgebra $A=A_{\bar{0}}\oplus A_{\bar{1}}$ of $M_{n,n}(F)$, with
$A_{\bar{0}}= \left \{ \begin{pmatrix} a & 0\\ 0 & a \end{pmatrix}
: a\in M_n(F)\right\}, \quad A_{\bar{1}} = \left\{ \begin{pmatrix} 0 & b\\
b & 0
\end{pmatrix} : b\in M_n(F)\right \}.$
This superalgebra is denoted by $Q_n(F).$
Over an algebraically closed field, these two previous examples exhaust the simple finite dimensional associative superalgebras, up to isomorphism.
3. The *Grassmann superalgebra*: $$G= alg \langle 1,e_1, e_2, \dots \ : e_i^2=0= e_ie_j +e_je_i \
\forall i,j=1,2,\dots \rangle$$ over a field $F$, with the grading $G=G_{\bar{0}}\oplus
G_{\bar{1}},$ where $G_{\bar{0}}$ is the vector space spanned by the products of an even number of $e_i$’s, while $G{_{\bar 1}}$ is the vector subspace spanned by the products of an odd number of $e_i$’s. (The product of zero $e_i$’s is, by convention, equal to $1$.)
Following standard conventions, given a superalgebra $A=A{_{\bar 0}}\oplus
A{_{\bar 1}}$, the tensor product $G\otimes A$, where $G$ is the Grassmann superalgebra, becomes a superalgebra with the product given by $(g\otimes a)(h\otimes b)=(-1)^{\bar a\bar h}gh\otimes ab$ for homogeneous elements $g,h\in G$ and $a,b\in A$, and grading given by $(G\otimes A){_{\bar 0}}=G{_{\bar 0}}\otimes A{_{\bar 0}}\,\oplus\,
G{_{\bar 1}}\otimes A{_{\bar 1}}$, $(G\otimes A){_{\bar 1}}=G{_{\bar 0}}\otimes
A{_{\bar 1}}\,\oplus\, G{_{\bar 1}}\otimes A{_{\bar 0}}$. Its even part $G(A)=(G\otimes A){_{\bar 0}}$ is called the *Grassmann envelope* of the superalgebra $A$. Moreover, the superalgebra $A$ is said to be a superalgebra in a fixed variety if $G(A)$ is an ordinary algebra (over $G{_{\bar 0}}$) in this variety. In particular, $A$ is a Jordan superalgebra if and only if $G(A)$ is a Jordan algebra.
It then follows that over fields of characteristic $\ne 2,3$, a superalgebra $J=J{_{\bar 0}}\oplus J{_{\bar 1}}$ is a Jordan superalgebra if and only if for any homogeneous elements $a,b,c$ in $J$: $$L_a b= (-1)^{\bar a\bar b} L_b a,$$ where $L_a$ denotes the multiplication by $a$, and $$\begin{split}
L_{a}L_{b}L_{c}&
+ (-1)^{\bar a\bar b + \bar a\bar c + \bar b\bar c}L_{c}L_{b}L_{a}
+ (-1)^{\bar b\bar c }L_{(ac)b} \\
&=L_{ab}L_{c}+ (-1)^{\bar b\bar c}L_{ac} L_{b}+
(-1)^{\bar a\bar b+ \bar a\bar c}L_{bc}L_{a} \\
&=(-1)^{\bar a\bar b}L_{b}L_{a} L_{c}+
(-1)^{\bar a\bar c+ \bar b\bar c}L_{c}L_{a}L_{b}
+ L_{a(bc)} \\
&=(-1)^{\bar a\bar c+ \bar b\bar c}L_{c}L_{ab}
+ (-1)^{\bar a\bar b}L_{b} L_{ac}+
L_{a}L_{bc} .
\end{split}$$
Let $A$ be a superalgebra. A *superinvolution* is a graded linear map $* \colon A \to A$ such that $x^{**} =x$, and $(xy)^*=
(-1)^{\bar x\bar y}y^*x^*$, for any homogeneous elements $x,y$ in $A$.
The simplest examples of Jordan superalgebras over a field of characteristic $\ne 2$ are the following:
1. Let $A=A_{\bar{0}}+A_{\bar{1}}$ be an associative superalgebra. Replace the associative product in $A$ with the new one: $x\circ y = {\frac{1} {2}}(xy + (-1)^{\bar x\bar y}yx)$. With this product $A$ becomes a Jordan superalgebra, denoted by $A^+$.
2. Let $A$ be an associative superalgebra with superinvolution $*$. Then the subspace of hermitian elements $H(A,*) = \{ a \in A :
a^* =a\}$ is a subalgebra of $A^{+}$.
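That $H(A,*)$ is indeed closed under the Jordan product $\circ$ is a direct consequence of the superinvolution axioms; for homogeneous $a,b\in H(A,*)$,

```latex
(a\circ b)^* = \tfrac{1}{2}\bigl((ab)^* + (-1)^{\bar a\bar b}(ba)^*\bigr)
             = \tfrac{1}{2}\bigl((-1)^{\bar a\bar b}\,b^*a^* + a^*b^*\bigr)
             = \tfrac{1}{2}\bigl((-1)^{\bar a\bar b}\,ba + ab\bigr) = a\circ b,
```

so $a\circ b\in H(A,*)$.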
In fact, if a Jordan superalgebra $J$ is a subalgebra of $A^+$ for an associative superalgebra $A$, $J$ is said to be *special*; otherwise $J$ is said to be *exceptional*. Any graded Jordan homomorphism $\sigma \colon J \to A^+$ is called a *specialization*, so $J$ is special if and only if there exists a faithful specialization of $J$. Both examples 1) and 2) above are special Jordan superalgebras.
A specialization $u \colon J \to U^+$ into an associative superalgebra $U$ is said to be *universal* if the subalgebra of $U$ generated by $u(J)$ is $U$, and for any arbitrary specialization $\varphi \colon J \to A^+$, there exists a homomorphism of associative superalgebras $\chi \colon U \to A$ such that $\varphi =\chi \circ u$. The superalgebra $U$ is called the *universal enveloping algebra* of $J$.
*In the sequel only finite dimensional Jordan superalgebras over an algebraically closed field of characteristic zero will be considered.*
We recall the classification of the nontrivial simple Jordan superalgebras given by V. Kac [@Kac] and completed by I. Kantor [@Kan].
1. $J=K_3,$ the *Kaplansky superalgebra*:
$J_{\bar{0}}=Fe,\quad J_{\bar{1}}=Fx+Fy,$ $e^2=e, \quad e \cdot x=\frac{1}{2} x,
\quad e \cdot y = \frac{1}{2}y, \quad x \cdot y=e$.
2. The one-parameter family of superalgebras $J=D_t,$ with $t\in F \setminus\{0\}$:
$J_{\bar{0}}=Fe +Ff,\quad J_{\bar{1}} = Fu+Fv$,
$e^2 =e, \quad f^2=f, \quad e \cdot f=0,\quad
e \cdot u=\frac{1}{2}u, \quad e \cdot v=\frac{1}{2}v, \quad
f \cdot u=\frac{1}{2}u,$
$ f \cdot v=\frac{1}{2}v, \quad u \cdot v=e+tf.$
Note that $D_t\cong D_{1/t}$, for any $t\neq 0.$
3. $J=K_{10},$ the *Kac superalgebra*. This is a ten dimensional Jordan superalgebra with six dimensional even part. (See [@Hogben-Kac], [@Ki], [@N-B-E] or [@El-La-SaKAC] for details).
4. Let $V= V_{\bar{0}}\oplus V_{\bar{1}}$ be a graded vector space over $F,$ and let $(\ , \ )$ be a nondegenerate supersymmetric bilinear superform on $V,$ that is, a nondegenerate bilinear map which is symmetric on $V_{\bar{0}},$ skewsymmetric on $V_{\bar{1}},$ and $V_{\bar{0}}$ and $V_{\bar{1}}$ are orthogonal relative to $(\ , \ ).$ Now consider $J_{\bar{0}}= Fe+ V_{\bar{0}},$ $J_{\bar{1}}=V_{\bar{1}}$ with $e
\cdot x=x,$ $v \cdot w=(v,w)e,$ for any $x\in J$ and $v,w\in V$. This superalgebra $J$ is called the *superalgebra of a superform*. If $\dim V_{\bar{0}} =1$ and $\dim V_{\bar{1}}=2,$ the superalgebra of a superform is isomorphic to $D_t$ with $t=1.$
5. $A^{+},$ with $A$ a finite dimensional simple associative superalgebra, that is, either $A=M_{r,s}(F)$ or $A=Q_n(F)$. Note that $M_{1,1}(F)^+$ is isomorphic to $D_{-1}$.
6. $H(A,*),$ where $A$ and $*$ are of one of the following types:
=1em i) $A=M_{n,n}(F),$ $*\colon \begin{pmatrix} a &
b \\ c & d\end{pmatrix} \to \begin{pmatrix} d^t & -b^t \\ c^t &
a^t \end{pmatrix}.$
ii\) $A=M_{n,2m}(F),$ $* \colon \begin{pmatrix} a & b \\
c & d \end{pmatrix} \to \begin{pmatrix} a^t & c^tq \\
-q^tb^t & q^td^tq \end{pmatrix},$ where $q =\begin{pmatrix} 0 &
I_m
\\ -I_m & 0 \end{pmatrix}.$
The first one is called the [*transpose superinvolution*]{} and $H(A,*)$ is denoted then by $p(n),$ and the second one the [*orthosymplectic superinvolution*]{} and $H(A,*)$ is denoted in this case by $osp_{n,2m}$. The isomorphisms $D_{-2} \cong
D_{-1/2} \cong osp_{1,2}$ are easy to prove.
7. Let $G$ be the Grassmann superalgebra. Consider the following product in $G$: $$\{f,g\} = \sum_{i=1}^n (-1)^{\bar f} \frac{\partial f}{\partial
e_i}\frac{\partial g}{\partial e_i},$$ and build the vector space, sum of two copies of $G$: $J=G+Gx$, with the product in $J$ given by $$a(bx) = (ab)x, \quad (bx)a=(-1)^{\bar a}(ba)x, \quad
(ax)(bx)=(-1)^{\bar b}\{a,b\}.$$
Finally take the following grading in $J$: $J_{\bar{0}}=G_{\bar{0}}+G_{\bar{1}}x,
J_{\bar{1}}=G_{\bar{1}}+G_{\bar{0}}x.$ This superalgebra is called the [*Kantor double of the Grassmann algebra*]{} or the *Kantor superalgebra*.
The ten dimensional Kac superalgebra and the Kantor superalgebra are the only exceptional superalgebras in the above list (see [@McC2] and [@Sht]). Note that the Kaplansky superalgebra is the only nonunital simple superalgebra in the list.
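As an illustration of these definitions, the isomorphism $D_t\cong D_{1/t}$ noted in item 2) can be verified directly. If $\{e',f',u',v'\}$ denotes the analogous basis of $D_{1/t}$, the linear map $\varphi$ determined by $\varphi(e)=f'$, $\varphi(f)=e'$, $\varphi(u)=u'$, $\varphi(v)=tv'$ swaps the idempotents, and on the odd part:

```latex
\varphi(u)\cdot\varphi(v) = u'\cdot(tv') = t\Bigl(e'+\tfrac{1}{t}\,f'\Bigr)
                          = f' + t\,e' = \varphi(e+tf) = \varphi(u\cdot v),
```

while $\varphi(e)\cdot\varphi(u)=f'\cdot u'=\tfrac12 u'=\varphi(e\cdot u)$, and similarly for the remaining products, so $\varphi$ is an isomorphism of superalgebras.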
Let $J$ be a nonunital Jordan superalgebra. Its *unital hull* is defined to be $H_F(J)= J + F\cdot 1$, where $1$ is a formal identity element and $J$ is an ideal of $H_F(J)$. In [@Ze], E. Zelmanov obtained a classification theorem for finite dimensional semisimple Jordan superalgebras.
\[th:semisimple\] *(E. Zelmanov)*
Let $J$ be a finite dimensional Jordan superalgebra over a field $F$ of characteristic not 2. Then $J$ is semisimple if and only if $J$ is a direct sum of simple Jordan superalgebras and unital hulls $H_K(J_1 \oplus \dots \oplus J_r)= (J_1 \oplus \dots \oplus J_r) + K
\cdot 1$ where $J_i$ are non unital simple Jordan superalgebras over an extension $K$ of $F$.
The maximal subalgebras of the Kac Jordan superalgebra (type 3) above) have been determined in [@El-La-SaKAC]. Our purpose in this paper is to describe the maximal subalgebras of the simple special Jordan superalgebras (types 1), 2), 4), 5) and 6)). This is achieved completely for the simple Jordan superalgebras of types 1), 2) and 4). For types 5) and 6) the results are not complete and some questions arise.
In what follows the word subalgebra will always be used in the graded sense, so any subalgebra is graded.
First note that any maximal subalgebra $B$ of a simple unital Jordan superalgebra $J$, with identity element $1$, contains the identity element. Indeed, if $1\notin B$, the subalgebra generated by $B$ and $1$, namely $B+F \cdot 1$, is the whole $J$ by maximality. But then $J\cdot B=(B+F\cdot 1)\cdot B\subseteq B$, so $B$ is a nonzero graded ideal of $J$, a contradiction with $J$ being simple. Therefore $1\in B$.
The paper is organized as follows. Section 2 deals with the easy problem of determining the maximal subalgebras of the Kaplansky superalgebra, the superalgebras $D_t$ and the Jordan superalgebras of superforms. Then Section 3 will collect some known results on universal enveloping algebras and will put them in a way suitable for our purposes. Sections 4 and 5 will be devoted, respectively, to the description of the maximal subalgebras of the simple Jordan superalgebras $A^+$ and $H(A,*)$, for a simple finite dimensional associative algebra $A$, and a superinvolution $*$.
The easy cases.
===============
Let us first describe the maximal subalgebras of the simple Jordan superalgebras of types 1), 2), and 4) in Section 1. The proof is straightforward.
\[th:easy\]
1. Let $J=K_3$ be the Kaplansky superalgebra. A subalgebra $M$ of $J$ is maximal if and only if $M=J_{\bar{0}}\oplus M_{\bar{1}}$ where $M_{\bar{1}}$ is a vector subspace of $J_{\bar{1}}$ with $\dim
M_{\bar{1}}=1.$
2. Let $J=D_t$ with $t\neq 0.$ A subalgebra $M$ of $J$ is maximal if and only if either $M= J_{\bar{0}} \oplus
M_{\bar{1}}$ where $M_{\bar{1}}$ is a vector subspace of $J_{\bar{1}}$ with $\dim M_{\bar{1}} =1,$ or if $t=1,$ $M=F \cdot
1+J_{\bar{1}}.$
3. Let $J$ be the Jordan superalgebra of a nondegenerate bilinear superform. A subalgebra $M$ of $J$ is maximal if and only if either $M= J_{\bar{0}} \oplus M_{\bar{1}}$ where $M_{\bar{1}}$ is a vector subspace and $\dim M_{\bar{1}}= \dim
J_{\bar{1}}-1,$ or $M= (F \cdot 1+M_{\bar{0}})\oplus J_{\bar{1}}$ where $M_{\bar{0}}$ is a vector subspace and $\dim M_{\bar{0}}= \dim
V_{\bar{0}}-1.$
Note that item 2) in Theorem \[th:easy\] above covers the maximal subalgebras of $M_{1,1}(F)^+\cong D_{-1}$ and of $osp_{1,2}\cong D_{-2}$.
Universal enveloping algebras.
==============================
In order to determine the maximal subalgebras of the remaining simple special Jordan superalgebras, some previous results are needed.
Given an associative superalgebra $A$ and a subalgebra $B$ of the Jordan superalgebra $A^+$, $B'$ will denote the (associative) subalgebra of $A$ generated by $B$.
\[pr:DtinQnF\] There is no unital subalgebra $B$ of the Jordan superalgebra $Q_n(F)^+$ ($n\geq 2$), isomorphic to $D_t$ ($t\ne 0$), and with $B'=Q_n(F)$.
Write $A=Q_n(F)$, and take a basis $\{ e,f,u,v\}$ of $B\cong D_t$ as in Section 1. Since $B$ is a unital subalgebra, $e+f=1_A$. Therefore, as $e^2=e$, $f^2=f$ and $ef=fe=(1_A-e)e=0$, we may also assume that, for suitable $s$ and $m$ with $s+m=n$, $$e= \begin{pmatrix} I_s & 0 &0 &0 \\
0 & 0 &0 &0\\
0 & 0 &I_s &0\\
0 & 0 &0 &0
\end{pmatrix} ,\quad f= \begin{pmatrix} 0& 0 &0 &0 \\
0 & I_m &0 &0\\
0 & 0 &0&0\\
0 & 0 &0 &I_m
\end{pmatrix}.$$
Consider the Peirce decomposition associated to the idempotents $e$ and $f$, and note that $u,v \in A_{\bar{1}} \cap
(Q_n(F)^+)_{1/2}(e) \cap (Q_n(F)^+)_{1/2}(f)$. Hence $$u= \begin{pmatrix} 0 & 0 &0 &a \\
0 & 0 &b &0\\
0 & a &0 &0\\
b & 0 &0 &0
\end{pmatrix} \mbox{ and } \thickspace v= \begin{pmatrix} 0& 0 &0 &c \\
0 & 0 &d &0\\
0 & c &0&0\\
d & 0 &0 &0
\end{pmatrix},$$ for some $a,c\in M_{s\times m}(F)$, $b,d\in M_{m\times s}(F)$. But this contradicts $B^{\prime}=A$, because, for instance, $$\begin{pmatrix} 0 & 0 &x &0 \\
0 & 0 &0 &0\\
x & 0 &0 &0\\
0 & 0 &0 &0
\end{pmatrix} \notin B^{\prime}, \thickspace \mbox{ for } 0\ne x \in M_{s\times s}(F).$$ This finishes the proof.
Now, if $Q_n(F)$ is replaced by $M_{p,q}(F)$, some knowledge of the universal enveloping algebra of $D_t$ is needed.
I. P. Shestakov determined $U(D_t)$ (see [@Ma-Ze2]), which is intimately related to the orthosymplectic Lie superalgebra $osp(1,2),$ that is, the superalgebra whose elements are the skewsymmetric matrices of $M_{1,2}(F)$ relative to the orthosymplectic superinvolution, with Lie bracket $[a,b]=
ab-(-1)^{\bar a \bar b}ba$: $$osp(1,2) =\left\{ \begin{pmatrix} 0 & \beta &\alpha \\ -\alpha & \gamma & \mu
\\ \beta & \nu & -\gamma \end{pmatrix} \ : \ \alpha, \beta, \mu, \gamma , \nu \in
F \right\}.$$ The following elements in $osp(1,2)$, which form a basis, will be considered throughout: $$h= \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1
\end{pmatrix}, \thickspace e= \begin {pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0
\end{pmatrix}, \thickspace f= \begin {pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0
\end{pmatrix},$$
$$x=\begin {pmatrix} 0 & 0 & -1 \\ 1 & 0 & 0 \\ 0 & 0 & 0
\end{pmatrix}, \thickspace y= \begin {pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0
\end{pmatrix},$$ which satisfy $[h,e]=2e, \quad [h,f]=-2f, \quad [h,x]=x, \quad
[h,y]=-y, \quad [e,y]=x, \quad [f,x]=y, \quad [x,x]=-2e, \quad
[y,y]=2f, \quad [x,y]=xy+yx=h.$
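These relations can be checked by direct matrix multiplication; for instance, for $[x,x]$ and $[x,y]$:

```latex
x^2 = \begin{pmatrix} 0&0&-1\\ 1&0&0\\ 0&0&0 \end{pmatrix}^2
    = \begin{pmatrix} 0&0&0\\ 0&0&-1\\ 0&0&0 \end{pmatrix} = -e,
\qquad [x,x] = x^2 + x^2 = -2e,

xy+yx = \begin{pmatrix} -1&0&0\\ 0&1&0\\ 0&0&0 \end{pmatrix}
      + \begin{pmatrix} 1&0&0\\ 0&0&0\\ 0&0&-1 \end{pmatrix}
      = \begin{pmatrix} 0&0&0\\ 0&1&0\\ 0&0&-1 \end{pmatrix} = h.
```

The remaining brackets are computed in the same way.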
Then $U(D_t)$ is given by
\[th:Ivan\]*(I. Shestakov)* If $t\neq 0, \pm 1,$ then the universal associative enveloping of $D_t$ is $(U(D_t),\iota)$ where $U(D_t) = U(osp(1,2)) /\operatorname{ideal}\bigl\langle (xy-yx)^2+(xy-yx) + \frac{t}{(1+t)^2} \bigr\rangle$ and $$\begin{aligned}
{3}
\iota: \thickspace &D_t \quad & \longrightarrow & \quad U(D_t)\\
&e \quad &\longmapsto & \quad \iota (e)= \frac{1}{t-1}(t1+ (1+t)\overline{(xy-yx)}),\\
&f \quad &\longmapsto & \quad \iota (f)= \frac{1}{1-t}(1+ (1+t)\overline{(xy-yx)}),\\
&u \quad &\longmapsto & \quad \iota (u)= 2\bar{x},\\
&v \quad &\longmapsto & \quad \iota (v)= -(1+t)\bar{y},\end{aligned}$$ where $\bar {z}$ denotes the class of $z\in osp(1,2)$ modulo the ideal generated by $ (xy-yx)^2+(xy-yx) +
\frac{t}{(1+t)^2} $.
Here $U(osp(1,2))$ denotes the universal enveloping algebra of the Lie superalgebra $osp(1,2)$ (see [@KacLie section 1.1.3]).
Note that the element $a=\overline{xy-yx}\in U(D_t)$ satisfies $a^2+a+\frac{t} {(1+t)^2}=0,$ hence if $a^\prime = -(1+t)a$, ${a^\prime}^2-(1+t)a^\prime +t =0$ and in this way the original version of Shestakov’s Theorem is recovered.
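The quadratic relation for $a^\prime$ follows by substituting $a^\prime=-(1+t)a$ and using $a^2=-a-\frac{t}{(1+t)^2}$:

```latex
{a^\prime}^2 = (1+t)^2 a^2 = (1+t)^2\Bigl(-a-\tfrac{t}{(1+t)^2}\Bigr)
             = (1+t)\,a^\prime - t,
```

which is exactly ${a^\prime}^2-(1+t)a^\prime+t=0$.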
The even part of $osp(1,2)$, which is the span of the elements $h,e,f$ above, is isomorphic to the three dimensional simple Lie algebra $ sl(2,F)$, so given any finite dimensional irreducible $U(osp(1,2))$-module $V$, by restriction $V$ is also a module for $sl(2,F)$. The well-known representation theory of $sl(2,F)$ shows that $h$ acts diagonally on $V$ (see [@Hum 7.2 Corollary]), its eigenvalues constitute a sequence of integers, symmetric relative to $0$, and hence $V$ is the direct sum of the subspaces $V_m =\{ v\in V : h \cdot v=mv\}$ with $m\in {{\mathbb Z}}$.
By finite dimensionality, there exists a largest nonnegative integer $m$ with $V_m\neq 0$. Pick a nonzero element $v\in V_m$ (a highest weight vector). Changing the parity in $V$ if necessary, this element $v$ can be assumed to be even.
Since $h(ev)=[h,e]v+e(hv)=(m+2)ev$, it follows that $ev=0$, and since $h(xv)=[h,x]v+x(hv)=(m+1)xv$, it follows that $xv=0$ too. Let ${{\mathfrak g}}=osp(1,2)$, then ${{\mathfrak g}}={{\mathfrak g}}_{-}\oplus{{\mathfrak h}}\oplus{{\mathfrak g}}_{+}$, where ${{\mathfrak g}}_+=Fe+Fx$, ${{\mathfrak h}}=Fh$, and ${{\mathfrak g}}_-=Ff+Fy$, and let $W=W_0=Fw$ be the module over ${{\mathfrak h}}+{{\mathfrak g}}_+$ given by $hw=mw$, $ew=0$, and $xw=0$. The map $W \longrightarrow V$ such that $\lambda w \longmapsto
\lambda v$ for any $\lambda \in F$ is a homomorphism of $({{\mathfrak h}}+{{\mathfrak g}}_+)$-modules, which can be extended to a homomorphism of ${{\mathfrak g}}$-modules (that is, of $U(osp(1,2))$-modules) as follows: $$\begin{array}{ccc} \varphi : U({{\mathfrak g}})\otimes _{U({{\mathfrak h}}+{{\mathfrak g}}_+)} W
& \longrightarrow & V \\
a\otimes w & \longmapsto & av. \end{array}$$
Since $V$ is an irreducible $osp(1,2)$-module, $\varphi$ is onto. We denote by $U(m)$ the $U({{\mathfrak g}})$-module $U({{\mathfrak g}})\otimes
_{U({{\mathfrak h}}+{{\mathfrak g}}_+)} W$ and identify the element $1\otimes w$ with $w$. Then:
$$\begin{array}{rclrcl} hy^iw & = & (m-i)y^iw, & fy^iw & = & y^{i+2}w, \\
xy^{2i}w & = & -iy^{2i-1}w, & xy^{2i+1}w & = & (m-i)y^{2i}w, \\
ey^{2i}w & = & i(m - i+1)y^{2i-2}w, & ey^{2i+1}w & = &
i(m-i)y^{2i-1}w,
\end{array}$$
and hence it follows that the set $\{ w, yw, y^2w, \dots
\}$ spans the vector space $U(m)$. We remark that $I_m=\mathop{span}
\langle y^{2m+1}w, y^{2m+2}w, \dots \rangle$ is a proper submodule of $U(m)$, and because $V$ is irreducible and the weights of the elements $y^{2m+i}w$ are all different from $m$, it follows that $\varphi(I_m)\ne V$, so by irreducibility $\varphi(I_m)=0$. Thus the set $\{v, yv, y^2v, \dots , y^{2m}v\}$ spans the vector space $V$. Again, the theory of modules for $sl(2,F)$ shows that $v,y^2v,\ldots,y^{2m}v$ are all nonzero (see [@Hum 7.2]), and hence so are the elements $yv, y^3v, \dots, y^{2m-1}v$. Note that the elements $v,yv, y^2v, \dots , y^{2m}v$ are linearly independent, as they belong to different eigenspaces relative to the action of $h$. We conclude that $\{ v,yv, y^2v, \dots, y^{2m}v\}$ is a basis of $V$.
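The lowest instances of the action formulas above follow directly from the commutation relations; for example, using $xw=ew=0$ and $hw=mw$:

```latex
x\,yw = ([x,y]-yx)w = hw = mw \quad \bigl(= (m-0)\,y^{0}w\bigr),
\qquad
e\,y^2w = efw = ([e,f]+fe)w = hw = mw \quad \bigl(= 1\cdot(m-1+1)\,w\bigr),
```

in agreement with the cases $i=0$ of $xy^{2i+1}w=(m-i)y^{2i}w$ and $i=1$ of $ey^{2i}w=i(m-i+1)y^{2i-2}w$.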
Denote $V$ by $V(m)$ and write $e_i=y^iv$. Then, $$\begin{aligned}
{2}
V(m)_{\bar{0}} \quad &= \quad \langle e_0, e_2, \ldots , e_{2m}
\rangle \thickspace ,\\
V(m)_{\bar{1}} \quad &= \quad \langle e_1, e_3, \ldots , e_{2m-1}
\rangle \thickspace .\end{aligned}$$
Observe that $$\begin{aligned}
{3}
(xy-yx)e_{2i} \hspace{0.4cm} \quad &= \quad \hspace{0.5cm}
(m-i)e_{2i}+ ie_{2i} \quad &=& \qquad \hspace{0.25cm} me_{2i} \thickspace ,\\
(xy-yx)e_{2i+1} \quad &= \quad xe_{2i+2}- (m-i)e_{2i+1} \quad &=&
\quad -(m+1)e_{2i+1} \thickspace ,\end{aligned}$$
and so the minimal polynomial of the action of $xy-yx$ is $(X-m)(X+(m+1)) = X^2+X-m(m+1)$, and therefore the finite dimensional irreducible $U(osp(1,2))$-modules coincide with the irreducible modules for $U(osp(1,2)) / \operatorname{ideal}\langle (xy-yx)^2
+(xy-yx) -m(m+1) \rangle$.
Therefore, if $V$ is a finite dimensional irreducible $U(D_t)$-module ($t\ne 0,\pm 1$), then by Shestakov’s Theorem (Theorem \[th:Ivan\]), $V$ is an irreducible module for $osp(1,2)$ in which the minimal polynomial of the action of $xy-yx$ divides $X^2+X+\frac{t}{(1+t)^2}$. From our above discussion, there must exist a natural number $m$ such that $\frac{t}{(1+t)^2}=-m(m+1)$, that is, either $t=-\frac{m}{m+1}$ or $t=-\frac{m+1}{m}$. Thus,
(C. Martínez, E. Zelmanov) The universal enveloping algebra $U(D_t)$ ($t\ne 0,\pm 1$) has a finite dimensional irreducible module if and only if there exists a natural number $m$ such that either $t= - \frac{m}{m+1}$ or $t= -
\frac{m+1}{m}.$ In this case, up to parity exchange, its unique finite dimensional irreducible module is $V(m)$ (that is, the irreducible module for $U(osp(1,2))$ annihilated by the ideal generated by $
(xy-yx)^2+(xy-yx)-m(m+1)$).
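For instance, if $t=-\frac{m}{m+1}$, then $1+t=\frac{1}{m+1}$ and

```latex
\frac{t}{(1+t)^2} = -\frac{m}{m+1}\,(m+1)^2 = -m(m+1),
```

while the case $t=-\frac{m+1}{m}$ reduces to this one via the isomorphism $D_t\cong D_{1/t}$.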
More can be said here:
\[pr:superform\] Up to scalars, the module $V(m)$ has a unique nonzero even bilinear form $(.\mid .)$ such that $\rho_x$ and $\rho_y$, the multiplication operators by $x$ and $y$, are supersymmetric (that is, $(zv|w)=(-1)^{|v|} (v|zw)$ for any $v,w \in V_{\bar{0}}\cup
V_{\bar{1}}$ with $z=x,y$).
If $\rho_x, \rho_y$ are supersymmetric then $\rho_{[x,x]}=2\rho_x^2$, $\rho_{[y,y]}=2\rho_y^2$, and $\rho_{[x,y]}=\rho_x\rho_y+\rho_y\rho_x$ are skewsymmetric, that is, $\rho_e$, $\rho_f$, and $\rho_h$ are skewsymmetric. But $\rho_h$ being skewsymmetric implies that $(V_{(\alpha)}| V_{(\beta)})=0$ if $\alpha +\beta \neq 0$, where $V_{(\alpha)} =\{ v\in V(m) \ : \
hv=\alpha v \}$, because $(hV_{(\alpha)}| V_{(\beta)})=
-(V_{(\alpha)}|hV_{(\beta)})$, and therefore $(\alpha
+\beta)(V_{(\alpha)}| V_{(\beta)})=0$. Hence we can check that $(.\mid
. )$ is determined by $(e_0|e_{2m})$, as
$$(e_1|e_{2m-1})=(ye_0|e_{2m-1})= (e_0|ye_{2m-1})=(e_0|e_{2m}).$$
So, up to scalars, it can be assumed that $(e_0|e_{2m})=1$.
Using that $\rho_y$ is supersymmetric, we recursively get
$$\begin {array}{l} (e_{2r}|e_{2(m-r)})=(-1)^r, \\ (e_{2r+1}|e_{2(m-r)-1})=(-1)^r
\end{array}$$
and $(e_i|e_j)=0$ otherwise. Now it can be checked that $\rho_x$ is supersymmetric too.
Note that $(.\mid . )$ is supersymmetric if $m$ is even and superskewsymmetric if $m$ is odd. In the latter case, one can consider $V(m)^{op}$ with the supersymmetric bilinear superform given by $(u|v)^\prime = (-1)^{|u|}(u|v)$ where $|u|$ denotes the parity in $V(m)$.
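The recursion behind the signs $(-1)^r$ is the alternation produced by the parities of the $e_i$'s: writing $\alpha_r=(e_{2r}|e_{2(m-r)})$ and $\beta_r=(e_{2r+1}|e_{2(m-r)-1})$, supersymmetry of $\rho_y$ gives

```latex
\beta_r = (ye_{2r}\,|\,e_{2(m-r)-1}) = (e_{2r}\,|\,ye_{2(m-r)-1}) = \alpha_r,
\qquad
\alpha_{r+1} = (ye_{2r+1}\,|\,e_{2(m-r-1)}) = -(e_{2r+1}\,|\,e_{2(m-r)-1}) = -\beta_r,
```

so $\alpha_{r+1}=-\alpha_r$, and with the normalization $\alpha_0=1$ we get $\alpha_r=\beta_r=(-1)^r$.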
Consider again the finite dimensional irreducible $U(D_t)$-module ($t=-\frac{m}{m+1}$ or $t=-\frac{m+1}{m}$) $V=V(m)$, with the bilinear superform in the proposition above. It is known that this determines a superinvolution in $A=\operatorname{End}_F(V)$ such that every homogeneous element $f\in \operatorname{End}_F(V)$ is mapped to the $f^*$ satisfying $(fv|w)=(-1)^{\bar f\bar v}(v|f^*w)$. Note that, since $\rho_x$ and $\rho_y$ are supersymmetric, $D_t$ is thus embedded in $H(\operatorname{End}_F(V),*)$ as follows:
$$\begin{aligned}
D_t &\longrightarrow & H(\operatorname{End}_F(V),*) \\
e & \longmapsto & \frac{1}{t-1} (t\rho_{Id}+ (1+t)(\rho_x \rho_y - \rho_y \rho_x))\\
f & \longmapsto & \frac{1}{1-t} (\rho_{Id}+ (1+t)(\rho_x \rho_y - \rho_y \rho_x))\\
u & \longmapsto & 2 \rho_x \\
v &\longmapsto & -(1+t)\rho_y.\end{aligned}$$
Moreover, if $t \neq -2, -\frac{1}{2}$ (that is, if $m\neq 1$), then by dimension count one has $D_t \subsetneqq H(\operatorname{End}_F(V),*)$.
The conclusion of all these arguments is the following:
\[pr:DtEndV\] Let $V$ be a nontrivial finite dimensional vector superspace and let $B$ be a unital subalgebra of the simple Jordan superalgebra $\operatorname{End}_F(V)^+$, isomorphic to $D_t$ ($t\ne 0,\pm 1$), and such that $B'=\operatorname{End}_F(V)$. Then one of the following situations holds:
1. either $t= -\frac{m}{m+1}$ or $t= -\frac{m+1}{m}$ for an even number $m$, such that $V\cong V(m)$, and through this isomorphism $B
\subseteq H(\operatorname{End}_F(V), *)$ where $*$ is the superinvolution associated to the bilinear superform of Proposition \[pr:superform\],
2. or $t=-\frac{m}{m+1}$ or $t=-\frac{m+1}{m}$ for an odd number $m$ such that $V\cong V(m)^{op}$ and through this isomorphism $D_t \subseteq H(\operatorname{End}_F(V), \diamond)$, where $\diamond$ is the superinvolution associated to the bilinear superform $(.\mid
.)^\prime.$
The hypotheses imply that there is a surjective homomorphism of associative superalgebras $U(D_t)\rightarrow \operatorname{End}_F(V)$, so $V$ becomes an irreducible module for $U(D_t)$ and the arguments above apply.
Since the superalgebra $\operatorname{End}_F(V)$, for a superspace $V$, is isomorphic to $M_{p,q}(F)$, for $p=\dim V{_{\bar 0}}$, $q=\dim V{_{\bar 1}}$, the next result follows:
\[co:DtinMpqF\] The simple Jordan superalgebra $M_{p,q}(F)^+$ contains a unital subalgebra $B$, isomorphic to $D_t$ ($t\ne 0,\pm 1$), and such that $B'=M_{p,q}(F)$, if and only if $q=p\pm 1$ and either $t=-\frac{p}{q}$, or $t=-\frac{q}{p}$.
Proposition \[pr:DtinQnF\] and Corollary \[co:DtinMpqF\] give all the possibilities for embeddings of the Jordan superalgebra $D_t$ ($t\ne 0,\pm 1$) as unital subalgebras in $A^+$, in such a way that the associative subalgebra generated by $D_t$ is the whole $A$, $A$ being a simple associative superalgebra. For these cases, one always has $D_t\subseteq H(A,*)$, for a suitable superinvolution. By dimension count, equality is only possible here if $t=-2$ (or $t=-\frac{1}{2}$). This corresponds to the isomorphism $D_{-2}\cong
osp_{1,2}$.
For later use, let us recall the following results on universal enveloping algebras of some other Jordan superalgebras (see [@Ma-Ze2]):
\[th:p2\] *(C. Martínez and E. Zelmanov)*
1. The universal enveloping algebra of $ p(2) $ is isomorphic to $M_{2,2}(F[t]),$ where $F[t]$ is the polynomial algebra in the variable $t$.
2. The universal enveloping algebra of $M_{1,1}(F)^+$ is $(U(D), u)$ with
$$\begin{gathered}
U(D)= \begin{pmatrix} F[z_1, z_2]+ F[z_1, z_2]a & 0 \\
0 & F[z_1, z_2]+ F[z_1, z_2]a
\end{pmatrix} \\
\oplus \begin{pmatrix} 0& F[z_1, z_2]+ F[z_1, z_2]a^{-1}z_2 \\
F[z_1, z_2]z_1+ F[z_1, z_2]a & 0
\end{pmatrix}\end{gathered}$$
where $z_1, z_2$ are variables, $a$ is a root of $X^2 +X -z_1 z_2 \in
F[z_1, z_2]$, and $u: M_{1,1}(F)^+ \rightarrow U(D)^+$ is given by $$\begin{pmatrix} \alpha_{11} & \alpha_{12} \\
\alpha_{21} & \alpha_{22}
\end{pmatrix} \mapsto \begin{pmatrix} \alpha_{11} & \alpha_{12}+ \alpha_{21}a^{-1}z_2 \\
\alpha_{12}z_1+ \alpha_{21}a & \alpha_{22}
\end{pmatrix}.$$
\[th:others\] *(C. Martínez and E. Zelmanov)*
1. $U(M_{m,n}(F)^{+}) \cong M_{m,n}(F) \oplus M_{m,n}(F)$ for $(m,n) \neq (1,1);$
2. $U(Q_n(F)^{+})\cong Q_n(F) \oplus Q_n(F),$ $n \geq 2 ;$
3. $U(osp_{m,n}) \cong M_{m,n}(F),$ $(m,n) \neq (1,2);$
4. $U(p(n)) \cong M_{n,n}(F),$ $n \geq 3.$
Maximal subalgebras of $A^+.$
=============================
Let $B$ be a maximal subalgebra of $A^+$, $A$ being a simple associative superalgebra (so $A$ is isomorphic to either $M_{p,q}(F)$ or $Q_n(F)$, for some $p$ and $q$, or $n$). If $B^\prime \neq A$ then $B^\prime \subseteq C$ with $C$ a maximal subalgebra of the associative superalgebra $A$, and then $C^+ =B$ by maximality. Therefore a maximal subalgebra of $A^+$ is of one of the following types, either:
1. $B^{\prime}=A$ and $B$ is semisimple, or
2. $B=C^+$ with $C$ a maximal subalgebra of $A$ as associative superalgebra, or
3. $B^{\prime}=A$ and $B$ is not semisimple.
$B^\prime = A$ and $B$ semisimple.
----------------------------------
Let us assume first that $B$ is a maximal subalgebra of the simple superalgebra $A^+$, with $B'=A$ and $B$ semisimple.
For the time being, let us drop the maximality condition and suppose only that $B$ is a semisimple subalgebra of $A^+$ with $B'=A$. By Theorem \[th:semisimple\], $B=\sum_{i=1}^{r}(
J_{i1}\oplus \dots \oplus J_{ir_i} + Fe_i)\oplus M_1\oplus \dots
\oplus M_t$ where $M_1, \dots, M_t$ are simple Jordan superalgebras and $J_{ij}$ are Kaplansky superalgebras.
We claim that $B$ has neither direct summands $M_i$ isomorphic to the Kaplansky superalgebra $K_3$ nor direct summands of the type $(J_{i1}\oplus \dots \oplus J_{ir_i} + Fe_i)$. Indeed, otherwise $A^+$ would contain a subalgebra isomorphic to $K_3$. Let $e$ be its nonzero even idempotent and $x,y$ odd elements with $x\cdot y=e$. Then, in the associative superalgebra $A$ (which is isomorphic to either $M_{p,q}(F)$ or $Q_n(F)$, and hence there is a trace form), one has $\operatorname{trace}(e)=\operatorname{trace}(x\cdot y)=\frac{1}{2}\operatorname{trace}(xy-yx)=0$. However, any nonzero idempotent in a matrix algebra over a field of characteristic $0$ has nontrivial trace. A contradiction.
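The trace computation used here is elementary: for odd elements $x,y$ of $A\subseteq M_N(F)$, the Jordan product is $x\cdot y=\frac12(xy-yx)$, so

```latex
\operatorname{trace}(x\cdot y)
  = \tfrac{1}{2}\bigl(\operatorname{trace}(xy)-\operatorname{trace}(yx)\bigr) = 0,
```

while a nonzero idempotent $e$ of a matrix algebra over a field of characteristic $0$ has $\operatorname{trace}(e)=\operatorname{rank}(e)\geq 1$.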
Therefore, $B = M_1 \oplus \dots \oplus M_t$, where the $M_i$’s are unital simple Jordan superalgebras.
Consider now the identity element $f_i$ of each $M_i.$ Then $B=f_1Bf_1 \oplus \ldots \oplus f_tBf_t$. If $t>1$, it follows that $B^\prime \subset f_1Af_1 \oplus (1-f_1)A(1-f_1)\subsetneqq A$, a contradiction. Hence $B$ is simple and, therefore, is isomorphic to one of the following special superalgebras: $ D_t$, $H(D,*)$ (for a simple associative superalgebra $D$ with superinvolution $*$), the superalgebra of a superform, or $D^+$ for a simple associative superalgebra $D$. (Recall that $K_{10}$ and the Kantor superalgebra are exceptional superalgebras.)
In case $B$ were the superalgebra of a superform over a vector superspace $V$ with $V_{\bar{1}}\neq 0$, nondegeneracy of the superform would provide $x,y\in V_{\bar{1}}$ such that $x \cdot y =1_A$. Then $x \cdot y = \frac{1}{2}(xy-yx)=1_A$, and again $\operatorname{trace}(x \cdot y) =0\neq \operatorname{trace}(1_A)$, a contradiction that shows that $V_{\bar{1}}=0$. But then $B\subseteq A_{\bar{0}}$ and $B^\prime \subseteq A_{\bar{0}} \neq A$, contrary to our hypotheses.
Now, in case $B$ is isomorphic to $D_t$ ($t\ne 0$), Proposition \[pr:DtinQnF\] shows that $A$ is not isomorphic to $Q_n(F)$ and Corollary \[co:DtinMpqF\] shows that $B$ is never a maximal subalgebra of $A\cong M_{p,q}(F)$ unless $t=-2$ (or $-\frac{1}{2}$). In this case $B$ is isomorphic to $H(D,*)$ for a suitable $(D,*)$.
Therefore:
\[le:BinAplus\] Let $B$ be a subalgebra of the Jordan superalgebra $A^+$, where $A$ is a finite dimensional simple associative superalgebra over an algebraically closed field $F$ of characteristic $0$. If $B^{\prime}=A$ and $B$ is semisimple, then either $B$ is isomorphic to $D_t$ ($t\ne 0,1,-1,-2,-\frac{1}{2}$), or $B= D^+$ or $B=H(D,*)$, for a simple associative superalgebra $D$ and a superinvolution $*$. Moreover, if $B$ is a maximal subalgebra of $A^+$, then the first possibility does not hold.
Our next goal consists in proving that, in case $B=D^+$ or $B=
H(D,*)$, one has that $D$ is isomorphic to $A$. For this the following result (see [@GoT]) will be used:
\[th:Carlos\] *(C. Gómez-Ambrosi)* Let $S$ be a unital associative superalgebra with superinvolution $*$. Assume that the following conditions hold:
1. $S$ has at least three symmetric orthogonal idempotents.
2. If $S=\sum_{i,j=1}^n S_{ij}$ is the Peirce decomposition relative to them, then $S_{ij}S_{ji}=S_{ii}$ holds for $i,j=1, \dots ,n$,
and let $\phi \colon H(S,*) \to (A, \cdot)^+$ be a homomorphism of Jordan superalgebras, for an associative superalgebra $(A,\cdot)$. Then $\phi$ can be extended uniquely to a homomorphism of associative superalgebras $\varphi \colon S \to A$.
We shall proceed in several steps, where the assumptions are that $B$ is just a semisimple subalgebra of $A^+$ with $B'=A$:
**a)** Assume first that $B=H(D,*)$ for a simple associative superalgebra with superinvolution $(D,*)$. Let us denote the multiplication in $D$ by $\diamond$. The inclusion map $\iota\colon B=H(D,*) \to (A,\cdot)^+$ is a Jordan homomorphism. Then (Section 1), $D$ is isomorphic to $M_{p,q}(F)$, for suitable $p,q$, and $*$ corresponds to either the transpose superinvolution or an orthosymplectic superinvolution. If $D$ is not a quaternion superalgebra (isomorphic to $M_{1,1}(F)$) and $H(D,*)$ is isomorphic neither to $p(2)$ nor to $osp_{1,2}$, then $D$ satisfies the hypotheses of Theorem \[th:Carlos\] and, therefore, $\iota:B\rightarrow A$ can be extended to an associative homomorphism $\tau:D\rightarrow A$. But the subalgebra $B'$ generated by $B$ in $A$ is the whole $A$. Hence $\tau$ is onto and, as $D$ is simple, it is one-to-one too. Therefore $D$ is isomorphic to $A$. Thus, we are left with three cases:
**a.1)** If $H(D,*)$ is isomorphic to $osp_{1,2}$ then, since $osp_{1,2}$ is isomorphic to $D_{-2}$, $H(D,*)$ is isomorphic to $D_{-2}$.
**a.2)** If $D$, with multiplication $\diamond$, is isomorphic to $M_{1,1}(F)$, with superinvolution $*$ as in 6)i) in Section 1, then $H(D,*)$ is isomorphic to $F1+Fu$, with $u^2=0$. Thus, the universal enveloping algebra of $H(D,*)$ is $F[u]$, the ring of polynomials over $F$ in the variable $u$, and there exists an associative homomorphism $\varphi: F[u]\rightarrow A$, which extends $\iota:B\rightarrow A$. Again, $\varphi$ is onto since $B'=A$. Therefore $A$ would be commutative, a contradiction.
**a.3)** Finally, if $H(D,*)$ is isomorphic to $p(2)$, Theorem \[th:p2\] shows that its universal enveloping algebra is isomorphic to $M_{2,2}(F[t])$, where $F[t]$ is the polynomial algebra on the indeterminate $t$. As before, this gives a surjective homomorphism $\phi:M_{2,2}(F[t])\rightarrow A$. Recall that $A$ is isomorphic either to $M_{p,q}(F)$ or to $Q_n(F)=M_n(F)\oplus M_n(F)u$ ($u^2=1$). Let $e_1,e_2,e_3,e_4$ be primitive orthogonal idempotents of $M_{2,2}(F)$, with $e_1+e_2$ and $e_3+e_4$ being the identity elements of the two simple direct summands of the even part. Since the restriction of $\phi$ to $M_{2,2}(F)$ is injective because $M_{2,2}(F)$ is simple, the images $\phi(e_1),\phi(e_2),\phi(e_3),\phi(e_4)$ are nonzero orthogonal idempotents in $A{_{\bar 0}}$ with $\sum_{i=1}^4\phi(e_i)=1_A$. Write $U=M_{2,2}(F[t])$ and consider the Peirce decomposition of $U$ relative to $e_1,e_2,e_3,e_4$: $U=\sum U_{ij}$, and the Peirce decomposition of $A$ relative to $\phi(e_1),\phi(e_2), \phi(e_3), \phi(e_4)$: $A=\sum A_{ij}$. Since $U_{ii}$ is isomorphic to $F[t]$, it follows that $A_{ii}$ is commutative (as a quotient of $F[t]$) for any $i=1,2,3,4$. Therefore either $p+q=4$, or $n=4$ (that is, $A\cong Q_4(F)$). Consider now the restriction $\phi|_{M_{2,2}(F[t]){_{\bar 0}}}
\colon M_{2,2}(F[t]){_{\bar 0}}\to
A$. If $A\cong M_{p,q}(F)$ with $p+q=4$, one has that $\phi(M_{2,2}(F[t]){_{\bar 0}})
= \phi(M_2(F[t])) \oplus
\phi(M_2(F[t])) = A_{\bar{0}} \cong M_p(F) \oplus M_q(F)$, and therefore $p=2$ and $q=2$, and $D \cong M_{2,2}(F) = A$. If $A\cong
Q_4(F)$, then $(M_2(F[t])\times \{0\})$ is an ideal of $M_{2,2}(F[t])_{\bar{0}}$, and so $\phi (M_2(F[t])\times \{0\})$ is an ideal of $A_{\bar{0}} \cong M_4(F)$. Since $M_4(F)$ is simple and $\phi (e_1), \phi(e_2)$ are nonzero idempotents, it follows that $\phi (M_2(F[t])\times \{0\})=A_{\bar{0}},$ and so $\phi (e_1)+
\phi(e_2) =1_A,$ that is a contradiction because $\phi
(e_1)+\phi(e_2)+\phi(e_3)+\phi(e_4) =1$, with $\phi(e_3), \phi(e_4)$ nonzero orthogonal idempotents.
**b)** Assume now that $B=D^+$ for a simple associative superalgebra $D$. Consider the opposite superalgebra $D^{op}$ defined on the same vector space as $D$, but with the multiplication given by $a\diamond b = (-1)^{\bar a \bar b} b \cdot
a$, and the direct sum $D\oplus D^{op}$, which is endowed with the superinvolution $- \colon D\oplus D^{op} \to D\oplus D^{op}$, such that $\overline {(x,a)} = (a,x)$. Note that if $e_1, e_2,\dots, e_n$ are orthogonal idempotents in $D$, then $(e_1,e_1), (e_2,e_2),
\dots, (e_n,e_n)$ are also orthogonal idempotents in $D\oplus
D^{op}$, and the Peirce spaces are given by $(D\oplus D^{op})_{ij} =
D_{ij}\oplus (D^{op})_{ji}$. So if $D$ satisfies conditions (i) and (ii) in Theorem \[th:Carlos\], then so does $D\oplus D^{op}$. Since $D^+$ is isomorphic to $H(D\oplus D^{op}, -)$, there is a homomorphism of Jordan superalgebras $\phi \colon H(D\oplus
D^{op},-) \to A^+$.
**b.1)** Suppose that $D$ is isomorphic neither to $M_{1,1}(F)$ nor to $Q_2(F)$. Then, by Theorem \[th:Carlos\], $\phi$ can be extended to an associative homomorphism $\varphi \colon D\oplus D^{op} \to A$. As before, $\varphi$ is onto because $B^\prime =A,$ so $D\oplus D^{op} /Ker
\varphi$ is isomorphic to $A$ and either $Ker \varphi \cong D$ or $Ker \varphi \cong D^{op}$, because $A$ is simple. Hence either $D\cong A$ or $D^{op}\cong A$, that is, $\dim D= \dim A$, a contradiction, since the proper subalgebra $B=D^+$ of $A^+$ would then have the same dimension as $A$.
**b.2)** If $D$ is isomorphic to $M_{1,1}(F)$ (that is, $D$ is a quaternion superalgebra), consider the universal enveloping algebra $(U(D),u)$ of $D^+$ (see Theorem \[th:p2\]). The Jordan homomorphism $\iota\colon D \to A^+$ extends to an associative homomorphism $\varphi \colon U(D) \to A$ such that $\varphi \circ u =\iota$. But $B^\prime =A,$ and hence it follows that $\varphi$ is onto and, therefore, $U(D)/Ker \varphi \cong A$. Recall that $F$, the ground field, is assumed to be algebraically closed, so either $A\cong Q_n(F)$ or $A\cong
M_{p,q}(F).$ But $(U(D)/Ker \varphi)_{\bar{0}}$ is commutative, so $A_{\bar{0}}$ is commutative and therefore either $A\cong Q_1(F)$ or $A\cong M_{1,1}(F)$, a contradiction, since $A$ properly contains $B=D^+$ with $\dim D=4$.
**b.3)** Otherwise $D$ is isomorphic to $Q_2(F)$, and hence the universal enveloping algebra $(U(D),u)$ of $D^+$ is isomorphic to $D\oplus D$ (see Theorem \[th:others\]). Hence there is a homomorphism $\varphi \colon U(D) \to A$ which extends $\iota$. As before, $\varphi$ is onto, and so $U(D)/ Ker
\varphi \cong A.$ But $A$ is simple, so $Ker \varphi \cong D $ and $A\cong D$, a contradiction.
Therefore, Lemma \[le:BinAplus\] can be improved to:
\[le:BinAplusimproved\] Let $A$ be a finite dimensional simple associative superalgebra over $F$, and let $B$ be a semisimple subalgebra of $A^+$ with $B'=A$. Then either $B$ is isomorphic to $D_t$ ($t\ne 0,\pm
1,-2,-\frac{1}{2}$), or $B$ equals $H(A,*)$, for a superinvolution $*$. Moreover, if $B$ is a maximal subalgebra of $A^+$, then $B=H(A,*)$ for a superinvolution $*$ of $A$.
Consequently, if $B$ is a maximal subalgebra of $A^+$ which is semisimple and satisfies $B^\prime =A$, Lemma \[le:BinAplusimproved\] shows that $B$ coincides with the subalgebra of hermitian elements of $A$ relative to a suitable superinvolution. The converse also holds:
\[th:BsemiAplus\] Let $A$ be a finite dimensional simple associative superalgebra over an algebraically closed field of characteristic zero, and let $B$ be a semisimple subalgebra of $A^+$ such that $B^\prime =A $. Then $B$ is a maximal subalgebra of $A^+$ if and only if there is a superinvolution $*$ in $A$ such that $B= H(A,*)$.
It only remains to show that if $A$ is a finite dimensional simple associative superalgebra endowed with a superinvolution $*$, then $H(A,*)$ is a maximal subalgebra of $A^+$.
Our hypotheses on the ground field imply that, up to isomorphism, we are left with the following two possibilities:
1. $A=M_{n,n}(F)$, and $\thickspace \begin{pmatrix} a & b \\
c & d \end{pmatrix}^{*} = \begin{pmatrix} d^t & -b^t \\
c^t & a^t \end{pmatrix}.$
2. $A=M_{n,2m}(F)$, and $\thickspace \begin{pmatrix} a & b \\
c & d \end{pmatrix}^* = \begin{pmatrix} a^t & c^tq \\
-q^tb^t & q^td^tq \end{pmatrix},$ where $q= \begin{pmatrix} 0 & I_m \\
-I_m & 0 \end{pmatrix}.$
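These matrix superinvolutions lend themselves to a quick numerical sanity check. The sketch below (not part of the proof; it assumes Python with numpy and instantiates case 1 with $n=2$ over floating point matrices rather than an abstract field $F$) verifies that the transpose superinvolution is involutive and satisfies $(xy)^{*}=(-1)^{\bar x\bar y}y^{*}x^{*}$ on homogeneous elements.

```python
import numpy as np

n = 2
rng = np.random.default_rng(0)

def star(x):
    """Transpose superinvolution on M_{n,n}(F):
    (a b; c d)* = (d^t -b^t; c^t a^t)."""
    a, b = x[:n, :n], x[:n, n:]
    c, d = x[n:, :n], x[n:, n:]
    return np.block([[d.T, -b.T], [c.T, a.T]])

def rnd():
    return rng.integers(-3, 4, (n, n)).astype(float)

z = np.zeros((n, n))

def even():
    # even part: block-diagonal matrices
    return np.block([[rnd(), z], [z, rnd()]])

def odd():
    # odd part: block-antidiagonal matrices
    return np.block([[z, rnd()], [rnd(), z]])

for x, px in [(even(), 0), (odd(), 1)]:
    assert np.allclose(star(star(x)), x)               # * is involutive
    for y, py in [(even(), 0), (odd(), 1)]:
        sign = (-1) ** (px * py)
        # super-anti-multiplicativity: (xy)* = (-1)^{|x||y|} y* x*
        assert np.allclose(star(x @ y), sign * star(y) @ star(x))
print("superinvolution axioms hold for the transpose superinvolution")
```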
Note that $A=H\oplus K$, where $H=H(A,*)$ and $K$ is the set of skewsymmetric elements of $(A,*)$.
**i)** In the first case $$H= \left\{ \begin{pmatrix} a & b \\
c & a^t \end{pmatrix} : c \mbox{ symmetric, } b \mbox{
skewsymmetric } \right\} ,$$ $$K= \left\{ \begin{pmatrix} a & b \\
c & -a^t \end{pmatrix} : b \mbox{ symmetric, } c \mbox{
skewsymmetric } \right\} ,$$ and to check that $H(A,*)$ is a maximal subalgebra of $A^+$ it suffices to prove that $\operatorname{Jalg}\langle H,x\rangle=A^+$ for any nonzero homogeneous element $x\in K $. ($\operatorname{Jalg}\langle S\rangle$ denotes the subalgebra generated by $S$.)
If $0\ne x\in K_{\bar{0}}$ then $$x= \begin{pmatrix} a & 0 \\
0 & -a^t \end{pmatrix}$$
with $a\in M_n(F)$ and so $$\begin{pmatrix} a & 0 \\
0 & -a^t \end{pmatrix} + \begin{pmatrix} a & 0 \\
0 & a^t \end{pmatrix}= \begin{pmatrix} 2a & 0 \\
0 & 0 \end{pmatrix} \in \operatorname{Jalg}\langle H,x \rangle.$$
We claim that if $\bigl(\begin{smallmatrix}
a & 0 \\
0 & 0 \end{smallmatrix}\bigr) \in \operatorname{Jalg}\langle H,x \rangle$, then $\bigl( \begin{smallmatrix} u & 0 \\
0 & 0 \end{smallmatrix}\bigr) \in \operatorname{Jalg}\langle H,x \rangle$, for any $u\in M_n(F)$. Similarly, if $\bigl(\begin{smallmatrix}
0 & 0 \\
0 & a \end{smallmatrix}\bigr) \in \operatorname{Jalg}\langle H,x \rangle$, then $\bigl( \begin{smallmatrix} 0 & 0 \\
0 & u \end{smallmatrix}\bigr) \in \operatorname{Jalg}\langle H,x \rangle$, for any $u\in M_n(F)$. Actually, since $M_n(F)^+$ is simple and the ideal generated by $a$ in $M_n(F)^+$ is the vector subspace spanned by $\{L_{b_1} \ldots L_{b_m}(a):m\in{{\mathbb N}},\,
b_1,\ldots,b_m\in M_n(F)\}$ ($L_b$ denotes the left multiplication by $b$ in $M_n(F)^+$), it is enough to realize that $$\begin{pmatrix} L_{b_1} \ldots L_{b_m}(a) & 0 \\
0 & 0 \end{pmatrix}=
L_{\bigl( \begin{smallmatrix} b_1 & 0 \\
0 & b_1^t \end{smallmatrix}\bigr) } \ldots L_{\bigl( \begin{smallmatrix} b_m & 0 \\
0 & b_m^t \end{smallmatrix}\bigr)} \begin{pmatrix} a & 0 \\
0 & 0 \end{pmatrix} \in \operatorname{Jalg}\langle H,x \rangle.$$
So, if $0\ne x \in K_{\bar{0}}$, then $A_{\bar{0}} \subseteq \operatorname{Jalg}\langle H,x \rangle$. In order to prove that $A_{\bar{1}}\subseteq \operatorname{Jalg}\langle H,x \rangle$, note that $$\begin{pmatrix} 0 & 0 \\
I_n & 0 \end{pmatrix} \in H,$$ and since $$\begin{pmatrix} 0 & 0 \\
I_n & 0 \end{pmatrix} \circ
\begin{pmatrix} d & 0 \\
0 & 0 \end{pmatrix} = \frac{1}{2}
\begin{pmatrix} 0 & 0 \\
d & 0 \end{pmatrix}$$
it follows that $$\begin{pmatrix} 0 & 0 \\
u & 0 \end{pmatrix} \in \operatorname{Jalg}\langle H,x \rangle\quad\text{for any
$u\in M_n(F)$.}$$
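The displayed Jordan product can be checked numerically. The following sketch (an illustration only, assuming numpy; the Jordan superproduct $x\circ y=\frac{1}{2}(xy+(-1)^{\bar x\bar y}yx)$ is hard-coded for homogeneous arguments, here with $n=2$) verifies it:

```python
import numpy as np

n = 2
rng = np.random.default_rng(1)

def jp(x, y, px, py):
    """Jordan superproduct x o y = (xy + (-1)^{px*py} yx)/2
    for homogeneous x, y of parities px, py."""
    return (x @ y + (-1) ** (px * py) * y @ x) / 2

z = np.zeros((n, n))
d = rng.integers(-3, 4, (n, n)).astype(float)

h = np.block([[z, z], [np.eye(n), z]])   # (0 0; I_n 0), odd element of H
y = np.block([[d, z], [z, z]])           # (d 0; 0 0), even
rhs = np.block([[z, z], [d, z]]) / 2     # (1/2)(0 0; d 0)
assert np.allclose(jp(h, y, 1, 0), rhs)
print("(0 0; I 0) o (d 0; 0 0) = (1/2)(0 0; d 0)")
```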
It remains to prove that $$\begin{pmatrix} 0 & u \\
0 & 0 \end{pmatrix} \in \operatorname{Jalg}\langle H,x \rangle\quad\text{for any
$u\in M_n(F)$.}$$ Note that $$\begin{pmatrix} 0 & b \\
0 & 0 \end{pmatrix} \in H \subseteq \operatorname{Jalg}\langle H,x \rangle$$ for any skewsymmetric matrix $b$. But $$\left( \begin{pmatrix} 0 & b \\
0 & 0 \end{pmatrix} \circ \begin{pmatrix} 0 & 0 \\
0 & M_n(F) \end{pmatrix} \right) \circ \begin{pmatrix} M_n(F) & 0 \\
0 & 0 \end{pmatrix} =
\begin{pmatrix} 0 & M_n(F)bM_n(F) \\
0 & 0 \end{pmatrix} \subseteq \operatorname{Jalg}\langle H,x \rangle$$ and $M_n(F)bM_n(F)$ is a nonzero ideal of the simple algebra $M_n(F)$, so it is the whole $M_n(F)$ and $$\begin{pmatrix} 0 & M_n(F) \\
0 & 0 \end{pmatrix} \subseteq \operatorname{Jalg}\langle H,x \rangle .$$
Therefore, $\operatorname{Jalg}\langle H,x \rangle = A^+$ for any nonzero element $x\in K_{\bar{0}}$.
Now, if $0\ne x\in K_{\bar{1}}$, then $$x=\begin{pmatrix} 0 & b \\
c & 0 \end{pmatrix}$$
where $b$ is a symmetric and $c$ a skewsymmetric $n\times n$ matrix. Let $y\in H_{\bar{1}}$, $$y=\begin{pmatrix} 0 & \bar{b} \\
\bar{c} & 0 \end{pmatrix}$$
with $\bar{b}$ skewsymmetric and $\bar{c}$ symmetric, such that $x\circ y \neq 0$. Since $0\ne x\circ y\in K_{\bar{0}}$ we are back to the ‘even’ case, and so $\operatorname{Jalg}\langle H,x \rangle =A^+$.
**ii)** In the second case (orthosymplectic superinvolution), $A=M_{n,2m}(F)$ and $$H(A, *)= \left\{ \begin{pmatrix} a & b \\
-q^tb^t & d \end{pmatrix} : a \mbox{ symmetric, } d= \begin{pmatrix} d_{11} & d_{12} \\
d_{21} & d_{11}^t \end{pmatrix}, d_{12}, d_{21} \mbox{ skewsymmetric } \right\},$$
$$K(A, *)= \left\{ \begin{pmatrix} a & b \\
q^tb^t & d \end{pmatrix} : a \mbox{ skewsymmetric, } d= \begin{pmatrix} d_{11} & d_{12} \\
d_{21} & -d_{11}^t \end{pmatrix}, d_{12}, d_{21} \mbox{ symmetric } \right\}.$$
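The decomposition $A=H\oplus K$ can also be checked numerically in the orthosymplectic case. A minimal sketch, assuming numpy, with $n=2$, $m=1$:

```python
import numpy as np

n, m = 2, 1
N = n + 2 * m
rng = np.random.default_rng(3)

q = np.block([[np.zeros((m, m)), np.eye(m)],
              [-np.eye(m), np.zeros((m, m))]])

def star(x):
    """Orthosymplectic superinvolution on M_{n,2m}(F):
    (a b; c d)* = (a^t  c^t q; -q^t b^t  q^t d^t q)."""
    a, b = x[:n, :n], x[:n, n:]
    c, d = x[n:, :n], x[n:, n:]
    return np.block([[a.T, c.T @ q], [-q.T @ b.T, q.T @ d.T @ q]])

x = rng.integers(-3, 4, (N, N)).astype(float)
assert np.allclose(star(star(x)), x)          # * is involutive

# A = H + K: the hermitian and skew parts recover x
h, k = (x + star(x)) / 2, (x - star(x)) / 2
assert np.allclose(star(h), h) and np.allclose(star(k), -k)
assert np.allclose(h + k, x)
print("A = H(A,*) + K(A,*) holds for the orthosymplectic superinvolution")
```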
We claim that $\operatorname{Jalg}\langle H,x \rangle =A^+$ for any nonzero homogeneous element $x\in K$. If $0\ne x\in K_{\bar{1}}$, then $$x=\begin{pmatrix} 0 & b \\
q^tb^t & 0 \end{pmatrix}$$
and so $$x + \begin{pmatrix} 0 & b \\
-q^tb^t & 0 \end{pmatrix} = \begin{pmatrix} 0 & 2b \\
0 & 0 \end{pmatrix} \in \operatorname{Jalg}\langle H,x \rangle$$
with $b\in M_{n\times 2m}(F)$. Suppose that $\bigl(\begin{smallmatrix} 0& b\\ 0&0\end{smallmatrix}\bigr) =
\sum_{i=1, j= n+1}^{n,n+2m} \lambda _{ij}e_{ij}$ with $\lambda =
\lambda_{pq}\neq 0$, where, as usual, $e_{ij}$ denotes the matrix whose $(i,j)$-entry is $1$ and all the other entries are $0$, then $$\left( e_{pp} \circ \begin{pmatrix} 0 & b \\
0 & 0 \end{pmatrix} \right) \circ (e_{qq} + e_{q\pm m, q\pm m})=
\frac{1}{4} (\lambda e_{pq}+\lambda_{p,q \pm m} e_{p,q \pm m}) \in
\operatorname{Jalg}\langle H,x \rangle,$$
where ${q\pm m}$ means $q+m$ if $q\in \{n+1, \dots , n+m\}$ and $q-m$ if $q\in \{n+m+1, \dots , n+2m \}$ .
Assume $n>1$ and consider the element $(e_{qk} - q^{t}e_{kq}) \in
H(A,*)$ with $k \in \{1, \dots , n\}$ and $k \neq p$, then it follows that $2(e_{qk} - q^{t}e_{kq}) \circ e_{pq} = e_{pk} \in
\operatorname{Jalg}\langle H,x \rangle$ with $p,k \in \{1, \dots ,n \}$ and $k
\neq p.$ Therefore we have found an element $\begin{pmatrix} a & 0 \\
0 & 0 \end{pmatrix} \in \operatorname{Jalg}\langle H,x \rangle $ with $a \in
M_n(F)$ and $a \notin H(M_n(F), t)$ ($t$ denotes the usual transpose involution). Since $H(M_n(F), t)$ is a maximal subalgebra of $M_n(F)^+$ (see [@Ra1 Theorem 6]) we obtain that $$\operatorname{Jalg}\langle H(M_n(F), t),a \rangle
= M_n(F)^+$$
and so $$\begin{pmatrix} M_n(F) & 0 \\
0 & 0 \end{pmatrix} \subseteq \operatorname{Jalg}\langle H,x \rangle.$$
Besides, for any skewsymmetric matrix $ a \in M_n(F)$ and for every $ b \in M_{n \times 2m}(F)$ one has $$\left[ \begin{pmatrix} a & 0 \\
0 & 0 \end{pmatrix} \circ \begin{pmatrix} 0 & b \\
-q^tb^t & 0 \end{pmatrix} \right] + \frac{1}{2} \begin{pmatrix} 0 & ab \\
-q^t(ab)^t & 0 \end{pmatrix} = \begin{pmatrix} 0 & ab \\
0 & 0 \end{pmatrix} \in \operatorname{Jalg}\langle H,x \rangle,$$
and thus $\bigl(\begin{smallmatrix} 0 & M_{n \times 2m}(F) \\
0 & 0 \end{smallmatrix}\bigr) \subseteq \operatorname{Jalg}\langle H,x \rangle$, because it is easy to check that $$K(M_n(F), t)M_{n \times
2m}(F)= M_{n \times 2m} (F).$$
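The identity used above, $\bigl(\begin{smallmatrix} a&0\\ 0&0\end{smallmatrix}\bigr)\circ\bigl(\begin{smallmatrix}0&b\\ -q^tb^t&0\end{smallmatrix}\bigr)+\frac{1}{2}\bigl(\begin{smallmatrix}0&ab\\ -q^t(ab)^t&0\end{smallmatrix}\bigr)=\bigl(\begin{smallmatrix}0&ab\\ 0&0\end{smallmatrix}\bigr)$ for skewsymmetric $a$, is easy to verify numerically. A sketch assuming numpy, with $n=2$, $m=1$ and a random skewsymmetric $a$:

```python
import numpy as np

n, m = 2, 1
rng = np.random.default_rng(4)

q = np.block([[np.zeros((m, m)), np.eye(m)],
              [-np.eye(m), np.zeros((m, m))]])

def jp(x, y, px, py):
    """Jordan superproduct for homogeneous x, y of parities px, py."""
    return (x @ y + (-1) ** (px * py) * y @ x) / 2

def blk(a, b, c, d):
    return np.block([[a, b], [c, d]])

zn = np.zeros((n, n))
zo = np.zeros((2 * m, 2 * m))
zno = np.zeros((n, 2 * m))
zon = np.zeros((2 * m, n))

a0 = rng.integers(-3, 4, (n, n)).astype(float)
a = a0 - a0.T                                  # skewsymmetric a in M_n(F)
b = rng.integers(-3, 4, (n, 2 * m)).astype(float)

diag_a = blk(a, zno, zon, zo)                  # (a 0; 0 0), even
h = blk(zn, b, -q.T @ b.T, zo)                 # (0 b; -q^t b^t 0), odd, in H
ab = a @ b
lhs = jp(diag_a, h, 0, 1) + blk(zn, ab, -q.T @ ab.T, zo) / 2
rhs = blk(zn, ab, zon, zo)                     # (0 ab; 0 0)
assert np.allclose(lhs, rhs)
print("identity verified")
```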
But also $$\begin{pmatrix} a & 0 \\ 0 & 0 \end{pmatrix} \circ
\begin{pmatrix} 0 & -b^tq^t \\ b & 0 \end{pmatrix} + \frac{1}{2}
\begin{pmatrix} 0 & -(ba)^tq^t \\ ba & 0 \end{pmatrix} \in
\operatorname{Jalg}\langle H,x \rangle$$ and hence $$\begin{pmatrix} 0 & 0 \\
M_{2m \times n}(F) & 0 \end{pmatrix} \subseteq \operatorname{Jalg}\langle H,x
\rangle\qquad\text{and}\qquad
\begin{pmatrix} 0 & 0 \\
0 & M_{2m}(F) \end{pmatrix} \subseteq \operatorname{Jalg}\langle H,x \rangle.$$
Finally, if $n=1$ then $\lambda e_{1j}+ \mu e_{1, j\pm m} \in \operatorname{Jalg}\langle H, x \rangle $, with $j + m$ for $j \in \{n+1, \ldots ,
n+m\}$, and $j - m$ for $j \in \{n+m+1, \ldots , n+2m\}$. Now it is clear that $$\begin{pmatrix} M_n(F) & 0 \\ 0 & 0
\end{pmatrix}= \begin{pmatrix} F & 0 \\ 0 & 0
\end{pmatrix} \subseteq H(A,*) \subseteq \operatorname{Jalg}\langle H, x
\rangle.$$
Taking $e_{j1} - e_{1, j \pm m} \in H$ one has $$2(\lambda e_{1j}+ \mu e_{1, j \pm m}) \circ (e_{j1}- e_{1, j \pm
m})= \lambda e_{11}+ \lambda e_{jj} \in \operatorname{Jalg}\langle H, x \rangle.$$
Therefore, $e_{jj} \in \operatorname{Jalg}\langle H, x \rangle.$
Write $e_{jj}=\bigl(\begin{smallmatrix} 0 & 0 \\ 0 & a
\end{smallmatrix}\bigr)$ for a suitable $a \in M_{2m}(F)$. Then $a \notin H(M_{2m}(F),
*)$ with $*$ the involution determined by the skewsymmetric bilinear form with matrix $\bigl(\begin{smallmatrix} 0 & I \\ -I & 0
\end{smallmatrix}\bigr)$, and from the ungraded case (see [@Ra1]) we deduce that $$\operatorname{Jalg}\langle H(M_{2m}(F), *), a \rangle =
M_{2m}(F)^+$$
and therefore $\begin{pmatrix} 0 & 0 \\ 0 & M_{2m}(F)
\end{pmatrix} \subseteq \operatorname{Jalg}\langle H, x \rangle$. Now, since $$\begin{pmatrix} 0 & b \\ -q^tb^t & 0
\end{pmatrix} \circ \begin{pmatrix} 0 & 0 \\ 0 & M_{2m}(F)
\end{pmatrix} \subseteq \operatorname{Jalg}\langle H, x \rangle,$$ it follows that $\begin{pmatrix} 0 & M_{1,2m}(F) \\
M_{2m,1}(F) & 0
\end{pmatrix} \subseteq \operatorname{Jalg}\langle H, x \rangle$ also in this case.
If $x$ is now a nonzero homogeneous even element then $$x=\begin{pmatrix} a & 0 \\
0 & b \end{pmatrix}$$
for a skewsymmetric matrix $a$ and a matrix $b$ satisfying $b=
-q^tb^tq$. Consider $$y=x \circ \begin{pmatrix} 0 & 0 \\
0 & I \end{pmatrix}=\begin{pmatrix} 0 & 0 \\
0 & b \end{pmatrix} \in \operatorname{Jalg}\langle H, x \rangle,$$
and $$z= \begin{pmatrix} 0 & c \\
-q^tc^t & 0 \end{pmatrix}$$ such that $cb \neq 0$ (if $b=0$, then $a\neq 0$, and the same argument works with $x$ in place of $y$, choosing $c$ with $ac\neq 0$). Then $$y \circ z = \frac{1}{2}\begin{pmatrix} 0 & cb \\
- bq^tc^t & 0 \end{pmatrix} \in \operatorname{Jalg}\langle H,x \rangle \cap
K_{\bar{1}}$$
and the ‘odd’ case applies.
$B = C^+$, $C\leq _{max}A$.
---------------------------
Let us assume now that $B=C^+$ for a maximal subalgebra $C$ of the simple associative superalgebra $A$. It has to be proved that $C^+$ is a maximal subalgebra of $A^+$.
Two different cases appear according to the classification of simple associative superalgebras (see [@Wall]):
1. $A$ is simple as an (ungraded) algebra, that is, $A$ is isomorphic to $M_{p,q}(F)$, for some $p,q$. In this case, [@El-La-Sa Theorem 2.2] shows that either $C=eAe+ eAf+ fAf$ with $e,f$ even orthogonal idempotents in $A$ such that $e+f=1$, or $C=C_A(u)$ (centralizer of $u$), with $u\in A_{\bar{1}}$ and $u^2=1$.
2. $A$ is not simple as an algebra, and hence it is isomorphic to $Q_n(F)$ for some $n$. Then $A=A_{\bar{0}}+
A_{\bar{0}}u$ with $u\in Z(A)_{\bar{1}}$, $u^2=1$ and $A_{\bar{0}}$ is a simple algebra. In this case, [@El-La-Sa Theorem 2.5] shows that either $C=C_{\bar{0}}+ C_{\bar{0}}u$ with $C_{\bar{0}}$ a maximal subalgebra of $A_{\bar{0}}$, or $C=A_{\bar{0}}$, or $A_{\bar{0}}= D_{\bar{0}}+ D_{\bar{1}}$ is a ${{\mathbb Z}}_2$-graded algebra and $C=D_{\bar{0}}+ D_{\bar{1}}u$.
**(1.a)** Assume that $A$ is simple as an algebra, and that there are even orthogonal idempotents $e,f$ such that $C=eAe+eAf+fAf$. Take an element $a_{\alpha}\in
A_{\alpha}\setminus C_{\alpha}$, so one has that $fa_\alpha e\neq
0$. Now the element $(e \circ a_\alpha)\circ f = \frac{1}{4}
(ea_\alpha f+fa_\alpha e)$ lies in $\operatorname{Jalg}\langle C^+,a_\alpha \rangle$. Since $(fAf \circ fa_\alpha e)
\circ eAe = fAfa_\alpha eAe$, and $Afa_\alpha e A = A$, because $A$ is simple, it follows that $fAe\subseteq \operatorname{Jalg}\langle C^+,a_\alpha
\rangle$, and therefore $C^+$ is a maximal subalgebra of $A^+$. Thus in this case the condition is also sufficient for $C^+$ to be a maximal subalgebra of $A^+$.
**(1.b)** If $A$ is simple as an algebra, but $C= C_A(u)$, for an element $u\in A_{\bar{1}}$ with $u^2=1$, let $V$ be the irreducible $A$-module (unique, up to isomorphism), so that $A$ can be identified with $\operatorname{End}_F(V)$. Then $u$ lies in $\operatorname{End}(V)_{\bar{1}}$, and if $\{v_1, \dots, v_s\}$ is a basis of the $F$-vector space $V_{\bar{1}}$, it follows that $\{ u(v_1), \dots,
u(v_s)\}$ is an $F$-basis of $V_{\bar{0}},$ and so $p=q$ and, since $u^2=1$, the coordinate matrix of $u$ in this basis is $$u=\begin{pmatrix} 0 & I_s \\
I_s & 0 \end{pmatrix} .$$
Therefore $C_A(u) = Q_p(F)$, and one can easily check that $Q_p(F)^+$ is a maximal subalgebra of $M_{p,p}(F)^+$.
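The computation of $C_A(u)$ can be confirmed numerically: matrices of the form $\bigl(\begin{smallmatrix} a & b\\ b & a\end{smallmatrix}\bigr)$ commute with $u$, and the solution space of $ux=xu$ has dimension $2p^2=\dim Q_p(F)$. A sketch assuming numpy, with $p=2$:

```python
import numpy as np

p = 2
N = 2 * p
I, z = np.eye(p), np.zeros((p, p))
u = np.block([[z, I], [I, z]])          # odd element with u^2 = 1

assert np.allclose(u @ u, np.eye(N))

# matrices (a b; b a) centralize u ...
rng = np.random.default_rng(2)
a = rng.integers(-3, 4, (p, p)).astype(float)
b = rng.integers(-3, 4, (p, p)).astype(float)
x = np.block([[a, b], [b, a]])
assert np.allclose(x @ u, u @ x)

# ... and exhaust C_A(u): the kernel of X -> uX - Xu has
# dimension 2 p^2 = dim Q_p(F)   (vec(uX - Xu) = (I (x) u - u^T (x) I) vec X)
M = np.kron(np.eye(N), u) - np.kron(u.T, np.eye(N))
dim_centralizer = N * N - np.linalg.matrix_rank(M)
assert dim_centralizer == 2 * p * p
print("dim C_A(u) =", dim_centralizer)
```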
**(2.a)** Assume now that $A$ is not simple as an algebra, so $A=A_{\bar{0}}+ A_{\bar{0}}u$, with $u\in
Z(A)_{\bar{1}}$, $u^2=1$ and $A_{\bar{0}}$ a simple algebra, and that $C=C_{\bar{0}}+ C_{\bar{0}}u$, with $C_{\bar{0}}$ a maximal subalgebra of $A_{\bar{0}}$. As for the ungraded case (see [@Ra1 page 192]) it follows that $\operatorname{Jalg}\langle
C_{\bar{0}}^+,a_{\bar{0}} \rangle=A_{\bar{0}}^+$ for any $a_{\bar{0}}\in A_{\bar{0}} \setminus C_{\bar{0}}$. Thus $A_{\bar{0}} \subseteq \operatorname{Jalg}\langle C^+,a_{\bar{0}} \rangle$. Moreover, since $1\in C_{\bar{0}}$, $u\in C$, and it follows that $b_{\bar{0}}\circ u = \frac{1}{2} (b_{\bar{0}}u+ub_{\bar{0}})=
b_{\bar{0}}u\in \operatorname{Jalg}\langle C^+,a_{\bar{0}} \rangle$ for any $b_{\bar{0}} \in A_{\bar{0}}$. Thus $A_{\bar{0}}u\subseteq \operatorname{Jalg}\langle C^+,a_{\bar{0}} \rangle$ and $\operatorname{Jalg}\langle C^+,a_{\bar{0}}
\rangle=A^+$. Now take an element $a_{\bar{1}}\in A_{\bar{1}}
\setminus C_{\bar{1}}$. Then $a_{\bar{1}}=a_{\bar{0}}u$ with $a_{\bar{0}}\in A_{\bar{0}} \setminus C_{\bar{0}}$. Since $u$ lies in $C$, it follows that $a_{\bar{1}}\circ u = a_{\bar{0}} \in
\operatorname{Jalg}\langle C^+,a_{\bar{1}} \rangle $, with $a_{\bar{0}}\in A_{\bar{0}}\setminus C_{\bar{0}}$ and the ‘even’ case applies.
**(2.b)** If $A$ is not simple as an algebra and $C=A_{\bar{0}}$, let $b$ be any odd element: $b\in A{_{\bar 1}}=
A_{\bar{0}}u$. Thus $b=b_{\bar{0}}u$, for some $b_{\bar{0}}\in
A_{\bar{0}}$. Then $a_{\bar{0}}\circ b = (a_{\bar{0}}\circ
b_{\bar{0}})u$, so $\operatorname{Jideal}\langle b_{\bar{0}} \rangle u\subseteq \operatorname{Jalg}\langle A_{\bar{0}}^+,b \rangle$ (where $\operatorname{Jideal}\langle b_{\bar{0}}
\rangle$ denotes the ideal generated by $b_{\bar{0}}$ in the Jordan algebra $A_{\bar{0}}^+$). By simplicity of $A_{\bar{0}}^+$, $A_{\bar{0}}u \subseteq \operatorname{Jalg}\langle A_{\bar{0}}^+,b \rangle$, that is, $C^+$ is a maximal subalgebra of $A^+$.
**(2.c)** Finally, assume that $A$ is not simple as an algebra, and $A{_{\bar 0}}$ (which is isomorphic to $M_p(F)$ for some $p$) is ${{\mathbb Z}}_2$-graded: $A{_{\bar 0}}=D{_{\bar 0}}\oplus D{_{\bar 1}}$, and $C=D{_{\bar 0}}\oplus D{_{\bar 1}}u$, where $u\in Z(A){_{\bar 1}}$, $u^2=1$. Here, as an associative superalgebra (${{\mathbb Z}}_2$-graded algebra), $A{_{\bar 0}}$ is isomorphic to $M_{r,s}(F)$ for some $r,s$. Identify $A{_{\bar 0}}$ with $M_{r,s}(F)$, so that $D{_{\bar 0}}=\left\{\bigl(\begin{smallmatrix}a&0
\\ 0&b\end{smallmatrix}\bigr): a\in M_r(F),\, b\in
M_s(F)\right\}$, and $D{_{\bar 1}}=\left\{\bigl(\begin{smallmatrix}0&u
\\ v&0\end{smallmatrix}\bigr): u\in M_{r\times s}(F),\, v\in
M_{s\times r}(F)\right\}$. Let us show that $C^+$ is a maximal subalgebra of $A^+$. Since $A^+=C^+\oplus\bigl( D{_{\bar 1}}\oplus
D{_{\bar 0}}u\bigr)$, it is enough to check that for any nonzero element $x\in D{_{\bar 0}}u\cup D{_{\bar 1}}$, the subalgebra of $A^+$ generated by $C^+$ and $x$: $\operatorname{Jalg}\langle C^+,x\rangle$, is the whole $A^+$.
Take $0 \neq x \in D_{\bar{0}}u$. Then $$x=\begin{pmatrix} x_0 & 0 \\
0 & x_1 \end{pmatrix} u$$
with $x_0 \in M_r(F),$ and $x_1 \in M_s(F)$ not being both zero. Without loss of generality, assume $x_0 \neq 0$, and take elements $$\begin{pmatrix} b & 0 \\
0 & 0 \end{pmatrix} \in C$$
with $0 \neq b\in M_r(F)$. Then $$\begin{pmatrix} b & 0 \\
0 & 0 \end{pmatrix}\circ x = \begin{pmatrix} b & 0 \\
0 & 0 \end{pmatrix} \circ \begin{pmatrix} x_0 & 0 \\
0 & x_1 \end{pmatrix}u =\begin{pmatrix} b\circ x_0 & 0 \\
0 & 0 \end{pmatrix}u \in \operatorname{Jalg}\langle C^+,x \rangle$$ for any $b\in M_r(F)$. Therefore $$\begin{pmatrix} \operatorname{Jideal}\langle x_0 \rangle & 0 \\
0 & 0 \end{pmatrix}u \subseteq \operatorname{Jalg}\langle C^+,x \rangle$$ and because of the simplicity of $M_r(F)^+$, $$\begin{pmatrix} M_r(F) & 0 \\
0 & 0 \end{pmatrix}u \subseteq \operatorname{Jalg}\langle C^+,x \rangle .$$
Thus $$\begin{pmatrix} M_r(F) & 0 \\
0 & 0 \end{pmatrix}u \circ \begin{pmatrix} 0 & M_{r\times s}(F) \\
M_{s \times r}(F) & 0 \end{pmatrix}u = \begin{pmatrix} 0 & M_{r\times s}(F) \\
M_{s \times r}(F) & 0 \end{pmatrix} \subseteq \operatorname{Jalg}\langle C^+,x
\rangle,$$ that is, $D_{\bar{1}} \subseteq \operatorname{Jalg}\langle C^+,x
\rangle$, and so $D_{\bar{1}} \circ D_{\bar{1}}u =
D_{\bar{0}}u \subseteq \operatorname{Jalg}\langle C^+,x \rangle$ and $\operatorname{Jalg}\langle C^+,x \rangle=A^+.$
Take now an element $0 \neq x\in D_{\bar{1}}$. Then an element $d_{\bar{1}}u \in C^+$ can be found such that $0 \neq x \circ
d_{\bar{1}}u \in D_{\bar{0}}u \cap \operatorname{Jalg}\langle C^+,x \rangle$, so the previous arguments apply.
This concludes the proof of the following result:
\[th:BCplus\] Let $A$ be a finite dimensional simple associative superalgebra over an algebraically closed field of characteristic zero, and let $B$ be a subalgebra of $A^+$ such that $B^\prime \neq A $ (where $B^\prime$ denotes the associative subalgebra generated by $B$ in $A$). Then $B$ is a maximal subalgebra of $A^+$ if and only if there is a maximal subalgebra $C$ of the superalgebra $A$ such that $B= C^+$.
$B^\prime = A$ and $B$ is not semisimple.
-----------------------------------------
This situation does not appear in the ungraded case [@Ra1]. However, consider the associative superalgebra $A=M_{1,1}(F)$ and the subalgebra $B$ of $A^+$ spanned by $\{e_{11},e_{22},e_{12}+e_{21}\}$, which, by dimension count, is obviously maximal and satisfies that $B'=A$. The radical of $B$ consists of the scalar multiples of $e_{12}+e_{21}$, so it is nonzero.
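The claimed properties of this example can be verified by direct computation. A sketch assuming numpy (the Jordan superproduct is hard-coded for homogeneous arguments):

```python
import numpy as np

# Basis of M_{1,1}(F): even part = diagonal, odd part = off-diagonal
e11 = np.array([[1., 0.], [0., 0.]])
e22 = np.array([[0., 0.], [0., 1.]])
w = np.array([[0., 1.], [1., 0.]])      # e12 + e21, odd

def jp(x, y, px, py):
    """Jordan superproduct for homogeneous x, y of parities px, py."""
    return (x @ y + (-1) ** (px * py) * y @ x) / 2

# B = span{e11, e22, w} is closed under the Jordan superproduct:
assert np.allclose(jp(e11, w, 0, 1), w / 2)
assert np.allclose(jp(e22, w, 0, 1), w / 2)
assert np.allclose(jp(e11, e22, 0, 0), np.zeros((2, 2)))
assert np.allclose(jp(w, w, 1, 1), np.zeros((2, 2)))   # w spans the radical

# Associatively, w recovers the off-diagonal units, so B' = A:
assert np.allclose(e11 @ w, np.array([[0., 1.], [0., 0.]]))   # e12
assert np.allclose(w @ e11, np.array([[0., 0.], [1., 0.]]))   # e21
print("B is a Jordan subalgebra of M_{1,1}(F)^+ with nonzero radical, B' = A")
```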
**Question:** Is this, up to isomorphism, the only possible example of a maximal subalgebra $B$ of $A^+$, $A$ being a simple finite dimensional superalgebra over an algebraically closed field $F$ of characteristic $0$, such that $B'=A$ and $B$ is not semisimple?
Maximal subalgebras of $H(A,*).$
================================
Consider now the Jordan superalgebra $J=H(A,*)$, where $A$ is a finite dimensional simple associative superalgebra over an algebraically closed field $F$ of characteristic zero, and $*$ is a superinvolution of $A$.
Up to isomorphism [@Go-She Theorem 3.1], it is known that $A=M_{p,q}(F)$ and that $*$ is either the orthosymplectic or the transpose superinvolution, that is, $H(A,*)$ is either $osp_{n,2m}$ or $p(n)$.
Let $B$ be a maximal subalgebra of $H(A,*)$. Then again three possible situations appear:
1. either $B^\prime = A$ and $B$ is semisimple,
2. or $B^\prime \neq A,$
3. or $B^\prime =A$ and $B$ is not semisimple.
$B^\prime = A$ and $B$ semisimple.
----------------------------------
Let us assume first that $B$ is a maximal subalgebra of the simple superalgebra $H(A,*)$, with $B'=A$ and $B$ semisimple. From Lemma \[le:BinAplusimproved\], we know that either $B$ is isomorphic to $D_t$ ($t\ne 0,\pm 1,-2,-\frac{1}{2}$), or $B=H(A,\diamond)$ with $\diamond$ a superinvolution. In the first case we remark that only necessary conditions have been given in Proposition 3.3 for the case $B^\prime
=A$ and $1_A\in B$. In the second case, one has $B=H(A,\diamond)
\subseteq H(A,*)$, but Theorem \[th:BsemiAplus\] shows that $H(A,\diamond)$ is maximal in $A^+$, thus obtaining a contradiction.
Therefore:
\[th:BsemiinHAstar\] Let $J$ be the Jordan superalgebra $H(A, *)$, where $A$ is a finite dimensional simple associative superalgebra over an algebraically closed field of characteristic zero, and $*$ a superinvolution in $A$. If $B$ is a maximal subalgebra of $J$ such that $B^{\prime}=A$ and $B$ is semisimple, then $B= D_t$ ($t\ne 0,\pm 1,-2,-\frac{1}{2}$) and $(A,*)$ is given by Proposition \[pr:DtEndV\].
**Question:** Given a natural number $m$, and with $t$ equal either to $-\frac{m}{m+1}$ or to $-\frac{m+1}{m}$, is $D_t$ isomorphic to a maximal subalgebra of the Jordan superalgebra $H(\operatorname{End}_F(V),*)$ ($V$ and $*$ as in Proposition \[pr:DtEndV\])?
For $m=2$ or $m=3$, this has been checked to be the case.
$B'\ne A$.
----------
Assume now that the maximal subalgebra $B$ of $H(A,*)$ satisfies $B'\ne A$. The result that settles this case is the following:
\[th:BprimeneAinHAstar\] Let $J$ be the Jordan superalgebra $H(A,*)$, where $A$ is a finite dimensional simple associative superalgebra over an algebraically closed field of characteristic zero, and $*$ is a superinvolution in $A$. Let $B$ be a subalgebra of $J$ such that $B^\prime \neq A$ (where as always $B^\prime $ is the subalgebra of $A$ generated by $B$). Then $B$ is maximal if and only if there are even idempotents $e,f\in A$ with $e+f=1$ such that $B=H(C,*)$ and one of the following possibilities occurs:
1. either $C=eAe+fAf$, $e^*=e$, $f^*=f$, $H(eAe,*)^\prime
=eAe$, and $H(fAf,*)^\prime =fAf$.
2. or $C=eA +Ae^* + ff^*Aff^*$, with $H(ff^*Aff^*,*)^\prime =
ff^*Aff^*$.
Note [@Go] that given a finite dimensional simple associative superalgebra $C$ over $F$ with a superinvolution $*$, the associative subalgebra $H(C,*)'$ is the whole $C$ unless $(C,*)$ is either a quaternion superalgebra with the transpose superinvolution or a quaternion algebra with the standard involution.
Since $B\subseteq H(A,*)$, it follows that $B^\prime$ is closed under the superinvolution $*$, and, as $B^\prime\neq A$, one has $B^\prime
\subseteq C$ with $C$ a maximal subalgebra of $(A,*)$. But using the maximality of $B$ and that $B\subseteq H(A,*)$, one concludes that $B=H(C,*)$. Recall that $H(A,*)$ is isomorphic either to $p(n)$ or to $osp_{n,2m}$.
If $B=H(C,*)$ with $C$ a maximal subalgebra of $(A,*)$, then the results in [@El-La-Sa] show that either $C= (eAe+eAf+fAf)\cap
(e^*Ae^*+f^*Ae^*+f^*Af^*)$ with $e,f$ even orthogonal idempotents, or $C= C_A(u)$ with $u\in A_{\bar{1}},$ $0\ne u^2\in F,$ $u^*\in
Fu$. In this last case, since $u^* \in Fu$ it follows that $u^* =
\alpha u$ with $\alpha \in F$. But $(u^*)^* =u$ and so $\alpha^2
=1$, that is, $\alpha = \pm 1$. Thus $u^2=(u^2)^*=-(u^*)^2= -u^2,$ a contradiction.
Thus, $C$ is of the first type, and then [@El-La-Sa Proposition 4.6] gives two possible cases.
In the first case there is an idempotent $e$ of $A$ such that $C=eAe+fAf$ and $e^*=e,$ $f=1-e.$ If $H(C,*)^\prime \neq C$ then either $H(eAe,*)^\prime \neq eAe$ or $H(fAf,*)^\prime\neq fAf$. It may be assumed that $H(eAe,*)^\prime \neq eAe$, and then the results in [@Go] show that either $eAe$ is a quaternion superalgebra with the restriction $*|_{eAe}$ being the transpose superinvolution or is a quaternion algebra contained in $A_{\bar{0}}$, with the standard involution. In both cases $e=e_1+e_2$ with $e_1,e_2$ orthogonal idempotents and $e_1^*=e_2.$ Consider $D=e_1A+Ae_2+fAf$ and take $0\neq e_1af \in e_1Af$, then $e_1af+fa^*e_2\in H(D,*)$ and $e_1af+fa^*e_2\notin H(C,*)$. In the same vein, take $c\in A$ with $e_2cf\neq 0$. Then $e_2cf+fc^*e_1 \in
H(A,*)\setminus H(D,*).$ Therefore $B=H(C,*) \subsetneqq H(D, *)
\subsetneqq H(A,*)$ and $B=H(C,*)$ is not maximal. So $B^\prime=
H(C,*)^\prime =C$ if $B=H(C,*)$ with $C=eAe+ fAf$ and $e^*=e.$
In the second case [@El-La-Sa Proposition 4.6], there is an idempotent $e$ in $A$ such that $e$, $e^*$, $ff^*$ are mutually orthogonal idempotents with $1=e+e^*+ff^*$, and $C=
eA+Ae^*+ff^*Aff^*$. Hence $H(C,*) = H(ff^*Aff^*,*) + \{ ea + a^*e^* :
a\in A\}$.
If $H(ff^*Aff^*,*)^\prime \neq ff^*Aff^*$, then $ff^*Aff^*$ is a quaternion superalgebra with superinvolution such that $ff^* =
e_1+e_2$ with $e_1,e_2$ orthogonal idempotents and $e_1^*=e_2$. Consider the subalgebra $D = eA+Ae^*+e_2A+Ae_1$. As $H(C,*)\subsetneqq H(D,*)\subsetneqq H(A,*)$, $H(C,*)$ is not maximal. Therefore, if $B=H(C,*)$ with $C=eA+Ae^*+ff^*Aff^*$, and $e$, $e^*$, $ff^*$ mutually orthogonal idempotents such that $e+e^*+ff^*=1$, then $H(ff^*Aff^*, *)^\prime =ff^*Aff^*$.
The proof of the converse will be split according to the different possibilities:
**(i.1)** The superinvolution $*$ on $A$ is the transpose superinvolution, and the conditions in item (i) of the Theorem hold:
Then $*$ is determined, after identifying $A$ with $\operatorname{End}_F(V)$, by a nondegenerate odd symmetric superform $( \ , \ )$. That is, $(V_{\bar{0}}, V_{\bar{0}}) = (V_{\bar{1}}, V_{\bar{1}})=0$ and $(a_0, b_1)= (b_1, a_0)$ for any $a_0 \in V_{\bar{0}}$, $b_1 \in
V_{\bar{1}}$.
In this situation we claim that a basis $\{x_1, \ldots ,x_n,y_1,
\ldots , y_n\}$ of $V$ can be chosen such that $\{x_1, \ldots
,x_n\}$ is a basis of $V_{\bar{0}}$, $\{y_1, \ldots ,y_n\}$ is a basis of $V_{\bar{1}}$, and the coordinate matrices of the superform and of $e$ present the following form, respectively, $$\begin{pmatrix} 0 & 0 & I & 0 \\
0 & 0 & 0 & I \\ I & 0 & 0 & 0 \\ 0 & I & 0 & 0 \end{pmatrix},
\hspace{1cm} \begin{pmatrix} I & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 &
0 & I & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$
This follows from the fact that the eigenspaces of the idempotent transformation $e$ are orthogonal relative to $(\ ,\ )$, as $e^*=e$. Under these circumstances, we may identify $H(A,*)$ with $$p(n)=\left\{\begin{pmatrix} a&b\\ c&a^t\end{pmatrix} : \text{$b$
skewsymmetric, $c$ symmetric}\right\}$$ in such a way that the subalgebra $H(eAe+fAf,*)$ becomes the subspace of the matrices (in block form) $$\begin{pmatrix}
a_{1}&0&c_{1}&0\\ 0&a_{2}&0&c_{2}\\ d_{1}&0&a_{1}^t&0\\
0&d_{2}&0&a_{2}^t\end{pmatrix}$$ where $a_{1},c_{1},d_{1}$ belong to $M_i(F)$, $a_{2},c_{2},d_{2}$ belong to $M_j(F)$, $i+j=n$, and $c_{1},c_2$ are skewsymmetric matrices, while $d_1,d_2$ are symmetric.
It must be proved that for any homogeneous element $x\in H(A,*)\setminus H(C,*)$, $\operatorname{Jalg}\langle H(C,*),x \rangle =H(A,*)$ holds.
Let $x \in H(A,*)_{\bar{0}} \setminus H(C,*)_{\bar{0}}$, that is, $$x=\sum_{\substack{1\leq k \leq i \\[0.1cm] 1 \leq r \leq j}}
\lambda_{kr} (e_{k,i+r}+e_{n+i+r,n+k})
+ \sum_{\substack{1\leq r \leq j \\[0.1cm] 1 \leq k \leq i}} \mu_{rk} (e_{i+r,k}+e_{n+k,n+i+r})$$ where $e_{r,s}$ denotes the matrix with $1$ in the $(r,s)$-th entry and $0$ in all the other entries. Suppose that there exists $\lambda_{pq} \neq 0.$ The same proof works if $\mu_{pq} \neq 0$.
Since $H(C,*)'=C$ and $i>1$ (as $H(eAe,*)'=eAe$), an index $s \in
\{1, \ldots, i\}$ can be chosen with $s \neq p$, such that $u=e_{s,p}+e_{n+p,n+s} \in H(C,*)$. Let $v=e_{p,p} + e_{n+p,n+p}$ and $w= e_{i+q,i+q} + e_{n+i+q,n+i+q}$ (note that $v,w \in H(C,*)$). Then $$((v \circ x) \circ w )\circ u= \frac{1}{8} \lambda_{pq}(e_{s,i+q}+e_{n+i+q,n+s})
\in \operatorname{Jalg}\langle H(C,*),x \rangle.$$
Denote this element by $\alpha$, and then $0 \neq \alpha \in
e_1Af_1+ f_1^*Ae_1^*.$ Now
$$((e_1ae_1+e_1^*a^*e_1^*) \circ \alpha ) \circ (f_1bf_1+f_1^*b^*f_1^*)=
e_1ae_1\alpha f_1bf_1
+f_1^*b^*f_1^*\alpha e_1^*a^*e_1^*$$
belongs to $ \operatorname{Jalg}\langle H(C,*),x \rangle$. Since $\{ae_1 \alpha f_1 b : a,b \in A \} $ is an ideal of $ A$, and $A$ is simple, it holds that $\{ae_1 \alpha f_1 b :
a,b \in A\}=A$, and so $e_1 a f_1 + f_1^* a^* e_1^* \in \operatorname{Jalg}\langle H(C,*),x \rangle$ for any $a \in A.$
Consider now an element $y \in f_1Af_1^* \cap H(C,*)$. Since $j>1$ (because $ H(fAf,*)'=fAf$), we can pick the element $ y=
e_{l,k}-e_{l+1,k-1}$, with $l=i+1$ and $k=n+i+2$. Take $z=e_{k-1,p}+e_{1,l} \in H(e_1Af_1 + f_1^* A e_1^*,*) \subseteq
\operatorname{Jalg}\langle H(C,*),x \rangle$ and $v= e_{p,1} \in H(C,*) \cap
e_1^*Ae_1$, with $p=n+1.$ Then $(y \circ z) \circ v=
\frac{1}{4}(-e_{l+1,1}-e_{p,k}) \in (f_1Ae_1 +e_1^*Af_1^*) \cap
H(A,*)_{\bar{0}}$. As before we obtain that $f_1ae_1 +e_1^*a^* f_1^*
\in \operatorname{Jalg}\langle H(C,*),x \rangle$, and $H(A,*)_{\bar{0}} \subseteq
\operatorname{Jalg}\langle H(C,*),x \rangle$.
Now it will be proved that $H(A,*)_{\bar{1}}$ is contained in $\operatorname{Jalg}\langle H(C,*), x \rangle $. Take $y=e_{k, n+i+t}- e_{i+t,n+k} \in
H(A,*)_{\bar{1}} \cap (e_1Af_1^*+f_1Ae_1^*)$, with $k \in \{1,
\ldots ,i\}, t \in \{1, \ldots ,j\}$ and we claim that $y \in \operatorname{Jalg}\langle H(C,*),x \rangle$. Since $H(fAf,*)^{\prime}= fAf,$ there exists $s \in \{1, \ldots ,j\}$ with $s \neq t$, and we then consider the elements $z=e_{n+i+s,n+k}+ e_{k,i+s} \in \operatorname{Jalg}\langle H(C,*),x \rangle$, and $u=e_{i+s,n+i+t}-e_{i+t,n+i+s} \in H(C,*)$. Then it follows that $z
\circ u=\frac{1}{2}y \in \operatorname{Jalg}\langle H(C,*),x \rangle$. In the same way we obtain that $(e_1^*Af_1 + f_1^*Ae_1) \cap
H(A,*)_{\bar{1}} \subseteq \operatorname{Jalg}\langle H(C,*),x \rangle$.
So for any $x \in H(A,*)_{\bar{0}}\setminus H(C,*)_{\bar{0}}$, $H(A,*)=\operatorname{Jalg}\langle H(C,*), x \rangle$ holds.
Now let $x \in H(A,*)_{\bar{1}} \setminus H(C,*)_{\bar{1}}.$ Then $$x= \sum_{\substack{1\leq k \leq i \\[0.1cm] 1 \leq r \leq j}}
\lambda_{kr} (e_{k,n+i+r}-e_{i+r,n+k})+
\sum_{\substack{1\leq k \leq i \\[0.1cm] 1 \leq r \leq j}}
\mu_{kr}(e_{n+k,i+r}+e_{n+i+r,k})$$
and assume that for some $(p,q)$, one has $\lambda_{pq}
\neq 0$.
Since $u=e_{n+p,p} \in H(C,*),$ $0 \neq 2 x \circ u= - \sum_{1\leq q
\leq j} \lambda_{pq} (e_{i+q,p}+e_{n+p,n+i+q}) \in
H(A,*)_{\bar{0}}\setminus H(C,*)_{\bar{0}}$, and the above case applies.
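Since $x$ and $u$ are both odd here, the relevant product is the Jordan superproduct, which we take to be $x \circ u = \frac{1}{2}(xu - ux)$ on odd elements (assuming the usual sign convention $a\circ b = \frac12(ab + (-1)^{|a||b|}ba)$). The displayed identity can again be checked numerically with the arbitrary choices $i=j=2$, $p=1$:

```python
import numpy as np

def unit(r, c, N=8):
    """Matrix unit e_{r,c} (1-indexed) of size N x N."""
    M = np.zeros((N, N))
    M[r - 1, c - 1] = 1.0
    return M

i = j = 2
n = i + j
rng = np.random.default_rng(1)
lam = rng.normal(size=(i + 1, j + 1))
mu = rng.normal(size=(i + 1, j + 1))

# odd element x as in the displayed formula
x = (sum(lam[k, r] * (unit(k, n + i + r) - unit(i + r, n + k))
         for k in range(1, i + 1) for r in range(1, j + 1))
     + sum(mu[k, r] * (unit(n + k, i + r) + unit(n + i + r, k))
           for k in range(1, i + 1) for r in range(1, j + 1)))

p = 1
u = unit(n + p, p)
# x and u are both odd, so 2 x o u = xu - ux
result = x @ u - u @ x
expected = -sum(lam[p, q] * (unit(i + q, p) + unit(n + p, n + i + q))
                for q in range(1, j + 1))
assert np.allclose(result, expected)
```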
In the same way one deals with the case $\mu_{pq} \neq 0$, and we conclude that $H(C,*)$ is a maximal subalgebra of $H(A,*).$
**(i.2)**: The superinvolution $*$ on $A$ is an orthosymplectic superinvolution, and the conditions in item (i) of the Theorem hold:
In this and the following cases, we will content ourselves with establishing the setting in which one can apply the same kind of not very illuminating arguments as those used in case **(i.1)**.
Here, after identifying $A$ to $\operatorname{End}_F(V)$, the superinvolution $*$ is determined by a nondegenerate symmetric superform $(\ , \ )$ on $V$, that is, $(\ , \ )\vert_{V_{\bar{0}} \times V_{\bar{0}}}$ is symmetric, $(\ , \ )\vert_{V_{\bar{1}} \times V_{\bar{1}}}$ is skewsymmetric and $(V_{\bar{0}},V_{\bar{1}})=(V_{\bar{1}}, V_{\bar{0}})=0$.
Since $e$ is idempotent and selfadjoint, there is a basis of $V$ in which the coordinate matrices of the superform and of $e$ are, respectively, $$\begin{pmatrix} I&0&0&0&0&0\\
0&I&0&0&0&0\\
0&0&0&0&I&0\\
0&0&0&0&0&I\\
0&0&-I&0&0&0\\
0&0&0&-I&0&0\end{pmatrix},\qquad
\begin{pmatrix} I&0&0&0&0&0\\
0&0&0&0&0&0\\
0&0&I&0&0&0\\
0&0&0&0&0&0\\
0&0&0&0&I&0\\
0&0&0&0&0&0
\end{pmatrix},$$ where $0$, respectively $I$, denotes the zero matrix, respectively identity matrix (of possibly different orders). Let $n$ be the dimension of $V{_{\bar 0}}$, $2m$ the dimension of $V{_{\bar 1}}$, $i$ the rank of the restriction $e\vert_{V{_{\bar 0}}}$, $j=n-i$, $2k$ the rank of $e\vert_{V{_{\bar 1}}}$ and $l=m-k$. Hence, identifying by means of this basis $H(A,*)$ to $osp_{n,2m}$, the idempotent $e$ decomposes as $e= e_1+e_2+ e_2^*$, with $e_1= \sum_{s=1}^i e_{s,s}$, $e_2=\sum_{s=1}^k e_{n+s,n+s}$ and $e_2^*=\sum_{s=1}^k
e_{n+m+s,n+m+s}$. Similarly, $f=1-e$ decomposes as $f=f_1+ f_2+f_2^*
$.
The elements of $H(C,*)$ are then the matrices (in block form) $$\left( \begin{array} {cccccc} c_{11} & 0 \hspace{0.8cm} \vdots & b_{11} & 0 & b_{13} & 0\\
0 & c_{22} \ \hspace{0.25cm} \vdots & 0 & b_{22} & 0 & b_{24}\\
\hdotsfor{6} \\
b_{13}^t & 0 \hspace{0.80cm} \vdots &a_{11} & 0 & a_{13} & 0 \\
0 & b_{24}^t \ \hspace{0.25cm} \vdots &0 & a_{22} & 0 & a_{24} \\
-b_{11}^t & 0 \hspace{0.80cm} \vdots &a_{31} & 0 & a_{11}^t & 0 \\
0 & -b_{22}^t \ \vdots & 0 & a_{42} & 0 & a_{22}^t
\end{array}\right)$$ with $c_{11} \in M_i(F) $ and $ c_{22} \in M_j(F)$ symmetric matrices, $a_{11} \in M_k(F)$, $a_{22} \in M_l(F)$, $b_{11}, b_{13} \in
M_{i\times k}(F)$, $b_{22}, b_{24} \in M_{j\times l}(F),$ $a_{13},
a_{31} \in M_k(F)$ skewsymmetric matrices, and $a_{24}, a_{42} \in
M_l(F)$ skewsymmetric matrices too.
Note that either $e_1$ or $f_1$ may be $0$. If, for instance, $f_1=0$, then since $H(fAf,*)'=fAf$, it follows that $l>1$.
In this setting, routine arguments like the ones for **(i.1)** apply.
**(ii.1)**: The superinvolution $*$ on $A$ is the transpose superinvolution, and the conditions in item (ii) of the Theorem hold:
Here a basis $\{x_1, \ldots ,x_n,y_1, \ldots , y_n\}$ of $V$ can be chosen ($\{x_1,
\ldots ,x_n\}$ being a basis of $V_{\bar{0}}$ and $\{y_1, \ldots
,y_n\}$ of $V_{\bar{1}}$), so that the coordinate matrices of the superform and of the idempotents $e$, $e^*$ and $ff^*$ are, respectively,
$$\left(\begin{array} {cccccc} 0 & 0 & 0 & I & 0 & 0\\
0 & 0 & 0 & 0 & I & 0 \\
0 & 0 & 0 & 0 & 0 & I \\
I & 0 & 0 & 0 & 0 & 0\\
0 & I & 0 & 0 & 0 & 0 \\
0 & 0 & I & 0 & 0 & 0\end{array}\right),
\hspace{1cm} \left( \begin{array} {cccccc} I & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & I\end{array} \right),$$
$$\left( \begin{array} {cccccc} 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & I & 0 & 0 & 0 \\
0 & 0 & 0 & I & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0\end{array} \right), \hspace{1cm}
\left( \begin{array} {cccccc}0 & 0 & 0 & 0 & 0 & 0 \\
0 & I & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0& 0 & 0 \\
0 & 0 & 0 & 0 & I & 0 \\
0 & 0 & 0 & 0 & 0 & 0\end{array} \right).$$
This follows from the fact that $e$, $e^*$ and $ff^*$ are orthogonal idempotents with $1=e+e^*+ff^*$, so $$\begin{split}
V{_{\bar 0}}&=S(1,e){_{\bar 0}}\oplus S(1,ff^*){_{\bar 0}}\oplus S(1,e^*){_{\bar 0}},\\
V{_{\bar 1}}&=S(1,e^*){_{\bar 1}}\oplus S(1,ff^*){_{\bar 1}}\oplus
S(1,e){_{\bar 1}},
\end{split}$$ where $S(1,g)$ denotes the eigenspace of the endomorphism $g$ of eigenvalue $1$, and from the fact that $ff^*$ is selfadjoint, so $$V=\bigl(S(1,e){_{\bar 0}}\oplus S(1,e^*){_{\bar 1}}\bigr)\oplus
\bigl(S(1,ff^*){_{\bar 0}}\oplus S(1,ff^*){_{\bar 1}}\bigr)\oplus
\bigl(S(1,e^*){_{\bar 0}}\oplus S(1,e){_{\bar 1}}\bigr).$$
After the natural identifications, the elements of $H(C,*)=H(eA+Ae^*+ff^*Aff^*,*)$ are the matrices (in block form) $$\begin{pmatrix} a_{11} & a_{12} & a_{13} \thickspace \vdots& c_{11} & c_{12} & c_{13} \\
0 & a_{22} & a_{23} \thickspace \vdots & -c_{12}^t & c_{22}& 0 \\
0 & 0 & a_{33} \thickspace \vdots& -c_{13}^t & 0 & 0 \\
\hdotsfor{6}\\
0 & 0 & d_{13} \thickspace \vdots& a_{11}^t & 0 & 0 \\
0 & d_{22} & d_{23} \thickspace \vdots& a_{12}^t & a_{22}^t & 0 \\
d_{13}^t & d_{23}^t & d_{33} \thickspace \vdots & a_{13}^t &
a_{23}^t & a_{33}^t \end{pmatrix}$$ where $c_{11}$, $c_{22}$ are skewsymmetric matrices and $d_{22}$, $d_{33}$ symmetric matrices. Since $H(ff^*Aff^*,*)^{\prime}=ff^*Aff^*$, it follows that $ff^*Aff^*$ is not a quaternion superalgebra and so the order of the blocks in the $(2,2)$ position is $>1$.
This is the setting where routine computations can be applied.
**(ii.2)**: The superinvolution $*$ on $A$ is an orthosymplectic superinvolution, and the conditions in item (ii) of the Theorem hold:
Here, with the same sort of arguments as before, the coordinate matrices, in a suitable basis, of the orthosymplectic superform and of the idempotents $ff^*$, $e$ and $e^*$ are, respectively: $$\left(\begin{smallmatrix}
I & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & I & 0 & 0 & 0 & 0 \\
0 & I & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & I & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & I \\
0 & 0 & 0 & -I & 0& 0 & 0 \\
0 & 0 & 0 & 0 & -I & 0 & 0 \end{smallmatrix}\right),
\hspace{0.5cm} \left(\begin{smallmatrix}
I & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & I & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & I & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \end{smallmatrix}\right), \hspace{0.5cm}
\left( \begin{smallmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & I & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & I & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0\end{smallmatrix} \right), \hspace{0.5cm}
\left( \begin{smallmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & I & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & I\end{smallmatrix} \right).$$
Now, the superinvolution $*$, identifying the elements of $A$ with their coordinate matrices in the basis above, is given by: $$\left( \begin{smallmatrix} a_{11} & a_{12} & a_{13} & a_{14} &
a_{15} & a_{16} & a_{17}
\vspace{0.15cm} \\
a_{21} & a_{22} & a_{23} & a_{24} & a_{25} & a_{26} & a_{27} \vspace{0.15cm} \\
a_{31} & a_{32} & a_{33} & a_{34} & a_{35} & a_{36} & a_{37} \vspace{0.15cm} \\
a_{41} & a_{42} & a_{43} & a_{44} & a_{45} & a_{46} & a_{47} \vspace{0.15cm} \\
a_{51} & a_{52} & a_{53} & a_{54} & a_{55} & a_{56} & a_{57} \vspace{0.15cm} \\
a_{61} & a_{62} & a_{63} & a_{64} & a_{65} & a_{66} & a_{67} \vspace{0.15cm} \\
a_{71} & a_{72} & a_{73} & a_{74} & a_{75} & a_{76} & a_{77} \end{smallmatrix} \right) \rightarrow \left( \begin{smallmatrix}
a_{11}^t & a_{31}^t & a_{21}^t & a_{61}^t & a_{71}^t & -a_{41}^t & -a_{51}^t \\
a_{13}^t & a_{33}^t & a_{23}^t & a_{63}^t & a_{73}^t & -a_{43}^t & -a_{53}^t \\
a_{12}^t & a_{32}^t & a_{22}^t & a_{62}^t & a_{72}^t & -a_{42}^t & -a_{52}^t \\
-a_{16}^t & -a_{36}^t & -a_{26}^t & a_{66}^t & a_{76}^t & -a_{46}^t & -a_{56}^t \\
-a_{17}^t & -a_{37}^t & -a_{27}^t & a_{67}^t & a_{77}^t & -a_{47}^t & -a_{57}^t \\
a_{14}^t & a_{34}^t & a_{24}^t & -a_{64}^t & -a_{74}^t & a_{44}^t & a_{54}^t \\
a_{15}^t & a_{35}^t & a_{25}^t & -a_{65}^t & -a_{75}^t & a_{45}^t &
a_{55}^t \end{smallmatrix} \right).$$ Therefore the Jordan superalgebra $H(A,*)$ consists of the following matrices: $$\begin{pmatrix}
a_{11} & a_{12} & a_{13} \hspace{0.4cm} \vdots & a_{14} & a_{15} & a_{16} & a_{17} \\
a_{13}^t & a_{22} & a_{23} \hspace{0.4cm} \vdots & a_{24} & a_{25} & a_{26} & a_{27} \\
a_{12}^t & a_{32} & a_{22}^t \hspace{0.4cm} \vdots & a_{34} & a_{35} & a_{36} & a_{37} \\
\hdotsfor{7}\\
-a_{16}^t & -a_{36}^t & -a_{26}^t\vdots & a_{44} & a_{45} & a_{46} & a_{47} \\
-a_{17}^t & -a_{37}^t & -a_{27}^t \vdots & a_{54} & a_{55} & -a_{47}^t & a_{57} \\
a_{14}^t & a_{34}^t & a_{24}^t \hspace{0.4cm} \vdots & a_{64} & a_{65} & a_{44}^t & a_{54}^t \\
a_{15}^t & a_{35}^t & a_{25}^t \hspace{0.4cm} \vdots & -a_{65}^t &
a_{75} & a_{45}^t & a_{55}^t \end{pmatrix},$$ where $a_{11}, a_{23}, a_{32} $ are symmetric matrices, while $a_{46}, a_{57}, a_{64}, a_{75}$ are skewsymmetric matrices. Besides, the elements of $H(C,*)=H(eA+Ae^*+ff^*Aff^*,*)$ are the matrices which, in block form, look like $$\begin{pmatrix}
a_{11} & 0 & a_{13} \hspace{0.4cm} \vdots & a_{14} & 0 & a_{16} & a_{17} \\
a_{13}^t & a_{22} & a_{23} \hspace{0.4cm} \vdots & a_{24} & a_{25} & a_{26} & a_{27} \\
0 & 0 & a_{22}^t \hspace{0.4cm} \vdots & 0 & 0 & 0 & a_{37} \\
\hdotsfor{7}\\
-a_{16}^t & 0 & -a_{26}^t\vdots & a_{44} & 0 & a_{46} & a_{47} \\
-a_{17}^t & -a_{37}^t & -a_{27}^t \vdots & a_{54} & a_{55} & -a_{47}^t & a_{57} \\
a_{14}^t & 0 & a_{24}^t \hspace{0.4cm} \vdots & a_{64} & 0 & a_{44}^t & a_{54}^t \\
0 & 0 & a_{25}^t \hspace{0.4cm} \vdots & 0 & 0 & 0 & a_{55}^t \end{pmatrix}.$$ Now again routine arguments with matrices give the result.
$B^\prime = A$ and $B$ is not semisimple.
-----------------------------------------
As for the maximal subalgebras of the Jordan superalgebras $A^+$, this situation does not appear in the ungraded case [@Ra1]. However, consider the associative superalgebra $A=M_{1,2}(F)$, with the natural orthosymplectic superinvolution. Thus, the Jordan superalgebra $J=H(A,*)$ is $$J=osp_{1,2}=\left\{\begin{pmatrix} a&-c&b\\ b&d&0\\
c&0&d\end{pmatrix} : a,b,c,d\in F\right\}.$$ The subspace $$B=\left\{\begin{pmatrix} a&-b&b\\ b&d&0\\ b&0&d\end{pmatrix}:
a,b,d\in F\right\}$$ is a maximal subalgebra of $J$, and it satisfies $B'=A$, while it is not semisimple, as its radical coincides with its odd part.
**Question:** Is this, up to isomorphism, the only possible example of a maximal subalgebra $B$ of $H(A,*)$, $A$ being a simple finite-dimensional associative superalgebra with superinvolution over an algebraically closed field $F$ of characteristic $0$, such that $B'=A$ and $B$ is not semisimple?
It seems that a broader knowledge of non-semisimple Jordan superalgebras is needed here.
The solution to the above question is also related to the Question after Theorem \[th:BsemiinHAstar\]. Actually, if this question is answered in the affirmative, then the subalgebra $B$ isomorphic to $D_t$ ($t\ne 0,\pm 1,-2,-\frac{1}{2}$) in Theorem \[th:BsemiinHAstar\] would indeed be maximal in $H(A,*)$. Otherwise, any maximal subalgebra $S$ containing $B$ would satisfy $S'=A$ (as $B'=A$ already) and would not be semisimple (because of Theorem \[th:BsemiinHAstar\]).
[99]{}
G. Benkart, A. Elduque, A new construction of the Kac Jordan superalgebra, [*Proc. Amer. Math. Soc.*]{} [**130**]{} (2002), no. 11, 3209-3217.
E. Dynkin, Semi-simple subalgebras of semi-simple Lie algebras, [*Mat. Sbornik*]{} [**30**]{} (1952), 249-462; [*Amer. Math. Soc. Transl.*]{} [**6**]{} (1957), 111-244.
E. Dynkin, Maximal subgroups of the classical groups, [*Trudy Moskov. Mat. Obsc.*]{} (1952), 39-166; [*Amer. Math. Soc. Transl.*]{} [**6**]{} (1957), 245-378.
A. Elduque, On maximal subalgebras of central simple Malcev algebras, [*J. Algebra*]{} [**103**]{} no.1 (1986), 216-227.
A. Elduque, J. Laliena, S. Sacristán, Maximal subalgebras of associative superalgebras, [*J. Algebra*]{} **275** (2004), no. 1, 40-58.
A. Elduque, J. Laliena, S. Sacristán, The Kac Jordan superalgebra: automorphisms and maximal subalgebras. Preprint arXiv:math.RA/0509040.
L. Hogben, V.G. Kac, Erratum: Classification of simple ${{\mathbb Z}}$-graded Lie superalgebras and simple Jordan superalgebras \[Comm. Algebra [**5**]{} (1977), no. 13, 1375-1400\] by Kac, [*Comm. Algebra*]{} **11** (1983), no. 10, 1155-1156.
C. Gómez-Ambrosi, *Estructuras de Lie y Jordan en Superálgebras Asociativas con superinvolución*, Doctoral Thesis, Publicaciones del Seminario Matemático García de Galdeano, Serie II, no. 50, 1995, Universidad de Zaragoza.
C. Gómez-Ambrosi, On the Simplicity of Hermitian Superalgebras, [*Nova Journal of Algebra and Geometry*]{} [**3**]{} (1998), 193-198.
C. Gómez-Ambrosi, I. P. Shestakov, On the Lie Structure of the Skew Elements of a Simple Superalgebra with Superinvolution, [*J. Algebra*]{} [**208**]{} (1998), 43-71.
J. E. Humphreys, *Introduction to Lie Algebras and Representation Theory*, Graduate Texts in Mathematics v.6, Springer Verlag, 1972, New York.
N. Jacobson, *Structure Theory of Jordan algebras*, The University of Arkansas Lecture Notes in Mathematics v.5, 1981, Fayetteville.
V.G. Kac, Lie superalgebras, [*Advances in Math.*]{} **26** (1977), no. 1, 8-96.
V. G. Kac, Classification of simple ${{\mathbb Z}}$-graded Lie superalgebras and simple Jordan superalgebras, [*Comm. Algebra*]{} [**5**]{} (1977), no. 13, 1375-1400.
I. L. Kantor, Connection between Poisson brackets and Jordan and Lie superalgebras, Lie Theory, Differential Equations and Representation Theory. Publications CRM, Montreal, 1990, 213-225.
D. King, The split Kac Superalgebra $K_{10},$ [*Comm. Algebra*]{} [**22**]{} (1994), no. 1, 29-40.
C. Martínez, E. Zelmanov, Simple Finite-Dimensional Jordan Superalgebras of Prime Characteristic, [*J. Algebra*]{} [**236**]{} (2001), no. 2, 575-629.
C. Martínez, E. Zelmanov, Specializations of Jordan Superalgebras, [*Can. Math. Bull.*]{} [**45**]{} (2002), no. 4, 653-671.
K. McCrimmon, On Herstein’s Theorems Relating Jordan and Associative algebras, [*J. Algebra*]{} [**13**]{} (1969), 382-392.
K. McCrimmon, Speciality and Non-speciality of Two Jordan Superalgebras, [*J. Algebra*]{} [**149**]{} (1992), no. 2, 326-351.
M. Racine, On Maximal subalgebras, [*J. Algebra*]{} [**30**]{} (1974), 155-180.
M. Racine, Maximal subalgebras of exceptional Jordan algebras, [*J. Algebra*]{} [**46**]{} (1977), no. 1, 12-21.
M. L. Racine, E. I. Zelmanov, Simple Jordan Superalgebras with Semisimple Even Part, [*J. Algebra*]{} [**270**]{} (2003), no. 2, 374-444.
I. P. Shestakov, Universal enveloping algebras of some Jordan superalgebras, personal communication.
A. S. Shtern, Representation of an exceptional Jordan superalgebra, [*Funktsional. Anal. i Prilozhen.*]{} [**21**]{} (1987), 93-94 (Russian), English translation in [*Functional Anal. Appl.*]{} [**21**]{} (1987), 253-254.
C. T. C. Wall, Graded Brauer Groups, [*J. Reine Angew. Math.*]{} [**213**]{} (1964), 187-199.
E. I. Zelmanov, Semisimple Finite-Dimensional Jordan superalgebras, preprint.
[^1]: The first and second authors have been supported by the Spanish Ministerio de Educación y Ciencia and FEDER (MTM 2004-081159-C04-02), and the second and third authors by the Comunidad Autónoma de La Rioja. The first author also acknowledges support by the Diputación General de Aragón (Grupo de Investigación de Álgebra).
---
abstract: 'The collaborative ranking problem has been an important open research question as most recommendation problems can be naturally formulated as ranking problems. While much of collaborative ranking methodology assumes static ranking data, the importance of temporal information to improving ranking performance is increasingly apparent. Recent advances in deep learning, especially the discovery of various attention mechanisms and newer architectures in addition to the widely used RNN and CNN in natural language processing, have allowed us to make better use of the temporal ordering of items that each user has engaged with. In particular, the SASRec model, inspired by the popular Transformer model in natural language processing, has achieved state-of-the-art results in the temporal collaborative ranking problem and enjoyed more than 10x speed-up when compared to earlier CNN/RNN-based methods. However, SASRec is inherently an un-personalized model and does not include personalized user embeddings. To overcome this limitation, we propose a Personalized Transformer (SSE-PT) model, outperforming SASRec by almost 5% in terms of NDCG@10 on 5 real-world datasets. Furthermore, after examining some random users’ engagement history and corresponding attention heat maps used during the inference stage, we find our model is not only more interpretable but also able to focus on recent engagement patterns for each user. Moreover, our SSE-PT model with a slight modification, which we call SSE-PT++, can handle extremely long sequences and outperform SASRec in ranking results with comparable training speed, striking a balance between performance and speed requirements. Code and data are open sourced at <https://github.com/wuliwei9278/SSE-PT>.'
author:
- |
Liwei Wu\
Department of Statistics\
University of California, Davis\
Davis, CA 95616\
`liwu@ucdavis.edu`\
Shuqing Li\
Department of Computer Science\
University of California, Davis\
Davis, CA 95616\
`qshli@ucdavis.edu`\
Cho-Jui Hsieh\
Department of Computer Science\
University of California, Los Angeles\
Los Angeles, CA 90095\
`chohsieh@cs.ucla.edu`\
James Sharpnack\
Department of Statistics\
University of California, Davis\
Davis, CA 95616\
`jsharpna@ucdavis.edu`\
bibliography:
- 'neurips\_2019.bib'
title: |
Temporal Collaborative Ranking\
Via Personalized Transformer
---
Introduction
============
Recommendation systems are increasingly prevalent due to content delivery platforms, e-commerce websites, and mobile apps [@shani2008mining]. Most recommendation problems can be naturally thought of as predicting the user’s partial ranking of a large candidate pool of items. After obtaining the optimal ranking ordering, the recommender system can simply recommend top-$K$ items in the list for each individual user. Usually rankings are made personalized to cater to users’ special tastes. In the literature, this is formulated as the collaborative ranking problem [@weimer2008cofi]. The temporal ordering, determined by when users engaged with items, has proven to be an important resource to further improve ranking performance. We call the collaborative ranking setting with temporal ordering information the Temporal Collaborative Ranking problem in this paper.
Recent advances in deep learning, especially the discovery of various attention mechanisms [@bahdanau2014neural; @sutskever2014sequence] and newer architectures [@vaswani2017attention; @devlin2018bert] in addition to the classical RNN and CNN architectures in natural language processing, have allowed us to make better use of the temporal ordering of items that each user has engaged with. In particular, the SASRec model [@kang2018self], inspired by the popular Transformer model in natural language processing, has achieved state-of-the-art results in the temporal collaborative ranking problem and enjoyed more than 10x speed-up compared to earlier RNN [@hidasi2015session] / CNN [@tang2018personalized] based methods. At a closer look, however, SASRec is inherently an un-personalized model that does not introduce user embeddings, and this often leads to an inferior recommendation model in terms of both ranking performance and model interpretability. Although personalization is not needed for the original Transformer model [@vaswani2017attention] in natural language understanding or translation, personalization has played a crucial role throughout the recommender system literature [@zhang2019deep] ever since the matrix factorization approach to the Netflix prize [@koren2009bellkor].
In this work, we propose a novel method, Personalized Transformer (SSE-PT), that introduces personalization into self-attentive neural network architectures. [@kang2018self] found that adding additional personalized embeddings did not improve the performance of their Transformer model, and postulated that this is due to the fact that they already use the user history, so the embeddings only contribute to overfitting. Although introducing user embeddings into the model is indeed difficult with existing regularization techniques for embeddings, we show that personalization can greatly improve ranking performance with the recent regularization technique called Stochastic Shared Embeddings (SSE) [@wu2019stochastic]. Our Personalized Transformer (SSE-PT) model with SSE regularization works well on all 5 real-world datasets we consider, outperforming the previous state-of-the-art algorithm SASRec by almost 5% in terms of NDCG@10. Furthermore, after examining some random users’ engagement histories and the corresponding attention heat maps used during the inference stage, we find our model is not only more interpretable but also able to focus on recent engagement patterns for each user. Moreover, our SSE-PT model with a slight modification, which we call SSE-PT++, can handle extremely long sequences and outperform SASRec in ranking results with comparable training speed, striking a balance between performance and speed requirements.
Related Work
============
Collaborative Filtering and Ranking
-----------------------------------
Recommender systems can be divided into those designed for explicit feedback, such as ratings [@koren2009matrix], and those for implicit feedback, based on user engagement [@hu2008collaborative]. Recently, implicit feedback datasets, such as user clicks of web pages, check-ins of restaurants, likes of posts, listening behavior of music, watching history and purchase history, are increasingly prevalent. Unlike explicit feedback, implicit feedback datasets can be obtained without users noticing or actively participating. Item-to-item [@sarwar2001item], user-to-user [@wang2006unifying], and user-to-item [@koren2009matrix] are three different angles of utilizing user engagement data. In item-to-item approaches, the goal is to recommend items similar to those a user has engaged with. In user-to-user approaches, the goal is to recommend to a user some items that similar users have engaged with previously. User-to-item approaches, on the other hand, focus on examining user and item relationships as a whole, which is also referred to as a collaborative filtering approach. These relationships can also be viewed as graphs [@wu2019graph].
Two main approaches to recommendation are to predict the explicit or implicit feedback with matrix (or tensor) completion, or to predict the relative rankings derived from the feedback. Collaborative filtering algorithms, including matrix factorization [@hill1995recommending; @schafer2007collaborative; @koren2008factorization; @mnih2008probabilistic; @hu2008collaborative], which predict the feedback in a pointwise fashion as if it were a supervised learning problem, fall into the first category. Predicting the feedback with supervised learning objectives suffers from the different rating standards of users, and it can be helpful to consider the data to simply be the ranking of the items based on feedback. There are two main approaches to the collaborative ranking problem, namely pairwise and listwise methods. Pairwise methods [@rendle2009bpr; @wu2017large] consider each pairwise comparison for a user as a label, which implicitly models the pairwise comparisons as independent observations. Listwise methods [@wu2018sql], on the other hand, consider a user’s entire engagement history as independent observations. Normally, in terms of ranking performance, listwise approaches outperform pairwise approaches, and pairwise approaches outperform pointwise collaborative filtering [@wu2018sql].
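To make the pairwise approach concrete, the BPR objective of [@rendle2009bpr], for instance, treats each triple (user $i$, engaged item $k$, non-engaged item $l$) as one observation and minimizes $-\log\sigma(r_{ik}-r_{il})$ over latent-factor scores. A minimal sketch (all function and variable names here are our own):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bpr_loss(U, V, triples):
    """Mean BPR loss over triples (user, engaged item, non-engaged item).

    U: (n_users, d) user factors; V: (n_items, d) item factors.
    The score of item k for user u is the inner product U[u] . V[k].
    """
    losses = []
    for u, k, l in triples:
        x_ukl = U[u] @ V[k] - U[u] @ V[l]   # score difference r_uk - r_ul
        losses.append(-np.log(sigmoid(x_ukl)))
    return float(np.mean(losses))

rng = np.random.default_rng(0)
U, V = rng.normal(size=(4, 8)), rng.normal(size=(10, 8))
loss = bpr_loss(U, V, [(0, 1, 2), (3, 5, 7)])
assert loss > 0
```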
Session-based and Sequential Recommendation
-------------------------------------------
Both session-based and sequential (i.e. next-basket) recommendation algorithms take advantage of additional temporal information to make better personalized recommendations. The main difference between session-based recommendations [@hidasi2015session] and sequential recommendations [@kang2018self] is that the former assumes that the user ids are not recorded and therefore the lengths of engagement sequences are relatively short. Therefore, session-based recommendations normally do not consider user factors. On the other hand, sequential recommendation treats each sequence as a user’s engagement history [@kang2018self]. Both settings do not explicitly require time-stamps: only the relative temporal orderings are assumed known (in contrast to, for example, timeSVD++ [@koren2009collaborative]). Initially, sequence data in temporal order were usually modelled with Markov models, in which the next observation is conditioned on the last few observed items [@rendle2010factorizing]. In [@rendle2010factorizing], a personalized Markov model with user latent factors is proposed for more personalized results. In recent years, deep learning techniques, borrowing from natural language processing (NLP) literature, are more widely used in tackling sequential data. Like sentences in NLP, sequence data in recommendations can be similarly modelled by recurrent neural network (RNN) [@hidasi2015session; @hidasi2018recurrent] and convolutional neural network (CNN) [@tang2018personalized] models. Later on, attention models are getting more and more attention in both NLP, [@vaswani2017attention; @devlin2018bert], and recommender systems, [@liu2018stamp; @kang2018self]. SASRec [@kang2018self] is a recent method with state-of-the-art performance among the many deep learning models. Motivated by the Transformer model in neural machine translation [@vaswani2017attention], SASRec utilizes a similar architecture to the encoder part of the Transformer model.
Regularization Techniques
-------------------------
In deep learning, models with many more parameters than data points can easily overfit training data. This may prevent us from adding user embeddings as additional parameters into complicated models like the Transformer model [@kang2018self], which can easily have 20 layers with 6 self-attention blocks and millions of parameters for a medium-sized dataset like Movielens10M [@harper2016movielens]. $\ell_2$ regularization [@hoerl1970ridge] is the most widely used approach and has been used in many matrix factorization models in recommender systems; $\ell_1$ regularization [@tibshirani1996regression] is used when a sparse model is preferred. For deep neural networks, it has been shown that $\ell_p$ regularizations are often too weak, while dropout [@hinton2012improving; @srivastava2014dropout] is more effective in practice. There are many other regularization techniques, including parameter sharing [@goodfellow2016deep], max-norm regularization [@srebro2005maximum], gradient clipping [@pascanu2013difficulty], etc. Very recently, a new regularization technique called Stochastic Shared Embeddings (SSE) [@wu2019stochastic] is proposed as a new means of regularizing embedding layers. [@wu2019stochastic] develops two versions of SSE, SSE-Graph and SSE-SE. We find that SSE-SE is essential to the success of our Personalized Transformer (SSE-PT) model.
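The SSE-SE variant, as described in [@wu2019stochastic], can be stated very simply: during training, each embedding index is replaced by a uniformly sampled index with some small probability before the lookup, while at inference time no replacement is done. A minimal sketch of the lookup step (function and parameter names are ours):

```python
import numpy as np

def sse_se_lookup(table, idx, p, rng, training=True):
    """Embedding lookup with Stochastic Shared Embeddings (SSE-SE).

    With probability p, each index in `idx` is replaced by a uniformly
    random row index of `table` before the lookup; at inference time
    the lookup is the usual deterministic one.
    """
    idx = np.asarray(idx).copy()
    if training:
        replace = rng.random(idx.shape) < p
        idx[replace] = rng.integers(0, table.shape[0], size=replace.sum())
    return table[idx]

rng = np.random.default_rng(0)
table = rng.normal(size=(100, 16))            # 100 embeddings of dim 16
emb = sse_se_lookup(table, [3, 14, 15], p=0.1, rng=rng)
assert emb.shape == (3, 16)
```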
Methodology
===========
Temporal Collaborative Ranking
------------------------------
Let us formally define the temporal collaborative ranking problem as follows: given $n$ users, each engaging with a subset of $m$ items in a temporal order, the goal is to find an optimal personalized ranking ordering of the top $K$ items out of the total $m$ items for any given user at any given time point. We assume our data consists of the sequences of items that each of the $n$ users has interacted with so far, namely $$\label{eq:input}
s_i=(j_{i1}, j_{i2}, \dots, j_{iT}) \text{ for } 1 \leq i \leq n.$$ Each sequence $s_i$ of length $T$ contains the indices of the last $T$ items that user $i$ has interacted with, in temporal order (from old to new). For different users, the sequence lengths can be very different (we pad the shorter sequences to obtain length $T$). We cannot simply randomly split data points into train/validation/test sets because they come in temporal order: we need to make sure the training data temporally precedes the validation data, which in turn precedes the test data. We use the last items in the sequences as the test set, the second-to-last items as the validation set, and the rest as the training set. We use ranking metrics such as NDCG@$K$ and Recall@$K$ for evaluation, which are defined in and .
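The leave-last-two-out split described above can be sketched as follows (a minimal illustration; each sequence is a list of item indices in temporal order):

```python
def temporal_split(sequences):
    """Leave-last-two-out split: per user, the last item goes to the
    test set, the second-to-last to validation, the rest to training."""
    train, val, test = {}, {}, {}
    for user, seq in sequences.items():
        train[user] = seq[:-2]
        val[user] = seq[-2]
        test[user] = seq[-1]
    return train, val, test

train, val, test = temporal_split({0: [5, 2, 9, 7], 1: [1, 3, 4]})
assert train == {0: [5, 2], 1: [1]}
assert val == {0: 9, 1: 3}
assert test == {0: 7, 1: 4}
```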
Personalized Transformer Architecture
-------------------------------------
Our model is motivated by the Transformer model in [@vaswani2017attention] and [@kang2018self]. In the following sections, we are going to examine each component of our Personalized Transformer (SSE-PT) model: the embedding layer, self-attention layer, pointwise feed-forward layer, prediction layer, layer normalization, dropout, weight decay, stochastic shared embeddings, and so on.
### Embedding Layer
We define a learnable user embedding look-up table $U \in R^{n \times d_u}$ and item embedding look-up table $V \in R^{m \times d_i}$, where $n$ is the number of users, $m$ is the number of items and $d_u$, $d_i$ are the number of hidden units for user and item respectively. We also specify learnable positional encoding table $P \in R^{T \times d}$, where $d = d_u + d_i$. So each input sequence $s_i \in R^T$ will be represented by the following embedding: $$\label{eq:emb}
E = \begin{bmatrix}
[v_{j_{i1}}\text{; } u_i] + p_1 \\
[v_{j_{i2}}\text{; } u_i] + p_2 \\
\vdots \\
[v_{j_{iT}}\text{; } u_i] + p_T
\end{bmatrix} \in R^{T \times d},$$ where $[v_{j_{it}}; u_i]$ represents concatenating item embedding $v_{j_{it}} \in R^{d_i}$ and user embedding $u_i \in R^{d_u}$ into embedding $E_t \in R^d$ for time $t$. Note that the main difference between our model and [@kang2018self] is that we introduce the user embeddings $u_i$, making our model personalized.
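In code, the construction of $E$ for one user can be sketched as follows (a minimal numpy illustration; all dimensions below are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 5, 20, 4                 # users, items, sequence length
d_u, d_i = 3, 6                    # user / item embedding dimensions
d = d_u + d_i

U = rng.normal(size=(n, d_u))      # user embedding table
V = rng.normal(size=(m, d_i))      # item embedding table
P = rng.normal(size=(T, d))        # learnable positional encodings

def input_embedding(user, seq):
    """E_t = [v_{j_t}; u_user] + p_t, stacked over t = 1..T."""
    rows = [np.concatenate([V[j], U[user]]) + P[t]
            for t, j in enumerate(seq)]
    return np.stack(rows)          # shape (T, d)

E = input_embedding(user=2, seq=[7, 1, 19, 3])
assert E.shape == (T, d)
```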
### Self-Attention Layer
The self-attention layer is defined as: $$S = \text{SA}(E) = \text{Attention}\left(E W^{(Q)}\text{, } E W^{(H)}\text{, } E W^{(V)}\right),$$ where $W^{(Q)}, W^{(H)}, W^{(V)} \in R^{d \times d}$ and $$\text{Attention}(Q, H, V) = \text{softmax}\left(\frac{Q H^T}{\sqrt{d}} \right)\cdot V.$$
The attention layer we actually use is a masked one, because each position should attend only to the past, not to the future. Therefore, all links between $Q_i$ and $H_j$ for $j > i$ are forbidden. We find that using bidirectional attention leads to significantly worse performance.
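The masked (causal) attention described above can be sketched in NumPy as follows; row $i$ of the softmax is restricted to positions $j \le i$ via an upper-triangular $-\infty$ mask:

```python
import numpy as np

def masked_self_attention(E, WQ, WH, WV):
    """Causal scaled dot-product self-attention: position i attends only
    to positions j <= i; links with j > i are masked out."""
    T, d = E.shape
    Q, H, V = E @ WQ, E @ WH, E @ WV
    scores = Q @ H.T / np.sqrt(d)
    scores[np.triu_indices(T, k=1)] = -np.inf      # forbid the future
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    return (w / w.sum(axis=1, keepdims=True)) @ V
```

Because of the mask, the output at position $i$ is unchanged by any edit to later positions of the input sequence.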
### Pointwise Feed-Forward layer
After feeding embeddings into the self-attention layer, we want to add non-linearity to the resulting $S \in R^{T \times d}$ for each sequence. Therefore, we add a pointwise feed-forward layer after the self-attention layer, consisting of two fully connected layers: $$F = \text{FC}(S) = \text{Relu}(S W + b) \cdot \tilde{W} + \tilde{b},$$ where $W$, $\tilde{W} \in R^{d \times d}$ are the weight matrices and $b$, $\tilde{b} \in R^{d}$ are the bias terms.
### Self-Attention Blocks
We combine self-attention layer and pointwise feed-forward layer to form a self-attention (SA) block. One block consists of one self-attention layer and two fully connected layers. We can stack blocks by feeding the output of first block $F^{(1)}$ as the input of the second block, i.e. $$S^{(2)} = \text{SA}(F^{(1)}).$$ We use $B$ to denote the number of attention blocks.
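Combining the two pieces, one self-attention block and the stacking of $B$ blocks can be sketched as follows (a minimal NumPy illustration; residual connections and layer normalization, described later, are omitted for brevity, and the small weight scale is only to keep the toy forward pass numerically tame):

```python
import numpy as np

def sa_block(X, WQ, WH, WV, W1, b1, W2, b2):
    """One self-attention block: masked self-attention followed by the
    two-layer pointwise feed-forward network Relu(S W1 + b1) W2 + b2."""
    T, d = X.shape
    Q, H, V = X @ WQ, X @ WH, X @ WV
    scores = Q @ H.T / np.sqrt(d)
    scores[np.triu_indices(T, k=1)] = -np.inf          # causal mask
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    S = (w / w.sum(axis=1, keepdims=True)) @ V         # self-attention
    return np.maximum(S @ W1 + b1, 0.0) @ W2 + b2      # pointwise FFN

rng = np.random.default_rng(0)
T, d, B = 6, 8, 2
X = rng.normal(size=(T, d))
for _ in range(B):                                     # stack B blocks
    WQ, WH, WV, W1, W2 = (rng.normal(size=(d, d)) * 0.1 for _ in range(5))
    X = sa_block(X, WQ, WH, WV, W1, np.zeros(d), W2, np.zeros(d))
assert X.shape == (T, d)
```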
![Illustration of our proposed SSE-PT model[]{data-label="fig:SSE-PT"}](attention_model.png){width="0.8\columnwidth"}
### Prediction Layer
As to the prediction layer, the predicted probability that user $i$ rates item $l$ at time $t$ is: $$p_{itl} = \sigma(r_{itl}),$$ where $\sigma$ is the sigmoid function and $r_{itl}$ is the predicted score of item $l$ by user $i$ at time point $t$, defined as: $$\label{eq:pred}
r_{itl} = F_{t-1}^{B} \cdot [v_l\text{; } u_i].$$ Although we could use another set of user and item embedding look-up tables for $u_i$ and $v_l$, we find it better to use the same look-up tables $U, V$ as in the embedding layer. To distinguish the $u_i$ and $v_l$ in the prediction layer from the $u_i, v_j$ in the embedding layer, we call the former output embeddings and the latter input embeddings.
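Scoring every candidate item with the shared look-up tables can be sketched as follows (toy random tables; in the model, $F_{t-1}^{B}$ would be the output of the last attention block at the previous time step):

```python
import numpy as np

def predict_scores(F_last, U, V, user):
    """r_{itl} = F_{t-1}^B . [v_l ; u_i] for every candidate item l,
    reusing the same user/item look-up tables as the embedding layer."""
    m = V.shape[0]
    # Concatenate each item embedding with this user's embedding: (m, d).
    cand = np.concatenate([V, np.tile(U[user], (m, 1))], axis=1)
    return cand @ F_last                     # one score per item

rng = np.random.default_rng(0)
d_u, d_i, m = 3, 5, 10
U, V = rng.normal(size=(4, d_u)), rng.normal(size=(m, d_i))
F_last = rng.normal(size=(d_u + d_i,))       # final block output at t-1
r = predict_scores(F_last, U, V, user=2)
top_k = np.argsort(-r)[:5]                   # top-5 recommendations
```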
There are multiple ways to define the loss of our model; a popular choice in previous work is the BPR loss [@rendle2009bpr; @hidasi2018recurrent]: $$\label{eq:bprloss}
\sum\nolimits_{i}\sum\nolimits_{t=1}^{T-1} -\sum_{k \in \Omega} \log \big[ \sigma (r_{itl} - r_{itk})\big],$$ where $\sigma$ is the sigmoid function, $r_{itl}$ is the predicted score of the positive item $l$ at time point $t$ for user $i$, $r_{itk}$ is the predicted score of the negative item, and set of negative items is defined as $\Omega = \{1 \leq j \leq m : j \neq j_{it}, \forall 1 \leq t \leq T \}$. At time point $t$, the positive item index is $l = j_{i(t+1)}$ in , and negative item index $k$ satisfies $k \in \Omega$.
We find the BPR loss does not perform as well as the binary cross entropy loss in practice. The binary cross entropy loss between the predicted probability for the positive item $l = j_{i (t+1)}$ and one uniformly sampled negative item $k \in \Omega$ is given by $-[\log (p_{itl}) + \log(1 - p_{itk})]$. Summing over $s_i$ and $t$, we obtain the objective function to minimize: $$\sum\nolimits_{i}\sum\nolimits_{t=1}^{T-1} \sum_{k\in \Omega}-\big[\log (p_{itl}) + \log(1 - p_{itk})\big].$$ At inference time, top-$K$ recommendations for user $i$ at time $t$ are made by sorting the scores $r_{itl}$ for all items $l$ and recommending the first $K$ items in the sorted list.
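For a single (positive, sampled negative) pair, the training objective can be sketched as below (illustrative only; item indices starting at 1, with 0 reserved for padding, is our assumption):

```python
import numpy as np

def bce_pair_loss(r_pos, r_neg):
    """Binary cross entropy with one sampled negative:
    -[log(sigmoid(r_pos)) + log(1 - sigmoid(r_neg))]."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    return -(np.log(sigmoid(r_pos)) + np.log(1.0 - sigmoid(r_neg)))

def sample_negative(m, consumed, rng):
    """Uniformly sample one item from Omega, i.e. an item the user
    never interacted with (rejection sampling)."""
    while True:
        k = int(rng.integers(1, m + 1))
        if k not in consumed:
            return k

rng = np.random.default_rng(0)
k = sample_negative(m=100, consumed={3, 17, 42}, rng=rng)
loss = bce_pair_loss(r_pos=2.0, r_neg=-1.0)
```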
Personalized Transformer Regularization Techniques
--------------------------------------------------
### Layer Normalization
Layer normalization [@ba2016layer] normalizes neurons within a layer. Previous studies [@ba2016layer] show it is more effective than batch normalization for training recurrent neural networks (RNNs). An alternative is batch normalization [@ioffe2015batch], but we find it does not work as well as layer normalization in practice, even for a reasonably large batch size of 128. Therefore, our SSE-PT model adopts layer normalization.
### Residual Connections
Residual connections were first proposed in ResNet for image classification problems [@he2016deep]. Recent research finds that residual connections help in training very deep neural networks even when they are not convolutional neural networks [@vaswani2017attention]. Using residual connections allows us to train very deep neural networks here. For example, the best performing model for the Movielens10M dataset in Table \[tb:block\] is SSE-PT with 6 attention blocks, in which $1 + 6 \times 3 + 1 = 20$ layers are trained end-to-end.
### Weight Decay
Weight decay [@krogh1992simple], also known as $l_2$ regularization [@hoerl1970ridge], is applied to all embeddings, including both user and item embeddings.
### Dropout
Dropout [@srivastava2014dropout] is applied to the embedding layer $E$, the self-attention layer and the pointwise feed-forward layer by stochastically dropping some percentage of hidden units to prevent co-adaptation of neurons. Dropout has been shown to be an effective way of regularizing deep learning models.
### Stochastic Shared Embeddings
Unlike the previous SASRec model [@kang2018self], our SSE-PT model uses one more regularization technique specifically for the embedding layer, in addition to the ones listed above: Stochastic Shared Embeddings (SSE) [@wu2019stochastic]. The reason we use this additional regularization technique is that we find the existing well-known techniques, such as layer normalization, dropout and weight decay, cannot prevent the model from over-fitting badly after user embeddings are introduced. We apply this regularization, SSE-SE, to our SSE-PT model, and we find it makes it possible to train this personalized model, which has $O(n d_u)$ more parameters.
The main idea of SSE is to stochastically replace embeddings with other embeddings during SGD, which has the effect of regularizing the embedding layers. Specifically, SSE-SE replaces one embedding with another embedding stochastically with probability $p$, called the SSE probability in [@wu2019stochastic]. There are 3 different places in our model where SSE-SE can be applied: input/output user embeddings, input item embeddings, and output item embeddings, with probabilities $p_u$, $p_i$ and $p_y$ respectively. Note that the input user embedding and output user embedding are always replaced at the same time with SSE probability $p_u$. Empirically, we find that applying SSE-SE to user embeddings and output item embeddings always helps, but applying it to input item embeddings is only useful when the average sequence length is large, e.g. more than 100 as in the Movielens1M and Movielens10M datasets. In summary, layer normalization and dropout are used in all layers except the prediction layer; residual connections are used in both the self-attention layer and the pointwise feed-forward layer; SSE-SE is used in the embedding layer and the prediction layer.
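The SSE-SE operation itself is simple to sketch: during training, each embedding index is replaced by a uniformly random index with probability $p$ before the look-up (illustrative NumPy; note that applying this to user indices with $p_u$ swaps the input and output user embeddings together, since both share the same index):

```python
import numpy as np

def sse_se(indices, table_size, p, rng):
    """Stochastic Shared Embeddings (SSE-SE): with probability p, replace
    each index by a uniformly random one, so a different row of the
    embedding table is used (and updated) during this SGD step."""
    indices = np.array(indices, copy=True)
    swap = rng.random(indices.shape) < p
    indices[swap] = rng.integers(0, table_size, size=int(swap.sum()))
    return indices

rng = np.random.default_rng(0)
item_seq = np.array([3, 7, 1, 4])
noisy_seq = sse_se(item_seq, table_size=100, p=0.3, rng=rng)  # train only
```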
Handling Long Sequences: SSE-PT++
---------------------------------
To handle extremely long sequences, a slight modification can be made to the way input sequences $s_i$ are fed into the SSE-PT neural network. We call the enhanced model SSE-PT++ to distinguish it from the standard SSE-PT model, which cannot handle sequences longer than $T$.
Sometimes we want to make use of extremely long sequences, $s_i=(j_{i1}, j_{i2}, \dots, j_{it}) \text{ for } 1 \leq i \leq n$, where $t > T$, but our SSE-PT model can only handle sequences of maximum length $T$. The simplest fix is to sample a starting index $1 \leq v \leq t$ uniformly and use $s_i=(j_{iv}, j_{i(v+1)}, \dots, j_{iz})$, where $z = \min (t, v + T - 1)$. Although sampling the starting index uniformly from $[1, t]$ accommodates long sequences of length $t > T$, it does not work very well in practice: uniform sampling ignores the importance of recent items in a long sequence. To solve this dilemma, we introduce an additional hyper-parameter $p_s$, which we call the [*sampling probability*]{}: with probability $p_s$, we sample the starting index $v$ uniformly from $[1, t - T]$ and use the sequence $s_i=(j_{iv}, j_{i(v+1)}, \dots, j_{i(v+T-1)})$ as input; with probability $1 - p_s$, we simply use the most recent $T$ items $(j_{i(t-T+1)}, \dots, j_{it})$ as input. If the sequence $s_i$ is already shorter than $T$, we simply set $p_s = 0$ for user $i$.
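This input-sampling scheme can be sketched as follows (0-based indexing for brevity):

```python
import numpy as np

def sample_input_window(seq, T, p_s, rng):
    """SSE-PT++ input construction for one user (0-based indexing):
    with probability p_s, use a window of length T starting at a
    uniformly sampled index; otherwise use the most recent T items."""
    t = len(seq)
    if t <= T:
        return list(seq)                     # p_s is effectively 0 here
    if rng.random() < p_s:
        v = int(rng.integers(0, t - T + 1))  # uniform start index
        return list(seq[v:v + T])
    return list(seq[-T:])                    # favor the recent T items

rng = np.random.default_rng(0)
long_seq = list(range(1, 21))                # t = 20 > T = 5
window = sample_input_window(long_seq, T=5, p_s=0.3, rng=rng)
```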
Our proposed SSE-PT++ model can work almost as well as SSE-PT with a much smaller $T$: one can see in Table \[tb:dl-ml2\] that with $T = 100$, SSE-PT++ performs almost as well as SSE-PT. The time complexity of the SSE-PT model is of order $O(T^2 d + T d^2)$, so halving $T$ yields up to a theoretical 4x speed-up in training and inference, since the dominant $O(T^2 d)$ term shrinks by a factor of 4. As to space complexity, both SSE-PT and SSE-PT++ are of order $O(nd_u + md_i + Td + d^2)$. When the numbers of users and items scale up, Tensorflow will automatically store the user and item embedding look-up tables in RAM instead of GPU memory.
{width="1\columnwidth"}
\[tb:datasets\]
Experiments
===========
In this section, we compare our proposed algorithms, the Personalized Transformer (SSE-PT) and SSE-PT++, with other state-of-the-art algorithms on real-world datasets. We implement our code in TensorFlow and conduct all experiments on a server with a 40-core Intel Xeon E5-2630 v4 @ 2.20GHz CPU, 256GB RAM and Nvidia GTX 1080 GPUs.
Datasets
--------
We use 5 datasets; the first 4 have exactly the same train/dev/test splits as in [@kang2018self]:
- Beauty category from Amazon product review datasets. [^1]
- Games category from the same source.
- Steam dataset introduced in [@kang2018self]. It contains reviews crawled from a large video game distribution platform.
- Movielens1M dataset [@harper2016movielens], a widely used benchmark dataset containing one million user movie ratings.
- Movielens10M dataset with ten million user ratings.
Detailed dataset statistics are given in Table \[tb:datasets\]. One can easily see that the first 3 datasets have very short sequences while the last 2 datasets have very long sequences.
Evaluation Metrics
------------------
The evaluation metrics we use are standard ranking metrics, namely NDCG and Recall for top recommendations:
- NDCG$@K$: defined as: $$\label{eq:ndcg}
\text{NDCG}@K = \frac{1}{n} \sum_{i = 1}^{n} \frac{\text{DCG}@K(i, \Pi_i)}{\text{DCG}@K(i, \Pi_i^*)},$$ where $i$ represents $i$-th user and $$\text{DCG}@K(i, \Pi_i)= \sum_{l = 1}^{K} \frac{2^{R_{i\Pi_{il}}} - 1}{\log_2(l + 1)}.$$ In the DCG definition, $\Pi_{il}$ represents the index of the $l$-th ranked item for user $i$ in test data based on the learned score matrix $X$. $R$ is the rating matrix and $R_{ij}$ is the rating given to item $j$ by user $i$. $\Pi_i^*$ is the ordering provided by the ground truth rating.
- Recall@$K$: defined as a fraction of positive items retrieved by the top $K$ recommendations the model makes: $$\label{eq:recall}
\text{Recall}@K = \frac{\sum_{i=1}^{n} \mathbbm{1} \{ \exists 1 \leq l \leq K : R_{i\Pi_{il}} = 1 \}} {n},$$ where we assume there is only a single positive item that the user will engage next, and the indicator function $\mathbbm{1} \{ \exists 1 \leq l \leq K : R_{i\Pi_{il}} = 1 \}$ indicates whether the positive item falls into the top-$K$ positions of the ranked list obtained from the scores predicted in .
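Both metrics can be sketched for a single user, given the relevance of items in ranked order (binary relevance here, matching the single-positive setting above):

```python
import numpy as np

def ndcg_at_k(rel_in_rank_order, k):
    """NDCG@K for one user; rel_in_rank_order[l] is the rating of the
    (l+1)-th ranked item. DCG uses gain (2^r - 1) / log2(rank + 1)."""
    rel = np.asarray(rel_in_rank_order, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    def dcg(r):
        r = r[:k]
        return ((2.0 ** r - 1.0) * discounts[:len(r)]).sum()
    idcg = dcg(np.sort(rel)[::-1])           # ideal (ground-truth) order
    return dcg(rel) / idcg if idcg > 0 else 0.0

def recall_at_k(rel_in_rank_order, k):
    """Recall@K with a single positive item: 1 if it is in the top K."""
    return float(np.asarray(rel_in_rank_order[:k]).sum() > 0)
```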
---------------------------------------------------- -------------------------------------------------
{width="0.5\linewidth"} {width="0.48\linewidth"}
---------------------------------------------------- -------------------------------------------------
In the temporal collaborative ranking setting, at time point $t$, the rating matrix $R$ can be formed in two ways: one includes all ratings after $t$; the other includes only ratings at time point $t + 1$. We use the latter, the same setting as [@kang2018self]. For a large dataset with numerous users and items, the evaluation procedure would be slow because it would require computing the rankings of all items, based on their predicted scores, for every single user. To speed up evaluation, we sample a fixed number $C$ of negative candidates while always keeping the positive item that we know the user will engage next. This way, both $R_{ij}$ and $\Pi_i$ are narrowed down to a small set of item candidates, and prediction scores are only computed for those items through a single forward pass of the neural network.
Ideally, we want both NDCG and Recall to be exactly 1: NDCG@$K = 1$ means the positive item is always placed in the top-$1$ position of the top-$K$ ranking list, and Recall@$K = 1$ means the positive item is always contained in the top-$K$ recommendations the model makes. In our evaluation procedure, a larger $C$ or a smaller $K$ makes the recommendation problem harder, because a larger $C$ implies a larger candidate pool and a smaller $K$ demands higher ranking quality.
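With a single positive item ranked against $C$ sampled negatives, both metrics reduce to simple functions of the positive item's rank, which is how the sped-up evaluation can be sketched:

```python
import numpy as np

def sampled_metrics(r_pos, r_negs, k):
    """Rank one positive against C sampled negatives by predicted score.
    With a single positive, NDCG@K = 1/log2(rank + 1) if rank <= K
    (else 0), and Recall@K = 1 if rank <= K (else 0)."""
    rank = 1 + int((np.asarray(r_negs) > r_pos).sum())  # 1-based rank
    hit = rank <= k
    return (1.0 / np.log2(rank + 1) if hit else 0.0,
            1.0 if hit else 0.0)

ndcg, recall = sampled_metrics(r_pos=2.0, r_negs=[0.3, 1.1, 3.2], k=10)
```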
\[tb:dl-ml\]
\[tb:dl-ml2\]
Baselines
---------
We include 5 non-deep-learning and 5 deep-learning algorithms in our comparisons:
### Non-deep-learning Baselines
- PopRec: ranking items according to their popularity.
- BPR: Bayesian personalized ranking for implicit feedback setting [@rendle2009bpr]. It is a low-rank matrix factorization model with a pairwise loss function. But it does not utilize the temporal information. Therefore, it serves as a strong baseline for non-temporal methods.
- FMC: Factorized Markov Chains, a first-order Markov chain method in which predictions are made based only on the previously engaged item.
- PFMC: a personalized Markov chain model [@rendle2010factorizing] that combines matrix factorization and first-order Markov Chain to take advantage of both users’ latent long-term preferences as well as short-term item transitions.
- TransRec: a first-order sequential recommendation method [@he2017translation] in which items are embedded into a transition space and users are modelled as translation vectors operating on item sequences.
SQL-Rank [@wu2018sql] and item-based recommendations [@sarwar2001item] are omitted because the former is similar to BPR [@rendle2009bpr] except using the listwise loss function instead of the pairwise loss function and the latter has been shown inferior to TransRec [@he2017translation].
\[tb:ml1m\]
### Deep-learning baselines
- GRU4Rec: the first RNN-based method proposed for the session-based recommendation problem [@hidasi2015session]. It utilizes the GRU structures [@chung2014empirical] initially proposed for speech modelling.
- GRU4Rec$^+$: follow-up work of GRU4Rec by the same authors: the model has a very similar architecture to GRU4Rec but has a more complicated loss function [@hidasi2018recurrent].
- Caser: a CNN-based method [@tang2018personalized] which embeds a sequence of recent items in both time and latent spaces, forming an ‘image’ before learning local features through horizontal and vertical convolutional filters. In [@tang2018personalized], user embeddings are included in the prediction layer only. In contrast, our Personalized Transformer also introduces user embeddings in the lowest embedding layer, so they can play an important role in the self-attention mechanism as well as in the prediction stage.
- STAMP: a session-based recommendation algorithm [@liu2018stamp] using attention mechanism. [@liu2018stamp] only uses fully connected layers with one attention block that is not self-attentive.
- SASRec: a self-attentive sequential recommendation method [@kang2018self] motivated by Transformer in NLP [@vaswani2017attention]. Unlike our method SSE-PT, SASRec does not incorporate user embedding and therefore is not a personalized method. SASRec paper [@kang2018self] also does not utilize SSE [@wu2019stochastic] for further regularization: only dropout and weight decay are used.
Comparison Results
------------------
We use the same datasets as in [@kang2018self] and follow the same procedure: the last item for each user is used as test data, the second-to-last as validation data and the rest as training data. We implement our method in TensorFlow and train it with the Adam optimizer [@kingma2014adam], using a learning rate of $0.001$, momentum exponential decay rates $\beta_1 = 0.9$, $\beta_2 = 0.98$ and a batch size of $128$. In Table 3, since we use the same data, the performance of previous methods except STAMP has been reported in [@kang2018self]. We tune the dropout rate and the SSE probabilities $p_u, p_i, p_y$ for input user/item embeddings and output embeddings on the validation sets, and report the best NDCG and Recall for top-$K$ recommendations on the test sets. As mentioned before, we sample $C$ negative items to speed up the evaluation.
For a fair comparison, we restrict all algorithms to use up to 50 hidden units for item embeddings. For the SSE-PT and SASRec models, we use the same number of attention blocks of 2 and set the maximum length $T = 200$ for Movielens 1M dataset and $T = 50$ for other datasets. We use top-$K$ with $K = 10$ and number of negatives $C = 100$ in evaluation procedure. One can easily see in Table \[tb:dl-combined\] that our proposed SSE-PT has the best performance over all previous methods on all four datasets we consider. On most datasets, our SSE-PT improves NDCG by more than 4% when compared with SASRec [@kang2018self] and more than 20% when compared to non-deep-learning methods.
When we relax the constraints, we find that increasing the number of attention blocks and hidden units allows our SSE-PT model to perform even better than in Table \[tb:dl-combined\]. In Table \[tb:ml1m\], when we increase the item embedding dimension $d_i$ from 50 to 100, our SSE-PT achieves 0.6281 for NDCG@10 and SSE-PT++ achieves an even higher 0.6292, while that of SASRec drops to 0.5919 from 0.5936.
To show the effectiveness of SSE-PT++, we decrease the maximum length allowed from 200 to 100. We find in Table \[tb:dl-ml2\] that SSE-PT++ suffers the least, with NDCG@10 dropping to 0.6186 from 0.6281 and Recall@10 dropping to 0.8318 from 0.8341. The model that suffers the most is SASRec: its NDCG@10 drops to 0.5769 from 0.5919 and Recall@10 drops to 0.8045 from 0.8202.
We vary the tuning parameters used in Table \[tb:ml1m\], including the user/item embedding dimensions, the number of attention blocks, the SSE probabilities for SSE-PT, and the sampling probability for SSE-PT++. It is obvious from Table \[tb:ml1m\] and Table \[tb:ml10m\] that these hyper-parameters play an important role in the final prediction performance. For our SSE-PT model, a larger item embedding dimension helps improve the recommendation, but this is not the case for the baseline SASRec. Also, using SSE-SE in all three places achieves the best recommendation performance for the Movielens1M dataset in Table \[tb:ml1m\]. One can easily see from Table \[tb:ml10m\] that applying SSE-SE to input user embeddings is again crucial to ensure a properly regularized model. SSE-SE, together with dropout and weight decay, is the best choice for regularization, as is evident from Table \[tb:reg\]. In practice, these SSE probabilities, just like the dropout rate, can be treated as tuning parameters and easily tuned.
Attention Maps for Input Embeddings
-----------------------------------
Apart from evaluating our SSE-PT against SASRec using well-defined ranking metrics on 5 datasets, we use 2 other ways to visualize the comparison. The first is to visualize the attention maps of both methods and compare them. Note that the attention map is a lower-triangular matrix, as we only allow the present to attend to the past, not to the future. The attention maps for the first layer in Figure \[fig:attention\] show that our SSE-PT pays more attention to recent items in a long sequence than SASRec does. This is evident from comparing the attention intensity levels of the two plots (bottom right).
As our second way to visualize the comparison, we examine some random users’ engagement histories and the top-$K$ recommendations the two models give. In Figure \[fig:example\], a random user’s engagement history in the Movielens1M dataset is given in temporal order (column-wise). We hide the last item, whose index is 26, in the test set, and hope that a temporal collaborative ranking model can figure out that item-26 is the one this user will watch next, using only the previous engagement history. One can see that a typical user tends to watch different styles of movies at different times. Earlier on, this user watched a variety of movies, including Sci-Fi, animation, thriller, romance, horror, action, comedy and adventure. But later on, in the last two columns of Figure \[fig:example\], drama and thriller are the two genres they watch most, especially drama: they watched 9 drama movies out of the 10 most recent. It is not surprising that the item we hide from the models is also a drama. In the top-5 recommendations given by our SSE-PT, the hidden item-26 is put in first place. Intelligently, SSE-PT recommends 3 drama movies and 2 thriller movies, interleaved in the list. Interestingly, the top recommendation is ‘Othello’, which, like the recently watched ‘Richard III’, is an adaptation of a Shakespeare play, and this dependence is reflected in the attention weights. In contrast, SASRec cannot provide top-5 recommendations that are personalized enough: it recommends a variety of action, Sci-Fi, comedy, horror and drama movies, but none of them match item-26. Although this user watched all these types of movies in the past, they no longer do, as one can easily tell from their recent history. Unfortunately, SASRec cannot capture this and does not provide personalized recommendations for this user by focusing more on drama and thriller movies.
What we see from this particular example is consistent with the previous findings from examining the attention maps. Attention heat maps for both models during inference are included in Figure \[fig:example\]. It is easy to see that SSE-PT model shares with human reasoning that more emphasis should be placed on recent movies.
![Illustration of the speed of SSE-PT[]{data-label="fig:speed"}](ml1m_speed.png){width="0.6\columnwidth"}
\[tb:ssep\]
\[tb:sampling\]
\[tb:block\]
\[tb:negative\]
Training Speeds
---------------
In [@kang2018self], it was shown that SASRec is about 11 times faster than Caser and 17 times faster than GRU4Rec$^+$ while achieving much better NDCG@10 results, so we do not include Caser and GRU4Rec$^+$ in our comparison and only compare training speeds and ranking performance among SASRec, SSE-PT and SSE-PT++. Given that we add user embeddings to our SSE-PT model, it is expected to take slightly longer to train than the un-personalized SASRec. In Figure \[fig:speed\], maximum length $T=100$ is used for SSE-PT++ and $T=200$ for SSE-PT and SASRec. We find empirically that the training speeds of the SSE-PT and SSE-PT++ models are comparable to that of SASRec, with SSE-PT++ being both the fastest and the best performing model. It is clear from Figure \[fig:speed\] that our SSE-PT and SSE-PT++ achieve much better ranking performance than the SASRec baseline for the same training time.
Ablation Study
--------------
### SSE probability
Given the importance of SSE regularization for our SSE-PT model, we carefully examine the SSE probability for the input user embeddings in Table \[tb:ssep\]. We find that performance is not very sensitive to this hyper-parameter: anything between 0.4 and 1.0 gives good results, better than both parameter sharing and not using SSE-SE. This is also evident from the comparison results in Table \[tb:reg\].
### Sampling Probability
Recall that the sampling probability is unique to our SSE-PT++ model. We show in Table \[tb:sampling\] that using an appropriate sampling probability, such as 0.2–0.3, allows SSE-PT++ to outperform SSE-PT when the same maximum length is used.
### Number of Attention Blocks
We find that for our SSE-PT model, a larger number of attention blocks is preferred. One can easily see in Table \[tb:block\] that the optimal ranking performance is achieved at $B = 4, 5$ for the Movielens1M dataset and at $B = 6$ for the Movielens10M dataset.
### Number of Negatives Sampled
We want to make sure that neither the number of negatives sampled during evaluation nor differences in the usage of regularization techniques affect our final conclusion. We therefore add another set of experiments that removes personalization from our SSE-PT model while keeping all the regularization techniques we used. Based on the results in Table \[tb:negative\], we are confident that the personalized model always outperforms the un-personalized one, even when the same regularization techniques are used, and regardless of how many negatives are sampled during evaluation.
Conclusion
==========
In this paper, we propose a novel neural network architecture called Personalized Transformer for the temporal collaborative ranking problem. It enjoys the benefits of being a personalized model, therefore achieving better ranking results for individual users than the current state-of-the-art. By examining the attention mechanisms during inference, the model is also more interpretable and tends to pay more attention to recent items in long sequences than un-personalized deep learning models.
[^1]: <http://jmcauley.ucsd.edu/data/amazon/>
---
abstract: 'We present measurements of [$\Omega_{m}$]{} and [$\Omega_{\Lambda}$]{} from a blind analysis of 21 high-redshift supernovae using a new technique (CMAGIC) for fitting the multi-color light-curves of Type Ia supernovae, first introduced by @Wang:03. CMAGIC takes advantage of the remarkably simple behavior of Type Ia supernovae on color-magnitude diagrams, and has several advantages over current techniques based on maximum magnitudes. Among these are a reduced sensitivity to host galaxy dust extinction, a shallower luminosity-width relation, and the relative simplicity of the fitting procedure. This allows us to provide a cross-check of previous supernova cosmology results, despite the fact that current data sets were not observed in a manner optimized for CMAGIC. We describe the details of our novel blindness procedure, which is designed to prevent experimenter bias. The data are broadly consistent with the picture of an accelerating Universe, and agree with a flat Universe within 1.7 [$\sigma$]{}, including systematics. We also compare the CMAGIC results directly with those of maximum magnitude fits to the same supernovae, finding that CMAGIC favors more acceleration at the 1.6 [$\sigma$]{} level, including systematics and the correlation between the two measurements. A fit for $w$ assuming a flat Universe yields a value that is consistent with a cosmological constant within 1.2 [$\sigma$]{}.'
author:
- |
A. Conley, G. Goldhaber, L. Wang, G. Aldering, R. Amanullah, E. D. Commins, V. Fadeyev, G. Folatelli, G. Garavini, R. Gibbons, A. Goobar, D. E. Groom, I. Hook, D. A. Howell, A. G. Kim, R. A. Knop, M. Kowalski, N. Kuznetsova, C. Lidman, S. Nobili, P. E. Nugent, R. Pain, S. Perlmutter, E. Smith, A. L. Spadafora, V. Stanishev, M. Strovink, R. C. Thomas, and W. M. Wood-Vasey\
(THE SUPERNOVA COSMOLOGY PROJECT)\
title: 'Measurement of $\Omega_{m}$, $\Omega_{\Lambda}$ from a blind analysis of Type Ia supernovae with CMAGIC: Using color information to verify the acceleration of the Universe'
---
INTRODUCTION
============
Type Ia supernovae (SNe Ia) have proved to be an extremely valuable tool for measuring the cosmological parameters, as they are the best high-luminosity standard candles currently known to astronomy. Studies of the peak $B$-band luminosities of high redshift SNe Ia led to the surprising discovery by two independent groups (the Supernova Cosmology Project (SCP; Perlmutter [et al.]{} 1998, Perlmutter [et al.]{} 1999 (hereafter P99)) and the High-z Supernova Search Team (HZSST; Garnavich [et al.]{} 1998; Schmidt [et al.]{} 1998; Riess [et al.]{} 1998), that the expansion of the Universe is accelerating. This acceleration is consistent with some form of ‘dark energy’, possibly Einstein’s cosmological constant $\Lambda$. The implications of this result for the future fate of the Universe and our understanding of fundamental physics are profound; therefore, it is extremely important that it be verified by independent methods.
The best approach is to make use of alternative measurements that depend on other physical processes. There are now several additional lines of evidence that support the accelerating Universe, but most are based on combining several different measurements. For example, the combination of the angular size of fluctuations on the surface of last scattering of the cosmic microwave background (CMB) with measurements of the clustering of mass on large scales [@Spergel:03; @Tegmark:04; @Eisenstein:05] provides strong evidence for a dark energy component. There is also a direct detection of dark energy using the integrated Sachs-Wolfe effect [@Padmanabhan:05]. It is encouraging that these different lines of evidence, which depend on very disparate physical processes and probe very different cosmic epochs, are consistent with a ${\ensuremath{\Omega_{m}}}\sim 0.3$, ${\ensuremath{\Omega_{\Lambda}}}\sim 0.7$ Universe.
Still, SNe Ia provide the best direct evidence for dark energy, and any improvement in our understanding of their properties is very welcome. There are several possible alternative explanations for the SN result. Since dark energy manifests itself in this context as high-redshift SNe Ia being slightly dimmer than expected, the most obvious alternative explanation is that this dimming is caused by extragalactic dust, either in intergalactic space or in the host galaxies of the SNe. Another possibility, and a significantly more difficult one to quantify, is that high redshift SNe are somehow dissimilar from low redshift SNe in a way that we have not yet detected. This paper presents results based on an analysis of SNe Ia with a new method (CMAGIC, for Color-MAGnitude Intercept Calibration) introduced in @Wang:03 (hereafter W03) that partially addresses both issues.
There is no unique choice for the magnitude to associate with an SN Ia because their luminosity varies in time. For convenience, virtually all previous studies have used the $B$ magnitude at maximum brightness, [$m_{B}$]{}, as the standardized candle, but there is no [*a priori*]{} reason why this choice is optimal. [$m_{B}$]{} is generally determined by fitting an empirical curve to the $B$-band brightness as a function of time and reading off the peak value. When available, observations in other passbands are frequently incorporated into the fitting procedure. There is a well-established empirical relation between absolute magnitude and the width of the light curve as parameterized by stretch (Perlmutter [et al.]{} 1997; P99; Goldhaber [et al.]{} 2001), [$\Delta m_{15}\left( B \right)$]{} [@Phillips:93; @Phillips:99] or the MLCS parameter $\Delta$ [@Riess:96] in the sense that SNe with wider, more slowly declining light curves (high stretches) are intrinsically brighter. Here the stretch parameterization is used.
Since ordinary interstellar dust both extinguishes and reddens light, P99 compared the distributions of [$B-V$]{} colors at maximum luminosity of the low and high redshift SN samples, finding no significant evidence that the high redshift sample is more reddened. It should be emphasized that the SN measurement of [$\Omega_{m}$]{} and [$\Omega_{\Lambda}$]{} is relative – as long as the low- and high-redshift samples suffer the same amount of extinction (or any other bias), there is no effect on the final result. @Sullivan:02 decomposed the SN sample into subsets based on the Hubble type of their host galaxies, a powerful approach because early-type galaxies are expected to have little or no dust, and found that [$\Omega_{\Lambda}$]{} was detected in each subsample. A difficulty with this analysis is that the resulting error bars on [$\Omega_{m}$]{}, [$\Omega_{\Lambda}$]{} are necessarily much larger because the morphological subsets have considerably fewer SNe than the full sample.
One may attempt to measure the reddening of each SN from its color and correct for host galaxy extinction by assuming a dust extinction law. The error in the extinction correction usually dominates the statistical errors of each SN. In early work the HZSST team made use of an asymmetric prior on the intrinsic extinction distribution to limit the propagated uncertainties resulting from the extinction correction [@Riess:98] while performing light-curve fits, which can bias the results under some circumstances (P99). More recent papers have made improvements in the form of the prior and its application and corrected this problem [@Barris:04; @Riess:04], although at the potential cost of enhanced sensitivity to any evolution in the extinction distribution. @Knop:03 (hereafter K03) made use of high quality color measurements made possible by the [*Hubble Space Telescope (HST)*]{} to estimate the extinction values of individual SNe without making use of such a prior.
The evolution issue is extremely difficult to address. To first order, evolution should not be a concern because the diversity of the environments in which local SNe Ia occur is much larger than the mean difference in environment between the high and low redshift samples. While there are some properties of SNe Ia that are known to correlate with host environment, these correlations disappear once the width-luminosity relation is taken into account [@Hamuy:00]. The analysis of @Sullivan:02 also has relevance for this question because it compares SNe Ia from similar host environments at high and low redshift. One can also compare individual SNe in more detail spectroscopically [@Hook:05], although such measurements are taxing even for modern large-aperture telescopes. In a spectroscopic study of 12 high redshift SNe, @Garavini:05 found no evidence for evolution.
CMAGIC offers some benefits with respect to dust and evolutionary models, as described further in §\[subsec:dust\] and §\[subsec:evolution\]. It is possible to define a standard candle magnitude with CMAGIC, and because of the nature of the CMAGIC relationships, for a given amount of dust this magnitude is affected roughly half as much as [$m_{B}$]{}. On the evolutionary front the situation is more complicated. There are some potential evolutionary effects for which CMAGIC offers advantages, but it is uncertain how important these advantages are because the predicted effects have not been clearly delineated. Because CMAGIC depends on light-curve data in a very different fashion than maximum magnitude fits, and in particular because it is much more sensitive to later epochs relative to maximum light, for some potential evolutionary effects we can expect the CMAGIC magnitude to be affected differently. However, this is difficult to quantify given the current lack of detailed predictions from theories of SN evolution. Combining these two considerations, CMAGIC can provide a powerful cross-check of previous SNe Ia cosmology results. Because we are attempting to verify previous results, it is important to prevent the analysis from being unintentionally biased towards the expected outcome. To this end a blindness technique has been developed and used during the cosmological analysis in this paper.
Perhaps for some of the above reasons, low redshift SNe Ia analyzed with CMAGIC have a smaller intrinsic variation than the maximum magnitudes of the same SNe without extinction correction (${\ensuremath{\sigma_{int}}}= 0.12$ mag, compared with approximately $0.17$ mag for [$m_{B}$]{}). For many current data sets, the intrinsic variation dominates over observational errors, so CMAGIC may allow us to obtain tighter constraints on the cosmological parameters for a similar observational expense in future surveys.
The goals of this paper are twofold: (1) to show that the CMAGIC relations hold at high redshift for well measured SNe, and (2) to measure the cosmological parameters from already existing data sets and use this to cross-check previous results. We first describe CMAGIC in more detail (§\[sec:cmagic\]). We then describe the data sample (§\[sec:data\]) and the CMAGIC fitting procedures (§\[sec:cmagfits\]), and then we use these to demonstrate that CMAGIC works for high redshift SNe (§\[sec:highzdemo\]). Once this is established, we proceed to the primary analysis of this paper, the cosmological fits. First we describe the cosmological fitting techniques (§\[sec:cosfits\]), including a discussion of the blindness technique (§\[subsec:blindness\]). Finally, the cosmological results are presented (§\[sec:cosresults\]), systematic effects are discussed (§\[sec:systematics\]), and the results are analyzed (§\[sec:analysis\]).
CMAGIC {#sec:cmagic}
======
CMAGIC is described in considerably more depth in W03. Here we provide a brief review of the relations, define the magnitude ([$B_{BV0.6}$]{}) used in this study, and discuss the benefits of CMAGIC with respect to extinction and evolution.
CMAGIC Relations {#subsec:cmagrelations}
----------------
CMAGIC is based on the behavior of SNe Ia in color-magnitude diagrams. Starting approximately 1 week after $B$ maximum and lasting approximately 3 weeks, the relation between the $B$ magnitude and [$B-V$]{} color is strikingly linear. This holds true for other colors as well (at least $B-R$, $B-I$). Some typical low redshift examples are shown in figure \[fig:cmag\]. The temporal extent of this linear region is a function of stretch, with slower, higher stretch light-curves starting and ending their linear behavior later. The slope, $\beta$, of the linear region has a narrow distribution. Currently very few rest-frame $R$ and $I$ observations are available for high redshift SNe Ia, so here we consider only $B$ versus [$B-V$]{}. The simplicity of this behavior is so far not completely explained by theory, which gives it a status similar to the empirical width-luminosity relation. Prior to the linear region, the majority of SNe Ia are less luminous than the linear extrapolation. However, a minority (typically those with high stretch) display excess luminosity, which is referred to as a ‘bump’. Standard light-curve template fitting techniques (stretch, MLCS) do not adequately reproduce the CMAGIC relations. Both issues are discussed in more detail in W03.
The distribution of slopes in this linear region is fairly narrow, with $\left< {\ensuremath{\beta_{BV}}}\right> = 1.98$ and an RMS of 0.16, as shown in figure \[fig:slopes\] for low-redshift SNe Ia. To first order, [$\beta_{BV}$]{} is affected by [$K$-corrections]{} but not by extinction. W03 explored fixing the slope at the mean value for all fits. The effects of this assumption are quite minor, but it is possible to improve on this procedure by including information about the distribution of slopes in the fitting procedure (§\[sec:cmagfits\]).
The CMAGIC relation for $B$ versus [$B-V$]{} can be written conveniently in the form $$B = {\ensuremath{B_{BV0.6}}}+ {\ensuremath{\beta_{BV}}}\left( B - V - 0.6 \right),$$ which defines [$B_{BV0.6}$]{} as the $B$ magnitude when ${\ensuremath{B-V}}= 0.6$; this is the magnitude used as a standard candle in this paper. The particular [$B-V$]{} color is chosen to minimize the covariance between the standard candle magnitude and the slope [$\beta_{BV}$]{}, as it is approximately the mean [$B-V$]{} color in the linear region of an unextinguished SN Ia. Because the color roughly measures the ejecta temperature, by evaluating the magnitude at a fixed color we essentially ensure that all SNe are evaluated at a point where their physical properties are similar.
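This fixed-color evaluation can be sketched numerically. The following is an illustrative least-squares fit on hypothetical linear-region photometry; the actual fits use the MINUIT-based procedure of §\[sec:cmagfits\], and all numerical values here are invented for illustration:

```python
import numpy as np

def fit_cmagic(B, BmV, pivot=0.6):
    """Fit B = B_BV0.6 + beta * ((B - V) - pivot) by least squares.

    Returns (B_BV0.6, beta); B_BV0.6 is the B magnitude at B - V = pivot.
    """
    # Design matrix: constant term plus the color offset from the pivot.
    A = np.column_stack([np.ones_like(BmV), BmV - pivot])
    (b_bv06, beta), *_ = np.linalg.lstsq(A, B, rcond=None)
    return b_bv06, beta

# Hypothetical linear-region points with beta = 2.0 and B_BV0.6 = 21.5.
BmV = np.linspace(0.1, 1.1, 8)
B = 21.5 + 2.0 * (BmV - 0.6)
b_bv06, beta = fit_cmagic(B, BmV)
```

With the pivot placed at ${\ensuremath{B-V}}= 0.6$, the fitted intercept is [$B_{BV0.6}$]{} itself, and its covariance with the slope is smallest when the pivot sits near the mean color of the fitted points, as noted above.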
The behavior of an SN Ia on a CMAGIC diagram can also be viewed temporally. Proceeding in a clockwise fashion around the curves in figure \[fig:cmag\], a typical, unextinguished SN Ia usually has a color of approximately ${\ensuremath{B-V}}= 0$ at maximum, and evolves rapidly to the red for about a month. After this it enters the so-called nebular phase and evolves bluewards, again in a linear fashion. This second linear region has some interesting properties, but since data at such late epochs are very rarely available for high-redshift SNe, we do not discuss it further here. With good time coverage it is possible to determine the extent of the linear region by examination, but this is generally not possible with current high redshift data. Fortunately, the beginning and ending dates of the linear region relative to the date of $B$ maximum form a well-defined sequence in terms of stretch and the presence or absence of the bump feature. Using well-observed low-redshift SNe to determine the earliest and latest points in the linear region as a function of stretch, we find that the beginning date of the linear region is well described by $t_b = 5 + 3 \left(s - 1\right)$ and the ending date by $t_e = 29 + 40 \left(s - 1\right)$, where both are measured in rest-frame days relative to $B$ maximum and $s$ is the stretch. SNe Ia with bumps (e.g., the lower panel of figure \[fig:cmag\]) do not fit smoothly into this scheme and are well represented by $t_b = 13.5$ and $t_e = 30$. This suggests a possible source of bias in the analysis of the high redshift sample, since the presence or absence of a bump may be difficult to detect given the typical quality of high redshift photometry. Fortunately, for this data sample this issue proves to be unimportant (Appendix \[apndx:bumps\]).
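The boundary relations above amount to a simple stretch-dependent selection rule for which epochs enter the CMAGIC fit; a sketch (function names hypothetical, coefficients as quoted in the text):

```python
def linear_region(stretch, has_bump=False):
    """Rest-frame days relative to B maximum bounding the CMAGIC
    linear region, using the low-redshift relations quoted above."""
    if has_bump:
        # SNe Ia with bumps do not follow the stretch sequence.
        return 13.5, 30.0
    t_begin = 5.0 + 3.0 * (stretch - 1.0)
    t_end = 29.0 + 40.0 * (stretch - 1.0)
    return t_begin, t_end

def in_linear_region(t, stretch, has_bump=False):
    """True if epoch t (rest-frame days from B maximum) is usable."""
    t_b, t_e = linear_region(stretch, has_bump)
    return t_b <= t <= t_e
```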
Detailed studies (Appendix \[apndx:correlations\]) show that the fitting procedure induces weak negative correlations between [$B_{BV0.6}$]{} and [$m_{B}$]{}, at least for current light-curve templates. Clearly, these templates have missed some aspect of SNe Ia behavior (or the correlations would be much stronger), and [$B_{BV0.6}$]{} provides some additional information that can be used to constrain the cosmological parameters. Peculiar velocities, stretch correction, and extinction induce additional correlations between these magnitudes.
Host Galaxy Dust {#subsec:dust}
----------------
Interstellar dust is a major component of our own and other galaxies. A good review can be found in @Draine:03. Ordinary dust both extinguishes and reddens starlight because it absorbs blue light more strongly than red light. The relative amount of absorption between wavelengths is characterized by an absorption law such as that of @Cardelli:89. For an object with a stellar spectrum, the extinction in the $B$-band $A_B$ (in magnitudes) is related to the amount of reddening [$E(B\,-\,V)$]{} by $A_B = {\ensuremath{{\cal R}_B}}{\ensuremath{E(B\,-\,V)}}$. For SNe, which do not have stellar-like spectra, and whose spectral features change with time, this is not strictly appropriate, but [${\cal R}_B$]{} is still useful as a parameterization of the extinction law. A typical value in our Galaxy is ${\ensuremath{{\cal R}_B}}= 4.1$, although it varies considerably along different lines of sight [@Fitzpatrick:99]. The characteristic scatter of [${\cal R}_B$]{} is not well constrained.
So far it has not been feasible to measure the extinction law directly for the host galaxies of high redshift SNe, so the general approach has been to assume that the [${\cal R}_B$]{} values for the high and low redshift SNe samples are identical. This assumption takes several forms. In the primary fit of P99 (fit C) no extinction correction is performed, but it is argued that the similarity of the observed [$E(B\,-\,V)$]{} distributions of the two samples implies that host galaxy dust extinction is not contaminating the cosmological results. Because [${\cal R}_B$]{} is necessary to transform [$E(B\,-\,V)$]{} into the amount of extinction, this is tantamount to assuming that [${\cal R}_B$]{} is the same for the two samples. There is a theoretical and empirical expectation that the SN sample suffers from relatively little extinction [@Hatano:98]. K03 perform an extinction correction by comparing the measured [$B-V$]{} at maximum to an empirical model, then converting this to $A_{B}$ by assuming a value for [${\cal R}_B$]{}. @Riess:98 [@Riess:04; @Tonry:03; @Barris:04] use a similar procedure. Previous analyses have generally performed a color cut on their SN samples on the theory that large color excesses may represent SNe in dustier environments where the value of [${\cal R}_B$]{} is likely to depart from the fiducial value. It is interesting to note that we may now have evidence for higher mean extinction at high redshift. The recent SN sample of @Riess:04, which represents the deepest, highest redshift SN survey yet published, has much higher host galaxy extinction values than any other available SN sample, although survey selection effects may explain this result.
Because of the nature of the linear CMAGIC relations, the effective ${\cal R}$-value for [$B_{BV0.6}$]{} is approximately half of the value that it takes for [$m_{B}$]{} (assuming a standard dust law), as shown schematically in figure \[fig:cmagdust\]. The critical point is that the magnitude is always evaluated at the same fixed color, and therefore the extinction and reddening effects partially cancel. Since SNe Ia redden as they evolve along the linear relation, ${\cal R}_{{\ensuremath{B_{BV0.6}}}} = {\ensuremath{{\cal R}_B}}- {\ensuremath{\beta_{BV}}}$. For normal dust, [$B_{BV0.6}$]{} is less affected than [$m_{B}$]{}, which results in smaller uncertainties arising from the extinction correction, if a fixed [${\cal R}_B$]{} is assumed. Because the boundaries of the linear region are determined by date relative to maximum and not color, [$B_{BV0.6}$]{} remains less affected even if the amount of extinction is large enough that ${\ensuremath{B-V}}= 0.6$ does not lie within the linear region. The precise epoch of maximum light is not nearly as important for [$B_{BV0.6}$]{} as it is for [$m_{B}$]{} because the ‘roll-off’ at the edges of the linear region is much less severe than it is near peak luminosity. Note that CMAGIC offers no benefits with respect to an evolving [${\cal R}_B$]{} – the derivatives of [$m_{B}$]{} and [$B_{BV0.6}$]{} with respect to [${\cal R}_B$]{} are identical. Nor does it offer any advantages for the so-called ‘gray dust’ (${\ensuremath{{\cal R}_B}}= \infty$) suggested by @Aguirre:99. Constraints on gray dust have been explored by @Riess:00 [@Riess:04], but also see @Nobili:03 [@Nobili:05].
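The factor-of-two claim follows directly from the numbers quoted above; a one-line check using the fiducial values from the text:

```python
R_B = 4.1       # fiducial Galactic dust law (value quoted in the text)
beta_BV = 1.98  # mean CMAGIC slope (value quoted in the text)

# Effective R for B_BV0.6: extinction and reddening partially cancel.
R_cmagic = R_B - beta_BV
ratio = R_cmagic / R_B  # fraction of the m_B dust sensitivity remaining
```

The ratio comes out near 0.52, i.e. [$B_{BV0.6}$]{} retains roughly half the dust sensitivity of [$m_{B}$]{} for a standard dust law.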
Since [$B_{BV0.6}$]{} and [$m_{B}$]{} are affected by extinction differently, it is possible to estimate the amount of extinction by comparing the two magnitudes using the quantity [$\mathcal{E}$]{}, which is an estimator of [$E(B\,-\,V)$]{}: $${\ensuremath{\mathcal{E}}}= \frac{ {\ensuremath{m_{B}}}- {\ensuremath{B_{BV0.6}}}}{ {\ensuremath{\beta_{BV}}}} + \mbox{const}.$$ Using this correction substantially increases the correlations between [$m_{B}$]{} and [$B_{BV0.6}$]{}. Assuming a standard extinction law ([${\cal R}_B$]{}= 4.1), the correlation coefficient between these two magnitudes climbs to $\rho > 0.7$ from $\left< \rho \right> = 0.15$ (Appendix \[apndx:correlations\]), even in the absence of significant extinction. For this reason, this approach is not followed here. However, for smaller values of [${\cal R}_B$]{}, such as those found by @Tripp:99 and @Guy:05, this correlation is significantly reduced.
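A numerical sketch of this estimator, using hypothetical unextinguished magnitudes and a standard dust law: both magnitudes shift under extinction, but by different ${\cal R}$-values, so their difference isolates the color excess.

```python
R_B, beta = 4.1, 1.98  # fiducial dust law and mean CMAGIC slope

m0, c0 = 19.5, 19.6    # hypothetical unextinguished m_B and B_BV0.6
const = -(m0 - c0) / beta  # calibrates the estimator to zero at E(B-V) = 0

def color_excess(m_B, b_bv06):
    """The estimator E = (m_B - B_BV0.6) / beta_BV + const."""
    return (m_B - b_bv06) / beta + const

E_true = 0.15
m_B    = m0 + R_B * E_true            # m_B shifts by R_B * E(B-V)
b_bv06 = c0 + (R_B - beta) * E_true   # B_BV0.6 shifts by (R_B - beta) * E(B-V)
E_hat  = color_excess(m_B, b_bv06)    # recovers E_true
```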
Evolution of SNe Ia {#subsec:evolution}
-------------------
The possibility that the average properties of SNe Ia have evolved between the current epoch and a redshift of 1 is of considerable concern for SN cosmologists. So far it has been impossible to demonstrate conclusively that evolution is not the cause of the claimed cosmological results. The best that can be done is to continue to quantitatively add “to the list of ways in which they are similar while failing to discern any way in which they are different” [@Riess:99b]. One method to approach this problem is to compare high and low redshift SNe in similar environments, as in @Sullivan:02, where we found no evidence for evolutionary biases. Since all measured dependencies of SN Ia properties on local environment disappear after stretch correction, and because of the diversity of environments in which local SNe Ia occur, concerns about evolution can be usefully restricted to mechanisms that affect the width-luminosity relationship.
There are several theoretical models that predict possible avenues for evolution. @Dominguez:01 and @Hoflich:00 have investigated the effects of decreasing metallicity and changing progenitor mass on SN Ia properties by constructing models of the progenitor star and then following them through detonation. If $\Delta$ is the change in [$B-V$]{}, they find that decreasing metallicity causes an SN to become slightly bluer ($\Delta = -0.05$ for an extreme case) without affecting the maximum $B$ magnitude. Most extinction corrections compare observed colors to empirically derived color relations to calculate the amount of extinction. If the intrinsic colors change, then the extinction correction will be incorrect. If no extinction correction is applied, then [$m_{B}$]{} is unaffected, while [$B_{BV0.6}$]{} is overestimated by ${\ensuremath{\beta_{BV}}}\Delta \sim 2 \Delta$. If an extinction correction is applied, then for positive values of $\Delta$, the extinction correction for [$m_{B}$]{} is overestimated and the SN is assigned an extinction-corrected magnitude that is too bright by ${\cal R}_{B} \Delta \sim 4 \Delta$. [$\mathcal{E}$]{}, by contrast, is underestimated, so once this correction is applied, [$B_{BV0.6}$]{} is too dim by ${\ensuremath{\beta_{BV}}}\Delta - \left( {\ensuremath{{\cal R}_B}}- {\ensuremath{\beta_{BV}}}\right)^2 \Delta / {\ensuremath{\beta_{BV}}}$. For typical values of [$\beta_{BV}$]{} and [${\cal R}_B$]{}, this approximately cancels, and the extinction-corrected value of [$B_{BV0.6}$]{} is essentially unaffected by this evolutionary effect. In other words, either with or without extinction correction, this particular evolutionary model will have different effects on [$m_{B}$]{} and [$B_{BV0.6}$]{}, so by comparing the two magnitudes this model can be evaluated against data. We note that the range of metallicities considered in this study is far greater than the expected change out to $z \sim 1$.
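The quoted cancellation can be checked numerically with the fiducial values ${\ensuremath{\beta_{BV}}}\approx 2$ and ${\ensuremath{{\cal R}_B}}= 4.1$ (a rough sketch; $\Delta$ as defined above):

```python
beta, R_B = 1.98, 4.1
Delta = -0.05  # extreme metallicity-driven B-V shift from the models

# Residual shift in the extinction-corrected B_BV0.6 for this model:
bias = beta * Delta - (R_B - beta) ** 2 * Delta / beta
```

For the extreme $\Delta = -0.05$ case the residual is only $\approx 0.015$ mag, much smaller than the $\sim 0.1$-$0.2$ mag shifts that this model induces on either magnitude individually.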
DATA {#sec:data}
====
Currently available SN data sets have not been observed in a manner optimized for CMAGIC, particularly at high redshift. Out of the roughly 100 SNe at $z > 0.1$ with light curves available in the literature, only approximately 20 are useful for CMAGIC purposes. High redshift SNe are frequently not observed in the rest-frame $V$. Even when such observations do exist, they are usually only intended to establish the color at maximum for the purposes of applying an extinction correction, and therefore are usually concentrated too close to the peak to lie within the CMAGIC linear region. Future high redshift data sets (SNLS [@Astier:05], ESSENCE [@Matheson:05], SDSS Supernova Search [@Sako:05], SNAP [@Aldering:04], LSST [@Pinto:04]) will not suffer from this limitation, as they are designed to obtain multi-color photometry for almost all observed epochs. The current situation is considerably better for low redshift data sets, as many of these SNe have excellent multi-color coverage. There is an observational cost associated with CMAGIC because the linear region is $\sim 1.2$ mag dimmer than at peak, so the photometric error bars are larger for the same observational effort. Whether or not this extra cost is outweighed by the benefits with respect to dust and/or evolution depends on the specifics of the survey design.
We have attempted to construct a data sample including all SNe Ia with published light curves. In order to eliminate SNe that cannot be useful for the purposes of this paper, we enforce the following requirements. First, an object must be at least plausibly an SN Ia based on either light-curve shape, spectroscopic ID, or host galaxy morphology. Second, it must have at least one rest-frame [$B-V$]{} observation. For this purpose we require that the central wavelength of the redshifted $B$- or $V$-band lie within one HWHM of the central wavelength of the observed filter, which improves the reliability of the [$K$-corrections]{} by limiting the amount of extrapolation. We also do not include observations taken with extremely wide filters, such as F110W and F160W NICMOS filters on [*HST*]{}. These filters are wide enough that for many of the redshift ranges of interest they overlap considerably with both $B$ and $V$ (and sometimes $R$), making it difficult to measure [$B-V$]{} in a fashion that is not heavily influenced by the model used to calculate the [$K$-corrections]{}. Clearly it must be possible to use these data in some fashion for CMAGIC, but it will require extreme care. Observations in $B$ and $V$ are only combined to form [$B-V$]{} if they are within 0.5 rest frame days of each other; the analysis is quite insensitive to this value.
This results in a sample of 131 SNe, of which one third are at redshifts greater than 0.3. Note that we have not yet required that the [$B-V$]{} point lie in the CMAGIC linear region, since this depends on the measured value of the stretch and date of maximum, or that the SN lie in the Hubble flow. The high-redshift portion of the sample comes from a fairly diverse set of sources. There are 14 from P99, six from K03, two from @Garnavich:98, one from @Schmidt:98, five from @Riess:98, four from @Tonry:03, 13 from @Barris:04, and one from @Riess:04. The low-redshift sample is even more diverse, but primarily comes from three sources: @Hamuy:96, @Riess:99a and @Jha:05. Source information is provided in tables \[tbl:primarysamplowz\] and \[tbl:primarysamphighz\]. Once a reasonable series of cuts are applied to this sample (§\[subsec:cuts\]), approximately half of the SNe remain and are used in the cosmological analysis.
CMAGIC FITTING PROCEDURES {#sec:cmagfits}
=========================
In order to determine if an individual data point lies within the linear CMAGIC region for a particular SN it is necessary to know the stretch and the date of $B$ maximum, although not to a high degree of accuracy. These are determined by performing a template fit to the $B$ and $V$ light curves in a manner similar to P99 and K03. Briefly, light-curve fits are performed using a [$\chi^{2}$]{} minimization procedure based on MINUIT [@James:75] with both [$K$-corrections]{} and corrections for Milky Way dust extinction taken into account. The light-curve template is that of K03 (which uses the $B$ template of @Goldhaber:01 but a different $V$ template). For the photometry from P99 and K03, the photometric correlation matrices were used in the light-curve fits. These reflect the correlations between different observations of the same SN induced by the subtraction of the final reference image(s). For the literature objects, where this information was not available, the observations are assumed to be uncorrelated. In order to prevent systematic errors arising from differences in fitting procedures, we have only included SNe that we can treat consistently, i.e. with our own light-curve fitting procedure and [$K$-corrections]{}.
The correlation of the bump feature with different $B$ and $V$ stretch values complicates matters. As explained in Appendix \[apndx:bumps\], SNe Ia with bumps can be fitted by the standard stretch templates if the ratio between $B$ and $V$ stretch values is allowed to vary. In order to handle this situation, three light-curve fits were performed for each SN – joint $B$ and $V$, $B$ only, and $V$ only. In joint fits the dates of maximum and stretch values of the two filters are fixed relative to each other by the light-curve template. Except when a bump is visible in the CMAGIC diagram, the joint fit is used. The reduced detectability of the bump feature at high redshift due to reduced data quality is a concern that is further discussed in Appendix \[apndx:bumps\].
[$K$-corrections]{} play a critical role in this procedure. At high redshift cross-filter corrections are necessary [@Kim:96], but even at low redshift same-filter [$K$-corrections]{} are not insignificant. Erroneous [$K$-corrections]{} alter the slope of the CMAGIC linear region, unlike extinction. Those used in this paper are based on the prescription of @Nugent:02 but with the time series of spectral templates and empirical stretch-color relation of K03. Milky Way extinction is included in this calculation using the dust map of @Schlegel:98. Our approach naturally takes into account the non-stellar nature of SN spectra and their variation with epoch. Errors associated with the [$K$-corrections]{} are discussed in §\[sec:systematics\], where we also discuss the effects of several other modifications to the fitting procedure described here.
Since the [$K$-correction]{} is a function of stretch and epoch, the light-curve fits must be performed in an iterative manner. On the first iteration the stretch is set to 1 and the date of maximum is set to the date of the brightest point. The combined Milky Way and [$K$-corrections]{} are calculated and the light curve is fitted, and the new stretch and date of maximum are used to calculate new corrections. This process is iterated until convergence. The majority of SNe converge within three iterations, but the maximum number allowed is 16. Those SNe that do not converge within 16 iterations invariably have extremely poor light-curve coverage and are excluded from the sample. Because high-redshift SNe very rarely have data beyond day 30, in order to prevent a bias between high and low redshift SNe in the fitting procedure data between 30 and 200 rest-frame days after maximum are not included, a similar procedure to that followed in P99 and K03. Observations more than 200 days after maximum light are included because they provide final reference information useful for setting the amount of host galaxy light underlying the SN.
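Schematically, the iteration described above is a fixed-point loop. In this sketch `kcorr` and `fit_lightcurve` are hypothetical stand-ins for the combined Milky Way plus [$K$-correction]{} step and the template fit; the toy demonstration uses a simple contraction in place of a real fitter:

```python
def iterate_lightcurve_fit(photometry, kcorr, fit_lightcurve,
                           max_iter=16, tol=1e-3):
    """Alternate corrections and light-curve fitting until the stretch
    and date of maximum converge, as described in the text."""
    s = 1.0  # first iteration: stretch 1, t_max at brightest point
    t_max = max(photometry, key=lambda p: p["flux"])["mjd"]
    for _ in range(max_iter):
        corrected = kcorr(photometry, s, t_max)
        s_new, t_new = fit_lightcurve(corrected)
        if abs(s_new - s) < tol and abs(t_new - t_max) < tol:
            return s_new, t_new
        s, t_max = s_new, t_new
    return None  # non-converging SNe are cut from the sample

# Toy demonstration: a contraction converging to s = 1.1, t_max = 52.3.
phot = [{"mjd": 52.0, "flux": 1.0}, {"mjd": 55.0, "flux": 0.8}]
result = iterate_lightcurve_fit(
    phot,
    kcorr=lambda p, s, t: (s, t),
    fit_lightcurve=lambda c: (0.5 * c[0] + 0.55, 0.5 * c[1] + 26.15),
)
```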
This data set contains observations in 14 filters. $BVRI$ filter curves were obtained from @Bessell:90. We reiterate the warning of @Suntzeff:99 that these filter functions include a linear function of $\lambda$, which we have removed. The same is true of the redshifted $B$ and $V$ filters used for some observations by the HZSST ($B35,V35,B45,V45$), with filter curves given by @Schmidt:98. Filter curves for the [*HST*]{} filters on WFPC2 and ACS were generated using [*synphot*]{} [@Simon:96synphot]. There are two sets of ground-based $z$-band observations: those from @Tonry:03, and the $z^{\prime}$ observations taken with SuprimeCam on the Subaru telescope presented in @Barris:04. The @Tonry:03 $Z$-band response curve is as presented in that paper, and the SuprimeCam $z^{\prime}$ system response was provided by H. Furusawa (2004, private communication).
Once the date of maximum and stretch are measured, the points in the CMAGIC linear region can be determined and the linear relation fitted. Note that the CMAGIC fit is performed on the observed data points, not on the template fit used to determine the stretch and date of maximum. Again a [$\chi^{2}$]{} minimization routine is used based on MINUIT that allows for errors in both $B$ and [$B-V$]{}. The narrowness of the CMAGIC slope distribution, as shown in figure \[fig:slopes\], led W03 to suggest fitting all CMAGIC relations with a fixed slope set at the mean of this distribution. This is particularly important when working with high-redshift SNe because the observational error bars are sufficiently large that accurate slope measurements are difficult. We can make better use of the available data by assuming that low- and high-redshift SNe have similar [$\beta_{BV}$]{} distributions, as determined by examining low-redshift SNe. This is similar to the approach followed by previous analyses based on maximum magnitudes, where light-curve templates developed from low-redshift SNe are used to fit high redshift data. This leaves only one parameter in the fit, [$B_{BV0.6}$]{}. However, it is possible to test the assumption that the slope distributions are consistent with the handful of high-redshift SNe with sufficiently small observational errors (§\[sec:highzdemo\]).
We improve on the fixed slope assumption by numerically propagating the additional error due to the observed distribution of slopes using a Monte-Carlo style approach. The slope distribution is determined from the low-redshift SN sample, which for this purpose includes SNe Ia that are not in the Hubble flow. We take care to apply the same cuts, described in §\[subsec:cuts\], on this sample as we do on the sample used to directly determine the cosmological parameters, except for the redshift cut. This approach slightly overestimates the errors because the measured slope distribution includes observational errors, but in any case the net effect is quite small, inflating the errors on [$B_{BV0.6}$]{} by around 0.01-0.03 mag in quadrature without affecting the central values. In other words, the assumption of a fixed slope used in W03 works extremely well for current data sets, although we do include the additional error term in this analysis.
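A sketch of this Monte-Carlo propagation for a single hypothetical SN, assuming a Gaussian slope distribution with the low-redshift mean and RMS (the photometry values are invented):

```python
import numpy as np

rng = np.random.default_rng(42)
beta_mean, beta_rms = 1.98, 0.16  # low-redshift slope distribution

# Hypothetical linear-region points for one SN; the mean color differs
# from 0.6, so the fitted magnitude is sensitive to the assumed slope.
BmV = np.array([0.4, 0.7, 0.9, 1.0])
B = 22.0 + beta_mean * (BmV - 0.6)

def b_bv06(beta):
    # One-parameter fixed-slope fit: mean intercept at B - V = 0.6.
    return np.mean(B - beta * (BmV - 0.6))

# Scatter of B_BV0.6 over draws from the slope distribution is the
# extra error term, added in quadrature to the fit error.
draws = rng.normal(beta_mean, beta_rms, size=100_000)
sigma_slope = np.std([b_bv06(b) for b in draws])
```

Because the fitted magnitude responds to the slope through the mean offset of the colors from ${\ensuremath{B-V}}= 0.6$ (here 0.15 mag), the induced scatter is $0.16 \times 0.15 \approx 0.02$ mag, in line with the 0.01-0.03 mag range quoted above.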
CMAGIC RELATIONS AT HIGH REDSHIFT {#sec:highzdemo}
=================================
The first task in applying CMAGIC at high redshift is to determine if SNe Ia at high redshift follow the linear relations derived at low redshift. A brief examination of the CMAGIC diagrams shows that high-redshift SNe do obey linear relations between magnitude and color. However, in order to put this statement on a more quantitative footing, we investigate the consistency of the [$\beta_{BV}$]{} distributions. Most high-redshift observations have sufficiently large error bars that they do not provide useful slope constraints. However, there are a handful of relatively well observed SNe Ia that can be used to investigate this question: SNe 1997ce, 1997cj, 1998aw, 1998ax, and 1998ba. The requirement for membership in this set is that there be at least three points in the CMAGIC linear region and that $\sigma_{{\ensuremath{\beta_{BV}}}} < 0.5$. SN 1997ce is particularly interesting because it clearly displays a bump feature. Whatever physical mechanism causes the bump feature is still active at high redshift.
The best fit slopes for these SNe are tabulated in table \[tbl:highzslopes\] and the CMAGIC diagrams are plotted in figure \[fig:hizcmagex\]. The [$\chi^{2}$]{} values for these fits are improbably low, suggesting that the photometric errors have been overestimated, which is also true of the low redshift sample. The slopes are histogrammed in figure \[fig:slopehisto\]. The mean slope for the low redshift sample is $\left< {\ensuremath{\beta_{BV}}}\right> = 1.98 \pm 0.03$ and for the high redshift sample it is $\left< {\ensuremath{\beta_{BV}}}\right> = 1.96 \pm 0.11$, so there is no evidence for disagreement. A stronger statement requires more high quality multicolor observations of high redshift SNe Ia.
COSMOLOGY FITTING PROCEDURES {#sec:cosfits}
============================
We now proceed to the primary purpose of this paper, the cosmological analysis. Here we describe our methodology for performing these fits. The results presented here differ from previous papers in several respects. First, we have attempted to formalize the procedure whereby individual SNe are rejected or accepted into the data sample to a greater extent than has been true previously. Second, we make use of a blind analysis procedure in order to prevent experimenter bias from affecting the results. To this end, the results of the cosmological analysis have been hidden from the authors until the cuts and fitting procedure were finalized.
Determining the Cosmological Parameters
---------------------------------------
The luminosity distance equation can be written (in magnitudes) as $$m = 5 \log_{10} \left( {\cal D}_{L} \left( z, {\ensuremath{\Omega_{m}}}, {\ensuremath{\Omega_{\Lambda}}}\right ) \right) + {\ensuremath{\mathcal{M}}}- \alpha \left( s - 1 \right) \label{eqn:lumdist}$$ where $m$ is the observed magnitude, $s$ is the stretch, [$\mathcal{M}$]{} is a combination of the Hubble constant $H_{0}$ and the absolute magnitude of an SN Ia, and ${\cal D}_{L}$ is the $H_{0}$-free luminosity distance given in @Perlmutter:97. Because of the somewhat complicated nature of this parameter space, the most conservative approach to fitting this relation is to perform a grid search over the four fitting parameters and then marginalize over the two nuisance parameters, [$\mathcal{M}$]{} and [$\alpha$]{}. This is the procedure used in P99 and K03. Because of the highly nonlinear nature of the problem and the large errors on the cosmological parameters, looking for the point where the [$\chi^{2}$]{} has increased by 2.3 over its minimum leads to an underestimate of the errors. A [$\chi^{2}$]{} is calculated at each point on the grid, making use of equation \[eqn:lumdist\], and converted into a relative probability $P \propto \exp \left( - {\ensuremath{\chi^{2}}}/ 2 \right )$. The probabilities are then normalized over the grid, and the nuisance dimensions are summed over. The parameter ranges explored are ${\ensuremath{\Omega_{m}}}= [0,3]$, ${\ensuremath{\Omega_{\Lambda}}}=[-1,4]$, ${\ensuremath{\mathcal{M}}}= [24.7, 25.5]$,[^1] and ${\ensuremath{\alpha}}= [-0.5, 2.0]$. These ranges include more than 99.99% of the probability.[^2]
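The grid-normalize-marginalize procedure can be sketched on a toy quadratic [$\chi^{2}$]{} surface (one stand-in nuisance axis in place of the full [$\mathcal{M}$]{}, [$\alpha$]{} pair; the minimum location and widths are arbitrary):

```python
import numpy as np

om = np.linspace(0.0, 3.0, 31)    # Omega_m grid
ol = np.linspace(-1.0, 4.0, 51)   # Omega_Lambda grid
nu = np.linspace(24.7, 25.5, 17)  # stand-in nuisance grid
OM, OL, NU = np.meshgrid(om, ol, nu, indexing="ij")

# Toy chi^2 surface with a minimum at (0.28, 0.72, 25.1).
chi2 = (((OM - 0.28) / 0.3) ** 2 + ((OL - 0.72) / 0.5) ** 2
        + ((NU - 25.1) / 0.1) ** 2)

P = np.exp(-chi2 / 2.0)      # relative probability at each grid point
P /= P.sum()                 # normalize over the whole grid
P_cosmo = P.sum(axis=2)      # marginalize: sum over the nuisance axis
```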
We have also performed fits to the equation of state parameter $w$. To reduce the computational complexity of this problem, these fits are restricted to the flat-universe case. Here the four parameters are [$\Omega_{m}$]{}, $w$, [$\mathcal{M}$]{}, and [$\alpha$]{}. ${\cal D}_L$ must be modified appropriately, but in all other respects the fitting procedure is identical. The range of $w$ considered is $[-3.5, 0]$.
The errors on each [$B_{BV0.6}$]{} include the following terms:
- The uncertainty from the CMAGIC fits, including a contribution from the distribution of [$\beta_{BV}$]{}.
- The uncertainty of the stretch from the lightcurve fits multiplied by [$\alpha$]{}.
- A term due to the uncertainty in redshift. This includes an assumed peculiar velocity dispersion of 300 km s$^{-1}$ and the redshift measurement errors.
- [$\sigma_{int}$]{} magnitudes of intrinsic variation determined by fits to the low-redshift Hubble diagram.
At high redshift the redshift measurement errors are taken to be 0.001 when the redshift was measured from host galaxy lines and 0.01 when measured from SN features, as in P99 and K03. The intrinsic variation is assumed to be distributed as a Gaussian, and is determined by performing Hubble fits with low redshift SNe and finding the value that results in a [$\chi^{2}$]{} per degree of freedom of 1. A Monte-Carlo simulation was used to calculate the errors associated with this estimate by generating 100,000 realizations of a nearby SN sample with identical properties to the actual one (redshift distribution and photometry errors). For [$B_{BV0.6}$]{} with stretch correction, ${\ensuremath{\sigma_{int}}}= 0.12^{+0.03}_{-0.04}$ mag. Two additional estimators for [$\sigma_{int}$]{} were considered: the RMS corrected for photometry errors and peculiar velocities, and the maximum-likelihood (ML) estimator for this problem. All three agree, although the ML and [$\chi^{2}$]{} estimators are considerably more efficient than the corrected RMS. We note that this value for [$\sigma_{int}$]{} is slightly higher than that given in W03; the values there were based on samples with tighter color cuts.
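The [$\chi^{2}$]{}-per-degree-of-freedom determination of [$\sigma_{int}$]{} amounts to a one-dimensional root find: the intrinsic scatter is added in quadrature to each measurement error until [$\chi^{2}$]{}/dof reaches unity. A minimal sketch, using synthetic residuals rather than the actual low-redshift Hubble diagram:

```python
import numpy as np

def fit_sigma_int(residuals, errors, dof):
    """Find sigma_int such that chi^2/dof = 1 when sigma_int is added
    in quadrature to each measurement error (bisection on sigma_int)."""
    def chi2(s):
        return np.sum(residuals ** 2 / (errors ** 2 + s ** 2))
    if chi2(0.0) <= dof:     # data already consistent without extra scatter
        return 0.0
    lo, hi = 0.0, 10.0       # chi2(s) decreases monotonically with s
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if chi2(mid) > dof:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Synthetic sample: true intrinsic scatter 0.12 mag, photometric errors 0.05.
rng = np.random.default_rng(0)
true_sig, err = 0.12, np.full(100, 0.05)
res = rng.normal(0.0, np.sqrt(err ** 2 + true_sig ** 2))
sig = fit_sigma_int(res, err, dof=len(res))
```

The Monte-Carlo error estimate described above corresponds to repeating this procedure on many such synthetic realizations and examining the spread of the recovered values.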
Cuts on the Supernova Sample {#subsec:cuts}
----------------------------
The procedure used to estimate the systematic errors in this paper is an extension of that used by P99 and K03 and differs only in that we have endeavored to be even more methodical in our exploration of changes to the fits. For this paper we specify a [*primary*]{} fit defined by a set of cuts, which are designed to be fairly loose while still removing SNe with obviously bad data or that provide no useful constraint on the cosmological parameters. We then explore the effects of changing these cuts in great detail and use the information thus gleaned to estimate the systematic errors. As we discuss below, altering most of these cuts has little effect on the final result, but this systematic exploration raises the specter of an unconscious fine-tuning to obtain the expected result. To circumvent this possibility we have performed a blind analysis, as detailed in §\[subsec:blindness\].
The cuts can roughly be split into two categories: data quality and analysis cuts. Not all are used in every fit considered. Their values for the primary fit are summarized in table \[tbl:primarycuts\]. More complete descriptions are provided below. The same cuts are applied when determining the sample of SNe that are used to measure the intrinsic distribution of [$\beta_{BV}$]{}.
There are four data quality cuts:
- A cut on the minimum number of points in the linear CMAGIC region. As long as the date of maximum is well known, it is not necessary to have more than one point.[^3]
- A cut on the maximum allowable error on [$B_{BV0.6}$]{}. Objects with very poorly determined magnitudes add little statistical weight to the cosmology fit but make the Hubble diagram more difficult to read and in general obfuscate the result.
- A cut on the maximum allowable error in the date of maximum. This is used because the date of maximum is used to specify the points that are in the linear CMAGIC region. Points that fail this cut usually fail the next cut as well.
- A cut on the maximum allowable gap (in rest frame days) between the nearest point in either $B$ or $V$ and the date of $B$ maximum. If this gap is too large, the date of maximum, stretch, and maximum magnitude can easily be incorrect. This arises because the error in the light-curve template itself is currently not fully taken into account.
There are four analysis cuts:
- A minimum redshift cut for the cosmology fit. It is not applied when selecting the sample of SNe used to determine the intrinsic [$\beta_{BV}$]{} distribution.
- A maximum redshift cutoff for the cosmology fit, which is not used in the primary fit.
- A maximum allowable color excess at $B$ maximum when compared with the color model of K03. This can be interpreted as an extinction cut.
- A minimum allowable stretch value. SNe with best fit values below this are removed from the sample for the reason discussed below.
We find that our estimates for the cosmological parameters from [$B_{BV0.6}$]{} are relatively insensitive to changes in the cut on the color excess, but the same cannot be said of the [$m_{B}$]{} fits. Because we seek to compare the CMAGIC results directly with the [$m_{B}$]{} results, it is useful to choose a value of the color cut that can be used for both fits. Therefore, we have chosen to use the same cut as @Knop:03 ($< 0.25$) in the primary fit.
A minimum stretch cut of 0.7 is applied to our primary fit sample because our [$K$-corrections]{} may not be reliable for extremely low-stretch SNe, as their spectra display strong Ti features that are not well represented by our spectral template [@Nugent:02]. We require spectroscopic identification for our sample. There is only one SN that passes the other cuts but lacks a firm spectroscopic ID: SN 2001fo from @Barris:04. As was the case in K03 and P99, SN 1997O has been manually excluded from our sample; when included it is a 7 [$\sigma$]{} outlier from the best fit cosmology. Two of the low-redshift SNe in our sample (SN 1997br and SN 1997bp) appear to have internal inconsistencies in their photometry, displaying a far higher degree of scatter in both light-curve and CMAGIC fits than can be explained by their quoted photometric errors.[^4] We have taken the conservative approach of removing them from the sample; when included, they have no impact on the cosmological parameters. In addition to these cuts, the maximum redshift of SNe used to measure the [$\beta_{BV}$]{} distribution is specified by another cut.
There are 119 SNe at redshifts greater than 0.01 among the 131 SNe in our baseline sample. Lower-redshift SNe can also be included in our fits but add essentially no statistical weight because their errors are dominated by peculiar velocities; they are still useful for measuring the intrinsic slope distribution. The data quality cuts at the levels of the primary fit eliminate 62 of these SNe from the primary sample, and the analysis cuts remove five more. Of these 119, 53 are at $z > 0.1$, of which 28 are eliminated by the quality cuts and four by the analysis cuts. We have explored the effects of both relaxing and tightening the cuts in a systematic fashion. Many of the SNe fail multiple cuts, and the cuts are not applied in any particular order, so it would be misleading to quote the number of SNe removed by each cut. However, a list of which SNe are removed by each cut is potentially interesting, and is provided in appendix \[apndx:cutremoved\].
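Because the cuts are applied independently rather than sequentially, a per-cut accounting is most naturally expressed as a set of boolean masks. The sketch below is purely schematic: the SN records and threshold values are invented stand-ins, not the entries of table \[tbl:primarycuts\]:

```python
# Hypothetical SN records; field names and numbers are illustrative only.
sne = [
    {"name": "SN A", "z": 0.02, "n_linear": 3, "err_B": 0.10,
     "err_tmax": 0.8, "gap": 4.0, "ebv": 0.05, "s": 0.95},
    {"name": "SN B", "z": 0.50, "n_linear": 0, "err_B": 0.30,
     "err_tmax": 3.0, "gap": 12.0, "ebv": 0.10, "s": 0.90},
    {"name": "SN C", "z": 0.40, "n_linear": 2, "err_B": 0.15,
     "err_tmax": 0.5, "gap": 2.0, "ebv": 0.40, "s": 0.65},
]

# Each cut is an independent predicate; thresholds are toy values.
cuts = {
    "min_linear_points": lambda sn: sn["n_linear"] >= 1,
    "max_mag_error":     lambda sn: sn["err_B"] <= 0.5,
    "max_tmax_error":    lambda sn: sn["err_tmax"] <= 1.0,
    "max_gap":           lambda sn: sn["gap"] <= 7.0,
    "min_redshift":      lambda sn: sn["z"] >= 0.01,
    "max_color_excess":  lambda sn: sn["ebv"] < 0.25,
    "min_stretch":       lambda sn: sn["s"] >= 0.7,
}

# Record every cut each SN fails (no ordering is imposed on the cuts).
failed = {sn["name"]: [c for c, ok in cuts.items() if not ok(sn)]
          for sn in sne}
passing = [sn["name"] for sn in sne if not failed[sn["name"]]]
```

Evaluating all cuts for every SN, rather than stopping at the first failure, is what makes the per-cut listing of the appendix possible.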
Blindness {#subsec:blindness}
---------
“Experimenter bias” occurs when an analysis is affected by the expectations of the experimentalist. Such bias is frequently unconscious and can take quite subtle forms. For example, a result that disagrees strongly with a previous result is frequently subjected to more scrutiny than one that appears to be in agreement. This may bias experimenters toward finding errors that cause their result to disagree with expectations while making it less likely that they will discover errors with the opposite effect. Since the research process has a natural termination point (publication), if the decision to stop analyzing a result is at all influenced by the value of the result, a bias will be introduced. A useful summary of these issues can be found in @Heinrich:03. It has long been recognized that a valuable technique for mitigating experimenter bias is to hide the final results of the experiment from the experimenter for as long as possible; this is known as blind analysis. Such an approach is particularly useful in an analysis with a substantial number of cuts, such as that presented here. In the medical fields, double-blind procedures (which hide some details of the experiment from both the test subject and the experimenters) are used almost as a matter of course. Naturally, hiding the details of the experiment from the subject is not of great concern in astronomical research.
A critical point is that these techniques do not seek to completely hide all information during the analysis. In fact, the goal is to hide as little information as possible while still acting against experimenter bias. Human judgment and scientific experience continue to play a critical role in a blind analysis. One does not mechanically carry out the steps of the analysis and then publish the results. All that a blind analysis does is prevent unconscious misuse of particular types of information during the analysis process. The kind of data that are excluded from consideration (namely, the final answer derived from each option under consideration) is invariably that which no reasonable scientist would allow to consciously influence his or her decision making process. However, subconscious effects are still present, and this is what this approach helps prevent.
It is important to design the blindness technique such that subsidiary diagnostics remain available even while the final answer is hidden. Errors are initially present in any analysis, and mechanisms must be available to catch these problems even while the result remains blinded. Specifically, our goal is to hide the values of [$\Omega_{m}$]{} and [$\Omega_{\Lambda}$]{} until the cuts and fitting procedures have been finalized, while preserving as much ancillary information as possible. In particular, our method preserves the residuals of individual SNe with respect to the Hubble line, which is extremely useful when diagnosing the fits. For example, an error in the [$K$-corrections]{} might result in all SNe in a given redshift range departing significantly from the Hubble line; this problem would still be detectable in our blinded fits. In addition, the method preserves the shifts in [$\Omega_{m}$]{}, [$\Omega_{\Lambda}$]{} between fits to different subsamples – if excluding a particular SN causes the unblinded result to shift by $\Delta {\ensuremath{\Omega_{m}}}= 0.1,\ \Delta {\ensuremath{\Omega_{\Lambda}}}= 0.2$, the blinded result shifts by the same amount, which is important when investigating systematic errors.
The technique used here is based on altering the true fit estimates. Hidden, but fixed, offsets are added to [$\Omega_{m}$]{} and [$\Omega_{\Lambda}$]{}, and this change is propagated through to the [$B_{BV0.6}$]{} values. In essence the cosmological parameters are fitted twice, with the magnitudes modified between fits, but the results of the first fit are never output. Because it would be possible to circumvent the blindness if the real [$B_{BV0.6}$]{} values were known, these values must be kept hidden. All of the programs used to plot CMAGIC diagrams add random offsets to the $B$ magnitudes for display purposes. Furthermore, the CMAGIC fitter and cosmology fitter are integrated so that the true [$B_{BV0.6}$]{} values are not output.
The expression for the luminosity distance cannot be evaluated in terms of simple functions except in limited cases, so the magnitude modification is calculated numerically. The results of the first, unmodified, fit are marginalized to determine the secret true measured values $\Omega_{mT}$ and $\Omega_{\Lambda T}$. The hidden offsets are then applied to these values, and the difference in magnitudes between the two cosmologies is calculated and applied. If $\Delta {\ensuremath{\Omega_{m}}}$ and $\Delta {\ensuremath{\Omega_{\Lambda}}}$ are the hidden offsets, then the following function is added to [$B_{BV0.6}$]{} for each SN: $$\Delta {\ensuremath{B_{BV0.6}}}\left( z \right) = 5 \log_{10}
{\cal D}_{L} \left( z, \Omega_{mT} + \Delta {\ensuremath{\Omega_{m}}},
\Omega_{\Lambda T} + \Delta {\ensuremath{\Omega_{\Lambda}}}\right) -
5 \log_{10} {\cal D}_{L} \left( z, \Omega_{mT}, \Omega_{\Lambda T} \right),
\label{eqn:magshift}$$ where ${\cal D}_{L}$ is as in equation \[eqn:lumdist\]. The cosmological fit is then redone with the new magnitudes and this result is output. It is safe to output the modified magnitudes, which can be used to construct a Hubble diagram and to perform various tests on the fit.
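A numerical sketch of equation \[eqn:magshift\] follows. The luminosity distance here is the standard textbook $H_{0}$-free expression evaluated by simple trapezoidal integration, not the fitter used in this analysis, and the cosmological values and offsets are illustrative:

```python
import numpy as np

def lum_dist(z, Om, OL, n=4096):
    """H0-free luminosity distance (in units of c/H0) for a general
    (Om, OL) cosmology; a textbook sketch, not this paper's code."""
    Ok = 1.0 - Om - OL
    zp = np.linspace(0.0, z, n)
    invE = 1.0 / np.sqrt(Om * (1 + zp) ** 3 + Ok * (1 + zp) ** 2 + OL)
    # Comoving distance via the trapezoidal rule.
    Dc = np.sum(0.5 * (invE[1:] + invE[:-1]) * np.diff(zp))
    if Ok > 1e-8:                                   # open
        Dm = np.sinh(np.sqrt(Ok) * Dc) / np.sqrt(Ok)
    elif Ok < -1e-8:                                # closed
        Dm = np.sin(np.sqrt(-Ok) * Dc) / np.sqrt(-Ok)
    else:                                           # flat
        Dm = Dc
    return (1.0 + z) * Dm

def blinding_shift(z, Om_true, OL_true, dOm, dOL):
    """Magnitude offset between the blinded and true cosmologies,
    added to each SN's magnitude (cf. equation [eqn:magshift])."""
    return (5 * np.log10(lum_dist(z, Om_true + dOm, OL_true + dOL))
            - 5 * np.log10(lum_dist(z, Om_true, OL_true)))

shift = blinding_shift(0.5, 0.3, 0.7, 0.2, -0.1)
```

Note that the shift vanishes identically for zero offsets, so unblinding simply amounts to omitting this step.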
The simplest method to choose the hidden offsets is to generate them randomly. This performs poorly in this case because there are several non-physical regions in the [$\Omega_{m}$]{}, [$\Omega_{\Lambda}$]{} parameter space. Negative values of [$\Omega_{m}$]{} result in a non-convergent luminosity distance integral. For high values of [$\Omega_{\Lambda}$]{} the universe did not experience a Big Bang, but is instead rebounding from a previous bout of contraction [@Carroll:92]. In such a universe there is a maximum observable redshift, and if any of the SNe are at higher redshifts the luminosity distance expression cannot be evaluated. A randomly generated offset could easily push the cosmological parameters into one of these regions. Instead we have chosen to generate the hidden offsets by specifying the desired values of [$\Omega_{m}$]{} and [$\Omega_{\Lambda}$]{} for a particular SN sample (the primary fit). A special version of the cosmological fitter determines the offsets between a fit to the primary sample and the chosen value[^5] [$\Omega_{m}$]{}= 1, [$\Omega_{\Lambda}$]{}= 1.1. These offsets are then used for all other fits.
As long as the resulting fit values for [$\Omega_{m}$]{} and [$\Omega_{\Lambda}$]{} are roughly equal to ($\Omega_{mT} + \Delta {\ensuremath{\Omega_{m}}}$, $\Omega_{\Lambda T} + \Delta {\ensuremath{\Omega_{\Lambda}}}$), this preserves the residuals with respect to the fit by construction. Because the same hidden offsets are used for all fits, relative shifts between different fits are also approximately preserved. The caveat is that, for a particular value of [$\Omega_{m}$]{} and [$\Omega_{\Lambda}$]{}, the shape of the luminosity distance equation effectively weights SNe depending on their redshift, so altering the values of these parameters may cause the relative shifts in the blinded fits to differ slightly from those for the true values; for this reason the offsets are determined iteratively, although the effect is negligible as long as the hidden offsets are relatively small. Tests on both previous data sets (specifically, the low-extinction primary subset of K03) and artificially generated data show that this procedure works: the resulting cosmological parameter estimates are equal to the unblinded result plus the specified offset. The offset between the blind target values and the actual estimates for this analysis was somewhat larger than anticipated, so the specified offset does not quite match the actual shift. However, the relative shifts are preserved accurately over small distances, which allowed us to compare different fits to the same data prior to unblinding.
A similar procedure is followed in the $w$ fits, although a different set of offsets is used. Because problems related to non-physical regions of the parameter space are less severe in this case, the offsets to [$\Omega_{m}$]{} and $w$ were randomly generated from the ranges $[-0.2,0.2]$ and $[-0.4,0.4]$, respectively.
Should a mistake in the analysis be found after the result is unblinded, it should still be corrected. In this situation, one should publish both the corrected and uncorrected results and note the effects of the discovered error on the result. An example of this can be found in @Akerib:04. We also note that it is important to determine the systematic errors prior to unblinding, or it would be possible to explain away any unexpected results by inflating them. This technique certainly does not prevent all types of bias, but it does provide an opportunity to improve the situation, and thus is worth pursuing.
Complete Fitting Procedure (Blind)
----------------------------------
Our cosmological fits proceed in the following order:
- The SNe used to measure the intrinsic [$\beta_{BV}$]{} distribution are determined by applying the specified cuts. The distribution of [$\beta_{BV}$]{} is then calculated from these SNe.
- A one-parameter ([$B_{BV0.6}$]{}) CMAGIC fit is performed for all SNe in the data sample using a Monte-Carlo fitting technique that takes into account the distribution of [$\beta_{BV}$]{} from the distribution calculated in the previous step. The fitted [$B_{BV0.6}$]{} values are not output.
- The cuts are applied again to determine the SNe used to measure [$\Omega_{m}$]{} and [$\Omega_{\Lambda}$]{}. The same cuts are used, except for the redshift ranges in §\[subsec:cuts\].
- A cosmological fit is performed. Estimates for [$\Omega_{m}$]{} and [$\Omega_{\Lambda}$]{} are calculated but not output.
- The hidden offsets are read in and added to [$\Omega_{m}$]{} and [$\Omega_{\Lambda}$]{}. A magnitude offset is applied to each SN based on equation \[eqn:magshift\].
- The cosmology is refitted with the new magnitudes. These results are output.
- The altered magnitudes are used to construct a Hubble diagram.
Once the blindness was removed, the fits were redone without the secret offset step. We have also performed fits using the maximum $B$ magnitude, [$m_{B}$]{}. Since these fits are not a principal result of this paper they can be performed in an unblinded fashion, allowing us to test our procedures.
COSMOLOGICAL RESULTS {#sec:cosresults}
====================
Figure \[fig:baselinecontour\] shows the [$\Omega_{m}$]{}, [$\Omega_{\Lambda}$]{} confidence regions of our primary fit, based on 31 nearby and 21 distant SNe Ia. An additional nine very nearby SNe ($z < 0.01$) are used when determining the [$\beta_{BV}$]{} distribution (for a total of 40). The resulting estimates for the cosmological parameters are ${\ensuremath{\Omega_{m}}}= 1.26^{+0.38}_{-0.51}$ and ${\ensuremath{\Omega_{\Lambda}}}= 2.20^{+0.41}_{-0.67}$. If we require a flat universe, consistent with recent CMB results, then ${\ensuremath{\Omega_{m}}}= 0.19^{+0.06}_{-0.06}$. These confidence regions are comparable to those from P99 (though not as tight as those from K03) despite involving fewer SNe, thanks to the smaller value of [$\sigma_{int}$]{} for CMAGIC. The fit residuals are shown in figure \[fig:baselinehubble\].
[$\Omega_{m}$]{} and [$\Omega_{\Lambda}$]{} are not the natural variables for this measurement, as they are not independent for this data set. The result of our analysis is better expressed in the principal axes frame of the error ellipse $\Omega_1 \equiv 0.790 {\ensuremath{\Omega_{m}}}- 0.613 {\ensuremath{\Omega_{\Lambda}}}$ (the short axis) and $\Omega_2 \equiv 0.613 {\ensuremath{\Omega_{m}}}+ 0.790 {\ensuremath{\Omega_{\Lambda}}}$ (the long axis). Roughly, $\Omega_1$ can be thought of as measuring acceleration and $\Omega_2$ as measuring geometry. Analyzing the results in this frame has considerable benefits while calculating systematic errors and when comparing the CMAGIC results to those derived from maximum magnitudes. In this frame the results of the primary fit are $\Omega_1 = -0.349^{+0.117}_{-0.131}$ and $\Omega_2 = 2.502^{+0.530}_{-0.838}$. The values of the nuisance parameters are ${\ensuremath{\alpha}}= 0.516^{+0.193}_{-0.206}$ and ${\ensuremath{\mathcal{M}}}= 25.166^{+0.049}_{-0.045}$, and they are almost completely statistically independent. Magnitudes and redshifts are provided in table \[tbl:primarysamplowz\] for the low-redshift sample, and in table \[tbl:primarysamphighz\] for the high redshift sample. The [$\chi^{2}$]{} of this fit is 49.5 for 52 degrees of freedom. In the next section we discuss variations of the cuts, which produce different sets of SNe. The stretch-luminosity relation is shown in figure \[fig:stretchlum\]. When compared with the [$m_{B}$]{} relation (Fig. 13 of @Knop:03, for example), the evidence for the utility of a stretch correction is much weaker for [$B_{BV0.6}$]{}.
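The change of variables to the principal-axis frame is a simple rotation. The check below applies the quoted coefficients to the primary-fit central values given above; agreement with the quoted $\Omega_1$, $\Omega_2$ is only approximate because all of the published numbers are rounded:

```python
import numpy as np

# Rotation into the principal axes of the error ellipse:
# Omega_1 = 0.790 Om - 0.613 OL (short axis),
# Omega_2 = 0.613 Om + 0.790 OL (long axis).
R = np.array([[0.790, -0.613],
              [0.613,  0.790]])

def to_principal_axes(Om, OL):
    return R @ np.array([Om, OL])

# Primary-fit central values from the text.
O1, O2 = to_principal_axes(1.26, 2.20)
```

Working in this rotated frame decorrelates the two parameters, which is what makes the systematic shifts of the next section easy to state axis by axis.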
Our estimates for $w$ in a flat universe are shown in figure \[fig:wcontour\]. These are combined with the measurement of the angular size of the baryon acoustic peak (BAP) in SDSS galaxy clustering statistics at $z=0.35$ [@Eisenstein:05], which are quite complementary to the SN measurements. The resulting constraint is $w = -1.21^{+0.15}_{-0.12}$ and ${\ensuremath{\Omega_{m}}}= 0.25^{+0.02}_{-0.02}$ (statistical errors only).
This is the first analysis that treats the combined data from the different SN groups in a fully consistent manner. Unlike @Leibundgut:01 or @Riess:04, we find no significant evidence for anomalously blue colors in the high-redshift SNe, even though this sample contains many of the same objects as those studies. Figure \[fig:maxcolor\] shows the [$B-V$]{} color at $B$ maximum for the low- and high-redshift primary fit samples. The highly negative color point in the high-redshift sample is due to (by far) the most poorly measured SN, SN 1997af, which has ${\ensuremath{E(B\,-\,V)_{Bmax}}}= -0.24 \pm 0.24$. Excluding this point, the mean color of the low-redshift sample is ${\ensuremath{(B-V)_{Bmax}}}= 0.045 \pm 0.027$ and that of the high-redshift sample is ${\ensuremath{(B-V)_{Bmax}}}= 0.027 \pm 0.019$, where standard errors are quoted.
SYSTEMATICS {#sec:systematics}
===========
We explore various systematic errors by performing alternate fits and comparing the results with our primary fit. Because of the way in which our blindness scheme is constructed, this comparison was possible before the final answer was known. As was the case in [@Knop:03], we find that the effects of most of the systematics act along the long axis of our error ellipse. They therefore do not significantly affect the value of the SN measurements for determining if the Universe is accelerating, but do substantially limit our ability to measure geometry. Fortunately, this is the dimension in which CMB measurements are extremely powerful.
There are two types of systematic error possible in this analysis. First, there are the systematics arising from alterations in the fitting procedures, [$K$-corrections]{}, etc. Second, there are those arising from the cuts applied to the sample. Ideally this second set would be handled by a complete Monte-Carlo simulation of the SN sample. Unfortunately, far too many pieces of information are missing for the results of such a study to be useful. In order to construct a believable Monte-Carlo, it would be necessary to have a reasonable understanding of the intrinsic luminosity and extinction distributions, which have not been convincingly measured. To make matters substantially worse, it would also be necessary to have a good understanding of the search and follow-up strategy used to construct the SN sample. Because the sample used in this paper is composed primarily of SNe from the literature, a clear definition of the search techniques and procedures is simply not available, and providing the results of such a procedure would convey a misleading sense of accuracy. We therefore proceed by calculating the effects of changing the cuts applied to our sample over what we consider to be a reasonable range and combining the resulting shifts as an estimate of the systematic error. Clearly this procedure is somewhat subjective, but any credible improvement requires the availability of large, well-defined SN samples such as those that should be provided by the SNfactory, SNLS, SDSS Supernova Survey, and ESSENCE.
The effects of these shifts can most precisely be stated in terms of the principal axes of the primary fit error ellipse, $\Omega_1$ and $\Omega_2$, which is the primary justification for their use. Recall that for the primary fit $\Omega_1 = -0.349^{+0.117}_{-0.131}$ and $\Omega_2 = 2.502^{+0.530}_{-0.838}$ (statistical errors only). We follow the standard practice of adding the negative and positive shifts in quadrature when handling asymmetric errors (however, see [@Barlow:03] for criticism of this procedure). The resulting systematic errors are $^{+0.060}_{-0.062}$ on $\Omega_1$ (the short axis), $^{+0.476}_{-0.545}$ on $\Omega_2$ (the long axis), and $^{+0.029}_{-0.049}$ on the value of [$\Omega_{m}$]{} in a flat universe. The shifts are summarized in table \[tbl:identifiedsystematics\], and detailed individually in the following sections. Some representative examples can be seen in figure \[fig:baseline\_comp\]. An essentially identical procedure has been carried out for the fit to $w$, [$\Omega_{m}$]{} in a flat Universe, including the BAP constraint, resulting in systematics error estimates of $^{+0.07}_{-0.12}$ on $w$ and $^{+0.01}_{-0.01}$ on [$\Omega_{m}$]{}. Note that this only includes the systematics from the SN measurement. Unlike the [$\Omega_{m}$]{}, [$\Omega_{\Lambda}$]{} fits, here the statistical errors are dominant, reflecting the more challenging nature of the $w$ measurement.
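The quadrature combination of asymmetric shifts referred to above can be sketched directly; the shift values in the example are illustrative, not entries from table \[tbl:identifiedsystematics\]:

```python
import numpy as np

def combine_asymmetric(shifts):
    """Add positive and negative shifts separately in quadrature
    (the standard practice; see Barlow 2003 for criticism)."""
    pos = np.sqrt(sum(s ** 2 for s in shifts if s > 0))
    neg = -np.sqrt(sum(s ** 2 for s in shifts if s < 0))
    return pos, neg

# Toy shifts in Omega_1 from several alternate fits.
pos, neg = combine_asymmetric([+0.052, -0.048, +0.024, -0.030])
```

Each identified systematic contributes its shift to exactly one side of the error, so the positive and negative totals are generally unequal.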
Variation of Fitting Procedures {#subsec:systematicsfitting}
-------------------------------
There are many reasonable ways to alter the CMAGIC fitting procedures that result in slightly different values of the cosmological parameters. We have attempted to explore some of these variations.
P99 found that using a floating value of [$\alpha$]{} when propagating the stretch error into the fit magnitude artificially inflates [$\alpha$]{}, as this decreases the [$\chi^{2}$]{} by increasing the magnitude errors. Therefore, [$\alpha$]{} was fixed for the purposes of error propagation. As in K03, we find no evidence for this effect. Fixing [$\alpha$]{} at the estimate from the primary fit (${\ensuremath{\alpha}}= 0.5$) has essentially no effect on the [$\Omega_{m}$]{}, [$\Omega_{\Lambda}$]{} values except to shrink the error bars slightly, as expected. Not performing a stretch correction (${\ensuremath{\alpha}}= 0$) shifts the error ellipse primarily along $\Omega_2$ by 0.06. This is not included in the final value for the systematic error.
It is possible to include estimates about the error in the stretch and date of maximum in the CMAGIC fitting procedure, since they influence which points are included in the CMAGIC fit. A modified version of the fitting code has been used to investigate this possibility. This approach is considerably more expensive computationally, and for this data sample it turns out to make no difference. In our fits we have effectively assumed that $B$ and [$B-V$]{} are independent variables. An alternative formulation of the linear relations that treats $B$ and $V$ as independent variables is possible. This also has no effect on the fit values (less than 0.005 mag for any SN).
The light-curve fitting procedure used in P99 differs slightly from that used here (and by K03) in that the fits to the $V$ band were performed fixing the stretch and date of maximum to the values derived from a $B$ only fit. This procedure arose from concerns that the rest frame $V$ light curves for the high-redshift sample are more poorly sampled than the rest frame $B$ light curves, which is not the case for the low-redshift sample. Thus, a light-curve fitting procedure that treats both bands on an equal footing might effectively introduce a bias in the fits. This is of considerably less concern for this data sample, since by its nature CMAGIC demands good $V$-band coverage, but to guard against this problem we re-calculated all of the lightcurve fits following this prescription, which affects the CMAGIC fits because it changes the values of the stretch and date of maximum. The resulting effect on the error contours was minor, and primarily towards larger values of $\Omega_2$ by 0.144.
Variations in the [$K$-corrections]{} are investigated by considering alternative versions of the spectral template. In particular, we follow K03 by making use of a $U$-enhanced version of the template with $U - B = -0.5$ instead of $-0.4$ as in our primary fit. This shifts the error ellipse primarily along the short axis, with $\Delta \Omega_1 = -0.052$ (towards smaller values of [$\Omega_{m}$]{}). The [$\chi^{2}$]{} worsens slightly to 50.9. This is, by far, the most significant source of uncertainty related to alterations in the fitting procedures. Simply treating this error as a statistical contribution to each SN is a completely inadequate representation of its effect on the cosmological results. Clearly, future projects would benefit substantially from additional constraints on the $U$-band behavior of SNe Ia.
Variation of Cuts
-----------------
We considered both increasing and decreasing the cut values for all of the cuts described in §\[subsec:cuts\]. Here we only present those that had a measurable effect on the error ellipse or are interesting for some other reason.
Requiring SNe to have observations within 5 rest-frame days of maximum eliminates two low-redshift SNe (SN 1998ab and SN 2000fa) and one at high redshift (SN 1996E), and induces a shift along the long axis of $\Delta \Omega_2 = +0.139$. Loosening the requirement to 10 days adds one high-redshift SN (SN 2001jp) and results in a shift along the $\Omega_1$ axis of +0.024, towards higher values of [$\Omega_{m}$]{}. Changing the minimum allowable redshift from 0.01 to 0.015 has an extremely small effect on the fit results while eliminating six low-redshift SNe. Halving (to 0.25) or tripling (to 1.5) the cut on the maximum allowable magnitude error removes five high-redshift SNe or adds one, respectively, but does not affect the results substantially, as one would expect given the low weight assigned to SNe with such large errors.
Placing a substantially tighter cut on the color at maximum \[${\ensuremath{E(B\,-\,V)_{Bmax}}}\le 0.1$, similar to that used for the low-extinction subset of K03\] shifts the error contours by a substantial amount along the long axis (towards a flat universe) by $\Delta \Omega_2 = -0.467$, eliminating three high and eight low redshift SNe. Using a color cut of 0.125 (half of the primary fit value) is not substantially different than using 0.1. Relaxing the color cut to 0.5 adds two high-redshift (SN 1998aw and SN 2002ad) and four low redshift SNe, and moves the contours principally along the short axis by $\Delta \Omega_1 = -0.048$. While less affected by extinction than [$m_{B}$]{}, CMAGIC is not completely unaffected. The analysis presented in this paper suggests that assumptions about the extinction law are not a significant systematic bias, and therefore future studies, including those that use CMAGIC, may benefit by applying an extinction correction. This must be weighed against the decrease in independence of the two magnitudes after correction.
Requiring that the date of maximum be known to better than 0.5 days removes a large number of high redshift SNe from the sample (nine), but has little effect except to inflate the error contours along the long axis. Relaxing the requirement to 2 days adds eight poorly measured high-redshift SNe and shifts the ellipse outwards along the long axis by $\Delta \Omega_2 = +0.115$.
Requiring that there be at least two observations in the CMAGIC linear region, and hence providing some level of confidence that the CMAGIC relations are being obeyed, does have a non-negligible effect on the cosmological parameters. Three high-redshift SNe are eliminated (SN 1998as, SN 2002ab, and SN 2002kd), and the error ellipse shifts primarily outward along the long axis by $\Delta \Omega_2 = +0.23$. Even when two points are required in the linear region, the quality of the high redshift data is such that the CMAGIC slope [$\beta_{BV}$]{} cannot be usefully fitted to each SN.
As can be seen from the above discussion, the primary systematic effect related to the cuts on the SN sample is associated with the extinction cut. A better understanding of the extinction distribution would help reduce this systematic considerably. Note that we do not apply an extinction correction, so we are more sensitive to the extinction cut than some other analyses – although they trade this off with sensitivity to extinction and the intrinsic peak color of SNe Ia. Fortunately, the systematics arising from the cut selection are primarily along the long axis of the error ellipse, and hence have little effect on our detection of acceleration.
Other Systematics {#subsec:othersys}
-----------------
We have also considered limiting our low-redshift SN sample to only those from large, systematic SN studies in order to limit any systematic errors arising from differences in calibration. There are three major low-redshift samples: @Hamuy:96, @Riess:99a and @Jha:05. Excluding all nearby SNe that are not from one of the above three sources has a very minor effect.
To test the sensitivity of our results to individual SNe, we have performed a jack-knife test by removing each of the 21 high-redshift SNe individually and recalculating the cosmological fit. Our values for [$\Omega_{m}$]{} and [$\Omega_{\Lambda}$]{} are sensitive to the removal of SN 2001ix and SN 2002kd, both at the very high redshift end of the sample. Removing either of these SNe shifts the contours primarily along the long axis, although in opposite senses. Removing SN 2001ix results in a shift inward of $\Delta \Omega_2 = -0.28$, and removing SN 2002kd shifts the contour outward by $\Delta \Omega_2 = 0.31$. Interestingly, their effects on the cosmological parameters nearly cancel. This analysis would benefit from additional SNe in this redshift range, but overall the results are reasonably robust.
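The jack-knife loop described above can be sketched as follows. This is not the paper's fitting pipeline: `fit_statistic` is a hypothetical stand-in (a simple mean) for the full $\chi^2$ cosmology fit, and the magnitudes are illustrative values only; the sketch shows only the leave-one-out structure of the test.

```python
# Sketch of the jack-knife test: remove each high-redshift SN in turn and
# redo the fit. `fit_statistic` is a hypothetical stand-in for the full
# (Omega_m, Omega_Lambda) chi^2 fit; the input values are illustrative.

def fit_statistic(sample):
    """Stand-in for the full cosmology fit: a plain mean."""
    return sum(m for _, m in sample) / len(sample)

def jackknife(sample):
    """Refit with each SN removed; return {name: shift from the full fit}."""
    full = fit_statistic(sample)
    shifts = {}
    for i, (name, _) in enumerate(sample):
        reduced = sample[:i] + sample[i + 1:]
        shifts[name] = fit_statistic(reduced) - full
    return shifts

highz = [("SN2001ix", 0.8), ("SN2002kd", 1.4), ("SN1998as", 1.1)]
shifts = jackknife(highz)
```

As in the text, the two SNe whose removal shifts the result most are flagged by the largest absolute shifts, and their effects can partially cancel since the shifts carry opposite signs.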
Properly speaking, [$\sigma_{int}$]{} should be another quantity that is marginalized over while performing the cosmological fits. To determine if this is necessary, we performed fits in which [$\sigma_{int}$]{} was varied by 1 [$\sigma$]{} in each direction, and found that the effects on the cosmological parameters were negligible (less than 0.1 [$\sigma$]{} in $\Omega_1$ and $\Omega_2$).
Since all of the high-redshift supernovae (and many of those at low redshift) come from flux-limited samples, they suffer from Malmquist bias [@Malmquist:36]. We note that only a difference in the amount of Malmquist bias between the low- and high-redshift SN samples can affect the cosmological results. This effect is discussed extensively in P99 and K03, and we adopt the estimates contained therein for these samples: 0.01 mag for P99 and 0.03 mag for K03. P99 also estimated the Malmquist bias for the @Hamuy:96 sample as 0.04 mag. The @Riess:99a and @Jha:05 samples were primarily discovered using a galaxy catalog search, so they may suffer from little or no Malmquist bias [@Li:01]. We therefore adopt a Malmquist bias of 0 mag for these samples. It is difficult to estimate the Malmquist bias for the remaining SNe in the low redshift sample, since they were discovered in a rather inhomogeneous fashion. However, since they constitute only a small fraction of the sample, the effects of any Malmquist bias on the cosmological parameters from this sample are expected to be negligible, and so we adopt a value of 0 mag. For the remaining portion of the high-redshift sample (approximately half) we provisionally use the same value as for the P99 SNe, 0.01 mag. To test the effects of this bias on our estimate, we apply the offsets to each sample and recalculate the fit. The resulting shift in the cosmological parameters is quite small, less than $0.1\ \sigma$ in both dimensions.
Appendix \[apndx:bumps\] contains a discussion of the effects of the ‘bump’ in the CMAGIC diagram exhibited by some SNe. The effects of this systematic are negligible along both axes (less than 0.05 [$\sigma$]{}).
ANALYSIS OF RESULTS {#sec:analysis}
===================
There are two channels available for analyzing the results of this paper. First, the estimates of the cosmological parameters can be considered in isolation. Second, the CMAGIC results can be compared with a maximum-magnitude fit to the same SNe. Several of the systematics should affect both samples equally (e.g., Malmquist bias); therefore, this comparison should be more precise. However, this requires that the covariance between [$m_{B}$]{} and [$B_{BV0.6}$]{} be determined.
Constraints on the Cosmological Parameters
------------------------------------------
The results of a CMAGIC fit to currently published SN data strongly favor an accelerating Universe — in fact, more strongly than previous results based on [$m_{B}$]{}. Perhaps more interesting is that the fit contours depart mildly from a flat universe. In the principal axis frame, a flat universe corresponds to $\Omega_2 = 0.756 \pm 0.010$ for ${\ensuremath{\Omega_{m}}}= 0.191$. Once systematics are taken into account, the disagreement is 1.75[$\sigma$]{}, which is expected to occur approximately 8% of the time due to random chance. A similar result was seen in the SN sample of [@Tonry:03], although at a somewhat lower level of significance. Both results are interesting, but not yet strong enough to be of serious concern. One of the lessons of blind analyses is that 1.5+[$\sigma$]{} disagreements occur in science more frequently than our intuition, developed from exposure to non-blind experiments, often expects.[^6]
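The quoted flat-universe value of $\Omega_2$ follows directly from the principal-axis combination given in the conclusions, $\Omega_2 = 0.613\,{\ensuremath{\Omega_{m}}} + 0.790\,{\ensuremath{\Omega_{\Lambda}}}$, together with the flatness condition ${\ensuremath{\Omega_{m}}} + {\ensuremath{\Omega_{\Lambda}}} = 1$. A minimal numerical check:

```python
# Check of the quoted flat-universe value of Omega_2. The principal-axis
# combination Omega_2 = 0.613*Omega_m + 0.790*Omega_L is the one given in
# the conclusions; flatness fixes Omega_L = 1 - Omega_m.

def omega2_flat(omega_m):
    omega_l = 1.0 - omega_m          # flat universe
    return 0.613 * omega_m + 0.790 * omega_l

val = omega2_flat(0.191)             # approximately 0.756, as quoted
```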
The departure from flatness is driven by SNe at moderate redshifts $0.3 < z < 0.5$. The three with the highest pull are SNe 1998as, 1996k, and 1997ce. It is difficult to find any common thread between them. They come from three different papers, were observed with different telescopes (although SN 1998as and SN 1997ce were both partially observed with [*HST*]{}), and their photometry was reduced by different authors using different techniques. Since they constitute the low-redshift end of their respective surveys, there may be a suspicion that they suffer from unusually high extinction. While SN 1998as does suffer from considerable host galaxy extinction ($A_V = 0.49$; K03), the other two suffer from negligible extinction ($A_V=0.02$ and 0.08 for SN 1996K and SN 1997ce, respectively; Riess [et al.]{} 2004). Note that removing each of these SNe individually has little effect on our results, as explained in §\[subsec:othersys\].
Comparison of [$B_{BV0.6}$]{} and [$m_{B}$]{} Results {#subsec:compare}
-----------------------------------------------------
The results of an [$m_{B}$]{} fit to the same SNe as the primary fit are compared with the [$B_{BV0.6}$]{} fit in figure \[fig:contcompare\]. The [$\chi^{2}$]{} of this fit is 44.32 for 52 degrees of freedom, and the resulting estimates are ${\ensuremath{\Omega_{m}}}= 1.08^{+0.49}_{-0.69}$ and ${\ensuremath{\Omega_{\Lambda}}}= 1.65^{+0.65}_{-0.91}$, with a flat universe value of ${\ensuremath{\Omega_{m}}}= 0.32^{+0.07}_{-0.07}$. The principal axes of this fit are almost identical to those of the CMAGIC fit, so it is useful to express them in this frame. Here they correspond to $\Omega_1 = -0.167^{+0.146}_{-0.133}$ and $\Omega_2 = 1.969^{+0.787}_{-1.146}$ (statistical errors only). Note that the [$m_{B}$]{} fits agree somewhat better with a flat universe than the [$B_{BV0.6}$]{} fits.
If [$m_{B}$]{} and [$B_{BV0.6}$]{} were equivalent (given current templates) we would expect [$\alpha$]{} to be identical for the two methods. When comparing these numbers the marginalized, one-dimensional errors are appropriate instead of the outer extent of the 1 [$\sigma$]{} error contours quoted previously. For [$B_{BV0.6}$]{} ${\ensuremath{\alpha}}= 0.516^{+0.193}_{-0.206}$, and for [$m_{B}$]{} it is ${\ensuremath{\alpha}}= 0.995^{+0.253}_{-0.226}$, a difference of 1.6 [$\sigma$]{}. They are marginally inconsistent, but not at a significant level.
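The 1.6 [$\sigma$]{} figure can be reproduced by combining the error sides that face each other (the upper error on the [$B_{BV0.6}$]{} value and the lower error on the [$m_{B}$]{} value) in quadrature. This quadrature convention for asymmetric errors is an assumption; the text quotes only the resulting significance.

```python
import math

# Significance of the difference between the two alpha values, combining
# in quadrature the error sides that face each other. The quadrature
# treatment of the asymmetric errors is an assumed convention.

alpha_cmagic, err_up_cmagic = 0.516, 0.193   # B_BV0.6 fit
alpha_mb, err_dn_mb = 0.995, 0.226           # m_B fit

diff = alpha_mb - alpha_cmagic
sigma = math.hypot(err_up_cmagic, err_dn_mb)
significance = diff / sigma                  # about 1.6
```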
Directly comparing the [$m_{B}$]{} and [$B_{BV0.6}$]{} cosmological results requires that the correlation between the two methods be measured, and then propagated into the cosmological parameter space. The details of this process are presented in Appendix \[apndx:correlations\]. The result is that the correlation coefficients between the two fits are 0.34 along the $\Omega_1$ axis and 0.15 along $\Omega_2$.
While many of the systematic errors should affect [$m_{B}$]{} and [$B_{BV0.6}$]{} equally, not all apply to both fits. For example, the number of points in the CMAGIC linear region is meaningless in an [$m_{B}$]{} context. Furthermore, individual SNe may have quite different weights in the two fits, which partially undermines the insensitivity to shared systematics. Both issues must be addressed before the results can be compared. The number of points in the linear region and the detectability of CMAGIC bumps at high redshift have already been discussed, and are summarized in table \[tbl:identifiedsystematics\]. In addition, we expect that the effects of the $U-B$ color of the spectral templates will not be the same for both methods, since [$m_{B}$]{} and [$B_{BV0.6}$]{} depend on color information in a very different fashion. Comparing the results of [$m_{B}$]{} and [$B_{BV0.6}$]{} fits using the $U$-enhanced spectral templates as discussed in §\[subsec:systematicsfitting\], we find that the residual difference due to this systematic is $\Delta \Omega_1 = 0.010$, $\Delta \Omega_2 = 0.151$. The effects of the differing weights can be addressed by performing a fit to [$m_{B}$]{} where each SN is given the weight it has in the [$B_{BV0.6}$]{} fit, and vice-versa. It is not appropriate to include both values as systematic errors, since they are essentially measuring the same effect. Fortunately, they turn out to have almost identical effects. The short axis is brought into better agreement by a shift of $\Delta \Omega_1 = 0.054$ and the long axis by $\Delta \Omega_2 = 0.31$.
Putting these contributions together, and using the correlations given above, we find that the difference between the [$m_{B}$]{} and [$B_{BV0.6}$]{} fits is $$\begin{aligned}
\Delta \Omega_1 & = & -0.182 \pm 0.097 \mbox{(stat)}
\pm 0.058 \mbox{(sys)} \\
\Delta \Omega_2 & = & 0.530 \pm 0.661 \mbox{(stat)}
\pm 0.414 \mbox{(sys)}.\end{aligned}$$ The difference along the $\Omega_1$ axis amounts to 1.6 [$\sigma$]{}, and along the $\Omega_2$ axis to 0.7 [$\sigma$]{}. The major disagreement is along the short axis, as is obvious from figure \[fig:contcompare\], and a disagreement of this size or larger is expected to occur in 11% of measurements. Since $\Omega_1$ is essentially sensitive to acceleration, this amounts to the statement that the [$B_{BV0.6}$]{} results favor more acceleration at the 1.6 [$\sigma$]{} level. The differences along both axes can be combined into one measure by projecting them along the difference vector, defined by $\Omega_3 \equiv -0.325 \Omega_1 + 0.946 \Omega_2$. Then the difference between the two fits is $\Delta \Omega_3 = 0.560 \pm 0.657 \mbox{(stat)} \pm
0.410 \mbox{(syst)}$, a difference of 0.7 [$\sigma$]{}.
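The projection onto the difference vector is a one-line computation, and combining the quoted statistical and systematic errors in quadrature recovers the 0.7 [$\sigma$]{} significance. The quadrature combination of the two quoted errors is an assumption; the projected central value uses only numbers given above.

```python
import math

# Projection of the (Delta Omega_1, Delta Omega_2) difference onto the
# difference vector Omega_3 = -0.325*Omega_1 + 0.946*Omega_2.

d_omega1, d_omega2 = -0.182, 0.530
d_omega3 = -0.325 * d_omega1 + 0.946 * d_omega2   # about 0.560, as quoted

# Quoted errors on Delta Omega_3, statistical and systematic combined in
# quadrature (an assumed convention), and the resulting significance.
sigma3 = math.hypot(0.657, 0.410)
significance = d_omega3 / sigma3                  # about 0.7 sigma
```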
A similar comparison is possible with the [$\Omega_{m}$]{}, $w$ fits. The result is shown in figure \[fig:wcompare\]. The same sort of detailed comparison is not carried out here for several reasons. First, the difference is certainly not independent of the difference observed in [$\Omega_{m}$]{}, [$\Omega_{\Lambda}$]{} space, so little additional information would be gained from this procedure. Second, because the current constraints on [$\Omega_{m}$]{}, $w$ from SN data alone are not well behaved (not closing off until very negative values of $w$), it is not useful to compare the two fits without additional constraints, here the BAP measurement, which is the same between the two fits.
CONCLUSIONS
===========
CMAGIC provides some additional information that is not captured by the standard light-curve template fitting techniques used to estimate [$m_{B}$]{}. This allows us to place further constraints on the cosmological parameters. Furthermore, [$B_{BV0.6}$]{} should be affected differently by several potential evolutionary effects.
We have carried out the first blind analysis of the cosmological parameters using SN data, developing a technique to prevent experimenter bias by hiding the final result until the data cuts and analysis procedures are finalized. We find that the results of a CMAGIC fit broadly confirm our picture of an accelerating Universe. In fact, they favor a higher amount of acceleration than the [$m_{B}$]{} results by approximately 1.6 [$\sigma$]{} (including systematics and the correlations between the two measurements). The [$B_{BV0.6}$]{} error contours differ from a flat Universe by 1.7 [$\sigma$]{} (including systematics), which would be interesting if it were more statistically significant.
The constraints on the cosmological parameters from a CMAGIC fit to 31 nearby and 21 distant SNe Ia are ${\ensuremath{\Omega_{m}}}= 1.26^{+0.38}_{-0.51}$, ${\ensuremath{\Omega_{\Lambda}}}= 2.20^{+0.41}_{-0.67}$ (statistical errors only). However, this is a poor frame for expressing the results. It is significantly more useful to instead quote the results as $$\Omega_1 = 0.790 {\ensuremath{\Omega_{m}}}- 0.613 {\ensuremath{\Omega_{\Lambda}}}= -0.349^{+0.117}_{-0.131}
\left( \mbox{stat} \right) ^{+0.060}_{-0.062}
\left( \mbox{syst} \right)$$ $$\Omega_2 = 0.613 {\ensuremath{\Omega_{m}}}+ 0.790 {\ensuremath{\Omega_{\Lambda}}}= 2.502^{+0.530}_{-0.838}
\left( \mbox{stat} \right) ^{+0.476}_{-0.545}
\left( \mbox{syst} \right)$$ with $$\Omega_{m} = 0.19^{+0.06}_{-0.06} \left( \mbox{stat} \right)
^{+0.03}_{-0.05} \left( \mbox{syst}\right)$$ for a flat Universe, where the dark energy has been assumed to have a constant equation of state with $w = -1$, as is the case for a cosmological constant. The systematic errors have been estimated by considering a wide range of alternatives to the primary fit of this paper. The largest systematic error is the extinction cut, indicating that while CMAGIC has some benefits with respect to extinction by interstellar dust, we still have a great deal to learn about this issue. A direct comparison is also possible with an [$m_{B}$]{} fit to the same SNe, which requires that the correlations between the two methods be estimated. After including the systematics and correlations, the difference between the two fits is almost exclusively along the short axis, with the CMAGIC fits favoring more acceleration by 1.6[$\sigma$]{}. Fitting for a constant value of $w$ in a flat Universe, the combination of the CMAGIC results with the angular scale of the BAP measured in @Eisenstein:05 yields $w = -1.21^{+0.15}_{-0.12} \left( \mbox{stat} \right)
^{+0.07}_{-0.12} \left( \mbox{supernova syst} \right)$, ${\ensuremath{\Omega_{m}}}= 0.25^{+0.02}_{-0.02} \left( \mbox{stat} \right)
^{+0.01}_{-0.01} \left( \mbox{supernova syst} \right)$, consistent with a cosmological constant at the $1.2 \sigma$ level.
The currently available high-redshift SN sample was not observed in an optimal fashion for CMAGIC. Out of the approximately 100 published high-redshift SN light curves, only about 20 are useful for [$B_{BV0.6}$]{}. As a result, the current data set does not place strong constraints on dust or evolutionary effects. This situation will change in this decade; within the next 5 years it should be possible to measure both [$B_{BV0.6}$]{} and [$m_{B}$]{} for 1000 high-redshift SNe, at which point the comparison between [$m_{B}$]{} and [$B_{BV0.6}$]{} will be extremely interesting.
The authors would like to thank Brian Schmidt for providing non-$K$-corrected light curves for SN 1997ce and SN 1997cj. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
CMAGIC BUMPS {#apndx:bumps}
============
An example of an SN Ia with a bump feature is shown in the bottom of figure \[fig:cmag\]. Bumps seem to be associated with SNe with different $B$ and $V$ stretches (where the templates have been normalized such that the majority of SNe are well fitted with the same $B$ and $V$ stretch), in particular when $s_V < s_B$. In general, the probability of a bump increases with $B$ stretch. It is possible to find examples of SNe Ia with virtually the same stretch but where one has a bump and the other does not. This clearly indicates that SNe Ia do not constitute a one-parameter family, at least in terms of stretch, [$\Delta m_{15}\left( B \right)$]{} or the MLCS parameter $\Delta$. Bumps are far more common in other filter combinations.
However, these matters do not concern us here. The important thing for the purposes of this paper is the effect of the bump on the cosmology fits. As noted previously, the presence of the bump has an effect on the starting and ending dates of the linear feature. With high-quality data it is trivial to detect the presence of a bump. Therefore, while this is not an issue with the low-redshift SNe, it is a potential systematic in the cosmology fits due to the lower quality of the high-redshift data making bumps difficult to detect for some SNe. Fortunately, this turns out to have a relatively small effect for the present sample.
In order to quantify this effect, we attempted to determine the probability, as a function of stretch, that an SN has a bump by examining the low-redshift sample. We find that all SNe with $s > 1.1$ have a bump feature, and none with $s < 0.8$ do. Between these extremes the probability of having a bump is an increasing function of stretch, but is not deterministic. For $1.0 < s < 1.1$ approximately 50% of SNe Ia have bumps, and for $0.8 < s < 1.0$ only 1 out of 13 does. Applying this result to the high-redshift sample, we see that there are six SNe in the first group and 14 in the second. One of the 14 (SN 1997ce) has a bump, consistent with the predicted fraction. As expected, individual filter fits to SN 1997ce show that the $V$ stretch is less than the $B$ stretch, with $s_B = 0.932 \pm 0.025$ and $s_V = 0.816 \pm 0.019$. The systematic effect, if any, will clearly arise from the first group, which consists of SNe 1995ba, 1997F, 1998aw, 1999fj, 2001ix, and 2002ad. The CMAGIC fits to SN 1998aw are not affected by the presence or absence of a bump, so it can be ignored for the purposes of this discussion.
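The empirical bump rates above can be encoded as a simple piecewise function of stretch. This is an illustrative simplification (the bin boundaries and rates come from the low-redshift sample, and treating the rates as fixed binomial probabilities is an assumption), but it reproduces the expected bump count for the high-redshift sample.

```python
# Illustrative encoding of the empirical bump rates quoted above. The
# bin boundaries and rates come from the low-redshift sample; treating
# them as fixed probabilities is a simplifying assumption.

def bump_probability(s):
    """Probability that an SN of stretch s shows a CMAGIC bump."""
    if s > 1.1:
        return 1.0
    if s > 1.0:
        return 0.5          # roughly half of SNe with 1.0 < s < 1.1
    if s >= 0.8:
        return 1.0 / 13.0   # 1 of 13 low-z SNe in this bin had a bump
    return 0.0

# Expected bumps among the 14 high-z SNe with 0.8 < s < 1.0: about one,
# consistent with the single bump observed (SN 1997ce).
expected = 14 * bump_probability(0.9)
```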
In order to quantify the probability that each of these SNe has an undetected bump, we analyzed a handful of very well observed low redshift SNe that have a bump feature (SNe 1995D, 1995bd, 1998bu, and 1999ee) and used their CMAGIC diagrams to quantify the excess $B$ magnitude over the value predicted by the CMAGIC linear fit as a function of rest frame epoch. We then compared these values with the actual data points for the four high-redshift SNe in question, taking into account the observational errors and the dispersion of excess magnitudes in the bump. SN 1999fj, SN 2001ix, and SN 2002ad are inconsistent with a bump at greater than the 2.5 [$\sigma$]{} level. No strong statement can be made for SN 1995ba or SN 1997F. Therefore, these are the only two that need concern us.
This gives four possibilities, which occur with approximately equal probability. The case where neither has a bump is identical to our primary fit. In order to estimate the systematic error associated with the other possibilities, we performed and compared all four fits, obtaining results very similar to our primary fit. We find that the effects of this systematic on the current sample are $\Delta \Omega_1 = 0.005$ and $\Delta \Omega_2 = -0.014$. This indicates that undetected bumps do not contribute substantially to the systematic error. The story is somewhat complicated, but we have been fortunate in that it does not affect the current result. Most future projects, which will obtain considerably more complete color coverage, should not have to worry about this issue.
CORRELATIONS BETWEEN THE [$m_{B}$]{} AND [$B_{BV0.6}$]{} FITS {#apndx:correlations}
=============================================================
In order to determine the correlation between the cosmological results of the [$m_{B}$]{} and [$B_{BV0.6}$]{} fits, it is first necessary to determine the correlations between [$m_{B}$]{} and [$B_{BV0.6}$]{} values for each SN. There are two components to this correlation: that induced by the fitting procedures, and that intrinsic to the physics of SNe Ia and their environment (extinction, etc.). The former can be determined individually for each SN, and is seen to vary considerably depending on the distribution of observations, while we are forced to assume that the latter is constant across the SN sample.
Since current light-curve templates do not adequately reproduce the CMAGIC relations, the fit correlation must be determined by a Monte Carlo process. For every SN, 1000 realizations are generated using the actual photometric errors and observed epochs. For each realization [$m_{B}$]{} and [$B_{BV0.6}$]{} are fitted independently, and the correlations are estimated from the resulting distributions. After stretch correction, the correlation between [$m_{B}$]{} and [$B_{BV0.6}$]{} is small and positive, with mean correlation coefficients of $\left< \rho \right> = 0.150$ at low redshift and $\left< \rho \right> = 0.144$ for distant SNe. The distributions are shown in figure \[fig:magcorr\]. Furthermore, the correlation between stretch and [$B_{BV0.6}$]{} is quite weak, justifying the assumption that they are uncorrelated in the CMAGIC fitting procedure ($\left< \rho \right> = 0.097$).
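The structure of this Monte Carlo can be sketched as follows. The two fitters here (`fit_mb`, `fit_cmagic`) are hypothetical toy estimators, not the real light-curve and CMAGIC fitting codes; the point is that both act on the same perturbed photometry, which is what induces the fit correlation.

```python
import random
import statistics

# Sketch of the Monte Carlo used to estimate the fit-induced correlation
# between m_B and B_BV0.6 for one SN: perturb the photometry by its
# errors, refit both magnitudes, and correlate the results. The fitters
# below are hypothetical stand-ins for the real fitting codes.

def correlation(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    n = len(xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return cov / (statistics.stdev(xs) * statistics.stdev(ys))

def monte_carlo_rho(mags, errs, fit_mb, fit_cmagic, n_real=1000, seed=1):
    rng = random.Random(seed)
    mb_vals, cmagic_vals = [], []
    for _ in range(n_real):
        perturbed = [m + rng.gauss(0.0, e) for m, e in zip(mags, errs)]
        mb_vals.append(fit_mb(perturbed))
        cmagic_vals.append(fit_cmagic(perturbed))
    return correlation(mb_vals, cmagic_vals)

# Toy fitters sharing one photometric point produce a positive correlation.
rho = monte_carlo_rho(
    mags=[17.0, 17.2, 17.8], errs=[0.05, 0.05, 0.05],
    fit_mb=lambda p: p[0],
    fit_cmagic=lambda p: 0.5 * (p[0] + p[2]),
)
```

For these toy fitters the expected correlation is $1/\sqrt{2} \approx 0.71$; the real per-SN values are much smaller because the two magnitudes share far less of the fit information.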
In order to estimate the residual resulting from the intrinsic heterogeneity of SNe Ia, the best approach is to consider the residual-versus-residual plot, shown in figure \[fig:residscatter\]. Note that these residuals are with respect to different fits with different values of the cosmological parameters. It is clear that they are correlated, although this is much less true of the high-redshift sample: $\mbox{cov}\! \left[ {\ensuremath{m_{B}}}, {\ensuremath{B_{BV0.6}}}\right]_{\mbox{lowz} } = 0.020$ and $\mbox{cov}\! \left[ {\ensuremath{m_{B}}}, {\ensuremath{B_{BV0.6}}}\right]_{\mbox{highz} } = 0.0076$, where cov denotes the covariance between the two quantities. These values correspond roughly to $\rho = 0.55$ and $\rho = 0.34$, respectively. It is not surprising that the low-redshift sample shows considerably more correlation because of the dominant role of peculiar velocity errors, which affect [$m_{B}$]{} and [$B_{BV0.6}$]{} identically.
To estimate the intrinsic correlation it is necessary to subtract the effects of both the peculiar velocity and the correlations induced by the light-curve and CMAGIC fitting procedures. If $r_{{\ensuremath{m_{B}}}}$ and $r_{{\ensuremath{B_{BV0.6}}}}$ denote the residuals from the fit, then, using the low-redshift approximation for ${\mathcal D}_L$ (which is appropriate because peculiar velocities have a negligible effect at high redshift), and noting that the stretch and redshift are anti-correlated, $$\begin{aligned}
\mbox{cov}\! \left[ r_{{\ensuremath{m_{B}}}} , r_{{\ensuremath{B_{BV0.6}}}} \right] & = &
\mbox{cov}\! \left[ {\ensuremath{m_{B}}}, {\ensuremath{B_{BV0.6}}}\right] +
\alpha_{{\ensuremath{m_{B}}}} \mbox{cov}\! \left[ s , {\ensuremath{B_{BV0.6}}}\right] + \\
\nonumber & &
\alpha_{{\ensuremath{B_{BV0.6}}}} \mbox{cov}\! \left[ s, {\ensuremath{m_{B}}}\right] +
\sigma^2_s \alpha_{{\ensuremath{m_{B}}}} \alpha_{{\ensuremath{B_{BV0.6}}}} +
\left( \frac{5}{\log 10} \frac{\sigma_z}{z} \right)^2 + \\
\nonumber & &
\frac{5}{\log 10} \left( \alpha_{{\ensuremath{m_{B}}}} + \alpha_{{\ensuremath{B_{BV0.6}}}} \right)
\frac{ \sigma_z } { z } \sigma_s +
\mbox{cov}\! \left[ {\ensuremath{\mathcal{M}}}_{{\ensuremath{m_{B}}}}, {\ensuremath{\mathcal{M}}}_{{\ensuremath{B_{BV0.6}}}} \right] .\end{aligned}$$ Here the correlations between stretch, [$m_{B}$]{}, and [$B_{BV0.6}$]{} are those arising from the fitting procedure only. The desired quantity is $\mbox{cov}\! \left[ {\ensuremath{\mathcal{M}}}_{{\ensuremath{m_{B}}}}, {\ensuremath{\mathcal{M}}}_{{\ensuremath{B_{BV0.6}}}} \right]$, the correlation between the absolute magnitudes modulo the Hubble constant. Note that the stretch-corrected covariance shown in figure \[fig:magcorr\] is not appropriate here because the contributions of stretch are handled separately. More than half of the measured covariance in the low redshift sample (0.013) comes from peculiar velocity errors, which have essentially no effect on the high redshift sample. We find that $\mbox{cov}\! \left[ {\ensuremath{\mathcal{M}}}_{{\ensuremath{m_{B}}}},
{\ensuremath{\mathcal{M}}}_{{\ensuremath{B_{BV0.6}}}} \right]_{\mbox{lowz}} = 0.0072 $ and $\mbox{cov}\! \left[ {\ensuremath{\mathcal{M}}}_{{\ensuremath{m_{B}}}},
{\ensuremath{\mathcal{M}}}_{{\ensuremath{B_{BV0.6}}}} \right]_{\mbox{highz}} = 0.0044$, which correspond to $\rho = 0.37 \pm 0.14 $ and $\rho = 0.22 \pm 0.21$, respectively. These are consistent, and therefore the overall correlation coefficient for the intrinsic scatter is taken to be $\rho = 0.32 \pm 0.12$.
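The adopted value follows from an inverse-variance weighted combination of the two estimates, which reproduces $\rho = 0.32 \pm 0.12$ exactly:

```python
# Inverse-variance combination of the two intrinsic correlation estimates,
# rho = 0.37 +/- 0.14 (low z) and 0.22 +/- 0.21 (high z), reproducing the
# adopted rho = 0.32 +/- 0.12.

def combine(values_errors):
    """Weighted mean and error of (value, error) pairs."""
    weights = [1.0 / e ** 2 for _, e in values_errors]
    mean = sum(w * v for (v, _), w in zip(values_errors, weights)) / sum(weights)
    err = sum(weights) ** -0.5
    return mean, err

rho, sigma = combine([(0.37, 0.14), (0.22, 0.21)])
```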
Next it is necessary to propagate this covariance into the cosmological parameter space. This is far from straightforward. While it might be tempting to simply assume that the intrinsic correlation is the dominant one, and that this can therefore be taken as the correlation between the $\Omega_1$ values of the two fits, there is no way to justify this assumption. The correlation at low redshift is dominated by the peculiar velocity errors, and it is unclear how important this is in the context of the cosmological parameters. Furthermore, different SNe have different weights, both because of their observational errors and because SNe at different redshifts have different influences in the [$\Omega_{m}$]{}, [$\Omega_{\Lambda}$]{} parameter space.
In order to determine the effects of these correlations on $\Omega_1$, $\Omega_2$, a Monte-Carlo simulation was carried out on the SN samples. The covariances between stretch, [$m_{B}$]{} and [$B_{BV0.6}$]{} from the fitting procedures were calculated for each supernova as described above, to which was added the measured intrinsic correlation coefficient of 0.32. This simulation also incorporated the effects of redshift errors including the assumed peculiar velocity of 300 km s$^{-1}$.
Generating 2500 realizations required approximately 4 days on a fast workstation. The corresponding correlation coefficients for the $\Omega_1$ and $\Omega_2$ axes are $\rho_{11} = 0.34 \pm 0.02$ and $\rho_{22} = 0.15 \pm 0.02$. The correlation is not evenly distributed between the two axes, acting primarily along the short axes of the error ellipses. Since these correlations are positive, they act to increase the significance of the difference between the two fits. The same data set can be used to verify that $\Omega_1$ and $\Omega_2$ are uncorrelated, yielding $\rho_{ 12{\ensuremath{m_{B}}}} = -0.07$ and $\rho_{ 12{\ensuremath{B_{BV0.6}}}} = 0.07$.
SUPERNOVAE REMOVED BY EACH CUT {#apndx:cutremoved}
==============================
This section presents a list of the SNe removed by each cut at the values specified in the primary fit. Note that these cuts are not applied in any particular order, and therefore some SNe fail multiple cuts. Furthermore, some of the cuts are correlated. For example, an SN that does not have any data within 7 days of $B$ maximum is unlikely to have a well-determined date of maximum.
The following SNe do not have any points in their CMAGIC linear region: SNe 1995ar, 1995aw, 1995ay, 1995az, 1996cf, 2001iw, and 2002P. The following SNe were at redshifts too low to be used in the cosmology fit (although some were used to determine the intrinsic [$\beta_{BV}$]{} distribution): SN1990N, SN1994ae, SN1995D, SN1995al, SN1996X, SN1996Z, SN1997bp, SN1997bq, SN1997br, SN1998bu, SN1998dh, SN1999ac, SN1999by, SN1999cl, SN1999gh, SN2000E, SN2001el, SN2002bo. The following SNe did not have data within 7 rest-frame days of $B$ maximum: SN1990T, SN1990Y, SN1991S, SN1991U, SN1991ag, SN1992bg, SN1992bk, SN1993ae, SN1993ah, SN1994Q, SN1997bq, SN1998ec, SN1999gh, SN2000bh, SN2000ce, SN2001jn, SN2001jp, and SN2002P. The following SNe did not have a well-determined date of maximum: SN1992ae, SN1992au, SN1992bk, SN1993B, SN1993ah, SN1994G, SN1995ak, SN1995aq, SN1995ar, SN1995ax, SN1995ay, SN1996I, SN1996U, SN1996Z, SN1996cm, SN1997K, SN1997S, SN1999fn, SN2000bh, SN2001hx, SN2001hy, SN2001jb, SN2001jf, SN2001jn, and SN2002P. The following SNe had stretch values below the minimum cutoff, and were removed for the reasons discussed in §\[subsec:cuts\]: SN1992au, SN1998bp, SN1998de, SN1999by. SN1996U, SN1997K, SN1997am, SN1999ff, SN2001hx, SN2001hy, and SN2001jb have errors on [$B_{BV0.6}$]{} that exceeded 0.5 mag. SN1990Y, SN1992J, SN1993H, SN1995E, SN1995bd, SN1996C, SN1996Z, SN1996bo, SN1997br, SN1998aw, SN1998bu, SN1999cl, SN1999ee, SN1999fw, SN1999gd, SN2000ce, SN2001jn, SN2002ad, and SN2002bo have measured color excesses larger than the 0.25 mag cut value. As discussed in §\[subsec:cuts\], three additional SNe were removed by hand from the sample for various reasons: SN1997O, SN1997br, and SN1997bp.
Aguirre, A. 1999, 525, 583
Akerib, D. S. [et al.]{} 2004, 93, 1301
Aldering, G. [et al.]{} 2004, preprint (astro-ph/0405232)
Astier, P. [et al.]{} 2005, A&A, in press
Barlow, R. 2003, preprint (physics/0306138)
Barris, B.J. [et al.]{} 2004, 602, 571
Benetti [et al.]{} 2004, 348, 261
Bessell, M.S. 1990, 102, 1181
Cardelli, J.A., Clayton, G.C., and Mathis, J.S. 1989, 345, 245
Carroll, S.M., Press, W.H., and Turner, E.L. 1992, 30, 499
Dom[í]{}nguez, I., Höflich, P., and Straniero, O. 2001, 557, 279
Draine, B.T. 2003, 41, 241
Eisenstein, D.J. [et al.]{} 2005, 633, 560
Fitzpatrick, E.L. 1999, 111, 63
Garavini, G. [et al.]{} 2005, in preparation
Garnavich [et al.]{} 1998, 493, L53 (lightcurves available from astro-ph/9710123)
Goldhaber, G. [et al.]{} 2001, 558, 359
Guy, J. [et al.]{} 2005, A&A, in press
Hamuy, M. [et al.]{} 1996, 112, 2408
Hamuy, M. [et al.]{} 2000, 120, 1479
Hatano, K., Branch, D., and Deaton, J. 1998, 502, 177
Heinrich, J.G. 2003,
H[ö]{}flich, P., Nomoto, K., Umeda, H., and Wheeler, J.C. 2000, 528, 590
Hook, I.M., Howell, D.A., [et al.]{} 2005, in press
James, F. and Roos, M. 1975, Comput. Phys. Commun., 10, 343
Jha, S. [et al.]{} 2005, in press
Kim, A., Goobar, A., and Perlmutter, S. 1996, 108, 190
Knop, R.A. [et al.]{} 2003, 598, 102
Krisciunas, K. [et al.]{} 2000, 539, 658
Krisciunas, K. [et al.]{} 2001, 122, 1616
Krisciunas, K. [et al.]{} 2004, 127, 1664
Leibundgut, B. 2001, 39, 67
Li, W. [et al.]{} 2001, 546, 734
Malmquist, K.G. 1936, Stockholm Observatory Medd., no. 26
Matheson, T. [et al.]{} 2005, 129, 2352
Modjaz, M. [et al.]{} 2001, 113, 308
Nobili, S. [et al.]{} 2003, A&A, 404, 901
Nobili, S. [et al.]{} 2005, A&A, 437, 789
Nugent, P., Kim, A., and Perlmutter, S. 2003, 114, 803
Padmanabhan, N. [et al.]{} 2005, 72, 043525
Perlmutter, S. [et al.]{} 1997, 483, 565
Perlmutter, S. [et al.]{} 1998, Nature, 391, 51
Perlmutter, S. [et al.]{} 1999, 517, 565
Phillips, M.M. 1993, 413, L105
Phillips, M.M. [et al.]{} 1999, 118, 1766
Pinto, P.A., Smith, C.R., and Garnavich, P.M. 2004, AAS meeting 205, \#108.20
Riess, A.G., Press, W.H., and Kirshner, R.P. 1996, 473, 88
Riess, A.G. [et al.]{} 1998, 116, 1009
Riess, A.G. [et al.]{} 1999, 117, 707
Riess, A.G., Filippenko, A.V., Li, W., and Schmidt, B.P. 116, 1009
Riess, A.G. [et al.]{} 2000, 536, 62
Riess, A.G. [et al.]{} 2004, 607, 665
Sako, M. [et al.]{} 2005, preprint (astro-ph/0504455)
Schlegel, D.J., Finkbeiner, D.P., and Davis, M. 1998, 500, 525
Schmidt, B. [et al.]{} 1998, 507, 46
Simon, B. and Shaw, R.A. 1996, in ASP Conf. Ser. 101: Astronomical Data Analysis Software and Systems V, ed. G.J. Jacoby and J. Barnes, 183
Spergel, D.N. [et al.]{} 2003, 148, 175
Strolger, L.G. [et al.]{} 2002, 124, 2905
Sullivan, M. [et al.]{} 2002, 340, 1057
Suntzeff, N.B. [et al.]{} 1999, 117, 1175
Tegmark, M. [et al.]{} 2004, 69, 103501
Tonry [et al.]{} 2003, 594, 1
Tripp, R. and Branch, D. 1999, 525, 209
Vinko, J. [et al.]{} 2003, 397, 115
Wang, L., Goldhaber, G., Aldering, G., and Perlmutter, S. 2003, 590, 944
[llllr]{} SN1990O & $ 0.031 $ & $ 1.087(032) $ & $ 17.530(067) $ & 1\
SN1990af & $ 0.050 $ & $ 0.750(010) $ & $ 18.894(078) $ & 1\
SN1992ag & $ 0.026 $ & $ 0.959(022) $ & $ 17.222(056) $ & 1\
SN1992al & $ 0.014 $ & $ 0.929(013) $ & $ 15.838(083) $ & 1\
SN1992bc & $ 0.020 $ & $ 1.079(007) $ & $ 16.738(062) $ & 1\
SN1992bh & $ 0.045 $ & $ 1.057(024) $ & $ 18.697(050) $ & 1\
SN1992bl & $ 0.043 $ & $ 0.845(021) $ & $ 18.556(065) $ & 1\
SN1992bo & $ 0.018 $ & $ 0.744(007) $ & $ 16.918(049) $ & 1\
SN1992bp & $ 0.079 $ & $ 0.897(021) $ & $ 19.634(086) $ & 1\
SN1992bs & $ 0.063 $ & $ 1.025(017) $ & $ 19.568(071) $ & 1\
SN1993O & $ 0.052 $ & $ 0.927(020) $ & $ 18.912(034) $ & 1\
SN1993ag & $ 0.049 $ & $ 0.940(027) $ & $ 18.839(064) $ & 1\
SN1994M & $ 0.023 $ & $ 0.883(025) $ & $ 17.422(084) $ & 2\
SN1994S & $ 0.015 $ & $ 1.052(024) $ & $ 16.181(102) $ & 2\
SN1996bl & $ 0.036 $ & $ 1.014(014) $ & $ 17.879(029) $ & 2\
SN1996bv & $ 0.017 $ & $ 1.039(020) $ & $ 16.225(030) $ & 2\
SN1997E & $ 0.013 $ & $ 0.821(006) $ & $ 16.232(038) $ & 3\
SN1998V & $ 0.018 $ & $ 0.962(040) $ & $ 16.389(068) $ & 3\
SN1998ab & $ 0.027 $ & $ 0.958(006) $ & $ 17.212(036) $ & 3\
SN1998es & $ 0.011 $ & $ 1.075(014) $ & $ 15.074(048) $ & 3\
SN1999aa & $ 0.014 $ & $ 1.098(004) $ & $ 16.135(017) $ & 4\
SN1999aw & $ 0.038 $ & $ 1.358(008) $ & $ 18.242(035) $ & 5\
SN1999dk & $ 0.015 $ & $ 1.089(010) $ & $ 15.862(020) $ & 6\
SN1999dq & $ 0.014 $ & $ 1.060(004) $ & $ 15.498(076) $ & 3\
SN1999ek & $ 0.018 $ & $ 0.895(007) $ & $ 16.573(049) $ & 7\
SN1999gp & $ 0.027 $ & $ 1.141(004) $ & $ 17.222(064) $ & 6\
SN2000ca & $ 0.024 $ & $ 1.007(016) $ & $ 17.137(067) $ & 7\
SN2000dk & $ 0.017 $ & $ 0.720(004) $ & $ 16.394(037) $ & 3\
SN2000fa & $ 0.021 $ & $ 0.972(007) $ & $ 17.025(062) $ & 3\
SN2001V & $ 0.015 $ & $ 1.119(017) $ & $ 15.769(110) $ & 8\
SN2001ba & $ 0.029 $ & $ 1.049(014) $ & $ 17.669(042) $ & 7\
[llllr]{} SN1995K & $ 0.479 $ & $ 0.956(046) $ & $ 24.276(220) $ & 1\
SN1995ba & $ 0.388 $ & $ 0.999(052) $ & $ 24.025(267) $ & 2\
SN1996E & $ 0.430 $ & $ 0.940(005) $ & $ 23.572(156) $ & 3\
SN1996K & $ 0.380 $ & $ 0.888(013) $ & $ 24.169(166) $ & 3\
SN1997F & $ 0.580 $ & $ 1.034(070) $ & $ 24.861(349) $ & 2\
SN1997H & $ 0.526 $ & $ 0.883(051) $ & $ 24.242(478) $ & 2\
SN1997P & $ 0.472 $ & $ 0.898(039) $ & $ 24.610(487) $ & 2\
SN1997ai & $ 0.450 $ & $ 0.918(112) $ & $ 23.876(283) $ & 2\
SN1997af & $ 0.579 $ & $ 0.846(050) $ & $ 24.655(508) $ & 2\
SN1997ce & $ 0.440 $ & $ 0.932(025) $ & $ 24.327(062) $ & 4\
SN1997cj & $ 0.500 $ & $ 0.925(021) $ & $ 24.453(077) $ & 4\
SN1997eq & $ 0.540 $ & $ 0.947(026) $ & $ 24.514(194) $ & 5\
SN1998as & $ 0.355 $ & $ 0.961(023) $ & $ 23.786(100) $ & 5\
SN1998ax & $ 0.497 $ & $ 1.156(032) $ & $ 24.447(115) $ & 5\
SN1998ba & $ 0.430 $ & $ 0.975(022) $ & $ 24.241(091) $ & 5\
SN1999fj & $ 0.816 $ & $ 1.037(040) $ & $ 25.517(273) $ & 6\
SN2000fr & $ 0.543 $ & $ 1.100(020) $ & $ 24.542(079) $ & 5\
SN2001iv & $ 0.397 $ & $ 0.977(004) $ & $ 23.720(091) $ & 7\
SN2001ix & $ 0.711 $ & $ 1.025(052) $ & $ 24.937(159) $ & 7\
SN2002ab & $ 0.423 $ & $ 0.924(015) $ & $ 23.872(214) $ & 7\
SN2002kd & $ 0.735 $ & $ 0.907(013) $ & $ 25.385(114) $ & 8\
[lrcrcrrr]{} SN1997ce & 0.44 & 4 & 1.41 & 2 & 0.492 & 1.803 & 0.180\
SN1997cj & 0.5 & 6 & 1.44 & 4 & 0.839 & 2.159 & 0.264\
SN1998aw & 0.44 & 3 & 0.14 & 1 & 0.705 & 2.027 & 0.390\
SN1998ax & 0.497 & 3 & 0.43 & 1 & 0.513 &1.616 & 0.291\
SN1998ba & 0.43 & 3 & 0.004 & 1 & 0.952 & 2.222 & 0.359\
[llll]{} Minimum redshift cutoff for cosmology fit & 0.01 & zmin\
Maximum redshift cutoff for cosmology fit & NA & zmax\
High redshift cutoff for slope distribution fit & 0.1 & zslopemax\
Minimum number of points in CMAGIC linear region & 1 & npointsmin\
Maximum allowable magnitude error & 0.5 mag & magerror\
Maximum allowable [$B-V$]{} excess at $B_{max}$ & 0.25 mag & maxcolor\
Maximum allowable error in date of $B$ maximum & 1.0 days & datemaxerror\
Minimum stretch allowed & 0.7 & stretchmin\
Maximum gap between maximum and nearest point in $B$ or $V$ & 7 days & daygap\
[lrrr]{} [**Variation of fitting procedures** ]{}\
No stretch correction & $ -0.009 \left( 0.07 \sigma \right)$ & $ 0.060 \left( 0.11 \sigma \right) $ & $ -0.006 \left( 0.10 \sigma \right)$\
P99 lightcurve fit & $ 0.015 \left( 0.13 \sigma \right) $ & $ 0.144 \left( 0.27 \sigma \right) $ & $ -0.007 \left( 0.12 \sigma \right)$\
U-enhanced [$K$-correction]{} & $-0.052 \left( 0.39 \sigma \right)$ & $ 0.100 \left( 0.19 \sigma \right) $ & $ -0.040 \left( 0.70 \sigma \right)$\
[**Variation of cuts** ]{}\
daygap $< 5$ & $ -0.024 \left( 0.18 \sigma \right) $ & $ 0.139 \left( 0.26 \sigma \right) $ & $-0.006 \left( 0.11 \sigma \right) $\
daygap $< 10$ & $ 0.024 \left( 0.20 \sigma \right) $ & $ 0.015 \left( 0.03 \sigma \right) $ & $0.014 \left( 0.24 \sigma \right) $\
$z > 0.015$ & $ -0.003 \left( 0.02 \sigma \right) $ & $ 0.023 \left( 0.04 \sigma \right) $ & $ 0.00 \left( 0.00 \sigma \right) $\
magerror $< 0.25$ & $-0.011 \left( 0.08 \sigma \right)$ & $ 0.015 \left( 0.03 \sigma \right) $ & $ -0.006 \left( 0.11 \sigma \right)$\
magerror $< 1.0$ & $-0.011 \left( 0.09 \sigma \right)$ & $ 0.017 \left( 0.03 \sigma \right) $ & $ -0.006 \left( 0.11 \sigma \right)$\
$E\left( B-V \right) < 0.1$ & $ 0.018 \left( 0.15 \sigma \right)$ & $ -0.467 \left( 0.56 \sigma \right)$ & $0.004 \left( 0.07 \sigma \right)$\
$E\left( B-V \right) < 0.5$ & $ -0.048 \left( 0.36 \sigma \right) $ & $ -0.00 \left( 0.00 \sigma \right)$ & $0.017 \left( 0.29 \sigma \right)$\
datemaxerror $<0.5$ & $ 0.005 \left( 0.04 \sigma \right) $ & $ 0.027 \left( 0.05 \sigma \right)$ & $0.00 \left( 0.00 \sigma \right)$\
datemaxerror $<2$ & $0.018 \left(0.15 \sigma\right)$ & $ 0.115 \left(0.22 \sigma \right)$ & $0.017 \left( 0.29 \sigma \right)$\
npointsmin $>2$ & $0.018 \left(0.15 \sigma \right)$ & $0.229 \left( 0.43 \sigma \right)$ & $-0.006 \left( 0.11 \sigma \right)$\
[**Other systematics** ]{}\
Hamuy, Riess, Jha only & $-0.015 \left( 0.11 \sigma \right)$ & $ -0.037 \left( 0.04 \sigma \right)$ & $ -0.006 \left( 0.11 \sigma \right)$\
Jack-Knife: SN2001ix & $ -0.014 \left( 0.11 \sigma \right) $ & $ -0.281 \left(0.33 \sigma \right) $ & $ -0.013 \left( 0.23 \sigma \right)$\
Jack-Knife: SN2002kd & $ 0.015 \left( 0.12 \sigma \right) $ & $ 0.310 \left( 0.58 \sigma \right) $ & $ -0.016 \left( 0.28 \sigma\right) $\
${\ensuremath{\sigma_{int}}}=0.08$ & $-0.015 \left(0.11 \sigma\right)$ & $0.022 \left(0.04 \sigma \right)$ & $-0.013 \left( 0.23 \sigma \right)$\
${\ensuremath{\sigma_{int}}}=0.15$ & $0.010 \left(0.09 \sigma\right)$ & $-0.015 \left(0.02 \sigma\right)$ & $0.006 \left( 0.11 \sigma \right)$\
Malmquist bias & $-0.0012 \left(0.01 \sigma\right)$ & $0.045 \left( 0.08 \sigma \right)$ & $-0.003 \left( 0.05 \sigma \right)$\
Bumps & $ 0.005 \left( 0.04 \sigma \right) $ & $-0.014 \left( 0.017 \sigma \right)$ & $0.003 \left( 0.05 \sigma \right)$\
[^1]: The definition of [$\mathcal{M}$]{} used here differs slightly from that of P99 and K03 in that all of the constants have been absorbed, including c.
[^2]: This could be verified prior to unblinding for [$\mathcal{M}$]{} and [$\alpha$]{}, but the confirmation of this statement for [$\Omega_{m}$]{} and [$\Omega_{\Lambda}$]{} was only available after unblinding. If the final cosmology had disagreed very strongly with previous results, this would have led to problems with the blindness procedure. Fortunately, this turned out not to be the case.
[^3]: Technically a floor of 2 points is used when the slope distribution sample is determined, but this has no effect because all of the low redshift SNe have 2 or more points in the linear region.
[^4]: The [$\chi^{2}$]{} per degree of freedom for the CMAGIC fits to SN1997bp and SN1997br are around 4, which is particularly striking because for the majority of SNe Ia the [$\chi^{2}$]{} per degree of freedom is considerably less than one.
[^5]: These values were chosen to be sufficiently different from the results of previous analyses to force internal reviewers to psychologically confront the blindness scheme while remaining close enough to the expected values that the resulting error contours were not overly distorted.
[^6]: See @Heinrich:03 §4 for further discussion.
---
abstract: 'The authors present evidence for universality in numerical computations with random data. Given a (possibly stochastic) numerical algorithm with random input data, the time (or number of iterations) to convergence (within a given tolerance) is a random variable, called the halting time. Two-component universality is observed for the fluctuations of the halting time, *i.e.*, the histogram for the halting times, centered by the sample average and scaled by the sample variance (see Eqs. \[fluctuation\] and \[halting-fluct\] below), collapses to a universal curve, independent of the input data distribution, as the dimension increases. Thus, up to two components, the sample average and the sample variance, the statistics for the halting time are universally prescribed. The case studies include six standard numerical algorithms, as well as a model of neural computation and decision making. A link to relevant software is provided in [@TrogdonNU] for the reader who would like to do computations of their own.'
author:
- 'Percy Deift[^1], Govind Menon[^2], Sheehan Olver[^3] and Thomas Trogdon$^*$'
bibliography:
- '/Users/trogdon/Dropbox/References/library.bib'
title: 'Universality in Numerical Computations with Random Data. Case Studies.'
---
In earlier work [@DiagonalRMT], two of the authors (P.D. and G.M., together with C. Pfrang) considered the problem of computing the eigenvalues of a real, $n\times n$ random symmetric matrix $M = (M_{ij})$. They considered matrices chosen from different ensembles $E$ using a variety of different algorithms $A$. Let $S_n$ denote the space of real, $n\times n$ symmetric matrices. Standard eigenvalue algorithms involve iterations of isospectral maps $\varphi = \varphi_A: S_n \rightarrow S_n$, $\operatorname{spec}(\varphi_A(M)) = \operatorname{spec}(M)$ for $M \in S_n$. If $M \in S_n$ is given, one considers the sequence of matrices $M_{k+1} = \varphi(M_k)$, $k \geq 0$, with $M_0 = M$. Clearly, $\operatorname{spec}(M_{k+1}) = \operatorname{spec}(M_{k}) = \cdots = \operatorname{spec}(M)$, and under appropriate conditions $M_k = \varphi_A^{(k)}(M)$ converges to a diagonal matrix, $\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda _n)$. Necessarily, the $\lambda_i$’s are the desired eigenvalues of $M$.
In [@DiagonalRMT], the authors discovered the following phenomenon: For a given accuracy $\epsilon$, a given matrix size $n$ ($\epsilon$ small, $n$ large, in an appropriate scaling range) and a given algorithm $A$, the *fluctuations* in the time to compute the eigenvalues to accuracy $\epsilon$ with the given algorithm $A$ were *universal*, independent of the choice of ensemble $E$. More precisely, they considered fluctuations in the *deflation time* $T$ (the notion of deflation time is generalized to that of *halting time* in subsequent calculations). Recall that if an $n\times n$ matrix has block form $$\begin{aligned}
M = \left( \begin{array}{cc} M_{11} & M_{12} \\ M_{21} & M_{22} \end{array} \right)\end{aligned}$$ where $M_{11}$ is $k \times k$ and $M_{22}$ is $(n-k) \times (n-k)$ for some $1 \leq k \leq n-1$ then one says that the block diagonal matrix $\hat M = \operatorname{diag}( M_{11}, M_{22})$ is *obtained from $M$ by deflation*. If $\|M_{12}\| = \|M_{21}\| \leq \epsilon$, then the eigenvalues $\{\lambda_i\}$ of $M$ differ from the eigenvalues $\{\hat \lambda_i\}$ of $\hat M$ by $\mathcal O(\epsilon)$. Let $T = T_{\epsilon,n,A,E}(M)$ be the time ($=$ \# of steps = \# iterations of $\varphi_A$) it takes to deflate a random matrix $M$, chosen from an ensemble $E$, to order $\epsilon$, using algorithm $A$, *i.e.* $T$ is the smallest time such that for some $k$, $1 \leq k \leq n-1$, $\|(\varphi_A^{(T)}(M))_{12}\| =\|(\varphi_A^{(T)}(M))_{21}\| \leq \epsilon$.
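In code, the deflation criterion is straightforward to check; a minimal Python/NumPy sketch (the function name is ours, for illustration):

```python
import numpy as np

def is_deflated(M, eps):
    """Deflation criterion: some 1 <= k <= n-1 with the off-diagonal
    block norm ||M[:k, k:]|| <= eps (= ||M[k:, :k]|| for symmetric M)."""
    n = M.shape[0]
    return any(np.linalg.norm(M[:k, k:]) <= eps for k in range(1, n))
```

The deflation time $T$ is then the first iterate of $\varphi_A$ at which this condition holds.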
As explained in [@DiagonalRMT], $T$ is a useful measure of the time required to compute the eigenvalues of $M$: Generically, at worst $\mathcal O(n)$ deflations are needed to compute the eigenvalues of $M$, and at best, $\mathcal O(\log n)$. The fluctuations $\tau_{\epsilon,n,A,E}(M)$ of $T$ are defined by $$\begin{aligned}
\label{fluctuation}
\tau_{\epsilon,n,A,E}(M) = \frac{ T_{\epsilon,n,A,E}(M) - \langle T_{\epsilon,n,A,E} \rangle}{\sigma_{\epsilon,n,A,E}},\end{aligned}$$ where $\langle T_{\epsilon,n,A,E}\rangle$ is the sample average of $T_{\epsilon,n,A,E}(M)$ taken over matrices $M$ from $E$, and $\sigma^2_{\epsilon,n,A,E}$ is the sample variance. For a given $E$, a typical sample size in [@DiagonalRMT] was of order $5,\!000$ to $10,\!000$ matrices $M$, and the output of the calculations in [@DiagonalRMT] was recorded in the form of a histogram for $\tau_{\epsilon,n,A,E}$.
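The centering and scaling above can be sketched as follows (an illustrative helper, not the code of [@DiagonalRMT]):

```python
import numpy as np

def fluctuations(T_samples):
    """Center a sample of halting times by the sample average and scale
    by the sample standard deviation; the histogram of the result is
    the object whose universality is studied."""
    T = np.asarray(T_samples, dtype=float)
    return (T - T.mean()) / T.std()
```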
Most of the calculations in [@DiagonalRMT] concerned three eigenvalue algorithms: the QR algorithm, the QR algorithm with shifts (the version of QR used in practice), and the Toda algorithm. The *QR algorithm* is based on the factorization of an (invertible) matrix $M$ as $M = QR$, where $Q$ is orthogonal and $R$ is upper-triangular with $R_{ii} > 0$. Given $M \in S_n$, with $M = QR$, $M' = \varphi_A(M) = \varphi_{\operatorname{QR}}(M) \equiv RQ$. Clearly, $M' = Q^T M Q \in S_n$ and $\operatorname{spec}(M') = \operatorname{spec}(M)$. Practical implementation of the QR algorithm requires the use of a shift, *i.e.* the *QR algorithm with shifts* [@Parlett1998]. As shown in [@DiagonalRMT], shifting does not affect universality. The *Toda algorithm* involves the solution $M(t)$ of the *Toda equation* $\frac{dM}{dt} = [B(M),M] = B(M) M - M B(M)$, where $B(M) = M_+ - M_+^T$, $M_+$ is the upper triangular part of $M$, and $M(t = 0) = M$. For all $t > 0$, $\operatorname{spec}(M(t)) = \operatorname{spec}(M)$, and as $t \rightarrow \infty$, we again have $M(t) \rightarrow \Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$ where $\{\lambda_i\}$ are the eigenvalues of $M$. For the convenience of the reader, in Figure \[TodaPfrang\], we reproduce, in particular, histograms for $\tau_{\epsilon,n,A,E}$ from [@DiagonalRMT] for the QR algorithm ($A = \operatorname{QR}$) with two different ensembles and varying values of $n$ and $\epsilon$.
![\[TodaPfrang\] The observation of two-component universality for $\tau_{\epsilon,n,A,E}$ when $A = \operatorname{QR}$. This figure is taken from [@DiagonalRMT]. Overlayed histograms demonstrate the collapse of the histogram of $\tau_{\epsilon,n,A,E}$ to a single curve. See the Appendix for the definitions of our choices for $E$. In the top-left figure, $E = \operatorname{GOE}$, and 40 histograms for $\tau_{\epsilon,n,A,E}$ are plotted one on top of the other for $\epsilon = 10^{-k}$, $k = 2,\!4,\!6,\!8$ and $n = 10,\! 30,\ldots,\! 190$. The histograms are created with $\approx 10,\!000$ samples. The top-right figure displays the same information as that in the top-left position, but now for $E = \mathrm{BE}$. In the lower figure, all $40+40$ histograms are overlayed and universality is evident: the data appears to follow a universal law for the fluctuations.](TodaPfrang.eps){width="\textwidth"}
From Figure \[TodaPfrang\], we see that eigenvalue computation with the QR algorithm exhibits two-component universality, *i.e.*, the fluctuations $\tau_{\epsilon,n,A,E}$ obey a universal law for all ensembles $E$ under consideration. The same is true for all three algorithms considered in [@DiagonalRMT]; the laws are different, however, for different algorithms $A$.
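As an illustration, the unshifted QR map and the associated deflation time can be sketched in a few lines of Python/NumPy (our own toy implementation, not the code used in [@DiagonalRMT]):

```python
import numpy as np

def qr_deflation_time(M, eps, max_iter=10**6):
    """Iterate the isospectral map M = QR -> RQ and return the first
    iteration T at which M deflates to order eps."""
    n = M.shape[0]
    for T in range(max_iter):
        # deflation check: some off-diagonal block has norm <= eps
        if any(np.linalg.norm(M[:k, k:]) <= eps for k in range(1, n)):
            return T
        Q, R = np.linalg.qr(M)
        M = R @ Q
    raise RuntimeError("no deflation within max_iter iterations")
```

Sampling $M$ from an ensemble $E$ and histogramming the centered, scaled values of this deflation time produces data of the kind shown in Figure \[TodaPfrang\].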
In the current paper, the work in [@DiagonalRMT] has been extended in various ways as follows. All matrix ensembles are described in the Appendix.
The Jacobi Algorithm {#sec:Jacobi}
--------------------
In the first set of computations, the authors consider the eigenvalue problem for random matrices $M \in S_n$ using the Jacobi algorithm (see, *e.g.* [@Golub2013]): for $M \in S_n$, choose $i < j$ such that $|M_{ij}| \geq \max_{1 \leq i' < j' \leq n} |M_{i'j'}|$, and let $G^{(ij)} \equiv G^{(ij)}(\theta)$ be the corresponding *Givens rotation matrix*: $G^{(ij)}_{i'j'} = \delta_{i'j'}$, for $i',j' \neq i,j$, and $$\begin{aligned}
\left[ \begin{array}{cc} G^{(ij)}_{ii} & G^{(ij)}_{ij} \\ G^{(ij)}_{ji} & G^{(ij)}_{jj} \end{array} \right] = \left[ \begin{array}{cc} \cos(\theta) & \sin(\theta) \\ - \sin (\theta) & \cos(\theta) \end{array} \right], ~~ (G^{(ij)})^T G^{(ij)} = I.\end{aligned}$$ Here $\theta = \theta(M)$ is chosen so that $((G^{(ij)})^T M G^{(ij)})_{ij} = 0$ and then $\varphi_{\operatorname{Jacobi}}(M) \equiv (G^{(ij)})^T M G^{(ij)}$. Clearly, $M' = \varphi_{\operatorname{Jacobi}}(M) \in S_n$ and $\operatorname{spec}(M') = \operatorname{spec}(M)$, and again (see [@Golub2013]), $M_k = \varphi^{(k)}_{\operatorname{Jacobi}}(M) \rightarrow \Lambda = \operatorname{diag}(\lambda_1,\ldots,\lambda_n)$. The Jacobi algorithm has a very different character from QR-Toda type algorithms, which are intimately connected to completely integrable Hamiltonian systems (see [@DeiftEigenvalue] and the references therein)[^4]. Deflation, which is a useful measure for eigenvalue computation times for QR/Toda type algorithms, is not useful for the Jacobi algorithm. In place of $T_{\epsilon,n, A, E}$, we record the *halting time* $k_{\epsilon,n,A,E}$: the number of iterations it takes for the Jacobi algorithm to reduce the Frobenius norm of the off-diagonal elements to less than a given[^5] $\epsilon$. Histograms are produced for an appropriate analog of $\tau_{\epsilon,n,A,E}$: $$\begin{aligned}
\label{halting-fluct}
\tau_{\epsilon,n,A,E}(M) = \frac{k_{\epsilon,n,A,E}(M) - \langle k_{\epsilon,n,A,E} \rangle}{\sigma_{\epsilon,n,A,E}}.\end{aligned}$$ Computations for $A = \operatorname{Jacobi}$ are given in Figure \[Jacobi\]. Again, two-component universality is evident.
![\[Jacobi\] The observation of two-component universality for $\tau_{\epsilon,n,A,E}$ when $A = \operatorname{Jacobi}$, $E = \operatorname{GOE},~\mathrm{BE}$ and $\epsilon = \sqrt{n}\, 10^{-10}$. The left figure displays two histograms, one on top of the other, one for $\operatorname{GOE}$ and one for $\mathrm{BE}$, when $n = 30$. The right figure displays the same information for $n = 90$. All histograms are produced with $16,\!000$ samples. We see two-component universality emerge for $n$ sufficiently large: the histograms follow a universal (independent of $E$) law.](Jacobi.eps){width="\textwidth"}
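A minimal sketch of the classical Jacobi iteration and its halting time, assuming the Givens rotation convention above (our own illustrative implementation, not the code behind Figure \[Jacobi\]):

```python
import numpy as np

def jacobi_halting_time(M, eps):
    """Classical Jacobi: rotate away the largest off-diagonal entry at
    each step; halt when the Frobenius norm of the off-diagonal part
    drops below eps, and return the iteration count."""
    M = M.astype(float)
    n = M.shape[0]
    k = 0
    while True:
        off2 = (M**2).sum() - (np.diag(M)**2).sum()  # squared off-diagonal norm
        if off2 < eps**2:
            return k
        # largest off-diagonal entry, i < j
        i, j = np.unravel_index(np.argmax(np.abs(np.triu(M, 1))), M.shape)
        # angle chosen so that the (i, j) entry of G^T M G vanishes
        theta = 0.5 * np.arctan2(2 * M[i, j], M[j, j] - M[i, i])
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(n)
        G[i, i] = G[j, j] = c
        G[i, j], G[j, i] = s, -s
        M = G.T @ M @ G
        k += 1
```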
Ensembles with Dependent Entries {#sec:Depend}
--------------------------------
In all the above cases, including the calculations for the Jacobi algorithm, the matrices $M$ are real and the entries $M_{ij}$ are independent, subject only to the symmetry requirement $M_{ij} = M_{ji}$. In the second set of computations in the present paper, the authors consider $n \times n$ Hermitian matrices $M = M^*$ taken from various unitary ensembles (see *e.g.* [@MehtaRM]) with probability distributions proportional to $e^{-n \mathrm{tr} V(M)} dM$, where $V: \mathbb R \rightarrow \mathbb R$ grows sufficiently rapidly as $|x| \rightarrow \infty$, and $dM$ is Lebesgue measure on the algebraically independent entries $M_{ij} = \mathrm{Re}\, M_{ij} + \sqrt{-1}\, \mathrm{Im}\, M_{ij}$ of $M$. Unless $V(x)$ is proportional to $x^2$, the entries of $M$ for such ensembles are dependent, and it is a non-trivial matter to sample the matrices. A novel technique for sampling such unitary ensembles was introduced recently [@Olver2014] by two of the authors, S.O. and T.T., together with N. R. Rao, taking advantage of the representation of the eigenvalues of $M$ as a determinantal point process whose kernel is given in terms of orthogonal polynomials (see also [@Li2013]). Using this sampling technique, the authors of the present paper have considered the QR algorithm for various unitary ensembles[^6]. Histograms for the halting (= deflation) time fluctuations $\tau_{\epsilon,n,A,E}$, $A = \operatorname{QR}$, are given in Figure \[QR\], and again two-component universality is evident.
![\[QR\] The observation of two-component universality for $\tau_{\epsilon,n,A,E}$ when $A = \operatorname{QR}$, $E = \mathrm{QUE}, ~\operatorname{COSH}, ~\mathrm{GUE}$ and $\epsilon = 10^{-10}$. Here we are using deflation time (= halting time), as in [@DiagonalRMT]. The left figure displays three histograms, one each for $\operatorname{GUE},~\operatorname{COSH}$ and $\operatorname{QUE}$, when $n = 70$. The right figure displays the same information for $n = 150$. All histograms are produced with $16,\!000$ samples. Two-component universality emerges for $n$ sufficiently large: the histograms follow a universal (independent of $E$) law. This is surprising because $\operatorname{COSH}$ and $\operatorname{QUE}$ have eigenvalue distributions that differ significantly from $\operatorname{GUE}$ in that they do not follow the so-called *semi-circle law*. These histograms appear to collapse to the same curve as the one in Figure \[TodaPfrang\]. This is a further surprise, given the well-known fact that Orthogonal and Unitary Ensembles give rise to different (eigenvalue) universality classes.](QR.eps){width="\textwidth"}
The Conjugate Gradient Algorithm {#sec:CG}
--------------------------------
In a third set of computations in this paper, the authors start to address the question of whether two-component universality is just a feature of eigenvalue computation, or is present more generally in numerical computation. In particular, the authors consider the solution of the linear system of equations $Wx=b$ where $W$ is real and positive definite, using the conjugate gradient (CG) method. The method is iterative (see *e.g.* [@Saad2003] and also Remark \[rmk:scaling\] below) and at iteration $k$ of the algorithm an approximate solution $x_k$ of $Wx=b$ is found and the residual $r_k = Wx_k-b$ is computed. For any given $\epsilon > 0$, the method is halted when[^7] $\|r_k\|_2 < \epsilon$, and the halting time $k_{\epsilon}(W,b)$ recorded. The authors consider $n \times n$ matrices $W$ chosen from two different positive definite ensembles $E$ (see Appendix \[app:pd\]) and vectors $b = (b_j)$ chosen independently with iid entries $\{b_j\}$. Given $\epsilon$ (small) and $n$ (large), and $(W,b) \in E$, the authors record the halting time $k_{\epsilon,n,A,E}$, $A = \operatorname{CG}$, and compute the fluctuations $\tau_{\epsilon,n,A,E}(W,b)$. The histograms for $\tau_{\epsilon,n,A,E}$ are given in Figure \[CG\], and again, two-component universality is evident.
![\[CG\] The observation of two-component universality for $\tau_{\epsilon,n,A,E}$ when $A = \operatorname{CG}$ and $E = \mathrm{cLOE}, ~\mathrm{cPBE}$ with $\epsilon = 10^{-10}$. The left figure displays two histograms, one for $\mathrm{cLOE}$ and one for $\mathrm{cPBE}$, when $n = 100$. The right figure displays the same information for $n = 500$. All histograms are produced with $16,\!000$ samples. The critical scaling (see Appendix \[app:pd\]) has significant impact on the distribution of the condition number and forces $\langle k_{\epsilon,n,A,E} \rangle \approx n \alpha$, $\alpha < 1$. If the scaling $m = 2n$ is chosen in the ensemble $E$, then the CG method converges too quickly and the halting time tends to take only 10-15 different values for each value of $m$. No interesting limiting statistics are present. Conversely, if $m = n$, the CG method converges slowly ($\langle k_{\epsilon,m,A,E} \rangle \gg m$) and rounding errors dominate the computation. Experiments do not indicate two-component universality if $m = 2n$ or $m = n$. The scaling $m = n + 2 \lfloor \sqrt{n} \rfloor$ identifies a critical scaling region. Within this scaling region, we see two-component universality emerge for $n$ sufficiently large: the histograms follow a universal (independent of $E$) law.](CG.eps){width="\textwidth"}
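A bare-bones sketch of the CG iteration with this halting rule (illustrative only; the ensembles cLOE and cPBE are defined in the Appendix and not reproduced here):

```python
import numpy as np

def cg_halting_time(W, b, eps, max_iter=None):
    """Plain conjugate gradient for W x = b (W symmetric positive
    definite), started at x_0 = 0; halt when ||r_k||_2 < eps and
    return the iteration count k."""
    n = len(b)
    if max_iter is None:
        max_iter = 10 * n
    x = np.zeros(n)
    r = b.astype(float)   # residual at x_0 = 0 (its sign does not affect the norm)
    p = r.copy()
    rr = r @ r
    for k in range(1, max_iter + 1):
        Wp = W @ p
        alpha = rr / (p @ Wp)
        x += alpha * p
        r = r - alpha * Wp
        rr_new = r @ r
        if np.sqrt(rr_new) < eps:
            return k
        p = r + (rr_new / rr) * p
        rr = rr_new
    return max_iter
```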
The GMRES Algorithm {#sec:GMRES-un}
-------------------
In a fourth set of computations, the authors again consider the solution of $Wx =b$ but here $W$ has the form $I + X$ and $X \equiv X_n$ is a random, real non-symmetric matrix and $b= (b_j)$ is independent with uniform iid entries $\{b_j\}$. As $W = I +X$ is (almost surely) no longer positive definite, the conjugate gradient algorithm breaks down, and the authors solve $(I+X)x = b$ using the Generalized Minimal Residual (GMRES) algorithm [@GMRES-original]. Again, the algorithm is iterative and at iteration $k$ of the algorithm an approximate solution $x_k$ of $(I+X)x = b$ is found and the residual $r_k = (I+X)x_k -b$ is computed. As before, for any given $\epsilon > 0$, the method is halted when $\|r_k\|_2 < \epsilon$ and $k_{\epsilon,n,A,E}(X,b)$ is recorded. As in the conjugate gradient problem (Section \[sec:CG\]), the authors compute the histograms for the fluctuations of the halting time $\tau_{\epsilon,n,A,E}$ for two ensembles $E$, where now $A = \operatorname{GMRES}$. The results are given in Figure \[GMRES-un\], where again two-component universality is evident.
![\[GMRES-un\] The observation of two-component universality for $\tau_{\epsilon,n,A,E}$ when $A = \operatorname{GMRES}$, $E = \mathrm{cSGE}, ~\mathrm{cSBE}$ and $\epsilon = 10^{-8}$. The left figure displays two histograms, one for $\mathrm{cSGE}$ and one for $\mathrm{cSBE}$, when $n = 100$. The right figure displays the same information for $n = 500$. All histograms are produced with $16,\!000$ samples. The critically scaled ensembles cSBE and cSGE are of the form $I + X_n$ with $ \|X_n\| \approx 2$. If the matrix is too close to the identity, the halting time will take almost constant values, *i.e.* $k_{\epsilon,n,A,E} = 8$, independent of $n$. If the matrix is too far from the identity, the fact that it is unstructured makes GMRES perform poorly and the algorithm typically completes in $n$ steps, the maximum possible number of iterations (see Remark \[rmk:scaling\] below). With the proper scaling of $X$, we see two-component universality emerge for $n$ sufficiently large: the histograms follow a universal (independent of $E$) law.](GMRES-un.eps){width="\textwidth"}
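A compact sketch of unrestarted GMRES with this halting rule (our own illustrative implementation, built on the Arnoldi iteration; it is not the code used for Figure \[GMRES-un\]):

```python
import numpy as np

def gmres_halting_time(A, b, eps):
    """Unrestarted GMRES for A x = b with x_0 = 0: build the Arnoldi
    basis and halt when the least-squares residual drops below eps."""
    n = len(b)
    beta = np.linalg.norm(b)
    Q = np.zeros((n, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = b / beta
    for k in range(1, n + 1):
        v = A @ Q[:, k - 1]              # Arnoldi step
        for i in range(k):
            H[i, k - 1] = Q[:, i] @ v
            v -= H[i, k - 1] * Q[:, i]
        H[k, k - 1] = np.linalg.norm(v)
        if H[k, k - 1] > 1e-14:
            Q[:, k] = v / H[k, k - 1]
        # residual of the k-th least-squares problem ||H y - beta e_1||
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        if np.linalg.norm(H[:k + 1, :k] @ y - e1) < eps:
            return k
    return n
```

Solving the small least-squares problem from scratch at each step is wasteful (practical codes update a QR factorization of $H$ with Givens rotations), but it keeps the halting rule transparent.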
\[rmk:scaling\] The computations in Sections \[sec:CG\] and \[sec:GMRES-un\] are particularly revealing for the following reason. Both the CG and GMRES algorithms proceed by generating approximations $x_k$ to the solution in progressively larger subspaces $V_k$ of $\mathbb R^n$, $x_k \in V_k$, $\dim V_k = k$ (almost surely). These algorithms terminate in at most $n$ steps, in the absence of rounding errors. If the matrix $W$ in the case of CG, or $I + X$ in the case of GMRES, is too close to the identity, then the algorithm will converge in $\mathcal O(1)$ steps, essentially independent of $n$. On the other hand, if $W$ or $I + X$ is too far from the identity, the algorithm will converge only after $n$ steps (GMRES) or be dominated by rounding errors (CG). Thus in both cases there are no meaningful statistics. What the calculations in Sections \[sec:CG\] and \[sec:GMRES-un\] reveal is that if the ensembles for CG and GMRES are such that the matrices $W$ and $I + X$, respectively, are typically not too close to, and not too far from, the identity, then the algorithms exhibit significant statistical fluctuations, and two-component universality is immediately evident (for more discussion, see the captions for Figures \[CG\] and \[GMRES-un\]). Analogous considerations apply in Section \[sec:GMRES-Dir\] below.
Discretization of a Random PDE {#sec:GMRES-Dir}
------------------------------
In a fifth set of computations, the authors raise the issue of whether two-component universality is just a feature of finite-dimensional computation, or is also present in problems which are intrinsically infinite dimensional. In particular, is the universality present in numerical computations for PDEs? As a case study, the authors consider the numerical solution of the Dirichlet problem $\Delta u = 0$ in a star-shaped region $\Omega \subset \mathbb R^2$ with $u = f$ on $\partial \Omega$. The boundary is described by a periodic function of the angle $\theta$, $r = r(\theta)$, and similarly $f = f(\theta)$, $0 \leq \theta \leq 2 \pi$. Two ensembles, BDE and UDE (as described in Appendix \[app:Dir\]), are derived from a discretization of the problem with specific choices for $r$, defined by a random Fourier series. The boundary condition $f$ is chosen randomly by letting $\{f(\frac{2 \pi j}{n})\}_{j=0}^{n-1}$ be iid uniform on $[-1,1]$. Histograms for the halting time fluctuations $\tau_{\epsilon,n,A,E}$ from these computations are given in Figure \[GMRES-Dir\], and again, two-component universality is evident. What is surprising, and quite remarkable, about these computations is that the histograms for $\tau_{\epsilon,500,A,E}$ in this case are the *same* as the histograms for $\tau_{\epsilon,500,A,E}$ in Figure \[GMRES-un\] (see Figure \[GMRES-Dir\] for the overlayed histograms). In other words, UDE and BDE are structured with random components, whereas cSGE and cSBE have no structure, yet they produce the same statistics (modulo two components).
![\[GMRES-Dir\] The observation of two-component universality for $\tau_{\epsilon,n,A,E}$ when $A = \operatorname{GMRES}$, $E = \mathrm{UDE}, ~\mathrm{BDE}$ and $\epsilon = 10^{-8}$. The left figure displays two histograms, one for $\mathrm{UDE}$ and one for $\mathrm{BDE}$, when $n = 100$. The right figure displays the same information for $n = 500$. The bottom figure consists of four histograms, two taken from Figure \[GMRES-un\] ($E = \mathrm{cSGE},~\mathrm{cSBE}$) and two taken from the right figure above ($E = \mathrm{UDE},~\mathrm{BDE}$). All histograms are produced with $16,\!000$ samples. It is interesting to note two properties. First, as we observe from our computations, BDE and UDE are of the form $I + X_n$ where $X_n$ has a norm that grows proportional to some fractional power of $n$. While this type of growth in the case of Section \[sec:GMRES-un\] (Figure \[GMRES-un\]) would cause GMRES to take its maximum possible number of iterations, that is $k = n$, nevertheless, in the context of Section \[sec:GMRES-Dir\], non-trivial statistics emerge. In light of Remark \[rmk:scaling\], we conjecture that structure is necessary for GMRES to perform well when the perturbation of the identity has an unbounded spectral radius in the large $n$ limit. The second, and most important feature, is that two-component universality for matrices of the form $I + X_n$ persists as the computations are moved from structured randomness (UDE and BDE) to unstructured randomness (cSBE and cSGE): the histograms follow a universal (independent of $E$) law.](GMRES-Dir.eps){width="\textwidth"}
A Genetic Algorithm {#sec:Genetic}
-------------------
In all the computations discussed so far, the randomness in the computations[^8] resides in the initial data. In the sixth set of computations, the authors consider an algorithm which is intrinsically stochastic. They consider a genetic algorithm to compute *Fekete points* (see [@SaffPotential p. 142]). Such points $P^* = (P_1^*,P_2^*, \ldots, P^*_N) \in \mathbb R^N$ are the global minimizers of the objective function $$\begin{aligned}
H(P) = \frac{2}{N(N-1)} \sum_{1 \leq i \neq j \leq N} \log |P_i - P_j|^{-1} + \frac{1}{N} \sum_{i=1}^N V(P_i)\end{aligned}$$ for real-valued functions $V = V(x)$ which grow sufficiently rapidly as $|x| \rightarrow \infty$. It is well-known (see, *e.g.* [@SaffPotential]) that as $N \rightarrow \infty$, the counting measures $\delta_{P^*} = \frac{1}{N} \sum_{i=1}^N \delta_{P_i^*}$ converge to the so-called equilibrium measure $\mu_V$, which plays a key role in the asymptotic theory of the orthogonal polynomials generated by the measure $e^{-NV(x)}dx$ on $\mathbb R$. Genetic algorithms involve two basic components, “mutation” and “crossover”. The authors implement the genetic algorithm in the following way.
#### The Algorithm
Fix a distribution $\mathfrak D$ on $\mathbb R$. Draw an initial population $\mathcal P_0 = \mathcal P = \{P_i\}_{i=1}^n$ consisting of $n = 100$ vectors in $\mathbb R^N$, $N$ large, with elements that are iid uniform on $[-4,4]$. The random map $F_{\mathfrak D}: (\mathbb R^N)^n \rightarrow (\mathbb R^N)^n$ is defined by one of the following two procedures:
- **Mutation**: Pick one individual $P \in \mathcal P$ at random (uniformly). Then pick two integers $n_1,~n_2$ from $\{1,2,\ldots,N\}$ at random (uniformly and independently). Three new individuals are created.
- $\tilde P_1$ — draw $n_1$ iid numbers $\{x_1, \ldots, x_{n_1} \}$ from $\mathfrak D$ and perturb the first $n_1$ elements of $P$ : $(\tilde P_1)_i = (P)_i + x_i$, $i = 1, \ldots, n_1$, and $(\tilde P_1)_i = (P)_i$ for $i > n_1$.
- $\tilde P_2$ — draw $N - n_2$ iid numbers $\{y_{n_2+1},\ldots,y_{N}\}$ from $\mathfrak D$ and perturb the last $N- n_2$ elements of $P$: $(\tilde P_2)_i = (P)_i + y_i$, $i = n_2+1, \ldots, N$, and $(\tilde P_2)_i = (P)_i$ for $i \leq n_2$.
- $\tilde P_3$ — draw $|n_1-n_2|$ iid numbers $\{z_{1},\ldots,z_{|n_1-n_2|}\}$ from $\mathfrak D$ and perturb elements $n_1^*=1+\min(n_1,n_2)$ through $n_2^*=\max(n_1,n_2)$: $(\tilde P_3)_{i} = (P)_i + z_{i-n_1^*+1}$, $i = n_1^*, \ldots, n_2^*$, and $(\tilde P_3)_i = (P)_i$ for $i \not\in \{n_1^*,\ldots,n_2^*\}$.
- **Crossover**: Pick two individuals $P,~Q$ from $\mathcal P$ at random (independently and uniformly). Then pick two numbers $n_1,~n_2$ from $\{1,2,\ldots,N\}$ (independently and uniformly). Two new individuals are created.
- $\tilde P_4$ — Replace the $n_1$th element of $P$ with the $n_2$th element of $Q$ and perturb it (additively) with a sample of $\mathfrak D$.
- $\tilde P_5$ — Replace the $n_1$th element of $Q$ with the $n_2$th element of $P$ and perturb it (additively) with a sample of $\mathfrak D$.
At each step, the application of either crossover or mutation is chosen with equal probability. The new individuals are appended[^9] to $\mathcal P$ and $\mathcal P \mapsto \mathcal P' = F_{\mathfrak D}(\mathcal P) \in (\mathbb R^N)^n$ is constructed by choosing the 100 $P_i$’s in $\tilde {\mathcal P}$ which yield the smallest values of $H(P)$. The algorithm produces a sequence of populations $\mathcal P_1, \mathcal P_2,\ldots,\mathcal P_k, \ldots$ in $(\mathbb R^N)^n$, $\mathcal P_{k+1} = F_{\mathfrak D}(\mathcal P_k)$, $n = 100$, and halts, with halting time recorded, for a given $\epsilon$, when $\min_{P \in \mathcal P_k} H(P) - \inf_{P \in \mathbb R^N} H(P) < \epsilon$.
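The mutation, crossover and selection steps above can be sketched as follows. This is a minimal Python sketch, not the authors' `Mathematica` implementation; the function names and the small population and dimension sizes used in testing are our own choices.

```python
import math
import random

def H(P, V):
    """Objective: normalized pairwise log-repulsion plus external field V."""
    N = len(P)
    s = sum(-math.log(abs(P[i] - P[j]))
            for i in range(N) for j in range(N) if i != j)
    return 2.0 / (N * (N - 1)) * s + sum(V(x) for x in P) / N

def mutate(pop, N, D):
    """Create the three perturbed individuals P1~, P2~, P3~."""
    P = random.choice(pop)
    n1, n2 = random.randint(1, N), random.randint(1, N)
    P1 = [x + D() if i + 1 <= n1 else x for i, x in enumerate(P)]   # first n1 entries
    P2 = [x + D() if i + 1 > n2 else x for i, x in enumerate(P)]    # last N - n2 entries
    lo, hi = 1 + min(n1, n2), max(n1, n2)                           # window n1* .. n2*
    P3 = [x + D() if lo <= i + 1 <= hi else x for i, x in enumerate(P)]
    return [P1, P2, P3]

def crossover(pop, N, D):
    """Create P4~, P5~ by swapping single entries between two parents."""
    P, Q = random.choice(pop), random.choice(pop)
    n1, n2 = random.randrange(N), random.randrange(N)
    P4, P5 = list(P), list(Q)
    P4[n1] = Q[n2] + D()
    P5[n1] = P[n2] + D()
    return [P4, P5]

def step(pop, N, D, V, n):
    """One generation: mutate or cross over with equal probability, keep the n fittest."""
    new = mutate(pop, N, D) if random.random() < 0.5 else crossover(pop, N, D)
    return sorted(pop + new, key=lambda P: H(P, V))[:n]
```

By construction the selection step keeps the $n$ best of $\mathcal P \cup \tilde P$, so $\min_{P \in \mathcal P_k} H(P)$ is non-increasing in $k$.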
The histograms for the fluctuations $\tau_{\epsilon,N,A,E}$, with $A = \operatorname{Genetic}$, are given in Figure \[Genetic\], for two choices of $V$, $V(x) = x^2$ and $V(x) = x^4 - 3 x^2$, and different choices of $E \simeq \mathfrak D$. For $V(x) = x^2$, $\inf_{P \in \mathbb R^N} H(P)$ is known explicitly, and for $V(x) = x^4- 3x^2$, $\inf_{P \in \mathbb R^N} H(P)$ is approximated by a long run of the genetic algorithm. As before, two-component universality is evident.
![\[Genetic\] The observation of two-component universality for $\tau_{\epsilon,N,A,E}$ when $A = \operatorname{Genetic}$, $\epsilon = 10^{-2}$ and $E \simeq \mathfrak D$ where $\mathfrak D$ is chosen to be either uniform on $[-1/(10 N), 1/(10 N)]$ or taking values $\pm 1/(10 N)$ with equal probability. The top row is created with the choice $V(x) = x^2$ and the bottom row with $V(x) = x^4-3x^2$. Each of the plots in the left column displays two histograms, one for each choice of $\mathfrak D$ when $N = 10$. The right column displays the same information for $N = 40$. All histograms are produced with $16,\!000$ samples. It is evident that the histograms collapse onto a universal curve, one for each $V$. ](Genetic.eps){width="\textwidth"}
Curie–Weiss Model {#sec:CurieWeiss}
-----------------
In the seventh and final set of computations, the authors pick up on a common notion in neuroscience that the human brain is a computer with software and hardware. If this is indeed so, then one may speculate that two-component universality should certainly be present in some cognitive actions. Indeed, such a phenomenon is in evidence in the recent experiments of Bakhtin and Correll [@Bakhtin2012]. In [@Bakhtin2012], data from experiments with 45 human participants were analyzed. The participants are shown 200 pairs of images. The images in each pair consist of nine black disks of variable size. The disks in the images within each pair have approximately the same area so that there is no *a priori* bias. The participants are then asked to decide which of the two images covers a larger (black) area and the time $T$ required to make a decision is recorded. For each participant, the decision times for the 200 pairs are collected and the fluctuation histogram[^10] is tabulated. The experimental results are in good agreement with a dynamical Curie–Weiss model frequently used in describing decision processes [@Bakhtin2011]. As each of the 45 participants operates, presumably, in his or her own stochastic neural environment, this is a remarkable demonstration of two-component universality in cognitive action.
At its essence the Curie–Weiss model is Glauber dynamics on the hypercube $\{-1,1\}^N$ with a microscopic approximation of a drift-diffusion process. Consider $N$ variables $\{X_i(t)\}_{i=1}^N$, $X_i(t) \in \{-1,1\}$. The state of the system at time $t$ is $X(t) = (X_1(t), X_2(t), \ldots, X_N(t))$. The transition probabilities are given through the expressions $$\begin{aligned}
\mathbb P(X_i(t+\Delta t) \neq X_i(t) | X(t) = x ) = c_i(x) \Delta t + o(\Delta t),\end{aligned}$$ where $c_i(x)$ is the spin flip intensity. The observable considered is $M(X(t)) = \frac{1}{N} \sum_{i=1}^N X_i(t) \in [-1,1]$, and the initial state of the system is chosen so that $M(X(0)) = 0$, a state with no *a priori* bias, as in the case of the experimental setup. The halting (or decision) time for this model is $k = \inf\{t: |M(X(t))| \geq \epsilon\}$, the time at which the system makes a decision. Here $\epsilon \in (0,1)$ need not be small.
This model is simulated by first sampling an exponential random variable with mean $\lambda(t) = \left(\sum_i c_i(X(t))\right)^{-1}$ to find the time $\Delta t$ at which the system changes state. Sampling the random variable $Y$, $\mathbb P(Y = i) = c_i(X(t)) \lambda(t)$, $i = 1,2,\ldots,N$ produces an integer $j$, determining which spin flipped. Define $X_i(t+s) \equiv X_i(t)$ if $s \in [0,\Delta t)$ for $i = 1,2,\ldots,N$ and $X_i(t + \Delta t) \equiv X_i(t)$, $X_j(t+ \Delta t) \equiv -X_j(t)$ for $i \neq j$. This procedure is repeated with $t$ replaced by $t+ \Delta t$ to evolve the system.
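The event-driven simulation just described may be sketched as follows. This is a minimal Python sketch under our own naming; `simulate_cw` is not from the original codebase, and the balanced initial state assumes $N$ even.

```python
import math
import random

def simulate_cw(N, beta, eps, rng):
    """Glauber dynamics for the Curie-Weiss decision model.

    Returns the decision time inf{t : |M(X(t))| >= eps} for one run,
    with intensities c_i(x) = exp(-beta * x_i * M(x)) and M(X(0)) = 0.
    """
    x = [1] * (N // 2) + [-1] * (N // 2)             # balanced start, M = 0
    t, M = 0.0, 0.0
    while abs(M) < eps:
        c = [math.exp(-beta * xi * M) for xi in x]   # spin-flip intensities
        total = sum(c)
        t += rng.expovariate(total)                  # waiting time with mean 1/total
        u, acc, j = rng.random() * total, 0.0, 0
        for j, cj in enumerate(c):                   # spin j flips with prob c_j/total
            acc += cj
            if acc >= u:
                break
        x[j] = -x[j]
        M += 2.0 * x[j] / N                          # flipping x_j changes M by 2 x_j / N
    return t
```

Repeating such runs and centering/scaling the recorded times produces the histograms of Figure \[CurieWeiss\].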
Central to the application of the model is the assumption on the statistics of the spin flip intensity $c_i(x)$. If one changes the basic statistics of the $c_i$’s, will the limiting histograms for the fluctuations of $k$ be affected as $N$ becomes large? In response to this question the authors consider the following choices for $E \simeq c_i(x)$ ($\beta = 1.3$): [$c_i(x) = o_i(x) = e^{-\beta x_i M(x)}$ (the case studied in [@Bakhtin2012]), $c_i(x) = u_i(x) = e^{-\beta x_i(M(x)-M^3(x)/5)}$, or $c_i(x) = v_i(x) = e^{-\beta x_i(M(x) + M^8(x))}$.]{} The resulting histograms for the fluctuations $\tau_{\epsilon,N,A,E}$ of $T$ are given in Figure \[CurieWeiss\]. Once again, two-component universality is evident. Thus the universality in the decision process models mirrors the universality observed among the 45 participants in the experiment of Bakhtin and Correll.
![\[CurieWeiss\] The observation of two-component universality for $\tau_{\epsilon,N,A,E}$ when $A = \text{Curie--Weiss}$, $E \simeq o_i,~ u_i, ~ v_i$, $\epsilon = .5$ and $\beta = 1.3$. The left figure displays three histograms, one for each choice of $E$ when $N = 50$. The right figure displays the same information for $N = 200$. All histograms are produced with $16,\!000$ samples. The histogram for $E = o_i$ corresponds to the case studied in [@Bakhtin2012; @Bakhtin2011]. It is clear from these computations that the fluctuations collapse on to the universal curve for $E = o_i$. Thus, reasonable changes in the spin flip intensity do not appear to change the limiting histogram. This indicates why the specific choice made in [@Bakhtin2012] of $E = o_i$ is perhaps enough to capture the behavior of many individuals. ](CurieWeiss.eps){width="\textwidth"}
Conclusions
===========
Two distinct themes are combined in this work: (1) the notion of universality in random matrix theory and statistical physics; (2) the use of random ensembles in scientific computing. The origin of both these ideas dates to the 1950s in the work of (1) Wigner [@MehtaRM; @Wigner1951], and (2) von Neumann and Goldstine [@Goldstine1951]. There has been considerable progress in the rigorous understanding of universality in random matrix theory (see e.g. [@DeiftRandom4; @Erdos2012] and the references therein). In contrast, the performance of numerical algorithms on random ensembles is less understood, though results in this area include probabilistic bounds for condition numbers and halting times for numerical algorithms [@Demmel1988; @Edelman1988; @Smale1985].
The work presented here reveals empirical evidence for two-component universality in several numerical algorithms. The results of [@DiagonalRMT] and Sections \[sec:Jacobi\]-\[sec:GMRES-Dir\] reveal universal fluctuations of halting times for iterative algorithms in numerical linear algebra on random matrix ensembles with both dependent and independent entries. In each instance, the process of numerical computation on a random matrix may be viewed as the evolution of a random ensemble by a deterministic dynamical system. In a similar light, the algorithms of Sections \[sec:Genetic\] and \[sec:CurieWeiss\] may be seen as stochastic dynamical systems, with that of Section \[sec:CurieWeiss\] having a close connection with neural computation. In all these examples, the empirical observations presented here suggest new universal phenomena in non-equilibrium statistical mechanics. The results of Sections \[sec:GMRES-un\] and \[sec:GMRES-Dir\] reveal that numerical computations with a structured ensemble with some random components may have the same statistics (modulo the two components) as an unstructured ensemble. This brings to mind the situation in the 1950s when Wigner introduced random matrices as a model for scattering resonances of neutrons off heavy nuclei: the neutron-nucleus system has a well-defined and structured Hamiltonian, but nevertheless the resonances for neutron scattering are well-described statistically by the eigenvalues of an (unstructured) random matrix.
Materials {#materials .unnumbered}
=========
All algorithms discussed here are implemented in `Mathematica`. A package is available for download [@TrogdonNU] that contains all relevant data and the code to generate this data. The package supports parallel evaluation for most algorithms and runs easily on personal computers.
Gaussian Ensembles {#app:gauss}
------------------
The Gaussian Orthogonal Ensemble (GOE) is given by $(X+X^T)/\sqrt{4n}$ where $X$ is an $n \times n$ matrix of [standard]{} iid Gaussian variables. The Gaussian Unitary Ensemble (GUE) is given by $(X+X^*)/\sqrt{8n}$ where $X$ is an $n \times n$ matrix of [standard]{} iid complex Gaussian variables.
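For instance, a GOE sample with this normalization may be drawn as follows (a plain-Python sketch with our own function name; the GUE case is analogous, with complex standard Gaussians and the $\sqrt{8n}$ scaling):

```python
import random

def goe(n, rng):
    """Sample (X + X^T)/sqrt(4n), where X has iid standard Gaussian entries."""
    X = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    s = (4.0 * n) ** 0.5
    return [[(X[i][j] + X[j][i]) / s for j in range(n)] for i in range(n)]
```

With this scaling each off-diagonal entry has variance $2/(4n) = 1/(2n)$, so the spectrum concentrates on a bounded interval as $n \rightarrow \infty$.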
Bernoulli Ensemble {#app:bernoulli}
------------------
The Bernoulli Ensemble (BE) is given by an $n \times n$ matrix $X$ consisting of iid random variables that take the values $\pm 1/\sqrt{n}$ with equal probability subject only to the constraint $X^T = X$.
Positive Definite Ensembles {#app:pd}
---------------------------
The critically-scaled Laguerre Orthogonal Ensemble (cLOE) is given by $ W = XX^T/m$ where $X$ is an $n \times m$ matrix with [standard]{} iid Gaussian entries. The critically-scaled positive definite Bernoulli ensemble (cPBE) is given by $W = XX^T/m$ where $X$ is an $n \times m$ matrix consisting of iid Bernoulli variables taking the values $\pm 1$ with equal probability. [In both cases,]{} the critical scaling refers to the choice $m = n + 2\lfloor \sqrt{n} \rfloor$.
Shifted Ensembles {#app:shift}
-----------------
The critically-scaled shifted Bernoulli Ensemble (cSBE) is given by $I + X/\sqrt{n}$ where $X$ is an $n \times n$ matrix consisting of iid Bernoulli variables taking the values $\pm 1$ with equal probability. The critically-scaled shifted Ginibre Ensemble (cSGE) is given by $I + X/\sqrt{n}$ where $X$ is an $n \times n$ matrix of [standard]{} iid Gaussian variables. With this scaling $ \mathbb P( |\|X/\sqrt{n}\| -2| > \epsilon) \rightarrow 0$ as $n \rightarrow \infty$ [@Geman1980].
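This normalization is easy to check empirically. The sketch below is our own: `csbe` draws a cSBE sample and `spectral_norm` is a simple power iteration on $X^TX$, not part of any cited package.

```python
import random

def csbe(n, rng):
    """cSBE sample: I + X/sqrt(n), with X_ij = +-1 with equal probability."""
    r = 1.0 / n ** 0.5
    return [[(1.0 if i == j else 0.0) + (r if rng.random() < 0.5 else -r)
             for j in range(n)] for i in range(n)]

def spectral_norm(A, iters=100):
    """Estimate ||A||_2 (largest singular value) by power iteration on A^T A."""
    n = len(A)
    v = [1.0 / n ** 0.5] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]   # A v
        u = [sum(A[i][j] * w[i] for i in range(n)) for j in range(n)]   # A^T (A v)
        nrm = sum(z * z for z in u) ** 0.5
        v = [z / nrm for z in u]
    w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(z * z for z in w) ** 0.5
```

For moderate $n$ the estimated norm of the perturbation $X/\sqrt{n}$ should already be close to the limiting value $2$.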
Unitary Ensembles {#app:unitary}
-----------------
The Quartic Unitary Ensemble (QUE) is a complex, unitary ensemble with probability distribution proportional to $e^{-n \mathrm{tr} M^4}dM$. The Cosh Unitary Ensemble (COSH) has its distribution proportional to $e^{- \mathrm{tr} \cosh M}dM$.
Dirichlet Ensembles {#app:Dir}
-------------------
We consider the numerical solution of the equation $\Delta u = 0$ in $\Omega$ and $u = f$ on $\partial \Omega$. Here we let $\Omega$ be the star-shaped region interior to the curve $(x,y) = (r(\theta) \cos(\theta),r(\theta) \sin(\theta))$ where $r(\theta)$ for $ 0 \leq \theta < 2 \pi$ is given by $r(\theta) = 1 + \sum_{j=1}^m (X_j \cos(j \theta) + Y_j \sin(j \theta)),$ and $X_j$ and $Y_j$ are iid random variables [on]{} $[-1/(2m),1/(2m)]$. The boundary integral equation $$\begin{aligned}
\pi u(P) - \int_{\partial \Omega} u(P) \frac{\partial}{\partial n_Q} \log |P - Q| d S_Q = - f(P), \quad P \in \partial \Omega,\end{aligned}$$ is solved by discretizing in $\theta$ with $n$ points and applying the trapezoidal rule with $n = 2m$ (see [@atkinson]). For the Bernoulli Dirichlet Ensemble (BDE), the $X_j$ and $Y_j$ are Bernoulli variables taking values $\pm 1/(2m)$ with equal probability. For the Uniform Dirichlet Ensemble (UDE), the $X_j$ and $Y_j$ are uniform variables on $[-1/(2m),1/(2m)]$.
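While assembling the discretized integral operator is more involved, the random boundary itself is simple to generate. The following sketch (our own helper names) samples UDE coefficients and verifies the deterministic bound $0 \le r(\theta) \le 2$ that follows from $\sum_{j=1}^m (|X_j| + |Y_j|) \le 1$:

```python
import math
import random

def ude_coeffs(m, rng):
    """UDE coefficients: X_j, Y_j iid uniform on [-1/(2m), 1/(2m)]."""
    b = 1.0 / (2 * m)
    return ([rng.uniform(-b, b) for _ in range(m)],
            [rng.uniform(-b, b) for _ in range(m)])

def radius(theta, Xc, Yc):
    """r(theta) = 1 + sum_{j=1}^m (X_j cos(j theta) + Y_j sin(j theta))."""
    return 1.0 + sum(x * math.cos((j + 1) * theta) + y * math.sin((j + 1) * theta)
                     for j, (x, y) in enumerate(zip(Xc, Yc)))
```

Since the total perturbation is bounded by $1$ in absolute value, the curve never degenerates and the region stays star-shaped about the origin.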
Acknowledgments {#acknowledgments .unnumbered}
===============
We acknowledge the partial support of the National Science Foundation through grants NSF-DMS-130318 (TT), NSF-DMS-1300965 (PD) and NSF-DMS-07-48482 (GM) and the Australian Research Council through the Discovery Early Career Research Award (SO). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding sources.
[^1]: Courant Institute, New York, NY, USA
[^2]: Brown University, Providence, RI, USA
[^3]: The University of Sydney, Sydney, NSW, Australia
[^4]: The Jacobi algorithm is well-suited to parallel computation, and also has other advantages over QR in the context of modern, large-scale computation (see *e.g.* [@Demmel1992]).
[^5]: This is sufficient to conclude that one element on the diagonal of the transformed matrix is within $\epsilon n^{-1/2}$ of an exact eigenvalue of the original matrix.
[^6]: Here $M = QR$ where $Q$ is unitary and again $R$ is upper triangular with $R_{ii} > 0$.
[^7]: The notation $\|\cdot\|_2$ is used to denote the standard $\ell^2$ norm on $n$-dimensional Euclidean space.
[^8]: Aside from round-off errors, see comments below Figure \[CG\]
[^9]: After mutation we have $\tilde {\mathcal P} = \mathcal P \cup \{\tilde P_1, \tilde P_2, \tilde P_3\}$ and after crossover, $\tilde {\mathcal P} = \mathcal P \cup \{\tilde P_4, \tilde P_5\}$
[^10]: In [@Bakhtin2012] the authors do not display the histogram for the fluctuations directly, but such information is easily inferred from their figures (see Figure 6 in [@Bakhtin2012]).
---
abstract: |
In this paper we give an explicit expression for the local time of the classical risk process and associate it with the density of an occupation measure. To do so, we approximate the local time by a suitable sequence of absolutely continuous random fields. Also, as an application, we analyze the mean of the Lebesgue measure of the set of times $s \in [0,T]$ such that $0\leq X_{s} \leq X_{s+\varepsilon}$, for some given $\varepsilon>0$.
address:
- 'Universidad Autónoma de Aguascalientes, Departamento de Matemáticas y Física, Av. Universidad 940, C.P. 20100 Aguascalientes, Ags., Mexico'
- 'Cinvestav-IPN, Departamento de Control Automático, Apartado Postal 14-740, 07000 México D.F., Mexico'
- 'Universidad Autónoma de Aguascalientes, Departamento de Matemáticas y Física, Av. Universidad 940, C.P. 20100 Aguascalientes, Ags., Mexico'
author:
- 'F. Cortes'
- 'J.A. León'
- 'J. Villa'
date: 'December 26, 1997'
title: 'The Local Time of the Classical Risk Process${^*}$'
---
[^1]
Introduction and main results
=============================
Henceforth, $X=\{X_{t},t\geq 0\}$ represents the classical risk process. More precisely, $$X_{t}=x_{0}+ct-\sum_{k=1}^{N_{t}}R_{k},\ \ t\geq 0,$$ where $x_{0}\geq 0$ is the initial capital, $c>0$ is the premium income per unit of time, $N=\left\{ N_{t},t\geq 0\right\}$ is a homogeneous Poisson process with rate $\alpha $ and $\left\{ R_{k},k\in\mathbb{N}\right\}$ is a sequence of i.i.d. non-negative random variables, which is independent of $N$. $N_{t}$ is interpreted as the number of claim arrivals up to time $t$ and $R_{k}$ as the amount of the $k$-th claim. We suppose that $R_{1}$ has finite mean and is an absolutely continuous random variable with respect to the Lebesgue measure.
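For concreteness, a sample path of $X$ is straightforward to simulate. The following is a minimal Python sketch; the function name `risk_jumps` and the exponential-claim example in the usage below are our own, not taken from the cited literature.

```python
import random

def risk_jumps(x0, c, alpha, claim, T, rng):
    """Jump times T_k <= T of the risk process and the values X_{T_k} after each claim."""
    jumps, t, S = [], 0.0, 0.0
    while True:
        t += rng.expovariate(alpha)      # Poisson inter-arrival times are Exp(alpha)
        if t > T:
            return jumps
        S += claim(rng)                  # accumulate R_1 + ... + R_k
        jumps.append((t, x0 + c * t - S))
```

Between jumps the path increases linearly with slope $c$, so the returned pairs $(T_k, X_{T_k})$ determine the whole path on $[0,T]$.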
The risk process has been studied extensively because it is often used to describe the capital of an insurance company. Indeed, among the properties of $X$ considered by several authors, we can mention that the local time of $X$ has been analyzed by Kolkovska et al. [@T-J-J], that the double Laplace transform of an occupation measure of $X$ has been obtained by Chiu and Yin [@C-Y], and that the probability of ruin has been one of the most important goals of risk theory (see, for example, Asmussen [@A], Grandell [@Grandell], Rolski et al. [@R-S-S-T] and the references therein to get an idea of the analysis carried out in this subject). In this paper we are interested in continuing the study of the local time $L$ of $X$ and of its applications as an occupation density, in order to improve the understanding of $X$.
Note that $X$ is a Lévy process, since $\sum_{k=1}^{N_{t}}R_{k}$ is a compound Poisson process. Thus, we can apply different criteria for general Lévy processes to guarantee the existence of $L$. For example, we can use the result of Hawkes [@Hawkes] when $R_{k}$ is exponentially distributed (see also [@Bertoin] and references therein for related works). However, in general we cannot obtain the explicit form of $L$ via these results. Moreover, in the literature there exist different characterizations of the local time (see Fitzsimmons and Port [@F-P] and the references therein). For instance, the local time has been introduced in [@F-P] (resp. [@T-J-J]) as an $L^{2}(\Omega )$-derivative (resp. derivative in probability) of some occupation measure. Nevertheless, in [@F-P; @T-J-J], the properties of the resulting local time are not analyzed by means of this approximation of $L$.
The purpose of this paper is to associate the local time of $X$ with the crossing process when $L$ is interpreted as a density of the occupation measure (see Theorem \[theop\].c) below). The relation between the local time and the crossing process was conjectured by Lévy [@L] for the Brownian motion case (i.e., when $X$ is a Wiener process). In this article we use the ideas of the proof of Tanaka’s formula for the Brownian motion (see Chung [@Chung], Chapter 7) to obtain a sequence of absolutely continuous random fields (in time) that converges with probability 1 (w.p.1 for short) to $$\begin{aligned}
L_{t}(x) &=&\frac{1}{c}(\frac{1}{2}1_{\{x\}}(X_{t})+1_{(x,\infty )}(X_{t})-%
\frac{1}{2}1_{\{x\}}(x_{0})-1_{(x,\infty )}(x_{0}) \notag \\
&&-\sum_{0<s\leq t}\{1_{(x,\infty )}(X_{s})-1_{(x,\infty )}(X_{s-})\}),\quad
t\ge 0 \text{\ and\ } x\in\mathbb{R}. \label{tlrp1}\end{aligned}$$ This approximation allows us to prove that this $L$ is the density of the occupation measure (see (\[ocumea\]) below) and, therefore, to deal with some problems related to occupation measures.
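Formula (\[tlrp1\]) is elementary to evaluate along a given trajectory. The following sketch (our own helper `local_time`, applied to a hand-built path with $x_{0}=0$, $c=1$ and a single claim of size $1$ at time $2$) illustrates the formula; on this path one can also check by hand the occupation-density property established below, e.g. $\int_{0}^{3}1_{(0.5,1.5)}(X_{s})\,ds=\int_{0.5}^{1.5}L_{3}(y)\,dy=3/2$.

```python
def local_time(t, x, x0, c, jumps):
    """Evaluate L_t(x) from (tlrp1); jumps = [(s, X_{s-}, X_s)] for claim times s <= t."""
    gt = lambda v: 1.0 if v > x else 0.0       # 1_{(x, infty)}
    eq = lambda v: 1.0 if v == x else 0.0      # 1_{{x}}
    Xt = x0 + c * t - sum(b - a for (_, b, a) in jumps)   # value of the path at time t
    val = 0.5 * eq(Xt) + gt(Xt) - 0.5 * eq(x0) - gt(x0)
    for (_, before, after) in jumps:
        val -= gt(after) - gt(before)          # jump terms of (tlrp1)
    return val / c

# Path: X_s = s on [0,2), jump 2 -> 1 at s = 2, then X_s = s - 1 on [2,3].
jumps = [(2.0, 2.0, 1.0)]
# Each level in (0,1) is crossed once, each level in (1,2) twice:
# local_time(3.0, 0.5, 0.0, 1.0, jumps) == 1.0
# local_time(3.0, 1.5, 0.0, 1.0, jumps) == 2.0
```

The counts $1$ and $2$ agree with Statement b) of Theorem \[theop\] below, where $cL_{t}(x)$ is identified (up to boundary terms) with the number of crossings $C_{t}(x)$.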
Notice that $L$ given by (\[tlrp1\]) is well-defined because $X$ is càdlàg and $$P(N_{t}<+\infty ,\, \text{for all\ } t>0)=1, \label{fsptp}$$ which imply that only a finite number of summands in (\[tlrp1\]) are different from zero.
In the following result we not only relate $L$ to the number of crossings with a certain level, but also to the occupation measure $$Y_{t}(A)=\int_{0}^{t}1_{A}(X_{s})ds,\quad t\geq 0\ \text{and}\ A\in \mathcal{%
B}(\mathbb{R}), \label{ocumea}$$where $\mathcal{B}(\mathbb{R})$ is the Borel $\sigma$-algebra of $\mathbb{R}
$. Toward this end, we need the following:
\[defcruza\] We say that there exists a *crossing of the level $x$ at time $s$* if, for every open interval $I$ such that $s\in I$, $x$ is an interior point of $\left\{ X_{t}:t\in I\right\}$. That is, $x\in \left( X\left( I\right) \right) ^{\circ }.$ Moreover, the number of crossings of the level $x$ in the interval $(0,t)$ is denoted by $C_{t}(x)$. $C$ is known as the *crossing process of* $X$.
Observe that if $x\in\mathbb{R}$ is a crossing point at time $s$, then $X$ is continuous at time $s$ and $X_s=x$.
Now we can state the main result of the paper.
\[theop\] Let $t>0$ and $x\in\mathbb{R}$. Then, the random field $L$ defined in (\[tlrp1\]) has the following properties:
- $L_{t}(x)\geq 0$ and $L_{\cdot }(x)$ is non-decreasing w.p.1.
- $L_{t}(x)=\frac{1}{c}\left( \frac{1}{2}1_{\{X_{t}\}}(x)-\frac{1}{2}%
1_{\left\{ X_{0}\right\} }(x)+C_{t}(x)\right) $ w.p.1.
- For every bounded and Borel measurable function $g:\mathbb{R}%
\mathbb{\rightarrow }\mathbb{R}$, we have $$\int_{0}^{t}g(X_{s})ds=\int_{\mathbb{R}}g(y)L_{t}(y)dy\quad w.p.1.
\label{dtl}$$
Note that Statement b) implies that the number of crossings $C$ of $X$ introduced in Definition \[defcruza\] satisfies $$C_{t}(x)=1_{(-\infty ,X_{t})}(x)-1_{(-\infty ,X_{0})}(x)+\sum_{0<s\leq
t}1_{(X_{s},X_{s-})}(x)\quad w.p.1,$$for $t>0$ and $x\in\mathbb{R}$. Also note that, from (\[dtl\]) and Statement a), the random field $L$ can be interpreted as an occupation density relative to the Lebesgue measure on $\mathbb{R}$. Hence, $L$ in (\[tlrp1\]) is called *the local time* and the expression$$\begin{aligned}
L_{t}(x) &=&\frac{1}{c}(\frac{1}{2}1_{\{X_{t}\}}(x)-\frac{1}{2}1_{\left\{
X_{0}\right\} }(x)+1_{(-\infty ,X_{t})}(x)-1_{(-\infty ,X_{0})}(x) \\
&&+\int_{(0,t]}f(x,X_{s})dX_{s}),\end{aligned}$$is known as *Tanaka-like formula for* $L_{t}(x)$. Here $$f(x,X_{s})=\left\{
\begin{tabular}{ll}
$\frac{1_{(X_{s},X_{s-})}(x)}{\Delta X_{s}},$ & $\Delta X_{s}\neq 0,$ \\
$0,$ & $\Delta X_{s}=0.$%
\end{tabular}%
\right.$$
On the other hand, relation (\[dtl\]) can be extended to other occupation-type results. Indeed, as an example, we can state the following, which provides an average description of the pathwise behavior of $X$.
\[teomed2\]Let $g:\mathbb{R}\times \mathbb{R}\longrightarrow \mathbb{R}$ be a bounded and Borel measurable function. Then for each $\varepsilon >0,$$$E[\int_{0}^{t}g(X_{s},X_{s+\varepsilon }-X_{s})ds]=\int_{\mathbb{R}%
}E[g(x,X_{\varepsilon }-x_{0})]E[L_{t}(x)]dx. \label{cm2}$$
An application of this theorem is to answer the question: *On average, how much time does the capital of an insurance company spend being non-negative and not larger than its value twelve months later?*
The paper is organized as follows. In Section \[sec:2\] we provide the tool needed to prove Theorem \[theop\]. In particular, we approximate the local time by a sequence of suitable random fields. The proof of Theorem \[theop\] is given in Section \[sec:3\]. Finally, in Section \[sec:4\], we prove Theorem \[teomed2\] and answer the above question in the case that the claim $R_{1}$ has an exponential distribution.
Main tool {#sec:2}
=========
In this section we provide the needed tool to show that Theorem \[theop\] holds. In particular, we construct the announced sequence converging to the local time $L$.
In the remainder of this paper, $T_i$ denotes the $i$-th jump time of $N$, with $T_0=0$. It is known that $T_{i}$ has a gamma distribution with parameters $(i,\alpha )$, $i\geq 1.$
We will use the following technical result in the proofs of this section.
\[cppax\] Let $x\in \mathbb{R}$, $s>0$, $\Omega _{1}(s)=\left\{ \Delta
X_{s}\neq 0\right\}$, and $$\begin{aligned}
\Omega _{2}&=&\{X_{s-}=x,\ \Delta X_{s}\neq 0\ \text{for some }s>0\} \\
&&\cup \{X_{s}=x,\ \Delta X_{s}\neq 0\ \text{for some }s>0\}.\end{aligned}$$ Then, $P(\Omega _{1}(s))=0$ and $P(\Omega _{2})=0$.
By the law of total probability$$P(\Omega _{1}(s))=\sum_{k=0}^{\infty }P(N_{s}=k)P(\Omega _{1}(s)|N_{s}=k).$$Notice that$$P(\Omega _{1}(s)|N_{s}=k)=P(\Delta X_{s}\neq 0|N_{s}=k)=P(T_{k}=s)=0.$$
On the other hand, let $\nu \in \mathbb{N}$ and define
$$\begin{aligned}
\tilde{\Omega}_{\nu }&=&\{X_{s-}=x,\ \Delta X_{s}\neq 0\ \text{for some }%
0<s<\nu \} \\
& &\cup \{X_{s}=x,\ \Delta X_{s}\neq 0\ \text{for some }0<s<\nu \}.\end{aligned}$$
For $k=0,$ $$P(\tilde{\Omega}_{\nu }|N_{\nu }=0)=P(\emptyset |N_{\nu }=0)=0,$$and for $k\geq 1$,$$\begin{aligned}
P(\tilde{\Omega}_{\nu }|N_{\nu }=k) &\leq &P(X_{T_{j}-}=x\text{ for some }%
j\in \left\{ 1,...,k\right\} |N_{\nu }=k) \\
&&+P(X_{T_{j}}=x\text{ for some }j\in \left\{ 1,...,k\right\} |N_{\nu }=k) \\
&\leq &\sum_{j=1}^{k}(P(X_{T_{j}-}=x|N_{\nu }=k)+P(X_{T_{j}}=x|N_{\nu }=k)).\end{aligned}$$For $j=1$ we get $$\begin{aligned}
P(X_{T_{1}-} =x|N_{\nu }=k)&=&P(T_{1}=(x-x_{0})c^{-1}|N_{\nu }=k)=0, \\
P(X_{T_{1}} =x|N_{\nu }=k)&=&P(cT_{1}-R_{1}=x-x_{0}|N_{\nu }=k)=0,\end{aligned}$$because $T_{1}$ and $R_{1}$ are independent and absolutely continuous random variables. Write $P^{\ast }(\cdot )=P(\cdot |N_{\nu }=k)$. When $j>1$ we have $$\begin{aligned}
P^{\ast }(X_{T_{j}-}=x) &=&\int_{\mathbb{R}}P^{\ast
}(X_{T_{j}-}=x|X_{T_{j-1}-}=y)P^{\ast }(X_{T_{j-1}-}\in dy) \\
&=&\int_{\mathbb{R}}P^{\ast }(R_{j-1}=y-(x-(T_{j}-T_{j-1})c))P^{\ast
}(X_{T_{j-1}-}\in dy) \\
&=&0\end{aligned}$$and$$\begin{aligned}
P^{\ast }(X_{T_{j}}=x) &=&\int_{\mathbb{R}}P^{\ast
}(X_{T_{j}}=x|X_{T_{j-1}}=y)P^{\ast }(X_{T_{j-1}}\in dy) \\
&=&\int_{\mathbb{R}}P^{\ast }((T_{j}-T_{j-1})c-R_{j}=x-y)P^{\ast
}(X_{T_{j-1}}\in dy) \\
&=&0.\end{aligned}$$Here we have used the fact that $R_{j-1}$ has an absolutely continuous distribution. Finally notice that $P(\Omega _{2})\leq \sum {}_{\nu
=1}^{\infty }P(\tilde{\Omega}_{\nu })=0.$
An approximating sequence of the local time
-------------------------------------------
Now we approximate the local time $L$ by a sequence of suitable random fields, which allows us to see that Theorem \[theop\].a) is true. Toward this end, let $x\in \mathbb{R}$ be arbitrary and fixed. For each $n\in \mathbb{N%
}$ define $\varphi _{x,n}:\mathbb{R}\rightarrow \mathbb{R}$ by $$\varphi _{x,n}(y)=\left\{
\begin{tabular}{ll}
$0,$ & $y<x-1/n,$ \\
$(n(y-x)+1)/2,$ & $x-1/n\leq y\leq x+1/n,$ \\
$1,$ & $x+1/n<y\text{.}$%
\end{tabular}%
\right.$$
Notice that $$\begin{aligned}
\lim_{n\rightarrow \infty }\varphi _{x,n}(y) &=&\left\{
\begin{tabular}{ll}
$0,$ & $y<x,$ \\
$1/2,$ & $y=x,$ \\
$1,$ & $y>x,$%
\end{tabular}%
\right. \notag \\
&=&\frac{1}{2}1_{\{x\}}(y)+1_{(x,+\infty )}(y), \label{cfn}\end{aligned}$$and $$\varphi _{x,n}^{\prime }(y)=\left\{
\begin{tabular}{ll}
$0,$ & $y<x-1/n,$ \\
$n/2,$ & $x-1/n<y<x+1/n,$ \\
$0,$ & $x+1/n<y.$%
\end{tabular}%
\right.$$For each $n\in \mathbb{N}$, we define the random field$$\begin{aligned}
L_{t}^{n}(x) &=&\frac{1}{c}\left( \varphi _{x,n}(X_{t})-\varphi
_{x,n}(X_{0})\right. \\
&&-\sum_{s\leq t}\{\varphi _{x,n}(X_{s})-\varphi _{x,n}(X_{s-})-\varphi
_{x,n}^{\prime }(X_{s-})\Delta X_{s}\}),\end{aligned}$$where $\Delta X_{t}=X_{t}-X_{t-}$. As in (\[tlrp1\]), we have by ([fsptp]{}) that $L^{n}$ is well-defined.
Before proving that $\{L^n, n\in\mathbb{N}\}$ is the sequence that we are looking for, we need to approximate the function $\varphi_{x,n}$ by a sequence of smooth functions. To do so, set $$\begin{aligned}
\Omega ^{\prime } &=&(\left\{ X_{s-}=x\pm 1/m\neq X_{s},\text{ for some }%
s>0,\ m\in \mathbb{N}\right\} \\
&&\cup \{N_{s}<+\infty, \ \text{ for all }\ s>0\}^{c}\cup \Omega _{2})^{c}.\end{aligned}$$Since, by Lemma \[cppax\], $$\begin{aligned}
P(X_{s-} &=&x\pm 1/m\neq X_{s},\ \text{for some }s>0,\ m\in \mathbb{N}) \\
&\leq &\sum_{m=1}^{\infty }P(X_{s-}=x\pm 1/m\neq X_{s},\ \text{for some }%
s>0) =0,\end{aligned}$$we have $P(\Omega ^{\prime })=1.$
Let $\psi :\mathbb{R}\rightarrow \mathbb{R}$ be a symmetric function in $%
\mathcal{C}^{\infty }(\mathbb{R})$ with compact support on $[-1,1]$ and $$\int_{-1}^{1}\psi (y)dy=1.$$Define the sequence $\left( \psi _{m}\right) $ by $$\psi _{m}(y)=m\psi (my),\quad y\in \mathbb{R},$$and $$\varphi _{x,n}^{m}(y)=(\psi _{m}\ast \varphi _{x,n})(y)=\int_{\mathbb{R}%
}\varphi _{x,n}(y-z)\psi _{m}(z)dz.$$Since $\psi _{m}\in \mathcal{C}^{\infty }(\mathbb{R})$, then $\varphi
_{x,n}^{m}\in \mathcal{C}^{\infty }(\mathbb{R})$ and moreover $$\begin{aligned}
&&\left( \varphi _{x,n}^{m}\right) _{m}\text{ converges uniformly on
compacts to }\varphi _{x,n}, \label{c1} \\
&&((\varphi _{x,n}^{m})^{\prime })_{m}\text{ converges pointwise, except on }%
x\pm 1/n\text{, to }\left( \varphi _{x,n}\right) ^{\prime }. \label{c2}\end{aligned}$$
Let us use the notation $$L_{t}^{n,m}(x)=\frac{1}{c}\int_{(0,t]}(\varphi _{x,n}^{m})^{\prime
}(X_{s-})dX_{s}.$$Then, by the change of variable theorem for the Lebesgue-Stieltjes integral, we have $$\begin{aligned}
cL_{t}^{n,m}(x) &=&\varphi _{x,n}^{m}(X_{t})-\varphi _{x,n}^{m}(x_{0})
\notag \\
&&-\sum_{0<s\leq t}\left\{ \varphi _{x,n}^{m}(X_{s})-\varphi
_{x,n}^{m}(X_{s-})-(\varphi _{x,n}^{m})^{\prime }(X_{s-})\Delta
X_{s}\right\} . \label{appfcv}\end{aligned}$$
Now we can give the relation between $L^{n}$ and $\{L^{n,m},m\in \mathbb{N}\}
$.
\[capptlapx\] Let $n\in \mathbb{N}$. Then, $$\lim_{m\rightarrow \infty }L_{t}^{n,m}(x)=L_{t}^{n}(x),\quad \text{for all}\
t>0,\ \mathit{w.p.1}.$$
For $\omega \in \Omega ^{\prime }$ we have that (\[fsptp\]) and (\[c1\]) imply $$\begin{gathered}
\lim_{m\rightarrow \infty }(\varphi _{x,n}^{m}(X_{t})-\varphi
_{x,n}^{m}(x_{0})-\sum_{0<s\leq t}\left\{ \varphi _{x,n}^{m}(X_{s})-\varphi
_{x,n}^{m}(X_{s-})\right\} ) \\
=\varphi _{x,n}(X_{t})-\varphi _{x,n}(x_{0})-\sum_{0<s\leq t}\left\{ \varphi
_{x,n}(X_{s})-\varphi _{x,n}(X_{s-})\right\} .\end{gathered}$$Now we analyze the remaining term in the definition of $L_{t}^{n,m}(x).$ Notice that for each $\omega \in \Omega ^{\prime }$ we have $$X_{s-}(\omega )\neq x\pm 1/k,\quad k\in \mathbb{N}.$$Therefore, from $(\ref{c2}),$$$\lim_{m\rightarrow \infty }\sum_{0<s\leq t}(\varphi _{x,n}^{m})^{\prime
}(X_{s-})\Delta X_{s}=\sum_{0<s\leq t}(\varphi _{x,n})^{\prime
}(X_{s-})\Delta X_{s}.$$From this and (\[appfcv\]) the result follows.
Now we are ready to state the properties of $\{L^n,n\in\mathbb{N}\}$ that we use in Section \[sec:3\].
\[aptlft\] The sequence $\{L^{n},n\in \mathbb{N}\}$ satisfies:
- $L_{t}^{n}(x)=\int_{0}^{t}\varphi _{x,n}^{\prime }(X_{s-})ds+\frac{%
1}{c}\sum_{0<s\leq t}\varphi _{x,n}^{\prime }(X_{s-})\Delta X_{s}$, for all $%
n\in \mathbb{N}$ and $t>0$, w.p.1.
- $\lim_{n\rightarrow \infty }L_{t}^{n}(x)=L_{t}(x)$, for all $t>0$, w.p.1.
We first deal with Statement a). Fix $t\geq 0$ and let $\omega \in \Omega
^{\prime }\cap \Omega _{1}(t)^{c}$. Then there is $k\in \mathbb{N}$ such that $N_{t}(\omega )=k$. Thus $$\begin{aligned}
& \int_{0}^{t}(\varphi _{x,n}^{m})^{\prime }(X_{s-})dX_{s} \\
& =\sum_{i=1}^{k}\int_{(T_{i-1},T_{i}]}(\varphi _{x,n}^{m})^{\prime
}(X_{s-})dX_{s}+\int_{(T_{k},t]}(\varphi _{x,n}^{m})^{\prime }(X_{s-})dX_{s}
\\
& =\sum_{i=1}^{k}\int_{(T_{i-1},T_{i}]}\left( \varphi _{x,n}^{m}\right)
^{\prime }(X_{s-})cds+\int_{[T_{k},t)}\left( \varphi _{x,n}^{m}\right)
^{\prime }(X_{s})cds \\
& \ \ \ +\sum_{i=1}^{k}\left( \varphi _{x,n}^{m}\right) ^{\prime
}(X_{T_{i}-})(X_{T_{i}}-X_{T_{i}-}) \\
& =c\int_{0}^{t}(\varphi _{x,n}^{m})^{\prime
}(X_{s-})ds+\sum_{i=1}^{k}\left( \varphi _{x,n}^{m}\right) ^{\prime
}(X_{T_{i}-})(X_{T_{i}}-X_{T_{i}-}).\end{aligned}$$Notice that on each $(T_{i-1},T_{i}]$ and $(T_{k},t]$, there is at most one $%
s$ such that $X_{s-}=x\pm 1/n$. Hence, by $(\ref{c2})$, we have $$\lim_{m\rightarrow \infty }(\varphi _{x,n}^{m})^{\prime }(X_{s-})=(\varphi
_{x,n})^{\prime }(X_{s-}),\quad \lambda \text{-a.e.}$$Therefore, by the dominated convergence theorem, we deduce $$\begin{aligned}
\lim_{m\rightarrow \infty }\int_{0}^{t}(\varphi _{x,n}^{m})^{\prime
}(X_{s-})dX_{s} &=&c\int_{0}^{t}(\varphi _{x,n})^{\prime }(X_{s-})ds \\
&&+\sum_{i=1}^{k}\left( \varphi _{x,n}\right) ^{\prime }(X_{T_{i}-})\Delta
X_{T_{i}},\ \ a.s.\end{aligned}$$Consequently, since $L_{t}^{n}(x)$ and the right-hand side of the last equality are càdlàg processes, Statement a) holds.
Now we consider Statement b) in order to finish the proof of the proposition. Let $\omega \in \Omega ^{\prime }$ and $t\geq 0$. Then there exists $k\in \mathbb{N}$ such that $N_{t}(\omega )=k$. Hence$$\begin{gathered}
\lim_{n\rightarrow \infty }(\varphi _{x,n}(X_{t}(\omega ))-\varphi
_{x,n}(x_{0})-\sum_{i=1}^{k}\left\{ \varphi _{x,n}(X_{T_{i}}(\omega
))-\varphi _{x,n}(X_{T_{i}-}(\omega ))\right\} ) \\
=\frac{1}{2}1_{\left\{ x\right\} }(X_{t}(\omega ))+1_{(x,\infty
)}(X_{t}(\omega ))-\frac{1}{2}1_{\left\{ x\right\} }(x_{0})-1_{\left(
x,\infty \right) }\left( x_{0}\right) \\
-\sum_{i=1}^{k}\{\frac{1}{2}1_{\left\{ x\right\} }(X_{T_{i}}(\omega
))+1_{\left( x,\infty \right) }\left( X_{T_{i}}\left( \omega \right) \right)
\\
-\frac{1}{2}1_{\left\{ x\right\} }\left( X_{T_{i}-}\left( \omega \right)
\right) -1_{\left( x,\infty \right) }\left( X_{T_{i}-}\left( \omega \right)
\right) \}.\end{gathered}$$On the other hand $$\omega \in \left\{ X_{s-}=x\neq X_{s},\text{ for some }s>0\right\} ^{c}$$implies $$X_{T_{i}-}\left( \omega \right) \neq x,\quad i=0,1,...,k.$$Moreover, there is only a finite number of indices $i$ such that $$X_{T_{i}}\left( \omega \right) \leq x<X_{T_{i}-}\left( \omega \right) .$$Hence, for large enough $n$ we have$$X_{T_{i}-}\left( \omega \right) \notin \left( x-1/n,x+1/n\right) ,\quad
i=0,1,...,k.$$Therefore $$\begin{aligned}
& \lim_{n\rightarrow \infty }\sum_{0<s\leq t}(\varphi _{x,n})^{\prime
}(X_{s-}(\omega ))\Delta X_{s}(\omega ) \\
& =\lim_{n\rightarrow \infty }\sum_{i=1}^{k}(\varphi _{x,n})^{\prime
}(X_{T_{i}-}\left( \omega \right) )\Delta X_{T_{i}}\left( \omega \right) \\
& =\lim_{n\rightarrow \infty }\sum_{i=1}^{k}\frac{n}{2}1_{\left(
x-1/n,x+1/n\right) }(X_{T_{i}-}\left( \omega \right) )\Delta X_{T_{i}}\left(
\omega \right) =0.\end{aligned}$$Hence the proof is complete.
Proof of Theorem \[theop\] {#sec:3}
==========================
The purpose of this section is to give the proof of Theorem \[theop\]. This proof will be divided into three steps. It is worth mentioning that the proof of Statement a) gives us a sequence of absolutely continuous random fields that converges to $L$, namely $\{\int_{0}^{t}\varphi _{x,n}^{\prime }\left( X_{s-}\right) ds,\ n\in \mathbb{N}\}.$
Proof of part a) of Theorem \[theop\]
-------------------------------------
From parts a) and b) of Proposition \[aptlft\], and since the jump term in a) vanishes as $n\rightarrow \infty $ (as shown in the proof of Statement b)), we have$$L_{t}(x)=\lim_{n\rightarrow \infty }L_{t}^{n}\left( x\right)
=\lim_{n\rightarrow \infty }\int_{0}^{t}\varphi _{x,n}^{\prime }\left(
X_{s-}\right) ds,$$which yields that $L_{\cdot }\left( x\right) $, being the limit of non-negative and non-decreasing processes, is itself non-negative and non-decreasing.
Proof of part b) of Theorem \[theop\]
-------------------------------------
Since $X_{s}\leq X_{s-}$ we have $$\begin{aligned}
L_{t}\left( x\right) &=&\frac{1}{c}(\frac{1}{2}1_{\left\{ X_{t}\right\}
}\left( x\right) -\frac{1}{2}1_{\left\{ x_{0}\right\} }\left( x\right)
+1_{(-\infty ,X_{t})}\left( x\right) -1_{(-\infty ,x_{0})}\left( x\right)
\notag \\
&&+\sum_{0<s\leq t}1_{\left( X_{s},X_{s-}\right) }(x)). \label{tlpcr2}\end{aligned}$$ Suppose, for example, that $x_{0}>x$, $X_{t}<x$ and $C_{t}\left( x\right) =n.$ Let $c_{1},...,c_{n}$ be the crossing times of the level $x$. Then, by hypothesis, there exist jump times $s_{1}\in \left( 0,c_{1}\right) $,...,$%
s_{n+1}\in \left( c_{n},t\right)$ such that $x\in(X_{s_i},X_{s_i-})$. Hence $$\begin{aligned}
1_{\left( -\infty ,X_{t}\right) }\left( x\right) -1_{(-\infty
,x_{0}]}(x)+\sum_{0<s\leq t}1_{\left( X_{s},X_{s-}\right) }(x) &=&0-1+\left(
n+1\right) \\
&=&C_{t}(x).\end{aligned}$$
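The crossing-count representation and the window approximation of part a) can be checked on a concrete path. The following sketch uses a toy path of our own choosing (the jump times, jump sizes and level $x$ are illustrative, not from the text): a piecewise-linear path rising at speed $c$ with downward jumps, for which both the crossing formula (tlpcr2) and $(n/2)\cdot\mathrm{Leb}\{s\leq t:|X_s-x|<1/n\}$ can be evaluated exactly.

```python
# Pathwise check of the local-time formulas on a toy risk path
# X_s = x_0 + c*s - (sum of down-jumps up to s).

def occupation_in_window(segments, lo, hi, c):
    """Exact time a piecewise-linear path of slope c spends in (lo, hi).
    segments = list of (start_level, end_level) of each rising piece."""
    total = 0.0
    for a, b in segments:                 # path rises from a to b at speed c
        overlap = min(b, hi) - max(a, lo)
        if overlap > 0:
            total += overlap / c
    return total

# toy path: x_0 = 0, c = 1, horizon t = 10,
# down-jumps of sizes 3, 1, 4 at times 2, 5, 8
c, x0, t = 1.0, 0.0, 10.0
jump_times, jump_sizes = [2.0, 5.0, 8.0], [3.0, 1.0, 4.0]

segments, pre_post = [], []               # rising pieces and (X_{T-}, X_T)
level, prev_time = x0, 0.0
for T, R in zip(jump_times, jump_sizes):
    top = level + c * (T - prev_time)
    segments.append((level, top))
    pre_post.append((top, top - R))
    level, prev_time = top - R, T
segments.append((level, level + c * (t - prev_time)))
X_t = segments[-1][1]

x = 0.5                                   # a level avoiding all kinks
# crossing formula (tlpcr2), atoms vanish since x is not hit at a kink:
#   c*L_t(x) = 1_{x < X_t} - 1_{x < x_0} + #{down-jumps straddling x}
L_formula = ((1 if x < X_t else 0) - (1 if x < x0 else 0)
             + sum(1 for hi_, lo_ in pre_post if lo_ < x < hi_)) / c

# window approximation of part a): L_t(x) ~ (n/2)*Leb{s<=t: |X_s-x| < 1/n}
n = 1000
L_window = (n / 2.0) * occupation_in_window(segments, x - 1.0 / n,
                                            x + 1.0 / n, c)
```

For this path the level $x=0.5$ is up-crossed three times, so both quantities equal $3$; since the drift traverses the window $(x-1/n,x+1/n)$ in time exactly $2/(nc)$, the two values agree to machine precision already for finite $n$.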
Proof of part c) of Theorem \[theop\]
-------------------------------------
For each $a,b\in \mathbb{R}$ define$$\begin{aligned}
1_{\langle \langle a,b\rangle \rangle } &=&\left\{
\begin{tabular}{ll}
$1_{(a,b]},$ & if $a\leq b,$ \\
$-1_{(b,a]},$ & if $b<a,$%
\end{tabular}%
\right. \\
&=&1_{(-\infty ,b]}-1_{(-\infty ,a]}.\end{aligned}$$From this definition it follows immediately that$$1_{\langle \langle a,b\rangle \rangle }=1_{(a,c]}-1_{(b,c]},\quad a,b\leq c.
\label{ppnf}$$Using induction on $n$, we can prove that, for $a_{1},...,a_{n}$ real numbers, $$1_{\langle \langle a_{1},a_{2}\rangle \rangle }+\cdots +1_{\langle \langle
a_{n-1},a_{n}\rangle \rangle }=1_{\langle \langle a_{1},a_{n}\rangle \rangle
}. \label{spnf}$$
On the other hand, for almost all $\omega \in \Omega ^{\prime }$, there exists $k\in \mathbb{N}\cup \{0\}$ such that $\omega \in \{N_{t}=k\}$. Therefore, $$\begin{aligned}
\int_{0}^{t}g(X_{s})ds
&=&\sum_{i=1}^{k}\int_{(T_{i-1},T_{i}]}g(X_{s})ds+\int_{(T_{k},t]}g(X_{s})ds
\\
&=&\sum_{i=1}^{k}\int_{(T_{i-1},T_{i}]}g(X_{T_{i-1}}+c(s-T_{i-1}))ds \\
&&+\int_{(T_{k},t]}g(X_{T_{k}}+c(s-T_{k}))ds.\end{aligned}$$
Taking $x=X_{T_{i-1}}+(s-T_{i-1})c,$ we can write $$\begin{aligned}
\int_{0}^{t}g(X_{s})ds
&=&\sum_{i=1}^{k}\int_{(X_{T_{i-1}},X_{T_{i-1}}+c(T_{i}-T_{i-1})]}g(x)\frac{%
dx}{c} \\
&&+\int_{(X_{T_{k}},X_{T_{k}}+c(t-T_{k})]}g(x)\frac{dx}{c} \\
&=&\sum_{i=1}^{k}\int_{(X_{T_{i-1}},X_{T_{i}-}]}g(x)\frac{dx}{c}%
+\int_{(X_{T_{k}},X_{t}]}g(x)\frac{dx}{c} \\
&=&\frac{1}{c}\int_{\mathbb{R}}
g(x)\sum_{i=1}^{k}1_{(X_{T_{i-1}},X_{T_{i}-}]}(x)dx \\
&&+\frac{1}{c}\int_{\mathbb{R}} g(x)1_{(X_{T_{k}},X_{t}]}(x)dx.\end{aligned}$$From (\[ppnf\]) and (\[spnf\]) we have$$\begin{aligned}
\int_{0}^{t}g(X_{s})ds &=&\frac{1}{c}\int_{\mathbb{R}}g(x)%
\sum_{i=1}^{k}1_{(X_{T_{i}},X_{T_{i}-}]}(x)dx \\
&&+\frac{1}{c}\int_{\mathbb{R}}g(x)\sum_{i=1}^{k}1_{\langle \langle
X_{T_{i-1}},X_{T_{i}}\rangle \rangle }(x)dx \\
&&+\frac{1}{c}\int_{\mathbb{R}}g(x)1_{(X_{T_{k}},X_{t}]}(x)dx \\
&=&\frac{1}{c}\int_{\mathbb{R}}g(x)%
\sum_{i=1}^{k}1_{(X_{T_{i}},X_{T_{i}-}]}(x)dx \\
&&+\frac{1}{c}\int_{\mathbb{R}}g(x)1_{\langle \langle
X_{T_{0}},X_{T_{k}}\rangle \rangle }(x)dx \\
&&+\frac{1}{c}\int_{\mathbb{R}}g(x)1_{(X_{T_{k}},X_{t}]}(x)dx \\
&=&\frac{1}{c}\int_{\mathbb{R}}(1_{\langle \langle X_{0},X_{t}\rangle
\rangle }+\sum_{i=1}^{k}1_{(X_{T_{i}},X_{T_{i}-}]})(x)g(x)dx \\
&=&\int_{\mathbb{R}}\frac{1}{c}(1_{(-\infty ,X_{t}]}-1_{(-\infty
,X_{0}]}+\sum_{i=1}^{k}1_{(X_{T_{i}},X_{T_{i}-}]})(x)g(x)dx.\end{aligned}$$Thus, the proof is complete by (\[tlpcr2\]).
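The occupation-time identity just proved, $\int_0^t g(X_s)\,ds=\int_{\mathbb{R}}g(x)L_t(x)\,dx$, can be verified exactly on a toy path: with $g(u)=e^{-u^2}$ both sides reduce to error-function integrals (the path, like the choice of $g$, is our own illustration). Since $L_t(\cdot)$ is piecewise constant between the finitely many critical levels of the path, the right-hand side is computed without discretization error.

```python
import math

def int_g(a, b):
    """Exact integral of g(u) = exp(-u^2) over [a, b]."""
    return math.sqrt(math.pi) / 2.0 * (math.erf(b) - math.erf(a))

# toy path: x_0 = 0, c = 1, t = 10, down-jumps 3, 1, 4 at times 2, 5, 8
c, x0, t = 1.0, 0.0, 10.0
jumps = [(2.0, 3.0), (5.0, 1.0), (8.0, 4.0)]
segments, pre_post, level, prev = [], [], x0, 0.0
for T, R in jumps:
    top = level + c * (T - prev)
    segments.append((level, top))
    pre_post.append((top, top - R))       # (X_{T-}, X_T)
    level, prev = top - R, T
segments.append((level, level + c * (t - prev)))
X_t = segments[-1][1]

# left-hand side: int_0^t g(X_s) ds, segment by segment (jumps take no time;
# along a rising segment ds = du / c)
lhs = sum(int_g(a, b) for a, b in segments) / c

def L(x):
    """Local time at a non-critical level x via the crossing formula."""
    return ((1 if x < X_t else 0) - (1 if x < x0 else 0)
            + sum(1 for hi, lo in pre_post if lo < x < hi)) / c

# right-hand side: int g(x) L_t(x) dx, exact because L is constant
# between the critical levels of the path
crit = sorted({x0, X_t} | {u for pq in pre_post for u in pq})
pieces = [(-50.0, crit[0])] + list(zip(crit, crit[1:])) + [(crit[-1], 50.0)]
rhs = sum(L((a + b) / 2.0) * int_g(a, b) for a, b in pieces)
```

The two sides agree to machine precision, as the theorem asserts for every bounded measurable $g$.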
An occupation measure result {#sec:4}
============================
By $F(\cdot ,t)$ we denote the distribution of $%
\sum_{k=1}^{N_{t}}R_{k}1_{[N_{t}>0]}$, and by $f(\cdot ,t)$ the density of $%
F(\cdot ,t),$ when it exists. In order to use Theorem \[teomed2\] we need an expression for $E[L_{t}(x)]$, which is given in [@T-J-J] (Proposition 1). Namely, if $f\in L^{1}(\mathbb{R}\times \lbrack 0,t])$, then$$E[L_{t}(x)]=\int_{[((x-x_{0})/c)\vee 0]\wedge t}^{t}f(x_{0}+cs-x,s)ds.
\label{exprelt}$$
Example
-------
Consider the measurable set$$\Delta =[0,\infty )\times \lbrack 0,\infty )\in \mathcal{B}(\mathbb{R}^2).$$Then, from Theorem \[teomed2\] and (\[exprelt\]), we get$$\begin{aligned}
& E[\int_{0}^{t}1_{\Delta }(X_{s},X_{s+\varepsilon }-X_{s})ds] \\
& =\int_{\mathbb{R}}E[1_{\Delta }(x,X_{\varepsilon
}-x_{0})]\int_{[((x-x_{0})/c)\vee 0]\wedge t}^{t}f(x_{0}+cs-x,s)dsdx \\
& =\int_{0}^{\infty }P(x_{0}\leq X_{\varepsilon })\int_{[((x-x_{0})/c)\vee
0]\wedge t}^{t}f(x_{0}+cs-x,s)dsdx \\
& =\int_{0}^{\infty }P(\sum_{k=1}^{N_{\varepsilon }}R_{k}\leq c\varepsilon
)\int_{[((x-x_{0})/c)\vee 0]\wedge t}^{t}f(x_{0}+cs-x,s)dsdx \\
& =\int_{0}^{\infty }F(c\varepsilon ,\varepsilon )\int_{[((x-x_{0})/c)\vee
0]\wedge t}^{t}f(x_{0}+cs-x,s)dsdx.\end{aligned}$$
Now assume that $R_{1}$ has an exponential distribution with parameter $\beta $; then the density of $\sum_{k=1}^{N_{t}}R_{k}1_{[N_{t}>0]}$ is $$f(x,t)=e^{-\alpha t-\beta x}\left( \sum_{n=1}^{\infty }\frac{(\beta \alpha
t)^{n}x^{n-1}}{n!(n-1)!}\right) 1_{(0,\infty )}(x),\quad t>0.$$Hence, in this case,$$\begin{aligned}
& E[\int_{0}^{t}1_{\Delta }(X_{s},X_{s+\varepsilon }-X_{s})ds] \\
& =\int_{0}^{\infty }\left[ \int_{0}^{c\varepsilon }e^{-\alpha \varepsilon
-\beta y}\sum_{n=1}^{\infty }\frac{(\beta \alpha \varepsilon )^{n}y^{n-1}}{%
n!(n-1)!}dy+e^{-\alpha \varepsilon }\right] \\
& \times \int_{\lbrack ((x-x_{0})/c)\vee 0]\wedge t}^{t}e^{-\alpha
s}e^{-\beta (x_{0}+cs-x)}\sum_{k=1}^{\infty }\frac{(\beta \alpha
s)^{k}(x_{0}+cs-x)^{k-1}}{k!(k-1)!}dsdx \\
& =\int_{0}^{x_{0}}\left[ \int_{0}^{c\varepsilon }e^{-\alpha \varepsilon
-\beta y}\sum_{n=1}^{\infty }\frac{(\beta \alpha \varepsilon )^{n}y^{n-1}}{%
n!(n-1)!}dy+e^{-\alpha \varepsilon }\right] \\
& \times \int_{0}^{t}e^{-\alpha s}e^{-\beta (x_{0}+cs-x)}\sum_{k=1}^{\infty }%
\frac{(\beta \alpha s)^{k}(x_{0}+cs-x)^{k-1}}{k!(k-1)!}dsdx \\
& +\int_{x_{0}}^{x_{0}+ct}\left[ \int_{0}^{c\varepsilon }e^{-\alpha
\varepsilon -\beta y}\sum_{n=1}^{\infty }\frac{(\beta \alpha \varepsilon
)^{n}y^{n-1}}{n!(n-1)!}dy+e^{-\alpha \varepsilon }\right] \\
& \times \int_{(x-x_{0})/c}^{t}e^{-\alpha s}e^{-\beta
(x_{0}+cs-x)}\sum_{k=1}^{\infty }\frac{(\beta \alpha s)^{k}(x_{0}+cs-x)^{k-1}%
}{k!(k-1)!}dsdx.\end{aligned}$$For example, under the conditions$$x_{0}=4,\ \alpha =1,\ \beta =1,\ c=1.1,\ t=1,$$with $\varepsilon =12$, and truncating each series after five terms, we get$$\begin{aligned}
& E[\int_{0}^{1}1_{\Delta }(X_{s},X_{s+12}-X_{s})ds] \\
& \approx \int_{0}^{4}\left[ \int_{0}^{13.2}e^{-12-y}\sum_{n=1}^{5}\frac{%
(12)^{n}y^{n-1}}{n!(n-1)!}dy+e^{-12}\right] \\
& \times \int_{0}^{1}e^{-(2.1)s-4+x}\sum_{k=1}^{5}\frac{%
s^{k}(4+(1.1)s-x)^{k-1}}{k!(k-1)!}dsdx \\
& +\int_{4}^{5.1}\left[ \int_{0}^{13.2}e^{-12-y}\sum_{n=1}^{5}\frac{%
(12)^{n}y^{n-1}}{n!(n-1)!}dy+e^{-12}\right] \\
& \times \int_{(x-4)/(1.1)}^{1}e^{-(2.1)s-4+x}\sum_{k=1}^{5}\frac{%
s^{k}(4+(1.1)s-x)^{k-1}}{k!(k-1)!}dsdx \\
& \approx 7.251\times 10^{-3}.\end{aligned}$$This value may help the insurance company to decide whether to invest part of its wealth in other assets.
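The displayed truncated integrals can be evaluated directly with an elementary midpoint rule (the grid sizes below are our own choice; no special functions are needed), reproducing the reported value $\approx 7.251\times 10^{-3}$.

```python
import math

# Midpoint-rule evaluation of the truncated integrals in the example:
# x0 = 4, alpha = beta = 1, c = 1.1, t = 1, eps = 12, five series terms.

def series(a, b):
    """sum_{k=1}^{5} a^k b^(k-1) / (k! (k-1)!)"""
    return sum(a ** k * b ** (k - 1)
               / (math.factorial(k) * math.factorial(k - 1))
               for k in range(1, 6))

# bracket factor: int_0^{13.2} e^{-12-y} series(12, y) dy + e^{-12}
m = 4000
h = 13.2 / m
B = sum(math.exp(-12.0 - (j + 0.5) * h) * series(12.0, (j + 0.5) * h)
        for j in range(m)) * h + math.exp(-12.0)

def inner(x, s_lo):
    """int_{s_lo}^{1} e^{-s-(4+1.1s-x)} series(s, 4+1.1s-x) ds"""
    steps = 400
    hs = (1.0 - s_lo) / steps
    tot = 0.0
    for j in range(steps):
        s = s_lo + (j + 0.5) * hs
        u = 4.0 + 1.1 * s - x
        tot += math.exp(-s - u) * series(s, u)
    return tot * hs

mx = 400
h1 = 4.0 / mx            # first region: x in (0, 4), s from 0
I1 = sum(inner((i + 0.5) * h1, 0.0) for i in range(mx)) * h1
h2 = 1.1 / mx            # second region: x in (4, 5.1), s from (x-4)/1.1
I2 = sum(inner(4.0 + (i + 0.5) * h2, (i + 0.5) * h2 / 1.1)
         for i in range(mx)) * h2

value = B * (I1 + I2)    # the text reports ~ 7.251e-3
```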
Proof of Theorem \[teomed2\]
----------------------------
We will use the monotone class theorem (see, for example, Ethier and Kurtz [@E-K], Theorem 4.2) to show that the result holds. Set$$\mathcal{H}=\{\psi :\mathbb{R}^{2}\longrightarrow \mathbb{R},\ \psi \text{
is measurable, bounded and satisfies }(\ref{cm2})\}.$$It is not difficult to see that $\mathcal{H}$ is a real linear space and, by Theorem \[theop\], we have$$\int_{\mathbb{R}}E[L_{t}(x)]dx=E[\int_{\mathbb{R}}L_{t}(x)dx]=E[%
\int_{0}^{t}1_{\mathbb{R}}(X_{s})ds]=t.$$Hence $1_{\mathbb{R}^{2}}\in \mathcal{H}$. Moreover, $\mathcal{H}$ is closed under monotone convergence: let $(\psi _{n})\subset \mathcal{H}$ be such that $0\leq \psi _{n}\uparrow \psi $, with $\psi $ bounded; then $\psi $ is measurable and $$\begin{aligned}
E[\int_{0}^{t}\psi (X_{s},X_{s+\varepsilon }-X_{s})ds] &=&\lim_{n\rightarrow
\infty }E[\int_{0}^{t}\psi _{n}(X_{s},X_{s+\varepsilon }-X_{s})ds] \\
&=&\lim_{n\rightarrow \infty }\int_{\mathbb{R}}E[\psi _{n}(x,X_{\varepsilon
}-x_{0})]E[L_{t}(x)]dx \\
&=&\int_{\mathbb{R}}E[\psi (x,X_{\varepsilon }-x_{0})]E[L_{t}(x)]dx,\end{aligned}$$which gives that $\psi \in \mathcal{H}$.
Now we use the notation$$\mathcal{K}=\{\psi :\mathbb{R}^{2}\longrightarrow \mathbb{R},\ \psi (\cdot
,\cdot \cdot )=1_{A}(\cdot )1_{B}(\cdot \cdot ),\ \ A,B\in \mathcal{B}(%
\mathbb{R})\}.$$Then the family $\mathcal{K}$ is closed under multiplication and $\mathcal{K}%
\subset \mathcal{H}$. In fact, by Theorem \[theop\] we obtain$$\begin{aligned}
\lefteqn{E[\int_{0}^{t}1_{A}(X_{s})1_{B}(X_{s+\varepsilon }-X_{s})ds]} \\
&=&\int_{0}^{t}E[1_{A}(X_{s})]E[1_{B}(X_{s+\varepsilon }-X_{s})]ds \\
&=&\int_{0}^{t}E[1_{A}(X_{s})]E[1_{B}(\varepsilon
c-\sum_{k=N_{s}+1}^{N_{s+\varepsilon }}R_{k})]ds \\
&=&\int_{0}^{t}E[1_{A}(X_{s})]E[1_{B}(\varepsilon
c-\sum_{k=1}^{N_{\varepsilon }}R_{k})]ds \\
&=&\int_{0}^{t}E[1_{A}(X_{s})]E[1_{B}(X_{\varepsilon }-x_{0})]ds \\
&=&E[1_{B}(X_{\varepsilon }-x_{0})]\int_{0}^{t}E[1_{A}(X_{s})]ds \\
&=&E[1_{B}(X_{\varepsilon }-x_{0})]E[\int_{\mathbb{R}}1_{A}(x)L_{t}(x)dx] \\
&=&\int_{\mathbb{R}}E[1_{A}(x)1_{B}(X_{\varepsilon }-x_{0})]E[L_{t}(x)]dx.\end{aligned}$$Finally, the Dynkin monotone class theorem completes the proof.
<span style="font-variant:small-caps;">Acknowledgement.</span> *The last two authors would like to thank Cinvestav-IPN and Universidad Autónoma de Aguascalientes for their hospitality during the realization of this work.*
[99]{} S. Asmussen (2000). Ruin Probabilities, World Scientific Publishing Co., Singapore.
J. Bertoin (1996). Lévy Processes, Cambridge University Press.
S.N. Chiu, C. Yin (2002). *On occupation times for a risk process with reserve-dependent premium*, Stochastic Models, **18**(2), 245-255.
K.L. Chung, R.J. Williams (1990). Introduction to Stochastic Integration, Birkhäuser, Boston.
S.N. Ethier, T.G. Kurtz (1986). Markov Processes: Characterizations and Convergence, John Wiley & Sons, New York.
P.J. Fitzsimmons, S.C. Port (1990). *Local times, occupation times, and the Lebesgue measure of the range of a Lévy process*. Seminar on Stochastic Processes, 1989 (San Diego, CA, 1989), 59–73, Progr. Probab. **18**, Birkhäuser, Boston.
E.T. Kolkovska, J.A. López-Mimbela, J. Villa (2005). *Occupation measure and local time of classical risk processes,* Insurance: Mathematics and Economics, **37**(3), 573-584.
J. Grandell (1991). Aspects of Risk Theory, Springer-Verlag, New York.
J. Hawkes (1986). *Local times as stationary processes*, K.D. Elworthy (Ed.), From local times to global geometry, Pitman Research Notes in Math. Vol. **150**, Chicago 111-120.
P. Lévy (1948). Processus Stochastiques et Mouvement Brownien, Gauthier-Villars, Paris.
T. Rolski, H. Schmidli, V. Schmidt, J. Teugels (1999). Stochastic Processes for Insurance and Finance, John Wiley & Sons, New York.
[^1]: $^*$Partially supported by the CONACyT grant 45684-F, and by the UAA grants PIM 05-3 and PIM 08-2
---
abstract: 'Epitaxial thin films of the highly spin polarized Heusler compound Co$_2$Cr$_{0.6}$Fe$_{0.4}$Al are deposited by dc magnetron sputtering. It is shown by XRD and TEM investigations how the use of an Fe buffer layer on MgO(100) substrates supports the growth of highly ordered Co$_2$Cr$_{0.6}$Fe$_{0.4}$Al at low deposition temperatures. The as grown samples show a relatively large ordered magnetic moment of $\mu \simeq 3.0\mu_B/f.u.$ providing evidence for a low level of disorder.'
address: 'Institute of Physics, Johannes Gutenberg University Mainz, Staudingerweg 7, 55128 Mainz, Germany'
author:
- 'A.Conca'
- 'M.Jourdan'
- 'C.Herbort'
- 'H.Adrian'
title: 'Epitaxy of thin films of the Heusler compound Co$_2$Cr$_{0.6}$Fe$_{0.4}$Al'
---
A3. Physical vapor deposition process, B1. Heusler alloys, B2. magnetic materials
75.47.Np, 68.55.-a, 75.70.-i, 68.37.Lp
\[sec:level1\]Introduction
==========================
The compound Co$_2$CrAl belongs to the Heusler-type (L2$_1$ structure) materials for which 100% spin polarization at the Fermi energy, i.e. half-metallic behavior, is predicted [@Gal02a; @Fec05]. For potential technical applications the magnetic ordering temperature of this material has to be raised well above room temperature by doping with iron [@Blo03]. However, considering real samples the effects of impurities and crystal imperfections have to be taken into account [@Miu04; @Fec05]. Additionally, at surfaces and interfaces the local spin polarization depends both on the crystallographic surface orientation and on the properties of the interface partner material [@Gal02; @Nag04].\
For those reasons it is important to grow single crystalline thin films with low defect concentration which can be used for various investigations like spin polarized photo electron spectroscopy or the integration in magnetic tunnel junctions. Polycrystalline Co$_2$Cr$_{0.6}$Fe$_{0.4}$Al thin films can be grown on oxidized Si [@Ino06], (110)-oriented films were obtained on Al$_2$O$_3$(110) [@Jak05] and (100) oriented epitaxial growth was observed directly on MgO(100) [@Ino06; @Mat06].\
Here we report how the use of an Fe buffer layer on an MgO(100) substrate assists the epitaxial growth of high quality Co$_2$Cr$_{0.6}$Fe$_{0.4}$Al thin films deposited at low temperatures without the need for a high temperature annealing step.
\[sec:level2\] Preparation
==========================
Co$_2$Cr$_{0.6}$Fe$_{0.4}$Al (CCFA) films were deposited by dc magnetron sputtering on MgO(100) substrates. The target stoichiometry as given by the supplier (TBL-Kelpin, Neuhausen) was confirmed by EDX (SEM). By the same method applied in a TEM the thin film stoichiometry was shown to be consistent with the target stoichiometry within the typical experimental error of 10%. Before deposition the commercial substrates (Crystec, Berlin) were annealed ex situ in an oxygen atmosphere at 950$^{\rm o}$C for 2 hours and subsequently exposed to a microwave oxygen plasma. However, a direct deposition of CCFA on the substrates at low temperatures ($T\simeq 100^{\rm o}$C) did not result in the formation of epitaxial CCFA thin films. Considering the small lattice misfit between Fe and CCFA of $\simeq 0.1\%$, Fe was selected as a buffer layer for the deposition of the Heusler compound. The expected epitaxial relation of the substrate and thin film layers is shown in Fig.\[epidars\]. An 8nm thick Fe buffer layer was deposited by electron beam evaporation in a separate MBE chamber which is part of the deposition system. The CCFA thin films were prepared on the buffer layer at a temperature of $T\simeq 100^{\rm o}$C in a sputtering chamber with a base pressure of $\simeq 3\times 10^{-8}$mbar. In an Ar atmosphere of p$=6\times 10^{-3}$mbar a deposition rate of 0.5 nm$/$s was used. All films were protected against oxidation by a capping layer of 4nm of Al before removing them from the deposition chamber.\
\[sec:level3\]Crystallographic order and epitaxial relation
===========================================================
Different types of site disorder are possible in CCFA. Cr-Al disorder (B2 structure) is most likely due to a very small difference in the total formation energy of the ordered and disordered structure [@Miu04]. However, this type of disorder is predicted to have only a small influence on the spin polarization of CCFA. Possible types of disorder which strongly reduce the spin polarization are Co-Cr disorder or disorder on all sites (A2 structure) [@Miu04].\
For the investigation of the crystallographic properties of the thin films x-ray diffraction was employed. The characteristic x-ray reflections for the disorder in CCFA are (111) and (200). Both are present in the cases of the fully ordered L2$_1$ structure and for pure Co-Cr disorder (with different intensity ratio). However, (111) disappears completely if there is full disorder on the Cr-Al positions. If there is disorder on all positions (A2 structure) (200) disappears as well.\
Fig.\[diff\] shows a $\Theta/2\Theta$-scan of a CCFA thin film obtained in Bragg-Brentano geometry in which scattering at Bragg-planes which are parallel to the substrate surface is observed (specular reflections). The (200) reflection of CCFA is clearly visible. The stronger (400) reflection covers the same angle as the Fe (200) peak of the thin buffer layer (Fe (100) is symmetry forbidden). The observation of the CCFA (200) reflection excludes already the A2 structure, but further insight is obtained from 4-circle x-ray diffraction which allows the investigation of off-specular reflections.\
In Fig.\[phiscan\] a $\phi$-scan of the (220) equivalent reflections of a film and substrate is shown. In this scan the film normal is tilted by $45^{\rm o}$ out of the scattering plane and the sample is rotated by the angle $\phi$ around the film normal. The scan shows that the film is in-plane ordered and proves the epitaxial growth. From the observation of the peak positions of CCFA which are rotated by 45$^{\rm o}$ with respect to the MgO substrate peak positions it is concluded that the epitaxial relation is indeed as shown in Fig.\[epidars\].\
Another off-specular peak, the (111) reflection, was not observed in our films. This, in combination with the observation of the (200) reflection, indicates that there is full disorder on the Cr-Al positions, but order on the Co positions. From the ratio of the scattering intensities of (200) and (400), and considering a geometrical correction due to the thin film geometry of the sample, an upper limit for the remaining disorder on the Co sites can be estimated: from (I$_{(200)}\times sin\Theta_{(200)}$)/(I$_{(400)}\times sin\Theta_{(400)}$)$\simeq0.16$ and a comparison with a simulation (PowderCell) it can be concluded that less than 18% of the Co sites are occupied by other atoms. Thus the films grow with a high degree of B2 order.
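The trend behind this estimate can be illustrated with a deliberately crude structure-factor sketch. We use the standard Heusler selection rules ($F(111)\propto|f_Y-f_Z|$, $F(200)\propto|f_Y+f_Z-2f_X|$, $F(400)\propto|f_Y+f_Z+2f_X|$), take atomic numbers as stand-ins for the scattering factors, and ignore Lorentz-polarization, absorption, Debye-Waller and thin-film corrections. The numbers are therefore only qualitative and are not directly comparable to the corrected experimental ratio 0.16; the quantitative 18% bound requires a full simulation such as PowderCell, as stated in the text.

```python
# Crude illustration: how Co-site disorder suppresses the (200) reflection
# relative to (400) in B2-ordered Co2Cr0.6Fe0.4Al.  Z numbers stand in for
# scattering factors f; all intensity corrections are ignored (qualitative
# trend only).

f_Co, f_Cr, f_Fe, f_Al = 27.0, 24.0, 26.0, 13.0   # Z as proxy for f
# B2 order: Cr/Fe and Al fully mixed on one site, so F(111) ~ |f_Y - f_Z|
# vanishes, consistent with the absent (111) reflection
f_yz = (0.6 * f_Cr + 0.4 * f_Fe + f_Al) / 2.0

def intensity_ratio(d):
    """|F(200)/F(400)|^2 when a fraction d of the Co sites is swapped with
    the mixed Cr/Fe-Al site (d = 0: ideal B2, d = 0.5: full A2 disorder)."""
    f_x = (1 - d) * f_Co + d * f_yz               # average Co-site factor
    f_b = (1 - d) * f_yz + d * f_Co               # average mixed-site factor
    F200 = 2 * abs(f_b - f_x)                     # ~ |f_Y + f_Z - 2 f_X|
    F400 = 2 * (f_b + f_x)                        # ~ |f_Y + f_Z + 2 f_X|
    return (F200 / F400) ** 2

r0 = intensity_ratio(0.0)      # ideal B2
r18 = intensity_ratio(0.18)    # the quoted disorder bound
r50 = intensity_ratio(0.5)     # A2: (200) extinguished as well
```

The ratio decreases monotonically with the Co-site disorder fraction and vanishes in the A2 limit, which is exactly why the measured (200)/(400) ratio, compared against a simulated one, bounds the Co-site disorder from above.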
\[sec:level4\] Microstructural properties
=========================================
For a direct access to the substrate-film interface and in particular to reveal the function of the buffer layer the samples were investigated by high-resolution transmission electron microscopy (HRTEM) at the center for electron microscopy of the university of Mainz (EMZM). The TEM cross sections were prepared by mechanical thinning and argon-ion polishing.\
Fig.\[ch\_buffer\] shows an image in (010) direction of CCFA of the interface region with MgO substrate, Fe buffer and CCFA thin film.
![\[ch\_buffer\] HRTEM image in (010) direction of CCFA of the interface region with MgO substrate, Fe buffer layer and CCFA thin film.](fig4.eps){width="0.7\columnwidth"}
At the interface between MgO and Fe some lattice distortions are visible, which can be related to the lattice mismatch of $(a_{2 Fe}-\sqrt{2} a_{MgO})/a_{2 Fe}\simeq-0.037$. However, these distortions disappear after some atomic layers and the structure of the buffer layer is well ordered. The interface between Fe and CCFA cannot be clearly identified from the HRTEM images. No distortion of the atomic layers is observable at this interface. This indicates a perfect epitaxial growth of CCFA on the buffer layer due to the very small lattice mismatch of $(a_{CCFA}-a_{2 Fe})/a_{CCFA}\simeq-0.001$.
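The two quoted misfits follow directly from room-temperature lattice constants; the values used below are taken from the literature and are our assumption, not from the text.

```python
# Quick check of the quoted epitaxial misfits.  Lattice constants (in
# angstrom) are assumed literature values: bcc Fe ~ 2.866, MgO ~ 4.212,
# CCFA ~ 5.727.
a_Fe, a_MgO, a_CCFA = 2.866, 4.212, 5.727

# Fe grows 45-degree rotated on MgO(100): compare 2*a_Fe with sqrt(2)*a_MgO
m_Fe_MgO = (2 * a_Fe - 2 ** 0.5 * a_MgO) / (2 * a_Fe)     # ~ -0.04

# CCFA on the Fe buffer: compare a_CCFA with the doubled Fe cell 2*a_Fe
m_CCFA_Fe = (a_CCFA - 2 * a_Fe) / a_CCFA                  # ~ -0.001
```

The roughly 4% Fe/MgO misfit explains the distortions confined to the first atomic layers, while the per-mille CCFA/Fe misfit is why no distortion is resolved at the upper interface.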
\[sec:level5\] Magnetic properties
==================================
The magnitude of the magnetic moment per formula unit (f.u.) of CCFA is an important figure of merit for the film quality. Typical values reported for CCFA thin films whose preparation includes a high temperature annealing process amount to ${\rm \mu_{CCFA}\simeq 2.7 \mu_B/f.u.}$ [@Mat06].\
The magnetic properties of our thin films were analyzed using a SQUID magnetometer (Quantum Design MPMS). The contributions of the MgO substrate and the Fe buffer layer were measured separately and were subtracted from the total magnetization of the complete sample. In Fig. \[hyst\] the hysteresis curves measured at 5K and 300K are shown. The measured volume magnetization corresponds to ${\rm \mu_{CCFA}\simeq 3.0 \mu_B/f.u.}$ at T = 5K. The deviation from the theoretically predicted value for CCFA, 3.8 $\mu_B$ [@Gal02a], may be explained by a relatively small partial disorder on the Co-Cr atomic positions [@Miu04].\
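For orientation, the conversion between the moment per formula unit and the volume magnetization actually measured by the SQUID can be sketched as follows; the lattice constant ($\approx 5.73$ Å) and the 4 formula units per cubic L2$_1$ cell are our assumptions.

```python
# Back-of-envelope conversion: moment per formula unit -> volume
# magnetization (A/m).  Assumes a ~ 5.73 angstrom and 4 f.u. per cubic
# L2_1 cell; these inputs are not stated in the text.
mu_B = 9.274e-24        # Bohr magneton, J/T
a = 5.73e-10            # lattice constant, m
fu_per_cell = 4

def magnetization(mu_per_fu):
    """Volume magnetization in A/m for mu_per_fu Bohr magnetons per f.u."""
    return mu_per_fu * fu_per_cell * mu_B / a ** 3

M_measured = magnetization(3.0)   # the as-grown films at 5 K, ~ 6e5 A/m
M_theory = magnetization(3.8)     # the predicted half-metallic moment
```

The measured $3.0\,\mu_B/\mathrm{f.u.}$ thus corresponds to roughly $6\times10^{5}$ A/m, noticeably below the value the theoretical $3.8\,\mu_B$ would imply, in line with the residual Co-Cr disorder discussed above.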
The temperature dependence of the magnetic moment of the CCFA thin films is shown in Fig. \[mvont\]. At T = 300K the magnetic moment amounts to ${\rm \mu_{CCFA}\simeq 2.5 \mu_B/f.u.}$. This reduction of the magnetic moment compared to the low temperature value is more pronounced than the dependence theoretically predicted for fully ordered Co$_2$CrAl [@Sas05].
\[sec:level6\] Summary
======================
Epitaxial (100) oriented thin films of the Heusler compound Co$_2$Cr$_{0.6}$Fe$_{0.4}$Al (CCFA) were grown by dc magnetron sputtering on MgO(100) substrates employing an Fe (100) buffer layer. This buffer layer is responsible for the high crystallographic quality of the CCFA thin films by relaxing the strain due to the lattice misfit with the substrate and providing an ideal seed for distortion free epitaxial growth of the Heusler compound. A relatively large magnetic moment per formula unit of $\simeq 3.0 \mu_{\rm B}$ was observed indicating only a small degree of disorder on the Co-Cr sites.\
Due to the Fe buffer layer CCFA thin films with the B2 structure can be obtained at relatively low substrate deposition temperatures T = $100{\rm ^o}$C without the need for an additional high temperature annealing process. This may be helpful considering technological applications of CCFA, e. g. the integration into magnetic tunneling junctions or spin valves.\
[**acknowledgments**]{}\
This project is financially supported by the [*Stiftung Rheinland-Pfalz für Innovation*]{}. Experimental support by F.Banhart concerning the HRTEM investigations is gratefully acknowledged.\
[99]{}
I.Galanakis, P.H.Dederichs, and N.Papanikolaou, Phys.Rev.B 66 (2002) 174429. G.Fecher, H.Kandpal, S.Wurmehl, J.Morais, H.-J.Lin, H.-J.Elmers, G.Schönhense, and C.Felser, J.Phys: Condens.Matter 17 (2005) 7237. T.Block, C.Felser, G.Jakob, J.Ensling, B.Mühling, P.Gütlich, and R.J.Cava, J.Solid State Chem. 176 (2003) 646. Y.Miura, K.Nagao, and M.Shirai, Phys.Rev.B 69 (2004) 144413. I.Galanakis, J.Phys: Condens.Matter 14 (2002) 6329. K.Nagao, M.Shirai, and Y.Miura, J.Phys: Condens.Matter 16 (2004) S5725. K.Inomata, S.Okamura, A.Miyazaki, M.Kikuchi, N.Tezuka, M.Wojcik, and E.Jedryka, J.Phys.D: Appl.Phys. 39 (2006) 816. G.Jakob, F.Casper, V.Beaumont, S.Falk, N.Auth, H.-J.Elmers, C.Felser, and H.Adrian, Journ.Mag.Magn.Mat. 290 (2005) 1104. K.Matsuda, T.Kasahara, T.Marukame, T.Uemura, and M.Yamamoto, Journ.of Crystal Growth 286 (2006) 389. E.Sasioglu, L.M.Sandratskii, P.Bruno, I.Galanakis, Phys.Rev.B 72 (2005) 184415.
---
abstract: 'In this paper we prove a series of Rogers-Shephard type inequalities for convex bodies when dealing with measures on the Euclidean space with either radially decreasing densities, or quasi-concave densities attaining their maximum at the origin. Functional versions of classical Rogers-Shephard inequalities are also derived as consequences of our approach.'
address:
- 'Departamento de Matemáticas, Universidad de Zaragoza, 50009-Zaragoza, Spain'
- 'Departamento de Matemáticas, Universidad de Murcia, Campus de Espinardo, 30100-Murcia, Spain'
- 'Department of Mathematical Sciences, Kent State University, Kent, OH USA'
author:
- 'David Alonso-Gutiérrez'
- 'María A. Hernández Cifre'
- Michael Roysdon
- Jesús Yepes Nicolás
- Artem Zvavitch
title: 'On Rogers-Shephard type inequalities for general measures'
---
[^1]
Introduction and main results
=============================
We denote the length of a vector $x \in \R^n$ by $|x|$. We represent by $B_n=\bigl\{x\in\R^n:|x|\leq 1\bigr\}$ the $n$-dimensional Euclidean unit ball, by $\s^{n-1}$ its boundary, and $\sigma$ will denote the standard surface area measure on $\s^{n-1}$. The $n$-dimensional volume of a measurable set $M\subset\R^n$, i.e., its $n$-dimensional Lebesgue measure, is denoted by $\vol(M)$ or $\vol_n(M)$ if the distinction of the dimension is useful (when integrating, as usual, ${\mathrm{d}}x$ will stand for ${\mathrm{d}}\vol(x)$). With $\operatorname{int}M$, $\bd M$ and $\conv M$ we denote the interior, boundary and convex hull of $M$, respectively, and we set $[x,y]$ for $\conv\{x,y\}$, $x,y\in\R^n$. The set of all $i$-dimensional linear subspaces of $\R^n$ is denoted by $\G(n,i)$, and for $H\in\G(n,i)$, the orthogonal projection of $M$ onto $H$ is denoted by $P_HM$. Moreover, $H^{\bot}\in\G(n,n-i)$ represents the orthogonal complement of $H$. Finally, let $\K^n$ be the set of all $n$-dimensional convex bodies, i.e., compact convex sets with non-empty interior, in $\R^n$. We will frequently refer to [@AGM], [@Ga] and [@Sch] for general references for convex bodies and their properties.
The Minkowski sum of two non-empty sets $A,B\subset\R^n$ is their classical vector addition, $A+B=\{a+b:\, a\in A, \, b\in B\}$, and we write $A-B$ for $A+(-B)$.
One of the most famous relations involving the volume and the Minkowski addition is the Brunn-Minkowski inequality (we refer to [@G] for an extensive survey of this inequality). One form of it states that if $K,L\in\K^n$, then $$\label{e:B-M_ineq}
\vol(K+L)^{1/n}\geq \vol(K)^{1/n}+\vol(L)^{1/n},$$ and equality holds if and only if $K$ and $L$ are homothetic.
The Brunn-Minkowski inequality was generalized to different types of measures, including the case of log-concave measures [@Leindler; @Prekopa], a very powerful generalization to the case of Gaussian measures [@B2; @B3; @E1; @E2; @ST], to $p$-concave measures and many other extensions (see e.g. [@Borell; @BL]). It is interesting to note that it was proved by Borell [@Borell1; @Borell] that most such generalizations require a $p$-concavity assumption on the underlying measure and its density (see below for the precise definition). Following these works, many classical results in Convex Geometry were recently generalized to the case of log-concave (and in some cases $p$-concave) functions. We mention, among others, the Blaschke-Santaló inequality [@AKM; @Ba1; @FM], the Bourgain-Milman and the reverse Brunn-Minkowski inequality [@KM], the general works on duality and volume [@Ba1; @Ba2], as well as the Grünbaum inequality [@MNRY; @MSZ] and others [@GaZv; @LiMaNaZv; @Mar; @NaTk; @RYN].
In the particular case when $L=-K$, (\[e:B-M\_ineq\]) gives $$\vol(K-K)\geq 2^n\vol(K),$$ with equality if and only if $K$ is centrally symmetric, i.e., there exists a point $x \in \R^n$ such that $K-x=-(K-x)$. An upper bound for the volume of $K-K$ is given by the Rogers-Shephard inequality, originally proven in [@RS1 Theorem 1]. For more details about this inequality, we also refer the reader to [@Sch Section 10.1] or [@AGM].
\[t:RS\] Let $K\in\K^n$. Then $$\label{e:RS}
\vol(K-K)\leq \binom{2n}{n}\vol(K),$$ with equality if and only if $K$ is a simplex.
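The equality case can be checked numerically in the plane: for a triangle $K$ the difference body $K-K$ is a hexagon with $\vol(K-K)=\binom{4}{2}\vol(K)=6\vol(K)$. The sketch below (our own illustration) computes $K-K$ as the convex hull of the pairwise vertex differences and compares areas.

```python
# Rogers-Shephard equality case for n = 2: a simplex K in R^2 satisfies
# vol(K - K) = C(4, 2) vol(K) = 6 vol(K).

def hull(points):
    """Andrew's monotone chain convex hull (counter-clockwise order)."""
    pts = sorted(set(points))
    def half(pts):
        h = []
        for p in pts:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def area(poly):
    """Shoelace formula for a simple polygon."""
    return abs(sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
                   - poly[(i + 1) % len(poly)][0] * poly[i][1]
                   for i in range(len(poly)))) / 2.0

K = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]      # a simplex in R^2
# for polytopes, K - K is the convex hull of all vertex differences
diff = hull([(p[0] - q[0], p[1] - q[1]) for p in K for q in K])
ratio = area(diff) / area(K)                   # equals C(4, 2) = 6
```

Here $K-K$ is the regular-looking hexagon with vertices $\pm e_1,\pm e_2,\pm(e_1-e_2)$, of area $3$, against $\vol(K)=1/2$.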
Similarly to the Brunn-Minkowski inequality (\[e:B-M\_ineq\]), it is natural to wonder about the possibility of extending (\[e:RS\]) to measures associated to certain densities. The most natural candidates would be the classes of $p$-concave measures. Nevertheless, it was noticed recently that a number of results in Convex Geometry and Geometric Tomography can be generalized to a class of measures whose densities have no concavity assumption. This includes the solution of the Busemann-Petty problem for general measures [@Z], the Koldobsky slicing inequality [@Kol; @KoZ; @KK; @KLi], as well as Shephard’s problem for general measures [@Liv].
First we observe that one cannot expect to obtain $$\label{e:RS_not_true}
\mu(K-K)\leq \binom{2n}{n}\mu(K)$$ without some control on the ‘position’ of the body $K$. Indeed, it is enough to consider the standard $n$-dimensional Gaussian measure $\gamma_n$ given by $${\mathrm{d}}\gamma_n(x)=\frac{1}{(2\pi)^{n/2}}e^{\frac{-|x|^2}{2}}{\mathrm{d}}x,$$ and $K=x+B_n$ for $|x|$ large enough. In this case it is clear that $\gamma_n(K-K)=\gamma_n(2B_n)>0$, whereas $\gamma_n(K)$ can be arbitrarily small.
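The failure is already visible in dimension $1$, where everything reduces to the normal CDF (the specific shift $x=5$ below is our choice):

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# n = 1, K = [x - 1, x + 1]:  K - K = [-2, 2] regardless of x,
# while gamma_1(K) -> 0 as x grows.  Here we take x = 5.
g_KK = Phi(2.0) - Phi(-2.0)     # ~ 0.9545, independent of the shift
g_K = Phi(6.0) - Phi(4.0)       # gamma_1([4, 6]), already tiny

# binom(2n, n) = 2 for n = 1, so the naive inequality would force
# g_KK <= 2 * g_K, which fails by several orders of magnitude.
```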
One option to get control on the right-hand side of (\[e:RS\_not\_true\]) might be to exchange $\mu(K)$ with a mean of the measures of all the translated copies of $K$ with respect to $-K$. To this end, given a measure $\mu$ on $\R^n$, we define its *translated-average* $\overline{\mu}$ as $$\overline{\mu}(K)=\dfrac{1}{\vol(K)}\int_{K}\mu(-y+K)\,{\mathrm{d}}y,$$ for any $K\in\K^n$. With this notion, our first main result reads as follows.
\[t:RS\_measures\_rad\_decreasing\] Let $K\in\K^n$. Let $\mu$ be a measure on $\R^n$ given by ${\mathrm{d}}\mu(x)=\phi(x)\,{\mathrm{d}}x$, where $\phi:\R^n\longrightarrow[0,\infty)$ is radially decreasing. Then $$\label{e:RS_measures_rad_decreasing}
\mu(K-K)\leq
\binom{2n}{n}\min\bigl\{\overline{\mu}(K),\overline{\mu}(-K)\bigr\}.$$ Moreover, if $\phi$ is continuous at the origin then equality holds in (\[e:RS\_measures\_rad\_decreasing\]) if and only if $\mu$ is a constant multiple of the Lebesgue measure on $K-K$ and $K$ is a simplex.
A function $\phi:\R^n\longrightarrow[0,\infty)$ is said to be radially decreasing if $\phi(tx)\geq \phi(x)$ for any $t\in[0,1]$ and any point $x\in\R^n$.
A lower bound for $\mu(K-K)$ when the density function of $\mu$ is even and $p$-concave (see the definition below), $p\geq -1/n$, can be directly obtained from the results by Borell and Brascamp-Lieb [@Borell; @BL]: $$\label{e:B-M(K-K)}
\mu(K-K)\geq\mu(2K).$$ Here we extend (\[e:B-M(K-K)\]) to the case of measures with even quasi-concave densities (see Theorem \[t:R-S\_reverse\]).
We recall that a function $\phi:\R^n\longrightarrow[0,\infty)$ is $p$-concave, for $p\in\R\cup\{\pm\infty\}$, if $$\label{e:p-concavecondition}
\phi\bigl((1-\lambda)x+\lambda y\bigr)\geq
M_p\bigl(\phi(x),\phi(y),\lambda\bigr)$$ for all $x,y\in\R^n$ and any $\lambda\in(0,1)$. Here $M_p$ denotes the [*$p$-mean*]{} of two non-negative numbers: $$M_p(a,b,\lambda)=\left\{
\begin{array}{ll}
\bigl((1-\lambda)a^p+\lambda b^p\bigr)^{1/p}, & \text{ if }p\neq 0,\pm\infty,\\[1mm]
a^{1-\lambda}b^\lambda & \text{ if }p=0,\\[1mm]
\max\{a,b\} & \text{ if }p=\infty,\\[1mm]
\min\{a,b\} & \text{ if }p=-\infty;
\end{array}\right.$$ for $ab>0$, and $M_p(a,b,\lambda)=0$ when $ab=0$, for any $p\in\R\cup\{\pm\infty\}$. A $0$-concave function is usually called *log-concave* whereas a $(-\infty)$-concave function is called *quasi-concave*. Quasi-concavity is equivalent to the fact that the superlevel sets $$\label{e:ct(phi)}
\c_t(\phi)=\bigl\{x\in\supp\phi:\phi(x)\geq t\|\phi\|_{\infty}\bigr\}$$ are convex for $t\in [0,1]$. Here $\supp\phi$ denotes the support of $\phi$, i.e., the closure of the set $\bigl\{x\in\R^n:\phi(x)>0\bigr\}$, and with $\|\cdot\|_{\infty}$ we mean $$\|\phi\|_{\infty}=\operatorname*{ess\,sup}_{x\in\R^n}\phi(x)=\inf\Bigl\{t\in\R:\vol\bigl(\{x\in\R^n:\phi(x)>t\}\bigr)=0\Bigr\}.$$ We notice that if $\phi$ is $p$-concave, then $\supp\phi$ is a closed convex set. Furthermore, if a function $\phi$ is quasi-concave and such that $\max_{x\in\R^n}\phi(x)=\phi(0)$ then it is radially decreasing.
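For concreteness, the $p$-mean $M_p$ can be transcribed directly, including its limit cases (a sketch; the helper name `p_mean` is ours):

```python
import math

def p_mean(a, b, lam, p):
    """M_p(a, b, lam) for a, b >= 0 and lam in (0, 1)."""
    if a == 0.0 or b == 0.0:
        return 0.0                                # M_p vanishes when ab = 0
    if p == math.inf:
        return max(a, b)
    if p == -math.inf:
        return min(a, b)                          # quasi-concavity
    if p == 0:
        return a ** (1 - lam) * b ** lam          # log-concavity
    return ((1 - lam) * a ** p + lam * b ** p) ** (1 / p)

print(p_mean(4.0, 16.0, 0.5, 0))    # geometric mean: 8.0
```

Since $M_p(a,b,\lambda)$ is nondecreasing in $p$, the condition of $p$-concavity weakens as $p$ decreases; in particular, every log-concave function is quasi-concave.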
Although the Rogers-Shephard inequality has been recently extended to the functional setting (see e.g. [@AAGJV; @AlGMJV; @Co] and the references therein), there seems to be no direct way to derive inequality (\[e:RS\_measures\_rad\_decreasing\]) from the above-mentioned functional versions just by considering the function $\chi_{_K}\,\phi$, where $\phi$ is the density of the given measure, and $\chi_{_K}$ is the characteristic function of a convex body $K$ (see Remark \[r:functional\_NO\_RS\]). More precisely, in [@Co Theorems 4.3 and 4.5], Colesanti extended (\[e:RS\]) to the more general functional inequality $$\label{e:Colesanti}
\int_{\R^n}\sup_{x=x_1+x_2}\bigl(f(x_1)^p+f(-x_2)^p\bigr)^{1/p}\,{\mathrm{d}}x\leq \binom{2n}{n}\int_{\R^n} f(x)\,{\mathrm{d}}x,$$ for any $p$-concave integrable function, with $p\in[-\infty,0)$. Here, the case $p=-\infty$ has to be understood as $\min\bigl\{f(x_1),f(-x_2)\bigr\}$. In Section \[s:radial\_decay\] we will also generalize (\[e:Colesanti\]) to general measures (see Theorem \[t:pquasitheorem\]).
In [@RS2], in addition to $K-K$, Rogers and Shephard considered two other centrally symmetric convex bodies associated with $K$. The first one is $$CK=\bigl\{(x,\theta)\in\R^{n+1}:\,
x\in(1-\theta)K+\theta(-K),\,\theta\in[0,1]\bigr\},$$ whose volume is given by $$\vol_{n+1}(CK)=\int_0^1\vol\bigl((1-\theta)K+\theta(-K)\bigr)\,{\mathrm{d}}\theta.$$ The second one is just $\conv\bigl(K\cup(-K)\bigr)$. The relation of the volumes of $CK$ and $\conv\bigl(K\cup(-K)\bigr)$ to the volume of $K$ was proved in [@RS2]:
\[t:RS\_CK\] Let $K\in\K^n$ be a convex body containing the origin. Then $$\label{e:RS_CK}
\int_0^1\vol\bigl((1-\theta)K+\theta(-K)\bigr)\,{\mathrm{d}}\theta
\leq\frac{2^n}{n+1}\,\vol(K),$$ with equality if and only if $K$ is a simplex. Moreover, $$\label{e:RS_conv}
\vol\Bigl(\conv\bigl(K\cup(-K)\bigr)\Bigr)\leq2^n\,\vol(K),$$ with equality if and only if $K$ is a simplex with the origin as a vertex.
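Both bounds can be checked numerically for a planar triangle containing the origin, which realizes both equality cases (a sketch; Minkowski combinations of polytopes are computed via convex hulls of pairwise vertex combinations):

```python
import itertools

import numpy as np
from scipy.integrate import simpson
from scipy.spatial import ConvexHull

K = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # simplex, 0 is a vertex
n = 2

def vol_comb(t):
    """vol((1 - t)K + t(-K)) via the hull of pairwise combinations."""
    pts = np.array([(1 - t) * v - t * w for v, w in itertools.product(K, K)])
    return ConvexHull(pts).volume

ts = np.linspace(0.0, 1.0, 101)
lhs_CK = simpson([vol_comb(t) for t in ts], x=ts)    # integral over theta
rhs_CK = 2 ** n / (n + 1) * ConvexHull(K).volume     # 2^n/(n+1) vol(K)

lhs_cv = ConvexHull(np.vstack([K, -K])).volume       # vol(conv(K U -K))
rhs_cv = 2 ** n * ConvexHull(K).volume               # 2^n vol(K)
print(lhs_CK, rhs_CK)   # both 2/3 (equality: K is a simplex)
print(lhs_cv, rhs_cv)   # both 2   (equality: origin is a vertex)
```

The integrand is a quadratic polynomial in $\theta$ (by mixed volumes), so composite Simpson integration is essentially exact here.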
Here we will show an analog of the above result in the setting of measures with radially decreasing density:
\[t:RS\_CK\_conv\_hull\_rad\_dec\] Let $K\in\K^n$ be a convex body containing the origin and let $\mu$ be a measure on $\R^n$ given by ${\mathrm{d}}\mu(x)=\phi(x)\,{\mathrm{d}}x$, where $\phi:\R^n\longrightarrow[0,\infty)$ is radially decreasing. Then $$\label{e:R-S_mu_CK}
\int_0^1\mu\bigl((1-\theta)K+\theta(-K)\bigr)\,{\mathrm{d}}\theta
\leq\frac{2^n}{n+1}\,\sup_{\substack{y\in
K\\\theta\in(0,1]}}\frac{\mu\bigl((1-\theta)y-\theta K\bigr)}{\theta^n}$$ and $$\label{e:R-S_mu_conv}
\mu\Bigl(\conv\bigl(K\cup(-K)\bigr)\Bigr)\leq 2^n\,\sup_{\substack{y\in
K\\\theta\in(0,1]}}\frac{\mu\bigl((1-\theta)y-\theta K\bigr)}{\theta^n}.$$ Moreover, if $\phi$ is continuous at the origin then equality holds in (\[e:R-S\_mu\_CK\]) if and only if $\mu$ is a constant multiple of the Lebesgue measure on $\conv\bigl(K\cup(-K)\bigr)$ and $K$ is a simplex, and equality holds in (\[e:R-S\_mu\_conv\]) if and only if $\mu$ is a constant multiple of the Lebesgue measure on $\conv\bigl(K\cup(-K)\bigr)$ and $K$ is a simplex with the origin as a vertex.
We note that the upper bounds in Theorem \[t:RS\_CK\_conv\_hull\_rad\_dec\] are finite and can be estimated by $\|\phi\|_{\infty}\vol(K)$; indeed, since $(1-\theta)y-\theta K$ is a translate of $-\theta K$, we have $\mu\bigl((1-\theta)y-\theta K\bigr)\leq\|\phi\|_{\infty}\theta^n\vol(K)$, so $\mu\bigl((1-\theta)y-\theta K\bigr)/\theta^n$ is bounded from above by $\|\phi\|_{\infty}\vol(K)$.
In [@RS2 Theorem 1], Rogers and Shephard also gave the following lower bound for the volume of $K$ in terms of the volumes of a projection and a maximal section of $K$:
\[t:RS\_section\_proy\] Let $k\in\{1,\dots,n-1\}$, $H\in\G(n,n-k)$ and $K\in\K^n$. Then $$\label{e:RS_section_proy}
\vol_{n-k}\bigl(P_HK\bigr)\max_{x_0\in H}
\vol_k\bigl(K\cap\bigl(x_0+H^{\bot}\bigr)\bigr)\leq\binom{n}{k}\vol(K).$$
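For the triangle $K=\conv\{(0,0),(1,0),(0,1)\}$ with $H$ the $x$-axis ($n=2$, $k=1$) the inequality is attained with equality, which can be checked directly (a sketch; the section-length formula below is specific to this triangle):

```python
from math import comb

# K = {(x, y) : x >= 0, y >= 0, x + y <= 1}, H = x-axis, H-perp = y-axis.
def section_length(x0):
    """Length of the vertical section of K cut by the line x = x0."""
    return 1.0 - x0 if 0.0 <= x0 <= 1.0 else 0.0

proj_length = 1.0                    # vol_1(P_H K), since P_H K = [0, 1]
max_section = max(section_length(i / 1000) for i in range(1001))
vol_K = 0.5

lhs = proj_length * max_section      # section length is maximized at x0 = 0
rhs = comb(2, 1) * vol_K             # binom(n, k) vol(K)
print(lhs, rhs)                      # 1.0 1.0: equality
```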
In this paper we will show that the above result remains true for products of measures associated to quasi-concave densities, provided that $P_HK\subset K$, i.e., $P_HK=K\cap H$. The assumption on the projection is necessary, as pointed out in Example \[r:hip\_P\_HK\]. In particular, this hypothesis does not allow one to prove Theorem \[t:RS\_CK\_conv\_hull\_rad\_dec\] by directly following the proof of Theorem \[t:RS\_CK\] (see [@RS2 Theorems 2 and 3]): there, the authors constructed a suitable higher-dimensional set to which (\[e:RS\_section\_proy\]) was applied. This will not be possible here.
Before stating the result, we fix the following notation: given a convex body $K$ and $x\in P_HK$, we write $K(x)=(K-x)\cap H^{\bot}$. We will use the definition of the superlevel set $\c_t(\phi)$ given by (\[e:ct(phi)\]).
\[t:RS\_secc\_proy\_K(0)\] Let $k\in\{1,\dots,n-1\}$ and $H\in\G(n,n-k)$. Given a quasi-concave function $\phi_k:\R^k\longrightarrow[0,\infty)$, continuous at the origin and with $\|\phi_k\|_{\infty}=\phi_k(0)$, and a radially decreasing function $\phi_{n-k}:\R^{n-k}\longrightarrow[0,\infty)$, let $\mu_n=\mu_{n-k}\times\mu_{k}$ be the product measure on $\R^n$ given by ${\mathrm{d}}\mu_{n-k}(x)=\phi_{n-k}(x)\,{\mathrm{d}}x$ and ${\mathrm{d}}\mu_{k}(y)=\phi_k(y)\,{\mathrm{d}}y$. Let $K\in\K^n$ with $P_HK\subset K$ and such that $\vol_k\bigl(\c_t(\phi_k)\cap K(x)\bigr)$ attains its maximum at $x=0$ for every $t\in(0,1)$. Then $$\label{e:RS_secc_proy_K(0)}
\mu_{n-k}\bigl(P_HK\bigr)\mu_k\bigl(K\cap
H^{\bot}\bigr)\leq\binom{n}{k}\mu_n(K).$$
The above assumption on the maximal section $K(0)$ of $K$ can be omitted when the density of the product measure is also quasi-concave, as shown in Theorem \[t:RS\_seccion\_proy\_quasi\], which is a straightforward consequence of the following functional version of (\[e:RS\_section\_proy\]).
\[t:functional\_RS\] Let $k\in\{1,\dots,n-1\}$ and $H\in\G(n,n-k)$. Let $f:\R^n\longrightarrow[0,\infty)$ be a bounded quasi-concave function such that $\vol_k\bigl(\c_t(f)\cap(x+H^{\bot})\bigr)$, $x\in H$, attains its maximum at $x=0$ for every $t\in(0,1)$, and let $g:H\longrightarrow[0,\infty)$ be a radially decreasing function. Then, $$\label{e:proy_sect_f_g}
\int_{H}g(x)P_Hf(x)\,{\mathrm{d}}x\int_{H^{\bot}}f(y)\,{\mathrm{d}}y
\leq\binom{n}{k}\|f\|_{\infty}\int_{\R^n}g(P_Hx)f(x)\,{\mathrm{d}}x.$$
Here, the projection function $P_Hf:H\longrightarrow[0,\infty)$ of $f$ is defined by $P_Hf(x)=\sup_{y\in H^{\bot}}f(x+y)$.
In the particular case of a log-concave integrable function $f$, this result has been recently obtained in [@AAGJV Theorem 1.1].
The paper is organized as follows. Section \[s:radial\_decay\] is mainly devoted to the proofs of Theorems \[t:RS\_measures\_rad\_decreasing\] and \[t:RS\_CK\_conv\_hull\_rad\_dec\] as well as the functional analogs of these results. We start Section \[s:functions\] by deriving a general result for functions with certain concavity conditions, which will play a relevant role throughout the manuscript. As a consequence of this result we prove, in particular, Theorem \[t:functional\_RS\]. Next, in Section \[s:quasi\_concave\], we study Rogers-Shephard type inequalities for measures with quasi-concave densities, and prove Theorem \[t:RS\_secc\_proy\_K(0)\]. Finally, in Section \[s:remark\], we present another Rogers-Shephard type inequality when assuming a further concavity for the density of the involved measure.
Rogers-Shephard type inequalities for measures with radially decreasing densities {#s:radial_decay}
=================================================================================
The case of convex sets {#ss:R-S_sets}
-----------------------
As pointed out in the previous section, one cannot expect to obtain (\[e:RS\_not\_true\]) without having control on the translations of the set $K$. Moreover, certain requirements on the density of the measure $\mu$ must be made (see also the comments after Remark \[c:RS\_measures\_rad\_decay\] and Example \[ex:ring\]). In this regard, in Section \[s:quasi\_concave\] we will show that one may consider quasi-concave densities with maximum at the origin. In this setting, we will also obtain other Rogers-Shephard type inequalities.
Let us now follow a different approach. First we will prove an extension of (\[e:RS\]) to the more general case of radially decreasing densities, collected in Theorem \[t:RS\_measures\_rad\_decreasing\]. Before showing it, we need the following auxiliary result.
\[l:F\_nonposit\] Let $\phi:[0,\infty)\longrightarrow[0,\infty)$ be a decreasing function and let $n,m\in\N$. Then, for every $x\in(0,\infty)$, $$\int_0^x \left(1-\frac{t}{x} \right)^n t^{m-1} \phi(t)\,{\mathrm{d}}t \geq
\binom{n+m}{n}^{-1} \int_0^x t^{m-1}\phi(t) \,{\mathrm{d}}t,$$ with equality if and only if $\phi$ is constant on $(0,x)$.
Considering the function $F:(0,\infty)\longrightarrow[0,\infty)$ given by $$F(x)=\binom{n+m}{n}^{-1}\int_0^x t^{m-1}\phi(t) \,{\mathrm{d}}t -\int_0^x
\left(1-\frac{t}{x} \right)^n t^{m-1} \phi(t)\,{\mathrm{d}}t,$$ we need to show that it is non-positive.
Expanding the binomial $\left(1 - t/x \right)^n$, we may assert on one hand that $F(x)\to 0$ as $x\to 0^+$. On the other hand, this expansion, jointly with Lebesgue’s differentiation theorem, shows that the derivative of $F$ exists for almost every $x\in(0,\infty)$ and further $$F'(x)=\binom{n+m}{n}^{-1}\,x^{m-1}\phi(x)-n\int_0^x\left(1-\frac{t}{x}\right)^{n-1}\frac{t^m}{x^2}\,\phi(t)
\,{\mathrm{d}}t.$$ Now, applying the change of variable $u = t/x$, we get $$n\int_0^x\left(1-\frac{t}{x}\right)^{n-1}t^m\,{\mathrm{d}}t=\dfrac{n\,\Gamma(n)\Gamma(m+1)}{\Gamma(n+m+1)}x^{m+1}=\binom{n+m}{n}^{-1}x^{m+1},$$ where $\Gamma$ represents the Gamma function. This together with the fact that $\phi$ is decreasing implies that $F'(x)\leq0$, with equality if and only if $\phi$ is constant on $(0,x)$.
Since $F$ is absolutely continuous on every interval $[a,b]\subset(0,\infty)$, because it arises as a finite sum of products of absolutely continuous functions, $$F(x)=F(a)+\int_a^xF'(s)\,{\mathrm{d}}s\leq F(a)$$ for all $x>0$ and any $0<a\leq x$. Taking into account that $\lim_{a\to0^+}F(a)=0$ we then have $$F(x)= \int_0^x F'(s)\,{\mathrm{d}}s\leq 0,$$ with equality if and only if $F'\equiv0$ almost everywhere or, equivalently, when $\phi$ is constant on $(0,x)$.
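The lemma is easy to test numerically (a sketch; the parameters $n=3$, $m=2$, $x=2$ and the density $\phi(t)=e^{-t}$ are arbitrary choices):

```python
import math
from math import comb

from scipy.integrate import quad

n, m, x = 3, 2, 2.0
phi = lambda t: math.exp(-t)        # decreasing, and not constant on (0, x)

lhs = quad(lambda t: (1 - t / x) ** n * t ** (m - 1) * phi(t), 0, x)[0]
rhs = quad(lambda t: t ** (m - 1) * phi(t), 0, x)[0] / comb(n + m, n)
print(lhs, rhs)                     # strict inequality: lhs > rhs

# With phi constant, both sides coincide (the equality case):
lhs_c = quad(lambda t: (1 - t / x) ** n * t ** (m - 1), 0, x)[0]
rhs_c = quad(lambda t: t ** (m - 1), 0, x)[0] / comb(n + m, n)
print(lhs_c, rhs_c)
```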
Next we prove Theorem \[t:RS\_measures\_rad\_decreasing\]. We follow the idea of the original proof of the Rogers-Shephard inequality [@RS1], with the main difference being the application of Lemma \[l:F\_nonposit\] in (\[e:proving\_RS\_polar\_lemma\]).
Let $f:\R^n\longrightarrow[0,\infty)$ be the function given by $$f(x)=\vol\bigl(K\cap(x+K)\bigr).$$ Observe that $\supp f=K-K$ and $f$ vanishes on ${\rm bd}(K-K)$. Furthermore, using the Brunn-Minkowski inequality together with the inclusion $$\label{eq:inclus}
K\cap\bigl[(1-\lambda)x+\lambda y+K\bigr]
\supset(1-\lambda)\bigl[K\cap(x+K)\bigr]+\lambda\bigl[K\cap(y+K)\bigr],$$ which holds for all $\lambda\in[0,1]$ and $x,y\in K-K$, we get that $f$ is $(1/n)$-concave.
On the one hand, by Fubini’s theorem, we have $$\label{e:proving_RS_fubini}
\begin{split}
\int_{K-K}f(x) \,{\mathrm{d}}\mu(x) & = \int_{\R^n}\int_{\R^n}\chi_{_K}(y)\chi_{_{y-K}}(x)\,\phi(x)\,{\mathrm{d}}y\,{\mathrm{d}}x\\
& =\int_{K}\mu(y-K)\,{\mathrm{d}}y=\vol(K)\,\overline{\mu}(-K).
\end{split}$$ On the other hand, we define the function $g:K-K\longrightarrow[0,\infty)$ given by $$g(x)=f(0)\left[1-\frac{|x|}{\rho_{_{\!K-K}}\bigl(x/|x|\bigr)} \right]^n,
\quad \text{for every } x\neq0,$$ and $g(0)=f(0)$, where $$\rho_{_{\!L}}(u)=\max\{\rho\geq 0:\rho u\in L\},\quad u\in\s^{n-1},$$ stands for the radial function of $L\in\K^n$. Notice that $g^{1/n}$ is affine on $\bigl[0,\rho_{_{\!K-K}}(u)u\bigr]$, for all $u\in\s^{n-1}$, and so $g(0)^{1/n}=f(0)^{1/n}$ and $$g\bigl(\rho_{_{\!K-K}}(u)u\bigr)^{1/n}=0=f\bigl(\rho_{_{\!K-K}}(u)u\bigr)^{1/n}.$$ Hence, since $f^{1/n}$ is concave, it follows that $f^{1/n}\geq g^{1/n}$ on $\bigl[0,\rho_{_{\!K-K}}(u)u\bigr]$. Therefore, using polar coordinates, we have $$\label{e:proving_RS_polar}
\begin{split}
\int_{K-K}f(x)\,{\mathrm{d}}\mu(x) &
=\int_{\s^{n-1}}\int_0^{\rho_{_{\!K-K}}(u)} r^{n-1} f(r u) \phi(r u)\,{\mathrm{d}}r\,{\mathrm{d}}\sigma(u)\\
& \geq f(0)\int_{\s^{n-1}}\int_0^{\rho_{_{\!K-K}}(u)} \left(1 - \frac{r}{\rho_{_{\!K-K}}(u)}\right)^n r^{n-1} \phi(r u) \,{\mathrm{d}}r \,{\mathrm{d}}\sigma(u).
\end{split}$$ Now, from and Lemma \[l:F\_nonposit\] we obtain $$\label{e:proving_RS_polar_lemma}
\begin{split}
\int_{K-K} f(x) \,{\mathrm{d}}\mu(x)
& \geq \frac{1}{\binom{2n}{n}}f(0)\int_{\s^{n-1}}\int_0^{\rho_{_{\!K-K}}(u)}r^{n-1}
\phi(r u)\,{\mathrm{d}}r\,{\mathrm{d}}\sigma(u)\\
& =\frac{1}{\binom{2n}{n}}\vol(K)\mu(K-K),
\end{split}$$ which, together with , yields $$\mu(K-K)\leq\binom{2n}{n}\overline{\mu}(-K).$$ By replacing $K$ with $-K$, we obtain the desired inequality.
Finally we notice that equality holds in (\[e:RS\_measures\_rad\_decreasing\]) only if there is equality in (\[e:proving\_RS\_polar\_lemma\]). This implies, by Lemma \[l:F\_nonposit\], that $\phi(ru)$ is constant on $\bigl(0,\rho_{_{\!K-K}}(u)\bigr)$ for $\sigma$-almost every $u\in\s^{n-1}$. Since $\phi$ is continuous at the origin, $\mu$ is a constant multiple of the Lebesgue measure on $K-K$ and, by Theorem \[t:RS\], $K$ is a simplex. The converse immediately follows from Theorem \[t:RS\].
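A one-dimensional numerical check of Theorem \[t:RS\_measures\_rad\_decreasing\] (a sketch; the interval $K=[0.5,2]$ and the standard Gaussian, whose density is radially decreasing, are arbitrary choices):

```python
from math import comb

from scipy.integrate import quad
from scipy.stats import norm

def mu(a, b):
    """Standard Gaussian measure of the interval [a, b]."""
    return norm.cdf(b) - norm.cdf(a)

def mu_bar(a, b):
    """Translated-average of mu over K = [a, b]."""
    return quad(lambda y: mu(a - y, b - y), a, b)[0] / (b - a)

a, b = 0.5, 2.0                     # K = [a, b]
lhs = mu(a - b, b - a)              # mu(K - K)
rhs = comb(2, 1) * min(mu_bar(a, b), mu_bar(-b, -a))   # binom(2n, n), n = 1
print(lhs, rhs)                     # lhs <= rhs
```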
From the proof of the equality case in the above result (and the corresponding one of Lemma \[l:F\_nonposit\]), we notice that the assumption of continuity at the origin for $\phi$ is necessary in order to ‘recover’ the Lebesgue measure (up to a constant). Indeed, one could consider a simplex $K$ and a function $\phi$ that is constant on $\bigl(0,\rho_{_{\!K-K}}(u)\bigr)$ for every $u\in\s^{n-1}$, but not necessarily constant on $K-K$, and thus (\[e:RS\_measures\_rad\_decreasing\]) would hold with equality.
The next theorem is obtained just by repeating the same argument given in the proof of Theorem \[t:RS\_measures\_rad\_decreasing\], but replacing $-K$ with $L$.
\[c:RS\_K\_L\] Let $K, L\in\K^n$ and let $\mu$ be a measure on $\R^n$ given by ${\mathrm{d}}\mu(x)=\phi(x)\,{\mathrm{d}}x$, where $\phi:\R^n\longrightarrow[0,\infty)$ is radially decreasing. Then $$\mu(K+L)\vol\bigl(K\cap(-L)\bigr)\leq \binom{2n}{n}\int_K\mu(x+L){\mathrm{d}}x.$$
\[c:RS\_measures\_rad\_decay\] As a straightforward consequence of Theorem \[t:RS\_measures\_rad\_decreasing\], we get the following statement. Let $K\in\K^n$ and let $\mu$ be a measure on $\R^n$ given by ${\mathrm{d}}\mu(x)=\phi(x)\,{\mathrm{d}}x$, where $\phi:\R^n\longrightarrow[0,\infty)$ is radially decreasing. Then $$\label{e:RS_measures_rad_decay_coro}
\mu(K-K)\leq \binom{2n}{n}
\min\left\{\sup_{x\in\R^n}\mu(x+K),\sup_{x\in\R^n}\mu(x-K)\right\}.$$
The above fact trivially holds in dimension $n=1$ for an arbitrary measure. Indeed, given $K=[a,b]$, then $$\begin{split}
\mu(K-K)=\mu\bigl([a-b,b-a]\bigr)&=\mu\bigl([a,b]-a\bigr)+\mu\bigl([a,b]-b\bigr)\\
&\leq2\min\left\{\sup_{x\in\R}\mu(x+K),\,\sup_{x\in\R}\mu(x-K)\right\}.
\end{split}$$ However, in dimension $n\geq 2$ the radial decay assumption cannot be omitted, as the following example shows.
\[ex:ring\] Fix $0<\varepsilon<\delta<2$. Consider the measure $\mu$ on $\R^2$ with density $$\phi(x)=\left\{\begin{array}{ll}
1 & \text{ if }x\in\delta B_2\cup\bigl(2B_2\setminus(2-\varepsilon)B_2\bigr),\\
0 & \text{ otherwise}
\end{array}\right.$$ (see Figure \[f:example\_dim\_2\_assump\_density\_needed\]). Then $$\label{e:ex_packing}
\mu(B_2-B_2)>6\sup_{x\in\R^2}\mu(x+B_2).$$ Note that (\[e:ex\_packing\]) contradicts (\[e:RS\_measures\_rad\_decay\_coro\]). Indeed, on the one hand, $$\mu(B_2-B_2)=\mu(2B_2)=\pi\delta^2+\bigl(4-(2-\varepsilon)^2\bigr)\pi=4\pi\varepsilon+\pi(\delta^2-\varepsilon^2).$$
![Constructing a measure for which (\[e:RS\_measures\_rad\_decay\_coro\]) does not hold.[]{data-label="f:example_dim_2_assump_density_needed"}](packing.pdf){width="3.8cm"}
On the other hand, we note that we need at least 6 copies of the unit disk in order to cover $\bd(2B_2)$, which can be seen by considering a regular hexagon inscribed in $2B_2$ (see Figure \[f:example\_dim\_2\_assump\_density\_needed\]). Moreover, if we were to cover $\bd(2B_2)$ with exactly $6$ translated copies of $B_2$, then the covering discs would stay away from the origin. Thus, for $\varepsilon>0$ small enough, $$\sup_{x\in\R^2}\vol\Bigl((x+B_2)\cap\bigl(2B_2\setminus(2-\varepsilon)B_2\bigr)\Bigr)=\frac{1}{6}4\pi\varepsilon+o(\varepsilon).$$ Taking, e.g., $\delta =\sqrt{\varepsilon}/100$ we get, for $\varepsilon$ small enough, that $\delta >\varepsilon$, and also that $4\pi\varepsilon/6>\pi\delta^2$ and $o(\varepsilon)<\delta^2$. Thus $$\begin{split}
6\sup_{x\in\R^2}\mu(x+B_2)&=6\sup_{x\in\R^2}\vol\Bigl((x+B_2)\cap\bigl(2B_2\setminus(2-\varepsilon)B_2\bigr)\Bigr)
=4\pi\varepsilon+o(\varepsilon)\\
&<4\pi\varepsilon+\pi(\delta^2-\varepsilon^2).
\end{split}$$ Moreover, since $\sup_{x\in\R^2}\mu(x+B_2)>\overline{\mu}(B_2)$, this example shows that the radial decay assumption is also needed in Theorem \[t:RS\_measures\_rad\_decreasing\].
Regarding a reverse inequality for Theorem \[t:RS\_measures\_rad\_decreasing\] (or (\[e:RS\_measures\_rad\_decay\_coro\])), we have the following result, which extends (\[e:B-M(K-K)\]).
\[t:R-S\_reverse\] Let $K\in\K^n$. Let $\mu$ be a measure on $\R^n$ given by ${\mathrm{d}}\mu(x)=\phi(x)\,{\mathrm{d}}x$, where $\phi:\R^n\longrightarrow[0,\infty)$ is an even quasi-concave function. Then $$\label{e:reverse_RS_quasi}
\mu(K-K)\geq\mu(2K).$$ Equality holds in (\[e:reverse\_RS\_quasi\]) only if $K\cap(\supp\phi)/2$ is centrally symmetric. Moreover, if $K$ is centrally symmetric with respect to the origin, then equality holds in (\[e:reverse\_RS\_quasi\]).
We write $\overline{K}_t=(2K)\cap\c_t(\phi)$ for every $t\in[0,1]$. On the one hand, by Fubini’s theorem, we have $$\label{e:proving_reverse_RS_quasi}
\begin{split}
\mu(2K) & = \int_{2K} \phi(x) \,{\mathrm{d}}x
=\|\phi\|_{\infty}\int_{2K}\int_0^{\frac{\phi(x)}{\|\phi\|_{\infty}}}\,{\mathrm{d}}t \,{\mathrm{d}}x
=\|\phi\|_{\infty} \int_0^1 \int_{2K} \chi_{_{\c_t(\phi)}}(x) \,{\mathrm{d}}x \,{\mathrm{d}}t\\
& =\|\phi\|_{\infty} \int_0^1 \vol\bigl(\overline{K}_t\bigr) \,{\mathrm{d}}t
\leq\|\phi\|_{\infty} \,2^{-n} \int_0^1 \vol\bigl(\overline{K}_t-\overline{K}_t\bigr) \,{\mathrm{d}}t,
\end{split}$$ where in the last inequality we have used the Brunn-Minkowski inequality (cf. (\[e:B-M\_ineq\])).
On the other hand, since $\phi$ is quasi-concave and even, then $\c_t(\phi)$ is convex and centrally symmetric (with respect to the origin), and hence $\overline{K}_t-\overline{K}_t\subset(2K-2K)\cap
2\c_t(\phi)=2\bigl((K-K)\cap\c_t(\phi)\bigr)$. Thus, we get $$\begin{split}
\mu(2K) &\leq \|\phi\|_{\infty} \,2^{-n} \int_0^1
\vol\bigl(\overline{K}_t-\overline{K}_t\bigr) \,{\mathrm{d}}t
\leq \|\phi\|_{\infty} \int_0^1 \vol \bigl( (K-K) \cap \c_t(\phi)\bigr) \,{\mathrm{d}}t\\
&= \|\phi\|_{\infty} \int_0^1 \int_{\R^n} \chi_{_{(K-K) \cap
\c_t(\phi)}}(x) \,{\mathrm{d}}x \,{\mathrm{d}}t = \mu(K-K).
\end{split}$$ For the equality case, we note that the identity $\mu(2K)=\mu(K-K)$ implies that (\[e:proving\_reverse\_RS\_quasi\]) holds with equality, and thus $\vol\bigl(\overline{K}_t\bigr)=2^{-n}\vol\bigl(\overline{K}_t-\overline{K}_t\bigr)$ for almost every $t\in[0,1]$. Then, there exists a decreasing sequence $(t_m)_m\subset[0,1]$ with $t_m\to 0$ and such that $\vol\bigl(\overline{K}_{t_m}\bigr)=2^{-n}\vol\bigl(\overline{K}_{t_m}-\overline{K}_{t_m}\bigr)$ for all $m\in\N$. Therefore, since the boundary of a convex set has null (Lebesgue) measure, we get $$\label{e:lower_bound}
\begin{split}
\vol\bigl((2K)\cap\supp\phi\bigr) &
=\vol\left(\bigcup_{m=1}^\infty\overline{K}_{t_m}\right)
=\lim_m \vol\bigl(\overline{K}_{t_m}\bigr)
=\lim_m 2^{-n}\vol\bigl(\overline{K}_{t_m}-\overline{K}_{t_m}\bigr)\\
& =2^{-n}\vol\left(\bigcup_{m=1}^\infty\bigl(\overline{K}_{t_m}-\overline{K}_{t_m}\bigr)\right)\\
& =2^{-n}\vol\Bigl(\bigl((2K)\cap\supp\phi\bigr)-\bigl((2K)\cap\supp\phi\bigr)\Bigr).
\end{split}$$ Since $\supp\phi$ is an $n$-dimensional convex set containing the origin, $\mu(2K)=\mu(K-K)>0$, and so $\vol\bigl((2K)\cap\supp\phi\bigr)>0$. Therefore (\[e:lower\_bound\]) implies that $(2K)\cap\supp\phi$ is centrally symmetric. The sufficient condition is evident.
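The reverse inequality of Theorem \[t:R-S\_reverse\] can be illustrated in dimension one (a sketch; $\phi(x)=e^{-|x|}$ is even and log-concave, hence quasi-concave, and $K=[0,1]$ is not symmetric, so strict inequality is expected):

```python
import math

from scipy.integrate import quad

phi = lambda x: math.exp(-abs(x))   # even, quasi-concave density
mu = lambda a, b: quad(phi, a, b)[0]

a, b = 0.0, 1.0                     # K = [0, 1]
lhs = mu(a - b, b - a)              # mu(K - K) = mu([-1, 1]) = 2(1 - 1/e)
rhs = mu(2 * a, 2 * b)              # mu(2K)    = mu([0, 2])  = 1 - e^{-2}
print(lhs, rhs)                     # strict: K is not centrally symmetric
```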
If we apply (\[e:reverse\_RS\_quasi\]) to the set $K'=K+x/2$ then $\mu(K-K)\geq\sup_{x\in\R^n}\mu(x+2K)$ also holds. We observe, however, that we cannot expect a general reverse inequality for (\[e:RS\_measures\_rad\_decay\_coro\]) in the non-even case, as the following example shows.
Let $\theta>0$ and consider $W_{\theta}=\bigl\{r(\cos t,\sin t):0\leq
t\leq\theta,\,r\geq 0\bigr\}\subset\R^2$. Let $\mu_{\theta}$ be the measure on $\R^2$ with density $\phi_{\theta}(x)=\chi_{_{W_\theta}}(x)$ (see Figure \[f:wedge\]).
![A construction for which $\mu(K-K)\to 0$.[]{data-label="f:wedge"}](pizzapie.pdf){width="7.8cm"}
By letting $\theta\to 0$, we can move a set $K$ arbitrarily far away while keeping the measure of the shifts of $K$ constant, whereas the measure of $K-K$ becomes arbitrarily small. So the left-hand side of (\[e:RS\_measures\_rad\_decay\_coro\]) tends to zero whereas the right-hand side stays fixed.
A way to strengthen inequality (\[e:RS\_measures\_rad\_decay\_coro\]) would be to replace $\mu(K-K)$ by the quantity $\sup_{\omega\in\R^n} \mu(K-K +\omega)$:
[**Question:**]{} Given a measure $\mu$ on $\R^n$, is it true that for every $K\in\K^n$ $$\label{e:question}
\sup_{\omega\in\R^n} \mu(K-K
+\omega)\leq\binom{2n}{n}\min\left\{\sup_{x\in\R^n}\mu(x+K),\sup_{x\in\R^n}\mu(x-K)\right\}?$$
The following result partially solves this question, in the setting of quasi-concave densities, by exploiting the approach carried out in the proof of Theorem \[t:RS\_measures\_rad\_decreasing\]. The idea relies on the possibility of finding a point, for each translated copy of $K-K$, from which the density is radially decreasing over the given translation of $K-K$. The drawback is the apparent necessity of including a factor $c(\omega)$ jointly with the measure of the shift of $K-K$. Nevertheless, we observe that the supremum on the right-hand side can be taken over $K$. In Section \[s:quasi\_concave\], we will provide a different solution to this issue (see Theorem \[t:RS\_omega\_quasiconcave\]).
\[t:RS\_omega\_rad\_decreasing\] Let $K\in\K^n$ and let $\mu$ be a measure on $\R^n$ given by ${\mathrm{d}}\mu(x)=\phi(x)\,{\mathrm{d}}x$, where $\phi:\R^n\longrightarrow[0,\infty)$ is a quasi-concave function whose restriction to its support is continuous. Then, for every $\omega\in\R^n$, $$\label{e:RS_omega_rad_decreasing}
c(\omega)\mu(K-K+\omega) \leq\binom{2n}{n}\sup_{y\in K}\mu(y+\omega-K),$$ where $c(\omega)=\vol\bigl(K\cap(\omega'-\omega+K)\bigr)\vol(K)^{-1}$, and $\omega'\in K-K+\omega$ is such that $\phi(\omega')=\max_{x\in
K-K+\omega}\phi(x)$. Moreover, equality holds for some $\omega_0\in\R^n$ if and only if $\mu$ is a constant multiple of the Lebesgue measure on $K-K+\omega_0$, $c(\omega_0)=1$ and $K$ is a simplex.
Let $f:\R^n\longrightarrow[0,\infty)$ be defined as $f(x)=\vol\bigl(K\cap(x-\omega+K)\bigr)$. As before, we get that $\supp f=K-K+\omega$ and $f$ is $(1/n)$-concave (see (\[e:B-M\_ineq\]) and (\[eq:inclus\])). On the one hand, by Fubini’s theorem, we have $$\label{e:proving_RS_fubini_2}
\int_{K-K+\omega}f(x) \,{\mathrm{d}}\mu(x)=
\int_{\R^n}\int_{\R^n}\chi_{_K}(y)\chi_{_{y+\omega-K}}(x)\,\phi(x)\,{\mathrm{d}}y\,{\mathrm{d}}x=\int_{K}\mu(y+\omega-K)\,{\mathrm{d}}y.$$ On the other hand, from the continuity of $\phi$ on $\supp\phi$, we know that there exists a point $\omega'\in(K-K+\omega)\cap\supp\phi$, which is a compact set, such that $\phi(\omega') = \max_{x\in K-K+\omega}\phi(x)$. This, together with the quasi-concavity of $\phi$, implies that it radially decays from $\omega'$ on $K-K+\omega$, i.e., $\phi\bigl(\omega'+t(x-\omega')\bigr)\geq\phi(x)$ for any $t\in[0,1]$ and all $x\in K-K+\omega$.
Now we define the function $g:K-K+\omega\longrightarrow[0,\infty)$ given by $$g(x)=f(\omega')\left[1-\frac{|x-\omega'|}{\rho_{_{\!K-K+\omega-\omega'}}\bigl((x-\omega')/|x-\omega'|\bigr)}
\right]^n, \quad \text{ for every } x\neq\omega',$$ and $g(\omega')=f(\omega')$. Since $f^{1/n}$ is concave, it follows that $f^{1/n}\geq g^{1/n}$ on $\bigl[\omega',\omega'+\rho_{_{\!K-K+\omega-\omega'}}(u)u\bigr]$, and so, via the polar coordinates $z=x-\omega'=ru$, we get $$\label{e:proving_RS_polar_2}
\begin{split}
& \int_{K-K+\omega} f(x) \,{\mathrm{d}}\mu(x) =\int_{K-K+\omega-\omega'} f(\omega'+z)\phi(\omega'+z)\,{\mathrm{d}}z\\
& =\int_{\s^{n-1}}\int_0^{\rho_{_{\!K-K+\omega-\omega'}}(u)}r^{n-1}f(\omega'+ru)\phi(\omega'+ru)\,{\mathrm{d}}r\,{\mathrm{d}}\sigma(u)\\
& \geq f(\omega')\int_{\s^{n-1}}\int_0^{\rho_{_{\!K-K+\omega-\omega'}}(u)}
\left[1-\frac{r}{\rho_{_{\!K-K+\omega-\omega'}}(u)}\right]^nr^{n-1}\phi(\omega'+ru)\,{\mathrm{d}}r \,{\mathrm{d}}\sigma(u).
\end{split}$$ Then Lemma \[l:F\_nonposit\] yields $$\label{e:proving_RS_omega_lemma}
\begin{split}
\int_{K-K+\omega} f(x) \,{\mathrm{d}}\mu(x)
& \geq\frac{f(\omega')}{\binom{2n}{n}}\int_{\s^{n-1}}\int_0^{\rho_{_{\!K-K+\omega-\omega'}}(u)}
r^{n-1} \phi(\omega'+r u)\,{\mathrm{d}}r\,{\mathrm{d}}\sigma(u)\\
& =\frac{1}{\binom{2n}{n}}\vol\bigl(K\cap(\omega'-\omega+K)\bigr)\mu(K-K+\omega),
\end{split}$$ which, together with (\[e:proving\_RS\_fubini\_2\]), gives $$\begin{split}
\mu(K-K+\omega)\vol\bigl(K\cap(\omega'-\omega+K)\bigr)
& \leq\binom{2n}{n}\int_{K}\mu(y+\omega-K)\,{\mathrm{d}}y\\
& \leq\binom{2n}{n}\vol(K)\sup_{y\in K}\mu(y+\omega-K).
\end{split}$$ Finally we notice that equality holds in (\[e:RS\_omega\_rad\_decreasing\]) for some $\omega_0\in\R^n$ only if there is equality in (\[e:proving\_RS\_omega\_lemma\]). This implies, by Lemma \[l:F\_nonposit\], that $\phi(\omega'+r u)$ is constant on $\bigl(0,\rho_{_{\!K-K+\omega_0-\omega'}}(u)\bigr)$ for $\sigma$-almost every $u\in\s^{n-1}$. Since $\phi$ is continuous at $\omega'\in\supp\phi$, $\mu$ is a constant multiple of the Lebesgue measure on $K-K+\omega_0$ and, by Theorem \[t:RS\], $K$ is a simplex (in particular, $c(\omega_0)=1$). The converse immediately follows from Theorem \[t:RS\].
The functional case
-------------------
In this subsection we draw a consequence of Theorem \[t:RS\_measures\_rad\_decreasing\] regarding integrals of quasi-concave functions, which extends two results of Colesanti [@Co Theorems 4.3 and 4.5] and is collected in Theorem \[t:pquasitheorem\]. To this end, given a quasi-concave function $f:\R^n\longrightarrow[0,\infty)$, we define the ($-\infty$)-difference of $f$, which remains quasi-concave (cf. [@Co Proposition 4.2]), by $$\Delta_{-\infty}f(z)=\sup_{z=x-y}\min\bigl\{f(x),f(y)\bigr\}.$$ Besides $\Delta_{-\infty} f$, we also consider the (difference) functions $\Delta_{-\infty,\theta} f$ (for some $\theta\in [0,1]$) and $\widetilde{\Delta}_{-\infty} f$ given by $$\begin{split}
\Delta_{-\infty,\theta} f(z) & =\sup_{z=(1-\theta)x-\theta y}\min\bigl\{f(x),f(y)\bigr\},\\
\widetilde{\Delta}_{-\infty} f(z) &
=\sup_{\substack{z=(1-\theta)x-\theta y\\\theta\in[0,1]}}\min\bigl\{f(x),f(y)\bigr\}.
\end{split}$$ These functions can be regarded as the (quasi-concave) functional counterparts of $K-K$, $(1-\theta)K-\theta K$ and $\conv\bigl(K\cup(-K)\bigr)$, respectively, as shown via their (strict) superlevel sets. For the sake of brevity we will write, for a function $f:\R^n\longrightarrow[0,\infty)$ and $t\in[0,\infty)$, $$S_{>t}(f)=\bigl\{x\in\R^n:f(x)>t\bigr\};$$ analogously, $S_{\geq t}(f)=\bigl\{x\in\R^n:f(x)\geq t\bigr\}$. We observe that if $f:\R^n\longrightarrow[0,\infty)$ is a quasi-concave function, then $$\label{e:superlevel_quasi}
\begin{split}
(i) & \qquad S_{>t}\bigl(\Delta_{-\infty}f\bigr)=S_{>t}(f)-S_{>t}(f),\\
(ii) & \qquad S_{>t}\bigl(\Delta_{-\infty,\theta}f\bigr)=(1-\theta)S_{>t}(f)-\theta S_{>t}(f),\\
(iii) & \qquad S_{>t}\left(\widetilde{\Delta}_{-\infty}f\right)=\conv\Bigl(S_{>t}(f)\cup\bigl(-S_{>t}(f)\bigr)\Bigr).
\end{split}$$ Indeed, the proofs of (i), (ii) and (iii) are completely analogous. To see (i), let $z\in S_{>t}\bigl(\Delta_{-\infty}f\bigr)$. Then there exist $x,y$ such that $z=x-y$ and $\min\bigl\{f(x),f(y)\bigr\}>t$, which shows the inclusion $$S_{>t}\bigl(\Delta_{-\infty}f\bigr)\subset S_{>t}(f)-S_{>t}(f).$$ For the reverse inclusion, if $z\in S_{>t}(f)-S_{>t}(f)$ then there exist $x,y\in\R^n$, with $z=x-y$, such that $f(x)>t$ and $f(y)>t$. Since $\min\bigl\{f(x),f(y)\bigr\}>t$ and $z=x-y$, we get that $\Delta_{-\infty}f(z)>t$, as desired.
Now we collect the above-mentioned consequence of Theorem \[t:RS\_measures\_rad\_decreasing\], which may be seen as its functional version.
\[t:pquasitheorem\] Let $f:\R^n\longrightarrow[0,\infty)$ be an integrable quasi-concave function. Let $\mu$ be a measure on $\R^n$ given by ${\mathrm{d}}\mu(x)=\phi(x)\,{\mathrm{d}}x$, where $\phi:\R^n\longrightarrow[0,\infty)$ is radially decreasing. Then $$\label{e:pquasitheorem}
\int_{\R^n}\Delta_{-\infty}f(x)\,{\mathrm{d}}\mu(x)\leq\binom{2n}{n}\int_0^{\infty}\min\Bigl\{\overline{\mu}\bigl(S_{\geq
t}(f)\bigr),\overline{\mu}\bigl(-S_{\geq t}(f)\bigr)\Bigr\}\,{\mathrm{d}}t.$$ In particular, by choosing ${\mathrm{d}}\mu(x)={\mathrm{d}}x$, the Lebesgue measure, we get $$\int_{\R^n}\Delta_{-\infty}f(x)\,{\mathrm{d}}x \leq
\binom{2n}{n}\int_{\R^n}f(x)\,{\mathrm{d}}x.$$
The proof follows the general ideas of those of [@Co Theorems 4.3 and 4.5]. Using Fubini’s theorem, together with (i) in (\[e:superlevel\_quasi\]), we may write $$\Delta_{-\infty}f(x)=\int_0^{\infty}\chi_{_{S_{>t}(f)-S_{>t}(f)}}(x)
\,{\mathrm{d}}t$$ and, consequently, $$\label{e:delta-inf}
\begin{split}
\int_{\R^n}\Delta_{-\infty}f(x)\,{\mathrm{d}}\mu(x) &
=\int_{\R^n}\int_0^{\infty}\chi_{_{S_{>t}(f)-S_{>t}(f)}}(x)\,{\mathrm{d}}t\,{\mathrm{d}}\mu(x)\\
& \leq\int_0^{\infty}\mu\bigl(S_{\geq t}(f)-S_{\geq
t}(f)\bigr)\,{\mathrm{d}}t.
\end{split}$$ Since $f$ is quasi-concave and integrable, the closures of the superlevel sets $S_{\geq t}(f)$ are convex bodies for all $0<t<\|f\|_{\infty}$. Thus, we may apply (\[e:RS\_measures\_rad\_decreasing\]) to $S_{\geq t}(f)$ (since the boundary of a convex set has null measure) which, together with (\[e:delta-inf\]), allows us to obtain (\[e:pquasitheorem\]).
Now we note that, if ${\mathrm{d}}\mu(x)=\,{\mathrm{d}}x$, then we have $$\min\Bigl\{\overline{\vol}\bigl(S_{\geq
t}(f)\bigr),\overline{\vol}\bigl(-S_{\geq
t}(f)\bigr)\Bigr\}=\vol\bigl(S_{\geq t}(f)\bigr),$$ which completes the proof.
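A grid-based check of the Lebesgue case in dimension $n=1$ (a sketch; the tent function below is an arbitrary quasi-concave choice, and for interval superlevel sets the one-dimensional inequality is in fact an equality):

```python
import numpy as np

h = 0.01
xs = np.arange(-2.0, 2.0 + h, h)
f = np.maximum(0.0, 1.0 - np.abs(xs - 0.5))   # quasi-concave, supp = [-0.5, 1.5]

# Delta_{-inf} f(z) = sup over z = x - y of min(f(x), f(y)), on the grid:
pair_min = np.minimum.outer(f, f)
delta = np.zeros(2 * len(xs) - 1)             # index shift encodes z = x_i - x_j
for i in range(len(xs)):
    for j in range(len(xs)):
        k = i - j + len(xs) - 1
        if pair_min[i, j] > delta[k]:
            delta[k] = pair_min[i, j]

lhs = delta.sum() * h                         # ~ int Delta_{-inf} f
rhs = 2 * f.sum() * h                         # binom(2, 1) * int f
print(lhs, rhs)                               # both close to 2
```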
Given a $p$-concave function $f:\R^n\longrightarrow[0,\infty)$, for $p\in
[-\infty,0)$, one can define the $p$-difference of $f$, which remains $p$-concave (cf. [@Co Proposition 4.2]), by $$\Delta_pf(z)=\sup_{z=x+y}\bigl(f(x)^p+f(-y)^p\bigr)^{1/p}
=\sup_{z=x-y}\bigl(f(x)^p+f(y)^p\bigr)^{1/p},$$ where the case $p=-\infty$ is understood as the minimum of both values.
Theorem \[t:pquasitheorem\] can be established for any $p\in(-\infty,0)$. It suffices to note that if $f$ is $p$-concave then it is also quasi-concave, and then we may apply inequality (\[e:pquasitheorem\]) for $p=-\infty$ together with the fact that $(a^p+b^p)^{1/p}\leq \min\{a,b\}$ for each $a,b\geq 0$.
Hence $\Delta_{p}f\leq \Delta_{-\infty}f$.
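The two facts just used, the scalar inequality $(a^p+b^p)^{1/p}\leq\min\{a,b\}$ for $p<0$ and the resulting pointwise bound $\Delta_pf\leq\Delta_{-\infty}f$, can be checked numerically; the following sketch (an illustration only, with a $(-1)$-concave test function of our choosing) does so on a grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# (a^p + b^p)^{1/p} <= min(a, b) for p < 0 and a, b > 0.
a = rng.uniform(0.1, 5.0, 1000)
b = rng.uniform(0.1, 5.0, 1000)
for p in (-0.5, -1.0, -3.0):
    assert np.all((a**p + b**p) ** (1.0 / p) <= np.minimum(a, b) + 1e-12)

# Hence Delta_p f <= Delta_{-inf} f pointwise.  Check it on a grid for
# f(x) = 1 / (1 + |x|), which is (-1)-concave (f^{-1} is convex).
p = -1.0
x = np.linspace(-2.0, 2.0, 401)
f = 1.0 / (1.0 + np.abs(x))

def sup_diff(combine):
    out = np.empty_like(x)
    for i, z in enumerate(x):
        fs = np.interp(x - z, x, f)          # f evaluated at u - z
        out[i] = combine(f, fs).max()
    return out

delta_p = sup_diff(lambda u, v: (u**p + v**p) ** (1.0 / p))
delta_inf = sup_diff(np.minimum)
print(delta_p.max(), delta_inf.max())
```

For this $f$ one has $\Delta_{-1}f(z)=1/(2+|z|)$ and $\Delta_{-\infty}f(0)=\|f\|_{\infty}=1$, so the gap between the two differences is visible already at $z=0$.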
\[r:functional\_NO\_RS\] As mentioned before, Theorem \[t:pquasitheorem\] is an application of Theorem \[t:RS\_measures\_rad\_decreasing\]. It is a natural and interesting question whether could be derived directly from previous functional versions such as . This is not possible by just considering $\chi_{_K}\,\phi$, because of item (i) in : the integral of $\Delta_{-\infty}f$ does not provide (in general) the measure of $K-K$ with respect to the density $\phi$.
Rogers-Shephard type inequalities for $CK$ and $\conv\bigl(K\cup(-K)\bigr)$ and their functional versions
---------------------------------------------------------------------------------------------------------
Now we prove the corresponding Rogers-Shephard type inequalities for $CK$ and $\conv\bigl(K\cup(-K)\bigr)$, as well as their equality cases.
Let $f:\R^n\times[0,1]\longrightarrow[0,\infty)$ be the function given by $$f(x,\theta)=\vol\Bigl(\bigl((1-\theta)K\bigr)\cap(x+\theta K)\Bigr).$$ Note that $f$ is $(1/n)$-concave by , and $\supp f=CK$. On the one hand, taking the measure $\mu_{n+1}$ on $\R^{n+1}$ given by ${\mathrm{d}}\mu_{n+1}(x,\theta)=\phi(x)\,{\mathrm{d}}x \,{\mathrm{d}}\theta$, Fubini’s theorem and the change of variable $z=(1-\theta)y$ yield $$\label{e:proving_RS_CK_fubini}
\begin{split}
\int_{CK}f(x,\theta)\,{\mathrm{d}}\mu_{n+1}(x,\theta) &
=\int_0^1\int_{\R^n}\vol\Bigl(\bigl((1-\theta)K\bigr)\cap(x+\theta K)\Bigr)\phi(x)\,{\mathrm{d}}x\,{\mathrm{d}}\theta\\
& =\int_0^1\int_{\R^n}\int_{\R^n}\chi_{_{(1-\theta)K}}(z)\chi_{_{x+\theta K}}(z)\,\phi(x)\,{\mathrm{d}}z\,{\mathrm{d}}x\,{\mathrm{d}}\theta\\
& =\int_0^1\int_{(1-\theta)K}\int_{\R^n}\chi_{_{z-\theta K}}(x)\,\phi(x)\,{\mathrm{d}}x\,{\mathrm{d}}z\,{\mathrm{d}}\theta\\
& =\int_0^1(1-\theta)^n\int_{K}\mu\bigl((1-\theta)y-\theta K\bigr){\mathrm{d}}y\,{\mathrm{d}}\theta\\
& \leq \vol(K)\int_0^1(1-\theta)^n\theta^n\,{\mathrm{d}}\theta
\sup_{\substack{y\in K\\\theta\in(0,1]}}\frac{\mu\bigl((1-\theta)y-\theta K\bigr)}{\theta^n}\\
& =\frac{1}{\binom{2n+1}{n}}\frac{\vol(K)}{n+1}
\sup_{\substack{y\in K\\\theta\in(0,1]}}\frac{\mu\bigl((1-\theta)y-\theta K\bigr)}{\theta^n}.
\end{split}$$ Now we define the function $g:CK\longrightarrow[0,\infty)$ given by $$g(x,\theta)=f\left(0,\frac{1}{2}\right)\left[1-\frac{\left|(x,\theta)-\bigl(0,\frac{1}{2}\bigr)\right|}{\rho_{_{\!CK-(0,\frac{1}{2})}}\Bigl(\bigl((x,\theta)-(0,\frac{1}{2})\bigr)/\bigl|(x,\theta)-(0,\frac{1}{2})\bigr|\Bigr)}
\right]^n,$$ for every $(x,\theta)\neq (0,1/2)$ and $g(0,1/2)=f(0,1/2)=\vol(K)/2^n$. Since $f^{1/n}$ is concave, then $f^{1/n}\geq g^{1/n}$ on $\left[(0,1/2),(0,1/2)+\rho_{_{\!CK-(0,\frac{1}{2})}}(u)u\right]$, and so, via the polar coordinates $(x,\theta')=(x,\theta)-(0,1/2)=ru$, we get
$$\label{e:proving_RS_CK_polar}
\begin{split}
\int_{CK}f(x, & \theta)\,{\mathrm{d}}\mu_{n+1}(x,\theta) =\int_{CK-(0,\frac{1}{2})} f\left(x,\theta'+\frac{1}{2}\right) \phi(x)\,{\mathrm{d}}x\,{\mathrm{d}}\theta'\\
& =\int_{\s^{n}}\int_0^{\rho_{_{\!CK-(0,\frac{1}{2})}}(u)} r^{n}f\left(\Bigl(0,\frac{1}{2}\Bigr)+ru\right)\phi\bigl(r P_Hu\bigr)\,{\mathrm{d}}r\,{\mathrm{d}}\sigma(u)\\
& \geq f\left(0,\frac{1}{2}\right)\int_{\s^{n}}\int_0^{\rho_{_{\!CK-(0,\frac{1}{2})}}(u)}
\left(1-\frac{r}{\rho_{_{\!CK-(0,\frac{1}{2})}}(u)}\right)^n r^{n}\phi\bigl(r P_Hu\bigr)\,{\mathrm{d}}r \,{\mathrm{d}}\sigma(u),
\end{split}$$
where $H=\bigl\{(x,\theta)\in\R^{n+1}:\theta=0\bigr\}$. Then, Lemma \[l:F\_nonposit\] yields $$\label{e:proving_RS_CK_polar_lemma}
\begin{split}
\int_{CK}f(x,\theta)\,{\mathrm{d}}\mu_{n+1}(x,\theta) &
\geq\frac{f\left(0,\frac{1}{2}\right)}{\binom{2n+1}{n}}\int_{\s^n}\int_0^{\rho_{_{\!CK-(0,\frac{1}{2})}}(u)}r^n
\phi\bigl(r P_Hu\bigr)\,{\mathrm{d}}r\,{\mathrm{d}}\sigma(u)\\
& =\frac{1}{\binom{2n+1}{n}}\frac{\vol(K)}{2^n}\mu_{n+1}(CK),
\end{split}$$ which, together with , gives .
Finally we notice that equality holds in only if there is equality in . This implies, by Lemma \[l:F\_nonposit\], that $\phi\bigl(r P_Hu\bigr)$ is constant on $\left(0,\rho_{_{\!CK-(0,\frac{1}{2})}}(u)\right)$ for $\sigma$-almost every $u\in\s^{n}$. Since $\phi$ is continuous at the origin, $\mu_{n+1}$ is a constant multiple of the Lebesgue measure on $CK$ and hence $\mu$ is so on $P_H(CK)=\conv\bigl(K\cup(-K)\bigr)$ because $\mu_{n+1}$ is a product measure. Since $(1-\theta)y-\theta K\subset CK$ for all $y\in K$ and any $\theta\in[0,1]$, there is equality in and therefore, by Theorem \[t:RS\_CK\], $K$ is a simplex. The converse is a direct consequence of Theorem \[t:RS\_CK\].
Now we prove . Note that $P_H\Bigl(CK\cap\bigl(\c_t(\phi)\times[0,1]\bigr)\Bigr)=\conv\bigl(K\cup(-K)\bigr)\cap\c_t(\phi)$ and, since $0\in K$, then $CK\cap\bigl(\c_t(\phi)\times[0,1]\bigr)\cap H^{\bot}=[0,1]$. Hence, Theorem \[t:RS\_section\_proy\] yields $(n+1)\vol_{n+1}\Bigl(CK\cap\bigl(\c_t(\phi)\times[0,1]\bigr)\Bigr)\geq
\vol\Bigl(\conv\bigl(K\cup(-K)\bigr)\cap\c_t(\phi)\Bigr)$, which, together with Fubini’s theorem, gives $$\begin{split}
\mu_{n+1}(CK)
& =\|\phi\|_{\infty}\int_{CK}\int_0^1\chi_{_{\c_t(\phi)}}(x)\,{\mathrm{d}}t\,{\mathrm{d}}x\,{\mathrm{d}}\theta\\
& =\|\phi\|_{\infty}\int_0^1\int_{CK}\chi_{_{\c_t(\phi)\times[0,1]}}(x,\theta)\,{\mathrm{d}}x \,{\mathrm{d}}\theta\,{\mathrm{d}}t\\
& =\|\phi\|_{\infty}\int_0^1\vol_{n+1}\Bigl(CK\cap\bigl(\c_t(\phi)\times[0,1]\bigr)\Bigr)\,{\mathrm{d}}t\\
& \geq\|\phi\|_{\infty}\frac{1}{n+1}\int_0^1\vol\Bigl(\conv\bigl(K\cup(-K)\bigr)\cap\c_t(\phi)\Bigr)\,{\mathrm{d}}t\\
& =\|\phi\|_{\infty}\frac{1}{n+1}\int_0^1\int_{\conv(K\cup(-K))}\chi_{_{\c_t(\phi)}}(x)\,{\mathrm{d}}x \, {\mathrm{d}}t\\
& =\|\phi\|_{\infty}\frac{1}{n+1}\int_{\conv(K\cup(-K))}\int_0^\frac{\phi(x)}{\|\phi\|_{\infty}}{\mathrm{d}}t\,{\mathrm{d}}x\\
& =\frac{1}{n+1}\int_{\conv(K\cup(-K))}\phi(x)\,{\mathrm{d}}x
=\frac{\mu\Bigl(\conv\bigl(K\cup(-K)\bigr)\Bigr)}{n+1}.
\end{split}$$ This, together with , shows . Equality in implies, in particular, equality in and thus $\mu$ is a constant multiple of the Lebesgue measure on $\conv\bigl(K\cup(-K)\bigr)$. The proof is now concluded from the equality case of .
Taking the function $f(x,\theta)=\vol\Bigl(\bigl((1-\theta)K\bigr)\cap\bigl(x+\theta(-L)\bigr)\Bigr)$, and arguing as in the proof of Theorem \[t:RS\_CK\_conv\_hull\_rad\_dec\], an analogous result can be obtained for two arbitrary convex bodies instead of $K$ and $-K$. Thus, if $K,L\in\K^n$ contain the origin and $\mu$ is a measure on $\R^n$ given by ${\mathrm{d}}\mu(x)=\phi(x)\,{\mathrm{d}}x$, where $\phi:\R^n\longrightarrow[0,\infty)$ is a radially decreasing function, then $$\label{e:R-S_CK_K,L}
\begin{split}
\frac{\mu\bigl(\conv(K\cup
L)\bigr)}{n+1}
&\leq\int_0^1\mu\bigl((1-\theta)K+\theta L\bigr)\,{\mathrm{d}}\theta\\
&\leq\frac{2^n}{n+1}\dfrac{\vol(K)}{\vol\bigl(K\cap(-L)\bigr)}\,\sup_{\substack{y\in K\\\theta\in(0,1]}}\frac{\mu\bigl((1-\theta)y+\theta L\bigr)}{\theta^n}.
\end{split}$$
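As a one-dimensional illustration of this chain (our own example, not part of the text), take $n=1$, $K=[-1/2,1]$, $L=[-1,1/2]$ and $\mu$ the Lebesgue measure; then $K\cap(-L)=K$, the middle integral equals $3/2$ for every $\theta$, and the upper bound is attained. The following sketch computes the three quantities.

```python
# n = 1, Lebesgue measure; K and L are intervals containing the origin.
K = (-0.5, 1.0)
L = (-1.0, 0.5)

def length(I):
    return I[1] - I[0]

conv_KL = (min(K[0], L[0]), max(K[1], L[1]))    # conv(K ∪ L) = [-1, 1]
lower = length(conv_KL) / 2                     # n + 1 = 2

# (1-t)K + tL is an interval of constant length |K| = |L| = 3/2, so the
# middle integral equals 3/2; approximate it by a Riemann sum.
middle = sum(length(((1 - t) * K[0] + t * L[0], (1 - t) * K[1] + t * L[1]))
             for t in (i / 1000 for i in range(1000))) / 1000

# By translation invariance, |(1-t)y + tL| / t = |L| for every y and t,
# and K ∩ (-L) = K here, so the upper bound reduces to (2/2) * 1 * |L|.
upper = (2 / 2) * (length(K) / length(K)) * length(L)
print(lower, middle, upper)
```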
As a consequence of Theorem \[t:RS\_CK\_conv\_hull\_rad\_dec\], we get in Theorem \[t:functional\_CK\_conv\_hull\] below functional versions of both and . Regarding another functional version of , in the log-concave setting, we refer the reader to [@Co Theorem 1.1]. The advantage of the inequality we present here is that, in contrast to the above-mentioned result, inequality may be recovered just by taking $f=\chi_{_K}$. We use here the same notation as for Theorem \[t:pquasitheorem\].
\[t:functional\_CK\_conv\_hull\] Let $f:\R^n\longrightarrow[0,\infty)$ be an integrable quasi-concave function. Let $\mu$ be a measure on $\R^n$ given by ${\mathrm{d}}\mu(x)=\phi(x)\,{\mathrm{d}}x$, where $\phi:\R^n\longrightarrow[0,\infty)$ is radially decreasing. Then $$\label{e:functional_CK}
\int_0^1\int_{\R^n}\Delta_{-\infty,\theta}f(x)\,{\mathrm{d}}\mu(x)\,{\mathrm{d}}\theta
\leq\frac{2^n}{n+1}\int_0^{\infty}\sup_{\substack{y\in S_{\geq
t}(f)\\\theta\in(0,1]}}\frac{\mu\bigl((1-\theta)y-\theta S_{\geq
t}(f)\bigr)}{\theta^n}\,{\mathrm{d}}t$$ and $$\label{e:functional_conv}
\int_{\R^n}\widetilde{\Delta}_{-\infty}f(x)\,{\mathrm{d}}\mu(x)\leq
2^n\int_0^{\infty}\sup_{\substack{y\in S_{\geq
t}(f)\\\theta\in(0,1]}}\frac{\mu\bigl((1-\theta)y-\theta S_{\geq
t}(f)\bigr)}{\theta^n}\,{\mathrm{d}}t.$$ In particular, by choosing ${\mathrm{d}}\mu(x)={\mathrm{d}}x$, the Lebesgue measure, we get $$\int_0^1\int_{\R^n}\Delta_{-\infty,\theta}f(x)\,{\mathrm{d}}x\,{\mathrm{d}}\theta
\leq\frac{2^n}{n+1}\int_{\R^n}f(x)\,{\mathrm{d}}x$$ and $$\int_{\R^n} \widetilde{\Delta}_{-\infty} f(x)\,{\mathrm{d}}x\leq 2^n \int_{\R^n}
f(x) \,{\mathrm{d}}x.$$
Since $f$ is quasi-concave and integrable, the closures of the superlevel sets $S_{\geq t}(f)$ are convex bodies for all $0<t<\|f\|_\infty$. Thus, we may apply Theorem \[t:RS\_CK\_conv\_hull\_rad\_dec\] to $S_{\geq t}(f)$ (since the boundary of a convex set has null measure) to obtain $$\int_0^1\mu\bigl((1-\theta)S_{>t}(f)-\theta S_{>t}(f)\bigr)\,{\mathrm{d}}\theta
\leq \frac{2^n}{n+1}\,\sup_{\substack{y\in S_{\geq
t}(f)\\\theta\in(0,1]}}\frac{\mu\bigl((1-\theta)y-\theta S_{\geq
t}(f)\bigr)}{\theta^n}$$ and $$\mu\Bigl(\conv\bigl(S_{>t}(f)\cup(-S_{>t}(f))\bigr)\Bigr)\leq
2^n\,\sup_{\substack{y\in S_{\geq
t}(f)\\\theta\in(0,1]}}\frac{\mu\bigl((1-\theta)y-\theta S_{\geq
t}(f)\bigr)}{\theta^n}.$$ Integrating on $t\in[0,\infty)$, and now follow by applying Fubini’s theorem together with (ii) and (iii) in , respectively. Finally, if ${\mathrm{d}}\mu(x)=\,{\mathrm{d}}x$, then we have $$\sup_{\substack{y\in S_{\geq
t}(f)\\\theta\in(0,1]}}\frac{\vol\bigl((1-\theta)y-\theta S_{\geq
t}(f)\bigr)}{\theta^n}=\vol\bigl(S_{\geq t}(f)\bigr).$$ This concludes the proof.
A projection-section inequality for quasi-concave functions {#s:functions}
===========================================================
We start this section by showing a general result for functions that will be exploited throughout the rest of the paper.
\[l:integral ineqs\] Let $\mu$ be a measure on $\R^n$ given by ${\mathrm{d}}\mu(x)=\phi(x)\,{\mathrm{d}}x$, where $\phi:\R^n\longrightarrow[0,\infty)$ is quasi-concave and such that $\|\phi\|_{\infty}=\phi(0)$. Let $f:\R^n\longrightarrow[0,\infty)$ be a $p$-concave function, $p>0$, with $\|f\|_{\infty}=f(0)$, and let $g:\R^n\longrightarrow[0,\infty)$ be a measurable function. Then $$\label{e:int(suppf_C_theta)_final}
\int_{\supp f}\int_0^1(1-\theta^{p})^n
g\bigl((1-\theta^{p})x\bigr)\,{\mathrm{d}}\theta\,{\mathrm{d}}\mu(x)
\leq\dfrac{1}{\|f\|_{\infty}}\int_{\supp f}g(x)f(x)\,{\mathrm{d}}\mu(x).$$ Moreover, if $\supp f$ is bounded, $g$ is non-zero on $\supp f$ and $\phi$ is continuous at the origin, equality in implies that $\mu$ is a constant multiple of the Lebesgue measure on $\supp f$.
Since $f$ is $p$-concave, then $\c_{\theta}(f)$ is a convex set for every $\theta\in[0,1]$. We notice that $$\label{e:C_theta1_C_theta2}
\dfrac{\c_{\theta_1}(f)}{1-\theta_1^p}\subset\dfrac{\c_{\theta_2}(f)}{1-\theta_2^p}$$ for $0\leq\theta_1\leq\theta_2<1$. In particular, taking $\theta_1=0$, we have $$\label{e:suppf_C_theta}
\supp f\subset\frac{1}{1-\theta^p}\c_{\theta}(f)\quad\text{ for any }\;
\theta\in[0,1),$$ and hence $$\label{e:key_inclusion_for_eq}
(\supp
f)\cap\c_t(\phi)\subset\left(\frac{1}{1-\theta^p}\c_{\theta}(f)\right)\cap\c_t(\phi)
\subset\frac{\c_{\theta}(f)\cap\c_t(\phi)}{1-\theta^p}$$ for all $\theta\in[0,1)$ and every $t\in[0,1]$. Therefore $$\bigl(1-\theta^p\bigr)\bigl[(\supp f)\cap\c_t(\phi)\bigr]
\subset\c_{\theta}(f)\cap\c_t(\phi),$$ which yields $$\label{e:int(suppf_C_theta)}
\int_0^1\int_0^1\int_{(1-\theta^p)[(\supp f)\cap\c_t(\phi)]}g(x)\,{\mathrm{d}}x\,{\mathrm{d}}\theta\,{\mathrm{d}}t
\leq\int_0^1\int_0^1\int_{\c_{\theta}(f)\cap\c_t(\phi)}g(x)\,{\mathrm{d}}x\,{\mathrm{d}}\theta\,{\mathrm{d}}t.$$ Now we compute both sides of inequality (\[e:int(suppf\_C\_theta)\]). On the one hand, by Fubini’s theorem and the change of variable $x=\bigl(1-\theta^p\bigr)y$, we get $$\begin{split}
\int_0^1\int_0^1 & \int_{(1-\theta^p)[(\supp f)\cap\c_t(\phi)]}g(x)\,{\mathrm{d}}x\,{\mathrm{d}}\theta\,{\mathrm{d}}t \\
& =\int_0^1\int_0^1\int_{(\supp f)\cap\c_t(\phi)}g\bigl((1-\theta^p)y\bigr)
(1-\theta^p)^n\,{\mathrm{d}}y\,{\mathrm{d}}\theta\,{\mathrm{d}}t\\
& =\int_{\supp f}\int_0^1(1-\theta^p)^ng\bigl((1-\theta^p)y\bigr)
\int_0^1\chi_{_{\c_t(\phi)}}(y)\,{\mathrm{d}}t\,{\mathrm{d}}\theta\,{\mathrm{d}}y\\
& =\int_{\supp f}\int_0^1(1-\theta^p)^ng\bigl((1-\theta^p)y\bigr)
\frac{\phi(y)}{\|\phi\|_{\infty}}\,{\mathrm{d}}\theta\,{\mathrm{d}}y\\
& =\dfrac{1}{\|\phi\|_{\infty}}\int_{\supp f}\int_0^1(1-\theta^p)^n
g\bigl((1-\theta^p)y\bigr)\,{\mathrm{d}}\theta\,{\mathrm{d}}\mu(y).
\end{split}$$ On the other hand, using again Fubini’s theorem, $$\begin{split}
\int_0^1\int_0^1\int_{\c_{\theta}(f)\cap\c_t(\phi)}g(x)
\,{\mathrm{d}}x\,{\mathrm{d}}\theta\,{\mathrm{d}}t
& =\int_0^1\int_0^1\int_{\R^n}g(x)\,\chi_{_{\c_{\theta}(f)}}(x)\chi_{_{\c_t(\phi)}}(x)\,{\mathrm{d}}x\,{\mathrm{d}}\theta\,{\mathrm{d}}t\\
& =\int_{\R^n}g(x)\int_0^1\chi_{_{\c_t(\phi)}}(x)\int_0^1\chi_{_{\c_{\theta}(f)}}(x)\,{\mathrm{d}}\theta\,{\mathrm{d}}t\,{\mathrm{d}}x\\
& =\int_{\supp f}g(x)\dfrac{f(x)}{\|f\|_{\infty}}\dfrac{\phi(x)}{\|\phi\|_{\infty}}\,{\mathrm{d}}x\\
& =\dfrac{1}{\|f\|_{\infty}\|\phi\|_{\infty}}\int_{\supp f}g(x)\,f(x)\,{\mathrm{d}}\mu(x).
\end{split}$$ Thus, follows from inequality (\[e:int(suppf\_C\_theta)\]).
Now we deal with the equality case. First we observe that since $\supp f$ is a bounded set and $f$ is $p$-concave, then $\c_{\theta}(f)$ is a bounded convex set for all $\theta\in[0,1)$.
Without loss of generality we may assume that $\phi$ is upper semicontinuous. Indeed, otherwise we would work with its upper closure, which is determined via the closure of the superlevel sets of $\phi$ (see [@RoWe page 14 and Theorem 1.6]) and thus defines the same measure because of Fubini’s theorem together with the facts that all the superlevel sets of $\phi$ are convex (since it is quasi-concave) and the boundary of a convex set has null (Lebesgue) measure. Then its superlevel sets $\c_t(\phi)$ are closed (cf. [@RoWe Theorem 1.6]) for every $t\in[0,1]$. In the same way, $f$ may be assumed to be upper semicontinuous (in fact, it is already continuous in the interior of its support, because of the $p$-concavity). Moreover, since the definitions of both $\c_\theta(f)$ and $\c_t(\phi)$ involve the essential supremum, these superlevel sets have positive volume for all $\theta<1$ and $t<1$, and therefore both $\c_\theta(f)$ and $\c_t(\phi)$ are closed convex sets with non-empty interior, for any $\theta,t\in[0,1)$. From the continuity of $\phi$ at the origin, we know that $0\in\operatorname{int}\c_t(\phi)$ for all $t<1$ and then $0\in\c_\theta(f)\cap\operatorname{int}\c_t(\phi)$ because $f(0)=\|f\|_{\infty}$. Hence, and taking into account that $\supp f$ (and thus $\c_\theta(f)$ for any $\theta\in[0,1]$) is bounded, both $\c_\theta(f)\cap(1-\theta^{p})\c_t(\phi)$ and $\c_\theta(f)\cap\c_t(\phi)$ are convex bodies for all $\theta,t\in[0,1)$.
Thus, if equality holds in then, in particular, there is equality in the right-hand inclusion of for almost all $\theta\in[0,1]$ and almost all $t\in[0,1]$, because $g>0$ on $\supp f$.
Let us assume that there exists $x_0\in\supp f$ such that $\phi(x_0)<\|\phi\|_{\infty}$. Taking $t\in\bigl(\phi(x_0)/\|\phi\|_{\infty},1\bigr]$, since $x_0\not\in\c_t(\phi)$ we have that $$(\supp f)\cap\c_t(\phi)\subsetneq\supp f.$$ Let $x_t\in\bd\bigl((\supp f)\cap\c_t(\phi)\bigr)\backslash\bd(\supp f)$. Since both sets are convex bodies, we can always take $x_t\neq 0$. Then, for all $t\in\bigl(\phi(x_0)/\|\phi\|_{\infty},1\bigr]$, the continuity of $f$ on $\operatorname{int}(\supp f)$ yields the existence of $\theta_t\in(0,1)$ such that $$x_t\in\c_{\theta}(f)\cap\c_t(\phi)\quad\text{ for all }\;
\theta\in[0,\theta_t).$$ However, since $x_t\in\bd\c_t(\phi)$ and $0\in\operatorname{int}\c_t(\phi)$, $$x_t\not\in\c_{\theta}(f)\cap\bigl(1-\theta^p\bigr)\c_t(\phi).$$ This contradicts the equality in the right-hand inclusion of for almost every $\theta\in[0,1]$ and $t\in[0,1]$.
Therefore we may conclude that $\phi(x)\geq\|\phi\|_{\infty}$ for all $x\in\supp f$ and thus $\phi\equiv\|\phi\|_{\infty}$ almost everywhere on $\supp f$. This implies that $\mu$ is a constant multiple of the Lebesgue measure on $\supp f$.
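A numerical illustration of the proposition (with test data of our choosing, not part of the proof): take $n=1$, $p=1$, $f(x)=\max\{0,1-|x|\}$, $\phi(x)=1/(1+x^2)$ and $g(x)=x^2+1/10$; all hypotheses hold, with $\|f\|_{\infty}=f(0)=1$ and $\|\phi\|_{\infty}=\phi(0)=1$.

```python
import numpy as np

# n = 1, p = 1:  f(x) = max(0, 1-|x|) is 1-concave with peak at 0,
# phi(x) = 1/(1+x^2) is quasi-concave with ||phi||_inf = phi(0) = 1,
# g(x) = x^2 + 0.1 is positive and measurable.  supp f = [-1, 1].
x = np.linspace(-1.0, 1.0, 1001)
th = np.linspace(0.0, 1.0, 1001)
dx, dth = x[1] - x[0], th[1] - th[0]

f = 1.0 - np.abs(x)
phi = 1.0 / (1.0 + x**2)
g = lambda t: t**2 + 0.1

# inner(x) = int_0^1 (1 - th) g((1 - th) x) dth   (here p = 1, n = 1)
w = 1.0 - th
inner = np.array([(w * g((1.0 - th) * xi)).sum() * dth for xi in x])

lhs = (inner * phi).sum() * dx          # left-hand side of the inequality
rhs = (g(x) * f * phi).sum() * dx       # right-hand side, ||f||_inf = 1
print(lhs, rhs)
```

Analytically the inner integral equals $x^2/4+1/20$, giving a left-hand side of about $0.186$ against a right-hand side of about $0.210$; the inequality is strict here, consistent with the equality case, since $\phi$ is not constant on $\supp f$.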
It is an interesting question whether Proposition \[l:integral ineqs\] can be adapted to log-concave functions, i.e., when $p=0$. We notice that the above approach cannot be followed in this case. Indeed, considering e.g. the function $f:\R\longrightarrow[0,\infty)$ given by $f(x)=e^{-x^2}$, we have that $\supp f=\R$ whereas $\c_{\theta}(f)$ is a convex body for all $\theta\in(0,1]$. Hence, there is no hope of obtaining an inclusion of the type , i.e., $\lambda(\theta)\supp
f\subset\c_{\theta}(f)$ for any $\theta\in[0,1]$ and some $\lambda(\theta)>0$.
In what follows we use Proposition \[l:integral ineqs\] to prove several results, including Theorem \[t:functional\_RS\]. Let us first introduce a helpful family of constants and notice a few facts. We denote by $$\alpha^n_{p,q}=\int_0^1(1-\theta^p)^n\,\theta^{p\,q}\,{\mathrm{d}}\theta
=\dfrac{\Gamma\left(\frac{1}{p}+q\right)\Gamma(1+n)}{p\,\Gamma\left(1+n+\frac{1}{p}+q\right)},$$ for each $p,q>0$. Let us assume that $g$ is concave. Then $$g\bigl((1-\theta^p)x\bigr)\geq\theta^pg(0)+(1-\theta^p)g(x),$$ and so, we get from that $$\label{e:int(suppf_C_theta)_g_concave}
\alpha^n_{p,1}\,g(0)\,\mu(\supp f)+\alpha^{n+1}_{p,0}\int_{\supp
f}g(x)\,{\mathrm{d}}\mu(x)\leq\dfrac{1}{\|f\|_{\infty}}\int_{\supp
f}g(x)\,f(x)\,{\mathrm{d}}\mu(x).$$ Another possibility is assuming that $g$ is radially decreasing. Then, from , we get $$\label{e:int(suppf_C_theta)_g_rad_dec}
\alpha^n_{p,0}\int_{\supp f}g(x)\,{\mathrm{d}}\mu(x)
\leq\dfrac{1}{\|f\|_{\infty}}\int_{\supp f}g(x)\,f(x)\,{\mathrm{d}}\mu(x).$$ We point out that $\alpha^n_{p,0}=\alpha^n_{p,1}+\alpha^{n+1}_{p,0}$, which shows that the expression on the left-hand side of and that of are in a sense “similar”, as shown by considering the constant function $g(x)=1$. Indeed, when $g\equiv
1$, reads $$\label{e:int(suppf_C_theta)_g_uno}
\alpha^n_{p,0}\,\mu(\supp f)\leq\dfrac{1}{\|f\|_{\infty}}\int_{\supp
f}f(x)\,{\mathrm{d}}\mu(x).$$ Moreover, it can be proved that remains true even in the more general case when $\|f\|_{\infty}=f(x_0)$ for an arbitrary $x_0\in\R^n$, and without the maximality assumption for $\phi$.
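The Gamma-function evaluation of $\alpha^n_{p,q}$, the identity $\alpha^n_{p,0}=\alpha^n_{p,1}+\alpha^{n+1}_{p,0}$, and the value $\alpha^{n-k}_{1/k,0}=\binom{n}{k}^{-1}$ used later in the paper can all be verified numerically; the following sketch (an illustration only) checks them for a few parameter choices.

```python
import math

def alpha(n, p, q):
    """Closed form alpha^n_{p,q} = Gamma(1/p+q) Gamma(1+n) / (p Gamma(1+n+1/p+q))."""
    return (math.gamma(1.0 / p + q) * math.gamma(1 + n)
            / (p * math.gamma(1 + n + 1.0 / p + q)))

def alpha_quad(n, p, q, m=200_000):
    """Midpoint quadrature of int_0^1 (1 - t^p)^n t^(p q) dt."""
    h = 1.0 / m
    return sum((1.0 - ((i + 0.5) * h) ** p) ** n * ((i + 0.5) * h) ** (p * q)
               for i in range(m)) * h

# Closed form vs. direct integration.
assert abs(alpha(3, 0.5, 1.0) - alpha_quad(3, 0.5, 1.0)) < 1e-6

# alpha^n_{p,0} = alpha^n_{p,1} + alpha^{n+1}_{p,0}.
assert abs(alpha(3, 0.5, 0) - alpha(3, 0.5, 1) - alpha(4, 0.5, 0)) < 1e-12

# alpha^{n-k}_{1/k,0} = 1 / binom(n, k), used in the proofs below.
n, k = 5, 2
assert abs(alpha(n - k, 1.0 / k, 0) - 1.0 / math.comb(n, k)) < 1e-12
print(alpha(3, 0.5, 1.0), alpha(n - k, 1.0 / k, 0))
```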
\[c:|f|\_x0\] Let $f:\R^n\longrightarrow[0,\infty)$ be a $p$-concave function, $p>0$, with $\|f\|_{\infty}=f(x_0)$ for some $x_0\in\R^n$, and let $\mu$ be a measure on $\R^n$ given by ${\mathrm{d}}\mu(x)=\phi(x)\,{\mathrm{d}}x$, where $\phi:\R^n\longrightarrow[0,\infty)$ is a bounded quasi-concave function. Then $$\label{e:mu(suppf)_x0}
\alpha^n_{p,0}\frac{\phi(x_0)}{\|\phi\|_{\infty}}\,\mu(\supp
f)\leq\dfrac{1}{\|f\|_{\infty}}\int_{\supp f}f(x)\,{\mathrm{d}}\mu(x).$$ Moreover, if $\supp f$ is bounded and $\phi$ is continuous at $x_0$, equality in implies that $\mu$ is a constant multiple of the Lebesgue measure on $\supp f$.
The proof follows similar steps to those of Proposition \[l:integral ineqs\], with some key variations, which we highlight below.
We consider the function $\psi:\R^n\longrightarrow[0,\infty)$ given by $\psi(x)=f(x+x_0)$, which satisfies $\|\psi\|_{\infty}=\|f\|_{\infty}$ and $\supp\psi=(\supp f)-x_0$. Then (cf. ) $$\label{e:suppf_C_theta2}
\supp\psi\subset\frac{1}{1-\theta^p}\c_{\theta}(\psi)\quad\text{ for all
}\; \theta\in[0,1).$$ We observe that $y\in\c_{\theta}(\psi)$ if and only if $f(y+x_0)\geq
\theta\|f\|_{\infty}$, or equivalently, when $y+x_0\in\c_{\theta}(f)$. Hence, $\c_{\theta}(\psi)+x_0=\c_{\theta}(f)$, and thus turns into $$(\supp
f)-x_0\subset\frac{1}{1-\theta^p}\bigl(\c_{\theta}(f)-x_0\bigr)\quad\text{
for all }\; \theta\in[0,1).$$ Therefore $$\label{e:key_inclusion_for_eq2}
\begin{split}
\bigl((\supp f)-x_0\bigr)\cap\bigl(\c_t(\phi)-x_0\bigr) &
\subset\left(\frac{1}{1-\theta^p}\bigl(\c_{\theta}(f)-x_0\bigr)\right)\cap\bigl(\c_t(\phi)-x_0\bigr)\\
& \subset\frac{1}{1-\theta^p}\Bigl(\bigl[\c_{\theta}(f)\cap\c_t(\phi)\bigr]-x_0\Bigr)
\end{split}$$ for all $\theta\in[0,1)$ and every $t\in\bigl[0,\phi(x_0)/\|\phi\|_{\infty}\bigr]$, where in the last inclusion we have used that $x_0\in\c_t(\phi)$. Consequently, we obtain $$\label{e:incl_I}
(1-\theta^p)\Bigl(\bigl[(\supp
f)\cap\c_t(\phi)\bigr]-x_0\Bigr)\subset\bigl(\c_{\theta}(f)\cap\c_t(\phi)\bigr)-x_0.$$ Next, integrating over $x\in\R^n$ the constant function $1$, using (\[e:incl\_I\]) and the change of variable $x=(1-\theta^p)y$, we get $$(1-\theta^p)^n\int_{[(\supp f)\cap\c_t(\phi)]-x_0}{\mathrm{d}}y
\leq\int_{[\c_{\theta}(f)\cap\c_t(\phi)]-x_0}{\mathrm{d}}y,$$ which yields $$\label{e:int_supp_Ct<int_Ctheta_Ct}
(1-\theta^p)^n\int_{(\supp f)\cap\c_t(\phi)}{\mathrm{d}}x
\leq\int_{\c_{\theta}(f)\cap\c_t(\phi)}{\mathrm{d}}x.$$ Now, computing the left-hand side in , we get $$\begin{split}
\alpha^n_{p,0}\frac{\phi(x_0)}{\|\phi\|_{\infty}}\,\mu(\supp f)
& =\alpha^n_{p,0}\|\phi\|_{\infty}\int_{\supp f}\frac{\phi(x_0)}{\|\phi\|_{\infty}}\,\frac{\phi(x)}{\|\phi\|_{\infty}}\,{\mathrm{d}}x\\
& \leq\|\phi\|_{\infty}\int_0^1(1-\theta^p)^n\,{\mathrm{d}}\theta
\int_{\supp f}\min\left\{\frac{\phi(x)}{\|\phi\|_{\infty}},\frac{\phi(x_0)}{\|\phi\|_{\infty}}\right\}\,{\mathrm{d}}x\\
& =\|\phi\|_{\infty}\int_0^1\int_0^{\frac{\phi(x_0)}{\|\phi\|_{\infty}}}
(1-\theta^p)^n\int_{(\supp f)\cap\c_t(\phi)} {\mathrm{d}}x\,{\mathrm{d}}t\,{\mathrm{d}}\theta.
\end{split}$$ Applying we obtain the desired inequality. Indeed from the above computation we get $$\begin{split}
\alpha^n_{p,0}\frac{\phi(x_0)}{\|\phi\|_{\infty}}\,\mu(\supp f) &
\leq\|\phi\|_{\infty}\int_0^1\int_0^{\frac{\phi(x_0)}{\|\phi\|_{\infty}}}\int_{\c_{\theta}(f)\cap\c_t(\phi)}{\mathrm{d}}x\,{\mathrm{d}}t\,{\mathrm{d}}\theta\\
& =\frac{\|\phi\|_{\infty}}{\|f\|_{\infty}}\int_{\supp f}f(x)\int_0^{\frac{\phi(x_0)}{\|\phi\|_{\infty}}}\chi_{_{\c_t(\phi)}}(x)\,{\mathrm{d}}t\,{\mathrm{d}}x\\
& \leq\frac{\|\phi\|_{\infty}}{\|f\|_{\infty}}\int_{\supp f}f(x)\int_0^1\chi_{_{\c_t(\phi)}}(x)\,{\mathrm{d}}t\,{\mathrm{d}}x\\
&=\frac{1}{\|f\|_{\infty}}\int_{\supp f}f(x)\,{\mathrm{d}}\mu(x).
\end{split}$$ For the proof of the equality case we observe, on the one hand, that if equality holds in then, in particular, $$\int_{\supp
f}f(x)\int_{\frac{\phi(x_0)}{\|\phi\|_{\infty}}}^1\chi_{_{\c_t(\phi)}}(x)\,{\mathrm{d}}t\,{\mathrm{d}}x=0,$$ which yields $\phi(x_0)=\operatorname*{ess\,sup}_{x\in \supp f}\phi(x)$.
On the other hand, we may replace $\|\phi\|_{\infty}$ by $\operatorname*{ess\,sup}_{x\in
\supp f}\phi(x)$ in the above argument to get also $$\label{e:remark_lemma}
\alpha_{p,0}^n\frac{\phi(x_0)}{\operatorname*{ess\,sup}_{x\in \supp f}\phi(x)}\,\mu(\supp
f) \leq\frac{1}{\|f\|_{\infty}}\int_{\supp f}f(x)\,{\mathrm{d}}\mu(x),$$ and since $$\alpha^n_{p,0}\frac{\phi(x_0)}{\|\phi\|_{\infty}}\,\mu(\supp
f)\leq\alpha_{p,0}^n\,\frac{\phi(x_0)}{\operatorname*{ess\,sup}_{x\in \supp
f}\phi(x)}\,\mu(\supp f)=\alpha_{p,0}^n\,\mu(\supp f),$$ equality in implies that $\phi(x_0)=\|\phi\|_{\infty}$.
Finally, due to the fact that $\phi(x_0)=\|\phi\|_{\infty}$, the rest of the proof of the equality case is entirely analogous to the one in Proposition \[l:integral ineqs\], and we do not repeat it here.
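For illustration (a check of our own, not part of the text), take $n=1$ and $p=1$, so $\alpha^1_{1,0}=1/2$, with $f$ the triangle on $[0,1]$ peaking at $x_0=1/2$ and $\phi(x)=e^{-x^2}$, which is quasi-concave but not maximal at $x_0$:

```python
import numpy as np

# n = 1, p = 1, alpha^1_{1,0} = 1/2.  f is the triangle on [0, 1] with
# ||f||_inf = f(x0) = 1 at x0 = 1/2; phi(x) = exp(-x^2) has ||phi||_inf = 1.
x = np.linspace(0.0, 1.0, 2001)         # supp f = [0, 1]
dx = x[1] - x[0]
x0 = 0.5
f = np.maximum(0.0, 1.0 - 2.0 * np.abs(x - x0))
phi = np.exp(-x**2)

alpha_10 = 0.5                          # int_0^1 (1 - t) dt
lhs = alpha_10 * np.exp(-x0**2) * phi.sum() * dx   # alpha * phi(x0) * mu(supp f)
rhs = (f * phi).sum() * dx                          # (1/||f||) int f dmu
print(lhs, rhs)
```

One gets roughly $0.29\leq 0.38$; the inequality is strict, consistent with the equality case, since $\phi$ is not constant on $[0,1]$.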
As an application of Proposition \[l:integral ineqs\], and the above-mentioned consequences of it, we show Theorem \[t:functional\_RS\].
For all $t\in[0,1]$, the function $\varphi_t:P_H\c_t(f)\longrightarrow[0,\infty)$ given by $$\varphi_t(x)=\vol_k\bigl(\c_t(f)\cap(x+H^{\bot})\bigr)$$ is ($1/k$)-concave, because of the Brunn-Minkowski inequality , and $\supp\varphi_t=P_H\c_t(f)$. By hypothesis we have $\|\varphi_t\|_{\infty}=\varphi_t(0)$. Then, by applying to $\varphi_t$, we get $$\label{e:gf_t}
\alpha_{1/k,0}^{n-k}\int_{P_H\c_t(f)}g(x)\,{\mathrm{d}}x
\leq\dfrac{1}{\|\varphi_t\|_{\infty}}\int_{H}g(x)\,\varphi_t(x)\,{\mathrm{d}}x$$ and hence, integrating each side of inequality (\[e:gf\_t\]) over $t\in[0,1]$ and noticing that $\alpha_{1/k,0}^{n-k}=\binom{n}{k}^{-1}$, it follows that $$\label{e:gf_ineq_para_phi_t_2}
\int_0^1\int_{P_H\c_t(f)}g(x)\,{\mathrm{d}}x
\int_{H^{\bot}}\chi_{_{\c_t(f)}}(y){\mathrm{d}}y\,{\mathrm{d}}t
\leq\binom{n}{k}\int_0^1\int_{H}g(x)\int_{x+H^{\bot}}\chi_{_{\c_t(f)}}(y){\mathrm{d}}y\,{\mathrm{d}}x\,{\mathrm{d}}t.$$ On the one hand, by Fubini’s theorem and noticing that $$P_H\c_t(f)\supset P_H\Bigl(\bigl\{x\in\R^n: f(x)>
t\|f\|_{\infty}\bigr\}\Bigr)=\bigl\{x\in H: P_Hf(x)>
t\|f\|_{\infty}\bigr\},$$ we obtain $$\label{e:left}
\begin{split}
\int_0^1\int_{H}g(x)\chi_{_{P_H\c_t(f)}}(x)\,{\mathrm{d}}x & \int_{H^{\bot}}
\chi_{_{\c_t(f)}}(y)\,{\mathrm{d}}y\,{\mathrm{d}}t\\
& =\int_{H}\int_{H^{\bot}}g(x)\int_0^1\chi_{_{P_H\c_t(f)}}(x)\chi_{_{\c_t(f)}}(y)\,{\mathrm{d}}t\,{\mathrm{d}}y\,{\mathrm{d}}x\\
& \geq\int_{H}\int_{H^{\bot}}g(x)\min\left\{\frac{P_Hf(x)}{\|f\|_{\infty}},\frac{f(y)}{\|f\|_{\infty}}\right\}\,{\mathrm{d}}y\,{\mathrm{d}}x\\
& \geq\int_{H}\int_{H^{\bot}}g(x)\frac{P_Hf(x)}{\|f\|_{\infty}}\frac{f(y)}{\|f\|_{\infty}}\,{\mathrm{d}}y\,{\mathrm{d}}x\\
& =\int_{H}g(x)\frac{P_Hf(x)}{\|f\|_{\infty}}\,{\mathrm{d}}x\int_{H^{\bot}}\frac{f(y)}{\|f\|_{\infty}}\,{\mathrm{d}}y.
\end{split}$$ On the other hand, Fubini’s theorem yields $$\label{e:right}
\begin{split}
\int_0^1\int_{H}g(x)\int_{x+H^{\bot}}\chi_{_{\c_t(f)}}(y){\mathrm{d}}y\,{\mathrm{d}}x\,{\mathrm{d}}t & =\int_{H}g(x)\int_{x+H^{\bot}}\int_0^1\chi_{_{\c_t(f)}}(y){\mathrm{d}}t\,{\mathrm{d}}y\,{\mathrm{d}}x\\
& =\int_{H}\int_{x+H^{\bot}}g(x)\frac{f(y)}{\|f\|_{\infty}}\,{\mathrm{d}}y\,{\mathrm{d}}x\\
& =\int_{\R^n}g(P_Hz)\frac{f(z)}{\|f\|_{\infty}}\,{\mathrm{d}}z.
\end{split}$$ Therefore, from , and we obtain $$\int_{H}g(x)P_Hf(x)\,{\mathrm{d}}x\int_{H^{\bot}}f(y)\,{\mathrm{d}}y
\leq\binom{n}{k}\|f\|_{\infty}\int_{\R^n}g(P_Hx)f(x)\,{\mathrm{d}}x.$$ This concludes the proof.
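For $f=\chi_{_K}$ the theorem recovers the classical projection-section inequality $\vol_{n-k}(P_HK)\,\vol_k(K\cap H^{\bot})\leq\binom{n}{k}\vol(K)$. A grid-based check for $n=2$, $k=1$ follows (an illustration with shapes of our choosing); equality is expected for the simplex.

```python
import numpy as np

# n = 2, k = 1, H = x-axis.  Check vol1(P_H K) * vol1(K ∩ H^perp) <= 2 vol2(K)
# on a grid, for a 2-simplex (equality case) and a disk.
h = 0.002
xs = np.arange(-1.5, 1.5, h)
ys = np.arange(-1.5, 1.5, h)
X, Y = np.meshgrid(xs, ys, indexing="ij")
j0 = int(np.argmin(np.abs(xs)))               # grid column closest to x = 0

results = {}
for name, inside in [
    ("triangle", (X >= 0) & (Y >= 0) & (X + Y <= 1)),
    ("disk", X**2 + Y**2 <= 1.0),
]:
    area = inside.sum() * h * h               # vol2(K)
    proj = inside.any(axis=1).sum() * h       # vol1(P_H K)
    sect = inside[j0].sum() * h               # vol1(K ∩ {x = 0})
    results[name] = (proj * sect, 2 * area)

print(results)
```

For the triangle both sides are close to $1$ (equality up to grid error), while for the disk one finds $4\leq 2\pi$.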
With the above approach, but using instead of , we notice that the maximality assumption at the origin can be relaxed to get the following result, which has been recently obtained in the setting of a log-concave integrable function in [@AAGJV Theorem 1.1].
\[c:RS\_sect\_proj\] Let $k\in\{1,\dots,n-1\}$ and $H\in\G(n,n-k)$. Let $f:\R^n\longrightarrow[0,\infty)$ be a quasi-concave function such that $$\sup_{x\in H}\vol_k\bigl(\c_t(f)\cap\bigl(x+H^{\bot}\bigr)\bigr)$$ is attained for all $t\in(0,1)$. Then $$\label{e:proy_sect_f}
\int_{H}P_Hf(x)\,{\mathrm{d}}x\max_{x_0\in H}\int_{x_0+H^{\bot}}f(y)\,{\mathrm{d}}y
\leq\binom{n}{k}\|f\|_{\infty}\int_{\R^n}f(x)\,{\mathrm{d}}x.$$
We point out that, in the case of an integrable function $f$ whose restriction to its support is continuous, the above assumption on the volume of the sections of $\c_t(f)$ trivially holds, since $\c_t(f)$ is compact for every $t\in(0,1)$. Notice also that, when dealing with certain classes of functions with a more restrictive concavity (such as log-concave ones), continuity on the interior of their support is already guaranteed.
Rogers-Shephard type inequalities for measures with quasi-concave densities {#s:quasi_concave}
===========================================================================
As a direct application of Corollary \[c:RS\_sect\_proj\] we obtain the following result.
\[t:RS\_seccion\_proy\_quasi\] Let $k\in\{1,\dots,n-1\}$ and $H\in\G(n,n-k)$. Let $\phi_i:\R^i\longrightarrow[0,\infty)$, $i=n-k,k$, be functions with $\|\phi_i\|_{\infty}=\phi_i(0)$, and such that the function $\phi:\R^n\longrightarrow[0,\infty)$ given by $\phi(x,y)=\phi_{n-k}(x)\phi_k(y)$, $x\in\R^{n-k}$, $y\in\R^k$, is quasi-concave. Let $\mu_n=\mu_{n-k}\times\mu_{k}$ be the product measure on $\R^n$ given by ${\mathrm{d}}\mu_{n-k}(x)=\phi_{n-k}(x)\,{\mathrm{d}}x$ and ${\mathrm{d}}\mu_{k}(y)=\phi_k(y)\,{\mathrm{d}}y$. Let $K\in\K^n$ with $P_HK\subset K$ and so that $\vol\bigl(\c_t(\phi)\cap K\cap(x+H^{\bot})\bigr)$ attains its maximum for all $t\in(0,1)$. Then $$\label{e:RS_seccion_proy_quasi}
\mu_{n-k}\bigl(P_HK\bigr)\max_{x_0\in
H}\left[\frac{\phi_{n-k}(x_0)}{\|\phi_{n-k}\|_{\infty}}\mu_k\bigl(K\cap(x_0+H^{\bot})\bigr)\right]\leq\binom{n}{k}\mu_n(K).$$
It is a straightforward consequence of applied to the function $f:\R^n\longrightarrow[0,\infty)$ given by $f(x,y)=\phi_{n-k}(x)\phi_k(y)\chi_{_K}(x,y)$. Indeed, since $P_HK\subset
K$ then $$P_Hf(x)=\sup_{y\in
H^{\bot}}\phi_{n-k}(x)\phi_k(y)\chi_{_K}(x,y)=\phi_{n-k}(x)\phi_{k}(0)\chi_{_{P_HK}}(x)$$ and $\|f\|_{\infty}=\phi_{n-k}(0)\phi_{k}(0)$.
We point out that the assumption $P_HK\subset K$ is needed in order to conclude the above Rogers-Shephard type inequality (as well as Theorem \[t:RS\_secc\_proy\_K(0)\]):
\[r:hip\_P\_HK\] Let $\mu_1$ be the measure on $\R$ given by ${\mathrm{d}}\mu_1(x)=e^{-x^2}\,{\mathrm{d}}x$ and let $\mu_2=\mu_{1}\times\mu_{1}$, i.e., ${\mathrm{d}}\mu_2(x)=e^{-|x|^2}\,{\mathrm{d}}x$. Let $H=\bigl\{(x,y)\in\R^2:y=0\bigr\}$ and, for a given $0<\alpha<\pi/2$, let $K_{\alpha}$ be the centrally symmetric parallelogram $K_{\alpha}=\conv\bigl\{(1,\tan\alpha\pm 1),(-1,-\tan\alpha\pm 1)\bigr\}$.
On the one hand, $K_{\alpha}(0)=\bigl[(0,1),(0,-1)\bigr]$ is the ‘maximal’ section of $K_{\alpha}$ (with respect to $\mu_1$) and $P_HK_{\alpha}=\bigl[(-1,0),(1,0)\bigr]$. On the other hand, since $K_{\alpha}$ is contained in the infinite strip $S_{\alpha}$ determined by the straight lines $y=(\tan\alpha) x\pm 1$, and $\mu_2$ is rotationally invariant, we have that $$\mu_2(K_{\alpha})\leq\mu_2(S_{\alpha})=\sqrt{\pi}\,\mu_1(I_{\alpha}),$$ where $I_{\alpha}$ denotes the line segment centered at the origin whose length is the width of $S_{\alpha}$.
Hence, $\mu_1(I_{\alpha})$, and so $\mu_2(K_{\alpha})$, can be made arbitrarily small when $\alpha\rightarrow\pi/2$. However, the term $\mu_1\bigl(P_HK_{\alpha}\bigr)\mu_1\bigl(K_{\alpha}(0)\bigr)=\mu_1\bigl([(-1,0),(1,0)]\bigr)^2$ is a fixed positive constant. This shows the necessity of assuming $P_HK\subset K$ in order to derive both and .
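This behaviour is easy to reproduce numerically; the sketch below (an illustration only) computes $\mu_2(K_{\alpha})$ on a grid for slopes $\tan\alpha\in\{0,5,20\}$ and compares it with the constant product $\mu_1([-1,1])^2$.

```python
import numpy as np

# K_alpha = {(x, y) : |x| <= 1, |y - s x| <= 1} with s = tan(alpha).
# e^{-y^2} is negligible beyond |y| ~ 6, so truncating the y-range loses
# only a negligible amount of mass even for large slopes.
xs = np.linspace(-1.0, 1.0, 401)
ys = np.linspace(-6.0, 6.0, 2401)
h = xs[1] - xs[0]                       # = 0.005, same spacing in y
X, Y = np.meshgrid(xs, ys, indexing="ij")
W = np.exp(-X**2 - Y**2)

mu2 = {s: (W * (np.abs(Y - s * X) <= 1.0)).sum() * h * h
       for s in (0.0, 5.0, 20.0)}

seg = np.exp(-xs**2).sum() * h          # mu_1([-1, 1]), the fixed factor
print(mu2, seg**2)
```

The mass $\mu_2(K_{\alpha})$ drops from about $2.2$ (the square, slope $0$) to well below $0.3$ at slope $20$, while $\mu_1([-1,1])^2\approx 2.23$ stays fixed.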
In order to avoid the assumption $P_HK\subset K$, one may replace the orthogonal projection with the corresponding maximal section. To this end, first we fix some notation: given a measure $\mu$ on $\R^n$ with density $\phi$, we will denote by $\mu_i$, $i=1,\dots,n-1$, the [*marginal*]{} of $\mu$ in the corresponding $i$-dimensional affine subspace, i.e., for a given $M\subset z+H$ with $H\in\G(n,i)$ and $z\in H^{\bot}$, $$\mu_i(M)=\int_H\chi_{_{M}}(x,z)\phi(x,z)\,{\mathrm{d}}x.$$ Taking the function $f:\R^n\longrightarrow[0,\infty)$ given by $f(x,y)=\phi(x,y)\chi_{_K}(x,y)$, $x\in H$, $y\in H^{\bot}$, since $$P_Hf(x)=\sup_{y\in
H^{\bot}}\phi(x,y)\chi_{_K}(x,y)\geq\phi(x,y)\chi_{_K}(x,y)=f(x,y),$$ we get the following result as a direct consequence of .
\[c:coro\_RS\_sect\_sect\] Let $k\in\{1,\dots,n-1\}$ and $H\in\G(n,n-k)$. Let $\mu$ be a measure on $\R^n$ given by ${\mathrm{d}}\mu(x)=\phi(x)\,{\mathrm{d}}x$, where $\phi:\R^n\longrightarrow[0,\infty)$ is a quasi-concave function with $\|\phi\|_{\infty}=\phi(0)$. Let $K\in\K^n$ be such that there exists the maximum of $\vol\bigl(\c_t(\phi)\cap K\cap(x+H^{\bot})\bigr)$ for all $t\in(0,1)$. Then $$\label{e:coro_RS_sect_sect}
\max_{y\in H}\mu_{n-k}\bigl(K\cap (y+H)\bigr)\max_{x_0\in
H}\mu_k\bigl(K\cap(x_0+H^{\bot})\bigr)\leq\binom{n}{k}\|\phi\|_{\infty}\mu(K).$$
We notice that, from , $$\label{e:RS_seccion_proy_quasi:2}
\mu_{n-k}\bigl(P_HK\bigr)\mu_k\bigl(K\cap
H^{\bot}\bigr)\leq\binom{n}{k}\mu_n(K)$$ holds provided that the density of $\mu_n$, $\phi(x,y)=\phi_{n-k}(x)\phi_k(y)$, is quasi-concave. Although the latter implies that both $\phi_{n-k}, \phi_k$ are quasi-concave, the converse is, in general, not true. In the following we exploit the approach followed in the previous section in order to derive for the more general case of measures $\mu_{n-k}, \mu_{k}$, with radially decreasing and quasi-concave densities, respectively, and their product $\mu_n=\mu_{n-k}\times\mu_{k}$, provided that the maximality assumption $$\max_{x\in P_HK}\vol_k\bigl(\c_t(\phi_k)\cap K(x)\bigr)
=\vol_k\bigl(\c_t(\phi_k)\cap K(0)\bigr)$$ holds. Again, we need to assume the condition $P_HK\subset K$.
By an appropriate choice of the coordinate axes, we may assume that $H=\{x_{n-k+1}=\dots=x_n=0\}$. For every $t\in[0,1]$ and $x\in P_H K$, we consider the set $$\c_{x,t}=\Bigl(\{0\}\times\c_t(\phi_k)\Bigr)\cap K(x)$$ and the function $\varphi_t:P_HK\longrightarrow[0,\infty)$ given by $$\varphi_t(x)=\vol_k\bigl(\c_{x,t}\bigr).$$ Since $P_HK\subset K$ and $\phi_k$ is continuous at the origin (which implies that $0\in\operatorname{int}\c_t(\phi_k)$ for all $t<1$), we can ensure that, for every $t<1$, $\varphi_t(x)>0$ for any $x$ in the (relative) interior of $P_HK$ and hence $\supp\varphi_t=P_HK$. Moreover, $\varphi_t$ is $(1/k)$-concave by and, by hypothesis, we have $\|\varphi_t\|_{\infty}=\varphi_t(0)$.
Then, applying , with $p=1/k$, to the function $g:P_HK\longrightarrow[0,\infty)$ given by $g(x,0)=\phi_{n-k}(x)$, $x\in\R^{n-k}$, we get $$\label{e:secc1}
\int_{P_HK}\phi_{n-k}(x)\,{\mathrm{d}}x
\leq\binom{n}{k}\dfrac{1}{\|\varphi_t\|_{\infty}}\int_{P_HK}\phi_{n-k}(x)\,\varphi_t(x)\,{\mathrm{d}}x,$$ and hence, integrating (\[e:secc1\]) over $t\in[0,1]$, we obtain $$\label{e:gf_ineq_para_phi_t_proy_secc_2}
\int_0^1\int_{P_HK}\phi_{n-k}(x)\,{\mathrm{d}}x\int_{\R^k}
\chi_{_{\c_{0,t}}}(y)\,{\mathrm{d}}y\,{\mathrm{d}}t\leq\binom{n}{k}\int_0^1\int_{P_HK}\phi_{n-k}(x)\int_{\R^k}\chi_{_{\c_{x,t}}}(y)
\,{\mathrm{d}}y\,{\mathrm{d}}x\,{\mathrm{d}}t.$$ Therefore, by Fubini’s theorem we have $$\label{e:left_proy_secc}
\begin{split}
\mu_{n-k}\bigl(P_HK\bigr)\mu_k\bigl(K\cap H^{\bot}\bigr) &
=\|\phi_k\|_{\infty}\int_{P_HK}\phi_{n-k}(x)\,{\mathrm{d}}x\int_{K(0)}\int_0^1 \chi_{_{\c_t(\phi_k)}}(y)\,{\mathrm{d}}t\,{\mathrm{d}}y\\
& =\|\phi_k\|_{\infty}\int_0^1\int_{P_HK}\phi_{n-k}(x)\,{\mathrm{d}}x\int_{\R^k} \chi_{_{\c_{0,t}}}(y)\,{\mathrm{d}}y\,{\mathrm{d}}t\\
& \leq\binom{n}{k}\|\phi_k\|_{\infty}\int_0^1\int_{P_HK}\phi_{n-k}(x)\int_{\R^k}\chi_{_{\c_{x,t}}}(y)\,{\mathrm{d}}y\,{\mathrm{d}}x\,{\mathrm{d}}t\\
& =\binom{n}{k}\|\phi_k\|_{\infty}\int_{P_HK}\phi_{n-k}(x)\int_{K(x)}\int_0^1\chi_{_{\c_t(\phi_k)}}(y)\,{\mathrm{d}}t\,{\mathrm{d}}y\,{\mathrm{d}}x\\
& =\binom{n}{k}\int_{P_HK}\phi_{n-k}(x)\mu_k\bigl(K(x)\bigr){\mathrm{d}}x=\binom{n}{k}\mu_n(K).
\end{split}$$ This concludes the proof.
Next we show an extension of the above Rogers-Shephard type inequalities involving maximal sections of convex bodies (cf. ) in the spirit of [@AAGJV Lemma 4.1].
\[c:RS\_E\_H\] Let $i,j\in\{2,\dots,n-1\}$, $i+j\geq n+1$, and let $E\in\G(n,i)$, $H\in\G(n,j)$ be such that $E^{\bot}\subset H$. Let $\phi:\R^n\longrightarrow[0,\infty)$ be a $(-1/n)$-concave function and let $\mu$ be the measure on $\R^n$ given by ${\mathrm{d}}\mu(x)=\phi(x)\,{\mathrm{d}}x$. Then, for every $K\in\K^n$, $$\label{e:RS_E_H}
\sup_{x\in E^{\bot}}\mu_i\bigl(K\cap (x+E)\bigr)\sup_{y\in
H^{\bot}}\mu_j\bigl(K\cap (y+H)\bigr)
\leq\binom{n-k}{n-i}\sup_{x\in\R^n}\mu_k\bigl(K\cap (x+F)\bigr)\mu(K),$$ where $F=E\cap H$ and $k=\dim F=i+j-n$.
Let $f:F^{\bot}\longrightarrow[0,\infty)$ be the function given by $$f(x,y)=\int_{\R^k}\phi(x,y,z)\chi_{_{K}}(x,y,z) \,{\mathrm{d}}z.$$ The Borell-Brascamp-Lieb inequality (see e.g. [@G Theorem 10.1]) implies that $f$ is quasi-concave and, in particular, $\c_t(f)$ is a convex body. Then, we may apply Corollary \[c:RS\_sect\_proj\] to obtain $$\begin{split}
& \int_{E^{\bot}}\sup_{y\in H^{\bot}}\int_{\R^k}
\phi(x,y,z) \chi_{_{K}}(x,y,z) \,{\mathrm{d}}z\,{\mathrm{d}}x
\sup_{x\in E^{\bot}}\int_{H^{\bot}}\int_{\R^k}\phi(x,y,z)\chi_{_{K}}(x,y,z) \,{\mathrm{d}}z\,{\mathrm{d}}y\\
& \leq \binom{n-k}{n-i}\sup_{(x,y)\in F^{\bot}}\int_{\R^k}\phi(x,y,z)\chi_{_{K}}(x,y,z) \,{\mathrm{d}}z
\int_{F^\bot}\int_{\R^k}\phi(x,y,z)\chi_{_{K}}(x,y,z) \,{\mathrm{d}}z\,{\mathrm{d}}x\,{\mathrm{d}}y
\end{split}$$ and thus, in particular, for every $y_0\in H^{\bot}$ we have $$\begin{split}
\int_{E^{\bot}}\int_{\R^k}\phi(x,y_0,z)\chi_{_{K}}(x,y_0,&z)
\,{\mathrm{d}}z\,{\mathrm{d}}x
\sup_{x\in E^{\bot}}\int_{H^{\bot}}\int_{\R^k}\phi(x,y,z)\chi_{_{K}}(x,y,z) \,{\mathrm{d}}z\,{\mathrm{d}}y\\
& \leq \binom{n-k}{n-i}\sup_{(x,y)\in F^{\bot}}\mu_k\Bigl(K\cap\bigl((x,y)+F\bigr)\Bigr)\mu(K).
\end{split}$$ Hence, for every $y_0\in H^{\bot}$, we get $$\mu_j\bigl(K\cap (y_0+ H)\bigr)\sup_{x\in E^{\bot}}\mu_i\bigl(K\cap
(x+E)\bigr)\leq \binom{n-k}{n-i}\sup_{x\in\R^n}\mu_k\bigl(K\cap
(x+F)\bigr)\mu(K),$$ which implies .
Next we show how one may exploit the approach we are following in this section to obtain a result analogous to Proposition \[t:RS\_omega\_rad\_decreasing\] in the setting of quasi-concave densities that are not necessarily continuous. Notice that whereas the right-hand side in is smaller than the right-hand side in , the constants $c(\omega)$ and $\phi(\omega)/\|\phi\|_{\infty}$ are not comparable in general.
\[t:RS\_omega\_quasiconcave\] Let $K\in\K^n$ and let $\mu$ be a measure on $\R^n$ given by ${\mathrm{d}}\mu(x)=\phi(x)\,{\mathrm{d}}x$, where $\phi:\R^n\longrightarrow[0,\infty)$ is a bounded quasi-concave function. Then, for every $\omega\in\R^n$, $$\label{e:RS_omega_quasiconcave}
\frac{\phi(\omega)}{\|\phi\|_{\infty}}\mu(K-K+\omega)
\leq\binom{2n}{n}\min\left\{\sup_{y\in K}\mu(y+\omega-K),
\sup_{y\in K}\mu(-y+\omega+K)\right\}.$$ Moreover, if $\phi$ is continuous at $\omega_0$, for some $\omega_0\in\R^n$, then equality holds in (for such $\omega_0$) if and only if $\mu$ is a constant multiple of the Lebesgue measure on $K-K+\omega_0$, $\phi(\omega_0)=\|\phi\|_{\infty}$ and $K$ is a simplex.
Let $\omega\in\R^n$ and consider the function $f_\omega:K-K+\omega\longrightarrow[0,\infty)$ given by $$f_\omega(x)=\vol\bigl(K\cap(x-\omega+K)\bigr).$$ Notice that $f_\omega$ is $(1/n)$-concave by , that $\supp f_\omega=K-K+\omega$ and, moreover, that $\|f_\omega\|_{\infty}=f_\omega(\omega)=\vol(K)$. Then, using , we get $$\begin{split}
\frac{\phi(\omega)}{\|\phi\|_{\infty}}\mu(K-K+\omega) &
\leq\binom{2n}{n}\frac{1}{\vol(K)}\int_{\R^n}\vol\bigl(K\cap(x-\omega+K)\bigr)\,{\mathrm{d}}\mu(x)\\
& =\binom{2n}{n}\frac{1}{\vol(K)}\int_{\R^n}\phi(x)\int_{\R^n}\chi_{_K}(y)\chi_{_{y+\omega-K}}(x)\,{\mathrm{d}}y\,{\mathrm{d}}x\\
& =\binom{2n}{n}\frac{1}{\vol(K)}\int_{K}\mu(y+\omega-K)\,{\mathrm{d}}y\leq\binom{2n}{n}\sup_{y\in K}\mu(y+\omega-K).
\end{split}$$ Therefore, exchanging the roles of $K$ and $-K$, follows.
Finally, if equality holds in for some $\omega_0\in\R^n$ then, by Corollary \[c:|f|\_x0\], $\mu$ is a constant multiple of the Lebesgue measure on $K-K+\omega_0$ and $\phi(\omega_0)=\|\phi\|_{\infty}$. Now, from the equality case of Theorem \[t:RS\], $K$ must be a simplex. The converse is immediate from Theorem \[t:RS\].
We conclude this section by noticing that, from the proof of the previous result, one may also obtain in the slightly less general setting of quasi-concave densities with maximum at the origin. We include it here for the sake of completeness.
\[c:RS\_measures\] Let $K\in\K^n$ and let $\mu$ be the measure on $\R^n$ given by ${\mathrm{d}}\mu(x)=\phi(x)\,{\mathrm{d}}x$, where $\phi:\R^n\longrightarrow[0,\infty)$ is a quasi-concave function with $\|\phi\|_{\infty}=\phi(0)$. Then $$\label{e:RS_measures}
\mu(K-K)\leq
\binom{2n}{n}\min\bigl\{\overline{\mu}(K),\overline{\mu}(-K)\bigr\}.$$ Moreover, if $\phi$ is continuous at the origin then equality holds if and only if $\mu$ is a constant multiple of the Lebesgue measure on $K-K$ and $K$ is a simplex.
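For the Lebesgue measure, the corollary above reduces to the classical Rogers–Shephard inequality $\vol(K-K)\leq\binom{2n}{n}\vol(K)$, with equality precisely for simplices. A short numerical illustration of the equality case in the plane (the triangle is our choice; it is not an example from the text):

```python
from math import atan2, comb

# Classical Rogers-Shephard equality check in R^2: for a triangle K,
# vol(K - K) = C(4, 2) * vol(K) = 6 * vol(K).
K = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # a simplex in the plane, area 1/2

def shoelace(pts):
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# The vertices of K - K are among the pairwise differences of vertices; for a
# triangle every nonzero difference is extreme, so sorting them by angle
# around the origin traverses the boundary of the hexagon K - K in order.
diffs = {(a[0] - b[0], a[1] - b[1]) for a in K for b in K if a != b}
hexagon = sorted(diffs, key=lambda v: atan2(v[1], v[0]))

vol_K = shoelace(K)            # 0.5
vol_diff = shoelace(hexagon)   # 3.0
assert vol_diff == comb(2 * 2, 2) * vol_K
```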
A remark for measures with $p$-concave densities, $p>0$ {#s:remark}
=======================================================
As we have shown in Example \[r:hip\_P\_HK\], the assumption $P_HK\subset
K$ in Theorems \[t:RS\_secc\_proy\_K(0)\] and \[t:RS\_seccion\_proy\_quasi\] is necessary. However, when dealing with measures associated with $p$-concave densities, $p>0$, an inequality in the spirit of can be obtained for an arbitrary $K\in\K^n$, by choosing the binomial coefficient according to the degree of concavity of the density. This is the content of the following result.
\[t:RS\_seccion\_proy\_p\_conc\] Let $k\in\{1,\dots,n-1\}$, $r\in\N$ and $H\in\G(n,n-k)$. Given a $(1/r)$-concave function $\phi_k:\R^k\longrightarrow[0,\infty)$, and a radially decreasing function $\phi_{n-k}:\R^{n-k}\longrightarrow[0,\infty)$, let $\mu_n=\mu_{n-k}\times\mu_{k}$ be the product measure on $\R^n$ given by ${\mathrm{d}}\mu_{n-k}(x)=\phi_{n-k}(x)\,{\mathrm{d}}x$ and ${\mathrm{d}}\mu_{k}(y)=\phi_k(y)\,{\mathrm{d}}y$. Let $K\in\K^n$ be such that $\max_{x\in H}
\mu_k\left(K\cap\bigl(x+H^{\bot}\bigr)\right)=\mu_k\left(K\cap
H^{\bot}\right)$. Then $$\mu_{n-k}\bigl(P_HK\bigr)\mu_k\bigl(K\cap
H^{\bot}\bigr)\leq\binom{n+r}{n-k}\mu_n(K).$$
Consider the function $f:H\longrightarrow\R$ given by $$f(x)=\mu_k\left(K\cap\bigl(x+H^{\bot}\bigr)\right),$$ which satisfies $\supp f=P_HK$.
Now, the Borell-Brascamp-Lieb inequality (see [@G Theorem 10.1]) implies that $\mu_k$ is ($1/(k+r)$)-concave which, together with the convexity of $K$, yields that $f$ is ($1/(k+r)$)-concave. Furthermore, by assumption we have that $\|f\|_{\infty}=f(0)$. Thus, using for $g=\phi_{n-k}$, we obtain $$\alpha^{n-k}_{1/(k+r),0}\int_{P_HK}\phi_{n-k}(x)\,{\mathrm{d}}x\leq\dfrac{1}{\mu_k\bigl(K\cap H^{\bot}\bigr)} \int_{P_HK}
\mu_k\left(K\cap\bigl(x+H^{\bot}\bigr)\right)\,\phi_{n-k}(x){\mathrm{d}}x$$ and hence $$\mu_{n-k}\bigl(P_HK\bigr)\mu_k\bigl(K\cap
H^{\bot}\bigr)\leq\binom{n+r}{n-k}\mu_n(K),$$ as desired.
The latter result can be stated for any positive real number $r$ by simply replacing $\binom{n+r}{n-k}$ with the corresponding generalized binomial coefficient.
We notice that the above inequality includes as a special case, since the constant density (of the Lebesgue measure) is $\infty$-concave, and thus $r=0$.
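To see the constant at work, here is a toy verification of ours (none of these specific choices appear in the text): for $n=2$, $k=1$, $r=1$, take $\phi_1(y)=\max(1-|y|,0)$, which is $1$-concave, $\phi_{n-k}\equiv 1$, $K=[-1/2,1/2]^2$ and $H$ the $x$-axis. All vertical sections of $K$ then carry the same $\mu_1$-measure, so the maximality hypothesis holds, and for $r=0$ the constant collapses to the Lebesgue value $\binom{n}{n-k}$:

```python
from math import comb

# Toy check of mu_{n-k}(P_H K) * mu_k(K ∩ H⊥) <= C(n+r, n-k) * mu_n(K)
# with n = 2, k = 1, r = 1, phi_1(y) = max(1 - |y|, 0), phi_{n-k} = 1,
# K = [-1/2, 1/2]^2 and H the x-axis.
n, k, r = 2, 1, 1
phi1 = lambda y: max(1.0 - abs(y), 0.0)

def integral(f, a, b, steps=20000):
    h = (b - a) / steps            # midpoint rule, ample for this check
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

mu_proj = 1.0                      # Lebesgue length of P_H K = [-1/2, 1/2]
mu_sect = integral(phi1, -0.5, 0.5)        # = 3/4
mu_K = 1.0 * mu_sect               # product measure of K: 1 * 3/4
lhs, rhs = mu_proj * mu_sect, comb(n + r, n - k) * mu_K
assert lhs <= rhs                  # 0.75 <= 3 * 0.75
assert comb(n + 0, n - k) == comb(n, k)    # r = 0 recovers the Lebesgue constant
```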
[*Acknowledgements.*]{} We thank the referees for many valuable suggestions and remarks which have allowed us to considerably improve the manuscript.
[99]{}
Alonso-Gutiérrez, D., Artstein-Avidan, S., González, B., Jiménez, C. H. and Villa, R. “Rogers-Shephard and local Loomis-Whitney type inequalities.” [*Submitted*]{}, [arXiv:1706.01499v2](https://arxiv.org/abs/1706.01499v2).
Alonso-Gutiérrez, D., González, B., Jiménez, C. H. and Villa, R. “Rogers-Shephard inequality for log-concave functions.” [*J. Funct. Anal.*]{} 271, no. 11 (2016): 3269–3299.
Artstein-Avidan, S., Giannopoulos, A. and Milman, V. D. [*Asymptotic geometric analysis. Part I*]{}. Mathematical Surveys and Monographs, 202. Providence, RI: American Mathematical Society, 2015.
Artstein-Avidan, S., Klartag, B. and Milman, V. “The Santaló point of a function, and a functional form of the Santaló inequality.” [*Mathematika*]{} 51 (2004): 33–48.
Ball, K. [*Isometric problems in $\ell_p$ and sections of convex sets*]{}. PhD dissertation. Cambridge: 1986.
Ball, K. “Logarithmically concave functions and sections of convex sets in $\R^n$.” [*Studia Math.*]{} 88, no. 1 (1988): 69–84.
Borell, C. “Convex measures on locally convex spaces.” [*Ark. Mat.*]{} 12 (1974): 239–252.
Borell, C. “Convex set functions in $d$-space.” [*Period. Math. Hungar.*]{} 6 (1975): 111–136.
Borell, C. “The Brunn-Minkowski inequality in Gauss space.” [*Invent. Math.*]{} 30, no. 2 (1975): 207–216.
Borell, C. “The Ehrhard inequality.” [*C. R. Math. Acad. Sci. Paris*]{} 337, no. 10 (2003): 663–666.
Brascamp, H. J. and Lieb, E. H. “On extensions of the Brunn-Minkowski and Prékopa-Leindler theorems, including inequalities for log concave functions and with an application to the diffusion equation.” [*J. Funct. Anal.*]{} 22, no. 4 (1976): 366–389.
Colesanti, A. “Functional inequalities related to the Rogers-Shephard inequality.” [*Mathematika*]{} 53 (2006): 81–101.
Ehrhard, A. “Symétrisation dans l’espace de Gauss.” [*Math. Scand.*]{} 53 (1983): 281–301.
Ehrhard, A. “Éléments extrémaux pour les inégalités de Brunn-Minkowski gaussiennes.” [*Ann. Inst. H. Poincaré Probab. Statist.*]{} 22 (1986): 149–168.
Fradelizi, M. and Meyer, M. “Some functional forms of Blaschke-Santaló inequality.” [*Math. Z.*]{} 256 (2007): 379–395.
Gardner, R. J. “The Brunn-Minkowski inequality.” [*Bull. Amer. Math. Soc.*]{} 39, no. 3 (2002): 355–405.
Gardner, R. J. [*Geometric tomography*]{}, 2nd ed. Encyclopedia of Mathematics and its Applications, 58. Cambridge: Cambridge University Press, 2006.
Gardner, R. J. and Zvavitch, A. “Gaussian Brunn-Minkowski inequalities.” [*Trans. Amer. Math. Soc.*]{} 362, no. 10 (2010): 5333–5353.
Klartag, B. and Koldobsky, A. “An example related to the slicing inequality for general measures.” [*J. Funct. Anal.*]{} 274, no. 7 (2018): 2089–2112.
Klartag, B. and Livshyts, G. “The lower bound for Koldobsky’s slicing inequality via random rounding.” [*Submitted*]{}, [arXiv:1810.06189](https://arxiv.org/abs/1810.06189).
Klartag, B. and Milman, V. “Geometry of log-concave functions and measures.” [*Geom. Dedicata*]{} 112 (2005): 169–182.
Koldobsky, A. “Slicing inequalities for measures of convex bodies.” [*Adv. Math.*]{} 283 (2015): 473–488.
Koldobsky, A. and Zvavitch, A. “An isomorphic version of the Busemann-Petty problem for arbitrary measures.” [*Geom. Dedicata*]{} 174 (2015): 261–277.
Leindler, L. “On a certain converse of Hölder’s inequality II.” [*Acta Sci. Math. (Szeged)*]{} 33 (1972): 217–223.
Livshyts, G. “An extension of Minkowski’s theorem and its applications to questions about projections for measures.” [*To appear in Adv. Math.*]{}
Livshyts, G., Marsiglietti, A., Nayar, P. and Zvavitch, A. “On the Brunn-Minkowski inequality for general measures with applications to new isoperimetric-type inequalities.” [*Trans. Amer. Math. Soc.*]{} 369, no. 12 (2017): 8725–8742.
Marsiglietti, A. “On the improvement of concavity of convex measures.” [*Proc. Amer. Math. Soc.*]{} 144, no. 2 (2016): 775–786.
Meyer, M., Nazarov, F., Ryabogin, D. and Yaskin, V. “Grünbaum-type inequality for log-concave functions.” [*Bull. Lond. Math. Soc.*]{} 50 (2018): 745–752.
Myroshnychenko, S., Stephen, M. and Zhang, N. “Grünbaum’s inequality for sections.” [*J. Funct. Anal.*]{} 275 (2018): 2516–2537.
Nayar, P. and Tkocz, T. “A note on a Brunn-Minkowski inequality for the Gaussian measure.” [*Proc. Amer. Math. Soc.*]{} 141 (2013): 4027–4030.
Prékopa, A. “Logarithmic concave measures with application to stochastic programming.” [*Acta Sci. Math. (Szeged)*]{} 32 (1971): 301–315.
Ritoré, M. and Yepes Nicolás, J. “Brunn-Minkowski inequalities in product metric measure spaces.” [*Adv. Math.*]{} 325 (2018): 824–863.
Rockafellar, R. T. and Wets, R. J.-B. [*Variational analysis*]{}. Grundlehren der Mathematischen Wissenschaften \[Fundamental Principles of Mathematical Sciences\], 317. Berlin: Springer-Verlag, 1998.
Rogers, C. A. and Shephard, G. C. “The difference body of a convex body.” [*Arch. Math.*]{} 8 (1957): 220–233.
Rogers, C. A. and Shephard, G. C. “Convex bodies associated with a given convex body.” [*J. Lond. Math. Soc.*]{} 1, no. 3 (1958): 270–281.
Schneider, R. [*Convex bodies: The Brunn-Minkowski theory*]{}, 2nd expanded ed. Encyclopedia of Mathematics and its Applications, 151. Cambridge: Cambridge University Press, 2014.
Sudakov, V. N. and Cirel’son, B. S. “Extremal properties of half-spaces for spherically invariant measures.” Problems in the theory of probability distributions, II. [*Zap. Naučn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI)*]{} 41 (1974): 14–24, 165.
Zvavitch, A. “The Busemann-Petty problem for arbitrary measures.” [*Math. Ann.*]{} 331, no. 4 (2005): 867–887.
[^1]: First author is supported by MINECO/FEDER project MTM2016-77710-P. Second and fourth authors are supported by MINECO/FEDER project MTM2015-65430-P and “Programa de Ayudas a Grupos de Excelencia de la Región de Murcia”, Fundación Séneca, 19901/GERM/15. Third and fifth authors are supported in part by the U.S. National Science Foundation Grant DMS-1101636. Fifth author is supported in part by la Comue Université Paris-Est.
---
abstract: 'There is a linear relation between the mass of dense gas, traced by the HCN(1–0) luminosity, and the star formation rate (SFR), traced by the far-infrared luminosity. Recent observations of galactic disks have shown some systematic variations. In order to explore the SFR–dense gas link at high resolution ($\sim 4\arcsec$, $\sim 150$ pc) in the outer disk of an external galaxy, we have mapped a region about 5 kpc from the center along the northern spiral arm of M51 in the HCN(1–0), HCO$^+$(1–0) and HNC(1–0) emission lines using the Northern Extended Millimeter Array (NOEMA) interferometer. The HCN and HCO$^+$ lines were detected in 6 giant molecular associations (GMAs) while HNC emission was only detected in the two brightest GMAs. One of the GMAs hosts a powerful H II region and HCN is stronger than HCO$^+$ there. Comparing with observations of GMAs in the disks of M31 and M33 at similar angular resolution ($\sim 100$ pc), we find that GMAs in the outer disk of M51 are brighter in both HCN and HCO$^+$ lines by a factor of 3 on average. However, the $I_{\rm HCN}/I_{\rm CO}$ and $I_{\rm HCO^+}/I_{\rm CO}$ ratios are similar to the ratios in nearby galactic disks and the Galactic plane. Using the Herschel 70 $\mu$m data to trace the total IR luminosity at the resolution of the GMAs, we find that both the L$_{\rm IR}$–L$_{\rm HCN}$ and L$_{\rm IR}$–L$_{\rm HCO^+}$ relations in the outer disk GMAs are consistent with the proportionality between the L$_{\rm IR}$ and the dense gas mass established globally in galaxies within the scatter. The IR/HCN and IR/HCO$^+$ ratios of the GMAs vary by a factor of 3, probably depending on whether massive stars are forming or not.'
author:
- Hao Chen
- Jonathan Braine
- Yu Gao
- Jin Koda
- Qiusheng Gu
nocite: '[@*]'
title: Dense Gas in the Outer Spiral Arm of M51
---
INTRODUCTION
============
Stars form in the dense cores of giant molecular clouds (GMCs). The dense cores are traced by high dipole-moment molecules like HCN, HCO$^+$, and CS . In cold regions where the density is very high, a commonly used probe in the Galaxy is N$_2$H$^+$, because it does not deplete onto dust grains. In the warm regions surrounding massive protostars, high-$J$ CO lines are useful probes of the density and temperature [in the sub-millimeter regime, @Lu2014ApJ...787L..23L; @Liudz2015ApJ...810L..14L; @zhao2016ApJ...820..118Z]. The dense gas tracer HCN ($n_{\rm eff} \sim 10^5\ {\rm cm^{-3}}$) exhibits the strongest line in galaxies after CO and $^{13}$CO. In intensely star-forming galaxies such as ultra-luminous infrared galaxies (ULIRGs) the HCN line can be stronger than $^{13}$CO . A linear relationship between the SFR, traced by infrared luminosity, and dense gas mass, traced by the HCN luminosity, is found in galaxies [@Gao2004ApJ...606..271G; @Zhang2014ApJ...784L..31Z; @Liulj2015ApJ...805...31L] and Galactic clumps [@Wu2005ApJ...635L.173W; @Wu2010ApJS..188..313W]. The IR-to-HCN(1–0) luminosity ratio is a proxy for the ratio of the star formation rate (SFR) and dense gas mass, referred to as the star formation efficiency of dense molecular gas, SFE$_{dense}$, and this ratio is almost constant in galaxies. The strong linear relation between HCN intensity and IR emission, even in ULIRGs where the IR-CO relation becomes non-linear, suggests that it is the mass of dense gas rather than the molecular gas reservoir [@Solomon1992ApJ...387L..55S; @Gao2007ApJ...660L..93G] that governs star formation.
From large-scale mapping of the HCN emission in M51, @Chen2015ApJ...810..140C and @Bigiel2016ApJ...822L..26B showed that the IR-to-HCN ratio (SFE$_{dense}$) is lower in the central kpc than in the outer disk. The HCN emission is also strong compared to CO in the central kpc of M51. If the SFE$_{dense}$ is in fact [*not*]{} constant, then either HCN(1–0) is not a reliable measure of the dense gas mass throughout the disk or other factors [e.g. turbulence enhanced by the shear motion, @Krumholz2015MNRAS.453..739K] prevent dense gas from turning into stars. This effect is not limited to M51. The Antennae galaxies (NGC 4038/4039) show that the IR-to-HCN ratio is on average 4 times higher in the 3 overlap regions than in the two nuclei [@Bigiel2015ApJ...815..103B]. For 48 HCN detections of 29 nearby disk galaxies in the HERACLES survey, @Usero2015AJ....150..115U found that the IR-to-HCN ratios increase systematically with radius in galaxies, 6–8 times lower near galaxy centers than in the disks. @Longmore2013MNRAS.429..987L stated that the current SFR in the inner 500 pc of our Galaxy is ten times lower than the rate predicted from the dense gas.

The IR–HCN relation in the outer disk at high resolution is not well known because the HCN emission in the outer disk is much weaker than in the center, and there are few high resolution dense gas observations towards the outer disk except for the very nearby galaxies, M31 and M33. In this work, we present observations at 150 pc resolution of a region along the northern spiral arm in the outer disk ($\sim$ 5 kpc from the center) of M51 in HCN(1–0), HCO$^+$(1–0), and HNC(1–0), all of which trace dense molecular gas. HCN and HNC are isomers, and have similar energy spectra and dipole moments. HCO$^+$ has a slightly lower dipole moment and, although it is an ion, appears to also trace dense gas quite well [@Jiang2011MNRAS.418.1753J]. With these observations, we can fill the gap (between GMC scales and kpc-scale regions in galactic disks, $10^7$–$10^8\,L_\odot$ in IR) in the IR–HCN and IR–HCO$^+$ relations.
This region was selected because it is in the outer disk and for its high signal-to-noise HCN spectrum observed with the IRAM 30M telescope (Chen et al. 2015). The metallicity is solar to within the uncertainties [@Bresolin2004ApJ...615..228B]. With these data in a rather small region (in M51), we obtain several independent data points without the potential influence on star formation by the radial (or other) variations of turbulence, metallicity and pressure [@Usero2015AJ....150..115U; @Chen2015ApJ...810..140C; @Bigiel2016ApJ...822L..26B].
| Line | Spectral res. (km s$^{-1}$) | Velocity range (km s$^{-1}$) | Beam (arcsec) | rms (mK) |
|---|---|---|---|---|
| CO(1–0) | 5.1 | 380.6–420.4 | $3.68\times 2.87$ | 395.3 |
| HCN(1–0) | 4.2 | 385.4–421.5 | $4.88\times 3.67$ | 12.0 |
| HCO$^+$(1–0) | 4.2 | 386.9–419.8 | $4.85\times 3.64$ | 11.7 |
| HNC(1–0) | 6.6 | 394.2–412.6 | $4.96\times 3.59$ | 11.3 |
OBSERVATIONS AND DATA REDUCTION
===============================
HCN(1–0), HCO$^+$(1–0), HNC(1–0)
--------------------------------
The two-field mosaic was observed with the NOEMA interferometer in the C and D configurations during 8 short (1–4 hour long) sessions between July 2014 and April 2015. The size and shape of the mosaic are shown as an ellipse in Figure 1. The fields were placed at offsets of (8, -8) and (-8, 8) from 13:29:55.7, 47:13:43.0 (J2000.0). The standard line and WIDEX correlators were used with spectral resolutions of 1.25 and 2 MHz, respectively. The only lines detected were HCN(1–0), HCO$^+$(1–0) and HNC(1–0).
The calibration of the uv data was done using the GILDAS[^1] software package CLIC and the standard pipeline. J1259+516 and/or 1418+546 served as phase and amplitude calibrators and were observed every 20 minutes between source observations. The flux of 1418+546 was 10% lower for the data observed on October 17, 2014 than for the other days, so we used J1259+516 as calibrator for the October 17 data. The uncertainty of the absolute flux calibration is about 10% at 3 mm.
For imaging and cleaning we used the GILDAS software package MAPPING. The two pointings are combined together to create the dirty map including the primary beam correction. Natural weighting was used to obtain the best signal to noise ratio. After imaging, we ran the CLEAN algorithm using the HOGBOM method on the central part of the field for 10 iterations. Then, for the channels where flux was detected, we cleaned carefully using the CO map to guide the CLEAN algorithm. The region CLEANed is shown as a black dashed line in the CO panel of Fig. 2, corresponding roughly to the 5 sigma contour of the CO integrated intensity map. The results for each channel were checked to make sure that the algorithm gave proper results. We chose the best result by comparing with the dirty beam, noise levels and adjacent channels to identify whether further cleaning was necessary. The noise level is 10% better when we cleaned with more iterations ($\sim$2000, reaching the default threshold) than with few iterations ($\sim$100) and the fluxes are stable. The data presented here represent what we estimate as the most reliable reduction but certainly underestimating slightly the true flux density for the HCN, HCO$^+$ and HNC lines because some residual sidelobes are still present. The Jy/K conversion factors are 8.71, 8.69, 8.70, and 8.35 for the CO, HCN, HCO$^+$, and HNC data cubes, respectively. The spectral resolution, beam size, and $rms$ of the cleaned data cubes are shown in Table 1.
The HCN flux measured with the IRAM 30M telescope centered at 13:29:66.64, 47:13:58.0 (J2000.0) is 0.48 K km/s, or 2.3 Jy km/s. The flux of the NOEMA observation is 1.1 Jy km/s in the same region. The difference could be due to missing short spacings, such that larger structures are resolved out, but could also be due to the residual interferometric “bowl", which has not been completely eliminated by the cleaning.
CO and IR Data
--------------
The CO J = 1 - 0 data were taken with the CARMA array combined with zero-spacings from the $5\times5$ Beam Array Receiver System on the Nobeyama Radio Observatory 45M telescope [@Koda2011ApJS..193...19K]. @Schinnerer2013ApJ...779...42S observed M51 with the Plateau de Bure interferometer at high resolution but their map does not extend to regions as far out as this one.
To compare with the sites of recent star formation, we use the 70 $\micron$ image from the Very Nearby Galaxy Survey (VNGS) which was accessed through the Herschel Database in Marseille (HeDaM[^2]$^,$ [^3]). The resolution is only slightly poorer than ours, so that we can use the Herschel 70$\mu$m images to estimate the star formation rates [@Boquien2011AJ....142..111B; @Galametz2013MNRAS.431.1956G].

RESULTS
=======
Figure 2 shows the integrated intensity maps of the HCN, HCO$^+$ and HNC emission lines along with the CO(1–0) image obtained by @Koda2011ApJS..193...19K. The integrated intensities were measured as $I = \int T\,{\mathrm{d}}v$ over the emission velocity ranges, which were defined from the data cube and are given in Table 1. Uncertainties are calculated as $\delta = T_{\rm rms}\sqrt{W\delta_c}$ ($T_{\rm rms}$ is the data cube rms, $W$ and $\delta_c$ are the velocity range and velocity resolution of the integrated intensity map) and are 5.6, 0.15, 0.14 and 0.13 K km/s for CO, HCN, HCO$^+$ and HNC, respectively. Because the HCN, HCO$^+$ and HNC emissions show similar but clearly different morphologies, we chose not to bias our results towards one or another of these tracers but rather use the higher S/N CO map with slightly better angular resolution to define the giant molecular associations (GMAs). A total of 6 GMAs have been identified (black numbered polygons in Fig. 2) by the ClumpFind[^4] algorithm [@Williams1994ApJ...428..693W]. The HCN and HCO$^+$ emission peaks are similar at about 1.2 and 1.4 $K\ km/s$ and the peak positions are consistent with each other at the center of GMA 2. The HNC line is only detected in GMAs 1 and 2, and the emission region is displaced to the south compared to the other lines. GMAs 3, 4 and 6 are weak in HCN but strong in HCO$^+$ (and CO). The HCO$^+$ distribution is generally broader than that of the CO, HCN and HNC. Figure 2 also shows the H$\alpha$ emission regions (white contours) in order to allow a comparison with the sites of massive star formation. As can be seen from the zoom in Fig. 1, the 70 $\mu$m emission distribution is quite similar to that of H$\alpha$, so the white contours provide a good reference for the sites of star formation in these GMAs. GMA 5 clearly shows strong star formation, and its HCN emission is stronger than HCO$^+$ and more centered on the H$\alpha$ than CO.
This may point towards HCN as being more linked to star formation than other dense gas tracers, at least in high ($\sim$solar) metallicity environments. Clearly more high resolution studies are required to clarify this.
Spectra for each line integrated over the area of each GMA are shown in Figure 3 and the integrated intensities and uncertainties are provided in Table 2. As can be seen, the lines are as expected all at the same velocity and with similar line widths. Because several clouds are probably included within one GMA, the hyperfine structure of the HCN line is not visible and the broadening (due to the presence of three components) is not detectable.
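As a consistency check, the integrated-intensity uncertainties quoted above follow directly from $\delta = T_{\rm rms}\sqrt{W\delta_c}$ with the cube parameters listed in Table 1 (an illustrative recomputation of ours; values in K and km s$^{-1}$):

```python
from math import sqrt

# delta = T_rms * sqrt(W * delta_c), using the Table 1 cube parameters:
# (T_rms [K], velocity range W [km/s], channel width delta_c [km/s])
cubes = {
    "CO":   (0.3953, 420.4 - 380.6, 5.1),
    "HCN":  (0.0120, 421.5 - 385.4, 4.2),
    "HCO+": (0.0117, 419.8 - 386.9, 4.2),
    "HNC":  (0.0113, 412.6 - 394.2, 6.6),
}
for line, (t_rms, w, dc) in cubes.items():
    print(f"{line}: {t_rms * sqrt(w * dc):.2f} K km/s")
# Reproduces the quoted 5.6 (CO), 0.15 (HCN) and 0.14 (HCO+) K km/s.
```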
ANALYSIS AND DISCUSSION
=======================
HCN and HCO$^+$ Brightness and Line Ratios
------------------------------------------
M31 and M33 have been observed at virtually the same linear resolution as our M51 observations. However, as the comparison of the M31, M33 and M51 disk GMAs in Figure 4 shows, the integrated intensities of the M51 GMAs are much higher in the HCN and HCO$^+$ lines.
The dense gas fractions, as traced by $I_{\rm HCN}/I_{\rm CO}$ and $I_{\rm HCO^+}/I_{\rm CO}$, are compared in the lower part of Figure 4. Interestingly, the $I_{\rm HCN}/I_{\rm CO}$ and $I_{\rm HCO^+}/I_{\rm CO}$ ratios are similar for the three galactic disks. In M51, $I_{\rm HCN}/I_{\rm CO}$ ratios vary from 0.007 to 0.021, which is similar to the ratio in the Galactic disk [$\sim$ 0.026, @Helfer1997ApJ...478..233H].
With the spectra in Fig. 3 and data in Table 2, we can estimate the CO-based total molecular gas mass (using a Galactic conversion factor of N$_{H_2}$/$I_{\rm CO}$ = $2\times 10^{20}\ {\rm cm^{-2}}/({\rm K\ km\ s^{-1}})$) and the HCN-based dense gas mass [using Eq. 8 from @Gao2004ApJS..152...63G]. With these numbers, we obtain dense gas mass fractions from 2 to 7% for the GMAs in M51. Thus, these observations suggest that a few percent of the mass in a GMC (or GMA, presumably a collection of neighboring GMCs) is in the form of dense gas. The simulations by @Kroupa2001MNRAS.321..699K find that 30% of the dense gas mass is turned into stars, such that 0.6–2.1% of the total cloud mass (2–7% $\times$ 0.3) forms stars in our sample, which is in reasonable agreement with the estimate of @Zuckerman1974ApJ...192L.149Z that about 1% of the mass of a molecular cloud is converted into stars.
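A minimal numeric sketch of this bookkeeping (the cgs constants and the helium-free $\alpha_{\rm CO}$ derived from the adopted $X_{\rm CO}$ are our arithmetic, not values quoted in the text):

```python
# Surface-density conversion implied by the Galactic X_CO factor adopted
# above (H2 only, no helium correction):
M_P, PC, M_SUN = 1.6726e-24, 3.086e18, 1.989e33   # proton mass [g], pc [cm], Msun [g]
X_CO = 2.0e20                                      # N(H2)/I_CO [cm^-2 (K km/s)^-1]
alpha = X_CO * 2 * M_P * PC**2 / M_SUN             # ~3.2 Msun pc^-2 (K km/s)^-1

# Dense-gas fraction (2-7%) times the 30% dense-gas-to-star efficiency
# (Kroupa 2001) gives the fraction of total cloud mass converted into stars:
for f_dense in (0.02, 0.07):
    print(f"{f_dense:.0%} dense gas -> {0.3 * f_dense:.1%} of cloud mass into stars")
```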
The $I_{\rm HCN}/I_{\rm HCO^+}$ line ratios are also compared for the GMAs in M51, M31 and M33. All but two of the GMAs show similar $I_{\rm HCN}/I_{\rm HCO^+}$ ratios, ranging from 0.4 to 1.0. The two GMAs in M51 and M31 hosting powerful H II regions (star-forming regions) show high $I_{\rm HCN}/I_{\rm HCO^+}$ ratios (1.21 and 1.72).
Star Formation Rate vs. Dense Gas Mass
--------------------------------------
To compare the dense gas mass with the SFR of the GMAs in the outer spiral arm of M51, we use the HCN and HCO$^+$ luminosities to trace the dense gas mass and IR luminosity to trace the SFR. The line luminosities are calculated as $L[K\ km\ s^{-1}\ pc^2]= 23.5\Omega [arcsec^2]d_L^2[Mpc]I[K\ km\ s^{-1}]$, where $\Omega$ is the solid angle of the GMA, $d_L$ is the luminosity distance of M51 [7.6 Mpc, @Ciardullo2002ApJ...577...31C] and $I$ is the integrated intensity calculated as in Table 2. The line luminosity uncertainties are determined from the same formula, substituting $I_{rms}$ for $I$ and taking $\Omega$ as either the solid angle of GMA or the $5.6\arcsec$ beam size, whichever is larger. The IR luminosities are derived from Herschel 70 $\micron$ data following the conversion function shown in Table 2 of @Galametz2013MNRAS.431.1956G. The uncertainty in $L_{\rm IR}$ comes from the scatter in the conversion of 70 $\mu$m to IR [0.09 dex, @Galametz2013MNRAS.431.1956G]. This uncertainty dominates the measurement error at 70 $\mu$m (0.01 to 0.05 dex). When calculating luminosities, all the maps have been smoothed to the angular resolution of the 70 $\mu$m data (5.6$\arcsec$) and the results are listed in Table 3.
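The luminosity bookkeeping can be checked numerically as follows (the 0.52 K km/s smoothed HCN intensity is an assumed, representative value, not a number from the text; the GMA 1 solid angle, distance and logarithmic ratios are from the text and Table 3):

```python
from math import log10

def line_luminosity(omega_arcsec2, d_mpc, intensity_k_kms):
    """L [K km/s pc^2] = 23.5 * Omega [arcsec^2] * d_L^2 [Mpc^2] * I [K km/s]."""
    return 23.5 * omega_arcsec2 * d_mpc**2 * intensity_k_kms

# Assumed smoothed HCN intensity of 0.52 K km/s over the 47.16 arcsec^2
# of GMA 1 at d_L = 7.6 Mpc:
L_hcn = line_luminosity(47.16, 7.6, 0.52)        # ~3.3e4 K km/s pc^2

# SFE_dense proxy for GMA 1 (log L_IR = 7.26, L_HCN = 33.2e3 K km/s pc^2):
assert round(7.26 - log10(33.2e3), 2) == 2.74    # the tabulated log(L_IR/L_HCN)
```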
Our observations of the 6 GMAs fill the gap in the IR–HCN and IR–HCO$^+$ relations between the large-scale observations, kpc or larger, and the Galactic measurements (see Figure 5). Both the L$_{\rm IR}$–L$_{\rm HCN}$ and L$_{\rm IR}$–L$_{\rm HCO^+}$ relations in the outer disk GMAs are consistent with the proportionality between L$_{\rm IR}$ and dense gas mass established globally in galaxies. The observations of the GMAs presented here lie quite close to the average IR–HCN relation with no obvious shift or trend, unlike some of the Galactic observations or observations of galactic centers, which tend to have low IR/HCN flux ratios.
There is still some scatter in the IR/HCN and IR/HCO$^+$ ratios of the GMAs in the outer disk, as shown by the insets in Figure 5. Both the IR/HCN and IR/HCO$^+$ ratios of GMA 5 are 3 times higher than those of GMAs 1 and 2. This could be because massive stars are forming in GMA 5, which hosts an H II region. The IR/HCN ratios of GMAs 3, 4 and 6 are almost the same as that of GMA 5, but their IR/HCO$^+$ ratios are about half that of GMA 5, consistent with their lower HCN/HCO$^+$ ratios (about 0.5, compared with about 1.0 for GMAs 1, 2 and 5). The results do not change when we use extinction-corrected H$\alpha$ [@Calzetti2007ApJ...666..870C] or FUV [@Liu2011ApJ...735...63L] instead of IR to trace the SFR.
Table 2. Integrated intensities (K km s$^{-1}$) and line ratios of the GMAs.

[lllllll]{} & GMA 1 & GMA 2 & GMA 3 & GMA 4 & GMA 5 & GMA 6\
$I_{\rm CO}$ & $46.2\pm2.8$ & $47.8\pm4.0$ & $35.0\pm3.9$ & $28.3\pm4.0$ & $30.1\pm2.7$ & $34.0\pm3.2$\
$I_{\rm HCN}$ & $0.53\pm0.06$ & $0.97\pm0.07$ & $0.30\pm0.09$ & $0.25\pm0.10$ & $0.64\pm0.12$ & $0.29\pm0.12$\
$I_{\rm HCO^+}$ & $0.63\pm0.06$ & $0.88\pm0.09$ & $0.60\pm0.11$ & $0.60\pm0.11$ & $0.53\pm0.08$ & $0.60\pm0.10$\
$I_{\rm HNC}$ & $0.21\pm0.04$ & $0.42\pm0.06$ & $0.10\pm0.07$ & $0.13\pm0.07$ & $0.05\pm0.07$ & $0.06\pm0.09$\
$I_{\rm HCN}/I_{\rm CO}$ & $0.011\pm0.002$ & $0.020\pm0.003$ & $0.009\pm0.004$ & $0.009\pm0.005$ & $0.021\pm0.006$ & $0.009\pm0.004$\
$I_{\rm HCO^+}/I_{\rm CO}$ & $0.014\pm0.002$ & $0.018\pm0.003$ & $0.017\pm0.005$ & $0.021\pm0.007$ & $0.018\pm0.004$ & $0.018\pm0.005$\
$I_{\rm HCN}/I_{\rm HCO^+}$ & $0.84\pm0.18$ & $1.10\pm0.19$ & $0.50\pm0.24$ & $0.42\pm0.24$ & $1.21\pm0.41$ & $0.48\pm0.28$
Table 3. Apertures, line and IR luminosities, and luminosity ratios of the GMAs.

[lllllll]{} & GMA 1 & GMA 2 & GMA 3 & GMA 4 & GMA 5 & GMA 6\
$\Omega$ [arcsec$^2$] & $47.16$ & $26.28$ & $20.88$ & $11.52$ & $37.80$ & $28.44$\
$L_{\rm CO}$ [$10^5$ K km s$^{-1}$ pc$^2$] & $26.9\pm1.7$ & $15.4\pm1.7$ & $8.6\pm1.6$ & $3.6\pm1.8$ & $13.3\pm1.2$ & $11.1\pm1.4$\
$L_{\rm HCN}$ [$10^3$ K km s$^{-1}$ pc$^2$] &$33.2\pm3.1$ &$29.3\pm2.8$ & $8.8\pm3.5$ & $4.7\pm3.1$ & $30.0\pm5.6$ & $9.4\pm4.3$\
$L_{\rm HCO^+}$ [$10^3$ K km s$^{-1}$ pc$^2$] & $37.3\pm2.8$ & $27.9\pm3.3$ & $14.5\pm3.9$ & $8.1\pm4.3$ & $23.6\pm3.7$ & $17.4\pm3.9$\
$\log (L_{\rm IR}/L_\odot)$ & $7.26\pm0.09$ & $7.30\pm0.09$ & $7.05\pm0.09$ & $6.99\pm0.09$ & $7.71\pm0.09$ & $7.25\pm0.09$\
$\log (L_{\rm IR}/L_{\rm HCN})$ & $2.74\pm0.09$ & $2.83\pm0.09$ & $3.11\pm0.09$ & $3.32\pm0.09$ & $3.23\pm0.09$ & $3.28\pm0.09$\
$\log (L_{\rm IR}/L_{\rm HCO^+})$ & $2.69\pm0.09$ & $2.85\pm0.09$ & $2.89\pm0.09$ & $3.08\pm0.09$ & $3.33\pm0.09$ & $3.01\pm0.09$
Figure 5 shows not only the extragalactic data points sampling objects with IR luminosities above $10^5$ L$_\odot$ at a variety of scales, but also many Galactic observations, which sample down to very low luminosities. While some of the apparent scatter in Figure 5 could simply be due to noise, Wu et al. (2005) and Ma et al. (2013) show that the IR luminosity at the weak end falls below the prediction of the linear IR–HCN (or IR–HCO$^+$) relation. At very small scales or low luminosities, this non-linearity could be introduced by incomplete sampling of the stellar IMF and hence a poor measurement of the SFR, which depends strongly on sampling the high-mass end of the IMF. It has been suggested that the SFR should exceed $0.001$–$0.01\ M_\odot\ {\rm yr}^{-1}$ to completely sample the stellar IMF. The SFRs of our GMAs are about $0.002$–$0.01\ M_\odot\ {\rm yr}^{-1}$ [derived from the IR with Eq. 9 of @Gao2004ApJ...606..271G, although that relation is mainly used globally in star-forming galaxies], so it is not clear whether the scatter in the IR/HCN and IR/HCO$^+$ ratios of these GMAs could be due to incomplete sampling.
SUMMARY
=========
We mapped a selected region on the outer spiral arm of M51 in HCN(1–0), HCO$^+$(1–0) and HNC(1–0) using the NOEMA interferometer with an angular resolution of 4$\arcsec$ ($\sim 150$ pc).\
(1) We detected bright emission of HCN and HCO$^+$ in 6 GMAs defined by CO(1–0) data, while HNC emission is detected only in the two brightest GMAs.\
(2) The HCO$^+$ spatial distribution is generally broader than that of HCN and HNC. One of the GMAs hosts a powerful H II region, and there HCN is stronger than HCO$^+$.\
(3) The GMAs in M51 are brighter in both HCN and HCO$^+$ than the GMAs in M31 and M33, but the CO/HCN, CO/HCO$^+$ and HCN/HCO$^+$ ratios are similar in the three galaxies.\
(4) Combined with Herschel 70 $\mu$m data, we find that both the L$_{\rm IR}$–L$_{\rm HCN}$ and L$_{\rm IR}$–L$_{\rm HCO^+}$ relations in the GMAs of M51 follow, within the scatter, the proportionality between L$_{\rm IR}$ and the dense gas mass established globally in galaxies.\
(5) The IR/HCN and IR/HCO$^+$ ratios of the GMAs vary by a factor of 3, probably depending on whether massive stars are forming or not.
We appreciate the generous support from IRAM staff and GILDAS team during the observations and data reduction. This work is supported by the program for Outstanding PhD candidate of Nanjing University, the National Natural Science Foundation of China (grants 11173059, 11390373, 11420101002, 11273015 and 11133001), CAS pilot-b program (No. XDB09000000) and the National Basic Research Program (973 program No. 2013CB834905). We are very grateful to Campus France for Xu Guangqi grant 34454YG which helped fund the travel necessary for this work.
Baan, W. A., Henkel, C., Loenen, A. F., Baudry, A., & Wiklind, T. 2008, , 477, 747
Bigiel, F., Leroy, A. K., Blitz, L., et al. 2015, , 815, 103
Bigiel, F., Leroy, A. K., Jiménez-Donaire, M. J., et al. 2016, , 822, L26
Boquien, M., Calzetti, D., Combes, F., et al. 2011, , 142, 111
Bresolin, F., Garnett, D. R., & Kennicutt, R. C., Jr. 2004, , 615, 228
Brouillet, N., Muller, S., Herpin, F., Braine, J., & Jacq, T. 2005, , 429, 153
Buchbender, C., Kramer, C., Gonzalez-Garcia, M., et al. 2013, , 549, A17
Calzetti, D., Kennicutt, R. C., Engelbracht, C. W., et al. 2007, , 666, 870
Chen, H., Gao, Y., Braine, J., & Gu, Q. 2015, , 810, 140
Chin, Y.-N., Henkel, C., Whiteoak, J. B., et al. 1997, , 317, 548
Chin, Y.-N., Henkel, C., Millar, T. J., Whiteoak, J. B., & Marx-Zimmer, M. 1998, , 330, 901
Ciardullo, R., Feldmeier, J. J., Jacoby, G. H., et al. 2002, , 577, 31
Curran, S. J., Polatidis, A. G., Aalto, S., & Booth, R. S. 2001, , 368, 824
Evans, N. J., II 1999, , 37, 311
Galametz, M., Kennicutt, R. C., Calzetti, D., et al. 2013, , 431, 1956
Gao, Y., & Solomon, P. M. 2004, , 606, 271
Gao, Y., & Solomon, P. M. 2004, , 152, 63
Gao, Y., Carilli, C. L., Solomon, P. M., & Vanden Bout, P. A. 2007, , 660, L93
García-Burillo, S., Usero, A., Alonso-Herrero, A., et al. 2012, , 539, A8
Graciá-Carpio, J., García-Burillo, S., Planesas, P., Fuente, A., & Usero, A. 2008, , 479, 703
Helfer, T. T., & Blitz, L. 1997, , 478, 233
Jiang, X., Wang, J., & Gu, Q. 2011, , 418, 1753
Juneau, S., Narayanan, D. T., Moustakas, J., et al. 2009, , 707, 1217
Kennicutt, R. C., & Evans, N. J. 2012, , 50, 531
Kepley, A. A., Leroy, A. K., Frayer, D., et al. 2014, , 780, L13
Koda, J., Sawada, T., Wright, M. C. H., et al. 2011, , 193, 19
Krips, M., Neri, R., García-Burillo, S., et al. 2008, , 677, 262
Kroupa, P., Aarseth, S., & Hurley, J. 2001, , 321, 699
Krumholz, M. R., & Kruijssen, J. M. D. 2015, , 453, 739
Liu, D., Gao, Y., Isaak, K., et al. 2015, , 810, L14
Liu, L., Gao, Y., & Greve, T. R. 2015, , 805, 31
Liu, G., Koda, J., Calzetti, D., Fukuhara, M., & Momose, R. 2011, , 735, 63
Longmore, S. N., Bally, J., Testi, L., et al. 2013, , 429, 987
Lu, N., Zhao, Y., Xu, C. K., et al. 2014, , 787, L23
Ma, B., Tan, J. C., & Barnes, P. J. 2013, , 779, 79
Mutchler, M., Beckwith, S. V. W., Bond, H., et al. 2005, Bulletin of the American Astronomical Society, 37, 13.07
Privon, G. C., Herrero-Illana, R., Evans, A. S., et al. 2015, , 814, 39
Rand, R. J. 1992, , 103, 815
Rosolowsky, E., Pineda, J. E., & Gao, Y. 2011, , 415, 1977
Schinnerer, E., Meidt, S. E., Pety, J., et al. 2013, , 779, 42
Solomon, P. M., Downes, D., & Radford, S. J. E. 1992, , 387, L55
Usero, A., Leroy, A. K., Walter, F., et al. 2015, , 150, 115
Williams, J. P., de Geus, E. J., & Blitz, L. 1994, , 428, 693
Wu, J., Evans, N. J., II, Gao, Y., et al. 2005, , 635, L173
Wu, J., Evans, N. J., II, Shirley, Y. L., & Knez, C. 2010, , 188, 313
Zhang, Z.-Y., Gao, Y., Henkel, C., et al. 2014, , 784, L31
Zhao, Y., Lu, N., Xu, C. K., et al. 2016, , 820, 118
Zuckerman, B., & Evans, N. J., II 1974, , 192, L149
[^1]: http://www.iram.fr/IRAMFR/GILDAS
[^2]: http://hedam.lam.fr
[^3]: The Herschel Database in Marseille (HeDaM) is operated by CeSAM and hosted by the Laboratoire d’Astrophysique de Marseille.
[^4]: http://www.ifa.hawaii.edu/users/jpw/clumpfind.shtml
---
abstract: 'A variant of the usual Lagrangian scheme is developed which describes both the equations of motion and the variational equations of a system. The required (prolonged) Lagrangian is defined in an extended configuration space comprising both the original configurations of the system and all the virtual displacements joining any two integral curves. Our main result establishes that both the Euler-Lagrange equations and the corresponding variational equations of the original system can be viewed as the Lagrangian vector field associated with the first prolongation of the original Lagrangian. After discussing certain features of the formulation, we introduce the so-called inherited constants of the motion and relate them to the Noether constants of the extended system.'
address:
- '$^a$ Departamento de Física, Facultad de Ingeniería, Universidad Nacional de Mar del Plata, Av. J.B. Justo 4302, 7600 Mar del Plata, Argentina.'
- '$^{b}$Departamento de Matemáticas, Universidad Autónoma Metropolitana-Iztapalapa, Apartado Postal 55-534 Iztapalapa 09340 D. F., México.'
- '$^{c}$Departamento de Física, Universidad Autónoma Metropolitana-Iztapalapa, Apartado Postal 55-534 Iztapalapa 09340 D. F., México.'
- '$^d$Laboratorio de Sistemas Dinámicos, Universidad Autónoma Metropolitana-Azcapotzalco, Apartado Postal 21-267, Coyoacán 04000 D. F., México.'
author:
- 'C. M. Arizmendi$^{a}$, J. Delgado$^{b}$, H. N. Núñez-Yépez$^{c}$[^1], A. L. Salas-Brito$^{d}$'
title: Lagrangian Description of the Variational Equations
---
Introduction
============
The variational equations—this is Whittaker’s [@Whittaker] terminology—associated with dynamical systems are customarily obtained by linearizing the equations of motion around a particular solution. Variational equations are important for understanding both stability and integrability issues [@pla00; @steeb; @ajp84; @uam00; @case85]. They are also of interest because in general relativity and other metric theories of gravitation (such as the Jordan–Brans–Dicke theory) they can be regarded as the equations of geodesic deviation, and because they can be useful for describing linearized gravitation [@sussman2; @mtw; @dicke; @robin]. These equations are likewise useful for studying certain evolution equations [@steeb; @case85; @uam00; @case78; @matsuno]. Additionally, in chaos theory they are basic for defining the Liapunov spectra and the related Kolmogorov entropy [@pla00; @jackson].
This work deals with the variational equations associated with Lagrangian systems. This is not much of a limitation, however, since quantum mechanics, geometric control schemes, and field theories can all be described through a Lagrangian function; in all of these the problem of solving the variational equations is important [@case85; @case78; @sussman]. We should also mention geodesics in Riemannian manifolds and their associated Jacobi fields, whose properties have inspired much research and produced important results [@lanczos; @carmo].
Though the equations of motion of the aforementioned theories stem from a Lagrangian through the Euler-Lagrange equations, the variational equations are normally formulated outside such a scheme, notwithstanding the fact that working within the Lagrangian formulation is convenient from both a physical and a mathematical standpoint [@pla00; @uam00; @general; @tapia; @tapia2]. One of our aims here is to remedy this situation: we discuss a complete Lagrangian formulation of the variational equations and take advantage of the description to obtain constants of motion. A feature of the approach is that the equations can be established with no reference whatsoever to any [*specific*]{} solution of the original equations, as is necessarily the case in the standard formulation. This formulation should also be of importance for the theory of the [*Jacobi fields*]{} governing the transition from a geodesic to a nearby one in the calculus of variations, [[*i.e.*]{}]{} those vector fields which make the second variation vanish identically except perhaps for boundary terms. The notion of Jacobi equations as an outcome of the second variation is in fact considerably more general than this; general formulae for the second variation and generalized Jacobi equations along critical sections have already been considered in the calculus of variations from a structural point of view, as it has been called in [@tapia] (see also [@rund66]).
The paper is organized as follows. In section II we discuss the variational formulation first introduced in [@pla00]. Section III deals with the main features of the formulation, mentioning some of its possible invariances. In section IV we exhibit how to every constant of motion of the original problem there corresponds another one—what we call the inherited constant—valid in the variational system. In section V we prove Noether’s theorem for the variational equations and establish under what conditions Noether constants can be regarded as inherited ones. We should pinpoint that Noether’s theorem is able to reproduce the known constants associated with symmetries of the original Lagrangian $L$, but that it also implies that to every $n$-parameter symmetry of $L$ there additionally exist $n$ new independent conserved quantities in the variational equations—which can nevertheless be trivial in some circumstances. In section VI we use examples to pinpoint the kind of constants we may obtain by extending symmetries of the original Lagrangian [@rmf02]. One employs the simple case of ignorable coordinates in classical mechanics to explain some further points on the application of Noether’s theorem. The other uncovers a conserved vector that is valid in linearized gravitation in vacuum. Section VII contains the conclusions and some final remarks.
The Lagrangian for the variational equations
============================================
Let us consider any system described in terms of a Lagrangian, that is, by a real-valued function $L$ defined on the tangent bundle $TQ$ of the $N$-dimensional configuration space $Q$ of the system; we additionally assume that $L$ is non-degenerate. The equations of motion of the system follow from
$$\label{1}
\frac{d}{dt}\left(\frac {\partial L} {\partial \dot q^a}\right)= \frac {\partial L} {\partial q^a}, \quad a=1,2, \dots N.$$
The solutions of these Euler-Lagrange equations define the dynamics of the system. The formulation is described, except in section V and in one of the examples of section VI, under the implicit assumption that the parameter $t$ is one-dimensional; it is usually referred to as the time, but it can actually be any other parameter, such as the arc-length for geodesics in Riemannian manifolds, or even a finite set of parameters as in field theory. In such cases the necessary changes are easily made [@carmo; @soper; @rmf02], as the reader can verify for herself (see sections V and VI for specific examples).
To describe deviations from the dynamics —the realm of the Jacobi equations— let us consider an augmented configuration space $D$ comprising all the original configurations of the dynamical system plus all the elements of the Jacobi field that join two of its integral curves. The elements of $D$ can hence be coordinatized by pairs $(q^a,\epsilon^a)$ where the $\epsilon^a$ stand for the “virtual displacements” or deviations connecting two solutions of (\[1\]). In local coordinates we thus set $q'^a=q^a+\epsilon^a$ and $\dot q'^a=\dot q^a+\dot \epsilon^a$ where $\epsilon^a$ is assumed “small”. Formally we may regard $D$ as the double tangent bundle of the configuration manifold of the original system composed with a canonical “flip” mapping $\alpha$ reordering the local coordinates as $(q,\epsilon, \dot q, \dot \epsilon)\; \stackrel{\alpha}{\mapsto} \;(q,\dot q, \epsilon, \dot\epsilon)$ [@ijtp02].
We can now define the new Lagrangian as a function on the tangent bundle $TD$ of the d'Alembert configuration manifold $D$, such that
$$\label{defg}
\gamma(q,\epsilon,\dot q, \dot\epsilon)=\frac {\partial L} {\partial q^a}(q, \dot q) \epsilon^a + \frac {\partial L} {\partial \dot q^a} (q,\dot q)\dot\epsilon^a\equiv {\cal D}_\epsilon L,$$
where we have written the definition in $TD$’s local coordinates, use the summation convention, and take the opportunity to define the operator ${\cal D}_\epsilon$. Thus $\gamma$ can be regarded as a directional derivative of $L$ along a virtual configuration, or as the effect on $L$ of an operator that “lifts it a little” out of the original space [@crampin]. Formally $\gamma$ can be regarded as a [*prolongation*]{} [@uam00; @sussman] of $L$ [@pla00; @general]. Note also that $TD$ is the natural domain of definition of $\gamma$ since the virtual displacements are defined in the double tangent bundle of $Q$, and that the mapping $\alpha$ is well defined since the $\epsilon$’s are elements of the tangent space $T_qQ$ at the point $q$ in the configuration space $Q$. The $N$-dimensional object $\epsilon$ hence plays the role of the variational field associated with the dynamical system’s solutions [@Whittaker; @uam00]. We should pinpoint that, despite the fact that we work in a specific chart of the configuration manifold, all the results can be translated into intrinsic language, as has been done in [@uam00; @ijtp02]. Be warned that the preliminary work reported in section 4 of [@uam00] has been revised in [@ijtp02].
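To make the definition (\[defg\]) concrete, a minimal example of ours (not part of the original development): for the one-dimensional harmonic oscillator,

```latex
% Illustrative example: L = (m/2)\dot q^2 - (k/2)q^2
\gamma(q,\epsilon,\dot q,\dot\epsilon)
  = \frac{\partial L}{\partial q}\,\epsilon
  + \frac{\partial L}{\partial \dot q}\,\dot\epsilon
  = -\,k\,q\,\epsilon \;+\; m\,\dot q\,\dot\epsilon .
```

Varying $\epsilon$ in this $\gamma$ reproduces $m\ddot q=-kq$, while varying $q$ gives $m\ddot\epsilon=-k\epsilon$, anticipating the general result below.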
Given the Lagrangian $\gamma$, we can immediately write down the $2N$ associated Euler-Lagrange equations of motion
$$\label{elgep}
\frac {d} {dt} \left( \frac {\partial\gamma}{\partial\dot \epsilon^a}\right)-\frac{\partial\gamma} {\partial\epsilon^a}=0,$$
and
$$\label{elgq}
\frac {d} {dt} \left( \frac {\partial\gamma}{\partial\dot q^a}\right)-\frac{\partial\gamma} {\partial q^a}=0.$$
Using definition (\[defg\]) in (\[elgep\]), we immediately obtain the Lagrange equations of the original system ([[*i.e.*]{}]{} we reproduce Eq. (\[1\])) and, from (\[elgq\]), we get their associated linear variational equations [@Whittaker; @pla00; @uam00; @case85]
$$\label{vareq}
M_{ab} \ddot\epsilon^b + C_{ab} \dot\epsilon^b + K_{ab} \epsilon^b =0, \qquad a=1,2,\dots, N,$$
where the three objects $M$, $C$, $K$, are defined by the $N\times N$ matrices
$$\begin{aligned}
\label{mat}
M_{ab} &=&\left(\frac{\partial^2 L}{ \partial \dot q^a \partial \dot
q^b}\right), \\
C_{ab} &=&\left[\frac{d}{ dt}
\left( \frac{\partial^2L}{ \partial\dot q^a \partial\dot q^b}
\right) + \frac{\partial^2 L}{ \partial \dot q^a \partial q^b} -
\frac{\partial^2 L}{ \partial \dot q^b \partial q^a}\right], \\
K_{ab}&=&\left[\frac{d}{
dt}\left( \frac{\partial^2L}{ \partial\dot q^a \partial q^b} \right) -
\frac{\partial^2 L}{ \partial
q^a \partial q^b}\right], \quad
a,b=1,\dots,N. \end{aligned}$$
The variational equations (\[vareq\]) and the matrices $M$, $C$ and $K$ are [*not*]{} explicitly time dependent unless $L$ itself is so from the start, a property not found in the standard approaches [@Whittaker; @case85; @case78]. This happens because we are not linearizing around a particular solution but taking instead a less local point of view. Of course, in the end it is all the same, since in order to solve (\[elgq\]) we need to insert the solutions of (\[elgep\]). But the explicit time-independence of the coefficients in the equations of motion (\[vareq\]) allows devising methods for analysing the variational equations analogous to those used in time-independent Lagrangian systems. Such lack of explicit $t$-dependence can be compared to what happens with the Lagrangian itself, which is not regarded as time-dependent—unless it happens to be non-autonomous—despite the fact that it depends on $(q^a,\dot q^a)$ and the solutions are always explicitly time-dependent.
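The structure of Eqs. (\[1\]) and (\[vareq\]) can be checked numerically. The sketch below is our illustration, not part of the paper: for the pendulum Lagrangian $L=\dot q^2/2+\cos q$, Eq. (\[1\]) gives $\ddot q=-\sin q$ and the matrices (\[mat\]) reduce to $M=1$, $C=0$, $K=\cos q$, so $\ddot\epsilon=-\cos(q)\,\epsilon$. Integrating the combined flow verifies that $\epsilon(t)$ tracks the difference of two nearby exact solutions to first order.

```python
import math

def rk4_step(f, y, h):
    """One classical Runge-Kutta step for y' = f(y)."""
    k1 = f(y)
    k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h * (a + 2*b + 2*c + d) / 6.0
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Pendulum L = qdot^2/2 + cos q:
#   equations of motion:   qddot   = -sin q
#   variational equations: epsddot = -cos(q) * eps   (M = 1, C = 0, K = cos q)
def combined(y):
    q, qd, e, ed = y
    return [qd, -math.sin(q), ed, -math.cos(q) * e]

def original(y):
    q, qd = y
    return [qd, -math.sin(q)]

h, steps, delta = 0.001, 3000, 1e-6
y = [0.3, 0.0, delta, 0.0]   # (q, qdot, eps, epsdot); eps starts as the offset
yp = [0.3 + delta, 0.0]      # a genuinely displaced second solution
for _ in range(steps):
    y = rk4_step(combined, y, h)
    yp = rk4_step(original, yp, h)

# eps(t) should match the difference of the two exact solutions to O(delta^2)
error = abs((yp[0] - y[0]) - y[2])
```

Note that, as the text stresses, no particular solution had to be singled out to write down the variational system; the coupling enters only through $q(t)$ at integration time.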
Some features of the Lagrangian formulation.
============================================
In this section we pinpoint the main features of the formulation; for a preliminary or a more mathematical outlook see, respectively, [@pla00] and [@ijtp02].
As it should be clear by now the function $\gamma $ is an entirely new Lagrangian which describes the original system [*plus*]{} its response to perturbations, [[*i.e.*]{}]{} what we have called the “virtual displacements” of the system. It is thus useful for studying both matters of stability and, given the relationship of the variational equations with the Painlevé test, also of integrability [@Whittaker; @pla00; @uam00; @steeb].
Moreover the Lagrangian formulation of the variational equations can be dressed in variational robes by using the function $\gamma$ to define the action functional
$$\Sigma[ q(t), \epsilon(t)]=\int_{t_1}^{t_2}
\gamma( q, \epsilon, \dot q, \dot \epsilon, t)\,
dt \label{action}$$
of the paths joining two (extended) configurations $(q_1,\epsilon_1)$ and $(q_2,\epsilon_2)$ of the system at two given instants of time $t_1$ and $t_2$. The statement of the variational principle is then just Hamilton’s, that is, $\Sigma$ should be extremal when the system follows its actual path in $D$ [@pla00; @uam00]. In this way we can obtain Eq. (\[1\]) and Eq. (\[vareq\]) as a direct consequence of Hamilton’s principle. The variational principle then becomes a compact formulation which may allow finding connections between the dynamical properties of the variational fields with other areas of physics or of mathematics [@pla00; @rund; @rau].
As in any Lagrangian formulation, equations (\[1\]) and (\[vareq\]) are invariant under arbitrary point transformations —[[*i.e.*]{}]{} changes of coordinates— in $Q$, $\bar{q}^a=f^a(q,t),\; a=1,\dots, N$, where the $f^a$ are functions assumed to be invertible ($\det(df(q))\neq0,\, q\in Q$). This result can be proved in the standard way [@Whittaker; @rund].
It is well known [@Whittaker; @lanczos; @LL] that the equations of motion (\[1\]) remain unchanged if we add to $L$ the total time derivative of any function of $q$ and $t$. This property is reflected in the invariance of Eqs. (\[elgep\]) and (\[elgq\]), obtained from $\gamma$, under the substitution
$$\label{transf}
\gamma(q,\epsilon,\dot q,\dot\epsilon)\to \gamma(q,\epsilon,\dot q,\dot\epsilon) +\left( \frac {\partial^2 f(q,t)} {\partial q^b\partial q^a} \dot q^b+\frac{\partial^2 f(q,t)} {\partial t \partial q^a}\right) \epsilon^a +\frac {\partial f(q,t)} {\partial q^a} \dot \epsilon^a.$$
Note that the function $f$ is arbitrary except for being required to depend only on $q$ and on $t$. Of course, the equations of motion also remain unchanged when the total time derivative of any function, $g$, of $\epsilon$, $q$, and $t$ is added to $\gamma(q,\epsilon,\dot q,\dot\epsilon)$:
$$\label{transg}
\gamma(q,\epsilon,\dot q,\dot\epsilon)\to \gamma(q,\epsilon,\dot q,\dot\epsilon) +\frac {d g(q,\epsilon,t)} {dt};$$
the proof of this property follows from the fact that the derivative in the right hand side of (\[transg\]) satisfies identically Eqs. (\[elgep\]) and (\[elgq\]) [@lanczos; @LL]. Any transformation (\[transf\]) is a particular case of (\[transg\]) if we choose
$$g(q,\epsilon,t)=\frac {\partial f(q,t)}{\partial q^a} \epsilon^a.$$
This means that the extended system we are describing has a greater invariance than the original one; [[*i.e.*]{}]{} the extended system described by $\gamma(q,\epsilon,\dot q, \dot \epsilon, t)$ is invariant not just under the point transformations in $Q$ explicitly mentioned above but also under the larger class of point transformations in d'Alembert's configuration space $D$ [@pla00; @uam00; @LL].
It is to be noted that the $\epsilon$-derivatives of $\gamma$ reduce to corresponding $q$-derivatives of $L$ as follows
$$\frac{\partial \gamma} {\partial\dot\epsilon^a}=
\frac{\partial L} {\partial \dot q^a},\qquad \hbox{and}
\qquad \frac{\partial \gamma} {\partial \epsilon^a}=
\frac{\partial L} {\partial q^a}. \label{9}$$
As a consequence of Eqs. (\[9\]), $\gamma$ is a first-order homogeneous function of the virtual displacements $\epsilon^a$ and velocities $\dot \epsilon^a$
$$\gamma= \frac{\partial \gamma} {\partial \epsilon^a}\epsilon^a +
\frac{\partial \gamma} {\partial \dot\epsilon^a}\dot\epsilon^a, \label{10}$$
and, therefore, it can be also written in the form
$$\label {totalder}
\gamma=\frac {d} {dt} \left( \frac {\partial\gamma} {\partial \dot\epsilon^a} \epsilon^a\right)$$
where use has been made of the equations of motion (\[elgep\]); that is, relation (\[totalder\]) is valid along the integral curves associated with Eqs. (\[1\]) and (\[vareq\]). This equation also implies that the action $\Sigma$ evaluated along such integral curves is
$$\Sigma= \frac {\partial\gamma} {\partial \dot\epsilon^a} \epsilon^a +\Sigma_0,$$
where $\Sigma_0$ is the value of $\Sigma$ at a certain convenient reference time $t_0$.
There are other relationships that can be written in terms of $\gamma$ and its derivatives. For example, the Hessian[@carmo] of $L$,
$$\label{W}
W = \frac {1} {2} \frac {\partial^2 L} {\partial q^a \partial q^b} \epsilon^a \epsilon^b + \frac {\partial ^2 L} {\partial \dot q^b \partial q^a} \epsilon^a\dot\epsilon^b+ \frac {1} {2}\frac{\partial^2 L}{\partial \dot q^a\partial \dot q^b}\dot \epsilon^a \dot\epsilon^b,$$
can be expressed in terms of $\gamma$ as
$$\begin{aligned}
\label{hessian}
W &=& \frac {1} {2} {\cal D}_\epsilon \gamma \\
&=&
\frac{1}{2} \left[ \frac {\delta \gamma} {\delta q^a}\epsilon^a + \frac{d}{dt} \left( \frac {\partial\gamma}{\partial \dot q^a}\epsilon^a\right)\right],\end{aligned}$$
where ${\cal D}_\epsilon$ is the operator defined in (\[defg\]) and $\delta \gamma/\delta q$ is the functional derivative of $\gamma$ with respect to $q$ [@nash; @riesz]. We pinpoint that $W$ can play an important role in variational problems since its sign distinguishes between minima, maxima and degenerate critical sections [@carmo; @tapia; @LL]. Note the similarity of the expression (\[hessian\]) for $W$ with the definition of $\gamma$; this has interesting consequences for the theory of second variations when the action functionals are defined by first-order Lagrangians [@tapia; @tapia2].
Constants of motion in the variational equations.
=================================================
Let us consider any constant of motion $J(q,\dot q)$ of the Lagrangian vector field of Eq. (\[1\]), and let us evaluate it on two nearby solutions of Eq. (\[1\]), $q(t)$ and $q'(t)=q(t)+\epsilon(t)$. The difference between this constant evaluated on the two nearby trajectories, $j(q, {\epsilon}, \dot q, \dot{\epsilon})\equiv
J( {q}',\dot{ q}')-J({ q},\dot{ q})$, is also trivially a constant,
$$\frac{d j({ q, \epsilon}, \dot q, \dot{\epsilon})} {dt}=\frac{d} {dt}
\left[ J({ q}',\dot {q}')-J({ q},\dot{ q}) \right] =0. \label{17}$$
This constant, $j(q,{\epsilon},\dot q,\dot\epsilon)$, which we call an [*inherited constant*]{} [@uam00], can also be expressed as
$$\label {incons}
j(q,{ \epsilon},\dot q, \dot \epsilon) = {\cal D}_\epsilon J= \left(\frac{\partial J} {\partial q^a}\epsilon^a+ \frac{\partial
J} {\partial \dot q^a}\dot\epsilon^a \right). \label{18}$$
[[*i.e.*]{}]{} as a sort of directional derivative of $J$ along a virtual configuration. Equation (\[18\]) tells us how, given a solution of equations (\[1\]) and any one of its constants of motion, we can obtain a constant of motion for equations (\[vareq\]). This result establishes a direct relationship between integrals of motion of the variational equations, like $j$, and constants of the original equations (\[1\]). Related results are discussed in [@case85]. We need to pinpoint that, since in some non-linear evolution equations the integrals of motion $J[q(t)]$ are functionals [@nash; @riesz]—and not functions—of the solutions of (\[1\]), the constant $j$ has then to be regarded also as a functional of the Jacobi fields $\epsilon(t)$, given by [@uam00; @case78]
$$j[\epsilon(t)]=\int \frac{\delta J[q(t)]}{ \delta q(\tau)} \epsilon(\tau)\; d\tau,
\label{19}$$
as has been shown in [@case85]; in Eq. (\[19\]) ${\delta J[q(t)]/ \delta q(\tau)}$ stands for the functional derivative of the functional $J[q(t)]$.
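Equation (\[incons\]) is easy to verify numerically. In the sketch below (ours; the pendulum choice is purely illustrative) the energy $J=\dot q^2/2-\cos q$ of $L=\dot q^2/2+\cos q$ yields the inherited constant $j=\sin(q)\,\epsilon+\dot q\,\dot\epsilon$, which is checked to be conserved along the combined flow of Eqs. (\[1\]) and (\[vareq\]).

```python
import math

# Pendulum L = qdot^2/2 + cos q; energy J = qdot^2/2 - cos q.
# Inherited constant, Eq. (incons):
#   j = (dJ/dq) eps + (dJ/dqdot) epsdot = sin(q) eps + qdot epsdot
def j_inherited(q, qd, e, ed):
    return math.sin(q) * e + qd * ed

def step(y, h):
    """One RK4 step of the combined flow (q, qdot, eps, epsdot)."""
    def f(s):
        q, qd, e, ed = s
        return (qd, -math.sin(q), ed, -math.cos(q) * e)
    k1 = f(y)
    k2 = f(tuple(yi + 0.5 * h * ki for yi, ki in zip(y, k1)))
    k3 = f(tuple(yi + 0.5 * h * ki for yi, ki in zip(y, k2)))
    k4 = f(tuple(yi + h * ki for yi, ki in zip(y, k3)))
    return tuple(yi + h * (a + 2*b + 2*c + d) / 6.0
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

y = (0.5, 0.2, 0.01, -0.03)   # arbitrary initial (q, qdot, eps, epsdot)
j0 = j_inherited(*y)
for _ in range(4000):
    y = step(y, 0.001)
```

Analytically $dj/dt=\cos(q)\dot q\,\epsilon+\sin(q)\dot\epsilon-\sin(q)\dot\epsilon-\cos(q)\dot q\,\epsilon=0$, and the numerical drift is only the integrator's truncation error.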
At this point we recall the relationship between symmetries of $L$ and the existence of integrals of motion in Lagrangian systems. This relation is summarized in Noether’s theorem, which in the next section we proceed to extend to the existence of constants of motion in the variational equations. Such an extension casts (\[incons\]) in a different and perhaps more interesting form by relating it to a symmetry of the original Lagrangian $L$. But, as we mentioned before, the symmetry group of the variational equations is larger than that of the original Lagrangian. Thus Noether’s theorem relates any continuous symmetry of $\gamma$—not necessarily coming directly from one of $L$—to a constant of motion of the variational equations.
Symmetries and Integrals of Motion.
===================================
In the previous section we established the existence of what we called inherited constants of motion. As every isolating constant in a Lagrangian system comes from a symmetry transformation [@LL; @rhor], this exhibits that to every symmetry of $L$ there exists a related integral of motion of the variational equations. It also shows that any symmetry of $L$ can be regarded as a symmetry of $\gamma$; the converse of this statement is not necessarily true.
In this section we formalize the just-mentioned relation between constants of motion and symmetries of $\gamma$, making explicit the relation between the inherited constants of motion and the generators of the symmetry transformations of $L$. These results can be of particular importance in field-theoretic applications or in non-linear evolution equations [@case85; @case78; @tapia2; @matsuno]. With such field-theoretic applications in mind, in this section we assume that $L$ depends on $n$ parameters $t^\mu$ and not on a single one as in the rest of the paper. The $n$ parameters are further assumed to belong to a given $n$-dimensional differentiable manifold $V_n$, as would happen if they were points in the space-time continuum.
Thus, let us suppose that the Lagrangian $\gamma$ is invariant under the $r$-parameter continuous group of transformations
$$\label{tg}
\bar{q}^a= F_{\bar{q}}^a(t,q,\epsilon, \omega),\quad\bar{\epsilon}^a=F^a_{\bar{\epsilon}}(t,q,\epsilon,\omega),\quad \bar{t}^\mu=F^\mu_{\bar{t}}(t,q,\epsilon,\omega),$$
in which the $\omega^s\; (s=1,\dots, r)$ denote the $r$ parameters of the group, furthermore assumed independent of each other. The functions $F_{\bar{q}}^a$, $F^a_{\bar{\epsilon}}$ and $F^\mu_{\bar{t}}$ are parameterized by $\omega^s\in I$ (where $I$ is the set of parameters of the transformation group) and are such that $\omega^s=0,\; s=1,\dots, r,$ corresponds to the identity transformation, [[*i.e.*]{}]{} the group is continuously connected with the identity. The transformations (\[tg\]) can also be written in the infinitesimal form
$$\begin{aligned}
\label{itg}
\bar{q}^a=q^a + \zeta^a_{s}(t,q,\epsilon)\omega^s,\\
\bar{\epsilon}^a=\epsilon^a + \eta^a_{s}(t,q,\epsilon)\omega^s,\\
\bar{t}^\mu=t^\mu+ \xi^\mu_s(t,q,\epsilon)\omega^s,\end{aligned}$$
where $\xi^\mu_s(t,q,\epsilon)$, $\zeta^a_{s}(t,q,\epsilon)$ and $\eta^a_{s}(t,q,\epsilon)$ are the infinitesimal generators of the symmetry transformation, given by
$$\begin{aligned}
\zeta^a_{s}&=& \left(\frac{\partial F_{\bar{q}}^a}{\partial \omega^s}\right)_{\omega^s=0},\label{igtg1}\\
\eta^a_{s}&=& \left(\frac{\partial F^a_{\bar{\epsilon}}}{\partial \omega^s}\right)_{\omega^s=0},\label{igtg2}
\\
\xi^\mu_s&=& \left(\frac {\partial F^\mu_{\bar{t}}} {\partial \omega^s}\right)_{\omega^s=0}\label{igtg3}.\end{aligned}$$
That the system described by $\gamma$ remains invariant under the infinitesimal transformations (\[igtg1\]–\[igtg3\]) means that the action $\Sigma$ \[Eq. (\[action\])\] must behave under the symmetry transformations as [@rund]
$$\label{condition}
\gamma \left(
\bar{q}^a,
\frac{\partial \bar{q}^a} {\partial \bar{t}^\mu},
\bar{\epsilon}^a,
\frac{\partial \bar{\epsilon}^a} {\partial \bar{t}^\mu},
\bar{t}^\mu
\right)
\det \left(\frac{\partial \bar{t}^\mu}{\partial t^\nu} \right) = \gamma\left(
{q}^a,
\frac{\partial {q}^a} {\partial {t}^\mu},
{\epsilon}^a,
\frac{\partial {\epsilon}^a} {\partial {t}^\mu},
{t}^\mu
\right).$$
The proof of the existence of the conserved quantities is direct (details missing in the following sketch can be filled in following the discussions in [@rund; @rhor]). Begin with (\[condition\]), differentiate it with respect to $\omega^s$ noting that the right-hand side is independent of such parameters, set $\omega^s=0$ after differentiation, and employ the equations of motion, to obtain
$$\frac{\partial j^\mu_a} {\partial t^\mu}=0,$$
where the divergenceless tensor $j^\mu_a$ is given by
$$\label{j}
j^\mu_a= \gamma\xi^\mu_a+\frac{\partial\gamma} {\partial q^s_{,\mu}}\left(\zeta^s_a -q^s_{,\nu} \xi^\nu_a\right) + \frac{\partial\gamma} {\partial \epsilon^s_{,\mu}}\left(\eta^s_a -\epsilon^s_{,\nu} \xi^\nu_a\right),$$
where we use the usual notation $\epsilon_{,\nu}\equiv \partial {\epsilon}/\partial t^\nu$. The result (\[j\]) is the main consequence of Noether’s theorem, establishing the connection between symmetries and conserved quantities at the level of the variational equations of a Lagrangian dynamical system. Let us point out that Eq. (\[j\]) was to be expected, since $\gamma$ is indeed a Lagrangian.
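The fact that $\gamma$ is a genuine Lagrangian, whose Euler–Lagrange equations with respect to the $\epsilon^a$ reproduce the original dynamics and with respect to the $q^a$ give the variational (Jacobi) equations, can be checked symbolically. The following sketch is a minimal illustration for a single degree of freedom; the quartic potential is an arbitrary toy choice, and $\gamma$ is built following the pattern of Eq. (\[gam\]):

```python
import sympy as sp

t = sp.symbols('t')
q = sp.Function('q')(t)
eps = sp.Function('epsilon')(t)

# Toy Lagrangian L = (1/2) qdot^2 - V(q), with a quartic potential as example
V = q**4 / 4
L = sp.Rational(1, 2) * sp.diff(q, t)**2 - V

# Prolonged Lagrangian gamma = (dL/dq) eps + (dL/dqdot) epsdot, as in Eq. (gam)
gamma = sp.diff(L, q) * eps + sp.diff(L, sp.diff(q, t)) * sp.diff(eps, t)

def euler_lagrange(lag, var):
    """Euler-Lagrange expression d/dt(d lag/d vardot) - d lag/d var."""
    return sp.diff(sp.diff(lag, sp.diff(var, t)), t) - sp.diff(lag, var)

# EL equation of gamma w.r.t. eps reproduces the original equation of motion
eom_from_gamma = euler_lagrange(gamma, eps)
eom_original = euler_lagrange(L, q)
assert sp.simplify(eom_from_gamma - eom_original) == 0

# EL equation of gamma w.r.t. q gives the Jacobi (variational) equation
jacobi = euler_lagrange(gamma, q)
expected = sp.diff(eps, t, 2) + sp.diff(V, q, 2) * eps   # eps'' + V''(q) eps
assert sp.simplify(jacobi - expected) == 0
```

The same pattern extends verbatim to several degrees of freedom by summing the prolongation over the coordinates.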
As the number of symmetries of the variational equations is larger than that of the original equations, we may ask under what conditions the Noether constants (\[j\]) reduce to the case of an inherited constant (\[incons\]). If we regard the original constant of motion $J^\mu_a$ as also coming from a Noether symmetry, it is given by
$$\label{JN}
J^\mu_a= L\xi^\mu_a + \frac {\partial L} {\partial q^s_{,\mu}} \left(\zeta^s_a-q^s_{,\nu} \xi^\nu_a\right).$$
The constant $j^\mu_a$ of the variational equations can then always be written in terms of $J^\mu_a$ and of the operator ${\cal D}_\epsilon$ as
$$\label{jinh}
j^\mu_a={\cal D}_\epsilon J^\mu_a + \frac {\partial \gamma} {\partial \epsilon^k_{,\mu}} \left( \eta^k_a - \epsilon^k_{,\beta} \xi^\beta_a \right) - L \;{\cal D}_\epsilon \xi^\mu_a - \frac {\partial \gamma} {\partial \epsilon^k_{,\mu}}\; {\cal D}_\epsilon \left( \zeta^k_a - q^k_{,\beta} \xi^\beta_a \right),$$
where we have assumed that the $\zeta^s_a$ and the $\xi^\mu_a$ appearing in Eq. (\[JN\]) are the same as those in Eq. (\[jinh\]). It is then quite clear that not every constant $j^\mu_a$ can be regarded as an inherited constant. For this to be true two things are needed: i) that $\eta^b_a=0$, [[*i.e.*]{}]{} that the transformation does not involve the $\epsilon^a$ directly, and ii) that one or the other (or both) of the generators $\zeta^b_a$ and $\xi^b_a$ be constant.
Two Examples.
=============
In this section we give two examples of the use of Noether’s theorem in the variational equations.
Ignorable coordinates.
----------------------
Let us first consider an $N$-degrees-of-freedom mechanical system described by the Lagrangian,
$$\label{lag}
L(q,\dot q)=\frac{1}{2}\,m_{ab}\dot q_a \dot q_b-V(q).$$
We assume this Lagrangian does not depend on the specific coordinate $q^A$. The function $\gamma$ is then
$$\label{gam}
\gamma= \left(\frac {1} {2}\frac {\partial m_{ab}} {\partial q_s} \dot q_a \dot q_b - \frac {\partial V} {\partial q_s} \right)\epsilon_s + m_{ab} \dot q_a \dot \epsilon_b.$$
This function does not depend on $q^A$ either. A one-parameter group of symmetries of both (\[lag\]) and (\[gam\]) is the translation
$$\label{trans}
\bar{q}^a=q^a,\;\, a\neq A,\quad\hbox{ and }\quad \bar{q}^A=q^A +w,$$
which acts on $Q$, the original configuration space. The conserved quantity coming from this invariance is the momentum conjugate to $q^A$, $p_A={\partial L} /{\partial \dot q^A} $. We now illustrate the relation between this $p_A$ \[see Eq. (\[JN\])\] and the constant in the variational equations \[Eq. (\[jinh\])\]. The precise answer depends on how the transformation (\[trans\]) is applied on $D$. As is easy to verify, the following two are the only independent possibilities.
- First, we apply (\[trans\]) to $q^s $ and not to the “virtual displacements” or the time, [[*i.e.*]{}]{} $\bar{\epsilon}^a=\epsilon^a$, $\bar{t}=t$, ($\zeta^A=1$, $\zeta^a=0,\, a\neq A$; $\eta^a=0$; $\xi^a=0$). In this instance the conserved quantity is $$j^{(1)}_A= \frac {\partial \gamma} {\partial \dot q^A}=\frac {\partial m_{Ab}}{\partial q^a}\, \dot q^b \epsilon^a + m_{Ab}\dot\epsilon^b,$$
and can be written as the inherited constant associated with $p_A$, [[*i.e.*]{}]{} $j^{(1)}_A={\cal D}_\epsilon p_A$.
- In the second case, transformation (\[trans\]) is applied just to the $\epsilon$’s and not to the $q$’s or to $t$. We have $\bar{q}^a=q^a$; $\bar{\epsilon}^A=\epsilon^A +w$, $ \bar {\epsilon}^a=\epsilon^a,\, a\neq A$; $\bar{t}=t$ ([[*i.e.*]{}]{} $\eta^A=1$, $\eta^a=0,\, a\neq A$; $\zeta^a=\xi^a=0$). The conserved quantity is just the momentum conjugate to $q^A$, $j^{(2)}_A = {\partial \gamma}/ {\partial \dot \epsilon^A} = m_{Ab}\dot q^b = p_A$.
It is worth emphasizing that this is the typical behaviour when we apply a symmetry of $L$ to $\gamma$: we always obtain a new conserved quantity, $j^{(1)}$, and one pre-existing in $L$, $j^{(2)}$. We point out that it is not necessary that the constant $j^{(1)}$ be inherited, as happened in the present example; this might not even be possible. We must keep in mind that, besides the inherited symmetries, $\gamma$ may have other symmetries of its own and therefore other associated constants.
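Both constants of the example above can be verified symbolically. The sketch below uses a toy two-degrees-of-freedom system with constant diagonal $m_{ab}$ and an arbitrary potential depending only on $q_1$, so that $q_2$ is ignorable; $\gamma$ is built as the prolongation of Eq. (\[gam\]):

```python
import sympy as sp

t = sp.symbols('t')
q1, q2 = sp.Function('q1')(t), sp.Function('q2')(t)
e1, e2 = sp.Function('e1')(t), sp.Function('e2')(t)

# L = (1/2)(q1dot^2 + q2dot^2) - V(q1): q2 is ignorable (V is a toy choice)
V = sp.cos(q1)
L = sp.Rational(1, 2) * (sp.diff(q1, t)**2 + sp.diff(q2, t)**2) - V

# gamma = (dL/dq_s) e_s + (dL/dq_s_dot) e_s_dot, as in Eq. (gam)
gamma = sum(sp.diff(L, x) * e + sp.diff(L, sp.diff(x, t)) * sp.diff(e, t)
            for x, e in [(q1, e1), (q2, e2)])

# the two conserved quantities of the example
j1 = sp.diff(gamma, sp.diff(q2, t))   # new constant, = e2dot here
j2 = sp.diff(gamma, sp.diff(e2, t))   # inherited constant, = q2dot = p_2

# on-shell conditions: q2'' = 0 (equation of motion), e2'' = 0 (Jacobi equation)
on_shell = {sp.diff(q2, t, 2): 0, sp.diff(e2, t, 2): 0}
assert sp.diff(j1, t).subs(on_shell) == 0
assert sp.diff(j2, t).subs(on_shell) == 0
assert sp.simplify(j1 - sp.diff(e2, t)) == 0
assert sp.simplify(j2 - sp.diff(q2, t)) == 0
```

With a coordinate-dependent $m_{ab}$ the same code reproduces the extra $\partial m_{Ab}/\partial q^a$ term displayed in the text for $j^{(1)}_A$.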
Conserved quantities in linearized gravitation in a vacuum.
-----------------------------------------------------------
Let us investigate conserved quantities in a first-order theory of gravitation in a vacuum. The Lagrangian density [@robin]
$$\label{gr}
L=\frac {1} {2} R^{a\mu b\nu} x_{a,\mu} x_{b,\nu},$$
with $R^{a\mu b\nu}$ the Riemann tensor, describes vacuum general relativity. The associated density is
$$\label{lgr}
\gamma=\frac {1} {2} \frac {\partial R^{a\mu b\nu}} {\partial x_c} \epsilon_c x_{a,\mu} x_{b,\nu} + R^{a\mu b\nu} \epsilon_{a,\mu} x_{b,\nu}.$$
This Lagrangian describes linearized gravitation in a vacuum.
It is easily seen that under the following one-parameter group of spacetime transformations
$$\label{camp}
\bar{x}^a=x^a + x^a w,$$
the Lagrangian (\[gr\]) changes as $\bar{L}= L/({1+w})^2$. Thus, though it is not strictly invariant, $L$ is just multiplied by a constant; the associated Noether tensor $\theta^\mu=- \frac{1}{2}R^{a \alpha b \nu} x_{a, \alpha} x_{b, \nu}\, x^\mu + R^{a \mu b \nu} x_{a,\nu}\, x_b$ is not very interesting, since it vanishes identically. Furthermore, the density $\gamma$ admits the following two (quasi) symmetry transformations:
- For the first one, we just apply transformation (\[camp\]), without changing the $\epsilon$’s in any way. The Lagrangian $\gamma$ transforms as $\bar {\gamma} = (1+w)^{-3}\, \gamma$; as a result, the quantity
$$\begin{aligned}
\label{lg1}
j^\mu &=& \gamma x^\mu + \frac {\partial\gamma} {\partial x^a_{,\mu}} (x^a - x^a_{,\beta} x^\beta) - \frac {\partial\gamma} {\partial \epsilon^a_{,\mu}} \, \epsilon^a_{,\beta} x^\beta \nonumber \\
&=& {R_a}^{\mu b \nu} \epsilon_{b, \nu} x^a\end{aligned}$$
is divergenceless [@robin].
- The second invariance does not affect the spacetime coordinates $x^a$; it just transforms the “virtual” displacements $\epsilon$ as
$$\bar{\epsilon}^a= \epsilon^a + \epsilon^a w,$$
under this change the Lagrangian transforms as $\bar{\gamma}= (1+w)^{-1}\, \gamma$, and as a result the quantity $j^\mu= ({\partial \gamma}/ {\partial \epsilon^a_{,\mu}})\, \epsilon^a
=0$ is also, trivially, divergenceless. This invariance does not give any useful new information.
The first divergenceless tensor (\[lg1\]), which does not stem from any constant in the original spacetime, is valid in linearized gravitation and, most importantly, remains conserved irrespective of the spacetime we are linearizing around.
Conclusions
===========
The Lagrangian description discussed in this paper has led to the uncovering of various properties of the variational equations of Lagrangian systems, and has allowed the application of Noether’s theorem to them, thus relating the symmetries of a Lagrangian not only to its constants of motion but also to constants of its variational equations. These conclusions stem from the introduction of the Lagrangian $\gamma$ which, from a mathematical point of view, is the prolongation of $L$ [@uam00; @ijtp02]. It is to be emphasized that our results are valid for field theories and non-linear evolution equations, and can also be extended to the higher-order Lagrangians which have found application in relativistic theories [@pla00; @tapia]. Moreover, we can apply the techniques presented here in the framework of first-order variational principles on fibered manifolds and their jet prolongations [@tapia2; @sarda]. Our formulation may also be used to study the so-called geoodular structure, a mathematical scheme that defines curvature in terms of properties of congruences of geodesics and their variational equations in affinely connected manifolds [@neste]. The extension of Noether’s theorem to the variational equations presented here may also have some bearing on the study of the evolution of Stokes waves [@matsuno], and can also be used in geometric control theory [@uam00; @sussman].
On the other hand, the relevance of the variational equations for determining both stability and integrability properties makes this formulation important for the investigation of periodic orbits [@steeb] and of solitonic solutions of nonlinear equations [@matsuno]. Our Lagrangian description of the Jacobi equations can furthermore be cast in Hamiltonian form. Moreover, as example B of section VI shows, the formulation lends itself to describing linearized gravitation. Let us mention that the mathematical setting of the concepts behind our formulation is explained more formally in [@ijtp02] for the case of discrete Lagrangian mechanics.
Whittaker E T. A Treatise on the Analytical Dynamics of Particles and Rigid Bodies. Cambridge: Cambridge University Press, 1937, sect. 112.
Núñez-Yépez HN and Salas-Brito AL. Jacobi equations using a variational principle. Phys. Lett. A, 2000; 275: 218–222.
W.-H. Steeb, and N. Euler, Nonlinear evolution equations and the Painlevé test. Singapore: World Scientific; 1988.
Salas-Brito AL. Variational principle for the problem of small oscillations. Am. J. Phys. 1984;52 1012–1014.
Núñez-Yépez HN, Delgado J, and Salas-Brito AL. Variational equations of Lagrangian systems and Hamilton’s principle. In Anzaldo-Meneses A, Bonnard B, Gauthier JP and Monroy-Pérez F, Eds. Contemporary trends in geometric control theory and applications. Singapore: World Scientific; 2001, pp. 405–422. arXiv: math-ph/0107006.
Case KM. Constants of motion and the variational equations. Phys. Rev. Lett. 1985;55: 445–448.
Sussman RA. On spherically symmetric shear-free perfect fluid configurations (neutral and charged). II. Equation of state and singularities. J. Math. Phys. 1988;29: 945–970.
Misner ChW, Thorne KS, and Wheeler JA. Gravitation. New York: Freeman, 1973.
Brans C and Dicke RH. Mach’s Principle and a Relativistic Theory of Gravitation. Phys. Rev., 1961; 124: 925–935.
Robinson DC. Applications of variational principles to classical perturbation theory in general relativity. Math. Proc. Camb. Phil. Soc., 1975; 78: 351–356, and personal communication.
Case KM. Integration of linearized equations of motion. Phys. Rev. Lett. 1978;40: 351–354.
Matsuno Y. Linear stability of multiple dark solitary wave solutions of a nonlocal nonlinear Schrödinger equation for envelope waves. Phys. Lett. A, 2001; 285: 286–292; Johnson RS. A Modern Introduction to the Mathematical Theory of Water Waves. Cambridge: Cambridge University, 1997.
Jackson EA. Perspectives on Nonlinear Dynamics vol 1. Cambridge: Cambridge University, 1989.
Sussman HJ. Symmetries and integrals of motion in optimal control. In Fryszkowski A, Jakubczyk B, Respondek W, Rzezuchowski T, Eds. Geometry and Nonlinear Control and Differential Inclusions. Warsaw: Banach Center Publications; pp. 379–393.
Lanczos C. The Variational Principles of Mechanics. Toronto: University of Toronto, 1970.
Do Carmo MP. Riemannian Geometry. Boston: Birkhäuser, 1992, Ch. 5.
Giachetta G, Mangiaroti L, and Sardanashvily G. Nonholonomic constraints in time-dependent mechanics. J. Math. Phys., 1999; 40: 1376–1390.
Casciaro B, Francaviglia M, and Tapia V. preprints IC/95/37 and IC/95/38, Trieste: International Centre of Theoretical Physics, 1995.
M. Ferraris, M. Francaviglia, and V. Tapia. Global d-invariance in field theory. J. Phys. A: Math. Gen., 1993; 26: 433–442.
Rund H. The Hamilton-Jacobi Theory in the Calculus of Variations. Princeton: Van Nostrand, 1966.
Arizmendi CM, Delgado J, Núñez-Yépez HN, and Salas-Brito AL. Conserved quantities in the variational equations. Rev. Mex. Fis. 2002; submitted.
Núñez-Yépez HN, Delgado J, and Salas-Brito AL. On the variational equations of Lagrangian systems. Int. J. Theor. Phys. 2002; submitted.
Soper DE. Classical Field Theory. New York: John Wiley, 1976.
Crampin M and Pirani FAE. Applicable Differential Geometry. Cambridge: Cambridge University, 1986, London Mathematical Society Lecture Series 59.
MacKay RS, and Meiss JD. Linear stability of periodic orbits in lagrangian systems. Phys. Lett. A, 1983;98: 92–94.
Lovelock D and Rund H. Tensors, Differential Forms, & Variational Principles. New York: Wiley-Interscience, 1975.
Gerjuoy E, Rau ARP, and Spruch L. A unified formulation of the construction of variational principles. Rev. Mod. Phys., 1983; 55: 725–774.
Landau L, and Lifshitz EM. Mechanics. Oxford: Pergamon Press, 1977.
Nash C. Relativistic Quantum Fields. New York: Academic Press, 1978.
Riesz F and Sz.-Nagy B. Functional Analysis. New York: Dover, 1990.
Rohrlich F. Classical Charged Particles. New York: Addison-Wesley, 1965.
Martínez-y-Romero RP, Núñez-Yépez HN, Salas-Brito AL. Superintegrability in classical mechanics: A contemporary approach to Bertrand’s theorem. Intl. J. Mod. Phys. A, 1997; 12: 271–276.
López C, Martínez E, Rañada MF. Dynamical symmetries, non-Cartan symmetries and superintegrability of the n-dimensional harmonic oscillator. J. Phys. A: Math. Gen., 1999; 32: 1241–1249.
Rañada MF. Superintegrability of the Calogero–Moser system: Constants of motion, master symmetries, and time-dependent symmetries. J. Math. Phys. 1999; 40: 246–247.
Giachetta G, Mangiarotti L, and Sardanashvily G. Nonholonomic constraints in time-dependent mechanics. J. Math. Phys. 1999; 40: 1376–1390; Giachetta G, Mangiarotti L, and Sardanashvily G. New Lagrangian and Hamiltonian Methods in Field Theory. Singapore:World Scientific, 1997; Mangiarotti L, and Sardanashvily G. Gauge Mechanics. Singapore: World Scientific, 1998.
Nesterov AI. Geoodular structures. Algebras Groups and Geometry. 1998; 15: 25–31.
[^1]: Corresponding author
---
abstract: 'In this work we investigate the photoproduction of massive gauge bosons, $W^{\pm}$ and $Z^0$, as part of relevant physics topics to be studied in the proposed electron-proton collider, the LHeC. The estimates for production cross sections and the number of events are presented. In addition, motivated by the intensive studies to test the deviations from the Standard Model at present and future colliders, we discuss the $W^{\pm}$ asymmetries and perform an analysis on the role played by anomalous $WW\gamma$ coupling.'
author:
- 'C. Brenner Mariotto$^{a}$ and M.V.T. Machado$^{b}$'
title: An analysis on the photoproduction of massive gauge bosons at the LHeC
---
Introduction
============
Planned to start around 2020/2022, the machine for Deep Inelastic Electron-Nucleon Scattering at the LHC (LHeC) is a possible extension of the current LHC at CERN as an electron-proton collider [@dainton]. It is a convenient way to go beyond the LHC capabilities, exploiting the 7 TeV proton beams produced at the LHC to drive research on $ep$ and $eA$ physics at some stage during the LHC lifetime. This LHC extension will open a new kinematic window - the $\gamma p$ CM energy can reach the TeV scale, far beyond the $\sqrt{s}\simeq$ 200 GeV reached at HERA, a very fruitful region for small-$x$ physics and many other physics studies. In particular, the energy of the incoming proton is delivered by the LHC beam, $E_p=7$ TeV, and a list of possible scenarios is considered for the energy of the incoming electron, $E_e=50-200$ GeV, corresponding to center-of-mass energies $\sqrt{s}=2\sqrt{E_pE_e}\simeq 1.18-2.37$ TeV [@desreport]. The anticipated integrated luminosity is of order $10-10^2$ fb$^{-1}$, depending on the energy of the electron beam and on the machine design.
Despite the great successes of the Standard Model (SM) and the difficulties in finding new physics such as supersymmetry, which is the most popular scenario, the non-abelian self-couplings of the $W$, the $Z$, and the photon remain poorly measured up to now. In this context, the investigation of triple gauge boson couplings plays an important role in manifesting the non-abelian gauge symmetry of the standard electroweak theory. Their precision measurement will be a crucial test of the structure of the SM. The inclusive and exclusive production of $W$ and $Z$ at the LHC already provides important tests of the SM and beyond. However, the photoproduction channel has the advantage of being much cleaner than the $pp$ collision channels. The physics program of the LHeC will explore the high-energy domain, complementing the LHC and its discovery potential for physics beyond the SM with high-precision DIS measurements at high luminosities. The design report [@desreport] already contains estimates for a variety of electroweak interaction processes such as leptoquarks/leptogluons, new heavy leptons, new physics in boson-quark interactions, and sensitivity to a Higgs boson. In this work we investigate the photoproduction of massive gauge bosons at the TeV scale and also examine the potential of the LHeC collider to probe the anomalous $WW\gamma$ coupling. Along these lines, we propose some observables that are sensitive to deviations from SM physics. Previous theoretical studies on this subject are quite compelling, and the $WW\gamma$ vertex in $ep$ colliders was addressed long ago in Refs. [@WWg1; @WWg2; @WWg3; @WWg4].
The aim of this work is twofold - first, we show predictions for the photoproduction of massive gauge bosons at future LHeC energies within the SM. The photoproduction cross sections, including the resolved and direct processes, are obtained, as well as the number of events in the most promising final-state decays. We then go beyond the SM in the photoproduction of $W$ bosons, analyzing the production rates as the couplings depart from their SM values. The sensitivity of the LHeC to deviations from the SM is investigated, and some additional observables are proposed. This article is organized as follows. The basic formulas to calculate the photoproduction of $W^{\pm}$ and $Z^0$ bosons via quasi-real photons are presented in the next section, including the expressions for the anomalous coupling in the $W$-production case. Our numerical results for the photoproduction cross sections and event rates within the SM and beyond are presented in section \[Wbeyond\], followed by the corresponding discussion. The summary and conclusions are presented in section \[conc\].
Cross Sections in the Standard Model and beyond
===============================================
Let us start by considering the C- and P-parity conserving effective Lagrangian for the interaction of two charged $W$ bosons and one photon [@hagiwara]. The motivation is to use the $W$ photoproduction cross section as a test of the $WW\gamma$ vertex. In such a case, the vertex is characterized by two dimensionless parameters, $\kappa$ and $\lambda$, which are related to the magnetic dipole and electric quadrupole moments, namely, $\mu_W = \frac{e}{2m_W}(1+\kappa+\lambda)$ and $Q_W = -\frac{e}{m_W^2}(\kappa-\lambda)$. For the values $\kappa=1$ and $\lambda=0$, the SM is recovered at tree level. We are left with three diagrams for the subprocess $\gamma q_{i}\rightarrow Wq_{j}$, and only the t-channel $W$-exchange graph contributes to the $WW\gamma$ vertex. The unpolarized differential cross section for the subprocess $\gamma q_{i}\rightarrow Wq_{j} $ can be obtained from helicity amplitudes by summing over the helicities. For the signal, we consider a quark jet and an on-shell $W$ with leptonic decay mode $\gamma p\rightarrow W^{\mp}+jet\rightarrow\ell+p_{T}^{miss}+jet $, where $\ell=e,\mu$. In this mode, the charged lepton and the quark jet are well separated, and the signal is prevented from being swamped by SM backgrounds.
The cross section for the subprocess $ \gamma q_{i} \rightarrow W q_{j}$ is composed of the direct and resolved-photon production, $\hat{\sigma}=\hat{\sigma}_{dir}+\hat{\sigma}_{res}$. The direct-photon contribution is given by [@WWg2; @WWg3] $$\begin{aligned}
\hat{\sigma}_W &=&\sigma_0\{|V_{q_{i}q_{j}}|^{2}\{(|e_{q}|-1)^{2}(1-2\hat{z}+2\hat{z}^{2})
\log({\hat{s}-M_{W}^{2}\over\Lambda^{2}}) \nonumber \\
& - & [(1-2\hat{z}+2\hat{z}^{2}) - 2|e_{q}|(1+\kappa+2\hat{z}^{2})
+{{(1-\kappa)^{2}}\over{4\hat{z}}} \nonumber \\
&- & {{(1+\kappa)^{2}}\over{4}}]
\log{\hat{z}}+ [(2\kappa+{{(1-\kappa)^{2}}\over{16}})
{1\over \hat{z}} \nonumber \\
& +& ({1\over 2}
+ {{3(1+|e_{q}|^{2})}\over{2}})\hat{z}
+ (1+\kappa)|e_{q}|-{{(1-\kappa)^{2}}\over{16}} \nonumber \\
& + & {|e_{q}|^{2}\over 2}](1-\hat{z})
-{{\lambda^{2}}\over{4\hat{z}^{2}}}(\hat{z}^{2}-2\hat{z}
\log{\hat{z}}-1)\nonumber \\
&+ &{{\lambda}\over{16\hat{z}}}
(2\kappa+\lambda-2)[(\hat{z}-1)(\hat{z}-9)
+4(\hat{z}+1)\log{\hat{z}}]\}\}, \nonumber \\
\label{sigmadir}\end{aligned}$$ where $\sigma_0 ={{\alpha G_{F}M_{W}^{2}}\over{\sqrt{2}\hat{s}}}$, $\hat{z}=M_{W}^{2}/\hat{s}$, and $\Lambda^{2}$ is the cutoff scale regularizing the $\hat{u}$-pole of the collinear singularity for massless quarks. In addition, $\Lambda^2$ is the scale that determines the running of the photon structure functions in the resolved part. The quantity $V_{ij}$ is the Cabibbo-Kobayashi-Maskawa (CKM) matrix element and $e_{q}$ is the quark charge.
The direct part of the cross section then reads $$\begin{aligned}
\sigma_{dir}(\gamma p \rightarrow W^{\pm}X)=\int_{x_p^{m}}^{1}dx_p\sum_{q,\bar{q}}f_{q/p}(x_p,Q^{2})\,\hat{\sigma}_W(\hat{s}),\end{aligned}$$ where $f_{q/p}$ are the parton distribution functions in the proton, $x_p^{m}=M_W^2/s$ and $\hat{s}=x_ps$.
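To illustrate how this single convolution over the proton momentum fraction is evaluated in practice, the sketch below integrates a toy partonic cross section against a toy valence-like density. Both the density $f(x)=6x(1-x)$ and $\hat\sigma\propto 1/\hat s$ are stand-ins chosen so that the integral can be checked analytically; they are not the CTEQ distributions or Eq. (\[sigmadir\]), and the CM energy is an assumed LHeC-scale value:

```python
import numpy as np

s = 1300.0**2      # toy gamma-p CM energy squared in GeV^2 (assumed LHeC-scale value)
MW = 80.4          # W mass in GeV
x_min = MW**2 / s  # lower limit x_p^m = M_W^2 / s

def f_toy(x):
    """Toy valence-like quark density (a stand-in, NOT a fitted PDF)."""
    return 6.0 * x * (1.0 - x)

def sigma_hat_toy(shat):
    """Toy partonic cross section ~ 1/shat (a stand-in for sigma_hat_W)."""
    return 1.0 / shat

# sigma_dir = int_{x_min}^1 dx f_{q/p}(x) sigma_hat(x*s), trapezoidal rule on a log grid
x = np.logspace(np.log10(x_min), 0.0, 2001)
g = f_toy(x) * sigma_hat_toy(x * s)
sigma = 0.5 * np.sum((g[1:] + g[:-1]) * (x[1:] - x[:-1]))

# for this toy choice the integral is analytic: (6/s)[(1 - x_min) - (1 - x_min^2)/2]
analytic = (6.0 / s) * ((1.0 - x_min) - (1.0 - x_min**2) / 2.0)
assert abs(sigma - analytic) / analytic < 1e-6
```

In a realistic evaluation one would also sum over quark flavors and convolute with the photon flux, exactly as described in the text.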
The resolved-photon part of the cross section can be calculated using the usual electroweak formula for the $q_{\gamma}q_{p}\to W^{\pm}$ fusion process, $\hat{\sigma}(q_i\bar{q}_j\rightarrow W)=\frac{\sqrt{2}\pi}{3}G_Fm_W^2|V_{ij}|^2\delta (x_ix_js_{\gamma p}-m_W^2)$. For the photoproduction cross sections one needs parton distribution functions inside the photon and proton. The photon structure function $f_{q/\gamma}$ consists of perturbative pointlike parts and hadronlike parts. Putting it all together, the resolved-photon part reads $$\begin{aligned}
& & \sigma_{res}(\gamma p \rightarrow W^{\pm}X) = \frac{\pi\sqrt{2}}{3\,s}G_{F}m_{W}^{2}
|V_{ij}|^{2}\int_{x_{\gamma}^m}^{1}\frac{dx_{\gamma}}{x_{\gamma}}\nonumber \\
&\times & \sum_{q_i,q_j}f_{q_{i}/p}
(\frac{m_{W}^{2}}{xs},Q_{p}) \left[f_{q_{j}/\gamma}(x_{\gamma},
Q_{\gamma}^{2})-\tilde{f}_{q_{j}/\gamma}(x_{\gamma},Q_{\gamma}^{2})\right], \nonumber \\\end{aligned}$$ where in order to avoid double counting on the leading logarithmic level, one subtracts the pointlike part of the photon structure function (photon splitting at large $x$), $\tilde{f}_{q/\gamma}(x,Q_{\gamma}^{2})=\frac{3\alpha e_{q}^{2}}{2\pi}[x^{2}+(1-x)^{2}]\log (Q_{\gamma}^{2}/\Lambda^{2})$. In addition, here $x_{\gamma}^m=M_W^2/s$.
A similar calculation can be done for $Z$-boson photoproduction. Here we focus on the SM prediction. Once again, the cross section for the subprocess $ \gamma q \rightarrow Z q$ is composed of the direct and resolved-photon production, $\hat{\sigma}=\hat{\sigma}_{dir}+\hat{\sigma}_{res}$. The direct-photon contribution is given by $$\begin{aligned}
\hat{\sigma}_Z & = & \frac{\alpha G_{F}M_{Z}^{2}}{\sqrt{2}\,\hat{s}}\,
g_q^2e_q^2\, \left[ \left(1-2\hat{z}+2\hat{z}^{2}\right)\log \left(\frac{\hat{s}-M_{Z}^{2}}{\Lambda^{2}}\right) \right. \nonumber \\
& + & \left. \frac{1}{2}\left(1+2\hat{z}-3\hat{z}^{2}\right)\right],\end{aligned}$$ where now $\hat{z}=M_{Z}^{2}/\hat{s}$ and $g_q^{2}= \frac{1}{2}(1-4|e_q|x_W+8e_q^2x_W^2)$, with $x_W = \sin^2\theta_W = 0.23$.
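For orientation, the electroweak factor $g_q^2$ (reading the charge in the linear term as the quark charge $e_q$, consistent with the neutral-current couplings) takes the following numerical values for up- and down-type quarks:

```python
xW = 0.23  # sin^2(theta_W), the value used in the text

def g_q_squared(e_q):
    """g_q^2 = (1/2)(1 - 4|e_q| xW + 8 e_q^2 xW^2)."""
    return 0.5 * (1.0 - 4.0 * abs(e_q) * xW + 8.0 * e_q**2 * xW**2)

g_u = g_q_squared(2.0 / 3.0)    # up-type quarks:   ~0.287
g_d = g_q_squared(-1.0 / 3.0)   # down-type quarks: ~0.370
```

Down-type quarks therefore couple somewhat more strongly to the $Z$ than up-type quarks in this channel.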
The direct part of the $Z$-photoproduction cross section then reads $$\begin{aligned}
\sigma_{dir}(\gamma p \rightarrow Z^0X)=\int_{x_p^{m}}^{1}dx_p\sum_{q,\bar{q}}f_{q/p}(x_p,Q^{2})\,\hat{\sigma}_Z(\hat{s}),\end{aligned}$$ where $f_{q/p}$ are the parton distribution functions in the proton and $x_p^{m}=M_Z^2/s$.
The resolved-photon part of the cross section stands for the subprocess $q\bar{q}\to Z^0$, and it is written as $$\begin{aligned}
& & \sigma_{res}(\gamma p \rightarrow Z^0X) = \frac{\pi\sqrt{2}}{3\,s}G_{F}m_{Z}^{2}g_q^2\int_{x_{\gamma}^m}^{1}\frac{dx_{\gamma}}{x_{\gamma}}\nonumber \\
&\times & \sum_{q}f_{\bar{q}/p}
\left(\frac{m_{Z}^{2}}{xs},Q_{p}\right) \left[f_{q/\gamma}(x_{\gamma},
Q_{\gamma}^{2})-\tilde{f}_{q/\gamma}(x_{\gamma},Q_{\gamma}^{2})\right]. \nonumber \\\end{aligned}$$
In the next section we compute the numerical results for the $W^{\pm}$ and $Z^0$ photoproduction cross section in the LHeC regime of energy/luminosity. We also investigate the sensitivity to anomalous $WW\gamma$ couplings associated to beyond SM physics.
Numerical results and discussions {#Wbeyond}
=================================
Let us now perform a preliminary study for the LHeC machine [@dainton; @desreport]. For a design with an electron beam of laboratory energy $E_e=70$ GeV, the center-of-mass energy reaches $E_{cm}=W_{\gamma p}=1.4$ TeV, with a nominal luminosity of order $10^{33}$ cm$^{-2}$s$^{-1}$. Our estimates for the massive-boson photoproduction cross sections in the SM are the following: $\sigma(\gamma +p\rightarrow W^{\pm}X)\simeq 400$ pb and $\sigma(\gamma +p\rightarrow Z^0X)\simeq 60$ pb. These are rough estimates, since we have not introduced the $K$-factors associated with next-to-leading-order (NLO) corrections to the processes. We have summed the resolved and direct contributions. The energy behavior of the cross section is presented in Fig. \[fig:1\]. The dependence is quantitatively given by $\sigma_V\propto W_{\gamma p}^{\alpha}$, with $\alpha \simeq 1.312$. The cross sections are at least one order of magnitude larger than at the DESY-HERA machine, where $W_{\gamma p }\simeq 300$ GeV.
![Cross sections for the production of massive $W^{\pm}$ and $Z^0$ gauge bosons as a function of the CM energy.[]{data-label="fig:1"}](sigma_energy.eps)
In Table \[tab1\] the photon-proton total cross sections times the branching ratio of $W\rightarrow \mu\nu $, and the corresponding numbers of events, are shown for the SM parameters of the $W$ ($\kappa=1$ and $\lambda=0$), and also for the $Z^0$ boson with the corresponding branching ratio of $Z^0\rightarrow \mu^+\mu^- $. The number of events has been computed as $N_{ev}=\sigma(e p\rightarrow V+X)\,BR(V\rightarrow \mu\nu /\mu^+\mu^-)\,L_{int}$. At this point we take the acceptance in the leptonic channel to be 100%. The photoproduction cross section is calculated by convoluting the Weizsäcker-Williams spectrum $$\begin{aligned}
f_{\gamma/e}(y)& = & \frac{\alpha}{2\pi}\left[\frac{1+(1-y)^2}{y}\log \frac{Q^2_{max}}{Q^2_{min}} \right. \nonumber \\
& - & \left.2m_e^2y\,\left(\frac{1}{Q^2_{min}}- \frac{1}{Q^2_{max}}\right) \right],\end{aligned}$$ with the differential hadronic cross section. Here, $Q^2_{min}=m_e^2y/(1-y)$ and we impose a cut $Q^2_{max}=0.01$ GeV$^2$.
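A direct implementation of this photon spectrum may be sketched as follows. The numerical values of $\alpha$ and $m_e$, and the use of $0.01$ GeV$^2$ as the $Q^2_{max}$ cut, are assumptions of the sketch; the steep fall-off in $y$ confirms the dominance of soft photons in the flux:

```python
import math

alpha = 1.0 / 137.036       # fine-structure constant
me2 = 0.000511**2           # electron mass squared in GeV^2
Q2_max = 0.01               # assumed virtuality cut in GeV^2

def ww_flux(y):
    """Weizsaecker-Williams photon spectrum f_{gamma/e}(y) as in the text."""
    Q2_min = me2 * y / (1.0 - y)
    return alpha / (2.0 * math.pi) * (
        (1.0 + (1.0 - y)**2) / y * math.log(Q2_max / Q2_min)
        - 2.0 * me2 * y * (1.0 / Q2_min - 1.0 / Q2_max))

# the spectrum is steeply falling in y: soft photons dominate
assert ww_flux(0.01) > ww_flux(0.1) > ww_flux(0.5) > 0.0
```

The hadronic cross section is then convoluted with `ww_flux` over the photon momentum fraction $y$ to obtain the $ep$ cross section.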
Throughout the calculations, the proton structure functions of CTEQ [@Pumplin:2002vw] and the photon structure functions of GRV [@grvphoton] have been used, with $Q^{2}=M_{W}^{2}$. The usual electroweak parameters are taken from Ref. [@PDG]. We have assumed an integrated luminosity of $L_{int}=10$ fb$^{-1}$ [@desreport] in order to compute the number of events, $N_{ev}$. The number of events is large enough to put forward further analyses, in particular for the $W^{\pm}$ signal.
$V$ $\sigma(\gamma p\rightarrow V\,X) \times BR$ $N_{ev}$
------- ---------------------------------------------- --------------------
$W^+$ 24 $1.2\times 10^{4}$
$W^-$ 24 $1.2\times 10^{4}$
$Z^0$ 2.1 $1.1\times 10^{3}$
: The photon-proton cross sections times branching ratios $\sigma(\gamma p\rightarrow W^{\pm}X)\times BR(W^{+}\rightarrow
\mu\nu)$ and $\sigma(\gamma p\rightarrow Z^0X)\times BR(Z^0\rightarrow
\mu^+\mu^-)$ in units of pb. The number of events $N_{ev}$ is also presented at an integrated luminosity 10 fb$^{-1}$. \[tab1\]
Let us now investigate the scenario for physics beyond the SM. Certain properties of the $W$ bosons, such as the magnetic dipole and the electric quadrupole moment, play a role in the interaction vertex $WW\gamma$, thus processes involving this vertex offer the opportunity to measure such properties. The magnetic dipole moment $\mu_W$ and the electric quadrupole moment $Q_W$ of the $W$ bosons can be written in terms of parameters $\kappa, \,\lambda$, where $\kappa=1$ and $\lambda=0$ are the Standard Model values for those parameters at tree level. According to the Particle Data Group [@PDG], the measured value of $\mu_W/\frac{e}{2M_W}=1+\kappa + \lambda=2.22\pm 0.20$ suggests that there are deviations from the standard values. In $W$ photoproduction one has a unique scenario to test the anomalous $WW\gamma$ vertex and its $\kappa$ and $\lambda$ parameters. The $WW\gamma$ vertex \[$W^+(p_1)$, $W^-(p_2)$, $A(p_3)$\], denoted by $\Gamma_{\mu\nu\rho}(p_1,p_2,p_3)$, is given by [@Dubinin] $$\begin{aligned}
\frac{\Gamma_{\mu\nu\rho}}{e} & = & \left[ g_{\mu\nu}\left(p_1-p_2-\frac{\lambda}{M_W^2}[(p_2\cdot p_3)p_1-(p_1\cdot p_3)p_2]\right)_{\rho}
\right.\nonumber\\
& + & g_{\mu\rho}\left({\kappa} p_3-p_1+\frac{\lambda}{M_W^2}[(p_2\cdot p_3)p_1-(p_1\cdot p_2)p_3]\right)_{\nu} \nonumber\\
&+ & g_{\nu\rho}\left(p_2-{\kappa} p_3-\frac{\lambda}{M_W^2}[(p_1\cdot p_3)p_2-(p_1\cdot p_2)p_3]\right)_{\mu} \nonumber \\
& + & \left. \frac{\lambda}{M_W^2}\left(p_{2\mu}p_{3\nu}p_{1\rho}-p_{3\mu}p_{1\nu}p_{2\rho}\right) \right],\end{aligned}$$ where the anomalous contributions beyond the SM are encoded in the terms with $\kappa\ne 1$ and/or $\lambda\ne 0$.
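The reduction of this vertex to the standard Yang-Mills triple-gauge form at $\kappa=1$, $\lambda=0$ can be verified numerically. The sketch below builds $\Gamma_{\mu\nu\rho}/e$ term by term from the expression above; the metric signature and the random test momenta are assumptions of the check:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric (assumed signature)
MW2 = 80.4**2                            # W mass squared in GeV^2

def dot(a, b):
    return a @ eta @ b

def lower(p):
    return eta @ p

def vertex(p1, p2, p3, kappa, lam):
    """WW-gamma vertex Gamma_{mu nu rho}/e with anomalous couplings kappa, lambda."""
    G = np.zeros((4, 4, 4))
    t1 = lower(p1 - p2 - lam / MW2 * (dot(p2, p3) * p1 - dot(p1, p3) * p2))
    t2 = lower(kappa * p3 - p1 + lam / MW2 * (dot(p2, p3) * p1 - dot(p1, p2) * p3))
    t3 = lower(p2 - kappa * p3 - lam / MW2 * (dot(p1, p3) * p2 - dot(p1, p2) * p3))
    l1, l2, l3 = lower(p1), lower(p2), lower(p3)
    for mu in range(4):
        for nu in range(4):
            for rho in range(4):
                G[mu, nu, rho] = (eta[mu, nu] * t1[rho] + eta[mu, rho] * t2[nu]
                                  + eta[nu, rho] * t3[mu]
                                  + lam / MW2 * (l2[mu] * l3[nu] * l1[rho]
                                                 - l3[mu] * l1[nu] * l2[rho]))
    return G

rng = np.random.default_rng(0)
p1, p2 = rng.normal(size=4), rng.normal(size=4)
p3 = -(p1 + p2)  # momentum conservation with all momenta incoming

# kappa=1, lambda=0 must reduce to the standard Yang-Mills triple-gauge vertex
ym = vertex(p1, p2, p3, kappa=1.0, lam=0.0)
for mu in range(4):
    for nu in range(4):
        for rho in range(4):
            expected = (eta[mu, nu] * lower(p1 - p2)[rho]
                        + eta[mu, rho] * lower(p3 - p1)[nu]
                        + eta[nu, rho] * lower(p2 - p3)[mu])
            assert abs(ym[mu, nu, rho] - expected) < 1e-12
```

A byproduct of momentum conservation is that, in the SM limit, the fully traced component $\Gamma_{000}$ vanishes identically, which provides a quick sanity check of any implementation.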
In the photoproduction of $W^{\pm}$ bosons, the direct contribution $\sigma_{dir}$ involves the generalized $WW\gamma$ vertex, and the expression for $\hat{\sigma}_W(\hat{s}=x_ps)$ in Eq. (\[sigmadir\]) can then be used to investigate deviations from SM physics. An interesting observable is the number of muon-plus-neutrino events coming from the decay of the $W^+$. This is shown in Table \[tab2\], where we assumed a luminosity of ${\cal {L}}=10\,$fb$^{-1}$. As can be seen, the number of $W^+\to\mu\nu$ events is very sensitive to the choice of the $\kappa$ and $\lambda$ parameters, and in most scenarios it increases as $\kappa$ and $\lambda$ depart from their SM values. Such an effect could certainly be tested at the LHeC.
$\kappa$ $\lambda$ $\sigma(\gamma p\rightarrow W^+\,X) \times BR$ \[pb\] $N_{ev}$
---------- ----------- ------------------------------------------------------- --------------------
0 0 16 $8\times 10^{3}$
1 0 24 $1.2\times 10^{4}$
2 0 44 $2.2\times 10^{4}$
1 1 61 $3.1\times 10^{4}$
1 2 172 $8.5\times 10^{4}$
: The number of muon plus neutrino events coming from the $W^+$ decay for distinct choices for the parameters $\kappa$ and $\lambda$ presented at an integrated luminosity of 10 fb$^{-1}$. \[tab2\]
As we have already noticed, the cross sections for $W$ and $Z$ production may receive contributions from higher-order terms that were not included in our calculation. NLO cross sections with the default Standard Model vertices were already calculated in [@nlospira]. In order to get rid of normalization uncertainties, one can take the ratio $\sigma_W^{\pm}/\sigma_Z$ to test the $WW\gamma$ vertex and the $\kappa$ and $\lambda$ parameters. To do this we propose the study of the following observable: $$\begin{aligned}
R_{W/Z} (\kappa,\lambda;\sqrt{s}) = \frac{\sigma_{W^+} +\sigma_{W^-}}{\sigma_Z}\,,\end{aligned}$$ which can be constructed from the equations given in the previous section. Such an observable was already proposed some time ago, in Refs. [@WWg2; @WWg3]. In Fig. \[ratiowz\] we show our results for the $R_{W/Z}$ ratio at both DESY-HERA and LHeC energies. In the left plot, the ratio is shown as a function of the $\kappa$ parameter for fixed $\lambda=0$; in the right plot, it is presented as a function of the $\lambda$ parameter for fixed $\kappa = 1$. This allows us to study the sensitivity to the $\kappa$ and $\lambda$ parameters with respect to the SM and possible new physics. The results show that the ratio is much more sensitive to the $\kappa$ and $\lambda$ parameters at LHeC energies; the sensitivity to the $\lambda$ parameter, in particular, is negligible at HERA energies. Thus, the LHeC collider would be able to pin down the correct values of these parameters and thereby determine the magnetic dipole and electric quadrupole moments of the $W$.
Another observable that could be studied and tested at the LHeC is the $W^{+}W^{-}$ asymmetry, defined by
$$\begin{aligned}
A(\kappa,\lambda;\sqrt{s}) = \frac{\left(\sigma_{W^+} - \sigma_{W^-}\right)}{\left(\sigma_{W^+} +\sigma_{W^-}\right)}.\end{aligned}$$
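Both observables are simple combinations of the three cross sections; a minimal sketch (function names are ours), illustrated with the $\sqrt{s}\simeq 1.3$ TeV cross sections of Ref. [@WWg3] quoted later in the text ($\sigma_{W^+}=11.3$ pb, $\sigma_{W^-}=12.2$ pb, $\sigma_{Z}=5.4$ pb):

```python
def ratio_w_z(sigma_wp, sigma_wm, sigma_z):
    """R_{W/Z} = (sigma_{W+} + sigma_{W-}) / sigma_Z."""
    return (sigma_wp + sigma_wm) / sigma_z

def w_asymmetry(sigma_wp, sigma_wm):
    """A = (sigma_{W+} - sigma_{W-}) / (sigma_{W+} + sigma_{W-})."""
    return (sigma_wp - sigma_wm) / (sigma_wp + sigma_wm)

# Illustration with the sqrt(s) ~ 1.3 TeV cross sections quoted from Ref. [WWg3].
r = ratio_w_z(11.3, 12.2, 5.4)
a = w_asymmetry(11.3, 12.2)
assert abs(r - 23.5 / 5.4) < 1e-12
assert a < 0  # slight W- excess in that data set
```

Since both quantities are cross-section ratios, overall normalization uncertainties (PDFs, NLO K-factors) largely cancel, which is the motivation given in the text.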
The results for this asymmetry are shown in Fig. \[asy\], where we show the sensitivity to the $\kappa$ and $\lambda$ parameters. As we can see, at LHeC energies the $W^{+}W^{-}$ asymmetry depends strongly on the $\kappa$ and $\lambda$ parameters and is therefore a useful observable for discriminating among the scenarios. In the left plot we show the dependence of the asymmetry on the $\kappa$ parameter for fixed $\lambda=0$ (its SM value), for the LHeC energy only. In the right plot we present the asymmetry as a function of the $\lambda$ parameter for fixed $\kappa = 1$, where the corresponding result for the DESY-HERA energy is also shown. As a general conclusion about the anomalous coupling, we see that at the LHeC collider the parameters $\kappa$ and $\lambda$ can be probed with better sensitivity than at the DESY-HERA $ep$ collider, and the measurement would give information complementary to the LHC collider.
Finally, let us compare the present calculation to previous studies. In Ref. [@WWg3] the massive boson photoproduction is considered for an energy of $\sqrt{s}\simeq 1.3$ TeV and an integrated luminosity of 1 fb$^{-1}$. For the SM values of the $\kappa$ and $\lambda$ parameters, the photoproduction cross sections were obtained as $11.3$ pb, $12.2$ pb and $5.4$ pb for $W^+$, $W^-$ and $Z^0$, respectively. The number of events for $Z^0$ was estimated to be 360. These results are completely consistent with ours when considering the integrated luminosity of 10 fb$^{-1}$. Concerning the sensitivity to the parameters, we found that the general trend is similar; however, their cross sections for higher values of the parameters are relatively larger than ours. We have checked that our ratio $\sigma(W^{\pm})/\sigma(Z)$ is 50 % smaller than in Ref. [@WWg3] for several values of the parameters $\kappa,\,\lambda$. This is probably due to the different energy and the theoretical uncertainties coming from the PDFs considered.
In Ref. [@WWg4] an analysis quite similar to ours was performed, focusing on the spectrum of the backscattered laser photon (energy of $\sqrt{s}=1.7$ TeV and integrated luminosity of 200 pb$^{-1}$). For a Weizsäcker-Williams spectrum, we have checked that the number of events in the process $\gamma p \rightarrow W^+\,\mathrm{jet}$ is quite consistent with ours when rescaled to the same integrated luminosity. Their original values are, for instance, 288 and 1151 events for the sets ($\kappa =1,\,\lambda=0$) and ($\kappa=1,\,\lambda=2$), respectively. The compatibility is good, as we are using a luminosity 100 times larger.
The photoproduction of the $W$ boson at HERA has been addressed at the NLO level of QCD corrections in Ref. [@nlospira]. The prediction is $\sigma(W^+)=0.478$ pb and $\sigma(W^-)=0.484$ pb at $\sqrt{s}=318$ GeV, imposing a cut $p_T<25$ GeV. The main conclusion is that the QCD corrections reduce the factorization scale dependence significantly and modify the leading-order prediction by about 10%. This supports the use of LO calculations in the present work. In addition, we can rescale their prediction to the LHeC case: a rough estimate gives $\sigma(W^+)\times BR=0.33$ pb and $\sigma(W^-)\times BR=0.34$ pb at the LHeC. This is smaller than our results in Table 1, where one has $\sigma(e+p\rightarrow W^{\pm}+X)\times BR=1.2$ pb; the difference is associated with the cut on the boson transverse momentum and the distinct kinematic cuts in the integration of the Weizsäcker-Williams spectrum.
Summary {#conc}
=======
We have examined the prospects for massive gauge boson detection at the proposed Deep Inelastic Electron-Nucleon Scattering at the LHC (LHeC) machine. The photon-proton cross sections have been computed for $W^{\pm}$ and $Z^0$ inclusive production and are of the order of dozens of picobarns. The number of events has been evaluated for the photoproduction cross section, assuming an integrated luminosity of 10 fb$^{-1}$, and it is large enough to make the measurements feasible. We have also investigated the anomalous $WW\gamma$ coupling using the machine design. We found that the accessible kinematic range at the LHeC is considerably extended relative to the previous DESY-HERA machine. We have tested sample scenarios beyond the SM by scanning the values of the parameters $\kappa$ and $\lambda$ of the anomalous $WW\gamma$ coupling. In the case of anomalous couplings, the photoproduction process at the LHeC proves to be a powerful tool. Finally, we considered different observables that together could contribute to pinning down the correct $WW\gamma$ vertex: the ratio $\sigma (W^{\pm})/\sigma (Z)$, which is less sensitive to NLO QCD corrections, and the $W$-asymmetry observable $A(\kappa,\lambda;\sqrt{s})$, which probes asymmetries in $W$ photoproduction.
This research was supported by CNPq and FAPERGS, Brazil.
[99]{}
J. B. Dainton, M. Klein, P. Newman, E. Perez and F. Willeke, JINST [**1**]{}, P10001 (2006).
J. L. Abelleira Fernandez [*et al.*]{} \[LHeC Study Group Collaboration\], “A Large Hadron Electron Collider at CERN: Report on the Physics and Design Concepts for Machine and Detector,” arXiv:1206.2913 \[physics.acc-ph\].
U. Baur and D. Zeppenfeld, Nucl. Phys. [**B325**]{}, 253 (1989).
C.S. Kim and W.J. Stirling, Z. Phys. [**C53**]{}, 601 (1992).
C.S. Kim, Jungil Lee and H.S. Song, Z. Phys. [**C63**]{}, 673 (1994).
S. Atag and I.T. Cakir, Phys. Rev. [**D63**]{}, 033004 (2001).
K. Hagiwara, R.D. Peccei, D. Zeppenfeld and K. Hikasa, Nucl. Phys. [**B282**]{}, 253 (1987).
J. Pumplin, D. R. Stump, J. Huston, H. L. Lai, P. Nadolsky and W. K. Tung, JHEP [**0207**]{}, 012 (2002).
M. Glück, E. Reya and A. Vogt, Phys. Rev. [**D45**]{}, 3986 (1992).
K. Nakamura [*et al.*]{} (Particle Data Group), J. Phys. G: Nucl. Part. Phys. [**37**]{}, 075021 (2010).
M.N. Dubinin and H.S. Song, Phys. Rev. [**D57**]{}, 2927 (1998).
K.-P.O. Diener, Ch. Schwanenberger, M. Spira, Eur. Phys. J. [**C25**]{}, 405 (2002).
[**The Role of Renormalization Group** ]{}\
[**in Fundamental Theoretical Physics**]{}[^1]\
[Dmitri V. SHIRKOV]{}\
[**1. Introduction (Logic of science)**]{}\
Here, I would like to discuss some general aspects of the logical structure of modern fundamental science and, in particular, the place and role of the Renormalization Group (RG) in it. Here, by RG we mean the Stueckelberg–Bogoliubov formulation of the Renormalization Group, that is, a one-parameter continuous group in the usual mathematical sense.
The importance of symmetries and groups in fundamental theoretical physics was realized by some of the leading theorists more than half a century ago. One of their most prominent advocates, Eugene Wigner, proposed a hierarchical scheme establishing a relation between three categories: “symmetry or invariance principles”, “laws of nature” and “events”.
As he wrote in 1964 (see pp. 38 and 30 in Ref. [@wig]):
> “What I would like to discuss instead is the general role of symmetry and invariance principles in physics, both modern and classical. More precisely, I would like to discuss the relation between three categories which play a fundamental role in all natural sciences: events, which are the raw material for the second category, the laws of nature, and symmetry principles for which I would like to support the thesis that the laws of nature form the raw material.” “... the progression from events to laws of nature, and from laws of nature to symmetry or invariance principles, is what I meant by the hierarchy of our knowledge of the world around us.”
This hierarchy follows the line of “science construction”, of extracting regularities from observation, regularities (laws and principles) that form the skeleton of physical science.
However, principles and laws also possess predictive power. To follow the inner logic of science one should proceed in the opposite direction. Again, according to Wigner (p. 17 in Ref. [@wig]):
> “... the function of the invariance principles to provide a structure or coherence to the laws of nature just as the laws of nature provide a structure and coherence to the set of events.”
This quotation with some details added can be visualized in the form of a scheme (see Ref. [@gross]):
(Fig. 2, after Ref. [@gross]: Principles of Symmetry provide structure to the Laws of Nature, which in turn provide structure to Physical Events.)
In what follows, we would like to discuss the validity of the Wigner scheme in modern physical science and indicate the place of RG in it. However, for this purpose, the scheme, Fig.2, is a bit sketchy. We have to modify it. Our first comment relates to the category of ’principles’.\
[**2. Comment on “Principles”**]{}\
Wigner paid attention mainly to principles of symmetry, like space-time (Poincaré Invariance, P, T) and internal symmetries (Isospin $\to$ Flavours, Colour). Meanwhile, in fundamental physics we deal also with some other principles, [General Principles]{}, like:
– [Principle of QUANTUM PRIORITY]{} which states that “quantum level of nature is the basic one and classical physics is secondary, being the limiting case of a quantum picture”;
– [Principle of UNITARITY]{} that reflects the “conservation of probability” ;
– [Principle of CAUSALITY]{}: “Future cannot influence the past” (related to the mystery of the ‘arrow of time’);
– [Principle of RENORMALIZABILITY]{} [^2] that acts as a selection rule for QFT models and can be formulated [@prais; @heis] as follows: “The given model of field interaction should be realizable on the quantum level”. [ In combination with the principle of quantum priority this means that the renormalizability property should be considered as a necessary condition for a given QFT model to have a chance to describe Nature: i.e., [RENORMALIZABILITY = RELIABILITY.]{}]{}
– The [GAUGE DYNAMICS Principle]{} [^3] stating that the form of the dynamics, of a field interaction, should be deduced from a symmetry (by its “localizing”).
[**3. Are “Equations” equivalent to “Laws”?**]{}\
The second comment relates to Wigner’s category of “laws of nature”. In our opinion, they generally should not be identified with [*Dynamical Equations*]{} deducible from some basic Principles. Rather, these “Laws” are to be related to [*Solutions*]{} of the dynamical equations. To illustrate, consider the case of the Standard Model (SM) in QFT.
The most fashionable current topics in SM (Grand Unification, SuSy generalization, quantum gravity,...) are related to ’extremely high energy region’. However, there are two issues lying in the experimentally studied domain. These are: “Confinement in QCD” and “Vector boson masses in the ElectroWeak Theory”:
– All experts agree that we have correct QCD equations responsible for the strong interaction. However, the confinement phenomenon, being an essentially nonlinear quantum effect, is still not understood.
– The origin of the gauge $W^{\pm}$ and $Z^0$ boson masses is “explained” by the so-called ’Higgs mechanism’. It is highly artificial and, technically, is based upon a very specific scalar field with imaginary mass and quartic self-interaction. ([This mechanism also predicts particles which have not been observed yet.]{}) The scalar field introduction destroys the whole beauty of the Gauge Dynamics principle. Meanwhile, there are serious reasons [@ew-col] to believe that spontaneous breaking of the gauge symmetry can be treated as an intrinsic feature of the non-Abelian quantum gluon field, as a nonlinear quantum phenomenon.
Both issues are related to a common nonlinear quantum topic – the structure of the ground state of a non-Abelian quantum field.
Here, we have beautiful equations, like the QCD ones, whose structure is determined by principles. However, we are unable to extract from them the very basic feature of strong interaction (confinement of coloured objects) and some other important information related to experiment. Instead, for the latter purpose we have to be satisfied with effective semi-phenomenological model constructions, like “MIT bag”,“Dubna bag”, “low-energy chiral model” which are not directly related to general and symmetry principles. This means that “Equations”, in practice, do not define the essential features of a system. It is improper to treat them as “Laws”.
Another illustration is provided by the history of superconductivity. This phenomenon remained unexplained by theorists for about 45 years. During the first 15 years there was no adequate theoretical foundation (quantum mechanics = QM). During the subsequent three decades there was a general belief that superconductivity had to be understood as a macroscopic QM effect, but no understanding of the phenomenon on the basis of the QM description of the electron gas in a metal was achieved. Instead, we used to content ourselves with semi-phenomenological constructions like those of the Londons and of Ginzburg–Landau. It was precisely these that appeared as [*“laws ... providing a structure to the set of events”.*]{}[@bcs] The situation is pictured in Fig. 3.
Thus, we arrive at the conclusion that “Laws” should be substituted by two notions :
– [Equations]{} that can be deduced from Principles;
– [Solutions]{} of equations that are equivalent to “Laws” in the sense that they determine the behaviour of the physical system. In short –
*Between “Principles” and “Laws” there should stand “Equations”.*
This means that principles provide structure and coherence just to the dynamical equations which, in a sense, could be treated as laws of nature formulated in a general form. However, as a rule, in modern science they have no close relation to events. It is rather the solutions of the equations that provide structure and coherence to sets of physical events. Instead of the Wigner-like scheme, Fig.2, we get:\
[**4. “Logic of Modern Science” Scheme**]{}\
(Scheme: Principles of Symmetry and General Principles $\to$ Dynamical Equations $\to$ Solutions $=$ Laws of Nature $\to$ Physical Events.)
[**5. Reductionism vs Constructivism**]{}\
We believe that this modification has a direct relation to the debate between reductionists and constructivists. Two different credos have been formulated by Einstein (see, e.g., Ref. [@einstein]):
“The supreme test of the physicist is to arrive at those universal elementary laws from which the cosmos can be built up by pure deduction.” (1918)
“...we would like not only to know how nature is organized (and how natural phenomena proceed), but possibly to achieve the goal which may be considered as utopian and daring – understand why nature is just the way it is”. (1929)
and P. Anderson [@anderson]
“The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the Universe.”
“...the more the elementary-particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the very real problems of the rest of science...”
In our opinion, the real origin of the constructivists’ scepticism is just the gap between “fundamental laws”, that is, the implementation of Principles in the form of “Equations” (like Newton’s, Maxwell’s and the QCD equations), and “Laws of Nature”, that is, solutions of the equations (like Kepler’s, Ohm’s, Meissner’s and Confinement Laws). The RG plays an important role in filling this gap.\
[**6. Renormalization Group – Solution Symmetry**]{}\
The renormalization group, first discovered in QFT by Stueckelberg and Petermann, was explicitly formulated by Bogoliubov and the present author [@bsh1] as an exact [@gml] group of transformations related to finite Dyson’s transformations. Later on, it was shown [@fss] that this exact group (which we call [@umn] the [Bogoliubov Renorm-Group]{}) is related to the symmetry of a given solution and consists of specific transformations of a scale and of solution parameter(s) (that could involve, e.g., boundary condition parameters, like experimentally measured coupling constants); in a particular case this symmetry reduces to the power self-similarity symmetry well known in mathematical physics.
The [Renormalization Group Method]{} (RGM) devised in Refs. [@bsh1] (see also the English publications [@bsh2]) allows one to improve the behaviour of an approximate solution in the vicinity of a singularity by restoring the correct structure of this singularity.
As is well known, the RGM proved to be an indispensable tool for analysing solution properties of complicated nonlinear problems in: QFT (the ghost problem in QED; asymptotic freedom in QCD; the Standard Model and Grand Unification), critical phenomena and phase transitions, percolation, turbulence, polymer theory and many others (including boundary value problems of mathematical physics [@venia]).
In this context, we conclude that the RG symmetry, being a property of a solution, forms the basis for “filling the gap between an equation and its explicit solution”, the solution that is necessary for obtaining the ’physical law’.\
The author would like to thank Prof. A.M. Baldin for interest in the work and useful comments. Partial support of RFBR 96-15-96030 grant is gratefully acknowledged.\
**References**
[50]{} E. Wigner, [*Symmetries and Reflections*]{}, Indiana Univ. Press, 1967.
D. Gross, [*Physics Today*]{}, Dec 1995, p. 49.
D.V. Shirkov, “In Praise of Quantum Fields”, ICTP preprint IC/89/243, and in “Selected Topics in Statist. Mechanics”, Eds. A. Logunov et al., WS, Singapore, 1990, pp. 238-254; see also “The Evolution of Quantum Field Theory”, [*Ann. d. Phys.*]{}, 7 Folge, Band 47, Heft 1/2, S. 230-44, 1990.
D.V. Shirkov, “Quantum Field - the only form of Matter?”, Munich preprint MPI-Ph/92-54; published in German in “Werner Heisenberg, Physiker und Philosoph”, Spektrum Akad. Verlag, Heidelberg, 1993, pp. 269-75.
S. Coleman and E. Weinberg, [*Phys. Rev.*]{} [**D 7**]{} (1973) 1888.
S.S. Schweber, [*Physics Today*]{}, Nov 1993, p. 34.
P.W. Anderson, [*Science*]{} [**177**]{} (1972) 393.
N.N. Bogoliubov and D.V. Shirkov, [*Doklady AN USSR*]{} [**103**]{} (1955) 203; 391 (in Russian).
D.V. Shirkov, [*Sov. Phys. Doklady*]{} [**27**]{} (Mar. 1982), 197-9.
D.V. Shirkov, [*Russian Math. Surveys*]{} [**49**]{} (1994) 155; also corrected English printing - JINR comm. E2-96-15.
N.N. Bogoliubov and D.V. Shirkov, [*Nuovo Cim.*]{} [**3**]{} (1956) 845; [*Sov. Phys. JETP*]{} [**3**]{} (1956) 57-64.
V.F. Kovalev, V.V. Pustovalov and D.V. Shirkov, “Group analysis and renormgroup symmetries”, hep-th/9706056, to appear in [*Journ. of Math. Physics*]{}.
[^1]: The text of the talk presented at the Conference “RG-96” (Dubna, Aug 96). To appear in the proceedings.
[^2]: This needs quantum notions to be formulated in detail.
[^3]: This needs quantum notions to be motivated.
---
abstract: 'The (projective) convergence set of a divergent formal power series $f(x_{1},\dots ,x_{n})$ is defined to be the image in $\PP^{n-1}$ of the set of all $x\in \mathbb{C}^{n}$ such that $f(x_{1}t,\dots ,x_{n}t)$, as a series in $t$, converges absolutely near $t=0$. We prove that every countable union of closed complete pluripolar sets in $\PP^{n-1}$ is the convergence set of some divergent series $f$. The (affine) convergence sets of formal power series with polynomial coefficients are also studied. The higher-dimensional results of A. Sathaye, P. Lelong, N. Levenberg and R.E. Molzon, and of J. Ribón are thus generalized.'
address:
- ' dma@math.wichita.edu, Department of Mathematics, Wichita State University, Wichita, KS 67260-0033, USA'
- ' neelon@csusm.edu, Department of Mathematics, California State University, San Marcos, CA 92096-0001, USA'
author:
- 'Daowei Ma and Tejinder S. Neelon'
title: On Convergence Sets of Formal Power Series
---
Introduction
============
A formal power series $f(x_{1},\dots ,x_{n})$ with coefficients in $\mathbb{C}$ is said to be convergent if it is absolutely convergent in some neighborhood of the origin in $\mathbb{C}^{n}$. A classical result of Hartogs (see [@Ha]) states that a series $f$ converges if and only if it converges along all directions $\xi \in \mathbb{P}^{n-1}$, [*i.e.*]{}, $f_{\xi }(t):=f(\xi _{1}t,\dots ,\xi _{n}t)$ converges, as a series in $t,$ for all $\xi \in \mathbb{P}^{n-1}$. This can be interpreted as a formal analog of Hartogs’ theorem on separate analyticity. Since a divergent power series may still converge in certain directions, it is natural and desirable to consider the set of all such directions. Following Abhyankar-Moh [@AM], we define the convergence set of a *divergent* power series $f$ to be the set of all directions $\xi \in \mathbb{P}^{n-1}$ such that $f_{\xi }(t)$ is convergent. For the case $n=2,$ P. Lelong [@Le] proved that the convergence set of a divergent series $f(x_{1},x_{2})$ is an $F_{\sigma }
$ polar set (i.e. $F_{\sigma }$ set of vanishing logarithmic capacity) in $\mathbb{P}^{1}$, and moreover, every $F_{\sigma }$ polar subset of $\mathbb{P}^{1}$ is contained in the convergence set of a divergent series $f(x_{1},x_{2})$. The optimal result was later obtained by A. Sathaye (see [@Sa]) who showed that the class of convergence sets of divergent power series $f(x_{1},x_{2})$ is precisely the class of $F_{\sigma }$ polar sets in $\mathbb{P}^{1}$. In this paper we prove that a countable union of closed complete pluripolar sets in $\mathbb{P}^{n-1}$ is the convergence set of some divergent series. This generalizes the results of P. Lelong, Levenberg and Molzon, and Sathaye.
We also study convergence sets of power series of the type $f(s,t)=\sum_{j}P_{j }(s)t^{j}$ where the coefficients $P_{j}(s)$ are polynomials with $\deg (P_{j})\leq j$, as in [@Ri] and [@Pe].
Theorems \[mainaffine\] and \[mainproj\] are our main theorems, the proofs of which were inspired by [@Sa], and influenced by the methods developed in [@Sc3], [@LM], and [@Ri].
Transfinite Diameter and Capacity
=================================
Let $\mathbb{Z}_{+}$ denote the set of nonnegative integers. Let $n$ be a positive integer. For $\alpha =(\alpha _{1},...,\alpha _{n})\in \mathbb{Z}_{+}^{n}$, let $|\alpha |=\alpha _{1}+...+\alpha _{n}$. Let $\{\alpha
(1),\alpha (2),\dots \}$ be the listing of the elements of $\mathbb{Z}_{+}^{n}$ indexed using the lexicographic ordering but with $|\alpha (i)|$ nondecreasing. Set $x^{\alpha }=x_{1}^{\alpha _{1}}\cdots x_{n}^{\alpha
_{n}} $ for $x\in \mathbb{C}^{n}$ and $\alpha \in \mathbb{Z}_{+}^{n}$.
Let $m_{k}={\binom{n+k}{k}}$, the number of monomials of order up to $k$. Let $l_{k}=\sum_{q=1}^{k}q(m_{q}-m_{q-1})=n{\binom{n+k}{k-1}}$.
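The closed form $l_{k}=\sum_{q=1}^{k}q(m_{q}-m_{q-1})=n{\binom{n+k}{k-1}}$ can be checked directly; a quick numerical verification (our code, names ours):

```python
from math import comb

def m(n, k):
    """m_k = C(n+k, k): number of monomials in n variables of degree <= k."""
    return comb(n + k, k)

def l(n, k):
    """l_k computed as the sum  sum_{q=1}^{k} q * (m_q - m_{q-1})."""
    return sum(q * (m(n, q) - m(n, q - 1)) for q in range(1, k + 1))

# The sum agrees with the closed form n * C(n+k, k-1) over a range of n, k.
for n in range(1, 6):
    for k in range(1, 9):
        assert l(n, k) == n * comb(n + k, k - 1)
```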
For a finite set $\{s_{1},\dots ,s_{j}\}$ of points in $\mathbb{C}^{n}$, let $V(s_{1},\dots ,s_{j})=\det (s_{q}^{\a(p)})_{1\leq p,q\leq j}$ be the $j$-th Vandermonde determinant. For a compact set $E$ in $\mathbb{C}^{n}$, let $$
E\},\;\;d_{k}(E)=(V_{m_{k}}(E))^{1/l_{k}}.$$The limit $d(E):=\lim_{k}d_{k}(E)$ exists ([@Za]), and is known as the transfinite diameter of $E$.
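For $n=1$ the listing is $\alpha(p)=p-1$, so $(s_{q}^{\alpha(p)})$ is the classical Vandermonde matrix and $V(s_{1},\dots ,s_{j})=\prod_{p<q}(s_{q}-s_{p})$; a small numerical check (our code):

```python
import numpy as np
from itertools import combinations

def vandermonde_det(points):
    """V(s_1,...,s_j) = det(s_q^{alpha(p)}) for n = 1, with alpha(p) = p - 1."""
    j = len(points)
    M = np.array([[s ** p for s in points] for p in range(j)], dtype=float)
    return np.linalg.det(M)

pts = [0.5, -1.0, 2.0, 1.5]
# Product formula: prod over p < q of (s_q - s_p).
prod = np.prod([sq - sp for sp, sq in combinations(pts, 2)])
assert np.isclose(vandermonde_det(pts), prod)
```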
Let $\mathscr{P}_k(\CC^n)$ be the set of polynomials on $\CC^n$ of degrees $\le k$. For a compact set $E\subset \CC^n$ and $p\in \mathscr{P}_k(\CC^n)$, set $$|p|_E=\sup\{|p(z)|: z\in E\},$$ $$L_{k,R}(E)=R^{-1}(\sup\{|p|_{\Delta_R}: p\in \mathscr{P}_k(\CC^n), |p|_E\le 1\})^{1/k},$$ and $$L_R(E)=\sup_{k} L_{k,R}(E),\;\; c(E)=1/\ulim_{R\to\infty}L_R(E).$$ The quantity $c(E)$ is called the capacity of $E$.
For a compact set $E$ in $\mathbb{C}^{n}$ we have
(i)$\ c(E)=0$ if and only if $d(E)=0$, and
\(ii) if $n=1$, then $c(E)=d(E)$.
For a proof of (i), see [@LT]. For a proof of (ii), see \[Ahlfors 1973, p. 24\].
We need the following lemma. It appeared, in different forms, in [@SW] and [@Si]; see also [@Sc3] and [@Za].
\[BI\] (Bernstein’s inequality) Let $E$ be a compact set in $\CC^n$ with $c(E)>0$. Then there is a positive constant $C_E$ such that for every polynomial $p(z)=\sum_{|\a|\le d}a_\a z^\a$, we have $|a_\a|\le C_E^{d}|p|_E$.
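For $n=1$ and $E$ the closed unit disk, the Cauchy estimates give the inequality with $C_E=1$: $|a_\alpha|\le |p|_E$. A numerical illustration of this special case (our code; the polynomial is a random example):

```python
import numpy as np

rng = np.random.default_rng(0)
coeffs = rng.standard_normal(6)  # a_0, ..., a_5 of p(z) = sum a_k z^k
theta = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
z = np.exp(1j * theta)
# |p|_E is attained on the boundary circle (maximum principle);
# approximate the sup by sampling the circle densely.
p_sup = np.abs(np.polyval(coeffs[::-1], z)).max()
# Cauchy estimate: every coefficient is bounded by the sup norm on E.
assert np.abs(coeffs).max() <= p_sup + 1e-9
```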
Some Classes of Pluripolar Sets
===============================
Let $E$ be a Borel subset of $\CC^n$. (Though we do not mention the word “Borel” each time, all subsets of $\CC^n$ considered in this paper are assumed to be Borel.) The set $E$ is said to be pluripolar (polar when $n=1$) if for each point $x\in E$ there is a nonconstant plurisubharmonic function $u$ defined in a neighborhood $U$ of $x$ in $\CC^n$ such that $u=-\infty$ on $E\cap U$. The set $E$ is said to be globally pluripolar if there is a nonconstant plurisubharmonic function $u$ defined on $\CC^n$ such that $E\subset \{y: u(y)=-\infty\}$. Josefson’s theorem (answering a question of P. Lelong) states that $E$ is pluripolar if and only if $E$ is globally pluripolar. The set $E$ is said to be complete pluripolar if there is a non-constant plurisubharmonic function $u$ defined on $\CC^n$ such that $E= \{y: u(y)=-\infty\}$. So the set $\{(0, x_2)\in \CC^2: |x_2|<1\}$ and its closure are pluripolar, but not complete pluripolar. A countable union of pluripolar sets is pluripolar. So the set of rationals in the interval $[0,1]$ is polar. It is not complete polar, because each complete polar set is a $G_{\delta}$ set. In $\CC$ each $G_{\delta}$ polar set is complete polar, which is Deny’s theorem (see [@De]).
Following Siciak [@Sc3 P. 2], we consider families $L,G,H$ of plurisubharmonic functions: $$\begin{aligned}
L(\CC^n) &=&\{u\in \PSH(\mathbb{C}^{n}):\sup_{x\in \mathbb{C}^{n}}(u(x)-\ln
(1+|x|))<\infty \}, \\
G(\CC^n) &=&\exp (L(\CC^n))=\{e^{u}:u\in L(\CC^n)\}, \\
H(\CC^n) &=&\{u\in \PSH(\mathbb{C}^{n}):u\not\equiv 0,u(\lambda x)=|\lambda
|u(x),\forall \lambda \in \mathbb{C},x\in \mathbb{C}^{n}\}.\end{aligned}$$
A set $E$ in $\CC^n$ is said to be $L$-complete if there is a non-constant $u\in L(\CC^n)$ such that $E=\{u=-\infty\}$. A set $F$ in $\CC^n$ is said to be $H$-complete if there is a $w\in H(\CC^n)$ such that $F=\{x: w(x)=0\}$.
It follows from the one-to-one correspondence (see [@Sc3 Prop. 2.7]) $$\label{eq2}H(\CC\times\CC^n)\ni f(x_0,x)\longleftrightarrow f(1,x)\in L(\CC^n)$$ between functions of the class $H$ of $n+1$ variables and the functions of the class $L$ of $n$ variables that each $H$-complete set in $\CC\times\CC^n$ induces a unique $L$-complete set in $\CC^n$, and that each $L$-complete set in $\CC^n$ is induced by a (not necessarily unique) $H$-complete set in $\CC\times\CC^n$.
Let $|x|=(|x_1|^2+\cdots+|x_n|^2)^{1/2}$. Recall that $\mathscr{P}_k(\CC^n)$ is the set of polynomials on $\CC^n$ of degrees $\le k$. Let $\mathscr{H}_k(\CC^n)$ be the set of homogeneous polynomials on $\CC^n$ of degree $k$. Let $$Q(\CC^n)=\{(p,k): k\in \NN, p\in\mathscr{P}_k(\CC^n)\},$$ $$|(p(x),k)|=|p(x)|^{1/k},\;\|(p(x),k)\|=|p(x)|^{1/k}/(1+|x|^2)^{1/2},$$ $$|(p,k)|_K=\sup \{|(p(x),k)|: x\in K\},$$ and $$\|(p,k)\|_K=\sup \{\|(p(x),k)\|: x\in K\},\; \|(p,k)\|=\|(p,k)\|_{\CC^n}.$$ Let $$\Gamma(\CC^n)=\{(h,k): k\in \NN, h\in\mathscr{H}_k(\CC^n)\},$$ $$\|(h(x),k)\|=|h(x)|^{1/k}/|x|,$$ and $$\|(h,k)\|_K=\sup \{\|(h(x),k)\|: x\in K, x\not=0\},\; \|(h,k)\|=\|(h,k)\|_{\CC^n}.$$
Let $F\subset \CC^n$, $F\not=\emptyset$, $x\in\CC^n$, and $0\le r\le 1$. Define $$\begin{aligned}
\tau_H(x, F, r)&=&\inf\{\|(h,k)\|_F:\ (h,k)\in\Gamma(\CC^n),\ |h(x)|^{1/k}\ge r|x|,\ \|(h,k)\|\le 1\},\\
T_H(x,F)&=&\sup\{r:\ \tau_H(x, F, r)=0\},\\
\tau_L(x, F, r)&=&\inf\{\|(h,k)\|_F:\ (h,k)\in Q(\CC^n),\ \|(h(x),k)\|\ge r,\ \|(h,k)\|\le 1\},\\
T_L(x,F)&=&\sup\{r:\ \tau_L(x, F, r)=0\}.\end{aligned}$$ For the empty set, we define $\tau_H(x, \emptyset, r)=\tau_L(x, \emptyset, r)=0$, and $T_H(x,\emptyset)=T_L(x,\emptyset)=1$.
It is clear that if $E\subset F$, then $\tau_L(x, E, r)\le \tau_L(x, F, r)$ and $T_L(x,E)\ge T_L(x,F)$.
\[glo\] Let $u\in H(\CC^n)$ be continuous, with $\sup\{u(x): |x|=1\}=1$, and let $F=\{x\in\CC^n: u(x)=0\}$. Then for each $x\in \CC^n\setminus F$, $T_H(x,F)\ge u(x)/|x|$.
Fix $x\in \CC^n\setminus F$. Then $x\not=0$, since $u(0)=0$. Let $r$ be a positive number such that $r< u(x)/|x|\le 1$, and let $\d\in (0,r)$. Let $\phi(x)=\max(u(x),\d|x|)$. Then $\phi$ is a continuous function in $H(\CC^n)$. By [@Sc3 Prop. 2.10], for all $y\in \CC^n$, $$\phi(y)=\sup\{|h(y)|^{1/k}: (h,k)\in \Gamma(\CC^n),\; |h(z)|^{1/k}\le \phi(z)\;\forall z\in\CC^n\}.$$ Thus there is an $(h,k)\in \Gamma(\CC^n)$ such that $|h(z)|^{1/k}\le \phi(z)\;\forall z\in\CC^n$, and $r|x|<|h(x)|^{1/k}\le \phi(x)$, and therefore $$|h(x)|^{1/k}> r|x|,\; \|(h,k)\|\le 1,\;\|(h,k)\|_F\le \d.$$ It follows that $\tau_H(x, F, r)\le \d$. Hence $\tau_H(x, F, r)=0$ for each $r<u(x)/|x|$. Therefore, $T_H(x,F)\ge u(x)/|x|$.
Since (\[eq2\]) is a one-to-one correspondence, each $L$-complete set in $\CC^n$ is related to an $H$-complete set in $\CC\times\CC^n$.
Let $E=\{v=-\infty\}$ be an $L$-complete set in $\CC^n$ with $v\in L(\CC^n)$ such that the function $u(x_0,x):=|x_0|\exp(v(x/x_0))$ defined on $\{x_0\not=0\}$ extends to be a continuous function on $\CC\times \CC^n$. Then for each $x\in \CC^n\setminus E$, $T_L(x,E)\ge (1+|x|^2)^{-1/2}\exp(v(x))$.
This is a consequence of the previous lemma.
\[tec\] Let $E=\{g=0\}$ be a closed $L$-complete set in $\CC^n$ with $g\in G(\CC^n)$ such that $\sup\{(1+|y|^2)^{-1/2}g(y): y\in\CC^n\}=1$. Then for each $x\in \CC^n\setminus E$, and each compact set $K$, $T_L(x, E\cap K)\ge (1+|x|^2)^{-1/2}g(x)$.
If $E\cap K=\emptyset$, then the desired inequality clearly holds since $T_L(x,\emptyset)=1$. Fix $x\in \CC^n\setminus E$ and a compact set $K$ with $E\cap K\not=\emptyset$. Let $r>0$ be such that $r<(1+|x|^2)^{-1/2}g(x)$. Let $\eta$ be a positive number with $\eta<r$. Let $\l$ be a positive number that is less than the distance between the closed set $\{y: g(y)\ge \eta \}$ and the compact set $ K\cap E$, and that is so small that \[sqrteta\](ł+)\^[1-ł]{}<. Let $$\omega(y)=\left\{
\begin{array}{ll}
c_n\exp(-1/(1-|y|^2)),&\mbox{if}\,|y|<1,\\
0,&\mbox{if}\, |y|\ge 1,\end{array}
\right.\;\;\int \omega(y)\,dy=1.$$ For $\mu>0$, let $g_\mu(y)=\int g(y+\mu z)\omega(z)\,dz$. Then $g_\mu\in G(\CC^n)$, $g_\mu$ is $C^\infty$ and positive, and $g_\mu\downarrow g$ as $\mu\downarrow 0$. If $y\in K\cap E$, and if $|z|<1$, then $y+\l z\not\in\{y: g(y)\ge \eta \}$, and hence $g(y+\l z)<\eta$. It follows that $g_\lambda(y)=\int g(y+\lambda z)\omega(z)\,dz<\int \eta\omega(z)\,dz=\eta$. For each $y\in \CC^n$, $$g_\lambda(y)\le \int (1+|y+\lambda z|^2)^{1/2}\omega(z)\,dz\le (1+(|y|+\lambda)^2)^{1/2},$$ and hence $$g_{\lambda}(y)\le (1+\lambda)(1+|y|^2)^{1/2}.$$ As in [@Sc3 p. 17], we define a function $\phi_\lambda\in H(\CC\times\CC^n)$ by $$\phi_\lambda(y_0,y)=\left\{
\begin{array}{ll}
|y_0|(\lambda+g_\lambda(y/y_0))^{1-\lambda}+\lambda(|y_0|^2+|y|^2)^{1/2}, &\mbox{if}\, y_0\not=0,\\
\lambda|y|,&\mbox{if}\, y_0=0,\end{array}
\right.$$ and define a function $\psi_\lambda\in G(\CC^n)$ by $$\psi_\lambda(y)=\phi_\lambda(1,y)=(\lambda+g_\lambda(y))^{1-\lambda}+\lambda(1+|y|^2)^{1/2}.$$ Then $\psi_\lambda$ is $C^\infty$, and $\phi_\lambda$ is continuous.
By [@Sc3 Prop. 2.10], $$\phi_\lambda(y_0, y)=\sup\{|h(y_0,y)|^{1/k}\},$$ where the supremum is taken over all $(h,k)\in \Gamma(\CC\times\CC^n)$ such that $|h(z_0,z)|^{1/k}\le \phi_\lambda(z_0,z)\;\forall (z_0,z)\in\CC\times\CC^n$. It follows that $$\label{eq5}
\psi_\lambda(x)=\sup\{|p(x)|^{1/k}: (p,k)\in Q(\CC^n),\; |p(y)|^{1/k}\le \psi_\lambda(y)\;\forall y\in\CC^n\}.$$ For all $y\in\CC^n$, $$\begin{aligned}
\psi_\lambda(y)&=&(\lambda+g_\lambda(y))^{1-\lambda}+\lambda(1+|y|^2)^{1/2}\\
&\le & (\lambda+ (1+\lambda)(1+|y|^2)^{1/2})^{1-\lambda}+\lambda(1+|y|^2)^{1/2}\\
&<& (1+3\l)(1+|y|^2)^{1/2}.\end{aligned}$$ If $y\in K\cap E$, then $$\begin{aligned}
\psi_\lambda(y)&=&(\lambda+g_\lambda(y))^{1-\lambda}+\lambda(1+|y|^2)^{1/2}\\
&\le & (\lambda+ \eta)^{1-\lambda}+\lambda(1+|y|^2)^{1/2}\\
&<& \sqrt\eta+\lambda(1+|y|^2)^{1/2}\\
&\le& (\sqrt\eta+\l)(1+|y|^2)^{1/2}.\end{aligned}$$ So $|p(z)|^{1/k}\le \psi_\lambda(z)\,\forall z\in\CC^n$ implies that $$\label{eqn5}
\|(p,k)\|\le 1+3\lambda\quad\mbox{and}\quad \|(p,k)\|_{K\cap E}\le \sqrt\eta+\lambda.$$ For sufficiently small $\l$, $$(\lambda+g_\lambda(x))^{1-\lambda}+\lambda(1+|x|^2)^{1/2}>(1+3\l)r(1+|x|^2)^{1/2},$$ since as $\l$ approaches 0, the left side minus the right side tends to $g(x)-r(1+|x|^2)^{1/2}>0$. It follows that for sufficiently small $\l$, $$\label{eqn8}
\psi_\lambda(x)>(1+3\lambda)r(1+|x|^2)^{1/2}.$$
By (\[eq5\]), (\[eqn5\]) and (\[eqn8\]), we have $\tau_L(x, E\cap K,r)\le (1+3\l)^{-1}(\sqrt\eta+\l)$. Letting $\l\to0$, and then $\eta\to0$, yields that $\tau_L(x, E\cap K,r)=0$. Since this holds for every $r<g(x)(1+|x|^2)^{-1/2}$, it follows that $T_L(x, E\cap K)\ge g(x)(1+|x|^2)^{-1/2}$.
A pluripolar set $E$ in $\CC^n$ is said to be $J$-complete if for each $x\in \CC^n\setminus E$, and each compact set $K$, $T_L(x, E\cap K)>0$.
Note that the empty set is $J$-complete. Also, it is clear that a $J$-complete set has to be closed.
\[closed\] Every closed $L$-complete set in $\CC^n$ is $J$-complete.
This is a consequence of Lemma \[tec\].
\[intersection\] An intersection of $J$-complete sets in $\CC^n$ is $J$-complete. A finite union of $J$-complete sets in $\CC^n$ is $J$-complete.
Let $\{E_\a\}_{\a\in\Lambda}$ be a family of $J$-complete sets in $\CC^n$ and let $E=\cap_\a E_\a$. Let $K$ be a compact set in $\CC^n$ and let $x\in \CC^n\setminus E$. Then there is a $\b\in\Lambda$ such that $x\not\in E_\b$. Thus $T_L(x, E\cap K)\ge T_L(x, E_\b\cap K)>0$. Therefore, $E$ is $J$-complete.
Let $F_1, \dots, F_m$ be $J$-complete sets in $\CC^n$ and let $F=\cup_{j=1}^m F_j$. Let $K$ be a compact set in $\CC^n$ and let $x\in \CC^n\setminus F$. Choose a number $r$ such that $0<r<\min_j T_L(x, F_j\cap K)$. Then $ \tau_L(x, F_j\cap K,r)=0$ for $j=1, \dots, m$. Let $\ve>0$. Then there are $(h_j, k_j)\in Q(\CC^n)$, $j=1, \dots, m$, such that $$\|(h_j, k_j)\|_{F_j\cap K}<\ve,\;\; \|(h_j(x), k_j)\|\ge r,\;\; \|(h_j, k_j)\|\le 1.$$ Raising each $h_j$ to a suitable power, we may assume that $k_1=\cdots=k_m=k$. Let $h=\Pi h_j$. Then $(h,mk)\in Q(\CC^n)$, and $$\|(h, mk)\|_{F\cap K}<\ve^{1/m},\;\; \|(h(x), mk)\|\ge r,\;\; \|(h, mk)\|\le 1.$$ Thus $\tau_L(x, F\cap K,r)\le \ve^{1/m}$ for each $\ve>0$. It follows that $\tau_L(x, F\cap K,r)=0$ and $T_L(x, F\cap K)\ge r>0$. Therefore, $F$ is $J$-complete.
The following theorem is due to A. Saddulaev. Since his book that includes the theorem has not been published, we include his proof here. We are grateful to him for sending us the statement and proof of the theorem, and to B. Fridman for translating A. Saddulaev's explanation from Russian into English.
\[sad\] Every complete set in $\CC^n$ is $L$-complete.
Suppose that $E$ is a complete pluripolar set in $\CC^n$. Let $u$ be a plurisubharmonic function such that $E=\{x:u(x)=-\infty \}$. Choose an increasing sequence $\{M_j\}$ of positive numbers such that $\lim M_j=\infty $ and $M_j\ge \sup_{|z|\le \exp 2^j} u(z)$. For each $j$, define a function $v_j$ by
$$v_{j}(x)=\left\{
\begin{array}{ll}
\max (2^{-j}(M_j^{-1}u(x)-1), 2^{-j}\log |x|-1), & \text{ \ if \ } |x|< \exp 2^j; \\
2^{-j}\log |x|-1, & \text{ \ if \ } |x|\ge \exp 2^j.\end{array}
\right.$$
Since for each $\zeta$ on the boundary of the ball $B(0, \exp 2^j)$, $$\limsup_{|x|< \exp 2^j, x\to \zeta} (2^{-j}(M_j^{-1}u(x)-1))\le 0=2^{-j}\log |\zeta|-1,$$ the function $v_j$ is plurisubharmonic on $\CC^n$ by the gluing theorem. On each open set with compact closure, all but a finite number of $v_j$ are non-positive. It follows that the sum $v(x):=\sum_{j=1}^{\infty }v_{j}(x)$ is plurisubharmonic (or identically $-\infty$), since the sequence of the partial sums of the series is eventually non-increasing. It is clear that $v_j(x)\le 2^{-j}\log^+ |x|$ for each $j$, so that $v(x)\le \log^+|x|$. Thus $v\in L(\CC^n)$ (or $v$ is identically $-\infty$).
Suppose that $y\in E$. Then $u(y)=-\infty$, and $v_j(y)=2^{-j}\log |y|-1$ for each $j$. Thus $v(y)=-\infty$.
Now suppose that $y\in \CC^n\setminus E$ so that $u(y)>-\infty$. Then $v_j(y)>-\infty$ for each $j$. Since $$\lim_{j\to\infty} 2^{-j}(M_j^{-1}u(y)-1)=0>-1=\lim_{j\to\infty} (2^{-j}\log |y|-1),$$ it follows that there is a positive integer $m=m(y)$ such that $$2^{-j}(M_j^{-1}u(y)-1)>-1/2> (2^{-j}\log |y|-1), \text{ \ for } j>m.$$ Thus $y\in B(0, \exp 2^j)$ and $v_j(y)=2^{-j}(M_j^{-1}u(y)-1)$ for $j>m$, and therefore $$\begin{aligned}
v(y)&=&\sum_{j=1}^m v_j(y)+\sum_{j=m+1}^\infty 2^{-j}(M_j^{-1}u(y)-1)\\
&\ge&\sum_{j=1}^m v_j(y)+\sum_{j=1}^\infty 2^{-j}(-M_1^{-1}|u(y)|-1)\\
&=&\sum_{j=1}^m v_j(y)+(-M_1^{-1}|u(y)|-1)>-\infty.\end{aligned}$$ This implies, in particular, that $v$ is not identically $-\infty$. It follows that $v\in L(\CC^n)$ and $E=\{x: v(x)=-\infty\}$.
\[cj\] Every closed complete set in $\CC^n$ is $J$-complete.
This is a consequence of Proposition \[closed\] and Theorem \[sad\].

Let $E$ be a non-empty compact set in $\CC^n$. Define the extremal function $\Phi_E: \CC^n\to [0,\infty]$ by $\Phi_E(x)=\sup \{|p(x)|^{1/k}: (p,k)\in Q(\CC^n), |(p,k)|_E\le 1\}$. The $G$-hull of $E$ is defined to be ${{\hat E}^G}:=\{x\in\CC^n: \Phi_E(x)<\infty\}$. The $G$-hull of the empty set is defined to be the empty set. Since ${{\hat E}^G}=\cup_k\{x: \Phi_E(x)\le k\}$, it follows that ${{\hat E}^G}$ is an $F_\sigma$ set.
A set is said to be $G$-complete if it is the $G$-hull of a compact set. A compact complete set $K$ in $\CC^n$ is $G$-complete, since ${{\hat K}^G}=K$ (see [@LM]).
Convergence Sets in Affine Spaces
=================================
Consider a series $f\in \CC[s_1,\dots, s_n][[t]]$ of the form $f(s,t)=\sum_{j=0}^\infty P_j(s)t^j$, where $P_j(s)=P_j(s_1,\dots, s_n)$ are polynomials of $n$ variables. Define $$\Conv(f)=\{s\in \CC^n: f(s, t)\; \hbox{\rm converges as a power series in $t$}\}.$$
Let $A, B$ be nonnegative integers with $A>0$. A series $f(s,t)=\sum_j P_j(s)t^j$ is said to be in Class $(A,B)$ if $\deg (P_j)\le Aj+B$.
It is clear that Class $(1,0)$ is a subset of Class $(A, B)$. Suppose that $E=\Conv(f)$ for some $f$ in Class $(A,B)$. Write $f(s,t)=\sum_j P_j(s)t^j$. Set $g(s, t)=t^Nf(s,t^N)$, where $N=A+B$. Then $g$ is in Class $(1,0)$ and $\Conv(g)=\Conv(f)$. Therefore, the convergence sets for Class $(A,B)$ are exactly the convergence sets for Class $(1,0)$.
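The degree count behind this substitution can be displayed in one line (a routine verification spelled out here for the reader):

```latex
g(s,t)=t^{N}f(s,t^{N})=\sum_{j=0}^{\infty}P_{j}(s)\,t^{N(j+1)},
\qquad
\deg P_{j}\le Aj+B\le (A+B)(j+1)=N(j+1),
```

so the degree of each coefficient is bounded by the corresponding power of $t$, which is the defining condition for Class $(1,0)$; and $g(s,t)$ converges in $t$ exactly where $f(s,t)$ does.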
Suppose that $f(s,t)=\sum_{j=0}^\infty P_j(s)t^j$ is in Class $(1,0)$ and $\Conv(f)=\CC^n$. Then, by Hartogs’ classical theorem, $f(s,t)$ converges as a power series in $n+1$ indeterminates $s$ and $t$, [*i.e.*]{}, $f(s,t)$ converges absolutely for $(s,t)$ in some neighborhood of the origin in $\CC^n\times \CC$. In this case, we say $f$ is a convergent series. Conversely, if $\Conv(f)\not=\CC^n$, then $f(s,t)$ diverges as a power series in $s$ and $t$, [*i.e.*]{}, $f(s,t)$ converges absolutely in no neighborhood of the origin in $\CC^{n+1}$. In this case, we say $f$ is a divergent series.
A subset $E$ of $\CC^n$ is said to be a [convergence set]{} in $\CC^n$ if $E=\Conv(f)$ for some divergent series $f$ of Class $(1,0)$.
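A minimal illustration (our example, not drawn from the results below):

```latex
f(s,t)=\sum_{j=1}^{\infty} j!\,(s_{1}t)^{j},
\qquad
\Conv(f)=\{s\in\CC^{n}: s_{1}=0\}.
```

Here $P_j(s)=j!\,s_1^j$ has degree $j\le 1\cdot j+0$, so $f$ is of Class $(1,0)$; for $s_1\not=0$ the series $\sum j!\,s_1^jt^j$ has radius of convergence zero in $t$, so $f$ is divergent and its convergence set is the hyperplane $\{s_1=0\}$.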
\[basic\] Let $E$ be a convergence set in $\CC^n$. Then $E$ is a countable union of $G$-complete sets. Hence $E$ is an $F_\sigma$ pluripolar set.
There is a divergent series $f(s,t)=\sum_{j=1}^\infty P_j(s) t^j$ of Class $(1,0)$ such that $E=\Conv(f)$. Put, for $m=1,2,3,\dots$, $$\label{eq4a}
E_m=\{s\in\CC^n: |s|\le m,\; |P_j(s)|^{1/j}\le m,\; j=1,2,\dots\}.$$ Then $E=\cup E_m$.
Suppose, if possible, that for some positive integer $m$, $c(E_m)>0$. Then, by Bernstein’s inequality (Lemma \[BI\]), the coefficients $b_{j\alpha}$ of $P_j(s)=\sum b_{j\alpha } s^\a$ satisfy $|b_{j\a}|\le (C_{E_m}m)^{j}$, where $C_{E_m}$ is a constant depending only on $E_m$. It follows that the series $f(s,t)$ is convergent, contradicting the hypothesis. Therefore each $E_m$ is pluripolar, and $E$ is an $F_\sigma$ pluripolar set.
Fix a non-empty $E_m$ and a point $s\in {{\hat E_m}^G}$. Then $\gamma:=\Phi_{E_m}(s)<\infty$. Then $|P_j(s)|^{1/j}\le \gamma m$ for all $j$, and hence $s\in \Conv(f)$. Thus ${{\hat E_m}^G}\subset E$ for all $m$. Therefore, $E=\cup {{\hat E_m}^G}$, and $E$ is a countable union of $G$-complete sets.
\[gcomplete\] Every $G$-complete set in $\CC^n$ is a convergence set.
The theorem is proved by following the approach in [@LM Theorem 5.6]. Let $E$ be a non-empty $G$-complete set in $\CC^n$. Then $E={{\hat K}^G}$, where $K$ is a non-empty compact set. Let $\mathscr F_K$ be the collection of members $(p,k)\in Q(\CC^n)$ such that $k\ge 1$, $p$ has rational coefficients, and $|(p,k)|_K\le1$. Let $\{(p_j,k_j)\}$ be an enumeration of $\mathscr F_K$. Choose a sequence $\{r_j\}$ of positive integers so that the sequence $\{r_jk_j\}$ is strictly increasing. Let $f(s,t)=\sum_{j=1}^\infty p_j(s)^{r_j}t^{r_jk_j}$. Then $f$ is of Class $(1,0)$.
Suppose $s\in E$. Then $\a:=\Phi_K(s)<\infty$. It follows that $|p_j(s)^{r_j}|\le \a^{r_jk_j}$ for all $j$, and hence $s\in\Conv(f)$. Therefore, $E\subset \Conv(f)$.
We now consider a point $s\not\in E$. Then $\Phi_K(s)=\infty$. For each positive integer $m$ there is a $(p,k)\in Q(\CC^n)$ such that $|(p,k)|_K\le 1$ and $|(p(s),k)|>m$, so there is a $j_m$ such that $|(p_{j_m},k_{j_m})|_K\le 1$ and $|(p_{j_m}(s),k_{j_m})|>m$. It follows that the sequence $\{|(p_j(s)^{r_j},r_jk_j)|\}$ is unbounded, and $s\not\in \Conv(f)$. Therefore, $E= \Conv(f)$.
\[main\] Let $E$ be a countable union of $J$-complete sets in $\CC^n$. Then $E$ is a convergence set.
The set $E$ can be expressed as $E=\cup E_m$, where $\{E_m\}$ is an ascending sequence of $J$-complete sets. For each positive integer $m$, we shall construct a sequence $\{(h_{mk}, q_{mk})\}_{k=1}^\infty$ in $Q(\CC^n)$ such that
\(i) $|(h_{mk}, q_{mk})|_{\overline B_m\cap E_m}\le1$,
\(ii) $\|(h_{mk}, q_{mk})\|\le m$,
\(iii) $\cup_{k=1}^\infty\{x: |(h_{mk}(x), q_{mk})|>m/2\}\supset \CC^n\setminus E_m$,
where $\ol B_m$ is the closed ball in $\CC^n$ of center 0 and radius $m$.
Fix $m$ and suppose that $y\in \CC^n\setminus E_m$. Then $T_L(y, E_m\cap\ol B_m)>0$. Thus there is a positive number $r<1$ such that $$\inf\{|(p,v)|_{\overline B_m\cap E_m}: (p,v)\in Q(\CC^n), |(p(y),v)|\ge r, \|(p,v)\|\le1\}=0.$$ Choose a positive rational number $\beta=a/b<1$, where $a, b$ are positive integers, such that $(r/m)^\beta> 1/2$. There is a member $(p,v)$ of $Q(\CC^n)$ such that $$|(p,v)|_{E_m\cap\ol B_m}<m^{-1/\beta}, \; |(p(y),v)|\ge r, \|(p,v)\|\le1.$$ Let $h_{(y)}(x)=p(x)^a m^{v(b-a)}$, and $q_{(y)}=bv$. Then $(h_{(y)},q_{(y)})\in Q(\CC^n)$, and $|(h_{(y)}(x),q_{(y)})|=|(p(x),v)|^\b m^{1-\b}$. We have, for all $x\in \CC^n$, $$|(h_{(y)}(x),q_{(y)})|\le (1+|x|^2)^{\beta/2} m^{1-\beta}\le m(1+|x|^2)^{1/2},$$ $$|(h_{(y)},q_{(y)})|_{E_m\cap \ol B_m}<m^{-1}m^{1-\beta}=m^{-\beta}\le 1,$$ and $$|(h_{(y)}(y),q_{(y)})|\ge r^\beta m^{1-\beta}=(r/m)^\beta m>m/2.$$
Put $U_y:=\{x: |(h_{(y)}(x),q_{(y)})|>m/2\}$. Then $U_y$ is an open neighborhood of $y$. Since the set $\CC^n\setminus E_m$ is open, the open cover $\{U_y: y\in \CC^n\setminus E_m\}$ of $\CC^n\setminus E_m$ contains a countable subcover $\{U_{y_k}: k=1,2,\dots\}$. Write $(h_{mk},q_{mk})=(h_{(y_k)},q_{(y_k)})$. Then the sequence $\{(h_{mk}, q_{mk})\}_{k=1}^\infty$ satisfies (i), (ii) and (iii).
Let $\{(P_\nu, q_\nu)\}$ be a sequence obtained by arranging $\{(h_{mk}, q_{mk})\}$ as a single sequence. By raising each $(P_\nu, q_\nu)$ to a suitable power, we may assume that $\{q_\nu\}$ is a strictly increasing sequence. Put $f(x,t)=\sum_\nu P_\nu(x)t^{q_\nu}$. Then $f$ is of Class $(1,0)$. We shall show that $E=\Conv(f)$.
Suppose that $x\in E$ and $\nu$ is a positive integer. Then $x\in \overline B_{m_0}\cap E_{m_0}$ for some positive integer $m_0$. Now $(P_\nu, q_\nu)=(h_{mk}, q_{mk})$ for some $m$, $k$. If $m\ge m_0$, then $|(P_\nu(x), q_\nu)|\le 1$; if $m< m_0$, then $|(P_\nu(x), q_\nu)|\le m(1+|x|^2)^{1/2}<m_0(1+|x|^2)^{1/2}$. It follows that $\{|(P_\nu(x), q_\nu)|\}$ is a bounded sequence, and hence $x\in \Conv(f)$.
Now suppose that $x\not\in E$. Then for each positive integer $m$, there is a positive integer $k(m)$ such that $|(h_{m,k(m)}(x), q_{m,k(m)})|>m/2$. The sequence $\{|(h_{m,k(m)}(x), q_{m,k(m)})|\}_{m=1}^\infty$ is unbounded and is a rearranged subsequence of $\{|(P_\nu(x), q_\nu)|\}$. It follows that $\{|(P_\nu(x), q_\nu)|\}$ is an unbounded sequence, and hence $x\not\in \Conv(f)$. Therefore $E=\Conv(f)$, and $E$ is a convergence set.
\[mainaffine\] Every countable union of closed complete sets in $\CC^n$ is a convergence set.
This is a consequence of Theorems \[main\] and \[cj\].
Every countable union of proper analytic varieties in $\CC^n$ is a convergence set.
Every countable set in $\CC^n$ is a convergence set.
A subset of $\CC$ is a convergence set if and only if it is an $F_\sigma$ polar set. This is because each closed polar set in $\CC$ is a complete polar set.
Convergence Sets in Projective Spaces
=====================================
For a formal power series $f(x_{1},\dots ,x_{n})=f(x)\in \mathbb{C}[[x_{1},\dots ,x_{n}]]$ and for $x\in \mathbb{C}^{n}$, let $f_{x}(t)=f(x_{1}t,\dots ,x_{n}t)\in \mathbb{C}[[t]]$. Since for $\lambda \in
\mathbb{C}$, $\lambda \not=0,$ the series $f_{x}$ and $f_{\lambda x}$ converge or diverge together, the convergence set of $f$ (i.e., the set of $x$ for which $f_{x}$ converges) can be identified with a subset of the projective space $\mathbb{P}^{n-1}$.
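Concretely, if $f=\sum_{m=0}^\infty f_m$ is the decomposition of $f$ into homogeneous polynomials $f_m$ of degree $m$, then

```latex
f_{x}(t)=f(x_{1}t,\dots,x_{n}t)=\sum_{m=0}^{\infty}f_{m}(x)\,t^{m},
\qquad
f_{\lambda x}(t)=f_{x}(\lambda t),
```

so $f_x$ is a series of Class $(1,0)$ in $t$, and its convergence or divergence depends only on the point $[x]\in\PP^{n-1}$.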
For a non-zero member $x$ in $\mathbb{C}^{n}$, $[x]$ denotes its image in $\mathbb{P}^{n-1}$. For a subset $E$ of $\mathbb{P}^{n-1}$, put $\tilde{E}=\{x\in \mathbb{C}^{n}:[x]\in E\}$.
The (projective) convergence set of $f$ is defined to be $$\Conv_p(f)=\{[x]\in \PP^{n-1}: f_x\;\mbox{converges}\}.$$
A subset $E$ of $\PP^{n-1}$ is said to be a convergence set in $\PP^{n-1}$ if $E=\Conv_p(f)$ for some divergent series $f(x_1,\dots,x_n)$.
Let $E$ be a non-empty closed set in $\mathbb{P}^{n-1}$. Define $\Psi _{E}:
\mathbb{P}^{n-1}\rightarrow \lbrack 0,\infty ]$ by $\Psi _{E}([x])=\sup
\{|h(x)|^{1/q}/|x|:(h,q)\in \Gamma (\mathbb{C}^{n}),\Vert (h,q)\Vert _{
\tilde{E}}\leq 1\}$. The $G$-hull of $E$ is ${{\hat E}^G}=\{u\in \mathbb{P}^{n-1}:\Psi _{E}(u)<\infty \}$. The $G$-hull of the empty set is defined to be the empty set. If $E$ is non-pluripolar, then ${{\hat E}^G}=\mathbb{P}^{n-1}$. If $E$ is pluripolar, then ${{\hat E}^G}$ is an $F_{\sigma }$ pluripolar set.
Recall that there are no non-constant plurisubharmonic functions on $\PP^{n-1}$.
A pluripolar set $E$ in $\mathbb{P}^{n-1}$ is said to be complete if there is a function $h\in H(\mathbb{C}^{n})$ such that $E=\{[x]\in \mathbb{P}^{n-1}:h(x)=0\}$. A pluripolar set $F$ in $\mathbb{P}^{n-1}$ is said to be $G$-complete if $F={{\hat E}^G}$ for some closed pluripolar set $E$.
The proofs of the following two theorems are very similar to those of Theorems \[basic\] and \[gcomplete\], and hence are omitted.
\[bas\] Let $E$ be a convergence set in $\PP^{n-1}$. Then $E$ is a countable union of $G$-complete sets. Hence $E$ is an $F_\sigma$ pluripolar set.
Every $G$-complete set in $\PP^{n-1}$ is a convergence set.
The set $\Pi $ of all hyperplanes in $\mathbb{P}^{n-1}$ is naturally isomorphic to $\mathbb{P}^{n-1}$. Each $\Omega \in \Pi $ is isomorphic to $\mathbb{P}^{n-2}$, and its complement in $\mathbb{P}^{n-1}$ is isomorphic to $\mathbb{C}^{n-1}$. For any two hyperplanes in $\mathbb{P}^{n-1}$, there is a unitary transformation that maps one to the other.
Fix a positive number $M$. Let $$\begin{aligned}
S_1&=&\{[1,0,\dots,0]\},\\
S_k&=&\{[x]: |x_1|^2+\cdots+|x_{k-1}|^2\le M^2|x_k|^2,\; x_{k+1}=\cdots=x_n=0\},\quad k=2,\dots,n.\end{aligned}$$ Put $$\label{km}
K_M=\cup_{k=1}^n S_k.$$ Then $\{K_m\}$ is an ascending sequence of closed sets with $\PP^{n-1}=\cup_{m=1}^\infty K_m$.
A subset $E$ of $\PP^{n-1}$ is said to be [*non-occupying*]{} if there exists $\Omega \in \Pi $ such that $E\cap \Omega =\emptyset$.
\[noc\]If $K$ is a closed non-occupying subset of $\PP^{n-1}$ and if $u\in \PP^{n-1}$, then $K\cup\{u\}$ is non-occupying.
Let $R=\{V\in \Pi: V\cap K=\emptyset\}$ and $S=\{V\in\Pi: u\in V\}$. Then $R$ is a non-empty open set in $\Pi$ and $S$ is a hyperplane in $\Pi$. Thus $R\setminus S$ is non-empty.
For each $M>0$, the set $K_M$ is non-occupying.
Let $e_1,\dots, e_n$ be the standard basis of $\CC^n$, and let $\eps$ be a sufficiently small positive number. Let $v_j=e_j+\eps e_{j+1}$ for $j=1,\dots,n-1$. Put $V_j=\span(v_1,\dots,v_j)$ for $j=1,\dots, n-1$, and $V=V_{n-1}$. Also, let $W_j=\span(e_1,\dots,e_j)$. Note that $$V\cap W_j\subset V_{j-1},\;\;\text{for }j\ge2.$$ Since $S_j\subset W_j$, it follows that $V\cap S_j\subset V_{j-1}$ for $j\ge 2$. It is clear that $V\cap S_1=\emptyset$. For $j\ge 2$ and for sufficiently small $\eps$, since $W_{j-1}\cap S_j=\emptyset$, and since $V_{j-1}$ is close to $W_{j-1}$, we see that $V_{j-1}\cap S_j=\emptyset$. It follows that $$V\cap K_M=\cup_{j=1}^n (V\cap S_j)\subset \cup_{j=2}^n (V_{j-1}\cap S_j)=\emptyset.$$ Therefore $K_M$ does not intersect the hyperplane $V$.
A set $E$ in $\PP^{n-1}$ is said to be $J$-complete if for each hyperplane $V$, $E\setminus V$ is $J$-complete in $\PP^{n-1}\setminus V$. The set $E$ is said to be globally $J$-complete if for each $[x]\in \PP^{n-1}\setminus E$, $T_H(x, \tilde E)>0$.
It is clear that each $J$-complete set is closed, and that each globally $J$-complete set is $J$-complete.
The proof of the following proposition is very similar to that of Proposition \[intersection\], and hence is omitted.
\[union\] An intersection of (globally) $J$-complete sets in $\PP^{n-1}$ is (globally) $J$-complete. A finite union of (globally) $J$-complete sets in $\PP^{n-1}$ is (globally) $J$-complete.
Let $E\subset \PP^{n-1}$ be the zero locus of a continuous function $h\in H(\CC^n)$. Then $E$ is a globally $J$-complete set in $\PP^{n-1}$.
This is a consequence of Lemma \[glo\].
\[closed5\] Every closed complete set in $\PP^{n-1}$ is $J$-complete.

This is a consequence of Theorem \[cj\].
\[glob\] A proper algebraic variety in $\PP^{n-1}$ is a globally $J$-complete set.
Let $E$ be a proper algebraic variety in $\PP^{n-1}$. Then there are members $(h_j, q_j)$ of $\Gamma(\CC^n)$, $j=1,\dots, k$, such that $$E=\{[x]\in \PP^{n-1}: h_1(x)=\cdots=h_k(x)=0\}.$$ Let $h=\sum_{j=1}^k |h_j|^{1/q_j}$. Then $h\in H(\CC^n)$, $h$ is continuous, and $E=\{h=0\}$. By the preceding proposition, $E$ is globally $J$-complete.
For $[x]\in \PP^{n-1}$ and $S\subset \PP^{n-1}$, we define $T_H([x],S)$ to be $T_H(x, \tilde S)$. If $W$ is a hyperplane in $\PP^{n-1}$, and if $z$ and $S$ lie in $\PP^{n-1}\setminus W\cong \CC^{n-1}$, we observe that $T_H(z,S)=0$ if and only if $T_L(z,S)=0$.
\[key\] Let $E$ be a $J$-complete set in $\PP^{n-1}$, let $K$ be a non-occupying closed set in $\PP^{n-1}$, let $[y]\in \PP^{n-1}\setminus E$, and let $m$ be a real number $\ge 1$. Then there exists an $(h, q)\in \Gamma(\CC^n)$ such that $$\|(h,q)\|\le m, \;\;\|(h,q)\|_{E\cap K}\le 1,\;\; \|(h(y),q)\|>m/2.$$
By Lemma \[noc\], $K\cup\{[y]\}$ is non-occupying, hence there is a hyperplane $V$ such that $\Omega:=\PP^{n-1}\setminus V\supset(K\cup\{[y]\})$. Since $E\cap \Omega $ is a $J$-complete set in $\Omega$, we see that $T_L([y], E\cap K)>0$. It follows that $T_H([y], E\cap K)>0$. Thus there is a positive number $r<1$ such that $\tau _{H}([y],E\cap K,r)=0$, [*i.e.*]{}, $$\inf\{\|(p,v)\|_{E\cap K}: (p,v)\in \Gamma(\CC^n), \|(p(y),v)\|\ge r, \|(p,v)\|\le1\}=0.$$
Choose a positive rational number $\beta=a/b<1$, where $a, b$ are positive integers, such that $(r/m)^\beta> 1/2$. There is a $(p,v)\in \Gamma(\CC^n)$ such that $$\|(p,v)\|_{E\cap K}<m^{-1/\beta}, \; \|(p(y),v)\|\ge r,\; \|(p,v)\|\le1.$$ Let $u=\ol y/|y|$. Then $u$ is a unit vector, and $\langle y,u\rangle:=u_1y_1+\cdots+u_ny_n=|y|$. Put $h(x)=p(x)^a (m\langle x,u\rangle)^{v(b-a)}$, and $q=bv$. Then $(h,q)\in \Gamma(\CC^n)$, and $\|(h(x),q)\|=\|(p(x),v)\|^\b m^{1-\b}$. We have, for all $x\in \CC^n$, $$\|(h(x),q)\|\le m^{1-\beta}\le m,$$ $$\|(h,q)\|_{E\cap K}<m^{-1}m^{1-\beta}=m^{-\beta}\le 1,$$ and $$\|(h(y),q)\|\ge r^\beta m^{1-\beta}=(r/m)^\beta m>m/2.$$
\[proj\] Let $E$ be a countable union of $J$-complete sets in $\PP^{n-1}$. Then $E$ is a convergence set.
As in the proof of Theorem \[main\], it is enough to construct a sequence $\{(P_{\nu },q_{\nu })\}_{\nu =1}^{\infty }$ in $\Gamma (\mathbb{C}^{n})$, with $\{q_{\nu }\}$ strictly increasing, such that $[x]\in E$ if and only if the sequence $\{\Vert (P_{\nu }(x),q_{\nu })\Vert \}$ is bounded, because then $E$ is the convergence set of $f(x)=\sum_{\nu }P_{\nu }(x)$.
Since, by Proposition \[union\], the union of a finite number of $J$-complete pluripolar sets is $J$-complete, we can assume that $E=\cup E_{m}$, where $\{E_{m}\}$ is an ascending sequence of $J$-complete pluripolar sets in $\mathbb{P}^{n-1}$. Recall that $\mathbb{P}^{n-1}=\cup K_{m},$ where $K_{m},m=1,2,3,...,$ is the ascending sequence of closed non-occupying sets in $\mathbb{P}^{n-1}$ defined in (\[km\]). For each positive integer $m$, we shall construct a sequence $\{(h_{mk},q_{mk})\}_{k=1}^{\infty }$ in $\Gamma (\mathbb{C}^{n})$ such that
\(i) $\|(h_{mk}, q_{mk})\|_{K_m\cap E_m}\le1$,
\(ii) $\|(h_{mk}, q_{mk})\|\le m$,
\(iii) $\cup_{k=1}^\infty\{[x]\in\PP^{n-1}: \|(h_{mk}(x), q_{mk})\|>m/2\}\supset \PP^{n-1}\setminus E_m$.
Fix $m$ and suppose that $[y]\in \PP^{n-1}\setminus E_m$. By Lemma \[key\], there exists an $(h_{[y]}, q_{[y]})\in \Gamma(\CC^n)$ such that $$\|(h_{[y]},q_{[y]})\|\le m, \;\;\|(h_{[y]},q_{[y]})\|_{E_m\cap K_m}\le 1,\;\; \|(h_{[y]}(y),q_{[y]})\|>m/2.$$
Put $U_{[y]}:=\{[x]: \|(h_{[y]}(x),q_{[y]})\|>m/2\}$. Then $U_{[y]}$ is an open neighborhood of $[y]$. Since the set $\PP^{n-1}\setminus E_m$ is open, the open cover $\{U_{[y]}: [y]\in \PP^{n-1}\setminus E_m\}$ of $\PP^{n-1}\setminus E_m$ contains a countable subcover $\{U_{[y_k]}: k=1,2,\dots\}$. Put $(h_{mk},q_{mk})=(h_{[y_k]},q_{[y_k]})$. Then the sequence $\{(h_{mk}, q_{mk})\}_{k=1}^\infty$ satisfies (i), (ii) and (iii).
Let $\{(P_\nu, q_\nu)\}$ be a sequence obtained by arranging $\{(h_{mk}, q_{mk})\}$ as a single sequence. By raising each $(P_\nu, q_\nu)$ to a suitable power, we may assume that $\{q_\nu\}$ is a strictly increasing sequence. Now (i), (ii) and (iii) imply that the sequence $\{\Vert (P_{\nu }(x),q_{\nu })\Vert \}$ is bounded if and only if $[x]\in E$.
\[mainproj\]Every countable union of closed complete sets in $\PP^{n-1}$ is a convergence set.
This is a consequence of Theorems \[proj\] and \[closed5\].
Every countable union of proper algebraic varieties in $\PP^{n-1}$ is a convergence set.
Every countable set in $\PP^{n-1}$ is a convergence set.
A subset of $\PP^1$ is a convergence set if and only if it is an $F_\sigma$ polar set.
[Ah]{} L.V. Ahlfors, *Conformal invariants: topics in geometric function theory*, McGraw-Hill, N.Y., 1973.
S.S. Abhyankar, T.T. Moh, A reduction theorem for divergent power series, *J. Reine Angew. Math.*, **241** (1970), 27–33.
J. Deny, Sur les infinis d’un potentiel, [*C. R. Acad. Sci. Paris Sér. I Math.*]{}, [**224**]{} (1947), 524–525.
F. Hartogs, Zur Theorie der analytischen Funktionen mehrerer unabhängiger Veränderlichen, [*Math. Ann.*]{}, [**62**]{} (1906), 1–88.
P. Lelong, On a problem of M.A. Zorn, *Proc. Amer. Math. Soc.*, **2** (1951), 11–19.
N. Levenberg, and R.E. Molzon, Convergence sets of a formal power series, *Math. Z.*, **197** (1988), 411–420.
N. Levenberg, B.A. Taylor, Comparison of capacities in $\CC^n$, [*Complex analysis (Toulouse, 1983), Lecture Notes in Math., v. 1094*]{}, Springer, Berlin, 1984, 162–172.
R. Pérez-Marco, A note on holomorphic extensions, preprint, 2000.
J. Ribon, Holomorphic extensions of formal objects, [*Ann. Scuola Norm. Sup. Pisa Cl. Sci. (5)*]{}, [**3**]{} (2004), 657–680.
N. Sibony, Sur la frontière de Shilov des domaines de $\CC^n$, [*Math. Ann.*]{}, [**273**]{} (1985), 115–121.
N. Sibony, Some results on global analytic sets, [*Seminaire P. Lelong, H. Skoda Analyse 1978/1979, Springer Lecture Notes 822*]{}, 221–237.
J. Siciak, Extremal plurisubharmonic functions and capacities in $\CC^n$, [*Sophia Kokyuroku Math.*]{}, [**14**]{} (1982), Sophia University, Tokyo.
A. Sathaye, Convergence sets of divergent power series, *J. Reine Angew. Math.*, **283** (1976), 86–98.
V.P. Zaharjuta, Transfinite diameter, Chebyshev constants, and capacity for compacta in $\CC^n$, [*Math. USSR Sbornik*]{}, [**25**]{} (1975), 350–364.
---
abstract: 'Coupling gauge fields to the chiral currents from an effective Lagrangian for pseudoscalar mesons naturally gives rise to a species doubling phenomenon similar to that seen with fermionic fields in lattice gauge theory.'
address: |
Physics Department, Brookhaven National Laboratory, PO Box 5000, Upton, NY 11973-5000, USA\
creutz@wind.phy.bnl.gov,mtytgat@wind.phy.bnl.gov
author:
- 'Michael Creutz and Michel Tytgat [^1]'
title: Species Doubling and Effective Lagrangians
---
Species doubling is deeply entwined with the famous axial anomalies. A lattice regulator removes all infinities; so, anomalies cannot appear without explicit symmetry breaking. As the regulator is removed, a remnant remains determining an overall chiral phase. This phase is the well known strong CP parameter $\theta$ [@theta]. Most lattice schemes also mutilate non-singlet chiral symmetries, although ways around this exist [@shamir].
For the gauge interactions of the electroweak theory, non-perturbative chiral issues remain unresolved. The $W$ bosons couple in a parity violating manner, and rely for consistency on a subtle anomaly cancellation between quarks and leptons. While a flurry of recent work treats fermions with a separate limit [@twocutoff; @overlap2; @lee], it remains unclear how to implement this cancellation in a fully finite and gauge-invariant manner.
We argue that the doubling problem is not unique to the lattice approach, but is a general consequence of chiral anomalies. This presentation is based on our recent letter [@mcmt]. Starting with an effective Lagrangian for the pseudoscalar mesons, we couple gauge fields to the chiral currents. When these fields are chiral, the process naturally introduces additional particles mirroring the original theory. As with lattice doublers, the mirror fields cancel anomalies.
In a chiral Lagrangian approach, the effects of anomalies appear in a term introduced by Wess and Zumino [@wz], and later elucidated by Witten [@witten]. This involves extending the fields into an internal space, only the boundary of which is relevant to the equations of motion. On coupling to a local gauge field, the boundary can acquire additional contributions. We argue that these are most naturally written in terms of doubler fields.
We start with quark fields $\psi^a(x)$ interacting with non-Abelian gluons. We suppress all indices except flavor, represented by the index $a$, and space-time, represented by $x$. From $\psi$ we project right and left handed parts, $\psi^a_R={1\over
2} (1+\gamma_5)\psi^a$ and $\psi^a_L={1\over 2} (1-\gamma_5)\psi^a$. We ignore fermion masses.
This theory with massless quarks is invariant under a global $SU(n_f)\times SU(n_f)$ symmetry, where $n_f$ represents the number of flavors. The quark fields transform as $\psi_L^a\rightarrow \psi_L^b
g_L^{ba}$ and $\psi_R^a\rightarrow \psi_R^b g_R^{ba}$. Here $g_L$ and $g_R$ are elements of $SU(n_f)$.
The chiral symmetry is spontaneously broken by the æther, resulting in $n_f^2-1$ Goldstone bosons and a remaining explicit $SU(n_f)$ flavor symmetry. The composite field $\overline \psi^a_R \psi^b_L$ acquires an expectation value. Chiral symmetry allows choosing a standard æther with, say, $\langle\overline \psi^a_R \psi^b_L\rangle=v\delta^{ab}$. Here the parameter $v$ determining the magnitude of the expectation value requires a renormalization scheme for precise definition. The æther is degenerate, and one could choose to replace $\delta^{ab}$ by an arbitrary element $g^{ab}$ of $SU(n_f)$. The basic idea of the effective Lagrangian is to promote this element into a local field $g(x)$. Slow variations of this field represent the Goldstone bosons. The chiral symmetry is an invariance under $g(x)\rightarrow g_L^\dagger g(x) g_R$.
The effective Lagrangian approach represents an expansion in light particle momenta [@clreview]. The lowest order action is $$S_0={F_\pi^2\over 4}\int d^4x\ {\rm Tr}(\partial_\mu g\partial_\mu g^\dagger).
$$ The constant $F_\pi$ has an experimental value of 93 MeV. In terms of conventional fields, $g=\exp(i\pi\cdot\lambda/F_\pi)$, where the $n_f^2-1$ matrices $\lambda$ generate $SU(n_f)$ and are normalized so that ${\rm Tr}
\lambda^\alpha \lambda^\beta=2\delta^{\alpha\beta}$.
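As a check of the normalization (a standard expansion, sketched here): to quadratic order in the pion fields,

```latex
\partial_\mu g={i\over F_\pi}\,\partial_\mu\pi^\alpha\,\lambda^\alpha+O(\pi^2),
\qquad
S_0={F_\pi^2\over 4}\int d^4x\,{1\over F_\pi^2}\,
\partial_\mu\pi^\alpha\,\partial_\mu\pi^\beta\,
{\rm Tr}\,\lambda^\alpha\lambda^\beta+\cdots
=\int d^4x\,{1\over 2}\,\partial_\mu\pi^\alpha\partial_\mu\pi^\alpha+\cdots,
```

so the fields $\pi^\alpha$ carry canonically normalized kinetic terms.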
From this action the equations of motion are $\partial_\mu
J_{L,\mu}^\alpha=0$, where the “left” current is $$J_{L,\mu}^\alpha= {iF_\pi^2\over 4}{\rm Tr}\lambda^\alpha(\partial_\mu g)g^\dagger
$$ Equivalently, one can work with “right” currents. There is a vast literature on adding higher derivative terms [@clreview].
We are interested in a special higher derivative coupling describing the effects of anomalies. This term cannot be written as an integral of a local expression in $g(x)$, even though the resulting contribution to the equations of motion is fully local [@wz; @witten; @zumino]. Continuing to write the equations of motion in terms of a divergence free current, a possible addition which satisfies the required symmetries is $$\eqalign{
J_{L,\mu}^\alpha&=
{iF_\pi^2\over 4}{\rm Tr}\lambda^\alpha(\partial_\mu g)g^\dagger\cr
&+{in_c\over 48\pi^2} \epsilon_{\mu\nu\rho\sigma}{\rm Tr}\lambda^\alpha
(\partial_\nu g)g^\dagger
(\partial_\rho g)g^\dagger
(\partial_\sigma g)g^\dagger\cr
}
$$
To obtain an action generating the above requires extending $g(x)$ beyond a simple mapping of space-time into the group. We introduce an auxiliary variable $s$ to interpolate between the field $g(x)$ and some fixed group element $g_0$. Thus consider $h(x,s)$ satisfying $h(x,1)=g(x)$ and $h(x,0)=g_0$. This extension is not unique, but the equations of motion are independent of the chosen path. We now write $$\eqalign{
S&={F_\pi^2\over 4}\int d^4x\
{\rm Tr}(\partial_\mu g\partial_\mu g^\dagger)\cr
&+{n_c\over 240 \pi^2}\int d^4x\int_0^1
ds\ \epsilon_{\alpha\beta\gamma\delta\rho}
{\rm Tr} h_\alpha h_\beta h_\gamma h_\delta h_\rho.\cr
}
$$ Here we define $h_\alpha=i(\partial_\alpha h)
h^\dagger$ and regard $s$ as a fifth coordinate.
For equations of motion, consider a small variation of $h(x,s)$. This changes the final integrand by a total divergence, which then integrates to a surface term. Working with either spherical or toroidal boundary conditions in the space-time directions, this surface only involves the boundaries of the $s$ integration. When $s=0$, space-time derivatives acting on the constant matrix $g_0$ vanish. The surface at $s=1$ generates precisely the desired additional term in Eq. (3).
The last term in Eq. (4) represents a piece cut from the $S_5$ sphere appearing in the structure of $SU(n_f)$ for $n_f\ge 3$. The mapping of four dimensional space-time into the group surrounds this volume. Chiral rotations shift this region around, leaving its measure invariant. As emphasized by Witten [@witten], this term is ambiguous. Different extensions into the $s$ coordinate can modify the above five dimensional integral by an integer multiple of $480\pi^3$. To have a well defined quantum theory, the action must be determined up to a multiple of $2\pi$. Thus the quantization of $n_c$ to an integer, the number of “colors.”
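The arithmetic behind this quantization is worth displaying: if two extensions differ by $k$ times $480\pi^3$ in the five dimensional integral, the action shifts by

```latex
\Delta S={n_c\over 240\pi^{2}}\times 480\pi^{3}\,k=2\pi\,n_c\,k,
\qquad k\in{\bf Z},
```

which leaves $e^{iS}$ unchanged precisely when $n_c$ is an integer.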
Crucial here is the irrelevance of the starting group element $g_0$ at the lower end of the $s$ integration. Our main point is the difficulty of maintaining this condition when we make the chiral symmetry local. As usual, this requires the introduction of gauge fields. Under the transformation $g(x)\rightarrow g_L^\dagger(x) g(x)
g_R(x)$, derivatives of $g$ transform as $$\partial_\mu g\longrightarrow
g_L^\dagger\left (
\partial_\mu g-\partial_\mu g_L g_L^\dagger g+ g\partial_\mu g_R g_R^\dagger
\right) g_R
$$ To compensate, we introduce left and right gauge fields transforming as $$\matrix{
A_{L,\mu} & \longrightarrow
& g_L^\dagger A_{L,\mu} g_L + i g_L^\dagger\partial_\mu g_L \cr
A_{R,\mu} & \longrightarrow
& g_R^\dagger A_{R,\mu} g_R + i g_R^\dagger\partial_\mu g_R \cr
}
$$ Then the combination $$D_\mu g = \partial_\mu g-iA_{L,\mu} g + ig A_{R,\mu}
$$ transforms nicely: $D_\mu g\rightarrow g_L^\dagger D_\mu g g_R$. Making the generalized minimal replacement $\partial_\mu g\rightarrow
D_\mu g$ in $S_0$, we find a gauge invariant action.
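The covariance $D_\mu g\rightarrow g_L^\dagger D_\mu g\, g_R$ is a purely algebraic consequence of the transformation rules and can be checked symbolically. The following sympy sketch does so in one dimension with illustrative unitary (rotation-valued) gauge functions and arbitrary sample matrices — all choices ours, not taken from the text:

```python
import sympy as sp

x = sp.symbols('x', real=True)

def rot(theta):
    # real orthogonal (hence unitary) 2x2 matrix playing the role of g_L, g_R
    return sp.Matrix([[sp.cos(theta), -sp.sin(theta)],
                      [sp.sin(theta),  sp.cos(theta)]])

def dag(m):
    return m.T.conjugate()

gL, gR = rot(x), rot(2*x)               # sample local chiral transformations
g  = sp.Matrix([[1, x], [x**2, 2]])     # arbitrary matrix-valued field
AL = sp.Matrix([[x, 1], [0, x**2]])     # sample gauge field components
AR = sp.Matrix([[2*x, 0], [x, 1]])

def D(gf, AL_, AR_):
    # covariant derivative D g = dg - i A_L g + i g A_R
    return sp.diff(gf, x) - sp.I*AL_*gf + sp.I*gf*AR_

gp  = dag(gL)*g*gR                                   # transformed field
ALp = dag(gL)*AL*gL + sp.I*dag(gL)*sp.diff(gL, x)    # transformed gauge fields
ARp = dag(gR)*AR*gR + sp.I*dag(gR)*sp.diff(gR, x)

# residual of D'g' - g_L^dag (Dg) g_R; vanishes identically
resid = (D(gp, ALp, ARp) - dag(gL)*D(g, AL, AR)*gR).applyfunc(sp.simplify)
```

The cancellation of the inhomogeneous $\partial g_{L,R}$ pieces uses only unitarity of $g_L$ and $g_R$.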
A problem arises when we go on to the Wess-Zumino term. We require a prescription for the gauge transformation on the interpolated group element $h(x,s)$. Here we note a striking analogy with the domain wall approach to chiral fermions first promoted by Kaplan [@kaplan]. There an extra dimension was also introduced, with the fermions being surface modes bound to a four dimensional interface. The usual approach to adding gauge fields involves, first, not giving the gauge fields a dependence on the extra coordinate, and, second, forcing the component of the gauge field pointing in the extra dimension to vanish [@mcih; @gjk]. In terms of a five dimensional gauge field, we take $A_\mu(x,s)=A_\mu(x)$ and $A_s=0$ for both the left and right handed parts. Relaxing either of these would introduce unwanted degrees of freedom. The natural extension of the gauge transformation is to take $h(x,s)\rightarrow g_L^\dagger(x) h(x,s)
g_R(x)$ with $g_{L,R}$ independent of $s$.
We now replace the derivatives in the Wess-Zumino term with covariant derivatives. This alone does not give equations of motion independent of the interpolation into the extra dimension. However, adding terms linear and quadratic in the gauge field strengths allows construction of a five dimensional Wess-Zumino term for which variations are again a total derivative. This gives $$S_{WZ}=
{n_c\over 240 \pi^2 }\int d^4x \int_0^1 ds\ \Gamma$$ where $$\eqalign{
\Gamma&=\Gamma_0+{5i\over2}(i\Gamma_L+i\Gamma_R\cr
&-\Gamma_{LL}-\Gamma_{RR}-\alpha\Gamma_{LR}-(1-\alpha)\Gamma_{RL}),\cr
}$$ $\alpha$ is a free parameter, and $$\eqalign{
&\Gamma_0=
\epsilon_{\mu\nu\rho\lambda\sigma}
{{\rm Tr}}D_\mu h h^\dagger D_\nu h h^\dagger D_\rho h h^\dagger
D_\lambda h h^\dagger D_\sigma h h^\dagger\cr
&\Gamma_L=\epsilon_{\mu\nu\rho\lambda\sigma}
{{\rm Tr}}D_\mu h h^\dagger D_\nu h h^\dagger D_\rho h h^\dagger
F_{L,\lambda\sigma} \cr
&\Gamma_R=\epsilon_{\mu\nu\rho\lambda\sigma}
{{\rm Tr}}D_\mu h h^\dagger D_\nu h h^\dagger D_\rho h
F_{R,\lambda\sigma} h^\dagger \cr
&\Gamma_{LL}=
\epsilon_{\mu\nu\rho\lambda\sigma}
{{\rm Tr}}D_\mu h h^\dagger F_{L,\nu\rho} F_{L,\lambda\sigma} \cr
&\Gamma_{RR}=
\epsilon_{\mu\nu\rho\lambda\sigma}
{{\rm Tr}}D_\mu h F_{R,\nu\rho} F_{R,\lambda\sigma} h^\dagger \cr
&\Gamma_{RL}=
\epsilon_{\mu\nu\rho\lambda\sigma}
{{\rm Tr}}D_\mu h F_{R,\nu\rho} h^\dagger F_{L,\lambda\sigma} \cr
&\Gamma_{LR}=
\epsilon_{\mu\nu\rho\lambda\sigma}
{{\rm Tr}}D_\mu h h^\dagger F_{L,\nu\rho} h F_{R,\lambda\sigma} h^\dagger.\cr
}$$ The covariantly transforming field strengths are $$\eqalign
{
&F_{L,\mu\nu}=\partial_\mu A_{L,\nu}-\partial_\nu A_{L,\mu}
-i[A_{L,\mu},A_{L,\nu}]\cr
&F_{R,\mu\nu}=\partial_\mu A_{R,\nu}-\partial_\nu A_{R,\mu}
-i[A_{R,\mu},A_{R,\nu}].\cr
}$$ For the photon, parity invariance fixes $\alpha=1/2$. The last four terms contain the process $\pi^0\rightarrow 2\gamma$.
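That these field strengths transform covariantly, $F\rightarrow g^\dagger F g$, under $A_\mu \rightarrow g^\dagger A_\mu g + i g^\dagger\partial_\mu g$ can likewise be verified symbolically. A sympy sketch in two coordinates, again with illustrative sample matrices of our own choosing:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def rot(theta):
    # real orthogonal (hence unitary) 2x2 gauge function
    return sp.Matrix([[sp.cos(theta), -sp.sin(theta)],
                      [sp.sin(theta),  sp.cos(theta)]])

def dag(m):
    return m.T.conjugate()

gL = rot(x + 2*y)                       # sample unitary gauge transformation
Ax = sp.Matrix([[x, y], [1, x*y]])      # sample components of A_L
Ay = sp.Matrix([[y, 0], [x, 1]])

def F(Ax_, Ay_):
    # F_{xy} = d_x A_y - d_y A_x - i [A_x, A_y]
    return sp.diff(Ay_, x) - sp.diff(Ax_, y) - sp.I*(Ax_*Ay_ - Ay_*Ax_)

Axp = dag(gL)*Ax*gL + sp.I*dag(gL)*sp.diff(gL, x)
Ayp = dag(gL)*Ay*gL + sp.I*dag(gL)*sp.diff(gL, y)

# residual of F' - g^dag F g; vanishes identically
resid = (F(Axp, Ayp) - dag(gL)*F(Ax, Ay)*gL).applyfunc(sp.simplify)
```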
This procedure works well for a vector-like gauge field, where we take $g_L(x)=g_R(x)$ and $A_L=A_R$. We could, for example, take $g_0$ to be the identity, and then the gauge transformation cancels out at $s=0$. However, difficulties arise on coupling a gauge field to an axial current. Then $g_0\rightarrow g_L^\dagger(x) g_0 g_R(x)$ in general will no longer be a constant group element. After a gauge transformation, variations of the action give new non-vanishing contributions to the equations of motion from the lower end of the $s$ integration.
The simplest solution makes the $s=0$ fields dynamical. Thus we replace the field $g(x)$ with two fields $g_0(x)$ and $g_1(x)$. The interpolating field now has the properties $h(x,0)=g_0(x)$ and $h(x,1)=g_1(x)$. The action becomes $${F_\pi^2\over 4}
\int d^4x\ {\rm Tr}(D_\mu g_0D_\mu g_0^\dagger+D_\mu g_1D_\mu g_1^\dagger)
+ S_{WZ}.
$$
While now gauge invariant, the theory differs from the starting model through a doubling of meson species. The extra particles are associated with the second set of group valued fields $g_0(x)$. The Wess-Zumino term of the new fields has the opposite sign since it comes from the lower end of the $s$ integration. Thus, these “mirror” particles have reflected chiral properties and implement a cancellation of all anomalies. In essence, we have circumvented the subtleties in gauging the model. The value of $F_\pi$ need not be the same for $g_0$ and $g_1$, so their strong interactions might differ in scale. Nevertheless, since they couple with equal magnitude to the gauge bosons, the new fields cannot be ignored.
For vector currents, we can remove the doublers using a diagonal mass term at $s=0$. For example, with a term $M{\rm Tr} g_0(x)$ added to the Lagrangian density, $M$ could be arbitrarily large, forcing $g_0$ towards the identity.
The doublers arise in analogy to the problems appearing in the surface mode approach to chiral lattice fermions [@kaplan; @mcih; @gjk; @goltermanshamir]. In both cases, an extension to an extra dimension is introduced. Difficulties arise from the appearance of an extra interface. This new surface couples with equal strength to the gauge fields.
If we let $g_{L,R}$ depend on $s$, we expect problems similar to those seen with domain wall fermions. When the gauge fields vary in the extra dimension, four dimensional gauge invariance is lost. Symmetry can be restored via a Higgs field, but this introduces the possibility of unwanted degrees of freedom in the physical spectrum. Ref.[@gjk] explores the possibility of sharply truncating the gauge field at an intermediate value of the extra coordinate. This gives rise to new low energy bound states acting much like the undesired doubler states.
A Higgs field does permit different masses for the extra species. In particular, the matter couplings to the Higgs field can depend on $s$. Qualitative arguments suggest that triviality effects on such couplings limit their strength, precluding masses for the extra species beyond a typical weak interaction scale. Presumably such constraints are strongest when the anomalies in the undoubled sector are not canceled. With domain-wall fermions, taking a Higgs-fermion coupling to infinity on one wall introduces a plethora of new low energy bound states [@goltermanshamir].
These problems emphasize the subtle way anomalies cancel between quarks and leptons. If the contributions of the leptons are ignored, no non-perturbative approach can be expected to accommodate gauged weak currents. When the required cancellations occur between different fermion representations, perturbation theory appears to be consistent, while all known non-perturbative approaches remain awkward.
There are several possible solutions. Mirror particles might exist, perhaps with masses at the weak scale [@montvay]. Such might even be useful in the spontaneous breaking of the electroweak theory [@technicolor]. A related alternative involves spontaneous breaking of an underlying vector-like theory containing additional heavy bosons coupling with opposite parity fermions [@patisalam]. All of these involve a profusion of new particles awaiting discovery. A speculative solution would twist the extra dimension so that the doubling particles could be among those already observed. This requires the interpolation in the extra dimension to mix the quarks and the leptons, all of which are involved in the anomaly cancellations.
[9]{}
V. Furman and Y. Shamir, Nucl. Phys. B439, 54 (1995).
This is an old topic. For a recent discussion see M. Creutz, Phys. Rev. D52, 2951 (1995).
M. Creutz and M. Tytgat, Phys. Rev. Lett. 76, 4671 (1995).
A. (1995); G. ‘t Hooft, Phys. Lett. B349, 491 (1995); P. Hernandez and R. Sundrum, Nucl. Phys. B455, 287 (1995); S. Hsu, preprint YCTP-P5-95 (1995); G. Bodwin, preprint ANL-HEP-PR-95-59-REV (1995).
R. Narayanan and H. Neuberger, Nucl. Phys. B443, 305 (1995); S. Randjbar-Daemi, J. Strathdee, preprint IC-95-399 (1995); S. Frolov and A. Slavnov, Nucl. Phys. B411, 647 (1994).
R. Friedberg, T.D. Lee, and Y. Pang, J. Math. Phys. 35, 5600 (1994).
J. Wess and B. Zumino, Phys. Lett. 37B, 95 (1971).
E. Witten, Nucl. Phys. B223, 422 (1983); Nucl. Phys. B223, 433 (1983); Commun. Math. Phys. 92, 455 (1984).
For recent reviews, see H. Leutwyler, preprint hep-ph/9406283 (1995); B. Holstein, preprint hep-ph/9510344 (1995).
B. Zumino, chapter in “Current Algebra”, S.B. Treiman, R. Jackiw, B. Zumino, and E. Witten (Princeton University Press, 1985), p. 361.
D. Kaplan, Phys. Lett. B288, 342 (1992).
M. Creutz and I. Horvath, Phys. Rev. D50, 2297 (1994); R. Narayanan and H. Neuberger, Phys. Lett. B302, 62 (1993).
M. Golterman, K. Jansen, D. Kaplan, Phys. Lett. B301, 219 (1993).
M. Golterman and Y. Shamir, Phys. Rev. D51, 3026 (1995).
I. Montvay, Nucl. Phys. B (Proc. Suppl.) 30, 621 (1993); Phys. Lett. 199B, 89 (1987).
L. Susskind, Phys. Rev. D20, 2619 (1979); S. Weinberg, Phys. Rev. D19, 1277 (1979); E. Farhi, Phys. Rept. 74, 277 (1981).
J. Pati and A. Salam, Phys. Rev. D8, 1240 (1973).
[^1]: Poster presented by M. Creutz. This manuscript has been authored under contract number DE-AC02-76CH00016 with the U.S. Department of Energy. Accordingly, the U.S. Government retains a non-exclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for U.S. Government purposes.
---
abstract: 'We construct a family of instanton metrics obtained from new exact singular solutions for minimal surfaces, by noticing the correspondence between minimal surfaces in three dimensional Euclidean space and gravitational instantons possessing two Killing vectors. By Calabi’s correspondence, we also derive a family of explicit maximal surface solutions of the zero mean curvature equation for spacelike surfaces.'
address:
- 'School of Mathematics, Xiamen University, Xiamen 361000, People’s Republic of China'
- 'Laboratoire Jacques-Louis Lions, Université Pierre et Marie Curie (Sorbonne Université), 4, Place Jussieu, 75252 Paris, France.'
author:
- Weiping Yan
date: Dec 2017
title: Explicit singular minimal surface solutions for gravitational instantons
---
[^1]
Introduction
============
Gravitational instantons can be seen as localized “excitations” in imaginary time that mediate the transition between two distinct gravitational vacua [@Nu2; @Ati; @Hit; @Eg; @Chen]. They play an important role in the Euclidean path integral approach to quantum gravity [@Haw; @Gi1; @Gi2; @Gi3]. On one hand, instanton solutions of the Einstein field equations with Euclidean signature and anti-self-dual curvature can be obtained from minimal surfaces in Euclidean $E^3(x,y,z)$ [@Nu1]. On the other hand, by J$\ddot{o}$rgens’ correspondence [@J], any two dimensional minimal surface gives a solution to the real elliptic Monge-Amp$\grave{e}$re equation on a real manifold of two dimensions. These solutions then lead to gravitational instantons described by hyper-K$\ddot{a}$hler metrics, which were studied extensively in the framework of supergravity and M-theory as well as Seiberg-Witten theory [@Gi4; @SW].
The minimal surface equation is the equation of minimal graphs over a domain of the $xy$-plane in $E^3$, which takes the form $$\label{ENNN1-1}
\partial_x\left(\frac{\partial_x u}{\sqrt{1+|\partial_x u|^2+|\partial_yu|^2}}\right)+\partial_y\left(\frac{\partial_y u}{\sqrt{1+|\partial_x u|^2+|\partial_yu|^2}}\right)=0.$$ It is equivalent to the classical form $$u_{xx}(1+u_y^2)-2u_xu_yu_{xy}+u_{yy}(1+u_x^2)=0,$$ which exhibits the following scaling invariance for any $\lambda>0$, $$u(x,y)\mapsto u_{\lambda}(x,y)=\lambda^{-1} u(\lambda x,\lambda y).$$ A class of gravitational instantons may be represented by the metric $$\label{E1-3}
ds^2=\frac{1+u_t^2}{\sqrt{1+u_t^2+u_x^2}}(dt^2+dy^2)+\frac{1+u_x^2}{\sqrt{1+u_t^2+u_x^2}}(dx^2+dz^2)+\frac{2u_tu_x}{\sqrt{1+u_t^2+u_x^2}}(dtdx+dydz),$$ for which the Einstein field equations reduce to $$\label{E1-4}
u_{tt}(1+u_x^2)-2u_tu_xu_{tx}+u_{xx}(1+u_t^2)=0.$$
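As a quick sanity check of equation (\[ENNN1-1\]) and its scaling invariance, one can verify symbolically that a classical minimal graph — here Scherk's surface $u=\ln(\cos y/\cos x)$, an example of our choosing, not part of the construction above — satisfies the equation, and that its rescaling $u_\lambda$ does too:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', positive=True)

def minimal_surface_lhs(u):
    # left-hand side of the classical minimal surface equation
    ux, uy = sp.diff(u, x), sp.diff(u, y)
    return (sp.diff(u, x, 2)*(1 + uy**2)
            - 2*ux*uy*sp.diff(u, x, y)
            + sp.diff(u, y, 2)*(1 + ux**2))

# Scherk's classical minimal graph
scherk = sp.log(sp.cos(y) / sp.cos(x))
res1 = sp.simplify(minimal_surface_lhs(scherk))

# its rescaling u_lambda(x,y) = u(lambda x, lambda y)/lambda is again a solution
scaled = scherk.subs({x: lam*x, y: lam*y}) / lam
res2 = sp.simplify(minimal_surface_lhs(scaled))
```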
The equation (\[E1-4\]) has a family of exact singular solutions $$u(t,x)=k\ln|\frac{x}{t}+(1+\frac{x^2}{t^2})^{\frac{1}{2}}|,~~\forall k\in \mathbb{R}\setminus \{0\}.$$ Moreover, these solutions give rise to a class of gravitational instantons represented by the metric $$ds^2=a(t,x)(dt^2+dy^2)+b(t,x)(dx^2+dz^2)+c(t,x)(dtdx+dydz),$$ whose coefficients $a$, $b$ and $c$ are written out explicitly in Section 2.
Obviously, equation (\[E1-4\]) is the minimal surface equation (\[ENNN1-1\]) with $(t,x)$ in place of $(x,y)$. It is well known that Nutku [@Nu1] gave a family of singular solutions of the minimal surface equation (\[E1-4\]), namely $$u(t,x)=k\arctan\frac{x}{t},~~k\in\mathbb{R}\setminus \{0\}.$$
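Nutku's solution can be verified directly; the following sympy sketch substitutes $u=k\arctan(x/t)$ into (\[E1-4\]) and confirms that the left-hand side vanishes identically:

```python
import sympy as sp

t, x, k = sp.symbols('t x k', positive=True)

u = k * sp.atan(x / t)                 # Nutku's singular solution
ut, ux = sp.diff(u, t), sp.diff(u, x)
lhs = (sp.diff(u, t, 2)*(1 + ux**2)
       - 2*ut*ux*sp.diff(u, t, x)
       + sp.diff(u, x, 2)*(1 + ut**2))
residual = sp.simplify(lhs)            # -> 0
```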
In this note, we exhibit a new family of singular solutions of the minimal surface equation (\[E1-4\]), which gives a new family of gravitational instantons.
Exact gravitational instantons
==============================
Introduce the similarity coordinates $$\tau=-\log t,~~~~\rho=\frac{x}{t},$$ and write $$u(t,x)=v(-\log t,\frac{x}{t}).$$ Noticing that $$\begin{aligned}
&&u_t(t,x)=-e^{\tau}(v_{\tau}+\rho v_{\rho}),\\
&&u_{tt}(t,x)=e^{2\tau}(v_{\tau\tau}+v_{\tau}+2\rho v_{\rho}+2\rho v_{\tau\rho}+\rho^2v_{\rho\rho}),\\
&&u_x(t,x)=e^{\tau}v_{\rho},\\
&&u_{xx}(t,x)=e^{2\tau}v_{\rho\rho},\\
&&u_{tx}(t,x)=-e^{2\tau}(v_{\tau\rho}+v_{\rho}+\rho v_{\rho\rho})\end{aligned}$$ we see that equation (\[E1-4\]) is transformed into the quasilinear elliptic equation $$\begin{aligned}
\label{YNN1-1}
v_{\tau\tau}+(1+\rho^2)v_{\rho\rho}&+&v_{\tau}+2\rho v_{\rho}+2\rho v_{\tau\rho}+e^{2\tau}v_{\rho}^2(v_{\tau\tau}+v_{\tau}+2\rho v_{\rho}+2\rho v_{\tau\rho}+\rho^2v_{\rho\rho})\nonumber\\
&&+e^{2\tau}(v_{\tau}+\rho v_{\rho})^2v_{\rho\rho}-2e^{2\tau}v_{\rho}(v_{\tau}+\rho v_{\rho})(v_{\rho}+\rho v_{\rho\rho}+v_{\tau\rho})=0.\end{aligned}$$ For solutions depending only on $\rho$, the terms carrying the factor $e^{2\tau}$ in (\[YNN1-1\]) cancel identically, and the equation reduces to the ODE $$(\rho^2+1)v_{\rho\rho}+2\rho v_{\rho}=0.$$ Direct computation shows that it has a family of solutions $$v(\rho)=k\ln|\rho+(1+\rho^2)^{\frac{1}{2}}|,$$ where $k$ is an arbitrary constant in $\mathbb{R}\setminus \{0\}$.
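The cancellation of the nonlinear terms for $\tau$-independent profiles can itself be checked symbolically: substituting $u(t,x)=v(x/t)$ into (\[E1-4\]) leaves exactly $t^{-2}\big[(\rho^2+1)v_{\rho\rho}+2\rho v_{\rho}\big]$. A sympy sketch with an arbitrary nonlinear test profile (our choice):

```python
import sympy as sp

t, x, r = sp.symbols('t x r', positive=True)

v = lambda s: s**3 + sp.sin(s)      # arbitrary nonlinear test profile
u = v(x / t)                        # tau-independent self-similar ansatz

ut, ux = sp.diff(u, t), sp.diff(u, x)
E = (sp.diff(u, t, 2)*(1 + ux**2) - 2*ut*ux*sp.diff(u, t, x)
     + sp.diff(u, x, 2)*(1 + ut**2))

# claimed reduction: E = ((1+rho^2) v'' + 2 rho v') / t^2 with rho = x/t
ode = ((1 + r**2)*sp.diff(v(r), r, 2) + 2*r*sp.diff(v(r), r)).subs(r, x/t)
residual = sp.simplify(E - ode / t**2)   # -> 0
```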
The minimal surface equation (\[E1-4\]) thus has a family of explicit self-similar solutions $$u(t,x)=k\ln|\frac{x}{t}+(1+\frac{x^2}{t^2})^{\frac{1}{2}}|,~~\forall k\in \mathbb{R}\setminus \{0\}.$$ It is easy to see that $$\partial_xu|_{x=0}=\frac{k}{t},$$ which diverges as $t\rightarrow 0^{+}$. So by (\[E1-3\]), a class of gravitational instantons can be represented by the metric $$ds^2=a(t,x)(dt^2+dy^2)+b(t,x)(dx^2+dz^2)+c(t,x)(dtdx+dydz),$$ where $$\begin{aligned}
a(t,x)&=&\frac{1+u_t^2}{\sqrt{1+u_t^2+u_x^2}}\\
&=&[1+\frac{k^2x^2((t^2+x^2)^{\frac{1}{2}}+x)^2}{t^2(t^2+x^2)((t^2+x^2)^{\frac{1}{2}}+x)^2}][1+\frac{k^2x^2((t^2+x^2)^{\frac{1}{2}}+x)^2}{t^2(t^2+x^2)((t^2+x^2)^{\frac{1}{2}}+x)^2}+\frac{k^2((t^2+x^2)^{\frac{1}{2}}+x)^2}{(x(t^2+x^2)^{\frac{1}{2}}+t^2)^2}]^{-\frac{1}{2}},\\
b(t,x)&=&\frac{1+u_x^2}{\sqrt{1+u_t^2+u_x^2}}\\
&=&[1+\frac{k^2((t^2+x^2)^{\frac{1}{2}}+x)^2}{(x(t^2+x^2)^{\frac{1}{2}}+t^2)^2}][1+\frac{k^2x^2((t^2+x^2)^{\frac{1}{2}}+x)^2}{t^2(t^2+x^2)((t^2+x^2)^{\frac{1}{2}}+x)^2}+\frac{k^2((t^2+x^2)^{\frac{1}{2}}+x)^2}{(x(t^2+x^2)^{\frac{1}{2}}+t^2)^2}]^{-\frac{1}{2}},\\
c(t,x)&=&\frac{2u_tu_x}{\sqrt{1+u_t^2+u_x^2}}\\
&=&-2k^2(x(t^2+x^2)^{\frac{1}{2}}+x^2)((t^2+x^2)^{\frac{1}{2}}+x)(tx(t^2+x^2)^{\frac{1}{2}}+t^3+tx^2)^{-1}(x(t^2+x^2)^{\frac{1}{2}}+t^2)^{-1}\\
&&\times[1+\frac{k^2x^2((t^2+x^2)^{\frac{1}{2}}+x)^2}{t^2(t^2+x^2)((t^2+x^2)^{\frac{1}{2}}+x)^2}+\frac{k^2((t^2+x^2)^{\frac{1}{2}}+x)^2}{(x(t^2+x^2)^{\frac{1}{2}}+t^2)^2}]^{-\frac{1}{2}}.\end{aligned}$$
Discussion
==========
In Section 2 we derived a singular metric whose singular point is at $t=0$. In fact, let $T$ be a positive parameter. If we introduce the similarity coordinates $$\tau=-\log(T-t),~~~~\rho=\frac{x}{T-t},$$ then the minimal surface equation (\[E1-4\]) has a family of explicit self-similar singular solutions $$u(t,x)=k\ln|\frac{x}{T-t}+(1+\frac{x^2}{(T-t)^2})^{\frac{1}{2}}|,~~\forall k\in \mathbb{R}\setminus \{0\}.$$ It is easy to see that $$\partial_xu|_{x=0}=\frac{k}{T-t},$$ which diverges as $t\rightarrow T^{-}$. Hence a class of gravitational instantons can be represented by the metric $$ds^2=a(T-t,x)(dt^2+dy^2)+b(T-t,x)(dx^2+dz^2)+c(T-t,x)(dtdx+dydz).$$
On the other hand, using Calabi’s correspondence, we know that the zero mean curvature equation for spacelike surfaces, $$u_{tt}(1-u_x^2)+2u_tu_xu_{tx}+u_{xx}(1-u_t^2)=0,$$ has the same self-similar solutions as the minimal surface equation (\[E1-4\]).
[xx]{}
A.N. Aliev, J. Kalayci and Y. Nutku, Phys. Rev. D. 56, 1332 (1997)
Y. Nutku, Phys. Rev. Lett. 77, 4702 (1996)
M. F. Atiyah, N. Hitchin, and I. M. Singer, Proc. R. Soc. London A362, 425 (1978)
N. Hitchin, J. Diff. Geom. 9, 435 (1975)
T. Eguchi, P. H. Gilkey, and A. J. Hanson, Phys. Rep. 66, 213 (1980)
Y.Chen, E. Teo, Physics Letters B 703, 359 (2011)
S. W. Hawking, Phys. Lett. 60, 81 (1977)
G. W. Gibbons and S. W. Hawking, Phys. Lett. 78B, 430 (1978)
G. W. Gibbons and S. W. Hawking, Commun. Math. Phys. 66, 291 (1979)
G. W. Gibbons and C. N. Pope, Commun. Math. Phys. 66, 267 (1979)
K. J$\ddot{o}$rgens, Math. Ann. 127, 130 (1954)
G.W. Gibbons, Gravitation and Relativity: At the turn of the Millennium Proceedings of the GR-15 Conference, eds. Dadhich N and Narlikar J. 1997 India
N. Seiberg and E. Witten, Nucl. Phys. B 426, 19 (1994); ibid. B 431, 484 (1994)
[^1]: The author is supported by NSFC No 11771359.
---
abstract: 'In this paper I present an elementary construction to prove that any proper metric space can arise as the asymptotic cone of another proper metric space. Furthermore I answer a question of Druţu and Sapir concerning slow ultrafilters.'
author:
- |
Lars Scheele[^1]\
[*Universität Münster, Einsteinstr. 60, 48149 Münster, Germany*]{}
date: 'October 8, 2010'
title: Slow ultrafilters and asymptotic cones of proper metric spaces
---
Introduction
============
Given a metric space $X$, the asymptotic cone of $X$ is another metric space meant to capture the “large-scale geometry” of $X$. This construction was introduced by Gromov and later modified by van den Dries and Wilkie, who used non-standard methods to ensure that the asymptotic cone exists for every metric space. However, one of the drawbacks is that it is a priori not clear to what extent the cone of $X$ depends on the additional data one has to choose, namely a non-principal ultrafilter and a sequence of scaling factors.\
In this short paper I would like to briefly recall some basic definitions about ultrafilters and ultraproducts in Section 2 and then define asymptotic cones in Section 3. The question by Gromov whether there is an example of a finitely generated group with two different (i.e. non-homeomorphic) asymptotic cones has been answered positively by Thomas and Velickovic. This example will also be discussed in that section.\
In Section 4 I then show how much freedom one has in the construction of the cone. One of the two main results (Theorem \[decone\]) states that any proper metric space can be realized as an asymptotic cone of another metric space, which again will be proper, so that the process can be iterated. In a very recent preprint ([@S]) Sisto independently obtains the same result using a very different construction and non-standard methods. He also proves that any separable metric space which is an asymptotic cone has to be proper.\
The known examples of spaces with different cones all depend on the choice of very fast growing sequences. Druţu and Sapir suggested a way to avoid these fast sequences and asked if there are still examples of groups with different cones with respect to these slow ultrafilters. The second main result of this paper (Theorem \[Drutuanswer\]) answers this question by showing that any cone that can be realized with a fast growing sequence can also be obtained using a slow ultrafilter.\
All of the results in this paper will appear in my doctoral thesis. The author is indebted to his thesis advisor Katrin Tent for helpful questions and even more helpful answers and discussions.
Ultrafilters and ultraproducts
==============================
Let $I$ be a set. A filter $\mu$ on $I$ is a nonempty collection of subsets of $I$, such that for all subsets $A,B \subseteq I$ we have
- $\emptyset \notin \mu$.
- $A \in \mu, A \subseteq B \Rightarrow B \in \mu$.
- $A,B \in \mu \Rightarrow A \cap B \in \mu$.
The set of all filters on $I$ can be partially ordered by inclusion. It is easy to see that totally ordered subsets have upper bounds and therefore maximal filters exist by Zorn’s lemma. Those are called ultrafilters. They can be characterized as follows: A filter $\mu$ is an ultrafilter if and only if
- For all $A \subseteq I$ either $A \in \mu$ or $I \backslash A \in \mu$.
An ultrafilter on $I$ can also be regarded as a finitely additive probability measure on $I$, which only takes the values 0 and 1. We say that some property of elements of $I$ holds $\mu$-almost everywhere ($\mu$-a.e.) if the set where it holds lies in $\mu$.
Let $I$ be a set and $i \in I$ a point. Then the collection $$\mu := \{ A \subseteq I : i \in A \}$$ defines an ultrafilter on $I$. Such an ultrafilter is called principal.
Note that for finite sets $I$ each ultrafilter is of this form. Non-principal ultrafilters on $I$ exist if and only if $I$ is infinite: Take the collection of all cofinite sets in an infinite $I$. This is a filter and therefore contained in an ultrafilter, which is non-principal since it contains no finite sets.
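For illustration, the finite case can be verified by brute force: the following sketch enumerates every collection of subsets of a three-element set, tests the filter axioms and the ultrafilter condition, and confirms that only the three principal ultrafilters survive.

```python
from itertools import chain, combinations

I = (0, 1, 2)
subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(I, r) for r in range(len(I) + 1))]
full = frozenset(I)

def is_filter(mu):
    # nonempty, no empty set, upward closed, closed under intersection
    if not mu or frozenset() in mu:
        return False
    upward = all(B in mu for A in mu for B in subsets if A <= B)
    inter = all(A & B in mu for A in mu for B in mu)
    return upward and inter

def is_ultra(mu):
    # filter containing A or its complement, for every A
    return is_filter(mu) and all(A in mu or full - A in mu for A in subsets)

# brute force over all collections of subsets of I
ultrafilters = [set(c) for r in range(1, len(subsets) + 1)
                for c in combinations(subsets, r) if is_ultra(set(c))]

principal = [{A for A in subsets if i in A} for i in I]
```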
Let $X$ and $I$ be sets and $\mu$ a non-principal ultrafilter on $I$. The ultraproduct of $X$ with respect to $\mu$ and $I$ is defined as $${\!\! ~^{*}}X := \prod_\mu X := \prod_I X / \sim$$ where $\sim$ is the equivalence relation on the product given by $$(x_i) \sim (y_i) : \iff x_i = y_i \; \mu\mbox{-a.e.} \iff \{ i \in I : x_i = y_i \} \in \mu \quad \mbox{for } (x_i),(y_i) \in \prod_I X.$$ An equivalence class modulo $\sim$ will be denoted by $[x_i]$.
The important thing to note here is that if $X$ carries additional structure, this can be carried over to the ultraproduct ${\!\! ~^{*}}X$. Łoś’s theorem (cf. [@BS]) states that any first-order sentence true in $X$ can be transferred to ${\!\! ~^{*}}X$. As an example consider the field of hyperreal numbers.
Fix a non-principal ultrafilter $\mu$ on ${\mathbbm{N}}$ and consider $${\!\! ~^{*}}{\mathbbm{R}}:= \prod_\mu {\mathbbm{R}}.$$ These are just sequences $(x_n)$ of real numbers, where two sequences are identified if they agree $\mu$-a.e. Then ${\!\! ~^{*}}{\mathbbm{R}}$ carries the structure of an ordered field. It is clear that the set of sequences forms a commutative ring if addition and multiplication are defined elementwise. After the identification $\sim$ one really obtains a field: For any sequence $(x_n)$ of real numbers either the set $\{ n \in {\mathbbm{N}}: x_n = 0 \}$ is in $\mu$ or its complement. In the first case, $[x_n] = 0$ in ${\!\! ~^{*}}{\mathbbm{R}}$ and in the second one can define $$y_n := \left\{ \begin{array}{ll} x_n^{-1} \quad & \mbox{if } x_n \not= 0 \\ 0 \quad & \mbox{if } x_n = 0\end{array}\right.$$ Then $(x_n \cdot y_n)$ is $\mu$-a.e. equal to 1, so $[x_n]^{-1} = [y_n]$.\
The order $<$ can also be transferred to make ${\!\! ~^{*}}{\mathbbm{R}}$ into an ordered field, which is real closed but not archimedean.
The field ${\mathbbm{R}}$ can be embedded into ${\!\! ~^{*}}{\mathbbm{R}}$ by taking constant sequences. An element $x \in {\!\! ~^{*}}{\mathbbm{R}}$ is called finite if there is some constant $C > 0$ such that $|x| < C$, otherwise it is called infinite. Further $x$ is called infinitesimal if $x \not= 0$ but $|x| < {\varepsilon}$ for all ${\varepsilon}> 0, {\varepsilon}\in {\mathbbm{R}}$.\
The set of all finite elements of ${\!\! ~^{*}}{\mathbbm{R}}$ forms a local ring with the set of all infinitesimal elements as maximal ideal. The quotient is isomorphic to ${\mathbbm{R}}$ and the projection map $\operatorname{st}$ is called the “standard part”.\
Another way of looking at this is the following. Any finite element $x = [x_n] \in {\!\! ~^{*}}{\mathbbm{R}}$ corresponds to a bounded sequence $(x_n)$ which has a unique limit with respect to $\mu$, i.e. a number $a \in {\mathbbm{R}}$ such that every neighbourhood of $a$ contains $\mu$-almost every element of the sequence $(x_n)$. Write $$\lim_{n,\mu} x_n = a \qquad \mbox{or simply} \quad \lim_\mu x_n = a$$ for this limit. Note that it depends a lot on the choice of $\mu$. The bounded sequence $(-1)^n$ has $\mu$-limit 1 or $-1$ depending on whether the set of even or the set of odd natural numbers lies in $\mu$.\
Taking the limit of a bounded sequence is the same as taking the standard part of a finite hyperreal number: $$\operatorname{st}\big([x_n]\big) = \lim_\mu x_n \qquad \mbox{if } [x_n] \in {\!\! ~^{*}}{\mathbbm{R}}\mbox{ is finite.}$$
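A genuine non-principal ultrafilter cannot be written down explicitly, but the $(-1)^n$ example can still be made concrete by fixing one set and *assuming* it lies in $\mu$: the $\mu$-limit is then the limit along that set. A small illustrative sketch:

```python
def limit_along(seq, member, N=10_000):
    # values of seq(n) for n < N restricted to the set {n : member(n)};
    # for an eventually constant restriction the last value is the limit
    vals = [seq(n) for n in range(N) if member(n)]
    return vals[-1]

seq = lambda n: (-1) ** n
lim_even = limit_along(seq, lambda n: n % 2 == 0)  # evens assumed to be in mu
lim_odd  = limit_along(seq, lambda n: n % 2 == 1)  # odds assumed to be in mu
```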
Asymptotic Cones
================
We start by defining the asymptotic cone of an arbitrary (pseudo)-metric space and discuss to what extent it depends on the defining data. The ideas here are not new and can be found in [@R] and [@DS]. Recall that in a pseudo-metric space all the axioms for metric spaces are valid except for the possibility that two different points can have distance 0.
Let $(X,d)$ be a pseudo-metric space, $\mu$ a non-principal ultrafilter on ${\mathbbm{N}}$, $(e_n)$ a sequence of points in $X$ (the “sequence of base-points”) and $(\alpha_n)$ a sequence of positive real numbers tending to infinity (the “sequence of scaling factors”). Consider the ultrapower ${\!\! ~^{*}}X$, which is an ${\!\! ~^{*}}{\mathbbm{R}}$-pseudo-metric space. This ${\!\! ~^{*}}{\mathbbm{R}}$ metric will be denoted by ${\!\! ~^{*}}d$.\
Set $e := [e_n] \in {\!\! ~^{*}}X$ and $\alpha := [\alpha_n] \in {\!\! ~^{*}}{\mathbbm{R}}$. The metric ${\!\! ~^{*}}d/\alpha$ is again a ${\!\! ~^{*}}{\mathbbm{R}}$ metric on ${\!\! ~^{*}}X$. Consider now the following set: $${\!\! ~^{*}}X^\alpha_e := \left\{ [x_n] \in {\!\! ~^{*}}X : \frac{{\!\! ~^{*}}d\big([x_n],[e_n]\big)}{\alpha} \mbox{ is finite in } {\!\! ~^{*}}{\mathbbm{R}}\right\}.$$ For any finite non-standard real number one can take the standard part, which is real. This makes ${\!\! ~^{*}}X^\alpha_e$ into a pseudo-metric space. Identifying points with distance 0 gives the asymptotic cone of $X$: $$\operatorname{Cone}_\mu(X,e,\alpha) := {\!\! ~^{*}}X^\alpha_e / \approx \quad \mbox{, where } [x_n] \approx [y_n] \iff \frac{{\!\! ~^{*}}d\big([x_n],[y_n]\big)}{\alpha} \mbox{ is infinitesimal.}$$ We don’t want to complicate the notation even more, so we will denote an equivalence class with respect to $\approx$ again by $[x_n]$. The metric $d_\infty$ on $\operatorname{Cone}_\mu(X,e,\alpha)$ is defined by $$d_\infty \big([x_n],[y_n]\big) := \operatorname{st}\left( \frac{{\!\! ~^{*}}d \big( [x_n], [y_n] \big)}{\alpha} \right) = \lim_\mu \frac{d(x_n,y_n)}{\alpha_n}.$$
Saturation properties of ultrapowers guarantee that the asymptotic cone is always a complete metric space[^2]. A more direct proof can for example be found in [@VDW], Proposition 4.2.
For later use we also define iterated asymptotic cones. For this note that if $X$ is a metric space and $e \in {\!\! ~^{*}}X$ a fixed basepoint, the asymptotic cone of $X$ will have a canonical basepoint given by the equivalence class of $e$, which we will denote by $\hat{e}$. Fix a non-principal ultrafilter $\mu$ on ${\mathbbm{N}}$ and an infinite hyperreal number $\alpha$. Then we set $\operatorname{Cone}^0_\mu(X,e,\alpha) := X$ and for $i \in {\mathbbm{N}}$ set $$\operatorname{Cone}^{i+1}_\mu(X,e,\alpha) := \operatorname{Cone}_\mu\big(\operatorname{Cone}^i_\mu(X,e,\alpha),\hat{e},\alpha \big).$$
As indicated in the notation, the definition of the asymptotic cone depends on the choices of the ultrafilter $\mu$, the sequence of base points $e$ and the sequence of scaling factors $\alpha$. We want to discuss how severe these dependencies are. The first, almost obvious observation is the following.
\[bounded\_add\] Let $\mu$ be a non-principal ultrafilter, $\alpha \in {\!\! ~^{*}}{\mathbbm{R}}$ an infinite hyperreal number and let $\beta \in {\!\! ~^{*}}{\mathbbm{R}}$ be a finite number. Let further $X$ be a metric space with basepoint $e \in X$. Then $$\operatorname{Cone}_\mu(X,e,\alpha) \cong \operatorname{Cone}_\mu(X,e,\alpha + \beta).$$
This follows from a simple fact about real sequences: if $x_n$ is any sequence of real numbers, $\alpha_n$ another sequence tending to infinity and $\beta_n$ a $\mu$-a.e. bounded sequence such that $\frac{x_n}{\alpha_n}$ converges with respect to $\mu$, then $$\lim_\mu \frac{x_n}{\alpha_n + \beta_n} = \lim_\mu \frac{x_n}{\alpha_n}.$$
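A quick numerical illustration of this fact, with sample sequences of our choosing (ratio tending to 3, bounded perturbation $\beta_n \equiv 7$):

```python
x_n     = lambda n: 3*n*n + n     # x_n / alpha_n -> 3
alpha_n = lambda n: n*n           # tends to infinity
beta_n  = lambda n: 7.0           # bounded

n = 10**6
r  = x_n(n) / alpha_n(n)          # unperturbed ratio
rp = x_n(n) / (alpha_n(n) + beta_n(n))  # perturbed ratio, same limit
```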
A metric space $(X,d)$ is called quasi-homogeneous if the action of $\operatorname{Isom}X$ has a bounded fundamental domain in $X$. Put another way, $(X,d)$ is quasi-homogeneous if $\operatorname{diam}\big(X / \operatorname{Isom}X\big) < \infty$. Recall that a metric space is called homogeneous if the isometry group acts transitively on the points of $X$.
Let $(X,d)$ be a metric space, $\mu$ a non-principal ultrafilter on ${\mathbbm{N}}$ and $\alpha \in {\!\! ~^{*}}{\mathbbm{R}}$ a sequence of scaling factors as above. Let $e$ and $e'$ be two basepoints in ${\!\! ~^{*}}X$. If $(X,d)$ is quasi-homogeneous, there exists an isometry $${\varphi}: \operatorname{Cone}_\mu(X,e,\alpha) \to \operatorname{Cone}_\mu(X,e',\alpha)$$ mapping $e$ to $e'$.
By assumption there is a constant $C > 0$ and isometries ${\varphi}_n \in \operatorname{Isom}X$ such that $$d\big({\varphi}_n(e_n), e_n'\big) < C.$$ This induces a well-defined map ${\varphi}: \operatorname{Cone}_\mu(X,e,\alpha) \to \operatorname{Cone}_\mu(X,e',\alpha)$, which can be seen as follows. Let $x = [x_n] \in {\!\! ~^{*}}X^\alpha_e$, then $$\begin{aligned}
\frac{{\!\! ~^{*}}d\big({\varphi}(x),e'\big)}{\alpha} = \frac{{\!\! ~^{*}}d\big([{\varphi}_n(x_n)],[e_n']\big)}{\alpha} &\leq& \frac{{\!\! ~^{*}}d\big([{\varphi}_n(x_n)],[{\varphi}_n(e_n)]\big)}{\alpha} + \frac{{\!\! ~^{*}}d\big([{\varphi}_n(e_n)],[e_n']\big)}{\alpha}\\
&\leq& \underbrace{\frac{{\!\! ~^{*}}d(x,e)}{\alpha}}_{\mbox{finite}} + \frac{C}{\alpha}.\end{aligned}$$ This shows that ${\varphi}(x) \in {\!\! ~^{*}}X^\alpha_{e'}$ and therefore the map ${\varphi}$ is well-defined. It is clear that it is an isometry. The calculation above also shows ${\varphi}(e) = e'$, because $C/\alpha$ is infinitesimal and therefore $d_\infty\big({\varphi}(e),e'\big) = 0$.
This proof also shows that for two basepoints $e,e' \in {\!\! ~^{*}}X$ at finite distance (in ${\!\! ~^{*}}{\mathbbm{R}}$) the identity map on representative sequences induces an isometry. This is in particular the case if $e$ and $e'$ are constant, i.e. points of $X$.
From now on, unless stated otherwise, we assume the basepoint to be a point $e \in X$, embedded into ${\!\! ~^{*}}X$ via a constant sequence.\
The dependence on the ultrafilter $\mu$ and the scaling factor $\alpha$ is more delicate. There is an example of a metric space $(X,d)$ having non-homeomorphic cones $\operatorname{Cone}_\mu(X,e,\alpha)$ and $\operatorname{Cone}_{\mu'}(X,e,\alpha)$, where $\mu$ and $\mu'$ are distinct ultrafilters on ${\mathbbm{N}}$, see [@TV]. The construction in this paper can be adapted to give an example of non-homeomorphic cones $\operatorname{Cone}_\mu(X,e,\alpha)$ and $\operatorname{Cone}_\mu(X,e,\beta)$ for different scaling factors $\alpha$ and $\beta$. Indeed, the choices of the ultrafilter and the sequence of scaling factors are interrelated. We will discuss the example in [@TV] in greater detail in \[TV\_example\].
Let $\alpha_n$ be a sequence of positive real numbers tending to infinity. We say that this sequence has bounded accumulation if there is a number $N \in {\mathbbm{N}}$, such that for all $r \in {\mathbbm{N}}$ the set $$S_r = \{ n \in {\mathbbm{N}}: \alpha_n \in [r,r+1[ \} = \{ n \in {\mathbbm{N}}: \lfloor \alpha_n \rfloor = r \}$$ has fewer than $N$ elements.\
If $\mu$ is any ultrafilter on ${\mathbbm{N}}$, we say that $\alpha$ has $\mu$-almost surely bounded accumulation if there is a set $T \in \mu$, such that $|T \cap S_r|$ is uniformly bounded.\
Since ${\mathbbm{N}}\in \mu$ for all ultrafilters $\mu$, if $\alpha$ has bounded accumulation it also has $\mu$-almost surely bounded accumulation for every $\mu$. Moreover, if $\alpha$ has $\mu$-almost surely bounded accumulation for some ultrafilter $\mu$, then there exists a set $A' \in \mu$ such that for each $r \in {\mathbbm{N}}$ the set $$A' \cap S_r = \{ n \in A' : \alpha_n \in [r,r+1[ \} = \{ n \in A' : \lfloor \alpha_n \rfloor = r \}$$ has at most one element. Indeed, by assumption there exist an $N \in {\mathbbm{N}}$ and a set $T \in \mu$ such that for all $r \in {\mathbbm{N}}$ we have $|T \cap S_r| \leq N$. Therefore we can write $T$ as a finite disjoint union $$T = A_1 \dot\cup A_2 \dot\cup \dots \dot\cup A_N$$ with $|A_i \cap S_r| \leq 1$ for each $i \leq N$ and each $r \in {\mathbbm{N}}$. Because the union is disjoint, exactly one of the $A_i$ lies in $\mu$, and this can be taken as $A'$.
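The decomposition step above is constructive; the following Python sketch (our own illustration, not from the text) splits an index set into $N$ classes $A_1, \dots, A_N$, each meeting every $S_r$ at most once, by sending the $j$-th index landing in $S_r$ to $A_j$:

```python
import math

# given |S_r| <= N for all r, split the index set into classes A_1, ..., A_N
# with |A_i ∩ S_r| <= 1 by sending the j-th index landing in S_r to A_j
def split_by_accumulation(alpha, N):
    seen = {}                              # r -> number of indices of S_r seen so far
    classes = [set() for _ in range(N)]
    for n, a in enumerate(alpha):
        r = math.floor(a)
        j = seen.get(r, 0)
        seen[r] = j + 1
        classes[j].add(n)                  # (j+1)-st index of S_r goes to A_{j+1}
    return classes

alpha = [1.2, 1.7, 2.1, 3.5, 3.9, 4.0]     # here every S_r has at most 2 elements
A1, A2 = split_by_accumulation(alpha, 2)
```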
In the following denote the hyperreal number $[n]$ by $\omega$. This is often used as the “standard” scaling sequence.
\[scaling\] Let $(X,d)$ be a metric space, $\mu$ a non-principal ultrafilter on ${\mathbbm{N}}$, $e$ a basepoint and $\alpha$ a sequence of scaling factors. Then there exists a non-principal ultrafilter $\mu'$ on ${\mathbbm{N}}$, a basepoint $e'$ and an isometric embedding ${\varphi}: \operatorname{Cone}_{\mu'}(X,e',\omega) \to \operatorname{Cone}_\mu(X,e,\alpha)$.\
If moreover $\alpha$ has $\mu$-almost surely bounded accumulation, then ${\varphi}$ is an isometry.
Define a map $\psi: {\mathbbm{N}}\to {\mathbbm{N}}$ by setting $\psi(n) := \lfloor \alpha_n \rfloor$. Note that it is no loss of generality to assume $\alpha_n \in {\mathbbm{N}}$ for all $n$, since $\operatorname{Cone}_\mu(X,e,\alpha)$ and $\operatorname{Cone}_\mu(X,e,\lfloor \alpha \rfloor)$ are isometric by Lemma \[bounded\_add\].\
Define the ultrafilter $\mu'$ as follows. For any subset $A \subseteq {\mathbbm{N}}$ set $$A \in \mu' : \iff \psi^{-1}(A) \in \mu.$$ It is clear that this defines a non-principal ultrafilter on ${\mathbbm{N}}$. For every $[x_n] \in \operatorname{Cone}_{\mu'}(X,e,\omega)$ set ${\varphi}\big([x_n]\big) := [x_{\psi(n)}]$. This is a well-defined map to $\operatorname{Cone}_\mu(X,e,\alpha)$. Let $[x_n] \in {\!\! ~^{*}}X^\omega_e$ be a representative of any point in $\operatorname{Cone}_{\mu'}(X,e,\omega)$ and consider ${\varphi}(x)$: $$\frac{{\!\! ~^{*}}d\big({\varphi}([x_n]),e\big)}{[\alpha_n]} = \frac{{\!\! ~^{*}}d\big(x_{\psi(n)},e\big)}{[\psi(n)]}$$ and this is a finite hyperreal with respect to $\mu$, because $$\frac{{\!\! ~^{*}}d\big([x_n],e\big)}{\omega} = \frac{{\!\! ~^{*}}d\big([x_n],e\big)}{[n]}$$ is by assumption a finite hyperreal with respect to $\mu'$. This shows ${\varphi}(x) \in {\!\! ~^{*}}X^\alpha_e$. We also have for any two representatives $x = [x_n]$ and $y = [y_n]$ of points in $\operatorname{Cone}_{\mu'}(X,e,\omega)$: $$\frac{{\!\! ~^{*}}d\big({\varphi}([x_n]),{\varphi}([y_n])\big)}{[\alpha_n]} = \frac{{\!\! ~^{*}}d\big([x_{\psi(n)}],[y_{\psi(n)}]\big)}{[\psi(n)]}$$ The $\mu$-limit of this number is the same as the $\mu'$-limit of ${\!\! ~^{*}}d(x,y)/\omega$, and this shows that the map ${\varphi}$ respects the distance and is therefore an isometric embedding. Since the asymptotic cone is a metric (not a pseudo-metric) space, it follows in particular that ${\varphi}$ is injective.\
Assume now that $\alpha$ has $\mu$-almost surely bounded accumulation and consider again for each $r \in {\mathbbm{N}}$ the set $$S_r = \{ n \in {\mathbbm{N}}: \lfloor \alpha_n \rfloor = r \} = \psi^{-1}\big(\{r\}\big).$$ By assumption there exists a set $A \subseteq {\mathbbm{N}}$ with $A \in \mu$ and $|A \cap S_r| \leq 1$ for each $r \in {\mathbbm{N}}$.\
Consider then $A' := \psi(A)$. By construction we have $\psi^{-1}(A') = A$ and therefore $A' \in \mu'$. The inverse of ${\varphi}$ can then be defined on the set of indices in $A'$ and it follows that ${\varphi}$ is surjective and therefore an isometry.
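The reindexing at the heart of this proof can be mirrored numerically. The snippet below (our own illustration, with an invented distance function) shows how $\psi(n) = \lfloor \alpha_n \rfloor$ pulls a representative sequence back so that finiteness of ${\!\! ~^{*}}d(\cdot,e)/\alpha$ is preserved:

```python
import math

# sketch of the reindexing psi(n) = floor(alpha_n): a representative (x_m)
# for the standard scaling omega is pulled back to (x_{psi(n)}) for alpha
alpha = [n ** 2 + 0.5 for n in range(1, 101)]   # illustrative scaling factors
psi = [math.floor(a) for a in alpha]            # psi(n) = n^2 here

# pretend d(x_m, e) = 3*m for a sample representative sequence; then
# d(x_{psi(n)}, e) / alpha_n stays finite (close to 3), mirroring that
# phi maps *X^omega_e into *X^alpha_e
ratios = [3 * psi[i] / alpha[i] for i in range(len(alpha))]
```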
We will now see what this proof shows in the particular example of Thomas and Velickovic.
\[TV\_example\] In the paper [@TV] the authors give an example of a metric space $(X,d)$ (in this case a finitely generated group $G$ with the word metric) and two different asymptotic cones with respect to two distinct ultrafilters. In particular they prove the following:\
There is a metric space $(X,d)$ and two disjoint subsets $A,B \subseteq {\mathbbm{N}}$ such that for any ultrafilter $\mu$ containing $A$ the cone $\operatorname{Cone}_\mu(X,e,\omega)$ is simply connected whereas for any ultrafilter $\mu'$ containing $B$ the cone $\operatorname{Cone}_{\mu'}(X,e,\omega)$ has non-trivial fundamental group. The cones are therefore non-homeomorphic.\
Note that together with the proof of Proposition \[scaling\] we obtain the following. Let $\alpha$ be the sequence of scaling factors obtained by ordering the elements of $A$ in the natural order and $\beta$ the sequence obtained from the set $B$. Then the asymptotic cone $\operatorname{Cone}_\mu(X,e,\alpha)$ is simply connected for [**any**]{} ultrafilter $\mu$, because it is by Proposition \[scaling\] isometric to $\operatorname{Cone}_{\mu'}(X,e,\omega)$ and $\mu'$ is an ultrafilter containing $A$ by construction.\
The same argument shows that $\operatorname{Cone}_\mu(X,e,\beta)$ has non-trivial fundamental group again independent of the choice of the ultrafilter $\mu$.\
This example shows that it is not enough to simply fix the scaling factor and vary the ultrafilter to get all possible asymptotic cones.
Proper spaces as asymptotic cones
=================================
We will now proceed to show that many different spaces can arise as asymptotic cones. In particular the following theorem holds.
\[decone\] Let $(Y,d)$ be a proper metric space. Then there is a proper metric space $(X,{\overline}{d})$ with basepoint $e \in X$ and a sequence of scaling factors $(\alpha_n)$, such that for any non-principal ultrafilter $\mu$ on ${\mathbbm{N}}$ there is an isometry $$\operatorname{Cone}_\mu(X,e,\alpha) \cong Y.$$
Set $\alpha_n := n!$ and fix any non-principal ultrafilter $\mu$ on ${\mathbbm{N}}$. Choose any point $e \in Y$ as basepoint. For $n \geq 2$ consider the following subset of $Y$: $$Y_n := \left\{ y \in Y : d(y,e) \in \left[ \frac{1}{\log n}, \log n \right] \right\} \cup \{e\}.$$ This is a closed subset of $Y$ and therefore itself a complete metric space. Rescale the metric on $Y_n$ by $n!$ and call the resulting space $X_n$, i.e. for $x,x' \in X_n$ we have ${\overline}{d}(x,x') = n! \cdot d(x,x')$.\
Define the space $X$ now as the union of the spaces $X_n$ amalgamated along the common basepoint $e$. For $x \in X$ with $x \not= e$ write $x < X_n$ if $x \in X_k$ for some $k < n$ and similarly write $x > X_n$ if $x \in X_k$ for some $k > n$.\
Consider now the asymptotic cone of $X$ with respect to $\alpha$, $\mu$ and the basepoint $e$. Suppose $[x_n]$ is any point in this cone represented by a sequence in $X$. Then there are three cases:
- [*Case 1:*]{} We have $\mu$-almost surely $x_n < X_n$. Then $$\lim_\mu \frac{{\overline}{d}(x_n,e)}{\alpha_n} \leq \lim_\mu \frac{(n-1)! \log (n-1)}{n!} = \lim_\mu \frac{\log(n-1)}{n} = 0.$$ In this case $(x_n)$ is equivalent to the constant sequence given by the basepoint.
- [*Case 2:*]{} We have $\mu$-almost surely $x_n \in X_n$.
- [*Case 3:*]{} We have $\mu$-almost surely $x_n > X_n$. But then $$\lim_\mu \frac{{\overline}{d}(x_n,e)}{\alpha_n} \geq \lim_\mu \frac{(n+1)!}{\log(n+1) n!} = \lim_\mu \frac{n+1}{\log(n+1)} = \infty.$$ In this case the sequence does not give a point in the asymptotic cone, contradicting the assumption.
This shows that any point in the asymptotic cone which is different from the basepoint must fulfill the condition of case 2 above.\
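As a quick numerical sanity check (our own computation, not part of the proof), the bounds in Cases 1 and 3 can be evaluated for a moderate $n$:

```python
import math

# Case 1 bound: (n-1)! * log(n-1) / n!   =  log(n-1) / n      -> 0
# Case 3 bound: (n+1)! / (log(n+1) * n!) = (n+1) / log(n+1)   -> infinity
n = 200
# divide the factorials first so no intermediate float overflows
case1 = math.factorial(n - 1) / math.factorial(n) * math.log(n - 1)
case3 = math.factorial(n + 1) / math.factorial(n) / math.log(n + 1)
```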
Now let $y \in Y$ be an arbitrary point. For $n \in {\mathbbm{N}}$ define $${\varphi}_n(y) := \left\{ \begin{array}{ll} y \qquad& \mbox{, if }\frac{1}{\log(n)} \leq d(y,e) \leq \log(n)\\ e \qquad&\mbox{ otherwise.}\end{array}\right.$$ Then ${\varphi}_n(y) \in X_n$ and for all $y \in Y$ there is a natural number $N$, such that for all $n \geq N$ we have ${\varphi}_n(y) = y$. Define now a map ${\varphi}: Y \to \operatorname{Cone}_\mu(X,e,\alpha)$ by setting ${\varphi}(y) := [{\varphi}_n(y)]$. The basepoint $e$ of $Y$ is then mapped to the class of the constant sequence $[e]$. By construction this map is an isometric embedding, for if $y,y' \in Y$ are arbitrary points we have $${\overline}{d}_\infty\big({\varphi}(y),{\varphi}(y')\big) = \lim_\mu \frac{{\overline}{d}\big({\varphi}_n(y),{\varphi}_n(y')\big)}{\alpha_n} = \lim_\mu \frac{n! \cdot d(y,y')}{n!} = d(y,y').$$ We now have to prove that ${\varphi}$ is a surjection to get the required isometry. For this let $[x_n]$ be an arbitrary point of the cone represented by a sequence $(x_n)$ in $X$. We may assume that this sequence is not equivalent to the basepoint. By the above condition we know that $\mu$-almost surely we have $x_n \in X_n$. Regarding the points $x_n$ as points in $Y$ we get the inequality $$\lim_\mu d(x_n,e) = \lim_\mu \frac{n! \cdot d(x_n,e)}{n!} = \lim_\mu \frac{{\overline}{d}(x_n,e)}{\alpha_n} < \infty$$ since the point is by assumption in the cone. It follows that the sequence $(x_n)$ is $\mu$-almost surely bounded in $Y$. Since $Y$ is proper, closed balls in $Y$ are compact, so this bounded sequence has a limit $y$ with respect to $\mu$. And since $${\overline}{d}_\infty\big([x_n], {\varphi}(y)\big) = \lim_\mu \frac{{\overline}{d}(x_n,y)}{\alpha_n} = \lim_\mu \frac{n! \cdot d(x_n,y)}{n!} = \lim_\mu d(x_n,y) = 0$$ the point ${\varphi}(y)$ is equivalent to $(x_n)$ and therefore ${\varphi}$ is a surjection.\
It remains to show that $X$ is again a proper metric space. First observe that any of the spaces $X_n$ is compact, since it is a rescaled version of a closed subset of the closed ball with radius $\log n$ around $e$ in the proper space $Y$. Moreover the basepoint $e$ in each of the $X_n$ is isolated and has distance at least $\frac{n!}{\log n}$ from any other point in $X_n$. Since this grows with $n$ it is clear that any closed ball with fixed radius around any point in $X$ only meets finitely many of the $X_n$ and can therefore be seen as a finite union of compact sets, which is compact itself.
\[itdecone\] Let $(Y,d)$ be a proper metric space. Then for each number $k \in {\mathbbm{N}}$ there is a proper metric space $(X^{(k)},{\overline}{d})$ with basepoint $e \in X^{(k)}$ and a sequence of scaling factors $(\alpha_n)$, such that for any non-principal ultrafilter $\mu$ on ${\mathbbm{N}}$ there is an isometry $$\operatorname{Cone}^k_\mu(X^{(k)},e,\alpha) \cong Y.$$
Since the metric space $(X,{\overline}{d})$ from Theorem \[decone\] is again proper, the process can be iterated.
Instead of proving Theorem \[decone\] and Corollary \[itdecone\] for fixed scaling factor and all ultrafilters, we could have stated that there is a non-principal $\mu$, such that the theorem is valid for the scaling factor $\omega$ by Proposition \[scaling\].
Slow ultrafilters
=================
Let $A = \{a_1 < a_2 < a_3 < ... \} \subseteq {\mathbbm{N}}$. We call $A$ thin if $\lim \frac{a_n}{a_{n+1}} = 0$ and we call $A$ fast if $\lim \frac{a_n}{n} = \infty$.
It is easy to see that every thin set is fast. The converse is not true, the set $A = \{2^n : n \in {\mathbbm{N}}\}$ is an example of a fast set which is not thin.
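A quick numerical illustration (our own, using the sets $\{n!\}$ and $\{2^n\}$) of the two notions:

```python
import math

# thin:  a_n / a_{n+1} -> 0;    fast:  a_n / n -> infinity
factorials = [math.factorial(n) for n in range(1, 30)]   # thin, hence fast
powers = [2 ** n for n in range(1, 30)]                  # fast but not thin

ratio_fact = factorials[-2] / factorials[-1]   # = 1/29, tending to 0: thin
ratio_pow = powers[-2] / powers[-1]            # constant 1/2: not thin
growth_pow = powers[-1] / 29                   # 2^29 / 29, very large: fast
```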
The collection ${\mathcal{S}}$ of all cofinite sets together with complements of fast sets forms a filter.
Since subsets of fast sets are clearly fast, it remains to show that the union of two fast sets is again fast. This is a simple calculation: if $Z = X \cup Y$ with both $X$ and $Y$ fast, then the $n$-th element of $Z$ is at least the $\lceil n/2 \rceil$-th element of $X$ or of $Y$, and both of these grow faster than any linear function of $n$.
Any ultrafilter extending the filter from the lemma will be called slow. Druţu and Sapir asked in [@DS] if there are examples of groups having non-homeomorphic cones with respect to the standard scaling sequence $\omega = (1,2,3,4,\ldots)$ and two slow ultrafilters.\
The relevance of this question lies in the fact that almost all examples of groups (or metric spaces in general) having different asymptotic cones rely on sequences of scaling factors which are thin when seen as subsets of ${\mathbbm{N}}$. Equivalently this means that these cones are formed using the standard scaling sequence $\omega$ and ultrafilters containing thin sets.\
The next theorem shows that any construction for cones that can be realised with an ultrafilter containing a thin set can also be done with a slow ultrafilter.
\[Drutuanswer\] Let $A$ be a thin set and $\mu$ an ultrafilter containing $A$. Then there is a slow ultrafilter $\mu'$, such that for every pointed metric space $(X,e)$ there is an isometry $$\operatorname{Cone}_\mu(X,e,\omega) \to \operatorname{Cone}_{\mu'}(X,e,\omega).$$
Fix the thin set $A = \{ a_1 < a_2 < a_3 < \ldots \}$. For every $L > 1$ and $n \in {\mathbbm{N}}$ set $$X_{L,{a_n}} := \left[ \frac{1}{L} a_n, L a_n\right] \cap {\mathbbm{N}}\qquad \mbox{and for } I \subseteq A \mbox{ set} \qquad X_{L,I} := \bigcup_{a_n \in I} X_{L,{a_n}}$$ Since $A$ is thin these intervals will be disjoint for large $n$, so it is no loss of generality to assume that this is always a disjoint union by discarding finitely many parts. We will first show that for any $I \subseteq A$ with $I \in \mu$ the set $X_{L,I}$ is not fast, and neither is its complement.\
First note that an infinite set $X \subseteq {\mathbbm{N}}$ is fast if and only if $$\lim_{x \to \infty \atop x \in X} \frac{|X \cap[1,x-1]|}{x} = 0. \qquad (*)$$ In the set $X_{L,I}$ consider a subsequence of elements of the form $L a_n$ for $a_n \in I$. Then $$\frac{|X_{L,I} \cap [1,La_n - 1]|}{La_n} \geq \frac{La_n - \frac{1}{L}a_n - 1}{La_n} = 1 - \frac{1}{L^2} - \frac{1}{La_n}.$$ Since $L > 1$ this will be bounded away from 0 for $n \to \infty$. Therefore this subsequence doesn’t satisfy (\*) and this implies that the set $X_{L,I}$ is not fast.\
For the complement $Y = {\mathbbm{N}}\backslash X_{L,I}$ note that $Y$ contains sets of the form $$\left] L a_{n-1}, \frac{1}{L} a_n \right[ \cap {\mathbbm{N}}.$$ It is no loss of generality to consider a subsequence in $Y$ of elements of the form $\frac{1}{L}a_n$. We see $$\frac{|Y \cap [1,\frac{1}{L}a_n - 1]|}{\frac{1}{L}a_n} \geq \frac{\frac{1}{L}a_n - La_{n-1} - 1}{\frac{1}{L}a_n} = 1 - L^2 \frac{a_{n-1}}{a_n} - \frac{L}{a_n}$$ Since $A$ is thin, the right hand side is bounded away from 0 as $n$ goes to infinity, so again (\*) is not satisfied for the complement.\
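The counting argument above can be checked numerically. The sketch below (with our own choice of $A = \{n!\}$ and $L = 2$) computes the density of $X_{L,A}$ just below a right endpoint $L a_n$ and finds it bounded away from $0$, so criterion (*) indeed fails:

```python
import math

L = 2
A = [math.factorial(n) for n in range(3, 12)]   # an initial piece of a thin set

def count_in_X(upto):
    """|X_{L,A} ∩ [1, upto]|, where X_{L,A} is the union of [a/L, L*a], a in A."""
    total = 0
    for a in A:
        lo, hi = math.ceil(a / L), min(L * a, upto)
        if hi >= lo:
            total += hi - lo + 1
    return total

a_last = A[-1]
density = count_in_X(L * a_last - 1) / (L * a_last)  # at least about 1 - 1/L^2
```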
Consider now the collection of sets $\{ X_{L,I} : I \in \mu, I \subseteq A\}$ for some fixed $L > 1$. Since $\mu$ is an ultrafilter this collection is closed under taking intersections of finitely many sets. And since all these sets are not fast and their complements are not fast as well, we can find a filter ${\mathcal{F}}_L$ containing ${\mathcal{S}}$ and this family.\
Now fix a sequence $L_k > 1$ of numbers tending to 1 (strictly decreasing). Then for each $I \subseteq A, I \in \mu$ we have $X_{L_k,I} \subseteq X_{L_r,I}$ for $L_k < L_r$. This means that each generating set of ${\mathcal{F}}_{L_r}$ contains a generating set of ${\mathcal{F}}_{L_k}$ and because filters are closed under taking supersets it follows that ${\mathcal{F}}_{L_r} \subseteq {\mathcal{F}}_{L_k}$. This implies that we obtain an ascending sequence of filters $${\mathcal{S}} \subseteq {\mathcal{F}}_{L_1} \subseteq {\mathcal{F}}_{L_2} \subseteq {\mathcal{F}}_{L_3} \subseteq \ldots$$ A direct application of Zorn’s Lemma yields an ultrafilter $\mu'$ containing all these filters. In particular $\mu'$ is a slow ultrafilter since it contains ${\mathcal{S}}$. Define a map $${\varphi}: \operatorname{Cone}_\mu(X,e,\omega) \to \operatorname{Cone}_{\mu'}(X,e,\omega)$$ by setting ${\varphi}\big([x_m]\big) := [y_m]$ where $y_m$ need only be defined for $m \in X_{L_1,A}$, say. Set $y_m := x_{a_n}$ if $m \in X_{L_1,a_n}$. By construction this map is well-defined: Consider another sequence $[x_m']$ which agrees with $[x_m]$ on a set $I$ of $\mu$-measure 1. Since $A \in \mu$ it is no loss of generality to assume $I \subseteq A$. The construction then implies that the image under ${\varphi}$ of these sequences agrees on the set $X_{L_1,I} \in \mu'$.\
Moreover ${\varphi}$ is a bi-Lipschitz homeomorphism with constant $L_1$. This is immediate since each $k \in X_{L_1,A}$ is in exactly one interval $X_{L_1,a_n}$ and we have $$\frac{d(x_{a_n},e)}{L_1 a_n} \leq \frac{d\big({\varphi}(x_k),e\big)}{k} \leq \frac{d(x_{a_n},e)}{\frac{1}{L_1}a_n}.$$ Consider now an arbitrary $L_k$. Since $L_k \leq L_1$ we know that $X_{L_k,A} \subseteq X_{L_1,A}$. Note that the actual definition of ${\varphi}$ does not depend on the constant $L_1$, therefore the map ${\varphi}$ can be defined for all $L_k$ in the same way, it is actually the same map. It follows that ${\varphi}$ is indeed a bi-Lipschitz map with constant $L_k$ for all $k$. Since $L_k \to 1$, we find that ${\varphi}$ is the desired isometry.
This theorem shows that it is possible to “thicken” an ultrafilter containing a thin set in such a way that the same cone can be realised using a slow ultrafilter. Therefore if one has an example of a finitely generated group with two different asymptotic cones using different ultrafilters containing thin sets, one can modify the construction to obtain two slow ultrafilters yielding different cones. For instance this can be done in the example given by Thomas and Velickovic in [@TV].
J. Bell, A. Slomson, [*Models and Ultraproducts.*]{} North-Holland, Amsterdam, 1969.
C. Druţu, M. Sapir, [*Tree-graded spaces and asymptotic cones of groups.*]{} Topology [**44**]{} (2005), 959–1058.
D. Marker, [*Model Theory: An Introduction.*]{} Springer-Verlag, New York, 2002.
T. R. Riley, [*Higher connectedness of asymptotic cones.*]{} Topology [**42**]{} (2003), 1289–1352.
A. Sisto, [*Separable and tree-like asymptotic cones of groups.*]{} arXiv:1010.1199v1, (2010).
S. Thomas, B. Velickovic, [*Asymptotic cones of finitely generated groups.*]{} Bull. London Math. Soc. [**32**]{} (2000), 203–208.
L. van den Dries, A. J. Wilkie, [*On Gromov’s theorem concerning groups of polynomial growth and elementary logic.*]{} J. of Algebra [**89**]{} (1984), 349–374.
[^1]: E-Mail address: lars.scheele@uni-muenster.de
[^2]: An ultraproduct over a countable set is always $\aleph_1$-saturated, cf. [@M], Exercise 4.5.37. A limit of a Cauchy-sequence can be written as the realization of a type over a countable set and from this the assertion follows directly. Note that the asymptotic cone itself will not be saturated in general.
---
abstract: 'The local dynamics around a fixed point has been extensively studied for germs of one and several complex variables. In one dimension, there exists a complete picture of the trajectories of orbits on a whole neighborhood of the fixed point. In dimension two and higher only partial results are known. In this article we analyze a case that lies on the boundary between one and several complex variables. We consider skew product maps of the form $F(z,w)=({\lambda}(z),f(z,w))$. We deal with the case of *parabolic* skew product maps, that is, when $DF(0,0)=\textrm{Id}$. Our goal is to describe the behavior of orbits on a whole neighborhood of the origin. We establish formulas for conjugacy maps in different regions of a neighborhood of the origin.'
address: |
The Ohio State University\
Columbus, OH\
USA
author:
- Liz Vivas
title: 'Local dynamics of parabolic skew-products'
---
Introduction {#section:intro}
============
The dynamics of skew-product maps $F(z,w) = ({\lambda}(z),f(z,w))$ has been studied in the past [@Jo; @L; @PR; @PS; @PV]. In this article we focus on a local aspect of the study, that is, we look at the dynamics of $F$ close to a fixed point. For simplicity, suppose that the fixed point is the origin $(0,0)$. We turn our attention to a class of skew-product maps that we call *parabolic*, that is, ${\lambda}(z)=z+O(|z|^2)$ and $f(z,w) = w +O(|(z,w)|^2)$.
Skew-product maps provide a setting in which to test general theorems for the dynamics of self-maps in several dimensions. Since the first coordinate depends only on one variable, we can use results from one-dimensional complex dynamics to obtain information on that variable. Nonetheless, skew products exhibit a richer theory than maps in one dimension. An instance of this can be seen in the recent article by Astorg et al. [@ABDPR], in which they describe a skew-product map in two dimensions that has a wandering Fatou component.
We center our study on the following maps: $$\begin{aligned}
\label{Fintro}
F: &({\mathbb C}^2,0) \to ({\mathbb C}^2,0)\\
\nonumber F(z,w) &= ({\lambda}(z),f(z,w)) \end{aligned}$$ where ${\lambda}(z) = z + a_2z^2 + O(z^3), a_2 \neq 0$ and $f(z,w) = w +b_2w^2 + O(|(z,w)|^3)$, $b_2 \neq 0$.
Our goal is to describe the dynamics of a map above in a neighborhood of the origin. We divide our goals into the following two categories:
- Describe regions in which $F$ is conjugated to a simpler map.
- Find formulas for the conjugation map in each region, as in the one-dimensional case.
One classical tool in the study of local dynamics is a conjugacy map. Finding a conjugacy map to a simpler map depends strongly on the type of map we are studying and the dimension of our space.
Consider $F: ({\mathbb C}^n,p) \to ({\mathbb C}^n,p)$ a holomorphic germ with a fixed point $p$. A local conjugacy of $F$ to $G$ is a one-to-one map $\phi: U_p \to {\mathbb C}^n$, where $U_p \subset {\mathbb C}^n$ is an open neighborhood of $p$ and $G=\phi^{-1}\circ F\circ \phi$. In general the goal is to obtain a conjugacy to a map $G$ that is easier to study than $F$. There is a rich history of conjugacy results going back to Schroeder. We refer the reader to [@Abate] or [@Milnor] for a very complete list of results.
In the case of $F$ a parabolic map, that is $DF(p) = \textrm{Id}$, only some partial results are known. The dynamics of parabolic maps in several dimensions is in general very chaotic [@AT; @Hak] and although some results have been proven for the case of generic maps, much less is known in general in comparison to the theory in one dimension.
One common feature in the study of parabolic maps is the conjugacy to a translation. While normally this conjugacy cannot be realized on a whole neighborhood of $p$, it is well defined on open sets that have $p$ on their boundary. The conjugacy is commonly referred to as a Fatou coordinate.
Fatou coordinates are useful tools in the study of parabolic maps. In one dimension, for instance, they are the main tool in the study of parabolic bifurcation. In recent work, Bedford, Smillie and Ueda [@BSU] prove parabolic bifurcation results in the semi-parabolic case by using Fatou coordinates.
Let us recall the results of the one-dimensional theory. Consider the map $f(z) = z + a_2z^2 + O(z^3), a_2 \neq 0$, for which the origin is a parabolic fixed point. The Leau–Fatou flower theorem states that there exists a parabolic basin $B$ for the origin, that is, an open set with the origin on its boundary such that every point converges to the origin under iteration by $f$. There exists in fact a conjugacy of $f$ to the translation map $g(w)=w+1$ on the set $B$. Similarly there exists a repelling basin $R$ whose points converge to $0$ under backward iteration, and likewise we can construct a conjugacy to the translation. Note that the union of $B$ and $R$ contains a full pointed neighborhood of the origin [@Milnor].
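The $1/n$ decay of orbits in the parabolic basin can be seen numerically. The snippet below (our own illustration for the representative map $f(z) = z - z^2$) iterates a basin point and checks that $1/z_n - n$ stays small, reflecting that the incoming Fatou coordinate behaves like $1/z$:

```python
# iterate f(z) = z - z^2 starting in the attracting petal on the positive axis
z = 0.5
n_steps = 10000
for _ in range(n_steps):
    z = z - z * z
# after n steps, 1/z ~ n + log(n) + const, so z decays like 1/n
```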
Our goal in this article is to describe the dynamics of a parabolic map in two dimensions in a similar way. That is, we would like to divide an entire neighborhood of the origin in several open sets, in such a way that we can conjugate our parabolic map to simpler maps in each one of those open sets.
Our main results are Theorems \[GgeneralWiWo\] and \[GgeneralWaWb\]. Let us summarize their content here.
\[MAINTHEOREM\] Let $F$ be as in \[Fintro\]. Then we can describe the dynamics of $F$ on a neighborhood of the $w$-axis. That is, after a change of coordinates for $F$, the following set: $$U = \{(z,w)\in{\mathbb C}^2, |z|<\epsilon, |w|<\epsilon, |w|<|z|^M\}$$ where $M$ can be chosen as large as desired, can be divided into several regions such that in each region we have a conjugacy of $F$ to a simpler map.
While most of the conjugacy maps in the hyperbolic case can be obtained as limits of iterates of our maps, Fatou coordinates are in general not so easily computed. In this article we give formulas for Fatou coordinates for the class of parabolic skew-product maps as in \[Fintro\].
The article is organized as follows: In the following section we write down properties of Fatou coordinates, namely how they change under changes of coordinates. In section 3 we recall results in one dimension. In section 4 we recall results from [@Vi14], where we gave a complete description of the dynamics of a more particular class of parabolic maps on a whole neighborhood of the origin. Finally, in the last section we prove the main theorem.
Fatou coordinates
=================
Since we use Fatou coordinates of different maps throughout the article, we collect here the main definitions as well as their properties. We work in the most general case possible. The results of this section are used in the sections that follow.
Let $F: ({\mathbb C}^n,p) \to ({\mathbb C}^n,p), n\geq 1,$ be a holomorphic germ with a fixed point $p$ such that $DF(p) = {\textrm{Id}}$ and $F\neq {\textrm{Id}}$. Throughout the article we alternate between the fixed point $p$ being the origin and the point at infinity. The hypothesis on the derivative of $F$ guarantees that there is a well defined local inverse holomorphic germ $F^{-1}$ on a neighborhood of $p$.
For $\zeta \in {\mathbb C}^k$ we write $T_{\zeta}: {\mathbb C}^k \to {\mathbb C}^k$ for the translation map $T_{\zeta}(z)= z+\zeta$. Assume from now on $\zeta \neq 0$.
\[incoming\] Let ${U^{\textrm{i},F}}\subset {\mathbb C}^n$ be an open set such that $p \in \partial {U^{\textrm{i},F}}$ and $F({U^{\textrm{i},F}}) \subset {U^{\textrm{i},F}}$. Then we say ${U^{\textrm{i},F}}$ is an *attracting basin* of $F$.\
Let ${U^{\textrm{i},F}}\subset {\mathbb C}^n$ be an attracting basin of $F$. Assume we have a holomorphic map ${\phi^{\textrm{i},F}}: {U^{\textrm{i},F}}\to {\mathbb C}^k$ such that the following equation is satisfied: $$\begin{aligned}
{\phi^{\textrm{i},F}}(F(z))= {\phi^{\textrm{i},F}}(z)+\zeta\qquad \textrm{or equivalently}\qquad{\phi^{\textrm{i},F}}\circ F = T_\zeta \circ {\phi^{\textrm{i},F}}:\end{aligned}$$ $$\begin{aligned}
\begin{CD}
{U^{\textrm{i},F}}@>F>> {U^{\textrm{i},F}}\\
@V{\phi^{\textrm{i},F}}VV @V{\phi^{\textrm{i},F}}VV \\
{\mathbb C}^k @>T_{\zeta}>> {\mathbb C}^k
\end{CD}\end{aligned}$$
where $0\neq\zeta \in {\mathbb C}^k$, then we say ${\phi^{\textrm{i},F}}$ is an *incoming Fatou map* for $F$ and ${U^{\textrm{i},F}}$ with translation $T_\zeta$.
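An exactly solvable example (our own, with $n = k = 1$ and $\zeta = 1$) may help fix ideas: for $F(z) = z/(1-z) = z + z^2 + \dots$ the map ${\varphi}(z) = -1/z$ is an incoming Fatou map on the half-plane $\{\operatorname{Re} z < 0\}$, since ${\varphi}(F(z)) = {\varphi}(z) + 1$ identically. The snippet checks the functional equation at a sample point:

```python
def F(z):
    """F(z) = z/(1 - z), a parabolic germ fixing 0 with F'(0) = 1."""
    return z / (1 - z)

def phi(z):
    """Candidate incoming Fatou map: phi(F(z)) = phi(z) + 1 exactly."""
    return -1 / z

z0 = -0.3 + 0.2j              # a sample point in the attracting basin Re z < 0
lhs = phi(F(z0))
rhs = phi(z0) + 1
```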
\[multiple\] If ${\phi^{\textrm{i},F}}$ is an incoming Fatou map for $F$ and ${U^{\textrm{i},F}}$ with translation $T_\zeta$, then $\lambda{\phi^{\textrm{i},F}}$ is an incoming Fatou map for $F$ and ${U^{\textrm{i},F}}$ with translation $T_{\lambda\zeta}$ for any $0\neq\lambda \in {\mathbb C}$.
Repelling basins as well as repelling Fatou maps are defined by considering the local inverse map $F^{-1}$.
\[outgoing\] Assume ${U^{\textrm{o},F}}\subset {\mathbb C}^n$ is an open set such that $p \in \partial {U^{\textrm{o},F}}$ and $F^{-1}({U^{\textrm{o},F}}) \subset {U^{\textrm{o},F}}$, that is, ${U^{\textrm{o},F}}$ is an attracting basin of $F^{-1}$. Then we say ${U^{\textrm{o},F}}$ is a *repelling basin* of $F$.\
Assume there exists an incoming Fatou map $\psi$ for $F^{-1}$ and ${U^{\textrm{o},F}}$ with translation $T_{-\zeta}$. $$\begin{aligned}
\begin{CD}
{U^{\textrm{o},F}}@>F^{-1}>> {U^{\textrm{o},F}}\\
@V\psi VV @V\psi VV \\
{\mathbb C}^k @>T_{-\zeta}>> {\mathbb C}^k
\end{CD}\end{aligned}$$
Under the assumption $n=k$, assume that $\psi$ has a local inverse map; we then call this map $${\phi^{\textrm{o},F}}: \psi({U^{\textrm{o},F}}) \subset {\mathbb C}^k\to {U^{\textrm{o},F}}$$ the *outgoing Fatou map* for $F$ and ${U^{\textrm{o}}}$ with respect to $T_\zeta$. Using the functional equation satisfied by $\psi$ and $F^{-1}$ we can easily see that: $$\begin{aligned}
F({\phi^{\textrm{o},F}}(z))= {\phi^{\textrm{o},F}}(z+\zeta)\qquad \textrm{or equivalently}\qquad F\circ{\phi^{\textrm{o},F}}= {\phi^{\textrm{o},F}}\circ T_\zeta.\end{aligned}$$
We point out some remarks about the definitions above.
When $n=k=1$, we can assume without loss of generality that $\zeta=1$, and the incoming/outgoing changes of coordinates are the well-known *incoming/outgoing Fatou coordinates*. When $n=k\geq 2$ we could in principle simply use $k$ copies of the Fatou coordinates for $k=1$. We demand in fact one more condition, namely that our maps be injective (necessary for the outgoing Fatou map), and we also call them *incoming/outgoing Fatou coordinates*.
When $n \geq 2$ and $k=1$, the incoming change of coordinate has been used in the past to prove the existence of Fatou-Bieberbach maps for automorphisms of ${\mathbb C}^n$ [@Hak; @Vi12].
It is easy to see that Fatou coordinates are not unique. From the functional equations we see that compositions (resp. pre-compositions) of translations with incoming (resp. outgoing) Fatou coordinates are also incoming (resp. outgoing) Fatou coordinates.
When there is no danger of confusion about the map $F$, we simply write ${\phi^{\textrm{i}}}$ and ${\phi^{\textrm{o}}}$. For now, though, we keep the superscript referring to the map in question, since we want to establish how these maps change under a change of coordinates for $F$.
\[inversemap\] Let $F$ be a germ as above. Assume $F$ and $F^{-1}$ have attracting basins ${U^{\textrm{i},F}}$ and ${U^{\textrm{o},F}}=U^{{\textrm{i}},F^{-1}}$. Then $F^{-1}$ also has a repelling basin, namely $U^{{\textrm{o}},F^{-1}}={U^{\textrm{i},F}}$. Let also ${\phi^{\textrm{i},F}}$ (resp. ${\phi^{\textrm{o},F}}$) be incoming (resp. outgoing) Fatou coordinates for $F$ and ${U^{\textrm{i},F}}$ with respect to $T_\zeta$. Then the following $$\begin{aligned}
\phi^{o,F^{-1}}(z) = (\phi^{\textrm{i},F})^{-1}(-z),\qquad\phi^{\textrm{i},F^{-1}}(z) = -(\phi^{o,F})^{-1}(z)\end{aligned}$$ give us outgoing (resp. incoming) Fatou maps for $F^{-1}$ and $U^{{\textrm{o}},F^{-1}}$ (resp. $U^{{\textrm{i}},F^{-1}}$) with respect to $T_\zeta$.
The proof follows by verifying the respective equations and using Remark \[multiple\].
\[cofc\] Let $\eta$ be a (local) change of coordinates between $F$ and $G$ as in the following commutative diagram: $$\begin{aligned}
\begin{CD}
({\mathbb C}^n,p) @>F>> ({\mathbb C}^n,p) \\
@A\eta AA @A\eta AA \\
({\mathbb C}^n,q) @>G>> ({\mathbb C}^n,q)
\end{CD}\end{aligned}$$ Assume we have attracting/repelling basins ${U^{\textrm{i},F}}$ and ${U^{\textrm{o},F}}$ for $F$, along with Fatou coordinates ${\phi^{\textrm{i},F}}$ and ${\phi^{\textrm{o},F}}$ for $F$. Then we can also find attracting/repelling basins for $G$, as well as incoming/outgoing Fatou coordinates for $G$, as follows: $$\begin{aligned}
\label{coc}
{U^{\textrm{i},G}}&= \eta^{-1}({U^{\textrm{i},F}}),\qquad{U^{\textrm{o},G}}= \eta^{-1}({U^{\textrm{o},F}}),\\
\nonumber{\phi^{\textrm{i},G}}&= {\phi^{\textrm{i},F}}\circ \eta, \qquad{\phi^{\textrm{o},G}}= \eta^{-1} \circ {\phi^{\textrm{o},F}}. \end{aligned}$$ where ${\phi^{\textrm{i},G}}$ (resp. ${\phi^{\textrm{o},G}}$) is defined in ${U^{\textrm{i},G}}$ (resp. ${U^{\textrm{o},G}}$).
The proof is immediate.
One observation that we will use repeatedly in the next sections is the following: in the last proposition we do not need $\eta$ to be defined on a whole neighborhood of the origin. In fact $\eta$ may be defined only on ${U^{\textrm{i},G}}$, or similarly only on ${U^{\textrm{o},G}}$.
Fatou coordinates in one dimension
==================================
Consider a germ at the origin of the following form: $$f(z)=z+ az^2 + O(|z|^3),$$ where $a \neq 0$; this is a parabolic germ at the origin. By a simple change of coordinates we can always assume $a=-1$. The following is the classical theorem of Leau and Fatou. See [@Milnor] for details.
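To justify the normalization $a=-1$: conjugating $f$ by the linear map $L(z)=-az$ gives $$\left(L\circ f\circ L^{-1}\right)(z) = -a\,f\!\left(-\frac{z}{a}\right) = -a\left(-\frac{z}{a}+\frac{z^2}{a}+O(|z|^3)\right) = z - z^2 + O(|z|^3).$$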
(Leau–Fatou Theorem) Assume $f$ is as above. Then there exist ${U^{\textrm{i},f}}$ and ${U^{\textrm{o},f}}$ for $f$, such that ${U^{\textrm{i},f}}\cup {U^{\textrm{o},f}}$ forms a punctured neighborhood of the origin, as well as an incoming Fatou coordinate ${\phi^{\textrm{i},f}}:{U^{\textrm{i},f}}\to{\mathbb C}$ and an outgoing Fatou coordinate ${\phi^{\textrm{o},f}}$ with image ${U^{\textrm{o},f}}$.
Before we continue, we write down an explicit choice for the sets ${U^{\textrm{i},f}}$ and ${U^{\textrm{o},f}}$: Let $${V_{\epsilon}}= \{ \zeta \in {\mathbb C}, |\zeta|<\epsilon, |\textrm{Arg}(\zeta)| < 3\pi/4\}$$ then ${U^{\textrm{i},f}}= {V_{\epsilon}}$ for $\epsilon$ small enough and similarly we can see that ${U^{\textrm{o},f}}= -{V_{\epsilon}}$.
We translate all the action to a neighborhood of $\infty$ using the inversion $I(z) = 1/z$. We obtain $g(w) = w + 1 +\frac{\alpha}{w} +O(1/w^2)$, where $g=I \circ f\circ I$; the fixed point is now at infinity. Using proposition \[cofc\] we see that ${U^{\textrm{i},g}}$ and ${U^{\textrm{o},g}}$ can be obtained by applying the map $I$ to ${U^{\textrm{i},f}}$ and ${U^{\textrm{o},f}}$ respectively. Then ${\phi^{\textrm{i},g}}$ and ${\phi^{\textrm{o},g}}$ can be obtained as follows.
Let $g(w) = w + 1 +\frac{\alpha}{w} +O\left(\frac{1}{w^2}\right)$, then we can find the incoming Fatou coordinate ${\phi^{\textrm{i},g}}: {U^{\textrm{i},g}}\to {\mathbb C}$ of $g$ as the following limit: $$\begin{aligned}
\label{onein}
{\phi^{\textrm{i},g}}(w)&:= \lim_{n\to\infty}L_{-\alpha}(g^n(w))-n\end{aligned}$$ Similarly, we can find an extension to all of ${\mathbb C}$ of the outgoing Fatou coordinate ${\phi^{\textrm{o},g}}: {\mathbb C}\to {\mathbb C}$ by using the following limit: $$\begin{aligned}
\label{oneout}
{\phi^{\textrm{o},g}}(w)&:=\lim_{n\to\infty} g^n(L_\alpha(w-n))\end{aligned}$$ where $L_{\alpha}(w) = w + \alpha \log(w)$.
We start by proving equation . Note first that ${U^{\textrm{i},g}}= S_R=\{|w|>R, |\textrm{Arg}(w)|<3\pi/4\}$, where $R=1/\epsilon$. In this set the map $L_{\alpha}$ is well defined and injective for any $\alpha \in {\mathbb C}$. We use proposition 2.4 and the following lemma.
Consider $g$ as above, and let $\rho=L_{-\alpha}\circ g \circ (L_{-\alpha})^{-1}$ be defined on $W=L_{-\alpha}({U^{\textrm{i},g}})$; then $\rho(W)\subset W$ and $\rho(w)=w+1+O(\frac{1}{w^{1+\epsilon}})$. Similarly, consider $\tau=(L_{\alpha})^{-1}\circ g \circ L_{\alpha}$ defined on $V=(L_{\alpha})^{-1}({U^{\textrm{o},g}})$; then $\tau(V) \supset V$ and $\tau(w)=w+1+O(\frac{1}{w^{1+\epsilon}})$.
Note that $L_\alpha$ and $L_{-\alpha}$ are one-to-one maps on ${U^{\textrm{i},g}}$ and ${U^{\textrm{o},g}}$. The rest of the assertions are immediate.
Then $\phi^{\textrm{i},\rho}(w):= \lim_{n\to\infty}\rho^n(w)-n$ and, similarly, $\phi^{\textrm{o},\tau}(w):= \lim_{n\to\infty}\tau^n(w-n)$ clearly converge. Using the facts that ${\phi^{\textrm{i},g}}=\phi^{\textrm{i},\rho}\circ L_{-\alpha}$ and ${\phi^{\textrm{o},g}}=L_\alpha \circ \phi^{\textrm{o},\tau}$, we see that the limits in the statement of the proposition converge.
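As a sanity check, both limits can be evaluated numerically for a concrete example, $f(z)=z-z^2$, for which $g(w)=w^2/(w-1)=w+1+1/w+O(1/w^2)$, so $\alpha = 1$. The sketch below (base points and iteration counts chosen arbitrarily) verifies the functional equations ${\phi^{\textrm{i},g}}(g(w))={\phi^{\textrm{i},g}}(w)+1$ and $g({\phi^{\textrm{o},g}}(w))={\phi^{\textrm{o},g}}(w+1)$ up to truncation error:

```python
import cmath

def g(w):
    # g = I o f o I at infinity for f(z) = z - z^2:
    # g(w) = w^2/(w - 1) = w + 1 + 1/w + O(1/w^2), hence alpha = 1
    return w * w / (w - 1)

ALPHA = 1.0

def L(w, alpha):
    # L_alpha(w) = w + alpha*log(w) (principal branch)
    return w + alpha * cmath.log(w)

def phi_in(w, n):
    # n-th approximant of the incoming coordinate: L_{-alpha}(g^n(w)) - n
    for _ in range(n):
        w = g(w)
    return L(w, -ALPHA) - n

def phi_out(w, n):
    # n-th approximant of the outgoing coordinate: g^n(L_alpha(w - n))
    v = L(w - n, ALPHA)
    for _ in range(n):
        v = g(v)
    return v

N = 4000
w_in = 10 + 3j            # a point in the incoming sector S_R
abel_in = phi_in(g(w_in), N) - phi_in(w_in, N) - 1
print(abs(abel_in))       # small: incoming Abel equation holds

w_out = 2 + 2j            # the outgoing coordinate extends to all of C
abel_out = g(phi_out(w_out, N)) - phi_out(w_out + 1, N)
print(abs(abel_out))      # small: outgoing functional equation holds
```

The logarithmic corrections built into $L_{\pm\alpha}$ are exactly what makes both limits converge; without them the approximants would drift like $\alpha\log n$.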
Fatou coordinates in two dimensions
===================================
Let us recall our results for a skew parabolic map $F$ of a particular form considered in [@Vi14]: $$\begin{aligned}
\label{Fspecial}
F(z,w) = \left(\frac{z}{1+z},f_z(w)\right) = \left(\frac{z}{1+z},w-w^2+w^3+O(w^4,z^4)\right)\end{aligned}$$
Let ${V_{\epsilon}}= \{ \zeta \in {\mathbb C}, |\zeta|<\epsilon, |\textrm{Arg}(\zeta)| < 3\pi/4\}$. In one dimension, the union ${V_{\epsilon}}\cup (-{V_{\epsilon}})$ forms a punctured neighborhood of the origin. In two dimensions, to cover a full neighborhood of the origin we define the following four sets: $$\begin{aligned}
\label{regionsinU}
{U^\textrm{i}}= {V_{\epsilon}}\times {V_{\epsilon}}, {U^{\textrm{o}}}= (-{V_{\epsilon}})\times(-{V_{\epsilon}}), {U^\textrm{a}}= (-{V_{\epsilon}})\times {V_{\epsilon}}, {U^{\textrm{b}}}= {V_{\epsilon}}\times(-{V_{\epsilon}}). \end{aligned}$$ Then ${U^\textrm{i}}\cup {U^{\textrm{o}}}\cup{U^\textrm{a}}\cup{U^{\textrm{b}}}$, together with both axes, forms a neighborhood of the origin. Since $F$ preserves the axes, we can describe the dynamics on each axis by using the one-dimensional results on parabolic maps.
As in the one dimensional case, we change variables so the fixed point is at infinity by using the map $I(z,w)=(1/z,1/w)$. Let $G = I\circ F\circ I$. $$\begin{aligned}
\label{Gspecial}
G(u,v) = \left(u+1,g_u(v)\right) = \left(u+1,v+1+O\left(\frac{1}{v^2},\frac{1}{uv^2}\right)\right)\end{aligned}$$ Let $S_R=\{|\zeta|>R, |\textrm{Arg}(\zeta)|<3\pi/4\}$, where $R=1/\epsilon$ is large; note that $I({V_{\epsilon}})=S_R$. We will then focus our study on the following four sets: $$\begin{aligned}
\label{regionsinW}
{W^\textrm{i}}= S_R \times S_R, {W^{\textrm{o}}}= -S_R\times-S_R, {W^\textrm{a}}= -S_R\times S_R, {W^{\textrm{b}}}= S_R\times-S_R.\end{aligned}$$
Denote by $T_{(a,b)}: {\mathbb C}^2 \to {\mathbb C}^2$ the translation defined by $T_{(a,b)}(z,w)=(z+a,w+b)$.
Let $G$ be as in . Then in each region we have that the following limits exist:
- For any $p \in {W^\textrm{i}}$, $G^n(p)$ converges to infinity. The Fatou coordinate ${\Phi^{\textrm{i},G}}$ is given by ${\Phi^{\textrm{i},G}}:= \lim_{n\to\infty}T_{(-n,-n)} \circ G^n$ and we have the following diagram: $$\begin{aligned}
\begin{CD}
{W^\textrm{i}}@>G>> {W^\textrm{i}}\\
@V{\Phi^{\textrm{i},G}}VV @V {\Phi^{\textrm{i},G}}VV \\
{\mathbb C}^2 @>T_{(1,1)}>> {\mathbb C}^2
\end{CD}\end{aligned}$$
- For any $p \in {W^{\textrm{o}}}$, $G^{-n}(p)$ converges to infinity. The Fatou coordinate ${\Phi^{\textrm{o},G}}$ is given by ${\Phi^{\textrm{o},G}}:= \lim_{n\to\infty}G^n\circ T_{(-n,-n)}$ and we have the following diagram: $$\begin{aligned}
\begin{CD}
G^{-1}({W^{\textrm{o}}}) @>G>> {W^{\textrm{o}}}\\
@A{\Phi^{\textrm{o},G}}AA @A {\Phi^{\textrm{o},G}}AA \\
N \subset{\mathbb C}^2 @>T_{(1,1)}>> T_{(1,1)}(N) \subset {\mathbb C}^2
\end{CD}\end{aligned}$$ where $N=({\Phi^{\textrm{o},G}})^{-1}(G^{-1}({W^{\textrm{o}}}))$ and $T_{(1,1)}(N) = ({\Phi^{\textrm{o},G}})^{-1}({W^{\textrm{o}}})$.
We prove first the convergence of the sequence defining ${\Phi^{\textrm{i},G}}$. Then we use this result to prove the analogue for ${\Phi^{\textrm{o},G}}$. Denote ${\Phi^{\textrm{i},G}}_n:= T_{(-n,-n)} \circ G^n$ and $(u_n,v_n):=G^n(u,v)$. A simple computation shows: $${\Phi^{\textrm{i},G}}_{n+1} - {\Phi^{\textrm{i},G}}_n = G(u_n,v_n)-(u_n,v_n)-(1,1) = (0, O(1/v_n^2,1/(u_nv_n^2))).$$ Since $u_n$ and $v_n$ grow like $n$ when $(u,v)\in {W^\textrm{i}}$, these error terms are summable, and the ${\Phi^{\textrm{i},G}}_n$ converge uniformly on compact sets.\
For the outgoing coordinate, we denote ${\Phi^{\textrm{o},G}}_n:=G^n\circ T_{(-n,-n)}$. Then we have the following: $$\begin{aligned}
\label{phiGH}
{\Phi^{\textrm{o},G}}_n \circ \eta \circ {\Phi^{\textrm{i},H}}_n \circ \eta = {\textrm{Id}},\end{aligned}$$ for $H=\eta\circ G^{-1} \circ \eta$ and $\eta(u,v) = (-u,-v)$. Since $H(u,v) = (u+1,v+1+O(1/v^2))$, we have that ${\Phi^{\textrm{i},H}}_n$ converges, and therefore ${\Phi^{\textrm{o},G}}_n$ converges as well. Since $\eta({W^\textrm{i}})={W^{\textrm{o}}}$ and $H({W^\textrm{i}}) \subset {W^\textrm{i}}$, we conclude that ${W^{\textrm{o}}}\subset{\Phi^{\textrm{o},G}}({W^{\textrm{o}}})$.
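To illustrate the proposition, take a product instance of the special form, with the $O(w^4,z^4)$ term set to zero: $F(z,w)=(z/(1+z),\,w-w^2+w^3)$, for which $G(u,v)=(u+1,\,v^3/(v^2-v+1))=(u+1,\,v+1+O(1/v^2))$ at infinity. The sketch below (base point and iteration count chosen arbitrarily) checks numerically that $T_{(-n,-n)}\circ G^n$ stabilizes and intertwines $G$ with $T_{(1,1)}$:

```python
def G(p):
    # G = I o F o I at infinity for F(z, w) = (z/(1+z), w - w**2 + w**3),
    # an instance of the special skew form with vanishing O(w^4, z^4) term:
    # G(u, v) = (u + 1, v^3/(v^2 - v + 1)) = (u + 1, v + 1 + O(1/v^2))
    u, v = p
    return (u + 1, v**3 / (v**2 - v + 1))

def phi_in(p, n):
    # n-th approximant T_{(-n,-n)} o G^n of the incoming Fatou coordinate
    for _ in range(n):
        p = G(p)
    return (p[0] - n, p[1] - n)

N = 3000
p0 = (8 + 2j, 8 + 2j)      # a point in W^i = S_R x S_R

a = phi_in(G(p0), N)       # Phi_N(G(p0))
b = phi_in(p0, N)          # Phi_N(p0)
conj_err = max(abs(a[0] - b[0] - 1), abs(a[1] - b[1] - 1))
stab_err = abs(phi_in(p0, 2 * N)[1] - b[1])
print(conj_err, stab_err)  # both small: Phi o G = T_(1,1) o Phi
```

The first coordinate intertwines exactly; the error is carried entirely by the second coordinate, where the $O(1/v^2)$ terms are summable along the orbit.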
We also have:
\[GspecialWaWb\] Let $G$ be as in . Then in each region we have that the following limits exist:
- The maps $T_{(n,-n)} \circ G^n\circ T_{(-2n,0)}$ converge uniformly on compact sets in ${W^\textrm{a}}$ to a map ${\Psi^{\textrm{a},G}}:= \lim_{n\to\infty}T_{(n,-n)} \circ G^n\circ T_{(-2n,0)}$, and we have the following diagram: $$\begin{aligned}
\begin{CD}
{W^\textrm{a}}@>(-1,g_\infty)>> {W^\textrm{a}}\\
@V{\Psi^{\textrm{a},G}}VV @V {\Psi^{\textrm{a},G}}VV \\
{\mathbb C}^2 @>T_{(-1,1)}>> {\mathbb C}^2
\end{CD}\end{aligned}$$
- The maps $T_{(-2n,0)} \circ G^n\circ T_{(n,-n)}$ converge uniformly on compact sets in ${W^{\textrm{b}}}$ to a map ${\Psi^{\textrm{b},G}}:= \lim_{n\to\infty}T_{(-2n,0)} \circ G^n\circ T_{(n,-n)}$, and we have the following diagram: $$\begin{aligned}
\begin{CD}
L^{-1}({W^{\textrm{b}}}) @>L=(-1,g_\infty)>> {W^{\textrm{b}}}\\
@A{\Psi^{\textrm{b},G}}AA @A {\Psi^{\textrm{b},G}}AA \\
E \subset {\mathbb C}^2 @>T_{(-1,1)}>> T_{(-1,1)}(E) \subset {\mathbb C}^2
\end{CD}\end{aligned}$$
Denote ${\Psi^{\textrm{a},G}}_{n}:=T_{(n,-n)} \circ G^n \circ T_{(-2n,0)}$ and ${\Psi^{\textrm{b},G}}_{n}:=T_{(-2n,0)} \circ G^n \circ T_{(n,-n)}$. Unraveling, we have: $${\Psi^{\textrm{b},G}}_{n}(u,v)=(u,g_{u+2n-1}\circ\ldots \circ g_{u+n+1}\circ g_{u+n}(v-n)).$$ We have proven in [@Vi14] that the maps $${\psi^{o}}_n(v) = g_{-v+\alpha+2n-1}\circ\ldots \circ g_{-v+\alpha+n+1}\circ g_{-v+\alpha+n}(v-n)$$ converge for any $\alpha \in {\mathbb C}$ and $v \in -S_R$, and that the limit satisfies ${\psi^{o}}(v+1) = g_{\infty}({\psi^{o}}(v))$. Applying this result for $\alpha = u-v$ we obtain the convergence of the sequence ${\Psi^{\textrm{b},G}}_n$ to the following map: $${\Psi^{\textrm{b},G}}_n(u,v) \to (u, {\psi^{o}}(v)).$$
For the other coordinate, we use the following identity: $$\begin{aligned}
\label{psiGH}
{\Psi^{\textrm{a},G}}_n \circ \eta \circ {\Psi^{\textrm{b},H}}_n \circ \eta = {\textrm{Id}},\end{aligned}$$ for $H=\eta\circ G^{-1} \circ \eta$, where $\eta(u,v) = (-u,-v)$. Since $H(u,v) = (u+1,v+1+O_u(1/v^2))$, we have that ${\Psi^{\textrm{b},H}}_n$ converges, and therefore ${\Psi^{\textrm{a},G}}_n$ converges as well.
Summarizing the results that we obtained for the map $F$:
Let $F$ be as in . Let the sets ${U^\textrm{i}}, {U^{\textrm{o}}}, {U^\textrm{a}}$ and ${U^{\textrm{b}}}$ be defined as in . Then:
- The union of ${U^\textrm{i}}, {U^{\textrm{o}}}, {U^\textrm{a}}$ and ${U^{\textrm{b}}}$ together with the axes forms a neighborhood of the origin in ${\mathbb C}^2$.
- For any $p \in {U^\textrm{i}}$ we have $F^{n}(p) \in{U^\textrm{i}}$, and $F^n$ converges to the origin uniformly on compact sets.
- For any $p \in {U^{\textrm{o}}}$ we have $F^{-n}(p) \in{U^{\textrm{o}}}$, and $F^{-n}$ converges to the origin uniformly on compact sets.
- For any $p \in {U^\textrm{a}}$, $F^{-n}(p)$ converges to the $w$-axis, the invariant fiber of the map $F$.
- For any $p \in {U^{\textrm{b}}}$, $F^n(p)$ converges to the $w$-axis, the invariant fiber of the map $F$.
General Case
============
We are now ready to tackle the most general case. Consider the following map: $$\begin{aligned}
F(z,w) = (\lambda(z),f_z(w))\end{aligned}$$ where $\lambda(z) = z + O(z^2), f_z(w) = w+O(|(z,w)|^2)$.
We focus on the following case: $$\begin{aligned}
F(z,w) = (z+a_2z^2+O(z^3),w+b_{2}w^2+O(|(z,w)|^3))\end{aligned}$$ where $a_2 \neq 0$ and $b_2 \neq 0$.
By a simple change of coordinates we can assume $a_2 = -1$ and $b_2=-1$. Using a polynomial shear change of coordinates we can increase the power of $z$ in the error terms of the second coordinate. Similarly, by using another change of coordinates we can increase the degree of the $z$ term that multiplies $w$. Therefore we can write $F$ as follows: $$\begin{aligned}
\label{genF}
F(z,w) = (z-z^2+O(z^3),w-w^2+O(w^3,zw^2,z^Mw,z^M))\end{aligned}$$ for $M$ as large as we want.
We once again study the map at infinity by using the inverse map $(u,v):=I(z,w)=(1/z,1/w)$, so we consider $G=I \circ F \circ I$ and the fixed point is at infinity. $$\begin{aligned}
\label{genG}
G(u,v) = (\rho(u),g_u(v))= \left(u+1+O\left(\frac{1}{u}\right),v+1+ O\left(\frac{1}{u},\frac{1}{v},\frac{v}{u^M},\frac{v^2}{u^M}\right)\right)\end{aligned}$$
Let $S_R = \{|\zeta|>R, |\textrm{Arg}(\zeta)|<3\pi/4\}$ for $R$ large. Since the first coordinate of $G$ depends only on one variable, there exist two maps $\psi_1$ and $\psi_2$, defined respectively on $S_R$ and $-S_R$, such that $\psi_j^{-1} \circ \rho \circ \psi_j(u) = u+1$ for $u \in S_R$ when $j=1$, and for $u \in -S_R$ when $j=2$.
Recall that the maps $\psi_j$ are of the form $\psi_j(u) = u +O(\log(u))$, for $j=1,2$. We conjugate $G$ on $T_1 = S_R \times (S_R\cup (-S_R))$ by the change of coordinates $\Psi_1(u,v) = (\psi_1(u),v)$ and on the set $T_2 = (-S_R) \times (S_R\cup (-S_R))$ by the map $\Psi_2(u,v) = (\psi_2(u),v)$.
Then we obtain the following conjugations of $G$: $$\begin{aligned}
G_{j}(u,v) = \left(u+1,v+1+ O\left(\frac{1}{u},\frac{\log(u)}{u^2},\frac{1}{v},\frac{v}{u^M},\frac{v\log(u)}{u^{M+1}},\frac{v^2}{u^M},\frac{v^2\log(u)}{u^{M+1}}\right)\right)\end{aligned}$$ where each $G_j = (\Psi_j)^{-1}\circ G \circ \Psi_j$ is defined in $T_j$, for $j=1,2$. The specific higher-order terms are in general different in each region.
Now, as before we separate each $T_i$ in two sets: $$\begin{aligned}
T_1 &= S_R \times (S_R\cup (-S_R)) = (S_R \times S_R) \cup (S_R \times (-S_R)) = {W^\textrm{i}}\cup {W^{\textrm{b}}},\\
T_2 &= (-S_R) \times (S_R\cup (-S_R)) = (-S_R \times S_R) \cup (-S_R \times -S_R) = {W^\textrm{a}}\cup {W^{\textrm{o}}}.\end{aligned}$$
From now on, when we refer to a region $W$, we mean one of the possible four sets ${W^\textrm{i}},{W^{\textrm{b}}},{W^\textrm{a}}$ or ${W^{\textrm{o}}}$.
Consider the class of maps $\Theta(u,v) = (u,v+\alpha \log(u) + \beta \log(v))$. It is immediate to see that after choosing $R$ large enough then $\Theta$ is an injective transformation in each region $W$.
Now we can see that, conjugating $G_j$ by $\Theta$ and choosing $\alpha$ and $\beta$ appropriately, we can get rid of the $O(1/u,1/v)$ terms, and we obtain: $$\Theta^{-1} \circ G_j \circ \Theta(u,v) = \left(u+1,v+1+O\left(\frac{\log(u)}{u^2},\frac{1}{v^2},\frac{\log(u)}{v^2},\frac{\log(u)\log(v)}{v^3},\frac{v}{u^M},\frac{v^2}{u^M}\right)\right).$$
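To see how $\alpha$ and $\beta$ are chosen, suppose the second coordinate of $G_j$ reads $v+1+c_1/u+c_2/v+(\textrm{higher order})$, where $c_1, c_2$ denote the (unnamed here) coefficients of the linear terms. Since $\log(u+1)-\log(u) = 1/u + O(1/u^2)$ and $\log(v+1+\cdots)-\log(v) = 1/v + O(1/v^2)$, the second coordinate of $\Theta^{-1}\circ G_j \circ \Theta$ becomes $$v + 1 + \frac{c_1-\alpha}{u} + \frac{c_2-\beta}{v} + (\textrm{higher order}),$$ so choosing $\alpha = c_1$ and $\beta = c_2$ removes both linear terms.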
To emphasize that each one of these maps is a different conjugation of $G$ on each set $W$, we write ${\theta_{\textrm{i}}}$ the change of coordinates on ${W^\textrm{i}}$, ${\theta_{\textrm{o}}}$ on ${W^{\textrm{o}}}$, ${\theta_{\textrm{a}}}$ on ${W^\textrm{a}}$ and ${\theta_{\textrm{b}}}$ on ${W^{\textrm{b}}}$. We write ${G_{\textrm{i}}}:= ({\theta_{\textrm{i}}})^{-1}\circ G_1 \circ {\theta_{\textrm{i}}}$ the corresponding map defined on ${W^\textrm{i}}$, ${G_{\textrm{o}}}:=({\theta_{\textrm{o}}})^{-1}\circ G_2 \circ {\theta_{\textrm{o}}}$ on ${W^{\textrm{o}}}$, ${G_{\textrm{a}}}:=({\theta_{\textrm{a}}})^{-1}\circ G_2 \circ {\theta_{\textrm{a}}}$ on ${W^\textrm{a}}$ and ${G_{\textrm{b}}}:=({\theta_{\textrm{b}}})^{-1}\circ G_1 \circ {\theta_{\textrm{b}}}$ on ${W^{\textrm{b}}}$.
We now check that the following subsets of ${W^\textrm{i}}$ and ${W^{\textrm{o}}}$ are the respective attracting and repelling basins for ${G_{\textrm{i}}}$ and ${G_{\textrm{o}}}$: $$\begin{aligned}
\widetilde{{W^\textrm{i}}} &= \{(u,v) \in S_R\times S_R, |u|^{M+1}>|v|\}\\
\nonumber\widetilde{{W^{\textrm{o}}}} &= \{(u,v)\in (-S_R)\times(-S_R),|u|^{M+1}>|v|\}\end{aligned}$$
Let ${W^{\textrm{i},G}}=\Psi_1\circ {\theta_{\textrm{i}}} (\widetilde{{W^\textrm{i}}})$; then we have that $G({W^{\textrm{i},G}}) \subset {W^{\textrm{i},G}}$. Using the results of the last section, we can conjugate $G_{\textrm{i}}$ to a translation on $\widetilde{{W^\textrm{i}}}$ by using the limit: $$\begin{aligned}
\Phi^{\textrm{i},G_\textrm{i}}: \widetilde{{W^\textrm{i}}} \to {\mathbb C}^2, \quad \Phi^{\textrm{i},G_\textrm{i}}= \lim_{n\to\infty}T_{(-n,-n)}\circ G_\textrm{i}^n.\end{aligned}$$ If we unravel for $G$ then we obtain the following formula for the Fatou coordinate on the incoming basin for $G$: $$\begin{aligned}
{\Phi^{\textrm{i},G}}: {W^{\textrm{i},G}}\to {\mathbb C}^2, \quad {\Phi^{\textrm{i},G}}= \lim_{n\to\infty}T_{(-n,-n)}\circ {\theta_{\textrm{i}}}^{-1}\circ\Psi_1^{-1}\circ G^n\end{aligned}$$
Similarly, if we define ${W^{\textrm{o},G}}=\Psi_2\circ {\theta_{\textrm{o}}} (\widetilde{{W^{\textrm{o}}}})$, then we obtain $G({W^{\textrm{o},G}}) \supset {W^{\textrm{o},G}}$. By the theorem of the last section we obtain a conjugation of $G$ on ${W^{\textrm{o},G}}$ to the translation: $$\begin{aligned}
\Phi^{\textrm{o},G_\textrm{o}}: \widetilde{{W^{\textrm{o}}}} \to {\mathbb C}^2, \quad \Phi^{\textrm{o},G_\textrm{o}}= \lim_{n\to\infty}G_\textrm{o}^n\circ T_{(-n,-n)}.\end{aligned}$$ Unraveling for $G$ then we obtain the following formula for the Fatou coordinate on the outgoing basin for $G$: $$\begin{aligned}
{\Phi^{\textrm{o},G}}: {W^{\textrm{o},G}}\to {\mathbb C}^2, \quad {\Phi^{\textrm{o},G}}= \lim_{n\to\infty} G^n \circ \Psi_2 \circ {\theta_{\textrm{o}}}\circ T_{(-n,-n)}.\end{aligned}$$
We have therefore proven:
\[GgeneralWiWo\] Let $G$ be as above in . Then we can find incoming and outgoing Fatou coordinates for the respective incoming and outgoing basins of $G$ at infinity.
We also obtain information on the behavior of $G$ in the regions ${W^\textrm{a}}$ and ${W^{\textrm{b}}}$, since we can apply theorem \[GspecialWaWb\] to the maps ${G_{\textrm{a}}}$ and ${G_{\textrm{b}}}$ respectively. Even though we do not have a conjugacy of ${G_{\textrm{a}}}$ or ${G_{\textrm{b}}}$ on the regions ${W^\textrm{a}}$ and ${W^{\textrm{b}}}$, certain compositions of $G$ on these regions (together with translation maps) are conjugate to the action $(-1,g_\infty)$.
In theorem \[GspecialWaWb\] we proved that on the region ${W^\textrm{a}}$ the map given by ${\Psi^{\textrm{a}}}:= \lim_{n\to\infty}T_{(n,-n)} \circ {G_{\textrm{a}}}^n\circ T_{(-2n,0)}$ satisfies ${\Psi^{\textrm{a}}}\circ (-1,g_\infty) = T_{(-1,1)}\circ {\Psi^{\textrm{a}}}$. Since ${G_{\textrm{a}}}= ({\theta_{\textrm{a}}})^{-1}\circ G_2 \circ {\theta_{\textrm{a}}}$, we have ${G_{\textrm{a}}}^n = ({\theta_{\textrm{a}}})^{-1}\circ (\Psi_2)^{-1} \circ G^n \circ \Psi_2 \circ {\theta_{\textrm{a}}}$, so ${\Psi^{\textrm{a}}}= \lim_{n\to\infty}T_{(n,-n)} \circ ({\theta_{\textrm{a}}})^{-1}\circ (\Psi_2)^{-1} \circ G^n \circ \Psi_2 \circ {\theta_{\textrm{a}}}\circ T_{(-2n,0)}$ converges and satisfies the same commutative diagram.
Similarly, in the region ${W^{\textrm{b}}}$, the map given by ${\Psi^{\textrm{b}}}:= \lim_{n\to\infty}T_{(-2n,0)} \circ {G_{\textrm{b}}}^n\circ T_{(n,-n)}$ satisfies ${\Psi^{\textrm{b}}}\circ (-1,g_\infty) = T_{(-1,1)}\circ {\Psi^{\textrm{b}}}$. Since ${G_{\textrm{b}}}= ({\theta_{\textrm{b}}})^{-1}\circ G_1 \circ {\theta_{\textrm{b}}}$, we have ${G_{\textrm{b}}}^n = ({\theta_{\textrm{b}}})^{-1}\circ (\Psi_1)^{-1} \circ G^n \circ \Psi_1 \circ {\theta_{\textrm{b}}}$, so ${\Psi^{\textrm{b}}}= \lim_{n\to\infty}T_{(-2n,0)} \circ ({\theta_{\textrm{b}}})^{-1}\circ (\Psi_1)^{-1} \circ G^n \circ \Psi_1 \circ {\theta_{\textrm{b}}}\circ T_{(n,-n)}$ converges and satisfies the same commutative diagram.
We have therefore proven
\[GgeneralWaWb\] Let $G$ be as above in . Then on the regions: $$\begin{aligned}
\widetilde{{W^\textrm{a}}} &= \{(u,v) \in -S_R\times S_R, |u|^{M+1}>|v|\}\\
\nonumber\widetilde{{W^{\textrm{b}}}} &= \{(u,v)\in S_R\times(-S_R),|u|^{M+1}>|v|\},\end{aligned}$$ the following limits exist: ${\Psi^{\textrm{a}}}= \lim_{n\to\infty}T_{(n,-n)} \circ ({\theta_{\textrm{a}}})^{-1}\circ (\Psi_2)^{-1} \circ G^n \circ \Psi_2 \circ {\theta_{\textrm{a}}}\circ T_{(-2n,0)}$ and ${\Psi^{\textrm{b}}}=\lim_{n\to\infty}T_{(-2n,0)} \circ ({\theta_{\textrm{b}}})^{-1}\circ (\Psi_1)^{-1} \circ G^n \circ \Psi_1 \circ {\theta_{\textrm{b}}}\circ T_{(n,-n)}$, and their second coordinates conjugate $g_\infty$ to the translation $T_{1}$.
Abate, M. *Discrete holomorphic local dynamical systems*, Lecture Notes in Math. **1998**, Springer, Berlin (2010), 1–55.
Abate, M. and Tovena, F. *Poincaré-Bendixson theorems for meromorphic connections and homogeneous vector fields*, J. Differential Equations **251** (2011), 2612–2684.
Astorg, M., Buff, X., Dujardin, R., Peters, H. and Raissy, J. *A two-dimensional polynomial mapping with a wandering Fatou component*, Ann. of Math. **184** (2016), 263–313.
Bedford, E., Smillie, J., Ueda, T. *Parabolic bifurcations in complex dimension 2*, arXiv preprint arXiv:1208.2577, 2012.
Boc-Thaler, L., Forn[æ]{}ss, J.E. and Peters, H. *Fatou Components with Punctured Limit Sets*, Ergodic Theory Dynam. Systems [**35**]{} (2015), 1380–1393
Dujardin, R. *A non-laminar dynamical Green current*, Math. Ann. (2015), available online <http://link.springer.com/article/10.1007%2Fs00208-015-1274-0>.
Forn[æ]{}ss, J. E. and Sibony, N. *Classification of recurrent domains for some holomorphic maps*, Math. Ann. [**301**]{} (1995), no. 4, 813–820.
Hakim, M. *Attracting domains for semi-attractive transformations of ${\mathbb C}^p$*, Publ. Mat. **38** (1994), 479–499.
Jonsson, M. *Dynamics of polynomial skew products on $\mathbb C^2$*, Math. Ann. [**314**]{} (1999), 403–447.
Heinemann, S.-M. *Julia sets of skew products in ${\mathbb C}^2$*, Kyushu J. Math. [**52**]{} (1998), no. 2, 299–329.
Hubbard, J.H. *Parametrizing unstable and very unstable manifolds*, Mosc. Math. J. [**5**]{} (2005), no. 1, 105–124.
Koenigs, G. *Recherches sur les intégrales de certaines équations fonctionelles*, Ann. Sci. École Norm. Sup. Paris ($\text{3}^\text{e}$ ser.), [**1**]{} (1884), 1–41.
Lilov, K. *Fatou Theory in Two Dimensions*, PhD thesis, University of Michigan (2004).
Lyubich, M and Peters, H. *Classification of invariant Fatou components for dissipative H[é]{}non maps*, Geom. Func. Anal. [**24**]{} (2014), 887–915.
Milnor, J. *Dynamics in one complex variable*, Princeton University Press, Princeton, NJ (2006).
Peters, H. and Raissy, J. *Fatou components of elliptic polynomial skew products*, available online at https://arxiv.org/abs/1608.08803
Peters, H. and Smit, I. M. *Fatou components of attracting skew products*, preprint (2015), available online at http://arxiv.org/abs/1508.06605
Peters, H. and Vivas, L. *Polynomial skew-products with wandering Fatou-disks*, Math. Z. [**283**]{} (2016), no. 1-2, 349–366.
Roeder, R. *A dichotomy for Fatou components of polynomial skew products*, Conform. Geom. Dyn. [**15**]{} (2011), 7–19.
Rosay, J.P. and Rudin, W. *Holomorphic maps from $\mathbb C^n$ to $\mathbb C^n$*, Trans. Amer. Math. Soc. [**310**]{} (1988), no. 1, 47–86.
Sternberg, S. *Local contractions and a theorem of Poincaré*, Amer. J. Math. [**79**]{}, (1957), 809–824.
Sullivan, D. *Quasiconformal homeomorphisms and dynamics. [I]{}. [S]{}olution of the [F]{}atou-[J]{}ulia problem on wandering domains*, Ann. of Math. [**122**]{} (1985) 401–418.
Ueda, T. *Fatou sets in complex dynamics on projective spaces*, J. Math. Soc. Japan [**46**]{} (1994), 545–555.
Ueda, T. *Holomorphic maps on projective spaces and continuations of Fatou maps*, Michigan Math J. [**56**]{} (2008), no. 1, 145–153.
Vivas, L. *Fatou-Bieberbach domains as basins of attraction of automorphisms tangent to the identity*, J. Geom. Anal. [**22**]{} (2012), no. 2, pp. 352–382.
Vivas, L. *Parametrization of unstable manifolds and Fatou disks for parabolic skew-products*, to appear in Complex Analysis and its Synergies. Preprint available at https://arxiv.org/abs/1411.3110.
---
abstract: 'Entropic cosmology assumes several forms of entropy on the horizon of the universe, where the entropy can be considered to behave as if it were related to the exchange (the transfer) of energy. To discuss this exchangeability, the consistency of the two continuity equations obtained from two different methods is examined, focusing on a homogeneous, isotropic, spatially flat, and matter-dominated universe. The first continuity equation is derived from the first law of thermodynamics, whereas the second equation is from the Friedmann and acceleration equations. To study the influence of forms of entropy on the consistency, a phenomenological entropic-force model is examined, using a general form of entropy proportional to the $n$-th power of the Hubble horizon. In this formulation, the Bekenstein entropy (an area entropy), the Tsallis–Cirto black-hole entropy (a volume entropy), and a quartic entropy are represented by $n=2$, $3$, and $4$, respectively. The two continuity equations for the present model are found to be consistent with each other, especially when $n=2$, i.e., the Bekenstein entropy. The exchange of energy between the bulk (the universe) and the boundary (the horizon of the universe) should be a viable scenario consistent with the holographic principle.'
author:
- 'Nobuyoshi [Komatsu]{}$^{1}$'
- 'Shigeo [Kimura]{}$^{2}$'
title: General form of entropy on the horizon of the universe in entropic cosmology
---
Introduction
============
The accelerated expansion of the late universe [@PERL1998ab_Riess1998_2004] can be elegantly explained by $\Lambda$CDM (lambda cold dark matter) models that assume a cosmological constant $\Lambda$ and dark energy. However, it is well-known that the $\Lambda$CDM model suffers from theoretical difficulties, such as the cosmological constant problem [@Weinberg1989]. To resolve the difficulties, $\Lambda (t)$CDM models, in which a time-varying cosmological term $\Lambda (t)$ is assumed [@Freese-Mimoso; @Waga1994_Sola_2007-2009; @Sola_2009-2015; @Sola_2013b; @LimaSola_2013a; @Valent2015; @Sola_2015_1c; @Sola_2015_1a; @Sola_2015_1b], have been extensively examined. (For various other models, see, e.g., Refs. [@Miao1; @Bamba1; @Weinberg1; @Roy1] and references therein.)
As a possible alternative scenario, entropic-force models based on the holographic principle [@Hooft-Bousso], in which several forms of entropy on the horizon of the universe are assumed [@Easson12; @Cai1; @Koivisto-Costa1; @Lepe1; @Basilakos1; @Sola_2014a; @Koma4; @Koma5; @Koma6; @Koma7; @Koma8; @Gohar_2015a; @Gohar_2015b; @Nunes_2015b], have recently been proposed. For example, the Bekenstein entropy (an area entropy based on additive statistics) [@Bekenstein1], the Tsallis–Cirto entropy (a volume entropy based on nonadditive statistics) [@Tsallis2012], and a quartic entropy [@Koma5; @Koma6] have been suggested for the entropy on the horizon [@Easson12; @Koma6]. Most entropic-force models can be interpreted as a particular case of $\Lambda (t)$CDM models [@Basilakos1; @Sola_2014a]. This interpretation implies that the assumed entropy is exchangeable (reversible), i.e., entropy related to the exchange of energy [@Prigogine_1998]. That is, the entropy on the horizon is considered to behave as if it were related to ‘energy exchange cosmology’, which assumes the transfer of energy between two fluids [@Barrow22], e.g., the interaction between dark matter and dark energy, dynamical vacuum energy, etc. [@Wang0102_YWang2014; @Pavon_2005].
Such pairs of fluids are not generally employed in entropic cosmology because dark energy is not assumed. Accordingly, the exchangeability may imply the transfer of energy between the bulk (the universe) and the boundary (the horizon of the universe) [@Lepe1], because the information of the bulk is assumed to be holographically stored in the boundary [@Easson12]. However, the exchangeability has not yet been made clear in entropic cosmology. The exchangeability can probably be discussed in terms of the consistency of two continuity equations derived from two different methods [@Koma4]. For example, the continuity equation is typically derived from the Friedmann and acceleration equations because only two of the three equations are independent [@Ryden1]. Alternatively, the continuity equation can be derived from the first law of thermodynamics as well [@Koma4; @Ryden1]. Therefore, it is possible to discuss the consistency of the continuity equations derived from these two different methods. The forms of entropy on the horizon (i.e., area, volume, and quartic entropies) are expected to affect the consistency.
In contrast, several entropic-force models similar to bulk viscous models [@Weinberg0; @Murphy1; @Barrow1986-Lima1988; @Zimdahl1996-Nojiri2011] and CCDM (creation of cold dark matter) models [@Lima_1992b; @Lima_1992e; @Lima_2000; @Lima_2014b; @Harko_2014; @Zimdahl_1993; @Lima-Others1996-2008; @Lima2010-2014; @Lima_Newtonian_1997; @Lima2011; @Ramos_2014-2014b; @C301] assume irreversible entropy related to dissipation processes [@Koma5; @Koma6]. In those models, an effective description for pressure is used without assuming the exchange of energy. However, Prigogine *et al.* have proposed open systems with the exchange of energy, in which reversible and irreversible entropies are considered [@Prigogine_1988b; @Prigogine1989], to discuss the thermodynamics of cosmological matter creation for non-adiabatic processes. The proposed system is suitable for describing the general systems in entropic cosmology discussed here.
In this context, we formulate a phenomenological entropic-force model, in which area, volume, and quartic entropies [@Koma4; @Koma5; @Koma6] are systematically assumed to be the entropy on the horizon. Moreover, irreversible entropy due to matter creation [@Koma7; @Koma8] is included in that formulation. Using the present model, we examine whether the entropy on the horizon behaves exchangeably or not. In this short paper, to discuss the exchangeability, we focus on the consistency of the two continuity equations derived from different methods. The study of the exchangeability should provide new insights into entropic cosmology.
The remainder of the article is organized as follows. In Sec. \[General entropic-force models\], a phenomenological entropic-force model is formulated, assuming a general form of entropy on the horizon. In Sec. \[Two continuity equations\], two continuity equations are derived from two different methods. Specifically, in Sec. \[Continuity equation from the first law of thermodynamics\], the continuity equation is derived from the first law of thermodynamics, and in Sec. \[Continuity equation from the Friedmann equations\], the continuity equation is derived from the Friedmann and acceleration equations. The consistency of the two continuity equations is then discussed in Sec. \[Consistency\]. Finally, in Sec. \[Conclusions\], the conclusions of the study are presented.
Entropic-force models {#General entropic-force models}
=====================
In this section, a phenomenological entropic-force model that assumes a general form of entropy on the horizon of the universe is described. For this purpose, a homogeneous, isotropic, and spatially flat universe is considered, and the scale factor $a(t)$ is examined at time $t$ in the Friedmann–Lemaître–Robertson–Walker metric [@Koma4; @Koma5; @Koma6; @Koma7; @Koma8]. First, $\Lambda$CDM models are briefly reviewed in Sec. \[LCDM models\]. The entropic-force model is then formulated in Sec. \[Entropic-force model based on a general form of entropy\]. The derivation of entropic forces is based on the original work of Easson *et al.* [@Easson12] and the recent work of the present authors [@Koma4; @Koma5; @Koma6]. The concept of entropic force considered here is different from the idea that gravity itself is an entropic force [@Padma1; @Verlinde1], as described in Ref. [@Easson12]. In the present paper, the inflation of the early universe is not discussed because we have chosen to focus on background evolution of the late universe.
Note that irreversible entropy due to matter creation is *not* considered in this section; it is discussed in the following sections.
$\Lambda$CDM model {#LCDM models}
------------------
In this subsection, the well-known $\Lambda$CDM models are briefly reviewed [@Ryden1; @Weinberg1]. The acceleration equation is written as $$\frac{ \ddot{a}(t) }{ a(t) } = \dot{H}(t) + H(t)^{2}
= - \frac{ 4\pi G }{ 3 } \left ( \rho(t) + \frac{3p(t)}{c^2} \right ) + \frac{\Lambda}{3} ,
\label{eq:FRW2_LCDM}$$ where the Hubble parameter $H(t)$ is defined by $$H(t) \equiv \frac{ da/dt }{a(t)} = \frac{ \dot{a}(t) } {a(t)} ,
\label{eq:Hubble}$$ and $G$, $\Lambda$, $c$, $\rho(t)$, and $p(t)$ are the gravitational constant, a cosmological constant, the speed of light, the mass density of cosmological fluids, and the pressure of cosmological fluids, respectively [@Koma5]. The right-hand side of Eq. (\[eq:FRW2\_LCDM\]) includes a driving term $\Lambda/3$, which can explain the accelerated expansion of the late universe. This term corresponds to a cosmological constant term and is interpreted as an additional energy component called dark energy.
Entropic-force model based on a general form of entropy proportional to $r_{H}^{n}$ {#Entropic-force model based on a general form of entropy}
-----------------------------------------------------------------------------------
In entropic-force models, extra driving terms are derived from entropic forces, unlike in $\Lambda$CDM models [@Easson12]. The entropic-force model assumes that the horizon of the universe has an associated entropy $S$ and an approximate temperature $T$. In this study, we use the Hubble horizon as the preferred screen because the apparent horizon coincides with the Hubble horizon in a spatially flat universe [@Easson12]. If we were instead considering a spatially non-flat universe, we would use the apparent horizon as the preferred screen [@Easson12].
The Hubble horizon (radius) $r_{H}$ is given by $$r_{H} = \frac{c}{H} \quad \textrm{and therefore} \quad \dot{r}_{H} = - \frac{ H \dot{H} }{ c^2 } r_{H}^3 .
\label{eq:rH}$$ The temperature $T$ on the horizon is given by $$T = \frac{ \hbar H}{ 2 \pi k_{B} } \times \gamma = \frac{ \hbar }{ 2 \pi k_{B} } \frac{c}{ r_{H} } \gamma ,
\label{eq:T0}$$ where $k_{B}$ and $\hbar$ are the Boltzmann constant and the reduced Planck constant, respectively. Note that the temperature considered here is obtained by multiplying the horizon temperature, $ \hbar H /( 2 \pi k_{B} ) $, by $\gamma$, a non-negative free parameter for the temperature [@Koma4; @Koma5].
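For orientation, the magnitudes implied by Eqs. (\[eq:rH\]) and (\[eq:T0\]) can be sketched as follows ($H_{0}$ is an assumed fiducial input, and $\gamma = 1$ is illustrative only):

```python
# Magnitudes implied by r_H = c/H (Eq. (rH)) and T = gamma*hbar*H/(2 pi k_B)
# (Eq. (T0)). H0 is an assumed fiducial value; gamma = 1 is illustrative only.
import math

c = 2.998e8          # speed of light [m/s]
hbar = 1.0546e-34    # reduced Planck constant [J s]
k_B = 1.3807e-23     # Boltzmann constant [J/K]
H0 = 2.2e-18         # assumed present Hubble parameter [1/s]
gamma = 1.0          # free temperature parameter (illustrative)

r_H = c / H0                                   # Hubble horizon radius [m]
T = gamma * hbar * H0 / (2.0 * math.pi * k_B)  # horizon temperature [K]

print(f"r_H ~ {r_H:.2e} m, T ~ {T:.2e} K")
```

For $\gamma = 1$ the horizon temperature is of order $10^{-30}$ K, which is why only the algebraic structure, not the absolute scale, matters in the derivations below.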
In the present study, we do not discuss the magnitude of the free parameter $\gamma$; however, before proceeding, we briefly review it in light of previous and recent studies. Easson *et al.* have suggested a similar modified coefficient for the temperature, in which $\gamma$ may be estimated from a derivation of surface terms or from the Hawking temperature description [@Easson12]. Cai *et al.* have proposed that $\gamma$ can be interpreted as a parameter for the holographic screen temperature [@Cai1]. In those works, $\gamma$ is considered to be of the order of $O(1)$ [@Koma4]. On the other hand, Dabrowski *et al.* have recently reported that a similar parameter is two to four orders of magnitude smaller than $O(1)$ [@Gohar_2015b]; in that paper, a combination of holographic and vacuum dark energies is likely assumed from different viewpoints. The $\gamma$ used in Ref. [@Gohar_2015b] should therefore be related to a parameter $\nu$ in dynamical vacuum models (see the second paper of Ref. [@Valent2015]). The parameter $\nu$ can be small because it behaves as a type of $\beta$-function coefficient in quantum field theory; in fact, $\nu \sim 10^{-3}$ is expected from observations, as examined by Solà *et al.* [@Sola_2015_1c]. Consequently, the similar parameter used in Ref. [@Gohar_2015b], which is expected to be related to the $\gamma$ considered here, may be small as well. The discussion of $\gamma$ should provide new insights into entropic cosmology because the smallness of $\gamma$ has not yet been explained by the holographic approach; this task is left for future research. Keep in mind that the $\gamma$ considered here is treated as a free parameter.
In the original entropic-force model [@Easson12], an associated entropy on the Hubble horizon is given as $$S_{r2} = \frac{ k_{B} c^3 }{ \hbar G } \frac{A_{H}}{4} = \frac{ k_{B} c^3 }{ \hbar G } \frac{ 4 \pi r_{H}^2 }{4} = \frac{ \pi k_{B} c^3 }{ \hbar G } r_{H}^2 ,
\label{eq:SH(r2)}$$ where $A_{H}$ is the surface area of a sphere with the Hubble radius $r_{H}$. This entropy is the Bekenstein entropy proportional to area and $r_{H}^{2}$ [@Bekenstein1]. Recently, several forms of entropy have been proposed for the entropy on the horizon of the universe. For example, a volume entropy $S_{r3}$ and a quartic entropy $S_{r4}$ (proportional to $r_{H}^{4}$) have been used for entropic-force models [@Koma5; @Koma6]. The volume entropy $S_{r3}$ is a generalized black-hole entropy, i.e., the Tsallis–Cirto black-hole entropy [@Tsallis2012], based on nonadditive statistics [@Tsa0]. In contrast, although the meaning of $S_{r4}$ is less clear, it can be considered as a form of entropy that would arise if extra dimensions existed [@Koma6]. Consequently, it is found that an area entropy $S_{r2}$, a volume entropy $S_{r3}$, and a quartic entropy $S_{r4}$ can lead to $H^{2}$, $H$, and constant entropic-force terms, respectively. Each entropic-force term has been separately discussed in Ref. [@Koma6]. In the present study, a general form of entropy is used to discuss a phenomenological entropic-force model systematically. Note that Dabrowski, Gohar, and Salzano have recently proposed more extended entropic forces to examine varying-constant theories [@Gohar_2015a; @Gohar_2015b].
The general form of entropy (proportional to $r_{H}^{n}$) is defined by $$S_{rn} = \frac{ \pi k_{B} c^3 }{ \hbar G } \times L_{n} r_{H}^{n} \quad (n=2, 3, 4) ,
\label{eq:S(rn)}$$ where $n=2$, $3$, and $4$ correspond to indices of area, volume, and quartic entropies, respectively, and $L_{n}$ is a non-negative constant free parameter. The following derivation can also be applied to higher-order forms of entropy. (The values $ L_{2}=1$, $ L_{3}=\zeta$, and $ L_{4}=\psi$ were used for these constant parameters in Ref. [@Koma6].)
We now derive an entropic force $F_{rn}$ from a general form of entropy, $S_{rn} \propto r_{H}^{n}$. The entropic force can be given by $$F_{rn} = - \frac{dE}{dr} = - T \frac{dS_{rn} }{dr} \left ( = - T \frac{dS_{rn} }{dr_{H}} \right ) ,
\label{eq:Frn}$$ where the minus sign indicates the direction of increasing entropy or the screen corresponding to the horizon [@Easson12]. Substituting Eqs. (\[eq:T0\]) and (\[eq:S(rn)\]) into Eq. (\[eq:Frn\]), the entropic-force $F_{rn}$ becomes $$\begin{aligned}
F_{rn} &= - T \frac{dS_{rn}}{dr_{H}}
= - \frac{ \hbar }{ 2 \pi k_{B} } \frac{c}{ r_{H} } \gamma \times \frac{d}{dr_{H}} \left [ \frac{ \pi k_{B} c^3 }{ \hbar G } \times L_{n} r_{H}^{n} \right ] \notag \\
%
&= - \gamma \frac{c^{4}}{G} \left ( \frac{n L_{n} }{2} \right ) r_{H}^{n-2} .
\label{eq:F(rn)}\end{aligned}$$ From Eq. (\[eq:F(rn)\]), the pressure $p_{rn}$ is given as $$\begin{aligned}
p_{rn} &= \frac{ F_{rn} } {A_{H}} = - \gamma \frac{c^{4}}{G} \left ( \frac{n L_{n} }{2} \right ) r_{H}^{n-2} \frac{1} {4 \pi r_{H}^2} \notag \\
&= - \gamma \frac{c^{4} n L_{n} }{ 8 \pi G } r_{H}^{n-4}
= - \gamma \frac{c^{4} n L_{n} }{ 8 \pi G } \left ( \frac{c }{H} \right )^{n-4} \notag \\
&= - \gamma \left ( \frac{c^{n} n L_{n} }{ 8 \pi G } \right ) H^{4-n} .
\label{eq:P(rn)}\end{aligned}$$ In entropic cosmology [@Easson12], $p_{rn}$ is added to the acceleration equation. To this end, Eq. (\[eq:FRW2\_LCDM\]) is arranged as follows. Setting $\Lambda =0$, replacing $p$ by $p + p_{rn}$, and substituting Eq. (\[eq:P(rn)\]) into Eq. (\[eq:FRW2\_LCDM\]), the acceleration equation is given by $$\begin{aligned}
\frac{ \ddot{a} }{ a } &= - \frac{ 4\pi G }{ 3 } \left ( \rho + \frac{ 3 (p+p_{rn}) }{c^2} \right ) \notag \\
&= - \frac{ 4\pi G }{ 3 } \left ( \rho + \frac{ 3 p }{c^2} \right ) + \gamma \left ( \frac{c^{n-2} n L_{n} }{ 2 } \right ) H^{4-n} .
\label{eq:FRW2_(rn)}\end{aligned}$$ The last term on the right-hand side is the so-called entropic-force term. As in most entropic-force models, adding the entropic-force term to the Friedmann equation $H^{2} = 8\pi G \rho/ 3$ gives $$H^{2} = \frac{ 8\pi G }{ 3 } \rho + \gamma \left ( \frac{c^{n-2} n L_{n} }{ 2 } \right ) H^{4-n} .
\label{eq:FRW1_(rn)}$$ For $n=2$, $3$, and $4$, the last terms on the right-hand side of Eq. (\[eq:FRW2\_(rn)\]) are $\gamma L_{2} H^2$, $\gamma (3c L_{3} /2)H$, and $\gamma (2c^{2} L_{4}) $, respectively. That is, the $H^{2}$, $H$, and constant terms are phenomenologically derived from the area, volume, and quartic entropies, respectively. This result agrees with that of Ref. [@Koma6], in which $L_{2}=1$, $L_{3}=\zeta$, and $L_{4} = \psi$ were used. Recall that irreversible entropy due to matter creation is neglected in this section; accordingly, the formulations of the Friedmann and acceleration equations are essentially equivalent to those of $\Lambda (t)$CDM models. This type of $\Lambda(t)$CDM model has been examined extensively (see, e.g., Refs. [@Sola_2009-2015; @LimaSola_2013a; @Sola_2013b; @Valent2015; @Sola_2015_1c; @Sola_2015_1a; @Sola_2015_1b]).
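As an independent cross-check of the algebra leading from Eq. (\[eq:S(rn)\]) to the entropic-force terms above, the sketch below differentiates $S_{rn}$ numerically and compares the result with the closed forms of Eqs. (\[eq:F(rn)\]) and (\[eq:P(rn)\]); all constants are illustrative unit values, not physical ones:

```python
# Numerical cross-check of Eqs. (F(rn)) and (P(rn)): differentiate
# S_rn = (pi k_B c^3 / (hbar G)) L_n r_H^n and compare with the closed forms.
# The constants are illustrative unit values; only the algebra is being checked.
import math

c, G, hbar, k_B, gamma, L_n = 3.0, 2.0, 0.7, 1.3, 0.5, 1.7
H = 0.9
r_H = c / H
T = hbar / (2.0 * math.pi * k_B) * (c / r_H) * gamma    # Eq. (T0)

def S(r, n):   # general entropy, Eq. (S(rn))
    return math.pi * k_B * c**3 / (hbar * G) * L_n * r**n

for n in (2, 3, 4):
    h = 1e-6 * r_H
    F_num = -T * (S(r_H + h, n) - S(r_H - h, n)) / (2.0 * h)           # Eq. (Frn)
    F_closed = -gamma * c**4 / G * (n * L_n / 2.0) * r_H**(n - 2)      # Eq. (F(rn))
    p_rn = -gamma * c**n * n * L_n / (8.0 * math.pi * G) * H**(4 - n)  # Eq. (P(rn))
    # extra term entering the acceleration equation: -(4 pi G / c^2) p_rn
    term = -4.0 * math.pi * G * p_rn / c**2
    term_expected = gamma * c**(n - 2) * n * L_n / 2.0 * H**(4 - n)
    assert abs(F_num - F_closed) < 1e-6 * abs(F_closed)
    assert abs(term - term_expected) < 1e-9 * abs(term_expected)
print("entropic-force terms check out for n = 2, 3, 4")
```

For $n=2$, $3$, and $4$ the recovered terms reduce to $\gamma L_{2} H^{2}$, $\gamma (3 c L_{3}/2) H$, and $\gamma (2 c^{2} L_{4})$, as quoted above.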
In the above discussion, the entropic force $F_{rn}$ was calculated from Eq. (\[eq:Frn\]), i.e., $F_{rn} = - T (dS_{rn} / dr_{H})$. Therefore, the heat flow $dQ$ across the horizon can be calculated from $dQ=TdS_{rn}$ as if $S_{rn}$ were exchangeable. Based on this concept and using Eq. (\[eq:F(rn)\]), $dQ$ is given as $$\begin{aligned}
dQ &= T dS = T \left ( \frac{dS}{dr} \right ) dr = T \left ( \frac{dS_{rn}}{dr_{H}} \right ) dr_{H} \notag \\
&= \gamma \frac{c^{4}}{G} \left ( \frac{n L_{n} }{2} \right ) r_{H}^{n-2} dr_{H} .
\label{eq:dQ=TdS}\end{aligned}$$ Using this heat flow, in Sec. \[Continuity equation from the first law of thermodynamics\], we derive the continuity equation from the first law of thermodynamics.
Continuity equations {#Two continuity equations}
====================
In this section, two continuity equations are derived by two different methods. In Sec. \[Continuity equation from the first law of thermodynamics\], the continuity equation is derived from the first law of thermodynamics. In Sec. \[Continuity equation from the Friedmann equations\], the continuity equation is derived from the Friedmann and acceleration equations. In the following, irreversible entropy due to matter creation is also considered; that is, we examine the entropic-force model with matter creation. Accordingly, the formulation discussed here is slightly more involved.
Continuity equation from the first law of thermodynamics {#Continuity equation from the first law of thermodynamics}
--------------------------------------------------------
In this subsection, the continuity equation for the entropic-force model with matter creation is derived from the first law of thermodynamics. For this purpose, the first law of thermodynamics for non-adiabatic processes with matter creation is briefly reviewed, following the work of Prigogine *et al.* [@Prigogine_1988b].
First, let us consider a closed system containing a constant number of particles $N$ in a volume $V$. From the first law of thermodynamics, the heat flow $dQ$ across a region during a time interval $dt$ is given by $$dQ = dE + p dV ,
\label{eq:ClosedFirstLaw_0}$$ where $dE$ and $dV$ are changes in the internal energy $E$ and volume $V$ of the region, respectively [@Ryden1]. Dividing this equation by $dt$ gives the following differential form of the first law of thermodynamics [@Modak2012]: $$\frac{dQ}{dt} = \frac{dE}{dt} + p \frac{dV}{dt} = \frac{d}{dt} (\varepsilon V) + p \frac{dV}{dt} ,
\label{eq:ClosedFirstLaw}$$ where $\varepsilon$ represents the energy density of cosmological fluids, i.e., $ \varepsilon = \rho c^2$. In addition, $dQ$ is assumed to be related to reversible (exchangeable) entropy $S_{\textrm{rev}}$ [@Prigogine_1998]. If adiabatic (and isentropic) processes are considered, i.e., $dQ/dt = 0$, then Eq. (\[eq:ClosedFirstLaw\]) is written as $$\frac{d}{dt} (\varepsilon V) + p \frac{dV}{dt} = 0 .
\label{eq:ClosedFirstLaw(dQ=0)}$$ Using Eq. (\[eq:ClosedFirstLaw(dQ=0)\]), the continuity equation for the adiabatic process can be written as [@Ryden1] $$\dot{\rho} + 3 \frac{\dot{a}}{a} \left ( \rho + \frac{p}{c^2} \right ) = 0 ,
\label{eq:fluid(dQ=0)}$$ where the right-hand side of Eq. (\[eq:fluid(dQ=0)\]) is zero because the right-hand side of Eq. (\[eq:ClosedFirstLaw(dQ=0)\]) is zero.
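A quick numerical sanity check of Eqs. (\[eq:ClosedFirstLaw(dQ=0)\]) and (\[eq:fluid(dQ=0)\]): for dust ($p=0$) with the assumed illustrative inputs $\rho \propto a^{-3}$ and $a \propto t^{2/3}$, both expressions vanish.

```python
# Sanity check of Eqs. (ClosedFirstLaw(dQ=0)) and (fluid(dQ=0)) for dust (p = 0):
# rho ~ a^{-3} with a ~ t^{2/3} makes both expressions vanish. Values illustrative.
import math

c = 3.0e8                        # speed of light [m/s]
rho0, rhat = 2.5, 1.0            # illustrative density scale and comoving radius

def a(t):   return t**(2.0 / 3.0)         # assumed matter-dominated scale factor
def rho(t): return rho0 * a(t)**(-3)      # dust solution of the continuity equation

def eps_V(t):   # energy eps*V inside the comoving sphere r(t) = a(t) * rhat
    V = 4.0 * math.pi / 3.0 * (a(t) * rhat)**3
    return rho(t) * c**2 * V

t, h = 1.0, 1e-6
dEdt = (eps_V(t + h) - eps_V(t - h)) / (2.0 * h)  # d(eps V)/dt; p dV/dt = 0 here
rho_dot = (rho(t + h) - rho(t - h)) / (2.0 * h)
lhs = rho_dot + 3.0 * (2.0 / (3.0 * t)) * rho(t)  # Eq. (fluid(dQ=0)); adot/a = 2/(3t)

print(dEdt, lhs)   # both ~ 0
```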
Now, let us consider more general situations that include matter creation [@Lima_1992b; @Lima_1992e; @Lima_2000; @Lima_2014b; @Harko_2014; @Zimdahl_1993; @Lima-Others1996-2008; @Lima2010-2014; @Lima_Newtonian_1997; @Lima2011; @Ramos_2014-2014b]. To this end, we assume an open system in which $N$ is time dependent. The matter creation results in the generation of irreversible entropy. For non-adiabatic processes taking place in the open system, the first law of thermodynamics can be written as $$\frac{d}{dt} (\varepsilon V) + p \frac{dV}{dt} = \left ( \frac{dQ}{dt} \right )_{\textrm{rev}} + \left ( \frac{\varepsilon + p}{n} \frac{d}{dt} (n V) \right )_{\textrm{irr}} ,
\label{eq:OpenFirstLaw_ri}$$ where $n$ is the particle number density given by $N/V$ [@Prigogine_1988b; @Harko_2014]. The entropy per particle is assumed to be constant [@Lima_1992b; @Lima_2014b]. For details regarding matter creation, see, e.g., Refs. [@Prigogine_1988b; @Lima_1992b; @Harko_2014]. The first term $dQ/dt$ on the right-hand side of Eq. (\[eq:OpenFirstLaw\_ri\]) is assumed to be related to reversible entropy $S_{\textrm{rev}}$ due to the exchange (the transfer) of energy. In contrast, the second term on the right-hand side, i.e., $[(\varepsilon + p)/n] d(nV)/dt$, is related to irreversible entropy $S_{\textrm{irr}}$ due to matter creation. Accordingly, the total entropy change is written as [@Prigogine_1988b] $$dS = dS_{\textrm{rev}} + dS_{\textrm{irr}} ,
\label{eq:Stotal1}$$ with $$dS_{\textrm{rev}} = \frac{dQ}{T} \quad \textrm{and} \quad d S_{\textrm{irr}} \geq 0 ,
\label{eq:Stotal2}$$ where $dS_{\textrm{rev}} = dQ/T$ is assumed [@Prigogine_1998]. Typically, the heat flow $dQ$ is neglected [@Prigogine_1988b] when adiabatic matter creation is examined [@Lima_1992b; @Lima_1992e; @Lima_2000; @Lima_2014b; @Harko_2014; @Zimdahl_1993; @Lima-Others1996-2008; @Lima2010-2014; @Lima_Newtonian_1997; @Lima2011; @Ramos_2014-2014b]. However, whether it is negligible should be related to the free parameter $\gamma$ for the temperature, as discussed later. Accordingly, in this study, we retain the $dQ/dt$ term in Eq. (\[eq:OpenFirstLaw\_ri\]). (Although an entropic-force model in a dissipative universe has been proposed recently, the exchange of energy is neglected there [@Koma7; @Koma8]. More general thermodynamics for matter creation has been discussed by Harko [@Harko_2014].)
To derive the continuity equation, energy flows across the Hubble horizon at $r = r_{H}$ are considered. Therefore, Eq. (\[eq:OpenFirstLaw\_ri\]) can be written as $$\begin{aligned}
& \left [ \frac{d}{dt} (\varepsilon V) + p \frac{dV}{dt} \right ]_{r=r_{H}} \notag \\
&= \left [ \left ( \frac{dQ}{dt} \right )_{\textrm{rev}} + \left ( \frac{\varepsilon + p}{n} \frac{d}{dt} (n V) \right )_{\textrm{irr}} \right ]_{r=r_{H}} .
\label{eq:OpenFirstLaw_rev_irr_r=rH}\end{aligned}$$ To calculate the left-hand side of Eq. (\[eq:OpenFirstLaw\_rev\_irr\_r=rH\]), consider a sphere of arbitrary radius $r$ [@Modak2012]. The volume of the sphere is given by $V = 4 \pi r^{3} /3$. In addition, $r$ is set to $r_{H}$ after the time derivative in Eq. (\[eq:OpenFirstLaw\_rev\_irr\_r=rH\]) is calculated [@Modak2012]. Concretely, we consider a sphere of arbitrary radius $\hat{r}$ expanding along with the universal expansion: $$r(t) = a(t) \hat{r} .$$ The volume $V(t)$ of the sphere is $$V(t) = \frac{4 \pi}{3} r(t)^3 = \frac{4 \pi}{3} \hat{r}^3 a(t)^3 .
\label{eq:V(t)}$$ From Eq. (\[eq:V(t)\]), the rate of change of the sphere’s volume can be given as [@Ryden1; @Koma4] $$\frac{dV}{dt} = \dot{V} = \frac{4 \pi}{3} \hat{r}^3 (3 a^2 \dot{a} ) = V \left ( 3 \frac{\dot{a}}{a} \right ) .
\label{eq:dotV}$$ Using Eq. (\[eq:dotV\]), the rate of change of the sphere’s internal energy is $$\frac{d}{dt} (\varepsilon V) = \dot{\varepsilon} V + \varepsilon \dot{V} = \left ( \dot{\varepsilon} + 3 \frac{\dot{a}}{a} \varepsilon \right ) V .
\label{eq:dotE}$$ Substituting Eqs. (\[eq:dotV\]) and (\[eq:dotE\]) into $d(\varepsilon V)/dt + p dV/dt$, and using $\varepsilon = \rho c^{2} $, we have $$\begin{aligned}
\frac{d}{dt} (\varepsilon V) + p \frac{dV}{dt}
&= \left ( \dot{\varepsilon} + 3 \frac{\dot{a}}{a} \varepsilon \right ) V
+ p V \left ( 3 \frac{\dot{a}}{a} \right ) \notag \\
&= \left [ \dot{\varepsilon} + 3 \frac{\dot{a}}{a} \left ( \varepsilon + p \right ) \right ] V \notag \\
&= \left [ \dot{\rho} + 3 \frac{\dot{a}}{a} \left ( \rho + \frac{p}{c^2} \right ) \right ] c^2 \left ( \frac{4 \pi}{3} r^3 \right ) .
\label{eq:dotEpdotV} \end{aligned}$$ This equation corresponds to the left-hand side of Eq. (\[eq:OpenFirstLaw\_rev\_irr\_r=rH\]), where the arbitrary radius $r$ is used. If we assume adiabatic (and isentropic) processes without dissipation, the right-hand side of Eq. (\[eq:OpenFirstLaw\_rev\_irr\_r=rH\]) is zero. Consequently, the continuity equation is given as $ \dot{\rho} + 3 (\dot{a}/a) ( \rho + p/c^2 ) = 0 $.
To calculate the right-hand side of Eq. (\[eq:OpenFirstLaw\_rev\_irr\_r=rH\]), we assume both heat flows related to $S_{\textrm{rev}}$ and matter creation related to $S_{\textrm{irr}}$. In this study, the heat flow can be derived from a general form of entropy \[Eq. (\[eq:S(rn)\])\]: $ S_{\textrm{rev}} = S_{rn} \propto r_{H}^{n}$ (for $n=2$, $3$, and $4$). Using Eqs. (\[eq:rH\]) and (\[eq:dQ=TdS\]), the heat flow rate is given as $$\begin{aligned}
\left ( \frac{dQ}{dt} \right )_{\textrm{rev}}
&= \gamma \frac{c^{4}}{G} \left ( \frac{n L_{n} }{2} \right ) r_{H}^{n-2} \frac{dr_{H}}{dt} \notag \\
&= \gamma \frac{c^{4}}{G} \left ( \frac{n L_{n} }{2} \right ) r_{H}^{n-2} \left ( - \frac{ H \dot{H} }{ c^2 } r_{H}^3 \right ) \notag \\
&= - \gamma \frac{c^{2}}{G} \left ( \frac{n L_{n} }{2} \right ) r_{H}^{n+1} H \dot{H} .
\label{eq:dQdt_rev_FirstTerm}\end{aligned}$$ This equation indicates that the heat flow rate is negligible when $\gamma$ is sufficiently small. In contrast, the second term on the right-hand side of Eq. (\[eq:OpenFirstLaw\_rev\_irr\_r=rH\]) is related to $S_{\textrm{irr}}$ for matter creation [@Prigogine_1988b; @Prigogine1989; @Lima_1992b; @Lima_1992e; @Lima_2000; @Lima_2014b; @Harko_2014; @Zimdahl_1993; @Lima-Others1996-2008; @Lima2010-2014; @C301; @Lima_Newtonian_1997; @Lima2011; @Ramos_2014-2014b]. Using Eq. (\[eq:dotE\]) with $\varepsilon$ replaced by $n$, we obtain $$\frac{d}{dt} (n V) = \left ( \dot{n} + 3 \frac{\dot{a}}{a} n \right ) V .
\label{eq:dot_n}$$ Substituting Eq. (\[eq:dot\_n\]) into the second term on the right-hand side of Eq. (\[eq:OpenFirstLaw\_rev\_irr\_r=rH\]), and using $\dot{n} + 3 (\dot{a}/a) n = n \Gamma$ [@Lima_2014b], we have $$\begin{aligned}
\left ( \frac{\varepsilon + p}{n} \frac{d}{dt} (n V) \right )_{\textrm{irr}}
&= \frac{\varepsilon + p}{n} \left ( \dot{n} + 3 \frac{\dot{a}}{a} n \right ) V \notag \\
&= \frac{\varepsilon + p}{n} ( n \Gamma ) V
= (\varepsilon + p ) \Gamma V \notag \\
&= \left ( \rho + \frac{p}{c^{2}} \right ) c^{2} V \Gamma ,
\label{eq:SecondTerm}\end{aligned}$$ where $\Gamma$ represents the particle production rate [@Harko_2014; @Lima_2014b].
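The reduction in Eq. (\[eq:SecondTerm\]) can be checked numerically: with a constant production rate $\Gamma$ (an assumption made only for this sketch), $n \propto a^{-3} e^{\Gamma t}$ solves $\dot{n} + 3 (\dot{a}/a) n = n \Gamma$, and $d(nV)/dt$ indeed equals $n V \Gamma$.

```python
# Check of Eq. (SecondTerm): with an assumed constant production rate Gamma,
# n(t) = n0 a^{-3} exp(Gamma t) solves ndot + 3(adot/a) n = n Gamma, so
# d(nV)/dt = n V Gamma and the creation term becomes (eps + p) V Gamma.
import math

Gamma, n0, rhat = 0.3, 5.0, 1.0    # illustrative values

def a(t):    return t**(2.0 / 3.0)                         # assumed scale factor
def nden(t): return n0 * a(t)**(-3) * math.exp(Gamma * t)  # particle number density
def V(t):    return 4.0 * math.pi / 3.0 * (a(t) * rhat)**3

t, h = 1.0, 1e-6
dnV_dt = (nden(t + h) * V(t + h) - nden(t - h) * V(t - h)) / (2.0 * h)
rhs = nden(t) * V(t) * Gamma       # so (eps+p)/n * d(nV)/dt = (eps+p) V Gamma

print(dnV_dt, rhs)
```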
We now calculate Eq. (\[eq:OpenFirstLaw\_rev\_irr\_r=rH\]). Substituting Eqs. (\[eq:dotEpdotV\]), (\[eq:dQdt\_rev\_FirstTerm\]), and (\[eq:SecondTerm\]) into Eq. (\[eq:OpenFirstLaw\_rev\_irr\_r=rH\]), setting $r=r_{H}$, and arranging the resultant equation, we obtain $$\begin{aligned}
\dot{\rho} + 3 \frac{\dot{a}}{a} \left ( \rho + \frac{ p_{e}^{\textrm{irr}} }{c^2} \right )
&= - \gamma \frac{c^{2}}{G} \left ( \frac{n L_{n} }{2} \right ) r_{H}^{n+1} H \dot{H} \frac{1}{\frac{4 \pi}{3} r_{H}^3 c^{2}} \notag \\
&= - \gamma \frac{3 n L_{n} }{ 8 \pi G} r_{H}^{n-2} H \dot{H} \notag \\
&= - \gamma \frac{3 n L_{n} }{ 8 \pi G} \left ( \frac{c}{H} \right )^{n-2} H \dot{H} \notag \\
&= - \gamma \left ( \frac{3 c^{n-2} n L_{n} }{ 8 \pi G} \right ) H^{3-n} \dot{H} ,
\label{eq:fluid_rev_irr}\end{aligned}$$ where $p_{e}^{\textrm{irr}}$ is an effective pressure given by $p_{e}^{\textrm{irr}} = p + p_{c}^{\textrm{irr}} $, and $p_{c}^{\textrm{irr}}$ is a creation pressure for constant specific entropy in adiabatic matter creation [@Prigogine_1988b; @Lima_2014b; @Harko_2014]. The creation pressure is defined as $$p_{c}^{\textrm{irr}} = - \frac{(\rho c^{2} + p) \Gamma}{3H} .
\label{eq:pc_irr}$$ For clarity, the effective pressure is written as $p_{e}^{\textrm{irr}}$ because it includes $p_{c}^{\textrm{irr}}$.
In the present paper, a matter-dominated universe (when $p=0$) is considered. Therefore, the effective pressure $p_{e}^{\textrm{irr}}$ is given by $p_{e}^{\textrm{irr}} = p + p_{c}^{\textrm{irr}} = p_{c}^{\textrm{irr}} $. Consequently, Eqs. (\[eq:fluid\_rev\_irr\]) and (\[eq:pc\_irr\]) are rewritten as $$\dot{\rho} + 3 \frac{\dot{a}}{a} \left ( \rho + \frac{ p_{e}^{\textrm{irr}} }{c^2} \right ) = \left [ - \gamma \left ( \frac{3 c^{n-2} n L_{n} }{ 8 \pi G} \right ) H^{3-n} \dot{H} \right ]_{\textrm{rev}}
\label{eq:fluid_rev_irr_p=0}$$ and $$p_{e}^{\textrm{irr}} = p_{c}^{\textrm{irr}} = - \frac{ \rho c^{2} \Gamma }{3H} .
\label{eq:pc_irr_p=0}$$ Equation (\[eq:fluid\_rev\_irr\_p=0\]) is the modified continuity equation derived from the first law of thermodynamics. The right-hand side of Eq. (\[eq:fluid\_rev\_irr\_p=0\]) is considered to be related to reversible entropy, assuming a general form of entropy on the horizon given by $ S_{\textrm{rev}} = S_{rn} \propto r_{H}^{n}$. In contrast, $p_{e}^{\textrm{irr}}$ ($= p_{c}^{\textrm{irr}} $) on the left-hand side is related to irreversible entropy due to matter creation [@Prigogine_1988b; @Lima_2014b; @Harko_2014]. If the heat flow rate is negligible (i.e., when $\gamma$ is sufficiently small), the continuity equation for adiabatic matter creation is given by $ \dot{\rho} + 3 (\dot{a}/a) ( \rho + p_{e}^{\textrm{irr}}/c^2 ) = 0 $. Substituting $n=2$, $L_{2}=1$, and $p_{e}^{\textrm{irr}} =p$ into Eq. (\[eq:fluid\_rev\_irr\_p=0\]), we obtain the continuity equation discussed in Ref. [@Koma4].
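The right-hand side of Eq. (\[eq:fluid\_rev\_irr\_p=0\]) can be recovered by dividing the heat flow rate of Eq. (\[eq:dQdt\_rev\_FirstTerm\]) by $c^{2} V$ with $V = 4 \pi r_{H}^{3}/3$; the sketch below checks this for $n = 2$, $3$, and $4$ with illustrative unit constants:

```python
# Recovering the right-hand side of Eq. (fluid_rev_irr_p=0): divide the heat
# flow rate, Eq. (dQdt_rev_FirstTerm), by c^2 V with V = 4 pi r_H^3 / 3.
# All constants are illustrative unit values.
import math

c, G, gamma, L_n = 3.0, 2.0, 0.5, 1.7
H, Hdot = 0.9, -0.2                # illustrative background values (Hdot < 0)
r_H = c / H
V = 4.0 * math.pi / 3.0 * r_H**3

for n in (2, 3, 4):
    dQdt = -gamma * c**2 / G * (n * L_n / 2.0) * r_H**(n + 1) * H * Hdot
    source = dQdt / (V * c**2)     # right-hand side of Eq. (fluid_rev_irr)
    closed = -gamma * (3.0 * c**(n - 2) * n * L_n / (8.0 * math.pi * G)) \
             * H**(3 - n) * Hdot
    assert abs(source - closed) < 1e-9 * abs(closed)
print("heat-flow source term agrees for n = 2, 3, 4")
```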
Continuity equation from the Friedmann and acceleration equations {#Continuity equation from the Friedmann equations}
-----------------------------------------------------------------
In this subsection, the continuity equation is derived from the Friedmann and acceleration equations. We have chosen this route because only two of the three equations are independent [@Ryden1]. To this end, the general Friedmann, acceleration, and continuity equations are reformulated, following our previous works [@Koma4; @Koma5; @Koma6; @Koma7]. These general equations are then applied to the present model, i.e., the entropic-force model with matter creation.
The general Friedmann and acceleration equations for a matter-dominated universe (when $p=0$) are written as $$\begin{aligned}
H^2 = \frac{ 8\pi G }{ 3 } \rho &+ f_{\textrm{rev}}(t) , \label{eq:General_FRW01_f_0} \\
%
\frac{ \ddot{a} }{ a } = - \frac{ 4\pi G }{ 3 } \rho &+ f_{\textrm{rev}}(t) + h_{\textrm{irr}}(t) , \label{eq:General_FRW02_g_0}\end{aligned}$$ with $$f_{\textrm{rev}}(t) \geq 0 \quad \textrm{and} \quad h_{\textrm{irr}}(t) \geq 0 ,
\label{eq:g(t)_GE_f(t)_00}$$ where $f_{\textrm{rev}}(t)$ and $h_{\textrm{irr}}(t)$ are general extra driving terms [@Koma7]. In this formulation, $f_{\textrm{rev}}(t)$ is considered to be related to reversible entropy $S_{\textrm{rev}}$, whereas $h_{\textrm{irr}}(t)$ is related to irreversible entropy $S_{\textrm{irr}}$. Consequently, Eq. (\[eq:General\_FRW02\_g\_0\]) can be rearranged as $$\frac{ \ddot{a} }{ a } = - \frac{ 4\pi G }{ 3 } \left ( \rho + \frac{3 p_{e}^{\textrm{irr}} }{c^2} \right ) + f_{\textrm{rev}}(t) ,
\label{eq:General_FRW02_g_f}$$ where the effective pressure $p_{e}^{\textrm{irr}}$ is given by $$p_{e}^{\textrm{irr}} \equiv - \frac{ c^{2} h_{\textrm{irr}}(t) } { 4\pi G } .
\label{eq:General_pe}$$ In a matter-dominated universe (when $p=0$), the effective pressure $p_{e}^{\textrm{irr}}$ is given by $p_{e}^{\textrm{irr}} = p + p_{c}^{\textrm{irr}} = p_{c}^{\textrm{irr}}$. Here, $p_{c}^{\textrm{irr}}$ is interpreted as a pressure derived from $S_{\textrm{irr}}$. Accordingly, $p_{e}^{\textrm{irr}}$ is equivalent to that in the previous subsection because the same matter creation is assumed.
We now calculate the general continuity equation [@Koma4; @Koma5; @Koma6] from the general Friedmann and acceleration equations. The general continuity equation in a matter-dominated universe becomes $$\dot{\rho} + 3 \frac{\dot{a}}{a} \rho = \frac{3}{4 \pi G} H \left( h_{\textrm{irr}}(t) - \frac{\dot{f}_{\textrm{rev}}(t)}{2 H } \right ) .
\label{eq:drho_General_00}$$ This equation can be rewritten as $$\dot{\rho} + 3 \frac{\dot{a}}{a} \rho = \rho \Gamma_{\textrm{irr}} - \Theta_{\textrm{rev}} ,
\label{eq:drho_General_01}$$ or equivalently $$\dot{\rho} + 3 \frac{\dot{a}}{a} \left ( \rho + \frac{p_{e}^{\textrm{irr}}}{c^{2}} \right ) = -\Theta_{\textrm{rev}} ,
\label{eq:drho_General_011}$$ where, using $p_{e}^{\textrm{irr}}$ from Eq. (\[eq:General\_pe\]), $\Gamma_{\textrm{irr}}$ is given by $$\Gamma_{\textrm{irr}} = \frac{3 H}{4 \pi G} \frac{ h_{\textrm{irr}}(t)}{ \rho } = - 3 H \frac{ p_{e}^{\textrm{irr}} }{\rho c^{2}} ,
\label{eq:Gamma_0}$$ and $\Theta_{\textrm{rev}}$ is defined by $$\Theta_{\textrm{rev}} = \frac{3}{8 \pi G} \dot{f}_{\textrm{rev}}(t) .
\label{eq:Q_0}$$ $\Gamma_{\textrm{irr}}$ in Eq. (\[eq:Gamma\_0\]) is equivalent to $\Gamma$ in Eq. (\[eq:pc\_irr\_p=0\]). That is, the general function $ h_{\textrm{irr}}(t)$ is a constant given by $$h_{\textrm{irr}}(t) = h_{\textrm{irr}0} = - \frac{ 4\pi G p_{e}^{\textrm{irr}} }{ c^{2} } = \textrm{constant} .
\label{eq:h_irr}$$ On the other hand, from Eqs. (\[eq:FRW2\_(rn)\]) and (\[eq:FRW1\_(rn)\]), $f_{\textrm{rev}}(t)$ can be written as $$f_{\textrm{rev}}(t) = \gamma \left ( \frac{c^{n-2} n L_{n} }{ 2 } \right ) H^{4-n} \quad (n=2,3,4),
\label{eq:f_rev}$$ where $n=2$, $3$, and $4$ correspond to indices of area, volume, and quartic entropies, respectively. Accordingly, the Friedmann and acceleration equations for the present model are summarized as $$H^2 = \frac{ 8\pi G }{ 3 } \rho + \gamma \left ( \frac{c^{n-2} n L_{n} }{ 2 } \right ) H^{4-n} ,
\label{eq:FRW01_EntroMatter}$$ $$\frac{ \ddot{a} }{ a } = - \frac{ 4\pi G }{ 3 } \left ( \rho + \frac{3 p_{e}^{\textrm{irr}} }{c^2} \right ) + \gamma \left ( \frac{c^{n-2} n L_{n} }{ 2 } \right ) H^{4-n} ,
\label{eq:FRW02_EntroMatter}$$ where the last term on the right-hand side is the entropic-force term derived from a general form of entropy on the horizon.
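Although not derived in the text, eliminating $\rho$ between Eqs. (\[eq:FRW01\_EntroMatter\]) and (\[eq:FRW02\_EntroMatter\]) for $n=2$ and a constant $h_{\textrm{irr}0}$ gives $\dot{H} = -(3/2)(1-\gamma L_{2})H^{2} + h_{\textrm{irr}0}$, whose late-time behavior can be sketched numerically (all parameter values are illustrative):

```python
# Late-time behavior for n = 2 with constant h_irr0. The evolution equation
# Hdot = -(3/2)(1 - gamma*L2) H^2 + h_irr0 is an assumed combination of
# Eqs. (FRW01_EntroMatter) and (FRW02_EntroMatter), not stated in the text.
gamma, L2, h_irr0 = 0.1, 1.0, 0.3      # illustrative parameter values
H, dt = 2.0, 1e-3                      # initial H and Euler step

for _ in range(200000):                # integrate to t = 200 (arbitrary units)
    Hdot = -1.5 * (1.0 - gamma * L2) * H**2 + h_irr0
    H += Hdot * dt

H_fixed = (h_irr0 / (1.5 * (1.0 - gamma * L2)))**0.5   # de Sitter-like fixed point
print(H, H_fixed)
```

Under these assumptions $H$ relaxes to a constant value, i.e., the combination of the entropic-force term and the constant matter-creation term drives the background toward a de Sitter-like phase.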
Now, we calculate $\Theta_{\textrm{rev}}$ on the right-hand side of Eq. (\[eq:drho\_General\_011\]). Substituting Eq. (\[eq:f\_rev\]) into Eq. (\[eq:Q\_0\]), and rearranging the resultant equation, we obtain $$\begin{aligned}
\Theta_{\textrm{rev}} &= \frac{3}{8 \pi G} \times \gamma \left ( \frac{c^{n-2} n L_{n} }{ 2 } \right ) (4-n) H^{3-n} \dot{H} \notag \\
&= \gamma \left ( \frac{3 c^{n-2} n L_{n}}{8 \pi G} \right ) \left ( \frac{4-n}{ 2 } \right ) H^{3-n} \dot{H} .
\label{eq:Q_10}\end{aligned}$$ In addition, substituting Eq. (\[eq:Q\_10\]) into Eq. (\[eq:drho\_General\_011\]), we have $$\begin{aligned}
& \dot{\rho} + 3 \frac{\dot{a}}{a} \left ( \rho + \frac{p_{e}^{\textrm{irr}}}{c^{2}} \right ) \notag \\
&= \left [ - \gamma \left ( \frac{3 c^{n-2} n L_{n}}{8 \pi G} \right ) \left ( \frac{4-n}{ 2 } \right ) H^{3-n} \dot{H} \right ]_{\textrm{rev}} .
\label{eq:drho_General_011_entropic}\end{aligned}$$ This equation is the modified continuity equation for the entropic-force model with matter creation, which is derived from the Friedmann and acceleration equations. The right-hand side of Eq. (\[eq:drho\_General\_011\_entropic\]) depends on the general form of entropy on the horizon. In this way, the two continuity equations for the present model are derived from the different methods. In the next section, the consistency of the two continuity equations is discussed. (The present model is considered to be a kind of $\Lambda (t)$CDM model in a dissipative universe. Brevik *et al.* have recently examined a similar cosmological system with two interacting fluids in a dissipative universe [@Brevik_2015a].)
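Equation (\[eq:Q\_10\]) can be verified by differentiating $f_{\textrm{rev}}(t)$ numerically along an assumed matter-dominated background $H = 2/(3t)$ (all constants illustrative):

```python
# Check of Eq. (Q_10): Theta_rev = (3/(8 pi G)) d f_rev/dt, with
# f_rev = gamma (c^{n-2} n L_n / 2) H^{4-n}, evaluated by numerical
# differentiation along an assumed matter-dominated background H = 2/(3t).
import math

c, G, gamma, L_n = 3.0, 2.0, 0.5, 1.7   # illustrative unit values

def Hfun(t): return 2.0 / (3.0 * t)

def f_rev(t, n):   # Eq. (f_rev)
    return gamma * (c**(n - 2) * n * L_n / 2.0) * Hfun(t)**(4 - n)

t, h = 1.0, 1e-6
for n in (2, 3, 4):
    fdot = (f_rev(t + h, n) - f_rev(t - h, n)) / (2.0 * h)
    Theta = 3.0 / (8.0 * math.pi * G) * fdot                 # Eq. (Q_0)
    H, Hdot = Hfun(t), -2.0 / (3.0 * t**2)
    closed = gamma * (3.0 * c**(n - 2) * n * L_n / (8.0 * math.pi * G)) \
             * ((4 - n) / 2.0) * H**(3 - n) * Hdot           # Eq. (Q_10)
    assert abs(Theta - closed) <= 1e-6 * abs(closed) + 1e-12
print("Theta_rev matches Eq. (Q_10) for n = 2, 3, 4")
```

Note that for $n=4$ (constant entropic-force term) $\Theta_{\textrm{rev}}$ vanishes identically, as the coefficient $(4-n)/2$ already indicates.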
Note that, as shown in Eq. (\[eq:f\_rev\]), in order to derive the continuity equation, we assume that $f_{\textrm{rev}}(t)$ is an entropic-force term. Additionally, in the previous subsection, the continuity equation was derived from the first law of thermodynamics, assuming $ S_{\textrm{rev}} = S_{rn} \propto r_{H}^{n}$. Accordingly, it may seem that the exchangeability of the entropy on the horizon is assumed beforehand. Therefore, the validity should be confirmed by studying the consistency of the two continuity equations, as discussed in the next section.
Consistency of the two continuity equations {#Consistency}
===========================================
In Sec. \[Two continuity equations\], two continuity equations for the entropic-force model with matter creation were derived using different methods. In this section, we examine the consistency of those two continuity equations. Moreover, we discuss the exchangeability of the entropy on the horizon of the universe in entropic cosmology. If the two continuity equations agree, we interpret the agreement as a sign that the entropy behaves exchangeably, although we acknowledge that the consistency may not be directly related to the exchangeability.
To study the consistency of the two continuity equations, they are restated here. From Eq. (\[eq:fluid\_rev\_irr\_p=0\]), the continuity equation derived from the first law of thermodynamics is written as $$\dot{\rho} + 3 \frac{\dot{a}}{a} \left ( \rho + \frac{ p_{e}^{\textrm{irr}} }{c^2} \right ) = - \gamma \left ( \frac{3 c^{n-2} n L_{n} }{ 8 \pi G} \right ) H^{3-n} \dot{H} ,
\label{eq:fluid_rev_irr_p=0_2}$$ where $n$ is an index of entropy: i.e., $n=2$, $3$, and $4$ correspond to indices of the area, volume, and quartic entropies, respectively. In contrast, from Eq. (\[eq:drho\_General\_011\_entropic\]), the continuity equation derived from the Friedmann and acceleration equations is $$\dot{\rho} + 3 \frac{\dot{a}}{a} \left ( \rho + \frac{p_{e}^{\textrm{irr}}}{c^{2}} \right ) = - \gamma \left ( \frac{3 c^{n-2} n L_{n}}{8 \pi G} \right ) \left ( \frac{4-n}{ 2 } \right ) H^{3-n} \dot{H} .
\label{eq:drho_General_0110_entropic}$$ As shown in Eqs. (\[eq:fluid\_rev\_irr\_p=0\_2\]) and (\[eq:drho\_General\_0110\_entropic\]), the two left-hand sides agree because equivalent matter creation is assumed. Interestingly, the two right-hand sides are also consistent with each other, except for the coefficient $(4-n)/2$ in Eq. (\[eq:drho\_General\_0110\_entropic\]). Therefore, the two right-hand sides are in exact agreement when $n=2$, which corresponds to an area entropy. A similar non-zero right-hand side appears in energy exchange cosmology [@Koma4]. This consistency of the two continuity equations may imply the exchange (the transfer) of energy in entropic cosmology. For example, the interchange of energy between the bulk (the universe) and the boundary (the horizon of the universe) [@Lepe1] is a viable scenario from the viewpoint of the holographic principle. Of course, strictly speaking, the two right-hand sides are slightly different when $n=3$ and $4$ due to the coefficient $(4-n)/2$ in Eq. (\[eq:drho\_General\_0110\_entropic\]). In this case, the entropic-force model should be considered to be a type of energy exchange cosmology between dark matter and effective dark energy [@Sola_2014a]. When $n > 4$ (corresponding to higher-order forms of entropy), the two right-hand sides have opposite signs due to the coefficient $(4-n)/2$. This opposite sign may be interpreted as indicating that the direction of heat flow could be reversed when Eq. (\[eq:fluid\_rev\_irr\_p=0\_2\]) is derived. Alternatively, these results simply imply that the Bekenstein entropy (an area entropy) is the most suitable for describing entropic cosmology.
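The comparison can be made concrete by evaluating both right-hand sides for several $n$ (illustrative unit constants); the ratio reproduces the coefficient $(4-n)/2$:

```python
# Ratio of the right-hand sides of Eqs. (drho_General_0110_entropic) and
# (fluid_rev_irr_p=0_2) for several n; it reproduces the coefficient (4-n)/2.
# All constants are illustrative unit values.
import math

c, G, gamma, L_n = 3.0, 2.0, 0.5, 1.7
H, Hdot = 0.9, -0.2

results = {}
for n in (2, 3, 4, 5):
    rhs_first_law = -gamma * (3.0 * c**(n - 2) * n * L_n / (8.0 * math.pi * G)) \
                    * H**(3 - n) * Hdot                     # Eq. (fluid_rev_irr_p=0_2)
    rhs_friedmann = -gamma * (3.0 * c**(n - 2) * n * L_n / (8.0 * math.pi * G)) \
                    * ((4 - n) / 2.0) * H**(3 - n) * Hdot   # Eq. (drho_General_0110_entropic)
    results[n] = rhs_friedmann / rhs_first_law              # ratio equals (4 - n) / 2

print(results)
```

The ratio is $1$ only for $n=2$ (exact agreement for an area entropy), vanishes for $n=4$, and becomes negative for $n>4$, matching the discussion above.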
The coefficient $(4-n)/2$ plays an important role because it determines the difference between the two continuity equations. Although its interpretation has not yet been clarified, it should provide new insights into the exchange of energy in entropic cosmology. Note that we can obtain an effective continuity equation similar to those of bulk-viscous and CCDM models by moving the non-zero right-hand side to the other side and extending the effective description of the pressure.
In this short paper, we have focused on the consistency of the two continuity equations and therefore do not examine the properties of the present model in detail. However, they can be evaluated roughly because the cosmological equations used here are similar to those of the $\Lambda(t)$CDM, CCDM, and entropic-force models [@Valent2015; @Lima2011; @Sola_2014a; @Koma6]. For example, the background evolution of the universe is essentially equivalent to that described by an extended entropic-force model in Ref. [@Koma6]. In addition, a unified formulation for density perturbations [@Koma6] can be applied to the present model when $f_{\textrm{rev}}(t)=0$ or $h_{\textrm{irr}}(t)=0$. Accordingly, in the next paragraph, we briefly discuss the properties of the present model, following the previous studies [@Basilakos1; @Sola_2014a; @Valent2015; @Koma4; @Koma5; @Koma6; @Lima2011; @Ramos_2014-2014b].
As shown in Eqs. (\[eq:FRW01\_EntroMatter\]) and (\[eq:FRW02\_EntroMatter\]), the Friedmann and acceleration equations include $H^{4-n}$ terms related to $f_{\textrm{rev}}(t)$. Moreover, the acceleration equation includes the effective pressure $p_{e}^{\textrm{irr}}$ related to the constant $h_{\textrm{irr}0}$ term for matter creation. First, we focus on the entropic-force term $H^{4-n}$. The entropic-force model, which includes each of the $H^{2}$, $H$, and constant terms, agrees well with observed supernova data [@Koma4; @Koma5; @Koma6]. That is, each of the three terms can properly describe the accelerated expansion of the late universe. However, Basilakos and Solà have shown that simple combinations of pure Hubble terms, i.e., $H^{2}$, $\dot{H}$, and $H$ terms, are insufficient for a complete description of the growth rate of clustering related to structure formation [@Sola_2014a]. Similarly, several combinations of $H^{2}$, $\dot{H}$, $H$, and constant terms in the $\Lambda (t)$CDM model have been examined by Gómez-Valent *et al.* [@Valent2015]. Those studies indicate that the constant term plays an important role in describing observations of cosmological fluctuations [@Basilakos1; @Sola_2014a; @Valent2015; @Koma6]. In the $\Lambda(t)$CDM model, such a constant term is obtained from an integration constant of the renormalization group equation for the vacuum energy density [@Sola_2013b]. A similar constant term (corresponding to $h_{\textrm{irr}0}$) appears in CCDM models. However, in CCDM models, a negative sound speed [@Lima2011] and the existence of clustered matter [@Ramos_2014-2014b] are necessary to properly describe the growth rate [@Koma8]. Therefore, the entropic-force model with a non-zero $h_{\textrm{irr}}(t)$ term (not only the constant term but also the $H$ term) is inconsistent with the observed growth rate, especially at low redshift, as we have previously shown [@Koma6].
We have also shown that a weakly dissipative model (similar to the $\Lambda$CDM model) describes observations of the cosmic microwave background radiation temperature, whereas a strongly dissipative model (similar to the CCDM model) does not [@Koma8].
The present study and previous studies imply that an area entropy (which leads to $H^{2}$ terms), a constant term, and a weakly dissipative universe are favored. Accordingly, $ f_{\textrm{rev}}(t) = C_{0} + C_{1} H^{2}$ and $h_{\textrm{irr}}(t) = 0$ can be proposed as one of the favored models, where $C_{0}$ and $C_{1}$ are constants. This favored model can be interpreted as a particular case of $\Lambda (t)$CDM models. This type of $\Lambda(t)$CDM model has been recently examined by, e.g., Lima *et al.* [@LimaSola_2013a], Gómez-Valent *et al.* [@Valent2015], and Basilakos and Solà [@Basilakos1].
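The accelerated expansion implied by this favored model can be sketched numerically. The snippet below is our own illustration, not the paper's analysis: with $f_{\textrm{rev}}(t) = C_{0} + C_{1} H^{2}$ and $h_{\textrm{irr}}(t)=0$, the background is $\Lambda$CDM-like, $E(a)^2 = \Omega_m a^{-3} + \Omega_\Lambda$, and we locate the transition from deceleration to acceleration using assumed illustrative values $\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$ (not fitted parameters):

```python
# Minimal numerical sketch (illustrative assumptions, not fitted values):
# LambdaCDM-like background E(a)**2 = Om * a**-3 + OL, to which the favored
# model f_rev = C0 + C1*H**2, h_irr = 0 reduces at the background level.
Om, OL = 0.3, 0.7

def E2(a):
    return Om * a**-3 + OL

def q(a):
    """Deceleration parameter q = Omega_m(a)/2 - Omega_Lambda(a)."""
    return (0.5 * Om * a**-3 - OL) / E2(a)

# Transition from deceleration to acceleration: q = 0 when Om*a**-3 = 2*OL.
a_t = (Om / (2.0 * OL)) ** (1.0 / 3.0)
print(f"transition scale factor a_t = {a_t:.4f}")   # ~0.60
print(f"q(a=0.3) = {q(0.3):+.3f}  (decelerating)")
print(f"q(a=1.0) = {q(1.0):+.3f}  (accelerating)")
```

The positive $q$ at early times and negative $q$ today reproduce the decelerating-then-accelerating history that the constant term makes possible.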
From Eqs. (\[eq:fluid\_rev\_irr\_p=0\_2\]) and (\[eq:drho\_General\_0110\_entropic\]), the two continuity equations are found to be slightly inconsistent with each other when $n \neq 2$. Interestingly, when $n \neq 2$, the maximum tension principle does not hold for generalized entropic-force models either, as recently examined by Dabrowski and Gohar [@Gohar_2015a]. That is, $n=2$ is suitable both for the consistency of the two continuity equations and for the maximum tension principle. These results imply that it may be difficult for the entropic-force model alone to solve the cosmological constant problem, because an additive constant term is obtained not from $n=2$ but from $n = 4$. In addition, as mentioned previously, it is difficult to properly describe not only a decelerating and accelerating universe but also structure formation without adding the constant term [@Basilakos1; @Sola_2014a; @Sola_2009-2015; @LimaSola_2013a; @Sola_2013b; @Valent2015; @Koma6; @Gohar_2015b]. To overcome these difficulties, the entropic-force model for $n=2$ should be appropriately coupled with $\Lambda (t)$CDM models. In particular, the favored model proposed in the previous paragraph is expected to play an important role both theoretically and phenomenologically.
In fact, recent studies imply that $\Lambda (t)$CDM models based on power series of the Hubble rate are likely more suitable, both for a theoretical explanation and for a phenomenological description, than the standard $\Lambda$CDM model; see, e.g., Ref. [@Sola_2015_1c] and the third paper of Ref. [@Valent2015]. Matter is conserved in the $\Lambda (t)$CDM models. The present entropic-force model is expected to be related to the $\Lambda (t)$CDM models. However, it would be difficult to distinguish the entropic-force model from the $\Lambda (t)$CDM model in practice when the formulations are the same and the dissipative terms are zero, i.e., $h_{\textrm{irr}}(t) = 0$. Of course, when $h_{\textrm{irr}}(t) \neq 0$, we can distinguish between the two models even if the background evolutions of the universe are the same. However, our previous studies imply that a weakly dissipative universe is favored [@Koma6; @Koma7; @Koma8]. (A slowly varying gravitational coupling is assumed for the $\Lambda (t)$CDM model examined in Ref. [@Sola_2015_1c], unlike for the present entropic-force model. Accordingly, it may be possible to distinguish the entropic-force model for $h_{\textrm{irr}}(t) = 0$ from the $\Lambda (t)$CDM model if a varying gravitational constant is revealed through observations.)
Finally, the inflation of the early universe in entropic cosmology is briefly discussed. In the present study, $H^{4-n}$ terms are obtained from entropic forces, so the exponent $4-n$ decreases with increasing $n$. However, higher exponents are required for inflation, and they cannot be obtained from the present entropic force without assuming $n \leq 0$. This problem can likely be solved by introducing logarithmic entropic corrections, which generate $H^4$ terms (see, e.g., the second paper of Ref. [@Easson12]). Such an entropic-force model can be interpreted as a particular case of $\Lambda (t)$CDM models as well. For example, not only $H^{4}$ terms [@Sola_2015_1a] but also $H^{m}$ terms [@Sola_2015_1b] (corresponding to an arbitrary exponent of $H$) have been recently examined in $\Lambda (t)$CDM models. To acquire a deeper understanding of cosmology, we need to study general relativity from various viewpoints [@Antoci; @Bousso_2015; @Sola_2016].
Conclusions {#Conclusions}
===========
Entropic-force models assume several forms of entropy on the horizon of the universe, where the entropy can be considered to behave as if it were exchangeable. To study the consequences of this assumption, a phenomenological entropic-force model has been formulated, focusing on a homogeneous, isotropic, spatially flat, and matter-dominated universe. For this formulation, a general form of entropy proportional to the $n$-th power of the Hubble horizon, i.e., $S_{rn} \propto r_{H}^{n}$, is used. Here, the Bekenstein entropy (an area entropy), the Tsallis–Cirto black-hole entropy (a volume entropy), and a quartic entropy are represented by $n=2$, $3$, and $4$, respectively. Consequently, $H^{4-n}$ terms for the Friedmann and acceleration equations are obtained from entropic forces. That is, the $H^{2}$, $H$, and constant entropic-force terms are confirmed to be systematically derived from the area, volume, and quartic entropies, respectively.
In addition, irreversible entropy due to matter creation has been included in the current formulation. Using the present model, we have examined whether the entropy $S_{rn}$ on the horizon behaves exchangeably or not. To this end, two continuity equations for the present model are derived from two different methods. The first continuity equation is derived from the first law of thermodynamics, whereas the second equation is derived from the Friedmann and acceleration equations. The two continuity equations are found to be consistent with each other. In particular, the two equations agree completely when $n=2$, which corresponds to the Bekenstein entropy. This consistency may imply the exchange (the transfer) of energy in entropic cosmology. The interchange of energy between the bulk (the universe) and the boundary (the horizon of the universe) is a viable scenario consistent with the holographic principle. Alternatively, the entropy on the horizon in the entropic-force model can be interpreted as an effective dark energy. The present study should provide new insights into the entropic, energy-exchange, and time-varying $\Lambda(t)$ cosmologies and bridge the gap between them.
The authors wish to thank the anonymous referee and H. Gohar for very valuable comments which improved the paper.
[99]{}
S. Perlmutter *et al.*, Nature (London) **391**, 51 (1998); Astrophys. J. **517**, 565 (1999); A. G. Riess *et al.*, Astron. J. **116**, 1009 (1998); Astrophys. J. **607**, 665 (2004).
S. Weinberg, Rev. Mod. Phys. **61**, 1 (1989); I. Zlatev, L. Wang, and P. J. Steinhardt, Phys. Rev. Lett. **82**, 896 (1999); S. M. Carroll, Living Rev. Relativity **4**, 1 (2001).
K. Freese, F. C. Adams, J. A. Frieman, and E. Mottola, Nucl. Phys. **B287**, 797 (1987); J. M. Overduin and F. I. Cooperstock, Phys. Rev. D **58**, 043506 (1998); I. L. Shapiro and J. Solà, J. High Energy Phys. 02 (2002) 006; H. A. Borges and S. Carneiro, Gen. Relativ. Gravit. **37**, 1385 (2005); H. Fritzsch and J. Solà, Classical Quantum Gravity **29**, 215002, (2012); J. P. Mimoso and D. Pavón, Phys. Rev. D **87**, 047302 (2013); E. M. Barboza Jr., R. C. Nunes, E. M. C. Abreu, and J. A. Neto, Physica A **436**, 301 (2015); M. H. P. M. Putten, Mon. Not. R. Astron. Soc. **450**, L48 (2015).
R. C. Arcuri and I. Waga, Phys. Rev. D **50**, 2928 (1994); J. Grande, R. Opher, A. Pelinson, and J. Solà, J. Cosmol. Astropart. Phys. 12 (2007) 007; J. Grande, A. Pelinson, and J. Solà, Phys. Rev. D **79**, 043006 (2009).
S. Basilakos, M. Plionis, and J. Solà, Phys. Rev. D **80**, 083511 (2009); J. Grande, J. Solà, S. Basilakos, and M. Plionis, J. Cosmol. Astropart. Phys. 08 (2011) 007; E. L. D. Perico, J. A. S. Lima, S. Basilakos, and J. Solà, Phys. Rev. D **88**, 063531 (2013); S. Basilakos and J. Solà, Mon. Not. R. Astron. Soc. **437**, 3331 (2014); J. Solà, AIP Conf. Proc. **1606**, 19 (2014). J. A. S. Lima, S. Basilakos, and J. Solà, Mon. Not. R. Astron. Soc. **431**, 923 (2013). J. Solà, J. Phys. Conf. Ser. **453**, 012015 (2013). A. Gómez-Valent and J. Solà, Mon. Not. R. Astron. Soc. **448**, 2810 (2015); A. Gómez-Valent, J. Solà, and S. Basilakos, J. Cosmol. Astropart. Phys. 01 (2015) 004; A. Gómez-Valent, E. Karimkhani, and J. Solà, J. Cosmol. Astropart. Phys. 12 (2015) 048.
J. Solà, A. Gómez-Valent, and J. C. Pérez, Astrophys. J. **811**, L14 (2015).
J. Solà and A. Gómez-Valent, Int. J. Mod. Phys. D **24**, 1541003 (2015).
J. Solà, Int. J. Mod. Phys. D **24**, 1544027 (2015).
L. Miao, L. Xiao-Dong, W. Shuang, and W. Yi, Commun. Theor. Phys. **56**, 525 (2011). K. Bamba, S. Capozziello, S. Nojiri, and S. D. Odintsov, Astrophys. Space Sci. **342**, 155 (2012).
S. Weinberg, *Cosmology* (Oxford University Press, New York, 2008). G. F. R. Ellis, R. Maartens, and M. A. H. MacCallum, *Relativistic Cosmology* (Cambridge University Press, Cambridge, England, 2012).
G. ’t Hooft, arXiv:gr-qc/9310026; L. Susskind, J. Math. Phys. **36**, 6377 (1995); R. Bousso, Rev. Mod. Phys. **74**, 825 (2002).
D. A. Easson, P. H. Frampton, and G. F. Smoot, Phys. Lett. B **696**, 273 (2011); Int. J. Mod. Phys. A **27**, 1250066 (2012). T. S. Koivisto, D. F. Mota, and M. Zumalacárregui, J. Cosmol. Astropart. Phys. 02 (2011) 027; Y. S. Myung, Astrophys. Space Sci. **335**, 553 (2011); T. Qiu and E. N. Saridakis, Phys. Rev. D **85**, 043504 (2012). S. Basilakos, D. Polarski, and J. Solà, Phys. Rev. D **86**, 043010 (2012). S. Basilakos and J. Solà, Phys. Rev. D **90**, 023008 (2014). Y. F. Cai, J. Liu, and H. Li, Phys. Lett. B **690**, 213 (2010); Y. F. Cai and E. N. Saridakis, Phys. Lett. B **697**, 280 (2011).
S. Lepe and F. Peña, arXiv:1201.5343v2 \[hep-th\]. N. Komatsu and S. Kimura, Phys. Rev. D **87**, 043531 (2013); N. Komatsu, JPS Conf. Proc. **1**, 013112 (2014). N. Komatsu and S. Kimura, Phys. Rev. D **88**, 083534 (2013). N. Komatsu and S. Kimura, Phys. Rev. D **89**, 123501 (2014). N. Komatsu and S. Kimura, Phys. Rev. D **90**, 123516 (2014). N. Komatsu and S. Kimura, Phys. Rev. D **92**, 043507 (2015).
M. P. Dabrowski and H. Gohar, Phys. Lett. B **748**, 428 (2015). M. P. Dabrowski, H. Gohar, and V. Salzano, arXiv:1503.08722v2.
R. C. Nunes, E. M. Barboza Jr., E. M. C. Abreu, and J. A. Neto, arXiv:1509.05059v1.
J. D. Bekenstein, Phys. Rev. D **7**, 2333 (1973); Phys. Rev. D **9**, 3292 (1974); Phys. Rev. D **12**, 3077 (1975).
C. Tsallis and L. J. L. Cirto, Eur. Phys. J. C **73**, 2487 (2013).
D. Kondepudi and I. Prigogine, *Modern Thermodynamics: From Heat Engines to Dissipative Structures* (John Wiley & Sons, New York, 1998).
J. D. Barrow and T. Clifton, Phys. Rev. D **73**, 103520 (2006). B. Wang, Y. Gong, and E. Abdalla, Phys. Lett. B **624**, 141 (2005); S. Nojiri and S. D. Odintsov, Phys. Lett. B **639**, 144 (2006); Y. Wang, D. Wands, G.-B. Zhao, and L. Xu, Phys. Rev. D **90**, 023502 (2014); N. Tamanini, Phys. Rev. D **92**, 043524 (2015). D. Pavón and W. Zimdahl, Phys. Lett. B **628**, 206 (2005); B. Hu and Y. Ling, Phys. Rev. D **73**, 123510 (2006).
B. Ryden, *Introduction to Cosmology* (Addison-Wesley, Reading, MA, 2002).
S. Weinberg, *Gravitation and Cosmology* (Wiley, New York, 1972). G. L. Murphy, Phys. Rev. D **8**, 4231 (1973). J. D. Barrow, Phys. Lett. B **180**, 335 (1986); J. D. Barrow, Nucl. Phys. **B310**, 743 (1988); P. C. W. Davies, Classical Quantum Gravity **4**, L225 (1987); J. A. S. Lima, R. Portugal, and I. Waga, Phys. Rev. D **37**, 2755 (1988).
W. Zimdahl, Phys. Rev. D **53**, 5483 (1996); A. I. Arbab, Gen. Relativ. Gravit. **29**, 61 (1997); I. Brevik and S. D. Odintsov, Phys. Rev. D **65**, 067302 (2002); S. Nojiri and S. D. Odintsov, Phys. Rev. D **72**, 023003 (2005); B. Li and J. D. Barrow, Phys. Rev. D **79**, 103521 (2009); A. Avelino and U. Nucamendi, J. Cosmol. Astropart. Phys. 04 (2009) 006; I. Brevik, E. Elizalde, S. Nojiri, and S. D. Odintsov, Phys. Rev. D **84**, 103508 (2011); H. Velten, T. R. P. Caramês, J. C. Fabris, L. Casarini, and R. C. Batista, Phys. Rev. D **90**, 123526 (2014); M. M. Disconzi, T. W. Kephart, and R. J. Scherrer, Phys. Rev. D **91**, 043532 (2015); S. Floerchinger, N. Tetradis, and U. A. Wiedemann, Phys. Rev. Lett. **114**, 091301 (2015).
M. O. Calvão, J. A. S. Lima, and I. Waga, Phys. Lett. A **162**, 223 (1992).
J. A. S. Lima and A. S. M. Germano, Phys. Lett. A **170**, 373 (1992).
W. Zimdahl and D. Pavón, Phys. Lett. A **176**, 57 (1993).
J. A. S. Lima, A. I. Silva, and S. M. Viegas, Mon. Not. R. Astron. Soc. **312**, 747 (2000).
J. A. S. Lima and I. Baranov, Phys. Rev. D **90**, 043515 (2014).
T. Harko, Phys. Rev. D **90**, 044067 (2014).
J. A. S. Lima, A. S. M. Germano, and L. R. W. Abramo, Phys. Rev. D **53**, 4287 (1996); J. A. S. Lima, Gen. Relativ. Gravit. **29**, 805 (1997); W. Zimdahl, D. J. Schwarz, A. B. Balakin, and D. Pavón, Phys. Rev. D **64**, 063501 (2001); M. P. Freaza, R. S. de Souza, and I. Waga, Phys. Rev. D **66**, 103502 (2002); R. C. Nunes and D. Pavón, Phys. Rev. D **91**, 063526 (2015). J. A. S. Lima, J. F. Jesus, and F. A. Oliveira, J. Cosmol. Astropart. Phys. 11 (2010) 027; S. Basilakos, M. Plionis, and J. A. S. Lima, Phys. Rev. D **82**, 083517 (2010); J. A. S. Lima, S. Basilakos, and F. E. M. Costa, Phys. Rev. D **86**, 103534 (2012); J. F. Jesus and S. H. Pereira, J. Cosmol. Astropart. Phys. 07 (2014) 040. J. A. S. Lima, V. Zanchin, and R. Brandenberger, Mon. Not. R. Astron. Soc. **291**, L1 (1997). J. F. Jesus, F. A. Oliveira, S. Basilakos, and J. A. S. Lima, Phys. Rev. D **84**, 063511 (2011). R. O. Ramos, M. Vargas dos Santos, and I. Waga, Phys. Rev. D **89**, 083524 (2014); M. Vargas dos Santos, I. Waga, and R. O. Ramos, Phys. Rev. D **90**, 127301 (2014).
A possible equivalence of the bulk viscosity and matter creation dissipative mechanisms has been discussed in Ref. [@Lima_1992e]. The connections between warm inflation [@Berera_1995-2014], $\Lambda (t)$CDM models, and CCDM models have been debated in Ref. [@Connection], as mentioned in Ref. [@Koma8].
I. Prigogine, J. Geheniau, E. Gunzig, and P. Nardone, Proc. Natl. Acad. Sci. U.S.A. **85**, 7428 (1988). I. Prigogine, J. Geheniau, E. Gunzig, and P. Nardone, Gen. Relativ. Gravit. **21**, 767 (1989).
T. Padmanabhan, Mod. Phys. Lett. A **25**, 1129 (2010). E. Verlinde, J. High Energy Phys. 04 (2011) 029.
C. Tsallis, J. Stat. Phys. **52**, 479 (1988).
S. K. Modak and D. Singleton, Phys. Rev. D **86**, 123515 (2012).
I. Brevik, V. V. Obukhov, and A. V. Timoshkin, Astrophys. Space Sci. **359**, 11 (2015).
S. Antoci and D. E. Liebscher, Astron. Nachr. **322**, 137 (2001); arXiv:0910.2073v1.
R. Bousso and N. Engelhardt, Phys. Rev. D **92**, 044031 (2015); Phys. Rev. Lett. **115**, 081301 (2015).
J. Solà, arXiv:1601.01668.
J. Gariel and G. Le Denmat, Phys. Lett. A **200**, 11 (1995); J. P. Mimoso, A. Nunes, and D. Pavón, Phys. Rev. D **73**, 023502 (2006); L. L. Graef, F. E. M. Costa, and J. A. S. Lima, Phys. Lett. B **728**, 400 (2014).
---
abstract: 'For any $r\geq 1$ and ${\mathbf{n}}\in {\mathbb{Z}}_{\geq0}^r \setminus \{\mathbf0\}$ we construct a poset $W_{\mathbf{n}}$ called a *2-associahedron*. The 2-associahedra arose in symplectic geometry, where they are expected to control maps between Fukaya categories of different symplectic manifolds. We prove that the completion ${\widehat}{W_{\mathbf{n}}}$ is an abstract polytope of dimension $|{\mathbf{n}}|+r-3$. There are forgetful maps $W_{\mathbf{n}}\to K_r$, where $K_r$ is the $(r-2)$-dimensional associahedron, and the 2-associahedra specialize to the associahedra (in two ways) and to the multiplihedra. In an appendix, we work out the 2- and 3-dimensional 2-associahedra in detail.'
address: 'School of Mathematics, Institute for Advanced Study, 1 Einstein Dr, Princeton, NJ 08540'
author:
- Nathaniel Bottman
title: '2-associahedra'
---
Introduction {#sec:intro}
============
Ma’u–Wehrheim–Woodward proposed in [@mww] that a Lagrangian correspondence $M_1 {\stackrel}{L_{12}}{\lra} M_2$ between symplectic manifolds should induce an $A_\infty$-functor ${{\operatorname}{Fuk}}(M_1) {\stackrel}{\Phi(L_{12})}{\lra} {{\operatorname}{Fuk}}(M_2)$ between Fukaya categories, where $\Phi(L_{12})$ is defined by counting pseudoholomorphic quilted disks. In his thesis [@b:thesis], the author suggested extending this proposal by counting witch balls, which are pseudoholomorphic quilted spheres. The underlying domain moduli spaces (described in §\[ss:motivation\]) are stratified topological spaces, and the posets indexing the strata have interesting combinatorial structure, similarly to how the Stasheff polytopes have strata indexed by the associahedra. In this paper we define these underlying posets, which we call *2-associahedra*.
In this introduction, we will first motivate the construction of the 2-associahedra by describing some bubbling phenomena in spaces of witch curves, which are the moduli spaces of domains of witch balls. After that, we will give a plan for the body of the paper.
Motivation for $W_{\mathbf{n}}$ from witch curves {#ss:motivation}
-----------------------------------
The author was led to define and study the 2-associahedra $(W_{\mathbf{n}})$ because they are the posets corresponding to degenerations in the moduli space ${\overline}{2{\mathcal{M}}}_{\mathbf{n}}$ of *witch curves* — similarly to how the compactified moduli space of $r$ points on the line modulo translations and dilations is stratified by the associahedron $K_r$. ${\overline}{2{\mathcal{M}}}_{\mathbf{n}}$ is defined in [@b:realization], crucially relying on the current paper’s construction of $W_{\mathbf{n}}$. Here we sketch the definition of ${\overline}{2{\mathcal{M}}}_{\mathbf{n}}$ to motivate the definition of $W_{\mathbf{n}}$.
We begin by defining the uncompactified moduli space: $$\begin{aligned}
2{\mathcal{M}}_{\mathbf{n}}\coloneqq \left\{
\begin{array}{rcl}
(x_1,\ldots,x_r) &\in& {\mathbb{R}}^r \\
(y_{11},\ldots,y_{1n_1}) &\in& {\mathbb{R}}^{n_1} \\
&\vdots& \\
(y_{r1},\ldots,y_{rn_r}) &\in& {\mathbb{R}}^{n_r}
\end{array}
\:\left|\:
\begin{array}{c}
x_1 < \cdots < x_r \\
y_{11} < \cdots < y_{1n_1} \\
\vdots \\
y_{r1} < \cdots < y_{rn_r}
\end{array}
\right.\right\}\Big/{\mathbb{R}}^2 \rtimes {\mathbb{R}}_{>0} \eqqcolon X/G.\end{aligned}$$ We view an element of $X$ as describing a configuration of $r$ vertical lines in ${\mathbb{R}}^2$ with $x$-positions $x_1, \ldots, x_r$, along with $n_i$ marked points on the $i$-th line with $y$-positions $y_{i1}, \ldots, y_{in_i}$. (By identifying ${\mathbb{R}}^2 \cup \{\infty\} \simeq S^2$, we can also view an element of $X$ as a configuration of marked circles on $S^2$, where all the circles intersect tangentially at the south pole.) We view $G$ as the group of affine-linear transformations of the plane that consist of a translation and a positive dilation; this action on the plane induces an action of $G$ on $X$.
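To make the quotient concrete, here is a small sketch (our own illustration, not part of the paper; the gauge fixing chosen below is one of many valid choices, and it assumes $r \geq 2$ and at least one marked point, which holds since ${\mathbf{n}} \neq \mathbf{0}$) that picks a canonical representative of each $G$-orbit. Counting the remaining free parameters gives $\dim 2{\mathcal{M}}_{\mathbf{n}} = r + |{\mathbf{n}}| - 3$:

```python
def normalize(xs, ys):
    """Gauge-fix the G = R^2 semidirect R_{>0} action: translate so that
    x_1 = 0 and the first marked point has y-position 0, then dilate so
    that x_r - x_1 = 1."""
    x0, scale = xs[0], xs[-1] - xs[0]
    y0 = next(y for col in ys for y in col)   # y-position of the first marked point
    new_xs = [(x - x0) / scale for x in xs]
    new_ys = [[(y - y0) / scale for y in col] for col in ys]
    return new_xs, new_ys

# Example in 2M_{(2,0,0)}: r = 3 lines, two marked points on the first line.
xs, ys = normalize([2.0, 5.0, 8.0], [[3.0, 4.0], [], []])
print(xs, ys)
# After fixing the three gauge parameters (two translations, one dilation),
# the free coordinates are x_2, ..., x_{r-1} plus all but one marked point:
r, n = 3, [2, 0, 0]
print("dim =", r + sum(n) - 3)  # 2
```

The dimension count $r + |{\mathbf{n}}| - 3$ matches the dimension $|{\mathbf{n}}| + r - 3$ of the 2-associahedron stated in the abstract.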
![Two views of a witch curve in the main stratum $2{\mathcal{M}}_{\mathbf{n}}\subset {\overline}{2{\mathcal{M}}}_{\mathbf{n}}$: on the left, as ${\mathbb{R}}^2$ with marked lines; and on the right, as $S^2$ with marked circles.[]{data-label="fig:witch_ball"}](witch_ball){width="0.4\columnwidth"}
$2{\mathcal{M}}_{\mathbf{n}}$ is not compact: points on a single line can collide, or lines can collide, or these two phenomena can take place simultaneously. We compactify $2{\mathcal{M}}_{\mathbf{n}}$ to ${\overline}{2{\mathcal{M}}}_{\mathbf{n}}$ like so: when a collection of lines collide, then wherever the marked points on these lines are as this collision happens, we bubble off another configuration of lines and points. To define ${\overline}{2{\mathcal{M}}}_{\mathbf{n}}$ precisely, we need to specify the allowed degenerations, and this is where the 2-associahedra come in: the elements of $W_{\mathbf{n}}$ correspond to the allowed degenerations in ${\overline}{2{\mathcal{M}}}_{\mathbf{n}}$. We illustrate this in the following figure: on the left is the compactified moduli space ${\overline}{2{\mathcal{M}}}_{200}$, and in the middle and on the right are two presentations of $W_{200}$.
On the left, we are identifying configurations of lines and points in the plane with configurations of vertical lines in the right half-plane, which can in turn be identified with configurations of circles on a disk, all of which intersect tangentially at a point on the boundary. In this figure, and throughout this paper, we overlay the posets $W_{200}^{{{\operatorname}{tree}}}$ and $W_{200}^{{{\operatorname}{br}}}$ over polytopes. The set of faces of any polytope has a poset structure, where $F<G$ if the containment $F \subset {\overline}G$ holds; our depiction of $W_{200}^{{{\operatorname}{tree}}}$ and $W_{200}^{{{\operatorname}{br}}}$ indicates that they are isomorphic to the face poset of the pentagon.
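The identification of $W_{200}$ with the face poset of the pentagon can be spelled out concretely. The following sketch (our own bookkeeping, not the paper's construction) builds that face poset and checks its size; note that $|{\mathbf{n}}| + r - 3 = 2 + 3 - 3 = 2$ matches the dimension of the pentagon:

```python
# Face poset of the pentagon: 5 vertices, 5 edges, and one 2-cell "P",
# ordered by F < G iff F is contained in the closure of G.
vertices = [f"v{i}" for i in range(5)]
edges = {f"e{i}": {f"v{i}", f"v{(i + 1) % 5}"} for i in range(5)}

relations = [(v, e) for e, ends in edges.items() for v in ends]   # vertex < edge
relations += [(f, "P") for f in vertices + list(edges)]           # face < 2-cell

n_faces = len(vertices) + len(edges) + 1
print("faces:", n_faces)          # 11 elements, matching W_200
print("relations:", len(relations))
```

Adjoining a least element $F_{-1}$, as in the abstract, yields a 12-element poset realizing the pentagon as an abstract polytope.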
$W_{\mathbf{n}}$ is intended to index the possible degenerations that a sequence of points in the moduli space ${\overline}{2{\mathcal{M}}}_{\mathbf{n}}$ can undergo. In §\[sec:2ass\] we will define two models for $W_{\mathbf{n}}$: $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ and $W_{\mathbf{n}}^{{{\operatorname}{br}}}$. To approach and motivate these models, consider the following degenerations in ${\overline}{2{\mathcal{M}}}_{200}$, corresponding to the bottom resp. bottom-left resp. upper-right edges in the depiction of ${\overline}{2{\mathcal{M}}}_{200}$ above:
Degeneration 1 occurs when the two black points collide; degeneration 2 occurs when the larger interior circle expands and collides with the boundary circle, while the black points simultaneously collide; and degeneration 3 occurs when the two interior circles simultaneously expand and collide with the boundary circle. To define the 2-associahedra, we must produce combinatorial data that track these degenerations. We can do so in two ways:
- Represent each disk as a vertex in a tree, with solid edges corresponding to the seams (i.e., boundary circle or interior circles) appearing on that disk. We represent an attachment between two disks as a dashed edge; we also represent a marked point by a dashed edge. This leads to the model $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$, and in this model the degenerations pictured above take the following form:
The reader will observe that a single datum is not only a tree with solid and dashed edges, but also a smaller, solid, tree which receives a map from the larger tree. The reason for this is that when disks bubble off, the same seam may appear in multiple disks. These seams must remain linked, so that the enlargement ${\overline}{2{\mathcal{M}}}_{\mathbf{n}}$ is a reasonable compactification of $2{\mathcal{M}}_{\mathbf{n}}$, and this linking is enforced by the smaller tree and the map it receives.
- Represent the seams as a horizontal line of numbers; above each number, represent the points that appear on that seam as a vertical line of letters. For any given disk $C$ in the bubble tree, form a subtree consisting of the disks that can only be reached from the main component by passing through $C$; the datum corresponding to $C$ is a grouping including those marked points appearing in this subtree. Such a grouping is called a [*2-bracket*]{}, and every 2-bracket comes with a “width”, which indicates the seams that appear on the corresponding disk. This leads to the model $W_{\mathbf{n}}^{{{\operatorname}{br}}}$, and in this model the above degenerations take the following form:
Plan
----
The constructions in this paper are rather technical, and with the exception of §\[sec:ass\], our definitions and results are completely new. For this reason, we give a plan of the paper to orient the reader.
[**§\[sec:ass\]:**]{} We recall two equivalent constructions, called $K_r^{{{\operatorname}{tree}}}$ and $K_r^{{{\operatorname}{br}}}$, of the associahedra $K_r$, along with several basic properties. This material is not new, but these particular constructions of $K_r$ are needed for the constructions of the 2-associahedron $W_{\mathbf{n}}$ in §\[sec:2ass\]. In addition, the constructions of $K_r^{{{\operatorname}{tree}}}$ and $K_r^{{{\operatorname}{br}}}$ and the proofs of Prop. \[prop:Kr\_iso\] and Prop. \[prop:Kr\_main\] are analogous to the constructions of $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ and $W_{\mathbf{n}}^{{{\operatorname}{br}}}$ and the proofs of Thms. \[thm:iso\] and \[thm:main\], and so will serve as an introduction to §§\[sec:2ass\]–\[sec:Wn\_polytope\].
1. In Def. \[def:Krtree\_set\] and Def.-Lem. \[deflem:Krtree\_poset\] we define a poset, $K_r^{{{\operatorname}{tree}}}$, consisting of rooted ribbon trees with $r$ leaves. Then, in Def. \[def:Krbr\], we define the poset $K_r^{{{\operatorname}{br}}}$, consisting of 1-bracketings of $r$ letters. We prove that the posets $K_r^{{{\operatorname}{tree}}}$ and $K_r^{{{\operatorname}{br}}}$ are isomorphic in Prop. \[prop:Kr\_iso\], and define $K_r \coloneqq K_r^{{{\operatorname}{tree}}}= K_r^{{{\operatorname}{br}}}$.
2. We establish two important properties of $K_r$, collected in the following result:
[**Proposition \[prop:Kr\_main\]**]{} (Key properties of $K_r$)[**.**]{} The posets $(K_r)$ satisfy the following properties:
- <span style="font-variant:small-caps;">(abstract polytope)</span> For $r \geq 2$, ${\widehat}{K_r} \coloneqq K_r \cup \{F_{-1}\}$ is an abstract polytope of dimension $r-2$.
- <span style="font-variant:small-caps;">(recursive)</span> For any $T \in K_r^{{{\operatorname}{tree}}}$, there is an inclusion of posets $$\begin{aligned}
\gamma_T\colon \prod_{\alpha \in T_{{{\operatorname}{int}}}} K_{\#\!{{\operatorname}{in}}(\alpha)}^{{{\operatorname}{tree}}}\hra K_r^{{{\operatorname}{tree}}},\end{aligned}$$ which restricts to a poset isomorphism onto ${\mathrm{cl}}(T) = (F_{-1},T]$.
We now give brief explanations of these properties.
1. <span style="font-variant:small-caps;">(abstract polytope)</span>: As explained in Def. \[def:abstract\_polytope\], an abstract polytope is a poset satisfying some of the characteristic combinatorial properties of a convex polytope.
2. <span style="font-variant:small-caps;">(recursive)</span>: This property reflects the fact that if $S$ is a stratum in ${\overline}{\mathcal{M}}_{\mathbf{n}}$ corresponding to a configuration with several disk-components, then the degenerations that can take place in ${\overline}S$ correspond to a choice of a degeneration (or lack thereof) in each of the disk-components. We depict one of the maps $\gamma_T$ in the following figure (using ${\overline}{\mathcal{M}}_{\mathbf{n}}$ rather than $K_r^{{{\operatorname}{tree}}}$ or $K_r^{{{\operatorname}{br}}}$ for clarity):
\[fig:operad\]
Here the upper-left edge in $K_4$ is decomposed as the product of $K_2$ and $K_3$, corresponding to the two disk-components appearing in the label on the upper-left edge. These maps give $(K_r)$ the structure of an operad, which is of fundamental importance for applications to symplectic geometry.
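As a concrete check on the bracketing model, the vertices of $K_r$ — the complete (binary) bracketings of $r$ letters, equivalently the binary rooted ribbon trees with $r$ leaves — are counted by the Catalan numbers $C_{r-1}$. This is standard combinatorics rather than part of the paper's construction; a minimal enumeration:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def full_bracketings(r):
    """Number of complete bracketings of r letters: split the word at the
    outermost bracket into a left and a right complete bracketing."""
    if r == 1:
        return 1
    return sum(full_bracketings(i) * full_bracketings(r - i) for i in range(1, r))

# Compare the recursion with the closed-form Catalan number C_{r-1}.
for r in range(2, 7):
    print(r, full_bracketings(r), comb(2 * (r - 1), r - 1) // r)
```

For instance $K_4$, the pentagon, has $C_3 = 5$ vertices, matching the five complete bracketings of four letters.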
[**§\[sec:2ass\]:**]{} In this section, we construct the posets $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ and $W_{\mathbf{n}}^{{{\operatorname}{br}}}$ and show that they are isomorphic.
1. Here we define the posets $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ and $W_{\mathbf{n}}^{{{\operatorname}{br}}}$ (see Def. \[def:Wn\_tree\] and Def. \[def:Wn\_br\]), which were motivated in §\[ss:motivation\]. We also show in Lemma \[lem:WnKn\] that $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ specializes to the associahedra and the multiplihedra, which will be important for future applications to symplectic geometry.
2. This subsection is devoted to the proof of the following theorem:
[**Theorem \[thm:iso\]**]{} (Equivalence of the two models for $W_{\mathbf{n}}$). For any $r\geq 1$ and ${\mathbf{n}}\in {\mathbb{Z}}_{\geq0}^r\setminus\{{\mathbf{0}}\}$, $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ and $W_{\mathbf{n}}^{{{\operatorname}{br}}}$ are isomorphic posets.
With this theorem in hand, we define $W_{\mathbf{n}}\coloneqq W_{\mathbf{n}}^{{{\operatorname}{tree}}}= W_{\mathbf{n}}^{{{\operatorname}{br}}}$. We also define a forgetful map $\pi\colon W_{\mathbf{n}}\to K_r$, which has a simple definition in either model for $W_{\mathbf{n}}$: $\pi^{{{\operatorname}{tree}}}\colon W_{\mathbf{n}}^{{{\operatorname}{tree}}}\to K_r^{{{\operatorname}{tree}}}$ sends a tree-pair $T_b \to T_s$ to the seam tree $T_s$, and $\pi^{{{\operatorname}{br}}}\colon W_{\mathbf{n}}^{{{\operatorname}{br}}}\to K_r^{{{\operatorname}{br}}}$ sends a 2-bracketing to the underlying 1-bracketing of $1\:2\:\cdots\: r$. In the following figure we depict $\pi^{{{\operatorname}{br}}}\colon W_{200}^{{{\operatorname}{br}}}\to K_3^{{{\operatorname}{br}}}$:
The forgetful map provides an important connection between the 2-associahedra and the associahedra. Together with the <span style="font-variant:small-caps;">(recursive)</span> property described below, the forgetful map endows $(W_{\mathbf{n}})$ with the structure of a [*relative 2-operad*]{}, a notion which the author plans to describe in a forthcoming paper.
[**§\[sec:Wn\_polytope\]:**]{} This section is devoted to the proof of several properties of $W_{\mathbf{n}}$ which we collect in this paper’s main theorem:
[**Theorem \[thm:main\]**]{} (Key properties of $W_{\mathbf{n}}$). For any $r \geq 1$ and ${\mathbf{n}}\in {\mathbb{Z}}^r_{\geq0}\setminus\{{\mathbf{0}}\}$, the 2-associahedron $W_{\mathbf{n}}$ is a poset, the collection of which satisfies the following properties:
- <span style="font-variant:small-caps;">(abstract polytope)</span> For ${\mathbf{n}}\neq (1)$, ${\widehat}{W_{\mathbf{n}}} \coloneqq W_{\mathbf{n}}\cup \{F_{-1}\}$ is an abstract polytope of dimension $|{\mathbf{n}}| + r - 3$.
- <span style="font-variant:small-caps;">(forgetful)</span> $W_{\mathbf{n}}$ is equipped with forgetful maps $\pi\colon W_{\mathbf{n}}\to K_r$, which are surjective maps of posets.
- <span style="font-variant:small-caps;">(recursive)</span> For any stable tree-pair $2T = T_b {\stackrel}{f}{\to} T_s \in W_{\mathbf{n}}^{{{\operatorname}{tree}}}$, there is an inclusion of posets $$\begin{aligned}
\Gamma_{2T} \colon \prod_{
{\alpha \in V_{{{\operatorname}{comp}}}^1(T_b),}
\atop
{{{\operatorname}{in}}(\alpha)=(\beta)}
} W_{\#\!{{\operatorname}{in}}(\beta)}^{{{\operatorname}{tree}}}\times
\prod_{\rho \in V_{{{\operatorname}{int}}}(T_s)} \prod^{K_{\#\!{{\operatorname}{in}}(\rho)}}_{
{\alpha\in V_{{{\operatorname}{comp}}}^{\geq2}(T_b)\cap f^{-1}\{\rho\},}
\atop
{{{\operatorname}{in}}(\alpha)=(\beta_1,\ldots,\beta_{\#\!{{\operatorname}{in}}(\rho)})}
}
\hspace{-0.25in} W^{{{\operatorname}{tree}}}_{\#\!{{\operatorname}{in}}(\beta_1),\ldots,\#\!{{\operatorname}{in}}(\beta_{\#\!{{\operatorname}{in}}(\alpha)})}
\hra W_{\mathbf{n}}^{{{\operatorname}{tree}}},\end{aligned}$$ where the superscript on one of the product symbols indicates that it is a fiber product with respect to the maps described in <span style="font-variant:small-caps;">(forgetful)</span>. This inclusion is a poset isomorphism onto ${\mathrm{cl}}(2T) = (F_{-1},2T]$.
We now make remarks about two of these properties.
1. <span style="font-variant:small-caps;">(abstract polytope):</span> It seems likely that ${\overline}{2{\mathcal{M}}}_{\mathbf{n}}$ can be realized as a convex polytope in a way that identifies its face lattice with $W_{\mathbf{n}}$, but this is not important for the author’s purposes.
2. <span style="font-variant:small-caps;">(recursive)</span>: This property is similar to the <span style="font-variant:small-caps;">(recursive)</span> property of $K_r$, but differs in that the closed strata of $W_{\mathbf{n}}$ are *fiber* products of lower-dimensional 2-associahedra. We depict one of the maps $\Gamma_{2T}$ in the following figure (using ${\overline}{2{\mathcal{M}}}_{\mathbf{n}}$ rather than $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ or $W_{\mathbf{n}}^{{{\operatorname}{br}}}$ for clarity):
Here the fiber product $W_2 \times W_{100} \times_{K_3} W_{200}$ is included in $W_{300}$ as the green pentagon. ($W_{300}$ is a polyhedron; here we depict its net.) $K_3$ is the 1-dimensional associahedron, which is an interval, and the maps from $W_{100}$ and $W_{200}$ to $K_3$ measure the width of the yellow strip.
[**§\[app:ex\]:**]{} In the appendix, we record all 2- and 3-dimensional 2-associahedra (except those which are isomorphic to associahedra). One can immediately see from these examples that the 2-associahedra are not trivial extensions of the associahedra (for instance, products of associahedra). Moreover, these examples are evidence that all 2-associahedra can be realized as convex polytopes.
Future directions
-----------------
The construction of the 2-associahedra suggests several potential future directions:
- In [@b:realization], the author shows that the 2-associahedra have modular realizations in terms of witch curves; these realizations are stratified topological spaces. This fits into a larger project of constructing invariants of collections of Lagrangian correspondences, defined by counting pseudoholomorphic quilts whose domains are witch curves. More progress toward this goal can be found in [@bw:compactness], where a version of Gromov compactness for these quilts is proven using the analysis in [@b:sing].
- The associahedra have by now several realizations as convex polytopes, including a realization as the secondary polytopes of certain planar polygons. It is natural to wonder whether the 2-associahedra can also be realized as convex polytopes, in particular as secondary or fiber polytopes (a possibility suggested by Gabriel Kerr). Realizability as convex polytopes is not clearly relevant for applications to symplectic geometry, but a realization as secondary or fiber polytopes could suggest additional structure relevant to the study of Fukaya categories.
- It is natural to ask whether there is a notion of “$m$-associahedra” for all $m \geq 1$. The author believes that this concept should be relatively straightforward to define, but he does not have a need for this generalization and therefore has no plans to investigate this.
- The author has conjectured that a cellular model for the little 2-disks operad can be built by gluing together copies of $W_{1\cdots 1}$. If this is true, it suggests a way of defining a homotopy Gerstenhaber structure on symplectic cohomology involving only finitely many operations of any given arity.
- Analogously to how an $A_\infty$-category is the same thing as a category over the operad of associahedra, the author plans to define a notion of [*$A_\infty$-2-category*]{} as a 2-category over the relative 2-operad of 2-associahedra.
Glossary of notation, and conventions {#ss:notation_and_conventions}
-------------------------------------
| notation | interpretation | page first defined |
|----------|----------------|--------------------|
| $[\alpha,\beta]$ | path from $\alpha$ to $\beta$ in a tree | p. |
| $\alpha(\beta,\gamma,\delta)$ | single vertex in $[\beta,\gamma]\cap[\gamma,\delta]\cap[\delta,\beta]$ | p. |
| $\alpha(B)$ | vertex corresponding to 1-bracket $B$ | p. |
| $\alpha({{\mathbf{2B}}})$ | vertex corresponding to 2-bracket ${{\mathbf{2B}}}$ | p. |
| $B$ | 1-bracket | p. |
| $B(\alpha)$ | 1-bracket corresponding to vertex $\alpha$ | p. |
| ${\mathscr{B}}$ | 1-bracketing | p. |
| ${{\mathbf{2B}}}= (B,(2B_i))$ | 2-bracket | p. |
| ${{\mathbf{2B}}}(\alpha)$ | 2-bracket corresponding to vertex $\alpha$ | p. |
| $({\mathscr{B}},{2\mathscr{B}})$ | 2-bracketing | p. |
| $K_r$ | associahedron, $K_r=K_r^{{{\operatorname}{br}}}=K_r^{{{\operatorname}{tree}}}$ | p. |
| $K_r^{{{\operatorname}{br}}}$ | poset of 1-bracketings of $r$ | p. |
| $K_r^{{{\operatorname}{tree}}}$ | poset of rooted ribbon trees with $r$ leaves | p. |
| $\nu$ | isomorphism $K_r^{{{\operatorname}{tree}}}\to K_r^{{{\operatorname}{br}}}$ | p. |
| $2\nu$ | isomorphism $W_{\mathbf{n}}^{{{\operatorname}{tree}}}\to W_{\mathbf{n}}^{{{\operatorname}{br}}}$ | p. |
| $\pi$ | forgetful map $W_{\mathbf{n}}\to K_r$ | p. |
| $T_{\alpha\beta}$ | those vertices $\gamma$ with $[\alpha,\gamma]\ni\beta$ | p. |
| $2T = T_b \stackrel{f}{\to} T_s$ | stable tree-pair (with bubble tree $T_b$ and seam tree $T_s$) | p. |
| $W_{\mathbf{n}}$ | 2-associahedron, $W_{\mathbf{n}}= W_{\mathbf{n}}^{{{\operatorname}{br}}}= W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ | p. |
| $W_{\mathbf{n}}^{{{\operatorname}{br}}}$ | poset of 2-bracketings of ${\mathbf{n}}$ | p. |
| $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ | poset of stable tree-pairs of type ${\mathbf{n}}$ | p. |
The following conventions apply throughout this paper:
Note that when $\ell=0$, the “empty fiber product over $Y$” is $Y$ itself, and that when $\ell=1$ and $f_1\colon X_1 \to Y$ is surjective, this fiber product can be identified with $X_1$.
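For finite sets, this fiber-product convention can be made completely concrete. The following Python sketch (the function name and encoding are ours, not notation from this paper) computes $X_1 \times_Y \cdots \times_Y X_\ell$ as the set of tuples with matching images, with the $\ell=0$ case returning $Y$ itself:

```python
from itertools import product

def fiber_product(Y, factors):
    """Fiber product over Y of finitely many maps f_i: X_i -> Y.

    `factors` is a list of pairs (X_i, f_i). Elements of the fiber
    product are tuples (x_1, ..., x_l) with f_1(x_1) = ... = f_l(x_l).
    By convention, the empty fiber product (l = 0) is Y itself."""
    if not factors:
        return set(Y)
    return {xs for xs in product(*(X for X, _ in factors))
            if len({f(x) for (_, f), x in zip(factors, xs)}) == 1}

Y = {0, 1}
X1 = {"a", "b", "c"}
f1 = {"a": 0, "b": 1, "c": 1}.__getitem__
```

When $\ell = 1$ and $f_1$ is surjective, the result is the set of 1-tuples $\{(x)\,:\,x \in X_1\}$, which is identified with $X_1$ as stated in the convention.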
Acknowledgments
---------------
The ideas presented in this paper evolved over the course of several years, and the author is grateful to a number of people for their insight and support. This project began in 2014, while the author was a graduate student at MIT under the supervision of Katrin Wehrheim; the author is grateful for Prof. Wehrheim’s support throughout this project. Conversations with Satyan Devadoss greatly helped in the development of the definition of $W_{\mathbf{n}}$. Stefan Forcey pointed out the notion of abstract polytope. David Feldman, Nick Sheridan, and the anonymous referees made suggestions that improved the exposition. The author thanks Mohammed Abouzaid, Helmut Hofer, Jacob Lurie, Paul Seidel, and James Stasheff for encouragement.
The ideas in this paper were developed while the author was a graduate student at MIT, then a postdoctoral researcher at Northeastern University, and finally a member at the Institute for Advanced Study and a postdoctoral researcher at Princeton University. The author was supported by an NSF Graduate Research Fellowship and an NSF Mathematical Sciences Postdoctoral Research Fellowship.
The definition of $K_r$ and some basic properties {#sec:ass}
==============================================
In this section we define the associahedra $K_r$ and prove the analogues of Thms. \[thm:iso\] and \[thm:main\]. We will use $K_r$ later in this paper; moreover, the ideas in this section will shed light on the techniques we will use to prove those theorems.
Two constructions of $K_r$: in terms of rooted ribbon trees, and in terms of 1-bracketings {#ss:Kr_construction}
---------------------------------------------------------------------------------------
In this subsection we will define two posets $K_r^{{{\operatorname}{tree}}}$ and $K_r^{{{\operatorname}{br}}}$, then show that they are isomorphic. We begin by recalling the definition of a tree.
A *tree* is a finite set $T$ and a relation $E \subset T \times T$ satisfying these axioms:
- ([Symmetry]{}) If $\alpha E \beta$, then $\beta E \alpha$.
- ([Antireflexivity]{}) If $\alpha E \beta$, then $\alpha \neq \beta$.
- ([Connectedness]{}) If $\alpha, \beta$ are distinct vertices, then there exist $\gamma_1, \ldots, \gamma_k \in T$ with $\gamma_1 = \alpha$, $\gamma_k = \beta$, and $\gamma_i E \gamma_{i+1}$ for every $i$.
- ([No cycles]{}) If $\gamma_1, \ldots, \gamma_k$ are vertices with $\gamma_i E \gamma_{i+1}$ and $\gamma_i \neq \gamma_{i+2}$ for all $i$, then $\gamma_1 \neq \gamma_k$.
$\triangle$
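For a finite relation, these four axioms can be verified mechanically. Here is a minimal Python sketch (the encoding and function name are ours); it uses the standard fact that a connected finite graph satisfies (no cycles) exactly when it has one fewer undirected edge than vertices:

```python
def is_tree(vertices, edges):
    """Check the tree axioms for a finite relation E given as a set of
    ordered pairs over a nonempty vertex set."""
    # (Symmetry) and (Antireflexivity)
    if any((b, a) not in edges or a == b for (a, b) in edges):
        return False
    # (Connectedness): search outward from an arbitrary vertex
    vs = list(vertices)
    seen, frontier = {vs[0]}, [vs[0]]
    while frontier:
        a = frontier.pop()
        for b in vertices:
            if (a, b) in edges and b not in seen:
                seen.add(b)
                frontier.append(b)
    if seen != set(vertices):
        return False
    # (No cycles): connected + acyclic <=> #undirected edges = #vertices - 1
    return len(edges) // 2 == len(vertices) - 1

# The path a - b - c is a tree; adding the edge a - c creates a cycle.
V = {"a", "b", "c"}
E = {("a", "b"), ("b", "a"), ("b", "c"), ("c", "b")}
```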
We can now define the model $K^{{{\operatorname}{tree}}}_r$.
\[def:Krtree\_set\] A *rooted ribbon tree* (RRT) is a tree $T$ with a choice of a root $\alpha_{{{\operatorname}{root}}}\in T$ and a cyclic ordering of the edges incident to each vertex; we orient such a tree toward the root. We say that a vertex $\alpha$ of an RRT $T$ is *interior* if the set ${{\operatorname}{in}}(\alpha)$ of its incoming neighbors is nonempty, and we denote the set of interior vertices of $T$ by $T_{{{\operatorname}{int}}}$. An RRT $T$ is *stable* if every interior vertex has at least 2 incoming edges. We define $K^{{{\operatorname}{tree}}}_r$ to be the set of all isomorphism classes of stable rooted ribbon trees with $r$ leaves.
We denote the $i$-th leaf of an RRT $T$ by $\lambda_i^T$. For any $\alpha, \beta \in T$, $T_{\alpha\beta}$\[p:Talphabeta\] denotes those vertices $\gamma$ such that the path $[\alpha,\gamma]$\[p:path\] from $\alpha$ to $\gamma$ passes through $\beta$. We denote $T_\alpha \coloneqq T_{\alpha_{{{\operatorname}{root}}}\alpha}$. $\triangle$
Here is an illustration of the notation we have just introduced, in the case of a single RRT $T$:
\[fig:RRT\_example\]
The following lemma provides a useful alternate characterization of RRTs.
\[lem:RRT\_in\] An RRT is equivalent to the following data:
- a finite set $V$ of vertices with a distinguished element $\alpha_{{{\operatorname}{root}}}$;
- for every $\alpha \in V$, a sequence ${{\operatorname}{in}}(\alpha) \subset V$ such that:
- for every $\alpha \in V$, ${{\operatorname}{in}}(\alpha) \not\ni \alpha_{{{\operatorname}{root}}}$;
- for every $\alpha \neq \alpha_{{{\operatorname}{root}}}$ there exists a unique vertex $\beta$ with ${{\operatorname}{in}}(\beta) \ni \alpha$; and
- if $\alpha_1, \ldots, \alpha_\ell$ is a sequence in $V$ with $\ell \geq 2$ and $\alpha_j \in {{\operatorname}{in}}(\alpha_{j+1})$ for every $j$, then $\alpha_1 \neq \alpha_\ell$.
Moreover, the RRT is stable if and only if for every $\alpha \in V$, $\#\!{{\operatorname}{in}}(\alpha) \neq 1$.
Throughout this proof, we will abbreviate $\alpha \in {{\operatorname}{in}}(\beta)$ by $\alpha \prec \beta$.
[*[Step 1: Given an RRT, we show that its vertices together with their incoming neighbors satisfy (1–3).]{}*]{}
Fix an RRT $T$. Its root $\alpha_{{{\operatorname}{root}}}$ is not an incoming neighbor of any vertex, so (1) holds. It is also clear that any $\alpha \neq \alpha_{{{\operatorname}{root}}}$ is an incoming neighbor of exactly one vertex, since otherwise the [(no cycles)]{} property would not hold; therefore (2) holds. Finally, if $\alpha_1 \prec \ldots \prec \alpha_\ell$ is a sequence of vertices with $\ell \geq 2$, then ${{{\operatorname}{dist}}}(\alpha_j,\alpha_{{{\operatorname}{root}}}) = {{{\operatorname}{dist}}}(\alpha_{j+1},\alpha_{{{\operatorname}{root}}}) + 1$ for every $j$, so $\alpha_1 \neq \alpha_\ell$.
[*[Step 2: Given a finite set $V \ni \alpha_{{{\operatorname}{root}}}$ and sequences ${{\operatorname}{in}}(\alpha)$ for $\alpha \in V$ that satisfy (1–3), we produce an RRT having $V$ as its vertices and ${{\operatorname}{in}}(\alpha)$ as the incoming neighbors of $\alpha$, ordered according to the order of ${{\operatorname}{in}}(\alpha)$.]{}*]{}
Given this data, define a tree $T$ by $$\begin{aligned}
V(T) \coloneqq V, \quad \alpha E \beta \iff \alpha \prec \beta \text{ or } \beta \prec \alpha.\end{aligned}$$ This relation is clearly symmetric, and its antireflexivity follows from the $\ell=2$ case of (3).
To prove [(connectedness)]{}, we will show that every vertex is connected to $\alpha_{{{\operatorname}{root}}}$. Fix $\alpha \in V \setminus \{\alpha_{{{\operatorname}{root}}}\}$, and define a path like so: set $\alpha_1 \coloneqq \alpha$, and for $j \geq 1$ with $\alpha_j \neq \alpha_{{{\operatorname}{root}}}$, set $\alpha_{j+1}$ to be the unique vertex with $\alpha_j \prec \alpha_{j+1}$. By (3), this path is nonoverlapping, so since $V$ is finite, this path will eventually terminate at $\alpha_{{{\operatorname}{root}}}$.
To prove [(no cycles)]{}, consider a path $\alpha_1, \ldots, \alpha_\ell$ with $\ell \geq 3$ and $\alpha_j \neq \alpha_{j+2}$ for every $j$; we must show $\alpha_1 \neq \alpha_\ell$. If there exists $j$ with $\alpha_j \succ \alpha_{j+1} \prec \alpha_{j+2}$, then (2) implies $\alpha_j = \alpha_{j+2}$, in contradiction to our assumption. Therefore we must either have (a) $\alpha_1 \prec \cdots \prec \alpha_\ell$, (b) $\alpha_1 \succ \cdots \succ \alpha_\ell$, or (c) $\alpha_1 \prec \cdots \prec \alpha_j \prec \alpha_{j+1} \succ \alpha_{j+2} \succ \cdots \succ \alpha_\ell$. In cases (a) and (b), (3) implies $\alpha_1 \neq \alpha_\ell$. In case (c), suppose $\alpha_1 = \alpha_\ell$. (2) implies that the paths $(\beta_j)$ resp. $(\gamma_j)$ defined by $\beta_1 \coloneqq \alpha_1$ and $\beta_{j+1} \succ \beta_j$ resp. $\gamma_1 \coloneqq \alpha_\ell$ and $\gamma_{j+1} \succ \gamma_j$ coincide, hence $\alpha_j = \alpha_{j+2}$, a contradiction. In all cases we have shown $\alpha_1 \neq \alpha_\ell$, hence [(no cycles)]{} holds.
Finally, we upgrade $T$ to an RRT. Define its root to be $\alpha_{{{\operatorname}{root}}}\in V$. With this choice of root, the incoming neighbors of $\alpha$ are exactly the elements of ${{\operatorname}{in}}(\alpha)$; order these vertices according to the order on ${{\operatorname}{in}}(\alpha)$.
Clearly Steps 1 and 2 are inverse to one another. The stability criterion is also obvious.
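Lemma \[lem:RRT\_in\] says that an RRT is determined by its root together with the sequences ${{\operatorname}{in}}(\alpha)$. The following Python sketch makes this concrete (the encoding, a dict mapping each vertex to the ordered tuple of its incoming neighbors, and the function names are ours):

```python
def check_rrt(root, incoming):
    """Verify conditions (1)-(3) of the lemma for data given as
    incoming[a] = ordered tuple of incoming neighbors of vertex a."""
    V = set(incoming)
    # (1): the root is never an incoming neighbor
    assert all(root not in incoming[a] for a in V)
    # (2): every non-root vertex is an incoming neighbor of exactly one vertex
    for a in V - {root}:
        assert sum(a in incoming[b] for b in V) == 1
    # (3): following "outgoing" steps from any vertex reaches the root
    # without revisiting a vertex (so incoming chains never close up)
    out = {a: b for b in V for a in incoming[b]}
    for a in V - {root}:
        seen = {a}
        while a != root:
            a = out[a]
            assert a not in seen  # a repeat would violate (3)
            seen.add(a)

def is_stable(incoming):
    """Stability: no vertex has exactly one incoming edge."""
    return all(len(incoming[a]) != 1 for a in incoming)

# A stable RRT with root r, one interior vertex v, and leaves 1, 2, 3:
T = {"r": ("v", "3"), "v": ("1", "2"), "1": (), "2": (), "3": ()}
```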
Now we will define a “dimension” function $d$ on $K_r^{{{\operatorname}{tree}}}$. As described in §\[sec:intro\], $K_r^{{{\operatorname}{tree}}}$ indexes the strata of a topological space ${\overline}{\mathcal{M}}_r$; $d$ assigns to an element of $K_r^{{{\operatorname}{tree}}}$ the dimension of the corresponding stratum of ${\overline}{\mathcal{M}}_r$.
\[def:RRT\_dim\] For $T$ a stable RRT in $K_r^{{{\operatorname}{tree}}}$, we define its *dimension* $d(T) \in {\mathbb{Z}}_{\geq0}$ like so: $$\begin{aligned}
\label{eq:RRT_dim}
d(T) \coloneqq r - \#\!T_{{{\operatorname}{int}}}- 1.\end{aligned}$$ $\triangle$
Note that $K_1^{{{\operatorname}{tree}}}$ has a single element, the RRT $\bullet$ with a single vertex and no edges; its dimension is zero. $\triangle$
\[lem:Kr\_dim\_props\] Fix an RRT $T \in K_r^{{{\operatorname}{tree}}}$.
- The dimension can be re-expressed using this formula: $$\label{eq:T_dim_reform}
d(T) = \sum_{\alpha \in T_{{{\operatorname}{int}}}} (\#\!{{\operatorname}{in}}(\alpha)-2).$$
- If $r \geq 2$, the dimension satisfies the inequality $0 \leq d(T) \leq r - 2$.
<!-- -->
- Equation \[eq:T\_dim\_reform\] is the result of substituting the following identity into \[eq:RRT\_dim\]: $$\begin{aligned}
\label{eq:T_valence_sum}
\sum_{\alpha \in T_{{{\operatorname}{int}}}} \#\!{{\operatorname}{in}}(\alpha) = \#\!T_{{{\operatorname}{int}}}+ r - 1.
\end{aligned}$$ This follows by noting that $\sum_{\alpha \in T_{{{\operatorname}{int}}}} \#\!{{\operatorname}{in}}(\alpha)$ counts the vertices in $T$ that are an incoming neighbor of another vertex in $T$. This set is the complement of the root of $T$, hence has cardinality $\#\!T_{{{\operatorname}{int}}}+ r - 1$.
- The inequality $d(T) \geq 0$ follows from \[eq:T\_dim\_reform\] and the stability hypothesis on $T$; the inequality $d(T) \leq r-2$ follows immediately from \[eq:RRT\_dim\].
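The equivalence of the two dimension formulas can be checked directly on examples. A Python sketch (using our own encoding of an RRT as a dict from each vertex to the ordered tuple of its incoming neighbors):

```python
def dims(incoming):
    """Compute d(T) by both formulas:
    d = r - #T_int - 1   and   d = sum over interior alpha of (#in(alpha) - 2)."""
    interior = [a for a in incoming if incoming[a]]
    leaves = [a for a in incoming if not incoming[a]]
    d1 = len(leaves) - len(interior) - 1
    d2 = sum(len(incoming[a]) - 2 for a in interior)
    return d1, d2

# A corolla with 4 leaves (one interior vertex) has dimension 4 - 1 - 1 = 2;
# grouping leaves 1, 2 under a new interior vertex drops the dimension to 1.
corolla = {"r": (1, 2, 3, 4), 1: (), 2: (), 3: (), 4: ()}
split = {"r": ("v", 3, 4), "v": (1, 2), 1: (), 2: (), 3: (), 4: ()}
```

In both examples the two formulas agree, as Lemma \[lem:Kr\_dim\_props\] guarantees.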
We now define moves that can be performed on stable RRTs. As we will show, each move decreases the dimension $d$ by one — and in fact, the moves that can be performed on $T$ correspond to the codimension-1 degenerations that can occur in ${\overline}{\mathcal{M}}_r$ starting from the stratum corresponding to $T$. Given a stable RRT $T$, here is the general description of a legal move that can be performed on $T$: choose $\alpha \in T_{{{\operatorname}{int}}}$ and a consecutive subset $(\gamma_{p+1},\ldots,\gamma_{p+l})\subset(\gamma_1,\ldots,\gamma_k)={{\operatorname}{in}}(\alpha)$ where $l$ satisfies $2 \leq l < k$ (necessary to preserve stability). The corresponding move consists of modifying the incoming edges of $\alpha$ like so:
\[fig:T\_moves\]
In the following figure, we illustrate the notion of a move on an RRT. On the left resp. right we show all the RRTs with three resp. four leaves, and indicate all moves amongst these RRTs by arrows, each one corresponding to a single move. As we will shortly see, these moves equip the set of RRTs with a fixed number of leaves with the structure of a poset, which in fact is an abstract polytope; for this reason we suggestively overlay the RRTs over polytopes.
\[fig:RRT\_moves\]
$\triangle$
\[lem:Kr\_move\_dim\] If $T$ is a stable RRT and $T'$ is the result of performing a move on $T$, then $d(T') = d(T)-1$.
If $T$ has $r$ leaves, then we have $d(T) = r - \#\!T_{{{\operatorname}{int}}}- 1$. When we perform a move on $T$, the number of leaves remains the same and one new interior vertex is created, so $d$ decreases by one.
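A move, and the fact that it drops the dimension by exactly one, can be sketched in a few lines of Python (again using our own incoming-neighbors dict encoding; the function names are ours):

```python
def perform_move(incoming, alpha, p, l, new):
    """Collapse the consecutive run in(alpha)[p:p+l], with 2 <= l < #in(alpha),
    under a new interior vertex `new`."""
    kids = incoming[alpha]
    assert 2 <= l < len(kids)  # the stability-preserving condition on l
    out = dict(incoming)
    out[alpha] = kids[:p] + (new,) + kids[p + l:]
    out[new] = kids[p:p + l]
    return out

def d(incoming):
    """Dimension: sum over interior vertices of (#in(alpha) - 2)."""
    return sum(len(v) - 2 for v in incoming.values() if v)

# Start from the 4-leaf corolla and collapse leaves 2, 3 under a new vertex v.
T = {"r": (1, 2, 3, 4), 1: (), 2: (), 3: (), 4: ()}
T2 = perform_move(T, "r", 1, 2, "v")
```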
\[deflem:Krtree\_poset\] Define $K_r^{{{\operatorname}{tree}}}$ as a poset by declaring $T' < T$ if there is a finite sequence of moves that transforms $T$ into $T'$.
We must check that this defines a partial order on $K_r^{{{\operatorname}{tree}}}$. Antisymmetry follows from Lemma \[lem:Kr\_move\_dim\]: every move strictly decreases $d$, so $T' < T$ and $T < T'$ cannot both hold. Transitivity is immediate.
Recall that a **tree homomorphism** is a map $f\colon T \to {\widetilde}T$ such that $f^{-1}\{{\widetilde}\alpha\}$ is a tree for every ${\widetilde}\alpha \in {\widetilde}T$, and $\alpha E\beta$ implies that either $f(\alpha)=f(\beta)$ or $ f(\alpha){\widetilde}Ef(\beta)$. An **RRT homomorphism** is a tree homomorphism $f\colon T \to {\widetilde}T$ that sends leaves to leaves, root to root, interior vertices to interior vertices, and respects the cyclic orderings of edges and the orientation in the following ways:
- Suppose that ${\widetilde}\beta_1, {\widetilde}\beta_2$ lie in ${{\operatorname}{in}}({\widetilde}\alpha)$ and satisfy ${\widetilde}\beta_1 < {\widetilde}\beta_2$. Suppose that $\beta_1, \beta_2$ satisfy $f(\beta_1) = {\widetilde}\beta_1$, $f(\beta_2) = {\widetilde}\beta_2$. Choose $\gamma \in T$ to be the first intersection of the path from $\beta_1$ to $\alpha_{{{\operatorname}{root}}}^T$ and the path from $\beta_2$ to $\alpha_{{{\operatorname}{root}}}^T$, and define $\delta_1, \delta_2$ to be the incoming neighbors of $\gamma$ that these two paths pass through. Then the inequality $\delta_1 < \delta_2$ holds.
- Suppose $\alpha$ lies in $T$ and $\beta$ lies in ${{\operatorname}{in}}(\alpha)$. Then either $f(\beta) = f(\alpha)$ or $f(\beta) \in {{\operatorname}{in}}(f(\alpha))$.
If ${\widetilde}T$ is the result of performing a sequence of moves on a stable RRT $T$, then there is a surjective RRT homomorphism ${\widetilde}T \to T$ gotten by contracting the edges added to $T$ to form ${\widetilde}T$. As the next lemma shows, all surjective homomorphisms of stable RRTs can be obtained in this fashion.
\[lem:RRT\_ll\] If $f\colon {\widetilde}T \to T$ is a surjective homomorphism of stable RRTs, then ${\widetilde}T$ can be obtained from $T$ by applying a finite sequence of moves, and $f$ is the map that contracts the new edges that were added to $T$ to form ${\widetilde}T$.
We begin by showing that $T$ and ${\widetilde}T$ have the same number of leaves, and that $f$ satisfies $f(\lambda_i^{{\widetilde}T}) = \lambda_i^T$ for all $i$. For this, it suffices to show that $f$ is injective on leaves. Suppose that for some $i \neq j$, $f(\lambda_i^{{\widetilde}T}) = f(\lambda_j^{{\widetilde}T})$. The preimage of every vertex in $T$ is connected, so the outgoing neighbor of $\lambda_i^{{\widetilde}T}$ must also be sent to $f(\lambda_i^{{\widetilde}T})$. This contradicts the hypothesis that $f$ sends interior vertices to interior vertices.
We prove the lemma by induction on $\#\!{\widetilde}T - \#\!T$. If ${\widetilde}T$ and $T$ have the same number of vertices, then they are isomorphic and the claim is trivially true. Next, suppose we have proven the claim as long as the inequality $\#\!{\widetilde}T - \#\!T \leq k$ holds, and suppose that ${\widetilde}T, T$ satisfy $\#\!{\widetilde}T = \#\!T + k+1$. Choose an edge ${\widetilde}\alpha {\widetilde}E{\widetilde}\beta$ with $f({\widetilde}\alpha)=f({\widetilde}\beta)$, and assume that ${\widetilde}\beta$ is further from the root than ${\widetilde}\alpha$. Define $T'$ to be the stable RRT gotten by contracting ${\widetilde}\alpha{\widetilde}E{\widetilde}\beta$ in ${\widetilde}T$. Then ${\widetilde}T$ can be obtained from $T'$ by making a single move. Moreover, $f\colon {\widetilde}T \to T$ can be factored as ${\widetilde}T \to T' \to T$, where ${\widetilde}T \to T'$ is the map that contracts ${\widetilde}\alpha{\widetilde}E{\widetilde}\beta$, and $g\colon T' \to T$ is defined by $$\begin{aligned}
g({\widetilde}\gamma) \coloneqq \begin{cases}
f({\widetilde}\alpha), & {\widetilde}\gamma \in \{{\widetilde}\alpha,{\widetilde}\beta\}, \\
f({\widetilde}\gamma), & \text{otherwise}.
\end{cases}\end{aligned}$$ By the inductive hypothesis, $T'$ is obtained from $T$ by applying finitely many moves, with $g$ contracting the added edges; appending the single move that produces ${\widetilde}T$ from $T'$ completes the induction.
Another way to characterize a stable RRT is as a *1-bracketing* of $\{1,\ldots,r\}$. Each 1-bracket corresponds to the RRT’s local structure at a particular interior vertex.
\[def:Krbr\] A *1-bracket of $r$* is a nonempty consecutive subset $B \subset \{1,\ldots,r\}$. \[p:B\] A *1-bracketing of $r$* is a collection ${\mathscr{B}}$\[p:sB\] of 1-brackets of $r$ satisfying these properties:
- [(Bracketing)]{} If $B, B' \in {\mathscr{B}}$ have $B \cap B' \neq \emptyset$, then either $B \subset B'$ or $B' \subset B$.
- [(Root and leaves)]{} ${\mathscr{B}}$ contains $\{1,\ldots,r\}$ and $\{i\}$ for every $i$.
We denote the set of all 1-bracketings of $r$ by $K_r^{{{\operatorname}{br}}}$,\[p:Krbr\] and define a partial order by declaring ${\mathscr{B}}' < {\mathscr{B}}$ if ${\mathscr{B}}$ is a proper subcollection of ${\mathscr{B}}'$. $\triangle$
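The two conditions in this definition translate directly into code. A Python sketch (the function name and the representation of brackets as sets of integers are ours):

```python
def is_bracketing(r, brackets):
    """Check that `brackets` (a list of sets of integers) is a 1-bracketing
    of r: every bracket is a nonempty consecutive subset of {1,...,r}, any
    two brackets are nested or disjoint, and the root bracket {1,...,r}
    and all singletons are present."""
    bs = [frozenset(b) for b in brackets]
    full = frozenset(range(1, r + 1))
    # nonempty and consecutive
    if any(not b or max(b) - min(b) + 1 != len(b) for b in bs):
        return False
    # (Bracketing): overlapping brackets must be nested
    for a in bs:
        for b in bs:
            if a & b and not (a <= b or b <= a):
                return False
    # (Root and leaves)
    return full in bs and all(frozenset({i}) in bs for i in full)

top = [{1, 2, 3}, {1}, {2}, {3}]   # the maximal element of K_3^br
face = top + [{1, 2}]              # a codimension-1 face
bad = top + [{1, 2}, {2, 3}]       # {1,2} and {2,3} overlap without nesting
```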
\[def:T\_bracket\] Fix a stable tree $T \in K_r^{{{\operatorname}{tree}}}$. A *$T$-bracket* is a 1-bracket $B$ of $r$ with the property that for some $\alpha \in T$, $B$ is the set of indices $i$ for which $T_\alpha$ contains $\lambda_i$. \[p:Balpha\] We denote this bracket by $B(\alpha) \coloneqq B$. $\triangle$
Note that since $T$ is stable, the $T$-brackets are in bijective correspondence with the vertices of $T$.
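Computing the $T$-brackets amounts to collecting, for each vertex $\alpha$, the indices of the leaves lying below $\alpha$. A Python sketch (our own incoming-dict encoding of an RRT; the function name is ours):

```python
def t_brackets(incoming, root, leaf_order):
    """Compute B(alpha) for every vertex of an RRT: the set of indices i
    such that leaf i lies in T_alpha. `leaf_order` lists the leaves in
    their left-to-right order, so leaf_order[i-1] is the i-th leaf."""
    idx = {leaf: i + 1 for i, leaf in enumerate(leaf_order)}
    brackets = {}
    def below(a):
        if not incoming[a]:  # a leaf contributes its own singleton
            brackets[a] = frozenset({idx[a]})
        else:                # an interior vertex collects its children's leaves
            brackets[a] = frozenset().union(*(below(b) for b in incoming[a]))
        return brackets[a]
    below(root)
    return brackets

# The stable RRT with root r, interior vertex v over leaves 1, 2, and leaf 3:
T = {"r": ("v", 3), "v": (1, 2), 1: (), 2: (), 3: ()}
nu_T = t_brackets(T, "r", [1, 2, 3])
```

As noted above, stability forces distinct vertices to have distinct brackets, so `nu_T` is a bijection from vertices to brackets.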
\[prop:Kr\_iso\] \[p:nu\] The function $\nu\colon K_r^{{{\operatorname}{tree}}}\to K_r^{{{\operatorname}{br}}}$ that sends a stable RRT $T$ to the set of $T$-brackets is an isomorphism of posets.
*Step 1: If $T$ is a stable RRT with $r$ leaves, then $\nu(T)$ is a 1-bracketing of $r$.*
We have $B({\alpha_{{{\operatorname}{root}}}}) = \{1,\ldots,r\}$ and $B(\lambda_i^T) = \{i\}$, so $\nu(T)$ contains $\{1,\ldots,r\}$ and $\{i\}$ for every $i$. Clearly every $B(\alpha) \in \nu(T)$ is nonempty and consecutive. Finally, if $\alpha, \beta \in T$ have neither $\alpha \in T_\beta$ nor $\beta \in T_\alpha$, then $B(\alpha) \cap B(\beta) = \emptyset$: indeed, if $i \in B(\alpha) \cap B(\beta)$, then the path from $\lambda_i$ to $\alpha_{{{\operatorname}{root}}}$ passes through both $\alpha$ and $\beta$, which forces $\alpha \in T_\beta$ or $\beta \in T_\alpha$. On the other hand, if $\beta \in T_\alpha$, then $B(\beta) \subset B(\alpha)$.
*Step 2: We define a putative inverse $\tau\colon K_r^{{{\operatorname}{br}}}\to K_r^{{{\operatorname}{tree}}}$.*
Given ${\mathscr{B}}\in K_r^{{{\operatorname}{br}}}$, we will define $\tau({\mathscr{B}})$ via Lemma \[lem:RRT\_in\]. Define $$\begin{gathered}
V \coloneqq {\mathscr{B}},
\qquad
\{1,\ldots,r\} \eqqcolon \alpha_{{{\operatorname}{root}}}\in V,
\\
B' \in {{\operatorname}{in}}(B) \iff \bigl(B' \subsetneq B \:\text{ and }\: \not\!\exists\: B'' \in {\mathscr{B}}: B' \subsetneq B'' \subsetneq B\bigr),
\nonumber\end{gathered}$$ and denote the vertex corresponding to $B \in {\mathscr{B}}$ by $\alpha(B)$. \[p:alpha\_B\] Any distinct $\alpha(B'), \alpha(B'') \in {{\operatorname}{in}}(\alpha(B))$ must have $B' \cap B'' = \emptyset$, since otherwise one of $B'$ and $B''$ would be properly contained in the other; we may therefore order ${{\operatorname}{in}}(\alpha(B))$ by declaring that $\alpha(B'), \alpha(B'') \in {{\operatorname}{in}}(\alpha(B))$ have $\alpha(B') <_{\alpha(B)} \alpha(B'')$ if and only if $i' < i''$ for all $i' \in B'$, $i'' \in B''$.
We now verify conditions (1–3) from Lemma \[lem:RRT\_in\]. To prove (1), note that for any $\alpha(B) \in V$ and $\alpha(B') \in {{\operatorname}{in}}(\alpha(B))$, $B'$ is a proper subset of $B$; therefore ${{\operatorname}{in}}(\alpha(B))$ does not contain $\alpha(\{1,\ldots,r\}) = \alpha_{{{\operatorname}{root}}}$. For (2), fix $\alpha(B) \in V \setminus \{\alpha_{{{\operatorname}{root}}}\}$ and define $\Sigma \coloneqq \{B' \in {\mathscr{B}}\:|\: B' \supsetneq B\}$. Then $\Sigma$ contains $\{1,\ldots,r\}$, hence is nonempty. Moreover, if $B', B'' \in \Sigma$ are distinct and minimal with respect to inclusion, then $B' \cap B'' \supset B \neq \emptyset$; therefore one of $B', B''$ must contain the other, a contradiction to minimality. This shows that $\Sigma$ contains a unique minimal element, which is the unique $B'$ with ${{\operatorname}{in}}(\alpha(B')) \ni \alpha(B)$; this establishes (2). Finally, if $\alpha(B^1), \ldots, \alpha(B^\ell) \in V$ is a sequence with $\ell \geq 2$ and $\alpha(B^j) \in {{\operatorname}{in}}(\alpha(B^{j+1}))$ for every $j$, then $B^1 \subsetneq B^\ell$, hence $\alpha(B^1) \neq \alpha(B^\ell)$.
It is clear that $\tau({\mathscr{B}})$ is stable and that $\{i\}$ is the $i$-th leaf of $\tau({\mathscr{B}})$.
[*Step 3: We show that $\nu$ and $\tau$ are inverse bijections.*]{}
First, fix $T \in K_r^{{{\operatorname}{tree}}}$; we claim $T \simeq \tau(\nu(T))$. There is an obvious identification of vertices, which identifies root with root. Next, we must show that the edge relations on the vertices of $T$ are the same, which is to say that $\beta \in {{\operatorname}{in}}(\alpha)$ is equivalent to $B(\beta) \subsetneq B(\alpha)$ together with the nonexistence of $\gamma \in T$ with $B(\beta) \subsetneq B(\gamma) \subsetneq B(\alpha)$.
Fix $\beta \in {{\operatorname}{in}}(\alpha)$. Certainly $B(\beta) \subset B(\alpha)$, and this containment is proper by the stability of $T$. (Indeed, define an outgoing path by setting $\delta_1 \coloneqq \alpha$, choosing $\delta_2$ to be an incoming neighbor of $\alpha$ other than $\beta$, and inductively defining $\delta_{k+1}$ to be an incoming neighbor of $\delta_k$ as long as ${{\operatorname}{in}}(\delta_k)$ is not empty. This path will terminate, by condition (3) in Lemma \[lem:RRT\_in\]. Moreover, this path does not include $\beta$: if it did, it would by (2) have $\delta_k = \alpha$ for some $k \geq 2$, which is impossible by (3). If $\lambda_j$ is the leaf at which this path terminates, then $j \in B(\alpha)\setminus B(\beta)$.) Suppose for a contradiction that there exists $\gamma$ with $B(\beta) \subsetneq B(\gamma) \subsetneq B(\alpha)$, and choose $i \in B(\beta)$. Since $B(\alpha), B(\beta), B(\gamma)$ all contain $i$, the path $[\lambda_i,\alpha_{{{\operatorname}{root}}}]$ must contain $\alpha, \beta, \gamma$. The containments $B(\beta)\subsetneq B(\gamma) \subsetneq B(\alpha)$ now imply that the path $\beta = \delta_1, \delta_2, \ldots, \delta_\ell = \alpha$ from $\beta$ to $\alpha$ is oriented toward the root and has $\delta_k = \gamma$ for some $k \in [2,\ell-1]$. Without loss of generality we may assume $k = 2$. Then $\beta \in {{\operatorname}{in}}(\gamma)$, so $\beta$ cannot lie in ${{\operatorname}{in}}(\alpha)$, a contradiction.
Conversely, suppose $B(\beta) \subsetneq B(\alpha)$ and that there does not exist $\gamma$ with $B(\beta) \subsetneq B(\gamma) \subsetneq B(\alpha)$. An argument similar to the one in the previous paragraph yields $\beta \in {{\operatorname}{in}}(\alpha)$.
Second, fix ${\mathscr{B}}\in K_r^{{{\operatorname}{br}}}$; we claim ${\mathscr{B}}= \nu(\tau({\mathscr{B}}))$. To prove this, it suffices to show that $\alpha(\{i\}) \in T_{\alpha(B)}$ if and only if $i \in B$. Suppose $\alpha(\{i\}) \in T_{\alpha(B)}$. This means that there is a sequence of 1-brackets $\{i\} = B_1, B_2, \ldots, B_\ell = B$ in ${\mathscr{B}}$ such that for every $j$, either (a) $B_j \subsetneq B_{j+1}$ and there exists no $B' \in {\mathscr{B}}$ with $B_j \subsetneq B' \subsetneq B_{j+1}$, or (b) the same holds but with $B_j$ and $B_{j+1}$ interchanged. In fact, an argument similar to the one made in the proof of [(no cycles)]{} in Lemma \[lem:RRT\_in\] implies that for every $j$ it is (a) that holds. Therefore $i$ lies in $B$. Conversely, suppose $i \in B$. Define a sequence in ${\mathscr{B}}$ by setting $B_1 \coloneqq B$ and, as long as $B_j$ is not equal to $\{i\}$, defining $B_{j+1}$ to be the largest element of ${\mathscr{B}}$ satisfying $B_j \supsetneq B_{j+1} \supseteq \{i\}$. This defines a non-self-intersecting path in $\tau({\mathscr{B}})$ that begins at $\alpha(B)$ and terminates at the $i$-th leaf, which proves the backwards direction of the assertion that $\alpha(\{i\}) \in T_{\alpha(B)}$ is equivalent to $i \in B$.
[*Step 4: We show that $\nu$ and $\tau$ respect the partial orders on $K_r^{{{\operatorname}{tree}}}$ and $K_r^{{{\operatorname}{br}}}$.*]{}
First, we show that if $T' < T$, then $\nu(T') < \nu(T)$. We may assume without loss of generality that $T'$ is the result of performing a single move on $T$. Denote by $\alpha \in T$ the vertex at which the move is performed, so that $T'$ is produced from $T$ by introducing a new incoming neighbor of $\alpha$. We may therefore regard $V(T)$ as a subset of $V(T')$. If $\beta$ is a vertex of $T$, and $B(\beta)$ resp. $B'(\beta)$ denote the indices of the leaves in $T_\beta$ resp. in $T'_\beta$, then $B(\beta) = B'(\beta)$. Since the new vertex of $T'$ contributes a bracket not present in $\nu(T)$, $\nu(T)$ is a proper subcollection of $\nu(T')$; therefore $\nu(T') < \nu(T)$.
Second, we show that if ${\mathscr{B}}' < {\mathscr{B}}$, then $\tau({\mathscr{B}}') < \tau({\mathscr{B}})$. Define a map $f\colon \tau({\mathscr{B}}') \to \tau({\mathscr{B}})$ like so: $$\begin{aligned}
f(\alpha(B')) \coloneqq \alpha\bigl(\min\{B \in {\mathscr{B}}\:|\: B \supset B'\}\bigr), \end{aligned}$$ where the minimum is taken with respect to inclusion. By [(root and leaves)]{}, the set over which we are taking the minimum contains $\{1,\ldots,r\}$, hence is nonempty; therefore $f$ is well-defined. I claim that $f$ is a surjective homomorphism of stable RRTs. Again by [(root and leaves)]{}, $f$ sends $\lambda_i^{\tau({\mathscr{B}}')}$ to $\lambda_i^{\tau({\mathscr{B}})}$ and $\alpha_{{{\operatorname}{root}}}^{\tau({\mathscr{B}}')}$ to $\alpha_{{{\operatorname}{root}}}^{\tau({\mathscr{B}})}$; since ${\mathscr{B}}$ is a subcollection of ${\mathscr{B}}'$, $f$ is surjective. It remains to show that the preimage under $f$ of each vertex in $\tau({\mathscr{B}})$ is connected. Fix $B \in {\mathscr{B}}$; it suffices to show that for any $B' \in {\mathscr{B}}'$ with $f(\alpha(B')) = \alpha(B)$, the path from $\alpha(B')$ to $\alpha(B)$ in $\tau({\mathscr{B}}')$ is contained in $f^{-1}\{\alpha(B)\}$. This is apparent from the definition of $f$, so we may conclude that $f$ is a surjective morphism of stable RRTs, hence $\tau({\mathscr{B}}') < \tau({\mathscr{B}})$ by Lemma \[lem:RRT\_ll\].
By this lemma, we may define $K_r \coloneqq K_r^{{{\operatorname}{tree}}}= K_r^{{{\operatorname}{br}}}$. \[p:Kr\]
Kr is an abstract polytope of dimension $r-2$ {#ss:Kr_polytope}
---------------------------------------------
In this subsection we prove the following proposition.
\[prop:Kr\_main\] The posets $(K_r)$ satisfy the following properties:
- <span style="font-variant:small-caps;">(abstract polytope)</span> For $r \geq 2$, ${\widehat}{K_r} \coloneqq K_r \cup \{F_{-1}\}$ is an abstract polytope of dimension $r-2$.
- <span style="font-variant:small-caps;">(recursive)</span> For any $T \in K_r^{{{\operatorname}{tree}}}$, there is an inclusion of posets $$\begin{aligned}
\gamma_T\colon \prod_{\alpha \in T_{{{\operatorname}{int}}}} K_{\#\!{{\operatorname}{in}}(\alpha)}^{{{\operatorname}{tree}}}\hra K_r^{{{\operatorname}{tree}}},\end{aligned}$$ which restricts to a poset isomorphism onto ${\mathrm{cl}}(T) = (F_{-1},T]$.
These two properties are proven in Prop. \[prop:Kr\_polytope\] resp. Def.-Lem. \[deflem:gammaT\].
We begin by establishing [(recursive)]{}. After the proof of Def.-Lem. \[deflem:gammaT\], we will illustrate the definition of $\gamma_T$ in an example.
\[deflem:gammaT\] Fix $r \geq 2$ and $T \in K_r^{{{\operatorname}{tree}}}$. Define a map $$\begin{aligned}
\gamma_T\colon \prod_{\alpha \in T_{{{\operatorname}{int}}}} K_{\#\!{{\operatorname}{in}}(\alpha)}^{{{\operatorname}{tree}}}\to K_r^{{{\operatorname}{tree}}}\end{aligned}$$ by sending $(T^\beta)_{\beta \in T_{{{\operatorname}{int}}}}$ to the RRT obtained by replacing each $\beta$ and its incoming neighbors and edges by $T^\beta$. Then $\gamma_T$ restricts to a poset isomorphism from its domain to ${\mathrm{cl}}(T)$.
We will define an inverse $\sigma_T\colon {\mathrm{cl}}(T) \to \prod_{\alpha\in T_{{{\operatorname}{int}}}} K_{\#\!{{\operatorname}{in}}(\alpha)}^{{{\operatorname}{tree}}}$ to the restriction of $\gamma_T$. Fix ${\widehat}T \in {\mathrm{cl}}(T)$; then there is a (unique) surjective homomorphism $f\colon {\widehat}T \to T$ of ribbon trees. For any $\alpha \in T$, define ${\widehat}\alpha \in {\widehat}T$ to be the element of $f^{-1}\{\alpha\}$ that is closest to the root. For any $\alpha \in T_{{{\operatorname}{int}}}$, define $$\begin{aligned}
{\widehat}T^\alpha \coloneqq f^{-1}\{\alpha\} \cup \bigl\{{\widehat}\beta \:|\: \beta \in {{\operatorname}{in}}(\alpha)\bigr\}, \quad \alpha_{{{\operatorname}{root}}}^{{\widehat}T^\alpha} \coloneqq {\widehat}\alpha.\end{aligned}$$ Then ${\widehat}T^\alpha$ is a subtree of ${\widehat}T$: since $f$ is a tree homomorphism, $f^{-1}\{\alpha\}$ is a subtree of ${\widehat}T$. For any $\beta \in {{\operatorname}{in}}(\alpha)$, we must either have $f({{{\operatorname}{out}}}({\widehat}\beta)) = f({\widehat}\beta) = \beta$ or $f({{{\operatorname}{out}}}({\widehat}\beta)) = {{{\operatorname}{out}}}(f({\widehat}\beta)) = \alpha$; by the definition of ${\widehat}\beta$, the latter equality must hold, so ${\widehat}T^\alpha$ is indeed a subtree of ${\widehat}T$. Furthermore, the ribbon tree structure of ${\widehat}T$ induces a ribbon tree structure on ${\widehat}T^\alpha$, so ${\widehat}T^\alpha$ is an RRT. I claim that ${\widehat}T^\alpha$ is stable and has leaves in bijection with ${{\operatorname}{in}}(\alpha)$. For any $\beta \in {{\operatorname}{in}}(\alpha)$, it follows immediately from the definition of ${\widehat}\beta$ that ${\widehat}\beta \in {\widehat}T^\alpha$ is a leaf. Next, fix $\beta \in f^{-1}\{\alpha\}$; we must show $\beta \in {\widehat}T^\alpha$ has at least 2 incoming neighbors. In fact, its incoming neighbors are in correspondence with the incoming neighbors of $\beta$ in ${\widehat}T$: an incoming neighbor $\gamma$ of $\beta$ either lies in $f^{-1}\{\alpha\}$, or satisfies $f(\gamma) \in {{\operatorname}{in}}(\alpha)$ and hence $\gamma = {\widehat}{f(\gamma)}$; in either case $\gamma$ belongs to ${\widehat}T^\alpha$. We may conclude that ${\widehat}T^\alpha$ is a stable RRT with leaves in correspondence with ${{\operatorname}{in}}(\alpha)$. We may now define $\sigma_T$: $$\begin{aligned}
\sigma_T \colon {\mathrm{cl}}(T) \to \prod_{\alpha\in T_{{{\operatorname}{int}}}} K_{\#\!{{\operatorname}{in}}(\alpha)}^{{{\operatorname}{tree}}}, \quad \sigma_T({\widehat}T) \coloneqq \bigl({\widehat}T^\alpha\bigr)_{\alpha \in T_{{{\operatorname}{int}}}}. \end{aligned}$$ It is clear from the definition of $\sigma_T$ that it is an inverse to the restriction $\gamma_T \colon \prod_{\alpha \in T_{{{\operatorname}{int}}}} K_{\#\!{{\operatorname}{in}}(\alpha)}^{{{\operatorname}{tree}}}\to {\mathrm{cl}}(T)$. It is also clear that $\gamma_T$ and $\sigma_T$ respect the partial orders on $\prod_{\alpha \in T_{{{\operatorname}{int}}}} K_{\#\!{{\operatorname}{in}}(\alpha)}^{{{\operatorname}{tree}}}$ and $K_r^{{{\operatorname}{tree}}}$.
In the following figure, we illustrate the definition of $\gamma_T$ in a simple example. On the left is $T$, which has three interior vertices, the incoming edges of which are colored red, blue, or green respectively. $\gamma_T$ acts by replacing the red, blue, and green corollas by RRTs in $K_3^{{{\operatorname}{tree}}}$, $K_4^{{{\operatorname}{tree}}}$, and $K_3^{{{\operatorname}{tree}}}$, respectively.
\[fig:gamma\_example\]
$\triangle$
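The grafting operation underlying $\gamma_T$ can also be sketched in code. This is a sketch under an assumed ad-hoc encoding (a ribbon tree is a nested tuple; leaves are integers in ribbon order); `plugs` supplies one RRT per interior vertex of `tree`, in preorder.

```python
# Sketch of grafting: replace the corolla at each interior vertex of
# `tree` by an RRT on its incoming edges, as gamma_T does.

def graft(tree, plugs, _idx=None):
    """`plugs` lists one RRT per interior vertex of `tree`, in preorder;
    the k-th leaf of a plug stands for the k-th incoming subtree of the
    vertex it replaces."""
    if _idx is None:
        _idx = [0]
    if isinstance(tree, int):
        return tree
    plug = plugs[_idx[0]]
    _idx[0] += 1
    # Recursively graft the subtrees hanging off this vertex.
    kids = [graft(child, plugs, _idx) for child in tree]

    def substitute(t):
        # Replace leaf k of the plug by the k-th grafted subtree.
        if isinstance(t, int):
            return kids[t - 1]
        return tuple(substitute(c) for c in t)

    return substitute(plug)
```

For example, plugging the tree `((1, 2), 3)` of $K_3^{{{\operatorname}{tree}}}$ into the 3-corolla `(1, 2, 3)` produces `((1, 2), 3)`, matching the fact that ${\mathrm{cl}}$ of the top face is all of $K_3$.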
We now turn to the proof of the [(abstract polytope)]{} property from Prop. \[prop:Kr\_main\]. Define ${\widehat}{K_r} \coloneqq K_r \cup \{F_{-1}\}$, where $F_{-1}$ is a formal minimal element with $d(F_{-1}) \coloneqq -1$. We first recall the notion of abstract polytope:
\[def:abstract\_polytope\] An [**[abstract polytope of rank $n \in {\mathbb{Z}}_{\geq -1}$]{}**]{} is a partially ordered set $P$ (whose elements are called [**[faces]{}**]{}) satisfying the properties [(extremal)]{}, [(flag-length)]{}, [(strongly connected)]{}, and [(diamond)]{}, defined below.
- [(extremal)]{} $P$ has a least and a greatest face, denoted $F_{-1}$ resp. $F_{{{\operatorname}{top}}}$.
- [(flag-length)]{} Every flag (i.e. maximal chain) of $P$ has length $n+1$, i.e. contains $n+2$ faces.
For $F, G \in P$ with $F \leq G$, recall that the closed interval $[F,G]$ is defined by $$\begin{aligned}
[F,G] \coloneqq \{H \in P \:|\: F \leq H \leq G\}. \end{aligned}$$ It follows from [(extremal)]{} and [(flag-length)]{} that we can endow $P$ with a rank function, where ${{{\operatorname}{rk}\:}}F$ is defined to be the rank of the poset $[F_{-1},F]$.
- [(strongly connected)]{} For every $F < G$ with ${{{\operatorname}{rk}\:}}G - {{{\operatorname}{rk}\:}}F \geq 3$ and for every $H, H' \in (F,G)$, there is a sequence $(H = H_1, H_2, \ldots, H_k = H')$ in $(F,G)$ such that $H_i$ and $H_{i+1}$ are adjacent for every $i$ (i.e., either $H_i \prec H_{i+1}$ or $H_{i+1} \prec H_i$, where we write $F_1 \prec F_2$ if $F_1 < F_2$ and there exists no $F_3$ with $F_1 < F_3 < F_2$).
- [(diamond)]{} For every $F < G$ with ${{{\operatorname}{rk}\:}}G - {{{\operatorname}{rk}\:}}F = 2$, the open interval $(F,G)$ contains exactly 2 elements.
$\triangle$
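To make the axioms concrete, the following sketch verifies the [(diamond)]{} condition for ${\widehat}{K_4}$ (the pentagon). The encoding is an assumption of ours, not notation from the text: a face of $K_4$ is recorded as the set of its proper 1-brackets (root and singletons implicit), ordered by reverse inclusion.

```python
from itertools import combinations

r = 4
# Proper brackets: consecutive subsets of {1,...,r} of size 2..r-1.
brackets = [frozenset(range(i, j + 1))
            for i in range(1, r + 1) for j in range(i, r + 1)
            if 2 <= j - i + 1 <= r - 1]

def compatible(c):
    # Pairwise nested or disjoint.
    return all(a <= b or b <= a or not (a & b)
               for a, b in combinations(c, 2))

# A face = a compatible set of proper brackets; more brackets = smaller face.
faces = [frozenset(c) for k in range(r - 1)
         for c in combinations(brackets, k) if compatible(c)]
BOT = "F_-1"  # the formal least face F_{-1}

def leq(F, G):
    if F == BOT:
        return True
    if G == BOT:
        return False
    return F >= G  # reverse inclusion

def rank(F):
    return -1 if F == BOT else (r - 2) - len(F)

all_faces = [BOT] + faces
```

With this data one can check that every rank-2 interval of ${\widehat}{K_4}$ has exactly two intermediate faces, and that the pentagon has 5 vertices and 5 edges.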
\[prop:Kr\_polytope\] For any $r \geq 2$, ${\widehat}{K_r}$ is an abstract polytope of dimension $r-2$.
- [(extremal)]{} The least face is the face $F_{-1}$ we have added to $K_r$ to form ${\widehat}{K_r}$. The greatest face (in ${\widehat}{K_r^{{{\operatorname}{br}}}}$) is the 1-bracketing $\bigl\{\{1,\ldots,r\},\{1\},\ldots,\{r\}\bigr\}$.
- [(flag-length)]{} We must show that if $T^0 < \cdots < T^\ell$ is a maximal chain in $K_r^{{{\operatorname}{tree}}}$, then $\ell = r-2$. By Lemmata \[lem:Kr\_dim\_props\] and \[lem:Kr\_move\_dim\], we have $0 \leq d(T^0) < \cdots < d(T^\ell) \leq r-2$. To prove the claim, we must show that every dimension between 0 and $r-2$ is represented. For any $T^i, T^{i+1}$, we must have $d(T^i) = d(T^{i+1}) - 1$: otherwise, there exists $T' \in K_r^{{{\operatorname}{tree}}}$ which can be obtained by performing a single move to $T^{i+1}$, and which satisfies $d(T^i) < d(T') < d(T^{i+1})$; this contradicts the maximality of the chain. Again by maximality, we must have $T^\ell = F_{{{\operatorname}{top}}}$. It remains to show $d(T^0) = 0$. Suppose for a contradiction that $d(T^0)$ is positive; then by Lemma \[lem:Kr\_dim\_props\], there exists $\alpha \in T^0_{{{\operatorname}{int}}}$ with $\#\!{{\operatorname}{in}}(\alpha) \geq 3$. It follows that we may perform a move to $T^0$, so there exists $T' \in K_r^{{{\operatorname}{tree}}}$ with $T' < T^0$, contradicting the maximality of this chain.
- [(strongly connected)]{} In this step we may assume $r \geq 4$, since otherwise [(strongly connected)]{} is vacuous. First, we show that ${\widehat}{K_r}$ is connected. It suffices to show that for any $a, b \geq 2$ with $a + b - 1 = r$ and $i$ with $1 \leq i \leq a$, there exists a path in ${\widehat}{K_r} \setminus F_{{{\operatorname}{top}}}$ between these two codimension-1 faces:
We produce such a path in the following four cases, which are exhaustive:
Next, we show that for any $T \in K_r^{{{\operatorname}{tree}}}$ with $d(T) \geq 2$, the interval $[F_{-1},T]$ is connected. By Def.-Lem. \[deflem:gammaT\], $[F_{-1},T]$ is isomorphic to $\{F_{-1}\} \cup \prod_{\alpha \in T_{{{\operatorname}{int}}}} K_{\#\!{{\operatorname}{in}}(\alpha)}^{{{\operatorname}{tree}}}$. The connectedness of ${\widehat}{K^{{{\operatorname}{tree}}}_s}$ for $s \geq 4$, the fact that $K^{{{\operatorname}{tree}}}_3$ is isomorphic to the face poset of an interval, and the inequality $\sum_{\alpha \in T_{{{\operatorname}{int}}}} (\#{{\operatorname}{in}}(\alpha)-2) = d(T) \geq 2$ imply that $\{F_{-1}\} \cup \prod_{\alpha \in T_{{{\operatorname}{int}}}} K_{\#\!{{\operatorname}{in}}(\alpha)}^{{{\operatorname}{tree}}}$ is connected. Finally, we show that for any ${\mathscr{B}},{\mathscr{B}}'$ with ${\mathscr{B}}' < {\mathscr{B}}$ and $d({\mathscr{B}}') \leq d({\mathscr{B}})-3$, the interval $[{\mathscr{B}}',{\mathscr{B}}]$ is connected. Extend the dimension function $d$ to $K_r^{{{\operatorname}{br}}}$ via the identification $K_r^{{{\operatorname}{br}}}\simeq K_r^{{{\operatorname}{tree}}}$. Then $d({\mathscr{B}}) = 2r - \#\!{\mathscr{B}}- 1$. To prove that $[{\mathscr{B}}',{\mathscr{B}}]$ is connected, it suffices to show that for any distinct ${\widetilde}{\mathscr{B}}^1, {\widetilde}{\mathscr{B}}^2 \in ({\mathscr{B}}',{\mathscr{B}})$ with $d({\widetilde}{\mathscr{B}}^1) = d({\widetilde}{\mathscr{B}}^2) = d({\mathscr{B}})-1$, there is a path from ${\widetilde}{\mathscr{B}}^1$ to ${\widetilde}{\mathscr{B}}^2$ in $({\mathscr{B}}',{\mathscr{B}})$. The inequality ${\widetilde}{\mathscr{B}}^j \geq {\mathscr{B}}'$ for $j \in \{1,2\}$ implies that ${\widehat}{\mathscr{B}}\coloneqq {\widetilde}{\mathscr{B}}^1 \cup {\widetilde}{\mathscr{B}}^2$ is a 1-bracketing. 
Moreover, it satisfies ${\widehat}{\mathscr{B}}\in [{\mathscr{B}}',{\mathscr{B}})$, and by the formula for the dimension of a 1-bracketing given above, it satisfies $d({\widehat}{\mathscr{B}}) = d({\mathscr{B}}) - 2$. By the hypothesis $d({\mathscr{B}}') \leq d({\mathscr{B}}) - 3$, ${\widehat}{\mathscr{B}}$ must therefore lie in $({\mathscr{B}}',{\mathscr{B}})$, so $({\widetilde}{\mathscr{B}}^1,{\widehat}{\mathscr{B}},{\widetilde}{\mathscr{B}}^2)$ is a path in $({\mathscr{B}}',{\mathscr{B}})$.
- [(diamond)]{} First, fix $T \in K_r^{{{\operatorname}{tree}}}$ with $d(T) = 1$; we must show that the open interval $(F_{-1},T)$ contains exactly two elements. Lemma \[lem:Kr\_dim\_props\] implies that every vertex in $T_{{{\operatorname}{int}}}$ has two incoming neighbors except for a single $\alpha$ with $\#\!{{\operatorname}{in}}(\alpha) = 3$. Denote the incoming neighbors of $\alpha$ by $(\beta_1,\beta_2,\beta_3)$. There are two possible moves that can be made at $\alpha$, by either splitting off $(\beta_1,\beta_2)$ or $(\beta_2,\beta_3)$. In fact, these are the only two moves that can be performed on $T$. Since $d(T) = 1$, $(F_{-1},T)$ contains two elements.
Next, fix ${\mathscr{B}}, {\mathscr{B}}' \in K_r^{{{\operatorname}{br}}}$ with $d({\mathscr{B}}') = d({\mathscr{B}}) - 2$. It follows from the definition of $d$ and Prop. \[prop:Kr\_iso\] that ${\mathscr{B}}'$ is obtained from ${\mathscr{B}}$ by adding two 1-brackets. The elements of the open interval $({\mathscr{B}}',{\mathscr{B}})$ are exactly the 1-bracketings obtained from ${\mathscr{B}}$ by adding just one of these two 1-brackets, so $({\mathscr{B}}',{\mathscr{B}})$ contains two elements.
Construction of the 2-associahedra Wn {#sec:2ass}
=====================================
In this section, we define the posets $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ (§\[ss:Wntree\_construction\]) and $W_{\mathbf{n}}^{{{\operatorname}{br}}}$ (§\[ss:Wnbr\_construction\]), then show that they are isomorphic (§\[ss:Wn\_iso\]). This allows us to define the 2-associahedron by $W_{\mathbf{n}}\coloneqq W_{\mathbf{n}}^{{{\operatorname}{tree}}}= W_{\mathbf{n}}^{{{\operatorname}{br}}}$.
The poset Wntree of stable tree-pairs {#ss:Wntree_construction}
-------------------------------------
We begin with the definition of $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$. It is rather technical, and we advise the reader to refer to Ex. \[ex:tree-pair\_examples\] while looking at this definition for the first time.
\[p:2T\] A *stable tree-pair of type ${\mathbf{n}}$* is a datum $2T = T_b {\stackrel}{f}{\to} T_s$, with $T_b, T_s, f$ described below:
- The *bubble tree* $T_b$ is an RRT whose edges are either solid or dashed, which must satisfy these properties:
- The vertices of $T_b$ are partitioned as $V(T_b) = V_{{{\operatorname}{comp}}}\sqcup V_{{{\operatorname}{seam}}}\sqcup V_{{{\operatorname}{mark}}}$, where:
- every $\alpha \in V_{{{\operatorname}{comp}}}$ has $\geq 1$ solid incoming edge, no dashed incoming edges, and either a dashed or no outgoing edge;
- every $\alpha \in V_{{{\operatorname}{seam}}}$ has $\geq 0$ dashed incoming edges, no solid incoming edges, and a solid outgoing edge; and
- every $\alpha \in V_{{{\operatorname}{mark}}}$ has no incoming edges and either a dashed or no outgoing edge.
We partition $V_{{{\operatorname}{comp}}}\eqqcolon V_{{{\operatorname}{comp}}}^1 \sqcup V_{{{\operatorname}{comp}}}^{\geq2}$ according to the number of incoming edges of a given vertex.
- ([stability]{}) If $\alpha$ is a vertex in $V_{{{\operatorname}{comp}}}^1$ and $\beta$ is its incoming neighbor, then $\#\!{{\operatorname}{in}}(\beta) \geq 2$; if $\alpha$ is a vertex in $V_{{{\operatorname}{comp}}}^{\geq2}$ and $\beta_1,\ldots,\beta_\ell$ are its incoming neighbors, then there exists $j$ with $\#\!{{\operatorname}{in}}(\beta_j) \geq 1$.
- The *seam tree* $T_s$ is an element of $K_r^{{{\operatorname}{tree}}}$.
- The *coherence map* is a map $f\colon T_b \to T_s$ of sets having these properties:
- $f$ sends root to root, and if $\beta \in {{\operatorname}{in}}(\alpha)$ in $T_b$, then either $f(\beta) \in {{\operatorname}{in}}(f(\alpha))$ or $f(\alpha) = f(\beta)$.
- $f$ contracts all dashed edges, and every solid edge whose terminal vertex is in $V_{{{\operatorname}{comp}}}^1$.
- For any $\alpha \in V_{{{\operatorname}{comp}}}^{\geq2}$, $f$ maps the incoming edges of $\alpha$ bijectively onto the incoming edges of $f(\alpha)$, compatibly with $<_\alpha$ and $<_{f(\alpha)}$.
- $f$ sends every element of $V_{{{\operatorname}{mark}}}$ to a leaf of $T_s$, and if $\lambda_i^{T_s}$ is the $i$-th leaf of $T_s$, then $f^{-1}\{\lambda_i^{T_s}\}$ contains $n_i$ elements of $V_{{{\operatorname}{mark}}}$, which we denote by $\mu_{i1}^{T_b},\ldots,\mu_{in_i}^{T_b}$.
We denote by $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$\[p:Wntree\] the set of isomorphism classes of stable tree-pairs of type ${\mathbf{n}}$. Here an isomorphism from $T_b {\stackrel}{f}{\to} T_s$ to $T_b' {\stackrel}{f'}{\to} T_s'$ is a pair of maps $\varphi_b\colon T_b \to T_b'$ and $\varphi_s\colon T_s \to T_s'$ that fit into a commutative square in the obvious way and that respect all the structure of the bubble trees and seam trees. $\triangle$
\[ex:tree-pair\_examples\] In the following figure we illustrate some of the notation that we have just introduced. We picture the same tree-pair (with $r = 5$, ${\mathbf{n}}= (1,1,4,1,0)$) three times, in each case indicating different data. In each case, $T_b$ is above and $T_s$ is below. On the left, we label the roots of $T_b$ and $T_s$, the leaves of $T_s$, and the elements of $V_{{{\operatorname}{mark}}}(T_b)$. In the middle, we indicate the coherence map $f\colon T_b \to T_s$: we color the edges of $T_s$, and use those same colors to show which edges in $T_b$ are identified with the various edges of $T_s$. Some edges in $T_b$ are contracted by $f$, which we indicate by using black. On the right, we show how the vertices of $T_b$ are partitioned into $V_{{{\operatorname}{mark}}}$, $V_{{{\operatorname}{seam}}}$, and $V_{{{\operatorname}{comp}}}$.
\[fig:tree-pair\_examples\]
$\triangle$
\[def:tree-pair\_dim\] For $2T$ a stable tree-pair, we define the *dimension* $d(2T) \in {\mathbb{Z}}$ like so: $$\begin{aligned}
\label{eq:tree-pair_dim}
d(2T) \coloneqq |{\mathbf{n}}| + r - \#\!V^1_{{{\operatorname}{comp}}}(T_b) - \#\!(T_s)_{{{\operatorname}{int}}}- 2.\end{aligned}$$ $\triangle$
Note that $W_1^{{{\operatorname}{tree}}}$ has a single element, the stable tree-pair $\bullet \to \bullet$; its dimension is zero. $\triangle$
\[lem:Wn\_dim\] Fix a stable tree-pair $2T \in W_{\mathbf{n}}$.
- The dimension can be re-expressed using this formula: $$\label{eq:2T_dim_reform}
d(2T) = \sum_{{\alpha \in V^1_{{{\operatorname}{comp}}}(T_b)}\atop{{{\operatorname}{in}}(\alpha) = (\beta)}} \bigl(\#\!{{\operatorname}{in}}(\beta) - 2\bigr)
+ \sum_{\alpha \in V^{\geq 2}_{{{\operatorname}{comp}}}(T_b)} \left(\Bigl(\sum_{\beta \in {{\operatorname}{in}}(\alpha)} \#\!{{\operatorname}{in}}(\beta)\Bigr)-1\right)
+ \sum_{\rho \in (T_s)_{{{\operatorname}{int}}}} \bigl(\#\!{{\operatorname}{in}}(\rho) - 2\bigr).$$
- If ${\mathbf{n}}\neq (1)$, the dimension satisfies the inequality $0 \leq d(2T) \leq |{\mathbf{n}}| + r - 3$.
<!-- -->
- Equation \[eq:2T\_dim\_reform\] is the result of substituting into \[eq:tree-pair\_dim\] the following identity: $$\begin{aligned}
\label{eq:2T_valence_sum}
\sum_{\alpha \in V_{{{\operatorname}{seam}}}(T_b)} \#\!{{\operatorname}{in}}(\alpha) = \#\!V_{{{\operatorname}{comp}}}(T_b) + |{\mathbf{n}}| - 1.
\end{aligned}$$ This follows by noting that $\sum_{\alpha \in V_{{{\operatorname}{seam}}}(T_b)} \#\!{{\operatorname}{in}}(\alpha)$ counts the elements of $V_{{{\operatorname}{comp}}}(T_b) \sqcup V_{{{\operatorname}{mark}}}(T_b)$ that are the incoming neighbor of some element of $V_{{{\operatorname}{seam}}}(T_b)$. This set is the complement of the root in $V_{{{\operatorname}{comp}}}(T_b) \sqcup V_{{{\operatorname}{mark}}}(T_b)$, since every such vertex other than the root has a seam vertex as its outgoing neighbor; it therefore has cardinality $\#\!V_{{{\operatorname}{comp}}}(T_b) + |{\mathbf{n}}| - 1$.
- The inequality $d(2T) \geq 0$ follows from \[eq:2T\_dim\_reform\] and the ([stability]{}) axiom in the definition of stable tree-pairs; the inequality $d(2T) \leq |{\mathbf{n}}| + r - 3$ follows immediately from \[eq:tree-pair\_dim\], since $\#\!V^1_{{{\operatorname}{comp}}}(T_b) \geq 0$ and $T_s$ has at least one interior vertex.
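As a sanity check, the two dimension formulas can be compared on the numeric data of a tree-pair. The following sketch uses an encoding of our own devising that records only the in-valences entering the formulas, not the trees themselves.

```python
# Compare the definition of d(2T) with its reformulation, using only
# in-valence data read off from a stable tree-pair.

def dim_def(n, r, num_v1_comp, num_ts_int):
    """Definition: d(2T) = |n| + r - #V^1_comp(T_b) - #(T_s)_int - 2."""
    return sum(n) + r - num_v1_comp - num_ts_int - 2

def dim_reform(v1_seam_valences, v2_seam_valences, ts_valences):
    """Reformulation: `v1_seam_valences` lists #in(beta) for the unique
    seam child beta of each vertex in V^1_comp; `v2_seam_valences` lists,
    for each vertex of V^{>=2}_comp, the in-valences of its seam
    children; `ts_valences` lists #in(rho) for interior rho of T_s."""
    return (sum(k - 2 for k in v1_seam_valences)
            + sum(sum(ks) - 1 for ks in v2_seam_valences)
            + sum(k - 2 for k in ts_valences))
```

For the top cell of $W_{200}$ (one component vertex carrying three seams with 2, 0, 0 marked points, $T_s$ the 3-corolla), both formulas give 2, the dimension of the pentagon.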
Next we define three types of “moves” that can be performed on stable tree-pairs. Examples of these moves are shown in Ex. \[ex:tree-pair\_moves\]. As we will show, each move decreases the dimension $d$ by one.
Fix a stable tree-pair $T_b {\stackrel}{f}{\to} T_s$. Here are the moves that may be applied:
- [*Type-1 moves:*]{} Fix $\alpha \in V_{{{\operatorname}{comp}}}(T_b)$ and $\beta \in {{\operatorname}{in}}(\alpha)$. Choose a consecutive subset $(\gamma_{p+1},\ldots,\gamma_{p+l}) \subset (\gamma_1,\ldots,\gamma_k) = {{\operatorname}{in}}(\beta)$ where $l$ satisfies the following condition (necessary to preserve stability):
- If $\#\!{{\operatorname}{in}}(\alpha) = 1$, then we require $2 \leq l < k$.
- If $\#\!{{\operatorname}{in}}(\alpha) \geq 2$, then we require $2 \leq l \leq k$.
The corresponding *type-1 move* consists of modifying the incoming edges of $\beta$ as shown here,
\[fig:2T\_move\_1\]
leaving $T_s$ unchanged, and modifying $f$ in the obvious way.
- [*Type-2 moves:*]{} Fix $\alpha \in (T_s)_{{{\operatorname}{int}}}$. Choose a consecutive subset $(\gamma_{p+1},\ldots,\gamma_{p+l})\subset(\gamma_1,\ldots,\gamma_k)={{\operatorname}{in}}(\alpha)$ where $l$ satisfies $2 \leq l < k$. For every ${\widetilde}\alpha \in V_{{{\operatorname}{comp}}}^{\geq 2}(T_b) \cap f^{-1}\{\alpha\}$ with ${{\operatorname}{in}}\bigl({\widetilde}\alpha\bigr) \eqqcolon (\beta_1,\ldots,\beta_k)$, choose $q \geq 0$ and partition ${\mathbf{a}}\coloneqq \bigl(\#\!{{\operatorname}{in}}(\beta_{p+i})\bigr)_{i=1}^{l}$ as ${\mathbf{a}}= \sum_{j=1}^q {\mathbf{b}}^j$, for ${\mathbf{b}}^1,\ldots,{\mathbf{b}}^q \in {\mathbb{Z}}^{l}\setminus\{{\mathbf{0}}\}$. The corresponding *type-2 move* consists of (a) \[type2a\] modifying the incoming edges of $\alpha$ as shown here,
\[fig:2T\_move\_2s\]
and (b) modifying the incoming edges of each ${\widetilde}\alpha$ as shown here,
\[fig:2T\_move\_2b\]
and modifying $f$ in the obvious way.
- [*Type-3 moves:*]{} Choose $\alpha \in V^{\geq2}_{{{\operatorname}{comp}}}(T_b)$, and write ${{\operatorname}{in}}(\alpha) \eqqcolon \{\beta_1,\ldots,\beta_k\}$. Choose $q \geq 2$, and partition ${\mathbf{a}}\coloneqq \bigl(\#\!{{\operatorname}{in}}(\beta_i)\bigr)_{i=1}^k$ as ${\mathbf{a}}= \sum_{j=1}^q {\mathbf{b}}^j$ for ${\mathbf{b}}^1,\ldots,{\mathbf{b}}^q \in {\mathbb{Z}}^k\setminus\{{\mathbf{0}}\}$. The corresponding [*type-3 move*]{} consists of modifying the incoming edges of $\alpha$ as shown here,
\[fig:2T\_move\_3\]
leaving $T_s$ unchanged, and modifying $f$ in the obvious way.
\[ex:tree-pair\_moves\] In the following figure, we illustrate the moves just introduced.
\[fig:tree\_move\_examples\]
On the left are the eleven tree-pairs comprising $W_{200}^{{{\operatorname}{tree}}}$ — suggestively overlaid on a pentagon, which encodes the poset structure defined just below. The three moves which we illustrate here can be thought of as going from the interior of the pentagon to the bottom red edge, resp. from the interior to the upper-right blue edge, resp. from the upper-right blue edge to the top mauve vertex. On the right we illustrate these moves, which are of type 1 resp. 3 resp. 2 (these numbers label the arrows corresponding to the moves). We color the portions of the tree-pairs that have been altered by the move. $\triangle$
\[lem:Wn\_moves\] If $2T'$ is the result of making a move of type 1, 2, or 3 to $2T$, then $d(2T') = d(2T)-1$.
Recall the definition of $d$: $$\begin{aligned}
d(2T) = |{\mathbf{n}}| + r - \#\!V_{{{\operatorname}{comp}}}^1(T_b) - \#\!(T_s)_{{{\operatorname}{int}}}- 2.\end{aligned}$$
- In a type-1 move, one new 1-seam component (i.e. one new vertex of $V^1_{{{\operatorname}{comp}}}(T_b)$) forms where the points collide, and $T_s$ does not change.
- In a type-2 move, no new 1-seam components form, and one new interior vertex of $T_s$ forms.
- In a type-3 move, one new 1-seam component forms (the former $k$-seam component), and the seam tree does not change.

In each case $\#\!V^1_{{{\operatorname}{comp}}}(T_b) + \#\!(T_s)_{{{\operatorname}{int}}}$ increases by one while $|{\mathbf{n}}|$ and $r$ are unchanged, so $d$ decreases by one.
\[def:Wn\_tree\] Define $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ as a poset by declaring $2T' < 2T$ if there is a finite sequence of moves that transforms $2T$ into $2T'$. The poset structure is well-defined by the same argument as in Def.-Lem. \[deflem:Krtree\_poset\]. $\triangle$
\[lem:WnKn\] For any $n \geq 1$, there is a poset isomorphism $W_n^{{{\operatorname}{tree}}}\simeq K_n^{{{\operatorname}{tree}}}$. There are also isomorphisms $W_{n,0}^{{{\operatorname}{tree}}}\simeq J_n \simeq W_{0,n}^{{{\operatorname}{tree}}}$, where $J_n$ is the $(n-1)$-dimensional multiplihedron.
Define a map $W_n^{{{\operatorname}{tree}}}\to K_n^{{{\operatorname}{tree}}}$ like so: given a stable tree-pair $T_b {\stackrel}{f}{\to} T_s$, send it to the result of collapsing all the solid edges in $T_b$, then converting all the dashed edges to solid ones. A straightforward check shows that this map is well-defined and respects the partial order. An inverse $K_n^{{{\operatorname}{tree}}}\to W_n^{{{\operatorname}{tree}}}$ is given like so: given a stable RRT $T$, convert its edges to dashed ones, then insert a solid edge at every interior vertex. Here is an illustration of this correspondence, where the stable tree-pair on the left lies in $W_6^{{{\operatorname}{tree}}}$ and the RRT on the right lies in $K_6^{{{\operatorname}{tree}}}$:
The identifications $W_{n,0}^{{{\operatorname}{tree}}}\simeq J_n \simeq W_{0,n}^{{{\operatorname}{tree}}}$ are explained in Rmk. 4.4 of version 2 of [@bw:compactness].
There are also poset isomorphisms $W^{{{\operatorname}{tree}}}_{\tiny \underbrace{0,\ldots,0,1,0,\ldots,0}_n} \simeq K^{{{\operatorname}{tree}}}_n$; we leave the details of these isomorphisms to the reader. $\triangle$
The poset Wnbr of 2-bracketings {#ss:Wnbr_construction}
-------------------------------
We now define the notion of a 2-bracketing, which will allow us to define the model $W_{\mathbf{n}}^{{{\operatorname}{br}}}$. These definitions are somewhat opaque, so we give some motivation after the definitions.
\[def:2bracket\] A *2-bracket of ${\mathbf{n}}$* is a pair ${{\mathbf{2B}}}= (B, (2B_i))$\[p:btB\] consisting of a 1-bracket $B \subset \{1,\ldots,r\}$ and a consecutive subset $2B_i \subset \{1,\ldots,n_i\}$ for every $i \in B$ such that at least one $2B_i$ is nonempty. We write ${{\mathbf{2B}}}' \subset {{\mathbf{2B}}}$ if $B' \subset B$ and $2B_i' \subset 2B_i$ for every $i \in B'$, and we define $\pi(B,(2B_i)) \coloneqq B$. $\triangle$
\[def:Wn\_br\] A *2-bracketing of ${\mathbf{n}}$* is a pair $({\mathscr{B}}, {2\mathscr{B}})$,\[p:sBstB\] where ${\mathscr{B}}$ is a 1-bracketing of $r$ and ${2\mathscr{B}}$ is a collection of 2-brackets of ${\mathbf{n}}$ that satisfies these properties:
- [(1-bracketing)]{} For every ${{\mathbf{2B}}}\in {2\mathscr{B}}$, $\pi({{\mathbf{2B}}})$ is contained in ${\mathscr{B}}$.
- [(2-bracketing)]{} Suppose that ${{\mathbf{2B}}}, {{\mathbf{2B}}}'$ are elements of ${2\mathscr{B}}$, and that for some $i_0 \in \pi({{\mathbf{2B}}}) \cap \pi({{\mathbf{2B}}}')$, the intersection $2B_{i_0} \cap 2B_{i_0}'$ is nonempty. Then either ${{\mathbf{2B}}}\subset {{\mathbf{2B}}}'$ or ${{\mathbf{2B}}}' \subset {{\mathbf{2B}}}$.
- [(root and marked points)]{} ${2\mathscr{B}}$ contains $(\{1,\ldots,r\},(\{1,\ldots,n_1\},\ldots,\{1,\ldots,n_r\}))$ and every 2-bracket of ${\mathbf{n}}$ of the form $(\{i\},(\{j\}))$.
For any $B_0 \in {\mathscr{B}}$, write ${2\mathscr{B}}_{B_0} \coloneqq \{(B,(2B_i)) \in {2\mathscr{B}}\:|\: B = B_0\}$.
- [(marked seams are unfused)]{}
- For any $B_0 \in {\mathscr{B}}$ and for any $i \in B_0$, we have $\bigcup_{{{\mathbf{2B}}}\in {2\mathscr{B}}_{B_0}} 2B_i = \{1,\ldots,n_i\}$.
- For every ${{\mathbf{2B}}}\in {2\mathscr{B}}_{B_0}$ for which there exists ${{\mathbf{2B}}}' \in {2\mathscr{B}}_{B_0}$ with ${{\mathbf{2B}}}' \subsetneq {{\mathbf{2B}}}$, and for every $i \in B_0$ and $j \in 2B_i$, there exists ${{\mathbf{2B}}}'' \in {2\mathscr{B}}_{B_0}$ with ${{\mathbf{2B}}}'' \subsetneq {{\mathbf{2B}}}$ and $2B''_i \ni j$.
- [(partial order)]{} For every $B_0 \in {\mathscr{B}}$, ${2\mathscr{B}}_{B_0}$ is endowed with a partial order with the following properties:
- Distinct ${{\mathbf{2B}}}, {{\mathbf{2B}}}' \in {2\mathscr{B}}_{B_0}$ are comparable if and only if $2B_i \cap 2B_i' = \emptyset$ for every $i \in B_0$.
- For any $i$ and $j<j'$, we have $(\{i\},(\{j\})) < (\{i\},(\{j'\}))$.
- For any 2-brackets ${{\mathbf{2B}}}^j \in {2\mathscr{B}}_{B_0}, {\widetilde}{{{\mathbf{2B}}}}^j \in {2\mathscr{B}}_{{\widetilde}B_0}, j \in \{1,2\}$ with ${\widetilde}{{{\mathbf{2B}}}^j}\subset {{\mathbf{2B}}}^j$, we have the implication $$\begin{aligned}
{{\mathbf{2B}}}^1 < {{\mathbf{2B}}}^2 \implies {\widetilde}{{{\mathbf{2B}}}}^1 < {\widetilde}{{{\mathbf{2B}}}}^2.\end{aligned}$$
We define $W_{\mathbf{n}}^{{{\operatorname}{br}}}$ to be the set of 2-bracketings of ${\mathbf{n}}$, with the poset structure defined by declaring $({\mathscr{B}}',{2\mathscr{B}}') < ({\mathscr{B}},{2\mathscr{B}})$ if the containments ${\mathscr{B}}' \supset {\mathscr{B}}$, ${2\mathscr{B}}' \supset {2\mathscr{B}}$ hold and at least one of these containments is proper. \[p:Wnbr\] $\triangle$
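The containment and nesting conditions on 2-brackets are easy to state in code. Below is a sketch under an assumed encoding (ours, not the text's): a 2-bracket is a pair `(B, marks)` with `B` a tuple of seam indices and `marks` a dict sending each `i` in `B` to the tuple of marked points in $2B_i$.

```python
def contained(tb1, tb2):
    """2B' subset 2B: B' subset B and 2B'_i subset 2B_i for i in B'."""
    B1, m1 = tb1
    B2, m2 = tb2
    return (set(B1) <= set(B2)
            and all(set(m1[i]) <= set(m2.get(i, ())) for i in B1))

def axiom_2bracketing(two_brackets):
    """(2-bracketing): if two 2-brackets share a marked point on some
    common seam, then one must contain the other."""
    for tb1 in two_brackets:
        for tb2 in two_brackets:
            (B1, m1), (B2, m2) = tb1, tb2
            overlap = any(set(m1[i]) & set(m2[i])
                          for i in set(B1) & set(B2))
            if overlap and not (contained(tb1, tb2)
                                or contained(tb2, tb1)):
                return False
    return True
```

Run on a fragment of the example below, the axiom holds; declaring two overlapping, non-nested 2-brackets fails it.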
\[ex:2bracketing\_examples\] Define a 2-bracketing $({\mathscr{B}},{2\mathscr{B}}) \in W_{11410}^{{{\operatorname}{br}}}$ like so: $$\begin{aligned}
{\mathscr{B}}\coloneqq &\{(1),(2),(3),(4),(5),(1,2),(3,4,5),(1,2,3,4,5)\},
\\
{2\mathscr{B}}\coloneqq &\Bigl\{\bigl((1),((a))\bigr),
\bigl((2),((b))\bigr),
\bigl((3),((c))\bigr),
\bigl((3),((d))\bigr),
\bigl((3),((e))\bigr),
\bigl((3),((f))\bigr),
\bigl((4),((g))\bigr),
\nonumber\\
&\quad \bigl((1,2),((a),())\bigr),
\bigl((1,2),((),(b))\bigr),
\bigl((1,2),((a),(b))\bigr),
\nonumber\\
&\quad \bigl((3,4,5),((c),(g),())\bigr),
\bigl((3,4,5),((d),(),())\bigr),
\bigl((3,4,5),((f,e),(),())\bigr),
\nonumber\\
&\quad \bigl((1,2,3,4,5),((a),(b),(c),(g),())\bigr),
\bigl((1,2,3,4,5),((),(),(f,e,d),(),())\bigr),
\nonumber\\
&\hspace{2.75in}
\bigl((1,2,3,4,5),((a),(b),(f,e,d,c),(g),())\bigr),
\nonumber\end{aligned}$$ subject to the partial orders defined by the following relations: $$\begin{gathered}
\bigl((3),((f))\bigr) < \bigl((3),((e))\bigr) < \bigl((3),((d))\bigr) < \bigl((3),((c))\bigr),
\\
\bigl((1,2),((a),())\bigr) < \bigl((1,2),((),(b))\bigr),
\nonumber\\
\bigl((3,4,5),((f,e),(),())\bigr) < \bigl((3,4,5),((d),(),())\bigr) < \bigl((3,4,5),((c),(g),())\bigr),
\nonumber\\
\bigl((1,2,3,4,5),((),(),(f,e,d),(),())\bigr) < \bigl((1,2,3,4,5),((a),(b),(c),(g),())\bigr).
\nonumber\end{gathered}$$ Here we have written each 2-bracket $(B,(2B_i))$ so that $2B_i$ is a subsequence of $(a)$, $(b)$, $(f,e,d,c)$, or $(g)$, for $i = 1,2,3,4,5$; this alternate notation is easier to parse in examples.
The presentation of $({\mathscr{B}},{2\mathscr{B}})$ just given is obviously cumbersome. It is more convenient to depict 2-bracketings in the following pictorial format:
\[fig:2-bracketing\_example\]
Here the 1-brackets in ${\mathscr{B}}$ are shown on the bottom row. The 2-brackets are shown above the dividing line. It is important to note that the 2-brackets come with a width, which indicates the 1-brackets they map to under $\pi$, and this is incorporated in the picture. Moreover, the partial orders are reflected like so: for ${{\mathbf{2B}}}_1, {{\mathbf{2B}}}_2 \in {2\mathscr{B}}_{B_0}$, the inequality ${{\mathbf{2B}}}_1>{{\mathbf{2B}}}_2$ holds if and only if ${{\mathbf{2B}}}_1$ appears above ${{\mathbf{2B}}}_2$. We have not depicted the 1-bracket $(1,2,3,4,5)$, the 2-bracket $\bigl((1,2,3,4,5),((a),(b),(f,e,d,c),(g),())\bigr)$, or the 1-brackets resp. 2-brackets of the form $(i)$ resp. $\bigl((i),((j))\bigr)$: these must be included in $({\mathscr{B}},{2\mathscr{B}})$ according to the [(root and leaves)]{} resp. [(root and marked points)]{} axioms, so it would not add any information to include them in the picture. $\triangle$
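The first [(partial order)]{} axiom can likewise be checked mechanically on the example above. This is a sketch under an assumed encoding: each 2-bracket in ${2\mathscr{B}}_{B_0}$ is recorded as the tuple of its mark-tuples, and the order as a set of pairs `(x, y)` meaning `x < y`.

```python
def comparable_iff_disjoint(elements, order):
    """Check the first (partial order) axiom: two distinct 2-brackets in
    2B_{B_0} are comparable exactly when their mark sets are disjoint on
    every seam."""
    for x in elements:
        for y in elements:
            if x == y:
                continue
            disjoint = all(not (set(a) & set(b)) for a, b in zip(x, y))
            comparable = (x, y) in order or (y, x) in order
            if disjoint != comparable:
                return False
    return True
```

On ${2\mathscr{B}}_{(3,4,5)}$ from the example, whose three 2-brackets are pairwise disjoint and totally ordered, the check passes; declaring two overlapping 2-brackets comparable fails it.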
In this example, we discuss several invalid variants of the (valid) 2-bracketing from the last example. Consider the five supposed 2-bracketings in the following figure:
\[fig:2-bracketing\_nonexamples\]
Here is why these are invalid 2-bracketings, from left to right:
- We have deleted $(1,2)$ from ${\mathscr{B}}$. As a result, <span style="font-variant:small-caps;">(1-bracketing)</span> is not satisfied.
- We have modified ${2\mathscr{B}}$ by replacing $\bigl((3,4,5),((d),(),())\bigr)$ with $\bigl((3,4,5),((e,d),(),())\bigr)$. As a result, <span style="font-variant:small-caps;">(2-bracketing)</span> is not satisfied.
- Here we have removed the 2-bracket $\bigl((3,4,5),((c),(g),())\bigr)$ from ${2\mathscr{B}}$. As a result, we have $$\begin{aligned}
\bigcup_{{{\mathbf{2B}}}\in {2\mathscr{B}}_{(3,4,5)}} 2B_3 = (f,e,d) \subsetneq (f,e,d,c),
\qquad
\bigcup_{{{\mathbf{2B}}}\in {2\mathscr{B}}_{(3,4,5)}} 2B_4 = () \subsetneq (g),\end{aligned}$$ so the first part of <span style="font-variant:small-caps;">(marked seams are unfused)</span> is violated.
- We have removed $\bigl((1,2),((a),())\bigr)$ from ${2\mathscr{B}}$. This violates the second part of <span style="font-variant:small-caps;">(marked seams are unfused)</span>: in the notation of that condition, set ${{\mathbf{2B}}}\coloneqq \bigl((1,2),((a),(b))\bigr)$, ${{\mathbf{2B}}}' \coloneqq \bigl((1,2),((),(b))\bigr)$, $i \coloneqq 1$, and $j \coloneqq a$.
- In the fifth non-example, we have modified the partial order on ${2\mathscr{B}}_{(1,2,3,4,5)}$ by declaring $$\begin{aligned}
\bigl((3,4,5),((d),(),())\bigr) < \bigl((3,4,5),((f,e),(),())\bigr) < \bigl((3,4,5),((c),(g),())\bigr).\end{aligned}$$ This, together with the inequality $\bigl((3),((e))\bigr) < \bigl((3),((d))\bigr)$, contradicts the third part of <span style="font-variant:small-caps;">(partial order)</span>.
$\triangle$
Recall from §\[ss:motivation\] that a 2-bracketing is intended to indicate the bubbling structure of a nodal witch curve. That is, the 1-bracketing ${\mathscr{B}}$ indicates how the seams have collided; each 2-bracket ${{\mathbf{2B}}}\in {2\mathscr{B}}$ corresponds to a bubble in the nodal witch curve, with $\pi({{\mathbf{2B}}})$ indicating the seams present on the bubble and the fashion in which these seams have collided, and $2B_i$ indicating the marked points which appear on the $i$-th seam and which are either on the present bubble or appear above this bubble (i.e., further from the main component). The properties [(1-bracketing)]{}, [(2-bracketing)]{}, and [(root and marked points)]{} are straightforward enough: [(1-bracketing)]{} says that the collisions of seams on the various bubbles in the tree are controlled by a single 1-bracketing; [(2-bracketing)]{} says that if a single marked point appears above two different bubbles, then one of these bubbles must be above the other; and [(root and marked points)]{} is a consequence of the fact that the root corresponds to the 2-bracket $(\{1,\ldots,r\},(\{1,\ldots,n_1\},\ldots,\{1,\ldots,n_r\}))$ and that the $j$-th marked point on the $i$-th seam corresponds to the 2-bracket $(\{i\},(\{j\}))$. [(marked seams are unfused)]{} guarantees that marked points may only appear on unfused seams, which is a result of the fact that in our putative compactification ${\overline}{2{\mathcal{M}}}_{\mathbf{n}}$, when a collection of seams collide, wherever there is a marked point at the instant of this collision, a bubble must form. Finally, [(partial order)]{} reflects the fact that in a bubble tree, on each seam of each bubble there is an ordering of the marked and nodal points. $\triangle$
We prove that $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ and $W_{\mathbf{n}}^{{{\operatorname}{br}}}$ coincide {#ss:Wn_iso}
-----------------------------------
In this subsection we finally define $W_{\mathbf{n}}$, as well as the forgetful map $W_{\mathbf{n}}\to K_r$.
\[def:Wn\] \[p:forgetful\] Define $W_{\mathbf{n}}\coloneqq W_{\mathbf{n}}^{{{\operatorname}{tree}}}= W_{\mathbf{n}}^{{{\operatorname}{br}}}$. \[p:Wn\] The forgetful map $\pi\colon W_{\mathbf{n}}\to K_r$ is defined in the two models like so: $\pi^{{{\operatorname}{tree}}}\colon W_{\mathbf{n}}^{{{\operatorname}{tree}}}\to K_r^{{{\operatorname}{tree}}}$ sends $T_b {\stackrel}{f}{\to} T_s$ to $T_s$, and $\pi^{{{\operatorname}{br}}}\colon W_{\mathbf{n}}^{{{\operatorname}{br}}}\to K_r^{{{\operatorname}{br}}}$ sends $({\mathscr{B}},{2\mathscr{B}})$ to ${\mathscr{B}}$.
We need to show that $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ and $W_{\mathbf{n}}^{{{\operatorname}{br}}}$ are isomorphic posets, and that this isomorphism intertwines $\pi^{{{\operatorname}{tree}}}$ and $\pi^{{{\operatorname}{br}}}$. The isomorphism $W_{\mathbf{n}}^{{{\operatorname}{tree}}}\simeq W_{\mathbf{n}}^{{{\operatorname}{br}}}$ is exactly the content of Thm. \[thm:iso\] below, and it is evident from the definition of this isomorphism that the two forgetful maps are intertwined.
We now turn to the proof of the main theorem of this section.
\[thm:iso\] For any $r\geq 1$ and ${\mathbf{n}}\in {\mathbb{Z}}_{\geq0}^r\setminus\{{\mathbf{0}}\}$, $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ and $W_{\mathbf{n}}^{{{\operatorname}{br}}}$ are isomorphic posets.
We construct a bijection $2\nu\colon W_{\mathbf{n}}^{{{\operatorname}{tree}}}\to W_{\mathbf{n}}^{{{\operatorname}{br}}}$ in Def.-Lem. \[deflem:Wn\_models\_iso\] and show that it respects the partial orders in Lemma \[lem:Wn\_models\_orders\].
The notion of a $2T$-bracket will be central in the construction of $2\nu\colon W_{\mathbf{n}}^{{{\operatorname}{tree}}}\to W_{\mathbf{n}}^{{{\operatorname}{br}}}$ in Def.-Lem. \[deflem:Wn\_models\_iso\].
\[def:2T\_bracket\] Fix a stable tree-pair $2T = T_b {\stackrel}{f}{\to} T_s \in W_{\mathbf{n}}^{{{\operatorname}{tree}}}$. A *$2T$-bracket* is a 2-bracket ${{\mathbf{2B}}}= (B,(2B_i))$ of ${\mathbf{n}}$ with the property that for some $\alpha \in V_{{{\operatorname}{comp}}}(T_b) \sqcup V_{{{\operatorname}{mark}}}(T_b)$, we have $$\begin{aligned}
B = B(f(\alpha)), \quad 2B_i = \bigl\{j \:|\: \mu_{ij}^{T_b} \in (T_s)_\alpha\bigr\}.\end{aligned}$$ We denote this bracket by ${{\mathbf{2B}}}(\alpha) = (B(f(\alpha)),(2B_i(\alpha)))$.\[p:btBalpha\] $\triangle$
Note that [(stability)]{} implies that $\alpha \in V_{{{\operatorname}{comp}}}(T_b) \sqcup V_{{{\operatorname}{mark}}}(T_b)$ is uniquely determined by ${{\mathbf{2B}}}(\alpha)$. We denote the vertex corresponding to ${{\mathbf{2B}}}$ by $\alpha({{\mathbf{2B}}})$. \[p:alpha\_2B\]
With this preparation, we are now ready to define the bijection $2\nu\colon W_{\mathbf{n}}^{{{\operatorname}{tree}}}\to W_{\mathbf{n}}^{{{\operatorname}{br}}}$. This definition is rather technical, so we advise the reader to consult Ex. \[ex:2nu\_example\] while reading this definition.
\[deflem:Wn\_models\_iso\] \[p:2nu\] Define a map $2\nu\colon W_{\mathbf{n}}^{{{\operatorname}{tree}}}\to W_{\mathbf{n}}^{{{\operatorname}{br}}}$ to send a stable tree-pair $2T = T_b {\stackrel}{f}{\to} T_s$ to $({\mathscr{B}}(2T),{2\mathscr{B}}(2T))$, where ${\mathscr{B}}(2T) \coloneqq \nu(T_s)$ and where ${2\mathscr{B}}(2T)$ is the set of $2T$-brackets, with the partial order on ${2\mathscr{B}}(2T)_{B(\beta)}, \beta \in T_s$ defined like so: fix ${\widetilde}\beta_1, {\widetilde}\beta_2 \in (V_{{{\operatorname}{comp}}}(T_b) \sqcup V_{{{\operatorname}{mark}}}(T_b)) \cap f^{-1}\{\beta\}$. If ${{\mathbf{2B}}}({\widetilde}\beta_1), {{\mathbf{2B}}}({\widetilde}\beta_2)$ have $2B_i({\widetilde}\beta_1) \cap 2B_i({\widetilde}\beta_2) \neq \emptyset$ for some $i \in B(\beta)$, then define ${{\mathbf{2B}}}({\widetilde}\beta_1), {{\mathbf{2B}}}({\widetilde}\beta_2)$ to be incomparable. Otherwise, define \[p:alpha\_triple\] $$\begin{aligned}
\gamma \coloneqq \alpha(\alpha_{{{\operatorname}{root}}}^{T_b},{\widetilde}\beta_1,{\widetilde}\beta_2) \coloneqq [\alpha_{{{\operatorname}{root}}}^{T_b},{\widetilde}\beta_1] \cap [{\widetilde}\beta_1,{\widetilde}\beta_2] \cap [{\widetilde}\beta_2,\alpha_{{{\operatorname}{root}}}^{T_b}] \in V_{{{\operatorname}{seam}}}(T_b)\end{aligned}$$ as in §D.2, [@ms:jh], and for $j \in \{1,2\}$, define $\delta_j$ to be the element of ${{\operatorname}{in}}(\gamma)$ with ${\widetilde}\beta_j \in (T_b)_{\gamma\delta_j}$. Now define the order on ${{\mathbf{2B}}}({\widetilde}\beta_1), {{\mathbf{2B}}}({\widetilde}\beta_2) \in {2\mathscr{B}}(2T)_{B(\beta)}$ like so:
- if $\delta_1 <_\gamma \delta_2$, then ${{\mathbf{2B}}}({\widetilde}\beta_1) < {{\mathbf{2B}}}({\widetilde}\beta_2)$;
- if $\delta_2 <_\gamma \delta_1$, then ${{\mathbf{2B}}}({\widetilde}\beta_2) < {{\mathbf{2B}}}({\widetilde}\beta_1)$.
Then $2\nu$ is bijective.
Throughout this proof we assume ${\mathbf{n}}\neq (1)$, since in this case the bijectivity of $2\nu$ holds trivially.
*Step 1: If $T$ is an RRT and $\beta_1,\beta_2$ are any distinct non-root vertices, then $$\begin{aligned}
\gamma \coloneqq \alpha(\alpha_{{{\operatorname}{root}}},\beta_1,\beta_2) \coloneqq [\alpha_{{{\operatorname}{root}}},\beta_1] \cap [\beta_1,\beta_2] \cap [\beta_2,\alpha_{{{\operatorname}{root}}}]\end{aligned}$$ is the node furthest from the root satisfying $\beta_1,\beta_2 \in T_\gamma$.*
Define $\Sigma$ to consist of those vertices $\delta$ of $T$ satisfying $\beta_1,\beta_2 \in T_\delta$. The inclusion $\gamma \in [\alpha_{{{\operatorname}{root}}},\beta_j]$ for $j \in \{1,2\}$ implies $\beta_j \in T_\gamma$, so $\gamma$ is an element of $\Sigma$. Any two elements $\delta_1,\delta_2 \in \Sigma$ have the property that either $\delta_1 \in [\delta_2,\alpha_{{{\operatorname}{root}}}]$ or $\delta_2 \in [\delta_1,\alpha_{{{\operatorname}{root}}}]$, since both $\delta_1$ and $\delta_2$ lie in $[\alpha_{{{\operatorname}{root}}},\beta_1]$. Suppose that $\delta_1, \delta_2$ are distinct elements of $\Sigma$ and that $\delta_1$ lies in $[\delta_2,\alpha_{{{\operatorname}{root}}}]$. Since $\beta_1,\beta_2$ lie in $T_{\delta_1}$ and $\delta_1$ lies in $[\delta_2,\alpha_{{{\operatorname}{root}}}]$, any path from $\beta_1$ to $\beta_2$ that passes through $\delta_1$ must pass through $\delta_2$ more than once; therefore $\delta_1$ does not lie in $[\beta_1,\beta_2]$, which implies $\gamma \neq \delta_1$. It follows that $\gamma$ is the (unique) element of $\Sigma$ that is furthest from the root.
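Step 1 characterizes $\gamma$ as the vertex furthest from the root whose subtree contains both $\beta_1$ and $\beta_2$. For a rooted tree stored as parent pointers, this vertex is computable directly from the two root paths; the sketch below (our own, with hypothetical names, not from the paper) implements exactly that characterization.

```python
def root_path(parent, v):
    """Path from v down to the root, following parent pointers."""
    path = [v]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path

def meet(parent, b1, b2):
    """Vertex furthest from the root whose subtree contains b1 and b2."""
    p1, p2 = root_path(parent, b1), root_path(parent, b2)
    common = set(p1) & set(p2)
    # both root paths end at the root, so `common` is nonempty; the first
    # common vertex encountered along p1 is the deepest one
    return next(v for v in p1 if v in common)
```

For instance, in the tree with root `'r'`, children `'a'`, `'b'` of the root, and children `'c'`, `'d'` of `'a'`, the meet of `'c'` and `'d'` is `'a'`, while the meet of `'c'` and `'b'` is `'r'`.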
*Step 2: If $2T$ is a stable tree-pair of type ${\mathbf{n}}$, then $2\nu(2T)$ is a 2-bracketing of ${\mathbf{n}}$.*
As discussed in the proof of Prop. \[prop:Kr\_iso\], ${\mathscr{B}}(2T) = \nu(T_s)$ is a 1-bracketing of $r$. For any $(B,(2B_i)) \in {2\mathscr{B}}(2T)$, it is clear that for every $i$, $2B_i \subset \{1,\ldots,n_i\}$ is consecutive. The [(stability)]{} axiom implies that there exists $i$ for which $2B_i$ is nonempty, so every element of ${2\mathscr{B}}(2T)$ is a 2-bracket.
Next, we justify two implicit assertions in the definition of $({\mathscr{B}}(2T), {2\mathscr{B}}(2T))$. Specifically, we implicitly asserted (1) that for $\beta \in V(T_s)$ and ${\widetilde}\beta_1,{\widetilde}\beta_2 \in (V_{{{\operatorname}{comp}}}(T_b) \sqcup V_{{{\operatorname}{mark}}}(T_b)) \cap f^{-1}\{\beta\}$ with the property that $2B_i({\widetilde}\beta_1) \cap 2B_i({\widetilde}\beta_2) = \emptyset$ for every $i \in B(\beta)$, $\gamma \coloneqq \alpha(\alpha_{{{\operatorname}{root}}}^{T_b},{\widetilde}\beta_1,{\widetilde}\beta_2)$ lies in $V_{{{\operatorname}{seam}}}(T_b)$; and (2) that we have defined a partial order on every ${2\mathscr{B}}(2T)_{B(\beta)}$.
- Suppose for a contradiction that $\gamma \in V_{{{\operatorname}{comp}}}(T_b) \sqcup V_{{{\operatorname}{mark}}}(T_b)$, and recall from Step 1 that $\gamma$ can be interpreted as the node furthest from the root with ${\widetilde}\beta_1,{\widetilde}\beta_2 \in (T_b)_\gamma$. The assumption on ${\widetilde}\beta_1,{\widetilde}\beta_2$ implies that they are not the same vertex, hence $\gamma$ cannot lie in $V_{{{\operatorname}{mark}}}(T_b)$; therefore $\gamma \in V_{{{\operatorname}{comp}}}(T_b)$. For $j \in \{1,2\}$, define ${\epsilon}_j$ to be the element of ${{\operatorname}{in}}(\gamma)$ with ${\widetilde}\beta_j \in (T_b)_{\gamma{\epsilon}_j}$. By the definition of $\gamma$, ${\epsilon}_1$ and ${\epsilon}_2$ cannot coincide. In particular, $\#\!{{\operatorname}{in}}(\gamma) \geq 2$, so $f$ must map the incoming edges of $\gamma$ bijectively onto the incoming edges of $f(\gamma)$. Therefore $f({\epsilon}_1)$ and $f({\epsilon}_2)$ are distinct elements of ${{\operatorname}{in}}(f(\gamma))$. A given edge in $T_b$ is either contracted by $f$ or mapped to an edge in an orientation-preserving fashion, so we must have $f({\widetilde}\beta_1) \neq f({\widetilde}\beta_2)$, in contradiction to the assumption $f({\widetilde}\beta_1) = \beta = f({\widetilde}\beta_2)$. Therefore our implicit assertion $\gamma \in V_{{{\operatorname}{seam}}}(T_b)$ was justified.
- If $2B_i({\widetilde}\beta_1) \cap 2B_i({\widetilde}\beta_2) = \emptyset$ for every $i \in B(\beta)$, then ${\widetilde}\beta_1$ and ${\widetilde}\beta_2$ must be distinct; therefore our relation is antireflexive. Next, fix ${\widetilde}\beta_1,{\widetilde}\beta_2,{\widetilde}\beta_3$ with ${{\mathbf{2B}}}({\widetilde}\beta_1) < {{\mathbf{2B}}}({\widetilde}\beta_2)$ and ${{\mathbf{2B}}}({\widetilde}\beta_2) < {{\mathbf{2B}}}({\widetilde}\beta_3)$. Define $$\begin{aligned}
\gamma_{ij} \coloneqq \alpha(\alpha_{{{\operatorname}{root}}}^{T_b},{\widetilde}\beta_i,{\widetilde}\beta_j), \quad i,j \in \{1,2,3\}.\end{aligned}$$ Then we have either (a) $\gamma_{13} = \gamma_{12}$ and $\gamma_{23} \in (T_b)_{\gamma_{13}}$ or (b) $\gamma_{13} = \gamma_{23}$ and $\gamma_{12} \in (T_b)_{\gamma_{13}}$. (Indeed, $\gamma_{12}$ and $\gamma_{23}$ are both in $[{\widetilde}\beta_2,\alpha_{{{\operatorname}{root}}}^{T_b}]$, so either $\gamma_{12} \in (T_b)_{\gamma_{23}}$ or $\gamma_{23} \in (T_b)_{\gamma_{12}}$. Suppose $\gamma_{12} \in (T_b)_{\gamma_{23}}$. Then ${\widetilde}\beta_1,{\widetilde}\beta_3 \in (T_b)_{\gamma_{23}}$. Moreover, $\gamma_{23}$ is the furthest vertex from the root with this property: if there exists $\zeta \in {{\operatorname}{in}}(\gamma_{23})$ such that $(T_b)_\zeta$ contains ${\widetilde}\beta_1$ and ${\widetilde}\beta_3$, then $(T_b)_\zeta$ also contains $\gamma_{12}$ and therefore ${\widetilde}\beta_2$, contradicting the fact that $\gamma_{23}$ is the furthest vertex from the root with ${\widetilde}\beta_2, {\widetilde}\beta_3 \in (T_b)_{\gamma_{23}}$.) Suppose (a) holds. If $\gamma_{12} = \gamma_{13} = \gamma_{23}$, then there are $\delta_1,\delta_2,\delta_3 \in {{\operatorname}{in}}(\gamma_{13})$ with ${\widetilde}\beta_j \in (T_b)_{\delta_j}$ for all $j$. By hypothesis, $\delta_1 <_{\gamma_{13}} \delta_2$ and $\delta_2<_{\gamma_{13}}\delta_3$, hence $\delta_1 <_{\gamma_{13}} \delta_3$, hence ${{\mathbf{2B}}}({\widetilde}\beta_1) < {{\mathbf{2B}}}({\widetilde}\beta_3)$ as desired. On the other hand, suppose (a) holds and $\gamma_{23} \in (T_b)_{\gamma_{13}} \setminus \gamma_{13}$. Define $\delta_1, \delta_{23} \in {{\operatorname}{in}}(\gamma_{13})$ by ${\widetilde}\beta_1 \in (T_b)_{\delta_1}$, $\gamma_{23} \in (T_b)_{\delta_{23}}$. 
The hypothesis ${{\mathbf{2B}}}({\widetilde}\beta_1) < {{\mathbf{2B}}}({\widetilde}\beta_2)$ implies $\delta_1 <_{\gamma_{13}} \delta_{23}$; since ${\widetilde}\beta_3$ also lies in $(T_b)_{\delta_{23}}$, it follows that ${{\mathbf{2B}}}({\widetilde}\beta_1) < {{\mathbf{2B}}}({\widetilde}\beta_3)$, as desired. A similar argument applies if (b) holds, so our putative partial order is transitive.
Finally, we verify the axioms of a 2-bracketing.
- [(1-bracketing)]{} Fix ${{\mathbf{2B}}}= (B,(2B_i)) \in {2\mathscr{B}}(2T)$ and set $\alpha \coloneqq \alpha({{\mathbf{2B}}})$. Then $B = B(f(\alpha))$ is the set of indices of the leaves lying above $f(\alpha)$ in $T_s$, and every such set is a 1-bracket in ${\mathscr{B}}(2T) = \nu(T_s)$; therefore $B \in {\mathscr{B}}(2T)$.
- [(2-bracketing)]{} Suppose that ${{\mathbf{2B}}}, {{\mathbf{2B}}}' \in {2\mathscr{B}}(2T)$ have the property that for some $i_0 \in B \cap B'$, $2B_{i_0} \cap 2B_{i_0}' \neq \emptyset$, and denote $\alpha \coloneqq \alpha({{\mathbf{2B}}}), \alpha' \coloneqq \alpha({{\mathbf{2B}}}')$. Choose $j \in 2B_{i_0} \cap 2B_{i_0}'$. By assumption, the path from $\mu_{i_0j}^{T_b}$ to $\alpha_{{{\operatorname}{root}}}^{T_b}$ passes through both $\alpha$ and $\alpha'$, so we must either have $\alpha \in (T_b)_{\alpha'}$ or $\alpha' \in (T_b)_\alpha$. In the first case we must have ${{\mathbf{2B}}}\subset {{\mathbf{2B}}}'$, and similarly in the second case.
- [(root and marked points)]{} Since $f(\alpha_{{{\operatorname}{root}}}^{T_b}) = \alpha_{{{\operatorname}{root}}}^{T_s}$, the 2-bracket corresponding to $\alpha_{{{\operatorname}{root}}}^{T_b}$ is $\bigl(\{1,\ldots,r\},(\{1,\ldots,n_1\},\ldots,\{1,\ldots,n_r\})\bigr)$. On the other hand, the 2-bracket corresponding to $\mu_{ij}^{T_b}$ is $(\{i\},(\{j\}))$.
- [(marked seams are unfused)]{}
- Fix $\rho \in T_s$, $i \in B(\rho)$, and $j \in \{1,\ldots,n_i\}$; we must produce $\alpha_0 \in V_{{{\operatorname}{comp}}}(T_b) \cap f^{-1}\{\rho\}$ with $2B_i(\alpha_0) \ni j$. Choose a path from $\mu^{T_b}_{ij}$ to $\alpha_{{{\operatorname}{root}}}^{T_b}$. By examining the image of this path in $T_s$, we see that some element in the path must lie in $V_{{{\operatorname}{comp}}}(T_b)\cap f^{-1}\{\rho\}$, and we can define $\alpha_0$ to be this element.
- Fix $B(\rho) \in {\mathscr{B}}(2T)$; ${{\mathbf{2B}}}(\alpha), {{\mathbf{2B}}}(\alpha') \in {2\mathscr{B}}(2T)_{B(\rho)}$ with ${{\mathbf{2B}}}(\alpha') \subsetneq {{\mathbf{2B}}}(\alpha)$; $i \in B(\rho)$; and $j \in 2B_i(\alpha) \setminus 2B_i(\alpha')$. We must produce ${{\mathbf{2B}}}(\alpha'') \in {2\mathscr{B}}_{B(\rho)}$ with ${{\mathbf{2B}}}(\alpha'') \subsetneq {{\mathbf{2B}}}(\alpha)$ and $2B_i(\alpha'') \ni j$. The containment ${{\mathbf{2B}}}(\alpha') \subsetneq {{\mathbf{2B}}}(\alpha)$ and the inclusions ${{\mathbf{2B}}}(\alpha),{{\mathbf{2B}}}(\alpha') \in {2\mathscr{B}}(2T)_{B(\rho)}$ imply $\alpha \in V_{{{\operatorname}{comp}}}^1(T_b)$. Denote the incoming neighbor of $\alpha$ by $\beta$; we may now choose $\alpha'' \in {{\operatorname}{in}}(\beta)$ to have the property that $(T_b)_{\alpha''}$ includes $\mu_{ij}^{T_b}$.
- [(partial order)]{} Earlier we endowed every ${2\mathscr{B}}_{B(\beta)}$, $\beta \in T_s$, with a partial order.
- It is an immediate consequence of our definition of the partial order on ${2\mathscr{B}}_{B(\beta)}$ that ${{\mathbf{2B}}}({\widetilde}\beta_1), {{\mathbf{2B}}}({\widetilde}\beta_2)$ are comparable if and only if $2B_i({\widetilde}\beta_1) \cap 2B_i({\widetilde}\beta_2) = \emptyset$ for every $i \in B(\beta)$.
- For any $i \in \{1,\ldots,r\}$ and $j,j'$ with $1 \leq j < j' \leq n_i$, it is clear that $\bigl(\{i\},(\{j\})\bigr) < \bigl(\{i\},(\{j'\})\bigr)$ in the partial order on ${2\mathscr{B}}(2T)_{\{i\}}$.
- Fix ${{\mathbf{2B}}}(\alpha^j) \in {2\mathscr{B}}(2T)_{B(\rho)}$, ${{\mathbf{2B}}}({\widetilde}\alpha^j) \in {2\mathscr{B}}(2T)_{B({\widetilde}\rho)}$ for $j \in \{1,2\}$ with ${{\mathbf{2B}}}({\widetilde}\alpha^j) \subset {{\mathbf{2B}}}(\alpha^j)$ and ${{\mathbf{2B}}}(\alpha^1) < {{\mathbf{2B}}}(\alpha^2)$. We must show ${{\mathbf{2B}}}({\widetilde}\alpha^1) < {{\mathbf{2B}}}({\widetilde}\alpha^2)$. The inclusions ${{\mathbf{2B}}}({\widetilde}\alpha^j) \subset {{\mathbf{2B}}}(\alpha^j)$ for $j \in \{1,2\}$ are equivalent to the inclusions ${\widetilde}\alpha^j \in (T_b)_{\alpha^j}$. From this it is easy to see that ${{\mathbf{2B}}}({\widetilde}\alpha^1) < {{\mathbf{2B}}}({\widetilde}\alpha^2)$.
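Of the axioms just verified, [(2-bracketing)]{} is the most mechanical: two 2-brackets that share a marked point on a common seam must be nested. Here is a minimal Python sketch of that check (our own encoding, matching the index form used in the earlier examples; not the paper's code):

```python
def contained_in(x, y):
    """2-bracket containment: the seams of x are among those of y, and on
    each seam of x its marked points are among those of y."""
    (B1, parts1), (B2, parts2) = x, y
    return set(B1) <= set(B2) and all(
        set(parts1[i]) <= set(parts2[i]) for i in B1)

def satisfies_2bracketing(two_brackets):
    """True if any two 2-brackets that share a marked point on a common
    seam are nested (one contains the other)."""
    for x in two_brackets:
        for y in two_brackets:
            shared = any(set(x[1][i]) & set(y[1][i])
                         for i in set(x[0]) & set(y[0]))
            if shared and not (contained_in(x, y) or contained_in(y, x)):
                return False
    return True
```

The earlier non-example in which $(d)$ is replaced by $(e,d)$ fails precisely this check: $\bigl((3,4,5),((f,e),(),())\bigr)$ and $\bigl((3,4,5),((e,d),(),())\bigr)$ share the marked point $e$, yet neither contains the other.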
*Step 3: We define a putative inverse $2\tau\colon W_{\mathbf{n}}^{{{\operatorname}{br}}}\to W_{\mathbf{n}}^{{{\operatorname}{tree}}}$.*
Fix $({\mathscr{B}},{2\mathscr{B}}) \in W_{\mathbf{n}}^{{{\operatorname}{br}}}$. Define $T_s \coloneqq \tau({\mathscr{B}})$. Towards the definition of $T_b$, define the following sets: $$\begin{gathered}
V_{{{\operatorname}{mark}}}\coloneqq \left\{\bigl(\{i\},(\{j\})\bigr) \:\left|\: {{1 \leq i \leq r} \atop {1 \leq j \leq n_i}}\right.\right\} \ni \mu_{ij}^{T_b},
\quad
V_{{{\operatorname}{comp}}}\coloneqq {2\mathscr{B}}\setminus V_{{{\operatorname}{mark}}}\ni \alpha_{{{\mathbf{2B}}},{{{\operatorname}{comp}}}},
\\
V_{{{\operatorname}{seam}}}\coloneqq V_{{{\operatorname}{seam}}}^1\sqcup V_{{{\operatorname}{seam}}}^{\geq2},
\quad
V_{{{\operatorname}{seam}}}^1 \coloneqq \{{{\mathbf{2B}}}\in {2\mathscr{B}}\:|\: \exists\: {{\mathbf{2B}}}' \in {2\mathscr{B}}_{\pi({{\mathbf{2B}}})}\colon {{\mathbf{2B}}}' \subsetneq {{\mathbf{2B}}}\} \ni \alpha_{{{\mathbf{2B}}},\pi({{\mathbf{2B}}}),{{{\operatorname}{seam}}}}, \nonumber
\\
V_{{{\operatorname}{seam}}}^{\geq2} \coloneqq \left\{({{\mathbf{2B}}},B') \in {2\mathscr{B}}\times {\mathscr{B}}\:\left|\: {{\not\!\exists\: {{\mathbf{2B}}}'' \in {2\mathscr{B}}_{\pi({{\mathbf{2B}}})}\colon {{\mathbf{2B}}}'' \subsetneq {{\mathbf{2B}}}}\atop{B' \subsetneq \pi({{\mathbf{2B}}}), \: \not\!\exists B'' \in {\mathscr{B}}: B' \subsetneq B'' \subsetneq \pi({{\mathbf{2B}}})}}\right.\right\} \ni \alpha_{{{\mathbf{2B}}},B',{{{\operatorname}{seam}}}} \nonumber.\end{gathered}$$ Now define the vertices and incoming neighbors in $T_b$ by $$\begin{gathered}
\label{eq:Tb_in}
V \coloneqq V_{{{\operatorname}{comp}}}\sqcup V_{{{\operatorname}{seam}}}\sqcup V_{{{\operatorname}{mark}}}, \quad \alpha_{{{\operatorname}{root}}}\coloneqq \alpha_{(\{1,\ldots,r\},(\{1,\ldots,n_1\},\ldots,\{1,\ldots,n_r\})),{{{\operatorname}{comp}}}},
\\
{{\operatorname}{in}}(\mu_{ij}^{T_b}) \coloneqq \emptyset,
\quad
{{\operatorname}{in}}(\alpha_{{{\mathbf{2B}}},{{{\operatorname}{comp}}}}) \coloneqq \begin{cases}
\{\alpha_{{{\mathbf{2B}}},\pi({{\mathbf{2B}}}),{{{\operatorname}{seam}}}}\}, & \exists\: {{\mathbf{2B}}}' \in {2\mathscr{B}}_{\pi({{\mathbf{2B}}})} \colon {{\mathbf{2B}}}' \subsetneq {{\mathbf{2B}}}, \\
\left\{\alpha_{{{\mathbf{2B}}},B',{{{\operatorname}{seam}}}} \:\left|\: {{B' \subsetneq \pi({{\mathbf{2B}}})}\atop{\not\exists B'' \in {\mathscr{B}}: B' \subsetneq B'' \subsetneq \pi({{\mathbf{2B}}})}}\right.\right\}, & \text{otherwise},
\end{cases}
\nonumber
\\
{{\operatorname}{in}}(\alpha_{{{\mathbf{2B}}},B',{{{\operatorname}{seam}}}}) \coloneqq \left\{\alpha_{{{\mathbf{2B}}}'',{{{\operatorname}{comp}}}} \:\left|\: {{\pi({{\mathbf{2B}}}'') = B', {{\mathbf{2B}}}'' \subsetneq {{\mathbf{2B}}},} \atop {\not\!\exists\: {{\mathbf{2B}}}''' \in {2\mathscr{B}}\colon {{\mathbf{2B}}}'' \subsetneq {{\mathbf{2B}}}''' \subsetneq {{\mathbf{2B}}}}} \right.\right\}
\cup \hspace{1.5in}
\nonumber
\\
\hspace{2.5in} \cup
\left\{\mu_{ij}^{T_b} \:\left|\: {{B'=\{i\}, \bigl(\{i\},(\{j\})\bigr) \subsetneq {{\mathbf{2B}}},} \atop {\not\!\exists\: {{\mathbf{2B}}}'' \in {2\mathscr{B}}\colon \bigl(\{i\},(\{j\})\bigr) \subsetneq {{\mathbf{2B}}}'' \subsetneq {{\mathbf{2B}}}}} \right.\right\},
\nonumber\end{gathered}$$ where the incoming edges of $\alpha_{{{\mathbf{2B}}},{{{\operatorname}{comp}}}}$ are solid and the incoming edges of $\alpha_{{{\mathbf{2B}}},B',{{{\operatorname}{seam}}}}$ are dashed. For $\alpha_{{{\mathbf{2B}}},{{{\operatorname}{comp}}}}$ for which there does not exist ${{\mathbf{2B}}}' \in {2\mathscr{B}}_{\pi({{\mathbf{2B}}})}$ with ${{\mathbf{2B}}}' \subsetneq {{\mathbf{2B}}}$, order the incoming neighbors $\left\{\alpha_{{{\mathbf{2B}}},B',{{{\operatorname}{seam}}}} \:\left|\: {{B' \subsetneq \pi({{\mathbf{2B}}})}\atop{\not\exists B'' \in {\mathscr{B}}: B' \subsetneq B'' \subsetneq \pi({{\mathbf{2B}}})}}\right.\right\}$ according to the order on the incoming neighbors of $\alpha(\pi({{\mathbf{2B}}}))$ in $T_s = \tau({\mathscr{B}})$. For $\alpha_{{{\mathbf{2B}}},B',{{{\operatorname}{seam}}}}$, order the incoming neighbors according to the partial order on ${2\mathscr{B}}_{B'}$. Finally, define $f\colon T_b \to T_s$ like so: $$\begin{gathered}
f(\mu_{ij}^{T_b}) \coloneqq \lambda_i^{T_s},
\quad
f(\alpha_{{{\mathbf{2B}}},{{{\operatorname}{comp}}}}) \coloneqq \alpha(\pi({{\mathbf{2B}}})),
\quad
f(\alpha_{{{\mathbf{2B}}},B',{{{\operatorname}{seam}}}}) \coloneqq \alpha(B').\end{gathered}$$
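The construction of $2\tau$ begins by sorting the 2-brackets of $({\mathscr{B}},{2\mathscr{B}})$ into the vertex classes of $T_b$. The sketch below (our own encoding, continuing the index form from the earlier examples; for brevity it classifies only $V_{{{\operatorname}{mark}}}$, $V_{{{\operatorname}{comp}}}$, and $V_{{{\operatorname}{seam}}}^1$, omitting $V_{{{\operatorname}{seam}}}^{\geq2}$) follows the definitions above.

```python
def contained_in(x, y):
    """2-bracket containment, with 2-brackets encoded as (B, parts)."""
    return (set(x[0]) <= set(y[0]) and
            all(set(x[1][i]) <= set(y[1][i]) for i in x[0]))

def vertex_sets(two_brackets):
    """Sort the 2-brackets into the classes V_mark, V_comp, V_seam^1."""
    # marked points: 2-brackets of the form ({i}, ({j}))
    V_mark = [x for x in two_brackets
              if len(x[0]) == 1 and len(x[1][x[0][0]]) == 1]
    V_comp = [x for x in two_brackets if x not in V_mark]
    # V_seam^1: 2-brackets properly containing another over the same seams
    V_seam1 = [x for x in two_brackets
               if any(set(y[0]) == set(x[0]) and y != x and contained_in(y, x)
                      for y in two_brackets)]
    return V_mark, V_comp, V_seam1
```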
There are a number of things we have to check in order to verify that $T_b {\stackrel}{f}{\to} T_s$ is a stable tree-pair. To define an RRT via Lemma \[lem:RRT\_in\], we must check conditions (1–3) in the statement of that lemma.
- It is clear that no ${{\operatorname}{in}}(\alpha)$ can contain $\alpha_{{{\operatorname}{root}}}$.
- Fix a non-root vertex $\alpha$ in $T_b$. Depending on which type of vertex $\alpha$ is, we check that there exists a unique $\beta$ with ${{\operatorname}{in}}(\beta) \ni \alpha$:
- [**$\bullet \: \mathbf{\alpha = \mu_{ij}^{T_b}}$.**]{} The vertices $\beta$ with ${{\operatorname}{in}}(\beta) \ni \mu_{ij}^{T_b}$ are exactly those $\alpha_{{{\mathbf{2B}}},\{i\},{{{\operatorname}{seam}}}}$ with ${{\mathbf{2B}}}$ satisfying these properties:
- $\bigl(\{i\},(\{j\})\bigr) \subsetneq {{\mathbf{2B}}}$;
- either $\pi({{\mathbf{2B}}}) = \{i\}$, or $\pi({{\mathbf{2B}}}) \supsetneq \{i\}$ and no $B'' \in {\mathscr{B}}$ has $\{i\} \subsetneq B'' \subsetneq \pi({{\mathbf{2B}}})$;
- no ${{\mathbf{2B}}}'' \in {2\mathscr{B}}$ has $\bigl(\{i\},(\{j\})\bigr) \subsetneq {{\mathbf{2B}}}'' \subsetneq {{\mathbf{2B}}}$.
Define $\Sigma$ to consist of those ${{\mathbf{2B}}}\in {2\mathscr{B}}$ that properly contain $\bigl(\{i\},(\{j\})\bigr)$, and order $\Sigma$ by inclusion. Since ${\mathbf{n}}\neq (1)$, the root 2-bracket $\bigl(\{1,\ldots,r\},(\{1,\ldots,n_1\},\ldots,\{1,\ldots,n_r\})\bigr)$ properly contains $\bigl(\{i\},(\{j\})\bigr)$, so $\Sigma$ is nonempty; by the [(2-bracketing)]{} property of 2-bracketings, any two elements of $\Sigma$ are comparable under inclusion. Therefore $\Sigma$ has a unique minimal element ${{\mathbf{2B}}}^0$.
We claim that ${{\mathbf{2B}}}^0$ is the unique element of ${2\mathscr{B}}$ satisfying (a–c). Indeed, it is clear from its definition that ${{\mathbf{2B}}}^0$ satisfies (a) and (c). If ${{\mathbf{2B}}}^0$ did not satisfy (b), there would exist $B'' \in {\mathscr{B}}$ with $\{i\} \subsetneq B'' \subsetneq \pi({{\mathbf{2B}}}^0)$. By the [(marked seams are unfused)]{} property of 2-bracketings, there would then exist ${{\mathbf{2B}}}'' \in {2\mathscr{B}}$ with $\pi({{\mathbf{2B}}}'') = B''$ and ${{\mathbf{2B}}}''_i \ni j$. This 2-bracket would satisfy $\bigl(\{i\},(\{j\})\bigr) \subsetneq {{\mathbf{2B}}}'' \subsetneq {{\mathbf{2B}}}^0$, contradicting the minimality of ${{\mathbf{2B}}}^0$ in $\Sigma$; therefore ${{\mathbf{2B}}}^0$ satisfies (b). On the other hand, suppose that ${{\mathbf{2B}}}$ satisfies (a–c). Then (a) implies that ${{\mathbf{2B}}}$ lies in $\Sigma$, and (c) implies that ${{\mathbf{2B}}}$ is in fact the minimal element of $\Sigma$; therefore ${{\mathbf{2B}}}= {{\mathbf{2B}}}^0$. We may conclude that $\beta \coloneqq \alpha_{{{\mathbf{2B}}}^0,\{i\},{{{\operatorname}{seam}}}}$ is the unique vertex in $T_b$ with ${{\operatorname}{in}}(\beta) \ni \alpha$.
- [**$\bullet \: \mathbf{\alpha = \alpha_{{{\mathbf{2B}}},{{{\operatorname}{comp}}}}}$.**]{} An argument similar to the one used in the case $\alpha = \mu_{ij}^{T_b}$ shows that there is a unique $\beta$ with ${{\operatorname}{in}}(\beta) \ni \alpha$.
- [**$\bullet \: \mathbf{\alpha = \alpha_{{{\mathbf{2B}}},\mathit{B'},{{{\operatorname}{seam}}}}}$.**]{} If there exists ${{\mathbf{2B}}}'' \in {2\mathscr{B}}_{\pi({{\mathbf{2B}}})}$ with ${{\mathbf{2B}}}'' \subsetneq {{\mathbf{2B}}}$, then $\alpha \in V_{{{\operatorname}{seam}}}^1$ and $B' = \pi({{\mathbf{2B}}})$. Therefore $\beta \coloneqq \alpha_{{{\mathbf{2B}}},{{{\operatorname}{comp}}}}$ is the unique vertex with ${{\operatorname}{in}}(\beta) \ni \alpha$.
On the other hand, suppose that there does not exist such a ${{\mathbf{2B}}}''$. Then $\alpha \in V_{{{\operatorname}{seam}}}^{\geq2}$, and $\beta \coloneqq \alpha_{{{\mathbf{2B}}},{{{\operatorname}{comp}}}}$ is the unique vertex with ${{\operatorname}{in}}(\beta) \ni \alpha$.
- Suppose that $\alpha_1, \ldots, \alpha_\ell \in V$ with $\ell \geq 2$ satisfy $\alpha_j \in {{\operatorname}{in}}(\alpha_{j+1})$ for every $j$. It is clear from our verification of (2) that $\alpha_1 \neq \alpha_\ell$.
Now that we have shown that $T_b$ is well-defined as an RRT, we check the rest of the requirements on $T_b {\stackrel}{f}{\to} T_s$.
- - We have defined $V$ as the union $V = V_{{{\operatorname}{comp}}}\sqcup V_{{{\operatorname}{seam}}}\sqcup V_{{{\operatorname}{mark}}}$, and the incoming and outgoing edges of the vertices are clearly of the necessary types. It is not hard to see that $\alpha_{{{\mathbf{2B}}},{{{\operatorname}{comp}}}} \in V_{{{\operatorname}{comp}}}$ has at least one incoming edge: if there exists ${{\mathbf{2B}}}' \in {2\mathscr{B}}_{\pi({{\mathbf{2B}}})}$ with ${{\mathbf{2B}}}' \subsetneq {{\mathbf{2B}}}$, then $\#\!{{\operatorname}{in}}(\alpha_{{{\mathbf{2B}}},{{{\operatorname}{comp}}}}) = 1$. Next, suppose that there is no such ${{\mathbf{2B}}}'$. The incoming neighbors of $\alpha_{{{\mathbf{2B}}},{{{\operatorname}{comp}}}}$ are in correspondence with the maximal elements of $\Sigma$, where $\Sigma$ consists of those elements $B'$ of ${\mathscr{B}}$ with $B' \subsetneq \pi({{\mathbf{2B}}})$. It follows from the [(root and leaves)]{} property of 1-bracketings that $\#\!{{\operatorname}{in}}(\alpha_{{{\mathbf{2B}}},{{{\operatorname}{comp}}}}) \geq 2$.
- [(stability)]{} Fix $\alpha = \alpha_{{{\mathbf{2B}}},{{{\operatorname}{comp}}}} \in V_{{{\operatorname}{comp}}}^1$. Its unique incoming neighbor is $\beta\coloneqq \alpha_{{{\mathbf{2B}}},\pi({{\mathbf{2B}}}),{{{\operatorname}{seam}}}}$. We must show that $\beta$ has at least 2 incoming neighbors. The incoming neighbors of $\beta$ are in correspondence with the maximal elements of the set $\Sigma$ of ${{\mathbf{2B}}}'' \in {2\mathscr{B}}_{\pi({{\mathbf{2B}}})}$ with ${{\mathbf{2B}}}'' \subsetneq {{\mathbf{2B}}}$. The fact that $\alpha_{{{\mathbf{2B}}},{{{\operatorname}{comp}}}}$ lies in $V_{{{\operatorname}{comp}}}^1$ implies that $\Sigma$ is nonempty. Choose ${{\mathbf{2B}}}^1$ to be any maximal element of $\Sigma$. Now choose any $i, j$ with the property that ${{\mathbf{2B}}}_i \setminus {{\mathbf{2B}}}^1_i$ contains $j$. Define $\Sigma'$ to consist of those ${{\mathbf{2B}}}'' \in \Sigma$ with ${{\mathbf{2B}}}''_i \ni j$. By [(marked seams are unfused)]{}, $\Sigma'$ is nonempty; choose ${{\mathbf{2B}}}^2$ to be any maximal element of $\Sigma'$. Then ${{\mathbf{2B}}}^2$ is also maximal in $\Sigma$ (any element of $\Sigma$ properly containing ${{\mathbf{2B}}}^2$ would lie in $\Sigma'$), and ${{\mathbf{2B}}}^2 \neq {{\mathbf{2B}}}^1$ since $j \notin {{\mathbf{2B}}}^1_i$. This shows that $\beta$ has at least 2 incoming neighbors.
On the other hand, suppose that $\alpha = \alpha_{{{\mathbf{2B}}},{{{\operatorname}{comp}}}}$ lies in $V_{{{\operatorname}{comp}}}^{\geq2}$. Write ${{\mathbf{2B}}}= (B, (2B_i))$. The incoming neighbors of $\alpha$ are in correspondence with the maximal elements of the set $\Sigma$ of $B' \in {\mathscr{B}}$ with $B' \subsetneq B$. Not every $2B_i$ can be empty, so we may choose $i, j$ with the property that $2B_i$ contains $j$. Define $\Sigma'$ to be the set of 1-brackets $B' \in {\mathscr{B}}$ with $\{i\} \subset B' \subsetneq B$. $\Sigma'$ contains $\{i\}$, hence is nonempty; define $B^1$ to be a maximal element of $\Sigma'$ (in fact, this determines $B^1$ uniquely). Now define $\Sigma''$ to be the set of 2-brackets ${{\mathbf{2B}}}'' \in {2\mathscr{B}}_{B^1}$ with $\bigl(\{i\},(\{j\})\bigr) \subset {{\mathbf{2B}}}'' \subsetneq {{\mathbf{2B}}}$. By [(marked seams are unfused)]{} there exists ${{\mathbf{2B}}}'' \in {2\mathscr{B}}_{B^1}$ with ${{\mathbf{2B}}}''_i \ni j$; such a 2-bracket is nested with ${{\mathbf{2B}}}$ by [(2-bracketing)]{}, hence lies in $\Sigma''$, so $\Sigma''$ is nonempty; define ${{\mathbf{2B}}}^2$ to be a maximal element of $\Sigma''$. Then $\alpha = \alpha_{{{\mathbf{2B}}},{{{\operatorname}{comp}}}}$ has $\beta \coloneqq \alpha_{{{\mathbf{2B}}},B^1,{{{\operatorname}{seam}}}}$ as an incoming neighbor, and $\beta$ has $\alpha_{{{\mathbf{2B}}}^2,{{{\operatorname}{comp}}}}$ as an incoming neighbor.
- By Prop. \[prop:Kr\_iso\], $T_s$ is an element of $K_r^{{{\operatorname}{tree}}}$.
- It is clear that $f$ satisfies the necessary properties.
*Step 4: We verify that $2\nu$ and $2\tau$ are inverse to one another.*
This can be shown by an argument similar to the one made in the proof of Prop. \[prop:Kr\_iso\] to show that $\nu$ and $\tau$ are inverse to one another.
\[ex:2nu\_example\] In the following figure, we illustrate the definition of $2\nu$ as a map of sets:
\[fig:2nu\_example\]
On the left is the tree-pair in $W_{11410}^{{{\operatorname}{tree}}}$ we discussed in Ex. \[ex:tree-pair\_examples\], and on the right is the 2-bracketing in $W_{11410}^{{{\operatorname}{br}}}$ we discussed in Ex. \[ex:2bracketing\_examples\]. In fact, these objects are identified by $2\nu$. Indeed, we see here how the elements of $V_{{{\operatorname}{comp}}}(T_b) \cup V_{{{\operatorname}{mark}}}(T_b)$ are sent to 2-brackets (indicated by blue arrows), and how the elements of $T_s$ are sent to 1-brackets (green arrows). (We have omitted the blue and green arrows corresponding to $\rho_{{{\operatorname}{root}}}^{T_s}$, $\alpha_{{{\operatorname}{root}}}^{T_b}$, $V_{{{\operatorname}{mark}}}(T_b)$, and $T_s \setminus (T_s)_{{{\operatorname}{int}}}$.) The procedure for assigning a 2-bracket to an element $\alpha$ of $V_{{{\operatorname}{comp}}}(T_b) \cup V_{{{\operatorname}{mark}}}(T_b)$ is simple: the 2-bracket includes the elements of $V_{{{\operatorname}{mark}}}(T_b)$ lying above $\alpha$, and the projection of the 2-bracket includes the leaves of $T_s$ above $f(\alpha)$.
In the next figure we indicate, in the case of the same tree-pair $2T$, how the partial order on $2\nu(2T)$ is defined. Specifically, we indicate why the inequalities ${{\mathbf{2B}}}(\gamma_1) < {{\mathbf{2B}}}(\gamma_2)$, ${{\mathbf{2B}}}(\delta_1) < {{\mathbf{2B}}}(\delta_2)$, and ${{\mathbf{2B}}}({\epsilon}_1) < {{\mathbf{2B}}}({\epsilon}_2)$ hold. Here is the procedure, in the case of $\delta_1$ and $\delta_2$: draw (blue) paths downward from $\delta_1$ and $\delta_2$, until the paths intersect at a vertex $\alpha$. At $\alpha$ — necessarily an element of $V_{{{\operatorname}{seam}}}(T_b)$ — note which elements of ${{\operatorname}{in}}(\alpha)$ the two paths passed through. Using the order on ${{\operatorname}{in}}(\alpha)$, we obtain the inequality ${{\mathbf{2B}}}(\delta_1) < {{\mathbf{2B}}}(\delta_2)$.
\[fig:2nu\_order\_example\]
$\triangle$
In this lemma, we introduce the notion of a move on a 2-bracketing. We say that ${2\mathscr{B}}'$ is the result of performing a move on ${2\mathscr{B}}$ if $(2\nu)^{-1}({2\mathscr{B}}')$ is the result of performing a move on $(2\nu)^{-1}({2\mathscr{B}})$.
\[lem:Wn\_models\_orders\] The partial order on $W_{\mathbf{n}}^{{{\operatorname}{br}}}$ coincides with the one induced by the partial order on $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ and the isomorphism $2\nu\colon W_{\mathbf{n}}^{{{\operatorname}{tree}}}\to W_{\mathbf{n}}^{{{\operatorname}{br}}}$.
The nontrivial direction is to show that for $({\mathscr{B}}^1,{2\mathscr{B}}^1) \subsetneq ({\mathscr{B}}^2,{2\mathscr{B}}^2)$, we can obtain $({\mathscr{B}}^2,{2\mathscr{B}}^2)$ from $({\mathscr{B}}^1,{2\mathscr{B}}^1)$ via a finite sequence of moves. It is enough to show that there is a 2-bracketing $({\mathscr{B}}, {2\mathscr{B}}) \in W_{\mathbf{n}}^{{{\operatorname}{br}}}$ satisfying the containments $$\begin{aligned}
({\mathscr{B}}^1,{2\mathscr{B}}^1) \subset ({\mathscr{B}}, {2\mathscr{B}}) \subset ({\mathscr{B}}^2,{2\mathscr{B}}^2),\end{aligned}$$ and such that either $({\mathscr{B}},{2\mathscr{B}})$ is the result of performing a single move on $({\mathscr{B}}^1,{2\mathscr{B}}^1)$, or $({\mathscr{B}}^2,{2\mathscr{B}}^2)$ is the result of performing a single move on $({\mathscr{B}},{2\mathscr{B}})$. We produce such a 2-bracketing in the following, exhaustive, cases.
- [**Case 1: There exist $B^0 \in {\mathscr{B}}^1$ and ${{\mathbf{2B}}}\in {2\mathscr{B}}^2_{B^0}\setminus {2\mathscr{B}}^1_{B^0}$, ${{\mathbf{2B}}}' \in {2\mathscr{B}}^2_{B^0}$ with ${{\mathbf{2B}}}' \subsetneq {{\mathbf{2B}}}$.**]{} We claim that $({\mathscr{B}}^2, {2\mathscr{B}}^2 \setminus \{{{\mathbf{2B}}}\})$ is a valid 2-bracketing. The only property that does not obviously hold is [(marked seams are unfused)]{}. This is a consequence of the fact that $({\mathscr{B}}^2,{2\mathscr{B}}^2)$ has the [(marked seams are unfused)]{} property. $({\mathscr{B}}^2,{2\mathscr{B}}^2)$ is the result of performing a type-1 move on $({\mathscr{B}}^2,{2\mathscr{B}}^2\setminus\{{{\mathbf{2B}}}\})$, and the necessary containments hold: $$\begin{aligned}
({\mathscr{B}}^1,{2\mathscr{B}}^1) \subset ({\mathscr{B}}^2,{2\mathscr{B}}^2\setminus\{{{\mathbf{2B}}}\}) \subsetneq ({\mathscr{B}}^2,{2\mathscr{B}}^2).\end{aligned}$$
We illustrate this case in the following figure. On the left are the 2-bracketings $({\mathscr{B}}^1,{2\mathscr{B}}^1)$, $({\mathscr{B}},{2\mathscr{B}})$, $({\mathscr{B}}^2,{2\mathscr{B}}^2)$, from left to right; on the right are the tree-pairs corresponding via $2\nu$ to these 2-bracketings. $({\mathscr{B}}^2,{2\mathscr{B}}^2)$ is the result of performing a type-1 move on $({\mathscr{B}},{2\mathscr{B}})$, and we highlight in red the portion of the tree-pairs involved in this move.
\[fig:2nu\_poset\_map\_example\_1\]
- [**Case 2: Case 1 does not hold, and there exist $B^0 \in {\mathscr{B}}^1$ and ${{\mathbf{2B}}}\in {2\mathscr{B}}^2_{B^0}$, ${{\mathbf{2B}}}' \in {2\mathscr{B}}^2_{B^0}\setminus {2\mathscr{B}}^1_{B^0}$ with ${{\mathbf{2B}}}' \subsetneq {{\mathbf{2B}}}$.**]{} Fix such 2-brackets ${{\mathbf{2B}}}', {{\mathbf{2B}}}$. Without loss of generality, we may assume that ${{\mathbf{2B}}}$ is minimal among 2-brackets in ${2\mathscr{B}}^2_{B^0}$ that properly contain ${{\mathbf{2B}}}'$. The assumption that we are not in Case 1 implies that ${{\mathbf{2B}}}$ lies in ${2\mathscr{B}}^1_{B^0}$; this assumption also implies that ${{\mathbf{2B}}}'$ cannot properly contain any 2-bracket in ${2\mathscr{B}}^2_{B^0}$. This and the minimality of ${{\mathbf{2B}}}$ imply that for any $i \in B^0$ and $j \in 2B'_i$, there is no ${{\mathbf{2B}}}'' \in {2\mathscr{B}}^1_{B^0}$ with ${{\mathbf{2B}}}'' \subsetneq {{\mathbf{2B}}}$ and $2B''_i \ni j$; it therefore follows from the [(marked seams are unfused)]{} property of $({\mathscr{B}}^1,{2\mathscr{B}}^1)$ that there are no 2-brackets in ${2\mathscr{B}}^1_{B^0}$ that are properly contained in ${{\mathbf{2B}}}$. The [(marked seams are unfused)]{} property of $({\mathscr{B}}^2,{2\mathscr{B}}^2)$ implies that for every $i \in B^0$ and $j \in 2B^2_i$, there exists ${\widetilde}{{\mathbf{2B}}}\in {2\mathscr{B}}^2_{B^0}$ with ${\widetilde}{{\mathbf{2B}}}\subsetneq {{\mathbf{2B}}}$ and ${\widetilde}2B_i \ni j$. This, together with [(2-bracketing)]{}, implies that if ${\widetilde}{{\mathbf{2B}}}^1, \ldots, {\widetilde}{{\mathbf{2B}}}^k \in {2\mathscr{B}}^2_{B^0}$ denote the maximal elements (with respect to inclusion) of ${2\mathscr{B}}^2_{B^0}$ that are properly contained in ${{\mathbf{2B}}}$, then these 2-brackets satisfy ${{\mathbf{2B}}}= \bigsqcup_{i=1}^k {\widetilde}{{\mathbf{2B}}}^i$.
Therefore $({\mathscr{B}}^1,{2\mathscr{B}}^1\cup\{{\widetilde}{{\mathbf{2B}}}^1,\ldots,{\widetilde}{{\mathbf{2B}}}^k\})$ is the result of performing a single type-3 move on $({\mathscr{B}}^1,{2\mathscr{B}}^1)$, and the necessary containments hold: $$\begin{aligned}
({\mathscr{B}}^1,{2\mathscr{B}}^1) \subsetneq ({\mathscr{B}}^1,{2\mathscr{B}}^1\cup\{{\widetilde}{{\mathbf{2B}}}^1,\ldots,{\widetilde}{{\mathbf{2B}}}^k\}) \subset ({\mathscr{B}}^2,{2\mathscr{B}}^2).\end{aligned}$$
As in Case 1, we illustrate this procedure below. Here we take $B^0 = (1,2,3,4)$, ${{\mathbf{2B}}}= \bigl((1,2,3,4),((c,b,a),(),(),())\bigr)$, and ${{\mathbf{2B}}}' = \bigl((1,2,3,4),((b,a),(),(),())\bigr)$.
\[fig:2nu\_poset\_map\_example\_2\]
- [**Case 3: Neither Case 1 nor Case 2 holds.**]{} The proper containment $({\mathscr{B}}^1,{2\mathscr{B}}^1) \subsetneq ({\mathscr{B}}^2,{2\mathscr{B}}^2)$ implies that either ${\mathscr{B}}^2\setminus {\mathscr{B}}^1$ is nonempty or ${2\mathscr{B}}^2\setminus {2\mathscr{B}}^1$ is nonempty. We claim that under the current assumptions, ${\mathscr{B}}^2\setminus {\mathscr{B}}^1$ must be nonempty. Indeed, suppose that ${2\mathscr{B}}^2\setminus{2\mathscr{B}}^1$ is nonempty, and choose an element ${{\mathbf{2B}}}= (B, (2B_i))$. The assumption that neither Case 1 nor Case 2 holds implies that there is no ${{\mathbf{2B}}}' \in {2\mathscr{B}}^2_B$ with either ${{\mathbf{2B}}}' \subsetneq {{\mathbf{2B}}}$ or ${{\mathbf{2B}}}\subsetneq {{\mathbf{2B}}}'$. This, together with the [(2-bracketing)]{} and [(marked seams are unfused)]{} properties of $({\mathscr{B}}^1,{2\mathscr{B}}^1)$, implies that $B$ lies in ${\mathscr{B}}^2\setminus {\mathscr{B}}^1$. We may conclude that ${\mathscr{B}}^2\setminus{\mathscr{B}}^1$ is nonempty.
Choose $B\in {\mathscr{B}}^2\setminus{\mathscr{B}}^1$, and note that the assumption that neither Case 1 nor Case 2 holds implies that any two elements of ${2\mathscr{B}}^2_B$ are disjoint. Set ${\mathscr{B}}\coloneqq {\mathscr{B}}^2\setminus\{B\}$ and ${2\mathscr{B}}\coloneqq {2\mathscr{B}}^2\setminus {2\mathscr{B}}^2_B$. Then the necessary containments hold, and $({\mathscr{B}}^2,{2\mathscr{B}}^2)$ is the result of performing a single type-2 move on $({\mathscr{B}},{2\mathscr{B}})$.
As above, we illustrate this case in the following figure.
\[fig:2nu\_poset\_map\_example\_3\]
Key properties of $W_{\mathbf{n}}$ {#sec:Wn_polytope}
====================
In this section we establish several properties of $W_{\mathbf{n}}$, collected in this paper’s main theorem:
\[thm:main\] For any $r \geq 1$ and ${\mathbf{n}}\in {\mathbb{Z}}^r_{\geq0}\setminus\{{\mathbf{0}}\}$, the 2-associahedron $W_{\mathbf{n}}$ is a poset, the collection of which satisfies the following properties:
- <span style="font-variant:small-caps;">(abstract polytope)</span> For ${\mathbf{n}}\neq (1)$, ${\widehat}{W_{\mathbf{n}}} \coloneqq W_{\mathbf{n}}\cup \{F_{-1}\}$ is an abstract polytope of dimension $|{\mathbf{n}}| + r - 3$.
- <span style="font-variant:small-caps;">(forgetful)</span> $W_{\mathbf{n}}$ is equipped with forgetful maps $\pi\colon W_{\mathbf{n}}\to K_r$, which are surjective maps of posets.
- <span style="font-variant:small-caps;">(recursive)</span> For any stable tree-pair $2T = T_b {\stackrel}{f}{\to} T_s \in W_{\mathbf{n}}^{{{\operatorname}{tree}}}$, there is an inclusion of posets $$\begin{aligned}
\Gamma_{2T} \colon \prod_{
{\alpha \in V_{{{\operatorname}{comp}}}^1(T_b),}
\atop
{{{\operatorname}{in}}(\alpha)=(\beta)}
} W_{\#\!{{\operatorname}{in}}(\beta)}^{{{\operatorname}{tree}}}\times
\prod_{\rho \in V_{{{\operatorname}{int}}}(T_s)} \prod^{K_{\#\!{{\operatorname}{in}}(\rho)}}_{
{\alpha\in V_{{{\operatorname}{comp}}}^{\geq2}(T_b)\cap f^{-1}\{\rho\},}
\atop
{{{\operatorname}{in}}(\alpha)=(\beta_1,\ldots,\beta_{\#\!{{\operatorname}{in}}(\rho)})}
}
\hspace{-0.25in} W^{{{\operatorname}{tree}}}_{\#\!{{\operatorname}{in}}(\beta_1),\ldots,\#\!{{\operatorname}{in}}(\beta_{\#\!{{\operatorname}{in}}(\rho)})}
\hra W_{\mathbf{n}}^{{{\operatorname}{tree}}},\end{aligned}$$ where the superscript on one of the product symbols indicates that it is a fiber product with respect to the maps described in <span style="font-variant:small-caps;">(forgetful)</span>. This inclusion is a poset isomorphism onto ${\mathrm{cl}}(2T) = (F_{-1},2T]$.
We prove the <span style="font-variant:small-caps;">(recursive)</span> and <span style="font-variant:small-caps;">(abstract polytope)</span> properties in Def.-Lem. \[deflem:Gamma2T\] and Thm. \[thm:Wn\_polytope\], respectively. $W_{\mathbf{n}}$ is a poset by its construction in Def. \[def:Wn\], and the forgetful map from the same definition is evidently a surjection of posets.
We now turn to the proof of the <span style="font-variant:small-caps;">(recursive)</span> property, which characterizes the closed faces of $W_{\mathbf{n}}$ as products and fiber products of lower-dimensional 2-associahedra. Toward this characterization, we show in the following lemma that for $2T' \in {\mathrm{cl}}(2T)$, certain vertices in $2T$ have avatars in $2T'$.
\[lem:avatars\] Fix $2T, 2T' \in W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ with $2T' \leq 2T$.
- For any $\rho \in T_s$, there exists a unique $\rho' \in T_s'$ satisfying $B(\rho') = B(\rho)$.
- For any $\alpha \in V_{{{\operatorname}{comp}}}(T_b) \cup V_{{{\operatorname}{mark}}}(T_b)$ there exists a unique $\alpha' \in V_{{{\operatorname}{comp}}}(T_b') \cup V_{{{\operatorname}{mark}}}(T_b')$ satisfying ${{\mathbf{2B}}}(\alpha') = {{\mathbf{2B}}}(\alpha)$.
First, we prove the first statement. The uniqueness of $\rho'$ is guaranteed by the stability condition, so it suffices to prove existence. We do so by induction on $d(2T')$, starting with $d(2T') = d(2T)$ and counting down. If $d(2T')=d(2T)$, then $T_s=T_s'$ and the statement holds trivially. Next, suppose that for $\rho \in T_s$, we have proved the existence of $\rho' \in T_s'$ with $B(\rho') = B(\rho)$ for every $2T' \leq 2T$ with $d(2T')\geq d(2T)-k \geq 1$; we must show that there is $\rho' \in T_s'$ with $B(\rho') = B(\rho)$ for $2T' < 2T$ with $d(2T') = d(2T)-k-1$. Choose $2T''$ with $2T' < 2T'' \leq 2T$ and $d(2T'') = d(2T)-k$, and denote by $\rho''$ the vertex in $T_s''$ with $B(\rho'')=B(\rho)$. $2T'$ can be obtained from $2T''$ via a single move, so either $T_s' = T_s''$ or $T_s'$ can be obtained from $T_s''$ by performing the following modification to some solid corolla in $T_s''$, for $2\leq\ell<k$:
In the former case, we can set $\rho'\coloneqq \rho''$. In the latter case, we can identify $V(T_s') \simeq V(T_s'') \cup \{v_{\text{new}}\}$; if we set $\rho'$ to be the vertex in $T_s'$ corresponding via this identification to $\rho''$, then $B(\rho') = B(\rho'')$.
The second statement of the lemma can be proven similarly.
Next, we show how this correspondence allows us to extract certain sub-tree-pairs from $2T'$.
\[deflem:subtreepairs\] Fix $2T, 2T' \in W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ with $2T' < 2T$.
- Fix $\alpha \in V_{{{\operatorname}{comp}}}^1(T_b)$ with ${{\operatorname}{in}}(\alpha) \eqqcolon (\beta)$ and ${{\operatorname}{in}}(\beta) \eqqcolon (\gamma_1,\ldots,\gamma_k)$, and denote by $\alpha', \gamma_1', \ldots, \gamma_k'$ the vertices in $T_b'$ that correspond to $\alpha,\gamma_1,\ldots,\gamma_k$ via Lemma \[lem:avatars\]. Define $(T_b')^\alpha$ to be the portion of $T_b'$ bounded by $\alpha'$ and $\gamma_1',\ldots,\gamma_k'$, and define $(T_s')^\alpha$ to be a single vertex. Then $(2T')^\alpha\coloneqq (T_b')^\alpha \to (T_s')^\alpha$ is a stable tree-pair in $W_k^{{{\operatorname}{tree}}}$.
- For any $\rho \in V_{{{\operatorname}{int}}}(T_s)$ with ${{\operatorname}{in}}(\rho) \eqqcolon (\sigma_1,\ldots,\sigma_k)$, define $(T_s')^\rho$ to be the portion of $T_s'$ bounded by $\rho'$ and $\sigma_1',\ldots,\sigma_k'$, where we continue to use the notation of Lemma \[lem:avatars\]. For any $\alpha \in V_{{{\operatorname}{comp}}}^{\geq2}(T_b)$ with $f(\alpha) = \rho$, denote ${{\operatorname}{in}}(\alpha) \eqqcolon (\beta_1,\ldots,\beta_k)$ and ${{\operatorname}{in}}(\beta_i) \eqqcolon (\gamma_{i1},\ldots,\gamma_{i\ell_i})$ for $i \in \{1,\ldots,k\}$, and define $(T_b')^\alpha$ to be the portion of $T_b'$ bounded by $\alpha'$ and $\gamma_{11}',\ldots,\gamma_{1\ell_1}',\ldots,\gamma_{k1}',\ldots,\gamma_{k\ell_k}'$. Then $\Bigl((T_s')^\rho, \bigl((2T')^\alpha\coloneqq (T_b')^\alpha \to (T_s')^\rho\bigr)_\alpha\Bigr)$ is an element of the following fiber product: $$\begin{aligned}
\prod^{K_{\#\!{{\operatorname}{in}}(\rho)}}_{
{\alpha\in V_{{{\operatorname}{comp}}}^{\geq2}(T_b)\cap f^{-1}\{\rho\}}
\atop
{{{\operatorname}{in}}(\alpha)=(\beta_1,\ldots,\beta_k)}
}
\hspace{-0.25in} W^{{{\operatorname}{tree}}}_{\ell_1,\ldots,\ell_k}.\end{aligned}$$
<!-- -->
- The statement that $(2T')^\alpha$ is an element of $W_k^{{{\operatorname}{tree}}}$ is almost immediate. Indeed, $(T_s)^\alpha = {{{\operatorname}{pt}}}$, and $(T_b)^\alpha$ is the following:
Define $2T = {\widetilde}{2T}^1,\ldots,{\widetilde}{2T}^{a_0} = 2T'$, where ${\widetilde}{2T}^{a+1}$ can be obtained from ${\widetilde}{2T}^a$ by making a single move. Then $({\widetilde}T_s^a)^\alpha = {{{\operatorname}{pt}}}$ for every $a$, and $({\widetilde}T_b^{a+1})^\alpha$ is either equal to $({\widetilde}T_b^a)^\alpha$ or can be obtained from $({\widetilde}T_b^a)^\alpha$ by performing the following modification to one of the dashed corollas in $(T_b^a)^\alpha$, for $2 \leq \ell < k$:
$W_k^{{{\operatorname}{tree}}}$ is closed under modifications of this form, so $(2T')^\alpha$ is a stable tree-pair in $W_k^{{{\operatorname}{tree}}}$.
- This statement can be proven via an argument similar to the one made for (a).
We are now ready to establish the <span style="font-variant:small-caps;">(recursive)</span> property.
\[deflem:Gamma2T\] Fix $2T \in W_{\mathbf{n}}^{{{\operatorname}{tree}}}$. Define a map $$\begin{aligned}
\Gamma_{2T} \colon \prod_{
{\alpha \in V_{{{\operatorname}{comp}}}^1(T_b)}
\atop
{{{\operatorname}{in}}(\alpha)=(\beta)}
} W_{\#\!{{\operatorname}{in}}(\beta)}^{{{\operatorname}{tree}}}\times
\prod_{\rho \in V_{{{\operatorname}{int}}}(T_s)} \prod^{K_{\#\!{{\operatorname}{in}}(\rho)}}_{
{\alpha\in V_{{{\operatorname}{comp}}}^{\geq2}(T_b)\cap f^{-1}\{\rho\}}
\atop
{{{\operatorname}{in}}(\alpha)=(\beta_1,\ldots,\beta_{\#\!{{\operatorname}{in}}(\rho)})}
}
\hspace{-0.25in} W^{{{\operatorname}{tree}}}_{\#\!{{\operatorname}{in}}(\beta_1),\ldots,\#\!{{\operatorname}{in}}(\beta_{\#\!{{\operatorname}{in}}(\rho)})}
\hra W_{\mathbf{n}}^{{{\operatorname}{tree}}},\end{aligned}$$ by sending $\Bigl((2T_\alpha)_\alpha,\bigl(T_\rho,({\widetilde}{2T}_\alpha)_\alpha\bigr)_\rho\Bigr)$ to the element of $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ defined like so: for $\alpha \in V^1_{{{\operatorname}{comp}}}(T_b)$ with ${{\operatorname}{in}}(\alpha)\eqqcolon(\beta)$, replace the portion of $T_b$ bounded by $\alpha$ and ${{\operatorname}{in}}(\beta)$ by $2T_\alpha$; for $\rho \in V_{{{\operatorname}{int}}}(T_s)$, replace the portion of $T_s$ bounded by $\rho$ and ${{\operatorname}{in}}(\rho)$ by $T_\rho$; and for $\rho \in V_{{{\operatorname}{int}}}(T_s)$ and $\alpha \in V^{\geq2}_{{{\operatorname}{comp}}}(T_b)\cap f^{-1}\{\rho\}$ with ${{\operatorname}{in}}(\alpha)\eqqcolon(\beta_1,\ldots,\beta_{\#\!{{\operatorname}{in}}(\rho)})$, replace the portion of $T_b$ bounded by $\alpha$ and ${{\operatorname}{in}}(\beta_1),\ldots,{{\operatorname}{in}}(\beta_{\#\!{{\operatorname}{in}}(\rho)})$ by ${\widetilde}{2T}_\alpha$. Then $\Gamma_{2T}$ restricts to a poset isomorphism from its domain to ${\mathrm{cl}}(2T) \subset W_{\mathbf{n}}^{{{\operatorname}{tree}}}$.
[*Step 1: $\Gamma_{2T}$ is a map of posets.*]{}
It suffices to show that for any $\Bigl((2T_\alpha^{(2)})_\alpha,\bigl(T_\rho^{(2)},({\widetilde}{2T}_\alpha^{(2)})_\alpha\bigr)_\rho\Bigr) < \Bigl((2T_\alpha^{(1)})_\alpha,\bigl(T_\rho^{(1)},({\widetilde}{2T}_\alpha^{(1)})_\alpha\bigr)_\rho\Bigr)$, there exists $\Bigl((2T_\alpha^{(3)})_\alpha,\bigl(T_\rho^{(3)},({\widetilde}{2T}_\alpha^{(3)})_\alpha\bigr)_\rho\Bigr)$ with $$\begin{aligned}
\Bigl((2T_\alpha^{(2)})_\alpha,\bigl(T_\rho^{(2)},({\widetilde}{2T}_\alpha^{(2)})_\alpha\bigr)_\rho\Bigr) \leq \Bigl((2T_\alpha^{(3)})_\alpha,\bigl(T_\rho^{(3)},({\widetilde}{2T}_\alpha^{(3)})_\alpha\bigr)_\rho\Bigr) < \Bigl((2T_\alpha^{(1)})_\alpha,\bigl(T_\rho^{(1)},({\widetilde}{2T}_\alpha^{(1)})_\alpha\bigr)_\rho\Bigr)\end{aligned}$$ and $$\begin{aligned}
\Gamma_{2T}\Bigl((2T_\alpha^{(3)})_\alpha,\bigl(T_\rho^{(3)},({\widetilde}{2T}_\alpha^{(3)})_\alpha\bigr)_\rho\Bigr) < \Gamma_{2T}\Bigl((2T_\alpha^{(1)})_\alpha,\bigl(T_\rho^{(1)},({\widetilde}{2T}_\alpha^{(1)})_\alpha\bigr)_\rho\Bigr).\end{aligned}$$ To do so, first suppose that there exists $\alpha_0 \in V^1_{{{\operatorname}{comp}}}(T_b)$ with $2T^{(2)}_{\alpha_0} < 2T^{(1)}_{\alpha_0}$. Define $\Bigl((2T_\alpha^{(3)})_\alpha,\bigl(T_\rho^{(3)},({\widetilde}{2T}_\alpha^{(3)})_\alpha\bigr)_\rho\Bigr)$ to be the result of starting with $\Bigl((2T_\alpha^{(1)})_\alpha,\bigl(T_\rho^{(1)},({\widetilde}{2T}_\alpha^{(1)})_\alpha\bigr)_\rho\Bigr)$, and replacing $2T^{(1)}_{\alpha_0}$ by $2T^{(2)}_{\alpha_0}$. The assumption on $\alpha_0$ implies that $2T^{(2)}_{\alpha_0}$ can be obtained from $2T^{(1)}_{\alpha_0}$ by performing a sequence of type-1 moves. Therefore $\Gamma_{2T}\Bigl((2T_\alpha^{(3)})_\alpha,\bigl(T_\rho^{(3)},({\widetilde}{2T}_\alpha^{(3)})_\alpha\bigr)_\rho\Bigr)$ can be obtained from $\Gamma_{2T}\Bigl((2T_\alpha^{(1)})_\alpha,\bigl(T_\rho^{(1)},({\widetilde}{2T}_\alpha^{(1)})_\alpha\bigr)_\rho\Bigr)$ by performing a sequence of type-1 moves.
If there exists no such $\alpha_0$, then we can choose $\rho_0 \in V_{{{\operatorname}{int}}}(T_s)$ with $\bigl(T_{\rho_0}^{(2)},({\widetilde}{2T}_\alpha^{(2)})_\alpha\bigr) < \bigl(T_{\rho_0}^{(1)},({\widetilde}{2T}_\alpha^{(1)})_\alpha\bigr)$ and make an argument similar to the previous paragraph.
[*Step 2: $\Gamma_{2T}$ restricts to a poset isomorphism onto ${\mathrm{cl}}(2T)$.*]{}
The injectivity of $\Gamma_{2T}$ is clear. It remains to show that the image of $\Gamma_{2T}$ is equal to ${\mathrm{cl}}(2T)$, and that the inverse is a poset map. By Step 1, the image of $\Gamma_{2T}$ is contained in ${\mathrm{cl}}(2T)$. Now define a putative inverse $$\begin{aligned}
\Gamma_{2T}^{-1}\colon {\mathrm{cl}}(2T)
&\to
\prod_{
{\alpha \in V_{{{\operatorname}{comp}}}^1(T_b)}
\atop
{{{\operatorname}{in}}(\alpha)=(\beta)}
} W_{\#\!{{\operatorname}{in}}(\beta)}^{{{\operatorname}{tree}}}\times
\prod_{\rho \in V_{{{\operatorname}{int}}}(T_s)} \prod^{K_{\#\!{{\operatorname}{in}}(\rho)}}_{
{\alpha\in V_{{{\operatorname}{comp}}}^{\geq2}(T_b)\cap f^{-1}\{\rho\}}
\atop
{{{\operatorname}{in}}(\alpha)=(\beta_1,\ldots,\beta_{\#\!{{\operatorname}{in}}(\rho)})}
}
\hspace{-0.25in} W^{{{\operatorname}{tree}}}_{\#\!{{\operatorname}{in}}(\beta_1),\ldots,\#\!{{\operatorname}{in}}(\beta_{\#\!{{\operatorname}{in}}(\rho)})}, \\
2T' &\mapsto \Bigl((2T'_\alpha)_\alpha, \bigl(T'_\rho,({\widetilde}{2T'}_\alpha)_\alpha\bigr)_\rho\Bigr)
\nonumber\end{aligned}$$ like so:
- For $\alpha \in V_{{{\operatorname}{comp}}}^1(T_b)$ with ${{\operatorname}{in}}(\alpha)\eqqcolon(\beta)$, set $2T'_\alpha \coloneqq (2T')^\alpha \in W^{{{\operatorname}{tree}}}_{\#\!{{\operatorname}{in}}(\beta)}$, where the latter stable tree-pair was defined in Def.-Lem. \[deflem:subtreepairs\](a).
- For $\rho \in V_{{{\operatorname}{int}}}(T_s)$ and $\alpha \in V_{{{\operatorname}{comp}}}^{\geq2}(T_b)$ with $f(\alpha)=\rho$ and ${{\operatorname}{in}}(\alpha)\eqqcolon(\beta_1,\ldots,\beta_{\#\!{{\operatorname}{in}}(\rho)})$, define $$\begin{aligned}
\bigl(T_\rho',({\widetilde}{2T'}_\alpha)_\alpha\bigr)_\rho
\coloneqq
\bigl((T_s')^\rho,((2T')^\alpha)_\alpha\bigr)_\rho,\end{aligned}$$ where the latter expression was defined in Def.-Lem. \[deflem:subtreepairs\](b).
It is simple to verify that $\Gamma_{2T}^{-1}$ is an inverse for the restriction of $\Gamma_{2T}$ to a map to ${\mathrm{cl}}(2T)$, and to verify that $\Gamma_{2T}^{-1}$ is a poset map.
Now that we have recursively characterized the closed faces of $W_{\mathbf{n}}$, we turn to our proof that ${\widehat}{W_{\mathbf{n}}}$ is an abstract polytope.
\[thm:Wn\_polytope\] For any $r \geq 1$ and ${\mathbf{n}}\in {\mathbb{Z}}^r_{\geq0}\setminus\{{\mathbf{0}},(1)\}$, ${\widehat}{W_{\mathbf{n}}}$ is an abstract polytope of dimension $|{\mathbf{n}}| + r - 3$.
We defer the proofs of [(diamond)]{} resp. [(strongly connected)]{} to Props. \[prop:Wn\_diamond\] resp. \[prop:Wn\_conn\], so here we only need to establish [(extremal)]{}, [(flag-length)]{}, and the dimension formula.
The least face of ${\widehat}{W_{\mathbf{n}}}$ is the face $F_{-1}$ we have added to $W_{\mathbf{n}}$ to form ${\widehat}{W_{\mathbf{n}}}$, while the greatest face (in ${\widehat}{W_{\mathbf{n}}^{{{\operatorname}{br}}}}$) is the 2-bracketing $({\mathscr{B}}, {2\mathscr{B}})$ with $$\begin{gathered}
{\mathscr{B}}\coloneqq \{\{1,\ldots,r\},\{1\},\ldots,\{r\}\},
\\
{2\mathscr{B}}\coloneqq \Bigl\{ \bigl(\{1,\ldots,r\},(\{1,\ldots,n_1\},\ldots,\{1,\ldots,n_r\})\bigr)\Bigr\}
\cup
\left\{\bigl(\{i\},(\{j\})\bigr) \:\left|\:
{{1\leq i\leq r}
\atop
{1\leq j\leq n_i}}
\right.\right\}.
\nonumber\end{gathered}$$ This establishes [(extremal)]{}.
To prove [(flag-length)]{} and to show that the dimension of $W_{\mathbf{n}}$ is $|{\mathbf{n}}|+r-3$, we must show that if $2T^0 < \cdots < 2T^\ell$ is a maximal chain in $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$, then $\ell = |{\mathbf{n}}| + r - 3$. By Lemmata \[lem:Wn\_dim\] and \[lem:Wn\_moves\], we have $0 \leq d(2T^0) < \cdots < d(2T^\ell) \leq |{\mathbf{n}}|+r-3$. To prove the claim, we must show that every dimension between 0 and $|{\mathbf{n}}|+r-3$ is represented. For any consecutive $2T^i, 2T^{i+1}$, we must have $d(2T^i) = d(2T^{i+1}) - 1$: otherwise, there exists $2T' \in W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ which can be obtained by performing a single move on $2T^{i+1}$, and which satisfies $d(2T^i) < d(2T') < d(2T^{i+1})$; this would contradict the maximality of the chain. Again by maximality, we must have $2T^\ell = F_{{{\operatorname}{top}}}$. It remains to show $d(2T^0) = 0$. Suppose for a contradiction that $d(2T^0)$ is positive. Then by Lemma \[lem:Wn\_dim\], either (a) there exists $\alpha \in V_{{{\operatorname}{comp}}}^1(T_b)$ with $\#\!{{\operatorname}{in}}(\beta) \geq 3$ for $(\beta) \coloneqq {{\operatorname}{in}}(\alpha)$; (b) there exists $\alpha \in V_{{{\operatorname}{comp}}}^{\geq2}(T_b)$ with $\sum_{\beta \in {{\operatorname}{in}}(\alpha)} \#\!{{\operatorname}{in}}(\beta) \geq 2$; or (c) there exists $\rho \in (T_s)_{{{\operatorname}{int}}}$ with $\#\!{{\operatorname}{in}}(\rho) \geq 3$. In these three cases we may perform a move of type 1 resp. type 3 resp. type 2 on $2T^0$, which contradicts the maximality of our chain.
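To make the greatest face and the dimension formula concrete, here is a small Python sketch. It is our illustration, not part of the paper's formalism: the encoding of 1-brackets as frozensets and of 2-brackets as (projection, components) pairs is an assumption we make for the example.

```python
def top_face(n):
    """(B, 2B) for the greatest face of W_n, where n = (n_1, ..., n_r)."""
    r = len(n)
    # 1-brackets: the full bracket {1, ..., r} plus all singletons.
    B = {frozenset(range(1, r + 1))} | {frozenset({i}) for i in range(1, r + 1)}
    # The unique top-level 2-bracket: it projects to {1, ..., r}, and its
    # i-th component contains all n_i marked points on the i-th seam.
    twoB = {(frozenset(range(1, r + 1)),
             tuple(frozenset(range(1, n_i + 1)) for n_i in n))}
    # One singleton 2-bracket ({i}, ({j})) for each marked point j on seam i.
    twoB |= {(frozenset({i}), (frozenset({j}),))
             for i, n_i in enumerate(n, start=1)
             for j in range(1, n_i + 1)}
    return B, twoB

def dim_W(n):
    """Dimension formula from Thm. [thm:main]: |n| + r - 3."""
    return sum(n) + len(n) - 3

B, twoB = top_face((2, 1, 1))
print(dim_W((2, 1, 1)))  # |n| + r - 3 = 4 + 3 - 3 = 4
```

For instance, for ${\mathbf{n}}= (2,1,1)$ the sketch produces $|{\mathscr{B}}| = 4$ one-brackets, $1 + |{\mathbf{n}}|$ two-brackets, and dimension $4$.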
\[prop:Wn\_diamond\] For any $r \geq 1$ and ${\mathbf{n}}\in {\mathbb{Z}}^r_{\geq0}\setminus\{{\mathbf{0}}\}$, ${\widehat}{W_{\mathbf{n}}}$ satisfies [<span style="font-variant:small-caps;">(diamond)</span>]{}.
We must show that for every $F < G$ in ${\widehat}{W_{\mathbf{n}}}$ with $d(G) - d(F) = 2$, the open interval $(F,G)$ contains exactly 2 elements. In the following steps, we prove this in the cases $F \neq F_{-1}$ and $F = F_{-1}$.
[*Step 1: We show that for $F < G$ in $W_{\mathbf{n}}$ with $d(G) - d(F) = 2$, the open interval $(F,G)$ contains exactly 2 elements.*]{}
In this step, we work with $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$. Fix $2T, 2T' \in W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ with $2T' < 2T$ and $d(2T) - d(2T') = 2$. Then $2T'$ can be obtained from $2T$ by applying two moves; we must prove that there are exactly two elements of the open interval $(2T',2T)$. There are nine cases to consider, depending on whether each of the two moves is of type 1, 2, or 3. This proof quickly becomes repetitive, so we only give details in the case of two type-3 moves.
Suppose that $2T'$ is indeed the result of applying two type-3 moves to $2T$, and that the modifications to $T_b$ are as in the upper-left and upper-right arrows of the following figure (where an arrow indicates a single move, and the adjacent number indicates the type of move):
(Here we have chosen a partition of ${\mathbf{a}}$ as ${\mathbf{a}}= \sum_{i=1}^q {\mathbf{b}}^i$, then chosen a partition of ${\mathbf{b}}^p$ as ${\mathbf{b}}^p = \sum_{j=1}^r {\mathbf{c}}^j$.) Then there is exactly one other element of the open interval $(2T',2T)$: the one obtained from $2T$ by replacing the portion of $T_b$ on the left of the figure by the bottom configuration in the figure. Note that this alternate path from $2T$ to $2T'$ consists not of two type-3 moves, but of a type-3 move followed by a type-1 move.
The other subcases are simpler than this one, and the remaining eight cases are similar; rather than include the details, we show in Table \[tab:diamond\_ex\] representative examples of the diamond property.
[Table \[tab:diamond\_ex\]: a $3\times3$ grid of figures, one for each ordered pair of move types; the columns are labeled type-1, type-2, and type-3.]
: In this step we show that for any $2T, 2T' \in W^{{{\operatorname}{tree}}}_{\mathbf{n}}$ with $d(2T)-d(2T')=2$, the open interval $(2T',2T)$ contains two elements. Here we illustrate this fact in nine cases: $2T'$ can be obtained from $2T$ by applying two moves, and these moves can be of type 1, 2, or 3. In each case, the four configurations are the bubble trees $T_b$ of a stable tree-pair; the seam trees can be inferred from the bubble tree. On the left is $2T$, on the right is $2T'$, and the remaining two stable tree-pairs are the two elements of $(2T',2T)$. The arrows indicate moves, and their labels are the types.[]{data-label="tab:diamond_ex"}
[*Step 2: We show that for $G \in W_{\mathbf{n}}$ with $d(G) = 1$, $(F_{-1},G)$ contains exactly 2 elements.*]{}
In this step, we again work with $W_{\mathbf{n}}^{{{\operatorname}{tree}}}$. Fix $2T \in W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ with $d(2T) = 1$. It follows from Lemma \[lem:Wn\_dim\] that the vertices in $2T$ satisfy exactly one of three valency conditions, which we treat in cases below.
- [*Every $\rho \in (T_s)_{{{\operatorname}{int}}}$ has $\#\!{{\operatorname}{in}}(\rho) = 2$. There is a single $\alpha \in V_{{{\operatorname}{comp}}}^1(T_b)$ with $\#\!{{\operatorname}{in}}(\beta) = 3$, where $\beta$ is the incoming neighbor of $\alpha$; every $\gamma \in V_{{{\operatorname}{comp}}}^1(T_b) \setminus \{\alpha\}$ with ${{\operatorname}{in}}(\gamma) \eqqcolon (\delta)$ has $\#\!{{\operatorname}{in}}(\delta) = 2$; and every $\gamma \in V_{{{\operatorname}{comp}}}^{\geq2}(T_b)$ with ${{\operatorname}{in}}(\gamma) \eqqcolon (\delta_1,\delta_2)$ has $\#\!{{\operatorname}{in}}(\delta_1)+\#\!{{\operatorname}{in}}(\delta_2) = 1$.*]{} In this case, the only moves that can be performed on $2T$ are type-1 moves based at $\alpha$. If we denote ${{\operatorname}{in}}(\beta) \eqqcolon (\gamma_1,\gamma_2,\gamma_3)$, then the type-1 moves at $\alpha$ correspond to proper consecutive subsets of $(\gamma_1,\gamma_2,\gamma_3)$ of length at least 2; $(\gamma_1,\gamma_2)$ and $(\gamma_2,\gamma_3)$ are the only such subsets.
- [*Every $\rho \in (T_s)_{{{\operatorname}{int}}}$ has $\#\!{{\operatorname}{in}}(\rho) = 2$. There is a single $\alpha \in V_{{{\operatorname}{comp}}}^{\geq2}(T_b)$ with $\#\!{{\operatorname}{in}}(\beta_1)+\#\!{{\operatorname}{in}}(\beta_2) = 2$, where $\beta_1,\beta_2$ are the incoming neighbors of $\alpha$; every $\gamma \in V_{{{\operatorname}{comp}}}^1(T_b)$ with ${{\operatorname}{in}}(\gamma) \eqqcolon (\delta)$ has $\#\!{{\operatorname}{in}}(\delta) = 2$; and every $\gamma \in V_{{{\operatorname}{comp}}}^{\geq2}(T_b) \setminus \{\alpha\}$ with ${{\operatorname}{in}}(\gamma) \eqqcolon (\delta_1,\delta_2)$ has $\#\!{{\operatorname}{in}}(\delta_1)+\#\!{{\operatorname}{in}}(\delta_2) = 1$.*]{} There are two subcases: either (a) $(\#\!{{\operatorname}{in}}(\beta_1),\#\!{{\operatorname}{in}}(\beta_2))=(1,1)$ or (b) $(\#\!{{\operatorname}{in}}(\beta_1),\#\!{{\operatorname}{in}}(\beta_2)) \in \{(2,0),(0,2)\}$. If (a) holds, the only moves that can be performed on $2T$ are type-3 moves based at $\alpha$. In the notation of the definition of type-3 moves, ${\mathbf{a}}=(1,1)$, and the type-3 moves at $\alpha$ correspond to nontrivial choices of ${\mathbf{b}}^1, \ldots, {\mathbf{b}}^q \in {\mathbb{Z}}_{\geq0}^2\setminus\{{\mathbf{0}}\}$ with $\sum_j {\mathbf{b}}^j = {\mathbf{a}}$. There are two such choices: ${\mathbf{b}}^1 = (1,0)$ and ${\mathbf{b}}^2 = (0,1)$, or ${\mathbf{b}}^1 = (0,1)$ and ${\mathbf{b}}^2 = (1,0)$. On the other hand, if (b) holds, we can either perform a type-1 move or a type-3 move at $\alpha$, and there is only one possible move of each type.
- [*There is a single $\rho \in (T_s)_{{{\operatorname}{int}}}$ with $\#\!{{\operatorname}{in}}(\rho)=3$. Every $\alpha \in V_{{{\operatorname}{comp}}}^1(T_b)$ with ${{\operatorname}{in}}(\alpha) \eqqcolon (\beta)$ has $\#\!{{\operatorname}{in}}(\beta) = 2$; every $\alpha \in V_{{{\operatorname}{comp}}}^{\geq2}(T_b)$ with ${{\operatorname}{in}}(\alpha) \eqqcolon (\beta_1,\beta_2)$ has $\#\!{{\operatorname}{in}}(\beta_1)+\#\!{{\operatorname}{in}}(\beta_2) = 1$; and every $\sigma \in (T_s)_{{{\operatorname}{int}}}\setminus \{\rho\}$ has $\#\!{{\operatorname}{in}}(\sigma) = 2$.*]{} In this case, the only moves that can be performed on $2T$ are type-2 moves based at $\rho$. Denote ${{\operatorname}{in}}(\rho) \eqqcolon (\sigma_1,\sigma_2,\sigma_3)$. The type-2 moves based at $\rho$ correspond to (1) a choice of a proper consecutive subset of ${{\operatorname}{in}}(\rho)$ of length at least 2 (of which there are two), and (2) for every $\alpha \in V_{{{\operatorname}{comp}}}^{\geq2}(T_b)$ with $f(\alpha)=\rho$, a choice of $q\geq0$ and ${\mathbf{b}}^1,\ldots,{\mathbf{b}}^q \in {\mathbb{Z}}_{\geq0}^\ell\setminus\{{\mathbf{0}}\}$ with $\sum_j {\mathbf{b}}^j = {\mathbf{a}}$, where ${\mathbf{a}}\in {\mathbb{Z}}_{\geq0}^\ell\setminus\{{\mathbf{0}}\}$ is defined by setting ${{\operatorname}{in}}(\alpha) \eqqcolon (\beta_1,\beta_2,\beta_3)$ and $a_i \coloneqq \#\!{{\operatorname}{in}}(\beta_{p+i})$. By assumption, for every such $\alpha$, we have $|{\mathbf{a}}|=1$. Therefore there is exactly one choice of the decomposition ${\mathbf{a}}= \sum_j {\mathbf{b}}^j$, so there are two type-2 moves based at $\rho$.
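Two counts recur in the cases above: proper consecutive subsets of length at least 2, and nontrivial ordered decompositions ${\mathbf{a}}= \sum_j {\mathbf{b}}^j$ into nonzero vectors. The following Python sketch (an illustration we add, with hypothetical helper names) enumerates both by brute force and recovers the counts used in the proof:

```python
from itertools import product

def consecutive_subsets(k, min_len=2):
    """Proper consecutive subsets of (x_1, ..., x_k) of length >= min_len,
    returned as (start index, length) pairs."""
    return [(s, l) for l in range(min_len, k)        # l < k keeps them proper
                   for s in range(0, k - l + 1)]

def decompositions(a):
    """Nontrivial ordered decompositions a = b^1 + ... + b^q (q >= 2) into
    nonzero vectors in Z_{>=0}^2, found by brute-force recursion."""
    vecs = [v for v in product(range(a[0] + 1), range(a[1] + 1)) if v != (0, 0)]
    out = []
    def rec(remaining, acc):
        if remaining == (0, 0):
            if len(acc) >= 2:          # exclude the trivial q = 1 choice
                out.append(tuple(acc))
            return
        for v in vecs:
            if v[0] <= remaining[0] and v[1] <= remaining[1]:
                rec((remaining[0] - v[0], remaining[1] - v[1]), acc + [v])
    rec(a, [])
    return out

print(len(consecutive_subsets(3)))  # 2: (gamma_1,gamma_2) and (gamma_2,gamma_3)
print(decompositions((1, 1)))       # the two choices listed in subcase (a)
```

The first count matches the two type-1 moves in the first case, and the second matches the two type-3 moves in subcase (a).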
\[prop:Wn\_conn\] For any $r \geq 1$ and ${\mathbf{n}}\in {\mathbb{Z}}^r_{\geq0}\setminus\{{\mathbf{0}}\}$, ${\widehat}{W_{\mathbf{n}}}$ is strongly connected.
We must show that for every $F < G$ in ${\widehat}{W_{\mathbf{n}}}$ with $d(G) - d(F) \geq 3$, $[F,G]$ is connected, i.e. any two elements in the open interval $(F,G)$ can be connected by a path contained in $(F,G)$. In the following steps, we prove this in the cases $F \neq F_{-1}$ and $F = F_{-1}$.
[*Step 1: For $s\geq 1$ and ${\mathbf{m}}^1,\ldots,{\mathbf{m}}^\ell \in {\mathbb{Z}}^s_{\geq0}\setminus\{{\mathbf{0}}\}$, define a dimension function on the completed fiber product $P\coloneqq \{F_{-1}\} \cup \prod_{1\leq i\leq \ell}^{K_s^{{{\operatorname}{tree}}}} W_{{\mathbf{m}}^i}^{{{\operatorname}{tree}}}$ like so: $$\begin{aligned}
\label{eq:d_on_fiber_product}
d\bigl(T,(2T^{(i)})_i\bigr) \coloneqq d(T)+\sum_{1\leq i\leq\ell} \bigl(d(2T^{(i)})-d(T)\bigr), \qquad d(F_{-1}) \coloneqq -1.\end{aligned}$$ Denote the maximal element of $P$ by $F_{{{\operatorname}{top}}}$. For any $G \in P$ with $d(F_{{{\operatorname}{top}}})-d(G) \geq 3$, the interval $[G,F_{{{\operatorname}{top}}}]$ is connected.* ]{}
We divide this step into two cases, depending on whether or not $G$ is the minimal element $F_{-1}$.
First, suppose $G = F_{-1}$. The condition $d(F_{{{\operatorname}{top}}}) - d(G) \geq 3$ translates into the condition $s-2+\sum_{1\leq i\leq \ell} (|{\mathbf{m}}^i|-1) \geq 2$, which implies that at least one of these conditions holds:
- $s \geq 4$.
- $s \geq 3$ and there exists $i$ with $|{\mathbf{m}}^i| \geq 2$.
- There exist $i \neq j$ with $|{\mathbf{m}}^i| \geq 2$ and $|{\mathbf{m}}^j| \geq 2$.
- There exists $i$ with $|{\mathbf{m}}^i| \geq 3$.
In case (a), it suffices to show that for any face $H \in (F_{-1},F_{{{\operatorname}{top}}})$ with $d(H) = d(F_{{{\operatorname}{top}}})-1$, there is a path in $(F_{-1},F_{{{\operatorname}{top}}})$ from $H$ to the following element of $P$:
This can be shown by an argument similar to the one made in Prop. \[prop:Kr\_polytope\] to prove the [(strongly connected)]{} property for the associahedra. The same is true for cases (b–d).
Second, we must show that $[G,F_{{{\operatorname}{top}}}]$ is connected for $G \neq F_{-1}$. We do so by using the 2-bracketing model for 2-associahedra. Write $G = \bigl({\widehat}{\mathscr{B}},({\widehat}{{2\mathscr{B}}}_i)_{i=1}^\ell\bigr)$. Throughout this proof we often abbreviate 2-bracketings by their collection of 2-bracketings and omit the underlying 1-bracketing, as this 1-bracketing will be evident. Fix distinct $F^{(j)}\coloneqq\bigl({\mathscr{B}}^{(j)},({2\mathscr{B}}^{(j)}_i)_{i=1}^\ell\bigr) \in (G,F_{{{\operatorname}{top}}}) \subset P$ for $j \in \{1,2\}$; we must show that there is a path between these elements within $(G,F_{{{\operatorname}{top}}})$. Without loss of generality, we may assume $d(F^{(j)})=d(F_{{{\operatorname}{top}}})-1$ for $j \in \{1,2\}$. It follows that each $F^{(j)}$ can be obtained in exactly one of the following ways, where we write $F_{{{\operatorname}{top}}}\eqqcolon \bigl({\mathscr{B}},({2\mathscr{B}}_i)_i\bigr)$:
- Perform a type-1 move on a single ${2\mathscr{B}}_i$.
- Perform a single move ${\mathscr{B}}\to {\mathscr{B}}'$, then perform a type-2 move ${2\mathscr{B}}_i \to {2\mathscr{B}}'_i$ for every $1 \leq i \leq \ell$, such that $\pi({2\mathscr{B}}'_i) = {\mathscr{B}}'$.
- Perform a type-3 move on a single ${2\mathscr{B}}_i$.
Define ${\widetilde}F\coloneqq \bigl({\mathscr{B}}^{(1)}\cup{\mathscr{B}}^{(2)},({2\mathscr{B}}^{(1)}_i\cup{2\mathscr{B}}^{(2)}_i)_i\bigr)$. Then ${\widetilde}F$ is again an element of the fiber product $P$, where for $B_0 \in {\mathscr{B}}^{(1)} \cup {\mathscr{B}}^{(2)}$ and $1 \leq i \leq \ell$ the partial order on $({2\mathscr{B}}^{(1)}_i\cup{2\mathscr{B}}^{(2)}_i)_{B_0}$ is induced by the partial order on the larger collection $({\widehat}{{2\mathscr{B}}}_i)_{B_0}$. If (1) holds for $F^{(j)}$ for both $j=1$ and $j=2$, then $d({\widetilde}F) = d(F_{{{\operatorname}{top}}})-2$; we can then connect $F^{(1)}$ and $F^{(2)}$ by the path $F^{(1)}\to {\widetilde}F\to F^{(2)}$. The same is true in all of the other cases, except in the cases that (2) holds for $j \in \{1,2\}$, or that (3) holds for $j\in\{1,2\}$: here, $d({\widetilde}F)$ could be less than $d(F_{{{\operatorname}{top}}})$. The constructions in these two situations are similar, so we assume that (3) holds for $j \in \{1,2\}$. Denote by $i_1$ resp. $i_2$ the indices with the property that $F^{(j)}$ is obtained from $F_{{{\operatorname}{top}}}$ by performing a type-3 move on ${2\mathscr{B}}_{i_j}$. If $i_1\neq i_2$, then $d({\widetilde}F) = d(F_{{{\operatorname}{top}}})-2$ and we can again use the path $F^{(1)} \to {\widetilde}F \to F^{(2)}$. Suppose, on the other hand, that $i_1=i_2\eqqcolon i_0$. If $d({\widehat}{{2\mathscr{B}}}_{i_0})=d({2\mathscr{B}}_{i_0})-2$, we can again use the construction described above. Otherwise, it suffices to show that there is a path from ${2\mathscr{B}}^{(1)}_{i_0}$ to ${2\mathscr{B}}^{(2)}_{i_0}$ within $({\widehat}{{2\mathscr{B}}}_{i_0},F_{{{\operatorname}{top}}}^{W_{{\mathbf{m}}^{i_0}}})$. Towards this, we express ${2\mathscr{B}}^{(1)}_{i_0}$ and ${2\mathscr{B}}^{(2)}_{i_0}$ like so: $$\begin{aligned}
{2\mathscr{B}}^{(j)}_{i_0} &= \Bigl\{\bigl(\{1,\ldots,s\},(\{1,\ldots,m^{i_0}_1\},\ldots,\{1,\ldots,m^{i_0}_s\})\bigr)\Bigr\}
\cup
\left\{\bigl(\{k\},(\{\ell\})\bigr) \:\left|\: {{1\leq k\leq s}\atop{1\leq \ell\leq m^{i_0}_k}}\right.\right\}
\\
&\hspace{2.5in} \cup \left\{\bigl(\{1,\ldots,s\},(A^{(j)}_{t,1},\ldots,A^{(j)}_{t,s})\bigr) \:\Big|\: 1 \leq t \leq q^{(j)}\right\}
\nonumber\\
&\eqqcolon {2\mathscr{B}}_{{{\operatorname}{top}}}\cup \left\{\bigl(\{1,\ldots,s\},(A^{(j)}_{t,1},\ldots,A^{(j)}_{t,s})\bigr) \:\Big|\: 1 \leq t \leq q^{(j)}\right\}.
\nonumber\end{aligned}$$ To define our path from ${2\mathscr{B}}^{(1)}_{i_0}$ to ${2\mathscr{B}}^{(2)}_{i_0}$, we begin by examining $(A^{(1)}_{1,1},\ldots,A^{(1)}_{1,s})$ and $(A^{(2)}_{1,1},\ldots,A^{(2)}_{1,s})$. If these sequences of sets are equal, we do nothing. If they are not equal, then because ${2\mathscr{B}}^{(1)}_{i_0}$, ${2\mathscr{B}}^{(2)}_{i_0}$ are bounded from below by ${\widehat}{2\mathscr{B}}_{i_0}$, there must exist $q' \geq 2$ such that one of the following equalities holds: $$\begin{aligned}
\label{eq:conn_containments}
(A^{(1)}_{1,1},\ldots,A^{(1)}_{1,s}) &= (A^{(2)}_{1,1}\cup\cdots\cup A^{(2)}_{q',1},\ldots,A^{(2)}_{1,s}\cup\cdots\cup A^{(2)}_{q',s}), \\
(A^{(2)}_{1,1},\ldots,A^{(2)}_{1,s}) &= (A^{(1)}_{1,1}\cup\cdots\cup A^{(1)}_{q',1},\ldots,A^{(1)}_{1,s}\cup\cdots\cup A^{(1)}_{q',s}).
\nonumber\end{aligned}$$ Suppose that the first equality holds. Then we define the first two steps in our path like so: $$\begin{aligned}
\Bigl(&{2\mathscr{B}}^{(1)}_{i_0} = {2\mathscr{B}}_{{{\operatorname}{top}}}\cup \left\{\bigl(\{1,\ldots,s\},(A^{(1)}_{t,1},\ldots,A^{(1)}_{t,s})\bigr) \:\Big|\: 1 \leq t \leq q^{(1)}\right\}, \\
& {2\mathscr{B}}_{{{\operatorname}{top}}}\cup \left\{\bigl(\{1,\ldots,s\},(A^{(1)}_{t,1},\ldots,A^{(1)}_{t,s})\bigr) \:\Big|\: 1 \leq t \leq q^{(1)}\right\} \cup \left\{\bigl(\{1,\ldots,s\},(A^{(2)}_{t,1},\ldots,A^{(2)}_{t,s})\bigr) \:\Big|\: 1 \leq t \leq q'\right\},
\nonumber\\
& {2\mathscr{B}}_{{{\operatorname}{top}}}\cup \left\{\bigl(\{1,\ldots,s\},(A^{(1)}_{t,1},\ldots,A^{(1)}_{t,s})\bigr) \:\Big|\: 2 \leq t \leq q^{(1)}\right\} \cup \left\{\bigl(\{1,\ldots,s\},(A^{(2)}_{t,1},\ldots,A^{(2)}_{t,s})\bigr) \:\Big|\: 1 \leq t \leq q'\right\}\Bigr).
\nonumber\end{aligned}$$ If, on the other hand, the second equation in \[eq:conn\_containments\] holds, we define the first two steps in our path like so: $$\begin{aligned}
\Bigl(&{2\mathscr{B}}^{(1)}_{i_0} = {2\mathscr{B}}_{{{\operatorname}{top}}}\cup \left\{\bigl(\{1,\ldots,s\},(A^{(1)}_{t,1},\ldots,A^{(1)}_{t,s})\bigr) \:\Big|\: 1 \leq t \leq q^{(1)}\right\}, \\
& {2\mathscr{B}}_{{{\operatorname}{top}}}\cup \left\{\bigl(\{1,\ldots,s\},(A^{(1)}_{t,1},\ldots,A^{(1)}_{t,s})\bigr) \:\Big|\: 1 \leq t \leq q^{(1)}\right\} \cup \left\{\bigl(\{1,\ldots,s\},(A^{(2)}_{1,1},\ldots,A^{(2)}_{1,s})\bigr)\right\},
\nonumber\\
& {2\mathscr{B}}_{{{\operatorname}{top}}}\cup \left\{\bigl(\{1,\ldots,s\},(A^{(1)}_{t,1},\ldots,A^{(1)}_{t,s})\bigr) \:\Big|\: q'+1 \leq t \leq q^{(1)}\right\} \cup \left\{\bigl(\{1,\ldots,s\},(A^{(2)}_{1,1},\ldots,A^{(2)}_{1,s})\bigr)\right\}\Bigr).
\nonumber\end{aligned}$$ By proceeding in this fashion, we can construct a path from ${2\mathscr{B}}^{(1)}_{i_0}$ to ${2\mathscr{B}}^{(2)}_{i_0}$ whose elements are of codimension either 1 or 2 in $W_{{\mathbf{m}}^{i_0}}$ and which are bounded below by $G$.
[*Step 2: For $G \in W_{\mathbf{n}}$ with $d(G) \geq 2$, $[F_{-1},G]$ is connected.*]{}
Fix $2T \in W_{\mathbf{n}}^{{{\operatorname}{tree}}}$ with $d(2T) \geq 2$. By Def.-Lem. \[deflem:Gamma2T\], we have the following formula for $[F_{-1},2T]$: $$\begin{aligned}
\label{eq:Wn_conn_decomp}
[F_{-1},2T]
\simeq
\{F_{-1}\}
\cup
\prod_{
{\alpha \in V_{{{\operatorname}{comp}}}^1(T_b)}
\atop
{{{\operatorname}{in}}(\alpha)=(\beta)}
} W_{\#\!{{\operatorname}{in}}(\beta)}^{{{\operatorname}{tree}}}\times
\prod_{\rho \in V_{{{\operatorname}{int}}}(T_s)} \prod^{K^{{{\operatorname}{tree}}}_{\#\!{{\operatorname}{in}}(\rho)}}_{
{\alpha\in V_{{{\operatorname}{comp}}}^{\geq2}(T_b)\cap f^{-1}\{\rho\}}
\atop
{{{\operatorname}{in}}(\alpha)=(\beta_1,\ldots,\beta_{\#\!{{\operatorname}{in}}(\rho)})}
}
\hspace{-0.25in} W^{{{\operatorname}{tree}}}_{\#\!{{\operatorname}{in}}(\beta_1),\ldots,\#\!{{\operatorname}{in}}(\beta_{\#\!{{\operatorname}{in}}(\rho)})}.\end{aligned}$$ A calculation using Lemma \[lem:Wn\_dim\] yields the following equality: $$\begin{aligned}
\label{eq:fiber_decomp_of_d}
d(G)
=
\sum_{{\alpha \in V_{{{\operatorname}{comp}}}^1(T_b)}
\atop
{{{\operatorname}{in}}(\alpha) = (\beta)}} \dim(W_{\#\!{{\operatorname}{in}}(\beta)}^{{{\operatorname}{tree}}})
+
\sum_{\rho \in V_{{{\operatorname}{int}}}(T_s)}
\dim\left(\prod^{K^{{{\operatorname}{tree}}}_{\#\!{{\operatorname}{in}}(\rho)}}_{
{\alpha\in V_{{{\operatorname}{comp}}}^{\geq2}(T_b)\cap f^{-1}\{\rho\}}
\atop
{{{\operatorname}{in}}(\alpha)=(\beta_1,\ldots,\beta_{\#\!{{\operatorname}{in}}(\rho)})}
}
\hspace{-0.25in} W^{{{\operatorname}{tree}}}_{\#\!{{\operatorname}{in}}(\beta_1),\ldots,\#\!{{\operatorname}{in}}(\beta_{\#\!{{\operatorname}{in}}(\rho)})}\right)\end{aligned}$$ (We have not shown that the fiber products are abstract polytopes. The dimension of this fiber product should be interpreted as the dimension of the top face, using the dimension function $d$ defined in \[eq:d\_on\_fiber\_product\].) The inequality $d(G)\geq 2$ implies that either (a) one of the posets in \[eq:Wn\_conn\_decomp\] has dimension at least 2 or (b) at least two of the posets in \[eq:Wn\_conn\_decomp\] have positive dimension. If (b) holds, $[F_{-1},G]$ is clearly connected. Next, suppose (a) holds. If $\dim(W_{\#\!{{\operatorname}{in}}(\beta)}^{{{\operatorname}{tree}}}) \geq 2$ for some $\alpha \in V_{{{\operatorname}{comp}}}^1(T_b), {{\operatorname}{in}}(\alpha) = (\beta)$, then the connectedness of $[F_{-1},G]$ follows from the isomorphism $W_{\#\!{{\operatorname}{in}}(\beta)}^{{{\operatorname}{tree}}}\simeq K_{\#\!{{\operatorname}{in}}(\beta)}^{{{\operatorname}{tree}}}$ proven in Lemma \[lem:WnKn\] and the strong connectedness of the associahedra proven in Prop. \[prop:Kr\_polytope\]. If one of the fiber products in \[eq:Wn\_conn\_decomp\] has dimension at least 2, then the connectedness of $[F_{-1},G]$ follows from Step 1.
[*Step 3: For $F < G$ in $W_{\mathbf{n}}$ with $d(G) - d(F) \geq 3$, $[F,G]$ is connected.*]{}
The argument in Step 2 applies equally well to this case.
2- and 3-dimensional 2-associahedra {#app:ex}
===================================
In this appendix, we work out all[^1] the 2- and 3-dimensional 2-associahedra $W_{\mathbf{n}}$. First, we record the face vectors of these examples and note whether or not ${\widehat}{W_{\mathbf{n}}}$ is simple. (An abstract $d$-polytope is *simple* if every 0-face is adjacent to exactly $d$ 1-faces. The author believes that for every $r \geq 2$ and ${\mathbf{n}}\in {\mathbb{Z}}^r_{\geq0}$ with $|{\mathbf{n}}| \geq 4$, ${\widehat}{W_{\mathbf{n}}}$ is not simple.) Second, we give “net” representations of these examples, where each vertex corresponds to a 0-face of $W_{\mathbf{n}}$, each edge to a 1-face, and each polygonal face to a 2-face. In the case of the 3-dimensional examples, we do not represent the 3-dimensional face. Each numbered edge should be identified with the correspondingly numbered edge. Some faces are labelled by the 2-bracketings to which they correspond.
| ${\mathbf{n}}$ | face vector | simple? | ${\mathbf{n}}$ | face vector | simple? |
|:---|:---|:---|:---|:---|:---|
| 30 | (6,6,1) | yes | 40 | (21,32,13,1) | no |
| 21 | (8,8,1) | yes | 31 | (36,56,22,1) | no |
| 200 | (5,5,1) | yes | 22 | (44,69,27,1) | no |
| 110 | (6,6,1) | yes | 300 | (18,27,11,1) | yes |
| 101 | (4,4,1) | yes | 210 | (30,45,17,1) | yes |
| 020 | (6,6,1) | yes | 201 | (18,27,11,1) | yes |
| | | | 120 | (32,48,18,1) | yes |
| | | | 030 | (24,36,14,1) | yes |
| | | | 2000 | (14,21,9,1) | yes |
| | | | 1100 | (18,27,11,1) | yes |
| | | | 1010 | (14,21,9,1) | yes |
| | | | 1001 | (10,15,7,1) | yes |
| | | | 0200 | (18,27,11,1) | yes |
| | | | 0110 | (22,33,13,1) | yes |
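As a quick sanity check on the face vectors in the table (this check is our addition, not part of the original text), every $d$-polytope satisfies Euler's relation $\sum_i (-1)^i f_i = 1-(-1)^d$, and a simple 3-polytope has every vertex on exactly 3 edges, i.e. $2f_1 = 3f_0$. A short Python sketch verifies both against the 3-dimensional entries:

```python
# Face vectors (f_0, f_1, f_2) of the 3-dimensional 2-associahedra listed above,
# keyed by the index vector n; the top face is omitted.
FACE_VECTORS_3D = {
    "40": (21, 32, 13), "31": (36, 56, 22), "22": (44, 69, 27),
    "300": (18, 27, 11), "210": (30, 45, 17), "201": (18, 27, 11),
    "120": (32, 48, 18), "030": (24, 36, 14), "2000": (14, 21, 9),
    "1100": (18, 27, 11), "1010": (14, 21, 9), "1001": (10, 15, 7),
    "0200": (18, 27, 11), "0110": (22, 33, 13),
}

def euler_ok(f):
    """Euler's relation for a 3-polytope: f_0 - f_1 + f_2 = 2."""
    return f[0] - f[1] + f[2] == 2

def is_simple_3d(f):
    """In a simple 3-polytope every vertex meets exactly 3 edges, so 2 f_1 = 3 f_0."""
    return 2 * f[1] == 3 * f[0]

for n, f in FACE_VECTORS_3D.items():
    assert euler_ok(f), n
# Exactly the first three entries fail simplicity, matching the table.
not_simple = [n for n, f in FACE_VECTORS_3D.items() if not is_simple_3d(f)]
assert not_simple == ["40", "31", "22"]
```

The same alternating-sum check on the 2-dimensional entries gives $f_0 - f_1 = 0$, as it must for a polygon.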
(Figures: “net” representations of the 2- and 3-dimensional 2-associahedra listed above.)
[^1]: We make use of the identity $W_{n_1,n_2\ldots,n_r} \simeq W_{n_r,n_{r-1},\ldots,n_1}$. Also, we do not include 2-associahedra of the form $W_{n_1}$ or $W_{0,\ldots,0,1,0,\ldots,0}$, since these can be identified with associahedra via Lemma \[lem:WnKn\] and the following remark.
---
address: |
Università Roma Tre and INFN\
Via della Vasca Navale, 84 - 00184 Rome, Italy
author:
- |
F. PETRUCCI\
*[on behalf of the ATLAS and CMS collaborations]{}*
title: Standard Model Physics with ATLAS and CMS
---
Introduction {#subsec:prod}
============
The Large Hadron Collider (LHC) is a proton-proton collider designed to provide collisions at a center-of-mass energy of 14 TeV with a nominal luminosity of 10$^{34}$ cm$^{-2}$s$^{-1}$. The LHC was built at CERN as a discovery machine, with the goal of finding the Higgs boson and any possible evidence of new physics beyond the Standard Model (SM). Two general-purpose experiments (ATLAS and CMS [@CMSTDR1]) have been constructed to study the collisions provided by the LHC. They are currently in the final commissioning phase [@ATLAScomm; @CMScomm] and are ready to collect data.
Before any discovery can be claimed, a set of key issues must be addressed by the experiments: the response of the detectors must be understood in detail, and SM processes (W, Z, t) must be accurately measured, as they can be considered benchmark processes for assessing the quality of the measurements.
The LHC can provide stringent tests of the consistency of the Standard Model by measuring several fundamental parameters. A non-exhaustive list includes the W boson mass and width (M$_{W}$ and $\Gamma_{W}$, through the W boson decay distributions), the top quark mass m$_{t}$, and sin$^2$$\theta_{W}$ via the Z forward-backward asymmetry. In addition, some processes (e.g. rare top decays) have a direct sensitivity to new physics. Cross sections that are known theoretically with high accuracy (such as those of the vector bosons) are crucial to test QCD predictions in an unexplored regime and to measure Parton Density Functions (PDFs). Moreover, SM processes should be carefully studied at the LHC because they are a background to many new-physics channels.
At the time of writing these proceedings, April 2009, the first proton-proton collisions are foreseen before the end of the year. It is planned to run without long intervals during the winter, with the goal of integrating $\sim$200 pb$^{-1}$. The energy per beam will be 450 GeV at the beginning and will rise up to 5 TeV.
In this paper, the expected results for some measurements of SM processes at the LHC are presented to show the status of the analyses and the expected performance. More details and the description of the analyses not shown here can be found in [@ATLASCSC; @CMSTDR2] and in the references cited in the text. The studies are based on detailed simulations of the physics processes and of the detectors. The inclusive Z and W cross sections measurements are discussed in section \[wzxsec\]; the measurement of the W mass is described in section \[wmass\] and the top quark pair production cross section and top mass measurements are presented in section \[top\].
Inclusive W and Z cross sections measurements. {#wzxsec}
==============================================
The study of the production of W and Z events at the LHC is fundamental in several respects. First, the calculation of higher order corrections is very advanced, with a small theoretical uncertainty ($<$1%). Such precision makes W and Z production a stringent test of QCD.
Z and W production cross sections are expected to be very large. For a center-of-mass energy of 10 TeV (14 TeV) we will have $\sigma$$_{W\rightarrow l\nu}$=14.3 nb (20.5 nb) and $\sigma$$_{Z\rightarrow ll}$=1.35 nb (2.02 nb); the calculations are at NLO accuracy. The experimental signatures for these processes are very clean (in particular Z$\rightarrow ll$). Thus, they will be extensively used as $standard$ $candle$ processes for understanding the experiments and tuning the Monte Carlo: providing calibration and alignment of the detectors, setting the energy scales and resolutions, and measuring the lepton efficiencies.
Finally, the clean and fully reconstructed leptonic final states in Z events will allow a precise measurement of the transverse momentum and rapidity distributions. These distributions will constrain non-perturbative QCD aspects and the PDFs. The high expected statistics will bring significant improvements in all these aspects, and these improvements translate to virtually all physics at the LHC, where strong-interaction and PDF uncertainties are a common factor.
The selection of Z$\rightarrow\mu\mu$ events (the CMS analysis [@CMSZWmu] is presented) starts by requiring a single muon at the trigger level. Two high-p$_T$ muons (p$_T$$>$20.0 GeV) with opposite charge must be reconstructed and must be isolated ($\sum$p$_T$$<$3 GeV for all the other tracks in a cone $\Delta$R$<$0.3).
In the case of Z$\rightarrow$$ee$ (the ATLAS analysis [@ATLASCSC] is discussed), the trigger requires one electron with p$_T$$>$10 GeV. Two clusters in the Electromagnetic Calorimeter (E$_T$$>$15 GeV) are then required at the reconstruction stage; they must be isolated ($\sum$E$_T$/E$_T$$^e$$<$0.2 in a cone $\Delta$R$<$0.45, where the electron energy is excluded from $\sum$E$_T$). These analyses are based on robust cuts in order to be safe against harsh experimental conditions; for example, neither a common vertex nor a cut on the track impact parameter is required.
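The dimuon cut flow described above can be sketched in a few lines of Python. This is an illustrative sketch only: the event-record fields (`pt`, `eta`, `phi`, `charge`) are hypothetical names, not an actual experiment data format.

```python
from math import hypot, pi

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance dR = sqrt(d_eta^2 + d_phi^2), with phi wrapped to [0, pi]."""
    dphi = abs(phi1 - phi2)
    if dphi > pi:
        dphi = 2 * pi - dphi
    return hypot(eta1 - eta2, dphi)

def tracker_isolation(muon, tracks, cone=0.3):
    """Scalar sum of track pT (excluding the muon itself) in a cone dR < 0.3."""
    return sum(t["pt"] for t in tracks
               if t is not muon
               and delta_r(muon["eta"], muon["phi"], t["eta"], t["phi"]) < cone)

def select_z_mumu(muons, tracks, pt_cut=20.0, iso_cut=3.0):
    """Require two opposite-charge, isolated muons with pT > 20 GeV."""
    good = [m for m in muons
            if m["pt"] > pt_cut and tracker_isolation(m, tracks) < iso_cut]
    return any(m1["charge"] * m2["charge"] < 0
               for i, m1 in enumerate(good) for m2 in good[i + 1:])
```

For example, two well-separated isolated muons of 35 and 28 GeV with opposite charge pass the selection, while a single muon does not.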
The background estimation can be performed from the sidebands and/or from a simultaneous fit to signal and background. In any case, this is a low-background sample, in particular in the muon channel. The results of the selection are shown in figure \[zmumu\] for the Z$\rightarrow\mu\mu$ channel.
The selection of W$\rightarrow$$e\nu$ events (the CMS cuts [@CMSZWe] are reported) starts by requiring a single isolated electron at the trigger level. A high-E$_T$ electron (E$_{T}$$>$30 GeV) must be reconstructed; it is required to be isolated in the tracker (no tracks with p$_{T}>$1.5 GeV, except for the one of the electron, within a cone $\Delta$R$<$0.6), in the electromagnetic calorimeter ($\sum$E$_T$/E$_T^{e}<$0.02; $\Delta$R$<$0.3) and in the hadronic calorimeter ($\sum$E$_T$/E$_T^{e}<$0.10; 0.15$<$$\Delta$R$<$0.3). Events with a second electron having E$_{T}>$20 GeV are rejected.
W$\rightarrow\mu\nu$ events are obtained (in the ATLAS analysis [@ATLASCSC]) with one muon having p$_T$$>$20 GeV at the trigger level. A high p$_T$ muon (p$_T$$>$25 GeV) should be reconstructed and must be isolated (the energy deposited in the calorimeter around the muon track in a cone $\Delta$R$<$0.4 should be lower than 5 GeV). Cuts on E$_T^{Miss}$$>$25 GeV and M$_{TW}$$>$40 GeV are also applied.
In the electron final state the dominant background consists of jet final-state events. To determine the shape of this background two methods are foreseen: CMS proposes to use a data sample passing the electron selection with the isolation criteria inverted, while ATLAS exploits a $\gamma$+jet sample obtained using the same selection as for the signal but requiring no matching tracks in the Inner Detector.
In the muon final state the background comes from Z$\rightarrow\mu\mu$ and W$\rightarrow\tau\nu$ events. As these processes are well understood, the shape of the background will be obtained from Monte Carlo simulation. In figure \[wmunu\] the transverse mass distribution is shown for the W$\rightarrow \mu\nu$ case. $$\label{xseceq}
\sigma_{W(Z)}=\frac{N^{obs}_{W(Z)} - B_{W(Z)}}{\epsilon_{W(Z)}\cdot A_{W(Z)}\cdot\int\textit{L}dt} \hspace{0.5cm} ;\hspace{0.5cm}
\frac{d\sigma_{W(Z)}}{\sigma}=\frac{dN\oplus dB}{N-B}\oplus\frac{d\epsilon}{\epsilon}\oplus\frac{dA}{A}\oplus\frac{dL}{L}$$ The cross sections and the relative errors are computed as in eq. \[xseceq\]. $N^{obs}_{W(Z)}$ is the number of selected events; $B_{W(Z)}$ is the estimated number of background events; $\epsilon_{W(Z)}$ and $A_{W(Z)}$ are, respectively, the trigger and reconstruction efficiency and the acceptance; and $\int\textit{L}dt$ is the integrated luminosity. The efficiencies will be computed from data using the Tag and Probe technique, while the acceptance will be computed using the Monte Carlo. At the beginning the dominant uncertainty ($\sim$10$\%$) will come from the luminosity measurement. This number will be reduced with a better knowledge of the machine and when ALFA [@ALFA] comes into play.
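Eq. \[xseceq\] can be sketched directly in code: background-subtracted yield divided by efficiency, acceptance and luminosity, with the relative errors combined in quadrature. The numerical inputs below are illustrative, not values from the analyses.

```python
from math import sqrt

def cross_section(n_obs, bkg, eff, acc, lumi):
    """sigma = (N_obs - B) / (eff * acc * integrated luminosity)."""
    return (n_obs - bkg) / (eff * acc * lumi)

def rel_uncertainty(n_obs, bkg, d_bkg, d_eff, d_acc, d_lumi):
    """Relative error: quadrature sum of the terms in eq. (xseceq)."""
    d_n = sqrt(n_obs)                                 # Poisson fluctuation on N_obs
    d_sig = sqrt(d_n ** 2 + d_bkg ** 2) / (n_obs - bkg)
    return sqrt(d_sig ** 2 + d_eff ** 2 + d_acc ** 2 + d_lumi ** 2)

# Illustrative inputs: 10k selected events, 500 estimated background,
# 1% efficiency/acceptance errors, 10% early-running luminosity error.
sigma = cross_section(10000, 500, eff=0.75, acc=0.40, lumi=10.0)  # lumi in pb^-1
err = rel_uncertainty(10000, 500, d_bkg=50.0, d_eff=0.01, d_acc=0.01, d_lumi=0.10)
```

With these inputs the total relative error is dominated by the 10% luminosity term, as stated in the text.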
The expected uncertainties on the measured cross sections are presented for different values of the integrated luminosity in table \[xsecrestab\]. The error on the luminosity is not taken into account. The uncertainties on the identification and reconstruction efficiencies are expected to be at the level of 1% (3%) in the electron (muon) channel in the initial phase, and to go well below 1% on a longer time scale, after an integrated luminosity of 1 fb$^{-1}$ has been collected. The uncertainty on the background is foreseen to be at the level of 5% ($<$1%) in the electron (muon) channel at the beginning; it will be reduced with more stringent selections as the available statistics increases.
Experiment Process $\int\textit{L}dt$ $\Delta\sigma$/$\sigma$ ($\%$)
------------ ----------------------------------------- -------------------- --------------------------------
CMS pp$\rightarrow$Z+X$\rightarrow$$ee+X$ 10 pb$^{-1}$ 1.9 (stat) $\pm$ 2.3 (syst)
pp$\rightarrow$W+X$\rightarrow$$e+X$ 1.2 (stat) $\pm$ 5 (syst)
ATLAS pp$\rightarrow$Z+X$\rightarrow\mu\mu+X$ 50 pb$^{-1}$ 0.8 (stat) $\pm$ 3.8 (syst)
pp$\rightarrow$W+X$\rightarrow\mu+X$ 0.2 (stat) $\pm$ 3.1 (syst)
CMS pp$\rightarrow$Z+X$\rightarrow\mu\mu+X$ 1 fb$^{-1}$ 0.13 (stat) $\pm$ 2.3 (syst)
pp$\rightarrow$W+X$\rightarrow\mu+X$ 0.04 (stat) $\pm$ 3.3 (syst)
ATLAS pp$\rightarrow$Z+X$\rightarrow$$ee+X$ 0.20 (stat) $\pm$ 2.4 (syst)
pp$\rightarrow$W+X$\rightarrow$$e+X$ 0.04 (stat) $\pm$ 2.5 (syst)
: Expected uncertainties on the measured cross sections; the error on the luminosity is not considered.\[xsecrestab\]
W mass measurement. {#wmass}
===================
The W mass (M$_{W}$) is a fundamental parameter of the SM. It is related to the masses of the top quark and of the Higgs boson and needs to be measured with the highest precision. The LHC aims at improving the current world average (M$_W$=80399$\pm$25 MeV); the W production cross section is 10 times larger than at the Tevatron and the luminosity is higher. W candidate events are selected as described in the previous section. The W mass can be extracted from the distribution of one of the two observables that are most sensitive to M$_{W}$: the transverse momentum of the lepton (p$_{T}^{l}$) and the transverse mass of the lepton-neutrino system (M$_{TW}$). The value of M$_{W}$ is obtained by fitting the measured distributions with template distributions. The two analyses are complementary and are affected by different systematic effects: the shape of the p$_{T}^{l}$ distribution is distorted by the transverse momentum of the W, while M$_{TW}$ is mainly affected by the finite resolution of the detectors. Z events are crucial in this analysis to build the templates and to properly account for experimental quantities like the lepton energy scale, the energy resolution and the reconstruction efficiency. There are several approaches to generating the templates. In the ATLAS analysis, the distributions are generated with the Monte Carlo and then convoluted with the momentum scales, resolutions and missing E$_T$ response measured in Z events. The CMS collaboration uses two methods. The scaled-observable method uses templates obtained by a transformation of the distributions of p$_{T}^{l}$ or of the Z transverse mass into the corresponding quantities for the W. The other method is a kinematic transformation on an event-by-event basis (Morphing Method): the lepton momentum in the Z rest frame is rescaled by the ratio M$_{W}$/M$_Z$, one lepton is removed to simulate the neutrino, and the observables are boosted back to the detector frame.
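The template-fit idea can be sketched as follows: among template histograms generated for a grid of M$_W$ hypotheses, keep the one minimizing a binned $\chi^2$ against the measured distribution. The Gaussian toy shapes below are purely illustrative stand-ins for the real Monte Carlo templates.

```python
from math import exp

def chi2(data_hist, template_hist):
    """Binned chi^2 between a measured histogram and a template."""
    return sum((d - t) ** 2 / max(t, 1e-9) for d, t in zip(data_hist, template_hist))

def best_template_mass(data_hist, templates):
    """templates: dict mapping an M_W hypothesis (GeV) to its expected histogram."""
    return min(templates, key=lambda m: chi2(data_hist, templates[m]))

def toy_hist(mass, bin_centers, width=10.0):
    """Gaussian toy shape standing in for a simulated pT or M_TW template."""
    return [1000.0 * exp(-0.5 * ((x - mass) / width) ** 2) for x in bin_centers]

bins = [40 + 2 * i for i in range(41)]                 # 40-120 GeV in 2 GeV steps
templates = {m: toy_hist(m, bins) for m in (79.4, 80.0, 80.4, 81.0)}
data = toy_hist(80.4, bins)                            # pseudo-data at the "true" mass
assert best_template_mass(data, templates) == 80.4
```

In the real analyses the templates carry the full detector response measured in Z events, and the fit is interpolated between mass points rather than restricted to a discrete grid.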
The ATLAS collaboration studied the result of the analysis also in the case of limited statistics; this is intended as a study to establish the method and to understand what can be done with very early data. The results are listed in table \[wmasstab\] for the different channels and observables.
p$_{T}^{e}$ p$_{T}^{\mu}$ M$_{TW}(e)$ M$_{TW}(\mu)$
-------------------- ------------- --------------- ------------- ---------------
Statistical (MeV) 120 106 61 57
Experimental (MeV) 114 114 230 230
Theo (PDF) (MeV) 25 25 25 25
TOTAL (MeV) 167 158 239 238
: Expected uncertainties on the W mass for 15 pb$^{-1}$ of data.\[wmasstab\]
As the statistics increases, the measurement will become competitive. The CMS analysis performed with 1 fb$^{-1}$ of data, for example with the Morphing Method applied to the muon channel, results in $\Delta$M$_{W}$=40 (stat) $\pm$ 64 (syst.exp) $\pm$ 20 (syst.theo) MeV. The experimental error is dominated by the missing transverse energy scale and resolution, while the theoretical uncertainty is mostly related to the uncertainties in the PDFs. It has been estimated that the systematic uncertainties can be further reduced with increasing statistics; the expected result for 10 fb$^{-1}$ is $\Delta$M$_{W}$=15 (stat) $\pm$ 30 (syst.exp) $\pm$ 10 (syst.theo) MeV.
Top Quark observation and mass measurement. {#top}
===========================================
The top quark was discovered at the Tevatron, and the current value of the top quark mass is M$_{t}$=173.1$\pm$0.6 (stat.)$\pm$1.1 (syst.) GeV [@TEVTMASS]. A more precise measurement of M$_{t}$ is needed for consistency checks of the Standard Model and to constrain the Higgs mass. The LHC offers a great opportunity in this respect because the top quark pair production cross section, mainly via gluon-gluon fusion, is 833 pb (at NLO for $\sqrt{s}$=14 TeV), two orders of magnitude larger than at the Tevatron. This measurement will soon be limited by systematic effects.
The golden channel is the lepton+jets channel, in which the W from the t ($\overline{t}$) quark decays leptonically and the W from the $\overline{t}$ (t) quark decays into 2 jets: $t\overline{t}\rightarrow$$Wb$$+$$W\overline{b}\rightarrow$$(l\nu)b$$+$$(jj)\overline{b}$. This channel can be selected with good purity (the isolated lepton is exploited for triggering); the hadronic side is used to measure the top mass. The dominant background is W/Z+jets events. Other backgrounds are $t\overline{t}$ decays in other channels and single-top events. The multi-jet background (with fake leptons and missing E$_T$) has a very large cross section and a tiny efficiency for the selection cuts; its simulation is difficult, and data-driven methods are needed to estimate this contribution. In any case, it is expected to be lower than W+jets.
The study of $t\overline{t}$ events requires the reconstruction of many relevant experimental signatures (e, $\mu$, jets, missing E$_T$, b-jets). Therefore the observation of the top signal will be a milestone in the physics commissioning of the detectors. As an example, the expectations obtained by CMS [@CMStopobs] on the observation of the top signal with 10 pb$^{-1}$ are presented. The muon plus jets channel is selected by requiring a muon with p$_{T}$$>$30 GeV isolated in the calorimeters (the energy deposit in the calorimeter excluding the muon should be lower than 1 GeV) and in the tracker (dR$_{min}$$>$0.3). At least 4 jets are required, three with E$_{T}$$>$40 GeV and one with E$_{T}$$>$65 GeV. The cut on the number of jets strongly reduces the QCD and W/Z+jets backgrounds. No b-tagging of the jets is used. For 10 pb$^{-1}$ this results in a signal-to-background ratio of 128/90. The overall selection efficiency (including the acceptance) is 10.3%. The shape of the W/Z+jets background is obtained from simulation, and the normalization will come from a comparison with a control sample at low jet multiplicities. The resulting invariant mass distribution of the three jets with the highest summed transverse energy is shown in figure \[topfigcms\].
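The jet-counting part of the preselection above is easy to sketch: an isolated muon above 30 GeV, plus at least four jets of which the leading one exceeds 65 GeV and the next three exceed 40 GeV. The function and argument names are illustrative.

```python
def select_mu_plus_jets(muon_pt, muon_isolated, jet_ets,
                        pt_mu=30.0, et_lead=65.0, et_sub=40.0):
    """Sketch of the CMS muon+jets preselection described in the text."""
    if not (muon_isolated and muon_pt > pt_mu):
        return False
    jets = sorted(jet_ets, reverse=True)
    # Leading jet above 65 GeV; next three jets above 40 GeV.
    return len(jets) >= 4 and jets[0] > et_lead and all(j > et_sub for j in jets[1:4])

assert select_mu_plus_jets(35.0, True, [70.0, 50.0, 45.0, 41.0])
assert not select_mu_plus_jets(35.0, True, [70.0, 50.0, 45.0, 39.0])
```

Sorting the jet E$_T$ values first means the thresholds are always applied to the four hardest jets, regardless of input order.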
In view of larger recorded data samples and of an improved understanding of the reconstruction performance, more refined analyses have been prepared; an example from ATLAS [@ATLASCSC] is described in the following. The selection is based on one isolated lepton with p$_T$$>$20(25) GeV in the case of muons (electrons). A cut of E$_{T}^{Miss}$$>$20 GeV reduces the multi-jet background, while requiring at least 4 jets with p$_{T}$$>$40 GeV rejects W/Z+jets events. Two of the jets are required to be tagged as b-jets.
The Jet Energy Scale (JES) is the main source of systematic uncertainty. Its effect is reduced by rescaling with a minimization procedure: all possible light-jet combinations are tried with a correction to their energy scale, the mass of each pair is constrained to the W mass, and the pair with the best $\chi^2$ is taken. To measure the top mass, the b-jet closest to the chosen pair is used. The invariant mass distribution of the three jets is shown in figure \[topfigatlas\] for an integrated luminosity of 1 fb$^{-1}$. The effect of the light-jet energy scale (reduced with the rescaling) is 0.2 GeV/%. The JES is expected to be known with a precision of 1% with 1 fb$^{-1}$ of data. The uncertainty on the b-jet energy scale produces a larger effect of 0.7 GeV/%; it will initially be derived from the light JES using the Monte Carlo and then complemented with Z+(b-jets) data. The expected uncertainty on M$_{t}$ with 1 fb$^{-1}$ is $\Delta$M$_{t}$=0.3 GeV (stat.)$\pm$1 GeV (syst.).
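The rescaling-and-pairing step can be sketched as follows. Jets are reduced to (E, px, py, pz) tuples, and the discrete scale grid and $\chi^2$ width below are illustrative choices, not the actual parameters of the ATLAS procedure.

```python
from itertools import combinations
from math import sqrt

M_W = 80.4  # GeV

def dijet_mass(j1, j2, scale=1.0):
    """Invariant mass of two jets after applying a common energy-scale factor."""
    e = scale * (j1[0] + j2[0])
    px, py, pz = (scale * (j1[i] + j2[i]) for i in (1, 2, 3))
    return sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def best_w_candidate(jets, scales=(0.95, 1.0, 1.05), sigma=8.0):
    """Return the (jet pair, scale) minimizing chi2 = ((m_jj - M_W) / sigma)^2."""
    return min(((pair, s) for pair in combinations(jets, 2) for s in scales),
               key=lambda c: ((dijet_mass(*c[0], scale=c[1]) - M_W) / sigma) ** 2)
```

For example, with two back-to-back 50 GeV jets (dijet mass 100 GeV) and a soft third jet, the procedure picks the hard pair with the 0.95 scale factor, since that brings the dijet mass closest to M$_W$.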
Conclusions
===========
The LHC will start providing collisions late October this year. The first steps will be understanding the detector response and establishing SM signatures. A strategy for the measurement of the W and Z cross sections has been developed, also for early data; simple and robust selections for electrons and muons have been set up to cope with imperfections in the calibration and alignment of the detectors. The measurements of the W and top quark masses require a detailed understanding of the detectors and will come at a later stage. The Tag and Probe technique (applied to Z events) will provide the selection, reconstruction and trigger efficiencies directly from the data. Several methods to estimate the QCD backgrounds from data have been developed. On a longer timescale, for the precise measurement of SM parameters, the understanding of systematics is crucial.
Acknowledgments {#acknowledgments .unnumbered}
===============
I would like to thank all the ATLAS and CMS colleagues who worked on the analyses and in particular the SM and Top Working group conveners for the discussions and for their advice in preparing this contribution. I thank the organizers for the very interesting and pleasant conference.
References {#references .unnumbered}
==========
[99]{}
The ATLAS Collaboration, The ATLAS Experiment at the CERN Large Hadron Collider, JINST 3 S08003 (2008).
The CMS Collaboration, CMS Physics TDR, Volume I, Detector performance and software, CERN-LHCC-2006-001 (2006).
T. Pauly, Readiness of the ATLAS Experiment for First Data, these proceedings.
E. Meschi, Readiness of the CMS Detector for First Data, these proceedings.
The ATLAS Collaboration, ATLAS Forward Detectors for Measurement of Elastic Scattering and Luminosity, ATLAS TDR 018, CERN/LHCC 2008-04.
The ATLAS Collaboration, Expected Performance of the ATLAS Experiment, Detector, Trigger and Physics, CERN-OPEN-2008-020, Geneva, 2008, to appear.
The CMS Collaboration, CMS Physics TDR, Volume II, Physics performance, CERN-LHCC-2006-021, J. Phys.G: Nucl. Part. Phys. 34 995-1579 (2006).
The CMS Collaboration, Towards a measurement of the inclusive W$\rightarrow\mu\nu$ and Z$\rightarrow\mu\mu$ cross sections in pp collisions at $\sqrt{s}$=14 TeV, CMS PAS EWK-07-002 (2008).
The CMS Collaboration, Towards a measurement of the Inclusive W$\rightarrow$$e\nu$ and $\gamma$$^{*}$/Z$\rightarrow$e$^{+}$e$^{-}$ cross sections in pp collisions at $\sqrt{s}$=14 TeV, CMS PAS EWK-08-005 (2008).
The CMS Collaboration, Prospects for the precision measurement of the W mass with the CMS detector at the LHC, CMS NOTE 2006/061.
Tevatron Electroweak Working Group for the CDF Collaboration and The D0 Collaboration, Combination of CDF and D0 Results on the Mass of the Top Quark, arXiv:0903.2503.
The CMS Collaboration, Observability of Top Quark Pair Production in the Semileptonic Muon Channel with the first 10 pb$^{-1}$ of CMS Data, CMS PAS TOP-08-005 (2008).
---
abstract: 'Model-based testing (MBT) is a well-known technology, which allows for automatic test case generation, execution and evaluation. To test non-functional properties, a number of MBT frameworks have been developed to test systems with real-time, continuous behaviour, symbolic data and quantitative system aspects. Notably, many of these frameworks are based on Tretmans’ classical input/output conformance (ioco) framework. However, a model-based test theory handling probabilistic behaviour does not exist yet. Probability plays a role in many different systems: unreliable communication channels, randomized algorithms and communication protocols, service level agreements pinning down up-time percentages, etc. Therefore, a probabilistic test theory is of great practical importance. We present the ingredients for a probabilistic variant of ioco and define the [[pioco]{}]{}relation, show that it conservatively extends ioco, and define the concepts of test case, execution and evaluation.'
author:
- Marcus Gerhold
- Mariëlle Stoelinga
bibliography:
- 'biblio.bib'
nocite: '[@*]'
title: ioco theory for probabilistic automata
---
Introduction {#sec(Introduction)}
============
Model-based testing (MBT) is a way to test systems more effectively and more efficiently. By generating, executing and evaluating test cases automatically from a formal requirements model, more tests can be executed at a lower cost. A number of MBT tools have been developed, such as the Axini test manager, JTorX [@DBLP:conf/tacas/Belinfante10], STG [@STG], TorXakis [@DBLP:conf/fmics/MostowskiPSTS09], Uppaal-Tron [@DBLP:conf/fortest/HesselLMNPS08; @LMNS05], etc.
A wide variety of model-based test theories exist: the seminal theory of Input/Output conformance [@DBLP:journals/cn/Tretmans96; @conf/fortest/Tretmans08] is able to test functional properties, and has established itself as the robust core with a wide number of extensions. The correct functioning of today’s complex cyberphysical systems, depends not only on functional behaviour, but largely on non-functional, quantitative system aspects, such as real-time and performance. MBT frameworks have been developed to support these aspects: To test timing requirements, such as deadlines, a number of timed ioco-variants have been developed, such as [@DBLP:journals/tcs/BensalemPQT08; @DBLP:conf/fortest/HesselLMNPS08; @DBLP:journals/fmsd/KrichenT09]. Symbolic data can be handled by the frameworks in [@FTW06; @DBLP:journals/entcs/Jeron09]; resources by [@BS08], and hybrid aspects in [@Osch06].
This paper introduces [[pioco]{}]{}, a conservative extension of ioco that is able to handle discrete probabilities. Starting point is a requirements model given as a [*probabilistic quiescent transition system (pQTS)*]{}, an input/output transition system with two additional features: (1) [*Quiescence*]{}, which models the absence of outputs explicitly via a distinct $\delta$ label: quiescence is an important notion in ioco, because a system-under-test (SUT) may fail a certain test case when an output is required, but the SUT does not provide one. (2) [*Discrete probabilistic choice*]{}. We work in the input-reactive / output-generative model [@GSST90], which extends Segala’s classical probabilistic automaton model [@Segala:1996:MVR:239648]: upon receiving an input, a pQTS chooses probabilistically which target state to move to. For outputs, a pQTS chooses probabilistically both which action to take and which state to move to; see Figure \[fig:(ExampleGraph)\] for an example.
An important contribution of our paper is the notion of test case execution and evaluation. In particular, we show how statistical hypothesis testing can be exploited to determine the verdict of a test execution: if we execute a test case sufficiently many times and the observed trace frequencies do not coincide with the probabilities described in the specification pQTS, up to a predefined level of significance, then we fail the test case. In this way, we obtain a clean framework for test case generation, execution and evaluation. However, being a first step, we mainly establish the theoretical background. Further research is needed to implement this theory in a working tool for probabilistic testing.
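The verdict idea can be illustrated with a minimal frequency test. The following sketch is our own: the paper does not prescribe a particular test statistic, and the normal approximation to the binomial, the function name and the default significance level are illustrative choices.

```python
import math

def trace_frequency_verdict(observed, n, p_spec, alpha=0.05):
    """Two-sided test: is the observed count of a trace in n independent
    test runs compatible with the specification probability p_spec at
    significance level alpha? Uses the normal approximation to the
    binomial distribution Bin(n, p_spec)."""
    mean = n * p_spec
    sd = math.sqrt(n * p_spec * (1.0 - p_spec))
    z = abs(observed - mean) / sd
    p_value = math.erfc(z / math.sqrt(2.0))  # two-sided normal tail probability
    return "pass" if p_value >= alpha else "fail"
```

A trace specified with probability 0.25 that is observed 250 times in 1000 runs passes, while 400 observations in 1000 runs leads to rejection.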
#### Related work.
An early and influential paper on probabilistic testing is [*Bisimulation Through Probabilistic Testing*]{} [@DBLP:conf/popl/LarsenS89], which not only defines the fundamental concept of probabilistic bisimulation, but also shows how different (i.e. non-bisimilar) probabilistic behaviours can be detected via statistical hypothesis testing. This idea has been taken further in our earlier work [@CSV07; @DBLP:conf/icalp/StoelingaV03], which shows how to observe trace probabilities via hypothesis testing.
Testing probabilistic Finite State Machines (PFSMs) is well-studied (e.g. [@Hwang20101108]) and similarities to [[ioco]{}]{}theory can be found. However, pQTSs are more expressive than PFSMs, as they support non-determinism and underspecification, which both play a fundamental role in testing practice. Hence, they provide more suitable models for today’s highly concurrent and cyberphysical systems.
A paper that is similar in spirit to ours is by Hierons et al. [@hierons:hal-01055146; @DBLP:journals/fac/HieronsN12], which also considers input-reactive / output-generative systems with quiescence. However, there are a few important differences: our model can be considered an extension of [@hierons:hal-01055146], reconciling probabilistic and nondeterministic choices in a fully fledged way. Being more restrictive enables [@hierons:hal-01055146; @DBLP:journals/fac/HieronsN12] to focus on individual traces, whereas we use trace distributions.
Other work that involves the use of probability is given in [@DBLP:conf/qsic/DulzZ03; @DBLP:journals/tosem/WhittakerP93; @DBLP:journals/infsof/WhittakerRT00], which models the behaviour of the tester, rather than of the SUT as we do, via probabilities.
#### Organization of the paper.
We start by defining overall preliminaries in Section \[sec(pqts)\]. Section \[sec(pioco)\] defines the conformance relation [[pioco]{}]{}for those systems, and Section \[sec:Testing-for-pQTS\] provides the structure for testing and defines what it means for an implementation to fail or pass a test suite, by means of an output verdict and a statistical verdict. The paper ends with conclusions and future work in Section \[sec(FutureWork)\].
Probabilistic quiescent transition systems {#sec(pqts)}
==========================================
Basic definitions
-----------------
(Probability Distribution) A *discrete probability distribution* over a set $X$ is a function $\mu:X\longrightarrow\left[0,1\right]$ such that $\sum_{x\in X}\mu\left(x\right)=1$. The set of all distributions over $X$ is denoted as $\mathit{Distr}\left(X\right)$. The probability distribution that assigns $1$ to a certain element $x\in X$ is called the *Dirac* distribution over $x$ and is denoted $\mathit{Dirac}\left(x\right)$.
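As a minimal concrete reading of this definition, a finite discrete distribution can be represented as a dictionary from elements to probabilities; the helper names below are our own illustrative choices.

```python
def is_distribution(mu, tol=1e-9):
    """Check that mu: X -> [0,1] is a discrete probability distribution,
    i.e. all values lie in [0,1] and they sum to 1 (up to float tolerance)."""
    return (all(0.0 <= p <= 1.0 for p in mu.values())
            and abs(sum(mu.values()) - 1.0) <= tol)

def dirac(x, domain):
    """The Dirac distribution over x within a finite domain: x gets
    probability 1, every other element probability 0."""
    return {y: (1.0 if y == x else 0.0) for y in domain}
```

For instance, `{"a": 0.5, "b": 0.5}` is a distribution over `{a, b}`, and `dirac("a", ["a", "b"])` assigns all mass to `a`.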
(Probability Space) A *probability space* is a triple $\left(\Omega,\mathcal{F},\mathbb{P}\right)$, such that $\Omega$ is a set, $\mathcal{F}$ is a $\sigma$-field over $\Omega$, and $\mathbb{P}:\mathcal{F}\rightarrow\left[0,1\right]$ a probability measure such that $\mathbb{P}\left(\Omega\right)=1$ and $\mathbb{P}\left(\bigcup_{i=1}^{\infty}A_{i}\right)=\sum_{i=1}^{\infty}\mathbb{P}\left(A_{i}\right)$ for pairwise disjoint $A_{i}$, $i=1,2,\ldots$.
Probabilistic quiescent transition systems {#subsec:pqts}
------------------------------------------
As stated, we consider probabilistic transitions that are *input reactive* and *output generative* [@GSST90]: upon receiving an input, the system decides probabilistically which next state to move to. However, the system cannot decide probabilistically which inputs to accept. For outputs, in contrast, a system may make a probabilistic choice over various output actions. This means that each transition in a pQTS either involves a single input action, and a probabilistic choice over the target states; or it makes a probabilistic choice over several output actions, together with their target states. We refer to Figure \[fig:(ExampleGraph)\] for an example.
Moreover, we model quiescence explicitly via a $\delta$-label. Quiescence means absence of outputs and is essential for testing: if the SUT does not provide any outputs, a test must determine whether or not this behaviour is correct. In the non-probabilistic case, this can be done either via the suspension automaton (SA) construction [@Tretmans96], or via QTSs [@STS13]. The SA construction involves determinization, which is not well-defined for probabilistic systems. Therefore, we use the quiescent-labelling approach and require quiescence to be made explicit.
Finally, we assume that our pQTSs are finite and do not contain internal steps (i.e., $\tau$-transitions).
\[def:(pQTS)\](pQTS) A *probabilistic quiescent transition system* (pQTS) is an ordered five tuple $\mathcal{A}=\left(S,s_{0},L_{I},L_{O}^{\delta},\Delta\right)$ where
- $S$ a finite set of states,
- $s_{0}\in S$ the initial state,
- $L_{I}$ and $L_{O}^{\delta}$ disjoint sets of input and output actions, with at least $\delta\in L_{O}^{\delta}$. We write $L:=L_{I}\cup L_{O}^{\delta}$ for the set of all labels and let $L_{O}=L_{O}^{\delta}\backslash\left\{ \delta\right\} $ the set of all real outputs.
- $\Delta\subseteq S\times\mathit{Distr}\left(L\times S\right)$ a finite transition relation such that for all $\left(s,\mu\right)\in\Delta$, $a?\in L_{I}$, $b\in L$, $s',s''\in S$, if $\mu\left(a?,s'\right)>0$, then $\mu\left(b,s''\right)=0$ for all $b\neq a?$.
We write $s\overset{\mu,a}{\rightarrow}s'$ if $\left(s,\mu\right)\in\Delta$ and $\mu\left(a,s'\right)>0$; and $s\rightarrow a$ if there are $\mu\in\mathit{Distr}\left(L\times S\right)$ and $s'\in S$ such that $s\overset{\mu,a}{\rightarrow}s'$. If the system under consideration is not clear from the context, we write $s\overset{\mu,a}{\rightarrow}_\mathcal{A}s'$, $\left(s,\mu\right)_\mathcal{A}$ and $s\rightarrow_\mathcal{A} a$ to resolve ambiguities. Lastly, we say that $\mathcal{A}$ is *input enabled* if for all $s\in S$ we have $s\rightarrow a?$ for every $a?\in L_I$.
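The restriction on $\Delta$ in the definition (a transition carrying an input may not mix that input with any other label) can be checked mechanically. The encoding of a distribution as a dictionary from (label, state) pairs to probabilities is our own illustrative choice.

```python
L_I = {"a?"}  # input labels; the '?'/'!' suffixes follow the paper's convention

def is_valid_transition(mu, tol=1e-9):
    """mu is a dict (label, state) -> probability. Checks that mu is a
    distribution over L x S and respects the pQTS constraint: if some
    input a? has positive probability, every other label must have
    probability 0 (input-reactive); otherwise any mix of outputs and
    quiescence is allowed (output-generative)."""
    if abs(sum(mu.values()) - 1.0) > tol:
        return False
    support = {label for (label, _), p in mu.items() if p > 0}
    inputs = support & L_I
    # either no inputs at all, or exactly one input label and nothing else
    return not inputs or (support == inputs and len(inputs) == 1)
```

For example, a distribution spreading probability over two targets of the same input `a?` is valid, while one mixing `a?` with an output `b!` is not.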
Paths and traces
----------------
We define the usual language-theoretic concepts for pQTSs.
\[defn(traces)\]
Let $\mathcal{A}=\left(S,s_{0},L_{I},L_{O}^{\delta},\Delta\right)$ be a pQTS.
- A *path* $\pi$ of a pQTS $\mathcal{A}$ is a (possibly) infinite sequence of the form $$\pi=s_{1}\mu_{1}a_{1}s_{2}\mu_{2}a_{2}s_{3}\mu_{3}a_{3}s_{4}\ldots,$$ where $s_{i}\in S$, $a_{i}\in L$ and $\mu_{i}\in\mathit{Distr}\left(L\times S\right)$ for $i=1,2,\ldots$, such that each finite path ends in a state and $s_{i}\overset{\mu_{i},a_{i}}{\rightarrow}s_{i+1}$ for each nonfinal $i$. We use the notation $\mathit{first}\left(\pi\right):=s_{1}$ to denote the first state of a path, as well as $\mathit{last}\left(\pi\right):=s_{n}$ for a finite path ending in $s_{n}$, and $\mathit{last}\left(\pi\right)=\infty$ for infinite paths. The set of all finite paths of a pQTS $\mathcal{A}$ is denoted by $\mathit{Path}^{*}\left(\mathcal{A}\right)$ and the set of all infinite paths by $\mathit{Path}\left(\mathcal{A}\right)$ respectively.
- The *trace* of a path $\pi=s_{1}\mu_{1}a_{1}s_{2}\mu_{2}a_{2}s_{3}\ldots$ is the sequence obtained by omitting everything but the action labels, i.e. $\mathit{trace}\left(\pi\right)=a_{1}a_{2}a_{3}\ldots$.
- All finite traces of $\mathcal{A}$ are summarized in $\mathit{traces}\left(\mathcal{A}\right)=\left\{ \mathit{trace}\left(\pi\right)\in L^{*}\mid\pi\in\mathit{Path}^{*}\left(\mathcal{A}\right)\right\} $.
- We write $s_{1}\overset{\sigma}{\Rightarrow}s_{n}$ with $\sigma\in L^{*}$ for $s_{1},s_{n}\in S$ in case there is a path $\pi=s_{1}\mu_{1}a_{1}\ldots\mu_{n-1}a_{n-1} s_{n}$ with $\mathit{trace}\left(\pi\right)=\sigma$ and $s_{i}\overset{\mu_{i},a_{i}}{\rightarrow}s_{i+1}$ for $i=1,\ldots,n-1$.
- We write $\mathit{reach}{}_{\mathcal{A}}\left(S',\sigma\right)$ for the set of reachable states of a subset $S'\subseteq S$ via $\sigma$, i.e.\
$\mathit{reach}_{\mathcal{A}}\left(S',\sigma\right)=\left\{ s\in S\mid\exists s'\in S'\,:\, s'\overset{\sigma}{\Rightarrow}s\right\}.$
- All complete initial traces of $\mathcal{A}$ are denoted by $\mathit{ctraces}\left(\mathcal{A}\right)$, which is defined as the set $$\left\{ \mathit{trace}\left(\pi\right)\mid\pi\in\mathit{Path\left(\mathcal{A}\right)}:\mathit{first}\left(\pi\right)=s_{0},\left|\pi\right|=\infty\vee\forall a\in L:\mathit{reach}_{\mathcal{A}}\left(\left\{ \mathit{last}\left(\pi\right)\right\} ,a\right)=\emptyset\right\} .$$
- We write $\mathit{after}{}_{\mathcal{A}}\left(s\right)$ for the set of actions, enabled from state $s$, i.e. $\mathit{after}{}_{\mathcal{A}}\left(s\right)=\left\{ a\in L\mid s\rightarrow a\right\}$. We lift this definition to traces by defining $$\mathit{after}{}_{\mathcal{A}}\left(\sigma\right)=\bigcup_{s\in\mathit{reach}{}_{\mathcal{A}}\left(s_{0},\sigma\right)}\mathit{after}{}_{\mathcal{A}}\left(s\right).$$
- We write $\mathit{out}{}_{\mathcal{A}}\left(\sigma\right)=\mathit{after}{}_{\mathcal{A}}\left(\sigma\right)\cap L_{O}^{\delta}$ to denote the set of all output actions as well as quiescence after trace $\sigma$.
In order for a pQTS to be meaningful, [@STS13] postulated four well-formedness rules about quiescence, stating for instance that quiescence should not be succeeded by an output action. Since our current treatment does not rely on well-formedness, we omit these rules here. Moreover, our definition of a test case is a pQTS that does not adhere to the well-formedness criteria.
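On a toy transition structure (labelled edges only, probabilities omitted), the operators $\mathit{reach}$, $\mathit{after}$ and $\mathit{out}$ of Definition \[defn(traces)\] can be computed as follows; the edge encoding and the example system are our own.

```python
# state transitions: (state, label) -> set of target states
EDGES = {("s0", "a?"): {"s1"}, ("s1", "b!"): {"s2"},
         ("s1", "c!"): {"s2"}, ("s2", "delta"): {"s2"}}
L_O_DELTA = {"b!", "c!", "delta"}  # outputs including quiescence

def reach(states, sigma):
    """States reachable from any state in `states` via the trace sigma."""
    current = set(states)
    for a in sigma:
        current = set().union(*(EDGES.get((s, a), set()) for s in current))
    return current

def after(sigma):
    """Actions enabled after trace sigma, starting from the initial state s0."""
    return {a for (s, a) in EDGES if s in reach({"s0"}, sigma)}

def out(sigma):
    """Enabled outputs (including quiescence) after trace sigma."""
    return after(sigma) & L_O_DELTA
```

For this toy system, `out(["a?"])` yields `{"b!", "c!"}`, and after `a? b!` only quiescence remains enabled.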
Trace distributions
-------------------
Very much like the visible behaviour of a labelled transition system is given by its traces, the visible behaviour of a pQTS is given by its trace distributions: each trace distribution is a probability space that assigns a probability to (sets of) traces [@Segala:1996:MVR:239648]. Just as a trace in an LTS is obtained by first selecting a path in the LTS and then removing all states and internal actions, we do the same in the probabilistic case: we first resolve all the nondeterministic choices in the pQTS via an adversary, and then remove all states (recall that our pQTSs do not contain internal actions). The resolution of the nondeterminism via an adversary leads to a purely probabilistic structure in which we can assign a probability to each finite path, by multiplying the probabilities along that path. The mathematics to handle infinite paths is more complex, but completely standard [@cohn1980measure]: in non-trivial situations, the probability assigned to an individual infinite trace is 0 (cf. the probability of always rolling a 6 with a die is 0). Hence, we consider the probability assigned to sets of traces (e.g., the probability that a 6 turns up in the first 100 die rolls). A classical result in measure theory shows that it is impossible to assign a probability to all sets of traces. Therefore, we collect those sets that can be assigned a probability in a so-called $\sigma$-field $\mathcal{F}$.
#### Adversaries.
Following the standard theory for probabilistic automata [@MyThesis], we define the behavior of a pQTS via adversaries (a.k.a. policies or schedulers). These resolve the nondeterministic choices in pQTSs: in each state of the pQTSs, the adversary chooses which transition to take. Adversaries can be (1) history-dependent, i.e. the choice which transition to take can depend on the full history; (2) randomized, i.e. the adversary may make a random choice over all outgoing transitions; and (3) partial, i.e., at any point in time, a scheduler may decide, with some probability, to terminate the execution.
Thus, given any finite history leading to a current state, an adversary returns a discrete probability distribution over the set of available next transitions (distributions, to be precise). In order to model termination, we extend the set of choices with a halting symbol $\perp$.
(Adversary) A *(partial, randomized, history-dependent) adversary* $E$ of a pQTS $\mathcal{A}=\left(S,s_{0},L_{I},L_{O}^{\delta},\Delta\right)$ is a function $$E:\mathit{Path}{}^{*}\left(\mathcal{A}\right)\longrightarrow\mathit{Distr}\left(\mathit{Distr}\left(L\times S\right)\cup\left\{\perp\right\}\right),$$ such that for each finite path $\pi$, if $E\left(\pi\right)\left(\mu\right)>0$, then $\left(\mathit{last}\left(\pi\right),\mu\right)\in\Delta$. The value $E\left(\pi\right)\left(\perp\right)$ is interpreted as *interruption/halting*. We say that $E$ is *deterministic* if $E\left(\pi\right)$ is a Dirac distribution for every $\pi\in\mathit{Path}^{*}\left(\mathcal{A}\right)$. An adversary $E$ halts on a path $\pi$ if it extends $\pi$ with the halting symbol $\perp$, i.e. $$E\left(\pi\right)\left(\perp\right)=1.$$ We say that an adversary halts after $k\in\mathbb{N}$ steps if it halts on every path $\pi$ with $\left|\pi\right|\geq k$. We denote the set of all such adversaries by $\mathit{Adv}\left(\mathcal{A},k\right)$. Lastly, $E$ is *finite* if there exists $k\in\mathbb{N}$ such that $E\in\mathit{Adv}\left(\mathcal{A},k\right)$.
#### The probability space assigned to an adversary.
Intuitively an adversary tosses a coin at every step of the computation, thus resulting in a purely probabilistic (as opposed to nondeterministic) computation tree.
(Path Probability) Let $E$ be an adversary of $\mathcal{A}$. The function $Q^{E}:Path^{*}\left(\mathcal{A}\right)\rightarrow\left[0,1\right]$ is called the *path probability function* and it is defined by induction. We set $Q^{E}\left(s_{0}\right)=1$ and $Q^{E}\left(\pi\mu as\right)=Q^{E}\left(\pi\right)\cdot E\left(\pi\right)\left(\mu\right)\cdot\mu\left(a,s\right)$.
Loosely speaking, we follow a finite path in the transition system and multiply every scheduled probability along the way, resolving every nondeterminism according to the adversary $E$ to get the ultimate path probability. The path probability function enables us to define a probability space associated with an adversary, thus giving every path in a pQTS $\mathcal{A}$ an exact probability.
(Adversary Probability Space) Let $E$ be an adversary of $\mathcal{A}$. The *unique probability space associated to* $E$ is the probability space $\left(\Omega_{E},\mathcal{F}_{E},P_{E}\right)$ given by:
1. $\Omega_{E}=\mathit{Path}\left(\mathcal{A}\right)$, the set of all infinite paths of $\mathcal{A}$
2. $\mathcal{F}_{E}$ is the smallest $\sigma$-field that contains the set $\left\{ C_{\pi}\mid\pi\in\mathit{Path}{}^{*}\left(\mathcal{A}\right)\right\} $, where the cone is defined as $C_{\pi}=\left\{ \pi'\in\Omega_{E}\mid\pi\mbox{ is a prefix of }\pi'\right\} $.
3. $P_{E}$ is the unique probability measure on $\mathcal{F}_{E}$ s. t. $P_{E}\left(C_{\pi}\right)=Q^{E}\left(\pi\right)$, for all $\pi\in\mathit{Path}^{*}\left(\mathcal{A}\right)$.
The set of all adversaries is denoted by $\mathit{Adv}\left(\mathcal{A}\right)$, with $\mathit{Adv}\left(\mathcal{A},k\right)$ being the set of adversaries halting after $k\in\mathbb{N}$ steps respectively.
#### Trace distributions.
As we mentioned, a trace distribution is obtained from (the probability space assigned to) an adversary by removing all states. This means that the probability assigned to a set of traces $X$ is defined as the probability of all paths whose trace is an element of $X$.
(Trace Distribution) The *trace distribution* $H$ of an adversary $E$, denoted $H=\mathit{trd}\left(E\right)$ is the probability space $\left(\Omega_{H},\mathcal{F}_{H},P_{H}\right)$ given by
1. $\Omega_{H}=L^*_\mathcal{A}$
2. $\mathcal{F}_{H}$ is the smallest $\sigma$-field containing the set $\left\{ C_{\beta}\mid\beta\in L^{*}_\mathcal{A}\right\} $, where the cone is defined as $C_{\beta}=\left\{ \beta'\in\Omega_{H}\mid\beta\mbox{ is a prefix of }\beta'\right\} $
3. $P_{H}$ is the unique probability measure on $\mathcal{F}_{H}$ such that $P_{H}\left(X\right)=P_{E}\left(\mathit{trace}{}^{-1}\left(X\right)\right)$ for $X\in\mathcal{F}_{H}$.
As an abbreviation, we write $P_{H}\left(\beta\right):=P_{H}\left(C_{\beta}\right)$ for $\beta\in L^*_{\mathcal{A}}$.
As before, we denote the set of trace distributions based on adversaries of $\mathcal{A}$ by $\mathit{trd}\left(\mathcal{A}\right)$, and by $\mathit{trd}\left(\mathcal{A},k\right)$ if they are based on adversaries halting after $k\in\mathbb{N}$ steps. Lastly, we write $\mathcal{A}=_{\mathit{TD}}\mathcal{B}$ if $\mathit{trd}\left(\mathcal{A}\right)=\mathit{trd}\left(\mathcal{B}\right)$, $\mathcal{A}\sqsubseteq_{\mathit{TD}}\mathcal{B}$ if $\mathit{trd}\left(\mathcal{A}\right)\subseteq\mathit{trd}\left(\mathcal{B}\right)$, and $\mathcal{A}\sqsubseteq_{\mathit{TD}}^{k}\mathcal{B}$ if $\mathit{trd}\left(\mathcal{A},k\right)\subseteq\mathit{trd}\left(\mathcal{B},k\right)$ for $k\in\mathbb{N}$, where inclusion means that for every trace distribution $H$ of $\mathcal{A}$ there is a trace distribution $H'$ of $\mathcal{B}$ such that for all traces $\sigma$ of $\mathcal{A}$ we have $P_H\left(\sigma\right)=P_{H'}\left(\sigma\right)$.
The fact that $\left(\Omega_{E},\mathcal{F}_{E},P_{E}\right)$, $\left(\Omega_{H},\mathcal{F}_{H},P_{H}\right)$ really define probability spaces, follows from standard measure theory arguments (see [@cohn1980measure]).
[Figure \[fig:(ExampleGraph)\]: the example pQTS $\mathcal{A}$ with states $s_{0},\ldots,s_{10}$, input $a?$, outputs $b!,c!,d!$ and quiescence $\delta$; at $s_{0}$ there is a nondeterministic choice between two $a?$-transitions and doing nothing.]
Consider the pQTS $\mathcal{A}=\left(S,s_{0},L_{I},L_{O}^{\delta},\Delta\right)$ in Figure \[fig:(ExampleGraph)\]. There $S=\left\{ s_{0},s_{1},\ldots,s_{10}\right\} $, $L_{I}=\left\{ a?\right\} $, $L_{O}^{\delta}=\left\{ b!,c!,d!\right\} \cup\left\{ \delta\right\} $ and $\Delta=\left\{ \left(s_{0},\mu_{0_{1}}\right),\left(s_{0},\mu_{0_{2}}\right),\left(s_{0},\mu_{0_{3}}\right),\left(s_{1},\mu_{1}\right),\ldots,\left(s_{10},\mu_{10}\right)\right\} $. We can see that this system has both probabilistic and nondeterministic choices. Observe that it has indeed only input-reactive and output-generative transitions, as mentioned at the beginning of Section \[subsec:pqts\].
We will now consider an adversary $E$ for $\mathcal{A}$. The only nondeterministic choice in this system is located at state $s_{0}$, where we can either apply $a?$ to enter the left branch, apply $a?$ to enter the right branch, or do nothing (corresponding to $\mu_{0_{1}}$, $\mu_{0_{2}}$ and $\mu_{0_{3}}$ respectively). Therefore consider the adversary $E$ with $E\left(s_{0}\right)\left(\mu_{0_{1}}\right)=\frac{1}{2}$ and $E\left(s_{0}\right)\left(\mu_{0_{2}}\right)=\frac{1}{2}$, and with $E\left(\pi\right)$ the Dirac distribution on the unique available distribution $\mu$ for every other path $\pi$ (i.e. those are taken with probability $1$).
The adversary probability space created for this adversary assigns an unambiguous path probability to each path. Consider the path $\pi=s_{0}\mu_{0_{1}}a?s_{1}\mu_{1}b!s_{5}$, then $$P_{E}\left(\pi\right)=Q^{E}\left(\pi\right)=\underset{1}{\underbrace{Q^{E}\left(s_{0}\right)}}\underset{\frac{1}{2}}{\underbrace{E\left(s_{0}\right)\left(\mu_{0_{1}}\right)}}\underset{\frac{1}{2}}{\underbrace{\mu_{0_{1}}\left(a?,s_{1}\right)}}\underset{1}{\underbrace{E\left(s_{0}\mu_{0_{1}}a?s_{1}\right)\left(\mu_{1}\right)}}\underset{\frac{1}{2}}{\underbrace{\mu_{1}\left(b!,s_{5}\right)}}=\frac{1}{8}.$$ However, consider the trace distribution $H=\mathit{trd}\left(E\right)$. Then for $\sigma=a?b!$, we have $\mathit{trace}^{-1}\left(\sigma\right)=\left\{ \pi,\eta\right\} $ with $\pi$ as before and $\eta=s_{0}\mu_{0_{2}}a?s_{3}\mu_{3}b!s_{8}$. Hence $$\begin{aligned}
P_{H}\left(\sigma\right) & = & P_{\mathit{trd}\left(E\right)}\left(\mathit{trace}^{-1}\left(\sigma\right)\right)=P_{E}\left(\left\{ \pi,\eta\right\} \right)=P_{E}\left(\pi\right)+P_{E}\left(\eta\right)=\frac{1}{4}.\end{aligned}$$
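The two numbers above can be re-derived mechanically. The sketch below is ours; the branching probabilities of $\frac{1}{2}$ inside $\mu_{0_{2}}$ and $\mu_{3}$ are assumptions read off from the figure, chosen consistently with the quoted totals.

```python
# Distributions of the example pQTS (probabilities assumed from the figure).
DIST = {
    "mu01": {("a?", "s1"): 0.5, ("a?", "s2"): 0.5},
    "mu02": {("a?", "s3"): 0.5, ("a?", "s4"): 0.5},
    "mu1":  {("b!", "s5"): 0.5, ("c!", "s6"): 0.5},
    "mu3":  {("b!", "s8"): 0.5, ("d!", "s9"): 0.5},
}

def Q(path, E):
    """Path probability Q^E: along the path, multiply the adversary's
    probability of choosing the distribution with the distribution's
    probability of the (action, state) step."""
    q, hist = 1.0, (path[0],)
    for mu, a, s in path[1:]:
        q *= E.get(hist, {}).get(mu, 0.0) * DIST[mu][(a, s)]
        hist += (mu, a, s)
    return q

# The adversary of the example: fair coin at s0, Dirac choices elsewhere.
E = {("s0",): {"mu01": 0.5, "mu02": 0.5},
     ("s0", "mu01", "a?", "s1"): {"mu1": 1.0},
     ("s0", "mu02", "a?", "s3"): {"mu3": 1.0}}

pi  = ["s0", ("mu01", "a?", "s1"), ("mu1", "b!", "s5")]
eta = ["s0", ("mu02", "a?", "s3"), ("mu3", "b!", "s8")]
```

Here `Q(pi, E)` reproduces $P_{E}\left(\pi\right)=\frac{1}{8}$, and `Q(pi, E) + Q(eta, E)` reproduces $P_{H}\left(a?b!\right)=\frac{1}{4}$.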
The probabilistic conformance relation [[pioco]{}]{} {#sec(pioco)}
====================================================
The [[pioco]{}]{}relation
-------------------------
The classical input-output conformance relation [[ioco]{}]{}states that an implementation $\mathcal{A}_{i}$ conforms to a specification $\mathcal{A}_{s}$ if $\mathcal{A}_{i}$ never provides any unspecified output. In particular this refers to the observation of quiescence, when other output was expected.
(Input-Output Conformance) Let $\mathcal{A}_{i}$ and $\mathcal{A}_{s}$ be two QTSs and let $\mathcal{A}_{i}$ be input enabled. Then we say $\mathcal{A}_{i}\sqsubseteq_{ioco}\mathcal{A}_{s}$, if and only if $$\forall\sigma\in\mathit{traces}\left(\mathcal{A}_{s}\right):\mathit{out}_{\mathcal{A}_{i}}\left(\sigma\right)\subseteq\mathit{out}_{\mathcal{A}_{s}}\left(\sigma\right).$$
To generalize [[ioco]{}]{}to pQTSs, we introduce two auxiliary concepts. For a natural number $k$, the prefix relation $H \sqsubseteq_{k} H'$ states that trace distribution $H$ assigns exactly the same probabilities as $H'$ to traces of length $k$ and halts afterwards. The output continuation of a trace distribution $H$ prolongs the traces of $H$ with output actions. More precisely, the output continuation of $H$ with respect to length $k$ contains all trace distributions that (1) coincide with $H$ for traces up to length $k$, and (2) whose $(k+1)$-st action is an output label (including $\delta$); i.e. traces of length $k+1$ that end in an input action are assigned probability 0. Recall that $P_{H}\left(\sigma\right)$ abbreviates $P_{H}\left(C_\sigma\right)$.
(Notation) For a natural number $k\in\mathbb{N}$ and a trace distribution $H \in\mathit{trd}\left(\mathcal{A},k\right)$, we say that
1. $H$ is a [*prefix*]{} of $H'\in\mathit{trd}\left(\mathcal{A}\right)$ up to $k$, denoted by $H\sqsubseteq_{k}H'$, iff $\forall\sigma\in L^{k}:P_{H}\left(\sigma\right)=P_{H'}\left(\sigma\right).$
2. the [*output continuation*]{} of $H$ in $\mathcal{A}$ is given by $$\begin{aligned}
\mathit{outcont}\left(H,\mathcal{A},k\right): & = & \left\{ H'\in\mathit{trd}\left(\mathcal{A},k+1\right) \mid H\sqsubseteq_{k}H'\wedge\forall\sigma\in L^{k}L_{I}:P_{H'}\left(\sigma\right)=0\right\} .\end{aligned}$$
We are now able to define the core idea of [[pioco]{}]{}. Intuitively, an implementation conforms to a specification if the probability of every trace of $\mathcal{A}_i$ that is specified in $\mathcal{A}_s$ can be matched in the specification. Just as in [[ioco]{}]{}, we neglect underspecified traces continued with input actions (i.e., everything is allowed to happen after those). However, if there is unspecified output in the implementation, there is at least one adversary that assigns positive probability to this continuation, which consequently cannot be matched by any output continuation in the specification.
Let $\mathcal{A}_{i}$ and $\mathcal{A}_{s}$ be two pQTS. Furthermore let $\mathcal{A}_i$ be input enabled, then we say $\mathcal{A}_{i}\sqsubseteq_{\mathit{pioco}}\mathcal{A}_{s}$ if and only if
$$\forall k\in\mathbb{N}\forall H\in\mathit{trd}\left(\mathcal{A}_{s},k\right):\mathit{outcont}\left(H,\mathcal{A}_{i},k\right)\subseteq\mathit{outcont}\left(H,\mathcal{A}_{s},k\right).$$
[Figure \[fig:(piocoExample)\]: the pQTS $\mathcal{A}$ emits $a!$ with probability $p$ and $b!$ with probability $1-p$ from its initial state; the QTS $\mathcal{B}$ offers a nondeterministic choice between $a!$ and $b!$; $\mathcal{A}\sqsubseteq_{\mathit{pioco}}\mathcal{B}$ but $\mathcal{B}\cancel{\sqsubseteq}_{\mathit{pioco}}\mathcal{A}$.]
Consider the two systems $\mathcal{A}$ and $\mathcal{B}$ shown in Figure \[fig:(piocoExample)\] and assume that $p\in\left[0,1\right]$. It is true that $\mathcal{A}\sqsubseteq_{\mathit{pioco}}\mathcal{B}$, because we can always choose an adversary $E$ of $\mathcal{B}$ which imitates the probabilistic behaviour of $\mathcal{A}$, i.e. choose $E(\varepsilon)(\mu)=\nu$ such that $\nu\left(a!,t_1\right)=p$ and $\nu\left(b!,t_2\right)=1-p$.
However, the opposite does not hold. For example, assume $p=\frac{1}{2}$ and let $H$ be the trivial trace distribution of length $0$. The continuation $H'$ assigning $P_{H'}\left(a!\right)=1$ is in $\mathit{outcont}\left(H,\mathcal{B},0\right)$ but not in $\mathit{outcont}\left(H,\mathcal{A},0\right)$, and hence $\mathcal{B}\cancel{\sqsubseteq}_\mathit{pioco}\mathcal{A}$.
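The argument for this small example can be replayed in code. In the following sketch (our own simplification, not from the paper) a length-1 output continuation is represented as a pair $(q_a, q_b)$ of probabilities for $a!$ and $b!$:

```python
def in_outcont_A(q_a, q_b, p):
    """A is probabilistic: its only realisable length-1 continuation
    is (p, 1-p)."""
    return abs(q_a - p) < 1e-12 and abs(q_b - (1 - p)) < 1e-12

def in_outcont_B(q_a, q_b):
    """B is nondeterministic: an adversary may randomise arbitrarily
    over the two branches, so every (q, 1-q) with q in [0,1] is realisable."""
    return 0.0 <= q_a <= 1.0 and abs(q_a + q_b - 1.0) < 1e-12
```

Every continuation of $\mathcal{A}$ is realisable in $\mathcal{B}$, but the Dirac continuation on $a!$ is realisable in $\mathcal{B}$ and not in $\mathcal{A}$ (for $p\neq 1$), mirroring the one-directional inclusion above.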
Properties of the pioco relation
---------------------------------
As stated before, the relation [[pioco]{}]{}conservatively extends the [[ioco]{}]{}relation, i.e. both relations coincide for non-probabilistic QTSs. Moreover, we show that several other characteristic properties of [[ioco]{}]{}carry over to [[pioco]{}]{}as well. Below, a QTS is a pQTS where every occurring distribution is the Dirac distribution.
Let $\mathcal{A}_{i}$ and $\mathcal{A}_{s}$ be two QTS and let $\mathcal{A}_i$ be input enabled, then $$\mathcal{A}_{i}\sqsubseteq_{\mathit{ioco}}\mathcal{A}_{s}\Longleftrightarrow\mathcal{A}_{i}\sqsubseteq_{\mathit{pioco}}\mathcal{A}_{s}.$$
$"\Longleftarrow"$ Let $\mathcal{A}_{i}\sqsubseteq_{\mathit{pioco}}\mathcal{A}_{s}$ and $\sigma\in\mathit{traces}\left(\mathcal{A}_{s}\right)$. Our goal is to show $\mathit{out}_{\mathcal{A}_{i}}\left(\sigma\right)\subseteq\mathit{out}_{\mathcal{A}_{s}}\left(\sigma\right)$.
For $\mathit{out}_{\mathcal{A}_{i}}\left(\sigma\right)=\emptyset$ we are done, since $\emptyset\subseteq\mathit{out}_{\mathcal{A}_{s}}\left(\sigma\right)$ obviously.
So assume that there is $b!\in\mathit{out}_{\mathcal{A}_{i}}\left(\sigma\right)$. We want to show that $b!\in\mathit{out}_{\mathcal{A}_{s}}\left(\sigma\right)$. For this, let $k=\left|\sigma\right|$ and $H\in\mathit{trd}\left(\mathcal{A}_{s},k\right)$ such that $P_{H}\left(\sigma\right)=1$, which is possible because $\sigma\in\mathit{traces}\left(\mathcal{A}_{s}\right)$ and both $\mathcal{A}_{i}$ and $\mathcal{A}_{s}$ are non-probabilistic. The same argument gives us $\mathit{outcont}\left(H,\mathcal{A}_{i},k\right)\neq\emptyset$, because $\sigma\in\mathit{traces}\left(\mathcal{A}_{i}\right)$.
Thus we have at least one $H'\in\mathit{outcont}\left(H,\mathcal{A}_{i},k\right)$ such that $P_{H'}\left(\sigma b!\right)>0$. Let $\pi\in\mathit{trace}^{-1}\left(\sigma\right)\cap\mathit{Path^{*}}\left(\mathcal{A}_{s}\right)$. Now $H'\in\mathit{outcont}\left(H,\mathcal{A}_{s},k\right)$, because $\mathcal{A}_{i}\sqsubseteq_{\mathit{pioco}}\mathcal{A}_{s}$ by assumption, and thus there must be at least one adversary $E'\in\mathit{adv}\left(\mathcal{A}_{s},k+1\right)$ such that $\mathit{trd}\left(E'\right)=H'$ and $Q^{E'}\left(\pi\cdot\mathit{Dirac}\cdot b!s'\right)>0$ for some $s'\in S$. Hence $E'\left(\pi\right)\left(\mathit{Dirac}\right)\mathit{Dirac}\left(b!,s'\right)>0$, and since $s'\in\mathit{reach}\left(\mathit{last}\left(\pi\right),b!\right)$, this yields $b!\in\mathit{out}_{\mathcal{A}_{s}}\left(\sigma\right)$.\
$"\Longrightarrow"$ Let $\mathcal{A}_{i}\sqsubseteq_{\mathit{ioco}}\mathcal{A}_{s}$, $k\in\mathbb{N}$ and $H^{*}\in\mathit{trd}\left(\mathcal{A}_{s},k\right)$. Assume that $H\in\mathit{outcont}\left(H^{*},\mathcal{A}_{i},k\right)$, then we want to show that $H\in\mathit{outcont}\left(H^{*},\mathcal{A}_{s},k\right)$.
Therefore let $E\in\mathit{adv}\left(\mathcal{A}_{i},k+1\right)$ such that $\mathit{trd}\left(E\right)=H$. If we can find $E'\in\mathit{adv}\left(\mathcal{A}_{s},k+1\right)$ such that $\mathit{trd}\left(E\right)=\mathit{trd}\left(E'\right)$, we are done. We will do this constructively in three steps.\
1) By construction of $H^{*}$ we know that there must be $E'\in\mathit{adv}\left(\mathcal{A}_{s},k+1\right)$, such that for all $\sigma\in L^{k}$ we have $P_{\mathit{trd}\left(E'\right)}\left(\sigma\right)=P_{H^{*}}\left(\sigma\right)=P_{\mathit{trd}\left(E\right)}\left(\sigma\right)$. Thus $H^{*}\sqsubseteq_{k}\mathit{trd}\left(E'\right)$.\
2) We did not specify the behaviour of $E'$ for paths of length $k+1$. Therefore we choose $E'$ such that for all traces $\sigma\in L^{k}$ and $a?\in L_{I}$ we have $P_{\mathit{trd}\left(E'\right)}\left(\sigma a?\right)=0=P_{\mathit{trd}\left(E\right)}\left(\sigma a?\right)$.\
3) The last thing to show is that $\mathit{trd}\left(E\right)=\mathit{trd}\left(E'\right)$. Therefore let us now fix the behaviour of $E'$ for traces ending in outputs. Let $\sigma\in\mathit{traces}\left(\mathcal{A}_{i}\right)$ and assume $a!\in\mathit{out}_{\mathcal{A}_{i}}\left(\sigma\right)$ (if $\mathit{out}_{\mathcal{A}_{i}}\left(\sigma\right)=\emptyset$ we are done immediately); because $\mathcal{A}_{i}\sqsubseteq_{\mathit{ioco}}\mathcal{A}_{s}$, we know that $a!\in\mathit{out}_{\mathcal{A}_{s}}\left(\sigma\right)$.
Now let $p:=P_{\mathit{trd}\left(E\right)}\left(\sigma\right)=P_{\mathit{trd}\left(E'\right)}\left(\sigma\right)$ and $q:=P_{\mathit{trd}\left(E\right)}\left(\sigma a!\right)$. By equality of the trace distributions for traces up to length $k$, we know that $q\leq p\leq1$ and therefore there is $\alpha\in\left[0,1\right]$ such that $q=p\cdot\alpha$. Let $\mathit{Path^{*}}\left(\mathcal{A}_{s}\right)\cap\mathit{trace}^{-1}\left(\sigma\right)=\left\{ \pi_{1},\ldots,\pi_{n}\right\} $. Without loss of generality, we choose $E'$ such that $$E'\left(\pi_{i}\right)\left(\mathit{Dirac}\right)=\begin{cases}
\alpha & \mbox{ if }i=1\\
0 & \mbox{ else}
\end{cases}.$$We constructed $E'\in\mathit{adv}\left(\mathcal{A}_{s},k+1\right)$ such that for all $\sigma\in L^{k+1}$ we have $P_{\mathit{trd}\left(E'\right)}\left(\sigma\right)=P_{\mathit{trd}\left(E\right)}\left(\sigma\right)$ and thus $\mathit{trd}\left(E\right)=\mathit{trd}\left(E'\right)$, which finally yields $H\in\mathit{outcont}\left(H^{*},\mathcal{A}_{s},k\right)$.

Intuitively, it makes sense that the implementation is input enabled, since it should accept every input at any time. The following two results justify our assumption that the specification need not be input enabled: otherwise [[pioco]{}]{}would coincide with trace distribution inclusion. Analogously, it is known that [[ioco]{}]{}coincides with trace inclusion if both the implementation and the specification are input enabled. Thus, as stated before, we can see that [[pioco]{}]{}extends [[ioco]{}]{}.
\[lem:(weak implication)\]Let $\mathcal{A}_{i}$ and $\mathcal{A}_{s}$ be two pQTS, then
$$\mathcal{A}_{i}\sqsubseteq_{\mathit{TD}}\mathcal{A}_{s}\Longrightarrow\mathcal{A}_{i}\sqsubseteq_{\mathit{pioco}}\mathcal{A}_{s}.$$
Let $\mathcal{A}_{i}\sqsubseteq_{\mathit{TD}}\mathcal{A}_{s}$. Then for every $k\in\mathbb{N}$ and every $H\in\mathit{trd}\left(\mathcal{A}_{i},k\right)$ we also have $H\in\mathit{trd}\left(\mathcal{A}_{s},k\right)$. So pick $m\in\mathbb{N}$, let $H^{*}\in\mathit{trd}\left(\mathcal{A}_{s},m\right)$ and take $H\in\mathit{outcont}\left(H^{*},\mathcal{A}_{i},m\right)\subseteq\mathit{trd}\left(\mathcal{A}_{i},m+1\right)$. We want to show that $H\in\mathit{outcont}\left(H^{*},\mathcal{A}_{s},m\right)$.
By assumption we know that $H\in\mathit{trd}\left(\mathcal{A}_{s},m+1\right)$. In particular, this means there must be at least one adversary $E\in\mathit{adv}\left(\mathcal{A}_{s},m+1\right)$ such that $\mathit{trd}\left(E\right)=H$. Since $\mathit{trd}\left(E\right)=H\in\mathit{outcont}\left(H^{*},\mathcal{A}_{i},m\right)$, we know that $H^{*}\sqsubseteq_{m}\mathit{trd}\left(E\right)$ and that $P_{\mathit{trd}\left(E\right)}\left(\sigma\right)=0$ for all $\sigma\in L^{m}L_{I}$. Thus $H\in\mathit{outcont}\left(H^{*},\mathcal{A}_{s},m\right)$ and therefore $\mathcal{A}_{i}\sqsubseteq_{\mathit{pioco}}\mathcal{A}_{s}$.
\[thm:(pioco =000026 TD)\]Let $\mathcal{A}_{i}$ and $\mathcal{A}_{s}$ be two input enabled pQTS, then
$$\mathcal{A}_{i}\sqsubseteq_{\mathit{pioco}}\mathcal{A}_{s}\Longleftrightarrow \mathcal{A}_{i}\sqsubseteq_{TD}\mathcal{A}_{s}.$$
$"\Longrightarrow"$ Let $\mathcal{A}_{i}\sqsubseteq_{\mathit{pioco}}\mathcal{A}_{s}$, fix $m\in\mathbb{N}$ and take a trace distribution $H^{*}\in\mathit{trd}\left(\mathcal{A}_{i},m\right)$. To show that $H^{*}\in\mathit{trd}\left(\mathcal{A}_{s},m\right)$, we prove that every prefix of $H^{*}$ is in $\mathit{trd}\left(\mathcal{A}_{s}\right)$, i.e. if $H'\sqsubseteq_{k}H^{*}$ for some $k\in\mathbb{N}$, then $H'\in\mathit{trd}\left(\mathcal{A}_{s}\right)$. The proof is by induction over the prefix length $k$, up to $m$.
Obviously $H'\in\mathit{trd}\left(\mathcal{A}_{i},0\right)$ yields both $H'\sqsubseteq_{0}H^{*}$ and $H'\in\mathit{trd}\left(\mathcal{A}_{s}\right)$. Now assume, we know that $H'\sqsubseteq_{k}H^{*}$ for some $k<m$ and $H'\in\mathit{trd}\left(\mathcal{A}_{s}\right)$. Furthermore let $H''\in\mathit{trd}\left(\mathcal{A}_{i},k+1\right)$, such that $H''\sqsubseteq_{k+1}H^{*}$. If we can show that $H''\in\mathit{trd}\left(\mathcal{A}_{s},k+1\right)$, we are done.
With $H'\in\mathit{trd}\left(\mathcal{A}_{s},k\right)$, we take $H'''\in\mathit{outcont}\left(H',\mathcal{A}_{i},k\right)$ such that all traces of length $k+1$ ending in an output action have the same probability, i.e. for all $\sigma\in L^{k}L_{O}^{\delta}$, we have $P_{H'''}\left(\sigma\right)=P_{H''}\left(\sigma\right)$. By assumption $\mathcal{A}_{i}\sqsubseteq_{\mathit{pioco}}\mathcal{A}_{s}$ and thus $H'''\in\mathit{outcont}\left(H',\mathcal{A}_{s},k\right)\subseteq\mathit{trd}\left(\mathcal{A}_{s}\right)$.
Let $E\in\mathit{adv}\left(\mathcal{A}_{s},k+1\right)$ be the corresponding adversary such that $\mathit{trd}\left(E\right)=H'''$. By construction, we have $P_{\mathit{trd}\left(E\right)}\left(\sigma a!\right)=P_{H''}\left(\sigma a!\right)$ and $P_{\mathit{trd}\left(E\right)}\left(\sigma b?\right)=0\overset{\tiny\mbox{in general}}{\neq}P_{H''}\left(\sigma b?\right)$ for all $\sigma\in L^{k}$. We create yet another adversary, denoted by $E'\in\mathit{adv}\left(\mathcal{A}_{s},k+1\right)$, such that for all $\sigma\in L^{k}$ and $a!\in L_{O}^{\delta}$, we have $P_{\mathit{trd}\left(E\right)}\left(\sigma\right)=P_{\mathit{trd}\left(E'\right)}\left(\sigma\right)$ and $P_{\mathit{trd}\left(E\right)}\left(\sigma a!\right)=P_{\mathit{trd}\left(E'\right)}\left(\sigma a!\right)$. Taking the sum over all probabilities of those traces yields $$\sum_{a!\in L_{O}^{\delta}}P_{\mathit{trd}\left(E\right)}\left(\sigma a!\right)=1-\alpha,$$ where $\alpha\in\left[0,1\right]$; consequently the remaining probability is covered by $$\sum_{b?\in L_{I}}P_{H''}\left(\sigma b?\right)=\alpha.$$\
The aim is now to set the behaviour of $E'$ such that every $\sigma\in L^{k}L_{I}$ satisfies $P_{H''}\left(\sigma\right)=P_{\mathit{trd}\left(E'\right)}\left(\sigma\right)$. We prove that this can indeed be done independently of $\sigma$. Input enabledness gives that for all $\sigma b?\in\mathit{traces}\left(\mathcal{A}_{i}\right)$, we also have $\sigma b?\in\mathit{traces}\left(\mathcal{A}_{s}\right)$. Assume $P_{H''}\left(\sigma\right)=p$ and thus $$\begin{aligned}
\alpha & = & \sum_{b?\in L_{I}}P_{H''}\left(\sigma b?\right)=P_{H''}\left(\sigma b_{1}?\right)+\ldots+P_{H''}\left(\sigma b_{n}?\right)=p\alpha_{1}+\ldots+p\alpha_{n}\\
 & \overset{!}{=} & P_{\mathit{trd}\left(E'\right)}\left(\sigma b_{1}?\right)+\ldots+P_{\mathit{trd}\left(E'\right)}\left(\sigma b_{n}?\right).\end{aligned}$$ However, since $\mathit{trd}\left(E\right)\sqsubseteq_{k}H''$, we also have $P_{\mathit{trd}\left(E\right)}\left(\sigma\right)=p$.
The last detail not yet specified about $E'$ is its behaviour on paths of length $k+1$ ending in an input transition. We demonstrate the choice of $E'$ for $p\alpha_{1}\overset{!}{=}P_{\mathit{trd}\left(E'\right)}\left(\sigma b_{1}?\right)$, and denote the associated paths by $\left\{ \pi_{1},\ldots,\pi_{n}\right\} =\mathit{trace}^{-1}\left(\sigma\right)$. Furthermore, let $\pi_{i}':=\pi_{i}\mu b_{1}?s_{i_{j}}$ for the states $s_{i_{j}}\in S$, $j=1,\ldots,l$, that are reachable after $\pi_{i}$ via a distribution $\mu$ containing $b_{1}?$. Thus we want
$$\begin{aligned}
p\alpha_{1} & \overset{!}{=} & P_{\mathit{trd}\left(E'\right)}\left(\sigma b_{1}?\right)=\sum_{i=1}^{n}\sum_{j=1}^{l}Q^{E'}\left(\pi_{i}'\right)\\
 & = & \sum_{i=1}^{n}\sum_{j=1}^{l}\underset{=p}{\underbrace{Q^{E'}\left(\pi_{i}\right)}}\underset{=:\alpha_{1}}{\underbrace{E'\left(\pi_{i}\right)\left(\mu\right)}}\mu\left(b_{1}?,s_{i_{j}}\right)\\
 & = & p\alpha_{1}\underset{=1}{\underbrace{\sum_{i=1}^{n}\sum_{j=1}^{l}\mu\left(b_{1}?,s_{i_{j}}\right)}}.\end{aligned}$$
We can do the same for all $\alpha_{i}$, $i=1,\ldots,n$. Note that the choice of the adversary does *not* depend on the chosen trace $\sigma$ but solely on the presupposed behaviour of $H''$. Thus we have found $E'\in\mathit{adv}\left(\mathcal{A}_{s},k+1\right)$ such that $\mathit{trd}\left(E'\right)=H''$. Hence $H''\in\mathit{trd}\left(\mathcal{A}_{s},k+1\right)$, which ends the induction. Since this is possible for every $m\in\mathbb{N}$, we get $\mathcal{A}_i\sqsubseteq_{\mathit{TD}}\mathcal{A}_s$, ending the proof.\
$"\Longleftarrow"$ See Lemma \[lem:(weak implication)\] for the proof. In particular, we do not even require input enabledness for $\mathcal{A}_{s}$ in this case.

Next, we show that, under some input-enabledness restrictions, the [[pioco]{}]{}relation is transitive. Again, note that this also holds for [[ioco]{}]{}on non-probabilistic systems.
(Transitivity of pioco) Let $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$ be pQTS, such that $\mathcal{A}$ and $\mathcal{B}$ are input enabled, then
$$\mathcal{A}\sqsubseteq_{\mathit{pioco}}\mathcal{B}\wedge\mathcal{B}\sqsubseteq_{\mathit{pioco}}\mathcal{C}\Longrightarrow\mathcal{A}\sqsubseteq_{\mathit{pioco}}\mathcal{C}.$$
Let $\mathcal{A}\sqsubseteq_{\mathit{pioco}}\mathcal{B}$ and $\mathcal{B}\sqsubseteq_{\mathit{pioco}}\mathcal{C}$, and let $\mathcal{A}$ and $\mathcal{B}$ be input enabled. By Theorem \[thm:(pioco =000026 TD)\] we know that $\mathcal{A}\sqsubseteq_{\mathit{TD}}\mathcal{B}$. So let $k\in\mathbb{N}$ and $H^{*}\in\mathit{trd}\left(\mathcal{A},k\right)$. Consequently also $H^{*}\in\mathit{trd}\left(\mathcal{B},k\right)$ and thus the following chain of inclusions holds
$$\mathit{outcont}\left(H^{*},\mathcal{A},k\right)\subseteq\mathit{outcont}\left(H^{*},\mathcal{B},k\right)\subseteq\mathit{outcont}\left(H^{*},\mathcal{C},k\right),$$ and thus $\mathcal{A}\sqsubseteq_{\mathit{pioco}}\mathcal{C}$.
Testing for pQTS {#sec:Testing-for-pQTS}
================
Test cases for pQTSs.
---------------------
We will consider tests as sets of traces based on an action signature $\left(L_I,L_O^\delta\right)$, which describe the possible behaviour of the tester. This means that at each state in a test case, the tester either provides stimuli or waits for a response of the system. In addition to output conformance testing as in [@TimmerStoelingaBrinksma], we introduce probabilities into our testing transition system. Thus we can represent each test case as a pQTS, albeit with a mirrored action signature $\left(L_O,L_I\cup\left\{\delta\right\} \right)$. This is necessary for the parallel composition of the test pQTS and the SUT.
Since we consider tests to be pQTS, we also use all the terminology introduced earlier. Additionally, we require tests to contain neither loops nor infinite paths.
\[def:(Test)\] A *test (directed acyclic graph)* over an action signature $\left(L_{I},L_{O}^{\delta}\right)$ is a pQTS of the form $t=\left(S,s_{0},L_{O},L_{I}\cup\left\{ \delta\right\} ,\Delta\right)$ such that
- $t$ is internally deterministic and does not contain an infinite path;
- $t$ is acyclic and connected;
- For every state $s\in S$, we either have\
- $\mathit{after}\left(s\right)=\emptyset$, or\
- $\mathit{after}\left(s\right)=L_{I}\cup\left\{ \delta\right\} $, or\
- $\mathit{after}\left(s\right)=\left\{ a!\right\} \cup L_{I}\cup\left\{ \delta\right\} $ for some $a!\in L_{O}$.
A test suite $T$ is a set of tests over an action signature $\left(L_{I},L_{O}^{\delta}\right)$. We write $\mathcal{T}\left(L_{I},L_{O}^{\delta}\right)$ to denote all the tests over an action signature $\left(L_{I},L_{O}^{\delta}\right)$ and $\mathcal{TS}\left(L_{I},L_{O}^{\delta}\right)$ as the set of all test suites over an action signature respectively.
For a given specification pQTS $\mathcal{A}_{s}=\left(S,s_{0},L_{I},L_{O}^{\delta},\Delta\right)$, we say that a test $t$ is a *test for* $\mathcal{A}_{s}$, if it is based on the same action signature $\left(L_{I},L_{O}^{\delta}\right)$. Similar to before, we denote all tests for $\mathcal{A}_{s}$ by $\mathcal{T}\left(\mathcal{A}_{s}\right)$ and all test suites by $\mathcal{TS}\left(\mathcal{A}_{s}\right)$ respectively.
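The branching condition on test states can be checked mechanically. The sketch below is our own encoding (function and variable names are illustrative): `enabled` is the set of actions enabled in a state, `observations` plays the role of $L_{I}\cup\left\{ \delta\right\}$ and `stimuli` the role of $L_{O}$, following the literal condition in the definition.

```python
def valid_test_state(enabled, observations, stimuli):
    """A test state enables nothing, all observations, or all
    observations plus exactly one stimulus a!."""
    enabled, observations = set(enabled), set(observations)
    if not enabled:
        return True                      # after(s) is empty
    if enabled == observations:
        return True                      # after(s) = L_I + {delta}
    extra = enabled - observations
    # after(s) = {a!} + L_I + {delta} for exactly one stimulus a!
    return observations <= enabled and len(extra) == 1 and extra <= set(stimuli)
```

A whole candidate test then is well formed in this respect iff every state passes the check.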
Note that we mirrored the action signature for tests, as can be seen in Figure \[fig:Specification\] and Figure \[fig:Test\] respectively. This is because we require tests and implementations to synchronize on shared actions. Quiescence plays a special role in the context of parallel composition, since the composed system is considered quiescent if and only if both components are quiescent.
We will proceed to define parallel composition. Formally this means that output actions of one component are allowed to be present as input actions of the other component. These will be synchronized upon. However, keeping in mind the mirrored action signature of tests, we wish to avoid possibly unwanted synchronization, which is why we introduce system compatibility.
(Compatibility) Two pQTS $\mathcal{A}=\left( S,s_{0},L_{I},L_{O}^{\delta},\Delta\right) $ and $\mathcal{A}'=\left( S',s_{0}',L_{I}',L_{O}^{\delta\prime},\Delta'\right) $ are said to be *compatible* if $L_{O}^{\delta}\cap L{}_{O}^{\delta\prime}=\left\{ \delta\right\} $.
When we put two pQTSs in parallel, they synchronize on shared actions and evolve independently on others. Since the transitions taken by the two components of the composition are stochastically independent, we multiply the probabilities when taking shared actions.
\[def:(Parallel-Composition)\](Parallel composition) Given two compatible pQTS $\mathcal{A}=\left( S,s_{0},L_{I},L_{O}^{\delta},\Delta\right) $ and $\mathcal{A}'=\left( S',s_{0}',L_{I}',L_{O}^{\delta\prime},\Delta'\right) $, their *parallel composition* is the tuple $$\mathcal{A}\mid\mid\mathcal{A}'=\left( S'',s_{0}'',L_{I}'',L_{O}^{\delta\prime\prime},\Delta''\right) ,$$ where
$S''=S\times S'$,
$s_{0}''=\left(s_{0},s_{0}'\right)$,
$L_{I}''=\left(L_{I}\cup L_{I}'\right)\backslash\left(L_{O}\cup L_{O}^\prime\right)$,
$L_{O}^{\delta\prime\prime}=L_{O}^{\delta}\cup L_{O}^{\delta\prime}$,
$\Delta''=\{\left(\left(s,t\right),\mu\right)\in S''\times\mathit{Distr}\left(L''\times S''\right)\mid$ $$\mu\left(a,\left(s',t'\right)\right)\equiv\begin{cases}
\mu_{a}\left(a,s'\right)\nu_{a}\left(a,t'\right) & \mbox{if }a\in L\cap L'\mbox{, where }s\overset{\mu_{a},a}{\longrightarrow}_{\mathcal{A}}s'\wedge t\overset{\nu_{a},a}{\longrightarrow}_{\mathcal{A}'}t'\\
\mu_{a}\left(a,s'\right) & \mbox{if }a\in L\backslash L'\mbox{, where }s\overset{\mu_{a},a}{\longrightarrow}_{\mathcal{A}}s'\wedge t=t'\\
\nu_{a}\left(a,t'\right) & \mbox{if }a\in L'\backslash L\mbox{, where }s=s'\wedge t\overset{\nu_{a},a}{\longrightarrow}_{\mathcal{A}'}t'\\
0 & \mbox{otherwise}
\end{cases}\}$$ where $\mu_{a}\in\mathit{Distr}\left(L,S\right)$ and $\nu_{a}\in\mathit{Distr}\left(L',S'\right)$ respectively.
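The multiplication of probabilities on shared actions can be sketched as follows. This is an illustrative one-step composition of two transition distributions (our own simplification: interleaving of non-shared actions is resolved by an adversary and omitted here):

```python
def compose_shared(mu, nu, shared):
    """Compose a distribution mu over (action, s') with a distribution
    nu over (action, t'): on a shared action both components move
    together, so probabilities multiply and the target states are paired."""
    result = {}
    for (a, s2), pa in mu.items():
        for (b, t2), pb in nu.items():
            if a == b and a in shared:
                key = (a, (s2, t2))
                result[key] = result.get(key, 0.0) + pa * pb
    return result
```

For instance, composing a probabilistic branch with a Dirac distribution on the same action preserves the branching probabilities in the product.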
Before we compose a test case with a system in parallel, we need to define which outcomes of a test case are considered correct and which are not (i.e., when it fails).
(Test case annotation) For a given test $t$ a *test annotation* is a function
$$a:\mathit{ctraces}\left(t\right)\longrightarrow\left\{ \mathit{pass},\mathit{fail}\right\} .$$ A pair $\hat{t}=\left(t,a\right)$ consisting of a test and a test annotation is called an *annotated test*. A set $\hat{T}=\left\{ \left(t_{i},a_{i}\right)\mid i\in\mathcal{I}\right\} $ of annotated tests, for some index set $\mathcal{I}$, is called an *annotated test suite*. If $t$ is a test case for a specification $\mathcal{A}_{s}$ we define the [[pioco]{}]{}test annotation $a_{\mathcal{A}_{s},t}^{\mathit{pioco}}:\mathit{ctraces}\left(t\right)\longrightarrow\left\{ \mathit{pass},\mathit{fail}\right\} $ by $$a_{\mathcal{A}_{s},t}^{\mathit{pioco}}\left(\sigma\right)=\begin{cases}
\mathit{fail} & \mbox{if }\exists\sigma_{1}\in\mathit{traces}\left(\mathcal{A}_{s}\right),a!\in L_{O}^{\delta}:\sigma_{1}a!\sqsubseteq\sigma\wedge\sigma_{1}a!\notin\mathit{traces}\left(\mathcal{A}_{s}\right);\\
\mathit{pass} & \mbox{otherwise.}
\end{cases}$$
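For finite, prefix-closed specification trace sets, this annotation can be computed directly. The sketch below is our own encoding of the definition (trace tuples and the names `spec_traces` and `outputs` are illustrative):

```python
def annotate(sigma, spec_traces, outputs):
    """fail iff some prefix sigma_1 a! of sigma leaves the specification,
    where sigma_1 is a specified trace and a! is an output action."""
    for i in range(1, len(sigma) + 1):
        last = sigma[i - 1]
        if (last in outputs and sigma[:i - 1] in spec_traces
                and sigma[:i] not in spec_traces):
            return "fail"
    return "pass"
```

Note that a continuation after an unspecified input never triggers `fail`, matching the underspecification convention of [[ioco]{}]{}.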
Test execution.
---------------
By taking the intersection of all complete traces of a test and all traces of an implementation, we define the set of traces that can be executed by an annotated test case.
\[def:(Test-execution)\](Test execution) Let $t$ be a test over the action signature $\left(L_{I},L_{O}^{\delta}\right)$ and let $\mathcal{A}_{i}=\left(S,s_{0},L_{I},L_{O}^{\delta},\Delta\right)$ be a pQTS. Then we define
$$\mathit{exec}_{t}\left(\mathcal{A}_{i}\right)=\mathit{traces}\left(\mathcal{A}_{i}\right)\cap\mathit{ctraces}\left(t\right).$$
Consider the specification of a shuffle music player and a derived test for it given in Figure \[fig:SpecAndTest\]. Assume we want to test whether the following two implementations conform to the specification with respect to [[pioco]{}]{}:
*(Figure: the implementation $\mathcal{A}_{i_1}$ outputs $\mathit{StartSong1!}$ in its initial state and becomes quiescent after receiving $\mathit{shuffle?}$; the implementation $\mathcal{A}_{i_2}$, after $\mathit{shuffle?}$, outputs one of $\mathit{Song1!},\ldots,\mathit{SongN!}$ and can then output $\mathit{done!}$ or accept $\mathit{shuffle?}$ again.)*
Here $p_{1},\ldots,p_{N}\in\left[0,1\right]$ such that $\sum_{i=1}^{N}p_{i}=1$. Now when we compose $\mathcal{A}_{i_{1}}$ with $t$ in Figure \[fig:Test\], we can clearly see that every complete trace of the parallel system is annotated with $\mathit{fail}$, as would also be the case in classical [[ioco]{}]{}theory. However, if we compose $\mathcal{A}_{i_{2}}$ with the same test $t$, every trace of the composed system is given a $\mathit{pass}$ label if we restrict ourselves to the annotation function and the output verdict. Note how every trace $\mathit{shuffle?}\cdot\mathit{Song\_}i!$ is given probability $p_{i}$ for $i=1,\ldots,N$. The only restriction we imposed on $p_{1},\ldots,p_{N}$ is that they sum up to $1$, so a valid distribution for $\mathcal{A}_{i_{2}}$ would be $p_{1}=\frac{N-1}{N}$ and $p_{2}=\ldots=p_{N}=\frac{1}{N\left(N-1\right)}$. This, however, should intuitively not be given the verdict $\mathit{pass}$, since it differs from the uniform distribution given in the specification $\mathcal{A}_{s}$.
Test evaluation
---------------
In order to give a verdict of whether or not the implementation passed the test (suite), we need to extend the test evaluation process of classical [[ioco]{}]{}testing with a statistical component. Thus the evaluation of probabilistic systems becomes twofold. On the one hand, we require that no unexpected output (or unexpected quiescence) ever occurs during the execution. On the other hand, we require the observed frequencies of the SUT to conform in some way to the probabilities described in the specification. The SUT passes the test suite only if it passes both criteria. We achieve this by augmenting classical [[ioco]{}]{}theory with null hypothesis testing, which is discussed in the following.
To conduct an experiment, we first need to fix a length $k\in\mathbb{N}$ and a width $m\in\mathbb{N}$. These determine how long the recorded traces should be and how many times we reset the machine. This gives us traces $\sigma_1,\ldots,\sigma_m\in L^k$, which we call a *sample*. Additionally, we assume that the implementation is governed by an underlying trace distribution $H$ in every run; thus running the machine $m$ times gives us a sequence of possibly $m$ different trace distributions $\vec{H}=H_1,\ldots,H_m$. So in every run the implementation makes two choices: 1) it chooses the trace distribution $H$, and 2) $H$ chooses a trace $\sigma$ to execute. Consequently, once a trace distribution $H_i$ is chosen, it is solely responsible for the trace $\sigma_i$. Thus for $i\neq j$ the choice of $\sigma_i$ is independent of the choice of $\sigma_j$.
Our statistical analysis is built upon the frequencies of traces occurring in a sample $O$. The *frequency function* is defined as $$\mathit{freq}\left(O\right)\left(\sigma\right)=\frac{\left|\left\{ i\in\left\{ 1,\ldots,m\right\} \mid\sigma_{i}=\sigma\right\} \right|}{m}.$$ Note that although every run is governed by a possibly different trace distribution, we can still derive useful information from the frequency function. For fixed $k,m\in\mathbb{N}$ and $\vec{H}$, the sample $O$ can be treated as a Bernoulli experiment of length $m$, where success occurs in position $i=1,\ldots,m$ if $\sigma=\sigma_i$. The success probability is then given by $P_{H_i}\left(\sigma\right)$. So for given $\vec{H}$, the expected value for $\sigma$ is given by $\mathbb{E}^{\vec{H}}_\sigma=\frac{1}{m}\sum_{i=1}^m P_{H_i}\left(\sigma\right)$. Note that $\mathbb{E}^{\vec{H}}$ is the expected distribution over $L^k$ under the $m$ trace distributions $\vec{H}$.
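A minimal sketch of the frequency function and the expected distribution, with traces as tuples and trace distributions as dictionaries (our own helper names):

```python
def freq(sample):
    """Empirical distribution freq(O) of a sample of m traces."""
    m = len(sample)
    out = {}
    for sigma in sample:
        out[sigma] = out.get(sigma, 0.0) + 1.0 / m
    return out

def expected(H_vec, traces):
    """E^H_sigma = (1/m) * sum_i P_{H_i}(sigma) over the given traces."""
    m = len(H_vec)
    return {s: sum(H.get(s, 0.0) for H in H_vec) / m for s in traces}
```

When all $H_i$ coincide, the expected distribution is simply that common distribution, as in the fair-coin example below.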
In order to apply null hypothesis testing and compare an observed distribution with $\mathbb{E}^{\vec{H}}$, we use the notion of metric spaces. This enables us to measure the deviation of two distributions. We use the metric space $\left(\mathit{Distr}\left(L^{k}\right),\mathit{dist}\right)$, where $\mathit{dist}$ is the Euclidean distance of two distributions, defined as $\mathit{dist}\left(\mu,\nu\right)=\sqrt{\sum_{\sigma\in L^k}\left|\mu\left(\sigma\right)-\nu\left(\sigma\right)\right|^2}$.
Now that we have a measure of deviation, we can say that a sample $O$ is accepted if $\mathit{freq}\left(O\right)$ lies in some distance $r$ of the expected value $\mathbb{E}^{\vec{H}}$, or equivalently if $\mathit{freq}\left(O\right)$ is contained in the closed ball $B_r\left(\mathbb{E}^{\vec{H}}\right)=\left\{\nu\in\mathit{Distr}\left(L^{k}\right)\mid \mathit{dist}\left(\nu,\mathbb{E}^{\vec{H}}\right)\leq r\right\}$. Then the set $\mathit{freq}^{-1}\left(B_r\left(\mathbb{E}^{\vec{H}}\right)\right)$ summarizes all samples that deviate at most $r$ from the expected value.
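Deviation and acceptance can then be checked as follows (a sketch using the same dictionary representation as before; the names are our own):

```python
from math import sqrt

def dist(mu, nu):
    """Euclidean distance between two distributions over L^k."""
    support = set(mu) | set(nu)
    return sqrt(sum((mu.get(s, 0.0) - nu.get(s, 0.0)) ** 2 for s in support))

def in_ball(observed_freq, expectation, r):
    """freq(O) lies in B_r(E): the sample deviates at most r."""
    return dist(observed_freq, expectation) <= r
```

Note that on two-point supports a frequency deviation of $d$ in each coordinate yields a Euclidean distance of $d\sqrt{2}$.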
Inherent problems of hypothesis testing are the type I and type II errors, i.e. the probabilities of falsely accepting or falsely rejecting the hypothesis. In our framework this is addressed by the choice of a level of significance $\alpha\in\left[0,1\right]$ and, connected with it, the choice of the radius $r$ of the ball mentioned above. For a given level of significance $\alpha$, the following choice of radius minimizes, in some sense, the probability of falsely accepting an erroneous sample, while keeping the probability of falsely rejecting a valid sample at most $\alpha$: $$\bar{r}:=\inf\left\{r\mid P_{\vec{H}}\left(\mathit{freq}^{-1}\left(B_r\left(\mathbb{E}^{\vec{H}}\right)\right)\right)>1-\alpha\right\}.$$ Thus, assuming we have $m$ underlying trace distributions, we can determine when an observed sample is deemed reasonable and declared valid. Taking the union over all such $\vec{H}$, we define the total set of acceptable outcomes, called *observations*.
The *acceptable outcomes* of $\vec{H}$ with significance level $\alpha\in\left[0,1\right]$ are given by the set of samples of length $k\in\mathbb{N}$ and width $m\in\mathbb{N}$, defined as $$\mathit{Obs}\left(\vec{H},\alpha\right):=\mathit{freq}^{-1}\left(B_{\bar{r}}\left(\mathbb{E}^{\vec{H}}\right)\right)=\left\{O\in \left(L^k\right)^m\mid \mathit{dist}\left(\mathit{freq}\left(O\right),\mathbb{E}^{\vec{H}}\right)\leq\bar{r}\right\}.$$ The set of *observations* of $\mathcal{A}$ with significance level $\alpha\in\left[0,1\right]$ is given by $$\mathit{Obs}\left(\mathcal{A},\alpha\right)=\bigcup_{\vec{H}\in\mathit{trd}\left(\mathcal{A},k\right)^m}\mathit{Obs}\left(\vec{H},\alpha\right).$$
*(Figure \[fig(fairCoin)\]: a pQTS modelling the toss of a fair coin: after input $a?$ it moves with probability $\frac{1}{2}$ each to a state outputting $b!$ and a state outputting $c!$.)*
Assume that the desired level of significance is given by $\alpha=0.05$ and consider the probabilistic automaton in Figure \[fig(fairCoin)\] representing the toss of a fair coin. Furthermore, assume that we are given two samples of length $k=2$ and width $m=100$.
To analyse this case, let $E$ be the adversary that assigns probability $1$ to the unique outgoing transition (if there is one) and probability $1$ to halting in case there is no outgoing transition. We take $H=\mathit{trd}\left(E\right)$ and see that $\mu_H\left(a?b!\right)=\mu_H\left(a?c!\right)=\frac{1}{2}$ and $\mu_H\left(\sigma\right)=0$ for all other sequences $\sigma$. We define $H^{100}=\left(H_{1},\ldots,H_{100}\right)$, where $H_1=\ldots=H_{100}=H$. As we can see, we have $\mathbb{E}^{H^{100}}=\mu_H$. Since $\mu_H$ only assigns positive probability to $a?b!$ and $a?c!$, we get $P_{H^{100}}\left(\mathit{freq}^{-1}\left(B_r\left(\mu_H\right)\right)\right)=P_{H^{100}}\left(\left\{O\mid\frac{1}{2}-r\leq\mathit{freq}\left(O\right)\left(a?b!\right)\leq\frac{1}{2}+r\right\}\right)$. One can show that the smallest ball for which this probability is at least $0.95$ has radius $\bar{r}=\frac{1}{10}$.
Thus a sample $O_1$, which consists of $42$ times $a?b!$ and $58$ times $a?c!$ is an observation, and a sample $O_2$, which consists of $38$ times $a?b!$ and $62$ times $a?c!$ is not.
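The value $\bar{r}=\frac{1}{10}$ can be checked numerically. The sketch below measures deviation on the frequency of $a?b!$ alone, as the example does, and searches the grid of achievable deviations $i/m$ under the binomial law of the fair coin:

```python
from math import comb

m, p, alpha = 100, 0.5, 0.05

def pmf(i):
    """Binomial(m, p) probability of observing exactly i times a?b!."""
    return comb(m, i) * p ** i * (1 - p) ** (m - i)

def coverage(r):
    """P(|freq(a?b!) - 1/2| <= r) under the fair-coin trace distribution."""
    return sum(pmf(i) for i in range(m + 1) if abs(i / m - 0.5) <= r)

# smallest achievable radius with coverage above 1 - alpha
r_bar = min(i / m for i in range(m + 1) if coverage(i / m) > 1 - alpha)
```

With $\bar{r}=0.1$, the sample with $42$ occurrences of $a?b!$ deviates by $0.08$ and is accepted, while the one with $38$ occurrences deviates by $0.12$ and is rejected.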
Thus we can finally define a verdict function that assigns *pass* when a test case never finds erroneous behaviour (i.e. wrong output or wrong probabilistic behaviour).
(Output verdict) Let $\left(L_{I},L_{O}^{\delta}\right)$ be an action signature and $\hat{t}=\left(t,a\right)$ an annotated test case over $\left(L_{I},L_{O}^{\delta}\right)$. The *output verdict function* for $\hat{t}$ is the function $v_{\hat{t}}:pQTS\rightarrow\left\{ \mathit{pass},\mathit{fail}\right\} $, given for any pQTS $\mathcal{A}_{i}$ by $$v_{\hat{t}}\left(\mathcal{A}_{i}\right)=\begin{cases}
\mathit{pass} & \mbox{if }\forall\sigma\in\mathit{exec}_{t}\left(\mathcal{A}_{i}\right):a\left(\sigma\right)=\mathit{pass}\\
\mathit{fail} & \mbox{otherwise}
\end{cases}.$$ (Statistical verdict) Additionally, let $\alpha\in\left[0,1\right]$, $k,m\in\mathbb{N}$, and $O\in\mathit{Obs}\left(\mathcal{A}_{i}||\hat{t},\alpha\right)\subseteq\left(L^{k}\right)^{m}$; then the *statistical verdict function* is given by $$v_{\hat{t}}^{\alpha}\left(\mathcal{A}_{i}\right)=\begin{cases}
\mathit{pass} & \mbox{if }O\in\mathit{Obs}\left(\mathcal{A}_{s},\alpha\right)\\
\mathit{fail} & \mbox{otherwise }
\end{cases}.$$ (Verdict function) For any given $\mathcal{A}_{i}$, we assign the *verdict* $$V_{\hat{t}}^{\alpha}\left(\mathcal{A}_{i}\right)=\begin{cases}
\mathit{pass} & \mbox{if }v_{\hat{t}}\left(\mathcal{A}_{i}\right)=v_{\hat{t}}^{\alpha}\left(\mathcal{A}_{i}\right)=\mathit{pass}\\
\mathit{fail} & \mbox{otherwise}
\end{cases}.$$ We extend $V_{\hat{t}}^{\alpha}$ to a function $V_{\hat{T}}^{\alpha}:pQTS\rightarrow\left\{ \mathit{pass},\mathit{fail}\right\} $, which assigns verdicts to a pQTS based on a given annotated test suite, by $V_{\hat{T}}^{\alpha}\left(\mathcal{A}_{i}\right)=\mathit{pass}$ if $V_{\hat{t}}^{\alpha}\left(\mathcal{A}_{i}\right)=\mathit{pass}$ for all $\hat{t}\in\hat{T}$, and $V_{\hat{T}}^{\alpha}\left(\mathcal{A}_{i}\right)=\mathit{fail}$ otherwise.
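The three verdict definitions combine as a simple conjunction. A minimal sketch (illustrative names only; the pQTS machinery is abstracted into plain Python functions and sets, not an implementation of the full formalism):

```python
# Functional verdict: pass iff every executed trace is annotated "pass".
def output_verdict(executions, annotation):
    return "pass" if all(annotation(s) == "pass" for s in executions) else "fail"

# Statistical verdict: pass iff the sampled observation lies in the acceptable set.
def statistical_verdict(observation, acceptable_obs):
    return "pass" if observation in acceptable_obs else "fail"

# Combined verdict: pass iff both the functional and the statistical check pass.
def verdict(executions, annotation, observation, acceptable_obs):
    ok = (output_verdict(executions, annotation) == "pass"
          and statistical_verdict(observation, acceptable_obs) == "pass")
    return "pass" if ok else "fail"
```

Extending to a test suite is then just a further conjunction over all annotated test cases.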
Conclusion and Future Work {#sec(FutureWork)}
==========================
We introduced the core of a probabilistic test theory by extending classical [[ioco]{}]{} theory. First, we defined the conformance relation [[pioco]{}]{} for probabilistic quiescent transition systems and proved several of its characteristic properties; in particular, we showed that [[pioco]{}]{} is a conservative extension of [[ioco]{}]{}. Second, we provided definitions of test cases, test execution and test evaluation. Here, test execution is crucial, since it needs to assess whether the observed behaviour respects the probabilities in the specification pQTS. Following [@CSV07], we have used statistical hypothesis testing for this purpose.
Being a first step, this work leaves ample room for future research. First, it is important to establish the correctness of the testing framework by proving its soundness and completeness. Second, we would like to implement our framework in the MBT tool JTorX and test realistic applications. Also, we would like to extend our theory to handle $\tau$-transitions. Finally, we think that tests themselves should be probabilistic, in particular since many MBT tools in practice already choose their next action probabilistically.
Appendix {#appendix .unnumbered}
========
Below, we present the proofs of our theorems.
Proofs {#proofs .unnumbered}
======
---
abstract: 'We have developed a material specific theoretical framework for modelling scanning tunneling spectroscopy (STS) of high temperature superconducting materials in the normal as well as the superconducting state. Results for $Bi_2Sr_2CaCu_2O_{8+\delta}$ (Bi2212) show clearly that the tunneling matrix element strongly modifies the STS spectrum from the LDOS of the $d_{x^2-y^2}$ orbital of Cu. The dominant tunneling channel to the surface Bi involves the $d_{x^2-y^2}$ orbitals of the four neighbouring Cu atoms. In accord with experimental observations, the computed spectrum displays a remarkable asymmetry between the processes of electron injection and extraction, which arises from contributions of Cu $d_{z^2}$ and other orbitals to the tunneling current.'
author:
- 'Jouko Nieminen$^{1,2}$'
- 'Hsin Lin$^{2}$'
- 'R.S. Markiewicz$^{2}$'
- 'A. Bansil$^{2}$'
date: Version of
title: 'Importance of Matrix Element Effects in the Scanning Tunneling Spectra of High-Temperature Superconductors'
---
Scanning tunneling spectroscopy (STS) has become a powerful probe of high-temperature superconductors by offering atomic-scale spatial resolution in combination with high energy resolution. The physics of these materials is dominated by the cuprate layers, which are usually not exposed to the tip of the apparatus. However, much of the existing interpretation of the spectra is based on the assumption that the STS spectrum is directly proportional to the local density of states (LDOS) of the CuO$_ 2$ layer, neglecting the effects of the tunneling matrix element in the presence of the insulating overlayers. Here, we focus on the Bi2212 system, which has been the subject of an overwhelming amount of experimental work[@McElroy; @Hudson; @Yazdani; @Review; @expgap], although our results bear more generally on the STS spectra of the cuprates. We take into account the fact that the current originating in the CuO$_{2}$ layers reaches the tip after being ‘filtered’ through the overlayers of SrO and BiO, and show that instead of being a simple reflection of the LDOS of the CuO$_{2}$ layers, the STS signal represents a complex mapping of the electronic structure of the system.
![(a) Side view showing tip placed schematically on top of seven layers used to compute the tunneling spectrum of Bi2212, where the surface terminates in the BiO layer. Tunneling signal from the conducting CuO$_2$ layers reaches the tip after passing through the filtering layers of SrO and BiO. (b) Top view of the surface showing the arrangement of various atoms. Eight two-dimensional real space primitive unit cells used in the present computations are marked by dashed lines.[]{data-label="geometric"}](fig1.eps){width="50.00000%"}
In order to construct a realistic framework capable of describing the tunneling spectrum of the normal as well as the superconducting state of the cuprates, we start with the normal state Hamiltonian $$\hat{H}_1 = \sum_{\alpha\beta\sigma}
\left[\varepsilon_{\alpha}c^{\dagger}_{\alpha \sigma} c_{\alpha \sigma}+
V_{\alpha \beta}
c^{\dagger}_{\alpha \sigma} c_{\beta\sigma}\right],\label{H1}$$ which describes a system of tight-binding orbitals created (or annihilated) via the real-space operators $c^{\dagger}_{\alpha
\sigma}$ (or $c_{\alpha \sigma}$). Here $\alpha$ is a composite index denoting both the type of orbital (e.g. Cu $d_{x^2-y^2}$) and the site on which this orbital is placed, and $\sigma$ is the spin index. $\epsilon_\alpha$ is the on-site energy of the $\alpha^{th}$ orbital. $\alpha$ and $\beta$ orbitals interact with each other through the potential $V_{\alpha\beta}$ to create the energy eigenstates of the entire system.
Superconductivity is included by adding a pairing interaction term $\Delta$ in the Hamiltonian of Eq. \[H1\] as follows $$\hat{H} = \hat{H}_1 + \sum_{\alpha \beta
\sigma} \left[\Delta_{\alpha \beta} c^{\dagger}_{\alpha \sigma}
c^{\dagger}_{\beta -\sigma} + \Delta_{\beta \alpha}^{\dagger}
c_{\beta -\sigma} c_{\alpha \sigma} \right]
\label{hamiltonian}$$ We take $\Delta$ to be non-zero only between $d_{x^2 - y^2}$ orbitals of the nearest neighbor Cu atoms, and to possess a d-wave form, i.e. $\Delta$ is given in momentum space by $ \Delta_k = \frac{\Delta}{2}
\left[\cos{k_x a} - \cos{k_y a} \right],$ where $a$ is the in-plane lattice constant. This interaction allows electrons of opposite spin to combine into superconducting pairs such that the resulting superconducting gap is zero along the nodal directions $k_x=\pm k_y$, and is maximum along the antinodal directions.
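As a quick numerical sanity check of the d-wave form (setting $a=1$ for simplicity and using the gap value $\vert\Delta\vert=0.045$ eV adopted later in the paper), the gap indeed vanishes along the nodal directions and reaches $\vert\Delta\vert$ at the antinodal points:

```python
import math

Delta, a = 0.045, 1.0   # gap amplitude (eV) and in-plane lattice constant

def gap(kx, ky):
    """d-wave gap: Delta_k = (Delta/2) * (cos(kx*a) - cos(ky*a))."""
    return 0.5 * Delta * (math.cos(kx * a) - math.cos(ky * a))

# nodal direction k_x = k_y: gap vanishes
assert abs(gap(0.7, 0.7)) < 1e-12
# antinodal point (pi, 0): |gap| is maximal and equals Delta
assert abs(abs(gap(math.pi, 0.0)) - Delta) < 1e-12
```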
The Bi2212 sample is modeled as a slab of seven layers in which the topmost layer is BiO, followed by layers of SrO, CuO$_2$, Ca, CuO$_2$, SrO, and BiO, as shown in Fig. 1(a). The tunneling computations are based on a $2\sqrt{2} \times 2\sqrt{2}$ real space supercell consisting of 8 primitive surface cells with a total of 120 atoms (see Fig. 1(b)). The coordinates are taken from the tetragonal crystal structure of Ref. . A tetrahedral cluster of atoms attached to a thin slab simulates the tip and tip holder. The tip is allowed to scan across the substrate to generate the topographic STM map, or held fixed on top of a surface Bi atom for the STS spectra.
The tight-binding parameters are fitted to the LDA band structure of Bi2212 that underlies, for example, the extensive angle-resolved photointensity computations of Ref. . The Slater-Koster results[@Slater; @Harrison] are used to fix the angular dependence of the tight-binding overlap integrals. The specific orbital sets used for various atoms are: ($s,p_x,p_y,p_z$) for Bi and O; $s$ for Sr; and ($4s,d_{3z^2-r^2},d_{xy},d_{xz},d_{yz}, d_{x^2-y^2}$) for Cu atoms. This yields 58 orbitals in a primitive cell, used in band calculations, and a total of $464$ orbitals for Green function supercell calculations on 256 evenly distributed k-points. Finally, a gap parameter value of $\vert\Delta\vert = 0.045$ eV is chosen to model a typical experimental spectrum[@McElroy] for the generic purposes of this study.
The LDOS and tunneling computations are based on the Green function formalism. First, the normal-state Green function is constructed via Dyson’s equation using the methodology described in Ref. . At this stage a self-energy for orbital $\alpha$, $\Sigma^{\pm}_{\alpha} = \Sigma{'}_{\alpha} \pm i \Sigma{''}_{\alpha}$, is embedded in Dyson’s equation to account for possible effects of various bosonic couplings and correlation effects [@hogenboom; @VHS]. For simplicity, we have assumed the self-energy to be diagonal in the chosen basis. In building up the Green function in the superconducting state, we utilize the conventional BCS-type self-energy $\Sigma^{BCS}
= \Delta G^{h} \Delta^{\dagger}$ (see, e.g., Ref. ), where $G^{h}$ is the hole part of normal state Green function.
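The effect of the BCS-type self-energy can be illustrated on a single-orbital toy model (illustrative parameters of our own choosing, not the paper's supercell): dressing a level at energy $\varepsilon$ with $\Sigma^{BCS}(E)=\Delta^2/(E+\varepsilon)$ moves the poles of the Green function from $\pm\varepsilon$ to the quasiparticle energies $\pm\sqrt{\varepsilon^2+\Delta^2}$, i.e. a gap opens.

```python
import math

eps, Delta = 0.020, 0.045   # level energy and pairing amplitude (eV), illustrative

def G_inv(E):
    """Inverse dressed Green function:
    G^{-1}(E) = E - eps - Sigma_BCS(E), with Sigma_BCS(E) = Delta^2 / (E + eps),
    where 1/(E + eps) is the hole propagator of the bare level."""
    return E - eps - Delta**2 / (E + eps)

E_qp = math.sqrt(eps**2 + Delta**2)   # expected quasiparticle pole energy

# the dressed propagator has poles at +/- E_qp (G_inv vanishes there)
assert abs(G_inv(E_qp)) < 1e-12
assert abs(G_inv(-E_qp)) < 1e-12
```

This mirrors, in the simplest possible setting, the mirrored quasiparticle spectrum with a doubled number of bands discussed below.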
Fig. 2 shows the calculated band structure of Bi2212 in the normal as well as the superconducting state from Hamiltonians of Eqs. \[H1\] and \[hamiltonian\]. The normal state is seen to properly display the major features such as: The pair of CuO$_{2}$ bands crossing the Fermi energy ($E_F$) with the associated van Hove singularities (VHS’s) marked VHS-a (antibonding) and VHS-b (bonding), split by 250 meV at the $(\pi,0)$ point; BiO bands lying about 1 eV above $E_F$; and the ‘spaghetti’ of bands involving various Cu and O orbitals starting at a binding energy of around 1 eV below $E_F$. Although states near $E_F$ are mainly of Cu $d_{x^2-y^2}$ and O $p_{x,y}$ character, they also contain some Bi and Cu $d_{z^2}$ admixture. In the superconducting state in Fig. 2(b), a quasiparticle spectrum mirrored through $E_F$ is obtained with a doubled number of bands due to the pairing interaction. A d-wave superconducting gap opens up in both CuO$_2$ bands near $E_F$. The quasiparticles have a mixed electron/hole character only near the edges of the gap.
To compute the tunneling spectra we apply the Todorov-Pendry expression [@Todorov; @Pendry] for the differential conductance $\sigma$ between orbitals of the tip ($t,t'$) and the sample($s,s'$), which in our case yields $$\sigma = \frac{dI}{dV} = \frac{2 \pi e }{ \hbar} \sum_{t t' s s'}
\rho_{tt'}(E_F)V_{t's} \rho_{ss'}^{}(E_F+eV)V_{s't}^{\dagger},
\label{conductance}$$ where the density matrix $\rho_{s s'} = -\frac{1}{\pi}\sum_{\alpha} G_{s \alpha}^{+} \Sigma{''}_{\alpha}
G_{\alpha s'}^{-}$ is, in fact, the spectral function written in terms of retarded/advanced Green function and the self-energy. Eq. \[conductance\] differs from the more commonly used Tersoff-Hamann approach[@Tersoff] in that it takes into account the details of the symmetry of the tip orbitals and their overlap with the surface orbitals.
The use of the spectral function recasts Eq. \[conductance\] into the form $$\sigma = \sum_{t \alpha} T_{t \alpha},
\label{transition}$$ where $$\begin{aligned}
T_{t \alpha} =& -\frac{2 e }{ \hbar}\sum_{t' s s'}
\rho_{tt'}(E_F)V_{t's}G^{+}_{s\alpha}\Sigma{''}_{\alpha}G^{-}_{\alpha
s'}V_{s't}^{\dagger},
\label{partial}\end{aligned}$$ and the Green functions and the self-energy are evaluated at energy $E =
E_F + e V_b.$ Eq. \[partial\] is similar to the Landauer-Büttiker formula for tunneling across nanostructures (see, e.g., Ref. ), and represents a slight reformulation of Refs. and .
The nature of Eq. \[partial\] can be understood straightforwardly: $G_{s\alpha}$ gives the amplitude with which electrons residing on the $\alpha^{th}$ orbital in the solid propagate to the surface at energy $E$ broadened by $\Sigma{''}_{\alpha}$. The term $V_{s t}$ is the overlap between the surface orbital and the tip, while $\rho_{t
t'}$ gives the available states at the tip. Hence, $T_{t \alpha}$ gives the contribution of the $\alpha^{th}$ orbital to the current, and the summation in Eq. \[transition\] collects these individual contributions to yield the total tunneling current which reaches the tip. Thus, selecting individual terms in Eq. \[partial\] provides a transparent scheme to define tunneling paths between the sample and the microscope tip.
Fig. \[pristine\](a) shows the tunneling spectra over the broad energy range of $\pm$ 1 eV. At high positive voltages, the computed spectrum (black line) is fairly structureless. At low energies, a gap accompanied by the characteristic peak-dip-hump features is observed. The calculations show the antibonding (VHS-a) and bonding (VHS-b) VHS’s[@VHS] as distinct structures, followed by a broad dip around -0.7 eV and a subsequent rise near -1 eV. In all these respects the present calculations follow the experiment (red line) of Ref. [@McElroy]. Moreover, theory reproduces the observed asymmetry of the tunneling spectrum with excess intensity at negative biases. The rapid increase in current at high binding energies results from the increasing spectral weight of Cu $d_{z^2}$ and other orbitals contributing to the ‘spaghetti’ of bands starting around 1 eV binding energy (see Fig. 2(a)). We emphasize that the LDOS of the Cu $d_{x^2-y^2}$ (green line in Fig. 3(a)) does not provide a good description of the spectrum. In particular, the Cu $d_{x^2-y^2}$ LDOS possesses an asymmetry which is opposite to that of the tunneling spectrum.
Fig. 3(d) gives a blow-up of the low-energy region of $\pm 0.2$ eV, shown by gray shading in Fig. 3(a). The computed spectrum is seen to reproduce the coherence peaks and the characteristic peak-dip-hump feature. The generic forms of the real and imaginary parts of the self-energies applied to the Cu $d_{x^2-y^2}$ orbitals (solid and dashed blue lines, respectively) and to the rest of the orbitals are given in the inset. Fig. 3(b) shows the computed ‘topographic map’ of the BiO surface in constant-current mode. Bi atoms appear as bright spots, in accord with experimental observations, while O atoms sit at the centers of dark regions.
![[*Main frame*]{}: Partial contributions to the tunneling current from various orbitals in the two cuprate layers. The CuO$_2$ layer closest to the tip is identified as layer 1 or L1, while the second layer is denoted by L2. Specific contributions are: $d_{x^2-y^2}$ orbitals of the four nearest neighbor Cu atoms (L1-nn, blue line); $d_{x^2-y^2}$ orbitals of the four next nearest neighbor Cu atoms of the first layer (L1-nnn, green line) ; $d_{x^2-y^2}$ orbitals of the Cu atoms of the second layer (L2, red line); $d_{z^2}$ of the central Cu atom of L1 (magenta line). [*Inset*]{}: Decomposition of the current from the second cuprate layer: Total contribution (red line); contribution of the four nearest neighbours (blue line); and the next nearest neighbours (green line).[]{data-label="channels"}](fig4.eps){width="50.00000%"}
An analysis of the partial terms of Eq. \[partial\] reveals that the $d_{x^2-y^2}$ orbital of the Cu atom lying directly under the Bi atom gives zero contribution to the current [@Balatsky]. The dominant contribution to the spectrum comes from the four nearest neighbor (NN) Cu atoms, as indicated schematically in Fig. 3(c). A detailed decomposition of Eq. \[partial\] is shown in Fig. \[channels\], where paths starting from the CuO$_2$ layer closest to the tip (L1), as well as from the second cuprate layer (L2), are considered. The signal from the cuprate layers is dominated by the $d_{x^2-y^2}$ orbitals on the four nearest neighbour (nn) Cu atoms in L1 up to about -0.7 eV (blue line). At higher binding energies, the contribution from the $d_{z^2}$ electrons of the Cu atom directly below the Bi atom and the tip grows rapidly (magenta line).
A smaller but still significant contribution comes from the four next nearest neighbour (nnn) $d_{x^2-y^2}$ orbitals in L1 spread over a wide energy range (green line, main figure), while the total current originating from the $d_{x^2-y^2}$ orbitals of L2 is quite localized over zero to -0.6 eV bias (red line, main figure). Fig. \[channels\] emphasizes the nature of the current associated with the cuprate layers and points out an intrinsic electron-hole asymmetry originating from the $d_{z^2}$ orbitals. We note however that the Bi and O orbitals in the surface Bi-O layer can also play a role in producing an asymmetric background current.
In conclusion, we find that the STS spectrum of Bi2212 is strongly modified from the LDOS of the Cu $d_{x^2-y^2}$ orbital by the effect of the tunneling matrix element. Much of the observed asymmetry of the spectrum can be explained within the conventional picture by the turning on of Cu $d_{z^2}$ and other channels with increasing (negative) bias voltage. This indicates that the effects of strong electronic correlations on the tunneling spectrum are more subtle than previously thought. However, we note that we have not analyzed spectra associated with the deeply underdoped regime, where charge order has been reported [@Kohsaka]. The present method naturally allows an analysis of the tunneling signal in terms of the possible tunneling channels and the related selection rules. Our scheme can be extended to incorporate effects of impurities and various nanoscale inhomogeneities by using appropriately larger basis sets in the computations.
[**Acknowledgments**]{}
This work is supported by the US Department of Energy, Office of Science, Basic Energy Sciences contract DE-FG02-07ER46352, and benefited from the allocation of supercomputer time at NERSC, Northeastern University’s Advanced Scientific Computation Center (ASCC), and the Institute of Advanced Computing, Tampere.
[99]{}
K. McElroy [*et al.*]{}, [*Science*]{} [**309**]{}, 1048 (2005).
E.W. Hudson [*et al.*]{}, [*Nature*]{} [**411**]{}, 920 (2001).
A.N. Pasupathy [*et al.*]{}, [*Science*]{} [**320**]{}, 196 (2008).
A.V. Balatsky [*et al.*]{}, [*Rev. Mod. Phys.*]{} [**78**]{}, 373 (2006).
Ø. Fischer [*et al.*]{}, [*Rev. Mod. Phys.*]{} [**79**]{}, 353 (2007).
V. Bellini, F. Manghi, T. Thonhauser, and C. Ambrosch-Draxl, [*Phys. Rev. B*]{} [**69**]{}, 184508 (2004).
H. Lin, S. Sahrakorpi, R.S. Markiewicz, and A. Bansil, [*Phys. Rev. Lett.*]{} [**96**]{}, 097001 (2006).
J.C. Slater and G.F. Koster, [*Phys. Rev.*]{} [**94**]{}, 1498 (1954).
W.A. Harrison, [*Electronic Structure and Properties of Solids.*]{} Dover, New York (1980).
J.A. Nieminen and S. Paavilainen, [*Phys. Rev. B*]{} [**60**]{}, 2921 (1999).
B.W. Hoogenboom, C. Berthod, M. Peter, O. Fischer, and A.A. Kordyuk, [*Phys. Rev. B*]{} [**67**]{}, 224502 (2003).
G. Levy de Castro [*et al.*]{}, [*cond-mat/0703131*]{} (2007).
A.L. Fetter and J.D. Walecka, [*Quantum Theory of Many-Particle Systems.*]{} Dover (2003).
T.N. Todorov [*et al.*]{}, [*J.Phys.: Condens. Matter*]{} [**5**]{}, 2389 (1993).
J.B. Pendry [*et al.*]{}, [*J.Phys.: Condens. Matter*]{} [**3**]{}, 4313 (1991).
J. Tersoff and D.R. Hamann, [*Phys. Rev. B*]{} [**31**]{}, 805 (1985).
Y. Meir and N.S. Wingreen, [*Phys. Rev. Lett.*]{} [**68**]{}, 2512 (1992).
H. Ness and A.J. Fisher, [*Phys. Rev. B*]{} [**56**]{}, 12469 (1997).
T. Frederiksen, M. Paulsson, M. Brandbyge, and A.-P. Jauho, [*Phys. Rev. B*]{} [**75**]{}, 205413 (2007).
I. Martin [*et al.*]{}, [*Phys. Rev. Lett.*]{} [**88**]{}, 097003 (2002).
Y. Kohsaka [*et al.*]{}, [*Science*]{} [**315**]{}, 1380 (2007).
---
author:
- Srabani Kar
- 'Dipti R. Mohapatra'
- Eric Freysz
- 'A. K. Sood'
title: 'Tuning Photoinduced Terahertz Conductivity in Monolayer Graphene: Optical Pump Terahertz Probe Spectroscopy'
---
---
abstract: 'As a first step in the computation of the orbital phase evolution of spinless compact binaries including tidal effects up to the next-to-next-to-leading (NNL) order, we obtain the equations of motion of those systems and the associated conserved integrals in harmonic coordinates. The internal structure and finite size effects of the compact objects are described by means of an effective Fokker-type action. Our results, complete to the NNL order, correspond to the second-post-Newtonian (2PN) approximation beyond the leading tidal effect itself, already occurring at the 5PN order. They are parametrized by three polarizability (or deformability) coefficients describing the mass quadrupolar, mass octupolar and current quadrupolar deformations of the objects through tidal interactions. Up to the next-to-leading (NL) order, we recover previous results in the literature; up to the NNL order for quasi-circular orbits, we confirm the known tidal effects in the (PN re-expansion of the) effective-one-body (EOB) Hamiltonian. In a future work, we shall derive the tidal contributions to the gravitational-wave flux up to the NNL order, which is the second step required to find the orbital phase evolution.'
author:
- 'Quentin <span style="font-variant:small-caps;">Henry</span>'
- 'Guillaume <span style="font-variant:small-caps;">Faye</span>'
- 'Luc <span style="font-variant:small-caps;">Blanchet</span>'
bibliography:
- 'ListeRef\_HFB19.bib'
title: |
Tidal effects in the equations of motion of compact binary systems\
to next-to-next-to-leading post-Newtonian order
---
Introduction {#sec:introduction}
============
The direct detection of gravitational waves (GW) generated by the orbital motion and merger of compact binary systems [@GW150914; @GW170817] opens up a new avenue in fundamental physics. Notably, it will play a paramount role in understanding the physics of compact objects, mainly black holes or neutron stars. The tidal effects between such objects are particularly interesting because they permit revealing and probing their internal structure, as well as eventually distinguishing between black holes, neutron stars or, possibly, more exotic entities like boson stars [@FaberRasioLR; @BuonSathya15].
The tidal interaction affects both the conservative equations of motion (EoM) and the GW emission of the compact binary system. This results in a modification of the time evolution of the binary’s orbital frequency and phase which is directly observable (see *e.g.* [@MW03; @FH08; @DNV12; @F14]). The tidal distortion depends on the Love numbers [@Love11], characterizing the rigidity and the deformability of the body, *i.e.* its capacity to change shape under the influence of an external tidal field. Those Love numbers depend in turn on the internal equation of state (EoS) of the body, which is uncertain at high densities [@Hind08; @HindLLR10]. They decrease as the compactness of the body increases, reaching zero in the limit of a maximally compact object, *i.e.*, for a black hole [@FangLove05; @BinnP09; @DN09tidal].
The leading tidal contributions to the orbital dynamics are due to quadrupolar deformations and, for compact binaries, manifest themselves as formally very small corrections in the accelerations, of the order of 5PN or $\sim (v/c)^{10}$, where $v$ denotes the relative orbital velocity. However, the 5PN coefficient appearing in front of the small 5PN factor $(v/c)^{10}$ can be quite large and the effect is measurable.[^1] It scales like the dimensionless parameter $$\label{eq:Lambda}
\Lambda^{(2)} = \frac{2}{3}
k^{(2)} \biggl(\frac{R c^2}{G m}\biggr)^5\,,$$ where $k^{(2)}$ denotes the mass-type quadrupolar second Love number of the body, while $m$ and $R$ represent its mass and radius. Typically, the compactness parameter $\mathcal{C}\sim G m/(R c^2)$ is of order $0.15$ for neutron stars while the Love number is $k^{(2)}\sim 0.1$ (depending on the EoS) [@BinnP09; @DN09tidal], hence we expect $\Lambda^{(2)}\sim 1000$. With the binary neutron star event GW170817 [@GW170817], the detectors LIGO and Virgo have already been able to put an observational constraint on the particular combination of $\Lambda^{(2)}_1$, $\Lambda^{(2)}_2$ and the masses that enter the orbital phase evolution of the two neutron stars [@FH08; @F14]. This constraint permitted excluding some of the stiffest EoS, for which the neutron stars are less compact [@LVCpropertiesGW170817; @LVCO1O2]. However, the majority of softer EoS are still allowed (see also [@LVCcomparisonGW170817] and references therein).
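This order-of-magnitude estimate is easy to reproduce; the sketch below simply rewrites Eq. \[eq:Lambda\] in terms of the compactness $\mathcal{C}=G m/(R c^2)$, with the typical neutron-star values $k^{(2)}\sim 0.1$ and $\mathcal{C}\sim 0.15$ quoted in the text.

```python
# Lambda^(2) = (2/3) k^(2) (R c^2 / G m)^5 = (2/3) k^(2) / C^5
k2 = 0.1    # mass-type quadrupolar Love number (typical, EoS-dependent)
C = 0.15    # compactness G m / (R c^2) of a typical neutron star

Lambda2 = (2.0 / 3.0) * k2 / C**5
# Lambda2 comes out of order 10^3, i.e. ~1000 as stated in the text
```

For a black hole, $k^{(2)}\to 0$ and the tidal deformability vanishes identically.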
The problem of tidal interactions between compact objects beyond the leading quadrupolar level has been addressed in Refs. [@VHF11; @BiniDF12; @VF13; @AGP18; @BV18; @Landry18]. The conservative dynamics, from which follow the EoM, was obtained in the work [@AGP18] at leading order but including linear spin couplings, or in [@VF13] and [@BiniDF12] up to the next-to-leading (NL) and the next-to-next-to-leading (NNL) orders, respectively, while the energy flux, waveform amplitude and phase evolution have been computed to the leading order in the presence of spin couplings, and NL order, equivalent to the formal 6PN level [@VHF11; @BV18; @Landry18], in the non-spinning case.[^2] The tidal interactions in both the dynamics and waveform have also been included in the effective-one-body (EOB) models for template generation [@DNV12; @BiniDF12].
In the present paper, we compute the tidal effects in the conservative EoM, as well as all associated conserved quantities, at the NNL order for spinless neutron stars on generic binary orbits in harmonic coordinates. We follow closely the method proposed in Ref. [@BiniDF12], describing the internal structure and finite size effects of the compact objects by means of an effective Fokker-type action. Our final NNL results are parametrized by three polarizability (or deformability) coefficients describing the mass quadrupolar, mass octupolar and current quadrupolar deformations of the objects through tidal interactions. In the case of quasi-circular orbits, we confirm the expression of the tidal terms in the EOB Hamiltonian up to the NNL order [@BiniDF12]. So as to compute the tidal contribution to the orbital phase at the NNL order, we need both the conservative NNL energy of the system and the GW energy flux at the same NNL order. In a forthcoming paper [@article_flux], we shall complete the present work by computing the latter effect for the GW flux, which will yield the orbital phase evolution at the NNL order.
Although the knowledge of the NNL/2PN relative tidal effect is probably not directly useful for the data analysis of the advanced LIGO and Virgo detectors, it may become relevant for future third-generation detectors, like the Einstein Telescope or Cosmic Explorer. On the other hand, detailed comparisons with numerical relativity (NR) simulations of binary neutron-star mergers require the control of high-order tidal interactions on the analytic side. Yet, such comparisons are essential to get a grip on the errors of the predicted waveforms and to properly calibrate EOB models. More generally, adding analytic tidal effects on top of PN templates of point particles is a good way of controlling the systematic errors due to our lack of knowledge of the higher-order terms in the PN expansion [@F14; @BuonSathya15].
This article is organized as follows. In \[sec:action\], we define the effective Fokker action with appropriate non-minimal matter couplings describing finite size effects. The quantities entering this action are determined by the 2PN metric, presented in \[sec:metric\] and computed off-shell, *i.e.*, without replacement of accelerations by the EoM, ready for insertion into the action. Our final Lagrangian, accurate to NNL order for tidal effects, is displayed in \[sec:Fokker\], together with the associated NL center-of-mass (CoM) position. We then derive, in \[sec:CoM\], the tidal dynamics in the CoM frame for general orbits, as well as the reduction to quasi-circular orbits. \[appendix:Newtonian\] is devoted to basic reminders and motivation concerning the treatment of tidal effects in the Newtonian theory. In \[appendix:proof\] we show, using standard techniques of the Lagrangian formalism, that the tidal multipole moments up to the NNL order can be defined equivalently by means of either the Riemann tensor or the Weyl tensor. Finally, we give in \[appendix:accNNL\] the complete tidal acceleration in a general frame for arbitrary orbits to NNL order.
Effective Fokker action with non-minimal matter couplings {#sec:action}
=========================================================
The model we use is defined by the gravitation-plus-matter action $S=S_g+S_m$, where the gravitational part $S_g$ is the standard Einstein-Hilbert action, to which we add the appropriate harmonic-gauge fixing term: $$\label{eq:Sg}
S_{g} = \frac{c^{3}}{16\pi G} \int {\mathrm{d}}^{4}x \, \sqrt{-g} \left[
R -\frac{1}{2}
g_{\mu\nu}\Gamma^{\mu}\Gamma^{\nu} \right] \,,$$ where $R$ is the curvature scalar, $\Gamma^{\mu}_{\rho\sigma}$ is the usual Christoffel symbol, and $\Gamma^{\mu} =
g^{\rho\sigma}\Gamma^{\mu}_{\rho\sigma}$. In practical calculations, we rather use the Landau-Lifshitz [@LL] form of the action.[^3]
The matter part of the action $S_m$ describes massive point-like particles with internal structure. It contains specific non-minimal couplings to the space-time curvature that describe the finite size effects of the compact bodies solely due to the tidal interactions, all spins being taken to zero. Since the matter action is regarded as localized on the worldline of the particles, it is generally referred to as a “skeletonized” effective action. In order to define it, we introduce a local inertial coordinate frame along each body worldline, together with the associated local tetrad $e_{{\hat{\alpha}}}^{\phantom{{\hat{\alpha}}}\mu}$. More precisely, we pose $e_{{\hat{\alpha}}}^{ \phantom{{\hat{\alpha}}}\mu} = \partial
x^\mu/\partial X^{{\hat{\alpha}}}$, where $\{x^\mu\}$ is a global coordinate system and $\{X^{{\hat{\alpha}}}\}$ is the local inertial frame in the vicinity of the body in question. We may choose $\{X^{{\hat{\alpha}}}\}$ to be a Fermi local normal coordinate system [@Fermi22; @MM63], so that the tetrad is orthonormal on the worldline, the time coordinate of the Fermi coordinates coincides with the proper time along the worldline, and the zero-th time-like tetrad vector is the four velocity of the particle. In its own local frame, the body feels the tidal multipole moments generated by the other bodies at its very location, namely the $\ell$-th order mass-type moments $G_{{\hat{L}}}$ and the current-type ones $H_{{\hat{L}}}$, where those quantities refer to the spatial tetradic components of the moments, *i.e.* projected along the local tetrad, with ${\hat{L}}={\hat{i}}_1\cdots {\hat{i}}_\ell$ denoting a multi-spatial index composed of $\ell$ spatial tetradic indices.
In this paper, we assume that each body stays in static equilibrium at any instant. In the absence of spin, the internal structure is then entirely determined by the mass and the EoS. Thus, the elementary bricks that are allowed to construct $S_m$ are tensors defined from the metric only and evaluated at the given particle position, with all indices contracted so as to preserve the invariance under rotation and parity in the corresponding constant-time hypersurface of the local Fermi rest frame. For our purpose, it will be sufficient to consider the same non-minimal terms as in Ref. [@DN10], built from quadratic (kinetic-like) couplings in the tidal moments $G_{{\hat{L}}}$ and $H_{{\hat{L}}}$. Hence the form of the matter action (adding also the particle’s label $A\in\{1,2\}$)[^4] $$\label{eq:Sm}
S_{m} = \sum_{A} \int {\mathrm{d}}\tau_{A} \left\{ - m_A c^2 + \sum_{\ell=2}^{+\infty}
\frac{1}{2\ell!} \biggl[ \mu_{A}^{(\ell)}
\, \bigl(G^{A}_{\hat{L}}\bigr)^2 +
\frac{\ell}{(\ell+1)c^{2}} \,\sigma_{A}^{(\ell)}
\bigl(H^{A}_{{\hat{L}}}\bigr)^2 \biggr] + \cdots \right\}\,.$$ The ellipsis indicate many higher-order non-linear combinations of the tidal moments and their covariant (proper-time) derivatives, which we do not need to include here (see *e.g.* Eq. (2.3) of [@BiniDF12]). For more insight and motivation about the non-minimal action, see Refs. [@ThH85; @Zhang86; @DSX1; @BiniDF12] and the treatment of tidal effects in the Newtonian model as recalled in \[appendix:Newtonian\].
The above tidal moments are given by appropriate covariant derivatives of the Weyl tensor. We define first the spatial tetradic components of the moments appearing in \[eq:Sm\] (for $\ell\geqslant 2$) as
\[eq:defGH\] $$\begin{aligned}
G^A_{{\hat{L}}} &= - c^2
\Bigl[\nabla_{\langle{\hat{i}}_1}\cdots\nabla_{{\hat{i}}_{\ell-2}}
C_{{\hat{i}}_{\ell-1}\underline{{\hat{0}}}{\hat{i}}_\ell\rangle{\hat{0}}}\Bigr]_A
\,,\\ H^A_{{\hat{L}}} &= 2 c^3
\Bigl[\nabla_{\langle{\hat{i}}_1}\cdots\nabla_{{\hat{i}}_{\ell-2}}
\,C^{*}_{{\hat{i}}_{\ell-1}\underline{{\hat{0}}}{\hat{i}}_\ell\rangle{\hat{0}}}\Bigr]_A\,.\end{aligned}$$
The angle brackets over the $\ell$ free spatial indices ${\hat{L}} =
{\hat{i}}_1\cdots {\hat{i}}_\ell$ of the above tensor expressions mean that those expressions must be replaced by their symmetric and trace-free (STF) parts over those indices, the underlined indices being excluded from the STF projection. We denote by $\nabla_{{\hat{\alpha}}}$ the usual covariant tetradic derivative \[we set ${\hat{\alpha}}=({\hat{0}},{\hat{i}})$\], whereas $C_{{\hat{\alpha}}{\hat{\beta}}{\hat{\gamma}}{\hat{\delta}}}$ and $C^{*}_{{\hat{\alpha}}{\hat{\beta}}{\hat{\gamma}}{\hat{\delta}}}$ represent the tetradic components of the Weyl tensor (whose definition is recalled in \[eq:Weyl\] below) and its dual.[^5] By construction, the tidal moments are symmetric over their spatial indices ${\hat{L}}$ and all their traces are zero, *i.e.*, $\delta_{{\hat{i}}_1{\hat{i}}_2}G_{{\hat{i}}_1{\hat{i}}_2\cdots{\hat{i}}_\ell}=0$.
Next, we introduce the covariant versions of the previous tidal tensors. Since $u^\mu=e_{{\hat{0}}}^{\phantom{{\hat{0}}}\mu}$, this is achieved by imposing that they live in the particle’s local spatial hypersurface, which is orthogonal to the four velocity. Thus, we complete the definition of the tidal moments by requiring them to obey $$\label{eq:G0i}
G^A_{{\hat{0}}{\hat{\alpha}}_2\cdots{\hat{\alpha}}_\ell} =
H^A_{{\hat{0}}{\hat{\alpha}}_2\cdots{\hat{\alpha}}_\ell}=0\,.$$ In this way, $G_{{\hat{\alpha}}_1\cdots{\hat{\alpha}}_\ell}$ and $H_{{\hat{\alpha}}_1\cdots{\hat{\alpha}}_\ell}$ are both Lorentz tensors and covariant scalars, while their covariant versions in an arbitrary coordinate system $\{x^\mu\}$ read
\[eq:defGHcov\] $$\begin{aligned}
G^A_{\mu_1\cdots\mu_\ell} &= - c^2 \Bigl[\nabla^\perp_{\langle
\mu_1}\cdots\nabla^\perp_{\mu_{\ell-2}}
C_{\mu_{\ell-1}\underline{\rho}\mu_\ell\rangle\sigma}\Bigr]_A
u_A^\rho \,u_A^\sigma \,,\\ H^A_{\mu_1\cdots\mu_\ell} &= 2 c^3
\Bigl[\nabla^\perp_{\langle \mu_1}\cdots\nabla^\perp_{\mu_{\ell-2}}
\,C^{*}_{\mu_{\ell-1}\underline{\rho}\mu_{\ell}\rangle\sigma}\Bigr]_A
u_A^\rho \,u_A^\sigma \,.\end{aligned}$$
Here, we denote $\nabla^\perp_\mu=\perp_\mu^\nu \nabla_\nu$, with $\perp_\mu^\nu = \delta_\mu^\nu + u_\mu u^\nu$ being the projector onto the hypersurface orthogonal to the four velocity \[notice that $\perp_{{\hat{\alpha}}}^\mu = (0,
e_{{\hat{i}}}^{\phantom{{\hat{i}}}\mu})$\]. The tidal moments are both STF over all their space-time indices and transverse to the four velocity, namely $u^\mu \,G_{\mu\mu_2\cdots\mu_\ell} = u^\mu
\,H_{\mu\mu_2\cdots\mu_\ell} = 0$, which is equivalent to \[eq:G0i\].
Very important to the formalism is the fact that the Weyl tensor and its covariant derivatives in are to be evaluated at the location of the particle $A$ following the regularization, as indicated by the square brackets $[\cdots]_A$. Physically, the regularization is crucial because it removes the self-field of the particle $A$, and therefore automatically selects the external (tidal) field due to the other particles $B\not= A$. We know one regularization able to give a complete, consistent and physical answer in high PN approximations, namely dimensional regularization (see *e.g.* [@DJSdim; @BDE04]). In this paper, we shall systematically use it. However, in our practical calculations at the relatively low NNL/2PN order, it is simpler to use the Hadamard “partie finie” regularization, since it has been shown [@BiniDF12] to yield the same result for the specific system we are interested in (see also discussions in Ref. [@BDE04]).
On the other hand, as argued in Refs. [@BiniDF12; @BDEI05dr], we can choose to use, for our purpose, the Riemann tensor instead of the Weyl tensor in the definitions of the tidal moments. Indeed, the contributions due to the trace terms of the Riemann tensor may be absorbed in the off-shell metric by redefining it in a certain way. We give in \[appendix:proof\] a detailed proof of this statement valid up to the NNL/2PN level.
Note finally that the tidal moments have been normalized in such a way that they admit a finite non-zero Newtonian limit when $c\to+\infty$, and that the mass-type moments then match those of Newtonian mechanics given in \[appendix:Newtonian\]. In this limit, only the space components survive. We then get
\[eq:Nlimit\] $$\begin{aligned}
G_{L}^A &= \partial^A_{L} U_A +
\mathcal{O}\left(\frac{1}{c^{2}}\right)\,,\\ H_{L}^A &= 4
\,\varepsilon_{jk(i_\ell}\Bigl(\partial_{L-1)k}^A U^A_{j} + v_A^k
\partial^A_{L-1)j} U_A \Bigr) +
\mathcal{O}\left(\frac{1}{c^{2}}\right)\,,\end{aligned}$$
where $\partial^A_{L}=\partial^A_{i_1}\cdots\partial^A_{i_\ell}$ with $\partial^A_i=\partial/\partial y_A^i$; the potentials $U_A=\sum_{B\not=A} G m_B/r_B$ and $U_A^i=\sum_{B\not=A} G m_B
v_B^i/r_B$ denote the Newtonian and gravitomagnetic potentials regularized at the point $A$.
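As a quick consistency check of this Newtonian limit (a sketch of ours, not part of the paper's derivation), one can compute $G_{ij}=\partial_{ij}U$ symbolically for the external point-mass potential $U=Gm/r$ and verify that the quadrupole tidal moment is automatically trace-free, as guaranteed by Laplace's equation away from the source:

```python
import sympy as sp

# External Newtonian potential U = G m / r generated by a companion at the
# origin, evaluated at the field point (x, y, z).
x, y, z, G, m = sp.symbols('x y z G m', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
U = G * m / r

coords = (x, y, z)
# Newtonian mass quadrupole tidal moment: G_ij = d_i d_j U.
G_ij = sp.Matrix(3, 3, lambda i, j: sp.diff(U, coords[i], coords[j]))

# Laplace's equation away from the source makes G_ij automatically trace-free.
assert sp.simplify(G_ij.trace()) == 0

# Explicitly, G_ij = (G m / r^3) (3 n_i n_j - delta_ij) with n_i = x_i / r.
n = sp.Matrix([x, y, z]) / r
expected = (G * m / r**3) * (3 * n * n.T - sp.eye(3))
assert sp.simplify(G_ij - expected) == sp.zeros(3, 3)
```

The higher multipoles $G_L = \partial_L U$ inherit the same STF property for the same reason.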
As the tidal moments are transverse to the velocity, the action can be rewritten in covariant form as $$\label{eq:Sm2}
S_{m} = \sum_{A} \int {\mathrm{d}}\tau_{A} \left\{ - m_A c^2 + \sum_{\ell=2}^{+\infty}
\frac{1}{2\ell!} \biggl[ \mu_{A}^{(\ell)}
\, G^{A}_{\mu_1\cdots\mu_\ell} G_{A}^{\mu_1\cdots\mu_\ell} +
\frac{\ell}{(\ell+1)c^{2}} \,\sigma_{A}^{(\ell)}
\,H^{A}_{\mu_1\cdots\mu_\ell} H_{A}^{\mu_1\cdots\mu_\ell} \biggr] + \cdots
\right\}\,.$$ We observe that the reference to the local tetrad has completely disappeared from the action. For convenience, we shall work only with the global (tensorial) components $G_{\mu_1\cdots\mu_\ell}$ and $H_{\mu_1\cdots\mu_\ell}$ of the moments henceforth.
The coefficients $\mu^{(\ell)}$ and $\sigma^{(\ell)}$ entering the non-minimal action characterize the deformability and polarizability of the body under the influence of the external tidal field. They are linked to the dimensionless mass-type $k^{(\ell)}$ and current-type $j^{(\ell)}$ second Love numbers as [@BiniDF12] $$\label{eq:defpolarizability}
G \mu_{A}^{(\ell)} = \frac{2}{(2\ell-1)!!} \,k_{A}^{(\ell)}
R_{A}^{2\ell+1}\,,\qquad G \sigma_{A}^{(\ell)} =
\frac{\ell-1}{4(\ell+2)(2\ell-1)!!} \,j_{A}^{(\ell)} R_{A}^{2\ell+1}\,,$$ where $R$ is the radius of the body (in a coordinate system such that the area of the sphere of radius $R$ is $4\pi R^2$).
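The two inversions above are straightforward to evaluate; the following sketch (ours, with illustrative, hypothetical input values for $k^{(2)}$ and $R$ that are not taken from the paper) implements them:

```python
def double_factorial(n):
    """Odd double factorial (2l-1)!!; returns 1 for n <= 1."""
    return 1 if n <= 1 else n * double_factorial(n - 2)

G_NEWTON = 6.674e-11  # SI gravitational constant

def mu_ell(k_ell, radius, ell):
    """Mass-type polarizability mu^(l) from the Love number k^(l),
    inverting G mu^(l) = 2 k^(l) R^(2l+1) / (2l-1)!!."""
    return 2.0 * k_ell * radius**(2 * ell + 1) / (
        double_factorial(2 * ell - 1) * G_NEWTON)

def sigma_ell(j_ell, radius, ell):
    """Current-type polarizability sigma^(l) from the Love number j^(l),
    inverting G sigma^(l) = (l-1) j^(l) R^(2l+1) / (4 (l+2) (2l-1)!!)."""
    return (ell - 1) * j_ell * radius**(2 * ell + 1) / (
        4 * (ell + 2) * double_factorial(2 * ell - 1) * G_NEWTON)

# Illustrative (hypothetical) neutron-star inputs: k2 ~ 0.1, R ~ 12 km.
mu2 = mu_ell(0.1, 12.0e3, 2)
```

Since $\mu^{(\ell)}\propto R^{2\ell+1}$, these coefficients are extremely sensitive to the radius, which is what makes tidal phasing a probe of the EoS.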
The polarizability coefficients actually determine the formal PN order at which the tidal effects appear. For compact objects, indeed, the compactness parameter defined as the ratio $\mathcal{C}\sim G m/(R c^2)$ is of the order of one. Inserting $\mathcal{C}\sim 1$ in \[eq:defpolarizability\], we recover the fact that the dominant tidal effect is due to the mass quadrupole and is formally of the order of $$\label{eq:epstidal}
\epsilon_\text{tidal} \sim \frac{1}{c^{10}}\,,$$ *i.e.*, is comparable to a 5PN orbital effect. With the notation for the dominant effect, we see that the deformability coefficients in the action scale like $$\label{eq:order}
\left\{\mu_A^{(\ell)}\,,\,\sigma_A^{(\ell)}\right\} =
\mathcal{O}\left(\frac{\epsilon_\text{tidal}}{c^{4\ell-8}}\right)\,.$$ As we aim at computing tidal effects up to the NNL/2PN order, inspection of the action shows that we may consider only the mass quadrupole, current quadrupole and mass octupole interactions: $$\label{eq:Sm3}
S_{m} = \sum_{A} \int {\mathrm{d}}\tau_{A} \left[ - m_A c^2 + \frac{\mu_{A}^{(2)}}{4}
G^{A}_{\mu\nu}G_{A}^{\mu\nu} +
\frac{\sigma_{A}^{(2)}}{6c^{2}}H^{A}_{\mu\nu}
H_{A}^{\mu\nu} +
\frac{\mu_{A}^{(3)}}{12} G^{A}_{\lambda\mu\nu}
G_{A}^{\lambda\mu\nu} +
\mathcal{O}\left(\frac{\epsilon_\text{tidal}}{c^6}\right)\right]\,,$$ where the specified remainder means that we neglect higher order – NNNL and beyond – terms. Direct application of the general scaling relation shows that $\mu^{(2)}=\mathcal{O}(\epsilon_\text{tidal})$, $\sigma^{(2)}=\mathcal{O}(\epsilon_\text{tidal})$, and $\mu^{(3)}=\mathcal{O}(\epsilon_\text{tidal}/c^4)$. Thus, the first tidal term in yields the leading effect together with NL and NNL corrections, the second tidal term contains NL and NNL effects (because of the explicit factor $1/c^2$ in the action), whereas the third one represents a purely NNL effect.
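The power counting can be made explicit with a short symbolic check (our sketch, assuming compactness of order unity so that $R\sim Gm/c^2$, with Love numbers of order one): $G\mu^{(\ell)}\sim R^{2\ell+1}$ then scales as $1/c^{4\ell+2}$, i.e. as $\epsilon_\text{tidal}/c^{4\ell-8}$ with $\epsilon_\text{tidal}=1/c^{10}$.

```python
import sympy as sp

G, m, c = sp.symbols('G m c', positive=True)
R = G * m / c**2   # compact body: compactness C ~ G m/(R c^2) of order one

def mu_scaling(ell):
    # G mu^(l) ~ R^(2l+1) for Love numbers of order unity
    return sp.simplify(R**(2 * ell + 1) / G)

# mu^(2) ~ 1/c^10 = eps_tidal, while mu^(3) ~ 1/c^14 = eps_tidal/c^4.
assert sp.simplify(mu_scaling(2) * c**10 / (G**4 * m**5)) == 1
assert sp.simplify(mu_scaling(3) * c**14 / (G**6 * m**7)) == 1
```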
Metric and required elementary potentials {#sec:metric}
=========================================
To build an action for the matter variables alone, we (i) start from the Einstein-Hilbert action with the non-minimal matter couplings , (ii) solve the Einstein field equations resulting from the metric variation by means of a direct PN iteration, (iii) insert the explicit PN solution for the metric back into Eqs. –, which defines the so-called (PN) Fokker action, say $S_\text{F}$. An important point is that, at the NNL/2PN level, it is necessary and sufficient to insert into Eqs. – the metric generated by a system of point particles, omitting all the terms associated with the bodies’ internal structure.
To see this, we write, as in Ref. [@BBBFMa], the (allegedly “exact”) PN solution of the Einstein field equations in terms of the gothic metric deviation $h^{\mu\nu}=\sqrt{-g}g^{\mu\nu} -
\eta^{\mu\nu}$, using the particular vector variable $$\label{eq:varh}
h = \bigl(h^{00ii}, h^{0i}, h^{ij}\bigr)\,,\quad\text{with}\quad
h^{00ii} \equiv h^{00} + \delta_{ij}h^{ij}\,.$$ We already know that the dominant tidal effect is due to the mass quadrupole moment and appears in the EoM at the order \[eq:epstidal\]. We can thus write the previous solution as $$\label{eq:hdecomp}
h = h_\text{pp} + h_\text{tidal}\,,$$ where the first term is just the result for the metric generated by point-particles (pp) without internal structure, and where the tidal corrections therein are at least of the order of (with obvious notation) $$\label{eq:htidal}
h_\text{tidal} = \mathcal{O}\left(\frac{\epsilon_\text{tidal}}{c^2},
\frac{\epsilon_\text{tidal}}{c^3},
\frac{\epsilon_\text{tidal}}{c^4}\right)\,.$$ Since $h$ is an exact solution of the Einstein field equations we have $\delta S_\text{F}/\delta h = 0$, which implies that the functional derivative of the Fokker action evaluated for the “approximate” solution $h_\text{pp}$ will be of the order of the committed error, namely (taking into account the coupling constant $c^4/(16\pi G)$ in the field equations) $$\label{eq:dSdhpp}
\frac{\delta S_\text{F}}{\delta h} \bigl[h_\text{pp}\bigr] =
\mathcal{O}\left( c^2\,\epsilon_\text{tidal},
c\,\epsilon_\text{tidal}, \epsilon_\text{tidal}\right)\,.$$ The two facts \[eq:htidal\] and \[eq:dSdhpp\], combined together in a Taylor expansion of the action, imply that $$\begin{aligned}
\label{eq:Spp}
S_\text{F} \bigl[h\bigr] &= S_\text{F} \bigl[h_\text{pp}\bigr] + \int
{\mathrm{d}}^4x\,\frac{\delta S_\text{F}}{\delta h} \bigl[h_\text{pp}\bigr]
\,h_\text{tidal} + \mathcal{O}\left( h^2_\text{tidal}\right)
\nonumber\\ &= S_\text{F} \bigl[h_\text{pp}\bigr] + \mathcal{O}\left(
\epsilon_\text{tidal}^2\right)\,,\end{aligned}$$ and we conclude that the final remainder $\mathcal{O}(
\epsilon_\text{tidal}^2)$ is at least comparable to a 10PN effect $\mathcal{O}( c^{-20})$ \[see \[eq:epstidal\]\]. Therefore, it is amply sufficient to insert into the Fokker action the metric $h_\text{pp}$ for point particles without internal structure. We can recover this conclusion from a general statement proved in Ref. [@BBBFMa], called the “$n+2$” method, according to which, in order to control the Fokker action at some $n$PN order, it is necessary and sufficient to insert the components of the metric $h$ with all the PN corrections up to the order $1/c^{n+2}$ included. In our case, we want the Fokker action up to the NNL order, which means formally 7PN, hence $n=7$, so that we require the metric up to the maximal order $1/c^{9}$ while tidal effects are of higher order \[see \[eq:htidal\]\]. The same argument has also been made and used in Sec. II.E of Ref. [@BiniDF12].
In this paper, we shall not try to compute the full action, including all the terms up to NNL order $\mathcal{O}(\epsilon_\text{tidal}/c^4)$, but only the tidal NNL contributions therein, proceeding essentially as in Ref. [@BiniDF12] although staying in harmonic coordinates. Consequently, we shall need the point-particle metric up to the 2PN order only, so as to obtain the regularized Weyl or Riemann tensor of point particles at the 2PN order, which is the minimum requirement to control the tidal moments at the same accuracy level:
\[eq:tidalR\] $$\begin{aligned}
G^A_{\mu\nu} &= - c^2 \bigl[R_{\mu\rho\nu\sigma}\bigr]_A
u_A^{\rho}u_A^{\sigma}\,, \\ H^A_{\mu\nu} &= 2 c^3 \bigl[
R^{*}_{(\mu\underline{\rho}\nu)\sigma}\bigr]_A
u_A^{\rho}u_A^{\sigma}\,, \\ G^A_{\lambda\mu\nu} &= - c^2 \bigl[
\nabla^\perp_{(\lambda}
\,R_{\mu\underline{\rho}\nu)\sigma}\bigr]_A
u_A^{\rho}u_A^{\sigma}\,.\end{aligned}$$
Recall that, for this calculation, the Weyl and the Riemann tensors give an equivalent dynamics (see \[appendix:proof\]). On the other hand, one can show that replacing the STF operator by the symmetrization operator in the definitions for the mass quadrupole, current quadrupole and mass octupole moments does not affect the values of those tensors. The resulting expressions, provided in \[appendix:proof\], are simpler than the original formulae. The tensors are then obtained by substituting the Riemann tensor for the Weyl tensor. However, the off-shell mass-type tidal moments defined in this manner are no longer trace-free, contrary to their Weyl counterparts.
At the 2PN order, the metric of a general matter system in harmonic coordinates can be parametrized by the set of potentials $\{V,V_{i},\hat{W}_{ij},\hat{R}_{i},\hat{X}\}$ in the following way:
\[eq:metric\] $$\begin{aligned}
g_{00} &= -1 + \frac{2V}{c^{2}} - \frac{2V^{2}}{c^{4}}+ \frac{8}{c^{6}}
\left(\hat{X} + V_{i}V_{i} + \frac{V^{3}}{6} \right) +
\mathcal{O}\left( \frac{1}{c^{8}} \right)\,, \\ g_{0i} &=
-\frac{4V_{i}}{c^{3}} - \frac{8\hat{R}_{i}}{c^{5}} + \mathcal{O}\left(
\frac{1}{c^{7}} \right)\,, \\ g_{ij} &= \delta_{ij}\left(1 +
\frac{2V}{c^{2}} + \frac{2V^{2}}{c^{4}} \right) +
\frac{4\hat{W}_{ij}}{c^{4}} + \mathcal{O}\left( \frac{1}{c^{6}}
\right)\,.\end{aligned}$$
These potentials admit a non-zero finite Newtonian limit and solve the flat-space wave equations (with $\Box=\eta^{\mu\nu}\partial^2_{\mu\nu}$)
\[eq:potential\] $$\begin{aligned}
\Box V &= -4 \pi G \sigma\,, \\
\Box V_{i} &= -4 \pi G \sigma_{i}\,, \\
\Box \hat{W}_{ij} &= -4\pi G\left(\sigma_{ij} - \delta_{ij}
\sigma\indices{_k_k} \right) -\partial_{i}V
\partial_{j}V\,, \\
\Box \hat{R}_{i} &= -4 \pi G \left(V \sigma_{i} - V_{i} \sigma \right) -2
\partial_{k}V \partial_{i}V_{k} - \frac{3}{2}\partial_{t}V
\partial_{i}V\,, \\
\Box \hat{X} &= -4 \pi G V \sigma\indices{_k_k} +
2V_{k}\partial_{t}\partial_{k}V + V \partial_{t}^{2}V
+\frac{3}{2}(\partial_{t}V)^{2} -
2\partial_{i}V_{j}\partial_{j}V_{i}
+\hat{W}_{ij}\partial_{ij}V\,,\end{aligned}$$
where the matter source densities are defined in terms of the components of the matter stress-energy tensor as $$\label{eq:source}
\sigma = \frac{T^{00}+T^{ii}}{c^2}\,,\qquad \sigma_{i} =
\frac{T^{0i}}{c}\,,\qquad \sigma_{ij} = T^{ij}\,,$$ with $T^{ii}=\delta_{ij}T^{ij}$. To perform a consistent Fokker reduction of the original action, the solutions of Eqs. \[eq:potential\] must in principle be constructed with the symmetric Green function, which kills all contributions of odd powers of $1/c$ at the current approximation level. As discussed above, thanks to the properties of the Fokker action, we only need the metric produced by point-like particles and can neglect tidal effects when inserting the metric into the Fokker action. Therefore, we shall just compute the potentials for point particles without including any internal structure effect. The required potentials have already been published elsewhere [@BFP98], except that we compute here their off-shell values, without replacement of accelerations by means of the EoM (we then call them the “unreduced” potentials). However, it is known that the replacement of accelerations in the action is equivalent to performing an unphysical shift of the particles’ worldlines [@S84]. We have checked that, indeed, by inserting the reduced (“on-shell”) versions of the potentials into the action, the final gauge-invariant result for the conserved energy reduced to circular orbits, which we shall obtain below \[in \[eq:Ex\]\], comes out the same.
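The role of the symmetric Green function can be illustrated on a toy monochromatic wave (a sketch of ours, with $\epsilon$ standing for $1/c$): the half-retarded-plus-half-advanced combination contains only even powers of $1/c$ in its near-zone expansion, which is why odd powers drop out of the Fokker reduction at this level.

```python
import sympy as sp

t, r, eps = sp.symbols('t r epsilon', positive=True)   # eps stands for 1/c

# Half-retarded-plus-half-advanced (symmetric) spherical wave for a
# monochromatic source: the odd powers of 1/c present in each of the
# retarded and advanced waves cancel in the symmetric combination.
retarded = sp.sin(t - r * eps) / r
advanced = sp.sin(t + r * eps) / r
symmetric = sp.simplify((retarded + advanced) / 2)   # -> sin(t) cos(r eps)/r

expansion = sp.series(symmetric, eps, 0, 6).removeO()
assert all(expansion.coeff(eps, n) == 0 for n in (1, 3, 5))
assert expansion.coeff(eps, 2) != 0   # even powers survive
```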
For point particles without spins the matter source terms take the form
\[eq:potentiel\] $$\begin{aligned}
\sigma(\mathbf{x},t) &= \sum_{A} \tilde{\mu}_{A}(t)\,\delta^{(3)}
\bigl(\mathbf{x}-\bm{y}_{A}(t)\bigr)\,,\\ \sigma_i(\mathbf{x},t) &=
\sum_{A} \mu_{A}\,v_A^i\,\delta^{(3)}
\bigl(\mathbf{x}-\bm{y}_{A}(t)\bigr)\,,\\ \sigma_{ij}(\mathbf{x},t) &=
\sum_{A} \mu_{A}\,v_A^i v_A^j\,\delta^{(3)}
\bigl(\mathbf{x}-\bm{y}_{A}(t)\bigr)\,,\end{aligned}$$
where the three-dimensional Dirac function is confined to the worldline $\bm{y}_A(t)$ and we pose for the effective time-varying masses (with $m_A$ the constant PN mass) $$\label{eq:mu}
\mu_A(t) = \frac{m_A\,c}{\sqrt{[g\,g_{\mu\nu}]_A v_A^\mu v_A^\nu}} \,,\qquad
\tilde{\mu}_A = \left(1 + \frac{\bm{v}_A^2}{c^2}\right)\mu_A\,.$$ In Eqs. –, the worldlines are parametrized by the coordinate time $t=x^0/c$ of the harmonic coordinates; the coordinate velocities are $v_{A}^{\mu}=(c,v_A^i)$, with $v_A^i=c u_A^i/u_A^0 = {\mathrm{d}}y_A^i/{\mathrm{d}}t$, and the relativistic Lorentz factor reads $u_{A}^{0}=(-[g_{\mu \nu}]_A
v_{A}^{\mu}v_{A}^{\nu}/c^{2})^{-1/2}$. The metric is computed at the location of the particle $A$ following dimensional regularization; in particular, we have $[g\,g_{\mu\nu}]_A = [g]_A\,[g_{\mu\nu}]_A$ in \[eq:mu\]. As we said, in practical calculations, we use the Hadamard regularization, which is equivalent to dimensional regularization up to the relatively low NNL/2PN order [@BDE04; @BiniDF12].
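As a flat-spacetime sanity check of these definitions (a sketch of ours, ignoring all potential terms and hence keeping only their kinematic content), $\mu_A$ reduces to the special-relativistic $m_A\gamma_A$, and $\tilde{\mu}_A$ carries the expected $3\bm{v}^2/(2c^2)$ correction at 1PN:

```python
import sympy as sp

m, c, v = sp.symbols('m c v', positive=True)

# In flat spacetime, g = det(g_munu) = -1 and g_munu v^mu v^nu = v^2 - c^2,
# so mu = m c / sqrt(c^2 - v^2), i.e. the special-relativistic m*gamma.
mu = m * c / sp.sqrt(c**2 - v**2)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)
assert sp.simplify(mu**2 - (m * gamma)**2) == 0   # both sides positive

# tilde(mu) = (1 + v^2/c^2) mu carries the kinematic part of sigma; its
# 1PN coefficient is 3 m v^2 / 2, as expected for a free particle.
mu_tilde = (1 + v**2 / c**2) * mu
first_pn = sp.limit((mu_tilde - m) * c**2, c, sp.oo)
assert sp.simplify(first_pn - sp.Rational(3, 2) * m * v**2) == 0
```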
Tidal effects in the equations of motion to NNL order {#sec:Fokker}
=====================================================
From the discussion in the previous section, we know that, up to the NNL order, the only terms in the Fokker action that depend on the bodies’ internal structure are those that are explicitly present in the matter action . Here, we provide the results for the (coordinate basis components of the) tidal mass-quadrupole, mass-octupole and current-quadrupole moments at the NNL order felt by the body 1, *i.e.*, regularized at the point 1. We find[^6]
\[eq:tidalmoments\] $$\begin{aligned}
[G_{ij}]_1 &= \frac{G m_{2}}{r_{12}^3} \Biggl[3 n_{12\langle i} n_{12j\rangle}
+ \frac{1}{c^{2}} \biggl [n_{12\langle i} n_{12j\rangle} \Bigl(- \frac{15}{2}
(n_{12}{} v_{2}{})^2
+ 6 v_{12}{}^{2}
- \frac{3}{2} r_{12} (n_{12}{} a_{2}{})
- \frac{3 G m_{1}}{r_{12}}
- \frac{3 G m_{2}}{r_{12}}\Bigr)\nonumber\\
& - 6 n_{12\langle i} v_{1j\rangle} (n_{12}{} v_{12}{})
+ 2 v_{1\langle i} v_{1j\rangle}
+ n_{12\langle i} v_{2j\rangle} \Bigl(12 (n_{12}{} v_{1}{})
- 6 (n_{12}{} v_{2}{})\Bigr)
- 6 v_{1\langle i} v_{2j\rangle}
+ 3 v_{2\langle i} v_{2j\rangle}
- 3 a_{2\langle i} n_{12j\rangle} r_{12}\nonumber\\
& + \delta_{ij} \Bigl((n_{12}{} v_{1}{})^2
- \frac{1}{3} v_{1}{}^{2}\Bigr)\biggl]
+ \frac{1}{c^{4}} \biggl\{n_{12\langle i} n_{12j\rangle} \biggl
[\frac{105}{8} (n_{12}{} v_{2}{})^4
+ 30 (n_{12}{} v_{2}{})^2 (v_{1}{} v_{2}{})
+ 6 (v_{1}{} v_{2}{})^2
- 15 (n_{12}{} v_{2}{})^2 v_{1}{}^{2}\nonumber\\
& - 12 (v_{1}{} v_{2}{}) v_{1}{}^{2}
+ 6 v_{1}{}^{4}
- \frac{45}{2} (n_{12}{} v_{2}{})^2 v_{2}{}^{2}
- 12 (v_{1}{} v_{2}{}) v_{2}{}^{2}
+ 6 v_{1}{}^{2} v_{2}{}^{2}
+ 6 v_{2}{}^{4}
+ G m_{2} (n_{12}{} a_{2}{})\nonumber\\
& + \frac{G m_{1}}{r_{12}} \Bigl(- \frac{291}{2} (n_{12}{} v_{1}{})^2
+ 291 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})
- \frac{273}{2} (n_{12}{} v_{2}{})^2
+ 35 v_{12}{}^{2}\Bigr)
+ G m_{1} \Bigl(14 (n_{12}{} a_{1}{})
- 10 (n_{12}{} a_{2}{})\Bigr)\nonumber\\
& + \frac{G m_{2}}{r_{12}} \Bigl(9 (n_{12}{} v_{2}{})^2
+ 18 v_{12}{}^{2}\Bigr)
+ \frac{1}{8} r_{12}^3 (\ddot{a}_{2} n_{12}{})
- \frac{15 G^2 m_{1}^2}{14 r_{12}^2}
+ \frac{35 G^2 m_{1} m_{2}}{r_{12}^2}
+ \frac{5 G^2 m_{2}^2}{r_{12}^2}
+ r_{12} \Bigl(12 (v_{1}{} a_{2}{}) (n_{12}{} v_{2}{})\nonumber\\
& - \frac{27}{2} (v_{2}{} a_{2}{}) (n_{12}{} v_{2}{})
+ \frac{45}{4} (n_{12}{} a_{2}{}) (n_{12}{} v_{2}{})^2
+ 6 (n_{12}{} a_{2}{}) (v_{1}{} v_{2}{})
- 3 (n_{12}{} a_{2}{}) v_{1}{}^{2}
- \frac{9}{2} (n_{12}{} a_{2}{}) v_{2}{}^{2}\Bigr)\nonumber\\
& + r_{12}^2 \Bigl(\frac{9}{8} (n_{12}{} a_{2}{})^2
- \frac{15}{8} a_{2}{}^{2}
+ \frac{3}{2} (n_{12}{} v_{2}{}) (n_{12}{} \dot{a}_{2})
+ 2 (v_{1}{} \dot{a}_{2})
- 2 (v_{2}{} \dot{a}_{2})\Bigr)\biggl]
+ n_{12\langle i} v_{1j\rangle} \biggl [\frac{62 G m_{1}}{r_{12}} (n_{12}{}
v_{12}{})\nonumber\\
& - \frac{18 G m_{2}}{r_{12}} (n_{12}{} v_{12}{})
+ 15 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})^2
- 15 (n_{12}{} v_{2}{})^3
+ 6 (n_{12}{} v_{2}{}) (v_{1}{} v_{2}{})
+ 6 (n_{12}{} v_{2}{}) v_{12}{}^{2}
- 6 (n_{12}{} v_{1}{}) v_{1}{}^{2}\nonumber\\
& + r_{12} \Bigl(- (v_{12}{} a_{2}{})
+ 3 (n_{12}{} a_{2}{}) (n_{12}{} v_{1}{})
- 9 (n_{12}{} a_{2}{}) (n_{12}{} v_{2}{})\Bigr)
- r_{12}^2 (n_{12}{} \dot{a}_{2})\biggl]
+ v_{1\langle i} v_{1j\rangle} \Bigl(-3 (n_{12}{} v_{2}{})^2
+ 2 v_{1}{}^{2}\nonumber\\
& - r_{12} (n_{12}{} a_{2}{})
- \frac{3 G m_{1}}{r_{12}}
+ \frac{6 G m_{2}}{r_{12}}\Bigr)
+ n_{12\langle i} v_{2j\rangle} \biggl [-30 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})^2
+ 15 (n_{12}{} v_{2}{})^3
- 12 (n_{12}{} v_{1}{}) (v_{1}{} v_{2}{})\nonumber\\
& + 12 (n_{12}{} v_{1}{}) v_{1}{}^{2}
+ 12 (n_{12}{} v_{1}{}) v_{2}{}^{2}
- 6 (n_{12}{} v_{2}{}) v_{2}{}^{2}
+ \frac{G m_{1}}{r_{12}} \Bigl(-68 (n_{12}{} v_{1}{})
+ 62 (n_{12}{} v_{2}{})\Bigr)
+ \frac{G m_{2}}{r_{12}} \Bigl(12 (n_{12}{} v_{1}{})\nonumber\\
& - 18 (n_{12}{} v_{2}{})\Bigr)
+ r_{12}^2 (n_{12}{} \dot{a}_{2})
+ r_{12} \Bigl(-2 (v_{1}{} a_{2}{})
- 6 (n_{12}{} a_{2}{}) (n_{12}{} v_{1}{})
- (v_{2}{} a_{2}{})
+ 9 (n_{12}{} a_{2}{}) (n_{12}{} v_{2}{})\Bigr)\biggl]\nonumber\\
& + v_{1\langle i} v_{2j\rangle} \Bigl(-6 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})
+ 15 (n_{12}{} v_{2}{})^2
- 6 (v_{1}{} v_{2}{})
- 6 v_{12}{}^{2}
+ 5 r_{12} (n_{12}{} a_{2}{})
+ \frac{8 G m_{1}}{r_{12}}
- \frac{10 G m_{2}}{r_{12}}\Bigr)\nonumber\\
& + v_{2\langle i} v_{2j\rangle} \Bigl(6 (n_{12}{} v_{1}{})^2
- \frac{15}{2} (n_{12}{} v_{2}{})^2
+ 3 v_{2}{}^{2}
- \frac{5}{2} r_{12} (n_{12}{} a_{2}{})
- \frac{4 G m_{1}}{r_{12}}
+ \frac{5 G m_{2}}{r_{12}}\Bigr)
+ 4 G m_{1} a_{1\langle i} n_{12j\rangle}\nonumber\\
& + a_{2\langle i} n_{12j\rangle} \biggl [r_{12} \Bigl(-12 (n_{12}{} v_{1}{})
(n_{12}{} v_{2}{})
+ \frac{27}{2} (n_{12}{} v_{2}{})^2
+ 4 (v_{1}{} v_{2}{})
- 2 v_{1}{}^{2}
- 5 v_{2}{}^{2}\Bigr)
+ \frac{9}{2} r_{12}^2 (n_{12}{} a_{2}{})
- 3 G m_{1}\nonumber\\
& - G m_{2}\biggl]
+ a_{2\langle i} v_{1j\rangle} r_{12} \Bigl(- (n_{12}{} v_{1}{})
+ 7 (n_{12}{} v_{2}{})\Bigr)
+ a_{2\langle i} v_{2j\rangle} r_{12} \Bigl(-2 (n_{12}{} v_{1}{})
- 7 (n_{12}{} v_{2}{})\Bigr)
- \frac{5}{4} a_{2\langle i} a_{2j\rangle} r_{12}^2\nonumber\\
& + n_{12\langle i} \dot{a}_{2j\rangle} r_{12}^2 \Bigl(-2 (n_{12}{} v_{1}{})
+ 5 (n_{12}{} v_{2}{})\Bigr)
+ 3 v_{1\langle i} \dot{a}_{2j\rangle} r_{12}^2
- 3 v_{2\langle i} \dot{a}_{2j\rangle} r_{12}^2
+ \frac{7}{4} n_{12\langle i} \ddot{a}_{2j\rangle} r_{12}^3
+ \delta_{ij} \biggl [- \frac{5}{2} (n_{12}{} v_{1}{})^2
(n_{12}{} v_{2}{})^2\nonumber\\
& - 2 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{}) (v_{1}{} v_{2}{})
+ (v_{1}{} v_{2}{})^2
+ (n_{12}{} v_{1}{})^2 v_{1}{}^{2}
+ \frac{3}{2} (n_{12}{} v_{2}{})^2 v_{1}{}^{2}
- \frac{1}{3} v_{1}{}^{4}
+ 2 (n_{12}{} v_{1}{})^2 v_{2}{}^{2}
- v_{1}{}^{2} v_{2}{}^{2}\nonumber\\
& - \frac{4}{3} G m_{1} (n_{12}{} a_{2}{})
+ \frac{G m_{1}}{r_{12}} \Bigl(- \frac{16}{3} (n_{12}{} v_{12}{})^2
- (n_{12}{} v_{1}{})^2
+ \frac{4}{3} v_{12}{}^{2}
+ \frac{1}{3} v_{1}{}^{2}\Bigr)
+ \frac{G m_{2}}{r_{12}} \Bigl(4 (n_{12}{} v_{12}{})^2
- (n_{12}{} v_{1}{})^2\nonumber\\
& - \frac{4}{3} v_{12}{}^{2}
+ \frac{1}{3} v_{1}{}^{2}\Bigr)
+ r_{12} \Bigl(\frac{4}{3} (v_{12}{} a_{2}{}) (n_{12}{} v_{12}{})
- (v_{1}{} a_{2}{}) (n_{12}{} v_{1}{})
- \frac{1}{2} (n_{12}{} a_{2}{}) (n_{12}{} v_{1}{})^2
+ \frac{1}{2} (n_{12}{} a_{2}{}) v_{1}{}^{2}\Bigr)\nonumber\\
& - \frac{16 G^2 m_{1} m_{2}}{3 r_{12}^2}
+ \frac{2 G^2 m_{2}^2}{3 r_{12}^2}
+ r_{12}^2 \Bigl(\frac{4}{3} a_{2}{}^{2}
- \frac{4}{3} (v_{1}{} \dot{a}_{2})
+ \frac{4}{3} (v_{2}{} \dot{a}_{2})\Bigr)\biggl]\biggl\}\Biggl]+
\mathcal{O}\left(\frac{1}{c^6}\right)\, ,\\
[H_{ij}]_1 &=\frac{G m_{2}}{r_{12}^3} \biggl\{12 (n_{12}{}\times
v_{12}{})_{\langle i}n_{12j\rangle}
+ \frac{1}{c^{2}} \biggl [(n_{12}{}\times v_{12}{})_{\langle i}n_{12j\rangle}
\Bigl(-30 (n_{12}{} v_{2}{})^2
+ 12 (v_{1}{} v_{2}{})
+ 12 v_{12}{}^{2}
- 6 r_{12} (n_{12}{} a_{2}{})\nonumber\\
& + \frac{4 G m_{1}}{r_{12}}
+ \frac{12 G m_{2}}{r_{12}}\Bigr)
- 12 (a_{2}{}\times n_{12}{})_{\langle i}n_{12j\rangle} r_{12} (n_{12}{} v_{2}{})
+ 12 (n_{12}{}\times v_{12}{})_{\langle i}v_{2j\rangle} (n_{12}{} v_{1}{})
- 2 (a_{2}{}\times v_{12}{})_{\langle i}n_{12j\rangle} r_{12}\nonumber\\
& - 2 a_{2\langle i}(n_{12}{}\times v_{12}{})_{j\rangle} r_{12}
+ 2 (n_{12}{}\times \dot{a}_{2}{})_{\langle i}n_{12j\rangle} r_{12}^2
+ 4 \delta_{ij} (n_{12}{},v_{1}{},v_{2}{}) (n_{12}{}
v_{1}{})\biggl]\biggl\} +
\mathcal{O}\left(\frac{1}{c^4}\right)\, , \\
[G_{ijk}]_1&=- \frac{15 G m_{2} n_{12\langle i} n_{12j}
n_{12k\rangle}}{r_{12}^4}+ \mathcal{O}\left( \frac{1}{c^2}\right)\,.\end{aligned}$$
The other components of the tidal moments are readily obtained from, *e.g.*, the relations $[G_{0i}]_1 = -v_1^j \,[G_{ij}]_1/c$ and $[G_{00}]_1 = v_1^i v_1^j \,[G_{ij}]_1/c^2$, which are equivalent to $[G_{{\hat{0}}{\hat{0}}}]_1 = [G_{{\hat{0}}{\hat{i}}}]_1 = 0$ in tetradic notation. In Eqs. , most of the terms are STF, which we denote by angular brackets surrounding the indices. Note however, as mentioned in \[sec:metric\], the appearance of pure trace contributions, due to the fact that we have not resorted here to tetradic projections and have used the Riemann tensor instead of the Weyl tensor \[see the discussion in \[appendix:proof\]\].
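These component relations simply express the transversality condition $u^\mu\,[G_{\mu\nu}]_1=0$ and are easy to verify numerically. The sketch below (ours, with random illustrative data, not the paper's code) builds the full tensor from arbitrary symmetric spatial components and checks the contraction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Given the spatial components G_ij, the relations G_0i = -v^j G_ij / c and
# G_00 = v^i v^j G_ij / c^2 make the full tensor transverse to
# u^mu proportional to (1, v^i/c), i.e. u^mu G_munu = 0 at this order.
c = 299792458.0
v = rng.uniform(-1e5, 1e5, 3)             # slow (PN) velocity in m/s
S = rng.standard_normal((3, 3))
G_sp = S + S.T                             # arbitrary symmetric G_ij

G = np.zeros((4, 4))
G[1:, 1:] = G_sp
G[0, 1:] = G[1:, 0] = -(v @ G_sp) / c
G[0, 0] = v @ G_sp @ v / c**2

u = np.concatenate(([1.0], v / c))         # proportional to u^mu
assert np.allclose(u @ G, 0.0)
```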
With the latter results and the 2PN metric in hand, it is straightforward to get the Lagrangian up to the relative NNL/2PN order for the finite-size tidal contributions. As usual, we apply a number of procedures to eliminate multiple time derivatives of the accelerations and reduce the number of terms, in particular removing those that contain higher time derivatives of the accelerations by adding suitable double-zero terms and total time derivatives [@DS85]. Recalling our notation introduced in \[eq:hdecomp\], we write $$\label{eq:Lpptidal}
L = L_\text{pp} + L_\text{tidal}\,,$$ where, to be consistent with the NNL order truncation, we recall here the Lagrangian for point particles up to 2PN order in harmonic coordinates, which is a generalized Lagrangian depending on positions $y_A^i(t)$, velocities $v_A^i(t)$, as well as accelerations $a_A^i(t)={\mathrm{d}}v_A^i/{\mathrm{d}}t$ (see, *e.g.*, Eq. (209) of [@BlanchetLR]): $$\begin{aligned}
\label{eq:Lpp}
L_\text{pp} &= \frac{m_1 v_1^2}{2} + \frac{G m_1 m_2}{2 r_{12}}
\nonumber \\ &
+ \frac{1}{c^2} \left\{ - \frac{G^2 m_1^2 m_2}{2
r_{12}^2} + \frac{m_1 v_1^4}{8} + \frac{G m_1 m_2}{r_{12}} \left(
- \frac{1}{4} (n_{12}v_1) (n_{12}v_2) + \frac{3}{2} v_1^2 -
\frac{7}{4} (v_1v_2) \right) \right\} \nonumber \\ &
+ \frac{1}{c^4} \Bigg\{ \frac{G^3 m_1^3 m_2}{2 r_{12}^3} + \frac{19
G^3 m_1^2 m_2^2}{8 r_{12}^3} \nonumber \\ & \qquad ~\, + \frac{G^2
m_1^2 m_2}{r_{12}^2} \left( \frac{7}{2} (n_{12}v_1)^2 - \frac{7}{2}
(n_{12}v_1) (n_{12}v_2) + \frac{1}{2}(n_{12}v_2)^2 + \frac{1}{4} v_1^2
- \frac{7}{4} (v_1v_2) + \frac{7}{4} v_2^2 \right) \nonumber \\ &
\qquad ~\, + \frac{G m_1 m_2}{r_{12}} \bigg( \frac{3}{16}
(n_{12}v_1)^2 (n_{12}v_2)^2 - \frac{7}{8} (n_{12}v_2)^2 v_1^2 +
\frac{7}{8} v_1^4 + \frac{3}{4} (n_{12}v_1) (n_{12}v_2) (v_1v_2)
\nonumber \\ & \qquad \qquad \qquad \qquad - 2 v_1^2 (v_1v_2) +
\frac{1}{8} (v_1v_2)^2 + \frac{15}{16} v_1^2 v_2^2 \bigg) + \frac{m_1
v_1^6}{16} \nonumber \\ & \qquad ~\, + G m_1 m_2 \left( -
\frac{7}{4} (a_1 v_2) (n_{12}v_2) - \frac{1}{8} (n_{12} a_1)
(n_{12}v_2)^2 + \frac{7}{8} (n_{12} a_1) v_2^2 \right) \Bigg\} +
1\leftrightarrow 2 + \mathcal{O}\left(\frac{1}{c^5}\right)\,.\end{aligned}$$ To the terms given above, we must add their symmetric counterpart in the exchange of the two particles, as indicated by the notation $1\leftrightarrow
2$. Now, the main result of the present paper is the complete expression of the tidal part of the Lagrangian up to the NNL/2PN order in harmonic coordinates. It reads $$\begin{aligned}
\label{eq:Ltidal}
L_{\text{tidal}}&= \frac{G^2 m_{2}^2}{r_{12}^6} \biggl\{\frac{3}{2} \mu_1^{(2)}
+ \frac{1}{c^{2}} \biggl [\mu_1^{(2)} \Bigl(- \frac{9}{2} (n_{12}{} v_{1}{})^2
- 18 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})
+ 18 (n_{12}{} v_{2}{})^2
- \frac{9}{2} (v_{1}{} v_{2}{})
+ \frac{15}{4} v_{1}{}^{2}\Bigr)\nonumber\\
& + \sigma_1^{(2)} \Bigl(-12 (n_{12}{} v_{12}{})^2
+ 12 v_{12}{}^{2}\Bigr)
- \frac{3 G m_{1} \mu_1^{(2)}}{r_{12}}
- \frac{21 G m_{2} \mu_1^{(2)}}{2 r_{12}}\biggl]
+ \frac{1}{c^{4}} \biggl [\mu_1^{(2)}
\Bigl(\frac{9}{2} (n_{12}{} v_{1}{})^4\nonumber\\
& - 18 (n_{12}{} v_{1}{})^3 (n_{12}{} v_{2}{})
+ 45 (n_{12}{} v_{1}{})^2 (n_{12}{} v_{2}{})^2
- 54 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})^3
+ \frac{63}{2} (n_{12}{} v_{2}{})^4
+ 9 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{}) (v_{1}{} v_{2}{})\nonumber\\
& - 18 (n_{12}{} v_{2}{})^2 (v_{1}{} v_{2}{})
+ \frac{9}{2} (v_{1}{} v_{2}{})^2
- 9 (n_{12}{} v_{1}{})^2 v_{12}{}^{2}
+ 27 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{}) v_{12}{}^{2}
- 36 (n_{12}{} v_{2}{})^2 v_{12}{}^{2}\nonumber\\
& + 9 (v_{1}{} v_{2}{}) v_{12}{}^{2}
+ 9 v_{12}{}^{4}
- \frac{9}{4} (n_{12}{} v_{1}{})^2 v_{1}{}^{2}
- \frac{9}{2} (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{}) v_{1}{}^{2}
+ \frac{27}{2} (n_{12}{} v_{2}{})^2 v_{1}{}^{2}
- 9 (v_{1}{} v_{2}{}) v_{1}{}^{2}\nonumber\\
& - \frac{27}{4} v_{12}{}^{2} v_{1}{}^{2}
+ \frac{69}{16} v_{1}{}^{4}\Bigr)
+ \mu_1^{(2)} r_{12} \Bigl(-12 (v_{12}{} a_{2}{}) (n_{12}{} v_{1}{})
+ 60 (n_{12}{} a_{2}{}) (n_{12}{} v_{1}{})^2
+ 21 (v_{12}{} a_{2}{}) (n_{12}{} v_{2}{})\nonumber\\
& - \frac{9}{2} (v_{1}{} a_{2}{}) (n_{12}{} v_{2}{})
- 102 (n_{12}{} a_{2}{}) (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})
+ 60 (n_{12}{} a_{2}{}) (n_{12}{} v_{2}{})^2
+ \frac{69}{2} (n_{12}{} a_{2}{}) (v_{1}{} v_{2}{})
- \frac{69}{4} (n_{12}{} a_{2}{}) v_{1}{}^{2}\nonumber\\
& - \frac{39}{2} (n_{12}{} a_{2}{}) v_{2}{}^{2}\Bigr)
+ \sigma_1^{(2)} \Bigl(60 (n_{12}{} v_{12}{})^4
- 96 (n_{12}{} v_{12}{})^3 (n_{12}{} v_{1}{})
+ 48 (n_{12}{} v_{12}{})^2 (n_{12}{} v_{1}{})^2
- 24 (n_{12}{} v_{12}{})^2 (v_{1}{} v_{2}{})\nonumber\\
& + 24 (n_{12}{} v_{12}{}) (n_{12}{} v_{1}{}) (v_{1}{} v_{2}{})
+ 12 (v_{1}{} v_{2}{})^2
- 84 (n_{12}{} v_{12}{})^2 v_{12}{}^{2}
+ 96 (n_{12}{} v_{12}{}) (n_{12}{} v_{1}{}) v_{12}{}^{2}
- 36 (n_{12}{} v_{1}{})^2 v_{12}{}^{2}\nonumber\\
& + 24 (v_{1}{} v_{2}{}) v_{12}{}^{2}
+ 24 v_{12}{}^{4}
+ 18 (n_{12}{} v_{12}{})^2 v_{1}{}^{2}
- 24 (n_{12}{} v_{12}{}) (n_{12}{} v_{1}{}) v_{1}{}^{2}
- 24 (v_{1}{} v_{2}{}) v_{1}{}^{2}
- 18 v_{12}{}^{2} v_{1}{}^{2}
+ 12 v_{1}{}^{4}\Bigr)\nonumber\\
& + \sigma_1^{(2)} r_{12} \Bigl(16 (n_{12}{} a_{2}{}) (n_{12}{} v_{12}{})^2
+ 24 (v_{12}{} a_{2}{}) (n_{12}{} v_{1}{})
- 24 (n_{12}{} a_{2}{}) (n_{12}{} v_{12}{}) (n_{12}{} v_{1}{})
- 16 (n_{12}{} a_{2}{}) v_{12}{}^{2}\Bigr)\nonumber\\
& + \frac{G m_{1} \mu_1^{(2)}}{r_{12}} \Bigl(\frac{807}{8} (n_{12}{} v_{1}{})^2
+ \frac{381}{8} (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})
- 138 (n_{12}{} v_{2}{})^2
- \frac{387}{8} (v_{1}{} v_{2}{})
+ \frac{63}{8} v_{1}{}^{2}
+ 42 v_{2}{}^{2}\Bigr)\nonumber\\
& + \frac{G m_{2} \mu_1^{(2)}}{r_{12}} \Bigl(\frac{27}{2} (n_{12}{} v_{1}{})^2
+ \frac{1051}{8} (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})
- \frac{865}{8} (n_{12}{} v_{2}{})^2
+ \frac{83}{8} (v_{1}{} v_{2}{})
- \frac{45}{4} v_{1}{}^{2}
+ \frac{49}{8} v_{2}{}^{2}\Bigr)\nonumber\\
& + \frac{G m_{1} \sigma_1^{(2)}}{r_{12}} \Bigl(-8 (n_{12}{} v_{12}{})^2
+ 8 v_{12}{}^{2}\Bigr)
+ \frac{G m_{2} \sigma_1^{(2)}}{r_{12}} \Bigl(36 (n_{12}{} v_{12}{})^2
- 36 v_{12}{}^{2}\Bigr)
- \frac{60 G^2 m_{1}^2 \mu_1^{(2)}}{7 r_{12}^2}\nonumber\\
& + \frac{707 G^2 m_{1} m_{2} \mu_1^{(2)}}{8 r_{12}^2}
+ \frac{165 G^2 m_{2}^2 \mu_1^{(2)}}{4 r_{12}^2}\biggl]
+ \frac{15 \mu_1^{(3)}}{2 r_{12}^2}\biggl\}
+ 1 \leftrightarrow 2 + \mathcal{O}\left(
\frac{\epsilon_\text{tidal}}{c^{6}}\right) \,.\end{aligned}$$ Note that the last term, although it does not contain any explicit $1/c$-factor, is actually a NNL term \[see \[eq:order\]\].
The lengthy EoM derived by varying the Lagrangian are relegated to \[appendix:accNNL\]. We have verified that these EoM in harmonic coordinates remain manifestly invariant under a global (PN-expanded) Lorentz boost with constant velocity $\bm{V}$. All the formulas employed to check the Lorentz invariance are given by Eqs. (3.20)–(3.23) of Ref. [@BFregM]. Furthermore, as a confirmation of the boost invariance of the EoM, we can compute the Noetherian invariant associated with this symmetry, which is nothing but the (mass-weighted) position of the center of mass $G^i$ of the binary system. We obtain $G^i = G^i_\text{pp} + G^i_\text{tidal}$, where the point-particle piece is given by Eq. (4.4) in [@ABF01], *i.e.* at the 1PN order by $$\label{eq:Gipp}
G^i_\text{pp} = m_1 y_{1}^{i} + \frac{m_1}{2c^2}\left( v_1^2 -
\frac{G m_2}{r_{12}} \right)y_{1}^{i} + 1\leftrightarrow2 +
\mathcal{O}\left( \frac{1}{c^{4}} \right)\,,$$ and where the dominant tidal piece appears only at the NL/1PN order and is given by $$\label{eq:Gitidal}
G^i_\text{tidal} = \frac{3 G^{2} m_{2}^{2}}{2 r_{12}^{5} c^2}
\,\mu_{1}^{(2)} \left(3 n_{12}^{i}- \frac{y_{1}^{i}}{r_{12}} \right) +
1\leftrightarrow2 + \mathcal{O}\left(
\frac{\epsilon_\text{tidal}}{c^{4}} \right)\,.$$ For simplicity, since they are not needed in the following, we do not present the complicated NNL/2PN contributions beyond the result .
Tidal effects in the center-of-mass frame {#sec:CoM}
=========================================
The center-of-mass (CoM) frame is defined as the frame for which the equation $G^i = 0$ holds, consistently including the tidal terms. The structure of the leading order of the EoM and energy allows one to compute the corresponding CoM quantities at the 2PN relative order without requiring $G^{i}$ itself at that order. Rather, it is sufficient to know $G^{i}$ at the 1PN relative order for this calculation, which means including the tidal effects at the NL/1PN order, as given by \[eq:Gitidal\]. Solving the equation $G^i = 0$ then yields the CoM position of particle 1 as a function of the relative separation and velocity.[^7] We find $y_1^i = (y_1^i)_\text{pp} +
(y_1^i)_\text{tidal}$, where the known 1PN expression for the point-particle piece reads $$\label{eq:y1pp}
(y_1^i)_\text{pp} = \left[ X_2 + \frac{\nu\,\Delta}{2 c^2} \left(v^2 - \frac{G
m}{r}\right)\right] x^i + \mathcal{O}\left( \frac{1}{c^{4}} \right)\,,$$ with the position of the particle 2 obtained by the exchange $1\leftrightarrow 2$. Now, the point is that, because of the tidal contribution to the CoM position found in , there also exists a NL/1PN contribution given by $$\label{eq:y1tidal}
(y_1^i)_\text{tidal} = - \frac{3 G^{2} m \nu}{2 r^{6} c^2} \Big( \Delta
\,\mu_{+}^{(2)} +5 \mu_{-}^{(2)} \Big) x^{i} + \mathcal{O}\left(
\frac{\epsilon_\text{tidal}}{c^{4}} \right)\,.$$ The velocities $v_1^i = (v_1^i)_\text{pp} + (v_1^i)_\text{tidal}$ are found by iteratively differentiating Eqs. –, using in the process the full EoM, which include the tidal effects. Here and below, we define the following convenient combinations of the tidal polarizabilities: $$\label{eq:polarpm}
\mu_\pm^{(\ell)} = \frac{1}{2}\left(\frac{m_{2}}{m_{1}}\,\mu_{1}^{(\ell)} \pm
\frac{m_{1}}{m_{2}}\,\mu_{2}^{(\ell)}\right)\,,\qquad \sigma_\pm^{(\ell)} =
\frac{1}{2}\left(\frac{m_{2}}{m_{1}}\,\sigma_{1}^{(\ell)} \pm
\frac{m_{1}}{m_{2}}\,\sigma_{2}^{(\ell)}\right)\,,$$ where the chosen normalisation is such that $\mu_+^{(\ell)} =
\mu_1^{(\ell)} = \mu_2^{(\ell)}$ and $\mu_-^{(\ell)} = 0$ when the two bodies are identical, with the same mass and internal structure. Likewise for $\sigma_\pm^{(\ell)}$.
At this stage, the EoM in the CoM frame can be derived in two possible ways: either by computing the CoM acceleration $a^{i}= a_{1}^{i}-a_{2}^{i}$ directly, based on the replacement rules –, or by first deriving the expression of the Lagrangian in the CoM frame from the Lagrangian in a general frame, and then varying it to recover the EoM. We resorted to both methods and the results are in full agreement (see also [@MBBF17] for further details on the second method). The CoM Lagrangian may be decomposed as $L = L_\text{pp} + L_\text{tidal}$, where $L_\text{pp}$ is *e.g.* given by Eq. (4.2) in [@BI03CM] while the tidal part reads, up to NNL order, $$\begin{aligned}
\label{eq:LCoMtidal}
\frac{L_{\text{tidal}}}{\mu} &=\frac{G^2 m}{r^6} \Bigg\{ 3 \mu_{+}^{(2)}
+ \frac{1}{c^{2}} \biggl\{\biggl [\mu_{+}^{(2)} \Bigl(\frac{27}{2}
+ 9 \nu \Bigr)
+ \frac{45}{2} \Delta\mu_{-}^{(2)}
- 24 \sigma_{+}^{(2)}\biggl] \dot{r}^2
+ \biggl [\mu_{+}^{(2)} \Bigl(\frac{15}{4}
+ \frac{3}{2} \nu \Bigr)
- \frac{15}{4} \Delta\mu_{-}^{(2)}\nonumber\\
& + 24 \sigma_{+}^{(2)}\biggl] v^{2}
+ \frac{G m}{r} \Bigl(- \frac{27}{2} \mu_{+}^{(2)}
+ \frac{15}{2} \Delta\mu_{-}^{(2)}\Bigr)\biggl\}
+ \frac{1}{c^{4}} \Biggl[r \biggl\{\biggl [\mu_{+}^{(2)} \Bigl(21
- \frac{45}{2} \nu \Bigr)
+ \Delta\mu_{-}^{(2)} \Bigl(21
- \frac{9}{2} \nu \Bigr)\nonumber\\
& - 48 \nu \sigma_{+}^{(2)}\biggl] a_{v} \dot{r}
+ \biggl [\mu_{+}^{(2)} \Bigl(-60
+ 18 \nu \Bigr)
+ \Delta\mu_{-}^{(2)} \Bigl(-60
+ 18 \nu \Bigr)
+ \sigma_{+}^{(2)} \Bigl(-16
+ 48 \nu \Bigr)
- 16 \Delta\sigma_{-}^{(2)}\biggl] a_{n} \dot{r}^2\nonumber\\
& + \biggl [\mu_{+}^{(2)} \Bigl(\frac{39}{2}
- \frac{27}{4} \nu \Bigr)
+ \Delta\mu_{-}^{(2)} \Bigl(\frac{39}{2}
- \frac{9}{4} \nu \Bigr)
+ 16 \sigma_{+}^{(2)}
+ 16 \Delta\sigma_{-}^{(2)}\biggl] a_{n} v^{2}\biggl\}
+ \biggl [\mu_{+}^{(2)} \Bigl(36
- 72 \nu
+ 18 \nu^2\Bigr)\nonumber\\
& + \Delta\mu_{-}^{(2)} \Bigl(27
- 18 \nu \Bigr)
+ \sigma_{+}^{(2)} \Bigl(72
- 96 \nu \Bigr)
+ 48 \Delta\sigma_{-}^{(2)}\biggl] \dot{r}^4
+ \biggl [\mu_{+}^{(2)} \Bigl(- \frac{189}{4}
+ 72 \nu
- \frac{45}{2} \nu^2\Bigr)
+ \Delta\mu_{-}^{(2)} \Bigl(- \frac{99}{4}\nonumber\\
& - \frac{27}{2} \nu \Bigr)
+ \sigma_{+}^{(2)} \Bigl(-114
+ 132 \nu \Bigr)
- 54 \Delta\sigma_{-}^{(2)}\biggl] \dot{r}^2 v^{2}
+ \biggl [\mu_{+}^{(2)} \Bigl(\frac{249}{16}
- 12 \nu
- \frac{27}{8} \nu^2\Bigr)
+ \Delta\mu_{-}^{(2)} \Bigl(\frac{39}{16}
+ \frac{27}{8} \nu \Bigr)\nonumber\\
& + \sigma_{+}^{(2)} \Bigl(42
- 36 \nu \Bigr)
+ 6 \Delta\sigma_{-}^{(2)}\biggl] v^{4}
+ \frac{G m}{r} \biggl\{\biggl [\mu_{+}^{(2)} \Bigl(- \frac{249}{2}
+ \frac{355}{2} \nu
+ 39 \nu^2\Bigr)
+ \Delta\mu_{-}^{(2)} \Bigl(- \frac{303}{2}
+ \frac{135}{2} \nu \Bigr)\nonumber\\
& + 28 \sigma_{+}^{(2)}
- 44 \Delta\sigma_{-}^{(2)}\biggl] \dot{r}^2
+ \biggl [\mu_{+}^{(2)} \Bigl(\frac{123}{4}
- 41 \nu
+ 3 \nu^2\Bigr)
+ \frac{213}{4} \Delta\mu_{-}^{(2)}
- 28 \sigma_{+}^{(2)}
+ 44 \Delta\sigma_{-}^{(2)}\biggl] v^{2}\biggl\}\nonumber\\
& + \frac{G^2 m^2}{r^2} \biggl [\mu_{+}^{(2)} \Bigl(\frac{915}{28}
+ \frac{3119}{28} \nu \Bigr)
- \frac{1395}{28} \Delta\mu_{-}^{(2)}\biggl]\Biggl]
+ \mu_{+}^{(3)} \frac{15}{r^2}\Bigg\} + \mathcal{O}\left(
\frac{\epsilon_\text{tidal}}{c^{6}} \right)\,.\end{aligned}$$ Note again that the last term is actually a NNL/2PN contribution. The corresponding relative CoM acceleration is displayed in \[appendix:accNNL\]. Similarly, we show here the tidal part of the conserved energy $E = E_\text{pp} + E_\text{tidal}$: $$\begin{aligned}
\label{eq:Etidal}
\frac{E_\text{tidal}}{m\nu} =& -
3 \frac{G^{2} m }{r^{6}}\mu_{+}^{(2)} +
\frac{1}{c^{2}} \left\{ \frac{G^{2} m}{r^{6}}
\left[ \left[ \left(\frac{27}{2} + 9\nu \right)\mu_{+}^{(2)} +
\frac{45}{2} \Delta \, \mu_{-}^{(2)} -24 \sigma_{+}^{(2)} \right]\dot{r}^{2}
\right. \right. \nonumber\\
& \left. \left. + \left( \left(\frac{15}{4} +
\frac{3}{2}\nu \right)\mu_{+}^{(2)} -
\frac{15}{4} \Delta \, \mu_{-}^{(2)} +
24 \sigma_{+}^{(2)} \right) v^{2} \right] +
\frac{G^{3} m^{2}}{r^{7}} \left[ \frac{27}{2} \mu_{+}^{(2)} -
\frac{15}{2} \Delta \, \mu_{-}^{(2)} \right] \right\} \nonumber \\
& + \frac{1}{c^{4}} \left\{ \frac{G^{2} m}{r^{6}}
\left[ \left( \left( -372-72 \nu +54 \nu^{2} \right)\mu_{+}^{(2)} +
\left(-399 +90 \nu \right) \Delta \, \mu_{-}^{(2)} +
\left(88+96\nu \right)\sigma_{+}^{(2)} +
16 \Delta \, \sigma_{-}^{(2)}\right)\dot{r}^{4} \right. \right. \nonumber \\
& \left. \left. + \left( \left( \frac{1125}{4}-
\frac{27}{2} \nu -\frac{135}{2} \nu^{2} \right)\mu_{+}^{(2)} +
\left(\frac{1395}{4} -135 \nu \right) \Delta \, \mu_{-}^{(2)} +
\left(-198-36\nu \right)\sigma_{+}^{(2)} -18 \Delta \, \sigma_{-}^{(2)}
\right)
\dot{r}^{2}v^{2} \right. \right. \nonumber \\
& \left. \left. + \left( \left( \frac{99}{16}-
\frac{27}{4} \nu -\frac{81}{8} \nu^{2} \right)\mu_{+}^{(2)} +
\left(-\frac{531}{16} +\frac{135}{8} \nu \right)
\Delta \, \mu_{-}^{(2)} + \left(110-60\nu \right)\sigma_{+}^{(2)} +
2 \Delta \, \sigma_{-}^{(2)}\right)v^{4}\right] \right. \nonumber \\
& \left. + \frac{G^{3} m^{2}}{r^{7}}
\left[ \left( \left( -\frac{213}{2}+\frac{499}{2} \nu +
39 \nu^{2} \right)\mu_{+}^{(2)} +
\left(-\frac{267}{2} +\frac{135}{2} \nu \right) \Delta \, \mu_{-}^{(2)} +
\left(60+48\nu \right)\sigma_{+}^{(2)} -
12 \Delta \, \sigma_{-}^{(2)}\right) \dot{r}^{2} \right. \right. \nonumber \\
& \left. \left. + \left( \left( \frac{51}{4}-
113 \nu +3 \nu^{2} \right)\mu_{+}^{(2)} +\frac{141}{4} \Delta \, \mu_{-}^{(2)} +
\left(-60-48\nu \right)\sigma_{+}^{(2)} +
12 \Delta \, \sigma_{-}^{(2)}\right) v^{2} \right] \right. \nonumber \\
& \left. + \frac{G^{4} m^{3}}{r^{8}}
\left[ \left( -\frac{915}{28} - \frac{3119}{28} \nu \right)\mu_{+}^{(2)} +
\frac{1395}{28}\Delta \, \mu_{-}^{(2)}\right] \right\} -
15 \frac{G^{2} m }{r^{8}}\mu_{+}^{(3)} + \mathcal{O}\left(
\frac{\epsilon_\text{tidal}}{c^{6}} \right)\,.\end{aligned}$$ Finally, for the CoM angular momentum $J^{i} = J^{i}_\text{pp} + J^{i}_\text{tidal}$, we find (denoting $L^i=\varepsilon_{ijk}x^{j}v^{k}$) $$\begin{aligned}
\label{eq:Jtidal}
\frac{J^{i}_\text{tidal}}{m\nu} &= \frac{G^2 m}{c^2 r^6} L^{i}
\Biggl[\mu_{+}^{(2)} \Bigl(\frac{15}{2}
+ 3 \nu \Bigr)
- \frac{15}{2} \Delta\mu_{-}^{(2)}
+ 48 \sigma_{+}^{(2)}
+ \frac{1}{c^{2}} \biggl\{\biggl [\mu_{+}^{(2)} \Bigl(\frac{303}{2}
- 27 \nu
- 45 \nu^2\Bigr)
+ \Delta\mu_{-}^{(2)} \Bigl(\frac{393}{2}
- 90 \nu \Bigr)\nonumber\\
& + \sigma_{+}^{(2)} \Bigl(-196
- 120 \nu \Bigr)
- 76 \Delta\sigma_{-}^{(2)}\biggl] \dot{r}^2
+ \biggl [\mu_{+}^{(2)} \Bigl(\frac{9}{4}
- 12 \nu
- \frac{27}{2} \nu^2\Bigr)
+ \Delta\mu_{-}^{(2)} \Bigl(- \frac{201}{4}
+ \frac{45}{2} \nu \Bigr)
+ \sigma_{+}^{(2)} \Bigl(136\nonumber\\
& - 96 \nu \Bigr)
- 8 \Delta\sigma_{-}^{(2)}\biggl] v^{2}
+ \frac{G m}{r} \biggl [\mu_{+}^{(2)} \Bigl(\frac{87}{2}
- 154 \nu
+ 6 \nu^2\Bigr)
+ \frac{177}{2} \Delta\mu_{-}^{(2)}
+ \sigma_{+}^{(2)} \Bigl(-88
- 48 \nu \Bigr)
+ 56 \Delta\sigma_{-}^{(2)}\biggl]\biggl\}\Biggl]\nonumber \\ & + \mathcal{O}\left(
\frac{\epsilon_\text{tidal}}{c^{6}} \right)\,.\end{aligned}$$ The point-particle pieces $E_{\text{pp}}$ and $J^i_{\text{pp}}$ are displayed in Eqs. (4.8) and (4.9) of Ref. [@BI03CM].
Tidal effects for quasi-circular orbits {#sec:circ}
=======================================
We consider quasi-circular orbits, *i.e.* orbits that are circular in our harmonic coordinate system but for the dissipative radiation-reaction effects. For such orbits, we can neglect $\dot{r}=\mathcal{O}(c^{-5})$, which is precisely of the order of radiation reaction effects. Under this assumption, we see from \[eq:accCoM\] that the CoM acceleration becomes purely radial, $a^{i} = - \omega^{2} x^{i}$, from which we can read off the orbital angular frequency $\omega$. Relevant quantities will then depend only on the bodies’ separation $r$ or, equivalently (*via* a generalized Kepler third law), on the orbital frequency $\omega$. In the case of circular orbits, it is convenient to introduce the dimensionless PN parameters associated with the separation and orbital frequency as $$\label{eq:defgammax}
\gamma = \frac{Gm}{rc^2}\,,\qquad x=\left( \frac{G m \omega}{c^3}\right)^{2/3}\,,$$ as well as to adimensionalize the polarizability coefficients defined in Eqs. by considering the “tilded” quantities[^8] $$\label{eq:polarpmtilde}
\widetilde{\mu}_\pm^{(\ell)} = \left(\frac{c^2}{G m}\right)^{2\ell+1}
\!\!\!G\,\mu_\pm^{(\ell)}\,,\qquad \widetilde{\sigma}_\pm^{(\ell)} =
\left(\frac{c^2}{G m}\right)^{2\ell+1} \!\!\!G\,\sigma_\pm^{(\ell)}\,.$$ By identifying the expression of $\omega^{2}$ from the circular-orbit EoM as explained above and replacing $\gamma$ iteratively, we recover the well-known formula for point masses at the 2PN order, with a non-trivial NNL/2PN relative tidal contribution
\[eq:omega2\] $$\begin{aligned}
(\omega^{2})_\text{pp} &= \frac{G m}{r^{3}}\left[ 1 + (-3+\nu)\gamma +
\left( 6 + \frac{41}{4} \nu + \nu^{2} \right) \gamma^{2}\right] +
\mathcal{O}\left( \frac{1}{c^{6}} \right)
\,,\\ (\omega^{2})_\text{tidal} &= \frac{G m}{r^{3}}\biggl\{
18\,\widetilde{\mu}_{+}^{(2)} \gamma^{5} + \left[ \left(
-\frac{249}{2} +51 \nu \right)\widetilde{\mu}_{+}^{(2)} +
\frac{75}{2}\Delta \, \widetilde{\mu}_{-}^{(2)}
+96\,\widetilde{\sigma}_{+}^{(2)} \right] \gamma^{6}
\nonumber\\ &\qquad\quad + \left[ \left( \frac{34317}{56} +
\frac{2976}{7} \nu + 54\nu^{2}\right)\widetilde{\mu}_{+}^{(2)} +
\left( -\frac{12051}{56} +90\nu \right) \Delta \,
\widetilde{\mu}_{-}^{(2)} \right. \nonumber\\& \qquad\qquad\qquad
\left. + \bigl(-616+264\nu\bigr)\widetilde{\sigma}_{+}^{(2)} + 200
\Delta \, \widetilde{\sigma}_{-}^{(2)} + 120
\,\widetilde{\mu}_{+}^{(3)}\right]\gamma^{7}\biggr\} +
\mathcal{O}\left( \frac{\epsilon_\text{tidal}}{c^{6}} \right)\,.\end{aligned}$$
Next, we may determine the relation between $\gamma$ and $x$, defined in Eqs. , by inverting Eqs. , with the result:
\[eq:gammaofx\] $$\begin{aligned}
\gamma_\text{pp} &= x \left[ 1 + \left(1-\frac{\nu}{3}\right)x +
\left( 1 - \frac{65}{12} \nu \right)x^{2} \right] +
\mathcal{O}\left(\frac{1}{c^{6}} \right) \,,\\ \gamma_\text{tidal} &=
x \biggl\{ -6\widetilde{\mu}_{+}^{(2)} x^{5} + \left[ \left(
-\frac{37}{2} +3 \nu \right)\widetilde{\mu}_{+}^{(2)} -
\frac{25}{2}\Delta \, \widetilde{\mu}_{-}^{(2)}
-32\widetilde{\sigma}_{+}^{(2)} \right] x^{6}
\nonumber\\ &\qquad\quad + \left[ \left( -\frac{4355}{56} +
\frac{1105}{21} \nu + 15\nu^{2}\right)\widetilde{\mu}_{+}^{(2)} +
\left( -\frac{3683}{56} +\frac{95}{6}\nu \right) \Delta \,
\widetilde{\mu}_{-}^{(2)} \right. \nonumber\\& \qquad\qquad\qquad
\left. +
\left(-\frac{440}{3}+\frac{88}{3}\nu\right)\widetilde{\sigma}_{+}^{(2)}
- \frac{200}{3} \Delta \, \widetilde{\sigma}_{-}^{(2)} - 40
\widetilde{\mu}_{+}^{(3)}\right]x^{7}\biggr\} + \mathcal{O}\left(
\frac{\epsilon_\text{tidal}}{c^{6}} \right)\,.\end{aligned}$$
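The point-particle part of this inversion is readily checked with a computer-algebra system. The short sympy sketch below (an illustration of ours, not part of the paper's reduction pipeline) encodes $x^3=(Gm\omega/c^3)^2=\gamma^3 f(\gamma)$, with $f(\gamma)$ the bracket of $(\omega^2)_\text{pp}$ taken at its standard harmonic-coordinates 2PN value $1+(-3+\nu)\gamma+(6+\tfrac{41}{4}\nu+\nu^2)\gamma^2$, and inverts the series order by order to recover the coefficients of $\gamma_\text{pp}(x)$ above:

```python
import sympy as sp

gamma, x, nu, c2, c3 = sp.symbols('gamma x nu c2 c3')

# (omega^2)_pp = (G m / r^3) f(gamma) at 2PN; then
# x^3 = (G m omega / c^3)^2 = gamma^3 f(gamma), i.e. x = gamma f^(1/3)
f = 1 + (-3 + nu)*gamma + (6 + sp.Rational(41, 4)*nu + nu**2)*gamma**2
x_of_gamma = sp.series(gamma*f**sp.Rational(1, 3), gamma, 0, 4).removeO()

# Invert order by order with the ansatz gamma = x + c2 x^2 + c3 x^3:
# require the composed series to equal x at orders x^2 and x^3
expr = sp.expand(x_of_gamma.subs(gamma, x + c2*x**2 + c3*x**3))
sol = sp.solve([expr.coeff(x, 2), expr.coeff(x, 3)], [c2, c3])

print(sp.expand(sol[c2]))  # 1 - nu/3
print(sp.expand(sol[c3]))  # 1 - 65*nu/12
```

The two printed coefficients match the 1PN and 2PN terms of $\gamma_\text{pp}(x)$.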
The conserved energy for circular orbits can now be computed. To do so, we take \[eq:Etidal\], add the point-particle part, set $\dot{r}=0$, and replace $v^{2}=r^{2}\omega^{2}$ by its expression in terms of the parameter $\gamma$ using Eqs. . This first yields $E$ as a function of $\gamma$. We finally insert the previous relation between $\gamma$ and $x$ to obtain an important result, namely the expression of the circular-orbit energy as a function of the frequency-dependent parameter $x$:
\[eq:Ex\] $$\begin{aligned}
E_\text{pp} &= -\frac{1}{2} m \nu x c^2 \left[ 1 + \left( -\frac{3}{4}
-\frac{\nu}{12} \right)x + \left( -\frac{27}{8} +\frac{19}{8}
\nu - \frac{\nu^{2}}{24} \right)x^{2} \right] +
\mathcal{O}\left(\frac{1}{c^{6}} \right)\,, \\
E_\text{tidal} &= -\frac{1}{2} m \nu x c^2
\biggl\{ - 18 \widetilde{\mu}_{+}^{(2)} x^5 +
\left[\left(-\frac{121}{2} +33\nu \right)\widetilde{\mu}_{+}^{(2)}-
\frac{55}{2}\Delta \, \widetilde{\mu}_{-}^{(2)} -
176\,\widetilde{\sigma}_{+}^{(2)} \right]x^6 \nonumber\\
&\qquad\quad + \left[ \left(-\frac{20865}{56} +
\frac{5434}{21}\nu -\frac{91}{4}\nu^2 \right)
\widetilde{\mu}_{+}^{(2)} +
\Delta \left(-\frac{11583}{56}+\frac{715}{12}\nu \right)
\widetilde{\mu}_{-}^{(2)} \right. \nonumber\\& \qquad\qquad\qquad \left. +
\left(-\frac{2444}{3} +
\frac{1768}{3}\nu \right)\widetilde{\sigma}_{+}^{(2)} -
\frac{884}{3} \Delta \, \widetilde{\sigma}_{-}^{(2)} -
130 \,\widetilde{\mu}_{+}^{(3)} \right]x^7 \biggr\} + \mathcal{O}\left(
\frac{\epsilon_\text{tidal}}{c^{6}} \right)\,.\end{aligned}$$
We can also compute by the same method the constant angular momentum for circular orbits, which reads
\[eq:Jx\] $$\begin{aligned}
J_\text{pp} &= \frac{G m^2 \,\nu}{c\,x^{1/2}} \left[ 1 + \left(
\frac{3}{2} + \frac{\nu}{6} \right) x + \left( \frac{27}{8} -
\frac{19}{8} \nu + \frac{\nu^2}{24} \right) x^2\right] +
\mathcal{O}\left(\frac{1}{c^{6}} \right)\,,\\
J_\text{tidal} &= \dfrac{G m^{2} \nu}{c x^{1/2}} \biggl\{12
\widetilde{\mu}_{+}^{(2)} x^5 +
\left[\left(\dfrac{77}{2} -21\nu
\right)\widetilde{\mu}_{+}^{(2)}+\dfrac{35}{2}\Delta \,
\widetilde{\mu}_{-}^{(2)} +112 \widetilde{\sigma}_{+}^{(2)}
\right]x^6 + \left[ \left(\dfrac{1605}{7} - \dfrac{3344}{21}\nu
+14\nu^2 \right)\widetilde{\mu}_{+}^{(2)} \right. \nonumber \\ &
\qquad \qquad \left. + \Delta \left(\dfrac{891}{7}-\dfrac{110}{3}\nu
\right)\widetilde{\mu}_{-}^{(2)} + \left(\dfrac{1504}{3} -
\dfrac{1088}{3}\nu \right)\widetilde{\sigma}_{+}^{(2)} +
\dfrac{544}{3} \Delta \, \widetilde{\sigma}_{-}^{(2)} +80
\widetilde{\mu}_{+}^{(3)} \right]x^7 \biggr\} + \mathcal{O}\left(
\frac{\epsilon_\text{tidal}}{c^{6}} \right)\,.\end{aligned}$$
We have verified that the energy $E$ and angular momentum $J$ for circular orbits, including all the tidal contributions given in –, are linked by the famous relation $$\label{eq:thermo}
\frac{\partial E}{\partial \omega} = \omega \, \frac{\partial
J}{\partial \omega} + \mathcal{O}\left( \frac{1}{c^{6}},
\frac{\epsilon_\text{tidal}}{c^{6}}\right)\,,$$ which is just one aspect of the “first law of binary point-particle mechanics” [@LBW12].
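This verification can also be automated. The following sympy sketch (illustrative only; the symbols `mu2p`, `mu2m`, `mu3p`, `sig2p`, `sig2m` are our shorthand for $\widetilde{\mu}_+^{(2)}$, $\widetilde{\mu}_-^{(2)}$, $\widetilde{\mu}_+^{(3)}$, $\widetilde{\sigma}_+^{(2)}$, $\widetilde{\sigma}_-^{(2)}$, with the $\Delta$ factors written explicitly) checks that $\partial_x E=\omega\,\partial_x J$ holds exactly, power by power in $x$, for both the point-particle and tidal series:

```python
import sympy as sp

x, nu, Delta, m, G, c = sp.symbols('x nu Delta m G c', positive=True)
mu2p, mu2m, mu3p, sig2p, sig2m = sp.symbols('mu2p mu2m mu3p sig2p sig2m')

# Circular-orbit energy E(x) of Eqs. [eq:Ex], point-particle + tidal parts
E = -sp.Rational(1, 2)*m*nu*c**2*x*(
    1 + (-sp.Rational(3, 4) - nu/12)*x
    + (-sp.Rational(27, 8) + sp.Rational(19, 8)*nu - nu**2/24)*x**2
    - 18*mu2p*x**5
    + ((-sp.Rational(121, 2) + 33*nu)*mu2p
       - sp.Rational(55, 2)*Delta*mu2m - 176*sig2p)*x**6
    + ((-sp.Rational(20865, 56) + sp.Rational(5434, 21)*nu
        - sp.Rational(91, 4)*nu**2)*mu2p
       + (-sp.Rational(11583, 56) + sp.Rational(715, 12)*nu)*Delta*mu2m
       + (-sp.Rational(2444, 3) + sp.Rational(1768, 3)*nu)*sig2p
       - sp.Rational(884, 3)*Delta*sig2m - 130*mu3p)*x**7)

# Circular-orbit angular momentum J(x) of Eqs. [eq:Jx]
J = G*m**2*nu/(c*sp.sqrt(x))*(
    1 + (sp.Rational(3, 2) + nu/6)*x
    + (sp.Rational(27, 8) - sp.Rational(19, 8)*nu + nu**2/24)*x**2
    + 12*mu2p*x**5
    + ((sp.Rational(77, 2) - 21*nu)*mu2p
       + sp.Rational(35, 2)*Delta*mu2m + 112*sig2p)*x**6
    + ((sp.Rational(1605, 7) - sp.Rational(3344, 21)*nu + 14*nu**2)*mu2p
       + (sp.Rational(891, 7) - sp.Rational(110, 3)*nu)*Delta*mu2m
       + (sp.Rational(1504, 3) - sp.Rational(1088, 3)*nu)*sig2p
       + sp.Rational(544, 3)*Delta*sig2m + 80*mu3p)*x**7)

# First law dE/domega = omega dJ/domega, i.e. dE/dx = omega dJ/dx,
# with omega = c^3 x^(3/2) / (G m)
omega = c**3*x**sp.Rational(3, 2)/(G*m)
residual = sp.expand(sp.diff(E, x) - omega*sp.diff(J, x))
print(sp.simplify(residual))  # prints 0
```

The residual vanishes identically, without truncation, since each power of $x$ cancels separately.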
Summary and conclusions {#sec:conclusion}
=======================
We have computed the Lagrangian and associated conserved quantities of compact binaries including tidal interactions up to the NNL order, corresponding to the 2PN approximation beyond the leading quadrupolar tidal effect occurring at the 5PN order. The results follow from the effective Fokker action , with non-minimal matter couplings, and are parametrized by polarizability coefficients describing the mass quadrupole, mass octupole and current quadrupole tidal interactions. In particular, we have obtained the NNL conserved invariant energy of the compact binary for quasi-circular orbits.
To conclude, let us compare our expressions for the invariant energy as given by with existing results in the literature. In the following table, we provide, for each order and for each multipolar piece contributing to the conserved energy $E_\text{tidal}(x)$, the references with which we agree:
$E_\text{tidal}$ Mass quadrupole Current quadrupole Mass octupole
------------------ ----------------------------------------- ---------------------------- ------------------------
5PN (L) [@FH08; @VHF11; @BiniDF12; @VF13; @F14] $\times$ $\times$
6PN (NL) [@BiniDF12; @VF13; @AGP18] [@BiniDF12; @AGP18; @BV18] $\times$
7PN (NNL) [@BiniDF12] [@BiniDF12] [@BiniDF12; @Landry18]
Note in particular that we are in full agreement with all results of Ref. [@BiniDF12]. We have checked, notably, that by re-expanding the tidal effects entering the EOB Hamiltonian [@BiniDF12] in the form of a PN Taylor series, we recover exactly our equation .[^9]
Now that the problem of the Lagrangian and EoM is solved (Ref. [@BiniDF12] and this work), we shall compute in a second paper [@article_flux] the gravitational-wave energy flux for quasi-circular orbits, and then, from it, deduce, through the energy balance equation, the crucial orbital phase and frequency evolution (or “chirp”) of compact binaries in circular orbits including tidal effects up to the NNL/2PN order beyond the Einstein quadrupole formula.
We thank Gilles Esposito-Farèse for useful discussions. We are also grateful to Justin Vines for informative discussions during the preparation of this work.
Newtonian treatment of the tidal effects {#appendix:Newtonian}
========================================
In this Appendix, we derive the Newtonian EoM and the Lagrangian of a system of $N$ extended compact bodies without spins, including multipolar tidal interaction effects. The mass and the CoM position of each of the objects are defined by $$\label{eq:mAyACM}
m_A = \int_{\mathcal{V}_A} {\mathrm{d}}^3\mathbf{x} \,\rho(\mathbf{x},t)\,,\qquad y_A^i(t) =
\frac{1}{m_A} \int_{\mathcal{V}_A} {\mathrm{d}}^3\mathbf{x} \,\rho(\mathbf{x},t) \,x^i\,,$$ where the integrals extend over the volume $\mathcal{V}_A$ of body $A$, and where $\rho(\mathbf{x},t)$ denotes the Eulerian density of the $N$-body system satisfying the usual continuity equation $\partial_t\rho+\partial_i(\rho v^i)=0$ (hence the mass $m_A$ is constant). The equation of motion satisfied by the CoM worldline of body $A$ is then given by $$\label{eq:accA}
m_A \frac{{\mathrm{d}}^2 y_A^i}{{\mathrm{d}}t^2} = \sum_{B\not= A} \int_{\mathcal{V}_A} {\mathrm{d}}^3\mathbf{x}
\,\rho\,\partial_i U_B\,,$$ where we have discarded the force due to the self-field of body $A$, which vanishes by Newton’s action-reaction law (so the sum runs over all the bodies $B\not=A$), and where the Newtonian potential generated by body $B$ reads $$\label{eq:UB}
U_B(\mathbf{x},t) = G \int_{\mathcal{V}_B}
\frac{{\mathrm{d}}^3\mathbf{x}'}{\vert\mathbf{x}-\mathbf{x}'\vert}
\,\rho(\mathbf{x}',t)\,.$$ For any point outside body $B$, and thus in particular for points inside body $A$ (distinct from $B$), we have $\Delta U_B=0$. Next, we define the Newtonian STF multipole moment of body $A$ to be $$\label{eq:IAL}
I_A^{L}(t) = \int_{\mathcal{V}_A} {\mathrm{d}}^3\bm{z}_A
\,\rho_A(\bm{z}_A,t)\,\hat{z}_A^L\,,$$ where we adopted as integration variable the distance $\bm{z}_A=\mathbf{x}-\bm{y}_A(t)$ linking the line of the CoM $\bm{y}_A(t)$ to the generic point $\mathbf{x}\in\mathcal{V}_A$, where $\hat{z}_A^L=\text{STF}(z_A^L)$ denotes the STF product of $\ell$ spatial vectors $z_A^L=z_A^{i_1}\cdots z_A^{i_\ell}$ (with $L=i_1\cdots i_\ell$ a multi-spatial index), and where we have posed $\rho_A(\bm{z}_A,t)=\rho(\bm{y}_a+\bm{z}_a,t)$. With this notation the mass monopole moment is just the constant mass, while the CoM position $y_A^i$ is defined by the nullity of the mass dipole moment: $$\label{eq:monodipole}
I_A = m_A\,,\qquad I_A^i=0\,.$$ On the other hand, the Newtonian tidal moments, starting with the quadrupole moment ($\ell\geqslant 2$), are defined quite naturally as the multi-gradients of the total external potential due to the other bodies felt by the body $A$ at the location of its CoM $\bm{y}_A$: $$\label{eq:GAL}
G_A^{L}(t) = \sum_{B\not= A} \bigl(\partial_L
U_B\bigr)(\bm{y}_A)\qquad\text{(for $\ell\geqslant 2$)}\,,$$ with $\partial_L=\partial_{i_1}\cdots\partial_{i_\ell}$. Since $\Delta
U_B=0$ inside body $A$, the tidal moments are automatically STF in all their indices $L$, namely $\partial_L U_B=\hat{\partial}_L U_B$. For the dipolar tidal moment (with $\ell=1$) it is convenient to pose $$\label{eq:GAi}
G_A^{i} = \sum_{B\not= A} \bigl(\partial_i U_B\bigr)(\bm{y}_A) - \frac{{\mathrm{d}}^2
y_A^i}{{\mathrm{d}}t^2}\,,$$ so that $G_A^{i}=0$ for a system of point particles described only by their masses, their higher multipole moments being neglected. The EoM may then be rewritten in elegant form as (see *e.g.* [@DSX1]) $$\label{eq:EOM}
m_A\,G_A^{i} + \sum_{\ell=2}^{+\infty} \frac{1}{\ell!} \,I_A^L\,G_A^{iL} = 0\,.$$ Using the fact that for any $\mathbf{x}$ outside the body $B$ we have the multipole decomposition $$\label{eq:UBexp}
U_B = G \sum_{k=0}^{+\infty} \frac{(-)^k}{k!}
\,I_B^K\,\partial_{K}\Bigl(\frac{1}{r_B}\Bigr)\,,$$ with $r_{B} = \vert\mathbf{x}-\bm{y}_B\vert$, we see that the tidal moments themselves can be expanded in terms of the multipole moments of the other bodies as (for $\ell\geqslant 2$) $$\label{eq:GALexp}
G_A^{L} = G \sum_{B\not= A} \,\sum_{k=0}^{+\infty} \frac{(-)^k}{k!}
\,I_B^K\,\partial_{LK}^A\Bigl(\frac{1}{r_{AB}}\Bigr)\,,$$ where $r_{AB} = \vert\bm{y}_A-\bm{y}_B\vert$ is the distance between the CoMs of the bodies $A$ and $B$, the gradient is taken with respect to the point $A$, *i.e.* $\partial_i^A=\partial/\partial y_A^i$, and we denote $\partial_{LK}^A=\partial_{L}^A\partial_{K}^A$ with $\partial_{L}^A=\partial_{i_1}^A\cdots\partial_{i_\ell}^A$. Finally, the EoM admit the double multipole expansion series $$\label{eq:EOMexp}
m_A \frac{{\mathrm{d}}^2 y_A^i}{{\mathrm{d}}t^2} = G \sum_{B\not= A}
\,\sum_{\ell=0}^{+\infty}\,\sum_{k=0}^{+\infty} \frac{(-)^k}{\ell! \,k!}
\,I_A^L\,I_B^K\,\partial_{iLK}^A\Bigl(\frac{1}{r_{AB}}\Bigr)\,,$$ or in more details (see *e.g.* Eq. (1.201) of [@PoissonWill]), $$\begin{aligned}
\label{eq:EOMdetail}
m_A \frac{{\mathrm{d}}^2 y_A^i}{{\mathrm{d}}t^2} = G \sum_{B \not= A} \biggl\{ m_A
m_B \,\partial_i^A\Bigl(\frac{1}{r_{AB}}\Bigr) & +
\sum_{\ell=2}^{+\infty}\frac{(-)^\ell}{\ell!}\Bigl[ m_A \,I_B^{L} +
(-)^\ell m_B \,I_A^{L}
\Bigr]\partial_{iL}^A\Bigl(\frac{1}{r_{AB}}\Bigr) \nonumber \\ & +
\sum_{\ell=2}^{+\infty}\,\sum_{k=2}^{+\infty} \frac{(-)^k}{\ell!
\,k!}
\,I_A^L\,I_B^K\,\partial_{iLK}^A\Bigl(\frac{1}{r_{AB}}\Bigr)\biggr\}
\,.\end{aligned}$$ Those equations have been generalized to the 1PN order [@Xu97; @Wu98; @Racine05; @VF13] using the DSX formalism [@DSX1; @DSX2].
We now consider the case where the multipole moments are exclusively induced by the tidal field of the other bodies. To describe this situation, we assume that each extended body is in hydrodynamical equilibrium at every instant, so that the mass distribution is at all times aligned with the equipotentials of the external gravitational field. We are thus in the so-called adiabatic regime, where the relaxation time scale of the body's internal dynamics is much shorter than the orbital time scale. In particular, we neglect the dissipative effects due to the tides, considering only the conservative dynamics of the system, and look for a Lagrangian. In this case, we introduce a linear-response coefficient $\mu^{(\ell)}$ depending on the internal structure of the body and characterizing its deformability or “polarizability” under the influence of the external field, such that its multipole moments obey $$\label{eq:muAdef}
I_A^L = \mu_A^{(\ell)} \,G_A^L\,.$$ Following usual definitions (see *e.g.* [@Hind08; @BinnP09; @DN09tidal]), this coefficient is related to the radius $R$ of the body and the (mass-type) multipolar Love numbers $k^{(\ell)}$ by $$\label{eq:relLove}
G\,\mu_A^{(\ell)} = \frac{2}{(2\ell-1)!!} \,k_A^{(\ell)}\,R_A^{2\ell+1}\,.$$ The Newtonian EoM now become $$\begin{aligned}
\label{eq:EOMpolar}
m_A \frac{{\mathrm{d}}^2 y_A^i}{{\mathrm{d}}t^2} = G \sum_{B \not= A} \biggl\{ m_A
m_B \,\partial_i^A\Bigl(\frac{1}{r_{AB}}\Bigr) &+
\sum_{\ell=2}^{+\infty}\frac{(-)^\ell}{\ell!} \Bigl[ m_A
\,\mu_B^{(\ell)}\,G_B^{L} + (-)^\ell m_B \,
\mu_A^{(\ell)}\,G_A^{L}
\Bigr]\partial_{iL}^A\Bigl(\frac{1}{r_{AB}}\Bigr)
\nonumber\\ \qquad\qquad &+
\sum_{\ell=2}^{+\infty}\,\sum_{k=2}^{+\infty} \frac{(-)^k}{\ell!
\,k!} \,\mu_A^{(\ell)}\mu_B^{(k)}\,G_A^L\,G_B^K\,
\partial_{iLK}^A\Bigl(\frac{1}{r_{AB}}\Bigr)\biggr\} \,,\end{aligned}$$ in which the tidal moments obey the implicit relation $$\label{eq:GLAexpl}
G_A^{L} = G \sum_{B\not= A} \biggl[
m_B\,\partial_{L}^A\Bigl(\frac{1}{r_{AB}}\Bigr) +
\sum_{k=2}^{+\infty} \frac{(-)^k}{k!}
\,\mu_B^{(k)}\,G_B^K\,\partial_{LK}^A\Bigl(\frac{1}{r_{AB}}\Bigr)\biggr]\,.$$ The latter equations describe the conservative dynamics of the system of $N$ extended bodies. The dependence on the internal structure is entirely encoded in the coefficients $\mu^{(\ell)}$, which are assumed to be constant. The dynamics is conservative in the sense that it can be derived from the following exact Lagrangian, valid up to any order in the multipole expansion and the tidal moments: $$\begin{aligned}
\label{eq:LN}
L &= \sum_A \biggl\{ \frac{1}{2}m_A v_A^2 +\frac{1}{2}m_A\sum_{B\neq
A} U_B(\bm{y}_A) \biggr\} \nonumber \\ &= \sum_A \biggl\{
\frac{1}{2}m_A v_A^2 + \frac{1}{2}\sum_{\ell=2}^{+\infty}
\frac{1}{\ell!}\, \mu_A^{(\ell)}\,G_A^{L}\,G_A^{L} + G \sum_{B > A}
\biggl[\frac{ m_A m_B}{r_{AB}} -
\sum_{\ell=2}^{+\infty}\,\sum_{k=2}^{+\infty} \frac{(-)^k}{\ell!
\,k!} \,\mu_A^{(\ell)}\,\mu_B^{(k)}\,G_A^L\,G_B^K\,
\partial_{LK}^A\Bigl(\frac{1}{r_{AB}}\Bigr)\biggr] \biggr\}\,.\end{aligned}$$ The Newtonian action is formally the Newtonian limit, at the quadratic level, of the non-minimal matter action in general relativity. However, the action is effective (or “skeletonized”), with each compact object described by an effective point particle endowed with internal structure. The mass-type moments $G^{\hat{L}}$ (even parity sector) entering \[eq:Sm\] tend towards the Newtonian tidal moments $G^{L}$, so that they can be regarded as their legitimate relativistic versions, and the corresponding response coefficients $\mu^{(\ell)}$ identify with the Newtonian tidal deformabilities. Moreover, the relativistic action also depends on current-type moments $H^{\hat{L}}$ (odd parity sector) with associated response coefficients $\sigma^{(\ell)}$, first arising at the 1PN relativistic order.
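As a sanity check of the Newtonian Lagrangian \[eq:LN\], one may verify that its internal quadrupole term, with the tidal moment sourced by the companion's monopole, reproduces the leading term $3G^2m\,\mu_+^{(2)}/r^6$ of the CoM Lagrangian \[eq:LCoMtidal\] (per reduced mass $\mu=m_1m_2/m$). A minimal sympy sketch of ours, with ad hoc variable names:

```python
import sympy as sp

G, m1, m2, mu1, mu2, r = sp.symbols('G m1 m2 mu1 mu2 r', positive=True)
x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
xs = [x1, x2, x3]
rr = sp.sqrt(x1**2 + x2**2 + x3**2)

# Tidal moment of body 1 induced by the monopole of body 2:
# G_1^{ij} = G m2 d_i d_j (1/r_12), evaluated at separation (r, 0, 0)
Gij = [[sp.diff(1/rr, xs[i], xs[j]).subs({x1: r, x2: 0, x3: 0})
        for j in range(3)] for i in range(3)]

# Internal term of eq:LN for l = 2: (1/2)(1/2!) mu_1 G_1^{ij} G_1^{ij}
quad = sum((G*m2*Gij[i][j])**2 for i in range(3) for j in range(3))
L1 = sp.Rational(1, 4) * mu1 * sp.simplify(quad)
L_tidal = L1 + L1.subs({mu1: mu2, m2: m1}, simultaneous=True)  # + (1 <-> 2)

# Leading term of eq:LCoMtidal: L/mu = 3 G^2 m mu_+^(2) / r^6,
# with mu = m1 m2/m and mu_+^(2) = (m2 mu1/m1 + m1 mu2/m2)/2
m = m1 + m2
mu_plus = (m2/m1*mu1 + m1/m2*mu2)/2
target = (m1*m2/m) * 3*G**2*m/r**6 * mu_plus

print(sp.simplify(L_tidal - target))  # prints 0
```

Both expressions equal $\tfrac{3}{2}G^2(m_2^2\mu_1^{(2)}+m_1^2\mu_2^{(2)})/r^6$, as expected; the cross term of \[eq:LN\], being quadratic in the polarizabilities, does not contribute at this order.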
Both sets of relativistic tidal moments are given by appropriate covariant derivatives of the Riemann tensor, which is nothing but the relativistic tidal field felt by the body. Those moments are evaluated at the location of the particle and a UV-type regularization is required to remove the self field of that particle. Thus, in the effective action, the self-field regularization automatically selects the external tidal field experienced by the body due to the other bodies composing the system.
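To fix orders of magnitude, the Newtonian relation \[eq:relLove\] implies, for a single body of mass $m_A$ and radius $R_A$, that the dimensionless combination $(c^2/Gm_A)^5\,G\mu_A^{(2)}=\tfrac{2}{3}k_A^{(2)}(R_Ac^2/Gm_A)^5$ is the familiar tidal deformability parameter. The snippet below evaluates it for illustrative neutron-star numbers (the input values are ours, not taken from this paper):

```python
# Illustrative numbers for a typical neutron star (not values from this paper)
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
Msun = 1.989e30     # kg

m = 1.4 * Msun      # mass
R = 12e3            # radius in m
k2 = 0.1            # quadrupolar (l = 2) Love number

# Eq. [eq:relLove] for l = 2: G mu^(2) = (2/3) k2 R^5
Gmu2 = (2.0/3.0) * k2 * R**5

# Dimensionless deformability Lambda = (c^2/(G m))^5 G mu^(2) = (2/3) k2 / C^5
C = G*m/(R*c**2)              # compactness
Lam = (c**2/(G*m))**5 * Gmu2

print(f"compactness C = {C:.3f}")   # ~ 0.17
print(f"Lambda = {Lam:.0f}")        # a few hundred
```

The steep $C^{-5}$ dependence explains why the tidal terms, though formally of 5PN order, remain measurable for neutron stars while being strongly suppressed for black holes.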
Proof that the trace terms to NNL order can be removed by a redefinition of the metric {#appendix:proof}
======================================================================================
In this section, we show that the tidal moments entering the action may be defined in terms of the Riemann tensor instead of the Weyl tensor, since the traces of the Riemann tensor do not play any role in the dynamics. Here, we shall denote by $G_{\mu\nu}^{(R)}$, $G_{\lambda\mu\nu}^{(R)}$ and $H_{\mu\nu}^{(R)}$ the tidal mass-quadrupole, mass-octupole and current-quadrupole moments introduced in Eqs. , while $G_{\mu\nu}^{(C)}$, $G_{\lambda\mu\nu}^{(C)}$ and $H_{\mu\nu}^{(C)}$ will represent the same moments but built with the Weyl tensor instead of the Riemann tensor. We thus define (setting $G=c=1$, and omitting the particles' labels and any mention of the regularization)
\[eq:tidalRC\] $$\begin{aligned}
&G^{(R)}_{\mu\nu} = - R_{\mu\rho\nu\sigma} u^{\rho}u^{\sigma}\,,
&G^{(C)}_{\mu\nu} = - C_{\mu\rho\nu\sigma} u^{\rho}u^{\sigma}\,,
\\ &H^{(R)}_{\mu\nu} = 2 R^{*}_{(\mu\underline{\rho}\nu)\sigma}
u^{\rho}u^{\sigma}\,, &H^{(C)}_{\mu\nu} =2
C^{*}_{(\mu\underline{\rho}\nu)\sigma}
u^{\rho}u^{\sigma}\,,\\ &G^{(R)}_{\lambda\mu\nu} = -
\nabla^\perp_{(\lambda}
\,R_{\mu\underline{\rho}\nu)\sigma} u^{\rho}
u^{\sigma}\,, &G^{(C)}_{\lambda\mu\nu} = - \nabla^\perp_{(\lambda}
\,C_{\mu\underline{\rho}\nu)\sigma} u^{\rho}
u^{\sigma}\,,\end{aligned}$$
where $C_{\mu\nu\rho\sigma}$ stands for the Weyl tensor $$\label{eq:Weyl}
C_{\mu\nu\rho\sigma} = R_{\mu\nu\rho\sigma} - \Bigl(
g_{\mu [ \rho} R_{\sigma ] \nu} - g_{\nu [ \rho} R_{\sigma ] \mu}
\Bigr) + \frac{1}{3}g_{\mu [ \rho}g_{\sigma ] \nu} R\,,$$ and where we have used expressions for the original Weyl tidal moments in which the STF operators have been removed or replaced by mere symmetrizations, thanks to the properties of the Weyl tensor and the covariant derivative. To start with, we notice that, as one can check, the Riemann and Weyl definitions of the current-type quadrupole coincide, *i.e.*, $H^{(C)}_{\mu\nu}=H^{(R)}_{\mu\nu}$. As a result, the following discussion will, in fact, only concern the mass-type moments. From Eqs. , we then get the following relations:
\[eq:diffWeylRiemm\] $$\begin{aligned}
\left(G_{\mu \nu}G^{\mu \nu}\right)^{(C)} &= \left(G_{\mu \nu}G^{\mu
\nu}\right)^{(R)} - G^{\mu\nu}_{(R)} R_{\mu \nu} + (\text{double
zero terms}) \,, \label{eq:G2} \\
\left(G_{\mu \nu \rho}G^{\mu \nu \rho}\right)^{(C)} &= \left(G_{\mu
\nu \rho}G^{\mu \nu \rho}\right)^{(R)}
-G^{\lambda\mu\nu}_{(R)}\nabla_\lambda R_{\mu\nu}- \frac{2}{3} u^\mu
u^\nu \nabla^\kappa R_{\kappa \mu\lambda\nu} \Big[\nabla^\lambda
R_{\rho\sigma} u^\rho u^\sigma + \frac{1}{3} \nabla^\lambda R \Big]+
(\text{double zero terms}) \, , \label{eq:G3}\end{aligned}$$
where the “double zero terms” are terms that are quadratic in the Ricci tensor or scalar. Let us now prove that the actions $S^{(R)}$ and $S^{(C)}$, corresponding to \[eq:Sm3\] with, respectively, the Riemann and Weyl definitions, lead to the same EoM.
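Before doing so, the algebraic content of the Weyl formula above can be spot-checked numerically. The following sketch (ours, purely illustrative) builds a random tensor carrying only the algebraic symmetries of the Riemann tensor (antisymmetry in each index pair and symmetry under pair exchange) and verifies that the combination defining $C_{\mu\nu\rho\sigma}$ is trace-free on a flat background metric:

```python
import numpy as np

rng = np.random.default_rng(0)
g = np.diag([-1.0, 1.0, 1.0, 1.0])        # flat background metric, for the check only
ginv = np.linalg.inv(g)

# Random tensor with the algebraic Riemann symmetries:
# antisymmetry in each index pair and symmetry under pair exchange.
A = rng.standard_normal((4, 4, 4, 4))
R = A - A.transpose(1, 0, 2, 3)           # antisymmetrize [mu nu]
R = R - R.transpose(0, 1, 3, 2)           # antisymmetrize [rho sigma]
R = R + R.transpose(2, 3, 0, 1)           # symmetrize under pair exchange

Ric = np.einsum('mr,mnrs->ns', ginv, R)   # "Ricci": R_{ns} = g^{mr} R_{mnrs}
Rs = np.einsum('ns,ns->', ginv, Ric)      # "Ricci scalar"

def asym_last(T):
    """Antisymmetrize over the last two slots, with weight 1/2."""
    return 0.5 * (T - T.transpose(0, 1, 3, 2))

# C_{mnrs} = R_{mnrs} - (g_{m[r} R_{s]n} - g_{n[r} R_{s]m}) + (1/3) g_{m[r} g_{s]n} Rs
term1 = asym_last(np.einsum('mr,sn->mnrs', g, Ric))
term2 = asym_last(np.einsum('nr,sm->mnrs', g, Ric))
term3 = asym_last(np.einsum('mr,sn->mnrs', g, g)) * Rs / 3.0
C = R - (term1 - term2) + term3

trace = np.einsum('mr,mnrs->ns', ginv, C)
print(np.max(np.abs(trace)))              # at machine precision: the combination is trace-free
```

Tracelessness holds for any tensor with these symmetries, which is why the Ricci pieces can be traded freely between the two definitions of the tidal moments.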
The double zero terms are treated as follows. Varying their contributions to the action, which have necessarily the general form $\propto \int {\mathrm{d}}^4 x \sqrt{-g} A^{\mu\nu\rho\sigma...} \nabla_{...}
R_{\mu\nu}\nabla_{...} R_{\rho\sigma}$, leads, after possible integrations by parts, to a sum of terms $\propto \int {\mathrm{d}}^4 x \sqrt{-g} [\nabla_{...}(A^{\mu\nu\rho\sigma...}
\nabla_{...}R_{\rho\sigma})+ \nabla_{...}(A^{\rho\sigma\mu\nu...}
\nabla_{...}R_{\rho\sigma})] \delta R_{\mu\nu}$, plus surface integrals at infinity which vanish, since their integrands contain factors $\nabla_{...} R_{\rho\sigma}$ that are identically zero in vacuum. The remaining terms are then proportional to (the covariant derivatives of) the Ricci tensor multiplied by $A^{\mu\nu\rho\sigma...}$. On the other hand, $A^{\mu\nu\rho\sigma...}$ is itself a sum of the form $\sum_{A} \delta^{(4)}(x-y_A) F_A^{\mu\nu\rho\sigma...}$ and the presence of the Dirac distributions forces the evaluation of the Ricci tensor to take place at one particle’s location, *e.g.*, at $\bm{x}=\bm{y}_A$, in the sense of dimensional regularization. Moreover, by virtue of Einstein’s equations (reinstating the particles’ labels), $[R_{\mu\nu}]_A = 8\pi [(T_\text{pp})_{\mu\nu}-
(T_{\text{pp}})^{\lambda}_{~\lambda} g_{\mu\nu}/2]_A +
\mathcal{O}(\epsilon_\text{tidal})$, where $[(T_\text{pp})^{\mu\nu}]_A$ denotes the point-particle stress-energy tensor of our particle system at point $A$ $$\label{eq:Tpp}
\left[T^{\mu\nu}_\text{pp}\right]_A = \sum_B m_B \int {\mathrm{d}}\tau_B u_B^\mu u_B^\nu
\frac{\delta^{(4)}[y_A(\tau_A)-y_B(\tau_B)]}{\sqrt{-g}}\, .$$ If $A\neq B$, then $\delta^{(4)}[y_{A}(\tau_{A})-y_{B}(\tau_{B})] = 0$ because the compact objects never collide in the PN regime. If $A=B$, the Dirac distribution reduces to $\delta^{(4)}(0)$, which vanishes identically in dimensional regularization, being the $d\to 3$ limit of $\int {\mathrm{d}}^{d+1}k\,
{\mathrm{e}}^{2\pi {\mathrm{i}}\,k \cdot 0}=0$. Hence $[T^{\mu\nu}_\text{pp}]_A$ vanishes as well, and so does the contribution of the double-zero terms to the Euler-Lagrange equations for the point-like bodies.
However, terms that are linear in both the Riemann and the Ricci tensors (or the Ricci scalar) in Eqs. cannot be dealt with in the same way as the double zeros. Instead, they may be treated by making an appropriate infinitesimal change of variables on the original metric, say $g^\text{original}_{\mu \nu} = g_{\mu \nu} + h_{\mu \nu}$, in the action $S^{(C)}[g^\text{original}_{\mu \nu},\bm{y}_A]$. This naturally defines the new action $\tilde{S}[g_{\mu\nu},\bm{y}_A] = S^{(C)}[g^\text{original}_{\mu
\nu}[g_{\rho\sigma},\bm{y}_B],\bm{y}_A]$, dynamically equivalent to $S^{(C)}$ when regarded as a functional of the metric $g_{\mu\nu}$. At first order in $h_{\mu\nu}$, it reads $$\label{eq:deltaS}
\tilde{S}[g_{\mu \nu},\bm{y}_A] = S^{(C)}[g_{\mu \nu},\bm{y}_A] -
\frac{1}{16 \pi} \int {\mathrm{d}}^{4}x \sqrt{-g}\left(R^{\mu \nu}-\frac{1}{2}R
g^{\mu \nu} -8\pi \,T^{\mu \nu} \right)h_{\mu \nu} + \mathcal{O}\left(
h^{2} \right)\, .$$ Now, we want $\tilde{S}[g_{\mu \nu},\bm{y}_A]$ to coincide with $S^{(R)}[g_{\mu \nu},\bm{y}_A]$. By choosing $h_{\mu\nu}$ conveniently, the term $(R^{\mu \nu}-\tfrac{1}{2}R g^{\mu\nu})h_{\mu\nu}$ will cancel the terms linear in the Ricci tensor or scalar entering Eqs. . As for the term $\int
{\mathrm{d}}^4 x\sqrt{-g}\, T^{\mu\nu}h_{\mu\nu}$, it vanishes by itself and can thus be ignored. Indeed, integrating the Dirac deltas contained in the expression chosen for $h_{\mu\nu}$ (see below) yields a sum over $A=1,2$ of terms $\propto [T^{\mu\nu}]_A=[T^{\mu\nu}_{\text{pp}}]_A +
\mathcal{O}(\epsilon_\text{tidal})$, which boils down to $\mathcal{O}(\epsilon_\text{tidal})$ since $[T^{\mu\nu}_{\text{pp}}]_A
=0$, as explained above around \[eq:Tpp\].
Let us examine more precisely how to construct an $h_{\mu\nu}$ suitable for absorbing the Ricci-type terms in \[eq:deltaS\] that come from the difference $\Delta G_{\mu\nu}=G^{(C)}_{\mu\nu}-G^{(R)}_{\mu\nu}$. The contribution induced by this difference through the modification of the mass quadrupole invariant $\Delta (G_{\mu\nu}G^{\mu\nu}) = 2 G^{(R)}_{\mu\nu} \Delta G^{\mu\nu}+
(\text{double zero terms})$ has the form $$\label{eq:formDeltaS}
\int {\mathrm{d}}^4 x \,Z_{\mu\nu} R^{\mu\nu}=\int {\mathrm{d}}^4 x
\Big(Z_{\mu\nu}-\frac{1}{2}Z\indices{^\lambda_\lambda} g_{\mu\nu}\Big)
\Big(R^{\mu\nu}-\frac{1}{2}R g^{\mu\nu}\Big)\, .$$ It is to be canceled by the piece of the integral in \[eq:deltaS\] that is proportional to
$(R^{\mu\nu}-\tfrac{1}{2}R g^{\mu\nu})$. An obvious choice guaranteeing such a cancellation is $h^{(G_{\rho\sigma})}_{\mu\nu}=16\pi(Z_{\mu\nu}-\tfrac{1}{2}Z\indices{^\lambda_\lambda}
g_{\mu\nu})/\sqrt{-g}$. Possible extra terms linear (at least) in the Ricci tensor or scalar merely add irrelevant double zeros to the action. Those can be tuned to have $$h^{(G_{\rho\sigma})}_{\mu \nu} = - 4 \pi \sum_A \mu_A^{(2)} \int
{\mathrm{d}}\tau_A \,\left[G^{(R)}_{\mu \nu}\right]_{A}
\,\frac{\delta^{(4)}[x^{\mu}-y_A^{\mu}(\tau_A)]}{\sqrt{-g}}\,.$$ Regarding the mass octupole, we use the same method as for the mass quadrupole to construct a suitable $h_{\mu\nu}^{(G_{\rho
\sigma \tau})}$, the only new feature being that $\Delta
(G_{\lambda\mu\nu}G^{\lambda\mu\nu})$ is now a space-time integral with a source of the form $Z^{\lambda}_{~\mu\nu}\nabla_{\lambda}
R^{\mu\nu}$. However, the structure is straightforwardly recovered by integrating by parts. We finally find that, in the mass-octupolar sector, the equality $\tilde{S}^{(G_{\rho
\sigma \tau})}=(S^{(R)})^{(G_{\rho \sigma \tau})}$ is achieved by setting: $$\begin{aligned}
h^{(G_{\rho \sigma \tau})}_{\mu \nu} &= 4 \pi
\sum_{A}\frac{\mu^{(3)}_{A}}{3} \int {\mathrm{d}}\tau_{A}\nabla^{\lambda}
\left[ \left(G^{(R)}_{\lambda\mu\nu} +\frac{2}{3} \nabla^\kappa
R_{\kappa\rho\lambda\sigma} u^\rho u^\sigma \Big(u_\mu
u_\nu + \frac{2}{3} g_{\mu\nu} \Big) \right)_{\!\!A}
\!\!\frac{\delta^{(4)}[x-y_{A}(\tau_{A})]}{\sqrt{-g}} \right]\,.\end{aligned}$$
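The manipulation above rests on the purely algebraic, four-dimensional identity of \[eq:formDeltaS\], namely that pointwise $Z_{\mu\nu}R^{\mu\nu}=(Z_{\mu\nu}-\tfrac{1}{2}Z\indices{^\lambda_\lambda}g_{\mu\nu})(R^{\mu\nu}-\tfrac{1}{2}Rg^{\mu\nu})$. A quick numerical spot-check of this identity for random symmetric tensors (our own sketch, not part of the derivation):

```python
import numpy as np

rng = np.random.default_rng(1)
g = np.diag([-1.0, 1.0, 1.0, 1.0])        # flat metric, enough for an algebraic identity
ginv = np.linalg.inv(g)

def sym(M):
    """Symmetrize a random matrix."""
    return 0.5 * (M + M.T)

Z = sym(rng.standard_normal((4, 4)))      # stands for Z_{mu nu}
Ric = sym(rng.standard_normal((4, 4)))    # stands for R_{mu nu}

Rup = ginv @ Ric @ ginv                   # R^{mu nu} = g^{mu a} R_{ab} g^{b nu}
Ztr = np.einsum('mn,mn->', ginv, Z)       # Z^lambda_lambda
Rsc = np.einsum('mn,mn->', ginv, Ric)     # Ricci scalar

lhs = np.einsum('mn,mn->', Z, Rup)
rhs = np.einsum('mn,mn->', Z - 0.5 * Ztr * g, Rup - 0.5 * Rsc * ginv)
print(abs(lhs - rhs))                     # zero up to roundoff: the trace terms cancel
```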
The tidal acceleration to NNL order {#appendix:accNNL}
===================================
By varying the total generalized Fokker Lagrangian, and iteratively replacing the accelerations by the values provided by the EoM consistently truncated at lower orders, we obtain the total acceleration of body 1 as $a_1^i = (a_1^i)_\text{pp} +
(a_1^i)_\text{tidal}$, where the point-particle part can be found in *e.g.* [@BlanchetLR], and where $$\begin{aligned}
\label{eq:a1tidal}
m_1 (a_1^i)_\text{tidal} &= \frac{G^2}{r_{12}^7} \Bigg\{ n_{12}^{i} \Bigl(-9
m_{2}^{2} \mu_1^{(2)}
- 9 m_{1}^{2} \mu_2^{(2)}\Bigr)
+ \frac{1}{c^{2}} \biggl\{n_{12}^{i} \biggl [m_{2}^{2} \mu_1^{(2)}
\Bigl(-36 (n_{12}{} v_{1}{})^2
+ 72 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})\nonumber\\
& - 18 v_{12}{}^{2}
+ 9 v_{1}{}^{2}\Bigr)
+ m_{1}^{2} \mu_2^{(2)} \Bigl(144 (n_{12}{} v_{1}{})^2
- 288 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})
+ 180 (n_{12}{} v_{2}{})^2
- \frac{81}{2} v_{12}{}^{2}
+ 9 v_{1}{}^{2}\Bigr)\nonumber\\
& + m_{2}^{2} \sigma_1^{(2)} \Bigl(-96 (n_{12}{} v_{12}{})^2
- 48 v_{12}{}^{2}\Bigr)
+ m_{1}^{2} \sigma_2^{(2)} \Bigl(-96 (n_{12}{} v_{12}{})^2
- 48 v_{12}{}^{2}\Bigr)
+ \frac{G m_{1}}{r_{12}} \Bigl(\frac{159}{2} m_{2}^{2} \mu_1^{(2)}\nonumber\\
& + 132 m_{1}^{2} \mu_2^{(2)}\Bigr)
+ \frac{G m_{2}}{r_{12}} \Bigl(99 m_{2}^{2} \mu_1^{(2)}
+ 84 m_{1}^{2} \mu_2^{(2)}\Bigr)\biggl]
+ v_{1}^{i} \biggl [m_{2}^{2} \mu_1^{(2)} \Bigl(54 (n_{12}{} v_{1}{})
- 45 (n_{12}{} v_{2}{})\Bigr)\nonumber\\
& + 9 m_{1}^{2} \mu_2^{(2)} (n_{12}{} v_{1}{})
+ 144 m_{2}^{2} \sigma_1^{(2)} (n_{12}{} v_{12}{})
+ 144 m_{1}^{2} \sigma_2^{(2)} (n_{12}{} v_{12}{})\biggl]
+ v_{2}^{i} \biggl [m_{2}^{2} \mu_1^{(2)} \Bigl(-54 (n_{12}{} v_{1}{})\nonumber\\
& + 45 (n_{12}{} v_{2}{})\Bigr)
- 9 m_{1}^{2} \mu_2^{(2)} (n_{12}{} v_{1}{})
- 144 m_{2}^{2} \sigma_1^{(2)} (n_{12}{} v_{12}{})
- 144 m_{1}^{2} \sigma_2^{(2)} (n_{12}{} v_{12}{})\biggl]\biggl\}\nonumber\\
& + \frac{1}{c^{4}} \Biggl[n_{12}^{i} \biggl\{m_{2}^{2}
\mu_1^{(2)} \Bigl(135 (n_{12}{} v_{1}{})^4
- 540 (n_{12}{} v_{1}{})^3 (n_{12}{} v_{2}{})
+ 990 (n_{12}{} v_{1}{})^2 (n_{12}{} v_{2}{})^2\nonumber\\
& - 900 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})^3
+ 225 (n_{12}{} v_{2}{})^4
+ 72 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{}) (v_{1}{} v_{2}{})
- 18 (v_{1}{} v_{2}{})^2
- 126 (n_{12}{} v_{1}{})^2 v_{12}{}^{2}\nonumber\\
& + 324 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{}) v_{12}{}^{2}
- 90 (n_{12}{} v_{2}{})^2 v_{12}{}^{2}
- 36 (v_{1}{} v_{2}{}) v_{12}{}^{2}
- 27 v_{12}{}^{4}
- 72 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{}) v_{1}{}^{2}\nonumber\\
& + 36 (v_{1}{} v_{2}{}) v_{1}{}^{2}
+ 36 v_{12}{}^{2} v_{1}{}^{2}
- 18 v_{1}{}^{4}\Bigr)
+ m_{1}^{2} \mu_2^{(2)} \Bigl(-3855 (n_{12}{} v_{1}{})^4
+ 15420 (n_{12}{} v_{1}{})^3 (n_{12}{} v_{2}{})\nonumber\\
& - 23850 (n_{12}{} v_{1}{})^2 (n_{12}{} v_{2}{})^2
+ 16860 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})^3
- 4665 (n_{12}{} v_{2}{})^4
- 288 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{}) (v_{1}{} v_{2}{})\nonumber\\
& + 360 (n_{12}{} v_{2}{})^2 (v_{1}{} v_{2}{})
- \frac{81}{2} (v_{1}{} v_{2}{})^2
+ 2598 (n_{12}{} v_{1}{})^2 v_{12}{}^{2}
- 5484 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{}) v_{12}{}^{2}\nonumber\\
& + 3084 (n_{12}{} v_{2}{})^2 v_{12}{}^{2}
- 81 (v_{1}{} v_{2}{}) v_{12}{}^{2}
- \frac{1923}{8} v_{12}{}^{4}
+ 288 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{}) v_{1}{}^{2}
- 360 (n_{12}{} v_{2}{})^2 v_{1}{}^{2}\nonumber\\
& + 81 (v_{1}{} v_{2}{}) v_{1}{}^{2}
+ 81 v_{12}{}^{2} v_{1}{}^{2}
- \frac{81}{2} v_{1}{}^{4}\Bigr)
+ m_{2}^{2} \sigma_1^{(2)} \Bigl(840 (n_{12}{} v_{12}{})^4
- 960 (n_{12}{} v_{12}{})^3 (n_{12}{} v_{1}{})\nonumber\\
& + 480 (n_{12}{} v_{12}{})^2 (n_{12}{} v_{1}{})^2
- 192 (n_{12}{} v_{12}{})^2 (v_{1}{} v_{2}{})
+ 192 (n_{12}{} v_{12}{}) (n_{12}{} v_{1}{}) (v_{1}{} v_{2}{})
- 48 (v_{1}{} v_{2}{})^2\nonumber\\
& - 336 (n_{12}{} v_{12}{})^2 v_{12}{}^{2}
- 192 (n_{12}{} v_{12}{}) (n_{12}{} v_{1}{}) v_{12}{}^{2}
+ 192 (n_{12}{} v_{1}{})^2 v_{12}{}^{2}
- 96 (v_{1}{} v_{2}{}) v_{12}{}^{2}
- 72 v_{12}{}^{4}\nonumber\\
& + 192 (n_{12}{} v_{12}{})^2 v_{1}{}^{2}
- 192 (n_{12}{} v_{12}{}) (n_{12}{} v_{1}{}) v_{1}{}^{2}
+ 96 (v_{1}{} v_{2}{}) v_{1}{}^{2}
+ 96 v_{12}{}^{2} v_{1}{}^{2}
- 48 v_{1}{}^{4}\Bigr)\nonumber\\
& + m_{1}^{2} \sigma_2^{(2)} \Bigl(1000 (n_{12}{} v_{12}{})^4
- 960 (n_{12}{} v_{12}{})^3 (n_{12}{} v_{1}{})
+ 480 (n_{12}{} v_{12}{})^2 (n_{12}{} v_{1}{})^2
- 192 (n_{12}{} v_{12}{})^2 (v_{1}{} v_{2}{})\nonumber\\
& + 192 (n_{12}{} v_{12}{}) (n_{12}{} v_{1}{}) (v_{1}{} v_{2}{})
- 48 (v_{1}{} v_{2}{})^2
+ 64 (n_{12}{} v_{12}{})^2 v_{12}{}^{2}
- 192 (n_{12}{} v_{12}{}) (n_{12}{} v_{1}{}) v_{12}{}^{2}\nonumber\\
& + 192 (n_{12}{} v_{1}{})^2 v_{12}{}^{2}
- 96 (v_{1}{} v_{2}{}) v_{12}{}^{2}
- 128 v_{12}{}^{4}
+ 192 (n_{12}{} v_{12}{})^2 v_{1}{}^{2}
- 192 (n_{12}{} v_{12}{}) (n_{12}{} v_{1}{}) v_{1}{}^{2}\nonumber\\
& + 96 (v_{1}{} v_{2}{}) v_{1}{}^{2}
+ 96 v_{12}{}^{2} v_{1}{}^{2}
- 48 v_{1}{}^{4}\Bigr)
+ \frac{G m_{1}}{r_{12}} \biggl [m_{2}^{2} \mu_1^{(2)}
\Bigl(\frac{7215}{8} (n_{12}{} v_{1}{})^2
- \frac{7431}{4} (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})\nonumber\\
& + \frac{4461}{8} (n_{12}{} v_{2}{})^2
- \frac{285}{8} v_{12}{}^{2}
- \frac{159}{2} v_{1}{}^{2}\Bigr)
+ m_{1}^{2} \mu_2^{(2)} \Bigl(- \frac{15717}{8} (n_{12}{} v_{1}{})^2
+ \frac{16581}{4} (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})\nonumber\\
& - \frac{22521}{8} (n_{12}{} v_{2}{})^2
+ \frac{4597}{8} v_{12}{}^{2}
- 132 v_{1}{}^{2}\Bigr)
+ m_{2}^{2} \sigma_1^{(2)} \Bigl(656 (n_{12}{} v_{12}{})^2
- 144 (n_{12}{} v_{12}{}) (n_{12}{} v_{1}{})\nonumber\\
& + 200 v_{12}{}^{2}\Bigr)
+ m_{1}^{2} \sigma_2^{(2)} \Bigl(1124 (n_{12}{} v_{12}{})^2
- 144 (n_{12}{} v_{12}{}) (n_{12}{} v_{1}{})
+ 436 v_{12}{}^{2}\Bigr)\biggl]\nonumber\\
& + \frac{G m_{2}}{r_{12}} \biggl [m_{2}^{2} \mu_1^{(2)} \Bigl(252 (n_{12}{} v_{1}{})^2
- 504 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})
- \frac{387}{2} (n_{12}{} v_{2}{})^2
+ 162 v_{12}{}^{2}
- 99 v_{1}{}^{2}\Bigr)\nonumber\\
& + m_{1}^{2} \mu_2^{(2)} \Bigl(-2568 (n_{12}{} v_{1}{})^2
+ 5136 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})
- 2946 (n_{12}{} v_{2}{})^2
+ 426 v_{12}{}^{2}
- 84 v_{1}{}^{2}\Bigr)\nonumber\\
& + m_{2}^{2} \sigma_1^{(2)} \Bigl(672 (n_{12}{} v_{12}{})^2
+ 336 v_{12}{}^{2}\Bigr)
+ m_{1}^{2} \sigma_2^{(2)} \Bigl(592 (n_{12}{} v_{12}{})^2
+ 192 v_{12}{}^{2}\Bigr)\biggl]
+ \frac{G^2 m_{1}^2}{r_{12}^2} \Bigl(- \frac{2145}{7} m_{2}^{2} \mu_1^{(2)}\nonumber\\
& - 1008 m_{1}^{2} \mu_2^{(2)}\Bigr)
+ \frac{G^2 m_{1} m_{2}}{r_{12}^2} \Bigl(- \frac{2581}{2} m_{2}^{2} \mu_1^{(2)}
- 1805 m_{1}^{2} \mu_2^{(2)}\Bigr)
+ \frac{G^2 m_{2}^2}{r_{12}^2} \Bigl(-576 m_{2}^{2} \mu_1^{(2)}\nonumber\\
& - \frac{6705}{14} m_{1}^{2} \mu_2^{(2)}\Bigr)\biggl\}
+ v_{1}^{i} \biggl\{m_{2}^{2} \mu_1^{(2)} \Bigl(-144 (n_{12}{} v_{1}{})^3
+ 468 (n_{12}{} v_{1}{})^2 (n_{12}{} v_{2}{})
- 720 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})^2\nonumber\\
& + 360 (n_{12}{} v_{2}{})^3
- 342 (n_{12}{} v_{1}{}) (v_{1}{} v_{2}{})
+ 360 (n_{12}{} v_{2}{}) (v_{1}{} v_{2}{})
+ 144 (n_{12}{} v_{1}{}) v_{1}{}^{2}
- 135 (n_{12}{} v_{2}{}) v_{1}{}^{2}\nonumber\\
& + 198 (n_{12}{} v_{1}{}) v_{2}{}^{2}
- 225 (n_{12}{} v_{2}{}) v_{2}{}^{2}\Bigr)
+ m_{1}^{2} \mu_2^{(2)} \Bigl(1248 (n_{12}{} v_{1}{})^3
- 3888 (n_{12}{} v_{1}{})^2 (n_{12}{} v_{2}{})\nonumber\\
& + 3996 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})^2
- 1392 (n_{12}{} v_{2}{})^3
+ 9 (n_{12}{} v_{1}{}) (v_{1}{} v_{2}{})
- \frac{903}{2} (n_{12}{} v_{1}{}) v_{12}{}^{2}
+ 492 (n_{12}{} v_{2}{}) v_{12}{}^{2}\nonumber\\
& - 9 (n_{12}{} v_{1}{}) v_{1}{}^{2}\Bigr)
+ m_{2}^{2} \sigma_1^{(2)} \Bigl(-1056 (n_{12}{} v_{12}{})^3
+ 1248 (n_{12}{} v_{12}{})^2 (n_{12}{} v_{1}{})
- 576 (n_{12}{} v_{12}{}) (n_{12}{} v_{1}{})^2\nonumber\\
& - 960 (n_{12}{} v_{12}{}) (v_{1}{} v_{2}{})
+ 48 (n_{12}{} v_{1}{}) (v_{1}{} v_{2}{})
+ 336 (n_{12}{} v_{12}{}) v_{1}{}^{2}
+ 48 (n_{12}{} v_{1}{}) v_{1}{}^{2}
+ 624 (n_{12}{} v_{12}{}) v_{2}{}^{2}\nonumber\\
& - 96 (n_{12}{} v_{1}{}) v_{2}{}^{2}\Bigr)
+ m_{1}^{2} \sigma_2^{(2)} \Bigl(-1664 (n_{12}{} v_{12}{})^3
+ 1248 (n_{12}{} v_{12}{})^2 (n_{12}{} v_{1}{})
- 576 (n_{12}{} v_{12}{}) (n_{12}{} v_{1}{})^2\nonumber\\
& - 1168 (n_{12}{} v_{12}{}) (v_{1}{} v_{2}{})
+ 48 (n_{12}{} v_{1}{}) (v_{1}{} v_{2}{})
+ 440 (n_{12}{} v_{12}{}) v_{1}{}^{2}
+ 48 (n_{12}{} v_{1}{}) v_{1}{}^{2}
+ 728 (n_{12}{} v_{12}{}) v_{2}{}^{2}\nonumber\\
& - 96 (n_{12}{} v_{1}{}) v_{2}{}^{2}\Bigr)
+ \frac{G m_{1}}{r_{12}} \biggl [m_{2}^{2} \mu_1^{(2)}
\Bigl(- \frac{1209}{4} (n_{12}{} v_{1}{})
+ \frac{1179}{4} (n_{12}{} v_{2}{})\Bigr)
+ m_{1}^{2} \mu_2^{(2)} \Bigl(\frac{241}{4} (n_{12}{} v_{1}{})\nonumber\\
& - \frac{661}{4} (n_{12}{} v_{2}{})\Bigr)
+ m_{2}^{2} \sigma_1^{(2)} \Bigl(-712 (n_{12}{} v_{1}{})
+ 856 (n_{12}{} v_{2}{})\Bigr)
+ m_{1}^{2} \sigma_2^{(2)} \Bigl(-1416 (n_{12}{} v_{1}{})
+ 1560 (n_{12}{} v_{2}{})\Bigr)\biggl]\nonumber\\
& + \frac{G m_{2}}{r_{12}} \biggl [m_{2}^{2} \mu_1^{(2)} \Bigl(-378 (n_{12}{} v_{1}{})
+ 279 (n_{12}{} v_{2}{})\Bigr)
+ m_{1}^{2} \mu_2^{(2)} \Bigl(714 (n_{12}{} v_{1}{})
- 798 (n_{12}{} v_{2}{})\Bigr)\nonumber\\
& - 1008 m_{2}^{2} \sigma_1^{(2)} (n_{12}{} v_{12}{})
- 784 m_{1}^{2} \sigma_2^{(2)} (n_{12}{} v_{12}{})\biggl]\biggl\}
+ v_{2}^{i} \biggl\{m_{2}^{2} \mu_1^{(2)} \Bigl(144 (n_{12}{} v_{1}{})^3
- 468 (n_{12}{} v_{1}{})^2 (n_{12}{} v_{2}{})\nonumber\\
& + 720 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})^2
- 360 (n_{12}{} v_{2}{})^3
+ 342 (n_{12}{} v_{1}{}) (v_{1}{} v_{2}{})
- 360 (n_{12}{} v_{2}{}) (v_{1}{} v_{2}{})
- 144 (n_{12}{} v_{1}{}) v_{1}{}^{2}\nonumber\\
& + 135 (n_{12}{} v_{2}{}) v_{1}{}^{2}
- 198 (n_{12}{} v_{1}{}) v_{2}{}^{2}
+ 225 (n_{12}{} v_{2}{}) v_{2}{}^{2}\Bigr)
+ m_{1}^{2} \mu_2^{(2)} \Bigl(-1248 (n_{12}{} v_{1}{})^3\nonumber\\
& + 3888 (n_{12}{} v_{1}{})^2 (n_{12}{} v_{2}{})
- 3996 (n_{12}{} v_{1}{}) (n_{12}{} v_{2}{})^2
+ 1392 (n_{12}{} v_{2}{})^3
- 9 (n_{12}{} v_{1}{}) (v_{1}{} v_{2}{})
+ \frac{903}{2} (n_{12}{} v_{1}{}) v_{12}{}^{2}\nonumber\\
& - 492 (n_{12}{} v_{2}{}) v_{12}{}^{2}
+ 9 (n_{12}{} v_{1}{}) v_{1}{}^{2}\Bigr)
+ m_{2}^{2} \sigma_1^{(2)} \Bigl(1056 (n_{12}{} v_{12}{})^3
- 1248 (n_{12}{} v_{12}{})^2 (n_{12}{} v_{1}{})\nonumber\\
& + 576 (n_{12}{} v_{12}{}) (n_{12}{} v_{1}{})^2
+ 960 (n_{12}{} v_{12}{}) (v_{1}{} v_{2}{})
- 48 (n_{12}{} v_{1}{}) (v_{1}{} v_{2}{})
- 336 (n_{12}{} v_{12}{}) v_{1}{}^{2}
- 48 (n_{12}{} v_{1}{}) v_{1}{}^{2}\nonumber\\
& - 624 (n_{12}{} v_{12}{}) v_{2}{}^{2}
+ 96 (n_{12}{} v_{1}{}) v_{2}{}^{2}\Bigr)
+ m_{1}^{2} \sigma_2^{(2)} \Bigl(1664 (n_{12}{} v_{12}{})^3
- 1248 (n_{12}{} v_{12}{})^2 (n_{12}{} v_{1}{})\nonumber\\
& + 576 (n_{12}{} v_{12}{}) (n_{12}{} v_{1}{})^2
+ 1168 (n_{12}{} v_{12}{}) (v_{1}{} v_{2}{})
- 48 (n_{12}{} v_{1}{}) (v_{1}{} v_{2}{})
- 440 (n_{12}{} v_{12}{}) v_{1}{}^{2}
- 48 (n_{12}{} v_{1}{}) v_{1}{}^{2}\nonumber\\
& - 728 (n_{12}{} v_{12}{}) v_{2}{}^{2}
+ 96 (n_{12}{} v_{1}{}) v_{2}{}^{2}\Bigr)
+ \frac{G m_{1}}{r_{12}} \biggl [m_{2}^{2}
\mu_1^{(2)} \Bigl(\frac{1209}{4} (n_{12}{} v_{1}{})
- \frac{1179}{4} (n_{12}{} v_{2}{})\Bigr)\nonumber\\
& + m_{1}^{2} \mu_2^{(2)} \Bigl(- \frac{241}{4} (n_{12}{} v_{1}{})
+ \frac{661}{4} (n_{12}{} v_{2}{})\Bigr)
+ m_{2}^{2} \sigma_1^{(2)} \Bigl(712 (n_{12}{} v_{1}{})
- 856 (n_{12}{} v_{2}{})\Bigr)\nonumber\\
& + m_{1}^{2} \sigma_2^{(2)} \Bigl(1416 (n_{12}{} v_{1}{})
- 1560 (n_{12}{} v_{2}{})\Bigr)\biggl]
+ \frac{G m_{2}}{r_{12}} \biggl [m_{2}^{2} \mu_1^{(2)} \Bigl(378 (n_{12}{} v_{1}{})
- 279 (n_{12}{} v_{2}{})\Bigr)\nonumber\\
& + m_{1}^{2} \mu_2^{(2)} \Bigl(-714 (n_{12}{} v_{1}{})
+ 798 (n_{12}{} v_{2}{})\Bigr)
+ 1008 m_{2}^{2} \sigma_1^{(2)} (n_{12}{} v_{12}{})
+ 784 m_{1}^{2} \sigma_2^{(2)} (n_{12}{} v_{12}{})\biggl]\biggl\}\Biggl]\nonumber\\
& + \frac{1}{r_{12}^{2}} n_{12}^{i} \Bigl(-60 m_{2}^{2} \mu_1^{(3)}
- 60 m_{1}^{2} \mu_2^{(3)}\Bigr)\Bigg\}
+ \mathcal{O}\left( \frac{\epsilon_\text{tidal}}{c^{6}}
\right)\,.\end{aligned}$$ The tidal part of the relative acceleration in the CoM frame, derived from the CoM Lagrangian displayed in , reads $$\begin{aligned}
\label{eq:accCoM}
(a^{i})_\text{tidal} =& - 18 \frac{G^{2} m }{r^{7}}\mu_{+}^{(2)} n^{i} \nonumber \\
& + \frac{1}{c^{2}} \left\{ \frac{G^{2} m}{r^{7}}
\left[ \left( \left(108 + 72\nu \right)\mu_{+}^{(2)} +
180 \Delta \, \mu_{-}^{(2)} -192 \sigma_{+}^{(2)} \right)\dot{r}^2 n^{i} +
\left( \left(-\frac{81}{2} -54\nu \right)\mu_{+}^{(2)} -
\frac{45}{2} \Delta \, \mu_{-}^{(2)} \right. \right. \right. \nonumber \\
& \left. \left. \left. -96 \sigma_{+}^{(2)} \right) v^{2} n^{i} +
\left( \left(63-36\nu \right)\mu_{+}^{(2)} -45 \Delta \, \mu_{-}^{(2)} +
288\sigma_{+}^{(2)} \right)\dot{r} v^{i} \right] + \frac{G^{3} m^{2}}{r^{8}}
\left[ \left(183+57\nu \right) \mu_{+}^{(2)} -
15 \Delta \, \mu_{-}^{(2)} \right]n^{i} \right\} \nonumber \\
& + \frac{1}{c^{4}} \left\{ \frac{G^{2} m}{r^{7}}
\left[ \left( \left( -3720 -720 \nu +540 \nu^{2} \right)\mu_{+}^{(2)} +
\left(-3990 +900 \nu \right) \Delta \, \mu_{-}^{(2)} +
\left(880+960\nu \right)\sigma_{+}^{(2)} +
160 \Delta \, \sigma_{-}^{(2)}\right)\dot{r}^{4}n^{i} \right. \right. \nonumber \\
& \left. \left. + \left( \left( 2472 + 522 \nu -
288 \nu^{2} \right)\mu_{+}^{(2)} +
\left(2724 -450 \nu \right) \Delta \, \mu_{-}^{(2)} -
272\sigma_{+}^{(2)} +400\Delta \, \sigma_{-}^{(2)} \right)
\dot{r}^{2}v^{2}n^{i} \right. \right. \nonumber \\
& \left. \left. + \left( \left( -\frac{1671}{8}-\frac{153}{2} \nu +
72 \nu^{2} \right)\mu_{+}^{(2)} + \left(-\frac{1527}{8} +
\frac{45}{2} \nu \right) \Delta \, \mu_{-}^{(2)} +
\left(-104-96\nu \right)\sigma_{+}^{(2)} -
56 \Delta \, \sigma_{-}^{(2)}\right)v^{4}n^{i} \right. \right. \nonumber \\
& \left. \left. + \left( \left( 1104 +36 \nu -
144 \nu^{2} \right)\mu_{+}^{(2)} +
\left(1392-180 \nu \right) \Delta \, \mu_{-}^{(2)} +
\left(-1376-1536\nu \right)\sigma_{+}^{(2)} -
608 \Delta \, \sigma_{-}^{(2)}\right)\dot{r}^{3}v^{i} \right. \right. \nonumber \\
& \left. \left. + \left( \left( -\frac{633}{2} +
63 \nu +36 \nu^{2} \right)\mu_{+}^{(2)} +
\left(-\frac{1209}{2}+45 \nu \right) \Delta \, \mu_{-}^{(2)} +
\left(872+672\nu \right)\sigma_{+}^{(2)} +
104 \Delta \, \sigma_{-}^{(2)}\right)\dot{r}v^{2}v^{i} \right] \right. \nonumber \\
& \left. + \frac{G^{3} m^{2}}{r^{8}}
\left[ \left( \left( -2316+\frac{5391}{4} \nu +
\frac{549}{2}\nu^{2} \right)\mu_{+}^{(2)} +
\left(-2820 +\frac{1665}{4} \nu \right) \Delta \, \mu_{-}^{(2)} +
\left(1264+744\nu \right)\sigma_{+}^{(2)} -
80 \Delta \, \sigma_{-}^{(2)}\right) \dot{r}^{2}n^{i} \right. \right. \nonumber \\
& \left. \left. + \left( \left( 405+\frac{887}{2} \nu -
27 \nu^{2} \right)\mu_{+}^{(2)} + \left(279- \frac{135}{2}\nu \right)
\Delta\,\mu_{-}^{(2)} + \left(528+216\nu \right)\sigma_{+}^{(2)} -
144\Delta \, \sigma_{-}^{(2)}\right) v^{2}n^{i} \right. \right. \nonumber \\
& \left. \left. + \left( \left( 336-832 \nu - 114 \nu^{2} \right)\mu_{+}^{(2)} +
\left(1092-150\nu \right)\Delta \, \mu_{-}^{(2)} +
\left(-1792-960\nu \right)\sigma_{+}^{(2)} +
224\Delta \, \sigma_{-}^{(2)}\right) \dot{r}v^{i}\right] \right. \nonumber \\
& \left. + \frac{G^{4} m^{3}}{r^{9}}\left[
\left( -\frac{14769}{14} - \frac{8716}{7} \nu \right)\mu_{+}^{(2)} +
\left(\frac{1359}{14} +90 \nu\right)
\Delta \, \mu_{-}^{(2)}\right]n^{i} \right\} -
120 \frac{G^{2} m }{r^{9}}\mu_{+}^{(3)}n^{i} + \mathcal{O}\left(
\frac{\epsilon_\text{tidal}}{c^{6}} \right)\,.\end{aligned}$$
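At Newtonian order, the consistency between \[eq:a1tidal\] and \[eq:accCoM\] can be checked by hand: subtracting the $1\leftrightarrow 2$ exchange of the leading term of \[eq:a1tidal\] and projecting on $n^i$ must reproduce the leading CoM term. A symbolic sketch with sympy, assuming the mass-weighted combination $\mu_{+}^{(2)}=\tfrac{1}{2}\bigl(\tfrac{m_2}{m_1}\mu_1^{(2)}+\tfrac{m_1}{m_2}\mu_2^{(2)}\bigr)$ (this convention is an assumption here, as the definition of $\mu_\pm^{(2)}$ lies outside this appendix):

```python
import sympy as sp

m1, m2, mu1, mu2, G, r = sp.symbols('m1 m2 mu1 mu2 G r', positive=True)
m = m1 + m2

# Leading (Newtonian) tidal acceleration of body 1 along n12, read off from eq. (a1tidal)
# after dividing out the overall factor m1 on the left-hand side:
a1 = (G**2 / r**7) * (-9*m2**2*mu1 - 9*m1**2*mu2) / m1
# Body 2 follows by the 1 <-> 2 exchange; n21 = -n12 flips the sign of the n-component:
a2 = -(G**2 / r**7) * (-9*m1**2*mu2 - 9*m2**2*mu1) / m2

# Assumed CoM combination: mu_plus = (m2/m1 * mu1 + m1/m2 * mu2) / 2
mu_plus = sp.Rational(1, 2) * (m2/m1*mu1 + m1/m2*mu2)
a_com = -18 * G**2 * m * mu_plus / r**7   # leading term of eq. (accCoM)

# Relative acceleration a1 - a2 must reproduce the CoM expression:
assert sp.simplify((a1 - a2) - a_com) == 0
```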
[^1]: One can speculate that the tidal 5PN coefficient is larger than the purely orbital 5PN contribution to the orbital phase for point particles, the latter being currently unknown.
[^2]: The NNL order in the dynamics corresponds to 2PN order beyond the leading 5PN quadrupolar tidal effect and is thus formally equivalent to a 7PN orbital effect; similarly, the NL order means 1PN beyond the leading 5PN effect.
[^3]: Throughout the paper, we use the conventions of MTW [@MTW]; in particular, the metric signature is $(-,+,+,+)$ and the Riemann tensor satisfies the identity $(\nabla_{\mu}\nabla_{\nu} - \nabla_{\nu}\nabla_{\mu})V^{\lambda} =
R\indices{^\lambda_\kappa_\mu_\nu}V^{\kappa}$.
[^4]: The constant mass of body $A$ is denoted $m_A$ and its proper time ${\mathrm{d}}\tau_{A} = (-[g_{\mu \nu}]_A{\mathrm{d}}y^{\mu}_{A}{\mathrm{d}}y^{\nu}_{A}/c^2)^{1/2}$, where $y_A^\mu(\tau_A)$ is the particle’s worldline. The four velocity $u^{\mu}_{A} = {\mathrm{d}}y^{\mu}_{A}/(c\,{\mathrm{d}}\tau_{A})$ is such that $[g_{\mu\nu}]_A
u_{A}^{\mu}u_{A}^{\nu} = -1$, with $[g_{\mu \nu}]_A$ denoting the metric regularized at the location of body $A$; this is of course nothing but the time-time component of the orthonormalizing condition of the tetrad, $\eta_{\hat{\alpha}\hat{\beta}} =
[g_{\mu\nu}]_A\,e_{{\hat{\alpha}}}^{A\mu}\,e_{{\hat{\beta}}}^{A\nu}$.
[^5]: In our convention, $C^{*}_{{\hat{\alpha}}{\hat{\beta}}{\hat{\gamma}}{\hat{\delta}}}
\equiv \frac{1}{2}
\varepsilon_{{\hat{\alpha}}{\hat{\beta}}{\hat{\eta}}{\hat{\zeta}}}
\,C^{{\hat{\eta}}{\hat{\zeta}}}_{\phantom{{\hat{\eta}}{\hat{\zeta}}}{\hat{\gamma}}{\hat{\delta}}}$ or, in covariant form, $C^{*}_{\mu \nu \rho \sigma} \equiv
\tfrac{1}{2}\varepsilon_{\mu \nu \lambda \kappa}
\,C\indices{^\lambda^\kappa_\rho_\sigma}$, where $\varepsilon_{{\hat{\alpha}}{\hat{\beta}}{\hat{\gamma}}{\hat{\delta}}}$ denotes the tetradic components of the completely anti-symmetric Levi-Civita tensor $\varepsilon_{\mu\nu\rho\sigma}$, defined by $\varepsilon_{{\hat{0}}{\hat{1}}{\hat{2}}{\hat{3}}}=1$ and $\varepsilon_{0123}=\sqrt{-g}$. The tetradic covariant derivative obeys, *e.g.*, $\nabla_{{\hat{\alpha}}}V^{{\hat{\beta}}} =
e_{{\hat{\alpha}}}^{\phantom{{\hat{\alpha}}}\mu}
e^{{\hat{\beta}}}_{\phantom{{\hat{\beta}}}\nu}\nabla_{\mu}V^{\nu}$.
[^6]: The notation $r_{12}=\vert \bm{y}_{1}-\bm{y}_{2}\vert$ represents the Euclidean distance between the two bodies (at constant time $y_{1}^{0}=y_{2}^{0}=c\, t$); the unit direction from body 2 to body 1 is then $n_{12}^{i} =
(y_{1}^{i}-y_{2}^{i})/r_{12}$; $v_{12}^{i}=v_{1}^i-v_{2}^i$ stands for the relative velocity; the usual Euclidean scalar product of vectors is denoted with parentheses, *e.g.* $(n_{12}v_1)=\bm{n}_{12}\cdot\bm{ v}_1$; the cross product is denoted, *e.g.* $(n_{12}\times v_{12})_i$, and the mixed product, *e.g.* $(n_{12}, v_1, v_2)= (n_{12} \ v_1 \times
v_2)$. All calculations are done with the software Mathematica and the tensor package *xAct* [@xtensor].
[^7]: We set $x^i=y_1^i-y_2^i$ and $v^i={\mathrm{d}}x^i/{\mathrm{d}}t$; $r=\vert\bm{x}\vert=r_{12}$ denotes the separation, $n^i=x^i/r$ the unit direction, and we have $\dot{r}=(nv)=\bm{n}\cdot\bm{v}$; mass parameters are: the total mass $m=m_{1}+m_{2}$, the symmetric mass ratio $\nu = m_1 m_2/m^2 =
X_{1}X_{2}$ and the mass difference $\Delta = X_{1}-X_{2}$, with $X_{A}=m_{A}/m$.
[^8]: The quantity $\kappa^{T}_{2}$ defined in Ref. [@DNV12] is related to our definition $\widetilde{\mu}_{+}^{(2)}$ by $\kappa^{T}_{2} = 6
\widetilde{\mu}_{+}^{(2)}$.
[^9]: However, we do not recover the 1PN coefficient for the current quadrupole piece in Ref. [@Landry18]; the discrepancy is a factor of 2.
---
abstract: 'We study the structure of logical operators in local $D$-dimensional quantum codes, considering both subsystem codes with geometrically local gauge generators and codes defined by geometrically local commuting projectors. We show that if the code distance is $d$, then any logical operator can be supported on a set of specified geometry containing $\tilde d$ qubits, where $\tilde d d^{1/(D-1)} = O(n)$ and $n$ is the code length. Our results place limitations on partially self-correcting quantum memories, in which at least some logical operators are protected by energy barriers that grow with system size. We also show that for any two-dimensional local commuting projector code there is a nontrivial logical “string” operator supported on a narrow strip, where the operator is only slightly entangling across any cut through the strip.'
author:
- Jeongwan Haah and John Preskill
date: 2 July 2012
title: Logical operator tradeoff for local quantum codes
---
Introduction
============
Geometrically local quantum codes provide intriguing models of quantum many-body physics, and also have potential applications to fault-tolerant quantum computation in systems with short-range interactions. There has been impressive recent progress in understanding the properties of such codes. Bravyi, Poulin, and Terhal [@BravyiPoulinTerhal2010Tradeoffs] showed that for codes defined by geometrically local commuting projectors in $D$ dimensions, the code length $n$, distance $d$ and number of encoded qubits $k$ are related by $$kd^{2/(D-1)} = O(n).$$ Bravyi and Terhal [@BravyiTerhal2008no-go] showed that $$d = O(n^{(D-1)/D})$$ for subsystem codes with geometrically local gauge generators, and Bravyi [@Bravyi2010Subsystem] showed that $$kd = O(n)$$ for two-dimensional subsystem codes with geometrically local gauge generators.
Bravyi and Terhal [@BravyiTerhal2008no-go], and Kay and Colbeck [@KayColbeck2008Quantum], also showed that no two-dimensional local stabilizer code can be a *self-correcting quantum memory* — if we regard the code as a system governed by a local Hamiltonian, the energy barrier protecting against logical errors is a constant independent of system size. A self-correcting memory based on a geometrically local stabilizer code is possible in four dimensions [@DennisKitaevLandahlEtAl2002Topological; @AlickiHorodeckiHorodeckiEtAl2008thermal], where the storage time increases sharply as the system size grows. In three dimensions there are codes such that the energy barrier increases logarithmically with system size [@Haah2011Local; @BravyiHaah2011Energy], but where the storage time is bounded above by a constant independent of system size [@BravyiHaah2011Analytic].
We address a related but somewhat different question. To illustrate the question, consider the three-dimensional toric code [@CastelnovoChamon2008Topological], on a cubic lattice with linear size $L$. This code provides different degrees of protection against different types of errors. For example, we can arrange for the logical bit flip acting on the code space to have weight $L$ (*i.e.*, to be supported on a set of $L$ qubits), while the logical phase flip has weight $L^2$. In that case, the energy barrier protecting against logical phase errors grows linearly with $L$, though the energy barrier protecting against bit flips is only a constant. We might say this system is *partially self correcting*, meaning it has very robust physical protection against phase errors, but weaker protection against bit flips.
We find limitations on partial self correction in two-dimensional local subsystem codes with local stabilizer generators; in particular the logical phase flip must have weight $O(L)$ if the logical bit flip has weight $\Omega(L)$. More generally, we study how the code distance $d$ constrains the weight of logical operators, for both local commuting projector codes and subsystem codes, finding that $d$ limits not just the weight of the lowest-weight logical operator but also the higher-weight logical operators. Let us say that two logical operators are *equivalent* if they act in the same way on the protected system. Our result, which applies to both local subsystem codes and to local commuting projector codes in $D\ge 2$ dimensions, says that for any logical operator there is an equivalent logical operator with weight $\tilde d$ such that $$\label{eq:main-result}
\tilde d d^{1/(D-1)} = O(L^D)$$ where $L$ is the linear size of the lattice. We call this result the tradeoff theorem for logical operators, since, *e.g.*, increasing the weight of the lowest-weight logical operator reduces the upper bound on the weight of other logical operators. One immediate consequence is that, since $d \le \tilde d$, $$d = O(L^{D-1}),$$ a result previously known for local subsystem codes but not for local commuting projector codes with $D\ge 3$. For $D=2$ the tradeoff becomes $d\tilde d = O(L^2)$, and hence $d=O(L)$.
We also show that for any two-dimensional local commuting projector code there is a nontrivial logical “string” operator supported on a narrow strip (or on a narrow slab in higher dimensions), where the operator is only slightly entangling across any cut through the strip. However, we have not settled the question whether two-dimensional local commuting projector codes can be self correcting.
We review the theory of stabilizer codes and subsystem codes in Sec. II. In Sec. III we prove a “Cleaning Lemma” for subsystem codes previously stated by Bravyi [@Bravyi2010Subsystem]; our proof uses tools developed by Yoshida and Chuang [@YoshidaChuang2010Framework], and may be of independent interest. We prove the tradeoff theorem for local subsystem codes in Sec. IV and for local commuting projector codes in Sec. V. In Sec. VI we show that any two-dimensional commuting projector code admits a nontrivial logical “string” operator supported on a narrow strip. In Sec. VII we explain why partial self-correction is impossible for two-dimensional local stabilizer codes with distance $d=\Omega(L)$. In Sec. VIII we show that the logical string operator in a two-dimensional local commuting projector code can be chosen to be slightly entangling across any cut through the string. Sec. IX contains our conclusions.
Background: stabilizer and subsystem codes {#sec:background}
==========================================
A *stabilizer code* [@CalderbankRainsShorEtAl1997Quantum; @Gottesman1996Class] embeds $k$ protected qubits in the Hilbert space of $n$ physical qubits. The code has a stabilizer group $S$, an abelian subgroup of the $n$-qubit Pauli group $P$ with $n-k$ independent generators, and the code space is the simultaneous eigenspace with eigenvalue 1 of all elements of $S$.
It is convenient to abelianize $P$ by ignoring the phase in the product of two Pauli operators, thus obtaining a $2n$-dimensional vector space over the binary field, which we also denote by $P$. The vector space $P$ is equipped with a symplectic form, such that two vectors are orthogonal if and only if the corresponding Pauli operators commute. If $G$ is a subgroup of $P$, we use the symbol $G$ to denote both the subgroup and the corresponding subspace of $P$.
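As a concrete illustration (a minimal sketch of ours, not part of the formal development, with all function names hypothetical), the correspondence between commutation of Pauli operators and symplectic orthogonality of their binary vectors can be checked directly. We use one standard convention: the $x$-part of the vector flags $X$ or $Y$, and the $z$-part flags $Z$ or $Y$.

```python
import numpy as np

def to_symplectic(s):
    """Pauli string over {I,X,Y,Z} -> (x|z) binary vector, ignoring phases:
    the x-part flags X or Y, the z-part flags Z or Y."""
    return np.array([int(c in 'XY') for c in s] +
                    [int(c in 'ZY') for c in s], dtype=np.uint8)

def symplectic_product(u, v):
    """Symplectic form on P: 0 iff the Pauli operators commute, 1 iff they anticommute."""
    n = len(u) // 2
    return int(u[:n] @ v[n:] + u[n:] @ v[:n]) % 2

# X and Z on the same qubit anticommute ...
print(symplectic_product(to_symplectic('XI'), to_symplectic('ZI')))        # 1
# ... while two overlapping stabilizer generators of the five-qubit code
# (XZZXI and its cyclic shift) anticommute at an even number of sites, so commute.
print(symplectic_product(to_symplectic('XZZXI'), to_symplectic('IXZZX')))  # 0
```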
Viewed as a vector space, $S$ is $(n-k)$-dimensional. We denote by $S^\perp$ the vector space orthogonal to $S$, which has dimension $2n-(n-k)=n+k$. It can be decomposed as a direct sum of $S$ and a $2k$-dimensional vector space corresponding to the logical Pauli group, which acts nontrivially on the $k$ protected qubits. We define the weight of a Pauli operator as the number of qubits on which the operator acts nontrivially, and the distance $d$ of the stabilizer code is the minimum weight of a nontrivial logical operator (one contained in $S^\perp$ but not in $S$).
A *subsystem code* [@Bacon2006Operator; @Poulin2005Stabilizer] can be viewed as a stabilizer code with $k+g$ encoded qubits, but where only $k$ of these qubits are used to store protected quantum information. The stabilizer group $S$ together with Pauli operators acting on the $g$ unused qubits generate the code’s *gauge group* $G$. Equivalently, we may say that the subsystem code is defined by its gauge group $G\le P$, and that the code’s stabilizer group $S=G\cap G^\perp$ is the subgroup of $G$ that commutes with all elements of $G$.
Logical operations in the subsystem code preserve the $2^k$-dimensional Hilbert space spanned by the $k$ protected qubits. We distinguish between *bare* logical operators, which act trivially on the gauge qubits, and *dressed* logical operators, which may act nontrivially on the gauge qubits as well as the protected qubits. Thus, nontrivial bare logical operators are in $G^\perp$ but not in $G$, while nontrivial dressed logical operators are in $S^\perp$ but not in $G$. The code distance $d$ is the minimum weight of a nontrivial dressed logical operator.
A bare logical operator $x\in G^\perp$ acts trivially on the protected qubits as well as the gauge qubits if and only if $x\in G^\perp \cap G= S$; hence we may regard $G^\perp/S$ as the group of bare logical operators. A dressed logical operator $x\in S^\perp$ acts trivially on the protected qubits (but perhaps nontrivially on the gauge qubits) if and only if $x\in G$; hence we may regard $S^\perp / G$ as the group of dressed logical operators, where we regard two dressed logical operators as equivalent if they act the same way on the protected qubits. We denote by $[G]$ the dimension of the vector space $G$ (the number of independent generators of the corresponding group); by counting the number of independent bare logical operators, we find that the number $k$ of protected qubits satisfies $$\begin{aligned}
2k &=& [G^\perp/S]=[G^\perp] - [S] \\
&=& [P] - [G] - [S]= 2n - [G] - [S].\end{aligned}$$ Similarly, by counting the number of independent dressed logical operators, we find $$\begin{aligned}
2k &=& [S^\perp/G]=[S^\perp] - [G] \\
&=& [P] - [S] - [G]= 2n - [S] - [G].\end{aligned}$$ A stabilizer code is the special case of a subsystem code in which $G=S$, and in that case, $k = n - [S]$.
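To make the counting $k = n - [S]$ concrete, here is a small numerical sketch (our own illustration, with hypothetical names): $[S]$ is the GF(2) rank of the generators' symplectic vectors, so presenting the group with redundant generators does not change $k$.

```python
import numpy as np

def to_symplectic(s):
    """Pauli string -> (x|z) binary vector, ignoring phases."""
    return np.array([int(c in 'XY') for c in s] +
                    [int(c in 'ZY') for c in s], dtype=np.uint8)

def gf2_rank(rows):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    m = np.array(rows, dtype=np.uint8) % 2
    rank = 0
    for c in range(m.shape[1]):
        pivot = next((r for r in range(rank, m.shape[0]) if m[r, c]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        for r in range(m.shape[0]):
            if r != rank and m[r, c]:
                m[r] ^= m[rank]
        rank += 1
    return rank

# Three-qubit repetition code presented with a redundant generator:
# ZIZ = (ZZI)(IZZ), so [S] = 2 independent generators and k = n - [S] = 1.
gens = ['ZZI', 'IZZ', 'ZIZ']
S_dim = gf2_rank([to_symplectic(g) for g in gens])
print(S_dim, 3 - S_dim)  # 2 1
```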
We will also consider stabilizer codes and subsystem codes of the CSS type [@CalderbankShor1996Good; @Steane1996Multiple], where each generator of the gauge group, and each logical operator, may be chosen to be either of the $X$-type or the $Z$-type. We use $P^X$ ($P^Z$) to denote the group of $X$-type ($Z$-type) Pauli operators, $G^X$ ($G^Z$) to denote the $X$-type ($Z$-type) gauge group, and $S^X$ ($S^Z$) to denote the $X$-type ($Z$-type) stabilizer group. We use $(G^X)^\perp$ to denote the subgroup of $P^Z$ that commutes with $G^X$, etc. Then the group of bare $Z$-type logical operators is $(G^X)^\perp/S^Z$ and the group of bare $X$-type logical operators is $(G^Z)^\perp/S^X$. Therefore the number $k$ of protected qubits is $$\begin{aligned}
k &=& [(G^X)^\perp/S^Z]= n - [G^X] - [S^Z],\\
k &=& [(G^Z)^\perp/S^X]= n - [G^Z] - [S^X].\end{aligned}$$
We wish to study stabilizer codes in which the stabilizer generators are geometrically local and subsystem codes in which the gauge generators are geometrically local. To be concrete, we may imagine that the qubits reside at the vertices of a $D$-dimensional hypercubic lattice (with either open or periodic boundary conditions), and that each generator acts nontrivially only inside a hypercube (containing $w^D$ vertices) with linear size $w$. In fact our results can be easily extended to codes with geometrically local generators defined on any graph embedded in $D$-dimensional space. Note that for a subsystem code the stabilizer generators might be nonlocal even if the gauge generators are local. Some of our results also apply to a larger class of local codes that includes local stabilizer codes. For this class, which we call *local commuting projector codes*, the code space is the simultaneous eigenspace with eigenvalue one of a set of mutually commuting geometrically local projection operators, where the projectors do not necessarily project onto eigenspaces of Pauli operators. A local stabilizer code, but not a local subsystem code, is a special case of a local commuting projector code.
Cleaning lemma for subsystem codes
==================================
The Cleaning Lemma for subsystem codes relates the number of independent bare logical operators supported on a set of qubits $M$ to the number of independent dressed logical operators supported on the complementary set $M^c$. The concept of the Cleaning Lemma was introduced in [@BravyiTerhal2008no-go], then generalized in [@YoshidaChuang2010Framework] and [@Bravyi2010Subsystem]. Here we use ideas from [@YoshidaChuang2010Framework] to prove a version stated in [@Bravyi2010Subsystem]. (See also [@WildeFattal2009Nonlocal].) As in Sec. \[sec:background\], we will regard a subgroup of the Pauli group as a vector space, allowing us to obtain the Cleaning Lemma from straightforward dimension counting.
We use $P_A$ to denote the subgroup of the Pauli group $P$ supported on a set $A$ of qubits; likewise, for any subgroup $G$ of the Pauli group, $G_A = G \cap P_A$ is the subgroup of $G$ supported on $A$. We denote by $\Pi_A : P \to P_A$ the restriction map that maps a Pauli operator to its restriction supported on the set $A$, and we use $|A|$ to denote the number of qubits contained in $A$; thus $[P_A] = 2|A|$.
If we divide $n$ qubits into two complementary sets $A$ and $B$, then a subgroup $G$ of $P$ can be decomposed into $G_A$, $G_B$, and a “remainder,” as follows:
*(Decomposition of Pauli subgroups)* Suppose that $A$ and $B$ are complementary sets of qubits. Then for any subgroup $G$ of the Pauli group, $$G = G_A \oplus G_B \oplus G'$$ for some $G'$, where $$\begin{aligned}
[ (G^\perp)_A ] &= 2|A| - [G_A] - [G'] ,\\
[ (G^\perp)_B ] &= 2|B| - [G_B] - [G'].\end{aligned}$$
If $V$ is a vector space and $W$ is a subspace of $V$, then there is a vector space $V'$ such that $V=W\oplus V'$; we may choose $V'$ to be the span of the basis vectors that extend a basis for $W$ to a basis for $V$. Since $G_A$ and $G_B$ are disjoint, i.e., $G_A \cap G_B = \{0\}$, $G_A\oplus G_B$ is a subspace of $G$, and thus there exists an auxiliary vector space $G' \leq G$ such that $$G = G_A \oplus G_B \oplus G'.$$ The choice of $G'$ is not canonical, but we need only its existence. Since the restriction map $\Pi_A$ obviously annihilates $G_B$, we may regard it as a map from $G_A\oplus G'$ onto $\Pi_A G$. In fact this map is injective: if $\Pi_A x = 0$ for some $x \in G_A\oplus G'$, then since $P=P_A\oplus P_B$ it must be that $x \in G_B$; but because the sum is direct, i.e. $G_B \cap (G_A\oplus G') = \{0\}$, it follows that $x = 0$, which proves injectivity. Hence $\Pi_A: G_A\oplus G'\to \Pi_A G$ is an isomorphism. Now, we may calculate $(G^\perp)_A$ by solving a system of linear equations. Noting that $x \in P_A$ is contained in $G^\perp$ if and only if $x$ commutes with the restriction to $A$ of each element of $G$, we see that the number of independent linear constraints is $[\Pi_A G] = [G_A] + [G']$; hence $[(G^\perp)_A]=[P_A] - [G_A] - [G']= 2|A| - [G_A] - [G']$. Likewise, $\Pi_B: G_B\oplus G'\to \Pi_B G$ is also an isomorphism, and hence $[(G^\perp)_B]=[P_B] - [G_B] - [G']= 2|B| - [G_B] - [G']$.
Now we are ready to state and prove the Cleaning Lemma. For a subsystem code, let $g_{\rm bare}(M)$ be the number of independent non-trivial bare logical operators supported on $M$, and let $g(M)$ be the number of independent non-trivial dressed logical operators supported on $M$, i.e., $$\begin{aligned}
g_{\rm bare}(M) &= [G^\perp \cap P_M / S_M ] = [(G^\perp)_M/S_M], \\
g(M) &= [S^\perp \cap P_M / G_M ]= [(S^\perp)_M/G_M].\end{aligned}$$ Likewise, for a CSS subsystem code, let $g_{\rm bare}^X(M)$ be the number of independent non-trivial bare $X$-type logical operators supported on $M$, and let $g^X(M)$ be the number of independent non-trivial dressed $X$-type logical operators supported on $M$, i.e., $$\begin{aligned}
g_{\rm bare}^X(M) &= [(G^Z)^\perp \cap P^X_M / S^X_M], \\
g^X(M) &= [(S^Z)^\perp \cap P^X_M / G^X_M],\end{aligned}$$ and similarly for the $Z$-type logical operators.
*(Cleaning Lemma for subsystem codes)* For any subsystem code, we have $$g_{\rm bare}(M) + g(M^c) = 2k ,$$ where $M$ is any set of qubits and $M^c$ is its complement. Moreover, for a CSS subsystem code $$g_{\rm bare}^X(M) + g^Z(M^c) = k = g_{\rm bare}^Z(M) + g^X(M^c).$$ \[lem:counting\_op\]
We use Lemma 1 to prove the Cleaning Lemma by a direct calculation: $$\begin{aligned}
g_{\rm bare}(M)
&= [(G^\perp)_M / S_M] \\
&= 2|M| - [G_M] - [G'] - [S_M] ,\end{aligned}$$ and $$\begin{aligned}
g(M^c)
&= [(S^\perp)_{M^c} / G_{M^c}] \\
&= 2|M^c| - [S_{M^c}] - [S'] - [G_{M^c}] .\end{aligned}$$ Summing, we find $$\begin{aligned}
g_{\rm bare}(M) + g(M^c)
&= 2|M| + 2|M^c| \\
&-([G_M] + [G_{M^c}] + [G'])\\
&-([S_M] + [S_{M^c}] + [S'])\end{aligned}$$ and invoking Lemma 1 once again, $$\begin{aligned}
g_{\rm bare}(M) + g(M^c)
&= 2n - [G] - [S] = 2k ,\end{aligned}$$ which proves the claim for general subsystem codes. For the CSS case, we apply the analogue of Lemma 1 to the $X$-type and $Z$-type Pauli operators, finding $$\begin{aligned}
g^Z_{\rm bare}(M)
&= [ (G^X)^\perp \cap P^Z_{M} / S^Z_{M} ] \\
&= |M| - [G^X_{M}] - [(G^X)'] - [S^Z_{M}]\end{aligned}$$ and also $$\begin{aligned}
g^X(M^c)
&= [ (S^Z)^\perp \cap P^X_{M^c} / G^X_{M^c} ] \\
&= |M^c| - [S^Z_{M^c}] - [(S^Z)'] - [G^X_{M^c}]. \end{aligned}$$ Summing and using Lemma 1 we have $$\begin{aligned}
g_{\rm bare}^Z(M) + g^X(M^c)
&= n - [G^X]-[S^Z] =k ;\end{aligned}$$ a similar calculation yields $$\begin{aligned}
g_{\rm bare}^X(M) + g^Z(M^c)
&= n - [G^Z]-[S^X] =k ,\end{aligned}$$ proving the claim for CSS subsystem codes.
Of course, for a stabilizer code there is no distinction between bare and dressed logical operators; the statement of the Cleaning Lemma becomes $$g(M) + g(M^c) = 2k$$ for general stabilizer codes, and $$g^X(M) + g^Z(M^c) = k$$ for CSS stabilizer codes.
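For stabilizer codes, the counting identity $g(M) + g(M^c) = 2k$ can be verified numerically in small cases. The following sketch is our own illustration (all names hypothetical); it uses the kernel-counting identity $[S_M] = [S] - \mathrm{rank}(S|_{M^c})$, which follows from the same dimension counting as Lemma 1, and checks the identity for the three-qubit repetition code.

```python
import numpy as np

def to_symplectic(s):
    """Pauli string -> (x|z) binary vector, ignoring phases."""
    return np.array([int(c in 'XY') for c in s] +
                    [int(c in 'ZY') for c in s], dtype=np.uint8)

def gf2_rank(rows):
    """Rank of a binary matrix over GF(2)."""
    m = np.array(rows, dtype=np.uint8) % 2
    if m.size == 0:
        return 0
    rank = 0
    for c in range(m.shape[1]):
        pivot = next((r for r in range(rank, m.shape[0]) if m[r, c]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        for r in range(m.shape[0]):
            if r != rank and m[r, c]:
                m[r] ^= m[rank]
        rank += 1
    return rank

def g(M, gens, n):
    """g(M) = [(S^perp)_M] - [S_M] for a stabilizer code with generators `gens`."""
    S = [to_symplectic(s) for s in gens]
    Mc = [q for q in range(n) if q not in M]
    # (S^perp)_M: vectors (vx_M | vz_M) symplectically orthogonal to every
    # generator; the constraint row contributed by s is (s_z|_M , s_x|_M).
    constraints = [[int(s[n + q]) for q in M] + [int(s[q]) for q in M] for s in S]
    dim_Sperp_M = 2 * len(M) - gf2_rank(constraints)
    # [S_M] = [S] - rank(S restricted to M^c): generator combinations that
    # vanish outside M form the kernel of the restriction map.
    restricted = [[int(s[q]) for q in Mc] + [int(s[n + q]) for q in Mc] for s in S]
    dim_S_M = gf2_rank([list(s) for s in S]) - gf2_rank(restricted)
    return dim_Sperp_M - dim_S_M

# Three-qubit repetition code: n = 3, k = 1, S = <ZZI, IZZ>.
gens, n, k = ['ZZI', 'IZZ'], 3, 1
print(g([0], gens, n) + g([1, 2], gens, n))  # 2 = 2k
print(g([0, 1], gens, n) + g([2], gens, n))  # 2 = 2k
```

Both partitions reproduce $g(M)+g(M^c)=2k$; for example, $Z_1$ is the single independent logical operator supported on qubit 1 alone.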
To understand how the Cleaning Lemma gets its name, note that it implies that if no bare logical operator can be supported on the set $M$ then all dressed logical operators can be supported on its complement $M^c$. That is, any of the code’s dressed logical Pauli operators can be “cleaned up” by applying elements of the gauge group $G$. The cleaned operator acts the same way on the protected qubits as the original operator (though it might act differently on the gauge qubits), and acts trivially on $M$.
We say that a region $M$ is *correctable* if erasure of the qubits in $M$ is a correctable error. For a subsystem code, it follows that no nontrivial dressed logical operators are supported on $M$ if $M$ is correctable; hence $g(M)=0$ and thus $g_{\rm bare}(M)=0$. The Cleaning Lemma then asserts that all dressed logical operators can be supported on $M^c$. Let us say that two dressed logical operators $x$ and $y$ are *equivalent* if $x=yz$ and $z$ is an element of the gauge group $G$, so that $x$ and $y$ act the same way on the protected qubits. We have obtained:
*(Cleaning Lemma for dressed logical operators)* \[lem:clean-region\] For any subsystem code, if $M$ is a correctable region and $x$ is a dressed logical operator, then there is a dressed logical operator $y$ supported on $M^c$ that is equivalent to $x$.
Operator tradeoff for local subsystem codes {#sec:subsystem-tradeoff}
===========================================
In this section we consider local subsystem codes with qubits residing at the sites of a $D$-dimensional hypercubic lattice $\Lambda$. The code has *interaction range* $w$, meaning that the generators of the gauge group $G$ can be chosen so that each generator has support on a hypercube containing $w^D$ sites.
\[defn:boundary\] Given a set of gauge generators for a subsystem code, and a set of qubits $M$, let $M'$ denote the support of all the gauge generators that act nontrivially on $M$. The *external boundary* of $M$ is $\partial_+ M = M' \cap M^c$, where $M^c$ is the complement of $M$, and the *internal boundary* of $M$ is $\partial_- M = \left(M^c\right)' \cap M$. The *boundary* of $M$ is $\partial M=\partial_+M\cup\partial_-M$, and the *interior* of $M$ is $M^\circ = M \setminus \partial_- M$.
Recall that a region (*i.e.*, set of qubits) $M$ is said to be *correctable* if no nontrivial dressed logical operation is supported on $M$, in which case erasure of $M$ can be corrected. Since the code distance $d$ is defined as the minimum weight of a dressed logical operator, $M$ is certainly correctable if $|M| < d$. But in fact much larger regions are also correctable, as follows from this lemma:
*(Expansion Lemma for local subsystem codes)* For a local subsystem code, if $M$ and $A$ are both correctable, where $A$ contains $\partial M$, then $M\cup A$ is correctable. \[lem:subsystem-extend\]
Given a subsystem code $\mathcal{C}$ with gauge group $G$, we may define a subsystem code $\mathcal{C}_{M^c}$ on $M^c$ with gauge group $\Pi_{M^c}G$, where $\Pi_{M^c}$ maps a Pauli operator to its restriction supported on $M^c$. We note that a Pauli operator $x$ supported on $M^c$ is a bare logical operator for $\mathcal{C}$ if and only if $x$ is a bare logical operator for $\mathcal{C}_{M^c}$; that is, $x$ commutes with all elements of $G$ if and only if it commutes with all elements of the restriction of $G$ to $M^c$.
Furthermore, if $x$ is a dressed logical operator for $\mathcal{C}_{M^c}$ supported on $\partial_+M$, then $x$ can be extended to a dressed logical operator $\bar x$ for $\mathcal{C}$ supported on $\partial M$. Indeed, suppose $x=yz$, where $y$ is a bare logical operator for $\mathcal{C}_{M^c}$ (and hence also a bare logical operator for $\mathcal{C}$ supported on $M^c$), while $z$ is an element of the gauge group $\Pi_{M^c} G$ of $\mathcal{C}_{M^c}$. Then $z$ can be written as a product $z=\prod_i g_i$ of generators of $\Pi_{M^c} G$, each of which can be expressed as $g_i = \Pi_{M^c} \bar g_i$, where $\bar g_i$ is a generator of $G$ supported on $M^c\cup \partial_-M$. Thus $\bar x = y\prod_i\bar g_i$ is a dressed logical operator for $\mathcal{C}$ supported on $\partial M$.
It follows that if $\partial M$ is correctable for the code $\mathcal{C}$ (*i.e.*, code $\mathcal{C}$ has no nontrivial dressed logical operators supported on $\partial M$), then $\partial_+ M$ is correctable for the code $\mathcal{C}_{M^c}$ ($\mathcal{C}_{M^c}$ has no nontrivial dressed logical operators supported on $\partial_+ M$). By similar logic, if $A$ is correctable for $\mathcal{C}$ and contains $\partial M$, then $A\cap M^c$ is correctable for $\mathcal{C}_{M^c}$.
Suppose now that the code $\mathcal{C}$ has $k$ encoded qubits and that $M$ is correctable, *i.e.* $g^{(\mathcal{C})}(M)=0$. Therefore, applying Lemma \[lem:counting\_op\] to the code $\mathcal{C}$, $g_{\rm bare}^{(\mathcal{C})}(M^c)= 2k$. Suppose further that the set $A$ containing $\partial M$ is correctable for $\mathcal{C}$, implying that $A\cap M^c$ is correctable for $\mathcal{C}_{M^c}$, *i.e.* $g^{(\mathcal{C}_{M^c})}(A\cap M^c)=0$. Then applying Lemma \[lem:counting\_op\] to the code $\mathcal{C}_{M^c}$, we conclude that $g_{\rm bare}^{(\mathcal{C}_{M^c})}(M^c\setminus A)=2k$. Since each bare logical operator for $\mathcal{C}_{M^c}$, supported on $M^c\setminus A$, is also a bare logical operator for $\mathcal{C}$, supported on $M^c\setminus A$, we can now apply Lemma \[lem:counting\_op\] once again to the code $\mathcal{C}$, using the partition into $M^c\setminus A$ and $M\cup A$, finding $g^{(\mathcal{C})}(M \cup A)=0$. Thus $M \cup A$ is correctable.
If the interaction range is $w$, and $M$ is a correctable hypercube with linear size $l-2(w-1)$, then we may choose $A\supseteq \partial M$ so that $M\cup A$ is a hypercube with linear size $l$ and $M\setminus A$ is a hypercube with linear size $l - 4(w-1)$. Then $A$ contains $$|A| = l^D - \left[l-4(w-1)\right]^D \le 4(w-1)Dl^{D-1}$$ qubits, and $A$ is surely correctable provided $|A|<d$, where $d$ is the code distance. Suppose that $d>1$, so a single site is correctable. Applying Lemma \[lem:subsystem-extend\] repeatedly, we can build up larger and larger correctable hypercubes, with linear size $1 + 2(w-1), 1+ 4(w-1), 1+ 6(w-1), \dots$. This process continues as long as $|A|< d$. We conclude:
*(Holographic Principle for local subsystem codes)* \[lem:subsystem-hypercube\] For a $D$-dimensional local subsystem code with interaction range $w>1$ and distance $d>1 $, a hypercube with linear size $l$ is correctable if $$\label{eq:hypercube-size}
4(w-1)Dl^{D-1} < d.$$
Thus (roughly speaking) for the hypercube to be correctable it suffices for its $\left[2(w-1)\right]$-thickened *boundary*, rather than its volume, to be smaller than the code distance. Bravyi [@Bravyi2010Subsystem] calls this property “the holographic principle for error correction,” because the *absence* of information encoded at the boundary of a region ensures that no information is encoded in the “bulk.” For local stabilizer codes, the criterion for correctability is slightly weaker than for local subsystem codes, as we discuss in Appendix \[app:holographic\_lemma\_stabilizer\_codes\].
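As a quick arithmetic sketch of the sufficient condition (\[eq:hypercube-size\]) (our own helper with a hypothetical name, not a tight bound):

```python
def max_correctable_hypercube(d, w, D):
    """Largest linear size l with 4*(w-1)*D*l**(D-1) < d, the sufficient
    correctability condition of the Holographic Principle."""
    l = 0
    while 4 * (w - 1) * D * (l + 1) ** (D - 1) < d:
        l += 1
    return l

# D = 2, w = 2, d = 100: the condition reads 8*l < 100, so l = 12.
print(max_correctable_hypercube(100, 2, 2))  # 12
# In D = 3 the guaranteed correctable size grows only like d**(1/2).
print(max_correctable_hypercube(100, 2, 3))  # 2
```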
Now we are ready to prove our first tradeoff theorem.
![(Color online) Lattice covering used in the proof of Theorem 1, shown in two dimensions. Each gray square is $l\times l$ and the white gap between squares has width $w-1$. The solid blue curve represents the support of a nontrivial logical operator; because the square $M_i$ is correctable, this square can be “cleaned” — we can find an equivalent logical operator supported on $M_i^c$, the complement of $M_i$. When all squares are cleaned, the logical operator is supported on the narrow strips between the squares. []{data-label="fig:cleaning"}](lop_cleaning.png){width="42.00000%"}
*(Tradeoff Theorem for local subsystem codes)* For a local subsystem code in $D\ge 2$ dimensions with interaction range $w>1$ and distance $d\gg w$, defined on a hypercubic lattice with linear size $L$, every dressed logical operator is equivalent to an operator with weight $\tilde d$ satisfying $$\label{eq:tradeoff-bound}
\tilde d {d}^{1/(D-1)} < c L^D,$$ where $c$ is a constant depending on $w$ and $D$. \[thm:subsystem-tradeoff\]
As shown in Fig. \[fig:cleaning\], we fill the lattice with hypercubes, separated by distance $w-1$, such that each hypercube has linear size $l$ satisfying eq. (\[eq:hypercube-size\]). (By “distance” we mean the number of sites in between — *e.g.* we say that adjacent sites are “distance zero” apart.) Thus no gauge generator acts nontrivially on more than one hypercube, and each hypercube is correctable by Lemma \[lem:subsystem-hypercube\]. Consider any nontrivial dressed logical operator $x$, and label the hypercubes $\{M_1, M_2, M_3, \dots\}$. By Lemma \[lem:clean-region\] there exists a gauge operator $y_i$ that “cleans” the logical operator in the hypercube $M_i$, *i.e.*, such that $xy_i$ acts trivially in $M_i$. Furthermore, since no gauge generator acts nontrivially on more than one hypercube, we can choose $y_i$ so that it acts trivially in all other hypercubes. Taking the product of all the $y_i$’s we construct a gauge operator that cleans all hypercubes simultaneously; thus $\tilde x= x\prod_i y_i$ is equivalent to $x$ and supported on the complement of the union of hypercubes $M=\cup_i M_i$. Therefore, the weight $\tilde d$ of $\tilde x$ is upper bounded by $|M^c|$.
The lattice is covered by hypercubes of linear size $l+(w-1)$, each centered about one of the $M_i$’s. There are $L^D/\left[l+(w-1)\right]^D$ such hypercubes in the covering, each containing no more than $\left[l+(w-1)\right]^D - l^D \le (w-1)D\left[l+(w-1)\right]^{D-1}$ elements of $M^c$. Thus $$\begin{aligned}
\tilde d \le |M^c| &\le (w-1)D\left[l+(w-1)\right]^{D-1}\frac{L^D}{\left[l+(w-1)\right]^D} \nonumber\\
&= \frac{(w-1)D}{l+(w-1)}L^D.\end{aligned}$$ We optimize this upper bound on $\tilde d$ by choosing $l$ to be the largest integer such that a hypercube with linear size $l$ is known to be correctable, *i.e.*, satisfying $$l < \left(\frac{d}{4(w-1)D}\right)^{1/(D-1)},$$ thus obtaining eq. (\[eq:tradeoff-bound\]). Note that eq. (\[eq:tradeoff-bound\]) is trivial if $d$ is a constant independent of $L$, since the weight $\tilde d$ cannot be larger than $L^D$.
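The optimization in the proof can be sketched numerically (our illustration, hypothetical names): choose the largest $l$ satisfying the correctability condition, bound $\tilde d$, and observe that $\tilde d\, d^{1/(D-1)}/L^D$ stays bounded as $L$ grows.

```python
def largest_correctable_l(d, w, D):
    """Largest l satisfying the correctability condition 4*(w-1)*D*l**(D-1) < d."""
    l = 0
    while 4 * (w - 1) * D * (l + 1) ** (D - 1) < d:
        l += 1
    return l

def cleaned_weight_bound(L, d, w, D):
    """Bound (w-1)*D*L**D / (l + w - 1) on the weight of the cleaned operator."""
    l = largest_correctable_l(d, w, D)
    return (w - 1) * D * L ** D / (l + w - 1)

# D = 2, w = 2, d = L: the cleaned weight is O(L), so tilde_d * d = O(L^2);
# the printed ratio tilde_d * d / L^2 stays bounded (around 16 here).
for L in (100, 200, 400):
    print(cleaned_weight_bound(L, L, 2, 2) * L / L ** 2)
```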
Operator tradeoff for local commuting projector codes
=====================================================
In this section we consider a local commuting projector code, defined as the simultaneous eigenspace with eigenvalue one of a set of commuting projectors. As in Sec. \[sec:subsystem-tradeoff\] we assume that the qubits reside on a hypercubic lattice $\Lambda$ and that each projector acts trivially outside a hypercube of linear size $w$, where $w$ is the interaction range. By a *logical operator* we mean any transformation that preserves the code space, and we say that two logical operators are *equivalent* if they have the same action on the code space. The weight of a logical operator is the number of qubits on which it acts nontrivially. We say that a set of qubits $M$ is correctable if erasure of $M$ can be reversed by a trace-preserving completely positive recovery map. The distance $d$ of the code is the minimum size of a noncorrectable set of qubits.
Bravyi, Poulin, and Terhal [@BravyiPoulinTerhal2010Tradeoffs] proved some useful properties of these codes. To state their results, we use the definition
\[defn:boundary-projector\] Given a set of commuting projectors defining a code, and a set of qubits $M$, let $M'$ denote the support of all the projectors that act nontrivially on $M$. The *external boundary* of $M$ is $\partial_+ M = M' \cap M^c$, where $M^c$ is the complement of $M$, and the *internal boundary* of $M$ is $\partial_- M = \left(M^c\right)' \cap M$. The *boundary* of $M$ is $\partial M=\partial_+M\cup\partial_-M$, and the *interior* of $M$ is $M^\circ = M \setminus \partial_- M$.
\[lem:disentangling\] *(Disentangling Lemma [@BravyiPoulinTerhal2010Tradeoffs])* Consider a local commuting projector code and suppose that $M$ and $\partial_+M$ are both correctable regions. Then there exists a unitary operator $U_{\partial M}$ acting only on the boundary $\partial M$ such that, for any pure code vector $|\psi\rangle$, $$\label{eq:disentangling}
U_{\partial M} {| {\psi} \rangle}
= {| {\phi_M} \rangle} \otimes {| {\psi'_{M^c}} \rangle} .$$ Here ${| {\phi_M} \rangle}$, supported on $M$, does not depend on the code vector $|\psi\rangle$, while ${| {\psi'_{M^c}} \rangle}$, supported on $M^c$, does depend on $|\psi\rangle$.
The Disentangling Lemma says that, if $M$ and $\partial_+M$ are both correctable, then the entanglement of code vectors across the cut between $M$ and $M^c$ is localized in $\partial M$ and can be removed by a unitary transformation acting on only $\partial M$. Furthermore, in the resulting product state, no information distinguishing one code vector from another is available in $M$. This Lemma has a simple but important corollary:
*(Expansion Lemma for local commuting projector codes [@BravyiPoulinTerhal2010Tradeoffs])* For a local commuting projector code, if $M$ and $A$ are both correctable, where $A$ contains $\partial M$, then $M\cup A$ is correctable. \[lem:commuting-projector-extend\]
By eq.(\[eq:disentangling\]), if $A$ is erased the resulting state on $M\setminus A$ is independent of the code vector $|\psi\rangle$; all the information needed to reconstruct $|\psi\rangle$ resides in $M^c\setminus A$. Therefore, we can erase $M\setminus A$ as well without compromising our ability to reconstruct $|\psi\rangle$; that is, $M\cup A$ is correctable.
Definition \[defn:boundary-projector\] and Lemma \[lem:commuting-projector-extend\] for commuting projector codes are parallel to Definition \[defn:boundary\] and Lemma \[lem:subsystem-extend\] for subsystem codes. Arguing as in the proof of Lemma \[lem:subsystem-hypercube\], we see that one consequence is a holographic principle for these codes:
*(Holographic Principle for local commuting projector codes)* \[lem:projector-hypercube\] For a $D$-dimensional local commuting projector code with interaction range $w>1$ and distance $d>1 $, a hypercube with linear size $l$ is correctable if $$\label{eq:hypercube-size-projector}
4(w-1)Dl^{D-1} < d.$$
We will need an analog of the Cleaning Lemma to analyze the logical operator tradeoff for local commuting projector codes; it can be derived from the Disentangling Lemma.
*(Cleaning Lemma for local commuting projector codes)* Consider a local commuting projector code, and suppose that $M$ and $\partial_+ M$ are both correctable. For any logical operator $W$ there exists an equivalent logical operator $V$ supported on the complement of the interior $M^{\circ}$ of $M$. If $W$ is an isometry, then $V$ can be chosen to be unitary. \[lem:cleaning-projector\]
Let us name the regions: $$\begin{aligned}
A =& M^\circ = M \setminus \partial_- M, & B =& \partial_- M, \\
C =& \partial_+ M, & D =& (ABC)^c.\end{aligned}$$ Let $\{ {| {\alpha_i} \rangle} \}$ be an orthonormal basis for the code space. By Lemma \[lem:disentangling\], there exists a unitary transformation $U_{BC}$, and vectors ${| {\phi} \rangle}_{AB}, \{{| {\alpha'_i} \rangle}_{CD}\}$ such that $${| {\alpha_i} \rangle} = U_{BC} {| {\phi} \rangle}_{AB} \otimes {| {\alpha'_i} \rangle}_{CD} ,$$ where the normalized vector $|\phi\rangle_{AB}$ does not depend on $i$ and the vectors $\{|\alpha'_i\rangle_{CD}\}$ are normalized and mutually orthogonal. Because $W$ is a logical operator, ${| {\beta_i} \rangle} \equiv W {| {\alpha_i} \rangle}$ is also a code vector, and therefore $${| {\beta_i} \rangle} = U_{BC} {| {\phi} \rangle}_{AB} \otimes {| {\beta'_i} \rangle}_{CD}$$ where $\{{| {\beta'_i} \rangle}_{CD}\}$ is another set of vectors; if $W$ is an isometry then these vectors, too, are normalized and mutually orthogonal. Define a transformation $V'$ by ${| {\beta'_i} \rangle}_{CD} = V' {| {\alpha'_i} \rangle}_{CD}$, and choose an arbitrary extension so that $V'$ becomes an operator on $CD$. If $W$ is an isometry, then this extension $V'_{CD}$ can be chosen to be unitary. We now have $$\begin{aligned}
W {| {\alpha_i} \rangle}= |\beta_i\rangle &= U_{BC} (I_{AB} \otimes V'_{CD}) {| {\phi} \rangle}_{AB}\otimes {| {\alpha'_i} \rangle}_{CD}\\
&=U_{BC} (I_{AB} \otimes V'_{CD}) U_{BC}^\dagger {| {\alpha_i} \rangle}\end{aligned}$$ for all $i$. Defining $$V_{{(M^\circ})^c} = U_{BC} (I_{AB} \otimes V'_{CD}) U_{BC}^\dagger,$$ we observe that $V_{{(M^\circ})^c}$ acts trivially on $A=M^\circ$ and has the same action on code vectors as $W$, completing the proof.
To prove the tradeoff theorem we will need a further lemma establishing that a union of correctable sets is correctable under suitable conditions. Recall that we say a set of qubits $M$ is correctable if and only if erasure of $M$ can be corrected. Equivalently, $M$ is correctable if and only if, for any operator $\mathcal{O}$ supported on $M$, $$\label{eq:correctable}
\Pi \mathcal{O} \Pi = c_{\mathcal{O}} \Pi$$ where $\Pi$ denotes the projector onto the code space and $c_{\mathcal{O}}$ is a constant (possibly zero) depending on $\mathcal{O}$ [@Gottesman2009Introduction].
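Condition (\[eq:correctable\]) can be checked numerically in a small example. The sketch below is our own illustration, using the five-qubit code (distance 3, so every single qubit is correctable) as an assumed example; since no non-identity single-qubit Pauli lies in the stabilizer group, the constant $c_{\mathcal{O}}$ vanishes for each of them.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
Y = 1j * X @ Z
PAULI = {'I': I2, 'X': X, 'Y': Y, 'Z': Z}

def pauli(s):
    """Tensor product of single-qubit Paulis specified by a string."""
    return reduce(np.kron, [PAULI[c] for c in s])

# Five-qubit code: stabilizer generated by XZZXI and its cyclic shifts.
gens = ['XZZXI', 'IXZZX', 'XIXZZ', 'ZXIXZ']
Pi = reduce(np.matmul, [(np.eye(32) + pauli(g)) / 2 for g in gens])
assert abs(np.trace(Pi).real - 2) < 1e-9  # k = 1: two-dimensional code space

# Distance 3 means every single qubit is correctable: Pi O Pi = c Pi for any
# operator O on one qubit, with c = tr(Pi O)/tr(Pi) = 0 for non-identity O.
for q in range(5):
    for op in 'XYZ':
        O = pauli('I' * q + op + 'I' * (4 - q))
        assert np.linalg.norm(Pi @ O @ Pi) < 1e-9
print("single-qubit erasures are correctable")
```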
*(The union of separated correctable regions is correctable)* For a local commuting projector code, suppose that $M$ and $N$ are correctable regions such that no projector acts nontrivially on both $M$ and $N$. Then $M \cup N$ is also correctable. \[lem:extension\_of\_correctable\_region\]
A weaker version of this lemma was proved in [@BravyiPoulinTerhal2010Tradeoffs].
Let $\mathcal{S}$ be the set of local commuting projectors that define the code. We denote by $\mathcal{S}'_N$ the set of projectors in $\mathcal{S}$ that act nontrivially on $N$. Define $$\begin{aligned}
\Pi_{N'} &= \prod_{\Pi_a \in \mathcal{S}'_N} \Pi_a ,\\
\Pi_{N'}^c &= \prod_{\Pi_a \in \mathcal{S} \setminus \mathcal{S}'_N} \Pi_a ,\end{aligned}$$ and note that the projector onto the code space is $$\Pi = \prod_{\Pi_a\in\mathcal{S}}\Pi_a=\Pi_{N'} \Pi_{N'}^c.$$ Also note that the support of $\Pi_{N'}$ does not intersect $M$ and the support of $\Pi_{N'}^c$ does not intersect $N$. Let $\mathcal{O}$ be an arbitrary operator supported on $M \cup N$; we will show that $\mathcal{O}$ satisfies eq. (\[eq:correctable\]). Since $M$ and $N$ are disjoint, $\mathcal{O}$ has a Schmidt decomposition $$\mathcal{O} = \sum_\alpha \mathcal{O}_M^\alpha \otimes \mathcal{O}_N^\alpha$$ where each $\mathcal{O}_M^\alpha$ is supported on $M$ and each $\mathcal{O}_N^\alpha$ is supported on $N$. Since $\Pi_{N'}$ commutes with $\mathcal{O}_M^\alpha$ and $\Pi_{N'}^c$ commutes with $\mathcal{O}_N^\alpha$, $$\begin{aligned}
\Pi \mathcal{O} \Pi
&= \sum_\alpha \Pi (\Pi_{N'}) \mathcal{O}_M^\alpha \mathcal{O}_N^\alpha (\Pi_{N'}^c) \Pi \\
&= \sum_\alpha \Pi \mathcal{O}_M^\alpha (\Pi_{N'}) (\Pi_{N'}^c) \mathcal{O}_N^\alpha \Pi \\
&= \sum_\alpha \left(\Pi \mathcal{O}_M^\alpha \Pi \right)\left(\Pi\mathcal{O}_N^\alpha \Pi\right) \\
&= \sum_\alpha c_{ \mathcal{O}_M^\alpha } c_{ \mathcal{O}_N^\alpha } \Pi \\
&= c_{\mathcal{O}} \Pi\end{aligned}$$ where in the fourth equality we used the correctability of $M$ and $N$. Thus $\mathcal{O}$ obeys eq. (\[eq:correctable\]), and $M \cup N$ is correctable.
Now we are ready to state and prove our second tradeoff theorem.
*(Tradeoff Theorem for local commuting projector codes)* For a local commuting projector code in $D\ge 2$ dimensions with interaction range $w>1$ and distance $d\gg w$, defined on a hypercubic lattice with linear size $L$, every logical operator is equivalent to an operator with weight $\tilde d$ satisfying $$\label{eq:tradeoff-bound-again}
\tilde d {d}^{1/(D-1)} < c L^D,$$ where $c$ is a constant depending on $w$ and $D$. \[thm:commuting\_tradeoff\]
The proof is similar to the proof of Theorem \[thm:subsystem-tradeoff\]. We fill the lattice with hypercubes, separated by distance $w-1$, where each hypercube $M_i$ has linear size $l$ sufficiently small so that $M_i$ and $\partial_+M_i$ are both correctable. Applying Lemma \[lem:extension\_of\_correctable\_region\] repeatedly, we conclude that the union $M$ of all $M_i$ is correctable, and the union $\partial_+M$ of all $\partial_+ M_i$ is correctable.
For any logical operator, Lemma \[lem:cleaning-projector\] now ensures the existence of an equivalent logical operator supported outside the interior $M^\circ$ of $M$, and hence the weight $\tilde d$ of this equivalent logical operator is bounded above by $|(M^\circ)^c|$. The lattice is covered by hypercubes with linear size $l + (w-1)$, each centered about one of the $M_i$, and there are $L^D/\left[l+(w-1)\right]^D$ such hypercubes, each containing no more than $$\begin{aligned}
\left[l+(w-1)\right]^D - \left[l-2(w-1)\right]^D \nonumber\\
\le 3(w-1)D\left[l+(w-1)\right]^{D-1}\end{aligned}$$ elements of $(M^\circ)^c$; therefore, $$\begin{aligned}
\tilde d &\le |(M^\circ)^c| \\
&\le 3(w-1)D\left[l+(w-1)\right]^{D-1}\frac{L^D}{\left[l+(w-1)\right]^D}\\
& = \frac{3(w-1)D}{l+(w-1)}L^D.\end{aligned}$$
To ensure that $M_i$ and $\partial_+ M_i$ are correctable, it suffices that $|\partial M_i| < d$, where $d$ is the code distance, *i.e.*, $$\begin{aligned}
|\partial M_i|
&\le \left[l+2(w-1)\right]^D - \left[l-2(w-1)\right]^D \\
&\le 4(w-1)D\left[l+2(w-1)\right]^{D-1} < d.\end{aligned}$$ We choose the largest such integer value of $l$, obtaining eq. (\[eq:tradeoff-bound-again\]).
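To make the constants concrete, here is a small Python sketch; the helper names `largest_correctable_l` and `weight_bound` are ours, and the formulas are taken directly from the proof above:

```python
def largest_correctable_l(d, w, D):
    """Largest integer l with 4*(w-1)*D*(l + 2*(w-1))**(D-1) < d,
    the sufficient condition for each hypercube M_i and its thickened
    boundary to be correctable."""
    l = 0
    while 4 * (w - 1) * D * ((l + 1) + 2 * (w - 1)) ** (D - 1) < d:
        l += 1
    return l

def weight_bound(d, w, D, L):
    """Bound 3*(w-1)*D*L**D / (l + (w-1)) on the weight of an equivalent
    logical operator, with l chosen as large as the distance allows."""
    l = largest_correctable_l(d, w, D)
    return 3 * (w - 1) * D * L ** D / (l + (w - 1))
```

For instance, with $D=2$, $w=2$, $d=100$ the largest admissible $l$ is 10; increasing the distance $d$ enlarges $l$ and shrinks the weight bound, in line with the tradeoff $\tilde d\, d^{1/(D-1)} < c L^D$.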
“String” operators for local commuting projector codes {#sec:string}
======================================================
Because the code distance $d$ is defined as the size of the smallest noncorrectable set, and because a set supporting a nontrivial logical operator is noncorrectable, we have $d\le \tilde d$ and hence Theorem \[thm:commuting\_tradeoff\] implies $$d = O(L^{D-1}).$$ In fact we can make a stronger statement, specifying the geometry of a region that supports a nontrivial logical operator with weight $O(L^{D-1})$. On the hypercube $\{1,2,3,\dots L\}^D$, we refer to the set $\{i, i+1, \dots, i+r-1\}\times \{1,2,3,\dots,L\}^{D-1}$ as a *slab* of width $r$. Let us say that a code is nontrivial if the code space dimension is greater than one. Then:
*(Existence of a noncorrectable thin slab)* For a nontrivial local commuting projector code in $D\ge 1$ dimensions with interaction range $w>1$, there is a noncorrectable slab of width $3(w-1)$. \[lem:slab\]
Suppose, contrary to the claim, that every slab of width $3(w-1)$ is correctable. Choose a correctable slab $M$ of width $3(w-1)$. The boundary $\partial M$ of $M$ is contained in two slabs $M_L$ and $M_R$, each of width $2(w-1)$. Hence $M_L$ and $M_R$ are both correctable, and since $M$ has width $3(w-1)$, $M_L$ and $M_R$ are separated by $w-1$. Therefore, no local projector acts on both $M_L$ and $M_R$, and by Lemma \[lem:extension\_of\_correctable\_region\], $M_L \cup M_R \supseteq \partial M$ is correctable. Then Lemma \[lem:commuting-projector-extend\] implies that the slab $M \cup M_L \cup M_R$ of width $5(w-1)$ is correctable. Repeating the argument, we see that if a slab $M$ of width $r$ is correctable, so is the slab of width $r+2(w-1)$ containing $M$.
If the system obeys open boundary conditions, then by induction the entire lattice is correctable. If the lattice is periodic, we may consider two thick correctable slabs $M_1, M_2$ such that $M_1 \cup M_2$ is the entire lattice and $\partial M_1 \subseteq M_2$; in that case Lemma \[lem:commuting-projector-extend\] implies that the entire lattice $M_1 \cup M_2$ is correctable. For either type of boundary condition, then, there are no nontrivial logical operators at all. But we assumed that the code is nontrivial, and therefore reach a contradiction.
It follows from Lemma \[lem:slab\] that the distance $d$ of a local commuting projector code satisfies $$d \le 3(w-1) L^{D-1}.$$ It was previously known that $d \le w L^{D-1}$ for a local stabilizer code [@BravyiTerhal2008no-go; @KayColbeck2008Quantum] and $d \le 3w L^{D-1}$ for a local subsystem code [@BravyiTerhal2008no-go].
Now we may wonder about the geometry of a set that supports a nontrivial logical operator. For a subsystem code, there is a nontrivial logical operator supported by any noncorrectable set, but this statement is not true for general codes (see Appendix \[app:counter\_example\_nolop\_noncorrectable\]). We say that an operator $\mathcal{O}$ is a *logical* operator if it preserves the code space, and that it is a *nontrivial* logical operator if it preserves the code space and its restriction to the code space is not proportional to the identity. From the definition of correctability, then, $M$ is not correctable if it supports a nontrivial logical operator. But for some codes the converse is false. If $M$ is not correctable, then an operator $\mathcal{O}$ exists that fails to satisfy the correctability condition $\Pi \mathcal{O} \Pi \propto \Pi$; however $\mathcal{O}$ might not preserve the code space.
But for a local commuting projector code, a correctable set can be extended to a slightly larger set that does support a nontrivial logical operator. Suppose the code is the simultaneous eigenspace with eigenvalue one of a set of commuting projectors $\mathcal{S}=\{\Pi_a\}$. For any set of qubits $M$, we define $M'$ as the support of all the projectors that act nontrivially on $M$. Then if $M$ is noncorrectable a nontrivial unitary logical operator is supported on $M'$.
\[lem:projector-support\] *(Support for nontrivial logical operator)* For a commuting projector code, if the set $M$ is not correctable, then there is a nontrivial unitary logical operator supported on $M'$ that commutes with every projector in $\mathcal{S}$.
Let $\Pi = \prod_{\Pi_a\in \mathcal{S}} \Pi_a$ be the projector onto the code space. We claim that there exists a Pauli operator $P_M$ supported on $M$ such that $\Pi P_M \Pi$ is not proportional to $\Pi$. Indeed, if $M$ is not correctable, then there exists an operator $\mathcal{O}_M$ supported on $M$ such that $\Pi \mathcal{O}_M \Pi \not\propto \Pi$. Expanding $\mathcal{O}_M = \sum_i c_i P^{(i)}_M $ as a linear combination of Pauli operators, we see that at least one Pauli operator $P^{(j)}_M$ must satisfy $\Pi P^{(j)}_M \Pi \not\propto \Pi$; take $P_M = P^{(j)}_M$.
We denote by $\mathcal{S}'_M$ the set of projectors in $\mathcal{S}$ that act nontrivially on $M$, and define $$\Pi_{M'}= \prod_{\Pi_a\in \mathcal{S}'_M} \Pi_a.$$ We claim that $$\mathcal H = \Pi_{M'} P_M \Pi_{M'},$$ is a nontrivial Hermitian logical operator supported on $M'$.
To see that $\mathcal{H}$ is a logical operator, note that if $\Pi_a \in \mathcal{S}'_M$, then $\Pi_a \Pi_{M'} = \Pi_{M'} = \Pi_{M'}\Pi_a$, because $\Pi_a^2 = \Pi_a$; hence $$\Pi_a \mathcal H = \mathcal H = \mathcal H \Pi_a,$$ *i.e.*, $\Pi_a$ commutes with $\mathcal{H}$. If $\Pi_a\not\in \mathcal{S}'_M$, then $\Pi_a$ is supported in the complement $M^c$ of $M$; hence it commutes trivially with $P_M$, and therefore also with $\mathcal H$. Since $\mathcal H$ commutes with each projector in $\mathcal{S}$, it certainly commutes with $\Pi$ and hence preserves the code space. Furthermore, because $$\Pi \mathcal H \Pi = \Pi P_M \Pi,$$ $\mathcal{H}$ acts on the code space in the same way as $\Pi P_M \Pi$, and therefore must be nontrivial.
Thus $U=\exp\left(-i\lambda \mathcal{H}\right)$ preserves the code space and is unitary for any real $\lambda$. Since $\mathcal{H}$, restricted to the code space, has at least two distinct eigenvalues, the same is true of $U$ for a generic choice of $\lambda$; *i.e.*, $U$ is a nontrivial unitary logical operator.
Lemmas \[lem:slab\] and \[lem:projector-support\] now imply:
*(A logical operator is supported on one thin slab)* For a nontrivial local commuting projector code in $D\ge 1$ dimensions, with interaction range $w> 1$, there is a nontrivial unitary logical operator (commuting with all projectors) supported on a slab of width $5(w-1)$. \[thm:slab\]
Note that, though the proof of Theorem \[thm:slab\] establishes the existence of a logical operator supported on a slab of constant width, it provides no algorithm for constructing the operator.
In $D=2$ dimensions, the slab becomes a strip of constant width stretching across the $L\times L$ code block, and the logical operator supported on the strip may be called a “string” operator. It was previously known that for $D=2$ a string operator can be supported on a strip of width $w$ in a local stabilizer code [@BravyiTerhal2008no-go; @KayColbeck2008Quantum], and of width $3w$ in a local subsystem code [@BravyiTerhal2008no-go].
Two-dimensional local stabilizer codes are not partially self correcting
========================================================================
Theorems 1 and 2 constrain the weight of logical operators, but the proofs tell us more — they specify the *geometry* of a region that supports a logical operator. This geometry has further implications for the physical robustness of quantum memories.
Consider a subsystem code whose stabilizer group $S$ has a set of geometrically local generators $\{S_a\}$, where the qubits reside at the sites of a $D$-dimensional hypercubic lattice with linear size $L$. The generating set $\{S_a\}$ might be overcomplete, but we assume that the number of generators acting nontrivially on each qubit is a constant independent of $L$. The local Hamiltonian $$\label{eq:Hamiltonian}
H= -\sum_a \frac{1}{2}\left( S_a - I\right),$$ has a $2^{k+g}$-fold degenerate ground state with energy $E=0$, where $k$ is the number of protected qubits and $g$ is the number of gauge qubits of the subsystem code — each ground state is a simultaneous eigenstate with eigenvalue one of all elements of $\{S_a\}$. If a quantum memory governed by this Hamiltonian is subjected to thermal noise, how well protected is the $2^k$-dimensional code space?
If $|\psi\rangle$ is a zero-energy eigenstate of $H$ and $x\in P$, then $x|\psi\rangle$ is an eigenstate of $H$ with eigenvalue $E(x)$, where $E(x)$ is the number of elements of $\{S_a\}$ that anticommute with $x$. Thermal fluctuations may excite the memory, but excitations with energy cost $E$ are suppressed by the Boltzmann factor $e^{-E/\tau}$ where $\tau$ is the temperature (and Boltzmann’s constant $k_B$ has been set to one). Following [@BravyiTerhal2008no-go], we suppose that the environment applies a sequence of weight-one Pauli operators to the system, so that the error history after $t$ steps can be described as a walk on the Pauli group, starting at the identity: $$\{x_i\in P, i = 0,1,2,3, \dots t\},$$ where $x_0= I$, and $x_{i+1}x_i^{-1}$ has weight one. Let $\mathcal{P}(z)$ denote the set of all such walks, with any number of steps, that start at $I$ and terminate at $z\in P$. We define $$\Delta(z) \equiv \min_{\gamma\in \mathcal{P}(z)}\max_{x\in \gamma} E(x),$$ the minimum energy barrier that must be surmounted by any walk that reaches Pauli operator $z$. Thus such walks occur with a probability per unit time suppressed by the Boltzmann factor $e^{-\Delta(z)/\tau}$. We also define $$\begin{aligned}
\Delta_{\rm min}& \equiv \min_{x\in S^\perp\setminus G} \Delta(x),\\
\Delta_{\rm max}& \equiv \max_{x\in S^\perp} \min_{y\in G} \Delta(xy).\end{aligned}$$ Here $\Delta_{\rm min}$ is the lowest energy barrier protecting any nontrivial dressed logical operator (representing a nontrivial coset of $S^\perp/G$), and $\Delta_{\rm max}$ is the highest such energy barrier.
We say that a quantum memory is *self correcting* if $\Delta_{\rm min}$ grows faster than logarithmically with $L$. In that case *all* nontrivial logical operators are suppressed by a Boltzmann factor whose reciprocal grows super-polynomially with $L$. We say that the quantum memory is *partially self correcting* if $\Delta_{\rm max}$ grows faster than logarithmically with $L$. In that case *at least one* logical operator is protected by an energy barrier that increases with system size. Though the Pauli walk may not be a particularly accurate description of noise in realistic systems, it allows us to define the notion of barrier height precisely, and to state the criteria for self correction and partial self correction simply. Furthermore, we expect the Boltzmann factor $e^{-\Delta/\tau}$ suppressing the Pauli walk to provide a reasonable (though crude) estimate of the logical error rate for more realistic noise models, assuming that the system attains thermal equilibrium.
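As a toy illustration of these definitions (ours, not a model from the text), the following Python sketch computes the barrier $\Delta$ for the full logical spin flip of a 1D Ising chain, i.e. the classical repetition code, using a minimax variant of Dijkstra's algorithm over single-spin-flip walks:

```python
import heapq

def barrier(n, periodic=False):
    """Minimax energy barrier Delta(z), where z flips all n spins of an
    Ising chain (repetition code).  A walk flips one spin per step, and
    E(x) counts the violated Z_i Z_{i+1} stabilizers (domain walls)."""
    edges = [(i, i + 1) for i in range(n - 1)]
    if periodic:
        edges.append((n - 1, 0))

    def energy(x):
        return sum(1 for i, j in edges if ((x >> i) ^ (x >> j)) & 1)

    target = (1 << n) - 1
    best = {0: 0}
    heap = [(0, 0)]                      # (max energy along walk, state)
    while heap:
        b, x = heapq.heappop(heap)
        if x == target:
            return b
        if b > best[x]:
            continue                     # stale heap entry
        for i in range(n):
            y = x ^ (1 << i)
            nb = max(b, energy(y))
            if nb < best.get(y, n + 1):
                best[y] = nb
                heapq.heappush(heap, (nb, y))
```

An open chain of any length has barrier 1 (flip the spins in order from one end, so a single domain wall sweeps across), while a ring has barrier 2; in either case $\Delta_{\rm max}=O(1)$, so the 1D Ising chain is not even partially self correcting.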
Bravyi and Terhal [@BravyiTerhal2008no-go], and Kay and Colbeck [@KayColbeck2008Quantum], showed that no two-dimensional local subsystem code with local stabilizer generators can be self correcting. On the other hand, partially self-correcting quantum memories are certainly possible in two dimensions — the Ising model, regarded as a quantum repetition code, is an example. In the Ising model, the logical bit flip operator flips every qubit, hence $\tilde d = L^2$. In the Pauli walk that reaches the logical bit flip and traverses the lowest barrier, a domain wall of length $\Omega(L)$ sweeps across the system; hence $\Delta_{\rm max}= \Omega(L)$. Theorem \[thm:subsystem-tradeoff\] shows that this high value of $\tilde d$ for the logical bit flip is possible only because the code distance $d$ is $O(1)$, and hence a logical phase flip can be realized by an operator of constant weight.
But suppose that, as in the toric code [@Kitaev2003Fault-tolerant], a logical phase flip can occur only if a thermally activated localized quasiparticle propagates across the system. Thus $\tilde d = \Omega(L)$ for the logical phase flip. Can the logical bit flip still be protected by a high barrier? Arguing as in [@BravyiTerhal2008no-go], and invoking Theorem \[thm:subsystem-tradeoff\], we see that under this condition robust protection against bit flips cannot be achieved using a local subsystem code with local stabilizer generators.
*(Limitation on partial self correction in local subsystem codes)* For a two-dimensional local subsystem code, with qubits residing at sites of an $L\times L$ square lattice, suppose that $\{S_a\}$ is a (possibly overcomplete) set of *geometrically local* stabilizer generators, where the number of generators acting on each qubit is an $L$-independent constant. Consider a quantum memory governed by the Hamiltonian eq. (\[eq:Hamiltonian\]). If the code distance is $d=\Omega(L)$, then the memory is not partially self correcting — *i.e.*, $\Delta_{\rm max} = O(1)$. More generally, if the code distance is $d= \Omega(L^\alpha)$ in $D$ spatial dimensions, then $\Delta_{\rm max} = O(L^\beta)$, where $\beta= D-1 - \alpha/(D-1)$. \[thm:no-partial\]
Let $w$ be the interaction range of the gauge generators of the subsystem code and let $w_S$ be the interaction range of the stabilizer generators.
The proof of Theorem \[thm:subsystem-tradeoff\] shows that the support of any dressed logical operator can be reduced to a network of overlapping horizontal and vertical strings of constant width. For any dressed logical operator $x$ supported on this set, we may build a Pauli walk that starts at $I$ and ends at $x$ by first building the horizontal strings column by column and then building the vertical strings row by row. At each stage of this walk, any “excited” local stabilizer $S_a$ such that $S_a = -1$ acts only on qubits in a $w_S\times w_S$ square that contains qubits either at the boundary of the walk or in the intersection of a horizontal and vertical string. The number of such qubits is $O(1)$ and the total number of stabilizer generators acting on these qubits is $O(1)$. Therefore, the energy cost of the partially completed walk, and hence $\Delta_{\rm max}$, are $O(1)$.
In $D$ spatial dimensions, the proof of Theorem \[thm:subsystem-tradeoff\] shows that the support of any dressed logical operator can be reduced to a network of overlapping $(D{-}1)$-dimensional slabs, where each slab has constant width and slabs with the same orientation are separated by distance $l$ such that $l^{D-1}=\Omega(d)$; hence $l=\Omega(L^{\alpha/(D-1)})$ if $d=\Omega(L^\alpha)$. For any dressed logical operator supported on this set of slabs, we may build a Pauli walk that sweeps across the system, such that at each stage of the walk the excited stabilizer generators are confined to a $(D{-}1)$-dimensional “surface.” This surface may be oriented such that it cuts across each slab on a $(D{-}2)$-dimensional surface with weight $O(L^{D-2})$. There are $O(L/l)$ such intersections; therefore during the walk the total number of excited stabilizer generators (and hence the energy cost) is $O((L/l)L^{D-2})=O(L^\beta)$, where $\beta = D-1 - \alpha/(D-1)$.
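A one-line helper (ours) makes it easy to check the exponent $\beta$ in Theorem \[thm:no-partial\] for particular cases:

```python
def beta(D, alpha):
    """Exponent in Delta_max = O(L^beta) when the code distance is
    d = Omega(L^alpha):  beta = D - 1 - alpha/(D - 1)."""
    return D - 1 - alpha / (D - 1)
```

For $D=2$, $\alpha=1$ this gives $\beta=0$, recovering $\Delta_{\rm max}=O(1)$; for $D=3$ with the maximal distance scaling $\alpha=2$ it gives $\beta=1$, so even then the barrier grows at most linearly with $L$.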
We needed to assume that each $S_a$ is geometrically local to ensure that eq. (\[eq:Hamiltonian\]) is a geometrically local Hamiltonian. For any local subsystem code with geometrically local gauge generators, whether or not the stabilizer generators are also geometrically local, the Hamiltonian [@Bacon2006Operator] $$H= -\sum_a \frac{1}{2}\lambda_a\left(G_a - I\right)$$ is geometrically local, where now $\{G_a\}$ is the set of gauge generators. However, because the gauge generators are not mutually commuting, the energetics of a Pauli walk is not easy to study in this model, which is beyond the scope of Theorem \[thm:no-partial\].
Are there self-correcting local commuting projector codes in two dimensions?
============================================================================
For a two-dimensional local commuting projector code, the code space (the simultaneous eigenspace with eigenvalue one of the projectors $\{\Pi_a\}$) is the degenerate ground space with energy $E=0$ of the Hamiltonian $$\label{eq:projector-hamiltonian}
H= -\sum_a\frac{1}{2}\left(\Pi_a - I\right).$$ If only a constant number of projectors act on each qubit, then an operator supported on a set $M$ can increase the energy by at most $c|M|$, where $c$ is a constant. Since Theorem \[thm:slab\] establishes the existence of a nontrivial logical operator supported on a narrow strip, one might anticipate that, by arguing as in the proof of Theorem \[thm:no-partial\], we can show that this system is not self correcting or partially self correcting.
We may envision a sequence of operations interpolating between the identity and a nontrivial logical operator, where each operation in the sequence could plausibly evolve from the previous operation due to the action of a thermal bath. In the strip $M$ of constant width that supports a nontrivial logical operator $\mathcal{O}$, we can divide the qubits into two subsets $A$ and $B=M\setminus A$, imagining that the interface between $A$ and $B$ gradually creeps along the strip.
Now, however, we encounter an important distinction between stabilizer codes and more general commuting projector codes. For a stabilizer code, the nontrivial logical operator supported in $M$ can be chosen to be a Pauli operator, and hence the product of an operator supported in $A$ and an operator supported in $B$. For a commuting projector code, a logical operator supported in $M$ may actually be *entangling* across the $A$-$B$ cut. Are we assured that this entangling operation can be built up gradually due to the effects of local noise?
We have not been able to settle this question. We [*can*]{} say that in any two-dimensional local commuting projector code there exists a nontrivial logical operator that is only *slightly entangling* across any cut through the strip. This property, however, might not suffice to guarantee that the logical operator can be constructed as a product of physical operations, where each operation acts on a constant number of system qubits near the $A$-$B$ cut and also on a constant number of ancillary qubits in the “environment.”
To define the notion of “slightly entangling” for an operator $\mathcal{O}$ supported on $AB$, we perform a Schmidt decomposition $$\mathcal{O}= \sum_\alpha \sqrt{\lambda_\alpha}~\mathcal{O}_A^\alpha\otimes \mathcal{O}_B^\alpha;$$ here $\{\lambda_\alpha\}$ is a set of nonnegative real numbers, while $\{\mathcal{O}_A^\alpha\}$ is a set of operators supported on $A$ and $\{\mathcal{O}_B^\alpha\}$ is a set of operators supported on $B$, with the normalization conditions $$\begin{aligned}
&{\rm tr}\left(\mathcal{O}_A^{\alpha\dagger} \mathcal{O}_A^\beta\right)=2^{|A|}~\delta^{\alpha\beta},\nonumber\\
&{\rm tr}\left(\mathcal{O}_B^{\alpha\dagger} \mathcal{O}_B^\beta\right)=2^{|B|}~\delta^{\alpha\beta}.
\end{aligned}$$ The number of nonzero terms in the Schmidt decomposition is the Schmidt rank of $\mathcal{O}$, and we say that $\mathcal{O}$ is slightly entangling if its Schmidt rank is a constant independent of system size.
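Numerically, the Schmidt rank of an operator can be computed by reshuffling its matrix and taking a singular value decomposition; the NumPy sketch below (function name ours; the singular values are the $\sqrt{\lambda_\alpha}$, up to the normalization convention above) illustrates this for some standard two-qubit gates:

```python
import numpy as np

def operator_schmidt_rank(O, dA, dB, tol=1e-10):
    """Schmidt rank of an operator O acting on H_A (dim dA) tensor H_B
    (dim dB).  Reshuffling O into a dA^2 x dB^2 matrix turns the Schmidt
    coefficients into its singular values."""
    M = O.reshape(dA, dB, dA, dB).transpose(0, 2, 1, 3).reshape(dA * dA, dB * dB)
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > tol))

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
```

A product operator such as $X\otimes Z$ has Schmidt rank 1, CNOT has rank 2, and SWAP has rank 4; a family of operators is slightly entangling when this rank stays bounded as the system grows.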
As we know from Theorem \[thm:slab\], for a two-dimensional local commuting projector code on an $L\times L$ lattice, there is a nontrivial logical operator supported on a vertical strip $M$ with dimensions $r\times L$, where $r$ is a constant. $M$ can be regarded as the disjoint union of an $r\times h$ rectangle $A$ covering the bottom of $M$ and an $r\times (L-h)$ rectangle $B$ covering the top of $M$. We can prove
\[lem:e\_slightly\_entangling\_lop\] *(Existence of slightly entangling logical operators)* For a nontrivial two-dimensional local commuting projector code, there is a nontrivial Hermitian logical operator $\mathcal{H}$ supported on a strip $M'$ of constant width such that, for any division of $M'$ into constant-width rectangles $A$ and $B$, $\mathcal{H}$ is slightly entangling across the $A$-$B$ cut.
We know from Lemmas \[lem:slab\] and \[lem:projector-support\] that there is a noncorrectable constant-width strip $M$ and a Pauli operator $P_M$ supported on $M$ such that $$\mathcal H = \Pi_{M'} P_M \Pi_{M'},$$ is a nontrivial Hermitian logical operator supported on $M'$; here $\Pi_{M'}= \prod_{\Pi_a\in \mathcal{S}'_M} \Pi_a$ and $\mathcal{S}'_M$ is the set of projectors that act nontrivially on $M$. The Pauli operator $P_M$ is a product operator, with Schmidt number one across the $A$-$B$ cut. Among the local projectors occurring in the product $\Pi_{M'}$, those fully supported on either $A$ or $B$ have no effect on the Schmidt number of $\Pi_{M'} P_M \Pi_{M'}$, and only a constant number of the projectors act nontrivially on both $A$ and $B$. Since each such $\Pi_a$ is supported on a constant number of qubits, the action of $\Pi_a$ increases the Schmidt number by a constant. Thus $\mathcal{H}$ has constant Schmidt number, *i.e.* is slightly entangling.
We may relax the notion of slightly entangling, regarding an operator $\mathcal{O}$ as slightly entangling if it may be [*well approximated*]{} by an operator with constant Schmidt rank. In this sense the unitary logical operator $U = \exp( i \lambda \mathcal H)$ is also slightly entangling. We may expand the exponential as a power series where each term has a Schmidt rank independent of system size; furthermore, the power series expansion truncated at constant order approximates the exponential function very well with respect to the operator norm.
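This norm statement is easy to check numerically. In the NumPy sketch below (ours), truncating the series for $\exp(i\lambda H)$ at order 14 with $\|\lambda H\|=1/2$ leaves an error bounded by the series tail $\sum_{k>14}(1/2)^k/k!$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
H = (A + A.conj().T) / 2
H /= np.linalg.norm(H, 2)                # normalize the spectral norm to 1
lam = 0.5                                # so ||lam * H|| = 1/2

# Exact exponential via the spectral decomposition of H.
evals, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(1j * lam * evals)) @ V.conj().T

# Taylor series truncated at order 14.
term = np.eye(8, dtype=complex)
approx = term.copy()
for k in range(1, 15):
    term = term @ (1j * lam * H) / k
    approx = approx + term

err = np.linalg.norm(U - approx, 2)      # bounded by the series tail
```

Here the tail bound is below $10^{-16}$, so the constant-order truncation is already an excellent operator-norm approximation.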
Now we might hope to construct a slightly entangling logical operator $\mathcal{O}$, supported on a constant-width vertical strip, by gradually building its support one horizontal row of qubits at a time. However, Lamata [*et al.*]{} [@Lamata2008Sequential] showed that, if $\mathcal{O}$ is entangling, then it cannot be obtained as a product of *unitary* operators where each of these operators acts on just a few rows of system qubits and on a shared ancillary system.
An alternative procedure for gradually building a nontrivial logical error has been proposed by Landon-Cardinal and Poulin [@Poulin2012Unpublished]. They envision a walk along the strip such that, in each step of the walk, first a constant size set of qubits depolarizes, and then the code projectors acting on that set are applied. If the projection fails to accept the state, the step can be repeated until the projection succeeds.
This procedure could fail if at some stage the projection succeeds with zero probability. But Landon-Cardinal and Poulin [@Poulin2012Unpublished] have shown that their procedure eventually generates a nontrivial logical error (and hence that the code is not self correcting) for any local commuting projector code obeying a “local topological order” criterion [@Hastings2010Short]. Whether self-correcting two-dimensional local commuting projector codes are possible remains open, though, because topologically ordered codes that violate [*local*]{} topological order have not been ruled out.
Conclusion
==========
The quantum accuracy threshold theorem [@Gottesman2009Introduction] shows that quantum information can be reliably stored and processed by a noisy physical system if the noise is not too strong. But can quantum information be protected “passively” in a macroscopic physical system governed by a static local Hamiltonian, at a sufficiently low nonzero temperature? This question [@DennisKitaevLandahlEtAl2002Topological; @Bacon2006Operator], aside from its far-reaching potential implications for future quantum technologies, is also a fundamental issue in quantum many-body physics. Hamiltonians derived from local quantum codes, whose properties are relatively easy to discern, can provide us with valuable insights.
A two-dimensional ferromagnet can be a self-correcting classical memory, but a Hamiltonian based on a two-dimensional local subsystem code with local stabilizer generators cannot be a self-correcting quantum memory [@BravyiTerhal2008no-go; @KayColbeck2008Quantum]. We have shown that for a two-dimensional local subsystem code with local stabilizer generators on an $L \times L$ square lattice, robust *classical* protection is impossible if the code distance is $d=\Omega(L)$, as expected for a topologically ordered two-dimensional system. More generally, we have studied how the code distance $d$ limits the size of the support of arbitrary nontrivial logical operators, in both local subsystem codes and local commuting projector codes. In view of the upper bound $d=O(L^{D-1})$ on the code distance, we may write $d= \Theta (L^{(D-1)(1 - \delta)})$ where $0\le \delta \le 1$, and thus our upper bound eq. (\[eq:tradeoff-bound-again\]) on the weight of logical operators becomes $$\tilde d = O(L^{D-1+\delta}).$$ In particular, in three dimensions, $d=\Omega(L)$ implies $\tilde d=O(L^{5/2})$. We have also shown that any two-dimensional local commuting projector code admits a nontrivial logical string operator which is only slightly entangling across any cut through the string.
Our arguments modestly extend the findings of [@BravyiPoulinTerhal2010Tradeoffs; @BravyiTerhal2008no-go; @Bravyi2010Subsystem; @KayColbeck2008Quantum], and use similar ideas. In passing, we also proved a Cleaning Lemma for subsystem codes based on ideas from [@YoshidaChuang2010Framework], and a Cleaning Lemma for local commuting projector codes. Our methods might find further applications in future studies of quantum memories based on local codes.
We are grateful to Salman Beigi, Alexei Kitaev, Robert König, Olivier Landon-Cardinal, and Norbert Schuch for helpful discussions, and we especially thank David Poulin for useful comments on the manuscript. This research was supported in part by NSF under Grant No. PHY-0803371, by DOE under Grant No. DE-FG03-92-ER40701, by NSA/ARO under Grant No. W911NF-09-1-0442, and by the Korea Foundation for Advanced Studies. The Institute for Quantum Information and Matter (IQIM) is an NSF Physics Frontiers Center with support from the Gordon and Betty Moore Foundation.
Holographic lemma for local stabilizer codes {#app:holographic_lemma_stabilizer_codes}
============================================
We say that a local stabilizer code has interaction range $w$ if each stabilizer generator has support on a hypercube containing $w^D$ sites. For this case, we can improve the criterion for correctability of a hypercube, found for local subsystem codes in Lemma \[lem:subsystem-hypercube\].
*(Expansion Lemma for local stabilizer codes)* For a local stabilizer code, suppose that $\partial_+M$, $A$, and $M\setminus A$ are all correctable, where $\partial_- M \subseteq A \subseteq M$. Then $M$ is also correctable.
Suppose, contrary to the claim, that there is a nontrivial logical operator $x$ supported on $M$. Then, because $A$ is correctable, Lemma \[lem:clean-region\] implies that there is a stabilizer generator $y$ such that $xy$ acts trivially on $A$. Furthermore, $y$ can be expressed as a product of local stabilizer generators, each supported on $M'=M\cup\partial_+M$. Thus $xy$ is a product of two factors, one supported on $M\setminus A$ and the other supported on $\partial_+M$. Because $\partial_-M\subseteq A$, no local stabilizer generator acts nontrivially on both $M\setminus A$ and $\partial_+M$; therefore, each factor commutes with all stabilizer generators and hence is a logical operator. Because $M\setminus A$ and $\partial_+M$ are both correctable, each factor is a trivial logical operator and therefore $xy$ is also trivial. It follows that $x$ is trivial, a contradiction.
Now, if the interaction range is $w$ and $M$ is a hypercube with linear size $l$, we choose $A$ so that $M\setminus A$ is a hypercube with linear size $l-2(w-1)$, and we notice that $\partial_+M$ is contained in a hypercube with linear size $l+2(w-1)$. Thus both $M\setminus A$ and $\partial_+ M$ are correctable provided that $$\begin{aligned}
|\partial_+ M| &\le \left[l +2(w-1)\right]^D - l^D \nonumber\\
&\le 2(w-1)D\left[l +2(w-1)\right]^{D-1} < d.\end{aligned}$$ Reasoning as in the proof of Lemma \[lem:subsystem-hypercube\], we conclude that:
*(Holographic Principle for local stabilizer codes)* \[lem:stabilizer-hypercube\] For a $D$-dimensional local stabilizer code with interaction range $w>1$ and distance $d>1 $, a hypercube with linear size $l$ is correctable if $$\label{eq:hypercube-size-stabilizer}
2(w-1)D\left[l+ 2(w-1)\right]^{D-1} < d.$$
To ensure that the hypercube $M$ is correctable, it suffices for its $(w-1)$-thickened boundary, rather than its $\left[2(w-1)\right]$-thickened boundary, to be smaller than the code distance.
A noncorrectable set that supports no nontrivial logical operator {#app:counter_example_nolop_noncorrectable}
=================================================================
Here we give a simple example illustrating that for some quantum codes a noncorrectable set need not support a nontrivial logical operator. For $n=2$ qubits, consider the three-dimensional code space spanned by the orthogonal vectors $$\begin{aligned}
&|\phi\rangle =\frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right),\\
&|\psi\rangle =|01\rangle,\\
&|\chi\rangle =|10\rangle;\end{aligned}$$ this is the eigenspace with eigenvalue 1 of the projector $$\Pi = |\phi\rangle\langle \phi | + |\psi\rangle\langle \psi| +|\chi\rangle\langle \chi|.$$ If the first qubit is mapped to $|0\rangle$, then $|\phi\rangle$ is no longer perfectly distinguishable from $|\psi\rangle$ or $|\chi\rangle$; hence erasure of this qubit is not correctable. (Similarly, the second qubit is also a noncorrectable set.)
Is there a logical operator supported on the first qubit? Suppose that $$L = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$ is an operator acting on the first qubit. Then $L\otimes I |\psi\rangle = a|01\rangle + c |11\rangle$ is a code vector only if $c=0$, and $L\otimes I |\chi\rangle = b|00\rangle + d|10\rangle$ is a code vector only if $b=0$. Furthermore, if $b=c=0$, then $L\otimes I |\phi\rangle = \left(a|00\rangle + d|11\rangle\right)/\sqrt{2}$ is a code vector only if $a=d$. Thus $L$ is a multiple of the identity, a trivial operator.
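The two claims above are easy to confirm numerically. The NumPy sketch below (the helper name `looks_correctable` is ours) checks that $Z$ on the first qubit violates the correctability condition $\Pi\mathcal{O}\Pi = c\,\Pi$, and that the linear map $L\mapsto(I-\Pi)(L\otimes I)\Pi$ has a one-dimensional kernel, so the only operators on the first qubit preserving the code space are multiples of the identity:

```python
import numpy as np

# Code space from above: span{ (|00> + |11>)/sqrt(2), |01>, |10> }.
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
psi = np.array([0, 1, 0, 0.0])
chi = np.array([0, 0, 1, 0.0])
Pi = sum(np.outer(v, v) for v in (phi, psi, chi))

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def looks_correctable(A, tol=1e-10):
    """Test the condition  Pi A Pi = c Pi  for a single operator A."""
    c = np.trace(Pi @ A) / np.trace(Pi)
    return bool(np.linalg.norm(Pi @ A @ Pi - c * Pi) < tol)

# Kernel of L -> (I - Pi)(L tensor I)Pi over the 4-dim space of 2x2 L's.
Pic = np.eye(4) - Pi
cols = []
for a in range(2):
    for b in range(2):
        E = np.zeros((2, 2))
        E[a, b] = 1.0
        cols.append((Pic @ np.kron(E, I2) @ Pi).reshape(-1))
constraints = np.array(cols).T           # 16 x 4 matrix of the linear map
kernel_dim = 4 - np.linalg.matrix_rank(constraints)
```

Here `looks_correctable(np.kron(Z, I2))` is `False`, witnessing that the first qubit is noncorrectable, while `kernel_dim == 1` confirms that only multiples of the identity on that qubit preserve the code space.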
[10]{}
S. Bravyi, D. Poulin, and B. Terhal, Tradeoffs for reliable quantum information storage in 2D systems, Phys. Rev. Lett. 104, 050503 (2010), arXiv:0909.5200.
S. Bravyi and B. Terhal, No-go theorem for two-dimensional self-correcting quantum memory based on stabilizer codes, New J. Phys. 11, 043029 (2009), arXiv:0810.1983.
S. Bravyi, Subsystem codes with spatially local generators, arXiv:1008.1029 (2010).
A. Kay and R. Colbeck, Quantum self-correcting stabilizer codes, arXiv:0810.3557 (2008).
E. Dennis, A. Kitaev, A. Landahl, and J. Preskill, Topological quantum memory, J. Math. Phys. 43, 4452-4505 (2002), arXiv:quant-ph/0110143.
R. Alicki, M. Horodecki, P. Horodecki, and R. Horodecki, On thermal stability of topological qubit in Kitaev’s 4D model, Open Syst. Inf. Dyn. 17, 1 (2010), arXiv:0811.0033.
J. Haah, Local stabilizer codes in three dimensions without string logical operators, Phys. Rev. A 83, 042330 (2011), arXiv:1101.1962.
S. Bravyi and J. Haah, On the energy landscape of 3D spin Hamiltonians with topological order, Phys. Rev. Lett. 107, 150504 (2011), arXiv:1105.4159.
S. Bravyi and J. Haah, Analytic and numerical demonstration of quantum self-correction in the 3D Cubic Code, arXiv:1112.3252 (2011).
C. Castelnovo and C. Chamon, Topological order in a 3D toric code at finite temperature, Phys. Rev. B 78, 155120 (2008), arXiv:0804.3591.
B. Yoshida and I. L. Chuang, Framework for classifying logical operators in stabilizer codes, Phys. Rev. A 81, 052302 (2010), arXiv:1002.0085.
A. R. Calderbank, E. M. Rains, P. W. Shor, and N. J. A. Sloane, Quantum error correction and orthogonal geometry, Phys. Rev. Lett. 78, 405-408 (1997), arXiv:quant-ph/9605005.
D. Gottesman, Class of quantum error-correcting codes saturating the quantum Hamming bound, Phys. Rev. A 54, 1862-1868 (1996), arXiv:quant-ph/9604038.
D. Bacon, Operator quantum error-correcting subsystems for self-correcting quantum memories, Phys. Rev. A 73, 012340 (2006), arXiv:quant-ph/0506023.
D. Poulin, Stabilizer formalism for operator quantum error correction, Phys. Rev. Lett. 95, 230504 (2005), arXiv:quant-ph/0508131.
A. R. Calderbank and P. W. Shor, Good quantum error-correcting codes exist, Phys. Rev. A 54, 1098-1105 (1996), arXiv:quant-ph/9512032.
A. Steane, Multiple particle interference and quantum error correction, Proc. Roy. Soc. Lond. A 452, 2551-2577 (1996), arXiv:quant-ph/9601029.
M. Wilde and D. Fattal, Nonlocal quantum information in bipartite quantum error correction, Quant. Inf. Proc. 9, 591-610 (2010), arXiv:0912.2150.
D. Gottesman, An introduction to quantum error correction and fault-tolerant quantum computation, arXiv:0904.2557 (2009), and references therein.
A. Yu. Kitaev, Fault-tolerant quantum computation by anyons, Ann. Phys. 303, 2-30 (2003), arXiv:quant-ph/9707021.
L. Lamata, J. León, D. Pérez-Garcia, D. Salgado, and E. Solano, Sequential implementation of global quantum operations, Phys. Rev. Lett. 101, 180506 (2008), arXiv:0711.3652.
O. Landon-Cardinal and D. Poulin, unpublished (2012).
S. Bravyi and M. B. Hastings, A short proof of stability of topological order under local perturbations, arXiv:1001.4363 (2010).
---
date: '04.23.2019'
---
[**Multiple nodal solutions having shared componentwise nodal numbers for coupled Schrödinger equations**]{}
Haoyu Li
Zhi-Qiang Wang
We investigate the structure of nodal solutions for coupled nonlinear Schrödinger equations in the repulsive coupling regime. Among other results, for the following coupled system of $N$ equations, we prove the existence of infinitely many nodal solutions which share the same componentwise-prescribed nodal numbers $$\label{ab}
\left\{
\begin{array}{lr}
-{\Delta}u_{j}+\lambda u_{j}=\mu u^{3}_{j}+\sum_{i\neq j}\beta u_{j}u_{i}^{2} \,\,\,\,\,\,\, in\ \W ,\\
u_{j}\in H_{0,r}^{1}(\W), \,\,\,\,\,\,\,\,j=1,\dots,N,
\end{array}
\right.$$ where $\W$ is a radial domain in $\mathbb R^n$ for $n\leq 3$, $\lambda>0$, $\mu>0$, and $\beta <0$. More precisely, let $p$ be a prime factor of $N$ and write $N=pB$. Suppose $\beta\leq-\frac{\mu}{p-1}$. Then for any given non-negative integers $P_{1},P_{2},\dots,P_{B}$, (\[ab\]) has infinitely many solutions $(u_{1},\dots,u_{N})$ such that each of these solutions satisfies the same property: for $b=1,\dots,B$, $u_{pb-p+i}$ changes sign precisely $P_b$ times for $i=1,\dots,p$. The result reveals the complex nature of the solution structure in the repulsive coupling regime due to componentwise segregation of solutions. Our method is to combine a heat flow approach as deformation with a minimax construction of the symmetric mountain pass theorem using a $\mathbb Z_p$ group action index. Our method is robust and also yields the existence of one solution without assuming any symmetry of the coupling.
Multiple nodal solutions; componentwise-prescribed nodes; coupled Schrödinger equations. 35J47, 35J50, 35J55, 35K45
Introduction
============
Main Result
-----------
In this paper, we consider the following coupled nonlinear Schrödinger system of $N$ equations: $$\label{e:A111}
\left\{
\begin{array}{lr}
-{\Delta}u_{j}+\lambda_{j} u_{j}=\mu_{j} u^{3}_{j}+\sum_{i=1, i\neq j}^N\beta_{ij} u_{j}u_{i}^{2} \,\,\,\,\,\,\, in\ \W ,\\
u_{j}\in H_{0,r}^{1}(\W), \,\,\,\,\,\,\,\,j=1,\dots,N,
\end{array}
\right.$$ where $\W\subset \mathbb R^n$ is a radially symmetric domain, bounded or unbounded, $n\leq 3$, and the constants satisfy $\lambda_j>0$, $\mu_j>0$ for $j=1,...,N$, and $\beta_{ij}=\beta_{ji}$ for $i\neq j$. $H_{0,r}^{1}(\W)$ denotes the subspace of $H^{1}_{0}(\W)$ of radially symmetric functions.
To demonstrate the spirit of our results, we first state the result in a special case, where all $\lambda_j$ are equal to $\lambda>0$, all $\mu_j$ are equal to $\mu>0$, and all $\beta_{ij}$ ($i\neq j$) are equal to $\beta$, i.e., $$\label{e:All}
\left\{
\begin{array}{lr}
-{\Delta}u_{j}+\lambda u_{j}=\mu u^{3}_{j}+\sum_{i\neq j}\beta u_{j}u_{i}^{2} \,\,\,\,\,\,\, in\ \W ,\\
u_{j}\in H_{0,r}^{1}(\W), \,\,\,\,\,\,\,\,j=1,\dots,N.
\end{array}
\right.$$
\[t:1\] Let $p$ be a prime factor of $N$ and write $N=pB$. Suppose $\beta\leq-\frac{\mu}{p-1}$. Then for any given non-negative integers $P_{1},P_{2},\dots,P_{B}$, (\[e:All\]) has infinitely many solutions $(u_{1},\dots,u_{N})$ such that for $b=1,\dots,B$, $u_{pb-p+i}$ changes sign precisely $P_b$ times for $i=1,\dots,p$.
The result gives new insight into the structure of nodal solutions for coupled Schrödinger equations. For a prescribed componentwise node, we find infinitely many solutions which share the same number of nodal domains, revealing more complexity of nodal solutions than for the classical scalar field equation $-\Delta u +u = |u|^{p-2}u$, for which a long-standing folklore result has been the uniqueness of the nodal solution with a prescribed node. We say the solutions given above have componentwise-prescribed nodes.
Our method works in a more general setting than that of (\[e:All\]). Denote by $\mathcal B = (\beta_{ij})_{N\times N}$ the coefficient matrix on the right-hand side of Problem (\[e:All\]), where we set $\beta_{ii}=\mu_{i}$. We do not need to require the same values for the $\lambda_j$, $\mu_j$ and $\beta_{ij}$. We denote by $R_{ij}$ the transformation exchanging the $i$-th and $j$-th rows of a matrix, and by $C_{ij}$ that exchanging the $i$-th and $j$-th columns.
\[t:main\] Let $p$ be a prime factor of $N$ and write $N=pB$. Assume the following four conditions hold.
- (A) $\lambda_{pb-p+1}=\lambda_{pb-p+2}=\dots=\lambda_{pb}>0$ for $b=1,\dots,B$.
- (B) For $i,j=1,\dots,N$ and $i\neq j$, $\beta_{ij}=\beta_{ji}\leq0$ and $\mu_{j}>0$.
- (C) For $b=1,\dots,B$, $\mathcal B = (\beta_{ij})_{N\times N}$ is invariant under the action of $$\prod_{i=1}^{p-1} C_{pb-p+i,pb-p+i+1}\circ R_{pb-p+i,pb-p+i+1}.$$
- (D) For $b=1,\dots,B$ and $pb-p+1\leq j\leq pb$, it holds that $$\mu_{j}+\sum_{pb-p+1\leq i\leq pb\,;\,i\neq j}\beta_{ij}\leq 0.$$
Then for any given non-negative integers $P_{1},\dots,P_{B}$, the Problem (\[e:A111\]) possesses infinitely many solutions $(u_{1},\dots,u_{N})$ such that for $b=1,...,B$, $u_{pb-p+i}$ changes sign precisely $P_b$ times for $i=1,...,p$.
In the special case of Problem (\[e:All\]), we see that (A) and (C) are satisfied readily, while (B) and (D) are satisfied under $\beta\leq-\frac{\mu}{p-1}<0$. Thus Theorem \[t:1\] follows.
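Indeed, in the symmetric case every diagonal $p\times p$ block of $\mathcal B$ has diagonal entries $\mu$ and off-diagonal entries $\beta$, so condition (D) reduces to a direct arithmetic check:

```latex
\mu_{j}+\sum_{pb-p+1\leq i\leq pb\,;\,i\neq j}\beta_{ij}
=\mu+(p-1)\beta\leq 0
\quad\Longleftrightarrow\quad
\beta\leq-\frac{\mu}{p-1}.
```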
Our approach to study multiplicity of nodal solutions having the same componentwise nodes is to combine an associated parabolic flow serving as a descending flow of the variational problem with a minimax construction in the spirit of the symmetric mountain pass theorem via an $\mathbb Z_p$ index theory in the presence of invariant sets of the flow. While for multiplicity of nodal solutions having the same nodal numbers, we need in an essential way the symmetry of the coupling coefficients, our method will be set up in a more general framework and also allows us to treat the general case Problem (\[e:A111\]) without a such symmetry. In this general setting we prove the existence of one solution with a prescribed node, and this gives a different proof of a result from [@LW2] where such a solution was given by gluing on Nehari manifold. In the present paper, we employ the corresponding parabolic flow as a tool for deformation of the variational problem, which is essential for establishing multiplicity results.
\[t:exist\] Assume $\lambda_{j},\,\mu_{j}>0$ for $j=1,\dots,N$. Then for any non-negative integers $P_{1},\dots,P_{N}$, there exists $b>0$ such that if $\beta_{ij}\leq b$ for all $i\neq j$, Problem (\[e:A111\]) has a solution $(u_{1},\dots,u_{N})$ with the $j$-th component $u_{j}$ changing sign precisely $P_{j}$ times for $j=1,\dots,N$.
We note that while for the multiplicity results we need the condition of negative coupling, for the existence of one solution we can allow a wider range of coupling here.
To make the symmetry condition in Theorem \[t:main\] clear, we give three examples of the coupling coefficient matrix $\mathcal B = (\beta_{ij})_{N\times N}$ of Problem (\[e:A111\]). The matrices are partitioned into blocks to exhibit the symmetry.
For the case $N=4$ and $p=B=2$, assumptions (B)–(D) are satisfied in the following form $$\left(
\begin{array}{cc|cc}
\mu_{1} & \beta_{1} & \beta_{3} & \beta_{3} \\
\beta_{1} & \mu_{1} & \beta_{3} & \beta_{3} \\ \hline
\beta_{3} & \beta_{3} & \mu_{2} & \beta_{2} \\
\beta_{3} & \beta_{3} & \beta_{2} & \mu_{2} \\
\end{array}
\right)$$ with $\beta_{i}\leq-\mu_{i}<0$ for $i=1,2$ and $\beta_{3}\leq0$. Assume that $\lambda_{1}=\lambda_{2}>0$ and $\lambda_{3}=\lambda_{4}>0$. Then given any two nonnegative integers $P_1, P_2$, there exist infinitely many solutions whose first two components $u_1, u_2$ each have exactly $P_1$ simple zeros, and whose last two components $u_3, u_4$ each have exactly $P_2$ simple zeros.
If we set $N=6$, $p=2$ and $B=3$, assumptions (B)–(D) are satisfied in $$\left(
\begin{array}{cc|cc|cc}
\mu_{1} & \beta_{1} & \beta_{4} & \beta_{4} & \beta_{5} & \beta_{5} \\
\beta_{1} & \mu_{1} & \beta_{4} & \beta_{4} & \beta_{5} & \beta_{5} \\ \hline
\beta_{4} & \beta_{4} & \mu_{2} & \beta_{2} & \beta_{6} & \beta_{6} \\
\beta_{4} & \beta_{4} & \beta_{2} & \mu_{2} & \beta_{6} & \beta_{6} \\ \hline
\beta_{5} & \beta_{5} & \beta_{6} & \beta_{6} & \mu_{3} & \beta_{3} \\
\beta_{5} & \beta_{5} & \beta_{6} & \beta_{6} & \beta_{3} & \mu_{3} \\
\end{array}
\right)$$ with $\beta_{i}\leq-\mu_{i}<0$ for $i=1,2,3$ and $\beta_{4},\beta_{5},\beta_{6}\leq0$.
If we set $N=6$, $p=3$ and $B=2$, assumptions (B)–(D) are satisfied in $$\left(
\begin{array}{ccc|ccc}
\mu_{1} & \beta_{1} & \beta_{1} & \beta_{3} & \beta_{3} & \beta_{3} \\
\beta_{1} & \mu_{1} & \beta_{1} & \beta_{3} & \beta_{3} & \beta_{3} \\
\beta_{1} & \beta_{1} & \mu_{1} & \beta_{3} & \beta_{3} & \beta_{3} \\ \hline
\beta_{3} & \beta_{3} & \beta_{3} & \mu_{2} & \beta_{2} & \beta_{2} \\
\beta_{3} & \beta_{3} & \beta_{3} & \beta_{2} & \mu_{2} & \beta_{2} \\
\beta_{3} & \beta_{3} & \beta_{3} & \beta_{2} & \beta_{2} & \mu_{2} \\
\end{array}
\right)$$ with $\beta_{i}\leq-\frac{\mu_{i}}{2}<0$ for $i=1,2$ and $\beta_{3}\leq0$.
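The block conditions (C) and (D) can be checked mechanically. The following sketch (not part of the paper) verifies, for the $N=4$, $p=B=2$ example above, invariance under the simultaneous row/column permutation of condition (C) and the sign condition (D). The numeric values $\mu_{1}=\mu_{2}=1$, $\beta_{1}=\beta_{2}=-1$, $\beta_{3}=-0.5$ are illustrative assumptions chosen to satisfy $\beta_{i}\leq-\mu_{i}<0$ and $\beta_{3}\leq0$.

```python
import numpy as np

def cyclic_block_perm(mat, p, b):
    """Apply condition (C)'s transformation for block b: simultaneously
    permute the rows and columns with 0-based indices p(b-1), ..., pb-1
    by the cycle generated by the adjacent transpositions."""
    idx = np.arange(mat.shape[0])
    block = idx[(b - 1) * p : b * p].copy()
    idx[(b - 1) * p : b * p] = np.roll(block, -1)
    return mat[np.ix_(idx, idx)]  # permutes rows and columns together

# N = 4, p = B = 2 example with illustrative coefficient values.
mu1, mu2, b1, b2, b3 = 1.0, 1.0, -1.0, -1.0, -0.5
B_mat = np.array([
    [mu1, b1,  b3,  b3],
    [b1,  mu1, b3,  b3],
    [b3,  b3,  mu2, b2],
    [b3,  b3,  b2,  mu2],
])

# Condition (C): invariance under the block permutation, for each block b.
for b in (1, 2):
    assert np.allclose(cyclic_block_perm(B_mat, 2, b), B_mat)

# Condition (D): within each diagonal block, mu_j plus the off-diagonal
# betas of that block is <= 0 (here each block row sums to exactly 0).
for b in (1, 2):
    blk = B_mat[2 * (b - 1) : 2 * b, 2 * (b - 1) : 2 * b]
    assert np.all(blk.sum(axis=1) <= 0)
```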
Historical Remarks and the Idea of the Present Paper
----------------------------------------------------
The nonlinear coupled elliptic system (\[e:A111\]) has its theoretical root in Bose-Einstein condensates. The solutions of Problem (\[e:A111\]) give rise to standing wave solutions of the time-dependent nonlinear coupled Schrödinger system $$\left\{
\begin{array}{lr}
-i\partial_{t}\Phi_{j}-{\Delta}\Phi_{j}=\mu_{j}|\Phi_{j}|^2 \Phi_{j}+\sum_{i\neq j}\beta_{ij}\Phi_{j}|\Phi_{i}|^{2} \,\,\,\,\,\,\, in\ \W ,\\
\Phi_{j}(t,x)\in\mathbb{C}, \,\,\,\,\,\,\,\,j=1,\dots,N,
\end{array}
\right.$$ for $j=1,\dots,N$ and $t>0$. In physics models, the parameters $\mu_{j}$ and $\beta_{ij}$ are the intraspecies and interspecies scattering lengths, respectively. When $\beta_{ij}>0$ the coupling is called attractive; when $\beta_{ij}<0$ it is called repulsive. In this paper we mainly consider the repulsive case, though small attractive couplings are also treated. We refer to [@AA; @MS] for more physics background.
In recent years, a large number of mathematical results on Problem (\[e:A111\]) have appeared, e.g., in [@AC; @BDW; @BW; @DWW; @LWei1; @LWei2; @LLW; @LW1; @LW; @NTTV; @S; @TV; @TW; @WW], studying various aspects of the problem such as the existence theory for the attractive and repulsive cases, bifurcation analysis, synchronization and segregation in different coupling parameter regimes, and the convergence and regularity for large couplings in the repulsive case. We refer to these papers and the references therein. In the repulsive coupling case, solutions tend to segregate componentwise, creating more complex patterns of solutions. The application of variational methods to coupled Schrödinger systems mainly involves minimizing methods and minimax methods. The symmetric mountain pass theorem has been well adapted for a large number of elliptic problems, going back to the celebrated work of Ambrosetti and Rabinowitz [@AR]. For Problem (\[e:A111\]), the first difficulty is that there exist infinitely many so-called semi-trivial solutions (solutions with some components being zero), for which the system degenerates to one with a smaller number of equations. In [@LW1; @LW], Liu and Wang proved the existence of infinitely many non-trivial (all components non-zero) solutions to Problem (\[e:A111\]) via invariant sets of descending flow and via the Nehari manifold method, respectively. In [@DWW] and [@TW], the authors proved multiplicity results for positive solutions to the special case Problem (\[e:All\]), which possesses the componentwise permutation symmetry.
This can be considered a typical result for the repulsive case, showing a distinct difference between a scalar field equation and a coupled nonlinear elliptic system, since for the classical scalar field equation $-\Delta u +u = |u|^{p-2}u$ the uniqueness of positive solutions is well known ([@GNN; @K]) and a long-standing folklore has been the uniqueness of nodal solutions with a prescribed node ([@AWY; @Ta]). In [@LLW], the authors obtained a multiplicity result for solutions to Problem (\[e:A111\]) in general domains with a prescribed number of positive components and a prescribed number of sign-changing components. Recently, for radially symmetric domains, the existence of a nodal solution with a componentwise-prescribed number of nodes was obtained by Liu and Wang in [@LW2] via a gluing method on Nehari manifolds, extending the work for scalar equations ([@BWillem; @Struwe]). More precisely, it is proved in [@LW2] that for any given nonnegative integers $P_1, \dots, P_N$ there is a nodal solution $(u_1, \dots, u_N)$ to Problem (\[e:A111\]) such that $u_i$ has exactly $P_i$ simple zeros, $i=1,\dots,N$.
In the present paper, our main concern is whether, for a componentwise-prescribed node, there are [*multiple such solutions*]{} sharing the given nodal numbers, and in particular whether there are [*infinitely many such solutions*]{}. This is the main goal of our study. Our result gives a construction of infinitely many solutions sharing a given componentwise-prescribed node (Theorems \[t:1\] and \[t:main\]).
To deal with the sign-changing property of multiple solutions, we will employ the heat flow of the coupled heat equations corresponding to Problem (\[e:A111\]). An important part of the present paper lies in the study of the associated heat flow, including existence and regularity results, global existence and blow-up results, the non-increasing property of the sign-changing numbers along flow lines, the boundedness of trajectories, and dynamical properties of some invariant sets of the flow. We refer to [@Amann; @H; @Lu; @Q] for general discussions of parabolic problems. There have been many works in the literature in which elliptic problems are solved with the help of heat flow methods. In [@CMT], Conti, Merizzi and Terracini proved the existence of radial solutions with a prescribed number of nodal domains for a scalar field equation. Utilizing the semilinear parabolic flow and topological degree, they proved a result which had previously been treated only by the Nehari method ([@BWillem; @Struwe]). In [@Ch], Chang established a variational framework and applied it to minimal surface problems. Quittner proved the existence and multiplicity of solutions of several semilinear elliptic problems, together with other dynamical properties, by using the parabolic flow in [@Q1; @Q2; @Q3]. In [@AB], Ackermann and Bartsch developed the idea of the superstable manifold and refined the symmetric mountain pass theorem for sign-changing solutions (c.f. [@BWW Section 2]), which produced multiplicity results, nodal properties and order comparison results. More works on using parabolic flows to treat elliptic problems can be found in the references of these papers. However, there are few results on coupled Schrödinger systems using the heat flow. We mention [@WW], in which a comparison between the components of positive solutions was obtained for two equations.
We will further develop the ideas in these papers by using the heat flow as a descending flow for the variational formulation of Problem (\[e:A111\]). In fact, due to the growth of the nonlinearity, a finer analysis of the global existence of the parabolic flow is also required. Combining the Cazenave-Lions interpolation ([@CL] and [@Ch]) with some estimates in [@Ch; @Q1], we show that the growth of the nonlinearity is compatible with global existence in dimensions $n\leq 3$. A finer analysis of the invariant sets requires $H^1$-bounds for global solutions, which we will prove in Section 2.4. We use a variant of the method in [@Q2003], and we refer to [@FL; @Q2003] for more references on this topic. Another important part of our work involves using some natural permutation symmetry in the coupling patterns. We will make use of the symmetry of the problem, namely that the problem is invariant under a $\mathbb Z_p$ group action generated by a cyclic permutation $\sigma$. With the heat flow serving as a deformation, we construct minimax critical values in the spirit of the symmetric mountain pass theorem via a $\mathbb Z_p$ index. We need to build special symmetric subsets of large $\mathbb Z_p$ index contained in the invariant sets of the flow. Inspired by the approach for scalar equations in [@CMT], our method is a sharper and symmetric variant of [@CMT] for coupled systems. To accomplish this, a certain combination of the methods in [@CMT; @LW; @TW] is needed. While the idea of the Nehari manifold was used in [@LW], we will use a more natural ingredient, the boundary of the stable manifold of the origin, which has the advantage of preserving the non-increasing property of the sign-changing number along flow lines.
The Structure of This Paper
---------------------------
Section 2 mainly deals with the regularity and dynamical properties of the heat flow of the corresponding heat equations, constructing various invariant sets of the flow. We prove the existence result, Theorem \[t:exist\], for the general system in Section 3, and this will also set the stage for the proof of the main result, Theorem \[t:main\], in Section 4. In Section 4, we give the proof of the multiplicity result Theorem \[t:main\] by a minimax argument, constructing symmetric sets of large $\mathbb Z_p$ index inside various flow-invariant sets on the boundary of the domain of attraction of the origin.
Dynamical properties of the associated heat equations
=====================================================
The parabolic flow associated with the elliptic system will be used as a means of descending flow for the variational problems. We start by collecting some relevant results on the existence and regularity of the heat equations. Then we develop some further estimates and construct some invariant sets of the flow which will be used in our proofs later. Let us first fix some notation.
We always use capital letters to represent vector-valued functions and the corresponding lower-case letters with subscripts for their components. For example, $U=(u_{1},\dots,u_{N})$ and $V=(v_{1},\dots,v_{N})$. A solution $U=(u_{1},\dots,u_{N})$ to Problem (\[e:A111\]) is called non-trivial if and only if $u_{j}\not\equiv0$ for every $j=1,\dots,N$. It is called semi-trivial if and only if $U\not\equiv\theta$ but $u_{j}\equiv0$ for some $j$, where $\theta$ is the zero vector.
The norm of the Lebesgue space $L^{p}(\W)$ is denoted by $|\cdot|_{p}$ and the norm of $H^{1}_{0}(\W)$ by $\|\cdot\|$. For products of spaces, such as $(L^{p}(\W))^N$ and $(H^{1}_{0}(\W))^N$, we still use $|\cdot|_{p}$ and $\|\cdot\|$ to denote the norms. When no confusion arises, we sometimes omit the domain $\W$, the boundary condition and the radial condition, and simply denote the corresponding spaces by $L^p$, $H^{1}$, $H^{2}$ and $H^{s}$ for $s\in (1,2)$.
Existence and Regularity Results of the Parabolic Flow
------------------------------------------------------
Instead of the gradient flow, we will combine our variational structure with the following nonlinear coupled parabolic system: $$\label{e:A14}
\left\{
\begin{array}{lr}
\frac{\partial}{\partial t}u_{j}-{\Delta}u_{j}+\lambda_{j} u_{j}=\mu_{j} u^{3}_{j}+\sum_{i\neq j}\beta_{ij} u_{j}u_{i}^{2} \,\,\,\,\,\,\, in\ \W ,\\
u_{j}(t,x)\in H_{0,r}^{s}(\W), \,\,\,\,\,\,\,\,j=1,\dots,N,\\
u_{j}(0,x)=u_{0,j}(x)\in H_{0,r}^{s}(\W), \,\,\,\,\,\,\,\,j=1,\dots,N.
\end{array}
\right.$$ whose equilibria are solutions to Problem (\[e:A111\]). Here we require the coefficients $\lambda_j$, $\mu_j$ and $\beta_{ij}$ to satisfy the conditions of Theorem \[t:exist\] or Theorem \[t:main\] when proving the respective theorem.
We denote by $\eta^{t}(U)$ the solution to the parabolic system with $U=(u_{0,1}, \dots, u_{0,N})$ as its initial data. Sometimes, for the sake of simplicity, we also write $U(t)$.
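As a purely illustrative aside (not part of the paper's arguments), the role of (\[e:A14\]) as a descending flow for the energy can be seen numerically: discretizing a two-component system on the interval $(0,1)$ (a one-dimensional stand-in for a radial domain) with finite differences in space and an explicit Euler step in time makes the discrete energy decrease along the flow. All numerical choices here (grid size, time step, $\lambda=\mu=1$, $\beta=-2$, initial data) are assumptions made for the sketch.

```python
import numpy as np

# Illustrative parameters: Omega = (0,1), N = 2, lambda = mu = 1, beta = -2.
M, dx, dt = 100, 1.0 / 101, 1e-5
lam, mu, beta = 1.0, 1.0, -2.0

def laplacian(u):
    # second-order finite differences with zero Dirichlet boundary values
    return (np.r_[u[1:], 0.0] - 2 * u + np.r_[0.0, u[:-1]]) / dx**2

def energy(U):
    # discrete version of J(U) for two components with a single coupling beta
    J = 0.0
    for j in range(2):
        du = np.diff(np.r_[0.0, U[j], 0.0]) / dx
        J += 0.5 * np.sum(du**2) * dx + 0.5 * lam * np.sum(U[j] ** 2) * dx
        J -= 0.25 * mu * np.sum(U[j] ** 4) * dx
    # the two cross terms beta*u1^2*u2^2 in the sum combine to a factor 1/2
    J -= 0.5 * beta * np.sum(U[0] ** 2 * U[1] ** 2) * dx
    return J

def step(U):
    # one explicit Euler step of the parabolic system (e:A14)
    V = np.empty_like(U)
    for j in range(2):
        other = U[1 - j]
        V[j] = U[j] + dt * (laplacian(U[j]) - lam * U[j]
                            + mu * U[j] ** 3 + beta * U[j] * other**2)
    return V

x = np.linspace(dx, 1 - dx, M)
U = np.array([np.sin(np.pi * x), np.sin(2 * np.pi * x)])
E0 = energy(U)
for _ in range(200):
    U = step(U)
assert energy(U) < E0  # the discrete energy decreases along the flow
```

The time step obeys the usual explicit-scheme stability restriction $dt \lesssim dx^2/2$; the monotone decay of the discrete energy mirrors the dissipation identity for $J$ proved below.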
A special case of Problem (\[e:A14\]) is of the form: $$\label{e:A12}
\left\{
\begin{array}{lr}
\frac{\partial}{\partial t}u_{j}-{\Delta}u_{j}+\lambda u_{j}=\mu u^{3}_{j}+\sum_{i\neq j}\beta u_{j}u_{i}^{2} \,\,\,\,\,\,\, in\ \W ,\\
u_{j}(t,x)\in H_{0,r}^{s}(\W), \,\,\,\,\,\,\,\,j=1,\dots,N,\\
u_{j}(0,x)=u_{0,j}(x)\in H_{0,r}^{s}(\W), \,\,\,\,\,\,\,\,j=1,\dots,N.
\end{array}
\right.$$ It is obvious that an equilibrium point of Problem (\[e:A12\]) is a solution to Problem (\[e:All\]). The parameter $s$ in Problems (\[e:A14\]) and (\[e:A12\]) will be taken in $[1,2]$ depending upon the situation. Readers can find the general theory of parabolic problems in [@Amann; @DM; @H; @Lu; @Q]. We will state a slightly more general result on the existence and regularity for the parabolic system (\[e:A14\]) than we need in this paper. Noticing that the spectrum of $-\Delta+\lambda$ is contained in $[\lambda,\infty)$, we conclude that the operator $-\Delta+\lambda$ is sectorial and, as a consequence, the existence and regularity results can be given. The results are stated and proved in terms of interpolation spaces $X_\alpha$ for $\alpha \in [0,1]$ (e.g., [@DM]). We refer to [@Amann; @DM; @H; @Lu; @Q] once again for more information on sectorial operators and related properties of interpolation spaces. Note that with the range being $L^2$, the domain of the operator is $D(-\Delta+\lambda)=\{u\in H^{2}\,|\,\gamma_{2} u=0\}:=X_1$ (c.f. [@DM (4.6), (4.7)] or [@See]), where $\gamma_{2}$ is the trace operator in $L^2=X_{0}$. Using the relation between these interpolation spaces and the Bessel-potential spaces, $X_{s/2}=H_{0}^{s}(\W)$ for $s\in\left[1,\frac{3}{2}\right)\cup\left(\frac{3}{2},2\right]$, we will state the following theorem in the $H^s_0$ setting (c.f. [@DM Theorem 4.20], [@PrS] and [@Amann87]). The following result for Problem (\[e:A14\]) is useful in the present paper.
\[t:2\] Let $s\in[1,2]\backslash\left\{\frac{3}{2}\right\}$ be fixed. If the initial value $U:=(u_{1},\dots,u_{N})\in(H^{s})^{N}$, there is a unique solution $\eta^{t}(U)=(u_{1}(t),\dots,u_{N}(t))$ to Problem (\[e:A14\]) defined on its maximum interval $[0,T(U))$, satisfying
- it holds that $$\begin{aligned}
\eta^{t}(U)\in & C^1((0,T(U)),(L^2)^N) \cap C([0,T(U)),(H^s)^N);\nonumber
\end{aligned}$$
- for any $U\in (H^{s})^{N}$ and any $\delta\in[0,T(U))$, there are positive constants $r,K$ such that for any $t\in[0,\delta]$ $$\|U-V\|_{(H^{s})^{N}}<r\,\,\,\,\Rightarrow\,\,\,\,\|\eta^{t}(U)-\eta^{t}(V)\|_{(H^s)^N}\leq K\|U-V\|_{(H^{s})^{N}};$$
- the trivial solution $\theta\in (H^s)^N$ is asymptotically stable in $(H^s)^N$.
Part $\mbox{(\uppercase\expandafter{\romannumeral1})}$ of this theorem is due to [@DM Theorem 15.3, Theorem 16.2], and $\mbox{(\uppercase\expandafter{\romannumeral2})}$ follows from [@DM Proposition 16.8]. Assertion $\mbox{(\uppercase\expandafter{\romannumeral3})}$ is due to [@H Theorem 5.1.1].
In the following, we mainly use the result for $s=1$ and $s=2$.
Notice that the theorem also holds if we restrict the spaces to radially symmetric functions. A similar regularity theory can be found in [@Lu].
Global Existence of the Solutions Starting on the Boundary of the Stable Manifold
---------------------------------------------------------------------------------
The propositions in this section are modified versions of some results in [@CMT] and [@Q1]. In this section, we always assume $U(t)=(u_{1}(t),\dots,u_{N}(t))$ is a solution to Problem (\[e:A14\]). The energy of Problem (\[e:A111\]) is the functional $J:(H_{r}^{1})^{N}\to\mathbb{R}$ defined by $$\begin{aligned}
J(U)&:=J(u_{1},\dots,u_{N})\nonumber\\
&=\frac{1}{2}\sum_{j=1}^{N}\int|\nabla u_{j}|^{2}+\lambda_{j}|u_{j}|^{2}-\frac{1}{4}\sum_{j=1}^{N}\left(\int\mu_{j} u_{j}^4+\sum_{i\neq j}\int\beta_{ij} u_{i}^{2}u_{j}^{2}\right),\nonumber\end{aligned}$$ which is a $C^{2}$ functional and satisfies the (PS) condition.
\[p:L2partial\_t\] For a solution $U(t)=(u_{1}(t),\dots,u_{N}(t))$, we have $$\frac{\partial}{\partial t}J(U(t))=-\sum_{j=1}^{N}\int|\partial_{t}u_{j}|^2.$$
[**Proof.**]{} Note that $$\frac{\partial}{\partial t}J(U)=\sum_{j=1}^{N}\nabla_{u_{j}}J(U)\partial_{t} u_{j}.$$ By a direct computation we have $$\begin{aligned}
\nabla_{u_{j}}J(U)\partial_{t} u_{j}&=\int\Big(\nabla u_{j}\cdot\nabla\partial_{t}u_{j} +\lambda_{j} u_{j}\partial_{t}u_{j}\Big)-\int\Big(\mu_{j} u_{j}^{3}\partial_{t}u_{j}+\sum_{i\neq j}\beta_{ij} u_{j}u_{i}^{2}\partial_{t}u_{j}\Big)\nonumber\\
&=\int \partial_{t}u_{j}\Bigg(-\Delta u_{j}+\lambda_{j} u_{j}-\mu_{j} u_{j}^{3}-\sum_{i\neq j}\beta_{ij} u_{j}u_{i}^{2}\Bigg)=-\int|\partial_{t}u_{j}|^{2}.\nonumber\end{aligned}$$ Then the proposition follows.
\[c:A\] Let $$\begin{aligned}
\mathcal{A}=\big\{U\in(H^{1})^{N}\,\big|\,T(U)=\infty\,\,\,\mbox{and}\,\,\lim_{t\to\infty}\eta^{t}(U)=\theta\,\,\,\mbox{in}\,\,(H^1)^N\big\}.\nonumber\end{aligned}$$ Then $\mathcal{A}$ is invariant under the heat flow and is open in $(H^{1})^{N}$.
This is a direct consequence of Theorem \[t:2\].
$\partial\mathcal{A}$ is invariant under the heat flow and $\inf_{U\in\partial\mathcal{A}}J(U)\geq0$.
[**Proof.**]{} Since $J$ is non-increasing along flow lines and $J(\theta)=0$, we have $J\geq0$ on $\mathcal{A}$; the continuity of $J$ with respect to the $(H^1)^N$ norm then gives $J\geq0$ on $\overline{\mathcal{A}}\supset\partial\mathcal{A}$, which is the second part of the lemma. Now we prove the first part.
By the definition of $\mathcal{A}$, if $U\in\partial\mathcal{A}\subset (H^1)^N\backslash\mathcal{A}$, then $\eta^t(U)\in(H^1)^N\backslash\mathcal{A}$ for all $t$, since $\mathcal{A}$ is invariant. Suppose there is a $t_{0}\in(0,T(U))$ such that $\eta^{t_{0}}(U)\in(H^1)^N\backslash\overline{\mathcal{A}}$. By the continuous dependence in Theorem \[t:2\] and the openness of $(H^1)^N\backslash\overline{\mathcal{A}}$, we can find $V\in\mathcal{A}$ close to $U$ such that $\eta^{t_{0}}(V)\in(H^1)^N\backslash\overline{\mathcal{A}}$. But $V\in\mathcal{A}$ and the invariance of $\mathcal{A}$ give $\eta^{t_{0}}(V)\in\mathcal{A}$, a contradiction. The above deduction implies that $\eta^{t}(U)\in\overline{\mathcal{A}}\backslash\mathcal{A}=\partial\mathcal{A}$ for all $t\in[0,T(U))$.
Now, we prove that the flow with its initial data on the boundary of the stable manifold $\partial\mathcal{A}$ has $[0,\infty)$ as its maximal existence interval. Before that, let us prove a lemma under a more general condition.
Suppose $\lim_{t\to T(U)}\big(J(U)-J(\eta^{t}(U))\big)\leq C<\infty$. Then $U(t)$ exists globally in $(H^1)^N$.
[**Proof.**]{} The proof makes use of some arguments from [@CL], [@Ch Lemma 1] and [@Q1 Section 3]. Since $\frac{\partial}{\partial t}J(\eta^t (U))=-\sum_{j=1}^{N}\int|\partial_{t}u_{j}|^{2}$, the condition in the lemma implies that $$\begin{aligned}
\label{ineq:PA1}
\sum_{j=1}^{N}\int_{0}^{t}\int|\partial_{t}u_{j}(s)|^{2}dxds=\big|J(\eta^{t}(U))-J(U)\big|\leq C.\end{aligned}$$ Now we give the $L^2$-estimate. First we have $$\begin{aligned}
\sum_{j=1}^{N}\int|u_{j}(t)|^2 dx&=\sum_{j=1}^{N}\int_{0}^{t}\frac{d}{ds}\int|u_{j}(s)|^2 dxds+\sum_{j=1}^{N}\int|u_{j}(0)|^2 dx\nonumber\\
&=2\sum_{j=1}^{N}\int_{0}^{t}\int u_{j}\cdot\partial_{t}u_{j}dxds+\sum_{j=1}^{N}\int|u_{j}(0)|^2 dx\nonumber\\
&\leq C\Bigg(1+\sum_{j=1}^{N}\int_{0}^{t}\int|u_{j}(s)|^2 dxds\Bigg).\nonumber\end{aligned}$$ Using Gronwall's inequality, we have $$\begin{aligned}
\label{ineq:PA2}
\sum_{j=1}^{N}\int|u_{j}(t)|^2 dx\leq Ce^{Ct}.\end{aligned}$$ Notice that $$\begin{aligned}
&\;\;\;\;\sum_{j=1}^{N}\int|u_{j}(t)|^2 dx-\sum_{j=1}^{N}\int|u_{j}(0)|^2 dx\nonumber\\
&=2\sum_{j=1}^{N}\int_{0}^{t}\int u_{j}(s)\cdot\partial_{t}u_{j}(s)dxds\nonumber\\
&=-8\int_{0}^{t}J(\eta^{s}(U))ds+2\sum_{j=1}^{N}\int_{0}^{t}\int\big(|\nabla u_{j}(s)|^2+\lambda_{j}|u_{j}(s)|^{2}\big) dxds.\nonumber\end{aligned}$$ Therefore, $$\begin{aligned}
\label{ineq:PA3}
\int_{0}^{t}\int|\nabla u_{j}(s)|^2 dxds\leq Ce^{Ct}\end{aligned}$$ follows immediately. Multiplying both sides of the $j$-th equation of Problem (\[e:A14\]) by $u_{j}$, integrating over $\W$ and summing over $j$, we obtain $$\sum_{j=1}^{N}\int u_{j}\cdot\partial_{t}u_{j}+\sum_{j=1}^{N}\|u_{j}\|^{2}= \sum_{j=1}^{N}\int \mu_{j} u_{j}^{4}+\sum_{i\neq j}\beta_{ij} u_{i}^{2}u_{j}^{2}.$$ Now we apply some methods from [@CL] and [@Ch]. For any $T>0$, we consider the norms on the time interval $[0,T]$. By the definition of the energy $J$ and (\[ineq:PA2\]), we have $$\begin{aligned}
\label{inequality:important!!}
\sum_{j=1}^{N}\|u_{j}\|^2 &\leq 4J(U)+\sum_{j=1}^N \int u_{j}\cdot\partial_{t}u_{j}\nonumber\\
&\leq C+\Bigg(\sum_{j=1}^N |u_{j}|_{2}^{2}\Bigg)^{\frac{1}{2}}\cdot\Bigg(\sum_{j=1}^N |\partial_{t}u_{j}|_{2}^{2}\Bigg)^{\frac{1}{2}}\\
&\leq C+C\Bigg(\sum_{j=1}^N |\partial_{t}u_{j}|_{2}^{2}\Bigg)^{\frac{1}{2}}\nonumber\end{aligned}$$ for $t\in[0,T]$. This implies that $$\begin{aligned}
\int_{0}^{T}\big(\int|u_{j}(t)|^{2^{*}}dx\big)^{\frac{4}{2^*}}dt &\leq C\int_{0}^{T}\Bigg(\sum_{j=1}^{N}\|u_{j}(t)\|^{2}\Bigg)^2 dt\nonumber\\
&\leq C(T)+C\sum_{j=1}^{N}\int_{0}^{T}|\partial_{t}u_{j}(t)|_{2}^{2}dt\leq C(T).\nonumber\end{aligned}$$ That is, $u_{j}\in L^{4}((0,T),L^{2^{*}} (\W))$, with $2^{*}=6$ for dimension 3. At the same time, the embedding $H^{1}\hookrightarrow L^{p}$ holds for any $p\geq 2$ in dimension 2. Therefore, we also have $u_{j}\in L^{4}((0,T),L^6(\W))$ in dimension 2. Notice that (\[ineq:PA1\]) implies that $\partial_{t}u_{j}\in L^{2}((0,T),L^{2}(\W))$.
Next we claim that $u_{j}\in L^{\infty}((0,T),L^{\frac{18}{5}}(\W))$ for each $j$; for brevity we write $u=u_{j}$. Using the idea of the Cazenave-Lions interpolation (c.f. [@CL], [@Ch]), we set $v=|u|^{3}$. Let us extend $u$ to $\W\times(0,3T)$ with compact support with respect to the variable $t$. Using the Newton-Leibniz formula and Hölder's inequality, we can compute that $$\begin{aligned}
|u|_{\frac{18}{5}}&=|v|_{\frac{6}{5}}^{\frac{1}{3}}\leq\Big(\int_{t}^{3T}|v_{t}|_{\frac{6}{5}}ds\Big)^{\frac{1}{3}} \leq C\Big(\int_{0}^{3T}|u^{2}\cdot u_{t}|_{\frac{6}{5}}ds\Big)^{\frac{1}{3}}\nonumber\\
&\leq C\Big(\int_{0}^{3T}|u^{2}|_{3}|u_{t}|_{2}ds\Big)^{\frac{1}{3}}\nonumber\end{aligned}$$ with $$\frac{5}{6}=\frac{1}{3}+\frac{1}{2}.$$ Hence, $$\begin{aligned}
\label{ineq:CL1}
|u|_{\frac{18}{5}}&\leq C\Big(\int_{0}^{3T}|u|^{2}_{6}|u_{t}|_{2}ds\Big)^{\frac{1}{3}}\leq C\Big(\int_{0}^{3T}|u|^{4}_{6}ds\Big)^{\frac{1}{6}} \Big(\int_{0}^{3T}|u_{t}|^{2}_{2}ds\Big)^{\frac{1}{6}}\nonumber\end{aligned}$$ with $$1=\frac{1}{2}+\frac{1}{2}.$$ Therefore, $$\begin{aligned}
\sup_{t\in(0,T)}|u|_{\frac{18}{5}}&\leq C\Big(\int_{0}^{3T}|u|^{4}_{6}ds\Big)^{\frac{1}{6}}\Big(\int_{0}^{3T}|u_{t}|^{2}_{2}ds\Big)^{\frac{1}{6}}\nonumber\\
& \leq C(T)\nonumber\end{aligned}$$ by using Hölder inequality with respect to the variable $t$. This implies that $u\in L^{\infty}((0,T),L^{\frac{18}{5}}(\W))$.
Multiplying both sides of the $j$-th equation by $u_{j}^{3}$ and integrating over $\W$, we obtain $$\begin{aligned}
\frac{d}{dt}|u^{2}_{j}|^{2}_{2}+\|u_{j}^{2}\|^{2} &\leq C\Big(\mu_{j}|u_{j}^{2}|_{3}^{3}+\sum_{i\neq j}\beta_{ij}\int u_{j}^{4}u_{i}^{2}\Big)\nonumber\\
&\leq C|u_{j}^{2}|_{3}^{3}.\nonumber\end{aligned}$$ Using the interpolation inequality, we have $$\frac{1}{3}=\frac{\frac{4}{7}}{6}+\frac{\frac{3}{7}}{\frac{9}{5}}$$ and $$\begin{aligned}
\frac{d}{dt}|u^{2}_{j}|^{2}_{2}+\|u_{j}^{2}\|^{2} &\leq C|u_{j}^{2}|_{\frac{9}{5}}^{\frac{9}{7}}|u_{j}^{2}|_{6}^{\frac{12}{7}}\nonumber\\
&\leq C|u_{j}^{2}|_{\frac{9}{5}}^{\frac{9}{7}}\|u_{j}^{2}\|^{\frac{12}{7}}\nonumber\\
&\leq \frac{1}{2}\|u_{j}^{2}\|^{2}+C|u_{j}|_{\frac{18}{5}}^{18}\nonumber\\
&\leq \frac{1}{2}\|u_{j}^{2}\|^{2}+C.\nonumber\end{aligned}$$ It follows that $$|u_{j}^{2}(t)|^{2}_{2}\leq C\int_{0}^{T}dt+|u_{j}^{2}(0)|_{2}^{2}\leq C(T),$$ i.e. $u_{j}\in L^{\infty}((0,T), L^{4}(\W))$ for $j=1,\dots,N$. Using the definition of the energy $J$, $$\sum_{j=1}^{N}\|u_{j}\|^{2}\leq 2J(\eta^{t}(U))+\frac{1}{2}\sum_{j=1}^{N}\int\Big(\mu_{j} u_{j}^{4}+\sum_{i\neq j}\beta_{ij} u_{i}^{2}u_{j}^{2}\Big) \leq C.$$ Hence $u_{j}\in L^{\infty}((0,T),H_{0}^{1}(\W))$. Therefore, $T(U)=\infty$ since $T>0$ is arbitrary.
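The exponents in the interpolation and Young steps above can be verified directly; the following is our own bookkeeping and introduces no new estimate:

```latex
% Interpolation exponent: 1/3 = (4/7)/6 + (3/7)/(9/5), since
\frac{4/7}{6}+\frac{3/7}{9/5}=\frac{2}{21}+\frac{5}{21}=\frac{7}{21}=\frac{1}{3},
\qquad\text{giving}\qquad
|u_{j}^{2}|_{3}\leq|u_{j}^{2}|_{6}^{\frac{4}{7}}\,|u_{j}^{2}|_{\frac{9}{5}}^{\frac{3}{7}}.
% Young's inequality with conjugate exponents 7/6 and 7
% (note (12/7)(7/6)=2 and (9/7)\cdot 7=9):
|u_{j}^{2}|_{\frac{9}{5}}^{\frac{9}{7}}\,\|u_{j}^{2}\|^{\frac{12}{7}}
\leq\epsilon\|u_{j}^{2}\|^{2}+C_{\epsilon}\,|u_{j}^{2}|_{\frac{9}{5}}^{9},
\qquad
|u_{j}^{2}|_{\frac{9}{5}}^{9}=|u_{j}|_{\frac{18}{5}}^{18}.
```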
\[remark:globalexistence\] Under the same conditions, we can show that $T(U)=\infty$, i.e. $\eta^t(U)\in C((0,\infty),(H^s)^N)$ for any $s\in (1,2]\setminus \{\frac{3}{2}\}$. To this end, we only need to use the formula of variation of constants and the fact that $\sum_{j=1}^{N}\Big(\mu_{j}u_{j}^{3}+\sum_{i\neq j}\beta_{ij}u_{i}^{2}u_{j}\Big)\in L^{\infty}((0,T),L^{2}(\W))$ (cf. [@Lu Proposition 7.1.8]). In particular, by the same method, we can prove $\eta^t(U)\in C((0,\infty),(H^2)^N)$ when $U\in(H^2)^N$.
The following corollary follows from the fact that $\inf_{\partial\mathcal{A}}J\geq 0$ and that the energy $J$ is non-increasing along the flow line.
For any $U\in\partial\mathcal{A}$, $T(U)=\infty$.
The $H^1$ Bounds of Solutions Starting on $\partial \mathcal{A}$
--------------------------------------------------------------------
We will borrow some ideas used by Quittner in [@Q2003], which can be adapted for our situation (see also [@FL] for some related work).
\[l2\] Let $U(t)$ be a global solution to (\[e:A14\]) such that $\lim_{t\to\infty} J (U(t))= E_{1}$ is finite. Then there is $C>0$ depending continuously upon the $L^2$-norm of the initial data, the initial energy $E_{0}:=J(U(0))$ and $E_{1}$, such that for any $t\geq0$, $|U(t)|_{2}\leq C$.
[**Proof.**]{} Suppose $0\leq t_{0}\leq t<+\infty$. Denote $\Phi(t)=\int_{t_{0}}^{t}|U(s)|_{2}^{2}ds$ and recall that $E_{0}=J(U(0))$. Using the computation in the previous subsection, we have $$\begin{aligned}
\left |\sum_{j=1}^{N}|u_{j}(t)|_{2}^{2}-\sum_{j=1}^{N}|u_{j}(t_{0})|_{2}^{2} \right |&=2\left| \sum_{j=1}^{N}\int_{t_{0}}^{t}\int_{\W}u_{j}\partial_{t}u_{j}dxds \right|\nonumber\\
&\leq2\Big(\sum_{j=1}^{N}\int_{t_{0}}^{t}|\partial_{t}u_{j}|_{2}^2 ds\Big)^{\frac{1}{2}}\Big(\sum_{j=1}^{N}\int_{t_{0}}^{t}|u_{j}|_{2}^2 ds\Big)^{\frac{1}{2}}\nonumber\\
&\leq2\sqrt{E_{0}-E_{1}}\Phi(t)^{\frac{1}{2}},\nonumber\end{aligned}$$ which gives $$\begin{aligned}
\Phi'(t) \leq |U(t_{0})|_{2}^{2} + 2\sqrt{E_{0}-E_{1}} \Phi(t)^{\frac{1}{2}}.\end{aligned}$$ Then we can compute that $$\begin{aligned}
2\Big(\sqrt{\Phi(t)}-|U(t_{0})|_2\Big)'_{+}=\frac{\Phi'(t)}{\sqrt{\Phi(t)}}\chi_{\{\Phi>|U(t_{0})|_{2}^{2}\}}\leq|U(t_{0})|_{2}+2\sqrt{E_{0}-E_{1}}.\nonumber\end{aligned}$$ This gives $$\begin{aligned}
\sqrt{\Phi(t)}\leq|U(t_{0})|_2+\big(|U(t_{0})|_2+2\sqrt{E_{0}-E_{1}}\big)\frac{t-t_{0}}{2}.\end{aligned}$$ Combining the above deductions, we have $$\begin{aligned}
\label{inequ:L2bound3}
&\left |\sum_{j=1}^{N}|u_{j}(t)|_{2}^{2}-\sum_{j=1}^{N}|u_{j}(t_{0})|_{2}^{2}\right |\nonumber\\
\leq 2 &\sqrt{E_{0}-E_{1}}\Bigg(|U(t_{0})|_2 +\big(|U(t_{0})|_2+2\sqrt{E_{0}-E_{1}}\big)\frac{t-t_{0}}{2}\Bigg).\end{aligned}$$ Set $$\begin{aligned}
C_{1}=\frac{1}{\min_{j}\lambda_{j}}(9+8E_{0}+8E_{1})+2|U(0)|_{2}^{2}+81(E_{0}-E_{1})+3.\end{aligned}$$ We claim that $|U(t)|_{2}^{2}\leq C_{1}$ for any $t\geq0$, and we prove the claim by contradiction. Suppose that there is a $\tau>0$ such that $|U(\tau)|_{2}^2 >C_{1}$. First, since $\sum_{j=1}^{N}\int_{0}^{\infty}|\partial_{t}u_{j}|_{2}^{2}ds=E_{0}-E_{1}<\infty$, we can find a sequence $t_{k}\to\infty$ such that $\nabla J(U(t_{k}))\to0$ and $J(U(t_k))\to E_{1}$ as $k\to\infty$. Thus $(U(t_k))\subset (H^1)^N$ is a (PS) sequence. It is easy to check (e.g., [@W]) that $\|U(t_{k})\|^{2}\leq4(1+E_{1})$. We also have
- $|U(0)|_{2}^2 < \frac{C_{1}}{2}$, since $C_{1}\geq2|U(0)|_{2}^{2}+3$;
- $|U(t_k)|_{2}^2 \leq\frac{4(1+E_{1})}{\min_{j}\lambda_{j}}<\frac{C_{1}}{2}$.
Let $k$ be the integer such that $\tau\in[t_{k-1},t_k]$ and, without loss of generality, let us assume $|U(\tau)|_{2}^{2}=\max_{[t_{k-1},t_k]}|U(s)|_{2}^{2}$. Then for any $t\in[\tau,\tau+1]$, applying (\[inequ:L2bound3\]) and the fact that $C_{1}>81(E_{0}-E_{1})+1$, $$\begin{aligned}
|U(t)|_{2}^{2}&\geq|U(\tau)|_{2}^{2}-2\sqrt{E_{0}-E_{1}}\Big(\frac{5}{2}|U(\tau)|_{2}+3\sqrt{E_{0}-E_{1}}\Big)\nonumber\\
&\geq|U(\tau)|_{2}^{2}-5\sqrt{E_{0}-E_{1}}|U(\tau)|_{2}-6(E_{0}-E_{1})\nonumber\\
&>\frac{|U(\tau)|_{2}^{2}}{2}>\frac{C_{1}}{2}>|U(t_k)|_{2}^{2}.\nonumber\end{aligned}$$ This implies that $t_{k}\notin[\tau,\tau+1]$. Therefore $\tau+1<t_{k}$ and $\tau+1\in[t_{k-1},t_{k}]$. Consequently we have $|U(\tau+1)|_{2}\leq|U(\tau)|_{2}$. From the above computation, we also have $|U(t)|_{2}^{2}\geq\frac{C_{1}}{2}$ for $t\in[\tau,\tau+1]$. And now, since $C_{1}>\frac{1+8E_{0}}{\min_{j}\lambda_{j}}$, we have $$\begin{aligned}
0&\geq|U(\tau+1)|_{2}^{2}-|U(\tau)|_{2}^{2}=2\sum_{j=1}^{N}\int_{\tau}^{\tau+1}\int_{\W}u_{j}\partial_{t}u_{j}dxds\nonumber\\
&\geq-8\int_{\tau}^{\tau+1}J(U(s))ds+2\int_{\tau}^{\tau+1}\sum_{j=1}^{N}\lambda_{j}|u_{j}|_{2}^{2}(s)ds\nonumber\\
&\geq-8E_{0}+2\min_{j}\lambda_{j}\int_{\tau}^{\tau+1}|U(s)|_{2}^{2}ds\nonumber\\
&\geq-8E_{0}+\min_{j}\lambda_{j}C_{1}>1,\nonumber\end{aligned}$$ which is a contradiction. Hence we have $|U(t)|_{2}^{2}\leq C_{1}$ for any $t\geq0$.
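For completeness, we record why the specific constant $C_{1}$ suffices: each inequality invoked in the proof is implied by one group of terms in its definition (our own bookkeeping, using $E_{0}-E_{1}=\sum_{j}\int_{0}^{\infty}|\partial_{t}u_{j}|_{2}^{2}\,ds\geq0$ and $E_{1}\geq0$):

```latex
\begin{aligned}
C_{1}&\geq2|U(0)|_{2}^{2}+3
  &&\Longrightarrow\ |U(0)|_{2}^{2}+1<\tfrac{C_{1}}{2};\\
C_{1}&\geq\tfrac{9+8E_{0}+8E_{1}}{\min_{j}\lambda_{j}}
      \geq\tfrac{8(1+E_{1})+1}{\min_{j}\lambda_{j}}
  &&\Longrightarrow\ \tfrac{4(1+E_{1})}{\min_{j}\lambda_{j}}<\tfrac{C_{1}}{2};\\
C_{1}&\geq81(E_{0}-E_{1})+3
  &&\Longrightarrow\ C_{1}>81(E_{0}-E_{1})+1;\\
\min_{j}\lambda_{j}\,C_{1}&\geq9+8E_{0}+8E_{1}
  &&\Longrightarrow\ \min_{j}\lambda_{j}\,C_{1}-8E_{0}>1.
\end{aligned}
```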
For a further discussion on the boundedness of the trajectories of the flow in $(H^{1})^N$, we need a result on the maximal regularity of parabolic equations from [@Amann Theorem 4.10.7]. Spaces involving time will be used here, such as $L^{p}(I,X)$ and $W^{1,p}(I,X)$, where $I$ is an interval and $X$ is a Banach space with norm $\|\cdot\|_X$; the norms are defined respectively by $$\|u\|_{L^{p}(I,X)}=\Bigg(\int_{I}\|u(t)\|_X^{p}dt\Bigg)^{\frac{1}{p}}$$ and $$\|u\|_{W^{1,p}(I,X)}=\Bigg(\int_{I}\left\|\frac{du}{dt}(t)\right\|_X^{p}+\|u(t)\|_X^{p}dt\Bigg)^{\frac{1}{p}}.$$
\[t:AmannRegularity\] Consider the linear parabolic problem $$\label{e:linear}
\left\{
\begin{array}{lr}
\frac{\partial}{\partial t}u-{\Delta}u+\lambda_{0} u=f \,\,\,\,\,\,\,\,\mbox{in}\,\,\W,\\
u(t,x)=0\,\,\,\,\,\,\,\,\mbox{on}\,\,\partial\W,\\
u(0,x)=u_{0}(x) \,\,\,\,\,\,\,\,\mbox{in}\,\,\W,
\end{array}
\right.$$ where $\lambda_{0}>0$. Given a compact interval $I=[0,T]$, $f\in L^{q}(I,L^{p}(\W))$ and $1<p,q<\infty$, the solution $u$ to Problem (\[e:linear\]) satisfies $$\begin{aligned}
\label{inequ:maximalregularity}
\|u\|_{W^{1,q}(I,L^{p}(\W))}+\|u\|_{L^{q}(I,W^{2,p}(\W))}\leq C_{MR}\big(\|u_{0}\|_{W^{s,p}(\W)}+\|f\|_{L^{q}(I,L^{p}(\W))}\big),\end{aligned}$$ where $C_{MR}$ is a positive constant independent of $f$, $u_{0}$ and $I$ and $s>2\left(1-\frac{1}{q}\right)$.
In fact, this is a special case of [@Amann Theorem 4.10.7]; we only state this version for our purposes here. In the original version stated in [@Amann], the first term on the right hand side of (\[inequ:maximalregularity\]) takes the form $\|u_{0}\|_{X_{p,q}}$, where the interpolation space $X_{p,q}=(L^{p}(\W),W^{2,p}(\W))_{1-\frac{1}{q},q}$ satisfies $W^{s,p}(\W)\hookrightarrow X_{p,q}$ for $s>2\left(1-\frac{1}{q}\right)$. We refer to [@LM; @Lu; @T] once again for details on interpolation spaces.
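As a sanity check for the application in the next proof (where $p=2$ and $q=\frac{4}{3}$ will be taken; this choice is made below, not in the theorem itself), the admissibility condition on $s$ reads:

```latex
s>2\Big(1-\frac{1}{q}\Big)=2\Big(1-\frac{3}{4}\Big)=\frac{1}{2},
\qquad\text{so every }s\in\Big(\frac{1}{2},2\Big]\text{ is admissible; in particular }s\in(1,2].
```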
Now we prove the $H^1$-boundedness of the global solutions.
\[l:H1bounded\] Let $U(t)$ be a global solution to Problem (\[e:A14\]) with $U(0)\in (H^s)^N$ for $s\in(1,2]$ such that $\lim_{t\to\infty} J(U(t))\geq0$. Then there is a constant $C>0$ depending only on the $H^s$-norm of $U(0)$ and the initial energy $E_{0}$ such that $\|U(t)\|\leq C$ for any $t\geq0$.
[**Proof.**]{} Denote the interval $I=[t_{0},t_{0}+T]$. Firstly, using the global $L^2$ bound of $U(t)$ and the same computation as in (\[inequality:important!!\]), we have $$\begin{aligned}
\label{inequality:H1bounded11111111111111}
\sum_{j=1}^{N}\|u_{j}\|^2\leq C(C_{1})\big(1+|\partial_{t}U|_2\big).\end{aligned}$$ Here $C_1$ is the $L^2$ bound of the solution $U(t)$. Using Theorem \[t:AmannRegularity\] with respect to each equation in Problem (\[e:A14\]) and putting $p=2$ and $q=\frac{4}{3}$ (so that any $s\in\left(\frac{1}{2},2\right]$ is admissible), we have $$\|u_{j}\|_{L^{\frac{4}{3}}(I,H^{2}(\W))}\leq C_{MR}\left(\sum_{i=1}^{N}\|u_{i}(t_{0})\|_{H^s}+\Big\|\mu_{j}u_{j}^3+\sum_{i\neq j}\beta_{ij}u^{2}_{i}u_{j}\Big\|_{L^{\frac{4}{3}}(I,L^{2}(\W))}\right).$$ The last term can be estimated as follows: $$\begin{aligned}
\Big\|\mu_{j}u_{j}^3+\sum_{i\neq j}\beta_{ij}u^{2}_{i}u_{j}\Big\|_{L^{\frac{4}{3}}(I,L^{2}(\W))} &\leq C\Bigg(\int_{t_{0}}^{T+t_{0}}\Big(|u_{j}|_{6}^{3}+\sum_{i=1}^{N}|u_{i}^2u_{j}|_{2}\Big)^{\frac{4}{3}}ds\Bigg)^{\frac{3}{4}}\nonumber\\
&\leq C\Bigg(\int_{t_{0}}^{T+t_{0}}\Big(\sum_{i=1}^{N}|u_{i}|_{6}^{3}\Big)^{\frac{4}{3}}ds\Bigg)^{\frac{3}{4}}\nonumber\\
&\leq C\Bigg(\int_{t_{0}}^{T+t_{0}}\Big(\sum_{i=1}^{N}\|u_{i}\|^{4}\Big)ds\Bigg)^{\frac{3}{4}}\nonumber\\
&\leq C\Bigg(\int_{t_{0}}^{T+t_{0}}\Big(\sum_{i=1}^{N}\|u_{i}\|^{2}\Big)^{2}ds\Bigg)^{\frac{3}{4}}.\nonumber\end{aligned}$$ Using (\[inequality:H1bounded11111111111111\]) and Proposition \[p:L2partial\_t\], we have $$\begin{aligned}
\Big\|\mu_{j}u_{j}^3+\sum_{i\neq j}\beta_{ij}u^{2}_{i}u_{j}\Big\|_{L^{\frac{4}{3}}(I,L^{2}(\W))} &\leq C(C_{1})\Big(\int_{t_{0}}^{t_{0}+T} \big(1+|\partial_{t}U|_{2}^{2}\big)ds\Big)^{\frac{3}{4}}\nonumber\\
&\leq C(C_{1})(T+E_{0})^{\frac{3}{4}}\leq C(C_{1},E_{0})(T+1)^{\frac{3}{4}}.\nonumber\end{aligned}$$
Now set $$\begin{aligned}
\label{constant222222222}
C_{2}=8(NC_{MR})^{2}\left((\|U(0)\|_{(H^s)^N}+1)^2+C(C_{1},E_{0})^{2}(2NC_{MR}+1)^{2}\right)+C(C_{1},E_{0})+1.\end{aligned}$$ Let $T=\big(2NC_{MR}+1\big)^{\frac{4}{3}}$ and $t_{0}=0$. Then $\|U(0)\|_{(H^s)^N}\leq C_{2}$, and we have $$\begin{aligned}
\Bigg(\int_{0}^{T}\|U\|_{(H^{2})^N}^{\frac{4}{3}}ds\Bigg)^{\frac{3}{4}}&\leq \sum_{j=1}^{N} \Bigg(\int_{0}^{T}\|u_{j}\|_{H^{2}}^{\frac{4}{3}}ds\Bigg)^{\frac{3}{4}}\nonumber\\
&\leq NC_{MR}\Big(\|U(0)\|_{(H^{s})^{N}}+C(C_{1},E_{0})(T+1)^{\frac{3}{4}}\Big).\nonumber\end{aligned}$$ Therefore, there must be a positive number $t'\in(0,T)$ such that $$\begin{aligned}
\|U(t')\|_{(H^{2})^{N}}&\leq NC_{MR}\frac{\|U(0)\|_{(H^s)^N}}{T^{\frac{3}{4}}}+NC(C_{1},E_{0})C_{MR}\Bigg(1+\frac{1}{T}\Bigg)^{\frac{3}{4}}\nonumber\\
&\leq\frac{NC_{MR}\Big(\|U(0)\|_{(H^s)^N}+C(C_{1},E_{0})\Big)}{T^{\frac{3}{4}}}+NC_{MR}C(C_{1},E_{0})\leq C_{2}.\nonumber\end{aligned}$$ We may assume that $t'$ is the largest such number in $(0, T]$. With the above results, we replace $t_{0}=0$ by $t_{0}=t'$; note that we can select $s=2$ for the second and later steps. By the same method, we can find a largest $t''\in(t',t'+T]$ such that $\|U(t'')\|_{(H^2)^N}\leq C_2$. Inductively, we can find a sequence $(t'_{l})_{l}$ such that
- $0<t'_{l}-t'_{l-1}\leq T$;
- $\lim_{l\to\infty}t'_{l}=\infty$;
- $\|U(t'_{l})\|_{(H^2)^N}\leq C_{2}$.
The first and the last assertions are obvious. For $\lim_{l\to\infty}t'_{l}=\infty$, we first observe $$\begin{aligned}
\int_{0}^{T}\|U(s)\|_{(H^2)^N}^{\frac{4}{3}}ds&\leq4(NC_{MR})^2\left(\|U(0)\|_{(H^s)^N}^2 +C(C_{1},E_{0})^2\Big((2NC_{MR}+1)^{\frac{4}{3}}+1\Big)\right)\nonumber\\
&\leq C_{2}.\nonumber\end{aligned}$$ This implies that $$\begin{aligned}
C_{2}&\geq\int_{0}^{T}\|U(s)\|_{(H^2)^N}^{\frac{4}{3}}ds=\int_{\|U\|_{(H^2)^N}<C_{2}}+\int_{\|U\|_{(H^2)^N}\geq C_{2}}\|U(s)\|_{(H^2)^N}^{\frac{4}{3}}ds\nonumber\\
&\geq(T-\delta)C_{2}^{\frac{4}{3}},\nonumber\end{aligned}$$ where $\delta=|\{t\in[0,T]\,|\,\|U(t)\|_{(H^2)^N}<C_{2}\}|$. This gives $\delta\geq T-C_{2}^{-\frac{1}{3}}>0$. Therefore for any $l=0,1,\dots$, we have $t'_{l+1}-t'_{l}\geq\delta>0$. Using the method in Section 2.2 on every interval $[t'_{l},t'_{l}+T]$, we can prove that $\|U(t)\|_{(H^1)^N}\leq C(C_{2})=C(\|U(0)\|_{(H^s)^N})$ for any $t\in [t'_{l},t'_{l}+T]$ and any $l=1,2,\dots$. Therefore, $\|U(t)\|_{(H^1)^N}$ is bounded for $t\geq0$, and the upper bound depends continuously on $\|U(0)\|_{(H^s)^N}$.
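The measure-theoretic step at the end of the proof can be spelled out as follows, with $\delta=|\{t\in[0,T]\,|\,\|U(t)\|_{(H^{2})^{N}}<C_{2}\}|$ as above:

```latex
C_{2}\geq\int_{0}^{T}\|U(s)\|_{(H^{2})^{N}}^{\frac{4}{3}}\,ds
\geq(T-\delta)\,C_{2}^{\frac{4}{3}}
\quad\Longrightarrow\quad
T-\delta\leq C_{2}^{-\frac{1}{3}}\leq1,
```

since $C_{2}\geq1$ by its definition; as $T=(2NC_{MR}+1)^{\frac{4}{3}}>1$, this yields $\delta\geq T-C_{2}^{-\frac{1}{3}}>0$.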
We now give two corollaries which will be useful in the following paragraphs.
For any $U\in\partial\mathcal{A}$, $\|\eta^{t}(U)\|_{(H^1)^N}\leq C$ for any $t\geq0$. Here, the constant $C>0$ depends continuously on the initial data.
With the results in Remark \[remark:globalexistence\], if we assume $U(0)\in(H^s)^N$, then we can conclude $\eta^t(U)\in L^{\infty}((0,\infty),(H^s)^N)$ for $s\in\left[1,\frac{3}{2}\right)\cup\left(\frac{3}{2},2\right]$ via the formula of variation of constants.
Finer Nodal Properties
----------------------
It is well known that for scalar equations the number of sign changes is non-increasing along the heat flow ([@An; @CMT; @Mat]). For coupled systems this was proved in [@DWW] for two equations. We can prove that this is also the case for our system (\[e:Al3\]) using the arguments in [@CMT] and [@DWW]; we omit the details here. However, we need a more specific version of this theorem from [@CMT], which is based on the notion of bumps of a radial function.
Let us recall this from [@CMT]. The number of sign changes of a continuous radial function $u=u(|x|)$, denoted by $n(u)$, is defined as the largest number $k$ such that there exists a sequence of real numbers $0<x_{0}<x_{1}<\dots<x_{k}$ such that $$u(x_{j})\cdot u(x_{j+1})<0,\,\,\,\,\,\,j=0,\dots,k-1.$$ We call $n(u)$ the nodal number of the function $u$. We always assume that the functions under discussion have finite nodal numbers. For a radial function $u$ with $n(u)=k$ and $u(x_{0})>0$, we define its $q$-th bump, for $q=1,\dots,k+1$, by $$\begin{aligned}
u_{1}(x)&=\chi_{\{u>0\}}\cdot\chi_{\{|x|<x_{1}\}}\cdot u(x), \nonumber\\
u_{q}(x)&=\chi_{\{(-1)^{q-1}u>0\}}\cdot\chi_{\{x_{q-2}<|x|<x_{q}\}}\cdot u(x),\,\,\,\,q=2,\dots,k+1.\nonumber
\end{aligned}$$ For a radial function $u$ with $n(u)=k$ and $u(x_{0})<0$, we define its $q$-th bump $q=1,...,k+1$ by $$\begin{aligned}
u_{1}(x)&=\chi_{\{u<0\}}\cdot\chi_{\{|x|<x_{1}\}}\cdot u(x), \nonumber\\
u_{q}(x)&=\chi_{\{(-1)^{q-1}u<0\}}\cdot\chi_{\{x_{q-2}<|x|<x_{q}\}}\cdot u(x),\,\,\,\,q=2,\dots,k+1.\nonumber
\end{aligned}$$ To avoid confusion, for the $j$-th component $u_{j}$ of $U=(u_{1},\dots,u_{N})$, we denote its $q$-th bump by $u_{j,q}$.
For the solution $U(t)$ to Problem (\[e:A14\]) with initial value $U\in (H_{r}^{2})^{N}$ for $t\in[0,T(U))$, we denote its $j$-th component by $u(t)_{j}$. By $u(t)_{j,q}$ we denote the $q$-th bump of its $j$-th component.
In this subsection, we always assume that the initial data satisfies $U(0)\in(H^2)^N$. Theorem \[t:2\] ensures that $\eta^t(U)\in(H^2)^N$ for any $t\geq0$.
We first consider the case $\beta_{ij}\leq0$ for all $i,j=1,\dots,N$ with $i\neq j$.
\[p:L4\] There is a positive number $\rho>0$ such that if $|u_{j,q}|_{4}<\rho$ then $|u_{j,q}(t)|_{4}<\rho$ for $t\geq0$.
[**Proof.**]{} By Theorem \[t:2\] and the inclusion $H^{2}\subset C(\W)$ for dimensions $n=2,3$, $U(t)$ is continuous in the spatial variable. As a consequence, the nodal number of $U(t)$, $n\left(U(t)\right)$, is well-defined. Hence, there exists a small $\varepsilon>0$ such that if $$(-1)^{q+1}u_{j}(x_{q},0)>0,$$ then $$(-1)^{q+1}u_{j}(x_{q},t)>0$$ for any $t\in[0,\varepsilon]$. Hence, due to the definition of the bump $u_{j,q}$ (cf. Section 2.1), the derivative $\frac{\partial}{\partial t}\int|u_{j,q}|^{4}$ is well-defined. Notice that $$\begin{aligned}
\frac{\partial}{\partial t}\int|u_{j,q}|^{4} &=4\int u_{j,q}^{3}\partial_{t}u_{j,q}=4\int u_{j,q}^{3}\partial_{t}u_{j}\nonumber\\
&=4\int u_{j,q}^{3}\Big(\Delta u_{j}-\lambda_{j} u_{j}+\mu_{j} u_{j}^{3}+\sum_{i\neq j}\beta_{ij} u_{j}u_{i}^{2}\Big)\nonumber\\
&=-3\int |\nabla(u_{j,q}^{2})|^{2}-4\lambda_{j} \int u_{j,q}^{4}+4\mu_{j} \int u_{j,q}^{6}+4\sum_{i\neq j}\beta_{ij} \int u_{j,q}^{4}u_{i}^{2}.\nonumber\end{aligned}$$ Denote $W=u_{j,q}^{2}$. By computing $$\frac{1}{3}=\frac{\frac{1}{2}}{6}+\frac{1-\frac{1}{2}}{2},$$ we have from Sobolev embedding of dimensions 2 and 3, $$|W|_{3}^{3}\leq C\|W\|^{\frac{3}{2}}|W|_{2}^{\frac{3}{2}}.$$ Therefore, $$\begin{aligned}
\frac{\partial}{\partial t}\int |u_{j,q}|^{4}&\leq-C\|W\|^{2}+C\|W\|^{\frac{3}{2}}|W|_{2}^{\frac{3}{2}}\nonumber\\
&\leq-C\|W\|^{\frac{3}{2}}|W|_{2}^{\frac{1}{2}}+C\|W\|^{\frac{3}{2}}|W|_{2}^{\frac{3}{2}}\nonumber\\
&=-C\|W\|^{\frac{3}{2}}|W|_{2}^{\frac{1}{2}}\big(1-C|W|_{2}\big)\nonumber\\
&=-C\|W\|^{\frac{3}{2}}|W|_{2}^{\frac{1}{2}}\big(1-C|u_{j,q}|_{4}^{2}\big)<0\nonumber\end{aligned}$$ for $|u_{j,q}|_{4}$ small enough.
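In particular, the computation above yields an explicit admissible threshold. If $C_{*}$ denotes a common value of the constants in the final display (our own notation), then

```latex
\frac{\partial}{\partial t}\int|u_{j,q}|^{4}
\leq-C_{*}\|W\|^{\frac{3}{2}}|W|_{2}^{\frac{1}{2}}\big(1-C_{*}|u_{j,q}|_{4}^{2}\big)<0
\qquad\text{whenever}\qquad
0<|u_{j,q}|_{4}<\rho:=C_{*}^{-\frac{1}{2}},
```

so $|u_{j,q}(t)|_{4}$ is strictly decreasing whenever it lies in $(0,\rho)$, and the bound $|u_{j,q}(t)|_{4}<\rho$ propagates for all $t\geq0$.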
If there is a pair $(i_{0},j_{0})$ with $i_{0}\neq j_{0}$, $i_{0},j_{0}\in\{1,\dots,N\}$, such that $\beta_{i_{0}j_{0}}>0$, the situation becomes more delicate.
\[l:invariant2\] Let $U(t)$ be a solution with initial data $U(0)\in(H^2)^N$. Then there is a $b=b(\|U(0)\|_{(H^2)^N})>0$ such that if $\beta_{ij}<b$ for all $i,j=1,\dots,N$ with $i\neq j$, then the conclusion of Proposition \[p:L4\] holds true.
[**Proof.**]{} With the same computation, we have $$\begin{aligned}
\frac{\partial}{\partial t}\int|u_{j,q}|^{4} &=-3\int |\nabla(u_{j,q}^{2})|^{2}-4\lambda_{j} \int u_{j,q}^{4}+4\mu_{j} \int u_{j,q}^{6}+4\sum_{i\neq j}\beta_{ij} \int u_{j,q}^{4}u_{i}^{2}\nonumber\\
&\leq-C_{0}\|W\|^2+4\mu_{j}|W|_{3}^{3}+4\max_{ij}\beta_{ij}\sum_{i=1}^{N}\int W^2 u_{i}^2 ,\nonumber\end{aligned}$$ where $C_{0}=\min\{3,4\lambda_{j}\}$ and $W=u_{j,q}^2$. Now we deal with the last term and have $$\begin{aligned}
4\max_{ij}\beta_{ij}\sum_{i=1}^{N}\int W^2 u_{i}^2 &=4\max_{ij}\beta_{ij}\int W^2 \Big(\sum_{i=1}^{N}u_{i}^2\Big)\leq 4N\max_{ij}\beta_{ij}|W|_{3}^{2}|U|_{6}^{2}\nonumber\\
&\leq 4N\max_{ij}\beta_{ij}S_{3}^{2}S_{6}^{2}\|W\|^{2}\|U\|^{2}\nonumber\\
&\leq 4NbC_{3}(U(0))S_{3}^{2}S_{6}^{2}\|W\|^{2},\nonumber\end{aligned}$$ where $S_{p}$ is the best constant for the inequality $|U|_{p}\leq S_{p}\|U\|$ and $C_{3}(U(0))$ is the upper bound of $\|U(t)\|^{2}$ we computed in the last subsection. If we take $b=\frac{\min\{3,4\lambda_{j}\}}{8NS_{3}^{2}S_{6}^{2}C_{3}(U(0))}>0$, we will have $$\frac{\partial}{\partial t}\int|u_{j,q}|^{4}\leq-\frac{C_{0}}{2}\|W\|^{2}+4\mu_{j}|W|_{3}^{3}.$$ The rest of the proof is the same as that of Proposition \[p:L4\].
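The choice of $b$ is exactly what is needed to absorb the coupling term into half of the dissipative term; indeed, substituting it into the last estimate gives

```latex
4Nb\,C_{3}(U(0))\,S_{3}^{2}S_{6}^{2}\,\|W\|^{2}
=4N S_{3}^{2}S_{6}^{2}\,C_{3}(U(0))\cdot
\frac{\min\{3,4\lambda_{j}\}}{8NS_{3}^{2}S_{6}^{2}\,C_{3}(U(0))}\,\|W\|^{2}
=\frac{C_{0}}{2}\,\|W\|^{2},
```

which, subtracted from $-C_{0}\|W\|^{2}$, leaves the margin $-\frac{C_{0}}{2}\|W\|^{2}$ used in the displayed inequality.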
Proof of Theorem \[t:exist\]
============================
A Topological Lemma
-------------------
We give the linking structure without assuming any symmetry; it will be used in the proof of Theorem \[t:exist\]. The strategy of the proof is to extend the original setting to a symmetric one.
\[l:linking\] For $A=[0,+\infty)^{n}$ and a bounded open neighbourhood $\mathcal{O}$ of the origin $0$ in $\mathbb{R}^n$, there is no continuous map $F:\partial\mathcal{O}\cap A\to A$ such that $F(\partial\mathcal{O}\cap A)\subset\partial A\backslash\{0\}$ and such that $x_{j}=0$ implies $F_{j}(x_{1},\dots,x_{n})=0$, where $F_{j}(x_{1},\dots,x_{n})$ is the $j$-th component of the vector $F(x_{1},\dots,x_{n})$.
[**Proof.**]{} We argue by contradiction. Suppose there is a continuous mapping $F:\partial\mathcal{O}\cap A\to A$ such that $F(\partial\mathcal{O}\cap A)\subset\partial A\backslash\{0\}$ and such that $x_{j}=0$ implies $F_{j}(x_{1},\dots,x_{n})=0$. Inspired by [@LM1111111], we extend the setting to a symmetric version and obtain the contradiction via a genus argument.
Firstly, let us define another open neighbourhood $\mathcal{O}^*$ of the origin $0\in\mathbb{R}^n$ by reflection with respect to each component of the coordinates, i.e. $$\mathcal{O}^* =\{x=(x_{1},\dots,x_{n})|(|x_{1}|,\dots,|x_{n}|)\in \mathcal{O}\}.$$ It is easy to see that the open set $\mathcal{O}^*$ is antipodal symmetric. Then the following inclusion holds true: $$\begin{aligned}
\label{inclusion:boundary}
\partial\mathcal{O}^{*}\cap A\subset \partial\mathcal{O}\cap A.\end{aligned}$$ Indeed, we observe that $\partial\mathcal{O}^{*}\cap int(A)= \partial\mathcal{O}\cap int(A)$. So we only need to show that $\partial\mathcal{O}^{*}\cap \partial A\subset \partial\mathcal{O}\cap\partial A$. For any $x\in \partial\mathcal{O}^{*}\cap \partial A$ and any $r>0$, we have $B_{\mathbb{R}^n}(x,r)\cap\mathcal{O}^{*}\neq\emptyset$. Due to the construction of $\mathcal{O}^*$, the last intersection gives $B_{\mathbb{R}^n}(x,r)\cap\mathcal{O}\neq\emptyset$, which implies that $x\in \partial\mathcal{O}\cap \partial A$. Now we restrict the mapping $F$ to the set $\partial\mathcal{O}^{*}\cap A$ and extend it to the whole $\partial\mathcal{O}^{*}$. Define the mapping $\tilde{F}:\partial\mathcal{O}^{*}\to X$ by $$\tilde{F}(x_{1},\dots,x_{n})=\big(sgn(x_{1})F_{1}(|x_{1}|,\dots,|x_{n}|),\dots,sgn(x_{n})F_{n}(|x_{1}|,\dots,|x_{n}|)\big),$$ where $X=\{x=(x_{1},\dots,x_{n})|\prod_{j=1,\dots,n}x_{j}=0\}$. Then we can claim
- $\tilde{F}$ is an odd extension of $F$;
- $\tilde{F}(\partial\mathcal{O}^{*})\subset X\backslash \{0\}$.
The first assertion is easy; we check the second one. We only need to verify that $\tilde{F}(x)\neq0$ for any $x\in\partial\mathcal{O}^{*}$. Suppose, on the contrary, that $sgn(x_{j})F_{j}(|x_{1}|,\dots,|x_{n}|)=0$ for every $j=1,\dots,n$. Since $x\in\partial\mathcal{O}^{*}$, there are some indices, say $1,\dots,s$, with $x_{1}=\dots=x_{s}=0$, and other indices, say $s+1,\dots,n$, with $x_{s+1}\neq0,\dots,x_{n}\neq0$. For $j=s+1,\dots,n$ we have $sgn(x_{j})\neq0$, hence $F_{j}(|x_{1}|,\dots,|x_{n}|)=0$; for $j=1,\dots,s$ the hypothesis on $F$ gives $F_{j}(|x_{1}|,\dots,|x_{n}|)=0$. Therefore $F(|x_{1}|,\dots,|x_{n}|)=0$, which contradicts $(|x_{1}|,\dots,|x_{n}|)\in\partial\mathcal{O}\cap A$ and $F(\partial\mathcal{O}\cap A)\subset\partial A\backslash\{0\}$.
Now we derive a contradiction via the genus associated with the antipodal symmetry, which we denote by $\gamma'$. On one hand, we have $n=\gamma'(\partial\mathcal{O}^{*})\leq\gamma'(\tilde{F}(\partial\mathcal{O}^{*}))$ due to Borsuk’s theorem for the antipodal symmetry. On the other hand, notice that $X\cap X'=\{0\}$, where $X'=\{x=(x_{1},\dots,x_{n})|x_{1}=\dots=x_{n}\}$. Then we can construct an odd homotopy $G$ such that $$X\backslash\{0\}\overset{G}{\simeq}\mathbb{S}^{n-2}.$$ This implies that $\gamma'(\tilde{F}(\partial\mathcal{O}^{*}))\leq\gamma'(X\backslash\{0\})\leq n-1$, which is a contradiction.
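One concrete choice of the odd deformation (our own construction; any odd homotopy equivalence would do) is the following: first retract $X\backslash\{0\}$ radially onto $X\cap\mathbb{S}^{n-1}$ via the odd map $x\mapsto x/|x|$, and then project away from the diagonal direction $e=\frac{1}{\sqrt{n}}(1,\dots,1)$ by

```latex
G(y)=\frac{y-\langle y,e\rangle e}{|y-\langle y,e\rangle e|},
\qquad y\in X\cap\mathbb{S}^{n-1}.
```

Since $X\cap X'=\{0\}$, no $y\in X\cap\mathbb{S}^{n-1}$ is parallel to $e$, so $G$ is well defined, continuous and odd, with values in the equatorial sphere $\{y\in\mathbb{S}^{n-1}\,|\,\langle y,e\rangle=0\}\cong\mathbb{S}^{n-2}$; the monotonicity of the genus under odd continuous maps then gives $\gamma'(X\backslash\{0\})\leq\gamma'(\mathbb{S}^{n-2})=n-1$.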
We remark that some of the computations in the above proof were used in [@LW1; @LW] and will also be used in the next section.
Proof of Theorem \[t:exist\]
----------------------------
We prove this theorem via the concept of invariant sets of a descending flow, and we will use the parabolic flow as the descending flow. Recall that we fix $N$ non-negative integers $P_{1},\dots,P_{N}$, which are the prescribed componentwise nodal numbers. We first introduce some auxiliary functions.
- Firstly, for the radial domain $\W$, we cut it into $N$ radial sub-domains $\W_{j}$ for $j=1,\dots,N$ with $\overline{\W}=\cup_{j=1}^{N}\overline{\W_{j}}$;
- For any fixed $j=1,\dots,N$, we cut the domain $\W_{j}$ into $P_{j}+1$ sub-domains $\W_{j,q}$ with $q=1,\dots,P_{j}+1$ with $\overline{\W_{j}}=\cup_{q=1}^{P_{j}+1}\overline{\W_{j,q}}$;
- For any $j=1,\dots,N$ and $q=1,\dots,P_{j}+1$, we define a smooth non-zero radial function with compact support $w_{j,q}:\W_{j,q}\to[0,+\infty)$.
Without loss of generality, we can assume that $|w_{j,q}|_{4}\equiv 1$ for any $j=1,\dots,N$ and $q=1,\dots,P_{j}+1$. We define the following set $S$ by $$\begin{aligned}
S=\Bigg\{\Bigg(\sum_{q=1}^{P_{1}+1}(-1)^{q+1}\alpha_{1,q}w_{1,q}(x),&\dots,\sum_{q=1}^{P_{N}+1}(-1)^{q+1}\alpha_{N,q}w_{N,q}(x)\Bigg)\Bigg|\nonumber\\ &\alpha_{j,q}\geq\frac{\varepsilon}{100}\,\,\mbox{for}\,\,q=1,\dots,P_{j}+1\,\,\mbox{and}\,\,j=1,\dots,N\Bigg\},\nonumber\end{aligned}$$ which is a closed cone in the real Euclidean space of dimension $\sum_{j=1}^{N}P_{j}+N$. There is an isomorphism $$\begin{aligned}
i:S&\to[0,+\infty)^{\sum_{j=1}^{N}P_{j}+N}\nonumber\\
\left(\sum_{q=1}^{P_{j}+1}(-1)^{q+1}\alpha_{j,q}w_{j,q}\right)_{j}&\mapsto\Big(\alpha_{j,q}-\frac{\varepsilon}{100}\Big)_{j,q}.\nonumber\end{aligned}$$ We denote by $Y$ the $\sum_{j=1}^{N}P_{j}+N$ dimensional Euclidean space spanned by $S$ with respect to the linearity in $i(S)$. It is easy to see that $\mathcal{A}\cap Y$ is also an open neighbourhood of the origin in $Y$ and $\mathcal{A}\cap Y$ is bounded. Notice that $S\cap\partial\mathcal{A}$ is a compact set and can be embedded into a finite dimensional Euclidean space, where all the norms are equivalent. If at least one $\beta_{ij}$ is positive, due to the restriction in Lemma \[l:invariant2\], we need to find an upper bound $b>0$ depending on $\sup_{U\in S\cap\partial\mathcal{A}}\sup_{t\geq0}\|\eta^{t}(U)\|^2\in(0,\infty)$. If all the $\beta_{ij}$’s are non-positive, this restriction is no longer necessary (Proposition \[p:L4\]).
Now we will locate the portions of the boundary $\partial \mathcal A$ in which along the flow lines the number of nodal domains can be controlled. For this purpose, as done for scalar equations in [@CMT], we introduce the following notations:
- $D_{j,k}=\{U=(u_{1},\dots,u_{N})\in(H_{r}^{2})^{N}|n(u_{j})=k\}$ and $D=\cap_{j=1}^{N}D_{j,P_{j}}$, where $P_{1},\dots,P_{N}$ are given in the theorem;
- $E_{j,q}^{\varepsilon}=\{U=(u_{1},\dots,u_{N})\in D\,\big|\,|u_{j,q}|_{4}<\varepsilon\}$ for $q=1,\dots,P_{j}$ and $j=1,\dots,N$;
- denote $$\begin{aligned}
H=\Bigg\{U=(u_{1},\dots,u_{N})\in(H_{r}^{2})^{N}|& n(u_{j})\leq P_{j}\,\,\mbox{for}\, j=1,\dots,N\nonumber\\
&\mbox{and}\, \sum_{j=1}^N n(u_j) < \sum_{j=1}^N P_j\Bigg\}\nonumber
\end{aligned}$$
- denote $$\begin{aligned}
F_{\varepsilon}=\cup_{j=1}^{N}\cup_{q=1}^{P_{j}}E_{j,q}^{\varepsilon}\cup H;\nonumber
\end{aligned}$$
- the complete invariant set of the set $E_{j,q}^{\varepsilon}$ is defined as $$C(E_{j,q}^{\varepsilon})=\big\{U\in(H^{2})^{N}\big|\exists t_{0}\geq0\,\,s.t.\,\,\eta^{t_{0}}(U)\in E_{j,q}^{\varepsilon}\big\}$$ for $q=1,\dots,P_{j}+1$ and $j=1,\dots,N$. Therefore, we can denote $$A^{\varepsilon}_{j,q}=C(E_{j,q}^{\varepsilon})\cap\partial\mathcal{A}\cap D$$ for $q=1,\dots,P_{j}+1$ and $j=1,\dots,N$.
Due to the invariance property proved in the last section we will also define an arriving time
- for any $U\in D\cap\partial\mathcal{A}$, denote $T^{*}(U)=\inf\{t\geq0|\eta^{t}(U)\in F_{\frac{\varepsilon}{2}}\}$.
We note that the set $D_{j,k}$ consists of the vector-valued functions whose $j$-th components have nodal number exactly $k$. For $U=(u_{1},\dots,u_{N})\in D$, the set $E_{j,q}^{\varepsilon}$ contains the functions whose $q$-th bump of the $j$-th component has a small $L^{4}$ norm; this set is invariant, due to Proposition \[p:L4\], as long as $n(\eta^{t}(\cdot)_{j})$ does not change. As to the set $F_{\varepsilon}$, an element of $F_{\varepsilon}$ either has a small bump or has a component whose nodal number is less than the prescribed one; it is the set we want to avoid. $T^{*}$ is the time when the flow line arrives in the set $F_{\frac{\varepsilon}{2}}$. By the computation in Proposition \[p:L4\], any flow line which enters $F_{\varepsilon}$ in finite time will eventually enter $F_{\frac{\varepsilon}{2}}$. We fix $\varepsilon$ small enough that the invariance property holds. We now turn to the continuity of the arriving time $T^* (U)$.
The arriving time $T^* (U)$ is continuous.
[**Proof.**]{} In view of the detailed computation below, we restrict ourselves to the case $n(u_{j})\leq P_{j}$ for $j=1,\dots,N$, where $u_{j}$ denotes the $j$-th component of the vector-valued function $U$. In fact, we only need the case $n(u_{j})\equiv P_{j}$ for all $j=1,\dots,N$ in this section; nonetheless, we prove the general case for the sake of the next section.
First we consider the case $\sum_{j=1}^{N}n(u_{j})<\sum_{j=1}^{N}P_{j}$, so that $T^{*}(U)=0$. We prove continuity at $U$ by contradiction. For a sequence $U_{n}\to U$ in $(H^{1})^{N}$ with $U_{n}\in D$ for every $n=1,2,\dots$, suppose there is $t_{0}>0$ such that $T^{*}(U_{n})\geq 2t_{0}$ for large $n$. We select and fix a $t\in(0,t_{0})$. On one hand, we have $$\eta^{t}(U_{n})\to \eta^{t}(U)\qquad in\,\,(H^{1})^{N},$$ due to Theorem \[t:2\]. On the other hand, by the definition of the arriving time $T^{*}$ and the non-increasing property of the nodal number along the flow line, using $U_{n}\in D$ we have
- $n\big((U_{n})_{j}\big)=P_{j}$ for $j=1,\dots,N$;
- $\big|\big(\eta^{t}(U_{n})\big)_{j,q}\big|_{4}\geq\frac{\varepsilon}{2}$ for $j=1,\dots,N$ and $q=1,\dots,P_{j}$;
- $n\big(\big(\eta^{t}(U)\big)_{j}\big)\leq P_{j}$ for $j=1,\dots,N$, and at least one of these inequalities is strict.
Here, $(\eta^{t}(W))_{j}$ and $(\eta^{t}(W))_{j,q}$ are the $j$-th component and the $q$-th bump of the $j$-th component of $\eta^{t}(W)$. Now we show that these assertions lead us to a contradiction.
Since $\eta^{t}(U_{n})\to\eta^{t}(U)$ in $(H^{1})^{N}$, we can select a large $n_{0}>0$ such that $\big|\eta^{t}(U_{n_{0}})-\eta^{t}(U)\big|_{4}\leq\frac{\varepsilon}{4}$. In the following, we argue componentwise. Let us consider $\eta^{t}(U)_{1}$ and $\eta^{t}(U_{n_{0}})_{1}$ for the sake of simplicity, where $\eta^{t}(U)_{1}$ and $\eta^{t}(U_{n_{0}})_{1}$ are the first components of $\eta^{t}(U)$ and $\eta^{t}(U_{n_{0}})$ respectively, and let us assume that $n(\eta^{t}(U)_{1})<n(\eta^{t}(U_{n_{0}})_{1})=P_{1}$ without loss of generality. Due to the definition of the sign-changing number, we can find a sequence of numbers $x_{q-1}\in \mbox{supp} \,\eta^{t}(U_{n_{0}})_{1,q}$ for $q=1,\dots,P_{1}+1$, where $\eta^{t}(U_{n_{0}})_{1,q}$ is the $q$-th bump, such that $$\eta^{t}(U_{n_{0}})_{1}(x_{q})\cdot\eta^{t}(U_{n_{0}})_{1}(x_{q+1})<0$$ for $q=0,\dots,P_{1}-1$. Using the facts $\big|\eta^{t}(U_{n_{0}})_{1,q}\big|_{4}\geq\frac{\varepsilon}{2}$ for $q=1,\dots,P_{1}$, and $\big|\eta^{t}(U_{n_{0}})-\eta^{t}(U)\big|_{4}\leq\frac{\varepsilon}{4}$, we claim there must be $x'_{q-1}\in \mbox{supp} \, \eta^{t}(U_{n_{0}})_{1,q}$ such that $$\begin{aligned}
\label{ineq:NODALS}
\eta^{t}(U)_{1}(x'_{q})\cdot\eta^{t}(U)_{1}(x'_{q+1})<0\end{aligned}$$ for $q=0,\dots,P_{1}-1$. Otherwise, if there is a $q_{0}=1,\dots,P_{1}$ such that
- $\eta^{t}(U_{n_{0}})_{1,q_{0}}\geq0$ (without loss of generality);
- $\eta^{t}(U)\leq0$ on $\mbox{supp}\,\eta^{t}(U_{n_{0}})_{1,q_{0}}$;
- $|\eta^{t}(U_{n_{0}})_{1,q_{0}}|_{4}\geq\frac{\varepsilon}{2}$;
then we have $$\begin{aligned}
\frac{\varepsilon}{4}&\geq|\eta^{t}(U_{n_0})-\eta^{t}(U)|_{4}\geq |\eta^{t}(U_{n_0})_{1,q_{0}}-\eta^{t}(U)\cdot\chi_{\mbox{supp}\eta^{t}(U_{n_0})_{1,q_{0}}}|_{4}\nonumber\\
&\geq|\eta^{t}(U_{n_0})_{1,q_{0}}|_4\geq\frac{\varepsilon}{2},\nonumber\end{aligned}$$ which is a contradiction. Here, the function $\chi_{A}$ is the characteristic function of the set $A$. Therefore, (\[ineq:NODALS\]) holds, i.e., $n(\eta^{t}(U)_{1})\geq P_{1}$, which contradicts $n(\eta^{t}(U)_{1})<P_{1}$. The proof of the first case is complete.
Next we consider the case $n(U_{j})= P_{j}$ for $j=1,\dots,N$. For a sequence $U_{n}\to U$ in $(H^{1})^{N}$, we only check the lower limit $$T^{*}(U)\leq\varliminf_{n\to\infty}T^{*}(U_{n}).$$ The upper one $T^{*}(U)\geq\varlimsup_{n\to\infty}T^{*}(U_{n})$ can be proved in the same way.
We argue by contradiction again. Suppose, up to a subsequence, we have $$s:=\lim_{n\to\infty}T^{*}(U_{n})<T^{*}(U)\leq T(U)=\infty.$$ Then we can find a $t\in(s,T^{*}(U))$. Set $V_{n}=\eta^{t}(U_{n})$ and $V=\eta^{t}(U)$. Then, due to Theorem \[t:2\], we have $V_{n}\to V$ in $(H^{1})^{N}$. Since $t<T^{*}(U)$, we have $V=\eta^{t}(U)\in D\cap\partial\mathcal{A}\backslash F_{\frac{\varepsilon}{2}}$. Then $|V_{j,q}|_{4}\geq \frac{\varepsilon}{2}$ for some $j=1,\dots,N$, $q=1,\dots,P_{j}$. Using $T^{*}(U_{n})\to s<t$, we have $V_{n}=\eta^{t}(U_{n})\in F_{\frac{\varepsilon}{2}}\cap\partial\mathcal{A}$, which implies $|(V_{n})_{j,q}|_{4}\leq\frac{\varepsilon}{2}$. Combining these with the fact that $V_{n}\to V$ in $(H^{1})^{N}$, we conclude that $|V_{j,q}|_{4}=\frac{\varepsilon}{2}$. Hence, we obtain $|\eta^{t}(U)_{j,q}|_{4}=|\eta^{T^{*}(U)}(U)_{j,q}|_{4}=\frac{\varepsilon}{2}$ with $t<T^{*}(U)$. By Proposition \[p:L4\], for any $\theta\in(t,T^{*}(U))$ we would then have $|\eta^{\theta}(U)_{j,q}|_{4}\equiv\frac{\varepsilon}{2}$, which contradicts Proposition \[p:L4\] itself.
Finally, to prove Theorem \[t:exist\], we only need to show that $$A:=\partial\mathcal{A}\cap D\backslash\big(\cup_{j=1}^{N}\cup_{q=1}^{P_{j}+1}A_{j,q}^{\varepsilon}\big)\neq\emptyset.$$ The rest of the proof requires a lower bound for the energy functional $J$ on the set $A$ and the fact that the energy functional satisfies the (PS) condition. The second part is obvious, and the first part is given by $$A\subset\partial\mathcal{A}$$ and $$0\leq\inf_{\partial\mathcal{A}}J\leq\inf_{A}J.$$
Now we verify that $\partial\mathcal{A}\cap D\backslash\big(\cup_{j=1}^{N}\cup_{q=1}^{P_{j}+1}A_{j,q}^{\varepsilon}\big)\neq\emptyset$.
This proof relies heavily on a technique used in [@LW1; @LW]. We will use the subset $S\subset D$ constructed at the beginning of this subsection and prove the theorem by proving $\partial\mathcal{A}\cap S\backslash\big(\cup_{j=1}^{N}\cup_{q=1}^{P_{j}+1}A_{j,q}^{\varepsilon}\big)\neq\emptyset$.
Now we argue by contradiction, i.e., we assume that $\partial\mathcal{A}\cap S\subset\cup_{j=1}^{N}\cup_{q=1}^{P_{j}+1}A_{j,q}^{\varepsilon}$. We use $\partial_{Y}S$ to denote the boundary of $S$ with respect to the space $Y$. Define a continuous cut-off function $\phi:[0,\infty)\to[0,1]$: $$\label{cutoff}
\phi(s)=\left\{
\begin{aligned}
1 & \qquad & s\geq\varepsilon, \\
0 & \qquad & s\leq\frac{\varepsilon}{2}, \\
\frac{2s}{\varepsilon}-1 & \qquad & s\in\Big(\frac{\varepsilon}{2},\varepsilon\Big).
\end{aligned}
\right.$$ Let us define the mapping $h:\partial\mathcal{A}\cap S\to\partial_{Y}S$ by $$h\big(U\big)=\Bigg(\sum_{q=1}^{P_{1}+1}\Big(\phi(|\eta^{T^{*}(U)}(U)_{1,q}|_{4})+\frac{\varepsilon}{100}\Big)w_{1,q},\dots, \sum_{q=1}^{P_{N}+1}\Big(\phi(|\eta^{T^{*}(U)}(U)_{N,q}|_{4})+\frac{\varepsilon}{100}\Big)w_{N,q}\Bigg),$$ where $U=(U_{1},\dots,U_{N})\in\partial\mathcal{A}\cap S$.
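As a side remark, the piecewise definition of the cut-off $\phi$ in (\[cutoff\]) can be checked numerically. The following sketch is purely illustrative and not part of the proof; the sample value of $\varepsilon$ is arbitrary.

```python
# Illustrative sketch only: the piecewise-linear cut-off phi from (cutoff),
# for an arbitrary sample epsilon.  phi equals 0 on [0, eps/2], 1 on
# [eps, infinity), and interpolates linearly in between, hence is continuous.
def make_phi(eps):
    def phi(s):
        if s >= eps:
            return 1.0
        if s <= eps / 2:
            return 0.0
        return 2.0 * s / eps - 1.0
    return phi

phi = make_phi(0.5)
# the three regimes
print(phi(0.1), phi(0.375), phi(0.8))       # 0.0 0.5 1.0
# continuity at the break points s = eps/2 and s = eps
print(phi(0.25) == 0.0, phi(0.5) == 1.0)    # True True
```

The values at the break points agree with the one-sided linear limits, confirming continuity of $\phi$ on $[0,\infty)$.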
We are now in a position to use Lemma \[l:linking\]. To do this, we only need to check that $h(\partial_{Y}(\mathcal{A}\cap Y)\cap S)\subset \partial_{Y}S\backslash\{\theta\}$. Firstly, we notice that $\partial_{Y}(\mathcal{A}\cap Y)\subset\mathcal{A}\cap Y$. Then we claim that for any $U\in\partial\mathcal{A}\cap S$, there are a $j=1,\dots,N$ and a $q=1,\dots,P_{j}+1$ such that $\phi(|\eta^{T^{*}(U)}(U)_{j,q}|_{4})>0$. If the claim is not true, then for any $j=1,\dots,N$ and $q=1,\dots,P_{j}+1$, we have $\int|U_{j,q}|^{4}<\varepsilon^4$ for $t$ large. Multiplying both sides of the $j$-th equation of Problem (\[e:A14\]) by $u_{j}$, integrating, and summing over $j$, we have $$\begin{aligned}
\frac{1}{2}\frac{\partial}{\partial t}|U(t)|^{2}_{2}+\|U(t)\|^{2}\leq\sum_{j=1}^{N}\Bigg(\mu_{j}\int|u_{j}(t)|^4 +\sum_{i\neq j}\beta_{ij}\int u_{i}(t)^{2}u_{j}(t)^{2}\Bigg)<C\varepsilon\nonumber\end{aligned}$$ for $t$ large. By the openness of $\mathcal{A}$ in $(H^1)^N$ and the invariance of $\partial\mathcal{A}$, we have $\|U(t)\|\geq C>0$ uniformly for $t\geq0$. This implies that $\frac{\partial}{\partial t}|U(t)|_{2}^{2}\leq-C$ for $t>0$, a contradiction. Therefore, the mapping $h$ satisfies the condition imposed on $F$ in Lemma \[l:linking\], and the existence of such an $h$ yields a contradiction. The proof is complete.
Proof of Theorem \[t:main\]
===========================
The Idea of the Proof
---------------------
We are now in a position to prove Theorem \[t:main\]. We first outline our approach briefly. Using the flow-invariance property, we reduce the variational problem to one defined on a subset of the boundary of the stable set of the origin on which the nodal number of the functions is controlled by the componentwise-prescribed nodes. In order to establish multiple nodal solutions having the same componentwise nodal number, we make use of the symmetry property imposed in Theorem \[t:main\].
More precisely, our problem possesses a $\mathbb Z_p$ symmetry under a cyclic permutation $\sigma:(u_{1},\dots,u_{N})\mapsto(\sigma_{1}(u_{1}),\dots,\sigma_{N}(u_{N}))$ in $(H^{1}_{0}(\W))^N$ defined by
- $\sigma_{i}(u_{i})=u_{i+1}$ for $i\neq pb$ for $b=1,\dots,B$,
- $\sigma_{pb}(u_{pb})=u_{p(b-1)+1}$ for $b=1,\dots,B$.
In other words, we define the permutation $\sigma$ as $$\begin{aligned}
\sigma(u_{1},u_{2},\dots,u_{p};&\dots\dots;u_{N-p+1},u_{N-p+2},\dots,u_{N})\nonumber\\
&=(u_{2},\dots,u_{p},u_{1};\dots\dots;u_{N-p+2},\dots,u_{N},u_{N-p+1}).\nonumber
\end{aligned}$$ It is easy to see that this can be regarded as a $\mathbb{Z}_{p}$ cyclic group action, and our variational functional $J$ is invariant under this action.
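To make the block structure of $\sigma$ concrete, here is a minimal sketch, purely illustrative and not part of the argument: components are modelled as entries of a list of length $N=pB$, and we confirm that cycling each block of $p$ components generates a $\mathbb{Z}_p$ action, i.e., $\sigma^{p}=\mathrm{id}$.

```python
# Illustrative sketch: sigma acts on U = (u_1,...,u_N), N = p*B, by cycling
# each consecutive block of p components: (u_1,...,u_p) -> (u_2,...,u_p,u_1).
def sigma(U, p):
    B = len(U) // p
    out = []
    for b in range(B):
        block = U[b * p:(b + 1) * p]
        out.extend(block[1:] + block[:1])  # cyclic shift within the block
    return out

p, B = 3, 2
U = list(range(1, p * B + 1))   # stand-ins for the components u_1,...,u_6
V = U
for _ in range(p):
    V = sigma(V, p)
print(sigma(U, p))              # [2, 3, 1, 5, 6, 4]
print(V == U)                   # True: sigma^p = id, a Z_p action
```

Since $\sigma$ only permutes components within each block, any functional that is symmetric in the components of each block, such as $J$ here, is invariant under the action.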
We will use a $\mathbb Z_p$ group-action index (or genus) introduced in [@TW] (see also the related works [@Wang; @WW]). We summarize some basic properties of the index. Let $E$ be a Banach space carrying a $\mathbb Z_p$ action generated by $\sigma$. Let $F_\sigma= \{U\in E\;|\; \sigma U = U \}$ be the set of fixed points of the $\sigma$ action. For a $\sigma$-symmetric compact set $A\subset E\backslash F_\sigma$, the index $\gamma(A)$ is defined as the smallest $m\in\mathbb{N}$ such that there exists a mapping $h:A\to\mathbb{C}^{m}\backslash\{0\}$ with $$h(\sigma U)=e^{\frac{2\pi i}{p}}h(U).$$ If there is no such mapping, we set $\gamma(A)=\infty$. It is easy to verify that the $\mathbb Z_p$ index $\gamma$ satisfies the following properties:
- If $A\subset B$, then $\gamma(A)\leq\gamma(B)$;
- $\gamma(A\cup B)\leq\gamma(A)+\gamma(B)$;
- if $g:A\to E\backslash F_{\sigma}$ is continuous and satisfies $$g(\sigma(u))=\sigma g(u),\qquad \forall u\in A,$$ then $$\gamma(A)\leq\gamma(\overline{g(A)});$$
- if $\gamma(A)>1$, then $A$ is an infinite set;
- if $A$ is compact and $\gamma(A)<\infty$, then there exists an open $\sigma$-invariant neighbourhood $\mathcal{N}$ of $A$ such that $\gamma(A)=\gamma(\overline{\mathcal{N}})$;
- if $S$ is the boundary of a bounded neighbourhood of the origin in an $m$-dimensional complex linear space such that $e^{\frac{2\pi i}{p}}U \in S$ for any $U\in S$, and $\Psi:S\to E\backslash F_\sigma$ is continuous and satisfies $\Psi(e^{\frac{2\pi i}{p}}U)=\sigma(\Psi(U))$ for any $U\in S$, then $\gamma(\Psi(S))\geq m$.
Using this $\mathbb Z_p$ cyclic permutation symmetry and the genus-type index described above, we construct multiple nodal solutions with given componentwise-prescribed nodes by a minimax-type argument. With the aid of the flow invariance, the central part of the proof is to construct a certain set of vector-valued functions which has infinite $\mathbb Z_p$ genus and in which each flow line always possesses the prescribed number of nodal domains. Then, by a minimax construction in variational methods (cf. [@Ch0; @R]), we obtain a sequence of critical levels and therefore a sequence of solutions to Problem (\[e:A111\]). For the construction of sets with large $\mathbb Z_p$-index, we use a variant of the construction in [@TW Proposition 4.2], where only positive solutions were considered, by making sets of sign-changing functions with the $\sigma$-symmetry property. For computations of the $\mathbb Z_p$ index, we adapt some ideas from [@CMT; @LW1; @LW; @TW], incorporating the invariance of nodal domains and the $\mathbb Z_p$ symmetry.
Invariant Sets and Other Constructions
--------------------------------------
We need a symmetric version of the settings in Section 3. We begin by constructing sets of vector-valued functions with componentwise-prescribed numbers of nodal domains and with arbitrarily large genus.
Recall that $p$ is a prime factor of $N$, that $B$ satisfies $N=pB$, and that $P_{1},\dots,P_{B}$ are $B$ non-negative integers fixed throughout the proof. For any given positive integer $K$, we will construct a subset having $\mathbb Z_p$ genus not less than $K$ that consists of vector-valued functions $U=(u_{1},\dots,u_{N})$ such that for $b=1,\dots,B$, $n(u_{pb-p+i})=P_{b}$ for $i=1,\dots,p$, and that satisfies certain further dynamical properties.
Firstly, we divide the domain $\W$ into $B$ radial parts, denoted $\W_{b}$ for $b=1,\dots,B$, so that $\overline{\cup_{b=1}^{B}\W_{b}}=\overline{\W}$. For a fixed integer $b=1,\dots,B$, we divide $\W_{b}$ into $P_{b}+1$ radial sub-domains $\W_{b,q}$ for $q=1,\dots,P_{b}+1$, so that $\overline{\cup_{q=1}^{P_{b}+1}\W_{b,q}}=\overline{\W_{b}}$. For each sub-domain $\W_{b,q}$, we divide it into $K$ radial sub-domains $\W_{b,q,k}$ for $k=1,\dots,K$, so that $\overline{\cup_{k=1}^{K}\W_{b,q,k}}=\overline{\W_{b,q}}$. Denote $\mathcal{O}_{b,q}=\mathbb{S}^{1}\times\W_{b,q}$ and $\mathcal{O}_{b,q,k}=\mathbb{S}^{1}\times\W_{b,q,k}$. For $b=1,\dots,B$, $q=1,\dots,P_{b}+1$ and $k=1,\dots,K$, we define functions for $(t,x) \in \mathcal{O}_{b,q,k}=\mathbb{S}^{1}\times\W_{b,q,k}$ as follows:
- $w_{b,q,k}(t,x)=w_{b,q,k}(t,|x|)=w_{b,q,k}(t,r):\mathcal{O}_{b,q,k}\to\mathbb{R}$ of class $C^{4}$ and of compact support in $\mathcal{O}_{b,q,k}$;
- $w_{b,q,k}\geq0$ and $w_{b,q,k}(t,\cdot)\not\equiv0$;
- $\mbox{supp}\, w_{b,q,k}(t,\cdot)\cap \mbox{supp}\, w_{b,q,k}\big(\frac{2\pi }{p}+t,\cdot\big)=\emptyset$ for any $t\in\mathbb{S}^1$.
We now say a few words about the notation for clarity. The subscript “$b$” labels the blocks of components, each block having $p$ components, so it is invariant under the $\mathbb{Z}_{p}$-permutation of components. The subscript “$q$” labels the nodal domains. The subscript “$k$” accounts for the factor $K$ in the dimension of the simplex. To give a simplex in the Sobolev space $(H^{1})^{N}$ involving vector-valued functions, we start with its componentwise construction. In order to use the $\mathbb Z_p$ index $\gamma$, we need to consider the complex Euclidean space $\mathbb{C}^{K\big(\sum_{b=1}^{B}P_{b}+B\big)}$. For any vector $z=(z_{b,q,k})$ we decompose its components in polar coordinates. This leads to $z_{b,q,k}=e^{i\theta_{b,q,k}}\alpha_{b,q,k}$, where the $\alpha_{b,q,k}$ are nonnegative real numbers and $\theta_{b,q,k}\in[0,2\pi)$ for any $b=1,\dots,B$, $q=1,\dots,P_{b}+1$ and $k=1,\dots,K$. For fixed $b=1,\dots,B$ we define $$\begin{aligned}
V_{b}(t,z_{b})=\sum_{q=1}^{P_{b}+1}(-1)^{q+1}\sum_{k=1}^{K}\alpha_{b,q,k}w_{b,q,k}(t+{\theta_{b,q,k}},r)\nonumber\end{aligned}$$ where the vector $z_{b}=\{(z_{b,q,k})\,|\, q=1,\dots,P_{b}+1;\ k=1,\dots,K\}$. Then we can define a mapping $$\psi:\mathbb{C}^{K\big(\sum_{b=1}^{B}P_{b}+B\big)}\to(H_{r}^{2})^{N}$$ by $$\begin{aligned}
\psi(z)=\Bigg(V_{1}(0,z_{1}),&V_{1}\Big(\frac{2\pi}{p},z_{1}\Big),\dots,V_{1}\Big(\frac{2\pi (p-1)}{p},z_{1}\Big),\nonumber\\
&\dots\qquad\dots\nonumber\\
&V_{B}(0,z_{B}),V_{B}\Big(\frac{2\pi}{p},z_{B}\Big),\dots,V_{B}\Big(\frac{2\pi (p-1)}{p},z_{B}\Big)\Bigg).\nonumber\end{aligned}$$ We note that $$\begin{aligned}
V_{b}(t,e^{\frac{2\pi i}{p}}z_{b})&=\sum_{q=1}^{P_{b}+1}(-1)^{q+1}\sum_{k=1}^{K}\alpha_{b,q,k}w_{b,q,k}\Big(t+\theta_{b,q,k}+\frac{2\pi}{p},r\Big)\nonumber\\
&=V_{b}\Big(\frac{2\pi}{p}+t,z_{b}\Big),\nonumber\end{aligned}$$ which implies that $\psi\big(e^{\frac{2\pi i}{p}}z\big)=\sigma\psi(z)$. Here recall $\sigma$ is the $\mathbb Z_p$ cyclic permutation.
Following the settings and notation used in Section 3, we introduce the following notations:
- $D_{j,k}=\{U=(u_{1},\dots,u_{N})\in(H_{r}^{2})^{N}|n(u_{j})=k\}$ and $$D=\cap_{b=1}^{B}\cap_{i=1}^{p}D_{pb-p+i,P_{b}};$$
- For $\varepsilon >0$, $E_{j,q}^{\varepsilon}=\{U=(u_{1},\dots,u_{N})\in D\,\big|\,|u_{j,q}|_{4}<\varepsilon\}$;
- denote $$\begin{aligned}
H=\Bigg\{U=(u_{1},\dots,u_{N})\in(H_{r}^{2})^{N}|& n(u_{bp-p+i})\leq P_{b},\,\mbox{for}\, i=1,\,\dots,p,\,\,b=1,\dots,B,\;\nonumber\\
&\mbox{and}\, \sum_{j=1}^N n(u_j) < p\sum_{b=1}^B P_b\Bigg\}\nonumber
\end{aligned}$$ and $$\begin{aligned}
F_{\varepsilon}=&\cup_{b=1}^{B}\big(\cup_{q=1}^{P_{b}}\cup_{i=1}^{p}E_{pb-p+i,q}^{\varepsilon}\big)\cup H;\nonumber
\end{aligned}$$
- $T^{*}(U)=\inf\{t\in[0,T(U))|\eta^{t}(U)\in F_{\frac{\varepsilon}{2}}\cap\partial\mathcal{A}\}$ for $U\in\partial\mathcal{A}\cap (H^2)^N$.
These are symmetric versions of the notations in Section 3; the difference is that we restrict the sign-changing conditions so as to respect the componentwise permutation.
As proved in Section 3, the continuity of the arrival time $T^{*}(U)$ holds. Moreover, the invariance of $T^{*}(U)$ is easy to check.
$T^{*}(U)$ is continuous and invariant under the permutation $\sigma$.
To compute the $\mathbb Z_p$-index, we will use an idea from [@LW1]. Nevertheless, it should be noted that the simplex in [@LW1] is different from ours. In order to make the idea work, we need to enlarge the previous set $\psi\Big(\mathbb{C}^{K\big(\sum_{b=1}^{B}P_{b}+B\big)}\Big)$. Let us select a $\sigma$-invariant set $G$ containing $\psi\Big(\mathbb{C}^{K\big(\sum_{b=1}^{B}P_{b}+B\big)}\Big)$, defined as follows: $$\begin{aligned}
G=&\Bigg\{\Bigg(\sum_{k=1}^{K}\sum_{q=1}^{P_{1}+1}(-1)^{q+1}\alpha^{1,1}_{q,k}w_{1,q,k}(s^{1}_{q,k},r),\dots,\sum_{k=1}^{K}\sum_{q=1}^{P_{1}+1} (-1)^{q+1}\alpha_{q,k}^{1,p}w_{1,q,k}\Big(\frac{2\pi (p-1)}{p}+s^{1}_{q,k},r\Big),\nonumber\\
&\qquad\qquad\dots\dots\qquad\qquad\dots\dots\qquad\qquad\dots\dots\qquad\qquad\dots\dots, \nonumber\\
&\sum_{k=1}^{K}\sum_{q=1}^{P_{B}+1}(-1)^{q+1}\alpha^{B,1}_{q,k}w_{B,q,k}(s^{B}_{q,k},r),\dots,\sum_{k=1}^{K}\sum_{q=1}^{P_{B}+1} (-1)^{q+1}\alpha_{q,k}^{B,p}w_{B,q,k}\Big(\frac{2\pi (p-1)}{p}+s^{B}_{q,k},r\Big)\Bigg)\Bigg|\nonumber\\
&\alpha_{q,k}^{b,j}\geq0,\,\,s^{b}_{q,k}\in[0,2\pi),\,\,\mbox{for}\,\,\mbox{any}\,\,b=1,\dots,B,\,\,j=1,\dots,p,\,\,q=1,\dots,P_{b}+1,\nonumber\\
&k=1,\dots,K\Bigg\}.\nonumber\end{aligned}$$ Notice that for any $t\geq0$ and $U\in G$, we have $tU\in G$. The difference between the set $G$ and the set $\psi\Big(\mathbb{C}^{K\big(\sum_{b=1}^{B}P_{b}+B\big)}\Big)$ is that in $G$ the coefficients of the components are independent. Notice that the set $G$ contains no nontrivial fixed points of $\sigma$, due to the definitions of the functions $w_{b,q,k}$. We observe that, by the definition of $\mathcal{A}$ and the properties of the heat flow $\eta^{t}$, every half-line in $G$ starting at the origin intersects $\partial\mathcal{A}$. Moreover, the set $G\cap\partial\mathcal{A}$ is compact and $\sigma$-invariant. In particular, we denote $$G_{0}=\big\{U\in G\,\big|\,n(U_{bp-p+i})=P_{b}\,\,\mbox{for}\,\,i=1,\dots,p\,\,\mbox{and}\,\,b=1,\dots,B\big\}.$$ That is, $G_{0}$ is the portion of $G$ whose elements do not degenerate, in the sense that there is no drop in the sign-changing number, i.e., $n(U_{bp-p+i})=P_{b}$ for any $i=1,\dots,p$ and $b=1,\dots,B$. It is easy to see that $G=\overline{G_{0}}$.
Avoiding the Fixed Points
-------------------------
In Section 3, we already proved that $\partial\mathcal{A}\cap D\cap(H^2)^N\backslash A_{\varepsilon}\neq\emptyset$. In this subsection we show that $\partial\mathcal{A}\cap D\backslash A_{\varepsilon}$ contains no fixed points of the permutation $\sigma$ action. The following lemma ensures that the flow line does not go through the fixed points of the permutation $\sigma$.
For any $U\in\partial\mathcal{A}\cap D\backslash A_{\varepsilon}$, the flow line $\{\eta^{t}(U)\}_{t\geq0}$ contains no fixed point of the permutation $\sigma$.
[**Proof.**]{} We argue by contradiction. Suppose that there is a $t_{0}>0$ such that $\eta^{t_{0}}(U)$ is a fixed point of the permutation $\sigma$. Then we have $\eta^{t_{0}}(U)_{1}=\dots=\eta^{t_{0}}(U)_{p}$. By the uniqueness of the solution, $\eta^{t}(U)_{1}=\dots=\eta^{t}(U)_{p}$ for any $t\geq t_{0}$. Multiplying both sides of the first equation of Problem (\[e:A14\]) by $u_{1}^{3}$ and integrating over $\W$, we get $$\begin{aligned}
\frac{d}{dt}\int u_{1}^{4}+C\|u_{1}^{2}\|^{2}&= C\Bigg(\mu_{1}\int u_{1}^{6}+\sum_{i=2}^{N}\beta_{i1}\int u_{i}^{2}u_{1}^{4}\Bigg)\nonumber\\
&=C\Bigg(\big(\mu_{1}+\sum_{i=2}^{p}\beta_{i1}\big)\int u_{1}^{6}+\sum_{i=p+1}^{N}\beta_{i1}\int u_{i}^{2}u_{1}^{4}\Bigg).\nonumber\end{aligned}$$ Combining this with $\mu_{1}+\sum_{i=2}^{p}\beta_{i1}\leq0$ (assumption $(D)$ of Theorem \[t:main\]) and the Sobolev embedding, we have $$\frac{d}{dt}\int u_{1}^{4}\leq-C\|u_{1}^{2}\|^{2}\leq-C|u_{1}|_{4}^{4}.$$ Hence, $\int u_{1}^4\leq Ce^{-Ct}$. Therefore, for some $T_{0}>0$ and $q=1,\dots,P_{1}$, $\eta^{t}(U)\in E_{1,q}^{\frac{\varepsilon}{2}}$ for $t>T_{0}$. This contradicts the fact that $U\in\partial\mathcal{A}\cap D\backslash A_{\varepsilon}$.
In fact, the same computation applies to the other components, so we obtain $\eta^{t}(U)\to\theta$ in $(L^{4})^{N}$.
The set $\partial\mathcal{A}\cap D\backslash A_{\varepsilon}$ contains no fixed point of the permutation $\sigma$.
Construction of $\sigma$-Symmetric Sets of Functions of Prescribed Nodal Numbers with Arbitrarily Large Genus
-------------------------------------------------------------------------------------------------------------
The aim of this subsection is to prove that for any integer $k>0$, there is a compact subset $B_{k}\subset\partial\mathcal{A}\cap D\backslash A_{\varepsilon}$ satisfying $\sigma(B_{k})=B_{k}$ and $\gamma(B_{k})\geq k$. To do this, we only need to check that for the set $G$ constructed in the previous subsection, it holds that $\gamma(\partial\mathcal{A}\cap G \backslash A_{\varepsilon})\geq K$, since $K$ can be chosen arbitrarily large.
\[c:genusG\] $\gamma(G\cap\partial\mathcal{A})=K\big(\sum_{b=1}^{B}P_{b}+B\big)$.
[**Proof.**]{} It is obvious that $\psi\Big(\mathbb{C}^{K\big(\sum_{b=1}^{B}P_{b}+B\big)}\Big)\subset G$. Hence, we have $$\begin{aligned}
\gamma(G\cap\partial\mathcal{A})\geq\gamma\Bigg(\psi\Big(\mathbb{C}^{K\big(\sum_{b=1}^{B}P_{b}+B\big)}\Big)\cap \partial\mathcal{A}\Bigg)= K\big(\sum_{b=1}^{B}P_{b}+B\big),\nonumber\end{aligned}$$ where the equality holds due to Borsuk’s theorem. To obtain the reversed inequality, we note that the set $G$ is homeomorphic to the following subset $X$ of $\mathbb{C}^{pK\big(\sum_{b=1}^{B}P_{b}+B\big)}$: $$\begin{aligned}
X= & \Bigg\{(z^{b,j}_{q,k})\in\mathbb{C}^{pK\big(\sum_{b=1}^{B}P_{b}+B\big)} \Bigg| \arg(z^{b,1}_{q,k})=\arg(z^{b,2}_{q,k})= \dots=\arg(z^{b,p}_{q,k}) \nonumber\\
& \mbox{for}\,\,\mbox{any}\,\,b=1,\dots,B,\,\,q=1,\dots,P_{b},\,\,k=1,\dots,K\Bigg\}.\nonumber\end{aligned}$$ In fact we may define $\xi : G\to X$ by $$\begin{aligned}
\xi:\Bigg(\sum_{k=1}^{K}\sum_{q=1}^{P_{b}+1} (-1)^{q+1}\alpha_{q,k}^{b,j}w_{b,q,k}\Big(\frac{2\pi (j-1)}{p}+s^{b}_{q,k},r\Big)\Bigg) \to \Bigg(e^{is^{b}_{q,k}}\alpha_{q,k}^{b,j} \Bigg).\nonumber\end{aligned}$$ To distinguish between the spaces $\mathbb{C}^{K\big(\sum_{b=1}^{B}P_{b}+B\big)}$ and $\mathbb{C}^{pK\big(\sum_{b=1}^{B}P_{b}+B\big)}$, we denote their vectors by $(z_{b,q,k})$ and $(z_{q,k}^{b,j})$ respectively. On the other hand, we have a continuous map $f:X\to\mathbb{C}^{K\big(\sum_{b=1}^{B}P_{b}+B\big)}$ written as $$\begin{aligned}
f(z_{q,k}^{b,j})=\Big(\sum_{j=1}^{p}z_{q,k}^{b,j}\Big).\nonumber\end{aligned}$$ Notice that $f^{-1}(0)=0$ and $f(e^{\frac{2\pi i}{p}}z_{q,k}^{b,j})=e^{\frac{2\pi i}{p}}f(z_{q,k}^{b,j})$. The reversed inequality follows from the identity $f\circ\xi(\sigma U)=e^{\frac{2\pi i}{p}}f\circ\xi(U)$.
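The equivariance identity used at the end of this proof is simply linearity of $f$. The following toy numerical check, with arbitrary dimensions and values not taken from the proof, illustrates $f(e^{2\pi i/p}z)=e^{2\pi i/p}f(z)$ for the summation map $f$.

```python
import cmath

# Illustrative sketch: f sums the p block-coordinates sharing one slot
# (b,q,k); by linearity it intertwines multiplication by e^{2*pi*i/p}.
p = 3
omega = cmath.exp(2j * cmath.pi / p)   # primitive p-th root of unity

def f(z):
    return sum(z)

z = [1 + 2j, 0.5 - 1j, -2 + 0.25j]     # toy coordinates (z^{b,1},...,z^{b,p})
lhs = f([omega * w for w in z])
rhs = omega * f(z)
print(abs(lhs - rhs) < 1e-12)          # True: f is Z_p-equivariant
```

The same linearity is what makes the composition $f\circ\xi$ intertwine $\sigma$ on $G$ with multiplication by $e^{2\pi i/p}$ on $\mathbb{C}^{K(\sum_{b}P_{b}+B)}$.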
By Proposition \[p:L4\], for $\varepsilon>0$ small enough, $E_{bp-p+i,q}^{\varepsilon}$ is invariant under the heat flow $\eta^{t}(\cdot)$ for any $b=1,\dots,B$, $i=1,\dots,p$ and $q=1,\dots,P_{b}$. For convenience, we will denote the set $E_{bp-p+i,q}^{\varepsilon}$ by $E_{j,q}^{\varepsilon}$ with $b=1,\dots,B$, $i=1,\dots,p$, $q=1,\dots,P_{b}$ and $j=pb-p+i$. The complete invariant set of the $E_{j,q}^{\varepsilon}$'s is defined by $$C(E_{j,q}^{\varepsilon})=\{U=(u_{1},\dots,u_{N})\in(H_{r}^{2})^{N}|\exists t_{0}\geq0\,\,s.t.\,\,\eta^{t_{0}}(U)\in E_{j,q}^{\varepsilon}\},$$ for all admissible $j$'s and $q$'s. The concept of a complete invariant set is from [@LS]. As in Section 3.2, define a continuous cut-off function $\phi:[0,\infty)\to[0,1]$ by $\phi(s)=1$ for $s\geq \varepsilon$, $\phi(s)=0$ for $s\leq\frac{\varepsilon}{2}$, and $\phi(s)=\frac{2s}{\varepsilon}-1$ for $s\in (\frac{\varepsilon}{2},\varepsilon)$. We denote $$A_{\varepsilon}=\cup_{b=1}^{B}\cup_{i=1}^{p}\cup_{q=1}^{P_{b}+1}C(E_{bp-p+i,q}^{\varepsilon})\cup H.$$
Now we prove that $\gamma(\partial\mathcal{A}\cap G\backslash A_{\varepsilon})\geq K$, which will complete the proof. To this end, we define a mapping $h: G\cap A_{\varepsilon}\to G$ by $$h(U)=\Big(\sum_{q=1}^{P_{1}+1}\phi\big(|\eta^{T^{*}(U)}(U)_{1,q}|_{4}\big)U_{1,q},\dots, \sum_{q=1}^{P_{B}+1}\phi\big(|\eta^{T^{*}(U)}(U)_{N,q}|_{4}\big)U_{N,q}\Big)$$ for $U\in G_{0}$, and for $U\in G\backslash G_{0}$ we let $$h(U)=\Big(\sum_{q=1}^{P_{1}+1}\phi\big(|U_{1}\cdot\chi_{\W_{1,q}}|_{4}\big)U_{1}\cdot\chi_{\W_{1,q}},\dots, \sum_{q=1}^{P_{B}+1}\phi\big(|U_{N}\cdot\chi_{\W_{B,q}}|_{4}\big)U_{N}\cdot\chi_{\W_{B,q}}\Big).$$ Here, $\phi$ is the cut-off function in (\[cutoff\]) and $\chi_{A}$ is the characteristic function of the set $A\subset\W$. In fact, for $U\in G$, $U_{bp-p+i,q}=U_{bp-p+i}\cdot\chi_{\W_{b,q}}$ for $q=1,\dots,P_{b}+1$, $i=1,\dots,p$ and $b=1,\dots,B$, and $n(U_{bp-p+i})\leq P_{b}$ for $i=1,\dots,p$ and $b=1,\dots,B$, where at least one of these inequalities is strict for some admissible pair $(b,i)$. It is easy to see that the map $h$ is continuous. Then, we have the following claim.
\[c:phi\] For any $U\in\partial\mathcal{A}\cap G_{0}\cap A_{\varepsilon}$, there are admissible couples $(j_{1},q_{1})$ and $(j_{2},q_{2})$ such that $\phi(|\eta^{T^{*}(U)}(U)_{j_{1},q_{1}}|_{4})=0$ and $\phi(|\eta^{T^{*}(U)}(U)_{j_{2},q_{2}}|_{4})>0$.
[**Proof.**]{} We notice that $\eta^{t}(U)$ always stays on $\partial\mathcal{A}$, which implies that $\|\eta^{t}(U)\|\geq C>0$ for any $t>0$. Suppose that $\phi(|\eta^{T^{*}(U)}(U)_{j,q}|_{4})=0$ for every admissible $(j,q)$. This gives $\sum_{j=1}^{N}\int|u_{j}|^{4}<C\varepsilon^4$ for $t$ large. Multiplying both sides of the $j$-th equation of Problem (\[e:A14\]) by $u_{j}$ and summing over $j$, we have $$\begin{aligned}
\frac{1}{2}\frac{\partial}{\partial t}|U(t)|^{2}_{2}+\|U(t)\|^{2}\leq\sum_{j=1}^{N}\Bigg(\mu_{j}\int|u_{j}(t)|^4 +\sum_{i\neq j}\beta_{ij}\int u_{i}(t)^{2}u_{j}(t)^{2}\Bigg)<C\varepsilon\nonumber\end{aligned}$$ when $t$ is large. Using the openness of $\mathcal{A}$ in $(H^1)^N$ and the invariance of $\partial\mathcal{A}$, we have $\frac{\partial}{\partial t}|U(t)|_{2}^{2}\leq-C$ for $t>0$. This is a contradiction, and the proof is complete.
The lemma implies that for any $U\in\partial\mathcal{A}\cap G_{0}\cap A_{\varepsilon}$ there are two admissible couples $(j_{1},q_{1})$ and $(j_{2},q_{2})$ such that $h(U)_{j_{1},q_{1}}=0$ and $h(U)_{j_{2},q_{2}}\neq0$.
\[ge\] It holds $\gamma(\partial\mathcal{A}\cap G\backslash A_{\varepsilon})\geq K$.
[**Proof.**]{} We use the notations in the proof of Lemma \[c:genusG\]. By Lemma \[c:phi\], for any $U\in h(\partial\mathcal{A} \cap G_{0}\cap A_{\varepsilon})$, when we denote $f\circ\xi(U)=(z_{b,q,k})$, there are some $b=1,\dots,B$ and some $q=1,\dots,P_{b}$ such that $z_{b,q,1}=\dots=z_{b,q,K}=0$, and $f\circ\xi(U)\neq\theta$, where $\theta$ is the zero vector. We define a subspace $W\subset\mathbb{C}^{K\big(\sum_{b=1}^{B}P_{b}+B\big)}$ by $$\begin{aligned}
W=&\Big\{(z_{b,q,k})\in\mathbb{C}^{K\big(\sum_{b=1}^{B}P_{b}+B\big)}|z_{1,1,k}=\dots=z_{1,P_{1}+1,k}=\dots=z_{B,1,k}=\dots\nonumber\\
&\dots= z_{B,P_{B}+1,k}\,\,\forall k=1,\dots,K\Big\}.\nonumber\end{aligned}$$ Obviously, $\psi(W)$ is invariant under the permutation $\sigma$. Since for every element $U\in\psi(W)$, $n(U_{bp-p+i})=P_{b}$ for $i=1,\dots,p$ and $b=1,\dots,B$, we have $\psi(W)\cap \partial\mathcal{A}\subset G_{0}\cap\partial\mathcal{A}$. To continue the proof, we need the following lemma:
\[l:homotopynbh\] There is an $\varepsilon_{0}>0$ such that for any $a=(a_{1},\dots,a_{K(\sum_{b=1}^{B}P_{b}+B)})\in f\circ\xi\circ h(\partial\mathcal{A}\cap G\cap A_{\varepsilon})$, $\sum_{l=1}^{K(\sum_{b=1}^{B}P_{b}+B)}|a_{l}|\geq\varepsilon_{0}$.
[**Proof.**]{} We argue by contradiction. Suppose that for any large $n>0$ there is a $U^{(n)}\in \partial\mathcal{A}\cap G\cap A_{\varepsilon}$ such that $\sum_{j=1}^{N}|u_{j}^{(n)}|_{4}\leq C\varepsilon_{1}$, where the constant $C>0$ is independent of the choice of $U^{(n)}$ and $\varepsilon_{1}$. Using the fact that $T(U)=\infty$ for $U\in\partial\mathcal{A}$ and computations similar to those in Proposition \[p:L4\] and Lemma \[c:phi\], we obtain a contradiction.
We now return to the proof of Lemma \[ge\]. On the one hand, we have $$\dim(W)=K.$$ On the other hand, due to the definition of the space $W$ and Lemma \[l:homotopynbh\], we have $$\begin{aligned}
f\circ\xi\big( \overline{h(\partial\mathcal{A}\cap G\cap A_{\varepsilon})}\big)&\subset\mathbb{C}^{K\big(\sum_{b=1}^{B}P_{b}+B\big)}\backslash W_{\delta}\nonumber\\
&\overset{\sigma}{\simeq}\mathbb{S}^{K\big(\sum_{b=1}^{B}P_{b}+B-1\big)-1}.\nonumber\end{aligned}$$ Here, $\delta>0$ is small, $W_{\delta}$ denotes the $\delta$-neighbourhood of $W$, the symbol $\overset{\sigma}{\simeq}$ means that the two topological spaces are homotopy equivalent via a homotopy $F$ satisfying $F(t,e^{\frac{2\pi i}{p}}z)=e^{\frac{2\pi i}{p}}F(t,z)$, and $\mathbb{S}^{m-1}$ denotes the unit sphere in $\mathbb{C}^{m}$. Hence, we have $$\begin{aligned}
\gamma\big(\overline{A_{\varepsilon}\cap G\cap\partial\mathcal{A}}\big)&\leq
\gamma\Big(\overline{h\big(A_{\varepsilon}\cap G\cap\partial\mathcal{A}\big)}\Big)\nonumber\\
&\leq K\Bigg(\sum_{b=1}^{B}P_{b}+B-1\Bigg).\nonumber\end{aligned}$$
Recall that the set $\partial\mathcal{A}\cap D\backslash A_{\varepsilon}$ contains no fixed point of $\mathbb{Z}_{p}$, as proved in the previous subsection. The above computation implies that $$\begin{aligned}
\gamma\big((\partial\mathcal{A}\backslash A_{\varepsilon})\cap G\big)
\geq\gamma(G\cap\partial\mathcal{A})- \gamma\big(\overline{A_{\varepsilon}\cap\partial\mathcal{A}\cap G}\big)\geq K.\nonumber\end{aligned}$$ The proof of Lemma \[ge\] is complete.
We note that $\partial\mathcal{A}\cap D\backslash A_{\varepsilon}$ is an invariant set of the heat flow, from which a sequence of compact sets with unbounded genus can be selected.
The Existence of Multiple Equilibria Having the Same Componentwise Prescribed Nodes
-----------------------------------------------------------------------------------
In this subsection, we complete the proof of the main result Theorem \[t:main\].
\[def\] Let $c\in J(\partial\mathcal{A}\cap D\backslash A_{\varepsilon})$. If there are positive numbers $\alpha$ and $\varepsilon$ such that for any $U\in J^{-1}[c-\varepsilon,c+\varepsilon]\cap\partial\mathcal{A}\cap D\backslash A_{\varepsilon}$, $|\Delta u_{j}-\lambda_{j} u_{j}+\mu_{j} u_{j}^{3}+\sum_{i\neq j}\beta_{ij} u_{i}^{2}u_{j}|_{2}\geq\alpha$ for some $j=1,\dots,N$, then there is $T>0$, independent of $U$, such that $\eta^{T}(U)\in J^{c-\varepsilon}\cap\partial\mathcal{A}\cap D\backslash A_{\varepsilon}$.
[**Proof.**]{} Let $T=\frac{4\varepsilon}{\alpha^2}>0$. Notice that $\partial\mathcal{A}\cap D\backslash A_{\varepsilon}$ is invariant under the flow $\eta$. If $\eta^{T}(U)\in J^{c-\varepsilon}\cap\partial\mathcal{A}\cap D\backslash A_{\varepsilon}$, the proof is complete. Otherwise, assume $J(\eta^{T}(U))> c-\varepsilon$. Then $\eta^{t}(U)\in J^{-1}[c-\varepsilon,c+\varepsilon]\cap\partial\mathcal{A}\cap D\backslash A_{\varepsilon}$ for any $t\in[0,T]$. Compute that $$\begin{aligned}
\frac{d}{dt}J(\eta^{t}(U))&=-\sum_{j=1}^{N}|\partial_{t}u_{j}(t,\cdot)|_{2}^{2}\nonumber\\
&=-\sum_{j=1}^{N}\Big|\big(\Delta u_{j}-\lambda_{j} u_{j}+\mu_{j} u_{j}^{3}+\sum_{i\neq j}\beta_{ij} u_{i}^{2}u_{j}\big)(t)\Big|_{2}^{2}\nonumber\\
&\leq-\alpha^{2}.\nonumber\end{aligned}$$ Therefore, $$\begin{aligned}
c-\varepsilon\leq c+\int_{0}^{T}\frac{d}{dt}J(\eta^{t}(U))dt\leq c-\alpha^{2}T=c-4\varepsilon.\nonumber\end{aligned}$$ This is a contradiction.
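The choice $T=\frac{4\varepsilon}{\alpha^{2}}$ is exactly what makes this estimate close: the guaranteed energy drop $\alpha^{2}T=4\varepsilon$ pushes $J$ from at most $c+\varepsilon$ down below $c-\varepsilon$. A trivial numerical sketch follows; the sample values of $\alpha$, $\varepsilon$, $c$ are arbitrary and only illustrate the arithmetic.

```python
# Sketch of the arithmetic in the proof above: along the flow
# dJ/dt <= -alpha^2, so after time T = 4*eps/alpha^2 the energy drops
# by at least alpha^2 * T = 4*eps, which exceeds the width 2*eps of
# the band [c - eps, c + eps].
alpha, eps, c = 0.5, 0.01, 1.0
T = 4 * eps / alpha**2
drop = alpha**2 * T
print(drop > 2 * eps)            # True: the drop clears the band
print(c + eps - drop < c - eps)  # True: J(eta^T(U)) < c - eps
```

Any $T$ with $\alpha^{2}T>2\varepsilon$ would do; the factor $4$ gives a comfortable margin.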
Define $$\Gamma_{k}=\{A\subset\partial\mathcal{A}\cap D\backslash A_{\varepsilon}\,|\, \mbox{$A$ is a $\sigma$-invariant compact set},\,\gamma(A)\geq k\}.$$ By Lemma \[ge\], $\Gamma_{k}\neq\emptyset$ for large $k$, and the values $$c_{k}=\inf_{A\in\Gamma_{k}}\sup_{U\in A}J(U)$$ are well-defined. Using Lemma \[def\] and classical arguments as in [@R Proposition 8.5], it is easy to verify the following.
(i). $K_{c_{k}}\cap\partial\mathcal{A}\cap D\backslash A_{\varepsilon}\neq\emptyset$ for large $k$.
(ii). If $c:=c_{j}=\dots=c_{j+l}$, then $\gamma(K_{c}\cap\partial\mathcal{A}\cap D\backslash A_{\varepsilon})\geq l+1$.
[**Proof.**]{} A standard argument ensures that there is a sequence $U_{n}\in\partial\mathcal{A}\cap D\backslash A_{\varepsilon}$, such that
- $J(U_{n})\to c_{k}$;
- for any $j=1,\dots,N$, $\Delta u_{n,j}-\lambda_{j} u_{n,j}+\mu_{j} u_{n,j}^{3}+\sum_{i\neq j}\beta_{ij} u_{n,i}^{2}u_{n,j}\to0$ in $L^{2}$,
where $u_{n,j}$ is the $j$-th component of $U_{n}$. The second assertion implies that $\nabla_{u_{j}}J(U_{n})\to\theta$ in $H^{-1}$. Then $U_{n}\to U$ in $(H^1)^N$ for some $U$, since the energy functional $J$ satisfies the (PS) condition. It is easy to see that $U$ is a critical point of $J$. Therefore, $U$ is of class $(H^2)^N$ by elliptic regularity theory. Now we prove that $U\in\partial\mathcal{A}\cap D\backslash A_{\varepsilon}$.
We have $U\in\partial\mathcal{A}\setminus A_{\varepsilon}$ by elliptic regularity and the fact that the $(L^4)^N$-norm is continuous on $(H^1)^N$.
To show $U\in D$, write $U=(u_1,\dots,u_N)$. The $U_{n}$ are continuous since they are of class $(H^2)^N$. We have, for $b=1,\dots,B$, $n\big((U_{n})_{pb-p+i}\big)=P_{b}$ for $i=1,\dots,p$, and $\big|(U_{n})_{pb-p+i,q}\big|_{4}\geq\frac{\varepsilon}{2}$ for $i=1,\dots,p$ and $q=0,\dots,P_{b}$. Here $(U_{n})_{pb-p+i,q}$ denotes the $q$-th bump of the $(pb-p+i)$-th component of $U_{n}$.
Since $U_{n}\to U$ in $(H^1)^N$, for $n$ large we have $\big|(U_{n})_{pb-p+i}-u_{pb-p+i}\big|_{4}<\frac{\varepsilon}{100}$. Therefore, for fixed $b=1,\dots,B$ and $i=1,\dots,p$, we can find points $x_{0},\dots,x_{P_{b}}\in\Omega$ such that
- $0<|x_{0}|<\dots<|x_{P_{b}}|<\infty$;
- $u_{pb-p+i}(x_{k})\cdot u_{pb-p+i}(x_{k+1})<0$ for $k=0,\dots,P_{b}-1$.
Therefore, $U\in D$. Assertion (i) is proved.
Assertion (ii) can be proved by arguments for the classical genus as in [@R Proposition 8.5], and we omit the details here.
Acknowledgement {#acknowledgement .unnumbered}
===============
Li would like to express his sincere appreciation to Thomas Bartsch, Marek Fila, Desheng Li and Pavol Quittner for their useful communications and comments. Li especially thanks Pavol Quittner for kindly providing [@Q1; @Q2; @Q2003; @Q3]. Wang thanks Zhaoli Liu for useful discussions. This work is supported by NSFC (11771324, 11831009).
[44]{}
Ackermann, N., Bartsch, T., Superstable Manifolds of Semilinear Parabolic Problems. Journal of Dynamics and Differential Equations, 17 (2005) 115-173. Akhmediev, N., Ankiewicz, A., Partially Coherent Solitons on a Finite Background. Physical Review Letters, 82 (1999) 2661-2664. Amann, H., On abstract parabolic fundamental solutions. Journal of the Mathematical Society of Japan, 39 (1987) 93-116. Amann, H., Linear and Quasilinear Parabolic Problems, Birkhäuser Verlag, Basel, 1995. Ambrosetti, A., Colorado, E., Standing waves of some coupled nonlinear Schrödinger equations. Journal of the London Mathematical Society, 75 (2007) 67-82. Ambrosetti, A., Rabinowitz, P., H., Dual variational methods in critical point theory and applications. Journal of Functional Analysis, 14 (1973) 349-381.
Angenent, S., Nodal properties of solutions of parabolic equations. Rocky Mountain J. Math., 21 (1991) 585-592.
Ao, W., Wei, J., Yao, W., Uniqueness and non-degeneracy of sign-changing radial solutions to an almost critical elliptic problem, arXiv:1510.04678v1.
Bartsch, T., Wang, Z.-Q., Note on ground states of nonlinear Schrödinger systems. Partial Differential Equations. 3 (2006) 200-207.
Bartsch, T., Dancer, E.N., Wang, Z.-Q., A Liouville theorem, a-priori bounds, and bifurcating branches of positive solutions for a nonlinear elliptic system. [Calc. Var. Partial Differential Equations]{} [37]{} (2010) 345-361.
Bartsch, T., Willem, M., Infinitely many radial solutions of a semilinear elliptic problem on $\mathbb R^N$. Archive for Rational Mechanics and Analysis, 124 (1993) 261-276. Bartsch, T., Wang, Z.-Q., Willem, M., The Dirichlet problem for superlinear elliptic equations. Handbook of Differential Equations: Stationary Partial Differential Equations. Vol 2, 1-55, Elsevier Science and Technology, 2005.
Cazenave, T., Lions, P.-L., Solutions globales d'équations de la chaleur semi-linéaires. Communications in Partial Differential Equations, 9 (1984) 955-978.
Chang, K.-C., Infinite Dimensional Morse Theory and Multiple Solutions Problems. Birkhäuser Boston, 1993.
Chang, K.-C., Heat method in nonlinear elliptic equations. Topological methods, variational methods and their applications (Tianyuan, 2002), 65-76, World Sci., River Edge, NJ, 2003. Conti, M., Merizzi, L., Terracini, S., Radial Solutions of Superlinear Equations on $\mathbb{R}^{N}$. Part I: Global Variational Approach. Archive for Rational Mechanics and Analysis, 153 (2000) 291-316. Dancer, E, N., Wei, J., Weth, T., A priori bounds versus multiple existence of positive solutions for a nonlinear Schrödinger system. Annales De Linstitut Henri Poincare, 27 (2010) 953-969.
Daners, D., Medina, P, K., Abstract Evolution Equations, Periodic Problems, and Applications. Longman Scientific and Technical, 1992.
Fila, M., Levine, H. A., On the Boundedness of Global Solutions of Abstract Semilinear Parabolic Equations. Journal of Mathematical Analysis and Applications, 216 (1997) 654-666.
Galaktionov, V., Geometric sturmian theory of nonlinear parabolic equations and applications. Chapman and Hall/CRC Florida, 2004. Gidas, B., Ni, W, M., Nirenberg, L., Symmetry and related properties via the maximum principle. Communications in Mathematical Physics, 68 (1979) 209-243. Henry, D., Geometric Theory of Semilinear Parabolic Equations. Lecture Notes in Mathematics, 840, Springer-Verlag, Berlin, 2008. Kwong, M, K., Uniqueness of positive solutions of $\Delta u-u+u^{p}=0$ in $\mathbb{R}^n$. Archive for Rational Mechanics and Analysis, 105 (1989) 243-266. Li, H., Meng, L., Notes on A Superlinear Elliptic Problem, preprint. (2005) 629–653.
\(2005) 403–439.
Lions, J. L., Magenes, E., Non-Homogeneous Boundary Value Problems and Applications, Volum 1. Springer-Verlag, Berlin, 1972.
Liu, J., Liu, X., Wang, Z.-Q., Multiple mixed states of nodal solutions for nonlinear Schrödinger systems. Calculus of Variations and Partial Differential Equations, 52 (2015) 565-586.
Liu, Z., Sun, J., Invariant Sets of Descending Flow in Critical Point Theory with Applications to Nonlinear Differential Equations. Journal of Differential Equations, 172 (2001) 257-299. Liu, Z., Wang, Z.-Q.. Multiple bound states of nonlinear Schrödinger systems. Comm. Math. Phys., 282 (2008) 721-731. Liu, Z., Wang, Z.-Q., Ground States and Bound States of a Nonlinear Schrödinger System. Advanced Nonlinear Studies, 10 (2010) 175-193. Liu, Z., Wang, Z.-Q., Vector solutions with prescribed component-wise nodes for a Schrödinger system. Anal. Theory Appl., 35 (2019) 288-311.
Lunardi, A., Analytic semigroups and optimal regularity in parabolic problems. Progress in Nonlinear Differential Equations and their Applications, 16. Birkhäuser Verlag, Basel, 1995.
Matano, H., Nonincrease of the lap number of a solution for a one-dimensional semi-linear parabolic equation, J. Fac. Sci. Univ. Tokyo Sect. 1A Math., 29 (1982) 401-441.
Mitchell, M., Segev, M., Self-trapping of incoherent white light beams. Nature, 387 (1997) 880-883.
\(2010) 267-302.
Prüss, J., Sohr, H., Imaginary powers of elliptic second order differential operators in $L_p$-spaces. Hiroshima Mathematical Journal. 23 (1993) 395-418.
Quittner, P., Boundedness of trajectories of parabolic equations and stationary solutions via dynamical methods. Differential and Integral Equations., 7 (1994) 1547-1556. Quittner, P., Signed solutions for a semilinear elliptic problem. Differential and Integral Equations, 11 (1998) 551-559. Quittner, P., Continuity of the blow-up time and a priori bounds for solutions in superlinear parabolic problems. Houston Journal of Mathematics, 29 (2003) 757-799. Quittner, P., Multiple equilibria, periodic solutions and a priori bounds for solutions in superlinear parabolic problems. Nonlinear Differential Equations and Applications Nodea, 11 (2004) 237-258.
Quittner, P., Souplet, P., Superlinear parabolic problems. Birkhäuser, Basel, 2007.
Rabinowitz, P., Minimax Methods in Critical Point Theory with Applications to Differential Equations, CBMS Regional Conf. Ser. in Math., vol. 65. American Mathematical Society, Providence, 1986.
Sirakov, B., Least energy solitary waves for a system of nonlinear Schrödinger equations in $ \mathbb R^n$. [Comm. Math. Phys.]{} [271]{} (2007) 199-221.
Seeley, R., Interpolation in $L_p$ with boundary conditions. Studia Math. 44 (1972) 47-60.
Struwe, M., Superlinear elliptic boundary value problems with rotational symmetry. Archiv der mathematik. 39 (1982) 233-240.
Tartar, B, L., An Introduction to Sobolev Spaces and Interpolation Spaces. Springer-Verlag Berlin Heidelberg, 2007.
Tanaka, S. Uniqueness of sign-changing radial solutions for $\Delta u-u+|u|^{p-1}u=0$ in some ball and annulus. Journal of Mathematical Analysis and Applications, 439 (2016) 154-170.
\(2009) 717-741.
Tian, R., Wang, Z.-Q., Multiple solitary wave solutions of nonlinear Schrödinger systems. Topo. Methods Nonlinear Anal. 37 (2011) 203-223. Wang, Z.-Q., A $\mathbb Z_p$-Borsuk-Ulam Theorem. [Chinese Bulletin of Science]{}, [34]{} (1989) 1153-1157.
Wei, J., Weth, T., Radial Solutions and Phase Separation in a System of Two Coupled Schrödinger Equations. Archive for Rational Mechanics and Analysis, 190 (2008) 83-106. Willem, M., Minimax Theorems. Birkhäuser Boston, 1996.
|
---
abstract: 'The derivative expansion method has been used to solve the semiclassical kinetic equations of the quark-gluon plasma. The nonlinear spatial damping rate, i.e. the imaginary part of the wave vector, of the longitudinal secondary color waves in the long wavelength limit has been calculated numerically.'
---
The Nonlinear Spatial Damping Rate in QGP [^1]
Chen Jisheng[^2] and Li Jiarong
Institute of Particle Physics, Central China Normal University
Wuhan 430079, Hubei, P.R. China
Keywords: QGP, nonlinear spatial damping rate, derivative expansion
Landau damping, a collisionless damping, is an important collective effect in the quark-gluon plasma (QGP): it describes how a color field is affected by the medium as it travels through the QGP. In the framework of kinetic theory, it has been shown that there is no Landau damping in the linear or Abelian dominance approximation$^{\cite{s1,s2,s3}}$, but that there is nonlinear Landau damping$^{\cite{s4}}$. However, all previous works give only the temporal damping rate, i.e., instabilities with complex frequency and real wave vector. As in the electromagnetic plasma$^{\cite{s5}}$, if the imaginary part of the wave vector of the secondary waves generated by the nonlinear interactions in the plasma is nonzero, there is spatial damping in QGP as well. The collisionless damping in QGP is completely characterized only when both the temporal and the spatial damping are known. Up to now, neither QGP kinetic theory nor finite temperature QCD has given the spatial damping rate directly. In this Letter, we start from the QGP kinetic equations and derive the nonlinear spatial damping rate.
The collisionless kinetic equations for QGP are$^{\cite{s6}}$ $$\begin{aligned}
p^{\mu}D_{\mu}f(x,p)+\frac{1}{2}
p^{\mu}\frac{\partial}{\partial p_{\nu}}\{F_{\mu \nu}
(x),f(x,p)\}=0; \nonumber\\
p^{\mu}D_{\mu}\bar{f}(x,p)-
\frac{1}{2}p^{\mu}\frac{\partial}
{\partial p_{\nu}}\{F_{\mu \nu}(x),\bar{f}(x,p)\}=
0;\\
p^{\mu}\tilde{D}_{\mu}G(x,p)+\frac{1}{2}p^{\mu}\frac{\partial}
{\partial p_{\nu}}\{{\cal F}_{\mu \nu}(x),G(x,p)\}=0\nonumber\end{aligned}$$ where $f$, $\bar{f}$ and $G$ are the distribution functions of quarks, antiquarks and gluons in the QGP, respectively. $F_{\mu \nu}$ and ${\cal F}_{\mu \nu}$ are the mean field strength tensors in the fundamental and adjoint representations, i.e., $F_{\mu \nu}=F_{\mu \nu}^{a}I_a$, ${\cal F}_{\mu \nu} =F_{\mu \nu }^aF_a$, where $I_a$ and $F_a$ are the generators in the corresponding representation.
The mean color field equation coupled with the kinetic equations is $$\begin{aligned}
D_{\mu}F^{\mu \nu}(x)=j^{\nu}(x)\end{aligned}$$
where $j^{\nu}$ is the color current $$\begin{aligned}
j^{\nu}(x)=-\frac{g}{2}\int \frac{d^3p }{(2\pi)^3E_p}p^\nu\left [\left (f(x,p)-\bar{f}(x,p)\right )+2iI_af_{abc}G_{bc}(x,p)\right ]\end{aligned}$$ where $f_{abc}$ are the structure constants of $SU(N_c)$.
To calculate the imaginary part of the wave vector of the color field conveniently, we solve the QGP kinetic equations iteratively in momentum space. Since the energy density of the color field in the QGP is much smaller than the thermal energy density at high temperature and density, the mean field strength can be taken as the small parameter separating these scales. The derivative expansion method is an effective tool for solving the nonlinear equations$^{\cite{s7,s8}}$. In this method it is essential that not only the functions but also the relevant derivatives in the equations are expanded iteratively. To discuss the spatial damping, we expand only the spatial derivative in momentum space; the wave vector and the relevant functions can be expanded as $$\begin{aligned}
{\bf k}&=&\sum _{j=0}^{N}\alpha ^j{\bf k}^{(j)};
\nonumber\\
A_i&=&\sum _{j=1}^N \alpha ^j A_i^{(j)}({\bf k}^{(0)},{\bf k}^{(1)},\cdots ,{\bf
k}^{(N)});\\
f&=&\sum _{j=0} ^N \alpha ^j f^{(j)}({\bf k}^{(0)},{\bf k}^{(1)},\cdots ,{\bf
k}^{(N)})\nonumber\end{aligned}$$ where $\alpha$ is a dimensionless parameter introduced to keep track of the order of a small quantity. $\bar{f}$ and $G$ can be expanded similarly. For convenience we work in the temporal gauge, $A_a^0=0$. The relation between the color electric field ${\bf E}$ and the color vector potential ${\bf A}$ is $E^{i}_a=-\partial A^i_a(x)/\partial t$.
The color current at any order is expanded as $$\begin{aligned}
j^{h(n)}=-\frac{g}{2}\int \frac{d^3p}{E_p(2\pi)^3}p^{h}\left \{\left (f^{(n)}-\bar{f}^{(n)}\right )+2iI_af_{abc}G_{bc}^{(n)}\right \}\end{aligned}$$
From the kinetic equations we can see that the distribution functions fluctuate when the mean field is applied to the plasma as an external field, and the fluctuating distribution functions in turn influence the mean field. This self-consistent relation between the mean field and the distribution functions of the plasma particles is well expressed by the expansions Eqs.(4) and (5). The leading terms describe the linear approximation. The dispersion relation obtained in the linear approximation is modified by the nonlinear interactions of the eigenwaves, i.e., wave vector corrections ${\bf k}^{(1)}, {\bf k}^{(2)}, \cdots, {\bf k}^{(N)}$ are added to the eigenwave vector ${\bf k}^{(0)}$. As in the electromagnetic plasma, if the imaginary parts of the wave vector corrections for the eigenmodes do not vanish, there is spatial damping.
To simplify the calculation, we suppose that only the longitudinal vector potential is present: $$\begin{aligned}
A^{i}(k)=k^{(0)i}A(k)/K^{(0)}\end{aligned}$$ where $K^{(0)}=|{\bf k}^{(0)}|$.
Inserting the expansions Eqs.(4) and (5) into Eqs.(1)-(2) in momentum space and equating the coefficients of equal powers of $\alpha$ on the two sides of these equations, one obtains a hierarchy of equations.
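The order-by-order bookkeeping behind this hierarchy can be illustrated on a scalar toy analogue of Eqs.(4)-(5). The sketch below (Python/SymPy; the symbols $k_j$, $A_j$, $f_j$ are illustrative placeholders, not the actual color-matrix quantities) expands a model equation $(k+A)f=0$ in powers of $\alpha$ and collects equal powers, reproducing the structure of equations such as Eq.(8):

```python
import sympy as sp

# Toy scalar analogue of the derivative expansion: expand each quantity
# in powers of alpha, as in Eqs. (4)-(5), and collect equal powers.
# The symbols k0..k3, A1..A3, f0..f3 are illustrative placeholders.
alpha = sp.symbols('alpha')
N = 3
k = sp.symbols('k0:4')                     # k^(0) ... k^(3)
A = (sp.Integer(0),) + sp.symbols('A1:4')  # A starts at first order
f = sp.symbols('f0:4')

K = sum(alpha**j * k[j] for j in range(N + 1))
Af = sum(alpha**j * A[j] for j in range(1, N + 1))
F = sum(alpha**j * f[j] for j in range(N + 1))

# Model equation (K + Af) * F = 0; its alpha^n coefficient is the
# n-th member of the hierarchy.
eq = sp.expand((K + Af) * F)
hierarchy = [eq.coeff(alpha, n) for n in range(N + 1)]
for n, h in enumerate(hierarchy):
    print(f"order {n}: {h} = 0")
```

At zeroth order one finds $k_0 f_0 = 0$; at first order $k_0 f_1 + k_1 f_0 + A_1 f_0 = 0$, i.e. each order couples lower-order field and wave-vector corrections, exactly the pattern of Eqs.(8) and (18).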
The first order mean field equation is $$\begin{aligned}
-\omega^{2}A^{(1)h}(k)=j^{(1)h}(k)\end{aligned}$$ The first order transport equation for the distribution function of quarks is $$\begin{aligned}
p\cdot k^{(0)}f^{(1)}(k,p)&+&\frac{g}{2}\sum_{k_1+k_2=k}(p\cdot k_1^{(0)})\{A_i^{(1)}(k_1),\partial _p^{i}f^{(0)}(k_2,p)\}\nonumber\\
&+&\frac{g}{2}\sum_{k_1+k_2=k}p_i\{A_i^{(1)}(k_1),k_{1\nu}^{(0)}\cdot \partial _p^{\nu}f^{(0)}(k_2,p)\}=0\end{aligned}$$ where we have defined $k^{(0)}$=$(\omega,{\bf k}^{(0)})$ and $$\begin{aligned}
\sum_{k_1+k_2=k}=\int \frac {d^4k_1d^4k_2}{(2\pi )^4}\delta ^4(k-k_1-k_2). \end{aligned}$$ The first order kinetic equations for antiquarks and gluons are similar to Eq.(8), except that the signs of the anticommutator terms $\{ \cdots ,\cdots \}$ are reversed for antiquarks, while $f$ and $A$ are replaced by $G$ and ${\cal A}$ for gluons.
Assuming that the background configuration is locally color neutral, the zeroth order colorless distribution functions can be chosen as the equilibrium Fermi-Dirac and Bose-Einstein distribution functions, respectively: $$\begin{aligned}
f^{(0)}(k,p)=\bar{f}^{(0)}(k,p)=(e^{\beta p\cdot U}+1)^{-1};&~~~~~~~~&
G^{(0)}(k,p)=(e^{\beta p\cdot U}-1)^{-1}\end{aligned}$$ where $\beta =1/T$ is the inverse temperature and $U$ is the local flow velocity (normalized to $U_{\mu}U^{\mu}=1$). Using Eq.(8) and Eq.(6), we can express the first order distribution functions in terms of the corresponding zeroth order distribution functions and the first order field potential $$\begin{aligned}
f^{(1)}(k,p)=-g\omega\frac{p^{0}k^{(0)i}}{p\cdot k^{(0)}+ip^{0}0^{+}}\frac {df^{(0)}(p)}{dp^{i}}\frac
{A^{(1)}(k)}{K^{(0)}};\nonumber\\
\bar{f}^{(1)}(k,p)=g\omega \frac{p^{0}k^{(0)i}}{p\cdot k^{(0)}+ip^{0}0^{+}}\frac {d\bar{f}^{(0)}(p)}{dp^{i}}\frac
{A^{(1)}(k)}{K^{(0)}};\\
G^{(1)}(k,p)=-g\omega \frac{p^{0}k^{(0)i}}{p\cdot k^{(0)}+ip^{0}0^{+}}\frac {dG^{(0)}(p)}{dp^{i}}\frac {{\cal A}^{(1)}(k)}{K^{(0)}}\nonumber\end{aligned}$$ By inserting Eq.(11) into the mean field equation Eq.(7) we can obtain the dispersion relation satisfied by the frequency and wave vector of the eigenwave in the first order approximation. $$\begin{aligned}
\epsilon (\omega,{\bf k}^{(0)})=1+
\frac {3\omega_p^{2}}{K^{(0)2}}
\left [
1-\frac {\omega }{2K^{(0)}}
\left (\ln\left |\frac{K^{(0)}+\omega }{K^{(0)}-\omega}\right |-i\pi \theta (K^{(0)}-\omega )\right )
\right ]=0\end{aligned}$$ and the solution is $$\begin{aligned}
A^{(1)\sigma}_{{\bf k}^{(0)}}=-i\frac{\pi}{\omega }
E^{\sigma}_{{\bf k}^{(0)}}
\left [e^{-i\phi ^{\sigma}_{{\bf k}^{(0)}}}
\delta(\omega-\omega^{\sigma}_{{\bf k}^{(0)}})+
e^{i\phi ^{\sigma}_{{\bf k}^{(0)}}}
\delta(\omega+\omega^{\sigma}_{{\bf k}^{(0)}})\right ]\end{aligned}$$ where $E^{\sigma}_{{\bf k}^{(0)}}$ and $\phi ^{\sigma}_{{\bf k}^{(0)}}$ are the initial amplitude and phase of the oscillation, respectively. $\omega_p=\sqrt{(2N_c+N_f)g^{2}T^{2}/18}$ is the plasma frequency, with $N_f$ the number of quark flavors. The dispersion relation Eq.(12) agrees with the leading order hard thermal loop result of finite temperature QCD; the classical nature of the hard thermal loops has been investigated extensively by Blaizot and Iancu, among others$^{\cite{z1,z2}}$. As shown by U. Heinz, the eigenwaves satisfying Eq.(12) are always timelike, i.e., $\omega /K^{(0)}>1$ (we set $\hbar =c=1$ for convenience). This means that the phase velocity of the eigenwaves exceeds the velocity of light, so these waves cannot exchange energy with the plasma particles and do not undergo damping in the linear approximation$^{\cite{s1,s2}}$. In the long wavelength region, the dispersion relation Eq.(12) reduces to $$\begin{aligned}
\omega ^2=\omega _p^2+\frac {3}{5} K^{(0)^{2}}\end{aligned}$$
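The passage from Eq.(12) to Eq.(14) can be checked numerically. The sketch below (Python; units $\hbar=c=1$ and $\omega_p=1$; function and variable names are ours) finds the root of Eq.(12) on the timelike branch $\omega>K^{(0)}$, where the $\theta$-function term vanishes and the dielectric function is real, and compares it with the long wavelength form Eq.(14):

```python
import numpy as np
from scipy.optimize import brentq

def eps_L(w, K, wp=1.0):
    """Longitudinal dielectric function of Eq. (12) on the timelike
    branch w > K, where the theta-function term vanishes."""
    return 1.0 + 3.0 * wp**2 / K**2 * (
        1.0 - w / (2.0 * K) * np.log((K + w) / (w - K)))

K = 0.1                                   # long wavelength: K << wp
w_root = brentq(lambda w: eps_L(w, K), K * (1.0 + 1e-9), 10.0)
w_lw = np.sqrt(1.0 + 0.6 * K**2)          # Eq. (14) with wp = 1

print(f"root of eps_L: omega = {w_root:.6f}")
print(f"Eq. (14):      omega = {w_lw:.6f}")
```

The bracket works because $\epsilon\to-\infty$ as $\omega\to K^{(0)+}$ (the logarithm diverges) and $\epsilon\to 1$ as $\omega\to\infty$; for $K^{(0)}\ll\omega_p$ the root agrees with Eq.(14) to $O(K^{(0)4})$.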
The following identities will be used below to simplify the color current contributed by gluons: $$\begin{aligned}
&~&f_{abc}f_{abd}=N_c\delta _{cd};~~~~~~~~
if_{abc}G_{bc}=tr(F_aG); ~~~~~~~~
[F_a,F_b]=if_{abc}F_c;\nonumber\\
&~&I_dtr(F_d[F_a,F_b])=N_c[I_a,I_b];~~~~~~~~
I^{d}tr(F^{d}[F^{a},[F^{b},F^{c}]])=N_c[I^{a},[I^{b},I^{c}]];\nonumber\\
&~~~~~&{\rm tr}(\{F_a,\{F_b,F_c\}\}F_d)=4\delta _{ad} \delta _{cb}+2\delta _{ab}\delta _{cd}
+2\delta _{ac} \delta _{bd}+N_cd^{ade}d^{bce}\end{aligned}$$
The second order equations can be treated in the same manner. Substituting the distribution functions obtained from the second order kinetic equations into the expression for the second order color current, one finds that the second order mean field equation reduces to $$\begin{aligned}
-\omega^{2}\epsilon(\omega,{\bf k}^{(0)})A^{(2)}(k)
&=&\int\frac {d^3p}{(2\pi )^3}\left [N_f\frac{d}{dE_p}
\left (f^{(0)}(p)+\bar{f}^{(0)}(p)\right )+2N_c \frac {dG^{(0)}(p)}{dE_p}\right ]\nonumber\\
&~&\left \{g^3 \frac{1}{p\cdot k^{(0)}+ip^{0}0^{+}}\frac {{\bf p}\cdot {\bf k^{(0)}}}
{K^{(0)}}\sum_{k_1+k_2=k}\frac {\omega _2}{p\cdot k_2+ip^{0}0^{+}}
\frac{{\bf p}\cdot {\bf k_1}^{(0)}}{K_1^{(0)}}\frac{{\bf p}\cdot {\bf k_2}^{(0)}}{K_2^{(0)}}\right.\nonumber\\
&~&\left.[A^{(1)}(k_1),A^{(1)}(k_2)]
-g^2 \frac{{\bf p} \cdot {\bf k}^{(1)}}{p^0}
\frac{A^{(1)}(k)}{(p\cdot k^{(0)}+ip^0 0^+)^2}
\frac{({\bf p}\cdot{\bf k}^{(0)})^3}{(K^{(0)})^2}\right \}\end{aligned}$$
Eq.(16) describes the three-wave processes arising from the nonlinear coupling term $A^{(1)}(k_1)A^{(1)}(k_2)$, in which two eigenwaves with wave vectors ${\bf k}_1$ and ${\bf k}_2$ combine into a secondary wave with wave vector ${\bf k}$. However, as pointed out in Refs.$\cite{s9,s10}$, the three-wave processes are forbidden, so their nonlinear effects on the wave vectors appear only at higher orders of perturbation.
We now turn to the third order equations. The third order field equation is $$\begin{aligned}
-\omega^{2}A^{(3)}(k)
\frac{k^{(0)h}}{K^{(0)}}&+&g\sum_{k_1+k_2=k}\frac{k^{(1)i}k_1^{(0)i}k_2^{(0)h}}{K_1^{(0)}K_2^{(0)}}
[A^{(1)}(k_1),A^{(1)}(k_2)]\nonumber\\
&+&g^{2}\sum_{k_1+k_2=k}\sum_{k_3+k_4=k_2}\frac{k^{(0)i}k_3^{(0)i}k_4^{(0)h}}
{K_1^{(0)}K_2^{(0)}K_4^{(0)}}[A^{(1)}(k_1),[A^{(1)}(k_3),A^{(1)}(k_4)]]=j^{(3)h}(k)\end{aligned}$$ and the third order kinetic equation for quarks is $$\begin{aligned}
p\cdot&& k^{(0)}f^{(3)}(k,p)-k_i^{(2)}p_if^{(1)}(k,p)-gp_i\sum_{k_1+k_2=k}[A_i^{(1)}(k_1),f^{(2)}(k_2,p)]-p_ik_i^{(1)}f^{(2)}(k,p)\nonumber\\
&~&-gp_i\sum_{k_1+k_2=k}[A_i^{(2)}(k_1),f^{(1)}(k_2,p)]+\frac{g}{2}\sum_{k_1+k_2=k}p\cdot k_1^{(0)}\{A_i^{(1)}(k_1),\partial _p^{i}f^{(2)}(k_2,p)\}\nonumber\\
&~&+\frac{g}{2}\sum_{k_1+k_2=k}p\cdot k_1^{(0)}\{A_i^{(2)}(k_1),\partial _{p}^{i}f^{(1)}(k_2,p)\}-\frac{g}{2}\sum_{k_1+k_2=k}p_ik_{1i}^{(1)}\{A_j^{(2)}(k_1),\partial _{p}^{j}f^{(0)}(k_2,p)\}\nonumber\\
&~&+\frac{g}{2}\sum_{k_1+k_2=k}p_i\{A_i^{(1)}(k_1),k_{1\nu}^{(0)}\partial _{p}^{\nu }f^{(2)}(k_2,p)\}+\frac{g}{2}\sum_{k_1+k_2=k}p_i\{A_i^{(2)}(k_1),k_{1\nu}^{(0)}\partial _{p}^{\nu }f^{(1)}(k_2,p)\}\nonumber\\
&~&+\frac {g}{2}\sum_{k_1+k_2=k}p_i\{A_i^{(2)},k_{1\nu}^{(1)}\partial _p^{\nu }f^{(0)}(k_2,p)\}+\frac {g}{2}\sum_{k_1+k_2=k}(p\cdot k)\{A_i^{(3)}(k_1),\partial _p^if^{(0)}(k_2,p)\}\nonumber\\
&~&-\frac {g}{2}\sum_{k_1+k_2=k}p_ik_{1i}^{(1)}\{A_j^{(2)}(k_1),\partial _p^jf^{(0)}(k_2,p)\}-\frac {g}{2}\sum_{k_1+k_2=k}p_ik_{1i}^{(2)}\{A_j^{(1)}(k_1),\partial _p^jf^{(0)}(k_2,p)\}\nonumber\\
&~&+\frac {g}{2}\sum_{k_1+k_2=k}p_i\{A_i^{(1)}(k_1),k_{1\nu}^{(1)}\partial _p^{\nu }f^{(1)}(k_2,p)\}+\frac {g}{2}\sum_{k_1+k_2=k}p_i\{A_i^{(1)}(k_1),k_{1\nu }^{(2)}\partial _p^{\nu }f^{(0)}(k_2,p)\}\nonumber\\
&~&+\frac{g}{2}\sum_{k_1+k_2=k}p_i\{A_i^{(3)}(k_1),k_{1 \nu }^{(0)} \partial _{p}^{\nu }f^{(0)}(k_2,p)\}\nonumber\\
&~&-\frac {g^2}{2}\sum_{k_1+k_2=k}\sum_{k_3+k_4=k_1}p_i\{[A_i^{(1)}(k_3),A_j^{(2)}(k_4)],\partial _p^jf^{(0)}(k_2,p)\}\nonumber\\
&~&-\frac{g^2}{2}\sum_{k_1+k_2=k}\sum_{k_3+k_4=k_1}p_i\{[A_i^{(1)}(k_3),A_j^{(1)}(k_4)],\partial _{p}^{j}f^{(1)}(k_2,p)\}\nonumber\\
&~&-\frac {g^2}{2}
\sum_{k_1+k_2=k}\sum_{k_3+k_4=k_1}p_i\{[A_i^{(2)}(k_3),A_j^{(1)}(k_4)],\partial _p^jf^{(0)}(k_2,p)\}=0\end{aligned}$$ The third order kinetic equations for antiquarks and gluons are similar, with the changes noted after Eq.(8).
Analogously to electromagnetic plasma theory$^{\cite{s11}}$, the wave vector ${\bf k}$ can be written as ${\bf k}=\vec{\beta}-i\vec{\alpha }$, where $\beta=|\vec{\beta}|$ is the phase constant and $\alpha=|\vec{\alpha }|$ is the damping constant, i.e., the amplitude of the eigenwave decreases exponentially in the direction of $\vec{\alpha}$. The nonlinear spatial damping rate at this order is defined as $$\begin{aligned}
\alpha=-Im|{\bf k}^{(2)}|=-ImK^{(2)}.\end{aligned}$$
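The meaning of the damping constant in Eq.(19) can be made concrete with a small numerical sketch (Python; the convention $e^{i(\omega t-{\bf k}\cdot{\bf x})}$ and the values $\beta=2$, $\alpha=0.3$ are illustrative assumptions): for ${\bf k}=\vec{\beta}-i\vec{\alpha}$ the amplitude falls off as $e^{-\alpha x}$, and $\alpha$ (in neper per unit length) is recovered as the slope of $-\ln|A|$:

```python
import numpy as np

# A wave with complex wavenumber k = beta - i*alpha, in the convention
# exp(i*(omega*t - k*x)), has spatial profile exp(-i*k*x) at t = 0, so
# |A(x)| = exp(-alpha*x).  Illustrative numbers only.
beta, alpha = 2.0, 0.3
x = np.linspace(0.0, 10.0, 200)
A = np.exp(-1j * (beta - 1j * alpha) * x)

# The damping constant (neper per unit length) is the slope of -ln|A|.
alpha_fit = -np.polyfit(x, np.log(np.abs(A)), 1)[0]
print(f"recovered alpha = {alpha_fit:.6f}")
```

The inverse, $1/\alpha$, sets the penetration depth of the damped wave; for the final result $\alpha \sim 2.52\,g^2T$ this is of order $1/(2.52\,g^2T)$.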
As the oscillations develop from random thermal motion, we have $\langle A^{(1)a}(k)\rangle=0$, where $\langle~~\rangle$ denotes the average with respect to the random phases of the oscillations. So we obtain $$\begin{aligned}
\langle A^{(1)a}(k)A^{(1)b*}(k')\rangle=(2\pi )^4
\delta ^4(k-k')\delta _{a}^{b}\langle A^{(1)2}(k)\rangle_{{\bf k}^{(0)}\omega
};\nonumber\\
\langle A^{(1)2}(k)\rangle_{{\bf k}^{(0)}\omega }=\frac
{\pi}{\omega ^{2}}[\delta (\omega-\omega_{{\bf k}^{(0)}})]I_{{\bf k}^{(0)}};~~~~~I_{{\bf k}^{(0)}}=\frac {|E_{{\bf k}^{(0)}}|}{2V} \end{aligned}$$ where $V$ is the volume of the plasma; $I_{{\bf k}^{(0)}}$ characterizes the total intensity of the fluctuating oscillations with frequencies $\omega_{{\bf k}}$ and $-\omega_{{\bf k}}$. In an equilibrium plasma in the long wavelength limit ($K^{(0)}=0$) one has $I_{{\bf k}^{(0)}}=4\pi T$. The average of a product of field potentials can be expanded as$^{\cite{s7,s12}}$ $$\begin{aligned}
\langle A^{(1)}(k_1)A^{(1)}(k_2)A^{(1)}(k_3)A^{(1)}(k_{4 })\rangle&=&
\langle A^{(1)}(k_1)A^{(1)}(k_2)\rangle\langle A^{(1)}(k_3)A^{(1)}(k_4)\rangle\nonumber\\
&~&+\langle A^{(1)}(k_1)A^{(1)}(k_3)\rangle\langle A^{(1)}(k_2)A^{(1)}(k_4)\rangle\nonumber\\
&~&+\langle A^{(1)}(k_1)A^{(1)}(k_4)\rangle\langle A^{(1)}(k_2)A^{(1)}(k_3)\rangle\end{aligned}$$
By inserting the third order distribution functions Eq.(18) into Eq.(17), multiplying both sides of the equation by $A^{(1)b}(k')$ and averaging the result with respect to the random phases, we have $$\begin{aligned}
-Im\int \frac {d^3p}{E_p(2\pi )^3}\left (\frac {{\bf p}\cdot {\bf k}^{(0)}}{K^{(0)}}\right )^2
\frac {{\bf p}\cdot {\bf k^{(2)}}}{\left ( p\cdot k^{(0)}+ip^00^+ \right )^2}\left [N_f\frac {df^{(0)}(p)}{dE_P}+N_c\frac {dG^{(0)}(p)}{dE_p}\right ]
=A_1+A_2;\end{aligned}$$ $$\begin{aligned}
A_1=&Im&\left (-\frac {g^2}{\omega _p}\int \frac {d^3p}{(2\pi )^3E_p}
\int \frac {d^4k_1}{(2\pi )^4}\langle A^{(0)^2}\rangle _{{\bf k}_1^{(0)}\omega}
\left (\frac {\omega _1}{K_1^{(0)}}\right )^2
\left (\frac {\omega}{K^{(0)}} \right)^2 \right.\nonumber\\
&\times &\left.\frac {1}{p\cdot (k^{(0)}-k^{(0)}_1)+ip^00^+}
\left (\frac {p^0{\bf k}^{(0)}\cdot {\bf k}^{(0)}_1}
{p\cdot k^{(0)}+ip^00^+}-{\bf p}\cdot {\bf k}^{(0)}
\frac {\omega {\bf p}\cdot {\bf k}^{(0)}_1-p^0{\bf k}^{(0)}\cdot {\bf k}^{(0)}_1}
{(p\cdot k^{(0)}+ip^00^+)^2} \right )\right. \nonumber\\
&\times& \left.\left\{\left [{\bf k}^{(0)}\cdot {\bf k_1^{(0)}}
\left (
\frac {1}{p\cdot k_1^{(0)}+ip^00^+}
-\frac {1}{p\cdot k^{(0)}+ip^00^+}
\right )
+({\bf p}\cdot {\bf k_1}^{(0)})
\frac {-\omega _1{\bf p}\cdot {\bf k}^{(0)}/p^0+{\bf k}^{(0)}\cdot {\bf k}_1^{(0)}}
{({\bf p}\cdot {\bf k}_1^{(0)}+ip^00^+)^2}\right. \right.\right. \nonumber\\
&-&\left.\left.\left.({\bf p}\cdot {\bf k})
\frac {-\omega {\bf p}\cdot {\bf k}_1^{(0)}/p^0+{\bf k}^{(0)}\cdot {\bf k}_1^{(0)}}
{(p\cdot k^{(0)}+ip^00^+)^2}\right ]
\left (\frac {7}{6}N_f\frac {df^{(0)}(p)}{dE_p}+\frac {22}{3}\frac {dG^{(0)}(p)}{dE_p} \right )
\right.\right.\nonumber\\
&+&\left.\left. ({\bf p}\cdot {\bf k}_1^{(0)})\frac {({\bf p}\cdot {\bf k}^{(0)})}{p^0}
\left (
\frac {1}{p\cdot k_1^{(0)}}-\frac {1}{p\cdot k^{(0)}+ip^00^+}
\right )
\left (\frac {7}{6}N_f\frac {d^2f^{(0)}(p)}{dE_p^2}+\frac {22}{3}\frac {d^2G^{(0)}(p)}{dE_p^2} \right )
\right\}\right );\nonumber\\
A_2=&Im& \left \{\int \frac {d^3p}{(2\pi )^3E_p}\int \frac {d^4k_1}{(2\pi )^4}
\langle A^{(1)^{2}} \rangle_{{\bf k}_1^{(0)}\omega}
\left (N_f\frac {df^{(0)}}{dE_p}+N_c\frac {dG^{(0)}(p)}{dE_p}\right )\right.\nonumber\\
&\times &\left. \frac {1}{p\cdot k^{(0)}+ip^00^+}\times
\left [ \frac {-6g^2}{\omega _p}\left (\frac {{\bf p}\cdot {\bf k}^{(0)}}{K^{(0)}}\right )^2\left (
\frac {{\bf p}\cdot {\bf k}^{(0)}}{K^{(0)}_1}
\right )^2\frac {1}{p\cdot (k^{(0)}-k^{(0)}_1)+ip^00^+}\right.\right.\nonumber\\
&\times & \left.\left.\left (\frac {\omega _1}{p\cdot k_1+ip^00^+}-\frac {\omega}{p\cdot k^{(0)}+ip^00^+}\right )
+\frac {12g^4}{\omega _p}\left (\frac {{\bf p}\cdot {\bf k}^{(0)}}{K^{(0)}}\right )
\left (\frac {{\bf p}\cdot {\bf k}_1^{(0)}}{K_1^{(0)}}\right )
\left (\frac {{\bf p}\cdot ({\bf k}^{(0)}-{\bf k}_1^{(0)})}
{|{\bf k}^{(0)}-{\bf k}_1^{(0)}|} \right )\right.\right.\nonumber\\
&\times &\left.\left.
\left (\frac {\omega _1}{p\cdot k_1+ip^00^+}-\frac {\omega }{p\cdot k^{(0)}+ip^00^+} \right )
\times \int \frac {d^3p'}{(2\pi )^3E_p'}\left (N_f\frac {df^{(0)}(p')}{dE_p'}
+N_c\frac {dG^{(0)}(p')}{dE_p'}\right )\right.\right.\nonumber\\
&\times &\left.\left.
\left (
\frac {{\bf p}'\cdot ({\bf k}^{(0)}-{\bf k}_1^{(0)})}{|{\bf k}^{(0)}-{\bf k}_1^{(0)}|}
\right )
\left (
\frac {{\bf p}'\cdot {\bf k}^{(0)}}{K^{(0)}}
\right )
\left (
\frac {{\bf p}'\cdot {\bf k}_1^{(0)}}{K_1^{(0)}}
\right )
\frac {1}{(\omega -\omega _1)^2}\frac {1}{\epsilon (\omega -\omega _1,{\bf k}^{(0)}-{\bf k}_1^{(0)})}\right.\right.\nonumber\\
&\times&\left.\left.
\frac {1}{p'\cdot (k^{(0)}-k_1^{(0)})+ip^00^+}
\left (
\frac {\omega _1}{p'\cdot k_1^{(0)}+ip^00^+}-\frac {\omega}{p'\cdot k^{(0)}+ip'^00^+}
\right )
\right ]
\right \}\nonumber\end{aligned}$$
To obtain the numerical value of the nonlinear spatial damping rate, we perform the integrals in cylindrical coordinates and in the local rest frame of the plasma.
First, the left hand side of Eq.(22) can be simplified by averaging over all directions of ${\bf k}^{(2)}$ and choosing the direction of ${\bf k}^{(0)}$ as the polar axis, i.e., we have $$\begin{aligned}
-Im\int \frac {d^3p}{E_p(2\pi )^3}
\frac {({\bf p}\cdot {\bf k}^{(0)})^2}{(K^{(0)})^2}
\frac {{\bf p}\cdot {\bf k}^{(2)}}{(p\cdot k^{(0)}+ip^00^+)^2}
\left (N_f\frac
{d}{dE_p}f^{(0)}(p)+N_c\frac {dG^{(0)}(p)}{dE_p}\right )\nonumber\\
=-Im \langle K^{2} \rangle \frac {1}{3\omega_p^2}
\int \frac {d^3p}{E_p(2\pi )^3}\cos^2 \phi
\left ( N_f\frac {d}{dE_p}f^{(0)}(p)+N_c\frac {dG^{(0)}(p)}{dE_p}\right )
\end{aligned}$$ where $\phi $ is the angle between ${\bf p}$ and ${\bf k}^{(0)}$. $\langle ~~ \rangle $ means the average over all directions of ${\bf k}^{(2)}$$^{\cite{s12,s13}}$.
Now we can see that $Im\langle K^{(2)}\rangle$ can be extracted from the left hand side of Eq.(22), and its numerical value is determined by $A_1$ and $A_2$. Before performing the integrals in $A_1$ and $A_2$, we analyze the mechanism of the nonlinear spatial damping. The relation $p^{0}/(p\cdot k'+ip^{0}0^{+})=P\left [1/(\omega '-{\bf v}\cdot {\bf k}')\right ]-i\pi \delta (\omega '-{\bf v}\cdot {\bf k}')$ is very useful in this discussion, where ${\bf v}={\bf p}/p^0$ is the velocity of the particle and $P$ stands for the principal value. We can see that $Im\langle K^{(2)}\rangle$ is nonvanishing only when at least one of the imaginary parts of $1/(p\cdot k^{(0)}+ip^00^+)$, $1/(p\cdot k_1^{(0)}+ip^00^+)$ and $1/(p\cdot (k^{(0)}-k_1^{(0)})+ip^00^+)$ is nonzero. As shown in the linear approximation, the color eigenwaves are timelike, i.e., their phase velocity cannot approach the velocity of a particle, so the imaginary parts of $1/(p\cdot k^{(0)}+ip^00^+)$ and $1/(p\cdot k_1^{(0)}+ip^00^+)$ vanish. The term $1/(p\cdot (k^{(0)}-k_1^{(0)})+ip^00^+)$ appearing in $A_1$ and $A_2$ is therefore crucial: it describes how the eigenwaves with wave vectors ${\bf k}^{(0)}$ and ${\bf k}^{(0)}_1$ may produce secondary waves with wave vectors ${\bf k}^{(0)}-{\bf k}^{(0)}_1$ through the nonlinear interactions. Even if the eigenwaves are timelike, these secondary waves may be spacelike, i.e., their phase velocity $(\omega-\omega _1)/|{\bf k}^{(0)}-{\bf k}^{(0)}_1|$ may be smaller than the velocity of light and can approach the velocity of a particle. The term $\delta \left ((\omega -\omega _1)-{\bf v}\cdot ({\bf k}^{(0)}-{\bf k}^{(0)}_1)\right )$ may then be nonzero, so the secondary waves may exchange energy with the plasma particles and be damped by them. It should be emphasized that, although this mechanism is analogous to the mechanism of nonlinear Landau damping in electromagnetic plasmas, the nonlinear coupling of the waves here takes place through the commutators $\left [A^{(1)}(k_1),A^{(1)}(k_2)\right ]$ etc. in Eq.(17), i.e., it reflects the nonlinear and non-Abelian character of the QGP.
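This kinematic point can be verified directly from the long wavelength dispersion relation Eq.(14). In the sketch below (Python, with $\omega_p=1$ and $\hbar=c=1$; the collinear wave numbers $K=0.5$ and $K_1=0.4$ are illustrative choices) both eigenwaves are timelike, yet the secondary wave at the difference wave vector is spacelike:

```python
import numpy as np

# Long wavelength branch, Eq. (14), with omega_p = 1 (units hbar = c = 1).
def omega(K):
    return np.sqrt(1.0 + 0.6 * K**2)

K, K1 = 0.5, 0.4                 # collinear eigenwave numbers (illustrative)
w, w1 = omega(K), omega(K1)

v = w / K                        # phase velocities of the eigenwaves
v1 = w1 / K1
v_sec = (w - w1) / abs(K - K1)   # phase velocity of the secondary wave

print(f"eigenwaves:     v = {v:.3f}, v1 = {v1:.3f}  (both > 1: timelike)")
print(f"secondary wave: v = {v_sec:.3f}  (< 1: spacelike, can be damped)")
```

Because the branch Eq.(14) is concave in this sense (its slope $d\omega/dK = 0.6K/\omega \ll 1$ at long wavelength), the difference frequency grows much more slowly than the difference wave number, so the beat wave is generically subluminal and can resonate with particles.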
Integrals similar to $A_1$ and $A_2$ have been discussed in Ref.[@s4]; keeping only the leading order in $g$ and taking the long wavelength limit, we find $$\begin{aligned}
A_1+A_2\sim 0.42 T.\end{aligned}$$
After performing the integral on the right hand side of Eq.(23), and using Eq.(24), we obtain the nonlinear spatial damping rate for a pure gluon gas in the long wavelength limit $$\begin{aligned}
\alpha =-Im\langle K^{(2)} \rangle \sim 2.52 g^2T\end{aligned}$$
We now summarize briefly. In this paper we have used the derivative expansion method to study the nonlinear and non-Abelian effects contained in the kinetic equations of the QGP. By solving the equations to third order, we obtained the numerical value of the spatial damping rate $\alpha$ for the secondary waves in the long wavelength limit. This result indicates that the amplitude of the secondary waves in the QGP decreases exponentially, with a damping rate of approximately $2.52 g^2T$ neper per unit length. It gives a concrete picture of the spatial propagation of color waves in the QGP.
[ss]{} H.-Th. Elze, Z. Phys. C [**38**]{}, 211 (1988).
U. Heinz, Ann. Phys. (N.Y.) [**168**]{}, 148 (1986).
S. Mrówczyński, Phys. Rev. D [**39**]{}, 1940 (1989).
Zhang Xiaofei and Li Jiarong, Phys. Rev. C [**52**]{}, 964 (1995).
T. H. Stix, *Waves in Plasmas* (American Institute of Physics, New York, 1992).
H.-Th. Elze and U. Heinz, Phys. Rep. [**183**]{}, 81 (1989).
A. G. Sitenko, *Fluctuations and Nonlinear Wave Interactions in Plasmas* (Pergamon, Oxford, 1990).
A. Jeffrey and T. Kawahara, *Asymptotic Methods in Nonlinear Wave Theory* (Pitman, London, 1982).
J.-P. Blaizot and E. Iancu, Phys. Rev. Lett. [**70**]{}, 3376 (1993); Nucl. Phys. B [**421**]{}, 565 (1994).
R. Jackiw, Q. Liu and C. Lucchesi, Phys. Rev. D [**49**]{}, 6787 (1994);\
P. F. Kelly, Q. Liu, C. Lucchesi and C. Manuel, Phys. Rev. Lett. [**72**]{}, 3461 (1994).
B. B. Kadomtsev, *Plasma Turbulence* (Academic, New York, 1965).
J. Weiland and H. Wilhelmsson, *Coherent Nonlinear Interaction of Waves in Plasmas* (Pergamon, Oxford, 1976).
H. G. Booker, *Electromagnetic and Hydromagnetic Waves in a Cold Magnetic Plasma* (Phil. Trans. R. Soc. Lond. A [**280**]{}, 1975).
L. D. Landau and E. M. Lifshitz, *Electrodynamics of Continuous Media* (Pergamon, New York, 1960).
E. M. Lifshitz and L. P. Pitaevskii, *Physical Kinetics* (Pergamon, Oxford, 1981).
[^1]: Work supported by the National Natural Science Fund of China.
[^2]: Permanent address: University of Hydraulic & Electric Engineering, Yichang
The normal state of superconducting cuprates in many respects contradicts the phenomenology of the normal Fermi liquid (FL). The anomalous frequency and temperature dependence of several response functions is generally attributed to electronic correlations, yet a proper description has been missing so far. Angle resolved photoemission (ARPES) experiments [@shen; @ding; @olson] probe the one-particle spectral function $A({\bf k},\omega)$. At intermediate doping they reveal, for a wide class of cuprates, a well defined large Fermi surface (FS) consistent with the Luttinger theorem and a similar quasiparticle (QP) dispersion [@ding]. This seems to imply the validity of the concept of a usual metal with an electron-like FS. Such a simple FL picture is in apparent contradiction with magnetic and transport properties; e.g., the electrical conductivity scales with the hole concentration, closer to the picture of holes moving in an antiferromagnetic (AFM) background. Moreover, in ARPES the FL interpretation is spoiled by the overdamped character of the QP peaks [@olson; @ding]. Although a large background makes fits of particular lineshapes non-unique [@olson; @liu], the QP inverse lifetime is found to be of the order of the QP energy, i.e. $\tau^{-1}\propto \omega$ for $\omega>T$, leading to the concept of the marginal Fermi liquid (MFL) [@varma] with an anomalous single-particle and transport relaxation, in contrast to $\tau^{-1}\propto\omega^2$ in the normal FL.
It is unclear whether the above features can be reproduced within generic models of strongly correlated systems, such as the Hubbard and the $t-J$ model, in particular in the most challenging regime of intermediate doping. The spectral properties of these 2D models have so far been studied mainly via numerical techniques [@dagorev], e.g. exact diagonalization (ED) [@stephan] and quantum Monte Carlo (QMC) [@bulut]. These studies, as well as some analytical approaches [@wang], established a reasonable consistency of the model QP dispersion with the experimental one, as well as the possibility of a large FS, but have not been able to investigate more closely the character of the QP, which is at the core of the anomalous low-energy properties.
The aim of the present work is to employ the finite-temperature Lanczos method [@jplanc] to calculate $A({\bf k},\omega)$ within the $t-J$ model. This method has already been applied to other dynamic [@jpdyna] and static [@jpterm] quantities, yielding features consistent with the MFL concept and with experiments on cuprates. Although the calculations are still performed on small systems, by using finite (but small) $T>0$ sufficiently smooth spectra are obtained not only to determine the QP dispersion, but for the first time also the spectral lineshapes and the corresponding self-energies.
We study the $t-J$ model [@rice] $$H=-t\sum_{\langle ij\rangle s}(\tilde{c}^\dagger_{js}\tilde{c}_{is}+
\text{H.c.})+J\sum_{\langle ij\rangle} ({\bf S}_i\cdot {\bf S}_j -
{1\over 4} n_i n_j) \label{model}$$ on the planar square lattice and set $J/t=0.3$ to address the regime of cuprates. The operators $\tilde{c}_{js},\tilde{c}^\dagger_{js}$ project out the states with doubly occupied sites. The spectral properties of the model Eq. (\[model\]) are investigated by calculating the retarded Green’s function ($\mu$ is chemical potential) $$G({\bf k},\omega)=-i\int_0^\infty dt\;e^{i(\omega+\mu)t}
\langle\{\tilde{c}_{{\bf k}s}(t),\tilde{c}^\dagger_{{\bf k}s}(0)\}
\rangle.
\label{green}$$ The average is grand-canonical, which in actual calculations at low $T$ in a system with $N$ sites and fixed hole concentration $c_h=N_h/N$ is replaced by a canonical one in the subspace of states with $N_h$ holes. At low $T$ the two anticommutator terms correspond, respectively, to electron addition – the inverse photoemission spectra (IPES) – and to electron removal (hole addition) – the photoemission spectra (PES).
The calculation of $G({\bf k},\omega)$ at $T=0$ with the ED technique is well established [@stephan; @dagorev], but a small number of sharp peaks in the spectra makes it difficult to extract information on lineshapes and self energies. The QMC methods resort to the use of maximum entropy analysis [@bulut], which also leads to quite restricted $\omega$-resolution. The $T>0$ Lanczos method [@jplanc] eliminates these problems for dynamic quantities, i.e. yields smoother spectra and allows for study of the $T$-dependence. The requirement is however that $T>T_{\rm fs}$, where $T_{\rm fs}$ is the characteristic temperature at which in a given small system the finite-size effects set in (for discussion of the method we refer to previous works [@jplanc; @jpdyna]). We have calculated the Green’s function Eq. (\[green\]) on systems with N=16 and 18 sites using $\sim 120$ Lanczos steps and sampling over $\sim
1000$ random states. The finite-size effects are small at $T\agt
T_{\rm fs}(N,N_h)$, where, e.g., $T_{\rm fs}\sim 0.1t$ for $N_h/N=3/16$.
From the Green’s function we obtain the spectral function $A({\bf
k},\omega)=-(1/\pi){\rm Im}G({\bf k},\omega)$ and the one-particle density of states (DOS) ${\cal N}(\varepsilon)=(2/N)\sum_{\bf k}
A({\bf k},\varepsilon-\mu)$. The latter is used to define the zero of energy and thus the chemical potential in Eq. (\[green\]) via $\int_{-\infty}^\infty {\cal N}(\omega+\mu)
(e^{\beta\omega}+1)^{-1}d\omega=1-c_h$. We find a very good agreement between $\mu$ calculated this way and from the thermodynamic function $c_h(T,\mu)=N_h/N$ [@jpterm].
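The determination of $\mu$ from the filling condition can be sketched numerically. The fragment below is a minimal illustration, not the Lanczos calculation itself: a toy Gaussian DOS stands in for ${\cal N}(\varepsilon)$, and after the substitution $\varepsilon=\omega+\mu$ the condition becomes $\int {\cal N}(\varepsilon)\,f(\varepsilon-\mu)\,d\varepsilon = 1-c_h$, solved for $\mu$ by bisection (the filling is monotonic in $\mu$).

```python
import numpy as np

def filling(mu, eps, dos, T):
    """Electron filling: integral of N(eps) * Fermi(eps - mu) over eps."""
    # Overflow-safe Fermi function: 1/(e^x + 1) = (1 - tanh(x/2))/2
    f = 0.5 * (1.0 - np.tanh((eps - mu) / (2.0 * T)))
    return np.sum(dos * f) * (eps[1] - eps[0])

def find_mu(eps, dos, T, n_target, lo=-10.0, hi=10.0):
    """Bisection: filling(mu) increases monotonically with mu."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if filling(mid, eps, dos, T) < n_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy DOS: unit-weight Gaussian (a stand-in, not the computed N(eps))
eps = np.linspace(-8.0, 8.0, 4001)
dos = np.exp(-eps**2 / 2.0) / np.sqrt(2.0 * np.pi)

c_h = 3.0 / 16.0
mu = find_mu(eps, dos, T=0.1, n_target=1.0 - c_h)
```

The same quadrature is used inside the bisection and in any consistency check, so the recovered $\mu$ reproduces the target filling to machine precision on the grid.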
Of particular interest is the self-energy $$\Sigma({\bf k},\omega)=\omega-G({\bf k},\omega)^{-1}.$$ The relation contains no free term, in contrast to the usual definition, since the $t-J$ model does not allow for a free-fermion propagation even at $J=0$. It is also important to note that due to projected fermion operators in the model the spectral function $A({\bf
k},\omega)$ is not normalized to unity [@stephan], but rather to $\langle\{\tilde{c}_{{\bf k}s},\tilde{c}^\dagger_{{\bf k}s}\}\rangle =
(1+c_h)/2$. This has several consequences, e.g. ${\rm Re} \Sigma({\bf
k},\omega\to \infty)$ does not vanish, but varies linearly with $\omega$.
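The large-$\omega$ behavior of ${\rm Re}\,\Sigma$ follows directly from the reduced sum rule: if $G(\omega)\to\alpha/\omega$ with $\alpha=(1+c_h)/2<1$, then $\Sigma=\omega-G^{-1}\to(1-1/\alpha)\,\omega$, a nonzero linear term. A minimal numerical check using a single-pole toy Green's function (illustrative only, not the actual $t-J$ data):

```python
import numpy as np

c_h = 3.0 / 16.0
alpha = (1.0 + c_h) / 2.0          # total spectral weight of A(k, omega)

e0, eta = 0.5, 0.1                 # toy pole position and broadening
w = np.linspace(-50.0, 50.0, 10001)
G = alpha / (w - e0 + 1j * eta)    # Green's function with weight alpha < 1
Sigma = w - 1.0 / G                # self-energy defined without a free term

# Slope of Re Sigma at large |omega|: tends to 1 - 1/alpha, not zero
slope = (Sigma.real[-1] - Sigma.real[-2]) / (w[-1] - w[-2])
```

For $c_h=3/16$ this slope is $1-1/\alpha\approx-0.68$, i.e. ${\rm Re}\,\Sigma$ indeed varies linearly with $\omega$ at large frequencies.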
In Fig. \[fig1\] we first present $ A({\bf k},\omega)$ for systems with $c_h \sim 0.12$ (combining results for $N_h=2$ on systems with $N=16,18$) and $c_h=3/16$. The spectra are broadened with Lorentzians of variable width $\delta=\delta_0+(\delta_\infty-\delta_0)\tanh^2(\omega/\Delta)$, with $\delta_\infty =0.2t$, $\delta_0=0.04t$, and $\Delta=1.0t$. In this way sharper (well resolved) low-energy features remain unaffected, while the fluctuations at higher $\omega$, mainly due to restricted sampling, are smoothed out. In any case, $\delta$ is always smaller than the energy scale of the main spectral features.
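The variable-width broadening can be sketched as follows. The widths $\delta_0$, $\delta_\infty$ and the crossover scale $\Delta$ follow the values quoted above, while the stick spectrum (pole positions and weights) is an arbitrary illustration:

```python
import numpy as np

t = 1.0
d0, dinf, Delta = 0.04 * t, 0.2 * t, 1.0 * t

def width(w):
    """Variable Lorentzian width: d0 near w = 0, saturating to dinf."""
    return d0 + (dinf - d0) * np.tanh(w / Delta) ** 2

def broaden(w_grid, poles, weights):
    """Broaden a stick spectrum with Lorentzians of pole-dependent width."""
    A = np.zeros_like(w_grid)
    for wp, a in zip(poles, weights):
        d = width(wp)
        A += a * (d / np.pi) / ((w_grid - wp) ** 2 + d ** 2)
    return A

w = np.linspace(-5.0, 5.0, 2001)
A = broaden(w, poles=[-2.0, -0.3, 0.1, 1.5], weights=[0.2, 0.3, 0.3, 0.2])
```

Low-energy poles thus receive the narrow width $\delta_0$ and high-energy poles the saturated width $\delta_\infty$, as intended.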
We observe in Fig. \[fig1\], presented at all available ${\bf k}$, a coexistence of sharper features, associated with coherent QP peaks, and of a pronounced incoherent background, as already established in earlier studies [@stephan]. The coherent peaks in Fig. \[fig1\] disperse through $\omega=0$ as ${\bf k}$ crosses the FS. Within the given resolution in the ${\bf k}$-space the FS appears to be large already for $c_h=2/18$, consistent with the Luttinger theorem. The total QP dispersion $W$ is broadened as $c_h$ is increased, qualitatively consistent with the slave boson picture where $W
\propto c_h t + \chi J$ [@wang].
In Fig. \[fig2\] we show $\Sigma({\bf k},\omega)$ at $c_h=3/16$ and at the lowest $T=0.1t\sim T_{\rm fs}$. We first notice an asymmetry between the PES ($\omega<0$) and IPES ($\omega>0)$ spectra at all ${\bf k}$. ${\rm Im}\Sigma$ is small for $\omega>0$, as compared to $\omega<0$. For ${\bf k}$ outside the FS this implies a weak QP damping, consistent with the sharp IPES peaks seen in $A({\bf k},\omega)$, Fig. \[fig1\], which contain the major part of the spectral weight. ${\rm Re} \Sigma$ shows an analogous asymmetry, in the region $\omega>0$ resembling a moderately renormalized QP. Due to the projections in Eq. (\[model\]), the slope of ${\rm Re}\Sigma$ is not zero even at $|\omega|\gg t,J$.
The behavior on the PES ($\omega<0$) side is very different. For all ${\bf k}$, ${\rm Im}\Sigma$ is very large (several $t$ away from $\omega \sim 0$), leading to overdamped QP structures. We should here distinguish two cases. For $\bf k$ well outside the FS, ${\rm Im}\Sigma>t$ does not invalidate a well defined QP (at $\omega>0$), but rather induces a weaker reflection (shadow) of the peak at $\omega <0$, as is well seen in Fig. \[fig1\] for ${\bf k} =(\pi,\pi)$. On the other hand, the $\omega$ variation for ${\bf k}$ inside or near the FS is more regular, and can be directly related to the QP damping. A particularly remarkable feature, found in Fig. \[fig2\], is the linear frequency dependence of ${\rm Im}\Sigma$ at $\omega<0$ for ${\bf k}= (\pi/2,0),(\pi/2,\pi/2)$. Meanwhile ${\bf k}=(0,0)$, being further away from the FS, seems to follow a different (more FL-type) behavior. This general behavior remains similar also at the lower doping $c_h=2/18$.
To address the latter point in more detail, we show in Fig. \[fig3\] the $T$-variation of ${\rm Im}\Sigma$ for both dopings at selected ${\bf k}$ below the FS. For $c_h=3/16$ the linearity of ${\rm
Im}\Sigma(\omega)$ is seen in a broad range $-2t\alt \omega
\alt 0$ at the lowest $T$ shown. Moreover, for this higher (‘optimum’) doping the $T$-dependence is close to a linear one, assuming a small residual (finite-size) damping $\eta_0$ at $\omega=0$. The data can be well described by ${\rm Im}\Sigma=\eta_0+\gamma(|\omega|+\xi T)$, with $\gamma\sim 1.4$ and $\xi\sim 3.5$, bearing a similarity to the MFL ansatz [@varma], as well as to the conductivity relaxation $\tau_c^{-1}$ found in the $t-J$ model [@jpdyna]. In contrast, the $T$-dependence for $c_h=2/18$ seems somewhat different, and ${\rm
Im}\Sigma\propto
\omega$ only in the interval $-t\alt\omega\alt T$. This would indicate consistency with the alternative MFL form [@varma]; however, we should be aware that in this ‘underdoped’ regime finite-size effects are larger at fixed $T$.
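Since the ansatz ${\rm Im}\Sigma=\eta_0+\gamma(|\omega|+\xi T)$ is linear in the parameters $(\eta_0,\gamma,\gamma\xi)$, such a fit reduces to ordinary linear least squares. A sketch on synthetic data (the "true" values below are chosen to match the quoted fit; the noise level and grids are arbitrary assumptions):

```python
import numpy as np

def mfl(w, T, eta0, gamma, xi):
    """MFL-like damping ansatz for Im Sigma."""
    return eta0 + gamma * (np.abs(w) + xi * T)

rng = np.random.default_rng(0)
w = np.linspace(-2.0, 0.0, 41)
Ts = [0.1, 0.2, 0.3]
eta0_true, gamma_true, xi_true = 0.1, 1.4, 3.5

# Stack (w, T) samples and add small noise
W = np.concatenate([w for _ in Ts])
TT = np.concatenate([np.full_like(w, T) for T in Ts])
y = mfl(W, TT, eta0_true, gamma_true, xi_true) \
    + 0.01 * rng.standard_normal(W.size)

# Linear model: y = eta0 + gamma*|w| + (gamma*xi)*T
X = np.column_stack([np.ones_like(W), np.abs(W), TT])
eta0, gamma, gxi = np.linalg.lstsq(X, y, rcond=None)[0]
xi = gxi / gamma
```

Reparameterizing with $\gamma\xi$ as an independent coefficient avoids nonlinear fitting entirely; $\xi$ is recovered afterwards as the ratio of the two slopes.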
Here we should comment on the manifestation of the FS in small correlated systems. At $T,\omega\sim 0$ we are dealing in the evaluation of Eq.(\[green\]) with the transition between ground states of systems with $N_h$ and $N_h'=N_h \pm 1$ holes, respectively. Since these states have definite momenta ${\bf k}_0$, they induce strong QP peaks for particular ${\bf k}={\bf k}'_0-{\bf
k}_0$ (defining in this way for a small system the FS, apparently satisfying the Luttinger theorem), with ${\rm Im}\Sigma({\bf k},\omega
\sim 0)\sim 0$. However, the calculated $T$-variation is for a given system meaningful only at $T>T_{\rm fs}$.
From $\Sigma({\bf k},\omega)$ we can calculate QP parameters: the dispersion $E_{\bf k}$, the weight $Z_{\bf k}$ and the damping $\Gamma_{\bf k}$, $$\begin{aligned}
E_{\bf k}&=&{\rm Re}\Sigma({\bf k}, E_{\bf k}),
\label{dispers}\\
Z_{\bf k}&=&[1-\partial{\rm Re}\Sigma({\bf k},\omega
)/\partial\omega]_{\omega=E_{\bf k}}^{-1},\\
\Gamma_{\bf k}&=&Z_{\bf k}|{\rm Im}\Sigma({\bf k},E_{\bf k})|,\end{aligned}$$ which are listed in Table \[table1\]. We note that the parameters are of limited meaning for $\bf k$ inside the FS due to large $\Gamma$. In particular, $E_{\bf k}$ (as well as $Z_{\bf k}$ and $\Gamma_{\bf k}$) for ${\bf k}=(0,0)$ does not correspond to the weak QP peak at $\omega
\sim -t$, being overwhelmed by the incoherent background. Otherwise, the enhancement of the dispersion with $c_h$ is seen, accompanied by a decrease of $\Gamma$ for $|{\bf k}|>k_F$. To establish the relation with the FL theory one has to evaluate QP parameters at the FS, ${\bf
k} ={\bf k}_F$. Of particular importance is the renormalization factor $\tilde Z= Z_{{\bf k}_F}$. $\tilde Z$ is still decreasing as $T$ is lowered. Nevertheless we find only a weak variation (ca. 20%) within the interval $0.1<T/t<0.3$, not inconsistent with the MFL form, which leads to $\tilde Z^{-1}\sim \ln(\omega_c/T)$. Regarding the size of $\tilde
Z$ (at low but finite $T>0$) we note, that the value of the momentum distribution function $\bar n_{{\bf k}s}$ is very close to the maximum for the $t-J$ model, $\bar n_{{\bf k}s}\sim (1+c_h)/2$, for all $|{\bf
k}|<k_F$ [@stephan]. Taking the FS volume according to Luttinger theorem and assuming that $\bar n_{{\bf k}s}$ falls monotonously with $|{\bf k}|$, this implies the discontinuity $\tilde Z=
\delta \bar n_{{\bf k}s} <2c_h/(1+c_h)$. We indeed find a consistent result $\tilde Z= 0.28$ for $c_h=3/16$, while for $c_h=2/18$ the value is still larger, possibly due to too high $T$.
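The QP parameters $E_{\bf k}$, $Z_{\bf k}$, $\Gamma_{\bf k}$ defined above can be extracted numerically once $\Sigma({\bf k},\omega)$ is available as a function: solve the implicit equation $E={\rm Re}\Sigma(E)$ by bisection, then evaluate the derivative for $Z$. The toy single-mode self-energy below is purely illustrative, not the computed $t-J$ self-energy:

```python
import numpy as np

def Sigma(w, g=1.0, wr=2.0, eta=0.3):
    """Toy self-energy: coupling g to a mode at wr, broadening eta."""
    return g**2 / (w - wr + 1j * eta)

def qp_parameters(Sigma, lo=-5.0, hi=1.5):
    """Solve E = Re Sigma(E) by bisection, then Z and Gamma."""
    F = lambda E: E - Sigma(E).real
    assert F(lo) * F(hi) < 0          # bracket must contain the root
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if F(lo) * F(mid) <= 0:
            hi = mid
        else:
            lo = mid
    E = 0.5 * (lo + hi)
    h = 1e-6                           # numerical derivative of Re Sigma
    dRe = (Sigma(E + h).real - Sigma(E - h).real) / (2.0 * h)
    Z = 1.0 / (1.0 - dRe)
    Gamma = Z * abs(Sigma(E).imag)
    return E, Z, Gamma

E, Z, Gamma = qp_parameters(Sigma)
```

With tabulated Lanczos self-energies one would interpolate ${\rm Re}\Sigma$ on the frequency grid instead of using a closed-form function; the root-finding and derivative steps are unchanged.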
An analogous argument can be used to explain the electron-hole asymmetry of $A({\bf k},\omega)$. Holes added to the system at $|{\bf
k}|<k_F,\omega<0$ move in an extremely correlated system, strongly coupled to the spin dynamics [@prelovsek], also following the anomalous low-$\omega$ behavior [@jpdyna]. On the other hand, states for $|{\bf k}|>k_F$ are not fully populated, allowing for a moderately damped motion of added electrons for $\omega>0$.
Another feature is seen predominantly at smaller doping $c_h \sim
0.12$ for $|{\bf k}|>k_F$: along with the principal peak at $\omega>0$ a weak bump in the $\omega<0$ part of the spectrum appears when the FS is crossed along $\overline{\Gamma M}$. In ${\rm Re}
\Sigma$ for ${\bf k}=(\pi,\pi)$ it emerges even as a strong oscillation, leading to a double solution in Eq. (\[dispers\]) [@kampf]. In ARPES this should be seen as the reappearance of the ‘shadow’ QP band for ${\bf k}$ above the FS, in accordance with experiments [@shen] and some previous studies [@kampf; @stephan]. The effect is less pronounced at larger doping $c_h=3/16$, probably due to the reduction of the AFM correlation length.
Finally we show in Fig. \[fig4\] the variation of the DOS ${\cal
N}(\varepsilon)$ with doping. For a hole injected into the weakly doped system ($c_h\sim 0.06$), a coherent QP peak (of width $\sim 2J$) is seen at $\varepsilon \alt \mu$. Besides, a broad background (due to the well understood incoherent hole motion) dominates lower $\varepsilon$. At such low doping the electron part of the DOS is weaker, with the total intensity $2 c_h$ as compared to $1-c_h$ of the hole part. With increasing $c_h$ the hole background does not reduce in intensity, while the coherent peak near the Fermi energy widens and its spectral weight reduces, reflecting the broadening of the QP dispersion. At the same time, the electron part of the DOS increases, both in weight and in width. Note that the oscillations for $\varepsilon>\mu$ appear in this regime due to the underdamped QP and a restricted number of finite-size $\bf k$ points.
Here we mention the relation with the entropy $s$ [@jpterm], assuming the low-$T$ form that follows from the FL theory [@agd], i.e. $s = \pi^2 T {\cal N}(\mu)/3\tilde Z$. With ${\cal N}(\mu)$ from Fig. \[fig4\], weakly doping dependent at intermediate $c_h$ (and quite close to the free-fermion value), and $\tilde Z \sim 0.28$ for $c_h = 3/16$, we get $s \sim 0.29~k_B/{\rm site}$ at $T=0.1t$, consistent with static calculations [@jpterm]. Nevertheless, one should keep in mind that such an $s$ represents a large increase over the undoped (AFM) system, given that only very few mobile holes are introduced into the system.
In the end we comment on the relevance of our results to the understanding of the ARPES spectra in cuprates [@shen]. For $\omega<0$ we notice the importance of the incoherent background, consistent with the observation that in fitting the experiments to either FL or MFL form an anomalously large background must be assumed [@liu]. For $|{\bf k}|<k_F$ we find the linewidth typically $\Gamma\sim t$ (see Table \[table1\]), well compatible ($t
\sim 0.4~{\rm eV}$ in cuprates [@hybertsen; @rice]) with experiments at ${\bf k}$ away from the FS and at intermediate doping [@shen; @ding]. Also, the MFL form has been claimed to describe the experiments better [@olson], although this point has not yet been clarified [@shen]. We note also that our QP dispersion and the shape of the FS are not entirely of the form found experimentally [@shen]. This could possibly be remedied by including the n.n.n. ($t'$) hopping term [@hybertsen]. Still, we do not expect such corrections to modify the conclusions concerning the spectral shapes and the QP character.
One of the authors (P.P.) wishes to thank P. Horsch, G. Khalliulin, R. Zeyher, and T.M. Rice for useful suggestions and fruitful discussions, and acknowledges the support of the MPI für Festkörperphysik, Stuttgart, where a part of this work has been performed.
For reviews, see Z.-X. Shen and D.S. Dessau, Phys. Rep. [**253**]{}, 1 (1995); Z.-X. Shen [*et al.*]{}, Science [**267**]{}, 343 (1995).

H. Ding [*et al.*]{}, Phys. Rev. Lett. [**76**]{}, 1533 (1996); D.S. Marshall [*et al.*]{}, [*ibid.*]{} [**76**]{}, 4841 (1996).

C.G. Olson [*et al.*]{}, Phys. Rev. B [**42**]{}, 381 (1990).

L.Z. Liu, R.O. Anderson, and J.W. Allen, J. Phys. Chem. Solids [**52**]{}, 1473 (1991).

C.M. Varma [*et al.*]{}, Phys. Rev. Lett. [**63**]{}, 1996 (1989); P.B. Littlewood and C.M. Varma, J. Appl. Phys. [**69**]{}, 4979 (1991).

For a review, see E. Dagotto, Rev. Mod. Phys. [**66**]{}, 763 (1994).

W. Stephan and P. Horsch, Phys. Rev. Lett. [**66**]{}, 2258 (1991); A. Moreo, S. Haas, A.W. Sandvik, and E. Dagotto, Phys. Rev. B [**51**]{}, 12045 (1995).

N. Bulut, D.J. Scalapino, and S.R. White, Phys. Rev. B [**50**]{}, 7215 (1994); R. Preuss, W. Hanke, and W. von der Linden, Phys. Rev. Lett. [**75**]{}, 1344 (1995).

G. Baskaran, Z. Zou, and P.W. Anderson, Solid State Commun. [**63**]{}, 973 (1987); Z. Wang, Y. Bang, and G. Kotliar, Phys. Rev. Lett. [**67**]{}, 2733 (1991).

J. Jaklič and P. Prelovšek, Phys. Rev. B [**49**]{}, 5065 (1994).

J. Jaklič and P. Prelovšek, Phys. Rev. Lett. [**74**]{}, 3411 (1995); [*ibid.*]{} [**75**]{}, 1340 (1995); Phys. Rev. B [**52**]{}, 6903 (1995).

J. Jaklič and P. Prelovšek, Phys. Rev. Lett. (to be published), cond-mat/9603081.

For a review, see T.M. Rice, in [*High Temperature Superconductivity*]{}, Proceedings of the 39th Scottish Universities Summer School in Physics, edited by D.P. Turnball and W. Barford (Adam Hilger, London, 1991), p. 317.

P. Prelovšek, to be published.

A.P. Kampf and J.R. Schrieffer, Phys. Rev. B [**41**]{}, 6399 (1990); [*ibid.*]{} [**42**]{}, 7967 (1990).

A.A. Abrikosov, L.P. Gor’kov, and I.E. Dzyaloshinskii, [*Quantum Field Theoretical Methods in Statistical Physics*]{} (Pergamon, Oxford, 1965), p. 169.

M.S. Hybertsen [*et al.*]{}, Phys. Rev. B [**41**]{}, 11068 (1990); T. Tohyama and S. Maekawa, Phys. Rev. B [**49**]{}, 3596 (1994); A. Nazarenko [*et al.*]{}, Phys. Rev. B [**51**]{}, 8676 (1995).
------------------- --------------- ------------- -------------------- ----------------- --------------- ------------- --------------------
${\bf k}$ $E_{\bf k}/t$ $Z_{\bf k}$ $\Gamma_{\bf k}/t$ ${\bf k}$ $E_{\bf k}/t$ $Z_{\bf k}$ $\Gamma_{\bf k}/t$
$(0,0)$ -3.8 0.80 1.9 $(0,0)$ -4.2 0.73 1.4
$(\pi/3,\pi/3)$ -0.7 0.26 0.65 $(\pi/2,0)$ -1.1 0.68 1.7
$(2\pi/3,0)$ -0.4 0.35 0.51 $(\pi/2,\pi/2)$ 0.0 0.28 0.32
$(2\pi/3,2\pi/3)$ 0.5 0.35 0.49 $(\pi,0)$ 0.0 0.28 0.32
$(\pi,\pi/3)$ 0.1 0.26 0.40 $(\pi,\pi/2)$ 0.8 0.44 0.31
$(\pi,\pi)$ 1.1 0.37 0.54 $(\pi,\pi)$ 1.7 0.46 0.35
------------------- --------------- ------------- -------------------- ----------------- --------------- ------------- --------------------
: QP parameters for two hole concentrations: $c_h=2/18$ ($N=18$, left columns) and $c_h=3/16$ ($N=16$, right columns).[]{data-label="table1"}
---
abstract: 'An approach to visualize the accessible reciprocal space is presented that accounts for the limitations of the goniometer angles and for the resolution element in reciprocal space. The shapes of the accessible reciprocal space region for coplanar and non-coplanar geometries are given, employing the additional degree of freedom provided by the detector arm. The equations obtained make it possible to find these regions and to calculate the experimental geometry for diffraction from any accessible reflection, which constitutes a further extension of previously reported methods. The introduced algorithm has been verified by experimental measurements of reciprocal space maps in coplanar and non-coplanar geometries, and examples of the resolution element at different points of reciprocal space are calculated to illustrate the proposed method.'
address:
- 'Rigaku Europe SE, Am Hardwald 11, 76275 Ettlingen, Germany'
- 'Atomicus OOO, Mogilevskaya Str. 39a-530, 220007 Minsk, Belarus'
- 'Atomicus OOO, Mogilevskaya Str. 39a-530, 220007 Minsk, Belarus'
- 'Atomicus GmbH, Schoemperlen Str. 12a, 76185 Karlsruhe, Germany'
author:
- Tatjana Ulyanenkova
- Aliaksei Zhylik
- Andrei Benediktovitch
- Alex Ulyanenkov
title: 'Computation and visualization of accessible reciprocal space and resolution element in high-resolution X-ray diffraction mapping'
---
Introduction {#intro}
============
High-resolution X-ray diffraction (HRXRD) experiments for thin film analysis involve rather complicated techniques for the geometrical configuration of the experiment and for the calculation of the expected resolution parameters. The planning of such measurements is one of the important stages of the X-ray experiment and data analysis. Proper planning can reduce the measurement preparation time and increase the chances of a successful result. A visualization tool for the accessible reciprocal space area, providing a schematic drawing of the sample’s Bragg reflections, simplifies the planning of an X-ray diffraction experiment. Usually the accessible reciprocal space regions are drawn for the case of coplanar diffraction, when the incident and outgoing wave vectors and the surface normal lie in the same plane [@bowen2005]. There are publications in which the accessible reciprocal space is demonstrated in 3D for both coplanar and non-coplanar diffraction [@yefanov2008], and appropriate visualization software has been developed by [@yefanov2008a]. This contribution makes a further extension of the previous works and presents an approach to visualize the accessible reciprocal space in the case when the vertical goniometer with a detector arm possesses two degrees of freedom, which enables non-coplanar diffraction without sample inclination. The scattering vector is parameterized by angular variables associated with the physical axes of the goniometer, which takes into account the angular limitations of the goniometer. Below we propose analytical expressions, in the considered parametrization, for the angular coordinates of the position of the reciprocal lattice vector, which make it possible to visualize the resolution element in reciprocal space.
The additional in-plane degree of freedom provided by the detector arm is very beneficial for the analysis of thin films [@2002Ofuji], [@2007yoshida2007x], for texture analysis [@2011nagao2011x], for the investigation of residual stress gradients [@2014BenediktovitchIPStress], and for other applications. In the case of high-resolution X-ray diffraction applications, the advantage of non-coplanar diffraction with a two-degrees-of-freedom detector arm is the access to those reflections that would involve grazing angles in the coplanar mode. Examples of such a geometry are shown below for a Ge/Si(001) sample for the $(113)$ and $(\overline{1}\overline{1}3)$ reflections, which require incidence (exit) angles in the range of 0 to 5 degrees in the coplanar case, and for the $(1 \overline{1}3)$ reflection in the non-coplanar case, which does not involve grazing angles.
Basic principles {#sec:1}
================
X-ray diffraction methods are based on the elastic scattering of X-rays from the atoms of the investigated substance. The incident wave is characterized by the wave vector $\vec{k}_{in}$, and the scattered wave propagating in the detector direction is characterized by the wave vector $\vec{k}_{out}$. Due to the elastic nature of the scattering process, $\left|\vec{k}_{out}\right|=\left| \vec{k}_{in} \right| = k_0 = \frac{2 \pi}{\lambda}$, where $\lambda$ is the wavelength of the incident and scattered waves, and the diffraction vector $\vec{Q} = \vec{k}_{out} - \vec{k}_{in}$ contains the information on the sample structure. Three coordinate systems are used for the analysis and interpretation of X-ray diffraction results: the laboratory coordinate system $\vec{L}_x$, $\vec{L}_y$, $\vec{L}_z$, the sample coordinate system $\vec{S}_x$, $\vec{S}_y$, $\vec{S}_z$ and the crystal coordinate system [@ulyan2014], [@zhylik2013covariant]. The crystal coordinate system is linked to the cell symmetry of the studied crystal and is convenient for the description of planes and directions in Miller index form. The laboratory coordinate system is associated with the goniometer position in space and, finally, the sample coordinate system is related to the sample position in space. Usually the laboratory and the sample coordinate systems are chosen as orthonormal ones; therefore the relation between these systems is described by a rotation matrix $\mathbf{M}^{LS}$. In the following, we use the laboratory coordinate system as the base system for calculations. Many X-ray diffraction studies are based on scanning the reciprocal space with the diffraction vector $\vec{Q}$ in order to find the proper position of a crystal reflection $\vec{H}$ or to study the vicinity of a reciprocal lattice point.
During the measurement process, the positions of the source, the detector and the sample vary in a coordinated way, and the goniometer axes related to the positioning of the detector and the source drive the diffraction vector $\vec{Q}$. In the most common case, a pair of angles for each of the incident and exit beams is sufficient for their positioning:
$$\begin{aligned}
\vec{k}_{in} = k_0 \mathbf{\Theta_{i2}} \mathbf{\Theta_{i1}} \vec{L}_x, \\
\vec{k}_{out} = k_0 \mathbf{\Theta_{o2}} \mathbf{\Theta_{o1}} \vec{L}_x,\end{aligned}$$
where $\mathbf{\Theta_{i2}}$, $\mathbf{\Theta_{i1}}$, $\mathbf{\Theta_{o2}}$, $\mathbf{\Theta_{o1}} $ are the rotation operators. The goniometer axes related to the sample positioning define the matrix $\mathbf{M}^{LS}$.
Accessible reciprocal space {#sec:2}
===========================
Main formulas for the vertical diffractometer with the additional detector axis {#sec:2a}
-------------------------------------------------------------------------------
The X-ray diffractometer setup (Fig. \[VerticalGonio\](a)) with a standard vertical goniometer and an additional detector arm is capable of measuring X-ray diffraction in coplanar and non-coplanar geometries. As shown in the schematic drawing of the diffraction planes (Figure \[VerticalGonio\](b)), the positions of the source and the detector are described by the following angles:
- $\theta_{in}$ is the angle between the line connecting sample and X-ray source and the horizontal plane of goniometer;
- $\theta_{out}$ is the angle between the plane of the axis of the in-plane arm rotation and the horizontal plane of the goniometer; in the case of no in-plane arm rotation, $\theta_{out}$ is the angle between the line connecting sample and detector and the horizontal plane of the goniometer;
- $\theta_\chi$ is the angle of the in-plane arm rotation.
![The vertical goniometer with a detector arm possesses two degrees of freedom: (a) the axes $L_x$ and $L_z$ of laboratory coordinate system; (b) schematic axes view of the vertical goniometer with a detector arm.[]{data-label="VerticalGonio"}](GonioInPlane_ver.pdf){width="60.00000%"}
The incident beam is defined by a single rotation angle, while for the exit beam two angles, applied in the following order, are assigned:
$$\begin{aligned}
\vec{k}_{in} = k_0 \mathbf{\Theta_{in}} \vec{L}_x, \\
\vec{k}_{out} = k_0 \mathbf{\Theta_{out}} \mathbf{\Theta_\chi} \vec{L}_x,\\
\mathbf{\Theta_{in}} = \mathbf{R}(\theta_{in}, \vec{L}_y),\\
\mathbf{\Theta_{out}} = \mathbf{R}(-\theta_{out}, \vec{L}_y),\\
\mathbf{\Theta_\chi} = \mathbf{R}(-\theta_\chi, \vec{L}_z),\end{aligned}$$
where
$$\begin{aligned}
\mathbf{R}(\theta_{in}, \vec{L}_y) =
\begin{bmatrix}
\cos(\theta_{in}) & 0 & \sin(\theta_{in}) \\
0 & 1& 0 \\
-\sin(\theta_{in}) & 0 & \cos(\theta_{in})
\end{bmatrix},\\
\mathbf{R}(-\theta_{out}, \vec{L}_y) =
\begin{bmatrix}
\cos(\theta_{out}) & 0 & -\sin(\theta_{out}) \\
0 & 1& 0 \\
\sin(\theta_{out}) & 0 & \cos(\theta_{out})
\end{bmatrix},\\
\mathbf{R}(-\theta_\chi, \vec{L}_z) =
\begin{bmatrix}
\cos(\theta_\chi) & \sin(\theta_\chi) & 0 \\
-\sin(\theta_\chi) & \cos(\theta_\chi) & 0\\
0 & 0& 1
\end{bmatrix}.\\ \nonumber\end{aligned}$$
As a result, the explicit form for the following parameters is obtained:
\[formulasSurf3D\] $$\begin{aligned}
\vec{k}_{in} = k_0 \left\{\cos (\theta_{in}), 0, - \sin (\theta_{in}) \right\}, \\
\vec{k}_{out} = k_0 \left\{\cos (\theta_{out}) \cos (\theta_\chi), -\sin ( \theta_\chi ),
\sin (\theta_{out}) \cos ( \theta_\chi ) \right\}, \\
\vec{Q} =k_0 \left\{ \cos (\theta_{out}) \cos (\theta_\chi )-\cos (\theta_{in}), -\sin (\theta_\chi ), \right. \nonumber \\
\left.\sin (\theta_{out}) \cos (\theta_\chi
)+\sin (\theta_{in}) \right\}, \label{formulasQ3D}\\
\left| \vec{Q} \right|^2 = 2 k_0^2 (1- \cos (\theta_\chi ) \cos (\theta_{out}+\theta_{in})).\end{aligned}$$
The diffraction vector in the laboratory coordinate system is a function of three parameters. The definition range of the angular parameters $\theta_{in}$, $\theta_{out}$, $\theta_\chi$ may be restricted to $-\pi \dots \pi$ without loss of generality. In the case of a goniometer without the additional arm for the non-coplanar diffraction geometry ($\theta_\chi \equiv 0$), the equations (\[formulasSurf3D\]) take the following form:
\[formulasSurf2D\] $$\begin{aligned}
\vec{k}_{in} = k_0 \left\{\cos (\theta_{in}), 0, - \sin (\theta_{in}) \right\},\\
\vec{k}_{out} = k_0 \left\{\cos (\theta_{out}) , 0, \sin (\theta_{out}) \right\},\\
\vec{Q} =k_0 \left\{ \cos (\theta_{out}) -\cos (\theta_{in}), 0,
\sin (\theta_{out}) +\sin (\theta_{in}) \right\}, \label{formulasQ2D}\\
\left| \vec{Q} \right|^2 = 2 k_0^2(1 - \cos (\theta_{out}+\theta_{in})).\end{aligned}$$
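The explicit component formulas can be cross-checked against the defining rotation operators. The sketch below (the wavelength value is an arbitrary example) builds $\vec{Q}$ from the matrices $\mathbf{R}(\theta_{in},\vec{L}_y)$, $\mathbf{R}(-\theta_{out},\vec{L}_y)$, $\mathbf{R}(-\theta_\chi,\vec{L}_z)$ and compares it with the closed-form components and with the squared modulus, which carries a $k_0^2$ prefactor on dimensional grounds:

```python
import numpy as np

k0 = 2.0 * np.pi / 1.5406   # example: Cu K-alpha wavelength in angstrom

def Ry(a):
    """Rotation about L_y, matching R(theta, L_y) in the text."""
    return np.array([[np.cos(a), 0.0, np.sin(a)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(a), 0.0, np.cos(a)]])

def Rz(a):
    """Rotation about L_z; Rz(-chi) matches R(-theta_chi, L_z) in the text."""
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a), np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

def Q_vector(t_in, t_out, t_chi):
    """Diffraction vector from the goniometer rotation operators."""
    Lx = np.array([1.0, 0.0, 0.0])
    k_in = k0 * Ry(t_in) @ Lx
    k_out = k0 * Ry(-t_out) @ Rz(-t_chi) @ Lx
    return k_out - k_in

t_in, t_out, t_chi = np.radians([20.0, 35.0, 15.0])
Q = Q_vector(t_in, t_out, t_chi)

# Closed-form components and squared modulus
Q_explicit = k0 * np.array([
    np.cos(t_out) * np.cos(t_chi) - np.cos(t_in),
    -np.sin(t_chi),
    np.sin(t_out) * np.cos(t_chi) + np.sin(t_in)])
Q2 = 2.0 * k0**2 * (1.0 - np.cos(t_chi) * np.cos(t_in + t_out))
```

Both routes agree to machine precision for arbitrary angle triples, which is a convenient sanity check when implementing the geometry in software.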
Inverse problem {#sec:2b}
---------------
The inverse problem consists in finding the positions $\theta_{in}$, $\theta_{out}$, $\theta_\chi$ of the goniometer axes for a given reciprocal space point of the Bragg reflection $\vec{H}$:
$$\frac{\vec{Q}(\theta_{in},\theta_{out}, \theta_\chi )}{k_0} = \vec{H'}.$$
where $\vec{H'} = \left(H'_x, H'_y, H'_z \right) = \left( \frac{H_x}{k_0}, \frac{H_y}{k_0}, \frac{H_z}{k_0}\right)$.
In the case of the goniometer with the additional axis for the non-coplanar diffraction geometry, using the vector $\vec{Q}$ from equation (\[formulasQ3D\]), the general solution of the inverse problem contains a set of solutions due to the presence of periodic functions, but only four of them fall into the definition range of the angles $\theta_{in}$, $\theta_{out}$, $\theta_\chi$:
\[inverce\_problev\_formula\] $$\begin{aligned}
\theta_{\chi 1} = \arcsin(-H'_y) + 2 \pi n, \\
\theta_{\chi 2} = \pi + \arcsin(H'_y) + 2 \pi n,\end{aligned}$$ where $n$ is an integer value. $$\begin{aligned}
\nonumber H_{xz}^{'2} = H_x^{'2} + H_z^{'2}, \\
\nonumber H^{'2} = H_x^{'2} +H_y^{'2} + H_z^{'2}, \\
A = \frac{2H'_z - S \sqrt{4H_{xz}^{'2} - H^{'4}}}{H^{'2} - 2H'_x}, \\
\nonumber S = \pm 1, \\
\theta_{in} = 2 \arctan \left( A \right) + 2 \pi n, \\
\theta_{out} = \arctan \left(\frac{H'_z - \sin (\theta_{in})}{H'_x + \cos (\theta_{in})}\right) + 2 \pi n.\end{aligned}$$
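The inverse-problem formulas can be verified by a round trip: choose angles, form the reduced vector $\vec{H'} = \vec{Q}/k_0$, recover the angles, and substitute back into the forward relation. The sketch below uses `arctan2` to place $\theta_{out}$ in the correct quadrant, which is valid for $\cos\theta_\chi>0$ (the branch obtained from the principal value of $\arcsin$):

```python
import numpy as np

def Q_over_k0(t_in, t_out, t_chi):
    """Reduced diffraction vector Q / k0 from the forward formulas."""
    return np.array([
        np.cos(t_out) * np.cos(t_chi) - np.cos(t_in),
        -np.sin(t_chi),
        np.sin(t_out) * np.cos(t_chi) + np.sin(t_in)])

def inverse(H, S=1):
    """Goniometer angles for a reduced vector H' = H/k0 (branch S = +/-1)."""
    Hx, Hy, Hz = H
    t_chi = np.arcsin(-Hy)
    H2 = Hx**2 + Hy**2 + Hz**2
    Hxz2 = Hx**2 + Hz**2
    A = (2.0 * Hz - S * np.sqrt(4.0 * Hxz2 - H2**2)) / (H2 - 2.0 * Hx)
    t_in = 2.0 * np.arctan(A)
    t_out = np.arctan2(Hz - np.sin(t_in), Hx + np.cos(t_in))
    return t_in, t_out, t_chi

# Round trip: start from angles, compute H', recover angles, compare Q
t = np.radians([20.0, 35.0, 15.0])
H = Q_over_k0(*t)
sol = inverse(H, S=1)
```

Both branches $S=\pm 1$ reproduce the prescribed $\vec{H'}$ exactly; they correspond to two physically distinct goniometer settings reaching the same reciprocal space point.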
Available reciprocal space plot {#sec:2c}
-------------------------------
axis min, $^\circ$ max, $^\circ$
----------------- --------------- ---------------
$\theta_{in}$ -5 95
$\theta_{out}$ -5 120
$\theta_{\chi}$ -3 120
: The angular ranges of a typical goniometer axes for X-ray diffractometer.[]{data-label="tableLimits"}
The accessible reciprocal space view is usually combined with a view of the set of Bragg reflections available for the investigated sample. Here the accessible reciprocal space will be considered separately from the set of Bragg reflections and from the sample orientation in the laboratory system. The main goal of such a separation is to reduce the number of parameters in the reciprocal space representation and to take into account the physical limitations of the goniometer, which are caused by the instrument design. For the typical goniometer considered in this paper, the ranges of the accessible angles are shown in Table \[tableLimits\].
[l]{} ![Accessible reciprocal space at different angle ranges $\theta_{in}$, $\theta_{out}$ for goniometer without axis for non-coplanar measurements.[]{data-label="surf2D"}](Surf2D.pdf "fig:"){width="0.7\columnwidth"}\
a b c d e f
--------------------- -------------- -------------- ------------- ------------- ------------ -------------
$\theta_{in, min}$ $-180^\circ$ $-90^\circ$ $-90^\circ$ $0^\circ$ $0^\circ$ $-5^\circ$
$\theta_{in, max}$ $180^\circ$ $90^\circ$ $90^\circ$ $180^\circ$ $90^\circ$ $95^\circ$
$\theta_{out, min}$ $-180^\circ$ $-180^\circ$ $-90^\circ$ $0^\circ$ $0^\circ$ $-5^\circ$
$\theta_{out, max}$ $180^\circ$ $180^\circ$ $90^\circ$ $180^\circ$ $90^\circ$ $120^\circ$
\
In the case of a goniometer without the additional axis $\theta_\chi$, all measurements are done in the coplanar geometry. According to equation (\[formulasQ2D\]), the accessible reciprocal space is restricted to the plane $L_x L_z$. Figure \[surf2D\] demonstrates the accessible reciprocal space regions for different angle ranges of $\theta_{in}$, $\theta_{out}$.
{width="80.00000%"}
[c]{} {width="80.00000%"}\
a b c d e f
---------------------- -------------- -------------- ------------- ------------- ------------ -------------
$\theta_{in, min}$ $-180^\circ$ $-90^\circ$ $-90^\circ$ $0^\circ$ $0^\circ$ $-5^\circ$
$\theta_{in, max}$ $180^\circ$ $90^\circ$ $90^\circ$ $180^\circ$ $90^\circ$ $95^\circ$
$\theta_{out, min}$ $-180^\circ$ $-180^\circ$ $-90^\circ$ $0^\circ$ $0^\circ$ $-5^\circ$
$\theta_{out, max}$ $180^\circ$ $180^\circ$ $90^\circ$ $180^\circ$ $90^\circ$ $120^\circ$
$\theta_{\chi, min}$ $-180^\circ$ $-180^\circ$ $-90^\circ$ $0^\circ$ $0^\circ$ $-3^\circ$
$\theta_{\chi, max}$ $180^\circ$ $180^\circ$ $90^\circ$ $180^\circ$ $90^\circ$ $120^\circ$
\
The drawing of the two-parametric surface in the case of the vector $\vec{Q}(\theta_{in},\theta_{out})$ is straightforward; however, in the case of a goniometer with the additional axis $\theta_\chi$ it becomes a rather complicated task, because the accessible reciprocal space is then described by the three-parametric function $\vec{Q}(\theta_{in}, \theta_{out}, \theta_\chi)$. In that case, the drawing of the accessible reciprocal space is reduced to mapping the outer surface of $\vec{Q}(\theta_{in},\theta_{out}, \theta_\chi)$. To plot this complex picture, the property of conformal mappings of simply connected regions is used, namely that they transfer the boundaries of one region into the boundaries of the other region. The region $(\theta_{in},\theta_{out}, \theta_\chi)$ is easily represented; however, the regions of single-valued correspondence $(\theta_{in},\theta_{out}, \theta_\chi) \longleftrightarrow (Q_x, Q_y, Q_z)$ have to be found. The condition $\left| \vec{Q} \right|' = 0$ defines the part of the outer surface $(Q_x, Q_y, Q_z)$, i.e. the boundary of the regions of single-valued correspondence; at fixed $\theta_\chi$ this condition is satisfied when $\theta_{in} + \theta_{out} = n \pi$. Figure \[Surf3DFull\] shows the drawings for all surfaces of the angular space $\left\{ \theta_{in} (-\pi, \pi),\theta_{out} (-\pi, \pi), \theta_\chi(-\pi,\pi) \right\}$, and the final shape of the accessible reciprocal space is demonstrated as well. The accessible reciprocal space shapes for different angle ranges $\theta_{in}$, $\theta_{out}$, $\theta_\chi$ are shown in Fig. \[Surf3D\].
Resolution element {#sec:3}
==================
For the proper analysis of the measured X-ray diffraction data, both the resolution function of the optical system and the contribution of the sample form and size to the intensity of the detected signal have to be taken into account. Accounting for the additional degree of freedom of the detector arm introduces further complications [@2014BenediktovitchIF]. The accurate calculation of these effects in the measured data is quite tedious, and it is therefore important to estimate the system resolution before the measurements, in order to evaluate the computational complexity of the analysis or to introduce the correct simplifications into the resolution function. The classical definition of the resolution element is presented, for example, in [@pietsch2004]. For an accurate expression of the reciprocal space resolution element it is necessary to add to Eq. (\[formulasSurf3D\]) an additional degree of freedom for $\vec{k}_{in}$, which corresponds to a small deviation of the incident beam in the plane $L_xL_y$ due to the linear focal spot of the X-ray tube and the absence of optical limitation of the beam in this direction.
$$\begin{aligned}
\vec{k}_{in}^{'} = k_0 \mathbf{\Theta_{in}}\mathbf{\Theta_{\phi}} \vec{L}_x,\\
\mathbf{\Theta_\phi} = \mathbf{R}(\theta_\phi, \vec{L}_z),\\
\vec{k}_{in}^{'} = \{\cos (\theta_i) \cos (\theta_\phi ),\sin (\theta_\phi ),
\sin ( \theta_i )(-\cos (\theta_\phi ))\}, \\
\vec{Q}^{'} = \vec{k}_{out} - \vec{k}_{in}^{'},\end{aligned}$$
$$\begin{aligned}
\vec{Q}^{'} = \{\cos (\theta_{out}) \cos (\theta_\chi )-\cos (\theta_{in})
\cos (\theta_\phi ),
-\sin (\theta_\chi )-\sin (\theta_\phi ),\nonumber \\
\sin
(\theta_{in}) \cos (\theta_\phi )+\sin (\theta_{out}) \cos
(\theta_\chi )\},\end{aligned}$$
where $\theta_\phi$ is the angle of a small deviation of the line connecting the sample and the X-ray source in the horizontal plane (see Fig. \[VerticalGonio\](a)).
The resolution element at a fixed reciprocal space point, parameterized with the angles $\theta_{in}$, $\theta_\phi$, $\theta_{out}$, $\theta_\chi$, has the following form in the case $\theta_\phi = 0$:
$$\begin{aligned}
\vec{Q}^{'} = \{\cos (\theta_{out}+\delta \theta_{out}) \cos (\theta_\chi+ \delta \theta_\chi)
-\cos (\theta_{in}+\delta\theta_{in} ) \cos (\delta \theta_\phi ), \nonumber \\
-\sin (\theta_\chi+ \delta \theta_\chi)-\sin (\delta \theta_\phi ), \\
\sin(\theta_{in}+\delta \theta_{in} ) \cos (\delta \theta_\phi )+
\sin (\theta_{out}+\delta \theta_{out}) \cos
(\theta_\chi + \delta \theta_\chi)\}, \nonumber\end{aligned}$$
where $\delta\theta_{in}$, $\delta \theta_\phi$, $\delta \theta_{out}$, $\delta \theta_\chi$ are the angular resolution parameters of the goniometer optical system.
[c]{} {width="80.00000%"}\
g a b c d e f
----------------- -------------- --------------- -------------- --------------- --------------- --------------- ---------------
$\theta_{in}$ $2.82^\circ$ $34.56^\circ$ $53.3^\circ$ $79.28^\circ$ $50.99^\circ$ $43.67^\circ$ $31.33^\circ$
$\theta_{out}$ $53.3^\circ$ $34.56^\circ$ $2.82^\circ$ $8.75^\circ$ $4.33^\circ$ $9.85^\circ$ $21.18^\circ$
$\theta_{\chi}$ $0^\circ$ $0^\circ$ $0^\circ$ $0^\circ$ $11.57^\circ$ $20.3^\circ$ $23.65^\circ$
\
Assuming that the volume of the detected pixel is small enough and that the resolution element is uniform, and using the property of conformal mapping of boundaries for connected regions, we can plot the outer boundary of the resolution element by projecting each of the 24 faces of the four-dimensional cube of the angular space $\delta\theta_{in}, \delta \theta_\phi, \delta \theta_{out}, \delta \theta_\chi$ into reciprocal space. The shape and the volume of the resolution element depend essentially on the position in reciprocal space. Typical shapes of the resolution element at different points of reciprocal space are shown in Fig. \[ResEl\].
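A minimal sketch of this construction (our own illustration, not the paper's code) maps angular perturbations through the $\vec{Q}'$ components given above; for brevity only the 16 corners of the four-dimensional cube are projected here, whereas the full outer boundary requires mapping all 24 two-dimensional faces:

```python
import itertools
import numpy as np

# Q'/k0 = k_out - k_in', including the horizontal beam deviation t_phi.
def Qp(t_in, t_out, t_chi, t_phi):
    return np.array([
        np.cos(t_out) * np.cos(t_chi) - np.cos(t_in) * np.cos(t_phi),
        -np.sin(t_chi) - np.sin(t_phi),
        np.sin(t_in) * np.cos(t_phi) + np.sin(t_out) * np.cos(t_chi),
    ])

# Project the 16 corners of the 4D cube of angular deviations; their spread
# gives a first estimate of the resolution-element size at this Q position.
def corner_cloud(t_in, t_out, t_chi, d_in, d_phi, d_out, d_chi):
    return np.array([
        Qp(t_in + s0 * d_in, t_out + s1 * d_out, t_chi + s2 * d_chi, s3 * d_phi)
        for s0, s1, s2, s3 in itertools.product((-1.0, 1.0), repeat=4)
    ])

# Example: one of the tabulated angular positions, with an assumed
# (illustrative) 0.01 deg resolution on each goniometer axis.
pts = corner_cloud(np.radians(31.33), np.radians(21.18), np.radians(23.65),
                   *[np.radians(0.01)] * 4)
extent = pts.max(axis=0) - pts.min(axis=0)   # anisotropic size in Qx, Qy, Qz
```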
Non-coplanar HRXRD measurements {#sec:4}
===============================
In order to define the range of the angular parameters and to set up the measurement geometry, the set of expected reflections and the accessible reciprocal space have to be combined in a single view. This view helps to select a proper region of measurements in reciprocal space. On the basis of Eqs. (\[inverce\_problev\_formula\]), the angular parameters necessary for the proper positioning of the goniometer are calculated. The selection of the measurement area and the measurement geometry follow logically from the estimate of the resolution element.
[c]{} {width="80.00000%"}\
  $\theta_{in}$     $53.3^\circ$   $2.82^\circ$   $31.33^\circ$
  ----------------- -------------- -------------- ---------------
  $\theta_{out}$    $2.82^\circ$   $53.3^\circ$   $21.18^\circ$
  $\theta_{\chi}$   $0^\circ$      $0^\circ$      $23.65^\circ$
\
\
To demonstrate the experiment-planning scheme introduced above, reciprocal space maps have been measured for a sample consisting of a Ge layer on a Si substrate, for the following reflections: $(113)$ in a coplanar geometry with a large incidence angle, $(\overline{1}\overline{1}3)$ in a coplanar geometry with a small incidence angle, and $(1\overline{1}3)$ in a non-coplanar geometry. The sample position in space was fixed during all measurements. The $2\theta / \omega$ maps were recorded for the reflections $(113)$ and $(\overline{1}\overline{1}3)$, and a $2\theta / \theta_\chi$ map was measured for the reflection $(1\overline{1}3)$. The results of these measurements are presented in Fig. \[RSMS\], which illustrates the convenience and practicality of proper experiment planning using the technique described above to obtain the measured X-ray intensity from Bragg reflections in an optimal way.
In general, the expressions for the non-coplanar case of X-ray diffraction are more complicated due to the non-zero $Y$ component of the diffraction vector. However, a non-coplanar measurement geometry delivers additional information on the sample, which is not accessible in coplanar geometry. For example, the measurements of reciprocal space maps for the reflections $(113)$ and $(\overline{1}\overline{1}3)$ require grazing angles ($\sim 2.8^{\circ}$) for either the incident or the exit beam, which makes the analysis quite difficult. In contrast, the $(1\overline{1}3)$ reciprocal space mapping is performed at incidence and exit angles that are far from grazing. The $X$ component of the vector $\vec{Q}$ is close to zero in the case of the $(1\overline{1}3)$ reciprocal space map. All measurements are performed with a fixed sample position, which is convenient for the experimentalist and makes it possible to introduce three offsets for the sample-alignment correction applied to all measurements.
Conclusions {#sec:5}
===========
The presented approach provides algorithms for the convenient visualization of the accessible reciprocal space in the case where the detector arm of a vertical goniometer possesses two degrees of freedom. The proposed analytical expressions for the angular coordinates in the considered parametrization of the reciprocal lattice vector position are used in a computer tool assisting X-ray diffraction experiments. The calculation of the resolution element in reciprocal space is part of the presented method, and examples of the resolution element at different points of reciprocal space are demonstrated. The advantages of non-coplanar diffraction with the two degrees of freedom provided by the detector arm are briefly discussed.
---
author:
- 'Jorrit H.J. Hagen'
- Amina Helmi
- 'Maarten A. Breddels'
bibliography:
- 'bibliography.bib'
date: 'Received 27 Jun 2019 / Accepted DD Mon YYYY'
title: 'Axisymmetric Schwarzschild models of an isothermal axisymmetric mock dwarf spheroidal galaxy.'
---
[The goal of this work is to test the ability of Schwarzschild’s orbit superposition method to measure the mass content, scale radius and shape of a flattened dwarf spheroidal galaxy. Until now, most dynamical modelling efforts have assumed that dwarf spheroidal galaxies and their host halos are spherical.]{} [We use an Evans (1993) model to construct an isothermal mock galaxy whose properties somewhat resemble those of the Sculptor dwarf spheroidal galaxy. This mock galaxy contains flattened luminous and dark matter components, resulting in a logarithmic profile for the global potential. We have tested how well our Schwarzschild method can constrain the characteristic parameters of the system for different sample sizes, and also when the functional form of the potential is unknown.]{} [When assuming the true functional form of the potential, the Schwarzschild modelling technique provides an accurate and precise measurement of the characteristic mass parameter of the system and reproduces well the light distribution and the stellar kinematics of our mock galaxy. When assuming a different functional form for the potential, such as a flattened NFW profile, we also constrain the mass and scale radius to their expected values. However, in both cases we find that the flattening parameter remains largely unconstrained. This is likely because the information content of the velocity dispersion on the geometric shape of the potential is too small, since $\sigma$ is constant across our mock dSph.]{} [Our results using Schwarzschild’s method indicate that the enclosed mass can be derived reliably, even if the flattening parameter is unknown, already for samples containing $2000$ line-of-sight radial velocities, such as those currently available. Further applications of the method to more general distribution functions of flattened systems are needed to establish how well the flattening of dSph dark halos can be determined.]{}
Introduction
============
In the current cosmological $\Lambda$CDM model most of the mass is believed to be in the form of (cold) dark matter. While successful on large scales, on the scales of dwarf galaxies the model suffers from a number of challenges, including the missing satellites problem [@Klypinetal1999; @Mooreetal1999], the cusp-core conundrum [@Hui2001], and the too big to fail problem [@Boylan-Kolchin2011], although all may be solved one way or another by considering the effects of baryonic physics [e.g. @Zolotovetal2012; @Brooksetal2013; @Wetzeletal2016; @Kimetal2018]. The dwarf spheroidal satellite galaxies (dSph’s or dSph galaxies) of our Milky Way can provide particularly strong constraints on the nature of dark matter, since their high mass-to-light ratios suggest that they are fully dark matter dominated [@Strigarietal2008; @Walkeretal2007; @Wolfetal2010].
Various methods have been used to develop dynamical models of dSph galaxies using line-of-sight velocity measurements for large samples of individual stars in these systems [e.g. @Battagliaetal2006; @Battagliaetal2008; @Battagliaetal2008b; @Battagliaetal2011; @Walkeretal2009; @Walkeretal2015_Draco]. Modelling via the Jeans Equations, distribution functions, and orbit superposition methods like Schwarzschild modelling are amongst those most often used [@Battagliaetal2013]. All these methods have in common that they assume that the systems are in dynamical equilibrium.
The Jeans Equations are derived by taking moments of the Collisionless Boltzmann Equation, which itself describes the conservation of probability in phase-space [@BinneyTremaine2008]. Not every solution of the Jeans equations has an associated distribution function that is physical (i.e. positive) everywhere. Furthermore finding a solution requires additional assumptions, for example on the functional form of the density profile and on the velocity anisotropy [because this is generally not known, although see the work by @Massarietal2018 who determined directly the anisotropy of a sample of stars in the Sculptor dSph using proper motions derived from [*Gaia*]{} and HST]. Because Jeans modelling is very flexible and fast it has become the most widely used tool to model dSph galaxies, particularly in the spherical limit. It has, for example, allowed for a robust (independent of the velocity anisotropy) measurement of the mass enclosed within approximately the half light radii of the dSph galaxies [@Walkeretal2009b; @Wolfetal2010], and the determination that the masses of the classical dSph’s are in the range $\sim(10^8-10^9) M_{\odot}$ [e.g. @Walkeretal2007]. On the other hand, it has not been possible to rule out cusped or cored profiles on the basis of these types of models [e.g. @Evansetal2009; @Strigarietal2017].
The @Schwarzschild1979 modelling technique relies on the idea that a system can be seen as a superposition of stellar orbits. In Schwarzschild modelling one only needs to assume a specific gravitational potential form. The method does require a significant amount of computing power and therefore a smaller set of gravitational potentials can be explored in comparison to Jeans modelling. @BreddelsHelmi2013 have applied this method to 4 dwarf spheroidal galaxies and by modelling both the second and fourth line-of-sight velocity moments and assuming spherical symmetry they find that, independently of the particular form assumed for the potential, it is possible to constrain not only the mass at around the half-light radius (more precisely at $r_{-3}$ where the logarithmic slope of the luminous density is $-3$) but also the logarithmic slope of the dark matter density.
Most work thus far has assumed that dwarf spheroidal galaxies and their host halos are spherical, despite the fact that their light distribution is typically not round. Furthermore, dark matter halos are predicted to be triaxial when no baryonic effects are taken into account, although subhalos in cold dark matter simulations that could host dSph’s are only mildly triaxial, and almost axisymmetric [@Veraetal2014]. This implies that it is important to establish how many and which of the previously mentioned results still stand when taking into account deviations from spherical symmetry.
@Kowalczyketal2017_recoveringmassandanisotropy [@Kowalczyketal2018_theeffectofnonsphericity] have in fact studied the ability of recovering the mass profile and anisotropy of the remnants of the mergers of dwarf disky galaxies (one postulated channel for the formation of dSph) when using spherical Schwarzschild models. These authors have shown that for spherical remnants the method can break the mass-anisotropy degeneracy, whereas for non-spherical (prolate) remnants the anisotropy will always be underestimated, although the total mass profile will be recovered well for data along the minor axis (although not if the data are along the major axis).
On the other hand, axisymmetric Jeans modelling has been used to infer the axis ratio of the dark matter density distribution ($Q$) in several dSph’s assuming a constant velocity anisotropy $\beta_z$. The reported axis ratios are rather low ($Q=[0.3-0.5]$) compared to the observed projected flattening in the light ($q^\prime_* \sim 0.7$). These low values are somewhat counterintuitive, though the results may be affected by degeneracies between $Q$, the velocity anisotropy profile, the viewing angle of the dSph, and the inner slope of the dark matter density profile. In @Hayashietal2016, a very similar technique was applied to unbinned data, and for e.g. the Scl dSph, the authors found that the flattening parameter is largely unconstrained.
In this work we explore the performance of the Schwarzschild modelling technique in the axisymmetric regime, to free ourselves from the assumptions inherent to Jeans models. We test the method on a mock Sculptor-like dSph and consider axisymmetric mass distributions for both the light and the dark matter component and establish how well the characteristic parameters of the potential can be recovered, for different sample sizes.
The paper is organised as follows. In Sect. \[sec:amockgalaxy\], we set up a mock galaxy and simulate a realistic dataset. In Sect. \[sec:method\] we describe the Schwarzschild method and its implementation in this work. Then, in Sect. \[subsec:recoveringthemockgalaxyparameters\], we apply the Schwarzschild method and show that we can recover the characteristic mass parameter of the mock galaxy potential, irrespective of the potential flattening assumed. In Sect. \[subsec:nfwmodels\] we model our mock galaxy with an axisymmetric NFW potential form and show that, even in this case, the Schwarzschild method is able to constrain the mass and scale radius to the expected values for datasets containing a realistic number of stars. We present our conclusions in Sect. \[sec:discussionandconclusion\] where we also discuss our findings.
The mock galaxy {#sec:amockgalaxy}
===============
Potential, luminous density and characteristic parameters {#subsec:potential}
---------------------------------------------------------
We have built a mock galaxy inspired by the Sculptor dSph. We have assumed a flattened stellar density profile ($q_{\ast}=0.8$), no net rotation and a line-of-sight velocity dispersion of $\sim 10$ km/s [@Mateo1998; @Battagliaetal2008; @Walkeretal2009b]. For simplicity, we have set up the mock galaxy following @Evans1993, who uses an elementary distribution function to describe a composite axisymmetric system. This distribution function is ergodic, i.e. it leads to a velocity ellipsoid that is isotropic and has a constant amplitude, and thus is not generic[^1].
The gravitational potential of the composite system as a whole follows the form $$\label{eq:logarithmicpotential}
\Phi_\text{E}(R,z) = \frac{1}{2} v^2_0 \, \text{ln} \left( R^2_c + R^2 + \frac{z^2}{q^2} \right) + \Phi_0 \, ,$$ where ($R$, $\phi$, $z$) denote the cylindrical coordinates. Here ${v_0}^2$ relates to the mass of the system and $R_c$ is the core radius. The parameter $q$ is the axial ratio, and has to satisfy $1/\sqrt{2} = 0.707 \leq q \leq 1.08$ where the lower limit is set by the condition that the spatial density is positive everywhere [@BinneyTremaine2008] and the upper limit yields a composite distribution function of the form used by @Evans1993 that is positive everywhere. The zero point of the potential is set by $\Phi_0$.
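As a minimal numerical sketch (our own, with the illustrative parameter values adopted for the mock galaxy below: $v_0=20$ km/s, $R_c=1$ kpc, $q=0.8$), the potential can be coded directly from the equation above; a quick check confirms that its equipotential surfaces are spheroids of axial ratio $q$:

```python
import numpy as np

# Evans logarithmic potential, Phi in (km/s)^2; cylindrical R, z in kpc.
def phi_evans(R, z, v0=20.0, Rc=1.0, q=0.8, phi0=0.0):
    return 0.5 * v0**2 * np.log(Rc**2 + R**2 + (z / q) ** 2) + phi0

# Equipotentials satisfy R^2 + z^2/q^2 = const, so the points
# (R, z) = (a, 0) and (0, q*a) lie on the same equipotential surface.
flat_check = np.isclose(phi_evans(1.0, 0.0), phi_evans(0.0, 0.8))
```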
The density profile of the stellar component is described by $$\label{eq:rholum}
\rho_\text{lum}(R,z) = \frac{\rho_0 R^p_c}{ \left( R^2_c + R^2 + z^2 / q_{\ast}^2 \right)^{p/2} } \, ,$$ where $\rho_0$ is the central density, $p$ denotes a slope parameter, and $q_{\ast}$ is the flattening of the stellar density. The associated stellar distribution function is given by $$\label{eq:flum}
f_{\text{lum}}(E) \propto \text{exp}[-pE/v_0^2] = \text{exp}[-p\Phi_\text{E}/v_0^2] \,\, \text{exp}[-pv^2/2v_0^2] \, ,$$ where $E$ is the sum of the gravitational potential and kinetic energies of a star.
In the Evans model $q_{\ast} = q$ and therefore the density flattening of the luminous component is the same as the potential (not the density) flattening of the composite system. The surface brightness profile of the mock galaxy can be found by integrating the luminous density along the line-of-sight.
The line-of-sight velocity profile is exactly Gaussian with a velocity dispersion that is isotropic and constant everywhere: $$\label{eq:sigmaevans}
\sigma_{\text{E}} = \frac{v_0}{\sqrt{p}} \, ,$$ and independent of the inclination, scale radius and flattening.
We choose here $v_0=20$ km/s, $R_c=1$ kpc, $q=0.8$, and $p=3.5$ for our mock galaxy. These values result in a velocity dispersion of roughly $10.7$ km/s. For these values of $p$ and $q$, the central total density should be at least $1.13$ times the central stellar density to yield positive phase-space densities for both the stellar and dark components everywhere.
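A one-line check (ours) confirms the quoted dispersion from $\sigma_{\text{E}} = v_0/\sqrt{p}$ with the adopted values:

```python
import numpy as np

# sigma_E = v0 / sqrt(p) for v0 = 20 km/s and p = 3.5
v0, p = 20.0, 3.5
sigma_E = v0 / np.sqrt(p)   # ~10.69 km/s, the "roughly 10.7 km/s" of the text
```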
![The surface brightness profile of our mock galaxy in an edge-on view. The black horizontal and vertical lines show the boundaries of the kinematic-bins. We only show the positive quadrant of our FOV ($x'>0$ kpc, $y'>0$ kpc). The yellow contours correspond to the isophotes of the system ($q_{\ast}=q=0.8$). In the top panel we have plotted the surface brightness normalised to its central value as function of $x'$, i.e. along the (projected) major axis of the galaxy.[]{data-label="fig:mockgalaxy"}](figures/mockgalaxy.pdf){width="50.00000%"}
In Fig. \[fig:mockgalaxy\] we show the 2D surface brightness profile of the mock galaxy for an edge-on view. Since the galaxy is axisymmetric, we only show the positive quadrant. Contours of constant surface brightness follow ellipses with axial ratios $q'_{\ast}$, which because of the edge-on view are identical to the intrinsic density flattening (i.e. $q'_{\ast}=q_{\ast}=0.8$). The 1D surface brightness profile along the major axis is plotted in the top panel of this figure. The surface brightness decreases by a factor of two with respect to its central value at a projected ellipsoidal radius of $0.86$ kpc; however, the projected half light radius is much larger ().
Observing the mock galaxy {#subsec:observing}
-------------------------
We generate the mock galaxy by drawing positions following the luminous density distribution (see Eq. \[eq:rholum\]) and velocities from the Gaussian distribution function (see Eq. \[eq:flum\]). We thus assume that the dataset of the stars with line-of-sight velocities follows the same distribution as the light. We place the mock galaxy at a distance of $80$ kpc, and “observe” it with a square field of view (FOV), centred on the mock galaxy, with a size of $7832^{\arcsec} \times 7832^{\arcsec}$ (which then corresponds to roughly $3 \times 3$ kpc). Throughout this work we assume an edge-on view.
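A small-angle calculation (our check, not the paper's code) confirms the quoted physical size of the field of view:

```python
import numpy as np

# 7832" x 7832" FOV at a distance of 80 kpc -> "roughly 3 x 3 kpc"
distance_kpc = 80.0
fov_rad = np.radians(7832.0 / 3600.0)   # arcsec -> degrees -> radians
fov_kpc = distance_kpc * fov_rad        # side length, small-angle approximation
```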
The typical line-of-sight velocity measurements of individual stars have errors of order $\mathrm{d}v = 2$ km/s [@Battagliaetal2008b; @Walkeretal2009]. Therefore, to simulate a realistic dataset we convolve the line-of-sight velocities with a Gaussian distribution having a standard deviation of $2$ km/s. We compute velocity moments by combining the velocities of all available stars in a certain spatial bin on the sky (in what follows a [ *kinematic-bin*]{}) in our FOV. The velocity moments are estimated by correcting for the measurement errors (see Appendix \[App:AppendixA\]), similarly to @Breddelsetal2013. We assume that the surface brightness profile can be measured without error in much smaller spatial bins on the sky (which we refer to as [ *light-bins*]{}). To produce a reasonable galaxy, we also assume that the three-dimensional light distribution is known to much larger radii, but for many fewer bins (more details can be found in Sect. \[subsec:generatingorbitlibraries\]).
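The error correction of the moments can be sketched as follows (our own illustration, not the authors' Appendix A code): convolving the velocities with Gaussian errors of $\mathrm{d}v = 2$ km/s inflates the observed second moment by $\mathrm{d}v^2$, so an unbiased dispersion estimate subtracts the known error variance:

```python
import numpy as np

# Mock line-of-sight velocities with Gaussian measurement errors; the
# adopted intrinsic dispersion (~10.69 km/s) is illustrative.
rng = np.random.default_rng(1)
sigma_true, dv, n = 10.69, 2.0, 200_000
v_obs = rng.normal(0.0, sigma_true, n) + rng.normal(0.0, dv, n)
sigma_est = np.sqrt(np.var(v_obs) - dv**2)   # error-corrected dispersion
```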
The Schwarzschild orbit superposition method {#sec:method}
============================================
In Schwarzschild modelling, orbits are used as building blocks of a dynamical system. Given a potential $\Phi$, a complete set of orbits are integrated numerically and for each orbit the predicted observables are stored in a so-called orbit library. Varying the parameters of the potential (or varying the potential form as a whole), will result in different libraries. The library which provides a combination of weighted orbits that matches the observations (light profile + kinematics) best, will be said to yield the best-fit parameters of the potential. The orbital weights themselves provide the corresponding distribution function. Since the orbital weights are positive by construction, the distribution function will be non-negative everywhere.
Generating orbit libraries {#subsec:generatingorbitlibraries}
--------------------------
In this paper we use a slightly modified version of the Schwarzschild code from @vandenBoschetal2008, who modelled the elliptical galaxy NGC4365. In what follows, we briefly describe how we generated the orbit libraries, how the orbital integration is done and how the libraries are stored. For more information we refer the reader to @vandenBoschetal2008[^2].
Given an energy $E_i$, initial positions ${x_0}$ and ${z_0}$ are sampled on an open polar grid, which is defined by $N_{I_2}$ polar angles and $N_{I_3}$ radii in between a thin orbit and the equipotential. The polar angles are sampled linearly, but to obtain a better sampling of orbits near the major axis of the system, 50% of the polar angles are sampled from the $z$-axis towards $10^\circ$ above the midplane, and the remaining 50% from $10^\circ$ down to the $z=0$ plane. The initial $y$-coordinates and initial velocities in the $x$- and $z$-directions are set to zero. The initial velocities in the $y$-direction, ${v_{y,0}}$, are determined by . This is done for all $N_\text{ener}$ energies, which are defined by . The locations $x_i$ that fix the energy grid are logarithmically sampled between $25$ pc and $50$ kpc from the centre. This ‘orbit library’ thus consists of orbits ($z$-tubes in our axisymmetric potential).
To account for slowly precessing orbits in the library we also compute $17$ copies of each orbit, where each copied orbit is subsequently rotated by $10^\circ$ in the $xy$-plane. These $18$ copies are summed into a single orbit and replace the non-rotated orbit, such that each orbit now follows the axisymmetric requirements. Besides ensuring axisymmetric behaviour of our models, adding rotations also increases the sampling of an orbit. Note as well that each orbit has a counter-rotating sibling, obtained by appropriately changing the sign of the velocity vector.
We further improve the accuracy of the model by ‘dithering’: every orbit is split into $N^3_\text{dither}$ suborbits by replacing each of its three nonzero initial coordinates by $N_\text{dither}$ slightly different coordinates. In fact, the initial conditions of all suborbits are found by increasing $N_\text{ener}$, $N_{I_2}$, and $N_{I_3}$ by a factor of $N_\text{dither}$. The observables of each set of adjacent $N^3_\text{dither}$ suborbits are combined and stored as being the observables of the (bundled) orbit. Choosing an odd number for $N_\text{dither}$ ensures that the original orbit is the central suborbit of the bundle. In all our Schwarzschild models we use $N_\text{dither}=5$. Every main orbit is thus made from a bundle of $5^3=125$ neighbouring suborbits.
We use a Runge Kutta integrator to compute the stellar trajectories over roughly $200$ orbital time scales. We require that the energy of each suborbit is always conserved better than $1\%$ by increasing the accuracy of the integrator if necessary. For each orbit the kinematic information is stored in a velocity grid, which consists of a line-of-sight velocity axis ($N_\text{v}$ velocity bins) and an axis associated to the location on the sky ($N_\text{kin}$ kinematic-bins). On equally spaced time intervals, a count is added to the element of the grid associated to the velocity and location at the given time. The sky projected path of the orbit is determined in a similar way and stored in the surface brightness grid containing $N_\text{2D light}$ light-bins. In an additional 3D grid containing $N_\text{3D light}=800$ bins ($40$ radial, $5$ azimuthal and $4$ polar bins in the positive octant) the 3-dimensional path of an orbit is stored. This 3D grid reaches radii well beyond the FOV (in contrast to the velocity and surface brightness grid), and is used to control the system at such radii.
In this work we set $N_\text{2D light}$ equal to $99 \times 99 = 9801$ and $N_\text{kin}$ to $9 \times 9 = 81$, unless stated otherwise. The velocity axis of the velocity grid contains $N_v = 41$ bins and has a total velocity width of $80$ km/s, such that we cover velocities up to $\pm 4 \sigma_{\text{E}}$. The central velocity bin is centred on $0$ km/s. To be able to track how long an orbit spends in a given kinematic-bin, counts will also be added to the first or last velocity bin if velocities are beyond the limits of the velocity grid[^3].
Fitting orbital weights {#fitting}
-----------------------
Once the orbit libraries are in place, we find the orbital weights such that the total luminous mass, the surface brightness profile and the kinematics within the FOV, and the 3D light profile of the system are reproduced.
The 2D light profile is fitted using the surface brightness grid, where we define: $$m^{\text{light}}_j = \sum\limits_{i=1}^{N_\text{orb}} w_i \, m^{\text{light}}_{ij} \, ,$$ where we sum over all orbits $i$. Here, $m^{\text{light}}_{ij}$ is the fraction of time orbit $i$ spent in light-bin $j$ and $m^{\text{light}}_j$ is the fractional surface brightness in light-bin $j$. The orbital weights are denoted by $w_i$ and add to unity by construction. The 3D light profile is fitted similarly using the 3D grid.
At the same time as we fit the light, we also fit the kinematics. In every kinematic-bin $k$ we compute the first 4 mass-weighted velocity moments $\langle v^n_k \rangle$ by defining: $$m^{\text{kin}}_k \langle v^n_k \rangle = \sum\limits_{i=1}^{N_\text{orb}} w_i \, m^{\text{kin}}_{ik} \langle v^n_{ik} \rangle \, ,$$ where again we sum over all orbits $i$. This time $m^{\text{kin}}_{ik}$ is the fraction of time orbit $i$ spent in kinematic-bin $k$ and $m^{\text{kin}}_k$ is the fractional surface brightness in kinematic-bin $k$. The $n^{\text{th}}$ moment of orbit $i$ in kinematic-bin $k$ is given by $\langle v^n_{ik} \rangle$:\
$$\langle v^n_{ik} \rangle = \frac{\displaystyle \sum\limits_{l=2}^{N_v-1} h_{ikl} \, v_{\text{cen},l}^n \, \triangle v} {\displaystyle \sum\limits_{l=2}^{N_v-1} h_{ikl} \, \triangle v} \, ,$$ where, $\triangle v$ is the size of the velocity bin and $h_{ikl}$ is the fraction of time that orbit $i$ spent in kinematic-bin $k$ and velocity bin $l$. Velocity bin $l$ has velocity range \[$v_{\text{cen},l} - \frac{1}{2} \triangle v$, $v_{\text{cen},l} + \frac{1}{2}\triangle v$\], where $v_{\text{cen},l}$ denotes its central velocity. We sum over the $N_v$ velocity bins, although we discard the contributions of the first and last velocity bin. This is done since we did not set a stringent outer velocity boundary in these velocity bins: as described before, counts will be added here even if a star has a velocity outside the range of the grid and therefore the typical velocities of these bins are not known. Note that $m^{\text{kin}}_{ik} = \sum\limits_{l=1}^{N_v} h_{ikl}$ with this choice.
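The moment computation above can be sketched in a few lines (our illustration; the histogram below is synthetic, not a real orbit library entry). For uniform bins the factors of $\triangle v$ cancel, and the first and last bins are discarded because they act as overflow bins:

```python
import numpy as np

# n-th velocity moment <v^n_ik> of one orbit in one kinematic-bin,
# from its stored velocity histogram h over N_v bins.
def orbit_moment(h, v_cen, n):
    core = slice(1, -1)   # drop the two overflow (first/last) bins
    return np.sum(h[core] * v_cen[core] ** n) / np.sum(h[core])

v_cen = np.linspace(-40.0, 40.0, 41)       # 41 bin centres spanning 80 km/s
h = np.exp(-0.5 * (v_cen / 10.69) ** 2)    # a Gaussian-like orbit histogram
mean_v = orbit_moment(h, v_cen, 1)         # ~0 for a symmetric histogram
```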
Now that we have defined the relation between the observables and the quantities in our model, we can describe how we fit the orbital weights. The fit is based on minimising $\chi^2_{\text{tot}}$: $$\label{eq:chi2total}
\chi^2_{\text{tot}} = \sum\limits^{N_\text{obs}}_{u=1} \left[ \frac{\text{Model}[u] - \text{Data}[u]}{\text{Error}[u]} \right]^2,$$ where $u$ runs over all $N_\text{obs}$ observables. The number of observables is given by: $$\label{eq:Ntotal}
N_\text{obs} = 1 + N_\text{2D light} + N_\text{3D light} + 4N_\text{kin},$$ which includes the contribution of the total light of the system, the fractional light for each 2D and 3D light-bin, and the four velocity moments for each kinematic-bin, respectively. Since using higher order moments might reduce the degeneracy between the velocity anisotropy and the mass profile we choose to use four velocity moments. We do not use higher moments since these are observationally harder to constrain.
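For the fiducial grid sizes quoted above, the observable count works out as:

```python
# N_obs = total light + 2D light-bins + 3D light-bins + 4 moments per
# kinematic-bin, for the fiducial grids of this work.
N_2Dlight, N_3Dlight, N_kin = 99 * 99, 800, 9 * 9
N_obs = 1 + N_2Dlight + N_3Dlight + 4 * N_kin
```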
We use a non-negative least square solver to ensure that all orbital weights are positive. The light is weighted by assigning an error of 2% to each of the 2D and 3D light-bins.
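The weight fit can be sketched as a standard non-negative least-squares problem (our mock illustration with a random matrix, not a real orbit library; the error scaling is assumed already folded into the rows of the model matrix):

```python
import numpy as np
from scipy.optimize import nnls

# Minimising chi^2_tot is an NNLS problem for the orbital weights w,
# once each model row and datum is divided by its observational error.
rng = np.random.default_rng(0)
A = rng.random((50, 20))      # N_obs x N_orb matrix of scaled observables
w_true = rng.random(20)       # non-negative "true" weights
b = A @ w_true                # consistent mock data vector
w_fit, residual = nnls(A, b)  # every w_fit entry is >= 0 by construction
```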
We note that we can investigate the individual contribution to the total $\chi^2_{\text{tot}}$ by decomposing it, e.g: $$\label{eq:chi2tot}
\chi^2_{\text{tot}} = \chi^2_{\text{total light}} + \chi^2_{\text{2D light}} + \chi^2_{\text{3D light}} + \chi^2_{\text{kin}}.$$ We stress that it is $\chi^2_\text{tot}$ that is minimised; we do not minimise the terms on the right-hand side individually. The term associated with the total light of the system turns out to be negligible, since it is always recovered very well. The same holds for $\chi^2_{\text{3D light}}$. These terms are only added to Eq. \[eq:chi2total\] to ensure that the model returns a realistic galaxy (in the sense that its luminous component resembles one). Most of the constraining power thus comes from the surface brightness profile and the kinematics.
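The weight fit can be sketched with SciPy's non-negative least-squares solver. This mirrors the procedure described above rather than the actual code used; the matrix sizes, contributions and errors below are illustrative:

```python
import numpy as np
from scipy.optimize import nnls

# Stack all observables into one design matrix: row u holds the
# contribution of each orbit to observable u divided by Error[u],
# and the target is Data[u] / Error[u].  nnls then minimises
# chi^2_tot subject to non-negative orbital weights.
rng = np.random.default_rng(1)
n_obs, n_orbits = 20, 8
A = rng.random((n_obs, n_orbits))           # mock orbit contributions
w_true = np.abs(rng.normal(size=n_orbits))  # mock (non-negative) weights
data = A @ w_true                           # noiseless mock observables
err = np.full(n_obs, 0.02)                  # e.g. 2% light errors

w_fit, rnorm = nnls(A / err[:, None], data / err)
chi2_tot = rnorm ** 2
print(chi2_tot)
```

For noiseless mock data the recovered $\chi^2_\text{tot}$ is essentially zero, since an exact non-negative solution exists.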
Results
=======
In this section we show that the Schwarzschild method can recover some of the characteristic parameters of the mock Sculptor-like dwarf spheroidal galaxy. We first show in Sect. \[subsec:recoveringthemockgalaxyparameters\] that if the true potential functional form of the system is known, we can constrain the characteristic mass parameter of the mock galaxy. In reality however, the true potential functional form is not known. Therefore, in Sect. \[subsec:nfwmodels\] we demonstrate how well we can constrain the characteristic parameters when assuming an axisymmetric form of a Navarro-Frenk-White [NFW, @NFW1996] potential.
Two parameter Evans models: recovering the mock galaxy parameters {#subsec:recoveringthemockgalaxyparameters}
-----------------------------------------------------------------
![ $\Delta \chi^2$-distribution of the characteristic parameters $q$ and $v_0$ of the Evans models obtained after applying the Schwarzschild method. In this case our mock data consist of $10^5$ stars inside the FOV . We use 9x9 kinematic-bins and assume the potential functional form and inclination are known. The black circles show the locations where the Schwarzschild models were evaluated. The green circle indicates the input parameters of the mock system. The best-fit model is indicated by the white cross and recovers the mock galaxy mass parameter. In white, grey and black we show the $\Delta \chi^2=[2.3, 6.18, 11.8]$-contours respectively. The coloured landscape shows interpolated $\Delta \chi^2$-values, and goes up to a maximum of $\Delta \chi^2=10$. On the right we show the $\Delta \chi^2$-landscapes when only considering $\chi^2_{\text{2D light}}$ (top), $\chi^2_{\text{kin}}$ (middle), or $\chi^2_{\text{3D light}}$ (bottom). []{data-label="fig:evans_results1_probcontours"}](figures/results1_chisquareprob_contour_remake.pdf){width="50.00000%"}
Here we assume the true form of the potential is known, i.e. we use it to build the orbit libraries for the Schwarzschild models. Our aim is to establish whether we can recover the correct values of the characteristic input parameters with this method. To this end we make a grid of models in which we vary the values of the characteristic parameters $q$ and $v_0$ (see Eq. \[eq:logarithmicpotential\]). We thus fix the core radius to , i.e. to its true value. We sample $q$ from $0.72$ to $0.96$, and $v_0$ from $11$ km/s to $29$ km/s, with the finest sampling (decided iteratively) having steps of $0.02$ in $q$ and of $1$ km/s in $v_0$. We name the models by the values of their parameters: qXXvYY, in which XX = $100q$ and YY = $v_0$ in km/s. For the orbit sampling, we set $N_\text{ener}=32$, $N_{I_2}=32$ and $N_{I_3}=16$, such that a total of $32 \times 32 \times 16 \times 5^3 = 2048000$ suborbits are integrated (see Sect. \[subsec:generatingorbitlibraries\]) and $2 \times 32 \times 32 \times 16 = 32768$ orbital weights are determined (see Sect. \[fitting\]).
![The difference between the best-fit and the observed velocity dispersion, in units of the observed error, for all 9x9 kinematic-bins. The figure is obtained after fitting the q94v21 library to our mock data consisting of $10^5$ stars in our FOV, assuming an edge-on view. The top and right panels show the fit (red solid line) obtained along the major and minor axis respectively. The data points with 68% error bars are shown in black. Black dashed lines indicate the true velocity dispersions from theory (Eq. \[eq:sigmaevans\]).[]{data-label="fig:testevans_2apertures_100000k_moments_fov_theon_results1_q80v20_chisigma"}](figures/results1_q94v21_sb_chisigma.pdf){width="45.00000%"}
### Results for a large sample {#subsubsec:resultsforalargesample}
We start with an idealised case in which the data consist of $10^5$ stars. For 9x9 kinematic-bins on the sky, the typical error of the velocity dispersion in a kinematic-bin is $\sim 0.25$ km/s.
The large panel of Fig. \[fig:evans\_results1\_probcontours\] shows the results obtained by fitting the Schwarzschild models to the data. The small black circles show the grid of tested values for $q$ and $v_0$, the green circle the true input values, and the white cross indicates the values of the parameters corresponding to the maximum likelihood estimator (MLE). For the best-fit model q94v21 we find $\chi^2_\text{tot} = 207.7$. The contribution of the kinematics (see Eq. \[eq:chi2total\]) to this value is $205.6$. Using 81 kinematic-bins to fit 4 velocity moments, this corresponds to $0.64$ per kinematic constraint.
We have computed $\Delta \chi^2(q, v_0) = \chi^2_\text{tot}(q, v_0) -
\textrm{min}[\chi_\text{tot}^2]$ for each of these models and define 68%, 95% and 99.7% -confidence intervals (white, grey and black contours, respectively) at $\Delta \chi^2=[2.3, 6.18, 11.8]$ [@Pressetal1992][^4]. The coloured background shows the $\Delta \chi^2$-landscape and is truncated at $\Delta \chi^2 = 10$. The smaller panels on the right show the $\Delta \chi^2$-landscapes when only considering $\chi^2_{\text{2D light}}$ (top), $\chi^2_{\text{kin}}$ (middle), or $\chi^2_{\text{3D light}}$ (bottom). The $\Delta \chi^2$-landscape based on $\chi^2_{\text{tot}}$ is thus slightly dominated by the differences in $\chi^2_{\text{2D light}}$, although the kinematics provide similar constraints.
To estimate the error on the mass parameter we first marginalise over the flattening parameter by selecting, for each $v_0$, the minimum $\Delta \chi^2$ along $q$. We define the 68% error at those values where $\Delta \chi^2 = 1.0$ [@Pressetal1992]. For this experiment we find . We therefore conclude that we can recover the input mass parameter of our mock galaxy well, but, as Figure \[fig:evans\_results1\_probcontours\] shows, we do not constrain the flattening $q$ well.
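This marginalisation and 68% interval can be sketched as follows; the $\chi^2$ surface below is a hypothetical paraboloid centred near the best-fit values, not the fitted one:

```python
import numpy as np

# For each v0, take the minimum chi^2 over q (profiling out the
# flattening), then find where Delta chi^2 <= 1 (68% interval).
v0_grid = np.arange(11.0, 30.0, 1.0)     # km/s, as in the model grid
q_grid = np.arange(0.72, 0.97, 0.02)
# hypothetical chi^2 surface with a minimum near v0 = 21 km/s
chi2 = (v0_grid[None, :] - 21.0) ** 2 / 1.5 ** 2 \
    + 0.1 * (q_grid[:, None] - 0.90) ** 2

profile = chi2.min(axis=0)               # marginalise over q
dchi2 = profile - profile.min()
inside = v0_grid[dchi2 <= 1.0]
print(inside.min(), inside.max())        # 68% interval bounds on v0
```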
In Fig. \[fig:testevans\_2apertures\_100000k\_moments\_fov\_theon\_results1\_q80v20\_chisigma\] we show how well the velocity dispersion is fitted by the best-fit q94v21 model. For each kinematic-bin, we show how much the model deviates from the data in units of the error on the data. The top panel shows the fit along the major axis, while the subpanel on the right shows the fit along the minor axis. These figures show that the fit is very good (in fact, it is almost indistinguishable from the fit obtained for the input-parameter model q80v20).
### Downsampling and folding data {#subsubsec:downsamplingandfoldingdata}
We now consider the more realistic case of a sample of $10^4$ stars. To reduce the observed uncertainties on the kinematics we decided to fold the kinematic data (but not the light). Since the system is axisymmetric, we fold our data into the kinematic-bins located in the first quadrant. We can simply move each star into its corresponding kinematic-bin without changing its velocity, because our system has an identical Gaussian line-of-sight profile everywhere (see Sect. \[subsec:potential\]). In general, however, one should change the velocities following the assumed symmetry.
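A minimal sketch of this folding step, under the assumption stated above that the velocities need not be transformed (the positions and velocities below are illustrative):

```python
import numpy as np

# Fold stars into the first quadrant of an axisymmetric system seen
# edge-on: a star at projected position (x, y) moves to (|x|, |y|).
# Velocities are kept unchanged, which is only valid here because the
# mock has the same Gaussian line-of-sight profile everywhere; in
# general v must be transformed according to the assumed symmetry.
x = np.array([-1.2, 0.8, -0.3])   # kpc
y = np.array([0.5, -0.9, -0.1])   # kpc
v = np.array([10.0, -5.0, 2.0])   # km/s, untouched

x_f, y_f = np.abs(x), np.abs(y)
print(x_f, y_f, v)
```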
Since we fold the data from 9x9 bins of our FOV into the first quadrant, we effectively have $10^4$ stars located in the resulting 5x5 kinematic-bins. A typical kinematic-bin now contains $400$ stars on average, and the typical error on the velocity dispersion is $\sim 0.45$ km/s.
We fit the folded data with the Schwarzschild orbit superposition method and find the MLE for model q90v22 (see Fig. \[fig:evans\_results2\_folded9x9\_probcontours\]). As in the case of $10^5$ stars, and thus as expected, the flattening parameter remains fairly unconstrained. We find a slightly larger mass parameter , but it is still within the 95%-confidence region.
For the best-fit model q90v22 we find $\chi^2_\text{tot} = 16.5$. The contribution of the kinematics (see Eq. \[eq:chi2total\]) to this value is $13.2$. Both values are much lower than in the case of $10^5$ stars, and this can be explained by the decrease in the number of kinematic constraints and the fact that the data have now been folded.
It is encouraging that a more realistic number of stars still gives such tight constraints. Comparing the $10^4$ stars folded case to the case of $10^5$ stars, the 2D 68%-probability contours are shifted towards just slightly larger masses. Note that the uncertainty on the mass parameter did not increase.
![Similar to Fig. \[fig:evans\_results1\_probcontours\], but now using $10^4$ stars and using the approach of folding the data from 9x9 into 5x5 kinematic-bins. The parameter inferences are similar, though slightly larger masses are preferred.[]{data-label="fig:evans_results2_folded9x9_probcontours"}](figures/results2_folded9x9_chisquareprob_contour_remake.pdf){width="50.00000%"}
To further test how the results depend on the number of stars observed, we decreased the number of stars even further, to a sample of $2000$ stars. This is the typical size of currently available datasets used to put constraints on the mass of dSph galaxies . We again fold the data from 9x9 into 5x5 kinematic-bins. The resulting typical error on the velocity dispersion in a kinematic-bin is then $\sim 0.9$ km/s. In this case we find a best-fit model q92v23 ($\chi^2_\text{tot} = 38.9$, $\chi^2_\text{kin} = 32.6$). The $\Delta \chi^2$-distribution is shown in Fig. \[fig:evans\_results2000\_folded9x9\_probcontours\]. The best fits are again obtained for the roundest models, although statistically the flattening parameter remains unconstrained. The region spanned by the contour drawn at $\Delta \chi^2=11.8$ is of similar size, but is shifted towards slightly higher masses ($\Delta v_0 \sim 1$ km/s) in comparison to the case of $10^4$ stars. The true q80v20 model is nevertheless still within the inferred 99.7%-confidence interval.
The weak trend found for smaller samples to prefer slightly higher values of $v_0$ may be due to the fact that, for small radii (compared to $R_c$), the potential (see Eq. \[eq:logarithmicpotential\]) is proportional to $v_0^2 [\ln R^2_c + (R/R_c)^2] + (v_0/q)^2 (z/R_c)^2$. Therefore, there is a weak degeneracy in the term $v_0/q$, that may manifest itself more when the sampling is sparse, and thus lead to a small shift in preferred values of $v_0$ for larger $q$.
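Explicitly, if the logarithmic potential of Eq. \[eq:logarithmicpotential\] takes the standard Evans form (an assumption stated here for completeness), $\Phi = \frac{1}{2} v_0^2 \ln\!\left(R_c^2 + R^2 + z^2/q^2\right)$, then expanding for $R, z \ll R_c$ gives $$\Phi \approx \frac{v_0^2}{2}\left[\ln R_c^2 + \left(\frac{R}{R_c}\right)^2\right] + \frac{1}{2}\left(\frac{v_0}{q}\right)^2 \left(\frac{z}{R_c}\right)^2 \, ,$$ so that, apart from an additive constant, the vertical term depends on $v_0$ and $q$ only through the combination $v_0/q$, which is the degeneracy referred to above.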
![Similar to Fig. \[fig:evans\_results2\_folded9x9\_probcontours\], now using $2000$ stars. The parameter inferences are similar, though slightly larger masses are inferred ($\Delta v_0 \sim 1$ km/s).[]{data-label="fig:evans_results2000_folded9x9_probcontours"}](figures/results2000_folded9x9_chisquareprob_contour_remake.pdf){width="50.00000%"}
From the tests performed in this Section we conclude that, with a kinematic sampling that follows the light, we cannot constrain the flattening of an [*isothermal*]{} dSph galaxy[^5], even if the true functional form of the potential is known. This is likely because a velocity dispersion that is constant across the whole system carries too little information about the geometric shape of the potential. We can, however, still reliably constrain the mass parameter of such a system even though the true flattening remains unknown, and this can already be done for a realistic number of stars.
Axisymmetric NFW models {#subsec:nfwmodels}
-----------------------
We have shown that the Schwarzschild method can constrain correctly the mass parameter when the true functional form of the potential is known. Now, we will tackle the problem more realistically by allowing a different functional form for the potential. We consider an axisymmetric NFW-profile, and follow the parametrization of @Vogelsbergeretal2008: $$\label{Vogelsbergerpotential}
\Phi_\text{V}(\tilde{r}) = -4\pi G \rho_0 R^3_s \left[ \frac{\ln(1+\tilde{r}/R_s)}{\tilde{r}} \right] \, ,$$ where $R_s$ is the scale radius and $\rho_0$ a characteristic density parameter. In comparison to the spherical NFW-profile, the radius $r=\sqrt{R^2 + z^2}$ is replaced by a newly defined radius: $$\label{eq:rtilde}
\tilde{r}= \frac{(r_a+r)r_E}{r_a+r_E} \, ,$$ where, for the axisymmetric case, $r_E= \sqrt{\left(\frac{R}{a}\right)^2 + \left(\frac{z}{c}\right)^2}$ is the ellipsoidal radius, with $a$ and $c$ specifying the relative lengths of the major and minor axes, and $r_a$ is a transition radius. In addition, we require that $2a^2+c^2=3$, such that $a=c=1$ recovers the spherical NFW profile. For $r \gg r_a$, $\tilde{r} \rightarrow r$, whereas for $r \ll r_a$, $\tilde{r} \rightarrow r_E$. The gravitational potential is therefore axisymmetric in the central regions and becomes spherical in the outer regions. We set the transition radius to $r_a = 10$ kpc and keep it fixed in all our Vogelsberger models.
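For reference, Eqs. \[Vogelsbergerpotential\] and \[eq:rtilde\] translate into a few lines of Python. This is only a sketch: the axis lengths are assumed values satisfying $2a^2+c^2=3$ with $c/a \simeq 0.776$, the density is an arbitrary placeholder, and $G$ is expressed in kpc (km/s)$^2/M_\odot$:

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def r_tilde(R, z, a, c, r_a):
    """Effective radius: ellipsoidal well inside the transition
    radius r_a, spherical far outside it."""
    r = np.sqrt(R ** 2 + z ** 2)
    r_E = np.sqrt((R / a) ** 2 + (z / c) ** 2)
    return (r_a + r) * r_E / (r_a + r_E)

def phi_vogelsberger(R, z, rho0, R_s, a, c, r_a=10.0):
    """Axisymmetric NFW potential evaluated at cylindrical (R, z)."""
    rt = r_tilde(R, z, a, c, r_a)
    return -4.0 * np.pi * G * rho0 * R_s ** 3 * np.log(1.0 + rt / R_s) / rt

# limit checks: r_tilde approaches r_E well inside r_a, and r far outside
a, c = 1.0738, 0.8333   # assumed: 2a^2 + c^2 = 3 with c/a ~ 0.776
print(r_tilde(0.01, 0.0, a, c, 10.0))    # close to 0.01 / a (= r_E)
print(r_tilde(500.0, 0.0, a, c, 10.0))   # close to 500 (= r)
```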
To additionally guarantee that the total mass density is positive out to at least the orbits with the highest energies in our library ($\sim 50$ kpc), the flattening parameter must satisfy $c/a \gtrsim 0.7$ for a case with . For smaller scale radii, larger lower limits on $c/a$ are needed to satisfy the positive-density criterion.
For convenience, we define a characteristic mass parameter, $M_\text{1kpc}$ expressed in units of $M_{\odot}$, which corresponds to the total enclosed mass within 1 kpc from the centre for a spherical NFW profile with scale radius $R_s$, i.e. $$\label{NFWmass}
M_\text{NFW}(r = 1 {\rm kpc} \, \vert \, R_s) = 4 \pi \rho_0 {R_s}^3 \left[ \ln \left(\frac{R_s + r}{R_s} \right) - \left( \frac{r}{R_s + r} \right) \right]_{1 {\rm kpc}} \, .$$ From this equation we determine the value of $\rho_0$, and it is this value of $\rho_0$ that we use for the axisymmetric Vogelsberger potential in Eq. \[Vogelsbergerpotential\].
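Inverting Eq. \[NFWmass\] for $\rho_0$ is straightforward. A sketch, using $\log_{10} M_\text{1kpc} = 7.69$ and $R_s = 4.9$ kpc purely as example inputs:

```python
import numpy as np

def rho0_from_M1kpc(M_1kpc, R_s):
    """Characteristic density rho_0 [Msun/kpc^3] of a spherical NFW
    profile with scale radius R_s [kpc] that encloses M_1kpc [Msun]
    within 1 kpc; inverts the enclosed-mass relation."""
    r = 1.0  # kpc
    mu = np.log((R_s + r) / R_s) - r / (R_s + r)
    return M_1kpc / (4.0 * np.pi * R_s ** 3 * mu)

rho0 = rho0_from_M1kpc(10 ** 7.69, 4.9)
print(rho0)
```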
### The ‘true’ (equivalent) Vogelsberger system {#subsubsec:truevogelsbergersystem}
Before we can test the Schwarzschild orbit superposition method while assuming Vogelsberger mass models, we need to know when a result can be considered satisfactory. Since we could not constrain the flattening even when the true potential form was known, we will not aim to constrain the flattening for the Vogelsberger models. Nevertheless, the expected best-fit scale radius $R_s$ and mass $M_\text{1kpc}$ of our system will depend on the $c/a$-value assumed. In this section we therefore establish the values of the mass $M_\text{1kpc}$, scale radius $R_s$, and flattening $c/a$ for which the properties of the Evans mock galaxy are best reproduced.
{width="49.70000%"} {width="50.00000%"}
{width="55.00000%"} {width="44.00000%"}
Because most stars of our mock galaxy will have projected radii in between $0.5$ and $2.0$ kpc from the centre, we require that the flattening of the Vogelsberger potential should be comparable to that of the mock galaxy over this region. At a given position we define the Vogelsberger potential flattening $q_\text{V}$ as the axis ratio of the equipotential contour that goes through that point. For a position $(R, z)$, we thus define $q_\text{V}(R,z) = z_{\Phi} / R_{\Phi}$, where $\Phi(R=0, z_{\Phi}) \equiv \Phi(R_{\Phi}, z=0) \equiv
\Phi(R,z)$. On such an equipotential, it must hold that $\tilde{r}(R=0, z_{\Phi}) = \tilde{r}(R_{\Phi}, z=0)$, and since $\tilde{r}$ only depends on $c/a$, $q_\text{V}(R,z)$ is independent of our mass parameter and scale radius[^6]. We take values for $z_{\Phi}$ from $0.5$ to $2.0$ kpc in steps of $0.05$ kpc along the minor axis and compute the corresponding $R_{\Phi}$-values (i.e. the radii where the equipotential contours that belong to $z_{\Phi}$ cross the major axis). For a given $c/a$ we then compute the mean of the absolute differences between the Evans mock galaxy potential flattening ($q=0.8$) and the Vogelsberger potential flattening along the defined range for $z_{\Phi}$, i.e. we compute: $\text{mean}(|q-q_\text{V}(R=0,z_{\Phi})|)$. We find that for $c/a \simeq 0.776$ this average difference is smallest (see left panel of Fig. \[fig:findingvogelsbergertruth\]). For our range of $z_{\Phi}$ and $c/a = 0.776$, the Vogelsberger potential flattening increases almost linearly with $z_{\Phi}$, though the gradient is small ($0.018$ kpc$^{-1}$).
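Since equal $\tilde{r}$ implies equal potential, computing $q_\text{V}$ reduces to a 1D root-finding problem. A sketch (the axis lengths $a$, $c$ are assumed values satisfying $2a^2+c^2=3$ with $c/a \simeq 0.776$):

```python
import numpy as np
from scipy.optimize import brentq

def r_tilde_axis(x, axis_scale, r_a=10.0):
    # r_tilde on a principal axis, where r = x and r_E = x / axis_scale
    r_E = x / axis_scale
    return (r_a + x) * r_E / (r_a + r_E)

def q_V(z_phi, a, c, r_a=10.0):
    """Potential flattening at height z_phi on the minor axis: solve
    r_tilde(0, z_phi) = r_tilde(R_phi, 0) for R_phi, return z_phi/R_phi."""
    target = r_tilde_axis(z_phi, c, r_a)
    R_phi = brentq(lambda R: r_tilde_axis(R, a, r_a) - target, 1e-6, 100.0)
    return z_phi / R_phi

a, c = 1.0738, 0.8333
print(q_V(0.5, a, c), q_V(2.0, a, c))   # q_V grows slowly with z_phi
```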
Given this value for the flattening, we proceed to obtain the best equivalent values for the mass and scale radius of the mock galaxy now described by the Vogelsberger profile. We do this by comparing $|RF_R| \equiv \sqrt{R \left| \frac{\partial}{\partial R} \Phi(R,z=0) \right|}$ along the major axis and $|zF_z| \equiv \sqrt{\left| \, z \, \frac{\partial}{\partial z} \Phi(R=0,z) \right|}$ along the minor axis with their values for the mock galaxy. We investigate their trends for $R$- and $z$-values identical to those used for $z_{\Phi}$ previously.
We vary the scale radius and the mass parameter $\log_{10}(M_\text{1kpc}[M_{\odot}])$ and compute the mean of the absolute differences with respect to the mock galaxy obtained along the major and minor axis for $c/a = 0.776$. We denote this by $\langle \Delta v \rangle := \text{mean}[0.5 \{ \text{abs}(\Delta|RF_R|) + \text{abs}(\Delta|zF_z|) \} ]$. From the right panel of Fig. \[fig:findingvogelsbergertruth\] we infer that $\langle \Delta v \rangle$ is minimum for mass parameter $\log_{10}(M_\text{1kpc}[M_{\odot}]) \simeq 7.69$ and scale radius $R_s = 4.9$ kpc (green circle), although any value with $R_s \geq 2$ kpc works well, as $\langle \Delta v \rangle$ does not vary strongly. To be able to compare these findings to the results from our Schwarzschild models (see Sect. \[subsubsec:fittingvogelsbergermodels\]), we estimate the error on these ‘true’ parameters by considering those locations where $\langle \Delta v \rangle$ changes by a factor 2 with respect to its minimum value (green contour). The mass parameter is then within the range \[$7.63$, $7.73$\] and the scale radius larger than $2.4$ kpc. For smaller scale radii ($R_s < 2$ kpc) slightly higher values of the characteristic mass parameter would be preferred, but $\langle \Delta v \rangle$ is also larger in such cases. Note that the NFW mass value that we just estimated corresponds well to the mass enclosed within $1$ kpc of a spherical Evans model with and (as assumed in Sect. \[sec:amockgalaxy\]), since then $\log_{10}(M_\text{1kpc,Evans}[M_{\odot}]) \simeq 7.67$.
Although we will not constrain the flattening of the system, we can investigate how the expected best-fit parameters change if different values for $c/a$ are taken. Setting $c/a=0.70$ results in $\langle \Delta v \rangle = 0.32$ km/s for its minimum at $\log_{10}(M_\text{1kpc}[M_{\odot}]) \simeq 7.69$ and $R_s = 4.4$ kpc, and setting $c/a=0.85$ results in $\langle \Delta v \rangle = 0.37$ km/s for $\log_{10}(M_\text{1kpc}[M_{\odot}]) \simeq 7.69$ and $R_s = 5.1$ kpc. The expected best-fit mass parameter is thus not affected by the choice of $c/a$. The expected best-fit scale radius only increases slightly for larger values of the flattening parameter (i.e. rounder shapes), though the effect[^7] is rather small. In addition, the grey contours, which are drawn at fixed $\langle \Delta v \rangle$, span very similar regions for different values of $c/a$.
In Fig. \[fig:comparingvogelsbergertruth\] we compare the equivalent Vogelsberger potential to the true Evans potential of our galaxy. In the left panel we show the gradients of the potentials along the major and minor axis. Note that the Evans model seems to have lower $|RF_R|$ and $|zF_z|$ for $R\lesssim1$ kpc and $z\lesssim0.75$ kpc, respectively, than the NFW ‘equivalent’ model. In the panel on the right we confirm that the potential flattening is matched quite well by showing isopotential contours. Both panels reveal that only in the centre ($<0.7$ kpc) and at distances larger than $3$ kpc do the gradients of the two potentials start to deviate from each other.
In summary, the equivalent Vogelsberger system can be described by $\log_{10}(M_\text{1kpc}[M_{\odot}]) \simeq 7.69^{+0.04}_{-0.06}$ and by $R_s \gtrsim 2.4$ kpc (with its most likely value at $R_s=4.9$ kpc) for $c/a=0.776$.
### Fitting Vogelsberger models with the Schwarzschild method: exploring different sample sizes {#subsubsec:fittingvogelsbergermodels}
Since we could not constrain the flattening parameter when the potential functional form was known (see Sect. \[subsec:recoveringthemockgalaxyparameters\]), we cannot expect to constrain the flattening when we examine a different functional form. We set $c/a=0.80$, equal to the observed flattening of the light, and subsequently find the inferences on the mass and scale radius. We initially make a grid in ($\log_{10}(M_\text{1kpc})$, $R_s$)-space, where $R_s$ ranges from $1$ to $8$ kpc with steps of $\Delta R_s = 1$ kpc, while for the characteristic mass we take steps of $0.05$ for values from $\log_{10}(M_\text{1kpc}[M_{\odot}])=7.55$ to $\log_{10}(M_\text{1kpc}[M_{\odot}])=7.85$, i.e. just spanning a factor of 2 in mass. Later, we also decided to sample $\log_{10}(M_\text{1kpc}[M_{\odot}])=[7.68,7.72]$ for $R_s \in [1.5,7.5]$ kpc with a similar $\Delta R_s$ step.
![Difference between the best fit Vogelsberger model (M772Rs250, blue line in the subpanels) and the observed velocity dispersion when applying the Schwarzschild method in 9x9 kinematic-bins to our mock dataset consisting of $10^5$ stars in the FOV (see Fig. \[fig:testevans\_2apertures\_100000k\_moments\_fov\_theon\_results1\_q80v20\_chisigma\] for a comparison).[]{data-label="fig:vogelsberger_chisigma"}](figures/results1_vogelsberger_sb_chisigma.pdf){width="50.00000%"}
To be more efficient we decrease the number of orbits compared to Sect. \[subsec:recoveringthemockgalaxyparameters\] and set $N_\text{ener}=24$, $N_{I_2}=24$ and $N_{I_3}=8$, such that a total of $24 \times 24 \times 8 \times 5^3 = 576000$ suborbits are integrated and $2 \times 24 \times 24 \times 8 = 9216$ orbital weights are determined. We have found this gives good results in terms of recovery of the light profile and kinematics. In addition we also add regularisation terms to the fit in this more realistic experiment: by applying regularisation we set additional constraints such that the orbital weights are more smoothly distributed, i.e. in a more physical way (as the weights relate to the distribution function, which itself is expected to be smooth). More details on the concept of regularisation and its effects can be found in Appendix \[App:AppendixB\].
We present the results following the same structure of Sect. \[subsec:recoveringthemockgalaxyparameters\] and name the Vogelsberger models by MxxxRsyyy, where xxx $= 100 \log_{10}(M_{\text{1kpc}}[M_{\odot}])$ and yyy = $100 R_{s}$ (in kpc). We discuss how well we can recover the characteristic parameters of the Vogelsberger potential for mock datasets containing $10^5$, $10^4$ and $2000$ stars.
We start with the case of $10^5$ stars for which we use 9x9 kinematic-bins and no folding. For this case, we find that model M772Rs250 provides the best fit (). Fig. \[fig:vogelsberger\_chisigma\] shows that this model reproduces well the mock velocity dispersions in all kinematic-bins (since , which results in $0.68$ per kinematic constraint). The fit is of comparable quality to the best-fit Evans model (for the same case) although the light is recovered slightly less well, which may be driven by the smaller number of orbits being used now.
![Confidence intervals for the axisymmetric Vogelsberger model in ($\log_{10}(M_\text{1kpc})$, $R_s$) (after fixing $c/a = 0.8$) for the dataset with $10^5$ stars and 9x9 kinematic-bins. The $\Delta \chi^2=[2.3, 6.18, 11.8]$-contours are in white, grey and black respectively. The best-fit model is indicated by the white cross, while the expectations are given by the green contour (identical to that shown in Fig. \[fig:findingvogelsbergertruth\]). The mass parameter is well constrained and models with $R_s \leq 2.0$ kpc are strongly disfavoured, consistent with our expectations. The small panels on the right show the $\Delta \chi^2$-landscapes when only considering $\chi^2_{\text{2D light}}$ (top), $\chi^2_{\text{kin}}$ (top-middle), $\chi^2_{\text{3D light}}$ (bottom-middle), or $\chi^2_{\text{reg}}$ (bottom). []{data-label="fig:vogelsberger_results1_c_all_probcontours"}](figures/results1_vogelsberger_delchisquare_contour_MRs_remake_slicec080.pdf){width="50.00000%"}
Fig. \[fig:vogelsberger\_results1\_c\_all\_probcontours\] shows the resulting $\Delta \chi^2$-distribution in ($\log_{10}(M_\text{1kpc})$, $R_s$)-parameter space. The scale radius of the Vogelsberger potential is constrained to and the mass parameter to $\log_{10}(M_\text{1kpc}[M_{\odot}]) = 7.72^{+0.01}_{-0.01}$. The Schwarzschild model thus prefers values towards the lower end for the scale radius and a mass parameter that agrees well with our expectations. The panels on the right show the $\Delta \chi^2$-landscapes when only considering $\chi^2_{\text{2D light}}$ (top), $\chi^2_{\text{kin}}$ (top-middle), $\chi^2_{\text{3D light}}$ (bottom-middle), or $\chi^2_{\text{reg}}$ (bottom). The total $\Delta \chi^2$-landscape is dominated by the kinematics and 2D light.
Similar best-fit parameters are obtained for a smaller mock dataset with $10^4$ stars when folding the data into 5x5 kinematic-bins, as shown in Fig. \[fig:vogelsberger\_results2\_folded9x9\_c\_all\_probcontours\]. The mass and scale parameters are constrained to and $\log_{10}(M_\text{1kpc}[M_{\odot}]) = 7.75^{+0.05}_{-0.03}$. For the best-fit model M775Rs300, and , or $0.339$ per kinematic constraint on average. This $\chi^2_\text{tot}$ is lower than for the case of $10^5$ stars, likely because we folded the data. In comparison to the best-fit Evans model, the quality of the fit of the kinematics is slightly worse but still very good.
When decreasing the sample size even further to $2000$ stars, we find that models with low values of $R_s$ and larger $\log_{10}(M_\text{1kpc}[M_{\odot}])$ are now preferred, as shown in Fig. \[fig:vogelsberger\_results2000\_folded9x9\_c\_all\_probcontours\], although the 95%-confidence region still overlaps with the expected values of the parameters. We obtain best-fit values of and $\log_{10}(M_\text{1kpc}[M_{\odot}]) = 7.80^{+0.02}_{-0.01}$.
It is interesting to note that the shape of the confidence contours obtained from the Schwarzschild method for all sample sizes follows very closely the shape of the contours of $\langle \Delta v \rangle$ depicted in Fig. \[fig:findingvogelsbergertruth\]. Recall that the quantity $\langle \Delta v \rangle$ is a proxy for the difference in enclosed mass between the Evans and Vogelsberger models. This implies that Schwarzschild’s method is in fact very sensitive to the enclosed mass, and that it identifies the set of Vogelsberger models that best follow the true underlying mass distribution. Also interesting is that the trend favouring larger values of the mass parameter with decreasing sample size is present for the Evans as well as for the Vogelsberger models.
![Similar to Fig. \[fig:vogelsberger\_results1\_c\_all\_probcontours\], but now after fitting mock data consisting of $10^4$ stars and folding into 5x5 kinematic-bins. The decrease in sample size (by a factor 10) has led to a slight increase in the area spanned by the probability contours, although the inference on the mass parameter is still very good and has only shifted to slightly higher masses.[]{data-label="fig:vogelsberger_results2_folded9x9_c_all_probcontours"}](figures/results2_vogelsberger_folded9x9_delchisquare_contour_MRs_remake_slicec080.pdf){width="50.00000%"}
![ As in Fig. \[fig:vogelsberger\_results2\_folded9x9\_c\_all\_probcontours\], but now for a dataset with $2000$ stars. Note how the confidence contours follow the shape of the green contour (derived in Fig. \[fig:findingvogelsbergertruth\]). []{data-label="fig:vogelsberger_results2000_folded9x9_c_all_probcontours"}](figures/results2000_vogelsberger_folded9x9_delchisquare_contour_MRs_remake_slicec080.pdf){width="50.00000%"}
We compare the Evans and Vogelsberger best-fit models to the observed velocity dispersions in Fig. \[fig:comparesigmas\]. The left and right panels compare the behaviour on the major and minor axes respectively, for different sample sizes: $10^5$, $10^4$ and $2000$ stars (in the top, middle and bottom rows respectively). The shaded areas enclose the minimum and maximum velocity dispersions for the evaluated models within the $\Delta \chi^2=[2.3, 6.18, 11.8]$-contours. These comparisons show that the Evans models fit the kinematics slightly better, but that nearly equally good fits are provided by the Vogelsberger models (except along the minor axis for the smallest dataset, bottom right panel).
From the analyses presented in this section we may thus conclude that the Schwarzschild modelling technique is sensitive to the enclosed mass and successfully constrains the mass parameter of the models, even if the functional form of the potential is not known.
{width="100.00000%"}
Discussion and Conclusions {#sec:discussionandconclusion}
==========================
We explored the ability of Schwarzschild’s orbit superposition method to characterise the intrinsic properties of an axisymmetric dSph galaxy, such as its mass, scale radius and flattening. We did this by setting up an isothermal Sculptor-like mock galaxy that is flattened in both the luminous and dark components. We have shown that Schwarzschild’s method, applied to mock datasets with a realistic number of stars with measured radial velocities distributed following the luminosity profile of the system, is successful in recovering the characteristic mass parameter of the underlying (true) logarithmic potential, even if the potential flattening is not known. On the other hand, we find that we cannot put constraints on the flattening parameter.
Most likely, our inability to constrain the flattening is a consequence of our choice of the specific Evans model for our mock galaxy. In this model, whose distribution function is ergodic, the line-of-sight velocity profile is exactly the same everywhere and depends on the mass parameter only. This means that the kinematics are independent of the inclination and flattening, and the light alone does not contain enough information to constrain the flattening parameter.
One might also argue that it may not be optimal for a spectroscopic survey to sample stars according to the light profile of the system. In fact, slightly better results were obtained when the dataset with radial velocities provided an equal number of stars in each kinematic-bin. All these factors, in combination with the fact that for our specific Evans model just $\sim\!\!30$% of the system’s light is within our FOV, are likely playing a role. Better results might, however, be obtained with a more realistic and general distribution function (i.e. non-ergodic), applied to a galaxy for which the kinematic tracers cover the full system well and sample the outskirts more densely.
Since in reality the potential functional form is not known, we also explored the case in which we assume an axisymmetric NFW model. We first determined the values of the characteristic parameters of the NFW model that mimic the mock galaxy best by comparing some basic properties (potential flattening and gradients of the potential). We found that even in this case, i.e. when the orbits that form the building blocks of Schwarzschild’s method are integrated in the wrong potential, we can retrieve the correct characteristic mass and scale parameters.
We have explored the dependencies of our results on the sizes of the data samples used, and find that a decrease in the number of stars with line-of-sight velocities only slightly affects the determination of the characteristic parameters of the model. For the smallest sample considered, with $2000$ stars, the inference on the mass of the NFW “equivalent” model is somewhat poorer, but the true value differs by only 20% from the best-fit and also lies within the 95% confidence interval.
We have checked that our results are not strongly dependent on the choices of e.g. the number of orbits in the orbit libraries, number of kinematic- or light-bins, and the number of velocity bins. Furthermore we have also briefly investigated the distribution functions for the best-fit models, and found that, particularly when regularisation is included, they are quite similar to the distribution function of the mock dwarf spheroidal galaxy.
In conclusion, it is promising that the mass of our flattened system can be recovered so well even if the flattening parameter is unknown. This is also aligned with the results of @Kowalczyketal2018_theeffectofnonsphericity, who applied their spherical Schwarzschild models to non-spherical objects. To some extent, this provides us with more confidence regarding previously reported estimates of the mass of dSph galaxies obtained assuming spherical symmetry.
We thank R. van den Bosch for supplying us the Schwarzschild code and L. Posti and P.T de Zeeuw for many useful discussions regarding the project. A.H. acknowledges financial support from a VICI grant from the Netherlands Organisation for Scientific Research, NWO.
Generating a mock dataset with realistic errors {#App:AppendixA}
===============================================
Like in @Breddelsetal2013, we define $v_i$ as the true line-of-sight velocity of star $i$ and $\epsilon_i$ as the (true and unknown) measurement error on that star. Therefore $v_i + \epsilon_i$ is the observed velocity of star $i$. The measurement errors are drawn from a Gaussian distribution with $\sigma=2$ km/s, so the expectation values of their moments are given by: $E\left[ \langle \epsilon^n_i \rangle \right]= E \left[ \epsilon^n_i \right] = 0$ for odd $n$ and $s_n \equiv E \left[ \langle \epsilon^n_i \rangle \right] = E \left[\epsilon^n_i \right] = (n-1)!!\,\sigma^n$ for even $n$. In our terminology, $\hat{\mu}_n = E[\langle v_i^n \rangle] = E[v_i^n]$ denotes the true $n^{\text{th}}$ moment, $\mu_n$ is its estimator, and the observed $n^{\text{th}}$ moment is $m_n = \frac{1}{N} \sum\limits^N_{i=1} (v_i + \epsilon_i)^n$ for a sample of $N$ stars in a given positional bin on the sky (i.e. kinematic-bin).
Since we want to know the true value of the moments, i.e. without measurement errors, we compute the estimators of the true moments. We also use raw moments (i.e. moments not taken about the mean velocities); in what follows, ‘moments’ thus denotes ‘raw moments’. Since in practice we can only compute the estimators of the true moments, we replaced $\hat{\mu}_n$ by $\mu_n$ on the right-hand side of the following equations. The first four moment estimators are then given by: $$\mu_1 = \frac{1}{N} \sum\limits^N_{i=1} (v_i + \epsilon_i) \, ,$$ $$\mu_2 = \frac{1}{N} \sum\limits^N_{i=1} (v_i + \epsilon_i)^2 - s_2 \, ,$$ $$\mu_3 = \frac{1}{N} \sum\limits^N_{i=1} (v_i + \epsilon_i)^3 - 3 \mu_1 s_2 \, ,$$ and $$\mu_4 = \frac{1}{N} \sum\limits^N_{i=1} (v_i + \epsilon_i)^4 - 6 \mu_2 s_2 - 3 s^2_2 \, .$$
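The bias-corrected estimators above are straightforward to implement; the following is a minimal numpy sketch (the function name is ours), assuming a single kinematic-bin and a known Gaussian error dispersion $\sigma$:

```python
import numpy as np

def moment_estimators(w, sigma=2.0):
    """Bias-corrected estimators of the first four raw velocity moments.

    w     : observed velocities v_i + eps_i in one kinematic-bin (km/s)
    sigma : dispersion of the Gaussian measurement errors (km/s),
            so that s_2 = sigma**2
    """
    s2 = sigma**2
    mu1 = np.mean(w)
    mu2 = np.mean(w**2) - s2
    mu3 = np.mean(w**3) - 3.0 * mu1 * s2
    mu4 = np.mean(w**4) - 6.0 * mu2 * s2 - 3.0 * s2**2
    return mu1, mu2, mu3, mu4
```

For a Gaussian line-of-sight velocity distribution with dispersion $\sigma_{\rm los}$, $\mu_2$ should approach $\sigma_{\rm los}^2$ and $\mu_4$ should approach $3\sigma_{\rm los}^4$, irrespective of the measurement noise.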
To compute the error on these moments, we take the square root of their variance; $Var(\hat{\mu}_n) \approx Var(\mu_n) \approx Var(m_n) = E[{m_n}^2] - (E[m_n])^2$: $$\begin{split}
Var(\mu_1) &= \frac{\mu_2 + s_2 - \mu_1^2}{N} \\
&= \frac{1}{N} \left\{ \frac{1}{N} \sum\limits^N_{i=1} (v_i + \epsilon_i)^2 - \left[ \frac{1}{N} \sum\limits^N_{i=1} (v_i + \epsilon_i) \right]^2 \right\} \, ,
\end{split}$$ $$\begin{split}
Var(\mu_2) &= \frac{1}{N} \left[ \mu_4 - \mu_2^2 + 4 \mu_2 s_2 + 2s^2_2\right] \\
&= \frac{1}{N} \left\{ \frac{1}{N} \sum\limits^N_{i=1} (v_i + \epsilon_i)^4 - \left[ \frac{1}{N} \sum\limits^N_{i=1} (v_i + \epsilon_i)^2 \right]^2 \right\} \, ,
\end{split}$$ $$\begin{split}
Var(\mu_3) &= \frac{1}{N} \left[ \mu_6 +15\mu_4 s_2 + 45\mu_2 s^2_2 + 15s^3_2 - \mu^2_3 \right.\\
& \textrm{\, \, \,} \left. -6 \mu_3 \mu_1 s_2 - 9\mu^2_1 s^2_2 \right] \\
&= \frac{1}{N} \left\{ \frac{1}{N} \sum\limits^N_{i=1} (v_i + \epsilon_i)^6 - \left[ \frac{1}{N} \sum\limits^N_{i=1} (v_i + \epsilon_i)^3 \right]^2 \right\} \, ,
\end{split}$$ and $$\begin{split}
Var(\mu_4) &= \frac{1}{N} \left[ \mu_8 + 28\mu_6 s_2 -\mu^2_4 - 12 \mu_4 \mu_2 s_2 + 204\mu_4 s^2_2 \right. \\
& \textrm{\, \, \,} \left. - 36\mu^2_2 s^2_2 + 384\mu_2 s^3_2 + 96s^4_2\right] \\
&= \frac{1}{N} \left\{ \frac{1}{N} \sum\limits^N_{i=1} (v_i + \epsilon_i)^8 - \left[ \frac{1}{N} \sum\limits^N_{i=1} (v_i + \epsilon_i)^4 \right]^2 \right\} \, .
\end{split}$$ where the errors on the third and fourth moment estimators also depend on: $$\mu_6 = \frac{1}{N} \sum\limits^N_{i=1} (v_i + \epsilon_i)^6 - 15 \mu_4 s_2 -45 \mu_2 s^2_2 - 15 s^3_2 \, ,$$ and $$\mu_8 = \frac{1}{N} \sum\limits^N_{i=1} (v_i + \epsilon_i)^8 - 28 \mu_6 s_2 - 210 \mu_4 s^2_2 - 420 \mu_2 s^3_2 - 105 s^4_2 \, .$$ Obviously the errors on the moments decrease when the number of stars in a kinematic-bin increases.\
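The simplified forms on the second lines of the variance expressions above, $Var(\mu_n) \approx \frac{1}{N}\left[\langle (v+\epsilon)^{2n}\rangle - \langle (v+\epsilon)^{n}\rangle^2\right]$, can be evaluated directly from the observed velocities; a minimal sketch (the function name is ours):

```python
import numpy as np

def moment_errors(w):
    """1-sigma errors on the first four raw-moment estimators.

    Uses Var(mu_n) ~ (1/N) [ <w^(2n)> - <w^n>^2 ], with w_i = v_i + eps_i
    the observed velocities in one kinematic-bin.
    """
    N = len(w)
    return [np.sqrt((np.mean(w**(2 * n)) - np.mean(w**n)**2) / N)
            for n in (1, 2, 3, 4)]
```

The $1/\sqrt{N}$ scaling makes explicit why the errors on the moments decrease as the number of stars per kinematic-bin grows.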
The effect of the sampling of line-of-sight velocities {#app:light}
======================================================
In the main paper we have drawn samples of line-of-sight velocities that follow the light distribution of the mock galaxy. Here we show the results of applying the Schwarzschild modelling technique to a dataset consisting of $10^5$ stars, but this time distributed such that each kinematic-bin has an equal number of stars.
Fig. \[fig:resultsENSPB\] presents the inference on the mass and flattening parameter and should be compared to Fig. \[fig:evans\_results1\_probcontours\] and \[fig:testevans\_2apertures\_100000k\_moments\_fov\_theon\_results1\_q80v20\_chisigma\]. As can be observed, the inferences are very similar, with the best-fit flattening parameter slightly shifted towards the input/true flattening value. Nonetheless, the flattening remains fairly unconstrained.
Regularisation {#App:AppendixB}
==============
The solution of our minimisation problem may result in a distribution of orbital weights that is rapidly varying or shows sharp discontinuities. Such a distribution would not be physical. Therefore we make the distribution of the orbital weights smoother by adding extra terms to the $\chi^2$-fitting algorithm such that a new quantity $\widetilde{\chi^2_{\text{tot}}}$ is minimised: $$\label{eq:chi2totplusreg}
\widetilde{\chi^2_{\text{tot}}} = \chi^2_{\text{tot}} + \chi^2_{\text{reg}} \, .$$ This procedure is called regularisation. The regularisation strength is chosen such that the orbital weights are forced to change smoothly from one neighbouring orbit to the next, while finding similar values for the best-fit characteristic parameters. In addition, the confidence contours should not be significantly shaped by the $\chi^2_{\text{reg}}$-term. We refer the reader to @vandenBoschetal2008 for more information about the exact implementation, in particular to Eqs. 28 and 29 of that paper. These equations require the 3-dimensional stellar density profile. For this work we assumed $\rho_{\text{lum}}$ to be known (see Eq. \[eq:rholum\]). In reality one needs the inclination angle to transform the observed surface brightness profile into the stellar density profile.
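The minimisation of $\widetilde{\chi^2_{\text{tot}}}$ over non-negative orbital weights can be sketched as a penalised non-negative least-squares problem. The following toy version is illustrative only (the actual implementation follows @vandenBoschetal2008); the second-difference penalty matrix stands in for the $\chi^2_{\text{reg}}$-term, and all names and the choice of penalty are ours:

```python
import numpy as np
from scipy.optimize import nnls

def fit_orbit_weights(A, b, lam):
    """Minimise ||A w - b||^2 + lam * ||L w||^2 over weights w >= 0.

    A   : (n_constraints, n_orbits) matrix of per-orbit model contributions
    b   : observed constraints (light and velocity moments), pre-scaled by
          their errors so the first term is a chi^2
    lam : regularisation strength
    """
    n = A.shape[1]
    # Second-difference smoothing matrix over the orbit sequence:
    # penalises rapid variation between neighbouring orbital weights.
    L = (np.diag(np.full(n, -2.0))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    # Stacking [A; sqrt(lam) L] turns the penalised problem into a
    # single non-negative least-squares fit.
    A_aug = np.vstack([A, np.sqrt(lam) * L])
    b_aug = np.concatenate([b, np.zeros(n)])
    w, _ = nnls(A_aug, b_aug)
    return w
```

Increasing `lam` smooths the weight distribution; as discussed above, it should be kept small enough that the best-fit characteristic parameters and confidence contours are not significantly altered.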
In the bottom panels of Fig. \[fig:df\_q80v20\_addingregularisation\], we show the effect of adding regularisation for the Evans q80v20 model (i.e. this is the true model) on the distribution of angular momentum around the symmetry axis ($L_z$). The distributions can be compared to those of the q80v20 model without regularisation (top panels). We here show the example with $10^5$ stars with line-of-sight velocities. The modelled distribution functions (blue) are generally smoother when regularisation is used. As a reference we also include the distributions for a realisation of the mock galaxy (red). The model reproduces the mock distribution reasonably well, though some differences exist. The fact that only $\sim30\%$ of the total number of stars of the mock galaxy end up in our FOV might play a role here, in addition to the fact that we have discretized the data (by using kinematic-bins, and by modelling only the first four velocity moments).
[^1]: nor is this distribution function ideal as we shall see later in the paper, because it provides very little information on the symmetries of the system.
[^2]: Note that the Schwarzschild code by @vandenBoschetal2008 was developed to model triaxial systems, and therefore also generates initial conditions for box orbits, which have zero time-averaged angular momentum and which can cross the centre [@Schwarzschild1979; @Schwarzschild1993]. In an axisymmetric potential $L_z$ is conserved and such box orbits will therefore never attain velocities in the azimuthal direction. As this could cause non-axisymmetries in our model we do not specifically generate box orbits.
[^3]: When taking too few (i.e. too wide) velocity bins for the velocity grid, the velocity moments might not be recovered correctly. We have checked that if we bin the true Gaussian line-of-sight velocity profile of our mock galaxy as described above, thus discarding the contribution of velocities that are outside the range of the grid, the velocity moments are recovered well: the first and third moments are not affected, while the second and fourth velocity moments have relative errors of order 0.1% and 2%, respectively, given the choices made for binning the velocity data.
[^4]: We used the scipy.interpolate.LinearNDInterpolator to interpolate the $\Delta \chi^2$.
[^5]: Slightly better results can be obtained by sampling uniformly with distance, see Appendix \[app:light\].
[^6]: More precisely, $\tilde{r}$ depends on $r_a$ and $r_E(c/a)$, but we have chosen to fix the value of $r_a$ (and make it independent of $R_s$).
[^7]: Even for a spherical potential, i.e. $c/a=1.0$, we find the minimum $\langle \bigtriangleup v \rangle = 0.62$ km/s to be located at $\log_{10}(M_\text{1kpc}[M_{\odot}]) \simeq 7.68$ and $R_s = 6.1$ kpc.
---
abstract: 'Submillimetre (submm) observations of WISE-selected, dusty, luminous, high-redshift galaxies have revealed intriguing overdensities around them on arcmin scales. They could be the best signposts of overdense environments on the sky.'
author:
- 'Suzy F. Jones,$^1$ and Andrew W. Blain$^1$'
title: 'Overdensities of SMGs around WISE-selected, ultra-luminous, high-redshift galaxies'
---
NASA’s Wide-Field Infrared Survey Explorer (WISE) (Wright et al., 2010) found dusty, luminous, high-redshift, active galaxies because the hot dust heated by AGN and/or starburst activity can be traced using the WISE 12$\mu$m (W3) and 22$\mu$m (W4) bands. Eisenhardt et al. (2012), Bridge et al. (2013) and Lonsdale et al. (submitted) have shown that WISE can find different classes of interesting, luminous, high-redshift, dust-obscured AGN: with faint or undetectable flux densities in the 3.4$\mu$m (W1) and 4.6$\mu$m (W2) bands, and well-detected flux densities in the W3 and/or W4 bands. A radio-blind sample known as “W1W2-dropouts” or hot dust-obscured galaxies (Hot DOGs) was observed in the submm/mm by Wu et al. (2012) and Jones et al. (2014). A radio-selected sample known as WISE/radio-selected AGNs has also been observed with JCMT SCUBA-2 at 850$\mu$m (Jones et al. 2015) and ALMA (Lonsdale et al. submitted). Submm observations are used to trace their coldest dust emission and to calculate their total IR luminosities. Both samples were found to have high total IR luminosities and spectral energy distributions (SEDs) that are not well represented by standard AGN templates, owing to an excess mid-IR to submm emission ratio (Jones et al. 2014, 2015).
Serendipitous SMG sources are detected in the deepest 850$\mu$m SCUBA-2 1.5-arcmin-radius regions around both the Hot DOGs and WISE/radio-selected AGNs. There were 17 and 81 serendipitous 850$\mu$m sources detected at greater than 3$\sigma$ significance in the 10 Hot DOG and 30 WISE/radio-selected AGN SCUBA-2 maps, respectively. Both samples show an overdensity of accompanying SMGs compared to two different blank-field submm surveys (Weiss et al. 2009; Casey et al. 2013).
Observations of dusty, luminous, high-redshift galaxies have revealed significant evidence that the galaxy density in their environments appears to be above average (Blain et al. 2004; Scott et al. 2006; Farrah et al. 2006; Chapman et al. 2009; Hickox et al. 2009; Cooray et al. 2010; Hickox et al. 2012). For example, the Clusters Around Radio-Loud AGN (CARLA) *Spitzer* programme that looked at the environments of radio-loud AGN (RLAGN) at $1.2 < z < 3.2$ concluded that RLAGN are in overdense environments at mid-IR wavelengths, and could be signposts of high-redshift galaxy clusters (Wylezalek et al. 2013; Hatch et al. 2014). Clustering measurements of these mid-IR sources and SMGs could yield evidence about the nature of their massive dark matter haloes and highlight their bias with respect to the underlying dark matter distribution.
An overdensity of SMGs was found in the fields of Hot DOGs and WISE/radio-selected AGN by factors of $\sim$ 3 and $\sim$ 5, respectively (Jones et al. 2014, 2015). Hot DOGs have a higher typical redshift ($z=2.7$) than WISE/radio-selected AGNs ($z=1.7$). Due to the K-correction effect, serendipitous SMG detection is largely independent of redshift, so although the Hot DOGs lie at typically higher redshift than the WISE/radio-selected AGNs, the companions of both samples are probed to comparable luminosities. The higher overdensity around WISE/radio-selected AGN could be due to their lower redshift.
This agrees with ALMA cycle-0 results of WISE/radio-selected AGNs (Silva et al. 2014), where 23 serendipitous SMG sources were detected in 17 out of 49 fields, which implies an overdensity factor of $\sim$ 10 on $\sim$ 20arcsec scales. The overdensity of serendipitous SMG sources in the ALMA fields is double that in our SCUBA-2 fields when compared to previous submm surveys; this could be due to multiplicity in SMGs (Karim et al. 2013), whereby multiple SMGs, typically separated by $\sim$ 6arcsec, are resolved in high-resolution ALMA maps (1.5arcsec resolution) but are blended into one source at the resolution of single-dish surveys (the JCMT used in this paper has a resolution of 15arcsec at 850$\mu$m). No fields in either sample contained zero serendipitous sources, with $\sim$ 2 typically found in each Hot DOG field and $\sim$ 3 in each WISE/radio-selected AGN field, compared to $\sim$ 1 typically found in blank-field submm surveys (Coppin et al. 2006; Weiss et al. 2009; Casey et al. 2013).
Hot DOGs and WISE/radio-selected AGNs are highly luminous and could be the best signposts of overdense environments of active, dusty, luminous galaxies.
References:
Blain A. W. et al., 2004, ApJ, 611, 725
Bridge C. R. et al., 2013, ApJ, 769, 91
Casey C. M. et al., 2013, MNRAS, 436, 1919
Chapman S. C. et al., 2009, ApJ, 691, 560
Coppin K. et al., 2006, MNRAS, 372, 1621
Cooray A. et al., 2010, A$\&$A, 518, L22
Eisenhardt P. R. et al., 2012, ApJ, 755, 173
Farrah D. et al., 2006, ApJL, 641, L17
Hatch N. A. et al., 2014, MNRAS, 445, 280
Hickox R. C. et al., 2009, ApJ, 696, 891
Hickox R. C. et al., 2012, MNRAS, 421, 284
Jones S. F. et al., 2014, MNRAS, 443, 146
Jones S. F. et al., 2015, MNRAS, ArXiv 1503.02561
Karim A. et al., 2013, MNRAS, 432, 2
Scott S. E. et al., 2006, MNRAS, 370, 1057
Silva A. L. et al., 2014, in American Astronomical Society Meeting Abstracts, 224
Tsai C. W. et al., 2014, ArXiv 1410.1751T
Weiss A. et al., 2009, ApJ, 707, 1201
Wright E. L. et al., 2010, AJ, 140, 1868
Wu J. et al., 2012, ApJ, 756, 96
Wylezalek D. et al., 2013, ApJ, 769, 79
---
abstract: |
Recent advances in the field of network representation learning are mostly attributed to the application of the skip-gram model in the context of graphs. State-of-the-art analogues of the skip-gram model in graphs define a notion of neighbourhood and aim to find the vector representation for a node which maximizes the likelihood of preserving this neighbourhood.
In this paper, we take a drastic departure from the existing notion of neighbourhood of a node by utilizing the idea of *coreness*. More specifically, we utilize the well-established idea that nodes with similar core numbers play equivalent roles in the network, and hence induce a novel and organic notion of neighbourhood. Based on this idea, we propose *core2vec*, a new algorithmic framework for learning a low-dimensional continuous feature mapping for a node. Consequently, nodes having similar core numbers lie relatively closer in the vector space that we learn.
We further demonstrate the effectiveness of *core2vec* by comparing word similarity scores obtained by our method, with node representations drawn from standard word association graphs[^1], against scores computed by other state-of-the-art network representation techniques like node2vec, DeepWalk and LINE. Our method consistently outperforms these existing methods, in some cases achieving improvements as high as **46%** on certain ground-truth word similarity datasets. We make all code used in this paper available in the public domain: <https://github.com/Sam131112/Core2vec_test>.
author:
- |
Soumya Sarkar$^1$, Aditya Bhagwat$^2$, Animesh Mukherjee$^3$\
[Department of Computer Science and Engineering, IIT Kharagpur, India]{}\
[`soumya015@iitkgp.ac.in`$^1$`, bhagwat.work@gmail.com`$^2$]{}\
[`animesh@cse.iitkgp.ernet.in`$^3$]{}\
bibliography:
- 'Trans.bib'
title: |
Core2Vec: A core-preserving feature\
learning framework for networks
---
![Toy example showing connectivity profiles in each core. Nodes in the same core play similar roles, as is evident from the figure, even though they may be non-adjacent. \[connects\_1\]](sw.jpg){width="0.5\linewidth" height="0.12\textheight"}
Conclusion
==========
To the best of our knowledge, this paper is the first work to demonstrate a network embedding task utilizing global information such as the coreness of a node. We successfully show that our embedding approach brings nodes with similar cores together in the latent dimensions and separates nodes from disparate cores.
We apply our embedding approach to the task of detecting similar words by training on two large word association networks. Embeddings obtained by our approach map similar words closer together in the latent space than other baseline approaches do.
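The core numbers underpinning this notion of neighbourhood can be computed in near-linear time. The following minimal sketch uses networkx; the toy graph and variable names are illustrative and not part of the core2vec pipeline:

```python
import networkx as nx

# Toy graph: a dense 4-clique (a 3-core) with a pendant path attached.
# Core numbers separate "core" nodes from "periphery" nodes.
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),  # K4
                  (3, 4), (4, 5)])                                 # path

# node -> largest k such that the node belongs to the k-core
core = nx.core_number(G)

# Nodes with equal core number are treated as playing equivalent roles,
# e.g. when defining the contexts used to train the embedding.
by_core = {}
for node, k in core.items():
    by_core.setdefault(k, set()).add(node)
```

Here the clique nodes share core number 3 while the pendant nodes have core number 1, even though some equal-core nodes are non-adjacent, which is exactly the role equivalence the figure above illustrates.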
[^1]: In linguistics, such networks built from various linguistic units are known to have a core-periphery structure (see [@choudhury2010global] and the references therein).
---
abstract: 'Exceptional points are values of the spectral parameter for which the homogeneous Faddeev scattering problem has a non-trivial solution. We study the existence/absence of exceptional points for small perturbations of conductive potentials of arbitrary shape and show that problems with absorbing potentials do not have exceptional points in a neighborhood of the origin. A criterion for existence of exceptional points is given.'
author:
- 'Evgeny Lakshtanov[^1]'
- 'Boris Vainberg[^2]'
title: Exceptional points in Faddeev scattering problem
---
**Key words:** exceptional points, Faddeev’s Green function, conductive potential
Introduction
============
The paper concerns the 2-D Faddeev scattering problem, in which the incident waves grow exponentially at infinity [@faddeev]. This problem is often used for solving inverse problems [@Nachman]-[@RN2]. Let us recall the statement of the Faddeev problem. Let $\mathcal O$ be an open bounded domain in $\mathbb R^2=\mathbb R^2_z,~z=(x,y),$ with $C^2$ boundary $\partial \mathcal O$ and the outward normal $\nu$.
Let $\zeta=(\zeta_1,\zeta_2)\neq 0$ be a vector with complex components $\zeta_i\in\mathbb C$ and $\zeta_1^2+\zeta_2^2=E\geq 0$. Mostly we will deal with the case when the energy $E$ is equal to zero, i.e., $\zeta=(k,\pm ik),~k\in \mathbb C \backslash 0 $. Then $u=u(z,\zeta)$ is a solution of the Faddeev scattering problem if $$\label{ref0401B}
-\Delta u - n u =0, ~ z\in \mathbb R^2,$$ $$\label{ref0401Ba}
u(z,\zeta)=e^{i\zeta \cdot z} + u^{out}, \quad e^{-i\zeta \cdot z} u^{out} \in W^{1,p}(\mathbb R^2), ~ p>2,$$ where $\zeta\cdot z=\zeta_1x+\zeta_2y$ and the potential $n=n(z)$ is bounded in $\mathcal O$, complex-valued if the opposite is not claimed, and vanishes outside $\overline{\mathcal O}$.
The main object of our study is the set $\mathcal E$ of [*exceptional points*]{} $ 0\neq k\in \mathbb C$ such that the homogeneous problem (\[ref0401B\]), (\[ref0401Ba\]) with $ \zeta=(k,\pm ik)$ has a nontrivial solution. This homogeneous problem has the form $$\label{ref0414A}
-\Delta v- n v =0, ~ z\in \mathbb R^2; \quad
e^{-i\zeta \cdot z} v\in W^{1,p}(\mathbb R^2), ~ p>2.$$ We will say that an exceptional point has [*multiplicity*]{} $m$ if the dimension of the solution space of problem (\[ref0414A\]) is $m$. We restrict ourselves to the case $\zeta=(k,ik), k \in \mathbb C$; the case $\zeta=(k,-ik), k \in \mathbb C \backslash \{0\},$ can be treated similarly (see discussion in [@Nachman p.76]).
In the case of positive energy, equation (\[ref0401B\]) should be replaced by $$\label{ref0408B}
-\Delta u -Eu- n u =0, ~ z=(x,y)\in \mathbb R^2.$$ The following parametrization (see [@RN2]) of $\zeta\in \mathbb C^2,~\zeta^2=E>0,$ is used in this case instead of $\zeta=(k,ik)$: $$\zeta = \left ( \begin{array}{c}
(\lambda + \frac{1}{\lambda})\frac{\sqrt{E}}{2} \\
(\frac{1}{\lambda} - \lambda)i\frac{\sqrt{E}}{2}
\end{array} \right ), \quad |\lambda| \neq 1.$$ The fundamental solution that corresponds to outgoing waves governed by (\[ref0408B\]) has the form $$G_\zeta(z)= e^{i\zeta \cdot z} \frac{1}{(2\pi)^2} \int_{\mathbb R^2} \frac{e^{iz' \cdot z}}{|z'|^2 + 2 \zeta \cdot z' } dz' ,$$ and condition (\[ref0401Ba\]) should be replaced by the representation of the solution $u$ through $G$: $$\label{2a}
u(z,\zeta)=e^{i\zeta \cdot z} + \int_{\partial \mathcal O} G_\zeta(z-w)\mu_\zeta(w)dl_w, ~ \mu \in H^{-\frac{1}{2}}(\partial \mathcal O), ~ z \in \mathbb R^2 \backslash \mathcal O.$$
[**Definition.**]{} A point $\lambda \in \mathbb C \backslash \{0\}, |\lambda|\neq 1,$ will be called [*exceptional*]{} if problem (\[ref0408B\]), (\[2a\]) has a nontrivial solution. The multiplicity of an exceptional point is defined by the number of linearly independent solutions of (\[ref0408B\]), (\[2a\]). The set of all exceptional points will be denoted by $ \mathcal E(E)$.
Note that the incident waves in the classical scattering problem (with positive energy) have the form $e^{i(k_1x+k_2y)}$. The exceptional set in this case is empty due to the absence of eigenvalues embedded in the continuous spectrum (a very simple proof can be found in [@vai]). Similar arguments do not work for the Faddeev scattering problem since outgoing solutions of (\[ref0414A\]) and (\[ref0408B\]) do not decay at infinity.
The knowledge of exceptional points is particularly important when the Faddeev scattering problem is applied to solve the inverse problem of recovering the potential $n$ from the Dirichlet-to-Neuman map $F_n$ at the boundary $\partial\mathcal O$, which is defined by solutions of either equation (\[ref0401B\]) or (\[ref0408B\]) in $\mathcal O$. For example, if the real potential is continued by zero in the exterior of $\mathcal O$, then the following relation holds for the solution $u$ of (\[ref0401B\]) under the condition of absence of exceptional points (e.g., see [@Nachman] and the discussion after formula (21) in [@music]) $$\label{dbarinteq}
e^{-i\zeta \cdot z}u(z,k)=1- \frac{1}{(2\pi)^2} \int_{\mathbb R^2} \frac{t(k')}{(k'-k)\overline{k}'}e_{-z}(k') \overline{e^{-i\zeta' \cdot z}u(z,k')}dk'_1 dk'_2.$$\[inte\] Here $e_z(k)=\exp((kz+\overline{k}\overline{z}))$, and the coefficient $t(k)$ is the so-called direct scattering transform that could be calculated through $F_n$: $$\label{dst}
t(k)=\int_{\partial \mathcal O} e^{i\overline{kz}}[(F_n-F_0)u](z,k) dl_z .$$ The function $u$ on $\partial\mathcal O$ in (\[dst\]) can be found without using the potential (see formula (\[LS\]) below).
In order to complete the solution of the inverse problem, one also needs to know that the solution of the integral equation (\[dbarinteq\]) is unique. Then the potential $n(z)$ can be found using the approach developed in [@Nachman] or just as $-\frac{\Delta u}{u}$, see [@beals],[@Henkin].
This inverse method was justified in the case of conductive potentials [@Nachman]. The latter potentials have the form $n=-q^{-\frac{1}{2}}\Delta q^{\frac{1}{2}}$, where $q$ is smooth, non-negative, and $q-1$ vanishes outside $\mathcal O$. In this case, equation (\[ref0401B\]) can be reduced to the equation $\nabla \cdot(q \nabla v)=0$ by the substitution $u=\sqrt{q}v$. The case of so-called subcritical potentials was studied in [@music].
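The reduction to the conductivity equation is a short computation. Substituting $u=\sqrt{q}\,v$ into $-\Delta u - nu=0$ with $n=-q^{-\frac{1}{2}}\Delta q^{\frac{1}{2}}$ gives $$-\Delta(\sqrt{q}\,v)-n\sqrt{q}\,v
= -v\,\Delta\sqrt{q}-2\nabla\sqrt{q}\cdot\nabla v-\sqrt{q}\,\Delta v + v\,\Delta\sqrt{q}
= -q^{-\frac{1}{2}}\left(\nabla q\cdot\nabla v + q\,\Delta v\right)
= -q^{-\frac{1}{2}}\,\nabla\cdot(q\nabla v),$$ so that equation (\[ref0401B\]) indeed becomes $\nabla\cdot(q\nabla v)=0$ wherever $q>0$.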
Exceptional points for sign definite perturbations of conductive potentials were studied in [@siltanen2], [@siltanen3] under the condition that the potential and its perturbation are spherically symmetric. The existence or non-existence of exceptional points in this case depends on the sign of the perturbation.
Similarly, one can extend the potential to the exterior of $\mathcal O$ by a nonzero real constant $E$. The absence of exceptional points and the uniqueness of solutions of the corresponding integral equation was justified in the case of small enough potentials (see [@grinevich]), and for any potential if $E$ is large enough (see [@RN2]). Let us also note that an integral equation similar to (\[dbarinteq\]) was obtained in certain cases [@RN2 (8.27),(8.28)] under the condition that only a neighborhood of the origin and a neighborhood of infinity are free of the exceptional points.
It is worth noting that the location of exceptional points is time independent for potentials that satisfy the Novikov-Veselov equation (e.g., [@bogdanov], [@siltanen4]). The latter is a multidimensional generalization of the KdV equation.
One can find the solution $u(z,k)$ of (\[ref0401B\]), (\[ref0401Ba\]) by reducing the problem to the Lippmann-Schwinger equation, which leads to (see e.g. [@RN]) $$\label{LS}
u(z,k)=(I+S_k(F_n-F_0))^{-1}e^{i\zeta \cdot z}, ~ z \in \partial \mathcal O.$$ Here $F_n$ is the Dirichlet-to-Neumann map for the equation $(-\Delta-n)u=0$ in $\mathcal O, ~~F_0=F_n|_{n=0}$, $S=S_k$ is the single layer operator on the boundary with Faddeev’s Green function $G_k(z)$: $$\label{skk}
S_k:H^{-\frac{1}{2}}(\partial \mathcal O)\to H^{\frac{1}{2}}(\partial \mathcal O);~~ \quad S_k\sigma(z)=\int_{\partial \mathcal O}G_k(z-z')\sigma(z')dl_{z'},~~z\in \partial \mathcal O,$$ where $dl$ is the element of the length and $$G_k(z)=\frac{1}{(2\pi)^2} e^{i\zeta \cdot z} \int_{\mathbb R^2} \frac{e^{i(\xi_1x+\xi_2 y)}}{|\xi|^2+2k\xi}d\xi_1 d\xi_2, \quad \xi=\xi_1+i\xi_2.$$ Function $G_k$ is real-valued (see e.g. [@sthesis Part 3.1.1]). Indeed, the second condition in (\[ref0414A\]) can be written in the form $|e^{i\zeta \cdot z}|u^{out} \in W^{1,p}(\mathbb R^2)$. From here it follows that $\Re G_k$ is the Green function, while $\Im G_k=0$ due to Nachman’s uniqueness result [@Nachman].
Equation (\[LS\]) provides the standard basis for studying the exceptional set $\mathcal E$: exceptional points can be defined as the values of the parameter $k\in \mathbb C \backslash \{0\} $ for which the non-self-adjoint family of operators $$\label{expo}
I+S_k(F_n-F_0)$$ has a non-trivial kernel. It is natural to consider [**$S_k(F_n-F_0)$**]{} as operator in $L_2(\partial \mathcal O)$ or in Sobolev spaces $H^{s}(\partial \mathcal O)$, where value of $s$ is restricted only by smoothness of $\partial\mathcal O$. The second term in (\[expo\]) is a compact operator in each of these spaces, and the kernel of operator (\[expo\]) does not depend on the choice of the space.
[**Description of the results obtained below**]{}. Our progress in the study of $\mathcal E$ is based on establishing a connection between exceptional values of $k$ and the kernels of the family of operators $F_n-F^{out}(k)$, which are easier to control than the kernels of operator (\[expo\]). Here $F^{out}(k)$ is the Dirichlet-to-Neumann map for the exterior Faddeev scattering problem in $\mathbb R^2 \backslash \mathcal O$. The next section starts with studying some properties of operator $F^{out}(k)$. Then we prove that a point $k\in \mathbb C \backslash \{0\} $ is exceptional if and only if the kernel of the family of operators $F_n-F^{out}(k)$ is nontrivial. Moreover, the dimension of the kernel coincides with the multiplicity of the exceptional point (which is the dimension of the solution space of problem (\[ref0414A\])).
Section 3 contains the two main results of this paper. They follow from this new criterion for the exceptional points. The first result is the absence of exceptional points in a neighborhood of the origin $k=0$ in the case of absorbing potentials. The second one is a generalisation of the results of [@siltanen2], [@siltanen3] on sign definite perturbations of conductive potentials to non spherically symmetrical problems. We do not assume the radial symmetry of either the underlying conductive potential or its perturbation. We prove the absence of exceptional points for perturbations of a specific sign and the existence of the exceptional set for perturbations of the opposite sign. This set is close to a circle centred at the origin. A criterion for existence of exceptional points is given in the last short section.
Reduction to boundary operators.
=================================
We consider the Faddeev scattering problem (\[ref0401B\]), (\[ref0401Ba\]) with zero energy in this section. Without loss of the generality we can assume that the equation $-\Delta v- n v =0$ in $\mathcal O$ does not have non-trivial solutions vanishing at the boundary (i.e., zero is not an eigenvalue of the interior Dirichlet problem). In other words, the operator $F_n$ is well defined. We can make sure that this condition holds by extending the domain $\mathcal O$ slightly while preserving the function $n$.
Consider the exterior Dirichlet problem $$\label{ccc1}
-\Delta u =0, ~ z\in \mathbb R^2\backslash \overline{\mathcal O}; \quad u|_{\partial \mathcal O}=f \in H^{ \frac{1}{2}}(\partial \mathcal O);
\quad e^{-i\zeta \cdot z} u \in W^{1,p}(\mathbb R^2 \backslash \overline{\mathcal O}), ~ p>2.$$ By $F^{out}(k):H^{\frac{1}{2}}(\partial \mathcal O)\to H^{-\frac{1}{2}}(\partial \mathcal O)$ we denote the operator that maps the Dirichlet data $f$ into the outward (with respect to $\mathcal O$) normal derivative $u_\nu$ of the solution of the problem (\[ccc1\]) at the boundary $\partial \mathcal O$. Denote by $\mathcal E_D$ the set of values of $k\in \mathbb C \backslash \{0\}$ for which the homogeneous problem (\[ccc1\]) has a non-trivial solution (the subindex $D$ here stands for the Dirichlet).
[*Definition.*]{} We will call a set $\{k=k_1+ik_2\}\subset\mathbb C$ [*a 1-D real analytic variety*]{} if the set of corresponding points $(k_1,k_2)\in \mathbb R^2$ is an intersection of a 1-D analytic variety in $\mathbb C^2=\mathbb C^2_{k_1,k_2}$ with the Euclidean space $\mathbb R^2$. Let us stress the meaning of the notation $k$. It will be used for points $k=k_1+ik_2$ of the complex plane. When we need to think about these points as vectors in the Euclidian space $\mathbb R^2$, we will use notation $(k_1,k_2)$ instead of $k$.
[*Definition*]{}. We will say that an operator $A$ has a real-valued integral kernel if $\overline{Af}=A\overline{f}$ for every function $f$ from the domain of $A$. The operator $A^\dag:=(A-A^*)/{2i}$ is called the non-self-adjoint part of $A$.
The following lemma concerns the exterior Faddeev problem (\[ccc1\]). Let us introduce the following parameter $\varepsilon=\varepsilon(k):=[-\nu(\frac{\gamma}{2\pi}+\frac{1}{2\pi}\ln|k|)]^{-1},~|k|\ll1,$ where $\gamma$ is the Euler constant and $\nu=|\partial\mathcal O|$ is the boundary length.
\[lemma0408C\] 1) The set $\mathcal E_D\subset \mathbb C$ is a real analytic variety and coincides with the set $\mathcal K$ of values of $0\neq k\in \mathbb C$ for which the operator $S_k$ has a non-trivial kernel. The operator $S_k$ is onto when $k\notin \mathcal E_D\bigcup\{0\}$.
2\) The Dirichlet-to-Neumann map $$F^{out}(k):H^{\frac{1}{2}}(\partial \mathcal O)\to H^{-\frac{1}{2}}(\partial \mathcal O), \quad k\notin \mathcal E_D\bigcup\{0\}$$ of the exterior Faddeev problem (\[ccc1\]) exists, is analytic in each of the variables $k_1,k_2$, and is an elliptic pseudo-differential operator of the first order with negative symbol.
3\) Operator $F^{out}(k): H^{\frac{1}{2}}(\partial \mathcal O)\to H^{-\frac{1}{2}}(\partial \mathcal O), ~k\neq 0,$ admits a continuous extension at point $k=0$. When $|k|$ is small enough, the extended operator is an infinitely smooth function of $\varepsilon:=[-\nu(\frac{\gamma}{2\pi}+\frac{1}{2\pi}\ln|k|)]^{-1}\geq 0$ and $\arg k$, and it has the following properties:
Operator $F^{out}(0)$ is self-adjoint, has isolated simple eigenvalue $\lambda=0$ with constant eigenfunctions, and there exists $\delta>0$ such that $F^{out}(0)\leq -\delta<0$ on the subspace $ H^{\frac{1}{2},\bot}(\partial \mathcal O)$.
4\) Operator $F^{out}(k), ~k\notin \mathcal E_D\bigcup\{0\},$ has a real-valued integral kernel. The non-self-adjoint part of $F^{out}(k), ~k\notin \mathcal E_D\bigcup\{0\},$ is a smoothing operator whose norm vanishes as $|k|\to 0$. To be more exact, $$\label{gl}
\|(F^{out})^\dag\varphi\|_{H^{1/2}(\partial \mathcal O)}\leq C|k|\|\varphi\|_{H^{-1/2}(\partial \mathcal O)}.$$
[**Proof.**]{} We will start with a study of invertibility of operator $S_k$ (defined by (\[skk\])) as $k\to 0$. In particular, we are going to prove that the set $\mathcal K$ (of values of $0\neq k\in \mathbb C$ for which the operator $S_k$ has a non-trivial kernel) is a real analytic variety. Later we will show that set $\mathcal K$ coincides with $\mathcal E_D$, and therefore the latter is also a real analytic variety. Our first step is to show that $S_k$ is invertible as $k\to 0$.
Let $$G_k^0(z)=-\frac{1}{2\pi} \ln|z| -\frac{\gamma}{2\pi}-\frac{1}{2\pi}\ln|k|,$$ where $\gamma$ is the Euler constant, and let $S_k^0:H^{-\frac{1}{2}}(\partial \mathcal O)\to H^{\frac{1}{2}}(\partial \mathcal O)$ be the single layer operator (similar to (\[skk\])) defined by the kernel $G_k^0(z)$: $$\label{sk0}
S_k^0 \sigma(z)=-\frac{1}{2\pi}\int_{\partial \mathcal O}G_k^0(z-z')\sigma (z')dl_{z'}, \quad z\in \partial \mathcal O.$$
Let us denote by $H^{-\frac{1}{2},\bot}(\partial \mathcal O)$ and $ H^{\frac{1}{2},\bot}(\partial \mathcal O)$ the linear subspaces in spaces $ H^{\pm\frac{1}{2}}(\partial \mathcal O)$, respectively, that consist of functions $\varphi$ such that $\int_{\partial\mathcal O}\varphi ds=0$. Every element $\psi\in H^{\pm\frac{1}{2}}(\partial \mathcal O)$ can be uniquely presented as a vector $\left(
\begin{array}{c}
c \\
\varphi \\
\end{array}
\right)
$, where $c=\int_{\partial\mathcal O}\psi ds/|\partial\mathcal O|,~\varphi=\psi-c$. These components of $\psi$ are orthogonal only in $L_2(\partial\mathcal O)$, but the norms in the original Sobolev spaces are obviously equivalent to the corresponding Hilbert-Schmidt norms, i.e., $\|\psi\|\sim(|c|^2+\|\varphi\|^2)^{1/2}$.
Using these vector representations of Sobolev spaces $ H^{\pm\frac{1}{2}}(\partial \mathcal O)$, we will write operators $S_k, S_k^0$ in the matrix form. In particular, $$\label{matrix}
S_k^0=\left(
\begin{array}{cc}
-\nu(\frac{\gamma}{2\pi}+\frac{1}{2\pi}\ln|k|)& b_1\\
b_2 & B \\
\end{array}
\right)=\left(
\begin{array}{cc}
\varepsilon^{-1}& b_1\\
b_2 & B \\
\end{array}
\right),$$ where $B:H^{-\frac{1}{2},\bot}(\partial \mathcal O)\to H^{\frac{1}{2},\bot}(\partial \mathcal O)$ is the single layer operator (similar to (\[sk0\])) with the kernel $-\frac{1}{2\pi}\ln|z-z'|,~\nu=|\partial\mathcal O|$, and operators $b_1,b_2,B$ are bounded and $k$-independent. Operator $B$ is a pseudo-differential operator of order $-1$. From the standard potential theory, it follows that operator $B^{-1}$ is bounded.
It was proved in that $N(kz):=G_k-G_k^0$ is an infinitely smooth function of $kz$ and $N(0)=0$. The same letter $N$ will be used also to denote the operator with the integral kernel $N(k(z-z'))$, i.e., $$N:=S_k-S_k^0:H^{-\frac{1}{2}}(\partial \mathcal O)\to H^{\frac{1}{2}}(\partial \mathcal O).$$ Then $\|N\|=O(|k|)$ as $k\to 0$. The norm is exponentially small in $\varepsilon, ~\varepsilon\to 0$, and can be estimated by $C_n\varepsilon^n$ with arbitrary $n>0$. Hence the following matrix representation is valid for $S_k$ as $\varepsilon\to 0$: $$\label{matrix1}
S_k=\left(
\begin{array}{cc}
\varepsilon^{-1}& b_1\\
b_2 & B \\
\end{array}
\right)+\left(
\begin{array}{cc}
O(\varepsilon^n)& O(\varepsilon^n)\\
O(\varepsilon^n) & O(\varepsilon^n) \\
\end{array}
\right).$$ Let $D$ be the diagonal matrix with elements $\varepsilon^{-1}, B$ on the diagonal. We multiply the equality above from the left by $DD^{-1}$. This leads to $$\label{matrix22}
S_k=\left(
\begin{array}{cc}
\varepsilon^{-1}& 0\\
0 & B \\
\end{array}
\right)\left[\left(
\begin{array}{cc}
I& \varepsilon b_1\\
B^{-1}b_2 & I \\
\end{array}
\right)+\left(
\begin{array}{cc}
O(\varepsilon^n)& O(\varepsilon^n)\\
O(\varepsilon^n) & O(\varepsilon^n) \\
\end{array}
\right) \right], \quad \varepsilon\to 0.$$
The second factor on the right is an operator in the space $H^{-\frac{1}{2}}(\partial \mathcal O)$. We can use the Hilbert-Shmidt norms of all the the matrices, and they will be equivalent to the norms of the operators that are represented by these matrices. Obviously, the second factor in (\[matrix22\]) is a small perturbation of the invertible matrix $\left(
\begin{array}{cc}
I& 0\\
B^{-1}b_2 & I \\
\end{array}
\right)$. Thus $$\label{matrix122}
(S_k)^{-1}=\left[\left(
\begin{array}{cc}
I& 0\\
-B^{-1}b_2 & I \\
\end{array}
\right)+ O(\varepsilon)
\right]\left(
\begin{array}{cc}
\varepsilon & 0\\
0 & B^{-1} \\
\end{array}
\right)=\left(
\begin{array}{cc}
\varepsilon & 0\\
0 & B^{-1} \\
\end{array}
\right)+\left(
\begin{array}{cc}
O(\varepsilon^2)& O(\varepsilon)\\
O(\varepsilon) & O(\varepsilon) \\
\end{array}
\right),$$ where $\varepsilon\to 0$ and the remainder terms are infinitely smooth in $\varepsilon$. The invertibility of $S_k$ when $0\neq |k|\ll 1$ is proved.
From the latter fact it follows that the set $\mathcal K$ where operator $S_k$ is not invertible is a real analytic variety. Indeed, operator $S_k^0$ is an elliptic PDO of order $-1$ on the compact manifold $\partial \mathcal O$, and therefore it has zero index. Then the same is true for the operator $S_k$, since function $G_k-G_k^0$ is infinitely smooth in $(z,k)$ and analytic in $k_1,k_2$ where $k=k_1+ik_2\neq 0$ (eg [@siltanen2]). Thus $S_k$ is a Fredholm family of operators analytic in $k_1,k_2\neq (0,0)$. Hence if $S_k$ is invertible at one point $k\neq 0$, then the set of values of $ (k_1,k_2) \in \mathbb C^2\backslash\{0\}$ for which $S_k$ has a non-trivial kernel is a 1-D analytic variety (see [@kuchment Th.4.11]). The intersection of this variety with the real plane is a real analytic variety.
Let us show that $\mathcal E_D=\mathcal K$. Let operators $\widehat{S}_k, ~\widehat{S}_k^0:H^{-\frac{1}{2}}(\partial \mathcal O) \rightarrow W^{1,p}(\mathbb R^2 \backslash \overline{\mathcal O}),~p>2,$ be the single layer operators defined by the same formula as operators $S_k, ~S_k^0$ in (\[skk\]), (\[sk0\]), respectively, but for all $z\in\mathbb R^2 \backslash \mathcal O$. Consider the problem $$\label{ccc}
-\Delta u =0, ~ z\in \mathbb R^2\backslash \overline{\mathcal O}; \quad u|_{\partial \mathcal O}=f \in H^{\frac{1}{2}}(\partial \mathcal O);
\quad e^{-i\zeta \cdot z} u \in W^{1,p}(\mathbb R^2 \backslash \overline{\mathcal O}), ~ p>2.$$ Let $k=k'\notin \mathcal K$. Then operator $S_{k'}$ is onto, and there is a function $\mu\in H^{-\frac{1}{2}}(\partial \mathcal O)$ such that $S_{k'}\mu=f$. Thus $u=\widehat{S}_{k'}\mu $ is a solution of (\[ccc\]). If this solution is unique, then operator $F^{out}$ is well defined (by $F^{out}f=u_\nu|_{\partial \mathcal O}$) and $k'\notin \mathcal E_D$. If solution $u=\widehat{S}_{k'}\mu $ of (\[ccc\]) is not unique, then there exists a non-trivial solution $u$ of the homogeneous problem (\[ccc\]) when $k=k'$. Denote by $v$ the extension of $u$ by zero in $\mathcal O$. Then $$\label{aaa}
-\Delta v =\alpha \delta (\partial\mathcal O), ~z\in \mathbb R^2, \quad e^{-i\zeta \cdot z} v \in W^{1,p}(\mathbb R^2), ~ p>2,$$ where $\delta (\partial\mathcal O)$ is the delta-function on $\partial\mathcal O$ and $\alpha=u_\nu|_{\partial\mathcal O}$. From the Nachman uniqueness result [@Nachman Lemma 1.3], it follows that $\alpha\not \equiv 0$ (otherwise $v\equiv 0$) and that $v=\widehat{S}_{k'}\alpha\delta (\partial\mathcal O)$. Thus $0=u|_{\partial\mathcal O}=S_{k'}\alpha$. This contradicts the assumption that $k'\notin \mathcal K$. Thus $k'\notin \mathcal E_D$.
Assume now that $k=k'\in \mathcal K$. Then there exists a non-trivial $\mu$ such that $S_{k'}\mu=0. $ Function $v=\widehat{S}_{k'}\alpha\delta (\partial\mathcal O)$ is a solution of (\[aaa\]). Function $v$ vanishes in $\mathcal O$ since $v$ is harmonic there and $v=0$ on $\partial\mathcal O$. Since the jump of the normal derivative of $v$ is proportional to $\mu$, function $v$ is not identically equal to zero. Thus $v$ is a non-trivial solution of homogeneous ($f=0$) problem (\[ccc\]). Thus $k'\in \mathcal E_D$. Hence $\mathcal K=\mathcal E_D$. To complete the proof of the first statement of Lemma \[lemma0408C\], it remains to recall that operator $S_k$ has zero index, and therefore it is onto when the kernel is trivial.
Let us prove the second statement of the Lemma. The following simple formula from the potential theory is valid: $$\label{sf}
(F_0-F^{out})S_k=I.$$ This formula implies that $$\label{ffout}
F_0-F^{out}=(S_k)^{-1}, \quad k\notin \mathcal E_D\bigcup\{0\}.$$ Since the right-hand side is analytic in $k_1,k_2$ and $F_0$ does not depend on $k$, operator $F^{out}$ is analytic in $k_1,k_2$ when $ k\notin \mathcal E_D\bigcup\{0\}.$ Consider the standard Dirichlet-to-Neumann map $F^{out}_b$ defined using the bounded solutions of the exterior problem for the Laplacian (the subindex “b" here stands for “bounded"). The difference $F^{out}-F^{out}_b$ is a smoothing operator that maps $H^{1/2}(\partial\mathcal O)$ into $H^{3/2}(\partial\mathcal O)$ (it is infinitely smoothing if $\partial\mathcal O\in C^\infty$). Thus $F^{out}$ is an elliptic pseudo-differential operator of the first order with negative symbol. The second statement is proved.
The proof of the third statement of the lemma is based on (\[ffout\]) and the matrix representation (\[matrix122\]). Let us also take into account that only the lower right element of the matrix representation of the operator $F_0$ is non-zero. We will preserve the same notation $F_0$ for this element. Then $$\label{kvf}
F^{out}(k)=F_0-(S_k)^{-1}=\left(
\begin{array}{cc}
- \varepsilon& 0\\
0 &F_0 - B^{-1} \\
\end{array}
\right)+\left(
\begin{array}{cc}
O(\varepsilon^2)& O(\varepsilon)\\
O(\varepsilon) & O(\varepsilon)\\
\end{array}
\right).$$ This formula allows us to extend $F^{out}(k)$ by continuity at $k=0$. It will completely justify the third statement of the lemma if we show that $F_0 - B^{-1}<-\delta<0$.
Let us recall that $F^{out}_b$ is the Dirichlet-to-Neumann operator that maps the Dirichlet data on $\partial\mathcal O$ into the normal derivative $u_\nu$ of the corresponding bounded solution $u$ of the exterior problem for the Laplacian. We denote by $\widetilde{S}$ the single layer operator with the kernel $-\frac{1}{2\pi} \ln|z-z'|$. Similarly to (\[sf\]), we have that $(F_0-\widetilde{F}^{out})\widetilde{S}=I$ on functions orthogonal to constants. From here it follows that $$\label{kkk}
F_0-F^{out}_b=B^{-1} \quad {\rm on} \quad H^{\frac{1}{2},\bot}(\partial \mathcal O).$$ From the Green formula, it follows that $F^{out}_b<0$ on $H^{\frac{1}{2},\bot}(\partial \mathcal O)$. Thus $F_0- B^{-1}<0$ on $H^{\frac{1}{2},\bot}(\partial \mathcal O)$. The latter operator does not depend on $k$. It is an elliptic pseudo-differential operator of the first order (it is a restriction of $F^{out}$ to a subspace of co-dimension one), and therefore its eigenvalues tend to infinity. Thus from the negativity of $F_0 -B^{-1}$ it follows that $F_0- B^{-1}<-\delta<0$ on $H^{\frac{1}{2},\bot}(\partial \mathcal O)$.
Let us prove the last statement of the lemma. Let us recall that the kernel $G_k$ of operator $S_k$ is real-valued (see [@sthesis Part 3.1.1] and the discussion in the introduction of this paper). Thus the integral kernel of the operator $F^{out}$ is real-valued since the other two operators in (\[ffout\]) have this property. From (\[ffout\]) it also follows that $$\label{fst}
(F^{out})^\dag=-((S_k)^{-1})^\dag.$$
In order to prove (\[gl\]), we need to consider certain operators in Sobolev spaces with the indexes $s\in[-3/2,3/2]$, and this does not create difficulties since we assume that $\partial\mathcal O\in C^2$. We recall that $(S_k^0)^{-1}$ is a pseudo-differential operator of the first order, and from (\[matrix\]) it follows that $$\|(S_k^0)^{-1}\varphi\|_{H^{-3/2}(\partial\mathcal O)}\leq C\|\varphi\|_{H^{-1/2}(\partial\mathcal O)},~~0<|k|\ll 1.$$ It was proved in that $N(kz):=G_k-G_k^0$ is an infinitely smooth function of $kz$ and that $N(0)=0$. Hence the following estimate holds for the operator $N=S_k-S_k^0$: $$\label{nn}
\|N\varphi\|_{H^{3/2}(\partial\mathcal O)}\leq C|k|\|\varphi\|_{H^{-3/2}(\partial\mathcal O)}, \quad 0<|k|<1.$$ This implies the following estimate for operator $(S_k)^{-1}=(S_k^0+N)^{-1}=(S_k^0)^{-1}(I+N(S_k^0)^{-1})^{-1}$: $$\label{pr}
\|(S_k)^{-1}\varphi\|_{H^{-3/2}(\partial\mathcal O)}\leq C\|\varphi\|_{H^{-1/2}(\partial\mathcal O)}, \quad 0<|k|\ll 1.$$
Now we fix an arbitrary smooth enough $\varphi$ and denote by $\psi=\psi(k)$ the function $\psi=(S_k)^{-1}\varphi,~0<|k|\ll 1.$ Then $$(((S_k)^{-1})^\dag\varphi,\varphi)=\Im((S_k)^{-1}\varphi,\varphi)=\Im(\psi,(S_k^0+N)\psi)=\Im(\psi,N\psi),$$ since operator $S_k^0$ is self-adjoint. The last equality and (\[nn\]), (\[pr\]) imply that $$|(((S_k)^{-1})^\dag\varphi,\varphi)|\leq |(\psi,N\psi)|\leq C|k|\|\psi\|^2_{H^{-3/2}(\partial\mathcal O)}\leq C|k|\|\varphi\|^2_{H^{-1/2}(\partial\mathcal O)},~~0<|k|\ll 1.$$
Let $D:L_2(\partial\mathcal O)\to H^{-1/2}(\partial\mathcal O)$ be a bounded invertible operator. We replace $\varphi$ in the estimate above by $D\widehat{\varphi}=\varphi$ and obtain that $$|D^*(((S_k)^{-1})^\dag D\widehat{\varphi},\widehat{\varphi})|\leq C|k|\|\widehat{\varphi}\|^2_{L_2(\partial\mathcal O)},~~0<|k|\ll 1$$ on a dense set in $L_2(\partial\mathcal O)$. Thus $\|D^*(((S_k)^{-1})^\dag D\|_{L_2(\partial\mathcal O)}\leq C|k|,~0<|k|\ll 1$. This and (\[fst\]) together justify (\[gl\]).
In order to obtain an alternative definition of the exceptional set $\mathcal E$, we reduce system (\[ref0401B\]),(\[ref0401Ba\]) to the boundary: $$\label{ref0406A}
\left \{
\begin{array}{rlll}
u &=&u^{out} + e^{i\zeta \cdot z}, & z \in \partial \mathcal O, \\
F_n u &=& F^{out} u^{out} + F_0 e^{i\zeta \cdot z}, & z \in \partial \mathcal O.
\end{array}
\right .$$ This system immediately implies the following representation (which is equivalent to (\[LS\])) of function $u$ at the boundary $\partial \mathcal O$: $$\label{LS0428}
u=(F_n-F^{out})^{-1}(F_0-F^{out})e^{i\zeta \cdot z}.$$ The equivalence of (\[LS0428\]) and (\[LS\]) can be easily justified using the equality $F_n-F_0=(F_n-F^{out})+(F^{out}-F_0)$ and (\[sf\]).
So, now we get a simple but important alternative definition of exceptional set $\mathcal E$.
\[thkernel\] Let operator $F_n$ be well defined. Then a point $k \neq 0$ is exceptional if and only if the operator $F_n-F^{out}(k)$ has a non-trivial kernel. Moreover, the multiplicity of the exceptional point (i.e., the number of linearly independent solutions of (\[ref0414A\])) is equal to the dimension of $Ker(F_n-F^{out}(k))$.
[**Remark.**]{} If $k'\in \mathcal E_D$, i.e., the operator $F_n-F^{out}(k)$ has a singularity at $k=k'$, then the kernel is the set of functions on which both the singular and principal parts of the operator vanish. To be more rigorous, a function $\sigma$ belongs to the kernel of the operator if $\lim [F_n-F^{out}(k)]\sigma=0$ when $k\to k', ~ k\notin \mathcal E_D$.
[**Proof.**]{} Let $k\in \mathcal E$ and let $\sigma=v|_{\partial\mathcal O}$, where $v$ is a non-trivial solution of (\[ref0414A\]). From the assumption on $F_n$ it follows that $\sigma\not \equiv 0$, and equation (\[ref0414A\]) implies that $F_n\sigma=F^{out}\sigma$. Thus $F_n-F^{out}$ has a non-trivial kernel that includes $\sigma$. Conversely, assume that $\sigma\neq 0$ belongs to the kernel of $F_n-F^{out}$ for some $k=k_0$. We define the non-trivial solution $v$ of (\[ref0414A\]) as follows. In $\mathcal O$, it is defined as the solution of the Dirichlet problem that is equal to $\sigma$ at the boundary (recall that zero is not an eigenvalue of the interior Dirichlet problem). In $R^2\backslash \overline{\mathcal O}$, it is defined as the solution $u$ of the exterior problem (\[ccc1\]) with $f=\sigma$ if $k_0\notin \mathcal E_D$. Otherwise, $v$ is defined as $\lim_{k\to k_0}u$. The existence of the limit follows from the Remark above.
Exceptional points
==================
The following statement is a simple consequence of Theorem \[thkernel\] and Lemma \[lemma0408C\].
Let $n(x)$ be absorbing, i.e., $\Im n(x) \geq \delta> 0$ on $\mathcal O$. Then there are no exceptional points in a small neighborhood of the origin $k=0$.
[**Proof.**]{} The Green formula implies that the quadratic form $$\Im( F_n u,u)=\Im \int_{\partial \mathcal O} \frac{\partial u}{\partial \nu} \overline{u}dl = \Im \int_{\mathcal O} \Delta u \overline{u}dS = - \int_{\mathcal O}\Im n(x)|u(x)|^2 dS\leq -\delta\int_{\mathcal O}|u(x)|^2 dS$$ is sign definite. We take into account that the standard estimates for solutions of elliptic equations are valid in the Sobolev spaces with negative indexes if the equation is homogeneous (see [@roitberg]). In particular, $\|u\|_{L_2(\mathcal O)}\leq C\|u\|_{H^{-1/2}(\partial\mathcal O)}$ for solutions $u$ of the equation $-\Delta u-nu=0,~x\in \mathcal O.$ Thus $$\Im( F_n u,u)\leq -C\delta\|u\|_{H^{-1/2}(\partial\mathcal O)}.$$ On the other hand, Lemma \[lemma0408C\] implies that $$\Im( F^{out} u,u)\leq C|k|\|u\|_{H^{-1/2}(\partial\mathcal O)}, \quad 0<|k|\ll 1.$$ Therefore, the operator $\Im(F_n-F^{out}(k))$ is sign definite for small $|k|$, and the kernel of operator $F_n-F^{out}(k),~0<k\ll 1,$ is trivial. It remains to apply Theorem \[thkernel\].
Let $n$ be a conductive potential vanishing outside $\mathcal O$. It means that $$n=-q^{-\frac{1}{2}}\Delta q^{\frac{1}{2}},$$ where $q\in C^2(\mathbb R^2)$ is a smooth non-negative function and $q-1$ vanishes outside $\mathcal O$. Nachman proved [@Nachman] that there are no exceptional points for such potentials. Perturbations $n_\lambda=n(z)+\lambda \omega(z)$ of conductive potentials were considered in [@siltanen2], where $\omega$ is real-valued and supported on $\overline{\mathcal O}$. Under the assumptions that the potential is radial, i.e., $n=n(|z|), ~\omega=\omega(|z|)$, and $$\label{1405A}
\mu= \int_{\mathcal O} \omega q dS > 0,$$ the authors of [@siltanen2] proved that the exceptional set is empty for small negative $\lambda$, and there exists an exceptional set for positive small $\lambda$. (Formally, the sign of $\lambda$ in the latter statement is opposite to the one used in [@siltanen2] since here we use the wave equation with the different sign before the potential). It was shown that the exceptional set is a circle of radius $e^{-\frac{1}{\mu \lambda}(1+o(1))}, ~ \lambda \rightarrow +0$.
Our approach allows us to extend this result to the case of non-radial potentials. The exceptional set in this case is not a circle anymore, but it approaches a circle as $\lambda \rightarrow +0$. Consider the variables $\varepsilon=[-\nu(\frac{\gamma}{2\pi}+\frac{1}{2\pi}\ln|k|)]^{-1}, ~ \varphi=\arg k, ~\varphi \in [0,2\pi)$.
\[0408E\] Let $n_\lambda=n(z)+\lambda \omega(z)$, where $n$ is a conductive (real-valued) potential, $\omega$ is real-valued, $n(z)=\omega(z)=0$ when $z\notin \overline{\mathcal O}$, and (\[1405A\]) holds.
If $\lambda<0$ is small enough, then the exceptional set $\mathcal E$ is empty. Moreover the following estimate holds for the scattering transform (\[dst\]): $|t(k)|<C(\lambda)/|\ln|k||, ~ k \rightarrow 0$.
If $\lambda>0$ is small enough, then the exceptional points exist only in a neighbourhood of the origin and the exceptional set is given by the equation $\varepsilon= \mu \lambda(1+o(1)), ~ \lambda \rightarrow +0$, where the remainder depends smoothly on $\lambda$ and $\varphi$.
[**Proof.**]{} We have (eg [@RN2 (3.18)]) that $$\label{0410C}
|G_k(z)e^{-i\zeta \cdot z}|\leq \frac{c}{\sqrt{|k|}\sqrt{|z|}}, ~ c>0, ~ E=0.$$ This implies the unique solvability of the Lippman-Schwinger equation $$u-e^{i\zeta \cdot z}= -G_k * (n_\lambda u)$$ when $|n_\lambda|<C$ and $|k|$ is large enough. Problem (\[ref0414A\]) has only trivial solution for these $k$ and $\lambda$. Hence there exists $K_0>0$ such that the region $|k|>K_0$ is free of points $k\in \mathcal E$ when $|n_\lambda|<C $ (see more details in [@siltanen2 proof of the corollary 3.5]).
Now we are going to show that the exceptional points for potential $n_\lambda$ may occur only in a small neighborhood of the origin $k=0$. Indeed, operator $F_n$ is a pseudo-differential operator of the first order with a positive principal symbol. Due to Lemma \[lemma0408C\], operator $F^{out}(k),k\neq 0,$ is a pseudo-differential operator of the first order with a negative principal symbol. Hence, $F_{n}-F^{out}(k)$ is an elliptic operator of the first order, and therefore, its eigenvalues tend to infinity. We take additionally into account that the kernel of the operator $F_{n}-F^{out}(k)$ is trivial for all $k\neq 0$ due to Theorem \[thkernel\]. This implies that the operator $(F_{n}-F^{out}(k))^{-1}$ is bounded for each fixed $k\neq 0$. From the analyticity in $k_1,k_2$ it follows that the upper bound for the norm $\|(F_{n}-F^{out}(k))^{-1}\|$ can be chosen uniformly in $k$ on each region of the form $K_0\geq |k|\geq \delta>0$. Then the same is true if $n$ is replaced by $n_\lambda$ with small enough $|\lambda|$. Hence Theorem \[thkernel\] implies that the exceptional points for the problem with the perturbed potential $n_\lambda$ and sufficiently small $|\lambda|$ can appear only in a small neighborhood of $k=0$.
Now let us study the structure of the set $\mathcal E$ in a neighborhood of the origin $k=0$. Since the substitution $u=\sqrt{q}v$ reduces equation (\[ref0401B\]) with a conductive potential to the equation $\nabla (q \nabla) v=0$, the D-t-N maps for these equations coincide. Hence, the kernel and co-kernel of operator $F_n$ are one dimensional spaces of constants. The norm of the restriction of $F_n$ on the space $L^{2,\bot}$ of functions orthogonal to constants is greater than some positive constant.
Consider the operator $A(\lambda,k):=F_{n_\lambda}-F^{out}(k)$. From Lemma \[lemma0408C\] and the properties of $F_n$ established above, it follows that $A(0,0)$ has zero eigenvalue with constant eigenfunction, and all the other eigenvalues are greater than some positive constant $\delta>0$. Operator $F_{n_\lambda}$ is analytic in $\lambda$, and operator $F^{out}(k)$ is an infinitely smooth function of $\varepsilon=[-\nu(\frac{\gamma}{2\pi}+\frac{1}{2\pi}\ln|k|)]^{-1}$ at $\varepsilon=0$ with all the derivatives at $\varepsilon=0$ independent of the polar angle of $k$. The properties of $F^{out}(k)$ are proved in Lemma \[lemma0408C\]. In fact, the independence of the derivatives of $\varphi$ is not stated there, but could be easily verified in the process of the proof. Hence operator $A(\lambda,k)$ with small enough $|\lambda|+|k|$ has an eigenvalue $\xi$ of the form $$\xi(\lambda,\varepsilon,\varphi)=a \lambda + b\varepsilon + O(\lambda^2+\varepsilon^2)$$ with a smooth in $k,\lambda$ eigenfunction $e(k,\lambda)$ and all the other eigenvalues being separated from zero. The latter statement for general analytic families of operators with an isolated eigenvalue can be found in [@reed XII.8]. One can easily see that the proof there does not require the analyticity and remans valid for smooth operator functions.
Let us find constants $a$ and $b$. Let $e=e(0,0)$. Recall that $e$ is a constant. We normalize $e(k,\lambda)$ in such a way that $e\equiv 1$. Operator $A(0,0):=F_{n}-F^{out}(0)$ is self-adjoint, and therefore $$\label{2511A}
a=(A(\lambda,k)e(\lambda,k),e(\lambda,k))'_\lambda(0,0)=\left ( \frac{\partial}{\partial\lambda}A (0,0)e,e \right )=\left (\frac{\partial}{\partial\lambda}F_{n_\lambda}(0,0)e,e \right ),$$ where $e=e(0,0)$. We used here that, $$(A(0,0)e'(0,0),e) = (A(0,0)e,e'(0,0)) = 0,$$ since $A(0,0)$ is self-adjoint and $A(0,0)e=0$.
Let us evaluate the right-hand side in (\[2511A\]). Consider solutions $f_\lambda \in H^{1/2}(\mathcal O)$ of the equation $\Delta f_\lambda + n_\lambda f_\lambda =0$ in $\mathcal O$ subject to the boundary condition $f_\lambda = e$ at $\partial \mathcal O$. Note that its derivative satisfies $\Delta f'_\lambda + n_\lambda f'_\lambda= - n'_\lambda f_\lambda$ in $\mathcal O,~f_\lambda' =0$ at $\partial \mathcal O$. From the Green formula it follows that $$\int_{\partial O} \frac{\partial f_\lambda '}{\partial \nu} \overline{f} dl =\int_{\mathcal O} n'_\lambda |f_\lambda|^2 dS .$$ We put here $\lambda=0$ and take into account that $f_\lambda =q^{1/2}$ when $\lambda=0$. This leads to $$\left (\frac{\partial}{\partial\lambda}F_{n_\lambda}(0,0)e,e \right ) = -\mu,$$ where $\mu$ is given by (\[1405A\]). Hence $a=-\mu$.
Similarly, from (\[kvf\]) it follows that $$b=(A e,e)'_\varepsilon(0,0)=(A_\varepsilon(0,0)e,e)=-\left (\frac{\partial}{\partial\varepsilon}F^{out}(k)e,e \right )|_{\varepsilon=0}=1.$$ Thus $$\label{aaaa}
\xi(\lambda,\varepsilon,\varphi)=-\mu \lambda + \varepsilon + O(\lambda^2+\varepsilon^2), ~ |\lambda|+|k|\ll 1,~~\mu>0.$$
Since the set $\mathcal E$ is located in a small neighborhood of the origin $k=0$, from Theorem \[thkernel\] it follows that $\mathcal E$ is defined by the relations $\xi(\lambda,\varepsilon,\varphi)=0, ~0<\varepsilon\ll 1$. Since $\varepsilon>0$, all the statements of the theorem that do not concern $t(k)$ follow immediately from (\[aaaa\]).
Now let us estimate $t(k)$ in a neighbourhood of $k=0$. From (\[dst\]) and (\[LS0428\]) it follows that $|t(k)|$ can by estimated by $C\|(F_{n_\lambda}-F^{out})^{-1}(F_0-F^{out})e^{i\zeta \cdot z}\|$. From Theorem \[thkernel\] it follows that for $\lambda < 0$ and small $|\lambda |$, operator $(F_{n_\lambda}-F^{out})^{-1}$ is bounded in the small neighbourhood of $k$. Thus $$|t(k)| \leq C(\lambda) \|(F_0-F^{out}(k))e^{i\zeta \cdot z}\|,~ |k|\ll 1.$$ It remains to use representation (\[matrix122\]) for operator $F_0-F^{out}(k)$ (see also (\[ffout\])) and write $e^{i\zeta \cdot z}$ in the form $1+O(k)$.
.
Condition for existence of exceptional points
=============================================
Now we present a method that allows one, in some cases, to justify existence of exceptional points on a path $\gamma\subset \mathbb C$ that is analytic in $k_1,k_2$ by making certain measurements at the end points of $\gamma$. In this section, we assume that the potential $n$ is real-valued.
Consider the operator function $P(k):=I+S_k(F_n-F_0)$ in $L_2(\partial\mathcal O)$ or $H^{-1/2}(\partial\mathcal O)$. The integral kernel of $P(k)$ is real-valued, since function $G_k$ is real-valued (see [@sthesis Part 3.1.1]). Hence if $\mu$ is an eigenvalue of $P(k)$, then the complex conjugate number $\overline{ \mu}$ is also an eigenvalue of the same multiplicity. We already mentioned earlier that operator $S_k(F_n-F_0)$ is compact. Thus for each $k$, the eigenvalues $\mu_i=\mu_i(k)$ converge to $\mu=1$ as $i\to\infty$ and therefore, $P(k)$ has at most a finite number of negative eigenvalues. We can introduce a function that counts the number of negative eigenvalues $\mu_i(k)$ of operator $P(k)$: $$n^-(k)= \sum_{i ~:~ \mu_i(k)<0} m_i,$$ where $m_i$ is the algebraical multiplicity of $\mu_i$. In the case of positive energy, operator function $P_E(\lambda)$ and the counting function $n^-(\lambda)$ are introduced absolutely similarly to the corresponding objects in the case of $E=0$.
\[t22\] Let energy be zero, and let $k,\widehat{k} \in \mathbb C \backslash \{0\}$ be arbitrary points such that $n^-(k)\neq n^-(\widehat{k}) ~(mod ~2)$. Then every analytic path $\gamma$ connecting points $k$ and $\widehat{k}$ and not passing through $k=0$ contains at least one exceptional point.
The same statement (with $k,\widehat{k}$ replaced by $\lambda, \widehat{\lambda}$) is valid when energy is positive under additional condition that the path $\gamma$ does not contain points of the unit circle $|\lambda|=1$.
[**Proof.**]{} Let $E=0$, and let $k=k(s),~0\leq s\leq 1,$ be an analytic parametrization of $\gamma$. Then operator $P(k(s))$ is analytic in $s$. Since operator $S_k(F_n-F_0)$ is compact, the spectrum of $P(k(s))$ consists of eigenvalues $\mu_i(k(s))$ of finite multiplicities. Thus (see [@reed consequence of Th.XII.2]) the eigenvalues $\mu_i(k(s))$ are analytic in $s$ except for a possible finite amount of branching points. Moreover, for each eigenvalue $\mu_i(k(s_0)),~0\leq s_0\leq 1,$ there exists a complex neighborhood $V$ of the eigenvalue and $\varepsilon>0$ so small that the number of all eigenvalues $\mu_i(k(s))$ in $V$ with a fixed $s,~|s-s_0|<\varepsilon,$ counted with their algebraical multiplicity does not depend on $s$. Therefore, function $n^-(k(s))$ can change its value (when $s$ changes from $0$ to $1$) only if an eigenvalue $\mu(k(s))$ of $P$ passes through the point $\mu=0$ or if some eigenvalues leave/come to the real axis. The second option occurs only with pairs of complex-adjoint eigenvalues. Therefore, $n^-(k(1))-n^-(k(0))$ can be odd only if at least one of the eigenvalues $\mu(k(s))$ passes through $\mu=0$. The corresponding point $k(s)\in\gamma$ is exceptional. The statement of the theorem in the case $E=0$ is proved.
The proof in the case $E>0$ remains the same. The additional condition that $\gamma$ does not intersect the unit circle is needed only because the parameter $\lambda$ in the Faddeev scattering problem with positive energy can’t belong to the unit circle.
[**Acknowledgments.**]{} The authors are thankful to Eemeli Blåsten, Uwe Kahler, Michael Music, Roman Novikov, and Samuli Siltanen for useful discussions concerning the Faddeev scattering problem. Authors are thankful to the anonymous referee who found an essential error in the previous version of the article.
[102]{} L.Bogdanov, On the two-dimensional Zakharov-Shabat problem. Theoretical and Mathematical Physics, 72(1), 790-793, 1987.
Beals R., Coifman R. R. Multidimensional scattering and nonlinear partial differential equations, Proc. of Symposia in Pure Mathematics.— 1985. V. 43.,p. 45—70.
R. Brown and G. Uhlmann. Uniqueness in the inverse conductivity problem for nonsmooth conductivities in two dimensions. Communications in Partial Differential Equations, 22(5-6):1009-1027, 1997.
R. Croke, J.L.Mueller, M.Music, P.Perry, S.Siltanen,A.Stahel, The Novikov-Veselov equation: theory and computation, To appear in Contemporary Mathematics.
Faddeev, L.D., Increasing solutions of the [S]{}chr[ö]{}dinger equation, Soviet Physics Doklady, 10, 1033-1035, 1966
Grinevich, P. G., and Novikov, S. P. Two-dimensional “inverse scattering problem” for negative energies and generalized-analytic functions. I. Energies below the ground state. Funkts. Anal. Prilozh. 22(1) (1988), 23–33. Translation in Funct. Anal. Appl. 22(1) (1988), 19–27.
E.Lakshtanov, B.Vainberg, Applications of elliptic operator theory to the isotropic interior transmission eigenvalue problem, Inverse Problems, 29 (2013), 104003.
E.Lakshtanov, B.Vainberg, Weyl type bound on positive Interior Transmission Eigenvalues, Communications in PDE, accepted, (2013).
E.Lakshtanov, B.Vainberg, Sharp Weyl Law for Signed Counting function of positive interior transmission eigenvalues, arXiv:1401.6213.
M.Music, P.A. Perry, S. Siltanen, Exceptional circles of radial potentials, Inverse Problems 29, (2013), 045004.
Michael Music, The nonlinear Fourier transform for two-dimensional subcritical potentials, Inverse Problems and Imaging, Volume 8, No. 4, 2014, 11511167.
A. Nachman, Global uniqueness for a two-dimensional inverse boundary value problem, The Annals of Mathematics, Vol. 143, (1996), 71-96.
R. G. Novikov, Multidimensional inverse spectral problem for the equation $-\Delta \psi+ (v(x)-Eu(x))\psi=0$, Functional Analysis and Its Applications, 22(4), 263-272, 1988.
R. G. Novikov, The inverse scattering problem on a fixed energy level for the two-dimensional Schrödinger operator, J. of Funct. Analysis, 103 (1992), 409-463.
R. G. Novikov and G. M. Khenkin, The $\overline{\partial}$-equation in the multidimensional inverse scattering problem, Russ. Math. Surv. 42, No 3 (1987) 109-180.
M. Reed, B. Simon, Methods of Modern Mathematical Physics, IV, Acad. Press, 1978.
Y. Roitberg, Elliptic boundary value problems in the spaces of distributions. Vol. 384. Kluwer Academic Pub, 1996
S.Siltanen, J.Tamminen, Exceptional points of radial potentials at positive energies, arXiv:1307.2037 \[math.NA\].
S. Siltanen, Electrical Impedance Tomography and Faddeev’s Green functions, Ann. Acad. Sci. Fenn. Mathematica Dissertationes 121. Available in postscript form at www.research-siltanen.net/publications.html.
B.Vainberg, Principles of radiation, limiting absorption and limiting amplitude in the general theory of partial differential equations, Russian Math. Surveys, 21, No 3 (1966), 115-193.
R.Weder, Generalized limiting absorption method and multidimensional inverse scattering theory, Math. Methods in the Applied Sciences, 14(7) (1991), 509-524.
M. G. Zaidenberg, S. G. Krein, P. A. Kuchment, and A. A. Pankov, Banach bundles and linear operators, Uspekhi Mat. Nauk 30:5 (1975), 101-157.
[^1]: Department of Mathematics, Aveiro University, Aveiro 3810, Portugal. This work was supported by Portuguese funds through the CIDMA - Center for Research and Development in Mathematics and Applications and the Portuguese Foundation for Science and Technology (“FCT – Fundação para a Ciência e a Tecnologia”), within project PEst-OE/MAT/UI4106/2014 (lakshtanov@ua.pt).
[^2]: Department of Mathematics and Statistics, University of North Carolina, Charlotte, NC 28223, USA. The work was partially supported by the NSF grant DMS-1410547 (brvainbe@uncc.edu).
---
author:
- |
G. Peilert, T.C. Sangster, M.N. Namboodiri, and H.C. Britt\
\
Lawrence Livermore National Laboratory,\
Livermore, CA 94550\
USA
title: ' The Multifragmentation Freeze–Out Volume in Heavy Ion Collisions'
---
[**Abstract:**]{}
The reduced velocity correlation function for fragments from the reaction Fe + Au at 100 A MeV bombarding energy is investigated using the dynamical–statistical approach QMD+SMM and compared to experimental data to extract the Freeze–Out volume assuming simultaneous multifragmentation. It is shown that the data are consistent with a Freeze–Out volume corresponding to $0.1 - 0.3$ times normal nuclear matter density. The calculations show an additional correlation due to recoil effects from a heavy third fragment in an asymmetric break–up. This effect is present, but less pronounced in the data.
A topic of great interest in nuclear physics is the multifragmentation of heavy nuclei at moderate excitation energies. Such systems are typically formed in p–nucleus reactions in the GeV energy regime or in intermediate energy heavy ion collisions. In the experiments of the Purdue group [@perdue], first attempts were made to relate the inclusive results from p–nucleus fragmentation to the critical exponents of a phase transition, analogous to the liquid–gas transition in condensed matter physics. Within the last few years it has become possible to study such reactions in highly exclusive experiments. Critical properties can then be extracted by investigating, for example, the moments of the mass distribution, as proposed by Campi [@campi].
Such a procedure, however, cannot provide the physical quantities that drive this phase transition. If one wants to extract quantities like a critical temperature and density, one has to rely on dynamical models. Unfortunately there is presently no complete model available that describes the process of thermally driven multifragmentation in heavy ion collisions in a consistent approach. Current modeling involves a two–step dynamical–statistical process, beginning with a pre-equilibrium stage described in the earliest approaches by an Intranuclear Cascade Model [@INC] and, more recently, by molecular dynamics pictures such as the Quantum Molecular Dynamics model, QMD [@QMD1; @QMD2], or by single particle models of the VUU/BUU type [@vuu]. Following this stage there remains a distribution of nucleons and complex fragments, which can themselves be highly excited and will undergo further statistical decay. This statistical decay process has been modeled in various codes as a sequential evaporation [@blann; @gemini; @frdm] or as an explosive simultaneous multifragmentation [@fai; @bondorf; @smm; @gross; @qsm]. None of these models, however, includes the dynamics of the fragmentation.
Recent investigations with the QMD model have shown that for central collisions of heavy nuclei at high energies, a rapid compression–decompression mechanism emerges in the initial stage of the reaction and leads to a direct multifragmentation process, in which a highly excited heavy residue is no longer formed with high probability [@QMD2; @lynch93]. These investigations suggest that the region to search for thermally driven multifragmentation in heavy ion reactions is in central collisions at moderate bombarding energies (E/A $\approx$ 100 A MeV) or in peripheral (or highly asymmetric) reactions at higher energies (E/A $\approx$ 1 A GeV). Under these conditions the direct reaction leads to a highly excited source that then breaks up into many IMF’s.
Recently, attempts have been made by Sangster et al. [@craig] and Lacey et al. [@lacey] to extract the emission time scale of the multifragmentation process by assuming a sequential decay mechanism with a fixed Freeze-Out volume. In contrast to earlier work performed by Kim et al. [@kim] using the Koonin–Pratt formalism [@kooninpratt], they used the classical three-body trajectory code [*MENEKA [@meneka]*]{} to simulate the emission of IMF’s from the surface of a spherical source characterized by a unique radius parameter. In this Letter we will show that the same data can be explained by assuming a multifragmentation from a Freeze–Out volume considerably larger than the ground state dimension.
This will be done using the QMD+SMM approach. In this case the Hamilton equations of all nucleons are integrated, which implies that correlations between all the existing nucleons and fragments are treated to all orders. This is especially important when one deals not only with two IMF’s, but with many. In the following we present comparisons of the QMD+SMM fragment-fragment correlation function for the reaction Fe (100 A MeV) + Au for central collisions (b = 0 - 6 fm) to the data of Ref. [@sang92]. After the break-up of the hot target remnant within the SMM model (after 300 fm/c reaction time) we neglect the nuclear forces and follow the Coulomb trajectories of the charged particles through an experimental acceptance filter.
The QMD+SMM approach models the dynamical reaction and the subsequent statistical decay of the excited pre-fragments (details about the two models can be found in refs. [@QMD2; @sang92; @smm]). Previous investigations with the QMD model [@QMD2] have shown that there are two different mechanisms leading to multifragmentation. One is related to the mechanical rupture of the system whenever compressional effects are important; the other produces fragments thermally from an equilibrated source. This thermal multifragmentation has so far not been described in a microscopic model like QMD [@QMD2]. In the first comparison to inclusive IMF data it was shown that a two–step model was necessary to reproduce the experimental angular distributions [@sang92]. This two–step model involved the calculation of the initial kinematics and excitation energies of all pre-fragments with the QMD model, followed by a subsequent deexcitation calculation utilizing the Statistical Multifragmentation Model (SMM) of Botvina et al. [@smm]. The input for the SMM stage of the reaction, i.e., the masses and excitation energies of the fragments, is consistently determined within the QMD approach.
The SMM model describes the multifragmentation of highly excited nuclei based on the statistical approach and a liquid–drop description of hot fragments. It is assumed that the excited primordial fragments break up into an assembly of nucleons and fragments. All these decay products are described as Boltzmann particles in a Freeze–Out volume $V = V_0 ( 1 + \kappa)$, where $\kappa$ is a model parameter and $V_0$ is the volume of the system corresponding to normal nuclear matter density. Since all the produced fragments are excited (only particles with A $\le$ 4 are considered as elementary particles) the final deexcitation is treated as a Fermi break up for lighter fragments ( A $<$ 16) and via an evaporation of nucleons and clusters up to $^{18}O$ for heavier fragments (for details see Ref. [@smm]). In the following we will vary the volume parameter $\kappa$ from 1 to 15 in order to extract the Freeze–Out volume by comparing the reduced velocity correlation functions for each value of $\kappa$ with the experimental data.
Figure \[fig1\] shows a comparison between the experimental data [@sang92] and the QMD+SMM calculations using $\kappa = 2, 5$, and $10$ for two different projections of the double differential cross section, $d^2\sigma/dZ\,d\Omega$. Both the data and the calculations are for central collisions only (for details see Ref. [@sang92]). The upper part of figure \[fig1\] shows the charge yield distribution at a laboratory angle of $72^{\circ}$ ($\pm 13^{\circ}$), while the lower part shows the angular distribution of fragments with Z=10. In both cases it can be seen that the variation of the volume parameter $\kappa$ does not influence these semi-exclusive observables, and all calculations agree reasonably well with the data. However, the calculated angular distributions are still somewhat too steep and underpredict the data at backward angles.
The reduced velocity correlation functions shown in figure \[fig2\] have been obtained as the ratio of the true reduced velocity distribution, built from pairs of correlated IMF’s within a single event, to a background distribution built by event mixing, i.e., from pairs of IMF’s taken from different physical events; the mixing removes final-state correlations from the background. In all cases the true ($Y_{true} (v_{red})$) and background distributions ($Y_{back} (v_{red})$) were developed separately for two classes of events: 1) two IMF’s detected on opposite sides of the beam and, 2) two IMF’s detected on the same side of the beam. In the regions where these two correlation functions overlap in relative velocity the correlations obtained were effectively identical, so in this paper the results have, in all cases, been combined into single distributions. The two-fragment correlation function is then calculated according to $$\label{defcorr}
1 + R(v_{red}) = \frac{Y_{true} (v_{red})}{Y_{back} (v_{red})}.$$
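As an aside, the event-mixing construction of eq. \[defcorr\] can be illustrated with a toy sketch (a one-dimensional stand-in for the reduced velocity; this is not the analysis code used for the data, and the binning and mixing statistics are illustrative assumptions):

```python
import random
from collections import Counter

def vred(v1, v2):
    # toy one-dimensional "reduced velocity": absolute velocity difference
    return abs(v1 - v2)

def correlation_function(events, bin_width=0.1, rng=None):
    """Toy estimate of 1 + R(v_red) by event mixing.

    events: list of events, each a list of fragment velocities (floats).
    True pairs come from the same event; background pairs mix fragments
    from two different events, which washes out final-state correlations.
    Returns a {bin_index: ratio} histogram of Y_true / (norm * Y_back).
    """
    rng = rng or random.Random(0)
    true_counts, back_counts = Counter(), Counter()
    # numerator: all IMF pairs within each single event
    for ev in events:
        for i in range(len(ev)):
            for j in range(i + 1, len(ev)):
                true_counts[int(vred(ev[i], ev[j]) / bin_width)] += 1
    # denominator: pairs built from fragments of two different events
    n_back = 20 * sum(len(ev) for ev in events)
    for _ in range(n_back):
        e1, e2 = rng.sample(events, 2)
        back_counts[int(vred(rng.choice(e1), rng.choice(e2)) / bin_width)] += 1
    norm = sum(true_counts.values()) / sum(back_counts.values())
    return {b: true_counts[b] / (norm * back_counts[b])
            for b in true_counts if back_counts[b] > 0}
```

A suppression of the ratio below 1 at small $v_{red}$ would correspond to the Coulomb hole discussed below.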
Fig. \[fig2\] shows the experimental correlation function (symbols) for the three different charge product bins $Z_1 \cdot Z_2 = 25-64 \;$ , $\;65-129\;$ and $\;130-250$ (the smallest detected charge is Z=5 in all cases) compared to the calculated results (lines) for different volume parameters $\kappa$ in the SMM model (recall that $V = V_0
(1+\kappa)$). The curves show fits to the theoretical correlation functions using the fitting function $$\label{fit}
1 + R(v_{red}) = a \frac{ 1 + e^{ \frac{ d-v_{red} }{e} } } {1 + e^{
\frac{ b-v_{red} }{c} } } .$$
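The fitting function of eq. \[fit\] is a ratio of two logistic factors; a minimal sketch follows (the parameter values used here are illustrative, not the fitted ones). By construction, for $v_{red}$ far above both $b$ and $d$ both exponentials vanish and the function approaches $a$, i.e., $a$ sets the uncorrelated plateau.

```python
import math

def corr_fit(v_red, a, b, c, d, e):
    """Fitting function of eq. (fit): a ratio of two logistic factors.

    For large v_red both exponentials vanish, so 1 + R -> a; the pairs
    (b, c) and (d, e) set the positions and widths of the two edges.
    """
    return a * (1.0 + math.exp((d - v_red) / e)) / (1.0 + math.exp((b - v_red) / c))
```

In practice the theoretical correlation functions would be fitted with a least-squares routine such as `scipy.optimize.curve_fit`.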
It can be seen that, in the limit of simultaneous fragment emission, the size of the Coulomb hole can be explained within the QMD+SMM approach if a volume parameter between $\kappa = 2 \;$ and $\;10$ is used; both the results with $\kappa =1$ and $\kappa = 15$ clearly disagree with the data for the heavier fragments. This means that these correlation functions are consistent with a break–up with a Freeze–Out density of $\varrho = 0.1 - 0.3 \; \varrho_0$. For the lightest fragments the smallest volume parameter seems to describe the data best, but here the sensitivity to the Freeze-Out volume is less pronounced. This may indicate that the fragments do not come from a single Freeze-Out volume, but that the smaller fragments are emitted earlier from a smaller volume.
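The quoted density range follows directly from the volume parametrization $V = V_0 (1+\kappa)$, which implies $\varrho/\varrho_0 = 1/(1+\kappa)$; a quick numerical check (illustrative sketch):

```python
def freezeout_density_ratio(kappa):
    """Freeze-Out density in units of rho_0 for V = V0 * (1 + kappa)."""
    return 1.0 / (1.0 + kappa)

# kappa between 2 and 10 brackets the quoted range rho = 0.1 - 0.3 rho_0
ratios = {k: round(freezeout_density_ratio(k), 2) for k in (1, 2, 5, 10, 15)}
```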
These results are in good agreement with previous investigations of the same data using the sequential three-body trajectory code MENEKA [@craig]. In that case an emission time of the order of 500 fm/c was found for a fixed Freeze–Out volume equivalent to $\kappa = 1$. These time scales are comparable to the typical time a fragment needs to traverse a Freeze-Out volume of roughly 5 times the nuclear volume.
Another feature that can be observed in the calculated correlation functions in figure \[fig2\] is the pronounced enhancement at $v_{red} \approx 15-20$. This additional correlation is absent in the present light fragment data and is not seen in the data of the MSU group [@kim] except for very peripheral collisions [@bowman]. Our calculations also show that this effect is more pronounced for larger impact parameters and increases (see figure \[fig2\]) with decreasing SMM volume parameter (a similar behavior has recently been found in the Berlin model [@schapiro] when the excitation energy is decreased). A similar correlation was recently observed for the Au + Au reaction at 150 A MeV bombarding energy [@ross]. In that case the enhancement has its origin in the collective flow of the fragments. In our case, however, we are dealing with a more asymmetric reaction at lower energies, where the collective flow vanishes due to the compensation of the repulsive and attractive nuclear forces. However, the Coulomb repulsion due to a heavy third fragment can produce an additional correlation in the fragment–fragment correlation function. In table \[table1\] we show the average charge of the largest fragment, the average charge asymmetry $<\Delta Z> =
\sqrt{( (Z_1 - Z_2)^2 + (Z_1 - Z_3)^2 + (Z_2 - Z_3)^2)/3} $, and the average multiplicity of IMF’s with Z=4-20, for values of $\kappa$ from 1 to 15, averaged over the impact parameter interval 0-6 fm in the reaction Fe (100 A MeV) + Au. It can be seen that the charge distribution becomes very asymmetric with decreasing $\kappa$, while the average IMF multiplicity decreases by a factor of two (the same holds when the impact parameter is increased at fixed volume parameter). This is a strong indication that the enhancement in the calculated correlation functions in figure \[fig2\] results from a large charge asymmetry.
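The average asymmetry quoted in table \[table1\] is computed per event from the three largest fragment charges; a direct transcription of the formula (the charges in the usage example below are illustrative, not taken from the simulation):

```python
import math

def charge_asymmetry(z1, z2, z3):
    """RMS pairwise charge difference of the three largest fragments:
    <Delta Z> = sqrt(((Z1-Z2)^2 + (Z1-Z3)^2 + (Z2-Z3)^2) / 3)."""
    return math.sqrt(((z1 - z2) ** 2 + (z1 - z3) ** 2 + (z2 - z3) ** 2) / 3.0)
```

A perfectly symmetric break-up gives $<\Delta Z> = 0$, while one dominant heavy fragment drives the asymmetry up, as in the small-$\kappa$ rows of the table.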
The fact that the experimental data in ref. [@craig] do not show such an enhanced peak indicates that the break-up pattern in nature is more symmetric than described by the SMM model. In order to clarify this point we show in figure \[fig3\] the experimental data for events with three detected IMF’s. For this subset of the data, only the two lightest fragments are used to generate the correlation functions; the heaviest fragment is simply tagged as $Z_{max}$. The curves in Fig. \[fig3\] have been generated based on the following $Z_{max}$ selection criteria: a) the numerator in eq. \[defcorr\] contains only pairs from events with $Z_{max} \ge 18$ (note that this refers to the largest fragment) (dotted line), while no requirement has been applied to the fragments in the background pairs (the denominator in eq. \[defcorr\]); b) both the correlated and the background pairs are from events with detected $Z_{max} \ge 18$ (dashed line); c) no cuts on $Z_{max}$ (full line).
One clearly observes that, in accordance with the QMD+SMM results, a large peak is found for trigger condition a), due to the recoil of a heavy third fragment which is not accounted for in the background. This peak goes away when the background fragments are taken from events with a similar charge asymmetry. The full line shows again that the enhancement virtually disappears when no cuts are made on $Z_{max}$.
We have shown that the fragment–fragment reduced velocity correlation functions can be explained within the two–step model QMD+SMM, where the initial, dynamical step of the reaction, modeled using the microscopic QMD approach, is followed by a simultaneous statistical multifragmentation. The correlation functions for the system Fe (100 A MeV, b=0-6 fm) + Au are consistent with a break–up at a Freeze–Out density of $\varrho = 0.1 - 0.3 \; \varrho_0$. These findings are consistent with previous investigations assuming a sequential emission of the fragments from a fixed volume source, where the typical time scale for this process was found to be less than 500 fm/c; such time scales are comparable to the typical time a fragment needs to traverse a Freeze-Out volume of roughly 5 times the nuclear volume. The calculations show an additional correlation due to the Coulomb repulsion from a heavy third fragment in an asymmetric decay. This effect is less pronounced in the experimental data, unless one explicitly triggers on pairs from events with a large charge asymmetry.
[**Acknowledgements:**]{}
This work was supported by the US Department of Energy by LLNL under Contract W-7405-ENG-48. One of us (G.P.) gratefully acknowledges support from the Wissenschaftsausschuss of the NATO via the DAAD.
J.E. Finn et al., A.S. Hirsch, A.T. Minich, B.C. Stringfellow, and F. Turkot, Phys. Rev. Lett. [**49**]{}, 1321 (1982); R.W. Minich et al., J.E. Finn, L.J. Gutay, A.S. Hirsch, and B.C. Stringfellow, Phys. Lett. B[**118**]{}, 458 (1982).
X. Campi, Phys. Lett. B[**124**]{}, 8 (1984); J. Phys. A [**19**]{}, L917 (1986) and Phys. Lett. B [**208**]{}, 351 (1988).
Y. Yariv and Z. Fraenkel, Phys. Rev. C [**20**]{}, 2227 (1979) and Phys. Rev. C [**24**]{}, 488 (1981); K.K. Gudima and V.D. Tonnev, Yad. Fiz. [**27**]{}, 67 (1978) \[Sov. J. Nucl. Phys. [**27**]{}, 67 (1978)\].
G. Peilert et al., W. Greiner, Phys. Rev. C [**39**]{}, 1402 (1989); Phys. Rev. C [**37**]{}, 2451 (1988).
G. Peilert et al., M.G. Mustafa Phys. Rev. C [**46**]{}, 1457 (1992).
J.J. Molitoris, H. Stöcker, and B.L. Winer, Phys. Rev. C [**36**]{}, 220 (1987); Y. Raffray, Nucl. Phys. A [**465**]{}, 317 (1987).
M. Blann, Phys. Rev. Lett. [**54**]{}, 2215 (1985) and Phys. Rev. C [**32**]{}, 1231 (1985); Phys. Rev. C [**40**]{}, 2498 (1989); G. Peilert, H. Stöcker, and W. Greiner, Phys. Rev. C [**44**]{}, 431 (1991).
R.J. Charity et al., Nucl. Phys. A [**453**]{}, 371 (1988); D.R. Bowman et al., ibid. A [**523**]{}, 386 (1991).
W.A. Friedman, Phys. Rev. Lett. [**60**]{}, 2125 (1988).
G. Fai and J. Randrup, Nucl. Phys. A [**381**]{}, 557 (1982) and Nucl. Phys. A [**404**]{}, 551 (1983).
J.P. Bondorf et al., H. Schulz, and K. Sneppen, Nucl. Phys. A [**443**]{}, 321 (1985); J.P. Bondorf et al., Nucl. Phys. A [**444**]{}, 460 (1985).
A.S. Botvina et al., R. Donangelo, and K. Sneppen, Nucl. Phys. A [**475**]{}, 663 (1987); A.S. Botvina, A.S. Iljinov, and I.N. Mishustin, Nucl. Phys. A [**507**]{}, 649 (1990).
D.H.E. Gross and X.Z Zhang, Phys. Lett. B[**161**]{}, 47 (1985); D.H.E. Gross, X.Z. Zhang, and S.Y. Xu, Phys. Rev. Lett. [**56**]{}, 1544 (1986); D. Hahn and H. Stöcker, Nucl. Phys. A [**476**]{}, 718 (1988).
B. Tsang et al., Phys. Rev. Lett. [**71**]{}, 1502 (1993).
T.C. Sangster et al., Th. Blaich, Prog. Part. Nucl. Phys. Vol. 30, 189 (1993) and Phys. Rev. C [**47**]{}, R2457 (1993).
R. Lacey et al., Phys. Rev. Lett. [**24**]{}, 3705 (1993).
Y.D. Kim et al., Phys. Rev. C [**45**]{}, 338 (1992); C.K. Gelbke, W.G. Gong, and S. Pratt Phys. Rev. C [**45**]{}, 387 (1992).
S.E. Koonin, Phys. Lett. B[**70**]{}, 43 (1977); W.G. Gong et al., Phys. Rev. C[**43**]{}, 781 (1991).
A. Elmaani et al., Nucl. Instrum. Methods A[**313**]{}, 401 (1992).
T.C. Sangster et al., R.G. Lanier, S. Kaufman, W. Greiner, Phys. Rev. C[**46**]{}, 1404 (1992).
D.R. Bowman et al., C.K. Gelbke, W. G. Gong, Y.D. Kim, M.A. Lisa, M.B. Tsang, C. Williams, N. Colonna, K. Hanold, M.A. McMahan, G. Wozniak, and L.G. Moretto, Phys. Rev. Lett. [**23**]{}, 3534 (1993).
O. Schapiro, A.R. DeAngelis, and D. Gross, Preprint HMI 1993/P1-Schap 1.
B. Kämpfer et al., Phys. Rev. [**C48**]{}, R955 (1993).
\
Projections of the semi–exclusive double differential cross section $d^2 \sigma/dZ\,d\Omega$ for the reaction Fe (100 A MeV) + Au. The symbols show the data [@sang92] for the fragment charge distributions at $\vartheta_{Lab} = 72^{\circ}$ (upper panel) and the fragment angular distributions for $ Z = 10$ (lower panel). The histograms show the calculations with the QMD+SMM (b = 0 - 6 fm) model for different volume parameters, $\kappa$, as indicated.
Mixed fragment reduced velocity correlation functions for three different constraints on the Coulomb product $Z_1 \cdot Z_2$. The symbols show the data of ref. [@craig], while the curves are fits to equation \[fit\] obtained with the QMD+SMM model. The different lines show calculations done with the volume parameter $\kappa = 1$ (solid line), $\kappa = 2$ (variable length dashed line), $\kappa = 5$ (dashed line), $\kappa = 10$ (dash–dotted line) and $\kappa = 15$ (dotted line).
Experimental correlation function for the lightest two fragments in events with three detected IMF’s. The full line shows the fit to the correlation function obtained from all events, while the dotted line includes only events in which the largest of the three detected fragments has a charge greater than 17. The dashed line shows the fit when both the true and background pairs are from events with $Z_{max} \ge 18$.
$\kappa$ $<Z_{max}>$ $<\Delta Z>$ $<MUL_{Z=4-20}> $
---------- ------------- -------------- -------------------
1 26.5 12.3 2.2
2 22.7 10.5 2.7
5 18.8 8.2 3.4
10 17.9 7.5 3.6
15 16.0 6.2 4.0
: \[table1\]
The average charge of the largest fragment as well as the average charge asymmetry $<\Delta Z> = \sqrt{( (Z_1 - Z_2)^2 + (Z_1 - Z_3)^2 +
(Z_2 - Z_3)^2)/3} $ and the average multiplicity of IMF’s with Z=4-20 for the reaction Fe (100 A MeV, b=0-6 fm) + Au and values of $\kappa$ from 1 to 15.
---
abstract: 'In this paper, we introduce [**TextBrewer**]{}, an open-source knowledge distillation toolkit designed for natural language processing. It works with different neural network models and supports various kinds of tasks, such as text classification, reading comprehension, sequence labeling. TextBrewer provides a simple and uniform workflow that enables quick setup of distillation experiments with highly flexible configurations. It offers a set of predefined distillation methods and can be extended with custom code. As a case study, we use TextBrewer to distill BERT on several typical NLP tasks. With simple configuration, we achieve results that are comparable with or even higher than the state-of-the-art performance. [^1]'
author:
- |
Ziqing Yang$^1$, Yiming Cui$^{2,1}$, Zhipeng Chen$^{1}$,\
**Wanxiang Che$^2$,Ting Liu$^2$, Shijin Wang$^{3,4}$, Guoping Hu$^4$**\
[$^1$Joint Laboratory of HIT and iFLYTEK (HFL), iFLYTEK Research, China]{}\
[$^2$Research Center for SCIR,]{} [Harbin Institute of Technology, China]{}\
[$^3$iFLYTEK AI Research (Hebei), Langfang, China]{}\
[$^4$State Key Laboratory of Cognitive Intelligence, iFLYTEK Research, China]{}\
[{zqyang5,ymcui,zpchen,sjwang3,gphu}@iflytek.com]{}\
[{ymcui,car,tliu}@ir.hit.edu.cn]{}\
bibliography:
- 'my\_nlp.bib'
title: 'TextBrewer: An Open-Source Knowledge Distillation Toolkit for Natural Language Processing'
---
Introduction
============
Large pre-trained language models, such as GPT [@gpt], BERT [@devlin-etal-2019-bert], RoBERTa [@DBLP:journals/corr/abs-1907-11692] and XLNet [@DBLP:journals/corr/abs-1906-08237] have achieved great success in many NLP tasks and greatly contributed to the progress of NLP research. However, one big issue with these models is their high demand for computing resources — they usually have hundreds of millions of parameters and take several gigabytes of memory for training and inference — which makes it impractical to deploy them on mobile or online systems. From a research point of view, we are tempted to ask: is it necessary to have such a big model, containing hundreds of millions of parameters, to achieve high performance? Motivated by the above considerations, some researchers in the NLP community have recently tried to design lite models [@albert], or have resorted to the knowledge distillation technique to compress large pre-trained models into small ones.
Knowledge Distillation (KD) is a technique for transferring knowledge from a teacher model to a student model, which is usually smaller than the teacher. The student model is trained to mimic the outputs of the teacher model. Before the birth of BERT, KD had been applied in NLP to specific tasks such as machine translation [@kim-rush-2016-sequence; @DBLP:conf/iclr/TanRHQZL19]. More recent studies of distilling large pre-trained models focus on finding general distillation methods that work on various tasks, and they are receiving more and more attention [@sanh2019distilbert; @DBLP:journals/corr/abs-1909-10351; @sun-etal-2019-patient; @tang-etal-2019-natural; @DBLP:journals/corr/abs-1904-09482; @clark-etal-2019-bam; @DBLP:journals/corr/abs-1909-11687].
Though a variety of distillation methods have been proposed, they usually share a common workflow: first train a teacher model, then optimize the student model by minimizing losses calculated between the outputs of the teacher and the student. It is therefore desirable to have a reusable distillation workflow framework that treats different distillation strategies and tricks as plugins, so that they can be easily and arbitrarily added to the framework. In this way, we also achieve great flexibility in experimenting with different combinations of distillation strategies and comparing their effects.
In this paper, we introduce TextBrewer, a PyTorch-based [@pytorch-neurips19] knowledge distillation toolkit for NLP that aims to provide a unified distillation workflow, save the effort of setting up experiments, and help users to distill more effective models. TextBrewer provides simple-to-use APIs, a collection of distillation methods, and highly customizable configurations. It has also been proved able to reproduce the state-of-the-art results on typical NLP tasks. The main features of TextBrewer are:
- **Versatility in tasks and models**. It works with a wide range of models, from RNN-based to Transformer-based models. It does not presume any network structures of the teacher and student models. Its usability in tasks like text classification, reading comprehension, and sequence labeling has also been fully tested.
- **Flexibility in configurations**. The distillation process is configured by configuration objects, which can be initialized from JSON files and contain many tunable hyperparameters. If the presets do not meet the user’s requirements, they can extend the configurations with new custom losses, schedulers, etc.
- **Including various distillation methods and strategies**. KD has been studied extensively in computer vision (CV) and has achieved great success there. It is worthwhile to introduce these studies to the NLP community, as some of their methods can also be applied to text. TextBrewer includes a set of methods from both CV and NLP, such as the flow of solution procedure (FSP) matrix loss [@DBLP:conf/cvpr/YimJBK17], neuron selectivity transfer (NST) [@DBLP:journals/corr/HuangW17a], probability shift and dynamic temperature [@DBLP:journals/corr/abs-1911-07471], attention matrix loss, and multi-task distillation [@DBLP:journals/corr/abs-1904-09482]. In our experiments, we will show the effectiveness of applying methods from CV to NLP tasks.
- **Being non-intrusive and simple to use**. *Non-intrusive* means there is no need to modify the existing model code. Users can re-use their existing training scripts, and only minimal changes are required to use TextBrewer to perform distillation.
TextBrewer also provides some useful utilities such as model size analysis and data augmentation to help model design and distillation.
Related Work
============
Recently some distilled BERT models have been released, such as DistilBERT [@sanh2019distilbert], TinyBERT [@DBLP:journals/corr/abs-1909-10351], and ERNIE Slim[^2]. DistilBERT performs distillation on the pre-training task, i.e., masked language modeling. TinyBERT performs transformer distillation at both the pre-training and task-specific learning stages. ERNIE Slim distills ERNIE on a sentiment classification task. Their distillation code is publicly available, and users can replicate their experiments easily. However, it is laborious and error-prone to change the distillation method or to adapt the distillation code to other models and tasks, since the code is not written for general distillation purposes.
There also exist some libraries for general model compression. Distiller [@neta_zmora_2018_1297430] and PaddleSlim[^3] are two versatile libraries supporting pruning, quantization, and knowledge distillation. They focus on models and tasks in computer vision. In comparison, TextBrewer is more focused on knowledge distillation for NLP tasks, is more flexible, and offers more functionalities. Based on PyTorch, it provides simple APIs and rich customization options for fast and clean implementations of experiments.
Architecture and Design
=======================
Figure \[overview\] shows an overview of the main functionalities and architecture of TextBrewer. To support different models and different tasks and meanwhile stay flexible and extensible, TextBrewer provides *distillers* to conduct the actual experiments and configuration classes to configure the behaviors of the distillers.
Distillers
----------
Distillers are the cores of TextBrewer. They automatically train and save models and support custom evaluation functions. Five distillers have been implemented: `BasicDistiller` is used for single-task single-teacher distillation; `GeneralDistiller` additionally supports more advanced intermediate loss functions; `MultiTeacherDistiller` distills an ensemble of teacher models into a single student model; `MultiTaskDistiller` distills multiple teacher models of different tasks into a single multi-task student model. We have also implemented `BasicTrainer` for training teachers on labeled data, to unify the workflows of supervised learning and distillation. All the distillers share the same interface and usage, so they can easily be replaced by one another.
Configurations and Presets
--------------------------
The general training settings and the distillation method settings of a distiller are specified by two configurations: `TrainingConfig` and `DistillationConfig`.
**TrainingConfig** defines settings common to deep learning experiments, including the directories where logs and the student model are stored (`log_dir`, `output_dir`), the device to use (`device`), the frequency of storing and evaluating the student model (`ckpt_frequency`), and so on.
**DistillationConfig** defines the settings pertinent to distillation, where various distillation methods can be configured or enabled. It includes the type of KD loss (`kd_loss_type`), the temperature and weight of the KD loss (`temperature` and `kd_loss_weight`), the weight of the hard-label loss (`hard_label_weight`), the probability-shift switch, schedulers, and intermediate losses, among others. Intermediate losses compute losses between the intermediate states of the teacher and the student, and they can be freely combined and added to the distillers. Schedulers adjust the loss weight or temperature dynamically.
The available values of configuration options such as loss functions and schedulers are defined as dictionaries in presets. For example, the loss function dictionary includes hidden state loss, cosine similarity loss, FSP loss, NST loss, etc.
All the configurations can be constructed from JSON files. Figure \[distill-config\] shows an example of a `DistillationConfig` for distilling BERT$_{\text{\tt BASE}}$ to a 4-layer transformer. See Section \[experiments\] for more details.
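As a textual illustration, a distillation configuration of the kind described above might look like the following Python dictionary. This is a sketch: the option names mirror those mentioned in the text, but the concrete values and the exact schema of TextBrewer's configuration files are illustrative, not the settings used in the experiments.

```python
# Hypothetical distillation configuration sketch. Option names follow
# the paper (kd_loss_type, temperature, intermediate losses, proj);
# values and nesting are illustrative only.
distill_config = {
    "temperature": 8,            # softens teacher logits before the KD loss
    "kd_loss_type": "ce",        # cross-entropy between soft labels
    "kd_loss_weight": 1.0,
    "hard_label_weight": 0.0,    # weight of the loss on gold labels
    # Intermediate losses match teacher/student hidden states; a "proj"
    # entry adds a trainable linear layer when the dimensions differ.
    "intermediate_matches": [
        {"layer_T": 3, "layer_S": 1, "feature": "hidden",
         "loss": "hidden_mse", "weight": 1.0,
         "proj": ["linear", 312, 768]},
    ],
}
```

Such a dictionary could equally be stored as a JSON file and loaded when constructing the configuration objects.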
Workflow
--------
![\[minimal\] A code snippet that demonstrates the minimal TextBrewer workflow. ](minimal.png){width="\columnwidth"}
Before distilling a teacher model using TextBrewer, some preliminary work has to be done:
1. Train a teacher model on a labeled dataset. Users usually train the teacher model with their own training scripts. TextBrewer also provides `BasicTrainer` for supervised training on a labeled dataset.
2. Define and initialize the student model.
3. Build a DataLoader of the dataset for distillation and initialize the optimizer and learning rate scheduler.
The above steps are usually common to all deep learning experiments. To perform distillation, take the following additional steps:
1. Initialize training and distillation configurations, and construct a distiller.
2. Define *adaptors* and a *callback* function.
3. Call the `train` method of the distiller.
A code snippet that shows the minimal workflow is presented in Figure \[minimal\]. The concepts of callback and adaptor will be explained below.
![\[distill-config\] An example of distillation configuration. This configuration is used to distill a 12-layer BERT$_{\text{\tt BASE}}$ to a 4-layer T4-tiny.](distillconfig.png){width="\columnwidth"}
### Callback Function
To monitor the performance of the student model during training, one usually evaluates the student model on a development set at some checkpoints, besides logging the loss curve. TextBrewer supports this through the callback-function argument of the `train` method, as shown in line 24 of Figure \[minimal\]. The callback function receives two arguments: the student model and the current training step. At each checkpoint (determined by `num_train_epochs` and `ckpt_frequency`), the distiller saves the student model and then calls the callback function.
Since it is impractical to implement evaluation metrics and procedures for all NLP tasks, we encourage users to implement their own evaluation functions as callbacks.
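A user-defined callback following the two-argument signature described above could be sketched as follows. Here `evaluate_on_dev` is a hypothetical user function standing in for a task-specific evaluation routine; it is not part of the toolkit.

```python
# Sketch of a user callback with the signature described in the text:
# (student model, current training step).

def evaluate_on_dev(model):
    # Hypothetical placeholder: run the task metric on a development set.
    return {"accuracy": 0.0}

def predict_callback(model, step):
    metrics = evaluate_on_dev(model)
    print(f"step {step}: dev accuracy = {metrics['accuracy']:.4f}")
    return metrics

# The callback would then be passed to the distiller, e.g.
# distiller.train(..., callback=predict_callback).
```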
### Adaptor
The distiller is model-agnostic: it needs a translator to turn the model outputs into meaningful data, and the adaptor plays this role. An adaptor is an interface responsible for explaining the inputs and outputs of the teacher and student to the distiller.
An adaptor takes two arguments: the model inputs and the model outputs. It is expected to return a dictionary with specific keys, where each key explains the meaning of the corresponding value, as shown in Figure \[overview\] (b). For example, `logits` holds the logits of the final outputs, `hidden` the intermediate hidden states, and `attention` the attention matrices, while `inputs_mask` is used to mask padding positions. The distiller takes only the necessary elements from the outputs of the adaptors, according to its distillation configuration. A minimal adaptor only needs to explain the logits, as shown in lines 11–14 of Figure \[minimal\].
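A minimal adaptor of the kind described above could look like the sketch below. It assumes, for illustration, that the model returns a tuple whose first element is the logits; the actual output layout depends on the user's model.

```python
# Sketch of a minimal adaptor: receives the model inputs and outputs
# and returns a dictionary whose keys name the tensors the distiller
# needs. Only "logits" is mandatory for the simplest setup.

def simple_adaptor(batch, model_outputs):
    # Assumption: the model returns (logits, ...) as a tuple.
    return {"logits": model_outputs[0]}

# A richer adaptor could additionally expose intermediate states, e.g.
# {"logits": ..., "hidden": ..., "attention": ..., "inputs_mask": ...}.
```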
Extensibility
-------------
TextBrewer also works with users' custom modules. New loss functions and schedulers can easily be added to the toolkit. For example, to use a custom loss function, one first implements the loss function with a compatible interface, then adds it to the loss-function dictionary in the presets under a custom name; the new loss function then becomes available as an option value in the configuration and can be recognized by the distillers.
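The registration pattern described above can be sketched as follows. The dictionary name `FEATURES_LOSS` and the loss signature are assumptions for illustration; the toolkit's actual preset registry may be named and shaped differently.

```python
# Sketch of adding a custom intermediate loss to a preset dictionary.
# FEATURES_LOSS stands in for the toolkit's loss-function registry.
FEATURES_LOSS = {}

def my_l1_loss(feature_S, feature_T, mask=None):
    # Mean absolute error between student and teacher features
    # (plain-Python stand-in for a tensor-based implementation).
    diffs = [abs(s - t) for s, t in zip(feature_S, feature_T)]
    return sum(diffs) / len(diffs)

# Register under a custom name; that name then becomes a legal value
# for the corresponding option in the distillation configuration.
FEATURES_LOSS["my_l1"] = my_l1_loss
```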
Experiments
===========
In this section, we conduct several experiments to show TextBrewer’s ability to distill large pre-trained models on different NLP tasks and achieve state-of-the-art results.
Settings
--------
**Datasets and tasks.** We conduct experiments on both English and Chinese datasets. For English, we use MNLI [@DBLP:conf/iclr/WangSMHLB19] for text classification, SQuAD 1.1 [@rajpurkar-etal-2016-squad] for span-extraction machine reading comprehension (MRC), and CoNLL-2003 [@tjong-kim-sang-de-meulder-2003-introduction] for named entity recognition (NER). For Chinese, we use XNLI [@conneau2018xnli], LCQMC [@liu-etal-2018-lcqmc], CMRC 2018 [@cui-emnlp2019-cmrc2018] and DRCD [@DBLP:journals/corr/abs-1806-00920]. XNLI is the multilingual version of MNLI; LCQMC is a large-scale Chinese question matching corpus. We use these two datasets to test text classification. CMRC 2018 and DRCD are two span-extraction machine reading comprehension datasets similar to SQuAD.
The statistics are listed in Table \[datasets-statistics\].
**Models.** We choose the BERT$_{\text{\tt BASE}}$ model as the teacher for all tasks. For English tasks, the teacher is initialized with the weights released by Google[^4] and converted into PyTorch format by HuggingFace[^5]. For Chinese tasks, the teacher is initialized with the pre-trained weights of Chinese RoBERTa-wwm-ext[^6] [@chinese-bert-wwm]. We test the performance of several different student models. The structures of the teacher and students are summarized in Table \[model-configurations\]. T6 and T3 are BERT models with fewer transformer layers. T3-small is a 3-layer BERT whose hidden size and feed-forward size are half those of BERT$_{\text{\tt BASE}}$. T4-tiny, which has the same structure as TinyBERT, is a 4-layer model with an even smaller hidden size and feed-forward size. T3-small and T4-tiny are initialized randomly. BiGRU is a single-layer bidirectional GRU that uses the same word embeddings as BERT.
**Training settings**. To keep the experiments simple, we directly distill the teacher model that has been trained on the task, and do not perform task-irrelevant language-modeling distillation in advance. The number of epochs ranges from 30 to 60, and the learning rate of the student is 1e-4 for all experiments unless otherwise specified.
**Distillation settings**. The temperature is set to 8 for all experiments. We add intermediate losses uniformly distributed among all the layers between teacher and student (except for BiGRU). The loss functions we choose are the hidden\_mse loss, which computes the mean squared error between two hidden states, and the NST loss, an effective method from the CV field. Figure \[distill-config\] shows an example of a distillation configuration for distilling BERT$_{\text{\tt BASE}}$ to T4-tiny. Since their hidden sizes differ, we use the `proj` option to add linear layers that match the dimensions; these linear layers are trained together with the student automatically. We experiment with two kinds of distillers: `GeneralDistiller` and `MultiTeacherDistiller`.
Results on English Datasets
---------------------------
Table \[distiilation-results-english\] shows the performance of the students obtained with `GeneralDistiller`. First, we observe that teachers can be distilled to T6 models with only minor losses in performance: all the T6 models reach 99% of the teachers' performance. Second, T4-tiny outperforms TinyBERT although they have the same structure. We attribute this to the NST losses added in the distillation configuration; the result demonstrates the effectiveness of applying a KD method developed in the CV field to NLP tasks. Finally, data augmentation is critical: it significantly improves performance, especially when the training set is small, as with CoNLL-2003.
We next show the effectiveness of `MultiTeacherDistiller`, which distills an ensemble of teachers into a single student model. For each task, we train three teacher models with the same architecture but different seeds; the student has the same architecture as the teachers. The learning rate is set to 3e-5, and intermediate losses are not used. Table \[multi-teacher-distillation\] shows the results: the student achieves the best performance, even higher than the ensemble results.
Results on Chinese Datasets
---------------------------
Table \[distillation-results-chinese\] shows the results on the Chinese datasets. All the distillation experiments were performed with `GeneralDistiller`. Since CMRC 2018 and DRCD have relatively small training sets, data augmentation has a much more significant effect on student performance on these two tasks; in particular, when the student model is randomly initialized (T3-small and T4-tiny), distillation without DA leads to poor performance.
Conclusion and Future Work
==========================
In this paper, we present TextBrewer, a flexible PyTorch-based distillation toolkit for NLP research and applications. TextBrewer provides rich customization options for users to compare different distillation methods and build their strategies. We have conducted a series of experiments, and the results show that the distilled models can achieve state-of-the-art results with simple settings.
Apart from the distillation strategies, the structure of the student is also critical to its performance. In the future, we will continue to incorporate more distillation strategies, and integrate neural architecture search (NAS) into the toolkit to automate the searching for model structures.
Acknowledgments {#ack .unnumbered}
===============
This work was supported by the National Natural Science Foundation of China (NSFC) via grants 61976072, 61632011, and 61772153.
[^1]: TextBrewer: <http://textbrewer.hfl-rc.com>
[^2]: https://github.com/PaddlePaddle/ERNIE
[^3]: https://github.com/PaddlePaddle/PaddleSlim
[^4]: https://github.com/google-research/bert
[^5]: https://github.com/huggingface/transformers
[^6]: https://github.com/ymcui/Chinese-BERT-wwm
---
abstract: 'Edge computing is a distributed computing paradigm that relies on computational resources of end devices in a network to bring benefits such as low bandwidth utilization, responsiveness, scalability and privacy preservation. Applications range from large scale sensor networks to IoT, and concern multiple domains (agriculture, supply chain, medicine…). However, resource usage optimization, a challenge due to the limited capacity of edge devices, is typically handled in a centralized way, which remains an important limitation. In this paper, we propose a decentralized approach that relies on a combination of blockchain and consensus algorithm to monitor network resources and if necessary, migrate applications at run-time. We integrate our solution into an application container platform, thus providing an edge architecture capable of general purpose computation. We validate and evaluate our solution with a proof-of-concept implementation in a national cultural heritage building.'
author:
- Aleksandar Tošić
- Jernej Vičič
- Michael Mrissa
bibliography:
- 'ref.bib'
title: 'A Blockchain-based Decentralized Self-balancing Architecture for the Web of Things '
---
Introduction
============
In the last few years, edge computing has received a lot of attention as an alternative to cloud computing, due to the multiple advantages it offers, such as low bandwidth usage, responsiveness, scalability [@mach2017mobile] and privacy preservation [@satyanarayanan2017emergence]. Edge computing has become possible thanks to the evolution of devices that offer more computational power than ever. Combined with application container platforms such as Docker [@anderson2015] that mask heterogeneity problems, connected devices can form a homogeneous distributed run-time environment. Additionally, orchestration engines (e.g., Kubernetes[^1]) have been developed to manage and optimize the usage of network, memory, storage and processing power across edge devices, and to improve the global efficiency, scalability and energy management of edge platforms. However, such solutions are centralized: they represent a single point of failure (SPOF), which entails several drawbacks, such as a lack of reliability and security. The problem is critical enough that high-availability extensions have been explored, for instance for Kubernetes[^2].
In this paper, we propose to tackle this problem with a decentralized algorithm that monitors network resources to drive application execution. Our solution relies on an original combination of blockchain-like shared data structure, consensus algorithm and containerized monitoring application to enable run-time migration of applications, when relevant, according to the network state. It provides several advantages, such as verifiable optimal usage of all devices on the network, better resilience to disconnection, independence from cloud connection, improved privacy and security.
The remainder of this paper is organized in 7 sections. Section \[sec:motivation\] introduces our motivating scenario related to a cultural heritage building and shows the need for a decentralized approach. Section \[sec:related\] overviews relevant related work and highlights the originality of our approach. Section \[sec:architecture\] details our proposed architecture and shows how it drives run-time migration of applications on the edge. Section \[sec:node\_application\] presents our network monitoring application and shows how the monitoring takes place. In Section \[sec:implementation\], we propose a technical implementation, and we validate and evaluate our solution with a proof-of-concept prototype related to our cultural heritage scenario. Section \[sec:conclusion\] discusses the results obtained and gives insights for possible future work.
Motivating Scenario {#sec:motivation}
===================
In this section, we illustrate the relevance of our approach with a scenario related to a Slovenian cultural heritage building located in Bled, Slovenia. This building has been equipped with multiple sensors to monitor its evolution. The collected data includes temperature, CO2, relative humidity, Volatile Organic Compounds (VOC), ambient light and atmospheric pressure. In this scenario, the following constraints motivate the need for a fully decentralized edge computing approach:
- Privacy: data collected about the state of the deployed technological solution is classified as sensitive. Although data about the building could be sent to the cloud, data about the state of resources needs to remain local, accessible only for administration purposes and for the deployed solution to self-manage.
- Reliability: centralized orchestration is not appropriate as data collection needs to be resilient to failure of any device. The network of devices needs to adjust to device disconnection any time and keep operating in an optimal way.
- Cost: the overall cost is reduced by avoiding investment in a cloud infrastructure, which involves monthly payments and a permanent connection to maintain.
- Scalability: as the number of devices will evolve over time, it is necessary for the solution to be able to adjust to changes and homogeneously spread the computation over the network.
- Performance: reactivity to external events is improved if processing is performed on-site.
- Cost effectiveness: using the existing devices that control the sensors to perform the necessary processing reduces the resource requirements of cloud-based solutions, which further reduces cost.
In this context, it is relevant to equip devices with the capacity to run applications locally and to self-manage the global network load and distribute it over connected devices, according to the state of the network. In the next section, we present related work and show the need for a decentralized self-managed platform on the edge. We also overview existing solutions to abstract from platform heterogeneity and justify the technological choice of a container platform to support our solution.
Background Knowledge and Related Work {#sec:related}
=====================================
Orchestration solutions for edge computing
------------------------------------------
Strictly speaking, orchestration always represents control from one party's perspective. This differs from choreography, which is more collaborative and allows each involved party to describe its part in the interaction [@peltz2003web]. However, to the authors' knowledge, there are no choreography solutions that tackle the problems defined in the previous section. Existing orchestration solutions typically rely on a master/slave model, where a node is put in charge of the network and allocates applications to nodes according to an optimization algorithm.
Kubernetes [@hightower2017kubernetes] is the most widely used orchestration tool: it is the go-to tool in the Google cloud and is also used on the Microsoft Azure platform and similar products. It is the most feature-rich orchestration tool available [@medel2016modelling] and has strong community support across many different cloud platforms (in addition to Google cloud: OpenStack, AWS, Azure).
AWS Elastic Container Service (AWS ECS) [@acuna2016amazon], Amazon's native container orchestration tool, is the best option for orchestrating AWS services, as it is fully integrated into the Amazon ecosystem and thus works easily with other AWS tools. Its biggest limitation is that it is restricted to Amazon services.
Docker Swarm[^3] ships directly with Docker (and integrates with Docker Compose) and is arguably the simplest to configure. However, it lacks some of the advanced monitoring options of products like Kubernetes.
The Apache Mesos-based DC/OS[^4] is a "distributed operating system" running on private and public cloud infrastructure; it abstracts the resources of a cluster of machines and provides common services.
All the presented architectures share a common flaw: a single point of failure and a lack of integration with edge computing.
Container platforms
-------------------
Containers, as used in this paper, run as a group of namespaced processes within an operating system, avoiding the overhead of starting and maintaining virtual machines while providing most of their functionality. Application containers, such as Docker, encapsulate the files, dependencies and libraries of an application running on an OS, as opposed to system containers, such as LXC, which encapsulate a whole operating system and are in this respect more similar to virtual machines. The key advantage of containers over virtual machines is that they are lightweight with respect to resources.
Docker [@anderson2015] is the de facto standard among open-source application container platforms and made containers mainstream.
CoreOS's rkt[^5] offers functionality similar to Docker's. Rkt is the container runtime from CoreOS and, like Docker, is designed for application containers. Its market share is still much lower than Docker's, but it is rising, and with the recently announced merger of Red Hat and CoreOS it presents a viable alternative.
LXC[^6], short for Linux Containers, is the container runtime and toolset that helped make Docker possible. LXC predates Docker by several years, and Docker was originally based on LXC (it no longer is), but LXC itself gained little traction.
LXD[^7] is a container platform based on LXC. Essentially, LXD provides an API for controlling the LXC library, as well as easy integration into OpenStack. It is backed by Canonical, the company that develops Ubuntu Linux and the primary backer of LXD development at the time of writing.
Unlike Docker and rkt, LXC and LXD are system containers and as such out of the scope of this paper. We selected Docker for our research, as it is the most widely used platform and one of the few that can migrate applications at run-time and enables easy communication. Migration is done by pausing a container, dumping the context of the paused container, and transferring the context to a different host, which can resume execution given that context.
Decentralized Self-managing IoT Architectures
---------------------------------------------
Much work has proposed solutions to enable fully decentralized self-managing architectures for the IoT. For example, [@maior2014self] focuses on a decentralized solution for energy management in IoT architectures connected to smart power grids. In [@higgins2011distributed], the authors propose a distributed IoT approach to electrical power demand management based on "distributed intelligence" rather than "traditional centralized control", with improvements at many levels. In [@suzdalenko2013instantaneous], the authors further develop the former approach by creating a decentralized distributed model of an IoT where consumers can freely join and leave the system automatically at any time. In [@niyato2011machine], a system that uses machine-to-machine (M2M) communication is presented to reduce the costs of a home energy management system. dSUMO [@bragard2017self] is a distributed and decentralized microscopic simulation that eliminates the central entity and thus overcomes the synchronization bottleneck. In [@al2018energy], the authors demonstrate the effectiveness of a publish/subscribe messaging model as the connection means for indoor localization with Wireless Sensor Networks (WSNs) through a middleware; the results showed that RSS-based localization achieves acceptable accuracy for multiple types of applications.
However, all the aforementioned contributions are different from the solution we propose in this paper, at two levels. First, they mostly focus on a single specific aspect and find an optimal solution for it, without considering the fact that an IoT architecture involves multiple criteria that require optimization. In our work, we already consider multiple criteria to optimize application migration, while envisioning that this number of criteria can increase in the future. Second, as far as we know, there is no approach that combines blockchain-like data structure and consensus algorithms in a single framework with the objective to drive application migration at run-time on the edge, which is the main contribution of this paper.
A Decentralized Self-managing Architecture {#sec:architecture}
==========================================
In the following, we describe the general architecture that supports our edge computing platform. Devices on the edge are nodes running the node software and a containerization software. A node can join the network by following a network protocol for exchanging known nodes, and it participates by executing the consensus algorithm. Nodes keep discovering the network by asking connected nodes for peers. For the sake of simplicity, we consider that the number of nodes remains reasonably limited, so that large-scale discovery issues remain out of the scope of this paper.
Our devices are equipped to allow a specific containerized application (called the node app) to introspect the state of the node and handle the diffusion of this information over the network. It is also responsible for keeping the information about the other nodes up to date, for participating in the consensus algorithm, and for listening to messages coming from the exposed node API.
![Architecture of an edge device software platform[]{data-label="fig:node"}](Node){width="0.7\linewidth"}
Figure \[fig:node\] shows the key components of Nodes in the system. The node software is compiled into a container, in our case Docker. The container mounts a direct socket to the containerization service for querying the state of the system and managing local containers.
Node Application {#sec:node_application}
================
Every 500 milliseconds, each device collects information about the state of its neighbours. Typically, a state is a vector of scores that describes the device state and the applications being executed by the node. In this work, we define a state to be a matrix of vectors $$S(APP, CPU, RAM, DISK, NETWORK, TIMESTAMP)$$ where each vector represents an application being executed by the node and the corresponding resource consumption. Resources are reported as fractions of the total available. In order to have comparable values between nodes, reporting on CPU usage and network utilization requires some engineering, which is outside the scope of this paper.
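The state matrix above can be sketched concretely as follows: one row per running application, each row carrying the resource fractions and a timestamp. The field names mirror $S(APP, CPU, RAM, DISK, NETWORK, TIMESTAMP)$; the concrete representation is illustrative, not the paper's actual data structure.

```python
import time

# One row of the state matrix: an application and its resource
# consumption, each resource reported as a fraction of the total.
def app_row(app, cpu, ram, disk, network):
    return {"app": app, "cpu": cpu, "ram": ram,
            "disk": disk, "network": network,
            "timestamp": time.time()}

# A node state is the list (matrix) of per-application rows.
def node_state(apps):
    return [app_row(*a) for a in apps]

# Illustrative state for a node running two applications.
state = node_state([("sensor-logger", 0.12, 0.25, 0.05, 0.01),
                    ("http-gateway", 0.30, 0.10, 0.02, 0.08)])
```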
Monitoring resources within the P2P network is done by having nodes maintain a list of scores of other nodes. All nodes periodically broadcast digitally signed messages containing their score. All nodes follow simple P2P broadcasting rules that guarantee finality and efficiency in message propagation.
- If the elapsed time is greater than $\Delta ST$, broadcast a signed message containing one's own score.
- When receiving a new score message, check whether the message was received before (by comparing digital signatures).
- If the message was not seen before, broadcast it to all connected nodes except the originating node.
Here, $\Delta ST$ is configurable and should depend on the time interval of the consensus algorithm. The score pool hence contains the scores of all nodes participating in the network. Each score has a corresponding time-stamp, which is later used by elected nodes to create a migration strategy.
For improved efficiency, every score-message broadcast is prefaced with a "Do you need this" (DYNT) message containing only the digital signature of the message. The full message is then sent only to the nodes that reply to the DYNT message, which minimizes bandwidth use.
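The broadcasting rules above can be sketched in a few lines: a node rebroadcasts a signed score message only the first time it sees it, identified by its signature, and never back to the node it came from. This is a simplified model; signatures are mocked as strings, real nodes would verify them cryptographically, and the DYNT round-trip is omitted.

```python
class GossipNode:
    def __init__(self, name):
        self.name = name
        self.peers = []        # connected GossipNode objects
        self.seen = set()      # signatures of messages already handled
        self.score_pool = {}   # latest known score per origin node

    def receive(self, message, sender=None):
        sig = message["signature"]
        if sig in self.seen:
            return             # duplicate: drop, do not rebroadcast
        self.seen.add(sig)
        self.score_pool[message["origin"]] = message["score"]
        # Rebroadcast to every peer except the one we got it from.
        for peer in self.peers:
            if peer is not sender:
                peer.receive(message, sender=self)

# Three fully connected nodes; a score from A reaches B and C exactly once.
a, b, c = GossipNode("A"), GossipNode("B"), GossipNode("C")
a.peers, b.peers, c.peers = [b, c], [a, c], [a, b]
a.receive({"origin": "A", "score": 0.42, "signature": "sig-A-1"})
```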
Consensus algorithm {#subsec:consensus}
-------------------
The network requires a consensus algorithm to avoid race conditions when migrating applications. The choice of a consensus algorithm depends on the requirements of the implementation and domain of application. In general, any consensus based on leader election can be plugged in. Examples of such consensus algorithms are Paxos [@lamport2001paxos], Raft [@ongaro2014search], PoET [@olson2018sawtooth], etc.
The elected leader is responsible for creating a migration plan and including the resource-consumption estimates in a block. The block is digitally signed so that other nodes can verify that it originates from the elected leader. Nodes receiving a new block must verify the migration plan by computing it locally and comparing the results. If the migration plans are equal, they act on it; otherwise, they discard the block and wait for a new one. With these simple protocol rules in place, the network is Byzantine fault tolerant [@castro1999practical].
A migration strategy is analogous to a block in blockchain-based systems. Blocks contain all the data shared among nodes in the network and include a digital signature of the previous block, thus creating a block chain. In order to create the digital signature of block $n+1$, a node needs to have the digital signature of block $n$. A well-formed block can be verified by other nodes that also have block $n$. In the case of a malformed block, verification will fail and nodes will reject the block, thus forcing the nodes to agree on the shared data. The block serves as an instruction set mapping applications to nodes. Consider a case with 4 nodes in a set $N$, denoted by $A, B, C$ and $D$ respectively. All nodes share their score and keep a local copy of the reported scores of the other nodes. Each node also stores a vector of applications $v \in V$ that need to be executed. Table \[table:block\] shows an example of a block $k$ which assigns every $v \in V$ to a node $n \in N$. To create block $k+1$, a node elected as leader computes an assignment such that the use of resources is optimal. The input to the algorithm is limited to the block data, to ensure a determinism that can enforce consensus. The algorithm itself depends on the application domain, and exploring the available possibilities will be the subject of future work. In this paper, we use the simple algorithm sketched below, which is deterministic and takes only the block data as input.
$Max \gets FindMaxLoadedNode(BlockData)$\
$Min \gets FindMinLoadedNode(BlockData)$
  -------- ------ ------ ------ ------ -----------------
  V        Node   RAM    DISK   CPU    Average Latency
  $v_0$    A      50%    23%    90%    23ms
  $v_1$    B      47%    87%    23%    33ms
  $v_2$    C      12%    25%    15%    51ms
  $v_3$    A      35%    14%    56%    101ms
  $v_4$    D      25%    74%    16%    9ms
  -------- ------ ------ ------ ------ -----------------
: Block data
\[table:block\]
Once a block is created, the currently reported scores are included in it; they will be used to compute block $k+2$. Additionally, blocks carry meta-data such as the block hash and the previous block's hash to facilitate their use.
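The naive strategy and its verification can be sketched as follows: the leader moves one application from the most loaded node to the least loaded one, and followers recompute the plan from the same block data and accept the block only if the results match. The `block_data` layout is a simplified assumption; a real block also carries scores, signatures and timestamps.

```python
# block_data maps node -> list of (application, cpu_fraction) pairs.
def make_plan(block_data):
    load = {n: sum(c for _, c in apps) for n, apps in block_data.items()}
    src = max(load, key=load.get)   # FindMaxLoadedNode
    dst = min(load, key=load.get)   # FindMinLoadedNode
    if src == dst or not block_data[src]:
        return None                 # nothing to migrate
    app, _ = block_data[src][0]
    return {"app": app, "from": src, "to": dst}

def verify_block(block_data, proposed_plan):
    # Determinism lets every node recompute and compare the plan.
    return make_plan(block_data) == proposed_plan

block_data = {"A": [("v0", 0.9), ("v3", 0.56)],
              "B": [("v1", 0.23)],
              "C": [("v2", 0.15)]}
plan = make_plan(block_data)
```

Because `make_plan` depends only on the block data, every node derives the same plan, which is exactly what makes the recompute-and-compare verification possible.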
Implementation and Evaluation {#sec:implementation}
=============================
Technical Implementation
------------------------
As described in Section \[sec:motivation\], we have implemented and evaluated our solution with a set of sensors deployed in the cultural heritage building Mrakova Domačija in Bled, Slovenia. Each sensor is connected to a Raspberry Pi device that hosts Alpine Linux and a Docker container. We developed our node application inside a container; it relies on Docker's introspection capacity (the `docker stats` command, called from our Java program) to collect information about each device. The application also hosts an HTTP server[^8] that allows communication with other nodes through a RESTful API operating as follows:
- HTTP GET returns a representation of the target node, which includes information about the state of the device as well as all the necessary information about the node (e.g. last connection time, average connection time…).
- HTTP PUT sends information to the target node about the state of the source node. Such requests are useful for nodes to send their neighbours information about their current state. HTTP PUT allows system designers to specify URLs where shared information is stored (for example <http://192.168.1.15/shared>).
- HTTP POST plays the same role as HTTP PUT but applies to new devices, so that the data is added to the shared pool and does not replace existing data.
- HTTP DELETE is used when a node leaves the network in a predictable way, so that its state information is removed from the shared pool without going through a time-out.
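The semantics of the four verbs over the shared score pool can be sketched as operations on a dictionary keyed by node identifier. The HTTP layer itself is omitted; this is an illustrative model of the behavior described above, not the actual node implementation (which is written in Java).

```python
class SharedPool:
    def __init__(self, own_state):
        self.own_state = own_state  # this node's own state
        self.pool = {}              # shared pool: node id -> state

    def get(self):
        # HTTP GET: a representation of this node, state included.
        return {"state": self.own_state, "peers": list(self.pool)}

    def put(self, node_id, state):
        # HTTP PUT: replace a known neighbour's state.
        self.pool[node_id] = state

    def post(self, node_id, state):
        # HTTP POST: add a new device without overwriting existing data.
        self.pool.setdefault(node_id, state)

    def delete(self, node_id):
        # HTTP DELETE: graceful departure, no time-out needed.
        self.pool.pop(node_id, None)
```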
Validation and Evaluation
-------------------------
To validate the feasibility of our approach and test its scalability, we ran performance-simulation test cases. In each test case, a fixed number of nodes formed a P2P network, and nodes were assigned applications to execute. Each application had a random execution time and a preset resource consumption expressed as a fraction between 5% and 40%. For the sake of simplicity, only one resource (CPU) was used. The simulation ran for 100 blocks with a block time of 1 second. Applications were queued until the average load of the entire system rose above 90%. The migration strategy was implemented following the algorithm described in Section \[subsec:consensus\]. Applications arrived in the queue with a certain probability, which was gradually increased with the number of nodes in the system. From the reported resource loads of the nodes (in %), we compute the standard deviation as a measure of how balanced the resource consumption is.
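The balance metric used above is simply the standard deviation of the per-node load percentages; lower values mean the load is spread more evenly across the network. A minimal sketch, with illustrative load figures:

```python
from statistics import pstdev

# Population standard deviation of per-node loads, in percent.
def balance_metric(loads_percent):
    return pstdev(loads_percent)

# A well-balanced network scores low; an unbalanced one scores high.
balanced = balance_metric([48, 50, 52, 50])
unbalanced = balance_metric([5, 90, 10, 95])
```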
[.5]{} ![Simulation results[]{data-label="fig:test"}](5_s "fig:"){width="\linewidth"}
[.5]{} ![Simulation results[]{data-label="fig:test"}](25_s "fig:"){width="\linewidth"}
[.5]{} ![Simulation results[]{data-label="fig:test"}](50_s "fig:"){width="\linewidth"}
[.5]{} ![Simulation results[]{data-label="fig:test"}](100_s "fig:"){width="\linewidth"}
In Fig. \[fig:test\], we observe that the standard deviation remains low even as the number of applications in the system grows. The larger swings in standard deviation seen in the lower-load cases are expected, due to the low number of applications. The crossover happens when the number of applications exceeds the number of nodes; below this threshold, there are bound to be nodes that do not run any applications. Fig. \[fig:5nodes\] shows that when the number of nodes is low, resource balancing between nodes becomes effective earlier, which explains why the swings are less marked there than in the other figures, which correspond to test cases where the simulation takes longer to reach the crossover point at which a higher number of applications is distributed over a lower number of nodes.
From the simulation results we conclude that the architecture can scale with the growing number of nodes in the network. Additionally, the naive algorithm for creating a migration strategy performed well in distributing load across the system.
Discussion and Conclusion {#sec:conclusion}
=========================
In this paper, we propose a decentralized solution to the resource usage optimization problem, a typical issue in edge computing. Our solution avoids the single point of failure that centralized architectures suffer from and improves network resilience, as it does not depend on a master node. To design our solution, we have combined a blockchain-like shared data structure and a consensus algorithm with a monitoring application that runs on top of the Docker platform. This combination allows edge devices to check at run-time whether an application needs to be migrated, and to reach consensus on a decision to do so. With our contribution, edge devices become a completely decentralized and distributed run-time platform. We have implemented and evaluated our solution with a set of sensors deployed in a cultural heritage building in Bled, Slovenia.
Results show that our approach is able to adjust and normalize the application load over a set of nodes. Because the algorithm we use is deterministic and all the data is stored in a distributed structure, it also becomes possible to verify all the decisions that have been taken to optimize the usage of edge devices. The consensus algorithm we use further allows the global network behaviour to adjust to nodes entering or leaving the network.
Several limitations have been identified that give insights for future work. First, it is important to observe how adding and removing devices affects network behaviour and to explore how scalable our approach is over a large number of devices. Second, it seems appropriate to determine which specific aspects of a use case indicate the consensus algorithm most suitable for deploying our solution, in order to best match the use case requirements. Third, future work includes semantically describing applications and the services that edge devices offer, to support application migration, and combining in the same architecture the need for efficiently managing network resources with the needs of applications in terms of functionality and quality of service.
Acknowledgment
==============
The authors gratefully acknowledge the European Commission for funding the InnoRenew CoE project (Grant Agreement \#739574) under the Horizon2020 Widespread-Teaming program and the Republic of Slovenia (Investment funding of the Republic of Slovenia and the European Union of the European Regional Development Fund).
[^1]: <https://kubernetes.io/>
[^2]: <https://kubernetes.io/docs/setup/independent/setup-ha-etcd-with-kubeadm>
[^3]: <https://github.com/docker/swarm>
[^4]: <https://dcos.io/>
[^5]: <https://coreos.com/rkt/>
[^6]: <https://linuxcontainers.org/>
[^7]: <https://linuxcontainers.org/lxd/introduction/>
[^8]: Please note that CoAP could be used for energy saving purposes.
---
abstract: 'We study the twisted Novikov homology of the complement of a complex hypersurface in general position at infinity. We give a self-contained topological proof of the vanishing (except possibly in the middle degree) of the twisted Novikov homology groups associated to positive cohomology classes of degree one defined on the complement.'
address:
- 'Department of Mathematics, University of Regensburg, Germany.'
- 'Department of Mathematics, University of Wisconsin-Madison, USA.'
author:
- Stefan Friedl
- Laurentiu Maxim
title: Twisted Novikov homology of complex hypersurface complements
---
Introduction
============
Novikov homology was originally introduced for the purpose of generalizing classical Morse theory to the context of arbitrary closed one-forms, e.g., see [@F04] for an overview. This theory and its variants have fascinating applications in dynamical systems, symplectic topology, geometric group theory, knot theory, etc. For example, twisted Novikov homology can detect fibering of knots. In relation to geometric group theory, the so-called BNSR-invariants of a group, which contain important information on the finiteness properties of certain subgroups, can be described in terms of vanishing results in Novikov homology (e.g., see [@FGS; @PS10]). Other topological implications of vanishing of Novikov homology have been derived through the language of barcodes and Jordan cells (e.g., see [@B]).
In this note, we study the twisted Novikov homology of complements to complex hypersurfaces in general position at infinity. We give a self-contained topological proof of the vanishing (except possibly in the middle degree) of the twisted Novikov homology groups associated to positive cohomology classes of degree one defined on the complement. Classical Novikov homology of (essential) hyperplane arrangement complements has been studied in [@KP15] by Morse-theoretical methods.
Let $M$ be a topological space. Throughout the paper we assume that all topological spaces are connected and that they admit universal coverings. Furthermore we make the canonical identifications $H^1(M;{\mathbb{R}})=\Hom(H_1(M,{\mathbb{Z}}),{\mathbb{R}})=\Hom(\pi_1(M),{\mathbb{R}})$. An [*[admissible pair]{}*]{} for $M$ is an epimorphism $\psi\colon \pi_1(M)\to \Gamma$ to a free abelian group together with some $\xi \in H^1(M;{\mathbb{R}})=\Hom(\pi_1(M),{\mathbb{R}})$ such that $\xi\colon \pi_1(M)\to {\mathbb{R}}$ factors through $\psi$. By a slight abuse of notation we denote the unique induced homomorphism $\Gamma\to {\mathbb{R}}$ by $\xi$ as well.
In Section \[defNB\], we will associate to $(M,\psi,\xi)$, with $(\psi,\xi)$ an admissible pair together with a representation $\alpha\colon \pi_1(M)\to \gl(k,S)$ over a domain $S$, the [*twisted Novikov-Betti numbers*]{} $b_i^\a(M,\psi,\xi)$ and the [*twisted Novikov torsion numbers*]{} $q_i^\a(M,\psi,\xi)$. The classical Novikov-Betti numbers $b_i(M,\xi)$ and Novikov torsion numbers $q_i(M,\xi)$ as defined in [@F04] can be recovered by taking $\alpha$ to be the trivial one-dimensional representation $\pi_1(M)\to \gl(1,{\mathbb{Z}})$ and by taking $\psi=\xi\colon \pi_1(M)\to \Gamma:=\mbox{Im}(\xi)\subset {\mathbb{R}}$.
Before we state our main theorem we recall that for a complex hypersurface $X\subset {\mathbb{C}}^n$ the first homology of the complement $M_X$ has a basis that is given by the choice of a positive meridian for each irreducible component of $X$. We say that a homomorphism $\xi\colon H_1(M_X;{\mathbb{R}})\to {\mathbb{R}}$ is [*positive*]{}, if $\xi$ maps all meridians to positive real numbers.
\[mt\]\[mainthm\] Let $X \subset {\mathbb{C}}^n$ be a complex hypersurface in general position at infinity, with complement $M_X$. Then for any admissible pair $(\psi\colon \pi_1(M_X)\to \Gamma,\xi \in H^1(M_X;{\mathbb{R}}))$ such that $\xi$ is positive, together with a representation $\alpha\colon \pi_1(M_X)\to \gl(k,S)$ over a domain $S$, we have $$\label{B} b_i^\alpha(M_X,\psi,\xi)=\begin{cases} 0, & i\ne n,\\ (-1)^{n}k\chi(M_X), & i=n.\end{cases}$$ In particular, $$\label{E}(-1)^n \chi(M_X) \geq 0.$$ Moreover, all twisted Novikov torsion numbers of $(M_X,\psi,\xi)$ vanish, that is: $$\label{tor} q_i^\alpha(M_X,\psi,\xi)=0, \mbox{ for all } i\geq 0.$$
Conventions. {#conventions. .unnumbered}
------------
All domains are understood to be commutative.
Acknowledgement. {#acknowledgement. .unnumbered}
----------------
S. Friedl gratefully acknowledges the support provided by the SFB 1085 ‘Higher Invariants’ at the University of Regensburg, funded by the Deutsche Forschungsgemeinschaft (DFG). L. Maxim is supported by grants from NSF, NSA, and by a fellowship from the Max-Planck-Institut für Mathematik, Bonn.
Preliminaries {#pre}
=============
Novikov rings
-------------
Let $\Gamma$ be a free abelian group and let $S$ be a domain.
1. We say that $p\in S[\Gamma]$ is a [*[monomial]{}*]{} if there exists $\gamma\in \Gamma$ and a unit $s\in S$ such that $p=s\gamma$.
2. Given $\xi \in \Hom(\Gamma,{\mathbb{R}})$ and $p=\sum_{\gamma\in \Gamma}n_\gamma \gamma\in S[\Gamma]\setminus \{0\}$ we write $$m_\xi(p):=\max\{ \xi(\gamma)\,|\,n_\gamma\ne 0\}$$ and we write $$t_\xi(p):=\sum_{\xi(\gamma)=m_\xi(p)} n_\gamma\gamma.$$
3. Given $\xi \in \Hom(\Gamma,{\mathbb{R}})$ we write $$T_\xi S[\Gamma]:=\{ p\in S[\Gamma]\setminus \{0\}\,|\,\mbox{$t_\xi(p)$ is a monomial}\}.$$ Furthermore we write $${{\mathcal R}}_\xi S[\Gamma]:=(T_\xi S[ \Gamma])^{-1}S[\Gamma].$$ We refer to ${{\mathcal R}}_\xi S[\Gamma]$ as the [*rational Novikov completion of $S[\Gamma]$ with respect to $\xi$*]{}.
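To illustrate these definitions with a small example of our own (not taken from the text), take $S={\mathbb{Z}}$ and $\Gamma=\langle t\rangle\cong{\mathbb{Z}}$ with $\xi(t)=1$:

```latex
% For p = 2 + t: m_\xi(p) = 1 and t_\xi(p) = t, a monomial, so
% p \in T_\xi \mathbb{Z}[\Gamma] and p becomes invertible in
% \mathcal{R}_\xi \mathbb{Z}[\Gamma].
% For q = 1 + 2t: t_\xi(q) = 2t, and 2 is not a unit of \mathbb{Z},
% so q \notin T_\xi \mathbb{Z}[\Gamma].
t_\xi(2+t) = t, \qquad t_\xi(1+2t) = 2t .
```

Which elements get inverted thus depends on $\xi$: for $\xi(t)=-1$ the roles of the two elements above are reversed.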
We recall the following well-known lemma.
\[lem:units-in-rc\] Let $\Gamma$ be a free abelian group and let $S$ be a domain. Then $T_\xi S[\Gamma]$ consists precisely of the elements of $S[\Gamma]$ which are invertible in ${{\mathcal R}}_\xi S[\Gamma]$.
Let $p\in S[\Gamma]$ be an element that is invertible in ${{\mathcal R}}_\xi S[\Gamma]$. This means that there exists an $r\in T_\xi S[\Gamma]$ and a $q\in S[\Gamma]$ such that $p\cdot r^{-1}q=1$. In particular $p\cdot q=r$. Since $S$ is a domain it follows that $t_\xi \colon S[\Gamma]\setminus \{0\}\to S[\Gamma]\setminus \{0\}$ is multiplicative. We now see that $$t_\xi(p)\cdot t_\xi(q)\,=\,t_\xi(r),$$ which is a monomial with unit coefficient, hence a unit in $S[\Gamma]$. It follows that $t_\xi(p)$ is a unit in $S[\Gamma]$. Since $S$ is a domain we deduce that $t_\xi(p)$ is a monomial, i.e. $p\in T_\xi S[\Gamma]$.
Novikov-Betti and torsion numbers {#defNB}
---------------------------------
Let $M$ be a topological space. We write $\pi=\pi_1(M)$. Let $(\psi\colon \pi\to \Gamma,\xi\in H^1(M;{\mathbb{R}}))$ be an admissible pair and let $\alpha\colon \pi_1(M)\to \gl(k,S)$ be a representation over a domain $S$. We denote by $$\wti{M} \lra M$$ the universal covering of $M$. The canonical left $\pi$-action on $\wti{M}$ turns the cellular groups $C_i(\wti{M};{\mathbb{Z}})$ into left-modules over the group ring ${\mathbb{Z}}[\pi]$.
The representation $\alpha$ turns $S^k$ into a right ${\mathbb{Z}}[\pi]$-module. We write $$C_*^\alpha(M;S^k):=S^k\otimes_{{\mathbb{Z}}[\pi]}C_*(\wti{M})$$ and we write $$H_i^\alpha(M;S^k):=H_i\big(C_*^\alpha(M;S^k)\big).$$ The homomorphism $\psi$ turns ${\mathbb{Z}}[\G]$, and thus also ${{\mathcal R}}_\xi {\mathbb{Z}}[\Gamma]$ into a right ${\mathbb{Z}}[\pi]$-module. Thus we can view $S[\Gamma]^k$ and ${{\mathcal R}}_\xi S [\Gamma]^k={{\mathcal R}}_\xi {\mathbb{Z}}[\Gamma]\otimes_{\mathbb{Z}}S^k$ as right ${\mathbb{Z}}[\pi]$-module. We write $$C_*^\alpha(M;{{\mathcal R}}_\xi S[\Gamma]^k):={{\mathcal R}}_\xi S[\Gamma]^k\otimes_{{\mathbb{Z}}[\pi]}C_*(\wti{M})$$ and $$H_i^\alpha(M;{{\mathcal R}}_\xi S[\Gamma]^k):=H_i\big(C_*^\alpha(M;{{\mathcal R}}_\xi S[\Gamma]^k)\big).$$
Let $M$ be a topological space. We write $\pi=\pi_1(M)$. Let $(\psi\colon \pi\to \Gamma,\xi\in H^1(M;{\mathbb{R}}))$ be an admissible pair and let $\alpha\colon \pi_1(M)\to \gl(k,S)$ be a representation over a domain $S$. The [*$i$-th twisted Novikov-Betti number*]{} is defined as $$b_i^\alpha(M,\psi,\xi):=\mbox{rank of the ${{\mathcal R}}_\xi S[\Gamma]$-module $H_i^\alpha(M;{{\mathcal R}}_\xi S[\Gamma]^k)$.}$$ The [*$i$-th twisted Novikov torsion number*]{} $q_i^\alpha(M,\psi,\xi)$ is defined as the minimal number of generators of the torsion submodule of the ${{\mathcal R}}_\xi S[\Gamma]$-module $H_i^\alpha(M;{{\mathcal R}}_\xi S[\Gamma]^k)$.
In the following proposition we list a few basic facts about twisted Novikov-Betti and torsion numbers. The proofs are verbatim the same as the proofs of the corresponding statements for untwisted Novikov-Betti and Novikov torsion numbers that are studied in [@F04 Chapter 1]:
\[pr\] The twisted Novikov-Betti and the twisted Novikov torsion numbers satisfy the following properties:
The following equality holds: $$k\chi(M)=\sum_{i \geq 0} (-1)^i \cdot b_i^\alpha(M,\psi,\xi).$$
For any $\lambda \in {\mathbb{R}}_{>0}$ and any $i$ we have $$H_i^\alpha(M;{{\mathcal R}}_{\lambda\xi} S[\Gamma]^k)\,=\, H_i^\alpha(M;{{\mathcal R}}_\xi S[\Gamma]^k).$$ In particular, we have $$b_i^\alpha(M,\psi,\xi)=b_i^\alpha(M,\psi,\lambda\xi)\mbox { and } q_i^\alpha(M,\psi,\xi)=q_i^\alpha(M,\psi,\lambda\xi).$$
Topology of complex hypersurface complements {#top}
--------------------------------------------
Let $X$ be a hypersurface in ${\mathbb{C}}^{n}$ ($n \geq 2$), with underlying reduced hypersurface $X_{red}$ defined by the (square-free) equation $f=f_1\cdots f_s =0$, where $f_i$ are the irreducible factors of the polynomial $f$. Let $X_i=\{f_i=0\}$ denote the irreducible components of $X_{red}$. Embed ${\mathbb{C}}^{n}$ in $\cp^{n}$ by adding the hyperplane at infinity, $H$, and let $\overline{X}$ be the projective completion of $X$ in $\cp^{n}$. Let $M_X$ denote the affine hypersurface complement $$M_X:={\mathbb{C}}^n \setminus X= {\mathbb{C}}^n \setminus X_{red}.$$ Alternatively, $M_X$ can be regarded as the complement in $\cp^n$ of the divisor $\overline{X} \cup H$. Then it is well-known that $H_1 (M_X;{\mathbb{Z}})$ is a free abelian group, generated by the meridian loops $\gamma_i$ about the non-singular part of each irreducible component ${X}_i$, for $i=1,\cdots, s$ (e.g., see [@Di92], (4.1.3), (4.1.4)). Furthermore, since $M_X$ is an $n$-dimensional affine variety, it has the homotopy type of a finite CW-complex of real dimension $n$ (e.g., see [@Di92], (1.6.7), (1.6.8)).
Let $S^{\infty}$ be a $(2n-1)$-sphere in ${\mathbb{C}}^{n}$ of a sufficiently large radius (that is, the boundary of a small tubular neighborhood in $\cp^{n}$ of the hyperplane $H$ at infinity). Denote by $$X^{\infty}=S^{\infty} \cap X$$ the [*link of $X$ at infinity*]{}, and by $$M_X^{\infty}=S^{\infty} \sm X^{\infty}$$ its complement in $S^{\infty}$. Note that $M_X^{\infty}$ is homotopy equivalent to $T(H) \setminus (\overline{X} \cup H)$, where $T(H)$ is the tubular neighborhood of $H$ in $\cp^{n}$ for which $S^{\infty}$ is the boundary. Then a classical argument based on the Lefschetz hyperplane theorem yields that the homomorphism $$\pi_i(M_X^{\infty}) \lra \pi_i(M_X)$$ induced by inclusion is an isomorphism for $i < n-1$ and it is surjective for $i=n-1$; see [@DL06 Section 4.1] for more details. It follows that $$\label{eq}\pi_i(M_X,M_X^{\infty})=0, \mbox{ for all } i\leq n-1,$$ hence $M_X$ has the homotopy type of a complex obtained from $M_X^{\infty}$ by adding cells of dimension $\geq n$.
If, moreover, $X$ is [*in general position at infinity*]{}, that is, the reduced underlying variety of $\overline{X}$ is transversal to $H$ in the stratified sense, then $M_X^{\infty}$ is a circle fibration over $H \setminus (\overline{X} \cap H)$, which is homotopy equivalent to the complement in ${\mathbb{C}}^{n}$ of the affine cone over the projective hypersurface $\overline{X} \cap H \subset H=\mathbb{CP}^{n-1}$ (for a similar argument see [@DL06 Section 4.1]). Hence, by the Milnor fibration theorem (e.g., see [@Di92 (3.1.9),(3.1.11)]), $M_X^{\infty}$ fibers over ${\mathbb{C}}^* \simeq S^1$, with fiber $F$ homotopy equivalent to a finite $(n-1)$-dimensional CW-complex.
Novikov homology of complex hypersurface complements {#section:complex-hypersurface-complements}
====================================================
In this section we will give the proof of Theorem \[mt\].
Preliminary lemmas
------------------
We start out with the following lemma.
\[lem:higher-homology\] Let $X \subset {\mathbb{C}}^n$ be a hypersurface with complement $M_X$. Let $(\psi\colon \pi_1(M_X)\to \Gamma,\xi \in H^1(M_X;{\mathbb{R}}))$ be an admissible pair and let $\alpha\colon \pi_1(M_X)\to \gl(k,S)$ be a representation over a domain $S$. Then the following hold:
1. We have $H_i^\a(M_X;{{\mathcal R}}_\xi S[\Gamma]^k)=0$ for $i>n$. In particular, we have the vanishing $b_i^\a(M_X,\psi,\xi)=0$ and $q_i^\a(M_X,\psi,\xi)=0$ for $i>n$.
2. We have $q_n^\alpha(M_X,\psi,\xi)=0$.
The first statement follows immediately from the fact that $M_X$ has the homotopy type of a finite CW-complex $M'$ of real dimension $n$. This fact also implies that $ H_n^\a(M_X;{{\mathcal R}}_\xi S[\Gamma]^k)=H_n^\a(M';{{\mathcal R}}_\xi S[\Gamma]^k)$ is a submodule of the free ${{\mathcal R}}_\xi S[\Gamma]$-module $C_n^\alpha(M';{{\mathcal R}}_\xi S[\Gamma]^k)$. In particular $ H_n^\a(M_X;{{\mathcal R}}_\xi S[\Gamma]^k)$ has no ${{\mathcal R}}_\xi S[\Gamma]$-torsion, which in turn implies that $q_n^\a(M_X,\psi,\xi)=0$.
In the following we adopt the convention that if $\varphi\colon \pi_1(M)\to G$ is a homomorphism and $N\subset M$ is a connected subspace, then, by a slight abuse of notation, we denote the induced homomorphism $\pi_1(N)\to \pi_1(M)\xrightarrow{\varphi} G$ by $\varphi$ as well.
\[bound\] Let $X \subset {\mathbb{C}}^n$ be a hypersurface with complement $M_X$, and fix an admissible pair $(\psi\colon \pi_1(M_X)\to \Gamma,\xi \in H^1(M_X;{\mathbb{R}}))$. Let $\alpha\colon \pi_1(M_X)\to \gl(k,S)$ be a representation over a domain $S$. Then for any $i < n-1$, we have ${{\mathcal R}}_\xi S[\Gamma]$-isomorphisms $$H_i^\a(M_X^\infty;{{\mathcal R}}_\xi S[\Gamma]^k)\xrightarrow{\cong} H_i^\a(M_X;{{\mathcal R}}_\xi S[\Gamma]^k)$$ and we have an epimorphism of ${{\mathcal R}}_\xi S[\Gamma]$-modules $$H_{n-1}^\a(M_X^\infty;{{\mathcal R}}_\xi S[\Gamma]^k)\to H_{n-1}^\a(M_X;{{\mathcal R}}_\xi S[\Gamma]^k).$$
This is an immediate consequence of the fact, mentioned in Section \[top\], that the complement $M_X$ is obtained (up to homotopy) from $M_X^{\infty}$ by adding cells of dimension $\geq n$ and the fact that twisted homology groups are homotopy invariants.
Given a manifold $M$ and a class $\xi \in H^1(M;{\mathbb{Z}})=\Hom(\pi_1(M),{\mathbb{Z}})$ we say that $\xi$ is [*fibered*]{} if there exists a bundle map $p\colon M\to S^1$ such that $p_*=\xi\colon \pi_1(M)\to {\mathbb{Z}}$.
\[prop:fibered\] Let $M$ be a manifold. Let $(\psi\colon \pi_1(M)\to \Gamma,\xi \in H^1(M;{\mathbb{Z}}))$ be an admissible pair and let $\alpha\colon \pi_1(M)\to \gl(k,S)$ be a representation over a domain $S$. If $\xi$ is fibered, then for any $i$ we have $H_i^\a(M;{{\mathcal R}}_\xi S[\Gamma]^k)=0$.
This proposition is well-known to experts; it can be proved along the lines of [@GM05 Theorem 4.2] or alternatively [@Ch03; @GKM05; @FK06; @Fr14]. Since we could not find a result in the literature which gives precisely the desired statement, we sketch a proof.
In the following, given a manifold $X$ and a map $\varphi\colon X\to X$ we denote by $T(X,\varphi)=(X\times [0,1])/(x,0)\sim(\varphi(x),1)$ the corresponding mapping torus. We refer to the induced map $\pi_1(T(X,\varphi)) \to \pi_1([0,1]/0\sim 1)={\mathbb{Z}}$ as the canonical homomorphism. We can identify the manifold $M$ with a mapping torus $T:=T(X,\varphi)$ such that $\xi\in \Hom(\pi_1(M),{\mathbb{Z}})=\Hom(\pi_1(T),{\mathbb{Z}})$ agrees with the canonical homomorphism. Following [@FK06 Section 3] there exists a Mayer–Vietoris sequence $$\ba{l}\dots \to H_i(X\times [0,1];S[\Gamma]^k)\otimes_{S[\Gamma]} {{\mathcal R}}_\xi S[\Gamma]
\overset{ \id-t \varphi_*}{\longrightarrow}
H_i(X\times [0,1];S[\Gamma]^k)\otimes_{S[\Gamma]} {{\mathcal R}}_\xi S[\Gamma] \\
\hspace{2cm} \to H_i(M;{{\mathcal R}}_\xi S[\Gamma]^k)\to \dots\ea$$ where $t$ is an element with $\xi(t)=1$. All the maps $\id-t \varphi_*$ are invertible over ${{\mathcal R}}_\xi S[\Gamma]$. It follows that the homology groups $H_i(M;{{\mathcal R}}_\xi S[\Gamma]^k)$ vanish.
Novikov homology for positive integral cohomology classes {#ad}
---------------------------------------------------------
Let $X \subset {\mathbb{C}}^n$ be a hypersurface with complement $M_X$. A cohomology class $\xi \in H^1(M_X;{\mathbb{R}})$ is called [*positive*]{} if the corresponding group homomorphism $\xi \colon \pi_1(M_X) \to {\mathbb{R}}$ takes strictly positive values on each positively oriented meridian generator $\gamma_i$ about an irreducible component of $X_{red}$.
The following theorem takes care of Theorem \[mainthm\] for [*integral*]{} cohomology classes.
\[m\] Let $X \subset {\mathbb{C}}^n$ be a hypersurface with complement $M_X$. We assume that $X$ is in general position at infinity. Let $(\psi\colon \pi_1(M_X)\to \Gamma,\xi \in H^1(M_X;{\mathbb{Z}}))$ be an admissible pair and let $\alpha\colon \pi_1(M_X)\to \gl(k,S)$ be a representation over a domain $S$. If $\xi$ is positive, then $$\label{B} b_i^\alpha(M_X,\psi,\xi)=\begin{cases} 0, & i\ne n,\\ (-1)^{n}k\chi(M_X), & i=n.\end{cases}$$ In particular, $$\label{E}(-1)^n \chi(M_X) \geq 0.$$ Moreover $$\label{tor} q_i^\alpha(M_X,\psi,\xi)=0, \mbox{ for all } i\geq 0.$$
Since $M_X$ has the homotopy type of a finite CW complex, the homology groups $H_i^\a(M_X;{{\mathcal R}}_\xi S[\Gamma]^k)$ are finitely generated ${{\mathcal R}}_\xi S[\Gamma]$-modules.
Let $f$ be a square-free polynomial defining $X_{red}$, the reduced hypersurface underlying $X$. We denote the factors of $f$ by $f_1,\dots,f_s$. Let $\xi \in H^1(M_X;{\mathbb{Z}})$ be a positive integral cohomology class, with $(n_1,\cdots,n_s) \in {\mathbb{N}}^s$ the vector of values of $\xi\colon \pi_1(M_X) \to {\mathbb{Z}}$ on the positive meridians $\gamma_i$, $i=1,\cdots,s$, about the irreducible components of $X_{red}$ corresponding to the factors $f_1,\dots,f_s$. We consider the polynomial $g={f_1}^{n_1}\cdots {f_s}^{n_s}$ on ${\mathbb{C}}^n$. Clearly, the underlying reduced hypersurface $\{g=0\}_{red}$ coincides with $X_{red}$ and, moreover, the homomorphism $g_*\colon \pi_1(M_X) \to {\mathbb{Z}}$ induced by $g$ coincides with $\xi$ (cf. [@Di92 p.76-77]). By Section \[top\], the element $\xi=g_*\in H^1(M_X^\infty;{\mathbb{Z}})=\Hom(\pi_1(M_X^\infty),{\mathbb{Z}})$ is fibered. It follows from Proposition \[prop:fibered\] that $H_i^\a(M_X^\infty;{{\mathcal R}}_\xi S[\Gamma]^k)=0$ for all $i$.
The theorem now follows from the combination of Proposition \[pr\] (a), Lemma \[lem:higher-homology\] and Proposition \[bound\].
The statement about the vanishing of the classical Novikov-Betti numbers in Theorem \[m\] has also been obtained implicitly in [@Ma06] by using Alexander modules. Furthermore it can also be derived by using the corresponding vanishing statement for the $L^2$-Betti numbers of such complements, see [@Ma14 Theorem 1.1]. Indeed, it follows from [@FLM Proposition 2.4] that we have the identification $$b_i(M_X;\xi)=b_i^{(2)}\big(M_X,\pi_1(M_X) \xrightarrow{\xi} \mbox{Im}(\xi)\big)$$ between the Novikov-Betti numbers and the $L^2$-Betti numbers corresponding to $\xi$. However, to our knowledge, Novikov torsion numbers do not have such an interpretation in terms of $L^2$-invariants.
Novikov homology for positive real cohomology classes: The proof of Theorem \[mainthm\] {#pos}
---------------------------------------------------------------------------------------
1. A [*lattice*]{} in an $n$-dimensional real vector space $V$ is an additive subgroup $L$ of $V$ of rank $n$ such that $L$ generates $V$ as a real vector space.
2. Let $V$ be a vector space with lattice $L$.
3. An [*open integral half-space*]{} of $V$ is a subset of the form $f^{-1}({\mathbb{R}}_{>0})$ where $f\colon V\to {\mathbb{R}}$ is a homomorphism that takes integral values on $L$.
4. A [*closed integral half-space*]{} of $V$ is a subset of the form $f^{-1}({\mathbb{R}}_{\geq 0})$ where $f\colon V\to {\mathbb{R}}$ is a homomorphism that takes integral values on $L$.
5. The intersection of finitely many open and closed integral half-spaces is called an [*integral cone*]{}.
6. A finite union of integral cones is called an [*integral subset*]{} of $V$.
The following elementary lemma summarizes some properties of integral subsets.
\[lem:integral-subsets\] Let $V$ be a vector space together with a lattice $L$.
1. The complement of an integral subset is again an integral subset.
2. The intersection of finitely many integral subsets is again an integral subset.
3. The union of finitely many integral subsets is again an integral subset.
4. Any non-empty integral subset contains at least one lattice point.
Let $\Gamma$ be a free abelian group of rank $n$. In the following we always view $\Hom(\Gamma,{\mathbb{R}})$ as equipped with the lattice $\Hom(\Gamma,{\mathbb{Z}})$. We can now formulate the following technical proposition that will be proved in the next section.
\[mainprop\] Let $S$ be a domain, let $\Gamma$ be a free abelian group and let $C_*$ be a chain complex of finite free $S[\Gamma]$-modules. Then $$\big\{ \xi\in \Hom(\Gamma,{\mathbb{R}})\,\big|\, H_*({{\mathcal R}}_\xi S[\Gamma]\otimes_{S[\Gamma]}C_*)=0\}$$ is an integral subset of $\Hom(\Gamma,{\mathbb{R}})$.
The statement of Proposition \[mainprop\] is closely related to work of Pajitnov, see e.g. [@Pa90 Theorem 2.2] and [@Pa07 Corollary 2.7]. But to the best of our knowledge the statement of Proposition \[mainprop\] cannot be found in the literature. More precisely, all results that we could find with similar statements deal only with those $\xi$ in $\Hom(\Gamma,{\mathbb{R}})$ that are monomorphisms.
Assuming Proposition \[mainprop\] we are now in a position to complete the proof of Theorem \[mt\].
Let $X \subset {\mathbb{C}}^n$ be a complex hypersurface in general position at infinity, with complement $M_X$. Let $\psi\colon \pi_1(M_X)\to \Gamma$ be an epimorphism onto a free abelian group $\Gamma$ and let $\alpha\colon \pi_1(M_X)\to \gl(k,S)$ be a representation over a domain $S$. Using the notations from Section \[top\], let $X^{\infty}$ be the link at infinity of $X$, with complement $M_X^{\infty}$. As per our convention, we also denote by $\psi$ and $\alpha$ the induced epimorphism $\pi_1(M_X^{\infty}) \to \Gamma$ and, respectively, the representation $\pi_1(M^{\infty}_X)\to \gl(k,S)$. Clearly, an admissible pair $(\psi,\xi)$ for $M_X$ induces an admissible pair for $M^{\infty}_X$.
As in the proof of Theorem \[m\], it follows from Lemma \[lem:higher-homology\] and Proposition \[bound\] that it suffices to extend the vanishing $H_*^\a(M_X^\infty;{{\mathcal R}}_\xi S[\Gamma]^k)=0$ to all positive real cohomology classes $\xi$, with $(\psi,\xi)$ admissible.
We denote by $\wti{M^{\infty}_X}$ the universal cover of $M^{\infty}_X$ and we write $$C_*:= S[\Gamma]^k\otimes_{{\mathbb{Z}}[\pi_1(M^{\infty}_X)]}C_*(\wti{M^{\infty}_X}).$$ In the following, given $\xi \in \Hom(\Gamma,{\mathbb{R}})$ we denote the induced composite homomorphism $\pi_1(M^{\infty}_X)\to \pi_1(M_X) \to \Gamma\to {\mathbb{R}}$ by $\xi$ as well. Note that for any $\xi\colon \Gamma\to {\mathbb{R}}$ we have $$\ba{rcl} H_*\big({{\mathcal R}}_\xi S[\Gamma]\otimes_{S[\Gamma]}C_*\big)&=&
H_*\big({{\mathcal R}}_\xi S[\Gamma]\otimes_{S[\Gamma]}S[\Gamma]^k\otimes_{{\mathbb{Z}}[\pi_1(M^{\infty}_X)]}C_*(\wti{M^{\infty}_X})\big)\\[0.1cm]
&\cong & H_*\big({{\mathcal R}}_\xi S[\Gamma]^k\otimes_{{\mathbb{Z}}[\pi_1(M^{\infty}_X)]}C_*(\wti{M^{\infty}_X})\big) \\[0.1cm]
&=&
H_*(M^{\infty}_X;{{\mathcal R}}_\xi S[\Gamma]^k).\ea$$ Combining this observation with Proposition \[mainprop\] and with Lemma \[lem:integral-subsets\] (1) we see that $$V\,:=\,\big\{ \xi \in \Hom(\Gamma,{\mathbb{R}})\,\big|\,\mbox{ there exists an $i$ with }
H_i(M^{\infty}_X;{{\mathcal R}}_\xi S[\Gamma]^k)\ne 0\big\}$$ is an integral subset of $\Hom(\Gamma,{\mathbb{R}})$.
Now let $\mu_1,\dots,\mu_s$ be the generators of $\Gamma$ that correspond to the meridians of the $s$ irreducible components of $X_{red}$. Recall that for an admissible pair $(\psi,\xi)$ we say $\xi \in \Hom(\Gamma,{\mathbb{R}})$ is positive if $\xi(\mu_i)>0$ for $i=1,\dots,s$. We denote by $\Hom^+(\Gamma,{\mathbb{R}})$ the set of all positive homomorphisms.
Clearly $\Hom^+(\Gamma,{\mathbb{R}})$ is an integral subset of $\Hom(\Gamma,{\mathbb{R}})$. From Lemma \[lem:integral-subsets\] (2) we deduce that $\Hom^+(\Gamma,{\mathbb{R}})\cap V$ is an integral subset. By Theorem \[m\] we know that $\Hom^+(\Gamma,{\mathbb{Z}})\cap V=\emptyset$. Put differently, $\Hom^+(\Gamma,{\mathbb{R}})\cap V$ does not contain a lattice point. It follows from Lemma \[lem:integral-subsets\] (4) that $\Hom^+(\Gamma,{\mathbb{R}})\cap V=\emptyset$. But that means exactly that $H_*(M^{\infty}_X;{{\mathcal R}}_\xi S[\Gamma]^k)=0$ for all $\xi\in \Hom^+(\Gamma,{\mathbb{R}})$.
Proof of Proposition \[mainprop\]
---------------------------------
Before we can give the proof of Proposition \[mainprop\] we need to formulate two more lemmas.
Let $\Gamma$ be a free abelian group and let $S$ be a domain. Given $p\in S[\Gamma]$ we write $$M(p)\,:=\,\{\xi\in \Hom(\Gamma,{\mathbb{R}})\,|\, p\mbox{ is invertible in }{{\mathcal R}}_\xi S[\Gamma]\}$$ and given a matrix $A$ over $S[\Gamma]$ we write $$M(A)\,:=\,\{\xi\in \Hom(\Gamma,{\mathbb{R}})\,|\, A\mbox{ is invertible over }{{\mathcal R}}_\xi S[\Gamma]\}.$$ Furthermore, given a chain complex $C_*$ over $S[\Gamma]$ we write $$M(C_*)\,:=\,
\big\{ \xi\in \Hom(\Gamma,{\mathbb{R}})\,\big|\, H_*({{\mathcal R}}_\xi S[\Gamma]\otimes_{S[\Gamma]}C_*)=0 \}.$$
\[lem:determinant-pcones\] Let $S$ be a domain and let $\Gamma$ be a free abelian group. For any $p\in S[\Gamma]$ and for any matrix $A$ over $S[\Gamma]$ the sets $M(p)$ and $M(A)$ are integral subsets of $\Hom(\Gamma,{\mathbb{R}})$.
Since a matrix $A$ is invertible over the commutative ring ${{\mathcal R}}_\xi S[\Gamma]$ if and only if $\det(A)$ is, we have $M(A)=M(\det(A))$, so it suffices to prove the lemma for any non-zero $p\in S[\Gamma]$. By Lemma \[lem:units-in-rc\] we have $$M(p)=\{ \xi\in \Hom(\Gamma,{\mathbb{R}})\,|\, t_\xi(p)\mbox{ is a monomial}\}.$$ We write $p=\sum_{i=1}^n a_ig_i$ with $a_1,\dots,a_n\in S\sm \{0\}$ and where $g_1,\dots,g_n$ are pairwise distinct elements of $\Gamma$. Furthermore we can arrange that there exists an $m$ such that $a_1,\dots,a_m$ are units in $S$ and such that $a_{m+1},\dots,a_n$ are not units in $S$. It follows from the above description of $M(p)$ that $$M(p) = \bigcup\limits_{i=1}^m
\big\{ \xi\in \Hom(\Gamma,{\mathbb{R}})\,|\, \xi(g_i)>\xi(g_j)\mbox{ for all }j\ne i\big\}.$$ This shows that $M(p)$ is the disjoint union of finitely many open integral cones, in particular $M(p)$ is an integral subset.
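A small example of our own illustrating this cone decomposition: take $S={\mathbb{Z}}$ and $\Gamma={\mathbb{Z}}^2$ with basis $g_1,g_2$, and $p=g_1+g_2+2g_1g_2$. Only the first two coefficients are units, and $\xi(g_1)>\xi(g_1g_2)$ amounts to $\xi(g_2)<0$, so

```latex
M(p) \;=\; \{\xi \mid \xi(g_1) > \xi(g_2),\ \xi(g_2) < 0\}
\;\cup\; \{\xi \mid \xi(g_2) > \xi(g_1),\ \xi(g_1) < 0\},
```

a disjoint union of two open integral cones in $\Hom(\Gamma,{\mathbb{R}})\cong{\mathbb{R}}^2$.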
Let $R$ be a domain.
We say that a chain complex $C_*$ of free $R$-modules is [*based*]{} if each chain module $C_i$ is equipped with a basis.
Let $C_*$ be a based finite chain complex of length $m$ of finitely generated free $R$-modules. We denote by $A_i=(a_{jk}^i)$, $i=0,\dots,{m-1}$ the corresponding boundary matrices. (Here following the convention of [@Tu01], we think of elements in $R^k$ as row vectors and we think of the matrices as multiplying on the right.) Following [@Tu01 p. 8] we define a [*matrix chain for $C_*$*]{} to be a collection of sets $\alpha=(\alpha_0,\dots,\alpha_m)$ where $\alpha_i\subset \{1,2,\dots,\dim C_i\}$ so that $\alpha_0=\emptyset$. We denote by $A_i(\alpha)$ the submatrix of $A_i$ formed by the entries $a_{jk}^i$ with $j\in \alpha_{i+1}$ and $k\not\in \alpha_i$. The matrix chain $\alpha$ is called a [*$\tau$-chain*]{} if $A_0(\alpha),\dots,A_{m-1}(\alpha)$ are square matrices.
The following lemma is precisely [@Tu01 Lemma 2.5].
\[lem:vanishing-novikov-homology\] Let $R$ be a domain and let $C_*$ be a based finite chain complex of finitely generated free $R$-modules. We denote by $A_*$ the corresponding boundary matrices. Then $H_i(C_*)=0$ if and only if there exists a $\tau$-chain $\alpha$ such that $\det(A_i(\alpha))$ is invertible over $R$ for all $i$.
Let $S$ be a domain, let $\Gamma$ be a free abelian group and let $C_*$ be a chain complex of finite free $S[\Gamma]$-modules of length $m$. We pick a basis for each chain module $C_i$. We denote by $A_i$ the corresponding boundary matrices of the chain complex. It follows from Lemma \[lem:vanishing-novikov-homology\] that $$M(C_*)=\bigcup\limits_\alpha \big\{ \xi \in \Hom(\Gamma,{\mathbb{R}})\,|\,\mbox{ $ \det(A_i(\alpha))$ is invertible over ${{\mathcal R}}_\xi S[\Gamma]$ for all $i$}\big\},$$ where we take the union over all $\tau$-chains. Put differently, we have $$M(C_*)=\bigcup\limits_\alpha \bigcap\limits_{i=0}^{m-1} M(A_i(\alpha)).$$ Each $M(A_i(\alpha))$ is by Lemma \[lem:determinant-pcones\] an integral subset of $\Hom(\Gamma,{\mathbb{R}})$. It follows from Lemma \[lem:integral-subsets\] (2) and (3) that $M(C_*)$ is also an integral subset.
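To make the $\tau$-chain condition concrete (an illustrative special case, ours): if $C_*$ is the length-one complex $S[\Gamma]\xrightarrow{A_0}S[\Gamma]$ with $A_0=(p)$ for some non-zero $p$, then, since $\alpha_0=\emptyset$, the only $\tau$-chain is $\alpha=(\emptyset,\{1\})$, with $A_0(\alpha)=(p)$. The union above therefore reduces to $M(C_*)=M(p)$, the set studied in Lemma \[lem:determinant-pcones\].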
[10]{}
D. Burghelea, [*Topology of real and angle valued maps and graph representations*]{}, Advances in mathematics, 103–119, Ed. Acad. Române, Bucharest, 2013.
J. Cha, [ *Fibred knots and twisted Alexander invariants*]{}, Trans. Amer. Math. Soc. [**355**]{} (2003), 4187–4200.
A. Dimca, [*Singularities and topology of hypersurfaces*]{}, Universitext, Springer-Verlag, New York (1992).
A. Dimca and A. Libgober, [*Regular functions transversal at infinity*]{}, Tohoku Math. J. (2) [**58**]{} (2006), no. 4, 549–564.
M. Farber, [*Topology of closed one-forms*]{}, Mathematical Surveys and Monographs, [**108**]{}. American Mathematical Society, Providence, RI, 2004.
M. Farber, R. Geoghegan, D. Schütz, [*Closed 1-forms in topology and geometric group theory*]{}, Russian Math. Surveys [**65**]{} (2010), no. 1, 143–172.
S. Friedl, [ *Twisted Reidemeister torsion, the Thurston norm and fibered manifolds*]{}, Geometriae Dedicata [**172**]{}, (2014), 135–145.
S. Friedl and T. Kim, [ *The Thurston norm, fibered manifolds and twisted Alexander polynomials*]{}, Topology [**45**]{}, 929–953 (2006).
S. Friedl, C. Leidy and L. Maxim, [*$L^2$-Betti numbers of plane algebraic curves*]{}, Michigan Math. J. [**58**]{} (2009), no. 2, 411–421.
H. Goda, T. Kitano and T. Morifuji, [*Reidemeister Torsion, Twisted Alexander Polynomial and Fibred Knots*]{}, Comment. Math. Helv. [**80**]{} (2005), 51–61.
H. Goda and A. Pajitnov, [*Twisted Novikov homology and circle-valued Morse theory for knots and links*]{}, Osaka J. Math. [**42**]{} (2005), no. 3, 557–572.
T. Kohno and A. Pajitnov, [*Circle-valued Morse theory for complex hyperplane arrangements*]{}, Forum Math. [**27**]{} (2015), no. 4, 2113–2128.
L. Maxim, [*Intersection homology and Alexander modules of hypersurface complements*]{}, Comment. Math. Helv. [**81**]{} (2006), 123–155.
L. Maxim, [*$L^2$–Betti numbers of hypersurface complements*]{}, Int. Math. Res. Not. Vol. [**2014**]{}, No. 17, pp. 4665–4678.
A. Pazhitnov, [*On the sharpness of Novikov type inequalities for manifolds with free Abelian fundamental group*]{}, Math. USSR, Sb. [**68**]{} (1990), No.2, 351–389.
A. Pajitnov, [*Circle-valued Morse theory*]{}, de Gruyter Studies in Mathematics, [**32**]{}. Walter de Gruyter & Co., Berlin, 2006.
A. Pajitnov, [*Novikov homology, twisted Alexander polynomials, and Thurston cones*]{}, St. Petersburg Math. J. [**18**]{} (2007), no. 5, 809–835.
S. Papadima, A. Suciu, [*Bieri-Neumann-Strebel-Renz invariants and homology jumping loci*]{}, Proc. Lond. Math. Soc. (3) [**100**]{} (2010), no. 3, 795–834.
V. Turaev, [*Introduction to combinatorial torsions*]{}, Lectures in Mathematics. Birkhäuser Verlag, Basel, 2001.
---
abstract: 'Aerial image analysis at a semantic level is important in many applications with strong potential impact in industry and consumer use, such as automated mapping, urban planning, real estate and environment monitoring, or disaster relief. The problem is enjoying great interest in computer vision and remote sensing, due to increased computer power and improvement in automated image understanding algorithms. In this paper we address the task of automatic geolocalization of aerial images from recognition and matching of roads and intersections. Our proposed method is a novel contribution in the literature that could enable many applications of aerial image analysis when GPS data is not available. We offer a complete pipeline for geolocalization, from the detection of roads and intersections, to the identification of the enclosing geographic region by matching detected intersections to previously learned manually labeled ones, followed by accurate geometric alignment between the detected roads and the manually labeled maps. We test on a novel dataset with aerial images of two European cities and use the publicly available OpenStreetMap project for collecting ground truth road annotations. We show in extensive experiments that our approach produces highly accurate localizations in the challenging case when we train on images from one city and test on the other and the quality of the aerial images is relatively poor. We also show that the alignment between detected roads and pre-stored manual annotations can be effectively used for improving the quality of the road detection results.'
author:
- Dragos Costea
- Marius Leordeanu
bibliography:
- 'complete.bib'
title: Aerial image geolocalization from recognition and matching of roads and intersections
---
Introduction
============
The ability to accurately recognize different categories of objects from aerial imagery, such as roads and buildings, is of great importance in understanding the world from above, with many useful applications ranging from mapping and urban planning to environment monitoring. The field is entering a flourishing period, as the technological and computational advances involved, at both the hardware and algorithm levels, combine into very powerful systems that are suitable for practical, real-world tasks. In this paper we address two important problems that are not sufficiently studied in the literature. We are among the first, to the best of our knowledge, to propose a method for automatic geo-localization in aerial images without GPS information, by putting in correspondence the real world images with the publicly available, manually labeled maps from the OpenStreetMap (OSM) project [^1]. We solve the task by first learning to detect roads and intersections in aerial images, and then learning to identify specific intersections based on a high level descriptor that puts in correspondence the detected intersections from real world images to intersections detected in the manually labeled OSM maps. Accurate localization is then obtained by the geometric alignment of the two road maps - the detected ones and the OSM annotations - at the final step. We show how the alignment to the OSM maps could be used to improve the quality of the detected roads and intersections. We also show that the accurate geometric registration of roads and intersections can improve both recognition of the roads and the initial localization. A key insight of our approach is the observation that intersections tend to have a unique road pattern surrounding them and thus can play a key role in localization, by reducing this difficult task to a sparse feature matching problem followed by a local refined roadmap alignment.
For the accurate detection of roads we use a recent state of the art method [@AlinaNet2016] that is based on a dual stream local-global deep CNN, which takes advantage of both the local appearance of an object as well as the larger contextual region around the object of interest, in order to augment its local appearance and thus improve recognition performance.
Related work on road detection and localization
===============================================
Road detection in aerial imagery has been traditionally addressed by detection methods that use manually designed features [@kn:mayer2006rereview; @lin2012road; @laptev2000automatic; @klang1998automatic; @gruen1995road]. The recent success of convolutional neural networks [@krizhevsky2012imagenet; @simonyan2014very] has led to greatly improved accuracy and robust road detection [@kn:mnih2010learning; @kn:saito2015building]. As shown in [@AlinaNet2016], the lack of good quality aerial images, as well as clutter and occlusion can greatly affect and significantly degrade the learning and performance even for top, state-of-the-art architectures. Post-processing is often required in aerial image analysis [@kn:mayer2006rereview], but it is not expected to solve the most difficult cases. There are many approaches proposed for road detection, such as following road tracks [@hu2007road], local context modeling with CRFs [@kn:montoya2014mind], minimum path methods [@turetken2012automated] or using neural networks [@kn:mnih2010learning]. Arguably, free road vectors are widely available for most of the planet. However, they are sometimes misaligned and have a poor level of detail. Therefore some methods attempt to correct these road vectors by aligning them to real rectified aerial images [@Mattyus_2015_ICCV]. Topological road improvement methods trace back to [@kn:gamba2006improving]. A more recent approach [@kn:montoya2014mind] uses Conditional Random Fields in conjunction with a minimum cost path algorithm for improving topology. The authors take into account various cues, such as context, cars, smoothness between road widths in order to offset road vertices to their real location. The same authors previously proposed a metric for topology measurement [@kn:wegner2013higher].
There are several methods related to automatic geolocalization from aerial images, but the tasks they address differ from ours. Some use known landmarks, others ground-level images or extra GPS or IMU measurements. Most employ sparse, manually designed features - ours being the first, to the best of our knowledge, to automatically localize aerial images from recognition and matching of semantic categories, such as roads and intersections, in the context of deep neural networks. More specifically, related to our work, geolocalization for unmanned aerial vehicles (UAVs) using sparse manually designed features has been proposed in [@caballero2009unmanned], while accurate, sub-pixel manhole localization has been proposed using known landmarks [@drewniok1995high]. A road following strategy for UAVs with lost GPS signal is described in [@frew2004vision]. Other authors augment a feature-based approach by fusing camera input with GPS and inertial measurement unit (IMU) outputs. They propose a monocular SLAM approach without visual beacons [@kim2007real; @caballero2009vision], which yields an error of about 5m. Given the global coverage of aerial images, there has been interest in geolocalizing a ground image using aerial images at training time [@lin2015learning; @workman2015wide; @lin2013cross]. Geolocalizing single ground images has also recently been explored in [@weyand2016planet]. An approach loosely related to geolocalization proposed the study of street patterns in order to identify the city class [@kn:barthelemy2008modeling].
[Figure \[fig:overview\]: overview of our approach.]{height="10cm"}
Our approach
============
Our method has several stages: 1) road pixelwise classification in a given aerial image; 2) detection of intersections based on the detected roads; 3) identification of a given intersection by matching its surrounding region to regions from a stored dataset of OpenStreetMap (OSM) road and intersection maps. At this stage we keep, for each test intersection, a list of the closest OSM intersections in the intersection descriptor space; 4) accurate geometric alignment for improved localization and road detection enhancement. At this stage we keep, from the list of candidate intersection matches, the one with minimum geometric alignment error. In this work we focus on recognition and localization of a given detected intersection. We use intersections as anchors for localization for three reasons. First, once intersections are found and images are aligned to known roadmaps, the location of any given point in the image follows immediately. Second, intersections are sparse and require very little computational and storage cost for recognition and matching. Third, they are sufficiently discriminative for localization when their surrounding area is taken into account. They tend to have a unique pattern of roads in the neighboring region, which acts as a unique fingerprint that is useful for location recognition. We present an overview of our approach in Figure \[fig:overview\]. Note that while we did not use any GPS information for localization, we assumed that we know the orientation of the image with respect to the cardinal points - information that is easily obtained with a compass in a real world situation. To account for small errors in orientation estimation we added random Gaussian noise to the test image rotation angle, with 0 mean and standard deviation of 5 degrees. While the added noise slightly affected the performance of intersection recognition, it did not influence the final geometric alignment stage, which is affine invariant.
We detail the stages of our pipeline next.
Finding roads and intersections
-------------------------------
[Figure \[fig:roads\_and\_intersection\_detection\]: the road and intersection detection systems, with example outputs.]{height="8cm"}
#### Detection of roads:
We train a state-of-the-art dual stream local-global Convolutional Neural Network [@AlinaNet2016] (LG-Net) on the task of road detection (Figure \[fig:roads\_and\_intersection\_detection\]). The network combines two pathways, one based on an adjusted VGG-Net [@simonyan2014very] that uses local appearance information (a local 64x64 patch surrounding the road region) and the other, based on an adjusted AlexNet [@krizhevsky2012imagenet], which takes as input a significantly larger neighborhood (256x256) for contextual reasoning. The two pathways are joined in the last FC layers and the output is a small 16x16 center patch having 1’s for road pixels and zeros otherwise. The final road map is obtained by dividing the larger aerial images into disjoint 16x16 patches, which are classified independently. In the experiments presented in [@AlinaNet2016] the local-global network achieves an F-measure that is consistently superior to a network that has only the local pathway. Also, compared to previous contextual approaches to road detection, ours avoids hand crafted cues, such as the nearby cars and consistent road width [@Mattyus_2015_ICCV] or nearby lines [@yuan2013road], and effectively learns to reason about context by considering the larger area containing the road.
#### Detection of intersections:
For the detection of intersections we trained an adjusted AlexNet architecture, modified to output a single class to signal the presence or absence of an intersection at a given point in the image. We considered as input several channels containing the original RGB image as well as the estimated roadmap provided by the LG-Net. Including the channels with the original RGB low level signal improved the maximum detection F-measure from $65.18\%$ to $67.71\%$, in our experiments, using a scanning window approach with non-maxima suppression. The most relevant of the two types of input is the estimated roadmap that represents signal at a higher, semantic level of image interpretation. Note that intersections, by definition, are directly related to the existence of at least two roads that intersect. In order to speed up the detection of intersections we classified pixels on the grid (with steps of 10 pixels) and obtained the final dense intersections map by interpolation. This resulted in a speedup by two orders of magnitude at the cost of a relatively small decrease in detection quality. In Figure \[fig:roads\_and\_intersection\_detection\], we also present the system for intersection detection with an example estimated map of intersections. We notice that most intersections are detected, while, in some cases, intersections seem to be correctly detected in the image but are not present in the OSM, which we considered as ground truth. Note that such inconsistencies between images and manually labeled roads are not uncommon in OSM.
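The grid-and-interpolate speedup can be sketched as follows. This is our minimal illustration, not the authors' code: `classify_pixel` stands in for the CNN scoring of a window centered at a pixel, and the use of bilinear interpolation via `scipy.ndimage.zoom` is our assumption.

```python
import numpy as np
from scipy.ndimage import zoom

def dense_intersection_map(image, classify_pixel, step=10):
    """Classify only every `step`-th pixel, then interpolate to a dense map.

    `classify_pixel(image, r, c)` is a stand-in for the CNN's per-pixel
    intersection score; scoring on a coarse grid cuts the number of CNN
    evaluations by roughly step**2.
    """
    h, w = image.shape[:2]
    rows = range(0, h, step)
    cols = range(0, w, step)
    coarse = np.array([[classify_pixel(image, r, c) for c in cols]
                       for r in rows])
    # Bilinear interpolation back to full resolution (order=1).
    dense = zoom(coarse, (h / coarse.shape[0], w / coarse.shape[1]), order=1)
    return dense[:h, :w]
```

With `step=10` the number of classifier evaluations drops by a factor of roughly $100$, matching the two orders of magnitude reported above.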
Automatic geolocalization
-------------------------
We represent each intersection by a descriptor which is learned such that identical intersections from detected roads and OSM roads have similar descriptors, while descriptors of different intersections are as far apart as possible. For extracting the intersection descriptors we start from the modified AlexNet trained for intersection detection, using the last FC layer of 4096 elements as the descriptor. Intersections from the detected road maps are matched against a database from OSM using Euclidean distances in descriptor space. While this approach proves to be very effective, we further improve the performance by fine-tuning the network using backpropagation to adjust distances in descriptor space and improve matching performance (Figure \[fig:performance\]). Localization is further refined by the geometric alignment between the estimated roads and the OSM roads in the regions centered at the intersections that have been put in correspondence. We detail the algorithms for matching and localization next.
#### Descriptor extraction and learning:
We extract descriptors for intersection images in a way that is similar to [@lin2015deep]. Moreover, we fine-tune the descriptors extracted for intersections from the neural network, so as to minimize the distance between identical intersections and maximize the distance between dissimilar ones. First, we train the modified AlexNet for intersection detection. Second, we fine-tune the network weights in a Siamese-like fashion, with corresponding intersection pairs from estimated roadmaps and OSM, respectively, marked as positive, and different intersection pairs marked as negative. See [@hadsell2006dimensionality] for details on this type of training. The robust loss formula we use takes into consideration the ground truth label $y$, which is $1$ if the intersections are the same and $0$ otherwise, the squared Euclidean distance $d$ between pairs of intersection descriptors, and a margin $m$, which gives zero penalty to descriptors $\mathbf{a}$ and $\mathbf{b}$ from different intersections whose squared distance is at least $m$ in descriptor space:
$$L(y) = \frac{1}{2}y d + \frac{1}{2}(1-y) \max(m-d, 0).$$
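For reference, a minimal NumPy sketch of this loss (our illustration; the function and variable names are ours):

```python
import numpy as np

def contrastive_loss(a, b, y, m=1.0):
    """Contrastive loss of Hadsell et al. for a descriptor pair (a, b).

    y = 1 if the two descriptors come from the same intersection
    (estimated roadmap vs. OSM), y = 0 otherwise; d is the squared
    Euclidean distance and m is the margin below which negative pairs
    are penalized.
    """
    d = np.sum((np.asarray(a) - np.asarray(b)) ** 2)
    return 0.5 * y * d + 0.5 * (1 - y) * max(m - d, 0.0)
```

Note that identical positive pairs and sufficiently separated negative pairs both incur zero loss, which is exactly the behavior the margin $m$ is meant to enforce.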
#### Intersection identification:
The learning phase produces a descriptor for each intersection image; similar images correspond to descriptors that are close in Euclidean space. When matching two regions centered at two candidate intersection matches, we also consider the descriptors of the nearby intersections, which results in a bipartite graph matching problem between two sets of descriptors. Since nearby intersections usually have similar surrounding regions, it is possible to wrongly match detected intersections to neighboring OSM intersections, but such local misplacements are most often fixed at the final geometric alignment step, when all the road details in a region are taken into account. Next we present our method for finding correspondences between detected intersections and the ones from OSM, by matching sets of intersections from their corresponding regions. These regions are neighborhoods of a certain radius centered at the intersections of interest. As our experiments show, the larger this radius, the more accurate the intersection identification. This is expected, as larger regions include more road structures that are unique to a specific urban area.
**Algorithm 1.** Gather roads and intersections from regions $R_w(i_T)$ and $R_w(i_L)$, then compute the matching distance between the two regions:\
1\) Get the nearest neighbor distance $t_{j}$ between each intersection $i_j$ in $R_w(i_T)$ and the intersections from $R_w(i_L)$.\
2\) Compute the sum of 1-NN distances $S_t(i_T, i_L) = \sum_{j}t_{j}$.\
3\) Get the nearest neighbor distance $l_{j}$ between each intersection $j$ in $R_w(i_L)$ and the intersections from $R_w(i_T)$.\
4\) Compute the sum of reverse 1-NN distances $S_l(i_L, i_T) = \sum_{j}l_{j}$.\
5\) Set the distance between intersections: $d(i_T, i_L) = (S_t(i_T, i_L)+S_l(i_L, i_T))/2$.\
**Output:** the list $L_k(i_T)$ of the $k$ closest OSM intersections $i_L$ to $i_T$ under the distance $d(i_T, i_L)$.
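The symmetric region distance of Algorithm 1 can be sketched as follows (a minimal version, ours, assuming each region is represented by the array of its intersection descriptors):

```python
import numpy as np

def region_distance(desc_T, desc_L):
    """Symmetric sum-of-nearest-neighbor distance between two regions.

    desc_T: descriptors of intersections in R_w(i_T), shape (n_T, dim)
    desc_L: descriptors of intersections in R_w(i_L), shape (n_L, dim)
    """
    # Pairwise Euclidean distances between the two descriptor sets.
    D = np.linalg.norm(desc_T[:, None, :] - desc_L[None, :, :], axis=2)
    S_t = D.min(axis=1).sum()  # each i_j in R_w(i_T) to its 1-NN in R_w(i_L)
    S_l = D.min(axis=0).sum()  # each j in R_w(i_L) to its 1-NN in R_w(i_T)
    return 0.5 * (S_t + S_l)

def closest_osm_intersections(desc_T, osm_regions, k=5):
    """Return indices of the k OSM regions that best match desc_T."""
    d = [region_distance(desc_T, desc_L) for desc_L in osm_regions]
    return np.argsort(d)[:k]
```

The distance is zero exactly when the two descriptor sets coincide, and grows as intersections in either region lack a nearby counterpart in the other.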
#### Geometric alignment:
Although a location can theoretically be determined by a single correctly identified intersection and a correct rotation with respect to the cardinal points, in order to have a robust match and further improve the initial localization (which could be off due to intersection detection misalignments), we also estimate, for a given pair of candidate intersection matches $(i_T,i_L)$, a geometric affine transformation between the roads in regions $R_w(i_T)$ and $R_w(i_L)$. Then, a misalignment measure is computed such that most outlier candidates in the list $L_k(i_T)$ of a given test intersection $i_T$ (found using Algorithm 1) are removed. The 2D registration procedure is performed by sampling road points from the test and query images and computing Shape Context descriptors [@belongie2000shape] at the sampled locations. Using kNN with Shape Context descriptors, a list of candidate correspondences is found and an affine transform is robustly estimated using RANSAC. Then, the Euclidean distance transform (the $bwdist$ Matlab function) is used to compute the symmetrized Chamfer distance between the two registered roadmaps as a measure of misalignment, which in practice yields significantly better results. Other approaches (such as [@Mattyus_2015_ICCV]) also proposed road alignment. Ours is fast and very effective for rejecting outlier intersection matches, improving localization and road enhancement (next section). A more detailed overview of our localization algorithm is presented below:
1\) Find the list $L_k(i_T)$ of $k$ candidate matches $i_L$ from OSM using Algorithm 1.\
2\) Compute the $k$ symmetric Chamfer distances $C(i_T, i_L)$ between region $R_w(i_T)$ and the corresponding region $R_w(i_L)$ of each $i_L \in L_k(i_T)$.\
3\) Return the aligned $i_L^{*}$ from $L_k(i_T)$ with minimum distance $C(i_T, i_L^{*})$.
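Steps 2 and 3 can be sketched as below. This is our illustration, not the paper's code: `scipy.ndimage.distance_transform_edt` plays the role of Matlab's `bwdist`, the candidate roadmaps are assumed to be already affine-aligned binary masks, and all names are ours.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_distance(road_a, road_b):
    """Symmetrized Chamfer distance between two boolean road masks.

    distance_transform_edt gives, for each pixel, the Euclidean distance
    to the nearest road pixel of the other map.
    """
    if not (road_a.any() and road_b.any()):
        return np.inf
    dt_a = distance_transform_edt(~road_a)
    dt_b = distance_transform_edt(~road_b)
    # Average distance from each road pixel of one map to the other map.
    return 0.5 * (dt_b[road_a].mean() + dt_a[road_b].mean())

def best_candidate(road_T, aligned_candidates):
    """Step 3: keep the aligned OSM candidate with minimal misalignment."""
    scores = [chamfer_distance(road_T, r) for r in aligned_candidates]
    return int(np.argmin(scores))
```

Perfectly overlapping roadmaps score zero, while a systematic offset of one pixel scores one, so the measure directly penalizes residual misalignment after registration.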
Enhancing the road map
----------------------
We can use the aligned OSM roadmaps to improve the detected roads and vice-versa - since OSM roadmaps sometimes contain wrongly labeled roads, or do not reflect recent road changes. Here we present a simple but effective method: 1) we apply a soft dilation procedure on the estimated roadmap and multiply it, pixel by pixel, with the aligned OSM map; 2) the resulting soft output is then smoothed with a Gaussian filter and the result is thinned using a standard nonmax suppression method for boundary detection; 3) after thinning, the roads are dilated back to achieve the initial thickness. The results are substantially better, as expected, greatly improving the similarity between the roads found and the OSM roads - the F-measure in road detection improved from $66.5\%$ to $93.9\%$. **Important note:** this procedure does not use ground truth localization, but only the entire OSM dataset, and relies on the accuracy of the automatic matching and alignment algorithms. It has proved generally effective even when the localization was wrong but the road structure between the matched OSM region and the test image was similar. We present qualitative results in Figure \[fig:enhanced\_roads\].
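The three enhancement steps can be sketched as follows. This is our minimal version, not the authors' implementation: the thresholding used for thinning is a stand-in for the nonmax suppression of the text, and all parameter values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, grey_dilation, binary_dilation

def enhance_roads(estimated, osm_aligned, dilate=5, sigma=2.0, thr=0.15):
    """Fuse a soft estimated roadmap with an aligned OSM road mask.

    estimated:   soft road scores in [0, 1]
    osm_aligned: 0/1 OSM road mask after geometric alignment
    thr:         thinning threshold (a tuned assumption of ours)
    """
    # 1) soft dilation, then pixelwise product with the aligned OSM map
    soft = grey_dilation(estimated, size=(dilate, dilate)) * osm_aligned
    # 2) Gaussian smoothing followed by thinning (thresholding here,
    #    standing in for nonmax suppression)
    thin = gaussian_filter(soft, sigma) > thr
    # 3) dilate back to roughly the original road thickness
    return binary_dilation(thin, iterations=dilate // 2)
```

Roads survive only where the soft estimate and the aligned OSM mask agree, which is the mechanism behind the large F-measure gain reported above.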
Experimental analysis
=====================
#### Two Cities Dataset:
We collected aerial images of two European cities (termed A and B) and automatically aligned them with the OSM road maps for training and evaluation. We plan to make the dataset public. The images are 600x600px, have a spatial resolution of 1m/pixel and cover an area of about 70 sq. km each. We use city A for training and validation and images from city B for testing. The quality of the images is fairly low, which makes the task of road detection and localization very challenging, even for the human eye (see example images in Figure \[fig:enhanced\_roads\]).
Figure \[fig:performance\] presents the average performance measures after geolocalizing all 3177 intersections from city B. We present intersection identification (recognition) rates versus the region radius (top left plot). As expected, performance increases as the region radius increases, at the cost of more computation and data being required. We also demonstrate that the geometric alignment phase significantly increases performance, bringing it close to the $90\%$ mark even when the region radius is small. The plot also presents the consistent improvement brought by fine-tuning the descriptors to optimize intersection matching. The other three plots present the distribution of localization errors in meters. We notice that most errors (around or above $90\%$ of them) are below 2.5 meters, that is, below 3 pixels for the image resolution available in our experiments. This error is very small considering the poor image quality and the errors present in the OSM itself, which was considered as ground truth. For these reasons we believe that our results demonstrate a high level of localization accuracy for our system, which could be very effective in most cases when the GPS signal is lost.
![Performance evaluation. Top left plot: performance increases with the region radius. Note that intersection descriptor learning as well as the final geometric alignment method significantly improve localization accuracy. The other three plots, showing the distribution of localization errors in meters, show that our approach is able to correctly localize an intersection with an error of at most 2.5 meters in at least $90\%$ of cases.[]{data-label="fig:performance"}](./performance_and_errors.jpg){height="8.1cm"}
#### Computational details:
Training for road detection and intersection descriptor learning took between 3 and 5 days on a GeForce GTX 970 GPU with 4 GB of memory and 1664 CUDA cores. At test time, road extraction runs at 5 km^2^/s at a spatial resolution of 1m/pixel and represents the most expensive task for geolocalization. Intersection detection runs at 0.7 km^2^/s, while localization by means of kNN in intersection descriptor space and geometric alignment is an order of magnitude faster in the context of searching within the limits of a $70$ sq. km city.
[Figure \[fig:enhanced\_roads\]: example aerial images and qualitative road enhancement results.]{height="8.5cm"}
Discussion and Conclusions
==========================
We have presented a complete system for geo-localization from aerial images in the absence of GPS information. Our proposed pipeline includes many contributions with efficient methods for road and intersection detection, intersection recognition with geometric alignment for accurate localization, followed by road detection enhancement. There are many potential applications for our approach in areas such as urban planning, tracking structural changes, updating of existing maps and environment monitoring. Our system could also be used in the context of unmanned aerial vehicles, in order to correct their GPS localization or to make their flight possible even when the GPS signal is lost. We estimate that if the search area were only $5$ times smaller than in our experiments, the automatic localization would be tractable for onboard processing, in near real-time, for the current generation of NVIDIA’s embedded GPUs (Jetson TX1). For nighttime use, for example, the roads are generally ’extracted’ by means of street lighting, which makes the problem of road and intersection detection easier - and thus even more accessible for on-board processing. We have shown that geolocalization from images alone, using learned high level features, is feasible and can achieve a high level of accuracy. It can be used as a GPS alternative or in conjunction with GPS, bringing valuable contributions to the literature and also to many applications that require offline or online, real-time processing.
The authors would like to thank Alina Marcu for her dedicated assistance with some of our experiments. Marius Leordeanu was supported in part by CNCS-UEFISCDI, under project PNII PCE-2012-4-0581.
[^1]: https://www.openstreetmap.org/
|
---
abstract: 'A curve over a field $k$ is [*pointless*]{} if it has no $k$-rational points. We show that there exist pointless genus-$3$ hyperelliptic curves over a finite field ${\mathbb{F}}_q$ if and only if $q\le 25$, that there exist pointless smooth plane quartics over ${\mathbb{F}}_q$ if and only if either $q\le 23$ or $q=29$ or $q=32$, and that there exist pointless genus-$4$ curves over ${\mathbb{F}}_q$ if and only if $q\le 49$.'
address:
- 'Center for Communications Research, 4320 Westerra Court, San Diego, CA 92121-1967, USA.'
- 'Microsoft Research, One Microsoft Way, Redmond, WA 98052, USA.'
- 'Department of Mathematics, University of Groningen, P.O. Box 800, 9700 AV Groningen, The Netherlands.'
author:
- 'Everett W. Howe'
- 'Kristin E. Lauter'
- Jaap Top
date: 1 March 2004
title: Pointless curves of genus three and four
---
Introduction {#S-intro}
============
How many points can there be on a curve of genus $g$ over a finite field ${\mathbb{F}}_q$? Researchers have been studying variants of this question for several decades. As van der Geer and van der Vlugt write in the introduction to their biannually-updated survey of results related to certain aspects of this subject, the attention paid to this question is
> motivated partly by possible applications in coding theory and cryptography, but just as well by the fact that the question represents an attractive mathematical challenge. [@GeerVlugt]
The complementary question — how [*few*]{} points can there be on a curve of genus $g$ over ${\mathbb{F}}_q$? — seems to have sparked little interest among researchers, perhaps because of the apparent [*lack*]{} of possible applications for such curves in coding theory or cryptography. But despite the paucity of applications, there are still mathematical challenges associated with such curves. In this paper, we address one of them:
Given an integer $g\ge 0$, determine the finite fields $k$ such that there exists a curve of genus $g$ over $k$ having no rational points.
We will call a curve over a field $k$ [*pointless*]{} if it has no $k$-rational points. Thus the problem we propose is to determine, for a given genus $g$, the finite fields $k$ for which there is a pointless curve of genus $g$.
The solutions to this problem for $g\le 2$ are known. There are no pointless curves of genus $0$ over any finite field; this follows from Wedderburn’s theorem, as is shown by [@Serre:CG § III.1.4, exer. 3]. The Weil bound for curves of genus $1$ over a finite field, proven by Hasse [@Hasse], shows that there are no pointless curves of genus $1$ over any finite field. If there is a pointless curve of genus $2$ over a finite field ${\mathbb{F}}_q$ then the Weil bound shows that $q\le 13$, and in 1972 Stark [@Stark] showed that in fact $q < 13$. For each $q<13$ there do exist pointless genus-$2$ curves over ${\mathbb{F}}_q$; a complete list of these curves is given in [@MaisnerNart Table 4].
In this paper we provide solutions for the cases $g=3$ and $g=4$.
\[T-genus3\] There exists a pointless genus-$3$ curve over ${\mathbb{F}}_q$ if and only if either $q\le 25$ or $q=29$ or $q=32$.
\[T-genus4\] There exists a pointless genus-$4$ curve over ${\mathbb{F}}_q$ if and only if $q \le 49$.
In fact, for genus-$3$ curves we prove a statement slightly stronger than Theorem \[T-genus3\]:
\[T-genus3specific\]
1. There exists a pointless genus-$3$ hyperelliptic curve over ${\mathbb{F}}_q$ if and only if $q\le 25$.
2. There exists a pointless smooth plane quartic curve over ${\mathbb{F}}_q$ if and only if either $q\le 23$ or $q=29$ or $q=32$.
The idea of the proofs of these theorems is simple. For any given genus $g$, and in particular for $g=3$ and $g=4$, the Weil bound can be used to provide an upper bound for the set of prime powers $q$ such that there exist pointless curves of genus $g$ over ${\mathbb{F}}_q$. For each $q$ less than or equal to this bound, we either provide a pointless curve of genus $g$ or use the techniques of [@HoweLauter] to prove that none exists.
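The first step can be made concrete with a short computation (ours, not from the paper). Using Serre's refinement of the Weil bound, $\#C({\mathbb{F}}_q)\ge q+1-g\lfloor 2\sqrt{q}\rfloor$, a pointless genus-$g$ curve over ${\mathbb{F}}_q$ can exist only if $q+1\le g\lfloor 2\sqrt{q}\rfloor$:

```python
from math import isqrt

def is_prime_power(n):
    # True if n = p^k for a prime p and k >= 1.
    if n < 2:
        return False
    for p in range(2, isqrt(n) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return n == 1
    return True  # n itself is prime

def serre_candidates(g, q_max=100):
    # Prime powers q not excluded by q + 1 <= g * floor(2*sqrt(q));
    # note isqrt(4*q) computes floor(2*sqrt(q)) exactly.
    return [q for q in range(2, q_max + 1)
            if is_prime_power(q) and q + 1 <= g * isqrt(4 * q)]
```

This leaves exactly the prime powers $q\le 32$ for genus $3$ and $q\le 59$ for genus $4$; the cases that survive this bound are then settled by the nonexistence arguments and the explicit curves described below.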
We wrote above that the question of how few points there can be on a genus-$g$ curve over ${\mathbb{F}}_q$ seems to have attracted little attention, and this is certainly the impression one gets from searching the literature for references to such curves. On the other hand, the question has undoubtedly occurred to researchers before. Indeed, the third author was asked this very question for the special case $g=3$ by both N. D. Elkies and J.-P. Serre after the appearance of his joint work [@AuerTop] with Auer. Also, while it is true that there seem to be no applications for pointless curves, it [*can*]{} be useful to know whether or not they exist. For example, Leep and Yeomans were concerned with the existence of pointless plane quartics in their work [@LeepYeomans] on explicit versions of special cases of the Ax-Kochen theorem. Finally, we note that Clark and Elkies have recently proven that for every fixed prime $p$ there is a constant $A_p$ such that for every integer $n>0$ there is a curve over ${\mathbb{F}}_p$ of genus at most $A_p n p^n$ that has no places of degree $n$ or less.
In Section \[S-heuristics\] we give the heuristic that guided us in our search for pointless curves. In Section \[S-proof\] we give the arguments that show that there are no pointless curves of genus $3$ over ${\mathbb{F}}_{27}$ or ${\mathbb{F}}_{31}$, no pointless smooth plane quartics over ${\mathbb{F}}_{25}$, no pointless genus-$3$ hyperelliptic curves over ${\mathbb{F}}_{29}$ or ${\mathbb{F}}_{32}$, and no pointless curves of genus $4$ over ${\mathbb{F}}_{53}$ or ${\mathbb{F}}_{59}$. Finally, in Sections \[S-examples3\] and \[S-examples4\] we give examples of pointless curves of genus $3$ and $4$ over every finite field for which such curves exist.
### Conventions {#conventions .unnumbered}
By a [*curve*]{} over a field $k$ we mean a smooth, projective, geometrically irreducible $1$-dimensional variety over $k$. When we define a curve by a set of equations, we mean the normalization of the projective closure of the variety defined by the equations.
### Acknowledgments {#acknowledgments .unnumbered}
The first author spoke about the work [@HoweLauter] at AGCT-9, and he thanks the organizers Yves Aubry, Gilles Lachaud, and Michael Tsfasman for inviting him to Luminy and for organizing such a pleasant and interesting conference. The first two authors thank the editors for soliciting this paper, which made them think about other applications of the techniques developed in [@HoweLauter].
In the course of doing the work described in this paper we used the computer algebra system Magma [@magma]. Several of our Magma programs are available on the web: start at [http://www.alumni.caltech.edu/\~however/biblio.html]{}
and follow the links related to this paper. One of our proofs depends on an explicit description of the isomorphism classes of unimodular quaternary Hermitian forms over the quadratic ring of discriminant $-11$. The web site mentioned above also contains a copy of a text file that gives a list of the six isomorphism classes of such forms; we obtained this file from the web site
[http://www.math.uni-sb.de/\~ag-schulze/Hermitian-lattices/]{}
maintained by Rainer Schulze-Pillot-Ziemen.
Heuristics for constructing pointless curves {#S-heuristics}
============================================
To determine the correct statements of Theorems \[T-genus3\] and \[T-genus4\] we began by searching for pointless curves of genus $3$ and $4$ over various small finite fields. In this section we explain the heuristic we used to find families of curves in which pointless curves might be abundant. We begin with a lemma from the theory of function fields over finite fields.
\[L-Chebotarev\] Let $L/K$ be a degree-$d$ extension of function fields over a finite field $k$, let $M$ be the Galois closure of $L/K$, let $G = \operatorname{Gal}(M/K)$, and let $H = \operatorname{Gal}(M/L)$. Let $S$ be the set of places $\frakp$ of $K$ that are unramified in $L/K$ and for which there is at least one place $\frakq$ of $L$, lying over $\frakp$, with the same residue field as $\frakp$. Then the set $S$ has a Dirichlet density in the set of all places of $K$ unramified in $L/K$, and this density is $$\delta:=\frac{\#\cup_{\tau\in G} H^\tau }{\# G}.$$ We have $\delta\ge 1/d$, with equality precisely when $L$ is a Galois extension of $K$. Furthermore, we have $\delta\le 1 - (d-1)/\#G$.
An easy exercise in the class field theory of function fields ([*cf.*]{} [@vdHeiden proof of Lemma 2]) shows that the set $S$ is precisely the set of places $\frakp$ whose Artin symbol $(\frakp, L/K)$ lies in the union of the conjugates of $H$ in $G$. The density statement then follows from the Chebotarev density theorem.
Since $H$ is an index-$d$ subgroup of $G$, we have $$\frac{\#\cup_{\tau\in G} H^\tau }{\# G} \ge \frac{\# H}{\# G} = \frac{1}{d}.$$ If $L/K$ is Galois then $H$ is trivial and the first relation in the displayed equation above is an equality. If $L/K$ is not Galois then $H$ is a nontrivial non-normal subgroup of $G$; the union of the conjugates of $H$ is then strictly larger than $H$ itself, so the first relation above is a strict inequality.
To prove the upper bound on $\delta$, we note that two conjugates $H^\sigma$ and $H^\tau$ of $H$ are identical when $\sigma$ and $\tau$ lie in the same coset of $H$ in $G$, so when we form the union of the conjugates of $H$ we need only let $\tau$ range over a set of coset representatives of the $d$ cosets of $H$ in $G$. Furthermore, the identity element lies in every conjugate of $H$, so the union of the conjugates of $H$ contains at most $d\cdot\#H - (d-1)$ elements. The upper bound follows.
Note that the density mentioned in Lemma \[L-Chebotarev\] is a Dirichlet density. If the constant field of $K$ is algebraically closed in the Galois closure of $L/K$, then the set $S$ also has a natural density (see [@MurtyScherk]). In particular, the set $S$ has a natural density when $L/K$ is a Galois extension and $L$ and $K$ have the same constant field.
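The densities that appear below ($\delta = 2/3$ for a non-Galois cubic cover whose Galois closure has group $S_3$, and $\delta = 3/8$ for a degree-$4$ cover with dihedral Galois closure of order $8$) can be verified by direct counting. Here is a short Python sketch of that computation (our illustration, with the groups realized as permutation groups):

```python
from itertools import permutations
from fractions import Fraction

def compose(a, b):
    """Composition of permutations-as-tuples: (a . b)(i) = a[b[i]]."""
    return tuple(a[i] for i in b)

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

def closure(gens, n):
    """Subgroup of S_n generated by gens, built by breadth-first closure."""
    group, frontier = {tuple(range(n))}, [tuple(range(n))]
    while frontier:
        g = frontier.pop()
        for h in gens:
            gh = compose(g, h)
            if gh not in group:
                group.add(gh)
                frontier.append(gh)
    return group

def density(G, H):
    """delta = #(union of the conjugates of H in G) / #G, as in the lemma."""
    union = {compose(compose(t, h), inverse(t)) for t in G for h in H}
    return Fraction(len(union), len(G))

# Non-Galois cubic cover: G = S_3, H = a point stabilizer of index 3.
S3 = set(permutations(range(3)))
H3 = {g for g in S3 if g[0] == 0}
print(density(S3, H3))              # 2/3

# Tower of double covers: G = dihedral of order 8, H generated by a reflection.
r, s = (1, 2, 3, 0), (0, 3, 2, 1)   # rotation and reflection of a square
D4 = closure([r, s], 4)
H4 = {tuple(range(4)), s}
print(density(D4, H4))              # 3/8, so 1 - delta = 5/8
```

In both cases one can also check the bounds of Lemma \[L-Chebotarev\]: $1/3 \le 2/3 = 1 - 2/6$ for the first cover, and $1/4 \le 3/8 \le 1 - 3/8$ for the second.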
Lemma \[L-Chebotarev\] leads us to our main heuristic:
\[H-automorphisms\] Let $C\to D$ be a degree-$d$ cover of curves over ${\mathbb{F}}_q$, let $L/K$ be the corresponding extension of function fields, and let $\delta$ be the density from Lemma [\[L-Chebotarev\]]{}. If the constant field of the Galois closure of $L/K$ is equal to ${\mathbb{F}}_q$, then $C$ will be pointless with probability $(1-\delta)^{\#D({\mathbb{F}}_q)}$. In particular, if $C\to D$ is a Galois cover, then $C$ will be pointless with probability $(1-1/d)^{\#D({\mathbb{F}}_q)}$.
Lemma \[L-Chebotarev\] makes it reasonable to expect that with probability $1 - \delta$, a given rational point of $D$ will have no rational points of $C$ lying over it. Our heuristic follows if we assume that all of the points of $D$ behave independently.
Consider what this heuristic tells us about hyperelliptic curves. Since a hyperelliptic curve is a double cover of a genus-$0$ curve, we expect that a hyperelliptic curve over ${\mathbb{F}}_q$ will be pointless with probability $(1/2)^{q+1}$. However, if the hyperelliptic curve has more automorphisms than just the hyperelliptic involution, it will be more likely to be pointless. For instance, suppose $C$ is a hyperelliptic curve whose automorphism group has order $4$. This automorphism group will give us a Galois cover $C\to{\mathbb{P}}^1$ of degree $4$. Then our heuristic suggests that $C$ will be pointless with probability $(3/4)^{q+1}$.
On the other hand, consider a generic smooth plane quartic $C$ over ${\mathbb{F}}_q$. A generic quartic has a $1$-parameter family of non-Galois maps of degree $3$ to ${\mathbb{P}}^1$. For any one of these maps, the Galois group of its Galois closure is the symmetric group on $3$ elements. In this case, the density $\delta$ from Lemma \[L-Chebotarev\] is $2/3$, so we expect (modulo the condition on constant fields mentioned in the heuristic) that a typical plane quartic will be pointless with probability $(1/3)^{q+1}$. But if the quartic $C$ has an automorphism group of order $4$, and if the quotient of $C$ by this automorphism group is ${\mathbb{P}}^1$, then we expect $C$ to be pointless with probability $(3/4)^{q+1}$.
This heuristic suggested two things to us. First, to find pointless curves it is helpful to look for curves with larger-than-usual automorphism groups. We decided to focus on curves whose automorphism groups contain the Klein $4$-group, because it is easy to write down curves with this automorphism group and yet the group is large enough to give us a good chance of finding pointless curves. Second, the heuristic suggested that we look at curves $C$ that are double covers of curves $D$ that are double covers of ${\mathbb{P}}^1$. The Galois group of the resulting degree-$4$ cover $C\to{\mathbb{P}}^1$ will typically be the dihedral group of order $8$, and the heuristic predicts that $C$ will be pointless with probability $(5/8)^{q+1}$. For a fixed $D$, if we consider the family of double covers $C\to D$ with $C$ of genus $3$ or $4$, our heuristic predicts that $C$ will be pointless with probability $(1/2)^{\#D({\mathbb{F}}_q)}$. If $\#D({\mathbb{F}}_q)$ is small enough, this probability can be reasonably high.
The curves that we found by following our heuristic are listed in Sections \[S-examples3\] and \[S-examples4\].
Proofs of the theorems {#S-proof}
======================
In this section we prove the theorems stated in the introduction. Clearly Theorem \[T-genus3\] follows from Theorem \[T-genus3specific\], so we will only prove Theorems \[T-genus4\] and \[T-genus3specific\].
The Weil bound says that a curve of genus $3$ over ${\mathbb{F}}_q$ has at least $q + 1 - 6\sqrt{q}$ points, and it follows immediately that if there is a pointless genus-$3$ curve over ${\mathbb{F}}_q$ then $q < 33$. In Section \[S-examples3\] we give examples of pointless genus-$3$ hyperelliptic curves over ${\mathbb{F}}_q$ for $q\le 25$ and examples of pointless smooth plane quartics for $q\le 23$, for $q = 29$, and for $q = 32$. To complete the proof, we need only prove the following statements:
1. \[31\] There are no pointless genus-$3$ curves over ${\mathbb{F}}_{31}$.
2. \[27\] There are no pointless genus-$3$ curves over ${\mathbb{F}}_{27}$.
3. \[25\] There are no pointless smooth plane quartics over ${\mathbb{F}}_{25}$.
4. \[32\] There are no pointless genus-$3$ hyperelliptic curves over ${\mathbb{F}}_{32}$.
5. \[29\] There are no pointless genus-$3$ hyperelliptic curves over ${\mathbb{F}}_{29}$.
### Statement \[31\] {#statement31 .unnumbered}
Theorem 1 of [@LauterSerre:CM] shows that every genus-$3$ curve over ${\mathbb{F}}_{31}$ has at least $2$ rational points, and statement \[31\] follows.
### Statement \[27\] {#statement27 .unnumbered}
To prove statement \[27\], we begin by running the Magma program [CheckQGN]{} described in [@HoweLauter]. The output of [CheckQGN(27,3,0)]{} shows that if $C$ is a pointless genus-$3$ curve over ${\mathbb{F}}_{27}$ then the real Weil polynomial of $C$ (see [@HoweLauter]) must be $(x-10)^2 (x-8)$. (To reach this conclusion without relying on the computer, one can adapt the reasoning on ‘defect 2’ found in [@LauterSerre:JAG § 2].) Applying Proposition 13 of [@HoweLauter], we find that $C$ must be a double cover of an elliptic curve over ${\mathbb{F}}_{27}$ with exactly $20$ rational points.
Up to Galois conjugacy, there are two elliptic curves over ${\mathbb{F}}_{27}$ with exactly $20$ rational points; one is given by $y^2 = x^3 + 2x^2 + 1$ and the other by $y^2 = x^3 + 2x^2 + a$, where $a^3 - a + 1 = 0$. By using the argument given in the analogous situation in [@HoweLauter § 6.1], we see that every genus-$3$ double cover of one of these two $E$’s can be obtained by adjoining to the function field of $E$ an element $z$ that satisfies $z^2 = f$, where $f$ is a function on $E$ of degree at most $6$ that is regular outside $\infty$, that has four zeros or poles of odd order, and that has a double zero at a point $Q$ of $E$ that is rational over ${\mathbb{F}}_{27}$. In fact, it suffices to consider $Q$’s that represent the classes of $E({\mathbb{F}}_{27}) / 2 E({\mathbb{F}}_{27})$. The first $E$ given above has four such classes and the second has two. We can also demand that the representative points $Q$ not be $2$-torsion points.
The divisor of the function $f$ is $$P_1 + P_2 + P_3 + P_4 + 2Q - 6\infty$$ for some geometric points $P_1,\ldots,P_4$. We are assuming that the double cover $C$ has no rational points, so none of the $P_i$ can be rational over ${\mathbb{F}}_{27}$. In particular, none of the $P_i$ is equal to the infinite point. Since $Q$ is also not the infinite point (because we chose it not to be a $2$-torsion point), we see that the degree of $f$ is exactly $6$.
It is easy to have Magma enumerate, for each of the six $(E,Q)$ pairs, all of the degree-$6$ functions $f$ on $E$ that have double zeros at $Q$. For each such $f$ we can check to see whether there is a rational point $P$ on $E$ such that $f(P)$ is a nonzero square; if there is such a point, then the double cover $D$ of $E$ given by $z^2 = f$ would have a rational point. For those functions $f$ for which such a $P$ does not exist, we can check to see whether the divisor of $f$ has the right form. If the divisor of $f$ does have the right form, we can compute whether the curve $D$ has a rational point lying over $Q$ or over $\infty$.
We wrote Magma routines to perform these calculations; they are available on the web at the URL mentioned in the acknowledgments. As it happens, no $(E,Q)$ pair gives rise to a function $f$ that passes the first two tests described in the preceding paragraph, so we never had to perform the third test.
Our conclusion is that there are no pointless genus-$3$ curves over ${\mathbb{F}}_{27}$, which completes the proof of statement \[27\].
### Statement \[25\] {#statement25 .unnumbered}
To prove statement \[25\] we start by running [CheckQGN(25,3,0)]{}. We find that the real Weil polynomial of a pointless genus-$3$ curve over ${\mathbb{F}}_{25}$ is either $f_1 := (x - 10)^2(x - 6)$ or $f_2:=(x - 10)(x^2 - 16 x + 62)$ or $f_3:=(x - 10)(x - 9)(x - 7)$ or $f_4:=(x - 10)(x - 8)^2$. (This list can also be obtained by using Table 4 and Theorem 1(a) of [@HoweLauter].)
We begin by considering the real Weil polynomial $f_1 = (x-10)^2 (x-6)$. Suppose $C$ is a genus-$3$ curve over ${\mathbb{F}}_{25}$ with real Weil polynomial equal to $f_1$. Arguing as in the proof of [@HoweLauter Cor. 12], we find that there is an exact sequence $$0 \to \Delta \to A\times E \to \operatorname{Jac}C \to 0,$$ where $A$ is an abelian surface with real Weil polynomial $(x-10)^2$, where $E$ is an elliptic curve with real Weil polynomial $x-6$, where $\Delta$ is a self-dual finite group scheme that is killed by $4$, and where the projections from $A\times E$ to $A$ and to $E$ give monomorphisms $\Delta\hookrightarrow A$ and $\Delta\hookrightarrow E$. Furthermore, there are polarizations $\lambda_A$ and $\lambda_E$ on $A$ and $E$ whose kernels are the images of $\Delta$ under these monomorphisms, and the polarization on $\operatorname{Jac}C$ induced by the product polarization $\lambda_A\times\lambda_E$ is the canonical polarization on $\operatorname{Jac}C$.
Since $\Delta$ is isomorphic to the kernel of $\lambda_E$ and since $\Delta$ is killed by $4$, we see that if $\Delta$ is not trivial then it is isomorphic to either $E[2]$ or $E[4]$. If $\Delta$ were trivial then $\operatorname{Jac}C$ would be equal to $A\times E$ and the canonical polarization on $\operatorname{Jac}C$ would be a product polarization, and this is not possible. Therefore $\Delta$ is isomorphic either to $E[2]$ or $E[4]$. Since the Frobenius endomorphism of $A$ is equal to the multiplication-by-$5$ map on $A$, the group of geometric $4$-torsion points on $A$ is a trivial Galois module. But $E[4]$ is not a trivial Galois module, so we see that $\Delta$ must be isomorphic to $E[2]$. Arguing as in the proof of [@HoweLauter Prop. 13], we find that there must be a degree-$2$ map from $C$ to $E$.
Thus, to find the genus-$3$ curves over ${\mathbb{F}}_{25}$ whose real Weil polynomials are equal to $(x-10)^2(x-6)$, we need only look at the genus-$3$ curves that are double covers of elliptic curves over ${\mathbb{F}}_{25}$ with $20$ points and with three rational points of order $2$. There are two such elliptic curves, and, as in the proof of statement \[27\], we can use Magma to enumerate their genus-$3$ double covers with no points. (Our Magma program is available at the URL mentioned in the acknowledgments.) We find that there is exactly one such double cover: if $a$ is an element of ${\mathbb{F}}_{25}$ with $a^2 - a + 2 = 0$, then the double cover $C$ of the elliptic curve $y^2 = x^3 + 2x$ given by setting $z^2 = a(x^2-2)$ has no points.
The curve $C$ is clearly hyperelliptic, because it is a double cover of the genus-$0$ curve $z^2 = a(x^2-2)$. By parametrizing this genus-$0$ curve and manipulating the resulting equation for $C$, we find that $C$ is isomorphic to the curve $y^2 = a(x^8 + 1)$, which is the example presented below in Section \[S-examples3\].
Next we show that there are no pointless genus-$3$ curves over ${\mathbb{F}}_{25}$ with real Weil polynomial equal to $f_2$ or $f_3$ or $f_4$. Suppose $C$ is a pointless genus-$3$ curve over ${\mathbb{F}}_{25}$ whose real Weil polynomial is $f_2$ or $f_3$ or $f_4$. By applying Proposition 13 of [@HoweLauter], we find that $C$ must be a double cover of an elliptic curve over ${\mathbb{F}}_{25}$ having either $16$ or $17$ points. There is one elliptic curve over ${\mathbb{F}}_{25}$ of each of these orders. As we did above and in the proof of statement \[27\], we can easily have Magma enumerate the genus-$3$ double covers of these elliptic curves. The only complication is that for the curve with $16$ points, we cannot assume that the auxiliary point $Q$ mentioned in the proof of statement \[27\] is not a $2$-torsion point.
The Magma program we used to enumerate these double covers can be found at the web site mentioned in the acknowledgments. Using this program, we found that the curve with $17$ points has no pointless genus-$3$ double covers. On the other hand, we found two functions $f$ on the curve $E$ with $16$ points such that the double cover of $E$ defined by $z^2 = f$ is a pointless genus-$3$ curve. But when we computed an upper bound for the number of points on these curves over ${\mathbb{F}}_{625}$, we found that both of the curves have at most $540$ points over ${\mathbb{F}}_{625}$. This upper bound is not consistent with any of the three real Weil polynomials we are considering. (In fact, one can show by direct computation that the two curves are isomorphic to the curve $y^2 = a(x^8 + 1)$ that we found earlier, whose real Weil polynomial is $f_1$.) Thus, there are no pointless genus-$3$ curves over ${\mathbb{F}}_{25}$ with real Weil polynomial equal to $f_2$ or $f_3$ or $f_4$.
This proves statement \[25\].
### Statement \[32\] {#statement32 .unnumbered}
Suppose that $C$ is a pointless genus-$3$ hyperelliptic curve over ${\mathbb{F}}_{32}$. The numbers of rational points on a hyperelliptic curve and on its quadratic twist sum to $2(q+1)$, so the quadratic twist of $C$ would be a genus-$3$ curve over ${\mathbb{F}}_{32}$ with $66$ rational points. But [@LauterSerre:JAG Thm. 1] shows that no such curve exists.
We give a second proof of statement \[32\] as well, which provides us with a little extra information and foreshadows some of our later arguments. This same proof is given in [@Elkies § 3.3] and attributed to Serre.
Suppose that $C$ is a pointless genus-$3$ curve over ${\mathbb{F}}_{32}$. Then $C$ meets the Weil-Serre lower bound, and (as Serre shows in [@Serre:notes]) its Jacobian is therefore isogenous to the cube of an elliptic curve $E$ over ${\mathbb{F}}_{32}$ whose trace of Frobenius is $11$. Note that the endomorphism ring of this elliptic curve is the quadratic order $\calO$ of discriminant $11^2 - 4\cdot32 = -7$. The polarizations of abelian varieties isogenous to a power of a single elliptic curve whose endomorphism ring is a maximal order can be understood in terms of Hermitian modules (see the appendix to [@LauterSerre:CM]). Since the endomorphism ring $\calO$ is a maximal order and a PID, there is exactly one abelian variety in the isogeny class of $E^3$, namely $E^3$ itself. Furthermore, the theory of Hermitian modules shows that the principal polarizations of $E^3$ correspond to the isomorphism classes of unimodular Hermitian forms on the $\calO$-module $\calO^3$. Hoffmann [@Hoffmann] shows that there is only one isomorphism class of indecomposable unimodular Hermitian forms on $\calO^3$, so there is at most one Jacobian in the isogeny class of $E^3$, and hence at most one genus-$3$ curve over ${\mathbb{F}}_{32}$ with no points. The example we give in Section \[S-examples3\] is a plane quartic, so there are no pointless genus-$3$ hyperelliptic curves over ${\mathbb{F}}_{32}$. This proves statement \[32\].
### Statement \[29\] {#statement29 .unnumbered}
We wrote a Magma program to find (by enumeration) all pointless genus-$3$ hyperelliptic curves over an arbitrary finite field ${\mathbb{F}}_q$ of odd characteristic with $q>7$. We applied our program to the field ${\mathbb{F}}_{29}$, and we found no curves. Our Magma program is available at the URL mentioned in the acknowledgments.
Note that in the course of proving Theorem \[T-genus3specific\] we showed that the pointless genus-$3$ curves over ${\mathbb{F}}_{25}$ and ${\mathbb{F}}_{32}$ exhibited in Section \[S-examples3\] are the only such curves over their respective fields. Also, our program to enumerate pointless genus-$3$ hyperelliptic curves shows that there is only one pointless genus-$3$ hyperelliptic curve over ${\mathbb{F}}_{23}$.
We now turn to Theorem \[T-genus4\]. It follows from Serre’s refinement of the Weil bound [@Serre:CRAS Théorème 1] that if a curve of genus $4$ over ${\mathbb{F}}_q$ has no rational points, then $q\le 59$. In Section \[S-examples4\] we give examples of pointless genus-$4$ curves over ${\mathbb{F}}_q$ for all prime powers $q\le 49$, so to prove the theorem we must show that there are no pointless genus-$4$ curves over ${\mathbb{F}}_{53}$ or ${\mathbb{F}}_{59}$.
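Serre's refinement replaces $2\sqrt{q}$ by $\lfloor 2\sqrt{q}\rfloor$ in the Weil bound, so a pointless genus-$4$ curve requires $q + 1 \le 4\lfloor 2\sqrt{q}\rfloor$. A few lines of Python (again our illustration, not part of the paper's Magma computations) recover the bound $q \le 59$:

```python
from math import isqrt

def is_prime_power(n):
    """Return True if n = p^k for a prime p and some k >= 1."""
    if n < 2:
        return False
    for p in range(2, isqrt(n) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return n == 1
    return True

def serre_candidates(g):
    """Prime powers q not excluded by q + 1 <= g * floor(2*sqrt(q)).
    floor(2*sqrt(q)) is computed exactly as isqrt(4q)."""
    return [q for q in range(2, 16 * g * g + 1)
            if is_prime_power(q) and q + 1 <= g * isqrt(4 * q)]

print(max(serre_candidates(4)))  # 59: the bound used in the text
```

(For genus $3$ the same refinement still allows $q = 31$ and $q = 32$, which is why those fields required the separate arguments given above.)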
Combining the output of [CheckQGN(53,4,0)]{} with Theorem 1(b) of [@HoweLauter], we find that a pointless genus-$4$ curve over ${\mathbb{F}}_{53}$ must be a double cover of an elliptic curve $E$ over ${\mathbb{F}}_{53}$ with exactly $42$ points. (Again, the information obtained by running [CheckQGN]{} can also be obtained without recourse to the computer by modifying the ‘defect $2$’ arguments in [@LauterSerre:JAG § 2].)
There are four elliptic curves $E$ over ${\mathbb{F}}_{53}$ with exactly $42$ points. Following the arguments of [@HoweLauter § 6.1], we find that every genus-$4$ double cover of such an $E$ can be obtained by adjoining to the function field of $E$ a root of an equation $z^2 = f$, where $f$ is a function on $E$ whose divisor is of the form $$P_1 + \cdots + P_6 + 2Q - 8\infty,$$ where $Q$ is a rational point of $E$ that is not a $2$-torsion point, and where it suffices to consider $Q$’s that represent the classes of $E({\mathbb{F}}_{53}) / 2 E({\mathbb{F}}_{53})$. As in the preceding proof, we wrote Magma programs to enumerate the genus-$4$ double covers of the four possible $E$’s and to check to see whether all of these covers had rational points. Our programs, available at the URL mentioned in the acknowledgments, showed that every genus-$4$ double cover of these $E$’s has a rational point. Thus there are no pointless genus-$4$ curves over ${\mathbb{F}}_{53}$.
Next we show that there are no pointless curves of genus $4$ over ${\mathbb{F}}_{59}$. If $C$ were such a curve, then $C$ would meet the Weil-Serre lower bound, and therefore the Jacobian of $C$ would be isogenous to the fourth power of an elliptic curve $E$ over ${\mathbb{F}}_{59}$ with $45$ points. Note that there is exactly one such $E$, and its endomorphism ring $\calO$ is the quadratic order of discriminant $-11$. As in the second proof of statement \[32\] of Theorem \[T-genus3specific\], we see that there is only one abelian variety in the isogeny class of $E^4$, and principal polarizations of $E^4$ correspond to the isomorphism classes of unimodular Hermitian forms on the $\calO$-module $\calO^4$. Schiemann [@Schiemann] states that there are six isomorphism classes of unimodular Hermitian forms on the module $\calO^4$. We were unable to find a listing of these isomorphism classes at the URL mentioned in [@Schiemann], but we did find them by following links from the URL
[http://www.math.uni-sb.de/\~ag-schulze/Hermitian-lattices/]{}
We have put a copy of the page listing these six forms on the web site mentioned in the acknowledgments.
Three of the isomorphism classes of unimodular Hermitian forms on $\calO^4$ are decomposable, and so do not come from the Jacobian of a curve. The three indecomposable Hermitian forms can each be written as a matrix with an upper left entry of $2$. Arguing as in the proof of [@HoweLauter Prop. 13], we find that our curve $C$ must be a double cover of the curve $E$.
We are again in familiar territory. As above, it is an easy matter to write a Magma program to enumerate the genus-$4$ double covers of the given elliptic curve $E$ and to check that they all have a rational point. (Our Magma programs are available at the URL mentioned in the acknowledgments.) Our computation showed that there are no pointless curves of genus $4$ over ${\mathbb{F}}_{59}$.
Examples of pointless curves of genus $3$ {#S-examples3}
=========================================
In this section we give examples of pointless curves of genus $3$ over the fields where such curves exist. We only consider curves whose automorphism groups contain the Klein $4$-group $V$. We begin with the hyperelliptic curves.
Suppose $C$ is a genus-$3$ hyperelliptic curve over ${\mathbb{F}}_q$ whose automorphism group contains a copy of $V$, and assume that the hyperelliptic involution is contained in $V$. Then $V$ modulo the hyperelliptic involution acts on $C$ modulo the hyperelliptic involution, and gives us an involution on ${\mathbb{P}}^1$. By changing coordinates on ${\mathbb{P}}^1$, we may assume that the involution on ${\mathbb{P}}^1$ is of the form $x\mapsto n/x$ for some $n\in {\mathbb{F}}_q^*$. (When $q$ is odd we need consider only two values of $n$, one a square and one a nonsquare. When $q$ is even we may take $n=1$.)
It follows that when $q$ is odd the curve $C$ can be defined either by an equation of the form $y^2 = f(x + n/x)$, where $f$ is a separable quartic polynomial coprime to $x^2 - 4n$, or by an equation of the form $y^2 = x f(x + n/x)$, where $f$ is a separable cubic polynomial coprime to $x^2 - 4n$. However, the latter possibility cannot occur if $C$ is to be pointless. When $q$ is even, if we assume the curve is ordinary, then it may be written in the form $y^2 + y = f(x + 1/x)$, where $f$ is a rational function with $2$ simple poles, both nonzero.
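As a concrete instance of this construction, take $q = 5$, $n = 1$, and $f(t) = 2t^4 + 2t^2 + 2$ (a quartic we chose for illustration; it is separable and coprime to $t^2 - 4$). Then $x^4 f(x + 1/x) = 2x^8 + 3x^4 + 2$, which is the pointless curve listed for $q = 5$ in Table \[Tbl-examples3\]. The expansion and the pointlessness check can be done in a few lines of Python:

```python
p = 5
squares = {i * i % p for i in range(p)}   # {0, 1, 4}

def polymul(a, b):
    """Multiply polynomials over F_p, coefficients listed lowest degree first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def polyadd(a, b):
    n = max(len(a), len(b))
    return [((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)) % p
            for i in range(n)]

# x^4 * f(x + 1/x) for f(t) = 2t^4 + 2t^2 + 2 expands to
# 2*(x^2+1)^4 + 2*x^2*(x^2+1)^2 + 2*x^4.
u2 = polymul([1, 0, 1], [1, 0, 1])        # (x^2+1)^2
u4 = polymul(u2, u2)                       # (x^2+1)^4
g = polyadd(polymul([2], u4),
            polyadd(polymul([0, 0, 2], u2), [0, 0, 0, 0, 2]))
assert g == [2, 0, 0, 0, 3, 0, 0, 0, 2]    # i.e. 2x^8 + 3x^4 + 2

def value(x):
    return sum(c * pow(x, k, p) for k, c in enumerate(g)) % p

# Pointless: every value of g is a nonsquare, and the leading coefficient
# (which governs the two points at infinity) is a nonsquare too.
print(all(value(x) not in squares for x in range(p)) and g[8] not in squares)
```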
We wrote a simple Magma program to search for pointless hyperelliptic curves of this form. We found such curves for every $q$ in $$\{2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, 23, 25\}.$$ We give examples in Table \[Tbl-examples3\].
$q$ curve
------ ----------------------------------------------------------------------------------------
$2$ $y^2 + y = (x^4 + x^2 + 1)/(x^4 + x^3 + x^2 + x + 1)$
$3$ $y^2 = - x^8 + x^7 - x^6 - x^5 - x^3 - x^2 + x - 1$
$4$ $y^2 + y = (ax^4 + ax^3 + a^2x^2 + ax + a)/(x^4 + ax^3 + x^2 + ax + 1)$
where $a^2 + a + 1 = 0$
$5$ $y^2 = 2x^8 + 3x^4 + 2$
$7$ $y^2 = 3x^8 + 2x^6 + 3x^4 + 2x^2 + 3$
$8$ $y^2 + y = (x^4 + a^6x^3 + a^3x^2 + a^6x + 1)/(x^4 + x^3 + x^2 + x + 1)$
where $a^3 + a + 1 = 0$
$9$ $y^2 = a(x^8 + 1)$
where $a^2 - a - 1= 0$
$11$ $y^2 = 2x^8 + 4x^6 - 2x^4 + 4x^2 + 2$
$13$ $y^2 = 2x^8 + 3x^7 + 3x^6 + 4x^4 + 3x^2 + 3x + 2$
$16$ $y^2 + y = (a^3x^4 + a^3x^3 + a^{14}x^2 + a^3x + a^3)/(x^4 + a^3x^3 + x^2 + a^3x + 1)$
where $a^4 + a + 1 = 0$
$17$ $y^2 = 3x^8 - 2x^5 + 4x^4 - 2x^3 + 3$
$19$ $y^2 = 2x^8 - x^6 - 8x^4 - x^2 + 2$
$23$ $y^2 = 5x^8 + x^6 + 6x^5 + 7x^4 - 6x^3 + x^2 + 5$
$25$ $y^2 = a(x^8 + 1)$
where $a^2 - a + 2 = 0$
: Examples of pointless hyperelliptic curves of genus $3$ over ${\mathbb{F}}_q$ with automorphism group containing the Klein $4$-group. For $q\neq 23$, the automorphism $x \mapsto 1/x$ of ${\mathbb{P}}^1$ lifts to give an automorphism of the curve; for $q=23$, the automorphism $x\mapsto -1/x$ lifts.[]{data-label="Tbl-examples3"}
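The prime-field entries of Table \[Tbl-examples3\] are easy to verify directly: for $y^2 = g(x)$ with $\deg g = 8$, pointlessness means that $g(x)$ is a nonsquare for every $x\in{\mathbb{F}}_p$ (since $0$ counts as a square, this also excludes points with $y = 0$) and that the leading coefficient of $g$ is a nonsquare, so that the two points at infinity are not rational. The Python sketch below (ours; the entries over ${\mathbb{F}}_4$, ${\mathbb{F}}_8$, ${\mathbb{F}}_9$, ${\mathbb{F}}_{16}$, and ${\mathbb{F}}_{25}$ would need extension-field arithmetic not shown here) checks several of the prime-field entries:

```python
def is_pointless_hyperelliptic(p, coeffs):
    """coeffs lists g from the x^8 coefficient down to the constant term.
    y^2 = g(x) over F_p is pointless iff g(x) is a nonsquare for every x
    and the leading coefficient is a nonsquare (no rational points at infinity)."""
    squares = {i * i % p for i in range(p)}
    if coeffs[0] % p in squares:
        return False
    for x in range(p):
        v = 0
        for c in coeffs:           # Horner evaluation of g(x) mod p
            v = (v * x + c) % p
        if v in squares:
            return False
    return True

# Entries of the table over prime fields (x^8 coefficient first).
table = {
    3:  [-1, 1, -1, -1, 0, -1, -1, 1, -1],
    5:  [2, 0, 0, 0, 3, 0, 0, 0, 2],
    7:  [3, 0, 2, 0, 3, 0, 2, 0, 3],
    11: [2, 0, 4, 0, -2, 0, 4, 0, 2],
    13: [2, 3, 3, 0, 4, 0, 3, 3, 2],
    17: [3, 0, 0, -2, 4, -2, 0, 0, 3],
}
print(all(is_pointless_hyperelliptic(q, c) for q, c in table.items()))  # True
```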
Now we turn to the pointless smooth plane quartics. We searched for pointless quartics of the form $$ax^4 + by^4 + cz^4 + dx^2y^2 + ex^2z^2 + fy^2z^2 = 0$$ over finite fields of odd characteristic, because the automorphism groups of such quartics clearly contain the Klein group. We found pointless quartics of this form over ${\mathbb{F}}_q$ for $q$ in $$\{5,7,9,11,13,17,19,23,29\}.$$ We present sample curves in Table \[Tbl-quartics\].
$q$ curve
------ ------------------------------------------------------
$5$ $x^4 + y^4 + z^4 = 0$
$7$ $x^4 + y^4 + 2z^4 + 3x^2z^2 + 3y^2z^2 = 0$
$9$ $x^4 - y^4 + a^2z^4 + x^2y^2 = 0$
where $a^2 - a - 1 = 0$
$11$ $x^4 + y^4 + z^4 + x^2y^2 + x^2z^2 + y^2z^2 = 0$
$13$ $x^4 + y^4 + 2z^4 = 0$
$17$ $x^4 + y^4 + 2z^4 + x^2y^2 = 0$
$19$ $x^4 + y^4 + z^4 + 7x^2y^2 - x^2z^2 - y^2z^2 = 0$
$23$ $x^4 + y^4 + z^4 + 10x^2y^2 - 3x^2z^2 - 3y^2z^2 = 0$
$29$ $x^4 + y^4 + z^4 = 0$
: Examples of pointless smooth plane quartics over ${\mathbb{F}}_q$ (with $q$ odd) with automorphism group containing the Klein $4$-group.[]{data-label="Tbl-quartics"}
Over ${\mathbb{F}}_3$ there are many pointless smooth plane quartics; for instance, the curve $$x^4 + xyz^2 + y^4 + y^3z - yz^3 + z^4 = 0$$ has no points.
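Pointlessness of a plane quartic over a small prime field is also quick to confirm by enumerating the $p^2 + p + 1$ points of ${\mathbb{P}}^2({\mathbb{F}}_p)$. The sketch below (our Python illustration) checks the $q = 5$, $q = 13$, and $q = 29$ entries of Table \[Tbl-quartics\] and the quartic over ${\mathbb{F}}_3$ displayed above:

```python
def projective_points(p):
    """Representatives of the p^2 + p + 1 points of P^2(F_p)."""
    pts = [(1, y, z) for y in range(p) for z in range(p)]
    pts += [(0, 1, z) for z in range(p)]
    pts.append((0, 0, 1))
    return pts

def is_pointless_quartic(p, F):
    """True iff the plane quartic F = 0 has no points in P^2(F_p)."""
    return all(F(x, y, z) % p != 0 for (x, y, z) in projective_points(p))

fermat = lambda x, y, z: x**4 + y**4 + z**4
assert is_pointless_quartic(5, fermat)       # the q = 5 table entry
assert is_pointless_quartic(29, fermat)      # the q = 29 table entry
assert not is_pointless_quartic(3, fermat)   # [1:1:1] is a point over F_3

# The q = 13 table entry, and the quartic over F_3 displayed above.
assert is_pointless_quartic(13, lambda x, y, z: x**4 + y**4 + 2 * z**4)
F3 = lambda x, y, z: x**4 + x*y*z**2 + y**4 + y**3*z - y*z**3 + z**4
print(is_pointless_quartic(3, F3))           # True
```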
We know from the proof of Theorem \[T-genus3specific\] that there is at most one pointless genus-$3$ curve over ${\mathbb{F}}_{32}$, and its Jacobian is isomorphic to the cube of an elliptic curve whose endomorphism ring has discriminant $-7$. This suggests that we should look at twists of the reduction of the Klein quartic, and indeed we find that the curve $$(x^2 + x)^2 + (x^2 + x)(y^2 + y) + (y^2 + y)^2 + 1 = 0$$ has no points over ${\mathbb{F}}_{32}$. (This fact is noted in [@Elkies § 3.3].) For the other fields of characteristic $2$, we find examples by modifying the example for ${\mathbb{F}}_{32}$. We list the results in Table \[Tbl-quartics2\].
$q$ curve
------ --------------------------------------------------------------------
$2$ $(x^2 + xz)^2 + (x^2 + xz)(y^2 + yz) + (y^2 + yz)^2 + z^4 = 0$
$4$ $(x^2 + xz)^2 + a(x^2 + xz)(y^2 + yz) + (y^2 + yz)^2 + a^2z^4 = 0$
where $a^2 + a + 1 = 0$
$8$ $(x^2 + xz)^2 + (x^2 + xz)(y^2 + yz) + (y^2 + yz)^2 + a^3z^4 = 0$
where $a^3 + a + 1 = 0$
$16$ $(x^2 + xz)^2 + a(x^2 + xz)(y^2 + yz) + (y^2 + yz)^2 + a^7z^4 = 0$
where $a^4 + a + 1 = 0$
$32$ $(x^2 + xz)^2 + (x^2 + xz)(y^2 + yz) + (y^2 + yz)^2 + z^4 = 0$
: Examples of pointless smooth plane quartics over ${\mathbb{F}}_q$ (with $q$ even) with automorphism group containing the Klein $4$-group.[]{data-label="Tbl-quartics2"}
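Over ${\mathbb{F}}_2$ the verification is tiny, since ${\mathbb{P}}^2({\mathbb{F}}_2)$ has only seven points. The sketch below (our Python illustration; the extension-field entries of the table would require ${\mathbb{F}}_4$, ${\mathbb{F}}_8$, ${\mathbb{F}}_{16}$, and ${\mathbb{F}}_{32}$ arithmetic not shown here) checks the $q = 2$ entry:

```python
def projective_points_f2():
    """The seven points of P^2(F_2): every nonzero 0/1 triple."""
    return [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)
            if (x, y, z) != (0, 0, 0)]

def F(x, y, z):
    """The q = 2 entry: (x^2+xz)^2 + (x^2+xz)(y^2+yz) + (y^2+yz)^2 + z^4."""
    u = x * x + x * z
    v = y * y + y * z
    return (u * u + u * v + v * v + z ** 4) % 2

print(all(F(*pt) == 1 for pt in projective_points_f2()))  # True: no points
```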
We close this section by mentioning a related method of constructing pointless genus-$3$ curves. Suppose $C$ is a genus-$3$ curve over a field of characteristic not $2$, and suppose that $C$ has a pair of commuting involutions (like the curves we considered in this section). Then either $C$ is an unramified double cover of a genus-$2$ curve, or $C$ is a genus-$3$ curve of the type considered in [@HoweLeprevostPoonen § 4], that is, a genus-$3$ curve obtained by ‘gluing’ three elliptic curves together along portions of their $2$-torsion. This suggests a more direct method of constructing genus-$3$ curves with no points: We can start with three elliptic curves with few points, and try to glue them together using the construction from [@HoweLeprevostPoonen § 4]. This idea was used by the third author to construct genus-$3$ curves with many points [@Top].
Examples of pointless curves of genus $4$ {#S-examples4}
=========================================
We searched for pointless genus-$4$ curves by looking at hyperelliptic curves whose automorphism group contained the Klein $4$-group; however, we found that for $q>31$ no such curves exist. Since we need to find pointless genus-$4$ curves over ${\mathbb{F}}_q$ for every $q\le 49$, we moved on to a different family of curves with commuting involutions.
Suppose $q$ is an odd prime power and suppose $f$ and $g$ are separable cubic polynomials in ${\mathbb{F}}_q[x]$ with no factor in common. An easy ramification computation shows that the curve defined by $y^2 = f$ and $z^2 = g$ then has genus $4$. Clearly the automorphism group of this curve contains a copy of the Klein $4$-group. It is easy to check whether a curve of this form is pointless: for every value of $x$ in ${\mathbb{F}}_q$, at least one of $f(x)$ and $g(x)$ must be a nonsquare (so that there is no affine point over $x$), and exactly one of $f$ and $g$ must have a nonsquare as its coefficient of $x^3$ (so that there is no point over $x = \infty$). We found pointless curves of this form over every ${\mathbb{F}}_q$ with $q$ odd and $q\le 49$. Examples are given in Table \[Tbl-examples4\].
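The criterion just stated is easy to test by machine. The following sketch (ours, restricted for simplicity to prime $q$, with squareness tested by Euler's criterion) verifies the $q=5$ entry of Table \[Tbl-examples4\].

```python
def is_square(a, q):
    """Euler's criterion for an odd prime q; 0 counts as a square."""
    a %= q
    return a == 0 or pow(a, (q - 1) // 2, q) == 1

def is_pointless(f, g, q):
    """Test the criterion for y^2 = f(x), z^2 = g(x) over F_q (q an odd prime).
    f and g are coefficient lists [c0, c1, c2, c3]."""
    ev = lambda p, x: sum(c * x**i for i, c in enumerate(p)) % q
    # an x with both f(x) and g(x) square would give an affine point
    if any(is_square(ev(f, x), q) and is_square(ev(g, x), q) for x in range(q)):
        return False
    # no point at infinity iff exactly one leading coefficient is a nonsquare
    return (is_square(f[3], q) + is_square(g[3], q)) == 1

# q = 5 entry of the table: y^2 = x^3 - x + 2, z^2 = 2x^3 - 2x
print(is_pointless([2, -1, 0, 1], [0, -2, 0, 2], 5))  # True
```

For prime powers one would replace the modular arithmetic with finite-field arithmetic, but the logic is unchanged.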
$q$ curve
------ -------------------------- -------------------------------------
$3$ $ y^2 = x^3 - x - 1$ $ z^2 = - x^3 + x - 1$
$5$ $ y^2 = x^3 - x + 2$ $ z^2 = 2 x^3 - 2 x$
$7$ $ y^2 = x^3 - 3$ $ z^2 = 3 x^3 - 1$
$9$ $ y^2 = x^3 - x + 1 $ $ z^2 = a (x^3 - x - 1) $
where $a^2 - a - 1 = 0$
$11$ $ y^2 = x^3 - x - 3$ $ z^2 = 2 x^3 - 2 x - 5$
$13$ $ y^2 = x^3 + 1 $ $ z^2 = 2 x^3 - 5$
$17$ $ y^2 = x^3 + x $ $ z^2 = 3 x^3 - 8 x^2 - 3 x + 5 $
$19$ $ y^2 = x^3 + 2$ $ z^2 = 2 x^3 + 1$
$23$ $ y^2 = x^3 + x + 6 $ $ z^2 = 5 x^3 + 9 x^2 - 3 x + 10$
$25$ $ y^2 = x^3 + x + 1 $ $ z^2 = a(x^3 + x^2 + 2)$
where $a^2 - a + 2 = 0$
$27$ $ y^2 = x^3 - x + a^5$ $ z^2 = -x^3 + x + a^5$
where $a^3 - a + 1 = 0$
$29$ $ y^2 = x^3 + x $ $ z^2 = 2 x^3 + 12 x + 14$
$31$ $ y^2 = x^3 - 10$ $ z^2 = 3 x^3 + 9$
$37$ $ y^2 = x^3 + x + 4 $ $ z^2 = 2 x^3 - 17 x^2 + 5 x + 15 $
$41$ $ y^2 = x^3 + x + 17 $ $ z^2 = 3 x^3 - x^2 - 12 x - 16$
$43$ $ y^2 = x^3 - 9$ $ z^2 = 2 x^3 + 18$
$47$ $ y^2 = x^3 + 5 x - 12 $ $ z^2 = 5 x^3 + 2 x^2 + 19 x - 9$
$49$ $ y^2 = x^3 + 4 $ $ z^2 = a (x^3 + 2) $
where $a^2 - a + 3 = 0$
: Examples of pointless curves of genus $4$ over ${\mathbb{F}}_q$ (with $q$ odd) with automorphism group containing the Klein $4$-group.[]{data-label="Tbl-examples4"}
We mention two points of interest about curves of this form. First, if the ${\mathbb{F}}_q$-vector subspace of ${\mathbb{F}}_q[x]$ spanned by the cubic polynomials $f$ and $g$ contains the constant polynomial $1$, then the curve $C$ defined by the two equations $y^2=f$ and $z^2=g$ is trigonal: if $af + bg = 1$, then $(x,y,z)\mapsto (y,z)$ defines a degree-$3$ map from $C$ to the genus-$0$ curve $ay^2 + bz^2 = 1$. Second, if $q\equiv1\bmod 3$ and if the coefficients of $x$ and $x^2$ in $f$ and $g$ are zero, then the curve $C$ has even more automorphisms, given by multiplying $x$ by a cube root of unity. (Likewise, if $q$ is a power of $3$ and if $f$ and $g$ are both of the form $a(x^3 - x) + b$, then $x\mapsto x+1$ gives an automorphism of $C$.) When it was possible, we chose the examples in Table \[Tbl-examples4\] to have these properties. In Table \[Tbl-examples4trigonal\] we provide trigonal models for those curves in Table \[Tbl-examples4\] that admit them.
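The first point can be checked at a glance for the $q=3$ entry of Table \[Tbl-examples4\] (a small verification of ours): the two cubics there satisfy $f + g = -2 = 1$ in ${\mathbb{F}}_3$, so $a=b=1$ and the degree-$3$ map lands on the conic $y^2 + z^2 = 1$.

```python
q = 3
f = [-1, -1, 0, 1]    # x^3 - x - 1
g = [-1, 1, 0, -1]    # -x^3 + x - 1
s = [(cf + cg) % q for cf, cg in zip(f, g)]
print(s == [1, 0, 0, 0])  # True: f + g = 1, so the curve is trigonal
```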
$q$ curve liftable involutions of ${\mathbb{P}}^1$
------ --------------------------------------------- ------------------------------------------
$3$ $v^3 - v = (u^4 + 1)/(u^2 + 1)^2$ $u\mapsto -u,\quad u\mapsto 1/u$
$5$ $v^3 - v = -2(u^2 - 2)^2/(u^2 + 2)^2$ $u\mapsto -u,\quad u\mapsto 2/u$
$7$ $v^3 = 2 u^6 + 2$ $u\mapsto -u,\quad u\mapsto 1/u$
$9$ $v^3 - v = (u^4 + a^2)/(u^2 + a^5)^2$ $u\mapsto -u,\quad u\mapsto a/u$
where $a^2 - a - 1 = 0$
$11$ $v^3 - v = (3 u^4 + 4 u^2 + 3)/(u^2 + 1)^2$ $u\mapsto -u,\quad u\mapsto 1/u$
$13$ $v^3 = 4 u^6 + 6$ $u\mapsto -u,\quad u\mapsto 2/u$
$19$ $v^3 = 2 u^6 + 2$ $u\mapsto -u,\quad u\mapsto 1/u$
$27$ $v^3 - v = a^{18} (u^4 + 1)/(u^2 + 1)^2$ $u\mapsto -u,\quad u\mapsto 1/u$
where $a^3 - a + 1 = 0$
$31$ $v^3 = 5 u^6 - 11 u^4 - 11 u^2 + 5$ $u\mapsto -u,\quad u\mapsto 1/u$
$43$ $v^3 = 7 u^6 + 8 u^4 + 8 u^2 + 7$ $u\mapsto -u,\quad u\mapsto 1/u$
$49$ $v^3 = 2 u^6 + a$ $u\mapsto -u,\quad u\mapsto a^3/u$
where $a^2 - a + 3 = 0$
: Trigonal forms for some of the curves in Table \[Tbl-examples4\]. The third column gives two involutions of ${\mathbb{P}}^1$ that lift to give commuting involutions of the curve.[]{data-label="Tbl-examples4trigonal"}
It remains for us to find examples of pointless genus-$4$ curves over ${\mathbb{F}}_2, {\mathbb{F}}_4, {\mathbb{F}}_8, {\mathbb{F}}_{16},$ and ${\mathbb{F}}_{32}$.
Let $q$ be a power of $2$. An easy argument shows that a genus-$4$ hyperelliptic curve over ${\mathbb{F}}_q$ provided with an action of the Klein $4$-group must have a rational Weierstraß point, and so will not be pointless. Thus we decided simply to enumerate the genus-$4$ hyperelliptic curves (with no rational Weierstraß points) over the remaining ${\mathbb{F}}_q$ and to check for pointless curves. We found pointless hyperelliptic curves over ${\mathbb{F}}_q$ for $q\in \{2,4,8,16\}$; the examples we give in Table \[Tbl-genus4char2\] are all twists over ${\mathbb{F}}_q$ of curves that can be defined over ${\mathbb{F}}_2$.
$q$ curve
------ -------------------------------------------------------
$2$ $y^2 + y = t + (x^4 + x^3 + x^2 + x)/(x^5 + x^2 + 1)$
$4$ $y^2 + y = t + (x^3 + 1)/(x^5 + x^2 + 1)$
$8$ $y^2 + y = t + (x^4 + x^3 + x^2 + x)/(x^5 + x^2 + 1)$
$16$ $y^2 + y = t + (x^3 + 1)/(x^5 + x^2 + 1)$
: Examples of pointless genus-$4$ hyperelliptic curves over ${\mathbb{F}}_q$ (with $q$ even). On each line, the symbol $t$ refers to an arbitrary element of ${\mathbb{F}}_q$ whose trace to ${\mathbb{F}}_2$ is equal to $1$.[]{data-label="Tbl-genus4char2"}
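For $q=2$ the table entry can be checked by hand, or with the sketch below (ours): over ${\mathbb{F}}_2$ the Artin–Schreier equation $y^2+y=c$ is solvable iff $c=0$, and $t=1$ is the only trace-$1$ element.

```python
def rhs(x):
    """t + (x^4+x^3+x^2+x)/(x^5+x^2+1) at x in F_2, with t = 1."""
    num = (x**4 + x**3 + x**2 + x) % 2
    den = (x**5 + x**2 + 1) % 2
    assert den == 1           # x^5 + x^2 + 1 has no roots in F_2: no rational poles
    return (1 + num) % 2

# over F_2, y^2 + y = c has a solution iff c == 0 (Artin-Schreier)
no_affine_point = all(rhs(x) != 0 for x in (0, 1))
# above x = infinity the right-hand side tends to t = 1 (deg num < deg den),
# and y^2 + y = 1 has no solution in F_2, so there is no point there either
print(no_affine_point)  # True
```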
Our computer search also revealed that every genus-$4$ hyperelliptic curve over ${\mathbb{F}}_{32}$ has at least one rational point. So to find an example of a pointless genus-$4$ curve over ${\mathbb{F}}_{32}$, we decided to look for genus-$4$ double covers of elliptic curves $E$. Our heuristic suggested that we might have good luck finding pointless curves if $E$ had few points, but for the sake of completeness we examined every $E$ over ${\mathbb{F}}_{32}$.
We found that up to isomorphism and Galois conjugacy there are exactly two pointless genus-$4$ curves over ${\mathbb{F}}_{32}$ that are double covers of elliptic curves. The first can be defined by the equations $$\begin{aligned}
y^2 + y & = x + 1/x + 1 \\
z^2 + z & = \frac{a^7 x^4 + a^{30} x^3 y + a^{13} x^2
+ x + a^{23} x y + a^6}
{x^3 + a^{15} x^2 + x + a^{28}}\end{aligned}$$ and the second by $$\begin{aligned}
y^2 + y & = x + a^7/x \\
z^2 + z & = \frac{a^4 x^4 + a^7 x^3 y + a^3 x^3
+ a^{23} x^2 y + a^{28} x^2 + a^{28} x y + a^{16}}
{x^3 + a^{25} x^2 + a^{22} x + a^{25}},\end{aligned}$$ where $a^5 + a^2 + 1 = 0.$
: Some genus $3$ curves with many points, pp. 163–171 in [*Algorithmic Number Theory*]{} (Claus Fieker and David R. Kohel, eds.), Lecture Notes in Comp. Sci. [**2369**]{}, Springer-Verlag, Berlin, 2002.
: The Magma algebra system. I. The user language. [*J. Symbolic Comput.*]{} [**24**]{} (1997) 235–265.
: The Klein quartic in number theory, pp. 51–101 in: [*The eightfold way*]{} (Silvio Levy, ed.), Math. Sci. Res. Inst. Publ. [**35**]{}, Cambridge Univ. Press, Cambridge, 1999.
: Tables of curves with many points, [*Math. Comp.*]{} [**69**]{} (2000) 797–810. Updated versions available at [http://www.science.uva.nl/\~geer/]{}.
: Zur Theorie der abstrakten elliptischen Funktionkörper. I, II, III, [*J. Reine Angew. Math.*]{} [**175**]{} (1936) 55–62, 69–88, 193–208.
: Local-global problem for Drinfeld modules, [*J. Number Theory*]{} [**104**]{} (2004) 193–209.
: On positive definite Hermitian forms, [*Manuscripta Math.*]{} [**71**]{} (1991) 399–429.
: Improved upper bounds for the number of points on curves over finite fields, [*Ann. Inst. Fourier [(]{}Grenoble[)]{}*]{} [**53**]{} (2003) 1677–1737. [arXiv:math.NT/0207101]{}.
: Large torsion subgroups of split Jacobians of curves of genus two or three, [*Forum Math.*]{} [**12**]{} (2000) 315–364.
: Geometric methods for improving the upper bounds on the number of rational points on algebraic curves over finite fields, [*J. Algebraic Geom.*]{} [**10**]{} (2001) 19–36. [arXiv:math.AG/0104247]{}.
: The maximum or minimum number of rational points on genus three curves over finite fields, [*Compositio Math.*]{} [**134**]{} (2002) 87–111. [arXiv:math.AG/0104086]{}.
: Quintic forms over $p$-adic fields, [*J. Number Theory*]{} [**57**]{} (1996) 231–241.
: Abelian surfaces over finite fields as Jacobians, [*Experiment. Math.*]{} [**11**]{} (2002) 321–337.
: Effective versions of the Chebotarev density theorem for function fields, [*C. R. Acad. Sci. Paris Sér. I Math.*]{} [**319**]{} (1994) 523–528.
: Classification of Hermitian forms with the neighbour method, [*J. Symbolic Computation*]{} [**26**]{} (1998) 487–508.
: [*Cohomologie Galoisienne (cinquième édition, révisée et complétée)*]{}, Lecture Notes in Math. [**5**]{}, Springer-Verlag, Berlin, 1994.
: Sur le nombre des points rationnels d’une courbe algébrique sur un corps fini, [*C. R. Acad. Sci. Paris Sér. I Math.*]{} [**296**]{} (1983) 397–402; = Œuvres \[128\].
: [*Rational points on curves over finite fields*]{}, unpublished notes by Fernando Q. Gouvêa of lectures at Harvard University, 1985.
: On the Riemann hypothesis in hyperelliptic function fields, pp. 285–302 in: [*Analytic number theory*]{} (Harold G. Diamond, ed.), Proc. Sympos. Pure Math. [**24**]{}, American Mathematical Society, Providence, R.I. 1973.
: Curves of genus $3$ over small finite fields, [*Indag. Math. (N.S.)*]{} [**14**]{} (2003) 275–283.
---
abstract: 'Emergence of odd-frequency $s$-wave superconductivity is demonstrated in the two-channel Kondo lattice by means of the dynamical mean-field theory combined with the continuous-time quantum Monte Carlo method. Around half filling of the conduction bands, divergence of an odd-frequency pairing susceptibility is found, which signals instability toward the superconductivity. The corresponding order parameter is equivalent to a staggered composite-pair amplitude with even frequencies, which involves both localized spins and conduction electrons. A model wave function is constructed for the composite order with use of symmetry operations such as charge conjugation and channel rotations. Given a certain asymmetry of the conduction bands, another $s$-wave superconductivity is found that has a uniform order parameter. The Kondo effect in the presence of two channels is essential for both types of unconventional superconductivity.'
author:
- Shintaro Hoshino$^1$ and Yoshio Kuramoto$^2$
title: ' Superconductivity of Composite Particles in Two-Channel Kondo Lattice '
---
Unconventional superconductivity refers to pairing states with non-trivial symmetry in their spin and/or space-time structure. Among these states, we address the odd-frequency (OF) pairing state [@berezinskii74], which breaks gauge symmetry but has zero pairing amplitude at equal times. Possible relevance of the OF pairing to real materials was first pointed out by Berezinskii for $^3$He [@berezinskii74]. After the discovery of high-temperature superconductivity in cuprates, the OF pairing has aroused broad interest [@emery92; @balatsky92; @abrahams93; @coleman94; @zachar96; @coleman97; @vojta99; @fuseya03; @yada08; @shigeta09; @hotta09; @shigeta11; @kusunose11-2; @yanagi12; @heid95-2; @martisovits98; @martisovits00; @tanaka12; @emery93; @balatsky93; @coleman93; @schrieffer94; @coleman95; @abrahams95; @heid95; @jarrell97; @anders02; @anders02-2; @hoshino11; @belitz99; @solenov09; @kusunose11; @sakai04; @coleman99; @flint11; @shigeta13] as one of the candidate mechanisms for unconventional superconductivity.
The OF pairing state can be viewed from a different perspective; it has been recognized that OF superconductivity can alternatively be regarded as a composite pairing state with even frequencies (EF) [@emery92; @balatsky93]. It has also been suggested that OF superconductivity tends to favor spatial inhomogeneity [@coleman93; @heid95], and that a finite density of states remains at the chemical potential [@balatsky92; @coleman93].
One possible realization of OF superconductivity has been proposed in two-channel Kondo systems. Emery and Kivelson have shown for the two-channel Kondo impurity that the OF pairing susceptibility is enhanced at the impurity site [@emery92]. They have further elaborated on a variant of the two-channel Kondo lattice (TCKL) in one dimension [@emery93], and have demonstrated the divergence of the OF pairing susceptibility at zero temperature. For the TCKL in higher dimensions, microscopic calculations have been performed, however, without finding a divergent susceptibility [@jarrell97]. Another calculation for the corresponding Anderson lattice [@anders02; @anders02-2] did find a divergent susceptibility, which, however, vanishes as the system approaches the TCKL limit. So far no microscopic theory has established the OF pairing in the TCKL at finite temperature.
In this paper, we present highly accurate numerical results for the pairing susceptibility in the high-dimensional TCKL, and demonstrate that the TCKL realizes $s$-wave OF superconductivity with a staggered order parameter. We also derive the corresponding composite order parameter and explicit model wave functions, using symmetry operations such as charge conjugation and channel rotations. It is further shown that, given a certain channel asymmetry, another $s$-wave superconductivity with EF pairing occurs in the TCKL. Our key strategy is to exploit charge conjugation, which relates diagonal long-range order to off-diagonal order.
The TCKL Hamiltonian [@jarrell96] is given by $ {\cal H} = {\cal H}_0 + {\cal H}_\mu$ with $$\begin{aligned}
{\cal H}_0 &= \sum_{\bm k\alpha\sigma} \varepsilon_{\bm k}
c_{\bm k\alpha\sigma}^\dagger c_{\bm k\alpha\sigma}
+ J \sum_{i\alpha} \bm S_i \cdot \bm s_{{\rm c}i\alpha}
, \label{eqn_ham1} \\
{\cal H}_\mu&= - \mu \sum_{i\alpha\sigma}
c_{i\alpha\sigma}^\dagger c_{i\alpha\sigma}
. \label{eqn_ham2}\end{aligned}$$ The operator $c_{\bm k\alpha\sigma}$ ($c_{i\alpha\sigma}$) annihilates a conduction electron with energy $\varepsilon_{\bm k}$, channel $\alpha=1,2$ and spin $\sigma=\uparrow, \downarrow$ at wave vector $\bm k$ (site $i$). The conduction-electron spin $\bm s_{{\rm c}i\alpha} = \tfrac 1 2 \sum_{\sigma\sigma'} c^\dagger_{i\alpha\sigma} \bm \sigma_{\sigma\sigma'} c_{i\alpha\sigma'}$ couples with the local spin $\bm S_i$ antiferromagnetically (i.e. $J>0$). The chemical potential $\mu$ controls the total number of conduction electrons. Note that this model has a double SU(2) symmetry: SU(2)$_{\rm s}$ for spin and SU(2)$_{\rm c}$ for channel degrees of freedom. We assume a bipartite lattice with $\varepsilon_{\bm k} + \varepsilon_{\bm k + \bm Q} = 0$, where $\bm Q$ corresponds to a vector at edges of the Brillouin zone. This assumption makes the model invariant at half filling ($\mu=0$) under particle-hole transformations, [*i.e.*]{}, charge conjugation, as will be discussed later.
Physically, the degree of freedom described by $\bm S_i$ is interpreted as an orbital for the non-Kramers doublet system with $f^2$ configuration in Pr$^{3+}$ and U$^{4+}$ [@cox98]. The labels $\sigma$ and $\alpha$ are then interpreted as orbital and real spin, respectively. On the other hand, in Kramers systems as in Ce$^{3+}$ with $f^1$ configuration, $\bm S_i$ is regarded as a real spin of $f$ electrons. The labels $\sigma$ and $\alpha$ are then regarded as real spin and orbital, respectively. The non-Kramers doublet system has the channel (real spin) symmetry protected by the time-reversal symmetry, while in the Kramers doublet case the channel (orbital) symmetry is only approximate. In the following, we simply call $\sigma$ and $\alpha$ ‘spin’ and ‘channel’, respectively, unless stated otherwise.
We use the dynamical mean-field theory (DMFT) [@kuramoto85; @georges96] for the analysis of the TCKL, and the continuous-time quantum Monte Carlo method [@rubtsov05; @gull11] as the numerical impurity solver. We take the semi-circular density of states $\rho (\varepsilon) = (2/\pi D) \sqrt{1 - (\varepsilon/D)^2}$ with $D=1$ being the unit of energy. We have confirmed that the behaviors are qualitatively the same if we take, for example, a Gaussian density of states.
We begin with the pairing susceptibilities in the TCKL. Following the literature [@jarrell97; @anders02] we use for each pairing type the labels C and S indicating ‘channel’ and ‘spin’, and s and t indicating ‘singlet’ and ‘triplet’. We introduce the following operators dependent on imaginary time to describe possible pairings: $$\begin{aligned}
\displaystyle
O_i^{\rm CsSs}(\tau,\tau') &= \sum_{\alpha\alpha'\sigma\sigma'}
c_{i\alpha\sigma} (\tau)
\epsilon_{\alpha\alpha'} \epsilon_{\sigma\sigma'}
c_{i\alpha'\sigma'} (\tau')
, \label{eq_pair_def1} \\
\displaystyle
O_i^{\rm CsSt}(\tau,\tau') &= \sum_{\alpha\alpha'\sigma}
c_{i\alpha\sigma} (\tau)
\epsilon_{\alpha\alpha'}
c_{i\alpha'\sigma} (\tau')
, \label{eq_pair_def2} \\
\displaystyle
O_i^{\rm CtSs}(\tau,\tau') &= \sum_{\alpha\sigma\sigma'}
c_{i\alpha\sigma} (\tau)
\epsilon_{\sigma\sigma'}
c_{i\alpha\sigma'} (\tau')
, \label{eq_pair_def3} \\
\displaystyle
O_i^{\rm CtSt}(\tau,\tau') &= \sum_{\alpha\sigma}
c_{i\alpha\sigma} (\tau)
c_{i\alpha\sigma} (\tau')
, \label{eq_pair_def4}\end{aligned}$$ where $\epsilon \equiv {\mathrm{i}}\sigma^y$ is the antisymmetric unit tensor, and ${\cal A}(\tau) = {\mathrm{e}}^{\tau {\cal H}} {\cal A} {\mathrm{e}}^{-\tau {\cal H}}$. Note that $O_i^{\rm CsSs}(\tau,\tau) = O_i^{\rm CtSt}(\tau,\tau) = 0$ due to the Pauli principle, meaning that these combinations cannot form equal-time pairings.
In order to calculate susceptibilities, we introduce the two-particle Green function by $
\chi^{\ell}_{ij} (\tau_1, \tau_2, \tau_3, \tau_4) = \langle T_\tau
O^\ell_i (-\tau_2, -\tau_1)^\dagger O^\ell_j (\tau_3, \tau_4)
\rangle
$ where $T_\tau$ is the time-ordering operator, and $\ell$ represents one of the labels in Eqs. (\[eq\_pair\_def1\]–\[eq\_pair\_def4\]). The Fourier transform is defined by $$\begin{aligned}
\chi^{\ell}_{\bm q} ({\mathrm{i}}\varepsilon_n, {\mathrm{i}}\varepsilon_{n'})
&= \frac{1}{N\beta^2} \sum_{ij}\int_0^\beta \hspace{-2mm} {\mathrm{d}}\tau_1 \cdots {\mathrm{d}}\tau_4
\ \chi^{\ell}_{ij} (\tau_1, \tau_2, \tau_3, \tau_4)
\nonumber \\
&\ \ \times
{\mathrm{e}}^{-{\mathrm{i}}\bm q \cdot (\bm R_i - \bm R_j)}
{\mathrm{e}}^{{\mathrm{i}}\varepsilon_n (\tau_2 - \tau_1)}
{\mathrm{e}}^{{\mathrm{i}}\varepsilon_{n'} (\tau_4 - \tau_3)}
,\end{aligned}$$ with $\varepsilon_n = (2n+1)\pi T$. Using this quantity, we define the EF and OF pairing susceptibilities $\chi_{\bm q}^{\ell}$ with $\ell\rightarrow \ell_{\rm EF}, \ell_{\rm OF}$ by $$\begin{aligned}
\chi_{\bm q}^{\ell_{\rm EF}} &= \frac{1}{\beta} \sum_{nn'}
\chi^{\ell_{\rm EF}}_{\bm q} ({\mathrm{i}}\varepsilon_n, {\mathrm{i}}\varepsilon_{n'})
, \label{eq_even_suscep}
\\
\chi_{\bm q}^{\ell_{\rm OF}} &= \frac{1}{\beta} \sum_{nn'} g_n g_{n'}
\chi^{\ell_{\rm OF}}_{\bm q} ({\mathrm{i}}\varepsilon_n, {\mathrm{i}}\varepsilon_{n'})
, \label{eq_odd_suscep}\end{aligned}$$ where $\ell_{\rm EF}$ denotes CsSt or CtSs, while $\ell_{\rm OF}$ denotes CsSs or CtSt. The form factor is defined by $g_n = {\rm sgn }\, \varepsilon_n$ [@jarrell97; @anders02-2; @sakai04]. Namely, we extract the EF part for CsSt and CtSs, and the OF part for CsSs and CtSt. Equation (\[eq\_even\_suscep\]) is the usual susceptibility and must be positive [@freericks93]. On the other hand, the OF susceptibility given by Eq. (\[eq\_odd\_suscep\]) is no longer positive definite due to the presence of $g_n$, but still signals the instability toward the pairing state by its divergence [@hoshino11]. The critical temperature thus obtained is insensitive to the choice of the form factor provided that $g_n$ is odd in $\varepsilon_n$.
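The role of the form factor can be illustrated with a toy amplitude (a sketch of ours, not a calculation from this work): for a pair amplitude odd in $\varepsilon_n$, the unweighted Matsubara sum of Eq. (\[eq\_even\_suscep\]) cancels identically, whereas the weighted sum of Eq. (\[eq\_odd\_suscep\]) does not, which is why the OF component is invisible without $g_n$.

```python
import numpy as np

T = 0.1
n = np.arange(-50, 50)
eps = (2 * n + 1) * np.pi * T      # fermionic Matsubara frequencies
F = eps / (eps**2 + 1.0)           # toy odd-frequency pair amplitude
g = np.sign(eps)                   # form factor g_n = sgn(eps_n)

print(abs(F.sum()) < 1e-12)        # True: unweighted sum cancels pairwise
print((g * F).sum() > 0.0)         # True: g_n exposes the OF component
```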
![ (Color online) (a) Inverse susceptibilities for EF and OF pairings with uniform (F) or staggered (AF) order. (b) Phase diagram of the TCKL in the plane of filling ($n_{\rm c}$) and temperature ($T$) at $J=0.8$. The dotted lines with blank symbols show referential instability points where another susceptibility has already diverged at higher temperature. []{data-label="fig_phase"}](phase7.eps){width="75mm"}
Figure \[fig\_phase\](a) shows the temperature dependence of $\chi^\ell_{\bm q}$ at $J=0.8$ and $n_{\rm c}=1.5$ with $n_{\rm c}$ being the average number of conduction electrons per site. Here we consider the two ordering vectors $\bm q = \bm 0$ and $\bm q = \bm Q$, which we call ferro (F) and antiferro (AF), respectively. Among the eight susceptibilities, only the one with AF-CsSs diverges at $T_{\rm sc} \simeq 0.024$ from the negative side, signaling the onset of the OF superconductivity with the ordering vector $\bm Q$. We have confirmed that similar behaviors are obtained for other parameter values such as $J=0.6$ and $J=1.0$. Since the normal state of the TCKL is a non-Fermi liquid state as seen in the electrical resistivity [@jarrell96], the present system becomes superconducting directly from the non-Fermi liquid.
Together with the diagonal orders that have been obtained in our previous study [@hoshino13], the phase diagram of the TCKL is completed as shown in Fig. \[fig\_phase\](b). Here the diagonal orders are characterized by the vector operators $$\begin{aligned}
& \bm S (\bm q) = \sum_i \bm S_i {\mathrm{e}}^{-{\mathrm{i}}\bm q \cdot \bm R_i}
, \\
& \bm \tau_{\rm c} (\bm q) =
\sum_{i\alpha\alpha'\sigma} c^\dagger_{i\alpha\sigma} \bm \sigma_{\alpha\alpha'} c_{i\alpha'\sigma}
{\mathrm{e}}^{-{\mathrm{i}}\bm q \cdot \bm R_i}
, \\
& \bm \Psi (\bm q) = \sum_{i\alpha\alpha'\sigma\sigma'}
c^\dagger_{i\alpha\sigma} \bm \sigma_{\alpha\alpha'} (\bm S_i \cdot \bm \sigma_{\sigma\sigma'}) c_{i\alpha'\sigma'}
{\mathrm{e}}^{-{\mathrm{i}}\bm q \cdot \bm R_i}
,
\label{Psi}\end{aligned}$$ which describe spin ($\bm S$), channel ($\bm \tau_{\rm c}$), and composite ($\bm \Psi $) orders, respectively.
The transition temperature of the superconducting AF-CsSs order is lower than that of the AF spin order at $n_{\rm c}=2$ (half filling), as shown in Fig. \[fig\_phase\](b). As $n_{\rm c}$ decreases, however, the AF-CsSs order comes to dominate the AF spin order. Since the AF-CsSs order is best visualized at half filling, we consider this state mainly at half filling, neglecting the AF spin order.
We derive the composite order parameter corresponding to the (odd-frequency) AF-CsSs phase by combining the particle-hole and channel-rotation symmetries. At half filling, the transition temperatures for the AF-CsSs and F channel $[\bm \Psi(\bm 0)]$ orders are the same within the numerical accuracy as seen in Fig. \[fig\_phase\](b), which indicates a degeneracy between these two orders. In fact, these two orders are obtained from each other by symmetry operations at half filling. To demonstrate this, we introduce a particle-hole transformation $\mathscr{P}_2$ that acts only on channel $\alpha=2$ as $$\begin{aligned}
\mathscr{P}_2c_{i2\sigma} \mathscr{P}_2^{-1} &=
\sum_{\sigma'} \epsilon_{\sigma\sigma'} c^\dagger_{i2\sigma'}
{\mathrm{e}}^{{\mathrm{i}}\bm Q \cdot \bm R_i}.
\label{ph}\end{aligned}$$ On the other hand, $c_{i1\sigma}$ and $\bm S_i$ are not affected by $\mathscr{P}_2$. The half-filled Hamiltonian ${\cal H}_0$ and the composite quantity $\Psi^z (\bm 0)$ for the F-channel order are invariant under this transformation. Here the phase factor in Eq. (\[ph\]) is necessary to make the kinetic energy invariant under $\mathscr{P}_2$.
By contrast, the transverse components $\Psi^\pm (\bm 0) = \Psi^x (\bm 0) \pm {\mathrm{i}}\Psi^y (\bm 0)$ in Eq.(\[Psi\]) are affected by $\mathscr{P}_2$. The explicit form of $\Phi(\bm Q)^\dagger \equiv \mathscr{P}_2 \Psi^+ (\bm 0) \mathscr{P}_2^{-1}$ is given by $$\begin{aligned}
\Phi(\bm Q)^\dagger
= \sum_{i\alpha\alpha'\sigma\sigma'}
c^\dagger_{i\alpha\sigma} \epsilon_{\alpha\alpha'} [\bm S_i \cdot (\bm \sigma\epsilon)_{\sigma\sigma'}] c^\dagger_{i\alpha'\sigma'}
{\mathrm{e}}^{{\mathrm{i}}\bm Q \cdot \bm R_i}
. \label{eqn_trans_composite}\end{aligned}$$ This composite quantity gives the EF order parameter corresponding to the AF-CsSs phase. Thus the AF-CsSs and F-channel orders are exactly degenerate at half filling. This is also interpreted as a reflection of the SO(5) symmetry at $\mu=0$ [@affleck92; @hattori12]. The degeneracy is lifted by the chemical potential as shown in Fig. \[fig\_phase\](b).
Another form of the order parameter can be constructed that involves conduction electrons only. The simplest derivation is to start again from the F-channel order, and apply $\mathscr{P}_2$. In the F-channel phase with $\Psi^z (\bm 0)$, the conduction electrons with $\alpha=1$ are nearly free, while the ones with $\alpha=2$ form the Kondo insulator at half filling [@hoshino11]. Hence the difference $
\sum_{\bm k \sigma} \varepsilon_{\bm k} \langle c^\dagger_{\bm k1\sigma}c_{\bm k 1 \sigma} - c^\dagger_{\bm k2\sigma}c_{\bm k 2 \sigma}\rangle
$ of the kinetic energies between channels arises, which can be regarded as a secondary order parameter [@nourafkan08; @hoshino13]. We write the quantity in the SU(2)$_{\rm c}$ symmetric form as $$\begin{aligned}
\bm \psi_{\rm c} (\bm 0) = \sum_{\bm k \alpha \alpha' \sigma} \varepsilon_{\bm k}
c^\dagger_{\bm k\alpha\sigma} \bm \sigma_{\alpha\alpha'} c_{\bm k\alpha'\sigma}
. \label{eqn_kinetic}\end{aligned}$$ Performing the particle-hole transformation, we obtain $\phi_{\rm c} (\bm Q)^\dagger \equiv \mathscr{P}_2 \psi^+_{\rm c} (\bm 0) \mathscr{P}_2^{-1}$ as $$\begin{aligned}
\phi_{\rm c} (\bm Q)^\dagger
= \sum_{\bm k\alpha\alpha'\sigma\sigma'} \varepsilon_{\bm k}
c^\dagger_{\bm k\alpha\sigma}
\epsilon_{\alpha\alpha'} \epsilon_{\sigma\sigma'}
c^\dagger_{-\bm k-\bm Q, \alpha'\sigma'}
. \label{eqn_second_pair}\end{aligned}$$ Note that this expression is similar to the so-called $\eta$ pairing [@yang89]. However, a difference lies in the form factor $\varepsilon_{\bm k}$ present in Eq. (\[eqn\_second\_pair\]). Direct calculation shows that $\Phi (\bm Q)$ and $\phi_{\rm c} (\bm Q)$ are related to the time-derivative $\partial O_i^{\rm CsSs} (\tau,0) / \partial \tau |_{\tau=0}$ of the OF order given by Eq. (\[eq\_pair\_def1\]). If one were to investigate the instability toward superconductivity using the EF susceptibility for $\Phi(\bm Q)$ or $\phi_{\rm c} (\bm Q)$, explicit calculation of the correlation function would be much more tedious. Hence the OF susceptibility defined in Eq. (\[eq\_odd\_suscep\]) provides a convenient tool serving the same purpose.
![ Local configuration of electrons for disordered non-Fermi liquid state (left), and diagonally ($\Psi^z$) or off-diagonally ($\Phi$) ordered state (right). The ordered states are described by Eqs. (\[eqn\_wf\_chan\]) and (\[eqn\_wf\_super\]). []{data-label="fig_screen"}](screening2.eps){width="80mm"}
Let us proceed to construct wave functions for the ordered phases. A simple form, though crude, should be useful to visualize a composite order. Starting from the F-channel state at half filling, we obtain a superconducting state by symmetry operations on the wave function. According to Ref. [@hoshino11], the F channel state described by $\Psi^z(\bm 0) = 2\sum_i \bm S_i \cdot (\bm s_{{\rm c}i1} - \bm s_{{\rm c}i2})$ consists of itinerant electrons for $\alpha=1$ and the Kondo singlets for $\alpha = 2$. We introduce a simplified wave function describing this F channel state as $$\begin{aligned}
|\Psi^z (\bm 0)\rangle = \prod_{\bm k\in{\rm HBZ},\sigma} c^\dagger_{\bm k1\sigma}
\prod_{i}|{\rm KS}\rangle_{i2}
, \label{eqn_wf_chan}\end{aligned}$$ where ‘HBZ’ denotes the half Brillouin zone. The $\alpha=1$ part labeled by $\bm k$ represents free conduction electrons at half filling. The local Kondo singlet state with channel $\alpha$ is written as $|{\rm KS}\rangle_{i\alpha} = (c^\dagger_{i\alpha\uparrow} |\downarrow\rangle_i - c^\dagger_{i\alpha\downarrow} |\uparrow \rangle_i )/\sqrt 2$ with $|\sigma \rangle_i$ being the localized-spin state at site $i$. The local picture of Eq. (\[eqn\_wf\_chan\]) is illustrated in the upper-right part of Fig. \[fig\_screen\].
To obtain the off-diagonal order, we combine $\mathscr{P}_2$ with a channel mixing unitary transformation $\mathscr{R}$ defined by $$\begin{aligned}
\mathscr{R}c_{i1(2)\sigma} \mathscr{R}^{-1} &=
\left[ c_{i1\sigma} +(-) c_{i2\sigma} \right] / \sqrt 2
,\end{aligned}$$ which rotates from $z$ to $x$ axis in channel space. In view of $\mathscr{P}_2\mathscr{R} {\cal H}_0 (\mathscr{P}_2\mathscr{R})^{-1}= {\cal H}_0$ and $\mathscr{P}_2\mathscr{R} {\Psi^z} (\bm 0) (\mathscr{P}_2\mathscr{R})^{-1}= \frac 1 2 [\Phi (\bm Q) + \Phi (\bm Q)^\dagger]$, it is reasonable to postulate a model wave function $|\Phi (\bm Q) \rangle \equiv \mathscr{P}_2\mathscr{R}|\Psi^z (\bm 0) \rangle$ for the AF-CsSs state as $$\begin{aligned}
&|\Phi (\bm Q) \rangle = \prod_{\bm k\in{\rm HBZ},\sigma} \frac{1}{\sqrt 2}
\left(
c^\dagger_{\bm k1\sigma} +
\sum_{\sigma'} \epsilon_{\sigma\sigma'} c_{-\bm k-\bm Q, 2\sigma'} \right)
\nonumber \\
&\ \ \ \ \times
\prod_{i} \frac{1}{\sqrt 2}
\left(
c^\dagger_{i2\uparrow}c^\dagger_{i2\downarrow}|{\rm KS}\rangle_{i1}
+
{\mathrm{e}}^{{\mathrm{i}}\bm Q \cdot \bm R_i} |{\rm KS}\rangle_{i2}
\right)
. \label{eqn_wf_super}\end{aligned}$$ Note that $\mathscr{P}_2$ transforms the vacant state into the state doubly occupied by electrons with channel $\alpha=2$ at each site. As seen in the second line of Eq. (\[eqn\_wf\_super\]), the states with one and three local conduction electrons per site are superposed, indicating the broken gauge symmetry. As is clear from this expression, both channels $\alpha=1, 2$ participate in forming the local spin-singlet state. The lower-right part of Fig. \[fig\_screen\] illustrates this local state. The first line of Eq. (\[eqn\_wf\_super\]) includes the Bogoliubov quasi-particles composed of a particle at $\bm k$ and a hole at $-\bm k-\bm Q$, which in general have a finite density of states at the Fermi level.
We now consider the region around $n_{\rm c}=1$ in Fig. \[fig\_phase\](b) where the AF-channel order $\bm \tau_{\rm c} (\bm Q)$ is dominant. By the particle-hole transformation $\mathscr{P}_2$, we can relate the electronic state at $n_{\rm c}=1$ to the half-filled case under an asymmetric channel potential. We introduce the new Hamiltonian
\tilde {\cal H} \equiv \mathscr{P}_2{\cal H} \mathscr{P}_2^{-1} = {\cal H}_0 -\mu \, \tau_{\rm c}^z (\bm 0)
.\end{aligned}$$ Starting from $n_{\rm c}=1$ for $\cal H$, we end up in $\tilde{\cal H}$ with $1/2$ electron per site for $\alpha=1$ and $3/2$ electrons for $\alpha=2$, so that the total is $\tilde{n}_{\rm c}=2$. Namely, the hole-doped TCKL described by ${\cal H}$ is transformed into the TCKL at half filling under a channel field. Physically, $\tilde {\cal H}$ simulates systems that have two conduction electrons per site but inequivalent conduction bands. This situation may arise in a Kondo lattice with Kramers degeneracy, since the channels in this case correspond to spatial orbitals that have no degeneracy in general.
On the other hand, the AF-channel order at $n_{\rm c} = 1$ is transformed as $$\begin{aligned}
\mathscr{P}_2 \tau_{\rm c}^z (\bm Q) \mathscr{P}_2^{-1} &=
\sum_{i\alpha\sigma} c^\dagger_{i\alpha\sigma} c_{i \alpha\sigma}
{\mathrm{e}}^{{\mathrm{i}}\bm Q \cdot \bm R_i}
, \label{eqn_cdw}
\\
\mathscr{P}_2 \tau_{\rm c}^+ (\bm Q) \mathscr{P}_2^{-1} &=
\sum_{i\alpha\sigma\sigma'} c^\dagger_{i\alpha\sigma} \epsilon _{\sigma\sigma'} c^\dagger_{i \alpha\sigma'}
\equiv p_{\rm c}(\bm 0)^\dagger
. \label{eqn_s_wave_sc}\end{aligned}$$ Namely, the AF-channel order in ${\cal H}$ corresponds to the charge density wave given by Eq. (\[eqn\_cdw\]), as well as the $s$-wave superconductivity $p_{\rm c}(\bm 0)$ given by Eq. (\[eqn\_s\_wave\_sc\]) in the new Hamiltonian $\tilde {\cal H}$. These two orders are degenerate at $\tilde{n}_{\rm c}=2$ by symmetry. We have numerically confirmed that this degeneracy is lifted for $\tilde{n}_{\rm c}\neq 2$, and that the $s$-wave superconductivity $p_{\rm c} (\bm 0)$ is the more stable. This $s$-wave pairing can be understood from the strong-coupling limit, following the interpretation of the AF-channel order in ${\cal H}$ [@cox98; @schauerte05]. Then, the local image of the $p_{\rm c}(\bm 0)$ state is given by the lower-right panel of Fig. \[fig\_screen\] without the Bogoliubov quasi-particle part. In contrast to the AF-CsSs state, the superconductivity with $p_{\rm c}(\bm 0)$ has a full gap in the density of states.
In a similar manner, we can show that the off-diagonal AF-CsSs order in ${\cal H}$ is transformed into the diagonal composite order $\Psi^{\pm} (\bm 0)$ in $\tilde {\cal H}$. The spin order $\bm S (\bm q)$ remains the same after the transformation. Thus the phase diagram for $\tilde {\cal H}$ is obtained without further calculation if we replace $n_{\rm c}$ by $\langle \tau_{\rm c}^z(\bm 0)\rangle$ in Fig. \[fig\_phase\](b).
Finally, let us briefly discuss the possible relevance of our results to real physical systems. The primary candidate for the non-Kramers doublet system with the order $\Phi (\bm Q)$ is UBe$_{13}$, which was first proposed as a two-channel Kondo system by Cox [@cox87]. The superconductivity in UBe$_{13}$ appears directly from the non-Fermi-liquid state [@ott83], which is consistent with what we have found in the TCKL. For Kramers-doublet systems, on the other hand, a channel (orbital) asymmetry is inevitable. The modified Hamiltonian $\tilde {\cal H}$ then seems relevant to describe the two-orbital Kondo lattice. A recent specific-heat measurement on CeCu$_2$Si$_2$ [@kittaka13] reported a full-gap superconductivity reminiscent of multiband superconductivity. Hence we suggest possible relevance of the $p_{\rm c}(\bm 0)$ state to CeCu$_2$Si$_2$.
In conclusion, taking the TCKL, we have demonstrated instability of the non-Fermi liquid state toward the OF $s$-wave superconductivity with finite center-of-mass momentum, and with channel-singlet and spin-singlet pairing. The OF pairing state is alternatively characterized by an EF composite order. We have further derived another EF $s$-wave superconductivity with the uniform order parameter by considering asymmetry in the channels. Further studies inside the superconducting phases will provide a better understanding of the unconventional superconductivity.
We are grateful to Yusuke Kato for stimulating discussions and valuable comments on our paper. We also appreciate fruitful discussions with Kazumasa Hattori, Hiroaki Kusunose and Youichi Yanase. S.H. acknowledges financial support from the Japan Society for the Promotion of Science.
[99]{}
V. L. Berezinskii: Pis’ma Zh. Eksp. Teor. Fiz. [**20**]{} (1974) 628 \[JETP Lett. [**20**]{} (1974) 287\].
A. Balatsky and E. Abrahams: Phys. Rev. B [**45**]{} (1992) 13125. V. J. Emery and S. Kivelson: Phys. Rev. B [**46**]{} (1992) 10812. V. J. Emery and S. A. Kivelson: Phys. Rev. Lett. [**71**]{} (1993) 3701. A. V. Balatsky and J. Bonca: Phys. Rev. B [**48**]{} (1993) 7445. E. Abrahams, A. Balatsky, J. R. Schrieffer and P. B. Allen: Phys. Rev. B [**47**]{} (1993) 513. P. Coleman, E. Miranda and A. Tsvelik: Phys. Rev. Lett. [**70**]{} (1993) 2960. J. R. Schrieffer, A. V. Balatsky, E. Abrahams and D. J. Scalapino: J. Supercond. [**3**]{} (1994) 501. P. Coleman, E. Miranda and A. Tsvelik: Phys. Rev. B [**49**]{} (1994) 8955. R. Heid: Z. Phys. B [**99**]{} (1995) 15. P. Coleman, E. Miranda and A. Tsvelik: Phys. Rev. Lett. [**74**]{} (1995) 1653. E. Abrahams, A. Balatsky, D. J. Scalapino and J. R. Schrieffer: Phys. Rev. B [**52**]{} (1995) 1271. R. Heid, Y. B. Bazaliy, V. Martisovits and D. L. Cox: Phys. Rev. Lett. [**74**]{} (1995) 2571. V. Martisovits and D. L. Cox: Phys. Rev. B [**57**]{} (1998) 7466. V. Martisovits, G. Zaránd and D. L. Cox: Phys. Rev. Lett. [**84**]{} (2000) 5872. O. Zachar, S. A. Kivelson and V. J. Emery: Phys. Rev. Lett. [**77**]{} (1996) 1342. M. Jarrell, H. Pang and D. L. Cox: Phys. Rev. Lett. [**78**]{} (1997) 1996. P. Coleman, A. Georges and A. M. Tsvelik: J. Phys.: Condens. Matter [**9**]{} (1997) 345. P. Coleman, A. M. Tsvelik, N. Andrei and H. Y. Kee: Phys. Rev. B [**60**]{} (1999) 3608. M. Vojta and E. Dagotto: Phys. Rev. B [**59**]{} (1999) 713(R). D. Belitz and T. R. Kirkpatrick: Phys. Rev. B [**60**]{} (1999) 3485. F. B. Anders: Phys. Rev. B [**66**]{} (2002) 020504(R). F. B. Anders: Eur. Phys. J. B [**28**]{} (2002) 9. Y. Fuseya, H. Kohno and K. Miyake: J. Phys. Soc. Jpn. [**72**]{} (2003) 2914. S. Sakai, R. Arita and H. Aoki: Phys. Rev. B [**70**]{} (2004) 172504. K. Yada, S. Onari, Y. Tanaka and K. Miyake: arXiv:0806.4241 (2008). K. Shigeta, S. Onari, K. Yada and Y. Tanaka: Phys. Rev. B [**79**]{} (2009) 174507. T. Hotta: J. Phys. Soc. 
Jpn. [**78**]{} (2009) 123710. D. Solenov, I. Martin and D. Mozyrsky: Phys. Rev. B [**79**]{} (2009) 132502. K. Shigeta, Y. Tanaka, K. Kuroki, S. Onari and H. Aizawa: Phys. Rev. B [**83**]{} (2011) 140509(R). H. Kusunose, Y. Fuseya and K. Miyake: J. Phys. Soc. Jpn. [**80**]{} (2011) 054702. H. Kusunose, Y. Fuseya and K. Miyake: J. Phys. Soc. Jpn. [**80**]{} (2011) 044711. S. Hoshino, J. Otsuki and Y. Kuramoto: Phys. Rev. Lett. [**107**]{} (2011) 247202. R. Flint, A. H. Nevidomskyy and P. Coleman: Phys. Rev. B [**84**]{} (2011) 064514. For a review, see Y. Tanaka, M. Sato and N. Nagaosa: J. Phys. Soc. Jpn. [**81**]{} (2012) 011013. Y. Yanagi, Y. Yamashita and K. Ueda: J. Phys. Soc. Jpn. [**81**]{} (2012) 123701. K. Shigeta, S. Onari and Y. Tanaka: J. Phys. Soc. Jpn.[**82**]{} (2013) 104702.
M. Jarrell, H. Pang, D. L. Cox and K. H. Luk: Phys. Rev. Lett. [**77**]{} (1996) 1612.
For a review, see D. L. Cox and A. Zawadowski: Adv. Phys. [**47**]{} (1998) 599.
Y. Kuramoto: [*Theory of Heavy Fermions and Valence Fluctuations*]{}, Eds. T. Kasuya and T. Saso (Springer, 1985) p.152. For a review, see A. Georges, G. Kotliar, W. Krauth and M. J. Rozenberg: Rev. Mod. Phys. [**68**]{} (1996) 13. A. N. Rubtsov, V. V. Savkin and A. I. Lichtenstein: Phys. Rev. B [**72**]{} (2005) 035122. For a review, see E. Gull, A. J. Millis, A. I. Lichtenstein, A. N. Rubtsov, M. Troyer and P. Werner: Rev. Mod. Phys. [**83**]{} (2011) 349.
J. K. Freericks, M. Jarrell and D. J. Scalapino: Phys. Rev. B [**48**]{} (1993) 6302.
S. Hoshino, J. Otsuki and Y. Kuramoto: J. Phys. Soc. Jpn. [**82**]{} (2013) 044707.
I. Affleck, A. W. W. Ludwig, H.-B. Pang and D. L. Cox: Phys. Rev. B [**45**]{} (1992) 7918. K. Hattori: Phys. Rev. B [**85**]{} (2012) 214411.
R. Nourafkan and N. Nafari: J. Phys. Condens. Matter [**20**]{} (2008) 255231.
C. N. Yang: Phys. Rev. Lett. [**63**]{} (1989) 2144.
T. Schauerte, D. L. Cox, R. M. Noack, P. G. J. van Dongen and C. D. Batista: Phys. Rev. Lett. [**94**]{} (2005) 147201.
H. R. Ott, H. Rudigier, Z. Fisk and J. L. Smith: Phys. Rev. Lett. [**50**]{} (1983) 1595. D. L. Cox: Phys. Rev. Lett. [**59**]{} (1987) 1240.
S. Kittaka, Y. Aoki, Y. Shimura, T. Sakakibara, S. Seiro, C. Geibel, F. Steglich, H. Ikeda and K. Machida: arXiv:1307.3499 (2013).
---
abstract: 'Dark matter annihilation or decay could have a significant impact on the ionization and thermal history of the universe. In this paper, we study the potential contribution of dark matter annihilation ($s$-wave- or $p$-wave-dominated) or decay to cosmic reionization, via the production of electrons, positrons and photons. We map out the possible perturbations to the ionization and thermal histories of the universe due to dark matter processes, over a broad range of velocity-averaged annihilation cross-sections/decay lifetimes and dark matter masses. We have employed recent numerical studies of the efficiency with which annihilation/decay products induce heating and ionization in the intergalactic medium, and in this work extended them down to a redshift of $1+z = 4$ for two different reionization scenarios. We also improve on earlier studies by using the results of detailed structure formation models of dark matter haloes and subhaloes that are consistent with up-to-date $N$-body simulations, with estimates on the uncertainties that originate from the smallest scales. We find that for dark matter models that are consistent with experimental constraints, a contribution of more than 10% to the ionization fraction at reionization is disallowed for all annihilation scenarios. Such a contribution is possible only for decays into electron/positron pairs, for light dark matter with mass $m_\chi \lesssim \SI{100}{MeV}$, and a decay lifetime $\tau_\chi \sim 10^{24} - 10^{25}\SI{}{s}$.'
author:
- Hongwan Liu
- 'Tracy R. Slatyer'
- 'Jesús Zavala[^1]'
bibliography:
- 'ionization\_jzf\_hl.bib'
title: 'The Darkest Hour Before Dawn: Contributions to Cosmic Reionization from Dark Matter Annihilation and Decay'
---
Introduction {#sec:Introduction}
============
The epoch of reionization and the emergence of the universe from the cosmic dark ages is a subject of intense study in modern cosmology. As baryonic matter began to collapse around initial fluctuations in the dark matter (DM) density seeded by inflation, the earliest galaxies in our universe began to form. These structures, perhaps accompanied by other sources, eventually began to emit ionizing radiation, creating local patches of fully ionized hydrogen gas around them. These patches ultimately grew to encompass the entire universe, leading to the fully ionized intergalactic medium (IGM) that we observe today.
While the process of reionization is broadly understood, the exact details of how and when reionization occurred are still somewhat unclear. Quasars and the earliest stars certainly played a part in reionization, but their relative energy contributions to the process are still a matter of ongoing research. Some studies have found [@Fan2001] that a significant population of dim and unobserved quasars must be present in order for them to completely reionize the universe. Similar conclusions have been drawn for star-forming galaxies [@Robertson2013]. This uncertainty has resulted in some interest in other sources of energy that might contribute to reionization.
DM provides a particularly compelling candidate, and has been considered several times in the literature. Many models allow DM to annihilate or decay into Standard Model particles, which in turn can deposit energy into the IGM through ionization, heating or other processes. The annihilation rate, which scales as the square of the density, rises substantially with the onset of structure formation and the collapse of DM into dense haloes, potentially yielding a large energy injection in the reionization epoch.
Our current knowledge of reionization can already place interesting constraints on DM properties. Measurements of the optical depth and of the IGM temperature placed strong constraints on DM models [@Cirelli:2009bb] that could generate the cosmic ray excesses observed by PAMELA [@Adriani:2008zr] and Fermi+HESS [@Abdo:2009zk; @Collaboration:2008aaa; @Aharonian:2009ah]. IGM temperature data as well as CMB power spectrum measurements can also be used to constrain the properties of $p$-wave annihilating and decaying DM [@Diamanti2014]. More recently, it has been shown that with improved measurements of the optical depth to the surface of last scattering and near-future probes of the cosmic ionization history, it should be possible to set new and significant constraints on the properties of annihilating or decaying DM [@Kaurov2015].
Turning the question around, the potential role that DM may have played in reionization has also been broadly explored. Earlier papers in the literature were able to find possible scenarios in which annihilating DM could contribute significantly to reionization, once structure formation was taken into account [@Chuzhoy2008; @Natarajan:2008pk]. Subsequently, [@Belikov:2009qx] included the important effect of inverse Compton scattering off the cosmic microwave background (CMB) photons, and showed that weakly interacting massive particle (WIMP) DM candidates could play a dominant role in reionization. More recently, studies of $s$-wave annihilation of dark matter using an analytic description for the boost to the DM density during structure formation found that an unrealistic structure formation boost to the annihilation rates or an overly large cross-section was required for a DM-dominated reionization scenario consistent with existing experimental results from the CMB [@Poulin2015; @Lopez-Honorez:2013lcm]. Multiple authors [@Mapelli:2006ej; @Hansen:2003yj; @Kasuya2004] have also shown that a significant contribution from decaying DM to reionization in a manner consistent with WMAP results is possible using specific DM decay rates and products.
In this paper, we examine the potential contribution of dark matter toward reionizing the universe, but improve on previous results in four crucial ways:
1. We consider an extremely wide range of DM masses, from 10 keV to TeV scales, and rather than selecting specific annihilation/decay channels, we consider the impact of electrons, positrons and photons injected at arbitrary energies. This allows us to place general, model-independent constraints on DM annihilation or decay, beyond the WIMP paradigm;
2. In addition to $s$-wave annihilation, we consider energy injection into the IGM through $p$-wave annihilation and decay. Energy injections in these scenarios have a different dependence on redshift and on the details of structure formation compared to the case of $s$-wave annihilation: consequently, different constraints dominate. We improve on these earlier results by performing a more accurate calculation of the energy injection/deposition rates and by taking into account the relevant constraints in each energy injection channel;
3. The details of structure formation and its uncertainties are critical in determining the $s$-wave and $p$-wave annihilation rates [@Mack2014]. We use a detailed and up-to-date prescription of structure formation for our calculations, including the contribution of substructure in haloes (previous studies on substructure include [@Bartels:2015uba; @Moline:2016pbm]). By calculating the boost factor to DM annihilation assuming two different halo profiles (consistently applied to both haloes and subhaloes) as well as the difference to the boost factor that results from including substructure effects, these results also allow us to estimate the uncertainties associated with structure formation, including uncertainties related to the subhalo boost factor;
4. We use the latest results presented in [@Slatyer2015] to determine how energy injection from annihilations or decays is eventually deposited into the IGM via ionization and heating. We have extended the code to be applicable even when the universe is completely ionized, allowing us to determine how energy is deposited into the IGM at redshifts below $1+z=10$ (the previous lower limit for the code) assuming different reionization scenarios. This improvement allows us to use astrophysical constraints from $z \lesssim 6$ with confidence, and to estimate the sensitivity of our constraints to the details of the (re)ionization history.
Our paper is structured as follows: in Section \[sec:ExptConstraints\], we will review the main existing results that will be used to set constraints on the DM contribution to reionization. Section \[sec:EnergyInjection\] gives a brief overview of energy injection from $s$-wave annihilation, $p$-wave annihilation and decays, for an unclustered/homogeneous distribution of DM. Our structure formation prescription is detailed in Section \[sec:StructureFormation\], while Section \[sec:fz\] explains how we determine the heating and ionization deposited to the IGM, given an energy injection history and a structure formation model. Section \[sec:FreeEleFrac\] outlines the three-level atom model for hydrogen used to determine the ionization and IGM temperature history from the energy deposition history. Finally, Section \[sec:Constraints\] shows our derived constraints for each of the DM processes considered here, with our conclusions following in Section \[sec:Conclusion\].
Throughout this paper, we make use of the central values for the cosmological parameters derived from the TT,TE,EE+lowP likelihood of the Planck 2015 results [@PlanckCollaboration2015]. This is obtained from a combination of the measured TT, TE and EE CMB spectra for $l \geq 30$ and a temperature and polarization pixel-based likelihood for $l<30$. Specifically, our parameter choices are $H_0 = \SI{67.27}{km s^{-1} Mpc^{-1}}$, $\Omega_m = 0.3156$, $\Omega_b h^2 = 0.02225$ and $\Omega_c h^2 = 0.1198$. These values give a present-day atomic number density of $n_A = 0.82 \rho_c \Omega_b/m_p = \SI{2.05E-7}{\centi\meter^{-3}}$.
Constraints from Experimental Results {#sec:ExptConstraints}
=====================================
To understand how significant a role DM can play in the process of reionization, we must first examine the current experimental constraints on both reionization and DM.
Extensive astrophysical observations of early quasars and the IGM around them have enhanced our understanding of the process of reionization. By studying quasars at redshift $z \sim 6$ and hydrogen Ly$\alpha$ absorption in their spectra due to the Gunn-Peterson effect, multiple groups have shown that reionization of hydrogen was mostly complete by $z \sim 6$ [@Becker2001; @Fan2006; @Ota2008]. Observations from even larger redshifts $z\sim 7-8$ indicate that hydrogen reionization occurred relatively quickly, with the neutral hydrogen fraction rising to 0.34 at $z\sim 7$ and exceeding $0.65$ at $z \sim 8$ [@Schenker2014]. Neutral helium became reionized at a similar time to hydrogen due to their relatively similar ionization energies, but a harder spectrum of ionizing radiation is required to doubly ionize helium atoms [@Loeb2013; @Choudhury2006]. Work done on the helium Ly$\alpha$ spectra for quasars at lower redshifts has shown that helium was completely reionized by $z\sim 3$ [@Zheng2004], when quasars could produce the required ultraviolet spectrum.
Another quantity important to understanding reionization is the IGM temperature, $T_{\text{IGM}}$. Energy deposited into the IGM can both ionize and heat the gas, and the rates of ionization and heating are both highly dependent on $T_{\text{IGM}}$. Measurements of $T_{\text{IGM}}$ place interesting constraints on processes that inject energy into the IGM at redshifts $z \lesssim 6$, since a large injection of energy at these redshifts would result in excessive heating of the IGM. For example, in the case of potential DM contributions, [@Diamanti2014] made use of $T_{\text{IGM}}$ measurements to constrain the velocity-averaged cross-section for $p$-wave annihilation into lepton pairs and the decay lifetime for decay into lepton pairs, for MeV-TeV DM. They found that bounds from $T_{\text{IGM}}$ considerably improved the constraints set by measurements from the CMB and from baryon acoustic oscillations, strengthening the constraint on the $p$-wave annihilation cross-section by more than an order of magnitude over the full range of DM masses considered.
Several measurements of $T_{\text{IGM}}$ as a function of redshift have been performed in the last two decades. Earlier studies [@Schaye2000] measured the distribution of widths in Ly$\alpha$ absorption spectra from quasars in the redshift range $z = 2.0 - 4.5$ to determine the history of $T_{\text{IGM}}$ in this range, and determined that $\SI{5100}{\kelvin} \leq T_{\text{IGM}}(z=4.3) \leq \SI{20000}{\kelvin}$. More recent studies [@Becker2011; @Bolton2011] of the IGM temperature from the Lyman-$\alpha$ forest [@Becker2011] and from quasars [@Bolton2010; @Bolton2011] have pushed these measurements back to $z \sim 6$, with the two measurements of $T_{\text{IGM}}$ at the largest redshifts given by (errors reflect 95% confidence):
$$\begin{aligned}
{1}
\log_{10} \left( \frac{T_{\text{IGM}}(z=6.08)}{\text{K}} \right) &= 4.21^{+0.06}_{-0.07}, \nonumber \\
\log_{10} \left( \frac{T_{\text{IGM}}(z=4.8)}{\text{K}} \right) &= 3.9 \pm 0.1.
\label{eqn:TIGMConstraints}\end{aligned}$$
The first measurement, discussed in [@Bolton2011], is almost certainly an overestimate of the true IGM temperature at that redshift: this result does not account for photo-heating of HeII around the quasar being measured, which would result in the measured temperature being significantly higher than the temperature of the IGM away from these quasars. Nonetheless, it serves as a conservative upper bound on $T_{\text{IGM}}$.
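For reference, the logarithmic constraints in equation (\[eqn:TIGMConstraints\]) translate directly into temperatures; a quick conversion (the variable names here are ours, purely for illustration):

```python
# Convert the log10(T_IGM / K) constraints quoted above into temperatures.
T_central = 10**4.21                         # z = 6.08 central value, ~1.6e4 K
T_hi = 10**(4.21 + 0.06)                     # 95% upper bound at z = 6.08
T_lo = 10**(4.21 - 0.07)                     # 95% lower bound at z = 6.08
T48_lo, T48_hi = 10**(3.9 - 0.1), 10**(3.9 + 0.1)   # 95% range at z = 4.8
```

The $z = 6.08$ central value is thus roughly $\SI{1.6E4}{\kelvin}$, to be compared with $\sim\SI{8000}{\kelvin}$ at $z = 4.8$.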
Aside from direct astrophysical measurements, the CMB can also reveal much about reionization. One important aspect of this epoch that can be measured from the CMB is the total optical depth $\tau$ since recombination, given by $$\begin{aligned}
\tau = -\int_0^{z_\text{CMB}} dz \, n_e(z) \sigma_T \frac{dt}{dz},
\label{eqn:OpticalDepth}\end{aligned}$$ where $n_e$ is the number density of free electrons, $\sigma_T$ is the Thomson scattering cross-section and $z_{\text{CMB}}$ is the redshift of recombination. Scattering of CMB photons off free electrons present after reionization suppresses the small-scale acoustic peaks in the power spectrum by a factor of $e^{-2\tau}$. The Planck collaboration reports the measured optical depth to be [@PlanckCollaboration2016] $$\begin{aligned}
\tau = 0.058 \pm 0.012.
\label{eqn:measuredOpticalDepth}\end{aligned}$$ Planck has also been able to determine a reionization redshift $z_{\text{reion}}$, assuming a step-like reionization transition modeled by a $\tanh$ function and characterized by some width parameter $\delta z = 0.5$ (referred to as the “redshift-symmetric” parameterization in [@PlanckCollaboration2016]). $z_{\text{reion}}$ is the redshift at which the free electron fraction $x_e \equiv n_e/n_{\text{H}} = 0.54$. Here $n_{\text{H}}$ is the number density of hydrogen (both neutral and ionized) and $n_e$ is the number density of free electrons. $x_e=1.08$ upon complete reionization after taking into account the complete (single) ionization of helium as well. Based on the measured optical depth, the derived $z_{\text{reion}}$ assuming a redshift-symmetric parameterization of the reionization is $$\begin{aligned}
{1}
z_{\text{reion}} = 8.8 \pm 0.9.\end{aligned}$$ We can separate out the known contribution to the optical depth from the fully reionized epoch at $z < 6$, isolating the uncertain contribution from higher redshifts, by writing: $$\begin{gathered}
\tau = -\int_0^3 dz \left[n_{\text{H}}(z) + 2n_{\text{He}}(z) \right] \sigma_T \frac{dt}{dz} \\
- \int_3^6 dz\, [n_{\text{H}} (z) + n_{\text{He}}(z)] \sigma_T \frac{dt}{dz} \\
- \int_6^{z_{\text{CMB}}} dz\, n_e(z) \sigma_T \frac{dt}{dz},\end{gathered}$$ where $n_{\text{He}}$ is the redshift-dependent number density of helium (both neutral and ionized). The first two terms are the contribution to the optical depth from reionized hydrogen and helium, while the last term is the contribution from the unknown ionization history of the universe above $z = 6$. The first two terms can be directly evaluated given the baryon number density today, and give a total contribution of $\delta \tau_0 = 0.038$. The remaining measured optical depth must therefore have come from contributions prior to $z=6$, i.e. $$\begin{aligned}
\delta \tau = -\int_6^{z_{\text{CMB}}} dz\, n_e(z) \sigma_T \frac{dt}{dz} \leq 0.044,
\label{eqn:ExcessOpticalDepth}\end{aligned}$$ in order for $\tau$ to be within the experimental uncertainty of equation (\[eqn:measuredOpticalDepth\]) at the 95% confidence level.
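As a numerical cross-check of the value $\delta \tau_0 = 0.038$ quoted above, the first two integrals can be evaluated directly. The sketch below uses the Planck parameters from the Introduction and hydrogen/helium number fractions of 76%/6% of the baryon density, consistent with the $n_A = 0.82\,\rho_c \Omega_b/m_p$ convention stated there; all constants and variable names are our own:

```python
import numpy as np
from scipy.integrate import quad

# Planck 2015 TT,TE,EE+lowP central values quoted in the Introduction
h, Omega_m, Omega_b_h2 = 0.6727, 0.3156, 0.02225
Omega_L = 1.0 - Omega_m

m_p = 1.673e-24                      # proton mass, g
rho_c = 1.878e-29 * h**2             # critical density today, g/cm^3
n_b = rho_c * Omega_b_h2 / h**2 / m_p
n_H, n_He = 0.76 * n_b, 0.06 * n_b   # consistent with n_A = 0.82 rho_c Omega_b / m_p

sigma_T = 6.652e-25                  # Thomson cross-section, cm^2
c = 2.998e10                         # speed of light, cm/s
H0 = 100.0 * h * 1.0e5 / 3.086e24    # Hubble constant, s^-1

def dtau_dz(z, n_e0):
    """|d tau / dz| for n_e(z) = n_e0 (1+z)^3 and |dt/dz| = 1/[(1+z) H(z)]."""
    H = H0 * np.sqrt(Omega_m * (1.0 + z)**3 + Omega_L)
    return n_e0 * (1.0 + z)**3 * sigma_T * c / ((1.0 + z) * H)

# z < 3: H and He fully ionized; 3 < z < 6: He only singly ionized
tau_low, _ = quad(dtau_dz, 0.0, 3.0, args=(n_H + 2.0 * n_He,))
tau_mid, _ = quad(dtau_dz, 3.0, 6.0, args=(n_H + n_He,))
delta_tau_0 = tau_low + tau_mid      # evaluates to approximately 0.038
```

Subtracting this from the measured $\tau = 0.058 \pm 0.012$ reproduces the bound of equation (\[eqn:ExcessOpticalDepth\]).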
For the case of $s$-wave annihilation, the CMB power spectrum also provides a robust constraint on the velocity-averaged annihilation cross-section $\langle \sigma v \rangle$, since additional ionization of the IGM at high redshifts induces a multipole-dependent modification to the temperature and polarization anisotropies [@Padmanabhan:2005es]. The Planck collaboration [@PlanckCollaboration2015] has placed an upper bound on $p_{\text{ann}}$, defined as $$\begin{aligned}
{1}
p_{\text{ann}} (z) = f_{\text{eff}} \frac{\langle \sigma v \rangle}{m_\chi},\end{aligned}$$ where $f_{\text{eff}}$ is a constant proxy for $f(z)$, the efficiency parameter that describes the ratio of total energy deposited to total energy injected at a particular redshift $z$, and $m_\chi$ is the mass of the DM particle. The CMB power spectra are most sensitive to redshifts $z \sim 600$ (for $s$-wave annihilation), and so the constraint on $\langle \sigma v \rangle$ can be estimated from that redshift [@Finkbeiner2012]. Using the TT,TE,EE+lowP Planck likelihood, the 95% upper limit on this parameter at $z=600$ was found to be: $$\begin{aligned}
{1}
p_{\text{ann}}(z = 600) < \SI{4.1E-28}{cm^3 s^{-1} GeV^{-1}}.
\label{eqn:pann}\end{aligned}$$
Given $f_{\text{eff}}$ for $s$-wave annihilation, which in turn is obtained from $f(z)$, this leads immediately to a constraint on $\langle \sigma v \rangle$ as a function of $m_\chi$. $f(z)$ has been calculated for arbitrary injections of electrons, positrons and photons in the 10 keV-TeV range; in this paper we will thus refer to injections of electron/positron pairs ($e^+e^-$) and photon pairs ($\gamma \gamma$), while keeping in mind that more general DM annihilation/decay channels can be represented as linear combinations of photons/electrons/positrons at different energies.[^2] This approach neglects the contribution of protons and antiprotons, which is generally quite small [@Weniger2013].
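Given the bound of equation (\[eqn:pann\]), the implied mass-dependent limit on $\langle \sigma v \rangle$ is straightforward to evaluate; a minimal sketch (the $f_{\text{eff}}$ value below is purely illustrative, not a value taken from [@Slatyer2015a]):

```python
P_ANN_LIMIT = 4.1e-28   # cm^3 s^-1 GeV^-1, Planck 95% CL at z = 600

def sigma_v_bound(m_chi_GeV, f_eff):
    """Upper limit on <sigma v> (cm^3/s) implied by p_ann = f_eff <sigma v>/m_chi."""
    return P_ANN_LIMIT * m_chi_GeV / f_eff

# Illustrative: a 100 GeV candidate with an assumed f_eff = 0.2
bound = sigma_v_bound(100.0, 0.2)    # 2.05e-25 cm^3/s
```

Note that the bound weakens linearly with increasing DM mass, which is why the CMB constraint is most powerful for light DM.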
In Section \[sec:fz\], we will give a brief summary of our calculation of $f(z)$, which is based on the work detailed in [@Slatyer2012; @Slatyer2015]. The full details of obtaining an actual value for $f_{\text{eff}}$ from our calculation of $f(z)$ across a large range of DM masses can be found in [@Slatyer2015a]. Figure \[fig:excludedXSec\] shows the constraints on $s$-wave annihilation into $e^+e^-$ (left panel) and $\gamma \gamma$ (right panel), based on the CMB power spectrum data from Planck.
Unclustered Dark Matter Energy Injection Scenarios {#sec:EnergyInjection}
==================================================
In this paper, three scenarios by which DM can inject energy into the IGM are considered: $s$-wave annihilation, $p$-wave annihilation and decay. The total energy injected by both $s$- and $p$-wave annihilation of uniformly distributed DM is given by $$\begin{aligned}
{1}
\left( \frac{dE}{dV dt} \right)_{\text{ inj}} = \rho^2_{\chi,0} (1+z)^6 \frac{\langle \sigma v \rangle}{m_{\chi}},
\label{eqn:injRateSmooth}\end{aligned}$$ where $m_{\chi}$ is the DM particle mass and $\rho_{\chi,0} = \rho_{c} \Omega_c$ is the overall smooth density of DM today, with $\rho_c$ being the critical density of the universe today. In $s$-wave annihilation, $\langle \sigma v \rangle$ is constant, while in $p$-wave annihilation, $\sigma v \propto v^2$. This velocity dependence can be factored out by assuming a Maxwellian velocity distribution, which simplifies the calculation since we can take the 1D velocity dispersion ($\sigma_{1\text{D}}$) as a proxy for the velocity enhancement/suppression in the thermal average: $$\begin{aligned}
{1}\label{proxy_p}
\langle \sigma v \rangle_p \propto \int_0^\infty v^2 f_{\rm MB}(v)\,dv \propto \sigma_{1\text{D}}^2.\end{aligned}$$ We can then write, by picking a reference dispersion velocity $\sigma_{1\text{D,ref}}$: $$\begin{aligned}
{1}
\langle \sigma v \rangle_{p,B} = \left(\frac{\sigma_{1\text{D,B}}}{\sigma_{1\text{D,ref}}}\right)^2 (\sigma v)_{\text{ref}},\end{aligned}$$ where $\sigma_{1\text{D,B}}$ is the one-dimensional characteristic dispersion velocity of unclustered DM. This quantity is redshift dependent, but assuming thermal equilibrium of the DM distribution, $ \sigma_{1\text{D,B}}^2 \propto T$, which for non-relativistic DM scales as $T \propto (1+z)^2$. Thus the energy injection rate for $p$-wave annihilation for uniformly distributed DM can be written as $$\begin{aligned}
{1}
\left(\frac{dE}{dV dt} \right)_{p\text{ inj}} = \rho^2_{\chi,0} (1+z)^8 \frac{(\sigma v)_{\text{ref}}}{m_\chi} \left(\frac{\sigma_{1\text{D,B}} (z=0)}{\sigma_{1\text{D,ref}}}\right)^2,
\label{eqn:smoothpwave}\end{aligned}$$ where $\sigma_{1\text{D,B}}(z=0)$ is the present-day value of $\sigma_{1\text{D,B}}$. Throughout this paper, we choose $\sigma_{1\text{D,ref}} = 100~\mathrm{km/s}$ (a value consistent with [@Diamanti2014]), which is roughly the present-day DM dispersion velocity in haloes with a mass comparable to that of the Milky Way ($\lesssim10^{12}$M$_\odot$).
Finally, the energy injected from the decay of DM is given by $$\begin{aligned}
{1}
\left(\frac{dE}{dV dt} \right)_{d \text{ inj}} = \rho_{\chi,0}(1+z)^3 \frac{1}{\tau_{\chi}},\end{aligned}$$ where $\tau_\chi$ is the decay lifetime, which is taken to be much longer than the age of the universe so that the change in DM density due to decay is negligible. This assumption is valid given known limits on the decay lifetime deduced from Planck and WMAP [@Diamanti2014] as well as gamma-ray experiments [@Dugger:2010ys; @Essig2013] for a large range of decay channels.
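The three smooth (unclustered) injection rates above differ only in their redshift scaling, which can be summarized schematically (normalizations omitted; the function below is only a sketch of the scalings stated in the text):

```python
def smooth_injection_scaling(z, channel):
    """Redshift scaling of the unclustered energy-injection rate relative to
    z = 0: (1+z)^6 for s-wave, (1+z)^8 for p-wave (the extra (1+z)^2 coming
    from sigma_1D^2 ~ T ~ (1+z)^2), and (1+z)^3 for decay."""
    power = {"s-wave": 6, "p-wave": 8, "decay": 3}[channel]
    return (1.0 + z)**power

# At z = 9 the smooth p-wave rate is enhanced by (1+z)^2 = 100 over s-wave
ratio = smooth_injection_scaling(9, "p-wave") / smooth_injection_scaling(9, "s-wave")
```

The steep redshift dependence of the annihilation channels is what makes the early universe, rather than the present day, the dominant epoch for smooth-background energy injection.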
We have thus far only considered unclustered DM distributions, where the comoving DM density is constant, but structure formation causes the local density and velocity dispersion of DM to deviate strongly from the expected value for a homogeneous distribution. The onset of structure formation thus significantly changes the energy injection history due to $s$- and $p$-wave annihilations. However, the previous notation is still useful: once we have obtained a structure formation history, we can characterize the energy injection from a realistic DM distribution by replacing equations (\[eqn:injRateSmooth\]) and (\[eqn:smoothpwave\]) with effective multipliers to the unclustered DM density. A realistic structure formation history is thus crucial in calculating the energy injection rate from DM.
Structure Formation {#sec:StructureFormation}
===================
In the Cold Dark Matter (CDM) scenario, DM clusters into gravitationally self-bound haloes across a very large range of scales, from the (model-dependent) minimum limit set by DM kinetic decoupling ($10^{-11}-10^{-3}$M$_\odot$ for WIMPs [e.g. @Bringmann2009]) to $10^{15}$M$_\odot$ cluster-size haloes. $N$-body simulations can accurately follow DM structure formation but only in a limited mass range: it is not yet possible to cover the full dynamical range corresponding to CDM particles. In order to explore the unresolved regime, one must use hybrid approaches in which a core analytical model is calibrated against numerical simulations, e.g., the well-known halo model [e.g. @Seljak_00], or the recently introduced $P^2SAD$ (clustering in phase space) [@Zavala2015]. We will follow these two approaches in this paper, describing their most relevant elements.
We assume that after recombination, structure formation is described by linear perturbation theory followed by the immediate formation (collapse) of haloes. In this scenario, haloes collapse (form) at a redshift $z_{\rm col}$ with an average overdensity $\bar{\rho}_h=\Delta\rho_{c}(z_{\rm col})$, where $\rho_{c}$ is the critical density of the universe. The choice of the overdensity $\Delta$ varies in the literature, but for simplicity we will use the redshift independent, widely used value of $\Delta=200$. The formation redshift is given by the spherical collapse model, which connects the linear power spectrum with the epoch of collapse, resulting in a hierarchical picture of structure formation. In particular, the halo collapses when the rms linear overdensity $\sigma(M,z)$ (mass variance) crosses the linear overdensity threshold $\delta_c\sim1.686$: $$\label{sigma_rms}
\sigma^2(M,z)=\int d^3{\bf k}\,P(k,z)W^2(k,M),$$ where $W(k,M)$ is a filter function in Fourier space, and $P(k,z)$ is the linear CDM power spectrum. For the spherical collapse model, the window function is a top-hat filter in real space. We compute the primordial matter power spectrum with the code CAMB [@2000ApJ...538..473L] with a cosmology consistent with Planck data.
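As a concrete illustration of equation (\[sigma\_rms\]), the sketch below evaluates $\sigma(M)$ with a real-space top-hat window and a toy power-law spectrum; the normalization, mean density value, and variable names are our own, standing in for the CAMB output used in the actual calculation:

```python
import numpy as np

rho_m = 8.5e10          # mean matter density, M_sun / Mpc^3 (Planck-like toy value)

def W_tophat(x):
    """Fourier transform of a real-space top-hat filter."""
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def sigma_M(M, P, k=np.logspace(-4, 3, 4000)):
    """rms linear overdensity smoothed on the Lagrangian scale of mass M."""
    R = (3.0 * M / (4.0 * np.pi * rho_m))**(1.0 / 3.0)   # Lagrangian radius, Mpc
    y = k**2 * P(k) * W_tophat(k * R)**2 / (2.0 * np.pi**2)
    var = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(k))    # trapezoidal rule
    return np.sqrt(var)

P_toy = lambda k: 1.0e4 * k**-2.0    # toy spectrum with arbitrary normalization

# Hierarchy: sigma(M) decreases with M, so small haloes cross delta_c ~ 1.686 first
s_small, s_big = sigma_M(1e8, P_toy), sigma_M(1e14, P_toy)
```

Because $\sigma(M)$ decreases monotonically with $M$ for CDM-like spectra, the smallest haloes collapse earliest, producing the hierarchical picture described above.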
Halo Model
----------
[**(i) Flux multiplier.**]{} For the purposes of this work, we are interested in computing the excess DM annihilation over the contribution from the smooth background due to the collapse of DM into haloes. Following the notation of [@Taylor2003],[^3] we write this excess (flux multiplier) for a particular redshift as: $$\begin{aligned}
\label{flux_cosmic}
\mathcal{B}(z)&=&\frac{1}{\rho_B^2V_B}\int_{m_{\rm min}}^{\infty}\left(V_B\frac{dn}{dM}dM\right)\bar{\rho}^2_hV_h(M)B_{h}(M)\nonumber\\
&=&\frac{\Delta}{\Omega_m^2\rho_{\rm crit}}\int_{m_{\rm min}}^{\infty}MB_h(M)\frac{dn}{dM}dM,\end{aligned}$$ where $\left(V_B\frac{dn}{dM}dM\right)$ is the number of haloes in the cosmic volume $V_B$, with a background matter density $\rho_B=\Omega_m\rho_c$. Each halo is assumed to be spherical with a radial density profile $\rho(r)$ truncated at a virial radius $r_{200}$. The annihilation rate in the halo is enhanced over the rate based on the average DM density by an amount $$\label{flux_halo}
B_h(M)=\frac{4\pi}{\bar{\rho}^2_hV_h(M)}\int_0^{r_{200}}\rho^2(r)r^2\,dr.$$
[**(ii) Density profile.**]{} In most of the resolved mass regime of current simulations, haloes are well-fitted by a [*universal*]{} two-parameter NFW density profile [@Navarro:1996gj]. An even better fit is that of a three-parameter Einasto profile [@Einasto]. The simplicity of the NFW profile and, more importantly, its reduction to an almost one-parameter profile makes it an appealing choice in analytic studies. We will consider these two profiles for this study except at very low halo masses near the filtering mass scale, where recent simulations of the formation of the first haloes (microhaloes) indicate that their inner density profiles might be cuspier than the NFW profile [e.g. @Anderhalden2013; @2014ApJ...788...27I]. Although these simulations can follow the evolution of microhaloes only until $z\sim30$ (due to limited resolution, since long wavelength perturbations comparable to the box size cannot be neglected at lower redshifts), we assume that the density profile of these microhaloes can be described by these results all the way down to $z=0$.
[*NFW profile and microhaloes.*]{} We use the density profile given by $$\label{rho_smooth}
\rho(x)=\frac{\rho_s}{x^\alpha(1+x)^{3-\alpha}},$$ where $x\equiv r/r_s$, and $r_s$ and $\rho_s$ are the scale radius and density, respectively. Setting $\alpha=1$ gives the NFW profile, which we adopt for haloes and subhaloes. For haloes near the filtering mass scale, we follow [@2014ApJ...788...27I], which finds that $\alpha$ increases logarithmically towards lower halo masses: $$\label{alpha_micro}
\alpha=-0.123~{\rm log}\left(\frac{M}{10^{-6}M_\odot}\right)+1.461$$ for $M<10^{-3}$$M_\odot$. Above this scale, we set $\alpha=1$. Substituting equation (\[rho\_smooth\]) into equation (\[flux\_halo\]), we have: $$\label{flux_halo_power}
B_h(M)=\frac{c^3}{3m^2(c)}\int_0^c\frac{x^2 dx}{x^{2\alpha}(1+x)^{6-2\alpha}},$$ where $c\equiv r_{200}/r_s$ is the concentration parameter, which is a function of halo mass (see below), and: $$\label{flux_halo_power_2}
m(c)=\int_0^c\frac{x^2 dx}{x^\alpha(1+x)^{3-\alpha}}.$$ Equations (\[flux\_halo\_power\]) and (\[flux\_halo\_power\_2\]) both have analytic solutions.
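For $\alpha=1$ the closed forms are simple: $m(c)={\rm ln}(1+c)-c/(1+c)$, and the $\rho^2$ integrand $x^2/[x^2(1+x)^4]$ reduces to $(1+x)^{-4}$, giving $\frac{1}{3}[1-(1+c)^{-3}]$. A minimal sketch checking these closed forms against direct quadrature (the concentration value is illustrative):

```python
import numpy as np
from scipy.integrate import quad

def m_nfw(c):
    """Closed form of int_0^c x / (1+x)^2 dx (NFW mass integral, alpha = 1)."""
    return np.log(1.0 + c) - c / (1.0 + c)

def boost_nfw_analytic(c):
    """B_h(c) for alpha = 1: the rho^2 integrand simplifies to (1+x)^-4."""
    j = (1.0 - (1.0 + c)**-3) / 3.0
    return c**3 * j / (3.0 * m_nfw(c)**2)

def boost_nfw_numeric(c):
    """Same quantity by direct quadrature of the B_h and m(c) integrals."""
    j, _ = quad(lambda x: (1.0 + x)**-4, 0.0, c)
    m, _ = quad(lambda x: x / (1.0 + x)**2, 0.0, c)
    return c**3 * j / (3.0 * m**2)

c = 10.0
assert abs(boost_nfw_analytic(c) / boost_nfw_numeric(c) - 1.0) < 1e-7
```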
[*Einasto profile.*]{} The density profile is given by: $$\label{einasto_eq}
\rho(r)=\rho_{-2}\,{\rm exp}\left(\frac{-2}{\alpha_e}\left[\left(\frac{r}{r_{-2}}\right)^{\alpha_e}-1\right]\right),$$ where $\rho_{-2}$ and $r_{-2}$ are the density and radius at the point where the logarithmic density slope is -2, and $\alpha_e$ is the Einasto shape parameter. This three-parameter profile is reduced to only two parameters once the total mass $M\equiv M_{200}$ of a halo is fixed. In particular we can write: $$\begin{gathered}
M_{200} = \frac{4\pi r_{-2}^3\rho_{-2}}{\alpha_e}{\rm exp}\left(\frac{3{\rm ln}\,\alpha_e+2-{\rm ln}\,8}{\alpha_e}\right) \\
\times \gamma\left[\frac{3}{\alpha_e},\frac{2}{\alpha_e}\left(\frac{r_{200}}{r_{-2}}\right)^{\alpha_e}\right].\end{gathered}$$ The parameter $\alpha_e$ and the “concentration” $c_e=r_{200}/r_{-2}$ are connected to $M_{200}$ through $\sigma(M,z)$ as we describe below. Once these parameters are known, we can compute the boost to the annihilation rate over the average in a halo by solving equation (\[flux\_halo\]) numerically.
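The prefactor above is what the substitution $u=(2/\alpha_e)(r/r_{-2})^{\alpha_e}$ produces in the mass integral. As a sketch, we can verify the closed form against direct quadrature for a hypothetical halo (all parameter values are illustrative assumptions); note that scipy's `gammainc` is the *regularized* lower incomplete gamma, so $\gamma(a,x)$ requires multiplying by $\Gamma(a)$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammainc

def einasto_rho(r, rho2, r2, ae):
    """Einasto profile in terms of rho_-2, r_-2 and the shape parameter."""
    return rho2 * np.exp(-2.0 / ae * ((r / r2)**ae - 1.0))

def m200_closed_form(rho2, r2, ae, r200):
    """M_200 via the lower incomplete gamma function; scipy's gammainc is
    regularized, so gamma_lower(a, x) = gammainc(a, x) * Gamma(a)."""
    a = 3.0 / ae
    x = (2.0 / ae) * (r200 / r2)**ae
    prefac = (4.0 * np.pi * r2**3 * rho2 / ae
              * np.exp((3.0 * np.log(ae) + 2.0 - np.log(8.0)) / ae))
    return prefac * gammainc(a, x) * gamma(a)

# Hypothetical halo: r_-2 = 20 kpc, rho_-2 = 1e6 Msun/kpc^3, alpha_e = 0.17
rho2, r2, ae, r200 = 1e6, 20.0, 0.17, 200.0
direct, _ = quad(lambda r: 4.0 * np.pi * r**2 * einasto_rho(r, rho2, r2, ae),
                 0.0, r200)
assert abs(m200_closed_form(rho2, r2, ae, r200) / direct - 1.0) < 1e-6
```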
The cosmic annihilation flux multiplier given by equation (\[flux\_cosmic\]) due to the population of haloes above a minimum mass $M_{\rm min}$ is fully determined once we specify the halo mass function $dn/dM$ and the properties of the density profiles. In the Extended Press-Schechter (EPS) formalism, both of these are fully determined for a given halo mass. More specifically, they can be written as formulae that depend on $\sigma(M,z)$.
[**(iii) Mass function.**]{} The mass function in the case of ellipsoidal collapse is given by [@ST1999]: $$\begin{aligned}
\label{eq_mf}
\frac{dn}{d{\rm ln}M}&=\frac{1}{2}f(\nu)\frac{\rho_B}{M}\frac{d{\rm ln} (\nu)}{d{\rm ln}M},\\
f(\nu)&=A\sqrt{\frac{2q\nu}{\pi}}\left[1+\left(q\nu\right)^{-p}\right]{\rm exp}\left(-\frac{q\nu}{2}\right),\end{aligned}$$ with $A=0.3222$, $p=0.3$, and $q=1$, and: $$\nu\equiv\frac{\delta_c(z)^2}{\sigma(M,z)^2},$$ where $\delta_c(z)=1.686/D(z)$ is the linearly extrapolated threshold for spherical collapse, with $D(z)$ being the growth factor normalized to unity at $z=0$.
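With $\nu\equiv\delta_c^2/\sigma^2$, a useful consistency check of the multiplicity function is that all mass resides in haloes of some size, $\int_0^\infty f(\nu)\,d\nu/2\nu=1$. The short sketch below verifies this numerically for the quoted $(A,p,q)$, writing the exponential as ${\rm exp}(-q\nu/2)$, the standard Sheth-Tormen form for this definition of $\nu$:

```python
import numpy as np
from scipy.integrate import quad

A, p, q = 0.3222, 0.3, 1.0

def f_st(nu):
    """Sheth-Tormen multiplicity with nu = (delta_c / sigma)^2."""
    return (A * np.sqrt(2.0 * q * nu / np.pi)
            * (1.0 + (q * nu)**(-p)) * np.exp(-q * nu / 2.0))

# All mass belongs to a halo of some size: int_0^inf f(nu) dnu / (2 nu) = 1.
# Split the range so quad handles the mild nu^(-0.8) singularity at zero.
norm = (quad(lambda nu: f_st(nu) / (2.0 * nu), 0.0, 1.0)[0]
        + quad(lambda nu: f_st(nu) / (2.0 * nu), 1.0, np.inf)[0])
assert abs(norm - 1.0) < 1e-3
```

The quoted value $A=0.3222$ is precisely the choice that enforces this normalization.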
Free-streaming of DM particles prevents the formation of haloes below a (filtering) scale, which depends on the mass of the DM particle. This results in a cutoff to the primordial power spectrum at the filtering scale. The suppression of the CDM power spectrum by the filtering scale, relative to the spectrum without a cutoff (i.e. the limit of an arbitrarily heavy DM particle, for which free streaming is negligible), is typically given in terms of the transfer function $T^2_\chi=P_{\rm m_\chi}/P_{\rm m_\chi\rightarrow\infty}$, which for neutralino DM has the form [@Green2005]: $$\label{transfer_func}
T_\chi(k)=\left[1-\frac{2}{3}\left(\frac{k}{k_A}\right)^2\right]{\rm exp}\left[-\left(\frac{k}{k_A}\right)^2-\left(\frac{k}{k_B}\right)^2\right],$$ where $$\begin{gathered}
\label{transfer_func_2}
k_A= 2.4\times10^6\left(\frac{m_\chi}{100~{\rm GeV}}\right)^{1/2}\nonumber\\
\times\frac{(T_{\rm kd}/30~{\rm MeV})^{1/2}}{1+{\rm ln}(T_{\rm kd}/30~{\rm MeV})/19.2}~{\rm Mpc}/h,\end{gathered}$$ $$\begin{aligned}
k_B&=5.4\times10^7\left(\frac{m_\chi}{100~{\rm GeV}}\right)^{1/2}\left(\frac{T_{\rm kd}}{30~{\rm MeV}}\right)^{1/2}~{\rm Mpc}/h, \end{aligned}$$ and $T_{\rm kd}$ is the (model-dependent) kinetic decoupling temperature.
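A direct transcription of equations (\[transfer\_func\]-\[transfer\_func\_2\]) can serve as a quick sanity check (wavenumbers in the units quoted above; the sample wavenumbers are illustrative):

```python
import numpy as np

def kd_scales(m_chi_gev=100.0, T_kd_mev=30.0):
    """Damping scales k_A and k_B for neutralino free streaming,
    in the wavenumber units quoted in the text."""
    amp = np.sqrt(m_chi_gev / 100.0) * np.sqrt(T_kd_mev / 30.0)
    k_A = 2.4e6 * amp / (1.0 + np.log(T_kd_mev / 30.0) / 19.2)
    k_B = 5.4e7 * amp
    return k_A, k_B

def transfer(k, k_A, k_B):
    """T_chi(k): free-streaming suppression of the linear power spectrum."""
    return (1.0 - (2.0 / 3.0) * (k / k_A)**2) * np.exp(-(k / k_A)**2
                                                       - (k / k_B)**2)

k_A, k_B = kd_scales()   # reference point: m_chi = 100 GeV, T_kd = 30 MeV
assert transfer(1e3, k_A, k_B) > 0.999     # negligible suppression at large scales
assert abs(transfer(3e6, k_A, k_B)) < 0.2  # strong damping near and beyond k_A
```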
To include the effect of free-streaming into the mass function, we use the code provided by [@2013MNRAS.433.1573S], which computes the mass function following equation (\[eq\_mf\]) using a [*sharp-k*]{} window function for the mass variance calibrated to match the results of simulations that include a cutoff in the power spectrum as given by the transfer function in equation (\[transfer\_func\]). We note that $T_{\rm kd}$ and $m_\chi$ together determine the minimum self-bound halo mass $M_{\rm min}$. Choosing a different $M_{\min}$ changes the global contribution of (sub)haloes by some overall factor in a redshift-independent manner. We take $m_\chi=100$ GeV and $T_{\rm kd}=28$ MeV to compute the cutoff to the primordial power spectrum given by equations (\[transfer\_func\]-\[transfer\_func\_2\]).[^4] This results in a damping scale due to free streaming with a characteristic mass of $M_{\rm min}=10^{-6}$M$_\odot$ [see equation (13) and Fig. 3 in Ref. @Bringmann2009], which is the canonical value for WIMPs. The impact of choosing different values of $M_{\rm min}$ will be studied later in this section.
[**(iv) Parameters of the density profiles.**]{} The median density profile of haloes with a given mass is fully specified by one parameter, typically the halo mass. Since CDM haloes form hierarchically, low mass haloes are more concentrated than more massive ones. This specifies the second parameter (concentration) of the profile. Ultimately, this parameter is connected to the density of the Universe at the (mass-dependent) time of collapse for a given halo.
[*NFW profile and microhaloes*]{}. The concentration of an NFW halo is a strong function of halo mass that has been explored in great detail in the literature using analytical and numerical methods. We use the model by [@2012MNRAS.423.3018P] to compute the concentration-mass relation. The model is calibrated to recent simulations down to their resolution limit ($M\sim10^{10}$ M$_\odot$), but more importantly, it is physically motivated since it uses $\sigma(M,z)$ as the main quantity connected to the concentration. In this way, it takes into account the flattening of the linear power spectrum towards smaller halo masses. We refer the reader to Section 5 of [@2012MNRAS.423.3018P] for the formulae that lead to the computation of $c(M,z)$. We only consider haloes with a “peak-height” $\nu\equiv\delta_c/\sigma$ up to $\nu=3$ (i.e. up to $3\sigma$ peaks). The larger $\nu$ is, the rarer and the more massive the halo is relative to the characteristic clustering mass defined by $\nu=1$.
For microhaloes, we make a correction to the NFW concentrations given by the model of Ref. [@2012MNRAS.423.3018P] to take into account the steeper profiles of microhaloes. To do so, we follow the results from [@2014ApJ...788...27I] (see their Figure 9). In particular, for $\alpha=1.5,1.4,1.3,1.0$ in equation (\[alpha\_micro\]), they find $c_{\rm NFW}=2.0c_{\rm micro},1.67c_{\rm micro}, 1.43c_{\rm micro}, 1.0c_{\rm micro}$; we use these values to interpolate for a given microhalo mass.
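A minimal sketch of this correction, combining equation (\[alpha\_micro\]) with the quoted concentration ratios (linear interpolation in $\alpha$ and the helper names are our assumptions, not a prescription from the cited works):

```python
import numpy as np

def alpha_micro(M_msun):
    """Inner slope from equation (alpha_micro); NFW (alpha = 1) above 1e-3 Msun."""
    if M_msun >= 1e-3:
        return 1.0
    return -0.123 * np.log10(M_msun / 1e-6) + 1.461

# (alpha, c_NFW / c_micro) pairs quoted in the text
_alphas = np.array([1.0, 1.3, 1.4, 1.5])
_ratios = np.array([1.0, 1.43, 1.67, 2.0])

def micro_concentration(M_msun, c_nfw):
    """Correct an NFW concentration for the steeper microhalo profile by
    linearly interpolating the quoted ratios in alpha."""
    ratio = np.interp(alpha_micro(M_msun), _alphas, _ratios)
    return c_nfw / ratio

a = alpha_micro(1e-6)            # a 1e-6 Msun microhalo has alpha = 1.461
assert abs(a - 1.461) < 1e-12
assert micro_concentration(1e-6, 10.0) < 10.0   # steeper cusp, lower c
```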
[*Einasto profile.*]{} In this case we follow the work by [@Klypin2014] to connect the parameters $\alpha_e$ and $c_e$ (concentration) with $\sigma(M)$. These authors use a similar analysis as that of [@2012MNRAS.423.3018P], and find the following empirical relations: $$\begin{aligned}
\alpha_e&=&0.015+0.0165\nu^2,\nonumber \\
r_{200}/r_{-2}&=&6.5\nu^{-1.6}(1+0.21\nu^2).
\end{aligned}$$ Note that $\alpha_e$ approaches a constant value asymptotically for low $\nu$ (i.e. low halo masses), which implies that low-mass haloes differ from one another in only one parameter, their concentration (as in the NFW case).
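These empirical relations are straightforward to transcribe; the sketch below evaluates them at two illustrative peak heights (the $\nu$ values are arbitrary choices):

```python
import numpy as np

def einasto_params(nu):
    """Empirical Klypin et al. relations for the Einasto shape parameter
    and concentration as functions of peak height nu."""
    alpha_e = 0.015 + 0.0165 * nu**2
    c_e = 6.5 * nu**(-1.6) * (1.0 + 0.21 * nu**2)
    return alpha_e, c_e

a_lo, c_lo = einasto_params(0.5)   # common low-mass halo (low peak height)
a_hi, c_hi = einasto_params(3.0)   # rare massive halo
assert a_lo < a_hi   # alpha_e flattens towards ~0.015 at low nu
assert c_lo > c_hi   # low-mass haloes are more concentrated
```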
[**(v) Substructure.**]{} Each DM halo is composed of a smooth DM distribution and a hierarchy of subclumps that merged into the main halo at some point in the past and have been subjected to tidal disruption. The modeling of the abundance of main haloes and their inner smooth structure has been described previously, and we now consider the impact of substructure on the annihilation rate.
To account for the self-annihilation of DM in substructures, we define a [*subhalo boost*]{} over the flux multiplier of a main halo (i.e. over $B_h(M)$ in equation (\[flux\_halo\])): $$\begin{gathered}
\label{sub_boost}
\mathcal{B}_{\rm sub}(M)=\frac{1}{B_h(M)} \int_{m_{\rm min}}^{m_{\rm max}}\frac{\bar{\rho}_{\rm sub}(m_{\rm sub})}{\bar{\rho}_h} \\
\times B_{\rm sub}(m_{\rm sub})m_{\rm sub} \frac{dN}{dm_{\rm sub}}dm_{\rm sub},
\end{gathered}$$ where $dN/dm_{\rm sub}$ is the subhalo mass function and $\bar{\rho}_{\rm sub}$ and $B_{\rm sub}$ are the average density within a subhalo of mass $m_{\rm sub}$ and its flux multiplier, respectively. Because of tidal disruption, these quantities depend in principle on the distance of the subhalo relative to the halo center, but since we are interested in the total subhalo boost to the annihilation rate, we can assume that most of the boost comes from subhaloes near the virial radius of the host. This is a good approximation since tidal disruption considerably reduces the abundance of subhaloes near the halo center. For instance, looking at Figure 3 of Ref. [@Springel:2008cc], we see that only $\sim30\%$ of the annihilation rate in subhaloes comes from within 100 kpc ($\sim0.4r_{200}$) of a Milky Way-sized halo. On the other hand, near the virial radius of a host with an assumed NFW profile, the tidal radius for a subhalo of mass $m_{\rm sub}$ is approximately given by [e.g. equation (12) of @Springel:2008cc] $$\begin{aligned}
r_t &=& \left(\frac{m_{\rm sub}}{\left[2-\frac{d{\rm ln}M}{d{\rm ln} r}\right]M(<r)}\right)^{1/3}r\nonumber\\
&\sim& \left(\frac{m_{\rm sub}}{M}\right)^{1/3} r_{200}\left(2-\frac{c^2}{(1+c)^2}\frac{1}{{\rm ln}(1+c)-c/(1+c)}\right)^{-1/3},
\end{aligned}$$ where $c\equiv c(M,z)$ is the concentration of the host. We can then replace $\frac{\bar{\rho}_{\rm sub}}{\bar{\rho}_h}$ in equation (\[sub\_boost\]) with: $$\begin{aligned}
\left.\frac{\bar{\rho}_{\rm sub}(<r_t)}{\bar{\rho}_h}\right\vert_{r_{200}}=
2-\frac{c^2}{(1+c)^2}\frac{1}{{\rm ln}(1+c)-c/(1+c)}.\end{aligned}$$ This density ratio varies only mildly around 2, with low-mass subhaloes being on average more overdense than more massive subhaloes.
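The mildness of the variation is easy to check by evaluating the density-ratio expression over a range of host concentrations (the sampled concentrations are illustrative):

```python
import numpy as np

def mu_nfw(c):
    """NFW mass factor ln(1+c) - c/(1+c)."""
    return np.log(1.0 + c) - c / (1.0 + c)

def subhalo_overdensity_ratio(c_host):
    """Mean subhalo density within r_t over the host mean density,
    evaluated at the host virial radius, per the expression above."""
    return 2.0 - c_host**2 / (1.0 + c_host)**2 / mu_nfw(c_host)

# The ratio stays of order unity-to-two across realistic host concentrations
ratios = [subhalo_overdensity_ratio(c) for c in (5.0, 10.0, 20.0, 40.0)]
assert all(1.0 < r < 2.0 for r in ratios)
```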
The [*subhalo mass function*]{} is in principle also a function of halocentric distance, but it becomes the global subhalo mass function under the approximation that subhaloes near the virial radius dominate the annihilation rate. The subhalo mass function has a similar functional form as the halo mass function. In particular, it is approximately a power law (except at very large masses) with a similar slope to the halo mass function, $dN/dm_{\rm sub}\propto m_{\rm sub}^{-1.9}$ [@Springel:2008cc]; the normalization however is different. This functional form is nearly universal if $m_{\rm sub}$ is scaled to the host mass.[^5] We use the fitting formulae for the subhalo mass function given by [@Gao2011], which is based on a suite of high resolution simulations covering a large dynamical range of masses and is valid for $z\leq2$; for higher redshift we assume that the $z=2$ formulae hold (our results are actually not very sensitive to this assumption). We assume also that these formulae are preserved in the unresolved regime, down to the filtering mass scale, and apply the same cutoff at low masses due to free streaming (or kinetic decoupling) as that for the halo mass function.
To calculate the subhalo flux multiplier $B_{\rm sub}$, we assume the same density profiles as in the case of main haloes, i.e. we use equations (\[flux\_halo\_power\]) and (\[flux\_halo\_power\_2\]) in the case of the NFW profile and the microhaloes, and find the result numerically in the case of the Einasto profile. This is a good approximation since, as we mentioned before, the subhaloes that contribute most to the signal are those near the virial radius of the host. Thus, tidal disruption would not have transformed their inner structure significantly, particularly their inner regions, which strongly dominate the annihilation rate. However, in the case of the NFW profile, we do account for a slight modification to the concentration-mass relation in the form of an upscaling of a factor of 2.6 to the characteristic density $\rho_s$ (which is roughly a $30\%$ increase in concentration, see Figure 28 of Ref. [@Springel:2008cc]). This modification is because for a given mass, subhaloes (even near the virial radius) are slightly more concentrated than isolated haloes. For the case of the Einasto profile, we do not make this correction since there is no systematic study about this. We note however that this correction to the overall flux multiplier $\mathcal{B}(z)$ is relatively small.
The Particle Average Phase Space Density ($P^2SAD$) Approach
------------------------------------------------------------
Instead of modeling the clustering of DM indirectly as a collection of haloes (and subhaloes) with a certain internal DM distribution, one can model it directly by looking at the DM two point correlation function $\xi(\Delta x)$ (or its Fourier transform, the power spectrum). It has been shown that the flux multiplier, defined in equation (\[flux\_cosmic\]), is equal to the limit of $\xi$ when the separation between particles $\Delta x$ goes to zero [@Serpico2012]: $$\label{eq_p2sad}
\mathcal{B}={\rm lim}_{\Delta x\rightarrow 0} \xi(\Delta x).$$ Thus, if one can directly obtain a prediction of the DM power spectrum in the deeply non-linear regime, then it is possible to directly compute the flux multiplier without the many steps and approximations involved in the halo model.
This approach has been developed recently by analyzing the coarse-grained phase space distribution directly from DM simulations. In particular, by measuring the two dimensional particle phase space average density ($P^2SAD\equiv\Xi(\Delta x, \Delta v)$, where $\Delta x$ and $\Delta v$ are the distance and relative speed between particles) in high resolution simulations, it has been possible to physically model this new statistic of DM clustering and predict the right hand side of equation (\[eq\_p2sad\]) [@Zavala2014a; @Zavala2014b; @Zavala2015]. In particular one can write: $$\label{real_2pcf_std}
\xi(\Delta x)_{{\cal V}_6} = \frac{\langle\rho\rangle_{{\cal V}_6}}{\rho_B^2}\int d^3{\bf \Delta v}~\Xi(\Delta x, \Delta v)_{{\cal V}_6} - 1,$$ where $\langle\rho\rangle_{{\cal V}_6}$ is the average DM density within the phase space volume (${\cal V}_6$) over which $P^2SAD$ is averaged. In a cosmic volume $V_B$ we can write: $$\label{normalization}
\frac{\langle\rho\rangle_{{\cal V}_6}}{\rho_B^2}=\frac{1}{\rho_B}\frac{M_{V_B}}{\rho_B V_B}=\frac{\mathcal{F}_{\rm subs}(V_B)}{\rho_B},$$ where $\mathcal{F}_{\rm subs}(V_B)$ is the mass fraction contained in substructures within the cosmic volume $V_B$ that is calculated using the subhalo and halo mass functions, described above in the halo model section: $$\label{norm_p2sad}
\mathcal{F}_{\rm subs}(V_B)=\frac{1}{\rho_B}\int_{M_{\min}}^{\infty}M\frac{dn}{dM}\mathcal{F}_{\rm s,h}(M)dM,$$ where $\mathcal{F}_{\rm s,h}(M)$ is the mass fraction within subhaloes in a halo of mass $M$ (computed from the subhalo mass function).
$P^2SAD$ can be described with a physically motivated model that combines the stable clustering hypothesis in phase space, the spherical collapse model and tidal disruption of subhaloes [@Zavala2014b; @Zavala2015]. This model has 7 free parameters, which have been calibrated in [@Zavala2015] for DM particles inside subhaloes exclusively. Since the clustering of DM at very small scales is dominated precisely by these particles, we can use this model to predict the global flux multiplier in a cosmic volume. We note that although $P^2SAD$ has remarkably universal structural properties (this is the reason why it is a powerful statistic to predict the nonlinear power spectrum at unresolved scales), the parameters of its modeling have only been calibrated at relatively low redshifts. Its predictions above $z=1$ should therefore be treated as uncertain. Since we are particularly interested in DM annihilation at higher redshift in this paper, we assume that the parameters of the physical model of $P^2SAD$ calibrated at $z=0$ remain unchanged.
Overall, because of its direct connection with the annihilation signal, there is significantly less uncertainty associated with $P^2SAD$ compared to the more traditional halo models used to calculate the boost factor described earlier. With proper calibration at higher redshifts, $P^2SAD$ could have been used as the main method in this paper, but owing to the current limitations, we use it only as a sanity check on the results obtained from the halo model approach, and as a brief introduction to a powerful new method of obtaining boost factors that may become useful in future work.
The Effective Density for Dark Matter Annihilation due to Structure Formation
-----------------------------------------------------------------------------
Having described our modeling of the flux multiplier, we can finally write the effective DM density $\rho_{\text{eff}}$ as a boost over the background due to structure formation, which we will then use to compute the DM annihilation rate as a function of redshift: $$\rho_{\rm eff}(z)=\rho_B(z)\left(1+\mathcal{B}_s(z)\right)^{1/2},
\label{eqn:rhoeff}$$ where $\rho_B(z)= \rho_{\chi,0}(1+z)^3$ and $\mathcal{B}_s=\mathcal{B}$ (defined in equation (\[flux\_cosmic\])).
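Equation (\[eqn:rhoeff\]) is simple enough to sketch directly; the square root reflects the $\rho^2$ scaling of the annihilation rate (the numerical value of $\rho_{\chi,0}$ below is an approximate illustrative assumption):

```python
import numpy as np

RHO_CHI_0 = 33.0   # present-day mean DM density, Msun/kpc^3 (approximate)

def rho_eff(z, boost_s):
    """Effective density for s-wave annihilation: the rate scales as rho^2,
    so the structure-formation boost enters under a square root."""
    rho_b = RHO_CHI_0 * (1.0 + z)**3
    return rho_b * np.sqrt(1.0 + boost_s)

# Before structure formation the boost vanishes and rho_eff is the background
assert rho_eff(50.0, 0.0) == RHO_CHI_0 * 51.0**3
# A late-time boost of 1e5 raises rho_eff by sqrt(1 + 1e5) ~ 316x
assert abs(rho_eff(0.0, 1e5) / RHO_CHI_0 - np.sqrt(1.0 + 1e5)) < 1e-9
```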
The predictions for $\rho_{\text{eff}}$ for the two structure formation models are shown in Figure \[fig\_rho\_eff\]. The predictions of the [*halo model*]{} are in blue (“conservative”, or low-boost) and red (“stringent”, or high-boost), corresponding to the cases where (sub)haloes are modeled with an NFW profile with a concentration mass relation as given by the model in [@2012MNRAS.423.3018P] and with an Einasto profile with parameters given in [@Klypin2014] respectively. In the plot we show these cases with (solid) and without (dashed) substructure. Beyond $z=2$ (vertical dot-dashed line), the parameters of the fitting formulae for the subhalo mass function have not been calibrated and the predictions are thus more uncertain, but at higher redshifts the impact of substructure on the global annihilation rate is minimal. The large difference between the red and blue curves is actually not caused directly by the use of different density profiles (Einasto vs NFW), but by the relatively different concentrations of low mass haloes predicted by the formulae in Refs. [@2012MNRAS.423.3018P] and [@Klypin2014]. We have also explored variations over the minimum self-bound halo mass, varying $M_{\rm min}$ by 6 orders of magnitude. The impact of this on $\rho_{\text{eff}}$ is shown by the hatched area for the Einasto halo model with substructures (the other cases show a similar variation). Although $M_{\rm min}$ plays a role in setting the value of $\rho_{\rm eff}$, varying $M_{\rm min}$ between $10^{-9}$ and $10^{-3}\,M_\odot$ changes $\rho_{\rm eff}$ by only a factor of approximately 2.15, with the effect being larger at larger redshifts, since a larger value of $M_{\text{min}}$ leads to a delay in the onset of structure formation. This effect is relatively minor compared to the uncertainties in the halo model, at least at $z<10$.
We have also found that for both $s$-wave and $p$-wave annihilation, the level of variation in $M_{\rm min}$ explored here produced only percent-level variations in the ionization and thermal histories, and consequently none of our subsequent results are sensitive to our choice of $M_{\min}$. We therefore adopt the canonical value of $M_{\rm min}=10^{-6}$M$_\odot$ for the rest of this paper.
The approach based on the DM clustering in phase space, $P^2SAD$, is shown with a solid green line, and with a dotted green line beyond the range where it has been calibrated. It predicts a behavior for $\rho_{\rm eff}$ that lies in between the [*halo model*]{} predictions. It does seem to favor a larger annihilation rate (i.e. ultimately larger halo concentrations) than the model with the smallest structure formation boost (blue), given that it lies closer to the model with the largest structure formation boost (red). This approach is however only well-calibrated close to $z=0$, where the green line is lower than the red one by a significant amount. We will take the difference between the red and the blue line as our degree of uncertainty in the predictions of the structure formation prescriptions.
Equation (\[eqn:rhoeff\]) is the quantity of relevance for the case of $s$-wave annihilation, where the astrophysical part of the signal scales as $\rho_{\rm eff}^2$. In the case of $p$-wave annihilation, given the velocity dependence of the astrophysical signal, we can write instead $$(\rho v/c)_{\rm eff}(z)=\rho_B(z)(\sigma_{\rm 1D, B}(z)/c)\left(1+\mathcal{B}_p(z)\right)^{1/2},
\label{eqn:rhoeff_p}$$ where we assume that the velocity distribution of the DM particles is Maxwellian, as in equation (\[proxy\_p\]). In particular, $\sigma_{\rm 1D, B}(z)=\sigma_{\rm 1D, B}(z=0)(1+z)=10^{-11}c({\rm GeV}/m_\chi)^{1/2}(1+z)$ is the velocity dispersion of unclustered DM, and $\mathcal{B}_p$ is given by multiplying the halo and subhalo flux multipliers by $(\sigma_{1D, h}/c)^2$. We have approximated the average 1D velocity dispersion of the (sub)halo by $\sigma_{1D, h}\sim V_{\rm max,h}/\sqrt{3}$, with $V_{\rm max, h}$ being the maximum circular velocity of the (sub)halo computed from its density profile.
Notice that while we have characterized the structure formation contribution as a boost factor multiplying the smooth background contribution, in reality this is an additive contribution: $(\rho v/c)_{\text{eff}}$ within the haloes does not depend on $\sigma_{\text{1D,B}}(z)$, since once structure formation sets in, the characteristic velocity of dark matter particles is set by gravity and not by the primordial thermal motion of unclustered dark matter. Thus the exact value of $\sigma_{\text{1D,B}}(z)$ is important only before the onset of structure formation at $z \gtrsim 50$. Throughout this paper, we have used the value of $\sigma_{\text{1D,B}}(z = 0)$ computed with $m_\chi = \SI{100}{GeV}$ and $T_\mathrm{kd} = 28$ MeV. This choice results in a highly suppressed annihilation rate prior to structure formation, and results in ionization histories that are indistinguishable from an ionization history with no dark matter at redshifts $z \gtrsim 50$. We have also investigated the effects of adopting larger values of $\sigma_{\text{1D,B}}(z=0)$ corresponding to smaller $m_\chi$ or $T_\mathrm{kd}$, but have found that our present choice is optimistic for producing significant ionization just prior to reionization in a manner that is consistent with the optical depth constraints. Further discussion of this matter can be found in Section \[sec:Constraints\].
In Figure \[fig\_rho\_eff\_pwave\] we show the effective DM density $\times$ velocity defined in equation (\[eqn:rhoeff\_p\]). The uncertainties in the structure formation scenario are minimal in this case, since annihilation in massive, resolved haloes dominates the overall flux, and the uncertain contribution from haloes below the resolution limit of current simulations is small. This is why the predictions from the halo model for the two cases we have considered nearly overlap, and why the impact of substructures is negligible (the lines showing the effect overlap completely with those without substructures in Figure \[fig\_rho\_eff\_pwave\]). A different value of $M_{\rm min}$ is only important at the redshifts closest to the onset of structure formation. Still, within the 6 orders of magnitude of variation of $M_{\rm min}$, we have found no important changes in our main results.
Effective Deposition Efficiency {#sec:fz}
===============================
$f_c(z)$ for Smooth Dark Matter Distributions
---------------------------------------------
Energy injected by DM annihilation or decay at any given redshift is not immediately deposited into the IGM. At certain redshifts and input energies, the characteristic time for a photon to completely deposit its energy can be comparable to or greater than the Hubble time, making the ‘on-the-spot’ approximation for the deposition of energy problematic [@Slatyer2009]. Moreover, the efficiency at which injected energy is deposited into various channels (e.g. ionization of the IGM vs. heating of the IGM) is generically a complicated function of redshift, the energy of the injected particles, and the background level of ionization.
The details of the deposition process can be distilled into a single quantity $f_c(z)$, the ratio between energy deposited in channel $c$ and the injected energy at a given redshift $z$, i.e. $$\begin{aligned}
\left(\frac{dE}{dtdV} \right)_{c,\text{dep}} = f_c(z) \left(\frac{dE}{dtdV} \right)_{\text{inj}}
\label{eqn:fcz}\end{aligned}$$ where the channels considered are ionization of H (H ion), ionization of He (He ion), Lyman-$\alpha$ excitation of H atoms (Ly$\alpha$), heating of the IGM (heat), and energy converted into continuum photons that we observe as distortions to the CMB energy spectrum (cont).
To calculate $f_c(z)$, we first need to calculate $T_c(z_{\text{inj}},z_{\text{dep}},E) \, d \log(1+z_{\text{dep}})$, the fraction of energy injected at redshift $z_{\text{inj}}$ that is deposited at redshift $z_{\text{dep}}$ into channel $c$ due to an injection of particles with individual energy $E$, discretized into redshift bins of size $d \log(1+z_{\text{dep}})$. This is done using the code developed in [@Slatyer2012; @Slatyer2015], and only a brief summary of the code is given here. Starting with some injection of an $e^+ e^-$ or $\gamma \gamma$ pair at $z_{\text{inj}}$, the code tracks the cooling of particles and all of the secondary particles produced in these cooling processes in steps of $d \log(1 + z_{\text{dep}}) = 10^{-3}$. Photons that can efficiently photoionize HI, HeI and HeII in the IGM are removed from the main code and are considered to be “deposited”, together with all electrons (including secondary electrons from photoionization) below a fixed low-energy threshold. The proportion of energy deposited into each channel $c$ from the deposited photons and electrons is then determined by a separate low-energy code, which is described in full detail in [@Slatyer2015]. The code assumes only small modifications to the ionization history of the universe from DM, since large modifications are ruled out by observational constraints. With this assumption, any arbitrary injection history with an arbitrary energy spectrum of particles can then be treated as a linear combination of individual injections of fixed energy at particular redshifts.
In the original code, $T_c (z_{\text{inj}},z_{\text{dep}},E) \, d \log(1+z_{\text{dep}})$ was computed from $1+z = 3000$ to $1+z=10$ for both injection and deposition redshift, over a large range of particle kinetic energies ($E \sim 10$ keV up to the TeV scale). Below $1+z_{\text{dep}} = 10$, the ionization history becomes much less certain due to the process of reionization. The exact details of the ionization history can have a significant impact on our calculation of $f_c(z)$: $f_{\text{H ion}}$, for example, should decrease significantly when $x_e \equiv n_e/n_{\text{H}}$ is close to 1. However, in order to make use of constraints on $T_{\text{IGM}}$ and $\delta \tau$, the code has to be extended down to lower redshifts. Given this uncertainty, we defer a discussion of how these results are extended down to $1+z_{\text{dep}} = 4$ to the following sub-section.
At the end of this calculation, we have determined the fraction of energy injected at $z_{\text{inj}}$ that is deposited at some deposition redshift $z_{\text{dep}}$, broken down by deposition channel. Determining the total deposited energy at some redshift $z_{\text{dep}}$ therefore requires knowledge of the full injection history. To relate the deposited energy to the current injected energy and obtain $f_c(z)$ as defined in equation (\[eqn:fcz\]), we have to integrate $T_c(z_{\text{inj}},z_{\text{dep}},E) d\log(1+z_{\text{dep}})$ over all injection redshifts prior to $z_{\text{dep}}$. For any arbitrary DM energy injection process, the spectrum of particles injected has a typical redshift dependence $dN/(dE\, dV\, dt) \propto (1+z)^\alpha$, where $\alpha = 6$ for $s$-wave annihilation, $\alpha = 8$ for $p$-wave annihilation and $\alpha = 3$ for decay. In each case, we can factor the spectrum into a redshift-dependent factor multiplied by an energy spectrum $d\bar{N}/dE$ that is independent of redshift. Doing this, one can show [@Slatyer2012] that $$\begin{gathered}
f_c(z) = \frac{H(z)}{(1+z)^{\alpha-3} \sum\limits_{\text{species}}\int E \frac{d\bar{N}}{dE} dE} \, \\
\times \sum_{\text{species}} \int \frac{(1+z')^{\alpha-4}}{H(z')} dz' \int T_c(z',z,E) E \frac{d\bar{N}}{dE} dE,\end{gathered}$$ where the sum over species indicates that we are combining effects from all species produced in the annihilation process. For this paper, we only consider the case where DM annihilates or decays into $e^+e^-$ or $\gamma \gamma$, with each particle having fixed, identical total energy $E=m_\chi$ for annihilations or $E = m_\chi/2$ for decays. In this case, $f_c(z)$ further simplifies to $$\begin{aligned}
f_{c}(z,E) = \frac{H(z)}{(1+z)^{\alpha-3}} \int \frac{(1+z')^{\alpha-4}}{H(z')} T_{c}(z',z,E)\, dz'\end{aligned}$$ for each of the injection species being considered. The quantity $f_c(z,E)$ for the injection species $e^+e^-$ and $\gamma\gamma$ will be denoted by a subscript $e$ and $\gamma$, respectively. While the spectrum of particles associated with any DM injection process may be significantly more complicated, ultimately any such process deposits energy into the IGM via $e^+e^-$ pairs or photon pairs. Understanding the energy deposition efficiency through $e^+e^-$ or $\gamma\gamma$ is thus sufficient to understand the effect of DM annihilation/decay on the IGM, since the energy deposition efficiency of any annihilation/decay process is simply an appropriate sum over $f_{c,e/\gamma}(z,E)$ over injection species and all relevant energies.
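Discretized on a redshift grid, the last expression is a weighted sum of the transfer table over injection redshifts. A schematic sketch with a toy $T_c$ table (the cosmological parameters and the toy table are our assumptions, standing in for the tabulated $T_c$ of [@Slatyer2012; @Slatyer2015]):

```python
import numpy as np

OMEGA_M, OMEGA_L, H0 = 0.31, 0.69, 67.7   # flat LCDM; H0 in km/s/Mpc (assumed)

def hubble(z):
    return H0 * np.sqrt(OMEGA_M * (1.0 + z)**3 + OMEGA_L)

def f_c(z_dep, z_inj, T_c, alpha=6):
    """Deposition efficiency for one species and energy on a discrete grid.

    z_inj : ascending grid of injection redshifts
    T_c   : values of T_c(z', z_dep, E) on that grid (a toy table here)
    alpha : 6 for s-wave, 8 for p-wave, 3 for decay
    """
    mask = z_inj >= z_dep                  # only injections at or before z_dep
    zp, T = z_inj[mask], T_c[mask]
    y = (1.0 + zp)**(alpha - 4) / hubble(zp) * T
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(zp))   # trapezoid rule
    return hubble(z_dep) / (1.0 + z_dep)**(alpha - 3) * integral

z = np.linspace(10.0, 300.0, 500)
T_toy = np.exp(-(np.log(1.0 + z) - np.log(31.0))**2)  # toy transfer near z' ~ 30
f30 = f_c(30.0, z, T_toy)
assert f30 > 0.0
# The efficiency is linear in the transfer table, as the integral requires
assert abs(f_c(30.0, z, 2.0 * T_toy) - 2.0 * f30) < 1e-12 * f30
```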
$f_c(z)$ at Low Redshifts
-------------------------
We defer a full treatment of the calculation of $f_c(z)$ at low redshifts to an upcoming paper, and instead give a brief summary of the method here. We have computed $f(z)$ down to a redshift of $1+z = 4$ in three different scenarios: (i) instantaneous and complete reionization at $z = 6$, which is close to the expected redshift of reionization from astrophysical measurements of $T_{\text{IGM}}$; (ii) instantaneous and complete reionization at $z = 10$, which is close to the expected redshift of reionization from measurements of the CMB power spectrum; and (iii) no reionization. These different reionization conditions were used not just for the deposition of energy by low-energy photons and electrons, but also for the high-energy code which tracks high-energy electrons and photons as they cool over time, since the photoionization rate of high-energy photons depends strongly on the ionization history. Previous studies typically assume that $f_c(z)$ can be written as a redshift- and model-dependent efficiency function $f(z)$, which describes the efficiency with which high-energy particles are degraded to low energies and is independent of the deposition channel. This function multiplies a channel-dependent factor $\chi_c(x_e(z))$ that depends only on the free electron fraction and describes the absorption of low-energy particles into each of the deposition channels.[^6] However, our calculation of $\chi_c(z)$ depends on the low-energy photon spectrum at each redshift, and so depends on both $x_e$ and the injection history in a non-trivial way. The $f_c(z)$ results found in [@Slatyer2015] took these effects into account assuming the standard `RECFAST` ionization history, and can be used for small perturbations about that scenario. However, when considering reionization and markedly different reionization scenarios, $f_c(z)$ must be re-computed in each case by re-calculating the cooling in both the high-energy and low-energy regimes.
In order to perform these calculations, we also assume simultaneous reionization of neutral helium (HeI) at the same redshift as HI reionization. After HI and HeI reionization, low-energy photons can deposit their energy through (i) the ionization of singly-ionized helium (HeII); (ii) excitations to HeII; or (iii) distortions of the CMB energy spectrum.
After reionization, the high energy code tags photons as deposited only when they can efficiently photoionize HeII. Thus any “deposited” photon with energy $E > \SI{54.4}{eV}$ corresponds to a HeII ionization and consequently gives rise to a secondary low-energy electron spectrum. Photons below this threshold cannot ionize anything else, and are assigned to the excitation or distortion channels. Low-energy electrons, including the secondary spectrum produced by photoionizing photons, deposit energy according to the same model used in [@Slatyer2015], which is in turn based on [@Valdes:2007cu; @Valdes:2009cq; @MNR:MNR20624]. In accordance with these results, once full reionization occurs, the electrons deposit their energy into the IGM solely through heating, since there are no longer any neutral hydrogen atoms to ionize or excite.
We note here that prior to the instantaneous reionization, the code assumes a standard ionization history computed by the recombination code `RECFAST`. Furthermore, we have assumed the instantaneous reionization of HeII at $1+z = 4$, which is not a fully realistic model. Once the contribution to $x_e$ from DM annihilations becomes significant enough, our calculation for $f_c(z)$ based on the `RECFAST` result will not reflect the true $f_c(z)$ for the new ionization history that includes the DM contribution, and likewise for a HeII reionization scenario that differs significantly from instantaneous reionization at $1+z = 4$.
In principle, this means that $f_c(z)$ should be calculated iteratively: after calculating $x_e(z)$ for a certain DM model using the $f_c(z)$ obtained from the `RECFAST` ionization history, $f_c(z)$ should be recalculated with the new $x_e(z)$, with this process repeated until convergence of $x_e(z)$ is achieved. However, we stress that such a computationally intensive process is unnecessary, since calculating $f_c(z)$ assuming a `RECFAST` ionization history results in an $x_e$ ($T_{\text{IGM}}$) prior to reionization that is always larger (smaller) than what we would get with an iterative calculation. This ensures that we have not unintentionally ruled out any DM model with a significant contribution to reionization consistent with the $T_{\text{IGM}}$ constraints, even without performing an iterative calculation of $f_c(z)$. This behavior can be seen in Figure \[fig:freeEleFracDecayAllowedRegion\], which shows a comparison of the ionization and thermal history computed with $f_c(z)$ after one iteration with the default $f_c(z)$ used in the rest of the paper. This point will be discussed further in Section \[sec:Constraints\].
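The iteration described above is a standard fixed-point problem. A minimal generic sketch, in which `compute_fc` and `compute_xe` are placeholders for the full pipeline rather than the actual codes used in this work:

```python
def iterate_histories(compute_fc, compute_xe, xe_init, tol=1e-4, max_iter=20):
    """Alternate f_c(z) <-> x_e(z) until the ionization history converges.

    compute_fc: maps an x_e history (sequence of floats) to an f_c history.
    compute_xe: maps an f_c history back to a new x_e history.
    Convergence is assessed on the largest change in x_e between iterations.
    """
    xe = list(xe_init)
    for _ in range(max_iter):
        fc = compute_fc(xe)
        xe_new = compute_xe(fc)
        if max(abs(a - b) for a, b in zip(xe_new, xe)) < tol:
            return xe_new, fc
        xe = xe_new
    raise RuntimeError("x_e(z) did not converge after %d iterations" % max_iter)
```

Since the DM contribution only perturbs the `RECFAST` baseline, such an iteration would be expected to converge quickly; the point made above is that, for setting conservative constraints, even the first (non-iterated) step suffices.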
$f_c(z)$ Including Structure Formation
--------------------------------------
The formation of structures at late times gives rise to local densities that greatly exceed the cosmological DM density $\rho_{\chi,0}$, accompanied by an increase in the velocity dispersion of DM particles within haloes. This has no effect on the rate of energy injection from DM decay, since the average rate of decays per unit volume across the universe remains the same. In the case of DM $s$-wave annihilation, however, the increased density increases the rate of interaction, while for $p$-wave annihilation both the increased density and increased velocity dispersion dramatically enhance the annihilation rate. These effects cause a significant deviation from the expected energy injection due to a smooth/homogeneous DM distribution.
The increase in the density can be parameterized by an effective density $\rho_{\text{eff}}(z)$ for $s$-wave annihilation (equation (\[eqn:rhoeff\]) and Figure \[fig\_rho\_eff\]), and an effective density times velocity dispersion $(\rho v/c)_{\rm eff}(z)$ for $p$-wave annihilation (equation (\[eqn:rhoeff\_p\]) and Figure \[fig\_rho\_eff\_pwave\]).
With these effective quantities, the energy injection rate can be written as a boost factor multiplied by the unclustered distribution injection rate: $$\begin{aligned}
\left(\frac{dE}{dV dt}\right)_{\text{inj}} &= \left(\frac{dE_s}{dV dt}\right)_{\text{inj}}[1 + \mathcal{B}_{s,p}(z)],\end{aligned}$$ where the subscript $s$ in $E_s$ indicates the energy injection due to a smooth distribution of DM given by equations (\[eqn:injRateSmooth\]) and (\[eqn:smoothpwave\]) for the $s$- and $p$-wave cases, respectively. The effective deposition efficiency can now be re-defined as $$\begin{aligned}
f_c(z) &= \frac{H(z)}{(1+z)^{\alpha-3}} \int \frac{(1+z')^{\alpha-4}}{H(z')} T_c(z',z,E) [1 + \mathcal{B}_{s,p}(z')] \, dz',
\label{eqn:fz}\end{aligned}$$ so that $$\begin{aligned}
\left(\frac{dE}{dV dt} \right)_{c,\text{dep}} = f_c(z) \left(\frac{dE_s}{dV dt} \right)_{\text{inj}}.\end{aligned}$$ $f_c(z)$ is now the ratio of the energy deposited in channel $c$ including structure formation effects to the injected energy due only to the smooth DM distribution, which has a simple analytic form. For $s$-wave annihilation, the boost factor is $$\begin{aligned}
1 + \mathcal{B}_s(z) = \frac{\rho_{\text{eff}}^2(z)}{(1+z)^6 \rho_{\chi,0}^2},\end{aligned}$$ where $\rho_{\text{eff}}$ is shown in Figure \[fig\_rho\_eff\]. For $p$-wave annihilation, the effect of structure formation is parametrized not only by an effective density $\rho_{\text{eff}}$, but also by the characteristic one-dimensional velocity of the DM particles. The boost factor is: $$\begin{aligned}
1 + \mathcal{B}_p(z) = \frac{(\rho v/c)_{\text{eff}}^2 (z)} {(1+z)^8 \rho_{\chi,0}^2(\sigma_{1D,B}(z=0)/c)^2},
\label{eqn:pwaveInj}\end{aligned}$$ where $(\rho v/c)_{\text{eff}}$ is shown in Figure \[fig\_rho\_eff\_pwave\].
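A minimal sketch of these two boost factors, assuming the effective quantities are available as callables (tabulated or fitted elsewhere); the numerical values in the check below are illustrative, not the paper's inputs:

```python
def boost_s(z, rho_eff, rho_chi_0):
    """1 + B_s(z) for s-wave annihilation; rho_eff(z) and rho_chi_0 share units."""
    return rho_eff(z)**2 / ((1.0 + z)**6 * rho_chi_0**2)

def boost_p(z, rho_v_eff, rho_chi_0, sigma_1d_0):
    """1 + B_p(z) for p-wave annihilation; sigma_1d_0 = sigma_{1D,B}(z=0)/c."""
    return rho_v_eff(z)**2 / ((1.0 + z)**8 * rho_chi_0**2 * sigma_1d_0**2)
```

As a sanity check, in the smooth, unclustered limit, $\rho_{\text{eff}}(z) = \rho_{\chi,0}(1+z)^3$ and $(\rho v/c)_{\text{eff}}(z) = \rho_{\chi,0}(1+z)^4\,\sigma_{1D,B}(z=0)/c$, and both expressions reduce to 1, so the deposition rate reverts to the smooth-distribution injection rate.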
Contour plots of $f_c(z)$ for all of the DM energy injection processes producing $e^+e^-$ or $\gamma \gamma$, including the effects of structure formation where relevant, are shown in Appendix \[app:fz\].
Free Electron Fraction and IGM Temperature History {#sec:FreeEleFrac}
==================================================
The Three-Level Atom
--------------------
In order to compute the contribution of DM annihilation to the optical depth and IGM temperature, the hydrogen atoms in the IGM are modeled using the effective 3-level atom model for hydrogen, first described in [@Peebles1968; @Zeldovich1969]. Equations describing the rate of change of $x_e$ and $T_{\text{IGM}}$ as a function of redshift can be derived from this model, and are given in many studies that calculate the ionization history of the universe. These equations form the basis of the `RECFAST` [@Seager:2000] code: they are relatively easy to integrate, and show good agreement with the full `RECFAST` code in computing $x_e(z)$. We have checked that our integrated ionization history of the universe without DM energy injection is in good agreement with the result produced by `RECFAST`. These equations can also be easily modified to include energy injection from DM with the full $f_c(z)$ dependence of equation (\[eqn:fz\]). We have verified that after including DM injection, our results are in good agreement with the ionization history obtained by `RECFAST` with the inclusion of DM.
A full description of the three-level atom is given in [@AliHaimoud:2010dx]. All hydrogen atoms are described by a ground state ($n=1$) and a first excited state $(n=2)$, with all excited states being in thermal equilibrium with the continuum. Direct recombination from the continuum to the ground state is assumed to have no net effect on $x_e$, as each photon produced quickly ionizes another hydrogen atom. Without DM, the net rate of ionization in this model is given by $$\begin{aligned}
\frac{dx_e}{dz} \frac{dz}{dt} = I_3(z) = C \left[\beta_e(1-x_e) e^{-h\nu_\alpha/k_BT} - \alpha_e x_e^2 n_{\text{H}} \right],
\label{eqn:I3}\end{aligned}$$ where $\nu_\alpha$ is the Lyman-$\alpha$ frequency. The net ionization rate is described by just a single recombination coefficient $\alpha_e$ and a single ionization coefficient $\beta_e$. As pointed out in [@Chluba:2015lpa], $\beta_e$ should be evaluated at the CMB temperature and not at the electron temperature as in the `RECFAST` code; this is consistent with the implementation of the `RECFAST` calculation in the `HyREC` code. $C$ is a factor dependent on redshift and $x_e$, given by $$\begin{aligned}
C = \frac{\Lambda n_{\text{H}}(1-x_e) + 8\pi \nu_\alpha^3 H}{\Lambda n_{\text{H}}(1-x_e) + 8\pi \nu_\alpha^3 H + \beta_e n_{\text{H}} (1-x_e)},\end{aligned}$$ where $\Lambda = \SI{8.23}{s^{-1}}$ is the decay rate of the metastable $2s$-state in hydrogen to the ground state. The $C$ factor is the ratio of the rate of transitions from $n=2$ to $n=1$ to the total rate of transitions out of $n=2$, and characterizes the probability that an atom in the $n=2$ state reaches the ground state before being photoionized.
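A minimal sketch of this net ionization rate, with the coefficients $\alpha_e$, $\beta_e$ and the Lyman-$\alpha$ escape term passed in rather than computed (they are temperature-dependent in the real calculation; the numbers in the check below are illustrative):

```python
import numpy as np

LAMBDA_2S1S = 8.23  # two-photon 2s -> 1s decay rate [s^-1]

def peebles_C(x_e, n_H, beta_e, lya_escape):
    """Peebles C factor; lya_escape stands in for the 8*pi*nu_alpha^3*H term above."""
    to_ground = LAMBDA_2S1S * n_H * (1.0 - x_e) + lya_escape
    return to_ground / (to_ground + beta_e * n_H * (1.0 - x_e))

def I3(x_e, n_H, T, alpha_e, beta_e, E_alpha_over_kB, C):
    """Net ionization rate dx_e/dt of the three-level atom, without DM."""
    return C * (beta_e * (1.0 - x_e) * np.exp(-E_alpha_over_kB / T)
                - alpha_e * x_e**2 * n_H)
```

By construction `peebles_C` lies between 0 and 1, and `I3` vanishes when photoionization out of $n=2$ balances case-B recombination.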
Our analysis should in principle include ionized helium, but assuming that helium remains neutral prior to reionization is justified for several reasons. First, the helium ionization fraction has been shown to have little influence on the total free electron fraction, assuming a standard recombination history obtained from the more sophisticated `RECFAST` calculation. Even after including unclustered DM annihilation with a large annihilation parameter of $p_{\text{ann}} = \SI{1.8E-27}{cm^3 s^{-1} GeV^{-1}}$, setting the helium ionization fraction to be a constant anywhere in the range $10^{-10}$ to $10^{-3}$ resulted in a difference of at most 0.2% in the calculated free electron fraction at all redshifts [@Galli2013]. Moreover, $f_{\text{He ion}}(z)$ is small compared to the other channels; this, together with the significantly smaller number density compared to hydrogen, means that helium ionization is a relatively unimportant process even with large energy injections from DM. This allows us to safely assume that helium remains neutral prior to reionization in the three-level atom equations, although our calculation of $f_c(z)$, which features in the DM injection rate, does not make this assumption.
Below $1+z=10$, in the three scenarios we consider, the expression for $I_3(z)$ with only neutral helium continues to be valid until instantaneous reionization occurs. After reionization, $x_e$ is instantaneously set to 1.08, and $I_3(z)$, together with any other terms that contribute to changing $x_e$, are set to zero, since we assume the universe remains ionized from then on. Only $T_{\text{IGM}}$ will continue to evolve after reionization.
Heating of the IGM
------------------
The evolution of $x_e$ depends on $T_{\text{IGM}}$, and so $T_{\text{IGM}}$ also needs to be determined as a function of redshift in order to obtain the ionization history. The rate of change of $T_{\text{IGM}}$ without energy injection from DM can be written as the sum of two separate processes affecting the temperature: $$\begin{aligned}
\frac{dT_{\text{IGM}}}{dz} \frac{dz}{dt} = Q_{\text{adia}}(z) + Q_{\text{CMB}}(z).
\label{eqn:TIGMEvolutionNoDM}\end{aligned}$$ $Q_{\text{adia}}(z)$ represents the cooling of the IGM due to the expansion of the universe, and is simply given by $$\begin{aligned}
Q_{\text{adia}}(z) = \frac{2T_{\text{IGM}}}{1+z} \frac{dz}{dt},\end{aligned}$$ so that without any contribution from other sources, $T_{\text{IGM}} \propto (1+z)^2$, as is expected from adiabatic cooling of the baryons in the IGM. The second term, $Q_{\text{CMB}}(z)$, is the rate of change of temperature as a result of energy transfer to or from the CMB via Compton scattering processes. The rate of energy transfer from these processes is [@Weymann1965]: $$\begin{aligned}
\frac{dE}{dV dt} = 4\sigma_T a T_{\text{CMB}}^4 x_e n_{\text{H}} (1+z)^3 \left(\frac{T_{\text{CMB}} - T_{\text{IGM}}}{m_e} \right),\end{aligned}$$ where $\sigma_T$ is the Thomson scattering cross-section and $a$ is the radiation constant. This energy transfer leads to the following increase in temperature of the IGM: $$\begin{aligned}
\frac{dE}{dV} = \frac{3}{2} n_{\text{tot}} (1+z)^3 dT_{\text{IGM}}.\end{aligned}$$ Here, $n_{\text{tot}}$ is the total number density $n_{\text{tot}} = n_e + n_{\text{HII}} + n_{\text{HI}} + n_{\text{He}} = (x_e + 1 + 0.079)n_{\text{H}}$. This gives $$\begin{aligned}
Q_{\text{CMB}}(z) = \left(\frac{8\sigma_T a T^4_{\text{CMB}}}{3m_e} \right) \frac{n_{\text{H}}}{n_{\text{tot}}}(T_{\text{CMB}} - T_{\text{IGM}})x_e.\end{aligned}$$
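Combining the last three equations, a minimal sketch of $Q_{\text{CMB}}$ in CGS units (constants rounded; the $m_e$ above is in energy units, so an explicit factor of $c$ appears here):

```python
SIGMA_T = 6.652e-25   # Thomson cross-section [cm^2]
A_RAD   = 7.566e-15   # radiation constant [erg cm^-3 K^-4]
M_E     = 9.109e-28   # electron mass [g]
C_LIGHT = 2.998e10    # speed of light [cm s^-1]

def Q_cmb(T_cmb, T_igm, x_e, nH_over_ntot):
    """Compton heating/cooling rate dT_IGM/dt [K s^-1] against the CMB."""
    coupling = 8.0 * SIGMA_T * A_RAD * T_cmb**4 / (3.0 * M_E * C_LIGHT)  # [s^-1]
    return coupling * nH_over_ntot * (T_cmb - T_igm) * x_e
```

The sign follows the equation above: the term heats the IGM while it is colder than the CMB and cools it otherwise, and it vanishes when the two temperatures are equal.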
Energy Deposition from Dark Matter
----------------------------------
We will now make use of $f_c(z)$ to translate the energy injection into terms that alter the rate of change of $x_e$ and $T_{\text{IGM}}$. The total amount of energy deposited into HI ionization leads straightforwardly to an increase in $x_e$:
$$\begin{aligned}
I_{\chi,\text{ion}}(z) = \left(\frac{dE}{dV dt} \right)_{\text{inj}} \frac{f_{\text{H ion}} (z)}{V_{\text{H}} n_{\text{H}} (1+z)^3}\, ,\end{aligned}$$
where $V_{\text{H}} = \SI{13.6}{eV}$ is the ionization potential of hydrogen. The factor of $1/[n_{\text{H}} (1+z)^3]$ converts the deposited power per unit volume into a rate per hydrogen atom at that redshift. This term adds straightforwardly to the ionization rate of the IGM given by equation (\[eqn:I3\]).
Energy going into Lyman-$\alpha$ excitations also changes the rate of ionization, since hydrogen becomes easier to ionize. The total contribution to $x_e$ is given by $$\begin{aligned}
I_{\chi,\text{Ly}\alpha}(z) = \left(\frac{dE}{dV dt}\right)_{\text{inj}} \frac{(1-C) f_{\text{Ly}\alpha}(z) }{h\nu_\alpha n_{\text{H}} (1+z)^3} \,,\end{aligned}$$ where the factor $1-C$ is the probability that a hydrogen atom excited to $n=2$ is subsequently ionized rather than decaying to the ground state, and hence contributes to $x_e$.
Finally, DM annihilation can deposit energy directly into heating at a rate $$\begin{aligned}
Q_\chi(z) = f_{\text{Heat}}(z) \left(\frac{dE}{dV dt}\right)_{\text{inj}} \frac{2}{3 n_{\text{tot}} (1+z)^3}.\end{aligned}$$
To summarize, the coupled differential equations that need to be integrated simultaneously to obtain $x_e$ and $T_{\text{IGM}}$ are $$\begin{aligned}
\frac{dx_e}{dz} \frac{dz}{dt} &= I_3 (z) + I_{\chi,\text{ion}} (z) + I_{\chi,\text{Ly}\alpha}(z) \,, \\
\frac{dT_{\text{IGM}}}{dz} \frac{dz}{dt} &= Q_{\text{adia}}(z) + Q_{\text{CMB}}(z) + Q_\chi(z) \,.\end{aligned}$$
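The coupled system can be integrated with any standard ODE scheme. A self-contained fixed-step RK4 sketch follows; the physics enters only through user-supplied callables for the total rates, so this illustrates the structure of the integration, not the solver actually used in this work:

```python
import numpy as np

def rhs(z, y, H, ion_rate, heat_rate):
    """d[x_e, T_IGM]/dz, given total time-derivative rates:
    ion_rate  = I_3 + I_chi,ion + I_chi,Lya   as a callable (z, x_e, T) -> dx_e/dt,
    heat_rate = Q_adia + Q_CMB + Q_chi        as a callable (z, x_e, T) -> dT/dt."""
    x_e, T = y
    dzdt = -(1.0 + z) * H(z)  # converts d/dt rates to d/dz
    return np.array([ion_rate(z, x_e, T) / dzdt, heat_rate(z, x_e, T) / dzdt])

def integrate(z_start, z_end, y0, H, ion_rate, heat_rate, n=20000):
    """Fixed-step RK4 from z_start (high redshift) down to z_end."""
    zs = np.linspace(z_start, z_end, n)
    h = zs[1] - zs[0]  # negative step: integrating downward in z
    y = np.array(y0, dtype=float)
    for z in zs[:-1]:
        k1 = rhs(z, y, H, ion_rate, heat_rate)
        k2 = rhs(z + 0.5 * h, y + 0.5 * h * k1, H, ion_rate, heat_rate)
        k3 = rhs(z + 0.5 * h, y + 0.5 * h * k2, H, ion_rate, heat_rate)
        k4 = rhs(z + h, y + h * k3, H, ion_rate, heat_rate)
        y = y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return y
```

With only the adiabatic term switched on, the integrator recovers $T_{\text{IGM}} \propto (1+z)^2$, which is a useful correctness check before adding the Compton and DM terms.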
Aside from DM and the instantaneous reionization scenarios considered, no further sources of heating or reionization (e.g. star-forming galaxies and other stellar phenomena) are included in these equations.[^7] This simplification is consistent with our computation of $f_c(z)$ using the standard ionization history, which overestimates the true contribution to $x_e(z)$ from DM, while underestimating the corresponding contribution to $T_{\text{IGM}}(z)$. A full treatment including astrophysical sources of heating and ionization would require a better understanding of $f_c(z)$ in situations where reionization is gradual, and we defer such a study to future work.
The initial conditions used for the integration are $x_e(z=1700)=1$ and $T_{\text{IGM}} = T_{\text{CMB}}(z=1700)$, corresponding to the state of baryonic matter prior to recombination. The contribution to the optical depth from DM annihilation/decay, $\delta \tau$, at a given $\langle \sigma v \rangle$ or $\tau_\chi$ and mass $m_\chi$ is then determined by integrating equation (\[eqn:OpticalDepth\]) up to $z=1700$ and subtracting the residual integrated optical depth that is already present when there is no DM. Note that when we consider reionization at $z = 10$, we do not include the contribution to $\delta \tau$ from $x_e$ between $z = 6$ and 10.[^8] We will discuss the calculation of $\delta \tau$ and the use of the optical depth constraints given by equation (\[eqn:ExcessOpticalDepth\]) further in Section \[sec:Constraints\].
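The optical depth integral itself is straightforward once $x_e(z)$ is in hand. A sketch of $\delta\tau$ as the difference between histories with and without DM, in CGS units; the densities and $H(z)$ used in the check are illustrative inputs:

```python
import numpy as np

SIGMA_T = 6.652e-25  # Thomson cross-section [cm^2]
C_LIGHT = 2.998e10   # speed of light [cm s^-1]

def trapz(y, x):
    """Simple trapezoidal rule (avoids version-specific numpy names)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def optical_depth(x_e, z_max, n_H0, H, n=5000):
    """tau = sigma_T c * integral of n_e |dt/dz| dz from z = 0 to z_max,
    with n_e(z) = x_e(z) * n_H0 * (1+z)^3 and |dt/dz| = 1/[(1+z) H(z)]."""
    z = np.linspace(0.0, z_max, n)
    integrand = x_e(z) * n_H0 * (1.0 + z)**2 / H(z)
    return SIGMA_T * C_LIGHT * trapz(integrand, z)

def delta_tau(x_e_dm, x_e_std, z_max, n_H0, H):
    """Excess optical depth from DM relative to the standard history."""
    return (optical_depth(x_e_dm, z_max, n_H0, H)
            - optical_depth(x_e_std, z_max, n_H0, H))
```

In matter domination, $H(z) \propto (1+z)^{3/2}$, so the integrand scales as $x_e(z)(1+z)^{1/2}$, which is why an elevated $x_e$ baseline at high redshift dominates $\delta\tau$, as discussed in Section \[sec:Constraints\].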
Results {#sec:Constraints}
=======
We now calculate the integrated free electron fraction $x_e$ and IGM temperature $T_{\text{IGM}}$ as a function of redshift in each of the three DM energy injection scenarios considered ($s$-wave annihilation, $p$-wave annihilation and decay), for a wide range of $\langle \sigma v \rangle$ and decay lifetimes $\tau_\chi$, and $m_\chi$ between $\sim 10$ keV and $\sim 1$ TeV. As we discussed in Section \[sec:fz\], we have neglected any additional $x_e$ contribution from DM processes in our computation of $f_c(z)$, even though DM energy injection can produce significant deviations from the standard ionization history prior to reionization. Moreover, even after reionization occurs, the prescription for HeII reionization could affect the energy deposition. Thus the $f_c(z)$ curves we compute may not be completely accurate for an ionization history that is significantly different from the `RECFAST` result, or where HeII reionization cannot be approximated as occurring instantaneously at $1+z=4$.
Fortunately, our $f_c(z)$ calculations underestimate the contribution of DM to reionization, as more realistic ionization histories would generally have *higher* ionization fractions, which in turn would suppress the additional ionization from DM. With a higher ionization fraction for HI (HeII), the energy deposited into ionization of HI (HeII) decreases, since there are fewer HI (HeII) atoms to ionize or excite prior to reionization (after reionization), while energy going into heating increases in both cases. This intuitive explanation of the behavior of $f_c(z)$ is consistent with the results used in our low-energy code to assign deposited energy from low-energy electrons into the various channels, where the MC results show that all of the energy from low-energy electrons goes into collisional heating processes as $x_e$ tends to 1. Thus the $f_c(z)$ curves calculated under our assumptions consistently overestimate the rate of energy deposition into ionization, while underestimating the rate of energy deposited as heat.
This means that if the contribution to reionization is small with the $f_c(z)$ values used here for a given cross-section/lifetime and mass, then a more accurately computed $f_c(z)$ assuming an elevated $x_e$ will have an even smaller contribution to $x_e$ and a larger contribution to $T_{\text{IGM}}$, making the result more constrained by the $T_{\text{IGM}}$ limits. Similarly, including other conventional sources of ionization would only decrease the contribution that DM can make to reionization: the presence of other sources would produce a larger $x_e$ than we have assumed, which again suppresses the energy deposition fraction into ionization while enhancing the fraction into heating.
To check the robustness of our constraints, we have also repeated our calculations considering:
1. Different reionization conditions, namely (i) instantaneous and complete reionization at $z=6$; (ii) instantaneous and complete reionization at $z=10$; and (iii) no reionization, to see how sensitive our results are to the uncertainty in the specifics of reionization and in particular in the redshift at which reionization occurs. For each reionization condition, $\delta \tau$ is integrated appropriately over $x_e(z)$, after which the optical depth from $x_e(z)$ without DM is subtracted. This includes the optical depth contribution from redshifts after reionization, where $x_e = 1.08$. Each reionization scenario results in a different $T_{\text{IGM}}(z)$ evolution after reionization occurs, and also has a different redshift at which we assess the contribution of DM to reionization (more details below);
2. A range of structure formation scenarios that bracket the uncertainties on the properties of low-mass (sub)haloes, below the resolution of current cosmological simulations; and
3. Two different IGM temperature constraints as shown in equation (\[eqn:TIGMConstraints\]), namely (i) $T_{\text{IGM}}(z=6.08) = \SI{18621}{K}$; (ii) $T_{\text{IGM}}(z = 4.8) = \SI{10000}{K}$, where we have taken the upper bound at 95% confidence. We do not make use of the lower bound, since $f_{\text{Heat}} (z)$ is likely to be an underestimate for reasons outlined above. The second temperature measurement is more constraining and will be used as the main temperature constraint, but constraints obtained from both temperature limits will be shown for the main $p$-wave result.
The three main quantities of interest are: (i) $x_e$ at a redshift just prior to the assumed instantaneous reionization at $z=6$ or $z=10$, or at $z=6$ for the case of no reionization, since hydrogen reionization is known to be complete by then; (ii) $T_{\text{IGM}}$ at $z=6.08$ and $z=4.8$ for comparison with the results shown in equation (\[eqn:TIGMConstraints\]); and (iii) the total integrated optical depth $\delta \tau$. If DM with a given $\langle \sigma v \rangle$ or $\tau_\chi$ and $m_\chi$ can produce $x_e > 0.1$ just before reionization (or at $z=6$ for the case of no reionization) we consider this a possible scenario in which DM can contribute significantly to reionization. The 10% level used in this paper is arbitrary, and we will also present results for contributions ranging from 0.025% to 90% in the form of color density plots for all injection species and all DM processes.
A few remarks should be made about the calculation of optical depth and the use of the optical depth constraints in this paper. To compute $\delta \tau$, we integrate the optical depth due to DM annihilation/decay from $z_{\text{reion}}$ to recombination.[^9] We then compare $\delta \tau$ to the bound on excess optical depth from redshifts $z > 6$, assuming full ionization for $z \leq 6$; that is, for the purposes of computing the maximum allowed exotic contribution to optical depth, we essentially treat $z_{\text{reion}} = 6$ for all scenarios, even when $\delta \tau$ includes only DM contributions from $z > 10$. This allows us to understand how our limits could weaken if the reionization history were different: including gradual reionization from astrophysical sources between $z = 6$ and $z = 10$, for example, would likely suppress the contribution to reionization and hence optical depth from DM annihilation during this period, resulting in a smaller contribution from DM to reionization than would have been determined with instantaneous reionization at $z_{\text{reion}} = 6$. By taking $z_{\text{reion}} = 10$ and not considering the contribution to optical depth for $z < 10$, we obtain the weakest constraints from the $\delta \tau$ bound given in equation (\[eqn:ExcessOpticalDepth\]). In this way, these two reionization scenarios bracket the possible contribution of DM to reionization. Thus, although including the optical depth due to complete, instantaneous reionization at $z = 10$ would exceed the Planck optical depth measurement, we still consider this scenario in order to study the DM contribution to reionization in a model-independent way. Assuming two different instantaneous reionization scenarios also allows us to probe the possible effects of earlier reionization on the DM contribution to the temperature evolution.
We will choose as our benchmark the scenarios where the largest $x_e$ just prior to reionization can be obtained from the [*smallest*]{} $\langle \sigma v \rangle$ or [*longest*]{} decay lifetimes, since various experimental constraints set upper bounds on the cross-sections and lower bounds on the decay lifetimes. In all cases, reionization at $z = 6$ is more realistic than no reionization and is also more easily achieved than at $z = 10$, making it the main reionization scenario to consider. The structure formation scenario with the largest boost factor allows for reionization with a smaller cross-section, and thus we choose this as our benchmark (for $s$-wave annihilation this is the “stringent” case shown with a solid red line in Figure \[fig\_rho\_eff\], while for $p$-wave annihilation all scenarios give the same boost).
$s$-wave Annihilation
---------------------
Figure \[fig:freeEleFracsWave\] shows the integrated free-electron fraction $x_e$ for the particular case of DM with $m_\chi = \SI{100}{MeV}$ undergoing $s$-wave annihilation into a pair of $\SI{100}{MeV}$ photons with a cross-section ranging from $\SI{3E-27}{}$ to $\SI{3E-25}{cm^3 s^{-1}}$, as well as the case with no DM for comparison. These curves show the result with no reionization: different reionization conditions are identical up to the redshift of reionization $z_{\text{reion}}$, whereupon $x_e$ instantaneously jumps to its fully ionized value of 1.08 until the present day. These curves are representative of the $x_e$ histories across all DM masses and cross-sections for $s$-wave annihilation. At $z \sim 20$, structure formation becomes important, which greatly increases $f_c(z)$ in all channels, leading to an increase in $x_e$. $s$-wave annihilation of the smooth distribution of DM results in a larger baseline $x_e$ after recombination, which is higher for larger $\langle \sigma v \rangle$ at the same $m_\chi$.
Along with $x_e$, the IGM temperature history $T_{\text{IGM}}(z)$ is also simultaneously integrated. The IGM temperature curves for DM undergoing $s$-wave annihilation into photons for cross-sections ranging from $\SI{3E-27}{}$ to $\SI{3E-25}{cm^3 s^{-1}}$ are shown in the same figure and are also representative of IGM temperature histories across a broad range of $\langle \sigma v \rangle$ and $m_\chi$. The CMB temperature is included for reference. The IGM is initially coupled to the CMB, but once recombination occurs, the temperature starts to fall more rapidly than the CMB temperature. DM $s$-wave annihilations decrease the fall-off in temperature at relatively large redshifts. At $z\sim 20$, the impact of structure formation once again increases the IGM temperature significantly relative to the case with no DM.
The contribution of DM to reionization through $s$-wave annihilation is significantly constrained by the CMB power spectrum measurements derived by Planck 2015 [@PlanckCollaboration2015], as well as by the measured total integrated optical depth. The cross-section for annihilation must be large enough for significant ionization to occur at redshifts near reionization; however, increasing the cross-section also increases the residual free electron fraction during the cosmic dark ages. This residual $x_e$ is constrained severely by the CMB anisotropy spectrum, which is sensitive to any additional ionization near redshifts $z \sim 600$. A large $x_e$ during the cosmic dark ages also contributes significantly to the optical depth. Since $n_e(z) \propto x_e(z)(1+z)^3$ and $dt/dz \propto (1+z)^{-5/2}$, the integrand in equation (\[eqn:OpticalDepth\]) is proportional to $x_e(z)(1+z)^{1/2}$. The significantly elevated $x_e$ baseline means that the dominant contribution to $\delta \tau$ comes from early times when $z$ is large: since structure formation is relevant at later times, it does not add significantly to $\delta \tau$.
We performed the integration of $x_e(z)$ and $T_{\text{IGM}}(z)$ over a broad range of masses and cross-sections, and computed the optical depth from $x_e(z)$ using equation (\[eqn:OpticalDepth\]). Figure \[fig:xeConstraintsPlot\_sWave\] shows the free electron fraction just prior to reionization $x_e(z=6)$ for the benchmark scenario of both $\chi \chi \to e^+e^-$ and $\chi \chi \to \gamma \gamma$, as well as the excluded cross-sections due to constraints from the CMB power spectrum as measured by Planck and from the integrated optical depth. Constraints from $T_{\text{IGM}}$ are presented in Appendix \[app:additionalConstraints\]. These bounds are less constraining, but unlike the CMB and optical depth constraints, they are sensitive to the low redshift behavior of $s$-wave annihilations: increasing the boost from structure formation beyond the value used here may relax the CMB and optical depth bounds, but this would strengthen the $T_{\text{IGM}}$ constraints.
Although we have shown the results for these two processes ($\chi \chi \to e^+e^-$ and $\chi \chi \to \gamma \gamma$) as a function of $\langle \sigma v \rangle$ and $m_\chi$, we stress that these constraints go beyond these two annihilation channels. We discuss this point and present bounds on $\langle \sigma v \rangle/m_\chi$ as a function of the injection energy of the final products (which may in general be very different from $m_\chi$) in Appendix \[app:additionalConstraints\] of this paper.
In both annihilation channels, there is no parameter space where a significant contribution to reionization occurs while being consistent with either the CMB power spectrum or optical depth bounds, with the CMB power spectrum bounds being approximately one order of magnitude stronger than the optical depth bounds. We stress that the optical depth constraints are similar regardless of reionization conditions, since $\delta \tau$ is the additional contribution from DM only, and is therefore not affected by the period where $x_e = 1$ after reionization. As a result, the true optical depth limits for reionization at $z = 10$ are likely stronger than what is shown here, since we do not include the additional contribution to optical depth from the fully ionized universe between $z = 6$ and $z = 10$. Furthermore, $\delta \tau$ is dominated by contributions from larger redshifts ($z \gtrsim 100$) and is relatively insensitive to the exact details of reionization and structure formation at $z \lesssim 20$. At the maximum $\langle \sigma v \rangle$ allowed by the CMB power spectrum bound, the DM contribution to $x_e$ just prior to reionization is below 2% for $\chi \chi \to e^+e^-$ and below 0.1% for $\chi \chi \to \gamma \gamma$ across all $m_\chi$ considered. These results are shown in Figure \[fig:xeMaxConstraints\] in the conclusion.
Figure \[fig:xeConstraintsStructSysPlot\_sWave\] shows the reionization constraints on $s$-wave annihilation for the structure formation prescriptions with the smallest and largest boost factor (used as the benchmark). As expected, significant ionization prior to reionization can be achieved at lower cross-sections in the benchmark model, making it the most likely structure formation prescription for evading the constraints. Differences in structure formation can increase the value of $\langle \sigma v \rangle$ at which ionization becomes significant by less than an order of magnitude, and all of the regions with a significant contribution to reionization in either structure formation scenario are firmly ruled out by the Planck constraints.
Similarly, differences in reionization redshifts do little to change the result. Since $x_e(z)$ is identical in all three reionization scenarios until the point of reionization, there is no difference between $x_e(z=6)$ with reionization at $z=6$ and no reionization. With reionization at $z=10$, $x_e(z=10)$ is always less than $x_e(z=6)$ as $x_e$ increases rapidly between $z = 6$ and $10$, and so the region in parameter space where significant contribution to reionization occurs decreases when choosing an earlier redshift of reionization. Figure \[fig:xeConstraintsReionSysPlot\_sWave\] summarizes these results.
To conclude, any significant contribution to reionization through $s$-wave DM annihilation is severely constrained by the cross-section bounds from the Planck CMB power spectrum measurement as well as the expected integrated optical depth to the surface of last scattering. For values of $\langle \sigma v \rangle$ that are consistent with the Planck CMB power spectrum constraints, we can only expect a contribution of no more than 2% of the total ionization just prior to reionization (see Figure \[fig:xeMaxConstraints\]). Our results are consistent with the conclusion reached in [@Poulin2015]. We have also shown that these results are robust to our assumptions on the structure formation scenario and on the redshift of reionization.
$p$-wave Annihilation {#sec_pwave}
---------------------
In $p$-wave annihilation, the $v^2$ dependence of the cross-section results in a $v^2/v_{\text{ref}}^2$ suppression of the energy injection rate, given in equation (\[eqn:pwaveInj\]). Figure \[fig:freeEleFracpWave\] shows the integrated $x_e$ for the case of $\chi \chi \to \gamma \gamma$ $p$-wave annihilation with $(\sigma v)_{\text{ref}}$ between and . Prior to the relevance of structure formation, the velocity suppression is a large effect, resulting in no additional contribution to $x_e$ unless the cross-section is exceptionally large. Once structure formation occurs, however, the velocity dispersion of DM particles within haloes increases significantly, increasing in turn the energy injection rate from $p$-wave annihilation. This results in a sudden and large increase in both $x_e$ and $T_{\text{IGM}}$ at $z \sim 20$.
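The velocity suppression described above can be made concrete with a small numerical sketch. This is not the analysis code of this paper: equation (\[eqn:pwaveInj\]) is represented only by its scaling $\langle \sigma v \rangle = (\sigma v)_{\text{ref}} \, \langle v^2 \rangle / v_{\text{ref}}^2$, and the smooth-background and halo velocity dispersions below are illustrative placeholders.

```python
# Illustrative sketch (not the paper's code) of the p-wave velocity
# suppression: the effective cross-section scales as
#   <sigma v> = (sigma v)_ref * <v^2> / v_ref^2,
# so the smooth-background injection is tiny until haloes, with their
# much larger internal dispersions, dominate the signal.

def pwave_sigv(sigv_ref, v2_mean, v_ref=1.0):
    """Effective <sigma v> for a p-wave process (velocities in units of c;
    taking v_ref = c is a convention assumed here, not fixed by the text)."""
    return sigv_ref * v2_mean / v_ref**2

def v2_smooth(z, v2_today=1e-16):
    """Unclustered DM: v^2 redshifts as (1+z)^2 after kinetic decoupling.
    v2_today is an illustrative placeholder, not the paper's sigma_1D,B."""
    return v2_today * (1.0 + z)**2

v2_halo = (10.0 / 3.0e5)**2   # ~10 km/s internal halo dispersion, illustrative

sigv_ref = 1e-22              # cm^3/s, illustrative
early = pwave_sigv(sigv_ref, v2_smooth(600.0))   # cosmic dark ages
late = pwave_sigv(sigv_ref, v2_halo)             # halo-dominated era
print(early, late)            # the halo-era value exceeds the smooth one
```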
As we discussed earlier in section \[sec:StructureFormation\], the annihilation rate prior to structure formation is dependent on our choice of $\sigma_{\text{1D,B}}$, which we have taken to be the velocity dispersion for unclustered DM with $m_\chi = \SI{100}{GeV}$ and $T_{\text{kd}} = \SI{28}{MeV}$. Choosing a significantly smaller value of $m_\chi$ or $T_{\text{kd}}$ increases $\sigma_{\text{1D,B}}$, which in turn increases the annihilation rate prior to structure formation. With a sufficiently small value of $m_\chi$ and/or $T_{\text{kd}}$, $x_e$ will stay at a value significantly above the expected $x_e$ with no dark matter, similar to the ionization histories typical of $s$-wave dark matter shown in Figure \[fig:freeEleFracsWave\]. While this leads to an increase in $x_e$ just prior to reionization, the optical depth bounds that we considered for $s$-wave annihilations become very constraining, particularly with the sharp increase in $x_e$ after structure formation that is not present in the $s$-wave case. Decreasing $m_\chi$ and/or $T_{\text{kd}}$ therefore makes it harder for a significant contribution to be made to reionization in a way that is consistent with the optical depth limits, making our unclustered velocity dispersion choice an optimistic one.
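The dependence of the unclustered dispersion on $m_\chi$ and $T_{\text{kd}}$ can be sketched with a naive estimate: thermal, $\sim \sqrt{T_{\text{kd}}/m_\chi}$, at decoupling, then redshifting as $(1+z)$ with $1 + z_{\text{kd}} \sim T_{\text{kd}}/T_{\gamma,0}$. This is not the paper's $\sigma_{\text{1D,B}}$ normalization, but it reproduces the stated trend that smaller $m_\chi$ and/or $T_{\text{kd}}$ gives a larger dispersion.

```python
import math

# Naive sketch (assumption-laden, not the paper's sigma_1D,B) of the
# smooth-background 1D velocity dispersion after kinetic decoupling:
# sigma ~ sqrt(T_kd/m_chi) at z_kd, redshifting as (1+z) afterwards,
# with 1 + z_kd ~ T_kd / T_gamma0.

T_GAMMA0_GEV = 2.35e-13  # CMB temperature today (~2.725 K) in GeV

def sigma_1d(z, m_chi_GeV, T_kd_GeV):
    """Unclustered 1D velocity dispersion in units of c."""
    one_plus_zkd = T_kd_GeV / T_GAMMA0_GEV
    return math.sqrt(T_kd_GeV / m_chi_GeV) * (1.0 + z) / one_plus_zkd

# Benchmark of the text: m_chi = 100 GeV, T_kd = 28 MeV, evaluated at z = 20.
s_bench = sigma_1d(20.0, 100.0, 0.028)
s_light = sigma_1d(20.0, 0.1, 0.028)    # lighter DM: larger dispersion
s_latekd = sigma_1d(20.0, 100.0, 1e-3)  # lower T_kd: larger dispersion
print(s_bench, s_light, s_latekd)
```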
Unlike $s$-wave annihilation, constraints from the CMB power spectrum on the contribution of DM to reionization for $p$-wave annihilation are velocity-dependent, and depend strongly on the “coldness” of DM particles, i.e. on their unclustered velocity dispersion. Substantial $x_e$ at low redshifts can be achieved without any appreciable increase in the free electron fraction at redshift $z \sim 600$ by choosing a small enough $m_\chi$ so that the velocity dispersion prior to structure formation is small. Optical depth constraints are also weaker, since there is no increase in the baseline ionization during the cosmic dark ages, unlike in $s$-wave annihilation. Instead, the IGM temperature after reionization has been shown to be a significantly more important constraint on the $p$-wave annihilation cross-section than bounds obtained from the CMB power spectrum [@Diamanti2014]. Once the effect of structure formation becomes relevant, the late-time energy injection results in significant heating of the IGM. Figure \[fig:freeEleFracpWave\] shows this behavior for the case of $\chi \chi \to \gamma \gamma$ $p$-wave annihilation with $(\sigma v)_{\text{ref}}$ between and . At large enough cross-sections, $T_{\text{IGM}}$ after reionization exceeds the limits set by equation (\[eqn:TIGMConstraints\]).
Figure \[fig:xeConstraintsPlot\_pWave\] shows $x_e(z=6)$ just prior to reionization for our benchmark scenario in the $(\sigma v)_{\text{ref}}$ - $m_\chi$ parameter space, as well as the excluded parameter space due to constraints from $T_{\text{IGM}}(z=6.08)$ and $T_{\text{IGM}}(z=4.8)$. The same results on the parameter space of $(\sigma v)_{\text{ref}}/m_\chi$ and injection energy of the annihilation products are shown in Appendix \[app:additionalConstraints\]. Masses above for $\chi \chi \to e^+ e^-$ and almost all $m_\chi$ for $\chi \chi \to \gamma \gamma$ are excluded by the benchmark IGM temperature constraint, $\log_{10} T_{\text{IGM}}(z=4.8) < 4.0$. The most likely region in parameter space that can still result in reionization is in the $\chi \chi \to e^+e^-$ channel with $m_\chi < \SI{100}{MeV}$ and $(\sigma v)_{\text{ref}}$ between $10^{-25}$ and $10^{-23} \SI{}{cm^3 s^{-1}}$, and in the $\chi \chi \to \gamma \gamma$ channel with $m_\chi \sim \SI{100}{MeV}$ and $(\sigma v)_{\text{ref}} \sim 10^{-21} \SI{}{cm^3 s^{-1}}$. These cross-sections are much larger than a thermal relic cross-section, but can be accommodated in a large variety of DM models, including any non-thermally produced DM or forbidden DM [@DAgnolo2015].
The sudden relaxation of the $T_{\text{IGM}}$ constraints below $m_\chi \sim \SI{100}{MeV}$ and the corresponding decrease in $x_e(z=6)$ for $\chi \chi \to e^+e^-$ deserve a special mention here. DM particles with $m_\chi < \SI{100}{MeV}$ annihilating into electrons lose their energy principally through inverse Compton scattering off CMB photons, which by $z \sim 10$ mainly produces photons close to or below the ionizing threshold for hydrogen. After reionization, photoionization by these secondary photons is suppressed further, as the only remaining neutral species is HeII, which has a larger ionization energy. Thus, only a small fraction of the energy goes into collisional heating (due to secondary electrons) of the IGM, with most of the energy from the DM annihilation being deposited as continuum photons. This results in a decrease in IGM temperature after the reionization redshift. At higher DM masses, in contrast, the lower-redshift IGM temperature bound is significantly more constraining, as the IGM temperature invariably continues to increase even after reionization: the $e^+e^-$ pair produced by the annihilation can now upscatter photons to energies above the ionization threshold of HeII. These photoionization events produce low-energy secondary electrons even after reionization, which in turn can collisionally heat the IGM.
Next, we present our results assuming different reionization redshifts in Figure \[fig:xeConstraintsReionSysPlot\_pWave\]. These results show that the allowed region for $\chi \chi \to e^+e^-$ is shifted upward in cross-section, since a larger cross-section is required to reionize the universe at an earlier redshift, while $T_{\text{IGM}}$ actually becomes less constraining as the IGM temperature now has more time to decrease after reionization. This suggests that the region that permits significant reionization is relatively independent of the reionization condition. The same is not true for the case of $\chi \chi \to \gamma \gamma$: the IGM temperature constraints remain fairly similar, but since we are now extracting $x_e$ at a higher redshift, the overall contribution to $x_e$ by DM decreases. With reionization at $z = 10$, for the $\gamma \gamma$ channel, there is no allowable $m_\chi$ where the contribution to $x_e$ prior to reionization exceeds 10%.
So far, there is still a range of DM masses with appropriate cross-sections that can reionize the universe at at least the 10% level through $p$-wave annihilations into $e^+e^-$ ($ m_\chi \lesssim \SI{100}{\mega \eV}$, $(\sigma v)_{\text{ref}} \sim 10^{-24}$ - $10^{-23} \SI{}{ cm^3 s^{-1}}$), and into $\gamma \gamma$ ($m_\chi \sim \SI{100}{\mega \eV}$, $(\sigma v)_{\text{ref}} \sim 10^{-21}$ - $10^{-20} \SI{}{ cm^3 s^{-1}}$) with reionization at $z = 6$. We turn our attention now to two further bounds on $(\sigma v)_{\text{ref}}$ that are relevant to these regions in parameter space.
First, we consider the cross-section constraints from the CMB power spectrum measurements. Although the results shown in Figure \[fig:excludedXSec\] are bounds on $\langle \sigma v \rangle$ for $s$-wave annihilation, they also serve as an estimate for the bound on $\langle \sigma v \rangle = (\sigma v)_\text{ref} v^2/v_{\text{ref}}^2$ in the case of $p$-wave annihilations, since the results are only sensitive to the rate of energy deposition into ionization of the IGM during the cosmic dark ages. The main difference with $p$-wave annihilations is that the bound now depends on $v^2$ after recombination and during the cosmic dark ages. $v^2$ is strongly dependent on the primordial “coldness” of DM, which in turn depends on the nature of the DM particles, i.e. mass and kinetic decoupling temperature. While DM is coupled to photons, $v^2 \sim 3 T_\gamma/m_\chi$, whereas after decoupling, $v^2 \propto (1+z)^2$. Taking the limit $L(m_\chi)$ on $\langle \sigma v \rangle$ set by the CMB power spectrum at a particular DM mass $m_\chi$, as shown in Figure \[fig:excludedXSec\], we obtain $$\begin{aligned}
(\sigma v)_\text{ref} \lesssim 3.7 L(m_\chi) \left(\frac{m_\chi}{\SI{1}{MeV}} \right)^2 \left( \frac{x_{\text{kd}}}{10^{-4}} \right) \left(\frac{\SI{1}{eV}}{T_\gamma} \right)^2,\end{aligned}$$ where $x_{\text{kd}} \equiv T_{\text{kd}}/m_\chi$. $T_\gamma$ is some representative CMB temperature after recombination such that the CMB power spectrum is most sensitive to energy injections at the redshift $z$ corresponding to $T_\gamma$ ($z \sim 600$ in the $s$-wave case).
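As a numerical illustration of this bound (a sketch, not the analysis code), the scaling can be evaluated directly. $L(m_\chi)$ must be read off Figure \[fig:excludedXSec\]; the value used below is a placeholder, and $T_\gamma = \SI{0.14}{eV}$ is the representative temperature quoted in the text.

```python
# Numerical form of the (sigma v)_ref bound:
#   (sigma v)_ref <~ 3.7 * L * (m_chi / 1 MeV)^2 * (x_kd / 1e-4)
#                        * (1 eV / T_gamma)^2
# L(m_chi) is the s-wave CMB limit at mass m_chi (cm^3/s), a placeholder here.

def sigv_ref_limit(L, m_chi_MeV, x_kd, T_gamma_eV=0.14):
    """Upper limit on (sigma v)_ref implied by an s-wave CMB limit L."""
    return 3.7 * L * m_chi_MeV**2 * (x_kd / 1e-4) * (1.0 / T_gamma_eV)**2

L_example = 1e-27  # cm^3/s, placeholder s-wave limit
print(sigv_ref_limit(L_example, m_chi_MeV=1.0, x_kd=1e-4))
```

Note the quadratic weakening of the bound with $m_\chi$ and the linear dependence on $x_{\text{kd}}$, which is why the CMB constraint loses power for cold, heavy DM.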
In the case of $\chi \chi \to e^+e^-$, in the region of parameter space where a significant contribution to reionization can be made, the CMB bounds can rule out these regions if $x_{\text{kd}} \lesssim 10^{-2} - 10^{-1}$ for $m_\chi \sim \SI{1}{MeV}$ and $x_{\text{kd}} \lesssim 10^{-6}$ for $m_\chi \sim \SI{100}{MeV}$ (we have set $T_\gamma = \SI{0.14}{eV}$ as a representative value), while for 100 MeV DM annihilating $\chi \chi \to \gamma \gamma$, we have $x_{\text{kd}} \sim 10^{-3} - 10^{-2}$. Thus for the CMB bounds to exclude these regions, we would need to have $T_\mathrm{kd} \lesssim 100$ keV, and in some cases it would need to be much lower (at the sub-keV scale).
Values of $T_{\text{kd}}$ higher than these bounds are consistent (and expected) in a large variety of DM models, e.g. $T_{\text{kd}} \sim \SI{}{MeV} (m_\chi/\SI{}{GeV})^{2/3}$ for neutralino DM [@Chen2001], and $T_{\text{kd}} \sim \SI{2.02}{MeV} (m_\chi/\SI{}{GeV})^{3/4}$ for DM-lepton interactions of the form $(1/\Lambda^2)(\bar{X} X)( \bar{l} l)$ for some interaction mass scale $\Lambda$, giving rise to $p$-wave suppressed cross-sections [@Shoemaker2013; @Diamanti2014]. In general, $T_\mathrm{kd}$ below the scale of the electron mass is unusual, as the only relativistic species available to maintain kinetic equilibrium are photons and neutrinos.[^10] The CMB bounds therefore place few constraints on our parameter space for $p$-wave annihilation, in stark contrast to the $s$-wave case.
Next, we look at $p$-wave constraints from gamma ray flux measurements of the galactic diffuse background. The derived constraints from the galactic diffuse background are shown in Figure \[fig:xeConstraintsGalacticPlot\_pWave\]. For $\chi \chi \to e^+ e^-$, final state radiation produced as part of the annihilation process in the Milky Way halo produces gamma ray photons that can be measured by these experiments, placing an upper bound on the rate of $p$-wave annihilation into $e^+e^-$ for DM masses of up to in the Milky Way. Constraints derived in [@Essig2013] from a combination of data from INTEGRAL, COMPTEL and Fermi set a limit of $\langle \sigma v \rangle \lesssim 10^{-27}\SI{}{cm^3 s^{-1}}$ for $m_\chi \lesssim \SI{100}{\mega \eV}$. This was derived assuming an NFW profile, which is a relatively conservative choice for these experiments: the constraints fluctuate by a factor of a few if different DM halo profiles are chosen. All of the measured photon flux is conservatively attributed to DM annihilation in the galaxy halo only, without accounting for extragalactic DM annihilation or other more conventional sources like inverse Compton scattering off starlight or synchrotron radiation.
The translation of these velocity-averaged cross-section bounds to constraints on $(\sigma v)_{\text{ref}}$ depends on the velocity dispersion $v_{\text{DM}}$ around the solar circle. Given a measured photon flux, a larger $v_{\text{DM}}$ would place a stronger constraint on $(\sigma v)_{\text{ref}}$, since the photon flux is proportional to the annihilation rate, which is in turn proportional to $(\sigma v)_{\text{ref}} v_{\text{DM}}^2$ in a $p$-wave process. Because of this, the constrained $(\sigma v)_{\text{ref}}$ is proportional to $1/v_{\text{DM}}^2$. However, in order for some region of parameter space with more than a 10% contribution to reionization from DM to be allowed, the dispersion velocity in the solar circle needs to satisfy $v_{\text{DM}} < \SI{20}{km s^{-1}}$, which is significantly smaller than the local velocity of the solar circle and is hence unrealistic [@Cerdeno:2010jj].
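The conversion used in this comparison can be written out explicitly. This is a schematic sketch under stated assumptions: near the solar circle, $\langle \sigma v \rangle \approx (\sigma v)_{\text{ref}} \, v_{\text{DM}}^2 / v_{\text{ref}}^2$, so a flux-derived $\langle \sigma v \rangle$ limit maps onto a $(\sigma v)_{\text{ref}}$ limit scaling as $1/v_{\text{DM}}^2$. The reference velocity $v_{\text{ref}}$ is whatever defines $(\sigma v)_{\text{ref}}$ in equation (\[eqn:pwaveInj\]); its value is not fixed in this excerpt, so it is left as an explicit argument.

```python
# Mapping a velocity-averaged <sigma v> flux limit onto (sigma v)_ref,
# assuming <sigma v> = (sigma v)_ref * v_DM^2 / v_ref^2 near the solar circle.
# v_ref below is a free argument, not a value taken from the text.

def sigv_ref_limit_from_flux(sigv_limit, v_dm, v_ref):
    """(sigma v)_ref limit implied by a <sigma v> limit.
    v_dm and v_ref must be in the same units (e.g. km/s)."""
    return sigv_limit * (v_ref / v_dm) ** 2

limit = 1e-27  # cm^3/s, the quoted INTEGRAL/COMPTEL/Fermi-type limit
# Halving v_DM weakens the implied (sigma v)_ref limit by a factor of four:
print(sigv_ref_limit_from_flux(limit, 20.0, 100.0),
      sigv_ref_limit_from_flux(limit, 40.0, 100.0))
```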
Similar results hold for $\chi \chi \to \gamma \gamma$, where searches for sharp spectral features such as lines or boxes in the galactic diffuse gamma ray background place strong bounds on the annihilation cross section of this process. By requiring the number of counts from $\chi \chi \to \gamma \gamma$ in each energy bin in the spectrum to not exceed the measured number of counts by $2 \sigma$, the gamma ray spectrum from COMPTEL and EGRET can be used to set an upper limit of $\langle \sigma v \rangle \lesssim 10^{-27} \SI{}{cm^3 s^{-1}}$ for $m_\chi \sim \SI{100}{MeV}$ [@Boddy2015], with a similar analysis using Fermi data [@Albert2014] giving a limit of $\langle \sigma v \rangle \lesssim 10^{-29} \SI{}{cm^3 s^{-1}}$ for $m_\chi \gtrsim \SI{100}{MeV}$. This means that the dispersion velocity required for a 10% contribution to reionization is $v_{\text{DM}} \sim \SI{0.1}{km s^{-1}}$, which is once again unrealistic.
Although we have freely used the constraints for $\langle \sigma v \rangle$ to directly set constraints on $(\sigma v)_{\text{ref}}$, some caution must be taken when doing so. The contribution of DM annihilations to the observed photon flux measured by a detector is due to annihilations all along the line-of-sight. In order to set constraints on DM annihilation from gamma ray flux measurements, the appropriate function of the DM density and velocity must therefore be averaged along the line-of-sight. $\langle \sigma v \rangle$ bounds are frequently set by averaging over the DM density, but without taking into account the velocity dispersion of the Milky Way halo. Without performing this average, $\langle \sigma v \rangle$ bounds are implicitly assumed to be for $s$-wave processes only.
However, as we demonstrate in Appendix \[app:JFactor\], averaging over the velocity dispersion as well as the density appears to change the $\langle \sigma v \rangle$ bounds for $p$-wave annihilation by less than a factor of 2 under many different assumptions. These bounds would need to relax by at least 2 orders of magnitude for $\chi \chi \to e^+e^-$ and 4 orders of magnitude for $\chi \chi \to \gamma \gamma$ to allow any significant contribution to reionization at all.
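The size of this velocity-weighting correction can be illustrated with a toy line-of-sight integral. The profiles below are illustrative stand-ins, not the modeling of Appendix \[app:JFactor\]: an NFW density and an ad hoc dispersion profile that rises with radius, with the $p$-wave weight $\rho^2 v^2$ normalized to the local dispersion.

```python
import numpy as np

# Toy comparison of the s-wave line-of-sight weight rho^2 against the
# p-wave weight rho^2 * v^2 / v^2(r_sun); all profiles are illustrative.

def rho_nfw(r, rho_s=1.0, r_s=20.0):       # NFW density, arbitrary units
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def v2_halo(r, v0=200.0, r_s=20.0):        # toy dispersion-squared profile
    x = r / r_s
    return v0 ** 2 * x / (1.0 + x)         # rises with r, saturates at v0^2

def trapezoid(y, x):                       # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

r_sun, b = 8.0, np.radians(30.0)           # observer radius (kpc), pointing angle
s = np.linspace(1e-3, 100.0, 20000)        # distance along the line of sight
r = np.sqrt(r_sun**2 + s**2 - 2.0 * r_sun * s * np.cos(b))

J_swave = trapezoid(rho_nfw(r) ** 2, s)
J_pwave = trapezoid(rho_nfw(r) ** 2 * v2_halo(r) / v2_halo(r_sun), s)
print(J_pwave / J_swave)  # an order-unity correction for these toy choices
```

For these particular toy profiles the velocity weighting changes the integral by an order-unity factor, consistent in spirit with the less-than-factor-of-2 shift found in Appendix \[app:JFactor\].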
Overall, the possible contribution of $p$-wave DM annihilation to reionization appears to be constrained to the $<10\%$ level across all of the masses and injection species considered here. At $m_\chi \gtrsim \SI{10}{GeV}$, this contribution is limited by $T_\text{IGM}$ measurements, while for $m_\chi \lesssim \SI{10}{GeV}$, any allowed parameter space with more than 10% contribution to reionization after accounting for $T_{\text{IGM}}$ appears to be ruled out by observations of the galactic diffuse emission gamma ray spectrum.
Decay
-----
Figure \[fig:freeEleFracDecay\] shows $x_e(z)$ and $T_{\text{IGM}}(z)$ for $m_\chi = \SI{100}{MeV}$ DM undergoing $\chi \to \gamma \gamma$ decays (each photon now has an energy of ) with various representative decay lifetimes; the qualitative behavior shown is typical of other masses and decay modes. Compared to $s$-wave annihilation, the energy injection rate in decays does not depend on structure formation, and the $(1+z)^3$ redshift dependence for decays (compared to $(1+z)^6$ for $s$-wave annihilation) means that the energy injection is less weighted toward earlier redshifts. This leads to a steady rise in $x_e$ from immediately before recombination to the present day.
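The redshift weighting contrast noted above can be checked in one line: the smooth-background injection rate per volume scales as $\rho_\chi(z)/\tau_\chi \propto (1+z)^3$ for decays and as $\rho_\chi^2(z) \propto (1+z)^6$ for $s$-wave annihilation (units dropped; only the scalings are compared).

```python
# Relative weighting of early vs late smooth-background energy injection:
#   decay:          dE/dV dt ∝ rho_chi(z)    ∝ (1+z)^3
#   s-wave annih.:  dE/dV dt ∝ rho_chi(z)^2  ∝ (1+z)^6

def decay_scaling(z):
    return (1.0 + z) ** 3

def swave_scaling(z):
    return (1.0 + z) ** 6

# Injection at z = 1000 relative to z = 10:
print(decay_scaling(1000.0) / decay_scaling(10.0),   # (1001/11)^3
      swave_scaling(1000.0) / swave_scaling(10.0))   # (1001/11)^6
```

The $s$-wave rate is weighted toward early times by an extra factor of $(1+z)^3$, which is why decays, unlike annihilations, produce the steady late-time rise in $x_e$ seen in Figure \[fig:freeEleFracDecay\].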
Optical depth constraints play an important role in placing bounds on the decay lifetime: with no structure formation boost, the only way for significant ionization at low redshifts to occur is for $x_e$ to be relatively high throughout the cosmic dark ages, contributing significantly to the optical depth. Figure \[fig:xeConstraintsPlot\_decay\] shows the region of the ($\tau_\chi$, $m_\chi$) parameter space where DM can contribute significantly to reionization, as well as the constraints on the decay lifetime coming from IGM temperature and the optical depth. Significant reionization occurs for relatively longer decay lifetimes for masses where $f_{\text{H ion.}}(z)$ is large at low redshifts. However, both optical depth and IGM temperature constraints rule out large parts of the allowed parameter space for $\chi \to e^+e^-$ and all of the parameter space for $\chi \to \gamma \gamma$ at the 10% level of contribution to reionization, with the $T_{\text{IGM}}$ bounds being more effective than optical depth for the $m_\chi \sim \SI{100}{MeV} - \SI{10}{GeV}$ range for $\chi \to e^+e^-$.
Figure \[fig:xeConstraintsPlot\_decay\] also shows the same results after considering different reionization conditions. Once again, the optical depth constraints change very little with respect to reionization redshift, while the $T_{\text{IGM}}$ constraints are very similar in both reionization scenarios in the region where they are stronger than the optical depth, and we can hence simply compare the $x_e$ contributions with the $\delta \tau$ and $T_{\text{IGM}}$ constraints at $z_{\text{reion}} = 6$. As before, earlier reionization makes it more difficult for DM to contribute to $x_e$ just prior to reionization. For $\chi \to e^+e^-$, almost all decay lifetimes and masses which previously resulted in a 10% contribution to reionization now result in a contribution below 10% when the redshift of reionization is changed to $z = 10$, while the results for $z = 6$ and $z = 10$ for $\chi \to \gamma \gamma$ are similar.
Nevertheless, a contribution to $x_e$ just prior to reionization at more than the 10% level still remains possible for $\chi \to e^+e^-$ at a DM mass of $m_\chi \sim \SI{100}{MeV} - \SI{10}{GeV}$, $\tau_\chi \sim 10^{24} - 10^{25} \SI{}{s}$, as well as $m_\chi \sim \SI{1}{MeV}$, $\tau_\chi \sim 10^{24} \SI{}{s}$ in the benchmark reionization scenario. As with $p$-wave annihilation, the galactic diffuse background provides an additional constraint on the decay lifetime. These constraints are derived in a similar way to the $p$-wave case, i.e. by conservatively assuming that all of the diffuse gamma ray background comes from FSR from the DM decay. However, unlike with $p$-wave annihilation, the diffuse background constraints are of the same order as the optical depth bounds that we have set here. Figure \[fig:xeConstraintsGalacticPlot\_electron\_decay\] shows these constraints superimposed on Figure \[fig:xeConstraintsPlot\_decay\], showing that none of the experimental constraints are able to rule out the possibility of a more than 10% contribution to $x_e$ prior to reionization in the $m_\chi \sim \SI{10}{} - \SI{100}{MeV}$, $\tau_\chi \sim 10^{25}\SI{}{s}$ and $m_\chi \sim \SI{1}{MeV}$, $\tau_\chi \sim 10^{24} \SI{}{s}$ regions of parameter space. This conclusion still holds true for a different redshift of reionization for $m_\chi \sim \SI{100}{MeV}$.
![[]{data-label="fig:xeConstraintsGalacticPlot_electron_decay"}](xeConstraintsGalacticPlot_electron_decay.pdf)
The blue curve in Figure \[fig:freeEleFracDecayAllowedRegion\] shows $x_e(z)$ and $T_{\text{IGM}}(z)$ assuming reionization at $z = 6$, with $m_\chi = \SI{100}{MeV}$ and $\tau_\chi = \SI{1.5e25}{s}$, parameters which lie in one of the allowed regions found above. Reionization at $z = 6$ causes the behavior of $T_{\text{IGM}}$ to change abruptly due to the instantaneous change of $x_e$. Just before reionization, $x_e(z = 6) \sim 0.2$, with the integrated optical depth being $\delta \tau = 0.040$, which lies within the allowed limit. $T_{\text{IGM}}(z = 4.8)$ lies below the lower limit of the $T_{\text{IGM}}$ constraint, but as we have previously explained, $T_{\text{IGM}}$ is always underestimated with the default ionization history.
We have also performed the integration of $x_e(z)$ and $T_{\text{IGM}}(z)$ with $f_c(z)$ derived from the ionization history that we obtained above. Since $f_c(z)$ as calculated from the default ionization history overestimates $x_e(z)$, using this new $f_c(z)$ ensures that the allowed regions are not ruled out by a more accurate estimate of $x_e(z)$. The result is also shown in orange in Figure \[fig:freeEleFracDecayAllowedRegion\]. As we expect, this more accurate $f_c(z)$ increases $T_{\text{IGM}}(z)$ and decreases $x_e(z)$ slightly. The contribution to reionization remains the same, while still staying consistent with the $T_{\text{IGM}}(z = 4.8)$ and the optical depth bounds.
Figure \[fig:freeEleFracDecayAllowedRegion\] also shows two measurements of $x_e$ from just before reionization obtained by [@Schenker2014], corresponding to $$\begin{aligned}
x_e(z=7) &= 0.66^{+0.12}_{-0.09}, \nonumber \\
x_e(z=8) &< 0.35.
\label{eqn:schenkerxe}\end{aligned}$$ The ionization history for $m_\chi = \SI{100}{MeV}$ and $\tau_\chi = \SI{1.5e25}{s}$ is consistent with the bound from $z=8$, and can be made consistent with the $z=7$ bound with the addition of other sources of ionization between these two redshifts.
In summary, optical depth constraints as well as bounds from the galactic diffuse background rule out reionization from $\chi \to \gamma \gamma$ and almost rule out reionization from $\chi \to e^+e^-$ at the 10% level, except for $m_\chi \sim \SI{10}{} - \SI{100}{MeV}$, $\tau_\chi \sim 10^{25} \SI{}{s}$ and $m_\chi \sim \SI{1}{MeV}$, $\tau_\chi \sim 10^{24} \SI{}{s}$. The former region remains viable even under the different reionization scenarios considered here.
Conclusion {#sec:Conclusion}
==========
We have studied the potential impact of $s$-wave annihilation, $p$-wave annihilation and decay of DM to $e^+e^-$ and $\gamma \gamma$ on the process of reionization. Using the latest calculations for the fraction of the energy deposition rate in channel $c$ to the energy injection rate at redshift $z$, $f_c(z)$, we have determined the free electron fraction $x_e$ and IGM temperature $T_{\text{IGM}}$ as a function of redshift. We have extended the $f_c(z)$ calculation from $1+z = 10$ down to $1+z = 4$ by assuming three different reionization scenarios and determining the total amount of energy deposited as ionization of HeII, IGM heating and continuum photons once reionization occurs.
We have also considered multiple detailed structure formation models in order to accurately calculate the $s$-wave and $p$-wave annihilation rates. This modeling accounts for the formation of DM haloes and their subhaloes, with abundance and internal properties that are consistent with current cosmological simulations. It also considers the uncertainties at the smallest scales (corresponding to low-mass haloes, $<10^8$ M$_\odot$, devoid of gas and stars) that cannot be resolved in current simulations in a full cosmological setting, but that are very relevant in predicting the annihilation rate in the case of $s$-wave self-annihilation. This is particularly important at low redshifts: at $z\sim10$, the uncertainty in $\rho_{\rm eff}^2$ is $\sim5$ for the case of $s$-wave self-annihilation (see Figure \[fig\_rho\_eff\]). On the other hand, for $p$-wave self-annihilation, the uncertainties in the unresolved regime are irrelevant since the signal is dominated by massive haloes (see Figure \[fig\_rho\_eff\_pwave\]).
The integrated free electron fraction $x_e(z)$ and IGM mean temperature $T_{\text{IGM}}(z)$ were both computed using a pair of coupled differential equations derived from a three-level atom model, modified to include energy injection from DM. This simplified model agrees well with `RECFAST`, and enables us to compute these two quantities and set constraints across a large range of annihilation cross-sections/decay lifetimes and DM masses $m_\chi$. For each process, we obtained constraints for different assumptions on the redshift of reionization, structure formation prescriptions as well as $T_{\text{IGM}}$ constraints to check the robustness of the constraints.
For $s$-wave annihilation, constraints from measurements on the CMB power spectrum and on the integrated optical depth $\tau$ rule out any possibility of DM contributing significantly to reionization, with the CMB power spectrum constraints on $\langle \sigma v \rangle$ being approximately an order of magnitude stronger at a given $m_\chi$. The maximum allowed value of $\langle \sigma v \rangle$ can at most contribute to 2% of $x_e$ at reionization for $\chi \chi \to e^+e^-$, and less than 0.1% for $\chi \chi \to \gamma \gamma$. These results are largely independent of reionization redshift and structure formation prescription.
In the case of $p$-wave annihilation, the velocity suppression at early times greatly relaxes the CMB constraints compared to $s$-wave annihilation, since the former are mainly dependent on the cross-section immediately after recombination. However, the sudden increase in energy deposition once structure formation becomes important leads to a sharp rise in $T_\text{IGM}$, making astrophysical measurements of $T_{\text{IGM}}$ at redshifts $z \sim 4$ to 6 important. The most optimistic assumptions appear to allow for significant contributions to reionization, but much of the allowed parameter space is ruled out with the stricter $T_{\text{IGM}}$ constraint and earlier reionization. The sole exception to this is in the channel $\chi \chi \to e^+e^-$ with $m_\chi$ between and , but this region is in turn ruled out by constraints from the photon flux from the galactic diffuse background emission. Overall, we find that only a $\sim 0.1\%$ contribution to $x_e$ at reionization is permitted for $p$-wave annihilation dominantly to $e^+ e^-$ pairs; for annihilation dominantly to photons, a $\sim 5\%$ contribution is possible.
Finally, for DM decay, optical depth constraints rule out any large contribution from decays into $\gamma \gamma$, with the strongest bounds occurring for heavier DM (a contribution to $x_e$ at the $\sim 10\%$ level is viable for the lightest DM we consider, around 10 keV). Contributions at the 20-40% level from decays into $e^+e^-$ are possible for $m_\chi \sim \SI{10}{} - \SI{100}{MeV}$, $\tau_\chi \sim 10^{25} \SI{}{s}$ and $m_\chi \sim \SI{1}{MeV}$, $\tau_\chi \sim 10^{24} \SI{}{s}$, with this result being independent of our assumptions on the redshift of reionization.
Overall, we find that DM is mostly unable to contribute more than 10% of the free electron fraction after reionization across most of the DM processes and annihilation or decay products considered in this paper, even after allowing for different structure formation prescriptions, reionization scenarios and choice of constraint. The one exception to this is found in $\chi \chi \to e^+e^-$, with a possible contribution of up to 40% near $m_\chi = \SI{100}{MeV}$. Figure \[fig:xeMaxConstraints\] summarizes the maximum $x_e$ achievable prior to reionization that is consistent with all of the constraints considered in this paper.
With potential input from 21 cm tomography and improved measurements of the IGM at large redshift and the CMB, we expect our understanding of the process of reionization and the end of the cosmic dark ages to improve dramatically in the near future. These future results may be sensitive to a contribution to reionization by DM at well below the 10% level, and may serve as a good probe of the properties of DM.[^11] The continued relevance of DM to reionization and vice-versa serves as strong motivation to improve on the results developed here. Future work may include new ways to calculate $f_c(z)$ at $1+z \leq 10$ with greater accuracy by taking into account the ionization and thermal history of the universe near reionization, as well as understanding the potential impact of DM annihilation products on the haloes in which they are generated, building on results from [@Schon2014].
Acknowledgments
===============
The Dark Cosmology Centre is funded by the DNRF. JZ is supported by the EU under a Marie Curie International Incoming Fellowship, contract PIIF-GA-2013-62772. TS and HL are supported by the U.S. Department of Energy under grant Contract Numbers DE$-$SC00012567 and DE$-$SC0013999. The authors would like to thank Jens Chluba, Rouven Essig, Dan Hooper, Katie Mack, Lina Necib, Nicholas Rodd, Sergio Palomares Ruiz, Aaron Vincent and Chih-Liang Wu for helpful comments and discussions.
Additional Constraints {#app:additionalConstraints}
======================
Figure \[fig:xeConstraintsTIGMPlot\_sWave\] shows the free electron fraction just prior to reionization $x_e(z=6)$ for the benchmark scenario of both $\chi \chi \to e^+e^-$ and $\chi \chi \to \gamma \gamma$ $s$-wave annihilations, as well as the excluded cross-sections due to constraints from the CMB power spectrum as measured by Planck and from the $T_{\text{IGM}}(z=4.8)$ constraints. The $T_{\text{IGM}}$ bounds alone can almost rule out a 10% contribution from $\chi \chi \to e^+e^-$ above a mass of approximately , but are weaker for $\chi \chi \to \gamma \gamma$, since less energy goes into heating for this process. However, if the structure formation boost factor has been underestimated in our paper, these bounds will become stronger. This effectively sets a limit on how large the boost can be.
Throughout this paper, we have obtained the limits on the contribution to reionization from DM in the case of $s$- and $p$-wave annihilation by considering the processes $\chi \chi \to e^+e^-$ and $\chi \chi \to \gamma \gamma$ with each annihilation product having fixed, identical total energy $E = m_\chi$. This allowed us to set limits on $\langle \sigma v \rangle$ or $(\sigma v)_{\text{ref}}$ as a function of $m_\chi$. However, the constraints that we set here extend beyond these two annihilation scenarios. The energy injection rate from annihilations is set only by the quantity $\langle \sigma v \rangle/m_\chi$, and is independent of the annihilation products produced; only the energy deposition rate is dependent on the species and energies of the annihilation products.
Thus, if we recast the $\langle \sigma v \rangle - m_\chi$ parameter space in Figures \[fig:xeConstraintsPlot\_sWave\] and \[fig:xeConstraintsPlot\_pWave\] as a $\langle \sigma v \rangle/m_\chi - m_\chi$ parameter space, the second parameter actually corresponds to the injection energy of the annihilation products, which is not necessarily equal to the DM mass.
Figures \[fig:xeConstraintsPlotSigmavOverMChi\_sWave\] and \[fig:xeConstraintsPlotSigmavOverMChi\_pWave\] present the same set of constraints and results for $x_e(z = 6)$ as a function of $\langle \sigma v \rangle/m_\chi$ or $(\sigma v)_{\text{ref}}/m_\chi$ and the injection energy of the $s$- or $p$-wave annihilation products, which in general can be very different from $m_\chi$. Table \[tab:Constraints\] gives the $s$-wave CMB power spectrum constraints and the $p$-wave $T_{\text{IGM}}(z = 4.80) > \SI{10000}{K}$ constraints in table form for the convenience of the reader. For any arbitrary annihilation process, the total contribution to $x_e$ prior to reionization is strictly less than the highest contribution to $x_e$ possible among the different particles with different energies produced from the annihilation. This implies that for a given injection rate, the only dependence on the spectrum of the annihilation products enters through $f_c(z)$, and as a result, the CMB power spectrum constraints are relatively insensitive to the details of the injection spectrum from DM annihilations [@Elor2015a].
[500pt]{}[c >X >X >X >X]{} & $s$-wave CMB power spectrum constraint, $\log_{10}\langle \sigma v \rangle$ & $p$-wave $T_{\text{IGM}}(z = 4.80)$ constraint, $\log_{10}(\sigma v)_{\text{ref}}$\
$\log_{10} E_{\text{inj}}$ & &\
& $\chi \chi \to e^+e^-$ & $\chi \chi \to \gamma \gamma$ & $\chi \chi \to e^+e^-$ & $\chi \chi \to \gamma \gamma$\
-5.00 & & -27.2502 & & -20.6327\
-4.75 & & -27.2243 & & -20.1114\
-4.50 & & -27.2311 & & -20.1027\
-4.25 & & -27.2326 & & -20.2672\
-4.00 & & -27.1866 & & -20.4146\
-3.75 & & -27.0830 & & -20.5190\
-3.50 & & -26.9280 & & -20.5746\
-3.25 & & -26.7415 & & -20.5746\
-3.00 & -26.5871 & -26.5424 & -21.5524 & -20.5075\
-2.75 & -26.7722 & -26.6038 & -21.3538 & -20.3684\
-2.50 & -27.1549 & -26.9224 & -21.1154 & -20.1486\
-2.25 & -27.3000 & -27.1003 & -20.8725 & -19.8619\
-2.00 & -27.3572 & -27.2023 & -20.5468 & -19.5262\
-1.75 & -27.3727 & -27.2421 & -21.0758 & -19.1676\
-1.50 & -27.3787 & -27.2574 & -21.8876 & -18.8817\
-1.25 & -27.3611 & -27.2570 & -22.5907 & -18.9666\
-1.00 & -27.3186 & -27.2409 & -22.9054 & -19.2229\
-0.75 & -27.2587 & -27.2056 & -23.0043 & -19.4243\
-0.50 & -27.1635 & -27.1489 & -22.9120 & -19.4912\
-0.25 & -27.0370 & -27.0626 & -22.7140 & -19.4418\
0.00 & -26.9831 & -26.9568 & -22.4788 & -19.3185\
0.25 & -27.0701 & -26.9007 & -22.2346 & -19.1527\
0.50 & -27.1613 & -26.9332 & -21.9916 & -18.9624\
0.75 & -27.2024 & -27.0015 & -21.7520 & -18.7597\
1.00 & -27.1837 & -27.0369 & -21.5127 & -18.5503\
1.25 & -27.1212 & -27.0208 & -21.2700 & -18.3361\
1.50 & -27.0662 & -26.9702 & -21.0248 & -18.1182\
1.75 & -27.0467 & -26.9416 & -20.7816 & -17.8968\
2.00 & -27.0246 & -27.0247 & -20.5460 & -17.6747\
2.25 & -27.0014 & -27.0301 & -20.3158 & -17.4536\
2.50 & -27.0101 & -27.0116 & -20.0852 & -17.2340\
2.75 & -27.0139 & -27.0102 & -19.8505 & -17.0141\
3.00 & -27.0090 & -27.0089 & -19.6115 & -16.7924\
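Read back into numbers, the table can be queried by simple interpolation. The following sketch hard-codes a few rows of the $s$-wave $\chi \chi \to \gamma \gamma$ column; it assumes (as our interpretation, not a statement from the table itself) that the first column is the base-10 logarithm of the injection energy and that the entries are base-10 logarithms of the excluded cross-section.

```python
import numpy as np

# A few rows of the s-wave chi chi -> gamma gamma column of the table above.
# First array: log10 of the injection energy; second: log10 of the constraint.
log10_E = np.array([-1.00, 0.00, 1.00, 2.00, 3.00])
log10_sv = np.array([-27.2409, -26.9568, -27.0369, -27.0247, -27.0089])

def sigma_v_limit(log10_energy):
    """Linearly interpolate the tabulated log10 constraint."""
    return np.interp(log10_energy, log10_E, log10_sv)

# Querying at a tabulated energy returns the tabulated value.
print(sigma_v_limit(0.0))  # -26.9568
```

In practice one would load the full table rather than the excerpt used here; linear interpolation in log-log space is the natural choice given the tabulated quantities.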
$p$-wave $J$-Factor {#app:JFactor}
===================
The photon flux per unit energy due to DM annihilations in the galaxy is given by [@Essig2013] $$\begin{aligned}
{1}
\frac{d\Phi}{dE} = \frac{1}{2} \frac{r_\odot}{4\pi} \frac{\rho_\odot^2}{m_\chi} \frac{\langle \sigma v \rangle_\odot}{m_\chi} \frac{dN_\gamma}{dE} J,\end{aligned}$$ where $dN_\gamma/dE$ is the annihilation photon yield, and $r_\odot$ and $\rho_\odot$ are the distance from the Sun to the galactic center and the local DM density respectively. $J$ is a dimensionless factor that encapsulates the averaging of the DM density along the line-of-sight of the entire field of observation, and is given by $$\begin{aligned}
{1}
J = \int d\Omega \frac{ds}{r_\odot} \left(\frac{\rho(s)}{\rho_\odot} \right)^2.\end{aligned}$$ For $s$-wave annihilations, $J$ contains all of the dependence of the photon flux on the DM distribution in the galaxy. In $p$-wave annihilations, however, the rate of DM annihilations also depends on the velocity dispersion of DM, and thus both the density and the velocity of DM along each line-of-sight must be averaged. We should therefore replace $J$ with $$\begin{aligned}
{1}
J_p = \int d\Omega \frac{ds}{r_\odot} \left(\frac{\rho(s)}{\rho_\odot} \right)^2 \frac{v^2(s)}{v_\odot^2},\end{aligned}$$ and now $\langle \sigma v \rangle_\odot$ is explicitly the local annihilation cross-section due to the velocity dependence of $\langle \sigma v \rangle$.
Previous studies have implicitly assumed that $J$ and $J_p$ are equal. To assess the significance of this assumption, we consider a pure NFW DM profile given by equation (\[rho\_smooth\]) with $\alpha = 1$, with a corresponding velocity dispersion profile given by the following relation [@Zavala2014]: $$\begin{aligned}
{1}
\frac{\rho(r)}{\sigma_{\text{1D}}^3(r)} \propto r^{-1.9},\end{aligned}$$ where $\sigma_{\text{1D}}$ is the 1D velocity dispersion that we use as a proxy for $v$. The constant of proportionality of this equation is determined by setting $\rho(r_\odot) = \SI{0.3}{GeV cm^{-3}}$ and assuming a Maxwellian distribution of the dark matter particles in the halo with a peak value set equal to the rotation velocity of the Sun given by $v = \SI{220}{km s^{-1}}$. With these assumptions, we find a difference between $J_p$ and $J$ of about 5-10%, after averaging over the solid angle within some typical galactic diffuse gamma-ray background survey regions. This result has also been confirmed using DM particle dispersion velocities as a function of radius [@Necib2016] derived from the Illustris $N$-body simulation [@Vogelsberger2014], which models both DM and baryons.
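The mechanics of this comparison can be sketched numerically for a single line of sight. The values below ($r_s = 20$ kpc for the NFW scale radius, $r_\odot = 8.5$ kpc, a viewing angle of $45°$) are illustrative assumptions of ours, not necessarily the paper's choices, and no solid-angle averaging over a survey region is performed.

```python
import numpy as np

r_sun, r_s = 8.5, 20.0               # kpc: solar radius, NFW scale radius

def trap(y, x):                      # simple trapezoidal rule
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

def rho(r):                          # NFW profile, normalised so rho(r_sun) = 1
    nfw = lambda r: 1.0 / ((r / r_s) * (1.0 + r / r_s) ** 2)
    return nfw(r) / nfw(r_sun)

def v(r):                            # from rho / sigma_1D^3 ~ r^-1.9,
    prof = lambda r: (rho(r) * r ** 1.9) ** (1.0 / 3.0)
    return prof(r) / prof(r_sun)     # normalised so v(r_sun) = 1

def J_and_Jp(psi, s_max=150.0, n=30000):
    s = np.linspace(1e-3, s_max, n)  # distance along the line of sight (kpc)
    r = np.sqrt(r_sun ** 2 + s ** 2 - 2.0 * r_sun * s * np.cos(psi))
    return (trap(rho(r) ** 2, s) / r_sun,
            trap(rho(r) ** 2 * v(r) ** 2, s) / r_sun)

J, Jp = J_and_Jp(psi=np.pi / 4)      # one line of sight, 45 deg off the GC
print(round(Jp / J, 2))              # ratio J_p / J; close to, not equal to, 1
```

Along such a line of sight the ratio $J_p/J$ deviates from unity at roughly the level quoted in the text once the $\rho^2$ weighting is taken into account.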
We have therefore assumed throughout our analysis that $J_p = J$, and anticipate an error of about 10% in translating the $\langle \sigma v \rangle$ constraints assuming an $s$-wave distribution directly into constraints for $(\sigma v)_{\text{ref}}$ in $p$-wave annihilations. Since the $p$-wave constraints that we have used rule out regions of parameter space with a contribution to reionization exceeding 10% by more than 2 orders of magnitude, we do not expect this assumption to change our conclusions in any significant way.
Contour Plots of $f_c(z)$ {#app:fz}
=========================
Figures \[fig:fz\_sWave\], \[fig:fz\_pWave\] and \[fig:fz\_decay\] show contour plots of $f_c(z)$ for annihilations or decays into $e^+e^-$ and $\gamma \gamma$ as a function of redshift and injection energy, based on equation (\[eqn:fz\]). No reionization is assumed in any of these plots; for scenarios where structure formation is important, the prescription with the largest boost is used in the calculation.
[^1]: Marie Curie Fellow
[^2]: See [@Slatyer2012; @Slatyer2015] and the publicly available results and examples found at `http://nebel.rc.fas.harvard.edu/epsilon` for further information on how this is done.
[^3]: To avoid conflicting with notation used in later sections, we use the letter $\mathcal{B}$ to refer to the flux multiplier instead of the letter $f$ as in [@Taylor2003].
[^4]: For neutralino dark matter, the kinetic decoupling temperature generally increases with particle mass, although a broad range of values for a fixed mass is allowed. Based on Fig. 2 of [@Bringmann2009] we have chosen a typical value within that range for $m_\chi=100$ GeV.
[^5]: This universality is even clearer if the ratio of maximum circular velocities is used instead of the masses to define the subhalo mass function [e.g. @Cautun2014].
[^6]: One popular choice is the scheme called the “SSCK approximation” in [@Slatyer2015a], where a fraction $(1-x_e)/3$ is deposited into ionization and excitation each, with the remaining $(1+2x_e)/3$ going into heating.
[^7]: See [@Poulin2015] for an example of how heating from astrophysical sources can be included in a similar analysis.
[^8]: Note that the optical depth contribution from instantaneous reionization at $z = 10$ exceeds the Planck optical depth measurement, and thus would leave no room for any contribution from DM at all. However, we do not use the optical depth constraint in this manner.
[^9]: When there is no reionization, we start integrating from $z = 6$, making $\delta \tau$ identical to the case with $z_{\text{reion}} = 6$.
[^10]: Models such as neutrinophilic DM [@Shoemaker2013; @VandenAarssen2012] can, however, exhibit such a behavior.
[^11]: See [@Lopez-Honorez2016] for recent work in understanding the impact of DM annihilations on the 21 cm signal, using methods that are similar to those used here.
---
abstract: 'Motivated by its application in ecology, we consider an extended Klausmeier model, a singularly perturbed reaction-advection-diffusion equation with spatially varying coefficients. We rigorously establish existence of stationary pulse solutions by blending techniques from geometric singular perturbation theory with bounds derived from the theory of exponential dichotomies. Moreover, the spectral stability of these solutions is determined, using similar methods. It is found that, due to the break-down of translation invariance, the presence of spatially varying terms can stabilize or destabilize a pulse solution. In particular, this leads to the discovery of a pitchfork bifurcation and existence of stationary multi-pulse solutions.'
author:
- 'Robbin Bastiaansen[^1]'
- 'Martina Chirilus-Bruckner'
- Arjen Doelman
bibliography:
- 'klausmeierModelVaryingTerrain.bib'
title: 'Pulse solutions for an extended Klausmeier model with spatially varying coefficients[^2]'
---
Introduction
============
Since Alan Turing’s revolutionary insight that patterns can emerge spontaneously in systems with multiple species if these diffuse at different rates [@turing1952], systems of reaction-diffusion equations have served as prototypical pattern forming models. Scientists have been using these reaction-diffusion models successfully to describe for instance animal markings [@koch1994], embryo development [@meinhardt2008] and the faceted eye of [*Drosophila*]{} [@maini2001]. Special interest has been given to localized solutions (e.g. pulses, fronts) that arise when the diffusivities of the species involved are very different. The prototypical (two-component) model (in one spatial dimension) is a singularly perturbed equation of the (scaled) form $$\label{eq:modelGeneric}
\left\{
\begin{array}{rcrcl}
\partial_t U & = & \partial_{x}^2 U & +& \mathcal{H}_1\left(x,u,u_x,v,v_x;\tilde{\varepsilon}\right) ,\\
\partial_t V & = & \tilde{\varepsilon}^2 \partial_{x}^2 V & +& \mathcal{H}_2\left(x,u,u_x,v,v_x;\tilde{\varepsilon}\right),
\end{array}
\right.$$ where $0 < \tilde{\varepsilon} \ll 1$ is a measure for the ratio of diffusion constants, and $\mathcal{H}_1$, $\mathcal{H}_2$ are sufficiently smooth functions. Because of the singularly perturbed nature of , it is possible to establish existence and determine (linear) stability of localized patterns in these models. In the past, this has been done successfully for the Gray-Scott model [@DEK01; @doelman1998; @doelman2003semistrong; @chen2009oscillatory; @Kolokolnikov2005PS; @Sun2005], the Gierer-Meinhardt model [@D01; @veerman2013pulses; @doelman2003semistrong; @Sun2005], and in several other settings [@BjornRiccati; @doelman2015explicit; @rottschafer2017transition; @moyles2016explicitly]. However, these studies are usually limited to models with constant coefficients. Some research has focused on the introduction of localized spatial inhomogeneities [@van2010pinned; @nishiura2007dynamics; @nishiura2007dynamics2; @xin2000front; @yuan2007; @doelman2016geometric]; also (often formal) research has been done on reaction-diffusion equations with (less restricted) spatially varying coefficients [@brena2015; @brena2014; @avitabile2018; @wei2017; @wei2017-2; @berestycki2014]. In this article, we aim to expand the knowledge of such systems by rigorously studying a reaction-diffusion system with fairly generic spatially varying coefficients; motivated by its use in ecology (see Remark \[remark:applicationKlausmeier\]), we consider the following extended Klausmeier model with spatially varying coefficients [@klausmeier1999; @BD18]: $$\begin{aligned}
\label{eq:klausmeier_model}
\left\{
\begin{array}{rcrl}
\partial_t U & = & \partial_{x}^2 U & + f(x) \partial_{x}U + g(x)U + a - U - U V^2 \, ,\\[.2cm]
\partial_t V & = & D^2 \partial_{x}^2 V & - \ m V + U V^2 \, ,
\end{array}
\right.\end{aligned}$$ with $ x \in \mathbb{R}, t \geq 0, U = U(x,t), V = V(x,t) \in \mathbb{R} $, parameters $ D, a, m > 0 $ and functions $ f, g \in C^1_b(\mathbb{R})$. Certain conditions are imposed on the parameters and functions $f$ and $g$ – these will be explained in section \[sec:assumptions\].
The model can be brought into the form of by a series of scalings – see section \[sec:existence\] and [@doelman2003semistrong].
\[remark:applicationKlausmeier\] This system of equations is used as a model in ecology to describe the dynamics of water ($ U $) and vegetation ($ V $). The extended Klausmeier model takes into account the amount of rainfall ($ a > 0 $) and mortality rate of the vegetation ($m>0$) and goes beyond its classical version by modeling a smooth, spatially varying terrain $ h = h(x) $ which then enters as $ f(x) = h'(x), g(x) = h''(x) $ (see [@BD18]). Variants of the Klausmeier model have been studied in various articles ranging from ecological studies [@klausmeier1999; @Bastiaansens2018] to mathematical analysis [@BD18; @siteur2014beyond; @sherratt2013; @sherratt2015]. The focus of all these studies is vegetation patterns, which have been found to play a crucial role in the process of desertification. A starting point for the analysis of more complicated patterns is a thorough understanding of their building blocks, namely, localized solutions. The present paper is motivated by observations – both in numerical simulations and in real ecosystems [@BD18; @Bastiaansens2018] – of the impact of nontrivial topographies on the dynamics of localized vegetation patterns.
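For orientation, a direct time simulation of the model can be set up with a few lines of finite differences. The sketch below is illustrative only: the parameter values are our own assumptions (not taken from the paper, and not tuned to the pulse regime of the analysis), periodic boundary conditions are used for simplicity, and the terrain is the Gaussian $h(x) = \exp(-x^2/2)$ of Figure \[fig:pulses\](b).

```python
import numpy as np

# Illustrative, untuned parameters (assumptions; not values from the paper)
a, m, D = 0.5, 0.45, 0.2
L, n = 40.0, 400
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.2 * dx ** 2            # explicit-Euler stability for the U diffusion

h = np.exp(-x ** 2 / 2)       # terrain h(x) = exp(-x^2/2)
f = -x * h                    # f = h'
g = (x ** 2 - 1) * h          # g = h''

def lap(w):                   # periodic second difference
    return (np.roll(w, 1) - 2 * w + np.roll(w, -1)) / dx ** 2

def dxc(w):                   # periodic centred first difference
    return (np.roll(w, -1) - np.roll(w, 1)) / (2 * dx)

U = np.full(n, a)             # bare state U = a, V = 0, with a seed in V
V = 0.5 * np.exp(-x ** 2)

for _ in range(5000):
    Un = U + dt * (lap(U) + f * dxc(U) + g * U + a - U - U * V ** 2)
    V = V + dt * (D ** 2 * lap(V) - m * V + U * V ** 2)
    U = Un

print(np.isfinite(U).all() and np.isfinite(V).all())  # True
```

With parameters chosen in the semi-strong regime of the analysis (small $\varepsilon = a/m$), the same scheme produces stationary pulse profiles of the type shown in Figure \[fig:pulses\].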
The focus of this article is to analyze existence, stability and (some) bifurcations of stationary pulse solutions to . The presence of spatially varying coefficients, however, alters the approach that usually is taken in the case of constant coefficient models. For one, with spatially constant coefficients, possesses a uniform stationary state, with $V \equiv 0$, to which pulse solutions converge for $x \rightarrow \pm \infty$. In the case of spatially varying coefficients, however, typically such a uniform stationary state does not exist; instead, a bounded solution $(u,v) = (u_b,0)$ exists and pulse solutions converge to this bounded solution for $x \rightarrow \pm \infty$ – see Figure \[fig:pulses\]. Moreover, standard proofs using geometric singular perturbation theory typically rely on the availability of closed form expressions for orbits of subsystems of – see below. These are no longer available in case of generic spatially varying coefficients, and only bounds can be found. Indeed, the core contribution of the present work is to overcome these difficulties, which we do by blending geometric singular perturbation theory [@Fen79] with the theory of exponential dichotomies [@coppel1978stability] in a new way.\
[0.32]{} ![Numerical simulation resulting in a stationary pulse solution for with $f(x) = h'(x)$, $g(x) = h''(x)$, where $h(x) = 0$ (a), $h(x) = \exp(-x^2/2)$ (b) and $h(x) = 0.1 \cos(2x)$ (c). $ U, V $ components are blue and red respectively, while the orange curve depicts the bounded solution $ u_b $ to which the $ U $-component converges for $|x| \rightarrow \infty$.[]{data-label="fig:pulses"}](figures/pulse-constant-coefficients "fig:"){width="\textwidth"}
[0.32]{} ![(b)](figures/pulse-gaussian-coefficients "fig:"){width="\textwidth"}
[0.32]{} ![(c)](figures/pulse-periodic-coefficients "fig:"){width="\textwidth"}
In this article, we initially follow the ‘standard’ approach of geometric singular perturbation theory. That is, we introduce a small parameter $\varepsilon := \frac{a}{m}$ – see assumption (A1) in section \[sec:assumptions\] – and construct a stationary pulse solution to in the limit $\varepsilon = 0$, which presents itself as a homoclinic orbit in the related stationary fast-slow ODE system – in case of spatially varying coefficients it is homoclinic to the bounded solution. For this construction, the full system is split into a fast subsystem and a (super)slow subsystem on a so-called slow manifold $\mathcal{M}$ that consists of fixed points of the fast subsystem. We establish fast connections to and from $\mathcal{M}$ that take off from submanifold $T_o \subset \mathcal{M}$ and touch down on submanifold $T_d \subset \mathcal{M}$. On $\mathcal{M}$, we construct stable and unstable submanifolds $W^{s/u}(u_b) \subset \mathcal{M}$ that consist of points on $\mathcal{M}$ that converge to the bounded solution for $x \rightarrow \infty$ and $x \rightarrow -\infty$, respectively. Intersections between these unstable/stable manifolds and take-off/touch-down submanifolds (and a symmetry assumption) then establish the existence of pulse solutions to . Finally, persistence of these pulse solutions for $\varepsilon > 0$ is guaranteed by geometric singular perturbation theory [@Fen79].
Specifically, stationary solutions $ (U(x,t),V(x,t))= (\tilde{u}(x), \tilde{v}(x)) $ of fulfill the system of ODEs $$\begin{aligned}
\label{eq:klausmeier_model_ODE}
\left\{
\begin{array}{rcrl}
0 & = & \tilde{u}_{xx} & +f(x)\tilde{u}_x + g(x) \tilde{u} + a - \tilde{u} - \tilde{u} \tilde{v}^2 \, ,\\[.1cm]
0 & = & \frac{D^2}{m} \tilde{v}_{xx} & - \tilde{v} + \frac{1}{m} \tilde{u} \tilde{v}^2 \, .
\end{array}
\right.\end{aligned}$$ After a sequence of (re)scalings, it can be seen that the associated fast subsystem is not affected by the spatially varying terms and can be studied using standard methods. However, the slow subsystem, on the slow manifold $\mathcal{M}$, is affected by the spatially varying terms. This subsystem is given (when rescaling $\hat{u} = a \tilde{u}$) by $$\label{eq:slowSubsystem}
\left\{
\begin{array}{rcl}
\partial_x \hat{u} & = & \hat{p}, \\
\partial_x \hat{p} & = & - f(x) \hat{p} - g(x) \hat{u} - 1 + \hat{u}.
\end{array}
\right.$$ For $f$ and $g$ constant, can be solved explicitly and the stable and unstable manifolds $W^{s,u}(u_b)$ are known explicitly. In case of (spatially) varying $f$ and $g$, typically no closed form solutions are available; however, when these varying coefficients are sufficiently small – specifically, when $\delta := \sup_{x \in \mathbb{R}} \sqrt{f(x)^2+g(x)^2} < \frac{1}{4}$ (so $\delta$ can be $\mathcal{O}(1)$ with respect to $\varepsilon$); see section \[sec:exp\_dich\] – the dynamics of can be related to the constant coefficient case $f,g \equiv 0$ using the theory of exponential dichotomies.
In particular, the saddle structure – present for $f,g\equiv0$ – persists as exponential dichotomy. Therefore, possesses a $1D$ family of solutions that converge to the (unique) bounded solution to for $x \rightarrow \infty$ and a $1D$ family of solutions that converge to the bounded solution for $x \rightarrow -\infty$. These families of solutions essentially form the stable and unstable manifolds $W^{s,u}(u_b)$. Due to the linear nature of , these (un)stable manifolds are made up of straight lines, i.e. $W^{s,u}(u_b) = \cup_{x\in\mathbb{R}} (x,l^{s,u}(x))$ where $l^{s,u}(x)$ describes a straight line in $\mathbb{R}^2$. An important difference now arises between the cases of constant and varying coefficients: when $f,g\equiv 0$, the lines $l^{s,u}(x)$ do not depend on $x$; when $f$ and $g$ are spatially varying, they do. Hence, $W^{s,u}(u_b)$ appears wiggly in case of varying coefficients – see Figure \[fig:manifoldsOnSlowManifold\]. The theory of exponential dichotomies enables us to bound the variation of the lines $l^{s,u}(x)$; if $\delta$ is small enough (i.e. $\delta < \delta_c(a,m,D)$, where $\delta_c \leq 1/4$ is $\mathcal{O}(1)$ with respect to $\varepsilon$), these bounds are strict enough that a non-empty intersection $(0,l^u(0)) \cap T_o$ is guaranteed – thus establishing existence of a (symmetric) pulse solution to . See Figure \[fig:existenceProofSketches\] for a sketch.
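The persistence of the saddle structure can be made concrete in a frozen-coefficient computation. Linearizing the slow subsystem about the bounded solution gives the frozen-$x$ matrix $A(x) = \begin{pmatrix} 0 & 1 \\ 1-g(x) & -f(x) \end{pmatrix}$, which for $f = g \equiv 0$ has eigenvalues $\pm 1$. The sketch below checks, for an assumed small terrain $h(x) = 0.05\cos(x)$ (i.e. $f = h'$, $g = h''$), that pointwise hyperbolicity survives; note that frozen-coefficient spectra are only a heuristic for non-autonomous systems – it is precisely the theory of exponential dichotomies that upgrades such pointwise hyperbolicity plus smallness (A3) into an actual invariant splitting.

```python
import numpy as np

# Frozen-x linearisation of the slow subsystem: A(x) = [[0, 1], [1-g, -f]].
# Assumed example terrain h(x) = 0.05*cos(x), so f = h' and g = h''.
x = np.linspace(-20, 20, 2001)
f = -0.05 * np.sin(x)
g = -0.05 * np.cos(x)

delta = np.sqrt(f ** 2 + g ** 2).max()
assert delta < 0.25             # assumption (A3) holds for this terrain

gap = np.inf
for fx, gx in zip(f, g):
    lam = np.linalg.eigvals(np.array([[0.0, 1.0], [1.0 - gx, -fx]]))
    lam = np.sort(lam.real)
    assert lam[0] < 0 < lam[1]  # one stable, one unstable direction at each x
    gap = min(gap, lam[1] - lam[0])

print(gap > 1.5)                # the spectral gap stays O(1), not merely positive
```

The spectral gap here is $\sqrt{f^2 + 4(1-g)} \geq \sqrt{4(1-\delta)}$, so the splitting stays uniformly bounded away from zero whenever (A3) holds.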
[0.25]{} ![Sketches of a cross-section of $\mathcal{M}$ that illustrate the heart of the existence proof. In green the takeoff and touchdown curves are shown, the solid blue lines indicate (possible) $l^{s/u}(0)$, the dashed blue lines $l^{s/u}(0)$ for the constant coefficient case $f = 0, g = 0$. The shaded blue area indicates all possible locations of $l^{s/u}(0)$; the shaded red region the possible locations of the bounded solution. The existence proof works when bounds on $u_b$ and $l^{s/u}(0)$ are strong enough such that $l^{u}(0)$ necessarily intersects with $T_o(0)$ – this happens when all straight lines that start from the red region and stay within the blue region intersect the green curves. If bounds are strong enough this is the case – as illustrated in (b) – but when bounds are too weak this is not the case and existence is not guaranteed by this method – as illustrated in (c). In (a) the situation for the constant coefficient case is shown.[]{data-label="fig:existenceProofSketches"}](figures/ExistenceProofSketch1 "fig:"){width="\textwidth"}
[0.25]{} ![(b)](figures/ExistenceProofSketch2 "fig:"){width="\textwidth"}
[0.25]{} ![(c)](figures/ExistenceProofSketch3 "fig:"){width="\textwidth"}
Next, the spectral stability of the pulse solutions thus constructed is studied. Using bounds similar to those in the existence problem, it is shown that eigenvalues are $\delta$-close to their counterparts in case of constant coefficients – see Figure \[fig:spectralBounds\]. That is, under several conditions, typical for these systems, the ‘large’ eigenvalues can be bounded to the stable half-plane $\{ \lambda \in \mathbb{C}: \mbox{Re} \lambda < 0 \}$. For the ‘small’ eigenvalue – located close to the origin – the situation is more subtle. In case of $f,g \equiv 0$ this small eigenvalue is located precisely at the origin due to the translation invariance of . The introduction of spatially varying coefficients to the system breaks this invariance and as a result the small eigenvalue moves to the stable or the unstable half-plane.
Tracking of this eigenvalue indicates that it can, indeed, move to either half-plane, depending on the form of the functions $f$ and $g$. In particular, when taking $f = h'$, $g = h''$, the location of the small eigenvalue is related to the curvature $g=h''$ of $h$: when the curvature is weak, the pulse solution is stable if $g(0) = h''(0) < 0$ and unstable if $g(0) = h''(0) > 0$; for strong curvature, this is flipped, due to a pitchfork bifurcation.
Finally, the break-down of the translation invariance in has another novel effect. In case of constant coefficients, stationary multi-pulse solutions – solutions with multiple fast excursions – do not exist, due to the presence of the translation invariance. If this invariance is broken, they can exist; the introduction of functions $f$ and $g$ now allows for these stationary multi-pulse solutions (under some conditions on $f$ and $g$) and their existence can be established (although we refrain from going into the details).
The set-up for the rest of this paper is as follows. In section \[sec:existence\], we establish existence of stationary pulse solutions to ; here we first consider the case $f,g \equiv 0$ and subsequently the case of generic (bounded) $f$ and $g$. Then, using the theory of exponential dichotomies, both cases are related to each other, resulting in bounds for the generic case that allow us to prove existence. In section \[sec:linstability\] we study the spectral stability of the pulse solutions found, again by relating the generic case to the constant coefficient case of $f,g\equiv0$. Then, in section \[sec:pulseLocationODE\] we consider the small eigenvalues in more depth using formal and numerical techniques, focusing on the possible occurrence of bifurcations; we also present stationary multi-pulse solutions. We conclude with a discussion of the results in section \[sec:discussion\].
Assumptions {#sec:assumptions}
-----------
We will make several assumptions throughout the manuscript. Some are crucial, while others serve to simplify the exposition. $$\begin{aligned}
\mathbf{(A1):} \qquad &
\varepsilon := \frac{a}{m} \ll 1 \label{eq:a_m_assumption}; \\
\mathbf{(A2):} \qquad &
f(-x)= -f(x) \,, \quad g(-x)= g(x) \,, \qquad \mbox{for all $x \in \mathbb{R}$};\label{eq:f_g_assumptions_symmetry} \\
\mathbf{(A3):} \qquad &
\sup_{x \in \mathbb{R}} \sqrt{ f(x)^2 + g(x)^2} < \frac{1}{4} \, ; \label{eq:f_g_assumptions_magnitude} \\
\mathbf{(A4):} \qquad &
\lim_{x \rightarrow \pm \infty} f(x), g(x) = 0 \, ; \label{eq:f_g_asymptotics} \\
\mathbf{(A5):} \qquad &
||f||_{C_b} = \mathcal{O}(1) \, , \qquad ||g||_{C_b} = \mathcal{O}(1) \qquad \left(w.r.t. \ \frac{a}{m}\right) \label{eq:f_g_wrt_epsilon} \end{aligned}$$ Assumption (A1) ensures the presence of a small parameter, necessary to use geometric singular perturbation theory [@SD17; @BD18]. (A2) is a symmetry assumption that ensures possesses a (point) symmetry in $x = 0$; this technicality significantly simplifies our rigorous proof; pulse solutions can also be found formally and/or numerically when (A2) does not hold (and we expect that their existence can be established rigorously by extending our methods). Then, assumption (A3) stems from the theory of exponential dichotomies: when it holds, solutions to for generic $f$ and $g$ can be linked to solutions of with $f,g\equiv 0$; when (A3) does not hold, this link is not provided by the theory of exponential dichotomies. Assumption (A4) is a technicality that is only needed in the stability section (specifically for the elephant-trunk method to work); for the existence theorems it is not necessary; in fact, we suspect that even the stability results continue to hold when (A4) is violated – see also Remarks \[remark:stability\_fgLimits\] and \[remark:stability\_fgBounded\]. Finally, assumption (A5) is needed to pass to limits in the treatment of the fast-slow system.
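Assumptions (A2) and (A3) are easily checked for a concrete terrain. The sketch below does so numerically for two assumed examples with $f = h'$ and $g = h''$ supplied analytically: $h(x) = 0.05\cos(x)$, which satisfies both assumptions, and the larger $h(x) = \cos(x)$, which keeps the symmetry (A2) but violates the smallness condition (A3).

```python
import numpy as np

# Numerical checker for assumptions (A2) (f odd, g even) and (A3)
# (sup sqrt(f^2 + g^2) < 1/4) on a candidate terrain, with f = h', g = h''.
def check_terrain(f, g, x):
    fa, ga = f(x), g(x)
    A2 = np.allclose(f(-x), -fa) and np.allclose(g(-x), ga)
    A3 = np.sqrt(fa ** 2 + ga ** 2).max() < 0.25
    return A2, A3

x = np.linspace(-30, 30, 4001)       # symmetric sample grid

# h(x) = 0.05*cos(x): f odd, g even, and sup sqrt(f^2 + g^2) = 0.05 < 1/4.
ok = check_terrain(lambda x: -0.05 * np.sin(x),
                   lambda x: -0.05 * np.cos(x), x)
print(ok)   # (True, True)

# h(x) = cos(x): still symmetric, but sup sqrt(f^2 + g^2) = 1 violates (A3).
bad = check_terrain(lambda x: -np.sin(x), lambda x: -np.cos(x), x)
print(bad)  # (True, False)
```

Recall that (A3) is a sufficient condition of the proofs, not a necessary one, so a terrain failing this check may still support pulses numerically.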
Analysis of stationary pulse solutions {#sec:existence}
======================================
A crucial step for making the stationary ODE amenable to analytic considerations is to find a parameter regime convenient for rigorous perturbation techniques. While there are various choices, we pick a specific one for clarity, since our focus is on novel phenomena due to the non-autonomous character of the system and not on classifying all possible dynamics across parameter regimes.
Following [@DEK01; @chen2009oscillatory; @BD18], we rescale the spatial coordinate (motivated by the diffusivity of the $ v $-component) and the amplitudes of the unknowns by $$\begin{aligned}
\label{eq:scaling_1}
\xi := \frac{\sqrt{m}}{D} x \, , \quad \tilde{u} = \frac{m \sqrt{m} D}{a} u \, , \quad \tilde{v} = \frac{a}{\sqrt{m} D} v \, ,\end{aligned}$$ to get $$\begin{aligned}
\label{eq:klausmeier_model_constant_ODE_second_order}
\left\{
\begin{array}{rcl}
u_{\xi \xi} & = & \frac{a^2}{m^2} \left[ \frac{D^2m}{a^2} u - \frac{Dm\sqrt{m}}{a^2} f\left( \frac{D}{\sqrt{m}} \xi\right) u_{\xi}- \frac{D^2m}{a^2} g\left( \frac{D}{\sqrt{m}} \xi\right) u - \frac{D}{\sqrt{m}} + u v^2 \right]\, ,\\[.1cm]
v_{\xi \xi} & = & v - u v^2 \, .
\end{array}
\right. \end{aligned}$$ It is now convenient to introduce $$\begin{aligned}
\label{eq:epsilon_mu}
0 < \varepsilon : = \frac{a}{m} \, , \quad 0 < \mu : = \frac{m \sqrt{m}D}{a^2} \, ,\end{aligned}$$ and write the above ODEs as the first order system of ODEs $$\begin{aligned}
\label{eq:klausmeier_model_ODE_first_order}
\left\{
\begin{array}{rcl}
\dot{u} & = & \varepsilon p \, , \\[.1cm]
\dot{p} & = & \varepsilon \left[ \varepsilon^2 \mu^2 u - \varepsilon \mu f\left( \varepsilon^2 \mu \xi\right) p - \varepsilon^2 \mu^2 g\left( \varepsilon^2 \mu \xi\right) u - \varepsilon^2 \mu + u v^2 \right] \, ,\\[.1cm]
\dot{v} & = & q\, , \\[.1cm]
\dot{q} & = & v - u v^2 \, .
\end{array}
\right. \end{aligned}$$ In order to use geometric singular perturbation theory, we make the customary assumption (A1), that is, $$\begin{aligned}
\label{eq:epsilon_small}
0 < \varepsilon \ll 1 \, .\end{aligned}$$ and stipulate assumption (A5) so we can pass to limits.
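The coefficient bookkeeping in the step from the second-order system to the first-order system can be checked mechanically: with $\varepsilon = a/m$ and $\mu = m\sqrt{m}D/a^2$, one needs $\varepsilon^2\mu^2 = D^2 m/a^2$ (coefficient of $u$) and $\varepsilon^2\mu = D/\sqrt{m}$ (constant term and argument of $f$, $g$). The sketch below verifies both identities for random positive parameter values.

```python
import math
import random

# Verify the two scaling identities behind the first-order system:
#   eps**2 * mu**2 == D**2 * m / a**2,   eps**2 * mu == D / sqrt(m),
# with eps = a/m and mu = m*sqrt(m)*D/a**2.
random.seed(0)
for _ in range(100):
    a = random.uniform(0.1, 5.0)
    m = random.uniform(0.1, 5.0)
    D = random.uniform(0.1, 5.0)
    eps = a / m
    mu = m * math.sqrt(m) * D / a ** 2
    assert math.isclose(eps ** 2 * mu ** 2, D ** 2 * m / a ** 2, rel_tol=1e-12)
    assert math.isclose(eps ** 2 * mu, D / math.sqrt(m), rel_tol=1e-12)
print("scalings consistent")
```

This is, of course, a one-line algebra exercise; the point of the check is only that every coefficient of the first-order system matches the second-order system term by term.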
In the autonomous case $ f \equiv 0 $ and $ g \equiv 0 $, system has a fixed point $ \left(1/\mu,0,0,0\right) $ and stationary pulse solutions of correspond to orbits that are homoclinic to $ \left(1/\mu,0,0,0\right) $; see Figure \[fig:pulsesa\] for an example. In the non-autonomous case $ f \neq 0, g \neq 0 $ there is no fixed point, but instead a unique bounded solution $ (u_b, p_b, 0, 0) $. In this case, stationary pulse solutions of correspond to orbits that are homoclinic to this bounded solution; see Figures \[fig:pulsesb\] and \[fig:pulsesc\] for examples. The existence of said unique bounded solution $(u_b,p_b,0,0)$ is established in the following proposition, proven later in section \[sec:exp\_dich\] (in the proof of Proposition \[prop:roughness\_closeness\_general\]).
\[proposition:bounded\_solution\] Let assumptions (A3) and (A4) be fulfilled. Then has a unique bounded solution $ (u_b, p_b, 0, 0) $ that satisfies $$\begin{aligned}
\lim_{\xi \rightarrow \pm \infty} \left(u_b(\xi), p_b(\xi), 0, 0\right) = \left(1/\mu,0,0,0\right) \, . \end{aligned}$$
\[rem:orbits\_homoclinic\_bounded\_solutions\] Note that the assumption $ \lim_{x \rightarrow \pm \infty} f(x), g(x) = 0 $ in (A4) is not necessary for the existence proof, but will be used in the stability analysis. In case $ f, g $ are only bounded without approaching a constant state when $|x| \rightarrow \infty$, the corresponding constructed pulse solution is also a homoclinic to the respective bounded solution. An illustration of such a case is given in Figure \[fig:pulsesc\], where, due to the periodicity of the coefficients $ f, g $, the bounded background solution is periodic and so is the pulse solution in its tails.
To highlight the novelty of the presented approach, we first briefly explain how the construction is carried out in the constant coefficient case $ f = g = 0 $, to then proceed to the non-autonomous case.
Stationary pulse solutions for $ f = 0 $ and $ g = 0 $ {#sec:existence_f_g_zero}
------------------------------------------------------
The fast system reads $$\begin{aligned}
\label{eq:klausmeier_model_ODE_first_order_fast_autonomous}
\left\{
\begin{array}{rcl}
\dot{u} & = & \varepsilon p \, , \\[.1cm]
\dot{p} & = & \varepsilon \left[ \varepsilon^2 \mu^2 u - \varepsilon^2 \mu + u v^2 \right] \, ,\\[.1cm]
\dot{v} & = & q\, , \\[.1cm]
\dot{q} & = & v - u v^2 \, .
\end{array}
\right. \end{aligned}$$ Note that this system possesses the symmetry $ (\xi, u, p, v, q) \rightarrow (-\xi, u, -p, v, -q) $. The corresponding slow system in the slow scaling $ \eta = \varepsilon \xi $ is given by $$\begin{aligned}
\label{eq:klausmeier_model_ODE_first_order_autonomous_slow}
\left\{
\begin{array}{rcl}
u^\prime & = & p \, , \\[.1cm]
p^\prime & = & \varepsilon^2 \mu^2 u - \varepsilon^2 \mu + u v^2 \, ,\\[.1cm]
\varepsilon v^\prime & = & q\, , \\[.1cm]
\varepsilon q^\prime & = & v - u v^2 \, .
\end{array}
\right. \end{aligned}$$ Restricted to the invariant manifold $$\begin{aligned}
\label{eq:tilde_M}
\widetilde{\mathcal{M}}:= \{ (u, p, 0, 0) ~|~ u > 0 \}\end{aligned}$$ it reads $$\begin{aligned}
\label{eq:klausmeier_model_ODE_first_order_autonomous_on_M}
\left\{
\begin{array}{rcl}
u^\prime & = & p \, , \\[.1cm]
p^\prime & = & \varepsilon^2 \mu^2 u - \varepsilon^2 \mu \, ,
\end{array}
\right. \end{aligned}$$ which has a saddle structure around the fixed point $ \left(\frac{1}{\mu}, 0 \right) $ with stable and unstable eigenspaces given by $$\begin{aligned}
\label{eq:l_u_s_autonomous}
\tilde{l}^{u/s}:= \left\{ (u, p) ~|~ p = \pm \varepsilon \mu \left(u - \frac{1}{\mu}\right) \right\} \, .\end{aligned}$$
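The saddle structure can be confirmed directly: the linearization of the slow flow about $\left(\frac{1}{\mu}, 0\right)$ is the matrix with rows $(0,1)$ and $(\varepsilon^2\mu^2, 0)$, whose eigenvalues are $\pm\varepsilon\mu$ with eigenvectors $(1, \pm\varepsilon\mu)^T$ spanning the lines above. A minimal numerical check, with hypothetical values for $\varepsilon$ and $\mu$:

```python
# The planar slow flow u' = p, p' = eps**2*mu**2*(u - 1/mu) linearizes to
# J = [[0, 1], [eps**2*mu**2, 0]]; verify that (1, +-eps*mu) are eigenvectors
# with eigenvalues +-eps*mu. Parameter values are illustrative only.
import math

eps, mu = 0.05, 0.04
lam = eps * mu                       # expected eigenvalue magnitude

for sign in (+1.0, -1.0):
    vec = (1.0, sign * lam)          # candidate eigenvector
    jv = (vec[1], eps**2 * mu**2 * vec[0])   # J @ vec
    assert math.isclose(jv[0], sign * lam * vec[0])
    assert math.isclose(jv[1], sign * lam * vec[1])
```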
Note that this step is much more intricate in the case of varying coefficients $ f,g $ where explicit solutions are possible only for very specific choices of coefficients. Therefore, one must resort to estimation techniques for the general case. Overcoming this difficulty using exponential dichotomies is the core contribution of the present work.
The reduced fast system has the form $$\begin{aligned}
\label{eq:klausmeier_model_ODE_first_order_autonomous_fast_reduced}
\left\{
\begin{array}{rcl}
\dot{u} & = & \ 0 \, , \quad \dot{p} \ = \ 0 \, ,\\[.1cm]
\dot{v} & = & q\, , \\[.1cm]
\dot{q} & = & v - u v^2 \, .
\end{array}
\right. \end{aligned}$$ A sketch of its planar subsystem $ \dot{v} = q, \dot{q} = v - u v^2 $ can be found in Figures \[fig:fastReducedSystem\]; this planar subsystem is a Hamiltonian system with Hamiltonian $$\begin{aligned}
\label{eq:hamiltonian_planar_subsystem}
H(v, q; u) = \frac{1}{2} q^2 - \frac{1}{2} v^2 + \frac13 u v^3 \, . \end{aligned}$$ Its fixed point $ (v,q) = (0,0) $ features a saddle structure and a family of homoclinic orbits $$\begin{aligned}
\label{eq:homoclinic}
\left\{
\begin{array}{rcl}
v_{hom}^{(0)}(\xi;u_0) &=& \frac{1}{u_0} \, \omega(\xi) \, , \quad \omega(\xi) : = \frac{3}{2} \, \mathrm{sech}^2(\xi/2) \, ,\\
q_{hom}^{(0)}(\xi;u_0) &=& \dot{v}_{hom}^{(0)}(\xi;u_0) \, , \quad u_0 \in \mathbb{R}\backslash\{0\} \, ,
\end{array}
\right.\end{aligned}$$ connecting its stable and unstable manifolds. Hence, the reduced fast system \[eq:klausmeier\_model\_ODE\_first\_order\_autonomous\_fast\_reduced\] is a Hamiltonian system with $$\begin{aligned}
\label{eq:hamiltonian_autonomous}
\widetilde{K}(u, p, v, q) = H(v, q; u)\, .\end{aligned}$$ The invariant manifold $ \widetilde{\mathcal{M}} $ from \[eq:tilde\_M\] is the collection of saddle points $ (u,p,0,0), u > 0, p \in \mathbb{R}, $ for the reduced fast system \[eq:klausmeier\_model\_ODE\_first\_order\_autonomous\_fast\_reduced\] and is, hence, normally hyperbolic. For its stable and unstable manifolds $ W_{0}^{s/u}(\widetilde{\mathcal{M}}) $ it holds true that $\dim [W_{0}^{s/u}(\widetilde{\mathcal{M}})] = 3 $ and, in fact, $ W_{0}^{s}(\widetilde{\mathcal{M}})$ and $W_{0}^{u}(\widetilde{\mathcal{M}}) $ (partly) coincide, where the intersection is simply given by the family of homoclinic orbits. Moreover, we have that $ \widetilde{K}(u, p, v, q)|_{(u, p, v, q) \in \widetilde{\mathcal{M}}} = 0 $.
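Both the explicit homoclinic \[eq:homoclinic\] and the vanishing of the Hamiltonian on it can be checked numerically. The sketch below, with a hypothetical value of $u_0$, verifies the ODE residual of $v'' = v - u_0 v^2$ by finite differences and evaluates $H$ along the orbit:

```python
# v_hom(xi; u0) = (3/(2*u0)) * sech(xi/2)**2 should satisfy v'' = v - u0*v**2,
# and H = q**2/2 - v**2/2 + u0*v**3/3 should vanish on the homoclinic.
# The value u0 = 2.5 is illustrative.
import math

u0 = 2.5

def sech(x):
    return 1.0 / math.cosh(x)

def v(xi):                                   # homoclinic profile
    return (3.0 / (2.0 * u0)) * sech(xi / 2.0) ** 2

def q(xi):                                   # its derivative v'
    return -(3.0 / (2.0 * u0)) * sech(xi / 2.0) ** 2 * math.tanh(xi / 2.0)

h = 1e-3
for xi in (-3.0, -0.7, 0.0, 1.2, 4.0):
    vpp = (v(xi + h) - 2.0 * v(xi) + v(xi - h)) / h**2   # central 2nd difference
    assert abs(vpp - (v(xi) - u0 * v(xi) ** 2)) < 1e-5   # ODE residual
    H = 0.5 * q(xi) ** 2 - 0.5 * v(xi) ** 2 + u0 * v(xi) ** 3 / 3.0
    assert abs(H) < 1e-12                                # zero energy level
```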
For $ \varepsilon > 0 $, we note that $ \widetilde{\mathcal{M}} $ is still an invariant manifold of the full system \[eq:klausmeier\_model\_ODE\_first\_order\_fast\_autonomous\]. It is a standard result in geometric singular perturbation theory (see, e.g., the classic articles [@Tik48; @Fen79; @Jon95] or, more recently, [@Kue15]) that, for $\varepsilon$ sufficiently small, its stable and unstable manifolds persist as $ W_{\varepsilon}^{s/u}(\widetilde{\mathcal{M}}) $ with $\dim[W_{\varepsilon}^{s/u}(\widetilde{\mathcal{M}})] = 3 $, but do not necessarily coincide anymore. In fact, they generically meet in a 2D intersection in $ \mathbb{R}^4 $.
In order to analyze the persistence of homoclinic orbits we measure the distance of $ W_{\varepsilon}^{s}(\widetilde{\mathcal{M}}) $ and $ W_{\varepsilon}^{u}(\widetilde{\mathcal{M}}) $ in the hyperplane $ \widetilde{R} = \{ (u,p,v,q) ~|~ q = 0 \} $, that is, we fix an even homoclinic orbit $ (u_{hom}, p_{hom}, v_{hom}, q_{hom}) $ with $ (u_{hom}(0), p_{hom}(0), v_{hom}(0), q_{hom}(0)) = (u_0, p_0, v_{max}, 0) $. To this end we use the Hamiltonian $ \widetilde{K} $ and analyze its difference during the jump of the orbit through the fast field $$\begin{aligned}
\label{eq:fast_field}
I_f : = \left( - \frac{1}{\sqrt{\varepsilon}} \, , \frac{1}{\sqrt{\varepsilon}} \right) \, ,\end{aligned}$$ by setting up $$\begin{aligned}
\label{eq:change_hamiltonian}
\Delta_{I_f} \widetilde{K} = \widetilde{K}(1/\sqrt{\varepsilon}) - \widetilde{K}(-1/\sqrt{\varepsilon}) = \int_{I_f} \frac{d}{d \xi} \widetilde{K}(\xi) \, d \xi = \frac13 \varepsilon \int_{I_f} p(\xi) v_{hom}(\xi)^3 \, d \xi + h.o.t.\end{aligned}$$ where we used that $ \frac{d}{d \xi} \widetilde{K} = \frac{\partial}{\partial u} H(v,q;u) (\frac{du}{d \xi}) + \frac{d}{d \xi}H(v,q;u) = \frac13 v^3 (\frac{du}{d \xi}) + 0 = \frac13 \varepsilon v^3 p$. We may set (using the fact that $p$ is constant to leading order) $
p(\xi)= p^{(0)} + \varepsilon p^{(1)}(\xi) + h.o.t.
$ Therefore, in order to make this difference vanish to leading order, we evidently need that $p^{(0)} = 0$ and $p^{(1)}(0) = 0$.
Now that a departure and return mechanism from and back to $ \widetilde{\mathcal{M}} $ is established through the intersection $ W_{\varepsilon}^{s}(\widetilde{\mathcal{M}}) \cap W_{\varepsilon}^{u}(\widetilde{\mathcal{M}}) \cap \widetilde{R} $, the remaining task is to determine possible take-off and touch-down points on $ \widetilde{\mathcal{M}} $ and investigate whether these intersect the stable and unstable eigenspaces $ \tilde{l}^{s/u} $ appropriately to form a homoclinic. To this end we observe that $$\begin{aligned}
\Delta_{I_f} u &= u(1/\sqrt{\varepsilon}) - u(-1/\sqrt{\varepsilon}) = \int_{I_f} \frac{d}{d \xi} u(\xi) \, d \xi = \varepsilon^2 \int_{I_f} p^{(1)}(\xi) \, d \xi = \mathcal{O}(\varepsilon^{3/2})\, ,\\
\Delta_{I_f} p &= p(1/\sqrt{\varepsilon}) - p(-1/\sqrt{\varepsilon}) = \int_{I_f} \frac{d}{d \xi} p(\xi) \, d \xi = \varepsilon u_0 \int_{I_f} v_{hom}^{(0)}(\xi)^2 \, d \xi = \frac{6}{u_0} \varepsilon + h.o.t. \, ,\end{aligned}$$ so, to leading order, only the $ p $-variable changes during the fast jump, and therefore, the take-off and touch-down curves on $ \widetilde{\mathcal{M}}$ are to leading order given by $$\begin{aligned}
\widetilde{T}_{o/d}: = \left\{ \left. \left(u,p, 0, 0 \right) ~\right|~ p = \mp \frac{3\varepsilon}{u}, u > 0 \right\} \, ,\end{aligned}$$ where we used that, by symmetry, to leading order $$\begin{aligned}
p(\pm 1/\sqrt{\varepsilon}) = p(0) \pm \frac12 \Delta_{I_f} p = \varepsilon \left( p^{(1)}(0) \pm \frac{3}{u_0} \right)\, . \end{aligned}$$ Finally, a straightforward computation of the intersection points of these with the stable and unstable eigenspaces $ \tilde{l}^{s/u} $ gives two possible homoclinics when $\mu \leq \frac{1}{12}$, with $$\begin{aligned}
\label{eq:intersection_points_aut}
u_0^{\pm} = \frac{1 \pm \sqrt{1-12 \mu}}{2 \mu} \, \qquad \left(\mbox{for } \mu \leq \frac{1}{12}\right).\end{aligned}$$
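The values $u_0^\pm$ are precisely the roots of the quadratic $\mu u^2 - u + 3 = 0$ obtained from intersecting the touch-down curve with the stable eigenspace, and the factor $6/u_0$ in the jump of $p$ traces back to $\int_{\mathbb{R}} v_{hom}^{(0)}(\xi;u_0)^2 \, d\xi = 6/u_0^2$. Both facts can be verified numerically; the values of $\mu$ and $u_0$ below are illustrative:

```python
# (i) u0_pm = (1 +- sqrt(1 - 12*mu)) / (2*mu) should solve mu*u**2 - u + 3 = 0.
# (ii) int v_hom**2 dxi = 6/u0**2, via a Riemann sum on a wide window.
import math

mu = 0.05                                    # hypothetical, mu <= 1/12
disc = math.sqrt(1.0 - 12.0 * mu)
for u0 in ((1.0 + disc) / (2.0 * mu), (1.0 - disc) / (2.0 * mu)):
    assert abs(mu * u0**2 - u0 + 3.0) < 1e-9

u0 = 3.0                                     # hypothetical take-off value
step, half_width = 1e-3, 60.0
total, xi = 0.0, -half_width
while xi < half_width:
    v_hom = (3.0 / (2.0 * u0)) / math.cosh(xi / 2.0) ** 2
    total += v_hom**2 * step
    xi += step
assert abs(total - 6.0 / u0**2) < 1e-6
```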
\[remark:u0\_autonomous\_mu\_small\] When $\mu \ll 1$, the expression for $u_0^\pm$, , can be expanded in terms of $\mu$; this yields for $u_0^\pm$ the following expansions $$\label{eq:u0_autonomous_mu_small}
\begin{array}{rcrcrcrcrc}
u_0^- & = && & 3 &+& 9 \mu &+& \mathcal{O}(\mu^2) \\
u_0^+ & = & \frac{1}{\mu} &-& 3 &-& 9 \mu &+& \mathcal{O}(\mu^2)
\end{array}$$
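A quick numerical check of these expansions, with illustrative small values of $\mu$:

```python
# Verify u0^- = 3 + 9*mu + O(mu**2) and u0^+ = 1/mu - 3 - 9*mu + O(mu**2);
# the next coefficient is 54*mu**2, so a bound of 100*mu**2 is safe.
import math

for mu in (1e-2, 1e-3):
    disc = math.sqrt(1.0 - 12.0 * mu)
    u0_minus = (1.0 - disc) / (2.0 * mu)
    u0_plus = (1.0 + disc) / (2.0 * mu)
    assert abs(u0_minus - (3.0 + 9.0 * mu)) < 100.0 * mu**2
    assert abs(u0_plus - (1.0 / mu - 3.0 - 9.0 * mu)) < 100.0 * mu**2
```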
A conceptual sketch of the dynamics on $\widetilde{\mathcal{M}}$, along with an excursion through the fast field, is given in Figure \[fig:manifoldPlusExcursion\]. Moreover, in Figures \[fig:const-3D\] and \[fig:const-UxU\], the evolution of a homoclinic solution is projected onto the manifold $\widetilde{\mathcal{M}}$.
![Numerical simulations resulting in a stationary pulse solution with $f(x) = h'(x)$, $g(x) = h''(x)$, where $h(x) = 0$ (a,b), $h(x) = \exp(-x^2/2)$ (c,d) and $h(x) = 0.1 \cos(2x)$ (e,f). Shown are projections to the $(x,U,U_x)$-plane (a,c,e) and the $(U,U_x)$-plane (b,d,f) of a stationary pulse solution (blue) and the bounded solution $u_b$ to which the $U$-component converges for $|x| \rightarrow \infty$. Parts of the take-off and touch-down curves ($T_{o/d}$) along with stable and unstable manifolds at $x = 0$ are also sketched in green respectively red. Note that the plots in this figure correspond to the plots in Figure \[fig:pulses\].](figures/CONST-3D.pdf "fig:"){width="\textwidth"}
![](figures/CONST-UxU.pdf "fig:"){width="80.00000%"}
![](figures/VAR-3D.pdf "fig:"){width="\textwidth"}
![](figures/VAR-UxU.pdf "fig:"){width="80.00000%"}
![](figures/COS-3D.pdf "fig:"){width="\textwidth"}
![](figures/COS-UxU.pdf "fig:"){width="80.00000%"}
Stationary pulse solutions for varying $ f $ and $ g $ {#sec:existence_varying_f_g}
------------------------------------------------------
First, we convert the non-autonomous system into an autonomous one by setting $$\begin{aligned}
s(\xi): = \frac{D}{\sqrt{m}} \, \xi = \varepsilon^2 \mu \xi \, , \end{aligned}$$ which gives the extended (autonomous) fast system $$\begin{aligned}
\label{eq:klausmeier_model_ODE_first_order_fast}
\left\{
\begin{array}{rcl}
\dot{s} & = & \varepsilon^2 \mu \, , \\[.1cm]
\dot{u} & = & \varepsilon p \, , \\[.1cm]
\dot{p} & = & \varepsilon \left[ \varepsilon^2 \mu^2 u - \varepsilon \mu f\left( s\right) p - \varepsilon^2 \mu^2 g\left( s\right) u - \varepsilon^2 \mu + u v^2 \right] \, ,\\[.1cm]
\dot{v} & = & q\, , \\[.1cm]
\dot{q} & = & v - u v^2 \, .
\end{array}
\right. \end{aligned}$$ It is important to note that the symmetry assumptions (A2) on $ f $ and $ g $ translate directly into a symmetry of \[eq:klausmeier\_model\_ODE\_first\_order\_fast\], which is crucial for the construction of a homoclinic.
\[lemma:symmetry\_full\_system\] Let the symmetry assumptions (A2) be fulfilled, that is, let $ f $ be an odd function and $ g $ be an even function. Then \[eq:klausmeier\_model\_ODE\_first\_order\_fast\] possesses the symmetry $ (s, u, p, v, q) \rightarrow (-s, u, -p, v, -q) $.
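The reversibility asserted in the lemma can be tested numerically: writing the extended fast system as $\dot{w} = G(w)$ with $w = (s,u,p,v,q)$ and $R(s,u,p,v,q) = (-s,u,-p,v,-q)$, the claim is equivalent to $G(R(w)) = -R(G(w))$. A minimal sketch with hypothetical coefficients; only the oddness of $f$ and the evenness of $g$ matter here:

```python
# Check the reversibility G(R(w)) = -R(G(w)) of the extended fast system at an
# arbitrary test point. f = sin (odd) and g = cos (even) are illustrative
# stand-ins; eps, mu and the test point are likewise our own choices.
import math

eps, mu = 0.1, 0.3
f, g = math.sin, math.cos

def G(w):
    s, u, p, v, q = w
    return (
        eps**2 * mu,
        eps * p,
        eps * (eps**2 * mu**2 * u - eps * mu * f(s) * p
               - eps**2 * mu**2 * g(s) * u - eps**2 * mu + u * v**2),
        q,
        v - u * v**2,
    )

def R(w):
    s, u, p, v, q = w
    return (-s, u, -p, v, -q)

w = (0.7, 1.3, -0.2, 0.5, 0.4)
lhs = G(R(w))
rhs = tuple(-c for c in R(G(w)))
assert all(math.isclose(x, y, abs_tol=1e-14) for x, y in zip(lhs, rhs))
```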
The slow system corresponding to \[eq:klausmeier\_model\_ODE\_first\_order\_fast\] in the slow variable $ \eta = \varepsilon \xi $ is given by $$\begin{aligned}
\label{eq:klausmeier_model_ODE_first_order_slow}
\left\{
\begin{array}{rcl}
s^\prime & = & \varepsilon \mu \, , \\[.1cm]
u^\prime & = & p \, , \\[.1cm]
p^\prime & = & \varepsilon^2 \mu^2 u - \varepsilon \mu f\left( s\right) p - \varepsilon^2 \mu^2 g\left( s\right) u - \varepsilon^2 \mu + u v^2 \, ,\\[.1cm]
\varepsilon v^\prime & = & q\, , \\[.1cm]
\varepsilon q^\prime & = & v - u v^2 \, .
\end{array}
\right. \end{aligned}$$ It possesses a three-dimensional invariant manifold $$\begin{aligned}
\label{eq:M}
\mathcal{M}:=\{ (s, u, p, 0, 0) ~|~ u > 0, s, p \in \mathbb{R} \} \subset \mathbb{R}^5 \, ,\end{aligned}$$ on which it takes the form $$\begin{aligned}
\label{eq:klausmeier_model_ODE_first_order_slow_on_M}
\left\{
\begin{array}{rcl}
s^\prime & = & \varepsilon \mu \, , \\[.1cm]
u^\prime & = & p \, , \\[.1cm]
p^\prime & = & \varepsilon^2 \mu^2 u - \varepsilon \mu f\left( s\right) p - \varepsilon^2 \mu^2 g\left( s\right) u - \varepsilon^2 \mu \, .
\end{array}
\right. \end{aligned}$$ which is an extension of the non-autonomous system $$\begin{aligned}
\label{eq:klausmeier_model_ODE_first_order_slow_on_M_nonautonomous}
\left\{
\begin{array}{rcl}
u^\prime & = & p \, , \\[.1cm]
p^\prime & = & \varepsilon^2 \mu^2 u - \varepsilon \mu f\left( \varepsilon \mu \eta \right) p - \varepsilon^2 \mu^2 g\left(\varepsilon \mu \eta\right) u - \varepsilon^2 \mu \, .
\end{array}
\right. \end{aligned}$$ It is now convenient to introduce (or, actually, return to) the super-slow variable $ x = \varepsilon \mu \eta $. We set $ u(\eta) = \frac{1}{\mu} \hat{u}(\varepsilon \mu \eta) = \frac{1}{\mu} \hat{u}(x) $ and return to the second order non-autonomous setting $$\begin{aligned}
\label{eq:klausmeier_model_ODE_first_order_slow_on_M_rescaled}
\left\{
\begin{array}{rcl}
\frac{d}{dx} \hat{u} & = & \hat{p} \, , \\[.1cm]
\frac{d}{dx} \hat{p} & = & \hat{u} - f\left( x\right) \hat{p} - g\left(x\right) \hat{u} - 1\, .
\end{array}
\right. \end{aligned}$$
\[lemma:symmetry\_reduced\_system\] Let the symmetry assumptions (A2) be fulfilled, that is, let $ f $ be an odd function and $ g $ be an even function. Then \[eq:klausmeier\_model\_ODE\_first\_order\_slow\_on\_M\_rescaled\] possesses the symmetry $ (x, \hat{u}, \hat{p}) \rightarrow (-x, \hat{u}, -\hat{p}) $.
For conciseness, we note that we have three different scales: the fast scale $ \xi $, the slow scale $ \eta = \varepsilon \xi $, and the super-slow scale $ x = \varepsilon \mu \eta = \varepsilon^2 \mu \xi $.
The construction that we illustrate in this article therefore relies heavily on assumption (A1). The specific definition of the small parameter is convenient since the fast reduced system is an ODE which is known to have homoclinic solutions and the slow system on the critical manifold $ \mathcal{M} $ is a linear planar system.
\[remark:scaling\_p\_phat\] Note the difference between $p = \frac{du}{d\eta}$ and $\hat{p} = \frac{d\hat{u}}{dx}$. Hence, $p = \varepsilon \hat{p}$.
\[prop:dynamics\_slow\_manifold\] Consider the slow system on $ \mathcal{M} $ with $ f, g $ fulfilling (A3). Then there exists a unique bounded solution $ (\hat{u}_b, \hat{p}_b) $ of \[eq:klausmeier\_model\_ODE\_first\_order\_slow\_on\_M\_rescaled\] and a corresponding connected set $\Gamma \subset \mathbb{R} \cup \{\infty\}$ such that the following holds true: For each fixed $ x \in \mathbb{R} $ there exists $ C^{s/u}(x) \in \Gamma $ and lines $$\begin{aligned}
\label{eq:lines}
l^{s/u}(x) := \{ (\hat{u},\hat{p}) ~|~ \hat{p} - \hat{u}_b'(x) = C^{s/u}(x) (\hat{u} - \hat{u}_b(x)) \} \, , \end{aligned}$$ such that the solution to the initial value problem with $ (\hat{u}(x), \hat{p}(x)) = (\hat{u}_0,\hat{p}_0) \in l^{s}(x)$ converges to $ (\hat{u}_b, \hat{p}_b) $ for $ x \rightarrow \infty $, while with $ (\hat{u}(x), \hat{p}(x)) = (\hat{u}_0,\hat{p}_0) \in l^{u}(x)$ it converges to $ (\hat{u}_b, \hat{p}_b) $ for $ x \rightarrow -\infty $. Moreover, if $f$ and $g$ fulfill the symmetry assumption (A2), $C^{s/u}$ possess the symmetry $C^s(x) = - C^u(-x)$ for all $x \in \mathbb{R}$. In particular, $C^s(0) = - C^u(0)$.
The proof of Proposition \[prop:dynamics\_slow\_manifold\] constitutes the contents of section \[sec:dynamics\_slow\_manifold\]. Also note the similarities with Proposition \[proposition:bounded\_solution\], since the bounded solutions mentioned in both Propositions are identical up to the scaling $\hat{u}_b(x) = \mu u_b(\xi)$.
When $\lim_{x \rightarrow \pm \infty} f(x),g(x) = 0$ (i.e. assumption (A4)), the unique bounded solution $(\hat{u}_b,\hat{p}_b)$ limits to the fixed point of the autonomous equation. That is, $$\begin{aligned}
\lim_{x \rightarrow \pm \infty} (\hat{u}_b(x), \hat{p}_b(x)) = (1,0) \, .\end{aligned}$$
This result implies that there are trajectories on $ \mathcal{M} $ that lead to and away from the bounded solution $ (\hat{u}_b, \hat{p}_b) $. Hence, the only remaining construction steps are the analysis of persistence of orbits biasymptotic to $ \mathcal{M} $ and their touch-down/take-off locations. We therefore switch back to the fast system and examine the dynamics during the jump of an orbit through the fast field. In order to pass to the reduced fast system, we use the assumption (A5) so, in the limit $ \varepsilon \rightarrow 0 $, we get the reduced fast system $$\begin{aligned}
\label{eq:klausmeier_model_ODE_first_order_fast_reduced}
\left\{
\begin{array}{rcl}
\dot{s} & = & 0 \, , \quad \dot{u} \ = \ 0 \, , \quad \dot{p} \ = \ 0 \, ,\\[.1cm]
\dot{v} & = & q\, , \\[.1cm]
\dot{q} & = & v - u v^2 \, .
\end{array}
\right. \end{aligned}$$ Note that in the reduced fast system the non-autonomous character of our problem is not visible. The only difference is the added trivial equation $ \dot{s} = 0 $. As alluded to in the constant coefficient case in section \[sec:existence\_f\_g\_zero\], the planar subsystem $ \dot{v} = q, \dot{q} = v - u v^2 $ is known to be Hamiltonian and features a homoclinic to the saddle point $ (v,q) = (0,0) $ which can be specified explicitly (see \[eq:homoclinic\]). As a result, \[eq:klausmeier\_model\_ODE\_first\_order\_fast\_reduced\] is also Hamiltonian with $$\begin{aligned}
\label{eq:hamiltonian_nonautonomous}
K(s, u, p, v, q) = H(v, q; u)\, .\end{aligned}$$ The invariant manifold $ \mathcal{M} $ from \[eq:M\] is the collection of saddle points $ (s,u,p,0,0), u > 0, s, p \in \mathbb{R}, $ for the reduced fast system \[eq:klausmeier\_model\_ODE\_first\_order\_fast\_reduced\] and is, hence, normally hyperbolic. For its stable and unstable manifolds $ W_{0}^{s/u}(\mathcal{M}) $ it holds true that $\dim [W_{0}^{s/u}(\mathcal{M})] = 4 $ and, in fact, $ W_{0}^{s}(\mathcal{M})$ and $W_{0}^{u}(\mathcal{M}) $ (partly) coincide, where the intersection is simply given by the family of homoclinic orbits. Moreover, we have that $ K(s, u, p, v, q)|_{(s,u, p, v, q) \in {\mathcal{M}}} = 0 $.
The analogy with the constant coefficient case continues for $ \varepsilon > 0 $ sufficiently small; we still have that $ {\mathcal{M}} $ is an invariant manifold of the full system \[eq:klausmeier\_model\_ODE\_first\_order\_fast\] and that its stable and unstable manifolds persist as $ W_{\varepsilon}^{s/u}(\mathcal{M}) $ with $\dim[W_{\varepsilon}^{s/u}(\mathcal{M})] = 4 $, but do not necessarily coincide anymore. In fact, they generically meet in a 3D intersection in $ \mathbb{R}^5 $.
\[prop:persistence\] Let $\varepsilon$ be sufficiently small.
1. Define the hyperplane $ R = \{ (s, u, p, v, q) ~|~ q = 0 \} $. Then $\dim[W_{\varepsilon}^{s}(\mathcal{M}) \cap W_{\varepsilon}^{u}(\mathcal{M}) \cap R ] = 2 $ and orbits in this intersection fulfill $ p(\xi) = \varepsilon p^{(1)}(\xi) + h.o.t. $, that is, the leading order constant term $ p^{(0)} $ vanishes.
2. The take-off and touch-down surfaces on $ \mathcal{M} $ of orbits in the intersection $ W_{\varepsilon}^{s}(\mathcal{M}) \cap W_{\varepsilon}^{u}(\mathcal{M}) \cap R $ are to leading order given by $$\begin{aligned}
\label{eq:take_off_touch_down_surfaces}
{T}_{o/d}(s): = \left\{ \left. \left(s, u, p, 0, 0 \right) ~\right|~ p = \mp \frac{3\varepsilon}{u}, u > 0 \right\} \, .\end{aligned}$$
3. For orbits in the intersection $ W_{\varepsilon}^{s}(\mathcal{M}) \cap W_{\varepsilon}^{u}(\mathcal{M}) \cap R $ the touch-down curve $ {T}_{d}(0) $ and stable line $ l^s(0) $ from \[eq:lines\] intersect in at most two points $$\begin{aligned}
\label{eq:intersection_points}
u_0^{\pm} = \frac{u_b(0) \pm \sqrt{u_b(0)^2+ 12/ (\mu C^s(0))}}{2} \, ,\end{aligned}$$ where $ C^s(0) $ is the slope of the stable line $ l^s(0) $ from \[eq:lines\] and $ \hat{u}_b = \mu u_b $ is the (rescaled) bounded background solution from Proposition \[prop:dynamics\_slow\_manifold\]. By symmetry, the analogous statement holds for the take-off curve $ {T}_{o}(0) $ and unstable line $ l^u(0) $ from \[eq:lines\]. In particular, the thus computed $u_0^\pm$-values coincide by the aforementioned symmetry $C^u(0) = -C^s(0)$ – see Proposition \[prop:dynamics\_slow\_manifold\].
4. There are two even homoclinic orbits for \[eq:klausmeier\_model\_ODE\_first\_order\_fast\] with $u_0^\pm > 0$ in case $ u_b(0)^2+ 12/(\mu C^s(0)) > 0 $ and $ u_b(0) - \sqrt{u_b(0)^2+ 12/ (\mu C^s(0))} > 0 $.
If we set $ u_b(0) = \frac{1}{\mu} $ and $ C^s(0) = -1 $ in \[eq:intersection\_points\], we recover \[eq:intersection\_points\_aut\].
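This reduction can be confirmed numerically, with an illustrative value of $\mu$:

```python
# Substituting u_b(0) = 1/mu and C^s(0) = -1 into the non-autonomous
# intersection formula should reproduce the autonomous values
# u0^pm = (1 +- sqrt(1 - 12*mu)) / (2*mu). mu = 0.06 is our own choice.
import math

mu = 0.06                                   # hypothetical, mu <= 1/12
ub0, Cs0 = 1.0 / mu, -1.0
for sign in (+1.0, -1.0):
    u0_gen = (ub0 + sign * math.sqrt(ub0**2 + 12.0 / (mu * Cs0))) / 2.0
    u0_aut = (1.0 + sign * math.sqrt(1.0 - 12.0 * mu)) / (2.0 * mu)
    assert math.isclose(u0_gen, u0_aut)
```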
Measuring the distance of $ W_{\varepsilon}^{s}({\mathcal{M}}) $ and $ W_{\varepsilon}^{u}({\mathcal{M}}) $ in the hyperplane $ R $ can again be accomplished using the difference of the Hamiltonian $ K $ during the jump of the orbit through the fast field \[eq:fast\_field\]. We have \[eq:change\_hamiltonian\] exactly as in the constant coefficient case, where (using that $ p $ is constant to leading order) we have set $
p(\xi)= p^{(0)} + \varepsilon p^{(1)}(\xi) + h.o.t. \, ,
$ and used that $ \frac{d}{d \xi} {K} = \frac{\partial}{\partial s} K(s, u, p, v, q)(\frac{ds}{d \xi}) + \frac{\partial}{\partial u} H(v,q;u) (\frac{du}{d \xi}) + \frac{d}{d \xi}H(v,q;u) = 0 + \frac13 v^3 (\frac{du}{d \xi}) + 0 = \frac13 \varepsilon v^3 p$. In order to make this difference vanish to leading order, we evidently need that $ p^{(0)} = 0 $ and $p^{(1)}(0) = 0$. This proves the first statement.
In order to construct the take-off and touch-down curves, we again investigate the change of the fast variables during the jump through the fast field: $$\begin{aligned}
\Delta_{I_f} s &= s(1/\sqrt{\varepsilon}) - s(-(1/\sqrt{\varepsilon})) = \int_{I_f} \frac{d}{d \xi} s(\xi) \, d \xi = \frac{2}{\sqrt{\varepsilon}} \, \varepsilon^2 \mu = \mathcal{O}(\varepsilon^{3/2})\, ,\\
\Delta_{I_f} u &= u(1/\sqrt{\varepsilon}) - u(-(1/\sqrt{\varepsilon})) = \int_{I_f} \frac{d}{d \xi} u(\xi) \, d \xi = \varepsilon^2 \int_{I_f} p^{(1)}(\xi) \, d \xi = \mathcal{O}(\varepsilon^{3/2})\, ,\\
\Delta_{I_f} p &= p(1/\sqrt{\varepsilon}) - p(-(1/\sqrt{\varepsilon})) = \int_{I_f} \frac{d}{d \xi} p(\xi) \, d \xi = \varepsilon u_0 \int_{I_f} v_{hom}^{(0)}(\xi)^2 \, d \xi = \frac{6}{u_0} \varepsilon + h.o.t. \, ,\end{aligned}$$ Hence, to leading order, only the $ p $-variable changes during the fast jump, and therefore, the take-off and touch-down curves on $ {\mathcal{M}}$ are to leading order given by \[eq:take\_off\_touch\_down\_surfaces\], where we used that, by symmetry, $
p(\pm 1/\sqrt{\varepsilon}) = p(0) \pm \frac12 \Delta_{I_f} p \, .
$ This proves the second statement.
Equating \[eq:lines\] and \[eq:take\_off\_touch\_down\_surfaces\] (where we used that $p = \varepsilon \hat{p}$ – see Remark \[remark:scaling\_p\_phat\]) gives the equality $$\begin{aligned}
\varepsilon \mu C^{s}(0)\left( u_0 - u_b(0) \right) = \frac{3 \varepsilon}{u_0};\end{aligned}$$ the solutions of which give the claimed expression in the third statement. Finally, the fourth statement follows from inspecting \[eq:intersection\_points\].
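As a cross-check of the third statement, the roots $u_0^\pm$ can be substituted back into the matching condition. The values for $\varepsilon$, $\mu$, $C^s(0)$ and $u_b(0)$ below are hypothetical and merely chosen so that both roots are real and positive:

```python
# Verify that u0^pm = (u_b(0) +- sqrt(u_b(0)**2 + 12/(mu*Cs0))) / 2 solve
# eps*mu*Cs0*(u0 - u_b(0)) = 3*eps/u0. All parameter values are illustrative.
import math

eps, mu, Cs0, ub0 = 0.05, 0.07, -0.8, 20.0
disc = math.sqrt(ub0**2 + 12.0 / (mu * Cs0))
for u0 in ((ub0 + disc) / 2.0, (ub0 - disc) / 2.0):
    lhs = eps * mu * Cs0 * (u0 - ub0)       # slope condition of the stable line
    rhs = 3.0 * eps / u0                    # touch-down condition
    assert math.isclose(lhs, rhs, rel_tol=1e-9)
```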
Two examples of homoclinic solutions for varying $f$ and $g$ can be found in Figures \[fig:var-3D\]–\[fig:cos-UxU\]. In these figures the evolution of a homoclinic solution is projected onto the manifold $\mathcal{M}$, which shows the essence of Proposition \[prop:persistence\].
Proposition \[prop:persistence\] thus establishes existence of homoclinic solutions for \[eq:klausmeier\_model\_ODE\_first\_order\_fast\] under the conditions stated in Proposition \[prop:persistence\](4). However, in the case of varying coefficients, there typically are no explicit expressions available for the bounded solution $u_b(0)$ and the constant $C^s(0)$. To circumvent this, in the next section we derive bounds on these using the theory of exponential dichotomies, which simultaneously forms the proof of Proposition \[prop:dynamics\_slow\_manifold\].
Some basic results from the theory of exponential dichotomies {#sec:exp_dich}
-------------------------------------------------------------
When $f$ and/or $g$ are non-constant, generically it is not possible to capture the dynamics on manifold $\mathcal{M}$ in explicit expressions. Instead, our main tools for constructing a saddle-like structure on $\mathcal{M}$ are from the theory of exponential dichotomies. To fix notation and keep the exposition self-contained, we state (following [@coppel1978stability]) the definition of exponential dichotomies along with a selection of results that we use here.
\[def:exp\_dich\] Consider the planar ODE $\frac{d}{dx} Y = B(x) Y$ for the unknown $ Y: \mathbb{R} \rightarrow \mathbb{R}^2 $ and with $B: \mathbb{R} \rightarrow \mathbb{R}^{2 \times 2} $ a matrix-valued function which is continuous on $\mathbb{R}$. Let $\Phi= \Phi(x)$ be the associated canonical solution operator. This ODE is said to have an *exponential dichotomy* if there is a projection matrix $P$ and positive constants $K$ and $\rho$ such that $$\begin{aligned}
\| \Phi(x)\ P\ \Phi^{-1}(\tilde{x}) \| &\leq K e^{-\rho (x - \tilde{x})} \, , \quad x \geq \tilde{x} \, , \\
\| \Phi(x)\ (I-P)\ \Phi^{-1}(\tilde{x}) \| & \leq K e^{+\rho (x-\tilde{x})} \, , \quad x \leq \tilde{x} \, .\end{aligned}$$
In the next section we will be interested in first order ODEs of the form $$\label{eq:inhom_nonaut}
\frac{d}{dx} Y = [A_0 + A(x)] Y + F \, ,$$ with $ x \in \mathbb{R}, Y : \mathbb{R} \rightarrow \mathbb{R}^2, A_0 \in \mathbb{R}^{2 \times 2}, A : \mathbb{R} \rightarrow \mathbb{R}^{2\times2}, F \in \mathbb{R}^2$. In particular, we would like to corroborate knowledge of the autonomous version (which is often available in terms of explicit solutions) to deduce qualitative results for the full non-autonomous one. For the sake of clarity, we assemble first all auxiliary systems in one place: $$\begin{aligned}
\intertext{First, we have the homogeneous, autonomous system}
\label{eq:hom_aut} \frac{d}{dx} Z_h &= A_0 Z_h.\\
\intertext{Then, there is the homogeneous, non-autonomous system}
\label{eq:hom_nonaut} \frac{d}{dx} Y_h &= [A_0 + A(x)] Y_h.\\
\intertext{Finally, we have the inhomogeneous, autonomous system}
\label{eq:inhom_aut}\frac{d}{dx} Z \ &= A_0 Z + F.\end{aligned}$$
\[prop:roughness\_closeness\_general\] Let $ K_{aut}, \rho_{aut} > 0 $ be the exponential dichotomy constants of the homogeneous, autonomous ODE \[eq:hom\_aut\] and $ \Phi_{aut}, P_{aut} $ the corresponding solution and projection operators. If $$\begin{aligned}
\label{eq:delta_general}
\delta : = \sup_{x \in \mathbb{R}} ||| A(x) ||| < \frac{\rho_{aut}}{4K_{aut}^2} \, ,\end{aligned}$$ the non-autonomous ODE \[eq:hom\_nonaut\] has an exponential dichotomy for which the following holds true.
1. The exponential dichotomy constants of the homogeneous, non-autonomous ODE \[eq:hom\_nonaut\] are $ K = \frac{5}{2} K_{aut}^2 $ and ${\rho} = \rho_{aut} - 2 K_{aut} \delta$, and concerning the solution and projection operators ${\Phi}, {P}$ of \[eq:hom\_nonaut\] we have, upon defining $$\begin{aligned}
{Q}(x) := {\Phi}(x) {P} {\Phi}^{-1}(x)\, , \quad Q_{aut}(x) := {\Phi_{aut}}(x) {P_{aut}} {\Phi_{aut}}^{-1}(x) \, \end{aligned}$$ the estimate $$\begin{aligned}
\label{eq:roughnessProjections_general_Q}
||| {Q}(x) - Q_{aut}(x) ||| \leq \frac{4 K_{aut}^3 \delta}{\rho_{aut}} \, , \quad x \in \mathbb{R} \, .\end{aligned}$$
2. There exist unique bounded solutions $ Z_{b, aut}, Y_{b} $ of the inhomogeneous, autonomous and non-autonomous ODEs \[eq:inhom\_aut\] and \[eq:inhom\_nonaut\]. In particular, they satisfy $$\begin{aligned}
\label{eq:bounded_solutions_estimate}
\sup_{x \in \mathbb{R}} |||Y_b(x)-Z_{b, aut}(x)||| \leq \frac{4 \delta K_{aut} {K}}{\rho_{aut} {\rho}} \, \| F \| \, . \end{aligned}$$
The first statement is the persistence of exponential dichotomies, known as “roughness”, and is a standard result (see [@coppel1978stability Ch.4, Prop.1]). Moreover, another standard result from the theory of exponential dichotomies states that inhomogeneous equations have unique bounded solutions when the homogeneous equations have an exponential dichotomy and the inhomogeneous terms are bounded (see [@coppel1978stability Ch.8, Prop.2]). Then, to demonstrate the rest of the second statement, we define $ W(x) = {Y}_b(x) - Z_{b,aut}(x) $ which gives $ W'(x) = A_0 W(x) + G(x) $ with $ G(x) = A(x) {Y}_b(x) $. The unique bounded solution $ W_b $ of this ODE satisfies the estimate $
\sup_{x \in \mathbb{R}} \| W_b(x)\| \leq \frac{2 {K}_{aut}}{{\rho_{aut}}} \, \sup_{x \in \mathbb{R}} \| G(x)\| \leq \frac{4 \delta K_{aut} {K}}{\rho_{aut} {\rho}} \, \| F \| \, ,
$ where we used that $
\sup_{x \in \mathbb{R}} \| {Y}_b(x) \| \leq \frac{2 {K}}{{\rho}} \| F\| \, .
$
Dynamics on $\mathcal{M}$ (Proof of Proposition \[prop:dynamics\_slow\_manifold\]) {#sec:dynamics_slow_manifold}
----------------------------------------------------------------------------------
Let us introduce the more concise notation $Y = \left(\hat{u},\frac{d}{dx}\hat{u}\right)^T$, so that \[eq:klausmeier\_model\_ODE\_first\_order\_slow\_on\_M\_rescaled\] takes the form of \[eq:inhom\_nonaut\] from the previous section; that is, $$\label{eq:slow-system}
\frac{d}{dx} Y = [A_0 + A(x)] Y + F \, ,$$ with $$\label{eq:definition_A_0_A_x_F}
A_0 = \left( \begin{array}{cc} 0 & 1 \\ 1 & 0\end{array} \right) \, , \quad
A(x) = \left( \begin{array}{cc} 0 & 0 \\ - g(x) & -f(x) \end{array} \right) \, , \quad
F = \left( \begin{array}{c} 0 \\ -1 \end{array} \right) \, .$$
\[lemma:exp\_dich\_const\_roughness\] With the notation of Proposition \[prop:roughness\_closeness\_general\], let $$\begin{aligned}
\delta = \sup_{x \in \mathbb{R}} \sqrt{f(x)^2 + g(x)^2} < \frac14 \, .\end{aligned}$$ Then we have $ \rho_{aut}= K_{aut} = 1, \rho = 1-2 \delta, K = 5/2 $ and $$\begin{aligned}
||| {Q}(x) - Q_{aut}(x) ||| \leq 4 \delta \, , \quad x \in \mathbb{R} \, .\end{aligned}$$
We have the canonical solution operator $\Phi(x) = e^{A_0 x}$. The eigenvalues of the matrix $A_0$ are $\pm 1$ and the corresponding normed eigenvectors are $v =\frac{1}{\sqrt{2}}(1, 1)^T, w = \frac{1}{\sqrt{2}}(1, -1)^T$. Thus the fixed point $Y = (0,0)^T$ is a saddle. From this it is clear that we can choose $$P = w w^T = \frac{1}{2} \left( \begin{array}{cc} 1 & -1 \\ -1 & 1 \end{array} \right) \, .$$ With the basis transformation matrix $B = \left( v ~|~ w \right)$ and the diagonal matrix $D = \mbox{diag}(1,-1)$ we then get $$\begin{aligned}
\|\Phi(x) P \Phi^{-1}(s) \|
= \| B e^{Dx} B^{-1} P B e^{-Ds} B^{-1} \|
= \left\| \left( \begin{array}{cc} 1&-1\\-1&1 \end{array}\right) \right\| \frac{ e^{-(x-s)}}{2}
= e^{-(x-s)} \, .\end{aligned}$$ A similar reasoning – where one can use that $I-P = v v^T$ – gives $$\|\Phi(x) (I-P) \Phi^{-1}(s)\| = e^{(x-s)} \, .$$ Thus we have the estimate for exponential dichotomies from Definition \[def:exp\_dich\] with $\rho_{aut} = 1$ and $K_{aut} = 1$. The remaining statements can now be read off Proposition \[prop:roughness\_closeness\_general\].
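These explicit dichotomy constants can be confirmed numerically. The following is a small Python sketch (NumPy/SciPy assumed; an illustration, not part of the argument):

```python
import numpy as np
from scipy.linalg import expm

A0 = np.array([[0.0, 1.0], [1.0, 0.0]])          # saddle with eigenvalues +-1
P = 0.5 * np.array([[1.0, -1.0], [-1.0, 1.0]])   # P = w w^T, w = (1,-1)/sqrt(2)

rng = np.random.default_rng(0)
for _ in range(100):
    s = rng.uniform(-3.0, 3.0)
    x = s + rng.uniform(0.0, 3.0)                # x >= s
    # ||Phi(x) P Phi^{-1}(s)|| = e^{-(x-s)}  (spectral norm)
    M = expm(A0 * x) @ P @ expm(-A0 * s)
    assert abs(np.linalg.norm(M, 2) - np.exp(-(x - s))) < 1e-8
    # ||Phi(x) (I-P) Phi^{-1}(s)|| = e^{x-s}
    N = expm(A0 * x) @ (np.eye(2) - P) @ expm(-A0 * s)
    assert abs(np.linalg.norm(N, 2) - np.exp(x - s)) < 1e-8
```

so the estimate of Definition \[def:exp\_dich\] indeed holds with $\rho_{aut} = K_{aut} = 1$.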
The roughness of exponential dichotomies established in Lemma \[lemma:exp\_dich\_const\_roughness\] provides a bound on the projection operator $Q(x)$ of the non-autonomous system. However, this bound cannot be used directly to prove existence of homoclinic solutions using geometric singular perturbation theory, as geometric properties need to be derived. In particular, we need to find the stable and unstable manifolds for the unique bounded solution $Y_b$ of . These can be defined as $$\begin{aligned}
W^s(Y_b) &: = \left\{ (x,Y^s(x)) ~|~ Y^s(x) = Y_b(x)+ \Phi(x)P\Phi^{-1}(x)r \, , r \in \mathbb{R}^2 \right\} \, ,\\
W^u(Y_b) &: = \left\{ (x,Y^u(x)) ~|~ Y^u(x) = Y_b(x)+ \Phi(x)(I-P)\Phi^{-1}(x)r \, , r \in \mathbb{R}^2 \right\} \, ,\end{aligned}$$ where $ \Phi, P $ are the solution and projection operator for . For the construction that we have in mind, it is convenient to notice that $$\begin{aligned}
W^{s/u}(Y_b) = \bigcup_{x \in \mathbb{R}} (x,l^{s/u}(x)) \, , \end{aligned}$$ with lines $$\begin{aligned}
l^{s}(x) &= \left\{ Y^s(x) ~|~ Y^s(x) = Y_b(x)+ \Phi(x)P\Phi^{-1}(x)r \, , r \in \mathbb{R}^2 \right\} \, , \\
l^{u}(x) &= \left\{ Y^u(x) ~|~ Y^u(x) = Y_b(x)+ \Phi(x)(I-P)\Phi^{-1}(x)r \, , r \in \mathbb{R}^2 \right\} \, .\end{aligned}$$ While, in general, it is not possible to find explicit expressions for these objects, we can derive estimates for their locations. For this we first observe that the line $l^s$ can be written equivalently as $$\begin{aligned}
l^s(x) = \left\{ (\hat{u},\hat{p}) ~|~ \hat{p} - \hat{u}_b'(x) = C(x) (\hat{u}-\hat{u}_b(x)) \right\} \, ,\end{aligned}$$ where $C(x)$ is the slope of the line. Starting from the bound on the projection operator $Q(x) = \Phi(x)P\Phi^{-1}(x)$ derived in Lemma \[lemma:exp\_dich\_const\_roughness\], a bound on the projection lines will be established in Lemma \[lemma:closeness\_projection\_lines\], which is then subsequently used to find a bound on the slope $C(x)$ via the angle $\theta(x)$ of the line in Lemma \[lemma:closeness\_slopes\].
In particular, for the case of , we thus obtain $$\begin{aligned}
l^{s}(x) = \left\{ (\hat{u},\hat{p}) ~|~ \hat{p} - \hat{u}_b'(x) = (-1 + \tilde{C}(x))(\hat{u} - \hat{u}_b(x)) \right\} \, ,\end{aligned}$$ with $ \tilde{C}(x) $ as in Lemma \[lemma:closeness\_slopes\] taking into account that the projection operator depends on $ x $, that is, $Q = Q(x)$ and so does the angle $ \theta = \theta(x) $, which defines $ C = C(x) $ and, hence, also $ \tilde{C} = \tilde{C}(x) $.
The rest of this section consists of the two technical lemmas that ultimately derive a bound for $\tilde{C}$.
\[lemma:closeness\_projection\_lines\] Let $Q$ and ${Q}_{aut}$ be the projection matrices with rank $1$ as defined in Proposition \[prop:roughness\_closeness\_general\](i), i.e. there are unit vectors $q$ and $q_{aut}$ such that $Q = q q^T$ and ${Q}_{aut} = {q}_{aut} {q}_{aut}^T$, and $\|Q - {Q}_{aut}\| < 4 \delta$ holds true. Then either $\|q - {q}_{aut} \| < \sqrt{ 8 \delta }$ or $\|q + {q}_{aut} \| < \sqrt{ 8 \delta }$.
We prove the equivalent statement that from $\|q - q_{aut} \| \geq \sqrt{8 \delta}$ and $\|q + q_{aut} \| \geq \sqrt{8\delta}$ it follows that $\|Q - Q_{aut}\| \geq 4 \delta$. First we observe that $$\begin{aligned}
(q-q_{aut})(q^T+q_{aut}^T)(q+q_{aut}) &= (q q^T - q_{aut}q_{aut}^T)(q+q_{aut}) + (q q_{aut}^T - q_{aut} q^T) (q+q_{aut}) \nonumber\\ &= 2 (q q^T- q_{aut} q_{aut}^T)(q+q_{aut}) = 2 (Q-Q_{aut})(q+q_{aut}) \, .\end{aligned}$$ Therefore, by assumption $$\begin{aligned}
& \|Q - Q_{aut}\| \, \|q+q_{aut} \| \geq \| (Q-Q_{aut})(q+q_{aut})\| \\
&= \frac12 \| (q-q_{aut})(q^T+q_{aut}^T)(q+q_{aut}) \|
= \frac12 \| q+q_{aut}\|^2 \|q - q_{aut}\|
\geq 4 \delta \|q+q_{aut}\| \, ,\end{aligned}$$ from which it follows that $ \|Q - Q_{aut}\| \geq 4 \delta $.
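The lemma can also be probed numerically: for random pairs of unit vectors the conclusion holds with $4\delta$ replaced by the actual gap $\|Q - Q_{aut}\|$. A short Python sketch (NumPy assumed; not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    th, th_aut = rng.uniform(-np.pi, np.pi, size=2)
    q = np.array([np.cos(th), np.sin(th)])
    q_aut = np.array([np.cos(th_aut), np.sin(th_aut)])
    Q = np.outer(q, q)                        # rank-1 projections q q^T
    Q_aut = np.outer(q_aut, q_aut)
    gap = np.linalg.norm(Q - Q_aut, 2)        # plays the role of 4*delta
    bound = np.sqrt(2.0 * gap)                # sqrt(8*delta)
    assert (np.linalg.norm(q - q_aut) <= bound + 1e-12
            or np.linalg.norm(q + q_aut) <= bound + 1e-12)
```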
The previous lemma establishes closeness of the projection lines of the autonomous and the non-autonomous case. The bounds on norms obtained there can be transferred to bounds on the slope $C$ by elementary geometry. Note that transforming the norm bounds in this way leads to singularities when a projection line passes the vertical axis (which also leads to a seemingly disjoint set of admissible slopes). A visualisation of the results of Lemma \[lemma:closeness\_slopes\], including the resulting bounds for the slope, is given in Figure \[fig:closenessSlopeVisualisation\].
\[lemma:closeness\_slopes\] Let $Q$ and ${Q}_{aut}$ be the projection matrices with rank $1$ as defined in Proposition \[prop:roughness\_closeness\_general\](i), i.e. there are unit vectors $q$ and $q_{aut}$ such that $Q = q q^T$ and ${Q}_{aut} = {q}_{aut} {q}_{aut}^T$, and $\|Q - Q_{aut}\| < 4 \delta$ holds true. Furthermore, let $ \theta, \theta_{aut} \in [-\pi, \pi) $ be defined by $ q =: (\cos(\theta), \sin(\theta))^T, q_{aut} =: (\cos(\theta_{aut}), \sin(\theta_{aut}))^T $ such that the slopes of the lines spanned by $ q $ and $ q_{aut} $ are given by $$\begin{aligned}
C := \tan(\theta) \, , \qquad C_{aut} := \tan(\theta_{aut}) \, .\end{aligned}$$ Then there exist constants $C_{min/max}(\delta,C_{aut})$ defined by $$\begin{aligned}
C_{\mathrm{min}}(\delta,C_{aut}) & :=
\begin{cases} -(1+C_{aut}^2) \frac{2 \sqrt{2}\sqrt{\delta} \sqrt{1-2\delta}}{(1-4\delta) + 2 C_{aut} \sqrt{2}\sqrt{\delta}\sqrt{1-2\delta}}, & \mbox{if }\delta \neq \frac{1}{4} \left( 1 + \frac{C_{aut}}{\sqrt{1+C_{aut}^2}}\right)\\
-\infty, & \mbox{if }\delta = \frac{1}{4} \left( 1 + \frac{C_{aut}}{\sqrt{1+C_{aut}^2}}\right)
\end{cases}\label{eq:Cmin} \\
C_{\mathrm{max}}(\delta,C_{aut}) & :=
\begin{cases} +(1+C_{aut}^2) \frac{2 \sqrt{2}\sqrt{\delta} \sqrt{1-2\delta}}{(1-4\delta) - 2 C_{aut} \sqrt{2}\sqrt{\delta}\sqrt{1-2\delta}}, & \mbox{if }\delta \neq \frac{1}{4} \left( 1 - \frac{C_{aut}}{\sqrt{1+C_{aut}^2}}\right);\\
+ \infty, & \mbox{if }\delta = \frac{1}{4} \left( 1 - \frac{C_{aut}}{\sqrt{1+C_{aut}^2}}\right),
\end{cases}\label{eq:Cmax}\end{aligned}$$ such that $C - C_{aut} \in \Gamma\left(\delta,C_{aut}\right)$, where $$\begin{aligned}
\Gamma\left(\delta,C_{aut}\right) :=
\begin{cases}
\Big( C_{\mathrm{min}}\left(\delta,C_{aut}\right), C_{\mathrm{max}}\left(\delta,C_{aut}\right) \Big), & \mbox{if } C_{\mathrm{min}}\left(\delta,C_{aut}\right) < C_{\mathrm{max}}\left(\delta,C_{aut}\right); \\
\Big(-\infty, C_{\mathrm{max}}\left(\delta,C_{aut}\right)\Big) \cup \Big(C_{\mathrm{min}}\left(\delta,C_{aut}\right),+\infty\Big), & \mbox{if } C_{\mathrm{max}}\left(\delta,C_{aut}\right) < C_{\mathrm{min}}\left(\delta,C_{aut}\right).
\end{cases}\label{eq:definitionSlopeBounds}\end{aligned}$$ In particular, for $ q_{aut} = \frac{1}{\sqrt{2}}(1, -1)^T $ we have $ C_{aut} = -1 $ and, hence, $$\begin{aligned}
\label{eq:slope_estimate}
C = -1 + \tilde{C} \, , \qquad \tilde{C} \in \Gamma(\delta,-1) \, .\end{aligned}$$
For technical reasons we assume that $\|q - q_{aut}\| \leq \|q + q_{aut}\|$; if this inequality does not hold, we can scale $q \rightarrow -q$ without changing the projection matrix $Q$. Then, with $$\begin{aligned}
\Delta \theta : = \theta - \theta_{aut} \, ,\end{aligned}$$ we have $$\begin{aligned}
\label{eq:slope_difference}
C - C_{aut} = \tan(\theta)- \tan(\theta_{aut}) = \tan(\Delta \theta + \theta_{aut})- \tan(\theta_{aut}) = (1+C_{aut}^2) \, \left( \frac{\tan(\Delta \theta)}{1-C_{aut}\tan(\Delta \theta)} \right) \, .\end{aligned}$$ From $\|Q - {Q}_{aut}\| < 4 \delta$ we know by the previous lemma that $\|q - {q}_{aut} \| < \sqrt{ 8 \delta }$ and, hence, since $ q $ and $ q_{aut} $ are unit vectors, we have $$\begin{aligned}
0 \leq 2(1- q^T q_{aut}) = \|q - q_{aut}\|^2 < 8 \delta \qquad \Longrightarrow \qquad 1- 4 \delta < q^T q_{aut} \, .\end{aligned}$$ Since $ \mathrm{arccos}(z) $ is monotonically decreasing, we hence get from $ | \Delta \theta | = \mathrm{arccos}(q^T q_{aut}) $ that $$\begin{aligned}
-\mathrm{arccos}(1-4\delta) < \Delta \theta < \mathrm{arccos}(1-4\delta) \, .\end{aligned}$$ Furthermore, since $ \frac{\tan(z)}{1-C_{aut}\tan(z)} $ is monotonically increasing in $z$, we have the claimed result by using $$\tan(\pm \mathrm{arccos}(z)) = \pm \frac{\sqrt{1-z^2}}{z}$$ and some simplifications in .
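Both the identity for $C - C_{aut}$ and the membership $C - C_{aut} \in \Gamma(\delta, C_{aut})$ can be cross-checked numerically. A Python sketch (NumPy assumed; `gamma_bounds` is an ad-hoc helper implementing $C_{\mathrm{min/max}}$ in the non-degenerate case):

```python
import numpy as np

def gamma_bounds(delta, C_aut):
    # C_min / C_max from the lemma, non-degenerate case only
    r = 2.0 * np.sqrt(2.0 * delta * (1.0 - 2.0 * delta))
    C_min = -(1.0 + C_aut**2) * r / ((1.0 - 4.0 * delta) + C_aut * r)
    C_max = (1.0 + C_aut**2) * r / ((1.0 - 4.0 * delta) - C_aut * r)
    return C_min, C_max

delta, C_aut = 0.05, -1.0
theta_aut = np.arctan(C_aut)
C_min, C_max = gamma_bounds(delta, C_aut)
rng = np.random.default_rng(2)
dth_max = np.arccos(1.0 - 4.0 * delta)
for dth in rng.uniform(-dth_max, dth_max, 1000):
    lhs = np.tan(theta_aut + dth) - C_aut                       # C - C_aut
    rhs = (1 + C_aut**2) * np.tan(dth) / (1 - C_aut * np.tan(dth))
    assert abs(lhs - rhs) < 1e-9          # identity (eq:slope_difference)
    assert C_min - 1e-9 < lhs < C_max + 1e-9
```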
Existence results {#sec:existenceResults}
-----------------
Here, we first state our main existence results in detail. Their proofs are given in section \[sec:existenceResultsProofs\].
\[theorem:fg\_general\] Let assumptions (A1), (A2), (A3) and (A5) be satisfied. Then there is a $ \mu^* $ with $ 0 < \mu^* < \frac{1}{12} $ and corresponding $ \varepsilon^* = \varepsilon^*(\mu) >0, 0 < \delta^* = \delta^*(\mu) < \frac{2-\sqrt{2}}{8} $ such that the following holds true: For any $ \varepsilon, \mu, \delta $ with $$\begin{aligned}
\label{eq:varepsilon_mu_delta_conditions_general_h}
0 < \mu < \mu^* \, , \quad 0 < \varepsilon < \varepsilon^* = \varepsilon^*(\mu) \,, \quad \delta = \sup_{x \in \mathbb{R}} \sqrt{ f(x)^2 + g(x)^2 } < \delta^* = \delta^*(\mu) \, , \end{aligned}$$ the stationary wave ODE has (two) orbits $\left(s_p(\xi), u_p(\xi), p_p(\xi) ,v_p(\xi), q_p(\xi)\right)$, that are homoclinic to the bounded solution $\left(\xi,\frac{\hat{u}_b(\varepsilon^2\mu \xi)}{\mu}, \varepsilon \hat{u}_b'(\varepsilon^2\mu \xi), 0,0\right)$, with $\left(u_p(\xi),v_p(\xi)\right) $ to leading order given by [$$\begin{aligned}
\label{eq:leading_order_orbit_general_fg}
\left[
\begin{array}{c}
\frac{ (\hat{u}_b(\varepsilon^2\mu\xi)-(\hat{u}_b(0)-\mu u_0)\, \hat{u}_-(\varepsilon^2\mu\xi))}{\mu} \\ 0
\end{array}
\right]
\chi_{s}^{-}(\xi)
+
\left[
\begin{array}{c}
u_0 \\ \frac{3}{2 u_{0}} \, \mathrm{sech}\left(\frac{\xi}{2}\right)^2
\end{array}
\right]
\chi_{f}(\xi)
+
\left[
\begin{array}{c}
\frac{(\hat{u}_b(\varepsilon^2\mu\xi)-(\hat{u}_b(0)-\mu u_0)\, \hat{u}_+(\varepsilon^2\mu\xi))}{\mu} \\ 0
\end{array}
\right]
\chi_{s}^{+}(\xi)\end{aligned}$$ ]{} with $ u_0 = u_0^- $ or $u_0 = u_0^+$ from , i.e. $$\label{eq:definitionu0}
u_0 = \frac{\hat{u}_b(0) \pm \sqrt{\hat{u}_b(0)^2+ 12 \mu / C^s(0)}}{2\mu};$$ $\hat{u}_b $ the bounded solution from Proposition \[prop:dynamics\_slow\_manifold\] and where the indicator functions $$\begin{aligned}
\label{eq:slow_fast_fields}
\chi_{s}^{-}(\xi) = \chi_{\left(-\infty,-1/\sqrt{\varepsilon}\right)} \, , \qquad
\chi_{f}(\xi) = \chi_{\left(-1/\sqrt{\varepsilon},1/\sqrt{\varepsilon}\right)} \, , \qquad
\chi_{s}^{+}(\xi) = \chi_{\left(1/\sqrt{\varepsilon}, \infty \right)}\end{aligned}$$ distinguish the behavior of the solution in the fast and super-slow fields. Furthermore, for $ \hat{u}_{\pm} $ we have the estimates $$\begin{aligned}
| \hat{u}_{\pm}(x) | \leq C e^{-(1-2\delta) |x| } \, , \quad x \gtrless 0 \, , \end{aligned}$$ for some $ C>0 $, and the bounded solution $\hat{u}_b$ obeys $$\sup_{x \in \mathbb{R}}
\sqrt{ (\hat{u}_b(x) - 1)^2 + \hat{u}_b'(x)^2 } \leq \frac{10\delta}{1 - 2 \delta} \, .$$ Finally, this homoclinic orbit gives rise to a stationary pulse solution $$\begin{aligned}
\label{eq:front_general_h}
\left[
\begin{array}{c}
U_p(x,t)\\[.2cm] V_p(x,t)
\end{array}
\right]
=
\left[
\begin{array}{c}
\frac{m \sqrt{m} D}{a} u \left( \frac{\sqrt{m}}{D} x \right)\\[.2cm]
\frac{a}{D \sqrt{m}} v \left( \frac{\sqrt{m}}{D} x \right)
\end{array}
\right]\end{aligned}$$ for the Klausmeier model that is biasymptotic to the bounded state $\left(a \hat{u}_b\left(\frac{\sqrt{m}}{D} x\right),0\right)$.
\[cor:fg\_equal\_zero\] Let $ f, g = 0$, and the conditions from Theorem \[theorem:fg\_general\] be fulfilled. Then $$\begin{aligned}
\hat{u}_\pm (x) = e^{\mp x} \, , \qquad \hat{u}_b \equiv 1 \, .\end{aligned}$$
\[cor:fg\_small\] Let the conditions from Theorem \[theorem:fg\_general\] be fulfilled and $ f = \delta \tilde f $, $ g = \delta \tilde g $ where $ \tilde{f}, \tilde{g} = \mathcal{O}(1), 0 < \delta \ll 1 $ (i.e. $\sup_{x \in \mathbb{R}} \sqrt{\tilde{f}(x)^2 + \tilde{g}(x)^2} = 1$). Then [$$\begin{aligned}
\hat{u}_+ (x) &= e^{-x} +\frac{\delta}{2} \left[ - e^{x} \int_{x}^{\infty} (\tilde f(z) - \tilde g(z))e^{-2z} dz + e^{-x} \left(\int_0^{\infty} (\tilde f(z) - \tilde g(z))e^{-2z} dz - \int_0^{x} (\tilde f(z) - \tilde g(z)) dz \right) \right] + h.o.t. \, ,\\
\hat{u}_- (x) &= e^{x} +\frac{\delta}{2} \left[ e^{-x} \int_{-\infty}^{x} (\tilde f(z) + \tilde g(z))e^{2z} dz - e^{x} \left(\int_{-\infty}^0 (\tilde f(z) + \tilde g(z))e^{2z} dz + \int_0^{x} (\tilde f(z) + \tilde g(z)) dz \right) \right] + h.o.t. \, ,\\
\hat{u}_b(x) &= 1 + \frac{\delta}{2} \left[e^{x} \int_{x}^{\infty} \tilde g(z) e^{-z} \, dz + e^{-x} \int_{-\infty}^{x} \tilde g(z) e^{z} \, dz \right]+ h.o.t. \,.\end{aligned}$$]{} Moreover, $u_0$ as in can be expressed in terms of $\delta$ as $$u_0 = u_{00} + \delta u_{01} + h.o.t.,$$ where $u_{00}$ corresponds to the $u_0$-value for the autonomous case, i.e. $u_{00}$ is given by .
\[cor:h\_example\] Let $ h(x) = -2 \ln \cosh(\beta x), \beta > 0, f = h', g = h'' $, and the conditions from Theorem \[theorem:fg\_general\] be fulfilled. Then $$\begin{aligned}
\hat{u}_\pm (x) &= e^{\mp\sqrt{1+\beta^2}x} \cosh(\beta x) \, ,\\
\hat{u}_b(x) &= \frac{\hat{u}_-(x)}{2\sqrt{1+\beta^2}} \int_{x}^\infty e^{-\sqrt{1+\beta^2}z} \operatorname{sech}(\beta z)\ dz + \frac{\hat{u}_+(x)}{2\sqrt{1+\beta^2}} \int_{-\infty}^{x} e^{\sqrt{1+\beta^2}z} \operatorname{sech}(\beta z)\ dz \, .\end{aligned}$$
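One can verify symbolically that these $\hat{u}_\pm$ indeed solve the homogeneous part of the super-slow equation, $\hat{u}'' + f \hat{u}' + (g-1)\hat{u} = 0$, for this choice of $h$. A short SymPy sketch (illustration only):

```python
import sympy as sp

x, beta = sp.symbols('x beta')
h = -2 * sp.log(sp.cosh(beta * x))
f = sp.diff(h, x)        # f = h'
g = sp.diff(h, x, 2)     # g = h''

residuals = []
for s in (1, -1):        # s = +1 gives u_hat_+, s = -1 gives u_hat_-
    u = sp.exp(-s * sp.sqrt(1 + beta**2) * x) * sp.cosh(beta * x)
    res = sp.diff(u, x, 2) + f * sp.diff(u, x) + (g - 1) * u
    residuals.append(sp.simplify(res.rewrite(sp.exp)))

# both residuals vanish identically
assert all(r == 0 for r in residuals)
```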
Pulses solutions as in Corollary \[cor:h\_example\] exist for any $\beta > 0$ without the need of the general assumption on $\delta$ as in Theorem \[theorem:fg\_general\]; since the flow on $\mathcal{M}$ can be solved explicitly for these functions $f$ and $g$, no condition on $\delta$ is needed.
Since the flow on $\mathcal{M}$ can be solved explicitly for the functions $f$ and $g$ as in Corollary \[cor:h\_example\], it is also possible to prove existence of symmetric, stationary $2$-pulse solutions (and, in fact, any symmetric, stationary $N$-pulse solution). Note that normally, for $f,g \equiv 0$, these do not exist, since pulses in repel each other [@dek1siam; @BD18]; this repulsive force can only be overcome by driving forces due to the spatially varying functions $f$ and $g$. We come back to these multi-pulse solutions in section \[sec:multiPulses\].
Proof of existence results {#sec:existenceResultsProofs}
--------------------------
The proofs of the existence results in section \[sec:existenceResults\] follow from the theory developed in the preceding sections. The heart of these proofs is formed by Proposition \[prop:persistence\] and the bounds on the bounded solution $u_b$ and the slopes $C^{s/u}$ as found in Proposition \[prop:dynamics\_slow\_manifold\]. Ultimately, it boils down to taking $\delta$ small enough such that an intersection between $l^s(0)$ and $T_o(0)$ is guaranteed. A sketch of this idea is given in Figure \[fig:existenceProofSketches\]; the rest of this section is devoted to the rigorous proof of the existence theorem and the corollaries in section \[sec:existenceResults\].
Existence of the homoclinic orbits is established by Proposition \[prop:persistence\] if the conditions in Proposition \[prop:persistence\](4) are satisfied. Since $u_b(0) = \hat{u}_b(0)/\mu$, these hold if and only if the following three bounds hold true:
- $\hat{u}_b(0) > 0$;
- $C^s(0) < 0$;
- $\hat{u}_b(0)^2 + 12 \mu / C^s(0) > 0$.
By Proposition \[prop:roughness\_closeness\_general\] and Lemma \[lemma:exp\_dich\_const\_roughness\], we have $$\hat{u}_b(0) > \frac{1 - 12 \delta}{1-2\delta},$$ and by Lemma \[lemma:closeness\_slopes\] we have $$C^s(0) = -1 + \tilde{C}, \qquad \tilde{C} \in \Gamma(\delta,-1),$$ where $\Gamma$ is as in . Using these, bound (i) is satisfied when $\delta < \frac{1}{12}$ and bound (ii) when $\delta < \frac{2 - \sqrt{2}}{8}$. Since bound (iii) holds true when $\delta = 0$ and $\mu < \frac{1}{12}$, continuity of the above bounds on $\hat{u}_b(0)$ and $C^s(0)$ guarantees the existence of the critical value $0 < \delta^*(\mu) < \frac{2 - \sqrt{2}}{8}$.
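The two explicit thresholds appearing here can be checked numerically; a Python sketch (NumPy assumed, with $C_{\mathrm{max}}(\delta,-1)$ taken from Lemma \[lemma:closeness\_slopes\]):

```python
import numpy as np

def C_max(delta):
    # C_max(delta, C_aut = -1) from Lemma [lemma:closeness_slopes]
    r = 2.0 * np.sqrt(2.0 * delta * (1.0 - 2.0 * delta))
    return 2.0 * r / ((1.0 - 4.0 * delta) + r)

# bound (ii): C^s(0) = -1 + C_tilde < 0 as long as C_max < 1,
# and C_max crosses 1 exactly at delta = (2 - sqrt(2))/8
d_star = (2.0 - np.sqrt(2.0)) / 8.0
assert C_max(d_star - 1e-6) < 1.0 < C_max(d_star + 1e-6)

# bound (i): (1 - 12*delta)/(1 - 2*delta) > 0 precisely for delta < 1/12
assert (1 - 12 * 0.08) / (1 - 2 * 0.08) > 0   # delta = 0.08 < 1/12
assert (1 - 12 * 0.09) / (1 - 2 * 0.09) < 0   # delta = 0.09 > 1/12
```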
This follows immediately from solving with $f,g \equiv 0$, and is also carried out in more detail in section \[sec:existence\_f\_g\_zero\].
The super-slow system on $\mathcal{M}$ in can be solved using a regular expansion in $0 < \delta \ll 1$. By requiring that $\lim_{x \rightarrow \infty} \hat{u}_+(x)$ and $\lim_{x \rightarrow -\infty} \hat{u}_-(x)$ exist, the results follow by a straightforward calculation.
One can easily verify that $\hat{u}_\pm$ solve , and that $\lim_{x \rightarrow \pm \infty} \hat{u}_\pm(x) = 0$. The bounded solution $\hat{u}_b$ follows from a standard variation of constants method.
Linear stability analysis {#sec:linstability}
=========================
In the previous section, we proved the existence of stationary $1$-pulse solutions to . In this section we study the linear stability of these solutions. For $ (U_p, V_p) $ a pulse solution from Theorem \[theorem:fg\_general\] we define the linear operator $$\label{eq:linearization_operator}
\mathcal{L}\left(\begin{array}{c} \bar{U} \\ \bar{V} \end{array}\right) =
\left(
\begin{array}{c}
\partial_x^2 \bar{U} + f(x) \partial_x \bar{U} + g(x) \bar{U} - \bar{U} - V_p^2 \bar{U} - 2 U_p V_p \bar{V} \\
D^2 \partial_x^2 \bar{V} - m \bar{V} + V_p^2 \bar{U} + 2 U_p V_p \bar{V}
\end{array}
\right) \, ,$$ with $ \mathcal{L}: H^2(\mathbb{R}) \times H^2(\mathbb{R}) \subset L^2(\mathbb{R}) \times L^2(\mathbb{R}) \rightarrow L^2(\mathbb{R}) \times L^2(\mathbb{R}) $, and denote its spectrum by $ \Sigma(\mathcal{L}) $, where we distinguish between the point spectrum $ \Sigma_\mathrm{pt}(\mathcal{L}) $ and the essential spectrum $ \Sigma_\mathrm{ess}(\mathcal{L}) = \Sigma(\mathcal{L})\setminus \Sigma_\mathrm{pt}(\mathcal{L}) $ – we denote the elements of $\Sigma_\mathrm{ess}(\mathcal{L})$ by $\underline{\lambda}$. As customary, we say that $ (U_p, V_p) $ is linearly stable if there is no spectrum in the right half-plane. In order to keep the exposition at reasonable length, we will concentrate here on characterizing parameter regimes where the only instability that can occur is through the (translational) zero eigenvalue, which starts moving due to the introduction of spatially varying $ f $ and/or $ g $. In particular, there are no essential instabilities:
\[lemma:essential\_spectrum\] Let the conditions of Theorem \[theorem:fg\_general\] and assumption (A4) be fulfilled, and let $(U_p,V_p)$ be a pulse solution to as in Theorem \[theorem:fg\_general\]. Then the essential spectrum of $ \mathcal{L} $ from is $$\Sigma_\mathrm{ess}(\mathcal{L}) = (-\infty, \max\{-m,-1\}] \, ,$$ and, hence, lies in the left half-plane.
The limiting operator of $ \mathcal{L} $ at $ x \rightarrow \pm \infty $ is $ \mathcal{L}_{\infty}:= \mathrm{diag}[\partial_x^2 - 1, D^2 \partial_x^2 - m] $ (note that we thus explicitly use assumption (A4)). Therefore, we have that the boundaries of the essential spectrum are $ \underline{\lambda}_1(k) = -(k^2 +1)$, $\underline{\lambda}_2(k) = -(D^2k^2 +m)$, $k \in \mathbb{R} $, which immediately gives the claimed result.
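For sample parameter values the location of the two boundary curves is easily confirmed numerically (the values for $D^2$ and $m$ below are hypothetical; NumPy assumed):

```python
import numpy as np

D2, m = 0.01, 0.5                 # hypothetical sample values; D2 stands for D^2
k = np.linspace(-50.0, 50.0, 200001)
l1 = -(k**2 + 1.0)                # boundary curve from the U-component
l2 = -(D2 * k**2 + m)             # boundary curve from the V-component

# the rightmost point of the essential spectrum is max(-1, -m), attained at k = 0
assert abs(max(l1.max(), l2.max()) - max(-1.0, -m)) < 1e-9
```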
The assumptions on $ f, g $ allow (again through the use of exponential dichotomies) the derivation of bounds on the location of the point spectrum, which, under the assumption that $ f,g $ are chosen ‘small’, can be further refined to track the one small eigenvalue that can possibly lead to bifurcations. The proof of the following statements will be the subject of the next sections.
\[theorem:point\_spectrum\] Let the conditions of Theorem \[theorem:fg\_general\] and assumption (A4) be fulfilled, and let $(U_p,V_p)$ be a pulse solution to with $u_0 = u_0^-$ as in . Then there exist constants $m_c, \mu^*, \nu^* > 0$ such that if either (i) $m < m_c$ and $\mu < \mu^*$ or (ii) $m > m_c$ and $\mu \sqrt{m} < \nu^*$, then there exists a $\delta_c > 0$ such that if $0 \leq \delta < \delta_c$ precisely one eigenvalue $\underline{\lambda}_0$ is $\mathcal{O}(\varepsilon)$-close to $0$ and all other eigenvalues of $\mathcal{L}$ lie in the left-half plane.
The statement is demonstrated in section \[sec:point\_spectrum\_general\] by combining the setup of an Evans function and the theory of exponential dichotomies.
Note that Theorem \[theorem:point\_spectrum\] only holds for pulse solutions with $u_0 = u_0^-$; pulse solutions with $u_0 = u_0^+$ are always unstable. See also Remark \[remark:varTer\_positiveu0\].
The constants $m_c$, $\mu^*$ and $\nu^*$ in Theorem \[theorem:point\_spectrum\] can be computed explicitly (see Lemma \[lemma:rootsOft22ComputedBetter\]).
\[theorem:point\_spectrum\_small\] Assuming that $f = \delta \tilde{f}$, $g = \delta \tilde{g}$ with $0 < \delta \ll 1$, $\tilde{f}, \tilde{g} = \mathcal{O}(1)$ (i.e. $\sup_{x \in \mathbb{R}} \sqrt{\tilde{f}(x)^2+\tilde{g}(x)^2} = 1$), there exists a constant $\tau^* > 0$ such that if $\tau:= \varepsilon^4 \mu m < \tau^*$ the small eigenvalue $\underline{\lambda}_0$ close to $\underline{\lambda} = 0$ is located, to leading order, at $$\label{eq:smallEigenvalue}
\underline{\lambda}_0 = \frac{2 \tau \delta}{u_0 - \tau (1 - \mu u_0)} \int_0^{+\infty} e^{-2x} \left(\tilde{f}'(x)(1-\mu u_0) + \tilde{g}'(x) [e^{x}+\mu u_0 - 1] \right) \, dx,$$ where $u_0$ is as in and Corollary \[cor:fg\_small\].
This statement is derived in section \[sec:smallEigenvalue\] by employing a regular expansion in $\delta$.
\[cor:small\_eigenvalue\_double\_limit\] Let the conditions of Theorem \[theorem:point\_spectrum\_small\] be fulfilled. Then, in the double asymptotic limit $\mu \ll 1$ and $\tau := \varepsilon^4 \mu m \ll 1$ the leading order expression for $\underline{\lambda}_0$ becomes $$\underline{\lambda}_0 = \frac{2}{3} \tau \int_0^\infty e^{-2x} \left( \tilde{f}'(x) + \tilde{g}'(x)[e^x-1]\right)\ dx.$$
When the term $\tau = \varepsilon^4 \mu m = \frac{a^2 D}{m \sqrt{m}}$ in becomes too large (larger than $\tau^*$), the pulse becomes unstable due to a traveling wave bifurcation/drift instability [@chen2009oscillatory; @DEK01].
Qualitative description of the point spectrum location (Proof of Theorem \[theorem:point\_spectrum\]) {#sec:point_spectrum_general}
-----------------------------------------------------------------------------------------------------
This section is devoted to finding the point spectrum of the operator $\mathcal{L}$. For that, we use a decomposition method for the Evans function, first developed in [@AGJ90; @D01], which is supplemented by the theory of exponential dichotomies to treat the varying coefficients in . As before, the following computations will again heavily rely on the singularly perturbed structure. Therefore, we introduce for the eigenvalue problem $ (\mathcal{L} - \underline{\lambda} I) (\bar{U}, \bar{V})^T = 0 $, that is, $$\begin{aligned}
\label{eq:evp_original}
\left\{
\begin{array}{rcl}
\underline{\lambda} \bar{U} & = & \frac{d^2}{dx^2} \bar{U} + f(x) \frac{d}{dx} \bar{U} + g(x) \bar{U} - \bar{U} - V_p^2 \bar{U} - 2 U_p V_p \bar{V} \, ,\\[.2cm]
\frac{1}{m} \underline{\lambda} \bar{V} & = & \frac{D^2}{m} \frac{d^2}{dx^2} \bar{V} - \bar{V} + \frac{1}{m} V_p^2 \bar{U} + \frac{2}{m} U_p V_p \bar{V} \, ,
\end{array}
\right.\end{aligned}$$ and the scalings (analogous to and ) $$\xi = \frac{\sqrt{m}}{D}\, x \, , \quad \bar{U} = m \varepsilon \mu \bar{u} \, , \quad U_p = m \varepsilon \mu u_p \, , \quad \bar{V} = \frac{1}{\varepsilon \mu} \bar{v} \, , \quad V_p = \frac{1}{\varepsilon \mu} v_p,$$ to get the fast eigenvalue problem $$\label{eq:evp_original_scaled_}
\left\{
\begin{array}{rcl}
\varepsilon^4 \mu^2 \underline{\lambda} \bar{u}
& =
& \ddot{\bar{u}} - \varepsilon^2 [ 2 u_p v_p \bar{v} + v_p^2 \bar{u} ] - \varepsilon^4 \mu^2 \bar{u} + \varepsilon^2 \mu f(\varepsilon^2 \mu \xi) \dot{\bar{u}} + \varepsilon^4 \mu^2 g(\varepsilon^2 \mu \xi) \bar{u} \, , \\
\frac{1}{m} \underline{\lambda} \bar{v}
&=
& \ddot{\bar{v}} - \bar{v} + [ 2 u_p v_p \bar{v} + v_p^2 \bar{u} ] \, ,
\end{array}\right.$$ which suggests (just as in [@BD18; @chen2009oscillatory; @DEK01]) the introduction of the scaled eigenvalue parameter $$\begin{aligned}
\label{eq:scaling_eigenvalue}
\underline{\lambda} = m \lambda \, ,\end{aligned}$$ so, finally, $$\left\{
\begin{array}{rcl}
\varepsilon^4 \mu^2 m \lambda \bar{u}
& =
& \ddot{\bar{u}} - \varepsilon^2 [ 2 u_p v_p \bar{v} + v_p^2 \bar{u} ] - \varepsilon^4 \mu^2 \bar{u} + \varepsilon^2 \mu f(\varepsilon^2 \mu \xi) \dot{\bar{u}} + \varepsilon^4 \mu^2 g(\varepsilon^2 \mu \xi) \bar{u} \, , \\
\lambda \bar{v}
&=
& \ddot{\bar{v}} - \bar{v} + [ 2 u_p v_p \bar{v} + v_p^2 \bar{u} ] \, .
\end{array}\right.
\label{eq:eigenvalueProblem}$$ It is convenient to introduce $\phi := \left( \bar{u}, \dot{\bar{u}} / (\varepsilon^2 \mu), \bar{v}, \dot{\bar{v}}\right)$ and to write the above ODEs as the system of first order ODEs $$\dot{\phi} = A(\xi; \lambda, \varepsilon, \mu, m) \phi ,\label{eq:eigenvalueProblemMatrixForm}$$ where $$A(\xi; \lambda, \varepsilon, \mu, m) =
\left( \begin{array}{cccc}
0 & \varepsilon^2 \mu & 0 & 0 \\
v_p^2 / \mu + \varepsilon^2 \mu \left[1+m \lambda - g(\varepsilon^2 \mu \xi) \right]& - \varepsilon^2 \mu f(\varepsilon^2 \mu \xi) &2 u_p v_p / \mu & 0 \\
0 & 0 & 0 & 1 \\
- v_p^2 & 0 & 1 + \lambda - 2 u_p v_p & 0
\end{array}\right).$$ From the existence analysis in section \[sec:existence\], we have seen that the real line $\mathbb{R}$ can be split into one fast region, $I_f$, near the pulse location and two super-slow fields $I_s^\pm$ on both sides of the fast field: $$I_s^- := \left(-\infty,-\frac{1}{\sqrt{\varepsilon}}\right),\, \quad
I_f := \left[ - \frac{1}{\sqrt{\varepsilon}}, \frac{1}{\sqrt{\varepsilon}} \right],\, \quad
I_s^+ := \left( \frac{1}{\sqrt{\varepsilon}}, \infty \right).$$ Since we know that $ v_p $ vanishes to leading order in the slow fields, we have in those regions the system matrix $$A_s(\xi;\lambda,\varepsilon,\mu,m) :=
\left( \begin{array}{cccc}
0 & \varepsilon^2 \mu & 0 & 0 \\
\varepsilon^2 \mu \left[1+m \lambda - g(\varepsilon^2 \mu \xi) \right]& - \varepsilon^2 \mu f(\varepsilon^2 \mu \xi) &0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 + \lambda & 0
\end{array}\right) \, ,$$ that is, the dynamics for slow and fast variables are decoupled. Any value $ \lambda \in \mathbb{C} $ for which this system of ODEs has a non-trivial solution in $ L^2(\mathbb{R}) \times L^2(\mathbb{R}) $ corresponds to an eigenvalue $\underline{\lambda} = m \lambda$ of $ \mathcal{L} $. A mechanism (that is by now standard) for detecting eigenvalues is the construction of an Evans function, whose roots coincide with the eigenvalues of $ \mathcal{L} $. Although the Evans function can also be extended into the essential spectrum, we do not need this in the present work and rather restrict $\lambda$ to $$\label{eq:definitionCe}
\mathcal{C}_e := \mathbb{C} \setminus \left\{ \lambda \in \mathbb{R} : \lambda \leq \max\{-1,-1/m\} \right\} = \left\{ \lambda = \frac{\underline{\lambda}}{m} : \underline{\lambda} \notin \Sigma_\mathrm{ess}(\mathcal{L})\right\} ,$$ on which the Evans function is analytic.
### Evans function construction {#sec:evans_function_construction}
By (conditions and results of) Theorem \[theorem:fg\_general\] and assumption (A4), we know that the limiting matrix for $ |\xi| \rightarrow \infty $ is given by $$A_\infty(\lambda,\varepsilon,\mu,m) :=
\left( \begin{array}{cccc}
0 & \varepsilon^2 \mu & 0 & 0 \\
\varepsilon^2 \mu \left[1+m \lambda \right]& 0 &0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 + \lambda & 0
\end{array}\right) \, .$$ Its eigenvalues $\Lambda_{1,2,3,4}$ and eigenvectors $E_{1,2,3,4}$ are $$\begin{array}{ll}
\Lambda_{1,4}(\lambda) = \pm \sqrt{1+\lambda},
& \Lambda_{2,3}(\lambda) = \pm \varepsilon^2 \mu \sqrt{1+ m \lambda} \\
E_{1,4}(\lambda) = \left(0,0,1,\Lambda_{1,4}\right)^T,
& E_{2,3}(\lambda) = \left(1, \pm \sqrt{1 + m \lambda},0,0\right)^T.
\end{array}$$ where $\mbox{Re}\left( \Lambda_4(\lambda) \right) < \mbox{Re}\left( \Lambda_3(\lambda) \right)< 0 < \mbox{Re}\left( \Lambda_2(\lambda) \right)< \mbox{Re}\left( \Lambda_1(\lambda) \right)$ for $\lambda \in \mathcal{C}_e$.
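These eigenpairs can be verified numerically for sample (hypothetical) parameter values; a short NumPy sketch:

```python
import numpy as np

eps, mu, m, lam = 0.1, 0.05, 2.0, 0.3   # hypothetical sample values
a = eps**2 * mu                          # the factor eps^2 * mu
A_inf = np.array([
    [0.0,               a,   0.0,       0.0],
    [a * (1 + m * lam), 0.0, 0.0,       0.0],
    [0.0,               0.0, 0.0,       1.0],
    [0.0,               0.0, 1.0 + lam, 0.0],
])
ev = np.sort(np.linalg.eigvals(A_inf).real)
expected = np.sort([np.sqrt(1 + lam), -np.sqrt(1 + lam),
                    a * np.sqrt(1 + m * lam), -a * np.sqrt(1 + m * lam)])
assert np.allclose(ev, expected)
```

The block structure of $A_\infty$ decouples the fast $(\bar{v})$ pair $\pm\sqrt{1+\lambda}$ from the slow $(\bar{u})$ pair $\pm\varepsilon^2\mu\sqrt{1+m\lambda}$.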
The system $\dot{\phi}_\infty = A_{\infty}(\lambda,\varepsilon,\mu,m) \phi_\infty$ admits exponential dichotomies on $\mathcal{C}_e$. Since $A_\infty$ is exponentially close to $A$ for large $|\xi|$, the stable and unstable subspaces of $\dot{\phi} = A(\xi;\lambda,\varepsilon,\mu,m)\phi$ and $\dot{\phi}_\infty = A_\infty(\lambda,\varepsilon,\mu,m) \phi_\infty$ are similar when $|\xi| \rightarrow \infty$. In particular, for all $\lambda \in \mathcal{C}_e$ there is a two-dimensional family of solutions, $\Phi_\infty^-(\lambda)$, to $\dot{\phi}_\infty = A_\infty(\lambda,\varepsilon,\mu,m) \phi_\infty$ such that $\lim_{\xi \rightarrow -\infty} \phi_\infty^-(\xi) = 0$ for all $\phi_\infty^- \in \Phi_\infty^-(\lambda)$, and a two-dimensional family of solutions, $\Phi_\infty^+(\lambda)$, to $\dot{\phi}_\infty = A_\infty(\lambda,\varepsilon,\mu,m) \phi_\infty$ such that $\lim_{\xi \rightarrow \infty} \phi_\infty^+(\xi) = 0$ for all $\phi_\infty^+ \in \Phi_\infty^+(\lambda)$, which implies that the system $\dot{\phi} = A(\xi;\lambda,\varepsilon,\mu,m)\phi$ also possesses two two-dimensional families of solutions, $\Phi^-(\lambda)$ and $\Phi^+(\lambda)$ with the same properties.
For the system $\dot{\phi} = A(\xi;\lambda,\varepsilon,\mu,m) \phi$, however, it is possible that the intersection $\Phi^+(\lambda) \cap \Phi^-(\lambda)$ is nonempty. The values $\lambda \in \mathcal{C}_e$ for which this happens correspond to $\underline{\lambda} = m \lambda$ in the point spectrum $\Sigma_\mathrm{pt}$. To find these, we use an Evans function [@AGJ90; @D01], which is defined as $$\label{eq:definitionEvansFunction}
\mathcal{D}(\lambda) = \det\left[ \phi_1(0;\lambda), \phi_2(0;\lambda), \phi_3(0;\lambda), \phi_4(0;\lambda)\right] \, ,$$ where $\{\phi_1(\cdot;\lambda),\phi_2(\cdot;\lambda)\}$ spans the space $\Phi^-(\lambda)$ and $\{\phi_3(\cdot;\lambda),\phi_4(\cdot;\lambda)\}$ spans the space $\Phi^+(\lambda)$. For notational clarity we have suppressed the dependence on the other parameters. Essentially, the Evans function $\mathcal{D}(\lambda)$ measures the linear independence of the solution functions $\phi_{1,\ldots,4}$. Therefore, zeros of $\mathcal{D}(\lambda)$ correspond to values of $\lambda$ for which $\Phi^+(\lambda) \cap \Phi^-(\lambda) \neq \varnothing$, and thus to eigenvalues in the point spectrum [@AGJ90].
In , the solutions $\phi_{1,\ldots,4}$ are not uniquely defined, and any choice leads to the same eigenvalues. However, for singularly perturbed partial differential equations a specific choice enables the use of the scale separation in these equations, which in turn makes it possible to determine the eigenvalues.
\[lemma:EvansFunctionDecomposition\] Let the conditions of Theorem \[theorem:fg\_general\] be fulfilled and let $(U_p,V_p)$ be a pulse solution to as in Theorem \[theorem:fg\_general\]. Then all eigenvalues $\lambda \in \Sigma_\mathrm{pt}$ associated to are roots of the Evans function $$\mathcal{D}(\lambda) = t_{11}(\lambda) t_{22}(\lambda) (1 + m \lambda) (1 + \lambda) \exp\left(\int_0^\infty f(x)\ dx\right),$$ where $t_{11}$ and $t_{22}$ are analytic (transmission) functions of $\lambda$, defined by $$\begin{aligned}
\lim_{\xi \rightarrow \infty} \phi_1(\xi;\lambda) e^{-\Lambda_1(\lambda) \xi} &= t_{11} E_1; \\
\lim_{\xi \rightarrow \infty} \phi_2(\xi;\lambda) e^{-\Lambda_2(\lambda) \xi} & = t_{22} E_2,\end{aligned}$$ where $\phi_1$ is the (unique) solution to for which $$\begin{aligned}
\lim_{\xi \rightarrow -\infty} \phi_1(\xi;\lambda) e^{-\Lambda_1(\lambda)\xi} &= E_1;
\intertext{and $\phi_2$ is the (unique) solution to~\eqref{eq:eigenvalueProblemMatrixForm} (if $t_{11}(\lambda) \neq 0$) for which}
\lim_{\xi \rightarrow -\infty} \phi_2(\xi;\lambda)e^{-\Lambda_2(\lambda)\xi} &= E_2;\\
\lim_{\xi \rightarrow \infty} \phi_2(\xi;\lambda) e^{\Lambda_1(\lambda)\xi} & = 0.\end{aligned}$$
The proof closely follows [@D01 Section 3.2]; we therefore present only an outline and refer the interested reader to [@D01] for more details.
The heart of the proof is based on choosing $\phi_{1,\ldots,4}$ in such a way that the scale separation of can be exploited. Because $A$ and $A_{\infty}$ are exponentially close when $\xi \rightarrow -\infty$, there is a unique solution $\phi_1$ such that $\phi_1$ closely follows $E_1(\lambda) e^{\Lambda_1(\lambda)\xi}$ as $\xi \rightarrow - \infty$. More precisely, we define $\phi_1$ uniquely such that $\lim_{\xi \rightarrow -\infty} \phi_1(\xi;\lambda) e^{-\Lambda_1(\lambda)\xi} = E_1(\lambda)$. For $\xi \rightarrow \infty$, we do not know the precise form of $\phi_1$, but we do know that, asymptotically, it is a combination of the eigenfunctions of the system $\dot{\phi}_\infty = A_\infty \phi_\infty$. That is, $\phi_1(\xi;\lambda) \rightarrow t_{11}(\lambda) E_1 e^{\Lambda_1(\lambda)\xi} + t_{12}(\lambda) E_2 e^{\Lambda_2(\lambda)\xi} + t_{13}(\lambda) E_3 e^{\Lambda_3(\lambda)\xi} + t_{14}(\lambda) E_4 e^{\Lambda_4(\lambda)\xi}$ as $\xi \rightarrow \infty$, where $t_{11},\ldots,t_{14}$ are analytic transmission functions.
Next, $\phi_2$ must be chosen such that $\{\phi_1(\cdot,\lambda),\phi_2(\cdot,\lambda)\}$ spans $\Phi^-(\lambda)$. As this does not determine $\phi_2$ uniquely, we may, additionally, require that $\phi_2$ grows, at most, as $E_2(\lambda) e^{\Lambda_2(\lambda)\xi}$ for $\xi \rightarrow \infty$. More precisely, we define $\phi_2$ uniquely such that $\lim_{\xi \rightarrow -\infty} \phi_2(\xi;\lambda) e^{-\Lambda_2(\lambda)\xi} = E_2$ and $\lim_{\xi \rightarrow +\infty} \phi_2(\xi;\lambda) e^{-\Lambda_1(\lambda)\xi} = 0$ (note that this construction relies on $t_{11}$ being nonzero, which is established by the ‘elephant trunk procedure’, see [@D01; @gardner1991stability] and Remark \[remark:stability\_fgLimits\]). For $\xi \rightarrow \infty$, $\phi_2$ is then asymptotically given by $\phi_2(\xi;\lambda) \rightarrow t_{22}(\lambda) E_2(\lambda) e^{\Lambda_2(\lambda)\xi} + t_{23}(\lambda) E_3(\lambda) e^{\Lambda_3(\lambda)\xi} + t_{24}(\lambda) E_4(\lambda) e^{\Lambda_4(\lambda)\xi}$, where $t_{22}$, $t_{23}$, $t_{24}$ are analytic transmission functions.
In a similar vein the solutions $\phi_3$ and $\phi_4$ can be defined such that $\lim_{\xi \rightarrow \infty} \phi_4(\xi;\lambda) e^{-\Lambda_4(\lambda)\xi} = E_4(\lambda)$ and $\lim_{\xi \rightarrow \infty} \phi_3(\xi;\lambda) e^{-\Lambda_3(\lambda)\xi} = E_3(\lambda)$.
Then, using that $\sum_{j=1}^4 \Lambda_j(\lambda) = 0$ and Liouville’s formula, the Evans function can be rewritten: $$\begin{aligned}
\mathcal{D}(\lambda)
& = \lim_{\xi \rightarrow \infty} \det\left[ \phi_1(\xi;\lambda), \phi_2(\xi;\lambda), \phi_3(\xi;\lambda), \phi_4(\xi;\lambda)\right] \exp\left(-\int_0^\xi \mathrm{Tr} A(z)\ dz\right) \\
& = \lim_{\xi \rightarrow \infty} \det\left[ \phi_1(\xi;\lambda) e^{-\Lambda_1(\lambda)\xi}, \phi_2(\xi;\lambda)e^{-\Lambda_2(\lambda)\xi}, \phi_3(\xi;\lambda)e^{-\Lambda_3(\lambda)\xi}, \phi_4(\xi;\lambda)e^{-\Lambda_4(\lambda)\xi}\right] \exp\left(-\int_0^\xi \mathrm{Tr} A(z)\ dz\right) \\
& = \det\left[ t_{11}(\lambda)E_1(\lambda), t_{22}(\lambda)E_2(\lambda),E_3(\lambda),E_4(\lambda)\right] \exp\left(\int_0^\infty f(x)\ dx\right) \\
& = t_{11}(\lambda) t_{22}(\lambda) (1+ m \lambda)(1+\lambda) \exp\left(\int_0^\infty f(x)\ dx\right).\end{aligned}$$
The roots $\lambda \in \mathcal{C}_e$ of $\mathcal{D}(\lambda)$ thus correspond to the roots of $t_{11}(\lambda) t_{22}(\lambda)$. The next goal, therefore, is to determine the roots of these transmission functions.
### Fast transmission function $t_{11}$
The transmission function $t_{11}$ is closely related to the linearization around the pulse in the fast field, $$\label{eq:eigenvaluesFastReduced}
(\mathcal{L}^\mathrm{r} - \lambda) v= 0, \, \quad \mathcal{L}^\mathrm{r} v := \partial_\xi^2 v - [1 - 3 \operatorname{sech}(\xi/2)^2] v.$$ The eigenvalues of $\mathcal{L}^\mathrm{r}$ are well-known to be $\lambda_0^\mathrm{r} = 5/4$, $\lambda_1^\mathrm{r} = 0$ and $\lambda_2^\mathrm{r} = - 3 / 4$. By a standard winding number argument, it follows that roots of $t_{11}$ lie $\mathcal{O}(\varepsilon)$-close to these eigenvalues $\lambda_0^\mathrm{r}$, $\lambda_1^\mathrm{r}$ and $\lambda_2^\mathrm{r}$.
\[lemma:propertiesOft11\] Let the conditions of Proposition \[lemma:EvansFunctionDecomposition\] be fulfilled. The roots of $t_{11}$ lie $\mathcal{O}(\varepsilon)$ close to the eigenvalues (counting multiplicity) of $\mathcal{L}^\mathrm{r}$, i.e. close to $\lambda_0^\mathrm{r} = 5/4$, $\lambda_1^\mathrm{r} = 0$ and $\lambda_2^\mathrm{r} = -3/4$.
See [@D01 Lemma 4.1].
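These three eigenvalues are straightforward to confirm numerically. The sketch below (our own illustration, not part of the proof) discretizes $\mathcal{L}^\mathrm{r}$ with second-order central finite differences on a truncated domain and recovers the three bound states; domain size and grid resolution are ad hoc choices:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Discretize L^r v = v'' - [1 - 3 sech^2(xi/2)] v with Dirichlet
# conditions on a truncated domain [-X, X].
X, N = 40.0, 4001
xi = np.linspace(-X, X, N)
h = xi[1] - xi[0]
diag = -2.0 / h**2 - 1.0 + 3.0 / np.cosh(xi / 2) ** 2
off = np.full(N - 1, 1.0 / h**2)
# the three largest eigenvalues of the matrix are the bound states of L^r
evals = eigh_tridiagonal(diag, off, eigvals_only=True,
                         select='i', select_range=(N - 3, N - 1))
print(np.round(evals, 3))  # approximately -3/4, 0 and 5/4
```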
Although $t_{11}$ has a root (with multiplicity $1$) close to $\lambda^r_0 = 5/4$, this does not mean that $\mathcal{D}(\lambda)$ has a root at the same value of $\lambda$: as will be discussed in the next section, the transmission function $t_{22}$ has a pole of order $1$ at the same $\lambda$, which prevents this value from being an eigenvalue of $\mathcal{L}$. In the literature, this is known as the ‘NLEP paradox’.
In studies of autonomous systems, the root of $t_{11}$ close to $\lambda = 0$ is actually located precisely at $\lambda = 0$ because of the translation invariance of those autonomous systems. However, is non-autonomous and therefore this reasoning no longer holds, and the eigenvalue close to $\lambda^r_1 = 0$ can have negative or positive real part. As $t_{22}$ does *not* have a pole for this $\lambda$ – as will be discussed in the next section – the Evans function $\mathcal{D}(\lambda)$ has a root for this value; it thus corresponds to an eigenvalue of $\mathcal{L}$. To the best of our knowledge, it is, in general, not possible to determine the precise location of this eigenvalue; in section \[sec:smallEigenvalue\] we compute its location using standard regular perturbation techniques when the non-autonomous terms are small.
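The role of translation invariance in the fast reduced limit can be checked symbolically: the fast pulse $\omega(\xi) = \frac{3}{2}\operatorname{sech}(\xi/2)^2$ satisfies $\omega'' = \omega - \omega^2$, and differentiating this identity shows that $\dot{\omega}$ lies in the kernel of $\mathcal{L}^\mathrm{r}$. A short sympy verification (our own sketch of these standard identities):

```python
import sympy as sp

xi = sp.symbols('xi')
w = sp.Rational(3, 2) * sp.sech(xi / 2) ** 2   # fast pulse omega

# residual of the pulse equation w'' = w - w^2
res_pulse = sp.simplify((sp.diff(w, xi, 2) - w + w**2).rewrite(sp.exp))

# residual of L^r w' = 0, the translation (zero) eigenvalue relation
Lr = lambda v: sp.diff(v, xi, 2) - (1 - 3 * sp.sech(xi / 2) ** 2) * v
res_kernel = sp.simplify(Lr(sp.diff(w, xi)).rewrite(sp.exp))

print(res_pulse, res_kernel)
```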
### Slow transmission function $t_{22}$
To determine the transmission function $t_{22}$, we focus on the function $\phi_2$, as defined in Proposition \[lemma:EvansFunctionDecomposition\]. By construction, we know that $\phi_2(\xi;\lambda) \rightarrow t_{22}(\lambda) E_2(\lambda) e^{\Lambda_2(\lambda)\xi} + t_{23}(\lambda) E_3(\lambda) e^{\Lambda_3(\lambda)\xi} + t_{24}(\lambda) E_4(\lambda) e^{\Lambda_4(\lambda)\xi}$ as $\xi \rightarrow \infty$. As $|\Lambda_4(\lambda)| \gg |\Lambda_{2,3}(\lambda)|$ for $\lambda \in \mathcal{C}_e$, the term $e^{\Lambda_4(\lambda)\xi}$ is exponentially small in the slow fields $I_s^\pm$. Therefore, we have $\phi_2(\xi;\lambda) \approx t_{22}(\lambda) E_2(\lambda) e^{\Lambda_2(\lambda)\xi} + t_{23}(\lambda) E_3(\lambda) e^{\Lambda_3(\lambda)\xi}$ for $\xi \in I_s^+$ sufficiently large. In this way, $\phi_2$ in the slow fields is related to the properties of the exponentially asymptotic constant-coefficient system $\dot{\phi}_\infty = A_\infty(\lambda,\varepsilon,\mu,m) \phi_\infty$. However, we need to relate $\phi_2$ in the slow fields to the exponentially asymptotic non-autonomous system $\dot{\phi}_s = A_s(\xi;\lambda,\varepsilon,\mu,m) \phi_s$ to determine $t_{22}$.
In the slow fields the system $\dot{\phi}_s = A_s(\xi;\lambda,\varepsilon,\mu,m) \phi_s$ has the dynamics for the $(\bar{u},\bar{p})$ part completely separated from the dynamics of the $(\bar{v},\bar{q})$ part. The $(\bar{u},\bar{p})$ part is governed by the non-autonomous ODE $$\left( \begin{array}{c} \dot{\bar{u}} \\ \dot{\bar{p}} \end{array} \right)
= \varepsilon^2 \mu \left[ B_0(\lambda) + B_1(\xi) \right] \left( \begin{array}{c} \bar{u} \\ \bar{p} \end{array} \right),
\label{eq:ODEforlinstab}$$ where $$B_0(\lambda) = \left( \begin{array}{cc} 0 & 1 \\ 1+ m\lambda & 0 \end{array} \right); \qquad
B_1(\xi) = \left( \begin{array}{cc} 0 & 0 \\ - g(\varepsilon^2 \mu \xi) & -f(\varepsilon^2 \mu \xi) \end{array} \right).$$ Here, only the matrix $B_1$ carries the non-autonomous part of the differential equation and the system without $B_1$ corresponds to the $(\bar{u},\bar{p})$ part of the system $\dot{\phi}_\infty = A_\infty(\lambda,\varepsilon,\mu,m) \phi_\infty$, which has spatial eigenvalues $\Lambda_{2,3} = \pm \varepsilon^2\mu\ \sqrt{1+m\lambda}$. When $\lambda \in \mathcal{C}_e$ this autonomous system admits an exponential dichotomy on $\mathbb{R}$ and, therefore, by roughness the non-autonomous system does so as well, provided that $\delta = \sup_{x \in \mathbb{R}} \sqrt{f(x)^2+g(x)^2} = \sup_{x \in \mathbb{R}} \|B_1(x)\|$ is sufficiently small. Under these conditions, there exist $\tilde{\psi}_2(\xi;\lambda) = (u_2(\xi;\lambda),p_2(\xi;\lambda),0,0)^T$ and $\tilde{\psi}_3(\xi;\lambda) = (u_3(\xi;\lambda),p_3(\xi;\lambda),0,0)^T$ such that $\tilde{\psi}_2(\xi;\lambda) \rightarrow E_2(\lambda) e^{\Lambda_2(\lambda)\xi}$ and $\tilde{\psi}_3(\xi;\lambda) \rightarrow E_3(\lambda) e^{\Lambda_3(\lambda) \xi}$ as $|\xi| \rightarrow \infty$. The same reasoning as before can now be used to deduce that $\phi_2(\xi;\lambda) \approx \tilde{\psi}_2(\xi;\lambda)$ for $\xi \in I_s^-$ and $\phi_2(\xi;\lambda) \approx t_{22}(\lambda) \tilde{\psi}_2(\xi;\lambda) + t_{23}(\lambda) \tilde{\psi}_3(\xi;\lambda)$ for $\xi \in I_s^+$.\
To compute $t_{22}$ we need to track the changes of $\bar{u}$ and $\bar{p}$ during the fast transition when $\xi \in I_f$. From , it follows that $\bar{u}$ stays constant to leading order. Hence, matching $\phi_2$ at the ends of both super-slow fields $I_s^\pm$ gives the leading order matching condition $$\label{eq:matchingUbar}
u_2(0;\lambda) = t_{22}(\lambda) u_2(0;\lambda) + t_{23}(\lambda) u_3(0;\lambda).$$ The $\bar{p}$ component changes in the fast field. On the one hand, this change is given by the difference of $\bar{p}$ values at both ends of the slow fields $I_s^\pm$, i.e. $$\Delta_\mathrm{s}\ \bar{p} = t_{22}(\lambda) p_2(0;\lambda) + t_{23}(\lambda) p_3(0;\lambda) - p_2(0;\lambda).$$ On the other hand, the accumulated jump over the fast field is $$\label{eq:fastJumpPBar}
\Delta_\mathrm{f}\ \bar{p} = \frac{1}{\mu} \int_{I_f} \left( v_p(\xi)^2 u_2(0;\lambda) + 2 u_p(\xi) v_p(\xi) \bar{v}(\xi;\lambda) \right)\ d\xi,$$ where $\bar{v}$ satisfies $\left( \mathcal{L}^r - \lambda \right) \bar{v} = - u_2(0;\lambda) v_p(\xi)^2$. We recall that, in the fast field, to leading order, $u_p = u_0$ and $v_p = \frac{\omega}{u_0}$, where $\omega(\xi) = \frac{3}{2} \operatorname{sech}(\xi/2)^2$. We rescale $\bar{v}(\xi;\lambda) = - \frac{u_2(0;\lambda)}{u_0^2} V_\mathrm{in}(\xi;\lambda)$. Then becomes $$\Delta_\mathrm{f}\ \bar{p} = \frac{1}{\mu} \frac{u_2(0;\lambda)}{u_0^2} \int_{I_f} \left( \omega(\xi)^2 - 2 \omega(\xi) V_\mathrm{in}(\xi;\lambda) \right)\ d\xi = \frac{1}{\mu} \frac{u_2(0;\lambda)}{u_0^2} \left( 6 - 2 \mathcal{R}(\lambda) \right) + h.o.t.$$ where $$\mathcal{R}(\lambda) := \int_{-\infty}^\infty \omega(\xi) V_\mathrm{in}(\xi;\lambda)\ d\xi$$ and $V_\mathrm{in}$ satisfies $$\left( \mathcal{L}^r - \lambda \right) V_\mathrm{in}(\xi;\lambda) = \omega(\xi)^2.$$ Equating $\Delta_\mathrm{s}\ \bar{p} = \Delta_\mathrm{f}\ \bar{p}$ and using , one readily derives (at leading order in $\varepsilon$) $$t_{22}(\lambda) = 1 + \frac{1}{\mu} \frac{1}{u_0^2} \frac{6 - 2 \mathcal{R}(\lambda)}{ \frac{p_2(0;\lambda)}{u_2(0;\lambda)} - \frac{p_3(0;\lambda)}{u_3(0;\lambda)}}.$$ Because of the symmetry $f(x) = f(-x)$, $g(x) = - g(-x)$, it follows that $u_2(0;\lambda) = u_3(0;\lambda)$ and $p_2(0;\lambda) = - p_3(0;\lambda)$. Hence $$t_{22}(\lambda) = 1 + \frac{1}{\mu} \frac{1}{u_0^2} \frac{3 - \mathcal{R}(\lambda)}{ \frac{p_2(0;\lambda)}{u_2(0;\lambda)}}.$$ The inhomogeneous ODE $\left(\mathcal{L}^r - \lambda\right) V_\mathrm{in} = \omega^2$ admits bounded solutions for all $\lambda$ that are not eigenvalues of $\mathcal{L}^r$. When $\lambda$ is an eigenvalue, though, a bounded solution only exists if the following Fredholm condition is satisfied: $$\int_{-\infty}^\infty \omega^2 v^* d\xi = 0,$$ where $v^*$ is the corresponding eigenfunction.
Therefore, by Sturm-Liouville theory, it is clear that there is a bounded solution for $\lambda^r_1 = 0$, but not for $\lambda_0^r = 5/4$ or $\lambda_2^r = -3/4$. That is, $\mathcal{R}(\lambda)$, and therefore $t_{22}$, has poles of order $1$ at $\lambda_0^r$ and $\lambda_2^r$.\
We have, hence, demonstrated the following:
\[lemma:evans\_function\] Let the conditions of Theorem \[theorem:fg\_general\] and assumption (A4) be fulfilled, and let $ (U_p, V_p) $ be a pulse solution to as described in Theorem \[theorem:fg\_general\]. It then holds true that the eigenvalues of the operator $ \mathcal{L} $ in arising from linearization around the pulse solution $ (U_p, V_p) $ coincide on $ \mathcal{C}_e $ with the roots of the Evans function $$\begin{aligned}
\label{eq:evans_function_}
\mathcal{D}(\lambda) = t_{11}(\lambda)t_{22}(\lambda)\widetilde{\mathcal{D}}(\lambda) \, ,\end{aligned}$$ with $ \widetilde{\mathcal{D}}(\lambda) \neq 0, \lambda \in \mathcal{C}_e$ and where the so-called fast transmission function is given by $$\begin{aligned}
\label{eq:t_11_fast}
t_{11}(\lambda) = C_{1} \left(\lambda - \lambda_0^f\right) \left(\lambda - \lambda_1^f \right) \left(\lambda - \lambda_2^f \right) \, ,\end{aligned}$$ with $ \lambda_1^f = \mathcal{O}(\varepsilon) $, while the so-called slow transmission function is given by $$\begin{aligned}
\label{eq:t_22_slow}
t_{22}(\lambda) = C_{2} \frac{\widetilde{t}_{22}(\lambda)}{\left(\lambda - \lambda_0^f \right) \left(\lambda - \lambda_2^f \right)} \, ,\end{aligned}$$ with some $ C_1, C_2, \lambda_0^f, \lambda_2^f \in \mathbb{R}\setminus\{ 0 \} $ and $ \widetilde{t}_{22} $ an analytic function on $ \mathcal{C}_e $. In particular, $$\label{eq:NLEPt22expression}
t_{22}(\lambda) = 1 + \frac{1}{u_0^2\mu} \left( \frac{3 - \mathcal{R}(\lambda)}{p_2(0;\lambda)/u_2(0;\lambda)} \right) \, ,$$ where $p_2(0;\lambda)/u_2(0;\lambda)$ is the slope of the unstable manifold of the trivial solution to at $x = 0$, and $\mathcal{R}$ is given (at leading order in $\varepsilon$) by $$\label{eq:definitionRfunction}
\mathcal{R}(\lambda) = \int_{-\infty}^\infty \frac{3}{2} \operatorname{sech}(\xi/2)^2 V_\mathrm{in}(\xi;\lambda)\ d\xi \, ,$$ where $V_\mathrm{in}$ satisfies $\left(\mathcal{L}^r - \lambda \right) V_\mathrm{in} = \frac{9}{4} \operatorname{sech}(\xi/2)^4$.
The function $\mathcal{R}$ has been extensively studied in [@BD18 Section 3.1.1], [@DP02 Section 4.1] and [@DRS12 Section 5]. We would like to stress, however, that $\mathcal{R}$ in this article has a different factor in front of it and is defined in terms of $\lambda$, whereas in [@DP02; @DRS12] it is defined as function of $P:= 2 \sqrt{1+\lambda}$. A plot of $\mathcal{R}$ has been included in Figure \[fig:Rlambda\].
![A plot of the function $\mathcal{R}(\lambda)$. The red lines show the form of $\mathcal{R}(\lambda)$ for real-valued $\lambda$, whereas the blue lines also show the complex $\lambda$ for which $\mathcal{R}(\lambda)$ is real-valued; the green, dashed lines indicate the poles of the $\mathcal{R}(\lambda)$.[]{data-label="fig:Rlambda"}](figures/Rlambda){width="30.00000%"}
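The function $\mathcal{R}$ can also be evaluated numerically. The sketch below (our own illustration, with an ad hoc Dirichlet truncation of the line) solves the banded linear system for $V_\mathrm{in}$, checks the identity $\int_{-\infty}^\infty \omega^2\, d\xi = 6$ used in the jump computation above, and exhibits the blow-up of $\mathcal{R}$ toward the pole at $\lambda_0^r = 5/4$:

```python
import numpy as np
from scipy.linalg import solve_banded

X, N = 40.0, 4001
xi = np.linspace(-X, X, N)
h = xi[1] - xi[0]
omega = 1.5 / np.cosh(xi / 2) ** 2
int_omega_sq = np.sum(omega**2) * h   # should be 6

def R(lam):
    # tridiagonal discretization of (L^r - lambda) with Dirichlet ends
    main = -2.0 / h**2 - 1.0 + 3.0 / np.cosh(xi / 2) ** 2 - lam
    ab = np.zeros((3, N))
    ab[0, 1:] = 1.0 / h**2   # superdiagonal
    ab[1, :] = main          # diagonal
    ab[2, :-1] = 1.0 / h**2  # subdiagonal
    V_in = solve_banded((1, 1), ab, omega**2)
    return np.sum(omega * V_in) * h

print(int_omega_sq, R(0.5), R(1.2))  # |R| grows toward the pole at 5/4
```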
The eigenvalue problem is often written as a nonlocal eigenvalue problem (NLEP). This can be achieved via the transformation $$V_\mathrm{in}(\xi;\lambda) = \frac{3 - \mu u_0^2 \frac{p_2(0;\lambda)}{u_2(0;\lambda)}}{\int_{-\infty}^{\infty} \omega(\xi) z(\xi;\lambda)\ d\xi} z(\xi;\lambda),$$ which results in the NLEP $$\left( \mathcal{L}^r - \lambda \right) z = \frac{\omega^2 \int_{-\infty}^\infty \omega z \ d\xi}{3 - \mu u_0^2 \frac{p_2(0;\lambda)}{u_2(0;\lambda)}}.$$
### Roots of transmission function $t_{22}$
In the constant coefficient case $ f,g \equiv 0 $, we have that $p_2(0;\lambda)/u_2(0;\lambda) = \sqrt{1+m\lambda}$ and so $ t_{22}(\lambda) = 0 $ reduces to $$\label{eq:stabilityConditionGeneral}
\mu u_0^2 = \frac{\mathcal{R}(\lambda)-3}{\sqrt{1+m\lambda}},$$ with $u_0$ as in , and eigenvalues can be readily extracted from this condition – see [@BD18]; in Figure \[fig:stabilityCondition\], we show plots of the right-hand side for various $m$. With additional asymptotic approximations, $m \ll 1$ and $m \gg 1$, this can be reduced even further; to leading order, $$\begin{array}{rcll}
\mu u_0^2 & = & \mathcal{R}(\lambda) - 3, & \mbox{ when $m \ll 1$;} \\
\nu u_0^2 & = & \frac{\mathcal{R}(\lambda)-3}{\sqrt{\lambda}}, & \mbox{ when $m \gg 1$;}
\end{array}$$ where $$\begin{aligned}
\label{eq:nu}
\nu = \frac{m^2 D}{a^2} = \mu \sqrt{m} \, .\end{aligned}$$ Now, when $\mu \ll 1$, respectively $\nu \ll 1$, the left-hand side of these expressions becomes asymptotically small (since $u_0 = u_0^- = \mathcal{O}(1)$, see and Remark \[remark:u0\_autonomous\_mu\_small\]), but stays positive. Hence solutions $\lambda$ accumulate at points for which $\mathcal{R}(\lambda) - 3 \approx 0$, which happens to be at the tip of the essential spectrum, i.e. $\lambda = \underline{\lambda}/m \approx -1$, see Figure \[fig:stabilityCondition\] and [@BD18]. In particular, no eigenvalues with positive real part are found.
[0.3]{} ![Plots of the right-hand side of for various $m$. The red lines indicate the values for real-valued $\lambda$, whereas the blue lines indicate complex $\lambda$ for which the right-hand side of is real-valued; in green the poles are shown; see [@BD18] for more details.[]{data-label="fig:stabilityCondition"}](figures/stabCondm0p45 "fig:"){width="\textwidth"}
[0.3]{} ![Plots of the right-hand side of for various $m$. The red lines indicate the values for real-valued $\lambda$, whereas the blue lines indicate complex $\lambda$ for which the right-hand side of is real-valued; in green the poles are shown; see [@BD18] for more details.[]{data-label="fig:stabilityCondition"}](figures/stabCondm1p2 "fig:"){width="\textwidth"}
[0.3]{} ![Plots of the right-hand side of for various $m$. The red lines indicate the values for real-valued $\lambda$, whereas the blue lines indicate complex $\lambda$ for which the right-hand side of is real-valued; in green the poles are shown; see [@BD18] for more details.[]{data-label="fig:stabilityCondition"}](figures/stabCondm10 "fig:"){width="\textwidth"}
This idea can be expanded to include the non-autonomous cases. For this, as in the existence problem, we relate the non-autonomous equation to the autonomous equation. Here, it is useful to rescale such that it has the form of . Specifically, we set $\tilde{x} = \varepsilon^2 \mu |\sqrt{1 + m \lambda}| \xi$ and $\bar{p} = |\sqrt{1+m\lambda}| \tilde{p}$, under which turns into the system $$\label{eq:stabilitySystemStandardForm}
\left(\begin{array}{c} \bar{u}' \\ \tilde{p}' \end{array}\right) =
\left[ \left( \begin{array}{cc}
0 & 1 \\ 1 & 0
\end{array}\right)
+\left(
\begin{array}{cc}
0 & 0 \\
- \frac{g(\tilde{x}/|\sqrt{1+m\lambda}|)}{|1+m\lambda|} & - \frac{f(\tilde{x}/|\sqrt{1+m\lambda}|)}{|\sqrt{1+m\lambda}|}
\end{array}
\right)
\right]
\left(\begin{array}{c} \bar{u} \\ \tilde{p} \end{array}\right).$$ The autonomous part of this equation corresponds to the autonomous part for the existence problem – see section \[sec:dynamics\_slow\_manifold\] – and thus possesses an exponential dichotomy with constants $K = 1$ and $\rho = 1$. Therefore, for a given $\lambda \in \mathcal{C}_e$, by roughness (Proposition \[prop:roughness\_closeness\_general\]) it follows that the full non-autonomous equation has an exponential dichotomy as well when $$\sup_{x \in \mathbb{R}} \frac{1}{|\sqrt{1+m\lambda}|} \sqrt{ \frac{g(x)^2}{|1+m\lambda|} + f(x)^2} < \frac{1}{4}.$$ It is easily verified that this condition is satisfied when $$\delta = \sup_{x \in \mathbb{R}} \sqrt{f(x)^2+g(x)^2} < \delta_c(\lambda) := \frac{1}{4} |\sqrt{1+m\lambda}| \left| \sqrt{ \frac{1+m\lambda}{2+m\lambda}}\right|.$$ Thus, for all $\lambda \in \mathcal{C}_e$, we obtain a (different) bound $\delta_c(\lambda)$. Since $\delta_c(\lambda) \downarrow 0$ as $|\sqrt{1+m\lambda}| \downarrow 0$ – i.e. when $\lambda$ approaches $- 1/ m$ – the infimum of $\delta_c(\lambda)$ over the full region $\mathcal{C}_e$ is zero and yields no useful bound. Instead, we further restrict $\lambda$ to $\lambda \in \tilde{C}_e := \mathcal{C}_e \cap \left\{\lambda \in \mathbb{C}: |\lambda + \frac{1}{m}| > \frac{1}{2m}\right\}$. Note that $\mathbb{C}^+ \subset \tilde{C}_e$. Then the infimum of $\delta_c(\lambda)$ over this region is positive, and we define it as $\delta_c := \inf_{\lambda \in \tilde{C}_e} \delta_c(\lambda) = \frac{\sqrt{6}}{24} \approx 0.102$. Thus, if $\delta < \delta_c$, possesses an exponential dichotomy for all $\lambda \in \tilde{C}_e$.
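The value of this infimum can be checked numerically: writing $w = 1 + m\lambda$, we have $\delta_c(\lambda) = \frac{1}{4}|w|/\sqrt{|1+w|}$, and the restriction $|\lambda + \frac{1}{m}| > \frac{1}{2m}$ becomes $|w| > \frac{1}{2}$. The sketch below (our own check; it ignores the remaining $\mathcal{C}_e$ constraint, which, under the assumption that the minimizer at $w = 1/2$ is admissible, does not affect the result) minimizes over a polar grid in $w$:

```python
import numpy as np

# grid in w = 1 + m*lambda over the annulus 1/2 <= |w| <= 5
r = np.linspace(0.5, 5.0, 451)
theta = np.linspace(0.0, 2.0 * np.pi, 2001)
w = r[None, :] * np.exp(1j * theta[:, None])

# delta_c as a function of w: |w| / (4 sqrt(|1 + w|))
vals = 0.25 * np.abs(w) / np.sqrt(np.abs(1.0 + w))
print(vals.min(), np.sqrt(6.0) / 24.0)
```

The minimum is attained at $w = 1/2$ on the positive real axis, reproducing $\sqrt{6}/24 \approx 0.102$.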
Moreover, for all $\lambda \in \tilde{C}_e$ and $\delta < \delta_c$, the slope $p_2(0;\lambda)/u_2(0;\lambda)$ of the non-autonomous case can be related to that of the autonomous case, along the same lines as in the existence proof in section \[sec:dynamics\_slow\_manifold\] (specifically, as in Lemma \[lemma:closeness\_slopes\]). That is, there are $\mathcal{O}(1)$ constants $0 < C_-(\delta) \leq 1 \leq C_+(\delta)$ such that $\tilde{p}(0;\lambda) = C \bar{u}(0;\lambda)$ for some $C \in \left( C_-(\delta), C_+(\delta) \right)$. Rescaling back to the original variables then yields $p_2(0;\lambda) / u_2(0;\lambda) = C \sqrt{1 + m \lambda}$. Therefore $t_{22}(\lambda) = 0$ reduces to $$C\mu u_0^2 = \frac{\mathcal{R}(\lambda)-3}{\sqrt{1+m\lambda}}.
\label{eq:NLEPnonAutonomous}$$ The asymptotic arguments for the autonomous case can now be repeated and it readily follows that no solutions are found with $\lambda \in \tilde{C}_e$. In particular $ t_{22}(\lambda) = 0 $ does not have solutions with $\mbox{Re} \lambda > 0$. We, hence, have the following result.
\[proposition:slow\_transmission\_function\] Let $ t_{22} $ be the slow transmission function from Lemma \[lemma:evans\_function\]. Then, for $\lambda \in \left\{\lambda \in \mathcal{C}_e: \left| \lambda + \frac{1}{m} \right| > \frac{1}{2m}\right\}$, $$\label{eq:NLEPt22expression_details}
t_{22}(\lambda) = 1 + \frac{1}{u_0^2\mu} \left( \frac{3-\mathcal{R}(\lambda)}{C \sqrt{1+m \lambda}} \right) \, ,$$ with $u_0 = u_0^-$ as in and for some $ C \in \mathbb{R} $ with $$\begin{aligned}
0 < C_{\mathrm{min}}(\delta) < C < C_{\mathrm{max}}(\delta) < \infty\end{aligned}$$ and $ C_{\mathrm{min}/\mathrm{max}}(\delta) $ defined as in Lemma \[lemma:closeness\_slopes\].\
Moreover, if either of the following two asymptotic approximations hold true,
- $m \ll 1$ and $\mu \ll 1$;
- $m \gg 1$ and $\nu \ll 1$,
then $t_{22}(\lambda) = 0$ does not have any solution $\lambda \in \mathcal{C}_e$ with $\mbox{Re} \lambda > 0$.
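The closeness of the slope $C$ to its autonomous value can be illustrated numerically: we integrate a rescaled slow system of the form above along the unstable direction $(1,1)^T$ of its autonomous part $\left(\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\right)$ and read off the slope at $x = 0$. The bump profiles for $f$ and $g$ below are our own illustrative choices (even/odd as required by the symmetry assumption, with $\delta = 0.1$), not taken from the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

delta = 0.1
f = lambda x: delta / np.cosh(x)                # even bump (illustrative)
g = lambda x: delta * np.tanh(x) / np.cosh(x)   # odd bump (illustrative)

def rhs(x, y):
    # u' = p, p' = u - g(x) u - f(x) p  (rescaled slow system, sketch)
    u, p = y
    return [p, u - g(x) * u - f(x) * p]

# start far to the left on the unstable eigenvector (1, 1)
sol = solve_ivp(rhs, [-20.0, 0.0], [1.0, 1.0], rtol=1e-10, atol=1e-12)
u0, p0 = sol.y[:, -1]
slope = p0 / u0
print(slope)  # stays close to the autonomous value 1
```

For $\delta = 0$ the slope is exactly $1$; the small perturbation moves it only within an $\mathcal{O}(\delta)$ window, consistent with $C \in (C_{\mathrm{min}}(\delta), C_{\mathrm{max}}(\delta))$.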
Combining Lemma \[lemma:evans\_function\] with Proposition \[proposition:slow\_transmission\_function\] readily demonstrates Theorem \[theorem:point\_spectrum\].
### Further remarks
If the asymptotic conditions on $m$, $\mu$ and $\nu$ from Proposition \[proposition:slow\_transmission\_function\] do not hold, equation still holds. By restricting $\delta$ further (i.e. taking a smaller bound $\delta_c$), stronger bounds on the constant $C_+$ can be enforced that guarantee all roots of $t_{22}$ lie to the left of the imaginary axis. The proof of this heavily relies on the proof for the autonomous case (see e.g. [@BD18]) and a careful estimation of the constant $C_+$. Specifically, the following lemma can be established:
\[lemma:rootsOft22ComputedBetter\] Let the conditions of Proposition \[lemma:EvansFunctionDecomposition\] be fulfilled. Then there exist critical values $m_c = 3$, $0 < \mu^*(m) < \frac{1}{12}$ (see Theorem \[theorem:fg\_general\]) and $\nu^*(m) > 0$ such that if either of the following holds
- $m < m_c$ and $\mu < \mu^*(m)$;
- $m > m_c$ and $\nu < \nu^*(m)$;
- $m = m_c$ and $\mu < \mu^*(m)$ and $\nu < \nu^*(m)$,
then there exists a $\delta_c > 0$ such that if $\delta < \delta_c$ the condition has no solutions with $\mbox{Re} \lambda > 0$; that is, $t_{22}$ has no roots with positive real part.
In , the left-hand side is always real-valued. Hence, only $\lambda \in \mathbb{C}$ for which the right-hand side is real-valued can satisfy . Due to this, eigenvalues can only appear on a skeleton in $\mathbb{C}$, whose form depends only on $m$. In Figure \[fig:eigenvalueSkeletons\] we show several skeletons for different $m$. Note that this is the reason for (the shape of) the bounds on the ‘large’ eigenvalues shown in Figure \[fig:spectralBounds\] (in red).
[0.3]{} ![Plots of skeletons on which $\lambda$ that satisfy necessarily need to lie.[]{data-label="fig:eigenvalueSkeletons"}](figures/skeletonm0p45 "fig:"){width="\textwidth"}
[0.3]{} ![Plots of skeletons on which $\lambda$ that satisfy necessarily need to lie.[]{data-label="fig:eigenvalueSkeletons"}](figures/skeletonm3 "fig:"){width="\textwidth"}
[0.3]{} ![Plots of skeletons on which $\lambda$ that satisfy necessarily need to lie.[]{data-label="fig:eigenvalueSkeletons"}](figures/skeletonm10 "fig:"){width="\textwidth"}
\[remark:varTer\_positiveu0\] The arguments in this section have been applied to pulse solutions with $u_0 = u_0^-$ (see ; $u_0^-$ as in and ). There also exist pulse solutions with $u_0 = u_0^+$ (with $u_0^+$ as in and ) and the reasoning also holds for these, up to equation . However, $u_0^+ = \mathcal{O}\left(\frac{1}{\mu}\right)$ for these solutions (see Remark \[remark:u0\_autonomous\_mu\_small\]) and consequently the left-hand side of is asymptotically large (for $\mu \ll 1$). As a result, eigenvalues accumulate around the poles of the right-hand side. In particular, these alternative pulse solutions necessarily have an eigenvalue close to $\lambda = 5/4 > 0$, making them unstable.
If $\delta \ll 1$, a direct application of roughness of exponential dichotomies shows that eigenvalues of necessarily lie $\mathcal{O}(\delta)$-close to eigenvalues of the problem with $f \equiv 0$, $g \equiv 0$.
\[remark:stability\_fgLimits\] If $\lim_{x \rightarrow \pm \infty} f(x),g(x)$ exist but are not (all) equal to zero, a similar result can be found with minor changes to the proof – provided that the essential spectrum lies to the left of the imaginary axis.
\[remark:stability\_fgBounded\] If $\lim_{x \rightarrow \pm \infty} f(x),g(x)$ do not exist, the outlined proof fails because the ‘elephant trunk’ procedure used in the proof of Lemma \[lemma:EvansFunctionDecomposition\] no longer works. If $f$ and $g$ approach (possibly different) periodic functions for $x \rightarrow \infty$, a variant of this proof using a Riccati transformation such as in [@BjornRiccati] seems possible.
Small eigenvalue close to $\lambda = 0$ (Proof of Theorem \[theorem:point\_spectrum\_small\]) {#sec:smallEigenvalue}
---------------------------------------------------------------------------------------------
In this section we assume that $$\begin{aligned}
f(x) = \delta \widetilde{f}(x) \, , \quad g(x) = \delta \widetilde{g}(x) \, , \qquad 0 < \delta \ll 1 \,, \widetilde{f}, \widetilde{g} = \mathcal{O}(1) \, , \quad \sup_{x \in \mathbb{R}} \sqrt{\widetilde{f}(x)^2+\widetilde{g}(x)^2} = 1 ,\end{aligned}$$ which will ease the derivation of a more detailed estimate (as given in Theorem \[theorem:point\_spectrum\_small\]) of the location of the small eigenvalue around $ \lambda = 0 $ (in terms of $ \delta $), so we set $$\begin{aligned}
\lambda = \delta \widetilde{\lambda} \, .\end{aligned}$$ The strategy to derive such an estimate is to relate the eigenvalue and existence problems in an appropriate way and then use the Fredholm alternative. To this end, let us write the eigenvalue problem in the fast field in the more concise form $$\begin{aligned}
\label{eq:evp_concise}
\delta \tilde \lambda
\left(
\begin{array}{cc}
\varepsilon^4 \mu^2 m & 0\\
0 & 1
\end{array}
\right)
\left(
\begin{array}{c}
\bar{u}\\
\bar{v}
\end{array}
\right)
= \mathbb{L}_{u_p, v_p}
\left(
\begin{array}{c}
\bar{u}\\
\bar{v}
\end{array}
\right) \, ,\end{aligned}$$ and the existence problem in the fast field as $$\begin{aligned}
0 = L_{h} \left(
\begin{array}{c}
u_p\\
v_p
\end{array}
\right) +
\delta
L_{in}(\xi) \left(
\begin{array}{c}
u_p\\
v_p
\end{array}
\right) +
N \left(
\begin{array}{c}
u_p\\
v_p
\end{array}
\right) +
\left(\begin{array}{c} a \\ 0 \end{array}\right)
\, ,\end{aligned}$$ with (the linear part with constant coefficients) $$\begin{aligned}
L_h =
\left(
\begin{array}{cc}
\partial_\xi^2 - \varepsilon^4 \mu^2 & 0 \\
0 & \partial_\xi^2 -1
\end{array}\right) \, , \quad \end{aligned}$$ and $$\begin{aligned}
L_{in}(\xi) =
\left(
\begin{array}{cc}
\varepsilon^2 \mu\ \widetilde{f}(\varepsilon^2 \mu \xi) \partial_{\xi} + \varepsilon^4 \mu^2\ \widetilde{g}(\varepsilon^2 \mu \xi) & 0 \\
0 & 0
\end{array}\right) \, , \quad \end{aligned}$$ and $ N $ the nonlinear terms. Recall that in the autonomous case the derivative of the pulse solution is an eigenfunction for the zero eigenvalue. Motivated by this, we take a derivative w.r.t. $\xi$ of the non-autonomous existence problem which gives $$\begin{aligned}
\label{eq:diff_existence}
0 = \underbrace{[L_{h} + \delta L_{in}(\xi) + DN(u_p, v_p)]}_{= \mathbb{L}_{u_p, v_p}}
\left(
\begin{array}{c}
\dot{u}_p\\
\dot{v}_p
\end{array}
\right) +
\delta
\left(\frac{d}{d\xi} L_{in}(\xi)\right) \left(
\begin{array}{c}
u_p\\
v_p
\end{array}
\right) \, ,\end{aligned}$$ and plug into the above eigenvalue problem the ansatz $$\begin{aligned}
\left(
\begin{array}{c}
\bar{u} \\ \bar{v}
\end{array}
\right)
=
\left(
\begin{array}{c}
\dot{u}_p \\ \dot{v}_p
\end{array}
\right)
+
\delta
\left(
\begin{array}{c}
\widetilde{u} \\ \widetilde{v}
\end{array}
\right)
\, ,\end{aligned}$$ which results in $$\begin{aligned}
\delta \tilde \lambda
\left(
\begin{array}{cc}
\varepsilon^4 \mu^2 m & 0\\
0 & 1
\end{array}
\right)
\left(
\begin{array}{c}
\dot{u}_p \\ \dot{v}_p
\end{array}
\right)
+
\delta^2 \tilde \lambda
\left(
\begin{array}{cc}
\varepsilon^4 \mu^2 m & 0\\
0 & 1
\end{array}
\right)
\left(
\begin{array}{c}
\widetilde{u}\\
\widetilde{v}
\end{array}
\right)
=
\mathbb{L}_{u_p, v_p}
\left(
\begin{array}{c}
\dot{u}_p \\ \dot{v}_p
\end{array}
\right)
+
\delta
\mathbb{L}_{u_p, v_p}
\left(
\begin{array}{c}
\widetilde{u}\\
\widetilde{v}
\end{array}
\right) \, .\end{aligned}$$ Upon using to replace the term featuring $ \mathbb{L}_{u_p, v_p}( \dot{u}_p, \dot{v}_p)^T $, we get $$\begin{aligned}
\delta \tilde \lambda
\left(
\begin{array}{cc}
\varepsilon^4 \mu^2 m & 0\\
0 & 1
\end{array}
\right)
\left(
\begin{array}{c}
\dot{u}_p \\ \dot{v}_p
\end{array}
\right)
+
\delta^2 \tilde \lambda
\left(
\begin{array}{cc}
\varepsilon^4 \mu^2 m & 0\\
0 & 1
\end{array}
\right)
\left(
\begin{array}{c}
\widetilde{u}\\
\widetilde{v}
\end{array}
\right)
=
- \delta
\left(\frac{d}{d\xi} L_{in}(\xi) \right) \left(
\begin{array}{c}
u_p\\
v_p
\end{array}
\right)
+
\delta
\mathbb{L}_{u_p, v_p}
\left(
\begin{array}{c}
\widetilde{u}\\
\widetilde{v}
\end{array}
\right) \, . \end{aligned}$$ For the perturbation analysis to follow we will use the notation $ u_{p,0}, v_{p,0}, \widetilde{u}_{0}, \widetilde{v}_{0} $ to indicate the leading order in $ \delta $ of the corresponding terms. In particular, $ u_{p,0}, v_{p,0} $ are the pulse solutions for the homogeneous case $ f=g=0 $ as described in Corollary \[cor:fg\_equal\_zero\]. We hence arrive at the leading order in $ \delta $ of the previous equation $$\begin{aligned}
\label{eq:perturbation_first_order}
\mathbb{L} \left(
\begin{array}{c}
\widetilde{u}_0\\
\widetilde{v}_0
\end{array}
\right)
=
\left(
\begin{array}{c}
\alpha\\
\beta
\end{array}
\right)\end{aligned}$$ with $$\begin{aligned}
\mathbb{L}:= \mathbb{L}_{u_{p,0}, v_{p,0}} =
\left(
\begin{array}{cc}
\partial_\xi^2 - \varepsilon^4 \mu^2 - \varepsilon^2 v_{p,0}^2 & - 2 \varepsilon^2 u_{p,0} v_{p,0} \\
v_{p,0}^2 & \partial_\xi^2 -1 + 2 u_{p,0} v_{p,0}
\end{array}\right) \, ,\end{aligned}$$ and [$$\begin{aligned}
\left(
\begin{array}{c}
\alpha\\
\beta
\end{array}
\right)
:=
\tilde \lambda
\left(
\begin{array}{cc}
\varepsilon^4 \mu^2 m & 0\\
0 & 1
\end{array}
\right)
\left(
\begin{array}{c}
\dot{u}_{p,0} \\ \dot{v}_{p,0}
\end{array}
\right)
+
\left(\frac{d}{d\xi} L_{in}(\xi)\right) \left(
\begin{array}{c}
u_{p,0}\\
v_{p,0}
\end{array}
\right)
=
\left(
\begin{array}{c}
\varepsilon^4 \mu^2 m \tilde{\lambda} \dot{u}_{p,0} + \varepsilon^4 \mu^2 \tilde{f}'(\varepsilon^2 \mu \xi) \dot{u}_{p,0} + \varepsilon^6 \mu^3 \tilde{g}'(\varepsilon^2 \mu \xi) u_{p,0} \\
\tilde{\lambda} \dot{v}_{p,0}
\end{array}
\right) \, .
\end{aligned}$$ ]{} In order to find an expression for the eigenvalue correction $ \widetilde{\lambda} $, we will make use of the Fredholm alternative for . Hence, we first need to study the kernel of the adjoint operator $$\mathbb{L}^* = \left(\begin{array}{cc}
\partial_\xi^2 - \varepsilon^4 \mu^2 - \varepsilon^2 v_{p,0}^2
& v_{p,0}^2 \\
- 2 \varepsilon^2 u_{p,0} v_{p,0}
& \partial_\xi^2 - 1 + 2 u_{p,0} v_{p,0}
\end{array}\right) \, ,$$ that is, to find $ (u^*,v^*)^T $ with $$\begin{aligned}
\label{eq:adjoint_problem}
\mathbb{L}^*
\left(
\begin{array}{c}
u^*\\
v^*
\end{array}
\right)
=
0 \, ,\end{aligned}$$ and rearrange the solvability condition $$\label{eq:smallEigenvalueFredholmCondition}
\left\langle \left( \begin{array}{c} u^* \\ v^* \end{array} \right), \left(\begin{array}{c} \alpha \\ \beta \end{array} \right) \right\rangle_{L^2 \times L^2} = 0,$$ to get an expression for $ \tilde{\lambda} $. Since is again a singularly perturbed problem (in $ \varepsilon $), we split this problem into three regions: two slow regions, $I_s^\pm$, and one fast region, $I_f$. As described in Theorem \[theorem:fg\_general\] and Corollary \[cor:fg\_equal\_zero\], we have $$\label{eq:smallEigenvalueExistence}
u_{p,0,0}(\xi) =
\begin{cases}
\frac{1}{\mu} \left[1-(1-\mu u_0)e^{+\varepsilon^2 \mu \xi} \right] \, , & \xi \in I_s^-; \\
u_0, & \xi \in I_f; \\
\frac{1}{\mu} \left[1-(1-\mu u_0)e^{-\varepsilon^2 \mu \xi} \right] \, , & \xi \in I_s^+,
\end{cases}\, \quad
v_{p,0,0}(\xi) =
\begin{cases}
0, & \xi \in I_s^-;\\
\frac{1}{u_0} \omega(\xi), & \xi \in I_f;\\
0, & \xi \in I_s^+,
\end{cases},$$ where $\omega(\xi) = \frac{3}{2} \operatorname{sech}(\xi/2)^2$ and the notation “$p,0,0$” indicates that this is the leading order in both $ \delta $ and $ \varepsilon $. In the slow regions we have $v_{p,0,0} = 0$ to leading order and therefore (again to leading order) $$u^*(\xi) =
\begin{cases}
C^- e^{\varepsilon^2 \mu \xi}, & \xi \in I_s^-;\\
C^+ e^{-\varepsilon^2 \mu \xi}, & \xi \in I_s^+;
\end{cases}\quad \, \quad
v^*(\xi) =
\begin{cases}
D^- e^{\xi}, & \xi \in I_s^-;\\
D^+ e^{-\xi}, & \xi \in I_s^+,
\end{cases}$$ where $C^\pm$ and $D^\pm$ are constants that need to be found via matching with the fast field at $\xi = \pm 1/\sqrt{\varepsilon}$. In the fast region, the adjoint problem is to leading order given by $$\left\{
\begin{array}{rcl}
0 & = & \ddot{u}^* + \frac{1}{u_0^2} \omega^2 v^* \, , \\
0 & = & \ddot{v}^* - v^* + 2 \omega v^* \, .
\end{array}
\right.$$ Up to a multiplicative constant, the only bounded solution to the $v^*$-equation is $v^* = \frac{1}{u_0} \omega'$. Matching with the slow fields indicates $D^\pm = 0$. The expression for $u^*$ in $I^f$ can be found by integrating twice, which reveals $$\begin{aligned}
u^*(\xi)
= - \frac{1}{3 u_0^3} \int^\xi \omega^3(z)\ dz + C_2
= - \frac{1}{3 u_0^3} \frac{9}{20} \left[ 6 \cosh(\xi) + \cosh(2\xi) + 8 \right] \tanh(\xi/2) \operatorname{sech}(\xi/2)^4 + C_2
=: \sigma(\xi) \, . \end{aligned}$$ The value of $C_2$ turns out to be irrelevant and therefore we choose $C_2 = 0$ for simplicity of presentation. Matching with the slow fields then gives $C^- = \frac{6}{5 u_0^3}$ and $C^+ = -\frac{6}{5 u_0^3}$. In summary, we have to leading order in $ \varepsilon $ $$\label{eq:solution_adjoint_problem}
u^*(\xi) =
\begin{cases}
+\frac{6}{5u_0^3} \, e^{+\varepsilon^2 \mu \xi} \, , & \xi \in I_s^-; \\
\sigma(\xi) \, , & \xi \in I_f; \\
-\frac{6}{5u_0^3} \, e^{-\varepsilon^2 \mu \xi} \, , & \xi \in I_s^+,
\end{cases}\, \quad
v^*(\xi) =
\begin{cases}
0 \, , & \xi \in I_s^-;\\
\frac{1}{u_0} \omega'(\xi)\, , & \xi \in I_f;\\
0 \, , & \xi \in I_s^+,
\end{cases},$$ and $$\label{eq:alpha_leading_order}
\alpha(\xi) =
\begin{cases}
\varepsilon^6 \mu^2 \, e^{+\varepsilon^2 \mu \xi}
\left[-m \widetilde{\lambda}(1-\mu u_0)- \widetilde{f}'(\varepsilon^2 \mu \xi) (1-\mu u_0)
+ \widetilde{g}'(\varepsilon^2 \mu \xi)\left( e^{-\varepsilon^2 \mu \xi} + \mu u_0 - 1 \right) \right] \, , & \xi \in I_s^-; \\
\varepsilon^6 \mu^3 \tilde{g}'(\varepsilon^2 \mu \xi) u_0 \, , & \xi \in I_f; \\
\varepsilon^6 \mu^2 \, e^{-\varepsilon^2 \mu \xi}
\left[\ \ m \widetilde{\lambda}(1-\mu u_0)+ \widetilde{f}'(\varepsilon^2 \mu \xi) (1-\mu u_0)
+ \widetilde{g}'(\varepsilon^2 \mu \xi)\left( e^{+\varepsilon^2 \mu \xi} + \mu u_0 - 1 \right) \right] \, , & \xi \in I_s^+,
\end{cases}$$
$$\label{eq:beta_leading_order}
\beta(\xi) =
\begin{cases}
0 \, , & \xi \in I_s^-;\\
\frac{\widetilde{\lambda}}{u_0} \omega'(\xi)\, , & \xi \in I_f;\\
0 \, , & \xi \in I_s^+,
\end{cases}$$
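Before assembling the solvability condition, the closed-form adjoint solution can be sanity-checked numerically. The short script below (an illustration, not part of the analysis; $u_0 = 3$ is a sample value, the limit for $\mu \ll 1$) verifies that $\sigma$ satisfies $\sigma' = -\omega^3/(3u_0^3)$, that $v^* = \omega'/u_0$ solves the fast adjoint $v$-equation, and that $\sigma(\mp\infty) = \pm 6/(5u_0^3)$, which fixes the matching constants $C^\pm$.

```python
import math

u0 = 3.0  # sample pulse amplitude (u0 -> 3 in the limit mu -> 0)

def omega(x):
    # fast homoclinic profile
    return 1.5 / math.cosh(x / 2) ** 2

def sigma(x):
    # closed-form expression for u* in the fast field (with C2 = 0)
    return (-(1.0 / (3 * u0 ** 3)) * (9.0 / 20.0)
            * (6 * math.cosh(x) + math.cosh(2 * x) + 8)
            * math.tanh(x / 2) / math.cosh(x / 2) ** 4)

def vstar(x):
    # v* = omega'(x) / u0
    return -1.5 / u0 * math.tanh(x / 2) / math.cosh(x / 2) ** 2

# sigma'(x) = -omega(x)^3 / (3 u0^3), checked by central differences
for x in (-3.0, -1.0, 0.5, 2.0):
    d = 1e-5
    num = (sigma(x + d) - sigma(x - d)) / (2 * d)
    assert abs(num + omega(x) ** 3 / (3 * u0 ** 3)) < 1e-8

# v* solves the fast adjoint equation v'' - v + 2 omega v = 0
for x in (-2.0, 0.7):
    d = 1e-4
    vpp = (vstar(x + d) - 2 * vstar(x) + vstar(x - d)) / d ** 2
    assert abs(vpp - vstar(x) + 2 * omega(x) * vstar(x)) < 1e-6

# limits sigma(-inf) = +6/(5 u0^3), sigma(+inf) = -6/(5 u0^3) fix C^+ and C^-
assert abs(sigma(-40.0) - 6 / (5 * u0 ** 3)) < 1e-12
assert abs(sigma(40.0) + 6 / (5 * u0 ** 3)) < 1e-12
```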
We can now assemble the different terms for the solvability condition $$\begin{aligned}
\label{eq:smallEigenvalueFredholmCondition_details}
\left\langle \left( \begin{array}{c} u^* \\ v^* \end{array} \right), \left(\begin{array}{c} \alpha \\ \beta \end{array} \right) \right\rangle_{L^2 \times L^2}
= \int_{I_s^- \cup I_{f} \cup I_s^+} u^*(\xi) \alpha(\xi) \, d \xi + \int_{I_s^- \cup I_{f} \cup I_s^+} v^*(\xi) \beta(\xi) \, d \xi \, .\end{aligned}$$ Using that $ f $ is odd and $ g $ is even, which makes $ f' $ even and $ g' $ odd, we get to leading order [$$\begin{aligned}
\int_{I_s^-} u^*(\xi) \alpha(\xi) d\xi
& = +\varepsilon^6 \mu^2 \left( \frac{6}{5 u_0^3} \right) \int_{I_s^-} e^{+2\varepsilon^2 \mu \xi} \left( - m\tilde{\lambda}(1-\mu u_0) - \tilde{f}'(\varepsilon^2 \mu \xi)(1-\mu u_0) + \tilde{g}'(\varepsilon^2 \mu \xi) [e^{-\varepsilon^2 \mu \xi}+\mu u_0 - 1] \right) \, d\xi \\
& = +\varepsilon^4 \mu \left( \frac{6}{5 u_0^3} \right) \int_0^{+\infty} e^{-2x} \left( - m\tilde{ \lambda}(1-\mu u_0) -\tilde{f}'(x)(1-\mu u_0) - \tilde{g}'(x) [e^{x}+\mu u_0 - 1] \right) \, dx + h.o.t.\\
& = - \varepsilon^4 \mu \left( \frac{6}{5 u_0^3} \right) \int_0^{+\infty} e^{-2x} \left( m \tilde{\lambda}(1-\mu u_0) + \tilde{f}'(x)(1-\mu u_0) + \tilde{g}'(x) [e^{x}+\mu u_0 - 1] \right) \, dx + h.o.t.\\
& = - \varepsilon^4 \mu \left( \frac{6}{5 u_0^3} \right) \left( \frac{1}{2} m(1-\mu u_0) \tilde{ \lambda} + \int_0^{+\infty} e^{-2x} \left(\tilde{f}'(x)(1-\mu u_0) + \tilde{g}'(x) [e^{x}+\mu u_0 - 1] \right) \, dx \right) + h.o.t. \\
\int_{I_s^+} u^*(\xi) \alpha(\xi) d\xi
& = - \varepsilon^6 \mu^2 \left( \frac{6}{5 u_0^3} \right) \int_{I_s^+} e^{-2\varepsilon^2 \mu \xi} \left( \ \ m \tilde{\lambda}(1-\mu u_0) + \tilde{f}'(\varepsilon^2 \mu \xi)(1-\mu u_0) + \tilde{g}'(\varepsilon^2 \mu \xi) [e^{+\varepsilon^2 \mu \xi}+\mu u_0 - 1] \right) \, d\xi \\
& = - \varepsilon^4 \mu \left( \frac{6}{5 u_0^3} \right) \left( \frac{1}{2} m(1-\mu u_0) \tilde{ \lambda} + \int_0^{+\infty} e^{-2x} \left(\tilde{f}'(x)(1-\mu u_0) + \tilde{g}'(x) [e^{x}+\mu u_0 - 1] \right) \, dx \right) + h.o.t.\\
\int_{I_f} u^*(\xi) \alpha(\xi) \, d\xi
& = \int_{I_f} \sigma(\xi)\, \varepsilon^6 \mu^3 \tilde{g}'(\varepsilon^2 \mu \xi) u_0 \, d\xi = \mathcal{O}(\varepsilon^{6-1/2}\mu^3) \\
\int_{I_s^\pm} v^*(\xi) \beta(\xi) d\xi
& = h.o.t. \\
\int_{I_f} v^*(\xi) \beta(\xi) d\xi
& = \int_{I_f} \tilde{\lambda} \frac{1}{u_0^2} \omega'(\xi)^2 d\xi = \tilde{\lambda} u_0 \left( \frac{6}{5 u_0^3} \right) + h.o.t. \, .\end{aligned}$$]{} Putting all pieces together, the solvability condition reads [ $$\begin{aligned}
\left\langle \left( \begin{array}{c} u^* \\ v^* \end{array} \right), \left(\begin{array}{c} \alpha \\ \beta \end{array} \right) \right\rangle_{L^2 \times L^2}
= & \left( \frac{6}{5 u_0^3} \right)\left[ \tilde{\lambda} u_0 -\varepsilon^4 \mu \left( m \tilde{ \lambda}(1-\mu u_0) \right.\right. \\ & \left.\left.+ 2\int_0^{+\infty} e^{-2x} \left(\tilde{f}'(x)(1-\mu u_0) + \tilde{g}'(x) [e^{x}+\mu u_0 - 1] \right) \, dx \right) \right] + h.o.t.= 0 \, ,\end{aligned}$$ ]{} which can be rearranged to $$\tilde{\lambda} = \frac{2 \varepsilon^4 \mu}{u_0 - \varepsilon^4 \mu m (1 - \mu u_0)} \int_0^{+\infty} e^{-2x} \left(\tilde{f}'(x)(1-\mu u_0) + \tilde{g}'(x) [e^{x}+\mu u_0 - 1] \right) \, dx + h.o.t.$$
Since the problem is solved by a regular perturbation approach, the asymptotic analysis may be validated rigorously by classical methods (i.e. by rigorously controlling the higher order terms); alternatively a geometrical approach based on Lin’s method may be employed (see e.g. [@modfiedKlausmeier]).
To show Corollary \[cor:small\_eigenvalue\_double\_limit\], we observe that in the double asymptotic limit $\tau:= \varepsilon^4 \mu m \ll 1$ and $\mu \ll 1$, the leading order expression for $\tilde{\lambda}$ becomes $$\label{eq:smallEigenvalueLimits}
\tilde{\lambda} = \frac{2 \varepsilon^4 \mu}{3} \int_0^\infty e^{-2x} \left( \tilde{f}'(x) + \tilde{g}'(x)[e^x-1]\right)\ dx + h.o.t.$$ where we used that $ u_0 = u_0^-(\mu) \rightarrow 3 $ for $ \mu \rightarrow 0 $ (see Corollary \[cor:fg\_small\] and ).
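The integral in this limit formula can be cross-checked numerically: integrating by parts (using $\tilde{f}(0) = 0$ and the decay of $\tilde{g}$), it must coincide with $\int_0^\infty \big(2\tilde{f}(x)e^{-2x} + \tilde{g}(x)(1-2e^{-x})e^{-x}\big)\,dx$. The sketch below performs this check for the sample choices $\tilde{f}(x) = x e^{-x^2}$ (odd) and $\tilde{g}(x) = e^{-x^2}$ (even); these particular functions are illustrative only.

```python
import math

def simpson(fn, a, b, n=4000):
    # composite Simpson quadrature (n must be even)
    h = (b - a) / n
    s = fn(a) + fn(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * fn(a + i * h)
    return s * h / 3

# sample admissible inhomogeneities: f odd, g even, both rapidly decaying
f  = lambda x: x * math.exp(-x * x)
fp = lambda x: (1 - 2 * x * x) * math.exp(-x * x)   # f'
g  = lambda x: math.exp(-x * x)
gp = lambda x: -2 * x * math.exp(-x * x)            # g'

# the integral appearing in the limit formula for the small eigenvalue
I_deriv = simpson(lambda x: math.exp(-2 * x) * (fp(x) + gp(x) * (math.exp(x) - 1)), 0.0, 15.0)
# equivalent form after integration by parts (uses f(0) = 0 and decay of g)
I_parts = simpson(lambda x: 2 * f(x) * math.exp(-2 * x)
                  + g(x) * (1 - 2 * math.exp(-x)) * math.exp(-x), 0.0, 15.0)
assert abs(I_deriv - I_parts) < 1e-8
```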
### Interpretation of results for ecological applications {#sec:stability_ecology}
Going back to the ecological application, we set $f(x) = h'(x)$ and $g(x) = h''(x)$. Depending on the rate of topographical variation, several different simplifications of Theorem \[theorem:point\_spectrum\_small\] can be made, which allow us to make generic statements about the stability of pulse solutions on these terrains.
First, if the topographical changes are small, i.e. when $h = \mathcal{O}(\delta)$, we can write $h(x) = \delta \tilde{h}(x)$ and then can be simplified (via integration by parts):
Let the conditions of Theorem \[theorem:point\_spectrum\_small\] be fulfilled. If $\tilde{f}(x) = \tilde{h}'(x)$ and $\tilde{g}(x) = \tilde{h}''(x)$, then becomes $$\label{eq:smallEigenvalue_heightFunction}
\underline{\lambda}_0 = \frac{2 \delta \tau }{u_0 - \tau (1 - \mu u_0)}
\left[-\mu u_0 \tilde{h}''(0) + \tilde{h}(0)(1-2\mu u_0) + \int_0^\infty \tilde{h}(x) \left( e^{-x} - 4 (1-\mu u_0)e^{-2x}\right) dx \right];$$ additionally, in the double asymptotic limit $\tau := \varepsilon^4 \mu m \ll 1$, $\mu \ll1$ this further reduces to $$\label{eq:smallEigenvalue_heightFunctionLimit}
\underline{\lambda}_0 = \frac{2}{3} \delta \tau
\left[\tilde{h}(0) + \int_0^\infty \tilde{h}(x) \left( e^{-x} - 4 e^{-2x}\right) dx.\right] + h.o.t.$$
Note that $\tilde{h}$ appears in , while it does not appear in the original PDE , where only its derivatives appear. Thus, increasing $\tilde{h}$ by an additive constant does not affect the system, and in particular should not affect . Since $\int_0^\infty \left(e^{-x} - 4 (1-\mu u_0) e^{-2x}\right)\ dx = - (1-2 \mu u_0)$ the result in is indeed not changed when adding a constant to the height function $\tilde{h}$.
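This invariance is easy to confirm numerically. The snippet below (with the sample profile $\tilde{h}(x) = e^{-x^2}$ and sample values of $\mu u_0$; both are illustrative choices) checks the weight identity and that shifting $\tilde{h}$ by a constant leaves the bracket of the small-eigenvalue formula unchanged.

```python
import math

def simpson(fn, a, b, n=4000):
    # composite Simpson quadrature (n must be even)
    h = (b - a) / n
    s = fn(a) + fn(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * fn(a + i * h)
    return s * h / 3

htilde = lambda x: math.exp(-x * x)   # sample height profile
htilde_pp0 = -2.0                     # \tilde h''(0) for this profile

def bracket(h, hpp0, q):
    # the bracket of the small-eigenvalue formula; q = mu * u0
    return (-q * hpp0 + h(0) * (1 - 2 * q)
            + simpson(lambda x: h(x) * (math.exp(-x) - 4 * (1 - q) * math.exp(-2 * x)), 0.0, 30.0))

for q in (0.0, 0.2, 0.4):
    # the weight integrates to -(1 - 2q) ...
    w = simpson(lambda x: math.exp(-x) - 4 * (1 - q) * math.exp(-2 * x), 0.0, 30.0)
    assert abs(w + (1 - 2 * q)) < 1e-6
    # ... so adding a constant c to htilde leaves the bracket unchanged
    c = 5.0
    assert abs(bracket(lambda x: htilde(x) + c, htilde_pp0, q)
               - bracket(htilde, htilde_pp0, q)) < 1e-6
```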
Second, if topographical variation happens only over long spatial scales (i.e. for terrains with weak curvature), we can write $\tilde{h}(x) = \hat{h}(\sigma x)$, where $0 < \sigma \ll 1$ to indicate the large-scale spatial variability. Hence, $\tilde{f}(x) = \sigma \hat{h}'(\sigma x) = \mathcal{O}(\sigma)$ and $\tilde{g}(x) = \sigma^2 \hat{h}''(\sigma x) = \mathcal{O}(\sigma^2)$. Because of the difference in size of $\tilde{f}$ and $\tilde{g}$, the sign of $\underline{\lambda}_0$ can be related to the sign of $\hat{h}''(0)$, i.e. to the local curvature at the location of the pulse.
Let the conditions of Theorem \[theorem:point\_spectrum\_small\] be fulfilled. If $\tilde{f}(x) = \sigma \hat{h}'(\sigma x)$ and $\tilde{g}(x) = \sigma^2 \hat{h}''(\sigma x)$ with $0 < \sigma \ll 1$, the leading order expansion of becomes $$\label{eq:smallEigenvalue_highCurvature}
\underline{\lambda}_0 = \frac{\tau \delta \sigma^2 (1- \mu u_0)}{u_0 - \tau (1- \mu u_0)} \hat{h}''(0);$$ additionally, in the double asymptotic limit $\tau := \varepsilon^4 \mu m \ll 1$, $\mu \ll 1$, this further reduces to $$\underline{\lambda}_0 = \frac{1}{3} \tau \delta \sigma^2 \hat{h}''(0) + h.o.t.$$ Furthermore, it follows that $\mbox{sgn}\ \underline{\lambda}_0 = \mbox{sgn}\ \hat{h}''(0)$, i.e. (vegetation) pulses on hilltops are stable and in valleys are unstable.
Since $|\tilde{f}'(x)| \gg |\tilde{g}'(x)|$ we can neglect the terms with $\tilde{g}'(x)$ in , thus obtaining $$\underline{\lambda}_0 = \frac{2 \tau \delta (1 - \mu u_0)}{u_0 - \tau (1-\mu u_0)} \int_0^\infty \tilde{f}'(x) e^{-2x}\ dx.$$ Substitution of $\tilde{f}'(x) = \sigma^2 \hat{h}''(\sigma x)$ and Taylor expanding $\hat{h}''$ as $\hat{h}''(\sigma x) = \hat{h}''(0) + \mathcal{O}(\sigma)$ immediately yields ; the rest of the statement follows straightforwardly.
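The weak-curvature reduction can also be checked against the full bracket of the small-eigenvalue formula: at the level of the bracket, the corollary states that it should be close to $\tfrac12 \sigma^2 (1-\mu u_0)\hat{h}''(0)$. The script below does this for a sample slowly varying terrain $\hat{h}(y) = e^{-y^2}$ and sample values $\sigma = 0.05$, $\mu u_0 = 0.3$ (all illustrative assumptions).

```python
import math

def simpson(fn, a, b, n=6000):
    # composite Simpson quadrature (n must be even)
    h = (b - a) / n
    s = fn(a) + fn(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * fn(a + i * h)
    return s * h / 3

q = 0.3        # sample value of mu * u0
sig = 0.05     # slow topographic scale sigma
hhat = lambda y: math.exp(-y * y)   # sample slowly varying terrain
hhat_pp0 = -2.0                     # \hat h''(0) for this terrain

htil = lambda x: hhat(sig * x)
htil_pp0 = sig ** 2 * hhat_pp0      # \tilde h''(0) = sigma^2 \hat h''(0)

# full bracket of the small-eigenvalue formula, evaluated for htil
full = (-q * htil_pp0 + htil(0) * (1 - 2 * q)
        + simpson(lambda x: htil(x) * (math.exp(-x) - 4 * (1 - q) * math.exp(-2 * x)), 0.0, 60.0))
# weak-curvature prediction: bracket ~ (sigma^2/2) (1 - mu u0) hhat''(0)
approx = 0.5 * sig ** 2 * (1 - q) * hhat_pp0
assert abs(full - approx) / abs(approx) < 0.1
```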
Third, if topographical variation happens over short spatial scales (i.e. for terrains with strong curvature), we can write $\tilde{h}(x) = \breve{h}\left(x / \sigma\right)$, where $0 < \sigma \ll 1$ to indicate the short spatial scales. Hence, $\tilde{f}(x) = \breve{h}'\left(x / \sigma\right)/\sigma = \mathcal{O}(1/\sigma)$ and $\tilde{g}(x) = \breve{h}''\left(x / \sigma\right)/\sigma^2 = \mathcal{O}(1/\sigma^2)$. Again, the sign of $\underline{\lambda}_0$ can be related to the sign of $\breve{h}''(0)$, though the results are now flipped:
Let the conditions of Theorem \[theorem:point\_spectrum\_small\] be fulfilled. If $\tilde{f}(x) = \breve{h}'\left(x / \sigma\right)/\sigma$ and $\tilde{g}(x) = \breve{h}''\left(x / \sigma\right)/\sigma^2$ with $0 < \sigma \ll 1$ and $\breve{h}(y), \breve{h}'(y), \breve{h}''(y) \rightarrow 0$ exponentially fast for $|y| \rightarrow \infty$, the leading (and next-leading) order expansion of becomes $$\underline{\lambda}_0 = \frac{2 \tau \delta}{u_0 - \tau (1 - \mu u_0)} \left[ \frac{- \mu u_0}{\sigma^2}\breve{h}''(0) + \left(1 - 2 \mu u_0\right) \breve{h}(0) \right];$$ additionally, in the double asymptotic limit $\tau := \varepsilon^4 \mu m \ll 1$, $\mu \ll 1$, this further reduces to $$\underline{\lambda}_0 = \frac{2}{3} \tau \delta \breve{h}(0).$$ Furthermore, it follows that $\mbox{sgn}\ \underline{\lambda}_0 = - \mbox{sgn}\ \breve{h}''(0)$ when $\mu \neq 0$, i.e. (vegetation) pulses on hilltops are unstable and in valleys are stable; and $\mbox{sgn}\ \underline{\lambda}_0 = \mbox{sgn}\ \breve{h}(0)$ when $\mu = 0$.
Substitution of $\tilde{h}(x) = \breve{h}(x/\sigma)$ and the use of the transformation $y = x / \sigma$ in yields $$\underline{\lambda}_0 = \frac{2 \delta \tau}{u_0 - \tau (1-\mu u_0)} \left[ - \frac{\mu u_0}{\sigma^2} \breve{h}''(0) + \left(1-2\mu u_0\right) \breve{h}(0) + \sigma \int_0^\infty \breve{h}(y) \left( e^{-\sigma y} - 4 (1-\mu u_0)e^{-2\sigma y}\right)\ dy \right] \, .$$ Taylor expanding the exponential functions then indicates that the integral contributes only at order $\mathcal{O}(\delta \tau \sigma)$. Hence the claimed results follow.
Thus, the corollaries in this section indicate that – under certain assumptions on the limiting behavior of the topography function $h$ – vegetation patterns concentrated on hilltops are stable if the terrain has weak curvature and unstable if the terrain has strong curvature; similarly, patterns concentrated in valleys are unstable for terrains with weak curvature, but they become stable if the terrain has strong curvature. A more in-depth inspection of this phenomenon can be found in section \[sec:explicitExamples\], where a few explicit terrain functions $h$ are studied numerically.
The effect of the small eigenvalue: movement of pulses {#sec:pulseLocationODE}
======================================================
In the previous section we found that, under certain ‘standard’ assumptions on the system’s parameters, all large eigenvalues of a homoclinic pulse solution reside to the left of the imaginary axis. Only one small eigenvalue can lead to destabilization of the pulse solution. Since this small eigenvalue is closely related to the translation invariance of the system without spatially varying coefficients, it is possible to study its effects by projecting the whole system onto the corresponding eigenspace.
This derivation enables us to reduce the full PDE dynamics of to a simpler ODE that describes the movement of the pulse’s location. Concretely, let $P$ denote the location of the center of the pulse. Then the time-evolution of $P$ is given by $$\label{eq:pulseLocationOde}
\frac{dP}{dt} = \tau \frac{1}{6} \left[ \tilde{u}_x(P^+)^2 - \tilde{u}_x(P^-)^2\right],$$ where the superscripts $\pm$ denote taking the upper respectively lower limit, $\tau := \varepsilon^4 \mu m = \frac{D a^2}{m\sqrt{m}}$ and $\tilde{u}$ solves the differential-algebraic equation $$\label{eq:DAE}
\left\{
\begin{array}{rcl}
\tilde{u}_{xx} + f(x) \tilde{u}_x + g(x) \tilde{u} + 1 - \tilde{u} &=& 0\\
\tilde{u}(P) &=& \mu u_0 \\
\tilde{u}_x(P^+) - \tilde{u}_x(P^-) &=& \frac{6}{u_0}
\end{array}\right.$$
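For the homogeneous case $f = g = 0$ the differential-algebraic system can be solved by hand: $\tilde{u}(x) = 1 - (1-\mu u_0)e^{-|x-P|}$, and the jump condition reduces to $\mu u_0^2 - u_0 + 3 = 0$. The script below (with the sample values $\mu = 0.05$, $\tau = 0.01$, chosen only for illustration) verifies this closed form and that the resulting pulse is stationary.

```python
import math

mu, tau = 0.05, 0.01   # sample parameter values
# jump condition 2(1 - mu u0) = 6/u0  <=>  mu u0^2 - u0 + 3 = 0 (f = g = 0)
u0 = (1 - math.sqrt(1 - 12 * mu)) / (2 * mu)   # branch with u0 -> 3 as mu -> 0

P = 0.0
def u_tilde(x):
    # slow solution of the differential-algebraic system for f = g = 0
    return 1 - (1 - mu * u0) * math.exp(-abs(x - P))

# pinning condition and jump condition
assert abs(u_tilde(P) - mu * u0) < 1e-12
assert abs(2 * (1 - mu * u0) - 6 / u0) < 1e-12

# ODE residual u'' + 1 - u = 0 away from the pulse (central differences)
for x in (0.5, 1.5, -2.0):
    d = 1e-4
    upp = (u_tilde(x + d) - 2 * u_tilde(x) + u_tilde(x - d)) / d ** 2
    assert abs(upp + 1 - u_tilde(x)) < 1e-6

# pulse speed: the one-sided slopes have equal magnitude, so dP/dt = 0
dPdt = (tau / 6) * ((1 - mu * u0) ** 2 - (1 - mu * u0) ** 2)
assert dPdt == 0.0
```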
We follow [@BD18] and only give a short formal derivation of this PDE-to-ODE reduction in section \[sec:pdeToOdeReduction\]. We refrain from going into the details of (proving) the validity of this reduction. Although the renormalization group approach of [@BDKTP13; @doelman2007nonlinear] for semi-strong pulse interactions has not yet been applied to systems with inhomogeneous terms, it can naturally be extended to include these effects. However, it should be noted that, so far, the results and techniques of [@BDKTP13; @doelman2007nonlinear] only cover a strongly restricted region of parameter space: the general issue of the validity of reducing semi-strong pulse interactions to finite-dimensional settings still largely remains an open question in the field – see also [@BD18]. As a consequence, we formulate the main results of this section as Propositions and only provide their formal derivations.
Using the pulse location ODE, we present in section \[sec:odeStability\] a formal scheme by which the stability of the homoclinic pulse patterns of Theorem \[sec:existenceResults\] can be determined for any functions $f$ and $g$, i.e. without the restriction on their size under which Theorem \[theorem:point\_spectrum\_small\] was obtained. In section \[sec:small\_ev\_validation\] we (formally) validate this scheme by reducing it to the setting of Theorem \[theorem:point\_spectrum\_small\], i.e. by assuming that $f,g = \mathcal{O}(\delta)$ (with $\delta \ll 1$), and showing that this indeed confirms the results of Theorem \[theorem:point\_spectrum\_small\]. Next, we study a few explicit functions in section \[sec:explicitExamples\] – focusing on what happens when the pulse solution changes stability type. Finally, we briefly consider multi-pulse dynamics in section \[sec:multiPulses\].
Formal derivation of pulse location ODE {#sec:pdeToOdeReduction}
---------------------------------------
In this section we formally derive the pulse location ODE . Mathematically, this amounts to tracking perturbations along translational eigenvalues; this approach is sometimes called the ‘collective coordinate method’. Specifically, in this section, we show
\[theorem:pdeToOdeReduction\] Let $\varepsilon = \frac{a}{m} \ll 1$, $\tau = \frac{D a^2}{m\sqrt{m}} \ll 1$ and $\mu = \frac{D m \sqrt{m}}{a^2} \leq \mathcal{O}(1)$ (w.r.t. $\varepsilon$). Let $P$ denote the location of the homoclinic pulse’s center. Then the evolution of $P$ is described by the pulse location ODE .
*Formal derivation, cf. [@BD18].* We introduce the stretched travelling-wave coordinate $$\xi = \frac{\sqrt{m}}{D}\left(x - P(t)\right) = \frac{\sqrt{m}}{D}\left(x - P(0) - \int_0^t \frac{dP}{dt}(s) ds \right),$$ scale $\frac{dP}{dt} = \frac{D a^2}{m \sqrt{m}} c(t)$ and use scalings to transform to get $$\label{eq:innerRegion}
\left\{
\begin{array}{rcl}
- \frac{a^2}{m^2} \frac{D m \sqrt{m}}{a^2} \frac{D a^2}{m \sqrt{m}} c(t) u_\xi & = & u_{\xi\xi} - \frac{a^2}{m^2} \left[ \frac{D^2m}{a^2} u - \frac{Dm\sqrt{m}}{a^2} f\left( \frac{D}{\sqrt{m}} \xi\right) u_{\xi}- \frac{D^2m}{a^2} g\left( \frac{D}{\sqrt{m}} \xi\right) u - \frac{D}{\sqrt{m}} + u v^2 \right] \\
- \frac{a^2}{m^2} c(t) v_\xi & = & v_{\xi\xi} - v + u v^2
\end{array}
\right.$$ To find the solution in the fast region $I_f = \left[ - 1 / \sqrt{\varepsilon}, 1 / \sqrt{\varepsilon}\right]$, close to the pulse location, we expand $u$ and $v$ in terms of $\varepsilon$ and look for solutions of the form $$\begin{cases}
u & = u_0 + \varepsilon^2 u_1 + \ldots \\
v & = v_0 + \varepsilon^2 v_1 + \ldots
\end{cases}$$ To leading order is given by $$\left\{
\begin{array}{rcl}
0 & = & u_0'', \\
0 & = & v_0'' - v_0 + u_0 v_0^2.
\end{array}
\right.$$ Hence we find $u_0$ to be constant and $$v_0(\xi) = \frac{3}{2} \frac{1}{u_0} \operatorname{sech}(\xi/2)^2.$$ The next order of is $$\label{eq:innerRegionSecondOrder}
\left\{
\begin{array}{rcl}
u_1'' & = & u_0 v_0^2, \\
v_1'' - v_1 +2 u_0 v_0 v_1 & = & - c(t) v_0' - v_0^2 u_1.
\end{array}
\right.$$ It is not a priori clear whether the $v$-equation is solvable; the self-adjoint operator $\mathcal{L} := \partial_\xi^2 - 1 + 2 u_0 v_0$ has a non-empty kernel, since $\mathcal{L} v_0' = 0$, and therefore the inhomogeneous $v$-equation is only solvable when the following Fredholm condition holds $$\int_{I_f} c(t) v_0'(\eta)^2 d\eta = - \int_{I_f} v_0(\eta)^2 u_1(\eta) v_0'(\eta) d \eta.$$ Upon integrating by parts twice on the right-hand side we obtain $$\int_{I_f} c(t) v_0'(\eta)^2 d\eta = - \frac{1}{3} \left[u_1'(\eta)\int_0^\eta v_0(y)^3 dy\right]_{\eta = - 1 / \sqrt{\varepsilon}}^{\eta = + 1 / \sqrt{\varepsilon}} + \frac{1}{3} \int_{I_f} u_1''(\eta) \int_0^\eta v_0(y)^3 dy d\eta + h.o.t.$$ Since $v_0$ is an even function, $u_1''$ is an even function and $\eta \mapsto \int_0^\eta v_0(y)^3 dy$ is an odd function. Therefore the last integral vanishes and we obtain $$c(t) \int_{I_f} v_0'(\eta)^2 d\eta = \frac{1}{6} \left[ u_1'\left(\frac{1}{\sqrt{\varepsilon}}\right) + u_1'\left(-\frac{1}{\sqrt{\varepsilon}}\right) \right] \int_{I_f} v_0(\eta)^3 d\eta.$$ The integrals over the fast field $I_f$ can be approximated by integrals over $\mathbb{R}$, since $v_0$ decays exponentially within fast field. Hence we find $$\label{eq:speed1}
c(t) = \frac{1}{u_0} \left[ u_1'\left(\frac{1}{\sqrt{\varepsilon}}\right) + u_1'\left(-\frac{1}{\sqrt{\varepsilon}}\right) \right].$$ Finally, it follows from the $u$-equation in that $$u_1'\left(\frac{1}{\sqrt{\varepsilon}}\right) - u_1'\left(-\frac{1}{\sqrt{\varepsilon}}\right) = \int_{I_f} u_1''(\eta) d\eta = \int_{I_f} u_0 v_0(\eta)^2 d\eta = \frac{6}{u_0} + h.o.t.$$ Combining this with we obtain $$c(t) = \frac{1}{6} \left[ u_1'\left(\frac{1}{\sqrt{\varepsilon}}\right)^2 - u_1'\left(-\frac{1}{\sqrt{\varepsilon}}\right)^2 \right] \, .$$ The values of $u_1'(\pm 1/\sqrt{\varepsilon})$ can be matched to the solutions $\tilde{u}$ in the slow fields. Careful inspection of the scalings involved reveals $u_1'(\pm 1/\sqrt{\varepsilon}) = \tilde{u}_x(P^\pm)$, where $\tilde{u}$ satisfies the differential-algebraic equation . Since $\frac{dP}{dt} = \tau c(t)$ this concludes the proof.
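The numerical values of the integrals used in this derivation are easily confirmed. With the sample value $u_0 = 3$ (an illustrative choice), the script below checks $\int v_0'^2 = 6/(5u_0^2)$, $\int v_0^3 = 36/(5u_0^3)$ and $\int u_0 v_0^2 = 6/u_0$, and hence that $\frac{1}{6}\int v_0^3 \big/ \int v_0'^2 = 1/u_0$, the constant appearing in the expression for $c(t)$.

```python
import math

def simpson(fn, a, b, n=8000):
    # composite Simpson quadrature (n must be even)
    h = (b - a) / n
    s = fn(a) + fn(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * fn(a + i * h)
    return s * h / 3

u0 = 3.0  # sample value of the (constant) leading-order u in the fast field
v0  = lambda x: 1.5 / u0 / math.cosh(x / 2) ** 2
v0p = lambda x: -1.5 / u0 * math.tanh(x / 2) / math.cosh(x / 2) ** 2

I_v0p2 = simpson(lambda x: v0p(x) ** 2, -40.0, 40.0)      # = 6/(5 u0^2)
I_v0c  = simpson(lambda x: v0(x) ** 3, -40.0, 40.0)       # = 36/(5 u0^3)
I_v0s  = simpson(lambda x: u0 * v0(x) ** 2, -40.0, 40.0)  # = 6/u0

assert abs(I_v0p2 - 6 / (5 * u0 ** 2)) < 1e-8
assert abs(I_v0c - 36 / (5 * u0 ** 3)) < 1e-8
assert abs(I_v0s - 6 / u0) < 1e-8
# hence (1/6) I_v0c / I_v0p2 = 1/u0, turning the Fredholm condition into eq:speed1
assert abs(I_v0c / (6 * I_v0p2) - 1 / u0) < 1e-8
```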
Note the link with the notation in section \[sec:existence\]: $u_1' = \hat{p}$. See also Remark \[remark:scaling\_p\_phat\].
Stability of fixed points of pulse location ODE {#sec:odeStability}
------------------------------------------------
The pulse location ODE describes the movement of a pulse over time. In general, for generic functions $f$ and $g$, the differential-algebraic system cannot be solved in closed form, so the pulse location ODE cannot be expressed more explicitly and can only be evaluated numerically – for instance using the numerical scheme developed in [@BD18]; the same holds for its fixed points. However, when $f$ and $g$ obey the symmetry assumptions (A2), one readily obtains that $P_* = 0$ is a fixed point. The stability of fixed points can be determined via direct numerical simulation, but this can be rather time-intensive and is prone to errors close to bifurcation points. Instead, it is better to first use asymptotic expansions to derive a stability condition that can be checked (numerically) more easily.
\[theorem:stabilityInOde\] Let the conditions of Proposition \[theorem:pdeToOdeReduction\] be satisfied, let $\mu \ll 1$ and let $P_*$ be a fixed point of . Then, the eigenvalue $\underline{\lambda}$ – where $\underline{\lambda} = m \lambda$, see – corresponding to the pulse solution with a pulse located at the fixed point $P_*$ is given by $$\label{eq:eigenvalueFixedPointOde}
\underline{\lambda} = \frac{\tau}{6} \left\{ 2 \tilde{u}'(P_*^+)\left[ \tilde{u}''(P_*^+)+\tilde{w}'(P_*^+)\right] - 2 \tilde{u}'(P_*^-) \left[ \tilde{u}''(P_*^-) + \tilde{w}'(P_*^-) \right] \right\}.$$ Here $\tilde{u}$ and $\tilde{w}$ solve the coupled ODE system $$\label{eq:stabilityODEs}
\left\{
\begin{array}{rcl}
0 & = & \tilde{u}'' + f \tilde{u}' + g \tilde{u} - \tilde{u} + 1, \\
0 & = & \tilde{w}'' + f \tilde{w}' + g \tilde{w} - \tilde{w}, \\
\tilde{u}(P_*) & = & 0, \\
\tilde{w}(P_*^\pm) & = & - \tilde{u}'(P_*^\pm).
\end{array}
\right.$$
If $f$ and $g$ satisfy the symmetry assumption (A2) and $P_*$ is located at the point of symmetry, i.e. $P_* = 0$, then symmetry forces $\tilde{u}'(P_*^+) = - \tilde{u}'(P_*^-)$, $\tilde{u}''(P_*^+) = \tilde{u}''(P_*^-)$ and $\tilde{w}'(P_*^+) = \tilde{w}'(P_*^-)$. Therefore, reduces to $$\label{eq:eigenvalueFixedPointOdeSymmetric}
\underline{\lambda} = \frac{2\tau}{3} \tilde{u}'(P_*^+)\left[ \tilde{u}''(P_*^+)+\tilde{w}'(P_*^+)\right].$$
The condition $\mu \ll 1$ in Theorem is not strictly necessary. When this condition holds, the differential-algebraic system simplifies to a normal boundary value problem, since $\tilde{u}(P) = 0$ to leading order. However, when $\mu = \mathcal{O}(1)$ (w.r.t. $\varepsilon$) the procedure explained below is still applicable and one can derive a similar result; only this time, $u_0$ in needs to be expanded as well and $\tilde{u}$ and $\tilde{w}$ satisfy the coupled differential-algebraic system $$\left\{
\begin{array}{rcl}
0 & = & \tilde{u}'' + f \tilde{u}' + g \tilde{u} - \tilde{u} + 1, \\
0 & = & \tilde{w}'' + f \tilde{w}' + g \tilde{w} - \tilde{w}, \\
\tilde{u}(P_*) & = & \mu u_{0}, \\
\tilde{w}(P_*^\pm) & = & - \tilde{u}'(P_*^\pm) + \mu w_{0},\\
\tilde{u}'(P_*^+) - \tilde{u}'(P_*^-) & = & \frac{6}{u_{0}}, \\
\tilde{w}'(P_*^+) - \tilde{w}'(P_*^-) & = & \frac{6 w_{0}}{u_{0}^2} + \tilde{u}''(P_*^-) - \tilde{u}''(P_*^+).
\end{array}
\right.$$
*Formal derivation.* To find the eigenvalue $\underline{\lambda}$ we need to evaluate the derivative of the right-hand side of at the fixed point $P_*$. That is, $$\begin{aligned}
\underline{\lambda} & = \frac{d}{dP} \left[ \frac{\tau}{6} \left( \tilde{u}'(P^+)^2 - \tilde{u}'(P^-)^2\right)\right]_{P = P_*} \nonumber\\&= \frac{\tau}{6} \left[ 2 \tilde{u}'(P_*^+) \left(\frac{d}{dP}\tilde{u}'(P^+)\right)_{P=P_*} - 2 \tilde{u}'(P_*^-) \left( \frac{d}{dP} \tilde{u}'(P^-)\right)_{P=P_*}\right].\label{eq:ODEeigenvalue1}\end{aligned}$$ By definition of the derivative $$\label{eq:derivativeOfUx}
\frac{d}{dP} \left[ \tilde{u}'(P^\pm)\right] = \lim_{\phi \rightarrow 0} \frac{\tilde{u}_\phi'( (P+\phi)^\pm) - \tilde{u}'(P^\pm)}{\phi},$$ where $\tilde{u}_\phi$ solves with every $P$ replaced by $P+\phi$. For small $\phi$, $\tilde{u}_\phi$ can be related to $\tilde{u}$ via a regular expansion. Specifically, let $|\phi| \ll 1$, and expand $\tilde{u}_\phi = \tilde{u} + \phi \tilde{w}$. Substitution in and careful bookkeeping readily shows that $\tilde{u}$ and $\tilde{w}$ satisfy . Finally, upon substituting the expansion for $\tilde{u}_\phi$ into and the use of a Taylor expansion we obtain $$\begin{aligned}
\frac{d}{dP} \left[ \tilde{u}'(P^\pm)\right] &= \lim_{\phi \rightarrow 0} \frac{\tilde{u}'((P+\phi)^\pm) + \phi \tilde{w}'((P+\phi)^\pm) - \tilde{u}'(P^\pm)}{\phi} = \lim_{\phi \rightarrow 0} \frac{ \tilde{u}'(P^\pm) + \phi \tilde{u}''(P^\pm) + \phi \tilde{w}'(P^\pm) - \tilde{u}'(P^\pm)}{\phi} \\ & = \tilde{u}''(P^\pm) + \tilde{w}'(P^\pm).\end{aligned}$$ Finally, substitution into gives .
Small eigenvalue in case of small spatially varying coefficients {#sec:small_ev_validation}
----------------------------------------------------------------
As an example of the use of Proposition \[theorem:stabilityInOde\], in this section we employ it to give another proof of Theorem \[theorem:point\_spectrum\_small\] in the limit $\mu \ll 1$. This not only shows the applicability of Proposition \[theorem:stabilityInOde\], but especially the relevance of the pulse location ODE . Moreover, it also provides a confirmation of the validity of the formal results in this section.
*Alternative formal derivation of Theorem \[theorem:point\_spectrum\_small\] for $\mu \ll 1$.* Since $f$ and $g$ satisfy the symmetry assumption (A2), the eigenvalue $\underline{\lambda}$ is given by . Therefore, it suffices to only look at the solutions $\tilde{u}$ and $\tilde{w}$ to for $x > 0$. Since $f,g = \mathcal{O}(\delta)$ with $\delta \ll 1$, we use regular expansions for $\tilde{u}$ and $\tilde{w}$; that is, we set $$\begin{aligned}
\tilde{u} & = \tilde{u}_{0} + \delta \tilde{u}_{1} + \ldots, \\
\tilde{w} & = \tilde{w}_{0} + \delta \tilde{w}_{1} + \ldots.\end{aligned}$$ Substitution in gives at leading order $$\left\{
\begin{array}{rcl}
0 & = & \tilde{u}_{0}'' - \tilde{u}_{0} + 1, \\
0 & = & \tilde{w}_{0}'' - \tilde{w}_{0}, \\
\tilde{u}_{0}(0) & = & 0, \\
\tilde{w}_{0}(0^+) & = & - \tilde{u}_{0}'(0^+);
\end{array}
\right.$$ and at the next order, $\mathcal{O}(\delta)$, we find $$\left\{
\begin{array}{rcl}
\tilde{u}_{1}'' - \tilde{u}_{1} & = & - \tilde{f} \tilde{u}_{0}' - \tilde{g} \tilde{u}_{0}, \\
\tilde{w}_{1}'' - \tilde{w}_{1} & = & - \tilde{f} \tilde{w}_{0}' - \tilde{g} \tilde{w}_{0}, \\
\tilde{u}_{1}(0) & = & 0, \\
\tilde{w}_{1}(0^+) & = & - \tilde{u}_{1}'(0^+).
\end{array}
\right.$$ Using the usual techniques to solve these ODEs, one can verify that $$\begin{aligned}
\tilde{u}_{0}(x) & = 1 - e^{-x} \\
\tilde{u}_{1}(x) & = \frac{1}{2} e^x \int_x^\infty F(z)e^{-z}dz - \frac{1}{2} e^{-x} \int_0^\infty F(z) e^{-z} dz + \frac{1}{2} e^{-x} \int_0^x F(z) e^{z} dz \\
\tilde{w}_{0}(x) & = - e^{-x} \\
\tilde{w}_{1}(x) & = \frac{1}{2}e^{x} \int_x^\infty G(z)e^{-z}dz - \frac{1}{2}e^{-x} \int_0^\infty G(z)e^{-z} dz + \frac{1}{2} e^{-x}\int_0^x G(z) e^{z} dz - e^{-x} \int_0^\infty F(z)e^{-z} dz\end{aligned}$$ where $$\begin{aligned}
F(z) &:= \tilde{f}(z)e^{-z} + \tilde{g}(z)(1-e^{-z}), \\
G(z) &:= \tilde{f}(z)e^{-z} - \tilde{g}(z)e^{-z}.\end{aligned}$$ Substitution of these expansions in then yields $$\begin{aligned}
\underline{\lambda}
& = \frac{2}{3} \tau \left[ \tilde{u}_{0}'(0) + \delta \tilde{u}_{1}'(0)\right]\left[ \tilde{u}_{0}''(0) + \delta \tilde{u}_{1}''(0) + \tilde{w}_{0}'(0) + \delta \tilde{w}_{1}'(0) \right] + \mathcal{O}(\delta^2) \\
& = \frac{2}{3} \tau \left[ 1 + \frac{\delta}{2} \int_0^\infty F(z)e^{-z} dz \right] \left[-1 + 1 + \delta \int_0^\infty \left( F(z) + G(z) \right) e^{-z} dz\right] + \mathcal{O}(\delta^2) \\
& = \frac{2}{3} \delta \tau \int_0^\infty \left( F(z) + G(z) \right) e^{-z} dz + \mathcal{O}(\delta^2) \\
& = \frac{2}{3} \delta \tau \int_0^\infty \left( 2 \tilde{f}(z)e^{-2z} + \tilde{g}(z)[1-2e^{-z}]e^{-z} \right) dz + \mathcal{O}(\delta^2)\\
& = \frac{2}{3} \delta \tau \int_0^\infty \left( \tilde{f}'(z)e^{-2z} + \tilde{g}'(z)(1-e^{-z})e^{-z} \right) dz + \mathcal{O}(\delta^2).\end{aligned}$$ Finally, we note that the eigenvalue has been rescaled as $\underline{\lambda} = m \lambda$ in Theorem \[theorem:point\_spectrum\]. Since $\tau / m = \varepsilon^4 \mu$ and $u_0 = 3$ in the limit $\mu \ll 1$, we have indeed recovered , i.e. Theorem \[theorem:point\_spectrum\_small\], in the case $\mu \ll 1$.
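The last equality above is an integration by parts, using that $\tilde{f}$ vanishes at the origin (as follows from the symmetry assumption (A2)) and that $\tilde{f}$, $\tilde{g}$ decay at infinity. This equivalence can be checked numerically; a small sketch with an arbitrary (hypothetical) test pair $\tilde{f}$, $\tilde{g}$, not taken from the paper:

```python
import numpy as np
from scipy.integrate import quad

# hypothetical test functions: f(0) = 0, both decay at infinity
f = lambda z: z * np.exp(-z**2)
fp = lambda z: (1 - 2*z**2) * np.exp(-z**2)   # f'
g = lambda z: np.exp(-z**2)
gp = lambda z: -2*z*np.exp(-z**2)             # g'

# penultimate form: int_0^inf (2 f e^{-2z} + g (1 - 2 e^{-z}) e^{-z}) dz
I1 = quad(lambda z: 2*f(z)*np.exp(-2*z)
          + g(z)*(1 - 2*np.exp(-z))*np.exp(-z), 0, np.inf)[0]
# final form:       int_0^inf (f' e^{-2z} + g' (1 - e^{-z}) e^{-z}) dz
I2 = quad(lambda z: fp(z)*np.exp(-2*z)
          + gp(z)*(1 - np.exp(-z))*np.exp(-z), 0, np.inf)[0]

assert abs(I1 - I2) < 1e-8  # the two expressions agree
```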
Examples of stationary single-pulse solutions {#sec:explicitExamples}
---------------------------------------------
In this section, we study a few explicit functions $f$ and $g$; in all examples we specify a function $h$ and take $f = h'$, $g = h''$. Not all functions we consider here limit to $0$ as $|x| \rightarrow \infty$; that is, some violate assumption (A4). Therefore, these examples also form an outlook, illustrating how the results in this paper are expected to extend beyond the imposed assumptions on functions $f$ and $g$. Specifically, we consider the following four examples:
- $h(x) = A e^{-Bx^2}$, ($A \in \mathbb{R}$, $B > 0$);
- $h(x) = A \operatorname{sech}(Bx)$, ($A \in \mathbb{R}$, $B > 0$);
- $h(x) = A \cos(Bx)$, ($A \in \mathbb{R}$, $B > 0$);
- $h(x) = -2 \ln(\cosh(\beta x))$, ($\beta > 0$).
Note that $\lim_{|x| \rightarrow \infty} f(x),g(x) = 0$ in cases (i)–(ii), which therefore satisfy assumption (A4). In case (iii), $f$ and $g$ are periodic and do not decay for $|x| \gg 1$; in case (iv), $f$ has well-defined (though non-zero) limits for $|x| \rightarrow \infty$, while $g$ does decay to $0$.
Note that $A > 0$ in (i)–(ii) corresponds to ‘hill-like’ topographies and $A < 0$ to ‘valley-like’ topographies. The value of $B$ in (i)–(iii) is a measure of the curvature of the terrain; the higher the value of $B$, the stronger the curvature of the terrain modeled by the function $h$.
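The limiting behaviour of $f = h'$ and $g = h''$ for these topographies can be verified symbolically; a brief sketch (treating $A$, $B$, $\beta$ as positive parameters):

```python
import sympy as sp

x = sp.symbols('x', real=True)
A, B, beta = sp.symbols('A B beta', positive=True)

def f_g(h):
    """Return f = h' and g = h'' for a given topography h."""
    return sp.diff(h, x), sp.diff(h, x, 2)

# case (i): h = A exp(-B x^2); both f and g decay, so (A4) holds
f1, g1 = f_g(A*sp.exp(-B*x**2))
assert sp.limit(f1, x, sp.oo) == 0 and sp.limit(g1, x, sp.oo) == 0

# case (iv): h = -2 ln(cosh(beta x)); f tends to a non-zero constant,
# while g still decays to zero
f4, g4 = f_g(-2*sp.log(sp.cosh(beta*x)))
assert sp.limit(f4, x, sp.oo) == -2*beta
assert sp.limit(g4, x, sp.oo) == 0
```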
Using the pulse location ODE and Proposition \[theorem:stabilityInOde\], we have tracked the fixed points and their stability for these examples in the limit $\mu \ll 1$, using numerical continuation methods. The resulting bifurcation diagrams for (i) are shown in Figure \[fig:numericalExponential\](a-b), for (ii) in Figure \[fig:numericalSech\](a-b) and for (iii) in Figure \[fig:numericalCos\](a). In all of these cases, we find fixed points at the point of symmetry, corroborating the results in section \[sec:existence\]. For small $B$ values – i.e. for weak curvature topographies – the stability of these fixed points is determined by the sign of $A$: $A > 0$ leads to stable and $A < 0$ to unstable fixed points – corroborating previous intuition indicating that pulses migrate in the uphill direction [@siteur2014beyond; @SD17; @BD18]. However, for sufficiently large values of $B$ – i.e. topographies with strong curvature – the stability of those fixed points changes through a pitchfork bifurcation and new behavior is observed. In case (iii) this even leads to the possibility that both the tops ($BP = 0$) and the valleys ($BP = \pm \pi$) form stable fixed points of . The bifurcation value of the pitchfork bifurcation, $B_c(A)$, depends on the value of $A$. Using numerical continuation methods we also tracked this value; the results are shown in Figures \[fig:numericalExponential\](c), \[fig:numericalSech\](c) and \[fig:numericalCos\](b) (for topographies (i), (ii) and (iii) respectively).
Theorem \[theorem:point\_spectrum\_small\], and in particular and , provide a leading order analytic expression for $B_c(0)$. Evaluating these yields $B_c(0) \approx 0.75$ (i), $B_c(0) \approx 1.23$ (ii) and $B_c(0) = \sqrt{2}$ (iii), in agreement with the numerical continuation results, which indicate $B_c(0) \approx 0.75$ (i), $B_c(0) \approx 1.24$ (ii) and $B_c(0) \approx 1.43$ (iii). Note that $A = 0$ is, indeed, just the flat terrain $h(x) \equiv 0$; hence these results for $A = 0$ should be interpreted as applying to ‘small’ topographical functions only, where $A$ is asymptotically small.
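These critical values can be reproduced by locating the zero crossing of the leading-order small-eigenvalue integral derived in the previous subsection, with $f = h'$, $g = h''$ and the amplitude $A$ scaled out. A numerical sketch (the root-search brackets are chosen by hand):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def critical_B(f, g, bracket):
    """Zero crossing of int_0^inf (2 f e^{-2z} + g (1 - 2 e^{-z}) e^{-z}) dz."""
    def I(B):
        integrand = lambda z: (2*f(B, z)*np.exp(-2*z)
                               + g(B, z)*(1 - 2*np.exp(-z))*np.exp(-z))
        return quad(integrand, 0, np.inf, limit=200)[0]
    return brentq(I, *bracket)

sech = lambda x: 1.0/np.cosh(x)

# (i) h = exp(-B x^2):  f = h', g = h''
Bi = critical_B(lambda B, z: -2*B*z*np.exp(-B*z*z),
                lambda B, z: (4*B*B*z*z - 2*B)*np.exp(-B*z*z), (0.3, 1.5))
# (ii) h = sech(B x)
Bii = critical_B(lambda B, z: -B*sech(B*z)*np.tanh(B*z),
                 lambda B, z: B*B*sech(B*z)*(np.tanh(B*z)**2 - sech(B*z)**2),
                 (0.5, 2.5))
# (iii) h = cos(B x)
Biii = critical_B(lambda B, z: -B*np.sin(B*z),
                  lambda B, z: -B*B*np.cos(B*z), (0.5, 2.0))
# Bi ~ 0.75, Bii ~ 1.23, Biii = sqrt(2)
```

For case (iii) the root can be found in closed form, $B_c(0) = \sqrt{2}$, which the numerics recover to high accuracy.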
Moreover, these observations are validated by numerical simulation of the full PDE – see Figure \[fig:numericalExponential\](d-g) for (i), Figure \[fig:numericalSech\](d-g) for (ii) and Figure \[fig:numericalCos\](c-f) for (iii). Here, we observe the change in stability of the fixed points and, for well-chosen parameter values, these simulations show convergence to fixed points not located at the point of symmetry. Note also that in the case of periodic topography (i.e. case (iii)), there indeed is a region of $B$-values for which both a pulse at the top of a hill and one at the bottom of a valley are stable (for the same $B$ value). Thus, we are led to conclude that a pitchfork bifurcation occurs at the critical values $B_c(0)$. Simulations indicate that these bifurcations persist even when the asymptotic limit $\mu \ll 1$ does not hold.
For the last function, (iv), it is possible to derive the pulse location ODE explicitly, since can be solved explicitly – see Corollary \[cor:h\_example\]. Using the expressions given in Corollary \[cor:h\_example\], a straightforward computation reduces to $$\frac{dP}{dt} = \frac{\tau}{6} \left[ \left( \cosh(\beta P) \mathcal{I}_1(P)\right)^2 - \left(\cosh(\beta P) \mathcal{I}_2(P)\right)^2\right],\label{eq:pulseLocationOdeMCB}$$ where $$\mathcal{I}_1(P) := \int_P^\infty e^{r(P-z)} \operatorname{sech}(\beta z)\ dz; \qquad
\mathcal{I}_2(P) := \int_{-\infty}^P e^{-r(P-z)} \operatorname{sech}(\beta z)\ dz.$$ Thus, a point $P_*$ is a fixed point if and only if $\mathcal{I}_1(P_*) = \mathcal{I}_2(P_*)$. Straightforward inspection reveals that $P_* = 0$ therefore is the unique fixed point in case (iv) for all values of $\beta > 0$. By Proposition \[theorem:stabilityInOde\] and equation the corresponding (small) eigenvalue $\underline{\lambda}$ can be approximated by $$\underline{\lambda} = \frac{2\tau}{3} \mathcal{I}_1(0) \left( r \mathcal{I}_1(0) - 1\right).$$ Upon noting that $$r \mathcal{I}_1(0) - 1 = -\beta \int_0^\infty \operatorname{sech}(\beta z) \tanh(\beta z) e^{-rz}\ dz < 0,$$ it is clear that $\underline{\lambda} < 0$. Hence, $P_* = 0$ is the only fixed point of in case (iv), which is (globally) stable – for all $\beta > 0$. Direct PDE simulations verify this – even when the asymptotic limit $\mu \ll 1$ does not hold – see Figure \[fig:numericalMCB\].
![Direct numerical PDE simulation for $h(x) = -2 \ln(\cosh(\beta x))$ for $\beta = 1$ along with a plot of the function $h(x)$. In the PDE simulation we have used the parameters $a = 0.5, m = 0.45, D = 0.01$ and taken $x \in [-30,30]$.[]{data-label="fig:numericalMCB"}](figures/NumericalMCB "fig:"){width="\textwidth"}
Stationary multi-pulse solutions {#sec:multiPulses}
--------------------------------
The focus in this article has been on single pulse solutions to . As a short encore we briefly discuss the possibility of stationary multi-pulse solutions – i.e. solutions with multiple fast excursions. The movement of these solutions can be captured in an ODE much akin to \[eq:pulseLocationOde\]. Specifically, let $P_1,\ldots,P_N$ denote the location of $N$ pulses. Then their movement is described by the ODE $$\label{eq:NPulseLocationODE}
\frac{dP_j}{dt} = \frac{\tau}{6} \left[ \tilde{u}_x(P_j^+)^2 - \tilde{u}_x(P_j^-)^2 \right], \qquad (j = 1,\ldots,N)$$ where $\tilde{u}$ satisfies the differential-algebraic system $$\label{eq:DAENPulse}
\left\{
\begin{array}{rcll}
\tilde{u}_{xx} + f(x) \tilde{u}_x + g(x) \tilde{u} + 1 - \tilde{u} &=& 0\\
\tilde{u}(P_j) &=& \mu u_{0j} & (j=1,\ldots,N)\\
\tilde{u}_x(P_j^+) - \tilde{u}_x(P_j^-) &=& \frac{6}{u_{0j}}&(j=1,\ldots,N)
\end{array}\right.$$ The derivation is similar to that of Proposition \[theorem:stabilityInOde\]; we omit the details here and refer the interested reader to [@BD18] for a full coverage.
In the case of constant coefficients $f,g\equiv 0$, it is well known that stationary multi-pulse solutions do not exist [@dek1siam; @BD18]. In fact, from one can verify that in $2$-pulse solutions the pulses typically move away from each other with a speed proportional to $e^{-\Delta P}$, where $\Delta P := P_2 - P_1$ is the distance between the pulses – see [@dek1siam; @BD18].
However, the non-autonomous terms $f$ and $g$ affect the movement speed and can cancel this repulsive movement. Therefore stationary pulse solutions do exist in for well-chosen $f$ and $g$. In Figure \[fig:numericalMultiPulses\] we show several numerical examples of (stable) stationary multi-pulse solutions for various choices of $f$ and $g$.
\[remark:findingFixedPointsNPulses\] The spatially varying $f$ and $g$ have an $\mathcal{O}(f,g)$ effect on the movement speed of the pulses. Finding fixed points of – i.e. finding stationary multi-pulse solutions to – thus boils down to balancing two effects of different size. In particular, if $f,g = \mathcal{O}(\delta)$, only multi-pulse solutions with $\Delta P = \mathcal{O}\left( - \ln(\delta)\right) \gg 1$ exist. In this case, the existence of stationary multi-pulse solutions can be established rigorously by asymptotic analysis and the methods of geometric singular perturbation theory.
We do not present a full analysis of the spectrum of (evolving) multi-pulse solutions here; they can be stable or unstable depending on the parameter values – similar to their one-pulse counterparts. A description of how to find the spectrum of multi-pulse solutions can be found in [@BD18].
For generic functions $f$ and $g$ it is, at the moment, not possible to prove existence of stationary multi-pulse solutions (however, see Remark \[remark:findingFixedPointsNPulses\] for the case of small $f$, $g$). We do remark however that stationary multi-pulse solutions can be constructed for $f$ and $g$ such that can be solved explicitly, as illustrated by the following proposition.
\[theorem:twoPulseSolution\] Let $h(x) = -2 \ln \cosh(\beta x)$, $\beta > 0$, $f = h'$, $g = h''$ and let $\mu \ll 1$. Then there exists a $P_* > 0$ such that admits a stationary symmetric two-pulse solution with pulses at $P_1 = -P_*$ and $P_2 = P_*$.
*Formal derivation.* By symmetry of the desired two-pulse solution, we may set $P_2 = P$, $P_1 = - P$. Moreover, necessarily $\tilde{u}'(0) = 0$. Since $\mu \ll 1$, to leading order we have $\tilde{u}(P) = \tilde{u}(-P) = 0$. Therefore $\tilde{u}$ is given to leading order by $$\tilde{u}(x) =
\begin{cases}
\hat{u}_b(x) - \frac{\hat{u}_b(-P)}{\hat{u}_-(-P)} \hat{u}_-(x), & x <- P,\\
\hat{u}_b(x) - \frac{\hat{u}_b(P)}{\hat{u}_+(P)+\hat{u}_-(P)}\left(\hat{u}_+(x)+\hat{u}_-(x)\right), & -P < x < P;\\
\hat{u}_b(x) - \frac{\hat{u}_b(P)}{\hat{u}_+(P)} \hat{u}_+(x), & x > P;
\end{cases}$$ where $\hat{u}_\pm$ and $\hat{u}_b$ are as in Corollary \[cor:h\_example\]. To have stationary pulse solutions, by we need to have $$\label{eq:twoPulsePosition}
\mathcal{T}(P) := \hat{u}_b'(P) - \hat{u}_b(P) \left[ \frac{\sqrt{1+\beta^2}}{2} \left( \tanh(\sqrt{1+\beta^2}P)-1\right) + \beta \tanh(\beta P)\right] = 0.$$ Upon noting that $$\mathcal{T}(0) = \frac{1}{2} \int_0^\infty e^{-\sqrt{1+\beta^2}z}\operatorname{sech}(\beta z)\ dz > 0,$$ and, since $\lim_{P \rightarrow \infty} \hat{u}_b(P) = 1$ and $\lim_{P \rightarrow \infty} \hat{u}_b'(P) = 0$, $$\lim_{P \rightarrow \infty} \mathcal{T}(P) = - \beta < 0,$$ continuity of $\mathcal{T}$ guarantees the existence of $P_* > 0$ as claimed.
This result can be established rigorously by geometric singular perturbation theory, using the methods detailed in section \[sec:existence\]. We refrain from giving the details of this procedure.
Discussion {#sec:discussion}
==========
In this paper, we studied pulse solutions in a reaction-advection-diffusion system with spatially varying coefficients. The existence of stationary pulse solutions at a point of symmetry was established by combining the usual techniques from geometric singular perturbation theory with the tools from the theory of exponential dichotomies. The latter has been used to generate a saddle-like structure in the slow subsystem, and to obtain bounds on the stable/unstable manifolds of this subsystem. These techniques have also been used to determine the spectral stability of these pulse solutions. None of these concepts or ideas are model-dependent and therefore could be used in a wider variety of models, including Gierer-Meinhardt type models.
Analysis of the spectrum associated with these pulse solutions showed that ‘large’ eigenvalues can be bounded to the stable half-plane, under conditions similar to the usual, constant coefficient case. Although we did not focus on the dynamics of solutions when a large eigenvalue crosses the imaginary axis, simulations show the usual pulse annihilation and pulse splitting phenomena. However, the introduction of spatially varying coefficients does have a significant effect on the so-called ‘small’ eigenvalues (close to $\lambda = 0$) because of the break-down of the translation invariance in the system. Therefore, well-chosen $f$ and $g$ can either stabilize or destabilize solutions. When the small eigenvalue is in the unstable half-plane, the pulse solution is unstable and as a result its *position* changes. In some cases, this in turn can subsequently lead to a pulse annihilation or a pulse splitting [@BD18]. We expect that a careful tuning of $f$ and $g$ can either prevent or force these subsequent bifurcations, which may have a relevance in the maintenance of vegetation patterns in semi-arid climates.
The small eigenvalues were studied in more depth in the case of $f = h'$, $g = h''$ (where $h$ is used to model the topography of a dryland ecosystem). Here, we were able to link the stability of (stationary) pulse solutions to the curvature of $h$. If the curvature is weak, the pulse is stable if $h''(0) < 0$ and unstable if $h''(0) > 0$; for strong curvature the opposite is true: the pulse is stable if $h''(0) > 0$ and unstable if $h''(0) < 0$. We found that this change in stability typically happens via a pitchfork bifurcation, and showed that the associated parameter combinations can be obtained numerically. However, we did not consider a fully general class of functions $f$ and $g$, and we do not know in which way these results generalize to other functions $f$ and $g$ – although for choices of $f$ and $g$ for which does not possess the symmetry $(x,u) \rightarrow (-x,u)$ (i.e. when assumption (A2) does not hold), the pitchfork bifurcation will break down. A precise treatment of such generic functions could be the topic of subsequent work.
Moreover, in the case of spatially varying coefficients, the system can also possess stationary multi-pulse solutions – i.e. solutions that have multiple fast excursions. When $f, g \equiv 0$, these solutions do not exist. Because the spatially varying coefficients break the translation invariance of the system, these multi-pulse solutions can exist – for well-chosen functions $f$ and $g$. In this article we gave numerical evidence for this and showed their existence for a specific choice of functions. We do not think their existence can be proven in as much generality as the existence of stationary one-pulse solutions – certainly, the bounds used in this paper, provided by the theory of exponential dichotomies, are not sufficient in the regions between pulses. For sufficiently small $f$ and $g$, an asymptotic analysis can be developed to overcome this issue, although the distance between subsequent pulses then becomes asymptotically large and the asymptotic analysis needs to be done with great care to keep track of the right scalings; this is a topic of ongoing research.
Finally, the extended Klausmeier model studied in this paper has its application in ecology, where it is used to model dryland ecosystems. The pulse solutions studied in this model correspond to vegetation ‘patches’ that are typically found in those ecosystems. Naturally, the results in this paper can therefore be used for this application. Specifically, the treatment of a spatially varying height function $h$ is new and is inherently more realistic than taking a constant topography (or a constantly sloped topography) as has been done in the past (see e.g. [@siteur2014beyond; @Bastiaansens2018; @klausmeier1999; @modfiedKlausmeier]). Typically, the constant coefficient models exhibit pulses that only move uphill. However, as illustrated with numerics, we have shown that a varying topography can lead to both uphill *and* downhill movement of pulses. This aligns better with field measurements, in which both uphill and downhill movement are observed – even within the same general region [@dunkerley2014; @Bastiaansens2018]. In this regard, the study in this paper can be seen as a first step towards better understanding the role of topographic variability in pattern formation.
Acknowledgements {#acknowledgements .unnumbered}
================
We would like to thank Marco Wolters for his exploratory (bachelor) research on the migration of vegetation pulses on periodic topographies. This work was funded by NWO’s Mathematics of Planet Earth program.
[^1]: Mathematical Institute, Leiden University, 2300 RA Leiden, The Netherlands (r.bastiaansen@math.leidenuniv.nl, m.chirilus-bruckner@math.leidenuniv.nl, doelman@math.leidenuniv.nl).
---
abstract: 'The Doppler wobble induced by the extra-solar planet HD 134987b was first detected by data from the Keck Telescope nearly a decade ago, and was subsequently confirmed by data from the Anglo-Australian Telescope. However, as more data have been acquired for this star over the years since, the quality of a single Keplerian fit to that data has been getting steadily worse. The best-fit single Keplerian to the 138 Keck and AAT observations now in hand has a root-mean-square (RMS) scatter of 6.6 m/s. This is significantly in excess of both the instrumental precision achieved by the Keck and Anglo-Australian Planet Searches for stars of this magnitude, and of the jitter expected for a star with the properties of HD134987. However, a double Keplerian (i.e. dual planet) fit delivers a significantly reduced RMS of 3.3 m/s. The best-fit double planet solution has minimum planet masses of 1.59 and 0.82 [$M_{\rm Jup}$]{}, orbital periods of 258 and 5000 d, and eccentricities of 0.23 and 0.12 respectively. We find evidence that activity-induced jitter is a significant factor in our fits and do not find evidence for asteroseismological p-modes. We also present seven years of photometry at a typical precision of 0.003 mag with the T8 0.8 m automatic photometric telescope at Fairborn Observatory. These observations do not detect photometric variability and support the inference that the detected radial-velocity periods are due to planetary mass companions rather than due to photospheric spots and plages.'
author:
- |
\
\
$^1$Centre for Astrophysics Research, University of Hertfordshire, College Lane, Hatfield, Herts AL10 9AB, UK\
$^2$Carnegie Institution of Washington, Department of Terrestrial Magnetism, 5241 Broad Branch Rd NW, Washington, DC 20015-1305, USA\
$^3$Department of Astrophysics, School of Physics, University of NSW, 2052, Australia\
$^4$Anglo-Australian Observatory, PO Box 296, Epping 1710, Australia\
$^5$ Center of Excellence in Information Systems, Tennessee State University, 3500 John A. Merritt Blvd., Box 9501, Nashville, TN 37209\
$^6$UCO/Lick Observatory, Department of Astronomy and Astrophysics, University of California at Santa Cruz, Santa Cruz, CA 95064, USA\
$^7$Faculty of Sciences, University of Southern Queensland, Toowoomba, QLD 4350, Australia\
$^8$Department of Astronomy, Universidad de Chile, Casilla Postal 36D, Santiago, Chile\
title: 'A long-period planet orbiting a nearby Sun-like star'
---
planetary systems - stars: individual (HD 134987)
Introduction
============
Of the more than 300 nearby stars known to harbor one or more planets, 38 are multiple-planet systems, and some show radial velocity residuals indicative of additional companions (e.g., Wright et al. 2007). A number of known extrasolar planets (hereafter, shortened to exoplanets) are becoming suitable for increasingly powerful follow-up techniques. Primarily, this has been led by the discovery of transiting objects, though other observations are becoming rewarding. Notably, observations of short-period exoplanets have enabled atmospheric abundance studies (e.g., Swain et al. 2009) and the detection of phase variations in flux (e.g., Knutson et al. 2007), and high-resolution spectroscopic observations may soon allow for direct spectroscopic detection (e.g., Barnes et al. 2008). Recent leaps in image processing (e.g., Marois et al. 2008) have led to companion detections at 0.5 arcsec separations with contrast ratios of 10$^{-6}$ across the near infrared. This has allowed for the direct imaging of putative exoplanets, which in turn is driving the instrumentation programmes of telescopes worldwide.
The architecture of our Solar System is dominated by Jupiter at 5.2 au and by Saturn at 9.5 au. However, very little is known about the frequency or nature of other planetary systems with orbital distances greater than 5 au (e.g., Marcy et al. 2005b). Precise radial velocities have only recently reached beyond the 10 years required to sense such objects. For planets orbiting solar mass stars, it is key to have sensitivity to Jupiter-like (12 yr) and Saturn-like (30 yr) orbits. The ability to put constraints on such planets, even with potentially incomplete orbits, will enable imaging observations to target systems for which they can provide critical observational constraints. With or without imaging, such orbits may substantially constrain the orbital structure and dynamical configuration, providing clues about planetary formation and migration (e.g., Currie 2009) and will help us trace the uniqueness of our own Solar System.
The Anglo-Australian Planet Search and the Keck planet searches are long-term radial velocity projects engaged in the detection and measurement of exoplanets at the highest possible precisions. Using the iodine calibration technique (e.g., Butler et al. 1996) they provide coverage of bright inactive F, G, K and M dwarfs. The AAPS began operation in 1998 January, and is currently surveying 250 stars. It has published exoplanets with $M$ sin $i$ ranging from 5 [$M_{\rm Earth}$]{} to 10 [$M_{\rm Jup}$]{} (Tinney et al. 2001, 2002a, 2002b, 2003, 2005, 2006; Butler et al. 2001, 2002b, 2006a; Jones et al. 2002, 2003a,b, 2006; Carter et al. 2003; McCarthy et al. 2005; O'Toole et al. 2007, 2009a, 2009b; Bailey et al. 2009; Vogt et al. 2009). With its somewhat longer baseline (since 1996) the Keck project originally announced the exoplanetary signal around HD 134987 (Vogt et al. 2000). Both Keck and the AAT have been regularly observing the star since 1998. Nearly a decade later we present results for HD 134987 from the combined AAT and Keck dataset.
Characteristics of HD 134987
============================
HD134987 (23 Lib) is a solar-type (G4V) star which is nearby (22 pc) and bright (V=6.45), with low activity (adopted [log $R'_{\rm HK}$]{} = –5.1) and high metallicity (adopted \[Fe/H\] = 0.25 dex). As an analogue of the prototype “Hot Jupiter” host star (51 Peg), it has long been a target for precision exoplanet surveys. Vogt et al. (2000) reported a planetary mass signal with a period $P$ = 259 d, eccentricity $e$ = 0.24 and $M$ sin $i$ = 1.58 [$M_{\rm Jup}$]{} from Keck data with an RMS of 3 m/s. Butler et al. (2001) confirmed the orbit using AAT data. In Butler et al. (2006a) these parameters were revised to $P$ = 258 d, $e$ = 0.24, $M$ sin $i$ = 1.64 [$M_{\rm Jup}$]{} with an RMS of 4 m/s for a [**reduced**]{} $\chi_{\nu}^2$=0.89 fit, including a trend of 2.9$\pm0.2$ m/s per year. At that time the jitter of HD 134987 was estimated to be 3.5 m/s. Wright et al. (2007) identify a number of objects whose False Alarm Probability for an additional Keplerian versus a simple trend is below 2%. Wright et al. report that the signal appeared as a change in the level of the residuals between 2000 and 2002 of 15 m/s. They thus suggested an outer planet on a rather eccentric orbit which reached periastron in 2001. Recently, estimates for the jitter, based on Wright (2005), have been reduced by around a factor of $\sqrt{2}$ (J. Wright, private communication), and an additional 43 epochs have been acquired with the AAT and Keck.
The properties of HD 134987 are summarised in Table \[stellarp\]. The variety of recent measurements reflects its inclusion in large-scale studies of nearby solar type stars. Recently, Holmberg et al. (2007) included it in a magnitude-limited, kinematically unbiased study of 16682 nearby F and G dwarf stars, and Takeda et al. (2007a) included it in a study of the stellar properties of 1040 F, G and K stars observed for the Anglo-Australian, Lick and Keck planet search programmes. Takeda et al. used high signal-to-noise echelle spectra (originally taken as iodine-free templates for radial velocities) to derive effective temperatures, surface gravities and metallicities, whereas Holmberg et al. used Strömgren photometry and the infrared flux method calibration of Alonso et al. (1996). Both studies used Hipparcos parallaxes to derive luminosities for comparison with different theoretical isochrones and so derive stellar parameters. To determine stellar masses and ages, Takeda et al. (2007a) use Yonsei-Yale isochrones (Demarque et al. 2004) and Holmberg et al. (2007) use Padova isochrones (Girardi et al. 2000; Salasnich et al. 2000). Both sets of derived parameters agree to within the uncertainties. The low activity index of HD 134987 ([log $R'_{\rm HK}$]{} $\sim$ –5.1) is consistent with the lack of significant photometric variability in measurements made by the Hipparcos satellite. Combining Hipparcos astrometry with their radial velocities, Holmberg et al. (2007) and Takeda et al. (2007b) determine U, V, W space velocities consistent with the old disk lifetimes in the range 8-11 Gyr inferred from the isochrones and with the lack of X-ray flux detected from HD 134987 (Kashyap, Drake & Saar 2008).
Spectroscopic Observations
==========================
The 63 epochs of Doppler data obtained at the AAT between 1998 August and 2009 October are shown in Table \[aat\_vel\]. The 75 epochs of Doppler measurements obtained at the Keck Telescope between 1996 August and 2009 July are shown in Table \[keck\_vel\]. The observing and data processing procedures follow those described by Butler et al. (1996, 2001, 2006a). All these data have been reprocessed through our frequently upgraded analysis system, and here we report results from the current version of our pipeline. Our velocity measurements are derived by breaking the spectra into several hundred 2 Å chunks and deriving relative velocities for each chunk. The velocity uncertainty, given in the third column and labelled ‘Unc.’, is determined from the scatter of these chunks. This uncertainty includes the effects of photon-counting uncertainties, residual errors in the spectrograph PSF model, and variation in the underlying spectrum between the template and iodine epochs. For both the AAT and Keck, observations in which the uncertainty is more than three times the median uncertainty of the entire set are not reported. All velocities are measured relative to the zero-point defined by the template observation. Since the AAT and Keck data were processed with different templates, we treat the difference between their zero-points as a free parameter in our fitting procedures.
Orbital Solution for HD 134987
==============================
The AAT and Keck data are shown in Fig.\[hd134987\] with a single Keplerian curve fit with an orbital period of [[258.19$\pm$0.07]{}]{} d, a velocity amplitude of 50.1$\pm$1.5 m/s and an eccentricity of [[0.233$\pm$0.002]{}]{}. The minimum ([$M \sin i~$]{}) mass of the planet is [[1.59$\pm$0.02]{}]{} [$M_{\rm Jup}$]{}. The RMS to the single Keplerian fit is 6.6 m/s. The activity measure (log $R_{HK}=$ –5.1) predicts a rotation period of 23–33 d (Wright 2005), which is significantly different from the exoplanet period. The lack of any observed chromospheric activity or photometric variations gives us confidence that the solution proposed by Vogt et al. (2000) arises from an exoplanet rather than from long-period starspots or chromospherically active regions. However, since HD134987 is an inactive star we would expect a substantially lower RMS. Further to the long-term trend found by Butler et al. (2006a), we now find a curvature that suggests a two planet solution with a $\sim$5000 d period for the outer planet. Fig. \[hd134987bc\] shows the best fit double Keplerian to the Keck and AAT data, with an RMS of 3.3 m/s. The best-fit parameters of the double-Keplerian fit are given in Table \[orbit\] and are based on treating the offset between the AAT and Keck as a free parameter.
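For reference, the single-Keplerian model fitted to such data is $v(t) = K\left[\cos(\nu(t)+\omega) + e\cos\omega\right] + \gamma$, with the true anomaly $\nu$ obtained from Kepler's equation. A minimal sketch (the period, amplitude and eccentricity are the fit values quoted above; the longitude of periastron $\omega$ and the epoch are placeholders, not fit results):

```python
import numpy as np

def kepler_E(M, e, tol=1e-12):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = M.copy() if isinstance(M, np.ndarray) else np.array(M, dtype=float)
    for _ in range(50):
        dE = (E - e*np.sin(E) - M) / (1 - e*np.cos(E))
        E -= dE
        if np.all(np.abs(dE) < tol):
            break
    return E

def rv(t, P, K, e, omega, t_peri, gamma=0.0):
    """Radial velocity (m/s) of a star hosting a single planet."""
    M = 2*np.pi*(t - t_peri)/P % (2*np.pi)          # mean anomaly
    E = kepler_E(M, e)                              # eccentric anomaly
    nu = 2*np.arctan2(np.sqrt(1+e)*np.sin(E/2),     # true anomaly
                      np.sqrt(1-e)*np.cos(E/2))
    return K*(np.cos(nu + omega) + e*np.cos(omega)) + gamma

t = np.linspace(0.0, 258.19, 500)
# omega = 2.0 rad and t_peri = 0 are placeholder values
v = rv(t, P=258.19, K=50.1, e=0.233, omega=2.0, t_peri=0.0)
```

Over a full orbit the model spans a peak-to-peak velocity of $2K$, i.e. about 100 m/s for HD 134987b.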
Fig.\[power\] shows the periodogram of the residuals to HD134987b for the AAT, Keck and combined AAT+Keck datasets. Significant power is present at periods beyond a few thousand days in the datasets (with low false alarm probabilities computed using [systemic]{}: AAT – 5x10$^{-2}$, Keck – 7x10${^{-5}}$, combined – 3x10$^{-13}$). We investigated a range of solutions using the AAT and Keck datasets both separately and together. Both datasets produce very similar solutions for the inner planet. For the outer planet we have not definitively seen a full orbital period and thus the orbit is less clearcut. We found that the AAT dataset favours somewhat shorter periods (around 5000 d) and lower eccentricities (e$\sim$0.1), while the Keck dataset favours longer period solutions with higher eccentricities. The addition of a 5000 d outer planet to either dataset or the combined dataset produces an improvement in RMS by more than a factor of 2 (e.g., Figs \[hd134987\] and \[hd134987bc\]). The jitter value which must be used to result in a [**reduced**]{} ${\chi_{\nu}^2}$=1 is more than 6 m/s for a single planet fit, and drops significantly to around 2.6 m/s for a double planet fit for the combined dataset and 2.3 m/s (AAT) and 3.2 m/s (Keck) for the double planet fit to the separate datasets. These values of jitter are consistent with prediction (2.1 m/s) of the most recent activity jitter calibration of J.Wright (priv.comm.). The earliest Keck data have larger uncertainties than the other Keck observations and appear as outliers in the residuals plots (Fig. \[hd134987c\]). We have therefore checked the sensitivity of the solution to these three 1996 and 1997 data points, and find that removal of these data points does not signiÞcantly change the best fit orbital parameters and leads to a reduction of only 0.1 m/s in the fit RMS.
We have not yet seen a full orbital period for HD134987c and so it is difficult to assign a reliable solution for its parameters. In order to better understand how the fit parameters for HD134987c are related, in Fig. \[new\] we show contours of best-fit $\chi^2$ for period, mass and eccentricity. The contours indicate best-fit $\chi^2$ solutions increased by 2.3, 6.2 and 11.8, which correspond to 1$\sigma$, 2$\sigma$ and 3$\sigma$ confidence levels for systems represented with two degrees of freedom and Gaussian noise. They have been derived allowing all other orbital parameters for HD134987b and c to be best-fit. These plots highlight the asymmetric nature of the confidence regions due to not having a complete orbital period. These fits are made using the [systemic]{} package (Meschiari et al. 2009) and assume no stellar jitter for any data point; however, since stellar jitter provides a source of pseudo-random noise which may vary on various timescales (e.g., the unknown stellar rotation timescale), the noise in the radial velocities may be non-Gaussian. In addition to the best-fit $\chi^2$ for the joint dataset, 1$\sigma$ contours for the individual AAT (dashed green) and Keck (dotted blue) datasets are shown.
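The quoted $\Delta\chi^2$ thresholds of 2.3, 6.2 and 11.8 follow directly from the cumulative $\chi^2$ distribution with two degrees of freedom; a quick cross-check (a sketch using scipy, not part of the original analysis):

```python
from scipy.stats import chi2, norm

# Delta chi^2 thresholds for joint confidence regions of two parameters
for k, quoted in [(1, 2.3), (2, 6.2), (3, 11.8)]:
    coverage = norm.cdf(k) - norm.cdf(-k)      # 68.27%, 95.45%, 99.73%
    threshold = chi2.ppf(coverage, df=2)
    assert abs(threshold - quoted) < 0.05
```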
The observing programmes at the AAT and Keck use the same calibration methodology and follow similar observing and data-reduction strategies. A major operational difference for a given star is that Keck achieves a given S/N in a much shorter exposure time. For HD134987, Keck integration times range from 27 s to 10 min, are typically 1–2 min long, with a median of 84 s, and include 24 measurements of less than a minute. The median AAT integration time is nearly five times longer (400 s), with a smaller spread (3.3 to 10 min). It is therefore probable that the differences between the telescopes (in the double-planet fit) result from their sampling astrophysical noise sources on different timescales. For the jitter values required to achieve a best-fit reduced $\chi^2$ of one to be the same for both datasets, nearly 2 m/s of additional astrophysical noise (added in quadrature) would have to be present in the Keck, but not the AAT, data.
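The "nearly 2 m/s" figure follows from subtracting the two best-fit jitter values in quadrature; schematically, using the values quoted above:

```python
# Quadrature subtraction of the best-fit jitters: how much extra noise
# the Keck data would need (beyond the AAT level) to explain its higher
# best-fit jitter.
import math

jitter_aat = 2.3    # m/s, double-planet fit to AAT data
jitter_keck = 3.2   # m/s, double-planet fit to Keck data

extra = math.sqrt(jitter_keck**2 - jitter_aat**2)
print(f"extra Keck noise ~ {extra:.1f} m/s")
```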
We investigate the importance of the relatively shorter Keck exposure times in a few different ways. We add velocity jitter to all Keck radial velocity errors corresponding to exposure times of less than 200 s (the shortest exposure time at the AAT), in the form of radial velocity jitter of 4.5$\times$(1-$t_{\rm exp}$/200) m/s. The scaling factor of 4.5 is chosen so as to give resultant best fits requiring the same jitter as the AAT (2.3 m/s) to achieve a best-fit reduced $\chi^2$ of one, and is consistent with the higher levels of jitter expected for HD134987 from Wright (2005). However, neither this procedure nor removal of the 10% of the data with the shortest exposure times significantly alters the Keck solution.
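This exposure-time-dependent error padding can be sketched as follows (an illustrative implementation; the function name and vectorisation are ours):

```python
# Pad short-exposure RV errors with jitter of 4.5*(1 - t_exp/200) m/s,
# added in quadrature; exposures of 200 s or more are left unchanged.
import numpy as np

def padded_error(sigma_internal, t_exp, scale=4.5, t_ref=200.0):
    """Inflated RV error (m/s) for exposure time t_exp (seconds)."""
    t_exp = np.asarray(t_exp, dtype=float)
    jitter = np.where(t_exp < t_ref, scale * (1.0 - t_exp / t_ref), 0.0)
    return np.sqrt(np.asarray(sigma_internal, dtype=float) ** 2 + jitter ** 2)

print(padded_error(1.0, 27.0))    # shortest Keck exposure: inflated
print(padded_error(1.0, 400.0))   # typical AAT exposure: unchanged
```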
The Keck data allow us to gain more direct insight into the importance of the activity of HD134987. The Keck HIRES spectrometer simultaneously covers the Ca II H&K lines and the iodine region. While activity indices such as CaHK do not provide a one-to-one mapping onto stellar jitter, and thus cannot be used as an input error for the radial velocities, these CaHK lines are the primary method of radial velocity jitter estimation (Wright 2005). From an extraction of this activity measure (e.g., Tinney et al. 2002) the S values for the Keck data are given in Table \[keck\_vel\] and plotted together with a Gaussian distribution in Fig. \[jit\]. This indicates that the distribution of S values is not a particularly good match to a Gaussian. Although the jitter values are the largest source of uncertainty, the assignment of the Gaussian $\sigma$ confidence limits should be robust due to the large number of data points available for the fit. Wright (2005) indicates that, for $different$ stars in the regime of activity, spectral type and magnitude appropriate to HD134987 (their table 2), there is a factor of 3 difference in radial velocity jitter between the 20th and 80th percentiles. While this scatter is for $different$ stars, we can examine the impact of characterising the jitter radial velocity signal in terms of a linear function varying by a factor of three between the 20th and 80th percentiles found by Wright (2005). For the HD134987 Keck data this corresponds to assigning radial velocity jitter values of up to 6 m/s. We found that this operation expands the 1-$\sigma$ contours and brings the best-fit solutions for the Keck dataset to shorter periods. Alternatively, one can obtain a solution with little expansion in the 1-$\sigma$ contours by ignoring radial velocities with high activity values. Fig. \[new\] shows 1-$\sigma$ best-fit contours (dashed-dot-dot-dot) for a Keck dataset with the omission of the radial velocity data corresponding to the highest 10% of S values (those in the range 0.164 to 0.182). The removal of these high S values does not lead to much expansion in the 1-$\sigma$ best-fit contours, removes data points that are spread relatively evenly in time, and yields 1-$\sigma$ contours closer to those found for the AAT values.
The result that exposure time has less impact on the solution than the S value can be investigated quantitatively for asteroseismological p-mode jitter. The methodology of O'Toole et al. (2008) allows the p-mode jitter for HD134987 to be derived relatively precisely as a function of exposure time using the values from Table \[stellarp\]. This indicates that the Keck data should present no more than around 0.54 m/s of p-mode jitter, compared to a maximum of about 0.36 m/s in the AAT data. Since these are significantly smaller than the internal errors, the contributions of other stellar noise sources such as granulation and convection (e.g., Bruntt et al. 2005) are presumably significantly larger. While we are not in a position to quantify the exact noise source responsible for the different jitter values, the low and reasonably consistent jitter values, their agreement with stars of similar spectral type, and the improved consistency between datasets on removal of easily identifiable high-jitter values together indicate that a two-planet fit to both the AAPS and Keck velocity datasets is consistent with our rudimentary jitter expectations. We also note that a periodogram of the Keck S values shows no significant periodicities nor any peaks with low false alarm probabilities, which we would expect if there were a significant spot-induced radial velocity signal in the Keck dataset.
Photometric Observations
========================
In addition to the spectroscopic observations described and analyzed above, we have also acquired high-precision photometric observations of HD 134987 in seven observing seasons between 1999 March and 2009 June with the T8 0.80 m automatic photometric telescope (APT), one of seven automatic telescopes operated by TSU at Fairborn Observatory in southern Arizona (Eaton, Henry & Fekel 2003). The APTs can detect short-term, low-amplitude brightness variability in solar-type stars resulting from rotational modulation in the visibility of active regions, such as starspots and plages (e.g., Henry, Fekel & Hall 1995), and can also detect longer-term variations produced by the growth and decay of individual active regions and the occurrence of stellar magnetic cycles (e.g., Henry et al. 1995; Hall et al. 2009). The photometric observations can help to establish whether observed radial velocity variations are caused by stellar activity or planetary reflex motion (e.g., Henry et al. 2000). Several examples of periodic radial velocity variations in solar-type stars caused by photospheric spots and plages have been documented by Queloz et al. (2001) and Paulson et al. (2004). The photometric observations are also useful for searching for transits of the planetary companions (e.g., Henry et al. 2000; Sato et al. 2005).
The T8 0.80 m APT is equipped with a two-channel precision photometer featuring two EMI 9124QB bi-alkali photomultiplier tubes (PMTs) to make simultaneous measurements of a star in Strömgren $b$ and $y$ passbands. The APT observes each target star (star D) in a quartet with three ostensibly constant comparison stars (stars A, B, and C). From these measurements, we compute $b$ and $y$ differential magnitudes for each of the six combinations of the four stars: $D-A$, $D-B$, $D-C$, $C-A$, $C-B$, and $B-A$. We then correct the Strömgren $b$ and $y$ differential magnitudes for differential extinction with nightly extinction coefficients and transform them to the Strömgren photometric system with yearly mean transformation coefficients. Finally, we combine the Strömgren $b$ and $y$ differential magnitudes into a single $(b+y)/2$ passband to improve the precision of the observations. Henry (1999) presents a detailed description of the T8 automated telescope and photometer, observing techniques, and data reduction and quality-control procedures needed for long-term, high-precision photometry.
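The differential-magnitude bookkeeping described above can be sketched as follows (the magnitudes are illustrative placeholders, not APT data):

```python
# Six pairwise differential magnitudes from the quartet (D, A, B, C),
# then a combined (b+y)/2 passband for one observation.  All values
# below are invented for illustration.
mags = {"A": 6.94, "B": 8.26, "C": 6.47, "D": 7.25}   # hypothetical magnitudes

pairs = [("D", "A"), ("D", "B"), ("D", "C"), ("C", "A"), ("C", "B"), ("B", "A")]
diffs = {f"{p}-{q}": mags[p] - mags[q] for p, q in pairs}
print(diffs["D-C"])

# Averaging the Stromgren b and y differentials reduces the scatter
# relative to either band alone:
b_diff, y_diff = 0.7812, 0.7804          # hypothetical b, y differentials
combined = 0.5 * (b_diff + y_diff)       # the (b+y)/2 passband
print(round(combined, 4))
```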
The 419 $D-C$ differential magnitudes of HD 134987 are plotted in the top panel of Fig. \[photometry\]. We chose to analyze the $D-C$ observations for two reasons: (1) they have the smallest standard deviation of the three $D-A$, $D-B$, and $D-C$ time series, although not by much (0.00266, 0.00264, and 0.00263 mag, respectively), and, more importantly, (2) the $D-C$ time series is one year longer than the other two because comparison stars $A$ and $B$ had to be replaced after one year due to their variability. The three comparison stars $A$, $B$, and $C$ are HD 131992 ($V=6.94$, $B-V=0.20$), HD 137076 ($V=8.26$, $B-V=0.41$), and HD 135390 ($V=6.47$, $B-V=0.69$). The standard deviations of the $C-A$, $C-B$, and $B-A$ differential magnitudes about their means are 0.00297, 0.00308, and 0.00266 mag, respectively, comparable to the three standard deviations for star $D$ (HD 134987) given above. These values are somewhat larger than our typical precision of 0.0015 mag because HD 134987 lies at a declination of $-25\deg$ and so is observed through high airmass (1.8–2.0). The mean precision of the three HD 134987 differential time series is 0.00264 mag, while the mean precision of the three comparison star time series is 0.00290 mag. Therefore, we have not resolved intrinsic brightness variability in HD 134987, and the scatter in the $D-C$ measurements can be accounted for by the APT’s measurement precision at high airmass.
Nevertheless, we performed periodogram analyses on all six sets of differential magnitudes and find no significant periodicities between 0.03 and 1000 days. In particular, a least-squares sine fit to the $D-C$ observations phased on companion b’s orbital period of 258.187 days gives a semi-amplitude of only $0.00035~\pm~0.00017$ mag. The low level of magnetic activity in HD 134987 recorded in Table \[stellarp\] and the lack of detectable photometric variability on the orbital period of companion b confirm that the radial velocity variability ($K = 49.5$ m s$^{-1}$) on that period is the result of stellar reflex motion induced by HD 134987b.
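A least-squares sine fit at a fixed period is linear in its amplitudes, and so can be solved directly; the sketch below uses synthetic noise-only "photometry" with the observed cadence and scatter (not the actual APT data) to show that pure noise yields a semi-amplitude far below the detection threshold.

```python
# Fixed-period sine fit by linear least squares: model
# y = a*sin(phase) + b*cos(phase) + c, semi-amplitude = hypot(a, b).
import numpy as np

rng = np.random.default_rng(0)
period = 258.187                                   # days (orbital period of b)
t = np.sort(rng.uniform(0.0, 3758.0, 419))         # APT-like epochs (days)
y = rng.normal(0.0, 0.00263, t.size)               # noise-only "magnitudes"

phase = 2 * np.pi * t / period
A = np.column_stack([np.sin(phase), np.cos(phase), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
semi_amp = np.hypot(coef[0], coef[1])
print(f"semi-amplitude = {semi_amp:.5f} mag")
```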
The $D-C$ photometric observations plotted in the top panel of Fig. \[photometry\] cover a range of 3758 days, or eleven observing seasons, but with a gap of four seasons. The photometric observations are thus too few, and the orbital period of HD 134987c too uncertain ($5000\pm338$ d), for the photometry to set limits on stellar brightness variability on companion c’s orbital period. However, we can examine HD 134987’s long-term, year-to-year variability for the existing seven observing seasons plotted in Fig. \[photometry\]. The standard deviation of the seven yearly mean magnitudes about their grand mean is just 0.000220 mag; the slope of the best-fit line to the seven means is $-0.0000245~\pm~0.0000278$ mag yr$^{-1}$. Interestingly, the first six years of our photometry (1999–2004) correspond to the interval when the radial velocity residuals to the 258 day period, plotted in Fig. \[hd134987c\], increased approximately linearly by $\sim22$ m s$^{-1}$. The best-fit line to those six yearly photometric means has a slope of $-0.0000376~\pm~0.0000599$ mag yr$^{-1}$, indistinguishable from zero to high precision. Since the star remained photometrically constant while the velocity residuals rose, the photometric observations also provide strong support for the existence of HD 134987c.
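The year-to-year stability test above amounts to simple statistics on the yearly means; a sketch with invented yearly means of the right order (not the actual APT values):

```python
# Scatter of yearly mean magnitudes about their grand mean, and a
# straight-line fit to test for a long-term trend.  Values are
# illustrative placeholders at the ~1e-4 mag level.
import numpy as np

years = np.array([1999, 2000, 2001, 2002, 2003, 2004, 2009], dtype=float)
means = np.array([0.7810, 0.7812, 0.7808, 0.7811, 0.7809, 0.7812, 0.7810])

scatter = means.std(ddof=1)                 # std dev about the grand mean
slope, intercept = np.polyfit(years, means, 1)
print(f"scatter = {scatter:.6f} mag, slope = {slope:+.7f} mag/yr")
```

A scatter and slope both consistent with zero at the 1e-4 mag level are what rule out an activity-driven origin for the long-period velocity trend.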
With the 419 nightly APT observations of HD 134987, we examine the possibility of detecting transits of the inner planet. The geometric probability for transits to occur, given HD 134987b’s orbital elements in Table \[orbit\], is 0.65%, computed from equation (1) of Seagroves et al. (2003). This is a modest improvement over the transit probability (0.52%) for a circular orbit, arising from the favorable orientation ($\omega=352.8\deg$) of the planet’s moderately eccentric orbit ($e=0.23$). The 419 photometric measurements in the top panel of Fig. \[photometry\] are replotted in the middle panel, phased with the 258.187 day orbital period. Phase 0.0 corresponds to a predicted time of mid-transit derived from the orbital elements, $T_{transit}=2455027.31$. The observations near phase 0.0 are replotted on an expanded scale in the bottom panel of Fig. \[photometry\]. The solid curve in the two lower panels approximates the depth (0.85%) and duration (15 hours) of a central transit, derived from the orbital elements. The horizontal bar below the predicted transit window in the bottom panel represents the $\pm2$ day uncertainty in the time of central transit; the vertical error bar to the right of the transit window corresponds to the $\pm0.00263$ mag measurement uncertainty of a single observation. Transits of the inner planet could thus be detected with the APT, but our phase coverage is insufficient to determine whether or not they occur.
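A sketch of the geometric transit probability of equation (1) of Seagroves et al. (2003) as we read it; the constant 0.0045 is simply $R_\odot$/1 AU. The planetary radius is an assumed Jupiter-like value, and the published percentages also depend on the adopted radii and element conventions, so this is illustrative only.

```python
# Geometric transit probability for an eccentric orbit (after
# Seagroves et al. 2003, eq. 1).  Inputs: semimajor axis in AU,
# stellar and planetary radii in solar radii, eccentricity, and the
# argument of periastron in degrees.
import math

def transit_prob(a_au, r_star_rsun, e=0.0, omega_deg=90.0, r_p_rsun=0.10):
    """Probability that the orbit transits, as seen from a random direction."""
    ecc_factor = (1 + e * math.cos(math.radians(90.0 - omega_deg))) / (1 - e**2)
    return 0.0045 * (1.0 / a_au) * (r_star_rsun + r_p_rsun) * ecc_factor

p_circ = transit_prob(0.81, 1.25)                          # circular case
p_ecc = transit_prob(0.81, 1.25, e=0.233, omega_deg=352.8) # eccentric case
print(f"circular: {p_circ:.2%}, eccentric: {p_ecc:.2%}")
```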
Discussion
==========
At a distance of 22 pc, HD 134987 is one of the nearer exoplanetary systems. The combination of close distance and long period implies a relatively large angular separation from the star of $\gtrsim$0.23 arcsec for an edge-on circular orbit. Such a separation will be accessible to the typical 0.2 arcsec figure of merit for current and foreseen high-resolution imaging systems on 8m-class telescopes. Only eight other radial-velocity-discovered exoplanets currently exceed a maximum angular separation of 0.2 arcsec: $\epsilon$ Eridani b (Hatzes et al. 2000), GJ832b (Bailey et al. 2009), 55 Cnc d (Marcy et al. 2002), HD160691e (McCarthy et al. 2005), GJ849b (Butler et al. 2008b), HD190360b (Naef et al. 2003), 47Uma c (Fischer et al. 2002) and HD154345b (Butler et al. 2006b). Notably, three-quarters of these have been detected using data from the Keck or AAT. Despite this promising separation for direct detection, evolutionary models indicate that the contrast ratio of HD134987c with its host star will make this a challenging observation. Models suggest that a 5 Gyr, 2 [$M_{\rm Jup}$]{} exoplanet at 22 pc will have an H band magnitude of around 35, beyond the reach of even the next generation of high-resolution instruments. Nevertheless, it should be noted that this estimate is based on the minimum mass (sin $i$ = 1) for HD 134987c. Both an inclined orbit and a longer period (which is plausible given the relatively poorly determined period at present) would lead to a larger mass and improved detectability for HD134987c. Imaging observations will therefore be useful to constrain the possible orbits and masses of HD 134987c.
Detection of the astrometric signal from HD 134987c is more plausible. The astrometric orbit semimajor axis is $\alpha$ sin $i$ $\gtrsim$ 0.19 mas, comparable to the 0.25$\pm$0.06 mas astrometric orbit determined by Benedict et al. (2002) for GJ 876b. An astrometric orbit would enable the inclination to be determined, removing the current sin $i$ uncertainty on the mass.
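Both angular quantities discussed above follow directly from the fitted elements and the distance; a back-of-envelope check using the values from Tables \[stellarp\] and \[orbit\]:

```python
# Maximum projected star-planet separation (a/d) and astrometric
# semimajor axis alpha = (m_p/M_*) * a/d, for the minimum (sin i = 1)
# planet mass.
M_JUP_IN_MSUN = 9.546e-4        # Jupiter mass in solar masses

a_au, d_pc = 5.8, 22.2          # fitted semimajor axis, distance
m_p_mjup, m_star_msun = 0.82, 1.07

sep_arcsec = a_au / d_pc        # angular separation for an edge-on orbit
alpha_mas = (m_p_mjup * M_JUP_IN_MSUN / m_star_msun) * (a_au / d_pc) * 1e3
print(f"separation ~ {sep_arcsec:.2f} arcsec, alpha sin i ~ {alpha_mas:.2f} mas")
```

This reproduces the $\sim$0.19 mas astrometric signature quoted above; a larger true mass (sin $i < 1$) would only increase it.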
HD134987 joins the family of stars with multiple planets. It appears consistent with the broad general properties for multiple-planet systems suggested by Wright et al. (2009). For example, its metallicity of +0.25 is close to the mean for exoplanet hosts with long-term trends (+0.20) or multi-planet systems (+0.15; Wright et al. 2009); the eccentricities of its planets (0.12 and 0.23) are somewhat lower than the 0.25 mean for all exoplanets (excluding tidally circularised ones); and its planetary masses are close to 1[$M_{\rm Jup}$]{}, the approximate dividing line in mass between higher and lower multiple-planet eccentricities. From a conservative analysis of exoplanet signals requiring an extra trend to adequately fit the dataset, Wright et al. (2009) suggest that $>$28% of exoplanets are in multiple systems, a finding likely to be consistent with the AAPS dataset. In order to assess the actual value it will be necessary to quantify the detectability of such trends using simulations such as those presented by O’Toole et al. (2009) and Wittenmyer et al. (2009).
The fit to HD134987c indicates a relatively small semi-amplitude ($\sim$10 m/s) in comparison to most of the other long-period exoplanets announced to date. Fig. \[au\_mp2009\] indicates that the orbital solution for HD134987c is rather more reminiscent of Jupiter than are those of other exoplanets discovered to date. With a semi-major axis longer than Jupiter’s and a similar eccentricity, the discovery of HD134987c signals that we now have sensitivity to radial velocity planets with Jupiter-like periods around Sun-like stars. As the second decade of data is gathered by the AAT and Keck Telescopes, we can be confident that our long-term precision is sufficient to empirically constrain the incidence of exoplanets with Jupiter-like periods around Sun-like stars. This will enable us to ascertain just how common our Solar System might be, and to make comparisons with developing theoretical predictions (e.g., Mordasini et al. 2009). As our temporal baseline extends we will become sensitive to longer-period planets; a true Saturn analog, for example, would require 15 more years of observation to sample fully. As our precision improves we will become sensitive to lower-mass, longer-period exoplanets, a domain which migration scenarios for planets around solar-type stars suggest is a rich one (e.g., Schlaufman, Lin & Ida 2009).
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank the Anglo-Australian and Keck time assignment committees for continuing allocations of telescope time. We are grateful for the extraordinary support we have received from the AAT technical staff – E. Penny, R. Paterson, D. Stafford, F. Freeman, S. Lee, J. Pogson, S. James, J. Stevenson, K. Fiegert and W. Campbell. We gratefully acknowledge the UK and Australian government support of the Anglo-Australian Telescope through their PPARC, STFC and DETYA funding (HRAJ, CGT); NSF grant AST-9988087, NASA grant NAG5-12182, STFC grant PP/C000552/1, ARC Grant DP0774000 and travel support from the Carnegie Institution of Washington (to RPB) and from the Anglo-Australian Observatory (to CGT, BDC and JB). SSV gratefully acknowledges support from NSF grant AST-0307493. GWH acknowledges support from NASA, NSF, Tennessee State University, and the state of Tennessee through its Centers of Excellence program. This research has made use of the SIMBAD and exoplanet.eu databases, operated at CDS, Strasbourg and Paris Observatory respectively. The referee is thanked for making suggestions which led to substantial improvements in the paper.
[99]{}
Alonso, A., Arribas, S. & Martínez-Roger, C. 1996, A&A, 313, 873
Astronomical Almanac, 2000, Defense Dept., Naval Observatory, Nautical Almanac Office
Bailey, J., Butler, R. P., Tinney, C. G., Jones, H. R. A., O’Toole, S., Carter, B. D., Marcy, G. W., 2009, ApJ, 690, 743
Barnes, J. et al., 2008, MNRAS, 390, 1258
Benedict, G. F. et al., 2002, ApJ, 581, L115
Baraffe I., Chabrier G., Barman T., Allard F., Hauschildt P., 2003, A&A, 402, 701

Bond J., Tinney C.G., Butler R.P., Jones H.R.A., Marcy G. W., Penny A.J., Carter B., 2006, MNRAS, 370, 163
Bruntt H., Kjeldsen H., Buzasi D. L., Bedding T. R., 2005, ApJ, 633, 440
Butler, R.P., Marcy, G.W., Williams, E., McCarthy, C., Dosanjh, P., Vogt, S.S., 1996, PASP, 108, 500
Butler R.P., Tinney C.G., Marcy G.W., Jones H.R.A., Penny A.J., Apps K., 2001, ApJ, 555, 410
Butler R.P., Marcy G. W., Vogt S. S., Tinney C.G., Jones H.R.A., McCarthy C., Penny A.J., Apps K., Carter B., 2002a, ApJ, 578, 565
Butler, R.P., Marcy, G. W., Vogt, S. S., Fischer D.A., Henry G.W., Laughlin G., Wright J., 2002b, ApJ, 582, 455
Butler, R. P., Wright, J. T., Marcy, G. W., Fischer, D. A., Vogt, S. S., Tinney, C. G., Jones, H. R. A., Carter, B. D., Johnson, J. A. 2006a, ApJ, 646, 505
Butler, R. P., Johnson J.A., , Marcy, G. W., Wright, J. T., Vogt, S. S., Fischer, 2006b, PASP, 118, 1685
Carter, B.D., Butler, R.P., Tinney, C.G., Jones, H.R.A., Marcy, G.W., Fischer, D.A., Penny, A.J., 2003, ApJ, 593, L43
Cenarro, A. J., Peletier, R. F., Sánchez-Blázquez, P., Selam, S. O., Toloba, E., Cardiel, N., Falcón-Barroso, J., Gorgas, J., Jiménez-Vicente, J., Vazdekis, A., 2007, MNRAS, 374, 664
Currie T., 2009, arXiv:0902.3459
Demarque P., Woo J-H, Kim Y-Yi S., 2004, ApJS, 155, 667
Eaton, J. A., Henry, G. W., & Fekel, F. C. 2003, in The Future of Small Telescopes in the New Millennium, Volume II - The Telescopes We Use, ed. T. D. Oswalt (Dordrecht: Kluwer), 189
Fischer D. A., Marcy G. W., Butler R. P., Laughlin G., Vogt S. S., 2002, ApJ, 564, 1028
Girardi, L., Bressan, A., Bertelli, G. & Chiosi, C. 2000, A&AS, 141, 371
Hall, J. C., Henry, G. W., Lockwood, G. W., Skiff, B. A. and Sarr, S. H. 2009, AJ, 138, 312
Hatzes A., et al. 2000, ApJ, 544, 145
Henry, G. W. 1999, PASP, 111, 845
Henry, G. W., Baliunas, S. L., Donahue, R. A., Fekel, F. C., & Soon, W. 2000, ApJ, 531, 415
Henry, G. W., Eaton, J. A., Hamer, J., & Hall, D. S. 1995, ApJS, 97, 513
Henry, G. W., Fekel, F. C., & Hall, D. S. 1995, AJ, 110, 2926
Henry, G. W., Marcy, G. W., Butler, R. P., & Vogt, S. S. 2000, ApJL, 529, 41
Holmberg J., Nordstroem B., Andersen J., 2007, A&A, 475, 519
Jenkins, J.S., Jones H.R.A., Tinney C.G., Butler R.P., Marcy G.W., McCarthy C., Carter B.D., Penny A.J., 2006, MNRAS, 372, 163
Jenkins J. S., Jones H. R. A., Pavlenko, Y., Pinfield D. J., Barnes J. R., Lyubchik, Y., 2008, A&A 485, 571
Jones H.R.A., Butler R.P., Tinney C.G., Marcy G.W., Penny A.J., McCarthy C., Carter B.D., Pourbaix D., 2002a, MNRAS, 333, 871
Jones H.R.A., Butler R.P., Tinney C.G., Marcy G.W., Penny A.J., McCarthy C., Carter B.D. 2002b, MNRAS, 337, 1170
Jones H.R.A., Butler R.P., Tinney C.G., Marcy G.W., Penny A.J., McCarthy C., Carter B.D. 2003, MNRAS, 341, 948
Kashyap, V.L., Drake, J.J., Saar S.H., 2008, ApJ, 687, 1339
Knutson, H. A., Charbonneau, D., Allen, L. E., Fortney, J. J., Agol, E., Cowan, N. B., Showman, A. P., Cooper, C. S., Megeath, S. T., 2007, Nature, 447, 183

Marcy G. W., et al., 2002, ApJ, 581, 1375
Meschiari S., Wolf A.S., Rivera E., Laughlin G., Vogt S., Butler P., 2009, arXiv:0907.1675
Marois, C., Macintosh, B., Barman, T., Zuckerman, B., Song, I., Patience, J., Lafreniere, D., & Doyon, R., 2008, Science, 322, 1348
Marcy G. W., Butler R.P., 1996, ApJ, 464, L147
Marcy G. W., Butler R.P., Fischer D.A., Vogt, S. S., 2005a, ApJ, 619, 570
Marcy G. W., Butler R. P., Fischer D. A., Vogt S. S., Wright J. T., Tinney C. G., Jones, H. R. A. 2005b, Progress of Theoretical Physics Supplement, 158, 24
McCarthy, C., Butler, R.P., Tinney, C.G., Jones, H.R.A., Marcy, G.W., Carter, B.D., Fischer, D., Penny, A.J., 2005, ApJ, 623, 1171
Mordasini, C., Alibert, Y., Benz, W., Naef, D., 2009, A&A,501, 1161
Naef D., 2003, A&A, 410, 1051
O’Toole, S. J., Butler, R. P., Tinney, C. G., Jones, H. R. A., Marcy, G. W., Carter, B., McCarthy, C., Bailey, J., Penny, A. J., Apps, K., Fischer, D., 2007, ApJ, 660, 1636
O’Toole, S., Tinney, C. G., Jones, H. R. A., 2008, MNRAS, 386, 516
O’Toole, S. J., Tinney, C. G., Jones, H. R. A., Butler, R. P., Marcy, G. W., Carter, B., Bailey, J., 2009, MNRAS, 392, 641

O’Toole, S., Tinney, C. G., Butler, R. P., Jones, H. R. A., Bailey, J., Carter, B. D., Vogt, S. S., Laughlin, G., Rivera, E. J., 2009, ApJ, 697, 1263

O’Toole, S. J., Jones, H. R. A., Tinney, C. G., Butler, R. P., Marcy, G. W., Carter, B., Bailey, J., Wittenmyer, R. A., 2009, arXiv:0906.4619
Paulson, D. B., Saar, S. H., Cochran, W. D., & Henry, G. W. 2004, AJ, 127, 1644
Queloz, D., Henry, G. W., Sivan, J. P., Baliunas, S. L., Beuzit, J. L., Donahue, R. A., Mayor, M., Naef, D., Perrier, C. & Udry, S. 2001, AAP, 379, 279
Sato, B., Fischer, D. A., Henry, G. W., et al. 2005, ApJ, 633, 465
Seagroves, S., Harker, J., Laughlin, G., Lacy, J., & Castellano, T. 2003, PASP, 115, 1355
Schlaufman K., Lin D.N.C., Ida S., 2009, ApJ, 691, 1322
Saffe C., Gómez M., Chavero C., 2005, A&A, 443, 609
Saffe C., Levato H., Lopez-Garcia Z., Jofre E., Petrucci R., Gonzalez E., 2008, arXiv:0810.3798
Sousa S. G., Santos N. C., Mayor M., Udry S., Casagrande L., Israelian G., Pepe F., Queloz D., Monteiro M. J. P. F. G., 2008, A&A, 487, 373
Salasnich B., Girardi L., Weiss A., Chiosi C., 2000, A&A, 361, 1023
Swain M., et al., 2009, ApJ, 690, 114
Takeda, G., Ford, E. B., Sills, A., Rasio, F. A., Fischer, D. A., Valenti, J. A., 2007a, ApJS, 168, 297

Takeda, Y., 2007b, PASJ, 59, 335
Tinney, C. G., Butler, R. P., Marcy, G. W., Jones, H. R. A., Penny, A. J. Vogt, S. S., Apps, K. & Henry, G. W. 2001, ApJ, 551, 507
Tinney, C. G., Butler, R. P., Marcy, G. W., Jones, H. R. A., Penny, A. J., McCarthy, C. & Carter, B. D. 2002a, ApJ, 571, 528
Tinney, C. G., McCarthy, C., Jones, H. R. A., Butler, R. P., Carter, B. D., Marcy, G. W., & Penny, A. J. 2002b, MNRAS, 332, 759
Tinney, C. G., Butler, R. P., Marcy, G. W., Jones, H. R. A., Penny, A. J., McCarthy, C., Carter, B. D. & Bond, J. 2003, ApJ, 587, 423
Tinney, C. G., Butler, R. P., Marcy, G. W., Jones, H. R. A., Penny, A. J., McCarthy, C., Carter, B. D. & Fischer, D. A. 2005, ApJ, 623, 1171
Tinney, C. G., Butler, R. P., Marcy, G. W., Jones, H. R. A., Laughlin, G., Carter, B. D., Bailey, J. A. & O’Toole, S. 2006, ApJ, 647, 594
van Belle G. T., von Braun K., 2009, ApJ, 694, 1085
van Leeuwen F., 2007, A&A, 474, 653
Vogt, S.S., Marcy, G.W., Butler, R.P. & Apps, K. 2000, ApJ, 536, 902
Vogt, S.S. Butler R.P., Marcy G.W., Fischer D.A., Henry G., Laughlin G., Wright J.T., Johnson J.A., 2005, ApJ, 632, 638
Vogt, S.S. et al. 2009, ApJ, in press
Wright J., 2005, PASP, 117, 657
Wright J. T., Marcy G. W., Fischer D. A., Butler R. P., Vogt S. S., Tinney C. G., Jones H. R. A., Carter B. D., Johnson J. A., McCarthy C., Apps K., 2007, ApJ, 657, 533
Wright, J. T., Upadhyay, S., Marcy, G. W., Fischer D. A., Ford E.B., Johnson J.A., 2009, ApJ, 693, 1084
Wittenmyer, R.A., Endl M., Cochran, W.D., Levison H.F. Henry, G.W., 2009, ApJS, 182, 97
[lrrrrr]{} Spectral Type & G5V & Cenarro et al. (2007)\
log $R'_{\rm HK}$ & -5.04 & Jenkins et al. (2006)\
& -5.13 & Saffe et al. (2005)\
Variability ($\sigma$) & 0.0013 & van Leeuwen (2007)\
Distance (pc) & 22.2$\pm$1.1 & van Leeuwen (2007)\
log (L$_{\rm star}/$L$_\odot$) & 1.80$\pm$0.14 & van Belle & von Braun (2009)\
& 1.43$\pm$0.02 & Sousa et al. (2008)\
R$_{\rm star}$/R$_\odot$ & 1.25$\pm{0.04}$ & van Belle & von Braun (2009)\
M$_{\rm star}$/M$_\odot$ & 1.07$^{+0.03}_{-0.08}$ & Holmberg et al. (2007)\
& 1.05$^{+0.07}_{-0.05}$ & Takeda et al. (2007a)\
& 1.10$^{+0.07}_{-0.04}$ & Takeda (2007b)\
T$_{\rm eff}$ (K)& 5636 &Holmberg et al. (2007)\
& 5766 & Takeda et al. (2007b)\
& 5740$\pm$23& Sousa et al. (2008)\
& 5585$\pm$50 & van Belle & von Braun (2009)\
& 5623 $\pm$57 & Bond et al. (2006)\
$[$Fe/H$]$ & 0.20 & Holmberg et al. (2007)\
& 0.28 & Takeda et al. (2007b)\
& 0.25$\pm$0.02& Sousa et al. (2008)\
U,V,W (km s$^{-1}$) &-21,-41,20 & Holmberg et al. (2007)\
& -9.8,-25.4,28.4& Takeda et al. (2007b)\
Age (Gyr) & 8.4$^{+1.6}_{-1.4}$ & Takeda et al. (2007a)\
& 11.1$^{+1.5}_{-3.7}$ & Saffe et al. (2008)\
jitter (m s$^{-1}$) & 3.5 & Butler et al. (2006a)\
$v$ sin $i$ (km/s) & 2.17 & Butler et al. (2006a)\
\[stellarp\]
[rrr]{} 917.2282 & -41.8 & 2.0\
1213.2775 & -61.6 & 1.9\
1276.0475 & -50.4 & 2.1\
1382.9573 & 30.1 & 1.8\
1413.8813 & -16.4 & 1.0\
1630.2677 & 36.1 & 1.7\
1683.0609 & -20.8 & 2.0\
1706.0960 & -43.3 & 2.5\
1717.9564 & -56.0 & 1.8\
1742.9340 & -57.7 & 1.6\
1984.2154 & -51.4 & 2.0\
2060.9717 & -35.8 & 1.6\
2091.9394 & -11.9 & 1.3\
2124.8927 & 33.8 & 1.0\
2125.0410 & 34.2 & 1.3\
2125.8890 & 41.6 & 1.1\
2125.9847 & 35.2 & 1.2\
2126.9191 & 37.7 & 1.6\
2186.8784 & 0.6 & 1.7\
2189.8632 & -11.8 & 1.3\
2360.2379 & 7.4 & 1.6\
2387.1061 & 42.9 & 1.5\
2388.1532 & 44.9 & 1.3\
2455.9877 & -6.8 & 1.7\
2476.9724 & -29.5 & 1.4\
2655.2513 & 52.5 & 1.6\
2747.1502 & -39.9 & 1.3\
2785.0880 & -50.4 & 1.5\
2860.8964 & -10.0 & 1.4\
3042.2517 & -50.8 & 1.3\
3215.9389 & 7.3 & 1.0\
3485.0592 & -7.4 & 1.2\
3508.1736 & -25.2 & 1.4\
3521.0784 & -33.2 & 1.6\
3943.9202 & 49.4 & 0.9\
3946.9329 & 47.4 & 0.8\
4139.2578 & -28.8 & 1.2\
4226.0985 & 36.0 & 1.9\
4368.8908 & -48.2 & 1.1\
4543.2849 & -34.2 & 1.1\
4899.2183 & -47.7 & 1.2\
4900.2294 & -44.4 & 0.8\
4908.2357 & -42.0 & 1.5\
5017.9707 & 4.8 & 1.4\
5020.0140 & -2.8 & 1.1\
5020.9416 & -2.5 & 1.0\
5021.9442 & -3.2 & 1.2\
5023.9401 & -9.0 & 1.0\
5029.9292 & -13.3 & 1.1\
5030.8891 & -15.1 & 0.9\
5032.0031 & -17.6 & 0.9\
5032.9382 & -16.1 & 1.0\
5036.9205 & -19.3 & 1.2\
5044.9871 & -31.4 & 1.2\
5046.0131 & -31.4 & 0.7\
5046.9415 & -32.0 & 0.9\
5047.8949 & -34.5 & 0.8\
5048.9788 & -37.6 & 0.7\
5054.8813 & -42.6 & 0.9\
5055.9357 & -40.2 & 0.5\
5104.8746 & -57.2 & 1.4\
5110.8763 & -60.2 & 1.0\
5111.8779 & -61.8 & 1.4\
\[aat\_vel\]
[rrrrr]{} 276.8020 & -9.3 & 1.4 & 0.122 &600\
283.8984 & -15.4 & 1.8 & -0.020 &300\
604.8935 & 41.2 & 1.1 & 0.135 &150\
838.1755 & 40.0 & 1.1 & 0.151 &143\
839.1727 & 41.2 & 1.1 & 0.150 &143\
840.1707 & 43.1 & 1.2 & 0.156 &143\
863.1203 & 37.6 & 1.2 & 0.148 & 80\
954.9176 & -49.3 & 0.8 & 0.143 &143\
956.9547 & -44.6 & 1.1 & 0.153 &143\
981.8126 & -54.8 & 1.5 & 0.151 &120\
982.8190 & -48.6 & 1.1 & 0.154 &140\
983.8498 & -48.6 & 1.6 & 0.154 &110\
1011.8011 & -48.6 & 1.1 & 0.150 & 60\
1012.8005 & -46.2 & 1.2 & 0.138 &100\
1013.8006 & -46.1 & 1.3 & 0.128 & 80\
1050.7730 & -13.2 & 1.1 & 0.170 & 80\
1051.7547 & -10.7 & 1.1 & 0.152 &268\
1068.7306 & 7.1 & 1.0 & 0.174 &100\
1069.7193 & 8.2 & 1.0 & 0.172 &100\
1070.7242 & 9.9 & 1.0 & 0.164 &110\
1071.7229 & 8.8 & 1.0 & 0.182 &120\
1072.7204 & 14.9 & 1.1 & 0.161 & 80\
1073.7196 & 14.4 & 1.2 & 0.051 &120\
1074.7070 & 16.4 & 1.0 & 0.166 &120\
1200.1581 & -48.3 & 1.2 & 0.147 &100\
1227.0883 & -49.0 & 1.0 & 0.148 &120\
1228.1038 & -51.1 & 1.2 & 0.157 &120\
1229.1161 & -49.9 & 1.1 & 0.156 &140\
1310.8892 & -18.2 & 1.1 & 0.143 &100\
1311.9101 & -14.3 & 1.1 & 0.144 &100\
1312.9239 & -19.3 & 1.4 & 0.138 & 70\
1314.0005 & -14.5 & 1.2 & 0.150 &100\
1340.8393 & 30.9 & 1.1 & 0.148 &119\
1341.8853 & 31.8 & 1.3 & 0.154 &119\
1342.8787 & 34.2 & 1.3 & 0.150 & 60\
1367.7877 & 39.2 & 1.3 & 0.150 & 90\
1368.7558 & 48.3 & 1.3 & 0.146 & 55\
1369.7821 & 52.9 & 1.3 & 0.156 & 60\
1370.8677 & 53.4 & 1.4 & 0.155 & 90\
1371.7599 & 49.4 & 1.2 & 0.149 & 50\
1372.7678 & 49.7 & 1.2 & 0.145 & 60\
1373.7712 & 45.5 & 1.3 & 0.151 & 40\
1410.7258 & 4.3 & 1.1 & 0.153 &119\
1411.7251 & -3.6 & 1.3 & 0.143 &119\
1583.1620 & 8.2 & 1.6 & 0.153 & 70\
1704.8398 & -27.8 & 1.3 & 0.148 & 57\
2002.9877 & -46.3 & 1.4 & 0.128 &213\
2030.9596 & -40.7 & 1.3 & 0.134 &194\
2062.8470 & -25.1 & 1.3 & 0.119 & 68\
2094.7919 & 4.4 & 1.4 & 0.119 & 87\
2334.1494 & -10.6 & 1.4 & 0.127 & 56\
2446.9002 & 14.6 & 1.3 & 0.135 & 74\
2683.1724 & 40.0 & 1.5 & 0.140 & 50\
2828.8530 & -28.8 & 1.3 & 0.130 & 52\
3153.8956 & 49.5 & 1.1 & 0.135 & 42\
3426.0749 & 61.6 & 1.1 & 0.142 &446\
3934.7661 & 57.9 & 1.1 & 0.144 & 59\
4139.1543 & -10.3 & 1.2 & 0.149 & 40\
4246.9373 & 16.4 & 0.9 & 0.155 &173\
4247.9507 & 16.7 & 1.2 & 0.154 & 56\
4248.9033 & 13.2 & 1.0 & 0.153 & 27\
4251.8409 & 8.9 & 1.3 & 0.152 & 41\
4255.8205 & 5.3 & 1.0 & 0.151 & 27\
4278.7856 & -10.1 & 1.2 & 0.153 & 42\
4279.7868 & -12.1 & 1.1 & 0.150 & 49\
4285.7953 & -18.7 & 1.3 & 0.161 & 93\
4294.8522 & -28.6 & 1.2 & 0.151 & 53\
4343.7227 & -36.2 & 1.2 & 0.159 & 42\
4491.1711 & 37.2 & 1.3 & 0.149 & 55\
4545.0821 & -20.5 & 1.3 & 0.150 & 36\
4547.0645 & -20.6 & 1.3 & 0.149 & 36\
4600.9843 & -43.3 & 1.2 & 0.148 & 84\
4635.8703 & -31.3 & 1.3 & 0.152 & 27\
4718.7306 & 61.8 & 1.4 & 0.159 & 50\
5049.8434 & -15.8 & 0.8 & 0.177 & 59\
\[keck\_vel\]
[lrrr]{} Orbital period $P$ (d) & [[258.19$\pm$0.07]{}]{} & [[5000$\pm$400]{}]{}\
Velocity amplitude $K$ (m s$^{-1}$) &[[49.5$\pm$0.2]{}]{} & [[9.3$\pm$0.3]{}]{}\
Eccentricity $e$ &[[0.233$\pm$0.002]{}]{} & [[0.12$\pm$0.02]{}]{}\
$\omega$ (deg) & [[352.7$\pm$0.5]{}]{} & [[195$\pm$48]{}]{}\
Periastron Time (JD) &[[10071.0$\pm$0.8]{}]{} & [[11100$\pm$600]{}]{}\
$M$sin$i$ ([$M_{\rm Jup}$]{}) &[[1.59$\pm$0.02]{}]{} & [[0.82$\pm$0.03]{}]{}\
a (au) & [[0.81$\pm$0.02]{}]{} & [[5.8$\pm$0.5]{}]{}\
‘Zero point RV Offset’ & [[-3.8$\pm$3.2]{}]{}\
‘AAT Offset’ & [[-14.6$\pm$3.2]{}]{}\
\[orbit\]
![The solid line indicates the best fit single Keplerian which has an RMS of 6.6 m/s fit to the data. The Keck data are shown as squares and the AAT data as circles. []{data-label="hd134987"}](hd134987.ps){width="110mm"}
![The solid line shows the best fit double Keplerian to the Keck (squares) and AAT (circles) data. A significantly improved RMS of 3.3 m/s is achieved relative to the single planet fit.[]{data-label="hd134987bc"}](hd134987_2.ps){width="110mm"}
![**[Periodograms of the residuals to a single Keplerian fit to the HD 134987 data: AAT (top), Keck (middle), Combined (bottom). The outputs are taken from the publicly available package [systemic]{} (Meschiari et al. 2009).]{}**[]{data-label="power"}](aat_ls.ps "fig:") ![**[Periodograms of the residuals to a single Keplerian fit to the HD 134987 data: AAT (top), Keck (middle), Combined (bottom). The outputs are taken from the publicly available package [systemic]{} (Meschiari et al. 2009).]{}**[]{data-label="power"}](keck_ls.ps "fig:") ![**[Periodograms of the residuals to a single Keplerian fit to the HD 134987 data: AAT (top), Keck (middle), Combined (bottom). The outputs are taken from the publicly available package [systemic]{} (Meschiari et al. 2009).]{}**[]{data-label="power"}](all_ls.ps "fig:")
![Residuals to the 258d Keplerian fit for HD134987b shown in Fig. \[hd134987\] are shown.[]{data-label="hd134987c"}](fig3.ps){width="120mm"}
![The plots show contours of $\chi^2$ for best-fit orbits to the radial velocity data of HD134987c in period versus eccentricity (upper plot) and period versus mass (lower plot). The solid brown, red and yellow shading indicate the regions where $\Delta\chi^2$ is up to 2.3, 6.2 and 11.8 from the best-fit minimum $\chi^2$. Dashed green and dotted blue lines show the contours for the AAT and Keck data respectively. For the case of the Keck data a further contour is shown. The dashed-dot-dot-dot contour (dark blue, labelled Keck-S) represents a subset of the Keck data from which velocity values corresponding to 10% of the highest activity values are removed. The position of the cross marks the best fit solution for the combined dataset.[]{data-label="new"}](chisq_pvse.ps "fig:"){width="100mm"} ![The plots show contours of $\chi^2$ for best-fit orbits to the radial velocity data of HD134987c in period versus eccentricity (upper plot) and period versus mass (lower plot). The solid brown, red and yellow shading indicate the regions where $\Delta\chi^2$ is up to 2.3, 6.2 and 11.8 from the best-fit minimum $\chi^2$. Dashed green and dotted blue lines show the contours for the AAT and Keck data respectively. For the case of the Keck data a further contour is shown. The dashed-dot-dot-dot contour (dark blue, labelled Keck-S) represents a subset of the Keck data from which velocity values corresponding to 10% of the highest activity values are removed. The position of the cross marks the best fit solution for the combined dataset.[]{data-label="new"}](chisq_pvsm.ps "fig:"){width="100mm"}
![Histogram of S values for the Keck dataset plotted with a Gaussian distribution with a full-width half maximum, twice the standard deviation of the S values tabulated in Table \[keck\_vel\].[]{data-label="jit"}](jit.ps){width="100mm"}
![Top panel: the 419 Stromgren $(b+y)/2$ $D-C$ differential magnitudes of HD 134987 plotted against heliocentric Julian Date. The standard deviation of the observations from their mean (dotted line) is 0.00263 mag. The standard deviation of the yearly means is 0.00022 mag. Middle panel: the observations plotted modulo the 258.187-day orbital period of the inner planet. Phase 0.0 corresponds to the predicted time of mid transit. A least-squares sine fit at the orbital period yields a semiamplitude of only $0.00035\pm0.00017$ mag. Bottom panel: the observations near phase 0.0 plotted on an expanded scale. The duration of a central transit is 15 hours while the uncertainty of the transit time is $\pm2$ days. Our phase coverage is insufficient to determine whether or not companion b transits the star.[]{data-label="photometry"}](3panel.eps){width="150mm"}
![All exoplanets as recorded at http://exoplanets.eu on 2009 October 27 with planet and star masses, semi-major axes and eccentricity are included as small filled circles. The majority of data in the plots are for radial velocity discovered exoplanets and are included as $M$ sin$i$ values. The upper plot shows semi-major axis as a function of planet mass. Those labelled with crosses have primary masses within 25% of the Sun. Jupiter’s position is marked by the traditional mythological symbol. Jupiter’s semi-major axis and eccentricity are plotted as 5.20au and 0.0489 respectively from the Astronomical Almanac (2000). The best fit orbit for HD134987b is indicated by diamonds with the 2-$\sigma$ best-fit contour for HD134987c orbit. In the lower plot points plotted as large filled circles are those whose ratios of planet mass to star mass fall within 50% of that of Jupiter and the Sun.[]{data-label="au_mp2009"}](au_e2009.ps "fig:"){width="100mm"} ![All exoplanets as recorded at http://exoplanets.eu on 2009 October 27 with planet and star masses, semi-major axes and eccentricity are included as small filled circles. The majority of data in the plots are for radial velocity discovered exoplanets and are included as $M$ sin$i$ values. The upper plot shows semi-major axis as a function of planet mass. Those labelled with crosses have primary masses within 25% of the Sun. Jupiter’s position is marked by the traditional mythological symbol. Jupiter’s semi-major axis and eccentricity are plotted as 5.20au and 0.0489 respectively from the Astronomical Almanac (2000). The best fit orbit for HD134987b is indicated by diamonds with the 2-$\sigma$ best-fit contour for HD134987c orbit. In the lower plot points plotted as large filled circles are those whose ratios of planet mass to star mass fall within 50% of that of Jupiter and the Sun.[]{data-label="au_mp2009"}](au_mp2009.ps "fig:"){width="100mm"}
---
abstract: 'We describe a numerical method to construct Cauchy data extending to space-like infinity based on Corvino’s (2000) gluing method. Adopting the setting of Giulini and Holzegel (2005), we restrict ourselves here to vacuum axisymmetric spacetimes and glue a Schwarzschildean end to Brill-Lindquist data describing two non-rotating black holes. Our numerical implementation is based on pseudo-spectral methods, and we carry out extensive convergence tests to check the validity of our numerical results. We also investigate the dependence of the total ADM mass on the details of the gluing construction.'
author:
- Georgios Doulis
- Oliver Rinne
bibliography:
- 'paper.bib'
title: |
Numerical construction of initial data for Einstein’s equations\
with static extension to space-like infinity
---
Introduction {#sec:intro}
============
Many situations of astrophysical interest can be described to good approximation as isolated systems: an asymptotically flat spacetime containing a compact self-gravitating source such as a collapsing star, a black hole binary, etc. A fundamental problem in the numerical solution of the Einstein equations for such systems is the treatment of the far field. Access to the asymptotic region known as conformal infinity [@FrauendienerLRR] is important for several reasons. Firstly, gravitational radiation is only defined in an unambiguous way at future null infinity. Including this region in the computational domain enables extraction of the gravitational radiation emitted by the source in a straightforward way. This is important for the modelling of astrophysical sources of gravitational radiation. Secondly, many open problems in mathematical relativity such as black hole stability and cosmic censorship are statements about the global structure of spacetime. If numerical studies are to shed light on these questions then access to conformal infinity is indispensable.
The standard approach to numerical relativity is based on the Cauchy formulation of Einstein’s equations. The $t={\mathrm{const}}$ slices are truncated at a finite distance from the source, where boundary conditions are imposed. These must ensure that the resulting initial-boundary value problem is well posed, they must be compatible with the constraints that hold on the individual $t={\mathrm{const}}$ slices, and ideally they should be absorbing, i.e. the artificial boundary should be transparent to gravitational radiation. Despite much progress in this direction (see [@SarbachLRR] for a review article), this approach is necessarily limited because exact absorbing boundary conditions cannot be defined at a finite distance in the full nonlinear theory of general relativity so that linearisation about a given background spacetime is typically assumed. And imperfect boundary conditions can easily destroy relevant features of the solutions such as late-time power-law tails caused by the backscattering of gravitational radiation.
An alternative to evolution on truncated Cauchy slices is evolution on hyperboloidal slices extending to future null infinity ${\mathrsfs{I}}^+$. (Examples of hyperboloidal slices are the slices $\Sigma_1$ and $\Sigma_2$ in Fig. \[fig:evolution\].) In this approach a conformal transformation is applied to the spacetime metric, combined with a compactifying coordinate transformation that maps infinity to a finite location. The conformal boundary of the slices becomes a pure outflow boundary so that no boundary conditions are required there. Hyperboloidal evolution was first advocated in general relativity by Friedrich in the context of his regular conformal field equations [@Friedrich1983a], a symmetric hyperbolic formulation of the (suitably augmented) Einstein equations that is completely regular up to the conformal boundary. For reviews of the theoretical development as well as numerical implementations based on this system, see e.g. [@FrauendienerLRR; @Husa2002; @Husa2003]. An alternative method is based on a straightforward ADM [@Arnowitt1962] split of the conformally transformed Einstein equations on hyperboloidal surfaces of constant mean curvature [@Moncrief2009]. The resulting equations are formally singular at ${\mathrsfs{I}}^+$ but can nevertheless be evaluated there in terms of regular conformal data. Based on this system, stable numerical evolutions of a gravitationally perturbed Schwarzschild black hole in axisymmetry were achieved [@Rinne2010]; later matter fields were also included [@Rinne2013; @Rinne2014b]. Further proposals for hyperboloidal evolution systems that, as far as we know, have not been implemented numerically yet can be found in [@Zenginoglu2008; @Bardeen2011].
The hyperboloidal surfaces are only partial (in our case, future) Cauchy surfaces. The problem remains how to evolve entire spacetimes from Cauchy data extending to space-like infinity. The main difficulty here is that part of the Cauchy data—namely some of the components of the Weyl tensor—are singular at space-like infinity if the ADM mass is not zero [@Friedrich1988]. In [@Friedrich1998] Friedrich proposed a way to render these Cauchy data regular while guaranteeing the regularity of the conformal field equations at space-like infinity. The basic ingredient of this approach is the blowing up of space-like infinity $i^0$ to a *cylinder* $I = [-1, 1] \times \mathbb{S}^2$ that serves as a link of finite length (along the time direction) between past ${\mathrsfs{I}}^-$ and future ${\mathrsfs{I}}^+$ null infinity. The 2-spheres $I^\pm = I \cap {\mathrsfs{I}}^\pm$ where the cylinder meets future and past null infinity are called *critical sets*. The equations that propagate the data from ${\mathrsfs{I}}^-$ to ${\mathrsfs{I}}^+$ along the cylinder acquire an extremely simple form in Friedrich’s representation that makes them ideal for numerical implementation, see [@Beyer2012; @Doulis2013; @Beyer2014a; @Beyer2014b; @Frauendiener2014] for some recent numerical work. On the cylinder all the spatial derivatives drop out. Therefore, the cylinder is a *total characteristic* of the system and hence no boundary conditions are required there. However, this intrinsic system of propagation equations degenerates at the critical sets $I^\pm$ and develops logarithmic singularities there that are expected to travel along null infinity and spoil its smoothness. In Friedrich’s approach this generic singular behaviour is successfully reproduced. Its appearance has been made explicit and related to the structure of the initial data. 
In other words, there is a possibility that, by choosing the initial data appropriately, the occurrence of non-smooth features in the solutions at null infinity can be avoided. A possible solution proposed already in [@Friedrich1998] is to prescribe initial data that respect a set of regularity conditions involving the Cotton tensor. However, it turned out [@Kroon2004] that these conditions are not sufficient to prevent the occurrence of the logarithmic singularities in higher-order expansions of the solutions of the intrinsic system of propagation equations. In [@Kroon2004] Valiente Kroon proposed a new regularity condition in the form of the following conjecture:
If an initial data set which is time symmetric and conformally flat in a neighbourhood of infinity yields a development with a smooth null infinity, then the initial data is in fact Schwarzschildean in that neighbourhood.
Recently, the results in [@Kroon2010; @Kroon2012] have pointed in favour of the conjecture, but there is still work to be done in order to fully prove it. What has been shown is that the solution is smooth at the critical sets if and only if the initial data is exactly Schwarzschildean in a neighbourhood of infinity. It remains to be proved that the development of the solution along null infinity is smooth if and only if it is smooth at the critical sets. If true, the conjecture unveils the special role that static data play in the smooth development of Cauchy data extending to space-like infinity.
One might object that initial data that are static in a neighbourhood of space-like infinity are overly restrictive. However, a powerful result by Corvino [@Corv:2000] suggests that this is not the case. He showed that any given asymptotically flat and conformally flat initial data can be truncated and glued along an annulus to a Schwarzschild metric in the exterior, provided the radius of the gluing annulus is sufficiently large and the mass of the exterior Schwarzschild metric is chosen appropriately. There are otherwise no additional restrictions on the metric in the interior, in particular non-static spacetimes including gravitational radiation are allowed. The method has been generalised to stationary rotating ends described by the Kerr metric, and a cosmological constant has been included [@Corvino2006; @Chrusciel2008; @Chrusciel2009; @Cortier2013].
Corvino’s result can be used for the evolution problem as follows (see also [@Chrusciel2002]). Since his initial data are Schwarzschild in a neighbourhood of space-like infinity $i^0$ on the initial Cauchy slice $\Sigma_0$ (see Fig. \[fig:evolution\]), the future development of these initial data will also be Schwarzschild in a neighbourhood of $i^0$ (the shaded region in Fig. \[fig:evolution\]). By placing an artificial timelike boundary in this region, the data on $\Sigma_0$ can be evolved to the future for some time using standard Cauchy evolution with *exact* boundary conditions taken from the known Schwarzschild solution. From this evolution, data on a hypersurface $\Sigma_1$ are obtained, e.g. a hypersurface of constant mean curvature. Outside the artificial boundary, the solution on $\Sigma_1$ is known analytically (Schwarzschild), so we obtain data on a complete hyperboloidal surface. These can then be taken as initial data for a hyperboloidal evolution code. For the problem studied in the present paper (vacuum axisymmetric spacetimes), the code developed in [@Rinne2010] can in principle be used.
![Schematic of the evolution scheme. The initial Cauchy slice $\Sigma_0$ carries the glued data; in a neighbourhood of space-like infinity $i^0$ (shaded region) the development is Schwarzschild, and data on the hyperboloidal slices $\Sigma_1$ and $\Sigma_2$ can be obtained by Cauchy evolution with exact boundary conditions.[]{data-label="fig:evolution"}](figure_1)
The present paper deals with the first step of this proposal, namely the construction of initial data based on Corvino’s gluing method. It should be stressed that the proof of Corvino’s theorem is not explicit, i.e. it does not provide us with a prescription for how to actually construct the glued initial data. One of the aims of this paper is to compute such data numerically, at least in a simple setting. We assume here that spacetime is vacuum and axisymmetric. Corvino’s method under these assumptions was first studied analytically by Giulini and Holzegel . An important achievement of that work was to turn Corvino’s idea into an explicit PDE problem that can, in principle at least, be solved to obtain the glued data. The 3-metric at a moment of time symmetry in a vacuum axisymmetric spacetime can be written in the form of a Brill wave [@Brill:1959]. This comprises both the Schwarzschild solution in isotropic coordinates and, by superposition, Brill-Lindquist data for an axisymmetric configuration of two non-rotating black holes (not in equilibrium). Giulini and Holzegel took the metric in the interior to be Brill-Lindquist and glued it to a Schwarzschild metric in the exterior using a general Brill wave metric on the gluing annulus. They were mainly interested in the question of whether the ADM mass (i.e., the mass of the exterior Schwarzschild solution) can be smaller than the sum of the two Brill-Lindquist black hole masses, as they expected that this would reduce the (generally unwanted) gravitational radiation introduced in the gluing region. They claimed that this can be done at least to first order in the inverse gluing radius. Using numerical methods we are able to study the solution also for smaller gluing radii.
This paper is organised as follows. In Sec. \[sec:gluing\_construction\] we describe the details of the gluing construction and derive the equations to be solved. A novel ingredient is an integrability condition that fixes the relation between the masses of the Brill-Lindquist black holes and the exterior Schwarzschild solution (Sec. \[sec:integrability\]). Sec. \[sec:numer\_implement\] is devoted to the numerical implementation. We describe the pseudo-spectral method we use (Sec. \[sec:numer\_scheme\]) and test the code with an artificial exact solution (Sec. \[sec:exact\_sol\]) before turning to the actual gluing problem in Sec. \[sec:numer\_gluing\_constr\]. Detailed convergence tests are carried out. Finally, we investigate how the total ADM mass depends on the details of the gluing procedure (Sec. \[sec:reduction\]). We conclude with a discussion of our results and an outlook on future work in Sec. \[sec:discussion\].
The gluing construction {#sec:gluing_construction}
=======================
Following the line of thought in , we set up here the mathematical framework on which our numerical study of the gluing construction in the subsequent section will be based. We will also derive an integrability condition that unveils the dependence of the ADM mass on the details of the gluing construction.
Basic ingredients {#sec:basics}
-----------------
Fig. \[fig:2D\_gluing\] encapsulates the basic features of the construction proposed in : the interior spacetime consists of Brill-Lindquist data, the exterior spacetime extending to space-like infinity is Schwarzschild, and the transition between the two data sets takes place along a gluing annulus which is equipped with a Brill wave metric. The gluing annulus extends from $r_{\mathrm{int}}$ to $r_{\mathrm{ext}}$.
![Graphical two-dimensional representation of the gluing construction we are going to consider in the following. A Schwarzschildean end will be glued to Brill-Lindquist data representing two non-rotating black holes in the interior along a transition region equipped with a Brill wave metric. []{data-label="fig:2D_gluing"}](figure_2){height="6cm"}
More specifically, in the interior ($r \leq r_{\mathrm{int}}$) we consider axisymmetric vacuum Brill-Lindquist data describing two black holes at a moment of time symmetry, $$\label{B-L_metric1}
g_{\textrm{\tiny B-L}} = \left(1 + \frac{m_1}{2|\vec{r} - \vec{c}_1|} + \frac{m_2}{2|\vec{r} - \vec{c}_2|} \right)^4 \delta,$$ where $\delta = dr^2 + r^2 (d\theta^2 + \sin^2\theta\, d\phi^2)$ denotes the three-dimensional Euclidean line element in spherical polar coordinates, and $m_k$ and $\vec c_k$, with $k = 1,2$, are the masses and coordinate centres of the two black holes, respectively. In order to simplify our formulation, we will assume in the following that the two black holes are of equal mass, i.e. $m_1 = m_2 = m$, and that they lie symmetrically to the origin on the z-axis, i.e. $ \vec{c}_1 = - \vec{c}_2 = \vec{c} = (0, 0, \frac{d}{2})$, see Fig. \[fig:B-L\_data\]. With these choices the line element reduces to $$\label{B-L_metric}
g_{\textrm{\tiny B-L}} = \left(1 + \frac{m}{2|\vec{r} - \vec{c}|} + \frac{m}{2|\vec{r} + \vec{c}|} \right)^4 \delta.$$ Notice that the above line element is written in conformally flat form, a feature that will play a key role in the subsequent development of the gluing construction. It can be readily confirmed that the ADM mass of the Brill-Lindquist data is equal to $2m$.
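Since the ADM mass can be read off from the $1/(2r)$ falloff of the conformal factor, the value $2m$ quoted above is easy to verify numerically. The following numpy sketch is our own illustration (function and variable names are not from the paper):

```python
import numpy as np

def psi_bl(x, y, z, m=1.0, d=1.0):
    """Brill-Lindquist conformal factor for two equal-mass black holes of
    bare mass m, centred at (0, 0, +/- d/2) on the z-axis."""
    r1 = np.sqrt(x**2 + y**2 + (z - d / 2)**2)
    r2 = np.sqrt(x**2 + y**2 + (z + d / 2)**2)
    return 1.0 + m / (2.0 * r1) + m / (2.0 * r2)

def adm_mass_from_falloff(psi_value, r):
    """Invert the asymptotic falloff psi ~ 1 + M_ADM/(2 r): M_ADM = 2 r (psi - 1)."""
    return 2.0 * r * (psi_value - 1.0)

# Far from the holes the estimate should approach m_1 + m_2 = 2m.
m, d, r_far = 1.0, 1.0, 1.0e6
M_adm = adm_mass_from_falloff(psi_bl(r_far, 0.0, 0.0, m, d), r_far)
```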
![The Brill-Lindquist data. We consider two black holes of equal mass positioned symmetrically to the origin on the z-axis. The two black holes are at a moment of time symmetry, i.e. they do not carry any spin and they are momentarily at rest relative to each other.[]{data-label="fig:B-L_data"}](figure_3){height="6cm"}
In the present work we will consider only Brill-Lindquist data for which the horizons of the two black holes do not intersect. Cases where a third, outer horizon enclosing both black holes forms (which happens when the black holes are very close to each other) will also not be considered here. As shown in , both of the above requirements are satisfied when the mass-to-distance ratio satisfies the inequality $m/d \lesssim 0.64$. In this setting, the radius of the event horizon of each of the black holes is given by the formula $$r_\mathrm{hor} = \frac{m}{2 + \frac{m}{d}}.$$ Therefore, in order to keep the gluing annulus away from any possible horizons of the Brill-Lindquist data, the gluing radius $r_{\mathrm{int}}$ will in the following be chosen in such a way that the inequality $r_{\mathrm{int}} > d/2 + r_\mathrm{hor}$ is always satisfied.
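These restrictions amount to simple arithmetic; a small helper (our own, purely illustrative) makes the admissible inner gluing radius explicit:

```python
def horizon_radius(m, d):
    """Coordinate radius of each Brill-Lindquist horizon: r_hor = m / (2 + m/d)."""
    return m / (2.0 + m / d)

def admissible_r_int(m, d):
    """Lower bound on the inner gluing radius, r_int > d/2 + r_hor, valid in the
    regime m/d <~ 0.64 where the two horizons are disjoint and no common outer
    horizon has formed."""
    if m / d > 0.64:
        raise ValueError("m/d too large: horizons may intersect or a common horizon may form")
    return d / 2.0 + horizon_radius(m, d)

bound = admissible_r_int(0.5, 1.0)   # r_hor = 0.2, so r_int must exceed 0.7
```

For $m = 0.5$, $d = 1$ this gives $r_\mathrm{hor} = 0.2$ and hence $r_{\mathrm{int}} > 0.7$.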
We intend to glue a Schwarzschildean end to the Brill-Lindquist data residing in the interior of our construction. Thus, in the exterior ($r \geq r_{\mathrm{ext}}$) of the gluing annulus we consider the usual spherically symmetric Schwarzschild data, which when expressed in isotropic coordinates can be written in the following conformally flat form, $$\label{Schw_metric}
g_{\textrm{\tiny Schw}} = \left(1 + \frac{M}{2|\vec{r}|} \right)^4 \delta.$$ By construction, the mass $M$ is identical with the ADM mass of the entire glued initial data.
The above two data sets and will be glued together using a Brill wave. This choice follows naturally from the axisymmetric nature of the Brill-Lindquist data considered in the interior of the construction. Brill waves [@Brill:1959] are the most general axisymmetric vacuum spacetimes with a hypersurface-orthogonal Killing vector. In spherical coordinates, the spatial metric at a moment of time symmetry is given by the Weyl-type line element $$\label{Brill_metric}
g_{\textrm{\tiny Brill}} = \psi^4 \left(e^{2\,q} (dr^2 + r^2 d\theta^2) + r^2 \sin^2\theta\, d\phi^2\right).$$ The function $q(r, \theta)$ will be the unknown of our construction. It must satisfy the boundary conditions $$\label{bound_cond_theta}
\begin{aligned}
q = 0 \qquad \mathrm{for} \,\,\, \theta = 0,\, \pi, \\
\frac{\partial q}{\partial \theta} = 0 \qquad \mathrm{for} \,\,\, \theta = 0,\, \pi.
\end{aligned}$$ The latter condition follows from the fact that $q$ is an even function of $\theta$. To justify the former, one has first to write the metric in Cartesian coordinates and inspect the behaviour of its metric coefficients on the z-axis; then the vanishing of $q$ along the z-axis follows as a necessary regularity condition that guarantees the absence of any conical singularities on it . The conformal factor $\psi(r,\theta)$ introduced above must be positive definite everywhere and must satisfy the asymptotic condition $\displaystyle \lim_{r \to \infty} \psi = 1$ at space-like infinity.
In summary, we want to construct a spacetime that is Brill-Lindquist in the interior $r \leq r_{\mathrm{int}}$, is of general Brill wave form on the intermediate gluing annulus $r_{\mathrm{int}} \leq r \leq r_{\mathrm{ext}}$, and is Schwarzschild in the exterior $r \geq r_{\mathrm{ext}}$. In addition, all the transitions between the different regions must be smooth.
The recipe {#sec:recipe}
----------
The novelty of Giulini’s and Holzegel’s construction lies in the way they incorporated Corvino’s original idea [@Corv:2000] solely into the definition of the conformal factor $\psi$, i.e. the metric on the entire three-dimensional time-symmetric slice is given by the Brill wave metric with $$\label{conf_factor}
\psi = \left(1 + \frac{m}{2|\vec{r} - \vec{c}|} + \frac{m}{2|\vec{r} + \vec{c}|} \right) \beta(r,\theta)
+(1 - \beta(r,\theta)) \left(1 + \frac{M}{2|\vec{r}|} \right).$$ Here $\beta(r,\theta)$ is the so-called gluing function, which, apart from being smooth, has the following properties: $$\label{beta_function}
\beta(r,\theta) =
\left\{
\begin{array}{l l}
1, & \quad r \leq r_{\mathrm{int}} ,\\
0, & \quad r \geq r_{\mathrm{ext}} ,
\end{array}
\right.$$ and all its $r$-derivatives must vanish at $r = r_\mathrm{int}$ and $r = r_\mathrm{ext}$. The precise form of the gluing function that is going to be used in the present work is left for Sec. \[sec:numer\_scheme\].
Let us see now how the gluing construction described in Sec. \[sec:basics\] can be realised by the choice of the conformal factor. Notice that the first and second term in are of Brill-Lindquist and Schwarzschildean character, respectively. In the interior $r \leq r_{\mathrm{int}}$ the gluing function equals unity, $\beta = 1$, therefore the second term in vanishes. Thus, the conformal factor $\psi$ consists now only of its Brill-Lindquist part; inserting it into the Brill wave metric and enforcing $q$ to vanish in the interior region, the Brill wave coincides exactly with the Brill-Lindquist data . In a similar manner in the exterior $r \geq r_{\mathrm{ext}}$ only the Schwarzschildean part of $\psi$ survives, as $\beta = 0$ there. Again inserting the resulting conformal factor in and setting $q = 0$ also in the exterior region, the Brill wave coincides with the Schwarzschildean data . In the intermediate region $r_{\mathrm{int}} \leq r \leq r_{\mathrm{ext}}$ the conformal factor $\psi$, and consequently the function $q$ in , have a more complicated form.
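To make this limiting behaviour concrete, here is a minimal numpy sketch of $\beta$ and $\psi$. The paper leaves the precise form of the gluing function to its numerics section, so the particular $C^\infty$ bump-function construction below is our own assumption; only the limiting values $\beta = 1$, $\beta = 0$ and the vanishing of all derivatives at the interfaces are taken from the text:

```python
import numpy as np

def _f(x):
    """exp(-1/x) for x > 0, extended by 0 for x <= 0; all derivatives vanish at 0."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    pos = x > 0
    out[pos] = np.exp(-1.0 / x[pos])
    return out

def beta(r, r_int, r_ext):
    """A C-infinity gluing function: 1 for r <= r_int, 0 for r >= r_ext, with all
    r-derivatives vanishing at both interfaces (standard bump-function form; the
    paper's actual choice may differ)."""
    t = np.clip((np.asarray(r, dtype=float) - r_int) / (r_ext - r_int), 0.0, 1.0)
    return _f(1.0 - t) / (_f(1.0 - t) + _f(t))

def psi(r, theta, m, d, M, r_int, r_ext):
    """Glued conformal factor: Brill-Lindquist part weighted by beta plus
    Schwarzschild part weighted by (1 - beta)."""
    rho, z = r * np.sin(theta), r * np.cos(theta)
    r1 = np.sqrt(rho**2 + (z - d / 2)**2)
    r2 = np.sqrt(rho**2 + (z + d / 2)**2)
    b = beta(r, r_int, r_ext)
    return (1 + m / (2 * r1) + m / (2 * r2)) * b + (1 - b) * (1 + M / (2 * r))
```

By construction $\psi$ reduces to the Brill-Lindquist factor for $r \leq r_{\mathrm{int}}$ and to the Schwarzschild factor for $r \geq r_{\mathrm{ext}}$.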
The function $q(r,\theta)$ in the gluing region will be determined by Einstein’s equations. In addition to the boundary conditions on the $\textrm{z}$-axis, smoothness requires that $q$ and all its radial derivatives vanish at the boundaries of the gluing annulus: $$\label{bound_cond_r}
q = 0 \quad \mathrm{and} \quad \frac{\partial^n \! q}{\partial r^n} = 0 \qquad \mathrm{at} \quad
r = r_{\mathrm{int}}, r_{\mathrm{ext}},$$ for all $n \in \mathbb{N}$. The boundary conditions that $q$ must satisfy are summarised in Fig. \[fig:bound\_cond\].
![The boundary conditions. The thick lines indicate the loci where boundary conditions on the function $q$ must be imposed. On the z-axis the conditions related to the axisymmetry of our construction must be satisfied while on the boundaries of the gluing annulus the conditions must be implemented.[]{data-label="fig:bound_cond"}](figure_4){height="7cm"}
Mathematical formulation {#sec:math_formulation}
-----------------------
Having set up our gluing scheme in the previous sections, we now move on to Einstein’s equations. On the initial slice these reduce to the momentum and Hamiltonian constraints. The former is identically satisfied as our data are time-symmetric, so we are left only with the Hamiltonian constraint, which in the time-symmetric case reduces to the vanishing of the Ricci scalar of the Brill wave metric , i.e. $$R({g}_{\textrm{\tiny Brill}}) = 0.$$ Expanding the Ricci scalar in the above expression, the Hamiltonian constraint results in an inhomogeneous Poisson equation of the form $$\begin{aligned}
\label{Poisson_eq}
{}^{(2)} \Delta q &=& -4\, \frac{{}^{(3)}\Delta \psi}{\psi} \quad :\Leftrightarrow \nonumber\\
\frac{\partial^2 q}{\partial r^2} + \frac{1}{r^2} \frac{\partial^2 q}{\partial \theta^2} + \frac{1}{r}
\frac{\partial q}{\partial r} &=& -\frac{4}{\psi} \left( \frac{\partial^2 \psi}{\partial r^2} + \frac{1}{r^2} \frac{\partial^2 \psi}{\partial \theta^2} +
\frac{2}{r} \frac{\partial \psi}{\partial r} + \frac{\cot\theta}{r^2} \frac{\partial \psi}{\partial \theta} \right) =: f.\end{aligned}$$ According to our construction in Sec. \[sec:recipe\], the right-hand side of the above elliptic equation is specified by the form of $\psi$ that is defined by and . Since this is fixed *a priori*, we will consider the right-hand side of as an inhomogeneity and denote it by $f$. It should be noted that reduces to a homogeneous Poisson equation outside the gluing annulus as the constancy of $\beta$ enforces $f$ to vanish there. Summarising, our goal in the following will be to numerically solve the second-order linear PDE for $q(r,\theta)$ subject to the boundary conditions and .
Integrability condition {#sec:integrability}
-----------------------
At first sight it might seem that the choice of the two mass parameters $m$ and $M$ in the conformal factor is unconstrained. If this were true then nothing would prevent us from gluing a Minkowskian end to the Brill-Lindquist data in the interior! This would obviously violate the positive mass theorem . In fact Einstein’s equations constrain the choice of the masses. One way to see this is by employing the machinery developed by Brill [@Brill:1959] in order to prove that the ADM mass of time-symmetric, axisymmetric, vacuum gravitational waves is positive definite. It turns out that in our setting this result can be used as a condition to determine the relation between the masses involved in our construction.
Following Brill’s arguments in [@Brill:1959], we repeat here his original derivation adjusted to the details of our construction. Our starting point is the Poisson equation expressed in cylindrical coordinates $(\rho, \phi, z)$: $$\frac{\partial^2 q}{\partial \rho^2} + \frac{\partial^2 q}{\partial z^2} +
\frac{4}{\psi} \left( \frac{\partial^2 \psi}{\partial \rho^2} + \frac{\partial^2 \psi}{\partial z^2} +
\frac{1}{\rho} \frac{\partial \psi}{\partial \rho} \right) = 0,$$ which when expressed in terms of the three-dimensional flat Laplace operator $\nabla^2 = \rho^{-1} \partial_\rho + \partial^2_\rho + \rho^{-2} \partial^2_\phi + \partial^2_z$ in cylindrical coordinates takes the form $$4\, \frac{\nabla^2 \psi}{\psi} + \nabla^2 q - \frac{1}{\rho} \frac{\partial q}{\partial \rho} = 0.$$ Integrating over the interior $V$ of a large sphere $\Sigma$ of radius $R$ centred at the origin, one gets $$\label{Poisson_cyl}
4\int_V \left[ \nabla \cdot \left( \frac{\nabla \psi}{\psi} \right) + \left( \frac{\nabla \psi}{\psi} \right)^2 \right] dV +
\int_V \nabla^2 q\, dV - \int_V \frac{1}{\rho} \frac{\partial q}{\partial \rho}\, dV = 0,$$ where the gradient and the divergence in cylindrical coordinates read $\nabla = (\partial_\rho, \rho^{-1} \partial_\phi, \partial_z)$ and $\nabla \cdot = (\rho^{-1} + \partial_\rho, \rho^{-1} \partial_\phi, \partial_z) \cdot \,$, respectively. The integration of the last term reads $$\int_V \frac{1}{\rho} \frac{\partial q}{\partial \rho}\, \rho\, d\rho\, d\phi\, dz =
2\, \pi \int_{-R}^R \left[ q\left(\sqrt{R^2 - z^2}, z\right) - q(0, z) \right] dz =
- 2\, \pi \int^\pi_0 q(R, \theta)\, R\, \sin\theta\, d\theta,$$ where in the last step we used the first of the boundary conditions and expressed the remaining term in spherical coordinates. In the rest of the proof, the first two integrals in will also be expressed in spherical coordinates. Inserting the result of the above integration into and re-expressing the first and third term through the divergence theorem, one arrives at $$8\, \pi \int^\pi_0 \left. \frac{1}{\psi}\frac{\partial \psi}{\partial r}\right|_{r=R} R^2 \sin\theta\, d\theta +
4\int_V \left( \frac{\nabla \psi}{\psi} \right)^2 dV +
2\, \pi \int^\pi_0 \left[ \left. \frac{\partial q}{\partial r}\right|_{r=R} R^2 + q(R, \theta)\, R \right] \sin\theta\, d\theta = 0.
\label{Poisson2}$$ In the limit $R \rightarrow \infty$ the last term of the above expression vanishes because $q = 0$ for $r > r_\textrm{ext}$. In addition, according to , the conformal factor in the limit $R \rightarrow \infty$ behaves like $1 + \frac{M}{2\, R}$; thus the first term of the expression above reads $$8\, \pi \int^\pi_0 \left. \frac{1}{\psi}\frac{\partial \psi}{\partial r}\right|_{r=R} R^2 \sin\theta\, d\theta =
8\, \pi \int^\pi_0 \frac{-\frac{M}{2\, R^2}}{1 + \frac{M}{2\, R}} R^2 \sin\theta\, d\theta \overset{R \rightarrow \infty}{=}
- 8\, \pi\, M.$$ Taking into account the last two results, in the limit $R \rightarrow \infty$ reduces to $$- 2\, \pi\, M + \int_V \left( \frac{\nabla \psi}{\psi} \right)^2 dV = 0.$$ Finally, expanding the integrand and integrating over $\phi$ one arrives at Brill’s original expression for the ADM mass, $$\label{integr_cond}
M = \int^\pi_0 \int^\infty_0 \left[ \left( \frac{1}{\psi}\frac{\partial \psi}{\partial r} \right)^2 +
\left( \frac{1}{r\, \psi}\frac{\partial \psi}{\partial \theta} \right)^2 \right] r^2 \sin\theta\, dr\, d\theta,$$ which is obviously positive definite. It is interesting that this expression for the ADM mass depends only on the conformal factor. Recall that the ADM mass $M$ of our construction appears in the definition of the conformal factor and consequently is also present in the integrand above. Based on this observation, one can use as an integrability condition for the ADM mass: the integral on the right-hand side of , evaluated for a specific choice of $M$, must return the same value for the ADM mass.
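As a consistency check of this condition, note that for the pure Schwarzschild conformal factor $\psi = 1 + \frac{M}{2r}$ the $\theta$-derivative drops out and the integral can be done in closed form, returning exactly $M$. The following sketch verifies this numerically with SciPy's `quad`; the value of $M$ is illustrative and the snippet is not part of the construction itself:

```python
import numpy as np
from scipy.integrate import quad

M = 4.0  # ADM mass parameter (illustrative value)

def radial_integrand(r):
    # psi = 1 + M/(2r): the theta-derivative term vanishes, and the
    # theta integral of sin(theta) over [0, pi] contributes a factor 2
    psi = 1.0 + M / (2.0 * r)
    dpsi_dr = -M / (2.0 * r**2)
    return (dpsi_dr / psi) ** 2 * r**2

M_I = 2.0 * quad(radial_integrand, 0.0, np.inf)[0]
print(M_I)  # reproduces the ADM mass M
```

The same quadrature, with the numerically computed $\psi$ of the glued data, is what yields the integral value used in the integrability check.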
Numerical implementation of the gluing construction {#sec:numer_implement}
===================================================
In this section we present our numerical implementation of the gluing construction described in the previous section, together with some first numerical results.
Setting up the numerical scheme {#sec:numer_scheme}
-------------------------------
We choose to solve the Poisson equation numerically using pseudo-spectral methods. Accordingly, the unknown function $q(r, \theta)$ is approximated by a truncated series of suitable basis functions. We choose to expand the $r$-dependence of $q$ in Chebyshev polynomials $T_k$ and the $\theta$-dependence in a Fourier-cosine series for reasons (in addition to the ones presented in ) that will soon become apparent.
Our two-dimensional physical domain is given by $(r, \theta) \in [r_{\mathrm{int}}, r_{\mathrm{ext}}] \times [0, \pi]$. While the range of the angular coordinate $\theta$ is in accordance with the expansion in Fourier-cosine series, the range of the radial coordinate $r$ is not, as the Chebyshev polynomials are defined on the interval $[-1, 1]$. In order to map the original $r$-domain to $[-1, 1]$, we use the mapping $$x \mapsto r(x) := \frac{1}{2}(r_{\mathrm{ext}} - r_{\mathrm{int}})\, x + \frac{1}{2}(r_{\mathrm{ext}} + r_{\mathrm{int}}),$$ where $x$ takes values in the interval $x \in [-1, 1]$. Therefore, from now on, we have to think of the expressions , and as expressed in terms of this new linearly transformed radial coordinate $x$. Therefore, in the following our two-dimensional computational domain will be $D = [-1, 1] \times [0, \pi]$. A finite representation of $D$ is obtained by the introduction of equidistant collocation points in the $\theta$-direction and of non-equidistant Gauss-Lobatto collocation points in the radial direction, namely $$\theta_i = \frac{i\, \pi}{L} \qquad \mathrm{and} \qquad x_j = -\cos\left(\frac{j\, \pi}{K}\right)
\qquad \mathrm{with} \qquad i = 0, \ldots, L \quad \mathrm{and} \quad j = 0, \ldots, K,$$ where $K$ and $L$ denote the number of collocation points along the radial and $\theta$-direction, respectively.
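In code, the grid setup amounts to a few lines. A sketch with NumPy; the values of $r_{\mathrm{int}}$, $r_{\mathrm{ext}}$, $K$, $L$ are illustrative:

```python
import numpy as np

r_int, r_ext = 50.0, 100.0   # location of the gluing annulus (illustrative)
K, L = 40, 40                # number of radial / angular collocation points

# Gauss-Lobatto points x_j in [-1, 1] and equidistant angles theta_i in [0, pi]
x = -np.cos(np.arange(K + 1) * np.pi / K)
theta = np.arange(L + 1) * np.pi / L

def r_of_x(x):
    # linear map from the computational coordinate x to the physical radius r
    return 0.5 * (r_ext - r_int) * x + 0.5 * (r_ext + r_int)

print(r_of_x(x[0]), r_of_x(x[-1]))  # endpoints map to r_int and r_ext
```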
Let us now turn to the boundary conditions and . In fact this is by far the most involved part of our numerical implementation. In order to satisfy we make the following ansatz: $$\label{ansatz}
q(x, \theta) = B(x)\, \hat{q}(x, \theta),$$ where $\hat{q}$ is an arbitrary function of its arguments and $B(x)$ is a function of “bump” character on the gluing annulus, i.e. $B(x)$ and all its $x$-derivatives vanish on the boundaries of the gluing annulus. An example of a “bump” function with the above properties looks like $$\label{bump_function}
B(x) = \mathrm{sech} \left(\frac{b_1}{x - 1} + \frac{b_2}{x + 1} \right),$$ where $b_1, b_2$ are constants. The convergence of our numerical solutions crucially depends on the choice of these constants. It has been observed that the convergence properties of the produced numerical solutions are optimal when the constants $b_1, b_2$ take values $b_1, b_2 < 1$. In the following the choice $b_1 = b_2 = 10^{-2}$ will always be used. The second boundary condition in is satisfied if one expands the newly introduced function $\hat{q}(x, \theta)$ in the way described in the first paragraph of this section, namely $$\label{q_expansion}
\hat{q}(x, \theta) = \sum^K_{k=0}{\sum^{L}_{l=0}{a_{kl}\, T_k(x)\, \cos(l\, \theta)}},$$ where $K, L$ are as above and the constants $a_{kl}$ are the expansion coefficients of our series. In order to satisfy the remaining boundary condition, i.e. the first of , one can use the freedom inherent in the choice of the gluing function . Recall that the gluing function, apart from the specific conditions that it has to satisfy on the boundaries of the gluing annulus, can be freely specified otherwise. A possible ansatz is $$\label{ansatz_beta}
\beta(x, \theta) = \alpha(x) + \hat{\alpha}(x) B(x) \sin^2\theta,$$ where $$\alpha(x) = \frac{1}{2} \left(1 + \tanh \left(\frac{b_1}{x - 1} + \frac{b_2}{x + 1}\right)\right)$$ with $b_1 = b_2 = 10^{-2}$ as above, $B(x)$ is given by , and $\hat{\alpha}(x)$ is a so far arbitrary function that we choose in order to enforce the condition $q = 0$ on the z-axis. Notice that the function $\alpha(x)$ takes the values $1$ and $0$ on the internal $x=-1$ and external $x=1$ boundary of the gluing annulus, respectively, and all its spatial derivatives vanish there; thus, it satisfies all the criteria of . The inclusion of the “bump” function $B(x)$ in the ansatz guarantees that, independently of the choice of $\hat{\alpha}(x)$, the second term in and all its derivatives vanish identically on the boundaries. Therefore, the form of $\hat{\alpha}(x)$ influences the shape of $\beta(x, \theta)$ only in the interior of the gluing annulus. (It is noteworthy that with a $\theta$-independent ansatz, e.g. of the form $\beta(x) = \alpha(x) + \hat{\alpha}(x) B(x)$, it was not possible to satisfy the first boundary condition in and at the same time have a convergent numerical solution.) Now, as the roots of the map $$\label{alphahat_map}
\hat \alpha(x) \mapsto q(x, \theta \in \{0,\pi\})$$ are $(K+1)$-dimensional vectors (recall $K$ refers to the number of radial collocation points), we have to use a multidimensional secant (quasi-Newton) method to find them. (We chose to use a secant instead of a Newton method as the former is computationally less costly and faster.) The most effective and efficient method of this kind has proven [@numer_recipes2007] to be *Broyden’s method* [@Broyden1965]. Given an initial guess for $\hat \alpha(x)$, Broyden’s method tries to find iteratively the form of $\hat \alpha(x)$ that leads to a solution of satisfying the first boundary condition in to a given accuracy (here to the order of $\sim 10^{-14}$). In the following the roots of will be computed numerically using the implementation of Broyden’s method in the optimize sub-package of Python’s SciPy library.
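The root-finding step can be illustrated with SciPy's `broyden1`. In the actual code each residual evaluation requires the full spectral solve of , so the stand-in residual below is a purely hypothetical toy system of the same dimensionality:

```python
import numpy as np
from scipy.optimize import broyden1

K = 10  # number of radial collocation intervals (illustrative)

def residual(alpha_hat):
    # toy stand-in for the map alpha_hat -> q on the axis: a mildly
    # nonlinear (K+1)-dimensional system whose root we seek
    return alpha_hat + 0.1 * np.tanh(alpha_hat) - np.linspace(0.0, 1.0, K + 1)

# Broyden's quasi-Newton method iterates from an initial guess until the
# residual falls below the requested tolerance
alpha_hat = broyden1(residual, np.zeros(K + 1), f_tol=1e-10)
print(np.max(np.abs(residual(alpha_hat))))  # at or below the tolerance
```

In the production code the tolerance is tightened to the order of $10^{-14}$, as stated above.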
Summarising, by assuming that $q$ in is a multiple of a “bump” function $B$, the vanishing of $q$ and all its $x$-derivatives at $x = \pm 1$ is guaranteed. The expansion of $\hat{q}$ as a Fourier-cosine series sets $\partial_\theta \hat{q}$ to zero on the $z$-axis; consequently $\partial_\theta q$ also vanishes there as $\partial_\theta q = B\, \partial_\theta \hat{q}$. Finally, an appropriate choice of the function $\hat{\alpha}(x)$ in the ansatz can make $q$ vanish on the $z$-axis.
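The boundary behaviour of the bump function can be checked directly: with the symmetric choice $b_1 = b_2 = 10^{-2}$ its argument vanishes at $x = 0$, so $B(0) = 1$, while towards $x = \pm 1$ the argument diverges and $B$ falls off steeply. A quick numerical sketch:

```python
import numpy as np

b1 = b2 = 1e-2

def B(x):
    # sech of an argument that diverges at x = +-1, so B -> 0 there
    # together with all its x-derivatives
    return 1.0 / np.cosh(b1 / (x - 1.0) + b2 / (x + 1.0))

print(B(0.0))    # the symmetric choice b1 = b2 gives B(0) = 1
print(B(0.999))  # already tiny this close to the boundary
```

Multiplying any $\hat q$ by $B$ therefore enforces the vanishing of $q$ and its $x$-derivatives at $x = \pm 1$ automatically.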
The code has been written from scratch in Python.
Testing the code with an exact solution {#sec:exact_sol}
---------------------------------------
Before we start using our code to study numerically the Poisson equation , we will carry out—as one should always do—some numerical tests to check the performance of our code. For this a family of exact solutions will be used. The exact solutions will be computed in the following way. First, we choose a $q$ and compute analytically the outcome of the left-hand side of , then we equate the resulting expression with the inhomogeneity $f$. Now, having at hand the expression for $f$, one can solve numerically for $q$ and compare the outcome with the exact expression of $q$ chosen originally. This procedure will give us hints about the accuracy and the convergence properties of the code.
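Schematically, this procedure (often called the method of manufactured solutions) can be sketched with SymPy. Here the axisymmetric flat Laplacian from the beginning of this section serves as a stand-in for the full gluing operator, and the chosen $q$ is purely illustrative:

```python
import sympy as sp

rho, z = sp.symbols('rho z', positive=True)

# Step 1: pick an exact q and generate the matching inhomogeneity f by
# applying the differential operator analytically
q_exact = rho**2 * sp.sin(z)

def laplacian(q):
    # axisymmetric flat Laplacian in cylindrical coordinates
    return sp.diff(q, rho, 2) + sp.diff(q, rho) / rho + sp.diff(q, z, 2)

f = sp.simplify(laplacian(q_exact))
print(f)  # the manufactured right-hand side

# Step 2 (in the real test): solve the equation numerically with f as the
# right-hand side and compare the numerical solution against q_exact
```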
As exact solutions we will use the following family of functions, $$\label{q_exact}
q(x, \theta) = x^{\kappa/3} \left(x - 1\right)^{10} \left(x + 1\right)^{10} \sin(6\, \theta) B(x) \sin\theta,$$ where $\kappa$ is a non-negative integer and $B$ is the “bump” function . The main reason for choosing the above family of solutions is that it allows us to control, through the choice of $\kappa$, the differentiability, and consequently the smoothness, at $x = 0$. Obviously, if $\kappa$ is zero or a multiple of three, then the function $\hat q$ corresponding to is a polynomial and thus infinitely differentiable $\mathcal{C}^\infty$. For any other value of $\kappa$, is finitely differentiable $\mathcal{C}^j$. In the following, we will assume that $\kappa$ takes the values $\kappa = 0, 7, 19, 61$ and as a consequence the solution will be $\mathcal{C}^\infty, \mathcal{C}^2, \mathcal{C}^6, \mathcal{C}^{20}$ at $x = 0$, respectively. Our goal is not only to show that the numerical solutions converge to the exact ones, but also to observe the expected relation, see e.g. [@Trefethen:2000], between the convergence of the numerical solutions and the smoothness of the exact solution, i.e. the smoother the solution , the faster the convergence of the code.
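The effect of $\kappa$ can already be previewed in one dimension by Chebyshev-interpolating the radial factor of the above family. In the sketch below $x^{\kappa/3}$ is realised with the real cube root (an implementation choice not spelled out in the text), and the interpolation degree and sample grid are illustrative:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def radial_factor(x, kappa):
    # x^(kappa/3) via the real cube root, times (x - 1)^10 (x + 1)^10
    return np.cbrt(x) ** kappa * (x - 1.0) ** 10 * (x + 1.0) ** 10

xs = np.linspace(-1.0, 1.0, 2001)
errors = {}
for kappa in (0, 7):  # C-infinity versus C^2 at x = 0
    coeffs = C.chebinterpolate(lambda t: radial_factor(t, kappa), 60)
    errors[kappa] = np.max(np.abs(C.chebval(xs, coeffs) - radial_factor(xs, kappa)))

print(errors)  # kappa = 0 sits at roundoff level, kappa = 7 far above it
```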
Our findings are presented in Fig. \[fig:exact\]. Both graphs therein depict the $\log_{10}$ of the absolute value of the maximum error (in other words the $L^\infty$ norm) between the numerical and the corresponding exact solution for different numbers of grid points $N$, where here we have chosen $K = L =: N$. Fig. \[fig:exact\_a\] illustrates the case of smooth functions ($\kappa=0$). Here one observes the typical “step” behaviour of the convergence plots corresponding to polynomial functions [@Trefethen:2000]; this is because the Chebyshev polynomials form a complete basis for the polynomials, so that is represented *exactly* for $N>20$ (the error settles down to numerical roundoff $\sim 10^{-14}$). On the other hand, Fig. \[fig:exact\_b\] shows the case of finitely differentiable functions. It can be easily seen that in all the cases considered the numerical solutions converge to the exact ones, but with different speed. A detailed inspection of the individual plots shows that, as expected, the smoother the solution, the faster the convergence [@Trefethen:2000].
Numerical realisation of the gluing construction {#sec:numer_gluing_constr}
------------------------------------------------
### Results {#sec:results}
The results of the previous section constitute strong evidence that our code can successfully reproduce the exact solutions , and that its convergence behaviour is as expected. Thus, we are confident enough to proceed further in the numerical study of the gluing construction and look for general solutions of .
In order to do so, one has first to choose appropriately the free parameters entering the definition of the conformal factor and then to compute the inhomogeneity $f$ by evaluating the right-hand side of . Recall that according to its definition, the conformal factor depends on the mass $m$ of the individual Brill-Lindquist black holes, their mutual distance $d$, the mass $M$ of the exterior Schwarzschild region, the location of the gluing annulus $r_{\mathrm{int}}, r_{\mathrm{ext}}$, and the form of the gluing function . In the following, the ansatz will be used for the gluing function and the form of $\hat \alpha(x)$ entering its definition will be computed in accordance with the discussion of Sec. \[sec:numer\_scheme\]. Except for a couple of conditions that constrain their choice, the above parameters can be freely chosen. The first condition follows from the fact that the gluing annulus has to be placed away from any horizons of the Brill-Lindquist data; for this the inequality $r_{\mathrm{int}} > d/2 + r_\mathrm{hor}$ must always be satisfied—see Sec. \[sec:basics\] for the details. The second condition constrains the relation of the masses $m$ and $M$, as discussed in Sec. \[sec:integrability\].
Fig. \[fig:num\_solution\] shows several numerical solutions of the system , , for the following choice of the free parameters: $m = 2$, $d = 10$, $r_{\mathrm{ext}} = 2\, r_{\mathrm{int}}$, and the ADM mass $M$ has been chosen such that the integrability condition is satisfied (see Fig. \[fig:ADM\_increase\]). Starting from Fig. \[fig:num\_sol\_a\], the distance of the gluing annulus from the origin has been gradually increased from $r_{\mathrm{int}} = 50$ to $r_{\mathrm{int}} = 500$. As expected, the further away one places the gluing annulus, the smaller the numerically computed values of $q$ become. This behaviour follows naturally from the fact that the Brill-Lindquist data look more and more like Schwarzschild data the further away one goes from the origin; consequently, the Brill wave—essentially the function $q$—does not have to do “a lot of work” to glue the two sets of data together. Similar behaviour is observed when the distance of the annulus from the origin is kept fixed but its width is gradually increased. Now, the magnitude of $q$ gradually decreases as it has “more and more space” to perform the gluing between the two data sets.
The results of Fig. \[fig:num\_solution\] are the first evidence that the gluing constructions proposed in can be realised numerically. Whereas the analysis of applies only to the case when the gluing annulus is placed at large distances, our numerical findings here demonstrate that these results can be extended to smaller gluing radii.
At this point, it is worth checking what happens in the case that the distance between the two black holes is taken to be $d=0$ so that there is only a single black hole of mass $2m$ in the centre. One would expect that as long as the condition $M = 2m$ is satisfied, the function $q$ must vanish; for in this setting the Brill-Lindquist data are already in Schwarzschild form. It turns out that our code correctly reproduces the trivial solution for arbitrary position of the gluing annulus.
### Convergence analysis {#sec:con_analysis}
Let us turn now to the convergence analysis of our numerical solutions. In contrast to Sec. \[sec:exact\_sol\], here we do not have an exact solution to compare our numerical findings with. Thus, we have to follow a different approach to check the convergence of our numerical solutions. The usual way to proceed in such a situation is to study the decay of the expansion coefficients $a_{kl}$ in , see [@Boyd:2001]. The expansion coefficients $a_{kl}$ must gradually decay to zero for increasingly large indices in order for the series expansion to converge. Once $q$ has been computed numerically, the expansion coefficients can be readily evaluated by inverting .
Fig. \[fig:num\_convergence\] depicts the results of our convergence analysis for the numerical solution of Fig. \[fig:num\_sol\_b\]. Because of the two-dimensional nature of the series expansion , one has to choose along which direction to study $a_{kl}$. We chose here to study the convergence behaviour of the diagonal expansion coefficients $a_{NN}$ as they provide a good indication of the overall decay of $a_{kl}$. The fall-off behaviour of $|a_{NN}|$ is depicted in Fig. \[fig:num\_decay\] on a logarithmic scale; the observed approximately linear behaviour for $N < 20$ suggests an exponential decay to the roundoff plateau. To make this statement more quantitative, one has to study the ratio $-\log_{10}(|a_{NN}|)/N$ in the limit $N \rightarrow \infty$. Following [@Boyd:2001], if the limit $$\lim_{N \to \infty}\left(\frac{-\log_{10}(|a_{NN}|)}{N}\right)$$ exists and is positive, then the expansion coefficients converge to zero exponentially. In Fig. \[fig:num\_exp\_conv\] one clearly sees a tendency of the ratio $-\log_{10}(|a_{NN}|)/N$ to asymptote to a small positive number, which is a strong indication of exponential decay.
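The decay diagnostic itself is straightforward once the coefficients are in hand. The sketch below applies it to the 1D Chebyshev coefficients of an analytic test function (the diagonal coefficients $a_{NN}$ of the 2D expansion are treated in exactly the same way); the test function and degree are illustrative:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Chebyshev coefficients of an analytic test function decay exponentially
coeffs = C.chebinterpolate(np.exp, 30)

# decay diagnostic used in the text: -log10(|a_N|)/N settles at a positive
# value for exponential decay, while algebraic decay drives it to zero
N = 30
ratio = -np.log10(np.abs(coeffs[N])) / N
print(ratio)
```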
We conclude the present section with another indication that the numerical solutions produced in Sec. \[sec:results\] converge exponentially. In Fig. \[fig:num\_high\_resolution\], on a rectangular $N \times N/4$ grid (i.e. $N$ grid points along the radial and $N/4$ along the angular direction), we compare numerical solutions of different resolutions to the one with the highest resolution for the solution of Fig. \[fig:num\_sol\_b\]. Specifically, the numerical values of $q$ for each resolution are interpolated onto the same grid and compared with the solution of highest resolution there (here a $100\times 25$ grid). Finally, the $L^2$-norm of the absolute value of the error for each resolution has been plotted on a logarithmic scale, see Fig. \[fig:num\_high\_resolution\]. The curve falls off in an approximately linear fashion.
Behaviour of the ADM mass {#sec:reduction}
-------------------------
We will now investigate the dependence of the ADM mass on the details of the gluing construction. Namely, we examine if it is possible to choose the free parameters entering the definition of the conformal factor in such a way that the ADM mass $M$ can take values different from the sum of the two Brill-Lindquist black holes, i.e. $M \neq 2\, m$. The case $0 < M < 2\, m$ corresponds to a reduction of the ADM mass, while the case $M > 2\, m$ to an increase. In other words, we explore the possibility of gluing together the spacetimes and under the assumption that their asymptotic behaviour at space-like infinity (when considered separately) is different.
As already mentioned in Sec. \[sec:integrability\], the integrability condition can be used to study the dependence of the ADM mass on the details of the gluing construction. After choosing the free parameters entering and computing the form of $\hat \alpha(x)$ entering the definition of the gluing function in the way described in Sec. \[sec:numer\_scheme\], the integral will be computed numerically using the integrate sub-package of the Python SciPy library. The value of the integral computed in this way will be denoted by $M_I$ in contrast to the parameter $M$ chosen originally.
Depending on the choice of the free parameters, the right-hand side of , i.e. $M_I$, can take on values that do not necessarily agree with $M$. In this case the integrability condition would be violated, $\Delta M= M_I - M \neq 0$. Here, we will only be interested in the case that $M_I = M$ holds, corresponding to a true physical solution.
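Once $\Delta M(M)$ can be evaluated, locating a physically admissible ADM mass is a one-dimensional root-finding problem. In practice each evaluation requires a full spectral solve plus the integrability integral; the sketch below substitutes a purely hypothetical model for $\Delta M(M)$ with a single sign change, solely to illustrate the bracketing step:

```python
import numpy as np
from scipy.optimize import brentq

def delta_M(M):
    # hypothetical stand-in for Delta M(M) = M_I(M) - M near a crossing;
    # the real function is obtained from the spectral solver
    return 0.095 * np.exp(-(M - 4.0)) - (M - 4.0)

# bracket a sign change of Delta M and bisect down to the root
M_star = brentq(delta_M, 4.0, 5.0)
print(M_star)  # the admissible ADM mass of this toy model
```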
To exemplify the use of the condition , we will use as a test case the scenario that the distance $d$ between the black holes in the interior is taken to be zero. In this setting, there is a single black hole of mass $2 m$ in the centre to which we attempt to glue a Schwarzschildean end of ADM mass $M$. Fig. \[fig:ADM\_mass\_Schw\] depicts how the integrability condition constrains the possible choice of the masses $m, M$. Therein, we have plotted the difference $\Delta M = M_I - M$ between the integral and the originally chosen value $M$ of the ADM mass as a function of the ADM mass $M$. For the choice $m = 2$, $r_{\mathrm{int}} =100$ and $r_{\mathrm{ext}} = 2\, r_{\mathrm{int}}$ the curve crosses the $M$-axis, i.e. the integrability condition $\Delta M = 0$ is satisfied, at two distinct points: $M_1 = 4$ and $M_2 \approx 4.095$. The first crossing corresponds to the case that the two Schwarzschildean data sets are identical, $M_1 = 2 m$. Obviously, in this case the Brill wave responsible for the gluing must be trivial, i.e. $q = 0$, as was confirmed at the end of Sec. \[sec:results\]. The second crossing corresponds to a setting where the two Schwarzschildean data sets we attempt to glue together are different, $M_2 \neq 2 m$; the Brill wave performing the gluing is now non-trivial, i.e. $q \neq 0$. Therefore, the results of Fig. \[fig:ADM\_mass\_Schw\] entail that for the class of gluing functions we consider, the integrability condition allows us to glue a Schwarzschildean end of ADM mass $M_1 = 4$ or $M_2 \approx 4.095$ to the single black hole of mass $2 m$ residing in the centre. Any other combination of the masses would lead to non-physical solutions that violate Einstein’s equations.
Let us return now to the behaviour of the ADM mass for general separations $d$ of the two black holes. In order to check if the integrability condition allows for a reduction (increase) of the ADM mass, we will fix $m$ and study the dependence of the difference $\Delta M = M_I - M$ on the ADM mass $M$ for different locations of the gluing annulus. If the violation $\Delta M$ of the integrability condition has different signs for two different values of the ADM mass $M$, then according to the intermediate value theorem $\Delta M$ must vanish somewhere in between these two values of $M$. In Fig. \[fig:ADM\_behaviour1\], the free parameters were chosen to be $m = 2$, $d = 10$, $r_{\mathrm{ext}} = 2\, r_{\mathrm{int}}$, and the gluing annulus has been placed at $r_{\mathrm{int}} = 30$ or $100$. The curve for $r_{\mathrm{int}} = 100$ crosses the $M$-axis twice for values $M > 2m = 4$—for the first crossing this will be clarified in Fig. \[fig:ADM\_behaviour2\]—and hence the ADM mass is increased. For $r_{\mathrm{int}} = 30$ the curve does not cross the $M$-axis, indicating that, for the choice of the free parameters we are using, there are no physically admissible solutions of . The same behaviour is observed for any choice of $r_{\mathrm{int}} \lesssim 40$, implying that the gluing is not possible for these positions of the annulus. On the other hand, for $r_{\mathrm{int}} \gtrsim 40$ the curve always crosses the $M$-axis twice for $M > 4$.
To clarify this point further, we have plotted in Fig. \[fig:ADM\_behaviour2\] for the first crossing the difference $\Delta M$ as a function of the gluing radius $r_{\mathrm{int}}$ for *fixed* $M = 4$. (Here, we will concentrate on the behaviour of the ADM mass at the first crossing because if the first crossing happens for $M_1 > 4$ then certainly the second crossing will happen for $M_2 > M_1 > 4$.) Based on the results of Fig. \[fig:ADM\_behaviour1\], one can safely conclude that close to the first crossing $\Delta M$ decreases with $M$; therefore, if $\Delta M$ is positive for $M = 4$ then an appropriate increase of $M$ will cause $\Delta M$ to vanish—a setting that leads to an increase of the ADM mass of the glued solution. Fig. \[fig:ADM\_behaviour2\] provides strong evidence that the ADM mass is increased for any position of the gluing annulus (no matter how far out). For gluing radii larger than $r_\mathrm{int} = 3500$ the violation $\Delta M$ becomes of the same order of magnitude as the numerical error, i.e. $10^{-11}$, which indicates that it is not possible to draw any decisive conclusions about the behaviour of $\Delta M$ there. However, one expects that $\Delta M$ asymptotes to zero from positive values as the gluing annulus is progressively placed further out: in the limiting case that the gluing is performed at infinity, where the two spacetimes become indistinguishable, the Brill wave becomes trivial and $\Delta M$ vanishes.
Let us look a little more closely into the details of the increase of the ADM mass and try to determine it quantitatively. As already indicated by Fig. \[fig:ADM\_behaviour\], the increase is larger for smaller gluing radii $r_{\mathrm{int}}$. In Fig. \[fig:ADM\_increase\] the actual increase of the ADM mass, $M_I - 2m$, for different locations of the gluing annulus is presented. Notice that the amount of increase, $M_I - 2m$, reduces extremely fast to zero with increasing gluing radius: increasing the gluing radius from $r_{\mathrm{int}} = 50$ to $100$ results in a decrease of $M_I - 2m$ by two orders of magnitude.
It was mentioned above that the increase of the ADM mass can be attributed to the presence of the Brill wave responsible for the gluing. To further clarify this point, we will consider the integrability condition in the form $$\label{integr_cond_x}
M(\chi) = \int^\pi_0 \int^\chi_0 \left[ \left( \frac{1}{\psi}\frac{\partial \psi}{\partial r} \right)^2 +
\left( \frac{1}{r\, \psi}\frac{\partial \psi}{\partial \theta} \right)^2 \right] r^2 \sin\theta\, dr\, d\theta,$$ where the upper limit of the radial integration takes values in the interval $\chi \in [0, \infty)$. Obviously $M(0) = 0$, and in the limit $\chi \rightarrow \infty$ one obtains the total ADM mass of the gluing construction, $M(\chi \rightarrow \infty) = M$. In the case of pure Brill-Lindquist data, i.e. without any gluing, we have $M_{B-L}(\chi \rightarrow \infty) = 2\, m$. According to Fig. \[fig:ADM\_increase\], $M(\infty) - M_{B-L}(\infty)$ is always positive. In the interior $\chi \in [0, r_{\mathrm{int}}]$, the difference $M(\chi) - M_{B-L}(\chi)$ must be zero as in both cases the data there are Brill-Lindquist. Therefore, there must be a point where $M(\chi)$ departs from $M_{B-L}(\chi)$ to positive values. This behaviour is studied in Fig. \[fig:ADM\_contribution\], where the difference $M(\chi) - M_{B-L}(\chi)$ has been plotted as a function of $\chi$ for the choice $m = 2$, $M = 4$, $d = 10$, $r_{\mathrm{int}} = 50$, $r_{\mathrm{ext}} = 2\, r_{\mathrm{int}}$ corresponding to the numerical solution of Fig. \[fig:num\_sol\_a\]. It is apparent that the main contribution to the increase of the ADM mass comes from the region where the gluing takes place, i.e. $\chi \in [50, 100]$; in the interior $\chi < 50$ the difference $M(\chi) - M_{B-L}(\chi)$ vanishes as expected; in the exterior $\chi > 100$ the difference $M(\chi) - M_{B-L}(\chi)$ asymptotes to the positive value given in Fig. \[fig:ADM\_increase\]. Thus, it seems that indeed the Brill wave is responsible for the increase of the ADM mass.
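For the pure Schwarzschild conformal factor $\psi = 1 + \frac{M}{2r}$ the cumulative mass function can be evaluated in closed form, $M(\chi) = M\, \frac{2\chi}{2\chi + M}$, which rises monotonically from $0$ to $M$. A short numerical sketch of this benchmark case (the value of $M$ is illustrative):

```python
import numpy as np
from scipy.integrate import quad

M = 4.0

def integrand(r):
    # pure Schwarzschild psi = 1 + M/(2r); the theta integral of sin(theta)
    # has already been carried out and contributes the overall factor 2
    psi = 1.0 + M / (2.0 * r)
    return 2.0 * (-M / (2.0 * r**2) / psi) ** 2 * r**2

def M_of_chi(chi):
    return quad(integrand, 0.0, chi)[0]

for chi in (1.0, 10.0, 100.0, 1000.0):
    exact = M * 2 * chi / (2 * chi + M)
    print(chi, M_of_chi(chi), exact)  # monotone approach towards M
```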
We conclude with a brief discussion on the possibility of reducing the ADM mass. Our extensive numerical study of the solution space of , corresponding to the specific choice of the gluing function, suggests that reduction of the ADM mass is not possible. As already pointed out in Fig. \[fig:ADM\_contribution\], the key point in reducing the ADM mass is to find a way to reduce the contribution of the Brill wave to it. In Fig. \[fig:ADM\_behaviour2\] we tried to do so by increasing the gluing radius (i.e. placing the gluing annulus further and further out); it was shown that reduction of the ADM mass cannot be achieved in this way. Other possible ways to “weaken” the Brill wave are widening the gluing annulus and decreasing the distance between the black holes. In Fig. \[fig:ADM\_reduction\] the behaviour of the ADM mass is studied in a setup where the black holes are placed very close to each other and the gluing annulus is extremely wide. Specifically, we choose the mass of each one of the black holes to be $m=2$ and the distance between them $d=3.2$. For this choice the mass-to-distance ratio $m/d = 0.625$ just respects the condition $m/d \lesssim 0.64$, see Sec. \[sec:basics\], which prevents the appearance of a third outer horizon enclosing both black holes. The horizon radius of each black hole is $r_\mathrm{hor} = 0.761905$ and thus the gluing radius must always satisfy $r_\mathrm{int} \gtrsim 2.4$. We fix the mass parameter to be $M=4$. In this setting, we plot in Fig. \[fig:ADM\_reduction\] the difference $\Delta M$ between the integral value $M_I$ of the ADM mass and the given parameter $M$ as a function of the position of the inner boundary $r_\mathrm{int}$ of the gluing annulus for three different locations of the outer boundary: $r_\mathrm{ext} = 100, 300$ and $500$. (Recall that reduction or increase of the ADM mass is possible when $\Delta M < 0$ or $\Delta M > 0$, respectively.) 
Our findings indicate that reduction is not possible even in this extreme scenario. Although the increase of the ADM mass is smaller the further out we place the outer boundary, the behaviour of all curves remains qualitatively the same: the difference $\Delta M$ remains always positive and an initial decrease of $\Delta M$ is followed by an increase while moving the inner boundary towards the outer boundary. The latter behaviour follows naturally from the fact that moving the inner boundary towards the outer one narrows the gluing annulus, leaving less and less space for the Brill wave to perform the gluing and thus increasing its contribution to the ADM mass.
Discussion {#sec:discussion}
==========
The purpose of this paper was to demonstrate for the first time how Corvino’s gluing construction [@Corv:2000] can be implemented numerically in order to compute nontrivial Cauchy data that are Schwarzschild in a neighbourhood of space-like infinity.
Our numerical implementation is based on the analytical work by Giulini and Holzegel , who applied Corvino’s method to axisymmetric vacuum spacetimes. In their setting, spacetime is Brill-Lindquist out to some radius, is described by a general Brill wave along an intermediate gluing region, and is Schwarzschild outside this region. Einstein’s equations determine the equation to be solved numerically, namely the second-order linear PDE subject to the boundary conditions and . In order to obtain physically meaningful solutions, one has to constrain the choice of the two mass parameters $m$ and $M$ appearing in the definition of the conformal factor. It turns out that Einstein’s equations imply an integrability condition that can be used for this purpose. In addition, we make sure that the gluing region lies outside of any black hole horizons.
To solve numerically the elliptic equation describing the gluing construction, we chose to use pseudo-spectral methods. An extensive convergence analysis, both for an artificial exact solution (Sec. \[sec:exact\_sol\]) and for the actual gluing problem (Sec. \[sec:con\_analysis\]), demonstrates the accuracy and convergence of our numerical solutions. Our results confirm the behaviour that one would intuitively expect: the numerically computed values of $q$ decrease with increasing distance of the gluing annulus from the origin and increasing width, see Fig. \[fig:num\_solution\].
Giulini and Holzegel wondered whether it is possible to choose the gluing parameters in such a way that the ADM mass $M$ is smaller than $2m$, the sum of the two Brill-Lindquist black hole masses. By reducing the ADM mass, one might hope to reduce the amount of gravitational radiation that is known to be contained in the Brill-Lindquist data [@Sperhake2007]. Our findings in Sec. \[sec:reduction\] suggest that the presence of the Brill wave in the gluing region generically tends to increase the ADM mass. We have not been able to reduce the ADM mass even in the rather special setup where the black holes are placed extremely close to each other and the gluing region extends from close to the black hole horizons to a large distance, see Fig. \[fig:ADM\_reduction\]. It should be stressed though that there is a lot of freedom in the choice of the gluing function $\beta$. Here we tried only the ansatz . It could be that there exist gluing functions that lead to a reduction of the ADM mass, even though we think this is unlikely. So our results do not necessarily contradict the asymptotic analysis of .
We remark that there are other proposals for constructing Cauchy data extending to space-like infinity that are not based on Corvino’s gluing method. For example, Avila [@AvilaPhD] considered initial data that are only asymptotically static up to a given order at space-like infinity. It would be interesting to implement this approach numerically as well. Evolving such data to future null infinity is likely to be more complicated than in our approach, where spacetime is known *a priori* in a whole neighbourhood of space-like infinity.
Our ultimate goal is to compute an entire spacetime from the Cauchy data constructed using the methods described in this paper. As a first step, we will evolve our data to a first hyperboloidal surface reaching future null infinity; this can then be used as initial data for a hyperboloidal evolution code based on either the regular conformal field equations or the alternative approaches described in Sec. \[sec:intro\].
Acknowledgments
===============
We are grateful to Carla Cederbaum, Helmut Friedrich, Domenico Giulini, Gustav Holzegel and Martín Reiris for helpful discussions. This research is supported by grant RI 2246/2 from the German Research Foundation (DFG) and a Heisenberg Fellowship to O.R.
|
---
abstract: 'The stabilization of lasers to absolute frequency references is a fundamental requirement in several areas of atomic, molecular and optical physics. A range of techniques are available to produce a suitable reference onto which one can ‘lock’ the laser, many of which depend on the specific internal structure of the reference or are sensitive to laser intensity noise. We present a novel method using the frequency modulation of an acousto-optic modulator’s carrier (drive) signal to generate two spatially separated beams with a frequency difference of only a few MHz. These beams are used to probe a narrow absorption feature, and the difference in their detected signals leads to a dispersion-like feature suitable for wavelength stabilization of a diode laser. This simple and versatile method only requires a narrow absorption line and is therefore suitable for both atomic and cavity based stabilization schemes. To demonstrate the suitability of this method we lock an external cavity diode laser near the $^{85}\mathrm{Rb}\,5S_{1/2}\rightarrow5P_{3/2}, F=3\rightarrow F^{\prime}=4$ transition using sub-Doppler pump-probe spectroscopy and also demonstrate excellent agreement between the measured signal and a theoretical model.'
address: 'School of Physics and Astronomy, University of Southampton, Highfield, Southampton, SO17 1BJ, United Kingdom'
author:
- 'Matthew Aldous, Jonathan Woods, Andrei Dragomir, Ritayan Roy and Matt Himsworth'
bibliography:
- 'AOMref.bib'
title: 'Carrier frequency modulation of an acousto-optic modulator for laser stabilization'
---
Introduction {#intro}
============
Frequency stabilization of a laser to a known reference is a common requirement in a number of applications, such as atomic laser cooling, absorptive sensing and precision spectroscopy. The most stable frequency references are atomic transitions and numerous techniques exist to obtain a suitable ‘error signal’ which can be electronically fed back to the laser to correct for frequency drift. Effective methods include frequency modulation spectroscopy (FMS) [@bjorklund1979], dichroic atomic vapour laser lock (DAVLL) [@Corwin1998], polarization spectroscopy (PS) [@Wieman1976] and modulation transfer spectroscopy (MTS) [@McCarron2008], all of which can be used with sub-Doppler pump-probe methods to obtain very effective stabilization signals. In many cases one would prefer to avoid modulated sidebands on the laser spectrum and so there is interest in modulation-free spectroscopy, or techniques in which the modulation is confined to the spectroscopy system. The available techniques can be separated into two distinct methods: *phase-detection* spectroscopy (as used in FMS and MTS) which detects variation in the phase relationship between modulation sidebands on different sides of an absorption feature, and *frequency differential* spectroscopy (found in DAVLL and PS), where two absorption spectra separated (spatially, temporally, or both) with a frequency shift are subtracted to generate the error signal.
While both DAVLL and PS are very effective, they produce the differential frequency shift using the internal structure of the atoms under investigation, and this may not always be practical to access in a species where the electronic structure is not suitable or if one wishes to stabilize to a non-atomic reference. DAVLL achieves this by the Zeeman effect, and polarization spectroscopy through optical pumping; both measure the offset spectra via orthogonal polarization states, and balanced detection of the differential signals provides common-mode rejection, greatly reducing the effect of laser intensity noise on the spectra. An alternative spectroscopic method, presented here, uses the balanced detection between two spatially- and frequency-separated laser beams to provide a dispersion-shaped signal across an absorption feature. The beams are produced via the carrier modulation of an acousto-optic modulator’s (AOM) drive frequency and we demonstrate an example application of the method using sub-Doppler pump-probe spectroscopy of rubidium (Rb) vapour as a wavelength reference, but this versatile technique can be applied to any spectroscopic feature of appropriate width.
Carrier frequency modulation of an acousto-optic modulator {#theory}
==========================================================
In laser cooling experiments, it is common to use AOMs to switch trapping and optical-pumping beams on and off within short timescales, and to provide a tunable frequency offset. An AOM introduces an angular deviation of the optical path, capable of providing several deflected beams which are frequency shifted from the zeroth-order by harmonics of the AOM’s carrier frequency. Several schemes [@Zhang2009; @VanOoijen2004] have been proposed to use AOMs to produce dispersion-shaped spectroscopic features, all using the differential method between the various diffracted orders or involving multiple AOMs. It is understood that to obtain a well-resolved locking signal, the frequency difference between absorption spectra should be less than the width of the feature of interest. Additionally, the smaller the frequency difference, the steeper the error signal gradient and therefore the better-resolved the reference.
Commercial AOMs can be found with operating frequencies of several tens to hundreds of MHz, and for many applications this combination of parameters leads to a large capture range and good stability. The most demanding stabilization, however, requires locking to a very narrow absorption feature with a linewidth of a few MHz or below, and so the frequency offset between diffracted orders in the AOM system (equal to $n\times f_0$, where $n$ is the order index and $f_0$ is the AOM carrier frequency) is usually far too large. Typically AOMs are operated with a single drive frequency, but if this carrier is modulated, it provides sidebands on the diffracted beam with a bandwidth of a few tens of MHz. This method can be an economical alternative to Electro-Optical Modulation (EOM) for producing frequency sidebands and has been used to replace EOMs in Modulation Transfer Spectroscopy [@Negnevitsky2013]. This ‘carrier modulation’ is a simple method to generate spatially- and frequency-separated spectroscopic probe beams, which may be detected in a balanced manner and subtracted electronically to obtain the necessary error signal. For our application, the common-mode rejection and simplicity of the optical and electronic set-up are attractive properties of this apparatus.
![Diagram of diffraction through the AOM in a) normal operation with a drive frequency $f_0$ and b) with the drive signal modulated at $f_{mod}$.[]{data-label="diffraction-fig"}](1.pdf){width="80.00000%"}
The angle of divergence between the $0$th and $1$st diffracted order from an AOM driven with a carrier frequency $f_0$ is given by [@Donley2005]:
$\theta =\frac{f_0\lambda}{v_g}$,
where $\lambda$ is the wavelength of the incident beam and $v_g$ is the acoustic velocity in the AOM crystal. If the AOM is driven by two frequencies with a difference of 5MHz, the angular separation of the beams would be $\simeq$0.05$^\circ$ (for $\lambda=780\,$nm and using a TeO$_2$ crystal with $v_g=4200$m/s along the (110) plane [@Ohmachi1972]). While these diverging beams may be picked off and directed onto individual balanced detectors, a segmented photodiode may be more convenient when monitoring beams with small separations. For a photodiode with segments separated by a $200\,\mu$m gap, the optical path length from AOM to detector must be at least 230mm in order for more than half of each beam spot to fall on the correct segment. This length scale is generally acceptable for most experiments. The distinction between the carrier (drive) frequency and the sidebands produced by modulation of the $V_{\mathrm{tune}}$ signal is presented in Figure \[diffraction-fig\].
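The beam-separation geometry above can be sketched numerically (a minimal check using the parameters quoted in the text; the minimum path length here is for the beam *centres* only, since the finite spot size lengthens the practical figure toward the quoted 230 mm):

```python
import math

# Angular separation between two diffracted beams whose drive tones
# differ by delta_f, using theta = f * lambda / v_g (small-angle regime).
wavelength = 780e-9          # m, probe wavelength
v_g = 4200.0                 # m/s, acoustic velocity in TeO2 along (110)
delta_f = 5e6                # Hz, frequency difference between the two tones

delta_theta = delta_f * wavelength / v_g      # rad, angular separation
gap = 200e-6                                  # m, segmented-photodiode gap

# Path length at which the beam centres straddle the detector gap;
# the finite beam waist adds to this distance in practice.
min_path = gap / delta_theta                  # m
```

For the quoted numbers this gives an angular separation of about 0.05$^\circ$ and a path length of roughly 215 mm for the beam centres alone, consistent with the $\simeq$230 mm figure once the spot size is included.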
The signal strength for small separations would be equal to the difference in overlap of the beam widths, $\delta x$, on the detector. Assuming a Gaussian beam shape the signal as a function of frequency shift $\Delta$ would be proportional to:
$I_{min}(\Delta)=I_0\left(1-\exp\left[-\left(\frac{f_0\lambda\Delta}{2 v_g \delta x}\right)^2\right]\right)$,
where, $I_0$ is the maximum optical intensity. The upper bound in frequency would be defined by the AOM analog modulation bandwidth, the size of the detector or both. The former is a fundamental boundary and is dependent on the $1/e^2$ beam width within the AOM, $d$:
$I_{max}(\Delta)=I_0\exp\left[-\frac{1}{8}\left(\frac{\pi d \Delta}{v_g}\right)^2\right]$.
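The transit-time limit set by $I_{max}$ can be made concrete by solving the Gaussian for its half-power point, giving a $-3\,$dB modulation bandwidth of $(v_g/\pi d)\sqrt{8\ln 2}$. A short sketch, assuming an illustrative beam width of $100\,\mu$m (not a value from the text):

```python
import math

def i_max(delta, d, v_g=4200.0):
    """Transit-time roll-off of the modulated intensity, as in the text."""
    return math.exp(-((math.pi * d * delta / v_g) ** 2) / 8.0)

def bandwidth_3db(d, v_g=4200.0):
    """Frequency at which i_max falls to 1/2 (invert the Gaussian)."""
    return (v_g / (math.pi * d)) * math.sqrt(8.0 * math.log(2.0))

d = 100e-6               # m, assumed 1/e^2 beam width inside the AOM
bw = bandwidth_3db(d)    # Hz; ~31 MHz for these numbers
```

The inverse scaling with $d$ is why the beam is focused through the AOM: a tighter focus raises the usable modulation bandwidth.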
Experiment {#expt}
==========
The laser source is a $780\,$nm external cavity diode laser (ECDL) built following established designs [@Arnold1998; @Hawthorn2001], which we use for laser cooling of atomic rubidium; it must therefore be locked precisely, to better than $1\,$MHz, to a given electronic transition (specifically $^{85}$Rb D$_2 (5S_{1/2}\rightarrow5P_{3/2})$). Figure \[setup\] shows the layout of the spectroscopy system. An acousto-optic modulator (*Gooch & Housego* FS310-2F-SU4), with a center carrier frequency $f_0 = 310$MHz, is aligned in the Bragg regime such that only a single diffraction order is produced. Note that the choice of AOM carrier frequency is arbitrary here, and that similar results were also produced using an AOM operating at 80MHz, the only practical difference being the corresponding AOM modulation bandwidths.
![The spectroscopy apparatus. The beam from an external cavity diode laser is collimated, isolated and focused through a 310MHz acousto-optic modulator (AOM). This is driven by an amplified voltage controlled oscillator (VCO). The VCO is tuned with a square wave modulation combined with a DC offset using a bias-tee. The multiple diffracted beams from the AOM are passed through a retro-reflected Doppler-free spectroscopy system including a neutral-density (ND) filter and half- and quarter-waveplates ($\lambda/2$, $\lambda/4$). The retro-reflected signal can either be obtained using polarizing optics, or via back-reflection through the AOM and simplifying the apparatus. The beam shape is also monitored using a line scan CCD (*Thorlabs* LC1-USB). The final detection is achieved on a quadrant photodiode (QPD) after concentration of the sideband beams by a cylindrical lens (CYL).[]{data-label="setup"}](2.pdf){width="80.00000%"}
One may produce the necessary RF fields either by modulating the carrier or by using two distinct carrier frequencies (at $f_0\pm \Delta$). Both methods produce similar results, and all of the following data was obtained using the former method by simply modulating the tuning port of a voltage controlled oscillator (VCO) to produce the RF sidebands. We apply a square-wave RF waveform to the VCO (*Mini-Circuits* ZX95-330-S+) tuning port using a bias tee (*Mini-Circuits* ZX86-12G+). The VCO output is then amplified to 27$\,$dBm (*Mini-Circuits* TVA-11-422) and fed to the AOM. By modulating the tuning pin of the VCO, $\Delta$ is defined by the modulation *amplitude*, not the frequency $f_{mod}$, with a dependence of $\sim9$MHz/V for a square wave signal. The modulation frequency $f_{mod}$ need only lie within the bandwidth of the tuning port, and results in a modulation ‘noise’ in the detected signal which must be filtered out. The use of two distinct drive frequencies avoids this noise but requires a dedicated waveform generator to maintain the frequency difference between the components. The resulting pair of beams is then collimated and passed through a sub-Doppler pump-probe apparatus as shown in Figure \[setup\], and also directed onto a line-scan charge-coupled device (CCD, *Thorlabs* LC1-USB). For simplicity we use the retro-reflected pump-probe configuration with the probe beam picked off with polarization optics.
The two beams are detected using a quadrant photodiode (QPD, *Centronic* QD7-5T), with the two quadrants on each side summed together individually before subtraction of the pairs to generate the differential signal. We focus the beams onto the detector using a cylindrical lens, oriented to increase optical capture on the sensing region without the reduction of spot separation associated with lateral focusing. The QPD sections are individually biased, and the outputs of the two horizontal sections are subtracted using an instrumentation operational amplifier with 20dB gain before passing through a 100kHz low-pass filter, mandatory in order to eliminate systematic noise introduced by the VCO tuning pin modulation frequency. As the modulation frequency has little effect on the spectra, one can choose a combination of filter and modulation frequency to suit the stabilization circuit bandwidth.
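The balanced subtraction at the heart of the scheme can be sketched with a toy model: subtracting two copies of a Lorentzian absorption feature offset by $\pm\Delta/2$ yields the dispersion-shaped error signal, and scanning $\Delta$ locates the separation giving the steepest central slope. The 10 MHz linewidth is an assumed illustrative value, and this toy lineshape is not the full theoretical model of [@Himsworth2010]:

```python
import numpy as np

GAMMA = 10.0  # MHz, assumed sub-Doppler feature linewidth (FWHM)

def lorentzian(delta):
    return 1.0 / (1.0 + (2.0 * delta / GAMMA) ** 2)

def error_signal(delta, sep):
    # Balanced subtraction of the two frequency-shifted probe spectra.
    return lorentzian(delta - sep / 2.0) - lorentzian(delta + sep / 2.0)

# Central slope of the error signal versus sideband separation.
seps = np.linspace(0.5, 30.0, 600)
h = 1e-4
slopes = np.abs(error_signal(h, seps) - error_signal(-h, seps)) / (2 * h)
best_sep = seps[np.argmax(slopes)]   # separation giving the steepest lock point
```

For this toy lineshape the maximum-slope separation is $\Gamma/\sqrt{3}\approx0.58\Gamma$, of order the feature linewidth; the 8-12 MHz optimum reported later reflects the additional ‘most linear across resonance’ criterion and the real lineshapes.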
Results
=======
![Absorption spectra (blue and green curves) from each half of the QPD and (red curve) the associated error signal derived from them.[]{data-label="spectra"}](3.pdf){width="60.00000%"}
Example sub-Doppler spectra detected by each half of the QPD are shown in Figure \[spectra\], recorded by zeroing the input of each channel to the instrumentation amplifier in turn. The horizontal axis has been scaled using the known frequencies of each absorption peak, fitted using a 4th-order polynomial to mitigate non-linearity of the piezoelectric tuning. We also plot the direct subtraction of the absorption spectra without using the instrumentation amplifier; the subtracted signal is very similar to an FMS or PS spectrum. The signal-to-noise ratio of the subtracted data is much higher because any intensity noise in the laser is common-mode to both beams and is thus subtracted by the high-bandwidth (2MHz) instrumentation op-amp. We find that the subtracted signal is remarkably insensitive to variations in the laser power, other than changing the overall signal strength around zero.
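The frequency-axis calibration step can be sketched as follows. The peak positions and the ‘true’ scan nonlinearity below are made up for illustration so the sketch is self-checking; in practice the tabulated $^{85}$Rb peak frequencies play the role of `f_known`:

```python
import numpy as np

# Hypothetical peak centres on the scan axis (e.g. piezo volts)...
x_peaks = np.array([0.30, 1.10, 1.90, 2.60, 3.40, 4.80])

# ...generated here from an assumed quartic scan nonlinearity;
# real data would use the known absorption-line frequencies instead.
true_map = np.poly1d([0.8, -2.0, 5.0, 30.0, -5.0])   # volts -> MHz (assumed)
f_known = true_map(x_peaks)

# Fit the 4th-order polynomial mapping scan position to frequency,
# then use it to linearize the horizontal axis of the spectra.
calibrate = np.poly1d(np.polyfit(x_peaks, f_known, 4))
residuals = f_known - calibrate(x_peaks)
```

Six peaks (three transitions plus three crossovers) overdetermine the five polynomial coefficients, so the fit residuals give a quick sanity check on the calibration.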
![The beam shape of the first order diffracted beam with different modulation modes, measured using a line-scan CCD. The upper dashed trace is produced using a sine-wave modulation of the VCO tuning voltage, the lower solid traces used a square-wave modulation. The square-wave spectra show more power in the sidebands compared to the sinusoidal modulation because the tuning voltage does not have a significant carrier component (310MHz at 0mm in this demonstration).[]{data-label="beams"}](4.pdf){width="60.00000%"}
We have explored varying the dither frequency, amplitude, and waveform shape. We find very little variation between sine and square waves, except at high frequencies, where sinusoidal modulation causes less distortion in the final signal due to the tuning pin bandwidth filtering the higher modes of the square wave. Modulating with a sinusoidal signal produced beam profiles with an inferior resolution, as well as making the sideband frequency separation proportional to the r.m.s. amplitude, as shown in Figure \[beams\]. Within the frequency range from 200kHz to 5MHz we see negligible variation in the spectra for both modulation waveforms.
Figure \[voltdata\] shows the variation with sideband separation (proportional to modulation amplitude) from 500kHz to 20MHz, together with a prediction (shown in Figure \[voltmodel\]) using a theoretical model with no free parameters [@Himsworth2010]. The optimum lineshape, where the error signal is most linear across resonance, is found around 8-12MHz sideband separation. This matches the linewidth of the sub-Doppler absorption features used (which is slightly broader than the natural linewidth), as expected from the theoretical model. At lower separations the smaller differences between spectra significantly weaken the derived signal, while at higher separations, greater than the sub-Doppler linewidths, the different absorption and cross-over peaks begin to overlap.
To test the suitability of this technique for laser stabilization, a parallel DAVLL setup using the same laser passing through a different vapor cell was used to characterize the long-term drift of a laser locked using this technique [@Aldous2016]. The DAVLL signal, which in our case is only sensitive to the Doppler-broadened spectral features, was frequency shifted by a further AOM such that the zero in its error signal was situated very close to the center of the reference transition (a crossover resonance in the vicinity of $^{85}\mathrm{Rb}\,5S_{1/2}\rightarrow5P_{3/2}, F=3\rightarrow F^{\prime}=4$). This provided a diagnostic signal proportional to any drift, even if the system was far off-resonance. The error signal used was supplied to a proportional-integral-derivative (PID) controller which in turn fed back to the laser diode current and the piezo-mounted external grating. An overview of the spectra during a single laser sweep is shown in Figure \[fig:time-calibration\], which includes a SAS spectrum alongside the modulated AOM and DAVLL error signals.
The drift of the ECDL system was measured over 25 minutes in both free-running and locked modes of operation, as shown in Figure \[fig:locking-comparison\]. The maximum drift in the DAVLL signal indicates the free-running laser naturally drifts on the order of $13\pm3$MHz during the 25min recorded period ($\simeq50$MHz per hour), which is comparable to the stability of similar lasers tested in the literature [@Matsubara2005]. Once the laser is locked there is no significant drift, with an r.m.s. frequency variation of 0.66MHz, which is approximately equal to the laser linewidth [@himsworth2009coherent]. The lock remains remarkably secure, even within a noisy laboratory environment and with vibration of the optical breadboard.
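The feedback loop just described can be caricatured in a few lines. The gains, update rate and linearized error-signal slope below are all assumptions for illustration; only the $\simeq$50 MHz/hour free-running drift figure comes from the measurement above:

```python
# Toy servo: a PI controller (derivative term omitted for clarity)
# steers the laser detuning against a slow linear drift.
kp, ki, dt = 0.5, 2.0, 1e-3          # assumed gains; 1 kHz update rate
drift_rate = 50.0 / 3600.0           # MHz/s, ~50 MHz per hour free-running
slope = 1.0                          # assumed error-signal slope (V/MHz)

detuning, integral = 2.0, 0.0        # start 2 MHz off resonance
for _ in range(20000):               # 20 s of closed-loop operation
    detuning += drift_rate * dt      # free-running drift
    err = -slope * detuning          # linearized dispersion-shaped error signal
    integral += err * dt
    detuning += kp * err + ki * integral   # actuator correction (piezo/current)
```

The integral term is what removes the steady drift: with proportional gain alone the lock would settle at a small constant offset rather than at line center.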
Application to frequency modulation spectroscopy
------------------------------------------------
![Variation of the demodulated error signal with the VCO modulated with a square wave at 3MHz as the sideband separation is swept from 9 to 27MHz.[]{data-label="f-mod-data"}](7.pdf){width="60.00000%"}
Although we have focused on differential methods to obtain the error signal, the use of a modulated tuning voltage of the VCO offers an interesting version of frequency modulation spectroscopy. In generating the two 1st-order beams we modulate the VCO with a waveform whose amplitude defines their frequency separation, while the frequency of the modulation is essentially a noise source which is filtered out. However, if a single detector is used and we demodulate at the same frequency with which the VCO tuning port is driven, then we find it is possible to produce an FMS error signal whose sideband frequency is decoupled from the demodulation frequency. Figure \[f-mod-data\] shows a selection of spectra with a constant modulation frequency but a variable sideband separation (via the tuning port modulation *amplitude*). The signal strength of FMS spectra typically reduces at higher modulation frequency with narrow absorption features [@silver1992frequency]; however, one requires the modulation frequency to be above the noise bandwidth of the laser (typically below 1 or 2MHz for an external cavity diode laser). Therefore it may be of interest to exploit this element of the technique if a specific sideband separation, independent of the demodulation frequency, is required.
Discussion
==========
We find the modulation frequency of the VCO tuning port to have little effect on the spectra in the range 200kHz to 5MHz: an operating range determined by the overlap of the bandwidths of the bias tee and the VCO tuning port. The use of square or sinusoidal modulation waveshapes also has little effect on the spectra other than changing the sideband separation; however, the use of square waves allows one to alter the duty cycle of the modulation and thus produce small frequency offsets from the absorption reference.
One weakness of the technique proposed here is its sensitivity to fluctuations in the pointing direction of the beam emerging from the AOM caused by pressure fluctuations in the laboratory, since the ratio of optical intensities falling on each detector segment may vary, thus producing a change in the DC offset. Beyond shielding the apparatus, the presence of the focusing lens in front of the detector, but at a distance less than the focal length, serves to mitigate this by reducing the beam spot size on the detector in comparison to the sensor area.
Since the VCO in our apparatus is not stabilized to the AOM’s center frequency, any slow drift results in a variation of power in each beam and thus a drift in the error signal’s DC offset. Therefore, for long term stabilization a precision voltage reference is necessary, or a second QPD can be used to monitor the power in each beam, feeding back to stabilize the VCO (in much the same manner as the spectroscopic signal is used to stabilize the laser).
The apparatus can be made more compact if one discards the beam-splitting cube and allows the retro-reflected probe beam to pass back through the AOM after which its undeflected component may be focused on the QPD.
Conclusion
==========
A new method for wavelength stabilization of a laser diode has been demonstrated that depends on the carrier modulation of the AOM drive frequency to provide spatially and spectrally separated sidebands. These are used to jointly probe an absorption feature, and the difference in the detected signals produces an error signal suitable for locking. A simple RF electronic system was also presented to produce the correct RF drive signal via the modulation of a VCO tuning port. An advantage of this method is its insensitivity to laser intensity noise, background electrical or magnetic fields, and optical polarization. It is therefore suitable for both atomic and cavity wavelength references, especially where narrow absorption features are required for the highest precision. The technique was used to lock an external cavity diode laser to a sub-Doppler absorption line in rubidium and the measured stability, at one part in $10^9$, is suitable for cold-atom experiments.
Funding {#funding .unnumbered}
=======
This work was supported by funding from RAEng, EPSRC, and the UK Quantum Technology Hub for Sensors and Metrology under grant EP/M013294/1.
Acknowledgements {#acknowledgements .unnumbered}
================
We would like to thank Paul Martin and Sanja Barkovic for their help in building the apparatus, and Tim Freegarde for useful discussions and for the loan of certain pieces of equipment.
|
---
author:
- |
Yoshihisa Kitazawa\
High Energy Accelerator Research Organization (KEK)\
Tsukuba, Ibaraki 305-0801, Japan\
Department of Particle and Nuclear Physics, The Graduate University for Advanced Studies\
Tsukuba, Ibaraki 305-0801, Japan\
E-mail:
- |
Satoshi Nagaoka\
High Energy Accelerator Research Organization (KEK)\
Tsukuba, Ibaraki 305-0801, Japan\
E-mail:
title: 'Graviton Propagators on Fuzzy $G/H$'
---
Introduction \[s1\]
===================
Noncommutative (NC) gauge theory is realized [@CDS; @AIIKKT; @Li] by considering a NC background in matrix models [@IKKT; @BFSS]. It offers the promising possibility of containing gravity as a quantum correction through the UV/IR mixing effect [@MRS]. In string theory, some perturbative vacua are well known and the relations between them have been clarified, but there is a vast moduli space to be fixed, and we have insufficient information to predict which vacuum is selected nonperturbatively. The landscape is one of the major topics in recent developments in string theory [@Susskind]. On the other hand, it remains a very fascinating idea that our universe is uniquely selected through nonperturbative effects of string theory. To find such a mechanism, it is necessary to study quantum gravity from the string theory point of view.
Quantum gravity itself is very difficult to study, but in string theory there is a duality between open and closed strings; we can therefore analyze quantum gravity using open string modes. The AdS/CFT correspondence is a well-established example [@Maldacena]. In an ordinary gauge theory, however, it might not be easy to probe quantum gravity, since we do not keep the higher tower of open string degrees of freedom. NC gauge theories, on the other hand, may include such open string modes since they are essentially matrix models. In this sense, new effects of quantum gravity might be seen in the quantum corrections of NC gauge theory. What kind of phenomena are included in these effects? One possibility, which we discuss in this paper, is that four-dimensional quantum gravity is realized in four-dimensional NC gauge theory. Our scenario is similar to the brane world scenario [@RS], which explains the localization of gravity on a D-brane. We suggest that NC gauge theories provide a localization of gravity on D-branes. Our goal is to derive the $1/({\rm momentum})^2$ dependence of massless graviton propagators in NC gauge theories.
In section \[s21\], we briefly review open Wilson lines in NC gauge theories on $S^2\times S^2$. By considering the regularized space, we can work with a large but finite $N$ system, which serves as a gauge invariant regularization. In section \[s22\], the two-point function of the open Wilson lines which couple to the massless graviton mode is calculated. The tensor structure of the Wilson line correlators (WLC), which depends on the isometry of $S^2 \times S^2$, is constructed in section \[s23\]. In section \[s31\], we show that the essential part of the correlator, which we explain there in detail, does not depend on our choice of $G/H$. In section \[s32\], we calculate the two-point function of WLC on another homogeneous space, $CP^2$, which has higher symmetry than $S^2\times S^2$. In section \[s33\], we generalize our results to other dimensions, for example $S^2\times S^2\times S^2$. We conclude in section \[s4\] with discussions.
Wilson line correlators in noncommutative gauge theory \[s2\]
=============================================================
Noncommutative (NC) gauge theories on compact homogeneous spaces can be constructed from the IIB matrix model. They have been investigated in [@KTT2; @fuzS2; @fuzS2S2; @KTT1; @fuzS2S2S2; @fuzCP2]. By considering a compact homogeneous space, we can deal with a large but finite $N$ system, which enables us to investigate non-perturbative questions. It thus serves as a nonperturbative and gauge invariant regularization of NC gauge theory. The bosonic part of the action of the IIB matrix model is written as $$\begin{aligned}
S= - \frac{1}{4} {{\rm tr}}[A_\mu,A_\nu]^2 \ ,
\label{IIBac}\end{aligned}$$ where $A_\mu$ are $N \times N$ Hermitian matrices and $\mu$ and $\nu$ run over 0, $\cdots$, 9. The equation of motion is obtained as $$\begin{aligned}
[A_\mu,[A_\mu,A_\nu]]=0 \ .\end{aligned}$$
NC gauge theory is obtained by expanding matrices around the NC backgrounds. We will denote the NC gauge field $a_\mu$ around the background $p_\mu$ as $$\begin{aligned}
\label{f1}
A_\mu =f_{\alpha} (p_\mu+a_\mu) \ ,\end{aligned}$$ where $f_\alpha$ is a scale factor. When we consider the action (\[IIBac\]) with a Myers term [@Myers] as $$\begin{aligned}
\label{f2}
{i\over 3}f_{\mu\nu\rho} A_{\mu} [A_\nu,A_\rho] \ ,\end{aligned}$$ we can identify a scale factor $f$ in (\[f1\]) with a coefficient $f$ in (\[f2\]). In this sense, the index $\alpha$ labels the representation of a fuzzy homogeneous space [@Mathom]. Alternatively such a space may be realized as a quantum solution [@fuzS2S2]. Although supersymmetry is softly broken in either case, the leading behavior of the correlators is constrained by SUSY.
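The role of the Myers term can be checked explicitly. Spin-$l$ angular momentum matrices satisfy $\sum_i[p_i,[p_i,p_j]]=2p_j$ rather than zero, so the pure action (\[IIBac\]) does not admit the fuzzy-sphere background; the residual term can be cancelled by the cubic term (\[f2\]) with an appropriately tuned coefficient. A minimal numerical check of the commutator identity for spin $l=1$ (an illustrative sketch, not from the paper):

```python
import numpy as np

# Spin-1 (l=1) angular momentum matrices in the |1>, |0>, |-1> basis.
sq2 = np.sqrt(2.0)
Lp = np.array([[0, sq2, 0], [0, 0, sq2], [0, 0, 0]], dtype=complex)  # L_+
Lm = Lp.conj().T                                                     # L_-
Lx, Ly = (Lp + Lm) / 2, (Lp - Lm) / 2j
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
L = [Lx, Ly, Lz]

def comm(a, b):
    return a @ b - b @ a

# Sum_i [L_i, [L_i, L_j]]: the adjoint Casimir acting on L_j.
# For the fuzzy sphere this equals 2 L_j, not zero, so the background
# solves the equation of motion only once the Myers term is included.
box = [sum(comm(Li, comm(Li, Lj)) for Li in L) for Lj in L]
```

The same identity holds for any spin $l$, with the same eigenvalue 2 coming from the spin-1 adjoint action.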
Feynman rule of noncommutative gauge theory on $S^2 \times S^2$ \[s21\]
-----------------------------------------------------------------------
Let us briefly describe the Feynman rule of NC gauge theory on $S^2 $ with $U(1)$ gauge group. We will generalize the rule to $S^2 \times S^2$ background with $U(n)$ gauge group later. We follow the notation in [@KTT2].
We expand matrices in terms of matrix spherical harmonics as $$\begin{aligned}
A^\mu = f_{S^2} (p^\mu + \sum_{jm} a_{jm}^{\mu}Y_{jm}) \ ,\end{aligned}$$ where the representation $Y_{jm}$ is adopted as $$\begin{aligned}
&(Y_{jm})_{ss'}=(-1)^{l-s}
\left(\begin{array}{ccc}
l&j&l \\
-s&m&s'
\end{array}\right)
\sqrt{2j+1} .\end{aligned}$$ $p_\mu$ can be identified with the angular momentum operator in the spin $l$ representation. The normalization is defined as $$\begin{aligned}
\mbox{Tr}\;Y_{j_1m_1}Y_{j_2m_2}=(-1)^{m_1}\delta_{j_1,j_2}
\delta_{m_1,-m_{2}} \ .\end{aligned}$$ The cubic vertex of matrix spherical harmonics is written as $$\begin{aligned}
\label{3pv}
\begin{picture}(0,0)
\put(5,0){\circle{20}}
\put(15,0){\line(1,0){10}}
\put(-15,5){\line(1,0){10}}
\put(-15,-5){\line(1,0){10}}
\put(-14,10){$Y_{j_2}$}
\put(-14,-16){$Y_{j_1}$}
\put(15,4){$Y_{j_3}$}
\end{picture} \hspace*{1cm}{\notag}={{\rm Tr}}[Y_{j_1m_1}Y_{j_2m_2}Y_{j_3m_3}]
&=(-1)^{2l}\sqrt{(2j_1+1)(2j_2+1)(2j_3+1)}\\
&\times\left(\begin{array}{ccc}
j_1&j_2&j_3\\
m_1&m_2&m_3
\end{array}\right)
\left\{\begin{array}{ccc}
j_1&j_2&j_3\\
l&l&l
\end{array}\right\} \ ,\end{aligned}$$ where we adopt the notation of $(3j)$ and $\{6j\}$ symbols in [@Edm]. The propagators of the NC gauge field $a_{jm}^\mu$ are read from the action as $$\begin{aligned}
\langle \; a^{\mu}_{j_1m_1}a^{\nu}_{j_2m_2}\;\rangle =
\frac{1}{f_{S^2}^{4}}\;\frac{(-1)^{m_1}}{j_1(j_1+1)}\;
\delta^{\mu\nu}\delta_{j_1j_2}\delta_{m_1,-m_{2}} \ .\end{aligned}$$
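The cubic vertex (\[3pv\]) is straightforward to evaluate with a computer-algebra package. A short sketch using SymPy's Wigner-symbol routines (the helper `cubic_vertex` is ours, not from the paper; the selection rules $m_1+m_2+m_3=0$ and the triangle inequality are enforced by the $3j$ symbol):

```python
from sympy import sqrt
from sympy.physics.wigner import wigner_3j, wigner_6j

def cubic_vertex(j1, m1, j2, m2, j3, m3, l):
    """Tr[Y_{j1 m1} Y_{j2 m2} Y_{j3 m3}] on fuzzy S^2 with spin-l background."""
    three_j = wigner_3j(j1, j2, j3, m1, m2, m3)
    if three_j == 0:
        # Selection rules (m-conservation, triangle inequality) give zero.
        return three_j
    prefactor = (-1) ** (2 * l) * sqrt((2*j1 + 1) * (2*j2 + 1) * (2*j3 + 1))
    return prefactor * three_j * wigner_6j(j1, j2, j3, l, l, l)
```

For example, `cubic_vertex(1, 0, 1, 0, 2, 0, 2)` is nonzero, while any configuration with $m_1+m_2+m_3\neq0$ or $j_3>j_1+j_2$ vanishes.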
Next, let us introduce Wilson lines in NC gauge theory [@IIKK] [^1] on $S^2$. They are constructed from traces of polynomials of the matrices as $$\begin{aligned}
y_{jm}^{\alpha_1,\alpha_2,\cdots,\alpha_j}TrA_{\alpha_1}A_{\alpha_2}\cdots A_{\alpha_j}
A_{i_1}A_{i_2}\cdots A_{i_k} \ .\end{aligned}$$ $\alpha=7,8,9$ denote the dimensions in which $S^2$ is embedded. $y_{jm}^{\alpha_1,\alpha_2,\cdots,\alpha_j}$ denotes a totally symmetric traceless tensor which corresponds to the spin $j$ representation of $SU(2)$. The background $p_\mu$ consists of angular momentum operators in the spin $l$ representation. In our expansion of $A_\mu$ around the background $p_\mu$, the leading term of the Wilson line is written as $$\begin{aligned}
f_{S^2}^{j+k}
y_{jm}^{\alpha_1,\alpha_2,\cdots,\alpha_j}
{{\rm Tr}}p_{\alpha_1}p_{\alpha_2}\cdots p_{\alpha_j}
{\cal O}_1 \cdots {\cal O}_k \ ,\end{aligned}$$ where ${\cal O}$ is a field around the background $p_\mu$. We define ${\cal Y}_{jm}$ as $$\begin{aligned}
{\cal Y}_{jm}&\equiv y_{jm}^{\alpha_1,\alpha_2,\cdots,\alpha_j}
p_{\alpha_1}p_{\alpha_2}\cdots p_{\alpha_j} \ .\end{aligned}$$ We will focus on the highest weight states of $SU(2)$, therefore, we also define ${\cal Y}_{j}$ as $$\begin{aligned}
y_{j,j}{{\rm Tr}}(p_{+})^{j} {\cal O}_1 \cdots {\cal O}_k
={{\rm Tr}}{\cal Y}_{j}{\cal O}_1 \cdots {\cal O}_k \ ,\end{aligned}$$ where $p_+\equiv p_7+ip_8$.
Using these Feynman rules, we find that there are planar and non-planar contributions in the two point function of ${{\rm Tr}}{\cal Y}_{j}{\cal O}_1 {\cal O}_2$ at leading order, $$\begin{aligned}
& {1\over {2l+1}}\langle {{\rm Tr}}{\cal Y}_{j}{\cal O}_1 {\cal O}_2
{{\rm Tr}}{\cal O}_2^\dagger {\cal O}_1^\dagger {\cal Y}_{j}^{\dagger}\rangle
\ {\notag}\\
& =
\hspace*{1.5cm}
\begin{picture}(0,0)
\put(-40,0){\line(1,0){5}}
\put(-25,0){\circle{20}}
\put(5,0){\circle{20}}
\put(15,0){\line(1,0){5}}
\put(-15,5){\line(1,0){10}}
\put(-15,-5){\line(1,0){10}}
\put(-14,8){${\cal O}_1$}
\put(-14,-15){${\cal O}_2$}
\put(-45,4){${\cal Y}$}
\end{picture}\hspace*{8mm}
=\langle j|{1\over P_1^2P_2^2}|j\rangle_{\rm p} \ , {\notag}\\
& {1\over {2l+1}}\langle{{\rm Tr}}{\cal Y}_{j}{\cal O}_1 {\cal O}_2
{{\rm Tr}}{\cal O}_1^\dagger {\cal O}_2^\dagger {\cal Y}_{j}^{\dagger}\rangle {\notag}\\
& =\hspace*{1.5cm}
\begin{picture}(0,0)
\put(-40,0){\line(1,0){5}}
\put(-25,0){\circle{20}}
\put(5,0){\circle{20}}
\put(-10,0){\line(1,0){5}}
\put(-15,5){\line(1,0){10}}
\put(-15,-5){\line(1,0){10}}
\put(-14,8){${\cal O}_1$}
\put(-14,-15){${\cal O}_2$}
\put(-45,4){${\cal Y}$}
\end{picture}\hspace*{8mm}
=\langle j|{1\over P_1^2P_2^2}|j\rangle_{\rm np} \ ,\end{aligned}$$ where $$\begin{aligned}
P_i^{\mu}{\cal Y}_{j_{i'}m_{i'}}&\equiv [p^{\mu},{\cal Y}_{j_{i'}m_{i'}}]
\delta_{ii'} \ .\end{aligned}$$ The planar and nonplanar parts of the correlation function on $S^2$ are given by $$\begin{aligned}
\langle j|X|j\rangle_p
&={1\over f_{S^2}^8 (2l+1)}\sum_{j_2,j_3,m_2,m_3}
\Psi_{123}^*X \Psi_{123} \ ,{\notag}\\
\langle j|X|j\rangle_{np}
&={1\over f_{S^2}^8 (2l+1)}\sum_{j_2,j_3,m_2,m_3}
\Psi_{132}^*X \Psi_{123} \ ,{\notag}\\
{\rm where} \hspace*{1cm}\Psi_{123}&\equiv
{{\rm Tr}}{\cal Y}_{j_3m_3}{\cal Y}_{j_2m_2}{\cal Y}_{j}.\end{aligned}$$
Now, let us formulate the Wilson line correlators on $S^2 \times S^2$ with $U(n)$ gauge group. The construction is a straightforward extension of the correlators on $S^2$. We expand matrices in terms of the tensor product of matrix spherical harmonics as $$\begin{aligned}
A_{\mu}=f_{\rm S^2\times S^2}
(p_{\mu}+\sum_{jmpq}a^{\mu}_{jmpq}Y_{jm}\otimes Y_{pq}) \ ,\end{aligned}$$ where $$\begin{aligned}
&p_{\mu}=j_{\mu}\otimes 1 ~~(\mu=4,5,6) \ ,{\notag}\\
&p_{\mu}=1\otimes \tilde{j}_{\mu} ~~(\mu=7,8,9) \ .\end{aligned}$$ Since only the $S^2 \times S^2$ manifold is considered in this section, we denote $f_{S^2 \times S^2}$ simply by $f$ from now on. The summations over $j$ and $p$ run up to $j=2l$ and $p=2l$ respectively. We consider NC gauge theory with $U(n)$ gauge group, so $N=n(2l+1)^2$. The propagators are written as $$\begin{aligned}
\langle\; a^{\mu}_{j_1m_1p_1q_1}a^{\nu}_{j_2m_2p_2q_2}\;\rangle&=&
\frac{1}{f^{4}}\;
\frac{(-1)^{m_1+q_1}}{j_1(j_1+1)+p_1(p_1+1)}\;
\delta^{\mu\nu}\delta_{j_1j_2}\delta_{p_1p_2}
\delta_{m_1-m_{2}}\delta_{q_1-q_{2}} \ .\end{aligned}$$ We define the normalization as $$\begin{aligned}
{{\rm Tr}}Y_{j_1m_1p_1q_1}Y_{j_2m_2p_2q_2}=n (-1)^{m_1}\delta_{j_1j_2}
\delta_{m_1-m_{2}} \delta_{p_1p_2} \delta_{q_1-q_2} \ .\end{aligned}$$ The planar and nonplanar part of the correlation function on $S^2 \times S^2$ are given by $$\begin{aligned}
\langle j,p |X|j,p \rangle_p
&={n^3\over f^8 N}\sum_{j_2,j_3,m_2,m_3} \sum_{p_2,p_3,q_2,q_3}
\Psi_{123}^*X \Psi_{123} \ ,{\notag}\\
\langle j,p|X|j,p\rangle_{np}
&={n^3\over f^8 N}\sum_{j_2,j_3,m_2,m_3} \sum_{p_2,p_3,q_2,q_3}
\Psi_{132}^*X \Psi_{123} \ ,{\notag}\\
{\rm where} \hspace*{1cm}\Psi_{123}&\equiv
{{\rm Tr}}{\cal Y}_{j_3m_3p_3q_3}{\cal Y}_{j_2m_2p_2q_2}{\cal Y}_{j,p} \ .\end{aligned}$$ The leading terms of the Wilson lines in the highest weight state representation of $SU(2) \times SU(2)$ are written as $$\begin{aligned}
y_{j,j}y_{p,p}{{\rm Tr}}(p_{+})^{j}({\tilde{p}_{+}})^p
{\cal O}_1 \cdots {\cal O}_k
={{\rm Tr}}{\cal Y}_{j,p}{\cal O}_1 \cdots {\cal O}_k \ .\end{aligned}$$ Finally, we define $\lambda \equiv \frac{n^2}{f^4 N}$, which is identified with the 't Hooft coupling.
Two point correlation function of massless graviton mode \[s22\]
----------------------------------------------------------------
The relation between straight Wilson line operators and fields in the massless supergravity multiplet is clarified in [@vertex][@ITU]. In this section, we investigate the two point correlators of a massless graviton mode. The vertex operators which couple to the graviton in the type IIB matrix model are written as $$\begin{aligned}
{{\rm Str}}\exp(ik\cdot A)
([A^{\rho},A^{\mu}][A^{\rho},A_{\nu}]
+{1\over 2}\bar{\psi}\Gamma^{(\nu}[A^{\mu )},\psi])
h_{\nu\mu}{\notag}\\
+ {1\over 2}{{\rm Str}}\exp(ik\cdot A) \bar{\psi}\Gamma^{\rho\beta(\nu}\psi
[A^{\mu )},A_{\beta}]\partial_{\rho}h_{\nu\mu} \ ,\end{aligned}$$ where the symbol ${{\rm Str}}$ implies that the ordering of the matrices is defined through the symmetric trace. $(\mu,\nu)$ implies that the Lorentz indices are symmetrized. In analogy with this operator, we may introduce the Wilson line operator in NC gauge theory on $S^2 \times S^2$ as $$\begin{aligned}
{{\rm Str}}{\cal Y}_{j, p }(A) ([A_\rho,A_\mu][A_\rho,A_\nu]
+{1\over 2}\bar{\psi}\Gamma^{(\nu}[A^{\mu )},\psi]) \ .\end{aligned}$$ The symmetric trace of the operators on compact space may be defined as $$\begin{aligned}
{{\rm Str}}(p_+)^{j} (\tilde{p}_+)^{p}{\cal O}_1 {\cal O}_2
&\equiv \frac{1}{j}
{{\rm Tr}}\sum_{j_1=0}^j
(p_+)^{j_1} (\tilde{p}_+)^{p_1}{\cal O}_1
(p_+)^{j-j_1} (\tilde{p}_+)^{p-p_1}
{\cal O}_2 \ , {\notag}\\
&{\rm where} \hspace*{5mm}
p_1 \sim \frac{p}{j}j_1 \ ,\end{aligned}$$ which is a natural extension of the symmetric trace in the flat noncommutative space. $p_1$ is the integer nearest to $j_1p/j$. Although supersymmetry is softly broken at the curvature scale of the manifold, this does not affect the leading behavior of the correlators in the large $N$ limit.
The leading term of the Wilson line is written as $$\begin{aligned}
{{\rm Str}}{\cal Y}_{j, p} ([p_\rho,a_\mu]-[p_\mu,a_\rho])
([p_\rho,a_\nu]-[p_\nu,a_\rho]) \equiv
{{\rm Str}}{\cal Y}_{j, p} f_{\rho\mu} f_{\rho\nu}
\ ,\end{aligned}$$ where we define $f_{\rho\mu}\equiv [p_\rho,a_\mu]-[p_\mu,a_\rho]$. Note that there are other terms in the expansion, for example, $$\begin{aligned}
{{\rm Str}}{\cal Y}_{j, p} [a_\rho,a_\mu] [a_\rho,a_\nu] \ .\end{aligned}$$ But these terms are of higher orders with respect to the ’t Hooft coupling $\lambda$. The two point function of the Wilson line operator which couples to graviton is written as $$\begin{aligned}
\label{tensor}
\langle\; {{\rm Str}}{\cal Y}_{j, p} f_{\rho\mu} f_{\rho\nu}
{{\rm Str}}f_{\rho'\mu'}^\dagger f_{\rho'\nu'}^\dagger {\cal Y}_{j, p}^\dagger
\;\rangle \ .\end{aligned}$$ First, we simplify the correlators in such a way that $$\begin{aligned}
f_{\rho\mu}\rightarrow f_1=[p_{\rho},a_{\mu}],
~~f_{\rho\nu}\rightarrow f_2=[p_{\rho},a_{\nu}] \ .\end{aligned}$$ This substitution is useful for understanding the essential features of the correlators. We will present the complete calculation of the correlators in section \[s23\].
In this way, we obtain $$\begin{aligned}
\label{strsum}
&\langle\; {{\rm Str}}{\cal Y}_{j, p} f_{1} f_{2}
{{\rm Str}}f_{2}^\dagger f_{1}^\dagger {\cal Y}_{j, p}^\dagger
\;\rangle {\notag}\\
&=y_j^2 y_p^2 \langle\; {{\rm Str}}(p_+)^{j} (\tilde{p}_+)^{p} f_1 f_2
{{\rm Str}}f_2^\dagger f_1^\dagger (p_-)^{j} (\tilde{p}_-)^{p}
\;\rangle {\notag}\\
&= \frac{y_j^2y_p^2}{j^2} \sum_{j_1=0}^j \sum_{j_2=0}^j
\langle\; {{\rm tr}}(p_+)^{j_1} (\tilde{p}_+)^{p_1}
f_1 (p_+)^{j-j_1} (\tilde{p}_+)^{p-p_1} f_2 {\notag}\\
&\hspace*{4cm}{{\rm tr}}f_2^\dagger (p_-)^{j-j_2} (\tilde{p}_-)^{p-p_2} f_1^\dagger
(p_-)^{j_2} (\tilde{p}_-)^{p_2}
\;\rangle {\notag}\\
&=\frac{y_j^2y_p^2}{j^2 }
\sum_{j_1=0}^j \sum_{j_2=0}^j
(y_{j_1} y_{p_1} y_{j_2} y_{p_2}
y_{j-j_1} y_{p-p_1} y_{j-j_2} y_{p-p_2})^{-1}
\hspace*{1.7cm}
\begin{picture}(0,0)
\put(-40,0){\line(1,0){5}}
\put(-5,0){\line(1,0){5}}
\put(20,0){\line(1,0){5}}
\put(55,0){\line(1,0){5}}
\put(-20,0){\circle{30}}
\put(40,0){\circle{30}}
\put(-5,5){\line(1,0){30}}
\put(-5,-5){\line(1,0){30}}
\put(-15,0){${\cal Y}$}
\put(5,10){$f_1$}
\put(-45,4){${\cal Y}$}
\put(5,-15){$f_2$}
\end{picture}\hspace*{8mm}
\hspace*{15mm} \ ,\end{aligned}$$ where we denote $$\begin{aligned}
\begin{picture}(0,0)
\put(-40,0){\line(1,0){5}}
\put(-5,0){\line(1,0){5}}
\put(20,0){\line(1,0){5}}
\put(55,0){\line(1,0){5}}
\put(-20,0){\circle{30}}
\put(40,0){\circle{30}}
\put(-5,5){\line(1,0){30}}
\put(-5,-5){\line(1,0){30}}
\put(-15,0){${\cal Y}$}
\put(5,10){$f_1$}
\put(-45,4){${\cal Y}$}
\put(5,-15){$f_2$}
\end{picture} \hspace*{2.2cm}
\equiv
\langle\; {{\rm tr}}{\cal Y}_{j_1, p_1} f_{1} {\cal Y}_{j-j_1,p-p_1} f_{2}
{{\rm tr}}f_{2}^\dagger {\cal Y}^\dagger_{j-j_2,p-p_2}
f_{1}^\dagger {\cal Y}_{j_2, p_2}
\;\rangle \ .\end{aligned}$$
Before proceeding further, let us show a property of the operator $f_i$, which helps us to perform the calculation: $$\begin{aligned}
\label{comp0}
\langle f_{1}f_{1}^\dagger \rangle &\sim \langle [p_\rho ,a_\mu][ a_\nu^\dagger,
p_\rho] \rangle {\notag}\\
&\sim \sum_{jmpq} {\cal Y} P^2 \frac{1}{P^2} ({\cal Y}^\dagger)
\delta_{\mu\nu}\sim
\sum_{jmpq} {\cal Y}({\cal Y}^\dagger)\delta_{\mu\nu} \ .\end{aligned}$$ Thus, we can use the completeness condition: $$\begin{aligned}
\label{comp1}
\sum_{jmpq} ({\cal Y})_{ab} ({\cal Y}^\dagger)_{cd}
=\delta_{ad} \delta_{bc} \ ,\end{aligned}$$ when we sum over the internal momenta. Here $a,b,c$ and $d$ are matrix indices. Note that this property does not depend on the choice of basis.
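As a numerical illustration (ours, not from the paper; it assumes numpy), the completeness condition (\[comp1\]) can be checked for an arbitrary orthonormal basis of $N\times N$ matrices under the trace inner product, confirming its basis independence:

```python
import numpy as np

# Sketch: verify sum_a (Y_a)_{ab} (Y_a^dag)_{cd} = delta_{ad} delta_{bc}
# for a randomly chosen orthonormal basis {Y_a}, tr(Y_a^dag Y_b) = delta_{ab}.
rng = np.random.default_rng(0)
N = 4

# Columns of a random unitary, reshaped into N x N matrices, form such a basis.
Z = rng.normal(size=(N**2, N**2)) + 1j * rng.normal(size=(N**2, N**2))
U, _ = np.linalg.qr(Z)
basis = [U[:, a].reshape(N, N) for a in range(N**2)]

# T[a,b,c,d] = sum over the basis of (Y)_{ab} (Y^dag)_{cd}
T = sum(np.einsum('ab,cd->abcd', Y, Y.conj().T) for Y in basis)
expected = np.einsum('ad,bc->abcd', np.eye(N), np.eye(N))
assert np.allclose(T, expected)
```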
Now, let us resume the calculation of (\[strsum\]). Substituting the results (\[comp0\]) and (\[comp1\]) into (\[strsum\]), we obtain $$\begin{aligned}
&\langle\; {{\rm Str}}{\cal Y}_{j, p} f_1 f_2
{{\rm Str}}f_2^\dagger f_1^\dagger {\cal Y}_{j, p}^\dagger
\;\rangle {\notag}\\
&=\frac{n^2 }{ j^2} \sum_{j_1=0}^j \sum_{j_2=0}^j
{y_{j}^{2}y_{p}^{2}\over
y_{j_1} y_{p_1}y_{j_2} y_{p_2}y_{j-j_1} y_{p-p_1}y_{j-j_2} y_{p-p_2}}
{\notag}\\
&\times {{\rm tr}}{\cal Y}_{j_1p_1}{\cal Y}_{j_2p_2}^{\dagger}
{{\rm tr}}{\cal Y}_{j-j_1,p-p_1}{\cal Y}_{j-j_2,p-p_2}^{\dagger}
{\notag}\\
&= \frac{n^2 }{ j^2} \sum_{j_1=0}^j \sum_{j_2=0}^j
{y_{j}^{2}y_{p}^{2}\over
y_{j_1} y_{p_1}y_{j_2} y_{p_2}y_{j-j_1} y_{p-p_1}y_{j-j_2} y_{p-p_2}}
\delta_{j_1-j_2,0} \delta_{p_1-p_2,0} {\notag}\\
&= \frac{n^2 }{ j^2} \sum_{j_1=0}^j
({y_jy_p\over y_{j_1} y_{p_1}y_{j-j_1} y_{p-p_1}})^{2} {\notag}\\
&\equiv \frac{n^2 }{ j^2} \sum_{j_1=0}^j
B_{j_1,j-j_1}^2 B_{p_1,p-p_1}^2 \ ,\end{aligned}$$ where we have introduced the separating function $B_{j_1,j-j_1}=\frac{y_j}{y_{j_1}y_{j-j_1}}$.
$B_{j_1,j-j_1}$ depends on the homogeneous space $G/H$ under consideration. Since $(p_+)^j=(p_+)^{j_1}(p_+)^{j-j_1}$ leads to ${\cal Y}_j=B_{j_1,j-j_1}{\cal Y}_{j_1} {\cal Y}_{j-j_1}$, $$\begin{aligned}
B^{-1}_{j_1,j-j_1}
&={{\rm tr}}{\cal Y}_j^\dagger {\cal Y}_{j_1}
{\cal Y}_{j-j_1} {\notag}\\
&=(-1)^{2l} \sqrt{(2j+1)(2j_1+1)(2(j-j_1)+1)} {\notag}\\
& \times\left(\begin{array}{ccc}
j&j_1&j-j_1\\
j&-j_1&-j+j_1
\end{array}\right)
\left\{\begin{array}{ccc}
j&j_1&j-j_1\\
l&l&l
\end{array}\right\} \ ,\end{aligned}$$ in the case of $S^2 \times S^2$. The $(3j)$ symbol is calculated as $$\begin{aligned}
\left(\begin{array}{ccc}
j&j_1&j-j_1\\
j&-j_1&-j+j_1
\end{array}\right)=\sqrt{\frac{1}{2j+1}} \ ,\end{aligned}$$ while the $\{6j\}$ symbol behaves as $$\begin{aligned}
\left\{\begin{array}{ccc}
j&j_1&j-j_1\\
l&l&l
\end{array}\right\}
\sim \sqrt{\frac{1}{2l}}
\left(\begin{array}{ccc}
j&j_1&j-j_1\\
0&0&0
\end{array}\right) \ ,\end{aligned}$$ for $l\gg 1$ [@VMK]. Using the Stirling formula $n! \sim \sqrt{2 \pi n}
n^n e^{-n}$, we obtain $$\begin{aligned}
\left(\begin{array}{ccc}
j&j_1&j-j_1\\
0&0&0
\end{array}\right)
&=(-1)^j \sqrt{\frac{(2j-2j_1)!(2j_1)!}{(2j+1)!}} \frac{j!}{(j-j_1)!j_1!}
{\notag}\\
&\sim \left(\frac{1}{4 \pi j (j-j_1)j_1}\right)^{1/4} \ .\end{aligned}$$ In this way, $B_{j_1,j-j_1}$ is obtained as $$\begin{aligned}
B_{j_1,j-j_1}^2\sim l \sqrt{\frac{ \pi j}{j_1(j-j_1)}}
\ ,\end{aligned}$$ for $j, j_1, j-j_1 \gg 1$.
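The Stirling estimate of the $(3j)$ symbol above can be checked against the exact value. The following sketch is ours, not from the paper; it assumes sympy's exact Wigner symbol implementation:

```python
from math import pi
from sympy.physics.wigner import wigner_3j

# Sketch: compare the exact (3j) symbol with zero magnetic quantum numbers
# against the Stirling estimate (4 pi j (j-j1) j1)^(-1/4) derived in the text.
j, j1 = 24, 10
exact = abs(float(wigner_3j(j, j1, j - j1, 0, 0, 0)))
approx = (4 * pi * j * (j - j1) * j1) ** (-0.25)

# Agreement is already at the percent level for moderate spins.
assert abs(exact - approx) / approx < 0.05
```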
When the momenta are equally shared, $j=p=K/2$, the Wilson line correlator (\[strsum\]) is found to be $$\begin{aligned}
&\langle\; {{\rm Str}}{\cal Y}_{j,p} f_1 f_2
{{\rm Str}}f_2^\dagger f_1^\dagger {\cal Y}_{j,p}^\dagger
\;\rangle {\notag}\\
&\sim N\frac{n \pi }{K^2} \log{K^2} \ .
\label{logk}\end{aligned}$$ Thus, we have obtained the $1/K^2$ dependence up to the $\log K^2$ factor. The correlators with $j\neq p$ do not exhibit $SO(4)$ symmetry. This undesirable feature may be overcome by considering a space with higher symmetry. In fact, we will find in section \[s32\] that there is neither a $\log$ factor nor a directional asymmetry on $CP^2$.
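The origin of the $\log K^2$ factor in (\[logk\]) is elementary: after the momentum-conserving delta, the sum over splittings is a harmonic sum. A quick check (ours, using only the Python standard library):

```python
from math import log

# Sketch: the log K^2 factor comes from the harmonic sum
#   sum_{j1=1}^{j-1} j/(j1*(j-j1)) = sum_{j1} (1/j1 + 1/(j-j1)) = 2*H_{j-1},
# which grows like 2*log(j), i.e. log(K^2) for j = K/2.
def internal_sum(j):
    return sum(j / (j1 * (j - j1)) for j1 in range(1, j))

EULER_GAMMA = 0.5772156649015329
j = 10_000
harmonic_estimate = 2 * (log(j) + EULER_GAMMA)
assert abs(internal_sum(j) - harmonic_estimate) < 1e-3
```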
Ward identity for Wilson line correlators and tensor structure \[s23\]
----------------------------------------------------------------------
In the preceding sub-section, we have found that the graviton two point function behaves like the propagator of a massless field ($1/K^2$). In this sub-section, we present the complete calculation including the fermionic contribution. We will show that the tensor structure of the Wilson line correlators is consistent with the Ward identity.
The two point function of (\[tensor\]) is written as $$\begin{aligned}
&\langle\; {{\rm Str}}{\cal Y}_{j, p} f_{\rho\mu} f_{\rho\nu}
{{\rm Str}}f_{\rho'\nu'}^\dagger f_{\rho'\mu'}^\dagger {\cal Y}_{j, p}^\dagger
\;\rangle {\notag}\\
& =\langle\; {{\rm Str}}{\cal Y}_{j, p} ([p_\rho,a_\mu]-[p_\mu,a_\rho])
([p_\rho,a_\nu]-[p_\nu,a_\rho]) {\notag}\\
& {{\rm Str}}([p_{\rho'}^\dagger,a_{\nu'}^\dagger]
-[p_{\nu'}^\dagger,a_{\rho'}^\dagger] )
([p_{\rho'}^\dagger,a_{\mu'}^\dagger] -
[p_{\mu'}^\dagger,a_{\rho'}^\dagger])
{\cal Y}_{j, p}^\dagger
\;\rangle \ ,\end{aligned}$$ where we focus on the leading terms in the 't Hooft coupling $\lambda$. The two propagators in this correlator carry almost the same angular momenta, since the external angular momentum is assumed to be very small compared to the internal angular momenta at the cut-off scale; this is because the correlator is quartically divergent by power counting. We therefore do not distinguish the two propagators and obtain the following expression: $$\begin{aligned}
\langle\; &{{\rm Str}}{\cal Y}_{j, p} ([p_\rho,a_\mu]-[p_\mu,a_\rho])
([p_\rho,a_\nu]-[p_\nu,a_\rho]) {\notag}\\
& {{\rm Str}}([p_{\rho'}^\dagger,a_{\nu'}^\dagger]
-[p_{\nu'}^\dagger,a_{\rho'}^\dagger] )
([p_{\rho'}^\dagger,a_{\mu'}^\dagger] -
[p_{\mu'}^\dagger,a_{\rho'}^\dagger])
{\cal Y}_{j, p}^\dagger
\;\rangle
{\notag}\\
= & \frac{1}{K^2} \sum_{K_1, K_2} \sum_{a,b}
B_{K_1,K-K_1}^2 B_{K_2,K-K_2}^2 {\notag}\\
{{\rm tr}}&{\cal Y}_1{\cal Y}_a{\cal Y}_{1'}{\cal Y}_b
(\frac{1}{P^2})^2 \Big(
2(d-2) P^\mu P^{\mu'} P^{\nu} P^{\nu'} {\notag}\\
+&P^2 (2 P^{\mu}P^{\nu} \delta_{\mu'\nu'}+
2 P^{\mu'}P^{\nu'} \delta_{\mu\nu}
- P^{\mu}P^{\mu'} \delta_{\nu\nu'}
- P^{\nu}P^{\nu'} \delta_{\mu\mu'}
-
P^{\mu}P^{\nu'} \delta_{\mu'\nu}-
P^{\mu'}P^{\nu} \delta_{\mu\nu'}
){\notag}\\
+&P^4 (\delta_{\mu\mu'}\delta_{\nu\nu'}+
\delta_{\mu\nu'}\delta_{\mu'\nu}) \Big)
{{\rm tr}}{\cal Y}_b^\dagger {\cal Y}_{2'}^\dagger {\cal Y}_a^\dagger {\cal Y}_2^\dagger \ ,
\label{tens1} \end{aligned}$$ where $d(=10)$ is the number of bosonic matrices. $K_1$ and $K_2$ specify the phase structure of the left and right sides of the symmetric trace. ${\cal Y}_{i(i')} (i=1,2)$ are related to ${\cal Y}_{j,p}$ as ${\cal Y}_{j,p}=B_{K_i,K-K_i} {\cal Y}_i {\cal Y}_{i'}$.
On $ S^2 \times S^2$, $\sum_{a,b} {{\rm tr}}{\cal Y}_1{\cal Y}_a {\cal Y}_{1'} {\cal Y}_b
{{\rm tr}}{\cal Y}_b^\dagger {\cal Y}_{2'}^\dagger {\cal Y}_a^\dagger {\cal Y}_2^\dagger
=\delta_{j_1-j_2,0} \delta_{p_1-p_2,0}$. We have evaluated the essential part of the correlators in the preceding sub-section as $$\begin{aligned}
\frac{1}{K^2}\sum_{K_1, K_2} \sum_{a,b} B_{K_1,K-K_1}^2 B_{K_2,K-K_2}^2
{{\rm tr}}{\cal Y}_1{\cal Y}_a {\cal Y}_{1'} {\cal Y}_b
{{\rm tr}}{\cal Y}_b^\dagger {\cal Y}_{2'}^\dagger {\cal Y}_a^\dagger {\cal Y}_2^\dagger
=\frac{1}{K^2} \sum_{K_1} B_{K_1,K-K_1}^4 \ .\end{aligned}$$ We will focus on the tensor structure of the correlators in this sub-section.
The leading contribution of the fermionic part of the Wilson line correlators is obtained as $$\begin{aligned}
&\langle\; {{\rm Str}}{\cal Y}_{j,p} \frac{1}{2} \bar{\psi} \Gamma^{(\nu}[p^{\mu)},\psi]
({{\rm Str}}{\cal Y}_{j,p} \frac{1}{2} \bar{\psi'} \Gamma^{(\nu'}[p^{\mu')},\psi'])^\dagger
\;\rangle {\notag}\\
&= \frac{1}{K^2}
\sum_{K_1,K_2} \sum_{a,b} B_{K_1,K-K_1}^2 B_{K_2,K-K_2}^2 {\notag}\\
&{{\rm tr}}{\cal Y}_1{\cal Y}_a{\cal Y}_{1'}{\cal Y}_b
(\frac{1}{P^2})^2\left(
-f P^\mu P^{\mu'} P^{\nu} P^{\nu'} \right. {\notag}\\
&\left. +\frac{f}{8} P^2 ( P^{\mu}P^{\mu'} \delta_{\nu\nu'}+
P^{\nu}P^{\nu'} \delta_{\mu\mu'}+
P^{\mu}P^{\nu'} \delta_{\mu'\nu}+
P^{\mu'}P^{\nu} \delta_{\mu\nu'}
) \right)
{{\rm tr}}{\cal Y}_b^\dagger {\cal Y}_{2'}^\dagger
{\cal Y}_a^\dagger {\cal Y}_2^\dagger \ ,\end{aligned}$$ where $f(=16)$ counts the fermionic degrees of freedom. The total amplitude is obtained as $$\begin{aligned}
A_{\rm tot}^{\mu\nu\mu'\nu'}=& \frac{1}{K^2}
\sum_{K_1,K_2} \sum_{a,b}
B_{K_1,K-K_1}^2 B_{K_2,K-K_2}^2 {\notag}\\
&{{\rm tr}}{\cal Y}_1{\cal Y}_a{\cal Y}_{1'}{\cal Y}_b
(\frac{1}{P^2})^2
\left(
(2d-4-f) P^\mu P^{\mu'} P^{\nu} P^{\nu'} \right. {\notag}\\
&-(1-\frac{f}{8}) P^2(P^{\mu}P^{\mu'} \delta_{\nu\nu'}
+ P^{\nu}P^{\nu'} \delta_{\mu\mu'}
+
P^{\mu}P^{\nu'} \delta_{\mu'\nu}+
P^{\mu'}P^{\nu} \delta_{\mu\nu'}
){\notag}\\
& \left.
+2 P^2 (P^{\mu}P^{\nu} \delta_{\mu'\nu'}+
P^{\mu'}P^{\nu'} \delta_{\mu\nu})
+P^4 (\delta_{\mu\mu'}\delta_{\nu\nu'}+
\delta_{\mu\nu'}\delta_{\mu'\nu}) \right)
{{\rm tr}}{\cal Y}_b^\dagger {\cal Y}_{2'}^\dagger
{\cal Y}_a^\dagger {\cal Y}_2^\dagger \ . \end{aligned}$$ In the supersymmetric case ($f=2(d-2)$), this may be simplified further: $$\begin{aligned}
A_{\rm tot}^{\mu\nu\mu'\nu'}
=&\frac{1}{K^2}
\sum_{K_1,K_2} \sum_{a,b} B_{K_1,K-K_1}^2 B_{K_2,K-K_2}^2{\notag}\\
&{{\rm tr}}{\cal Y}_1{\cal Y}_a{\cal Y}_{1'}{\cal Y}_b
\left(
\frac{2}{\tilde{d}} (\tilde{\delta}_{\mu\nu}\delta_{\mu'\nu'}
+\delta_{\mu\nu} \tilde{\delta}_{\mu'\nu'}) \right. {\notag}\\
& +\frac{\frac{d}{4}-\frac{3}{2}}{\tilde{d}}
(\tilde{\delta}_{\mu\mu'}\delta_{\nu\nu'}
+\tilde{\delta}_{\mu\nu'}\delta_{\mu'\nu}
+\delta_{\mu\mu'}\tilde{\delta}_{\nu\nu'}
+\delta_{\mu\nu'}\tilde{\delta}_{\mu'\nu}) {\notag}\\
&\left. +(\delta_{\mu\mu'}\delta_{\nu\nu'}
+\delta_{\mu\nu'}\delta_{\mu'\nu})
\right)
{{\rm tr}}{\cal Y}_b^\dagger {\cal Y}_{2'}^\dagger
{\cal Y}_a^\dagger {\cal Y}_2^\dagger \ ,
\label{tot-am}\end{aligned}$$ where we have replaced $$\begin{aligned}
P_{\mu'}P_{\nu'}\to \frac{P^2}{\tilde{d}} \tilde{\delta}_{\mu'\nu'} \ .\end{aligned}$$ $\tilde{d}$ denotes the dimension of the isometry group $G$. $\tilde{\delta}_{\mu'\nu'}$ is a Kronecker delta in the $\tilde{d}$ dimensional subspace.
Now, let us consider the tensor structure of graviton correlators on $ S^2 \times S^2 =SU(2)\times SU(2)/U(1)\times U(1)$. The dimension of $G=SU(2)\times SU(2)$ is $\tilde{d}=6$, as $S^2 \times S^2$ is embedded in a 6 dimensional space. The total amplitude is obtained from (\[tot-am\]) as $$\begin{aligned}
& A_{\rm tot}^{\mu\nu\mu'\nu'}
=\frac{1}{K^2} \sum_{K_1} B_{K_1,K-K_1}^4
\left(
\frac{1}{3} (\tilde{\delta}_{\mu\nu}\delta_{\mu'\nu'}
+\delta_{\mu\nu} \tilde{\delta}_{\mu'\nu'}) \right. {\notag}\\
& +\frac{1}{6}
(\tilde{\delta}_{\mu\mu'}\delta_{\nu\nu'}
+\tilde{\delta}_{\mu\nu'}\delta_{\mu'\nu}
+\delta_{\mu\mu'}\tilde{\delta}_{\nu\nu'}
+\delta_{\mu\nu'}\tilde{\delta}_{\mu'\nu})
\left. +(\delta_{\mu\mu'}\delta_{\nu\nu'}
+\delta_{\mu\nu'}\delta_{\mu'\nu})
\right) \ ,\end{aligned}$$ where we have substituted $f=16$ and $d=10$. The tensor structure of the bosonic part is discussed in appendix \[appA\].
In order to check the consistency of our calculation, we derive the following Ward identity for the Wilson line correlators of the graviton mode: $$\begin{aligned}
&K_\mu \left(
\hspace*{1.5cm}
\begin{picture}(0,0)
\put(-40,0){\line(1,0){5}}
\put(-5,0){\line(1,0){5}}
\put(20,0){\line(1,0){5}}
\put(55,0){\line(1,0){5}}
\put(-20,0){\circle{30}}
\put(40,0){\circle{30}}
\put(-5,5){\line(1,0){30}}
\put(-5,-5){\line(1,0){30}}
\end{picture}\hspace*{8mm}
\hspace*{15mm}
\right)
{\notag}\\ {\notag}\\
&=
K \langle\;
{{\rm Str}}V_{+\nu}^\dagger {{\rm Str}}V_{\mu'\nu'}
\;\rangle {\notag}\\
&=
\langle\;
(A_-)_{ij}^{K+1} (\frac{\delta}{\delta A_\nu})_{ji}
I {{\rm Str}}V_{\mu'\nu'} \;\rangle
-\sum_{K_1}\langle\; {{\rm tr}}{1\over 4}(A_-)^{K_1}
\bar{\psi}\Gamma_{-\nu}(A_-)^{K-K_1}
{\partial\over \partial \bar{\psi}} I{{\rm Str}}V_{\mu'\nu'} \;\rangle
{\notag}\\
&=-\langle\;
(A_-)_{ij}^{K+1} (\frac{\delta}{\delta A_\nu})_{ji}
{{\rm Str}}V_{\mu'\nu'}
\;\rangle
+\sum_{K_1}\langle\; {{\rm tr}}{1\over 4}
(A_-^{K_1}\bar{\psi}\Gamma_{-\nu}(A_-)^{K-K_1}
{\partial\over \partial \bar{\psi}}{{\rm Str}}V_{\mu'\nu'} \;\rangle
{\notag}\\
&-\delta_{-\nu } \sum_{K_1} \langle \; {{\rm tr}}(A_-)^{K_1} {{\rm tr}}(A_-)^{K-K_1}
{{\rm Str}}V_{\mu'\nu'}
\;\rangle \ , \label{Wardid}\end{aligned}$$ where $$\begin{aligned}
&V_{\mu\nu}=(A_+)^K \left(
[A_\rho,A_\mu][A_\rho,A_\nu]+\frac{1}{2}\bar{\psi}
\Gamma^{(\mu} [A^{\nu)},\psi] \right) {\notag}\\
&I={1\over 4}{{\rm Tr}}[A_\mu,A_\rho]^2
+{{\rm Tr}}\frac{1}{2}\bar{\psi} \Gamma^\mu [A_\mu,\psi] {\notag}\\
&(A_\pm)^K=(p_\pm +a_\pm+\tilde{p}_\pm +\tilde{a}_\pm)^K
\ .\end{aligned}$$ These vertex operators are closely related to those we have investigated up to a normalization factor of $y_j^2(j!)^2/(2j)!$ since $$\begin{aligned}
(p_\pm+\tilde{p}_\pm)^{2j}\sim {(2j)!\over (j!)^2}
p_\pm^j\tilde{p}_\pm^j \ ,\end{aligned}$$ where $j\gg 1$ is assumed. [^2]
First, let us discuss the last line in (\[Wardid\]). We focus on the leading term in the 't Hooft coupling $\lambda$, which is the one-loop diagram. At this order, the first and second traces can contain no creation (annihilation) operators. Thus, this three point function is calculated as $$\begin{aligned}
\sum_{K_1} \langle \; {{\rm tr}}(A_-)^{K_1} {{\rm tr}}(A_-)^{K-K_1}
{{\rm Str}}V_{\mu'\nu'}
\;\rangle
&= 0 \ ,\end{aligned}$$ since $$\begin{aligned}
{{\rm tr}}\langle \;(A_-)^K\;\rangle = 0 \quad {\rm for} \ K \neq 0 \ .\end{aligned}$$ The one point function of Wilson line operators is $$\begin{aligned}
&\langle\;
-(A_-)_{ij}^{K+1} (\frac{\delta}{\delta A_\nu})_{ji}
{{\rm Str}}V_{\mu'\nu'}
\;\rangle
+\sum_{K_1}\langle\;{{\rm tr}}{1\over 4}
(A_-)^{K_1}\bar{\psi}\Gamma_{-\nu}(A_-)^{K-K_1}
{\partial\over \partial \bar{\psi}}{{\rm Str}}V_{\mu'\nu'} \;\rangle \ .\end{aligned}$$ The bosonic part is calculated as $$\begin{aligned}
& ({\cal Y}_K)^2\langle\;(A_-)^{K+1}_{ij} (\frac{\delta}{\delta A_\nu })_{ji}
{{\rm Str}}(A_+)^K
[A^{\rho'},A^{\mu'}]
[A^{\rho'},A^{\nu'}] \;\rangle{\notag}\\
&= \frac{1}{K}\sum_{K_1} \sum_{a,b} ({\cal Y}_K)^2 {\notag}\\
&{{\rm tr}}\left(
(A_{\rho'} (A_-)^{K+1} (A_+)^{K_1} f^\dagger_{\rho' \nu'} (A_+)^{K-K_1}-
A_{\rho'} (A_+)^{K_1} f^\dagger_{\rho' \nu'} (A_+)^{K-K_1}(A_-)^{K+1}
)\delta_{\nu\mu'} \right. {\notag}\\
&+(A_{\rho'} (A_-)^{K+1} (A_+)^{K_1} f^\dagger_{\rho' \mu'} (A_+)^{K-K_1}-
A_{\rho'} (A_+)^{K_1} f^\dagger_{\rho' \mu'} (A_+)^{K-K_1}(A_-)^{K+1}
)\delta_{\nu\nu'} {\notag}\\
&+((A_-)^{K+1}A_{\mu'} (A_+)^{K_1} f^\dagger_{\rho' \nu'} (A_+)^{K-K_1}-
A_{\mu'} (A_-)^{K+1}(A_+)^{K_1} f^\dagger_{\rho' \nu'} (A_+)^{K-K_1}
)\delta_{\rho'\nu} {\notag}\\
&+ \left.
((A_-)^{K+1}A_{\nu'} (A_+)^{K_1} f^\dagger_{\rho' \mu'} (A_+)^{K-K_1}-
A_{\nu'} (A_-)^{K+1}(A_+)^{K_1} f^\dagger_{\rho' \mu'} (A_+)^{K-K_1}
)\delta_{\rho'\nu} \right) \ .\end{aligned}$$ Note that if we consider the noncommutative flat space, there are additional terms which come from the variation of the external momentum factor $e^{ik\cdot A}$. We can show that such terms do not contribute to the correlator in this regularization.
The first line of the trace part is calculated as $$\begin{aligned}
&\frac{1}{K} \sum_{K_1} \sum_{a,b} ({\cal Y}_K)^2 {\notag}\\
&\langle\;\;{{\rm tr}}(A_{\rho'} (A_-)^{K+1} (A_+)^{K_1} f^\dagger_{\rho' \nu'} (A_+)^{K-K_1}-
A_{\rho'} (A_+)^{K_1} f^\dagger_{\rho' \nu'} (A_+)^{K-K_1}(A_-)^{K+1}
)\delta_{\nu\mu'} \;\rangle {\notag}\\
=&-\frac{1}{K^2}\sum_{K_1,K_2} \sum_{a,b}B_{K_1,K-K_1} B_{K_2,K-K_2} {\notag}\\
& {{\rm tr}}{\cal Y}_1{\cal Y}_a{\cal Y}_{1'}{\cal Y}_b
\frac{1}{P^2}
( (d-2) P_{\nu'}K\cdot P+ P^2 K_{\nu'}) \delta_{\nu\mu'}
{{\rm tr}}{\cal Y}_b^\dagger {\cal Y}_{2'}^\dagger
{\cal Y}_a^\dagger {\cal Y}_2^\dagger
\ .\end{aligned}$$ In this way, the bosonic part is obtained as $$\begin{aligned}
& ({\cal Y}_K)^2
\langle\;-(A_-)^{K+1}_{ij} (\frac{\delta}{\delta A_\nu })_{ji}
{{\rm Str}}(A_+)^K
[A^{\rho'},A^{\mu'}]
[A^{\rho'},A^{\nu'}] \;\rangle{\notag}\\
\sim &\frac{1}{K^2}\sum_{K_1,K_2} \sum_{a,b}
B_{K_1,K-K_1} B_{K_2,K-K_2}
{{\rm tr}}{\cal Y}_1{\cal Y}_a{\cal Y}_{1'}{\cal Y}_b {\notag}\\
&\frac{1}{P^2}\Big(
((d-2)K\cdot P P_{\nu'}+ P^2 K_{\nu'}) \delta_{\nu\mu'}+
((d-2)K\cdot P P_{\mu'}+ P^2 K_{\mu'}) \delta_{\nu\nu'} {\notag}\\
&-K\cdot P (\delta_{\mu'\nu} P_{\nu'}-P_\nu \delta_{\mu'\nu'})
+ P_{\mu'} (P_{\nu'} K_\nu-P_\nu K_{\nu'}) {\notag}\\
& -K\cdot P (\delta_{\nu'\nu} P_{\mu'}-P_\nu \delta_{\nu'\mu'})
+ P_{\nu'} (P_{\mu'} K_\nu-P_\nu K_{\mu'}) \Big)
{{\rm tr}}{\cal Y}_b^\dagger {\cal Y}_{2'}^\dagger
{\cal Y}_a^\dagger {\cal Y}_2^\dagger \ .\end{aligned}$$ The fermionic part is calculated as $$\begin{aligned}
&({\cal Y}_K)^2 \langle\;-(A_-)^{K+1}_{ij} (\frac{\delta}{\delta A_\nu })_{ji}
{{\rm Str}}(A_+)^K
\frac{1}{2}\bar{\psi} \Gamma^{(\nu'}
[A^{\mu' )},\psi] \;\rangle{\notag}\\
\to &\frac{1}{K^2}
\sum_{K_1,K_2} \sum_{a,b} B_{K_1,K-K_1} B_{K_2,K-K_2}
{{\rm tr}}{\cal Y}_1{\cal Y}_a{\cal Y}_{1'}{\cal Y}_b {\notag}\\
&\frac{ f}{4P^2}
(P\cdot K P_{\nu'} \delta_{\nu\mu'}+P\cdot K P_{\mu'} \delta_{\nu\nu'})
{{\rm tr}}{\cal Y}_b^\dagger {\cal Y}_{2'}^\dagger
{\cal Y}_a^\dagger {\cal Y}_2^\dagger \ .\end{aligned}$$ The contribution corresponding to fermionic equation of motion is $$\begin{aligned}
&({\cal Y}_K)^2 \sum_{K_1}\langle\; {{\rm tr}}{1\over 4}
(A_-)^{K_1}\bar{\psi}\Gamma_{-\nu}(A_-)^{K-K_1}
{\partial\over \partial \bar{\psi}}{{\rm Str}}V_{\mu'\nu'} \;\rangle
{\notag}\\
\to & \frac{1}{K^2} \sum_{K_1,K_2} \sum_{a,b} B_{K_1,K-K_1} B_{K_2,K-K_2}
{{\rm tr}}{\cal Y}_1{\cal Y}_a{\cal Y}_{1'}{\cal Y}_b
{\notag}\\
&\frac{f}{8 P^2} \left((P_\nu
(P_{\mu'}K_{\nu'}+P_{\nu'}K_{\mu'})-
P\cdot K P_{\nu'} \delta_{\nu\mu'}-P\cdot K P_{\mu'} \delta_{\nu\nu'}\right)
{{\rm tr}}{\cal Y}_b^\dagger {\cal Y}_{2'}^\dagger
{\cal Y}_a^\dagger {\cal Y}_2^\dagger \ .\end{aligned}$$ The leading contribution of the one point function of Wilson line operators is given by $$\begin{aligned}
&\frac{1}{K^2}\sum_{K_1,K_2} \sum_{a,b} B_{K_1,K-K_1} B_{K_2,K-K_2}
{{\rm tr}}{\cal Y}_1{\cal Y}_a{\cal Y}_{1'}{\cal Y}_b {\notag}\\
&\frac{1}{P^2}
\left(
(d-3-\frac{f}{4}-\frac{f}{8})(P_{\nu'} \delta_{\nu\mu'}
+P_{\mu'}\delta_{\nu\nu'})K \cdot P +2 P_\nu \delta_{\mu'\nu'}K\cdot P
\right. {\notag}\\
&\left.
+(K_{\nu'} \delta_{\nu\mu'}+K_{\mu'}\delta_{\nu\nu'})P^2
+2 K_\nu P_{\mu'} P_{\nu'}
+(\frac{f}{8}-1)P_\nu
(P_{\mu'}K_{\nu'}+P_{\nu'}K_{\mu'})
\right)
{{\rm tr}}{\cal Y}_b^\dagger {\cal Y}_{2'}^\dagger
{\cal Y}_a^\dagger {\cal Y}_2^\dagger \ .
\label{1pt}\end{aligned}$$ By multiplying $K_\mu$ to $A_{\rm tot}^{\mu\nu\mu'\nu'}$, we obtain $$\begin{aligned}
K_\mu A_{\rm tot}^{\mu\nu\mu'\nu'}&
=\frac{1}{K^2} \sum_{K_1,K_2} \sum_{a,b}
B_{K_1,K-K_1}
B_{K_2,K-K_2}
{{\rm tr}}{\cal Y}_1{\cal Y}_a{\cal Y}_{1'}{\cal Y}_b {\notag}\\
&{1\over P^2} \Big(
(2d-4-f) \frac{K \cdot P P^{\mu'} P^{\nu} P^{\nu'}}{P^2}
+ 2P_{\nu}\delta_{\mu'\nu'}K\cdot P+2K_{\nu}P_{\mu'}P_{\nu'}
{\notag}\\
&+(\frac{f}{8}-1)
((P_{\nu'} \delta_{\nu\mu'}
+P_{\mu'}\delta_{\nu\nu'})K \cdot P
+P_\nu
(P_{\mu'}K_{\nu'}+P_{\nu'}K_{\mu'})) {\notag}\\
& +P^2(K_{\nu'} \delta_{\nu\mu'} +K_{\mu'}\delta_{\nu\nu'})
\Big)
{{\rm tr}}{\cal Y}_b^\dagger {\cal Y}_{2'}^\dagger
{\cal Y}_a^\dagger {\cal Y}_2^\dagger \ .
\label{ward2}\end{aligned}$$ When $f=2d-4$, (\[ward2\]) and (\[1pt\]) agree with each other.
Universality of the result\[s3\]
================================
Universal amplitude \[s31\]
---------------------------
As we have seen in the previous section, the Wilson line correlator is given by the separating function $B_{j_1,j-j_1}$. In this section, we will show that this result is universal, since it relies only on the completeness condition of the generators of $SU(N)$.
The correlators contain the following amplitude $$\begin{aligned}
\hspace*{1.5cm}
\begin{picture}(0,0)
\put(-40,0){\line(1,0){5}}
\put(-5,0){\line(1,0){5}}
\put(20,0){\line(1,0){5}}
\put(55,0){\line(1,0){5}}
\put(-20,0){\circle{30}}
\put(40,0){\circle{30}}
\put(-5,5){\line(1,0){30}}
\put(-5,-5){\line(1,0){30}}
\put(-17,0){${\cal Y}_1'$}
\put(5,10){${\cal Y}_a$}
\put(-47,4){${\cal Y}_1$}
\put(5,-15){${\cal Y}_b$}
\end{picture}\hspace*{8mm}
\hspace*{15mm} =
{{\rm tr}}{\cal Y}_1{\cal Y}_a {\cal Y}_{1'} {\cal Y}_b
{{\rm tr}}{\cal Y}_b^\dagger {\cal Y}_{2'}^\dagger
{\cal Y}_a ^\dagger {\cal Y}_2^\dagger
\ . \end{aligned}$$
We recall the completeness condition: $$\begin{aligned}
\label{comp}
\sum_a &({\cal Y}_a)_{ij} ({\cal Y}_a^\dagger)_{kl}=\delta_{il} \delta_{jk} \ .\end{aligned}$$ By using this relation, we obtain $$\begin{aligned}
&\sum_{ab} {{\rm tr}}{\cal Y}_1{\cal Y}_a
{\cal Y}_{1'}{\cal Y}_b
{{\rm tr}}{\cal Y}_b^\dagger {\cal Y}_{2'}^\dagger
{\cal Y}_a^\dagger {\cal Y}_2^\dagger {\notag}\\
=& {{\rm tr}}{\cal Y}_1 {\cal Y}_2^\dagger
{{\rm tr}}{\cal Y}_{1'} {\cal Y}_{2'}^\dagger \ .\end{aligned}$$ While ${\cal Y}$ depends on a particular $G/H$ we pick, the following relation is universal $$\begin{aligned}
{{\rm tr}}{\cal Y}_1 {\cal Y}_2^\dagger
=\delta_{j_1-j_2,0}+{\cal O}(1/N) \ ,\end{aligned}$$ where $j_1$ is the momentum carried by ${\cal Y}_1$. ${{\rm tr}}{\cal Y}_{1'} {\cal Y}_{2'}^\dagger$ provides the same $\delta$ due to momentum conservation. The universality of the amplitude reflects the universality with respect to the topology of the D-brane worldvolume, which is closely related to the cut-off independence of the analysis.
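The double application of the completeness condition used above can be verified numerically. The following sketch (ours, not from the paper; it assumes numpy, and uses arbitrary matrices in place of the harmonics ${\cal Y}_1,{\cal Y}_{1'},{\cal Y}_2,{\cal Y}_{2'}$) checks the reduction of the double-trace amplitude:

```python
import numpy as np

# Sketch: check  sum_{a,b} tr(Y1 Ya Y1' Yb) tr(Yb^+ Y2'^+ Ya^+ Y2^+)
#               = tr(Y1 Y2^+) tr(Y1' Y2'^+)
# for any orthonormal basis {Y_a} under the trace inner product.
rng = np.random.default_rng(1)
N = 3

Z = rng.normal(size=(N**2, N**2)) + 1j * rng.normal(size=(N**2, N**2))
U, _ = np.linalg.qr(Z)
basis = [U[:, a].reshape(N, N) for a in range(N**2)]

def rand_mat():
    return rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

Y1, Y1p, Y2, Y2p = (rand_mat() for _ in range(4))
dag = lambda M: M.conj().T

lhs = sum(
    np.trace(Y1 @ Ya @ Y1p @ Yb) * np.trace(dag(Yb) @ dag(Y2p) @ dag(Ya) @ dag(Y2))
    for Ya in basis for Yb in basis
)
rhs = np.trace(Y1 @ dag(Y2)) * np.trace(Y1p @ dag(Y2p))
assert np.isclose(lhs, rhs)
```

The sum over $b$ glues the two traces into one, and the sum over $a$ then splits it into the product of the two short traces, exactly the double-line contraction pattern of figure \[thooft\].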
Finally, we provide a pictorial representation of our evaluation of the universal amplitude in figure \[thooft\]. Our result is naturally understood using 't Hooft's double-line notation.
Example : $CP^2$ \[s32\]
------------------------
In contrast to the preceding sub-section, the separating function $B$ depends on the choice of $G/H$. In this sub-section, we will show that the momentum ($k$) dependence of the Wilson line correlators on $CP^2=SU(3)/U(2)$ is also $1/k^2$. We will calculate $B$ in the semiclassical approximation. We define the raising and lowering operators as $$\begin{aligned}
p_{\pm}=\frac{1}{\sqrt{2}}(p_4\pm ip_5) \ , \quad
\tilde{p}_{\pm}=\frac{1}{\sqrt{2}}(p_6\pm ip_7) \ .\end{aligned}$$
The normalization condition of spherical harmonics is $$\begin{aligned}
{{\rm tr}}{\cal Y}_j^\dagger {\cal Y}_j=1 \ ,\end{aligned}$$ where $$\begin{aligned}
{\cal Y}_j=y_j (p_+)^j \ .\end{aligned}$$ In the semiclassical approximation, $$\begin{aligned}
p_{+}=r \frac{\xi_1}{1+\bar{\xi} \xi} \ , \quad
\tilde{p}_+ =r\frac{\xi_2}{1+\bar{\xi}{\xi}} \ ,\end{aligned}$$ we may estimate $$\begin{aligned}
{{\rm tr}}{\cal Y}_j^\dagger {\cal Y}_j&=
r^{2 j+2} \int \frac{2 d^4 \xi}{\pi^2 (1+\xi\bar{\xi})^3}
\frac{(\bar{\xi}_1\xi_1)^{j}}{(1+\bar{\xi}\xi)^{2j}}
y_j^2 {\notag}\\
&=r^{2j+2}\frac{2(j!)^2}{(2j+2)!}y_j^2 \ .\end{aligned}$$ Thus, we obtain $$\begin{aligned}
\tilde{B}_{j_1,j-j_1}^2&=\frac{y^2_j}{y^2_{j-j_1}y^2_{j_1}} {\notag}\\
&\sim \frac{\sqrt{\pi}}{2} (\frac{j}{(j-j_1)j_1})^{3\over 2}N \ .\end{aligned}$$ The Wilson line correlators (\[strsum\]) are calculated as $$\begin{aligned}
&\langle\; {{\rm Str}}{\cal Y}_{j} f_1 f_2
{{\rm Str}}f_2^\dagger f_1^\dagger {\cal Y}_{j}^\dagger
\;\rangle {\notag}\\
&= \frac{1}{j^2} \sum_{j_1=0}^j \sum_{j_2=0}^j
\langle\; {{\rm tr}}{\cal Y}_{j_1} f_1 {\cal Y}_{j-j_1} f_2
{{\rm tr}}f_2^\dagger {\cal Y}_{j_2}^\dagger f_1^\dagger
{\cal Y}_{j-j_2}^\dagger
\;\rangle B_{j_1j-j_1} B_{j_2j-j_2}
{\notag}\\
&\sim \frac{N}{j^2} \sqrt{\pi} \zeta(\frac{3}{2}) \ .\end{aligned}$$ We have obtained the $1/({\rm momentum})^2$ behavior without a $\log$ factor. The correlators are also invariant under rotations of the 8 dimensional space in which $CP^2$ is embedded.
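The absence of the $\log$ factor can be made explicit numerically: unlike on $S^2\times S^2$, the $CP^2$ splitting sum converges, with each endpoint contributing $\zeta(3/2)$. A quick check (ours, using only the Python standard library):

```python
from math import pi

# Sketch: with B^2 ~ (sqrt(pi)/2) * (j/(j1*(j-j1)))**1.5 * N as in the text,
#   (sqrt(pi)/2) * sum_{j1=1}^{j-1} (j/(j1*(j-j1)))**1.5  ->  sqrt(pi)*zeta(3/2),
# a convergent sum, so the correlator is 1/j^2 with no log factor.
def cp2_sum(j):
    return (pi ** 0.5 / 2) * sum((j / (j1 * (j - j1))) ** 1.5 for j1 in range(1, j))

ZETA_3_2 = 2.612375348685488  # Riemann zeta(3/2)
j = 20_000
assert abs(cp2_sum(j) - pi ** 0.5 * ZETA_3_2) < 0.05
```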
Universality with respect to the dimensionality \[s33\]
-------------------------------------------------------
We have shown in this section that the correlator is given by the separating function $B$. This result holds for any $G/H$, irrespective of its dimension. We therefore consider higher-dimensional NC gauge theory here. NC gauge theory on $S^2 \times S^2 \times S^2$ is considered in [@fuzS2S2S2]. The WLC is obtained as $$\begin{aligned}
&\langle\; {{\rm Str}}(p_{a+})^j (p_{b+})^j (p_{c+})^j f_1 f_2
{{\rm Str}}f_2^\dagger f_1^\dagger (p_{a-})^j (p_{b-})^j (p_{c-})^j
\;\rangle {\notag}\\
&= \frac{n^2 }{j^2} \sum_{j_1=0}^j
({y_j\over y_{j_1}y_{j-j_1}})^6 {\notag}\\
& = \frac{n^2 }{j^2} \sum_{j_1=0}^j
B_{j_1,j-j_1}^6 {\notag}\\
&\sim \frac{n^2 }{j^2} \sum_{j_1=1}^j
(\frac{l^2 \pi j}{j_1(j-j_1)})^{\frac{3}{2}} {\notag}\\
&=\frac{N n \pi^{3/2}}{4 j^2} \zeta (\frac{3}{2})\ .\end{aligned}$$ Thus, the graviton is localized on a 6-dimensional subspace: $S^2 \times S^2 \times S^2$. We may naturally interpret this as the graviton being localized on a D5-brane.
When we consider $(S^2 \times)^x$-type spacetimes, the correlators are calculated as $$\begin{aligned}
\frac{n^2 }{ j^2} \sum B^{2x}
\sim N\frac{n}{j^2} \ .\end{aligned}$$ except for $S^2$ ($x=1$). Thus, the correlators exhibit the inverse squared momentum law on any $G/H$ whose dimension is larger than $2$.
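The dimension dependence can be made explicit numerically. Reading off from the $S^2\times S^2\times S^2$ formula above that each $S^2$ factor contributes $B^2 \sim \big(l^2\pi j/(j_1(j-j_1))\big)^{1/2}$ (an identification assumed here), the $j_1$-sum for $(S^2\times)^x$ carries the exponent $x/2$: for $x\ge 3$ it converges to $2\zeta(x/2)$, giving the pure $1/j^2$ law, while for $x=2$ it grows like the harmonic sum $2H_{j-1}\sim 2\log j$, the logarithmic factor special to $S^2\times S^2$:

```python
import math

def B_sum(j, x):
    # sum_{j1=1}^{j-1} (j / (j1*(j-j1)))**(x/2): the j1-sum of B^{2x} up to constants
    return sum((j / (j1 * (j - j1)))**(x / 2) for j1 in range(1, j))

j = 100_000
zeta32 = sum(n**-1.5 for n in range(1, 2_000_000))   # zeta(3/2) ~ 2.612
euler_gamma = 0.5772156649

# x = 3 (S^2 x S^2 x S^2): convergent sum -> pure inverse squared momentum law
assert abs(B_sum(j, 3) - 2 * zeta32) / (2 * zeta32) < 0.02

# x = 2 (S^2 x S^2): the sum is 2 H_{j-1} ~ 2 (log j + gamma), i.e. a log factor
assert abs(B_sum(j, 2) - 2 * (math.log(j) + euler_gamma)) < 0.01
```

For $x=1$ the sum grows like $\pi\sqrt{j}$ instead, which is the exceptional $S^2$ case noted above.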
Conclusions and Discussions \[s4\]
==================================
In this paper, we have investigated the two point correlation functions of graviton vertex operators in 4 dimensional NC gauge theory with maximal SUSY on compact homogeneous spacetimes $G/H$. The infrared contributions ($k^4\log k$) to the correlators are identical to those in conformal field theory, just like the correlators of the energy-momentum tensor. However, the ultraviolet contributions are very different, even for small external momenta. This is due to the UV/IR mixing effects caused by the NC phases in the correlators. In the case of the symmetrically ordered graviton operators, we find that the two point correlators behave as $1/k^2$. This fact indicates the existence of massless gravitons in NC gauge theory. It is already clear that there is bulk gravity in 4d NC gauge theory with maximal SUSY, since the one loop effective action involving the quadratic Wilson lines is consistent with 10 dimensional supergravity. In order to obtain realistic quantum gravity, however, we need to obtain 4 dimensional gravity. Such a possibility may be realized in various ways, for instance if a graviton is bound to the brane or through induced gravity on the brane. We hope our findings constitute a first concrete step toward identifying such a mechanism in 4d NC gauge theory.
We still need to investigate various issues to establish such a mechanism. One issue is to understand the $n$ point correlation functions. Another is to understand the correlators of more generic Wilson lines. If we consider vertex operators which contain more commutators $[A_{\mu},A_{\nu}]$, analogous calculations show that the two point functions are more singular in the infrared limit than $1/k^2$. This might imply that the relevant modes are (gravitationally) confined and develop a mass gap in that channel. On the other hand, the correlators of Wilson lines which contain fewer commutators $[A_{\mu},A_{\nu}]$ do not exhibit a singularity in the infrared limit. The third issue is that the two point correlators are not transverse, due to the one point functions, as we have seen in the Ward identity. They seem to correspond to graviton propagators in a certain gauge.
Our investigation is also restricted to the leading order of the ’t Hooft coupling in NC gauge theory, which is valid in the weak coupling regime. We need to understand higher order quantum corrections as well. Since the behavior of the correlators is governed by power counting, it is likely that higher order corrections do not modify our results. It is also desirable to have a consistent supergravity description in the strong coupling limit.
If the graviton vertex operators are coupled to conserved energy-momentum tensors, we can reproduce Newton’s law between them by taking the expectation values of the graviton vertex operators. It may be a good strategy to pursue this idea further, since such a structure is consistent with the one loop effective action of NC gauge theory.
This work is supported in part by the Grant-in-Aid for Scientific Research from the Ministry of Education, Science and Culture of Japan. The work of S. N. is supported in part by Research Fellowships of the Japan Society for the Promotion of Science for Young Scientists.
Bosonic part of the tensor structure of graviton correlators on $S^2 \times S^2$ \[appA\]
==========================================================================================
In this appendix, we investigate the bosonic part of the tensor structure of graviton correlators on $S^2 \times S^2$ and obtain an anisotropic tensor structure. For the supersymmetric correlators, the isotropic tensor structure is obtained in section \[s23\]. By considering the isometry of the space, we can replace $$\begin{aligned}
P_\mu P_\nu P_\mu' P_\nu' &\to
\frac{P_A^4}{15}(\delta_A^{\mu\nu}\delta_A^{\mu'\nu'}+
\delta_A^{\mu\mu'}\delta_A^{\nu\nu'}+
\delta_A^{\mu\nu'}\delta_A^{\mu'\nu}) {\notag}\\
&+\frac{P_A^2P_B^2}{9}(\delta_A^{\mu\nu}\delta_B^{\mu'\nu'}+
\delta_A^{\mu\mu'}\delta_B^{\nu\nu'}+
\delta_A^{\mu\nu'}\delta_B^{\mu'\nu}){\notag}\\
&+\frac{P_A^2P_B^2}{9}(\delta_B^{\mu\nu}\delta_A^{\mu'\nu'}+
\delta_B^{\mu\mu'}\delta_A^{\nu\nu'}+
\delta_B^{\mu\nu'}\delta_A^{\mu'\nu}){\notag}\\
&+\frac{P_B^4}{15}(\delta_B^{\mu\nu}\delta_B^{\mu'\nu'}+
\delta_B^{\mu\mu'}\delta_B^{\nu\nu'}+
\delta_B^{\mu\nu'}\delta_B^{\mu'\nu}) \ , \label{repl}\end{aligned}$$ where $\delta_A$ and $\delta_B$ are Kronecker deltas restricted to the respective 3-dimensional subspaces, $$\begin{aligned}
&\delta_A^{\mu\nu}=
\left(\begin{array}{cccccc}
1&&&&&\\
&1&&&&\\
&&1&&&\\
&&&0&&\\
&&&&0&\\
&&&&&0
\end{array}\right),
\delta_B^{\mu\nu}=
\left(\begin{array}{cccccc}
0&&&&&\\
&0&&&&\\
&&0&&&\\
&&&1&&\\
&&&&1&\\
&&&&&1
\end{array}\right) {\notag}\\
&P_A^2=P_4^2+P_5^2+P_6^2, \quad
P_B^2=P_7^2+P_8^2+P_9^2 \ .\end{aligned}$$ By using (\[repl\]), the bosonic part of the correlator (\[tens1\]) is replaced as $$\begin{aligned}
&(\frac{16}{15}P_A^4+\frac{2}{3}P^4)(\delta_A^{\mu\nu}\delta_A^{\mu'\nu'}+
\delta_A^{\mu\mu'}\delta_A^{\nu\nu'}+
\delta_A^{\mu\nu'}\delta_A^{\mu'\nu}) {\notag}\\
+&(\frac{16}{9}P_A^2P_B^2+\frac{2}{3}P^4)
(\delta_A^{\mu\nu}\delta_B^{\mu'\nu'}+
\delta_A^{\mu\mu'}\delta_B^{\nu\nu'}+
\delta_A^{\mu\nu'}\delta_B^{\mu'\nu}) {\notag}\\
+&(\frac{16}{9}P_A^2P_B^2+\frac{2}{3}P^4)
(\delta_B^{\mu\nu}\delta_A^{\mu'\nu'}+
\delta_B^{\mu\mu'}\delta_A^{\nu\nu'}+
\delta_B^{\mu\nu'}\delta_A^{\mu'\nu}) {\notag}\\
+&(\frac{16}{15}P_B^4+\frac{2}{3}P^4)(
\delta_B^{\mu\nu}\delta_B^{\mu'\nu'}+
\delta_B^{\mu\mu'}\delta_B^{\nu\nu'}+
\delta_B^{\mu\nu'}\delta_B^{\mu'\nu}) \ . \label{isotro}\end{aligned}$$ We need to estimate $P_A^4$ and $P_A^2P_B^2$, which we calculate in the semiclassical approximation. Since the angular momenta are represented in the adjoint representation on $S^2$, the integral of $P_A^4$ is semiclassically written as $$\begin{aligned}
\label{semicl}
\int dX_1 d\tilde{X}_1 \frac{(X_1-X_2)^4}{\left((X_1-X_2)^2
+(\tilde{X}_1-\tilde{X}_2)^2\right)^2}\end{aligned}$$ where $$\begin{aligned}
P_A =X_1-X_2 \ .\end{aligned}$$ $X_2$ and $\tilde{X}_2$ are fixed at some point on $S^2$. (\[semicl\]) is calculated as $$\begin{aligned}
&\int d\Omega d\tilde{\Omega} \frac{(X_1-X_2)^4}{\left((X_1-X_2)^2
+(\tilde{X}_1-\tilde{X}_2)^2\right)^2} {\notag}\\
=&\int_{-1}^{1} d \cos \theta \, d \cos \tilde{\theta}
\frac{(2-2 \cos^2 \theta)^2}{(4-2 \cos^2 \theta
-2 \cos^2 \tilde{\theta})^2} {\notag}\\
=&\int_{-1}^1 d X d \tilde{X}\frac{(1-X^2)^2}{(2-X^2-\tilde{X}^2)^2}
\ ,
\label{pa4}\end{aligned}$$ where we change variables as $$\begin{aligned}
X=\cos \theta, \quad
\tilde{X}=\cos \tilde{\theta} \ .\end{aligned}$$ The integral of $P_A^2P_B^2$ is also estimated as $$\begin{aligned}
\label{papb}
\int_{-1}^1 d X d \tilde{X}
\frac{(1-X^2)(1-\tilde{X}^2)}{(2-X^2-\tilde{X}^2)^2} \ .\end{aligned}$$ By carrying out the integration over $\tilde{X}$ in (\[papb\]), we obtain $$\begin{aligned}
4 \int_0^1 dX (-1+X^2) \left(-\frac{-1+X^2}{2 (-2+X^2)(-1+X^2)}
+\frac{(-3+X^2)\tan^{-1}\frac{1}{\sqrt{-2+X^2}}}{2 (-2+X^2)^{3/2}}
\right) \ .\end{aligned}$$ The first term is calculated as $$\begin{aligned}
\label{firstt}
-2+\sqrt{2}\log(1+\sqrt{2}) \ .\end{aligned}$$ The second term is calculated as $$\begin{aligned}
\label{second}
-4 \int_1^{\sqrt{2}} d x \frac{(x^2+1)(x^2-1)}{2 x^2 \sqrt{2-x^2}}
\tanh^{-1}\frac{1}{x} \ ,\end{aligned}$$ where we change variables as $$\begin{aligned}
X^2-2=-x^2 \ .\end{aligned}$$ Formally, $\tanh^{-1}(1/x)$ is expanded as $$\begin{aligned}
\tanh^{-1}\frac{1}{x}=
\sum_{k=1}^\infty \frac{1}{2k-1} (\frac{1}{x})^{2k-1} \ .\end{aligned}$$ By using this expression, we carry out the integral in (\[second\]) as $$\begin{aligned}
\sum_{k=1}^\infty \frac{-4}{2k-1} \left(
-\frac{2^{-5/2-k}}{k(k-2)} (\frac{(k-2)\sqrt{\pi} \Gamma (1-k)}{\Gamma
(1/2-k)}-\frac{4k\sqrt{\pi} \Gamma (3-k)}{\Gamma (5/2-k)})
\right.
{\notag}\\ \left.
+\frac{-(k-2) _2 F_1 ((1/2,1),(1-k),-1)+k _2 F_1((1/2,1),(3-k),-1)
}{4k(k-2)}
\right) \ , \label{secondt}\end{aligned}$$ where $_2 F_1(a,b;c;z)$ is the hypergeometric function. We numerically evaluate (\[papb\]) as $$\begin{aligned}
\int_{0}^1 d X d \tilde{X}
\frac{(1-X^2)(1-\tilde{X}^2)}{(2-X^2-\tilde{X}^2)^2}
&\sim-0.188+0.396 {\notag}\\
&=0.208 \ . \label{num1}\end{aligned}$$ We also evaluate (\[pa4\]) as $$\begin{aligned}
\int_{0}^1 d X d \tilde{X}\frac{(1-X^2)^2}{(2-X^2-\tilde{X}^2)^2}
\sim 0.292 \ . \label{num2}\end{aligned}$$ Combining the estimates (\[num1\]) and (\[num2\]), the bosonic part of the tensor structure of the graviton correlator on $S^2 \times S^2$ (\[isotro\]) is evaluated as $$\begin{aligned}
&0.98(\delta_A^{\mu\nu}\delta_A^{\mu'\nu'}+
\delta_A^{\mu\mu'}\delta_A^{\nu\nu'}+
\delta_A^{\mu\nu'}\delta_A^{\mu'\nu}+
\delta_B^{\mu\nu}\delta_B^{\mu'\nu'}+
\delta_B^{\mu\mu'}\delta_B^{\nu\nu'}+
\delta_B^{\mu\nu'}\delta_B^{\mu'\nu}) {\notag}\\
+&1.04
(\delta_A^{\mu\nu}\delta_B^{\mu'\nu'}+
\delta_A^{\mu\mu'}\delta_B^{\nu\nu'}+
\delta_A^{\mu\nu'}\delta_B^{\mu'\nu}+
\delta_B^{\mu\nu}\delta_A^{\mu'\nu'}+
\delta_B^{\mu\mu'}\delta_A^{\nu\nu'}+
\delta_B^{\mu\nu'}\delta_A^{\mu'\nu}) \ .\end{aligned}$$
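The numerical values (\[num1\]) and (\[num2\]) and the resulting coefficients can be reproduced with an adaptive integrator. The sketch below (Python, scipy assumed available) takes the semiclassical $P_B^2$ factor to be $1-\tilde{X}^2$, in parallel with the $P_A^2$ factor, and uses that the $P^4$ terms integrate to $1$ over the unit square, since $P^4=(P_A^2+P_B^2)^2$ cancels the denominator:

```python
from scipy.integrate import dblquad

# Eq. [pa4]: <P_A^4 / P^4> over the quarter domain; expected ~ 0.292
I1, _ = dblquad(lambda y, x: (1 - x**2)**2 / (2 - x**2 - y**2)**2,
                0, 1, 0, 1)

# Eq. [papb]: <P_A^2 P_B^2 / P^4>; expected ~ 0.208
I2, _ = dblquad(lambda y, x: (1 - x**2) * (1 - y**2) / (2 - x**2 - y**2)**2,
                0, 1, 0, 1)

# Tensor-structure coefficients of the final result: <P^4/P^4> integrates to 1
c_AA = 16 / 15 * I1 + 2 / 3   # ~ 0.98
c_AB = 16 / 9 * I2 + 2 / 3    # ~ 1.04
```

That $c_{AA}\approx 0.98$ and $c_{AB}\approx 1.04$ come out so close to each other is the near-isotropy of the bosonic tensor structure quoted above.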
[99]{}
A. Connes, M. Douglas and A. Schwarz, [*Noncommutative Geometry and Matrix Theory: Compactification on Tori*]{}, , . H. Aoki, N. Ishibashi, S. Iso, H. Kawai, Y. Kitazawa and T. Tada, [*Non-commutative Yang-Mills in IIB Matrix Model*]{}, , . M. Li, [*Strings from IIB Matrices*]{}, , . N. Ishibashi, H. Kawai, Y. Kitazawa and A. Tsuchiya, [*A Large-N Reduced Model as Superstring*]{}, , . T. Banks, W. Fischler, S.H. Shenker and L. Susskind, [*M-theory As A Matrix Model: A Conjecture*]{}, , . S. Minwalla, M.V. Raamsdonk, N. Seiberg, [*Noncommutative Perturbative Dynamics*]{}, , . L. Susskind, [*The Anthropic Landscape of String Theory*]{}, . J. Maldacena, [*The Large $N$ Limit of Superconformal Field theories and Supergravity*]{}, , . L. Randall and R. Sundrum, [*An Alternative to Compactification*]{}, , . Y. Kitazawa, Y. Takayama and D. Tomino, [*Wilson Line Correlators in N=4 Non-commutative Gauge Theory on $S^2
\times S^2$*]{}, , . T. Imai, Y. Kitazawa, Y. Takayama and D. Tomino, [*Quantum Corrections on Fuzzy Sphere*]{}, , . T. Imai, Y. Kitazawa, Y. Takayama and D. Tomino, [*Effective Actions of Matrix Models on Homogeneous Spaces*]{}, , . Y. Kitazawa, Y. Takayama and D. Tomino, [*Correlators of Matrix Models on Homogeneous Spaces*]{}, , . H. Kaneko, Y. Kitazawa and D. Tomino, [*Stability of Fuzzy $S^2 \times S^2 \times S^2$ in IIB Type Matrix Models*]{}, , . H. Kaneko, Y. Kitazawa and D. Tomino, [*Fuzzy Spacetime with SU(3) Isometry in IIB Matrix Model*]{}, . R.C. Myers,[*Dielectric-Branes*]{}, , . Y. Kitazawa, [*Matrix Models in Homogeneous Spaces*]{}, , . A. R. Edmonds, [*Angular Momentum in Quantum Mechanics*]{}, Princeton Univ. Press (1957). N. Ishibashi, S. Iso, H. Kawai and Y. Kitazawa, [*Wilson Loops in Non-commutative Yang-Mills*]{}, , . D. J. Gross, A. Hashimoto and N. Itzhaki, [*Observables of Non-Commutative Gauge Theories*]{}, , . A. Dhar and Y. Kitazawa, [*High-Energy Behavior of Wilson Lines*]{}, , . Y. Kitazawa, [*Vertex Operators in IIB Matrix Model*]{}, , . S. Iso, H. Terachi and H. Umetsu, [*Wilson Loops and Vertex Operators in Matrix Model*]{}, , . D.A. Varshalovich, A.N. Moskalev and V.K. Khersonskii, [*Quantum theory of angular momentum : irreducible tensors, spherical harmonics, vector coupling coefficients, 3nj symbols*]{}, World Scientific, 1988.
[^1]: The large momentum limit of Wilson line correlators is discussed in [@Gross; @DhKhe].
[^2]: The two point functions are slightly different since there are no $log(K)$ factors unlike in (\[logk\]).
[**LSDA+U approximation-based analysis of the**]{}
[**electronic structure of CeFeGe$_3$**]{}
[E. Chigo-Anota$^{\dagger}$[^1], A. Flores-Riveros$^{\ddagger}$ and J. F. Rivas-Silva$^{\ddagger}$ ]{}
$^{\dagger}$[*Posgrado en Ciencias Químicas-Facultad de Ciencias Químicas, Benemérita Universidad Autónoma de Puebla, Blvd. 14 Sur 6301, 72570 Puebla, Pue., México, e-mail: echigoa@sirio.ifuap.buap.mx*]{}
$^{\dagger}$[*Facultad de Ingeniería Química, Benemérita Universidad Autónoma de Puebla, Av. San Claudio y Blvd. 18 Sur , 72570 Puebla, Pue., México.*]{}
$^{\ddagger}$[*Instituto de Física “Luis Rivera Terrazas”, Benemérita Universidad Autónoma de Puebla, Apdo. Postal J-48, 72570 Puebla, Pue., México, e-mail: rivas@sirio.ifuap.buap.mx*]{}
[Abstract]{}
We perform *ab initio* electronic structure calculations of the intermetallic compound CeFeGe$_3$ by means of the Tight Binding Linear Muffin-Tin Orbital-Atomic Sphere Approximation (TB-LMTO-ASA) method within the Local Spin Density Approximation including the so-called Hubbard correction term (LSDA+U$^{SIC}$), using the Stuttgart TB-LMTO-ASA code in the framework of Density Functional Theory (DFT).
KEYWORDS: Ab initio calculations, LSDA+U approximation, intermetallic compound.\
PACS: 31.15.Ar, 31.15.Ew, 71.27.+a
Introduction
============
Heavy electron compounds are those having a specific heat coefficient $\gamma$ of the order of J mol$^{-1}$K$^{-2}$, much larger than that of simple metals (typically in the range of mJ mol$^{-1}$K$^{-2}$). This implies that such compounds, which have received a great deal of attention for a long time, possess a large effective mass m$^*$, outweighing the free electron mass by several hundred times \[1\]. They can be classified into two groups, depending on the position of the $4f$ or $5f$ level relative to the Fermi level: the concentrated Kondo (CK) compounds and the intermediate valence (IV) compounds \[2,3\]. The CK compounds have an integer valence at temperatures much higher ($T\gg T_k$) than the Kondo temperature $T_k$, below which the Kondo effect (an effect observed in metals with magnetic impurities) appears. At comparatively low temperatures $(T\ll T_k$) they form a Fermi liquid state \[4\] with a reduced magnetic moment. The IV compounds, on the other hand, do not possess an integer valence at room temperature as a result of the strong hybridization between the $4f$ electrons and the conduction electrons, due to the anomalous proximity of the $4f$ level to the Fermi level.
Remarkably, the CeFeGe$_3$ compound studied here \[5\] apparently exhibits both behaviors: a high T$_k$, of the order of 100 K, and an integer valence for Ce at room temperature. This study is motivated by the fact that this compound is known to present a strongly correlated electronic character, which occurs when the Coulomb repulsion between electrons strongly inhibits their motion, so that they become highly localized. Because of this, the normally expected metallic behavior is not observed in compounds containing lanthanides (cerium ions with $4f$ electrons) or actinides (uranium and neptunium ions with $5f$ electrons). Some examples are CeAl$_3$, CeCu$_2$Si$_2$, CeCu$_6$, UBe$_{13}$, UCd$_{11}$, U$_2$Zn$_{17}$ and NpBe$_{13}$, as well as transition metal oxides, organic metals and carbon compounds, e.g., carbon nanotubes.
In the first stage of the present work, *ab initio* calculations were carried out to investigate the electronic structure of the intermetallic compound CeFeGe$_3$ (tetragonal structure with space group 107) by using a DFT method \[7\] in the LMTO-ASA approximation \[8\], whereas in the second, the LSDA+U$^{SIC}$ approximation \[9\] was used, as implemented in version 47 of the Stuttgart TB-LMTO-ASA code \[8\]. The density of states (DOS), total and partial, for cerium and iron, as well as the band structure (BS), are obtained at the geometrical conformation of the compound optimized with the CASTEP program (Cambridge Serial Total Energy Package), which makes use of ultrasoft pseudopotential theory \[10\]. The Coulomb and exchange parameters used in the calculations were $\mathrm{U}=5.4$ eV \[11\] for cerium and $\mathrm{U}= 2.3$ eV \[12\] $(J=0.9$ eV) for iron, as reported in the literature.
Computational approach.
=======================
LMTO-ASA and CASTEP Calculations.
---------------------------------
The LMTO-ASA approximation employs a splitting of the unit cell into overlapping Wigner-Seitz (WS) spheres with a maximum overlap of $15\%$, which is generally considered reasonable when the potential is spherically symmetric within the spheres. In addition, use is made of the ASA (Atomic Sphere Approximation) condition, i.e., a spherical approximation containing no free-electron zone in the Muffin-Tin structure. For open structures, such as the one considered here, a set of empty spheres is introduced as a device for describing the repulsion potentials in the interstices (between an atomic and an interstitial sphere an overlap of $20\%$ is allowed). The total volume of the WS spheres is equal to the unit cell volume, thus eliminating the interstitial region.
On the other hand, the Muffin-Tin (MT) orbital is energy dependent with the following linearized form: $$\begin{aligned}
\phi_{\wedge}(\vec r) = i^l Y_{lm}(\hat{r})
\left\{
\begin{array}{cll}
\phi_\wedge (E,r) + p_{\wedge}(r/S_R)^l\, & ; & r < S_R\\
(S_R/r)^{l+1}\, & ; & r > S_R
\end{array}
\right \}\end{aligned}$$ where $\phi_{\wedge}(E,r)$ is found by numerical solution of the radial Schrödinger equation, with $\wedge= RL$, $R$ the site index, and $l$ and $m$ the orbital and magnetic quantum numbers of the angular momentum. The $Y_{lm}$’s are spherical harmonics and $S_R$ is the MT radius. The numerical orbital $\phi_{\wedge}(E,r)$ is augmented inside the sphere by a renormalized spherical Bessel function, $$\begin{aligned}
J_{k \wedge}(r) = i^l Y_{lm} \frac{(2l+1)!!}{(kS_R)^l} j_l(kr),\end{aligned}$$ whereas outside the spheres, a renormalized spherical Hankel function is added: $$\begin{aligned}
H_{k \wedge}(r) = i^lY_{lm} \frac{(kS_R)^{l+1}}{(2l-1)!!} h_l(kr).\end{aligned}$$ Here, $h_l=j_l-in_l$ is a linear combination of the spherical Bessel and Neumann functions. Thus, the tail of the MT orbital at $r>S_R$ is the solution of the Helmholtz equation with zero kinetic energy. The potential parameters $p_{\wedge}$ are chosen so as to ensure that the wave function is continuous and differentiable at the sphere boundary.
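The renormalizations in Eqs. (2) and (3) are chosen so that, in the $k\to 0$ limit, the augmented functions reduce to the zero-kinetic-energy tails $(r/S_R)^l$ and $(S_R/r)^{l+1}$ appearing in Eq. (1). A minimal numerical sketch of the interior limit (Python with scipy assumed; the parameter values are arbitrary illustrations):

```python
from scipy.special import spherical_jn  # spherical Bessel function j_l

def double_factorial(n):
    # (2l+1)!! used in the renormalization of Eq. (2)
    return 1 if n <= 0 else n * double_factorial(n - 2)

l, S_R, r, k = 2, 1.0, 0.5, 1e-4   # small k approximates the k -> 0 limit

# (2l+1)!! / (k S_R)^l * j_l(k r)  ->  (r / S_R)^l  as k -> 0,
# since j_l(x) ~ x^l / (2l+1)!! for small argument.
J_tail = double_factorial(2*l + 1) / (k * S_R)**l * spherical_jn(l, k * r)
assert abs(J_tail - (r / S_R)**l) < 1e-6
```

The analogous check for the Hankel tail of Eq. (3) requires tracking the phase conventions of $h_l=j_l-in_l$ and is omitted here.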
In this approximation the exchange-correlation potential $V_{xc}$ is either of the von Barth-Hedin (vBH) \[13\] or the Vosko-Ceperley-Alder \[14\] type at the Local Spin Density Approximation (LSDA) level, whereas at the Generalized Gradient Spin (GGS) level, a functional containing a density gradient correction, the potentials used are of the Langreth-Mehl-Hu \[15\] or Perdew-Wang type \[16\]. The exchange-correlation potential used in the present calculations is that of the von Barth-Hedin parametrization, whose general form is $$\begin{aligned}
v^{\sigma}_{xc} = A(r_s)
\left(\frac{2n_{\sigma}}{n}\right)^{\frac{1}{3}} + B(r_s),\end{aligned}$$ where $A(r_s)$ and $B(r_s)$ are analytical functions.
Geometry Optimization
---------------------
The unit cell of the ternary system was optimized by means of *ab initio* methods, based on a DFT treatment within the Local Spin Density Approximation (LSDA) and within the Generalized Gradient Spin (GGS) approximation. In the CASTEP program \[10\] the wave function of the valence electrons is expanded in plane waves, whereas the core electrons (bound to the nucleus) are taken into account through their effective interaction on the valence electrons, in the form of pseudopotentials added to the corresponding Kohn-Sham Hamiltonian.
The pseudopotentials used in this work were generated by Vanderbilt \[17\] in the Kleinman-Bylander \[18\] representation. The spin-polarized parametrizations developed by Perdew-Zunger \[19\] and Perdew-Burke-Ernzerhof \[20\] were used for the exchange and correlation energy. The conjugate gradient method is employed to relax the nuclear positions. The sampling of the Brillouin zone was $7 \times 7 \times 8$ and $9 \times 9 \times 11$ for the LSDA and GGS approximations, respectively, using the Monkhorst-Pack scheme \[21\]. The cutoff energy for the plane waves was approximately 400 eV. Self-consistency in the calculations was considered attained whenever the total energy changes were $\leq$ 5 meV, which corresponds to a criterion of reasonably good convergence.
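For reference, the Monkhorst-Pack grids quoted above consist of fractional coordinates $u_r=(2r-q-1)/(2q)$, $r=1,\dots,q$, along each reciprocal axis. A minimal sketch of such a grid generator (symmetry reduction to the irreducible wedge, which yields counts such as the 446 k-points quoted later in the text, is not implemented here):

```python
import itertools

def monkhorst_pack(q1, q2, q3):
    """Full (unreduced) Monkhorst-Pack grid of fractional k-points."""
    axes = [[(2*r - q - 1) / (2*q) for r in range(1, q + 1)]
            for q in (q1, q2, q3)]
    return list(itertools.product(*axes))

kpts = monkhorst_pack(7, 7, 8)     # the LSDA grid used in the text
assert len(kpts) == 7 * 7 * 8 == 392
assert kpts[0] == (-3/7, -3/7, -7/16)   # first fractional coordinate per axis
```

The grid is symmetric about the zone center and, for even subdivisions, avoids the $\Gamma$ point, which is the usual motivation for this sampling scheme.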
To perform the geometry optimization we let the lattice constants $a, b$ and $c$ vary as free parameters, though they are expected to undergo changes no larger than 5% relative to the experimental values. The crystal energy is minimized with respect to these degrees of freedom by taking into account the Hellmann-Feynman forces on the atoms and the components of the stress tensor \[22\]. Finally, the optimization criteria used were 0.00002 eV, $0.0010$ Å and $0.050$ eV/Å for the energy change, the root-mean-square displacement, and the root-mean-square force per atom, respectively.
Mathematical Structure of the approximation LDA+U$^{SIC}$
---------------------------------------------------------
In the LSDA+U \[9\] method the electrons are separated into two subsystems (i and ii). For Ce, (i) delocalized $s, p$ and $d$ electrons, which are described by an orbital-independent one-electron potential (the LSDA potential), and (ii) localized $4f$ electrons, for which we take into account the orbital degeneracy and a Coulomb interaction of the form $\frac{1}{2}U \sum_{\sigma \neq \sigma^{'}} n_{\sigma}
n_{\sigma{'}}$, where $n_\sigma$ is the $f$-orbital (or $d$-orbital) occupancy. For Fe, (i) delocalized $s, p$ electrons, and (ii) localized $3d$ electrons.
The Hamiltonian for the spin orbitally degenerate systems is of the form $$\begin{aligned}
\widehat{H} =
\sum_{i,j} \sum_{m, m'} \sum_{\sigma} t_{ij}^{mm'} \hat{c}^+_{im\sigma}
\hat{c}_{jm'\sigma}
& + & \frac{(U-J)}{2} \sum_i \sum_{m \neq m'} \sum_{\sigma} \hat{n}_{im\sigma}
\hat{n}_{im'\sigma}
\nonumber \\
& + & \frac{U}{2} \sum_{i,m,m'} \sum_{\sigma}
\hat{n}_{im\sigma} \hat{n}_{im'-\sigma}\end{aligned}$$ where $\hat{c}^+_{im\sigma}$ and $\hat{c}_{jm'\sigma}$ are electron creation and annihilation operators with orbital index $m$ and spin $\sigma(=\alpha, \beta)$ at lattice sites $i$ and $j$, $t_{ij}^{mm'}$ are the hopping integrals, and $\hat{n}_{im\sigma}$ is the number operator of the $f$ (or $d$) electron at site $i$, orbital $m$, with spin $\sigma$. The first term in Eq. (5) describes the hopping of electrons between lattice sites $i$ and $j$; the interactions between the localized electrons are described by the second and third terms, where $U$ and $J$ represent the on-site Coulomb and exchange interactions, respectively.
If we want to correct the LSDA functional for localized electrons we must first extract their LSDA treatment to avoid double count of the interaction. The spin density functional theory assumes a local exchange-correlation potential which is a function of the local charge and spin densities, so, fluctuations around the average occupations are neglected. In the mean field approximation (MFA) we can write $$\begin{aligned}
\hat{n}_{m \sigma} \hat{n}_{m' \sigma'} = \hat{n}_{m \sigma} n_{m'
\sigma'} + \hat{n}_{m' \sigma'} n_{m \sigma} - n_{m \sigma} n_{m'
\sigma'}\end{aligned}$$ where $n_{m \sigma}$ is the mean value of $\hat{n}_{m \sigma}$ and $n_{\sigma} = \sum_{m} n_{m \sigma}$. By introducing this approximation in Eq. (5), we obtain the expression for potential energy in the mean field approximation, $$\begin{aligned}
E^{MF} = \frac{U-J}{2} \sum_i \sum_{\sigma}
\left( n_{i\sigma}^2 - \sum_m n_{im\sigma}^2 \right) + \frac{U}{2} \sum_i \sum_{\sigma}
n_{i \sigma} n_{i-\sigma}\end{aligned}$$
Solovyev [*et al.*]{} \[23\] propose to extract an energy function from the total number of electrons per spin $n_{i \sigma}$ which would act as the LSDA potential. Such expression can be obtained from Eq.(5) in the atomic limit where occupation of the individual particle $n_{i \sigma}$ is either 0 or 1: $$\begin{aligned}
E_{cor} ^{LSDA} = \frac{U-J}{2} \sum_{i \sigma} n_{i \sigma} (n_{i
\sigma} -1) + \frac{U}{2} \sum_{i \sigma} n_{i \sigma}
n_{i-\sigma}\end{aligned}$$ This energy is now subtracted from $E^{MF}$ to obtain that associated with the total energy for localized states: $$\begin{aligned}
\Delta E = E^{MF} - E_{cor}^{LSDA}
& = & \frac{U-J}{2} \sum_{i \sigma} \left( n_{i \sigma} - \sum_m n_{im \sigma}^2 \right)
\nonumber \\
& = & \frac{U-J}{2} \sum_{im \sigma} (n_{im \sigma} - n_{im \sigma}^2).\end{aligned}$$
The fraction of the potential acting on the localized orbital $(m\sigma)$ is found by differentiating Eq. (9) with respect to the occupation number $n_{im\sigma}$: $$\begin{aligned}
\frac{d \Delta E}{d(n_{i m \sigma})}= \Delta V_{i m
\sigma}=(U-J) \left ( \frac{1}{2} -n_{i m \sigma} \right ).\end{aligned}$$ An orbital dependent one-electron potential is thus obtained.
Results and discussion.
=======================
First Stage: LMTO-ASA calculations
----------------------------------
Results obtained in the optimization of the intermetallic compound CeFeGe$_3$ are shown in Table II. The LSDA and GGS approximations yield values below the experimental parameters, which may be due to the fact that neither approximation removes the error introduced by the self-interaction term arising from double counting in the Hamiltonian. Furthermore, this material does not follow the trend typical of semiconductors, for which the LSDA and GGA results would remain below and above the experimental parameters, respectively. A similar situation is found for the equilibrium properties of plutonium ($\delta$-Pu phase) \[24\], for which the LDA and GGS results remain below the corresponding experimental values. We work with the CASTEP program at the GGS level employing 446 k-points in the Brillouin zone. On the other hand, the density of states (total and partial) and the band structure were obtained via the LMTO-ASA methodology within DFT and the LSDA approximation, using the von Barth-Hedin parametrization at the optimized unit cell geometry. The corresponding results are compiled in Table I.
The total density of states (DOS) is given in Fig. 1, whereas the partial DOS associated with the cerium $f$-states and the iron $d$-states are illustrated in Figs. 2 and 3, respectively. They all indicate a metallic behavior for the material analyzed here, since the Fermi level is slightly displaced from the band center. This is also supported by the band structure (BS) plot shown in Fig. 4, where a dense concentration of bands due to the cerium $f$ states occurs around the Fermi level. Furthermore, the fact that the bands are almost horizontal points to the characteristic behavior of a heavy fermion compound (in our case, $\gamma=150$ mJ mol$^{-1}$K$^{-2}$) \[1,5\].
In the literature it is reported that the greater magnetic contribution stems from cerium (the magnetic impurity responsible for the Kondo effect), a fact corroborated by the present theoretical calculations (see Table I). Once the geometrical parameters are optimized, the material analyzed here turns out to be classified as an intermediate valence compound. In our calculation, Ce has a magnetic moment of $-$0.00132 $\mu_{B}$ whereas the compound’s total magnetic moment is $-$0.0010084 $\mu_{B}$. We therefore follow the criterion proposed by Vildosola and Llois \[25\], who, [*using calculations at the LSDA level with the Perdew-Wang exchange-correlation potential, propose that the materials can be classified as follows:*]{} $$\begin{aligned}
\fbox{$\begin{array} {rcl}
Itinerant\,\,\,if &\mu_{Ce} & = \,\,0, \\
Intermediate\,\,valence\,\,\,if &\mu_{Ce} & < \,\,0.5 \mu_B, \\
Magnetic\,\,\,if &\mu_{Ce} & > \,\,0.5 \mu_B
\end{array}$
}\end{aligned}$$ An intermediate valence system is usually defined in the literature as one having a noninteger average number of $f$ electrons. Note that the charge on the Ce atom according to the LSDA calculation is $-$2.4654 a.u., with a magnetic moment of $-$0.00132 $\mu_B$. This means that almost 3 electrons are added to the normal atomic configuration $[Xe]4f^15d^16s^2$, with alpha and beta spins pairing off one another, which results in an overall moment of nearly zero magnitude.
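The boxed criterion can be written as a small classification routine. The sketch below is an illustration only; the tolerance `tol` used to decide when $\mu_{Ce}$ counts as zero is an assumed numerical threshold, not part of the criterion of Ref. \[25\]:

```python
def classify_ce_compound(mu_ce, tol=1e-3):
    """Vildosola-Llois-type classification by the Ce moment (Bohr magnetons).

    tol is a hypothetical numerical cutoff for treating mu_Ce as zero.
    """
    m = abs(mu_ce)
    if m < tol:
        return "itinerant"
    return "intermediate valence" if m < 0.5 else "magnetic"

# The LSDA Ce moment obtained in this stage:
assert classify_ce_compound(-0.00132) == "intermediate valence"
```

Applied to the LSDA+U cerium moment of $-1.1407\ \mu_B$ obtained in the second stage, the same routine returns "magnetic", in line with the classification discussed there.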
Second Stage: LSDA+U calculations
---------------------------------
As can be inferred from the analysis presented in the previous stage, DFT, because of its intrinsic formulation, cannot deal properly with strongly correlated systems. We have therefore chosen to resort to the LSDA+U$^{SIC}$ approximation developed by Anisimov $et$ $al.$ \[9\] to calculate the electronic structure of the intermetallic compound CeFeGe$_3$, as implemented in version 47 of the Stuttgart TB-LMTO-ASA code. For this calculation we employed the parameters reported in the literature: $U=5.4$ eV \[11\] for cerium and $U=2.3$ eV ($J = 0.9$ eV) \[12\] for iron.
The resulting density of states in this case is plotted in Fig. 5, whereas the partial DOS for cerium $4f$-states and iron $3d$-states are depicted in Figs. 6 and 7, respectively. By virtue of the approximation used in this stage, the latter include effects arising from strong correlation, not accounted for in those obtained by means of a conventional DFT calculation. The corresponding band structure is displayed in Fig. 8.
In the total DOS the greatest contribution comes from the cerium $4f$-states, for which an increased splitting within the energy bands, in addition to a relative overall shift around the Fermi level, can be seen in Fig. 5. This feature also shows up in the partial density of states, where a large band energy splitting arising from the cerium $4f$-states (approximately 5 eV wide) can be appreciated in Fig. 6, as compared with the corresponding display (Fig. 2) obtained in the previous stage, in which a nearly single peak dominates, located very close to and above the Fermi level. The magnetic contributions to this material come partly from both cerium and iron, even prior to the introduction of the exchange parameter (previous stage), whereas the magnetic moment of germanium is almost null, as seen in Table I. Furthermore, the material presents metallic behavior. When carrying out calculations with an exchange parameter $J$ equal to 0.90 eV for iron, the magnetic moment is reduced by $42\%$, which probably follows from a slight localization effect of the electrons on the iron $3d$-shell. On the other hand, the high concentration of energy bands around the Fermi level is consistent with metallic behavior. In fact, a greater density of bands is observed in this stage (see Fig. 8) as compared to that observed without introducing the Coulomb parameter (see Fig. 4). Such a feature, at this stage, points to a typical characteristic of heavy fermion materials.
According to the criterion mentioned in the first stage, proposed by Vildosola and Llois \[25\], and extended here to the total magnetic moment per unit cell, the magnetic moments of cerium ($-1.1407\;\mu_B$), iron ($0.6147\;\mu_B$) and the empty spheres ($-0.00055\;\mu_B$) give altogether a total moment of $-0.552\;\mu_B$. This falls within the classification corresponding to a magnetic material. Also, the charge distribution shown in Table I (with empty spheres introduced by the ASA condition) points to a possible covalent character among the cerium, iron and germanium atoms, although it manifests itself very weakly, despite the correlation effects accounted for in the calculation. Comparing the charge of the Ce atom in the LSDA(vBH) and LSDA(vBH)+U cases, it is similar for both: $-2.4654$ a.u. and $-2.3847$ a.u., respectively. However, the $U$ interaction gives rise to a complete change in the spin behavior: whereas in the absence of $U$ almost 3 electrons are added to the neutral atom while the overall magnetic moment remains nearly zero, the salient effect of the $U$ interaction is to align the electrons, yielding an effective magnetic moment of magnitude $1.1407\;\mu_B$.
Since there is no charge difference between the two cases and the $U$ interaction acts only on the $f$ electrons, the filling of the $f$-shell proceeds via a splitting of the ${\alpha}$ and ${\beta}$ levels by the energy $U$ and an alignment of the electrons on the $f$-shell only. The remaining charge distributes over the other shells: $s$, $p$ and $d$.
Conclusions
===========
The calculations performed by means of the LMTO-ASA approximation within DFT lead to results similar to those reported in the literature; in particular, the magnetic contribution obtained for the compound analyzed here practically corresponds to that of a nonmagnetic material. The partial DOS, together with the band structure, show metallic behavior. Furthermore, the results obtained when the Coulomb parameter $U$ (and $J$ for iron) is introduced also favor metallic behavior and, in addition, a heavy-fermion character for this material. The two-stage analysis performed in the present study also indicates a weak covalent character of the charge distribution. The pronounced reduction of the magnetic moment of iron is ascribed to a localization of the electronic cloud on the $3d$-shell of this atom, which arises as a direct consequence of taking strong electronic correlation effects into account.
Contract grant sponsors: Consejo Nacional de Ciencia y Tecnología (CONACYT, México) and Vicerrectoría de Investigación y Estudios de Posgrado (VIEP, México) at Benemérita Universidad Autónoma de Puebla. Contract grant numbers: 32213-E (CONACYT) and II-101I02 (VIEP).
[40]{} G. R. Stewart, *Rev. Mod. Phys.* [**56**]{}, 755 (1984); Peter Fulde, *J. Phys. F: Met. Phys.* [**18**]{}, 601 (1988). N. B. Brandt and V. V. Moshchalkov, *Adv. Phys.* [**33**]{}, 373 (1984). A. C. Hewson, *The Kondo Problem to Heavy Fermions* (Cambridge University Press, 1997). Julian G. Sereni, *Rev. Esp. Fís.* [**13**]{} (1), 25 (1999). H. Yamamoto, H. Sawa and M. Ishikawa, *Phys. Lett. A* [**196**]{}, 83 (1994); H. Yamamoto, M. Ishikawa, K. Hasegawa and J. Sakurai, *Phys. Rev. B* [**52**]{}, 10136 (1995); E. Chigo Anota, J. F. Rivas Silva, A. Bautista Hernandez and A. Flores Riveros, *Superficies y Vacío* [**16**]{} (1), 17 (2003). P. Fulde, *cond-mat/9803299* (1998); E. Chigo-Anota and J. F. Rivas-Silva, *Superficies y Vacío* [**17**]{}, xxx (2004); E. Chigo-Anota and J. F. Rivas-Silva, *Rev. Soc. Quím. Méx.* [**47**]{} (3), 221 (2003). P. Hohenberg and W. Kohn, *Phys. Rev. B* [**136**]{}, 864 (1964); W. Kohn and L. J. Sham, *Phys. Rev. A* [**140**]{}, 1133 (1965); W. Kohn, et al., *J. Phys. Chem.* [**100**]{}, 12974 (1996); K. Capelle, *cond-mat/0211443* v1 November (2002). Hans L. Skriver, *The LMTO Method* (Springer-Verlag, 1984); O. K. Andersen and O. Jepsen, *Phys. Rev. Lett.* [**53**]{}, 2571 (1984); O. Jepsen, G. Krier, A. Burkhardt and O. K. Andersen, *The TB-LMTO-ASA program*, Max-Planck Institute, Stuttgart, Germany (1995). V. I. Anisimov, I. V. Solovyev, et al., *Phys. Rev. B* [**48**]{}, 16929 (1993); A. I. Liechtenstein, V. I. Anisimov, and J. Zaanen, *Phys. Rev. B* [**52**]{}, 5467 (1995); V. I. Anisimov, F. Aryasetiawan, and A. I. Liechtenstein, *J. Phys.: Condens. Matter* [**9**]{}, 767 (1997); A. B. Shick, A. I. Liechtenstein and W. E. Pickett, *Phys. Rev. B* [**60**]{}, 10763 (1999); E. Chigo Anota and J. F. Rivas-Silva, *Rev. Méx. Fís.* [**50**]{}, xxx (2004). Cerius$^2$ Version 4.2 MatSci, *Manual of CASTEP*, Molecular Simulations Inc. (2000). B. E. Min, H. J. F. Jansen, T. Oguchi and A. J. Freeman, *Phys. Rev. B* [**33**]{}, 8005 (1986). A. M. Oles and G. Stolhoff, *Phys. Rev. B* [**29**]{}, 314 (1984). U. von Barth and L. Hedin, *J. Phys. C* [**5**]{}, 1629 (1972). D. M. Ceperley and B. J. Alder, *Phys. Rev. Lett.* [**45**]{}, 566 (1980). D. C. Langreth and M. J. Mehl, *Phys. Rev. Lett.* [**47**]{}, 446 (1981); C. D. Hu and D. C. Langreth, *Physica Scripta* [**32**]{}, 391 (1985). J. P. Perdew and Y. Wang, *Phys. Rev. B* [**33**]{}, 8800 (1986). D. Vanderbilt, *Phys. Rev. B* [**41**]{}, 7892 (1990). L. Kleinman and D. M. Bylander, *Phys. Rev. Lett.* [**48**]{}, 1425 (1982). J. P. Perdew and A. Zunger, *Phys. Rev. B* [**23**]{}, 5048 (1981). J. P. Perdew, K. Burke and M. Ernzerhof, *Phys. Rev. Lett.* [**77**]{}, 3865 (1996). H. J. Monkhorst and J. D. Pack, *Phys. Rev. B* [**13**]{}, 5188 (1976). O. H. Nielsen and R. M. Martin, *Phys. Rev. Lett.* [**28**]{}, 697 (1983). I. V. Solovyev, P. H. Dederichs and V. I. Anisimov, *Phys. Rev. B* [**50**]{}, 16861 (1994). J. Boucher, B. Siberchicot, F. Jollet and A. Pasturel, *J. Phys.: Condens. Matter* [**12**]{}, 1723 (2000); D. Price, B. R. Cooper, S. P. Lim, I. Avgin, *Phys. Rev. B* [**61**]{}, 9867 (2000); S. Y. Savrasov and G. Kotliar, *Phys. Rev. Lett.* [**84**]{}, 3670 (2000). V. L. Vildosola and A. M. Llois, *cond-mat/0001054* v1 January (2000).
[**Table I:**]{} Parameters utilized in the electronic structure calculations of the ternary compound CeFeGe$_3$ and theoretical data obtained by using the TB-LMTO-ASA approach.
[**Table II:**]{} Cell parameters optimized by means of the CASTEP program.
[**Fig. 1:**]{} Total density of states of CeFeGe$_3$ obtained by the LSDA-vBH approximation.
[**Fig. 2:**]{} Partial density of states (Ce 4$f$-states) obtained by the LSDA-vBH approximation.
[**Fig. 3:**]{} Partial density of states (Fe 3$d$-states) obtained by the LSDA-vBH approximation.
[**Fig. 4:**]{} Band structure obtained by the LSDA-vBH approximation.
[**Fig. 5:**]{} Total density of states obtained by the LSDA(vBH)+U approximation using the parameter U for cerium and parameters U and J (exchange) for iron.
[**Fig. 6:**]{} Partial density of states (Ce 4$f$-states) obtained by the LSDA(vBH)+U approximation.
[**Fig. 7:**]{} Partial density of states (Fe 3$d$-states) obtained by the LSDA(vBH)+U approximation.
[**Fig. 8:**]{} Band structure obtained by the LSDA(vBH)+U approximation using the parameter U for cerium and parameters U and J (exchange) for iron.
[**Table I**]{}
[ccccccc]{} & & & & & &\
Atoms in the & Crystallographic & MT sphere & Charge$^a$ & Magnetic$^a$ & Charge$^b$ & Magnetic$^b$\
CeFeGe$_3$ & Positions & radii & (a.u.) & Moment & (a.u.) & Moment\
& & (a.u.) & & ($\mu_B$)& & ($\mu_B$)\
Ce &(0.0, 0.0, 0.0) &4.1077 &-2.4654 &-0.00132 &-2.3847 &-1.1407\
Fe &(1.0,1.0,0.66) &2.4566 &-0.3027 &0.00033 &0.3339 &0.6147\
Ge$_1$ &(0.5,0.0,0.25) &2.5320 &1.1212 &-0.000005 &1.1208 &-0.01274\
Ge$_2$ &(1.0,1.0,0.42) &2.6284 &1.064 &-0.000013 &1.0499 &-0.02511\
E$_1$\* &(0.1224,0.1224,0.5684) &1.1281 &-0.1396 &-0.000001 &-0.1432 &-0.00055\
E$_2$\* &(0.1224,-0.1224,0.5684)&1.1281 &-0.1396 &-0.000001 &-0.1432 &-0.00055\
E$_3$\* &(-0.1224,0.1224,0.5684)&1.1281 &-0.1396 &-0.000001 &-0.1432 &-0.00055\
E$_4$\* &(-0.1224,-0.1224,0.5684)&1.1281&-0.1396 &-0.000001&-0.1432&-0.00055\
[**Table II**]{}
[ccccc]{} & & & &\
Experimental cell & Optimized cell & % of error & Optimized cell &% of error\
parameters & parameters & Exp. vs LSDA & parameters & Exp. vs GGS\
(Å) & (Å) LSDA & & (Å) GGS &\
a=b=4.332 & a=b=4.1767 & 3.71 & a=b=4.234 & 2.314\
c=9.955 & c=9.5981 & 3.72 & c=9.73 & 2.312\
[^1]: Corresponding author: Tel: +52 (222) 2295610, Fax: +52 (222) 2295611
---
abstract: 'If a fluid flow is driven by a weak Gaussian random force, the nonlinearity in the Navier-Stokes equations is negligibly small and the resulting velocity field obeys Gaussian statistics. Nonlinear effects become important as the driving becomes stronger and a transition occurs to turbulence with anomalous scaling of velocity increments and derivatives. This process has been described by V. Yakhot and D. A. Donzis, Phys. Rev. Lett. [**119**]{}, 044501 (2017) for homogeneous and isotropic turbulence (HIT). In more realistic flows driven by complex physical phenomena, such as instabilities and nonlocal forces, the initial state itself, and the transition to turbulence from that initial state, are much more complex. In this paper, we discuss the Reynolds-number-dependence of moments of the kinetic energy dissipation rate of orders 2 and 3 obtained in the bulk of thermal convection in the Rayleigh-Bénard system. The data are obtained from three-dimensional spectral element direct numerical simulations in a cell with square cross section and aspect ratio 25 by A. Pandey et al., Nat. Commun. [**9**]{}, 2118 (2018). Different Reynolds numbers $1\lesssim {\rm Re}_{\ell}\lesssim 1000$ which are based on the thickness of the bulk region $\ell$ and the corresponding root-mean-square velocity are obtained by varying the Prandtl number ${\rm Pr}$ from $0.005$ to 100 at a fixed Rayleigh number ${\rm Ra}=10^5$. A few specific features of the data agree with the theory but the normalized moments of the kinetic energy dissipation rate, ${\cal E}_n$, show a non-monotonic dependence for small Reynolds numbers before obeying the algebraic scaling prediction for the turbulent state. Implications and reasons for this behavior are discussed.'
author:
- Jörg Schumacher
- Ambrish Pandey
- Victor Yakhot
- 'Katepalli R. Sreenivasan'
title: 'Transition to turbulence scaling in Rayleigh-Bénard convection'
---
Introduction
============
The question of small-scale universality of turbulence has been at the core of turbulence research since its beginnings [@Taylor1935; @Kolmogorov1941; @Frisch1994]. If universality exists, statistical moments must follow well-defined scaling laws with respect to length and time scales, or to essential parameters such as the Reynolds number ${\rm Re}$. Most studies dedicated to this subject aim at the highest possible Reynolds numbers in experiments [@Sreenivasan1997] or simulations [@Kaneda2009; @Yeung2015] in order to achieve a sufficiently large range of scales separating the large and small ones in the flow. A different option is to study the statistics of gradients of the turbulent fields, which are always supported at the smallest scales, and whose statistical moments must follow well-defined laws with respect to parameters such as ${\rm Re}$. For homogeneous and isotropic turbulence (HIT), a phase transition (to be described momentarily) from Gaussian to non-Gaussian statistics of velocity derivative moments, and thus a transition to multiscaling, has been demonstrated in [@Yakhot2006] and more recently in [@Yakhot2017].
If the ideas proposed for this transition in the statistical properties are to have some general validity, they have to find application in more complex flows, such as wall-bounded shear flows [@Waleffe1997; @Eckhardt2007; @Smits2013] or thermal convection flows [@Chilla2012] as well. In this paper, we test these theoretical ideas for Rayleigh-Bénard convection (RBC). The mechanisms of production of turbulent kinetic energy in this flow are connected to life cycles of characteristic coherent structures of the thermal boundary layers [@Malkus1954; @Shishkina2005; @Zhou2007; @Chilla2012; @Schumacher2016], so the details are bound to be more complex than in homogeneous and isotropic turbulence. In particular, we will study here the scaling of moments of the kinetic energy dissipation rate with respect to Reynolds number.
Our RBC flow evolves in a large-aspect-ratio cell with $\Gamma=25$. In contrast to isotropic turbulence and wall-bounded flows, the Reynolds number Re is not a prescribed parameter but a derived quantity: it measures the turbulent momentum transfer in response to the applied temperature difference and is thus related to the Rayleigh number ${\rm Ra}$. Another property of importance for this flow is the Prandtl number ${\rm Pr}$, the ratio of the kinematic viscosity $\nu$ of the fluid to the temperature diffusivity $\kappa$. Here, a range of small to moderate Reynolds numbers is established by varying ${\rm Pr}$ over more than four orders of magnitude at a fixed Ra [@Pandey2018]; the lower the Prandtl number, the higher the Reynolds number [@Schumacher2015]. We focus our attention on the bulk of the flow, away from the boundary layers at the heated bottom and cooled top plates of the RBC setup.
The manuscript is organized as follows. In section II we provide a self-contained review of the foundations of the theory. Section III presents the numerical model and defines the essential parameters of the convection runs. Section IV reports our results and interpretation, and the last section summarizes the conclusions.
Scaling of moments of the kinetic energy dissipation rate
=========================================================
Before describing the present work, it appears useful to recast the essential points of Yakhot and Donzis in a self-contained manner. Their analysis is specifically connected to the $x_1$-component of the velocity field $u_i(x_j,t)$ and the corresponding longitudinal derivative $\partial_1 u_1=\partial u_1/\partial x_1$. Throughout this work, we will use index notation, e.g. ${\bm x}=(x_1,x_2,x_3)=x_j$, in combination with the Einstein summation convention. The derivative moment of order $2n$ is given by $$M_{2n} = \langle(\partial_1 u_1)^{2n}\rangle \quad\mbox{with}\quad M_{2n}=A_{2n} \frac{u_{\rm rms}^{2n}}{L^{2n}} {\rm Re}^{\rho_{2n}}\,.
\label{Mn}$$ Here $u_{\rm rms}$ is the root-mean-square velocity obtained in practice from all three velocity components by a combined volume-time average which is assumed to be equal to an ensemble average $\langle\cdot\rangle$. The large-scale Reynolds number Re is given by ${\rm Re}=u_{\rm rms}L/\nu$, $\nu$ being the kinematic viscosity and $L$ the characteristic outer length scale. The prefactors $A_{2n}$ are dimensionless constants. The $n$-th order moment of the dissipation rate is given by $$E_{n} = \langle\epsilon^n\rangle\quad\mbox{with}\quad E_{n}=B_{n} \frac{u_{\rm rms}^{3n}}{L^{n}} {\rm Re}^{d_{n}}\,,
\label{En}$$ where $B_n$ are dimensionless constants, and the dissipation rate field is given by $$\epsilon({\bm x},t) =2\nu\, S^2({\bm x},t) \quad\text{with}\quad S^2=S_{ij}S_{ji} \,,
\label{ediss}$$ and $S_{ij}=(\partial_i u_j+\partial_j u_i)/2$ is the rate-of-strain tensor. It follows that the normalized moments of dissipation and longitudinal derivative are given by $$\begin{aligned}
{\cal E}_{n} &= \frac{E_n}{(E_1)^n}=\frac{B_n}{B_1^n}{\rm Re}^{d_n-nd_1}\,,\\
{\cal M}_{2n}& =\frac{M_{2n}}{(M_2)^n}=\frac{A_{2n}}{A_2^n}{\rm Re}^{\rho_{2n}-n\rho_2}\,.\end{aligned}$$ In a flow with Gaussian derivative statistics, one has normal scaling, i.e., $d_n=n d_1$ and $\rho_{2n}=n \rho_2$, leading to $${\cal M}_{2n} =\frac{A_{2n}}{A_2^n}=(2n-1)!!=\frac{B_n}{B_1^n}={\cal E}_n\,.$$ The double factorial is given by $(2n-1)!!=1\cdot 3\cdot 5\dots (2n-1)$. Beyond a critical Reynolds number ${\rm Re}^{\ast}\approx 100-200$, the velocity derivative moments follow algebraic scaling laws with respect to Re. The scaling exponents of the moments are then anomalous, that is, $d_n\ne n d_1$ and $\rho_{2n}\ne n\rho_2$. This transition depends on the order $n$ of the moment, i.e., the higher the order, the smaller ${\rm Re}_n^{\ast}$. The scaling exponents can also be related to the anomalous scaling exponents, $\zeta_n$, for $n$-th order velocity increment moments in a fully developed inertial range of a high-Reynolds-number flow, as shown in [@Yakhot2006]. These predictions were confirmed later in high-resolution direct numerical simulations (DNS) [@Schumacher2007]. Normalized moments (both ${\cal E}_n$ and ${\cal M}_{2n}$) transition from the Gaussian to the non-Gaussian, and thence to the turbulent regime, at different Reynolds numbers. The situation is described schematically in Fig. \[Sketch\](a).
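The Gaussian baseline above can be checked directly by Monte Carlo sampling. The following sketch (plain numpy, independent of any flow data) confirms that the normalized even moments of a Gaussian variable reproduce the double factorials $(2n-1)!!$:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for a velocity derivative with Gaussian statistics:
x = rng.standard_normal(2_000_000)

def normalized_moment(x, n):
    """M_{2n} / (M_2)^n for a sample array x."""
    return np.mean(x ** (2 * n)) / np.mean(x ** 2) ** n

for n in (2, 3):
    double_factorial = int(np.prod(np.arange(2 * n - 1, 0, -2)))  # (2n-1)!!
    print(n, normalized_moment(x, n), double_factorial)
```

For $n=2$ and $n=3$ the estimates converge to 3 and 15, respectively, which is the flat pre-transition level sketched in Fig. \[Sketch\](a).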
In the spirit of Landau’s theory of phase transitions, two ideas are now adapted: (i) The transition for all moment orders occurs at a unique and suitably redefined Reynolds number. The rescaling is partly familiar and uses, instead of Re, the Taylor microscale Reynolds number $R_{\lambda} =\sqrt{5/(3\langle\epsilon\rangle\nu)} \,u_{\rm rms}^2$. But this step alone is not enough; we redefine the microscale Reynolds number on the basis of a generalized velocity to be discussed below, in units of which the transition proceeds at a unique and order-independent Reynolds number, $\hat{R}_{\lambda,n}^{\ast}=R_{\lambda}^{\ast}$ (see Fig. \[Sketch\](b)). (ii) This last step is necessary because this “phase transition" is characterized by strong fluctuations of an order parameter field in the transition region [@Landau1980; @Kadanoff1971]. These fluctuations are modeled here by a set of generalized velocity fields $\hat{v}_n$ given by $$\hat{v}_{n} = L\langle(\partial_1 u_1)^n\rangle^{\frac{1}{n}} \,.$$ We can also define the generalized velocity based on the fluctuating acceleration field $\hat{a}_n$ given by $$\hat{a}_{n}=L\langle(\partial_1 u_1)^{2n}\rangle^{\frac{1}{n}}\,.
\label{an}$$ Points (i) and (ii) can now be combined, using $\hat{a}_n$ for the generalized velocity, to define an order-independent microscale Reynolds number $$\hat{R}_{\lambda,n}= \sqrt{\frac{5}{3\langle\epsilon\rangle\nu}} L \hat{a}_n
= \sqrt{\frac{5}{3\langle\epsilon\rangle\nu}} A_{2n}^{\frac{1}{n}} u_{\rm rms}^2 {\rm Re}^{\frac{\rho_{2n}}{n}}\,.$$ Note that $L \hat{a}_n$ carries a physical dimension of length$^2$/time$^2$ for all $n$. Taking $\beta_{\epsilon}=\langle\epsilon\rangle L/u_{\rm rms}^3$ as the dimensionless bulk energy dissipation rate, we get $$\hat{R}_{\lambda,n}= \sqrt{\frac{5 A_{2n}^{\frac{2}{n}}}{3\beta_{\epsilon}}} {\rm Re}^{\frac{1}{2}+\frac{\rho_{2n}}{n}}\,.
\label{Re_rel}$$ The driving of the isotropic flow, which is restricted to scales $r\approx L$, requires the further assumptions [@Yakhot2017] that the forcing is Gaussian and white in time, and injects turbulent kinetic energy in such a way that the mean kinetic energy dissipation rate is [*independent*]{} of the Reynolds number. The latter implies that $d_1=0$. In accordance with the collection of DNS results of decaying and forced turbulence in ref. [@Sreenivasan1998], we can set $\beta_{\epsilon}\approx 0.4$ and thus $\sqrt{5/(3\beta_\epsilon)}\approx 1$.
The next part of the strategy is to calculate theoretically the unique value of the rescaled Reynolds number at which the transition takes place. We can then obtain, by matching the Gaussian behavior at low Reynolds numbers with the anomalously scaling power-law part (see Fig. \[Sketch\](b)), the exponents $d_n$ and $\rho_{2n}$. We are in the fortunate position that the renormalization group theory [@Yakhot1986; @Yakhot1992; @Yakhot1992a] for HIT provides exactly this. We now take three specific steps:
\(a) We first establish a relation between $\rho_{2n}$ and $d_n$ by using arguments outlined in [@Yakhot2006; @Schumacher2007]. In the limit of vanishing distances $r$ of a longitudinal velocity increment, the velocity is an analytic function such that the $x_1$-derivative of $u_1$ is defined as $$\begin{aligned}
\frac{\partial u_1}{\partial x_1}\approx \frac{u_1(x_1+\eta)-u_1(x_1)}{\eta}
\equiv \frac{\Delta_{\eta}u_1}{\eta}\,.\end{aligned}$$ The scale $\eta$ is a still-unknown fluctuating length scale distributed around the Kolmogorov dissipation length $\eta_K=\nu^{3/4}/\langle\epsilon\rangle^{1/4}$. Viscous effects become important when a local Reynolds number $Re_{\eta}$ is approximately unity, a property that is used in [@Paladin1987; @Frisch1991; @Yakhot2006]. Such a Reynolds number is given by $${\rm Re}_{\eta}=\frac{\eta \Delta_{\eta}u_1 }{\nu}\approx 1\,.
\label{methods1}$$ Thus follows the relation $\Delta_{\eta}u_1 = \nu/\eta$, leading to the consequence that $$\partial_1 u_1\approx \frac{(\Delta_{\eta}u_1)^2}{\nu} \quad\mbox{and}\quad \epsilon\approx \frac{(\Delta_{\eta}u_1)^4}{\nu}\,.
\label{methods2}$$ For the following, we assume that these relations are exact. The relations in Eq. (\[methods2\]) are now used to rewrite Eq. (\[Mn\]) as $${\rm Re}^{\rho_{2n}}=\frac{L^{2n}}{A_{2n} u_{\rm rms}^{2n}} M_{2n} = \frac{1}{A_{2n}} {\rm Re}^{-2n} \left(\frac{L}{\eta}\right)^{4n}\,,
\label{methods3}$$ and Eq. (\[En\]) as $${\rm Re}^{d_{n}}=\frac{L^{n}}{B_{n} u_{\rm rms}^{3n}} E_{n} = \frac{1}{B_{n}} {\rm Re}^{-3n} \left(\frac{L}{\eta}\right)^{4n}\,,
\label{methods4}$$ Consequently, the relation $B_n {\rm Re}^{d_{n}+n} = A_{2n} {\rm Re}^{\rho_{2n}}$ follows by comparing Eqs. (\[methods3\]) and (\[methods4\]) and implies $$d_n+n = \rho_{2n} \quad \mbox{and}\quad B_n = A_{2n}\,.
\label{methods5}$$ (b) Using the last relation in (\[methods5\]), we rewrite Eq. (\[Re_rel\]) as follows: $$\hat{R}_{\lambda,n} = \sqrt{\frac{5 A_{2n}^{\frac{2}{n}}}{3\beta_{\epsilon}}} {\rm Re}^{\frac{1}{2}+\frac{d_n+n}{n}} =
B_{n}^{\frac{1}{n}} {\rm Re}^{\frac{1}{2}+\frac{d_n+n}{n}}\,,$$ and thus, inverting this relation, we have $${\rm Re}=\left[B_{n}^{-\frac{1}{n}} \hat{R}_{\lambda,n}\right]^{\frac{2n}{2d_n+3n}}=\tilde{B}_n \hat{R}_{\lambda,n}^{\frac{2n}{2d_n+3n}}\,.
\label{rel1}$$ (c) Finally, at the critical point of the phase transition to anomalous scaling, we have a unique Reynolds number for all $n$. That is, $R_{\lambda,n}^{\ast}=R_{\lambda}^{\ast}$ and ${\rm Re}^{\ast}={\rm Re}^{\ast}_n/C_n$, where $C_n$ is a slowly varying function of $n$. The slow variation of $C_n$ is supported by the DNS [@Yakhot2017]. Thus, Eq. (\[rel1\]) gives $$\begin{aligned}
{\rm Re}^{\ast}=\frac{{\rm Re}_n^{\ast}}{C_n}&=\tilde{B}_n (\hat{R}^{\ast}_{\lambda,n})^{\frac{2n}{2d_n+3n}} \nonumber\\
&\Rightarrow {\rm Re}_n^{\ast}
\approx C (\hat{R}^{\ast}_{\lambda,n})^{\frac{2n}{2d_n+3n}}\,.
\label{rel1a}\end{aligned}$$ In the last step, we use this weak $n$-dependence to simplify $C\approx C_n\tilde{B}_n$ for all $n$. For $n=1$, it follows that ${\rm Re}^{\ast}={\rm Re}^{\ast}_1=C (\hat{R}_{\lambda,1}^{\ast})^{2/3}$ and thus $C$ can be obtained. We are now able to derive the exponents $d_n$ by requiring the turbulent and the laminar Gaussian scaling laws to match at $R_{\lambda}^{\ast}=R_{\lambda,n}^{\ast}$. In detail, we obtain $$(2n-1)!!= \left[C\left(\hat{R}^{\ast}_{\lambda,n}\right)^{\frac{2n}{2d_n+3n}} \right]^{d_n-n d_1}\,.
\label{matchrel}$$ In particular, the following three steps are used to solve the problem $$d_n=f(n, \hat{R}_{\lambda,n}^{\ast},C, d_1)\,.$$ (i) Use $d_1=0$ as a consequence of the applied forcing; (ii) $3\beta_{\epsilon}/5 =1$ as already discussed above; finally, also as stated earlier, (iii) the rescaled Taylor microscale Reynolds numbers are set to $\hat{R}^{\ast}_{\lambda}\equiv\hat{R}^{\ast}_{\lambda,n}$ for all $n$. The specific value of $\hat{R}^{\ast}_{\lambda}\approx 9$ follows from the renormalization group theory for the derivation of turbulence models [@Yakhot1992; @Yakhot1992a; @Yakhot2014], supported by simulations in [@Schumacher2007]. Thus we are left with the relation $$d_n=f(n)\,,$$ and the matching condition simplifies to $$\log \left[2^n\frac{\Gamma(n+\frac{1}{2})}{\sqrt{\pi}} \right] = d_n \log C + \frac{2n d_n}{2d_n+3n} \log \hat{R}^{\ast}_{\lambda}\,,
\label{relmatch}$$ where we have used the relation between the double factorial and the Gamma function. This gives a quadratic equation for $d_n$ that can be solved for each order $n>1$ as done in Yakhot and Donzis [@Yakhot2017]. From this, one obtains $d_2=0.157$ and $d_3=0.489$. Similar predictions for the exponents $d_n$ can be obtained within the multifractal framework [@Paladin1987; @Frisch1991; @Nelkin1990; @Biferale2008; @Benzi2009]. For a recent application to Burgers turbulence we also refer to [@Friedrich2018].
This completes the description of the theory used by Yakhot and Donzis [@Yakhot2017]. The theory is specific to HIT on at least two important counts: (1) the assumption of Gaussian white-in-time forcing, and (2) the use of the renormalization result that the transition Reynolds number is about 9. Clearly, the exponents $d_n$ are sensitive to both of these conditions. Yet, the theory introduces the ostensibly powerful concept that the scaling exponents in the turbulent state are entirely determined by the forcing and the transition Reynolds number. To claim any universality for these theoretical ideas, as Yakhot and Donzis intended, there has to be some concrete evidence from at least one more flow that does not belong to the HIT class. This is explored in the rest of the paper.
Thermal convection model
========================
In convection, the buoyancy field is the product of the acceleration due to gravity, $g$, multiplied by a density contrast. It is given by $$B({\bm x},t)=-g\frac{\rho({\bm x},t)-\rho_0}{\rho_0}\,,$$ where $\rho$ is the mass density field and $\rho_0$ a reference value. In a Boussinesq system with $\rho({\bm x},t)=\rho_0[1- \alpha (T({\bm x},t)-T_0)]$, the result is the well-known buoyancy term $g\alpha(T-T_0)$ that is added on the right hand side of the Navier-Stokes equation for the vertical velocity component $u_z$; $\alpha$ is the thermal expansion coefficient. The equations are made dimensionless by substituting space coordinates $x_i$, time $t$, velocity fields $u_i$, pressure field $p$, and temperature field $T$ by $\tilde{x}_i H$, $\tilde{t} H/U_f$, $\tilde{u}_i U_f$, $\tilde{p} \rho_0 U_f^2$, and $\tilde{T}\Delta T$, respectively. This implies that $\tilde{B}=\tilde{T}$. Here, $H$ is the height of the cell, $U_f=\sqrt{g\alpha\Delta T H}$ is the free-fall velocity, and $\Delta T>0$ is the temperature difference between the bottom and top plates.
We solve the coupled three-dimensional equations of motion for velocity field $u_i$ and temperature field $T$ in the Boussinesq approximation of thermal convection. They are given in dimensionless form by ($i,j=1,2,3$) $$\begin{aligned}
\label{ceq}
\frac{\partial \tilde{u}_i}{\partial \tilde{x}_i}&=0\,,\\
\label{nseq}
\frac{\partial \tilde{u}_i}{\partial \tilde{t}}+\tilde{u}_j \frac{\partial \tilde{u}_i}{\partial \tilde{x}_j}
&=-\frac{\partial \tilde{p}}{\partial \tilde{x}_i}+\sqrt{\frac{\rm Pr}{\rm Ra}} \frac{\partial^2 \tilde{u}_i}{\partial \tilde{x}_j^2}+ \tilde{B} \delta_{i3}\,,\\
\frac{\partial \tilde{T}}{\partial \tilde{t}}+\tilde{u}_j \frac{\partial \tilde{T}}{\partial \tilde{x}_j}
&=\frac{1}{\sqrt{{\rm Ra} {\rm Pr}}} \frac{\partial^2 \tilde{T}}{\partial \tilde{x}_j^2}\,.
\label{pseq}\end{aligned}$$ Here the Rayleigh number ${\rm Ra}=g\alpha\Delta T H^3/(\nu\kappa)$. The aspect ratio of the cell is $\Gamma = L/H=25$, with the cross-section of the cell being $L \times L$. No-slip boundary conditions for the fluid are applied at all walls. The top and bottom plates are held at constant dimensionless temperatures $\tilde{T}=0$ and 1, respectively. The side walls are thermally insulated. The equations are numerically solved by the Nek5000 spectral element method package [@nek5000] which converges exponentially fast and resolves the velocity derivatives accurately [@Scheel2013; @Pandey2018]. Table 1 summarizes all the runs analyzed and lists a few important parameters. From now on, for simplicity, we will drop the tilde for dimensionless quantities.
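In these units, the only control parameters entering Eqs. (\[nseq\])-(\[pseq\]) are the diffusion prefactors $\sqrt{{\rm Pr}/{\rm Ra}}$ and $1/\sqrt{{\rm Ra}\,{\rm Pr}}$. A minimal sketch evaluating them at the fixed ${\rm Ra}=10^5$ of the present runs, for the two extreme Prandtl numbers of Table I:

```python
import math

RA = 1.0e5  # fixed Rayleigh number of all runs

def dimensionless_coefficients(Pr, Ra=RA):
    """Dimensionless viscosity sqrt(Pr/Ra) and thermal diffusivity
    1/sqrt(Ra*Pr) in free-fall units, cf. the momentum and
    temperature equations above."""
    return math.sqrt(Pr / Ra), 1.0 / math.sqrt(Ra * Pr)

for Pr in (0.005, 100.0):  # extreme Prandtl numbers of Table I
    nu_t, kappa_t = dimensionless_coefficients(Pr)
    print(f"Pr={Pr}: nu={nu_t:.3e}, kappa={kappa_t:.3e}")
```

At ${\rm Pr}=0.005$ the thermal diffusivity exceeds the viscosity by the factor $1/{\rm Pr}=200$, which is why the low-Pr runs are the most inertial ones.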
The turbulent heat transfer can be decomposed into two contributions that sum up to a constant, a conductive and a convective heat current. In dimensionless form they can be written as $$\begin{aligned}
\label{current}
{\rm Nu} &=j_{\rm conv}+j_{\rm cond}\nonumber\\
&=\sqrt{{\rm Ra} {\rm Pr}} \langle u_3 T(x_3)\rangle_{A,t}-\frac{\partial\langle T(x_3)\rangle}{\partial x_3}\,, \end{aligned}$$ where Nu denotes the Nusselt number. Figure \[fig1\] displays mean vertical profiles of both currents which are obtained by averages with respect to the horizontal planes $A$ and time $t$. It is seen that the Nusselt number is significantly reduced for the low Prandtl number case. Note also that ${\rm Nu}(x_3)$ = constant for all the cases discussed. For ${\rm Pr}\ge 0.7$, the magnitude of $Nu$ is smaller than the convective heat flux (see top panel of Fig. \[fig1\]). This is in line with a finite [*positive*]{} slope of the mean temperature profile $\langle T(x_3)\rangle_{A,t}$ in the bulk, as is visible in the bottom panel of Fig \[fig1\].
${\rm Pr}$ $N_e$ $N$ ${\rm Nu}$ $\ell_1$ $\ell_2$ ${\rm Re}_{\ell}$ $\;\;\langle\epsilon\rangle_{V_{\ell}}\;\;$
---------------- ------------ ----------- ----- ----------------- ---------- ---------- ------------------- ---------------------------------------------
Run 1$^{\ast}$ 100 1,352,000 5 $4.6 \pm 0.003$ 0.247 0.753 0.44 $2.2\times 10^{-4}$
Run 2 70 1,352,000 5 $4.6 \pm 0.01$ 0.247 0.753 0.63 $4.0\times 10^{-4}$
Run 3 35 1,352,000 5 $4.5 \pm 0.01$ 0.247 0.753 1.23 $6.6\times 10^{-4}$
Run 4 7 1,352,000 5 $4.1 \pm 0.01$ 0.247 0.753 5.58 $2.1\times 10^{-3}$
Run 5 0.7 1,352,000 5 $4.2 \pm 0.02$ 0.247 0.753 48.9 $7.8\times 10^{-3}$
Run 6$^{\ast}$ 0.3 1,352,000 5 $4.0 \pm 0.01$ 0.247 0.753 96.7 $1.2\times 10^{-2}$
Run 7$^{\ast}$ 0.1 1,352,000 7 $3.5 \pm 0.01$ 0.247 0.753 215 $1.9\times 10^{-2}$
Run 8 0.021 2,367,488 7 $2.6 \pm 0.01$ 0.223 0.777 636 $2.9\times 10^{-2}$
Run 9 0.005 2,367,488 11 $1.9 \pm 0.01$ 0.223 0.777 1408 $3.3\times 10^{-2}$
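The inverse relation between Pr and ${\rm Re}_{\ell}$ quoted in the Introduction can be read directly off Table I; the following trivial check simply copies the two columns (values as tabulated):

```python
# Pr and Re_ell columns of Table I, Runs 1-9
Pr     = [100, 70, 35, 7, 0.7, 0.3, 0.1, 0.021, 0.005]
Re_ell = [0.44, 0.63, 1.23, 5.58, 48.9, 96.7, 215, 636, 1408]

# Re_ell grows strictly monotonically as Pr decreases at fixed Ra = 1e5
assert all(a < b for a, b in zip(Re_ell, Re_ell[1:]))
assert all(a > b for a, b in zip(Pr, Pr[1:]))
print("Re_ell increases monotonically with decreasing Pr")
```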
Statistical analysis
====================
Normalized energy dissipation rate
----------------------------------
We consider here only the energy dissipation to make our main point; the velocity derivatives as well as the vorticity have been computed, and the conclusions drawn from their behavior are similar. Figure \[fig2\] displays contour plots of mid-plane cross-sections of the instantaneous kinetic energy dissipation rate field. The levels are given on a decadal logarithmic scale. We display snapshots for two runs, at the smallest (top) and one of the largest (bottom) Prandtl numbers. The differences in the fine structure of the two fields are evident. Low-Pr convection is known to be highly inertial [@Pandey2018; @Schumacher2015], as can be seen here clearly.
The statistical analysis to be discussed below is always restricted to the fraction of the convection layer between heights $\ell_1$ and $\ell_2$ highlighted roughly by vertical lines in Fig. \[fig1\]; the exact values are listed in Table I. The amplitude of the mean kinetic energy dissipation rate in this region varies systematically with Pr and thus with ${\rm Re}_\ell$, as indicated in the Table. This Reynolds number, which corresponds to Re in the HIT case, is given by $${\rm Re}_{\ell} = \frac{u^{(\ell)}_{\rm rms}\ell}{\nu}=\sqrt{\frac{\rm Ra}{\rm Pr}} \, \ell \sqrt{\langle u_x^2+ u_y^2+ u_z^2\rangle_{V_{\ell}}} \,,
\label{Reell}$$ where $\ell$ is the thickness of the bulk region (which is outside the thermal boundary layers), $V_{\ell}=A\ell$ and $A = L\times L$ being the cross sectional area of the cuboid cell (see again Table I). Since we are interested in the small-scale fluctuations, we decompose the velocity and temperature fields as follows $$\begin{aligned}
{\bm u}^{\prime}({\bm x},t) &= {\bm u}({\bm x},t)-\langle {\bm u}\rangle_t({\bm x})\,,\nonumber\\
T^{\prime}({\bm x},t) &= T ({\bm x},t)-\langle T\rangle_t({\bm x})\,.\nonumber\end{aligned}$$ In dimensionless form, the kinetic energy dissipation rate is then given by $$\epsilon({\bm x},t) =\frac{1}{2}\sqrt{\frac{\rm Pr}{\rm Ra}} \left({\bm\nabla {\bm u}^{\prime}}+({\bm\nabla {\bm u}^{\prime}})^T\right)^2\,.$$ See also Eq. (\[ediss\]) for comparison.
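The dissipation formula above can be sketched numerically. A minimal illustration, using a synthetic velocity-gradient tensor in place of real DNS data (the tensor and the Ra, Pr values are purely illustrative, not from our runs):

```python
import numpy as np

def dissipation_rate(grad_u, Ra, Pr):
    """Dimensionless kinetic energy dissipation rate
    eps = 0.5*sqrt(Pr/Ra)*(grad u' + (grad u')^T)^2,
    with the square taken as the sum over all tensor components."""
    s = grad_u + grad_u.T          # symmetrized gradient (twice the strain rate)
    return 0.5 * np.sqrt(Pr / Ra) * np.sum(s**2)

# Synthetic example: a random fluctuating gradient tensor (illustrative only)
rng = np.random.default_rng(0)
grad_u = rng.normal(size=(3, 3))
eps = dissipation_rate(grad_u, Ra=1e7, Pr=0.7)
print(eps)  # non-negative by construction
```

Note that a purely antisymmetric gradient (rigid rotation) gives zero dissipation, as it must for a quantity built from the strain rate only.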
The data for the normalized moments ${\cal E}_n({\rm Re}_{\ell})$ for orders $n=2$ and $n=3$ are summarized in Fig. \[fig3\]. These moments at high Reynolds numbers indeed follow the expected scaling laws [@Yakhot2006; @Schumacher2007; @Yakhot2017]. The transition Reynolds number ${\rm Re}_{\ell}\approx 100$ – 200 also corresponds well with the value reported for homogeneous and isotropic turbulence. These two features are in accord with a universal transition and subsequent universal scaling. However, the major difference from the schematic in Fig. 1 is that the Reynolds number dependence in the pre-transition region is non-monotonic. The data at the lowest Reynolds numbers are indeed roughly comparable to $(2n-1)!!$, as indicated by the horizontal lines, but pass through a minimum before following the expected power-laws. In the rest of this section, we will consider the low-Reynolds-number behavior and how, if at all, the non-monotonic behavior of the data may still be consistent with the spirit of the theory of section 2.
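The Gaussian reference value $(2n-1)!!$ for the normalized moments (and the exponential value $n!$ invoked later) can be checked with a quick Monte Carlo sketch; sample sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2_000_000

# Gaussian case: eps ~ x^2 for Gaussian x gives <eps^n>/<eps>^n = (2n-1)!!
x = rng.normal(size=N)
eps_gauss = x**2
# Exponential case: an exponentially distributed eps gives <eps^n>/<eps>^n = n!
eps_exp = rng.exponential(size=N)

def normalized_moment(eps, n):
    """E_n = <eps^n>/<eps>^n."""
    return np.mean(eps**n) / np.mean(eps)**n

print(normalized_moment(eps_gauss, 2), normalized_moment(eps_gauss, 3))  # ~3, ~15
print(normalized_moment(eps_exp, 2), normalized_moment(eps_exp, 3))      # ~2, ~6
```

The Gaussian moments converge to $3!!=3$ and $5!!=15$, the exponential ones to $2!=2$ and $3!=6$, which are the horizontal reference levels used in the discussion.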
At very low Reynolds numbers prior to the onset of rising and falling thermal plumes, it is conceivable that the flow starts with a nearly Gaussian forcing, with dissipation moments given by $(2n-1)!!$. However, as the Reynolds number increases the small-scale fluctuations are mostly determined by the plumes. This is a significant difference from the low-Reynolds-number flows in [@Yakhot2017], which are always driven by [*stochastic*]{} forces. For convection, the momentum balance of the Boussinesq equations requires that $$g\alpha T^{\prime} \sim \frac{\partial u_z^{\prime}}{\partial t}\sim W_{\rm pl} \frac{\partial u_z^{\prime}}{\partial z} \sim
\sqrt{f_{\rm pl}\,\epsilon}\,,
\label{estimate}$$ where $W_{\rm pl}$ and $f_{\rm pl}$ are typical rising velocities and thermal plume detachment frequencies, respectively; they have been discussed, for example, in [@Castaing1989]. This relation would imply that the statistics of the kinetic energy dissipation rate are connected to those of the temperature fluctuations, so we shall discuss the nature of the temperature fluctuations next.
Temperature fluctuations
------------------------
The PDFs of the temperature fluctuations are obtained in the same bulk volume $V_{\ell}$ as the energy dissipation. Figure \[fig5\] (a) plots all data together with a Gaussian PDF (dashed line). The data at the highest Prandtl numbers develop the fattest tails, while the remaining runs for ${\rm Pr}\le 0.7$ depart only slightly from a Gaussian.
Predictions for the shape of the temperature PDF in convection have been worked out in [@Yakhot1989; @Yakhot1990]. According to this work, Gaussian temperature distributions follow when no particular velocity scale is present in the local convective heat flux $u_3^{\prime}T^{\prime}$, which is the production term for turbulent kinetic energy. An exponential distribution occurs when a characteristic plume velocity exists. Both functional forms were derived in [@Yakhot1990] for small values of the argument, $X=T^{\prime}/T^{\prime}_{\rm rms} \lesssim 1$, and are thus not related to the tails of the PDF of the temperature fluctuations. Therefore, our PDFs are magnified and replotted in Fig. \[fig5\] (b–d) for $|X|<1.5$ for three of the nine data sets. It is clear from this plot that the PDFs of the temperature fluctuations for the lowest Reynolds (or highest Prandtl) numbers behave more like an exponential distribution than a Gaussian one (see panel (b) of Fig. \[fig5\] for ${\rm Pr}=70$). In contrast, the PDFs of temperature fluctuations for higher Reynolds (or lower Prandtl) numbers are close to Gaussian in the center with sub-Gaussian tails, as seen in panels (c) and (d) of the same figure.
Our argument based on Eq. (\[estimate\]) is supported by Fig. \[fig6\], where we replot the dissipation rate moments as a function of the Reynolds number. The data are moments based on the PDFs of the temperature fluctuations via the substitution $T^{\prime}\sim \sqrt{\epsilon}$ from (\[estimate\]). The same [*qualitative*]{} crossover behavior as for the original data in Fig. \[fig3\] is observable. Since no quantitative estimate can be made, we took the lowest Reynolds number data as a reference in Fig. \[fig6\].
Again, in the intermediate Reynolds number regime between the Gaussian state and the turbulent state, a major change occurs which renders the moments of the energy dissipation lower than $(2n-1)!!$. We may speculate, for instance, that the forcing is then generated by stronger plumes that are still infrequent enough not to merge; this might push the moment values to lower numbers, leading to the observed minimum, which seems to come close to exponential statistics, ${\cal E}_n\sim n!$. We may thus enlarge the theoretical construct of section 2 in the following manner. A flow might always start at the lowest Reynolds number with Gaussian forcing but, in natural flows like convection, may develop an intermediate state in which the driving is no longer Gaussian and white in time. This state usually precedes the turbulent state, which makes the transition process non-universal, though the turbulent state itself may well be universal.
Summary and discussion
======================
In refs. [@Yakhot2006; @Yakhot2017], a theory was developed to understand self-consistently the evolution of homogeneous and isotropic turbulence subject to a Gaussian forcing that is white in time. The flow was shown numerically to evolve from a low-Reynolds-number state, in which the moments of energy dissipation equal $(2n-1)!!$, through a known transition point to a turbulent state with anomalous scaling exponents. The transition point was known in the sense that it was computed by a renormalization group approach to turbulence modeling [@Yakhot1992; @Yakhot1992a]. Matching the Gaussian initial state and the anomalous turbulent state at this transition point yielded the scaling exponents of the latter. This led to the speculation that the anomalous exponents of the turbulent state are determined entirely by the low-Reynolds-number state of the flow and the transition point. This is a powerful conclusion if true, and it can be advanced only by subjecting it to further tests. That has been the purpose of this paper.
After restating the theory to clarify its assumptions, we examined the data of recent convection simulations [@Pandey2018]. The low-Reynolds-number regime consists of two branches. We found that the flow at the lowest Reynolds numbers behaves as if the forcing were Gaussian, as indicated by ${\cal E}_2 \sim 3!!$ and ${\cal E}_3\sim 5!!$. It is followed by a regime that loosely resembles exponential statistics. The transition to anomalous scaling occurs at ${\rm Re}_{\ell}\sim 10^2$, which is, interestingly, of the same order of magnitude as that found in [@Yakhot2017]. The anomalous scaling exponents $d_n$ are the same as in the flow with Gaussian white-in-time forcing. The most important difference, however, is that the flow does not go directly from the initial state with Gaussian-like characteristics to the final turbulent state. We expect this last conclusion to be a general feature of transitional flows, with each flow developing its own (i.e., non-universal) intermediate state. This brings us to the conclusion that one needs to temper the notion that the initial state fully determines the turbulent state and its anomalous scaling exponents. Nevertheless, it appears fruitful to regard the Yakhot-Donzis theory as basic in some sense, and to examine it further in order to put it on a firmer basis.
In the present simulation data record, the variation of the Reynolds number results from a variation of the Prandtl number at a fixed Rayleigh number. This causes very different relative thicknesses of the viscous and thermal boundary layers and alters the structure of the thermal plumes, such as their stem width. As part of future work, we plan to conduct a series at ${\rm Pr}\equiv 1$, where an increase in Rayleigh number generates larger Reynolds numbers, and to compare those results with the present findings.
Acknowledgements. {#acknowledgements. .unnumbered}
=================
AP acknowledges support by the Deutsche Forschungsgemeinschaft within the Priority Programme on Turbulent Superstructures under Grant No. DFG-SPP 1881. JS wishes to thank the Tandon School of Engineering at New York University for financial support. Computing resources at the Leibniz Rechenzentrum Garching are provided by the Large Scale Project with Grant No. pr62se of the Gauss Centre for Supercomputing.
[1]{} G. I. Taylor, Proc. R. Soc. London Ser. A [**151**]{}, 421 (1935).
A. N. Kolmogorov, Dokl. Akad. Nauk SSSR [**32**]{}, 16 (1941).
U. Frisch, [*Turbulence-The Legacy of A. N. Kolmogorov*]{}, Cambridge University Press, Cambridge, UK, 1994.
K. R. Sreenivasan and R. A. Antonia, Annu. Rev. Fluid Mech. [**29**]{}, 435 (1997).
T. Ishihara, T. Gotoh, and Y. Kaneda, Annu. Rev. Fluid Mech. [**41**]{}, 165 (2009).
P. K. Yeung, X. M. Zhai, and K. R. Sreenivasan, Proc. Natl. Acad. Sci. USA [**112**]{}, 12633 (2015).
V. Yakhot, Physica D [**215**]{}, 166 (2006).
V. Yakhot and D. A. Donzis, Phys. Rev. Lett. [**119**]{}, 044501 (2017).
J. Schumacher, K. R. Sreenivasan, and V. Yakhot, New J. Phys. [**9**]{}, 89 (2007).
F. Waleffe, Phys. Fluids [**9**]{} (4), 883 (1997).
B. Eckhardt, T. M. Schneider, B. Hof, and J. Westerweel, Annu. Rev. Fluid Mech. [**39**]{}, 447 (2007).
A. J. Smits and I. Marusic, Phys. Today [**66**]{} (9), 25 (2013).
F. Chillà and J. Schumacher, Eur. Phys. J. E [**35**]{}, 58 (2012).
W. V. R. Malkus, Proc. R. Soc. Lond. A [**225**]{}, 185 (1954).
O. Shishkina and C. Wagner, J. Fluid Mech. [**546**]{}, 51 (2005).
Q. Zhou, C. Sun, and K.-Q. Xia, Phys. Rev. Lett. [**98**]{}, 074501 (2007).
J. Schumacher and J. D. Scheel, Phys. Rev. E [**94**]{}, 043104 (2016).
A. Pandey, J. D. Scheel, and J. Schumacher, Nat. Commun. [**9**]{}, 2118 (2018).
J. Schumacher, P. Götzfried, and J. D. Scheel, Proc. Natl. Acad. Sci. USA [**112**]{}, 9530 (2015).
J. Schumacher, J. D. Scheel, D. Krasnov, D. A. Donzis, V. Yakhot, and K. R. Sreenivasan, Proc. Natl. Acad. Sci. USA [**111**]{}, 10961 (2014).
L. D. Landau and E. M. Lifshitz, [*Course of Theoretical Physics, Statistical Physics, Volume 5*]{}, Butterworth-Heinemann, Oxford, 1980.
L. P. Kadanoff, [*Critical Behavior. Universality and Scaling.*]{} In Critical Phenomena, Proceedings of the Int. School of Physics, “Enrico Fermi”, Course LI, ed. M.S. Green, (New York, Academic Press, 1971), p. 101.
K. R. Sreenivasan, Phys. Fluids [**10**]{}, 528 (1998).
V. Yakhot and S. A. Orszag, J. Sci. Comput. [**1**]{} (1), 3 (1986).
V. Yakhot and L. Smith, J. Sci. Comput. [**7**]{} (1), 3 (1992).
V. Yakhot, S. A. Orszag, T. Gatski, S. Thangam, and C. Speziale, Phys. Fluids A [**4**]{}, 1510 (1992).
V. Yakhot, Phys. Rev. E [**90**]{}, 043019 (2014).
G. Paladin and A. Vulpiani, Phys. Rev. A [**35**]{}, 1971 (1987).
U. Frisch and M. Vergassola, Europhys. Lett. [**14**]{}, 439 (1991).
M. Nelkin, Phys. Rev. A [**42**]{}, 7226 (1990).
L. Biferale, Phys. Fluids [**20**]{}, 031703 (2008).
R. Benzi and L. Biferale, J. Stat. Phys. [**135**]{}, 977 (2009).
J. Friedrich, G. Margazoglou, L. Biferale, and R. Grauer, Phys. Rev. E [**98**]{}, 023104 (2018).
http://nek5000.mcs.anl.gov
J. D. Scheel, M. S. Emran, and J. Schumacher, New J. Phys. [**15**]{}, 113063 (2013).
D. A. Donzis, P. K. Yeung, and K. R. Sreenivasan, Phys. Fluids [**20**]{}, 045108 (2008).
B. Castaing, G. Gunaratne, F. Heslot, L. P. Kadanoff, A. Libchaber, S. Thomae, X.-Z. Wu, S. Zaleski, G. Zanetti, J. Fluid Mech. [**204**]{}, 1 (1989).
V. Yakhot, Phys. Rev. Lett. [**63**]{}, 1965 (1989).
V. Yakhot, S. A. Orszag, S. Balachandar, E. Jackson, Z.-S. She, and L. Sirovich, J. Sci. Comput. [**5**]{} (3), 199 (1990).
---
author:
-
title: '**Time scales of the s process - from minutes to ages**'
---
Introduction
============
The question of time scales is directly related to a number of key issues in $s$-process nucleosynthesis, and from the very beginning the corresponding chronometers were considered an important source of information. The long-lived radioactivities produced in the $s$ process were studied first, because of their cosmological importance and because they are less sensitive to the details of stellar scenarios. In principle, however, any unstable isotope along the $s$-process reaction path can serve as a potential chronometer, provided its decay defines a time scale that can be linked to a physically significant quantity.
The shortest possible time scale is related to [*convective mixing in He shell flashes*]{}, which take place in thermally pulsing low mass asymptotic giant branch (AGB) stars, where turnover times of less than an hour are attained (Sec. \[sec2\]).
[*Neutron capture*]{} occurs on time scales of days to years, depending on the $s$-process scenario. The life time of a given isotope $A$ is determined by the stellar neutron flux, $n_n
\times {\rm v}_T$, and by the stellar ($n, \gamma$) cross section, $$\tau_{n(A)} = \frac{1}{\lambda_{n(A)}}
= \frac{1}{n_n \times {\rm v}_T \times \sigma_{(A)}},$$ where $n_n$ denotes the neutron density and ${\rm v}_T$ the mean thermal velocity. If isotope $A$ is unstable against $\beta$-decay with a life time comparable to $\tau_{n(A)}$, the reaction path splits into a branching with a characteristic abundance pattern that reflects this time scale and provides a measure of the neutron density at the stellar site [@BSA01; @WVA01]. This situation is complicated by the fact that in most cases only theoretical evaluations are available for the neutron capture cross sections of the unstable isotopes. Furthermore, the $\beta$-decay rate of the branch-point isotope $A$ may depend on the temperature and/or electron density of the stellar plasma, as in the case of $^{176}$Lu (Sec. \[sec3\]).
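For orientation, the life time formula can be evaluated for representative $s$-process conditions. A minimal sketch in Python, with illustrative values for $n_n$, $kT$, and the capture cross section (the 500 mb cross section is an assumed round number, not a measured value):

```python
import math

def capture_lifetime(n_n, kT_keV, sigma_mb):
    """Neutron capture life time tau = 1/(n_n * v_T * sigma)
    in seconds, for n_n in cm^-3, kT in keV, sigma in millibarn."""
    m_n = 1.675e-24                       # neutron mass in g
    kT_erg = kT_keV * 1.602e-9            # 1 keV = 1.602e-9 erg
    v_T = math.sqrt(2.0 * kT_erg / m_n)   # mean thermal velocity in cm/s
    sigma = sigma_mb * 1e-27              # 1 mb = 1e-27 cm^2
    return 1.0 / (n_n * v_T * sigma)

year = 3.156e7  # seconds

# Interpulse phase: n_n <= 1e7 cm^-3 at kT ~ 8 keV; flash: n_n ~ 1e10 at kT ~ 23 keV.
tau_interpulse = capture_lifetime(1e7, 8.0, 500.0) / year
tau_flash = capture_lifetime(1e10, 23.0, 500.0) / year
print(tau_interpulse, tau_flash)
```

With these inputs one obtains capture times of decades between pulses and of days during the flash, consistent with the "days to years" range quoted above.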
If the neutron capture time is comparable to the [*duration of the neutron bursts*]{}, the abundance pattern of such a branching can be used to test the time scale of the He shell flashes in AGB stars. Suitable branchings of this type could be those at $^{85}$Kr and $^{95}$Zr, with neutron capture times of a few years. In these cases, the neutron flux dies out before reaction equilibrium is achieved. However, both branchings are difficult to analyze because of significant contributions from the weak $s$-process component of massive stars or from the $r$ process, so that a constraining analysis is difficult to achieve.
The [*transport to the stellar surface*]{} in the third dredge-up phase can be investigated by means of the observed Tc abundances (Mathews et al. 1986; Smith and Lambert 1988; Busso et al. 1995). Analyses of these observations have to consider that the terrestrial half-life of $^{99}$Tc ($t_{1/2} = 2.1 \times 10^5$ years) is reduced to a few years at $s$-process temperatures [@Sch83; @TaY87], which implies that the observed Tc must be quickly cooled to temperatures below 10$^8$ K after its production. A complementary, temperature-independent chronometer for the third dredge-up is $^{93}$Zr, which can be followed via the appearance of its daughter $^{93}$Nb [@MTW86].
The time scale for the [*formation of the solar system*]{} can, in principle, be inferred from the abundance patterns, which are affected by the decay of nuclei with half-lives between 10$^5$ and 10$^7$ years [@Pag90]. Quantitative studies based on isotopic anomalies found in presolar grains have confirmed that such effects exist for $^{26}$Al, $^{41}$Ca, $^{60}$Fe, $^{93}$Zr, and $^{107}$Pd. A comprehensive overview of this discussion was presented by Busso, Gallino, & Wasserburg (1999) (see also the contribution by R. Reifarth to this volume). Also $^{205}$Pb was discussed as a potentially promising chronometer (Yokoi, Takahashi, & Arnould 1985), whereas $^{53}$Mn, $^{129}$I, and $^{182}$Hf were found to result from the continuous pollution of the interstellar medium by explosive nucleosynthesis in supernovae (Busso et al. 1999).
A number of attempts have been made to constrain the [*cosmic time scale*]{} by means of $s$-process abundance information. In the course of these studies it turned out that the half-life of the most promising case, $^{176}$Lu (Audouze, Fowler, & Schramm 1972; Arnould 1973), was strongly temperature-dependent, making it an $s$-process thermometer rather than a cosmic clock as discussed in Sec. \[sec3\]. The other long-lived species, $^{40}$K [@BeP87] and $^{87}$Rb (Beer and Walter 1984) are produced by at least two different processes and are difficult to interpret quantitatively. Therefore, recent analyses of nuclear chronometers for constraining the cosmic time scale concentrate on the $r$-process clocks related to the decay of the long-lived actinides [@CPK99] and of $^{187}$Re (Yokoi et al. 1983; Arnould, Takahashi, & Yokoi 1984; Mosconi et al. 2007; see also the contribution by A. Mengoni to this volume).
In the following sections we will focus on the shortest $s$-process time scale related to the fast convective mixing during the He shell flashes in AGB stars.
The branching at $^{128}$I \[sec2\]
===================================
Xenon is an element of considerable astrophysical interest. The origin of the lightest isotopes, $^{124}$Xe and $^{126}$Xe, can be ascribed exclusively to the so-called $p$ process in supernovae (Arnould and Goriely 2003). Their relative isotopic abundances are important for testing $p$-process models describing the proton-rich side of the valley of stability. Concerning the $s$ process, xenon belongs to the six elements with a pair of $s$-only isotopes. In this case, the relevant nuclei are $^{128}$Xe and $^{130}$Xe, both shielded against the decay chains from the $r$-process region by their stable Te isobars. The abundances of these isotopes define the strength of the branching in the $s$-process reaction chain illustrated in Fig. \[fig1\]. Since the $p$-process components of these $r$-shielded nuclei do not exceed a few percent, they are commonly considered to be of pure $s$ origin. On the neutron-rich side, $^{134}$Xe and $^{136}$Xe can be ascribed to the $r$ process, since the $\beta^-$ half-life of $^{133}$Xe is short enough to prevent any significant $s$-process contribution. Hence, the Xe isotope chain carries signatures of all nucleosynthesis scenarios that contribute to the mass region of the heavy isotopes with $A\geq 90$ and thus offers the possibility of constraining the underlying models. A detailed description of the isotopic abundance pattern of xenon therefore necessarily involves quantitative models for all these processes.
![The $s$-process reaction path between Te and Xe. The isotopes $^{128}$Xe and $^{130}$Xe are shielded against $r$-process contributions by their stable Te isobars. In contrast to $^{130}$Xe, $^{128}$Xe is partly bypassed due to the branching at $^{128}$I. The branching at $^{127}$Te is negligible unless the temperature is low enough that ground state and isomer are not fully thermalized. The branching at $^{128}$I is unique since it results from the competition between ${\beta^-}$ and electron capture decays and is, therefore, independent of the neutron flux. \[fig1\]](fig1.eps)
The combined strength of the branchings in the $s$-process chain at $^{127}$Te and $^{128}$I (Fig. \[fig1\]) is defined by the relative abundances of the $s$-only isotopes $^{128}$Xe and $^{130}$Xe. Both branchings are expected to be comparably weak since only a small part of the total reaction flow is bypassing $^{128}$Xe. Therefore, the $\langle\sigma\rangle N_s$ value for $^{128}$Xe, which is characteristic of the reaction flow, is slightly smaller than the one for $^{130}$Xe. Since the solar isotopic ratio of the $s$-only isotopes is well defined (Pepin, Becker, & Rider 1995), the $\langle\sigma\rangle N_s $ difference can be obtained by an accurate measurement of the cross section ratio (Reifarth et al. 2002).
While the first branching at $^{127}$Te is marginal, because the population of ground state and isomer is quickly thermalized in the hot stellar photon bath (Takahashi and Yokoi 1987), leading to a strong dominance of the $\beta$-decay channel, the second branching at $^{128}$I is particularly interesting. In contrast to all other relevant cases, this branching is defined solely by the competition between $\beta^-$ and electron capture decay (Fig. \[fig1\]). Both decay modes are sufficiently fast that the neutron capture channel, and hence the influence of the stellar neutron flux, is completely negligible.
Since the electron capture rate is sensitive to temperature and electron density of the stellar plasma [@TaY87], this branching provides a unique possibility to constrain these parameters without interference from the neutron flux. Under stellar conditions the corresponding branching ratio at $^{128}$I is $$f_- = \lambda_{\beta^{-}} / (\lambda_{\beta^{-}}+\lambda_{\beta^{EC}}) =
1-\lambda_{\beta^{EC}} / (\lambda_{\beta^{-}}+\lambda_{\beta^{EC}}).$$ While the $\beta^-$-rate varies only weakly, the electron capture rate depends strongly on temperature due to the increasing degree of ionization. Furthermore, at high temperatures, when the ions are fully stripped, the EC rate becomes sensitive to the density in the stellar plasma via electron capture from the continuum (Table \[tab2\]). The relatively small change of the branching ratio did not permit quantitative analyses until the stellar ($n, \gamma$) rates of the involved isotopes were measured to an accuracy of 1.5% [@RHK02].
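The branching ratio above is easy to evaluate once the two decay rates are specified. A minimal sketch with purely illustrative half-life values (the true stellar rates are temperature- and density-dependent, as discussed in the text):

```python
import math

def branching_ratio(t_half_beta_s, t_half_ec_s):
    """f_- = lambda_beta / (lambda_beta + lambda_EC), from half-lives in seconds."""
    lam_beta = math.log(2) / t_half_beta_s
    lam_ec = math.log(2) / t_half_ec_s
    return lam_beta / (lam_beta + lam_ec)

# Illustrative: if beta^- decay is ~16 times faster than electron capture,
# f_- comes out near 0.94, the low-temperature value quoted in Table [tab2].
f = branching_ratio(1.0, 16.0)
print(f)
```

As the electron capture channel is suppressed by increasing ionization, $f_-$ approaches unity, which is the trend seen along the rows of Table \[tab2\].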
  $n_e$ ($10^{26}$ cm$^{-3}$)   $T_8=0$   $T_8=1$   $T_8=2$   $T_8=3$
  ----------------------------- --------- --------- --------- ---------
  0                             0.940     0.963     0.996     0.999
  3                             0.940     0.952     0.991     0.997
  10                            0.940     0.944     0.976     0.992
  30                            0.940     0.938     0.956     0.980

  : Stellar branching ratio $f_-$ at $^{128}$I as a function of temperature $T_8$ (in units of 10$^8$ K) and electron density $n_e$. \[tab2\]
Thermally pulsing AGB stars
---------------------------
The $s$-process abundances in the mass range $A \ge$ 90 are produced during helium shell burning in thermally pulsing low mass AGB stars [@SGB95] by the subsequent operation of two neutron sources. The $^{13}$C($\alpha, n$)$^{16}$O reaction, which occurs under radiative conditions at low temperatures ($kT \approx$8 keV) and neutron densities ($n_n \leq 10^7$ cm$^{-3}$) between convective He-shell burning episodes, provides most of the neutron exposure. The resulting abundances are modified by a second burst of neutrons from the $^{22}$Ne($\alpha,
n$)$^{25}$Mg reaction, which is marginally activated during the highly convective He shell flashes, when peak neutron densities of $n_n \ge 10^{10}$ cm$^{-3}$ are reached at $kT \approx 23$ keV. Although this second neutron burst accounts for only a few percent of the total neutron exposure, it is essential for adjusting the final abundance patterns of the $s$-process branchings. It is important to note that the ($n, \gamma$) cross sections in the Te-I-Xe region are large enough that typical neutron capture times are significantly shorter than the duration of the two neutron exposures. Therefore, the abundances can follow the time variations of the neutron density.
Convection in He shell flashes
------------------------------
The effect of convection on the branchings at $A=127/128$ was extensively studied by means of the stellar evolution code FRANEC (Straniero et al. 1997; Chieffi and Straniero 1989), using a time-dependent mixing algorithm to treat the short time scales during thermal pulses along the AGB. Convective velocities were evaluated by means of mixing length theory, with the mixing length parameter calibrated by a fit of the solar radius. An example of these results is given in Fig. \[fig2\], which represents a typical thermal pulse for a 3 $M_{\odot}$ AGB star of solar composition. The calculated convective velocities are plotted for five models ($t$=0, 0.31, 0.96, 2.42 and 5.72 yr after the maximum of the TP) as a function of the internal radius to show how the convective shell expands after the pulse maximum.
![The calculated convective velocities as a function of the internal radius for 5 models ($t$=0, 0.31, 0.96, 2.42 and 5.72 yr after the maximum of the TP) in a 3 $M_{\odot}$ star of solar composition. The convective turnover time is a few hours only. The scale on the abscissa starts at the bottom of the convective shell. \[fig2\]](fig2.eps)
The rather short convective turnover times of less than one hour that can be derived from Fig. \[fig2\] are, in any case, shorter than the time during which the temperature at the bottom of the convective pulse remains higher than $2.5 \times 10^8$ K. Hence, the crucial transport time from the hot synthesis zone to cooler layers is only of the order of minutes. This can have an impact on those potential $s$-process branchings that are characterized by branch-point isotopes with strong thermal enhancements of their decay rates. Even if such half-lives are reduced to a few minutes at the bottom of the He shell flash, the signature of the branching can survive thanks to the rapid mixing of processed material into the cooler outer layers of the convective zone.
Branching analysis at [*A*]{}=127/128
-------------------------------------
The ($n, \gamma$) cross sections required for describing the time evolution of the $^{128}$Xe and $^{130}$Xe abundances during the He shell flashes have been accurately measured [@RHK02; @ReK02]. This information is crucial in view of the fact that only a comparatively small fraction of the reaction flow bypasses $^{128}$Xe (Table \[tab2\]). With the Xe cross sections compiled by Bao et al. (2000), for example, the $^{128}$Xe/$^{130}$Xe ratio could only be calculated with an uncertainty of about 20%, whereas this uncertainty was reduced to $\pm1.5$% with the improved cross section data. Another source of uncertainty in the nuclear input is due to the theoretical $\beta$-decay rates listed in Table \[tab2\]. Variation of these rates by a factor of two affects the branching ratio by less than 2%, because the decay of $^{128}$I is dominated by the $\beta^-$ mode.
In fact, the branching analysis indicates a more complex situation than one might expect from the trends of the branching factor in Table \[tab2\]. During the low temperature phase between He shell flashes, the neutron density produced via the $^{13}$C($\alpha$, n)$^{16}$O reaction is less than 10$^7$ cm$^{-3}$. The branching at $^{127}$Te is completely closed, resulting in a $^{128}$Xe/$^{130}$Xe abundance ratio of 0.93 relative to the solar values at the end of the low temperature phase due to the effect of the $^{128}$I branching. After the onset of convection at the beginning of the He shell flash the production factors of $^{128}$Xe and $^{130}$Xe differ by 8%, corresponding to the solar ratio, but are then modified during the flash. Since the ($n, \gamma$) cross sections of both isotopes are large enough for achieving local $\langle\sigma\rangle N_s $ equilibrium, one would expect that the production factor of $^{128}$Xe quickly approaches that of $^{130}$Xe. In this case, the final abundance ratio would clearly exceed the solar value.
In contrast, one finds that the branching still exists even during the high temperature phase of the He shell flash. Essentially two effects combine to explain this behavior:
- During the peak of temperature and neutron density, the electron densities at the bottom of the convective He shell flash, i.e. in the $s$-process zone, are between 15 $\times$ 10$^{26}$ cm$^{-3}$ and 20 $\times$ 10$^{26}$ cm$^{-3}$. Therefore, the branching at $^{128}$I is never completely closed. Even at the peak temperatures of the He shell flash, typically 3% of the flow is bypassing $^{128}$Xe.
- Around the maximum of the neutron density, the branching at $^{127}$Te is no longer negligible and leads to an additional decrease of the $^{128}$Xe abundance.
In view of these effects, the $^{127}$Te branching can be fully understood only if the stellar reaction network is calculated with sufficiently small time steps, so that the time scale of convective turnover is properly resolved. If the neutron density is followed in time steps of 10$^5$ s up to freeze-out, one finds an enhancement of the $^{127}$Te branching of 30% compared to calculations performed on a time grid of 10$^6$ s. Reducing the time steps further to $3\times10^4$ s, however, had no additional effect on the abundance ratios.
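The sensitivity to the integration time step can be illustrated with a toy model: a single abundance decaying under a rate that dies out like the neutron density at freeze-out, integrated with explicit Euler steps of decreasing size. All rate constants here are made up for illustration and are not the stellar values:

```python
import math

def evolve(dt, t_end=3e6, lam0=5e-7, tau=1e6):
    """Explicit-Euler integration of dN/dt = -lam(t)*N with
    lam(t) = lam0*exp(-t/tau), mimicking a decaying neutron flux.
    dt, t_end, tau in seconds; lam0 in 1/s (illustrative numbers)."""
    N, t = 1.0, 0.0
    while t < t_end - 0.5 * dt:
        N -= dt * lam0 * math.exp(-t / tau) * N
        t += dt
    return N

# Analytic solution for comparison: N(t_end) = exp(-lam0*tau*(1 - exp(-t_end/tau)))
exact = math.exp(-5e-7 * 1e6 * (1.0 - math.exp(-3.0)))
for dt in (1e6, 1e5, 3e4):
    print(dt, evolve(dt), abs(evolve(dt) - exact))
```

The error shrinks monotonically as the time step is refined from 10$^6$ s to $3\times10^4$ s, mirroring the convergence behavior described above, although the real network calculation involves many coupled species rather than a single toy abundance.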
If averaged over the AGB evolution, the stellar models yield a final abundance ratio of $^{128}$Xe/$^{130}$Xe = 0.466$\pm$0.015, consistent with the solar system value of 0.510$\pm$0.005 once an 8% $p$-process contribution to the solar $^{128}$Xe abundance is taken into account.
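The consistency statement is a one-line arithmetic check: removing the assumed 8% $p$-process share from the solar $^{128}$Xe abundance reduces the measured solar ratio to the $s$-only value.

```python
solar_ratio = 0.510          # measured solar 128Xe/130Xe ratio
p_fraction = 0.08            # assumed p-process share of solar 128Xe
s_only_ratio = solar_ratio * (1.0 - p_fraction)
print(round(s_only_ratio, 3))  # 0.469, within the model's 0.466 +/- 0.015
```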
In summary, the abundance ratio of $^{128}$Xe and $^{130}$Xe in the solar system could be eventually reproduced by combining the effects of mass density, temperature, neutron density, and convective turnover in a consistent way. This success represents an impressive confirmation of the stellar $s$-process model related to thermally pulsing low mass AGB stars.
The $^{176}$Lu/$^{176}$Hf pair \[sec3\]
=======================================
The mass region of the rare earth elements (REE) represents an important test ground for $s$-process models because the relative REE abundances are known to better than $\pm$2%, which implies that the branchings in this region are reliably defined by the respective $s$-only isotopes. Systematic analyses of these branchings contribute substantially to the quantitative picture of the main $s$-process component that has been achieved with stellar models for thermally pulsing low mass AGB stars (Gallino et al. 1998; Busso et al. 1999). Moreover, the stellar ($n, \gamma$) cross sections (except for the lightest REE) are large enough that the assumption of reaction flow equilibrium is well justified for the analysis of several prominent $s$-process branchings in this mass region. This means that the final abundances are essentially determined during the freeze-out phase, at the decline of the neutron density towards the end of the shell flash.
Among the REE branchings, $^{176}$Lu is especially attractive for the intricate way in which nuclear physics can affect the actual $s$-process production yields. This is illustrated in Fig.\[fig3\] showing that the reaction path in the vicinity of lutetium is determined not only by the stellar neutron capture cross sections of $^{175,176}$Lu and $^{176}$Hf, but also by the thermal coupling of isomer and ground state in $^{176}$Lu.
![The $s$-process reaction flow in the Lu region. The strength of the lines indicates that neutron captures on $^{175}$Lu leading to the isomeric state in $^{176}$Lu are more probable than those to the ground state.\[fig3\]](fig3.eps){width="45.00000%"}
Due to its long half-life of 37.5 Gyr, $^{176}$Lu was initially considered as a potential nuclear chronometer for the age of the $s$ elements [@AFS72]. This possibility is based on the fact that $^{176}$Lu as well as its daughter $^{176}$Hf are of pure $s$-process origin, since both are shielded against the $r$-process beta decay chains by their stable isobar $^{176}$Yb. In a straightforward approach, the reaction flow in the branching at $A
= 176$ and, therefore, the surviving $s$ abundances of $^{176}$Lu and $^{176}$Hf would be determined by the partial ($n, \gamma$) cross sections of $^{175}$Lu feeding the ground and isomeric states of $^{176}$Lu. Since transitions between the two states are highly forbidden by selection rules, both states could be considered as separate species in the description of the $s$-process branching at $A$ = 176 (Fig. \[fig4\]). While the isomer decays quickly ($t_{1/2} = 3.7$ h) by $\beta$-transitions to $^{176}$Hf, the effective $s$-process yield of $^{176}$Lu appeared well defined by the partial cross section to the ground state, thus providing an estimate of the age of the $s$ elements by comparison with the actual solar system value.
However, Ward and Fowler (1980) noted that ground state and isomer of $^{176}$Lu are most likely connected by nuclear excitations in the hot stellar photon bath, since thermal photons at $s$-process temperatures are energetic enough to populate higher lying states, which can decay to the long-lived ground state and to the short-lived 123 keV isomer as well. In this way, the strict forbiddenness of direct transitions between both states is circumvented, dramatically reducing the effective half-life to a few hours. As a result, most of the reaction flow could have been directly diverted to $^{176}$Hf, resulting in a $^{176}$Lu abundance much smaller than observed in the solar system. That this temperature dependence had indeed affected the $^{176}$Lu/$^{176}$Hf abundance ratio was confirmed soon thereafter by Beer et al. (1981, 1984), thus changing its interpretation from a potential chronometer into a sensitive $s$-process thermometer.
The temperature dependence was eventually quantified on the basis of a comprehensive investigation of the level structure of $^{176}$Lu (Klay et al. 1991b; Klay et al. 1991a; Doll et al. 1999). The lowest mediating state was identified at an excitation energy of 838.6 keV, which implies that thermally induced transitions become effective at temperatures above $T_8 = 1.5 -2$ (where $T_8$ is the temperature in units of 10$^8$ K). Accordingly, ground state and isomer can be treated as separate species only at lower temperatures. In thermally pulsing low mass AGB stars this is the case between convective He shell flashes when the neutron production is provided by the $^{13}$C($\alpha, n$)$^{16}$O reaction in the so-called $^{13}$C pocket. Under these conditions the abundance of $^{176}$Lu is directly determined by the partial cross sections populating ground state and isomer.
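The steep onset of the thermal coupling can be illustrated with a simple Boltzmann estimate (a rough sketch, not the full level-scheme calculation of Klay et al.): the relative thermal population of the 838.6 keV mediating state scales as $\exp(-E^*/kT)$, with $kT \approx 8.6\,T_8$ keV.

```python
import math

KT_PER_T8 = 8.617    # kT in keV at T = T8 * 1e8 K (k_B = 8.617e-5 eV/K)
E_MEDIATING = 838.6  # keV, lowest mediating state in 176Lu

def thermal_population(t8, e_star=E_MEDIATING):
    """Illustrative Boltzmann factor for populating a state at
    excitation energy e_star (keV) at temperature T = t8 * 1e8 K."""
    return math.exp(-e_star / (KT_PER_T8 * t8))

# The factor grows by many orders of magnitude between T8 = 1 and T8 = 3,
# which is why the coupling switches on so sharply during He shell flashes.
for t8 in (1.0, 2.0, 3.0):
    print(f"T8 = {t8}: exp(-E*/kT) = {thermal_population(t8):.2e}")
```

This two-level estimate ignores the photon density and the actual transition strengths, but it already shows why the ground state and isomer behave as separate species below $T_8 \approx 2$.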
During the He shell flashes, the higher temperatures at the bottom of the convective region lead to the activation of the $^{22}$Ne($\alpha, n$)$^{25}$Mg reaction. It is in this regime that the initial population of ground state and isomer starts to be changed by thermally induced transitions. This affects the $^{176}$Lu$^{\rm g}$ in the long-lived ground state that is actually produced by neutron capture in the burning zone as well as the Lu fraction circulating in the convective He shell flash. The latter part is exposed to the high bottom temperature only for rather short times and, therefore, less affected by the temperature dependence of the half life. Once produced, by far most of the long-lived $^{176}$Lu$^{\rm g}$ survives in the cooler layers outside of the actual burning zone.
Nuclear physics input
---------------------
The stellar neutron capture rates for describing the reaction flow in Fig. \[fig3\] were determined by accurate time-of-flight (TOF) measurements of the total ($n, \gamma$) cross sections for $^{175}$Lu and $^{176}$Lu using a 4$\pi$ BaF$_2$ array (Wisshak et al. 2006). The Maxwellian averaged cross sections (MACS) deduced from these data are five times more accurate than the values listed in the compilation of Bao et al. (2000). However, neutron captures on $^{175}$Lu may feed either the ground state or the isomer in $^{176}$Lu. Therefore, the total ($n, \gamma$) cross section was complemented by a measurement of at least one of the two partial cross sections ($\sigma_{p}^{\rm g}$, $\sigma_{p}^{\rm m}$). Since the corresponding reaction channels could not be distinguished in the TOF measurement [@WVK06a], the activation technique was used to determine the partial cross section to the isomeric state in $^{176}$Lu at thermal energies of $kT = 5.1$ and 25 keV.
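For reference, the MACS is obtained by folding the measured energy-dependent cross section with the thermal neutron spectrum, $\langle\sigma\rangle = (2/\sqrt{\pi})(kT)^{-2}\int_0^\infty \sigma(E)\,E\,e^{-E/kT}\,dE$. A minimal numerical sketch (the trapezoidal grid and the constant test cross section are illustrative choices, not the evaluation procedure of Wisshak et al.):

```python
import math

def macs(sigma, kT, n=20000, e_max_factor=30.0):
    """Maxwellian averaged cross section (same units as sigma):
    <sigma> = (2/sqrt(pi)) (kT)^-2 * integral sigma(E) E exp(-E/kT) dE,
    evaluated with a simple trapezoidal rule on [0, e_max_factor*kT]."""
    e_max = e_max_factor * kT
    dE = e_max / n
    total = 0.0
    for i in range(n + 1):
        E = i * dE
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * sigma(E) * E * math.exp(-E / kT)
    return 2.0 / math.sqrt(math.pi) * total * dE / kT**2

# Sanity check: a constant cross section sigma0 gives <sigma> = (2/sqrt(pi)) * sigma0.
print(macs(lambda E: 1.0, kT=25.0))
```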
Activation measurements of the partial ($n, \gamma$) cross section to the isomeric state in $^{176}$Lu at or near a neutron energy of 25 keV were performed via the $^7$Li($p, n$)$^7$Be reaction (Beer and K[ä]{}ppeler 1980; Allen, Lowenthal, & de Laeter 1981; Zhao and K[ä]{}ppeler 1991) and also using a filtered neutron beam from a nuclear reactor (Stecher-Rasmussen et al. 1988). In all experiments the induced activity after irradiation was detected via the 88 keV $\gamma$-transition in the decay of $^{176}$Lu$^{\rm m}$. These data were recently complemented by an activation at $kT = 5$ keV, adapted to the lower temperature in the $^{13}$C pocket [@HWD08], where most of the neutron exposure is provided by the $^{13}$C($\alpha,
n$)$^{16}$O source, which operates at $kT = 8$ keV.
The half-life of $^{176}$Lu was shown to decrease drastically at the temperatures reached during He shell burning in thermally pulsing low mass AGB stars (Klay et al. 1991a; Doll et al. 1999). Contrary to the situation at low excitation energy, where transitions between ground state and isomer are strictly forbidden by selection rules, interactions with the hot stellar photon bath lead to induced transitions to higher lying nuclear states, which can decay into the ground state and into the isomer as well (Fig. \[fig4\]).
![Schematic level scheme of $^{176}$Lu illustrating the thermal coupling between the long-lived ground state and the short-lived isomer. While direct transitions between these states are strictly forbidden by selection rules, thermal excitations in the hot stellar photon bath populate a higher lying state with intermediate quantum numbers that can decay both ways. At first this link depopulates the isomer towards the ground state because it is more easily reached from the isomer. However, at sufficiently high temperatures, induced transitions from the ground state to the mediating state can also feed the isomer, resulting in the destruction of $^{176}$Lu via $\beta$ decay of the short-lived isomer. The efficiency of this mechanism is determined by the lifetime $\tau_i$ of the mediating state and by the branching ratio $V$ for decays towards the isomer. \[fig4\]](fig4.eps){width="40.00000%"}
Up to $T_8 = 2$ the mediating states cannot be reached and the reaction path of Fig. \[fig3\] is completely defined by the partial capture cross sections feeding ground state and isomer in $^{176}$Lu. This situation prevails in the $^{13}$C pocket between He shell flashes. Between $T_8 = 2.2$ and 3.0, ground state and isomer become increasingly coupled. At first, this coupling leads to an increasing population of the ground state, because of the smaller energy difference between isomer and mediating state. It is due to this effect that more $^{176}$Lu is observed in nature than would be created in a “cool” environment with $T_8 \leq 2$. In this regime, internal transitions, $\beta$-decays, and neutron captures are equally important and have to be properly considered during He shell flashes, where the final abundance patterns of the $s$-process branchings are established by the marginal activation of the $^{22}$Ne($\alpha, n$)$^{25}$Mg neutron source reaction at thermal energies around $kT = 23$ keV.
Branching factor
----------------
The branching factor $f_n$ describing the split of the reaction flow at $^{176}$Lu (Fig. \[fig3\]) can be expressed in terms of the neutron capture rate $\lambda_n = \langle \sigma_{176} \rangle v_T n_n$ (where $\langle \sigma_{176} \rangle$ denotes the MACS of $^{176}$Lu, $v_T$ the mean thermal neutron velocity, and $n_n$ the neutron density) and of the temperature and neutron density dependent $\beta$-decay rate $\lambda_\beta$ of $^{176}$Lu. Because of the thermal effects on the lifetime of $^{176}$Lu sketched above, $f_n = \lambda_n / (\lambda_n + \lambda_\beta)$ becomes a complex function of temperature and neutron density (Klay et al. 1991a; Doll et al. 1999).
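The competition between capture and decay can be evaluated directly. In the sketch below all input values are illustrative placeholders (a MACS of order 1 barn, the thermal velocity at $kT \approx 23$ keV, and a strongly reduced effective stellar decay rate), not the measured quantities:

```python
def branching_factor(macs_cm2, v_T, n_n, lam_beta):
    """f_n = lam_n / (lam_n + lam_beta), with lam_n = <sigma> v_T n_n.
    macs_cm2: MACS in cm^2, v_T in cm/s, n_n in cm^-3, lam_beta in 1/s."""
    lam_n = macs_cm2 * v_T * n_n
    return lam_n / (lam_n + lam_beta)

MACS = 1.0e-24   # cm^2, i.e. 1 barn (illustrative)
V_T = 2.1e8      # cm/s, mean thermal velocity at kT ~ 23 keV
LAM_BETA = 1.0e-5  # 1/s (illustrative effective stellar beta-decay rate)

# f_n grows with neutron density: more flow survives as 176Lu captures.
for n_n in (1e8, 1e9, 1e10):
    print(f"n_n = {n_n:.0e} cm^-3 -> f_n = {branching_factor(MACS, V_T, n_n, LAM_BETA):.3f}")
```

Since $\lambda_\beta$ itself depends steeply on temperature through the thermal coupling, $f_n$ must be re-evaluated along the temperature and neutron density history of each zone.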
The bottom temperature in the He flashes reaches a maximum at the maximum expansion of the convection zone and then declines rapidly as the convective thermal instability shrinks. According to current AGB models of low mass stars, the neutron burst released by the $^{22}$Ne neutron source lasts for about six years, as long as $T_8 \ge$ 2.5, while the convective instability lasts for a much longer period of about 150 years. The maximum temperature in the He flash increases slightly with pulse number, reaching $T_8
\approx$ 3 in the more advanced pulses [@SDC03].
Although the second neutron burst during the He shell flashes contributes only a few percent to the total neutron exposure, the final composition of the Lu/Hf branching is, in fact, established during the freeze-out phase at the end of the thermal pulse. It is only during the high-temperature phase that the thermal coupling between ground state and isomer takes place. In this context, it is also important to remember that the ($n, \gamma$) cross sections in the Lu/Hf region are large enough that typical neutron capture times are significantly shorter than the duration of the neutron exposure during the He shell flash.
Previous $s$-process calculations for thermally pulsing low mass AGB stars (Gallino et al. 1988; Gallino et al. 1998), which were performed by post-processing using full stellar evolutionary models obtained with the FRANEC code (Chieffi and Straniero 1989; Straniero et al. 1995; Straniero et al. 2003; Straniero, Gallino, & Cristallo 2006), were successful in describing the solar main $s$-process component fairly well (Gallino et al. 1998; Arlandini et al. 1999). For the treatment of the branching at $^{176}$Lu, however, this approach had to be refined by subdividing the convective region of the He shell flashes into 30 meshes in order to follow the neutron density profile in sufficient detail and to take the strong temperature dependence of the branching factor $f_n$ properly into account. The production and decay of $^{176}$Lu was calculated in each mesh to obtain an accurate description of the final $^{176}$Lu/$^{176}$Hf ratio in the He shell flash.
After each time step of less than one hour, which corresponds to the typical turnover time for convection in the He shell flash [@RKV04], the abundances from all zones were averaged in order to account for the fast convective mixing. This treatment is well justified because there is no efficient coupling between ground state and isomer at temperatures $T_8 \leq 2.5$, which holds for all meshes except the first few from the bottom. Interestingly, this is also the temperature below which neutron production via the $^{22}$Ne($\alpha, n$)$^{25}$Mg reaction diminishes.
During the main neutron exposure provided by the $^{13}$C($\alpha,
n$)$^{16}$O reaction, thermal effects are negligible for the production of $^{176}$Lu because the temperatures in the $^{13}$C pocket of $T_8 \approx 1$ are not sufficient to reach the mediating state at 838.6 keV. Accordingly, the $^{176}$Lu/$^{176}$Hf ratio is simply determined by the partial ($n, \gamma$) cross section to the ground state, which would always produce too much $^{176}$Hf and too little $^{176}$Lu$^{\rm g}$. This situation is illustrated in Fig. \[fig5\] showing the relative production factors of $^{176}$Lu and $^{176}$Hf during the 15$^{\rm th}$ He shell flash of the AGB model with initial mass 1.5 $M_{\odot}$ and half-solar metallicity, which is representative of the overall relative abundance distribution ejected during the AGB phase. The $s$-process abundances are given as number fractions $Y_i = X_i / A_i$ ($X_i$ being the respective mass fractions and $A_i$ the mass numbers), normalized to that of $^{150}$Sm at the end of the He shell flash. This isotope was chosen as a reference because it is the best known case of an unbranched $s$-only isotope among the REE.
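The normalization used here is straightforward to reproduce; the mass fractions in the toy example below are made-up numbers, not model output:

```python
def normalized_number_fractions(mass_fractions, ref="Sm150"):
    """Convert mass fractions X_i to number fractions Y_i = X_i / A_i
    and normalize to the reference isotope (here 150Sm).
    mass_fractions: dict mapping isotope label -> (X_i, A_i)."""
    Y = {iso: X / A for iso, (X, A) in mass_fractions.items()}
    y_ref = Y[ref]
    return {iso: y / y_ref for iso, y in Y.items()}

# Toy example (not real abundances):
toy = {"Sm150": (1.0e-9, 150), "Lu176": (2.2e-10, 176), "Hf176": (2.0e-10, 176)}
print(normalized_number_fractions(toy))
```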
![ Evolution of the $^{176}$Lu (squares) and $^{176}$Hf (circles) abundances during the 15$^{\rm th}$ He shell flash in an AGB star with 1.5 $M_{\odot}$. The time scale starts when the temperature at the bottom of the convective shell reaches $T_8 = 2.5$, i.e. at the onset of the $^{22}$Ne($\alpha$, n)$^{25}$Mg reaction. The values are given as number fractions normalized to that of $^{150}$Sm at the end of the He shell flash. \[fig5\]](fig5.ps){width="35.00000%"}
After the $s$-process nuclides synthesized in the $^{13}$C pocket are engulfed by the convective He shell, one finds the ratio shown at $t = 0$ in Fig. \[fig5\], which marks the onset of $s$-process nucleosynthesis in the He shell flash. In this phase temperatures are high enough to populate the mediating state at 838.6 keV, leading to a strong increase in the production of the long-lived ground state $^{176}$Lu$^{\rm g}$. As shown in Fig. \[fig5\], $^{176}$Hf is almost completely destroyed at the beginning of the flash, when neutron density and temperature are highest and $^{176}$Hf is efficiently bypassed by the reaction flow. Correspondingly, the $^{176}$Lu abundance reaches a pronounced maximum. As temperature and neutron density decline with time, the branching towards $^{176}$Hf is more and more restored, but the final abundance at the end of the He shell flash remains significantly lower than at the beginning. Because the thermal coupling between ground state and isomer depends so critically on temperature, the full $s$-process network was followed in each of the 30 meshes of the convective zone.
Plausible solutions of the Lu/Hf puzzle are characterized by equal overproduction factors for $^{176}$Lu and $^{176}$Hf, at least within the experimental uncertainties of the nuclear input data. The decay of long-lived $^{176}$Lu ground state in the interstellar medium prior to the formation of the solar system reduces the calculated overproduction factor of $^{176}$Lu by about 10%, leaving the more abundant $^{176}$Hf essentially unchanged. In the light of these considerations, overproduction factors between 1.00 and 1.10 are acceptable for $^{176}$Lu and between 0.95 and 1.05 for $^{176}$Hf.
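The $\sim$10% decay correction follows directly from the 37.5 Gyr half-life. As an illustration (the decay interval below is an inference from the quoted 10%, not a value stated in the text), a free-decay interval of about 5.7 Gyr between $s$-process production and solar system formation yields exactly this reduction:

```python
import math

T_HALF = 37.5  # Gyr, half-life of the 176Lu ground state

def surviving_fraction(t_gyr, t_half=T_HALF):
    """Fraction of 176Lu(g) surviving after free decay over t_gyr."""
    return math.exp(-math.log(2.0) * t_gyr / t_half)

# Interval that corresponds to a ~10% reduction (illustrative inversion):
t_10pct = T_HALF * math.log(1.0 / 0.9) / math.log(2.0)
print(t_10pct, surviving_fraction(t_10pct))
```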
In the course of these investigations it turned out that the observed $^{176}$Lu/$^{176}$Hf ratio could only be reproduced if the thermal coupling of ground state and isomer in $^{176}$Lu, in combination with the neutron density, was followed within the gradients of time, mass, and temperature by the refined zoning of the convective He shell flash. Only in this way could overproduction factors be obtained that were compatible with the limits defined by the nuclear physics uncertainties and by the $^{176}$Lu decay. In contrast to a previous study [@AKW99b], the meanwhile improved cross section information together with the multi-zone approach for the He shell flash appears to settle the $^{176}$Lu puzzle. While the $^{176}$Hf abundance was previously overproduced by 10 $-$ 20%, the present calculations yield final $^{176}$Lu$^{\rm g}$ and $^{176}$Hf abundances (relative to solar system values) of 1.04 and 0.96. These results do not yet consider the decay of the produced $^{176}$Lu$^{\rm g}$, a correction that brings the two numbers into even closer agreement.
Recently, additional information from Coulomb excitation measurements [@VLP00] and photoactivation studies [@Kne05] have triggered renewed interest in the Lu/Hf problem [@Moh08] and seem to provide further constraints for the convection at the bottom of the He shell flash.
Conclusions
===========
The elemental and isotopic abundances found in nature carry the signatures of their nucleosynthetic origin as well as of the related time scales. The branchings at $^{128}$I and $^{176}$Lu discussed in Secs. \[sec2\] and \[sec3\] are characterized by the rather short time scale of convection in He shell flashes of low mass AGB stars. Examples dealing with much longer time scales are discussed by R. Reifarth and A. Mengoni in their contributions to this volume.
Deciphering the information contained in abundance patterns can be an intricate and complex process, which requires\
- accurate nuclear physics data, in particular the rates for production and transmutation by nuclear reactions, but often also nuclear structure properties for describing the impact of temperature, and\
- sufficiently detailed astrophysical prescriptions for a realistic or at least plausible modeling of the relevant phenomena. It is this duality that we had the pleasure to pursue in the Torino-FZK liaison for more than two decades, and which, in return, was found to provide surprising constraints for the physics of stellar evolution.
[99]{} Allen B.J., Lowenthal G.C., de Laeter J.R. 1981, J. Phys. G: Nucl. Phys., 7, 1271
Arlandini C., K[ä]{}ppeler F., Wisshak K., Gallino R., Lugaro M., Busso M., Straniero, O. 1999, Ap. J., 525, 886
Arnould M., Goriely, S. 2003, Phys. Rep., 384, 1
Arnould M., Takahashi K., Yokoi K. 1984, Astron. Astrophys., 137, 51
Arnould M. 1973, Astron. Astrophys., 22, 311
Audouze J., Fowler W.A., Schramm D.N. 1972, Nature Physical Science, 238, 8
Bao Z.Y., Beer H., K[ä]{}ppeler F., Voss F., Wisshak K., Rauscher T. 2000, Atomic Data Nucl. Data Tables, 76, 70
Beer H., K[ä]{}ppeler, F. 1980, Phys. Rev. C, 21, 534
Beer H., Penzhorn R.-D. 1987, Astron. Astrophys., 174, 323
Beer H., Walter G. 1984, Astrophys. Space Sci., 100, 243
Beer H., K[ä]{}ppeler F., Wisshak K., Ward R.A. 1981, Ap. J. Suppl., 46, 295
Beer H., Walter G., Macklin R.L., Patchett P.J. 1984, Phys. Rev. C, 30, 464
Best J., Stoll H., Arlandini C., Jaag S., K[ä]{}ppeler F., Wisshak K., Mengoni A., Reffo G., Rauscher, T. 2001, Phys. Rev. C, 64, 15801
Busso M., Lambert D.L., Beglio L., Gallino R., Raiteri C.M., Smith, V.V. 1995, Ap. J., 446, 775
Busso M., Gallino R., Wasserburg G.J. 1999, Ann. Rev. Astron. Astrophys., 37, 239
Chieffi A., Straniero O. 1989, Ap. J. Suppl., 71, 47
Cowan J.J., Pfeiffer B., Kratz K.-L., Thielemann F.-K., Sneden C., Burles S., Tytler D., Beers T.C. 1999, Ap. J., 521, 194
Doll C., B[ö]{}rner H., Jaag S., K[ä]{}ppeler F. 1999, Phys. Rev. C, 59, 492
Gallino R., Busso M., Picchio G., Raiteri C.M., Renzini, A. 1988, Ap. J., 334, L45
Gallino R., Arlandini C., Busso M., Lugaro M., Travaglio C., Straniero O., Chieffi A., Limongi M. 1998, Ap. J., 497, 388
Heil M., Winckler N., Dababneh S., K[ä]{}ppeler F., Wisshak K., Bisterzo S., Gallino R., Davis A.M., Rauscher T. 2008, Ap. J, 673, 434
Klay N., K[ä]{}ppeler F., Beer H., Schatz G. 1991, Phys. Rev. C, 44, 2839
Klay N., K[ä]{}ppeler F., Beer H., Schatz G., B[ö]{}rner H., Hoyler F., Robinson S.J., Schreckenbach K., Krusche B., Mayerhofer U., Hlawatsch G., Lindner H., von Egidy T., Andrejtscheff W., Petkov, P. 1991, Phys. Rev. C, 44, 2801
Kneissl U. 2005, Bulg. Nucl. Sci. Trans, 10, 55
Mathews G.J., Takahashi K., Ward R.A., Howard W.M. 1986, Ap. J., 302, 410
Mohr P. 2008, PoS, NIC X, 081
Mosconi M. [*et al.*]{} (The n\_TOF Collaboration) 2007, Prog. Part. Nucl. Phys., 59, 165
Pagel B.E.J. 1990, in Astrophysical Ages and Dating Methods, eds. E. Vangioni-Flam, M. Cass[é]{}, J. Audouze, J. Tran Thanh Van, (Gif sur Yvette: Editions Fronti[è]{}res), 493
Pepin R.O., Becker R.H., Rider P.E. 1995, Geochim. Cosmochim. Acta, 59, 4997
Reifarth R., K[ä]{}ppeler F. 2002, Phys. Rev. C, 66, 054605
Reifarth R., Heil M., K[ä]{}ppeler F., Voss F., Wisshak K., Becvár F., Krticka M., Gallino R., Nagai Y. 2002, Phys. Rev. C, 66, 064603
Reifarth R., K[ä]{}ppeler F., Voss F., Wisshak K., Gallino R., Pignatari M., Straniero O. 2004, Ap. J., 614, 363
Schatz G. 1983, Astron. Astrophys., 122, 327
Smith V.V., Lambert D.L. 1988, Ap. J., 333, 219
Stecher-Rasmussen F., Abrahams K., Kopecky J., Lindner J., Polak P. 1988, in Capture Gamma-Ray Spectroscopy 1987, eds. K. Abrahams, P. Van Assche, Inst. Phys. Conf. Ser. 88 (Bristol: Institute of Physics), 754
Straniero O., Gallino R., Busso M., Chieffi A., Raiteri C.M., Limongi M., Salaris, M. 1995, Ap. J., 440, L85
Straniero O., Chieffi A., Limongi M., Busso M., Gallino R., Arlandini C. 1997, Ap. J., 478, 332
Straniero O., Domínguez I., Cristallo R., Gallino R. 2003, Publications of the Astronomical Society of Australia, 20, 389
Straniero O., Gallino R., Cristallo S. 2006, Nucl. Phys., A777, 311
Takahashi K., Yokoi K. 1987, Atomic Data Nucl. Data Tables, 36, 375
Vanhorenbeek J., Lagrange J.M., Pautrat M., Dionisio J.S., Vieu Ch. 2000, Phys. Rev. C, 62, 015801
Ward R.A., Fowler W.A. 1980, Ap. J., 238, 266
Wisshak K., Voss F., Arlandini C., Becvár F., Straniero O., Gallino R., Heil M., K[ä]{}ppeler F., Krticka M., Masera S., Reifarth R., Travaglio C. 2001, Phys. Rev. Lett., 87, 251102
Wisshak K., Voss F., K[ä]{}ppeler F., Kazakov L. 2006, Phys. Rev. C, 73, 015807
Yokoi K., Takahashi K., Arnould M. 1983, Astron. Astrophys., 117, 65
Yokoi K., Takahashi K., Arnould M. 1985, Astron. Astrophys., 145, 339
Zhao W.R., K[ä]{}ppeler F. 1991, Phys. Rev. C, 44, 506
---
abstract: 'We analyze the occurrence of anisotropy in the electronic, magnetic, elastic and transport properties of more than one thousand 2D materials from the C2DB database. We identify hundreds of anisotropic materials and classify them according to their point group symmetry and degree of anisotropy. A statistical analysis reveals that a lower point group symmetry and a larger amount of different elements in the structure favour all types of anisotropies, which could be relevant for future materials design approaches. Besides, we identify novel compounds, predicted to be easily exfoliable from a parent bulk compound, with anisotropies that largely outscore those of already known 2D materials. Our findings provide a comprehensive reference for future studies of anisotropic response in atomically-thin crystals and point to new previously unexplored materials for the next generation of anisotropic 2D devices.'
author:
- Luca Vannucci
- Urko Petralanda
- Asbjørn Rasmussen
- Thomas Olsen
- 'Kristian S. Thygesen'
bibliography:
- 'biblio\_anisotropy.bib'
title: 'Anisotropic properties of monolayer 2D materials: an overview from the C2DB database'
---
[^1]
[^2]
Introduction
============
Anisotropy is the characteristic of a material whereby it displays different physical properties along different directions. It is intrinsic to the atomic structure and can therefore influence the electric, magnetic, optical or mechanical response of a material to an external perturbation. In fact, anisotropic materials have become increasingly present in modern devices, finding applications in diverse fields. One paradigmatic example of an anisotropic material is an optical polarizer, which is transparent to electromagnetic radiation polarized along a well defined axis, while blocking or deviating light that is polarized along a different direction.
Layered van der Waals (vdW) materials represent an interesting class of naturally anisotropic materials. In vdW materials, the anisotropy derives from the dispersive nature of the bonds between the two-dimensional (2D) atomic layers, which is much weaker than the covalent bonds existing between atoms within the 2D layers. This intrinsic anisotropy can be exploited for various applications. For example, in certain layered materials the interplay between the structural and electronic properties is so strong that it changes the iso-frequency surfaces of light from elliptic to hyperbolic[@Gjerding17_hyperbolic] with fascinating perspectives for sub-wavelength imaging and radiative emission control[@jacob2006optical; @lu2014enhancing].
Individual 2D atomic layers are obviously anisotropic due to the missing third dimension, but they can exhibit in-plane anisotropy as well. However, the most widely studied 2D materials — graphene [@Novoselov04], hexagonal boron nitride (hBN) [@Ci10_BN] and the family of transition metal dichalcogenides (TMDCs) [@Mak10_MoS2] — have in-plane isotropic properties due to their highly symmetric crystal structure. Graphene, for instance, has a six-fold rotational symmetry and three mirror planes, while hBN and TMDCs such as MoS$_2$ have a 6-fold roto-inversion symmetry with two mirror planes. Such large sets of crystal symmetries turn out to inhibit any form of anisotropic response.
The prototypical example of an in-plane anisotropic 2D material is phosphorene [@Li14_BP]. Phosphorene is obtained by mechanical exfoliation of black phosphorus down to the monolayer limit, and exhibits a highly-anisotropic puckered structure, which differs along the zigzag and armchair directions (as shown in Fig. \[fig:overview\]). This strong anisotropy has motivated a large number of theoretical and experimental studies of phosphorene, which have revealed the effect of the structural anisotropy on its electronic, optoelectronic, electro-mechanical, thermal, and excitonic properties [@Fei14_phospho; @Xia14_phospho; @Wei14; @Low14; @Wang15_phospho; @Jain15_phospho; @Wang15; @Tian16_BP_synapses; @Yang17_birefringence; @Wang20_optical_anisotropy].
Other notable examples of in-plane anisotropic 2D materials are TMDCs in the distorted 1T’-phase (such as WTe$_2$ [@Ma16; @Torun16; @Zhang19; @Zhang19_WTe2; @Gjerding17_hyperbolic; @Wang20_WTe2_hyperbolic]), titanium trichalcogenides (most notably TiS$_3$ [@Island14; @Island15; @Kang15; @Jin15; @SilvaGuillen17; @Khatibi19]), ReS$_2$ and ReSe$_2$ [@Liu15_ReS2_FET; @Zhang16_ReS2; @Yang17_birefringence; @Echeverry18; @Liu20_ReSe2], GaTe [@Wang19_anisotropic_resistance], and pentagonal structures such as PdSe$_2$ [@Oyedele17; @Lu20_anisotropy_PdSe2]. Such materials exhibit anisotropy in their mechanical, electrical, optical and magnetic properties, with intriguing applications for optical devices (such as birefringent wave plate or hyperbolic plasmonic surfaces), high mobility transistors, ultra-thin memory devices and controllable magnetic devices among others.
As of today, more than fifty different 2D materials have been identified and synthesized or exfoliated in monolayer form[@Haastrup18], but they represent only a small fraction of all the possible stable 2D materials that have been predicted by computations[@Mounet2017; @Haastrup18]. It is therefore reasonable to expect that the above mentioned examples of anisotropic 2D materials will be soon complemented by additional atomically-thin layers with highly direction-dependent properties.
Here we take a first step in this direction by presenting an extensive analysis of the occurrence of in-plane anisotropic features in the magnetic, elastic, transport and optical properties of more than 1000 predicted stable 2D materials from the Computational 2D Materials Database (C2DB)[@Haastrup18]. We discuss trends and similarities in the atomic and electronic structure of anisotropic monolayer materials by classifying them according to their point symmetry group, and highlight the most interesting candidates for different applications.
After introducing the C2DB database and the criteria used to assess the stability of the materials in Section \[sec:overview\], we analyze the occurrence of anisotropy in the magnetic easy-axis direction, elastic response, effective masses and polarizability in four separate sub-sections of Section \[sec:results\]. We have tried to make these sections as self-contained as possible so they can be read independently, with a separate introduction to the formalism used and the relevant literature for each of them. We conclude by summarizing the main results in Section \[sec:conclusions\], where we highlight the most interesting anisotropic and potentially exfoliable 2D materials identified in the study.
Overview of the C2DB database {#sec:overview}
=============================
The Computational 2D Materials Database (C2DB) is an open database containing thermodynamic, elastic, electronic, magnetic, and optical properties of two-dimensional (2D) monolayer materials [@Haastrup18]. All properties were calculated with the electronic structure code GPAW [@Enkovaara10] using additional software packages for atomic simulation and workflows handling such as the ASE [@HjorthLarsen17], ASR [@ASR_docs] and MyQueue [@Mortensen20]. Unless explicitly stated, all properties reported in this work were calculated with the PBE exchange-correlation functional [@Perdew96].
The latest development version of C2DB contains 4262 fully relaxed structures at the time of writing, which are categorized in terms of their dynamic and thermodynamic stability. The dynamic stability determines whether a material is stable with respect to distortions of the atomic positions or the unit cell, and is established from phonon frequencies (at the $\Gamma$-point and high-symmetry points of the Brillouin zone boundary) and the stiffness tensor. A material is dynamically stable only if all phonon frequencies are real and the stiffness tensor eigenvalues are positive. On the other hand, the thermodynamic stability of a given 2D material is assessed in terms of its heat of formation and total energy with respect to other competing phases (taken as the most stable elemental and binary compounds from the OQMD database[@saal2013materials]) — also known as *energy above the convex hull* [@Haastrup18]. A material’s thermodynamic stability is classified as *high* if the heat of formation is negative and the energy above the convex hull is below 0.2 eV/atom.
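The two-step screening described above can be condensed into a simple predicate. The sketch below follows the criteria as stated (using the common convention that imaginary phonon modes are reported as negative frequencies); it is an illustration, not the actual C2DB workflow code:

```python
def is_candidate(phonon_freqs, stiffness_eigs, heat_of_formation, e_above_hull):
    """Return True if a structure passes both C2DB-style filters:
    dynamic stability (no imaginary phonons, positive stiffness
    eigenvalues) and high thermodynamic stability (negative heat of
    formation, energy above the convex hull below 0.2 eV/atom)."""
    dynamically_stable = (min(phonon_freqs) >= 0.0
                          and min(stiffness_eigs) > 0.0)
    thermodynamically_high = (heat_of_formation < 0.0
                              and e_above_hull < 0.2)
    return dynamically_stable and thermodynamically_high

print(is_candidate([0.0, 5.1, 12.3], [10.0, 22.0], -0.8, 0.05))  # passes both filters
print(is_candidate([-2.0, 5.1], [10.0, 22.0], -0.8, 0.05))       # imaginary phonon mode
```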
Materials with high thermodynamic and dynamic stability are the most likely to be exfoliated or synthesized in the lab. Although these criteria are not sufficient to ensure experimental realization, we note that they have been determined from a detailed analysis of more than 50 already synthesized monolayers[@Haastrup18]. We will therefore focus on the subset of highly stable materials (according to the criteria used in the C2DB) in the remainder of this work. For further details on the stability assessment and a complete overview of the C2DB, we refer the reader to Ref. .
{width="\linewidth"}
As shown by Neumann more than one century ago, the symmetries of any physical property of a material must include the symmetry operations of the point group to which the crystalline lattice belongs [@Neumann1885]. Fig. \[fig:overview\] provides an overview of the 1198 thermodynamically and dynamically stable materials in C2DB grouped according to their point group symmetry. Some specific examples of materials are shown with their point group indicated by the color of the frame. The selected materials represent some of the most interesting anisotropic 2D materials identified in this work and discussed in the following.
From Fig. \[fig:overview\], we make the following general observations:
- Materials with trigonal symmetry, that is, materials belonging to the point groups *-3m, 3m, -3, 3, 32* in the international notation[^3], are the most frequently occurring ($33\%$ of the total). These include, among others, TMDCs in the 1T phase such as HfS$_2$ [@Xu15_HfS2], group IV monolayers [@Zhu15_stanene], hydrogenated graphene (i.e. graphane [@Elias09]), MXY Janus structures [@Lu17_Janus; @Fulop18_Janus; @Riis-Jensen20] such as ZrSSe, and monolayer magnetic materials such as CrI$_3$ [@Huang2017_FM].
- Monoclinic materials (groups *2, m, 2/m*) account for 18% of the total. They include TMDCs in the 1T’ phase, such as WTe$_2$ [@Tang17_WTe2], TiS$_3$ [@Island14; @Island15], and the pentagonal PdSe$_2$ [@Oyedele17].
- The orthorhombic structures comprise 16% of the total (groups *mmm, mm2*). Notable examples are the highly anisotropic puckered phosphorene (that is, monolayer black phosphorus [@Li14_BP]) and puckered arsenene [@Kamal15_As]. We also point out the ternary compound CrOBr, which is predicted to be easily exfoliable from the layered bulk structure[@Mounet2017] and whose crystal prototype is largely recurrent in C2DB among orthorhombic structures.
- Triclinic materials (groups *1, -1*) account for 13% of the total. This group includes materials with low symmetry, such as TMDC alloys [@Komsa12; @Xie15_alloys], the topological insulator SbI [@Song14_BiX_SbX; @Vannucci20], and other potentially exfoliable materials such as AuSe.
- 11% of structures have hexagonal point group symmetry (groups *-6m2, 6/mmm*). They include TMDCs in the H phase such as HfSe$_2$, hexagonal boron nitride (hBN), graphene (which is the only stable representative of the point group *6/mmm*) and other less common structures possessing 6-fold rotation symmetry such as TiCl$_3$.
- The remaining 8% correspond to tetragonal structures (groups *-42m, 4/mmm*) such as ZnCl$_2$, which is predicted to be easily exfoliable[@Mounet2017].
This set of 1198 known or potentially exfoliable/synthesizable materials forms the basis for the anisotropy analysis presented in this work.
Results and discussion {#sec:results}
======================
Magnetic easy axis
------------------
Magnetic anisotropy is the dependence of a material's properties on the direction of its magnetization. Its main manifestation is the existence of an easy axis, along which magnetizing the crystal costs the least energy, and a hard axis, along which it costs the most. The degree of anisotropy is quantified by the magnetic anisotropy energy (MAE), the energy required to rotate the magnetization from the easy to the hard direction. In general, the MAE may have contributions from different features of a crystal, such as strain or defects. In this work we consider perfect crystals, for which only the so-called magnetocrystalline anisotropy, arising from the coupling of the lattice to the spin magnetic moment, contributes to the MAE.
In 2D materials magnetic anisotropy acquires special importance due to the Mermin–Wagner theorem [@Mermin1966], which forbids the spontaneous breaking of a continuous symmetry at finite temperature in two dimensions. For magnetic order to emerge, the spin rotational symmetry must therefore be broken explicitly by magnetic anisotropy. This has attracted wide interest in recent years, both in the light of new fundamental questions [@Huang2017_FM; @Fei2018; @Zhang2015; @Rossier2017; @Torelli2018; @Torelli2019] and applications [@Gong2019; @Cardoso2018; @Zhong2017; @Burch2018; @WangMorpurgo2018].
In this work we will focus on the in-plane MAE and define the $x$ and $y$ axes to span the atomic plane of the material. We then define the in-plane MAE, $\Delta_{xy}$, as: $$\Delta_{xy}=|E(\vec{M}\parallel y)-E(\vec{M}\parallel x)|,$$ where $E(\vec{M}\parallel x)$ and $E(\vec{M}\parallel y)$ are the electronic energies including spin-orbit coupling with magnetization parallel to the $x$ and $y$ axes, respectively.
{width="\linewidth"}
In Fig. \[fig:mag.pdf\] we show the distribution of the magnetic materials in the C2DB database, sorted by their point group symmetry and by the value of their in-plane magnetic anisotropy $\Delta_{xy}$. Filtering for ferromagnetic (FM) or antiferromagnetic (AFM) materials, shown in Fig. \[fig:mag.pdf\]a, yields a landscape similar to that of Fig. \[fig:overview\]. This indicates that whether a material is magnetic is not strongly correlated with its point group, but rather with the presence of magnetic atoms in the structure. Once anisotropy comes into play, however, we do observe in Fig. \[fig:mag.pdf\]b important structural features that condition it. Indeed, one expects a magnetic easy axis to appear in the direction along which the magnetic atoms are packed more loosely, creating an anisotropy in the magnetic properties. For example, already at a very low threshold for $\Delta_{xy}$ (0.005 meV/unit cell), all hexagonal and tetragonal point groups vanish: only point groups that do not relate the two perpendicular in-plane directions by symmetry can sustain an in-plane magnetic anisotropy. Another feature, visible in Fig. \[fig:mag.pdf\]c, is the prevalence of orthorhombic (*mmm*) and, to a lesser degree, monoclinic (*2/m*) systems, with crystals of *mmm* point group symmetry representing over 35% of the materials at the low $\Delta_{xy}$ threshold. Increasing the $\Delta_{xy}$ threshold to 0.7 meV/unit cell enhances this trend: *mmm* dominates with half of the materials, and *2/m* comprises a third (Fig. \[fig:mag.pdf\]d). The materials above this threshold are classified and sorted according to their anisotropy in Fig. \[fig:mag.pdf\]e. Both FM and AFM magnetic orders are equally represented, indicating little influence of the type of magnetic order on the anisotropy.
In Figure \[fig:mag.pdf\]e we also show the direction of the magnetic easy axis, indicated by a full orange marker if it lies within the plane and an empty blue one if it is oriented out-of-plane. It is clear that most of the selected anisotropic materials indeed present an in-plane easy axis.
Among the 113 materials with $\Delta_{xy} > 0.005$ meV/unit cell, the most frequently occurring structure is the ternary orthorhombic prototype *ABC-59-ab* [@Haastrup18], with 47 entries (see CrBrO in Fig. \[fig:overview\] for an example of this structure). The main reasons are likely the aforementioned lack of symmetry between the in-plane $x$ and $y$ directions, together with the fact that a ternary compound is more likely to contain a magnetic atom (most other crystals in C2DB are binary). To the best of our knowledge, materials from this class have not been produced in monolayer form, but we note that several of them are listed as easily exfoliable in Ref , e.g. CrOBr, CrOCl, CrSBr, FeOCl, VOBr and VOCl. The monoclinic T’ phase of transition metal dichalcogenides occurs 15 times, followed by the trigonal MoS$_2$ type [@Kappera2014] with 10 occurrences.
We also cross-checked the rest of our selected anisotropic materials against the list of exfoliable 2D materials in Ref . Out of the 113 materials with $\Delta_{xy} > 0.005$ meV/unit cell, we found over 20 whose stoichiometry matches entries in that list. Among these, perhaps the most promising material with regard to a potential experimental realization is the AFM T’ di-halide V$_2$I$_4$, which lies on the convex hull according to the C2DB database [@Haastrup18]. V$_2$I$_4$ shows an in-plane magnetic easy axis and $\Delta_{xy}=1.09$ meV/unit cell, which competes with the highest out-of-plane anisotropies known to date [@Torelli2018]. In addition, we find several materials that are only a few meV above the convex hull and show remarkably high anisotropies. Among these, the AFM Ni$_2$I$_4$ compound stands out with an exceptional in-plane anisotropy of over 20 meV/unit cell and an in-plane easy axis. Other materials in the same stability category, such as Ni$_2$Br$_4$, Co$_2$O$_4$ and CrBr$_2$, also show large $|\Delta_{xy}|$ values and are listed in Table \[tab:magani\].
  Material       Sym.           Mag.   $E_{\mathrm{hull}}$ (meV)         $|\Delta_{xy}|$ (meV/unit cell)
  -------------- -------------- ------ -------------------------- ----- -------------------------------
  Cr$_2$Br$_4$   P2$_1$/m       FM     54.4                       No    0.79
  Co$_2$O$_4$    C2/m           FM     7.1                        No    0.94
  V$_2$I$_4$     Pm             AFM    0.0                        Yes   1.09
  Ni$_2$Br$_4$   P$\bar{3}$m1   AFM    8.8                        No    1.55
  Ni$_2$I$_4$    C2/m           AFM    10.3                       No    20.40
: Monolayers predicted to be stable, with an in-plane magnetic easy axis and the highest in-plane magnetic anisotropies in the C2DB database, whose stoichiometry matches entries in the list of easily exfoliable 2D materials in Ref . The table shows the chemical formula, space group symmetry, magnetic state, energy above the convex hull, and in-plane magnetic anisotropy.[]{data-label="tab:magani"}
Elastic response and auxetic effect
-----------------------------------
The elastic response of 2D materials to strains and deformations is usually expressed in terms of the Young modulus $E$ and the Poisson ratio $\nu$ [@Akinwande17; @Androulidakis18]. The former measures the response along the direction of the applied strain, while the latter describes how the material reacts along orthogonal directions. For anisotropic materials, both the Young modulus and the Poisson ratio depend on the directions of stresses and strains. Assuming that the 2D material lies in the $xy$ plane, and neglecting the elastic response along the out-of-plane axis $z$, we denote the axis-dependent Young modulus by $E_i$, $i = \{x, y\}$. Similarly, the Poisson ratio $\nu_{ij}$, with $i \ne j$, quantifies the transverse strain induced along the $i$ axis by a strain applied in the perpendicular $j$ direction.
More generally, the elastic response of a continuous 2D medium is quantified in terms of the 2D stiffness tensor $C$, which is a linear map between the strain tensor $\varepsilon$ and the stress tensor $\sigma$ [@Landau_elasticity]: $$\sigma_{ij} = \sum_{kl} C_{ijkl} \varepsilon_{kl} .$$ Here $i, j \in \{x, y\}$, since we restrict to in-plane stresses and strains. A generic element $\sigma_{ij}$ represents the $i$ component of the stress acting on a plane perpendicular to the $j$ direction, while the strain components $\varepsilon_{ij}$ are given by $\varepsilon_{ij} = (\partial_i u_j + \partial_j u_i)/2$ in terms of the infinitesimal deformations $u_i$.
Being a linear map between two 2nd-rank tensors, the stiffness tensor is naturally a 4th-rank tensor. However, one can exploit the symmetry of both $\sigma$ and $\varepsilon$ at equilibrium to write them as three-component vectors, namely $$\begin{aligned}
\tilde \sigma & = \left(\sigma_{xx}, \sigma_{yy}, \sigma_{xy} \right)^T := \left(\sigma_{1}, \sigma_{2}, \sigma_{3} \right)^T , \\
\tilde \varepsilon & = \left(\varepsilon_{xx}, \varepsilon_{yy}, 2\varepsilon_{xy} \right)^T := \left(\varepsilon_{1}, \varepsilon_{2}, \varepsilon_{3} \right)^T .\end{aligned}$$ Such a notation is often called *Voigt* notation. Then, the stiffness tensor becomes a 2nd rank symmetric tensor with only 6 independent components, $$\tilde \sigma =
\begin{pmatrix}
C_{11} & C_{12} & C_{13} \\
C_{12} & C_{22} & C_{23} \\
C_{13} & C_{23} & C_{33}
\end{pmatrix} \tilde \varepsilon .$$ We will restrict the following analysis to the class of *orthotropic materials*, that is, materials having three mutually-orthogonal planes of reflection symmetry. In such a case, the stiffness tensor takes the form $$C =
\begin{pmatrix}
C_{11} & C_{12} & 0 \\
C_{12} & C_{22} & 0 \\
0 & 0 & C_{33}
\end{pmatrix} .$$ In practice, this means that we restrict attention to materials where the shear deformations $\varepsilon_{xy}$ are decoupled from $xx$ and $yy$ stresses. This allows us to straightforwardly relate the components $C_{ij}$ of the stiffness tensor to the in-plane Young modulus $E_i$ and in-plane Poisson ratio $\nu_{ij}$ via the following relations:
$$\begin{aligned}
E_x & = \frac{C_{11} C_{22} - C_{12}^2}{C_{22}} , \\
E_y & = \frac{C_{11} C_{22} - C_{12}^2}{C_{11}} , \\
\nu_{xy} & = \frac{C_{12}}{C_{11}} , \\
\nu_{yx} & = \frac{C_{12}}{C_{22}} .\end{aligned}$$
\[eq:def\_Young\_Poisson\]
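As a minimal numerical sketch of these relations (the stiffness values below are purely illustrative, not taken from C2DB), one can compute the direction-dependent Young moduli and Poisson ratios from the components of an orthotropic stiffness tensor:

```python
# Sketch: in-plane Young moduli and Poisson ratios of an orthotropic
# 2D material from its stiffness components (units of N/m).
# The numerical values below are illustrative, not C2DB data.

def elastic_constants(C11, C22, C12):
    """Return (E_x, E_y, nu_xy, nu_yx) following the relations above."""
    det = C11 * C22 - C12**2
    return det / C22, det / C11, C12 / C11, C12 / C22

E_x, E_y, nu_xy, nu_yx = elastic_constants(C11=105.0, C22=27.0, C12=18.0)

# Consistency check: the identity E_x / E_y == nu_yx / nu_xy follows
# directly from the four relations above.
assert abs(E_x / E_y - nu_yx / nu_xy) < 1e-12
```

The final assertion verifies the identity $E_x/E_y = \nu_{yx}/\nu_{xy}$, which relates the anisotropies of the Young modulus and of the Poisson ratio.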
In C2DB, each component of the 2D stiffness tensor is calculated by straining the material along a given direction ($x$ or $y$) and calculating the forces acting on the unit cell after relaxing the positions of the atoms within the fixed unit cell [@Haastrup18]. To restrict to orthotropic materials only, we have discarded all materials whose stiffness tensor components $C_{13}$ or $C_{23}$ exceed, in absolute value, a tolerance $C_\mathrm{max}$, which we set to $C_\mathrm{max} = 0.01$ N/m. This yields a subset of 555 materials (roughly 50% of all the stable materials) that we analyze in the following.
{width="\linewidth"}
In Fig. \[fig:elastic\_1\]a we show an overview of the direction-dependent Young modulus for all orthotropic and stable materials in C2DB. The quantity $E_y$ is plotted against $E_x$, so that every data point lying off the diagonal represents a material with anisotropic elastic properties. Well-known anisotropic structures such as WTe$_2$, PdSe$_2$, TiS$_3$, P$_4$ and As$_4$ are all identified by this method, while hundreds of unexplored anisotropic materials are predicted as well. In Fig. \[fig:elastic\_1\]b we use a similar method to show the anisotropy of the Poisson ratio, plotting $\nu_{xy}$ against $\nu_{yx}$. While this does not add much information with respect to panel a (since $E_x/E_y = \nu_{yx}/\nu_{xy}$, as one can easily infer from Eqs. \[eq:def\_Young\_Poisson\]), we notice that, unlike the Young modulus, the Poisson ratio can also take negative values. In such a case, a material stretched (or compressed) along the $x$ direction will also expand (shrink) along the perpendicular $y$ direction, a counterintuitive property known as *auxetic* behavior [@Akinwande17; @Jiang16]. We investigate such cases in more detail below.
To describe elastic anisotropy more quantitatively, we define an elastic anisotropy degree (or anisotropy parameter) for each material as $$\delta_E = \frac{|E_x - E_y|}{E_x + E_y} .$$ This parameter is always bounded between 0 and 1, with $\delta_E = 0$ for a perfectly isotropic material and $\delta_E \approx 1$ for an extremely anisotropic medium.
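The same normalized measure reappears below for effective masses and polarizabilities, so a tiny helper suffices to compute it; a sketch with illustrative inputs:

```python
def anisotropy_degree(qx, qy):
    """Normalized anisotropy delta = |qx - qy| / (qx + qy), bounded
    between 0 (isotropic) and 1 (extreme anisotropy). Assumes positive
    quantities such as Young moduli or effective masses."""
    return abs(qx - qy) / (qx + qy)

# delta_E = 0.05 corresponds to a ratio E_x/E_y of about 1.105,
# i.e. roughly a 10% difference between the two Young moduli:
assert abs(anisotropy_degree(1.105, 1.0) - 0.05) < 1e-3
```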
In Fig. \[fig:elastic\_1\]c we show the distribution of the elastic anisotropy degree for all materials having $\delta_E \geq 0.05$ (corresponding to a difference of at least 10% between the $x$ and $y$ Young moduli), with the inset showing the full distribution including materials with $\delta_E < 0.05$. More than one third of the selected materials (201 out of 555) show an elastic anisotropy exceeding this threshold, 162 of them exceed $\delta_E = 0.1$ (a difference of roughly 20% or more between $E_x$ and $E_y$), and 32 of them show a highly anisotropic elastic behaviour with $\delta_E \geq 0.4$.
The distribution of point groups corresponding to different threshold values of $\delta_E$ is shown as a series of pie charts in Fig. \[fig:elastic\_1\]d. On the left, we plot the distribution of point groups for all 555 selected materials. A comparison with Fig. \[fig:overview\] shows that our choice of selecting only orthotropic materials tends to favor orthorhombic structures (especially group *mmm*) over trigonal ones, while the proportions among the remaining point groups remain basically unaffected. However, when selecting all materials with at least a 10% difference between $E_x$ and $E_y$ ($\delta_E \geq 0.05$, in the middle), the proportions change drastically, with all trigonal and hexagonal groups suppressed in favor of orthorhombic and monoclinic structures. This shows that high-symmetry crystal structures, such as those of TMDCs in the H and T phase, graphene and hBN, are generally isotropic, with little difference in the elastic properties along the $x$ and $y$ directions. On the other hand, TMDCs in the distorted T’ phase (such as WTe$_2$), pentagonal structures (PdSe$_2$) and puckered layers (phosphorene) stand out for their markedly anisotropic elastic properties, due to their asymmetric crystal lattices.
When restricting to highly anisotropic materials with $\delta_E \geq 0.4$, monoclinic and orthorhombic structures each account for exactly 50% of the total. The Young moduli of these 32 structures are plotted in the top panel of Fig. \[fig:elastic\_2\], sorted from lowest to highest value of $\delta_E$. Besides known structures such as phosphorene (P$_4$) and puckered arsenene (As$_4$), we find many new stable structures with exceptionally high elastic anisotropy. Four of the first six materials are compounds of the form CrX$_2$ (with X a halogen element) in both the AFM and FM magnetic states, which also stand out for their markedly anisotropic magnetic behavior as described previously. These are, however, not the most stable structures with the same constituent elements, since they all have a competing phase of the form CrX$_3$ with a more favorable formation energy (one of them is shown in Fig. \[fig:overview\]). This is not the case for the monoclinic structures AuSe and AuTe, which represent the most stable phases of their respective compositions. One of them (AuSe, also shown in Fig. \[fig:overview\]) has been identified as an easily exfoliable material by the independent work of Mounet *et al.* [@Mounet2017], making it one of the most appealing materials for anisotropic elastic applications found in this work. The puckered compounds GeS and GeSe also appear to be easily exfoliable from their respective three-dimensional parent structures, which is again confirmed in the literature [@Mounet2017].
Finally, it is worth mentioning the presence of several entries in the structural prototype *ABC-59-ab* (especially Hf- and Zr-based compounds), whose relevance has already been discussed in the previous section. We note that Ref. lists HfNBr, ZrNBr and ZrNI as easily exfoliable layered materials. We find a relatively low elastic anisotropy degree $\delta_E = 0.07$ for ZrNI, but we suggest that materials with much higher elastic anisotropy, such as HfBrX and ZrBrX (with X = S, Se), should in principle be obtainable by substitution of nitrogen with an element from the chalcogen group.
{width="\linewidth"}
Let us now come back to the subset of materials showing a negative in-plane Poisson ratio, i.e. those highlighted with red markers in Fig. \[fig:elastic\_1\]b. The auxetic effect is not necessarily associated with anisotropy, as both Poisson ratios $\nu_{xy}$ and $\nu_{yx}$ can take negative values without necessarily differing from each other. Indeed, the effect does not originate from the material having a different elastic response along orthogonal axes, but rather from the presence of special re-entrant structures, or of rigid blocks linked by flexible hinges in the crystal structure, which can compress or extend in counterintuitive ways. Nevertheless, our framework allows for a systematic search of novel 2D materials with negative Poisson ratio, which is itself an active field of research [@Jiang16]. Moreover, unlike in ordinary isotropic media, the Poisson ratios of anisotropic materials can take arbitrarily large values, positive or negative [@Ting05].
In the bottom panel of Fig. \[fig:elastic\_2\] we plot the Poisson ratios of the 31 stable materials in C2DB showing auxetic behavior, sorted from lowest to highest absolute value of $\mathrm{max}(\nu_{xy}, \nu_{yx})$. The largest negative Poisson ratio is found for TiCl$_3$ in the hexagonal crystal structure (shown in Fig. \[fig:overview\]), in both the AFM and FM magnetic configurations. Once again, this is not the most stable phase of this compound, which reaches its lowest-energy configuration in a trigonal phase, in the same crystal prototype as the ferromagnetic insulator CrI$_3$ [@Huang2017_FM].
There are several interesting candidates among the materials with tetragonal structure. In particular, materials with stoichiometry AB$_2$ in point group *-42m*, such as the case of ZnCl$_2$ shown in Fig. \[fig:overview\], represent a large majority of stable auxetic materials in C2DB. Notable examples are metal di-halides involving Co, Mn, or Fe as the metallic element. Such materials are all exfoliable from a 3D parent compound with trigonal point group [@Mounet2017], but their tetragonal phases generally have total energies that are comparable or even lower than the trigonal monolayer phase (which is also present in C2DB).
A second notable example is given by group 12 di-halides involving Zn, Cd, and Hg, for which the tetragonal auxetic structure turns out to be the most stable phase. Interestingly, both HgI$_2$ and ZnCl$_2$ are reported as easily exfoliable materials by Mounet *et al.*[@Mounet2017], making these two materials very appealing candidates for novel auxetic 2D materials. We also note that MnTe, AgBr, and GeO$_2$ all seem to have total energies very close to the convex hull, and thus also belong to the set of predicted stable auxetic monolayers.
Let us note that a significant majority of known auxetic 2D structures display negative Poisson ratio in the out-of-plane direction [@Liu19; @Kong18; @Gomes15; @Jiang14; @Du16], while only very few materials were previously predicted to exhibit in-plane auxetic response [@Yu17; @Qin20]. Ref. reports negative Poisson ratio for monolayers of groups 6–7 transition-metal dichalcogenides (MX$_2$ with M=Mo, W, Tc, Re and X=S, Se, Te) in the 1T-phase. We do find a negative $\nu_{xy}$ in C2DB for all of them, but they have low dynamical and thermodynamical stability, and therefore are not identified by our analysis.
Effective masses
----------------
Monolayer 2D materials with a finite band gap can display large anisotropies in the effective masses along two orthogonal directions. This makes them appealing for highly directional-dependent transport, with applications in anisotropic field-effect transistors, polarization-sensitive detectors and non-volatile memory devices among others [@Jin15; @Liu15_ReS2_FET; @Zhang16_ReS2; @Liu20_ReSe2; @Wang19_anisotropic_resistance; @Wang15_phospho].
In C2DB, effective masses for conduction and valence bands are calculated for all materials having a finite band gap greater than 0.01 eV at the PBE level. We define the effective mass, $m$, from the curvature, $a$, at the band maximum (minimum) for valence (conduction) bands as $m = 1/(2a)$. To determine the curvature we start from a self-consistent ground state calculation performed at a $k$-point density of 12 Å$^{-1}$. From these $k$-points a preliminary band extremum is found, and a second, non-self-consistent calculation is performed with a higher density of $k$-points centered around the preliminary extremum. From these values a final extremum is determined, and the energies at a number of $k$-points spaced very closely around the extremum are calculated non-self-consistently. The $k$-points used for the first refinement step are by default chosen to lie in a sphere around the extremum with a radius corresponding to 250 meV (for a mass of 1) and with the same number of $k$-points as the original calculation (but at least 19). The last refinement uses a 1 meV sphere and 9 points. The points calculated in this final step are used to determine the curvature. We first fit a second-order polynomial to locate a preliminary extremum. We then fit a third-order polynomial and find the new extremum, unless the optimization algorithm diverges (as may happen for third-order polynomials), in which case we revert to the original fit. We have found that the third-order fit does improve the description of the band extremum and in some cases is necessary, e.g. in the presence of crossing parabolic bands, as in Rashba splitting. From the fit we obtain the curvature $a$ at the extremum, and the mass is calculated as $m = 1/(2a)$.
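The final fitting step can be sketched as follows, with synthetic band data and units where $\hbar = 1$ so that $m = 1/(2a)$; this is an illustrative reconstruction, not the actual C2DB implementation:

```python
import numpy as np

def effective_mass(k, E):
    """Estimate m = 1/(2a) from the curvature a at a band extremum
    (units with hbar = 1). A second-order fit locates a preliminary
    extremum; a third-order fit then refines the curvature there,
    mimicking the two-step fit described in the text."""
    c2, c1, _ = np.polyfit(k, E, 2)          # E ~ c2 k^2 + c1 k + c0
    k0 = -c1 / (2 * c2)                      # preliminary extremum
    d3, d2, d1, _ = np.polyfit(k, E, 3)      # cubic refinement
    roots = np.roots([3 * d3, 2 * d2, d1])   # zeros of dE/dk
    k0 = roots[np.argmin(np.abs(roots - k0))].real  # root nearest k0
    a = 3 * d3 * k0 + d2                     # a = E''(k0) / 2
    return 1.0 / (2.0 * a)

# Synthetic parabolic band E(k) = k^2 / (2 m) with m = 0.5:
k = np.linspace(-0.05, 0.05, 9)
m = effective_mass(k, k**2)
assert abs(m - 0.5) < 1e-6
```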
To measure the presence of anisotropic effects in the effective masses, we define the parameters
$$\begin{aligned}
\delta_\mathrm{me} & = \frac{|m^\mathrm{(e)}_x - m^\mathrm{(e)}_y|}{m^\mathrm{(e)}_x + m^\mathrm{(e)}_y} ,\\
\delta_\mathrm{mh} & = \frac{|m^\mathrm{(h)}_x - m^\mathrm{(h)}_y|}{m^\mathrm{(h)}_x + m^\mathrm{(h)}_y} ,\end{aligned}$$
where:
- $m^\mathrm{(e)}_i$ is the effective electron mass calculated along the $i$ direction around the conduction band minimum;
- $m^\mathrm{(h)}_i$ is the effective hole mass calculated along the $i$ direction around the valence band maximum.
Unfortunately, obtaining very accurate values for the effective masses in a fully automated fashion turns out to be quite challenging: some fits are not accurate enough, or pick the wrong sign for the electron or hole mass in the case of particularly heavy effective masses. We therefore remove all materials having $m^\mathrm{(e/h)}_i \geq 20 m_{\mathrm{e}}$, with $m_{\mathrm{e}}$ the free electron mass, as well as materials with an extremely high ratio $m^\mathrm{(e/h)}_i / m^\mathrm{(e/h)}_j \geq 20$. We stress that these threshold values are arbitrary. They have been chosen primarily to discard all wrong results, while keeping as many of the highly anisotropic materials with accurate results in the analysis as possible.
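The screening step described above amounts to a simple validity filter; a sketch with the thresholds stated in the text (masses in units of the free-electron mass):

```python
MAX_MASS = 20.0    # discard if any fitted mass >= 20 m_e
MAX_RATIO = 20.0   # discard if the mass ratio along x and y >= 20

def mass_fit_is_valid(mx, my):
    """True if the fitted masses along x and y pass both filters
    (a negative value signals a wrong-sign fit and is discarded)."""
    if mx <= 0 or my <= 0:
        return False
    if max(mx, my) >= MAX_MASS:
        return False
    return max(mx / my, my / mx) < MAX_RATIO

assert mass_fit_is_valid(0.5, 2.0)        # moderate anisotropy is kept
assert not mass_fit_is_valid(25.0, 1.0)   # unphysically heavy mass
assert not mass_fit_is_valid(0.1, 2.5)    # ratio 25 exceeds threshold
```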
{width="\linewidth"}
The C2DB database contains 607 dynamically and thermodynamically stable materials with a PBE band gap greater than 0.01 eV, of which 115 fall outside the range of validity described above. This leaves us with a total of 492 materials, whose effective electron and hole masses are shown as scatter plots in Fig. \[fig:emasses\_1\]a and \[fig:emasses\_1\]b. We find a rather large set of materials with anisotropic effective masses, as one can immediately notice from the large number of points falling outside the main diagonal. Indeed, as shown in Fig. \[fig:emasses\_1\]c, 61% of the selected materials (301 out of 492) show a difference of at least 10% between the electron effective masses along $x$ and $y$, while 53% of them (261 out of 492) show at least a 10% difference between the hole effective masses. Moreover, quite a large fraction of materials have extremely high values of $\delta_\mathrm{me}$ or $\delta_\mathrm{mh}$ compared with the case of elastic anisotropy in Fig. \[fig:elastic\_1\]c.
When considering the distribution of point groups for materials with effective-mass anisotropy, a behavior quite different from the previous cases emerges. First, as shown in the left column of Fig. \[fig:emasses\_2\], the restriction to semiconductors with a band gap of at least 0.01 eV removes many structures in the orthorhombic (*mmm*) and monoclinic (*2/m*) groups, while favoring structures with trigonal (*-3m, 3m*) and hexagonal (*-6m2*) symmetry with respect to the general case shown in Fig. \[fig:overview\]. More importantly, these structures are not filtered out even when we select materials with increasingly high effective-mass anisotropy ($\delta_\mathrm{me (mh)} \geq 0.05$ in the middle, $\delta_\mathrm{me (mh)} \geq 0.7$ on the right). This means that, despite their structural symmetry, materials such as TMDCs in the T and H phase and Janus structures display quite strong anisotropy in the effective masses. One should bear in mind that we only calculate the curvatures of valence and conduction bands in one particular valley. While a single valley is not bound by the symmetries of the crystal, the overall transport properties (such as, for instance, the mobility) are determined by adding up contributions from all valleys, which in the end cancels out any anisotropic effect and restores the Neumann principle. However, it is worth noting that the transport properties of a single anisotropic valley should in principle be accessible in experiments with valley-selection techniques such as circularly polarized optical excitation and gating [@Cao2012_ValleyHall; @Mak14_ValleyHall; @Lee2016_ValleyHall].
For the electron effective masses, we find that 65 stable materials have a rather high anisotropy degree $\delta_\mathrm{me} \geq 0.7$. While this group is dominated by monoclinic structures in the *2/m* point group (mostly TMDCs in the distorted T’ phase), we find a rather large set of triclinic structures summing up to roughly one third of the total, and a significant 14% share of hexagonal structures. The situation is different for the hole effective masses, where orthorhombic structures represent 34% of the 50 materials with $\delta_\mathrm{mh} \geq 0.7$. However, we still find that 22% of the structures belong to a trigonal point group.
{width="\linewidth"}
We have reported all 105 structures with $\delta_\mathrm{me}$ or $\delta_\mathrm{mh}$ greater than 0.7 in Fig. \[fig:emasses\_2\], with 10 of them having both parameters above the threshold value. Among this group, we notice a recurrent presence of hafnium-, bismuth- and antimony-based Janus structures, as well as chromium-based compounds (especially chromium halides, which also display elastic and magnetic anisotropy). Most importantly, we find six materials that have already been exfoliated down to monolayer thickness in experiments, including four hafnium- and zirconium-based TMDCs in the stable T phase. We also point out the presence of monolayer SbI$_3$, BiClTe, SbITe, and some ternary compounds in the crystal prototype *ABC-59-ab* (point group *mmm*) such as CrBrS, CrBrO, and CrClO, which all seem to be easily exfoliable from the layered 3D parent bulk structure [@Mounet2017].
A list of experimentally available or easily exfoliable materials with highly anisotropic effective masses is presented in Table \[tab:emasses\_candidates\], where we also report three experimentally available materials having $0.5 \leq \delta_\mathrm{me/mh} \leq 0.7$ (namely TiS$_3$, SnSe$_2$, GaTe). We also include additional structures that can be obtained from this subset by replacing a constituent element with an atom from the same group. This is particularly relevant for Janus monolayers, which can be obtained from already available structures by stripping off an outer layer of chalcogen atoms and substituting them with an element from the same family [@Lu17_Janus]. A similar method is likely applicable to the halogen and chalcogen atoms in ternary orthorhombic compounds. For each entry of Table \[tab:emasses\_candidates\], we have double-checked the accuracy of the parabolic fit.
[c|c|c|c|l|l]{} material & point group & $\delta_\mathrm{me}$ & $\delta_\mathrm{mh}$ & Ref. (exp.) &\
W$_2$Se$_4$ & 2/m & 0.07 & 0.88 & Ref. &\
HfSe$_2$ & -3m & 0.85 & 0.00 & Ref. & yes\
Ti$_2$CO$_2$ & -3m & 0.83 & 0.00 & Ref. &\
ZrSe$_2$ & -3m & 0.81 & 0.00 & Ref. & yes\
HfS$_2$ & -3m & 0.81 & 0.00 & Ref. & yes\
Re$_4$Se$_8$ & -1 & 0.80 & 0.30 & Ref. & yes\
SnS$_2$ & -3m & 0.50 & 0.78 & Ref. & yes\
ZrS$_2$ & -3m & 0.75 & 0.00 & Ref. & yes\
PbI$_2$ & -3m & 0.00 & 0.72 & Ref. & yes\
Ti$_2$S$_6$ & 2/m & 0.60 & 0.52 & Ref. & yes\
SnSe$_2$ & -3m & 0.55 & 0.00 & Ref. & yes\
Ga$_2$Te$_2$ & -6m2 & 0.51 & 0.37 & Ref. & yes\
CrBrS () & mmm & 0.89 & 0.94 & & yes[@Haastrup18; @Mounet2017]\
CrBrS () & mmm & 0.85 & 0.84 & & yes[@Haastrup18; @Mounet2017]\
CrBrO () & mmm & 0.37 & 0.84 & & yes[@Haastrup18; @Mounet2017]\
CrClO () & mmm & 0.83 & 0.79 & & yes[@Haastrup18; @Mounet2017]\
CrBrO () & mmm & 0.25 & 0.83 & & yes[@Haastrup18; @Mounet2017]\
CrClO () & mmm & 0.45 & 0.81 & & yes[@Haastrup18; @Mounet2017]\
I$_6$Sb$_2$ & -3m & 0.01 & 0.80 & & yes[@Haastrup18]\
Au$_2$Se$_2$ & 2/m & 0.65 & 0.26 & & yes[@Haastrup18; @Mounet2017]\
------------------------------------------------------------------------
material & point group & $\delta_\mathrm{me}$ & $\delta_\mathrm{mh}$ &\
CrBrSe () & mmm & 0.94 & 0.92 &\
CrIS () & mmm & 0.93 & 0.73 &\
BrSbSe & 3m & 0.90 & 0.13 &\
CrISe () & mmm & 0.89 & 0.80 &\
CrBrSe () & mmm & 0.88 & 0.88 &\
CrIS () & mmm & 0.79 & 0.53 &\
CrIO () & mmm & 0.08 & 0.79 &\
HfSeTe & 3m & 0.79 & 0.01 &\
HfSTe & 3m & 0.78 & 0.00 &\
ZrSSe & 3m & 0.75 & 0.00 &\
CrIO () & mmm & 0.15 & 0.58 &\
Polarizability
--------------
The polarizability of a material relates the induced electric dipole moment density to an applied electric field to linear order [@LeRu2008]. For 2D materials, this relation takes the form: $$P^{2D}_i(\vec{q},\omega)=\sum_j\alpha^{2D}_{ij}(\vec{q},\omega)E_j(\vec{q},\omega)$$ where $P^{2D}$ is the induced polarization averaged over the area of the unit cell, $E(\vec{q},\omega)$ is the applied electric field, and $\alpha^{2D}$ is the polarizability [@Haastrup18].
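To illustrate the effect of an anisotropic polarizability in the static, long-wavelength limit, consider a diagonal $\alpha^{2D}$ with unequal components (the numerical values are illustrative only): the induced polarization is then not parallel to the applied field. A minimal sketch:

```python
import numpy as np

# Illustrative anisotropic 2D polarizability (alpha_xx != alpha_yy);
# the values are arbitrary, in consistent units.
alpha = np.array([[4.0, 0.0],
                  [0.0, 1.0]])

E_field = np.array([1.0, 1.0]) / np.sqrt(2)   # in-plane field at 45 deg
P = alpha @ E_field                           # P_i = sum_j alpha_ij E_j

# The induced polarization rotates toward the high-polarizability axis:
angle_E = np.degrees(np.arctan2(E_field[1], E_field[0]))  # ~45 degrees
angle_P = np.degrees(np.arctan2(P[1], P[0]))              # much smaller
assert angle_P < angle_E
```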
In general, the polarizability can be split into a contribution from the electrons, $\alpha^{e}_{ij}(\vec{q},\omega)$, and a contribution from the lattice, $\alpha^{lat}_{ij}(\vec{q},\omega)$. Since the characteristic response time of the electrons is much faster than that of the lattice, the relevance of the two contributions depends on the timescale of the problem considered. For optical processes involving electromagnetic waves with frequencies well above the characteristic phonon frequency of the lattice, only the electronic polarizability is relevant and we can write $\alpha_{ij}(\vec{q},\omega)\approx\alpha^{e}_{ij}(\vec{q},\omega)$. On the other hand, for processes involving infrared light, the lattice response must be considered as well and can in some cases even dominate the electronic response.
The polarizability determines the degree of dielectric screening in a material and as such sets the strength of the Coulomb interaction between charged particles [@Huser2013; @TianSantos2019]. It thereby governs several of the unique properties that made 2D materials famous over the last decade [@Novoselov04; @CastroNeto2009; @Mak10_MoS2], including excitons, plasmons, and band gap renormalization effects [@thygesen2017calculating]. In this context, the in-plane anisotropy of 2D materials has attracted significant interest since the synthesis of few-layer black phosphorus (P$_4$) in 2014 [@Li14_BP; @LiuYe2014; @Xia14_phospho]. For example, the anisotropic optical absorption (essentially the imaginary part of the electronic polarizability) makes the material act as a linear polarizer [@TranYang2014], which finds applications in fields as diverse as liquid-crystal displays, medical applications, and optical quantum computing [@Knill2001; @Zeng2009]. In addition, other fundamental properties, such as the electron-phonon coupling and electron-hole interactions, are influenced by an anisotropic polarizability, resulting in the formation of quasiparticles, e.g. polarons, excitons, and trions, with unconventional shapes and dispersion relations [@Dresselhaus2016; @LiTaniguchi2016; @TranYang2014; @xu2016extraordinarily; @yang2015optical; @deilmann2018unraveling; @gjerding2020efficient].
In the C2DB the electronic polarizability is calculated within the random phase approximation (RPA) [@RPA; @RPA_2] using PBE wave functions and eigenvalues; see Ref. for further details. To keep the discussion general, we focus here on the polarizability in the static ($\omega=0$) and long-wavelength ($q=0$) limits. As a measure of the degree of anisotropy we adopt the $\delta$ parameter introduced above and define $$\delta_{\alpha^p}=\frac{|\alpha^p_x-\alpha^p_y|}{|\alpha^p_x|+|\alpha^p_y|} ,$$ with $p = \{e, lat\}$ for the electronic and lattice polarizability, respectively.
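As a minimal sketch, the anisotropy measure defined above can be evaluated directly from the two in-plane polarizability components; the numerical values below are hypothetical and serve only to illustrate the bounds of the parameter:

```python
# Anisotropy parameter delta = |a_x - a_y| / (|a_x| + |a_y|), as defined
# in the text. Values range from 0 (isotropic) toward 1 (maximal anisotropy).
def delta(alpha_x, alpha_y):
    """In-plane anisotropy of the two polarizability components."""
    return abs(alpha_x - alpha_y) / (abs(alpha_x) + abs(alpha_y))

print(delta(1.0, 1.0))              # isotropic response -> 0.0
print(round(delta(4.7, 1.0), 2))    # strongly anisotropic example -> 0.65
```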
In Figure \[fig:polari\], panels (c) and (d), we show the statistical distribution of the materials with electronic polarizability anisotropy above 0.005 and 0.4, respectively. For reference, the distribution of all the materials for which the polarizability has been calculated is shown in panel (a), and the materials are classified according to point group symmetry as usual. As the threshold is increased we see the same trend as for the magnetic, elastic, and effective-mass anisotropies, namely, the orthorhombic *mmm* point group, followed by the monoclinic *2/m*, becomes increasingly dominant. Both the trigonal and tetragonal phases disappear from the distribution already for $\delta_{\alpha^{e}} > 0.005$. The orthorhombic *mmm* group is particularly ubiquitous among the materials with high $\delta_{\alpha^{e}}$, accounting for more than 70 $\%$ of the remaining materials already at a moderate threshold of $\delta_{\alpha^{e}} > 0.4$. In Figure \[fig:polari\]e we show the materials with the largest anisotropies found, in the range $\delta_{\alpha^{e}} > 0.7$. We note that our analysis correctly identifies the known in-plane anisotropic compounds such as P$_4$ [@Li14_BP] (phosphorene), As$_4$ [@Kamal15_As] (arsenene), MoS$_2$ (in the T’ phase), and WTe$_2$ [@Tang17_WTe2], among others. The materials with the highest $\delta_{\alpha^e}$ are ternary compounds and therefore more challenging to realize experimentally than the more common binary 2D materials. However, several of the materials with large $\delta_{\alpha^{e}}$ have been previously predicted to be exfoliable from known parent bulk materials [@Mounet2017]. Among our anisotropic materials that match the stoichiometry of the easily exfoliable materials of that study, the most promising are listed in Table \[tab:pole\].
  Material       Sym.       $E_{\mathrm{hull}}$(meV)   Magnetic   $\delta_{\alpha^{e}}$
  -------------- ---------- -------------------------- ---------- -----------------------
W$_2$Se$_4$ P$2_1$/m 91.7 No 0.47
Zr$_2$I$_4$ P$2_1$/m 0.0 Yes 0.51
Mo$_2$Se$_4$ P$2_1$/m 109.4 No 0.54
Zr$_2$Cl$_4$ P$2_1$/m 31.9 No 0.60
Ti$_2$Cl$_4$ P$2_1$/m 0.0 Yes 0.65
W$_2$S$_4$ P$2_1$/m 177.9 No 0.82
: Monolayers with the highest in-plane electronic polarizability anisotropy ($\delta_{\alpha^{e}}$) in the C2DB database whose stoichiometry matches that of the entries predicted to be easily exfoliable from a known layered bulk material in Ref. . Their space group symmetry, energy above convex hull, magnetic state, and in-plane electronic polarizability anisotropies are listed.[]{data-label="tab:pole"}
We highlight two easily exfoliable materials from Table \[tab:pole\], with $\delta_{\alpha^{e}}=0.65$ and $\delta_{\alpha^{e}}=0.51$, respectively: Ti$_2$Cl$_4$ and Zr$_2$I$_4$. We stress that $\delta_{\alpha^{e}}=0.65$ implies that the polarizability in one direction of the plane is 4.7 times larger than in the other direction. Consequently, these materials are very promising candidates for anisotropic optical applications such as light polarizers.
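The quoted factor follows directly from inverting the definition of $\delta$: for $\delta=(a-b)/(a+b)$ with $a>b>0$, the ratio of the two in-plane components is $a/b=(1+\delta)/(1-\delta)$. A quick numerical check:

```python
# Ratio of the larger to the smaller in-plane component implied by delta.
def axis_ratio(delta):
    """a/b = (1 + delta) / (1 - delta) for delta = (a - b)/(a + b)."""
    return (1.0 + delta) / (1.0 - delta)

print(round(axis_ratio(0.65), 1))   # -> 4.7, the factor quoted in the text
```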
{width="\linewidth"}
The lattice or infrared polarizability $\alpha^{lat}$ has been calculated for only about 10 $\%$ of the materials in the C2DB. We therefore limit our analysis to extracting the most promising individual materials, since a statistical analysis would not be representative of the real distribution of materials in the database. In Figure \[fig:polat\] we show all 16 materials with $\delta_{\alpha^{lat}} > 0.2$. As is the case for the other properties, we find several materials with significant anisotropy. For instance, the monolayer Sn$_2$Te$_2$ has a $\delta_{\alpha^{lat}}$ of 0.45, lies at the bottom of the convex hull of Sn–Te monolayers, and is considered to be easily exfoliable [@Mounet2017]. This makes it a very interesting material for further experimental and theoretical exploration. Other materials with $\delta_{\alpha^{lat}} > 0.25$ whose stoichiometry matches easily exfoliable entries in Ref. are listed in Table \[tab:polat\]. Taking into account that these candidates are drawn from only a small fraction of the entire C2DB, we anticipate that many more promising infrared-anisotropic materials in the database remain to be discovered.
{width="60.00000%"}
  Material       Sym.           $E_{\mathrm{hull}}$(meV)   Magnetic   $\delta_{\alpha^{lat}}$
  -------------- -------------- -------------------------- ---------- -------------------------
Sn$_2$S$_2$ Pmn$2_1$ 42.6 No 0.26
Sn$_2$Se$_2$ Pmn$2_1$ 42.9 No 0.38
Sn$_2$Te$_2$ Pmn$2_1$ 62.9 Yes 0.45
ZrI$_2$ P$\bar{6}$m2 27.0 No 0.46
Ge$_2$Se$_2$ Pmn$2_1$ 24.9 No 0.50
: Stable materials with the highest in-plane infrared polarizability anisotropy in the C2DB database whose stoichiometry matches that of entries in the easily exfoliable 2D materials list in Ref . Their space group symmetry, energy above the convex hull, magnetic state, and in-plane infrared polarizability anisotropies are given.[]{data-label="tab:polat"}
Conclusions {#sec:conclusions}
===========
In this work, we have analyzed the presence of anisotropic behavior among more than one thousand 2D materials predicted to be stable in the C2DB database [@Haastrup18]. Specifically, we have identified materials with (in-plane) magnetic anisotropy, anisotropic Young’s modulus and/or negative Poisson ratio, anisotropic effective masses, and anisotropic polarizabilities.
Consistent with the Neumann principle, we have found that there are two main features in the C2DB database that favour anisotropy, namely (i) a lower symmetry and (ii) a larger number of constituent elements. In addition, our analysis satisfactorily captures the specific symmetry requirements of each anisotropy type: elastic and polarizability anisotropies, derived from second order tensors, are forbidden for trigonal, tetragonal or hexagonal compounds; the magnetic anisotropic materials do not include hexagonal and tetragonal groups, and the effective mass anisotropy is allowed for all symmetry groups in the database. Several of the materials identified in this study outperform the known 2D materials in terms of anisotropic figures of merit and are predicted to be stable and/or exfoliable from known parent bulk crystals[@Mounet2017], providing useful guidelines for future experimental investigations.
The most prominent material class resulting from our analysis is the ternary orthorhombic compound prototype *ABC-59-ab*[@Haastrup18]. This material class combines three different atomic species in a low symmetry structure, often resulting in strongly anisotropic properties. To the best of our knowledge, such materials have not yet been isolated in monolayer form, but experimental efforts in this direction could hopefully be motivated by our work.
We find several binary monolayers with interesting anisotropic behaviors that are predicted to be stable and, in some cases, even easily exfoliable. A transition metal (in particular Ni, V, Cr, Os) combined with a halide in a low-symmetry structure appears to be the best recipe for obtaining in-plane magnetic and elastic anisotropy. For instance, VI$_2$ is an exfoliable and very stable compound with an in-plane magnetic anisotropy that competes with the highest out-of-plane anisotropies known to date. Moreover, there are multiple compounds with large predicted anisotropies that match the stoichiometry of exfoliable materials and have similar total energies. Such materials could be stabilized under the right experimental conditions. Among them, we highlight anti-ferromagnetic Ni$_2$I$_4$, which has an exceptional in-plane anisotropy exceeding 20 meV/unit cell, making it a candidate for the realization of high-temperature in-plane 2D antiferromagnetism. Likewise, the chromium di-halides stand out for their markedly anisotropic behavior in the elastic response and in the effective electron and hole masses. Among the non-magnetic materials, we identify AuX (X=S, Se, Te) as a new class of potentially stable 2D materials with high anisotropy in several physical properties, with AuSe being reported as easily exfoliable in the literature.
On the other hand, the transport properties of a single valley deserve separate mention, as they do not seem to be bound to the symmetries of the crystal lattice and could be accessed experimentally by means of circularly polarized light. Several TMDCs and Janus structures have highly symmetric crystal structures with trigonal symmetry but strong effective-mass anisotropy, and we identify HfX$_2$, ZrX$_2$, and SnX$_2$ (X = S, Se) as the most interesting monolayers for anisotropic transport applications already available in experiments. Moreover, as yet unexplored structures with very low inter-layer binding energy, such as SbI$_3$, AuSe, and the ternary magnetic compounds CrXY (X = Br, Cl; Y = O, S), display strong effective-mass anisotropies.
Regarding the electronic polarizability, we also find a large number of anisotropic materials, nearly all of them ternary compounds of orthorhombic symmetry. In addition, some binary compounds, mostly involving a transition metal and a halide or a chalcogen, that are predicted to be easily exfoliable are identified and listed in the text. Finally, we also identified materials with interesting infrared polarizability anisotropy values among the smaller set of candidates in the C2DB for which this property has been computed. The most promising prospects for experimental realization are listed in the text.
Among the materials with negative Poisson ratios (so-called auxetic materials) identified in our study, we highlight HgI$_2$ and ZnCl$_2$, which are both predicted as easily exfoliable[@Mounet2017], and MnTe, AgBr, and GeO$_2$, which are predicted to be stable in their monolayer form.
We thank Alireza Taghizadeh for useful discussions and suggestions. The research leading to these results has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 754462. (EuroTechPostdoc). KST acknowledges funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant No. 773122, LIMA). The Center for Nanostructured Graphene is sponsored by the Danish National Research Foundation, Project DNRF103.
Data availability {#data-availability .unnumbered}
=================
The data that support the findings of this study are openly available online at the C2DB website [@C2DB].
References {#references .unnumbered}
==========
[^1]: These authors contributed equally
[^2]: These authors contributed equally
[^3]: Note that we will substitute the common overline notation with a dash (that is, we will use $-n$ instead of $\overline n$)
---
abstract: |
We report upper limits on the masses of black holes that can be present in the centers of sixteen nearby galaxy bulges. These limits $M_{\rm BH}^{lim}$ for our statistically complete sample were derived from the modeling of the central emission-line widths (\[\] or \[\]), observed over a $0\farcs25\times
0\farcs2$ ($R\ltorder 9$ pc) aperture. The experiment has a mean detection sensitivity of $\sim 3.9\times10^6\,{\rm M}_\odot$. For three sample members with direct determinations of $M_{\rm BH}$ our upper limits agree within the uncertainties, and in general they lie close to the masses measured in other bulges with similar global properties. Remarkably, our limits lie quite close to the recently derived $M_{\rm BH}-\sigma_{\star}$ relation. These results support a picture wherein the black-hole mass and overall galaxy structure are closely linked, as galaxies with exceptionally high $M_{\rm BH}$ at a given $\sigma_{\star}$ apparently are rare.
author:
- 'Marc Sarzi, Hans-Walter Rix, Joseph C. Shields, Daniel H. McIntosh, Luis C. Ho, Gregory Rudnick, Alexei V. Filippenko, Wallace L. W. Sargent, and Aaron J. Barth'
title: Limits on the Mass of the Central Black Hole in Sixteen Nearby Bulges
---
Introduction {#sec:UppLim_intro}
============
The last few years have seen great progress in studying the dark mass concentrations at the centers of “ordinary” quiescent galaxies, showing that they are very common and demonstrating in some cases that they must be supermassive black holes (SMBHs) by ruling out all astrophysically viable alternatives. Indeed, a picture is emerging in which SMBHs in the range of $10^6$ to $10^9\,\Msun$ are an integral part of galaxy formation (e.g., Kauffmann & Haehnelt 2000). Two examples in the local universe stand out with particularly convincing evidence as SMBHs: the Milky Way and NGC 4258. At the Galactic Center, direct observations of individual stars (Genzel 1997; Eckart & Genzel 1997; Ghez 1998) and a stream of ionized gas (Herbst 1993) orbiting Sgr A$^*$ show that all the dynamically relevant mass inside $\sim 1$ pc, $2.6\times 10^6\,\Msun$, is concentrated with a density of $\rho >10^{12}\,\Msun\ {\rm
pc}^{-3}$. In NGC 4258, a disk of masing molecular gas is orbiting the center with a Keplerian rotation curve as traced by H$_2$O maser emission (Miyoshi 1995); models imply that $M_{\rm BH}\approx
3.6\times 10^7\,\Msun$ and $\rho >4\times10^{9}\,\Msun\ {\rm
pc}^{-3}$. Alternatives to SMBHs, such as clusters of brown dwarfs or stellar remnants, can be ruled out in these two cases (e.g. Maoz 1995, 1998).
A number of techniques have matured that have demonstrated the presence of a central dark mass concentration in an ever growing number of nearby galaxy nuclei, with mass estimates accurate to a factor of $\sim$2 and concentration limits of $\rho >10^{6-8}\,\Msun\
{\rm pc}^{-3}$. Simple analogy with the exemplary cases where the SMBH presence is all but proven, and the connection to active galactic nuclei (AGN) activity at various intensity levels, suggest strongly that these central dark masses are SMBHs as well. The most widely used technique is stellar dynamical modeling (e.g., Dressler & Richstone 1988; Kormendy et al. 1996, 1997; van der Marel 1997; Cretton & van den Bosch 1999; Gebhardt 2000a), which has provided mass estimates for over two dozen nuclei, mostly in massive, early-type galaxies. Modeling the kinematics of ionized gas has produced a number of additional mass measurements (e.g., Harms et al. 1994; Ferrarese, Ford, & Jaffe 1996; Bower 1998; Verdoes Kleijn et al. 2000; Sarzi 2001, hereafter S01; Barth 2001a). Finally, application of results from reverberation mapping of active galactic nuclei have yielded central virial mass estimates for Seyfert galaxies (Ho 1999; Wandel, Peterson, & Malkan 1999) and QSOs (Kaspi 2000) that seem to be robust (Gebhardt 2000b; Ferrarese 2001).
Taken together, these results show that $M_{\rm BH}$ is correlated both with the stellar luminosity $L_{\rm bulge}$ (Kormendy & Richstone 1995; Magorrian 1998; Ho 1999; Kormendy et al. 2001) and, more tightly, with the velocity dispersion of the bulge $\sigma_{\star}$ (Gebhardt 2000a; Ferrarese & Merritt 2000). By contrast, $M_{\rm BH}$ is unrelated to the properties of galaxy disks (Kormendy et al. 2001). The growth of SMBHs appears to be closely linked with the formation of bulges. However, the actual slope and scatter of the $M_{\rm BH}-L_{\rm bulge}$ and $M_{\rm
BH}-\sigma_{\star}$ relations are still under debate. It is also important to remember that our knowledge about SMBHs is very uneven across the Hubble sequence of galaxies. The existing samples are preferentially weighted toward early-type galaxies with very massive black holes. From an observational point of view, there is a pressing need to acquire better $M_{\rm BH}$ statistics for spiral galaxies.
Motivated by the recent progress and the emerging correlations, but also by the desire to improve the black hole census in spirals, we derive mass constraints on SMBHs potentially present in the bulges of sixteen nearby disk galaxies. As we will show, these constraints constitute significant progress, both in terms of the number of target galaxies and in terms of broadening the range of parent galaxies with significant constraints.
We draw on spectra of nearby nuclei obtained with the Space Telescope Imaging Spectrograph (STIS) onboard the [*Hubble Space Telescope (HST)*]{}, taken as part of the Spectroscopic Survey of Nearby Galaxy Nuclei (SUNNS) project (Shields et al. 2000; Ho 2000; S01; Rix et al. 2001). Only four of our original twenty-four target galaxies showed extended line emission with symmetric kinematics and hence were suited for a direct $M_{\rm BH}$ determination. These cases have been modeled by S01 and yielded $M_{\rm BH}$ estimates. Here we analyze data for sixteen galaxies from SUNNS with central \[\] $\lambda\lambda$6548, 6583 and \[\] $\lambda\lambda$6716, 6731 line emission from ionized gas. In most cases the central gas emission is spatially unresolved, or only marginally so, and has line widths of $\sim 10^{2}$ km s$^{-1}$.
Spatially unresolved lines do not permit precise mass estimates, but potentially do allow us to derive useful upper limits on $M_{\rm BH}$. This is because the line emission in our central aperture must arise from gas at a distance from the center that is, at most, equal to the physical dimension of the region subtended by the central aperture itself, for our sample typically $\sim 9$ pc. If the gas motions are orbital, all velocities and hence the integrated line width will scale as $\sqrt{M_{\rm BH}}$. Note that the resulting limit on $M_{\rm BH}$ scales linearly with the central aperture size, affording [*HST*]{} an order of magnitude gain over ground-based observations (e.g., Salucci 2000), and making the derived limits astrophysically interesting.
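As a rough order-of-magnitude sketch of this scaling (with illustrative numbers, not the paper's actual model inputs), the Keplerian mass implied by a given line width at the aperture radius is $M \sim \sigma^2 R_{\rm ap}/G$, which is linear in the aperture size:

```python
# Aperture-limited virial mass estimate M ~ sigma^2 * R_ap / G.
# Numbers below are illustrative (sigma ~ 100 km/s, R_ap ~ 9 pc, typical
# of the sample scales quoted in the text).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
PC = 3.086e16        # one parsec in meters
MSUN = 1.989e30      # solar mass in kg

def m_bh_limit(sigma_kms, r_ap_pc):
    """Keplerian mass (in M_sun) implied by line width sigma at radius r_ap."""
    sigma_ms = sigma_kms * 1.0e3
    return sigma_ms**2 * (r_ap_pc * PC) / G / MSUN

m_9pc = m_bh_limit(100.0, 9.0)     # ~2e7 M_sun for these illustrative inputs
# a 10x smaller aperture tightens the limit by the same factor of 10:
assert abs(m_bh_limit(100.0, 0.9) / m_9pc - 0.1) < 1e-12
```

This linear dependence on $R_{\rm ap}$ is exactly the order-of-magnitude gain that the *HST* aperture affords over ground-based observations.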
At parsec-scale distances from galactic centers the gas is subject not only to gravitational forces but also to gas pressure and magnetic forces (e.g., for the Milky Way, Timmermann 1996; Yusef-Zadeh, Roberts, & Wardle 1997). In general, these other effects cause additional line broadening.
The paper is organized as follows. In §\[sec:UppLim\_Obser&DataRed\] we present the spectroscopic and photometric STIS observations, and in §\[sec:UppLim\_LinewidthsModelling\] we describe our modeling of the ionized gas kinematics. In §\[sec:UppLim\_results\] and §\[sec:UppLim\_Disc&Concl\], we present our results and draw our conclusions.
Observations and Data Reduction {#sec:UppLim_Obser&DataRed}
===============================
Galaxy Sample {#subsec:UppLim_GalSample}
-------------
All sample galaxies, mostly early-type disk galaxies (S0 – Sb), were observed with STIS as part of the SUNNS project; the full details of this program will be reported elsewhere (Rix 2001). The SUNNS galaxies are drawn from the Palomar spectroscopic survey of nearby galaxies (Filippenko & Sargent 1985; Ho, Filippenko, & Sargent 1995, 1997) and include all S0 to Sb galaxies within 17 Mpc known to have line emission ($\gtrsim
10^{-15}$ ergs s$^{-1}$ cm$^{-2}$) within a $2\asec \times 4\asec$ aperture. The present sample constitutes the subset of SUNNS objects with sufficient central line flux in H$\alpha$ or \[\] to provide adequate signal-to-noise (S/N $\gtrsim 10$). With the experimental setup described below, the sample is effectively defined by galaxies that have H$\alpha$ or \[\] line fluxes $\gtrsim
10^{-14}$ ergs s$^{-1}$ cm$^{-2}$ within a $0\farcs25 \times 0\farcs2$ central aperture, which correspond to line luminosities of $\gtrsim
3\times 10^{38}$ ergs s$^{-1}$ at the mean sample distance of 15 Mpc. Our sample is statistically well defined and complete in the sense that the original SUNNS sample was a volume-limited sample selected by emission-line flux within a (much larger) $2\asec \times 4\asec$ aperture. The basic parameters of the target galaxies are given in Table \[tab:UppLim\_GalSample\].
Observations {#subsec:UppLim_Observations}
------------
[*HST*]{} observations were acquired for all objects in SUNNS during 1998 and 1999. We placed the $0\farcs2 \times 52\asec$ slit across each nucleus along an operationally determined position angle, which is effectively random with respect to the galaxy orientation. After initial 20-s acquisition exposures with the optical long-pass filter (roughly equivalent to $R$), from which we derive surface photometry of the central regions, three exposures totaling approximately 45 minutes were obtained with the G750M grating; this resulted in spectra that cover 6300 Å to 6850 Å with a full-width at half maximum resolution for extended sources of 1.6 Å.
For 9 of the 16 galaxies in the present sample, the telescope was offset by $0\farcs05$ ($\sim$1 pixel) along the slit between repeated exposures to aid in the removal of hot pixels and cosmic rays. The two-dimensional (2-D) spectra were bias- and dark-subtracted, flat-fielded, aligned, and combined into single frames. Cosmic rays and hot pixels remaining in the combined 2-D spectra were cleaned following the recipe of Rudnick, Rix, & Kennicutt (2000). The 2-D spectra were then corrected for geometrical distortion, and wavelength and flux calibrated with standard [STSDAS]{} procedures within [IRAF]{}[^1].
To represent the generic “nuclear spectrum” of each galaxy, we extracted aperture spectra five pixels wide ($\sim 0\farcs25$), centered on the brightest part of the continuum. The acquisition images indicate that the uncertainties in the galaxy center due to dust are $\ltorder 0.25$ pix $\sim 0\farcs012$ (see also S01). In essence, therefore, the extracted spectra represent the average central emission, convolved with the STIS spatial point-spread function (PSF) and sampled over an aperture of $0\farcs25 \times
0\farcs2$, or 18 pc $\times$ 14 pc for the mean sample distance of 15 Mpc.
For three of our galaxies, central stellar velocity dispersions were either not available in the literature (NGC 3992 and NGC 4800) or quite uncertain (NGC 3982; Nelson & Whittle 1995). Therefore, we obtained new spectroscopic data for these objects on two observing runs: with the Boller & Chivens spectrograph at the Bok 90-inch telescope in May 2000 for NGC 3992, and with the Double Spectrograph (Oke & Gunn 1982) at the Palomar 200-inch telescope in June 2001 for NGC 3982 and NGC 4800. At Kitt Peak we used the 600 grooves mm$^{-1}$ grating to cover 3600–5700 Å with a pixel scale on the CCD of 1.86 Å pix$^{-1}$, while for the Palomar run we observed the Ca infrared triplet using the 1200 grooves mm$^{-1}$ grating on the red side of the Double Spectrograph, with a pixel scale of 0.63 Å pix$^{-1}$. Spectra were extracted for apertures of $3\farcs3 \times 2\farcs5$ and $3\farcs7 \times 2\farcs0$ for the Bok and Palomar spectra, respectively. Total exposure times were 40, 30, and 60 minutes for NGC 3982, NGC 3992, and NGC 4800, respectively. The stellar velocity dispersions were measured following the method of Rix (1995), and the obtained values are reported in Table \[tab:UppLim\_GalSample\].
Central Emission-Line Widths and Flux Profiles {#subsec:UppLim_CentralLinewidthsAndFluxProfiles}
----------------------------------------------
To quantify the emission-line velocity widths in these nuclear spectra, we simultaneously fit Gaussians of single width $\sigma_{cen}$ to the \[\] $\lambda\lambda$6548, 6583 and \[\] $\lambda\lambda$6716, 6731 emission-line doublets, using the [IRAF]{} task [SPECFIT]{}. The profiles of the \[\] and \[\] lines were found to be roughly Gaussian. We restricted ourselves to line widths from forbidden transitions to side-step the impact of a possible broad (broad-line region) component arising from radii much smaller than our observational aperture. However, in all objects where only a narrow H$\alpha$ emission-line component was present, we also included H$\alpha$ in the fit. In the few cases with prominent, very broad H$\alpha$ lines (e.g., NGC 4203, Shields 2000; NGC 4450, Ho 2000), particular care was taken to minimize the impact of the very broad lines on the estimate of $\sigma_{cen}$ in the adjacent \[\] lines. In virtually all objects the S/N is in excess of 10, and hence the formal errors in the estimated line width are negligible for the subsequent analysis. The instrumental line width derived from comparison lamps is $\sigma_{inst}
\approx 32$ km s$^{-1}$ and was subtracted in quadrature from the raw measurement of $\sigma_{cen}$; for all but two objects these line-width corrections were negligible, implying that the intrinsic widths were well resolved. This correction also spares us from having to account for instrumental broadening in the subsequent modeling of the line width. The resulting values for $\sigma_{cen}$ are listed in Table 1. Their characteristic errors, including the correction for the instrumental line width, are less than 10 km s$^{-1}$.
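The quadrature correction above can be sketched in a few lines; the raw width of 100 km s$^{-1}$ used below is an illustrative value, not a measurement from the sample:

```python
import math

# Instrumental correction: the intrinsic line width is recovered from the
# raw measured width by subtracting sigma_inst in quadrature
# (sigma_inst ~ 32 km/s from comparison lamps, as stated in the text).
def intrinsic_width(sigma_raw_kms, sigma_inst_kms=32.0):
    """sigma_intrinsic = sqrt(sigma_raw^2 - sigma_inst^2), all in km/s."""
    return math.sqrt(sigma_raw_kms**2 - sigma_inst_kms**2)

# For a raw width of 100 km/s the correction is ~5%, consistent with the
# intrinsic widths being well resolved:
print(round(intrinsic_width(100.0), 1))   # -> 94.7
```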
As we will detail below, any information on the gas spatial flux distribution on scales $\lesssim 0\farcs25$ provides a valuable constraint for the central line width modeling procedure. Therefore, we also obtained radial profiles along the slit direction for the ionized gas flux, by fitting a Gaussian to the \[\] $\lambda$6583 line-flux profile along the spatial direction on the 2-D spectra. We chose \[\] because among our sample galaxies this is almost always the brightest line, and because it is less likely to be affected by underlying absorption features in the stellar continuum than H$\alpha$. The \[\] emission-line flux profiles are shown in Figure \[fig:UppLim\_FluxProfilesAndFitAndContinuum\].
Modeling the Central Line Width {#sec:UppLim_LinewidthsModelling}
===============================
Basic Concept {#subsec:UppLim_BasicIdea}
-------------
We are now faced with converting the observed central line widths into estimates for the central black-hole mass.
To start, we assume that the ionized gas motion is dominated solely by gravity. In this case the central line width depends on: (a) the total gravitational potential of the putative SMBH and of the surrounding stars; (b) the spatial emissivity distribution (e.g., that of a disk of inclination $i$); and (c) the “kinematic behaviour” of the ionized gas, for example “dynamically cold” gas moving on circular orbits or hotter gas with hydrostatic support. The lack of spatially resolved information on the gas flux distribution within the central $0\farcs25 \times 0\farcs2$ aperture means that we can only derive upper limits to $M_{\rm BH}$; if the emission-line flux within the aperture arose from $R \ll R_{aperture}
\approx 0\farcs1$, arbitrarily small values of $M_{\rm BH}$ could explain the observed line width. If the gas motion is also affected by non-gravitational forces, such as outflows, magnetic fields, or supernova winds, this would broaden the integrated line velocity width additionally, and hence lower the required black-hole mass needed to explain a given $\sigma_{cen}$. By ignoring non-gravitational forces we are therefore conservative in estimating upper limits for the central black-hole mass. The absence of constraints on the importance of non-gravitational forces constitutes a second reason (besides the lack of spatially resolved information on the gas flux) why it is not possible to secure the presence of a SMBH in our sample galaxies, as hypothetically the observed line widths could be entirely explained by non-gravitational effects.
If the functional form of the potential well is fixed, then the central line width will scale with the potential for any given choice of the emissivity distribution and for the gas kinematical behaviour. In the simplest case of a purely Keplerian potential induced by a SMBH, $\Phi_{\rm BH}$, the expected central line width will scale as the square root of the black-hole mass. As the circular velocity at any given reference radius $R_{ref}$, $v_c(R_{ref})$, will scale in the same way, the ratio between $\sigma_{cen}$ and $v_c(R_{ref})$ is independent of black-hole mass. The task at hand is therefore to derive a plausible range of values for this ratio by varying the spatial emissivity distribution, and then to obtain a mass range for the putative SMBH from the observed central line width via $\sigma_{cen} \rightarrow v_c^2(R_{ref})
\rightarrow M_{\rm BH}= v_c^2(R_{ref})R_{ref}/G$.
The same would hold for a sequence of purely stellar potentials derived from the luminosity density with differing mass-to-light ratios $\Upsilon$. When both the stellar and the SMBH contribution to the gravitational potential are considered, the shape of the rotation curve, and hence $\sigma_{cen}/v_c(R_{ref})$, will depend on the relative weight of $M_{\rm BH}$ and $\Upsilon$.
In all cases we will proceed through the following steps in order to make a prediction for the gas velocity dispersion within the central aperture:
- Specify the spatial gas emissivity distribution and the gravitational potential and choose the kinematic behaviour of the gas.
- Construct 2-D maps for the moments of the line-of-sight velocity distribution (LOSVD) at any position $(x,y)$ on the sky $$\overline{\Sigma v^k}(x,y) =\int\,{\rm
LOSVD} \,(x,y,v_z)v_z^kdv_z \,\,(k=0,1,2),
\label{eq:LOSVDmoments}$$ as they would appear without the limitations of the spatial resolution; the first moment, for instance, is the mean gas velocity.
- Convolve each of the 2-D $\overline{\Sigma v^k}$ maps with the STIS PSF.
- Sample the convolved $\overline{\Sigma v^k}_{conv}$ 2-D maps over the desired aperture to obtain the PSF-convolved, aperture-averaged LOSVD velocity moments $\overline{\Sigma v^k}_{conv,ap}$, which are directly comparable to the observables.
- In particular, compute the ionized gas flux $f_{ap}$, the projected mean streaming velocity $\overline{v}_{ap}$, and the velocity dispersion $\sigma_{ap}$ within the desired aperture through $f_{ap} = \overline{\Sigma v^0}_{conv,ap}$, $\overline{v}_{ap} = \overline{\Sigma v^1}_{conv,ap}\,/f_{ap}$, and $\sigma_{ap} = \sqrt{\,\overline{\Sigma v^2}_{conv,ap}\,/f_{ap} -
\overline{v}_{ap}^2}$, respectively. The last quantity, $\sigma_{ap}$, can be compared with the measured velocity width $\sigma_{cen}$.
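The steps above can be sketched numerically for a toy configuration. The following sketch uses a Keplerian velocity field in an edge-on disk with Gaussian emissivity; all parameter values (grid size, $\sigma_{flux}$, PSF width, circular speed) are illustrative placeholders, not the paper's actual model inputs:

```python
import numpy as np

def psf_convolve(img, sigma_psf, x, y):
    """Convolve a 2-D map with a circular Gaussian PSF via FFT."""
    kern = np.exp(-(x**2 + y**2) / (2.0 * sigma_psf**2))
    kern /= kern.sum()
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(kern))))

n = 201                                            # pixels per side
x, y = np.meshgrid(np.linspace(-0.5, 0.5, n), np.linspace(-0.5, 0.5, n))  # arcsec
r = np.hypot(x, y)
r[r == 0] = np.nan                                 # drop the central singularity

flux = np.exp(-r**2 / (2.0 * 0.1**2))              # Gaussian emissivity, sigma = 0.1"
v_los = 100.0 * np.sqrt(0.1 / r) * (x / r)         # Keplerian v ~ r^(-1/2), projected

# Steps 2-3: Sigma*v^k moment maps, then PSF convolution (sigma_PSF = 0.05").
moments = [np.nan_to_num(flux * v_los**k) for k in range(3)]
conv = [psf_convolve(m, 0.05, x, y) for m in moments]

# Steps 4-5: flux, mean velocity, and dispersion in a 0.25" x 0.2" aperture.
ap = (np.abs(x) < 0.125) & (np.abs(y) < 0.1)
f_ap = conv[0][ap].sum()
v_ap = conv[1][ap].sum() / f_ap                    # ~0 by symmetry here
sigma_ap = np.sqrt(conv[2][ap].sum() / f_ap - v_ap**2)   # predicted width (km/s)
```

For a centered, symmetric aperture the mean streaming velocity cancels and the entire kinematic signal of the unresolved disk appears as the finite aperture-averaged dispersion $\sigma_{ap}$, which is the quantity compared with the measured $\sigma_{cen}$.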
In what follows, we will first derive upper limits on $M_{\rm BH}$ assuming that the gas is moving on circular orbits in a coplanar, randomly oriented disk within a Keplerian potential (§\[subsec:UppLim\_TheDiskModelling\]); then we consider the impact of the stellar potential on this disk modeling (§\[subsec:UppLim\_TheStarContribution\]); finally, we will explore a seemingly very different situation for the kinematical behaviour of the gas, that of hydrostatic equilibrium (§\[subsec:UppLim\_HydroEq\]), to demonstrate that our results are robust with respect to the underlying model assumptions. We anticipate that the most conservative upper limits on $M_{\rm BH}$ are derived from the first approach, where the impact of the stellar potential is neglected.
The Keplerian Disk Modeling {#subsec:UppLim_TheDiskModelling}
---------------------------
We start with the simple and plausible assumption that the ionized gas moves on circular orbits at the local circular velocity, which in turn is dictated solely by the gravitational influence of the putative SMBH, $v^2_c(R) = GM_{\rm BH}/R$. We further assume that the gas resides in a coplanar disk of unknown inclination with an intrinsically axisymmetric emissivity distribution, $\Sigma(R)$, centered on the stellar nucleus. Our best guess for $\Sigma(R)$ is derived from the data themselves. In this Keplerian disk the LOSVD at each position $(x,y)$ on the sky plane is just $${\rm LOSVD} \,(x,y,v_z)=\Sigma_{proj}(x,y)\, \delta[v_{c,proj}(x,y)-v_z]
\label{eq:LOSVDkeplerian}$$ and its $\overline{\Sigma v^k}$ velocity moments are simply given by $\Sigma_{proj}(x,y)$, $\Sigma_{proj}(x,y)\,v_{c,proj}(x,y)$, and $\Sigma_{proj}(x,y)\,v^2_{c,proj}(x,y)$, respectively, where $\Sigma_{proj}$ and $v_{c,proj}$ are the projected gas surface brightness and circular velocity. To deal with the central circular-velocity singularity, we neglected the contribution of the central point of the 2-D maps to the $\overline{\Sigma v^k}$ velocity moments, and we refined their grid sizes until no further substantial increase in the predicted line widths was found. The adopted grid size in our models corresponds to $0\farcs005$, or one tenth of a pixel.
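For these moments one needs $v_{c,proj}$ at each sky position. A toy version of this projection follows (our own sketch; the coordinate convention with $x$ along the disk line of nodes and the value of $G$ in pc M$_{\odot}^{-1}$ (km s$^{-1}$)$^2$ units are assumptions of the example):

```python
import numpy as np

G_PC = 4.301e-3  # G in pc Msun^-1 (km/s)^2 units

def v_los_keplerian(x, y, m_bh, inc):
    """Projected circular velocity (km/s) of a thin Keplerian disk.
    x is along the disk line of nodes, y along the minor axis (pc);
    inc is the inclination in radians (0 = face-on, < pi/2)."""
    xd, yd = x, y / np.cos(inc)        # deproject sky to disk plane
    R = np.hypot(xd, yd)               # disk-plane radius
    v_c = np.sqrt(G_PC * m_bh / R)     # Keplerian circular speed
    return v_c * np.sin(inc) * xd / R  # line-of-sight component

# On the line of nodes the full v_c * sin(i) is seen:
v = v_los_keplerian(np.array([10.0]), np.array([0.0]),
                    1e7, np.radians(60.0))
```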
The geometry of the projected velocity field will depend on the disk orientation, specified by its inclination $i$ with respect to the sky plane and its major axis position angle $\phi$ with respect to the slit direction. We have no information on the gas disk orientation within our central aperture, since the dust-lane morphology we employed in S01 cannot be used to provide a constraint on such small scales. Therefore, we need to explore all possible disk orientations to derive the probability distribution for $\sigma_{ap}/v_{c}(R_{ref})$, the ratio between the predicted central velocity dispersion and the circular velocity at the reference radius. We adopt $R_{ref}=0\farcs125$, corresponding to the distance of the central aperture edge from the center along the slit direction. We cover the possible disk orientations by constructing a grid of models with equally spaced $\cos{i}$ and $\phi$.
For the intrinsic radial surface brightness profile of the gas we assumed a Gaussian $$\Sigma(R)=a_{flux}\,e^{-R^2/2\sigma^2_{flux}},
\label{eq:intrinsicflux}$$ and we derived $a_{flux}$ and $\sigma_{flux}$ by matching the observed emission flux profiles along the slit (see §\[subsec:UppLim\_CentralLinewidthsAndFluxProfiles\]). This match again involves convolving the intrinsic $\Sigma(R)$ with the STIS PSF, which we parameterized as a sum of Gaussian components (see S01). The choice of a Gaussian for the intrinsic surface brightness distribution was a matter of convenience, since in this case the convolution is fully analytic. Intrinsically more concentrated profiles, such as exponential ones, would also reproduce the data once convolved with the STIS PSF, but would lead to tighter upper limits on $M_{\rm BH}$, which makes our Gaussian choice the more conservative one. We fit only the central five flux pixels for each galaxy, corresponding to the same region subtended by our central aperture.
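The analytic convenience comes from the fact that the convolution of two Gaussians is again a Gaussian whose variance is the sum of the input variances; a quick numerical check with a single PSF component (the widths are arbitrary illustrative values):

```python
import numpy as np

def gauss(x, sigma):
    """Unit-area Gaussian."""
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
sig_flux, sig_psf = 0.8, 0.6

# Discrete convolution of source and PSF, versus the closed form:
numeric = np.convolve(gauss(x, sig_flux), gauss(x, sig_psf),
                      mode="same") * dx
analytic = gauss(x, np.hypot(sig_flux, sig_psf))  # variances add
```

For a PSF written as a sum of Gaussians, the convolved model is simply the corresponding sum of such broadened components.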
Figure \[fig:UppLim\_FluxProfilesAndFitAndContinuum\] displays our best fits of $\Sigma(R)$, and illustrates that our model (Eq. \[eq:intrinsicflux\]) matches the data well within $0\farcs125$ in all
cases.[^2] Table \[tab:UppLim\_Results\] lists the best-fitting $\sigma_{flux}$ values. The actual position of the data points with respect to the center of the fitting function can be explained entirely by a small displacement ($\ll R_{ref}$) of the slit center from that of the gaseous disk, without violating our assumption of an axisymmetric emissivity distribution. Including this small offset in the modeling produces only negligible variations in the predicted central $\sigma_{ap}$. In this case the predicted mean velocities $\overline{v}_{ap}$, although non-zero, are very small and consequently the velocity dispersions $\sigma_{ap}$ are almost equal to the ones obtained with perfectly centered apertures (when $\overline{v}_{ap} \equiv
0$).
For comparison, we also show the stellar surface brightness profiles in Figure \[fig:UppLim\_FluxProfilesAndFitAndContinuum\], derived from the stellar continuum in the spectra; this comparison justifies our assumption that the gas and the stellar distribution are concentric. Indeed, an independent fit to the stellar profiles with the same functional profile adopted for the gas ones led to a mean offset along the slit direction of only $0.08\pm0.17$ pixels, consistent with no offset at all.
As an intermediate result of our modeling, we show in Figure \[fig:UppLim\_SigOverVcExample\] the $\sigma_{\rm ap}/v_{c}(R_{ref})$ ratios obtained in a Keplerian potential for different disk orientations and a range of typical values for $\sigma_{flux}$; the predicted $\sigma_{ap}$ does not depend on the total flux, or $a_{flux}$. The predicted central line width obviously increases from face-on to edge-on systems. At a given disk orientation, models with intrinsically more concentrated gas emissivity always have a larger line width than those with more extended flux distributions because the gas resides at smaller radii. Since the central $0\farcs25 \times 0\farcs2$ aperture is nearly square, the impact of the position angle parameter $\phi$ on the final confidence limits for the $\sigma_{ap}/v_{c}(R_{\rm ref})$ ratio is negligibly small.
The flux distributions in our sample galaxies are concentrated enough that the predicted line widths are [*monotonically*]{}
decreasing with increasing $\cos{i}$. Since randomly oriented disks have uniformly distributed $\cos{i}$, we can use Figure \[fig:UppLim\_SigOverVcExample\] to derive the median value and the $68\%$ upper and lower confidence limits on $M_{\rm BH}$ by simply taking the values of $\sigma_{ap}/v_{c}(R_{\rm ref})$ for the models with $\cos{i}$ = 0.5, 0.84, and 0.16, respectively.
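This shortcut works because, for uniformly distributed $\cos{i}$, the quantiles of a monotonic function of $\cos{i}$ are the function evaluated at the corresponding $\cos{i}$ quantiles. A toy check (the mapping below is invented for illustration and is not the actual model curve):

```python
import numpy as np

def ratio(cos_i):
    """Toy monotonically decreasing stand-in for sigma_ap / v_c."""
    return 0.5 * np.sqrt(1.0 - cos_i**2)

rng = np.random.default_rng(0)
cos_i = rng.uniform(0.0, 1.0, 200_000)   # randomly oriented disks
samples = ratio(cos_i)

# The median and 68% band of the sampled ratios match the mapping
# evaluated at cos(i) = 0.5, 0.16 (upper) and 0.84 (lower).
med = np.median(samples)
lo, hi = np.percentile(samples, [16.0, 84.0])
```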
There is a minor practical complication in this modeling: while changing the orientation of a disk with a fixed intrinsic flux distribution, the predicted flux profile along each of the five $0\farcs05 \times 0\farcs2$ apertures within the central $0\farcs25
\times 0\farcs2$ aperture changes, eventually becoming inconsistent with the observed one. Hence, at any given disk orientation we must readjust the intrinsic flux concentration $\sigma_{flux}$ in order to match the central five flux data points. Such a correction is particularly important for highly inclined disks, which have very different flux profiles when considered at different position angles. Since our $M_{\rm BH}$ upper limits are derived for nearly face-on orientations, these corrections will not strongly affect our results. Furthermore, tests have shown that the induced scatter in the $\sigma_{ap}/v_{c}(R_{ref})$ ratio at any given inclination is considerably smaller than the face-on to edge-on variation.
For simplicity, we only used a statistical correction for this effect. For a set of intrinsic values $\sigma_{flux,in}$, we collected all the central flux profiles predicted for a uniform grid in $\cos{i}$ and $\phi$. Then we treated each of these profiles as observed ones and matched each of them with a PSF-convolved Gaussian profile to get a distribution of $\sigma_{flux,out}$ values and a median $\langle\sigma_{flux,out}\rangle$ value for each $\sigma_{flux,in}$. By comparing the median $\langle\sigma_{flux,out}\rangle$ values with the corresponding $\sigma_{flux,in}$ values (Fig. \[fig:UppLim\_SigFluxGuessCorrection\]), we can correct for each galaxy our initial guess of the intrinsic $\sigma_{flux}$, derived by matching the observed central flux profiles of
Figure \[fig:UppLim\_FluxProfilesAndFitAndContinuum\]. For a given galaxy the corrected flux concentration to be input into the models is characterized by the $\sigma_{flux,in}$ value that, according to the previous scheme, leads to a median $\langle\sigma_{flux,out}\rangle$ equal to the $\sigma_{flux}$ of that galaxy. This $\sigma_{flux,in}$ value describes the intrinsic flux distribution that, when considering different disk orientations, leads to predicted flux profiles that are the most consistent with the observed one for the given galaxy.
We call these particular $\sigma_{flux,in}$ for each galaxy the corrected $\sigma_{flux,corr}$ values, which we list in Table \[tab:UppLim\_Results\] along with the ($+1\sigma$) upper limits on $M_{\rm BH}$ obtained adopting them.
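Operationally, the correction inverts a tabulated, monotonic input-to-median-output relation. A sketch (the 10% inflation of the recovered widths is a made-up illustration, not a measured value):

```python
import numpy as np

def corrected_sigma_flux(sigma_obs, sigma_in_grid, median_out_grid):
    """Find the sigma_flux,in whose median recovered width equals the
    observed sigma_flux, by linear interpolation on the tabulated
    (monotonically increasing) curve."""
    return np.interp(sigma_obs, median_out_grid, sigma_in_grid)

# Toy tabulation: projection effects inflate recovered widths by 10%.
sig_in = np.linspace(0.02, 0.20, 10)     # arcsec
sig_out = 1.10 * sig_in                  # median recovered widths
sig_corr = corrected_sigma_flux(0.11, sig_in, sig_out)
```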
As a final remark, we note that at the mean distance of our sample galaxies ($\sim 15$ Mpc) and for the typical mass of their central SMBHs ($\sim 2.4\times 10^7 {\rm M}_\odot$, as predicted by the $M_{\rm BH}-\sigma_{\star}$ relation), the adopted intrinsic flux distributions are generally concentrated enough that a double-horned LOSVD should be expected, especially for highly inclined disks. The fact that such a feature is not found in the observed emission-line profiles could represent evidence of intrinsic turbulence in the gas. Indeed, when the full LOSVD is properly constructed (by collecting at each velocity bin the total flux within the $0\farcs25 \times 0\farcs2$ aperture that arises from the corresponding iso-velocity slice of the flux distribution, once convolved with the STIS PSF), the double-horned shape disappears when an intrinsic gas velocity dispersion is introduced. In the favorable case, within this context, of a nearly face-on disk with $\cos{i}=0.84$, a velocity dispersion of about $\sim 30$ km s$^{-1}$ would be required on average to smooth out the double-horned shape, thus increasing the predicted line widths by 9% and decreasing the derived upper limits on $M_{\rm BH}$ by 19%. Too many assumptions would be required to make this correction on a case-by-case basis, which would probably require direct fitting of the observed emission lines, as done by Barth et al. (2001b). By adopting our current approach, our limits on $M_{\rm BH}$ remain conservative upper bounds.
The Stellar Contribution {#subsec:UppLim_TheStarContribution}
------------------------
We now proceed to evaluate the impact of the stellar potential $\Phi_{\star}$ on our modeling. We start by estimating for each galaxy the expected radius of the “sphere of influence” of the SMBH, $r_{infl}=GM_{\rm
BH}/\sigma_{\star}^2$, within which $M_{\rm BH}$ dominates the dynamics of a galaxy with a stellar velocity dispersion $\sigma_{\star}$. For this estimate we adopt the $M_{\rm BH}-\sigma_{\star}$ relation as parameterized by Gebhardt et al. (2000a), $\log M_{\rm BH} =
3.75\log\sigma_{\star} -0.55$. Then, $\log r_{infl} =
1.75\log\sigma_{\star} -2.92$, where $M_{\rm BH}$ is in units of ${\rm
M}_{\odot}$, $\sigma_{\star}$ in km s$^{-1}$, and $r_{infl}$ in pc. In Figure \[fig:UppLim\_Rinfluence\] we compare $r_{infl}$ with the physical scale corresponding to the mean radius of our central aperture $R_{aperture}=\sqrt{(0\farcs25 \times 0\farcs2)/\pi}\approx
0\farcs13$, and find that for many of our galaxies $r_{infl}
\leq R_{aperture}$, indicating that the stellar mass $M_{\star}$ within $R_{aperture}$ is comparable to or exceeds $M_{\rm BH}$. Therefore, our analysis needs to account for the stellar mass, and we need to derive the stellar mass density profiles $\nu_{\star}(r)$, in particular for galaxies with smaller $\sigma_{\star}$.
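The quoted scaling follows from substituting the $M_{\rm BH}-\sigma_{\star}$ parameterization into $r_{infl}=GM_{\rm BH}/\sigma_{\star}^2$. A quick check, assuming $G = 4.301\times10^{-3}$ pc M$_{\odot}^{-1}$ (km s$^{-1}$)$^2$:

```python
import numpy as np

G_PC = 4.301e-3  # G in pc Msun^-1 (km/s)^2 units

def log_r_infl(log_sigma):
    """log10 r_infl (pc) from r_infl = G M_BH / sigma^2 with
    log M_BH = 3.75 log sigma - 0.55 (sigma in km/s, M_BH in Msun)."""
    log_mbh = 3.75 * log_sigma - 0.55
    return np.log10(G_PC) + log_mbh - 2.0 * log_sigma

# The combination collapses to 1.75 log sigma - 2.92 (to two decimals):
lhs = log_r_infl(np.log10(200.0))
rhs = 1.75 * np.log10(200.0) - 2.92
```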
The relative importance of $M_{\star}$ and $M_{\rm BH}$ for the central line width depends also on the spatial extent of the gas emissivity. In particular, even when $r_{infl} < R_{aperture}$, the presence of the SMBH can still be noticed from the observed $\sigma_{cen}$ if $\sigma_{flux} \approx r_{infl}$ — that is, if most of the collected flux within $R_{aperture}$ is emitted by gas moving on nearly Keplerian orbits. Further, the impact of $\Phi_{\star}$ on the predicted $\sigma_{ap}$ also decreases with increasing flux concentration, since in general the circular velocity curve due to the stars increases monotonically with the distance from the galactic center. Indeed, the central slopes of stellar density profiles can be represented by a power law $\nu_{\star}(r)\sim r^{-\alpha}$ with $\alpha \leq 2$ (Gebhardt et al. 1996).
Figure \[fig:UppLim\_Rinfluence\] shows that $\sigma_{flux}$ is in general smaller than $R_{aperture}$, so we expect that the inclusion of the stellar mass in our modeling will cause only a modest correction to the black-hole masses inferred in §\[subsec:UppLim\_TheDiskModelling\].
To quantify the stellar mass contribution in each galaxy, we derived the mass density profile $\nu_{\star}(r)$ by deprojecting the stellar surface brightness distribution $\Sigma_{\star}(R)$ obtained from the STIS acquisition image, assuming spherical symmetry and a constant mass-to-light ratio $\Upsilon$. We applied the same multi-Gaussian algorithm adopted in S01 to circularly averaged $\Sigma_{\star}(\sqrt{ab})$ surface brightness profiles, extracted using the [IRAF]{} task [ELLIPSE]{} and color corrected into Johnson $R$-band magnitudes using the [IRAF]{} package [SYNPHOT]{} and assuming E–S0 galaxy templates. Gaussian components with $\sigma \leq 0.5$ pixel were considered as unresolved point sources and hence were excluded from the stellar mass budget. For simplicity we adopted for all galaxies $\Upsilon=5\; {\rm
M}_{\odot}/{\rm L}_{\odot}$, rescaled from van der Marel (1991) for $H_0=75$ km s$^{-1}$ Mpc$^{-1}$, instead of deriving individual values for $\Upsilon$ by matching ground-based $\sigma_{\star}$ measurements (see S01).
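For a single component the spherical deprojection is analytic: a Gaussian surface brightness of width $\sigma$ arises from a spatial density that is a Gaussian of the same width, rescaled by $1/(\sigma\sqrt{2\pi})$. A sketch verifying this by reprojecting numerically (illustrative only; one component instead of the actual multi-Gaussian fit):

```python
import numpy as np

def deprojected_gaussian(r, Sigma0, sigma):
    """Spherical density nu(r) whose line-of-sight projection is the
    Gaussian surface brightness Sigma0 * exp(-R^2 / (2 sigma^2))."""
    return (Sigma0 / (sigma * np.sqrt(2.0 * np.pi))
            * np.exp(-r**2 / (2.0 * sigma**2)))

# Reproject numerically: Sigma(R) = integral of nu(sqrt(R^2+z^2)) dz.
Sigma0, sigma, R = 1.0, 0.1, 0.05
z = np.linspace(-1.0, 1.0, 20001)
dz = z[1] - z[0]
Sigma_R = np.sum(deprojected_gaussian(np.hypot(R, z), Sigma0, sigma)) * dz
expected = Sigma0 * np.exp(-R**2 / (2.0 * sigma**2))
```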
Including the stellar potential in the modeling results in a 27% reduction of the median black-hole masses (for $\cos i = 0.5$) needed to explain the observed central line widths. This effect is most important for galaxies with small $\sigma_{\star}$ values: the median $M_{\rm BH}$ decreased by 37% for the sample galaxies with the lowest $\sigma_{\star}$ (NGC 3982, NGC 4321, and NGC 4548), but only by 3% for the one with the highest $\sigma_{\star}$ (NGC 4143).
The impact of the stellar mass on the upper limits of the black-hole mass, which are central to our analysis, is smaller yet, on average less than 12% (see Tab. \[tab:UppLim\_Results\]). Indeed, as in the purely Keplerian case, the predicted central line widths always increase monotonically with disk inclination, so the derived $+1\sigma$ upper limits on $M_{\rm BH}$ are obtained here too from models with nearly face-on disks. In this situation the circular velocities needed by the model to explain the observed line widths always far exceed those provided by the stellar potential alone.
Gas in Hydrostatic Equilibrium {#subsec:UppLim_HydroEq}
------------------------------
So far we have assumed that the observed central line widths are due to pure orbital motion of gas in a disk around the galaxy center. But even if we ignore non-gravitational effects, we still need to investigate how much the derived $M_{\rm BH}$ upper limits depend on different choices for the kinematic behaviour of the ionized gas. In particular, it is conceivable that the gas is at least in part supported by gas pressure. In order to explore the impact of such pressure on our results we considered the extreme case of pure hydrostatic support. For any emissivity density profile $\rho$, we need only the second [LOSVD]{} velocity moment $\overline{\Sigma v^2}$ without streaming motions. We obtain this moment by solving the hydrostatic equilibrium equation $(1/\rho)\,d(\rho\sigma^2)/dr=-d\Phi/dr$ for the gas velocity dispersion $\sigma(r)$, and then by integrating the luminosity-weighted $\sigma$ along the line of sight.
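A minimal numerical version of this step (our own sketch, not the code used for the paper): since $\rho\sigma^2 \to 0$ at large radius, the hydrostatic equation integrates to $\rho\sigma^2(r)=\int_r^{\infty}\rho\,(d\Phi/dr')\,dr'$. The sketch below evaluates this for a Keplerian potential; the power-law density in the check is used because it has the closed-form solution $\sigma^2 = GM_{\rm BH}/[(\alpha+1)\,r]$ for $\rho\propto r^{-\alpha}$:

```python
import numpy as np

G_PC = 4.301e-3  # G in pc Msun^-1 (km/s)^2 units

def hydro_sigma(r, rho, m_bh):
    """Hydrostatic gas dispersion sigma(r) (km/s) in a Keplerian
    potential: rho sigma^2 (r) = int_r^rmax rho G M_BH / r'^2 dr'."""
    integrand = rho * G_PC * m_bh / r**2
    seg = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)
    # reversed cumulative trapezoid: integral from r out to the edge
    outer = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])
    return np.sqrt(outer / rho)

# Check against the alpha = 2 power law: sigma^2 = G M_BH / (3 r).
r = np.linspace(0.1, 50.0, 20000)        # pc
sig = hydro_sigma(r, r**-2.0, 1e7)       # 1e7 Msun black hole
```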
Under these assumptions, we proceeded to match the observed $\sigma_{cen}$ for all the galaxies, assuming Gaussian profiles for their emissivity densities $\rho$, whose width has to match the observed central flux profiles. We thus obtained for each galaxy a value for the mass of the putative SMBH, in a purely Keplerian potential.
From this exercise we found that the predicted central velocity dispersions $\sigma_{ap}$ correspond to values from the rotating-disk models with $\cos{i}$ always between 0.65 and 0.71. Consequently, the $M_{\rm BH}$ values inferred by assuming hydrostatic equilibrium lie within the $\pm 1\sigma$ confidence limits obtained from the rotating-disk models, and all previous results hold.
Results {#sec:UppLim_results}
=======
We have explored the dynamical implications of the observed emission-line widths arising from the central $\sim 10$ pc of our sample galaxies, and we have demonstrated that the most conservative $1\sigma$ upper limits on $M_{\rm BH}$ are obtained assuming that the gas resides in a nearly face-on disk ($i\sim 33^{\circ}$; $\cos{i}=0.84$), moving on circular orbits around a central SMBH. Fortunately, extremely face-on orientations are statistically rare.
In Figure \[fig:UppLim\_UppLimKeplerian\] we place our $M_{\rm BH}$ upper limits derived for the Keplerian case (§\[subsec:UppLim\_TheDiskModelling\]) in the $M_{\rm BH}-\sigma_{\star}$ plane. For a comparison with the $M_{\rm BH}-\sigma_{\star}$ relation (as parameterized by Gebhardt et al. 2000a) we need to scale each $\sigma_{\star}$ to $\sigma_{e}$, the value that would have been observed within a circular aperture of $R = R_e$. The proper computation of such a quantity, which should include the contribution from rotation, was not possible for all the galaxies in our sample, since the necessary combination of surface brightness, velocity dispersion, and rotation velocity radial profiles was not available in the literature for all objects. We therefore used the aperture-correction algorithm of J[ø]{}rgensen, Franx, & Kjaergaard (1995), which was derived from kinematical data for 51 elliptical and lenticular galaxies and which also accounts for the effect of rotation. For the effective radii we adopted the seeing-corrected values from Baggett, Baggett, & Anderson (1998). In the few cases where this last compilation did not provide $R_e$ measurements (NGC 3982 and NGC 4800), we assumed $\sigma_{e} = \sigma_{\star}$.
Figure \[fig:UppLim\_UppLimKeplerian\] shows that, with one exception (NGC 4143), our derived upper limits on $M_{\rm BH}$ lie within or above the scatter of the $M_{\rm BH}-\sigma_{\star}$ relation. In this sense, the proposed $M_{\rm BH}-\sigma_{\star}$ relation passes this observational test for more than a dozen objects. Furthermore, except for the three galaxies with the lowest values of $\sigma_{\star}$, our upper limits on $M_{\rm BH}$ exceed the values predicted by the $M_{\rm BH}-\sigma_{\star}$ relation only by a modest amount (on average by a factor of $\sim 4.6$).
In particular for the galaxies in our sample with actual $M_{\rm BH}$ measurements from spatially resolved kinematics (S01), the present upper limits are consistent, within the errors, with the published $M_{\rm BH}$ values. Moreover, when our $+1\sigma$ upper limits are compared with other published $M_{\rm BH}$ measurements (as compiled recently by Kormendy & Gebhardt 2001, Fig. \[fig:UppLim\_UppLimKeplerian\_G00\_Mbhs\]), most appear to be quite close to the actual values for the black-hole mass, with the exception of the same galaxy lying below the $M_{\rm
BH}-\sigma_{\star}$ relation (NGC 4143) and the three objects in our sample with the lowest value of $\sigma_{\star}$ (NGC 3982, NGC 4321, and NGC 4548). Figure \[fig:UppLim\_UppLimKeplerian\_G00\_Mbhs\] therefore shows that for the bulk of our sample with $100\,{\rm km\,s^{-1}} \leq
\sigma_{\star} \leq 200\,{\rm km\,s^{-1}}$ our line-width modeling technique gives results statistically consistent with the values obtained through other techniques. The current data do not provide any indication that the spiral and lenticular galaxies in our sample differ in their black-hole masses. Further, our upper limits on $M_{\rm BH}$ do not seem to differ between barred or unbarred host galaxies, or as a function of the nuclear spectral classification.
For any interpretation of upper limits, the basic sensitivity of the experiment is crucial. Our $M_{\rm BH}$ sensitivity limit depends not only on the physical size of the resolution element and on the amount of stellar mass, but also on the spatial emissivity distribution of the gas. Indeed, we often find the spatial extent of the ionized gas to be smaller than the dimension of the $0\farcs25 \times 0\farcs2$ aperture. Considering the spatial emissivity of each of our sample galaxies, we can derive conservative sensitivity limits for a nearly face-on disk ($\cos{i}=0.84$) by first computing the predicted line widths arising in a purely stellar potential, and then by asking what value of $M_{\rm BH}$ needs to be added in order to increase those line widths by their typical measurement error, conservatively $\sim$10 km s$^{-1}$ (see §\[subsec:UppLim\_CentralLinewidthsAndFluxProfiles\]).
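The sensitivity-limit search is a one-dimensional root find on a monotonic width-versus-mass relation. A sketch with a made-up quadrature width model (the real widths come from the full disk models, so the numbers here are purely illustrative):

```python
import numpy as np

def sensitivity_limit(width_stars_only, err, width_model):
    """Smallest M_BH (Msun) whose added contribution raises the purely
    stellar line width by the measurement error `err` (km/s), found by
    bisection in log M on the monotonic width_model(M)."""
    target = width_stars_only + err
    lo, hi = 1e3, 1e12
    for _ in range(200):
        mid = np.sqrt(lo * hi)           # geometric-mean bisection
        if width_model(mid) < target:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)

# Toy model: the BH term adds in quadrature with sigma_BH^2 ~ M_BH.
sigma_star = 60.0                         # km/s, stars only
def width(m_bh):
    return float(np.hypot(sigma_star, np.sqrt(m_bh / 1e4)))

m_lim = sensitivity_limit(sigma_star, 10.0, width)
```

In this toy model the answer is analytic, $m_{lim} = (70^2 - 60^2)\times10^4 = 1.3\times10^7$, which the bisection recovers.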
With a mean value of 3.9$\times10^6$ M$_{\odot}$, the derived sensitivity limits lie well below the inferred $+1\sigma$ upper limits on $M_{\rm BH}$ (see Fig. \[fig:UppLim\_UppLimKeplerian\] and Table \[tab:UppLim\_Results\]), which can therefore be considered robust. We notice that by assigning a constant mass-to-light ratio to [*all*]{} Gaussian components of the luminosity density profile, including the ones with $\sigma \leq 0.5$ pixel (see §\[subsec:UppLim\_TheStarContribution\]), the derived sensitivity limits increase only by 30%, to a mean value of 5.1$\times10^6$ M$_{\odot}$.
Our $+1\sigma$ upper limits for the whole sample correspond to $M_{\rm BH}$ values produced by nearly face-on disks ($\cos{i}=0.84$). As mentioned, we find one object, NGC 4143, for which the derived upper limit on $M_{\rm BH}$ is below the $M_{\rm BH}-\sigma_{\star}$ relation. For 16 sample members, the expected number of disks with inclinations more face-on than $\cos{i}=0.84$ is $\sim 2-3$. In order to reconcile the observed central line width of NGC 4143 with the $M_{\rm BH}-\sigma_{\star}$ relation within the Keplerian-disk framework, the disk would need to have an inclination angle of $27^{\circ}$. However, it should be noticed that NGC 4143 is one of the galaxies in our sample with a nearly unresolved spatial emissivity distribution (see Fig. \[fig:UppLim\_FluxProfilesAndFitAndContinuum\]), which could actually be more concentrated than the one we adopted. Since this would lower the upper limit on $M_{\rm BH}$, we regard NGC 4143 as an interesting candidate for future investigations.
The situation is different for the three galaxies with the lowest values of $\sigma_{\star}$ in our sample, because their spatial flux profiles are resolved. Hence, we cannot explain their relatively high values of $\sigma_{cen}$ ($\sim 100$ km s$^{-1}$) within the context of a Keplerian disk in terms of gas orbiting in the vicinity of a $\sim 2\times10^6$ M$_{\odot}$ SMBH (as predicted by the $M_{\rm BH}-\sigma_{\star}$ relation). Furthermore, as the derived sensitivity limits on $M_{\rm BH}$ for these three galaxies are also around $\sim 2\times10^6$ M$_{\odot}$ (see Table \[tab:UppLim\_Results\]), we cannot expect the stellar mass contribution to help explain their $\sigma_{cen}$ values. Indeed, the line widths obtained from the stellar potential [*and*]{} a $2\times10^6$ M$_{\odot}$ SMBH are considerably smaller than the observed ones; for these three galaxies the predicted line widths are on average $\sim 40$ and $\sim 75$ km s$^{-1}$ in the nearly face-on ($\cos{i}=0.84$) and edge-on ($\cos{i}=0.16$) cases, respectively. Alternatively, the observed central line widths might arise in all three objects from highly inclined nuclear disks. This is not only unlikely but may also be insufficient, as in the case of NGC 3982, even when considering a perfectly edge-on nuclear disk. Therefore, for the three least massive bulges we may have indirect evidence that at least part of the observed line width is due to non-gravitational effects.
Discussion and Conclusions {#sec:UppLim_Disc&Concl}
==========================
We have demonstrated that with [*HST*]{}’s spatial resolution the integrated line widths of the central emission lines provide stringent and interesting constraints on the presence of SMBHs. The relative observational ease of this approach makes it potentially applicable to large galaxy samples, which would allow us to test the universal applicability of the emerging relations between $M_{\rm BH}$ and galaxy properties.
Our modeling, which was necessary to connect the observed $\sigma_{cen}$ with the quantity of immediate interest, $v_c(R_{ref})$, was based on the assumption that the gas line width arises solely from orbital motion within a randomly oriented disk around a putative SMBH. Reality is undoubtedly more complex, and we have considered other potentially relevant effects, such as the stellar contribution to the total gravitational potential and, more simplistically, hydrostatic support of the gas. The dynamical influence of outflows and magnetic fields could also be important. Except under fine-tuned circumstances, all these effects provide an additional contribution to the observed line width, so the inferred upper limit on $M_{\rm BH}$ only becomes tighter. Hence, our adopted set of assumptions leads to conservative estimates.
Comparison of our upper limits with direct $M_{\rm BH}$ determinations, either statistically (Fig. \[fig:UppLim\_UppLimKeplerian\_G00\_Mbhs\]) or in a few cases individually (Fig. \[fig:UppLim\_UppLimKeplerian\]), showed that our $+1\sigma$ upper limits are generally close to the actual value of $M_{\rm BH}$.
We have applied this analysis to a set of 16 galaxies whose sample selection was not biased toward particular $M_{\rm BH}$ values. Remarkably, with one exception, our $+1\sigma$ upper limits on $M_{\rm
BH}$ closely parallel the $M_{\rm BH}-\sigma_{\star}$ relation and suggest that for galaxies with $\sigma_{\star}\geq 100\,{\rm
km\,s^{-1}}$, SMBHs with exceptionally high $M_{\rm BH}$ that violate the $M_{\rm BH}-\sigma_{\star}$ relation must be rare. By considerably broadening the range of host galaxies surveyed for SMBHs, our 16 upper limits further support the emerging picture wherein the black-hole mass and the overall galaxy structure are closely linked.
Even with a limited sample of 16 objects, we have been able to isolate a few cases worthy of further investigations. NGC 4143 stands out as the only object that falls below the $M_{\rm BH}-\sigma_{\star}$ relation; we speculate that this may indicate that its nuclear disk is nearly face-on. Three low-$\sigma_{\star}$ galaxies (NGC 3982, NGC 4321, and NGC 4548) seem to have $M_{\rm BH}$ upper limits that lie systematically offset from other galaxies of low velocity dispersion in which the $M_{\rm BH}$ was obtained by studying the stellar kinematics. This suggests that in low-mass bulges non-gravitational forces can considerably affect the gas motions in the central 10 pc.
Baggett, W. E., Baggett, S. M., & Anderson, K. S. J. 1998, , 116, 1626
Barth, A. J., Sarzi, M., Rix, H.-W., Ho, L. C., Filippenko, A. V., & Sargent, W. L. W. 2001a, , 555, 685
Barth, A. J., Sarzi, M., Rix, H.-W., Ho, L. C., Filippenko, A. V., & Sargent, W. L. W. 2001b, in The Central Kpc of Starbursts and AGN: The La Palma Connection, ed. J. H. Knapen, et al. (San Francisco: ASP), in press
Bertola, F., & Corsini, E. M. 2000, in Dynamics of Galaxies: from the Early Universe to the Present, ed. F. Combes, G. A. Mamon, & V. Charmandari (San Francisco: ASP), 115
Bertola, F., Corsini, E. M., Beltr[á]{}n, J. C. V., Pizzella, A., Sarzi, M., Cappellari, M., & Funes, S. J. 1999, , 519, L127
Bower, G. A., et al. 1998, , 492, L111
Corsini, E. M., et al. 1999, , 342, 671
Cretton, N., & van den Bosch, F. C. 1999, , 514, 704
Dalle Ore, C., Faber, S. M., Jesus, J., Stoughton, R., & Burstein, D. 1991, , 366, 38
de Vaucouleurs, G., de Vaucouleurs, A., Corwin, H. G., Jr., Buta, R. J., Paturel, G., & Fouqué, R. 1991, Third Reference Catalogue of Bright Galaxies (New York: Springer)
Di Nella, H., Garcia, A. M., Garnier, R., & Paturel, G. 1995, , 113, 151
Dressler, A., & Richstone, D. O. 1988, , 324, 701
Eckart, A., & Genzel, R. 1997, , 284, 576
Ferrarese, L., Ford, H. C., & Jaffe, W. 1996, , 470, 444
Ferrarese, L., & Merritt, D. 2000, , 539, L9
Ferrarese, L. Pogge, R. W., Peterson, B. M., Merritt, D., Wandel, A., & Joseph, C. L. 2001, , 555, L79
Filippenko, A. V., & Sargent, W. L. W. 1985, , 57, 503
Gebhardt, K., et al. 1996, , 112, 105
Gebhardt, K., et al. 2000a, , 539, L13
Gebhardt, K., et al. 2000b, , 543, L5
Genzel, R., Eckart, A., Ott, T., & Eisenhauer, F. 1997, , 291, 219
Ghez, A. M., Klein, B. L., Morris, M., & Becklin, E. E. 1998, , 509, 678
Harms, R. J., et al. 1994, , 435, L35
H[é]{}raudeau, Ph., & Simien, F. 1998, , 133, 317
H[é]{}raudeau, Ph., Simien, F., Maubon, G., & Prugniel, P. 1999, , 136, 509
Herbst, T. M., Beckwith, S. V. W., Forrest, W. J., & Pipher, J. L. 1993, , 105, 956
Ho, L. C. 1999, in Observational Evidence for Black Holes in the Universe, ed. S. K. Chakrabarti (Dordrecht: Kluwer), 157
Ho, L. C., Filippenko, A. V, & Sargent, W. L. W. 1995, , 98, 477
——. 1997, ApJS, 112, 315
Ho, L. C., Rudnick, G., Rix, H.-W., Shields, J. C., McIntosh, D. H., Filippenko, A. V., Sargent, W. L. W., & Eracleous, M. 2000, , 541, 120
Jarvis, B. J., Dubath, P., Martinet, L., & Bacon, R. 1988, , 74, 513
J[ø]{}rgensen, I., Franx, M., & Kjaergaard, P. 1995, , 276, 1341
Kaspi, S., Smith, P. S., Netzer, H., Maoz, D., Jannuzi, B. T., & Giveon, U. 2000, , 533, 631
Kauffmann, G., & Haehnelt, M. 2000, , 311, 576
Kent, S. M. 1990, , 100, 377
Kormendy, J., & Gebhardt, K. 2001, in The 20th Texas Symposium on Relativistic Astrophysics, ed. H. Martel & J. C. Wheeler (New York: AIP), in press
Kormendy, J., & Richstone, D. 1995, , 33, 581
Kormendy, J., et al. 1996, , 459, L57
——. 1997, , 473, L91
——. 2001, , submitted
Magorrian, J., et al. 1998, , 115, 2285
Maoz, E. 1995, , 447, L91
Maoz, E. 1998, , 494, L181
Merritt, D., & Ferrarese, L. 2001, , 547, 140
Miyoshi, M., Moran, J., Herrnstein, J., Greenhill, L., Nakai, N., Diamond, P., & Inoue, M. 1995, , 373, 127
Nelson, C. H., & Whittle, M. 1995, , 99, 67
Oke, J. B., & Gunn, J. E. 1982, , 94, 586
Richstone, D. O., et al. 1998, , 395, A14
Rix, H.-W., Kennicutt, R. C., Jr., Braun, R., & Walterbos, R. A. M. 1995, , 438, 155
Rudnick, G., Rix, H.-W., & Kennicutt, R. C., Jr. 2000, , 538, 569
Salucci, P., Ratnam, C., Monaco, P., & Danese, L. 2000, , 317, 488
Sarzi, M., Rix, H.-W., Shields, J. C., Rudnick, G., Ho, L. C., McIntosh, D. H., Filippenko, A. V., & Sargent, W. L. W. 2001, , 550, 65 (S01)
Scarlata M. C., Bertola, F., Cappellari, M., Sarzi, M., Corsini, E. M., & Pizzella, A. 2001, in Galaxy Disks and Disk Galaxies, ed. Funes, J. G., & Corsini, E. M. (San Francisco: ASP), 163
Schechter, P. L. 1983, , 52, 425
Shields, J. C., Rix, H.-W., McIntosh, D. H., Ho, L. C., Rudnick, G., Filippenko, A. V., Sargent, W. L. W., & Sarzi, M. 2000, , 534, L27
Timmermann, R., Genzel, R., Poglitsch, A., Lutz, D., Madden, S. C., Nikola, T., Geis, N., & Townes, C. H. 1996, , 466, 242
Tonry, J., Dressler, A., Blakeslee, J. P., Ajhar, E. A., Fletcher, A. B., Luppino, G. A., Metzger, M. R., & Moore, C. B. 2001, , 546, 681
Tully, R. B. 1988, Nearby Galaxies Catalog (Cambridge: Cambridge Univ. Press)
van der Marel, R. P. 1991, , 253, 710
van der Marel, R. P., de Zeeuw, P., Rix, H.-W., & Quinlan, G. D. 1997, , 385, 610
Verdoes Kleijn, G. A., van der Marel, R. P., Carollo, C. M., & de Zeeuw, P. T. 2000, , 120, 1221
Wandel, A., Peterson, B. M., & Malkan, M. A. 1999, , 526, 579
Whitmore, B. C., Schechter, P. L., & Kirshner, R. P. 1979, , 234, 68
Yusef-Zadeh, F., Roberts, D. A., & Wardle, M. 1997, , 490, L83
| Galaxy | Type | $B_T$ | Nuclear class | $D$ (Mpc) | $\sigma_{\star}$ (km s$^{-1}$) | Ref. | $\sigma_{e}$ (km s$^{-1}$) | $\sigma_{cen}$ (km s$^{-1}$) | Obs. date |
|---|---|---|---|---|---|---|---|---|---|
| NGC 2787 | SB0$^+$ | 11.82 | L1.9 | 13.0 | $210 \pm 23$ | 1 | $185 \pm 20$ | $215.1 \pm 4.3$ | 05 Dec. 1998 |
| NGC 3351 | SBb | 10.53 | H | 8.1 | $101 \pm 16$ | 2 | $93 \pm 15$ | $47.2 \pm 1.5$ | 25 Dec. 1998 |
| NGC 3368 | SABab | 10.11 | L2 | 8.1 | $135 \pm 10$ | 3 | $114 \pm 8$ | $101.5 \pm 3.1$ | 31 Oct. 1998 |
| NGC 3982 | SABb: | … | S1.9 | 17.0 | $78 \pm 2$ | 4 | $78 \pm 2$ | $136.7 \pm 3.5$ | 11 Apr. 1998 |
| NGC 3992 | SBbc | 10.60 | T2: | 17.0 | $140 \pm 20$ | 4 | $119 \pm 17$ | $109.5 \pm 3.1$ | 19 Feb. 1999 |
| NGC 4143 | SAB0$^{\circ}$ | 11.65 | L1.9 | 17.0 | $270 \pm 12$ | 5 | $271 \pm 12$ | $226.3 \pm 2.1$ | 20 Mar. 1999 |
| NGC 4203 | SAB0$^-$: | 11.80 | L1.9 | 9.7 | $124 \pm 16$ | 1 | $110 \pm 14$ | $148.9 \pm 5.2$ | 18 Apr. 1999 |
| NGC 4321 | SABbc | 10.05 | T2 | 16.8 | $83 \pm 12$ | 6 | $74 \pm 11$ | $85.7 \pm 1.7$ | 23 Apr. 1999 |
| NGC 4450 | Sab | 10.90 | L1.9 | 16.8 | $130 \pm 17$ | 2 | $121 \pm 16$ | $162.4 \pm 1.7$ | 31 Jan. 1999 |
| NGC 4459 | S0$^+$ | 11.32 | T2: | 16.8 | $189 \pm 21$ | 1 | $167 \pm 18$ | $193.1 \pm 5.2$ | 23 Apr. 1999 |
| NGC 4477 | SB0:? | 11.38 | S2 | 16.8 | $156 \pm 12$ | 7 | $134 \pm 10$ | $128.9 \pm 2.2$ | 23 Apr. 1999 |
| NGC 4501 | Sb | 10.36 | S2 | 16.8 | $151 \pm 17$ | 8 | $136 \pm 15$ | $110.8 \pm 1.8$ | 26 Apr. 1999 |
| NGC 4548 | SBb | 10.96 | L2 | 16.8 | $82 \pm 9$ | 9 | $71 \pm 8$ | $81.2 \pm 1.8$ | 26 Apr. 1999 |
| NGC 4596 | SB0$^+$ | 11.35 | L2:: | 16.8 | $154 \pm 5$ | 10 | $136 \pm 4$ | $142.0 \pm 8.7$ | 20 Dec. 1998 |
| NGC 4698 | Sab | 11.46 | S2 | 16.8 | $134 \pm 6$ | 9 | $116 \pm 5$ | $101.9 \pm 2.2$ | 24 Nov. 1997 |
| NGC 4800 | Sb | 12.30 | H | 15.2 | $112 \pm 2$ | 4 | $112 \pm 2$ | $71.9 \pm 7.0$ | 03 Mar. 1999 |
[cccccc]{} NGC 2787 & 0.056 & 0.053 & 1.9$\times10^8$ & 1.8$\times10^8$ & 3.8$\times10^6$\
NGC 3351 & 0.132 & 0.153 & 9.7$\times10^6$ & 8.0$\times10^6$ & 2.1$\times10^6$\
NGC 3368 & 0.086 & 0.095 & 3.8$\times10^7$ & 2.7$\times10^7$ & 4.5$\times10^6$\
NGC 3982 & 0.051 & 0.046 & 8.0$\times10^7$ & 7.5$\times10^7$ & 3.3$\times10^6$\
NGC 3992 & 0.051 & 0.046 & 5.7$\times10^7$ & 5.3$\times10^7$ & 3.0$\times10^6$\
NGC 4143 & 0.035 & 0.024 & 1.4$\times10^8$ & 1.4$\times10^8$ & 1.4$\times10^6$\
NGC 4203 & 0.030 & 0.015 & 2.3$\times10^7$ & 2.3$\times10^7$ & 4.3$\times10^5$\
NGC 4321 & 0.046 & 0.040 & 2.7$\times10^7$ & 2.5$\times10^7$ & 2.3$\times10^6$\
NGC 4450 & 0.046 & 0.040 & 1.1$\times10^8$ & 1.1$\times10^8$ & 3.1$\times10^6$\
NGC 4459 & 0.041 & 0.033 & 1.3$\times10^8$ & 1.3$\times10^8$ & 2.9$\times10^6$\
NGC 4477 & 0.056 & 0.053 & 8.7$\times10^7$ & 7.8$\times10^7$ & 5.1$\times10^6$\
NGC 4501 & 0.081 & 0.088 & 9.0$\times10^7$ & 7.4$\times10^7$ & 7.6$\times10^6$\
NGC 4548 & 0.066 & 0.067 & 3.3$\times10^7$ & 2.8$\times10^7$ & 3.8$\times10^6$\
NGC 4596 & 0.056 & 0.053 & 1.1$\times10^8$ & 9.4$\times10^7$ & 5.6$\times10^6$\
NGC 4698 & 0.091 & 0.100 & 8.0$\times10^7$ & 7.1$\times10^7$ & 6.0$\times10^6$\
NGC 4800 & 0.076 & 0.081 & 3.2$\times10^7$ & 2.0$\times10^7$ & 6.1$\times10^6$\
[^1]: IRAF is distributed by the National Optical Astronomical Observatories, which are operated by AURA, Inc. under contract to the NSF.
[^2]: All \[\] flux profiles of Figure \[fig:UppLim\_FluxProfilesAndFitAndContinuum\] were symmetrical even outside the central aperture region, with the notable exceptions of NGC 4501 and NGC 4698. The latter Sa galaxy shows the presence of a stellar (Bertola 1999) and gaseous (Bertola & Corsini 1999) core with an angular momentum perpendicular to that of the main galactic disk. This core can be identified as a disk from [*HST*]{} imaging (Scarlata 2001). Consistent with these findings, a recent accretion event may explain why NGC 4698 is one of only two galaxies in our sample of 16 to exhibit a strongly asymmetric gas distribution within $0\farcs3$ ($\pm 6$ rows). Within this region, assuming that the nuclear regions of our galaxies are dominated by SMBHs with masses consistent with the $M_{\rm BH}-\sigma_{\star}$ relation, the dynamical timescale ranges from 0.5 to 12.0 Myr.
|
---
abstract: |
A [*heuristic*]{} approach is proposed to estimate the average speed of particles during binary encounters, using the macroscopic variables and their gradients, which are the fundamental independent variables of [*extended thermodynamics*]{}. We also address a contribution missing from conventional Bremsstrahlung, which we attribute to the creation of new particles ([*Acoustons*]{}).
[**Key Words :**]{} Dynamic Casimir effect; Bremsstrahlung; stationary conservation form; hard-sphere.
[**PACS Codes**]{} : 02.30.Jr; 05.60.-k; 34.10.+x; 12.90.+b
author:
- 'A. Kwang-Hua Chu'
date: 'P.O. Box 39, Tou-Di-Ban, Road XiHong, Urumqi 830000'
title: |
Macroscopic Estimate of the Average Speed\
for New Particles Created in Collision
---
Introduction
============
Rough approximations to the velocity of particles obeying a specific distribution have long been a fundamental issue in the kinetic theory of gases [@Jeans:Gas]. For a gas in a uniform steady state, the molecular speed normally varies with the molecular weight and the absolute temperature if only the translational part of the kinetic energy is considered [@Chapman:Cowling]. We discuss an approximate estimate of the average particle speed (in a one-dimensional sense, $c_x$) obtained from the stationary equations of wave-breaking-like conservation laws \[3-4\]. The [*flow*]{} is assumed to be uniformly bounded and to avoid the vacuum state. Since a conservation law is an integral relation, it may be satisfied by functions which are not differentiable (as in discrete particle- or molecule-based flows treated with the Boltzmann approach for a [*dilute*]{} gas), or even continuous, but merely measurable and bounded. We note that steady shocks can occur in an ideal case \[5-6\] or in a microscopic way \[7-9\]. In this short paper we investigate this kind of 1D flow heuristically. The flow field (expressed in terms of the flow velocity) then depends only on the pressure gradient and the density gradient.
Formulation
===========
Stationary Weak Shock
---------------------
Starting from the integral form of the balanced equations for the one-dimensional flow allowing discontinuity in x-direction velocity $u$ (cf. Fig. 1): $$\frac{d}{dt}\int_{x_l}^{x_r} f dx +[g]_{x_l}^{x_r} =0,$$ here, \[$\,$\] relates to the jump [@Whitham:Stoss]; $f$, $g$ can be the density and flux of mass, or the density and flux of momentum, and we neglect the source-term effects, e.g., body force in the momentum-balance analogy. We assume that $u$ has continuous first derivatives and $f$, $g$ are functions of $x$, $t$, $u$. Thus, together with jump condition [@Whitham:Stoss] and entropy condition for a weak solution \[3,4\], (1) becomes $$\frac{\partial f(x,t,u)}{\partial t}+\frac{\partial g(x,t,u)}{\partial
x} =0.$$ Taking $f$ to be the mass and momentum densities and $g$ the corresponding fluxes, we have $$\frac{\partial \rho}{\partial t}+\frac{\partial(\rho\,u)}{\partial x} =0,$$ $$\frac{\partial (\rho\,u)}{\partial t}+\frac{\partial(\rho\,u^2+p)}{\partial x}
=0.$$ Here, $$p=-\frac{1}{3}p_{ii}=\frac{1}{3} \int^{\infty}_{-\infty} m c_i^2
F d{\bf c} ,$$ where $F$ is the velocity distribution function of the molecules from the kinetic theory of gases, $m$ is the mass of a molecule, and ${\bf c}$ (or $c_i$) measures the deviation of the molecular velocity from the mean or macroscopic velocity $\frac{1}{\rho}\int^{\infty}_{-\infty} m {\bf v}\, F\, d{\bf v}$; ${\bf v}$ is the absolute molecular velocity. The stationary solution $u$ of Eqns. (3) and (4) is then $$u=(\frac{\partial p/\partial x}{\partial \rho/\partial x})^{1/2} .$$ For weak shocks, Eqn. (5) tends to the characteristic velocity $$U=(\frac{\partial p}{\partial \rho})^{1/2},$$ in the limit as the shock strength approaches zero; this is just the generalization of the 1D sound speed. Up to this point, the internal energy $e$ and the enthalpy $h=e+{p}/{\rho}$, which for an ideal gas are functions of temperature alone, have not been specified. We also neglect viscous and heat-conducting effects throughout. Thus, this kind of stationary shock can only exist either in a discrete sense \[11-12\] or in a microscopic way (e.g., induced by molecular collisions) \[9,13\]. Considering the time scale of collisions (e.g., the mean collision time), which is much shorter than the relaxation time, the [*stationary shock*]{} concept is valid once we neglect high-frequency behavior or relaxation effects and take only the low-frequency limit into account.
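Since Eqns. (5) and (6) carry the main result of this subsection, a minimal numerical sketch can confirm that the ratio of gradients in Eqn. (5) reproduces the characteristic speed of Eqn. (6) for a weak, smooth disturbance. The isentropic closure $p = A\rho^\gamma$ used below is an illustrative assumption (the text deliberately leaves $e$ and $h$ unspecified), and all numerical values are arbitrary:

```python
import numpy as np

# Assumed illustrative closure p = A * rho**gamma (not specified in the text).
gamma, A = 1.4, 1.0

# Smooth 1D density profile standing in for a weak-shock transition.
x = np.linspace(-1.0, 1.0, 2001)
rho = 1.0 + 0.01 * np.tanh(5.0 * x)      # weak disturbance
p = A * rho**gamma

# Eqn. (5): u = (dp/dx / drho/dx)**(1/2), via finite differences.
u = np.sqrt(np.gradient(p, x) / np.gradient(rho, x))

# Eqn. (6): characteristic (sound) speed U = (dp/drho)**(1/2).
U = np.sqrt(gamma * A * rho**(gamma - 1.0))

# For a weak disturbance the two agree closely at every grid point.
print(np.max(np.abs(u - U)))
```

The agreement simply reflects the chain rule, $(\partial p/\partial x)/(\partial \rho/\partial x) = \partial p/\partial \rho$, which is the sense in which Eqn. (5) reduces to Eqn. (6) as the shock strength vanishes.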
Application to Bremsstrahlung and New Particles Creation
--------------------------------------------------------
The conservation equations, obtained by imposing [*weak*]{} formulations on the integral forms in a manner similar to the treatment of weak-shock problems [@Lax:SIAM], are constructed from the collision diagram shown in Fig. 1 with respect to the axes $x$ and $y$. For simplicity, the (average) one-dimensional velocity $c$ of the particles during a binary encounter is taken along the x-direction. Neglecting time-dependent effects, the stationary equations are $$(\rho c)_x =0, \hspace*{6mm} (\rho c^2 + p)_x =0 .$$ Here $c$ has been spatially and locally homogenized [@Allaire:Homogenization], with $c$ $\in C^1$. Thus, in analogy with the derivation of $u$ (cf. equation (5)), we obtain the average estimate $$c=(\frac{p_x}{\rho_x})^{1/2} .$$ Note that this velocity could be linked to the sound speed via the [*extended thermodynamics*]{} theory \[15-16\], and might be related to the (energy) contributions neglected during [*Bremsstrahlung*]{} (i.e., collisions of particles): these are rather weak compared to photon emission and other channels, but should not be neglected when the strong and weak interactions of particles are considered. More generally, the approach used above could be applied to the dynamic Casimir effect, whose most interesting manifestation is the creation of particles from the vacuum by a moving boundary (here, the moving boundary corresponds to the non-flat shape) \[17-19\]. According to the Casimir effect, vacuum fluctuations can generate a pressure field and hence the acoustic field mentioned above. The creation of particles from the vacuum by nonstationary electric and gravitational fields is well known (see, e.g., \[20,21\]). As noted above, boundary conditions are idealizations of concentrated external fields; it is not surprising, then, that moving boundaries act in the same way as a nonstationary external field. As to the possibility of experimentally observing the photons created by moving mirrors, additional factors such as the imperfectness of the boundary mirrors, the back reaction of the radiated photons upon the mirror, etc., are traced in \[22\].
Results & Discussions
=====================
From Fig. 1 we see that, if we transform to the coordinate system based on the mass center of the two colliding particles or molecules, then, since the particles are assumed to be hard spheres of equal mass, the mass center (located at the contact point, i.e., the crossing of the $x$- and $y$-axes) moves with the speed $c_{av}$ given by the total momentum divided by the total mass [@Reif:StPhys]. The collisions are assumed to be elastic. In fact, $c_{av}$ is equivalent to the speed of a one-dimensional shock front. The energy associated with this velocity is rather weak (in the intermediate regime) and was therefore neglected in conventional Bremsstrahlung (emission of photons or other radiation); but, as mentioned above, it should be considered in the weak and strong interactions. The remaining question is how to detect this kind of energy in the test section for the creation of new particles (say, if we term these particles [*Acoustons*]{}) subjected to collisions.
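The mass-center argument can be checked with a minimal sketch (all numerical values below are arbitrary illustrative choices): for an elastic head-on collision of equal-mass hard spheres, $c_{av}$ equals the total momentum over the total mass and is unchanged by the collision, as is the kinetic energy:

```python
# Illustrative 1D elastic collision of two equal-mass hard spheres.
m = 1.0
v1, v2 = 3.0, -1.0                      # pre-collision velocities (arbitrary)

c_av = (m * v1 + m * v2) / (2.0 * m)    # mass-center speed, before

# Elastic collision, equal masses: the particles exchange velocities.
v1p, v2p = v2, v1

c_av_after = (m * v1p + m * v2p) / (2.0 * m)   # mass-center speed, after
ke_before = 0.5 * m * (v1**2 + v2**2)
ke_after = 0.5 * m * (v1p**2 + v2p**2)

print(c_av, c_av_after, ke_before - ke_after)  # c_av and KE are conserved
```

In the mass-center frame the two particles approach and recede symmetrically, which is why $c_{av}$ plays the role of the 1D shock-front speed in the text.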
Fig. 1. Schematic collision diagram for a binary encounter of two hard-sphere particles, drawn with respect to the $x$- and $y$-axes.
### Acknowledgements {#acknowledgements .unnumbered}
This work is an extension of part of the author’s PhD thesis (dated Dec. 1997) \[16\].
[99]{}

J.H. Jeans, [*The Dynamic Theory of Gases*]{} (Cambridge University Press, 4th ed., 1925) pp. 25, 118.

S. Chapman and T.G. Cowling, [*The Mathematical Theory of Non-Uniform Gases*]{} (Cambridge University Press, 3rd ed., 1970) p. 36.

P.D. Lax, [*Hyperbolic Systems of Conservation Laws and the Mathematical Theory of Shock Waves*]{} (SIAM, 1973) p. 4.

R.J. DiPerna, in [*Nonlinear Partial Differential Equations in Applied Science; Proc. of the US-Japan Seminar, Tokyo, 1982*]{}, eds. H. Fugita, P.D. Lax, and G. Strang, Lecture Notes in Num. Appl. Anal. [**5**]{} (North-Holland, 1983) p. 1.

Z.-Y. Han and X.-Z. Yin, [*Shock Dynamics*]{} (Kluwer Academic, New York, 1993) pp. 76, 238.

I.I. Glass and J.P. Sislian, [*Nonstationary Flows and Shock Waves*]{}, Oxford Engg. Sci. Series 39 (Clarendon Press, London, 1994) p. 37.

K.G. Gureev and V.O. Zolotarev, Zhurnal Tekhnicheskoi Fiziki [**60**]{} (Feb. 1990) 22.

B.-C. Eu, [*Kinetic Theory and Irreversible Thermodynamics*]{} (John Wiley & Sons, New York, 1992) p. 404.

A.A. Vlasov, [*Many-Particle Theory and its Application to Plasma*]{} (Gordon and Breach, New York, 1961) p. 266.

G.B. Whitham, [*Linear and Nonlinear Waves*]{} (John Wiley & Sons, Singapore, 1974) pp. 39, 138, 170, 208.

G. Jennings, Comm. Pure Appl. Math. [**27**]{} (1974) 25.

H.-L. Liu and J.-H. Wang, Math. Comp. [**65**]{} (1997) 1137.

M.S. Ivanov, S.F. Gimelshein, and A.E. Beylich, Phys. Fluids [**7**]{} (1995) 685.

G. Allaire, in [*Homogenization and Porous Media*]{}, ed. U. Hornung (Springer, 1997) p. 225.

I. Müller and T. Ruggeri, [*Extended Thermodynamics*]{} (Springer-Verlag, Berlin, 1993).

K.-H. Chu, PhD Thesis, Hong Kong University of Science and Technology, Hong Kong (PR China), Jan. 1998.

G.T. Moore, J. Math. Phys. [**11**]{} (1970) 2679.

S.A. Fulling and P.C.W. Davies, Proc. Roy. Soc. London A [**348**]{} (1976) 393.

V.V. Dodonov and A.B. Klimov, Phys. Rev. A [**53**]{} (1996) 2664.

A.A. Grib, S.G. Mamayev, and V.M. Mostepanenko, [*Vacuum Quantum Effects in Strong Fields*]{} (Friedmann Laboratory Publishing, St. Petersburg, 1994).

N.D. Birrell and P.C.W. Davies, [*Quantum Fields in Curved Space*]{} (Cambridge University Press, Cambridge, 1982).

M. Bordag, U. Mohideen, and V.M. Mostepanenko, Phys. Rep. [**353**]{} (2001) 1.

F. Reif, [*Fundamentals of Statistical and Thermal Physics*]{} (McGraw-Hill, 1965) p. 516.
|
---
abstract: 'We present an analytic representation of $F_K/F_\pi$ as calculated in three-flavour two-loop chiral perturbation theory, which involves expressing three mass scale sunsets in terms of Kampé de Fériet series. We demonstrate how approximations may be made to obtain relatively compact analytic representations. An illustrative set of fits using lattice data is also presented, which shows good agreement with existing fits.'
author:
- 'B. Ananthanarayan'
- Johan Bijnens
- Samuel Friot
- Shayan Ghosh
title: 'Analytic representation of $F_K/F_\pi$ in two loop chiral perturbation theory'
---
LU TP 17-40\
[**Introduction**]{}- The spectrum of QCD contains as lightest particles the pseudo-scalar octet, and their properties provide a delicate test of its non-perturbative features, including that of chiral symmetry breaking in the sector involving the three lightest quarks. Of these, a special place is accorded to the decay constants of the kaon and pion, namely $F_K$ and $F_\pi$. Their ratio has been investigated on the lattice now, even at quark masses that include the physical values [@Durr:2016ulb]. On the other hand, in chiral perturbation theory (ChPT) [@Gasser:1984gg] at two-loops, expressions have been available for nearly two decades, but involving certain integrals (sunsets) that are evaluated numerically [@Amoros:1999dp]. In this work, we provide an analytic expression for $F_K/F_\pi$, which among other things incorporates double series derived using Mellin-Barnes (MB) representations of the sunsets. This allows us to produce a template for easy fitting to lattice simulations.
[**Methodology**]{}- Three-flavour ChPT expressions for the decay constants of the pseudoscalar mesons at two-loops are given in [@Amoros:1999dp]. These may be decomposed as: $$\begin{aligned}
\frac{F_P}{F_0} = 1 + F_P^{(4)} + \left( F_P \right)^{(6)}_{CT} + \left( F_P \right)^{(6)}_{loop} + \mathcal{O}(p^8) , \label{Eq:FP}\end{aligned}$$ where $P$ is the particle in question. The $\mathcal{O}(p^6)$ contribution can be subdivided as: $$\begin{aligned}
F_{\pi}^4 \left( F_P \right)^{(6)}_{loop} =&\, d_{sunset}^{P} + d_{log \times log}^{P} + d_{log}^{P} + d_{log \times L_i}^{P} \nonumber \\
& + d_{L_i}^{P} + d_{L_i \times L_j}^{P} . \label{Eq:FPloop}\end{aligned}$$ $d_{log \times L_i}^{P}$ collects the terms linear in the $\mathcal{O}(p^4)$ LECs $L_i$ and containing chiral logs; $d_{log}^{P}$ and $d_{log \times log}^{P}$ collect the terms linear and quadratic, respectively, in chiral logarithms without $L_i$; and $d_{L_i}^{P}$ and $d_{L_i \times L_j}^{P}$ collect the terms linear and quadratic, respectively, in the LECs $L_i$. The term $\left( F_P \right)^{(6)}_{CT}$ is composed of the $\mathcal{O}(p^6)$ counterterms, i.e. the LECs $C^r_i$, while $d_{sunset}^{P}$ contains the pure sunset terms.
One determines the ratio $F_K/F_\pi$ using: $$\begin{aligned}
\frac{F_K}{F_{\pi}} &= 1 + \left( \frac{F_K}{F_0} \bigg|_{p^4} - \frac{F_{\pi}}{F_0} \bigg|_{p^4} \right)_{\text{NLO}} \nonumber \\
& + \left( \frac{F_K}{F_0} \bigg|_{p^6} - \frac{F_{\pi}}{F_0} \bigg|_{p^6} - \frac{F_K}{F_0} \bigg|_{p^4} \frac{F_{\pi}}{F_0} \bigg|_{p^4} + \frac{F_{\pi}}{F_0} \bigg|^2_{p^4} \right)_{\text{NNLO}} . \label{Eq:fkfp}\end{aligned}$$
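Eq.(\[Eq:fkfp\]) is the consistent second-order truncation of the ratio of the two expansions of the form Eq.(\[Eq:FP\]). A small numerical sketch (with arbitrary illustrative values standing in for the $\mathcal{O}(p^4)$ and $\mathcal{O}(p^6)$ pieces) shows that the truncated combination differs from the exact ratio only at the next order:

```python
# Write F_K/F_0 = 1 + a + b and F_pi/F_0 = 1 + c + d, with a, c of order
# p^4 and b, d of order p^6. The NNLO-truncated ratio is then
#   1 + (a - c) + (b - d - a*c + c**2),
# matching the NLO and NNLO brackets of the equation above.
# The values below are arbitrary, chosen only for illustration.
a, b = 0.10, 0.010     # kaon pieces at p^4 and p^6
c, d = 0.08, 0.008     # pion pieces at p^4 and p^6

exact = (1 + a + b) / (1 + c + d)
truncated = 1 + (a - c) + (b - d - a * c + c**2)

print(exact, truncated, abs(exact - truncated))  # difference is third order
```

The residual difference scales like the cube of the expansion parameters, i.e. the neglected $\mathcal{O}(p^8)$ contribution.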
The terms $d_{sunset}^{P}$ are not available fully analytically. Their determination is the goal of this work. The sunset integral is defined as: $$\begin{aligned}
& {H}_{\{\alpha,\beta,\gamma\}}^d (m_1^2,m_2^2,m_3^2;p^2) = \nonumber \\
& \frac{(1/i)^2}{(2\pi)^{2d}} \int \frac{d^dq \; d^dr}{[q^2-m_1^2]^{\alpha} [r^2-m_2^2]^{\beta} [(q+r-p)^2-m_3^2]^{\gamma}} .
\label{Eq:SunsetDef}\end{aligned}$$ Aside from the basic scalar integral defined above, tensor integrals in which the momenta $q_{\mu}$ and $q_{\mu} q_{\nu}$ appear in the numerator, and derivatives with respect to the external momentum of both the scalar and tensor integrals contribute to $d_{sunset}^{P}$ [@Amoros:1999dp]. The tensor integrals, as well as all the derivatives, may be reduced into a linear combination of scalar integrals using the methods given in [@Tarasov:1997kx]. Thus only a smaller set of master integrals (MI) is needed.
The full list of sunset integrals contributing to $d_{sunset}^{P}$ can thus all be expressed in terms of a set of four MI (${H}_{\{1,1,1\}}^d$, ${H}_{\{2,1,1\}}^d$, ${H}_{\{1,2,1\}}^d$ and ${H}_{\{1,1,2\}}^d$) and the one-loop tadpole integral. The problem reduces to solving these analytically in the required mass configurations. For the evaluation of $F_K/F_\pi$, seven distinct three mass scale MI need evaluation.
MB theory leads to representations of these MI where each integral consists of at least one double complex plane integral. These double MB integrals are evaluated using the method proposed in [@Aguilar:2008qj] and fully systematized in [@Friot:2011ic] to obtain results in the form of sums of single and double infinite series [@Ananthanarayan:2016pos]-[@ABFG:2018].
[**The analytic representation**]{}- Using Eq.(\[Eq:fkfp\]), we obtain the following representation of $F_K/F_\pi$: $$\begin{aligned}
\frac{F_K}{F_\pi} &= 1 + 4 (4 \pi )^2 L^r_5 \left(\xi _K-\xi _{\pi }\right)
+ \frac{5}{8} \xi_\pi \lambda_\pi - \frac{1}{4} \xi_K \lambda_K
\nonumber \\
& + \left(\frac{1}{8} \xi_\pi - \frac{1}{2} \xi_K \right) \lambda_\eta
+ \xi_K^2 F_F\left[ \frac{m_\pi^2}{m_K^2} \right] + \hat K_1^r \lambda_\pi^2
\nonumber \\
& + \hat K_2^r \lambda_\pi\lambda_K + \hat K_3^r \lambda_\pi\lambda_\eta
+ \hat K_4^r \lambda_K^2 + \hat K_5^r \lambda_K\lambda_\eta
\nonumber \\
& + \hat K_6^r \lambda_\eta^2 \xi_K^2 + \hat C_1 \lambda_\pi
+ \hat C_2 \lambda_K + \hat C_3 \lambda_\eta + \hat C_4 , \label{Eq:fkfpLattice}\end{aligned}$$ where $\xi_\pi=m_\pi^2/(16\pi^2 F_\pi^2)$, $\xi_K= m_K^2/(16\pi^2 F_\pi^2)$, $\lambda_i = \log(m_i^2/\mu^2)$, and: $$\begin{aligned}
\hat{K}^r_1 =\,& \frac{11}{24} \xi_\pi \xi_K - \frac{131}{192} \xi_\pi^2,
&\hat{K}^r_2 =\,& -\frac{41}{96} \xi_\pi \xi_K - \frac{3}{32} \xi_\pi^2, \nonumber \\
\hat{K}^r_3 =\,& \frac{13}{24} \xi_\pi \xi_K + \frac{59}{96} \xi_\pi^2 ,
&\hat{K}^r_4 =\,& \frac{17}{36} \xi_K^2 + \frac{7}{144} \xi_\pi \xi_K,
\nonumber \\
\hat{K}^r_5 =\,& -\frac{163}{144} \xi_K^2 - \frac{67}{288} \xi_\pi \xi_K + \frac{3}{32} \xi_\pi^2 , \hspace*{-2cm} \nonumber \\
\hat{K}^r_6 =\,& \frac{241}{288} \xi_K^2 - \frac{13}{72} \xi_\pi \xi_K - \frac{61}{192} \xi_\pi^2 . \hspace*{-2cm}\end{aligned}$$ $$\begin{aligned}
& \hat{C}^r_1 = - \left(\frac{7}{9} + \frac{11}{2} (4 \pi )^2 L^r_{5} \right) \xi_\pi \xi_K\nonumber \\
& -\left(\frac{113}{72} + (4 \pi )^2 (4 L^r_{1} + 10 L^r_{2} + \frac{13}{2} L^r_{3} - \frac{21}{2} L^r_{5}) \right) \xi _\pi^2 , \nonumber \\[2mm]
& \hat{C}^r_2 = \left(\frac{209}{144} + 3 (4\pi)^2 L^r_{5} \right) \xi_\pi \xi_K \nonumber \\
& + \left(\frac{53}{96} + (4 \pi )^2 (4 L^r_{1} + 10 L^r_{2} + 5 L^r_{3} - 5 L^r_{5}) \right) \xi _K^2 , \nonumber \\[2mm]
& \hat{C}^r_3 = \left( \frac{13}{18} + (4 \pi )^2 \left( \frac{8}{3} L^r_{3} - \frac{2}{3} L^r_{5} - 16 L^r_{7} - 8 L^r_{8} \right) \right) \xi_K^2 \nonumber \\
& - \left( \frac{4}{9} + (4\pi)^2 \left( \frac{4}{3} L^r_{3} + \frac{25}{6} L^r_{5} - 32 L^r_{7} - 16 L^r_{8} \right) \right) \xi _\pi \xi_K \nonumber \\
& + \left( \frac{19}{288} + (4 \pi)^2 \left( \frac{1}{6} L^r_{3} + \frac{11}{6} L^r_{5} - 16 L^r_{7} - 8 L^r_{8} \right) \right) \xi_\pi^2 , \nonumber \\[2mm]
& \hat{C}^r_4 = (4 \pi)^2 (\xi_K - \xi_\pi) \nonumber \\
& \times \bigg\{ 8 (4 \pi )^2 \bigg( 2 (C^r_{14}+C^r_{15}) \xi _K + (C^r_{15}+2 C^r_{17}) \xi_\pi \bigg) \nonumber \\
& + \bigg( 8 (4 \pi )^2 L^r_{5} (8 L^r_{4}+3 L^r_{5}-16 L^r_{6}-8 L^r_{8})- 2 L^r_{1} \nonumber \\
& \quad - L^r_{2} - \frac{1}{18} L^r_{3} + \frac{4}{3} L^r_{5} - 16 L^r_{7} - 8 L^r_{8} \bigg) \xi_K \nonumber \\
& + \bigg( 8 (4 \pi )^2 L^r_{5} (4 L^r_{4} + 5 L^r_{5} - 8 L^r_{6} - 8 L^r_{8}) - 2 L^r_{1} \nonumber \\
& \quad - L^r_{2} - \frac{5}{18} L^r_{3} - \frac{4}{3} L^r_{5} + 16 L^r_{7} + 8 L^r_{8} \bigg) \xi _{\pi } \bigg\}.\end{aligned}$$
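To give a sense of scale, the NLO part of this representation can be evaluated numerically. The meson masses, $F_\pi$, and the value of $L_5^r$ at $\mu = 0.77$ GeV used below are assumed illustrative inputs (typical physical-point and fitted values), not values advocated in this work:

```python
import math

# NLO terms of the representation above:
#   1 + 4(4 pi)^2 L5 (xi_K - xi_pi) + (5/8) xi_pi l_pi - (1/4) xi_K l_K
#     + ((1/8) xi_pi - (1/2) xi_K) l_eta
# Inputs in GeV; L5r is a typical fitted value at mu = 0.77 GeV (assumed).
m_pi, m_K, m_eta, F_pi, mu = 0.1396, 0.4957, 0.5479, 0.0922, 0.77
L5r = 1.2e-3

xi_pi = m_pi**2 / (16 * math.pi**2 * F_pi**2)
xi_K = m_K**2 / (16 * math.pi**2 * F_pi**2)
lam = lambda m: math.log(m**2 / mu**2)   # chiral logarithm

ratio_nlo = (1
             + 4 * (4 * math.pi)**2 * L5r * (xi_K - xi_pi)
             + (5 / 8) * xi_pi * lam(m_pi)
             - (1 / 4) * xi_K * lam(m_K)
             + ((1 / 8) * xi_pi - (1 / 2) * xi_K) * lam(m_eta))

print(ratio_nlo)   # close to the observed F_K/F_pi ~ 1.19
```

Already at NLO the result sits near the physical ratio; the $\hat K_i$, $\hat C_i$, and $F_F$ pieces supply the NNLO corrections.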
$F_F$ consists of the terms arising from the pure sunset contributions. The split between the $\hat K_i$ terms and $F_F$ is not unique: one convenient decomposition, that takes into account the freedom to distribute the chiral logs while keeping the final result unchanged, is: $$\begin{aligned}
& F_F = \frac{m_\pi^6}{m_K^6} \left(\frac{49}{48}+\frac{\pi ^2}{32}\right) + \frac{m_\pi^4}{m_K^4} \left(\frac{25871}{6912}+\frac{919 \pi^2}{2592}\right) \nonumber \\
& -\frac{m_\pi^2}{m_K^2} \left(\frac{9875}{864}+\frac{757 \pi ^2}{1296}\right) + \left(\frac{39233}{6912}+\frac{437 \pi ^2}{1296}\right) \nonumber \\
& +\frac{m_K^2}{m_\pi^2} \left(\frac{3}{2}-\frac{\pi ^2}{12}\right) -\frac{3}{32} \log ^2\left[\frac{m_\pi^2}{m_K^2}\right] - \frac{9}{16} \log \left[\frac{m_\pi^2}{m_K^2}\right] \nonumber \\
& - \frac{1}{8} \frac{m_K^2}{m_\pi^2} \log ^2\left[\frac{4}{3}-\frac{m_\pi^2}{3 m_K^2}\right] + \frac{5}{64} \frac{m_\pi^6}{m_K^6} \log ^2\left[\frac{4 m_K^2}{3 m_\pi^2}-\frac{1}{3}\right] \nonumber \\
& + \frac{(16\pi^2)^2}{m_K^4} \left( d^K_{K \pi \pi} + d^K_{K \eta \eta} + d^K_{K \pi \eta} - d^\pi_{\pi K K} - d^\pi_{\pi \eta \eta} - d^\pi_{K K \eta} \right)
\label{Eq:ExactFf}\end{aligned}$$ where: $$\begin{aligned}
d^K_{K \pi \pi} &= -\left(\frac{27}{64} \frac{m_{\pi}^4}{m_{K}^2} + \frac{1}{64}m_{K}^2 + \frac{9}{16} m_{\pi}^2 \right) \overline{H}^K_{K \pi \pi} \nonumber \\
& + \left(\frac{1}{16} m_{K}^4 + \frac{1}{8} m_{K}^2 m_{\pi}^2 + \frac{9}{16} m_{\pi}^4 \right) \overline{H}^K_{2K \pi \pi},\end{aligned}$$ $$\begin{aligned}
d^K_{K \eta \eta} &= - \left( \frac{15}{64} \frac{m_{\pi}^4}{m_{K}^2} + \frac{1189}{576} m_{K}^2 - \frac{65}{48} m_{\pi}^2 \right) \overline{H}^K_{K \eta \eta} \nonumber \\
& + \left(\frac{143}{48} m_{K}^4 - \frac{139}{72} m_{K}^2 m_{\pi}^2 + \frac{5}{16} m_{\pi}^4 \right) \overline{H}^K_{2K \eta \eta},\end{aligned}$$ $$\begin{aligned}
d^K_{K \pi \eta} &= \left( - \frac{7}{32} \frac{m_{\pi}^4}{m_{K}^2} + \frac{5}{96} m_{K}^2 + \frac{7}{6} m_{\pi}^2 \right) \overline{H}^{K}_{K \pi \eta} \nonumber \\
& + \left( \frac{3}{8} \frac{m_{\pi}^6}{m_{K}^2} + \frac{1}{4} m_{K}^2 m_{\pi}^2 - \frac{15}{8} m_{\pi}^4 \right) \overline{H}^{K}_{K 2\pi \eta} \nonumber \\
& - \left( \frac{11}{18} m_{K}^4 - \frac{1}{12} \frac{m_{\pi}^6}{m_{K}^2} + \frac{41}{72} m_{K}^2 m_{\pi}^2 + \frac{11}{72}m_{\pi}^4 \right) \overline{H}^{K}_{K \pi 2\eta} \nonumber \\
& - \left( \frac{1}{2} m_{K}^4 \right) \overline{H}^{K}_{2K \pi \eta},\end{aligned}$$ $$\begin{aligned}
{d}^{\pi}_{\pi K K} & = - \left(\frac{9}{16} \frac{m_{K}^4}{m_{\pi}^2} + \frac{3}{4} m_{K}^2 + \frac{1}{48} m_{\pi}^2 \right) \overline{H}^{\pi}_{\pi K K} \nonumber \\
& + \left( \frac{3}{4} m_{K}^4 + \frac{1}{6} m_{K}^2 m_{\pi}^2 +\frac{1}{12} m_{\pi}^4 \right) \overline{H}^{\pi}_{2\pi K K},\end{aligned}$$ $$\begin{aligned}
{d}^{\pi}_{\pi \eta \eta} &= \left( -\frac{1}{36} m_{\pi}^2 \right) \overline{H}^{\pi}_{\pi \eta \eta}+\left( \frac{1}{36} m_{\pi}^4 \right) \overline{H}^{\pi}_{2\pi \eta \eta},\end{aligned}$$ and $$\begin{aligned}
{d}^{\pi}_{K K \eta} &= \left( \frac{15}{16} \frac{m_{K}^4}{m_{\pi}^2} - \frac{13}{36} m_{K}^2 + \frac{13}{144} m_{\pi}^2 \right) \overline{H}^{\pi}_{K K \eta} \nonumber \\
& + \left( \frac{91}{108} m_{K}^4 - \frac{m_{K}^6}{m_{\pi}^2} - \frac{5}{27} m_{K}^2 m_{\pi}^2 + \frac{m_{\pi}^4 }{108}\right) \overline{H}^{\pi}_{K K 2\eta} \nonumber \\
& + \left( \frac{1}{2} m_{K}^4 - 2 \frac{m_{K}^6}{m_{\pi}^2} - \frac{1}{6} m_{K}^2 m_{\pi}^2 \right) \overline{H}^{\pi}_{2K K \eta}.\end{aligned}$$
The MI are denoted by $\overline{H}^{S}_{aP \, bQ \, cR} \equiv \overline{H}^d_{\{a,b,c\}}(m_P^2,m_Q^2,m_R^2;p^2=m_S^2 )$, the “bar” indicating that the chiral subtraction prefactor $\left( \mu^2 \frac{e^{\gamma_E-1}}{4\pi} \right)^{4-d}$ has been taken into account and that the chiral logarithms have been extracted and included in the log terms of Eq.(\[Eq:FPloop\]). Expressions for the two mass scale MI are given in [@Ananthanarayan:2017yhz], and those for the three mass scale MI are given below in terms of generalized hypergeometric (${}_pF_q$) and Kampé de Fériet (KdF) series. The three mass scale MI not explicitly presented here can be derived from the following by differentiation w.r.t. the appropriate squared propagator mass. The validity of Eqs.(\[Eq:Hkpe\])-(\[Eq:Hekk\]) is dictated by the region of convergence of the KdF and ${}_pF_q$ series, which is given by $(m_\pi<m_\eta) \wedge (m_\pi+m_\eta<2m_K)$ and shown in Fig. \[Fig:Convergence\].
![Region of convergence of Eqs.(\[Eq:Hkpe\])-(\[Eq:Hekk\]) (blue region). The red dot marks the physical values of the meson masses.[]{data-label="Fig:Convergence"}](convergence.eps){width="25.00000%"}
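As a quick sanity check (with assumed physical-point mass values, in GeV), one can verify numerically that the physical masses satisfy the convergence condition $(m_\pi<m_\eta) \wedge (m_\pi+m_\eta<2m_K)$:

```python
# Assumed physical-point meson masses in GeV (illustrative inputs).
m_pi, m_K, m_eta = 0.1396, 0.4957, 0.5479

# Convergence condition of the KdF series quoted in the text.
in_region = (m_pi < m_eta) and (m_pi + m_eta < 2 * m_K)
print(in_region)   # the physical point lies inside the convergence region
```

This is consistent with the red dot lying inside the blue region of Fig. \[Fig:Convergence\].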
$$\begin{aligned}
\label{Eq:Hkpe}
& \overline{H}^{K}_{K \pi \eta} = \frac{m_{K}^2}{512\pi ^4} \Bigg\{ - \frac{7}{4}\left(\frac{m_{\eta}^4}{m_{K}^4}+\frac{m_{\pi}^4}{m_{K}^4}\right) -\frac{m_{\pi}^2}{m_{K}^2} \log\left[\frac{m_{\pi}^2}{m_{K}^2}\right]^2 \nonumber \\
& + \left(1-\frac{\pi^2}{2}\right)\left(\frac{m_{\eta}^2}{m_{K}^2}+\frac{m_{\pi}^2}{m_{K}^2}\right) +\frac{m_{\pi}^4}{2 m_{K}^4} \log\left[\frac{m_{\pi}^2}{m_{K}^2}\right] -\frac{1}{4} \nonumber \\
& +\frac{m_{\pi}^2}{m_{K}^2} \frac{m_{\eta}^2}{m_{K}^2} \bigg( 7+\frac{2 \pi^2}{3}-2 \log\left[\frac{m_{\eta}^2}{m_{K}^2}\right]-2 \log\left[\frac{m_{\pi}^2}{m_{K}^2}\right] \nonumber \\
& +\log\left[\frac{m_{\eta}^2}{m_{K}^2}\right] \log\left[\frac{m_{\pi}^2}{m_{K}^2}\right] \bigg) + \frac{m_{\eta}^4}{2 m_{K}^4} \log\left[\frac{m_{\eta}^2}{m_{K}^2}\right] + \frac{5 \pi^2}{6} \nonumber \\
& -\frac{m_{\eta}^2}{m_{K}^2} \log\left[\frac{m_{\eta}^2}{m_{K}^2}\right]^2 +\frac{8 \pi }{3}\left(\frac{m_{\eta}^2}{m_{K}^2}\right)^{3/2}
{}_2F_1 \bigg[ \begin{array}{c}
\frac{1}{2},-\frac{1}{2} \\
\frac{5}{2} \\
\end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] \nonumber \\
&
+\frac{1}{36}\frac{m_{\eta}^6}{m_{K}^6}
{}_3F_2 \bigg[ \begin{array}{c}
1,1,2 \\
\frac{5}{2},4 \\
\end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg]
+ \frac{1}{36} \frac{m_{\pi}^6}{m_{K}^6}
{}_3F_2 \bigg[ \begin{array}{c}
1,1,2 \\
\frac{5}{2},4 \\
\end{array} \bigg| \frac{m_\pi^2}{4m_K^2} \bigg] \nonumber \\
& + \frac{1}{6} \frac{m_{\eta}^4}{m_K^4} \frac{m_{\pi}^2}{m_K^2}
\left( 2\gamma_E - 1 + \log \left[\frac{m_{\pi}^2 m_{\eta}^2}{16 m_K^4}\right] \right) {}_2F_1 \bigg[ \begin{array}{c}
1,1 \\
\frac{5}{2} \\
\end{array} \bigg| \frac{m_\pi^2}{4m_K^2} \bigg] \nonumber \\
& + \frac{\sqrt{\pi}}{8} \frac{m_{\pi}^2}{m_{K}^2} \frac{m_{\eta}^4}{m_{K}^4} \left( \log \left[\frac{m_{\eta}^2}{4 m_{K}^2}\right]+\log \left[\frac{m_{\pi}^2}{4 m_{K}^2}\right] + \frac{\partial}{\partial \alpha} \right) \cdot \nonumber\\
& \Bigg( \frac{\Gamma(1+2\alpha) \Gamma(2+2\alpha) \Gamma(3+2\alpha)}{\Gamma(1+\alpha) \Gamma^2(2+\alpha) \Gamma(3+\alpha) \Gamma(\frac{5}{2}+2\alpha)} \nonumber\\
& F^{3:1}_{1:2} \bigg[ \begin{array}{c}
1+2\alpha, 2+2\alpha, 3+2\alpha: 1,1 \\
\frac{5}{2}+2\alpha: 2+\alpha, 1+\alpha; 3+\alpha, 2+\alpha \\
\end{array} \bigg| \frac{m_\eta^2}{4m_K^2} , \frac{m_\pi^2}{4m_K^2} \bigg] \Bigg) \Bigg|_{\alpha=0} \nonumber \\
& - \frac{m_K}{m_\eta} \frac{m_{\pi}^4}{m_{K}^4} \left( \log \left[\frac{m_{\pi}^2}{m_{\eta}^2} \right] + \frac{\partial}{\partial \alpha} \right) \cdot \Bigg( \frac{\Gamma(\frac{1}{2}+\alpha) \Gamma(\frac{3}{2}+\alpha)}{\Gamma(2+\alpha) \Gamma(3+\alpha)} \nonumber \\
& F^{0:3}_{2:0} \bigg[ \begin{array}{c}
- : \frac{1}{2}+\alpha,-\frac{1}{2}; \frac{3}{2}+\alpha,\frac{1}{2}; 1,\frac{3}{2} \\
2+\alpha, 3+\alpha : - \\
\end{array} \bigg| \frac{m_\pi^2}{m_\eta^2}, \frac{m_\pi^2}{4m_K^2} \bigg] \Bigg) \Bigg|_{\alpha=0} \nonumber \\
& + \frac{m_\pi^2}{m_K^2} \frac{m_\eta}{m_K} \left( \log \left[\frac{m_{\pi}^2}{m_{\eta}^2} \right] + \frac{\partial}{\partial \alpha} \right) \cdot \nonumber\\
& \Bigg( \frac{\pi^2 }{\Gamma(\frac{1}{2}-\alpha) \Gamma(\frac{3}{2}-\alpha) \Gamma(1+\alpha) \Gamma(2+\alpha)} \nonumber \\
& F^{3:1}_{1:2} \bigg[ \begin{array}{c}
-\frac{1}{2},\frac{1}{2},\frac{3}{2}:1,1 \\
1: \frac{1}{2}-\alpha,1+\alpha; \frac{3}{2}-\alpha, 2+\alpha \\
\end{array} \bigg| \frac{m_\eta^2}{4m_K^2}, \frac{m_\pi^2}{4m_K^2} \bigg] \Bigg) \Bigg|_{\alpha=0} \nonumber \\
& + \frac{\sqrt{\pi}}{16} \frac{m_\eta^2}{m_K^2} \frac{m_\pi^4}{m_K^4} \frac{\partial}{\partial \alpha} \cdot \Bigg( \frac{\Gamma(1+2\alpha) \Gamma(2+\alpha) \Gamma(3+\alpha)}{\Gamma(\frac{5}{2}+2\alpha)} \nonumber \\
& \quad {}_4F_3
\bigg[ \begin{array}{c}
1, 1+2\alpha, 2+\alpha, 3+\alpha \\
2,3,\frac{5}{2}+2\alpha \\
\end{array} \bigg| \frac{m_\pi^2}{4m_K^2} \bigg] \Bigg) \Bigg|_{\alpha=0} \Bigg\},\end{aligned}$$
$$\begin{aligned}
& \overline{H}^{K}_{2K \pi \eta} = \frac{1}{512\pi ^4} \Bigg\{ -\frac{m_{\eta}^2}{m_{K}^2} \bigg( 1 + \frac{\pi^2}{3} + \frac{1}{2} \log ^2 \left[\frac{m_{K}^2}{m_{\eta}^2}\right] \nonumber \\
& + \log \left[\frac{m_{K}^2}{m_{\eta}^2}\right] + \text{Li}_2 \left[ 1-\frac{m_{\pi}^2}{m_{\eta}^2} \right] \bigg) -\frac{m_{\pi}^2}{m_{K}^2} \bigg( 1 + \frac{\pi^2}{3} \nonumber \\
& - \log \left[\frac{m_{\pi}^2}{m_{K}^2}\right] - \log \left[\frac{m_{K}^2}{m_{\eta}^2}\right] \log \left[\frac{m_{\pi}^2}{m_{K}^2}\right] - \frac{1}{2} \log^2 \left[ \frac{m_{K}^2}{m_{\eta}^2} \right] \nonumber \\
& - \text{Li}_2 \left[1-\frac{m_{\pi}^2}{m_{\eta}^2}\right] \bigg) + \frac{2 \pi}{3} \frac{m_{\eta}^3}{m_{K}^3}
{}_2F_1 \bigg[ \begin{array}{c}
\frac{1}{2},\frac{1}{2} \\
\frac{5}{2} \\
\end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] \nonumber \\
& - \frac{m_{\pi}^4}{4 m_{K}^4}
{}_3F_2 \bigg[ \begin{array}{c}
1,1,1 \\
\frac{3}{2},3 \\
\end{array} \bigg| \frac{m_\pi^2}{4m_K^2} \bigg]
- \frac{m_{\eta}^4}{4 m_{K}^4}
{}_3F_2 \bigg[ \begin{array}{c}
1,1,1 \\
\frac{3}{2},3 \\
\end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] \nonumber \\
& -\frac{\sqrt{\pi}}{4} \frac{m_\eta^2}{m_K^2} \frac{m_\pi^2}{m_K^2} \left( \log \left[ \frac{m_\pi^2}{4m_K^2} \right] + \log \left[ \frac{m_\eta^2}{4m_K^2} \right] + \frac{\partial}{\partial \alpha} \right) \cdot \nonumber \\
& \Bigg( \frac{\Gamma^2(1+2\alpha) \Gamma(2+2\alpha)}{\Gamma(\frac{3}{2}+2\alpha) \Gamma^2(1+\alpha) \Gamma^2(2+\alpha)} \nonumber \\
& F^{3:1}_{1:2} \bigg[ \begin{array}{c}
1+2\alpha,1+2\alpha,2+2\alpha:1,1 \\
\frac{3}{2}+2\alpha: 1+\alpha,1+\alpha; 2+\alpha, 2+\alpha \\
\end{array} \bigg| \frac{m_\eta^2}{4m_K^2}, \frac{m_\pi^2}{4m_K^2} \bigg] \Bigg) \Bigg|_{\alpha=0} \nonumber \\
& + \frac{5 \pi^2}{6} -1 + \frac{\pi^2}{4} \frac{m_\eta}{m_K} \frac{m_\pi^2}{m_K^2} \left( \log \left[ \frac{m_\pi^2}{m_\eta^2} \right] + \frac{\partial}{\partial \alpha} \right) \cdot \nonumber \\
& \Bigg( \frac{1}{\Gamma(\frac{1}{2}-\alpha) \Gamma(\frac{3}{2}-\alpha) \Gamma(1+\alpha) \Gamma(2+\alpha)} \nonumber \\
& F^{3:1}_{1:2} \bigg[ \begin{array}{c}
\frac{1}{2},\frac{1}{2},\frac{3}{2}:1,1 \\
1: \frac{1}{2}-\alpha,1+\alpha; \frac{3}{2}-\alpha, 2+\alpha \\
\end{array} \bigg| \frac{m_\eta^2}{4m_K^2}, \frac{m_\pi^2}{4m_K^2} \bigg] \Bigg) \Bigg|_{\alpha=0} \nonumber \\
-& \frac{1}{4} \frac{m_\pi}{m_\eta} \frac{m_\pi^3}{m_K^3} \left( \log \left[ \frac{m_\pi^2}{m_\eta^2} \right] + \frac{\partial}{\partial \alpha} \right) \cdot \bigg( \frac{\Gamma(\frac{1}{2}+\alpha)\Gamma(\frac{3}{2}+\alpha)}{\Gamma(2+\alpha)\Gamma(3+\alpha)} \nonumber \\
& F^{0:3}_{2:0} \bigg[ \begin{array}{c}
-: 1,\frac{1}{2}; \frac{1}{2}+\alpha,\frac{1}{2}; \frac{3}{2}+\alpha, \frac{3}{2} \\
2+\alpha, 3+\alpha: - \\
\end{array} \bigg| \frac{m_\pi^2}{m_\eta^2}, \frac{m_\pi^2}{4m_K^2} \bigg] \bigg) \Bigg|_{\alpha=0} \Bigg\},
\label{Eq:H2kpe}\end{aligned}$$
and $$\begin{aligned}
& \overline{H}^{\pi}_{K K \eta} = \frac{m_\eta^2}{512 \pi ^4} \Bigg\{ \frac{\pi ^2}{6}-5 + 4 \log \left[\frac{m_\eta^2}{m_K^2}\right] - \log ^2 \left[\frac{m_\eta^2}{m_K^2}\right] \nonumber \\
& + \frac{m_K^2}{m_\eta^2} \left(6 + \frac{\pi ^2}{3}\right) - \frac{1}{18} \frac{m_\pi^2}{m_K^2} \frac{m_\pi^2}{m_\eta^2} {}_3F_2 \bigg[ \begin{array}{c}
1,1,2 \\
\frac{5}{2},4 \\
\end{array} \bigg| \frac{m_\pi^2}{4m_K^2} \bigg] \nonumber \\
& + \frac{m_\pi^2}{m_\eta^2} \left( \log \left[ \frac{m_K^2}{m_\pi^2} \right] + \frac{5}{4} \right) - \frac{\sqrt{\pi}}{8} \left( \log \left[ \frac{m_\eta^2}{4 m_K^2} \right] + \frac{\partial}{\partial \alpha} \right)\cdot \nonumber \\
& \Bigg(
\frac{m_\pi^2}{m_K^2} \frac{\Gamma(3+\alpha)}{\Gamma(\frac{5}{2}+\alpha)} F^{3:1}_{1:2} \bigg[ \begin{array}{c}
1+\alpha,2+\alpha,3+\alpha:1,1 \\
\frac{5}{2}+\alpha:2,1+\alpha;3,2+\alpha \\
\end{array} \bigg| \frac{m_\pi^2}{4m_K^2}, \frac{m_\eta^2}{4m_K^2} \bigg] \nonumber \\
& + \frac{2 m_\eta^2}{m_K^2} \frac{\Gamma(1+\alpha)}{\Gamma(\frac{5}{2}+\alpha)} {}_2F_1 \bigg[ \begin{array}{c}
1,1+\alpha \\
\frac{5}{2}+\alpha \\
\end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] \Bigg) \Bigg|_{\alpha=0} \Bigg\}.
\label{Eq:Hekk}\end{aligned}$$
One may obtain simplified representations for $F_F$ by truncating the series at the desired precision and expanding around $ \rho= \frac{m_\pi^2}{m_K^2} = 0$. For illustrative purposes, we present one such representation, truncated such that the error between the exact and truncated values is $<1\%$ for most of the sets of masses used in the lattice study of [@Durr:2016ulb]. We get: $$\begin{aligned}
F_F & ( \rho ) = a_1 + \left( a_2 + a_3 \log[\rho] + a_4 \log^2[\rho] \right) \rho \nonumber \\
& + \left( a_5 + a_6 \log[\rho] + a_7 \log^2[\rho] \right) \rho^2 \nonumber\\
& + \left( a_8 + a_9 \log[\rho] + a_{10} \log^2[\rho] \right) \rho^3 \nonumber \\
& + \left( a_{11} + a_{12} \log[\rho] + a_{13} \log^2[\rho] \right) \rho^4 + \mathcal{O} \left( \rho^5 \right) \label{Eq:ApproxFF}\end{aligned}$$ where: $$\begin{aligned}
a_1 &= -\frac{6337}{5184} \left(\text{Li}_2\left[ \frac{3}{4} \right]+\log (4) \log \left[\frac{4}{3}\right] \right) + \frac{41 \pi ^2}{192} \nonumber \\
& -\frac{11 \sqrt{2} \pi }{27} +\frac{85957107031}{27662342400}-\frac{119 \pi }{216 \sqrt{2}} \nonumber \\
& +\frac{62591}{612360} \log [3] +\frac{43006343}{13471920} \log \left[\frac{4}{3}\right] \nonumber \\
& +\left(\frac{8 \sqrt{2}}{9}-\frac{41 \pi }{48}-\frac{5 \log [3]}{24 \sqrt{2}}\right) \csc ^{-1}\left[\sqrt{3}\right] \nonumber \\
& +\frac{41}{48} \csc ^{-1}\left[\sqrt{3}\right]^2 + \frac{5}{1152} \log ^2 \left[\frac{4}{3}\right] , \nonumber \\[2mm]
a_2 &= \frac{5821}{2592} \left(\text{Li}_2\left[\frac{3}{4}\right]+\log [4] \log \left[\frac{4}{3}\right]\right) - \frac{25 \pi ^2}{96} \nonumber \\
& -\frac{7269419973251}{1120324867200}+\frac{145 \pi }{72 \sqrt{2}}+\frac{38693 \pi }{25920 \sqrt{3}}+\frac{82 \gamma }{405} \nonumber \\
& -\frac{121}{576} \log ^2\left[\frac{4}{3}\right]-\left(\frac{6035437}{9797760}+\frac{13 \pi }{864 \sqrt{3}}\right) \log [3] \nonumber \\
& -\left(\frac{468002719}{161663040}+\frac{13 \pi }{576 \sqrt{3}}\right) \log \left[\frac{4}{3}\right] -\frac{29}{324} \psi\left[\frac{5}{2}\right] \nonumber \\
& + \left(\frac{463 \log [3]}{384 \sqrt{2}} + \frac{\log \left[\frac{4}{3}\right]}{2 \sqrt{2}} - \frac{11 \pi }{48}-\frac{13 \gamma }{18 \sqrt{2}}-\frac{15875}{3456 \sqrt{2}}\right) \nonumber \\
& \quad \times \csc ^{-1}\left[\sqrt{3}\right] + \frac{11}{48} \csc ^{-1}\left[\sqrt{3}\right]^2 , \nonumber \\[2mm]
a_3 &= \frac{803}{810}+\frac{13 \pi }{1728 \sqrt{3}}+\frac{7}{48} \log \left[\frac{4}{3}\right] - \frac{1}{2 \sqrt{2}} \csc ^{-1}\left[\sqrt{3}\right] , \nonumber \\[2mm]
a_4 &= -\frac{11}{24} , \quad a_7 = \frac{337}{384} , \quad a_{10} = -\frac{9}{64} , \quad a_{13} = -\frac{27}{128}
\nonumber \\[2mm]
a_5 &= \frac{47}{128} \log ^2\left[\frac{4}{3}\right] -\frac{845}{648} \left(\text{Li}_2\left[\frac{3}{4}\right]+\log [4] \log \left[\frac{4}{3}\right] \right) \nonumber \\
& -\frac{1301 \sqrt{3} \pi }{512}-\frac{66191 \gamma }{12960}+\frac{1576413731881}{3585039575040} + \frac{5 \pi ^2}{18} \nonumber \\
& -\frac{145 \pi }{144 \sqrt{2}}+\frac{3572063 \pi }{663552 \sqrt{3}} + \frac{59}{48} \csc ^{-1}\left[\sqrt{3}\right]^2 \nonumber \\
& + \left(\frac{744674317}{313528320}+\frac{176189 \pi }{55296 \sqrt{3}}\right) \log [3] +\frac{35}{144} \psi \left[\frac{5}{2}\right] \nonumber \\
& + \bigg(\frac{97621}{55296 \sqrt{2}} -\frac{59 \pi }{48} + \frac{3167 \gamma }{288 \sqrt{2}}-\frac{19589 \log [3]}{4096 \sqrt{2}} \nonumber \\
& \quad -\frac{115}{48 \sqrt{2}} \bigg) \log \left[\frac{4}{3}\right] \csc^{-1} \left[\sqrt{3}\right] \nonumber \\
& + \left(\frac{4312709021}{1293304320}+\frac{176189 \pi }{36864 \sqrt{3}}\right) \log \left[\frac{4}{3}\right] , \nonumber \\[2mm]
a_6 &= \frac{17003}{8640}-\frac{176189 \pi }{110592 \sqrt{3}}-\frac{155}{192} \log \left[\frac{4}{3}\right] \nonumber \\ & + \frac{115}{48 \sqrt{2}} \csc ^{-1}\left[\sqrt{3}\right] , \nonumber \\[2mm]
a_8 &= \frac{265}{864} \left(\text{Li}_2\left[ \frac{3}{4}\right] + \log [4] \log \left[\frac{4}{3}\right] \right) +\frac{199393 \gamma }{138240} \nonumber \\
& +\frac{25001310633017}{9481096396800}+\frac{4753 \pi }{13824 \sqrt{2}}+\frac{20910563 \pi }{26542080 \sqrt{3}} \nonumber \\
& -\frac{29 \pi ^2}{288}-\left(\frac{101313035}{143327232}+\frac{804611 \pi }{442368 \sqrt{3}}\right) \log [3] \nonumber \\
& -\left(\frac{129118553}{117573120}+\frac{804611 \pi }{294912 \sqrt{3}}\right) \log \left[\frac{4}{3}\right] - \frac{119}{288} \psi\left[\frac{5}{2}\right] \nonumber \\
& -\frac{5}{16} \csc ^{-1}\left[\sqrt{3}\right]^2 + \csc ^{-1}\left[\sqrt{3}\right] \bigg( \frac{823}{3072 \sqrt{2}} \log \left[\frac{4}{3}\right] \nonumber \\
& + \frac{5 \pi }{16} -\frac{19319 \gamma }{9216 \sqrt{2}}-\frac{5341499}{3538944 \sqrt{2}}+\frac{104075 \log [3]}{196608 \sqrt{2}} \bigg) , \nonumber \\[2mm]
a_9 &= -\frac{8327}{138240}+\frac{804611 \pi }{884736 \sqrt{3}}-\frac{1}{96} \log \left[\frac{4}{3}\right] \nonumber \\
& -\frac{823}{3072 \sqrt{2}} \csc ^{-1} \left[\sqrt{3}\right] , \nonumber \\[2mm]
a_{11} &= -\frac{5}{192} \left(\text{Li}_2\left[\frac{3}{4}\right]+\log [4] \log \left[\frac{4}{3}\right] \right)-\frac{25 \pi ^2}{192} \nonumber \\
& -\frac{1310311 \gamma }{6635520}-\frac{10567863311827}{10113169489920} +\frac{4453 \sqrt{3} \pi }{65536} \nonumber \\
& +\left(\frac{12616533707}{45864714240}+\frac{1674775 \pi }{7077888 \sqrt{3}}\right) \log [3] \nonumber \\
& +\left(\frac{17720699}{46448640}+\frac{1674775 \pi }{4718592 \sqrt{3}}\right) \log \left[\frac{4}{3}\right] \nonumber \\
& -\frac{13905571 \pi }{84934656 \sqrt{3}} -\frac{2135 \pi }{73728 \sqrt{2}} + \frac{97}{648} \psi \left[ \frac{5}{2}\right] \nonumber \\
& + \frac{1}{\sqrt{2}} \bigg(\frac{605645}{18874368} -\frac{391 \gamma }{49152} - \frac{121093 \log [3]}{4194304} \nonumber \\
& \quad -\frac{59}{4096} \log \left[\frac{4}{3}\right] \bigg) \csc ^{-1}\left[\sqrt{3}\right] , \nonumber \\[2mm]
a_{12} &= \frac{5538437}{11612160}-\frac{1674775 \pi }{14155776 \sqrt{3}}+\frac{1}{64} \log \left[\frac{4}{3}\right] \nonumber \\
& + \frac{59}{4096 \sqrt{2}} \csc ^{-1}\left[\sqrt{3}\right].
\label{Eq:ApproxFFnums}\end{aligned}$$
The range of validity of Eqs.(\[Eq:ApproxFF\])-(\[Eq:ApproxFFnums\]) is shown in Fig. \[Fig:FFcomp\], in which the exact value of $F_F$ is plotted against $x=\sqrt{\rho}$, as are the approximate $F_F$ retained up to various orders of $\rho$. The expansion up to $\mathcal{O}(\rho^4)$ approximates the exact value of $F_F$ to 1% for $m_\pi/m_K<0.5$ and to 6% for $m_\pi/m_K<3$. One may obtain a representation with greater accuracy by truncating the series with a larger number of terms.
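For numerical use, the truncated representation can be evaluated by grouping the coefficients into triples multiplying $\rho^k$, $\rho^k \log\rho$ and $\rho^k \log^2\rho$. The following sketch is ours (the function name and coefficient container are illustrative; the exact $a_i$ are those of Eq.(\[Eq:ApproxFFnums\])):

```python
import math

def ff_truncated(rho, a):
    """Evaluate the truncated expansion of F_F(rho):
    a[1] + sum over k=1..4 of (a_j + a_{j+1} log(rho) + a_{j+2} log^2(rho)) rho^k,
    where j = 3(k-1)+2 and `a` maps the index i to the coefficient a_i."""
    L = math.log(rho)
    val = a[1]
    for k in range(1, 5):
        j = 3 * (k - 1) + 2  # coefficient triples start at indices 2, 5, 8, 11
        val += (a[j] + a[j + 1] * L + a[j + 2] * L * L) * rho**k
    return val

# Sanity check: with only a1 and a2 nonzero and rho = 1 (so log(rho) = 0),
# the series collapses to a1 + a2.
coeffs = {i: 0.0 for i in range(1, 14)}
coeffs[1], coeffs[2] = 1.0, 2.0
value = ff_truncated(1.0, coeffs)
```

The truncation error can then be monitored order by order against the exact series.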
![Comparison of the exact and approximate $F_F$.[]{data-label="Fig:FFcomp"}](FFcomp.eps){width="30.00000%"}
To allow the reader to verify the implementation of these expressions, we give the numerical values of $F_K/F_\pi$ obtained from both the exact and approximate expressions, using the physical values $m_\pi=0.1350$ GeV, $m_K=0.4955$ GeV, $F_\pi=0.0922$ GeV, as well as the LEC values of the BE14 fit of [@Bijnens:2014lea]. We get, using Eq.(\[Eq:ExactFf\]), $$\begin{aligned}
F_K/F_\pi = 1.19897,\end{aligned}$$ and using the approximation of Eqs.(\[Eq:ApproxFF\])-(\[Eq:ApproxFFnums\]), $$\begin{aligned}
F_K/F_\pi = 1.20071.\end{aligned}$$
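As a quick consistency check on the two numbers quoted above (a trivial sketch):

```python
# F_K/F_pi from Eq.(ExactFf) and from Eqs.(ApproxFF)-(ApproxFFnums), as quoted above
exact, approx = 1.19897, 1.20071
rel_diff = abs(approx - exact) / exact
print(f"relative difference: {rel_diff:.2%}")  # about 0.15%
```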
[**Illustrative Lattice Fits**]{}- In this section, we present an exploratory numerical study based on our analytical representation, fitting Eq.(\[Eq:fkfpLattice\]) to the data of the lattice study [@Durr:2016ulb] to determine best-fit values of the NLO LEC $L^r_5$ and the NNLO LEC combinations $C^r_{14}+C^r_{15}$ and $C^r_{15}+2C^r_{17}$. We perform the fit (using [@James:1975dr]) on the mass sets for which $ m_\pi < 0.40$ GeV, fitting the ‘exact’ $F_F$, i.e. the KdF series truncated after $1000^2$ terms, and cross-check by fitting the exact, purely numerical version of Eq.(\[Eq:fkfp\]) with $\mathsf{CHIRON}$ [@Bijnens:2014gsa]. A fit to the approximate version presented in Eq.(\[Eq:ApproxFF\]) gives compatible results.
The uncertainties on the values of the LEC given in this section derive from the errors of the $F_K/F_\pi$ data of the lattice study, but do not take into account other uncertainties. As detailed in [@Durr:2016ulb], systematic effects due to lattice artifacts can arise from correlator fit time choices, lattice spacings, renormalization and finite volume corrections, among other things. When these effects are taken into account, for instance by means of the results presented in [@Colangelo:2002hy; @Colangelo:2005gd] to account for the extrapolation to infinite volume, the values of the LEC presented in this section are likely to change. However, determining the exact nature and magnitude of the change requires a detailed study that is outside the scope of this paper. The numerical results in this section are therefore given for illustrative purposes only, to encourage the lattice community to undertake just such a detailed study using the NNLO analytic results presented above.
We fix the renormalization scale $\mu$ at $m_\rho = 0.77$ GeV, and use the values of the BE14 fit [@Bijnens:2014lea] for the other $L^r_i$. In addition we fix $F_\pi$ in the determination of $\xi_\pi$ and $\xi_K$ to 92.2 MeV and obtain: $$\begin{aligned}
& L^r_5 = (3.92 \pm 0.55)~10^{-4} \nonumber \\
& C^r_{14}+C^r_{15} = (2.59 \pm 0.63)~10^{-6} \nonumber \\
& C^r_{15}+2C^r_{17} = (6.10 \pm 1.41)~10^{-6}. \label{Eq:LECvalues}\end{aligned}$$
{width="90.00000%"}
{width="90.00000%"}
{width="90.00000%"}
$L_5$ $C_{14}+C_{15}$
------------------ --------- -----------------
$C_{14}+C_{15}$ $-0.93$ $1.00$
$C_{15}+2C_{17}$ $0.35$ $-0.66$
: Correlation values of the fit in (\[Eq:LECvalues\]).[]{data-label="Table:CorPar"}
The correlation parameters are given in Table \[Table:CorPar\] and the quality of the fit is shown in Fig. \[Fig:results\] (Left). The correlations are shown graphically in Fig. \[Fig:results\] (Middle, Right) by plotting random points drawn from the distribution defined by the correlation matrix of the fit, projected onto the two different planes.
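A minimal sketch of how such a scatter of points can be generated, assuming a Gaussian approximation to the fit likelihood: the central values and errors are those of Eq.(\[Eq:LECvalues\]) and the correlations those of Table \[Table:CorPar\]; the sample size and seed are illustrative choices of ours.

```python
import numpy as np

mu    = np.array([3.92e-4, 2.59e-6, 6.10e-6])   # L5, C14+C15, C15+2C17
sigma = np.array([0.55e-4, 0.63e-6, 1.41e-6])   # their 1-sigma errors
R = np.array([[ 1.00, -0.93,  0.35],            # correlation matrix (Table CorPar)
              [-0.93,  1.00, -0.66],
              [ 0.35, -0.66,  1.00]])

cov = np.outer(sigma, sigma) * R                # covariance = diag(sigma) R diag(sigma)
rng = np.random.default_rng(0)
points = rng.multivariate_normal(mu, cov, size=5000)
```

Projecting `points` onto pairs of parameter axes reproduces the kind of correlation scatter shown in Fig. \[Fig:results\] (Middle, Right).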
With these LEC values and the physical meson masses as inputs, we get for the value of $F_K/F_\pi$: $$\begin{aligned}
F_K/F_\pi = 1.194,\end{aligned}$$ which agrees well with the literature value of [@Bijnens:2014lea].
The values of Eq.(\[Eq:LECvalues\]) differ significantly from those of the BE14 exact fit ($L_5 = 10.1 \times 10^{-4}, C_{14}+C_{15} = -4.00 \times 10^{-6} , C_{15}+2C_{17} = -5.00 \times 10^{-6}$), but are more compatible with those of [@Ecker:2010nc] ($L_5 = 0.76 \times 10^{-3}, C_{14}+C_{15} = 3.15 \times 10^{-6} , C_{15}+2C_{17} = 10.96 \times 10^{-6}$ in dimensionless units) and [@Ecker:2013pba] ($L_5 = 0.75 \times 10^{-3}, C_{14}+C_{15} = 1.70 \times 10^{-6} , C_{15}+2C_{17} = 6.04 \times 10^{-6}$).
A similar fit, but now with $F_\pi$ also varied in $\xi_\pi$ and $\xi_K$, requires the use of lattices common to [@Durr:2016ulb] and [@Durr:2013goa] to obtain the values of $F_\pi$ for each lattice. This fit gives: $$\begin{aligned}
& L^r_5 = (0.49 \pm 1.08)~10^{-4} \nonumber \\
& C^r_{14}+C^r_{15} = (5.59 \pm 1.08)~10^{-6} \nonumber \\
& C^r_{15}+2C^r_{17} = (39.7 \pm 2.10)~10^{-6}. \label{Eq:LECvalues2}\end{aligned}$$
The change in the values above arises primarily from the variation of $F_\pi$. Keeping $F_\pi$ fixed at 92.2 MeV but using the set of inputs of Eq.(\[Eq:LECvalues2\]) changes the Eq.(\[Eq:LECvalues\]) values of $L^r_5$, $C^r_{14}+C^r_{15}$ and $C^r_{15}+2C^r_{17}$ by $\approx$ 20%, 35% and 10%, respectively. As the difference in the inputs for Eq.(\[Eq:LECvalues\]) and Eq.(\[Eq:LECvalues2\]) is primarily the data from the coarsest lattices, the coarsest lattice data appear to have a significant impact on the fitted LEC values.
[**Conclusions**]{}- The ratio $F_K/F_\pi$ is a quantity at the heart of chiral symmetry breaking, a fundamental property of the strong interactions that is measured in ab initio calculations on the lattice. Tuning of the quark masses to physical values is now possible, so an analytic expansion of this quantity in the quark or meson masses is the order of the day. Using modern loop calculation techniques, we have achieved this goal. At present, two-loop precision is sufficient to fit the lattice data; this might change as the lattice precision improves. While three-loop results exist in two-flavour ChPT [@Bijnens:2017wba], in three-flavour ChPT two loops is the state of the art, making our method and results all the more significant.
This work combines techniques developed independently in various branches of elementary particle physics and field theory, and represents an important advance over the results that appeared nearly two decades ago, when many sunset integrals were evaluated numerically. We hope this work will pave the way for detailed comparisons of other similar quantities with lattice simulations, and help improve our understanding of both ChPT and lattice studies.
[**Acknowledgements**]{}- We thank Pere Masjuan for helpful correspondence regarding the LECs. JB is supported in part by the Swedish Research Council grants contract numbers 621-2013-4287, 2015-04089 and 2016-05996 and by the European Research Council under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 668679). BA is partly supported by the MSIL Chair of the Division of Physical and Mathematical Sciences, Indian Institute of Science.
[99]{}
S. Dürr [*et al.*]{}, Phys. Rev. D [**95**]{} (2017) no.5, 054513 doi:10.1103/PhysRevD.95.054513 \[arXiv:1601.05998 \[hep-lat\]\].
J. Gasser and H. Leutwyler, Nucl. Phys. B [**250**]{} (1985) 465 doi:10.1016/0550-3213(85)90492-4.
G. Amoros, J. Bijnens and P. Talavera, Nucl. Phys. B [**568**]{} (2000) 319 \[hep-ph/9907264\].
O. V. Tarasov, Nucl. Phys. B [**502**]{} (1997) 455 \[hep-ph/9703319\].
J. P. Aguilar, D. Greynat and E. De Rafael, Phys. Rev. D [**77**]{} (2008) 093010 doi:10.1103/PhysRevD.77.093010 \[arXiv:0802.2618 \[hep-ph\]\].
S. Friot and D. Greynat, J. Math. Phys. [**53**]{} (2012) 023508 doi:10.1063/1.3679686 \[arXiv:1107.0328 \[math-ph\]\].
B. Ananthanarayan, J. Bijnens, S. Ghosh and A. Hebbar, Eur. Phys. J. A [**52**]{} (2016) no.12, 374 doi:10.1140/epja/i2016-16374-8 \[arXiv:1608.02386 \[hep-ph\]\].
B. Ananthanarayan, J. Bijnens, S. Friot and S. Ghosh \[Work in progress\].
B. Ananthanarayan, S. Friot and S. Ghosh \[Work in progress\]
B. Ananthanarayan, J. Bijnens and S. Ghosh, Eur. Phys. J. C [**77**]{} (2017) no.7, 497 doi:10.1140/epjc/s10052-017-5019-y \[arXiv:1703.00141 \[hep-ph\]\].
J. Bijnens and G. Ecker, Ann. Rev. Nucl. Part. Sci. [**64**]{} (2014) 149 \[arXiv:1405.6488 \[hep-ph\]\].
F. James and M. Roos, Comput. Phys. Commun. [**10**]{} (1975) 343 doi:10.1016/0010-4655(75)90039-9.
J. Bijnens, Eur. Phys. J. C [**75**]{} (2015) no.1, 27 doi:10.1140/epjc/s10052-014-3249-9 \[arXiv:1412.0887 \[hep-ph\]\], http://home.thep.lu.se/~bijnens/chiron/
G. Colangelo, S. Durr and R. Sommer, Nucl. Phys. Proc. Suppl. [**119**]{} (2003) 254 doi:10.1016/S0920-5632(03)80450-4 \[hep-lat/0209110\].
G. Colangelo, S. Durr and C. Haefeli, Nucl. Phys. B [**721**]{} (2005) 136 doi:10.1016/j.nuclphysb.2005.05.015 \[hep-lat/0503014\].
G. Ecker, P. Masjuan and H. Neufeld, Phys. Lett. B [**692**]{} (2010) 184 doi:10.1016/j.physletb.2010.07.037 \[arXiv:1004.3422 \[hep-ph\]\].
G. Ecker, P. Masjuan and H. Neufeld, Eur. Phys. J. C [**74**]{} (2014) no.2, 2748 doi:10.1140/epjc/s10052-014-2748-z \[arXiv:1310.8452 \[hep-ph\]\].
S. Dürr [*et al.*]{} \[Budapest-Marseille-Wuppertal Collaboration\], Phys. Rev. D [**90**]{} (2014) no.11, 114504 doi:10.1103/PhysRevD.90.114504 \[arXiv:1310.3626 \[hep-lat\]\].
J. Bijnens and N. H. Truedsson, JHEP [**1711**]{} (2017) 181 doi:10.1007/JHEP11(2017)181 \[arXiv:1710.01901 \[hep-ph\]\].
---
abstract: 'We develop deep Poisson-gamma dynamical systems (DPGDS) to model sequentially observed multivariate count data, improving previously proposed models by not only mining deep hierarchical latent structure from the data, but also capturing both first-order and long-range temporal dependencies. Using sophisticated but simple-to-implement data augmentation techniques, we derive closed-form Gibbs sampling update equations by first backward and upward propagating auxiliary latent counts, and then forward and downward sampling latent variables. Moreover, we develop stochastic gradient MCMC inference that is scalable to very long multivariate count time series. Experiments on both synthetic and a variety of real-world data demonstrate that the proposed model not only has excellent predictive performance, but also provides highly interpretable multilayer latent structure to represent hierarchical and temporal information propagation.'
author:
- |
Dandan Guo, Bo Chen[^1], Hao Zhang\
National Laboratory of Radar Signal Processing\
Collaborative Innovation Center of Information Sensing and Understanding\
Xidian University, Xi’an, China\
`gdd_xidian@126.com`, `bchen@mail.xidian.edu.cn`, `zhanghao_xidian@163.com` Mingyuan Zhou\
McCombs School of Business\
The University of Texas at Austin\
Austin, TX 78712, USA\
`mingyuan.zhou@mccombs.utexas.edu`
bibliography:
- 'nips\_2018\_111.bib'
title: Deep Poisson gamma dynamical systems
---
Introduction
============
The need to model time-varying count vectors ${\ensuremath{\boldsymbol{x}} }_{1},...,{\ensuremath{\boldsymbol{x}} }_{T}$ appears in a wide variety of settings, such as text analysis, international relation study, social interaction understanding, and natural language processing [@Sutskever2007Learning; @wang2008continuous; @hermans2013training; @gan2015deep; @GP-DPFA2015; @charlin2015dynamic; @Schein2016Poisson; @Gong2017Deep; @Rabiee2017Recurrent]. To model these count data, it is important not only to consider the sparsity of high-dimensional data and robustness to over-dispersed temporal patterns, but also to capture complex dependencies both within and across time steps. In order to move beyond linear dynamical systems (LDS) [@ghahramani1999learning] and their nonlinear generalization [@wang2006gaussian], which often make the Gaussian assumption [@kalman1963mathematical], the gamma process dynamic Poisson factor analysis (GP-DPFA) [@GP-DPFA2015] factorizes the observed time-varying count vectors under the Poisson likelihood as ${\ensuremath{\boldsymbol{x}} }_t \sim \mbox{Poisson}({\ensuremath{\boldsymbol{\Phi}}}{\ensuremath{\boldsymbol{\theta}} }_{t}) $, and transmits temporal information smoothly by evolving the factor scores with a gamma Markov chain as ${\ensuremath{\boldsymbol{\theta}} }_t \sim \mbox{Gamma}({\ensuremath{\boldsymbol{\theta}} }_{t-1},{\ensuremath{\boldsymbol{\beta}} }) $, which has highly desired strong non-linearity. To further capture cross-factor temporal dependence, a transition matrix ${\ensuremath{\boldsymbol{\Pi}} }$ is used in the Poisson–gamma dynamical system (PGDS) [@Schein2016Poisson] as ${\ensuremath{\boldsymbol{\theta}} }_t \sim \mbox{Gamma}({\ensuremath{\boldsymbol{\Pi}} }{\ensuremath{\boldsymbol{\theta}} }_{t-1},{\ensuremath{\boldsymbol{\beta}} }) $. However, these shallow models may still have shortcomings in capturing long-range temporal dependencies [@Gong2017Deep].
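The PGDS generative process just described can be simulated by ancestral sampling; a minimal sketch of ours (dimensions, seed, and the uniform Dirichlet draws for $\Phi$ and $\Pi$ are illustrative choices, not the model's priors):

```python
import numpy as np

rng = np.random.default_rng(1)
V, K, T = 8, 3, 50                          # data dimension, factors, time steps

Phi = rng.dirichlet(np.ones(V), size=K).T   # V x K loading, columns sum to one
Pi  = rng.dirichlet(np.ones(K), size=K).T   # K x K transition, columns sum to one
beta = 1.0

theta = np.ones(K)                          # theta_0
X = np.zeros((V, T), dtype=np.int64)
for t in range(T):
    theta = rng.gamma(shape=Pi @ theta, scale=1.0 / beta)  # gamma Markov chain
    X[:, t] = rng.poisson(Phi @ theta)                     # Poisson emission
```

The first-order Markov property is explicit here: each new $\theta_t$ is drawn given only $\theta_{t-1}$.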
For example, if given ${\ensuremath{\boldsymbol{\theta}} }_t$, then ${\ensuremath{\boldsymbol{\theta}} }_{t+1}$ no longer depends on ${\ensuremath{\boldsymbol{\theta}} }_{t-k}$ for all $k \geq 1$. Deep probabilistic models are widely used to capture the relationships between latent variables across multiple stochastic layers [@Gong2017Deep; @gan2015deep; @neal1992connectionist; @ranganath2014deep; @zhou2015the; @henao2015deep]. For example, deep dynamic Poisson factor analysis (DDPFA) [@Gong2017Deep] utilizes recurrent neural networks (RNN) [@hermans2013training] to capture long-range temporal dependencies of the factor scores. The latent variables and RNN parameters, however, are separately inferred. Deep temporal sigmoid belief network (DTSBN) [@gan2015deep] is a deep dynamic generative model defined as a sequential stack of sigmoid belief networks (SBNs), whose hidden units are typically restricted to be binary. Although a deep structure is designed to describe complex long-range temporal dependencies, how the layers in DTSBN are related to each other lacks an intuitive interpretation, which is of paramount interest for a multilayer probabilistic model [@zhou2015the].
In this paper, we present deep Poisson gamma dynamical systems (DPGDS), a deep probabilistic dynamical model that takes advantage of the hierarchical structure to efficiently incorporate both between-layer and temporal dependencies, while providing rich interpretation. Moving beyond DTSBN using binary hidden units, we build a deep dynamic directed network with gamma distributed nonnegative real hidden units, inferring a multilayer contextual representation of multivariate time-varying count vectors. Consequently, DPGDS can handle highly overdispersed counts, capturing the correlations between the visible/hidden features across layers and over time using the gamma belief network [@zhou2015the]. Combining the deep and temporal structures shown in Fig. \[fig:generative model layer 3\], DPGDS breaks the assumption that given ${\ensuremath{\boldsymbol{\theta}} }_{t}$, ${\ensuremath{\boldsymbol{\theta}} }_{t+1}$ no longer depends on ${\ensuremath{\boldsymbol{\theta}} }_{t-k}$ for $k \geq 1$, suggesting that it may better capture long-range temporal dependencies. As a result, the model allows more specific information, which is also more likely to exhibit fast temporal changes, to transmit through lower layers, while allowing more general information, which is more likely to evolve slowly over time, to transmit through higher layers. For example, as shown in Fig. \[fig:gdelt\_example\], which is learned from GDELT2003 with DPGDS, when analyzing these international events, the factors at lower layers are more specific and discover the relationships between the different countries, whereas those at higher layers are more general and reflect the conflicts between different areas consisting of several related countries, or the ones occurring simultaneously; the latent representation ${\ensuremath{\boldsymbol{\theta}} }_t$ at a lower layer varies more intensely than that at a higher layer.
Distinct from DDPFA [@Gong2017Deep] that adopts a two-stage inference, the latent variables of DPGDS can be jointly trained with both a Backward-Upward–Forward-Downward (BUFD) Gibbs sampler and a sophisticated stochastic gradient MCMC (SGMCMC) algorithm that is scalable to very long multivariate time series [@ma2015a; @welling2011bayesian; @patterson2013stochastic; @ding2014bayesian; @Li2016Preconditioned]. Furthermore, the factors learned at each layer can refine the understanding and analysis of sequentially observed multivariate count data, which, to the best of our knowledge, may be very challenging for existing methods. Finally, based on a diverse range of real-world data sets, we show that DPGDS exhibits excellent predictive performance, inferring interpretable latent structure with well captured long-range temporal dependencies.
Deep Poisson gamma dynamic systems
==================================
Shown in Fig. \[fig:generative model layer 3\] is the graphical representation of a three-hidden-layer DPGDS. Let us denote $\theta\sim\mbox{Gam}(a,c)$ as a gamma random variable with mean $a/c$ and variance $a/c^2$. Given a set of $V$-dimensional sequentially observed multivariate count vectors ${\ensuremath{\boldsymbol{x}} }_{1},...,{\ensuremath{\boldsymbol{x}} }_{T}$, represented as a $V \times T$ matrix ${\ensuremath{{\bf X}} }$, the generative process of a $L$-hidden-layer DPGDS, from top to bottom, is expressed as $$\begin{aligned}
\label{DPGDS}
& {\ensuremath{\boldsymbol{\theta}} }_t^{(L)} \sim \mbox{Gam}\left(\tau_0 {\ensuremath{\boldsymbol{\Pi}} }^{(L)} {\ensuremath{\boldsymbol{\theta}} }_{t-1}^{(L)} , \tau_0 \right),\cdots ,~{\ensuremath{\boldsymbol{\theta}} }_t^{(l)} \sim \mbox{Gam}\left(\tau_0({\ensuremath{\boldsymbol{\Phi}}}^{(l+1)} {\ensuremath{\boldsymbol{\theta}} }_t^{(l+1)} + {\ensuremath{\boldsymbol{\Pi}} }^{(l)} {\ensuremath{\boldsymbol{\theta}} }_{t-1}^{(l)}) , \tau_0 \right),
\cdots , \nonumber\\
& {\ensuremath{\boldsymbol{\theta}} }_t^{(1)} \sim \mbox{Gam}\left(\tau_0 ({\ensuremath{\boldsymbol{\Phi}}}^{(2)} {\ensuremath{\boldsymbol{\theta}} }_t^{(2)} + {\ensuremath{\boldsymbol{\Pi}} }^{(1)}{\ensuremath{\boldsymbol{\theta}} }_{t-1}^{(1)}) , \tau_0 \right),~~
{\ensuremath{\boldsymbol{x}} }_t^{(1)}\! \sim \!\mbox{Pois} \left( {\delta_t^{(1)}} {\ensuremath{\boldsymbol{\Phi}}}^{(1)} {\ensuremath{\boldsymbol{\theta}} }_t^{(1)} \right),\end{aligned}$$ where ${\ensuremath{\boldsymbol{\Phi}}}^{(l)}\in \mathbb{R}_{+}^{K_{l-1} \times K_{l}}$ is the factor loading matrix at layer $l$, ${\ensuremath{\boldsymbol{\theta}} }_t^{(l)}\in\mathbb{R}_+^{K_l}$ the hidden units of layer $l$ at time $t$, and ${\ensuremath{\boldsymbol{\Pi}} }^{(l)}\in \mathbb{R}_{+}^{K_{l} \times K_{l}}$ a transition matrix of layer $l$ that captures cross-factor temporal dependencies. We denote $\delta_t^{(1)} \in\mathbb{R}_+ $ as a scaling factor, reflecting the scale of the counts at time $t$; one may also set $\delta_t^{(1)} = \delta^{(1)}$ for $t = 1, . . . ,T$. We denote $\tau_0 \in \mathbb{R}_+$ as a scaling hyperparameter that controls the temporal variation of the hidden units. The multilayer time-varying hidden units ${\ensuremath{\boldsymbol{\theta}} }_t^{(l)}$ are well suited for downstream analysis, as will be shown below.
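The generative process of Eq.(\[DPGDS\]) can be sampled ancestrally, top layer first, at each time step. A minimal sketch of ours, with illustrative layer sizes, random column-stochastic stand-ins for $\Phi^{(l)}$ and $\Pi^{(l)}$, $\delta_t^{(1)}=1$, and $\theta_0^{(l)}$ initialised at ones instead of the $t=1$ prior:

```python
import numpy as np

rng = np.random.default_rng(0)
V, T, tau0 = 10, 30, 1.0
K = [6, 4, 2]                       # layer widths K_1, K_2, K_3 (illustrative)
L = len(K)

# Loading matrices Phi^(l) (K_{l-1} x K_l, K_0 = V) and transitions Pi^(l) (K_l x K_l)
dims = [V] + K
Phi = [rng.dirichlet(np.ones(dims[l]), size=dims[l + 1]).T for l in range(L)]
Pi  = [rng.dirichlet(np.ones(k), size=k).T for k in K]

theta = [np.ones(k) for k in K]     # theta^(l) of the previous time step
X = np.zeros((V, T), dtype=np.int64)
for t in range(T):
    new = [None] * L
    # top layer depends only on its own past; lower layers also on the layer above
    new[L - 1] = rng.gamma(tau0 * (Pi[L - 1] @ theta[L - 1]), 1.0 / tau0)
    for l in range(L - 2, -1, -1):
        shape = tau0 * (Phi[l + 1] @ new[l + 1] + Pi[l] @ theta[l])
        new[l] = rng.gamma(shape, 1.0 / tau0)
    theta = new
    X[:, t] = rng.poisson(Phi[0] @ theta[0])    # delta_t^(1) = 1 here
```

Setting `L = 1` recovers the PGDS sampler as a special case.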
DPGDS factorizes the count observation ${\ensuremath{\boldsymbol{x}} }_t^{(1)}$ into the product of $\delta_t^{(1)}$, ${\ensuremath{\boldsymbol{\Phi}}}^{(1)}$, and ${\ensuremath{\boldsymbol{\theta}} }_t^{(1)}$ under the Poisson likelihood. It further factorizes the shape parameters of the gamma distributed ${\ensuremath{\boldsymbol{\theta}} }_t^{(l)}$ of layer $l$ at time $t$ into the sum of ${\ensuremath{\boldsymbol{\Phi}}}^{(l+1)} {\ensuremath{\boldsymbol{\theta}} }_t^{(l+1)}$, capturing the dependence between different layers, and ${\ensuremath{\boldsymbol{\Pi}} }^{(l)} {\ensuremath{\boldsymbol{\theta}} }_{t-1}^{(l)}$, capturing the temporal dependence at the same layer. At the top layer, ${\ensuremath{\boldsymbol{\theta}} }_t^{(L)}$ is only dependent on ${\ensuremath{\boldsymbol{\Pi}} }^{(L)} {\ensuremath{\boldsymbol{\theta}} }_{t-1}^{(L)}$, and at $t=1$, ${\ensuremath{\boldsymbol{\theta}} }_1^{(l)} \sim \mbox{Gam}\left(\tau_0 {\ensuremath{\boldsymbol{\Phi}}}^{(l+1)} {\ensuremath{\boldsymbol{\theta}} }_1^{(l+1)} , \tau_0 \right)$ for $l=1,\ldots,L-1$ and ${\ensuremath{\boldsymbol{\theta}} }_1^{(L)} \sim \mbox{Gam}\left(\tau_0 \nu_k^{(L)} , \tau_0 \right)$. To complete the hierarchical model, we introduce $K_l$ factor weights $\textbf{{\ensuremath{\boldsymbol{\nu}} }}^{(l)} = (\nu_1^{(l)},...,\nu_{K_l}^{(l)})$ in layer $l$ to model the strength of each factor, and for $l=1,...,L$, we let $$\label{Pi prior}
\begin{array}{c}
{\ensuremath{\boldsymbol{\pi}} }_k^{(l)}\sim \textrm{Dir}(\nu_1^{(l)}\nu_k^{(l)},...,\nu_{k-1}^{(l)}\nu_k^{(l)},\xi^{(l)}\nu_k^{(l)},\nu_{k+1}^{(l)}\nu_k^{(l)}...,\nu_{K_l}^{(l)}\nu_k^{(l)}),~~
\nu_k^{(l)} \sim \textrm{Gam}(\frac{\gamma_0}{K_l},\beta^{(l)}).\\
\end{array}$$ Note that ${\ensuremath{\boldsymbol{\pi}} }_k^{(l)}$ is the $k^{th}$ column of ${\ensuremath{\boldsymbol{\Pi}} }^{(l)}$ and $\pi_{{k_1}{k_2}}^{(l)}$ can be interpreted as the probability of transiting from topic $k_2$ of the previous time to topic $k_1$ of the current time at layer $l$. Finally, we place Dirichlet priors on the factor loadings and draw other parameters from a noninformative gamma prior: ${\ensuremath{\boldsymbol{\phi}} }_k^{(l)}=(\phi_{1k}^{(l)},...,\phi_{{K_{l-1}}k}^{(l)})\sim \textrm{Dir}(\eta^{(l)},...,\eta^{(l)})$, and $\delta_t^{(1)},\xi^{(l)},\beta^{(l)}\sim \textrm{Gam}(\epsilon_0,\epsilon_0) $. [Note that imposing Dirichlet distributions on the columns of ${\ensuremath{\boldsymbol{\Pi}} }^{(l)}$ and ${\ensuremath{\boldsymbol{\Phi}}}^{(l)}$ not only makes the latent representation more identifiable and interpretable, but also facilitates inference, as will be shown in the next section.]{} Clearly, when $L=1$, DPGDS reduces to PGDS [@Schein2016Poisson]. In real-world applications, a binary observation can be linked to a latent count using the Bernoulli-Poisson link as $b = 1(n\geq 1),n\sim \mbox{Pois}(\lambda)$ [@Zhou2015Infinite]. A nonnegative-real-valued matrix can also be linked to a latent count matrix via a Poisson randomized gamma distribution as $x \sim \mbox{Gam}(n,c),n\sim \mbox{Pois}(\lambda)$ [@JMLR:v17:15-633].
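The Bernoulli-Poisson link mentioned above is easy to check empirically: thresholding a latent Poisson count at one yields a Bernoulli variable with success probability $1-e^{-\lambda}$. A small sketch (rate and sample size are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 0.7                                  # Poisson rate (illustrative)
n = rng.poisson(lam, size=100000)          # latent counts n ~ Pois(lam)
b = (n >= 1).astype(np.int64)              # binary observation b = 1(n >= 1)
# By construction, P(b = 1) = 1 - exp(-lam)
```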
[**[Hierarchical structure:]{}**]{} To interpret the hierarchical structure of , we notice that $\mathbb{E}\left[ {\ensuremath{\boldsymbol{x}} }_t^{(1)} {\,|\,}{\ensuremath{\boldsymbol{\theta}} }_t^{(l)}, \{{\ensuremath{\boldsymbol{\Phi}}}^{(p)}\}_{p=1}^{l} \right] = \left[ \prod_{p=1}^{l} {\ensuremath{\boldsymbol{\Phi}}}^{(p)} \right] {\ensuremath{\boldsymbol{\theta}} }_t^{(l)}$ if the temporal structure is ignored. Thus it is straightforward to interpret ${\ensuremath{\boldsymbol{\phi}} }_k^{(l)}$ by projecting them to the bottom data layer as $\left[ \prod_{t=1}^{l-1} {\ensuremath{\boldsymbol{\Phi}}}^{(t)} \right] {\ensuremath{\boldsymbol{\phi}} }_k^{(l)} $, which are often quite specific at the bottom layer and become increasingly more general when moving upwards, as will be shown below in Fig. \[fig:icews\_dic\].
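In code, projecting a factor of layer $l$ down to the data layer is just the chained matrix product $\big[\prod_{p=1}^{l-1}\Phi^{(p)}\big]\phi_k^{(l)}$; the helper below is an illustrative sketch of ours, with random column-stochastic loadings:

```python
import numpy as np
from functools import reduce

def project_to_data_layer(Phi, phi_k, l):
    """Project factor phi_k of (1-indexed) layer l down to the data layer via
    Phi^(1) Phi^(2) ... Phi^(l-1); Phi is the list [Phi^(1), ..., Phi^(L)]."""
    proj = reduce(np.matmul, Phi[: l - 1], np.eye(Phi[0].shape[0]))
    return proj @ phi_k

# Tiny example with illustrative sizes.
rng = np.random.default_rng(4)
V, K1, K2 = 5, 3, 2
Phi = [rng.dirichlet(np.ones(V), size=K1).T,    # Phi^(1): V x K1
       rng.dirichlet(np.ones(K1), size=K2).T]   # Phi^(2): K1 x K2
top_factor = project_to_data_layer(Phi, Phi[1][:, 0], 2)  # layer-2 factor on the data layer
```

Because every $\Phi$ column is a probability vector, the projection of a layer-$l$ factor is again a distribution over the $V$ observed features.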
[**[Long-range temporal dependencies]{}**]{}: Using the law of total expectations on , for a three-hidden-layer DPGDS shown in Fig. \[fig:generative model layer 3\], we have $$\begin{aligned}
\small \mathbb{E} [{\ensuremath{\boldsymbol{x}} }_t^{(1)}\,|\,
{\ensuremath{\boldsymbol{\theta}} }_{t-1}^{(1)}, {\ensuremath{\boldsymbol{\theta}} }_{t-2}^{(2)}, {\ensuremath{\boldsymbol{\theta}} }_{t-3}^{(3)}
]/\delta_t^{(1)}
&= {\ensuremath{\boldsymbol{\Phi}}}^{(1)} {\ensuremath{\boldsymbol{\Pi}} }^{(1)} {\ensuremath{\boldsymbol{\theta}} }_{t-1}^{(1)} + {\ensuremath{\boldsymbol{\Phi}}}^{(1)} {\ensuremath{\boldsymbol{\Phi}}}^{(2)} [{\ensuremath{\boldsymbol{\Pi}} }^{(2)}]^{2} {\ensuremath{\boldsymbol{\theta}} }_{t-2}^{(2)}\notag\\
& ~~~~+
{\ensuremath{\boldsymbol{\Phi}}}^{(1)} {\ensuremath{\boldsymbol{\Phi}}}^{(2)} ({\ensuremath{\boldsymbol{\Pi}} }^{(2)}{\ensuremath{\boldsymbol{\Phi}}}^{(3)}+{\ensuremath{\boldsymbol{\Phi}}}^{(3)} {\ensuremath{\boldsymbol{\Pi}} }^{(3)} )[{\ensuremath{\boldsymbol{\Pi}} }^{(3)}]^2{\ensuremath{\boldsymbol{\theta}} }_{t-3}^{(3)},
\end{aligned}$$ which suggests that $\{{\ensuremath{\boldsymbol{\Pi}} }^{(l)}\}_{l=1}^{L}$ play the role of transiting the latent representation across time and that, different from most existing dynamic models, DPGDS can capture and transmit long-range temporal information (often general and slowly changing over time) through its higher hidden layers.
Scalable MCMC inference
=======================
In this paper, in each iteration, across layers and times, we first exploit a variety of data augmentation techniques for count data to “backward” and “upward” propagate auxiliary latent counts, with which we then “downward” and “forward” sample latent variables, leading to a backward-upward–forward-downward (BUFD) Gibbs sampling algorithm.
Backward and upward propagation of latent counts
------------------------------------------------
Different from PGDS, which has only backward propagation of latent counts, DPGDS has both backward and upward ones due to its deep hierarchical structure. To derive closed-form Gibbs sampling update equations, we exploit three useful properties for count data, denoted as [**[P1]{}**]{}, [**[P2]{}**]{}, and [**[P3]{}**]{} [@zhou2015negative; @Schein2016Poisson], respectively, as presented in the Appendix. Let us denote $x\sim\mbox{NB}(r,p)$ as the negative binomial distribution with probability mass function $P(x=k)=\frac{\Gamma(k+r)}{k!\Gamma(r)}p^k(1-p)^r$, where $k\in\{0,1,\ldots\}$. First, we can augment each count $x_{vt}^{(1)}$ in into a sum of $K_1$ smaller or equal latent counts as $x_{vt}^{(1)} = \sum_{k=1}^{{K_1}} {A_{vkt}^{(1)}},~A_{vkt}^{(1)}\sim \textrm{Pois}(\delta _t^{(1)}\phi _{vk}^{(1)}\theta _{kt}^{(1)})$, with $A_{{\ensuremath{\boldsymbol{\cdot}}}kt}^{(1)} = \sum_{v = 1}^{V} {A_{vkt}^{(1)}}$. Since $\sum_{v = 1}^{V} {\phi _{vk}^{(1)}}=1$ by construction, we also have $A_{{\ensuremath{\boldsymbol{\cdot}}}kt}^{(1)}\sim \textrm{Pois}(\delta _t^{\left( 1 \right)}\theta _{{k}t}^{(1)})$, as shown in Fig. \[fig:inference11\]. [We start with ${\ensuremath{\boldsymbol{\theta}} }_T^{(1)}$ at the last time point $T$, as none of the other time-step factors depend on it in their priors. Via [**[P2]{}**]{}, as shown in Fig. \[fig:inference22\]]{}, we can marginalize out $\theta_{kT}^{(1)}$ to obtain $$\label{NB T time in layer 11}
A_{{\ensuremath{\boldsymbol{\cdot}}}kT}^{(1)} \sim \textrm{NB}\left[\tau_0\left(\sum\nolimits_{{k_2}=1}^{{K_2}} \phi_{k{k_2}}^{(2)}\theta_{{k_2}T}^{(2)}+
\sum\nolimits_{{k_1}=1}^{{K_1}}\pi_{k{k_1}}^{(1)}\theta_{{k_1},T-1}^{(1)}\right),~g(\zeta_{T}^{(1)})\right],$$ where $\zeta_T^{(1)} = \ln ( {1 + \frac{{\delta _T^{(1)}}}{{{\tau_0}}}} )$ and $g\left( \zeta \right) = 1 - \exp \left( { - \zeta } \right)$.
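The gamma–Poisson marginalization behind this negative binomial (property [**[P2]{}**]{}) is easy to verify by Monte Carlo; the parameter values below are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a, b, c = 3.0, 2.0, 1.5  # gamma shape/rate and Poisson scaling, illustrative

# Marginalize theta ~ Gam(a, b) out of y ~ Pois(c * theta).
theta = rng.gamma(a, 1.0 / b, size=200_000)
y = rng.poisson(c * theta)

# P2 says y ~ NB(a, p) with p = c / (c + b).
p = c / (c + b)
emp = np.mean(y == 2)
exact = stats.nbinom.pmf(2, a, 1 - p)  # scipy's success prob is our 1 - p
print(emp, exact)
```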
[In order to marginalize out ${\ensuremath{\boldsymbol{\theta}} }_{T-1}^{(1)}$, as shown in Fig. \[fig:inference33\], we introduce an auxiliary variable following the Chinese restaurant table (CRT) distribution [@zhou2015negative] as $$\label{CRT}
x_{kT}^{( 2 )}\sim \textrm{CRT}\left[ {A_{{\ensuremath{\boldsymbol{\cdot}}}kT}^{(1)},~{\tau _0}\left( {\sum\nolimits_{{k_2} = 1}^{{K_2}} {\phi _{k{k_2}}^{( 2 )}\theta _{{k_2}T}^{( 2 )}} + \sum\nolimits_{{k_1} = 1}^{{K_1}} {\pi _{k{k_1}}^{( 1 )}\theta _{{k_1},T - 1}^{( 1 )}} } \right)} \right].$$ ]{} As shown in Fig. \[fig:inference44\], we re-express the joint distribution over $A_{{\ensuremath{\boldsymbol{\cdot}}}kT}^{(1)}$ and $x_{kT}^{(2)}$ according to [**[P3]{}**]{} as $$\label{SumLog in layer 11} \small
A_{{\ensuremath{\boldsymbol{\cdot}}}kT}^{(1)} \sim \textrm{SumLog}( {x_{kT}^{(2)},g( {\zeta_T^{(1)}} )} ),~~x_{kT}^{(2)}\sim \textrm{Pois}\left[ {\zeta _T^{( 1)}{\tau _0}\left( {\sum\nolimits_{{k_2} = 1}^{{K_2}} {\phi _{k{k_2}}^{( 2 )}\theta_{{k_2}T}^{( 2)}} + \sum\nolimits_{{k_1} = 1}^{{K_1}} {\pi _{k{k_1}}^{( 1 )}\theta_{{k_1},T-1}^{(1)}} } \right)} \right],$$ where the sum-logarithmic (SumLog) distribution is defined as in Zhou and Carin [@zhou2015negative]. [Via [**[P1]{}**]{}, as in Fig. \[fig:inference55\], the Poisson random variable $x_{kT}^{( 2 )}$ in can be augmented as $x_{kT}^{( 2 )} = x_{kT}^{( {2,1} )} + x_{kT}^{( {2,2} )}$, where $$\label{Spilit Poisson in layer 11}
x_{kT}^{( {2,1} )}\sim \textrm{Pois}( {\zeta_T^{( 1 )}{\tau_0}\sum\nolimits_{{k_1} = 1}^{{K_1}} {\pi_{k{k_1}}^{( 1 )}\theta_{{k_1},T - 1}^{( 1 )}} } ),~~x_{kT}^{( {2,2} )}\sim \textrm{Pois}( {\zeta_T^{( 1 )}{\tau_0}\sum\nolimits_{{k_2} = 1}^{{K_2}} {\phi_{k{k_2}}^{( 2 )}\theta_{{k_2}T}^{( 2 )}} } ).$$ Due to the deep dynamic structure, the count at layer two, $x_{kT}^{( 2 )}$, is divided into two parts: one from time $T-1$ at layer one and the other from time $T$ at layer two. Furthermore, $\zeta_T^{( 1)}$ is the scaling factor at layer two, propagated from $\delta_T^{(1)}$, its counterpart at layer one.]{} Repeating the process all the way back to $t=1$, and from $l=1$ up to $l=L$, we are able to marginalize out all gamma latent variables $\{{\ensuremath{\boldsymbol{\Theta}} }\}_{t=1,l=1}^{T,L}$ and obtain closed-form conditional posteriors for all of them.
Backward-upward–forward-downward Gibbs sampling
-----------------------------------------------
[**[Sampling auxiliary counts:]{}**]{} This step is about the “backward” and “upward” pass. Let us denote $Z_{{\ensuremath{\boldsymbol{\cdot}}}kt}^{\left( {l} \right)} = \sum_{{k_l} = 1}^{{K_l}} {Z_{{k_l}kt}^{\left( {l} \right)}} $, $Z_{{\ensuremath{\boldsymbol{\cdot}}}{k},{T+1}}^{( {l} )}=0$, and $x_{vt}^{(1,1)}=x_{vt}^{(1)}$. Working backward for $t = T,...,2$ and upward for $l = 1,...,L$, we draw $$\begin{aligned}
\label{Multi_Phi_Theta}
& ( {A_{k1t}^{(l)},...,A_{k{K_l}t}^{(l)}} )\sim \mbox{Multi}\left(x_{kt}^{(l,l)};\frac{{\phi _{k{1}}^{(l)}\theta _{{1}t}^{(l)}}}{{\sum\nolimits_{{k_l} = 1}^{{K_l}} {\phi_{k{k_l}}^{(l)}\theta _{{k_l}t}^{(l)}} }},...,\frac{{\phi _{k{K_l}}^{(l)}\theta _{{K_l}t}^{(l)}}}{{\sum\nolimits_{{k_l} = 1}^{{K_l}} {\phi _{k{k_l}}^{(l)}\theta_{{k_l}t}^{(l)}} }}\right),\\
\label{auxiliary_variables}
& x_{kt}^{( {l+1 } )}\sim \textrm{CRT}\left[ {A_{{\ensuremath{\boldsymbol{\cdot}}}kt}^{(l)}+Z_{{\ensuremath{\boldsymbol{\cdot}}}k,t+1}^{( {l} )},{\tau_0}\left( {\sum\nolimits_{{k_{l + 1}} = 1}^{{K_{l + 1}}} {\phi _{k{k_{l + 1}}}^{( {l + 1} )}\theta _{{k_{l + 1}}t}^{( {l + 1} )}} + \sum\nolimits_{{k_l} = 1}^{{K_l}} {\pi _{k{k_l}}^{( l )}\theta _{{k_l},t - 1}^{( l )}} } \right)} \right].\end{aligned}$$ Note that via the deep structure, the latent counts $x_{kt}^{( l+1 )}$ are influenced by both time $t-1$ at layer $l$ and time $t$ at layer $l+1$. With $p_1 := \sum\nolimits_{{k_l} = 1}^{{K_l}} {\pi_{k{k_l}}^{( l )}\theta_{{k_l},t - 1}^{( l )}}$ and $p_2 := \sum\nolimits_{{k_{l+1}} = 1}^{{K_{l+1}}} {\phi_{k{k_{l+1}}}^{( l+1 )}\theta_{{k_{l+1}}t}^{( l+1)}}
$, we can sample the latent counts at layer $l$ and $l+1$ by $$\label{Two_Poisson}
(x_{kt}^{({l+1,l})},x_{kt}^{({l+1,l+1})})\sim \textrm{Multi}\left(x_{kt}^{({l+1})},{p_1}/{(p_1+p_2)},{p_2}/{(p_1+p_2)}\right),$$ and then draw $$\label{Multi_Pi_Theta}
( {Z_{k1t}^{( {l} )},...,Z_{k{K_l}t}^{( {l} )}} )\sim \textrm{Multi} \left( {x_{kt}^{( {l+1,l} )};\frac{{\pi_{k1}^{( l )}\theta _{1,t - 1}^{( l )}}}{{\sum\nolimits_{{k_l} = 1}^{{K_l}} {\pi _{{k}{k_l}}^{( l )}\theta _{k_l,t - 1}^{( l )}} }},...,\frac{{\pi _{k{K_l}}^{( l )}\theta _{{K_l},t - 1}^{( l )}}}{{\sum\nolimits_{{k_l} = 1}^{{K_l}} {\pi _{k{k_l}}^{( l )}\theta _{{k_l},t - 1}^{( l )}} }}} \right).$$
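The multinomial draws above are all instances of property [**[P1]{}**]{}: conditioned on a Poisson total, the component counts are multinomial with probabilities proportional to their rates. A minimal sketch of one such augmentation step, with made-up rates:

```python
import numpy as np

rng = np.random.default_rng(3)
K = 5
rates = rng.gamma(1.0, 1.0, size=K)  # e.g. the K products phi_vk * theta_kt
x = 40                               # an observed total count, e.g. x_vt

# P1: given the total x, the K latent Poisson counts are multinomial
# with probabilities proportional to their rates.
probs = rates / rates.sum()
A = rng.multinomial(x, probs)

print(A, A.sum())  # the split always sums back to the observed count
```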
**Sampling hidden units ${\ensuremath{\boldsymbol{\theta}} }_{t}^{(l)}$ and calculating $\zeta_{t}^{( l)} $:** Given the augmented latent count variables, working forward for $t = 1, . . . , T$ and downward for $l = L,...,1$, we can sample $$\begin{aligned}
\label{Sample Theta2}
\theta^{(l)}_{kt}\sim\mbox{Gamma}\Big[A_{{\ensuremath{\boldsymbol{\cdot}}}kt}^{(l)}+Z_{{\ensuremath{\boldsymbol{\cdot}}}k{(t+1)}}^{( {l} )}+ \tau_0 \Big(\sum\nolimits_{{k_{l+1}}=1}^{{K_{l+1}}} \phi_{k{k_{l+1}}}^{(l+1)}\theta_{{k_{l+1}}t}^{(l+1)}+
\sum\nolimits_{{k_l}=1}^{{K_l}}\pi_{k{k_l}}^{(l)}\theta_{{k_l},t-1}^{(l)}\Big), \nonumber \\
{{\tau_0}\big( {1 + \zeta _t^{( l-1 )} + \zeta _{t+1}^{( l )}} \big)}\Big],\end{aligned}$$ where $\zeta _t^{\left( 0 \right)} = \frac{{\delta _t^{\left( 1 \right)}}}{{{\tau _0}}}$ and $ \zeta_t^{\left( l \right)} = \ln \left( {1 + \zeta _t^{\left( {l - 1} \right)} + \zeta _{t + 1}^{\left( l \right)}} \right)$. Note if $\delta_t^{(1)} = \delta^{(1)}$ for $t = 1,...,T$, then we may let $\zeta^{( l )} = - {{\bf{W}}_{ - 1}}( { - \exp ( { - 1 - \zeta^{( l-1 )}} )} ) - 1 - \zeta^{( l-1 )}$, where the function ${{\bf{W}}_{ - 1}}$ is the lower real branch of the Lambert $\textrm{W}$ function [@corless1996on; @Schein2016Poisson]. From , we can see that the conditional posterior of ${\ensuremath{\boldsymbol{\theta}} }_t^{(l)}$ is parameterized not only by ${\ensuremath{\boldsymbol{\Phi}}}^{(l+1)} {\ensuremath{\boldsymbol{\theta}} }_t^{(l+1)}$ and ${\ensuremath{\boldsymbol{\Pi}} }^{(l)} {\ensuremath{\boldsymbol{\theta}} }_{t-1}^{(l)}$, which represent the information from layer $l+1$ (downward) and time $t-1$ (forward), respectively, but also by $A_{{\ensuremath{\boldsymbol{\cdot}}},:,t}^{(l)}$ and $Z_{{\ensuremath{\boldsymbol{\cdot}}},:,t+1}^{(l)}$, which record the messages from layer $l-1$ (upward) in and time $t+1$ (backward) in , respectively. We describe the BUFD Gibbs sampling algorithm for DPGDS in Algorithm \[Gibbs\] and provide more details in the Appendix.
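For the stationary case ($\delta_t^{(1)}=\delta^{(1)}$), the Lambert-W expression can be checked against the fixed point $\zeta = \ln(1+\zeta^{(l-1)}+\zeta)$; a small sketch using `scipy.special.lambertw`, with an illustrative value for $\zeta^{(l-1)}$:

```python
import numpy as np
from scipy.special import lambertw

def zeta_stationary(zeta_prev):
    """Solve zeta = ln(1 + zeta_prev + zeta) via the lower branch W_{-1}."""
    return (-lambertw(-np.exp(-1.0 - zeta_prev), k=-1) - 1.0 - zeta_prev).real

zeta0 = 0.5  # e.g. delta^(1) / tau_0, illustrative
z = zeta_stationary(zeta0)
print(z, np.log(1.0 + zeta0 + z))  # the two sides of the fixed point agree
```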
Stochastic gradient MCMC inference {#Scalable Inference}
----------------------------------
Although the proposed BUFD Gibbs sampling algorithm for DPGDS has closed-form update equations, it requires processing all time-varying vectors at each iteration and hence has limited scalability [@Zhang2018WHAI]. To allow for scalable inference, we apply the topic-layer-adaptive stochastic gradient Riemannian (TLASGR) MCMC algorithm described in Cong et al. [@cong2017deep] and Zhang et al. [@Zhang2018WHAI], which can be used to sample simplex-constrained global parameters [@cong2017fast] in a mini-batch based manner. It improves its sampling efficiency via the use of the Fisher information matrix (FIM) [@girolami2011riemann], with adaptive step-sizes for the latent factors and transition matrices of different layers. More specifically, for ${\ensuremath{\boldsymbol{\pi}} }_k^{(l)}$, column $k$ of the transition matrix ${\ensuremath{\boldsymbol{\Pi}} }^{(l)}$ of layer $l$, its sampling can be efficiently realized as $$\begin{aligned}
\label{TLASGR Pi}
\left( {{\ensuremath{\boldsymbol{\pi}} }_k^{(l)}} \right)_{n + 1} \! = & \! \bigg[ \! \left( {{\ensuremath{\boldsymbol{\pi}} }_k^{(l)}} \right)_n \! + \! \frac{\varepsilon _n}{M_k^{(l)}} \! \left[ \left(\rho \tilde {\ensuremath{\boldsymbol{z}} }_{:k{\ensuremath{\boldsymbol{\cdot}}}}^{(l)} \! + \! {\ensuremath{\boldsymbol{\eta}} }_{:k}^{(l)}\right) \! - \! \left(\rho \tilde z_{{\ensuremath{\boldsymbol{\cdot}}}k{\ensuremath{\boldsymbol{\cdot}}}}^{(l)} \! + \! \eta_{{\ensuremath{\boldsymbol{\cdot}}}k}^{(l)} \right) \! \left( {{\ensuremath{\boldsymbol{\pi}} }_k^{(l)}} \right)_n \right] \nonumber \\
& + \mathcal{N} \left( 0, \frac{2 \varepsilon _n}{M_k^{(l)}}\left[ \mbox{diag}({\ensuremath{\boldsymbol{\pi}} }_k^{(l)})_n - ({\ensuremath{\boldsymbol{\pi}} }_k^{(l)})_n ({\ensuremath{\boldsymbol{\pi}} }_k^{(l)})_n^T \right] \right) \bigg]_\angle,\end{aligned}$$ where $M_k^{(l)}$ is calculated using the estimated FIM, both ${\tilde {\ensuremath{\boldsymbol{z}} }_{:k{\ensuremath{\boldsymbol{\cdot}}}}^{\left( l \right)}}$ and ${\tilde z_{{\ensuremath{\boldsymbol{\cdot}}}k{\ensuremath{\boldsymbol{\cdot}}}}^{\left( l \right)}}$ come from the augmented latent counts $Z^{(l)}$, ${\left[ . \right]_\angle }$ denotes a simplex constraint, and ${{\ensuremath{\boldsymbol{\eta}} }_{:k}^{\left( l \right)}}$ denotes the prior of ${{\ensuremath{\boldsymbol{\pi}} }_k^{\left( l \right)}}$. The update of ${\ensuremath{\boldsymbol{\Phi}}}^{(l)}$ is the same as in Cong et al. [@cong2017deep], and all the other global parameters are sampled using SGNHT [@ding2014bayesian]. We provide the details of the SGMCMC for DPGDS in Algorithm \[SGMCMC\] in the Appendix.
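One step of this update can be sketched as follows. This is a schematic only, assuming stand-in values for the preconditioner $M_k^{(l)}$, the mini-batch scale $\rho$, and the augmented counts; it is not the exact TLASGR-MCMC implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

def simplex_project(v, eps=1e-10):
    """The [.]_angle operation: clip to positive and renormalize to the simplex."""
    v = np.maximum(v, eps)
    return v / v.sum()

K = 6
pi_k = simplex_project(rng.gamma(1.0, 1.0, size=K))  # current column of Pi^(l)
z_col = rng.poisson(5.0, size=K).astype(float)       # augmented counts for this column
eta = 0.1 * np.ones(K)                               # Dirichlet prior on pi_k
rho, eps_n, M = 10.0, 1e-3, 50.0                     # batch scale, step size, FIM scalar

# Preconditioned natural-gradient move plus Gaussian noise with covariance
# proportional to diag(pi) - pi pi^T, then re-projection onto the simplex.
mean_step = (rho * z_col + eta) - (rho * z_col.sum() + eta.sum()) * pi_k
cov = np.diag(pi_k) - np.outer(pi_k, pi_k)
noise = rng.multivariate_normal(np.zeros(K), (2.0 * eps_n / M) * cov,
                                check_valid="ignore")
pi_k = simplex_project(pi_k + (eps_n / M) * mean_step + noise)
print(pi_k.sum())  # stays on the simplex
```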
Experiments
===========
In this section, we present experimental results on a synthetic dataset and five real-world datasets. For a fair comparison, we consider PGDS [@Schein2016Poisson], GP-DPFA [@GP-DPFA2015], DTSBN [@gan2015deep], and [GPDM [@wang2006gaussian]]{}, which can be considered a dynamic generalization of the Gaussian process latent variable model of Lawrence [@lawrence2005probabilistic], using the code provided by the authors. Note that, as shown by Schein et al. [@Schein2016Poisson] and Gan et al. [@gan2015deep], PGDS and DTSBN are state-of-the-art count time series modeling algorithms that outperform a wide variety of previously proposed ones, such as LDS [@kalman1963mathematical] and DRFM [@han2014dynamic]. The hyperparameter settings of PGDS, GP-DPFA, [GPDM]{}, TSBN, and DTSBN are the same as their original settings [@Schein2016Poisson; @GP-DPFA2015; @wang2006gaussian; @gan2015deep]. For DPGDS, we set $\tau_0=1,\gamma_0=100,\eta_0=0.1$ and $\epsilon_0=0.1$. We use $[K^{(1)},K^{(2)}, K^{(3)}]=[200,100,50]$ for both DPGDS and DTSBN and $K = 200$ for PGDS, GP-DPFA, GPDM, and TSBN. [For PGDS, GP-DPFA, [GPDM]{}, and DPGDS, we run 2000 Gibbs sampling iterations as burn-in and collect 3000 samples for evaluation. We also use SGMCMC to infer DPGDS, with 5000 collection samples after 5000 burn-in steps, and use 10000 SGMCMC iterations for both TSBN and DTSBN to evaluate their performance.]{}
Synthetic dataset
-----------------
Following the literature [@Sutskever2007Learning; @gan2015deep], we consider sequences of different lengths, including $T=10,50,100,200,300,400,500$ and $600$, and generate 50 synthetic bouncing ball videos for training and 30 for testing. Each video frame is a binary-valued image of size 30 $\times$ 30, describing the locations of three balls within the image. Both TSBN and DTSBN model it with the Bernoulli likelihood, while both PGDS and DPGDS use the Bernoulli-Poisson link [@Zhou2015Infinite].
As shown in Fig. \[bouncing ball\_PE\], the average prediction errors of all algorithms decrease as the training sequence length increases. A higher-order TSBN, TSBN-4, performs much better than the first-order TSBN does, suggesting that using high-order messages can help TSBN better pass useful information. As discussed above, since a deep structure provides a natural way to propagate high-order information for prediction, it is not surprising that DTSBN and DPGDS, both multi-layer models, exhibit superior performance. Moreover, it is clear that the proposed DPGDS consistently outperforms DTSBN under all settings. Another advantage of DPGDS is that its inferred deep latent structure often has a meaningful interpretation. As shown in Fig. \[fig: bouncing balls\_component\], for the bouncing ball data, the inferred factors at layer one represent points or pixels, those at layer two cover larger spatially contiguous regions, some of which exhibit the shape of a single bouncing ball, and those at layer three are able to capture multiple bouncing balls. In addition, we show in Appendix \[BB\_one\_step\] the one-step prediction frames of different models.
Real-world datasets
-------------------
\[Tab:results Top M\]
Besides the binary-valued synthetic bouncing ball dataset, we quantitatively and qualitatively evaluate all algorithms on the following real-world datasets used in Schein et al. [@Schein2016Poisson]. The State-of-the-Union (SOTU) dataset consists of the text of the annual SOTU speech transcripts from 1790 to 2014. The Global Database of Events, Language, and Tone (GDELT) and Integrated Crisis Early Warning System (ICEWS) are both datasets for international relations extracted from news corpora. [Note that ICEWS consists of undirected pairs, while GDELT consists of directed pairs of countries.]{} The NIPS corpus contains the text of every NIPS conference paper from 1987 to 2003. The DBLP corpus is a database of computer science research papers. Each of these datasets is summarized as a $V\times T$ count matrix, as shown in Tab. \[Tab:results Top M\]. Unless specified otherwise, we choose the top 1000 most frequently used terms to form the vocabulary, which means we set $V=1000$ for all real-data experiments.
### Quantitative comparison
For a fair and comprehensive comparison, we calculate the precision and recall at top-$M$ [@gopalan2014bayesian; @gan2015deep; @han2014dynamic; @GP-DPFA2015], defined respectively as the fraction of the top-$M$ words that match the true ranking of the words and the fraction that appear in the top-$M$ ranking, with $M = 50$. We also use the Mean Precision (MP) and Mean Recall (MR) over all the years appearing in the training set to evaluate different models. As another criterion, the Predictive Precision (PP) shows the predictive precision for the final year, for which all the observations are held out. Similar to previous methods [@gan2015deep; @GP-DPFA2015], for each corpus, the entire data of the last year is held out, and for the documents in the previous years we randomly partition the words of each document into 80%/20% in each trial, and we conduct five random trials to report the sample mean and standard deviation. Note that to apply GPDM, we have used the Anscombe transform [@Anscombe] to preprocess the count data to mitigate the mismatch between the data and the model assumption. The results on all five datasets are summarized in Tab. \[Tab:results Top M\], which clearly show that the proposed DPGDS has achieved the best performance on most of the evaluation criteria, and again a deep model often improves its performance by increasing its number of layers. To add more empirical study
on scalability, we have also tested the efficiency of our model on a GDELT dataset (from 2001 to 2005, temporal granularity of 24 hrs, with a total of 1825 time points), which is not too large so that we can still run DPGDS-Gibbs and GPDM. As shown in Fig. \[fig:SGMCMC\_Results\], we present how various algorithms progress over time, evaluated with MP. It takes about 1000s for DTSBN and DPGDS-SGMCMC to converge, 3.5 hrs for DPGDS-Gibbs, and 5 hrs for GPDM. Clearly, DPGDS-SGMCMC is scalable and outperforms both DTSBN and GPDM. We also present in Appendix \[sec:last\] the results of DPGDS-SGMCMC on a very long time series, on which it becomes too expensive to run a batch learning algorithm.
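The precision-at-top-$M$ criterion used above can be sketched as follows; the scores and the helper name are illustrative, not the authors' evaluation code:

```python
import numpy as np

def precision_at_M(pred_scores, true_scores, M=50):
    """Overlap between the predicted and true top-M word lists, divided by M."""
    pred_top = set(np.argsort(pred_scores)[::-1][:M])
    true_top = set(np.argsort(true_scores)[::-1][:M])
    return len(pred_top & true_top) / M

rng = np.random.default_rng(5)
true_counts = rng.gamma(1.0, 1.0, size=1000)            # held-out word counts
good_pred = true_counts + 0.05 * rng.normal(size=1000)  # an accurate reconstruction
bad_pred = rng.gamma(1.0, 1.0, size=1000)               # an unrelated one

print(precision_at_M(good_pred, true_counts),
      precision_at_M(bad_pred, true_counts))
```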
### Exploratory data analysis
Compared to previously proposed dynamic systems, the proposed DPGDS, whose inferred latent structure is simple to visualize, provides much richer interpretation. More specifically, we may not only exhibit the content of each factor (topic), but also explore both the hierarchical relationships between them at different layers and the temporal relationships between them at the same layer. Based on the results inferred on ICEWS 2001-2003 via a three-hidden-layer DPGDS with layer sizes 200-100-50, we show in Fig. \[fig:laten factors and dictionary icews 2001-2003\] how some example topics are hierarchically and temporally related to each other, and how their corresponding latent representations evolve over time.
In Fig. \[fig:icews\_dic\], we select two large-weighted topics at the top hidden layer and move down the network to include any lower-layer topics that are connected to them with sufficiently large weights. For each topic, [we list all its terms whose values are larger than 1% of the largest element of the topic.]{} It is interesting to note that topic 2 at layer three is connected to three topics at layer two, which are characterized mainly by the interactions of Israel (ISR)-Palestinian Territory (PSE), Iraq (IRQ)-USA-Iran (IRN), and North Korea (PRK)-South Korea (KOR)-USA-China (CHN)-Japan (JPN), respectively. The activation strength of one of these three interactions, known to be dominant in general during 2001-2003, can be attributed not only to a large activation of topic 2 at layer three, but also to a large activation of some other topic of the same layer (layer two) at the previous time. For example, topic 41 of layer two on “ISR-PSE, IND-PAK, RUS-UKR, GEO-RUS, AFG-PAK, SYR-USA, MNE-SRB” could be associated with the activation of topic 46 of layer two on “IND-PAK, RUS-TUR, ISR-PSE, BLR-RUS” at the previous time; and topic 99 of layer two on “PRK-KOR, JPN-USA, CHN-USA, CHN-KOR, CHN-JPN, USA-RUS” could be associated with the activation of topic 63 of layer two on “IRN-USA, CHN-USA, AUS-CHN, CHN-KOR” at the previous time.
Another instructive observation is that topic 140 of layer one on “IRQ-USA, IRQ-GBR, IRN-IRQ, IRQ-KWT, AUS-IRQ” is related not only in hierarchy to topic 34 of the higher layer on “IRQ-USA, IRQ-GBR, GBR-USA, IRQ-KWT, IRN-IRQ, SYR-USA,” but also in time to topic 166 of the same layer on “ESP-USA, ESP-GBR, FRA-GBR, POR-USA,” which are interactions between the member states of the North Atlantic Treaty Organization (NATO). Based on the transitions from topic 13 on “PRK-KOR” to both topic 140 on “IRQ-USA” and 77 on “ISR-PSE,” we can find that the ongoing Iraq war and Israeli–Palestinian relations regain attention after the six-party talks [@Schein2016Poisson].
To gain insight into the benefits of the deep structure, Fig. \[fig:icews\_theta\] shows how the latent representations of several representative topics evolve over days. It is clear that, relative to the temporal factor trajectories at the bottom layer, which are specific to the bilateral interactions between two countries, those from higher layers vary more smoothly, and their corresponding high-layer topics capture the multilateral interactions between multiple closely related countries. Similar phenomena have also been demonstrated in Fig. \[fig:gdelt\_example\] on GDELT2003. Moreover, we find that a spike of the temporal trajectory of topic 166 (NATO) appears right before one of topic 140 (Iraq war), matching the above description in Fig. \[fig:icews\_dic\]. Also, topic 14 of layer three and its descendants, including topic 23 of layer two and topic 48 of layer one, are mainly about a breakthrough between RUS and Azerbaijan (AZE), coinciding with Putin’s visit in January 2001. Additional example results for the topics and their hierarchical and temporal relationships, inferred by DPGDS on different datasets, are provided in the Appendix.
In Fig. \[fig:transition matrix from ICEWS 200123\], we also present a subset of the transition matrix ${\ensuremath{\boldsymbol{\Pi}} }^{(l)}$ in each layer, corresponding to the top ten topics, some of which have been displayed in Fig. \[fig:icews\_theta\]. The transition matrix ${\ensuremath{\boldsymbol{\Pi}} }^{(l)}$ captures the cross-topic temporal dependence at layer $l$. From Fig. \[fig:transition matrix from ICEWS 200123\], besides the temporal transitions between the topics at the same layer, we can also see that as the layer index $l$ increases, the transition matrix ${\ensuremath{\boldsymbol{\Pi}} }^{(l)}$ more closely approaches a diagonal matrix, meaning that the factors become more likely to transit to themselves. This matches the characteristic of DPGDS that the topics in higher layers cover longer-range temporal dependencies and contain more general information, as shown in Fig. \[fig:icews\_dic\]. With both the hierarchical connections between layers and the dynamic transitions at the same layer, distinct from the shallow PGDS, DPGDS is equipped with a larger capacity to model diverse temporal patterns with the help of its deep structure.
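The "more diagonal with depth" observation can be quantified by the mean diagonal entry of each column-stochastic transition matrix; the toy sketch below constructs matrices that mimic the inferred trend (layer widths and self-weights are made up, not inferred values):

```python
import numpy as np

rng = np.random.default_rng(6)

def diagonal_mass(Pi):
    """Average self-transition weight of a column-stochastic transition matrix."""
    return float(np.mean(np.diag(Pi)))

def toy_transition(K, self_weight, rng):
    """A K x K column-stochastic matrix with extra mass on the diagonal."""
    Pi = rng.gamma(1.0, 1.0, size=(K, K)) + self_weight * np.eye(K)
    return Pi / Pi.sum(axis=0, keepdims=True)

# Mimic the inferred trend: higher layers concentrate mass on the diagonal.
Pis = [toy_transition(K, w, rng) for K, w in [(200, 1.0), (100, 5.0), (50, 20.0)]]
print([round(diagonal_mass(Pi), 3) for Pi in Pis])  # increasing with depth
```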
Conclusions
===========
We propose deep Poisson gamma dynamical systems (DPGDS) that take advantage of a probabilistic deep hierarchical structure to efficiently capture both across-layer and temporal dependencies. The inferred latent structure provides rich interpretation for both hierarchical and temporal information propagation. For Bayesian inference, we develop both a backward-upward–forward-downward Gibbs sampler and a stochastic gradient MCMC (SGMCMC) algorithm that is scalable to long multivariate count/binary time series. Experimental results on a variety of datasets show that DPGDS not only exhibits excellent predictive performance, but also provides highly interpretable latent structure.
### Acknowledgements {#acknowledgements .unnumbered}
D. Guo, B. Chen, and H. Zhang acknowledge the support of the Program for Young Thousand Talent by Chinese Central Government, the 111 Project (No. B18039), NSFC (61771361), NSFC for Distinguished Young Scholars (61525105) and the Innovation Fund of Xidian University. M. Zhou acknowledges the support of Award IIS-1812699 from the U.S. National Science Foundation.
[**Supplementary material for deep Poisson gamma dynamical systems**]{}
Dandan Guo, Bo Chen, Hao Zhang, and Mingyuan Zhou
Details of inference via Gibbs sampling for DPGDS
=================================================
Inference for the DPGDS shown in (\[DPGDS\]) is challenging, as neither the conjugate prior nor a closed-form maximum likelihood estimate is known for the shape parameter of a gamma distribution. Although seemingly difficult, by generalizing the data augmentation and marginalization techniques, we are able to derive a backward-upward and then forward-downward Gibbs sampling algorithm, making it simple to draw random samples to represent the posteriors of model parameters. We marginalize over $\Theta^{(1:L)}$ by performing “backward” and “upward” filters, starting with ${\ensuremath{\boldsymbol{\theta}} }^{(1)}_T$. We repeatedly exploit the following three properties:
[**[Property 1 (P1)]{}**]{}: if $ {y_{{\ensuremath{\boldsymbol{\cdot}}}}}= \sum\nolimits_{n = 1}^N {y_n} $, where ${y_n}\sim \textrm{Pois}({\theta_n})$ are independent Poisson-distributed random variables, then $\left( {{y_1},...,{y_N}} \right)\sim \textrm{Multi}\left( {{y_{{\ensuremath{\boldsymbol{\cdot}}}}};\frac{{{\theta_1}}}{{\sum\nolimits_{n = 1}^N {{\theta _n}} }},...,\frac{{{\theta_N}}}{{\sum\nolimits_{n = 1}^N {{\theta _n}} }}} \right)$ and ${y_{{\ensuremath{\boldsymbol{\cdot}}}}} \sim \textrm{Pois}(\sum\nolimits_{n = 1}^N {{\theta _n}} )$ [@dunson2005bayesian; @zhou2012beta].
[**[Property 2 (P2)]{}**]{}: if $y \sim \mbox{Pois}(c \theta )$, where $c$ is a constant and $\theta \sim \textrm{Gam}\left( {a,b} \right)$, then $y \sim \textrm{NB}\left( {a,\frac{c}{{c + b}}} \right)$ is a negative binomial–distributed random variable. We can equivalently parameterize it as $y \sim \textrm{NB} \left( {a,g\left( \zeta \right)} \right)$, where $g\left( \zeta \right) = 1 - \exp \left( { - \zeta } \right)$ is the Bernoulli–Poisson link [@Zhou2015Infinite] and $\zeta = \ln \left( {1 + \frac{c}{b}} \right)$.
[**[Property 3 (P3)]{}**]{}: if $y \sim \textrm{NB} \left( {a,g\left( \zeta \right)} \right)$ and $l \sim \textrm{CRT} \left( {y,a} \right)$ is a Chinese restaurant table-distributed random variable, then $y$ and $l$ are equivalently jointly distributed as $y \sim \textrm{SumLog}\left( {l,g\left( \zeta \right)} \right)$ and $l \sim \mbox{Pois}\left( {a\zeta } \right)$ [@zhou2015negative].
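A direct sampler for the CRT distribution used in [**[P3]{}**]{} is the sequential Bernoulli ("new table") construction; the sketch below checks it against the exact mean $\sum_{i=1}^{y} a/(a+i-1)$, with illustrative $y$ and $a$:

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_crt(y, a, rng):
    """l ~ CRT(y, a): number of tables occupied by y customers, concentration a."""
    if y == 0:
        return 0
    i = np.arange(1, y + 1)
    # Customer i opens a new table with probability a / (a + i - 1).
    return int((rng.random(y) < a / (a + i - 1.0)).sum())

def crt_mean(y, a):
    """Exact expectation of CRT(y, a)."""
    i = np.arange(1, y + 1)
    return float(np.sum(a / (a + i - 1.0)))

y, a = 30, 2.5
draws = [sample_crt(y, a, rng) for _ in range(20_000)]
print(np.mean(draws), crt_mean(y, a))  # close
```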
Forward-downward sampling
-------------------------
**Sampling transition matrix ${\ensuremath{\boldsymbol{\Pi}} }^{(l)}$:** The alternative model specification, with $\Theta$ marginalized out, assumes that $\left( {Z_{1kt}^{( {l} )},...,Z_{{K_l},k,t}^{\left( {l} \right)}} \right) \sim \textrm{Multi} \left( {x_{kt}^{( {l + 1,l} )},\left( {\pi_{{1}{k}}^{( l )},...,\pi _{{K_l}{k}}^{( l )}} \right)} \right)$. Therefore, via the Dirichlet-multinomial conjugacy, we have $$\label{Sample Pi}
( {{\ensuremath{\boldsymbol{\pi}} }_k^{( l )}| - } ) \sim \textrm{Dir}( {\nu_1^{( l )}\nu_k^{( l )} + Z_{1k{\ensuremath{\boldsymbol{\cdot}}}}^{( {l} )},\ldots,\xi^{( l )}\nu_k^{( l )} + Z_{kk{\ensuremath{\boldsymbol{\cdot}}}}^{( {l} )},\ldots,{\nu_{K_l}^{( l )}\nu_k^{( l )}} + Z_{{K_l}{k} {\ensuremath{\boldsymbol{\cdot}}}}^{( {l} )}} )\,\,.
**Sampling loading factor matrix ${\ensuremath{\boldsymbol{\Phi}}}^{(l)}$:** Given these latent counts, via the Dirichlet-multinomial conjugacy, we have $$\label{Sample Phi}
( {{\ensuremath{\boldsymbol{\phi}} }_k^{( l )}| - } ) \sim \textrm{Dir}( {{\eta ^{( l )}} + A_{1k {\ensuremath{\boldsymbol{\cdot}}}}^{( l )},...,{\eta ^{( l )}} + A_{{K_{l-1}}k {\ensuremath{\boldsymbol{\cdot}}}}^{( l )}} )\,\,.$$
**Sampling $\delta_t^{(1)}$:** Via the gamma-Poisson conjugacy, we have $$\label{Sample delta1}
( {\delta_t^{(1)}| - } ) \sim \mbox{Gam}\left( {{\varepsilon _0} + \sum\limits_{v = 1}^V {x_{vt}^{( 1 )}} ,{\varepsilon _0} + \sum\limits_{k = 1}^{{K_1}} {\theta _{kt}^{( 1 )}} } \right), \mbox{ if }\delta_t^{(1)}\neq \delta_{t'}^{(1)} \mbox{ for } t\neq t';$$ $$\label{Sample delta2}
( {\delta^{(1)}| - } ) \sim \mbox{Gam}\left( {{\varepsilon _0} + \sum\limits_{t = 1}^T {\sum\limits_{v = 1}^V {x_{vt}^{( 1 )}} } ,{\varepsilon _0} + \sum\limits_{t = 1}^T {\sum\limits_{k = 1}^{{K_1}} {\theta _{kt}^{( 1 )}} } } \right), \mbox{ if }\delta_t^{(1)}= \delta^{(1)} \mbox{ for all } t.$$
**Sampling $\beta ^{( l )}$:** $$\label{Sample beta}
( {{\beta ^{( l )}}| - } ) \sim \mbox{Gam}\left( {{\varepsilon _0} + {\gamma _0},{\varepsilon _0} + \sum\limits_{k = 1}^{{K_l}} {\nu_k^{( l )}} } \right)$$
**Sampling $\nu_k^{(l)}$ and $\xi^{(l)}$:**
$$\label{Multi_Pi_Theta2}
( {Z_{k1t}^{( {l} )},...,Z_{k{K_l}t}^{( {l} )}} {\,|\,}-)\sim \textrm{Multi}\left( {x_{kt}^{( {l+1,l} )};\frac{{\pi_{k1}^{( l )}\theta _{{1},{t - 1}}^{( l )}}}{{\sum\nolimits_{{k_l} = 1}^{{K_l}} {\pi _{{k}{k_l}}^{( l )}\theta _{{k_l},{t - 1}}^{( l )}} }},...,\frac{{\pi _{k{K_l}}^{( l )}\theta _{{K_l},t - 1}^{( l )}}}{{\sum\nolimits_{{k_l} = 1}^{{K_l}} {\pi _{k{k_l}}^{( l )}\theta _{{k_l},t - 1}^{( l )}} }}} \right),$$
To obtain closed-form conditional posteriors for $\nu_k^{(l)}$ and $\xi^{(l)}$, we start with $$\label{sample_l}
({Z_{1 k {\ensuremath{\boldsymbol{\cdot}}}}^{(l)}},\cdots,{Z_{k k {\ensuremath{\boldsymbol{\cdot}}}}^{(l)}},\cdots,{Z_{K_l k {\ensuremath{\boldsymbol{\cdot}}}}^{(l)}}) \sim \mbox{DirMult} ({Z_{{\ensuremath{\boldsymbol{\cdot}}}k {\ensuremath{\boldsymbol{\cdot}}}}^{(l)}}, (\nu_1^{(l)} \nu_k^{(l)}, \cdots, \xi^{(l)} \nu_k^{(l)}, \cdots, \nu_{K_l}^{(l)} \nu_k^{(l)}) ),$$ where ${Z_{k_1 k {\ensuremath{\boldsymbol{\cdot}}}}^{(l)}} = \sum_{t=1}^{T} {Z_{k_1 k t}^{(l)}}$ and $
{Z_{{\ensuremath{\boldsymbol{\cdot}}}k {\ensuremath{\boldsymbol{\cdot}}}}^{(l)}}= \sum_{t=1}^{T} \sum_{k_1=1}^{K_l} {Z_{k_1 k t}^{(l)}}$. Following Zhou [@zhou_bayesian], we draw a beta-distributed auxiliary variable: $$\label{q}
(q_k^{(l)} {\,|\,}-)\sim \mbox{Beta} ({Z_{{\ensuremath{\boldsymbol{\cdot}}}k {\ensuremath{\boldsymbol{\cdot}}}}^{(l)}}, \nu_k^{(l)} (\xi^{(l)} + \sum\nolimits_{k_1\neq k}\nu_{k_1}^{(l)}) ).$$ Consequently, we have $$\label{l_kk}
P( {Z_{k k {\ensuremath{\boldsymbol{\cdot}}}}^{(l)}}, q_k^{(l)}) \propto \mbox{NB} ({Z_{k k {\ensuremath{\boldsymbol{\cdot}}}}^{(l)}};\xi^{(l)} \nu_k^{(l)}, q_k^{(l)}) \quad \mbox{and} \quad P({Z_{k_1 k {\ensuremath{\boldsymbol{\cdot}}}}^{(l)}}, q_k^{(l)}) \propto \mbox{NB}({Z_{k_1 k {\ensuremath{\boldsymbol{\cdot}}}}^{(l)}};\nu_{k_1}^{(l)} \nu_k^{(l)},q_k^{(l)})
(h_{kk}^{(l)} {\,|\,}-)\sim \mbox{CRT} ({Z_{k k {\ensuremath{\boldsymbol{\cdot}}}}^{(l)}}, \xi^{(l)} \nu_k^{(l)}) \quad \mbox{and} \quad (h_{k_1 k}^{(l)} {\,|\,}-)
\sim \mbox{CRT} ({Z_{k_1 k {\ensuremath{\boldsymbol{\cdot}}}}^{(l)}}, \nu_{k_1}^{(l)} \nu_k^{(l)} )$$ for $k_1 \neq k$. We can then re-express the joint distributions over the variables in and as $$\label{l_kk2}
{Z_{k k {\ensuremath{\boldsymbol{\cdot}}}}^{(l)}} \sim \mbox{SumLog} (h_{kk}^{(l)}, q_k^{(l)}) \quad \mbox{and} \quad {Z_{k_1 k {\ensuremath{\boldsymbol{\cdot}}}}^{(l)}} \sim \mbox{SumLog} (h_{k_1 k}^{(l)}, q_k^{(l)})$$ and $$\label{h_kk2}
h_{kk}^{(l)} \sim \mbox{Pois} (-\xi^{(l)} \nu_k^{(l)} \ln (1-q_k^{(l)})) \quad \mbox{and} \quad h_{k_1 k}^{(l)} \sim \mbox{Pois} (-\nu_{k_1}^{(l)} \nu_k^{(l)} \ln (1-q_k^{(l)})).$$ Then, via the gamma-Poisson conjugacy, we have $$\label{sample_xi}
(\xi^{(l)}|-) \sim \mbox{Gam}\left (\frac{\gamma_0}{K_l}+\sum_{k=1}^{K_l} h_{kk}^{(l)},~ \beta^{(l)} - \sum_{k=1}^{K_l} \nu_k^{(l)} \ln (1-q_k^{(l)})\right).$$
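The auxiliary-variable steps above can be sketched numerically as follows (a minimal sketch assuming numpy; the CRT draw uses the standard Bernoulli representation, the final line mirrors the gamma-Poisson conjugate update, and all names and numbers are illustrative):

```python
import numpy as np

def sample_crt(n, r, rng):
    """Draw h ~ CRT(n, r): h = sum_{i=1}^{n} Bernoulli(r / (r + i - 1)),
    the number of occupied tables for n customers with concentration r."""
    if n == 0:
        return 0
    p_new_table = r / (r + np.arange(n))  # customer i opens a table w.p. r/(r+i-1)
    return int((rng.random(n) < p_new_table).sum())

rng = np.random.default_rng(1)
h_kk = sample_crt(40, 1.5, rng)           # auxiliary table count, as in the CRT draws
q_k = rng.beta(40.0, 5.0)                 # beta-distributed auxiliary variable
# gamma-Poisson conjugacy: h_kk ~ Pois(xi * nu_k * (-log(1 - q_k))) with a
# Gam(a0, b0) prior on xi gives a gamma conditional posterior:
a0, b0, nu_k = 0.1, 1.0, 0.8
xi = rng.gamma(a0 + h_kk, 1.0 / (b0 - nu_k * np.log(1.0 - q_k)))
```

Since the first customer always opens a table, the CRT draw is at least 1 whenever $n \ge 1$, and the gamma rate stays positive because $-\ln(1-q) > 0$.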
Note that when $l=L$ and $t=1$, we have $\theta _{k1}^{(L)} \sim \mbox{Gam}\left(\tau_0 \nu_k^{(L)} , \tau_0 \right)$ and $m_{k 1}^{( L )}\sim \textrm{Pois}\left( {{\tau _0}( {\zeta_2^{( L )} + \zeta_{1}^{( L-1 )}} )\theta _{k1}^{( L )}} \right)$, where $m_{k1}^{\left( 1 \right)} = A_{{\ensuremath{\boldsymbol{\cdot}}}k1}^{(1)} + Z_{{\ensuremath{\boldsymbol{\cdot}}}k2}^{\left( 1 \right)}$. So we can sample $(x_{k1}^{( {L+1 } )}{\,|\,}-)\sim \textrm{CRT}( {m_{k1}^{(L)},{\tau_0}}\nu_k^{(L)} )$. Via [**[P3]{}**]{}, we can further get $x_{k1}^{(L+1)}\sim \textrm{Pois}( {\zeta _1^{( L)}{\tau _0}\nu_k^{(L)}} )$.
Next, because $x_{k1}^{(L+1)}$ also depends on $\nu_k^{(L)}$, we introduce $$\label{n_k}
n_k^{(l)} = h_{kk}^{(l)}+\sum _{k_1 \neq k} h_{k_1 k}^{(l)} + \sum _{k_2 \neq k} h_{k k_2}^{(l)}$$ for $l=1,\ldots, L-1$ and $$\label{n_k}
n_k^{(L)} = h_{kk}^{(L)}+\sum _{k_1 \neq k} h_{k_1 k}^{(L)} + \sum _{k_2 \neq k} h_{k k_2}^{(L)} + x_{k1}^{(L+1)}.$$
Then, via [**[P1]{}**]{}, we have $$\label{n_k2}
n_k^{(l)} \sim \mbox{Pois} (\nu_k^{(l)} \rho_k^{(l)}),$$ where $$\label{rhok}
\rho_k^{(l)} = -\ln (1-q_k^{(l)}) (\xi^{(l)} + \sum_{k_1 \neq k} \nu_{k_1}^{(l)} ) - \sum_{k_2 \neq k} \ln (1-q_{k_2}^{(l)}) \nu_{k_2}^{(l)}$$ for $l=1,\ldots, L-1$ and $$\label{rhok}
\rho_k^{(L)} = -\ln (1-q_k^{(L)}) (\xi^{(L)} + \sum_{k_1 \neq k} \nu_{k_1}^{(L)} ) - \sum_{k_2 \neq k} \ln (1-q_{k_2}^{(L)}) \nu_{k_2}^{(L)} + \zeta_1^{(L)} \tau_0.$$ Finally, via the gamma-Poisson conjugacy, we have $$\label{v_k}
(\nu_k^{(l)}|-) \sim \mbox{Gam}\left (\frac{\gamma_0}{K_l} + n_k^{(l)}, \beta^{(l)} + \rho_k^{(l)}\right).$$
*$\setminus$ $\star$ Collect local information*\
Backward-upward Gibbs sampling for $\{A_{vkt}^{(l)}\}_{v,k,t}$; $\{x_{kt}^{( l+1 )}\}_{k,t}$; $\{x_{kt}^{( l+1 , l)}\}_{k,t}$ ; $\{x_{kt}^{( l+1 , l + 1)}\}_{k,t}$; $\{Z_{k_{1}{k_{2}}t}^{( {l} )}\}_{k_{1},k_{2},t}$ with -;
Backward-upward calculation of $\{\zeta_{t}^{( l)}\}_t $;
Forward-downward Gibbs sampling for $\{{\ensuremath{\boldsymbol{\theta}} }_{t}^{( l )}\}_{t}$ with ;
Sampling ${\ensuremath{\boldsymbol{\delta}} }^{(1)}$ with or ;
*$\setminus$ $\star$ Update global parameters*\
Update $\{{\ensuremath{\boldsymbol{\pi}} }_{k}^{( l)}\}_k$ from ; Update $\{{\ensuremath{\boldsymbol{\phi}} }_{k}^{( l)}\}_k$ from ; Update $\beta^{(l)},\xi^{(l)},\{\nu_k^{(l)}\}_k$ according to , , and ;\
\[Gibbs\]
SGMCMC for DPGDS
----------------
Although the Gibbs sampling algorithm for DPGDS has the closed-form update equations discussed above, it requires handling all time-varying vectors in each iteration and hence has limited scalability [@Zhang2018WHAI]. To allow for tractable and scalable inference, in Section \[Scalable Inference\], we propose an SGMCMC method to infer DPGDS, using TLASGR-MCMC [@cong2017deep] to update $\{{\ensuremath{\boldsymbol{\Pi}} }^{(l)}\}_{l=1}^L$. In this section, we discuss in detail how to update the other global parameters, as described in Algorithm \[SGMCMC\].
[**[Sample the transition matrix $\{{\ensuremath{\boldsymbol{\Pi}} }^{(l)}\}_{l=1}^L$]{}:**]{} $$\begin{aligned}
\label{TLASGR update_Pi}
\left( {{\ensuremath{\boldsymbol{\pi}} }_k^{(l)}} \right)_{n + 1} \! = & \! \bigg[ \! \left( {{\ensuremath{\boldsymbol{\pi}} }_k^{(l)}} \right)_n \! + \! \frac{\varepsilon _n}{M_k^{(l)}} \! \left[ \left(\rho \tilde {\ensuremath{\boldsymbol{z}} }_{:k{\ensuremath{\boldsymbol{\cdot}}}}^{(l)} \! + \! {\ensuremath{\boldsymbol{\eta}} }_{:k}^{(l)}\right) \! - \! \left(\rho \tilde z_{{\ensuremath{\boldsymbol{\cdot}}}k{\ensuremath{\boldsymbol{\cdot}}}}^{(l)} \! + \! \eta_{{\ensuremath{\boldsymbol{\cdot}}}k}^{(l)} \right) \! \left( {{\ensuremath{\boldsymbol{\pi}} }_k^{(l)}} \right)_n \right] \nonumber \\
& + \mathcal{N} \left( 0, \frac{2 \varepsilon _n}{M_k^{(l)}}\left[ \mbox{diag}({\ensuremath{\boldsymbol{\pi}} }_k^{(l)})_n - ({\ensuremath{\boldsymbol{\pi}} }_k^{(l)})_n ({\ensuremath{\boldsymbol{\pi}} }_k^{(l)})_n^T \right] \right) \bigg]_\angle.\end{aligned}$$
[**[Sample the hierarchical topics $\{{\ensuremath{\boldsymbol{\Phi}}}^{(l)}\}_{l=1}^L$: ]{}**]{} In DPGDS, the prior and likelihood of $\{{\ensuremath{\boldsymbol{\Phi}}}^{(l)}\}_{l=1}^L$ resemble those for $\{{\ensuremath{\boldsymbol{\Pi}} }^{(l)}\}_{l=1}^L$, so we also apply the TLASGR-MCMC sampling algorithm to them as $$\begin{aligned}
\label{TLASGR update_Phi}
\left( {{\ensuremath{\boldsymbol{\phi}} }_k^{(l)}} \right)_{n + 1} \! = & \! \bigg[ \! \left( {{\ensuremath{\boldsymbol{\phi}} }_k^{(l)}} \right)_n \! + \! \frac{\varepsilon _n}{P_k^{(l)}} \! \left[ \left(\rho \tilde {\ensuremath{\boldsymbol{A}} }_{:k{\ensuremath{\boldsymbol{\cdot}}}}^{(l)} \! + \! \eta_{0}^{(l)}\right) \! - \! \left(\rho \tilde A_{{\ensuremath{\boldsymbol{\cdot}}}k{\ensuremath{\boldsymbol{\cdot}}}}^{(l)} \! + \! K_{l-1}\eta_{0}^{(l)} \right) \! \left( {{\ensuremath{\boldsymbol{\phi}} }_k^{(l)}} \right)_n \right] \nonumber \\
& + \mathcal{N} \left( 0, \frac{2 \varepsilon _n}{P_k^{(l)}}\left[ \mbox{diag}({\ensuremath{\boldsymbol{\phi}} }_k^{(l)})_n - ({\ensuremath{\boldsymbol{\phi}} }_k^{(l)})_n ({\ensuremath{\boldsymbol{\phi}} }_k^{(l)})_n^T \right] \right) \bigg]_\angle,\end{aligned}$$ where $M_k^{(l)}$ and $P_k^{(l)}$ are calculated using the estimated FIM, $\tilde{{\ensuremath{\boldsymbol{z}} }}_{:k\cdot}$, $\tilde{z}_{\cdot k\cdot}^{(l)}$, $\tilde{{\ensuremath{\boldsymbol{A}} }}_{:k\cdot}^{(l)}$, and $\tilde{A}_{\cdot k\cdot}^{(l)}$ come from the augmented latent counts ${\ensuremath{{\bf Z}} }^{(l)}$ and ${\ensuremath{{\bf A}}}^{(l)}$, ${{\ensuremath{\boldsymbol{\eta}} }_{:k}^{\left( l \right)}}$ and $\eta_{0}^{(l)}$ denote the prior of ${{\ensuremath{\boldsymbol{\pi}} }_k^{( l )}}$ and ${{\ensuremath{\boldsymbol{\phi}} }_k^{( l )}}$, and $[\cdot]_\angle$ denotes a simplex constraint; more details about TLASGR-MCMC for DLDA can be found in Cong et al. [@cong2017deep].
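As a concrete illustration, one step of this kind of preconditioned update on the simplex might be sketched as follows (a minimal sketch assuming numpy; the clip-and-renormalize projection is a crude stand-in for the $[\cdot]_\angle$ operator, the noise uses a diagonal approximation of $\mbox{diag}({\ensuremath{\boldsymbol{\pi}} }_k) - {\ensuremath{\boldsymbol{\pi}} }_k {\ensuremath{\boldsymbol{\pi}} }_k^T$, and all names are illustrative — the exact operator and FIM-based step sizes follow Cong et al. [@cong2017deep]):

```python
import numpy as np

def simplex_proj(v, eps=1e-12):
    """Crude stand-in for the simplex constraint: clip, then renormalize."""
    v = np.maximum(v, eps)
    return v / v.sum()

def tlasgr_step(pi_k, zcol, ztot, eta_col, eta_tot, M_k, eps_n, rho, rng):
    """One preconditioned stochastic-gradient MCMC step for a simplex
    column pi_k, following the shape of the update above."""
    drift = (eps_n / M_k) * ((rho * zcol + eta_col)
                             - (rho * ztot + eta_tot) * pi_k)
    # diagonal sketch of the stated covariance diag(pi) - pi pi^T
    noise = rng.normal(size=pi_k.size) * np.sqrt(2.0 * eps_n / M_k * pi_k)
    return simplex_proj(pi_k + drift + noise)

rng = np.random.default_rng(2)
pi_k = np.full(4, 0.25)
zcol = np.array([6.0, 1.0, 2.0, 0.0])   # minibatch latent counts for this column
pi_k = tlasgr_step(pi_k, zcol, zcol.sum(), 0.1, 0.4, 50.0, 0.01, 1.0, rng)
```

After projection, the column remains a valid point on the probability simplex.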
For the other global variables $\Lambda_g$, comprising $\{\xi^{(l)}\}_{l=1}^{L}$ and $\{\nu_k^{(l)}\}_{l=1,k=1}^{L,K_l}$ (the hyper-parameter $\{\beta^{(l)}\}_{l=1}^{L}$ is set to 1 here), we find that a first-order SGMCMC method is sufficient to sample them. Considering both efficiency and performance, we use the stochastic gradient Nosé-Hoover thermostat (SGNHT) to update all these variables, which has the potential advantage of helping the system escape local modes more easily and reach the equilibrium state faster. Specifically, the dynamics are defined by the following stochastic differential equations: $$\begin{aligned}
\label{SDE}
& d \Lambda_g = {\ensuremath{\boldsymbol{p}} }\,dt, \quad d {\ensuremath{\boldsymbol{p}} }= \left({{\ensuremath{\boldsymbol{f}} }}(\Lambda_g) - \tau {\ensuremath{\boldsymbol{p}} }\right) dt + \sqrt{2A} \mathcal{N}(0,dt) \\
& d \tau = \left(\frac{1}{n} {\ensuremath{\boldsymbol{p}} }^T {\ensuremath{\boldsymbol{p}} }- 1\right) dt,\end{aligned}$$ where ${\ensuremath{\boldsymbol{p}} }$ simulates the momentum of the system and $\tau$ is the thermostat variable, which keeps the system temperature constant. The stochastic force ${{\ensuremath{\boldsymbol{f}} }}(\Lambda_g) = -\nabla_{\Lambda_g} {U}(\Lambda_g)$, where ${U}(\Lambda_g)$ is the negative log-posterior of the model, is calculated on a mini-batch subset of the data or of the other global parameters. Note that given appropriate initial values of $\Lambda_g$, $\tau$, ${\ensuremath{\boldsymbol{p}} }$, and $A$, one only needs to calculate ${{\ensuremath{\boldsymbol{f}} }}(\Lambda_g)$ to update $\Lambda_g$, as given below.
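A minimal Euler discretization of these dynamics can be sketched as follows (assuming numpy; the quadratic toy target and all names are illustrative, not from the source):

```python
import numpy as np

def sgnht_step(theta, p, tau, grad_U, step, A, rng):
    """One Euler step of the SGNHT dynamics sketched above:
    d theta = p dt;  dp = (f - tau p) dt + sqrt(2A) dW;  d tau = (p'p/n - 1) dt,
    with stochastic force f = -grad_U(theta)."""
    n = theta.size
    f = -grad_U(theta)
    p = p + step * (f - tau * p) + np.sqrt(2.0 * A * step) * rng.normal(size=n)
    theta = theta + step * p
    tau = tau + step * (p @ p / n - 1.0)
    return theta, p, tau

# toy example: sample a standard Gaussian, U(theta) = theta'theta / 2
rng = np.random.default_rng(3)
theta, p, tau = np.zeros(2), rng.normal(size=2), 1.0
for _ in range(200):
    theta, p, tau = sgnht_step(theta, p, tau, lambda t: t, 0.01, 1.0, rng)
```

The thermostat variable adapts so that the average kinetic energy per dimension stays near 1, which is what absorbs the unknown noise level of the stochastic force.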
[**[Calculate the stochastic force of $v_k^{(l)}$: ]{}**]{} $$\label{SF_v1}
U\left( {\nu _k^{\left( l \right)}} \right) = - \sum\limits_{k = 1}^{{K_l}} {\log p\left( {{\ensuremath{\boldsymbol{\pi}} }_k^{\left( l \right)}|{\zeta ^{\left( l \right)}}},\nu_k^{(l)} \right)} - \log p\left( {\nu _k^{\left( l \right)}|\frac{{{\gamma _0}}}{{{K_l}}},{\beta ^{\left( l \right)}}} \right),$$ $$\begin{aligned}
\label{SF_v2}
{\nabla _{\nu _k^{\left( l \right)}}}U\left( {\nu _k^{\left( l \right)}} \right) = &- \left[ {\sum\limits_{{k_1} = 1}^{{K_l}} {\left( {\nu _{{k_1}}^{\left( l \right)}} \right)\log \left( {\pi _{{k_1}k}^{\left( l \right)}} \right)} + \sum\limits_{{k_2} = 1}^{{K_l}} {\left( {\nu _{{k_2}}^{\left( l \right)}} \right)\log \left( {\pi _{k{k_2}}^{\left( l \right)}} \right)} + \left( {{\zeta ^{\left( l \right)}} - 4\nu _k^{\left( l \right)}} \right)\log \pi _{kk}^{\left( l \right)}} \right] \nonumber \\
& - \frac{{\left( {\frac{{{\gamma _0}}}{{{K_l}}} - 1} \right)}}{{\nu _k^{\left( l \right)}}} + {\beta ^{\left( l \right)}}.\end{aligned}$$
[**[Calculate the stochastic force of $\xi^{(l)}$: ]{}**]{} $$\label{SF_xi1}
U\left( {{\xi ^{\left( l \right)}}} \right) = - \sum\limits_{k = 1}^{{K_l}} {\log p\left( {\pi _k^{\left( l \right)}|{\xi ^{\left( l \right)}}} \right)} - \log p\left( {{\xi ^{\left( l \right)}}|{\varepsilon _0},{\varepsilon _0}} \right),$$ $$\begin{aligned}
\label{SF_xi2}
{\nabla _{{\xi ^{\left( l \right)}}}}U\left( {{\xi ^{\left( l \right)}}} \right) = - \sum\limits_{k = 1}^{{K_l}} {\nu _k^{\left( l \right)}\log \left( {\pi _{kk}^{\left( l \right)}} \right)} - \frac{{\left( {{\varepsilon _0} - 1} \right)}}{{{\xi ^{\left( l \right)}}}} + {\varepsilon _0}.\end{aligned}$$
Input: Data mini-batches; Output: Global parameters of DPGDS.
*$\setminus$ $\star$ Collect local information*\
Backward-upward Gibbs sampling on the $i$th mini-batch for $\{A_{vkt}^{(l)}\}_{v,k,t}$; $\{x_{kt}^{( l+1 )}\}_{k,t}$; $\{x_{kt}^{( l+1 , l)}\}_{k,t}$ ; $\{x_{kt}^{( l+1 , l + 1)}\}_{k,t}$; $\{Z_{k_{1}{k_{2}}t}^{( {l} )}\}_{k_{1},k_{2},t}$ with -;
Backward-upward calculation of $\{\zeta_{t}^{( l)}\}_t $;
Forward-downward Gibbs sampling for $\{{\ensuremath{\boldsymbol{\theta}} }_{t}^{( l )}\}_{t}$ with ;
Sampling ${\ensuremath{\boldsymbol{\delta}} }^{(1)}$ with or ;
*$\setminus$ $\star$ Update global parameters*\
Update $P_{k}^{(l)}$ according to Cong et al. [@cong2017deep], and then $\{{\ensuremath{\boldsymbol{\phi}} }_{k}^{( l)}\}_k$ with ; Update $M_{k}^{(l)}$ according to [@cong2017deep], and then $\{{\ensuremath{\boldsymbol{\pi}} }_{k}^{( l)}\}_k$ with ;
Update $\xi^{(l)}$, $\{\nu_k^{(l)}\}_k$, and $\beta^{(l)}$ with SGNHT [@ding2014bayesian]\
\[SGMCMC\]
Results on Bouncing ball {#BB_one_step}
========================
In Fig. \[Results on Bouncing ball\], we show the original data and the one-step prediction frames of five different algorithms. The frames in each subplot are arranged in time order, from left to right and from top to bottom. The most difficult frames to predict are those describing how the balls move after a collision, as can be seen in the fourth and ninth rows. Compared with the original data, a good model separates the two balls soon after a collision, whereas a bad model produces unreasonable trajectories. By this criterion, DPGDS outperforms the others.
\
Results on ICEWS 2007-2009 {#sec:last}
==========================
In order to understand DPGDS better, based on the results inferred on ICEWS 2007-2009 via a three-hidden-layer DPGDS of size 200-100-50, we show in Fig. \[fig:laten factors and dictionary icews 2007-2009\] how some example topics are hierarchically and temporally related to each other, and how their corresponding latent representations evolve over time. Findings and conclusions similar to those for ICEWS 2001-2003 in Figs. \[fig:laten factors and dictionary icews 2001-2003\] and \[fig:transition matrix from ICEWS 200123\] can be drawn from Fig. \[fig:laten factors and dictionary icews 2007-2009\]. In Fig. \[fig:transition matrix from ICEWS 200789\], we also present a subset of the transition matrix ${\ensuremath{\boldsymbol{\Pi}} }^{(l)}$ in each layer, corresponding to the top ten topics, some of which are displayed in Fig. \[fig:laten factors and dictionary icews 2007-2009\].
Results on GDELT 2015-2018
==========================
To add more empirical study of scalability, we have collected GDELT data from February 2015 to July 2018 (temporal granularity of 15 minutes), resulting in a count matrix with $V = 1000$ and $T \approx 120{,}000$. For such a long time series, the backward-upward--forward-downward Gibbs sampler for DPGDS is impractical to run, as a single iteration takes nearly 3000 seconds. GPDM is trained with a batch algorithm, which is also too time-consuming to run on this dataset. However, by taking short sequences at random locations from the data, we can run both DTSBN [@gan2015deep] and the proposed DPGDS using SGMCMC. Here, we use $[K^{(1)},K^{(2)}, K^{(3)}]=[200,100,50]$ for both DPGDS and DTSBN and choose the length of each short sequence to be $T=60$. In Fig. \[fig:MP\_MR\_PP from GDELT 2015\_2018\], we present how DTSBN and the proposed DPGDS progress over time, evaluated with MP, MR, and PP. It takes about $6000$ seconds for both DTSBN and DPGDS-SGMCMC to converge. Clearly, DPGDS-SGMCMC is scalable and outperforms DTSBN.
[^1]: Corresponding author
---
abstract: 'In this paper, we prove that the set of all $F$-pure thresholds on a fixed germ of a strongly $F$-regular pair satisfies the ascending chain condition. As a corollary, we verify the ascending chain condition for the set of all $F$-pure thresholds on smooth varieties or, more generally, on varieties with tame quotient singularities, which gives an affirmative answer to a conjecture of Blickle, Mustaţă and Smith.'
address: 'Graduate School of Mathematical Sciences, University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo 153-8914, Japan'
author:
- Kenta Sato
title: 'Ascending chain condition for $F$-pure thresholds on a fixed strongly $F$-regular germ'
---
= 9999
Introduction
============
In characteristic zero, Shokurov ([@Sho]) conjectured that the set of all log canonical thresholds on varieties of any fixed dimension satisfies the ascending chain condition. This conjecture was partially solved by de Fernex, Ein, and Mustaţă in [@dFEM] and [@dFEM2] using generic limit, and finally settled by Hacon, M^c^Kernan, and Xu in [@HMX] using global geometry.
In this paper, we deal with a positive characteristic analogue of this problem. Let $(R,{\mathfrak{m}})$ be a Noetherian normal local ring of characteristic $p>0$ and ${\Delta}$ be an effective ${\mathbb{Q}}$-Weil divisor on $\Spec R$. We further assume that $R$ is $F$-finite, that is, the Frobenius morphism $F:R \to R$ is a finite ring homomorphism. For a proper ideal ${\mathfrak{a}}\subsetneq R$ and a real number $t \ge 0$, we consider the test ideal $\tau(R, {\Delta}, {\mathfrak{a}}^t)$, which is defined in terms of the Frobenius morphism (see Definition \[test def\] below). Since we have $\tau(R,{\Delta}, {\mathfrak{a}}^t) \subseteq \tau(R,{\Delta}, {\mathfrak{a}}^s)$ for all real numbers $0 \le s \le t$, for a given ${\mathfrak{m}}$-primary ideal $I \subseteq R$, we define the $F$-jumping number of $(R,{\Delta}; {\mathfrak{a}})$ with respect to $I$ as $${\mathrm{fjn}}^I (R, {\Delta};{\mathfrak{a}}) : = \inf \{ t \ge 0 \mid \tau (R, {\Delta}, {\mathfrak{a}}^t) \subseteq I \} \in {\mathbb{R}}.$$ When $I= {\mathfrak{m}}$ and $(R,{\Delta})$ is *strongly $F$-regular*, that is, $\tau(R,{\Delta})=R$, we denote it by $\mathrm{fpt}(R,{\Delta}; {\mathfrak{a}})$ and call it the *$F$-pure threshold* of $(R,{\Delta};{\mathfrak{a}})$.
Since test ideals in positive characteristic enjoy several important properties which hold for multiplier ideals in characteristic zero, it is natural to ask whether or not the set of $F$-pure thresholds satisfies the ascending chain condition. Blickle, Mustaţă, and Smith conjectured the following.
\[intro conj\] Fix an integer $n \ge 1$, a prime number $p>0$ and a set ${\mathcal{D}^{\mathrm{reg}}_{{n},{p}}}$ such that every element of ${\mathcal{D}^{\mathrm{reg}}_{{n},{p}}}$ is an $n$-dimensional $F$-finite Noetherian regular local ring of characteristic $p$. The set $${\mathcal{T}}^{\mathrm{reg}}_{n,p,\mathrm{pr}}: = \{ \mathrm{fpt} (A; {\mathfrak{a}}) \mid A \in {\mathcal{D}^{\mathrm{reg}}_{{n},{p}}} ,{\mathfrak{a}}\subsetneq A \textup{ is a principal ideal} \},$$ satisfies the ascending chain condition.
This problem has been considered by several authors ([@BMS2], [@HnBWZ], and [@HnBW]). We give an affirmative answer to this conjecture.
\[intro reg\] With the notation above, the set $${\mathcal{T}}^{\mathrm{reg}}_{n,p}: = \{ \mathrm{fpt} (A; {\mathfrak{a}}) \mid A \in {\mathcal{D}^{\mathrm{reg}}_{{n},{p}}}, {\mathfrak{a}}\subsetneq A \textup{ is an ideal} \}$$ satisfies the ascending chain condition.
Employing the strategy in [@dFEM], we can also verify the ascending chain condition for $F$-pure thresholds on tame quotient singularities.
\[intro quot\] Fix an integer $n \ge 1$, a prime number $p>0$ and a set ${\mathcal{D}^{\mathrm{quot}}_{{n},{p}}}$ such that every element of ${\mathcal{D}^{\mathrm{quot}}_{{n},{p}}}$ is an $n$-dimensional $F$-finite Noetherian normal local ring of characteristic $p$ with tame quotient singularities. The set $${\mathcal{T}}^{\mathrm{quot}}_{n,p}: = \{ \mathrm{fpt} (R; {\mathfrak{a}}) \mid R \in {\mathcal{D}^{\mathrm{quot}}_{{n},{p}}}, {\mathfrak{a}}\subsetneq R \textup{ is an ideal} \}$$ satisfies the ascending chain condition.
In order to prove Theorem \[intro reg\], it is enough to show that the set of all $F$-pure thresholds on a fixed $F$-finite Noetherian regular local ring satisfies the ascending chain condition. We consider this problem in a more general setting. Let $(R,{\Delta})$ be a pair, that is, $(R,{\mathfrak{m}})$ is an $F$-finite Noetherian normal local ring of characteristic $p>0$ and ${\Delta}$ is an effective ${\mathbb{Q}}$-Weil divisor on $\Spec R$. For a given ${\mathfrak{m}}$-primary ideal $I \subseteq R$, we define $${\mathrm{FJN}}^I(R,{\Delta}) : = \{ {\mathrm{fjn}}^I(R,{\Delta}; {\mathfrak{a}}) \mid {\mathfrak{a}}\subsetneq R \textup{ is an ideal} \} \subseteq {\mathbb{R}}_{\ge 0}.$$ We note that if $(R,{\Delta})$ is strongly $F$-regular and $I= {\mathfrak{m}}$, then the set ${\mathrm{FJN}}^I(R,{\Delta})$ coincides with the set of all $F$-pure thresholds $$\mathrm{FPT}(R,{\Delta}) := \{ \mathrm{fpt}(R,{\Delta}; {\mathfrak{a}}) \mid {\mathfrak{a}}\subsetneq R \textup{ is an ideal} \}.$$
Let $(R,{\Delta})$ be a pair such that $K_X+{\Delta}$ is ${\mathbb{Q}}$-Cartier with index not divisible by $p$, where $K_X$ is a canonical divisor of $X=\Spec R$ and $I \subseteq R$ be an ${\mathfrak{m}}$-primary ideal. Assume that $\tau(R,{\Delta})$ is ${\mathfrak{m}}$-primary or trivial. Then the set ${\mathrm{FJN}}^I(R,{\Delta})$ satisfies the ascending chain condition. In particular, if $(R,{\Delta})$ is strongly $F$-regular, then the set $\mathrm{FPT}(R,{\Delta})$ satisfies the ascending chain condition.
For a real number $t>0$ and a power $q$ of $p$, we consider the ascending sequence $\{ {\langle t \rangle_{n, q}} \}_{n \in {\mathbb{N}}}$, where ${\langle t \rangle_{n, q}} : = {\lceil t q^n-1 \rceil}/q^n$ is the *$n$-th truncation* of $t$ in base $q$. It is not so hard to prove that the set ${\mathrm{FJN}}^I(R,{\Delta})$ satisfies the ascending chain condition if and only if for every real number $t>0$, there exists an integer $n_1>0$ with the following property: for every ideal ${\mathfrak{a}}\subseteq R$ and every integer $n \ge n_1$, $\tau(R,{\Delta}, {\mathfrak{a}}^{{\langle t \rangle_{n, q}}}) \subseteq I$ if and only if $\tau(R, {\Delta}, {\mathfrak{a}}^{{\langle t \rangle_{n_1, q}}}) \subseteq I$.
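For concreteness, here is an illustrative computation of the truncation (not taken from the source): with $q=2$ and $t=1/3$,

```latex
\langle 1/3 \rangle_{2,2} = \frac{\lceil 4/3 - 1 \rceil}{4} = \frac{1}{4}, \qquad
\langle 1/3 \rangle_{4,2} = \frac{\lceil 16/3 - 1 \rceil}{16} = \frac{5}{16}, \qquad
\langle 1/3 \rangle_{6,2} = \frac{\lceil 64/3 - 1 \rceil}{64} = \frac{21}{64},
```

an ascending sequence converging to $1/3$ from below.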
In this paper, we define a new ideal $ {\tau_{e}^{n,u}} (R,{\Delta}, {\mathfrak{a}}^t) \subseteq R$ for all integers $u,n \ge 0$ in terms of the trace map for the Frobenius morphism so that for every $n$, the sequence $\{ {\tau_{e}^{n,u}}(R,{\Delta}, {\mathfrak{a}}^t) \}_{u \in {\mathbb{N}}}$ is an ascending chain which converges to $\tau(R,{\Delta}, {\mathfrak{a}}^{{\langle t \rangle_{n, q}}})$. We investigate the behavior of the ideals $\{ {\tau_{e}^{n,u}}(R,{\Delta}, {\mathfrak{a}}^t)\}_{n \in {\mathbb{N}}}$ for some fixed $u \ge 0$ instead of the ideals $\{\tau(R, {\Delta}, {\mathfrak{a}}^{{\langle t \rangle_{n, q}}})\}_{n \in {\mathbb{N}}}$. In particular, we prove the following theorem, which plays a crucial role in the proof of the main theorem.
\[intro key\] Let $(X=\Spec R, {\Delta})$ be a pair such that $(p^e-1)(K_X+{\Delta})$ is Cartier for some integer $e>0$, $I \subseteq R$ be an ${\mathfrak{m}}$-primary ideal, $l, n_0 \ge 0$ and $u \ge 2$ be integers, and $t>0$ be a rational number such that $t=(s/p^e) + ( l/p^e(p^e-1))$ for some integers $s \ge 0$ and $0<l<p^e$. We set $t_0 : =p^{2e}/(p^e-1)$ and $M_0=(p^{e(n_0+6)}-1) \cdot {\mathrm{emb}}(R) / (p^e-1)$, where ${\mathrm{emb}}(R)$ is the embedding dimension of $R$. Then there exists an integer $n_1>0$ with the following property: for any ideal ${\mathfrak{a}}\subseteq R$ such that
1. $p^e>\mu_R({\mathfrak{a}}) + {\ell \ell_R(R/I)}+{\mathrm{emb}}(R)$, where $\mu_R({\mathfrak{a}})$ is the minimal number of generators of ${\mathfrak{a}}$ and ${\ell \ell_R(R/I)}:= \max\{m \ge 0 \mid {\mathfrak{m}}^m \subseteq I\}$, and
2. ${\tau_{e}^{n_0+1, u}}(R, {\Delta}, {\mathfrak{a}}^{l t_0}) + {\mathfrak{m}}^{M_0} \cdot \tau(R, {\Delta}) \supseteq {\tau_{e}^{n_0, u}} (R, {\Delta}, {\mathfrak{a}}^{l t_0})$,
we have $${\tau_{e}^{n,u}}(R,{\Delta}, {\mathfrak{a}}^t) \subseteq I \textup{ if and only if } {\tau_{e}^{n_1,u}}(R,{\Delta}, {\mathfrak{a}}^t) \subseteq I$$ for every integer $n \ge n_1$.
Another key ingredient of the proof of the main theorem is the rationality of accumulation points of ${\mathrm{FJN}}^I(R,{\Delta})$. Blickle, Mustaţă, and Smith proved in [@BMS2] that the set ${\mathcal{T}}^{\mathrm{reg}}_{n,p,\mathrm{pr}}$ is a closed set of rational numbers using ultraproduct. Their proof relies on the fact that for any local ring $A \in {\mathcal{D}^{\mathrm{reg}}_{{n},{p}}}$, any principal ideal ${\mathfrak{a}}\subsetneq A$, and any integer $e \ge 0$, the test ideal $\tau(A, {\mathfrak{a}}^{1/p^e})$ can be computed by the trace map ${\mathrm{Tr}}^e: F^e_* A \to A$ for the $e$-th Frobenius morphism $F^e$, that is, we have $\tau(A, {\mathfrak{a}}^{1/p^e})= {\mathrm{Tr}}^e(F^e_* {\mathfrak{a}})$, which fails if ${\mathfrak{a}}$ is not principal. In order to extend the result to the non-principal case, we introduce the notion of stabilization exponent for a triple $(R,{\Delta}, {\mathfrak{a}}^t)$, which indicates how many times we should compose the trace map for the Frobenius morphism to compute the test ideal $\tau(R,{\Delta}, {\mathfrak{a}}^t)$ (see Definition \[stab exp\]).
By combining the method used in [@BMS2] and some argument about the stabilization exponents, we prove the following theorem.
\[intro BMS\] Let $(X=\Spec R, {\Delta})$ be a pair such that $K_X+{\Delta}$ is ${\mathbb{Q}}$-Cartier with index not divisible by $p$ and $I \subseteq R$ be an ${\mathfrak{m}}$-primary ideal. Then the limit of any sequence in ${\mathrm{FJN}}^I(R,{\Delta})$ is a rational number.
As a consequence of Theorem \[intro key\] and Theorem \[intro BMS\], we obtain the main theorem.
The author wishes to express his gratitude to his supervisor Professor Shunsuke Takagi for his encouragement, valuable advice and suggestions. The author is also grateful to Professor Mircea Mustaţă for his helpful comments and suggestions. He would like to thank Doctor Sho Ejiri, Doctor Kentaro Ohno, Doctor Yohsuke Matsuzawa and Professor Hiromu Tanaka for useful comments. A part of this work was carried out during his visit to University of Michigan with financial support from the Program for Leading Graduate Schools, MEXT, Japan. He was also supported by JSPS KAKENHI 17J04317.
Preliminaries
=============
Test ideals
-----------
In this subsection, we recall the definition and some basic properties of test ideals.
A ring $R$ of characteristic $p>0$ is said to be *$F$-finite* if the Frobenius morphism $F: R \to R$ is a finite ring homomorphism.
Throughout this paper, all rings will be assumed to be $F$-finite and of characteristic $p>0$. If $R$ is an $F$-finite Noetherian normal ring, then $R$ is excellent ([@Kun]) and $X=\Spec(R)$ has a *canonical divisor* $K_X$ (see for example [@ST17 p.4]).
A *pair* $(R, {\Delta})$ consists of an $F$-finite Noetherian normal local ring $(R, {\mathfrak{m}})$ and an effective ${\mathbb{Q}}$-Weil divisor ${\Delta}$ on $\Spec R$. A *triple* $(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet} = \prod_{i=1}^m {\mathfrak{a}}_i^{t_i})$, consists of a pair $(R, {\Delta})$ and a symbol ${\mathfrak{a}}_\bullet^{t_\bullet}= \prod_{i=1}^m {\mathfrak{a}}_i^{t_i}$, where $m>0$ is an integer, ${\mathfrak{a}}_1, \dots, {\mathfrak{a}}_m \subseteq R$ are ideals, and $t_1, \dots, t_m \ge 0$ are real numbers.
Let $(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet} = \prod_{i=1}^m {\mathfrak{a}}_i^{t_i})$ be a triple. An ideal $J \subseteq R$ is *uniformly $({\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet} , F)$-compatible* if $\phi( F^e_* ({\mathfrak{a}}_1^{{\lceil t_1 (p^e-1) \rceil}} \cdots {\mathfrak{a}}_m^{{\lceil t_m (p^e-1) \rceil}} J)) \subseteq J$ for every $e \ge 0$ and every $\phi \in \Hom_R(F^e_* R({\lceil (p^e-1){\Delta}\rceil}), R)$.
\[test def\] Let $(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet} =\prod_{i=1}^m {\mathfrak{a}}_i^{t_i})$ be a triple. Assume that ${\mathfrak{a}}_1, \dots, {\mathfrak{a}}_m$ are non-zero ideals. Then we define the *test ideal* $$\tau(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet} )=\tau(R,{\Delta}, \prod_{i=1}^m {\mathfrak{a}}_i^{t_i}) = \tau(R,{\Delta}, {\mathfrak{a}}_1^{t_1} \cdots {\mathfrak{a}}_m^{t_m})$$ to be the unique minimal non-zero uniformly $({\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}, F)$-compatible ideal. The test ideal always exists (see [@Sch10 Theorem 6.3]).
When ${\mathfrak{a}}_i=R$ and $t_i=0$ for every $i$, then we denote the ideal $\tau(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet})$ by $\tau(R, {\Delta})$. If ${\mathfrak{a}}_i=0$ for some $i$, then we define $\tau(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet})=(0)$.
\[test inc\] Let $(X=\Spec R, {\Delta}, {\mathfrak{a}}^t)$ be a triple. Then the following hold.
1. If $t \le t'$ and ${\mathfrak{a}}' \subseteq {\mathfrak{a}}$, then $\tau(R, {\Delta}, ({\mathfrak{a}}')^{t'}) \subseteq \tau(R, {\Delta}, {\mathfrak{a}}^t)$.
2. Assume that $K_X+ {\Delta}$ is ${\mathbb{Q}}$-Cartier. Then there exists a real number $\epsilon>0$ such that if $t \le t' \le t+ \epsilon$, then $\tau(R, {\Delta}, {\mathfrak{a}}^{t'} )= \tau(R, {\Delta}, {\mathfrak{a}}^t)$.
Let $(R, {\Delta})$ be a pair and ${\mathfrak{a}}\subseteq R$ be an ideal. A real number $t > 0$ is called an *$F$-jumping number* of $(R, {\Delta}; {\mathfrak{a}})$ if $$\tau(R, {\Delta}, {\mathfrak{a}}^{t-\epsilon}) \neq \tau(R, {\Delta}, {\mathfrak{a}}^t ),$$ for all $\epsilon > 0$.
\[disc rat\] Let $(X=\Spec R, {\Delta}, {\mathfrak{a}})$ be a triple such that $K_X+{\Delta}$ is ${\mathbb{Q}}$-Cartier. Then the set of all $F$-jumping numbers of $(R, {\Delta}; {\mathfrak{a}})$ is a discrete set of rational numbers.
Let $(R, {\Delta}, {\mathfrak{a}})$ be a triple such that ${\mathfrak{a}}\neq R$ and $I \subseteq R$ be an ${\mathfrak{m}}$-primary ideal. We define the *$F$-jumping number* of $(R, {\Delta}; {\mathfrak{a}})$ with respect to $I$ as $${\mathrm{fjn}}^I (R, {\Delta}; {\mathfrak{a}}) := \inf \{ t \in {\mathbb{R}}_{\ge 0} \mid \tau(R, {\Delta}, {\mathfrak{a}}^t) \subseteq I \} \in {\mathbb{R}}_{\ge 0}.$$
When $\tau(R,{\Delta})=R$ and $I = {\mathfrak{m}}$, we denote it by $\mathrm{fpt}(R, {\Delta};{\mathfrak{a}})$ and call it the *$F$-pure threshold* of $(R,{\Delta};{\mathfrak{a}})$. If ${\Delta}=0$, then we denote it by $\mathrm{fpt}(R; {\mathfrak{a}})$.
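For orientation, two standard examples in the regular case (well known in the literature, not specific to this paper): for a principal monomial ideal in a one-dimensional regular local ring, and for the maximal ideal of an $n$-dimensional regular local ring,

```latex
\mathrm{fpt}\bigl(k[x]_{(x)};\, (x^a)\bigr) = \frac{1}{a}, \qquad
\mathrm{fpt}\bigl(R;\, \mathfrak{m}\bigr) = n
\quad \text{for } (R,\mathfrak{m}) \text{ regular of dimension } n.
```

These values agree with the corresponding log canonical thresholds in characteristic zero.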
Let $(X=\Spec R, {\Delta})$ be a pair and $e \ge 0$ be an integer. Assume that $(p^e-1)(K_X+{\Delta})$ is Cartier. Then there exists an isomorphism $$\Hom_R(F^e_*( R((p^e-1){\Delta})) , R) \cong F^e_*R$$ as $F^e_*R$-modules (see for example [@Sch12 Lemma 3.1]). We denote by $\phi_{\Delta}^e$ a generator of $\Hom_R(F^e_*( R((p^e-1){\Delta})), R)$ as an $F^e_*R$-module.
Although a map $\phi_{\Delta}^e : F^e_*R \to R$ is not uniquely determined, it is unique up to multiplication by $F^e_*R^\times$. When we consider this map, we only need the information about the image of this map. Hence we ignore the multiplication by $F^e_* R^\times$.
Let $R$ be a Noetherian ring of characteristic $p>0$, $e$ be a positive integer, and ${\mathfrak{a}}\subseteq R$ be an ideal. Then we denote by ${\mathfrak{a}}^{[p^e]}$ the ideal generated by $\{ f^{p^e} \in R \mid f \in {\mathfrak{a}}\}.$
The following proposition seems to be well known to experts, but it is difficult to find a proof in the literature.
\[test base change\] Let $(R, {\mathfrak{m}})$ and $(S, {\mathfrak{n}})$ be $F$-finite Noetherian normal local rings with residue fields $k$ and $l$, respectively. Let $R \to S$ be a flat local homomorphism, ${\Delta}_X$ be an effective ${\mathbb{Q}}$-Weil divisor on $X=\Spec R$ and ${\Delta}_Y$ be the flat pullback of ${\Delta}_X$ to $Y=\Spec S$. Assume that ${\mathfrak{m}}S ={\mathfrak{n}}$ and that the relative Frobenius morphism $F^e_{l/k} : F^e_* k \otimes_k l \to F^e_* l$ is an isomorphism for every $e \ge 0$. Then the following hold.
1. The morphism $R \to S$ is a regular morphism, that is, every fiber is geometrically regular.
2. The relative Frobenius morphism $F^e_{S/R} : F^e_*R \otimes_R S \to F^e_* S$ is an isomorphism for every $e \ge 0$.
3. For every $e \ge 0$, we have $$\Hom_R( F^e_* R ({\lceil (p^e-1) {\Delta}_X \rceil}), R) \otimes_R S \cong \Hom_S( F^e_* S ({\lceil (p^e-1) {\Delta}_Y \rceil}), S).$$
4. Let $(R,{\Delta}_X, {\mathfrak{a}}_\bullet^{t_\bullet} = \prod_{i=1}^m {\mathfrak{a}}_i^{t_i} )$ be a triple. We write $({\mathfrak{a}}_\bullet \cdot S)^{t_\bullet} : = \prod_i ({\mathfrak{a}}_i S)^{t_i}$. Then we have $$\tau(R, {\Delta}_X, {\mathfrak{a}}_\bullet^{t_\bullet}) \cdot S = \tau(S, {\Delta}_Y, ({\mathfrak{a}}_\bullet \cdot S)^{t_\bullet} ).$$
5. If $(p^e-1)(K_X+{\Delta}_X)$ is Cartier for some $e>0$, then $(p^e-1)(K_Y+ {\Delta}_Y)$ is also Cartier and $\phi^e_{{\Delta}_Y} : F^e_* S \to S$ coincides with the morphism $\phi^e_{{\Delta}_X} \otimes_R S : F^e_* R \otimes_R S \to S$ via the isomorphism $F^e_{S/R} : F^e_* R \otimes_R S \to F^e_* S$.
Since the relative Frobenius morphism $F_{l/k} : F_* k \otimes_k l \to F_*l$ is injective, the field extension $k \subseteq l$ is separable by [@Mat Theorem 26.4]. Then (1) follows from [@Mat Theorem 28.10] and [@And].
We will prove the assertion in (2). Fix an integer $e \ge 0$. By (1), the morphism $R \to S$ is generically separable. It follows from [@Mat Theorem 26.4] that the relative Frobenius morphism $F^e_{S/R} : F^e_* R \otimes_R S \to F^e_* S$ is injective.
We next consider the surjectivity of the map $F^e_{S/R}$. We denote the ring $F^e_* R \otimes_R S$ by $R'$. We consider the following commutative diagram: $$\xymatrix{
&F^e_* S \\
S \ar[r] \ar[ur]^{F^e_S} & R' \ar[u]_{F^e_{S/R}} \\
R \ar[r]_{F^e_R} \ar[u] & F^e_* R \ar[u]
}$$
Since the morphisms $F^e_R : R \to F^e_* R$ and $S \to R'$ are both finite and ${\mathfrak{n}}\cap R = {\mathfrak{m}}$, every maximal ideal of $R'$ contains the image of the maximal ideal $F^e_* {\mathfrak{m}}$ of $F^e_*R$. Therefore, $I : = (F^e_* {\mathfrak{m}}) \cdot R' \subseteq R'$ is contained in the Jacobson radical of $R'$. On the other hand, since the finite morphism $F^e_S : S \to F^e_* S$ factors through $F^e_{S/R}$, the morphism $F^e_{S/R}$ is also finite. Then the morphism $$F^e_{S/R} \otimes_{R'} (R'/I) : R'/I \to (F^e_*S) \otimes_{R'} (R'/I)$$ coincides with the relative Frobenius morphism $F^e_{l/k} : F^e_* k \otimes_k l \to F^e_*l$, and hence it is surjective. Therefore, the map $F^e_{S/R}$ is surjective by Nakayama's lemma.
We next prove the assertion in (3). Since $S$ is flat over $R$ and $F^e_* R ({\lceil (p^e-1) {\Delta}_X \rceil})$ is a finite $R$-module, we have $$\Hom_R( F^e_* R ({\lceil (p^e-1) {\Delta}_X \rceil}), R) \otimes_R S \cong \Hom_S( F^e_* R ({\lceil (p^e-1) {\Delta}_X \rceil}) \otimes_R S, S).$$ By (1), the flat pullback of a prime divisor on $X$ to $Y$ is a reduced divisor. Therefore, the Weil divisor ${\lceil (p^e-1) {\Delta}_Y \rceil} $ coincides with the flat pullback of ${\lceil (p^e-1) {\Delta}_X \rceil}$. It follows from (2) that $F^e_* R ({\lceil (p^e-1) {\Delta}_X \rceil}) \otimes_R S \cong F^e_* S ({\lceil (p^e-1) {\Delta}_Y \rceil})$, which completes the proof of (3).
For (4), it follows from (3) that the test ideal $\tau(R, {\Delta}_X, {\mathfrak{a}}_\bullet^{t_\bullet}) \cdot S$ is uniformly $({\Delta}_Y, ({\mathfrak{a}}_\bullet \cdot S)^{t_\bullet}, F)$-compatible and $\tau(S, {\Delta}_Y, ({\mathfrak{a}}_\bullet\cdot S)^{t_\bullet} ) \cap R$ is uniformly $({\Delta}_X, {\mathfrak{a}}_\bullet^{t_\bullet}, F)$-compatible. Therefore, we have $$\begin{aligned}
\tau(S, {\Delta}_Y, ({\mathfrak{a}}_\bullet \cdot S)^{t_\bullet}) &\subseteq& \tau(R, {\Delta}_X, {\mathfrak{a}}_\bullet^{t_\bullet}) \cdot S \textup{ and} \\
\tau(S, {\Delta}_Y, ({\mathfrak{a}}_\bullet \cdot S)^{t_\bullet} ) \cap R &\supseteq& \tau(R, {\Delta}_X, {\mathfrak{a}}_\bullet^{t_\bullet} ),\end{aligned}$$ which completes the proof of (4).
For (5), we assume that $(p^e-1)(K_X+{\Delta}_X)$ is Cartier. Since the canonical divisor $K_Y$ coincides with the flat pullback of $K_X$ ([@Aoy Proposition 4.1], see also [@Sta Lemma 45.22.1]), the Weil divisor $(p^e-1)(K_Y+{\Delta}_Y)$ is also Cartier. The second assertion in (5) follows from (3).
Let $(R, {\mathfrak{m}})$ be a Noetherian local ring. For a finitely generated $R$-module $M$, we denote by $\mu_R(M)$ the minimal number of generators of $M$ as an $R$-module. We denote by ${\mathrm{emb}}(R)$ the embedding dimension $\mu_R({\mathfrak{m}})$. If $M$ has finite length, then we denote by ${\ell_R}(M)$ the length of $M$ as an $R$-module and define $$\ell \ell_R (M) :=\min \{ n \ge 0 \mid {\mathfrak{m}}^n M =0 \}.$$
The following lemma is well known to experts, but we include a proof for the reader's convenience.
\[Skoda\] Let $R$ be a Noetherian ring of characteristic $p>0$, let ${\mathfrak{a}}\subseteq R$ be an ideal, and let $a, b, n$ and $e$ be non-negative integers.
1. If $n > p^e (\mu_R({\mathfrak{a}})-1)$, then we have ${\mathfrak{a}}^n = ({\mathfrak{a}}^{{\lceil n/p^e \rceil}-\mu_R({\mathfrak{a}})})^{[p^e]} \cdot {\mathfrak{a}}^{n-p^e({\lceil n/p^e \rceil}-\mu_R({\mathfrak{a}}))}$. In particular, if $b > p^e (\mu_R({\mathfrak{a}})-1)$, then we have ${\mathfrak{a}}^{a p^e +b } = ({\mathfrak{a}}^{a})^{[p^e]} \cdot {\mathfrak{a}}^b$.
2. Assume that there exist ideals ${\mathfrak{a}}_1, \dots, {\mathfrak{a}}_m \subseteq R$ and integers $M_1, \dots, M_m \ge 1$ such that ${\mathfrak{a}}={\mathfrak{a}}_1^{M_1} + \cdots + {\mathfrak{a}}_m^{M_m}$. Set $l : = \sum_i \mu_R({\mathfrak{a}}_i)$. If $n >p^e (l-1)$, then we have $${\mathfrak{a}}^n = ({\mathfrak{a}}^{{\lceil n/p^e \rceil}-l})^{[p^e]} \cdot {\mathfrak{a}}^{n-p^e({\lceil n/p^e \rceil}-l)}.$$ In particular, if $b > p^e ((\sum_i \mu_R({\mathfrak{a}}_i))-1)$, then we have ${\mathfrak{a}}^{a p^e +b } = ({\mathfrak{a}}^{a})^{[p^e]} \cdot {\mathfrak{a}}^b$.
The proof of (1) is straightforward by taking a minimal system of generators of ${\mathfrak{a}}$. For (2), we first consider the case when $m=1$. If $M_1=1$, then the assertion in (2) is the same as that in (1). If $l = \mu_R({\mathfrak{a}}_1)=1$, then the assertion holds because ${\mathfrak{a}}$ is a principal ideal. Therefore, we may assume that $M_1 \ge 2 $ and $l \ge 2$. In this case, it follows from (1) that $$\begin{aligned}
{\mathfrak{a}}^n={\mathfrak{a}}_1^{n M_1} &=& ({\mathfrak{a}}_1^{{\lceil n M_1/p^e \rceil} - l})^{[p^e]} \cdot {\mathfrak{a}}_1^{n M_1 - p^e({\lceil n M_1/p^e \rceil} - l)}\\
& \subseteq & ({\mathfrak{a}}_1^{M_1 ({\lceil n/p^e \rceil}-l)})^{[p^e]} \cdot {\mathfrak{a}}_1^{n M_1 - p^e M_1 ({\lceil n/p^e \rceil} -l)}\\
&=& ({\mathfrak{a}}^{{\lceil n/p^e \rceil}-l})^{[p^e]} \cdot {\mathfrak{a}}^{n-p^e ({\lceil n/p^e \rceil}-l)}.\end{aligned}$$
We next consider the case when $m \ge 2$. Set ${\mathfrak{b}}_i :={\mathfrak{a}}_i^{M_i}$ and $l_i : = \mu_R({\mathfrak{a}}_i)$. Then we have $${\mathfrak{a}}^n = \sum_{n_1, \dots, n_m} \prod_{i=1}^m {\mathfrak{b}}_i^{n_i},$$ where $n_i$ runs through all non-negative integers such that $\sum_i n_i=n$. Fix such integers $n_i$ and set $s_i : =\max \{ 0, {\lceil n_i/p^e \rceil}-l_i \}$. Then it follows from the first case that ${\mathfrak{b}}_i^{n_i}=({\mathfrak{b}}_i^{s_i})^{[p^e]} \cdot {\mathfrak{b}}_i^{n_i-p^e s_i}$ for every integer $i$. Therefore, we have $$\begin{aligned}
\prod_i {\mathfrak{b}}_i^{n_i} &=&(\prod_i {\mathfrak{b}}_i^{s_i})^{[p^e]} \cdot \prod_i {\mathfrak{b}}_i^{n_i-p^e s_i}\\
& \subseteq& ({\mathfrak{a}}^{\sum_i s_i})^{[p^e]} \cdot {\mathfrak{a}}^{\sum_i (n_i -p^e s_i)},\\
& \subseteq& ({\mathfrak{a}}^{{\lceil n/p^e \rceil}-l})^{[p^e]} \cdot {\mathfrak{a}}^{n-p^e ({\lceil n/p^e \rceil}-l)},\end{aligned}$$ which completes the proof of (2).
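For monomial ideals, ideal membership reduces to componentwise comparison of exponent vectors, so Lemma \[Skoda\] (1) can be verified by brute force. The following sketch (the helper names are ours, purely illustrative) checks the identity ${\mathfrak{a}}^n = ({\mathfrak{a}}^{{\lceil n/p^e \rceil}-\mu})^{[p^e]} \cdot {\mathfrak{a}}^{n-p^e({\lceil n/p^e \rceil}-\mu)}$ for a toy ideal with $\mu=3$ generators and $p^e=4$:

```python
from itertools import combinations_with_replacement, product
from math import ceil

def ideal_power(gens, n):
    """Exponent vectors generating a^n for a monomial ideal a."""
    if n == 0:
        return {(0,) * len(next(iter(gens)))}
    return {tuple(map(sum, zip(*c)))
            for c in combinations_with_replacement(gens, n)}

def frobenius_power(gens, q):
    """Generators of the Frobenius power a^[q]."""
    return {tuple(q * e for e in v) for v in gens}

def ideal_product(g1, g2):
    """Generators of the product of two monomial ideals."""
    return {tuple(a + b for a, b in zip(u, v)) for u, v in product(g1, g2)}

def contains(big, small):
    """True if the monomial ideal generated by `big` contains the one generated by `small`."""
    return all(any(all(e >= f for e, f in zip(v, w)) for w in big)
               for v in small)

def equal(g1, g2):
    return contains(g1, g2) and contains(g2, g1)

# a = (x^2, xy, y^3) in k[x, y]: mu = 3 generators, q = p^e = 4.
a = {(2, 0), (1, 1), (0, 3)}
mu, q = 3, 4
for n in range(13, 18):  # every such n satisfies n > q*(mu - 1) = 8
    k = ceil(n / q) - mu
    lhs = ideal_power(a, n)
    rhs = ideal_product(frobenius_power(ideal_power(a, k), q),
                        ideal_power(a, n - q * k))
    assert equal(lhs, rhs), n
print("Skoda-type identity verified for n = 13..17")
```

The inclusion $\supseteq$ always holds since $({\mathfrak{a}}^k)^{[q]} \subseteq {\mathfrak{a}}^{qk}$; the brute-force check confirms the non-trivial inclusion $\subseteq$ in these cases.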
Ultraproduct
------------
In this subsection, we define the ultraproduct of a family of sets and recall some of its properties. We also define the catapower of a Noetherian local ring and establish some of its basic properties. The reader is referred to [@Scho] for details.
Let ${\mathfrak{U}}$ be a collection of subsets of ${\mathbb{N}}$. ${\mathfrak{U}}$ is called an *ultrafilter* if the following properties hold:
1. $\emptyset \not\in {\mathfrak{U}}$.
2. For all subsets $A, B \subseteq {\mathbb{N}}$, if $A \in {\mathfrak{U}}$ and $A \subseteq B$, then $B \in {\mathfrak{U}}$.
3. For all subsets $A, B \subseteq {\mathbb{N}}$, if $A, B \in {\mathfrak{U}}$, then $A \cap B \in {\mathfrak{U}}$.
4. For every subset $A \subseteq {\mathbb{N}}$, if $A \not\in {\mathfrak{U}}$, then ${\mathbb{N}}\setminus A \in {\mathfrak{U}}$.
An ultrafilter ${\mathfrak{U}}$ is called *non-principal* if the following holds:
5. If $A$ is a finite subset of ${\mathbb{N}}$, then $A \not\in {\mathfrak{U}}$.
By Zorn’s Lemma, there exists a non-principal ultrafilter. From now on, we fix a non-principal ultrafilter ${\mathfrak{U}}$.
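The axioms (1)-(4) can be tested mechanically on a finite base set. Non-principal ultrafilters exist only on infinite sets and cannot be exhibited explicitly, but principal ultrafilters ${\mathfrak{U}}_a = \{ A \mid a \in A \}$ already illustrate the axioms. A minimal sketch (the function names are ours, purely illustrative):

```python
from itertools import combinations

def powerset(s):
    """All subsets of s as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def is_ultrafilter(U, base):
    """Check axioms (1)-(4) for a collection U of subsets of base."""
    subsets = powerset(base)
    if frozenset() in U:                                   # axiom (1)
        return False
    for A in subsets:
        for B in subsets:
            if A in U and A <= B and B not in U:           # axiom (2)
                return False
            if A in U and B in U and (A & B) not in U:     # axiom (3)
                return False
    for A in subsets:
        if A not in U and (frozenset(base) - A) not in U:  # axiom (4)
            return False
    return True

base = {0, 1, 2, 3}
# Principal ultrafilter at the point 2: all subsets containing 2.
U2 = {A for A in powerset(base) if 2 in A}
print(is_ultrafilter(U2, base))    # True
# The collection of all subsets of size >= 2 fails axiom (3).
Ubig = {A for A in powerset(base) if len(A) >= 2}
print(is_ultrafilter(Ubig, base))  # False
```

On a finite base set every ultrafilter is principal, which is why axiom (5) forces the base set ${\mathbb{N}}$ to be infinite.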
Let $\{ T_m \}_{m \in {\mathbb{N}}}$ be a family of sets. We define the equivalence relation $\sim$ on the set $\prod_{m \in {\mathbb{N}}} T_m$ by $$(a_m)_m \sim (b_m)_m \textup{ if and only if }
\left\{ m \in {\mathbb{N}}\mid a_m=b_m \right\} \in {\mathfrak{U}}.$$ We define the *ultraproduct* of $\{ T_m \}_{m \in {\mathbb{N}}}$ as $${\operatorname{ulim}}_{m \in {\mathbb{N}}} T_m : = \left(\prod_{m \in {\mathbb{N}}} T_m \right) / \sim.$$ If $T$ is a set and $T_m=T$ for all $m$, then we denote ${\operatorname{ulim}}_m T_m$ by ${{}^* T}$ and call it the *ultrapower* of $T$.
Let $\{ T_m \}_{m \in {\mathbb{N}}}$ be a family of sets and $a_m \in T_m$ for every $m$. We denote by ${\operatorname{ulim}}_m a_m$ the class of $(a_m)_m$ in ${\operatorname{ulim}}_m T_m$. Let $\{ S_m \}_m$ be another family of sets and $f_m: T_m \to S_m$ be a map for every $m$. We can define the map $${\operatorname{ulim}}_m f_m : {\operatorname{ulim}}_m T_m \to {\operatorname{ulim}}_m S_m$$ by sending ${\operatorname{ulim}}_m a_m \in {\operatorname{ulim}}_m T_m$ to ${\operatorname{ulim}}_m f_m(a_m) \in {\operatorname{ulim}}_m S_m$. If $T_m =T$, $S_m=S$, and $f_m= f$ for every $m \in {\mathbb{N}}$, then we denote the map ${\operatorname{ulim}}_m f_m$ by ${{}^* f} : {{}^* T} \to {{}^* S}$.
Let $\{ R_m \}_{m \in {\mathbb{N}}}$ be a family of rings and $M_m$ be an $R_m$-module for every $m$. Then ${\operatorname{ulim}}_m R_m$ has the ring structure induced by that of $\prod_m R_m$ and ${\operatorname{ulim}}_m M_m$ has the structure of a ${\operatorname{ulim}}_m R_m$-module induced by the structure of $\prod_m R_m$-module on $\prod_m M_m$. Moreover, if $k_m$ is a field for every $m$, then ${\operatorname{ulim}}_m k_m$ is a field.
\[ultra field ext\] We have the following properties.
1. Let $R$ be a Noetherian ring and $M$ be a finitely generated $R$-module. Then we have ${{}^* M} \cong M \otimes_R {{}^* R}$.
2. Let $k$ be an $F$-finite field of positive characteristic. Then the relative Frobenius morphism $F^e_* (k) \otimes_k {{}^* k} \to F^e_* ({{}^* k})$ is an isomorphism. In particular, ${{}^* k}$ is an $F$-finite field.
For (1), we consider the natural homomorphism $M \otimes_R {{}^* R} \to {{}^* M}$. Since the functors ${{}^* (-)}$ and $(-) \otimes_R {{}^* R}$ are both right exact, we may assume that $M$ is a free $R$-module of finite rank. In this case, the assertion is obvious.
For (2), we consider the natural bijection ${{}^* (F^e_* k)} \cong F^e_*({{}^* k})$. Combining this with (1), we conclude that the relative Frobenius morphism $F^e_*(k) \otimes_k {{}^* k} \to F^e_*( {{}^* k})$ is an isomorphism.
Let ${\mathfrak{a}}_m \subseteq R_m$ be an ideal for every $m$. Then the natural map ${\operatorname{ulim}}_m {\mathfrak{a}}_m \to {\operatorname{ulim}}_m R_m$ is injective, and hence we can consider ${\operatorname{ulim}}_m {\mathfrak{a}}_m$ as an ideal of the ring ${\operatorname{ulim}}_m R_m$. Let ${\mathfrak{b}}_m \subseteq R_m$ be another family of ideals. Then ${\operatorname{ulim}}_m {\mathfrak{b}}_m \subseteq {\operatorname{ulim}}_m {\mathfrak{a}}_m$ if and only if $$\left\{ m \in {\mathbb{N}}\mid {\mathfrak{b}}_m \subseteq {\mathfrak{a}}_m \right\} \in {\mathfrak{U}}.$$
Moreover, we have the equation $$({\operatorname{ulim}}_m {\mathfrak{a}}_m) + ({\operatorname{ulim}}_m {\mathfrak{b}}_m) = {\operatorname{ulim}}_m ({\mathfrak{a}}_m +{\mathfrak{b}}_m).$$
\[ulim prod\] Let $\{ R_m \}_{m \in {\mathbb{N}}}$ be a family of rings and let ${\mathfrak{a}}_m, {\mathfrak{b}}_m \subseteq R_m$ be ideals for every $m$. Assume that there exists an integer $l>0$ such that $\mu_{R_m}({\mathfrak{a}}_m) \le l$ for every $m$. Then we have $$({\operatorname{ulim}}_m {\mathfrak{a}}_m) \cdot ({\operatorname{ulim}}_m {\mathfrak{b}}_m) = {\operatorname{ulim}}_m ({\mathfrak{a}}_m \cdot {\mathfrak{b}}_m).$$
Let $\alpha ={\operatorname{ulim}}_m a_m \in {\operatorname{ulim}}_m {\mathfrak{a}}_m$ and $\beta= {\operatorname{ulim}}_m b_m \in {\operatorname{ulim}}_m {\mathfrak{b}}_m$. Then we have $\alpha \cdot \beta = {\operatorname{ulim}}_m (a_m b_m) \in {\operatorname{ulim}}_m ({\mathfrak{a}}_m \cdot {\mathfrak{b}}_m)$. This shows the inclusion $({\operatorname{ulim}}_m {\mathfrak{a}}_m) \cdot ({\operatorname{ulim}}_m {\mathfrak{b}}_m) \subseteq {\operatorname{ulim}}_m ({\mathfrak{a}}_m \cdot {\mathfrak{b}}_m)$.
We consider the converse inclusion. By the assumption, there exist $f_{m,1}, \dots , f_{m,l} \in {\mathfrak{a}}_m$ such that ${\mathfrak{a}}_m=(f_{m,1}, \dots, f_{m,l})$. Then we have ${\mathfrak{a}}_m \cdot {\mathfrak{b}}_m = \sum_i f_{m, i} \cdot {\mathfrak{b}}_m$, and hence we have $${\operatorname{ulim}}_m ({\mathfrak{a}}_m \cdot {\mathfrak{b}}_m)= \sum_i f_{\infty, i} \cdot ({\operatorname{ulim}}_m {\mathfrak{b}}_m),$$ where $f_{\infty, i} : = {\operatorname{ulim}}_m f_{m, i} \in {\operatorname{ulim}}_m {\mathfrak{a}}_m$ for every $i$, which completes the proof of the lemma.
Let $\{a_m \}_{m \in {\mathbb{N}}}$ be a sequence of real numbers for which there exist real numbers $M_1, M_2$ satisfying $M_1<a_m<M_2$ for every $m \in {\mathbb{N}}$. Then there exists a unique real number $w \in {\mathbb{R}}$ such that for every real number $\epsilon >0$, we have $$\{ m \in {\mathbb{N}}\mid |w-a_m| <\epsilon \} \in {\mathfrak{U}}.$$ We denote this number $w$ by ${\mathrm{sh}}( {\operatorname{ulim}}_m a_m)$ and call it the *shadow* of ${\operatorname{ulim}}_m a_m \in {{}^* {\mathbb{R}}}$.
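As an illustration of the definition (not needed in what follows): when the sequence converges in the ordinary sense, its shadow is the usual limit, independently of the chosen non-principal ultrafilter.

```latex
% Illustration: the shadow of a convergent sequence is its ordinary limit.
% For every eps > 0 the set { m : |a - a_m| < eps } is cofinite, hence in U.
\[
  a_m \xrightarrow[m \to \infty]{} a
  \quad \Longrightarrow \quad
  \mathrm{sh}\bigl( \operatorname{ulim}_m a_m \bigr) = a .
\]
```

For instance, ${\mathrm{sh}}({\operatorname{ulim}}_m 1/(m+1)) = 0$, even though ${\operatorname{ulim}}_m 1/(m+1)$ is a non-zero (infinitesimal) element of ${{}^* {\mathbb{R}}}$.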
Let $(R, {\mathfrak{m}}, k)$ be a local ring. Then, one can show that $( {{}^* R}, {{}^* {\mathfrak{m}}}, {{}^* k})$ is a local ring. However, even if $R$ is Noetherian, the ultrapower ${{}^* R}$ may not be Noetherian because we do not have the equation $\cap_{n \in {\mathbb{N}}} ({{}^* {\mathfrak{m}}})^n = 0$ in general.
Let $(R, {\mathfrak{m}})$ be a Noetherian local ring and $( {{}^* R}, {{}^* {\mathfrak{m}}})$ be the ultrapower. We define the *catapower* ${R_\#}$ as the quotient ring $${R_\#} : = {{}^* R}/ (\cap_{n} ({{}^* {\mathfrak{m}}})^n).$$
Let $(R, {\mathfrak{m}}, k)$ be a Noetherian local ring of equicharacteristic and $\widehat{R}$ be the ${\mathfrak{m}}$-adic completion of $R$. We fix a coefficient field $k \subseteq \widehat{R}$. Then we have $${R_\#} \cong \widehat{R} \ \widehat{\otimes}_k ({{}^* k}).$$ In particular, if $(R,{\mathfrak{m}})$ is an $F$-finite Noetherian normal local ring, then so is ${R_\#}$.
Let $(R, {\mathfrak{m}})$ be a Noetherian local ring, ${R_\#}$ be the catapower and $a_m \in R$ for every $m$. We denote by ${[ a_m ]_m} \in {R_\#}$ the image of ${\operatorname{ulim}}_m a_m \in {{}^* R}$ by the natural projection ${{}^* R} \to {R_\#}$. Let ${\mathfrak{a}}_m \subseteq R$ be an ideal for every $m \in {\mathbb{N}}$. We denote by $[ {\mathfrak{a}}_m]_m \subseteq {R_\#}$ the image of the ideal ${\operatorname{ulim}}_m {\mathfrak{a}}_m \subseteq {{}^* R}$ by the projection ${{}^* R} \to {R_\#}$.
\[ultra incl\] Let $(R, {\mathfrak{m}})$ be a Noetherian local ring, ${\mathfrak{a}}_m, {\mathfrak{b}}_m \subseteq R$ be ideals for every $m \in {\mathbb{N}}$. If we have $[{\mathfrak{a}}_m]_m \subseteq [{\mathfrak{b}}_m]_m$, then for every ${\mathfrak{m}}$-primary ideal ${\mathfrak{q}}\subseteq R$, we have $$\{ m \in {\mathbb{N}}\mid {\mathfrak{a}}_m \subseteq {\mathfrak{b}}_m + {\mathfrak{q}}\} \in {\mathfrak{U}}.$$
By the definition of the catapower, if $[{\mathfrak{a}}_m]_m \subseteq [{\mathfrak{b}}_m]_m$, then for every $n$ we have $${\operatorname{ulim}}_m {\mathfrak{a}}_m \subseteq {\operatorname{ulim}}_m {\mathfrak{b}}_m+ ({{}^* {\mathfrak{m}}})^n.$$
On the other hand, it follows from Lemma \[ulim prod\] that $({{}^* {\mathfrak{m}}})^n = {{}^* ({\mathfrak{m}}^n)}$. Therefore we have $$\begin{aligned}
{\operatorname{ulim}}_m {\mathfrak{a}}_m & \subseteq & ({\operatorname{ulim}}_m {\mathfrak{b}}_m) +{{}^* ({\mathfrak{m}}^n)} \\
& =& {\operatorname{ulim}}_m ({\mathfrak{b}}_m + {\mathfrak{m}}^n),\end{aligned}$$ which is equivalent to $$\{ m \in {\mathbb{N}}\mid {\mathfrak{a}}_m \subseteq {\mathfrak{b}}_m+ {\mathfrak{m}}^n \} \in {\mathfrak{U}}.$$ This implies the assertion in the lemma.
Variants of test ideals
=======================
In this section, we introduce some variants of test ideals by using the trace maps for the Frobenius morphisms and the $q$-adic expansion of a real number (Definitions \[variants1\] and \[variants2\]). We also introduce the stabilization exponent (Definition \[stab exp\]).
Let $q \ge 2$ be an integer, $t>0$ be a real number and $n \in {\mathbb{Z}}$ be an integer. We define the *$n$-th digit* of $t$ in base $q$ by $$t^{(n)} : ={\lceil t q^n -1 \rceil} - q {\lceil t q^{n-1} -1 \rceil} \in {\mathbb{Z}}.$$ We define the *$n$-th round up* and the *$n$-th truncation* of $t$ in base $q$ by $$\begin{aligned}
\langle t \rangle ^{n, q} &: =& {\lceil t q^n \rceil}/q^n \in {\mathbb{Q}}\textup{, and}\\
{\langle t \rangle_{n, q}} &: =& {\lceil t q^n -1 \rceil} / q^n \in {\mathbb{Q}},\end{aligned}$$ respectively.
\[qadic\] Let $q \ge 2$ be an integer, $t>0$ be a real number and $n \in {\mathbb{Z}}$ be an integer. Then the following hold.
1. $0 \le t^{(n)} <q$.
2. $t^{(n)}=0$ for all $n \ll 0$, while the sequence $\{ t^{(n)} \}_{n}$ is not eventually zero as $n \to \infty$.
3. $t = \sum_{m \in {\mathbb{Z}}} t^{(m)} \cdot q^{-m}$.
4. ${\langle t \rangle_{n, q}} = \sum_{m \le n} t^{(m)} \cdot q^{-m}$.
5. The sequence $\{ \langle t \rangle^{n,q} \}_{n \in {\mathbb{Z}}}$ is a descending chain which converges to $t$.
6. The sequence $\{ {\langle t \rangle_{n, q}} \}_{n \in {\mathbb{Z}}}$ is an ascending chain which converges to $t$.
These all follow easily from the definitions. For the assertion in (2), we note that if $t=s/q^m$ for some integers $s$ and $m$, then we have $t^{(n)}=q-1$ for all $n > m$.
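The statements of Lemma \[qadic\] are easy to confirm numerically with exact rational arithmetic. The following sketch (the helper names `digit` and `truncation` are ours, not from the text) checks (1) and (4) for $t = 5/6$ in base $q=3$, together with the remark in the proof about $t = s/q^m$:

```python
from fractions import Fraction
from math import ceil

def digit(t, q, n):
    """n-th digit t^(n) of t in base q: ceil(t q^n - 1) - q*ceil(t q^(n-1) - 1)."""
    qn = Fraction(q) ** n  # exact, also for negative n
    return ceil(t * qn - 1) - q * ceil(t * qn / q - 1)

def truncation(t, q, n):
    """n-th truncation <t>_{n,q} = ceil(t q^n - 1) / q^n."""
    qn = Fraction(q) ** n
    return Fraction(ceil(t * qn - 1)) / qn

q, t = 3, Fraction(5, 6)
digits = {n: digit(t, q, n) for n in range(-3, 9)}
# Lemma (1): every digit lies in [0, q).
assert all(0 <= d < q for d in digits.values())
# Lemma (4): the n-th truncation is the partial sum of digits up to n.
for n in range(0, 8):
    partial = sum(digits[m] * Fraction(q) ** (-m) for m in range(-3, n + 1))
    assert partial == truncation(t, q, n)
# Remark in the proof: t = s/q^m has digits q - 1 for all n > m (here t = 1/q).
assert all(digit(Fraction(1, 3), q, n) == q - 1 for n in range(2, 12))
print([digits[n] for n in range(0, 6)])  # -> [0, 2, 1, 1, 1, 1]
```

The printed digits encode the base-$3$ expansion $5/6 = 0.2111\ldots_3$; with the ${\lceil \cdot - 1 \rceil}$ convention the expansion never terminates, which is exactly the phenomenon used in (2).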
\[variants1\] Let $(X=\Spec R,{\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}=\prod_i {\mathfrak{a}}_i^{t_i})$ be a triple such that $t_i>0$ for all $i$, and let $e>0$ be an integer such that $(p^e-1)(K_X + {\Delta})$ is Cartier. For every integer $n \ge 0$, we define $$\begin{aligned}
\tau^{en}_{+} (R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}) &: =& \phi^{en}_{\Delta}(F^{en}_*({\mathfrak{a}}_1^{{\lceil t_1 p^{en} \rceil}} \cdots {\mathfrak{a}}_m^{{\lceil t_m p^{en} \rceil}} \cdot \tau(R, {\Delta}) )) \subseteq R \textup{ and} \\
\tau^{en}_{-} (R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}) &: =& \phi^{en}_{\Delta}(F^{en}_*({\mathfrak{a}}_1^{{\lceil t_1 p^{en} -1 \rceil}} \cdots {\mathfrak{a}}_m^{{\lceil t_m p^{en} -1 \rceil}}\cdot \tau(R, {\Delta}) )) \subseteq R.\end{aligned}$$
Let $(X=\Spec R, {\Delta}, {\mathfrak{a}}^t)$ be a triple such that $t>0$ and that ${\mathfrak{a}}$ is a principal ideal, and let $e$ be a positive integer such that $(p^e-1)(K_X+{\Delta})$ is Cartier; we set $q : = p^e$. Then it follows from [@BSTZ Lemma 5.4] that $$\begin{aligned}
\tau^{en}_{+}(R, {\Delta}, {\mathfrak{a}}^t) &=& \tau(R, {\Delta}, {\mathfrak{a}}^{\langle t \rangle^{n,q}}) \textup{, and}\\
\tau^{en}_{-}(R,{\Delta}, {\mathfrak{a}}^t)&=&\tau(R, {\Delta}, {\mathfrak{a}}^{{\langle t \rangle_{n, q}}}).\end{aligned}$$
By Proposition \[disc rat\], the sequence $\{ \tau_{+}^{en} (R, {\Delta}, {\mathfrak{a}}^t) \}_n$ is an ascending chain of ideals which converges to $\tau(R, {\Delta}, {\mathfrak{a}}^t)$ and the sequence $\{ \tau_{-}^{en} (R, {\Delta}, {\mathfrak{a}}^t) \}_n$ is a descending chain of ideals which eventually stabilizes.
Let $(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet})$ and $e$ be as in Definition \[variants1\]. Then the following hold. \[lower test basic\]
1. The sequence $\{ \tau^{en}_{+}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}) \}_{n \ge 0}$ is an ascending chain which converges to the test ideal $\tau(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet})$.
2. If $t_1 > 1$, then we have $$\tau^{en}_{+}(R, {\Delta}, {\mathfrak{a}}_1^{t_1} \cdots {\mathfrak{a}}_m^{t_m}) \supseteq {\mathfrak{a}}_1 \cdot \tau^{en}_{+}(R, {\Delta}, {\mathfrak{a}}_1^{t_1-1} \cdots {\mathfrak{a}}_m^{t_m}).$$ Moreover, if $t_1 > \mu_R({\mathfrak{a}}_1)$, then we have $$\tau^{en}_{+}(R, {\Delta}, {\mathfrak{a}}_1^{t_1} \cdots {\mathfrak{a}}_m^{t_m}) = {\mathfrak{a}}_1 \cdot \tau^{en}_{+}(R, {\Delta}, {\mathfrak{a}}_1^{t_1-1} \cdots {\mathfrak{a}}_m^{t_m}).$$
3. $\phi^e_{\Delta}(F^e_*(\tau^{en}_{+}(R, {\Delta}, {\mathfrak{a}}_\bullet^{p^e \cdot t_\bullet} )))=\tau^{e(n+1)}_{+}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet})$, where we set ${\mathfrak{a}}_\bullet^{p^e \cdot t_\bullet} : = \prod_i {\mathfrak{a}}_i^{p^e t_i}$.
The proof of (1) follows as in the case when $m=1$, see [@BSTZ Lemma 3.21]. If $t_1>\mu_R({\mathfrak{a}}_1)$, then by Lemma \[Skoda\] (1), we have ${\mathfrak{a}}_1^{{\lceil t_1 p^{en} \rceil} }={\mathfrak{a}}_1^{[p^{en}]} \cdot {\mathfrak{a}}_1^{{\lceil (t_1-1) p^{en} \rceil}}$, which proves (2). The assertion in (3) follows from the fact that $\phi^{e(n+1)}_{\Delta}=\phi^e_{{\Delta}} \circ F^e_* \phi^{en}_{{\Delta}}$ ([@Sch12 Theorem 3.11 (e)]).
\[stab exp\] Let $(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet})$ and $e$ be as in Definition \[variants1\]. We define the *stabilization exponent* of $(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}; e)$ by $${\mathrm{stab}}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet} ; e) : = \min \{ n \ge 0 \mid \tau^{en}_{+}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet})= \tau(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}) \}.$$
\[stab basic\] Let $(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}= \prod_{i=1}^m {\mathfrak{a}}_i^{t_i} )$ and $e$ be as in Definition \[variants1\]. Then the following hold.
1. If $t_1 > \mu_R({\mathfrak{a}}_1)$, then we have $${\mathrm{stab}}(R, {\Delta}, {\mathfrak{a}}_1^{t_1} \cdots {\mathfrak{a}}_m^{t_m}; e ) \le {\mathrm{stab}}(R, {\Delta}, {\mathfrak{a}}_1^{t_1-1} \cdots {\mathfrak{a}}_m^{t_m}; e ).$$
2. We have $${\mathrm{stab}}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet} ; e ) \le {\mathrm{stab}}(R, {\Delta}, {\mathfrak{a}}_\bullet^{p^e \cdot t_\bullet} ;e)+1.$$
3. If $t_i >\mu_R({\mathfrak{a}}_i)$ and $(p^e-1) t_i \in {\mathbb{N}}$ for every $i$, then for any integer $n \ge 0$, the inequality $n \ge {\mathrm{stab}}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet} ; e )$ holds if and only if $$\tau^{en}_{+}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet} )= \tau^{e(n+1)}_{+}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}).$$
The assertions in (1) and (2) follow from Proposition \[lower test basic\] (2) and (3), respectively.
For (3), it follows from Proposition \[lower test basic\] (2) and (3) that $$\begin{aligned}
\tau^{e(n+1)}_{+}( R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}) &=& \phi_{\Delta}^e(F^e_*(\tau^{en}_{+}(R, {\Delta},{\mathfrak{a}}_1^{p^e t_1} \cdots {\mathfrak{a}}_m^{p^e t_m}) ))\\
&=& \phi^e_{\Delta}(F^e_*({\mathfrak{a}}_1^{(p^e-1) t_1} \cdots {\mathfrak{a}}_m^{(p^e-1) t_m} \cdot \tau^{en}_{+}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}))). \end{aligned}$$ Therefore, if $\tau^{en}_{+}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet} )= \tau^{e(n+1)}_{+}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet})$, then we have $\tau^{e(n+1)}_{+}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet})= \tau^{e(n+2)}_{+}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet})$, which completes the proof.
\[uniform stab exp\] Let $(X=\Spec R, {\Delta}, {\mathfrak{a}}_\bullet = \prod_i {\mathfrak{a}}_i)$ be a triple and let $e$ be a positive integer such that $(p^e-1)(K_X+{\Delta})$ is Cartier. We define $${\widetilde{\mathrm{stab}}}(R, {\Delta}, {\mathfrak{a}}_\bullet ; e): = \sup_{t_1,\dots, t_m} \{ {\mathrm{stab}}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet} ; e)\},$$ where every $t_i$ runs through all positive rational numbers such that $(p^e-1)t_i \in {\mathbb{N}}$. Then we have ${\widetilde{\mathrm{stab}}}(R, {\Delta}, {\mathfrak{a}}_\bullet ;e) < \infty$. Moreover, for every integer $l \ge 0$ and rational numbers $t_1, \dots, t_m>0$ such that $p^{el}(p^e-1) t_i \in {\mathbb{N}}$, we have $${\mathrm{stab}}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet} ;e) \le {\widetilde{\mathrm{stab}}}(R, {\Delta}, {\mathfrak{a}}_\bullet ; e ) + l.$$
By Proposition \[stab basic\] (1), we have $${\widetilde{\mathrm{stab}}}(R, {\Delta}, {\mathfrak{a}}_\bullet ; e )= \sup_{t_1, \dots, t_m} \{ {\mathrm{stab}}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet} ; e ) \},$$ where every $t_i$ runs through all positive rational numbers such that $(p^e-1)t_i \in {\mathbb{N}}$ and $t_i \le \mu_R({\mathfrak{a}}_i)$. Hence we have ${\widetilde{\mathrm{stab}}}(R, {\Delta}, {\mathfrak{a}}_\bullet ; e )<\infty$.
The second statement follows from Proposition \[stab basic\] (2).
We next consider the sequence of ideals $\{\tau^{en}_{-} (R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}) \}_n$. In general, this sequence may fail to be a descending chain. In order to obtain a descending chain, we combine the definitions of $\tau_+$ and $\tau_-$ and define new variants of test ideals as follows. In fact, we will see later that these ideals do form a descending chain under some mild assumptions (Proposition \[upper test dc\]).
\[variants2\] Let $(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}= \prod_i {\mathfrak{a}}_i^{t_i} )$ and $e$ be as in Definition \[variants1\], ${\mathfrak{q}}\subseteq R$ be an ideal, and $n , u \ge 0$ be integers. We define $$\tau^{n,u}_{e,{\mathfrak{q}}}(R,{\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet} ) : = \phi^{e(n+u)}_{\Delta}(F^{e(n+u)}_*({\mathfrak{a}}_1^{p^{eu} {\lceil t_1 p^{en}-1 \rceil}} \cdots {\mathfrak{a}}_m^{p^{eu} {\lceil t_m p^{en}-1 \rceil}} \cdot {\mathfrak{q}}) ).$$ When ${\mathfrak{q}}= \tau(R,{\Delta})$, we denote it by ${\tau_{e}^{n,u}} (R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet} )$.
\[upper test basic\] Let $(X= \Spec R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet} = \prod_{i=1}^m {\mathfrak{a}}_i^{t_i})$ be a triple such that $t_i>0$ for every $i$ and $(q-1)(K_X+{\Delta})$ is Cartier for some $q=p^e$, ${\mathfrak{q}}\subseteq R$ be an ideal and $n,u \ge 0 $ be integers. Then the following hold.
1. For real numbers $0<s_i \le t_i$, we have $\tau^{n,u}_{e,{\mathfrak{q}}}(R, {\Delta}, {\mathfrak{a}}_\bullet^{s_\bullet} ) \supseteq \tau^{n,u}_{e,{\mathfrak{q}}}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}).$ Moreover, if ${\langle t_i \rangle_{n, q}} < s_i \le t_i$ for every $i$, then we have $\tau^{n,u}_{e,{\mathfrak{q}}}(R, {\Delta}, {\mathfrak{a}}_\bullet^{s_\bullet}) = \tau^{n,u}_{e,{\mathfrak{q}}}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet} ) $.
2. For ideals ${\mathfrak{b}}_i \subseteq {\mathfrak{a}}_i$ and ${\mathfrak{q}}' \subseteq {\mathfrak{q}}$, we have $\tau^{n,u}_{e,{\mathfrak{q}}'}(R, {\Delta}, {\mathfrak{b}}_\bullet^{t_\bullet}) \subseteq \tau^{n,u}_{e,{\mathfrak{q}}}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet})$.
3. If ${\mathfrak{a}}_1 \equiv {\mathfrak{b}}_1 \mod J$ for some ideal $J$ and ${\mathfrak{a}}_i={\mathfrak{b}}_i$ for every $i \ge 2$, then we have $$\tau^{n,u}_{e,{\mathfrak{q}}}(R,{\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}) \equiv \tau^{n,u}_{e,{\mathfrak{q}}}(R,{\Delta}, {\mathfrak{b}}_\bullet^{t_\bullet} ) \mod{\tau^{n,u}_{e,J \cdot {\mathfrak{q}}} (R, {\Delta}, \prod_{i=2}^m {\mathfrak{a}}_i^{t_i})}.$$ If ${\mathfrak{q}}\equiv {\mathfrak{q}}' \mod{J}$ for some ideals ${\mathfrak{q}}'$ and $J$, then we have $$\tau^{n,u}_{e,{\mathfrak{q}}}(R,{\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}) \equiv \tau^{n,u}_{e,{\mathfrak{q}}'}(R,{\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}) \mod{ \tau^{n,u}_{e,J}(R,{\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet})}.$$
4. If ${\mathfrak{q}}= {\mathfrak{a}}_{m+1}^{q^u {\lceil t_{m+1} q^n-1 \rceil} } \tau(R,{\Delta})$, then we have $\tau^{n,u}_{e,{\mathfrak{q}}}(R,{\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet} ) = \tau^{n,u}_{e}(R,{\Delta}, \prod_{i=1}^{m+1} {\mathfrak{a}}_i^{t_i})$.
5. If $t_1 > 1$, then we have $\tau^{n,u}_{e,{\mathfrak{q}}}(R,{\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}) \supseteq {\mathfrak{a}}_1 \cdot \tau^{n,u}_{e,{\mathfrak{q}}}(R,{\Delta}, {\mathfrak{a}}_1^{t_1-1} \cdots {\mathfrak{a}}_m^{t_m})$. Moreover, if $t_1 > \mu_R({\mathfrak{a}}_1)+(1/q^n)$, then we have $$\tau^{n,u}_{e,{\mathfrak{q}}}(R,{\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}) = {\mathfrak{a}}_1 \cdot \tau^{n,u}_{e,{\mathfrak{q}}}(R,{\Delta}, {\mathfrak{a}}_1^{t_1-1} \cdots {\mathfrak{a}}_m^{t_m}).$$
6. $\phi^e_{\Delta}(F^e_*(\tau^{n,u}_{e,{\mathfrak{q}}}(R,{\Delta}, {\mathfrak{a}}_\bullet^{p^e \cdot t_\bullet}) )) =\tau^{n+1,u}_{e,{\mathfrak{q}}}(R,{\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet})$.
7. The sequence $\{ \tau^{n,u}_{e}(R,{\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}) \}_{u \in {\mathbb{N}}}$ is an ascending chain of ideals which converges to $\tau(R, {\Delta}, \prod_i {\mathfrak{a}}_i^{{\langle t_i \rangle_{n, q}}})$.
8. If $u \ge {\widetilde{\mathrm{stab}}}(R, {\Delta}, {\mathfrak{a}}_\bullet ; e)$, then we have $${\tau_{e}^{n,u}} (R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet}) =\tau(R, {\Delta}, \prod_i {\mathfrak{a}}_i^{{\langle t_i \rangle_{n, q}}})$$ for every $n$.
9. Assume that $q^{u-1} \ge \mu_R({\mathfrak{a}}_i)$ and the $n$-th digit ${t_i}^{(n)}$ of $t_i$ in base $q$ is non-zero for every $i$. Then we have $\tau^{n,u}_{e,{\mathfrak{q}}}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet})= \tau^{n-1, u}_{e, {\mathfrak{q}}'}(R,{\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet})$, where ${\mathfrak{q}}' : = \phi^e_{\Delta}(F^e_*(\prod_i {\mathfrak{a}}_i^{q^{u} \cdot {t_i}^{(n)}} {\mathfrak{q}}))$.
The assertions in (1), (2), (3), (4) and (8) follow easily from the definitions. The assertions in (5), (6) and (7) follow from Proposition \[lower test basic\]. The assertion in (9) follows from Lemma \[Skoda\] (1).
\[upper test dc\] Let $(X=\Spec R,{\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet})$ be a triple such that $t_i>0$ for every $i$ and $(q-1)(K_X+{\Delta})$ is Cartier for some $q=p^e$, and $u>0$ be an integer such that $q^{u-1} \ge \max_i \mu_R({\mathfrak{a}}_i)$. Assume that $q(q-1) t_i \in {\mathbb{N}}$ for every $i$. Then the sequence $\{ {\tau_{e}^{n,u}}(R, {\Delta}, {\mathfrak{a}}_\bullet^{t_\bullet} ) \}_{n \ge 1}$ is a descending chain of ideals.
Since $q(q-1) t_i \in {\mathbb{N}}$, the $n$-th digit $t_i^{(n)}$ of $t_i$ in base $q$ is constant for $n \ge 2$, and by Lemma \[qadic\] (2) this constant digit is non-zero. Therefore, the assertion follows from Proposition \[upper test basic\] (2) and (9).
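The key step in the proof — that $q(q-1)t \in {\mathbb{N}}$ forces the base-$q$ digits $t^{(n)}$ to be constant and non-zero for all $n \ge 2$ — can be confirmed numerically. A small sketch (the helper name `digit` is ours, purely illustrative):

```python
from fractions import Fraction
from math import ceil

def digit(t, q, n):
    """n-th digit of t in base q: ceil(t q^n - 1) - q*ceil(t q^(n-1) - 1)."""
    qn = Fraction(q) ** n
    return ceil(t * qn - 1) - q * ceil(t * qn / q - 1)

q = 5
# Every t > 0 with q(q-1)t integral, sampled over a range of numerators.
for a in range(1, 61):
    t = Fraction(a, q * (q - 1))
    ds = [digit(t, q, n) for n in range(2, 12)]
    # The digits are constant and non-zero from n = 2 on.
    assert len(set(ds)) == 1 and ds[0] != 0, (a, ds)
print("digit t^(n) is constant and non-zero for all n >= 2")
```

Writing $q(q-1)t = a$ and $a = b(q-1)+r$ with $0 \le r < q-1$, the constant digit is $r$ if $r > 0$ and $q-1$ if $r = 0$, matching the remark in the proof of Lemma \[qadic\].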
Let $(X=\Spec R, {\Delta}, {\mathfrak{a}}^t)$ be a triple with $t>0$, let $I$ be an ${\mathfrak{m}}$-primary ideal, ${\mathfrak{b}}\subseteq R$ be a proper ideal, and let $e$ be a positive integer such that $(p^e-1)(K_X+{\Delta})$ is Cartier. Then we define $${{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{a}}^t; {\mathfrak{b}}) : = \inf \{ s > 0 \mid {\tau_{e}^{n,u}} (R, {\Delta}, {\mathfrak{a}}^t {\mathfrak{b}}^s) \subseteq I \} \in {\mathbb{R}}_{\ge 0}.$$
\[fpt basic\] With the above notation, the following hold.
1. $0 \le {{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{a}}^t ; {\mathfrak{b}}) \le {\ell \ell_R(R/I)}+ \mu_R({\mathfrak{b}})$.
2. $p^{en} \cdot {{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{a}}^t ;{\mathfrak{b}}) \in {\mathbb{Z}}$.
By Proposition \[upper test basic\] (5), we have $$\begin{aligned}
{\tau_{e}^{n,u}}(R, {\Delta}, {\mathfrak{a}}^t {\mathfrak{b}}^{{\ell \ell_R(R/I)}+\mu_R({\mathfrak{b}})}) &=& {\mathfrak{b}}^{{\ell \ell_R(R/I)}} \cdot {\tau_{e}^{n,u}} (R, {\Delta}, {\mathfrak{a}}^t {\mathfrak{b}}^{\mu_R({\mathfrak{b}})})\\
& \subseteq & {\mathfrak{b}}^{{\ell \ell_R(R/I)}} \subseteq I,\end{aligned}$$ which proves the assertion in (1).
The assertion in (2) follows from Proposition \[upper test basic\] (1).
\[ACC for bdd\] Let $(X=\Spec R,{\Delta})$ be a pair such that $(p^e-1)(K_X+{\Delta})$ is Cartier for some positive integer $e$, and set $q : = p^e$. Let $t >0$ be a rational number, and let $M, \mu>0$ and $u \ge 2$ be positive integers. Assume that
1. $q> \mu+ {\mathrm{emb}}(R)$, and
2. $q^m(q-1) t \in {\mathbb{N}}$ for some integer $m$.
Then, there exists a positive integer $n_1$ such that for every ideal ${\mathfrak{b}}\subseteq R$, if ${\mathfrak{b}}= {\mathfrak{a}}+ {\mathfrak{m}}^M$ for some ideal ${\mathfrak{a}}\subseteq R$ with $\mu_R({\mathfrak{a}}) \le \mu$, then we have ${\tau_{e}^{n,u}}(R,{\Delta}, {\mathfrak{b}}^t)= {\tau_{e}^{n_1,u}}(R, {\Delta}, {\mathfrak{b}}^t)$ for every $n \ge n_1$.
By Proposition \[upper test basic\] (6), it is enough to show the assertion in the case when $t > \mu+{\mathrm{emb}}(R)$ and $(p^e-1)t \in {\mathbb{N}}$. Set $n_1 : = {\ell_R}( \tau(R,{\Delta}) / ({\mathfrak{m}}^{M {\lceil t \rceil}} \cdot \tau(R,{\Delta})))$. We will prove that the assertion holds for this constant $n_1$.
Let ${\mathfrak{a}}\subseteq R$ be an ideal such that $\mu_R({\mathfrak{a}}) \le \mu$ and set ${\mathfrak{b}}: = {\mathfrak{a}}+ {\mathfrak{m}}^M$. We consider the sequence of ideals $\{ {\tau_{e}^{n,u}} (R, {\Delta}, {\mathfrak{b}}^t)\}_{n \ge 1}$. As in the proof of Proposition \[upper test dc\], by using Lemma \[Skoda\] (2) instead of Lemma \[Skoda\] (1), the sequence $\{ {\tau_{e}^{n,u}}(R, {\Delta}, {\mathfrak{b}}^t) \}_{n}$ is a descending chain. Moreover, since ${\mathfrak{b}}\supseteq {\mathfrak{m}}^M$, we have $$\begin{aligned}
{\tau_{e}^{n,u}}(R, {\Delta}, {\mathfrak{b}}^t) &\supseteq& {\tau_{e}^{n,u}}(R, {\Delta}, ({\mathfrak{m}}^{M})^{t})\\
& \supseteq & {\tau_{e}^{n,0}}(R, {\Delta}, ({\mathfrak{m}}^{M})^{ t}) \\
& \supseteq & {\mathfrak{m}}^{M {\lceil t \rceil}} \cdot \tau(R, {\Delta}).\end{aligned}$$ Since we have $$\tau(R,{\Delta}) \supseteq {\tau_{e}^{1,u}} (R, {\Delta}, {\mathfrak{b}}^t) \supseteq {\tau_{e}^{2,u}} (R, {\Delta}, {\mathfrak{b}}^t) \supseteq \dots \supseteq {\mathfrak{m}}^{M {\lceil t \rceil}} \cdot \tau(R, {\Delta}),$$ there exists an integer $1 \le m \le n_1$ such that $${\tau_{e}^{m,u}}(R,{\Delta}, {\mathfrak{b}}^t)= {\tau_{e}^{m+1,u}}(R, {\Delta}, {\mathfrak{b}}^t).$$
On the other hand, as in the proof of Proposition \[upper test basic\] (5), by using Lemma \[Skoda\] (2) instead of Lemma \[Skoda\] (1), we have $${\tau_{e}^{m+1,u}}(R, {\Delta}, {\mathfrak{b}}^{t'+1}) = {\mathfrak{b}}\cdot {\tau_{e}^{m,u}}(R, {\Delta}, {\mathfrak{b}}^{t'})$$ for any real number $t'>\mu+{\mathrm{emb}}(R)$. Then, as in the proof of Proposition \[stab basic\] (3), we have ${\tau_{e}^{n,u}}(R, {\Delta}, {\mathfrak{b}}^t)= {\tau_{e}^{n_1,u}}(R, {\Delta}, {\mathfrak{b}}^t)$ for every $n \ge n_1$, which completes the proof.
Rationality of the limit of $F$-pure thresholds
===============================================
In this section, we give uniform bounds for the denominators of $F$-jumping numbers (Proposition \[jump fin colen\]) and for the stabilization exponents (Proposition \[stab fin colen\]) of ${\mathfrak{m}}$-primary ideals with fixed colength. By using these bounds, we will prove Theorem \[intro BMS\].
\[jump fin colen\] Let $(X=\Spec R, {\Delta})$ be a pair such that $(p^e-1)(K_X+{\Delta})$ is Cartier for some integer $e>0$, and let $M>0$ be an integer. Then there exists an integer $N>0$ such that for any ideal ${\mathfrak{a}}\subseteq R$, if ${\mathfrak{a}}\supseteq {\mathfrak{m}}^M$, then any $F$-jumping number of $(R,{\Delta}; {\mathfrak{a}})$ is contained in $(1/N)\cdot {\mathbb{Z}}$.
Set $l : = {\ell_R}(R/ {\mathfrak{m}}^M) +\mu_R({\mathfrak{m}}^M)$ and $n : = {\ell_R}( \tau(R, {\Delta}) / \tau(R, {\Delta}, {\mathfrak{m}}^{M l}))$. We note that the module $\tau(R, {\Delta}) / \tau(R, {\Delta}, {\mathfrak{m}}^{M l})$ has finite length because the test ideals commute with localization ([@HT Proposition 3.1]). Let ${\mathfrak{a}}\subseteq R$ be an ideal such that ${\mathfrak{m}}^M \subseteq {\mathfrak{a}}$ and let $B \subseteq {\mathbb{R}}_{>0}$ be the set of all $F$-jumping numbers of $(R, {\Delta};{\mathfrak{a}})$.
Since we have $\mu_R( {\mathfrak{a}}) \le l$, it follows from [@BSTZ Corollary 3.27] that for every element $b \in B \cap {\mathbb{R}}_{> l}$, we have $b-1 \in B$. It also follows from [@BSTZ Lemma 3.25] that for every element $b \in B$, we have $p^e b \in B$. Moreover, since $\tau(R, {\Delta}) \supseteq \tau(R, {\Delta}, {\mathfrak{a}}^t) \supseteq \tau(R, {\Delta}, {\mathfrak{m}}^{M l})$ for every $t \le l$, the cardinality of the set $B \cap [0, l]$ is at most $n$. Then the assertion follows from the lemma below.
Let $l, n>0$ and $q \ge 2$ be integers. Then there exists an integer $N>0$ with the following property: if $B \subseteq {\mathbb{R}}_{\ge 0}$ is a subset such that
1. for every element $b \in B$, if $b >l$, then we have $b-1 \in B$,
2. if $b \in B$, then $q \cdot b \in B$, and
3. the cardinality of the set $B \cap [0, l]$ is at most $n$,
then we have $B \subseteq (1/N) \cdot {\mathbb{Z}}$.
The proof is essentially the same as that of [@BMS1 Proposition 3.8]. Set $N : = q^n (q^{n !}-1)$, where $n!$ is the factorial of $n$.
For every element $b \in B$ and every integer $m \ge 0$, we define $b_m \in B \cap [0, l]$ by $$b_m := (q^m b - \lfloor q^m b \rfloor)+ \min \{ l-1, \lfloor q^m b \rfloor \}.$$
If $b \not\in (1/N) \cdot {\mathbb{Z}}$, then $b_0, b_1, \dots, b_n$ are $n+1$ distinct elements of $B \cap [0, l]$, which contradicts condition (3).
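The key computation behind this argument is that a repetition $b_i = b_j$ with $i < j$ forces $(q^j - q^i) b \in {\mathbb{Z}}$, so the denominator of $b$ divides $q^i(q^{j-i}-1)$, which in turn divides $N = q^n(q^{n!}-1)$ once $i \le n$ and $j - i \le n$. This can be checked numerically; in the following sketch the sample values $q=2$, $l=2$, $b=1/3$ are our own choices for illustration and do not come from the text.

```python
from fractions import Fraction
from math import floor

def b_shift(b, q, l, m):
    """b_m = frac(q^m b) + min(l - 1, floor(q^m b)), as in the lemma."""
    x = Fraction(q) ** m * b
    return (x - floor(x)) + min(l - 1, floor(x))

# Sample values (our choice): q = 2, l = 2, b = 1/3.
q, l, b = 2, 2, Fraction(1, 3)
seq = [b_shift(b, q, l, m) for m in range(6)]

# The sequence repeats: here b_2 == b_4.
i, j = 2, 4
assert seq[i] == seq[j]

# A repetition forces (q^j - q^i) * b to be an integer, so the
# denominator of b divides q^i * (q^(j-i) - 1), hence divides N.
assert ((q**j - q**i) * b).denominator == 1
```

The assertions illustrate the contrapositive of the statement in the proof: any coincidence among the $b_m$ pins the denominator of $b$ down to a divisor of $N$.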
\[stab fin colen\] Let $(X=\Spec R, {\Delta})$ be a pair such that $(p^e-1)(K_X+{\Delta})$ is Cartier for some integer $e>0$, and let $M>0$ be an integer. Then there exists $u_0>0$ such that for every ideal ${\mathfrak{a}}\supseteq {\mathfrak{m}}^M$, we have $${\widetilde{\mathrm{stab}}}( R, {\Delta}, {\mathfrak{a}};e) \le u_0.$$
Set $l := {\ell_R}(R/ {\mathfrak{m}}^M) + \mu_R ({\mathfrak{m}}^M)$ and take an integer $n_0 > 0$ such that $p^{e (n_0 -1)}>l$. Let ${\mathfrak{a}}\subseteq R$ be an ideal such that ${\mathfrak{a}}\supseteq {\mathfrak{m}}^M$ and let $t>0$ be a rational number such that $(p^e-1) t \in {\mathbb{N}}$.
We first consider the case when $l <t \le l p^{e n_0} $. In this case, by Proposition \[lower test basic\] (1), the sequence $\{ \tau^{en}_{+}(R, {\Delta}, {\mathfrak{a}}^t) \}_{n \ge 0}$ is an ascending chain such that $$\tau(R, {\Delta}) \supseteq \tau^{en}_{+}(R, {\Delta}, {\mathfrak{a}}^t) \supseteq \tau^{0}_{+}(R, {\Delta}, {\mathfrak{a}}^t) = {\mathfrak{a}}^{{\lceil t \rceil}} \cdot \tau(R, {\Delta}) \supseteq {\mathfrak{m}}^{l M p^{e n_0} } \cdot \tau(R, {\Delta})$$ for every $n$. Therefore, there exists an integer $0 \le n < {\ell_R}(\tau(R, {\Delta})/ ({\mathfrak{m}}^{l M p^{e n_0} } \cdot \tau(R, {\Delta})))$ such that $$\tau^{en}_{+}(R, {\Delta}, {\mathfrak{a}}^t)=\tau^{e(n+1)}_{+}(R, {\Delta}, {\mathfrak{a}}^t).$$ By Proposition \[stab basic\] (3), we have $${\mathrm{stab}}(R, {\Delta}, {\mathfrak{a}}^t ;e) \le n \le {\ell_R}(\tau(R, {\Delta})/ ({\mathfrak{m}}^{l M p^{e n_0} } \cdot \tau(R, {\Delta}))).$$
We next consider the case when $t \le l$. Since $l <t p^{e n_0} \le l p^{e n_0} $, it follows from Proposition \[stab basic\] (2) that $$\begin{aligned}
{\mathrm{stab}}(R, {\Delta}, {\mathfrak{a}}^t ; e) &\le& {\mathrm{stab}}(R, {\Delta}, {\mathfrak{a}}^{t p^{e n_0}} ; e) +n_0\\
&\le& {\ell_R}(\tau(R, {\Delta})/ ({\mathfrak{m}}^{l M p^{e n_0} } \cdot \tau(R, {\Delta}))) +n_0.\end{aligned}$$ Therefore, $u_0 : ={\ell_R}(\tau(R, {\Delta})/ ({\mathfrak{m}}^{l M p^{e n_0}} \cdot \tau(R, {\Delta}))) +n_0$ satisfies the property.
\[cata tau\] Let $(X=\Spec R, {\Delta})$ be a pair such that $(p^e-1)(K_X+{\Delta})$ is Cartier for some integer $e>0$, $\{ {\mathfrak{a}}_m \}_{m \in {\mathbb{N}}}$ be a family of ideals of $R$ and $t>0$ be a real number. Fix a non-principal ultrafilter ${\mathfrak{U}}$. Let $({R_\#}, {{\mathfrak{m}}_\#})$ be the catapower of the local ring $(R, {\mathfrak{m}})$, ${{\Delta}_\#}$ be the flat pullback of ${\Delta}$ to $\Spec {R_\#}$ and ${\mathfrak{a}}_\infty := {[ {\mathfrak{a}}_m ]_m} \subseteq {R_\#}$. If there exists a positive integer $M$ such that ${\mathfrak{a}}_m \supseteq {\mathfrak{m}}^M$ for every $m$, then we have $$\tau ({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty^t) = {[ \tau(R, {\Delta}, {\mathfrak{a}}_m^t) ]_m} \subseteq {R_\#}.$$
We first consider the case when $t$ is a rational number. By enlarging $e$, we may assume that $p^{en}(p^e-1)t \in {\mathbb{Z}}$ for some integer $n \ge 0$. Take a positive integer $u$ as in Proposition \[stab fin colen\]. Then we have $$\tau(R,{\Delta}, {\mathfrak{a}}_m^t) = \tau^{e(n+u)}_{+}(R, {\Delta}, {\mathfrak{a}}^t_m),$$ for every $m$. By enlarging $u$, we may assume that $$\tau({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty^t) = \tau^{e(n+u)}_{+}({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty^t).$$ Since $\mu_R({\mathfrak{a}}_m) \le {\ell_R}(R/{\mathfrak{m}}^M) + \mu_R({\mathfrak{m}}^M)$ for every $m$, it follows from Lemma \[ulim prod\] that $$({\mathfrak{a}}_\infty)^s={[ ({\mathfrak{a}}_m)^s ]_m}$$ for every integer $s>0$. Combining with Proposition \[test base change\] and \[ultra field ext\], we have $$\begin{aligned}
\tau^{e l}_{+} ({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty^t) &=&\phi^{e l}_{{{\Delta}_\#}}(F^{e l}_*({\mathfrak{a}}^{{\lceil t p^{e l} \rceil}}_{\infty} \cdot \tau({R_\#}, {{\Delta}_\#})))\\
&=& \phi^{e l}_{{{\Delta}_\#}} (F^{e l}_* {[ {\mathfrak{a}}^{{\lceil t p^{e l } \rceil}}_{m} \cdot \tau(R,{\Delta}) ]_m})\\
&=& {[ \phi^{e l}_{{\Delta}}(F^{e l}_*({\mathfrak{a}}^{{\lceil t p^{e l} \rceil}}_{m} \cdot \tau(R,{\Delta}))) ]_m}\\
&=& {[ \tau^{e l}_{+}(R, {\Delta}, {\mathfrak{a}}_m^t) ]_m} \subseteq {R_\#}\end{aligned}$$ for every integer $l$. Therefore, we have $$\tau ({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty^t) = {[ \tau(R, {\Delta}, {\mathfrak{a}}_m^t) ]_m} \subseteq {R_\#}.$$
We next consider the case when $t$ is not a rational number. For every sufficiently large integer $n$, we have $$\begin{aligned}
\tau ({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty^t) &=& \tau^{en}_{+}({R_\#},{{\Delta}_\#}, {\mathfrak{a}}_\infty^t)\\
& =& {[ \tau^{en}_{+}(R, {\Delta}, {\mathfrak{a}}_m^t) ]_m} \\
&\subseteq& {[ \tau(R, {\Delta}, {\mathfrak{a}}_m^t) ]_m} \subseteq {R_\#}.\end{aligned}$$ For the converse inclusion, by Proposition \[disc rat\], we can take a rational number $t'$ such that $t' < t$ and $\tau({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty^{t})=\tau({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty^{t'})$. Then, we have $$\begin{aligned}
\tau({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty^{t})&=&\tau({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty^{t'})\\
&=& {[ \tau(R, {\Delta}, {\mathfrak{a}}_m^{t'}) ]_m} \\
& \supseteq & {[ \tau(R, {\Delta}, {\mathfrak{a}}_m^{t}) ]_m},\end{aligned}$$ which completes the proof.
\[fpt sh\] With the notation above, let $I \subseteq R$ be an ${\mathfrak{m}}$-primary ideal. Assume that ${\mathfrak{m}}^M \subseteq {\mathfrak{a}}_m \subseteq {\mathfrak{m}}$ for every $m$. Then there exists $T \in {\mathfrak{U}}$ such that for all $m \in T$, we have $${\mathrm{fjn}}^I (R, {\Delta}; {\mathfrak{a}}_m)= {\mathrm{fjn}}^{I \cdot R_{\#}} ( {R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty).$$
Set $t := {\mathrm{fjn}}^{I \cdot {R_\#}}({R_\#}, {{\Delta}_\#} ; {\mathfrak{a}}_\infty) \in {\mathbb{R}}_{\ge 0}$. If $\tau(R, {\Delta}) \subseteq I$, then we have ${\mathrm{fjn}}^I (R, {\Delta}; {\mathfrak{a}}_m)=0$ for every $m \in {\mathbb{N}}$ and ${\mathrm{fjn}}^{I \cdot {R_\#}} ({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty ) =0$. Therefore, we may assume that $\tau(R, {\Delta}) \not\subseteq I$. Since ${\mathfrak{a}}_\infty \neq (0)$, it follows from Lemma \[test inc\] (2) that $t >0$.
It follows from Proposition \[cata tau\] that we have $${[ \tau( R, {\Delta}, {\mathfrak{a}}_m^t) ]_m} = \tau( {R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty^t) \subseteq I \cdot {R_\#}.$$ Since $I$ is ${\mathfrak{m}}$-primary, it follows from Lemma \[ultra incl\] that there exists $S_1 \in {\mathfrak{U}}$ such that $\tau(R, {\Delta}, {\mathfrak{a}}_m^t) \subseteq I$ for every $m \in S_1$. Therefore ${\mathrm{fjn}}^I (R, {\Delta}; {\mathfrak{a}}_m) \le {\mathrm{fjn}}^{I \cdot {R_\#}} ({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty)$ for every $m \in S_1$.
On the other hand, by Proposition \[jump fin colen\], there exists $0 < t' < t$ such that for every ideal ${\mathfrak{b}}\supseteq {\mathfrak{m}}^M$, if $ t' < {\mathrm{fjn}}^I (R, {\Delta}; {\mathfrak{b}})$, then $t \le {\mathrm{fjn}}^I (R, {\Delta}; {\mathfrak{b}})$. Since $t'<t$, we have $${[ \tau( R, {\Delta}, {\mathfrak{a}}_m^{t'}) ]_m} = \tau( {R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty^{t'}) \not\subseteq I \cdot {R_\#}.$$ Hence, we have $${\operatorname{ulim}}_{m} \tau(R, {\Delta}, {\mathfrak{a}}_m^{t'}) \not\subseteq {{}^* I}.$$ Therefore, there exists $S_2 \in {\mathfrak{U}}$ such that $\tau(R, {\Delta}, {\mathfrak{a}}_m^{t'}) \not\subseteq I$ for every $m \in S_2$. Then $T := S_1 \cap S_2$ satisfies the assertion.
\[subadd\] Let $(X=\Spec R, {\Delta})$ be a pair such that $K_X+{\Delta}$ is ${\mathbb{Q}}$-Cartier, let $I$ be an ${\mathfrak{m}}$-primary ideal, and let ${\mathfrak{a}}, {\mathfrak{b}}\subseteq R$ be proper ideals. Then we have $${\mathrm{fjn}}^I(R, {\Delta};{\mathfrak{a}}+{\mathfrak{b}}) \le {\mathrm{fjn}}^I(R, {\Delta}; {\mathfrak{a}})+ {\mathrm{fjn}}^I (R, {\Delta}; {\mathfrak{b}}).$$
As in the proof of [@Tak04 Theorem 3.1], for every real number $c \ge 0$, we can show that $$\tau(R, {\Delta}, ({\mathfrak{a}}+{\mathfrak{b}})^c) = \sum_{u,v \ge 0, u+v =c} \tau( R, {\Delta}, {\mathfrak{a}}^u {\mathfrak{b}}^v).$$
Set $t := {\mathrm{fjn}}^I(R, {\Delta}; {\mathfrak{a}})$ and $s := {\mathrm{fjn}}^I(R, {\Delta}; {\mathfrak{b}})$. Then we have $$\tau(R, {\Delta}, ({\mathfrak{a}}+{\mathfrak{b}})^{t+s}) = \sum_{u,v \ge 0, u+v =s+t} \tau( R, {\Delta}, {\mathfrak{a}}^u {\mathfrak{b}}^v) \subseteq \tau (R, {\Delta}, {\mathfrak{a}}^t) + \tau(R, {\Delta}, {\mathfrak{b}}^s) \subseteq I.$$
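For orientation, the characteristic-zero analogue of this subadditivity can be checked directly for log canonical thresholds of monomial ideals. The following illustration is ours and is not part of the argument: for ${\mathfrak{a}}=(x^2)$ and ${\mathfrak{b}}=(y^2)$ in $k[x,y]$, Howald's formula for monomial ideals gives $\mathrm{lct}\bigl((x^a,y^b)\bigr) = \tfrac{1}{a}+\tfrac{1}{b}$, so

```latex
\mathrm{lct}\bigl((x^2)\bigr)=\tfrac{1}{2},\qquad
\mathrm{lct}\bigl((y^2)\bigr)=\tfrac{1}{2},\qquad
\mathrm{lct}\bigl((x^2)+(y^2)\bigr)=\mathrm{lct}\bigl((x^2,y^2)\bigr)
  =\tfrac{1}{2}+\tfrac{1}{2}=1,
```

and the subadditivity inequality holds with equality in this case.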
\[fpt sh2\] Let $(X=\Spec R, {\Delta})$ be a pair such that $(p^e-1)(K_X+{\Delta})$ is Cartier for some integer $e>0$, $({R_\#}, {{\mathfrak{m}}_\#})$ be the catapower of $(R, {\mathfrak{m}})$, ${{\Delta}_\#}$ be the flat pullback of ${\Delta}$ to $\Spec {R_\#}$, $I \subseteq R$ be an ${\mathfrak{m}}$-primary ideal, $\{ {\mathfrak{a}}_m \}_{m \in {\mathbb{N}}}$ be a family of proper ideals and ${\mathfrak{a}}_\infty : = {[ {\mathfrak{a}}_m ]_m} \subseteq {R_\#}$. Then we have $${\mathrm{sh}}( {\operatorname{ulim}}_m {\mathrm{fjn}}^I (R, {\Delta}; {\mathfrak{a}}_m) ) = {\mathrm{fjn}}^{I\cdot R_\#} ({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty) \in {\mathbb{Q}}.$$ In particular, if the limit $\lim_{m \to \infty} {\mathrm{fjn}}^I (R, {\Delta}; {\mathfrak{a}}_m)$ exists, then we have $$\lim_{m\to \infty} {\mathrm{fjn}}^I (R, {\Delta}; {\mathfrak{a}}_m) = {\mathrm{fjn}}^{I\cdot R_\#} ({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty).$$
The proof is essentially the same as the proof of [@BMS2 Theorem 1.2]. If $\tau(R, {\Delta}) \subseteq I$, then the assertion in the theorem is trivial. Therefore, we may assume that $\tau(R, {\Delta}) \not\subseteq I$.
For every integer $M>0$, we set ${\mathfrak{b}}_{\infty ,M} : = {\mathfrak{a}}_\infty + ({{\mathfrak{m}}_\#})^M$ and ${\mathfrak{b}}_{m,M} : = {\mathfrak{a}}_m + {\mathfrak{m}}^M$ for every integer $m$. We write $s := {\mathrm{fjn}}^{I \cdot {R_\#}}({R_\#}, {{\Delta}_\#}; {{\mathfrak{m}}_\#})$.
By Lemma \[subadd\], we have $$\label{BMS1}
| {\mathrm{fjn}}^{I \cdot {R_\#}} ({R_\#}, {{\Delta}_\#} ; {\mathfrak{a}}_\infty) - {\mathrm{fjn}}^{I \cdot {R_\#}} ({R_\#}, {{\Delta}_\#}; {\mathfrak{b}}_{\infty, M}) | \le s/M$$ for every $M$.
By Proposition \[test base change\] (4), we have $s = {\mathrm{fjn}}^I(R, {\Delta}; {\mathfrak{m}})$. Therefore, it follows from Lemma \[subadd\] that $$\label{BMS3}
| {\mathrm{fjn}}^I(R, {\Delta}; {\mathfrak{a}}_m) - {\mathrm{fjn}}^I (R, {\Delta}; {\mathfrak{b}}_{m,M}) | \le s/M$$ for every $m$ and $M$.
On the other hand, since ${\mathfrak{b}}_{\infty, M} = {[ {\mathfrak{b}}_{m, M} ]_m}$, it follows from Proposition \[fpt sh\] that there exists $T_M \in {\mathfrak{U}}$ such that $$\label{BMS2}
{\mathrm{fjn}}^{I \cdot {R_\#}} ({R_\#}, {{\Delta}_\#}; {\mathfrak{b}}_{\infty, M}) = {\mathrm{fjn}}^I(R, {\Delta}; {\mathfrak{b}}_{m,M})$$ for every $m \in T_M$.
By combining the equations (\[BMS1\]), (\[BMS3\]), and (\[BMS2\]), we have $$| {\mathrm{fjn}}^{I \cdot {R_\#}} ({R_\#}, {{\Delta}_\#}; {\mathfrak{a}}_\infty) - {\mathrm{fjn}}^I(R, {\Delta}; {\mathfrak{a}}_{m})| \le 2s/M$$ for every $m \in T_M$.
Since $M$ can be taken arbitrarily large, it follows from the definition of the shadow that $${\mathrm{sh}}( {\operatorname{ulim}}_m {\mathrm{fjn}}^I (R, {\Delta}; {\mathfrak{a}}_m) ) = {\mathrm{fjn}}^{I \cdot {R_\#}} ({R_\#}, {{\Delta}_\#} ; {\mathfrak{a}}_\infty),$$ which completes the proof.
Proof of Main Theorem
=====================
In this section, we introduce Condition $(\star)$ (Definition \[def condA\]) which plays the key role in the proof of the main theorem and we prove some properties of Condition $(\star)$ (Proposition \[B to A\] and Proposition \[A to C\]). By combining them with Proposition \[ACC for bdd\] and Theorem \[fpt sh2\], we give the proof of the main theorem (Theorem \[main\]).
\[obs\] Let $X$ be a normal variety over a field $k$ of characteristic zero, ${\Delta}$ be an effective ${\mathbb{Q}}$-Weil divisor on $X$ such that $K_X+{\Delta}$ is ${\mathbb{Q}}$-Cartier, ${\mathfrak{a}}\subseteq {\mathcal{O}}_X$ be a non-zero coherent ideal sheaf, $t \ge 0$ be a rational number, $x \in X$ be a closed point and ${\mathfrak{m}}\subseteq {\mathcal{O}}_X$ be the maximal ideal at $x$. We consider the *log canonical threshold* $${\mathrm{lct}}_x(X, {\Delta}, {\mathfrak{a}}^t ; {\mathfrak{m}}) : = \inf \{ s \ge 0 \mid (X, {\Delta}, {\mathfrak{a}}^t {\mathfrak{m}}^{s}) \textup{ is not log canonical at }x \}.$$
By considering a log resolution of $(X, {\Delta})$, ${\mathfrak{a}}$ and ${\mathfrak{m}}$, we can show that there exist a real number $t'<t$ and rational numbers $a, b$ such that $$\label{LC polytope}
{\mathrm{lct}}_x (X, {\Delta}, {\mathfrak{a}}^s ; {\mathfrak{m}}) = as +b$$ for every $t' < s < t$.
Assume that there exist integers $q \ge 2$ and $m \ge 0$ such that $q^m(q-1) t \in {\mathbb{N}}$. Then there exists a constant $l>0$ such that the $n$-th digit of $t$ in base $q$ satisfies $t^{(n)}=l$ for every $n>m$. Set $N := -a l /q$. Then we have $$\label{condA char0}
{\mathrm{lct}}_x(X, {\Delta}, {\mathfrak{a}}^{{\langle t \rangle_{n+1, q}}}; {\mathfrak{m}}) = {\mathrm{lct}}_x(X, {\Delta}, {\mathfrak{a}}^{{\langle t \rangle_{n, q}}} ; {\mathfrak{m}}) -N/q^n$$ for sufficiently large $n$.
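The eventual constancy of the base-$q$ digits can be seen concretely. In the sketch below, the sample values $t=5/6$ and $q=3$ (so that $q(q-1)t = 5 \in {\mathbb{N}}$, i.e. $m=1$) are our own choices for illustration.

```python
from fractions import Fraction

def base_q_digits(t, q, k):
    """First k digits of the base-q expansion of the fractional part of t."""
    digits = []
    x = t - int(t)
    for _ in range(k):
        x *= q
        d = int(x)          # floor, since x >= 0
        digits.append(d)
        x -= d
    return digits

t, q = Fraction(5, 6), 3    # q * (q - 1) * t = 5 is an integer (m = 1)
ds = base_q_digits(t, q, 8)
# Digits beyond position m = 1 are the constant l = 1.
assert ds[0] == 2 and all(d == 1 for d in ds[1:])
```

Here the expansion is $5/6 = 0.2111\dots$ in base $3$, so the digits $t^{(n)}$ with $n > m = 1$ all equal $l = 1$.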
Motivated by the observation above, we define the following condition.
\[def condA\] Let $(X=\Spec R, {\Delta}, {\mathfrak{a}}^t)$ be a triple such that $t>0$ and $(p^e-1)(K_X+{\Delta})$ is Cartier for some integer $e>0$, $I \subseteq R$ be an ${\mathfrak{m}}$-primary ideal and $u, N \ge 0$ be integers. We say that $(R, {\Delta}, {\mathfrak{a}}^t, I, e, u, N)$ satisfies *Condition $(\star)$* if for every $n \ge 0$, we have $${{\mathrm{fjn}}^{I, n+1, u}_e}(R,{\Delta}, {\mathfrak{a}}^t; {\mathfrak{m}}) \ge {{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{a}}^t; {\mathfrak{m}}) -N / p^{en}.$$
\[CondA rmk\] If we have $u \ge {\widetilde{\mathrm{stab}}}(R, {\Delta}, {\mathfrak{a}}, {\mathfrak{m}};e)$, then we have $${{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{a}}^t ; {\mathfrak{m}}) = \langle {\mathrm{fjn}}^I (R, {\Delta}, {\mathfrak{a}}^{{\langle t \rangle_{n, q}}} ; {\mathfrak{m}}) \rangle^{n,q},$$ where we write $q : = p^e$. Therefore, Condition $(\star)$ can be regarded as an analogue of the equation \[condA char0\] in Observation \[obs\]. See also Corollary \[CondA single\] below.
We also note that the equation \[LC polytope\] in Observation \[obs\] may not hold for $F$-pure thresholds (cf. [@Per Example 5.3]).
We first give a sufficient condition for Condition $(\star)$.
\[B to A\] Let $(X=\Spec R, {\Delta}, {\mathfrak{a}}^t)$ be a triple such that $t>0$ and $(p^e-1)(K_X+{\Delta})$ is Cartier for some $e>0$, let $I \subseteq R$ be an ${\mathfrak{m}}$-primary ideal, let $0 < l< p^e$ be a positive integer and let $n_0 \ge 0 $ and $ u \ge 2$ be integers. Set $$q=p^e, \ N := q^{n_0+3} {\mathrm{emb}}(R), \ t_0 : =\frac{q^2}{q-1}, \textup{ and } M_0 := \frac{(q^{n_0+6}-1) {\mathrm{emb}}(R) }{q-1}.$$ Assume that
1. $q > \mu_R({\mathfrak{a}})$,
2. $q>{\ell \ell_R(R/I)}$,
3. the $n$-th digit of $t$ in base $q$ satisfies $t^{(n)}=l$ for every $n \ge 2$, and
4. ${\tau_{e}^{n_0+1, u}}(R, {\Delta}, {\mathfrak{a}}^{l t_0}) + {\mathfrak{m}}^{M_0} \cdot \tau(R, {\Delta}) \supseteq {\tau_{e}^{n_0, u}} (R, {\Delta}, {\mathfrak{a}}^{l t_0})$.
Then, $(R, {\Delta}, {\mathfrak{a}}^t, I, e, u, N)$ satisfies Condition $(\star)$.
By induction on $n \ge 0$, we will show the inequality $$\begin{aligned}
\label{eqn A}
{{\mathrm{fjn}}^{I, n+1, u}_e}(R,{\Delta}, {\mathfrak{a}}^t; {\mathfrak{m}}) \ge {{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{a}}^t; {\mathfrak{m}}) -N / q^n.\end{aligned}$$
[**Step.1**]{} We consider the case when $n \le n_0 +2$. In this case, we have $$N/q^n \ge q \cdot {\mathrm{emb}}(R) \ge {\ell \ell_R(R/I)}+ {\mathrm{emb}}(R) .$$ By Proposition \[fpt basic\] (1), we have $${{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{a}}^t ; {\mathfrak{m}}) \le {\ell \ell_R(R/I)}+ {\mathrm{emb}}(R).$$ Hence we have $${{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{a}}^t; {\mathfrak{m}}) -N / q^n \le 0,$$ which implies the inequality \[eqn A\].
[**Step.2**]{} From now on, we assume $n \ge n_0 + 3$. Set $r : = q^n \cdot {{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{a}}^t; {\mathfrak{m}}) $. By Proposition \[fpt basic\], we have $r \in {\mathbb{Z}}$. We first consider the case when $$r \le q^{n_0} \cdot {\mathrm{emb}}(R).$$ In this case, we have $${{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{a}}^t; {\mathfrak{m}}) -N / q^n \le 0,$$ which shows the inequality \[eqn A\]. Therefore, we may assume $r> q^{n_0} \cdot {\mathrm{emb}}(R).$
[**Step.3**]{} Set $s: = {\lceil r/q^{n_0} \rceil} -{\mathrm{emb}}(R)-1 $ and $s' : = {\lceil (s+M_0)/q^2 \rceil}$.
In this step, we will show the inclusion $$\label{B to A Step3}
{\tau_{e}^{n,u}} (R ,{\Delta}, {\mathfrak{a}}^t {\mathfrak{m}}^{r/q^n}) \subseteq {\tau_{e}^{n+1,u}} (R, {\Delta}, {\mathfrak{a}}^t {\mathfrak{m}}^{s/q^{n-n_0}} ) + {\tau_{e}^{n-n_0-2,2}}(R, {\Delta}, {\mathfrak{a}}^t {\mathfrak{m}}^{s'/q^{n-n_0-2}}).$$
By the assumption (3), $\alpha:= tq^{n-n_0} - l t_0 = q^2 {\lceil tq^{n-n_0-2} -1 \rceil}$ is an integer. It follows from Proposition \[upper test basic\] (1), (5), and (6) that $$\begin{aligned}
{\tau_{e}^{n,u}}(R,{\Delta},{\mathfrak{a}}^t {\mathfrak{m}}^{r/q^n})
&=&\phi^{e(n-n_0)}_{\Delta}(F^{e(n-n_0)}_* ({\tau_{e}^{n_0,u}}(R,{\Delta}, {\mathfrak{a}}^{tq^{n-n_0}} {\mathfrak{m}}^{r/q^{n_0}})))\\
&\subseteq & \phi^{e(n-n_0)}_{\Delta}(F^{e(n-n_0)}_* ({\mathfrak{a}}^{\alpha} {\mathfrak{m}}^{s} {\tau_{e}^{n_0,u}}(R,{\Delta}, {\mathfrak{a}}^{l t_0}))).\end{aligned}$$
Similarly, we have $$\begin{aligned}
{\tau_{e}^{n+1,u}}(R,{\Delta},{\mathfrak{a}}^t {\mathfrak{m}}^{s/q^{n-n_0}})
&=&\phi^{e(n-n_0)}_{\Delta}(F^{e(n-n_0)}_* ({\tau_{e}^{n_0+1,u}}(R,{\Delta}, {\mathfrak{a}}^{tq^{n-n_0}} {\mathfrak{m}}^{s})))\\
&\supseteq & \phi^{e(n-n_0)}_{\Delta}(F^{e(n-n_0)}_* ({\mathfrak{a}}^{\alpha} {\mathfrak{m}}^{s} {\tau_{e}^{n_0+1,u}}(R,{\Delta}, {\mathfrak{a}}^{l t_0} ))).\end{aligned}$$
On the other hand, it follows from the definitions that $$\begin{aligned}
{\tau_{e}^{n-n_0-2,2}}(R, {\Delta}, {\mathfrak{a}}^t {\mathfrak{m}}^{s'/q^{n-n_0-2}})
&=& \phi^{e(n-n_0)}_{\Delta}(F^{e(n-n_0)}_* ({\mathfrak{a}}^{\alpha} {\mathfrak{m}}^{q^2(s'-1)} \tau(R,{\Delta})))\\
&\supseteq& \phi^{e(n-n_0)}_{\Delta}(F^{e(n-n_0)}_* ({\mathfrak{a}}^{\alpha} {\mathfrak{m}}^{s+M_0} \tau(R,{\Delta}))).\end{aligned}$$
By combining them with the assumption (4), we have the inclusion \[B to A Step3\].
[**Step.4**]{} In this step, we will show the inclusion $$\label{B to A Step4}
{\tau_{e}^{ n-n_0-2, 2 }}(R, {\Delta}, {\mathfrak{a}}^t {\mathfrak{m}}^{s'/q^{n-n_0-2}}) \subseteq I.$$
It follows from the induction hypothesis that $${{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{a}}^t; {\mathfrak{m}}) \ge {{\mathrm{fjn}}^{I, n-n_0-2, u}_e}(R,{\Delta}, {\mathfrak{a}}^t; {\mathfrak{m}}) - (\sum_{i=n-n_0-2}^{n-1} \frac{N}{q^i}).$$ Therefore, we have the inequality $$\begin{aligned}
&&\frac{s'}{q^{n-n_0-2}} \ge \frac{s+M_0}{q^{n-n_0}} \ge \frac{ r/q^{n_0} -{\mathrm{emb}}(R) -1+ M_0}{q^{n-n_0}} \\
&= & {{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{a}}^t; {\mathfrak{m}}) + \frac{ -{\mathrm{emb}}(R)-1 + M_0}{q^{n-n_0}} \\
& \ge & {{\mathrm{fjn}}^{I, n-n_0-2, u}_e}(R,{\Delta}, {\mathfrak{a}}^t; {\mathfrak{m}}) - (\sum_{i=n-n_0-2}^{n-1} \frac{N}{q^i}) + \frac{-{\mathrm{emb}}(R) -1 + M_0}{q^{n-n_0}} \\
& > & {{\mathrm{fjn}}^{I, n-n_0-2, u}_e}(R,{\Delta}, {\mathfrak{a}}^t; {\mathfrak{m}}).\end{aligned}$$
Since we have $u \ge 2$, it follows from Proposition \[upper test basic\] (7) that $${\tau_{e}^{n-n_0-2, 2}}(R, {\Delta}, {\mathfrak{a}}^t {\mathfrak{m}}^{s'/q^{n-n_0-2}}) \subseteq {\tau_{e}^{n-n_0-2, u}}(R, {\Delta}, {\mathfrak{a}}^t {\mathfrak{m}}^{s'/q^{n-n_0-2}}) \subseteq I.$$
[**Step.5**]{} It follows from Proposition \[upper test basic\] (1) that $${\tau_{e}^{ n, u}} (R ,{\Delta}, {\mathfrak{a}}^t {\mathfrak{m}}^{r/q^n}) \not\subseteq I.$$ Combining it with the inclusions \[B to A Step3\] and \[B to A Step4\], we have $${\tau_{e}^{ n+1, u}}(R, {\Delta}, {\mathfrak{a}}^t {\mathfrak{m}}^{s/q^{n-n_0}} ) \not\subseteq I.$$ Hence, we have $$\begin{aligned}
{{\mathrm{fjn}}^{I, n+1, u}_e}(R, {\Delta}, {\mathfrak{a}}^t; {\mathfrak{m}}) & \ge & \frac{s}{q^{n-n_0}}\\
& \ge & \frac{r/q^{n_0} -{\mathrm{emb}}(R)-1}{q^{n-n_0}}\\
&=& {{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{a}}^t; {\mathfrak{m}}) - \frac{{\mathrm{emb}}(R)+1}{q^{n-n_0}}\\
&>& {{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{a}}^t; {\mathfrak{m}}) -\frac{N}{q^n},\end{aligned}$$ which completes the proof of the proposition.
\[CondA single\] Let $(X=\Spec R, {\Delta}, {\mathfrak{a}}^t)$ be a triple such that $t>0$ is a rational number and $(p^e-1)(K_X+{\Delta})$ is Cartier for some integer $e>0$ and $I \subseteq R$ be an ${\mathfrak{m}}$-primary ideal. Then, there exist integers $e', u_0, N >0$ such that for every $u \ge u_0$, $(R,{\Delta}, {\mathfrak{a}}^t, I, e', u, N)$ satisfies Condition $(\star)$. In particular, there exists an integer $N'>0$ such that if we write $q : = p^{e'}$, then $${\mathrm{fjn}}^I (R, {\Delta}, {\mathfrak{a}}^{{\langle t \rangle_{n+1, q}}} ;{\mathfrak{m}}) \ge {\mathrm{fjn}}^I(R,{\Delta},{\mathfrak{a}}^{{\langle t \rangle_{n, q}}}; {\mathfrak{m}}) - N'/{q}^n$$ for every integer $n \ge 0$.
Take an integer $m>0$ such that $q : = p^{em}$ satisfies the assumptions (1), (2), and (3) in Proposition \[B to A\].
Set $l := t^{(2)}$ and $t_0 : = q^2/(q-1)$. Then it follows from Proposition \[disc rat\] that there exists an integer $n_0>0$ such that $$\tau(R, {\Delta}, {\mathfrak{a}}^{{\langle l t_0 \rangle_{n_0, q}}}) = \tau(R, {\Delta}, {\mathfrak{a}}^{{\langle l t_0 \rangle_{(n_0 +1), q}}}).$$
Set $e' : =em$, $u_0 : = {\widetilde{\mathrm{stab}}}(R, {\Delta}, {\mathfrak{a}}; e')$ and $N :=q^{n_0+3} \cdot {\mathrm{emb}}(R)$. Then the first assertion follows from Proposition \[B to A\].
Set $N' : =N+1$. Then the second assertion follows from Remark \[CondA rmk\].
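The inequality in the second assertion makes the sequence $n \mapsto {\mathrm{fjn}}^I(R,{\Delta},{\mathfrak{a}}^{{\langle t \rangle_{n, q}}};{\mathfrak{m}})$ almost monotone: subtracting the geometric tail $\sum_{i \ge n} N'/q^i$ yields a nondecreasing sequence, so boundedness forces convergence. The following toy numeric sketch illustrates this mechanism; the values of $q$, $N$, and the sequence $a_n$ are made up for illustration only.

```python
from fractions import Fraction

q, N = Fraction(3), Fraction(2)

def tail(n):
    # sum_{i >= n} N / q^i = (N / q^n) * q / (q - 1)
    return (N / q**n) * q / (q - 1)

# Made-up sequence satisfying a[n+1] >= a[n] - N / q^n.
a = [Fraction(1), Fraction(3, 4), Fraction(5, 6), Fraction(4, 5)]
assert all(a[n + 1] >= a[n] - N / q**n for n in range(3))

# Subtracting the geometric tail makes the sequence nondecreasing.
c = [a[n] - tail(n) for n in range(4)]
assert all(c[n + 1] >= c[n] for n in range(3))
```

The shift by the tail is the standard trick for extracting a limit from a sequence satisfying such a one-sided estimate.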
\[A to C\] Suppose that $(R,{\Delta},{\mathfrak{a}}^t)$, $q=p^e$, $u$, and $N$ satisfy the conditions of Proposition \[B to A\]. We further assume that $q>{\ell \ell_R(R/I)}+\mu_R({\mathfrak{a}}) +{\mathrm{emb}}(R)$. Then for every $n \ge 1$, we have $${{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{a}}^t; {\mathfrak{m}})= {{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{b}}^t;{\mathfrak{m}}),$$ where ${\mathfrak{b}}: = {\mathfrak{a}}+ {\mathfrak{m}}^{q^{u+2} \cdot N}$. In particular, for every $n$, we have $${\tau_{e}^{n,u}}(R, {\Delta}, {\mathfrak{a}}^t) \subseteq I \textup{ if and only if } {\tau_{e}^{n,u}}(R, {\Delta}, {\mathfrak{b}}^t) \subseteq I.$$
Set $M := q^{u+2} \cdot N$, $M' :=q^{u+1} \cdot N$, $s_n : ={{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{a}}^t; {\mathfrak{m}})$, and $\delta_n:=q^n s_n $ for every integer $n$. By Proposition \[fpt basic\] (2), we have $\delta_n \in {\mathbb{N}}$. It is enough to show the following claim.
For every $n \ge 1$ and every ideal ${\mathfrak{q}}\subseteq {\mathfrak{m}}^{\max\{0, q^u \cdot \delta_n -M'\}} \cdot \tau(R, {\Delta})$, we have $$\tau^{n,u}_{e,{\mathfrak{q}}}(R, {\Delta}, {\mathfrak{a}}^t) \equiv \tau^{n,u}_{e,{\mathfrak{q}}}(R,{\Delta}, {\mathfrak{b}}^t) \ (\mod{I}).$$
In fact, if the claim holds, then it follows from Proposition \[upper test basic\] (4) that $$\begin{aligned}
{\tau_{e}^{n,u}} (R, {\Delta}, {\mathfrak{b}}^t {\mathfrak{m}}^{s_n + \epsilon} )
&\equiv& {\tau_{e}^{n,u}} (R, {\Delta}, {\mathfrak{a}}^t {\mathfrak{m}}^{s_n+\epsilon } ) \ (\mod{I})\\
& \subseteq & I\end{aligned}$$ for every real number $0 < \epsilon \le 1/q^n$. Therefore we have $${{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{a}}^t; {\mathfrak{m}}) \ge {{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{b}}^t; {\mathfrak{m}}).$$
Similarly, if $s_n>0$, then we have $$\begin{aligned}
{\tau_{e}^{n,u}} (R, {\Delta}, {\mathfrak{b}}^t {\mathfrak{m}}^{s_n} )
&\equiv& {\tau_{e}^{n,u}}(R, {\Delta}, {\mathfrak{a}}^t {\mathfrak{m}}^{s_n} ) \ (\mod{I})\\
& \not\subseteq& I,\end{aligned}$$ which shows ${{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{a}}^t; {\mathfrak{m}}) \le {{\mathrm{fjn}}^{I, n, u}_e}(R, {\Delta}, {\mathfrak{b}}^t; {\mathfrak{m}})$. Since this inequality also holds when $s_n=0$, the proof of the proposition is complete.
We use induction on $n$.
[**Step.1**]{} We first consider the case when $n=1$. It follows from Proposition \[upper test basic\] (3) that $$\tau^{n,u}_{e,{\mathfrak{q}}}(R, {\Delta}, {\mathfrak{a}}^t) \equiv \tau^{n,u}_{e,{\mathfrak{q}}}(R,{\Delta}, {\mathfrak{b}}^t) \ (\mod{\tau^{n,u}_{e,{\mathfrak{q}}\cdot {\mathfrak{m}}^M} }(R,{\Delta}) ).$$
Since we have ${\mathfrak{q}}\cdot {\mathfrak{m}}^M \subseteq {\mathfrak{m}}^{q^u {\lceil q({\ell \ell_R(R/I)}+{\mathrm{emb}}(R))-1 \rceil}} \cdot \tau(R,{\Delta})$, it follows from Proposition \[upper test basic\] (2), (4) and (5) that $$\tau^{n,u}_{e,{\mathfrak{q}}\cdot {\mathfrak{m}}^M} (R,{\Delta}) \subseteq {\mathfrak{m}}^{{\ell \ell_R(R/I)}} \subseteq I.$$ Therefore, the assertion holds when $n=1$.
[**Step.2**]{} From now on, we consider the case when $n \ge 2$. Set ${\mathfrak{q}}' := \phi_{\Delta}^e(F^e_*({\mathfrak{a}}^{t^{(n)} \cdot q^u} {\mathfrak{q}}))$ and ${\mathfrak{q}}'' := \phi_{\Delta}^e(F^e_*({\mathfrak{b}}^{t^{(n)} \cdot q^u} {\mathfrak{q}}))$.
Then it follows from Proposition \[upper test basic\] (9) that $$\label{A to C Step2-1}
\tau^{n,u}_{e,{\mathfrak{q}}}(R,{\Delta},{\mathfrak{a}}^t) = \tau^{n-1,u}_{e,{\mathfrak{q}}'} (R,{\Delta}, {\mathfrak{a}}^t).$$
Similarly, by using Lemma \[Skoda\] (2) instead of (1), we have $$\label{A to C Step2-2}
\tau^{n,u}_{e,{\mathfrak{q}}}(R,{\Delta},{\mathfrak{b}}^t) = \tau^{n-1,u}_{e,{\mathfrak{q}}''} (R,{\Delta}, {\mathfrak{b}}^t).$$
[**Step.3**]{} In this step, we will show the equation $$\label{A to C Step3}
\tau^{n-1,u}_{e,{\mathfrak{q}}'} (R,{\Delta}, {\mathfrak{a}}^t) \equiv \tau^{n-1,u}_{e,{\mathfrak{q}}''} (R,{\Delta}, {\mathfrak{a}}^t) \ (\mod{I}).$$
Set $J := \phi_{\Delta}^e(F^e_*({\mathfrak{m}}^M {\mathfrak{q}}))$. Then we have ${\mathfrak{q}}' \equiv {\mathfrak{q}}'' \ (\mod{J})$. By Proposition \[upper test basic\] (3), it is enough to show that $$\tau^{n-1,u}_{e,J}(R,{\Delta}, {\mathfrak{a}}^t) \subseteq I.$$
Since we have $\delta_n \ge q \delta_{n-1}-q N$, it follows from Lemma \[Skoda\] that $$\begin{aligned}
J &\subseteq& \phi^e_{\Delta}({\mathfrak{m}}^{q^u \delta_n +M - M' } \cdot \tau(R, {\Delta})) \\
& \subseteq & {\mathfrak{m}}^{(q^u \delta_n + M-M' )/q -{\mathrm{emb}}(R) } \cdot \tau(R, {\Delta}) \\
& \subseteq & {\mathfrak{m}}^{q^u \delta_{n-1}} \cdot \tau(R, {\Delta}).\end{aligned}$$
Therefore, it follows from Proposition \[upper test basic\] (2) and (4) that $$\begin{aligned}
\tau^{n-1,u}_{e,J}(R,{\Delta}, {\mathfrak{a}}^t) \subseteq {\tau_{e}^{n-1,u}}(R, {\Delta}, {\mathfrak{a}}^t {\mathfrak{m}}^{s_{n-1} +(1/q^{n-1})}) \subseteq I,\end{aligned}$$ which shows the equation \[A to C Step3\].
[**Step.4**]{} In this step, we will show the equation $$\label{A to C Step4}
\tau^{n-1,u}_{e,{\mathfrak{q}}''} (R,{\Delta}, {\mathfrak{a}}^t) \equiv \tau^{n-1,u}_{e,{\mathfrak{q}}''} (R,{\Delta}, {\mathfrak{b}}^t) \ (\mod{I}).$$
As in Step 3, we have $$\begin{aligned}
{\mathfrak{q}}'' & \subseteq & \phi^e_{\Delta}(F^e_*( {\mathfrak{m}}^{\max \{ 0, q^u \delta_n -M' \}} \cdot \tau(R, {\Delta})))\\
& \subseteq & {\mathfrak{m}}^{\max\{0, q^{u-1} \delta_n -(M'/q)- {\mathrm{emb}}(R) \}} \cdot \tau(R, {\Delta}) \\
& \subseteq & {\mathfrak{m}}^{\max\{0, q^u \delta_{n-1} -M' \}} \cdot \tau(R, {\Delta}).\end{aligned}$$ By the induction hypothesis, we obtain the equation \[A to C Step4\].
By combining the equations \[A to C Step2-1\], \[A to C Step2-2\], \[A to C Step3\] and \[A to C Step4\], we complete the proof of the claim.
\[key\] Let $(X=\Spec R, {\Delta})$ be a pair such that $(p^e-1)(K_X+{\Delta})$ is Cartier for some integer $e>0$, let $I \subseteq R$ be an ${\mathfrak{m}}$-primary ideal, let $n_0 \ge 0$ and $u \ge 2$ be integers, and let $t>0$ be a rational number such that $p^e(p^e-1)t \in {\mathbb{N}}$. We set $l : = t^{(2)}$, $t_0 : =p^{2e}/(p^e-1)$ and $M_0 := (p^{e(n_0+6)}-1) \cdot {\mathrm{emb}}(R) / (p^e-1)$. Then there exists an integer $n_1>0$ with the following property: for any ideal ${\mathfrak{a}}\subseteq R$ such that
1. $p^e>\mu_R({\mathfrak{a}}) + {\ell \ell_R(R/I)}+{\mathrm{emb}}(R)$, and
2. ${\tau_{e}^{n_0+1, u}}(R, {\Delta}, {\mathfrak{a}}^{l t_0}) + {\mathfrak{m}}^{M_0} \cdot \tau(R, {\Delta}) \supseteq {\tau_{e}^{n_0, u}} (R, {\Delta}, {\mathfrak{a}}^{l t_0})$
we have $${\tau_{e}^{n,u}}(R,{\Delta}, {\mathfrak{a}}^t) \subseteq I \textup{ if and only if } {\tau_{e}^{n_1,u}}(R,{\Delta}, {\mathfrak{a}}^t) \subseteq I$$ for every integer $n \ge n_1$.
By Proposition \[B to A\] and Proposition \[A to C\], ${\mathfrak{b}}: = {\mathfrak{a}}+ {\mathfrak{m}}^{q^{u+n_0+5} {\mathrm{emb}}(R)}$ satisfies $${\tau_{e}^{n,u}}(R,{\Delta},{\mathfrak{a}}^t) \subseteq I \textup{ if and only if } {\tau_{e}^{n,u}}(R,{\Delta},{\mathfrak{b}}^t) \subseteq I$$ for every integer $n$.
On the other hand, it follows from Proposition \[ACC for bdd\] that there exists an integer $n_1>0$ which depends only on $\mu : = q- {\mathrm{emb}}(R)-1 $, $M:=q^{u+n_0+5} {\mathrm{emb}}(R)$, $e, u$, and $t$ such that for every integer $n>n_1$, we have $${\tau_{e}^{n,u}}(R,{\Delta},{\mathfrak{b}}^t) \subseteq I \textup{ if and only if } {\tau_{e}^{n_1,u}}(R,{\Delta},{\mathfrak{b}}^t) \subseteq I,$$ which completes the proof.
By using the method of ultraproducts, we can apply Corollary \[key\] to infinitely many ideals simultaneously.
\[CondA holds\] Let $(X=\Spec R, {\Delta})$ be a pair such that $(p^e-1)(K_X+{\Delta})$ is Cartier for some integer $e>0$, $I \subseteq R$ be an ${\mathfrak{m}}$-primary ideal, $\{ {\mathfrak{a}}_m \}_{m \in {\mathbb{N}}}$ be a family of ideals of $R$, $t >0$ be a rational number, and ${\mathfrak{U}}$ be a non-principal ultrafilter. Assume that
1. $\tau(R, {\Delta})$ is ${\mathfrak{m}}$-primary or trivial,
2. $p^e > \mu_R({\mathfrak{a}}_m) + {\ell \ell_R(R/I)}+ {\mathrm{emb}}(R)$ for every $m$, and
3. $p^e(p^e-1) t \in {\mathbb{N}}$.
Then for any sufficiently large integer $u > 0$, there exist an integer $n_1$ and $T \in {\mathfrak{U}}$ such that $${\tau_{e}^{n,u}}(R, {\Delta}, {\mathfrak{a}}_m^t) \subseteq I \textup{ if and only if } {\tau_{e}^{n_1,u}}(R, {\Delta}, {\mathfrak{a}}_m^t) \subseteq I$$ for every integer $n \ge n_1$ and $m \in T$.
Set $t_0 : =p^{2e}/(p^e-1)$. Since $p^e(p^e-1)t \in {\mathbb{N}}$, there exists an integer $0<l<p^e$ such that $t^{(n)}=l$ for every $n \ge 2$. By Corollary \[key\], it is enough to show that for any sufficiently large integer $u>0$, there exist an integer $n_0$ and $T \in {\mathfrak{U}}$ such that for every $m \in T$, we have $${\tau_{e}^{n_0+1, u }}(R, {\Delta}, {\mathfrak{a}}^{l t_0}) + {\mathfrak{m}}^{M_0} \cdot \tau(R, {\Delta}) \supseteq {\tau_{e}^{n_0, u}} (R, {\Delta}, {\mathfrak{a}}^{l t_0}),$$ where $M_0 : = (p^{e(n_0+6)}-1) {\mathrm{emb}}(R)/(p^e-1)$.
Let $({R_\#}, {{\mathfrak{m}}_\#})$ be the catapower of $(R, {\mathfrak{m}})$, ${{\Delta}_\#}$ be the flat pullback of ${\Delta}$ to $\Spec {R_\#}$ and ${\mathfrak{a}}_\infty$ be the ideal ${[ {\mathfrak{a}}_m ]_m} \subseteq {R_\#}$. It follows from Lemma \[ulim prod\] that for every integers $u, n \ge 0$ we have $${\tau_{e}^{n,u}}({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty^{l \cdot t_0}) = {[ {\tau_{e}^{n, u}}(R, {\Delta}, {\mathfrak{a}}_m^{l \cdot t_0}) ]_m}.$$
By Proposition \[disc rat\], there exists an integer $n_0 \ge 0$ such that $$\tau ( {R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty^{{\langle l \cdot t_0 \rangle_{n_0, q}}}) = \tau({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty^{{\langle l \cdot t_0 \rangle_{(n_0+1), q}}}).$$
On the other hand, by Proposition \[upper test basic\] (8), there exists an integer $u_0$ such that for every integers $u \ge u_0$ and $n \ge 0$, we have $${\tau_{e}^{ n, u}}({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty^{l \cdot t_0 }) = \tau({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty^{{\langle l \cdot t_0 \rangle_{n, q}}}).$$ Therefore, we have $${[ {\tau_{e}^{ n_0, u}} (R, {\Delta}, {\mathfrak{a}}_m^{l \cdot t_0}) ]_m} = {[ {\tau_{e}^{n_0+1, u}} (R, {\Delta}, {\mathfrak{a}}_m^{l \cdot t_0}) ]_m} \subseteq {R_\#}.$$
Since ${\mathfrak{m}}^{M_0} \cdot \tau(R, {\Delta}) \subseteq R$ is an ${\mathfrak{m}}$-primary ideal, it follows from Lemma \[ultra incl\] that there exists $T \in {\mathfrak{U}}$ such that for every $m \in T$, we have $${\tau_{e}^{ n_0,u}} (R, {\Delta}, {\mathfrak{a}}_m^{l \cdot t_0}) \subseteq {\tau_{e}^{ n_0+1, u}} (R, {\Delta}, {\mathfrak{a}}_m^{l \cdot t_0}) + {\mathfrak{m}}^{M_0} \cdot \tau(R, {\Delta}),$$ which completes the proof.
\[main\] Let $(X=\Spec R, {\Delta})$ be a pair such that $\tau(R, {\Delta})$ is ${\mathfrak{m}}$-primary or trivial and that $(p^e-1)(K_X+{\Delta})$ is Cartier for some integer $e > 0$, and let $I \subseteq R$ be an ${\mathfrak{m}}$-primary ideal. Then, the set $${\mathrm{FJN}}^I(R,{\Delta}) : = \left\{ {\mathrm{fjn}}^I (R, {\Delta}; {\mathfrak{a}}) \mid {\mathfrak{a}}\subsetneq R \right\}$$ satisfies the ascending chain condition.
We assume the contrary. Then there exists a family of ideals $\{ {\mathfrak{a}}_m \}_{m \in {\mathbb{N}}}$ such that $\{ {\mathrm{fjn}}^I(R, {\Delta}; {\mathfrak{a}}_m) \}_{m \in {\mathbb{N}}}$ is a strictly ascending chain. Set $t := \lim_{m \to \infty} {\mathrm{fjn}}^I (R, {\Delta}; {\mathfrak{a}}_m)$. It follows from Proposition \[disc rat\] and Theorem \[fpt sh2\] that $t \in {\mathbb{Q}}_{>0}$.
Let ${\mathfrak{U}}$ be a non-principal ultrafilter, ${R_\#}$ be the catapower of $R$, ${{\Delta}_\#}$ be the flat pullback of ${\Delta}$ to $\Spec {R_\#}$, and ${\mathfrak{a}}_\infty := {[ {\mathfrak{a}}_m ]_m} \subseteq {R_\#}$. Take elements $f_1, \dots, f_l \in {R_\#}$ such that ${\mathfrak{a}}_\infty = (f_1, \dots, f_l)$. Since the natural map $\prod_{m \in {\mathbb{N}}} {\mathfrak{a}}_m \to {[ {\mathfrak{a}}_m ]_m}$ is surjective, there exists $f_{m, i} \in {\mathfrak{a}}_m$ for every $m \in {\mathbb{N}}$ such that $f_i ={[ f_{m,i} ]_m}$.
Set ${\mathfrak{a}}_m' : = (f_{m,1}, \dots, f_{m,l}) \subseteq {\mathfrak{a}}_m$. Since we have ${[ {\mathfrak{a}}_m' ]_m} = {\mathfrak{a}}_\infty$, it follows from Theorem \[fpt sh2\] that ${\mathrm{sh}}({\operatorname{ulim}}_m {\mathrm{fjn}}^I (R, {\Delta}; {\mathfrak{a}}_m'))=t$. On the other hand, since we have ${\mathrm{fjn}}^I (R, {\Delta}; {\mathfrak{a}}_m^{\prime}) \le {\mathrm{fjn}}^I (R, {\Delta}; {\mathfrak{a}}_m) <t$, by replacing by a subsequence, we may assume that the sequence $\{ {\mathrm{fjn}}^I (R, {\Delta}; {\mathfrak{a}}_m') \}$ is a strictly ascending chain. By replacing ${\mathfrak{a}}_m$ by ${\mathfrak{a}}_m'$, we may assume $\mu_R ({\mathfrak{a}}_m) \le l$ for every $m$.
By enlarging $e$, we may assume that $q=p^e$ satisfies the following properties:
1. $q(q-1)t \in {\mathbb{N}}$ and
2. $q>{\ell \ell_R(R/I)}+ l+{\mathrm{emb}}(R)$.
It follows from Proposition \[CondA holds\] that there exist integers $u,n_1>0$ and $T \in {\mathfrak{U}}$ such that $${\tau_{e}^{n,u}}(R, {\Delta}, {\mathfrak{a}}_m^t) \subseteq I \textup{ if and only if } {\tau_{e}^{n_1,u}}(R, {\Delta}, {\mathfrak{a}}_m^t) \subseteq I$$ for every integer $n \ge n_1$ and $m \in T$. By enlarging $u$, we may further assume that $u \ge {\widetilde{\mathrm{stab}}}({R_\#}, {{\Delta}_\#} , {\mathfrak{a}}_\infty; e)$.
For every $m \in {\mathbb{N}}$ and for every sufficiently large $n \gg 0$ we have $${\tau_{e}^{ n, u}} (R, {\Delta}, {\mathfrak{a}}_m^t) \subseteq \tau ( R, {\Delta}, {\mathfrak{a}}_m^{{\langle t \rangle_{n, q}}}) \subseteq I.$$ Therefore we have ${\tau_{e}^{ n_1, u}}(R, {\Delta}, {\mathfrak{a}}_m^t) \subseteq I$ for every $m \in T$.
On the other hand, since ${\langle t \rangle_{n_1, q}} < t = {\mathrm{fjn}}^{I\cdot {R_\#}}({R_\#}, {{\Delta}_\#} ;{\mathfrak{a}}_\infty)$, we have $$\begin{aligned}
{[ {\tau_{e}^{ n_1,u}} (R, {\Delta}, {\mathfrak{a}}_m^t) ]_m} &=& {\tau_{e}^{ n_1,u}} ({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty^t) \\
&=& \tau ({R_\#}, {{\Delta}_\#}, {\mathfrak{a}}_\infty^{{\langle t \rangle_{n_1, q}}}) \\
& \not\subseteq & I \cdot {R_\#}.\end{aligned}$$ Therefore, there exists a set $S \in {\mathfrak{U}}$ such that $${\tau_{e}^{ n_1, u}} (R, {\Delta}, {\mathfrak{a}}_m^t) \not\subseteq I$$ for every $m \in S$. Since $S \cap T \neq \emptyset$, we obtain a contradiction.
\[reg ACC\] Fix an integer $n \ge 1$, a prime number $p>0$ and a set ${\mathcal{D}^{\mathrm{reg}}_{{n},{p}}}$ such that every element of ${\mathcal{D}^{\mathrm{reg}}_{{n},{p}}}$ is an $n$-dimensional $F$-finite Noetherian regular local ring of characteristic $p$. The set $${\mathcal{T}}^{\mathrm{reg}}_{n,p}: = \{ \mathrm{fpt} (A; {\mathfrak{a}}) \mid A \in {\mathcal{D}^{\mathrm{reg}}_{{n},{p}}} ,{\mathfrak{a}}\subsetneq A \},$$ satisfies the ascending chain condition.
We assume the contrary. Then there exists a sequence $\{ A_m \}_{m \in {\mathbb{N}}}$ in ${\mathcal{D}^{\mathrm{reg}}_{{n},{p}}}$ and ideals ${\mathfrak{a}}_m \subsetneq A_m$ such that the sequence $\{ {\mathrm{fpt}}(A_m; {\mathfrak{a}}_m) \}$ is a strictly ascending chain.
Since test ideals commute with completion ([@HT Proposition 3.2]), we may assume that $A_m = k_m[[x_1, \dots, x_n]]$ for some $F$-finite field $k_m$. Take an $F$-finite field $k$ such that $k_m \subseteq k$ for every $m$. Let $(A,{\mathfrak{m}}_A)$ be the local ring $k[[x_1, \dots, x_n]]$. Then it follows as in the proof of [@BMS2 Theorem 3.5 (i)] that ${\mathrm{fpt}}(A; {\mathfrak{a}}_m A) = {\mathrm{fpt}}(A_m; {\mathfrak{a}}_m)$. Therefore, we have ${\mathrm{fpt}}(A_m; {\mathfrak{a}}_m) \in {\mathrm{FJN}}^{{\mathfrak{m}}_A}(A, 0)$ for every $m$, which contradicts Theorem \[main\].
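The $F$-pure thresholds in this corollary are not defined in the present excerpt; recall the standard description, as in [@BMS2], that ${\mathrm{fpt}}(A; {\mathfrak{a}})=\lim_e \nu_{{\mathfrak{a}}}(p^e)/p^e$, where $\nu_{{\mathfrak{a}}}(q)$ is the largest $r$ with ${\mathfrak{a}}^r \not\subseteq {\mathfrak{m}}^{[q]}$. For a principal monomial ideal this limit can be computed directly; the following Python sketch (purely illustrative, the function names are ours) approximates it:

```python
# Illustration (not part of the proof): for the principal monomial ideal
# a = (x_1^{a_1} ... x_n^{a_n}) in k[[x_1,...,x_n]] with char k = p,
#   nu(q) = max{ r : a^r not contained in (x_1^q, ..., x_n^q) }
#         = min_i floor((q-1)/a_i),
# and fpt = lim_e nu(p^e)/p^e = 1/max_i a_i.

def nu(exponents, q):
    """Largest r with the r-th power of the monomial outside (x_i^q)."""
    return min((q - 1) // a for a in exponents)

def fpt_approx(exponents, p, e):
    """Approximation nu(p^e)/p^e of the F-pure threshold."""
    q = p ** e
    return nu(exponents, q) / q

# e.g. a = (x^2 y^3), p = 5: nu(p^e)/p^e increases to fpt = 1/3
approximations = [fpt_approx((2, 3), 5, e) for e in range(1, 7)]
```

The monotone convergence of $\nu(p^e)/p^e$ visible here is exactly the behavior that makes the jumping numbers in the sets above accumulate from below rather than from above.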
Let $(R, {\mathfrak{m}}, k)$ be an equicharacteristic Noetherian local ring. Then $(R,{\mathfrak{m}})$ is said to be a *quotient singularity* if there exist a regular affine variety $U=\Spec A$ over $k$, a finite group $G$ with a group homomorphism $G \to {\operatorname{Aut}}_k(U)$, and a point $x$ of the quotient $V=U/G := \Spec (A^G)$ such that there exists an isomorphism $\widehat{R} \cong \widehat{{\mathcal{O}}_{V, x}}$ as rings. Moreover, if $|G|$ is coprime to $\chara(k)$, then we say that $(R,{\mathfrak{m}})$ is a *tame quotient singularity*.
\[tame quot basic\] Let $(R,{\mathfrak{m}}, k)$ be a tame quotient singularity of dimension $n$. Then, there exists a finite group $G \subseteq {\mathrm{GL}}_n(k)$ with the following properties.
1. $| G |$ is coprime to $\chara(k)$.
2. The natural action of $G$ on the affine space ${\mathbb{A}}^n_k$ has no fixed points in codimension 1.
3. Let $V : = {\mathbb{A}}^n_k/G$ be the quotient and $x \in V$ be the image of the origin of ${\mathbb{A}}^n_k$. Then we have $\widehat{R} \cong \widehat{{\mathcal{O}}_{V,x}}$.
The proof follows as in the case when $\chara(k)=0$ (see [@dFEM p.15]), but for the convenience of the reader we sketch it here.
Since $R$ is a tame quotient singularity, there exist a regular affine variety $U$, a finite group $G$ acting on $U$ such that $|G|$ is coprime to $\chara(k)$, and a point $x$ of the quotient $V := U/G$ such that $\widehat{R} \cong \widehat{{\mathcal{O}}_{V,x}}$.
Take a point $y \in U$ with image $x$. By replacing $G$ by the stabilizer subgroup $G_y \subseteq G$, we may assume that $G$ acts on the regular local ring $(A, {\mathfrak{m}}_A) : = ({\mathcal{O}}_{U,y}, {\mathfrak{m}}_y)$. Since $|G|$ is coprime to $\chara(k)$, it follows from Maschke’s theorem that the natural projection ${\mathfrak{m}}_A \to {\mathfrak{m}}_A/{\mathfrak{m}}_A^2$ has a section as $k[G]$-modules. This section induces a $k[G]$-algebra homomorphism ${\mathrm{Gr}}_{{\mathfrak{m}}_A}(A) \to A$, where ${\mathrm{Gr}}_{{\mathfrak{m}}_A}(A)$ is the associated graded ring of $(A, {\mathfrak{m}}_A)$. Therefore, by replacing $U$ by $\Spec({\mathrm{Gr}}_{{\mathfrak{m}}_A}(A))$, we may assume that $U={\mathbb{A}}^n_k$ and $G \subseteq {\mathrm{GL}}_n(k)$.
Let $H \subseteq G$ be the subgroup generated by the elements $g \in G$ which fix some codimension-one point of $U$. Since $|G|$ is coprime to $\chara(k)$, it follows from the Chevalley–Shephard–Todd theorem (see for example [@Ben Theorem 7.2.1]) that $U/H \cong {\mathbb{A}}^n_k$. By replacing $U$ by $U/H$ and $G$ by $G/H$, we complete the proof of the lemma.
\[quot ACC\] Fix an integer $n \ge 1$, a prime number $p>0$ and a set ${\mathcal{D}^{\mathrm{quot}}_{{n},{p}}}$ such that every element of ${\mathcal{D}^{\mathrm{quot}}_{{n},{p}}}$ is an $n$-dimensional $F$-finite Noetherian normal local ring of characteristic $p$ with tame quotient singularities. The set $${\mathcal{T}}^{\mathrm{quot}}_{n,p}: = \{ \mathrm{fpt} (R; {\mathfrak{a}}) \mid R \in {\mathcal{D}^{\mathrm{quot}}_{{n},{p}}}, {\mathfrak{a}}\subsetneq R \textup{ is an ideal} \}$$ satisfies the ascending chain condition.
The proof is essentially the same as [@dFEM Proposition 5.3]. Let $(R,{\mathfrak{m}},k)$ be a local ring such that $R \in {\mathcal{D}^{\mathrm{quot}}_{{n},{p}}}$ and ${\mathfrak{a}}\subsetneq R$ be an ideal of $R$. Let $G$, $V$, and $x$ be as in Lemma \[tame quot basic\]. Consider the natural morphism $\pi : U:= {\mathbb{A}}^n_k \to V$. Since $G$ is a finite group, the morphism $\pi$ is a finite surjective morphism with $\deg(\pi)$ coprime to $\chara(k)$. Since $G$ acts on $U$ with no fixed points in codimension one, the morphism $\pi$ is étale in codimension one.
Set $W := \Spec(\widehat{R})$ and $U' : = U \times_V W$. Since $U$ is a regular scheme and $W \to V$ is a regular morphism, each connected component of $U'$ is a regular scheme. Fix a connected component $U'' \subseteq U'$.
Since the morphism $\widehat{\pi}: U'' \to W$ is finite surjective, étale in codimension 1 and $\deg{\widehat{\pi}}$ is coprime to $p$, it follows from [@HT Theorem 3.3] that $${\mathrm{fpt}}(W ;{\mathfrak{a}}{\mathcal{O}}_W) = {\mathrm{fpt}}(U''; {\mathfrak{a}}{\mathcal{O}}_{U''}).$$
On the other hand, since the test ideals commute with completion ([@HT Proposition 3.2]), we have $${\mathrm{fpt}}(R; {\mathfrak{a}})={\mathrm{fpt}}(W ; {\mathfrak{a}}{\mathcal{O}}_W).$$
Therefore, it follows from Corollary \[reg ACC\] that the set ${\mathcal{T}}^{\mathrm{quot}}_{n,p}$ satisfies the ascending chain condition.
We conclude with a natural question.
Does Theorem \[intro reg\] give an alternative proof of [@dFEM Theorem 1.1]? Moreover, does Theorem \[main\] imply that the set of all jumping numbers of multiplier ideals with respect to a fixed ${\mathfrak{m}}$-primary ideal on a log ${\mathbb{Q}}$-Gorenstein pair over ${\mathbb{C}}$ satisfies the ascending chain condition?
We hope to consider this question at a later time.
[HnBWZ99]{} M. André, Localisation de la lissité formelle, Manuscripta Math. **13** (1974), 297–307.
Y. Aoyama, Some basic results on canonical modules, J. Math. Kyoto Univ. **23** (1983) no. 1, 85–94.
D. J. Benson, *Polynomial invariants of finite groups*, London Mathematical Society Lecture Note Series, Vol. 190, Cambridge University Press, Cambridge, 1993.
M. Blickle, M. Mustaţă and K. E. Smith, Discreteness and rationality of F-thresholds, Special volume in honor of Melvin Hochster, Michigan Math. J. **57** (2008), 43–61.
M. Blickle, M. Mustaţă and K. E. Smith, $F$-thresholds of hypersurfaces, Trans. Amer. Math. Soc. **361** (2009), no. 12, 6549–6565.
M. Blickle, K. Schwede, S. Takagi and W. Zhang, Discreteness and rationality of $F$-jumping numbers on singular varieties, Math. Ann. **347** (2010), 917–949.
T. de Fernex, L. Ein and M. Mustaţă, Shokurov’s ACC conjecture for log canonical thresholds on smooth varieties, Duke Math. J. **152** (2010), no. 1, 93–114.
T. de Fernex, L. Ein and M. Mustaţă, Log canonical thresholds on varieties with bounded singularities, *Classification of algebraic varieties*, pp. 221–257, EMS Ser. Congr. Rep., Eur. Math. Soc., Zürich, 2011.
R. Goldblatt, *Lectures on the hyperreals*, An introduction to nonstandard analysis. Graduate Texts in Mathematics **188**, Springer-Verlag, New York, 1998.
C. D. Hacon, J. McKernan and C. Xu. ACC for log canonical thresholds, Ann. of Math. **180** (2014) no. 2, 523–571.
N. Hara and S. Takagi, On a generalization of test ideals, Nagoya Math. J. **175** (2004), 59–74.
D. J. Hernández, L. Núñez-Betancourt and E. E. Witt, Local ${\mathfrak{m}}$-adic constancy of $F$-pure thresholds and test ideals, Math. Proc. Cambridge Philos. Soc. (2017), 1–11.
D. J. Hernández, L. Núñez-Betancourt, E. E. Witt and W. Zhang, $F$-pure thresholds of homogeneous polynomials, Michigan Math. J. **65** (2016), no. 1, 57–87.
E. Kunz, On Noetherian rings of characteristic $p$, Amer. J. Math. **98** (1976), 999–1013.
H. Matsumura, *Commutative Ring Theory*, Translated from the Japanese by M. Reid, Second edition, Cambridge Studies in Advanced Mathematics, **8**, Cambridge University Press, Cambridge, 1989.
F. Pérez, On the constancy regions for mixed test ideals, J. Algebra **396** (2013), 82–97.
K. Sato and S. Takagi, General hyperplane sections of threefolds in positive characteristic, to appear in J. Inst. Math. Jussieu.
H. Schoutens, *The use of ultraproducts in commutative algebra*, Lecture Notes in Mathematics **1999**, Springer-Verlag, Berlin, 2010.
K. Schwede, $F$-adjunction, Algebra Number Theory **3** (2009) no. 8, 907–950.
K. Schwede, Centers of $F$-purity, Math. Z. **265** (2010), no. 3, 687–714.
K. Schwede, Test ideals in non-${\mathbb{Q}}$-Gorenstein rings, Trans. Amer. Math. Soc. **363** (2011), no. 11, 5925–5941.
K. Schwede and K. Tucker, Test ideals of non-principal ideals: Computations, Jumping Numbers, Alterations and Division Theorems, J. Math. Pures. Appl. **102** (2014), no. 05, 891–929.
V. Shokurov, Three-dimensional log perestroikas, With an appendix by Yujiro Kawamata, Izv. Ross. Akad. Nauk Ser. Mat. **56** (1992) no. 1, 105–203.
The Stacks Project Authors, *Stacks Project*.
S. Takagi, Formulas for multiplier ideals on singular varieties, Amer. J. Math. **128** (2006) no. 6, 1345–1362.
---
abstract: 'An infinite particle system of independent jumping particles is considered. Their construction is recalled, further properties are derived, and the relation with hierarchical equations, Poissonian analysis, and second quantization is discussed. The hydrodynamic limit for a general initial distribution satisfying a mixing condition is derived. The large time asymptotic is computed under an extra assumption.'
author:
- |
**Yuri G. Kondratiev**\
[Fakult[ä]{}t f[ü]{}r Mathematik, Universit[ä]{}t Bielefeld, D 33501 Bielefeld, Germany]{}\
[Forschungszentrum BiBoS, Universit[ä]{}t Bielefeld, D 33501 Bielefeld, Germany]{}\
[kondrat@mathematik.uni-bielefeld.de]{}
- |
**Tobias Kuna**\
[Department of Mathematics, University of Reading, RG6 6AX Reading, UK]{}\
[t.kuna@reading.ac.uk]{}
- |
**Maria Jo[ã]{}o Oliveira**\
[Univ. Aberta, P 1269-001 Lisbon, Portugal]{}\
[CMAF, University of Lisbon, P 1649-003 Lisbon, Portugal]{}\
[Forschungszentrum BiBoS, Universit[ä]{}t Bielefeld, D 33501 Bielefeld, Germany]{}\
[oliveira@cii.fc.ul.pt]{}
- |
**Jos[é]{} Lu[í]{}s da Silva**\
[CCM, University of Madeira, P 9000-390 Funchal, Portugal]{}\
[luis@uma.pt]{}
- |
**Ludwig Streit**\
[Forschungszentrum BiBoS, Universit[ä]{}t Bielefeld, D 33501 Bielefeld, Germany]{}\
[CCM, University of Madeira, P 9000-390 Funchal, Portugal]{}\
[streit@physik.uni-bielefeld.de]{}
title: Hydrodynamic limits for the free Kawasaki dynamics of continuous particle systems
---
[**Keywords:**]{} Infinite particle systems, Kawasaki dynamics, hydrodynamic limit, large time asymptotic
[**2000 AMS Classification:**]{} 82C21, 60G55, 60J75, 37A60
Introduction
============
Particle systems in the continuum describe infinitely many particles located in the Euclidean space $\mathbb{R}^d$. One may equip these systems with different types of dynamics, deterministic as well as stochastic. First of all, we should mention the Hamiltonian dynamics and the related problems concerning the derivation of kinetic equations for classical gases, see, e.g., [@S89]. Another dynamics, strongly motivated by physical applications, is the gradient diffusion of infinitely many particles [@Fr87], [@Sp86]. Note that in spite of serious efforts in both cases, the answers obtained so far are far from complete.
In order to identify further reasonable types of random evolutions in the continuum, we may look at the well established types of dynamics on lattice gas systems. There are two important classes of Markov dynamics, namely, Glauber and Kawasaki. Both are constructed in such a way that a given Gibbs measure (equilibrium state) on the lattice becomes an invariant measure for the dynamics. The Glauber dynamics is a birth-and-death evolution of the lattice gas, in contrast to the jump type evolution in the case of the Kawasaki dynamics. The latter stochastic dynamics are especially interesting for the study of hydrodynamic limits of interacting particle systems due to their a priori conservation law (the number of particles) and the resulting existence of a continuous family of invariant measures [@PR91]. Continuous versions of Glauber dynamics were introduced in [@G81], [@BCC02], [@KL05], [@KLRII05]. Continuous analogs of the Kawasaki dynamics are random evolutions of particle systems in which individual particles jump in space with rates leading to a Gibbs state in the continuum as an invariant measure [@G81], [@KLRII05].
In this paper, we concentrate on stochastic dynamics where each particle performs a jump process independently of the others. One might call this type of dynamics an independent jump process, or free Kawasaki dynamics in the continuum. One of the aims of this paper is to demonstrate that already for the free Kawasaki dynamics the situation is technically non-trivial and different from that of gradient diffusions. Another aim is to lay a solid ground for the future investigation of interacting systems.
In order to better understand the chosen framework, one has to recall the underlying motivation. Infinite particle systems are introduced as an approximation for finite but very large systems. The underlying finite systems consist of finitely many particles in a bounded subset (for example, a ball or a cube) of $\mathbb{R}^d$. Each particle independently performs a Markov jump process. In the following, we call this underlying jump process the one-particle dynamics. Motivated by physics, one assumes that the particles are indistinguishable. Therefore, we describe the collection of positions of $n$ particles not by a tuple $(x_1,\ldots,x_n)$ with $x_i \in \mathbb{R}^d$, but by the set $\{x_1,\ldots,x_n\}$ of points (excluding a priori coinciding positions) or by the integer-valued measure $\delta_{x_1}+\ldots+\delta_{x_n}$. These notations include the indistinguishability automatically. We call this symmetrized collection of positions a configuration of particles. The latter interpretation, via measures, is called the empirical field. A key idea in the study of large particle systems is to replace them by infinite volume systems in order to avoid boundary and finite size effects. The price to pay for this substitution is the technical subtleties and difficulties which we describe in the sequel.
Let us consider an initial configuration of positions for the infinite particle system, denoted by $\gamma$. In any finite observation window, $\gamma$ should look like a configuration of a large, but finite, particle system observed in the same window. In the notation for configurations as sets this just means that for any open bounded subset $\Lambda \subset \mathbb{R}^d$ there can be only finitely many points in $\gamma \cap \Lambda$. In the interpretation of configurations as non-negative integer-valued measures, this means that $\gamma$ is a Radon measure. The construction of such processes for a general underlying one-particle Markov process is given in [@KLR05], and we briefly recall this construction in Subsection \[Subsection2.1\]. In our case, each particle is equipped with an independent identically distributed exponential clock. If the clock of a particle rings, the particle performs a random jump independent of the other particles. As one considers infinitely many particles, the construction is non-trivial. First of all, the process cannot be started from an arbitrary initial configuration. Actually, one may easily construct initial configurations for which the process explodes, in the sense that infinitely many particles appear in a finite volume. Secondly, the infinite volume process is no longer a jump process in the classical sense, because in each time interval a.s. infinitely many jump events occur. An essential step in the construction is to show that in any finite time interval only finitely many jump events are visible in a bounded observation window, i.e., the number of particles in the window stays finite and only finitely many particles pass through the window. In Subsection \[Subsection2.1\] we add to the results of [@KLR05] the description of the path-space measure corresponding to the process.
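The construction just described can be sketched for finitely many particles. The following Python fragment (our own finite-volume toy model, not the construction of [@KLR05], which must in addition rule out explosion) runs one exponential clock per particle and lets each particle jump independently when its clock rings:

```python
import random

def free_kawasaki(points, jump, rate, t_max, rng=random):
    """Finite-volume sketch of the free Kawasaki dynamics: each initial
    particle carries its own exponential clock of parameter `rate`; when
    the clock rings, the particle moves to `jump(x)`, independently of
    all the others.  Returns one time-stamped trajectory per particle."""
    trajectories = []
    for x in points:
        path, t = [(0.0, x)], 0.0
        while True:
            t += rng.expovariate(rate)   # waiting time of this particle's clock
            if t > t_max:
                break
            x = jump(x)                  # independent random jump
            path.append((t, x))
        trajectories.append(path)
    return trajectories
```

For instance, `jump = lambda x: x + rng.gauss(0.0, 1.0)` gives Gaussian jump sizes; the hypothetical function and parameter names are ours.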
One easily sees that all Poisson measures (Poisson random fields) with constant intensities are invariant measures with respect to the free Kawasaki dynamics. If the one-particle jump rate is symmetric, then these measures are also reversible.
So far we have only spoken about processes starting from a single configuration of particles. Choosing an initial configuration at random w.r.t. a probability measure on the configuration space, one can derive further classes of processes. If, for example, one distributes the initial configurations w.r.t. an invariant measure, then one obtains a so-called equilibrium process. For equilibrium processes w.r.t. reversible measures, powerful methods of Dirichlet forms may be applied. In particular, existence can be shown even for general interacting systems, cf. [@KLRII05]. In the case of the free Kawasaki process, one can apply in addition second quantization techniques, cf. Section \[SecFock\], which give a full description of the $L^2$-theory, also for non-symmetric jump rates.
One speaks of a non-equilibrium process if the initial distribution is not an invariant measure. For the interacting non-equilibrium Kawasaki dynamics even the existence of such a Markov process is unknown. As was mentioned above, the free non-equilibrium Kawasaki dynamics exists for a large class of initial configurations. The aim of this paper is to go beyond mere existence. We want to study the ergodic properties of the process, i.e., we consider the large time asymptotic of the process in the sense of limiting distributions. As a next step, we consider the so-called hydrodynamic limit for the free Kawasaki dynamics.
We expect that the difficulty of treating non-equilibrium processes depends on the class of initial distributions. Poisson measures with non-constant intensities form the class which seems to be the easiest to handle in the case of the free Kawasaki dynamics. These distributions are no longer invariant measures; they are, in general, not equivalent to any invariant measure, and there exists no semigroup theory for the free Kawasaki dynamics in $L^p$-spaces with respect to these measures, cf. Remark \[remnosemi\].
Note that the one-dimensional distributions of the dynamics can be completely described in terms of the one-dimensional distributions of the underlying one-particle jump process. Here a delicate point is that the initial distribution of this one-particle process is the intensity function of the starting Poisson measure. A natural class of intensities to consider should thus include the intensities corresponding to the invariant measures, i.e., all constant intensities. In the infinite volume, this prevents us from working just with integrable intensities, which from a probabilistic point of view would be natural for the one-particle process. A suitable framework seems to be the cone of all non-negative measurable bounded intensities, but it contains intensities for which there seems to be no asymptotic, cf. Remark \[Aug08-eq1\]. The study of the large time asymptotic of the infinite particle process reduces to the study of the large time asymptotic of the one-particle process for these initial intensities.
The non-ergodicity of the infinite particle process is reflected in the non-ergodicity of the one-particle process in this class of initial intensities, see Subsection \[subsecLargetime\]. Under the additional assumption that the Fourier transform of the initial intensity is a signed measure, we can show, applying Fourier analysis, that all processes with Poissonian initial distribution with such intensities again have a Poissonian distribution as the large time asymptotic, but with a constant intensity. This constant can be identified as the arithmetic mean of the initial intensity. For this result a careful consideration of the notion of convergence is required. This is one of the aforementioned technical pitfalls caused by the infinite volume. The above condition on the Fourier transform of the initial intensity cannot be directly described in terms of the intensity itself. It seems natural to claim that the large time asymptotic exists for all bounded intensities which have an arithmetic mean. However, for example, bounded slowly oscillating intensities may not have an arithmetic mean, cf. Remark \[Aug08-eq1\]. Under the condition of weak local asymptotic normality of the one-particle process, the authors of [@DSS82] show in Theorem 4.1 that any free infinite particle dynamics is asymptotically similar to Brownian motion. We can show that the one-particle process is weakly locally asymptotically normal if the jump rate has finite second moments. Their proof is based on a central limit theorem type result, and hence the existence of second moments is essential. The proof given in this paper is straightforward, simple, and holds for general jump rates, but for a more restricted class of intensities.
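The mechanism behind this relaxation can already be seen on a periodic grid: for a symmetric jump rate $a$, the one-particle intensity evolves by $\partial_t z_t = a * z_t - \Vert a\Vert_{L^1} z_t$, so in Fourier variables each mode $k$ is multiplied by $\exp(t(\hat a(k)-\hat a(0)))$, and every non-constant mode decays while the mean is conserved. A hedged numerical sketch (the discretization and function names are ours, not the paper's):

```python
import numpy as np

def evolve_intensity(z0, a, t, dx):
    """Solve d/dt z = a * z - ||a||_1 z on a periodic grid via FFT.
    The k-th Fourier mode is multiplied by exp(t * (a_hat(k) - a_hat(0))),
    so the mean (k = 0 mode) is conserved while all other modes decay."""
    a_hat = np.fft.fft(a) * dx          # grid Fourier transform of the jump rate
    z_hat = np.fft.fft(z0)
    return np.real(np.fft.ifft(z_hat * np.exp(t * (a_hat - a_hat[0]))))
```

Running this with a symmetric jump rate and the oscillating intensity $z_0(x) = 2 + \cos(3x)$ flattens $z_t$ to the constant $2$, the arithmetic mean, in accordance with the statement above.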
In Subsection \[SubSecHydro\] the so-called hydrodynamic limit for the free Kawasaki dynamics is considered. This means that the asymptotic of the corresponding scaled empirical field $n_t^{(\varepsilon)}(\varphi,\mathbf{X}) :=
n(\varepsilon^d\varphi(\varepsilon
\cdot),\mathbf{X}_{\varepsilon^{-\kappa}t})$, $\kappa=1,2$, is the object under study. In order to obtain a non-trivial asymptotic, the initial intensity $z$ has to be scaled as $z(\varepsilon \cdot)$. The limiting process for $\varepsilon \rightarrow 0^+$ is a deterministically evolving density profile given as the solution of a partial differential equation. For symmetric jump rates the limiting equation obtained, for $\kappa=2$, is a second order elliptic equation whose coefficients are given by the second moments of the jump distribution. For asymmetric jump rates, the limiting differential equation, for $\kappa=1$, is of first order, cf. Proposition \[PropHyd\] and Remark \[Natal8\]. One may obtain a combined equation if one considers weak asymmetries, cf. Proposition \[PropHydW\] and Remark \[remweaka\]. Actually, the limit gives a way to construct a solution of the partial differential equation. If the Fourier transform of the initial intensity is a measure, then these results can be obtained directly via Fourier transform, considering carefully the sense of convergence. If the jump rate has all moments finite, we are able to prove a very strong convergence of the semigroup and hence can treat more general initial intensities. However, this is technically harder, because further careful approximations are necessary and a stronger sense of convergence has to be considered. For clarity of presentation, this result is postponed to Proposition \[Prohydrogen\]. The authors of [@DSS82] also consider Case 1 of Propositions \[PropHyd\] and \[Prohydrogen\] for jump rates with finite second moment. In Proposition \[PropHyd\] only finite first moments of the jump rate are required, but not all intensities are considered. The results described above about the large time asymptotic and the hydrodynamic limit are extended to general initial distributions in Section \[nonled\].
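The scaling heuristics for $\kappa=2$ can be checked on a toy jump rate: for a symmetric, finitely supported $a$ with vanishing first moment, $\varepsilon^{-2}\sum_y a(y)\,(f(x+\varepsilon y)-f(x)) \to \tfrac12\,\sigma^2 f''(x)$ as $\varepsilon \to 0$, with $\sigma^2 = \sum_y y^2 a(y)$; this is the origin of the second order elliptic operator mentioned above. A minimal sketch (the two-point jump rate is our choice for illustration):

```python
import math

def scaled_generator(f, x, eps, jumps):
    """eps^{-2} * sum_y a(y) * (f(x + eps*y) - f(x)) for a finitely
    supported jump rate a, given as a dict {jump size: weight}."""
    return sum(w * (f(x + eps * y) - f(x)) for y, w in jumps.items()) / eps ** 2

# Symmetric two-point jump rate: zero first moment, second moment sigma^2 = 1.
a = {-1.0: 0.5, 1.0: 0.5}

# Diffusive limit: as eps -> 0 this tends to (1/2) f''(x).
value = scaled_generator(math.sin, 0.3, 1e-3, a)   # close to -sin(0.3)/2
```

For an asymmetric rate the $\varepsilon^{-2}$ normalization diverges; the $\kappa=1$ scaling then yields the first order (transport) term instead.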
We require some kind of mixing property for the initial distributions, the so-called decay of correlations, which we formulate in terms of the cumulants (or Ursell functions). For the hydrodynamic limit we have to require in addition that the first correlation function of the initial measure converges reasonably. The first correlation function is the density of the first moment measure of the empirical field. In the hydrodynamic limit the solutions of the associated partial differential equations are constructed as the limits of the first correlation functions, cf. Proposition \[PrAsyGibbs\]. The large time asymptotic is clearly again a Poisson measure with constant intensity. However, the constant is now the arithmetic mean of the first correlation function of the initial distribution.
We prove in Section \[SecGibbs\] that the conditions required above are fulfilled, in particular, by Gibbs measures in the high temperature, low activity regime.
Kawasaki dynamics\[SecDefKa\]
=============================
The generator
-------------
The configuration space $\Gamma :=\Gamma _{\mathbb{R}^d}$ over $\mathbb{R}^d$, $d\in \mathbb{N}$, is defined as the set of all locally finite subsets of $\mathbb{R}^d$, $$\Gamma :=\left\{ \gamma \subset \mathbb{R}^d:\left| \gamma_\Lambda\right|
<\infty \hbox{
for every compact }\Lambda\subset \mathbb{R}^d\right\} ,$$ where $\left| \cdot \right| $ denotes the cardinality of a set and $\gamma_\Lambda:= \gamma\cap\Lambda$. As usual we identify each $\gamma \in \Gamma $ with the non-negative Radon measure $\sum_{x\in \gamma }\delta_x\in \mathcal{M}(\mathbb{R}^d)$, where $\delta_x$ is the Dirac measure with mass at $x$, $\sum_{x\in\emptyset}\delta_x$ is, by definition, the zero measure, and $\mathcal{M}(\mathbb{R}^d)$ denotes the space of all non-negative Radon measures on the Borel $\sigma$-algebra $\mathcal{B}(\mathbb{R}^d)$. This identification allows us to endow $\Gamma $ with the topology induced by the vague topology on $\mathcal{M}(\mathbb{R}^d)$, i.e., the weakest topology on $\Gamma$ with respect to which all mappings $$\Gamma \ni \gamma \longmapsto \langle f,\gamma\rangle :=
\int_{\mathbb{R}^d}\gamma(dx)\,f(x)=\sum_{x\in \gamma }f(x),\quad
f\in C_c(\mathbb{R}^d),$$ are continuous. Here $C_c(\mathbb{R}^d)$ denotes the set of all continuous functions on $\mathbb{R}^d$ with compact support. By $\mathcal{B}(\Gamma )$ we will denote the corresponding Borel $\sigma$-algebra on $\Gamma$.
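As a minimal computational illustration (our own sketch, not part of the formal development), a configuration restricted to a bounded window is just a finite list of points, and the pairing $\langle f,\gamma\rangle$ is a finite sum:

```python
# Minimal illustration: a configuration seen through a bounded window is a
# finite set of points in R, and <f, gamma> is the sum of f over those points.
gamma = [-1.2, 0.3, 0.7, 2.5]          # points of a configuration in R

def f(x):
    """Continuous function with compact support: a triangular bump on [-1, 1]."""
    return max(0.0, 1.0 - abs(x))

pairing = sum(f(x) for x in gamma)      # only the points in [-1, 1] contribute
```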
Given a non-negative function $a \in L^1(\mathbb{R}^d,dx)$, the generator $L:=L_a$ of the free Kawasaki dynamics for an infinite particle system is given by the informal expression $$\label{DefKawaOp}
(LF)(\gamma) := \sum_{x \in \gamma} \int_{\mathbb{R}^d} dy\, a(x-y)
\left( F(\gamma \setminus x \cup y)- F(\gamma) \right).$$ We proceed to give a rigorous meaning to the right-hand side of (\[DefKawaOp\]). Let $\mathcal{O}_c(\mathbb{R}^d)$ denote the set of all open sets in $\mathbb{R}^d$ with compact closure. A $\mathcal{B}(\Gamma)$-measurable function $F$ is called cylinder and exponentially bounded whenever there is a $\Lambda\in\mathcal{O}_c(\mathbb{R}^d)$ such that $F(\gamma)=F(\gamma_\Lambda)$ for all $\gamma \in \Gamma$, and $|F(\gamma)|\leq C e^{c\vert\gamma_\Lambda\vert}$, $\gamma
\in\Gamma$, for some $C,c >0$. For such a function $F$ one has $$\begin{aligned}
&&\sum_{x \in \gamma} \int dy\, a(x-y)
\left| F(\gamma \setminus x \cup y)- F(\gamma) \right|\nonumber \\
&\leq& \left[ |\gamma_\Lambda| \int_{\mathbb{R}^d}dy\,a(y) +
\int_\Lambda dy\,\sum_{x \in \gamma} a(x-y) \right] 2 C e^{c
(\vert\gamma_\Lambda\vert +1)}\label{eq2.1},\end{aligned}$$ which is finite provided the configuration $\gamma$ is an element in $\Gamma_a\subset\Gamma$, $$\Gamma_a := \left\{ \gamma \in \Gamma : y \mapsto \sum_{x \in
\gamma} a(x-y) \hbox{ is } L^{1}_\mathrm{loc} (\mathbb{R}^d,dy)
\right\}.$$ In this case, the sum and the integral in (\[DefKawaOp\]) are finite, and thus the operator $L$ is well-defined on the space $\mathcal{F}L_{eb}^0(\Gamma_a)$ of all exponentially bounded cylinder functions on $\Gamma$, restricted to $\Gamma_a$.
Concerning the set $\Gamma_a$, we note that $\mu(\Gamma_a)=1$ for any probability measure $\mu$ on $\Gamma$ with first correlation function $k^{(1)}_\mu$ bounded. This follows from the fact that for each closed ball $B(n)\subset\mathbb{R}^d$ centered at $0$ and radius $n\in\mathbb{N}$ we have $$\begin{aligned}
\int_\Gamma\mu(d\gamma)\int_{B(n)}dy\sum_{x \in \gamma} a(x-y)
&=&\int_{\mathbb{R}^d}dx\,k^{(1)}_\mu (x)\int_{B(n)}dy\,a(x-y)\\
&\leq& C \Vert a\Vert_{L^1(\mathbb{R}^d,dx)}\hbox{vol}(B(n))\end{aligned}$$ for any constant $C\geq\sup_{x\in\mathbb{R}^d}k^{(1)}_\mu(x)$. Here $\hbox{vol}(B(n))$ denotes the volume of $B(n)$ with respect to the Lebesgue measure on $\mathbb{R}^d$. Clearly, the latter implies that $\mu(\Gamma\setminus\Gamma_a)=0$.
In view of these considerations, throughout this work we shall restrict our setting to $\Gamma_a$.
Among the elements in $\mathcal{F}L_{eb}^0(\Gamma_a)$ we distinguish the functions $e_B(f)$, called Bogoliubov exponentials, $$e_B(f,\gamma):=\prod_{x\in\gamma}(1+f(x)),\quad \gamma\in\Gamma,$$ for $f$ being any bounded $\mathcal{B}(\mathbb{R}^d)$-measurable function with compact support ($f\in B_{c}(\mathbb{R}^d)$). A first reason to distinguish these functions is the especially simple form for the action of $L$ on them, namely, for all $f\in
B_{c}(\mathbb{R}^d)$ and all $\gamma\in\Gamma_a$, $$\begin{aligned}
\left(Le_B(f)\right)(\gamma) &=& \sum_{x \in \gamma} \int_{\mathbb{R}^d} dy\,a(x-y)
\left(f(y)-f(x) \right) e_B(f,\gamma \setminus x)\nonumber \\
&=& \sum_{x \in \gamma} \left(A f\right)(x) e_B(f, \gamma \setminus
x), \label{Eq2.10}\end{aligned}$$ where $$\label{OnePartOp} A f(x) := \int_{\mathbb{R}^d} dy\,a(x-y)
\left(f(y) -f(x) \right) .$$
In the sequel we call the linear operator $A$ the one particle operator. Due to its special role throughout this work, its properties will be studied in more detail in the next subsection.
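Identity (\[Eq2.10\]) can be checked numerically. The following sketch (our own, with an illustrative Gaussian jump rate and a rapidly decaying test function) evaluates both sides by quadrature on a three-point configuration:

```python
import numpy as np

# Quadrature check of (L e_B(f))(gamma) = sum_x (Af)(x) e_B(f, gamma \ x).
# The Gaussian jump rate a and the test function f are illustrative choices.
y = np.linspace(-8.0, 8.0, 2001)
dy = y[1] - y[0]
a = lambda u: np.exp(-u**2 / 2.0) / np.sqrt(2.0 * np.pi)
f = lambda u: 0.5 * np.exp(-np.asarray(u)**2)

gamma = [-0.7, 0.2, 1.1]

def e_B(pts):
    out = 1.0
    for p in pts:
        out *= 1.0 + float(f(p))
    return out

# left-hand side: straight from the definition of L
lhs = 0.0
for i, x in enumerate(gamma):
    rest = gamma[:i] + gamma[i + 1:]
    vals = np.array([e_B(rest + [yy]) for yy in y])
    lhs += np.sum(a(x - y) * (vals - e_B(gamma))) * dy

# right-hand side: via the one particle operator A f = a*f - a^{(0)} f
def Af(x):
    return np.sum(a(x - y) * (f(y) - f(x))) * dy

rhs = sum(Af(x) * e_B(gamma[:i] + gamma[i + 1:]) for i, x in enumerate(gamma))
```

Both sides reduce to the same quadrature sums, so they agree up to floating-point rounding.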
Given a locally integrable function $z\geq 0$, we recall that the Poisson measure $\pi_z$ with intensity $z$ is the unique probability measure on $\Gamma$ for which the Laplace transform is given by $$\int_\Gamma\pi_z(d\gamma )\, e^{\langle\varphi,\gamma\rangle} =\exp
\left( \int_{\mathbb{R}^d}dx\,\left( e^{\varphi
(x)}-1\right)z(x)\right),\label{A7}$$ for all $\varphi$ in the Schwartz space $\mathcal{D}(\mathbb{R}^d):=C_c^\infty(\mathbb{R}^d)$ of all infinitely differentiable functions with compact support. We recall that a measure $\mu$ on $\Gamma_a$ is called infinitesimally reversible with respect to the operator $L$ defined in (\[DefKawaOp\]), whenever $L$ is symmetric in $L^2(\Gamma_a,\mu)$.
\[LemRev\] Assume that $a$ is an even function. Then for any real number $z>0$ the Poisson measure $\pi_z$ with constant intensity $z$ is an infinitesimally reversible measure with respect to $L$.
For all $F,G\in \mathcal{F}L_{eb}^0(\Gamma_a)$ we have
$$\begin{aligned}
\int_{\Gamma_a}\pi_z(d\gamma) LF(\gamma)G(\gamma) & =&
\int_{\Gamma_a}\pi_z(d\gamma) \sum_{x \in \gamma} \int_{\mathbb{R}^d}dy\, a(y-x)
F(\gamma \setminus x \cup y)G(\gamma)\\ && -
\int_{\Gamma_a}\pi_z(d\gamma) \sum_{x \in \gamma} \int_{\mathbb{R}^d}dy\, a(x-y)
F(\gamma)G(\gamma).\end{aligned}$$
The result follows by applying the Mecke identity (cf. [@Me67 Theorem 3.1]) twice to the first integral.
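For the reader's convenience, we recall the Mecke identity in the form used here: for the Poisson measure $\pi_z$ with constant intensity $z>0$ and any non-negative measurable function $H$ on $\mathbb{R}^d\times\Gamma$, $$\int_\Gamma\pi_z(d\gamma)\sum_{x\in\gamma}H(x,\gamma)=z\int_{\mathbb{R}^d}dx\int_\Gamma\pi_z(d\gamma)\,H(x,\gamma\cup x).$$ One application removes the jumping point from the configuration; a second application, in the variable $y$, inserts it back at its new position. Since $a$ is even, the resulting expression is symmetric in $F$ and $G$.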
\[Natal11\] It is clear that the linear space spanned by the class of functions $e_B(f)$, $f\in B_{c}(\mathbb{R}^d)$, is in $\mathcal{F}L_{eb}^0(\Gamma_a)$. The space $\mathcal{F}L_{eb}^0(\Gamma_a)$ also contains the class of coherent states $e_{\pi_{z}}(f)$ corresponding to $f\in
B_{c}(\mathbb{R}^d)$, $$\label{JL}
e_{\pi_{z}}(f):=\exp\left(-\int_{\mathbb{R}^d}dx\,z(x)f(x)\right)e_B(f).$$
The one particle operator \[SecQuasi\]
--------------------------------------
In the sequel the one particle operator $A$ introduced in (\[OnePartOp\]) will play an essential role. Because of this, in this subsection we shall collect its main properties used below. We observe that in stochastic analysis the operator $A$ is known as a relatively simple example of a bounded generator of a Markov jump process on $\mathbb{R}^d$ (see e.g. [@EtKu86 Section 4.2]).
In terms of the usual convolution $*$ of functions, we also note that $$Af = a*f - a^{(0)} f , \quad a^{(0)}:=\int_{\mathbb{R}^d}a(x)\,dx.
\label{KF1}$$ Therefore, the properties of convolution of functions (namely, Young’s inequality) lead straightforwardly to the $L^p$ results stated in the next proposition. There, we also consider the real Banach spaces $B(\mathbb{R}^d)$ and $C_\infty(\mathbb{R}^d)$, respectively, of all bounded measurable functions and of all continuous functions vanishing at infinity, both with the supremum norm $\Vert f\Vert_u:=\sup_{x\in\mathbb{R}^d}\vert f(x)\vert$. We recall that a strongly continuous contraction semigroup $(T_t)_{t\geq 0}$ is called sub-Markovian whenever for $0\leq f\leq
1$ it follows $0\leq T_tf\leq 1$ for every $t\geq 0$. If, in addition, $T_tf_n\nearrow 1$ for some sequence $f_n\nearrow 1$, then $(T_t)_{t\geq 0}$ is called Markovian. Here, for the spaces $B(\mathbb{R}^d)$ and $C_\infty(\mathbb{R}^d)$ the convergence is pointwise, and for an $L^p$-space the convergence is almost everywhere. A strongly continuous contraction semigroup $(T_t)_{t\geq 0}$ is called positivity preserving, if $f\geq 0$ implies $T_tf\geq 0$ for every $t\geq 0$.
\[PropSemiA\] The linear operator $A$ is a bounded operator on $L^p(\mathbb{R}^d,zdx)$, for $z>0$ constant and $1\leq p\leq\infty$ (on $B(\mathbb{R}^d)$ and on $C_\infty(\mathbb{R}^d)$). As a consequence, $A$ is the generator of a uniformly continuous semigroup $(e^{tA})_{t\geq 0}$ on $L^p(\mathbb{R}^d,zdx)$, $1\leq
p\leq\infty$ (on $B(\mathbb{R}^d)$ and on $C_\infty(\mathbb{R}^d)$), $$e^{tA}:= \sum_{n=0}^\infty \frac{(tA)^n}{n!},\label{KF4}$$ the sum converging in norm for every $t\geq0$. Moreover, $(e^{tA})_{t\geq 0}$ is a positivity preserving (contraction) semigroup on each $L^p(\mathbb{R}^d,zdx)$ space, $1\leq p\leq\infty$ (on $B(\mathbb{R}^d)$ and on $C_\infty(\mathbb{R}^d)$), and Markovian on $L^p(\mathbb{R}^d,zdx)$ for $1\leq p<\infty$.
For the proof see e.g. [@EtKu86 Section 4.2], [@J01].
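These properties can be observed in a discrete sketch (our own; a periodic grid replaces $\mathbb{R}^d$ to avoid boundary effects), where $A$ becomes a matrix and (\[KF4\]) a truncated series:

```python
import numpy as np

# Discretization of A f = a*f - a^{(0)} f on a periodic grid (an illustrative
# stand-in for R^d), followed by the exponential series (KF4). We then check
# that e^{tA} is positivity preserving and Markovian (rows sum to one).
n, L = 200, 20.0
dx = L / n
x = np.arange(n) * dx
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, L - d)                                # distance on the torus
K = np.exp(-d**2 / 2.0) / np.sqrt(2.0 * np.pi) * dx     # kernel a(x-y) dy
a0 = K[0].sum()                                         # total jump rate a^{(0)}
A = K - a0 * np.eye(n)

def expm_series(M, terms=60):
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

t = 1.5
P = expm_series(t * A)          # discrete transition kernel at time t

markov_defect = np.abs(P.sum(axis=1) - 1.0).max()
min_entry = P.min()
```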
Due to (\[KF1\]), the operator $A$, as well as the semigroup $(e^{tA})_{t\geq 0}$, both either on an $L^p(\mathbb{R}^d,zdx)$ space, $1\leq p <\infty$, or on $C_\infty(\mathbb{R}^d)$, may be expressed in terms of the Fourier transform $$\hat f(k):=\frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d}dx\,e^{-i\langle x, k \rangle} f(x),$$ see e.g. [@J01].
\[KFProp1\] For every function $\varphi$ in the Schwartz space $\mathcal{S}(\mathbb{R}^d)$ of tempered test functions one has $$\widehat{A\varphi}(k)=(2\pi)^{d/2}\left(\hat a(k)-\hat a(0)\right)\hat\varphi(k),\quad
k\in\mathbb{R}^d,$$ and $$\left(e^{tA}\varphi\right)(x) =
\frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d}dk\,
e^{i\langle k , x \rangle}e^{t(2\pi)^{d/2}(\hat a (k)-\hat a(0))}\hat\varphi(k),\quad x\in
\mathbb{R}^d.\label{Natal5}$$
\[Natal6\] Since $a$ is non-negative, for all $k\in\mathbb{R}^d$ one has $$\mathrm{Re}(\hat a(k) -\hat a(0))
=\frac{1}{(2\pi)^{d/2}} \int_{\mathbb{R}^d}dx\,(\cos (\langle k, x \rangle )-1)a(x)\leq 0.
\label{Ze}$$ Provided $a$ does not vanish almost everywhere, the equality in (\[Ze\]) holds only for $k=0$. This follows from the fact that the set $\{x: \langle k , x \rangle \in 2\pi\mathbb{Z}\}$ has zero Lebesgue measure if and only if $k\not= 0$.
According to Proposition \[KFProp1\], the semigroup $(e^{tA})_{t\geq 0}$ is defined by a kernel $\mu_t$, $t\geq 0$, whose Fourier transform is given by $$\hat\mu_t(k)= \frac{1}{(2\pi)^{d/2}}e^{t(2\pi)^{d/2}(\hat a (k)-\hat a(0))}.$$ Because $a$ is non-negative, $\hat a$ and thus $\hat\mu_t$, $t\geq
0$, are positive definite functions. Through Bochner’s theorem, the latter means that each $\mu_t$ is a non-negative finite measure on $\mathbb{R}^d$. We note that the Markovian property of $(e^{tA})_{t\geq 0}$ means that each $\mu_t$ is actually a probability measure. For more details see e.g. [@J01].
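On a uniform periodic grid the discretized generator is a circulant matrix, so the discrete Fourier transform diagonalizes it, in analogy with (\[Natal5\]). A sketch (grid, kernel, and test function are illustrative choices of ours):

```python
import numpy as np

# The discretized A is circulant on a uniform periodic grid, so e^{tA} can be
# applied spectrally, mirroring (Natal5). We compare the FFT route against a
# truncated exponential series; both are discrete stand-ins for the continuum.
n, L = 256, 25.0
dx = L / n
x = np.arange(n) * dx
d = np.minimum(x, L - x)                                # wrapped distance from 0
c = np.exp(-d**2 / 2.0) / np.sqrt(2.0 * np.pi) * dx     # first column: a(.) dy
c[0] -= c.sum()                                         # subtract a^{(0)} on the diagonal
A = c[(np.arange(n)[:, None] - np.arange(n)[None, :]) % n]  # circulant matrix

phi = np.exp(-(x - L / 2.0)**2)                         # smooth test function
t = 2.0

# spectral route: the eigenvalues of a circulant are the DFT of its first column
spectral = np.fft.ifft(np.exp(t * np.fft.fft(c)) * np.fft.fft(phi)).real

# reference: exponential series applied to phi
series, term = phi.copy(), phi.copy()
for k in range(1, 60):
    term = (A @ term) * (t / k)
    series = series + term
```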
\[KFRem1\] It is easy to check that on the space $L^2(\mathbb{R}^d,zdx)$ the adjoint operator $A^*$ of $A$ is defined by $$(A^* f)(x) = \int_{\mathbb{R}^d}dy\,a(y-x) (f(y)-f(x))+ \int_{\mathbb{R}^d}dy \big(a(y)-a(-y)\big)\ f(x),\quad
f\in L^2(\mathbb{R}^d,zdx).$$ As a consequence, if $a$ is an even function, then $A$ is a bounded self-adjoint operator on $L^2(\mathbb{R}^d,zdx)$. In this case, it follows from (\[KF4\]) that the semigroup $(e^{tA})_{t\geq 0}$ is a self-adjoint contraction on $L^2(\mathbb{R}^d,zdx)$.
\[remnoncons\] For a non-constant activity parameter $z$, the operator $A$ is still bounded but, in general, the semigroup $(e^{tA})_{t\geq 0}$ is no longer a contraction. For instance, on $\mathbb{R}$ consider $a(x)=e^{-x^2}$ and $z(x)= 1 + e^{-x^2 +4x}$. A simple calculation shows that for $\varphi(x)=\pi^{-1/2} e^{-x^2}$ one has $\int_{\mathbb{R}} dx\,\varphi(x) (A\varphi)(x) z(x)>0$, proving that the semigroup $(e^{tA})_{t\geq 0}$ cannot be a contraction in $L^2(\mathbb{R},z\,dx)$.
Independent infinite particle processes {#Section2}
=======================================
A probabilistic approach {#Subsection2.1}
------------------------
As we mentioned at the beginning of Subsection \[SecQuasi\], the operator $A$ is the generator of a Markov jump process on $\mathbb{R}^d$. Following e.g. [@EtKu86], we can explicitly construct this process as follows. Consider the Markov chain $(Y_k)_{k\in\mathbb{N}_0}$ on $\mathbb{R}^d$, with transition density function $\mu(x,y)= \frac{a(x-y)}{a^{(0)}}$. Let $(Z_t)_{t\geq 0}$ be a Poisson process with parameter $a^{(0)}$, independent of the Markov chain. We then define the Markov process $(X_t)_{t\geq 0}$ by $X_t:=Y_{Z_t}$, $t\geq 0$. This process has generator $A$ and, by construction, it has *cadlag* paths in $\mathbb{R}^d$. We denote by $D([0,\infty),\mathbb{R}^d)$ the set of all *cadlag* paths from $[0,\infty)$ to $\mathbb{R}^d$ and by $P^x$ the path-space measure corresponding to the process $X$ starting at $x \in \mathbb{R}^d$. By $E_x$ we denote the expectation w.r.t. this measure.
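The construction $X_t=Y_{Z_t}$ is straightforward to simulate. The following Monte Carlo sketch (with the standard Gaussian density as an illustrative jump rate, so $a^{(0)}=1$) recovers the compound Poisson moments $E[X_t]=t\int x\,a(x)\,dx$ and $\mathrm{Var}[X_t]=t\int x^2\,a(x)\,dx$:

```python
import numpy as np

# Simulation of X_t = Y_{Z_t}: Z_t ~ Poisson(a^{(0)} t) jumps, each step drawn
# from the density a / a^{(0)}. Here a is the standard Gaussian density
# (a^{(0)} = 1), an illustrative choice; the chain starts at Y_0 = 0.
rng = np.random.default_rng(0)
a0, t, n_paths = 1.0, 2.0, 20000

num_jumps = rng.poisson(a0 * t, size=n_paths)                      # Z_t
X_t = np.array([rng.standard_normal(k).sum() for k in num_jumps])  # Y_{Z_t}

# compound Poisson moments: mean t * (first moment of a) = 0, variance t * (second moment) = t
mean, var = X_t.mean(), X_t.var()
```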
The process on $\Gamma$ corresponding to $L$ is the following random evolution: each particle evolves according to the above jump process, independently of the other particles, cf. Lemma \[nov06-eq1\] below. This independent infinite particle process was rigorously constructed in [@KLR05] (see also [@KLR03]). In the following we present the main results therein.
Assume that there exist an $\alpha > 2d$ and a $C>0$ such that $$\label{eqbnda} 0 \leq a(y) \leq \frac{C}{(1+|y|)^\alpha},\qquad
{\mbox{for all }} y\in \mathbb{R}^d.$$ The sufficient condition for the construction in [@KLR05] is then fulfilled. Hence there exists a Markov process $\left(
D([0,\infty),\Theta), (\mathbf{X}_t)_{t\geq 0},
(\mathbf{P}_\gamma)_{\gamma \in \Theta}\right)$ on the set $\Theta$ of all $\gamma \in \Gamma$ such that, for some $m\in\mathbb{N}$ (depending on $\gamma$), $$\vert\gamma_{B(n)}\vert\leq m\,\hbox{vol}(B(n)),\quad \forall\,
n\in\mathbb{N}.$$ Here $D([0,\infty),\Theta)$ denotes the set of all *cadlag* paths from $[0,\infty)$ to $\Theta$, the process $\mathbf{X}_t:
D([0,\infty),\Theta) \rightarrow \Theta$ is the canonical one, i.e., $\mathbf{X}_t(\omega):=\omega(t)$, $\omega\in D([0,\infty),\Theta)$, and each $\mathbf{P}_\gamma$ is the path-space measure of the process starting at a $\gamma \in \Theta$. By $\mathbf{E}_\gamma$ we denote the expectation w.r.t. $\mathbf{P}_\gamma$.
Note that the process cannot start at any arbitrary initial configuration $\gamma \in \Gamma$, as follows implicitly from the discussion before Theorem 2.2 in [@KLR05]. One is obliged to restrict the set of possible initial configurations e.g. to $\Theta$. However, by [@KoKuKva], [@K00], one has $\mu(\Theta)=1$ for every probability measure $\mu$ on $\Gamma$ whose correlation functions $k_\mu^{(n)}$, $n\in\mathbb{N}$, fulfill the so-called Ruelle bound, i.e., there is a $C>0$ such that $k_\mu^{(n)}\leq C^n$ for every $n\in\mathbb{N}$. This holds, for instance, for Gibbs measures w.r.t. superstable, lower and upper regular potentials, cf. [@Ru70]. In particular, it holds for Poisson measures with intensity being any non-negative bounded measurable function. For a constant intensity this result was shown using ergodicity in [@NZ77].
Choosing, for a fixed initial configuration $\gamma \in \Theta$, an enumeration $\{x_n\}_{n\in\mathbb{N}}$ of its points, the infinite particle process can be described more explicitly. For each $n$ let us consider an independent copy of the one-particle jump process $( (X^{(n)}_t)_{t\geq 0}, P^{x_n} )$ introduced at the beginning of the section. In [@KLR05] it is shown that a.s. the sequence $(X_t^{(n)})_{n\in\mathbb{N}}$ has no accumulation points and contains no two coinciding points. Then $(\{ X^{(n)}_t \}_{n \in
\mathbb{N}})_{t\geq 0}$ is the corresponding process on the configuration space $\Theta$ and the path-space measure is the “symmetrization" of $\bigotimes_{n=1}^\infty P^{x_n}$. Moreover, the transition probability $(\mathbf{P}_t)_{t \geq 0}$ of the process $(\mathbf{X}_t)_{t\geq 0}$ is just the product of the one-particle transition probabilities $e^{tA}(x-y)dy$, i.e., $\prod_{n=1}^\infty
e^{tA}(x_n-y_n) dy_n$. As a consequence, for all non-positive $\varphi\in\mathcal{D}(\mathbb{R}^d)$ we have $$\mathbf{E}_\gamma\left[e^{ \langle \varphi,\mathbf{X}_t \rangle }
\right] =\int_\Theta \mathbf{P}_t(\gamma,d\xi)\,e^{ \langle
\varphi,\xi \rangle } = \prod_{x \in \gamma} E_x\left[
e^{\varphi(X_t)} \right], \label{KF6a}$$ cf. [@KLR05] (see also Lemma \[lemapri\] below), or in terms of Bogoliubov exponentials for $\varphi\in\mathcal{D}(\mathbb{R}^d)$ with $-1< \varphi \leq 0 $, $$\mathbf{E}_\gamma \left[e_B(\varphi,\mathbf{X}_t) \right] = \prod_{x
\in \gamma} E_x\left[ \varphi(X_t) +1\right] =
e_B(e^{tA}\varphi,\gamma).\label{KF6b}$$ More generally, it follows that $$\begin{aligned}
\mathbf{E}_\gamma\left[e^{\langle \varphi_1,
\mathbf{X}_{t_1}\rangle}\cdots e^{\langle
\varphi_n,\mathbf{X}_{t_n}\rangle}\right]=\prod_{x \in \gamma}
E_x\left[e^{\varphi_1(X_{t_1})}\cdots e^{\varphi_n(X_{t_n})}\right],
\label{Eq3.1}\end{aligned}$$ for all $0\leq t_1 <\ldots < t_n $ and all non-positive $\varphi_1,\ldots,\varphi_n\in \mathcal{D}(\mathbb{R}^d)$, $n\geq
2$. By a monotone approximation procedure using Riemann sums, this relation allows us to calculate the Laplace transform of the path-space measure $\mathbf{P}_\gamma$, i.e., $$\label{eqpath} \mathbf{E}_\gamma\left[ e^{-\int dt \langle
\varphi(t,\cdot), \mathbf{X}_t \rangle } \right] = \prod_{x \in
\gamma} E_x\left[ e^{-\int dt \varphi(t, X_t) } \right],$$ for all non-negative continuous functions $\varphi$ from $[0,\infty)\times \mathbb{R}^d$ to $\mathbb{R}$ with compact support.
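The one-particle building block of these product formulas, $E_x[\varphi(X_t)]=(e^{tA}\varphi)(x)$, can be probed by Monte Carlo. For the illustrative choices $a(x)=(2\pi)^{-1/2}e^{-x^2/2}$ (so $a^{(0)}=1$) and $\varphi(x)=e^{-x^2/2}$, the Gaussian convolutions in the series $e^{tA}=e^{-ta^{(0)}}\sum_k\frac{t^k}{k!}(a*)^k$ are explicit:

```python
import math
import numpy as np

# Monte Carlo check of E_x[phi(X_t)] = (e^{tA} phi)(x). With the illustrative
# choices a = standard Gaussian density (a^{(0)} = 1) and phi(x) = exp(-x^2/2),
# the Gaussian convolutions give
#   (e^{tA} phi)(x) = e^{-t} sum_k t^k/k! * exp(-x^2/(2(k+1))) / sqrt(k+1).
rng = np.random.default_rng(1)
t, x0, n_paths = 1.0, 0.5, 200000

jumps = rng.poisson(t, size=n_paths)
X_t = x0 + np.array([rng.standard_normal(k).sum() for k in jumps])
mc = np.exp(-X_t**2 / 2.0).mean()

exact = math.exp(-t) * sum(
    t**k / math.factorial(k) * math.exp(-x0**2 / (2.0 * (k + 1))) / math.sqrt(k + 1.0)
    for k in range(40)
)
```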
The next result gives a relation between the process $\mathbf{X}$ and $L$.
\[nov06-eq1\] For all $F \in \mathcal{F}L^0_{eb}(\Theta)$ and all $\gamma \in \Theta$ it holds $$\label{eq3.2} \mathbf{E}_\gamma \left[
F(\mathbf{X}_t)-F(\mathbf{X}_0) - \int_0^t ds LF(\mathbf{X}_s)
\right] =0.$$
For all $F$ of the form $e_B(\varphi)$ with $\varphi \in
\mathcal{D}(\mathbb{R}^d)$ one has $$\begin{aligned}
\frac{d}{dt}\mathbf{E}_\gamma\left[F(\mathbf{X}_t)\right] =
\frac{d}{dt}e_B(e^{tA}\varphi,\gamma)= \sum_{x \in \gamma}
Ae^{tA}\varphi(x) e_B(e^{tA}\varphi,\gamma \setminus x).\end{aligned}$$ Hence using the product structure of the path-space measure and (\[Eq2.10\]) one finds $$\begin{aligned}
\sum_{x \in \gamma}
Ae^{tA}\varphi(x) e_B(e^{tA}\varphi,\gamma \setminus x)=
\mathbf{E}_\gamma \left[\sum_{x \in \mathbf{X}_t} A\varphi(x)
e_B(\varphi,\mathbf{X}_t\!\! \setminus x)\right] =
\mathbf{E}_\gamma \left[Le_B(\varphi)(\mathbf{X}_t) \right].\end{aligned}$$
In order to make the previous calculations rigorous and to extend them to the whole space $\mathcal{F}L^0_{eb}(\Theta)$, we have to establish bounds for the considered expressions. According to (\[eq2.1\]), for all $F\in \mathcal{F}L^0_{eb}(\Theta)$, one may bound the integrand $LF(\mathbf{X}_s)$ in (\[eq3.2\]) by $$\left[ |\mathbf{X}_s\cap\Lambda| \int_{\mathbb{R}^d}dy\,a(y) +
\int_\Lambda dy\,\sum_{x \in \mathbf{X}_s} a(x-y) \right] 2 C e^{c
(\vert\mathbf{X}_s\cap\Lambda\vert +1)}.$$ For some constants $C',C''>0$ one may bound the previous expression by $C'' e^{\langle \chi, \mathbf{X_s} \rangle }$, where $\chi(y) = \frac{C'}{(1+|y|)^{\alpha/2}}$ with $\alpha$ as in (\[eqbnda\]). The first summand in (\[eq3.2\]) and the expression $\sum_{x \in
\mathbf{X}_t} |A\varphi(x)| e_B(|\varphi|,\mathbf{X}_t \setminus x)$ can be bounded analogously. Due to Lemma \[lemapri\] below, the expectations of these bounds are finite for all $\gamma \in \Theta$.
\[lemapri\] Let $\chi(y) = \frac{C'}{(1+|y|)^{\alpha/2}}$, for some $C'>0$ and $\alpha>0$ as in (\[eqbnda\]). Then there exists a $c>0$ such that $A\chi \leq c \chi$ and $e^{tA}\chi
\leq e^{ct} \chi$ for all $t\geq 0$. Moreover, for all $\gamma \in
\Theta$ and all measurable functions $\varphi$ such that $\frac{|\varphi|}{
\chi}$ is bounded, one has that the product $\prod_{x \in \gamma}(1+\varphi(x))$ is absolutely convergent and $$\mathbf{E}_\gamma \left[ e^{\langle\varphi, \mathbf{X}_t\rangle}
\right] < \infty.$$
The previous considerations yield $$\mathbf{E}_\gamma \left[ e^{\langle\varphi, \mathbf{X}_t\rangle}
\right]\leq
\mathbf{E}_\gamma \left[ e^{\langle\chi, \mathbf{X}_t\rangle}
\right] = \prod_{x \in \gamma} E_x\left[ e^{\chi(X_t)}\right] =
\prod_{x \in \gamma} e^{tA}e^\chi (x).$$ Thus the proof amounts to showing the convergence of the latter infinite product. For this purpose, we note that $$\chi(x-y)= \frac{C^\prime}{(1+|x-y|)^{\alpha/2}}\leq
\frac{C^\prime(1+|y|)^{\alpha/2}}{(1+|x|)^{\alpha/2}}= (1+|y|)^{\alpha/2} \chi(x).$$ This gives for the one particle operator $$A\chi (x) = \int_{\mathbb{R}^d} dy a(y) \chi(x-y) - a^{(0)}\chi(x) \leq \Big(\int_{\mathbb{R}^d} dy (1+|y|)^{\alpha/2} a(y)\Big)
\chi(x) - a^{(0)}\chi(x).$$ Due to the bound (\[eqbnda\]) for $a$, the integral $\int_{\mathbb{R}^d} dy (1+|y|)^{\alpha/2} a(y)$ is finite. By Grönwall’s lemma this yields $E_x[\chi(X_t)] = e^{tA}\chi(x) \leq \chi(x)e^{ct}$, where $c= \int_{\mathbb{R}^d} dy (1+|y|)^{\alpha/2} a(y) -
a^{(0)}$. Therefore, also $|e^{tA}(e^{\chi}-1)|$ decays as $(1+|x|)^{-\alpha/2}$. According to the definition of $\Theta$, one can prove that $\langle \chi, \gamma \rangle < \infty $ for all $\gamma \in \Theta$. Hence also the product $\prod_{x \in \gamma}
e^{tA}e^\chi (x)$ converges absolutely and is finite.
In Sections \[led\] and \[nonled\] one considers the one-dimensional distributions of processes starting with initial distributions $\mu$ which are probability measures on $\Theta$. The path-space measure $\mathbf{P}^\mu$ corresponding to such a process is given by $\int_\Theta \mathbf{P}_\gamma \mu(d\gamma)$. Its one-dimensional distribution is a probability measure $P^{\mathbf{X}}_{\mu,t}$ on $\Theta$ defined for all non-negative measurable functions $F$ by $$\label{eqpromu} \int_\Theta P^{\mathbf{X}}_{\mu,t}(d\gamma) F(\gamma)
:= \int_{\Theta} \mu(d\gamma)
\mathbf{E}_\gamma\left[F(\mathbf{X}_t) \right] .$$ In particular, for $\mu=\delta_\gamma$, $\gamma \in \Theta$, the one-dimensional distribution coincides with the transition kernel $\mathbf{P}_t(\gamma,\cdot)$ described above.
For functions $F$ being Bogoliubov exponentials, definition (\[eqpromu\]) leads to the so-called Bogoliubov functionals [@Bo62]. The Bogoliubov functional corresponding to a probability measure $\mu$ on $\Gamma$ is defined by $$B_{\mu}(\varphi):= \int_\Gamma e_B(\varphi,\gamma) \mu(d\gamma),
\qquad \varphi \in \mathcal{D}(\mathbb{R}^d).$$ Such a functional is an analogue of the Fourier-Laplace transform on configuration spaces, cf. [@Ku03]. Due to (\[KF6b\]) there is an interesting relation between the Bogoliubov functional corresponding to the initial distribution and the Bogoliubov functional corresponding to the one-dimensional distribution of the process at a time $t>0$, namely, $$\label{onebog} \int_\Theta P^{\mathbf{X}}_{\mu,t}(d\gamma) e_B(\varphi,\gamma)
= \int_\Theta \mu(d\gamma)
e_B(e^{tA}\varphi,\gamma) = B_{\mu}(e^{tA} \varphi).$$
In particular, for $\mu=\pi_z$ for some bounded intensity function $z\geq 0$ one finds $$\begin{aligned}
\int_\Theta e_B(\varphi,\gamma) P^{\mathbf{X}}_{\pi_z,t}(d\gamma) =
\int_\Theta e_B(e^{tA}\varphi,\gamma) \pi_z(d\gamma) =\exp \left(
\int_{\mathbb{R}^d} e^{tA}\varphi(x) z(x) dx \right) \label{KF31}\end{aligned}$$ for all $\varphi\in\mathcal{D}(\mathbb{R}^d)$. In Section \[led\] we shall consider this special case in more detail.
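For constant $z$ the exponent in (\[KF31\]) reduces to $z\int_{\mathbb{R}^d}\varphi(x)\,dx$, since $\int_{\mathbb{R}^d}(A\varphi)(x)\,dx=0$ and hence $\int e^{tA}\varphi\,dx=\int\varphi\,dx$. A Monte Carlo sketch of this invariance (our own, on a torus as an illustrative stand-in for $\mathbb{R}^d$, with Gaussian jumps):

```python
import numpy as np

# Monte Carlo sketch of (KF31) for constant intensity z on a torus of length L
# (an illustrative substitute for R^d): starting from a Poisson configuration,
# E[e_B(phi, X_t)] = exp(z * integral of phi), which is 1 for our phi.
rng = np.random.default_rng(2)
L, z, t, trials = 2 * np.pi, 0.5, 1.0, 30000
phi = lambda u: 0.3 * np.cos(2 * np.pi * u / L)          # integral over [0, L) is 0

vals = np.empty(trials)
for i in range(trials):
    pts = rng.uniform(0.0, L, size=rng.poisson(z * L))   # Poisson configuration
    jumps = rng.poisson(t, size=pts.size)                # jumps per particle
    pts = (pts + np.array([rng.standard_normal(k).sum() for k in jumps])) % L
    vals[i] = np.prod(1.0 + phi(pts))

estimate = vals.mean()
expected = 1.0   # exp(z * integral of phi) with integral of phi = 0
```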
An analytic approach {#Subsection2.2}
--------------------
Within the framework of infinite dimensional analysis on configuration spaces [@KoKu99] one may derive alternative representations and constructions of the dynamics. Instead of describing the infinite particle dynamics on $\Gamma$ through the Kolmogorov equation $\frac{\partial}{\partial t}F_t=LF_t$, the so-called $K$-transform [@Le75a; @Le75b] allows an alternative description for the action of $L$ on $\mathcal{F}L_{eb}^0(\Gamma_a)$.
Given the space $\Gamma_0:=\{\gamma\in\Gamma:\vert\gamma\vert<\infty\}$ of finite configurations endowed with the metrizable topology as described in [@KoKu99], let $B_{exp,ls}(\Gamma_0)$ be the space of all exponentially bounded Borel measurable functions $G$ (i.e., $|G(\eta)|\leq C_1 e^{C_2 |\eta|}$ for some $C_1,C_2 > 0$) with local support (i.e., there is a $\Lambda \in
\mathcal{O}_c(\mathbb{R}^d)$ such that $G\!\!\upharpoonright
_{\{\eta\in\Gamma_0:\vert\eta\cap(\mathbb{R}^d\setminus\Lambda)\vert\not=
0\}}\equiv 0$). The $K$-transform of a $G\in B_{exp,ls}(\Gamma_0)$ is the mapping $KG:\Gamma\to\mathbb{R}$ defined for all $\gamma\in\Gamma$ by $(KG)(\gamma ):=\sum_{{\eta \subset
\gamma}\atop{\vert\eta\vert < \infty} } G(\eta )$. Note that $K\left(B_{exp,ls}(\Gamma_0)\right)\subset\mathcal{F}L_{eb}^0(\Gamma_a)$. For $G\in B_{exp,ls}(\Gamma_0)$, in terms of the operator $L$ we obtain $$\begin{aligned}
\left(L(KG)\right)(\gamma)
&=& \sum_{x \in \gamma} \int_{\mathbb{R}^d \setminus (\gamma\setminus
x)}\!\!dy\,a(x-y)
\left[(KG)(\gamma\setminus x) + (KG (\cdot\cup y))(\gamma\setminus x)\right.\\
&&\left. -(KG)(\gamma\setminus x)-(KG (\cdot\cup x))(\gamma\setminus x)\right],\end{aligned}$$
which leads to the so-called symbol $\hat L$ acting on quasi-observables, $$(\hat{L}G)(\eta) := \sum_{x\in\eta} \int_{\mathbb{R}^d} dy\,a(x-y)
\left( G(\eta\setminus x \cup y )- G(\eta) \right),\quad G\in
B_{exp,ls}(\Gamma_0),$$ and the corresponding time evolution equation $\frac
{\partial}{\partial t}G_t=\hat LG_t$. The transition kernel $\hat{P}_t$ corresponding to $\hat L$ is then given, for $\eta'=\{x_1,\ldots,x_n\}$, by $$\int_{\Gamma_0} \hat{P}_t(\eta',d\eta)\,G(\eta)=
\prod_{i=1}^n \int_{\mathbb{R}^d} dy_i\,
e^{tA}(x_i-y_i)\, G(\{y_1,\ldots,y_n\}).$$ This allows us to extend the explicit formula for transition kernels of $\mathbf{X}$ to the class of all so-called observables of additive type $$\int_\Theta \mathbf{P}_t(\gamma,d\xi)\,(KG)(\xi)=
K\left(\int_{\Gamma_0}
\hat{P}_t(\cdot,d\eta)\,G(\eta)\right)(\gamma).$$
By duality one may extend the dynamical description to correlation functions. For this purpose, on the space $\Gamma_0=\cup_{n=0}^\infty \{\gamma\in\Gamma:\vert\gamma\vert =
n\}$ let us consider the so-called Lebesgue-Poisson measure $\lambda_z:=\sum_{n=0}^\infty \frac{1}{n!} (zm)^{(n)}$, where $m$ denotes the Lebesgue measure and each $(zm)^{(n)}$, $n\in\mathbb{N}$, is the image measure on $\{\gamma\in\Gamma:\vert\gamma\vert = n\}$ of the product measure $z(x_1)dx_1\cdots z(x_n)dx_n$ under the mapping ${(\mathbb{R}^d)^n}\ni(x_1,...,x_n)\mapsto\{x_1,...,x_n\}$. If for some probability measure $\mu$ on $\Gamma$ there is a function $k_\mu$ on $\Gamma_0$ such that the equality $\int_\Gamma\mu(d\gamma)\,(KG)(\gamma)=
\int_{\Gamma_0}\lambda(d\eta)\,G(\eta)k_\mu(\eta)$ holds for all $G\in B_{exp,ls}(\Gamma_0)$ we call $k_\mu$ the correlation function corresponding to $\mu$. Here we abbreviate $\lambda:= \lambda_1$. Denoting by $\hat L^*$ the dual operator of $\hat L$ in the sense $$\int_{\Gamma_0}\lambda_z(d\eta)\,(\hat LG)(\eta) k(\eta)=
\int_{\Gamma_0}\lambda_z(d\eta)\,G(\eta) (\hat L^*k)(\eta),$$ one obtains the following expression $$(\hat L^*k)(\eta)=\sum_{x\in\eta}\int_{\mathbb{R}^d}dy\,a(y-x)
k(\eta\setminus x\cup y)-|\eta| k(\eta)a^{(0)}.$$ The corresponding time evolution equation for correlation functions, $\frac {\partial}{\partial t}k_t=\hat L^*k_t$, is the analogue of the BBGKY-hierarchy for the case of the free Kawasaki dynamics. In our case this equation can be explicitly solved, namely, $$k_t(\{x_1,\cdots,x_n\})=\int_{\mathbb{R}^{dn}}dy_1\cdots dy_n\,
k_\mu(\{y_1,\cdots,y_n\})\prod_{i=1}^ne^{tA}(y_i-x_i)\label{KS}$$ for the initial condition $k_\mu$. Let us note that if one assumes a Ruelle bound for the initial correlation function $k_\mu$, then all the above considerations can be made rigorous. Moreover, similar arguments used in [@KoKtZh06] show that each $k_t$ is actually a correlation measure corresponding to some probability measure $\mu_t$ on $\Gamma$.
According to the previous considerations, the time evolution of the particle system may also be described in terms of Bogoliubov functionals, cf. [@KoKuOl02], through the time evolution equation $$\frac{\partial}{\partial t} B_{\mu,t}(\varphi)=
\int_{\mathbb{R}^d}dx\int_{\mathbb{R}^d}dy\,a(x-y)(\varphi(y)-\varphi(x))
\frac{\delta B_{\mu,t}(\varphi)}{\delta \varphi(x)},\quad
\varphi\in\mathcal{D}(\mathbb{R}^d).$$ Here $\frac{\delta B_{\mu,t}(\varphi)}{\delta \varphi(x)}$ denotes the first variational derivative of $B_{\mu,t}$ at $\varphi$. Actually, $B_{\mu,t}(\varphi)$ is the Bogoliubov functional corresponding to the one-dimensional distribution of the process starting in $\mu$. Hence, the previous equation has an explicit solution given by (\[onebog\]), i.e., $B_{\mu,t}(\varphi)=B_{\mu}(e^{tA}\varphi)$.
Equilibrium dynamics
====================
In this section we are interested in the representation of the generator $L$ and its semigroup in terms of creation, annihilation, and second quantization operators. These representations are possible because there is a well-known canonical unitary isomorphism between the (symmetric) Fock space and the Poisson space. We start by recalling this isomorphism. Our approach is based on [@AKR97] and [@KSSU97], but see also [@KoKuOl00] and references therein.
The $L^{2}(\pi_{z})$ space and the Fock space representation {#SecFock}
------------------------------------------------------------
Let $z$ be a non-negative constant. We consider the complex Hilbert space $L^{2}(\pi_{z}):=L^{2}(\Gamma,\mathcal{B}(\Gamma),\pi_{z})$ of square integrable complex valued functions on $\Gamma$ with respect to the Poisson measure $\pi_{z}$. The coherent states introduced in Remark \[Natal11\] generate the system of so-called Charlier polynomials, namely, $$e_{\pi_{z}}(\varphi,\gamma)=\sum_{n=0}^{\infty}\frac{1}{n!}\langle
C_{z}^{n}(\gamma),\varphi^{\otimes n}\rangle,\qquad
C_{z}^{n}(\gamma)\in(\mathcal{D}')^{\hat{\otimes}n},$$ where $(\mathcal{D}')^{\hat{\otimes}n}$ is the $n$-th symmetric tensor product of the Schwartz distributions space $\mathcal{D}'(\mathbb{R}^d)$. This system is orthogonal and any $F\in L^{2}(\pi_{z})$ can be expanded in terms of Charlier polynomials $$F(\gamma)=\sum_{n=0}^{\infty}\langle
C_{z}^{n}(\gamma),f^{(n)}\rangle,\qquad f^{(n)}\in
L^{2}(zdx)^{\hat{\otimes}n}\label{mar06-eq6}.$$
This yields a unitary isomorphism $I_{\pi_z}$ between $L^{2}(\pi_{z})$ and the so-called symmetric Fock space $$\mathcal{F}(L^{2}(zdx)):=\bigoplus_{n=0}^{\infty}n!L^{2}(zdx)^{\hat{\otimes}n},\qquad
L^{2}(zdx)^{\hat{\otimes}0}:=\mathbb{C}.$$ More precisely, for each $F\in L^{2}(\pi_{z})$ of the form (\[mar06-eq6\]) one has $I_{\pi_z}(F)=(f^{(n)})_{n=0}^{\infty}$. Next we recall the definition of annihilation, creation and second quantization operators on the total subset of Fock space vectors of the form $ f^{(n)}=f_{1}\hat{\otimes}\ldots\hat{\otimes}f_{n}$. The action of the annihilation operator $a^{-}(h)$ of a $h \in L^{2}(zdx)$ on $f^{(n)}$ is given by $$a^{-}(h)f^{(n)}:=\sum_{j=1}^{n}(h,f_{j})f_{1}
\hat{\otimes}\ldots\hat{\otimes}f_{j-1}\hat{\otimes}f_{j+1}\hat{\otimes}\ldots\hat{\otimes}f_{n}\in
L^{2}(zdx)^{\hat{\otimes}(n-1)}.\label{mar06-eq10}$$ The adjoint operator of $a^{-}(h)$, called creation operator and denoted by $a^{+}(h)$, acts on elements $f^{(n)}\in
L^{2}(zdx)^{\hat{\otimes}n}$ by $
a^{+}(h)f^{(n)}=h\hat{\otimes}f^{(n)}$.
Given a contraction semigroup $(e^{tA})_{t\geq0}$ on $L^{2}(zdx)$ one can construct a contraction semigroup $(\mathrm{Exp}(e^{tA}))_{t\geq0}$ on $\mathcal{F}(L^{2}(zdx))$ defined by $e^{tA}\otimes\cdots\otimes e^{tA}$ on each space $L^{2}(zdx)^{\hat{\otimes}n}$. Its generator is the so-called second quantization operator $d\mathrm{Exp}A$ corresponding to $A$. Hence the image of the Fock coherent state $e(f):=(f^{\otimes
n}/n!)_{n=0}^\infty$, $f\in L^{2}(zdx)$, under $\mathrm{Exp}(e^{tA})$ is given by $$\mathrm{Exp}(e^{tA})(e(f))=e(e^{tA}f).\label{KF7}$$ Through the unitary isomorphism $I_{\pi_z}$ we obtain a contraction semigroup $(\mathrm{Exp}_{\pi_{z}}(e^{tA}))_{t\geq0}$ on $L^{2}(\pi_{z})$. In particular, since $I_{\pi_z}^{-1}e(f)=e_{\pi_{z}}(f)$, it follows from (\[JL\]) that $$\mathrm{Exp}_{\pi_{z}}\left(e^{tA}\right)e_{B}(f)=e_{B}(e^{tA}f).\label{KF32}$$ In our case, for non-constant functions $z$ the semigroup $e^{tA}$ is in general not a contraction, see Remark \[remnoncons\], and the previous construction indeed fails, cf. Remark \[remnosemi\].
Finally, we would like to present the “annihilation and creation operators” in a form more common in the physics literature. In this heuristic way one can also treat non-constant intensities. For each $x\in\mathbb{R}^{d}$ we define an operator $a^-(x)$ acting on $\vec{f}=(f^{(n)})_n$ by $$(a^{-}(x)\vec{f})^{(n)}(y_{1},\ldots,y_{n})=\sqrt{n+1}f^{(n+1)}(x,y_{1},\ldots,y_{n}).$$ The adjoint of the operator $a^{-}(x)$ is formally given by $$(a^{+}(x)\vec{f})^{(n)}(y_{1},\ldots,y_{n})=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}\delta(x-y_{k})f^{(n-1)}(y_{1},\dots,\hat{y}_{k},\ldots,y_{n}),$$ where $\hat{y}_{k}$ means that the $k$-th coordinate is excluded. Actually, the expression for $a^{+}(x)$ is well-defined as a quadratic form. It is easy to check the relations $a^{\pm}(f) = \int_{\mathbb{R}^{d}}a^{\pm}(x)f(x)dx$ in the sense of quadratic forms.
The operator $L$ can be expressed in terms of creation and annihilation operators, namely, $$\label{eq4.1}
L = d\mathrm{Exp}_{\pi_{z}}( A ) + a^-_{\pi_{z}}(A_{z}^*1),$$ where $A^*_z$ is the adjoint operator of $A$ with respect to the scalar product on $L^2(\mathbb{R}^d,z(x)dx)$, $d\mathrm{Exp}_{\pi_{z}}( A
)$ and $a^-_{\pi_{z}}$ are, respectively, the images of $d\mathrm{Exp}(A)$ and $a^{-}$ under $I_{\pi_z}$. For more details see e.g. [@KoKuOl00]. Note that the expression of the annihilation operator $a^-_{\pi_{z}}$ in $L^{2}(\pi_{z})$ depends on the intensity of the underlying Poisson measure. Rewritten in a style closer to the common usage in physics, (\[eq4.1\]) takes the form $$\int dx\,z(x)\int dy \left(a(x-y)-\langle a\rangle\, \delta(x-y)\right) \left(
a^+_{\pi}(x)a^-_{\pi}(y) - a^-_{\pi}(y) \right),$$ where $\langle a\rangle:=\int_{\mathbb{R}^d}dx\,a(x)$ and $a_\pi^\pm(x)$ are the images of $a^\pm(x)$ under $I_{\pi_z}$.
The symmetric case {#Aug08-eq2}
------------------
If $a$ is an even function, the situation simplifies considerably (see Lemma \[LemRev\]) and yields an alternative construction to the one presented in Section \[Section2\]. As a matter of fact, for $a$ an even function and for each constant $z>0$, the operator $L$ defined in (\[DefKawaOp\]) gives rise to a Dirichlet form on $L^2(\Gamma,\pi_z)$, $$\int_\Gamma \pi_z(d\gamma)(LF)(\gamma) F(\gamma)
\!=\! -\frac{1}{2}\!\int_\Gamma\!\pi_z(d\gamma)\!\sum_{x \in \gamma}\!
\int_{\mathbb{R}^d}\!\!dy\, a(x-y)
\left|F(\gamma\!\setminus\!x \cup y)-F(\gamma)\right|^2.$$ This allows the use of Dirichlet form techniques to derive a Markov process on $\Gamma$ with *càdlàg* paths having $\pi_z$ as an invariant measure [@KLRII05]. Actually, one can show that in this situation $L$ is the second quantization operator corresponding to the non-positive self-adjoint operator $A$ on $L^2(\mathbb{R}^d,zdx)$ (Remark \[KFRem1\]). Hence $L$ is a negative essentially self-adjoint operator on $L^2(\Gamma,\pi_z)$, and it is the generator of a contraction semigroup on $L^2(\Gamma,\pi_z)$.
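On the one-particle level (the $n=1$ sector of the second quantization), the quadratic form identity above reduces to the familiar statement that a jump generator with even kernel satisfies $\sum_x (AF)(x)F(x) = -\frac12\sum_{x,y}a(x-y)\,|F(y)-F(x)|^2$. The following minimal Python sketch verifies this on a discrete circle; the kernel and test function are hypothetical choices made only for illustration.

```python
M = 12   # discrete circle Z_M as a one-particle toy model

def a(k):
    # even jump kernel, a(-k) = a(k) (a hypothetical choice)
    d = min(k % M, -k % M)
    return {1: 1.0, 2: 0.5}.get(d, 0.0)

F = [((3 * x) % 7) * 0.5 - 1.0 for x in range(M)]   # arbitrary test function

# one-particle generator (AF)(x) = sum_y a(x-y) (F(y) - F(x))
AF = [sum(a(x - y) * (F[y] - F[x]) for y in range(M)) for x in range(M)]

# Dirichlet form identity: <AF, F> = -1/2 sum_{x,y} a(x-y) |F(y)-F(x)|^2
lhs = sum(AF[x] * F[x] for x in range(M))
rhs = -0.5 * sum(a(x - y) * (F[y] - F[x]) ** 2
                 for x in range(M) for y in range(M))
assert abs(lhs - rhs) < 1e-9
assert lhs < 0.0   # the generator is negative on non-constant functions
```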
Second quantization operators
-----------------------------
According to (\[KF6b\]) and (\[JL\]), the action of the transition kernel on coherent states corresponding to $\pi_{z}$ is given by $$\label{JL2} \mathbf{E}_\gamma\left[e_{\pi_z}(\varphi,\mathbf{X}_t)
\right] = e_{\pi_{z}}(e^{tA}\varphi,\gamma)
\exp\left({\int_{\mathbb{R}^d}dx\, \left(e^{tA}\varphi(x)
-\varphi(x)\right)}z\right).$$ This allows us to express the action of the transition probability on coherent states in terms of annihilation and creation operators, $$\label{SemiAnn} \mathrm{Exp}_{\pi_z}(e^{tA})e^{a^-_{\pi_{z}}(A^*_{z}1)}.$$ Observe that the right-hand side of (\[JL2\]) gives the action of a semigroup which preserves coherent states. Lemma \[LemClos\] shows that $L$ is the generator of a strongly continuous Markov semigroup.
\[LemClos\] Let $\mathcal{D}_\mathrm{coh}$ be the vector space spanned by all functions $e_B(\varphi)$ with $\varphi \in L^1(\mathbb{R}^d,zdx)\cap
L^2(\mathbb{R}^d,zdx)$. The operator $L$ restricted to $\mathcal{D}_\mathrm{coh}$ is closable in $L^2(\Gamma,\pi_z)$ and its closure is an extension of the operator $(L,\mathcal{F}L^0_\mathrm{e b}(\Gamma_a))$ defined in Section \[SecDefKa\]. Moreover, it is the generator of a Markov semigroup and $ e^{tL}=\mathrm{Exp}_{\pi_z}( e^{tA})$.
The proof is an adaptation, to a non-symmetric generator, of the proof of Proposition 4.1 in [@AKR97]. Technically, the proof is simpler in our case, because $A$ is a bounded operator and we consider functions of the type $e_B(\varphi)$.
According to the bound (\[eq2.1\]), one can calculate the action of the adjoint of $(L,\mathcal{F}L^0_\mathrm{e b}(\Gamma_a))$ by the Mecke formula, i.e., $$(L^*F)(\gamma) := \sum_{x \in \gamma} \int_{\mathbb{R}^d} dy\, \left( a(y-x)
F(\gamma \setminus x \cup y)- a(x-y)F(\gamma) \right), \quad \forall F\in
\mathcal{F}L^0_\mathrm{e b}(\Gamma_a).$$ Hence $L^*$ is densely defined and $L$ is closable on any dense subset of $\mathcal{F}L^0_\mathrm{e b}(\Gamma_a)$. Using again the bound (\[eq2.1\]) one sees that the closures of $(L,\mathcal{D}_\mathrm{coh})$ and $(L,\mathcal{F}L^0_\mathrm{e b}(\Gamma_a))$ coincide. As $A$ is a bounded operator in $L^1(\mathbb{R}^d,zdx)$ and in $L^2(\mathbb{R}^d,zdx)$, the space $\mathcal{D}(\mathbb{R}^d)$ is a core of $A$ in $L^2$ as well as in $L^1$. Define a semigroup on $L^2(\Gamma, \pi_z)$ by extending by continuity the following explicit formula on $\mathcal{D}_\mathrm{coh}$: $$T_t e_B(\varphi) := e_B(e^{tA}\varphi).$$ By construction, $\mathcal{D}_\mathrm{coh}$ is a core for the generator of $T_t$. A direct computation shows that $\varphi \mapsto e_B(\varphi)$ is a differentiable function from $L^1(\mathbb{R}^d,zdx)\cap L^2(\mathbb{R}^d,zdx)$ into $L^2(\Gamma,\pi_z)$ with derivative given by $$\varphi \mapsto \sum_{x \in \gamma} \varphi(x) e_B(\varphi, \gamma
\setminus x).$$ Summarizing, this yields that $T_t$ is a strongly continuous contraction semigroup on $L^2(\Gamma,\pi_z)$ such that for all $\varphi \in L^1(\mathbb{R}^d,zdx)\cap L^2(\mathbb{R}^d,zdx)$ one has $$\frac{d}{dt}T_t e_B(\varphi)(\gamma) = \sum_{x \in \gamma}
Ae^{tA}\varphi(x) e_B(e^{tA}\varphi,\gamma \setminus x).$$ For $\varphi \in \mathcal{D}(\mathbb{R}^d)$ the r.h.s. is just $Le_B(e^{tA}\varphi)$. Since $A$ is bounded, this extends to all $\varphi \in L^1(\mathbb{R}^d,zdx)\cap L^2(\mathbb{R}^d,zdx)$. These two facts combined yield $e^{tL}=T_t$ and that $\mathcal{D}_\mathrm{coh}$ is a core of $L$. Moreover, the operator $L$ coincides with the second quantization operator $d\mathrm{Exp}_{\pi_z}(A)$ on $\mathcal{D}_\mathrm{coh}$. This space is also a core for $d\mathrm{Exp}_{\pi_z}(A)$, cf. [@BeKo88]. Hence the two operators coincide.
Local equilibrium dynamics {#led}
==========================
The previous considerations were mainly for Poisson measures $\pi_z$ whose activity parameter is a constant function $z$. According to Lemma \[LemRev\] and Section \[SecDefKa\], such measures are reversible measures for the free Kawasaki process. As a first step towards other initial distributions, in this section we consider the so-called local equilibrium case, that is, Poisson measures with non-constant activity parameter. In this case, if the activity $z$ is a slowly varying function, then in a small volume the Poisson measure effectively has a constant activity. This property justifies the name of this section.
\[remnosemi\] Although for bounded non-constant functions $z$ the process starting in $\pi_z$ can be constructed, cf. Subsection \[Subsection2.1\], one cannot expect that the expression given in (\[onebog\]) can be extended to a semigroup either in the $L^1(\Gamma_a,\pi_{z})$ or in the $L^2(\Gamma_a,\pi_{z})$ sense. The existence of the semigroup in an $L^p$-sense can only be expected w.r.t. an invariant measure. This can be seen from the following equality $$\frac{\| e^{tL} e_B(f) \|_{L^p(\pi_{z})}}{\| e_B(f)
\|_{L^p(\pi_{z})}} = \exp\left(\frac{1}{p} \int_{\mathbb{R}^d} \Big(
| 1+e^{tA}f(x)|^p - | 1+f(x)|^p \Big) z(x)dx\right),$$ and the fact that $$\| e^{tL} \|_{\textrm{Op},L^p(\pi_{z})} \geq \sup_{f \in
\mathcal{D}(\mathbb{R}^d), f\geq 0,} \ \exp\left(\frac{1}{p}
\int_{\mathbb{R}^d} \Big( (1+e^{tA}f(x))^p - ( 1+f(x))^p \Big)
z(x)dx\right),$$ where the r.h.s. is infinite if $(e^{tA})_{t\geq 0}$ is not a contraction semigroup (see Remark \[remnoncons\]).
Let $z\geq 0$ be a bounded measurable function. According to (\[KF31\]), for the one-dimensional distribution of the free Kawasaki process $(\mathbf{X}_t)_{t\geq 0}$ with initial distribution $\pi_{z}$ one finds $$\label{eq6.9} \int_\Theta e_B(\varphi,\gamma)
P^{\mathbf{X}}_{\pi_z,t}(d\gamma) = \exp\left(\int_{\mathbb{R}^d}dx\
e^{tA}\varphi(x) z(x)\right),$$ for every $\varphi\in\mathcal{D}(\mathbb{R}^d)$ and every $t\geq 0$.
For each $t\geq 0$ fixed, let us now consider the linear functional defined on $L^1(\mathbb{R}^d,dx)$ by $$L^1(\mathbb{R}^d,dx)\ni f\mapsto
\int_{\mathbb{R}^d}dx\,(e^{tA}f)(x)z(x).$$ Due to the contractivity property of the semigroup $e^{tA}$ in $L^1(\mathbb{R}^d,dx)$, see Proposition \[PropSemiA\], and to the boundedness of $z$, this functional is bounded on $L^1(\mathbb{R}^d,dx)$, and thus it is defined by a kernel $z_t\in
L^\infty(\mathbb{R}^d,dx)$, that is, $$\int_{\mathbb{R}^d}dx\, e^{tA}f(x)z(x)=
\int_{\mathbb{R}^d}dx\,f(x)z_t(x), \label{Natal15}$$ for all $f\in L^1(\mathbb{R}^d,dx)$. Moreover, since $e^{tA}$ is positivity preserving in $L^1(\mathbb{R}^d,dx)$, it follows from (\[Natal15\]) that $z_t\geq 0$.
By (\[eq6.9\]), this shows that the one-dimensional distribution $P^{\mathbf{X}}_{\pi_z,t}$ is just the Poisson measure $\pi_{z_t}$ (see also [@Do56], [@D53]).
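The evolution $z\mapsto z_t$ can be made concrete on a torus analogue of $\mathbb{R}^d$. The Python sketch below integrates $\frac{d}{dt}z_t=A^*z_t$ on a discrete circle by forward Euler; the jump rates (deliberately non-symmetric) and the step-like initial intensity are hypothetical choices made for illustration. It exhibits three features consistent with the discussion above: $z_t$ stays non-negative, the total intensity is conserved, and on the compact circle $z_t$ relaxes to the mean of the initial intensity.

```python
M = 16   # discrete circle as a torus analogue of R^d

def a(k):
    # jump rates (hypothetical, deliberately non-symmetric)
    return {1: 1.0, 2: 0.5, M - 1: 0.25}.get(k % M, 0.0)

total_a = sum(a(k) for k in range(M))
z0 = [1.8 if x < M // 2 else 0.4 for x in range(M)]   # step-like initial intensity

def step(zt, dt):
    # one Euler step of dz_t/dt = A* z_t, where
    # (A* z)(x) = sum_y a(y - x) z(y) - (sum a) z(x)
    return [zt[x] + dt * (sum(a(y - x) * zt[y] for y in range(M))
                          - total_a * zt[x])
            for x in range(M)]

zt = z0[:]
dt = 0.05
for _ in range(2000):          # integrate up to time t = 100
    zt = step(zt, dt)

mean_z = sum(z0) / M
assert abs(sum(zt) - sum(z0)) < 1e-8             # total intensity is conserved
assert all(v > 0.0 for v in zt)                  # z_t stays non-negative
assert max(abs(v - mean_z) for v in zt) < 1e-3   # relaxation to the mean intensity
```

Note that the Euler step conserves the total intensity exactly, since the gain and loss terms cancel after summation over the circle.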
Furthermore, the path-space measure $\mathbf{P}^{\pi_z}$ corresponding to the process is also Poissonian. It is a Poisson measure on $\Gamma_{D([0,\infty),\mathbb{R}^d)}$ with intensity $P^{z}$, where $P^{z}$ is the path-space measure on $D([0,\infty),\mathbb{R}^d)$ of the one-particle jump process corresponding to $A$ with initial distribution $z(x)dx$. Using the fact that the path-space measure $\mathbf{P}_\gamma$ is supported on $D([0,\infty),\Theta)$ one easily sees that this implies that $\mathbf{P}^{\pi_z}$ is actually supported on a subset of $\Gamma_{D([0,\infty),\mathbb{R}^d)}$ which can be naturally identified with $D([0,\infty),\Theta)$.
The path-space measure $\mathbf{P}^{\pi_z}$ of the process $\mathbf{X}$ starting with initial distribution $\pi_{z}$ is the Poisson measure $\pi_{P^{z}}$ on $\Gamma_{D([0,\infty),\mathbb{R}^d)}$. In particular, for any continuous function $\varphi$ on $[0,\infty) \times \mathbb{R}^d$ with compact support we have $$\int_{\Gamma_{D([0,\infty),\mathbb{R}^d)}}\mathbf{P}^{\pi_z}(d\mathbf{\omega}) e^{-\int_0^\infty \langle
\varphi(t,\cdot), \mathbf{\omega}(t) \rangle dt}
= \exp \left(
\int_{D([0,\infty),\mathbb{R}^d)} \left[ e^{-\int_0^\infty
\varphi(t,\omega(t)) dt}-1 \right] P^{z}(d\omega) \right).$$
By the definition of the path-space measure corresponding to the initial measure $\pi_z$ one has $$\int_{\Gamma_{D([0,\infty),\mathbb{R}^d)}} \mathbf{P}^{\pi_z}(d\mathbf{\omega}) e^{-\int_0^\infty \langle
\varphi(t,\cdot), \mathbf{\omega}(t) \rangle dt}
= \int_\Theta \mathbf{E}_\gamma \left[
e^{-\int_0^\infty \langle \varphi(t,\cdot), \mathbf{X}_t \rangle dt}
\right] \pi_z(d\gamma).$$ Due to (\[eqpath\]) the latter expectation is equal to $$\int_\Theta e_B(E_{\,\cdot}\left[ e^{-\int_0^\infty
\varphi(t,X_t) dt}\right]-1,\gamma) \pi_z(d\gamma) = \exp
\left(\int_{\mathbb{R}^d} E_x\left[ e^{-\int_0^\infty \varphi(t,X_t)
dt}-1\right] z(x)dx \right),$$ which yields the required result.
Large time asymptotic \[subsecLargetime\]
-----------------------------------------
In this subsection we want to study the behavior of the one-dimensional distribution of the free Kawasaki dynamics for large times. As mentioned above (\[Natal15\]), the free Kawasaki dynamics leaves the Poissonian structure of the initial distribution unchanged; only the underlying intensity evolves with time. Thus, the analysis of the large time asymptotic behavior reduces to the large time asymptotic of the intensity, cf. Lemma \[lemasympred\]. In other words, we have reduced the problem to a one-particle problem in which the intensity plays the role of the initial distribution. In the following remark we discuss which initial distributions are natural in this context.
As we work in a framework where the system scale is much larger than the time scale, namely, we first perform the thermodynamic limit and only afterwards consider the time asymptotic, the situation is more subtle than it might seem at first glance. The question is which class of initial distributions is the natural one. Usually, in the study of a one-particle system one assumes that the initial data is integrable. In this case the dynamics is ergodic and the probability measure concentrated on the empty configuration is the invariant measure. Physically, this situation describes systems with zero density and perturbations thereof. These perturbations are already singular in the sense that the corresponding Poisson measures are mutually singular. From a physical point of view, non-zero densities are interesting. The corresponding invariant measures are the Poisson measures with constant intensities $z>0$. However, these intensities are no longer integrable perturbations of the zero intensity. Furthermore, their mutual differences are not integrable either. Hence a natural setting for the initial intensities seems to be the bounded non-negative measurable functions.
One says that a function $z\in L^1_\mathrm{loc}(\mathbb{R}^d,dx)$ has arithmetic mean, denoted by $\mathrm{mean}(z)$, whenever the following limit exists $$\label{mean} \lim_{R\to +\infty}
\frac{1}{\mathrm{vol}(B(R))}\int_{B(R)} dx\,z(x).$$
Note that not every bounded function has arithmetic mean, cf. Remark \[Aug08-eq1\].
The following proposition states that the large time asymptotic of the one particle distribution is still Poissonian with intensity given by the arithmetic mean of the initial intensity.
\[PrAsya\] Let $z\geq 0$ be a bounded measurable function whose Fourier transform is a signed measure. Then $z$ has arithmetic mean and the one-dimensional distribution $P^{\mathbf{X}}_{\pi_z,t}$ converges weakly to $\pi_{\mathrm{mean}(z)}$ when $t$ goes to infinity.
This result is proved combining Lemma \[lemasympred\] and Corollary \[Colmeana\].
Note that, in particular, one sees that the process is not ergodic, because the large time asymptotic depends on the initial distribution. The situation of integrable intensities is technically comparable with the case in which the time scale is much larger than the space scale, e.g., instead of $\mathbb{R}^d$ one works on a torus. Under the assumptions of Proposition \[PrAsya\], the activity $z$ has arithmetic mean, cf. Corollary \[Colmeana\]. Then, using Fourier transform, one can easily derive the large time asymptotic, cf. Lemma \[LemasyA\]. The same technique also yields that the large time asymptotic only depends on the space asymptotic of the intensity $z$, cf. Corollary \[Corasy\].
In Proposition \[PrAsya\], the assumption concerning the Fourier transform is admittedly not elegant, because it cannot be reasonably restated in terms of $z$ in the position variables. Therefore, we give several examples below, cf. Example \[Exmean\], and we derive certain properties of the arithmetic mean. Some types of reasonable asymptotic behavior do not fulfill the aforementioned Fourier transform assumption, e.g. Example \[Exmean\](v).
\[remnoari\] If two functions $z_1,z_2 \in
L^1_\mathrm{loc}(\mathbb{R}^d,dx)$ have arithmetic mean, then for every $\alpha_1,\alpha_2 \in \mathbb{R}$ the function $\alpha_1 z_1
+ \alpha_2 z_2$ also has arithmetic mean and $\mathrm{mean}(\alpha_1
z_1+\alpha_2 z_2)
=\alpha_1\mathrm{mean}(z_1)+\alpha_2\mathrm{mean}(z_2)$.
\[Exmean\] To be more concrete we give some examples:
1. If $z$ is a constant function then $\mathrm{mean}(z) =z$.
2. If $z$ decays to zero, i.e., for every $\varepsilon >0 $ there exists an $R>0$ such that $|z(x)| \leq \varepsilon$ for $x \notin B(R)$, then $\mathrm{mean}(z)=0$.
3. If $z \in L^p(\mathbb{R}^d,dx)$, $p \in [1,\infty)$, then $\mathrm{mean}(z)=0$.
4. Also trigonometric functions have no influence, e.g., for $d=1$ and $z(x)=1+\varepsilon\sin(x)$ we have $\mathrm{mean}(z)=1$.
5. A less trivial example is the following one. Given $z_0, z_1\geq
0$, let $z$ be the function defined at each $x=(x_1,\ldots,x_d)\in\mathbb{R}^d$ by $$z(x) = \left\{ \begin{array}{ll}
z_1 & {\mbox{if}}\ x_1\geq 0 \\
z_0 & {\mbox{otherwise}}
\end{array} \right. .$$ In this case one finds $\mathrm{mean}(z)=\frac{z_0+z_1}{2}$. Note that in this case $\hat{z}$ is not a signed measure.
\[Aug08-eq1\] The arithmetic mean does not exist for all bounded non-negative functions.
1. For $d=1$ and $$z(x) = \left\{ \begin{array}{ll}
1 & {\mbox{if}}\ 2^{2k} \leq |x| \leq 2^{2k+1} \ {\mbox{for a}} \ k \in \mathbb{N}_0 \\
0 & {\mbox{otherwise}}
\end{array} \right.,$$ the average in (\[mean\]) oscillates between $1/3$ and $2/3$. Thus $z$ does not have arithmetic mean.
2. Another example is given by the following slowly oscillating function $$z(x)=\cos(\ln(1+|x|))+z_0,\quad x\in\mathbb{R}^d,$$ where $z_0$ is a constant greater or equal to $1$. Then for large $R$ it holds $$\frac{1}{\mathrm{vol}(B(R))}\int_{B(R)} z(x)\,dx \sim
\frac{d}{\sqrt{1+d^2}}\ \sin\!\Big(\ln(R+1)+\arctan(d)\Big)+z_0.$$ In general, slowly varying functions may show such spurious behavior.
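Both phenomena are easy to check numerically. The Python sketch below (illustrative parameter choices only; the average over $B(R)$ is computed on the half-line, which suffices by symmetry) verifies that for Example \[Exmean\](iv) the averages tend to $1$, while for the dyadic block function of Remark \[Aug08-eq1\](1) the averages along the subsequences $R=2^{2K}$ and $R=2^{2K+1}$ approach $1/3$ and $2/3$, respectively.

```python
import math

# Example (iv): z(x) = 1 + eps*sin(x); the oscillation averages out
eps = 0.5
def avg_sin(R, n=200_000):
    # midpoint-rule average of z over [0, R]
    h = R / n
    return sum(1.0 + eps * math.sin((i + 0.5) * h) for i in range(n)) / n

assert abs(avg_sin(10_000.0) - 1.0) < 1e-3

# Remark (1): z = 1 on the blocks [2^{2k}, 2^{2k+1}], 0 otherwise
def covered(R):
    # Lebesgue measure of {0 <= x <= R : z(x) = 1} (half of B(R), by symmetry)
    total, k = 0.0, 0
    while 4**k <= R:
        total += min(R, 2 * 4**k) - 4**k
        k += 1
    return total

K = 10
assert abs(covered(4**K) / 4**K - 1/3) < 1e-2             # along R = 2^{2K}
assert abs(covered(2 * 4**K) / (2 * 4**K) - 2/3) < 1e-2   # along R = 2^{2K+1}
```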
First, we prove that one may reduce the proof of Proposition \[PrAsya\] to a one-particle system. This follows from the independent movement of the particles discussed in Section \[Section2\].
\[lemasympred\] Let $0 \leq z\in
L^1_\mathrm{loc}(\mathbb{R}^d,dx)$ be such that $$\lim_{t\to +\infty}\int_{\mathbb{R}^d} dx\,e^{tA}\varphi(x)\ z(x)=:
\int_{\mathbb{R}^d} z_\infty(dx)\varphi(x)$$ exists for all $\varphi \in \mathcal{D}(\mathbb{R}^d)$. If $z_\infty$ is a (non-negative) Radon measure on $\mathbb{R}^d$, then the one-dimensional distribution $P^{\mathbf{X}}_{\pi_{z},t}$ converges weakly to $\pi_{z_\infty}$ when $t$ tends to infinity.
According to (\[eq6.9\]), for all $\varphi\in\mathcal{D}(\mathbb{R}^d)$ we have $$\int_\Theta
P^{\mathbf{X}}_{\pi_{z},t}(d\gamma)\,e^{\langle\varphi,\gamma\rangle}
=\exp\left(\int_{\mathbb{R}^d}dx\, e^{tA}(e^\varphi-1)(x) \
z(x)\right),$$ with $e^\varphi-1\in\mathcal{D}(\mathbb{R}^d)$ too. By assumption, the latter converges when $t \rightarrow \infty$ to $$\exp\left( \int_{\mathbb{R}^d}z_\infty(dx)\, (e^{\varphi(x)}-1)
\right).$$ By the definition (\[A7\]) of a Poisson measure, this is the Laplace transform of $\pi_{z_\infty}$. The convergence of Laplace transforms implies weak convergence, see [@Ka76 Theorem 4.2].
It remains to derive the large time asymptotic for the one particle system.
\[LemasyA\] Let $z\geq 0$ be a bounded measurable function such that its Fourier transform $\hat{z}$ is a signed measure. For all $\varphi\in\mathcal{S}(\mathbb{R}^d)$ one has $$\label{asyA} \lim_{t\to +\infty}\int_{\mathbb{R}^d}
dx\,e^{tA}\varphi(x)\ z(x) = (2\pi)^{-d/2}\
\hat{z}(\{0\})\int_{\mathbb{R}^d} dx\,\varphi(x).$$
Through the Parseval formula and the explicit formula (\[Natal5\]) for the semigroup $(e^{tA})_{t\geq 0}$ one finds $$\int_{\mathbb{R}^d} dx\, e^{tA}\varphi(x) z(x) =\int_{\mathbb{R}^d}
\hat z(dk)\,e^{t(2\pi)^{d/2}(\hat a (-k)-\hat a(0))}
\hat\varphi(-k).$$ Since for all $k\in\mathbb{R}^d$, $\mathrm{Re}(\hat a(k)-\hat a(0))
\leq 0$ (cf. Remark \[Natal6\]), for every $t\geq 0$ we have $$\left|e^{t(2\pi)^{d/2}(\hat a (-k)-\hat
a(0))}\hat\varphi(-k)\right|\leq \vert\hat\varphi(-k)\vert,\quad
\forall\,k\in\mathbb{R}^d,$$ where $\hat\varphi\in L^1(\mathbb{R}^d,\hat z)$. Thus, an application of the Lebesgue dominated convergence theorem yields $$\lim_{t\to \infty}\int_{\mathbb{R}^d} dx\,e^{tA}\varphi(x) z(x)
=\int_{\mathbb{R}^d} \hat z(dk)\,\hat\varphi(-k)\lim_{t\to
\infty}e^{t(2\pi)^{d/2}(\hat a (-k)-\hat a(0))},$$ where $$\lim_{t\to \infty}e^{t(2\pi)^{d/2}(\hat a (-k)-\hat a(0))} =\left\{
\begin{array}{ll}
1, & k=0 \\
0, & k\not= 0
\end{array}
\right. .$$
The next lemma shows that the existence of the arithmetic mean is stable under $L^1$-convergence. In particular, it yields that the condition assumed in Proposition \[PrAsya\] on the Fourier transform of the intensity is sufficient to ensure the existence of the arithmetic mean.
\[Lemmeana\] Let $z\geq 0$ be a bounded measurable function. Assume that there exist a total subset of $L^1(\mathbb{R}^d,dx)$ and a constant $C>0$ such that for all $\varphi$ in that total subset we have $$\label{eqmean} \lim_{R \rightarrow \infty}R^{-d}\int_{\mathbb{R}^d}
dx\,\varphi(x/R) z(x) = C \int_{\mathbb{R}^d} \varphi(x) dx.$$ Then $z$ has arithmetic mean. In addition, equality (\[eqmean\]) holds with $C=\mathrm{mean}(z)$ for any $\varphi \in L^1(\mathbb{R}^d,dx)$.
Let $\varphi$ be an arbitrary function in $L^1(\mathbb{R}^d,dx)$. Then for any $\varepsilon >0$ there exists a finite linear combination $\psi$ of elements of the aforementioned total set such that $\|\varphi -\psi\|_{L^1} \leq \varepsilon$; by linearity, $\psi$ also fulfills (\[eqmean\]). By the following estimate, (\[eqmean\]) then also holds for $\varphi$: $$\begin{aligned}
\lefteqn{ \left| R^{-d}\int_{\mathbb{R}^d} dx\,\varphi(x/R) z(x) -
C \int_{\mathbb{R}^d}dx\varphi(x) \right| }\\
&\leq& \left| \int_{\mathbb{R}^d} dx\, \left(\varphi(x) -
\psi(x)\right) z(xR) \right| + | C |\
\left|\int_{\mathbb{R}^d}dx (\psi(x) - \varphi(x)) \right| \\
&&+ \left| R^{-d}\int_{\mathbb{R}^d}
dx\,\psi(x/R) z(x) - C \int_{\mathbb{R}^d}dx \psi(x) \right| \\
&\leq& \Big( \|z\|_u + | C | \Big) \ \|\varphi -\psi\|_{L^1} +
\left| R^{-d}\int_{\mathbb{R}^d} dx\,\psi(x/R) z(x) - C
\int_{\mathbb{R}^d}dx \psi(x) \right|.\end{aligned}$$ Hence equality (\[eqmean\]) holds for any $\varphi \in L^1(\mathbb{R}^d,dx)$. Applying the result obtained so far to $$\label{eqind} \varphi_r(x) := \left\{ \begin{array}{ll}
\frac{1}{\mathrm{vol}(B(r))}, & x \in B(r) \\
0, & {\mbox{otherwise}}
\end{array}\right. \qquad r>0$$ yields that $z$ has arithmetic mean and that it coincides with the constant $C$.
\[Colmeana\] Given a bounded measurable function $z\geq 0$ the following two results hold:
1. If $z$ has arithmetic mean, then the limit in (\[eqmean\]) exists for all $\varphi
\in L^1(\mathbb{R}^d,dx)$ and $C=\mathrm{mean}(z)$.
2. If the Fourier transform of $z$ is a signed measure, then $z$ has arithmetic mean and $$\mathrm{mean}(z)= (2\pi)^{-d/2}\ \hat{z}(\{0\}).$$
The first part is a direct consequence of Lemma \[Lemmeana\] using the total set of all $\varphi_r$, defined as in (\[eqind\]), and their translates.
If $\hat{z}$ is a signed measure, then due to Parseval’s formula for all $\varphi\in\mathcal{S}(\mathbb{R}^d)$ it holds $$R^{-d}\int_{\mathbb{R}^d} dx\,\varphi(x/R) z(x) =
\int_{\mathbb{R}^d} \hat z(dk)\hat\varphi(-kR) \rightarrow
\hat{z}(\{0\}) \hat{\varphi}(0),\quad R\rightarrow \infty.$$ The limit exists because $\hat{\varphi}$ is continuous and decays faster than any inverse polynomial. Hence, when $R$ goes to $\infty$, $\hat{\varphi}(-kR)$ converges pointwise to $\hat{\varphi}(0)$ times the characteristic function of the set $\{0\}$. The family $(\hat{\varphi}(-kR))_{R\geq 0}$ is dominated by an integrable function. The second part then follows by Lebesgue's dominated convergence theorem and an application of Lemma \[Lemmeana\] to the total set $\mathcal{S}(\mathbb{R}^d)$.
Let us underline the difference between $\hat{z}(\{0\})$ and the evaluation of $\hat{z}$ as a function at zero. We explain this for the case when $z$ is a bounded $L^1$-function. The Fourier transform of $z$ is then a continuous function, which we denote for the moment by $\tilde{z}$; evaluation at zero makes sense in this case. However, we want to consider the Fourier transform as a generalized function, i.e., as a linear form on a function space, and we assume that this linear form is regular enough to be expressed by a signed measure. If $z$ is an $L^1$-function, then its Fourier transform is the measure $\hat{z}(dk)=\tilde{z}(k)dk$. Hence $\hat{z}(\{0\})=0$, which does not necessarily coincide with $\tilde{z}(0)=(2\pi)^{-d/2}\int_{\mathbb{R}^d}dx\, z(x)$.
Below we list the Fourier transforms (interpreted as generalized functions, respectively signed measures) of the examples given in Example \[Exmean\], provided they exist. The four items below correspond to Examples \[Exmean\](i), (iii), (iv) and (v), respectively; for Example \[Exmean\](ii) the Fourier transform need not be a signed measure.

1. $(2\pi)^{d/2}z\delta_0(dk)$.

2. a continuous function for $p=1$, or an $L^{p/(p-1)}$-function for $1<p \leq 2$, multiplied in both cases by the Lebesgue measure.

3. $(2\pi)^{d/2} \left( \delta_0(dk) + i\varepsilon/2\, \delta_1(dk) -
i\varepsilon/2\, \delta_{-1}(dk)\right)$.

4. In the one-dimensional case the Fourier transform is the generalized function $\hat{z}(k)= \sqrt{2\pi}(z_0+z_1)/2\,\delta_0(k)
+i(z_0-z_1)/\sqrt{2\pi}\, \mathcal{P}(1/k)$, where $\mathcal{P}(1/k)$ denotes the Cauchy principal value of $1/k$. Using this explicit formula the conclusion of Proposition \[PrAsya\] can still be shown, although the assumptions of Proposition \[PrAsya\] are not fulfilled.
Applying the same technique as in Lemma \[LemasyA\] we can prove that the time asymptotic depends only on the behavior of $z$ at infinity.
\[Corasy\] Let $z_1,z_2\geq 0$ be two bounded measurable functions. If $z_1 - z_2\in L^1(\mathbb{R}^d,dx)$, then the free Kawasaki dynamics with initial distribution $\pi_{z_1}$ and the free Kawasaki dynamics with initial distribution $\pi_{z_2}$ have the same large time asymptotic limit.
For all $\varphi\in\mathcal{D}(\mathbb{R}^d)$ it follows from Lemma \[LemasyA\] and Corollary \[Colmeana\] that $$\begin{aligned}
\int_{\mathbb{R}^d}dx\, e^{tA}(e^\varphi-1)(x) z_1(x) -
\int_{\mathbb{R}^d}dx\, e^{tA}(e^\varphi-1)(x) z_2(x)\end{aligned}$$ converges when $t$ goes to $\infty$ to $$\begin{aligned}
(2\pi)^{-d/2}\widehat{(z_1-z_2)}(\{0\}) \int_{\mathbb{R}^d} (e^{\varphi(x)}-1)
dx.\end{aligned}$$ As we discussed in Example \[Exmean\], the assumption on $z_1-
z_2$ implies that $\mathrm{mean}(z_1-z_2) =0$. Hence $$\lim_{t\to +\infty}\int_\Gamma
P^{\mathbf{X}}_{\pi_{z_1},t}(d\gamma)\,e^{\langle\varphi,\gamma\rangle}=
\lim_{t\to +\infty}\int_\Gamma
P^{\mathbf{X}}_{\pi_{z_2},t}(d\gamma)\,e^{\langle\varphi,\gamma\rangle}.$$ By [@Ka76 Theorem 4.2], this is enough to show the required result.
Throughout this subsection, the study of the time asymptotic behavior of the free Kawasaki process with an initial distribution $\pi_z$ was based on the analysis of the Laplace transform $$\int_\Gamma
\pi_z(d\gamma)\,\mathbf{E}_\gamma[e^{\langle\varphi,\mathbf{X}_t\rangle}],\label{Natal1}$$ cf. Lemma \[lemasympred\]. In fact, we have studied the time asymptotic behavior of the so-called empirical field corresponding to a $\varphi\in\mathcal{D}(\mathbb{R}^d)$, $$n_t(\varphi,\mathbf{X}) := \langle\varphi,\mathbf{X}_t\rangle=
\sum_{x \in \mathbf{X}_t}\varphi(x).$$ The empirical field plays an essential role in the next subsection.
Hydrodynamic limits\[SubSecHydro\]
----------------------------------
In the sequel let $z\geq 0$ be a bounded measurable function. In order to obtain a macroscopic description of our system, we rescale simultaneously the empirical field $n_t(\varphi,\mathbf{X})=\langle
\varphi, \mathbf{X}_t \rangle$, $\varphi\in\mathcal{D}(\mathbb{R}^d)$, in space and in time. The scale transformation in space is given by $\langle\varphi,\gamma\rangle\to\varepsilon^d\langle\varphi(\varepsilon\cdot),\gamma\rangle$, and in time by $t\to\varepsilon^{-\kappa}t$ for some $\kappa
>0$. To obtain non-trivial macroscopic density profiles, one has to scale the initial intensity as well, $z\to z(\varepsilon\cdot)$. This scaling yields a scaling of the Laplace transform of the empirical field, in other words, the Laplace transform of the one-dimensional distribution of the scaled process. For each $t\geq
0$ and each $\varphi\in\mathcal{D}(\mathbb{R}^d)$ we obtain from (\[KF6b\]) the following form $$\begin{aligned}
&&\int_\Gamma \pi_{z(\varepsilon\cdot)}(d\gamma)\,
\mathbf{E}_\gamma\left[e^{\varepsilon^d\left\langle\varphi(\varepsilon\cdot),\mathbf{X}_{\varepsilon^{-\kappa}t}\right\rangle}\right]\nonumber \\
&=&\int_\Gamma \pi_{z(\varepsilon\cdot)}(d\gamma)\,
e_B\left(e^{\varepsilon^{-\kappa}tA}\left(e^{\varepsilon^d\varphi(\varepsilon\cdot)} -1\right),\gamma\right)\label{Natal2} \\
&=&\exp\left( \int_{\mathbb{R}^d} dx\,
\left(e^{\varepsilon^{-\kappa}tA}\left(e^{\varepsilon^d\varphi(\varepsilon\cdot)}-1\right)\right)(x)
z(\varepsilon x)\right).\nonumber\end{aligned}$$ In the sequel we denote the scaled empirical field by $$\label{eqReHy} n_t^{(\varepsilon)}(\varphi,\mathbf{X}) :=
n_{\varepsilon^{-\kappa}t}(\varepsilon^d\varphi(\varepsilon
\cdot),\mathbf{X})=
\varepsilon^d\left\langle\varphi(\varepsilon\cdot),
\mathbf{X}_{\varepsilon^{-\kappa}t}\right\rangle.$$ According to the independent movement of the particles, we are again able to reduce the study of the infinite particle system to an effective one-particle system. Again, technical difficulties arise from the fact that the scale of the system size is much larger than the scales of space and time considered in the empirical field; actually, the system size is infinite. Under the additional assumption that the Fourier transform of the activity $z$ is a signed measure, the hydrodynamic limit can be derived rather directly using Fourier techniques, cf. Propositions \[PropHyd\] and \[PropHydW\]. The general case of merely bounded activities requires more technically involved considerations (postponed to Proposition \[Prohydrogen\]).
\[PropHyd\] Let $z\geq 0$ be a bounded measurable function such that its Fourier transform is a signed measure. For each $t\geq 0$ the following limit exists for all $\varphi\in\mathcal{D}(\mathbb{R}^d)$ $$\label{EqHydLap} \lim_{\varepsilon\to
0^+}\int_\Theta\pi_{z(\varepsilon \cdot)}(d\gamma)\,
\mathbf{E}_\gamma\left[e^{n^{(\varepsilon)}_t(\varphi,\mathbf{X})}\right]
= \int_{\mathcal{D}'(\mathbb{R}^d)} \delta_{\rho_t}(d\omega)
e^{\langle \varphi, \omega \rangle}$$ whenever one of the following conditions is fulfilled:
1. If $$a^{(1)}_i := \int_{\mathbb{R}^d}dx\,x_i a(x)<\infty,\quad \forall\,i=1,\cdots ,d,$$ and $a^{(1)}:=(a^{(1)}_1,\cdots,a^{(1)}_d)\neq 0$, then for $\kappa=1$ the limiting density $\rho_t$ is given (as a generalized function) by $$\int_{\mathbb{R}^d}dx\,\rho_t(x)\varphi (x) :=
\int_{\mathbb{R}^d}dx\,z(x + ta^{(1)})\varphi (x), \quad \varphi \in
\mathcal{D}(\mathbb{R}^d);$$
2. If $a^{(1)}= 0$, and $$a^{(2)}_{ij} := \int_{\mathbb{R}^d}dx\,x_i x_j a(x)<\infty,\quad \forall\,i,j=1,\cdots ,d,$$ then for $\kappa=2$ the limiting density $\rho_t$ is given (as a generalized function) by $$\int_{\mathbb{R}^d}dx\,\rho_t(x)\varphi (x)
:=\frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d}dx\,z(x)
\int_{\mathbb{R}^d}dk\,e^{i\langle k , x \rangle } e^{-\frac{t}{2}\langle
a^{(2)}k,k\rangle}\hat\varphi (k), \quad \varphi \in
\mathcal{D}(\mathbb{R}^d)$$ where $a^{(2)}$ denotes the $d\times d$ matrix with coefficients $a^{(2)}_{ij}$.
By (\[Natal2\]), for each $t\geq 0$ and each $\varphi\in\mathcal{D}(\mathbb{R}^d)$, one has $$\begin{aligned}
&&\int_\Gamma\pi_{z(\varepsilon
\cdot)}(d\gamma)\,\mathbf{E}_\gamma\left[
e^{n^{(\varepsilon)}_t(\varphi,\mathbf{X})}\right]\nonumber \\
&=& \exp\left( \int_{\mathbb{R}^d} dx\,
e^{\varepsilon^{-\kappa}tA}(e^{\varepsilon^d\varphi(\varepsilon\cdot)}-1)(x)\ z(\varepsilon x)\right)\nonumber \\
&=&\exp\left( \int_{\mathbb{R}^d} dx\,
e^{\varepsilon^{-\kappa}tA}\Big(e^{\varepsilon^d\varphi(\varepsilon\cdot)}-1
-\varepsilon^d\varphi(\varepsilon\cdot)\Big)(x)\ z(\varepsilon x)\right)\cdot\label{Natal4}\\
&&\cdot\exp\left(\int_{\mathbb{R}^d} dx\,\varepsilon^dz(\varepsilon
x)
\left(e^{\varepsilon^{-\kappa}tA}\varphi(\varepsilon\cdot)\right)\left(x
\right)\right).\label{Natal3}\end{aligned}$$ Concerning (\[Natal4\]), we observe that due to the contractivity property of the semigroup $(e^{tA})_{t\geq 0}$ in $L^1(\mathbb{R}^d,dx)$ (cf. Proposition \[PropSemiA\]) we have $$\begin{aligned}
&& \left|\int_{\mathbb{R}^d}dx\, e^{\varepsilon^{-\kappa}tA}
\Big(e^{\varepsilon^{d}\varphi(\varepsilon \cdot)} -1 -
\varepsilon^d\varphi(\varepsilon \cdot)\Big)( x)
z(\varepsilon x )\right| \label{Eq6.2}\\
&\leq&\| e^{\varepsilon^d\varphi(\varepsilon \cdot)} -1 -
\varepsilon^d\varphi(\varepsilon \cdot)\|_{L^{1}(\mathbb{R}^d,dx)}
\|z \|_u \nonumber \\
&\leq& \varepsilon^d\| \varphi\|^2_{L^1(\mathbb{R}^d,dx)} e^{\|
\varphi\|_{L^{1}(\mathbb{R}^d,dx)}}\|z \|_u, \nonumber\end{aligned}$$ for more details see Lemma \[lemmHyd\] in Appendix \[appest\] below. Thus, the proof reduces to checking the existence of the limit of the exponential (\[Natal3\]) as $\varepsilon$ converges to 0. For this purpose one applies Parseval's formula to the exponent in (\[Natal3\]). The explicit formula (\[Natal5\]) for the semigroup $(e^{tA})_{t\geq 0}$ then yields $$\label{eqonefour} \int_{\mathbb{R}^d}\hat
z(dk)\,e^{\varepsilon^{-\kappa}t(2\pi)^{d/2}(\hat{a}(-\varepsilon
k)-\hat{a}(0))} \hat\varphi (-k).$$ Here, observe that since for all $k\in\mathbb{R}^d$, $\mathrm{Re}(\hat a(k)-\hat a(0)) \leq 0$ (cf. Remark \[Natal6\]), one has $$\left|e^{\varepsilon^{-\kappa}t(2\pi)^{d/2}(\hat a(-\varepsilon
k)-\hat a(0))} \hat\varphi (-k)\right|\leq
\left|\hat\varphi(-k)\right|,\quad \forall\,k\in\mathbb{R}^d,
\varepsilon >0,$$ where, in particular, $\hat\varphi\in L^1(\mathbb{R}^d,\hat z)$. Therefore, due to the Lebesgue dominated convergence theorem, one may infer the existence of the required limit from the existence of the limit $$\label{eq6.10} \lim_{\varepsilon\to 0^+}\frac{\hat a(-\varepsilon
k)-\hat a(0)}{\varepsilon^\kappa}$$ for each $k\in\mathbb{R}^d$.
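The limit (\[eq6.10\]) can be checked numerically for a concrete kernel. The following sketch is not part of the argument; it assumes, purely for illustration, $d=1$ and a Gaussian density $a$ with mean $m$, so that $a^{(1)}=m$ and $\hat a(k)=(2\pi)^{-1/2}e^{-ikm-k^2/2}$, and verifies the case $\kappa=1$:

```python
import cmath
import math

# Illustrative check of the limit (eq6.10) for kappa = 1 in d = 1, with the
# (assumed, illustration-only) Gaussian jump density a(x) with mean m and unit
# variance, so that a^(1) = m and (2*pi)^{1/2} a_hat(k) = exp(-i*k*m - k^2/2).
M = 0.7  # plays the role of a^(1)

def a_hat(k, m=M):
    return cmath.exp(-1j * k * m - k**2 / 2) / math.sqrt(2 * math.pi)

def scaled_difference(k, eps, kappa, m=M):
    # (a_hat(-eps*k) - a_hat(0)) / eps^kappa, the quantity whose limit is needed
    return (a_hat(-eps * k, m) - a_hat(0, m)) / eps**kappa

k = 1.3
limit = 1j * k * M / math.sqrt(2 * math.pi)  # i <k, a^(1)> / (2*pi)^{d/2}
assert abs(scaled_difference(k, 1e-4, 1) - limit) < 1e-3
```

For small $\varepsilon$ the scaled difference agrees with $\frac{i}{(2\pi)^{d/2}}\langle k,a^{(1)}\rangle$ up to an error of order $\varepsilon$.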
Case 1: Let $\kappa=1$. Then, under the assumptions stated in 1, the function $\hat a$ is differentiable at 0, and thus (\[eq6.10\]) exists with $$\lim_{\varepsilon\to 0^+}\frac{\hat{a}(-\varepsilon
k)-\hat{a}(0)}{\varepsilon} = \frac{i}{(2\pi)^{d/2}} \langle k , a^{(1)} \rangle$$ for each $k\in\mathbb{R}^d$. Hence the limit of (\[Natal3\]) exists and equals $$\begin{aligned}
\int_{\mathbb{R}^d}\hat z(dk)\,e^{it \langle k, a^{(1)}\rangle }\hat\varphi (-k)
&=&\frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d}dx\,z(x)\int_{\mathbb{R}^d}dk\,
e^{i\langle k, (-x+ta^{(1)})\rangle }\hat\varphi (-k)\\
&=&\int_{\mathbb{R}^d}dx\,z(x)\varphi (x-ta^{(1)})\\
&=&\int_{\mathbb{R}^d}dx\,z(x + ta^{(1)})\varphi (x), \quad \varphi
\in \mathcal{D}(\mathbb{R}^d).\end{aligned}$$ Therefore, the Laplace transform of the rescaled empirical field converges to $$\exp\left(\int_{\mathbb{R}^d}dx\,z(x + ta^{(1)})\varphi (x)\right).$$ This means that the limiting distribution is the Dirac measure concentrated at the density $z(\,\cdot\, + ta^{(1)})$.
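The chain of Fourier identities above can also be tested numerically. The following sketch is an illustration only: it takes $d=1$ with the assumed Gaussian choices $z(x)=e^{-x^2/2}$ and $\varphi(x)=e^{-x^2}$, so that $\hat z(k)=e^{-k^2/2}$ and $\hat\varphi(k)=e^{-k^2/4}/\sqrt{2}$, and compares the Fourier side with the shifted integral, abbreviating $c=ta^{(1)}$:

```python
import cmath
import math

# Illustration (d = 1, Gaussian choices assumed) of the identity
#   \int z_hat(k) e^{i t <k, a^(1)>} phi_hat(-k) dk = \int z(x + t a^(1)) phi(x) dx
# with z(x) = exp(-x^2/2) (so z_hat(k) = exp(-k^2/2)) and phi(x) = exp(-x^2)
# (so phi_hat(k) = exp(-k^2/4)/sqrt(2), which is even); c abbreviates t * a^(1).
c = 0.9

def midpoint(f, K=12.0, n=6000):
    # midpoint rule on [-K, K]; both integrands are Gaussian-decaying
    dk = 2 * K / n
    return sum(f(-K + (i + 0.5) * dk) for i in range(n)) * dk

fourier_side = midpoint(lambda k: cmath.exp(-k**2 / 2 + 1j * k * c)
                        * math.exp(-k**2 / 4) / math.sqrt(2))
space_side = midpoint(lambda x: math.exp(-(x + c)**2 / 2) * math.exp(-x**2))
assert abs(fourier_side - space_side) < 1e-8
```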
Case 2: Now let $\kappa=2$. Then (\[eq6.10\]) can be written as $$\begin{aligned}
\frac{\hat{a}(-\varepsilon k)-\hat{a}(0)}{\varepsilon^2}
&=&\frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d}dx\,a(x)
\frac{e^{i\varepsilon \langle x , k \rangle } -1}{\varepsilon^2}\\
&=&\frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d}dx\,a(x)
\left(\frac{i}{\varepsilon} \langle x , k \rangle -
\frac{1}{2}\sum_{i,j=1}^dx_ix_jk_ik_j
+ \frac{o(\varepsilon^2)}{\varepsilon^2}\right)\\
&=&\frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d}dx\,a(x)
\left(-\frac{1}{2}\sum_{i,j=1}^dx_ix_jk_ik_j
+ \frac{o(\varepsilon^2)}{\varepsilon^2}\right),\end{aligned}$$ where we have used the Taylor expansion of $e^{i\varepsilon\langle x,
k\rangle }$ around $\varepsilon=0$ and the assumptions of Case 2. Since $a\in
L^1(\mathbb{R}^d,dx)$, the latter converges to $-\frac{1}{2}(2\pi)^{-d/2}\langle a^{(2)}k,k\rangle$, and thus $$\begin{aligned}
&&\lim_{\varepsilon\to 0^+}\int_\Gamma\pi_{z(\varepsilon
\cdot)}(d\gamma)\,\mathbf{E}_\gamma\left[
e^{n^{(\varepsilon)}_t(\varphi,\mathbf{X})}\right]\\
&=&\exp\left(\int_{\mathbb{R}^d}\hat z(dk)\,
e^{-\frac{t}{2}\langle a^{(2)}k,k\rangle}\hat\varphi (-k)\right)\\
&=&\exp\left(\frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d}dx
z(x)\int_{\mathbb{R}^d}dk e^{-i\langle k, x \rangle } e^{-\frac{t}{2}\langle
a^{(2)}k,k\rangle} {\hat\varphi}(-k) \right).\end{aligned}$$
\[Natal8\] For continuously differentiable $z$ the limiting density $\rho_t(x)=z(x + ta^{(1)})$ obtained in Proposition \[PropHyd\] is the strong solution of the linear partial differential equation $\frac{\partial}{\partial
t}\rho_t(x)=\langle a^{(1)} ,\nabla\rho_t(x) \rangle =
\mathrm{div}(a^{(1)}\rho_t(x))$ with the initial condition $\rho_0=z$. In the same way, the second case stated in Proposition \[PropHyd\] yields a limiting density which is the strong solution of the heat equation $$\frac{\partial}{\partial
t}\rho_t(x)=\frac12\sum_{i,j=1}^{d}a_{ij}^{(2)}
\frac{\partial^2}{\partial x_i\partial x_j}\rho_t(x)$$ with the same initial condition.
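As a sanity check of the first statement of Remark \[Natal8\], one may verify the transport equation for a concrete profile by finite differences; the Gaussian profile and the value of $a^{(1)}$ in this sketch are assumed for illustration only:

```python
import math

# Finite-difference check (illustration only) that rho_t(x) = z(x + t*v) solves
# the transport equation; z Gaussian and v (the stand-in for a^(1)) are assumed.
v = 0.4
z = lambda x: math.exp(-x**2)
rho = lambda t, x: z(x + t * v)

h, t0, x0 = 1e-5, 0.8, -0.3
d_t = (rho(t0 + h, x0) - rho(t0 - h, x0)) / (2 * h)   # d/dt rho
d_x = (rho(t0, x0 + h) - rho(t0, x0 - h)) / (2 * h)   # d/dx rho

# d/dt rho = <a^(1), grad rho> = div(a^(1) rho) reads d_t = v * d_x in d = 1
assert abs(d_t - v * d_x) < 1e-6
```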
Given the function $a$, we may decompose it into a sum of an even function $p$ and an odd function $q$, $a=p+q$. Note that one always has $p^{(1)}= 0$. Subjecting also the function $a$ itself to a suitable scaling (beyond the previous scalings of space and time), one may complement the statement of Proposition \[PropHyd\]. The limit (\[eq6.10\]) required in Proposition \[PropHyd\] suggests the scaling $a_\varepsilon:=p+\varepsilon q$ and $\kappa=2$. Note that the assumptions on the function $a$ (i.e., $0\leq a\in L^1(\mathbb{R}^d,dx)$) carry over to $a_\varepsilon$, $\varepsilon
>0$. Hence all considerations done in Section \[SecDefKa\] and the following sections for the one particle operator $A$ and its underlying dynamics still hold for the operator $A_\varepsilon$ defined by (\[OnePartOp\]) with $a$ replaced by $a_\varepsilon$. Note that $$a^{(1)} := \int_{\mathbb{R}^d}dx\,x a(x) =
\varepsilon^{-1}\int_{\mathbb{R}^d}dx\,x a_\varepsilon(x)=
\int_{\mathbb{R}^d}dx\,x q(x).$$
(“weak asymmetry”) \[PropHydW\] Let $z\geq 0$ be a bounded measurable function such that its Fourier transform is a signed measure. Under the above conditions, if $0\not= q^{(1)}=a^{(1)}\in \mathbb{R}^d$ and $p^{(2)}_{ij}<\infty$ for every $i,j=1,\cdots ,d$, then for each $t\geq 0$ and each $\varphi\in\mathcal{D}(\mathbb{R}^d)$ the following limit exists, and it is given by $$\begin{aligned}
&&\lim_{\varepsilon\to 0^+}\int_\Gamma\pi_{z(\varepsilon \cdot)}(d\gamma)\,
\mathbf{E}_\gamma[e^{n^{(\varepsilon)}_t(\varphi,\mathbf{X})}]\\
&=&\exp\left(\frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d}dx\,z(x)
\int_{\mathbb{R}^d}dk\,e^{i \langle k, x\rangle } e^{-it\langle a^{(1)},k\rangle
- \frac{t}{2}\langle a^{(2)}k,k\rangle}\hat\varphi (k)\right).\end{aligned}$$
Since all statements for the operator $A$ and the corresponding process also hold for $A_\varepsilon$, one finds, in particular, that $(e^{tA_\varepsilon})_{t\geq 0}$ is also a contraction semigroup on $L^1(\mathbb{R}^d,dx)$. Arguments similar to those in the proof of Proposition \[PropHyd\] reduce the proof to the analysis of the existence of the limit $$\lim_{\varepsilon\to 0^+}\frac{\widehat{a_\varepsilon}(-\varepsilon
k)- \widehat{a_\varepsilon}(0)}{\varepsilon^2}= \lim_{\varepsilon\to
0^+}\frac{\hat p(-\varepsilon k)-\hat p(0)}{\varepsilon^2}
+\lim_{\varepsilon\to 0^+}\frac{\hat q(-\varepsilon k)-\hat
q(0)}{\varepsilon},\quad k\in\mathbb{R}^d.$$
\[remweaka\] Similarly to Remark \[Natal8\], one may then conclude that for continuously differentiable $z$ Proposition \[PropHydW\] leads to a limiting density $\rho_t$ which is a strong solution of the partial differential equation $$\frac{\partial}{\partial t}\rho_t(x)=\mathrm{div}(a^{(1)}\rho_t(x))+
\frac12\sum_{i,j=1}^{d}a_{ij}^{(2)}\frac{\partial^2}{\partial
x_i\partial x_j}\rho_t(x)$$ with the initial condition $\rho_0=z$.
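The advection-diffusion equation of Remark \[remweaka\] can likewise be checked on a closed-form Gaussian solution; in the sketch below, $v$ and $D$ stand for $a^{(1)}$ and $a^{(2)}$ in $d=1$, and all numerical values are illustrative assumptions:

```python
import math

# Finite-difference check (illustration only) of the advection-diffusion equation,
# on the closed-form Gaussian solution with initial datum a centered Gaussian of
# variance s; v and D stand for a^(1) and a^(2) (assumed values).
v, D, s = 0.4, 0.9, 1.0

def rho(t, x):
    var = s + D * t
    return math.exp(-(x + v * t)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

h, t0, x0 = 1e-4, 0.5, 0.7
d_t  = (rho(t0 + h, x0) - rho(t0 - h, x0)) / (2 * h)
d_x  = (rho(t0, x0 + h) - rho(t0, x0 - h)) / (2 * h)
d_xx = (rho(t0, x0 + h) - 2 * rho(t0, x0) + rho(t0, x0 - h)) / h**2

# d/dt rho = div(a^(1) rho) + (1/2) a^(2) d^2/dx^2 rho, i.e. d_t = v*d_x + D/2*d_xx
assert abs(d_t - (v * d_x + 0.5 * D * d_xx)) < 1e-5
```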
\[Prohydrogen\] Assume that $a$ has all moments finite. Then the results stated in Propositions \[PropHyd\] and \[PropHydW\] hold for all non-negative bounded measurable functions $z$.
We note that in the first part of the proof of Propositions \[PropHyd\] and \[PropHydW\] we have only used the boundedness of $z$. Therefore, it remains to prove the convergence of the exponent in (\[Natal3\]), namely, $$\begin{aligned}
\label{eq6.7} &&\int_{\mathbb{R}^d} dx\,z(x)
\left(e^{\varepsilon^{-\kappa}tA_\varepsilon}\varphi(\varepsilon\cdot)\right)\left(\frac{x}{\varepsilon}\right).\end{aligned}$$ Let $\psi\in L^1(\mathbb{R}^d,dx)$ be given. Then $$\begin{aligned}
&&\left| \int_{\mathbb{R}^d} dx\,z(x)
\left(e^{\varepsilon^{-\kappa}tA_\varepsilon}\varphi(\varepsilon\cdot)\right)\left(\frac{x}{\varepsilon}\right)
-\int_{\mathbb{R}^d} dx\,z(x)
\left(e^{\varepsilon^{-\kappa}tA_\varepsilon}\psi(\varepsilon\cdot)\right)\left(\frac{x}{\varepsilon}\right)\right|\\
& \leq &\varepsilon^d \int_{\mathbb{R}^d} dx\,|z(\varepsilon x)|
\left| e^{\varepsilon^{-\kappa}tA_\varepsilon} \left(
\varphi(\varepsilon\cdot)- \psi(\varepsilon\cdot)\right)(x)\right|\\
& \leq &\|z\|_u \varepsilon^d \left\|
e^{\varepsilon^{-\kappa}tA_\varepsilon} \left(
\varphi(\varepsilon\cdot)-
\psi(\varepsilon\cdot)\right)\right\|_{L^1(\mathbb{R}^d,dx)}.\end{aligned}$$ As $(e^{tA_\varepsilon})_{t\geq 0}$ is a $L^1(\mathbb{R}^d,dx)$-contraction semigroup, the latter can be bounded by $$\begin{aligned}
& \leq &\|z\|_u \varepsilon^d \int_{\mathbb{R}^d} dx\left|
\varphi(\varepsilon x)-
\psi(\varepsilon x)\right| = \|z\|_u \|\varphi
-\psi\|_{L^1(\mathbb{R}^d,dx)}.\end{aligned}$$ Therefore, it is enough to consider (\[eq6.7\]) for $\varphi$ from a total subset of $L^1(\mathbb{R}^d,dx)$. Let us consider the set of all functions in $L^1(\mathbb{R}^d,dx)$ with Fourier transform in $\mathcal{D}(\mathbb{R}^d)$. This set is total in $L^1(\mathbb{R}^d,dx)$, because $\mathcal{D}(\mathbb{R}^d)$ is dense in $\mathcal{S}(\mathbb{R}^d)$, the Fourier transform is continuous in $\mathcal{S}(\mathbb{R}^d)$, and $\mathcal{S}(\mathbb{R}^d)$ is dense in $L^1(\mathbb{R}^d,dx)$. Let $\varphi$ be such a function. In order to prove the convergence of (\[eq6.7\]) it is enough to show the convergence of $(e^{\varepsilon^{-\kappa}tA_{\varepsilon}}\varphi(\varepsilon\cdot))(\frac{x}{\varepsilon})$ in $L^1(\mathbb{R}^d,dx)$. For this purpose it is sufficient to show the following convergence of the Fourier transform of the latter expression,
$$\label{eq6.8}
e^{\varepsilon^{-\kappa}t(2\pi)^{d/2}(\hat{a}_\varepsilon(-\varepsilon
k)-\hat{a}(0))} \hat\varphi (-k) \rightarrow e^{ it \langle
a^{(1)},k \rangle -\frac{0^{2-\kappa}t}{2} \langle
a^{(2)}k,k\rangle } \hat\varphi (-k),\quad \varepsilon \rightarrow 0$$
in $\mathcal{S}(\mathbb{R}^d)$. The exponent in the l.h.s. of (\[eq6.8\]) may be written as $$\begin{aligned}
\hat{a}_\varepsilon(-\varepsilon k) -\hat{a}(0) =-\varepsilon
\langle \nabla \hat{a}_\varepsilon(0),k\rangle
-\frac{\varepsilon^2}{(2\pi)^{d/2}} \int_0^1 (1-s)
\int_{\mathbb{R}^d} \langle k,x\rangle^2 e^{is\varepsilon \langle
k,x\rangle } a_\varepsilon(x) dx ds. \label{Eq6.1}\end{aligned}$$ The first summand on the right hand side is of order $\varepsilon^\kappa$, because $\varepsilon \langle \nabla
\hat{a}_\varepsilon(0),k\rangle
=\frac{-i\varepsilon^\kappa}{(2\pi)^{d/2}} \int_{\mathbb{R}^d}dx
\langle k,x \rangle q(x)$. Hence all derivatives of this expression grow at most polynomially, and all derivatives of third and higher order are of order $\varepsilon$ or smaller. Therefore only derivatives of first and second order have to be computed carefully in order to see that $$e^{ -\varepsilon^{2-\kappa}t \int_0^1 (1-s) \int_{\mathbb{R}^d}
\langle k,x\rangle^2 e^{is\varepsilon \langle k,x\rangle }
a_\varepsilon(x) dx ds} - e^{ -\frac{0^{2-\kappa}t}{2} \langle
a^{(2)}k,k\rangle } \label{eq6.8b}$$ is polynomially bounded and of order $\varepsilon$. Summarizing, (\[eq6.8\]) and all its derivatives converge locally uniformly. As $\hat{\varphi} \in \mathcal{D}(\mathbb{R}^d)$, (\[eq6.8\]) also converges in $\mathcal{S}(\mathbb{R}^d)$. Appendix \[appB\] provides the tools for a more detailed calculation.
Non-equilibrium dynamics {#nonled}
========================
In this section we widen the class of initial distributions to measures far from equilibrium, that is, we consider all probability measures $\mu$ on $\Theta$ as initial distributions, subject only to a mild mixing condition. This means that we consider the processes constructed as in Section \[Section2\], but not necessarily with a Poissonian initial distribution. Assuming enough mixing of the initial measure $\mu$, namely (\[CondUrs\]), and incorporating ideas from [@DSS82], we are able to generalize Proposition \[PrAsya\] in Subsection \[SubsecGenTemp\] and Proposition \[Prohydrogen\] in Subsection \[SubsecGenHydro\].
We formulate the mixing requirement in terms of the second Ursell function (factorial cumulant), which can be expressed in terms of the first and second correlation functions (factorial moments) defined in Subsection \[Subsection2.2\], namely $$u_\mu^{(2)}(x,y) := k_\mu(\{x,y\}) - k_\mu(\{x\}) k_\mu(\{y\}).$$
We denote in the following the first correlation function $x \mapsto k_\mu(\{x\})$ by $\rho_\mu$. The condition on the second Ursel function, (\[CondUrs\]), is a rather weak mixing or decay of correlation condition. In Subsection \[SecGibbs\] we show that this condition is fulfilled, in particular, by Gibbs measures in the high temperature regime. It will also hold beyond that regime, cf. e.g. [@BaMePrKu03b].
Large time asymptotic \[SubsecGenTemp\]
---------------------------------------
Recalling (\[onebog\]), the Laplace transform of the one-dimensional distribution $P^{\mathbf{X}}_{\mu,t}$ can be expressed in terms of a one particle system, i.e., for all non-negative $f\in \mathcal{S}(\mathbb{R}^d)$ $$\label{eq6.12}
\int_\Gamma e^{-\langle f , \gamma \rangle }
P^{\mathbf{X}}_{\mu,t}(d\gamma) = \int_\Theta
e_B(e^{tA}(e^{-f}-1),\gamma) \mu(d\gamma) = \int_\Theta
e^{\langle \ln (e^{tA}(e^{-f}-1)+1),\gamma\rangle} \mu(d\gamma).$$
\[PrAsyGibbs\] Let $\mu$ be a measure on $\Theta$ which has first and second correlation functions. Assume that $\mu$ fulfills the following mixing condition $$\label{CondUrs}
\sup_{x \in \mathbb{R}^d} \int_{\mathbb{R}^d}u^{(2)}_\mu(x, y) dy < \infty.$$ In addition, we assume that the Fourier transform of the first correlation function $\rho_\mu$ is a signed measure. Then the one-dimensional distribution $P^{\mathbf{X}}_{\mu,t}$ converges weakly to $\pi_{\mathrm{mean}(\rho_\mu)}$ as $t$ tends to infinity.
Given a non-negative $f \in \mathcal{S}(\mathbb{R}^d)$ such that $-1 \leq e^{-f}-1\leq 0$, let $\varphi:=1-e^{-f}$. Using (\[eq6.12\]) and $|e^{-x} - e^{-y}|\leq |x-y|$ for $x,y \geq 0$ one obtains that $$\begin{aligned}
&&\left| \int_\Theta
e^{\langle \ln (e^{tA}(e^{-f}-1)+1),\gamma\rangle} \mu(d\gamma) - e^{\mathrm{mean}(\rho_\mu) \int_{\mathbb{R}^d}(e^{-f}-1)(x) dx}\right|
\\&\leq& \int_\Theta \left|
\langle \ln (-e^{tA}\varphi+1),\gamma\rangle + \mathrm{mean}(\rho_\mu) \int_{ \mathbb{R}^d}\varphi(x) dx\right| \mu(d\gamma).\end{aligned}$$ First note that $\|e^{tA}\varphi\|_u$ tends to zero as $t \rightarrow
\infty$, because $\hat{\varphi} \in L^1(\mathbb{R}^d,dx)$ and $$|e^{tA}\varphi(x)| \leq \frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d}
dk\, e^{t(2\pi)^{d/2}\mathrm{Re}(\hat{a}(k)-\hat{a}(0))}
|\hat{\varphi}(k)|.$$ Since $0\leq x -\ln(x+1)\leq x^2 $ for $-1/2\leq x \leq 0$ and $(e^{tA})_{t\geq 0}$ is an $L^1(\mathbb{R}^d,dx)$-contraction, we get for $t$ large enough that $$\begin{aligned}
&&\int_\Theta \left|
\langle \ln (-e^{tA}\varphi+1),\gamma\rangle + \langle e^{tA}\varphi,\gamma\rangle \right| \mu(d\gamma)
\leq \int_{\mathbb{R}^d} \left(e^{tA}\varphi(x) \right)^2 \rho_\mu(x) dx \\ &\leq& \|e^{tA}\varphi\|_u \int_{\mathbb{R}^d} e^{tA}\varphi(x) \rho_\mu(x)dx \leq \|e^{tA}\varphi\|_u \|\varphi\|_{L^1} \|\rho_\mu\|_u.\end{aligned}$$ This means it is sufficient to estimate $$\begin{aligned}
\lefteqn{\int_\Theta \left|
\langle e^{tA}\varphi,\gamma\rangle - \mathrm{mean}(\rho_\mu) \int_{ \mathbb{R}^d}\varphi(x) dx\right| \mu(d\gamma)
}\\&\leq& \int_\Theta \left|
\langle e^{tA}\varphi,\gamma\rangle - \int_{ \mathbb{R}^d}e^{tA}\varphi(x) \rho_\mu(x) dx\right| \mu(d\gamma)
\\&&+ \left| \int_{ \mathbb{R}^d}e^{tA}\varphi(x) \rho_\mu(x) dx - \mathrm{mean}(\rho_\mu) \int_{ \mathbb{R}^d}\varphi(x) dx\right|.\end{aligned}$$ The last term converges to zero because of Lemma \[LemasyA\] and Corollary \[Colmeana\]. Due to the decay property (\[CondUrs\]) of the correlations, $$\begin{aligned}
\lefteqn{\int_\Theta \left|
\langle e^{tA}\varphi,\gamma\rangle - \int_{ \mathbb{R}^d}e^{tA}\varphi(x) \rho_\mu(x) dx\right| \mu(d\gamma)}\\
&\leq& \int_{\mathbb{R}^d} \int_{\mathbb{R}^d}e^{tA}\varphi(x) e^{tA}\varphi(y) u^{(2)}_\mu(x,y)dx dy
+ \int_{\mathbb{R}^d} \left( e^{tA}\varphi(x)\right)^2 \rho_\mu(x) dx\\&\leq& \|e^{tA}\varphi\|_u \|e^{tA}\varphi\|_{L^1} \left( \sup_{x \in \mathbb{R}^d} \int_{\mathbb{R}^d}u^{(2)}_\mu(x,y) dy + \| \rho_\mu\|_u \right),\end{aligned}$$ which implies the result, as $e^{tA}$ is an $L^{1}(\mathbb{R}^d,dx)$ contraction and $\|e^{tA}\varphi\|_u$ converges to zero.
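The decay $\|e^{tA}\varphi\|_u\to0$ exploited twice in the proof above can be illustrated numerically via the displayed Fourier bound. The sketch below is not part of the argument; it assumes $d=1$, a standard Gaussian jump density $a$ (so that $(2\pi)^{1/2}\hat a(k)=e^{-k^2/2}$) and a Gaussian $|\hat\varphi|$:

```python
import math

# Numerical illustration (assumed Gaussian choices) of the bound
# |e^{tA} phi(x)| <= (2*pi)^{-1/2} \int exp(t (2*pi)^{1/2} Re(a_hat(k)-a_hat(0))) |phi_hat(k)| dk
# in d = 1, with (2*pi)^{1/2} a_hat(k) = exp(-k^2/2) and |phi_hat(k)| = exp(-k^2).
def sup_bound(t, n=4000, K=20.0):
    dk = 2 * K / n
    total = 0.0
    for i in range(n):
        k = -K + (i + 0.5) * dk
        mult = math.exp(t * (math.exp(-k**2 / 2) - 1.0))  # the semigroup multiplier
        total += mult * math.exp(-k**2) * dk
    return total / math.sqrt(2 * math.pi)

vals = [sup_bound(t) for t in (0.0, 1.0, 5.0, 25.0)]
assert all(vals[i] > vals[i + 1] for i in range(3))  # the bound decreases in t
assert vals[-1] < 0.5 * vals[0]                      # and decays towards zero
```

The multiplier is $\leq 1$ and tends to $0$ pointwise for $k\neq0$, so dominated convergence forces the bound to $0$, exactly as used in the proof.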
Hydrodynamic limits \[SubsecGenHydro\]
--------------------------------------
As in Subsection \[SubSecHydro\] we want to study the rescaled empirical field $n_t^{(\varepsilon)}(\varphi,\mathbf{X}) =
\varepsilon^d\left\langle\varphi(\varepsilon\cdot),
\mathbf{X}_{\varepsilon^{-\kappa}t}\right\rangle$. Since we no longer have a natural parameter associated with the initial measure, one cannot formulate slowly varying intensities directly. However, a possible framework is to work with a rather arbitrary sequence of initial measures $(\mu_\varepsilon)_{\varepsilon >0}$. The main restriction on this sequence is that one has to assume a particular convergence of the first correlation measure, described below in more detail, and the mixing condition (\[CondUrs\]) uniformly in $\varepsilon$. In Corollary \[Lem1Ursell\] we prove that these conditions are fulfilled by Gibbs measures in the high temperature regime with slowly varying intensity $z(\varepsilon
\cdot)$. Furthermore, the limit is identified.
\[ThHydGibbs\] Assume that $a$ has all moments finite. Let $(\mu_{\varepsilon})_{\varepsilon > 0}$ be a sequence of measures on $\Theta$ such that $\rho_{\mu_\varepsilon}$ is uniformly bounded in $\varepsilon$, the limit $\lim_{\varepsilon \rightarrow 0^+}
\rho_{\mu_\varepsilon}(\{x/\varepsilon\}) =: \rho_0(x)$ exists for all $x \in \mathbb{R}^d$ and the following mixing condition $$\sup_{x \in \mathbb{R}^d,\varepsilon >0} \int_{\mathbb{R}^d} u^{(2)}_{\mu_{\varepsilon}}(x,y) dy <\infty$$ holds. Then, for each $t\geq 0$, the following limit exists for all non-negative $\varphi\in\mathcal{D}(\mathbb{R}^d)$ $$\label{eqhydrolim} \lim_{\varepsilon\to
0^+}\int_\Gamma\mu_\varepsilon(d\gamma)\,
\mathbf{E}_\gamma\left[e^{-n^{(\varepsilon)}_t(\varphi,\mathbf{X})}\right]
=: \int_{\mathcal{D}'(\mathbb{R}^d)} \delta_{\rho_t}(d\omega)
e^{-\langle \varphi, \omega \rangle},$$ whenever one of the following conditions is fulfilled:
1. If $a^{(1)}=(a^{(1)}_1,\cdots,a^{(1)}_d)\neq 0$, then for $\kappa=1$ the limit (\[eqhydrolim\]) holds for $\rho_t$ equal to the strong solution of $\frac{\partial}{\partial t}\rho_t(x)=
\mathrm{div}(a^{(1)}\rho_t(x))$ with initial condition $\rho_0$.
2. If $a^{(1)}= 0$, then for $\kappa=2$ the limit (\[eqhydrolim\]) holds for $\rho_t$ equal to the strong solution of $\frac{\partial}{\partial
t}\rho_t(x)=\frac12\sum_{i,j=1}^{d}a_{ij}^{(2)}
\frac{\partial^2}{\partial x_i\partial x_j}\rho_t(x)$ with initial condition $\rho_0$.
3. As in Proposition \[PropHydW\], let $p$ and $q$ be the even and the odd part of $a$. Define $a_\varepsilon = p + \varepsilon q$ and assume that $0\not= q^{(1)}=a^{(1)}\in \mathbb{R}^d$. Then, for $\kappa=2$, the limit (\[eqhydrolim\]) for the dynamics w.r.t. $a_\varepsilon$ holds for $\rho_t$ equal to the strong solution of $\frac{\partial}{\partial
t}\rho_t(x)=\mathrm{div}(a^{(1)}\rho_t(x))+
\frac12\sum_{i,j=1}^{d}a_{ij}^{(2)}\frac{\partial^2}{\partial
x_i\partial x_j}\rho_t(x)$ with initial condition $\rho_0$.
Given a non-negative $\varphi \in \mathcal{S}(\mathbb{R}^d)$, according to (\[eq6.12\]) one may write the difference of the two sides of (\[eqhydrolim\]) as $$\left| \int_\Theta\mu_\varepsilon(d\gamma)\,
e^{\left\langle \ln \left(e^{\varepsilon^{-\kappa}tA_\varepsilon}
(e^{-\varepsilon^{d}\varphi(\varepsilon \cdot)} -1)+1\right),\gamma\right\rangle}
- e^{-\int_{\mathbb{R}^d} \varphi(x) \rho_t(x) dx}\right|.$$ Using $|e^{-x} - e^{-y}|\leq |x-y|$ for $x,y \geq 0$ one can bound this by $$\begin{aligned}
\int_\Theta \left|
\langle \ln \left(e^{\varepsilon^{-\kappa}tA_\varepsilon}
(e^{-\varepsilon^{d}\varphi(\varepsilon \cdot)} -1)+1\right),\gamma\rangle + \int_{\mathbb{R}^d}\varphi(x) \rho_t(x) dx\right| \mu_\varepsilon(d\gamma).\end{aligned}$$ Let us show that it is sufficient to consider $$\label{eqxx34}
\int_\Theta \left|\langle e^{\varepsilon^{-\kappa}tA_\varepsilon}
\varepsilon^{d}\varphi(\varepsilon \cdot),\gamma\rangle
- \int_{\mathbb{R}^d}\varphi(x) \rho_t(x) dx\right| \mu_\varepsilon(d\gamma),$$ indeed, proceeding as in the proof of Proposition \[PrAsyGibbs\] we get $$\begin{aligned}
&&\int_\Theta \left|
\langle \ln \left(e^{\varepsilon^{-\kappa}tA_\varepsilon}
(e^{-\varepsilon^{d}\varphi(\varepsilon \cdot)} -1)+1\right),\gamma\rangle +
\langle e^{\varepsilon^{-\kappa}tA_\varepsilon}
\varepsilon^{d}\varphi(\varepsilon \cdot),\gamma\rangle \right|
\mu_\varepsilon(d\gamma)\\
&\leq& \int_{\mathbb{R}^d} \left|\ln \left(e^{\varepsilon^{-\kappa}tA_\varepsilon}
(e^{-\varepsilon^{d}\varphi(\varepsilon \cdot) }-1)(x)+1\right)- e^{\varepsilon^{-\kappa}tA_\varepsilon}
(e^{-\varepsilon^{d}\varphi(\varepsilon \cdot)} -1)(x) \right|\rho_{\mu_\varepsilon}(dx) \\
&&+ \int_{\mathbb{R}^d} \left|e^{\varepsilon^{-\kappa}tA_\varepsilon}
(e^{-\varepsilon^{d}\varphi(\varepsilon \cdot)} -1)(x) + e^{\varepsilon^{-\kappa}tA_\varepsilon}
\varepsilon^{d}\varphi(\varepsilon \cdot)(x) \right|\rho_{\mu_\varepsilon}(dx) \\
&\leq& \int_{\mathbb{R}^d} \left(e^{\varepsilon^{-\kappa}tA_\varepsilon}
(e^{-\varepsilon^{d}\varphi(\varepsilon \cdot) }-1)(x)\right)^2\rho_{\mu_\varepsilon}(dx) \\
&&+ \int_{\mathbb{R}^d} \left|e^{\varepsilon^{-\kappa}tA_\varepsilon}
\left(e^{-\varepsilon^{d}\varphi(\varepsilon \cdot)} -1 +
\varepsilon^{d}\varphi(\varepsilon \cdot) \right)\right|(x) \rho_{\mu_\varepsilon}(dx) \\
&\leq& \left\| e^{\varepsilon^{-\kappa}tA_\varepsilon}
(e^{-\varepsilon^{d}\varphi(\varepsilon \cdot) }-1)\right\|_u \left\|e^{\varepsilon^{-\kappa}tA_\varepsilon}
(e^{-\varepsilon^{d}\varphi(\varepsilon \cdot) }-1)\right\|_{L^1}\|\rho_{\mu_\varepsilon}\|_u \\
&&+ \left\|e^{\varepsilon^{-\kappa}tA_\varepsilon}
\left(e^{-\varepsilon^{d}\varphi(\varepsilon \cdot)} -1 +
\varepsilon^{d}\varphi(\varepsilon \cdot)\right) \right\|_{L^1} \|\rho_{\mu_\varepsilon}\|_u .\end{aligned}$$ The latter expression is at least of order $\varepsilon^d$ according to Lemma \[lemmHyd\].
Thus it remains to consider (\[eqxx34\]) which can be bounded by $$\begin{aligned}
&& \int_\Theta \left|\langle e^{\varepsilon^{-\kappa}tA_\varepsilon}
\varepsilon^{d}\varphi(\varepsilon \cdot),\gamma\rangle
- \int_{\mathbb{R}^d} \left(e^{\varepsilon^{-\kappa}tA_\varepsilon}\varphi(\varepsilon \cdot) \right)(x/\varepsilon)
\rho_{\mu_\varepsilon}(x/\varepsilon) dx\right| \mu_\varepsilon(d\gamma)
\nonumber \\&&+ \left|\int_{\mathbb{R}^d} \left(e^{\varepsilon^{-\kappa}tA_\varepsilon}\varphi(\varepsilon \cdot) \right)(x/\varepsilon)
\rho_{\mu_\varepsilon}(x/\varepsilon) dx - \int_{\mathbb{R}^d}\varphi(x) \rho_t(x) dx \right| \label{eqsecsum}\end{aligned}$$ In the proof of Proposition \[Prohydrogen\] we showed that $e^{\varepsilon^{-\kappa}tA_{\varepsilon}}\varphi(\varepsilon
\cdot)(x/\varepsilon)$ converges in $\mathcal{S}(\mathbb{R}^d)$ to $$e^{-t\langle a^{(1)},\nabla \rangle + t 0^{2-\kappa}/2 \langle
\nabla, a^{(2)} \nabla \rangle}\varphi(x).$$ By assumption, $\rho_{\mu_\varepsilon}(\{x/\varepsilon\})$ converges, in particular, in $\|\cdot\|_{0,-d-1,2}$, cf. (\[defnorm\]), and thus one obtains $$\begin{aligned}
&&\lim_{\varepsilon \rightarrow 0^+} \int_{\mathbb{R}^d
}e^{\varepsilon^{-\kappa}tA_{\varepsilon}}
\varphi(\varepsilon \cdot)(x/\varepsilon)
\rho_{\mu_\varepsilon}(\{x/\varepsilon\}) dx \\
&=& \int_{\mathbb{R}^d }e^{t\left(-\langle a^{(1)},\nabla \rangle +
0^{2-\kappa}/2 \langle \nabla, a^{(2)} \nabla
\rangle\right)}\varphi(x) \rho_0(x) dx.\end{aligned}$$ Hence the second summand in (\[eqsecsum\]) converges to zero. The first summand can be bounded via the second Ursell function as in the proof of Proposition \[PrAsyGibbs\]: $$\begin{aligned}
&& \int_\Theta \left|\langle e^{\varepsilon^{-\kappa}tA_\varepsilon}
\varepsilon^{d}\varphi(\varepsilon \cdot),\gamma\rangle
- \int_{\mathbb{R}^d} \left(e^{\varepsilon^{-\kappa}tA_\varepsilon}\varphi(\varepsilon \cdot) \right)(x/\varepsilon)
\rho_{\mu_\varepsilon}(x/\varepsilon) dx\right| \mu_\varepsilon(d\gamma)
\\&&\leq \left\|e^{\varepsilon^{-\kappa}tA_\varepsilon}
\varepsilon^{d}\varphi(\varepsilon \cdot) \right\|_u \left\|e^{\varepsilon^{-\kappa}tA_\varepsilon}
\varepsilon^{d}\varphi(\varepsilon \cdot) \right\|_{L^1} \left( \sup_{x \in \mathbb{R}^d,\varepsilon>0} \int_{\mathbb{R}^d} u^{(2)}_{\mu_{\varepsilon}}(x,y) dy + \|\rho_{\mu_\varepsilon} \|_u \right).\end{aligned}$$
Application to Gibbs measures \[SecGibbs\]
------------------------------------------
In this subsection we prove that the hypotheses for the results of the previous subsection are fulfilled for a concrete class of non-equilibrium measures, namely, Gibbs measures in the high temperature low activity regime. In order to recall the definition of a Gibbs measure, we first introduce a pair potential $V:\mathbb{R}^d\to\mathbb{R}\cup\{+\infty\}$, that is, a measurable function such that $V(-x)=V(x)\in\mathbb{R}$ for all $x\in\mathbb{R}^d\setminus\{0\}$. For $\gamma\in\Gamma$ and $x\in\mathbb{R}^d\setminus\gamma$ we define the relative energy of interaction between a particle located at $x$ and the configuration $\gamma$ by $$\begin{aligned}
E(x,\gamma ):=\left\{
\begin{array}{cl}
\displaystyle\sum_{y\in \gamma }V(x-y), & \mathrm{if\;}
\displaystyle\sum_{y\in \gamma }|V (x-y)|<\infty \\
& \\
+\infty , & \mathrm{otherwise}
\end{array}
\right. .\end{aligned}$$ A probability measure $\mu$ on $\Gamma$ is called a Gibbs measure corresponding to $V$, an intensity function $z\geq 0$, and an inverse temperature $\beta$ whenever it fulfills the Georgii-Nguyen-Zessin equation [@NZ79 Theorem 2] $$\int_\Gamma \mu(d\gamma)\,\sum_{x\in \gamma }H(x,\gamma )=
\int_\Gamma \mu(d\gamma)\int_{\mathbb{R}^d}dx\,z(x) H(x,\gamma \cup
\{x\}) e^{-\beta E(x,\gamma )}\label{1.3}$$ for all positive measurable functions $H:\mathbb{R}^d\times\Gamma\to\mathbb{R}$. This definition is equivalent to the definition via the DLR equation, see [@Ge76; @NZ79; @P81; @K00]. We observe that for $V\equiv 0$ (\[1.3\]) reduces to the Mecke identity, which yields an equivalent definition of the Poisson measure $\pi_z$ [@Me67 Theorem 3.1]. We also note that for either $V\equiv 0$ and $z$ not being a constant or $V\not=0$, a Gibbs measure is neither a reversible nor an invariant initial distribution for the free Kawasaki dynamics under consideration. In order to have thermodynamical behavior we assume that $V$ is stable, i.e., there exists a $B>0$ such that $\sum_{\{x,y\} \subset \eta} V(x-y) \geq -
B |\eta|$ for all configurations $\eta \in \Gamma_0$. Furthermore, we shall assume that the parameters $\beta,z$ are small (high temperature low activity regime), i.e., $$\|z\|_u e^{2\beta B+1} C(\beta) < 1,$$ where $C(\beta):=\int_{\mathbb{R}^d}dx\, |e^{-\beta V(x)}-1|$. These conditions are, in particular, sufficient to ensure the existence of Gibbs measures, cf. [@Ru69]. Moreover, the correlation functions corresponding to such measures exist and fulfill the Ruelle bound defined in Section \[Section2\], and thus, as noted there, they are supported on $\Theta$.
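For a concrete instance of the high temperature low activity condition, the following sketch (with assumed, illustrative parameter values) takes the nonnegative square-well potential $V(x)=V_0\mathbf{1}_{[-r,r]}(x)$ in $d=1$, which is stable with $B=0$, and evaluates $C(\beta)$ both numerically and in closed form:

```python
import math

# Illustrative parameters (assumed): square-well potential V = V0 on [-r, r] in
# d = 1, nonnegative hence stable with B = 0; check ||z||_u e^{2 beta B + 1} C(beta) < 1.
V0, r, beta, z_sup = 1.0, 0.5, 0.2, 0.3

def C_num(n=200000, K=2.0):
    # C(beta) = \int |exp(-beta V(x)) - 1| dx by the midpoint rule
    dx = 2 * K / n
    total = 0.0
    for i in range(n):
        x = -K + (i + 0.5) * dx
        V = V0 if abs(x) <= r else 0.0
        total += abs(math.exp(-beta * V) - 1.0) * dx
    return total

C_exact = 2 * r * (1 - math.exp(-beta * V0))
assert abs(C_num() - C_exact) < 1e-3
assert z_sup * math.e * C_exact < 1.0   # e^{2*beta*B+1} = e since B = 0
```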
In the high temperature low activity regime one has rather detailed information about the Ursell functions (factorial cumulants) $u_\mu:\Gamma_0\rightarrow \mathbb{R}$ corresponding to $\mu$, which are bounded measurable functions, i.e., for all $f \in
\mathcal{D}(\mathbb{R}^d)$ one has $$\label{DefUrs} \int_{\Gamma} e^{\langle f,\gamma \rangle}
\mu(d\gamma) = \exp \left( \int_{\Gamma_0} \prod_{y \in
\eta}(e^{f(y)}-1) u_\mu(\eta ) \lambda(d\eta) \right).$$ The function $x \mapsto u_\mu(\{x\})$ coincides with the first correlation function of $\mu$.
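For the Poisson measure $\pi_z$ all Ursell functions on configurations with at least two points vanish and $u_{\pi_z}(\{x\})=z(x)$, so (\[DefUrs\]) reduces, for the particle number $N$ in a window of intensity mass $\lambda$, to $\mathbf{E}[e^{fN}]=e^{\lambda(e^f-1)}$. A Monte Carlo sanity check of this special case, with illustrative values of $\lambda$ and $f$ (an assumption for the sketch only):

```python
import math
import random

# Monte Carlo sanity check (illustrative lambda, f) of the Poisson special case
# E[e^{f N}] = exp(lambda (e^f - 1)) of (DefUrs), N the particle number in a window.
random.seed(0)
lam, f, samples = 1.5, 0.3, 200000

def poisson(lam):
    # inversion sampling of a Poisson(lam) random variable
    u, n, p = random.random(), 0, math.exp(-lam)
    cdf = p
    while u > cdf:
        n += 1
        p *= lam / n
        cdf += p
    return n

mc = sum(math.exp(f * poisson(lam)) for _ in range(samples)) / samples
exact = math.exp(lam * (math.exp(f) - 1.0))
assert abs(mc - exact) < 0.02
```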
We present the results necessary for the following. For further details, see e.g. [@DuSoIa75], [@MaMi91] and see also [@Ku98].
The Ursell functions can be expressed in terms of a sum over all connected graphs weighted by the Mayer functions $$k(\xi) := \sum_{G \in \mathcal{G}_c(\xi)} \prod_{\{x,y\} \in G}
(e^{-\beta V(x-y)}-1)$$ where $\mathcal{G}_c(\xi)$ denotes the set of all connected graphs with vertex set $\xi$: $$u_\mu(\eta) := \int_{\Gamma_0}\lambda_z(d\xi) k(\eta \cup \xi)
\prod_{x \in \eta} z(x).$$ Actually, the following bound is the key result of the cluster expansion of Penrose-Ruelle type $$\label{boundTree} | k(\xi) | \leq e^{2\beta B |\xi|} \sum_{T \in
\mathcal{T}(\xi)} \prod_{\{x,y\} \in T} |e^{-\beta V(x-y)}-1|,$$ where $\mathcal{T}(\xi)$ denotes the set of all trees with vertex set $\xi.$ This leads to the following integrability bound $$\begin{aligned}
\label{boundUrsell1} \lefteqn{\int_{\mathbb{R}^{dn}}
|u_\mu(\{x,y_1,\ldots,y_n\})|
z(y_1) dy_1\ldots z(y_n)dy_n }\nonumber \\
&\leq& e^{(2\beta B+1) (n+1)}
\left(\|z\|_u C(\beta)\right)^{n} \sum_{m=0}^\infty
\frac{(n+m+1)!}{m!} \left(e^{2\beta B +1}\|z\|_u C(\beta) \right)^m.\end{aligned}$$ In particular the mixing condition (\[CondUrs\]) of Proposition \[PrAsyGibbs\] and Theorem \[ThHydGibbs\] holds.
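The series on the right hand side of (\[boundUrsell1\]) indeed converges whenever $c:=e^{2\beta B+1}\|z\|_u C(\beta)<1$, since the ratio of consecutive terms, $\frac{n+m+2}{m+1}c$, tends to $c$. A numerical sketch (illustration only), using that for $n=0$ the series sums to $(1-c)^{-2}$:

```python
import math

# Partial sums of S_n(c) = sum_m ((n+m+1)!/m!) c^m, the series in (boundUrsell1);
# the consecutive-term ratio (n+m+2)/(m+1) * c tends to c, so it converges for c < 1.
def series(n, c, M=2000):
    term = float(math.factorial(n + 1))   # m = 0 term is (n+1)!/0!
    total = term
    for m in range(M):
        term *= (n + m + 2) / (m + 1) * c
        total += term
    return total

# for n = 0 the series is sum_m (m+1) c^m = (1 - c)^{-2}
c = 0.5
assert abs(series(0, c) - 1.0 / (1.0 - c)**2) < 1e-9
```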
We show that Gibbs measures in the high temperature regime with a translation invariant potential fulfill the assumptions of Proposition \[PrAsyGibbs\].
Let $z\geq 0$ be a bounded measurable function whose Fourier transform is a bounded signed measure. Let $\mu$ be a Gibbs measure corresponding to a translation invariant potential $V$ as described above, an inverse temperature $\beta$ and an activity $z$ in the high temperature low activity regime. Then the first correlation function $\rho_\mu$ has a measure as its Fourier transform, and the arithmetic mean is given by $$\mathrm{mean}(\rho_\mu) =
\frac{1}{(2\pi)^{d/2}}\sum_{n=0}^\infty\frac{1}{n!}
\int_{\mathbb{R}^{dn}} \overline{\hat{k_r}(p_1,\ldots,p_n)}
\hat{z}(\{p_1+\ldots+p_n\})\hat{z}(dp_1)\cdot\ldots\cdot
\hat{z}(dp_n),$$ where $$k_r(y_1,\ldots,y_n):=\sum_{G \in {\tilde \mathcal{G}}_c}
\prod_{\{x_1,x_2\} \in G} (e^{-\beta V(x_1-x_2)}-1).$$ Here ${\tilde\mathcal{G}}_c$ denotes the set of all connected graphs with vertex set $(0,y_1,\ldots,y_n)$. As a consequence, all assumptions of Proposition \[PrAsyGibbs\] are fulfilled.
Due to the translation invariance of $V$ and the cluster expansion, one can rewrite the first correlation function as $$\rho_\mu(x) = \sum_{n=0}^\infty
\frac{1}{n!}\int_{\mathbb{R}^{dn}} \sum_{G \in
{\tilde\mathcal{G}}_c} \prod_{\{y_1,y_2\} \in G} (e^{-\beta
V(y_1-y_2)}-1) z(x_1-x)\cdot \ldots \cdot z(x_n-x) z(x) dx_1\cdot
\ldots \cdot dx_n,$$ where ${\tilde\mathcal{G}}_c$ denotes the set of all connected graphs with vertex set $(0,y_1,\ldots,y_n):=(0,x_1-x,\ldots,x_n-x)$. By (\[boundTree\]) and (\[boundUrsell1\]) the function $k_r$ is integrable, and thus $\hat{k}_r$ is a continuous function which decays to zero at infinity. Hence $$\begin{aligned}
k_r(x_1-x,\ldots,x_n-x) &=& (2\pi)^{-nd/2} \int_{\mathbb{R}^{dn}}
dp_1\ldots dp_n dp e^{ip_1 x_1}\cdot \ldots \cdot e^{ip_nx_n}e^{ipx}
\\
&&\cdot \hat{k}_r(p_1,\ldots,p_n) \delta(p-p_1-\ldots-p_n) .\end{aligned}$$ To compute the Fourier transform of $\rho_\mu$ in the weak sense it remains to compute the Fourier transform of $$\begin{aligned}
\lefteqn{\varphi(x) z(x_1-x)\cdot \ldots \cdot z(x_n-x) z(x)}\\
&=&(2\pi)^{-(n+2)d/2} \int_{\mathbb{R}^{dn}} \hat{z}(dp_1)
e^{ip_1x_1}\cdot\ldots\cdot \hat{z}(dp_n) e^{ip_nx_n}\ dp\ e^{ipx}\\
&&e^{-i(p_1+\ldots+p_n)x} \left(\hat{\varphi}\ast\hat{z}\right)(p),\end{aligned}$$ for any $\varphi\in \mathcal{D}(\mathbb{R}^d)$. Summarizing, we obtain that $$\begin{aligned}
\int_{\mathbb{R}^d} dx \varphi(x) \rho_\mu(x) &=&
\sum_{n=0}^\infty\frac{1}{n!} \int_{\mathbb{R}^{dn}}
\hat{z}(dp_1)\cdot\ldots\cdot \hat{z}(dp_n)\hat{z}(dp) \\ &&\cdot
\overline{\hat{k_r}(p_1,\ldots,p_n)}\hat{\varphi}(p_1+\ldots+p_n-p).\end{aligned}$$
As in Subsection \[SubSecHydro\], we consider initial measures with a slowly varying intensity, i.e., Gibbs measures corresponding to $\beta$, $V$, and $z(\varepsilon \cdot)$, which we denote by $\mu_{\varepsilon}$. As $\|z\|_u$ is unchanged, all scaled measures $\mu_\varepsilon$ remain in the high temperature low activity regime, and the bound (\[boundUrsell1\]) holds uniformly in $\varepsilon >0$. For Gibbs measures which are not Poisson measures, the first correlation function is no longer just the intensity. The function appearing as the initial value in the limiting partial differential equation is the scaling limit of the first correlation function, not just the unscaled activity. Let us describe this scaling limit. Denote by $\rho^{\mathrm{equi}}_c$ the correlation function corresponding to the Gibbs measure with constant activity $c$, inverse temperature $\beta$, and potential $V$. Due to the translation invariance of $V$, this correlation function is constant. Given a function $z\geq 0$, denote by $x \mapsto \rho^{\mathrm{equi}}_{z(x)}$ the function which associates to each $x$ the constant value of the first correlation function of the Gibbs measure with constant activity $z(x)$. This function is the scaling limit of the first correlation function of $\mu_\varepsilon$. Note that $\rho_\mu (x)=
u_\mu(\{x\})$.
\[Lem1Ursell\] Given a bounded measurable function $z\geq 0$, a potential $V$, and an inverse temperature $\beta$ fulfilling the conditions of the high temperature and low activity regime, the corresponding Gibbs measures fulfill all assumptions of Theorem \[ThHydGibbs\]. Moreover, $$\rho_0(x) := \lim_{\varepsilon \rightarrow 0^+}
u_{\mu_\varepsilon}(\{x/\varepsilon\}) = \rho^{\mathrm{equi}}_{z(x)}.$$
According to the cluster expansion of $u_\mu$ and the translation invariance of $V$ we obtain that $$u_{\mu_\varepsilon}(\{x/\varepsilon\}) = \sum_{n=1}^\infty
\frac{1}{n!}\int_{\mathbb{R}^{dn}} dy_l \sum_{G \in
\mathcal{G}_c(\{0,\ldots ,n\})} \prod_{\{i,j\} \in G} (e^{-\beta
V(y_{i}-y_{j})}-1) \prod_{l=1}^n z(\varepsilon y_l +x ) ,$$ where $y_0 := 0$. Due to (\[boundUrsell1\]) the above expression is uniformly integrable in $\varepsilon$ and in $x$, yielding the claimed result.
Estimates for hydrodynamic limits {#appest}
=================================
\[lemmHyd\] Let $\varphi\in\mathcal{S}(\mathbb{R}^d)$ and $0\leq
\varepsilon \leq 1$ be given. Then $$\begin{aligned}
\|e^{tA}(e^{\varepsilon^d \varphi(\varepsilon \cdot)}-1)\|_u
&\leq& \varepsilon^d \|\varphi\|_u e^{\|\varphi\|_u} \nonumber \\
\|e^{tA}(e^{\varepsilon^d \varphi(\varepsilon
\cdot)}-1)\|_{L^1(\mathbb{R}^d,dx)}
&\leq& \|\varphi \|_{L^1(\mathbb{R}^d,dx)} e^{\|\varphi \|_{L^1(\mathbb{R}^d,dx)}}\label{Natal9} \\
\|e^{tA}(e^{\varepsilon^d \varphi(\varepsilon \cdot)}-1 -
\varepsilon^d \varphi(\varepsilon \cdot))\|_{L^1(\mathbb{R}^d,dx)}
&\leq& \varepsilon^d \|\varphi \|^2_{L^1(\mathbb{R}^d,dx)}
e^{\|\varphi \|_{L^1(\mathbb{R}^d,dx)}}\nonumber\end{aligned}$$
On the one hand $(e^{tA})_{t\geq 0}$ is a contraction semigroup with respect to the supremum norm. So $$\|e^{tA}(e^{\varepsilon^d \varphi(\varepsilon \cdot)}-1)\|_u \leq
\|e^{\varepsilon^d \varphi(\varepsilon \cdot)}-1\|_u \leq
\sum_{n=1}^\infty \frac{\varepsilon^{nd}}{n!} \|\varphi\|_u^n.$$ On the other hand $(e^{tA})_{t\geq 0}$ is a contraction semigroup on $L^1(\mathbb{R}^d,dx)$. Therefore, the left-hand side of (\[Natal9\]) is bounded by $$\int_{\mathbb{R}^d}dx\, \frac{|e^{\varepsilon^d \varphi(x)}-1|}
{\varepsilon^d} \leq \|\varphi\|_{L^1(dx)} \sum_{n=1}^\infty
\frac{\varepsilon^{d(n-1)}}{n!} \|\varphi\|_{L^1(dx)}^{n-1}\leq
\|\varphi \|_{L^1(dx)} e^{\|\varphi \|_{L^1(dx)}}.$$ The last inequality follows by a similar computation.
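The proof rests on the elementary bounds $|e^a-1|\le |a|\,e^{|a|}$ and $|e^a-1-a|\le |a|^2 e^{|a|}$, both read off from the power series of the exponential. As a quick numerical sanity check (the sample points are illustrative only):

```python
import math

def check_exp_bounds(a):
    # |e^a - 1| <= |a| e^{|a|}: compare the power series term by term
    ok1 = abs(math.exp(a) - 1.0) <= abs(a) * math.exp(abs(a)) + 1e-12
    # |e^a - 1 - a| <= |a|^2 e^{|a|}: same argument, starting at n = 2
    ok2 = abs(math.exp(a) - 1.0 - a) <= a * a * math.exp(abs(a)) + 1e-12
    return ok1 and ok2

# sample both signs, small and moderately large arguments
assert all(check_exp_bounds(a) for a in (-3.0, -0.5, -1e-3, 1e-3, 0.5, 3.0))
```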
Norm estimates in $\mathcal{S}(\mathbb{R}^d)$ {#appB}
=============================================
Let us introduce the following two equivalent systems of norms for the locally convex topological vector space $\mathcal{S}(\mathbb{R}^d)$ for $A \in \mathbb{N}$ and $M\geq 0$: $$\begin{aligned}
\|f \|_{A,M,u} &:=& \sum_{\stackrel{ \scriptstyle
\alpha \in \mathbb{N}_0^d}{|\alpha|\leq A}}
\sup_{x \in \mathbb{R}^d} \left| D^\alpha f (x) \right|
\left(1 + |x|^2\right)^M \\
\|f \|_{A,M,2} &:=& \sum_{\stackrel{ \scriptstyle \alpha \in
\mathbb{N}_0^d}{|\alpha|\leq A}} \left(\int_{ \mathbb{R}^d} \left|
D^\alpha f (x) \right|^2 \left(1 + |x|^2\right)^M dx\right)^{1/2}
\label{defnorm}\end{aligned}$$
Let $f_1,f_2 \in C^\infty(\mathbb{R}^d;\mathbb{C})$ be two $C^\infty$-functions with non-positive real part such that for an $A \in \mathbb{N}$ and an $M\geq 0$ one has $\|f_i\|_{A,-M,u}< \infty$. Then there exists a constant $C$, depending on $A$ and on $\|f_i\|_{A,-M,u}$, such that $$\|e^{f_1} -e^{f_2}\|_{A,-(A+1)M,2}
\leq C \| f_1 -f_2 \|_{A,-M,2}.$$
By induction one finds $$\begin{aligned}
D^\alpha (e^f -1)(x) = \sum_{k=1}^{|\alpha|} \sum_{\stackrel{
\scriptstyle \alpha_1,\ldots,\alpha_k \in
\mathbb{N}^d}{\alpha_1+\ldots+\alpha_k = \alpha}} \prod_{j=1}^k
D^{\alpha_j}f(x)\, e^{f(x)}.\end{aligned}$$ Thus, using telescoping sums one can write the following difference as $$\begin{aligned}
D^\alpha (e^{f_1} -e^{f_2})(x) &=& \sum_{k=1}^{|\alpha|}
\sum_{\stackrel{ \scriptstyle \alpha_1,\ldots,\alpha_k \in
\mathbb{N}^d}{\alpha_1+\ldots+\alpha_k = \alpha}} \sum_{l=1}^k
\left( \prod_{j=1}^{l-1} D^{\alpha_j}f_1(x) D^{\alpha_l}f_1(x)
\prod_{j=l+1}^{k} D^{\alpha_j}f_2(x) \right.\\
&&\left. - \prod_{j=1}^{l-1} D^{\alpha_j}f_1(x) D^{\alpha_l}f_2(x)
\prod_{j=l+1}^{k} D^{\alpha_j}f_2(x) \right) e^{f_1(x)}\\
&& +\prod_{j=1}^{k} D^{\alpha_j}f_2(x) (e^{f_1(x)} -e^{f_2(x)}).\end{aligned}$$ Using the Cauchy–Schwarz inequality one may estimate this by $$\begin{aligned}
\lefteqn{\left(\int_{\mathbb{R}^d} \left| D^\alpha (e^{f_1} -e^{f_2})(x) \right|^2
\left(1 + |x|^2
\right)^{- (A+1)M} dx \right)^{1/2}}\\
&\leq& \sum_{k=1}^{|\alpha|} \sum_{l=1}^k \| f_1 \|_{A,-M,u}^{l-1}
\| f_2 \|_{A,-M,u}^{k-l-1}
\| f_1-f_2 \|_{A,-M,2} \|e^{f_1}\|_u\\
&& +\| f_2 \|_{A,-M,u}^{k} \|e^{f_1} -e^{f_2}\|_{A,-M,2} \\
&\leq & |\alpha|^2 \left( 1 +
\| f_1 \|_{A,-M,u} + \| f_2 \|_{A,-M,u}\right)^{|\alpha|}\\
&&\left( \| f_1 -f_2 \|_{A,-M,2} \|e^{f_1}\|_u + \|e^{f_1}
-e^{f_2}\|_{0,-M,2} \right).\end{aligned}$$ The non-positivity of the real part of $f_i$ and the properties of the exponential function yield $$\|e^{f_1} -e^{f_2}\|_{0,-M,2} \leq \|f_1 -f_2\|_{0,-M,2}.$$ Pulling the remaining sum over $\alpha$ out by the triangle inequality and using non-negativity, one obtains $$\begin{aligned}
\lefteqn{\| e^{f_1} -e^{f_2}\|_{A,-(A+1)M,2}} \\
&\leq& A^3 \left( 1 + \| f_1 \|_{A,-M,u} + \| f_2
\|_{A,-M,u}\right)^{A} \| f_1 -f_2 \|_{A,-M,2}.\end{aligned}$$
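The inequality $\|e^{f_1}-e^{f_2}\|_{0,-M,2}\le \|f_1-f_2\|_{0,-M,2}$ used in this proof follows from the pointwise bound $|e^{w_1}-e^{w_2}|\le |w_1-w_2|$ for complex numbers with non-positive real part, obtained by integrating $e^z$ along the segment from $w_2$ to $w_1$, on which the real part stays non-positive. A numerical sketch of the pointwise bound (the random sample points are illustrative only):

```python
import cmath
import random

def exp_diff_bound(a, b):
    # for Re(a) <= 0 and Re(b) <= 0 one has
    # e^a - e^b = (a - b) * \int_0^1 e^{b + t(a - b)} dt, and |e^z| <= 1
    # on the whole segment since its real part stays non-positive
    return abs(cmath.exp(a) - cmath.exp(b)) <= abs(a - b) + 1e-12

random.seed(0)
samples = [(complex(-random.uniform(0, 5), random.uniform(-5, 5)),
            complex(-random.uniform(0, 5), random.uniform(-5, 5)))
           for _ in range(1000)]
assert all(exp_diff_bound(a, b) for a, b in samples)
```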
\[Lemtest\] Let $f, f_1,f_2 \in C^\infty(\mathbb{R}^d;\mathbb{C})$ be such that $f_1,f_2$ have non-positive real part, for an $A \in \mathbb{N}$ and an $M\geq 0$ one has $\|f_i\|_{A,-M,u}< \infty$, and for an $M_0 \in
\mathbb{R}$ one has $\|f\|_{A,M_0,u}< \infty$. Then for any $0\leq M'\leq M_0-(A+1)M$ there exists a constant $C$ depending monotonically on $A$, $\|f\|_{A,M_0,u}$, and on $\| f_i\|_{A,-M,u}$ such that $$\|fe^{f_1} -f e^{f_2}\|_{A,M',2} \leq C \| f_1 -f_2 \|_{A,-M,2}.$$
Due to the triangle inequality and the product rule one obtains $$\begin{aligned}
\left|D^\alpha\!\left( f(e^{f_1} - e^{f_2}) \right)\right| \leq
\sum_{\stackrel{\scriptstyle \alpha_1,\alpha_2 \in \mathbb{N}^d_0}
{\alpha_1 +\alpha_2 = \alpha}} \left| D^{\alpha_1}\!f\
D^{\alpha_2}(e^{f_1} - e^{f_2}) \right|.\end{aligned}$$ Then, using the triangle inequality and the supremum norm, one finds $$\begin{aligned}
\lefteqn{\left(\int_{\mathbb{R}^d} \left|D^\alpha f(e^{f_1} - e^{f_2}) (x)
\right|^2
\left(1 + |x|^2 \right)^{M_0-(A+1)M} dx \right)^{1/2}}\\
&\leq& \sum_{\stackrel{\scriptstyle \alpha_1,\alpha_2 \in
\mathbb{N}^d_0} {\alpha_1 +\alpha_2 = \alpha}} \left( \int
\left|D^{\alpha_1}f(x) D^{\alpha_2}(e^{f_1} - e^{f_2})(x) \right|^2
\left(1 + |x|^2 \right)^{M_0-(A+1)M} \right)^{1/2}\\
&\leq& A \| f\|_{A,M_0,u} \|e^{f_1} - e^{f_2}\|_{A,-(A+1)M,2}.\end{aligned}$$
Acknowledgment {#acknowledgment .unnumbered}
--------------
We would like to thank N. Jacob, E. Lytvynov, M. R[ö]{}ckner, and W. Hoh for fruitful discussions, T. Hilberdink for mentioning the theory of slowly varying functions, and T. Funaki for making us aware of [@DSS82]. Yu. K., T. K., and M. J. O. thank CCM (University of Madeira) for the pleasant ongoing hospitality. Financial support from the DFG through the SFB 701 (Bielefeld University) and from the FCT through POCI-2010, FEDER, and PTDC/MAT/67965/2006 is gratefully acknowledged.
[BMPK04]{}
S. Albeverio, [Yu]{}. G. Kondratiev, and M. R[ö]{}ckner. Analysis and geometry on configuration spaces. , 154(2):444–500, 1998.
L. Bertini, N. Cancrini, and F. Cesi. The spectral gap for a [G]{}lauber-type dynamics in a continuous gas. , 38:91–108, 2002.
[Yu]{}. M. Berezansky and [Yu]{}. G. Kondratiev. . Naukova Dumka, Kiev, 1988. (in Russian). English translation, [K]{}luwer [A]{}cademic [P]{}ublishers, [D]{}ordrecht, 1995.
F. Baffioni, I. Merola, E. Presutti, and T. Kuna. A relativized [D]{}obrushin uniqueness condition and applications to [P]{}irogov-[S]{}inai models. Preprint 04-107, mp-arc, 2004.
N. N. Bogoliubov. . Gostekhisdat, Moskau, 1946. In Russian. English translation in J. de Boer and G. E. Uhlenbeck, editors, [*Studies in Statistical Mechanics*]{}, volume 1, pages 1-118, Amsterdam, 1962. North-Holland.
A. De Masi and E. Presutti. , volume 1501 of [*Lecture Notes in Mathematics*]{}. Springer-Verlag, Berlin, 1991.
R. L. Dobrushin. On [P]{}oisson’s law for distribution of particles in space. , pages 127–134, 1956. (In Russian).
J. L. Doob. . John Wiley & Sons, New York and London, 1953.
M. Duneau, B. Souillard, and D. Iagolnitzer. Decay of correlations for infinite-range interactions. , 16(8):1662–1666, 1975.
R. L. Dobrushin and Ra. Siegmund-Schultze. The hydrodynamic limit for systems of particles with independent evolution. , 105:199–224, 1982.
S. N. Ethier and T. G. Kurtz. . Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons, NewYork, Chichester, Brisbane, Toronto, and Singapore, 1986.
J. Fritz. Gradient dynamics of infinite point systems. , 15(2):478–514, 1987.
H. O. Georgii. Canonical and grand canonical [G]{}ibbs states for continuum systems. , 48:31–51, 1976.
E. Gl[ö]{}tzl. Time reversible and [G]{}ibbsian point processes. [I.]{} [M]{}arkovian spatial birth and death processes on a general phase space. , 102:217–222, 1981.
N. Jacob. , volume I. Fourier Analysis and Semigroups. Imperial College Press, London, 2001.
O. Kallenberg. . Academic Press, London, New York, second edition, 1976.
[Yu]{}. G. Kondratiev and T. Kuna. Harmonic analysis on configuration space [I]{}. [G]{}eneral theory. , 5(2):201–233, 2002.
[Yu]{}. G. Kondratiev, T. Kuna, and O. Kutoviy. On relations between a priori bounds for measures on configuration spaces. , 7(2):195–213, 2004.
[Yu]{}. G. Kondratiev, T. Kuna, and M. J. Oliveira. Analytic aspects of [P]{}oissonian white noise analysis. , 8(4):15–48, 2002.
[Yu]{}. G. Kondratiev, T. Kuna, and M. J. Oliveira. Holomorphic [B]{}ogoliubov functionals for interacting particle systems in continuum. , 238(2):375–404, 2006.
Yu. Kondratiev, O. Kutoviy, and E. Zhizhina. Non-equilibrium [G]{}lauber type dynamics in continuum. , 47(11):113501, 2006.
Yu. Kondratiev and E. Lytvynov. Glauber dynamics of continuous particle systems. , 41(4):685–702, 2005.
Yu. Kondratiev, E. Lytvynov, and M. R[ö]{}ckner. The heat semigroup on configuration spaces. , 39(1):1–48, 2003.
[Yu]{}. G. Kondratiev, E. Lytvynov, and M. R[ö]{}ckner. Equilibrium [K]{}awasaki dynamics of continuous particle systems. , 7(4):185–209, 2007.
[Yu]{}. G. Kondratiev, E. Lytvynov, and M. R[ö]{}ckner. Non-equilibrium stochastic dynamics in continuum: the free case. , 11(4(56)):701–721, 2009.
[Yu]{}. G. Kondratiev, J. L. Silva, L. Streit, and G. F. Us. Analysis on [P]{}oisson and [G]{}amma spaces. , 1(1):91–117, 1998.
T. Kuna. . PhD thesis, Bonner Mathematische Schriften Nr. 324, University of Bonn, 1999.
T. Kuna. Properties of marked [G]{}ibbs measures at high temperature regime. , 7:33–54, 2001.
T. Kuna. Bochner’s theorem for point processes via [B]{}ogoliubov functionals. In preparation, 2009.
A. Lenard. States of classical statistical mechanical systems of infinitely many particles [I]{}. , 59:219–239, 1975.
A. Lenard. States of classical statistical mechanical systems of infinitely many particles [II]{}. , 59:241–256, 1975.
J. Mecke. Stationäre zufällige [M]{}aße auf lokalkompakten [A]{}belschen [G]{}ruppen. , 9:36–58, 1967.
V. A. Malyshev and R. A. Minlos. . Kluwer Academic Publishers, Dordrecht, Boston, and London, 1991.
X. X. Nguyen and H. Zessin. Martin-[D]{}ynkin boundary of mixed [P]{}oisson processes. , 37:191–200, 1976.
X. X. Nguyen and H. Zessin. Integral and differential characterizations of the [G]{}ibbs process. , 88:105–115, 1979.
J. Puzicha. . Master thesis, [U]{}niversity of [B]{}ielefeld, 1981.
D. Ruelle. . Benjamin, New York and Amsterdam, 1969.
D. Ruelle. Superstable interactions in classical statistical mechanics. , 18:127–159, 1970.
[Ya]{}. G. Sinai, editor. , volume II of [*Encyclopaedia Math. Sci.*]{}, chapter III. Dynamical systems of Statistical Mechanics and Kinetic Equations. Springer-Verlag, Berlin Heidelberg, 1989.
H. Spohn. Equilibrium fluctuations for interacting [B]{}rownian particles. , 103:1–33, 1986.
---
address:
- 'Departemen Fisika, FMIPA, Universitas Indonesia, Depok 16424, Indonesia'
- 'Department of Applied Physics, Okayama University of Science, 1-1 Ridai-cho, Okayama 700, Japan'
- 'Center for Nuclear Studies, Department of Physics, The George Washington University, Washington, D.C. 20052, USA'
author:
- 'T. Mart'
- 'A. Salam and K. Miyagawa'
- 'C. Bennhold'
title: 'Photoproduction of $\Theta^+$ on the nucleon and deuteron'
---
Introduction
============
The observation of the pentaquark $\Theta^+$ baryon[@experiment] has triggered a great number of investigations of the production process of this unconventional particle. In general, these efforts can be divided into two categories, i.e., investigations using hadronic and electromagnetic processes. The electromagnetic (photoproduction) process, however, is well known to be “cleaner”. Furthermore, the photoproduction process provides an easier way to “see” the $\Theta^+$, which contains an antiquark, since all required constituents are already present in the initial state[@Karliner:2004gr]. Other processes, such as $e^+e^-$ and ${\bar p}p$ annihilations, would produce the strangeness-antistrangeness pair from gluons, with the consequence of a suppressed cross section[@Titov:2004wt].
Several $\Theta^+$ photoproduction studies have been performed using isobar models in the Born approximation, where the obtained cross sections span from several nanobarns to almost one $\mu$barn, depending on the $\Theta^+$ width, parity, hadronic form factor cut-off, and the particles exchanged in the process. Unfortunately, these parameters are still uncertain at present.
In this paper, we calculate the photoproduction cross sections by utilizing an isobar model. Since the production threshold is already high, we compare the results with those obtained from a Regge model. The comparison is also very important, since most input parameters of the isobar model are poorly known.
Formalism
=========
The basic background amplitudes for the processes $$\begin{aligned}
\gamma (k) + n (p) \to K^- (q) + \Theta^+(p') ~~\textrm{and}~~
\gamma (k) + p (p) \to {\bar K}^0 (q) + \Theta^+(p')\end{aligned}$$ are obtained from a series of tree-level Feynman diagrams shown in Fig.\[fig:feynman\]. They contain the $n$, $\Theta^+$, $K^-$, $K^{*-}$ and $K_1$ intermediate states in the first process, whereas in the second process the ${\bar K}^0$ exchange is absent, since a real photon cannot interact with a neutral meson. The $K^*$ and $K_1$ intermediate states are considered here, since previous studies of $K\Lambda$ and $K\Sigma$ photoproduction have shown that their roles are significant.
The transition matrix for both reactions can be decomposed into $$\begin{aligned}
M_{\mathrm fi} &=& {\bar u}({\mbox{\boldmath ${p}$}}')
\sum_{i=1}^{4} A_i~M_i ~u({\mbox{\boldmath ${p}$}}) ~,
\label{eq:mfi}\end{aligned}$$ where the gauge and Lorentz invariant matrices $M_i$ are given in, e.g., Ref.[@Lee:1999kd]. In terms of Mandelstam variables $s$, $u$, and $t$, the functions $A_i$ are given by $$\begin{aligned}
\label{eq:a1}
A_{1} & = & -\frac{e g_{\Theta}}{s - m_{N}^{2}} \left(Q_{N} +
\kappa_{N} \frac{m_{N} - m_{\Theta}}{2 m_{N}} \right) F_1(s)
- \frac{e g_{\Theta}}{u - m_{\Theta}^{2} + im_\Theta\Gamma_\Theta} \times
\nonumber\\&&
\left[ Q_{\Theta} + \kappa_{\Theta} \left(
\frac{m_{\Theta} - m_{N}}{2 m_{\Theta}} - i\,\frac{\Gamma_\Theta}{4m_\Theta}\right)
\right] F_2(u) \nonumber\\
&& - \frac{C_{K^*}G^TF_3(t)}{M(t-m_{K^*}^2+im_{K^*}\Gamma_{K^*})(m_\Theta + m_N)} ~, \\
A_{2} & = & \frac{2e g_{\Theta}}{t - m_{K}^{2}}
\left(\frac{Q_{N}}{s - m_{N}^{2}} + \frac{Q_{\Theta}}{u - m_{\Theta}^{2}} \right)
{\widetilde F}%(s,u,t)
+ \frac{C_{K^*}G^TF_3(t)}{M(t-m_{K^*}^2+im_{K^*}\Gamma_{K^*})} \nonumber\\&&
\times \frac{1}{(m_\Theta + m_N)}
%\nonumber\\&&
- \frac{C_{K_1}G^T_{K_1}F_3(t)}{M(t-m_{K_1}^2+im_{K_1}\Gamma_{K_1})(m_\Theta + m_p)}
,\label{eq:a2} \\
A_{3} & = & \frac{e g_{\Theta}}{s - m_{N}^{2}}~
\frac{\kappa_{N} F_1(s)}{2 m_{N}} - \frac{e g_{\Theta}}{u -
m_{\Theta}^{2}}~ \frac{\kappa_{\Theta} F_2(u)}{2 m_{\Theta}}
- \frac{C_{K^*}G^TF_3(t)}{M(t-m_{K^*}^2+im_{K^*}\Gamma_{K^*})}\nonumber\\&&
\times \frac{m_\Theta - m_N}{m_\Theta + m_N}
+ \frac{(m_\Theta +m_p)C_{K_1}G^V_{K_1}+
(m_\Theta -m_p)C_{K_1}G^T_{K_{1}}}{M(t-m_{K_1}^2+im_{K_1}\Gamma_{K_1})
(m_\Theta + m_p)}F_3(t)\label{eq:a3} \\
A_{4} & = & \frac{e g_{\Theta}\kappa_{N}}{s - m_{N}^{2}}
\frac{F_1(s)}{2 m_{N}} +
\frac{e g_{\Theta}\kappa_{\Theta}}{u - m_{\Theta}^{2}}~
\frac{F_2(u)}{2 m_{\Theta}} %\nonumber\\&&
+ \frac{C_{K^*}G^VF_3(t)}{M(t-m_{K^*}^2+im_{K^*}\Gamma_{K^*})} ,
\label{eq:a4}\end{aligned}$$ with $g_{\Theta}=g_{K \Theta N}$, $Q_\Theta =1$, and $Q_N=1\, (0)$ for the proton (neutron); $\kappa_N$ and $\kappa_\Theta$ denote the anomalous magnetic moments of the nucleon and the $\Theta^+$, and $M$ is taken to be 1 GeV in order to make the coupling constants $ G^{V,T} = g^{V,T}_{K^*\Theta N}\, g_{K^* K\gamma}$ dimensionless.
The inclusion of hadronic form factors at hadronic vertices is performed by utilizing the Haberzettl prescription[@Haberzettl:1998eq]. The form factors are taken as $$\begin{aligned}
\label{eq:form_factor}
F_i(q^2) &=& \frac{\Lambda^4}{\Lambda^4 + (q^2-m_i^2)^2} ~~~~~~
\textrm{with} ~~~~~ q^2 ~=~ s,u,t ~,\end{aligned}$$ with $\Lambda$ the corresponding cut-off. The form factor ${\widetilde F}(s,u,t)$ for the non-gauge-invariant terms in Eq.(\[eq:a2\]) is constructed separately in order to satisfy crossing symmetry and to avoid a pole in the amplitude[@Davidson:2001rk], i.e., $$\begin{aligned}
\label{eq:fhat}
\widetilde{F}(s,u,t) &=& F_1(s)+F_1(u)+F_3(t)-F_1(s)F_1(u) \nonumber\\&&
-F_1(s)F_3(t) - F_1(u)F_3(t)
+ F_1(s)F_1(u)F_3(t) .\end{aligned}$$ Since $\Theta^+$ is an isoscalar particle, the coupling constants relations read $$\begin{aligned}
\label{eq:cc1}
g_{K\Theta N} = g_{K^- \Theta^+ n} = g_{{\bar K}^0 \Theta^+ p} ~~,~~
g^{V,T}_{K^*\Theta N} = g^{V,T}_{K^{*-} \Theta^+ n} = g^{V,T}_{{\bar K}^{*0} \Theta^+ p} ~.\end{aligned}$$ The coupling constant $g_{K^- \Theta^+ n}$ can be calculated from the decay width of the $\Theta^+\to K^+ n$ by using $$\begin{aligned}
\label{eq:width}
\Gamma &=& \frac{g^2_{K^- \Theta^+ n}}{4\pi}\,\frac{E_n-m_n}{m_\Theta}\, p ~,\end{aligned}$$ with $ p = [\{m_\Theta^2-(m_K+m_n)^2\}\{m_\Theta^2-(m_n-m_K)^2\}]^{1/2}/(2m_\Theta)$. A precise measurement of the decay width is still lacking due to the experimental resolution. The reported width[@experiment] is in the range of 6–25 MeV. Using a partial wave analysis of $K^+N$ data, Arndt [*et al.*]{}[@arndt] found $\Gamma\le 1$ MeV, whereas the PDG[@pdg2004] quotes $\Gamma = 0.9\pm 0.3$ MeV. Based on this information we use a width of 1 MeV in our calculation. Explicitly, we use $$\begin{aligned}
\label{eq:cc_num}
{g_{K\Theta N}}/{\sqrt{4\pi}} &=& 0.39 ~.\end{aligned}$$ The magnetic moment of the $\Theta^+$ is also not well known. A recent chiral soliton calculation[@kim2003] yields a value of $\mu_\Theta = 0.82 ~ \mu_N$, from which we obtain $\kappa_\Theta=0.35$. Note that, in the second channel, the Regge model depends neither on this coupling constant nor on the $\Theta^+$ magnetic moment.
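For reference, the numerical value in Eq. (\[eq:cc\_num\]) can be reproduced directly from Eq. (\[eq:width\]). The sketch below assumes the representative masses $m_\Theta=1540$ MeV, $m_K=493.7$ MeV, and $m_n=939.6$ MeV; these inputs are illustrative choices, not values fixed by the text:

```python
import math

# representative masses in MeV (m_Theta = 1540 MeV is an assumed value)
m_Theta, m_K, m_n = 1540.0, 493.7, 939.6
Gamma = 1.0  # MeV, the width adopted in the text

# relative momentum of the K n pair in the Theta+ rest frame
p = math.sqrt((m_Theta**2 - (m_K + m_n)**2)
              * (m_Theta**2 - (m_n - m_K)**2)) / (2.0 * m_Theta)
E_n = math.sqrt(m_n**2 + p**2)

# invert Gamma = g^2/(4 pi) * (E_n - m_n)/m_Theta * p for g/sqrt(4 pi)
g_over_sqrt4pi = math.sqrt(Gamma * m_Theta / ((E_n - m_n) * p))
# g_over_sqrt4pi comes out close to 0.39, matching Eq. (cc_num)
```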
The coefficient $C_{K^*}$ in Eqs.(\[eq:a1\])-(\[eq:a4\]) is introduced since in ${\bar K}^0$ photoproduction the vector meson exchanged in the $t$-channel is the $K^{*0}$. The coefficient reads[@Mart:1995wu] $$\begin{aligned}
C_{K^*} &=& \left\{ \begin{array}{rcl}
1 &~~& {\rm for~~} K^-\Theta^+ \\
-1.53 &~~& {\rm for~~} {\bar K}^0\Theta^+
\end{array} \right.
~.\end{aligned}$$
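Incidentally, the crossing-symmetric combination in Eq. (\[eq:fhat\]) is algebraically identical to $1-[1-F_1(s)][1-F_1(u)][1-F_3(t)]$, so that $0<\widetilde F\le 1$ whenever each form factor lies in $(0,1]$. A short numerical check of this identity; the cut-off and squared masses below are illustrative placeholders, not the fitted values:

```python
import itertools

def F(q2, m2, Lam=0.8):
    # Eq. (form_factor) with an illustrative cut-off Lambda (GeV)
    return Lam**4 / (Lam**4 + (q2 - m2)**2)

def F_tilde(a, b, c):
    # Eq. (fhat), with a = F1(s), b = F1(u), c = F3(t)
    return a + b + c - a*b - a*c - b*c + a*b*c

# illustrative squared masses (GeV^2), roughly nucleon, Theta+, kaon sized
for s, u, t in itertools.product((1.0, 2.5, 4.0), repeat=3):
    a, b, c = F(s, 0.88), F(u, 2.37), F(t, 0.24)
    compact = 1.0 - (1.0 - a) * (1.0 - b) * (1.0 - c)
    assert abs(F_tilde(a, b, c) - compact) < 1e-12
    assert 0.0 < F_tilde(a, b, c) <= 1.0
```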
The coupling constants $g^{V}_{K^{*} \Theta N}$ and $g^{T}_{K^{*} \Theta N}$ are also not well known. Therefore, we follow Refs., i.e., we use $g^{V}_{K^{*} \Theta N}=1.32$ and neglect $g^{T}_{K^{*} \Theta N}$ due to the lack of information on this coupling. By combining the electromagnetic and hadronic coupling constants we obtain $$\begin{aligned}
\label{eq:GVK*}
{G^{V}_{K^*\Theta N}}/{4\pi} &=& 8.72\times 10^{-2} ~.\end{aligned}$$
Most previous calculations excluded the $K_1$ exchange, mainly due to the lack of information on the corresponding coupling constants. Reference used the vector dominance relation $g_{K_1K\gamma}=eg_{K_1K\rho}/f_\rho$ to determine the electromagnetic coupling $g_{K_1K\gamma}$, where $f^2_\rho/4\pi=2.9$ and $g_{K_1K\rho}=12$ is taken from the effective Lagrangian calculation of Ref.[@haglin94]. As in the case of the $K^*$, the $K_1$ hadronic tensor coupling will be neglected in this calculation for the same reason. Following Ref., the $K_1$ axial vector coupling $g^{V}_{K_1\Theta N}$ is estimated from an isobar model for $K^+\Lambda$ photoproduction[@wjc] by using the extracted ratio $G^{V}_{K^*\Lambda N}/ G^{V}_{K_1\Lambda N}=-8.26$. We note that the same ratio is also obtained in Ref.[@Mart:2000ed] for the model without the missing resonance $D_{13}(1895)$. Therefore, in our calculation we use $$\begin{aligned}
\label{eq:GVK1}
{G^{V}_{K_{1}\Theta N}}/{4\pi} &=& -7.64\times 10^{-3} ~.\end{aligned}$$ The constant $C_{K_1}$ in Eqs.(\[eq:a2\]) and (\[eq:a3\]) is extracted from fitting an isobar model to the $K^+\Sigma^0$ and $K^0\Sigma^+$ photoproduction data[@Mart:jv], i.e., $$\begin{aligned}
\label{eq:ck1}
C_{K_1} &=& \left\{ \begin{array}{rcl}
1 &~~& {\rm for~~} K^-\Theta^+ \\
-0.17 &~~& {\rm for~~} {\bar K}^0\Theta^+
\end{array} \right.
~.\end{aligned}$$
Regge Model
===========
In the Regge model one should use only the $K^-$ and $K^*$ ($K^*$ and $K_1$) diagrams in Fig.\[fig:feynman\] for the $\gamma n\to K^-\Theta^+$ ($\gamma p\to {\bar K}^0 \Theta^+$) channel. Hence, in the second channel the Regge-model result does not depend on the value of $g_{K\Theta N}$ or on the $\Theta^+$ magnetic moment. The procedure is adopted from Ref.[@guidal97], i.e., the Feynman propagator is replaced with the Regge propagator $$\begin{aligned}
\label{eq:regge}
P_{\rm Regge} &=& \frac{s^{\alpha_{K^i}(t)-1}}{\sin [\pi\alpha_{K^i}(t)]}
~ e^{-i\pi\alpha_{K^i}(t)} ~
\frac{\pi\alpha_{K^i}'}{\Gamma [\pi\alpha_{K^i}(t)]} ~,\end{aligned}$$ where $K^i$ refers to $K^*$ and $K_1$, and $\alpha_{K^i} (t) = \alpha_0 + \alpha '\, t$ denotes the corresponding trajectory[@guidal97].
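Equation (\[eq:regge\]) can be implemented literally as follows. The trajectory parameters $\alpha_0$ and $\alpha'$ below are illustrative placeholders, not the values of Ref. [@guidal97]; note also that the propagator diverges at the points where $\sin[\pi\alpha_{K^i}(t)]$ vanishes:

```python
import cmath
import math

def regge_propagator(s, t, alpha0, alpha_p):
    """Regge propagator of Eq. (regge), coded as written in the text,
    with the linear trajectory alpha(t) = alpha0 + alpha_p * t."""
    alpha = alpha0 + alpha_p * t
    return (s**(alpha - 1.0) / math.sin(math.pi * alpha)
            * cmath.exp(-1j * math.pi * alpha)       # rotating phase
            * math.pi * alpha_p / math.gamma(math.pi * alpha))

# illustrative kinematics: s in GeV^2, t < 0, placeholder trajectory
P = regge_propagator(s=5.0, t=-0.2, alpha0=0.3, alpha_p=0.8)
assert P.imag != 0.0  # the phase exp(-i pi alpha(t)) is complex
```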
Results and Discussion
======================
The differential cross sections obtained from the isobar model in both channels are shown in Fig.\[fig:d3dim\]. Both channels show a forward-peaked differential cross section, which is due to the strong contribution from the $K^*$ intermediate state. Previous studies that use only Born terms obtained a backward-peaked cross section for the $\gamma p\to {\bar K}^0 \Theta^+$ channel, since in that case no $t$-channel intermediate state is included. Figure \[fig:d3dim\] also demonstrates that the hadronic form factors are unable to suppress the cross sections at higher energies.
The strong contribution of the $K^*$ in both channels can be observed in Fig.\[fig:born\], where we can see that the inclusion of this state increases the total cross sections by more than one order of magnitude. In contrast to the $K^*$, the contribution from the $K_1$ axial vector meson is negligible. This fact can be traced back to the coupling constants given in Eqs.(\[eq:GVK1\]) and (\[eq:ck1\]).
Figure \[fig:parameter\] demonstrates the sensitivity of the total cross sections to the choice of the hadronic form factor cut-off. Clearly, the right choice of the cut-off is very important in this case. For this purpose, we also calculate the cross sections by using a Regge model. The results are shown in Fig.\[fig:regge\]. Obviously, the Regge approach predicts smaller cross sections than those obtained from the isobar model. In the case of $K\Lambda$ and $K\Sigma$ photoproduction, Ref.[@Mart:2003yb] showed that the Regge model works nicely at higher energies (up to $W=5$ GeV) but overpredicts the $K^+\Lambda$ (underpredicts the $K^+\Sigma^0$) data in the resonance region ($W\le 2$ GeV) by up to 50%. Thus, we would expect the same behavior for $\Theta^+$ photoproduction. By comparing with the result obtained from the isobar model, we can conclude that the isobar prediction could overestimate the realistic cross section, especially at higher energies, unless a softer hadronic form factor is chosen. This result can partly explain why the high energy experiments are unable to observe the $\Theta^+$.
Using the elementary operator of the isobar model we predict the inclusive total cross section for $\Theta^+$ photoproduction on the deuteron. The results for both possible channels are given in Fig.\[fig:deuteron\], where we show the inclusive total cross sections obtained by using the isobar model with $\Lambda=0.8$ GeV. The fact that the $K^-\Theta^+$ cross section is smaller than the $K^0\Theta^+$ one originates from the elementary process (see Fig.\[fig:born\]).
In conclusion, we have calculated the cross sections of $\Theta^+$ photoproduction by using an isobar model and a Regge model. The Regge model predicts smaller cross sections, especially at higher energies.
The work of TM has been partly supported by the QUE project.
[0]{} T. Nakano [*et al.*]{}, Phys. Rev. Lett. [**91**]{}, 012002 (2003); J. Barth [*et al.*]{}, Phys. Lett. B [**572**]{}, 127 (2003); S. Stepanyan [*et al.*]{}, Phys. Rev. Lett. [**91**]{}, 252001 (2003); V. Kubarovsky [*et al.*]{}, Phys. Rev. Lett. [**92**]{}, 032001 (2004); V.V. Barmin [*et al.*]{}, Phys. Atom Nucl. [**66**]{}, 1715 (2003); A. Airapetian [*et al.*]{}, Phys. Lett. B [**585**]{}, 213 (2004); A. Aleev [*et al.*]{}, [hep-ex/0401024]{}; S. Nussinov, [hep-ph/0307357]{} (2003). M. Karliner and H.J. Lipkin, Phys. Lett. B [**597**]{}, 309 (2004) A.I. Titov, A. Hosaka, S. Date and Y. Ohashi, [nucl-th/0408001]{}. B.G. Yu, T.K. Choi, and C.-R. Ji, [nucl-th/0312075]{} and references therein. R.A. Arndt, I.I. Strakovsky, and R.L. Workman, [nucl-th/0311030]{} (2003). Particle Data Group: S. Eidelman [*et al.*]{}, Phys. Lett. B [**592**]{}, 1 (2004). W. Liu and C.M. Ko, [nucl-th/0308034]{}. S.I. Nam, A. Hosaka, and H-Ch Kim, [hep-ph/0308313]{}. Hyun-Chul Kim, [hep-ph/0308242]{}. T. Mart, C. Bennhold and C.E. Hyde-Wright, Phys. Rev. C [**51**]{}, 1074 (1995). W. Liu, C.M. Ko, and V. Kubarovsky, Phys. Rev. C [**69**]{}, 025202 (2004). T. Mart and C. Bennhold, Phys. Rev. C [**61**]{}, 012201 (2000). T. Mart, Phys. Rev. C [**62**]{}, 038201 (2000). K. Haglin, Phys. Rev. C [**50**]{}, 1688 (1994). R.A. Williams, C.-R. Ji, and S.R. Cotanch, Phys. Rev. C [**46**]{}, 1617 (1992). M. Guidal, J.M. Laget, and M. Vanderhaeghen, Nucl. Phys. [**A627**]{}, 645 (1997). F.X. Lee, T. Mart, C. Bennhold and L.E. Wright, Nucl. Phys. [**A695**]{}, 237 (2001). H. Haberzettl, C. Bennhold, T. Mart, and T. Feuster, Phys. Rev. C [**58**]{}, R40 (1998). R.M. Davidson and R. Workman, Phys. Rev. C [**63**]{}, 025210 (2001). T. Mart and T. Wijaya, Acta Phys. Polon. B [**34**]{}, 2651 (2003).
---
abstract: 'Distributed model predictive control (MPC) has proven to be a successful method for regulating the operation of large-scale networks of constrained dynamical systems. This paper is concerned with cooperative distributed MPC, in which the decision actions of the systems are usually derived from the solution of a system-wide optimization problem. However, formulating and solving such large-scale optimization problems is often a hard task which requires extensive information communication among the individual systems and fails to address privacy concerns in the network. Hence, the main challenge is to design decision policies with a prescribed structure so that the resulting system-wide optimization problem admits a loosely coupled structure and is amenable to distributed computation algorithms. In this paper, we propose a decentralized problem synthesis scheme which only requires each system to communicate sets that bound its state evolution to neighboring systems. The proposed method alleviates privacy concerns, since this limited communication scheme does not reveal the exact characteristics of the dynamics within each system. In addition, it enables a distributed computation of the solution, making our method highly scalable. We demonstrate in a number of numerical studies, inspired by engineering and finance, the efficacy of the proposed approach, which leads to solutions that closely approximate those obtained by the centralized formulation at only a fraction of the computational effort.'
author:
- 'Georgios Darivianakis, Angelos Georghiou and John Lygeros'
bibliography:
- 'darivianos\_abrv.bib'
- 'Papers.bib'
title: Decentralized decision making for networks of uncertain systems
---
Introduction
============
Operation of large-scale networks of interacting dynamical systems remains an active field of research due to its high impact on real-world applications, e.g., the regulation of power networks [@Venkat2008] and the energy management of building districts [@Darivianakis2016]. For systems of this scale, the design and deployment of a centralized controller is often difficult due to computation and communication limitations, and also prohibitive in cases where the individual systems need to retain a certain degree of privacy. In such cases, it is desirable to design interacting local controllers which rely only on local computational resources and a distributed communication network with a prescribed structure.
The problem of synthesizing optimal distributed controllers based on an arbitrary communication structure typically amounts to an infinite-dimensional, non-convex optimization problem and is known to be NP-hard [@Tsitsiklis1985]. For that reason, several studies have been devoted to identifying communication structures under which the problem of designing optimal decentralized controllers can be simplified [@Lin2011; @Mahajan2012]. For instance, if the communication network admits a partially nested structure [@Ho1972], then affine controllers are known to be optimal for decentralized linear systems with quadratic costs and additive Gaussian noise [@Ho1972; @Rantzer2006; @Rantzer2006a]. Similar results exist for communication structures that are spatially invariant [@Fardad2009; @Bamieh2002; @Motee2008] or introduce delays in information sharing [@Lamperski2015; @Nayyar2011; @Nayyar2013]. Recent advances in convex optimization algorithms have shifted research interest toward identifying information structures that allow the optimal distributed controller synthesis problem to be formulated as a convex optimization problem [@Bamieh2005; @DeCastro2002; @Dvijotham2013; @Matni2013; @Qi2004]. These structures usually possess properties such as quadratic invariance [@Rotkowitz2006; @Swigart2014] and funnel causality [@Bamieh2005], which essentially eliminate the incentive for signaling among the decentralized controllers. For general network structures, the usual practice is to resort to linear matrix inequality relaxations [@Langbort2004; @Zecevic2010] or semidefinite programming relaxations [@Lavaei2012; @Fazelnia2017] to obtain a suboptimal design with performance guarantees.
A downside of the aforementioned approaches is the inability of the resulting static controllers to cope with state and input constraints on the systems. MPC is an optimization-based methodology that is well suited for regulating the operation of constrained systems [@Mayne2000]. In the MPC framework adopted here, distributed control schemes are usually categorized as cooperative or non-cooperative [@Scattolini2009]. Cooperative distributed MPC approaches require substantial communication infrastructure and computation resources since a system-wide MPC problem is formulated and solved [@Venkat2005; @Stewart2010; @Giselsson2014]. On the other hand, non-cooperative approaches, though computationally simple and effective in practice, can be conservative in the presence of strong coupling [@Richards2004; @Keviczky2006; @Trodden2010]. Both cooperative and non-cooperative schemes typically require a centralized offline design phase. This requirement can be restrictive, making the distributed schemes suffer from similar complexity and privacy concerns as the centralized MPC formulation. To alleviate these issues, it is desirable to develop decentralized schemes that rely on local computational resources and a local information structure. In the literature, this is commonly achieved by each system treating the worst-case effect of its neighbors as a bounded exogenous disturbance to its own dynamics [@Camponogara2002; @Dunbar2007; @Farina2012; @Lucia2015; @Trodden2017]. Nevertheless, this can be a conservative approach if the sets of bounded exogenous disturbances are calculated offline, thus disregarding the possibility of adapting their size based on the dynamical evolution of the system.
In this paper, we consider the problem of designing optimal decentralized cooperative MPC controllers for linear time-varying interconnected systems. We assume that these distributed systems are coupled through their dynamics and/or constraints, and are subject to possibly correlated exogenous disturbances with known and bounded support. Unless a nested information structure is imposed, as suggested in [@Lin2016], the aforementioned synthesis problem is computationally intractable. In this paper, however, we are interested in minimal information structures which only require communication among neighboring systems and hence do not necessarily admit any nestedness. In particular, each system is only required to transmit to its direct neighbors a set that bounds those of its predicted states which affect their constraints and/or dynamics. Due to this minimal communication structure, the proposed decentralized scheme allows the decoupling of the optimization problems of the individual systems in the network. In this setting, however, we abandon the search for optimal decentralized control policies and resort, instead, to an approximation of the original non-convex, infinite-dimensional problem. We relax the NP-hardness of the original formulation by restricting ourselves to communicated sets that result from the scaling and translation of a predefined convex conic set. The proposed method scales polynomially with respect to the number of agents and the prediction horizon length. The polynomial scalability is achieved by reformulating the original problem into a convex infinite-dimensional optimization problem, and then approximating it using decision rules [@bental2004ars]. The resulting problem retains its decoupled structure, making it amenable to distributed computation algorithms such as the alternating direction method of multipliers [@Boyd2011].
The proposed method partly alleviates privacy concerns by not revealing sensitive information about the operational characteristics of the individual agents. The importance of adaptive set communication as a method for decentralized cooperative MPC has also been investigated in recent works (e.g., [@Farina2012; @Lucia2015; @Trodden2017]). The key difference of our approach is that the size of the communicated sets is adapted online, as it is a decision variable in the resulting optimization problem. A proof-of-concept study for the problem of efficient energy management of building districts was presented in [@DarivianakisDec]. This paper considerably extends our preliminary work by providing a mathematically rigorous presentation of the proposed method and by investigating its merits and demerits through a number of illustrative examples inspired by engineering and finance.
The remainder of this paper is organized as follows. Section \[sec::ProbForm\] provides the problem formulation and briefly reviews available methods for the design of optimal decision policies with centralized and nested information structures. The main contributions are presented in Sections \[sec::DecCont\] and \[sec::SolMethod\], where the proposed approach is developed and the techniques associated with the derivation of a tractable approximation to the original non-convex infinite-dimensional problem are discussed. Section \[sec::Numerics\] provides numerical studies to demonstrate the efficacy and scalability of the proposed method. Section \[sec::Conclusion\] closes the paper with conclusions and possible directions for future work. Proofs of the propositions and theorems are found in the Appendix.
**Notation:** For given vectors $ v_{i} \in {\mathbb}R^{k_i} $ with $ k_i \in {\mathbb}N $, $ i \in {\mathcal}M = \{1,\ldots,m\} $, we define $ v_{{\scriptscriptstyle \mathcal M \scriptstyle}}= [v_{i}]_{i\in {\mathcal}M} = [v_{1}^\top \ldots v_{m}^\top]^\top \in {\mathbb}R^{k} $ with $ k = \sum_{i=1}^{m}k_i $ as their vector concatenation. Concatenated vectors are represented in boldface. Dimensions of matrices and concatenated vectors are assumed clear from the context. We denote by $ t $ the time instant of a horizon $ T \in {\mathbb}N $, and we define the sets $ {\mathcal}T = \{1,\ldots, T\} $ and $ {\mathcal}T_+ = {\mathcal}T \cup \{T+1\} $. Given time dependent vectors $ \nu_{i,t} \in {\mathbb}R^{\ell_i} $ with $ i \in {\mathcal}M $, $ t \in {\mathcal}T $ and $ \ell_i \in {\mathbb}N $, we define $ {\boldsymbol}\nu_{{{\scriptscriptstyle \mathcal M \scriptstyle}},t} = [\nu_{i,t}]_{i \in {\mathcal}M} $ as the concatenated vector at time $t$, $ {\boldsymbol}\nu_i^t = [\nu_{i,1}^\top \ldots \nu_{i,t}^\top]^\top $ as the history of the $ i $-th vector up to time $ t $, and $ {\boldsymbol}\nu_{{\scriptscriptstyle \mathcal M \scriptstyle}}^t = [{\boldsymbol}\nu_i^t]_{i \in {\mathcal}M} $ as the history of the concatenated vector up to time $ t $.
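As a quick illustration of this notation, the following sketch (with hypothetical dimensions $m = 2$ agents and horizon $T = 3$) builds the agent-wise concatenation $ {\boldsymbol}\nu_{{\scriptscriptstyle \mathcal M \scriptstyle},t} $, the history $ {\boldsymbol}\nu_i^t $, and the concatenated history $ {\boldsymbol}\nu_{{\scriptscriptstyle \mathcal M \scriptstyle}}^t $:

```python
import numpy as np

# Hypothetical dimensions for illustration: m = 2 agents, horizon T = 3.
m, T = 2, 3

# Time-dependent vectors nu_{i,t}; here each agent's vector is 2-dimensional.
nu = {(i, t): np.array([10 * i + t, 10 * i + t + 0.5])
      for i in range(1, m + 1) for t in range(1, T + 1)}

# Concatenation over agents at a fixed time t: nu_{M,t} = [nu_{i,t}]_{i in M}.
def concat_agents(t):
    return np.concatenate([nu[(i, t)] for i in range(1, m + 1)])

# History of agent i up to time t: nu_i^t = [nu_{i,1}^T ... nu_{i,t}^T]^T.
def history(i, t):
    return np.concatenate([nu[(i, s)] for s in range(1, t + 1)])

# History of the concatenated vector: nu_M^t = [nu_i^t]_{i in M}.
def history_all(t):
    return np.concatenate([history(i, t) for i in range(1, m + 1)])

print(concat_agents(2))   # agents stacked at t = 2
print(history(1, 3))      # agent 1's trajectory up to t = 3
print(history_all(2).shape)
```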
Problem formulation {#sec::ProbForm}
===================
We consider a physical network comprising $ M $ interconnected systems, henceforth referred to as agents. We assume that the agents are coupled among themselves through their dynamics. We describe these interactions through a directed graph in which an arc connecting agent $ j $ to agent $ i $, with $ i, j\in {\mathcal}M = \{1,\ldots,M\} $, indicates that the states of the $ j $-th agent affect the dynamics of the $ i $-th agent. We refer to agent $ j $ as a preceding neighbor of agent $ i $, henceforth simply a neighbor, and we define the set $ {\mathcal}N_i \subset {\mathcal}M $ to include all the neighbors of the $ i $-th agent. Fig. \[fig::physNet\] illustrates a system of $ M = 5 $ agents where the neighbors of agent 3 are given by $ {\mathcal}N_3 = \{2, 5\}$. In the sequel, we refer to the [physical network]{} depicted in Fig. \[fig::physNet\] as the “working example” and use it to streamline the presentation of key ideas in the paper.
System dynamics, constraints and objective function
---------------------------------------------------
In this paper, we study finite horizon problems with $T$ stages. We use linear dynamics to model the state evolution of agent $i$ at time instant $ t \in {\mathcal}T $ as $$\label{eq::stateDynamics}
x_{i,t+1} = A_{i,t} x_{i,t} + B_{i,t} {\boldsymbol}x_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},t} +D_{i,t} u_{i,t} + E_{i,t}\xi_{i,t}.$$ Here $ x_{i,t} \in {\mathbb}R^{n_{x,i}} $ denotes the states, with the initial state $ x_{i,1} $ known. The interaction of agent $i$ with its neighbors is captured through the term $ B_{i,t} {\boldsymbol}x_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},t}$. The vector $ u_{i,t} \in {\mathbb}R^{n_{u,i}} $ models the inputs, and $ \xi_{i,t} \in {\mathbb}R^{n_{\xi,i}} $ captures the exogenous disturbances affecting the system dynamics. The time-varying system matrices $ A_{i,t} $, $ B_{i,t} $, $D_{i,t} $ and $ E_{i,t} $ are assumed known, of proper dimensions and of full column rank. To economize on notation, we now rewrite the dynamics compactly as $$\label{eq::stateDynamicsCompact}
{\boldsymbol}x_i = f_i({\boldsymbol}x_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}, {\boldsymbol}u_i, {\boldsymbol}\xi_i) := A_i x_{i,1} + B_i {\boldsymbol}x_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} + D_i {\boldsymbol}u_i + E_i {\boldsymbol}\xi_i,$$ where $ {\boldsymbol}x_i = [x_{i,t}]_{t\in {\mathcal}T_+} $, $ {\boldsymbol}u_i = [u_{i,t}]_{t \in {\mathcal}T} $, $ {\boldsymbol}\xi_i = [\xi_{i,t}]_{t\in {\mathcal}T} $ and $ {\boldsymbol}x_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} = [{\boldsymbol}x_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},t}]_{t \in {\mathcal}T} $. The system matrices $ A_i $, $ B_i $, $ D_i $ and $ E_i $ used to define the function $ f_{i}(\cdot) $ are constructed directly from the stage-wise problem data (see, e.g., [@Goulart2006] for such a derivation). The $ i $-th agent is subject to linear operational constraints $$\label{eq::InequalitiesCompact}
{\mathcal}O_i = \big\{({\boldsymbol}x_{i}, {\boldsymbol}u_{i}) \,:\, H_{x,i}{\boldsymbol}{x}_i + H_{u,i} {\boldsymbol}u_i \leq h_i\big\},$$ where the matrices $ H_{x,i} $, $ H_{u,i} $ and the vector $ h_i $ are assumed known and of proper dimensions. Note that this compact constraint formulation allows the consideration of time-varying linear operational constraints with time-stage coupling. In addition, operational constraints involving neighboring states or exogenous disturbances can always be included by appropriately extending the state space of the $ i $-th system.
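The construction of the stacked matrices in the compact dynamics above can be sketched as follows. This is a minimal illustration in the spirit of [@Goulart2006], with hypothetical dimensions and with the neighbor coupling dropped ($B_{i,t} = 0$) to keep the lifting short; it is not the paper's exact construction.

```python
import numpy as np

# Horizon lifting: build A, D, E with x = A x1 + D u + E xi, where x stacks
# x_1, ..., x_{T+1}. Dimensions and stage matrices below are hypothetical.
n_x, n_u, n_xi, T = 2, 1, 1, 3
rng = np.random.default_rng(0)
A_t = [rng.standard_normal((n_x, n_x)) for _ in range(T)]
D_t = [rng.standard_normal((n_x, n_u)) for _ in range(T)]
E_t = [rng.standard_normal((n_x, n_xi)) for _ in range(T)]

def lift(A_t, D_t, E_t):
    T = len(A_t)
    # Phi(t, s): state-transition matrix from stage s to stage t (1-indexed).
    def Phi(t, s):
        M = np.eye(n_x)
        for k in range(s, t):   # multiplies A_{t-1} ... A_s
            M = A_t[k - 1] @ M
        return M
    A = np.vstack([Phi(t, 1) for t in range(1, T + 2)])
    D = np.zeros(((T + 1) * n_x, T * n_u))
    E = np.zeros(((T + 1) * n_x, T * n_xi))
    for t in range(1, T + 2):          # block row corresponding to x_t
        for s in range(1, t):          # inputs/disturbances entering before t
            D[(t-1)*n_x:t*n_x, (s-1)*n_u:s*n_u] = Phi(t, s + 1) @ D_t[s - 1]
            E[(t-1)*n_x:t*n_x, (s-1)*n_xi:s*n_xi] = Phi(t, s + 1) @ E_t[s - 1]
    return A, D, E

A, D, E = lift(A_t, D_t, E_t)

# Cross-check the lifted form against a direct simulation of the recursion.
x1 = np.array([1.0, -1.0])
u = rng.standard_normal(T * n_u)
xi = rng.standard_normal(T * n_xi)
x_lift = A @ x1 + D @ u + E @ xi

x = [x1]
for t in range(T):
    x.append(A_t[t] @ x[-1] + D_t[t] @ u[t*n_u:(t+1)*n_u]
             + E_t[t] @ xi[t*n_xi:(t+1)*n_xi])
print(np.allclose(x_lift, np.concatenate(x)))  # True
```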
The $ i $-th agent’s objective function is given as, $$\label{eq::objFnc}
J_i({\boldsymbol}x_i,{\boldsymbol}u_i) = \sum_{t=1}^T \left(\| Q_i x_{i,t} \|_p + \| R_i u_{i,t} \|_p\right),$$ where $ p \in \{\infty, 1, 2\} $ allows for different objective formulations. The penalization matrices $ Q_i $, $ R_i $ are assumed known and of proper dimensions.
In the following, we assume that the exogenous disturbances affecting agent $ i $ reside in the nonempty, convex and compact polyhedral uncertainty set $\Xi_i = \{{\boldsymbol}\xi_i : W{\boldsymbol}\xi_i \geq w\}$, where the matrix $W$ and the vector $w$ are known and of proper dimensions. We make the simplifying assumption that the joint uncertainty set of all agents in the system has a decoupled structure, i.e., $ {\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}\in \Xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}= \bigtimes_{i \in {\mathcal}M} \Xi_i $, which essentially precludes the existence of coupled disturbances amongst the agents. This assumption can be relaxed at the expense of further case distinctions in what follows.
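As a concrete instance of such an uncertainty set, the sketch below writes the box $[-1,1]^2$ in the $ W{\boldsymbol}\xi_i \geq w $ form and evaluates a worst case by vertex enumeration (valid since a convex function attains its maximum over a compact polytope at a vertex); the cost vector is hypothetical.

```python
import itertools
import numpy as np

# Hypothetical instance: Xi_i is the box [-1, 1]^2, written as {xi : W xi >= w}.
n = 2
W = np.vstack([np.eye(n), -np.eye(n)])   # encodes xi >= -1 and -xi >= -1
w = -np.ones(2 * n)

vertices = [np.array(v) for v in itertools.product([-1.0, 1.0], repeat=n)]
assert all(np.all(W @ v >= w - 1e-12) for v in vertices)

# The worst case of a convex cost over the compact polytope Xi_i is attained
# at a vertex, so enumeration certifies the max (tractable only for small n).
c = np.array([2.0, -3.0])
worst = max(c @ v for v in vertices)
print(worst)   # 5.0, attained at xi = (1, -1)
```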
Designing decision policies with centralized information exchange
-----------------------------------------------------------------
A common assumption in designing decision policies is that, at time $t$, each agent has access to the states of all the other agents in the network up to and including period $t$ [@HGK:2011b; @Goulart2006]. We will refer to this communication as the *centralized information exchange*, depicted in Fig. \[fig::InfStr\_CS\] for the working example. In this context, we denote the *causal state feedback* policies for agent $i$ at time $t\in{\mathcal}T$ as $ \pi_{i,t}:{\mathbb}R^{n_{x}^t} \rightarrow {\mathbb}R^{n_{u,i}} $, where $ n_{x}^t = t \left(\sum_{j\in {\mathcal}M} n_{x,j} \right)$, such that its input at time $ t $ is given as $ u_{i,t} = \pi_{i,t}({\boldsymbol}x_{{\scriptscriptstyle \mathcal M \scriptstyle}}^t) $. We write $ {\boldsymbol}\pi_i({\boldsymbol}x_{{\scriptscriptstyle \mathcal M \scriptstyle}}) = [\pi_{i,t}({\boldsymbol}x_{{\scriptscriptstyle \mathcal M \scriptstyle}}^t)]_{t\in{\mathcal}T} $ to denote the policy concatenation over the horizon, and we define as $ {\mathcal}C({\boldsymbol}x_{{\scriptscriptstyle \mathcal M \scriptstyle}}) $ the space of causal state feedback policies.
The optimization problem for designing robust policies with centralized information exchange which minimize the sum of worst-case individual objectives is formulated as $$\label{Centralized}
\begin{array}{l}
\text{\;\;minimize } \displaystyle\sum\limits_{i = 1}^M \max\limits_{{\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}\in \Xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}} J_i({\boldsymbol}x_i,{\boldsymbol}u_i) \\
\left.\begin{array}{@{}r@{\;}l@{}}
\text{subject to }& {\boldsymbol}\pi_i(\cdot) \in {\mathcal}C({\boldsymbol}x_{{\scriptscriptstyle \mathcal M \scriptstyle}}), \,
{\boldsymbol}u_i = {\boldsymbol}\pi_i({\boldsymbol}x_{{\scriptscriptstyle \mathcal M \scriptstyle}})\\
& {\boldsymbol}x_{i} = f_{i}\big({\boldsymbol}x_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}, {\boldsymbol}u_{i}, {\boldsymbol}\xi_{i}\big)\\
& ({\boldsymbol}x_{i}, {\boldsymbol}u_{i}) \in {\mathcal}O_i
\end{array}\right \rbrace \forall {\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}\in \Xi_{{\scriptscriptstyle \mathcal M \scriptstyle}},\; \forall i \in {\mathcal}M.
\end{array}
\tag{$\text{C}:{\mathcal}C({\boldsymbol}x_{{\scriptscriptstyle \mathcal M \scriptstyle}})$}$$ The optimization variables are $ {\boldsymbol}\pi_i(\cdot) $ for all $ i \in {\mathcal}M $. As shown in [@HGK:2011b; @Goulart2006], the state feedback structure of the decision variables induces a non-convex feasible region. To deal with this, these works propose the design of *strictly causal disturbance feedback policies* $ \Pi_{i,t}:{\mathbb}R^{n_{\xi}^t} \rightarrow {\mathbb}R^{n_{u,i}} $ where $ n_{\xi}^t = (t-1)\sum_{i\in {\mathcal}M} n_{\xi,i} $, such that the input at each time step is given by $ u_{i,t} = \Pi_{i,t}({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}^{t-1}) $. We write $ {\boldsymbol}\Pi_i({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}) = [\Pi_{i,t}({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}^{t-1})]_{t\in{\mathcal}T} $ to denote the policy concatenation over the horizon, and we define as $ {\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}) $ the space of strictly causal disturbance feedback policies. This formulation leads to the infinite dimensional linear optimization [Problem $(\text{C}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}))$]{}. Using the fact that the matrices $A_{i,t}$, $B_{i,t}$ and $E_{i,t}$ are of full column rank, they show that there is a one-to-one relationship between state and disturbance feedback policies in terms of feasibility and optimality. Restricting the admissible policies to have an affine structure reduces the problem to a finite-dimensional linear optimization problem which can be solved with off-the-shelf optimization solvers [@Goulart2006].
Furthermore, due to the one-to-one relationship between state and disturbance feedback policies, there is also a unique mapping that translates the resulting affine disturbance feedback policies into equivalent affine state feedback policies, which each agent can then implement locally using the centralized communication network.
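The strict causality requirement on the disturbance feedback policies can be sketched as follows: in the affine case $ {\boldsymbol}u = M {\boldsymbol}\xi + v $, block $(t,s)$ of $M$ must vanish for $ s \geq t $, so $ u_t $ reacts only to $ \xi_1, \ldots, \xi_{t-1} $. The dimensions and matrices below are hypothetical.

```python
import numpy as np

# Strictly causal affine disturbance feedback u = M xi + v: block (t, s) of M
# is zero for s >= t. Dimensions below are hypothetical.
T, n_u, n_xi = 3, 1, 2
rng = np.random.default_rng(1)

M = np.zeros((T * n_u, T * n_xi))
for t in range(1, T + 1):
    for s in range(1, t):            # strictly below the block diagonal
        M[(t-1)*n_u:t*n_u, (s-1)*n_xi:s*n_xi] = rng.standard_normal((n_u, n_xi))
v = rng.standard_normal(T * n_u)

# Causality check: perturbing the last-stage disturbance xi_T must leave every
# input unchanged, since no block of M multiplies xi_T.
xi = rng.standard_normal(T * n_xi)
u = M @ xi + v
xi_pert = xi.copy()
xi_pert[(T - 1) * n_xi:] += 10.0     # change only xi_T
u_pert = M @ xi_pert + v
print(np.allclose(u, u_pert))        # True: no input reacts to xi_T
```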
Although theoretically appealing, decision rules based on centralized information exchange are hard to design and implement in practice for large-scale systems. This is partially the case since, for large networks, $ (i) $ solving the linear optimization problem resulting from the affine approximation can be computationally challenging due to its monolithic structure, while $ (ii) $ the excessive communication and the centralized physical network required for each agent to evaluate its policy do not promote privacy, since the exact policy/constraints of individual agents are eventually revealed to the rest of the network. We will demonstrate the former through numerical experiments in Section \[sec::Numerics\].
Designing policies with partially nested information exchange
-------------------------------------------------------------
In an attempt to address the computational and privacy issues, researchers have proposed a number of policy designs that consider only partial communication among the agents. For an arbitrary information exchange network, the design phase typically results in a non-convex problem which is computationally intractable. A notable exception is the work of [@Lin2016], which assumes a *partially nested information exchange*, leading to convex formulations. Roughly speaking, this information exchange implies that agent $ i $ has access to information coming from all of its *precedent agents* in a non-anticipative manner. In this setting, agent $ j $ is called a *precedent of agent $ i $* if the input of system $ j $ at time $ t' $ can affect the local information available to agent $ i $ at some future time $ t > t' $ [@Lin2016 Definition 1]. The partially nested information exchange associated with the working example is depicted in Fig. \[fig::InfStr\_NS\].
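The precedent sets can be computed by simple reachability on the interconnection graph: agent $j$ precedes agent $i$ whenever there is a directed path of physical arcs from $j$ to $i$. The sketch below uses a hypothetical arc set for the working example (only $ {\mathcal}N_3 = \{2, 5\} $ is fixed by the text).

```python
# Computing the precedent sets by graph reachability. The arc set below is
# hypothetical: neighbors[i] lists the agents whose states enter agent i's
# dynamics (only N_3 = {2, 5} is fixed by the working example).
neighbors = {1: set(), 2: {1}, 3: {2, 5}, 4: {3}, 5: set()}

def precedent_set(i, neighbors):
    """All agents (including i) whose inputs can affect agent i's information."""
    closure, frontier = {i}, {i}
    while frontier:
        frontier = {j for k in frontier for j in neighbors[k]} - closure
        closure |= frontier
    return closure

for i in sorted(neighbors):
    print(i, sorted(precedent_set(i, neighbors)))
```

Note how agent 4, although it has a single physical neighbor, ends up with every other agent as a precedent, which is why partially nested communication can approach the centralized exchange.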
In a partially nested communication, the policy is designed as follows. We denote by $ {\overline{{\mathcal}N}}_i \subseteq {\mathcal}M $ the set that includes agent $ i $ and all its precedent agents. At time $t$, the $ i $-th agent measures its own states and the states of its precedent agents, denoted by $ {\boldsymbol}x_{{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}},t} = [x_{j,t}]_{j \in {\overline{{\mathcal}N}}_i} $. Using all measurements from stage 1 up to time $t$, it designs a *causal partial state feedback* policy $ \phi_{i,t}:{\mathbb}R^{{\overline{n}}_{x,i}^t} \rightarrow {\mathbb}R^{n_{u,i}} $, where $ {\overline{n}}_{x,i}^t = t \left(\sum_{j\in {\overline{{\mathcal}N}}_i} n_{x,j}\right) $. The input is now given as $ u_{i,t} = \phi_{i,t}({\boldsymbol}x_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}^t) $. We write $ {\boldsymbol}\phi_i({\boldsymbol}x_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}) = [\phi_{i,t}({\boldsymbol}x_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}^t)]_{t\in{\mathcal}T} $ to denote the policy concatenation over the time horizon, and we define as $ {\mathcal}C({\boldsymbol}x_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}) $ the space of causal state feedback policies associated with agent $i$.
The optimization problem to design robust policies with partially nested information exchange which minimize the sum of worst-case individual objectives is formulated as $$\label{Semi-Centralized}
\begin{array}{l}
\text{\;\;minimize } \displaystyle\sum\limits_{i = 1}^M \max\limits_{{\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}\in \Xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}} J_i({\boldsymbol}x_i,{\boldsymbol}u_i) \\
\left.\begin{array}{@{}r@{\;}l@{}}
\text{subject to }& {\boldsymbol}\phi_{i}(\cdot) \in {\mathcal}C({\boldsymbol}x_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}),\;{\boldsymbol}u_i = {\boldsymbol}\phi_{i}({\boldsymbol}x_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}})\\
& {\boldsymbol}x_{i} = f_{i}\big({\boldsymbol}x_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}, {\boldsymbol}u_{i}, {\boldsymbol}\xi_{i}\big)\\
& ({\boldsymbol}x_{i}, {\boldsymbol}u_{i}) \in {\mathcal}O_i
\end{array}\right \rbrace \forall {\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}\in \Xi_{{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}},\;
\forall i \in {\mathcal}M,
\end{array}
\tag{$\text{PN}:{\mathcal}C({\boldsymbol}x_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}})$}$$ where $ \Xi_{{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}} = \bigtimes_{j \in {\overline{{\mathcal}N}}_i} \Xi_j $. The decision variables are $ {\boldsymbol}\phi_{i}(\cdot) $ for all $ i \in {\mathcal}M $. [Problem $(\text{PN}:{\mathcal}C({\boldsymbol}x_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}is typically non-convex due to the state feedback structure of the policies. Similar to the centralized case, [@Lin2016] proposes the use of *strictly causal partially nested disturbance feedback policies* $ \Phi_{i,t}:{\mathbb}R^{{\overline{n}}_{\xi,i}^t} \rightarrow {\mathbb}R^{n_{u,i}} $ where $ {\overline{n}}_{\xi,i}^t = (t-1) \left( \sum_{j\in {\overline{{\mathcal}N}}_i} n_{\xi,j}\right) $, such that the input at each time step is given by $ u_{i,t} = \Phi_{i,t}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}^{t-1}) $. We write $ {\boldsymbol}\Phi_i({\boldsymbol}\xi_{{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}}) = [\Phi_{i,t}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}^{t-1})]_{t\in{\mathcal}T} $ to denote the policy concatenation, and we define as $ {\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}) $ the space of strictly causal disturbance feedback policies, leading to the infinite dimensional linear optimization [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}. If these disturbance feedback policies are restricted to admit an affine structure, then [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}becomes a finite-dimensional linear optimization problem. As in the centralized case, there is a one-to-one relationship between the state and disturbance feedback policies, both for the infinite dimensional problem and for its affine restriction.
This allows the agents in the network to evaluate their policies based on the established partially nested communication.
The following theorem establishes the connection between the centralized and partially nested information design problems, and will be used in the following section to relate the proposed approach to the centralized and partially nested policy structures.
\[thm::1\] [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}is a conservative approximation of [Problem $(\text{C}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}))$]{}in the following sense: every feasible solution of [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}is feasible in [Problem $(\text{C}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}))$]{}, and the optimal value of [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}is greater than or equal to the optimal value of [Problem $(\text{C}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}))$]{}.
Partially nested information exchange only slightly reduces the communication requirements compared to the centralized problem (see Figs. \[fig::InfStr\_CS\] and \[fig::InfStr\_NS\] of the working example). This has a positive impact on the solution time needed to design affine feedback policies; however, the resulting linear program largely inherits the monolithic structure of the centralized problem due to the absence of a sparse structure. Most importantly, even in the simple model of our working example, the partially nested communication requires three additional links beyond the physical ones. In some cases, the minimum number of communication links needed to ensure a partially nested information structure coincides with the centralized information exchange; see the examples presented in Section \[sec::Numerics\]. Therefore, policies synthesized with a partially nested information structure inherit, to a large extent, the drawbacks of the centralized problem, both from a computational and a privacy standpoint.
Designing decision policies with local information exchange {#sec::DecCont}
===========================================================
In this paper we propose a decentralized policy structure that relies on local information exchange among the agents. The proposed policy aims to address both the computational and privacy concerns discussed so far. While in the previous section the information flow had to be sufficiently rich to capture a partially nested structure, in this section we assume that the information flow can be as simple as the physical network, as illustrated in Fig. \[fig::InfStr\_DS\] for the working example.
In contrast to the previous section, where agent $ j\in{\mathcal}{N}_i $ communicates to agent $i$ explicitly the functional form of its states, in the proposed framework agent $ j$ communicates a compact set $ {\mathcal}X_j $, henceforth referred to as the *state forecast set*, that contains the possible evolutions of its states, i.e., ${\boldsymbol}x_j \in {\mathcal}X_j$. In this framework, the shape of $ {\mathcal}X_j $ is a decision quantity of agent $ j $. Upon receiving these state forecast sets from all of its neighbors, agent $ i $ treats the corresponding states as uncertain quantities that affect its dynamics in a similar way as exogenous disturbances. To emphasize this, we denote by $ \zeta_{j,t}\in\mathbb{R}^{n_{x,j}} $ the belief of agent $ i $ about the states of agent $j$ at time $t$, such that ${\boldsymbol}\zeta_j=[\zeta_{j,t}]_{t\in{\mathcal}{T}_+} \in {\mathcal}{X}_j$. In this context, the policy of agent $i$ at time $t$ is based on the information from its own states $x_{i,t}$ and from the belief states $ {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},t} $, from stage 1 up to $t$. This leads us to design *causal local state/disturbance feedback policies* $ \psi_{i,t}:{\mathbb}R^{\hat n_{x,i}^t} \rightarrow {\mathbb}R^{n_{u,i}} $ where $ \hat {n}_{x,i}^t = t\left(n_{x,i} + \sum_{j\in {\mathcal}N_i} n_{x,j}\right) $, such that the input of agent $i$ at time $ t $ is given by $ u_{i,t} = \psi_{i,t}({\boldsymbol}x_i^t, {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^t) $.
We denote by $ {\boldsymbol}{\psi}_{i}({\boldsymbol}x_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}) = [\psi_{i,t}({\boldsymbol}x_i^t, {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^t)]_{t \in {\mathcal}T} $ the policy concatenation over the time horizon, and by $ {\mathcal}C({\boldsymbol}x_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}) $ the corresponding space of causal local state/disturbance feedback policies associate with agent $i$.
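To make the role of the state forecast sets concrete, the sketch below parametrizes $ {\mathcal}X_j $ as the translation and scaling of a unit box (in line with the restriction announced in the introduction) and shows how agent $i$ can bound, in closed form, the worst-case influence of its belief $ {\boldsymbol}\zeta_j $; all numbers and matrices are hypothetical.

```python
import numpy as np

# A state forecast set communicated by neighbor j, parametrized by a
# translation c and an elementwise scaling r of a predefined unit box:
# X_j = {c + r * z : z in [-1, 1]^n}. All numbers below are hypothetical.
n = 2
c = np.array([1.0, -0.5])     # forecast centre (translation)
r = np.array([0.2, 0.4])      # forecast radii (scaling), chosen by agent j

B = np.array([[0.5, -1.0]])   # how agent j's states enter agent i's dynamics

# For a row vector b, max over zeta in X_j of b @ zeta = b @ c + |b| @ r,
# so agent i can bound its neighbor's worst-case influence without knowing
# agent j's policy or constraints.
worst = B @ c + np.abs(B) @ r
print(worst)   # [1.5]
```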
In this decentralized setting, the robust optimization problem to design policies with local information exchange which minimize the sum of worst-case individual objectives is formulated as $$\label{Decentralized}
\begin{array}{l}
\text{\;\;minimize } \displaystyle\sum\limits_{i = 1}^M \max\limits_{{\boldsymbol}\xi_i \in \Xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} \in {\mathcal}X_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}} J_i({\boldsymbol}x_i,{\boldsymbol}u_i) \\
\left.\begin{array}{@{}r@{\;}l@{}}
\text{subject to }& {\boldsymbol}{\psi}_{i}(\cdot) \in {\mathcal}C({\boldsymbol}x_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}),\, {\boldsymbol}u_i = {\boldsymbol}{\psi}_{i}({\boldsymbol}x_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}})\\
& {\boldsymbol}x_{i} = f_{i}\big({\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}, {\boldsymbol}u_{i}, {\boldsymbol}\xi_{i}\big)\\
& ({\boldsymbol}x_{i}, {\boldsymbol}u_{i}) \in {\mathcal}O_i \\
& {\boldsymbol}x_i \in {\mathcal}X_i, \; {\mathcal}X_i \in {\mathcal}{F}({\mathbb}R^{N_{x,i}})
\end{array}\right \rbrace \begin{array}{@{}l}
\forall {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} \in {\mathcal}X_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}\\
\forall {\boldsymbol}\xi_i \in \Xi_i
\end{array}\;\forall i \in {\mathcal}M,
\end{array}
\tag{$\text{L}:{\mathcal}C({\boldsymbol}x_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}{F}({\mathbb}R^{N_{x,i}})$}$$ where $ {\mathcal}X_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} = \bigtimes_{j \in {\mathcal}N_i} {\mathcal}X_j $, and ${\mathcal}F({\mathbb}R^{N_{x,i}}) $, where $ N_{x,i} = (T+1)n_{x,i} $, denotes the field of sets generated by the compact subsets of $ {\mathbb}R^{N_{x,i}} $. The optimization variables are $ {\boldsymbol}{\psi}_{i}(\cdot) $ and $ {\mathcal}X_i $ for all $ i \in {\mathcal}M $. Since agent $i$ treats the beliefs ${\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}$ as exogenous disturbances, its decisions are taken in view of the worst case both with respect to ${\boldsymbol}\xi_i \in \Xi_i$ and ${\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} \in {\mathcal}X_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}$. This is also reflected in the construction of the objective function. Notice that agent $i$ is not directly affected by the disturbances ${\boldsymbol}\xi_j \in \Xi_j$ of its neighbors. Rather, the effect of ${\boldsymbol}\xi_j \in \Xi_j$ on agent $j\in{\mathcal}N_i$ is translated into the set ${\mathcal}X_j$, which in turn affects agent $i$.
The local information problem can be interpreted as a method for finding a compromise between the agents, as represented through the sets ${\mathcal}X_i$. If we focus attention on two agents, say agents $i$ and $j$ with $j\in {\mathcal}{N}_i$, then on the one hand agent $j$ benefits the most if the set ${\mathcal}X_j$ is as large as possible. In that case, the set ${\mathcal}X_j$ imposes no additional constraints on its states, so agent $j$ individually achieves the lowest contribution to the objective value. On the other hand, agent $i$ benefits the most if it receives from agent $j$ the smallest possible set ${\mathcal}X_j$ (preferably a singleton), which reduces the uncertainty in its dynamics and helps it achieve the lowest contribution to the objective value. Since the objective is to minimize the equally weighted sum of individual worst-case costs, the resulting policy/set pair strikes a trade-off among the agents, achieving the lowest network-wide objective value while ensuring robust feasibility with respect to ${\boldsymbol}{\xi}_{{\scriptscriptstyle \mathcal M \scriptstyle}}\in \Xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}$.
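This trade-off can be made concrete with a deliberately tiny, hypothetical one-stage example: agent $j$ has $ x_j = u_j + \xi_j $ with $ \xi_j \in [-1,1] $, promises $ x_j \in [-r, r] $, uses the feedback $ u_j = -a\,\xi_j $ at quadratic effort cost $ a^2 $, while agent $i$ pays the worst-case magnitude $ r $ of its belief. Neither extreme of $ r $ minimizes the sum.

```python
import numpy as np

# Hypothetical one-stage illustration of the trade-off behind the set sizes.
def cost_j(r):
    a = max(0.0, 1.0 - r)        # smallest gain with |(1 - a) xi_j| <= r
    return a ** 2                # quadratic control effort paid by agent j

def cost_i(r):
    return r                     # worst case of |zeta_j| over X_j = [-r, r]

radii = np.linspace(0.0, 1.5, 301)
total = [cost_j(r) + cost_i(r) for r in radii]
r_star = radii[int(np.argmin(total))]
print(r_star)   # ≈ 0.5: neither the smallest nor the largest forecast set wins
```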
The proposed formulation addresses the privacy concerns in the following ways. First, the local communication network ensures that the information exchange among the agents is kept to a minimum. Second, focusing again on agents $i$ and $j$ with $j\in {\mathcal}{N}_i$, agent $j$ does not directly reveal the functional form of its states to agent $i$, which is the case in both the centralized and the partially nested information exchange. Rather, the future state trajectories are “masked" by the set ${\mathcal}X_j$, thus reducing exposure to agent $i$, who might want to leverage knowledge gained about the constraints and dynamics of agent $j$. If, additionally, the set ${\mathcal}X_j$ has a simple structure, e.g., ${\mathcal}X_j$ is rectangular, agent $j$ reveals the bare minimum needed to achieve a compromise in terms of the objective. This will be studied further in the next section. Notice that the problem has a highly decoupled structure, with only the sets ${\mathcal}X_j$ linking the agents in the system.
The state feedback policies in ${\mathcal}C({\boldsymbol}x_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}})$ induce a non-convex optimization problem. As in [@HGK:2011b; @Lin2016], we now focus on pure disturbance feedback policies $ \Psi_{i,t}:{\mathbb}R^{\hat n_{\xi,i}^t} \rightarrow {\mathbb}R^{n_{u,i}} $ where $ \hat n_{\xi,i}^t = (t-1)n_{\xi,i} + t\sum_{j\in {\mathcal}N_i} n_{x,j} $, such that the input at time $t$ is $ u_{i,t} = \Psi_{i,t}({\boldsymbol}\xi_i^{t-1},{\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^{t}) $. We write $ {\boldsymbol}\Psi_i({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}) = [\Psi_{i,t}({\boldsymbol}\xi_i^{t-1},{\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^{t})]_{t\in{\mathcal}T} $ to denote the policy concatenation over time, and we define as $ {\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}) $ the space of strictly causal disturbance feedback policies. Note that “strictly causal" refers to the uncertain vector ${\boldsymbol}\xi_{i}^{t-1}$, on which the policy is allowed to depend only up to stage $t-1$, while the policy is “causal" in the state beliefs $ {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^{t} $, on which it is allowed to depend up to stage $t$. The following theorem establishes the equivalence between the proposed state/disturbance and disturbance feedback policies.
\[thm::2\] [Problem ]{}is equivalent to [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{}in the following sense: Given a feasible state/disturbance feedback policy $ {\boldsymbol}\psi_{i}(\cdot) $ for [Problem ]{}, a feasible disturbance feedback policy $ {\boldsymbol}\Psi_{i}(\cdot) $ for [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{}can be constructed that achieves the same objective value, and vice versa.
The key difference between the partially nested information [Problem ]{}and the local information [Problem ]{}is that the synthesis phase of the latter requires each agent to communicate only with its direct neighbors rather than with all its precedent agents in the network (compare Fig. \[fig::InfStr\_NS\] to Fig. \[fig::InfStr\_DS\] for the working example). This minimal communication exchange is sufficient to address [Problem ]{}since the coupling among agents in the network only appears through the sets $ {\mathcal}X_i $. This, however, introduces a level of conservativeness, which is formalized in the following theorem.
\[thm::3\] [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{}is a conservative approximation of [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}in the following sense: every feasible solution of [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{}is feasible in [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}, and the optimal value of [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{}is equal to or larger than the optimal value of [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}.
The following corollary summarizes the relation, in terms of optimal value and cost, between the centralized [Problem $(\text{C}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}))$]{}and local information exchange [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{}, which is an immediate implication of Theorems \[thm::1\] and \[thm::3\].
\[corr::1\] [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{}is a conservative approximation of [Problem $(\text{C}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}))$]{}in the following sense: every feasible solution of [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{}is feasible in [Problem $(\text{C}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}))$]{}, and the optimal value of [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{}is equal to or larger than the optimal value of [Problem $(\text{C}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}))$]{}.
[Problem ]{}is computationally intractable because $ (i) $ the optimization of the policies is performed over the infinite-dimensional space of causal functions; $ (ii) $ the optimization of the state forecast sets $ {\mathcal}X_i $ is performed over arbitrary sets; and $ (iii) $ the constraints must be satisfied robustly for every realization of the uncertainty.
Solution method {#sec::SolMethod}
===============
In this section, we discuss appropriate restrictions to [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{}that allow us to obtain a computationally tractable approximation. We begin by restricting each state forecast set $ {\mathcal}X_i $, with $ i \in {\mathcal}M $, to admit a convex conic structure. We denote by $ {\mathcal}F_{{{\scriptscriptstyle {\mathcal}{CC} \scriptstyle}}}({\mathbb}R^{N_{x,i}}) $ the field of sets generated by all the convex conic compact subsets of the power set of $ {\mathbb}R^{N_{x,i}} $. This design choice is motivated by the fact that every convex set admits a conic representation [@Rockafellar2015 p. 15]. [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F_{{{\scriptscriptstyle {\mathcal}{CC} \scriptstyle}}}({\mathbb}R^{N_{x,i}}))$]{}still remains computationally intractable. As shown in Theorem , this is indeed the case even when its policies admit an affine structure (e.g., see [@DanielPrimalDual; @bental2004ars]), i.e., $ u_{i,t} = \Psi_{i,t}({\boldsymbol}\xi_i^{t-1},{\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^{t}) $ with $$\Psi_{i,t}({\boldsymbol}\xi_i^{t-1},{\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^{t}) = v_{i,t} + V_{i,t} {\boldsymbol}\xi_i^{t-1} + V_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},t} {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^{t} ,$$ where $ v_{i,t} $, $ V_{i,t} $ and $ V_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},t} $ are the decision variable matrices, with appropriate dimensions, that define the affine policy. 
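As a minimal numerical sketch of such an affine policy (all dimensions and coefficient values below are hypothetical, chosen only for illustration): the strictly causal block acts on the stacked past disturbances ${\boldsymbol}\xi_i^{t-1}$, while the causal block acts on the current stacked neighbor beliefs ${\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^{t}$.

```python
import numpy as np

def affine_policy(v, V_xi, V_zeta, xi_hist, zeta_hist):
    """Evaluate u_{i,t} = v_{i,t} + V_{i,t} xi^{t-1} + V_{N_i,t} zeta^t."""
    return v + V_xi @ xi_hist + V_zeta @ zeta_hist

# Toy dimensions: n_u = 1, past disturbances stacked into R^4, beliefs into R^2.
v = np.array([0.5])
V_xi = np.array([[0.1, 0.0, -0.2, 0.3]])   # strictly causal gain on xi^{t-1}
V_zeta = np.array([[1.0, -1.0]])           # causal gain on zeta^t
xi_hist = np.zeros(4)                      # no past disturbance observed yet
zeta_hist = np.array([2.0, 1.0])           # current neighbor state beliefs

u = affine_policy(v, V_xi, V_zeta, xi_hist, zeta_hist)  # -> array([1.5])
```

The same evaluation applies at every stage $t$ with the appropriately grown stacked vectors.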
We denote by ${\mathcal}{SC}_{a}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{\mathcal}N_i})$ the finite dimensional space of affine policies, and refer to [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{}restricted to affine decision policies and convex conic state forecast sets as [Problem $(\text{L}:{\mathcal}{SC}_a({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F_{{{\scriptscriptstyle {\mathcal}{CC} \scriptstyle}}}({\mathbb}R^{N_{x,i}}))$]{}.
\[thm::4\] [Problem $(\text{L}:{\mathcal}{SC}_a({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F_{{{\scriptscriptstyle {\mathcal}{CC} \scriptstyle}}}({\mathbb}R^{N_{x,i}}))$]{}is NP-hard.
To gain tractability, we further restrict $ {\mathcal}X_i$ to sets that can be represented through an affine transformation of a fixed set, in a similar spirit to [@Jaillet2016; @Zhang2017; @Bitlislioglu2017]. Considering agent $i$, for a given $K_i\in\mathbb{Z}_+$ we define matrices $P_{i,k}\in\mathbb{R}^{N_{x,i} \times N_{x,i}} $ to be orthogonal projections of the state ${\boldsymbol}{x}_i$ such that $${\boldsymbol}{x}_i = \sum_{k=1}^{K_i} P_{i,k}{\boldsymbol}{x}_{i}.$$ For a given convex conic compact set $ {\mathcal}S_i $, $ {\mathcal}X_i$ is restricted to $$\label{eq::SFSv1}
\begin{array}{l}
{\mathcal}X_i(y_i,z_i) = \displaystyle\left\{{\boldsymbol}{x}_i \in\mathbb{R}^{N_{x,i}} \,:\, \exists {\boldsymbol}s_{i} \in {\mathcal}S_i \text{ s.t. } {\boldsymbol}x_{i} = \sum_{k=1}^{K_i} y_{i,k} P_{i,k} {\boldsymbol}s_{i} + z_{i} \right\}
\end{array}$$ where the decision variables that define the shape of the state forecast set are vectors $y_i = [y_{i,1},\ldots,y_{i,K_i}]^\top\in\mathbb{R}^{K_i}_+$ and $ z_{i}\in\mathbb{R}^{N_{x,i}}$. Set ${\mathcal}S_i$ is expressed using given matrices $ G_{i,k} \in \mathbb{R}^{\ell_{k,i} \times N_{x,i}} $, vectors $ g_{i,k} \in \mathbb{R}^{\ell_{k,i} }$ and convex cones $ {\mathcal}K_{i,k} $, as follows $$\begin{array}{l}
{\mathcal}S_i = \Big\{{\boldsymbol}{s}_{i} \in {\mathbb}R^{N_{x,i}}\;:\; G_{i,k} P_{i,k} {\boldsymbol}s_{i} \preceq_{{\mathcal}K_{i,k}} g_{i,k},\; k=1,\ldots,K_i \Big\}.
\end{array}$$ The positive scalar decision $y_{i,k}$ allows the $k$-th constraint $G_{i,k} P_{i,k} {\boldsymbol}s_{i} \preceq_{{\mathcal}K_{i,k}} g_{i,k}$ to be scaled, thus controlling the shape of the state forecast set in the direction defined by the states in the projection $P_{i,k}{\boldsymbol}{x}_i$, while the vector $z_i$ is responsible for the translation of the set. Henceforth, we denote by $ {\mathcal}F_{{{\scriptscriptstyle {\mathcal}{AT} \scriptstyle}}}({\mathcal}S_i) $ the field of bounded convex conic sets, $ {\widehat{{\mathcal}X}}_i(y_i, z_i) $, that can be represented through an affine transformation of a fixed set $ {\mathcal}S_i $.
Consider the state of agent $i$ such that ${\boldsymbol}x_{i}\in{{\mathbb R}}^2$. If we want to enclose states $ x_{i,1}$ and $ x_{i,2}$ within a hyper-rectangle, we first set $P_{i,1} = \left(\begin{smallmatrix}1&0\\0&0\end{smallmatrix}\right)$ and $P_{i,2} = \left(\begin{smallmatrix}0&0\\0&1\end{smallmatrix}\right)$, and ${\mathcal}S_i$ is given by $${\mathcal}S_i = \Big\{{\boldsymbol}{s}_{i} \in {\mathbb}R^{2}\;:\; \|{s}_{i,1}\|_\infty \leq 1 ,\,\|s_{i,2}\|_\infty \leq 1 \Big\}$$ implying that $ G_{i,1} = \left(\begin{smallmatrix}0&0\\-1&0\end{smallmatrix}\right)$, $ g_{i,1} = \left(\begin{smallmatrix}1\\0\end{smallmatrix}\right)$ and $ G_{i,2} = \left(\begin{smallmatrix}0&0\\0&-1\end{smallmatrix}\right)$, $ g_{i,2} = \left(\begin{smallmatrix}1\\0\end{smallmatrix}\right)$, and ${\mathcal}K_{i,1}$, ${\mathcal}K_{i,2}$ are both $\infty$-norm cones. Such a construction is also graphically illustrated in Fig. \[fig::PrimitiveSets\].
Consider the state of agent $i$ such that ${\boldsymbol}x_{i}\in{{\mathbb R}}^3$. If we want to enclose states $ x_{i,1}$ and $ x_{i,2}$ within a two-dimensional disc, and state $ x_{i,3}$ within a one-dimensional box, we first set $P_{i,1} = \left(\begin{smallmatrix}1&0&0\\0&1&0\\0&0&0\end{smallmatrix}\right)$ and $P_{i,2} = \left(\begin{smallmatrix}0&0&0\\0&0&0\\0&0&1\end{smallmatrix}\right)$, and ${\mathcal}S_i$ is given by $${\mathcal}S_i = \Big\{{\boldsymbol}{s}_{i} \in {\mathbb}R^{3}\;:\; \|({s}_{i,1},{s}_{i,2})\|_2 \leq 1 ,\,\| s_{i,3}\|_2\leq 1 \Big\}$$ implying that $ G_{i,1} = \left(\begin{smallmatrix}0&0&0\\-1&0&0\\0&-1&0\end{smallmatrix}\right)$, $ g_{i,1} = \left(\begin{smallmatrix}1\\0\\0\end{smallmatrix}\right)$ and $ G_{i,2} = \left(\begin{smallmatrix}0&0&0\\0&0&-1\end{smallmatrix}\right)$, $ g_{i,2} = \left(\begin{smallmatrix}1\\0\end{smallmatrix}\right)$, and ${\mathcal}K_{i,1}$, ${\mathcal}K_{i,2}$ are both second order cones.
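The first (hyper-rectangle) construction above can be reproduced numerically. The following sketch, with hypothetical scalings $y_i$ and translation $z_i$, maps the corners of the unit-box primitive set $ {\mathcal}S_i $ through the affine transformation $ {\boldsymbol}x_i = \sum_k y_{i,k} P_{i,k} {\boldsymbol}s_i + z_i $ and recovers the expected axis-aligned rectangle:

```python
import numpy as np

# Orthogonal projections onto the two coordinates (Example above);
# S_i is the unit box, y and z are hypothetical design choices.
P1 = np.array([[1.0, 0.0], [0.0, 0.0]])
P2 = np.array([[0.0, 0.0], [0.0, 1.0]])
y = np.array([2.0, 0.5])           # per-direction scalings y_{i,k}
z = np.array([1.0, -1.0])          # translation z_i

def forecast_point(s):
    """x = sum_k y_k P_k s + z, for s in the unit box S_i."""
    return y[0] * P1 @ s + y[1] * P2 @ s + z

# Corners of S_i map to corners of the rectangle [-1, 3] x [-1.5, -0.5].
corners = [np.array([sx, sy]) for sx in (-1.0, 1.0) for sy in (-1.0, 1.0)]
xs = np.array([forecast_point(s) for s in corners])
```

Each $y_{i,k}$ stretches one axis of the box independently, which is exactly the per-projection shape control described above.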
Approximation is more restrictive than the one proposed in [@Jaillet2016; @Zhang2017; @Bitlislioglu2017], which allows for arbitrary translation/rotation of the set $\mathcal{S}_i$. However, this additional restriction is crucial since set is in general a non-convex region, due to the multiplication between $ y_{i,k}$ and $ {\boldsymbol}s_{i} $, which are both decision variables in the resulting optimization problem. By taking advantage of the fact that $ {\mathcal}S_i $ is compact, $ {\mathcal}X_i(y_i,z_i) $ can be expressed as the following convex set: $$\label{eq::SFSv2}
\begin{array}{l@{\,}l}
{\mathcal}{\widehat X}_i(y_i,z_i) = \displaystyle\Bigg\{{\boldsymbol}{x}_i \,:\, \exists {\boldsymbol}\nu_{i,k}\in\mathbb{R}^{N_{x,i}} \text{ s.t. } & \displaystyle{\boldsymbol}x_{i} = \sum_{k=1}^{K_i} P_{i,k} {\boldsymbol}\nu_{i,k} + z_{i},\\
& G_{i,k} P_{i,k} {\boldsymbol}\nu_{i,k} \preceq_{{\mathcal}K_{i,k}} y_{i,k} g_{i,k},\;k=1,\ldots,K_i \Bigg\}
\end{array}$$ where $ {\boldsymbol}\nu_{i,k} $ are auxiliary variables. The relationship between sets and is summarized in the following proposition.
\[prop::nCc\] Set $ {\mathcal}X_{{{\scriptscriptstyle {\mathcal}{FS} \scriptstyle}}} = \{({\boldsymbol}x_i, y_i, z_i)\,:\, {\boldsymbol}x_i \in {\mathcal}X_i(y_i, z_i) \} $ is equivalent to $ {\widehat{{\mathcal}X}}_{{{\scriptscriptstyle {\mathcal}{FS} \scriptstyle}}} =\{({\boldsymbol}x_i, y_i, z_i)\,:\, {\boldsymbol}x_i \in {\mathcal}{\widehat X}_i(y_i,z_i) \} $ in the following sense: there exists a unique mapping between feasible points in $ {{\mathcal}X}_{{{\scriptscriptstyle {\mathcal}{FS} \scriptstyle}}} $ and $ {\widehat{{\mathcal}X}}_{{{\scriptscriptstyle {\mathcal}{FS} \scriptstyle}}} $.
The restriction of [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F_{{{\scriptscriptstyle {\mathcal}{CC} \scriptstyle}}}({\mathbb}R^{N_{x,i}}))$]{}to state forecast sets $ {\widehat{{\mathcal}X}}_i(y_i, z_i) $ is given as $$\label{DecentralizedXab}
\begin{array}{@{}l}
\text{\;\;minimize\,} \displaystyle\sum\limits_{i = 1}^M \max\limits_{{\boldsymbol}\xi_i \in \Xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} \in {\widehat{{\mathcal}X}}_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}} J_i({\boldsymbol}x_i,{\boldsymbol}u_i) \\
\left.\begin{array}{@{}r@{\;}l@{}}
\text{subject to\,}& {\boldsymbol}{\Psi}_{i}(\cdot) \in {\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\boldsymbol}u_i = {\boldsymbol}{\Psi}_{i}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}})\\
& {\boldsymbol}x_{i} = f_{i}\big({\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}, {\boldsymbol}u_{i}, {\boldsymbol}\xi_{i}\big)\\
& ({\boldsymbol}x_{i}, {\boldsymbol}u_{i}) \in {\mathcal}O_i \\
& {\boldsymbol}x_i \in {\widehat{{\mathcal}X}}_i(y_i, z_i),\, {\widehat{{\mathcal}X}}_i(\cdot) \in {\mathcal}F_{{{\scriptscriptstyle {\mathcal}{AT} \scriptstyle}}}({\mathcal}S_i)
\end{array}\right \rbrace \begin{array}{@{}l}
\forall {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} \in {\widehat{{\mathcal}X}}_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}(y_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}, z_{{\scriptscriptstyle \mathcal N_i \scriptstyle}})\\
\forall {\boldsymbol}\xi_i \in \Xi_i
\end{array}\forall i \in {\mathcal}M,
\end{array}
\tag{$\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}{F}_{{\scriptscriptstyle {\mathcal}{AT} \scriptstyle}}({\mathcal}S_i)$}$$ where for each $i\in {\mathcal}M$ we define $ {\widehat{{\mathcal}X}}_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}(y_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}, z_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}) = \bigtimes_{j \in {\mathcal}N_i} {\widehat{{\mathcal}X}}_j(y_{j}, z_j) $. The optimization variables are $ {\boldsymbol}{\Psi}_{i}(\cdot) $, $ y_i $ and $ z_i $ for all $ i \in {\mathcal}M $. [Problem ]{}has a semi-infinite structure with decision-dependent uncertainty sets and thus, in general, admits a non-convex feasible region. This can be verified by noticing that reformulating any semi-infinite robust constraint in the problem leads to a non-convex set of inequality and equality constraints. To obtain a problem formulated with decision-independent uncertainty sets, we propose, in the spirit of [@Jaillet2016; @Zhang2017; @Bitlislioglu2017], the design of causal feedback policies $ \Gamma_{i,t}:{\mathbb}R^{\bar n_{i}^t} \rightarrow {\mathbb}R^{n_{u,i}} $ such that the control input at each time step is given by $ u_{i,t} = \Gamma_{i,t}({\boldsymbol}\xi_i^{t-1}, {\boldsymbol}s^t_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}) $. We write $ {\boldsymbol}\Gamma_i({\boldsymbol}\xi_i, {\boldsymbol}s_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}) = [\Gamma_{i,t}({\boldsymbol}\xi_i^{t-1}, {\boldsymbol}s^t_{{\scriptscriptstyle \mathcal N_i \scriptstyle}})]_{t\in{\mathcal}T} $ to denote the policy concatenation over time, and we define as $ {\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}) $ the space of strictly causal disturbance feedback policies. 
In this context, the counterpart of [Problem ]{}is given as $$\label{DecentralizedFinal}
\begin{array}{@{}l}
\text{\;\;minimize } \displaystyle\sum\limits_{i = 1}^M \max\limits_{{\boldsymbol}\xi_i \in \Xi_i, {\boldsymbol}s_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} \in {\mathcal}S_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}} J_i({\boldsymbol}x_i,{\boldsymbol}u_i) \\
\left.\begin{array}{@{}r@{\;}l@{}}
\text{subject to }& {\boldsymbol}{\Gamma}_{i}(\cdot) \in {\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}s_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), \,{\boldsymbol}u_i = {\boldsymbol}{\Gamma}_{i}({\boldsymbol}\xi_i, {\boldsymbol}s_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}})\\
& {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} = Y_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}{\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}+ z_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\\
& {\boldsymbol}x_{i} = f_{i}\big({\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}, {\boldsymbol}u_{i}, {\boldsymbol}\xi_{i}\big)\\
& ({\boldsymbol}x_{i}, {\boldsymbol}u_{i}) \in {\mathcal}O_i \\
& {\boldsymbol}x_i \in {\widehat{{\mathcal}X}}_i(y_i, z_i),\, {\widehat{{\mathcal}X}}_i(\cdot) \in {\mathcal}F_{{{\scriptscriptstyle {\mathcal}{AT} \scriptstyle}}}({\mathcal}S_i)
\end{array}\right \rbrace \begin{array}{@{}l}
\forall {\boldsymbol}s_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} \in {\mathcal}S_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}\\
\forall {\boldsymbol}\xi_i \in \Xi_i
\end{array}\forall i \in {\mathcal}M,
\end{array}
\tag{$\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}s_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}{F}_{{\scriptscriptstyle {\mathcal}{AT} \scriptstyle}}({\mathcal}S_i)$}$$ where $ {\mathcal}S_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} = \bigtimes_{j \in {\mathcal}N_i} {\mathcal}S_j $ and, with a slight abuse of notation, $ Y_i = \sum_{k=1}^{K_i} y_{i,k} P_{i,k} $ and $ Y_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}{\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}+ z_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}= \left[ Y_j {\boldsymbol}s_{j} + z_j \right]_{j \in {\mathcal}N_i} $ for each $ i \in {\mathcal}M $. The decision variables are $ {\boldsymbol}\Gamma_i(\cdot) $, $ y_i $ and $ z_i $ for all $ i \in {\mathcal}M $. Note that for [Problem ]{}to be computationally tractable, we further need to restrict the infinite-dimensional structure of its decision policies, e.g., to admit an affine structure, as suggested in [@DanielPrimalDual; @bental2004ars].
In the following, we show the relationship between [Problem ]{}and [Problem ]{}. To this end, we define the affine mapping $ L_{i,t}: {\mathbb}R^{n_{x,i}} \rightarrow {\mathbb}R^{n_{x,i}} $, $ s_{i,t} \mapsto x_{i,t} $, as $$\label{app::map1}
L_{i,t}(s_{i,t}) = Y_{i,t} s_{i,t} + z_{i,t},$$ and the affine mapping $ R_{i,t}: {\mathbb}R^{n_{x,i}} \rightarrow {\mathbb}R^{n_{x,i}} $, $ x_{i,t} \mapsto s_{i,t} $, as $$\label{app::map2}
R_{i,t}(x_{i,t}) = Y_{i,t}^{+}(x_{i,t} -z_{i,t})$$ where $ Y^+_{i,t} := (Y_{i,t}^\top Y_{i,t})^{-1}Y^\top_{i,t} $ is the pseudo-inverse of the positive semi-definite matrix $ Y_{i,t} $, well defined whenever $ Y_{i,t} $ has full column rank. Note that $ R_{i,t} $ is in general not an exact inverse of $ L_{i,t} $ because of the pseudo-inverse $ Y^+_{i,t} $. Nevertheless, $ L_{i,t} $ can be viewed as a “left inverse” of the operator $ R_{i,t} $, i.e., it satisfies $ L_{i,t}\big(R_{i,t}(x_{i,t})\big) = x_{i,t} $. Using this mapping, the following theorem establishes the equivalence between [Problem ]{}and [Problem ]{}.
\[thm::5\] [Problem ]{}is equivalent to [Problem ]{}in the following sense: there exists a mapping between feasible solutions in [Problem ]{}and [Problem ]{}. Moreover, the optimal value of [Problem ]{}is equal to the optimal value of [Problem ]{}.
In view of Theorem , the mapping between feasible solutions in [Problem ]{}and [Problem ]{}can now be made explicit. This allows a local controller to evaluate the resulting policy based on the established local communication network. If $ u_{i,t} = \Gamma_{i,t}({\boldsymbol}\xi_i^{t-1}, {\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^t) $ is the optimal control policy derived from the solution of [Problem ]{}, then $ u_{i,t} = \Psi_{i,t}({\boldsymbol}\xi_i^{t-1}, {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^t) = \Gamma_{i,t}({\boldsymbol}\xi_i^{t-1},[ R_j^t({\boldsymbol}\zeta_j^t)]_{j\in {\mathcal}N_i}) $ is the optimal control policy for [Problem ]{}.
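This policy translation can be sketched as follows (hypothetical scalar data; $ Y $ is taken invertible so that the pseudo-inverse is an exact inverse and $ R $ simply recovers the primitive-set element generating a belief):

```python
import numpy as np

# Sketch of the policy translation above (hypothetical data, single stage).
Y = np.diag([2.0, 0.5])                 # Y = sum_k y_k P_k, invertible here
z = np.array([1.0, -1.0])               # translation z

def R(zeta):                            # R(zeta) = Y^+ (zeta - z)
    return np.linalg.pinv(Y) @ (zeta - z)

def Gamma(xi, s):                       # a toy affine policy in (xi, s)
    return 0.5 + xi.sum() + s.sum()

def Psi(xi, zeta):                      # induced policy in (xi, zeta)
    return Gamma(xi, R(zeta))

xi = np.zeros(3)
s = np.array([0.4, -0.6])
zeta = Y @ s + z                        # belief generated by s via L
assert np.isclose(Psi(xi, zeta), Gamma(xi, s))   # the two policies agree
```

With full column rank, $R$ undoes $L$ exactly, so evaluating $\Psi$ on a communicated belief reproduces the $\Gamma$ policy evaluated on the underlying primitive-set element.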
[Problem ]{}retains the coupling structure of [Problem ]{}since agent $ i $ only needs to communicate to its direct neighbors the scaling $ y_i $ and translation $ z_i $ of its predefined fixed set $ {\mathcal}S_i $. Contrary to [Problem $(\text{C}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}))$]{}and [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}, no coupling is introduced among the agents through the form of their decision policies. This is because the agents treat the effect of their neighbors as a disturbance to their systems. In this framework, the proposed method alleviates privacy concerns, since this limited communication scheme does not reveal the exact characteristics of the dynamics within each system. Moreover, the loosely coupled structure of [Problem ]{}makes it amenable to distributed computation algorithms such as the alternating direction method of multipliers (ADMM), which can solve it efficiently [@Boyd2004].
Numerical results {#sec::Numerics}
=================
In this section, we conduct a number of simulation-based studies to assess the efficacy of the proposed decentralized controller synthesis approach. We focus our attention on three examples: $ (i) $ a toy example that allows us to illustrate, numerically and graphically, the connection between the shape of the primitive sets and the solution quality; $ (ii) $ a system composed of masses connected by springs and dampers, which is suitable for studying the scalability and the closed-loop behavior of the proposed methodology; and $ (iii) $ a supply chain operated under distributed decision-making authority, where we exhibit the efficacy of the proposed method as a contract design mechanism and investigate its performance on various coupling network structures.
Example 1: Illustrative example
-------------------------------
![Physical and information structure of the two agents in the system.[]{data-label="fig::toyExample"}](ex1_Toy.pdf){width="60.00000%"}
We consider a system composed of two agents with states $ x_1, x_2 \in {\mathbb}R^2 $, inputs $ u_1, u_2 \in {\mathbb}R $ and disturbances $ \xi_1, \xi_2 \in \Xi = \{\xi \in {\mathbb}R^2: \| \xi \|_\infty \leq 1 \} $, respectively. The nested physical and information communication network, along with the objective functions and constraint sets of the agents, are shown in Fig. \[fig::toyExample\]. The system matrices are given as $$c = \begin{bmatrix}
1 \\ -1
\end{bmatrix}, \;B = \begin{bmatrix}
1 \\ 0.8
\end{bmatrix},\; E = \begin{bmatrix}
1 & -1 \\ -1 & 1
\end{bmatrix} \text{ and } D = \begin{bmatrix}
1 & 0 \\ 0 & -2
\end{bmatrix}.$$
The system-wide robust optimization problem is formulated as follows: $$\label{problem_toyExample_Cent}
\begin{array}{@{}l@{}}
\min \max\limits_{\xi_1, \xi_2} J_1(x_1) + \max\limits_{\xi_1, \xi_2} J_2(x_2) \\
\left.\begin{array}{r@{\;}l@{}}
{{\textrm{s.t.}}}& u_1 \in {\mathcal}{SC}_1, u_2 \in {\mathcal}{SC}_2,\\
& x_1 = B u_1 + E \xi_1,\\
& (x_1,u_1) \in {\mathcal}O_1,\\
& x_2 = B u_2 + E \xi_2 + D x_1,\\
& (x_2,u_2) \in {\mathcal}O_2,
\end{array}\right \rbrace \begin{array}{@{}l}
\forall \xi_1, \xi_2 \in \Xi.
\end{array}
\end{array}$$ with $ {\mathcal}{SC}_1, {\mathcal}{SC}_2 \subseteq {\mathcal}{SC}(\xi_1,\xi_2) $ where $ {\mathcal}{SC}(\cdot) $ denotes the infinite-dimensional function space generated by strictly causal disturbance feedback policies on its arguments. In this setting, the equivalent to [Problem $(\text{C}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}))$]{}for the synthesis of optimal centralized controllers is obtained by restricting $ {\mathcal}{SC}_1 = {\mathcal}{SC}_2 = {\mathcal}{SC}(\xi_1,\xi_2) $. Similarly, the equivalent to [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}for the synthesis of optimal nested-information controllers is obtained by restricting $ {\mathcal}{SC}_1 = {\mathcal}{SC}(\xi_1) $ and $ {\mathcal}{SC}_2 = {\mathcal}{SC}(\xi_1,\xi_2) $. Both problems are solved by resorting to affine disturbance feedback policies.
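Since the policies are affine in the disturbances and the uncertainty set $ \Xi $ is a box, the inner maximization of a convex cost can be evaluated by enumerating the vertices of $ \Xi $. A small sketch with the example matrices and a hypothetical stand-in cost $ J_1(x_1) = \|x_1\|_1 $ (the actual objectives are given in Fig. \[fig::toyExample\]):

```python
import numpy as np
from itertools import product

# With affine policies, each state is affine in the disturbances, so a convex
# cost over the box Xi = {||xi||_inf <= 1} attains its maximum at a vertex.
B = np.array([1.0, 0.8])
E = np.array([[1.0, -1.0], [-1.0, 1.0]])

def worst_case_cost(u1):
    """max over the vertices of Xi of J1(x1), with x1 = B u1 + E xi1."""
    best = -np.inf
    for xi in product((-1.0, 1.0), repeat=2):
        x1 = B * u1 + E @ np.array(xi)
        best = max(best, np.abs(x1).sum())   # toy cost J1(x1) = ||x1||_1
    return best

wc = worst_case_cost(0.0)   # open-loop u1 = 0
```

Vertex enumeration scales exponentially in the disturbance dimension; the duality-based reformulations used in the paper's references avoid this, but the brute-force check is convenient for validating small instances.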
In Fig. \[fig::centralizedSynthesis\] we depict in green the feasible set for $ x_1 $ of Problem when $ {\mathcal}{SC}_1 = {\mathcal}{SC}_2 = {\mathcal}{SC}(\xi_1, \xi_2) $. In addition, Fig. \[fig::centralizedSynthesis\_a\] shows in yellow the region generated by the optimal centralized policy for $ x_1 $, while Fig. \[fig::centralizedSynthesis\_b\] shows in yellow the respective region for the optimal nested-information policy. To streamline the presentation, we henceforth refer to these yellow regions as the optimal centralized and nested-information regions, respectively. The resulting objective values are reported in the title of the respective figure as “obj. = $ J_1(x_1) + J_2(x_2) $”. We observe that the information restriction imposed on the nested-information controller synthesis results in a larger objective value and a smaller optimal region with respect to the centralized solution.
The synthesis of robust decentralized controllers using the communicated sets approach is given as, $$\label{problem_toyExample_Decent}
\begin{array}{@{}l@{}}
\min \max\limits_{\xi_1} J_1(x_1) + \max\limits_{s_1, \xi_2} J_2(x_2) \\
\left.\begin{array}{r@{\;}l@{}}
{{\textrm{s.t.}}}& u_1 \in {\mathcal}C(\xi_1), u_2 \in {\mathcal}C(\xi_2,\zeta_1),\\
& x_1 = B u_1 + E \xi_1,\\
& (x_1,u_1) \in {\mathcal}O_1,\\
& x_1 \in {\mathcal}X_1 = y_1 {\mathcal}S_1 + z_1,\\
& \zeta_1 = y_1 s_1 + z_1,\\
& x_2 = B u_2 + E \xi_2 + D \zeta_1,\\
& (x_2,u_2) \in {\mathcal}O_2,
\end{array}\right \rbrace \begin{array}{@{}l}
\forall s_1 \in {\mathcal}S_1,\\
\forall \xi_1, \xi_2 \in \Xi.
\end{array}
\end{array}$$ This problem is an instance of [Problem ]{}where the state forecast set $ {\mathcal}X_1 $ is expressed as the scaling, $ y_1 $, and translation, $ z_1 $, of a predefined primitive set, $ {\mathcal}S_1 $. To solve it, we use affine feedback policies to restrict the infinite-dimensional structure of its decision variables.
In what follows, we investigate how the quality of the solution, in terms of objective value, is affected by the choice of the primitive set. In Fig. , this comparison is performed with respect to box, rhombus, and circle primitive sets. As previously mentioned, the area in green depicts the feasible set for $ x_1 $, and the area in yellow depicts the region generated by the optimal policy of $ x_1 $ after solving Problem using the respective primitive set. In addition, we show in red the area covered by the set $ {\mathcal}X_1 $ communicated by agent 1 to agent 2, and with black stars the worst-case scenarios for agent 2 in the state forecast set $ {\mathcal}X_1 $. We observe that the optimal region of agent 1 changes with the different primitive sets in an attempt to cooperate with agent 2. This cooperative behavior is also identified in the objective values, as agent 1 “sacrifices” some of its optimality for the good of agent 2. Interestingly enough, part of the state forecast set $ {\mathcal}X_1 $ may lie outside the feasible region of the problem, which adds conservativeness to the system, as can be verified by inspecting the position of the worst-case scenarios. Moreover, since the worst-case scenarios are not necessarily placed at the corner points defined by the optimal region, agent 1 retains some of its privacy, since the behavior of its optimal policy is not revealed to agent 2.
To quantify the importance of the primitive set orientation in space, we conducted a second numerical experiment in which we use as primitive sets $ (i) $ a rotated rectangular set whose major and minor axes can be scaled independently, and $ (ii) $ a rotated ellipsoid whose major axis is forced to be $ 1.5 $ times larger than its minor axis. Illustrative examples of the effect of such rotated sets on the optimal region are depicted in Fig. \[fig::rotatedSets\]. We observe that as the shape of the communicated set deviates from the optimal one depicted in Fig. \[fig::centralizedSynthesis\](b), suboptimality and privacy increase. To clarify this finding, we repeated the simulation experiment for all possible rotations in the range $ [0,\,180] $ degrees of the rectangular and scaled ellipsoid sets. The results are reported in Fig. \[fig::costPlot\]. We observe that if the rotation of the communicated sets matches that of the set generated by the nested-information policy in Fig. \[fig::centralizedSynthesis\](b), then the solution resulting from the proposed decentralized method closely approximates the optimal one. If, however, this is not the case, then the cost deviates considerably and even infeasible instances of Problem may appear.
![Cost associated with different shapes of communicated sets: (green) rotated rectangular sets, (red) rotated ellipsoids with principal axis ratio of ten.[]{data-label="fig::costPlot"}](CostPlot.pdf){width="60.00000%"}
Example 2: Spring-mass-damper
-----------------------------
In this numerical study, we consider systems composed of masses that are connected by springs and dampers and arranged in a chain formation, as shown in Fig. \[fig::Sim\]. The values of the masses, spring constants and damping coefficients are chosen uniformly at random from the intervals $ [5,\,10] $kg, $ [0.8,\,1.2] $N/m and $ [0.8,\,1.2] $Ns/m, respectively. We assume that each mass $ i $ is an individual system with state vector $ x_{i,t} \in {\mathbb}R^2 $ representing the position and velocity deviation from the system’s equilibrium state, and input $ u_{i,t} \in {\mathbb}R $ denoting the force applied to the $ i $-th mass. We assume that the states and inputs are constrained such that $ \|x_{i,t}\|_\infty \leq 6 $ and $ \|u_{i,t}\|_\infty \leq 4 $ for all times $ t $. Moreover, we assume that the dynamics of each mass is affected by an additive exogenous disturbance $ \xi_{i,t} \in {\mathbb}R^2 $ which takes values in the bounded set $ \Xi = \{\xi \in {\mathbb}R^2: \|\xi\|_\infty \leq 1\} $.
The prediction control model is obtained by discretizing the system’s continuous dynamics using forward Euler with a sampling time of $ 0.1 $s. Although inexact, the Euler discretization is chosen so as to preserve the distributed structure of the system. On the contrary, the discrete-time simulation model of the system is obtained using the exact zero-order hold discretization method with the same sampling time of $ 0.1 $s. The objective function of each system is of the form with $ Q_{i} = \text{diag}(1,0) $ and $ R_i= 0.1 $.
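A sketch of the forward Euler step for a single mass (the parameter values below are hypothetical draws from the stated intervals):

```python
import numpy as np

# Forward-Euler discretization sketch for one mass (hypothetical parameters:
# m = 7.5 kg, k = 1.0 N/m, c = 1.0 Ns/m, sampling time h = 0.1 s).
m, k, c, h = 7.5, 1.0, 1.0, 0.1
A_ct = np.array([[0.0, 1.0], [-k / m, -c / m]])   # continuous-time dynamics
B_ct = np.array([[0.0], [1.0 / m]])

A_d = np.eye(2) + h * A_ct    # Euler: x_{t+1} = (I + h A) x_t + h B u_t
B_d = h * B_ct

# Euler keeps the sparsity pattern of A_ct, so the neighbor-to-neighbor
# coupling in the chain is preserved; exact zero-order hold (matrix
# exponential) would in general densify the coupling matrix.
x = np.array([1.0, 0.0])      # initial displacement 1 m, at rest
x_next = A_d @ x + (B_d @ np.array([0.0])).ravel()
```

Preserving the sparsity of the continuous-time coupling is what keeps the prediction model distributed, as noted above.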
The dynamics of these interconnected systems naturally admit a distributed non-nested structure. If one extends the information exchange network so as to be nested, then communication needs to be established among all the agents in the system. Hence, the problem formulations for the synthesis of optimal centralized and nested-information controllers, as described in [Problem $(\text{C}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}))$]{}and [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}, are exactly the same. On the contrary, the information exchange network associated with the proposed decentralized method given in [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{}matches the physical coupling network and requires the establishment of communication only between adjacent agents in the system. These information structures are graphically depicted in Fig. \[fig::SMD\_Network\].
We now investigate the effect of the horizon length and the number of agents on the quality of the solution and the execution time. For each system configuration, we conducted 100 Monte Carlo simulations with the initial displacements of the masses chosen uniformly at random. In Fig. \[fig::SMD\_NT\], we report the effect on these metrics of the number of agents in the system when the horizon length is kept constant at $ T = 8 $. The area in blue is associated with the optimal centralized approach and the one in red with the proposed decentralized one; both show the range of the respective values over the simulation experiments. We observe that the suboptimality level remains roughly the same as the system size grows, which indicates that the uncertainty introduced as the number of agents increases is dissipated locally by interactions among adjacent agents. On the other hand, the execution time for solving the decentralized robust optimization problem only slightly increases with the number of agents, in contrast to the centralized approach for which the increase is linear. This can be explained by the fact that the number of decision variables in [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{} is considerably lower than in [Problem $(\text{C}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}))$]{}. It is interesting to note that [Problem $(\text{C}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}))$]{} could only be formulated and solved within a reasonable amount of time for a limited number of agents in the network.
In Fig. \[fig::SMD\_HT\], we examine the effect of the horizon length on the solution quality and execution time for a system comprising two masses. Here, we observe that the suboptimality of the proposed decentralized method increases linearly with the horizon length. This suggests that the complexity introduced by the evolution of the dynamical system over time is hard to envelop accurately with sets of a predefined orientation, and hence that conservativeness accumulates over time. Moreover, we observe that although the decentralized method is computationally more efficient, the rate of increase in the execution time is the same for both approaches. This was expected since the affine policies cause the size of the resulting linear program to grow polynomially with the horizon length.
Finally, the performance of the system is evaluated in a receding horizon implementation, i.e., the first input resulting from the respective centralized or distributed optimization problem is applied to the exact system dynamics, and the next state is evaluated. We conduct a closed-loop simulation experiment for a system comprising five masses and a prediction horizon of $ T = 15 $. In Fig. \[fig::SMD\_RH\_tr\], the trajectories generated by the centralized and distributed designs are shown for five different simulation experiments. We observe that these trajectories are very similar, which illustrates the proximity in performance between the two designs. To better quantify this comparison, we conducted Monte Carlo simulation experiments with the initial mass displacements chosen uniformly at random and subject to distinct exogenous disturbance realizations. Fig. \[fig::SMD\_RH\_err\] shows the error in position and velocity as a function of the simulation time. We observe that as the systems approach their equilibrium state, the errors between the centralized and decentralized approaches converge to zero. We note, moreover, that in all instances this error in position and velocity is fairly small even though the respective open-loop error is high (cf. Fig. \[fig::SMD\_HT\]), which indicates the efficacy of the proposed method in closed-loop simulations.
Example 3: Supply chain with quantity flexibility contracts
-----------------------------------------------------------
We now exhibit the performance of the proposed method as a contract design mechanism for supply chains operated in a decentralized fashion, and we investigate its performance on various physical coupling topologies, e.g., chain, star, ring and mesh structures. As a running example, we adopt the decentralized operation of modern supply chains using quantity flexibility (QF) contracts, as described in [@Tsay1999]. Modern supply chains naturally operate with distributed decision making authority, since multiple sites worldwide work together to deliver a product. In this distributed decision making context, each manufacturer (supplier) knows only the schedule of desired replenishments provided by its immediate retailer (manufacturer), and is only concerned with its own cost performance. To avoid “mutual deception” situations (e.g., some buyers inflate demand only to later disavow any undesired product [@Lee1997]) which increase the uncertainties and costs in the system [@Magee1967; @Lovejoy1998], a commonly used approach in industry is to introduce the QF contract as a method for coordinating material and information flows in distributed supply chains. This type of contract discourages the customer from overstating its needs by allowing a maximum upside revision of its scheduled replenishment. In return, the supplier is obligated to cover any requests that remain within these limits. We graphically depict in Fig. \[fig::QFC\_v2\] the operation of a single-product, serial supply chain using QF contracts.
For each agent $ i $, i.e., retailer, manufacturer or supplier, we consider inventory dynamics given as $$I^i_{t+1,p} = I^i_{t,p} + R^i_{t,p} -D^i_{t,p} \text{ for } p = 1,\ldots,P$$ where $ I^i_{t,p} $ is the inventory stock, $ R^i_{t,p} $ is the replenishment and $ D^i_{t,p} $ is the customer demand for the $ p $-th out of $ P $ products. For manufacturers and/or suppliers, the demand originates from the replenishment schedule of another manufacturer or a retailer. If the $ i $-th agent is a retailer, then the product demand originates from the market and we assume that at each period $ t $ it is given as $$D_{t,p}^i = \left \lbrace\begin{array}{ll}
2 + \sin\left(2\pi \dfrac{t}{T-1}\right) + \dfrac{1}{k} \sum \limits_{l=1}^{k} \Phi_{p,l}^i \xi_{t,l}^i & \text{for } p \text{ even},\\
2 + \cos\left(2\pi \dfrac{t}{T-1}\right) + \dfrac{1}{k} \sum \limits_{l=1}^{k} \Phi_{p,l}^i \xi_{t,l}^i & \text{for } p \text{ odd},
\end{array}\right.$$ where $ \xi_{t,l}^i \in [-1,1] $ and the factor loading coefficients $ \Phi_{p,l}^i $ are chosen uniformly at random from $ [-1,1] $. By construction, the product demands thus satisfy $ D_{t,p}^i \in [0,4] $ for all $ t \in {\mathcal}T $. Next, we model the process of product making, i.e., the combination of different materials, as follows $$R_{t,p}^i = \sum \limits_{j \in {\mathcal}N_i} \Psi_{p,j}^i D_{t,p}^j + w_{t,p}^i \text{ for } p = 1,\ldots,P$$ where $ \Psi_{p,j}^i $ are chosen uniformly at random from $ [0,1] $ so as to satisfy $ \sum \limits_{j \in {\mathcal}N_i} \Psi_{p,j}^i = 1 $. Moreover, $ w_{t,p}^i \in {\mathcal}W_i = [-0.2,0] $ is a random variable that captures production delays and material losses. Any excess demand is backlogged at a unit cost of $ c_B $ per period, and any excess inventory incurs a unit holding cost of $ c_H $ per period. The objective is to determine an ordering policy that minimizes the worst-case sum of backlogging and inventory holding costs over all anticipated demand realizations, given as $$\max \limits_{{\boldsymbol}\xi, {\boldsymbol}w } \sum_{t \in {\mathcal}T} \sum_{p \in P_i} c_H \big[ I^i_{t,p}\big]_+ + c_B \big[-I^i_{t,p}\big]_+$$ where $ c_B, c_H $ are chosen uniformly at random in the interval $ [0,2] $, and the positive-part operator $ [\cdot]_+ = \max(\cdot, 0) $ can be easily removed using the epigraph representation.
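To make the demand and cost model concrete, the sketch below simulates one retailer for a single product. The helper names and the fixed replenishment $ R = 2 $ are ours, chosen purely for illustration:

```python
import numpy as np

def market_demand(t, T, Phi, xi):
    """Sinusoidal nominal demand plus a bounded factor-model term.
    With Phi and xi entrywise in [-1, 1], the demand lies in [0, 4]."""
    k = len(Phi)
    return 2.0 + np.sin(2 * np.pi * t / (T - 1)) + Phi @ xi / k

def retailer_cost(T=24, k=3, c_H=1.0, c_B=1.5, seed=0):
    """Simulate I_{t+1} = I_t + R_t - D_t for one product and accumulate
    the holding/backlog cost c_H [I]_+ + c_B [-I]_+ over the horizon."""
    rng = np.random.default_rng(seed)
    Phi = rng.uniform(-1.0, 1.0, k)        # factor loadings in [-1, 1]
    I, cost, demands = 0.0, 0.0, []
    for t in range(T):
        xi = rng.uniform(-1.0, 1.0, k)     # bounded disturbances
        D = market_demand(t, T, Phi, xi)
        demands.append(D)
        R = 2.0                            # fixed nominal replenishment (illustrative)
        I = I + R - D
        cost += c_H * max(I, 0.0) + c_B * max(-I, 0.0)
    return cost, demands
```

In the robust counterpart, each positive part $ [I]_+ $ is replaced by an epigraph variable $ s \ge I $, $ s \ge 0 $; the `max(I, 0.0)` calls here simply evaluate that positive part pointwise.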
![Effect of uncertainty on retailer-manufacturer communicated sets over a 24 stages horizon.[]{data-label="fig::SC_UndEff"}](figUncEff.pdf){width="60.00000%"}
In the first simulation experiment, we investigate, for the structure depicted in Fig. \[fig::QFC\_v2\], how an increase in the market demand uncertainty affects the bounds communicated from retailers to manufacturers. The simulation configuration comprises one retailer, one manufacturer and one supplier, and we symmetrically increase the size of the uncertainty set $ \Xi_{i,p} $ from $ 40 \% $ to $ 100 \% $ of its original size. We report the results in Fig. \[fig::SC\_UndEff\], where we observe that the size of the communicated bounds increases as the size of the uncertainty and the horizon length increase. Although this was expected, it is interesting to visualize how the bounds adapt to the demand profile pattern, highlighting the cooperative nature of the approach, which strives to mitigate the uncertainty propagating backwards in the supply chain.
We now investigate the performance of the proposed approach on different supply chain topologies of varying size. In particular, we focus our attention on the chain, ring, mesh and star topologies, graphically depicted in Fig. \[fig::SC\_Topologies\].
![Performance comparison for different coupling topologies as the number of agents in the network increases.[]{data-label="fig::SC_Comparison"}](figTopologies.pdf){width="60.00000%"}
To assess the efficacy of the proposed method, we compare the objective value and execution time of the decentralized solution given by [Problem ]{}solved with rectangular primitive sets, and of the solution generated by the centralized [Problem $(\text{C}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}))$]{}. We report the results of this comparison for networks comprising from 6 to 20 agents in Fig. \[fig::SC\_Comparison\]. We observe that for all structures under investigation, increasing the number of agents in the network only slightly increases the suboptimality. For structures which are loosely coupled, such as the chain and star, the suboptimality of the decentralized solution is close to zero. On the other hand, the worst performance is observed for the ring structure: due to its cyclic nature, it is hard to obtain a decentralized solution that approximates the centralized one while relying only on neighbor-to-neighbor communication. The large benefit of the proposed decentralized approach is, however, identified in the execution time needed to generate its solution. As shown in Fig. \[fig::SC\_Comparison\], even for small networks with loose coupling the decentralized approach can be 50 times faster than the centralized one, while in the extreme case of a large mesh structure this gap can grow to a factor of 200.
Conclusions {#sec::Conclusion}
===========
This paper presents a decentralized control framework for the problem of cooperatively managing the operation of large-scale networks of constrained dynamical systems. The proposed method requires each system to communicate to its neighboring systems bounds on the evolution of its states. This minimal communication provides a certain degree of privacy to each agent, since the exact characteristics of the dynamics within each system are not revealed. Moreover, the method is suitable for problems involving large-scale systems, since its underlying minimum communication scheme preserves the original decoupled structure of the problem. To optimize the size of the communicated bounds, a non-convex infinite-dimensional problem is formulated. This computationally intractable optimization problem is approximated by a convex finite-dimensional one using methods from robust optimization. It is shown that these approximations retain the decoupled structure of the problem, making it amenable to distributed computation algorithms. In the numerical study, it is shown that the proposed decentralized method achieves highly computationally efficient solutions even for networks involving a large number of agents. Depending on the structure of the underlying physical coupling network and the form of the communication bounds, the proposed method is shown to generate solutions which closely approximate those of the centralized formulation.
Proofs of Propositions and Theorems
===================================
We show that every feasible solution of [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{} is a feasible solution to [Problem $(\text{C}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}))$]{}. Let $ {\boldsymbol}\Phi_i $ for all $ i \in {\mathcal}M $ be any feasible policy in [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}. Starting with $ \chi_{i,1} = x_{i,1}$, the state of agent $ i $ at time $ t $ is given as $$\label{eq::thm1::eq1}
\begin{array}{@{}r@{\;}l}
x_{i,t} &=\displaystyle A_{i,1}^t x_{i,1} + \sum_{\tau=1}^{t-1} \Big(A_{i,\tau+1}^t B_{i,\tau} [\chi_{j,\tau}({\boldsymbol}\xi_{{{\scriptscriptstyle \overline{\mathcal N}_j \scriptstyle}}}^{\tau-1})]_{j\in {\mathcal}N_i} + A_{i,\tau+1}^t D_{i,\tau} \Phi_{i,\tau}({\boldsymbol}\xi_{{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}}^{\tau-1}) + A_{i,\tau+1}^t E_{i,\tau} \xi_{i,\tau} \Big) \\
&=: \chi_{i,t}({\boldsymbol}\xi_{{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}}^{t-1})
\end{array}$$ where $ A_{i,\tau}^t = A_{i,\tau}A_{i,\tau+1}\dots A_{i,t-1} $ for $ \tau<t $ and $ A_{i,t}^t = I $. The last implication follows from the fact that $ {\overline{{\mathcal}N}}_i \supseteq {\overline{{\mathcal}N}}_j $ for all $ j \in {\mathcal}N_i $ since the network admits a partially nested structure. Given , it is easy to verify that for each agent $ i $ its dynamics and constraints in [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}only depend on $ {\boldsymbol}\xi_{{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}} $. Hence, any feasible solution to [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}is also feasible to the following optimization problem: $$\label{eq::thm1::eq2}
\begin{array}{l}
\text{minimize} \;\; \displaystyle\sum\limits_{i = 1}^M \max\limits_{{\boldsymbol}\xi \in \Xi} J_i({\boldsymbol}x_i,{\boldsymbol}u_i) \\
\!\!\!\left.\begin{array}{r@{\;}l@{}}
\text{subject to}& {\boldsymbol}\phi_{i}(\cdot) \in {\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}),\;{\boldsymbol}u_i = {\boldsymbol}\Phi_{i}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}})\\
& {\boldsymbol}x_{i} = f_{i}\big({\boldsymbol}x_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}, {\boldsymbol}u_{i}, {\boldsymbol}\xi_{i}\big)\\
& ({\boldsymbol}x_{i}, {\boldsymbol}u_{i}) \in {\mathcal}O_i
\end{array}\right \rbrace \forall {\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}\in \Xi_{{\scriptscriptstyle \mathcal M \scriptstyle}},\;
\forall i \in {\mathcal}M
\end{array}$$ Additionally, they achieve the same objective value since they share the same objective function. This shows equivalence of [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}and Problem . The relation between [Problem $(\text{C}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \mathcal M \scriptstyle}}))$]{}and [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}stated in the theorem now follows immediately since the two problems share the same constraints and objective function, and ${\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}})\subseteq{\mathcal}{SC}({\boldsymbol}\xi)$ for all $i\in{\mathcal}M$.
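The closed-form expansion used above can be checked numerically against direct iteration of the dynamics. In the sketch below (our notation; the transition products are applied in the state-transition order $ A_{t-1}\cdots A_\tau $, so that the closed form reduces to the recursion $ x_{\tau+1} = A_\tau x_\tau + B_\tau u_\tau + E_\tau \xi_\tau $):

```python
import numpy as np

def unroll_state(A_list, B_list, E_list, x1, u_list, xi_list):
    """Closed form x_t = Phi(t,1) x_1 + sum_tau Phi(t,tau+1)(B_tau u_tau + E_tau xi_tau),
    where Phi(t,tau) = A_{t-1} ... A_tau and Phi(t,t) = I."""
    t = len(A_list) + 1
    n = x1.size
    def Phi(tau):                          # state-transition matrix Phi(t, tau)
        P = np.eye(n)
        for s in range(tau - 1, t - 1):    # multiplies A_tau, ..., A_{t-1}
            P = A_list[s] @ P
        return P
    x = Phi(1) @ x1
    for tau in range(1, t):
        x += Phi(tau + 1) @ (B_list[tau - 1] @ u_list[tau - 1]
                             + E_list[tau - 1] @ xi_list[tau - 1])
    return x
```

Iterating the recursion step by step yields the same final state, confirming the expansion.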
The statement is proved by induction, using theoretical tools similar to [@HGK:2011b Prop. 2.1]. The statement holds for $ t = 1 $ since the initial state, $ x_{i,1} $, is known for every $ i \in {\mathcal}M $; therefore, functions $ \psi_{i,1}(x_{i,1}, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},1}) $ and $ \Psi_{i,1}({\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},1}) $ can always be constructed such that $$\psi_{i,1}(x_{i,1}, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},1}) = \Psi_{i,1}({\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},1})$$
Assume now that the statement holds for all $ 1 \leq \tau \leq t-1 $, i.e., there exist policies $ \psi_i(\cdot) $ and $ \Psi_i(\cdot) $ such that $ \psi_{i,\tau}({\boldsymbol}x_i^{\tau}, {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^{\tau}) = \Psi_{i,\tau}({\boldsymbol}\xi_i^{\tau-1}, {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^{\tau}) $. In the sequel, we show that the statement also holds for $ \tau = t $. From , we have that, $$\label{eqqv1}
\begin{array}{r@{}l}
x_{i,t} &=\displaystyle A_{i,1}^t x_{i,1} + \sum_{\tau=1}^{t-1} \Big( A_{i,\tau+1}^t B_{i,\tau} {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\tau} + A_{i,\tau+1}^t D_{i,\tau} \Psi_{i,\tau}({\boldsymbol}\xi_i^{\tau-1}, {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^\tau) + A_{i,\tau+1}^t E_{i,\tau} \xi_{i,\tau} \Big) \\
&=: \chi_{i,t}({\boldsymbol}\xi_{i}^{t-1}, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}^{t-1})
\end{array}$$ where $ A_{i,\tau}^t = A_{i,\tau}A_{i,\tau+1}\dots A_{i,t-1} $ for $ \tau<t $ and $ A_{i,t}^t = I $. Moreover, it holds that, $$\label{eqq1v1}
\begin{array}{r@{}l}
\xi_{i,t-1} &= E_{i,t-1}^{+}\big(x_{i,t} - A_{i,t-1} x_{i,t-1} - B_{i,t-1} {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},t-1} - D_{i,t-1} \psi_{i,t-1}({\boldsymbol}x_i^{t-1}, {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^{t-1})\big) \\
&=: \rho_{i,t}({\boldsymbol}x_{i}^{t}, {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^{t-1})
\end{array}$$ where $ E^+_{i,t} := (E_{i,t}^\top E_{i,t})^{-1}E^\top_{i,t} $ is the left inverse of $E_{i,t}$, which exists since $E_{i,t}$ has full column rank.
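The disturbance-recovery step via the left inverse can be verified on random data; a minimal sketch with illustrative helper names:

```python
import numpy as np

def left_inverse(E):
    """E+ = (E^T E)^{-1} E^T; satisfies E+ @ E = I when E has full column rank."""
    return np.linalg.solve(E.T @ E, E.T)

def recover_disturbance(E, x_next, A, x, B, z, D, u):
    """Invert one step x_next = A x + B z + D u + E xi for the disturbance xi."""
    return left_inverse(E) @ (x_next - A @ x - B @ z - D @ u)
```

Since a random tall Gaussian matrix has full column rank almost surely, `left_inverse(E) @ E` recovers the identity and the reconstructed disturbance matches the true one.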
The relation implies that given a feasible policy $ \psi_{i,t}(\cdot) $ for [Problem ]{}, we can construct a feasible policy for [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{}as, $$\label{eqq2v1}
\psi_{i,t}({\boldsymbol}x_i^t, {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^t) = \psi_{i,t}(\chi_i^t({\boldsymbol}\xi_{i}^{t-1},{\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}^t), {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^t):= \Psi_{i,t}({\boldsymbol}\xi_i^{t-1}, {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^t)$$ The claim follows from the fact that the composition of continuously differentiable functions is itself continuously differentiable. Hence, the policy $ \psi_{i,t}(\cdot) $ will also be feasible in [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{} since the two problems have the same pointwise constraints. Additionally, they achieve the same objective value since they share the same objective function.
Similarly, the relation implies that given a feasible policy $ \Psi_{i}(\cdot) $ for [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{}, we can construct a feasible policy for [Problem ]{}as, $$\label{eqq3v1}
\Psi_{i,t}({\boldsymbol}\xi_i^{t-1}, {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^t) = \Psi_{i,t}(\rho_i^t({\boldsymbol}x_{i}^{t},{\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^{t-1}), {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^t) := \psi_{i,t}({\boldsymbol}x_i^t, {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^t)$$ The claim follows from the fact that the composition of continuously differentiable functions is itself continuously differentiable. Hence, the policy $ \Psi_{i,t}(\cdot) $ will also be feasible in [Problem ]{} since the two problems have the same pointwise constraints. Additionally, they achieve the same objective value since they share the same objective function.
We show that every feasible solution of [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{} is feasible in [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}. Let $ ({\boldsymbol}\Psi_i, {\mathcal}X_i) $ for all $ i \in {\mathcal}M $ be feasible in [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{}. Since the state of agent $i$ evolves according to , we can conclude that at time $t$ we have $$\begin{array}{r@{}l}
x_{i,t} &=\displaystyle A_{i,1}^t x_{i,1} + \sum_{\tau=1}^{t-1} \Big( A_{i,\tau+1}^t B_{i,\tau} {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\tau} + A_{i,\tau+1}^t D_{i,\tau} \Psi_{i,\tau}({\boldsymbol}\xi_i^{\tau-1}, {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^\tau) + A_{i,\tau+1}^t E_{i,\tau} \xi_{i,\tau} \Big)\\
&=: \chi_{i,t}({\boldsymbol}\xi_{i}^{t-1}, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}^{t-1})
\end{array}$$ where $ A_{i,\tau}^t = A_{i,\tau}A_{i,\tau+1}\dots A_{i,t-1} $ for $ \tau<t $ and $ A_{i,t}^t = I $. Note that $ {\boldsymbol}\chi_{i}({\boldsymbol}\xi_{i},{\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}) = [\chi_{i,t}({\boldsymbol}\xi_{i}^{t-1}, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}^{t-1})]_{t \in {\mathcal}T_+} \in {\mathcal}X_i $ for all $ {\boldsymbol}\xi_i \in \Xi_i $ and $ {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} \in {\mathcal}X_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}$ due to the feasibility of [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{}. To show that $ {\boldsymbol}\Psi_i $ is feasible in [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}, we first construct the state of agent $i$ which evolves according to ${\boldsymbol}x_{i} = f_{i}\big({\boldsymbol}x_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}, {\boldsymbol}\Psi_{i}({\boldsymbol}{\xi}_i, {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}), {\boldsymbol}\xi_{i}\big)$. Starting with $\hat \chi_{i,1} = x_{i,1}$, we have that $$\label{eq::c1}
\begin{array}{r@{}l}
x_{i,t} &=\displaystyle A_{i,1}^t x_{i,1} + \sum_{\tau=1}^{t-1} \Big(A_{i,\tau+1}^t B_{i,\tau} [\widehat \chi_{j,\tau}({\boldsymbol}\xi_{{{\scriptscriptstyle \overline{\mathcal N}_j \scriptstyle}}}^{\tau-1})]_{j\in {\mathcal}N_i} + A_{i,\tau+1}^t D_{i,\tau} \Psi_{i,\tau}({\boldsymbol}\xi_i^{\tau-1}, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}^{\tau}) + A_{i,\tau+1}^t E_{i,\tau} \xi_{i,\tau} \Big)\\
&=\displaystyle A_{i,1}^t x_{i,1} + \sum_{\tau=1}^{t-1} \Big(A_{i,\tau+1}^t B_{i,\tau} [\widehat \chi_{j,\tau}({\boldsymbol}\xi_{{{\scriptscriptstyle \overline{\mathcal N}_j \scriptstyle}}}^{\tau-1})]_{j\in {\mathcal}N_i} + A_{i,\tau+1}^t D_{i,\tau} \Psi_{i,\tau}({\boldsymbol}\xi_i^{\tau-1}, [\widehat{{\boldsymbol}\chi}_{j}^\tau ({\boldsymbol}\xi_{{{\scriptscriptstyle \overline{\mathcal N}_j \scriptstyle}}}^{\tau-1})]_{j\in {\mathcal}N_i}) + A_{i,\tau+1}^t E_{i,\tau} \xi_{i,\tau} \Big)\\
& = \chi_{i,t}({\boldsymbol}\xi_i^{t-1},[\widehat \chi^{t-1}_{j}({\boldsymbol}\xi_{{{\scriptscriptstyle \overline{\mathcal N}_j \scriptstyle}}}^{t-2})]_{j\in {\mathcal}N_i})\\
& =: \widehat\chi_{i,t}({\boldsymbol}\xi_{{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}}^{t-1}).
\end{array}$$ where the implication follows because ${\boldsymbol}{\widehat\chi}_{i}([{\boldsymbol}\xi_{j}^{t-1}]_{j\in {{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}}) = [\widehat\chi_{i,t}({\boldsymbol}\xi_{{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}}^{t-1})]_{t \in {\mathcal}T_+}\in{\mathcal}X_i$ for all $ {\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}\in \Xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}$ and $ i \in {\mathcal}M $. For each $i\in {\mathcal}M$, we consider the decision ${\boldsymbol}{\hat \Phi}_i\in{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}})$ defined through $$\label{eq::c2}
\widehat \Phi_{i,t}({\boldsymbol}\xi_{{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}}^{t-1}):= \Psi_{i,t}({\boldsymbol}\xi_i^{t-1},[\widehat \chi^{t}_{j}({\boldsymbol}\xi_{{{\scriptscriptstyle \overline{\mathcal N}_j \scriptstyle}}}^{t-1})]_{j\in {\mathcal}N_i})$$ Notice that defines a valid policy construction since ${\boldsymbol}{\widehat\chi}_{i}([{\boldsymbol}\xi_{j}^{t-1}]_{j\in {{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}})\in{\mathcal}X_i$ for all $ {\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}\in \Xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}$. It remains to show that ${\boldsymbol}\Psi_{i}$ is also feasible for the constraints of [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}. We do so via the following chain of implications: $$\begin{array}{rll}
& \big({\boldsymbol}\chi_{i}({\boldsymbol}\xi_{i},{\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\boldsymbol}\Psi_{i}({\boldsymbol}{\xi}_i,{\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}})\big) \in {\mathcal}O_i, & \forall {\boldsymbol}\xi_i \in {\Xi}_i, \forall {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {\mathcal}X_{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\\
\implies & \big({\boldsymbol}\chi_{i}({\boldsymbol}\xi_{i},[{\boldsymbol}{\widehat\chi}_{j}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_j \scriptstyle}})]_{j \in {\mathcal}N_i}), {\boldsymbol}\Psi_{i}({\boldsymbol}{\xi}_i,[{\boldsymbol}{\widehat\chi}_{j}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_j \scriptstyle}})]_{j \in {\mathcal}N_i})\big) \in {\mathcal}O_i, & \forall {\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}\in {\Xi}_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}, \\
\implies & \big({\boldsymbol}{\widehat\chi}_{i}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}), {\boldsymbol}{\widehat\Phi}_{i}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}})\big) \in {\mathcal}O_i, & \forall {\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}\in {\Xi}_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}},
\end{array}$$ The first implication follows from and the fact that ${\boldsymbol}{\widehat\chi}_{i}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}) \in {\mathcal}X_i$ for all ${\boldsymbol}{\xi}_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}\in\Xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}$, while the second implication follows from and . Finally, this feasible solution attains a value for the objective function of [Problem $(\text{L}:{\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F({\mathbb}R^{N_{x,i}}))$]{} which is equal to or larger than the value attained for the objective function of [Problem $(\text{PN}:{\mathcal}{SC}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}))$]{}, that is: $$\begin{array}{@{\,}r@{\,}l@{\,}}
&\sum\limits_{i = 1}^M \max\limits_{{\boldsymbol}\xi_i \in \Xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} \in {\mathcal}X_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}}
J_i\big({\boldsymbol}\chi_{i}({\boldsymbol}\xi_{i},{\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\boldsymbol}\Psi_{i}({\boldsymbol}{\xi}_i,{\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}})\big)\\
\geq& \sum\limits_{i = 1}^M \max\limits_{{\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}\in \Xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}}\; J_i\big({\boldsymbol}\chi_{i}({\boldsymbol}\xi_{i},[{\boldsymbol}{\widehat\chi}_{j}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_j \scriptstyle}})]_{j \in {\mathcal}N_i}), {\boldsymbol}\Psi_{i}({\boldsymbol}{\xi}_i,[{\boldsymbol}{\widehat\chi}_{j}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_j \scriptstyle}})]_{j \in {\mathcal}N_i})\big) \\
=& \sum\limits_{i = 1}^M \max\limits_{{\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}\in \Xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}}\; J_i\big({\boldsymbol}{\widehat\chi}_{i}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}), {\boldsymbol}{\widehat\Phi}_{i}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}})\big)
\end{array}$$ where the inequality follows from and the fact that ${\boldsymbol}{\widehat\chi}_{i}({\boldsymbol}\xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}) \in {\mathcal}X_i$ for all ${\boldsymbol}{\xi}_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}\in\Xi_{{\scriptscriptstyle \overline{\mathcal N}_i \scriptstyle}}$, while the equality follows from and .
We consider the 3-Satisfiability problem (3-SAT) over a set $ N = \{1,\ldots,n\} $ of literals and $ m $ clauses, which seeks a solution $ x \in \{0,1\}^{n} $ that satisfies $$\label{3sat}
x_{i,1} + x_{i,2} + (1-x_{i,3}) \ge 1, \forall i = 1,\ldots,m$$ where $ x_{i,1}, x_{i,2}, x_{i,3} \in \{0,1\} $ are auxiliary variables identifying the literals appearing in the $ i $-th clause, which allow us to express the conjunction of the $ m $ clauses, each a disjunction of three literals, as a system of linear inequalities [@Cook1971].
We also consider the following robust optimization problem with decision-dependent uncertainty set: $$\label{problem_3sat}
\begin{array}{@{}l@{}}
\min \sum_{i=1}^m \max(-\alpha_i) \\
\left.\begin{array}{r@{\;}l@{}}
{{\textrm{s.t.}}}& x_{i,1}, x_{i,2} \ge 0, \\
& \alpha_i \in {\mathcal}X_i(x_{i,1}, x_{i,2}) = \{\alpha_i: \alpha_i \ge x_{i,1}, \alpha_i \ge x_{i,2}, \alpha_i \leq 1\},\\
& (1-x_{i,3}) \leq \alpha_i, \;\forall \alpha_i \in {\mathcal}X_i(x_{i,1}, x_{i,2}),\\
& x_{i,3} \ge 0, x_{i,3} \leq 1,
\end{array}\right \rbrace \begin{array}{@{}l}
\forall i = 1,\ldots,m.
\end{array}
\end{array}$$ where $ \alpha_i \in {\mathbb}R $ are auxiliary decision variables. If we assign the decision variables $ x_{i,1},x_{i,2}, \alpha_i $ to agent $ i_1 $ and $ x_{i,3} $ to agent $ i_2 $ with $ i = 1,\ldots,m $, then it is easy to verify that Problem is an instance of [Problem $(\text{L}:{\mathcal}{SC}_a({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\mathcal}F_{{{\scriptscriptstyle {\mathcal}{CC} \scriptstyle}}}({\mathbb}R^{N_{x,i}}))$]{} involving $ 2m $ agents. We now prove, using arguments similar to those developed in [@Nohadani2016 Thm. 1], that the optimal value of Problem is $ -m $ if and only if the 3-SAT problem in has a solution. We do so as follows: $ (i) $ we assume that the 3-SAT problem has a solution, i.e., there exists $ x \in \{0,1\}^{n} $ such that is satisfied. This implies that at least one of the terms $ x_{i,1} $, $ x_{i,2} $ or $ 1-x_{i,3} $ equals one, hence $ \alpha_i = 1 $ for all $ i = 1,\ldots, m $ due to the constraints in Problem . This shows that the value $ -m $ is attained, so the optimal objective value of Problem is at most $ -m $; $ (ii) $ we now prove by contradiction that the optimal objective of Problem cannot be smaller than $ -m $. If it were, then there would have to exist an $ \alpha_i $ with $ \alpha_i > 1 $, which contradicts the constraint $ \alpha_i \leq 1 $. Hence, if Problem is solved to optimality, i.e., $ \sum_{i=1}^m \max(-\alpha_i) = -m $, then $ \alpha_i = 1 $ for all $ i = 1, \ldots, m $. In this case, at least one of the terms $ x_{i,1} $, $ x_{i,2} $ or $ 1-x_{i,3} $ equals one; otherwise the uncertainty set $ {\mathcal}X_i $ is not a singleton and $ \alpha_i = \max\{x_{i,1},x_{i,2},1-x_{i,3}\} < 1 $ is also valid, which would give an optimal objective $ \sum_{i=1}^m \max(-\alpha_i) > -m $. This, however, contradicts the assumption that Problem is solved to optimality.
Therefore, solving the optimization Problem is equivalent to finding a feasible solution for the 3-SAT problem. However, as shown in [@Cook1971], deciding feasibility of the 3-SAT problem is NP-complete; hence the optimization Problem is NP-hard.
The recession cone of a set $ {\mathcal}S_i $ is defined as $ \textrm{recc}({\mathcal}S_i) = \{{\boldsymbol}\nu_i \in {\mathbb}R^{n_x^i}: s_i + \lambda \nu_i \in {\mathcal}S_i, \forall s_i \in {\mathcal}S_i, \,\lambda \ge 0 \} $ [@Gorissen2014]. The fact that $ {\mathcal}S_i $ is bounded implies that the recession cone of $ {\mathcal}S_i $ is trivial, i.e., $ \textrm{recc}({\mathcal}S_i) = \{0\} $. We now show that, $${\mathcal}X_{{{\scriptscriptstyle {\mathcal}{FS} \scriptstyle}}} = \Big\{({\boldsymbol}x_i, y_i, z_i)\,:\, \exists {\boldsymbol}s_i \in\mathbb{R}^{N_{x,i}} \text{ s.t. }{\boldsymbol}x_{i} = \sum_{k=1}^{K_i} y_{i,k} P_{i,k} {\boldsymbol}s_{i} + z_{i},\; G_{i,k} P_{i,k} {\boldsymbol}s_{i} \preceq_{{\mathcal}K_{i,k}} g_{i,k},\; k=1,\ldots,K_i \Big\}$$ is equivalent to $${\widehat{{\mathcal}X}}_{{{\scriptscriptstyle {\mathcal}{FS} \scriptstyle}}} =\Big\{({\boldsymbol}x_i, y_i, z_i)\,:\, \exists {\boldsymbol}\nu_{i,k}\in\mathbb{R}^{N_{x,i}} \text{ s.t. } {\boldsymbol}x_{i} = \sum_{k=1}^{K_i} P_{i,k} {\boldsymbol}\nu_{i,k} + z_{i},\; G_{i,k} P_{i,k} {\boldsymbol}\nu_{i,k} \preceq_{{\mathcal}K_{i,k}} y_{i,k} g_{i,k},\;k=1,\ldots,K_i \Big\}$$ It is easy to verify that this equivalence holds whenever the $ y_{i,k} $ are positive scalars, by using the substitution ${\boldsymbol}{\nu}_{i,k} = y_{i,k} {\boldsymbol}{s}_i$. In the case that some $ y_{i,k} = 0 $, it remains to show that the only feasible choice is ${\boldsymbol}{\nu}_{i,k} = 0 $, so that the equality ${\boldsymbol}{\nu}_{i,k} = y_{i,k} {\boldsymbol}{s}_i$ still holds. Assume that this is not the case, i.e., there exists $ {\boldsymbol}{\nu}_{i,k} \neq 0 $. Then $ {\boldsymbol}{\nu}_{i,k} \in \textrm{recc}({\mathcal}S_i) $, which means that $ {\mathcal}S_i $ recedes in the direction of $ {\boldsymbol}{\nu}_{i,k} $. However, this contradicts the boundedness of $ {\mathcal}S_i $.
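The substitution $ {\boldsymbol}{\nu}_{i,k} = y_{i,k} {\boldsymbol}{s}_i $, and the role of the trivial recession cone when $ y_{i,k} = 0 $, can be illustrated on a one-dimensional bounded set. The set $ S = [0,1] $, its inequality description, and the numerical tolerance below are assumptions made for this sketch:

```python
import numpy as np

# Bounded 1-D set S = {s : 0 <= s <= 1}, written as G s <= g.
G = np.array([[1.0], [-1.0]])
g = np.array([1.0, 0.0])

def in_scaled_set(nu, y):
    """Check the substituted constraint G @ nu <= y * g."""
    return bool(np.all(G @ nu <= y * g + 1e-12))

# y > 0: nu = y*s is feasible for every s in S (and conversely nu/y in S).
y = 0.7
for s in np.linspace(0.0, 1.0, 11):
    assert in_scaled_set(np.array([y * s]), y)

# y = 0: boundedness of S (trivial recession cone) forces nu = 0.
assert in_scaled_set(np.array([0.0]), 0.0)
assert not in_scaled_set(np.array([0.3]), 0.0)
assert not in_scaled_set(np.array([-0.3]), 0.0)
```

If $ S $ were unbounded, say $ S = [0, \infty) $, any $ \nu \ge 0 $ would satisfy the scaled constraint at $ y = 0 $, which is exactly the failure mode the recession-cone argument rules out.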
Preliminaries for the proof of Theorem \[thm::5\] {#preliminaries-for-the-proof-of-theorem-thm5 .unnumbered}
-------------------------------------------------
The following lemma relates constraint satisfaction in [Problem ]{}to constraint satisfaction in [Problem ]{}.
\[lem::1\] Given $ y_i $ and $ z_i $ such that $ {\widehat{{\mathcal}X}}_i(y_i, z_i) $ is nonempty, for any two functions $ f_{i,t} $ and $ g_{i,t} $ it holds that: $$\label{eq::op}
\begin{array}{r@{\,}ll}
& f_{i,t}({\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^t, {\boldsymbol}\xi_i^t) \leq 0,& \forall {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {\widehat{{\mathcal}X}}_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}(y_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}, z_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}), \,\forall {\boldsymbol}\xi_i \in \Xi_i, \\
\Rightarrow & f_{i,t}\big([L_j^t({\boldsymbol}s_j^t)]_{j\in {\mathcal}N_i}, {\boldsymbol}\xi_i^t\big) \leq 0, &\forall {\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {\mathcal}S_{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\,\forall {\boldsymbol}\xi_i \in \Xi_i,
\end{array}$$ and $$\label{eq::opInv}
\begin{array}{r@{\,}ll}
& g_{i,t}({\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^t, {\boldsymbol}\xi_i^t)\leq 0,& \forall {\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {\mathcal}S_{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\,\forall {\boldsymbol}\xi_i \in \Xi_i, \\
\Rightarrow & g_{i,t}\big([R_j^t({{\boldsymbol}\zeta_j^t})]_{j\in {\mathcal}N_i}, {\boldsymbol}\xi_i^t\big) \leq 0, & \forall {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {\widehat{{\mathcal}X}}_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}(y_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}, z_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}), \,\forall {\boldsymbol}\xi_i \in \Xi_i.
\end{array}$$
We prove by contradiction. Assume that $ f_{i,t}({\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^t, {\boldsymbol}\xi_i^t) \leq 0, \forall {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {\widehat{{\mathcal}X}}_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}(y_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}, z_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}), \,\forall {\boldsymbol}\xi_i \in \Xi_i, $ and that there exists $ {\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {\mathcal}S_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}$ such that $ f_{i,t}\big([L_j^t({\boldsymbol}s_j^t)]_{j\in {\mathcal}N_i}, {\boldsymbol}\xi_i^t\big) > 0 $. Considering that $ [L_j^t({\boldsymbol}s_j^t)]_{j \in {\mathcal}N_i} \in {\widehat{{\mathcal}X}}_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}(y_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}, z_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}) $ for all $ {\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {\mathcal}S_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}$ by construction, this leads to a contradiction. The proof of follows similar arguments.
We show that every feasible solution of [Problem ]{}is feasible in [Problem ]{}. Let $ ({\widehat{{\boldsymbol}\Gamma}}_i, {\widehat{{\mathcal}X}}_i) $ for all $ i \in {\mathcal}M $ be a feasible solution in [Problem ]{}. Since the state of agent $i$ evolves according to , we can conclude that at time $t$ we have $$\begin{array}{r@{}l}
x_{i,t} &=\displaystyle A_{i,1}^t x_{i,1} + \sum_{\tau=1}^{t-1} \Big(A_{i,\tau+1}^t B_{i,\tau} (Y_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\tau} {\boldsymbol}s_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\tau} + z_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\tau}) + A_{i,\tau+1}^t D_{i,\tau} {\widehat{\Gamma}}_{i,\tau}({\boldsymbol}\xi_i^{\tau-1}, {\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^\tau) + A_{i,\tau+1}^t E_{i,\tau} \xi_{i,\tau} \Big) \\
&=: {\widehat{\chi}}_{i,t}({\boldsymbol}\xi_{i}^{t-1}, {\boldsymbol}s_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}^{t-1})
\end{array}$$ where $ A_{i,\tau}^t = A_{i,\tau}A_{i,\tau+1}\dots A_{i,t-1} $ for $ \tau<t $ and $ A_{i,t}^t = I $. To show that $ {\widehat{{\boldsymbol}\Gamma}}_i $ is feasible in [Problem ]{}, we first construct the state of agent $i$ which evolves according to ${\boldsymbol}x_{i} = f_{i}\big({\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}, {\widehat{{\boldsymbol}\Gamma}}_{i}({\boldsymbol}{\xi}_i,{\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}), {\boldsymbol}\xi_{i}\big)$. Starting with $\widetilde \chi_{i,1} = x_{i,1}$, we have that $$\label{eq::c1_p1}
\begin{array}{r@{}l}
x_{i,t} &=\displaystyle A_{i,1}^t x_{i,1} + \sum_{\tau=1}^{t-1} \Big(A_{i,\tau+1}^t B_{i,\tau} {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\tau} + A_{i,\tau+1}^t D_{i,\tau} {\widehat{\Gamma}}_{i,\tau}({\boldsymbol}\xi_i^{\tau-1}, {\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^\tau) + A_{i,\tau+1}^t E_{i,\tau} \xi_{i,\tau} \Big) \\
&=\displaystyle A_{i,1}^t x_{i,1} + \sum_{\tau=1}^{t-1} \Big(A_{i,\tau+1}^t B_{i,\tau} {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\tau} + A_{i,\tau+1}^t D_{i,\tau} {\widehat{\Gamma}}_{i,\tau}({\boldsymbol}\xi_i^{\tau-1}, [R^{\tau}_j({\boldsymbol}\zeta_j^{\tau})]_{j\in {\mathcal}N_i}) + A_{i,\tau+1}^t E_{i,\tau} \xi_{i,\tau} \Big)\\
& = {\widehat{\chi}}_{i,t}\left({\boldsymbol}\xi_i^{t-1},[R^{t-1}_j({\boldsymbol}\zeta_j^{t-1})]_{j\in {\mathcal}N_i}\right)\\
& =: {\widetilde{\chi}}_{i,t}\left({\boldsymbol}\xi_i^{t-1},{\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^{t-1}\right).
\end{array}$$ where the equalities follow from the mapping . For each $i\in {\mathcal}M$, we consider the decision $ {\widetilde{{\boldsymbol}\Psi}}_{i}(\cdot) \in {\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}) $ defined through $$\label{eq::c2_p1}
{\widetilde{\Psi}}_{i,t}\left({\boldsymbol}\xi_i^{t-1},{\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^{t}\right)= {\widehat{\Gamma}}_{i,t}\left({\boldsymbol}\xi_i^{t-1},[R^{t}_j({\boldsymbol}\zeta_j^{t})]_{j\in {\mathcal}N_i}\right).$$ Notice that defines a valid policy construction due to the mapping . It remains to show that ${\widehat{{\boldsymbol}\Gamma}}_{i}$ is also feasible for the constraints of [Problem ]{}. We do so via the following chain of implications: $$\begin{array}{rll}
& \big({\widehat{{\boldsymbol}\chi}}_{i}({\boldsymbol}\xi_{i},{\boldsymbol}s_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\widehat{{\boldsymbol}\Gamma}}_{i}({\boldsymbol}{\xi}_i,{\boldsymbol}s_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}})\big) \in {\mathcal}O_i, & \forall {\boldsymbol}\xi_i \in {\Xi}_i, \forall {\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {\mathcal}S_{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\\
\implies & \big({\widehat{{\boldsymbol}\chi}}_{i}({\boldsymbol}\xi_{i},[R_j({\boldsymbol}\zeta_j)]_{j \in {\mathcal}N_i}),{\widehat{{\boldsymbol}\Gamma}}_{i}({\boldsymbol}{\xi}_i,[R_j({\boldsymbol}\zeta_j)]_{j \in {\mathcal}N_i})\big) \in {\mathcal}O_i,& \forall {\boldsymbol}\xi_i \in {\Xi}_i, \forall {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {\widehat{{\mathcal}X}}_{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\\
\implies &\big({\widetilde{{\boldsymbol}\chi}}_{i}({\boldsymbol}\xi_i,{\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}),{\widetilde{{\boldsymbol}\Psi}}_{i}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}})\big) \in {\mathcal}O_i,& \forall {\boldsymbol}\xi_i \in {\Xi}_i, \forall {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {\widehat{{\mathcal}X}}_{{\scriptscriptstyle \mathcal N_i \scriptstyle}},
\end{array}$$ where the implications directly follow from and , and Lemma \[lem::1\]. The same reasoning applies to all constraints in the problem formulation. This feasible solution attains the same value, $ \ell $, for the objective functions of [Problem ]{}and [Problem ]{}, that is: $$\begin{array}{rl}
\ell & = \sum\limits_{i = 1}^M \max\limits_{{\boldsymbol}\xi_i \in \Xi, {\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {\mathcal}S_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} J_i\big({\widehat{{\boldsymbol}\chi}}_{i}({\boldsymbol}\xi_{i},{\boldsymbol}s_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\widehat{{\boldsymbol}\Gamma}}_{i}({\boldsymbol}{\xi}_i,{\boldsymbol}s_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}})\big) \\& = \left \lbrace \begin{array}{l}
J_i\big({\widehat{{\boldsymbol}\chi}}_{i}({\boldsymbol}\xi_{i},{\boldsymbol}s_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\widehat{{\boldsymbol}\Gamma}}_{i}({\boldsymbol}{\xi}_i,{\boldsymbol}s_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}})\big) \leq\ell_i,~~ \forall {\boldsymbol}\xi_i \in {\Xi}_i, \forall {\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {\mathcal}S_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\\
\sum_{i = 1}^{M} \ell_i = \ell,
\end{array} \right \rbrace \\
& = \left \lbrace \begin{array}{l}
J_i\big({\widehat{{\boldsymbol}\chi}}_{i}({\boldsymbol}\xi_{i},[R_j({\boldsymbol}\zeta_j)]_{j \in {\mathcal}N_i}),{\widehat{{\boldsymbol}\Gamma}}_{i}({\boldsymbol}{\xi}_i,[R_j({\boldsymbol}\zeta_j)]_{j \in {\mathcal}N_i})\big) \leq\ell_i,~~ \forall {\boldsymbol}\xi_i \in {\Xi}_i, \forall {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {\widehat{{\mathcal}X}}_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\\
\sum_{i = 1}^{M} \ell_i = \ell,
\end{array} \right \rbrace \\
& = \sum\limits_{i = 1}^M \max\limits_{{\boldsymbol}\xi_i \in {\Xi}_i, {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {\widehat{{\mathcal}X}}_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} J_i\big({\widetilde{{\boldsymbol}\chi}}_{i}({\boldsymbol}\xi_i,{\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}),{\widetilde{{\boldsymbol}\Psi}}_{i}({\boldsymbol}\xi_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}})\big) = \ell.
\end{array}$$ The equalities directly follow from and , and Lemma \[lem::1\].
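The closed-form rollout $ {\widehat{\chi}}_{i,t} $ used in both directions of the proof is just the unrolled linear recursion. A minimal numerical sketch (with randomly generated time-varying matrices, the per-step inputs lumped into one vector, and the transition product ordered so that it matches the recursion $ x_{t+1} = A_t x_t + w_t $) confirms that the recursion and the closed form agree:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 5, 3
A = [rng.standard_normal((n, n)) for _ in range(T - 1)]  # A_1 .. A_{T-1}
w = [rng.standard_normal(n) for _ in range(T - 1)]       # B u + E xi lumped per step
x1 = rng.standard_normal(n)

def A_prod(tau, t):
    """Transition product mapping time tau to time t, with A_t^t = I.
    Ordered as A_{t-1} ... A_{tau} so that it matches the recursion."""
    M = np.eye(n)
    for k in range(t - 1, tau - 1, -1):   # k = t-1 down to tau
        M = M @ A[k - 1]
    return M

# Recursion x_{t+1} = A_t x_t + w_t, rolled out to time T.
x = x1.copy()
for t in range(1, T):
    x = A[t - 1] @ x + w[t - 1]

# Closed form: x_T = A_1^T x_1 + sum_{tau=1}^{T-1} A_{tau+1}^T w_tau.
x_closed = A_prod(1, T) @ x1 + sum(A_prod(tau + 1, T) @ w[tau - 1]
                                   for tau in range(1, T))
assert np.allclose(x, x_closed)
```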
Similarly, we now show that every feasible solution of [Problem ]{}is feasible in [Problem ]{}. Let $ ({\widetilde{{\boldsymbol}\Psi}}_i, {\widehat{{\mathcal}X}}_i) $ for all $ i \in {\mathcal}M $ be feasible in [Problem ]{}. Since the state of agent $i$ evolves according to , we can conclude that at time $t$ we have $$\begin{array}{r@{}l}
x_{i,t} &=\displaystyle A_{i,1}^t x_{i,1} + \sum_{\tau=1}^{t-1} \Big(A_{i,\tau+1}^t B_{i,\tau} {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\tau} + A_{i,\tau+1}^t D_{i,\tau} {\widetilde{\Psi}}_{i,\tau}({\boldsymbol}\xi_i^{\tau-1}, {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^\tau) + A_{i,\tau+1}^t E_{i,\tau} \xi_{i,\tau} \Big)\\
&=: {\widetilde{\chi}}_{i,t}({\boldsymbol}\xi_{i}^{t-1}, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}^{t-1})
\end{array}$$ To show that $ {\widetilde{{\boldsymbol}\Psi}}_i $ is feasible in [Problem ]{}, we first construct the state of agent $i$ which evolves according to ${\boldsymbol}x_{i} = f_{i}\big(Y_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} {\boldsymbol}s_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} + z_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}},{\widetilde{{\boldsymbol}\Psi}}_{i}({\boldsymbol}{\xi}_i, {\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}), {\boldsymbol}\xi_{i}\big)$. Starting with $\widehat \chi_{i,1} = x_{i,1}$, we have that $$\label{eq::c1_p2}
\begin{array}{r@{}l}
x_{i,t}&=\displaystyle A_{i,1}^t x_{i,1} + \sum_{\tau=1}^{t-1} \Big(A_{i,\tau+1}^t B_{i,\tau} (Y_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\tau} {\boldsymbol}s_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\tau} + z_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\tau}) + A_{i,\tau+1}^t D_{i,\tau} {\widetilde{\Psi}}_{i,\tau}({\boldsymbol}\xi_i^{\tau-1},{\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}^{\tau}) + A_{i,\tau+1}^t E_{i,\tau} \xi_{i,\tau} \Big) \\
&=\displaystyle A_{i,1}^t x_{i,1} + \sum_{\tau=1}^{t-1} \Big(A_{i,\tau+1}^t B_{i,\tau} [L_{j,\tau}({\boldsymbol}s_{j,\tau})]_{j\in {\mathcal}N_i} + A_{i,\tau+1}^t D_{i,\tau} {\widetilde{\Psi}}_{i,\tau}({\boldsymbol}\xi_i^{\tau-1},[L^{\tau}_j({\boldsymbol}s_j^{\tau})]_{j\in {\mathcal}N_i}) + A_{i,\tau+1}^t E_{i,\tau} \xi_{i,\tau} \Big) \\
& = {\widetilde{\chi}}_{i,t}\left({\boldsymbol}\xi_i^{t-1},[L^{t-1}_j({\boldsymbol}s_j^{t-1})]_{j\in {\mathcal}N_i}\right)\\
& =: {\widehat{\chi}}_{i,t}\left({\boldsymbol}\xi_i^{t-1},{\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^{t-1}\right).
\end{array}$$ where the equalities follow from the mapping . For each $i\in {\mathcal}M$, we consider the decision $ {\widehat{{\boldsymbol}\Gamma}}_{i}(\cdot) \in {\mathcal}{SC}({\boldsymbol}\xi_i, {\boldsymbol}s_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}) $ defined through $$\label{eq::c2_p2}
{\widehat{\Gamma}}_{i,t}\left({\boldsymbol}\xi_i^{t-1},{\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}^{t}\right)= {\widetilde{\Psi}}_{i,t}\left({\boldsymbol}\xi_i^{t-1},[L^{t}_j({\boldsymbol}s_j^{t})]_{j\in {\mathcal}N_i}\right).$$ Notice that defines a valid policy construction due to the mapping . It remains to show that ${\widehat{{\boldsymbol}\Gamma}}_{i}$ is also feasible for the constraints of [Problem ]{}. We do so via the following chain of implications: $$\begin{array}{rll}
& \big({\widetilde{{\boldsymbol}\chi}}_{i}({\boldsymbol}\xi_{i},{\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}) ,{\widetilde{{\boldsymbol}\Psi}}_{i}({\boldsymbol}{\xi}_i,{\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}})\big) \in {\mathcal}O_i,& \forall {\boldsymbol}\xi_i \in {\Xi}_i, \forall {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {\widehat{{\mathcal}X}}_{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\\
\implies & \big({\widetilde{{\boldsymbol}\chi}}_{i}({\boldsymbol}\xi_{i},[L_j({\boldsymbol}s_j)]_{j \in {\mathcal}N_i}), {\widetilde{{\boldsymbol}\Psi}}_{i}({\boldsymbol}{\xi}_i,[L_j({\boldsymbol}s_j)]_{j \in {\mathcal}N_i})\big) \in {\mathcal}O_i,& \forall {\boldsymbol}\xi_i \in {\Xi}_i, \forall {\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {{\mathcal}S}_{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\\
\implies &\big({\widehat{{\boldsymbol}\chi}}_{i}({\boldsymbol}\xi_i,{\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}), {\widehat{{\boldsymbol}\Gamma}}_{i}({\boldsymbol}\xi_i, {\boldsymbol}s_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}})\big) \in {\mathcal}O_i, & \forall {\boldsymbol}\xi_i \in {\Xi}_i, \forall {\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {{\mathcal}S}_{{\scriptscriptstyle \mathcal N_i \scriptstyle}},
\end{array}$$ where the implications directly follow from and , and Lemma \[lem::1\]. The same reasoning applies to all constraints in the problem formulation. This feasible solution attains the same value, $ \ell $, for the objective functions of [Problem ]{}and [Problem ]{}, that is: $$\begin{array}{rl}
\ell & = \sum\limits_{i = 1}^M \max\limits_{{\boldsymbol}\xi_i \in {\Xi}_i, {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {\widehat{{\mathcal}X}}_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} J_i\big({\widetilde{{\boldsymbol}\chi}}_{i}({\boldsymbol}\xi_{i},{\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}) ,{\widetilde{{\boldsymbol}\Psi}}_{i}({\boldsymbol}{\xi}_i,{\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}})\big) \\& = \left \lbrace \begin{array}{l}
J_i\big({\widetilde{{\boldsymbol}\chi}}_{i}({\boldsymbol}\xi_{i},{\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}}) ,{\widetilde{{\boldsymbol}\Psi}}_{i}({\boldsymbol}{\xi}_i,{\boldsymbol}\zeta_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}})\big) \leq\ell_i,~~ \forall {\boldsymbol}\xi_i \in {\Xi}_i, \forall {\boldsymbol}\zeta_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {\widehat{{\mathcal}X}}_{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\\
\sum_{i = 1}^{M} \ell_i = \ell,
\end{array} \right \rbrace \\
& = \left \lbrace \begin{array}{l}
J_i\big({\widetilde{{\boldsymbol}\chi}}_{i}({\boldsymbol}\xi_{i},[L_j({\boldsymbol}s_j)]_{j \in {\mathcal}N_i}), {\widetilde{{\boldsymbol}\Psi}}_{i}({\boldsymbol}{\xi}_i,[L_j({\boldsymbol}s_j)]_{j \in {\mathcal}N_i})\big) \leq\ell_i,~~ \forall {\boldsymbol}\xi_i \in {\Xi}_i, \forall {\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {{\mathcal}S}_{{\scriptscriptstyle \mathcal N_i \scriptstyle}},\\
\sum_{i = 1}^{M} \ell_i = \ell,
\end{array} \right \rbrace \\
& = \sum\limits_{i = 1}^M \max\limits_{{\boldsymbol}\xi_i \in {\Xi}_i, {\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}\in {{\mathcal}S}_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}} J_i\big({\widehat{{\boldsymbol}\chi}}_{i}({\boldsymbol}\xi_i,{\boldsymbol}s_{{\scriptscriptstyle \mathcal N_i \scriptstyle}}), {\widehat{{\boldsymbol}\Gamma}}_{i}({\boldsymbol}\xi_i, {\boldsymbol}s_{{{\scriptscriptstyle \mathcal N_i \scriptstyle}}})\big) = \ell.
\end{array}$$ The equalities directly follow from and , and Lemma \[lem::1\].
---
abstract: 'The spatial resolution along the pad-row direction was measured with a GEM-based TPC prototype for the future linear collider experiment in order to understand its performance for tracks with finite projected angles with respect to the pad-row normal. The degradation of the resolution due to the angular pad effect was confirmed to be consistent with the prediction of a simple calculation taking into account the cluster-size distribution and the avalanche fluctuation.'
title: |
Cosmic ray tests of a GEM-based TPC prototype\
operated in Ar-CF$_4$-isobutane gas mixtures: II
---
TPC,ILC,GEM,CF$_4$,Spatial resolution,Angular pad effect
29.40.Cs, 29.40.Gx
Introduction
============
In the previous paper [@ref1] we demonstrated the feasibility of a GEM-based Time Projection Chamber (TPC) operated in an Ar-CF$_4$-isobutane gas mixture as a central tracker (LCTPC) for the future linear collider experiments (ILC [@refLC1] and CLIC [@refLC2]). The spatial resolution along the pad-row direction was presented for tracks nearly perpendicular to the pad row in Ref. [@ref1] because the resolution in the $r$-$\phi$ plane better than $\sim$ 100 $\mu$m per pad row for stiff and radial tracks is of prime importance for the physics goals of the experiments. A TPC equipped with GEM readout is certainly an ideal main tracker, which is free from the $E \times B$ and the angular wire effects inherent in conventional TPCs with MWPC readout.
It should be noted, however, that the azimuthal resolution degrades with increasing projected track angle ($\phi$) measured from the pad-row normal because of the angular pad effect, as long as conventional pads are employed for readout. As will be seen, the angular pad effect adds an almost constant offset to the resolution, its amount depending on the pad height as well as the track angle. Therefore the requirement on the spatial resolution above would not be met for inclined tracks. The degraded resolution for slanted and/or low-momentum tracks provided by the central tracker could affect the physics capability of the whole detector system. An example is the need for good energy resolution for soft jets; it is therefore important to understand how such measurements are affected by the design of the TPC.
In this paper the resolutions measured with cosmic rays for inclined tracks in a prototype TPC are presented and compared to the expectation in order to provide a basis for the optimization of the pad height of the LCTPC.
The expected deterioration of the resolution for inclined tracks, compared to that for right angle tracks, is estimated in Section 2. The comparison of the measured resolution with the expectation is presented in Section 3 after a brief description of the experiment. Section 4 is devoted to a discussion and Section 5 concludes the paper.
Expectation
===========
For right angle tracks ($\phi = 0^\circ$) the resolution along the pad-row direction ($\sigma_{\rm X}$) is approximately given by
$$\sigma_{\rm X}^2 = \sigma_{\rm X00}^2 + \frac{D^2}{n_{\rm eff}} \cdot z$$
where $\sigma_{\rm X00}$ is the intrinsic resolution[^1], $D$ is the diffusion constant, $n_{\rm eff}$ is the effective number of electrons per pad row, and $z$ is the drift distance [@ref2][^2]. It is worth noting that the value of $n_{\rm eff}$ is almost independent of the drift distance [@ref3][^3]. Even in the case of finite track angle the explicit drift-distance dependence (the second term) of the resolution is scarcely affected by practically small $\phi$ [@ref4]. The first term is, on the other hand, sensitive to the track angle. It may be expressed as
$$\sigma_{\rm X0}^2 = \sigma_{\rm X00}^2 + \frac{h^2 \cdot \tan^2 \phi}
{12 \cdot N_{\rm eff}}$$
where $h$ is the pad height[^4] and $N_{\rm eff}$ is the effective number of [*clusters*]{} per pad row (see footnote 3). The second term in Eq. (2) represents the contribution of the angular pad effect to the resolution, which is parametrized by $N_{\rm eff}$.
In fact, $N_{\rm eff}$ is a function of $\phi$, $\theta$, $z$ and $h$:
$$N_{\rm eff} = N_{\rm eff}(\phi, \theta, z, h)$$
where $\theta$ is the angle between the track and the readout pad plane[^5]. Let us consider first the $h$ dependence of $N_{\rm eff}$, i.e. $N_{\rm eff}(0, 0, z, h)$. The average number of clusters per pad row ($\left< N \right>$) is proportional to $h$. Furthermore the $z$ dependence of the effective number of clusters due to de-clustering is expected to be small [@ref4][^6]. Accordingly
$$N_{\rm eff}(0, 0, z, h) \sim N_{\rm eff}(0, 0, 0, h)
= N_{\rm eff}(\left< N \right>) \;.$$
For a fixed number of clusters $N$, $N_{\rm eff}$ is given by
$$N_{\rm eff}(N) = \left< \sum_{i=1}^N Q_i^2 \; /
\left( \sum_{i=1}^N Q_i \right) ^2 \right> ^{-1}$$
where $Q_i$ is the total charge of the cluster $i$ given by
$$Q_i = \sum_{j=1}^{n_i} q_j$$
with $q_j$ being the amplified signal of the $j$-th electron in the cluster $i$ of size $n_i$ (see Appendix). $N_{\rm eff}$ was estimated by numerical calculations, taking into account the cluster-size distribution for argon [@ref5], and is shown in Fig. 1, with (filled circles) and without (open circles) the typical avalanche fluctuation (a Polya distribution with $\theta =
0.5$[^7]) for each electron. The figure tells us that the effective number of clusters is considerably smaller than $N$ because of the large cluster-size fluctuation. Furthermore it is not a linear function of $N$; see Appendix for a qualitative estimation of $N_{\rm eff}$. In the real case, $N$ is not a constant but obeys Poisson statistics with average $\left< N \right>$. The curves in Fig. 1 show the effective number of clusters as a function of $\left< N \right>$.
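A Monte Carlo sketch of Eq. (5) illustrates why $N_{\rm eff}$ is so much smaller than $N$. The cluster-size law below (about 80% single electrons with a $1/n^2$ tail) is an illustrative stand-in for the measured argon distribution of Ref. [@ref5], not the actual data, and the Polya ($\theta = 0.5$) gain fluctuation is sampled as a Gamma distribution with shape $1+\theta = 1.5$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative cluster-size law: ~80% single electrons, 1/n^2 tail for
# n >= 2 (a stand-in for the measured argon cluster-size distribution).
_n_tail = np.arange(2, 101)
_p_tail = 1.0 / _n_tail**2
_p_tail /= _p_tail.sum()

def sample_cluster_size():
    if rng.random() < 0.8:
        return 1
    return int(rng.choice(_n_tail, p=_p_tail))

def n_eff(N, n_events=3000, polya=True):
    """Eq. (5): N_eff(N) = < sum_i Q_i^2 / (sum_i Q_i)^2 >^{-1} at fixed N.
    Each electron's Polya (theta = 0.5) gain is drawn as Gamma(shape 1.5)."""
    vals = np.empty(n_events)
    for e in range(n_events):
        Q = np.empty(N)
        for i in range(N):
            ni = sample_cluster_size()
            Q[i] = rng.gamma(1.5, 1.0, ni).sum() if polya else float(ni)
        vals[e] = (Q**2).sum() / Q.sum()**2
    return 1.0 / vals.mean()

print(n_eff(16))   # considerably smaller than N = 16
```

Under these assumptions the estimate comes out well below $N$, reproducing the qualitative behaviour of the filled circles in Fig. 1.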
![\[fig1\] Effective number of clusters ($N_{\rm eff}$) as a function of the total or average number of clusters: plots for fixed $N$, and curves for Poisson-distributed $N$. The filled (open) circles and the full (dotted) curve are calculated with (without) the avalanche fluctuation.](\figdir/fig1.eps)
From the curve with the avalanche fluctuation in Fig. 1 the effective number of clusters for a given track angle can be estimated since
$$\left< N \right> = \frac{d \cdot h}{\cos \phi \cdot \cos \theta}$$
with $d$ being the cluster density ($\sim$ 2.43/mm for minimum ionizing particles in argon [@ref6]), and
$$N_{\rm eff}(\phi, \theta, 0, h)
= N_{\rm eff}(\left< N \right>)
= N_{\rm eff}\left( \frac{d \cdot h}{\cos \phi \cdot \cos \theta} \right) \;.$$
Let us define $S_{\rm X00}$ as the square root of the second term in Eq. (2) at $z$ = 0:
$$S_{\rm X00} \equiv \frac{h \cdot \tan \phi}
{\sqrt{12 \cdot N_{\rm eff}(\left< N \right>)}}\;.$$
Fig. 2 shows $S_{\rm X00}$ as a function of the pad height ($h$) for $\phi$ = 5$^\circ$, 10$^\circ$, 15$^\circ$ and 30$^\circ$, calculated with $\theta$ fixed to 0$^\circ$. It should be noted that the resolutions shown in the figure are the best possible values expected to be obtained without diffusion (at $z$ = 0) and without contribution of the intrinsic term ($\sigma_{\rm X00}$).
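Eqs. (7) and (9) are straightforward to evaluate. In the sketch below the value $N_{\rm eff} \approx 5.2$ is an illustrative number read off a curve like Fig. 1, not a measured quantity; it is chosen to show that the formula reproduces the $\sim$140 $\mu$m scale quoted in Section 4 for $h$ = 6.3 mm and $\phi$ = 10$^\circ$:

```python
import math

def mean_clusters(h_mm, phi_deg, theta_deg=0.0, d_per_mm=2.43):
    """Eq. (7): <N> = d * h / (cos(phi) * cos(theta))."""
    return d_per_mm * h_mm / (math.cos(math.radians(phi_deg))
                              * math.cos(math.radians(theta_deg)))

def s_x00(h_mm, phi_deg, n_eff):
    """Eq. (9): angular-pad-effect offset at z = 0, in mm."""
    return h_mm * math.tan(math.radians(phi_deg)) / math.sqrt(12.0 * n_eff)

# h = 6.3 mm, phi = 10 deg: about 15.5 clusters per pad row.
print(mean_clusters(6.3, 10.0))
# With the illustrative N_eff ~ 5.2 the offset lands near 140 um,
# the scale quoted in Section 4.
print(1e3 * s_x00(6.3, 10.0, 5.2))
```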
![\[fig2\] Expected contribution of the angular pad effect ($S_{\rm X00}$) as a function of the pad height ($h$) for different track angles. Poisson statistics is assumed for the number of clusters and a Polya distribution ($\theta = 0.5$) is assumed for the avalanche fluctuation. The tracks are assumed to be minimum ionizing and parallel to the readout plane. The points plotted at $h$ = 6.3 mm are the measurements (see Section 3.2).](\figdir/fig2.eps)
Experiment
==========
Setup and analysis
------------------
We used a small GEM-based TPC prototype (MP-TPC) operated in a gas mixture of Ar (95%)-CF$_4$ (3%)-isobutane (2%) at atmospheric pressure. The MP-TPC is a small time projection chamber with a maximum drift length of 257 mm. Its gas amplification device is a triple GEM, 100 mm $\times$ 100 mm in size. The amplified electrons are collected by a readout plane placed right behind the GEM stack, having 16 pad rows at a pitch ($h$) of 6.3 mm, each consisting of 1.17 mm $\times$ 6 mm rectangular pads arranged at a pitch of 1.27 mm. The neighboring pad rows are staggered by half a pad pitch. The pad signals are then fed to readout electronics, a combination of preamplifiers, shaper amplifiers and digitizers. See Ref. [@ref1] for details of the experimental setup and the analysis procedure for the cosmic ray tests of the MP-TPC.
We re-analyzed the data taken for the previous paper on the normal incident tracks [@ref1] with different cuts on the track angles. Among the data sets, the data collected with a drift field of 250 V/cm and $B$ = 0 T were selected because they have the highest statistics and the influence of the finite pad-pitch term is negligible in the absence of an axial magnetic field [@ref1]. In the presence of a magnetic field the offset to the resolution due to the finite track angle is likewise to be added quadratically, depending on the local track angle, at drift distances where the finite pad-pitch term is negligible.
The track angle distributions are shown in Fig. 3. As mentioned in Introduction our primary concern in the cosmic ray tests with the MP-TPC was the resolution for right angle tracks. Therefore the acceptance for inclined tracks was limited by the trigger-counter arrangement in order to reduce the trigger rate to the relatively slow readout electronics. The maximum available track angle is thus $|\phi| {\hbox{ \raise3pt\hbox to 0pt{$<$}\raise-3pt\hbox{$\sim$} }}10^\circ$ as seen in Fig. 3 (a).
![\[fig3\] Track angle distributions: (a) for $\phi$, and (b) for $\theta$.](\figdir/fig3.eps)
Results
-------
The spatial resolutions along the pad-row direction are shown in Fig. 4 for $|\phi|$ = 0$^\circ$, 5$^\circ$ and 10$^\circ$, for tracks nearly parallel ($|\theta| \leqq 10^\circ$) to the readout plane. The azimuthal angle cuts are the nominal values $\pm$ 2$^\circ$. The resolutions squared as a function of the drift distance ($z$) were fitted by the function $\sigma_{\rm X}^2 = \sigma_{\rm X0}^2 + D^2 / n_{\rm eff} \cdot z$ with $\sigma_{\rm X0}$ and $n_{\rm eff}$ as free parameters, with the value of $D$ fixed to 315 $\mu$m/$\sqrt{\rm cm}$ given by Magboltz [@ref7]. $S_{\rm X00}$ and $N_{\rm eff}$ were then obtained using Eqs. (2) and (9) for each $\phi$, assuming $\sigma_{\rm X00}$ to be $\sigma_{\rm X0}$ measured for $\phi$ = 0$^\circ$. The resultant $\sigma_{\rm X0}$, $n_{\rm eff}$, $S_{\rm X00}$ and $N_{\rm eff}$ are summarized in Table 1 along with the values of $S_{\rm X00}$ and $N_{\rm eff}$ calculated for $h$ = 6.3 mm. The measured values of $S_{\rm X00}$ are also plotted in Fig. 2.
The measured values of $S_{\rm X00}$ and $N_{\rm eff}$ are consistent with those given by the calculation. In addition, the values of $n_{\rm eff}$ for inclined tracks are close to that for normal incident tracks as expected and are consistent with an estimation in Ref. [@ref2].
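The fitting procedure described above amounts to a linear least-squares fit of the resolution squared versus drift distance. The sketch below uses synthetic data generated from Eq. (1); the values of $\sigma_{\rm X0}$, $n_{\rm eff}$ and the measurement scatter are assumptions made for the example, with only $D$ taken from the text:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic resolution-squared data following Eq. (1):
# sigma_X^2 = sigma_X0^2 + (D^2 / n_eff) * z, with D fixed by Magboltz.
D = 0.0315          # mm / sqrt(cm), i.e. 315 um/sqrt(cm)
sigma_x0 = 0.050    # mm  (assumed intrinsic term for this sketch)
n_eff_true = 22.0   # assumed effective number of electrons
z = np.linspace(1.0, 25.0, 13)                    # drift distance [cm]
sig2 = sigma_x0**2 + D**2 / n_eff_true * z
sig2_obs = sig2 + rng.normal(0.0, 2e-5, z.size)   # measurement scatter

# Straight-line fit: slope = D^2 / n_eff, intercept = sigma_X0^2.
slope, intercept = np.polyfit(z, sig2_obs, 1)
sigma_x0_fit = np.sqrt(intercept)
n_eff_fit = D**2 / slope
print(sigma_x0_fit, n_eff_fit)   # recovers the assumed values
```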
![\[fig4\] Resolution squared ($\sigma_{\rm X}^2$) as a function of the drift distance ($z$): (a) for $|\phi|$ = 0$^\circ$, (b) for $|\phi|$ = 5$^\circ$ and (c) for $|\phi|$ = 10$^\circ$. See text for the straight lines fitted through the data points.](\figdir/fig4.eps)
Discussion
==========
The figure of merit for the azimuthal spatial resolution of a cylindrical TPC is the resolution per projected track length in the $r$-$\phi$ plane along the radial direction. From Eqs. (1), (2) and (9), the resolution per pad row is expressed as
$$\sigma_{\rm X}^2 \sim \sigma_{\rm X00}^2 + S_{\rm X00}^2
+ \frac{D^2}{n_{\rm eff}} \cdot z$$
at drift distances where the finite pad-pitch term is negligible, and each of the three terms is a function of the pad height $h$.
We consider here the effect of splitting a pad row with $h$ = $H$ into a couple of identical pad rows with a height of $H$/2. Let us assume for simplicity that the combined track coordinate is given by the average of the two measurements provided by the neighboring pad rows with $h$ = $H$/2. Then the resolution per projected track length $H$ becomes
$$\sigma_{\rm X}^2 = \frac{{\sigma_{\rm X}^\ast}^2}{2}$$
where $\sigma_{\rm X}^\ast$ is the resolution obtained with a single pad row with the halved height. The diffusion contribution (the third term in Eq.(10)) is almost unaffected since $n_{\rm eff}$ is approximately proportional to the pad height [@ref3].
On the other hand, the angular pad effect ($S_{\rm X00}$) is reduced appreciably. We temporarily assume $N_{\rm eff}(\left< N \right>)$ to be proportional to the pad height[^8]. Then
$$S_{\rm X00}^2 \propto h$$
from Eq. (9), and the combined contribution of the angular pad effect ($S_{\rm X00}$) per projected track length $H$ is halved because of Eq. (11). Actually $S_{\rm X00}$ is reduced by more than a factor of 2 because Eq. (12) gives an overestimate for a smaller $h$ (see Fig. 2).
In addition, $\sigma_{\rm X00}^2$ at long drift distances can be shown mathematically to be
$$\sigma_{\rm X00}^2 = \frac{B_0^2}{n_{\rm eff}}$$
with a constant $B_0$ independent of the pad height if the contribution of the electronic noise is negligible (see Appendix B of Ref. [@ref1]). Similarly to the diffusion contribution, the intrinsic term ($\sigma_{\rm X00}$) in the combined resolution is expected to be close to the counterpart in the resolution for a single pad row with $h$ = $H$.
Consequently the net effect of halving the pad height on the resolution per projected track length is essentially the alleviation of the angular pad effect ($S_{\rm X00}$) by more than a factor of 2. For example, Eq. (9) gives $S_{\rm X00}$ $\sim$ 140 $\mu$m (450 $\mu$m) for $\phi$ = 10$^\circ$ (30$^\circ$) with $h$ = 6.3 mm, while the corresponding value for a couple of pad rows with $h$ = 3.15 mm is about 60 $\mu$m (200 $\mu$m). The spatial resolutions, and therefore the momentum resolutions improve significantly for slanted and/or low momentum tracks with the shorter pads.
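The factor-of-2 reduction can be checked arithmetically. The sketch below assumes that Eq. (9) has the geometric form $S_{\rm X00}^2 = h^2 \tan^2\phi \, / \, (12\,N_{\rm eff})$ implied by Appendix A, together with the temporary assumption $N_{\rm eff} \propto h$ (Eq. (12)); the coefficient `n_eff_per_mm` is a hypothetical value chosen only to roughly reproduce $S_{\rm X00} \sim 140$ $\mu$m at $\phi$ = 10$^\circ$, $h$ = 6.3 mm.

```python
import math

def s_x00_sq(h_mm, phi_deg, n_eff_per_mm=0.8):
    """Angular-pad-effect variance (mm^2) for one pad row of height h_mm,
    assuming S_X00^2 = h^2 tan^2(phi) / (12 N_eff) with N_eff
    proportional to h.  n_eff_per_mm is a hypothetical coefficient."""
    n_eff = n_eff_per_mm * h_mm
    return h_mm ** 2 * math.tan(math.radians(phi_deg)) ** 2 / (12.0 * n_eff)

H = 6.3  # original pad height in mm
var_single = s_x00_sq(H, 10.0)            # one row of height H
var_split = s_x00_sq(H / 2, 10.0) / 2.0   # two rows of H/2, averaged (Eq. (11))

print(1000 * math.sqrt(var_single))           # ~143 um for these inputs
print(math.sqrt(var_split / var_single))      # S_X00 exactly halved
```

Under this proportional model $S_{\rm X00}$ is halved exactly; the measured reduction (140 $\mu$m to about 60 $\mu$m) is larger because, as noted above, Eq. (12) overestimates $S_{\rm X00}$ for smaller $h$.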
The number of voxels in the sensitive volume of a TPC is doubled if the pad height is halved (with the electronics channel density doubled). This would enhance the pattern recognition capability and the d$E$/dx resolution of the TPC as well.
Conclusion
==========
The azimuthal spatial resolutions were measured with a GEM-equipped prototype TPC for inclined tracks as well as for tracks at normal incidence. The angular pad effect contributes a virtually constant offset to the spatial resolution, to be added in quadrature, depending on the track angle and the pad height. The offsets are found to be consistent with the predictions given by a simple model calculation taking into account the cluster-size distribution and the avalanche fluctuation.
The results are expected to be useful in optimizing the pad height of the LCTPC from the physics point of view.
Acknowledgments {#acknowledgments .unnumbered}
===============
We would like to thank the group at the KEK cryogenics science center for the preparation and the operation of the superconducting magnet. We are also grateful to many colleagues of the LCTPC collaboration for their continuous encouragement and support, and for fruitful discussions. This work was supported by the Creative Scientific Research Grant No. 18GS0202 and No. 23000002 of the Japan Society for the Promotion of Science.
Behavior of $N_{\rm eff}(N)$
============================
The effective number of clusters ($N_{\rm eff}$) parametrizes the degradation of the resolution due to the angular pad effect (the second term in Eq. (2)). We consider here the behavior of $N_{\rm eff}$ at $z = 0$ as a function of the fixed total number of clusters per pad row ($N$), qualitatively for $N$ = 1, 2, 4 and $\infty$. The track coordinate ($X$) along the pad-row direction is assumed to be determined from the charge centroid of the clusters detected by the pad row having an infinitesimal pad pitch. The clusters are fully intact at $z = 0$ and are assumed to be point-like.
In order to estimate $N_{\rm eff}(N)$ it is necessary to evaluate the variance ($\equiv {S_N}^2$) of the charge centroid of $N$ clusters, each with charge $Q_i$ and coordinate $x_i$, which are randomly scattered over the lateral range on the pad row covered by an inclined track ($h \cdot \tan \phi$).
1. $N = 1$\
The resolution ($\equiv S_1$) does not depend on the cluster charge ($Q$).
$$\begin{aligned}
{S_1}^2 &=& \frac{h^2 \cdot \tan^2\phi}{12} \;, \;\; {\rm and} \\
N_{\rm eff}(1) &=& 1 \;\;\; {\rm by \;\; definition}.\end{aligned}$$
2. $N = 2$\
Let the coordinates and charges of the clusters be $(x_1, Q_1)$ and $(x_2, Q_2)$. Their weighted-mean coordinate is given by
$$X = \frac{x_1 Q_1 + x_2 Q_2}{Q_1 + Q_2} \;.$$
Its variance (${S_2}^2$) is given by
$$\begin{aligned}
{S_2}^2 &\equiv& \left< (X - \left< X \right>)^2 \right> \nonumber\\
&=& \left< \left( \frac{(x_1 - \left< X \right>)\cdot Q_1
+ (x_2 - \left< X \right>)\cdot Q_2}{Q_1 + Q_2} \right)^2 \right> \nonumber\\
&=& \left< \frac{(x_1 - \left< X \right>)^2\cdot {Q_1}^2
+ (x_2 - \left< X \right>)^2\cdot {Q_2}^2}{(Q_1 + Q_2)^2} \right> \nonumber\\
&=& \left< (x - \left< x \right>)^2 \right> \cdot
\left< \frac{{Q_1}^2 + {Q_2}^2}{(Q_1 + Q_2)^2} \right> \nonumber\\
&=& {S_1}^2 \cdot \left< \frac{(Q_1 + Q_2)^2 - 2Q_1Q_2}{(Q_1 + Q_2)^2} \right> \nonumber\\
&=& {S_1}^2 \cdot \left( 1 - 2 \cdot
\left< \frac{Q_1Q_2}{(Q_1 + Q_2)^2} \right> \right) \nonumber\\
&\geqq& {S_1}^2 \; / 2 \;.\end{aligned}$$
The third and fourth lines in the equation above are justified since the variables $x$ and $Q$ are not correlated, whereas the last line follows from
$$\begin{aligned}
\label{eqX} Q_1Q_2 \; /(Q_1 + Q_2)^2 &\leqq& 1/4 \\
\because \; (Q_1 - Q_2)^2 &=& (Q_1 + Q_2)^2 - 4Q_1Q_2 \geqq 0 \;. \nonumber \end{aligned}$$
The equality in Eq. (\[eqX\]) holds only when $Q_1 = Q_2$. Therefore, in the general case addressed here
$$N_{\rm eff}(2) \equiv {S_1}^2 \;/{S_2}^2 < 2 \;.$$
3. $N = 4$\
$$\begin{aligned}
X &=& \frac{x_1Q_1 + x_2Q_2 + x_3Q_3 +x_4Q_4}{Q_1+Q_2+Q_3+Q_4} \nonumber\\
&=& \frac{x_1^\prime Q_1^\prime + x_2^\prime Q_2^\prime}
{Q_1^\prime + Q_2^\prime} \end{aligned}$$
where
$$\begin{aligned}
x_1^\prime Q_1^\prime &\equiv& x_1Q_1 + x_2Q_2 \nonumber\\
x_2^\prime Q_2^\prime &\equiv& x_3Q_3 + x_4Q_4 \nonumber\end{aligned}$$
with $Q_1^\prime \equiv Q_1 + Q_2$ and $Q_2^\prime \equiv Q_3 + Q_4$.
$$\begin{aligned}
{S_4}^2 &\equiv& \left < (X - \left< X \right>)^2 \right> \nonumber\\
&=& \left< (x^\prime - \left< x \right>)^2 \right> \cdot
\left< \frac{{Q_1^\prime}^2 + {Q_2^\prime}^2}
{(Q_1^\prime + Q_2^\prime)^2} \right> \nonumber\\
&=& \left< (x^\prime - \left< x \right>)^2 \right> \cdot
\left( 1 - 2 \cdot \left< \frac{Q_1^\prime Q_2^\prime}
{(Q_1^\prime + Q_2^\prime)^2} \right> \right) \nonumber\\
&>& {S_2}^2 / 2 \;,\;\;{\rm with} \;\;
{S_2}^2 \equiv \left< (x^\prime - \left< x \right>)^2 \right> \;. \end{aligned}$$
Therefore
$$\begin{aligned}
\frac{{S_2}^2}{{S_4}^2} &<& 2 \;,\;\;{\rm and} \\
N_{\rm eff}(4) &<& 2N_{\rm eff}(2) \;.\end{aligned}$$
Consequently
$$N_{\rm eff}(1) = 1,\; N_{\rm eff}(2) < 2,\;
N_{\rm eff}(4) < 2 N_{\rm eff}(2), \;\; {\rm and \; so \; on.}$$
Thus $N_{\rm eff}(N) / N$ is expected to be a decreasing function of $N$.
4. $N = \infty$
$$\begin{aligned}
X &=& \frac{\sum_{i=1}^N x_i Q_i}{\sum_{i=1}^N Q_i} \\
{S_{\rm N}}^2 &\equiv& \left< (X - \left< X \right>)^2 \right> \nonumber\\
&=& \left< (x - \left< x \right>)^2 \right> \cdot
\left< \frac{\sum_{i=1}^N {Q_i}^2}{\left( \sum_{i=1}^N Q_i\right)^2} \right> \nonumber\\
&\sim& {S_1}^2 \cdot \frac{\left< \sum_{i=1}^N {Q_i}^2 \right>}
{N^2 \left< Q \right>^2} \nonumber\\
&\sim& {S_1}^2 \cdot \frac{1}{N} \cdot
\frac{\left< Q^2 \right>}{\left< Q \right>^2} \nonumber\\
&\sim& {S_1}^2 \cdot \frac{1}{N} \cdot \frac{\left< Q \right >^2
+ {\sigma_Q}^2}{\left< Q \right>^2} \nonumber\\
&\sim& {S_1}^2 \cdot \frac{1}{N} \cdot (1 + F^\prime) \end{aligned}$$
where $F^\prime \equiv {\sigma_Q}^2\; / \left< Q \right>^2$ is the relative variance of the cluster charge, with $\sigma_Q$ its standard deviation, including the fluctuations in cluster size and in avalanche gain for each electron in the cluster. Actually
$$F^\prime = F + \frac{1}{\left<n\right>} \cdot f$$
with $F$ ($f$) being the relative variance of the cluster-size (avalanche-size) fluctuation and $\left<n\right>$ the average cluster size. Therefore
$$\lim_{N \rightarrow \infty} \frac{N_{\rm eff}(N)}{N} = \frac{1}{1+F^\prime}
\sim \frac{1}{1+F}$$
because $F$ ($\sim$ 2000 for argon [@ref3]) is much greater than $f$ ($\sim$ 1).
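The qualitative behaviour derived above ($N_{\rm eff}(1) = 1$, $N_{\rm eff}(2) < 2$, and $N_{\rm eff}(N)/N$ decreasing towards $1/(1+F^\prime)$) can be checked with a small Monte Carlo. The sketch below is illustrative only: it uses point-like clusters uniform over the pad and a gamma-distributed cluster charge with a modest relative variance $F^\prime = 2$, far smaller than the heavy-tailed argon value.

```python
import random

random.seed(1)

def n_eff(N, trials=100_000, F_prime=2.0):
    """Monte-Carlo estimate of N_eff(N) = S_1^2 / S_N^2.  Clusters are
    point-like, scattered uniformly over a unit pad width, with charges
    drawn from a gamma distribution of relative variance F_prime."""
    k = 1.0 / F_prime              # gamma shape: relative variance = 1/k
    s1_var = 1.0 / 12.0            # variance of a single uniform cluster
    acc = 0.0
    for _ in range(trials):
        xs = [random.random() for _ in range(N)]
        qs = [random.gammavariate(k, 1.0) for _ in range(N)]
        total = sum(qs)
        centroid = sum(x * q for x, q in zip(xs, qs)) / total
        acc += (centroid - 0.5) ** 2
    return s1_var / (acc / trials)

ne1, ne2, ne8 = n_eff(1), n_eff(2), n_eff(8)
print(ne1)       # = 1 by definition, up to MC noise
print(ne2)       # strictly between 1 and 2
print(ne8 / 8)   # approaches 1/(1 + F') = 1/3 from above as N grows
```

For a gamma charge with $F^\prime = 2$ the exact values are $N_{\rm eff}(2) = 4/3$ and $N_{\rm eff}(8)/8 = 5/12$, which the simulation reproduces within statistical noise.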
[99]{}

M. Kobayashi, [et al.]{}, Nuclear Instruments and Methods in Physics Research A 641 (2011) 37.

The International Linear Collider, ILC Technical Design Report, available at $<$https://www.linearcollider.org/ILC/Publications/Technical-Design-Report$>$.

The Compact Linear Collider, available at $<$http://clic-study.org/$>$.

M. Kobayashi, Nuclear Instruments and Methods in Physics Research A 562 (2006) 136.

M. Kobayashi, Nuclear Instruments and Methods in Physics Research A 729 (2013) 273.

R. Yonamine, [et al.]{}, Journal of Instrumentation 9 (2014) C03002.

H. Fischle, J. Heintze, B. Schmidt, Nuclear Instruments and Methods in Physics Research A 301 (1991) 202.

A. Sharma, F. Sauli, Nuclear Instruments and Methods in Physics Research A 350 (1994) 470.

S.F. Biagi, Nuclear Instruments and Methods in Physics Research A 421 (1999) 234.
[^1]: The values of $\sigma_{\rm X00}$ are measured to be about 100 $\mu$m without axial magnetic field ($B$ = 0 T) and $\sim$ 50 $\mu$m for $B$ = 1 T. The observed $B$-dependence of $\sigma_{\rm X00}$ is most likely due to the intrinsic track width. See Appendix C of Ref. [@ref1] for the possible contributors to the intrinsic term.
[^2]: The finite pad-pitch term [@ref1] is neglected here.
[^3]: In Refs. [@ref1; @ref2; @ref3], $n_{\rm eff}$ is denoted as $N_{\rm eff}$, which is reserved for the effective number of [*clusters*]{} per pad row (see below) in the present paper.
[^4]: More precisely $h$ should be understood as the pad-row pitch, which is usually slightly larger than the pad height when the readout plane is covered over with pads. The pad-row pitch and the pad height ($h$) are not distinguished in the present paper.
[^5]: $\theta$ is defined to be 0$^\circ$ when the track is parallel to the readout plane.
[^6]: The value of $\sigma_{\rm X0}$ given by Eq. (2) is therefore practically independent of $z$.
[^7]: The parameter $\theta$ for Polya distributions (see, for example, Ref. [@ref2]) should not be confused with the track angle $\theta$ defined above. We use the same symbol since they can be easily distinguished by their units.
[^8]: This is a bolder assumption than $n_{\rm eff} \propto h$ above (see Fig. 1).
---
abstract: |
A typical census of 3-manifolds contains all manifolds (under various constraints) that can be triangulated with at most $n$ tetrahedra. Although censuses are useful resources for mathematicians, constructing them is difficult: the best algorithms to date have not gone beyond $n=12$. The underlying algorithms essentially (i) enumerate all relevant 4-regular multigraphs on n nodes, and then (ii) for each multigraph $G$ they enumerate possible 3-manifold triangulations with $G$ as their dual 1-skeleton, of which there could be exponentially many. In practice, a small number of multigraphs often dominate the running times of census algorithms: for example, in a typical census on 10 tetrahedra, almost half of the running time is spent on just 0.3% of the graphs.
Here we present a new algorithm for stage (ii), which is the computational bottleneck in this process. The key idea is to build triangulations by recursively constructing neighbourhoods of edges, in contrast to traditional algorithms which recursively glue together pairs of tetrahedron faces. We implement this algorithm, and find experimentally that whilst the overall performance is mixed, the new algorithm runs significantly faster on those “pathological” multigraphs for which existing methods are extremely slow. In this way the old and new algorithms complement one another, and together can yield significant performance improvements over either method alone.
author:
- |
Benjamin A. Burton\
School of Mathematics and Physics, The University of Queensland,\
Brisbane QLD 4072, Australia\
`bab@maths.uq.edu.au`
- |
William Pettersson\
School of Mathematics and Physics, The University of Queensland,\
Brisbane QLD 4072, Australia\
`william@ewpettersson.se`
bibliography:
- 'bib.bib'
title: 'An edge-based framework for enumerating 3-manifold triangulations [^1]'
---
Introduction
============
In many fields of mathematics, one can often learn much by studying an exhaustive “census” of certain objects, such as knot dictionaries. Our focus here is on censuses of closed 3-manifolds—essentially topological spaces that locally look like ${\mathbb{R}}^3$. Combinatorially, any closed 3-manifold can be represented by a *triangulation*, formed from tetrahedra with faces identified together in pairs [@Moise1952]. A typical census of 3-manifolds enumerates all 3-manifolds satisfying certain conditions that can be constructed from a fixed number of tetrahedra.
One of the earliest such results was a census of all cusped hyperbolic 3-manifolds which could be built from at most 5 tetrahedra, by Hildebrand and Weeks [@Hildebrand1989]; this was later extended to all such manifolds on at most 9 tetrahedra [@Burton2014Cusped; @Callahan1999; @thistlethwaite10-cusped8]. For closed orientable 3-manifolds, Matveev gave the first census of closed orientable prime manifolds on up to 6 tetrahedra [@Matveev1998]; this has since been extended to 12 tetrahedra [@Martelli2001; @Matveev2007AlgorithmicTopology].
Most (if not all) census algorithms in the literature enumerate 3-manifolds on $n$ tetrahedra in two main stages. The first stage is to generate a list of all 4-regular multigraphs on $n$ nodes. The second stage takes each such graph $G$, and sequentially identifies faces of tetrahedra together to form a triangulation with $G$ as its dual 1-skeleton (for a highly tuned implementation of such an algorithm, see [@Regina]).
There are $|S_3|=6$ possible maps to use for each such identification of faces. Thus for each graph $G$, the algorithm searches through an exponential (in the number of tetrahedra) search tree, and each leaf in this tree is a triangulation but is not necessarily a 3-manifold triangulation. Much research has focused on trimming this search tree down by identifying and pruning subtrees which only contain triangulations which are not 3-manifold triangulations [@Burton2007; @burton11-genus; @martelli02-decomp; @Matveev1998].
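As a concrete illustration of the branching factor, the following Python sketch (with hypothetical vertex labels) enumerates the $|S_3| = 6$ gluings of one face to another and the size of the naive search space for a census on $n$ tetrahedra:

```python
from itertools import permutations

# The 6 ways to glue face (b, c, d) of one tetrahedron to face
# (f, g, h) of another: each gluing is a bijection between the two
# vertex triples, i.e. an element of S_3.
gluings = [dict(zip(('b', 'c', 'd'), perm))
           for perm in permutations(('f', 'g', 'h'))]
print(len(gluings))  # |S_3| = 6

# A closed census triangulation on n tetrahedra has 2n face
# identifications (4n faces glued in pairs), so a naive search visits
# up to 6^(2n) leaves -- the exponential growth that pruning must tame.
n = 10
print(6 ** (2 * n))
```

This is only a count of leaves, not of distinct triangulations; the pruning techniques cited above exist precisely because most of these leaves are not 3-manifold triangulations.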
In this paper we describe a different approach to generating a census of 3-manifolds. The first stage remains the same, but in the second stage we build up the neighbourhood of each *edge* in the triangulation recursively, instead of joining together faces one at a time. This is, in a sense, a paradigm shift in census enumeration, and as a result it generates significantly different search trees with very different opportunities for pruning. By implementing the new algorithm and comparing its performance against existing algorithms, we find that this new search framework complements existing algorithms very well, and we predict that a heuristic combination of the new and existing algorithms can significantly speed up census enumeration.
The key idea behind this new search framework is to extend each possible dual 1-skeleton graph to a “fattened face pairing graph”, and then to find particular cycle-based decompositions of these new graphs. We also show how various improvements to typical census algorithms (such as those in [@Burton2004]) can be translated into this new setting.
Definitions and notation {#sec:notation}
========================
In combinatorial topology and in graph theory, the terms “edge” and “vertex” have distinct meanings. Therefore in this paper, the terms [*edge*]{} and [*vertex*]{} will be used to mean an edge or vertex of a tetrahedron, triangulation or manifold; and the terms [*arc*]{} and [*node*]{} will be used to mean an edge or vertex in a graph respectively.
A 3-manifold is a topological space that locally looks like either 3-dimensional Euclidean space (i.e., ${\mathbb{R}}^3$) or closed 3-dimensional Euclidean half-space (i.e., ${\mathbb{R}}^3_{z\geq 0}$). In this paper we only consider compact and connected 3-manifolds. When we refer to faces, we are explicitly talking about 2-faces (i.e., facets of a tetrahedron). We represent 3-manifolds combinatorially as triangulations [@Moise1952]: a collection of tetrahedra (3-simplices) with some 2-faces pairwise identified.
A [*general triangulation*]{} is a collection $\Delta_1,\Delta_2,\ldots,\Delta_n$ of $n$ abstract tetrahedra, along with some bijections $\pi_1,\pi_2,\ldots,\pi_m$ where each bijection $\pi_i$ is an affine map between two faces of tetrahedra, and each face of each tetrahedron is in at most one such bijection.
We call these affine bijections [*face identifications*]{} or simply [*identifications*]{}. Note that this is more general than a simplicial complex (e.g., we allow an identification between two distinct faces of the same tetrahedron). If the quotient space of such a triangulation is a 3-manifold, we will say that the triangulation represents said 3-manifold.
Given a tetrahedron with vertices $a$, $b$, $c$ and $d$, we will define face $a$ to be the face opposite vertex $a$. That is, face $a$ is the face consisting of vertices $b$, $c$ and $d$. We will sometimes also refer to this as face [*bcd*]{}. We will write ${\textit{abc}} {\leftrightarrow}{\textit{efg}}$ to mean that face [*abc*]{} is identified with face [*efg*]{} and that in this identification we have vertex $a$ identified with vertex $e$, vertex $b$ identified with vertex $f$ and vertex $c$ identified with vertex $g$.
We will also use the notation $ab$ to denote the edge joining vertices $a$ and $b$ on some tetrahedron. Note that by this notation, the edge $ab$ on a tetrahedron with vertices labelled $a$, $b$, $c$ and $d$ will be the intersection of faces $c$ and $d$.
As a result of the identification of various faces, some edges or vertices of various tetrahedra are identified together. The [*degree*]{} of an edge of the triangulation, denoted $\deg(e)$, is defined to be the number of edges of tetrahedra which are identified together to form the edge of the triangulation.
We also need to define the [*link*]{} of a vertex before we can discuss 3-manifold triangulations. An example of a general triangulation, and a link of one of its vertices, is given in Example \[ex:gt\] below.
\[definition:link\] Given a vertex $v$ in some triangulation, the link of $v$, denoted ${\textrm{\em Link}}(v)$, is the (2-dimensional) frontier of a small regular neighbourhood of $v$.
\[ex:gt\]
Consider two tetrahedra; one with vertices labelled $a$, $b$, $c$, and $d$ and the second with vertices labelled $e$, $f$, $g$ and $h$; and then apply the face identification ${\textit{abc}} {\leftrightarrow}{\textit{efg}}$. The resulting triangulation is topologically a 3-ball. Figure \[fig:gt\_a\] shows this triangulation, with the arrow indicating two faces being identified. The actual identification involved is not displayed in the diagram, however.
We now detail the properties a general triangulation must have to represent a 3-manifold.
\[lemma:3mfld\_tri\] A general triangulation is a [*3-manifold triangulation*]{} if the following additional conditions hold:
- the triangulation is connected;
- the link of any vertex in the triangulation is homeomorphic to either a 2-sphere or a disc;
- no edge in the triangulation is identified with itself in reverse.
It is well known that these conditions are both necessary and sufficient for the underlying topological space to be a 3-manifold (possibly with boundary). However, in this paper we only consider 3-manifolds without boundary. That is, every face of a tetrahedron will be identified with some other face in a 3-manifold triangulation. Example \[ex:3sphere\] now gives an example of a 3-manifold triangulation of a 3-sphere.
\[ex:3sphere\]
![A visual representation of the 3-sphere triangulation described in Example \[ex:3sphere\]. The arrows indicate which faces are identified together with dashed arrows referring to the “back” faces. Note that some of the identifications involve rotations or flips which are not shown in the figure.[]{data-label="fig:3sphere"}](3sphere_triangulation)
Figure \[fig:3sphere\] shows how two tetrahedra may have faces identified together to form a triangulation of a 3-sphere[^2]. Each tetrahedron has two faces identified together, and another two faces identified with the corresponding faces of the other tetrahedron. The exact identifications are as follows. $$\begin{array}{cc}
{\textit{abc}} {\leftrightarrow}{\textit{hfe}} & {\textit{abd}} {\leftrightarrow}{\textit{gfe}} \\
{\textit{acd}} {\leftrightarrow}{\textit{bcd}} & {\textit{egh}} {\leftrightarrow}{\textit{fgh}}
\end{array}$$
We now give some results on the links of vertices in various triangulations and manifolds. These results are well known, and are given for completeness. First, however, we need the following definition.
The [*Euler characteristic*]{} of a triangulation is a topological invariant, denoted $\chi$. For triangulations it can be calculated as $\chi =
V - E + F - T$ where $V$, $E$, $F$ and $T$ are the number of vertices, edges, faces and tetrahedra in the triangulation respectively. For 2-dimensional triangulations, $\chi = V - E + F$.
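The counting in this definition is easy to sanity-check mechanically. The sketch below (a minimal illustration, not part of the paper's algorithm) encodes $\chi$ and the consequence, used in Lemma \[lemma:2sphere-links\], that $\chi = 0$ forces $e = n + k$ for a closed triangulation on $n$ tetrahedra with $k$ vertices:

```python
def euler_char(v, e, f, t=0):
    """Euler characteristic chi = V - E + F - T; leave t = 0 for a
    2-dimensional triangulation."""
    return v - e + f - t

# Boundary of a single tetrahedron: V = 4, E = 6, F = 4 gives the
# 2-sphere value chi = 2.
print(euler_char(4, 6, 4))

# A closed triangulation on n tetrahedra with k vertices has 2n faces
# (4n faces glued in pairs), so chi = 0 is equivalent to e = n + k
# (here n = 2, k = 2, purely illustrative counts).
n, k = 2, 2
print(euler_char(k, n + k, 2 * n, n))
```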
For the following proofs, we also briefly need the Euler characteristic of a cell decomposition[^3]. We omit the technical details, but $\chi$ can be calculated as $\sum_i (-1)^i k_i$ where $k_i$ is the number of $i$-cells in the decomposition.
It is well known by the classification of 2-manifolds ([@Weeks1999ZIP]) that $\chi \leq 2$ for any surface and that $\chi = 2$ if and only if the surface is a 2-sphere. Additionally, a closed 3-manifold (that is, a compact 3-manifold with no boundary) has an Euler characteristic of zero (by Poincaré duality, see [@Hatcher2002Algebraic]).
The following lemmas will help determine the form of links in various triangulations.
\[lemma:inclusionexclusion\] Take two triangulations $L$ and $K$ with combinatorially equivalent boundaries, and create a new triangulation $M$ by identifying $L$ and $K$ along their boundaries. Then $\chi(M) = \chi(L) + \chi(K) - \chi(\partial L)$.
The above follows from a simple counting argument along the shared boundary. We can now prove Lemma \[lemma:2sphere-links\].
\[lemma:2sphere-links\] Given any connected closed triangulation $T$ on $n$ tetrahedra with $k$ vertices where no edge is identified with itself in reverse, the triangulation has $n+k$ edges if and only if the link of each vertex in $T$ is homeomorphic to a 2-sphere.
Let the triangulation have $e$ edges. As each face of $T$ is identified with exactly one other face, and a tetrahedron has 4 faces, we know that $T$ must have $2n$ faces. Then $\chi(T) = k - e + 2n - n = n+k-e$ which immediately gives one direction of the proof.
Label each vertex in $T$ by $\{v_1,\ldots,v_k\}$. Now consider the collection of truncated tetrahedra (or 3-cells) $T'$ created from $T$ by truncating $T$ along the link of every vertex $v_i$. The boundary of $T'$ is therefore the union of the links of the vertices of $T$. When forming $T'$ from $T$, for each vertex $v_i$ we removed one 0-cell, and then added a cell for every vertex, face and edge of ${\textrm{\em Link}}(v_i)$. As a result, we get that $$\chi(T') = \chi(T) + \sum_{i=1}^k \left(\chi(Link(v_i)) - 1\right).$$ Furthermore, since $\chi(T) = n+k-e$ we get $$\begin{aligned}
\chi(T') = n - e + \sum_{i=1}^k\chi({\textrm{\em Link}}(v_i)). \tag{$\star$}\end{aligned}$$
Note that $T'$ represents a compact 3-manifold with boundary as it contains no edge identified with itself in reverse and every vertex on $\partial T'$ has a link homeomorphic to a disc.
Now take a copy of $T'$, call the copy $T''$, and identify the boundary of $T'$ to the boundary of $T''$ to form the triangulation $T^\dagger$. Since $T'$ is a compact 3-manifold with boundary, $T^\dagger$ is a closed 3-manifold, giving $\chi(T^\dagger)=0$ (even if the vertex links of $T$ are not spheres).
If we set $L=T'$, $K=T''$ and $M = T^\dagger$ in Lemma \[lemma:inclusionexclusion\], then we get the following result. $$0 = \chi(T') + \chi(T') - \sum_{i=1}^k\chi({\textrm{\em Link}}(v_i))$$ We then substitute in from $(\star)$ and rearrange to get $$\sum_{i=1}^k \chi({\textrm{\em Link}}(v_i)) = 2(e-n).$$ Since $\chi({\textrm{\em Link}}(v_i)) = 2$ if and only if ${\textrm{\em Link}}(v_i)$ is homeomorphic to a 2-sphere, and is less than $2$ otherwise, we get that $2k = 2(e-n)$, i.e. $e = n + k$, if and only if the link of each vertex of $T$ is homeomorphic to a 2-sphere.
We also need to define the [*face pairing graph*]{} of a triangulation. The [*face pairing graph*]{} of a triangulation, also known as the dual 1-skeleton, is a graphical representation of the face identifications of the triangulation. Each tetrahedron is associated with a node in the face pairing graph, and one arc joins a pair of tetrahedra for each identification of faces between the two tetrahedra. Note that a face pairing graph is not necessarily a simple graph. Indeed, it will often contain both loops (when there is an identification of two distinct faces of the same tetrahedron) and parallel arcs (when there are multiple face identifications between two tetrahedra).
Lastly, we need a few properties of manifolds and triangulations.
A 3-manifold $\mathcal{M}$ is [*irreducible*]{} if every embedded 2-sphere in $\mathcal{M}$ bounds a 3-ball in $\mathcal{M}$.
A 3-manifold $\mathcal{M}$ is [*prime*]{} if it cannot be written as a connected sum of two manifolds where neither is a 3-sphere.
A 3-manifold is [*$\mathbb{P}^2$-irreducible*]{} if it is irreducible and also contains no embedded two-sided projective plane.
Prime manifolds are the most fundamental manifolds to work with. We note that prime 3-manifolds are either irreducible, or are one of the orientable direct product $S^2 \times S^1$ or the non-orientable twisted product $S^2 {\mathbin{ \stackrel{\sim}{\smash{\times}\rule{0pt}{0.6ex}}}}S^1$. As these are both well known and have triangulations on two tetrahedra, for any census of minimal triangulations on three or more tetrahedra we can interchange the conditions “prime” and “irreducible”. Any non-prime manifold can be constructed from a connected sum of prime manifolds, so enumerating prime manifolds is sufficient for most purposes. A similar (but more complicated) notion holds for $\mathbb{P}^2$-irreducible manifolds in the non-orientable setting. As such, minimal prime $\mathbb{P}^2$-irreducible triangulations form the basic building blocks in combinatorial topology.
A 3-manifold triangulation of a manifold $\mathcal{M}$ is [*minimal*]{} if $\mathcal{M}$ cannot be triangulated with fewer tetrahedra.
Minimal triangulations are well studied, both for their relevance to computation and for their applications in zero-efficient triangulations [@Jaco2003ZeroEfficient]. Martelli and Petronio [@martelli02-decomp] also showed that, with the exceptions $S^3$, $RP^3$ and $L_{3,1}$, the minimal number of tetrahedra required to triangulate a closed, irreducible and $\mathbb{P}^2$-irreducible 3-manifold $\mathcal{M}$ is equal to the *Matveev complexity* [@Matveev2007AlgorithmicTopology] of $\mathcal{M}$.
Manifold decompositions {#sec:decomp}
=======================
In this section we define a fattened face pairing graph, and show how we can represent any general triangulation as a specific decomposition of its fattened face pairing graph. This allows us to enumerate general triangulations by enumerating graph decompositions. We then demonstrate how to restrict this process to only enumerate 3-manifold triangulations.
A [*fattened face pairing graph*]{} is an extension of a face pairing graph $F$ which we use in a dual representation of the corresponding triangulation. Instead of one node for each tetrahedron, a fattened face pairing graph contains one node for each face of each tetrahedron. Additionally, a face identification in the triangulation is represented by *three* arcs in the fattened face pairing graph; these three arcs loosely correspond to the three pairs of edges which are identified as a consequence of the face identification.
Given a face pairing graph $F$, a fattened face pairing graph is constructed by first tripling each arc (i.e., for each arc $e$ in $F$, add two more arcs parallel to $e$), and then replacing each node $\nu$ of $F$ with a copy of $K_4$ such that each node of the $K_4$ is incident with exactly one set of triple arcs that meet $\nu$.
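The construction above is mechanical, and can be sketched in a few lines of Python (a hypothetical encoding, not the paper's implementation): nodes of the result are (tetrahedron, face) pairs, each original arc becomes three parallel external arcs, and each $K_4$ contributes six internal arcs.

```python
from itertools import combinations

def fatten(n, arcs):
    """Build a fattened face pairing graph from a 4-regular multigraph
    on nodes 0..n-1, given as a list of 2n arcs (u, v), loops allowed.
    Faces within a tetrahedron are assigned arbitrarily (a different
    choice only relabels the triangulation).  Returns the lists
    (internal_arcs, external_arcs); every external arc appears with
    multiplicity 3."""
    slot = {u: 0 for u in range(n)}  # next free face of each tetrahedron
    external = []
    for u, v in arcs:
        a = (u, slot[u]); slot[u] += 1
        b = (v, slot[v]); slot[v] += 1
        external.extend([(a, b)] * 3)  # one arc per pair of identified edges
    internal = [(x, y) for i in range(n)
                for x, y in combinations([(i, f) for f in range(4)], 2)]
    return internal, external

# One tetrahedron with its faces glued in pairs: two loops in the
# face pairing graph.
internal, external = fatten(1, [(0, 0), (0, 0)])
print(len(internal), len(external))  # K_4 has 6 arcs; 2 arcs tripled -> 6
```

Each resulting node then has degree six: three internal arcs from its $K_4$ and the three parallel external arcs of its face identification, as the definition requires.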
\[ex:ffpg\] Figure \[fig:ffpg\] shows a face pairing graph and the resulting fattened face pairing graph. The arcs shown in green are what we call [*internal*]{} arcs. Each original node has been replaced with a copy of $K_4$ and in place of each original arc a set of three parallel arcs have been added.
We will refer to the arcs of each $K_4$ as [*internal arcs*]{}, and the remaining arcs (coming from the triple edges) as [*external arcs*]{}. As a visual aid we will always draw internal arcs in green. Each such $K_4$ represents a tetrahedron in the associated triangulation, and as such we will say that a fattened face pairing graph has $n$ tetrahedra if it contains $4n$ nodes.
Triangulations are often labelled or indexed in some manner. Given any labelling of the tetrahedra and their vertices, we label the corresponding fattened face pairing graph as follows. For each tetrahedron $i$ with faces $a$, $b$, $c$ and $d$, we label the nodes of the corresponding $K_4$ in the fattened face pairing graph $v_{i,a}$, $v_{i,b}$, $v_{i,c}$ and $v_{i,d}$, such that if face $a$ of tetrahedron $i$ is identified with face $b$ of tetrahedron $j$ then there are three parallel external arcs between $v_{i,a}$ and $v_{j,b}$.
In such a labelling, the node $v_{i,a}$ represents face $a$ of tetrahedron $i$. Each internal arc $\{v_{i,a},v_{i,b}\}$ represents the unique edge common to faces $a$ and $b$ of tetrahedron $i$. Each external arc $\{v_{i,a},v_{j,b}\}$ represents one of the three pairs of edges of tetrahedra which become identified as a result of identifying face $a$ of tetrahedron $i$ with face $b$ of tetrahedron $j$. Note that the arc only represents the pair of edges being identified, and does not indicate the orientation of said identification.
We now define [*ordered decompositions*]{} of fattened face pairing graphs. Later we show that there is a natural correspondence between such a decomposition and a general triangulation, and we show exactly how the 3-manifold constraints on general triangulations (see Lemma \[lemma:3mfld\_tri\]) can be translated to constraints on these decompositions. There is also a natural relationship between such decompositions and *spines* of 3-manifolds, as used by Matveev and others [@Matveev2007AlgorithmicTopology]; we touch on this relationship again later in this section.
\[definition:ordered-decomp\] An [*ordered decomposition*]{} of a fattened face pairing graph $F=(E,V)$ is a set of closed walks $\{P_1,P_2,\ldots,P_n\}$ such that:
- $\{P_1,P_2,\ldots,P_n\}$ partition the arc set $E$;
- $P_i$ is a closed walk of even length for each $i$; and
- if arc $e_{j+1}$ immediately follows arc $e_j$ in one of the walks then exactly one of $e_j$ or $e_{j+1}$ is an internal arc.
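The three conditions are straightforward to verify mechanically for a candidate decomposition. The following checker is a sketch only (arc ids and the `kind` labelling are a hypothetical encoding; it does not verify that consecutive arcs actually share a node in the graph):

```python
def is_ordered_decomposition(walks, all_arcs, kind):
    """Check the three conditions of the definition above.

    walks    -- list of closed walks, each a list of arc ids, read
                cyclically (the last arc is followed by the first)
    all_arcs -- set of every arc id in the fattened face pairing graph
    kind     -- dict mapping arc id -> 'internal' or 'external'
    """
    used = [a for w in walks for a in w]
    if sorted(used) != sorted(all_arcs):        # condition 1: partition of E
        return False
    for w in walks:
        if len(w) % 2 != 0:                     # condition 2: even length
            return False
        for a, b in zip(w, w[1:] + w[:1]):      # condition 3: alternation
            if (kind[a] == 'internal') == (kind[b] == 'internal'):
                return False
    return True

# A toy instance: one closed walk alternating internal/external arcs.
kind = {0: 'internal', 1: 'external', 2: 'internal', 3: 'external'}
print(is_ordered_decomposition([[0, 1, 2, 3]], {0, 1, 2, 3}, kind))
```

Note that condition 3 is checked cyclically, so an odd-length walk would fail it as well; condition 2 is kept separate to mirror the definition.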
An ordered decomposition of a fattened face pairing graph exactly describes a general triangulation. We first outline this idea here by showing how three parallel external arcs can represent an identification of faces. Complete technical details follow later.
Since the ordered decomposition consists of closed walks of alternating internal and external arcs, the decomposition pairs up the six arcs exiting each node so that each external arc is paired with exactly one internal arc. To help visualise this, we can draw such nodes as larger ellipses, with three external arcs and three internal arcs entering the ellipse, as in Figure \[fig:nodezoomed\]. Each external arc meets exactly one internal arc inside this ellipse. This only represents how such arcs are paired up in a given decomposition—the node is still incident with all six arcs. We also see in Figure \[fig:nodezoomed\] that the fattened face pairing graph can always be drawn such that any “crossings” of arcs only occur between external arcs. Such crossings are simply artefacts of how the fattened face pairing graph is drawn in the plane, and in no way represent any sort of underlying topological twist.
Figure \[fig:TwoNodes\] shows a partial drawing of an ordered decomposition of a fattened face pairing graph. In this, we see a set of three parallel external arcs between nodes $v_{1,d}$ and $v_{2,h}$. This tells us that face $d$ of tetrahedron 1 is identified with face $h$ of tetrahedron $2$. Additionally, we see that one of the external arcs connects internal arc $\{v_{1,c},v_{1,d}\}$ with internal arc $\{v_{2,g},v_{2,h}\}$. This tells us that edge ${\textit{ab}}$ of tetrahedron $1$ (represented by $\{v_{1,c},v_{1,d}\}$) is identified with edge ${\textit{ef}}$ of tetrahedron $2$ (represented by $\{v_{2,g},v_{2,h}\}$). Since we know that face ${\textit{abc}}$ is identified with face ${\textit{efg}}$ modulo a possible reflection and/or rotation, this tells us that vertex $c$ is identified with vertex $g$ in this face identification. We can repeat this process for the other paired arcs to see that vertex $a$ is identified with vertex $e$ and vertex $b$ is identified with vertex $f$. The resulting identification is therefore ${\textit{abc}} {\leftrightarrow}{\textit{efg}}$.
![A partial drawing of a fattened face pairing graph.[]{data-label="fig:TwoNodes"}](TwoNodes)
Repeating this for each set of three parallel external arcs gives the required triangulation. The process is easily reversed to obtain an ordered decomposition from a general triangulation.
We now give the technical details for constructing an ordered decomposition from a general triangulation, and vice versa.
\[lemma:ffpg\_to\_gtri\]
It is straightforward to see that we can simplify an ordered decomposition of a fattened face pairing graph into a regular face pairing graph, and this gives a collection of tetrahedra and shows which faces are identified. What remains is to determine the exact identification between each pair of faces.
First we label the nodes of the fattened face pairing graph such that each $K_4$ in the fattened face pairing graph has nodes labelled $v_{i,a}, v_{i,b}, v_{i,c}, v_{i,d}$. The choice of $i$ here assigns the label $i$ to the corresponding tetrahedron in the triangulation. Similarly, the assignment of $v_{i,a}$ to a node labels a face of the corresponding tetrahedron. Different labellings of nodes will therefore result in a triangulation with different labels on tetrahedra and vertices. However, up to isomorphism the actual triangulation is not changed.
For each identification of two tetrahedron faces, we have three corresponding external arcs in the fattened face pairing graph. Each arc $e$ out of these three belongs to one walk in the ordered decomposition, and in said walk $e$ has exactly one arc $e_1$ preceding it and one arc $e_2$ succeeding it, such that the sequence of arcs $(e_1,e,e_2)$ occurs in the walk.
Since $e$ is an external arc, $e_1$ and $e_2$ must be internal arcs and therefore of the form $\{v_{i,a},v_{i,b}\}$ where $a\neq b$. Let $e=\{v_{i,b},v_{j,c}\}$, $e_1 = \{v_{i,a},v_{i,b}\}$ and $e_2
=\{v_{j,c},v_{j,d}\}$. This tells us that this identification is between face $b$ of tetrahedron $i$ and face $c$ of tetrahedron $j$, and in this identification the edge common to faces $a$ and $b$ on tetrahedron $i$ is identified with the edge common to faces $c$ and $d$ on tetrahedron $j$. The orientation of this edge identification is not given; however, it is not needed. Each of faces $b$ and $c$ has three vertices, and this identification of edges also identifies two vertices from face $b$ with two vertices from face $c$. This leaves one vertex from each face, and these must be identified together. By repeating this process for the two external arcs parallel to $e$ we can therefore determine the actual face identification between face $b$ and face $c$.
![Three tetrahedra about a central edge. Note that only vertices of tetrahedra are labelled in this diagram (i.e., vertices $2$ and $3$ are vertices of distinct tetrahedra, but in the triangulation they are identified together), and recall that vertex $1$ is opposite face $1$.[]{data-label="fig:FaceIdentToDecompExample"}](FaceIdentToDecompExample)
First we give an example of how to partially build an ordered decomposition. For this example we have a triangulation edge of degree $\geq 3$, depicted in Figure \[fig:FaceIdentToDecompExample\] as the thicker central edge. Recall that face $x$ of a tetrahedron is the face opposite vertex $x$. We see that in the leftmost tetrahedron, the thickened edge is opposite vertices $1$ and $2$, and that face $1$ is identified with face $4$. We therefore have the sequence $(\{v_{1,1},v_{1,2}\},\{v_{1,1},v_{2,4}\},\ldots)$ occurring in one of the walks of the ordered decomposition.
Continuing this process shows that the sequence $$(\{v_{1,1},v_{1,2}\},\{v_{1,1},v_{2,4}\},\{v_{2,4},v_{2,3}\},\{v_{2,3},v_{3,6}\},\{v_{3,6},v_{3,5}\},\ldots)$$ occurs in one of the walks of the ordered decomposition.
\[lemma:gtri\_to\_ffpg\]
First construct the fattened face pairing graph from the face pairing graph of the triangulation. We now label the fattened face pairing graph. Begin by labelling the tetrahedra in the triangulation, and their vertices. Label the individual nodes of the fattened face pairing graph such that if face $a$ of tetrahedron $i$ is identified with face $b$ of tetrahedron $j$ then the corresponding three parallel arcs are between node $v_{i,a}$ and node $v_{j,b}$ in the fattened face pairing graph.
Recall that an edge $ab$ is the edge between vertices $a$ and $b$. Given a tetrahedron with vertices labelled $a$, $b$, $c$ and $d$, the edge $ab$ has as endpoints the two vertices $a$ and $b$ and thus is the intersection of face $c$ and face $d$, so the edge $ab$ in the triangulation is represented by the arc $\{v_{i,c},v_{i,d}\}$ in the fattened face pairing graph.
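The correspondence just described can be written as a small sketch; the string labels $\{a,b,c,d\}$ and the tuple encoding of nodes are our own assumptions for illustration.

```python
def edge_to_internal_arc(i, edge):
    """edge: a pair of vertex labels from {'a','b','c','d'}.
    Edge ab lies on the two faces c and d (those opposite its endpoints),
    so it is represented by the internal arc {v_{i,c}, v_{i,d}}."""
    opposite_faces = set('abcd') - set(edge)      # the two faces meeting in the edge
    return {(i, face) for face in opposite_faces} # nodes v_{i,c} and v_{i,d}
```

For example, edge $ab$ of tetrahedron $1$ maps to the arc between nodes $v_{1,c}$ and $v_{1,d}$.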
Start with an edge $ab$ on tetrahedron $i$ in the triangulation, and add $\{v_{i,c},v_{i,d}\}$ to the start of what will become a walk in the ordered decomposition.
![Face $c$ of tetrahedron $i$ is identified with face $g$ of tetrahedron $j$. As a result, one of the walks of the ordered decomposition contains the three arcs $(\{v_{i,d},v_{i,c}\},\{v_{i,c},v_{j,g}\},\{v_{j,g},v_{j,h}\})$ in order.[]{data-label="fig:FaceIdentToDecomp"}](FaceIdentToDecomp)
Face $c$ on this tetrahedron must be identified with some face $g$ on tetrahedron $j$. For a diagram, see Figure \[fig:FaceIdentToDecomp\]. Through this identification, the edge $ab$ must be identified with some edge on face $g$. Call this edge $ef$. Add one of the three arcs $\{v_{i,c},v_{j,g}\}$ to the current walk. Since a face contains three edges, by construction we can always find such an arc which is not already in one of the walks of the ordered decomposition. If $\{v_{j,g},v_{j,h}\}$ is already in this walk then we are finished with the walk. Otherwise, add the arc $\{v_{j,g},v_{j,h}\}$ into the walk. The process then continues with the edge $ef$. Since each tetrahedron edge is the intersection of two faces of a tetrahedron, it is clear that this process will continue until the initial edge $ab$ is reached and the current walk is complete.
The above procedure is then repeated until all arcs have been added to a walk. By construction, we have created an ordered decomposition with the required properties.
Recall that $\deg (e)$ is the number of edges of tetrahedra identified together to form edge $e$ in the triangulation. The following corollary follows immediately from the constructions.
\[cor:edge\_link\] Given an ordered decomposition $\{P_1,\ldots,P_t\}$, each walk $P_i$ corresponds to exactly one edge $e$ in the corresponding general triangulation. In addition, $|P_i| = 2 \deg(e)$.
Recall that in a 3-manifold triangulation, no edge may be identified with itself in reverse. In the triangulation one may consider the ring of tetrahedra $\Delta_1,\ldots,\Delta_k$ (which need not be distinct) around an edge $e=ab$, as in Figure \[fig:edge\_marking\_tetrahedra\]. Start on $\Delta_1$, and mark one edge incident to $e$ (say $bc$) as being “above” $e$. Since $bc$ is “above” $e$, the face $bcd$ must be the “top” face of $\Delta_1$, and thus the edge $bd$ must also be “above” $e$ and is marked. We can then track the edge $bd$ through a face identification, and across the top of the next tetrahedron. At some point, we must reach $\Delta_1$ again. If $\Delta_1$ is reached via one of the edges $ac$ or $ad$, then $e$ is identified with itself in reverse. However, if $\Delta_1$ is reached via the edge $bc$ again, then we know that the edge $ab$ has not been identified with itself in reverse.
Loosely speaking, in the decomposition setting, we look at one walk $P_x$ of our ordered decomposition and mark arcs in the decomposition as being “above” arcs in the walk $P_x$. If we again consider the edge $bc$ as “above” $ab$, we mark the arc[^4] $\{v_{i,a},v_{i,d}\}$. Since the ordered decomposition corresponds to exactly one triangulation, we can use the ordered decomposition to determine which edge is identified with $\{v_{i,a},v_{i,d}\}$. We then mark the next edge, and proceed as in the previous paragraph. The following definition combined with Lemma \[lemma:no\_single\_edge\_reverse\] achieves the same result in our new framework.
\[definition:marking\] Given an ordered decomposition $\mathcal{P} = \{P_1,P_2,\ldots,P_t\}$, we can *mark* a walk $P_x$ as follows.
![The process used to mark edges as per Definition \[definition:marking\]. The dot-dashed arcs are the ones marked as “above”. Recall that the ellipses are whole nodes, the insides of which denote how internal and external arcs are paired up in the decomposition.[]{data-label="fig:markingedges"}](edge_marking_decomp)
Pick an external arc $e_s$ from $P_x$. Arbitrarily pick an external arc $e_S$ parallel to $e_s$, and mark $e_S$ as being “above” $e_s$. Then let $e_a = e_s$ and $e_A = e_S$ and continue as follows (see Figure \[fig:markingedges\] for a diagram of the construction):
- Let $e_b$ be the next external arc in $P_x$ after $e_a$.
- The internal arc preceding $e_b$ joins two nodes. Call these nodes $i$ and $j$, such that $e_b$ is incident on $j$.
- Some external arc $e_A$ incident on $i$ must be marked as “above” $e_a$. Find the closed walk which $e_A$ belongs to. In this closed walk there must exist some internal arc which either immediately precedes or follows $e_A$ through node $i$. Call this internal arc $e_B$. Note that the walk containing these two arcs need not be, and often is not, $P_x$. Arc $e_B$ must be incident to $i$, and some other node which we shall call $k$.
- Find the internal arc $e_C$ between nodes $k$ and $j$, and find the walk $P_y$ that it belongs to. In this walk, one of the arcs parallel to $e_b$ must either immediately precede or follow $e_C$ and be incident upon node $j$. Call this arc $e_D$.
- If $e_b = e_s$, and $e_D$ is already marked as being above $e_b$, we terminate the marking process.
- Otherwise, mark the arc $e_D$ as being above $e_b$ and repeat the above steps, now using $e_b$ in place of $e_a$, and using $e_D$ in place of $e_A$.
Note that this process of marking specifically marks one arc as being “above” another. It does not mark arcs as being “above” in general.
To visualise this definition in terms of the decomposition, see Figure \[fig:markingedges\]. The arcs $e_a$ and $e_b$ are part of a closed walk, and we are marking the edges “above” this walk. Arc $e_A$ was arbitrarily chosen. Arc $e_B$ follows $e_A$, and then we find $e_C$ as the arc sharing one node with $e_B$ and one with $e_b$. From $e_C$ we can find and mark $e_D$.
We will show how to achieve a similar result in our new framework. First we give an overview of the idea, with technical details to follow. In brief, the walks containing $e_A$ and $e_D$ represent edges of tetrahedra in the triangulation that share triangles with the common edge represented by $P_x$, and which both sit “above” this common edge (assuming some up/down orientation). Both $e_B$ and $e_C$ are internal arcs of the same tetrahedron and share a common node $k$, so we know that both these internal arcs represent edges of the same tetrahedron which share a common face $k$. The external arcs $e_A$ and $e_D$ represent identifications of $e_B$ and $e_C$ respectively with edges of (typically different) adjacent tetrahedra.
\[lemma:no\_single\_edge\_reverse\] Take an ordered decomposition containing a walk $P_x$ with arcs marked according to Definition \[definition:marking\], and consider the corresponding triangulation. Then the edge of the triangulation represented by $P_x$ is identified to itself in reverse if and only if there exists some external arc $e$ in $P_x$ that has two distinct external arcs both marked as “above” $e$.
Part of an ordered decomposition is shown in Figure \[fig:BigDecomp\], and we use the notation as shown there. The part shown represents a single face identification between two (not necessarily distinct) tetrahedra. The markings on the tetrahedra denote exactly what each labelled arc in the fattened face pairing graph represents. As such, we allow ourselves to say that an internal arc of the ordered decomposition “is” an edge of a tetrahedron: for example, $e_C$ is an internal arc that represents an edge of a tetrahedron, yet for brevity we say that $e_C$ is that edge. The external arcs all represent edge identifications within face identifications, and are drawn with dashed lines.
![Part of an ordered decomposition, and associated tetrahedra. Identifications of edges are shown with dashed arrows.[]{data-label="fig:BigDecomp"}](BigDecomp)
We prove the result by applying an orientation onto each of the edges of tetrahedra contained in the edge of the triangulation represented by $P_x$. Consider first the arc $e_a$, which represents one edge identification in some face identification. The arc $e_A$ (one of the two arcs parallel to $e_a$) is marked as being “above” $e_a$. This is equivalent to assigning an orientation onto each of the pair of edges represented by $e_a$. Since $e_b$ is one of these, we now have an orientation on the edge $e_b$. We want to fix an orientation onto the edge $e_f$ such that the orientations of $e_b$ and $e_f$ agree after the identification of faces. Since $e_B$ immediately follows $e_A$ (or vice-versa) and $e_C$ immediately follows $e_D$ (or vice-versa) in the ordered decomposition, edges $e_B$ and $e_C$ meet in a common tetrahedron vertex; call this vertex $v$. We also see that the edge $e_b$ meets $v$. Since the edge $e_b$ is identified with the edge $e_f$ (via the edge identification represented by $e_d$), and the edge $e_C$ is identified with the edge $e_E$ (via the edge identification represented by $e_D$), $v$ must be identified to the vertex common to edges $e_f$ and $e_E$.
The orientation of the edge represented by $e_b$ has been used to orient the edge represented by $e_f$ such that the two orientations agree after the face identification. Repeating this process for all arcs in $P_x$ in turn orients all the edges of tetrahedra that are contained in the edge of the triangulation.
If every external arc $e$ in $P_x$ has exactly one external arc marked as “above” $e$, then we have exactly one orientation for each edge of a tetrahedron. That is, the edge of the triangulation corresponding to $P_x$ cannot be identified with itself in reverse.
If some external arc $e$ in $P_x$ has two distinct external arcs marked as “above” $e$, then every external arc must have two such other arcs marked (as the marking process can only terminate when it reaches $e_s$ in Definition \[definition:marking\]). This must mean that we have assigned two distinct orientations to each tetrahedron edge in the triangulation edge corresponding to $P_x$ and therefore this triangulation edge is identified with itself in reverse.
If no external arc of a walk $P_x$ in an ordered decomposition has two distinct external arcs marked as being “above” it (so, by Lemma \[lemma:no\_single\_edge\_reverse\], the corresponding edge is not identified with itself in reverse), we say that this walk is [*non-reversing*]{}.
\[definition:manifold\_decomp\] A [*manifold decomposition*]{} is an ordered decomposition of a fattened face pairing graph satisfying all of the following conditions.
- The ordered decomposition contains $n+1$ closed walks.
- The fattened face pairing graph contains $4n$ nodes.
- Each walk is non-reversing.
- The associated general triangulation contains exactly one vertex.
\[thm:md\_equiv\_3tri\] Up to relabelling, there is a one-to-one correspondence between manifold decompositions of connected fattened face pairing graphs and 1-vertex 3-manifold triangulations.
Constructions \[lemma:ffpg\_to\_gtri\] and \[lemma:gtri\_to\_ffpg\] give the correspondence between general triangulations and ordered decompositions. All that remains is to show that the extra properties of a manifold decomposition force the corresponding triangulation to be a 3-manifold triangulation. Since the decomposition contains $n+1$ walks, Corollary \[cor:edge\_link\] tells us the triangulation has $n+1$ edges. Additionally, each tetrahedron corresponds to four nodes in the fattened face pairing graph, so the triangulation has $n$ tetrahedra and thus by Lemma \[lemma:2sphere-links\] we see that the link of each vertex is homeomorphic to a 2-sphere. Each walk is non-reversing so Lemma \[lemma:no\_single\_edge\_reverse\] says that no edge in the corresponding triangulation is identified with itself in reverse, and we have the required result.
We now define the notation used to express specific ordered decompositions. The notation is defined such that it can also be interpreted as a [*spine code*]{} (as used by Matveev’s Manifold Recognizer [@MatveevRecog]), and that the spine generated from such a spine code is a dual representation of the same combinatorial object represented by the manifold decomposition. For more detail on spine codes, see [@Matveev2007AlgorithmicTopology].
\[not:md\] Take an ordered decomposition of a fattened face pairing graph with $4n$ nodes, and label each set of three parallel external arcs with a distinct value taken from the set $\{1,\ldots,2n\}$ (so two external arcs receive the same label if and only if they are part of the same triple of parallel arcs). Assign an arbitrary orientation to each set of three parallel external arcs. For each walk in the ordered decomposition:
1. Create an empty ordered list.
2. Follow the external arcs in the walk.
1. If an external arc with label $i$ is traversed in a direction consistent with its orientation, add $+i$ to the end of the corresponding ordered list.
2. If instead the arc is traversed against its orientation, add $-i$ to the end of the list.
3. Continue until the first external arc in the walk is reached.
See Example \[ex:3sphere-ffpg\] for an example of the use of this notation. Note that this notation only records the external arcs, and does not record any internal arcs in walks.
We can also reconstruct the face pairing graph (and therefore the fattened face pairing graph) from this notation (in particular, we can reconstruct the internal arcs). The method essentially uses the fact that each external arc represents some identification of two faces (and three parallel external arcs will represent the same identification of two faces), and so we can use the orientation of each arc to distinguish between the two faces in each identification and thereby build up the face pairing graph.
\[ex:3sphere-ffpg\]
The following set of walks (remember, we omit internal arcs and instead prescribe orientations on external arcs) describes a manifold decomposition of a 3-sphere.
$$T = \{ (1), (1,2,4,-2,3,-4,-3,-1,3,-2), (4)\}$$
Figure \[fig:3sphere-decomp\] shows this manifold decomposition of a 3-sphere. Given the appropriate vertex labellings, this represents the same triangulation as that given in Example \[ex:3sphere\].
Each integer in $T$ represents an identification of faces, and we can also track each face in an identification individually using the sign of said integer. For example, $-3$ is before $-1$ in the second walk, so we can say that the “second” face in identification $1$ belongs to the same tetrahedron as the “first” face in identification $3$. Each integer (or its negative) appears exactly three times in an ordered decomposition, so we can determine exactly which faces belong to the same tetrahedron. For example, both faces involved in identification $1$ belong to the same tetrahedron as the “first” face in identification $2$ and the “first” face in identification $3$.
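A quick sketch of the basic invariant just used: since each face identification contributes three parallel external arcs, each label appears exactly three times across the walks of a decomposition, ignoring sign. We can verify this for the 3-sphere example above (the tuple encoding of walks is our own choice).

```python
from collections import Counter

# The 3-sphere example from the text, with internal arcs omitted as in the
# notation; tuples hold the signed labels of the external arcs of each walk.
T = [(1,), (1, 2, 4, -2, 3, -4, -3, -1, 3, -2), (4,)]

# Each face identification label occurs exactly three times, ignoring sign.
counts = Counter(abs(x) for walk in T for x in walk)
assert all(c == 3 for c in counts.values())
```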
![A manifold decomposition of a fattened face pairing graph that represents a 3-sphere. Recall that each grey ellipse is actually a node in the fattened face pairing graph.[]{data-label="fig:3sphere-decomp"}](ffpg-decomp)
An implementation note: it is trivial, given a fattened face pairing graph and a “partial” ordered decomposition in which all the internal arcs are missing, to reconstruct the complete ordered decomposition. For the theoretical discussions in this paper we work with the full ordered decompositions, but in the implementation we only store the sequential list of external arcs as in Notation \[not:md\].
Algorithm and improvements {#sec:existing-results}
==========================
In this section we give various improvements that may be used when enumerating manifold decompositions (i.e., 3-manifold triangulations). These are based on known theoretical results in 3-manifold topology, combined with suitable data structures.
Many existing algorithms in the literature [@Regina; @Matveev2007AlgorithmicTopology] build triangulations by identifying faces pairwise (or taking combinatorially equivalent steps, such as annotating edges of special spines [@Matveev2007AlgorithmicTopology]). The algorithm we give here essentially constructs the neighbourhood of each *edge* of the triangulation one at a time. Therefore the search tree traversed by our new algorithm is significantly different than that traversed by other algorithms. This is highlighted experimentally by the results given in Section \[sec:results\].
Algorithm {#sec:algo}
---------
The basis of our implementation is a simple backtracking approach to enumerate manifold decompositions. First each set of three parallel external arcs is given an arbitrary orientation, and then each set is given a distinct label from the set $\{1,\ldots,2n\}$ (so two external arcs receive the same label if and only if they are part of the same triple of parallel arcs). The algorithm then finds the arc $e$ with the lowest such label that is not already in some walk. A new walk is started with $e$ being used in the “forwards” direction. As seen in Figure \[fig:algorithm\], $e$ is incident to some node $n_1$ at its “head”. There are three choices for the next arc in the walk, corresponding to the three internal arcs incident to $n_1$. The algorithm tries each of these three in turn, as long as said internal arc has not already been used in a walk. Assume internal arc $i$ is used, and that $i$ is the arc between nodes $n_1$ and $n_2$. The next step is to check whether $i$ can be the last arc in the walk; that is, whether the initial arc of the walk is also incident upon $n_2$. If so, finish this walk and attempt to create the next walk (or, if all arcs have been used, determine whether the current set of walks comprises an ordered decomposition). After attempting to complete the walk (and regardless of whether the walk could be completed or not), one of the three parallel external arcs incident to $n_2$, say $e_2$, is used next in the walk. Since these are parallel, there is no need to differentiate between them; additionally, since the internal arc $i$ was not used in a walk, we know that at least one of these three arcs has not been used in a walk. The algorithm then adds $e_2$ to the current walk and continues the process until all possible ordered decompositions have been found.
![Partial walk being built as in the algorithm. The large arrows indicate the orientation assigned to each set of three parallel external arcs. In this diagram, $e$ was the starting arc. The choice of $i$ is shown. Note that all three possible choices for $e_2$ are equivalent. Also since the orientation on $e_2$ is “backwards”, the new partial walk would be $\mathcal{P}\lhd (i,-e_2)$.[]{data-label="fig:algorithm"}](algorithm-explain)
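The search just described can be sketched in a much-simplified runnable form. This is our own illustration, not the paper's implementation: orientation labels, the parallel-arc symmetry argument and all pruning are omitted, self-loop arcs are not handled, and the graph encoding (`ends[a]` giving the two endpoints of arc `a`) is an assumption.

```python
def enumerate_decompositions(arcs, ends, is_internal):
    """Yield every partition of `arcs` into closed walks of even length
    whose consecutive arcs alternate between internal and external."""
    order = sorted(arcs)

    def search(walks, walk, start, node, unused):
        # Try to close the current walk: back at its start node, even
        # length, and the first/last arcs alternate cyclically.
        if node == start and len(walk) % 2 == 0 and \
                is_internal(walk[0]) != is_internal(walk[-1]):
            done = walks + [walk]
            if not unused:
                yield done
            else:
                a = next(x for x in order if x in unused)  # lowest unused arc
                u, v = ends[a]
                yield from search(done, [a], u, v, unused - {a})
        # Try to extend the current walk with an unused arc of the
        # opposite type that is incident to the current node.
        for a in order:
            if a in unused and node in ends[a] and \
                    is_internal(a) != is_internal(walk[-1]):
                u, v = ends[a]
                yield from search(walks, walk + [a], start,
                                  v if node == u else u, unused - {a})

    first = order[0]
    u, v = ends[first]
    yield from search([], [first], u, v, set(arcs) - {first})
```

On a toy 4-cycle with alternating arc types there is exactly one decomposition: the single walk through all four arcs.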
However, this approach is not tractable for any interesting values of $n$, and so we introduce the following improvements.
Limiting the size of walks
--------------------------
Enumeration algorithms [@Burton2004; @Burton2007; @Regina; @Martelli2001; @Matveev1998; @Matveev2007AlgorithmicTopology] in 3-manifold topology often focus on closed, minimal, irreducible and ${\mathbb{P}^2}$-irreducible 3-manifold triangulations. These properties were all defined in Section \[sec:notation\]. For brevity, we say that a triangulation (or manifold decomposition) has such a property if and only if the underlying manifold has the property.
The following results are taken from [@Burton2004], though in the orientable case similar results were known earlier by other authors [@Martelli2001; @Matveev2007AlgorithmicTopology].
[*(2.1 in [@Burton2004])*]{} No closed minimal triangulation has an edge of degree three that belongs to three distinct tetrahedra.
[*(2.3 and 2.4 in [@Burton2004])*]{} No closed minimal ${\mathbb{P}^2}$-irreducible triangulation with $\geq 3$ tetrahedra contains an edge of degree $\leq 2$.
Since the degree of an edge $e$ of a triangulation is the number of tetrahedron edges which are identified to form $e$, these results translate to manifold decompositions as follows.
\[cor:degree1or2\] No closed minimal ${\mathbb{P}^2}$-irreducible manifold decomposition with $\geq 3$ tetrahedra contains a walk which itself contains less than three external arcs.
\[cor:degree3\] No closed minimal manifold decomposition contains a walk which itself contains exactly three internal arcs representing edges on distinct tetrahedra (i.e., belonging to three distinct $K_4$ subgraphs).
The above results are direct corollaries, as it is simple to translate the terms involved, and the results are simple enough to implement in an algorithm. In the backtracking algorithm, this means we can check the number of arcs in a walk before adding the walk to the decomposition. This check runs in constant time if the length of the current partial walk is stored.
Additionally, for a census of 1-vertex triangulations on $n$ tetrahedra, a manifold decomposition must contain exactly $n+1$ walks. If the algorithm has completed $k$ walks, then there are $n+1-k$ walks left to complete. Each such walk must contain at least three external arcs, so if there are less than $3(n+1-k)$ unused external arcs, the current subtree of the search space can be pruned.
\[impro:3x\_arcs\_remaining\] If during the enumeration process $k$ walks have been completed and there are less than $3(n+1-k)$ unused external arcs, prune the search tree at this point.
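The pruning test itself is a constant-time arithmetic check; a sketch follows (the function name and parameters are our own). Passing a bound of four arcs per walk gives the stronger test of the next improvement.

```python
def can_prune(n, completed_walks, unused_external_arcs, min_arcs_per_walk=3):
    """Census on n tetrahedra: a manifold decomposition needs n + 1 walks,
    and each remaining walk needs at least min_arcs_per_walk external arcs."""
    remaining_walks = n + 1 - completed_walks
    return unused_external_arcs < min_arcs_per_walk * remaining_walks
```

For example, with $n=5$ and $k=2$ there are $4$ walks left, so fewer than $12$ unused external arcs means the subtree can be pruned.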
We extend this result one step further. By Corollary \[cor:degree3\] a closed walk in a manifold decomposition which contains three internal arcs must contain two internal arcs belonging to the same $K_4$, as in Figure \[fig:3-walk\]. We modify our algorithm to enumerate all such closed walks first. Each such walk is either present or absent in any manifold decomposition. For each possible combination of such walks, we fix said walks and then run the search on the remaining arcs. All other walks must now contain at least four external arcs, so during the census on $n$ tetrahedra if the algorithm has completed $k$ walks and there are less than $4(n+1-k)$ unused external arcs we know that the partial decomposition cannot be completed to a manifold decomposition.
\[impro:4x\_arcs\_remaining\] For each $K_4$ in the given graph, determine if two of its internal arcs can be used together in a walk containing exactly three internal arcs. If this is possible, add said walk to the set $S$. Then, for each subset $s \subseteq S$, use $s$ as a starting set of walks and attempt to complete the ordered decomposition. If during the enumeration process $k$ walks have been completed and there are less than $4(n+1-k)$ unused external arcs, prune the search tree at this point.
![The only possible walk containing 3 internal arcs not all from distinct tetrahedra in a fattened face pairing graph on more than 1 tetrahedron. Only the external arcs used in the walk are shown, other external arcs are not shown.[]{data-label="fig:3-walk"}](3_walk)
Avoiding cone faces
-------------------
For some properties of minimal triangulations, it is not clear that the corresponding tests can be implemented cheaply. Here we identify further results from the literature that enable fast implementations in our setting. The following was shown in [@Burton2004].
[*(2.8 in [@Burton2004])*]{} Let $T$ be a closed minimal ${\mathbb{P}^2}$-irreducible triangulation containing $\geq
3$ tetrahedra. Then no single face of $T$ has two of its edges identified to form a cone as illustrated in Figure \[fig:one-face-cone\].
![A one-face cone formed by identifying the two marked edges.[]{data-label="fig:one-face-cone"}](one-face-cone)
For manifold decompositions, our translation of this result also requires the underlying manifold to be orientable in order to give a fast algorithmic test.
Let $D$ be a closed minimal ${\mathbb{P}^2}$-irreducible manifold decomposition of an orientable manifold containing $\geq 3$ tetrahedra. Then no walk of $D$ can use two parallel external arcs in opposite directions (as seen in Figure \[fig:BothDirections\]).
![The depicted walk cannot occur in a closed minimal ${\mathbb{P}^2}$-irreducible orientable manifold decomposition as external arcs $e_1$ and $e_2$ are used in opposite directions. The dotted lines indicates the walk continues through undrawn parts of the fattened face pairing graph.[]{data-label="fig:BothDirections"}](ArcBothDirections)
Recall that by our definition, if some walk $P_x$ of a manifold decomposition contains the sequence of arcs $(\{v_{i,a},v_{i,b}\},\{v_{i,a},v_{j,c}\})$ then face $a$ of tetrahedron $i$ is identified with face $c$ of tetrahedron $j$. Assume towards a contradiction that we also have the sequence of arcs $(\{v_{i,a},v_{j,c}\},\{v_{i,a},v_{i,d}\})$ in the walk $P_x$ somewhere, such that the parallel arcs of the form $\{v_{i,a},v_{j,c}\}$ are used in the walk in both directions.
Affix some orientation onto the edge of the manifold represented by $P_x$, and consider the ring of tetrahedra surrounding this edge. Since we have an orientable manifold, we can make use of a “right-hand” rule. See Figure \[fig:RightHand\] for a visual aid. Imagine a right hand inside tetrahedron $i$, gripping edge $cd$ (represented by $\{v_{i,a},v_{i,b}\}$) such that the thumb points towards the positive end of the edge and the fingers curl around the edge so that they leave the tetrahedron through face $a$ (see Figure \[fig:RightHand-a\]). Since the manifold is orientable, any time this hand is back inside tetrahedron $i$ it must have this same orientation. Now since $\{v_{i,a},v_{i,b}\}$ preceded $\{v_{i,a},v_{j,c}\}$ in the walk and the fingers curl “out” through face $a$ of tetrahedron $i$, if some other arc $\{v_{i,a},v_{i,d}\}$ succeeds arc $\{v_{i,a},v_{j,c}\}$ then the fingers must curl “in” through face $a$ of tetrahedron $i$ as the hand grips edge $cd$. As yet, this is no contradiction, as the hand grips the one edge of the triangulation, and hence many edges of tetrahedra. However, this necessarily leads to these two edges of tetrahedra having the same common vertex as their “positive” end (see Figure \[fig:RightHand-b\]). Then face $a$ has two edges identified as in Figure \[fig:one-face-cone\], contradicting result 2.8 of [@Burton2004] above.
This result leads to the following.
\[impro:no\_reverse\] When enumerating *orientable* manifold decompositions, if an external arc $e$ is to be added to some walk $W$, and $e$ is parallel to another external arc $e'$ which itself is in $W$, check whether $e$ and $e'$ will be used in opposite directions. If so, do not use $e$ at this point; instead backtrack and prune the search tree.
One vertex tests {#sec:1vtx}
----------------
Definition \[definition:manifold\_decomp\] requires that the associated manifold only have one vertex. We test this by tracking properties of the vertex links as the manifold decomposition (i.e., triangulation) is built up. Specifically, while the manifold decomposition is still being constructed, no vertex link may be a closed surface.
![A tetrahedron, with the link of the top vertex drawn in heavier lines. This link, when triangulated, is homeomorphic to a disc. Each of the three heavier lines is a frontier edge.[]{data-label="fig:Link"}](Link)
Initially, the link of each vertex may be triangulated as a single triangular face, and therefore has 3 frontier edges. Each time an external arc is used in a walk, two edges in the triangulation are identified together, and as a result two frontier edges are identified together (see Figure \[fig:IdentifyEdge\]).
![When the two edges of the two tetrahedra (long thick lines) are identified, we also know that the two frontier edges (short thick lines) will be identified.[]{data-label="fig:IdentifyEdge"}](IdentifyEdge)
The orientation of this identification is not known, but it is also not required: we only need to verify that the triangulation has one vertex, which we do by tracking how many frontier edges remain in each link. When frontier edges are identified together, the two edges either belong to the same link or to two distinct links. If the two frontier edges belong to the same link (see Figure \[fig:LinkIdentSame\]), the number of frontier edges in the link is reduced by two. If instead the frontier edges belong to two distinct links (see Figure \[fig:LinkIdentDiff\]), with $l_a$ and $l_b$ frontier edges respectively, the resulting link has $l_a + l_b - 2$ frontier edges. Note that after this identification two links have been joined, so we must track not just the number of frontier edges but also which links have been identified.
Once a vertex link has no frontier edges, it is a closed surface. If any other distinct vertex links exist, we know that the triangulation must have more than one vertex, which gives the following.
\[impro:1vtx\] When building up a manifold decomposition, track how many “frontier edges” remain around each vertex link. If any vertex links are closed off before the manifold decomposition is completed, backtrack and prune the current subtree of the search space.
The number of frontier edges of each vertex link, as well as which vertex links are identified together, are tracked via a union-find data structure. The data structure is slightly tweaked to allow backtracking (see [@Burton2007] for details), storing the number of frontier edges at each node. For more details on the union-find algorithm in general, see [@Sedgewick1992].
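The frontier-edge bookkeeping can be sketched in a few lines. The following Python sketch is illustrative only (the class and method names are ours, and the backtracking support of the real implementation is omitted):

```python
class LinkUnionFind:
    """Union-find over vertex links, tracking frontier-edge counts.

    Each vertex link starts as a single triangle with 3 frontier edges.
    Identifying two frontier edges removes 2 frontier edges in total,
    whether the edges lie in one link or in two links being merged.
    """

    def __init__(self, n_links):
        self.parent = list(range(n_links))
        self.frontier = [3] * n_links  # one triangular face per link initially

    def find(self, x):
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def identify_frontier_edges(self, a, b):
        """Identify a frontier edge of link a with one of link b.

        Returns the number of frontier edges left in the resulting link.
        """
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            self.frontier[ra] -= 2                       # same link: l -> l - 2
        else:
            self.parent[rb] = ra                         # merge the two links
            self.frontier[ra] += self.frontier[rb] - 2   # l_a + l_b - 2
        return self.frontier[ra]

    def is_closed(self, a):
        """A link with no frontier edges remaining is a closed surface."""
        return self.frontier[self.find(a)] == 0
```

If `is_closed` returns true before the decomposition is complete, the search backtracks as in Improvement \[impro:1vtx\].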
Canonicity and Automorphisms
----------------------------
When running a search, many equivalent manifold decompositions will be found. These decompositions may differ in the order of the walks found, or two walks might have different starting arcs or directions. For example, the two walks $(a,b,c)$ and $(-b,-a,-c)$ are equivalent: the second starts on a different arc and traverses the walk backwards, but neither of these changes the manifold decomposition. Additionally, the underlying face pairing graph often has a non-trivial automorphism group.
To eliminate such duplication, we only search for [*canonical*]{} manifold decompositions. We use the obvious definition for a canonical walk (lowest-index arc is written first and is used in the positive direction).
A walk $P=(x_1,x_2,\ldots,x_m)$ in an ordered decomposition is semi-canonical if
- $x_1 > 0$ ; and
- $|x_1| \leq |x_i|$ for $i=2,\ldots,m$.
A walk $P=(x_1,x_2,\ldots,x_m)$ in an ordered decomposition is canonical if
- $P$ is semi-canonical; and
- for any semi-canonical $P'=(x'_1,x'_2,\ldots,x'_m)$ isomorphic (under cyclic permutation of the edges in the path or reversal of orientation) to $P$, either $|x_2| < |x'_2|$ or $|x_2| = |x'_2|$ and $x_2 > 0$.
This definition of canonical simply says that we always start on the arc with lowest given value, and use said arc in a forwards direction. If there are two or three such choices, we take the arc which results in the second arc in the walk having lowest value. If this still leaves us with two choices, we take the walk where we use said second arc in the “forwards” direction. Since there is exactly one internal arc between any two external arcs, we are guaranteed a unique choice at this stage.
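For concreteness, a walk can be canonicalised by enumerating all cyclic rotations of it and of its reversal, then picking the representative the definition prefers. This Python sketch is ours, not code from the actual enumeration software; walks of length at least two are assumed:

```python
def rotations(walk):
    """All cyclic rotations of a walk."""
    return [walk[i:] + walk[:i] for i in range(len(walk))]

def reverse(walk):
    """Traverse the walk backwards, using each arc in the opposite direction."""
    return [-x for x in reversed(walk)]

def canonical_form(walk):
    """Canonical representative of a walk (a list of signed arc indices).

    Semi-canonical candidates start with the positive arc of minimal
    absolute value; among those, prefer the smallest |x_2|, breaking a
    remaining tie by using x_2 in the forwards (positive) direction.
    """
    candidates = rotations(walk) + rotations(reverse(walk))
    lowest = min(abs(x) for x in walk)
    semi = [c for c in candidates if c[0] == lowest]  # positive, minimal |x_1|
    return min(semi, key=lambda c: (abs(c[1]), c[1] < 0, c))
```

For example, with arcs labelled $1,2,3$, the equivalent walks $(1,2,3)$ and $(-2,-1,-3)$ from the earlier example both canonicalise to $(1,2,3)$.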
Given two walks $P_x=(x_1,\ldots,x_k)$ and $P_y=(y_1,\ldots,y_m)$ in canonical form, we say that $P_x < P_y$ if and only if

- there is some $n \leq \min(k,m)$ such that $x_i = y_i$ for $i=1,\ldots,n-1$ and $x_n < y_n$; or

- $x_i = y_i$ for $i=1,\ldots,k$ and $k < m$.
In plainer terms, pairs of arcs from each walk are compared in turn until one arc value is smaller than the other, or until the end of one walk is reached, in which case the shorter walk is considered “smaller”.
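This ordering happens to coincide with Python's built-in sequence comparison, so a sketch (ours, for illustration) is a one-liner:

```python
def walk_less(px, py):
    """True if walk px precedes walk py in the ordering defined above.

    Arcs are compared in turn; at the first differing position the walk
    with the smaller arc value is smaller, and a walk that is a proper
    prefix of another is smaller.  Python list comparison does exactly
    this, so the whole definition reduces to the < operator.
    """
    return px < py
```

Sorting a list of canonical walks with this comparison yields the walk order required of a canonical decomposition.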
A manifold decomposition consisting of walks $P_1,P_2,\ldots,P_m$ is considered canonical if:
- $P_i$ is canonical for $i=1,\ldots,m$; and
- $P_i< P_{i+1}$ for $i=1,\ldots,m-1$.
Recall that we may have automorphisms of the underlying face pairing graph to consider. Each automorphism will relabel the arcs of the labelled fattened face pairing graph. Each relabelling changes any manifold decomposition by renumbering the arcs in the walks. We apply each automorphism to a manifold decomposition $\mathcal{D}$ to obtain a new decomposition $\mathcal{D}'$. Then $\mathcal{D}'$ is made canonical itself (by setting the first external arc in each walk and reordering the walks), and we compare $\mathcal{D}$ and $\mathcal{D'}$. If $\mathcal{D}' < \mathcal{D}$ then we can discard $\mathcal{D}$ and prune the search tree.
There are two points in the algorithm where we might test for canonical decompositions.
\[impro:CanonEveryArc\] Every time an external arc is added to a walk, check if the current decomposition is canonical. If not, disregard this choice of arc and prune the search tree.
\[impro:CanonWalks\] Every time a walk is completed, check if the current decomposition is canonical. If not, disregard this choice of arc and prune the search tree.
Unfortunately, checking if a (possibly partial) decomposition is canonical is not computationally cheap. Experimental results showed that using Improvement \[impro:CanonWalks\] was significantly faster than using Improvement \[impro:CanonEveryArc\] as fewer checks for canonicity were made.
Results and Timing {#sec:results}
==================
In this section we detail the results from testing the algorithm. We test the manifold decomposition algorithm and its improvements from Section \[sec:existing-results\] against the existing implementation in [*Regina*]{}.
[*Regina*]{} is a suite of topological software and includes state of the art algorithms for census enumerations in various settings, including non-orientable and hyperbolic manifolds [@burton11-genus; @Burton2014Cusped]. [*Regina*]{} and its source code are freely available, which facilitates comparable implementations and fair testing. [*Regina*]{} also filters out invalid triangulations as a final stage, which allows us to test the efficiency of our various improvements by enabling or disabling them independently. Like other census algorithms in the literature, [*Regina*]{} builds triangulations using the traditional framework by identifying faces two at a time.
We find that while [*Regina*]{} outperforms our new algorithms overall, there are non-trivial subcases for which our algorithm runs an order of magnitude faster. Importantly, in a typical census on 10 tetrahedra, [*Regina*]{} spends almost half of its running time on precisely these subcases. This shows that our new algorithm has an important role to play: it complements the existing framework by providing a means to remove some of its most severe bottlenecks. Section \[sec:graphtests\] discusses these cases in more detail.
These observations are, however, made in retrospect: we do not yet have a clear advance indicator of which algorithm will perform best on any given subcase.
Recall that a full census enumeration involves generating all 4-regular multigraphs, and then for each such graph $G$, enumerating triangulations with face pairing graph $G$. In earlier sections we only dealt with individual graphs, but for the tests here we ran each algorithm on all 4-regular multigraphs of a given order $n$.
In the following results, we use the term MD to denote our basic algorithm, using Improvements \[impro:4x\_arcs\_remaining\], \[impro:1vtx\] and \[impro:CanonWalks\]. For enumerating orientable manifolds only, we also use Improvement \[impro:no\_reverse\] and denote the corresponding algorithm as MD-o. Experimentation indicated that Improvement \[impro:1vtx\] was computationally expensive, and so we also tested algorithm MD\* (using only Improvements \[impro:4x\_arcs\_remaining\] and \[impro:CanonWalks\]) and algorithm MD\*-o (using Improvements \[impro:4x\_arcs\_remaining\], \[impro:no\_reverse\] and \[impro:CanonWalks\]). Note that these last two algorithms may find ordered decompositions which are not necessarily manifold decompositions, but we can easily filter these out once the enumeration is complete.
The algorithms were tested on a cluster of Intel Xeon L5520s running at 2.27GHz. Times given are total CPU time; that is, a measure of how long the test would take single-threaded on one core. The algorithms themselves, when run on all 4-regular multigraphs on $n$ nodes, are trivially parallelisable which allows each census to complete much faster by taking advantage of available hardware.
We note that, as expected, the census results are consistent between the old and new algorithms.
Aggregate tests
---------------
In the general setting (where we allow orientable and non-orientable triangulations alike) Table \[tab:all\_mfolds\] highlights that [*Regina*]{} outperforms MD when summed over all face pairing graphs. The difference seems to grow slightly as $n$ increases, suggesting that further optimisations in this setting are possible.
We suspect that tracking the orientability of vertex links is giving [*Regina*]{} an advantage here (see [@Burton2007], Section 5). Tracking orientability is more difficult with ordered decompositions, as the walks are built up one at a time—each external arc represents an identification of edges, but does not specify the orientation of this identification. Thus orientability cannot be tested until at least two of any three parallel external arcs are used in walks.
We also compare MD-o to [*Regina*]{}, where we ask both algorithms to only search for orientable triangulations. Both algorithms run significantly faster than in the general setting (demonstrating that Improvement \[impro:no\_reverse\] is a significant improvement). Table \[tab:orientable\] shows that [*Regina*]{} outperforms MD-o roughly by a factor of four. This factor appears to be constant, and here we expect MD to become comparable to [*Regina*]{} after more careful optimisation (such as [*Regina*]{}’s own algorithm has enjoyed over the past 13 years [@Burton2004; @Burton2007]).
To test Improvement \[impro:1vtx\] (the one-vertex test), we compare MD\* and MD\*-o against MD and MD-o respectively. The timing data in Tables \[tab:md\_star\_or\] and \[tab:md\_star\] shows that MD\* and MD\*-o outperformed MD and MD-o, demonstrating that Improvement \[impro:1vtx\] actually slows down the algorithm. We verified that Improvement \[impro:1vtx\] is indeed discarding unwanted triangulations—the problem is that tracking the vertex links is too computationally expensive. Algorithms MD\* and MD\*-o instead enumerate these unwanted triangulations and test for one vertex after the fact, discarding multiple-vertex triangulations after they have been explicitly constructed. The cost of this is included in the timing results, which confirms that such an “after the fact” verification process is indeed faster than the losses incurred by Improvement \[impro:1vtx\].
Individual graph tests {#sec:graphtests}
----------------------
It is on individual (and often pathological) face pairing graphs that the new algorithm shines. Recall that the census enumeration problem requires running an enumeration algorithm on all connected 4-regular multigraphs of a given order. Table \[tab:md\_graphs\] shows the running time of both [*Regina*]{} and MD\* on a cherry-picked sample of such graphs on 10 tetrahedra.
From these we can see that on some particular graphs, MD\* outperforms [*Regina*]{} by an order of magnitude. While these graphs were cherry-picked, they do display the shortfalls of [*Regina*]{}. There are 48432 4-regular multigraphs on 10 nodes, and it takes [*Regina*]{} 89.9 CPU-hours to complete this census. Of these 48432 graphs, 48242 are processed in under 300 seconds each. In contrast, it takes [*Regina*]{} 43.6 CPU-hours to process these remaining 190 graphs. This accounts for 48.5% of the running time of [*Regina*]{}’s census on 10 tetrahedra triangulations.
Running these “pathological” graphs through MD takes 12.1 CPU-hours, for a saving of 31.5 CPU-hours. This would reduce the running time of the complete census from 89.9 CPU-hours to 58.4 CPU-hours, a 35% improvement.
If we find the ideal heuristic which tells us exactly which of [*Regina*]{} or MD\* will be faster on a given graph, we could always just use the faster algorithm. This would save 40 hours of computing time for the 10 tetrahedra census, which would turn the running time from 90 CPU-hours down to 50 CPU-hours, a 44% improvement. Further work in this area involves identifying exactly which heuristics and graph metrics can be used to determine whether [*Regina*]{} or MD will analyse a given graph faster.
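The savings quoted above follow from simple arithmetic on the measured times; the figures below are taken directly from the text:

```python
total = 89.9        # CPU-hours for Regina's full 10-tetrahedron census
slow = 43.6         # CPU-hours Regina spends on the 190 pathological graphs
md_on_slow = 12.1   # CPU-hours MD needs for those same graphs

share = slow / total                       # fraction of time on the 190 graphs (~48.5%)
hybrid = total - slow + md_on_slow         # census time if MD handles them (~58.4 h)
improvement = (slow - md_on_slow) / total  # ~35% saved overall
ideal = total - 40.0                       # with a perfect per-graph heuristic (~50 h)
```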
Task [*Regina*]{} MD$^*$
------- -------------- --------
48308 2476 142
48083 2487 192
48288 2164 118
47332 2141 229
47333 2003 134
47520 2083 221
46914 2108 302
: Running time in seconds of MD$^*$ and [*Regina*]{} on particular graphs on 10 nodes. Here “Task” identifies the specific graph as being the $i$-th graph produced by [*Regina*]{}.
\[tab:md\_graphs\]
[^1]: Partially supported by the Australian Research Council (projects DP1094516, DP110101104).
[^2]: A 3-sphere is a 3-manifold that in $\mathbb{R}^4$ can be modelled as $x_1^2 + x_2^2 + x_3^2 + x_4^2 = 1$. It can be thought of as the 3-dimensional surface of a 4-dimensional ball.
[^3]: A triangulation is a cell decomposition satisfying some extra properties.
[^4]: Recall that the arc $\{v_{i,a},v_{i,d}\}$ denotes the edge common to face $a$ and face $d$. Since face $a$ contains vertices $b$,$c$ and $d$ and face $d$ contains vertices $a$, $b$ and $c$ this edge must be $bc$.
---
abstract: 'In selective withdrawal, fluid is withdrawn through a nozzle suspended above the flat interface separating two immiscible, density-separated fluids of viscosities $\nu_{upper}$ and $\nu_{lower} = \lambda \nu_{upper}$. At low withdrawal rates, the interface gently deforms into a hump. At a transition withdrawal rate, a spout of the lower fluid becomes entrained with the flow of the upper one into the nozzle. When $\lambda = 0.005$, the spouts at the transition are very thin with features that are over an order of magnitude smaller than any observed in the humps. When $\lambda = 20$, there is an intricate pattern of hysteresis and a spout appears which is qualitatively different from those seen at lower $\lambda$. No corresponding qualitative difference is seen in the hump shapes.'
author:
- 'Sarah C. Case'
- 'Sidney R. Nagel'
nocite: '[@Lister_1988]'
title: Spout States in the Selective Withdrawal System
---
Fluids change shape easily in response to stress. In some cases, the change is so dramatic that the fluid interface changes topology. An example is fluid entrainment by selective withdrawal. Here, a change in topology occurs when shear stresses cause the initially smooth interface between two immiscible fluids to erupt into a spout that pierces through the entraining fluid. If the shear is reduced, the spout collapses. In this paper, we address the nature of this topological change by characterizing the spout shapes near the transition.
It is tempting to think of such topological changes as analogous to thermodynamic phase transitions. Some fluid transformations, such as drop breakup, proceed as some physical dimension approaches zero, becoming much smaller than the macroscopic dimensions of the flows[@Zeff_2000; @Eggers_1997; @Lister_1998; @Cohen_2001]. Such a separation of length scales often leads to universal behavior as the dynamics approach a singularity where physical quantities diverge[@Constantin_1993; @Goldstein_1993; @Bertozzi_1996]. This resembles second-order or critical behavior. While this framework is appealing, it is imperfect. It was recently discovered that “critical” fluid transitions can exhibit a broader set of behaviors than their thermodynamic counterparts, and that not all fluid singularities obey universal dynamics[@Doshi_2003; @Keim_2006]. Other fluid transformations, such as selective withdrawal, proceed via a discontinuous jump and thus resemble first-order hysteretic transitions. This paper explores the nature of this type of topological change.
In our experiments, schematically shown in Fig. 1, a nozzle is suspended a height $S$ above the flat interface between two immiscible, density-separated fluids. Fluid is withdrawn at a flow rate $Q$ through the nozzle, initially deforming the interface into an axisymmetric hump. If $S$ is held fixed and $Q$ is increased, the hump will grow sharper until, at a transition flow rate, a spout of the lower fluid becomes entrained with the flow of the upper fluid into the nozzle. There is hysteresis in this transition, and once the spout is formed, decreasing $Q$ past a second, lower transition flow rate causes the spout to collapse.
![Experimental setup. Two immiscible fluids are density separated in a tank 20 cm x 20 cm x 30 cm. Fluid is withdrawn through a nozzle a distance $S$ above the unperturbed interface using a miniature gear pump. Before reaching the pump, the lower fluid sediments out in an airtight separation tank, which also serves to damp pressure variations in the flows. The upper fluid is deposited in a reservoir and siphoned back to the main tank, maintaining the upper fluid at a constant depth of 15-20 cm. The apparatus is illuminated from the rear, and a CCD camera images the interface which is then traced using the programs ImageJ (NIH) and Pro Fit (Quansoft). []{data-label="expt"}](experiment_new.eps){width="80"}
We observe different behavior depending on whether the selective withdrawal transition is approached from low $Q$, in the hump state, or from high $Q$, in the spout state. Previous experiments[@Cohen_2002; @Cohen_2004] and simulations[@MKB_2005] have described the approach to this transition from the hump state, analyzing it either as a weakly first-order thermodynamic phase transition[@Cohen_2002; @Cohen_2004] or as a saddle-node bifurcation[@MKB_2005].
In this paper we analyze the approach to the transition from the spout. We focus on two systems with different ratios $\lambda$ between the upper and lower fluid viscosities, $\nu_{upper}$ and $\nu_{lower} = \lambda \nu_{upper}$, respectively. When $\lambda = 0.005$, the spouts exhibit length scales near the transition that are an order of magnitude smaller than any observed in the humps. These small scales allow the entrainment of extremely thin fluid threads, which are potentially useful in many applications, such as the coating of biological tissue for the purposes of transplantation.[@Cohen_2001_S; @Wyman_2006] On the other hand, when $\lambda = 20$, we observe a much thicker and qualitatively different spout. By contrast this increase in the viscosity ratio does not cause a corresponding qualitative change in the hump state [@TBP]. In addition, at this value of $\lambda$ two distinct types of spouts appear at the same flow rates but in different hysteretic regimes.
![Spout shapes at $\lambda = 0.005$. The upper fluid is heavy mineral oil with $\nu_{upper} = 198$ cSt, $\rho_{upper} = 0.87$ g/ml. The lower fluid is deionized water with $\nu_{lower} = 1.000$ cSt, $\rho_{lower} = 0.998$ g/ml. The surface tension between the fluids is $\gamma = 35$ dyne/cm[@Cohen_2004]. (a) Image of a typical spout. (b) $z$ versus $r$, shown for three spouts with $S = 0.51$ cm: $(Q - Q^{*})/Q^{*} = 0.019$ (black circles), $(Q - Q^{*})/Q^{*} = 0.65$ (gray diamonds), $(Q - Q^{*})/Q^{*} = 2.34$ (open squares). The outlines are averaged for every three pixels in $r$ and $z$. $S$ changes by less than $0.01 \%$ while acquiring each data point. Inset shows $z$ versus $log(r)$. (c) $z$ versus $\kappa_{ax}$ is shown for the same values of $S$ and $Q$ as in (b).](lambda_005_fig.eps){width="80"}
We first examine the spout shapes near the transition when $\lambda = 0.005$. A typical spout profile is seen in Fig. 2a. Gravity forces the interface to be horizontal far from the nozzle, and the entraining flows force it to be vertical inside the nozzle. We analyze the shape of the spatial region that connects these two asymptotic behaviors. As $Q$ is varied, the shape of this connecting region changes, as shown in Fig. 2b. At low $Q$, the spout resembles a thin thread attached to a broad base, with a localized region of high curvature connecting the two structures. When $Q$ is increased, the spouts become smoother, approaching a nearly logarithmic shape as shown in the inset where $z$ is plotted versus $\log{r}$.
We characterize these profiles by calculating the axial and azimuthal curvatures $$\kappa_{ax}(z) = \frac{\frac{d^{2}r}{dz^{2}}}{(1 + ( \frac{dr}{dz})^{2})^{3/2}}\ ; \qquad \kappa_{az}(z) = \frac{1}{r(z)}.$$ Very near the transition to the hump, the spout exhibits a sharp peak in $\kappa_{ax}$, as seen in Fig. 2c. As $Q$ increases and the spouts become smoother, the peak in $\kappa_{ax}$ disappears below our resolution. We characterize the evolution of the spout as the transition is approached by examining the following length scales: $z_{p}$ (the $z$ location of the peak in $\kappa_{ax}$), $R_{p} \equiv 1/\kappa_{ax}(z_{p}) $, and $ r_{p} \equiv r(z_{p}) $.
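Given a digitized profile $r(z)$, these curvatures can be evaluated by finite differences. The following numpy sketch is illustrative only (the profiles in this work were traced with ImageJ and Pro Fit, not with this code):

```python
import numpy as np

def curvatures(z, r):
    """Axial and azimuthal curvature of an axisymmetric profile r(z).

    z, r : 1-D arrays of equal length with z monotonic.
    Returns (kappa_ax, kappa_az) evaluated at each sample.
    """
    drdz = np.gradient(r, z)        # dr/dz
    d2rdz2 = np.gradient(drdz, z)   # d^2 r / dz^2
    kappa_ax = d2rdz2 / (1.0 + drdz ** 2) ** 1.5
    kappa_az = 1.0 / r
    return kappa_ax, kappa_az
```

As a sanity check, for a conical profile $r = a + bz$ the axial curvature vanishes identically while the azimuthal curvature is $1/r$.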
{width="80"}
In Fig. 3, we plot these profile parameters as functions of the flow rate $Q$ for three values of $S$. Near the transition, $z_{p}$ approaches a minimum value $z_{min}$ as $Q$ approaches a value $Q^{*}$. Using $z_{min}$ and $Q^{*}$ as fitting parameters, we find that plotting $\log{(z_{p}- z_{min})/S}$ versus $\log{(Q-Q^{*})/Q^{*}}$ collapses the data onto a single curve, as shown in Fig. 3a, whereas the data for $R_{p}$ and $r_{p}$ collapse without scaling by $S$, as seen in Figs. 3b and 3c. We search for power-law scaling in the same way as was done in the analysis of the approach to the transition from the hump state[@Cohen_2004]. For $(Q-Q^{*})/Q^{*} < 0.6$, $(z_{p} - z_{min})/S \propto ((Q- Q^{*})/Q^{*})^{0.50 \pm 0.08}$. Using $Q^{*}$ determined from this fit, we find that $R_{p} \propto ((Q - Q^{*})/Q^{*})^{0.45 \pm 0.15}$. In this case, our fitting range is smaller: $(Q-Q^{*})/Q^{*} < 0.1$. Although we are able to place an upper limit on the point closest to the transition, our imaging does not allow us to determine $R_{p}$ more precisely. Finally, for $(Q - Q^{*})/Q^{*} < 1.5$, $r_{p}$ remains constant: $r_{p} = 22 \ \mu m \ \pm 4 \ \mu m$. All three quantities show large departures from power-law behavior far from the transition.
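Once $Q^{*}$ and $z_{min}$ are fixed, each power-law fit above reduces to a straight-line fit in log–log coordinates. A minimal sketch with synthetic data (the exponent and prefactor here are invented for illustration; in the actual analysis $Q^{*}$ and $z_{min}$ are themselves fitting parameters):

```python
import numpy as np

def fit_power_law(q, y, q_star):
    """Fit y = A * ((q - q_star) / q_star)**alpha by least squares in log-log."""
    x = (q - q_star) / q_star
    alpha, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return alpha, np.exp(log_a)

# Synthetic data mimicking (z_p - z_min)/S near the transition.
q_star = 1.0
q = np.linspace(1.05, 1.6, 20)
y = 0.3 * ((q - q_star) / q_star) ** 0.5

alpha, prefactor = fit_power_law(q, y, q_star)  # recovers alpha ~ 0.5
```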
Previous studies of the approach to the transition from low $Q$ found that the radius of curvature of the hump tip could be much smaller than the other lengths characterizing the system. For $\lambda = 0.005$, the largest separation of length scales $(h_{c}\kappa)_{hump}$ was $\sim 10$ (we estimate $h_{c}$ from the largest $h_{max}$ value shown)[@Cohen_2004]. Applying a similar analysis to the approach from high $Q$, we find that $z_{min}/R_{p}$ is $\sim 40$. We note, however, that $R_{p}$ is not the smallest length scale observed for this system. The azimuthal radius of curvature $r_{p}$ is significantly smaller than $R_{p}$, so that $z_{min}/r_{p}$ is greater than $150$.
When the viscosity ratio is increased to $\lambda = 20$, we observe a qualitatively different spout. These spouts are very thick, and the flows penetrate deeply into the bulk of the lower fluid. This is shown in Fig. 4a, where the flows, obtained by tracing dye lines inside the lower fluid, are superimposed on a typical spout image. These flows are distinct from those seen when $\lambda = 0.005$, where the flows are primarily along the interface[@Wyman_2006]. We therefore refer to the broad spouts as “bulk-flow” spouts. We note that the bulk-flow spout can be so thick that it fills the entire orifice when the nozzle diameter, $D$, is small. We restrict our analysis to $D$ large enough that this does not occur.
{width="80"}
At $\lambda = 20$, a spout closely resembling those seen at small $\lambda$ also exists in a narrow hysteretic region at the same values of $S$ and $Q$ as does the bulk-flow spout. We are able to observe these spouts by carefully manipulating the flow rate. At the transition flow rate, an unstable thread of the lower fluid becomes entrained in the upper one. After a time lag, which can be as long as several seconds, this initial spout begins to widen, culminating in the steady bulk-flow spout. However, if $Q$ is rapidly reduced before the initial spout widens significantly, a stable spout as much as two orders of magnitude thinner than the bulk-flow one is produced. This thin spout exhibits flow patterns in the lower fluid similar to those seen at $\lambda = 0.005$.
The profiles of these two spouts are distinctly different, as seen in Fig. 4b. In the inset to Fig. 4b, the bulk-flow profiles are fit to $z \propto e^{-r/r_{0}}$. The data deviates very slightly from this form very near the straw entry as well as near the horizontal interface, but is in excellent agreement in the intermediate region. This fit is used to calculate $\kappa_{ax}(z)$ for these spouts. The decay length, $r_{0}$, changes only slightly as a function of $D$: for $D = 0.40$ cm, $r_{0} = 0.54$ cm $\pm$ $0.02$ cm, while for $D = 0.80$, $r_{0}= 0.44$ cm $\pm$ $0.02$ cm. In contrast to this exponential, the hysteretic surface-flow spout profiles are closer to logarithmic, and are similar to those seen in the inset to Fig. 2b for $\lambda = 0.005$. The hysteretic surface-flow spout also displays a sharper peak in $\kappa_{ax}$ than the bulk-flow spout, as shown in Fig. 4c.
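The decay length $r_{0}$ can likewise be extracted with a linear fit, since $z \propto e^{-r/r_{0}}$ implies that $\ln z$ is linear in $r$ with slope $-1/r_{0}$. An illustrative sketch with synthetic data (the prefactor is invented; $r_{0} = 0.54$ cm is the $D = 0.40$ cm value quoted above):

```python
import numpy as np

def decay_length(r, z):
    """Estimate r0 assuming z = z0 * exp(-r / r0): slope of ln(z) vs r is -1/r0."""
    slope, _ = np.polyfit(r, np.log(z), 1)
    return -1.0 / slope

# Synthetic bulk-flow profile with decay length 0.54 cm.
r = np.linspace(0.1, 2.0, 50)
z = 3.0 * np.exp(-r / 0.54)
r0 = decay_length(r, z)  # recovers ~0.54
```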
A clearly defined transition exists between the two spouts. At a threshold flow rate, the stable surface-flow spout becomes unstable and rapidly widens into a bulk-flow spout with no stable intermediate structures. Thus, there are transitions between three distinct steady states. In Fig. 5, we show these transitions, including the relevant hystereses, on a phase diagram of the transition flow rate $Q_{t}$ versus $S$. As seen in Fig. 5a, the transition from hump to bulk-flow spout can be fit to $S = (0.29 \pm 0.01)(Q_{t})^{0.32 \pm 0.03}$. The hysteresis in the hump to bulk-flow spout transition is large, especially at low $S$, and is observed to vary with $D$. Fig. 5b shows a narrow but well-defined region in which, depending on the initial state of the system, surface-flow spouts, bulk-flow spouts, and humps can all be observed. This region drops below experimental resolution at $Q_{t}$ = 35 ml/s.
{width="80"}
In conclusion, the independent variation of the viscosity ratio $\lambda$ and the flow rate $Q$ dramatically changes the spout profiles. At $\lambda = 20$ there is a broad bulk-flow spout which disappears for smaller $\lambda$. Thin surface-flow spouts, seen at low $\lambda$, are only visible in a limited hysteretic region at $\lambda = 20$. Cohen[@Cohen_2004] observed no qualitative difference in the hump shapes near the transition for $\lambda$ between $10^{-3}$ and $1.7$. Similar analysis of our data extends the range to $\lambda = 20$[@TBP]. Thus, a qualitative change in the spout, not seen in the hump, is found by varying $\lambda$. Decreasing the other control parameter, $Q$, while fixing $\lambda=0.005$, causes the spout to narrow drastically as the transition is approached. This produces a much greater separation of length scales in the spout than found in the hump.
Unlike the hump interface, the spout interface is not bounded in the vertical direction. Thus, the spout has two asymptotic regimes which must be matched to each other by the dynamics: at large radius it is constrained by gravity to be horizontal and at large heights it is constrained by the flows in the nozzle to be vertical. The increase in the separation of length scales as the transition is approached can be viewed as a decrease in the spatial width connecting these constraints. At high $Q$, the axial curvature, $\kappa_{ax}$, is nearly constant in $z$. As $Q$ is decreased to the point where the spout disappears, the transition region becomes increasingly localized, leading to a sharp peak in $\kappa_{ax}$. However, the spatial transition region never collapses to zero and $\kappa_{ax}$ is always cut off at a finite value. The degree to which the selective-withdrawal transition can approach a continuous one with a singularity is limited by the minimum extent to which the zone matching the two constraints can shrink to zero.
We are grateful to W. W. Zhang, F. Blanchette, M. Kleine-Berkenbusch, J. L. Wyman, and K. Walker for helpful discussions. This research was supported by NSF MRSEC DMR-0213745 and NSF DMR-0352777.
---
abstract: 'The concepts of risk-aversion, chance-constrained optimization, and robust optimization have developed significantly over the last decade. The statistical learning community has also witnessed rapid theoretical and applied growth by relying on these concepts. A modeling framework, called [*distributionally robust optimization*]{} (DRO), has recently received significant attention in both the operations research and statistical learning communities. This paper surveys the main concepts and contributions to DRO, and its relationships with robust optimization, risk-aversion, chance-constrained optimization, and function regularization.'
author:
- 'Hamed Rahimian[^1]'
- 'Sanjay Mehrotra[^2]'
title: 'Distributionally Robust Optimization: A Review '
---
Distributionally robust optimization; Robust optimization; Stochastic optimization; Risk-averse optimization; Chance-constrained optimization; Statistical learning
90C15, 90C22, 90C25, 90C30, 90C34, 90C90, 68T37, 68T05
Introduction {#sec: rev.intro}
============
Many real-world decision problems arising in engineering and management have uncertain parameters. This parameter uncertainty may be due to limited observability of data, noisy measurements, or implementation and prediction errors. [*Stochastic optimization*]{} (SO) and [*robust optimization*]{} frameworks have classically been used to model this uncertainty within a decision-making framework. Stochastic optimization assumes that the decision maker has [*complete*]{} knowledge about the underlying uncertainty through a [*known*]{} probability distribution and minimizes a functional of the cost, see, e.g., @shapiro2014SP [@birge2011SP]. The probability distribution of the random parameters is inferred from prior beliefs, expert opinions, errors in predictions based on the historical data (e.g., @kim2015scheduling), or a mixture of these. In robust optimization, on the other hand, it is assumed that the decision maker has no distributional knowledge about the underlying uncertainty, except for its support, and the model minimizes the worst-case cost over an uncertainty set, see, e.g., @elghaoui1997lsq [@elghaoui1998SDP; @ben1998robust; @bertsimas2004price; @ben2000robust; @ben2009RO]. The concept of robust optimization is related to chance-constrained optimization; in certain cases there is a direct relationship between a robust optimization model and a chance-constrained optimization model, see, e.g., @boyd2004CVX [pp. 157–158].
We often have partial knowledge on the statistical properties of the model parameters. Specifically, the probability distribution quantifying the model parameter uncertainty is known only ambiguously. A typical approach to handle this ambiguity, from a statistical point of view, is to estimate the probability distribution using statistical tools, such as the maximum likelihood estimator, the minimum Hellinger distance estimator [@vidyashankar2015], or the maximum entropy principle [@grunwald2004game]. The decision-making process can then be performed with respect to the estimated distribution. Because such an estimation may be imprecise, the impact of inaccuracy in estimation—and the subsequent ambiguity in the underlying distribution—is widely studied in the literature through (1) the perturbation analysis of optimization problems, see, e.g., @bonnans2013Perturbation, (2) stability analysis of a SO model with respect to a change in the probability distribution, see, e.g., @rachev1991 [@romisch2003], or (3) input uncertainty analysis in stochastic simulation models, see, e.g., @lam2016input and references therein. The typical goals of these approaches are to quantify the sensitivity of the optimal value/solution(s) to the probability distribution and to provide continuity and/or large-deviation-type results, see, e.g., @dupavcova1990stability [@schultz2000; @heitsch2006stability; @rachev2002quantitative; @pflug2012distance]. While these approaches quantify the input uncertainty, they do not provide a systematic modeling framework to hedge against the ambiguity in the underlying probability distribution.
[*Ambiguous stochastic optimization*]{} is a systematic modeling approach that bridges the gap between data and decision-making—statistics and optimization frameworks—to protect the decision-maker from the ambiguity in the underlying probability distribution. The ambiguous stochastic optimization approach assumes that the underlying probability distribution is unknown and lies in an [*ambiguity set*]{} of probability distributions. As in robust optimization, this approach hedges against the ambiguity in probability distribution by taking a worst-case approach. @scarf1958 is arguably the first to consider such an approach to obtain an order quantity for a newsvendor problem to maximize the worst-case expected profit, where the worst-case is taken with respect to all product demand probability distributions with a known mean and variance. Since the seminal work of Scarf, and particularly in the past few years, significant research has been done on ambiguous stochastic optimization problems. This paper provides a review of the theoretical, modeling, and computational developments in this area. Moreover, we review the applications of the ambiguous stochastic optimization model that have been developed in recent years. This paper also puts [DRO]{} in the context of risk-averse optimization, chance-constrained optimization, and robust optimization.
A General [DRO]{} Model {#sec: rev.generic_model}
-----------------------
We now formally introduce the model formulation that we discuss in this paper. Let ${\boldsymbol{x}} \in {\mathcal{X}} \subseteq {\mathbb{R}}^{n}$ be the decision vector. On a measurable space ${\left( \Xi, {\mathcal{F}} \right)}$, let us define a random vector ${\tilde{{\boldsymbol{\xi}}}}: \Xi \mapsto \Omega \subseteq {\mathbb{R}}^{d}$, a random cost function $h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}): {\mathcal{X}} \times \Xi \mapsto {\mathbb{R}}$, and a vector of random functions ${\boldsymbol{g}}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}): {\mathcal{X}} \times \Xi \mapsto {\mathbb{R}}^{m}$, i.e., ${\boldsymbol{g}}({\boldsymbol{x}}, \cdot):=[g_1({\boldsymbol{x}}, \cdot),\ldots,g_m({\boldsymbol{x}}, \cdot)]^{\top}$. Given this setup, a general stochastic optimization problem has the form $$\label{eq: SO}
\inf_{{\boldsymbol{x}} \in {\mathcal{X}} } \ \sset*{{{\mathcal{R}}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}}{{{\mathcal{R}}_{P} \left[ {\boldsymbol{g}}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le {\boldsymbol{0}}}, \tag{\text{SO}}$$ where $P$ denotes the (known) probability measure on ${\left( \Xi, {\mathcal{F}} \right)}$ and ${\mathcal{R}}_{P}: {\mathcal{Z}} \mapsto {\mathbb{R}}$ denotes a (componentwise) real-valued functional under $P$, where ${\mathcal{Z}}$ is a linear space of measurable functions on ${\left( \Xi, {\mathcal{F}} \right)}$. The functional ${\mathcal{R}}_{P}$ accounts for quantifying the uncertainty in the outcomes of the decision, for a given fixed probability measure $P$. This setup represents a broad range of problems in statistics, optimization, and control, such as regression and classification models [@friedman2016SL; @james2013SL], simulation-optimization [@fu2016SO; @pasupathy2013SO], stochastic optimal control [@bertsekas1995DP], Markov decision processes [@puterman2005MDP], and stochastic programming [@birge2011SP; @shapiro2014SP].
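To make the role of the functional ${\mathcal{R}}_{P}$ concrete, the following sketch (our own toy example, not taken from the references above) evaluates two common choices on a finite set of equally likely cost outcomes: the expectation, and the mean of the worst $20\%$ of outcomes, a discrete version of the conditional value-at-risk.

```python
import numpy as np

# Our own illustration: two choices for the functional R_P on a discrete
# cost. The numbers are made up; with equal probabilities, the mean of the
# worst alpha-fraction of outcomes is a discrete conditional value-at-risk.
costs = np.array([1.0, 2.0, 3.0, 10.0, 20.0])  # h(x, xi_i), equally likely
expectation = costs.mean()
alpha = 0.2
k = int(np.ceil(alpha * len(costs)))           # number of worst outcomes
cvar = np.sort(costs)[-k:].mean()              # mean of the worst k outcomes
print(expectation, cvar)                       # 7.2 20.0
assert cvar >= expectation
```

Swapping the functional changes the model's attitude toward risk while leaving the rest of the formulation untouched.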
As special cases of (SO), we have the two classical stochastic programming problems: $$\label{eq: SO_Obj}
\inf_{{\boldsymbol{x}} \in {\mathcal{X}} } \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}, $$ and $$\label{eq: SO_Cons}
\inf_{{\boldsymbol{x}} \in {\mathcal{X}} } \ \sset*{h({\boldsymbol{x}})}{{\mathbb{E}_{P} \left[ {\boldsymbol{g}}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le {\boldsymbol{0}}}, $$ where ${{\mathcal{R}}_{P} \left[ \cdot \right]}$ is taken as the expected-value functional ${\mathbb{E}_{P} \left[ \cdot \right]}$. Note that by taking $h({\boldsymbol{x}}, \cdot):=\mathbbm{1}_{A({\boldsymbol{x}})} (\cdot)$ in the first of these problems, where $\mathbbm{1}_{A({\boldsymbol{x}})} (\cdot)$ denotes an indicator function for an arbitrary set $A({\boldsymbol{x}}) \in {\mathcal{B}}({\mathbb{R}}^{d})$ (we define the indicator function and ${\mathcal{B}}({\mathbb{R}}^{d})$ precisely in Section \[sec: rev.notation\]), we obtain the class of problems with a probabilistic objective function of the form $P\{ {\tilde{{\boldsymbol{\xi}}}}\in A({\boldsymbol{x}})\}$, see, e.g., @prekopa2003probabilistic. The set $A({\boldsymbol{x}})$ is called a [*safe region*]{} and may be of the form ${\boldsymbol{a}}({\boldsymbol{x}})^{\top} {\tilde{{\boldsymbol{\xi}}}}\le {\boldsymbol{b}}({\boldsymbol{x}})$ or ${\boldsymbol{a}}({\tilde{{\boldsymbol{\xi}}}})^{\top} {\boldsymbol{x}} \le {\boldsymbol{b}}({\tilde{{\boldsymbol{\xi}}}})$[^3]. Similarly, by taking $h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}):=h({\boldsymbol{x}})$ and ${\boldsymbol{g}}({\boldsymbol{x}}, \cdot):=[\mathbbm{1}_{A_{1}({\boldsymbol{x}})} (\cdot), \ldots, \mathbbm{1}_{A_{m}({\boldsymbol{x}})} (\cdot)]^{\top}$, for suitable indicator functions $\mathbbm{1}_{A_{j}({\boldsymbol{x}})} (\cdot)$, $j=1, \ldots, m$, the second of these problems takes the form of probabilistic (i.e., chance) constraints $P\{ {\tilde{{\boldsymbol{\xi}}}}\in A_{j}({\boldsymbol{x}})\} \le 0, \; j=1, \ldots, m$, see, e.g., @charnes1958chance [@charnes1959chance; @prekopa1970probabilistic; @prekopa1974; @dentcheva2006probabilistic].
Note that the case where the event $\{{\tilde{{\boldsymbol{\xi}}}}\in A_{j}({\boldsymbol{x}})\}$ is formed via several constraints is called a [*joint chance constraint*]{}, as opposed to an [*individual chance constraint*]{}, where the event $\{{\tilde{{\boldsymbol{\xi}}}}\in A_{j}({\boldsymbol{x}})\}$ is formed via a single constraint.
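The identity behind this reformulation, $P\{{\tilde{{\boldsymbol{\xi}}}}\in A({\boldsymbol{x}})\} = {\mathbb{E}_{P}}[\mathbbm{1}_{A({\boldsymbol{x}})}({\tilde{{\boldsymbol{\xi}}}})]$, is easy to exercise numerically. The sketch below is our own illustration (the data and the event are made up): it estimates a violation probability by averaging an indicator over sampled scenarios.

```python
import numpy as np

# Our own illustration: with sampled scenarios, the probability
# P{ a^T xi > b } is just the sample average of the indicator 1_{A}(xi),
# i.e., an expectation of an indicator function.
rng = np.random.default_rng(0)
xi = rng.normal(loc=1.0, scale=0.5, size=(1000, 2))  # scenarios for xi
x = np.array([0.4, 0.4])                             # a fixed decision
b = 1.2
violations = xi @ x > b                              # indicator values
prob_violation = violations.mean()                   # E_P[1_{A(x)}]
print(f"empirical violation probability: {prob_violation:.3f}")
assert 0.0 <= prob_violation <= 1.0
```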
A robust optimization model is defined as $$\label{eq: RO}
\inf_{{\boldsymbol{x}} \in {\mathcal{X}} } \ \sup_{{\boldsymbol{\xi}} \in {\mathcal{U}}} \ \sset*{h({\boldsymbol{x}}, {\boldsymbol{\xi}})}{\sup_{{\boldsymbol{\xi}} \in {\mathcal{U}}} \ {\boldsymbol{g}}({\boldsymbol{x}},{\boldsymbol{\xi}}) \le {\boldsymbol{0}} }, \tag{\text{RO}}$$ where ${\mathcal{U}} \subseteq {\mathbb{R}}^{d}$ denotes an [*uncertainty set*]{} for the parameters ${\tilde{{\boldsymbol{\xi}}}}$. Similar to (SO), $$\label{eq: RO_Obj}
\inf_{{\boldsymbol{x}} \in {\mathcal{X}} } \ \sup_{{\boldsymbol{\xi}} \in {\mathcal{U}}} \ h({\boldsymbol{x}}, {\boldsymbol{\xi}})$$ and $$\label{eq: RO_Cons}
\inf_{{\boldsymbol{x}} \in {\mathcal{X}} } \ \sset*{h({\boldsymbol{x}})}{\sup_{{\boldsymbol{\xi}} \in {\mathcal{U}}} \ {\boldsymbol{g}}({\boldsymbol{x}},{\boldsymbol{\xi}}) \le {\boldsymbol{0}} }$$ are two special cases of (RO).
Problem (SO), as well as its two special cases, requires knowledge of the underlying measure $P$, whereas (RO), as well as its two special cases, ignores all distributional knowledge of ${\tilde{{\boldsymbol{\xi}}}}$, except for its support. An ambiguous version of (SO) is formulated as $$\label{eq: DRO}
\inf_{{\boldsymbol{x}} \in {\mathcal{X}} } \ \sup_{P \in {\mathcal{P}}} \ \sset*{{{\mathcal{R}}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}}{\sup_{P \in {\mathcal{P}} } \ {{\mathcal{R}}_{P } \left[ {\boldsymbol{g}}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le {\boldsymbol{0}}} \tag{\text{DRO}}.$$ Here, ${\mathcal{P}}$ denotes the [*ambiguity set of probability measures*]{}, i.e., a family of measures consistent with the prior knowledge about uncertainty. Note that if we consider the measurable space $(\Omega, {\mathcal{B}})$, where ${\mathcal{B}}$ denotes the Borel $\sigma$-field on $\Omega$, i.e., ${\mathcal{B}}=\Omega \cap {\mathcal{B}}({\mathbb{R}}^{d})$, then ${\mathcal{P}}$ can be viewed as an ambiguity set of probability distributions ${\mathbbmtt{P}}$ defined on $(\Omega, {\mathcal{B}})$ and induced by ${\tilde{{\boldsymbol{\xi}}}}$[^4].
As discussed before, (DRO) finds a decision that minimizes the worst-case value of the functional ${\mathcal{R}}$ of the cost $h$ among all probability measures in the ambiguity set, provided that the (componentwise) worst-case value of the functional ${\mathcal{R}}$ of the function ${\boldsymbol{g}}$ is non-positive. The ambiguous versions of the two special cases of (SO) are formulated as follows: $$\label{eq: DRO_Obj}
\inf_{{\boldsymbol{x}} \in {\mathcal{X}} } \ \sup_{P \in {\mathcal{P}}} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]},$$ and $$\label{eq: DRO_Cons}
\inf_{{\boldsymbol{x}} \in {\mathcal{X}} } \ \sset*{h({\boldsymbol{x}})}{\sup_{P \in {\mathcal{P}} } \ {\mathbb{E}_{P } \left[ {\boldsymbol{g}}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le {\boldsymbol{0}}}.$$ These models are discussed in the context of minimax stochastic optimization, in which optimal solutions are evaluated under the worst-case expectation with respect to a family of probability distributions of the uncertain parameters, see, e.g., @scarf1958; @zackova1966 (a.k.a. Dupa[č]{}ov[á]{}); @dupacova1987 [@breton1995; @shapiro2002minimax; @shapiro2004minmax]. @delage2010 refer to this approach as [*distributionally robust optimization*]{}, in short [DRO]{}, and since then this terminology has become dominant in the research community. We adopt this terminology, and for the rest of the paper, we refer to the ambiguous stochastic optimization model (DRO) as [DRO]{}.
As mentioned before, (DRO) is a modeling approach that assumes only partial distributional information, whereas (SO) assumes complete distributional information. In fact, if ${\mathcal{P}}$ contains only the true distribution of the random vector ${\tilde{{\boldsymbol{\xi}}}}$, then (DRO) reduces to (SO). On the other hand, if ${\mathcal{P}}$ contains all probability distributions on the support ${\mathcal{U}}$ of the random vector ${\tilde{{\boldsymbol{\xi}}}}$, then (DRO) reduces to (RO). Thus, a judicious choice of ${\mathcal{P}}$ places (DRO) between (SO) and (RO). Consequently, (DRO) may not be as conservative as (RO), which ignores all distributional information except for the support ${\mathcal{U}}$ of the uncertain parameters. (DRO) can be viewed as a unifying framework for (SO) and (RO) (see also @qian2018).
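This interpolation between (SO) and (RO) can be seen in a small numerical sketch. The example is our own, with a total-variation-style ambiguity set chosen purely for illustration: with finitely many outcomes, the worst-case expected cost over a ball of radius $r$ around a nominal distribution moves from the nominal expectation at $r=0$ toward the worst single outcome as $r$ grows.

```python
import numpy as np

# Our own toy sketch: worst-case expectation over a total-variation-style
# ball of radius r around a nominal distribution p0. Mass r is moved
# greedily from the cheapest outcomes onto the most expensive one.
def worst_case_expectation(h, p0, r):
    p = p0.astype(float).copy()
    worst = np.argmax(h)
    budget = min(r, 1.0 - p[worst])
    p[worst] += budget
    for i in np.argsort(h):        # drain mass from cheapest outcomes first
        if i == worst or budget <= 0:
            continue
        take = min(p[i], budget)
        p[i] -= take
        budget -= take
    return float(p @ h)

h = np.array([1.0, 4.0, 9.0])      # outcomes h_m(x) for a fixed decision x
p0 = np.array([0.5, 0.3, 0.2])
so = float(p0 @ h)                 # known distribution (the SO case)
ro = float(h.max())                # support only (the RO case)
dro = worst_case_expectation(h, p0, 0.2)
print(so, dro, ro)
assert so <= dro <= ro
```

As $r$ sweeps from $0$ to $1$, the worst-case value sweeps from the SO value to the RO value.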
Motivation and Contributions
----------------------------
In this paper, we provide an overview of the main contributions to [DRO]{} within both the operations research and machine learning communities. While there are separate review papers on RO, see, e.g., [@bertsimas2011REV; @gabrel2014; @gorissen2015], to the best of our knowledge, there are only a few tutorials and survey papers on [DRO]{} within the operations research community. A tutorial on [DRO]{}, its connection to risk-averse optimization, and the use of $\phi$-divergences to construct the ambiguity set is presented in @bayraksan2015. @shapiro2018tutorial provides a general tutorial on [DRO]{} and its connection to risk-averse optimization. @postek2016 surveys papers that address distributionally robust risk constraints, with a variety of risk functionals and ambiguity sets. Similar to [@bayraksan2015; @shapiro2018tutorial; @postek2016], in this paper, we show the connection between [DRO]{} and risk aversion. However, the current review differs from those in the literature in a number of ways. We outline our contributions as follows:
- We bring together the research done on [DRO]{} within the operations research and machine learning communities. This motivation materializes throughout the paper as we take a holistic view of [DRO]{}, from modeling to solution techniques to applications.
- We provide a detailed discussion on how [DRO]{} models are connected to different concepts such as game theory, risk-averse optimization, chance-constrained optimization, robust optimization, and function regularization in statistical learning.
- From the algorithmic perspective, we review techniques to solve a [DRO]{} model.
- From the modeling and theoretical perspectives, we categorize different approaches to model the distributional ambiguity and discuss results for each of these ambiguity sets. Moreover, we discuss the calibration of different parameters used in these ambiguity sets of distributions.
Organization of this Paper
--------------------------
This paper is organized as follows. In Section \[sec: rev.notation\], we introduce the notation and the basic definitions. Section \[sec: rev.game\_risk\_chance\_reg\] reviews the connection of [DRO]{} to different concepts: game theory in Section \[sec: rev.game\_theory\], robust optimization in Section \[sec: rev.dro\_ro\], risk-aversion and chance-constrained optimization with its relationship to robust optimization in Section \[sec: rev.rel\_risk\], and regularization in statistical learning in Section \[sec: rev.rel\_regularization\]. In Section \[sec: rev.solution\], we review two main solution techniques to solve a [DRO]{} model by introducing tools in semi-infinite programming and duality. In Section \[sec: rev.choice.ambiguity\], we discuss different models to construct the ambiguity set of distributions. This includes discrepancy-based models in Section \[sec: rev.distance\], moment-based models in Section \[sec: rev.moment\], shape-preserving-based models in Section \[sec: rev.shape\], and kernel-based models in Section \[sec: rev.kernel\]. In Section \[sec: rev.calibration\], we discuss the calibration of different parameters used in the ambiguity set of distributions. In Section \[sec: rev.cost\_inner\], we discuss different functionals that account for quantifying the uncertainty in the outcomes of a fixed decision. This includes regret functions in Section \[sec: rev.regret\], risk measures in Section \[sec: rev.risk\], and utility functions in Section \[sec: rev.utility\]. In Section \[sec: rev.toolboxes\], we introduce some modeling toolboxes for a [DRO]{} model.
Notation and Basic Definitions {#sec: rev.notation}
==============================
In this section, we introduce additional notation used throughout the paper. In order to keep the paper self-contained, we also introduce all definitions used in this paper in this section.
For a given space $\Xi$ and a $\sigma$-field ${\mathcal{F}}$ of that space, we define an underlying measurable space ${\left( \Xi, {\mathcal{F}} \right)}$. In particular, let us define $({\mathbb{R}}^{d}, {\mathcal{B}}({\mathbb{R}}^{d}))$, where ${\mathcal{B}}({\mathbb{R}}^{d})$ is the Borel $\sigma$-field on ${\mathbb{R}}^{d}$. Let $\mathbbm{1}_{A}: \Xi \mapsto \{0,1\}$ denote the indicator function of a set $A \in {\mathcal{F}}$, where $\mathbbm{1}_{A}(s)=1$ if $s \in A$, and 0 otherwise. Let ${\mathfrak{M}}_{+}(\cdot,\cdot)$ and ${\mathfrak{M}}(\cdot,\cdot)$ denote the set of all nonnegative measures and the set of all probability measures $Q: {\mathcal{F}} \mapsto [0,1]$ defined on ${\left( \Xi, {\mathcal{F}} \right)}$, respectively. A measure $\nu_{2}$ is preferred over a measure $\nu_{1}$, denoted $\nu_{2} \succeq \nu_{1}$, if $\nu_{2}(A) \ge \nu_{1}(A)$ for all measurable sets $A \in {\mathcal{F}}$. We denote by $Q\{A\}$ the probability of event $A \in {\mathcal{F}}$, with respect to $Q \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$. A random vector ${\tilde{{\boldsymbol{\xi}}}}: {\left( \Xi, {\mathcal{F}} \right)}\mapsto ({\mathbb{R}}^{d}, {\mathcal{B}}({\mathbb{R}}^{d}))$ is always denoted with a tilde sign, while a realization of the random vector ${\tilde{{\boldsymbol{\xi}}}}$ is denoted by the same symbol without a tilde, i.e., ${\boldsymbol{\xi}}$. For a probability measure $Q \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$, we define a probability space ${\left( \Xi, {\mathcal{F}}, Q \right)}$. We denote by ${\mathbbmtt{Q}}:=Q \circ {\tilde{{\boldsymbol{\xi}}}}^{-1}$ the probability distribution induced by a random vector ${\tilde{{\boldsymbol{\xi}}}}$ under $Q$, where ${\tilde{{\boldsymbol{\xi}}}}^{-1}$ denotes the inverse image of ${\tilde{{\boldsymbol{\xi}}}}$. That is, ${\mathbbmtt{Q}} : {\mathcal{B}}({\mathbb{R}}^{d}) \mapsto [0,1]$ is a probability distribution on $({\mathbb{R}}^{d}, {\mathcal{B}}({\mathbb{R}}^{d}))$.
Let ${\mathfrak{P}}(\cdot,\cdot)$ denote the set of all such probability distributions. For example, ${\mathfrak{P}}({\mathbb{R}}^{d}, {\mathcal{B}}({\mathbb{R}}^{d}))$ denotes the set of all probability distributions of ${\tilde{{\boldsymbol{\xi}}}}$. Note that in our notation, we make a distinction between a probability measure $Q \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$ and a probability distribution ${\mathbbmtt{Q}} \in {{\mathfrak{P}}({\mathbb{R}}^{d},{\mathcal{B}}({\mathbb{R}}^{d}))}$. Nevertheless, there is always an appropriate transformation between the two, so we may use the terms probability measure and probability distribution interchangeably. Given this, for a function $f: {\mathbb{R}}^{d} \mapsto {\mathbb{R}}$, we may write $\int_{\Xi} f({\tilde{{\boldsymbol{\xi}}}}(s)) Q(ds)$ equivalently as $\int_{{\mathbb{R}}^{d}} f(s) {\mathbbmtt{Q}}(ds)$ with a change of measure. As we shall see later, we may denote $f({\tilde{{\boldsymbol{\xi}}}}(s))$ by $f(s)$ in this transformation. For two random variables $Z, Z^{\prime}: \Xi \mapsto {\mathbb{R}}$, we use $Z \ge Z^{\prime}$ to denote $Z(s) \ge Z^{\prime}(s) $ almost everywhere (a.e.) on $\Xi$. A random variable $Z$ is $Q$-integrable if $ \|Z\|_{1}:=\int_{\Xi} |Z| d Q$ is finite. Two random variables $Z, Z^{\prime}$ are distributionally equivalent, denoted by $Z \overset{\text{d}}{\sim} Z^{\prime}$, if they induce the same distribution, i.e., $Q\{Z \le z\}=Q\{Z^{\prime} \le z\}$ for all $z \in {\mathbb{R}}$. We also denote by ${\mathcal{S}}(\Xi, {\mathcal{F}})$ the collection of all ${\mathcal{F}}$-measurable functions $Z: {\left( \Xi, {\mathcal{F}} \right)}\mapsto ({\overline{{\mathbb{R}}}}, {\mathcal{B}}(\overline{{\mathbb{R}}}))$, where ${\overline{{\mathbb{R}}}}$ denotes the extended real line ${\mathbb{R}} \cup \{-\infty, +\infty\}$.
For a finite space $\Xi$ with $M$ atoms $\Xi=\{s_{1}, \ldots, s_{M}\}$ and ${\mathcal{F}}=2^{\Xi}$, let $\{q(s_{1}), \ldots, q(s_{M})\}$ be the probabilities of the corresponding elementary events under probability measure $Q \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$. As a shorthand notation, we use ${\boldsymbol{q}}=[q_{1}, \ldots, q_{M}]^{\top} \in {\mathbb{R}}^{M}$, where $q_{i}:=q(s_{i})$, $i \in \{1, \ldots, M\}$. An ${\mathcal{F}}$-measurable function $Z: \Xi \mapsto {\mathbb{R}}$ has $M$ outcomes $\{Z(s_{1}), \ldots, Z(s_{M})\}$ with probabilities $\{q_{1}, \ldots, q_{M}\}$. For short, we identify $Z$ with a vector in ${\mathbb{R}}^{M}$, i.e., ${\boldsymbol{z}}=[z_{1}, \ldots, z_{M}]^{\top}$ with $z_{i}:=Z(s_{i})$, $i \in \{1, \ldots, M\}$.
Consider a linear space ${\mathcal{V}}$, paired with a dual linear space ${\mathcal{V}}^{*}$, in the sense that a (real-valued) bilinear form $\langle \cdot, \cdot \rangle: {\mathcal{V}} \times {\mathcal{V}}^{*} \mapsto {\mathbb{R}}$ is defined. That is, for any $v \in {\mathcal{V}}$ and $v^{*} \in {\mathcal{V}}^{*}$, we have that $\langle \cdot, v^{*} \rangle: {\mathcal{V}} \mapsto {\mathbb{R}}$ and $\langle v, \cdot \rangle: {\mathcal{V}}^{*} \mapsto {\mathbb{R}}$ are linear functionals on ${\mathcal{V}}$ and ${\mathcal{V}}^{*}$, respectively. Similarly, we define ${\mathcal{W}}$ and ${\mathcal{W}}^{*}$. For a linear mapping $A: {\mathcal{V}} \mapsto {\mathcal{W}}$, we define the adjoint mapping $A^{*}: {\mathcal{W}}^{*} \mapsto {\mathcal{V}}^{*}$ by means of the equation $\langle w^{*}, Av \rangle= \langle A^{*}w^{*}, v \rangle$, $\forall v \in {\mathcal{V}}$. For two linear mappings, defined by finite dimensional matrices $A$ and $B$, $A\bullet B=Tr(A^TB)$ denotes the Frobenius inner product between matrices. Moreover, ${\boldsymbol{A}} \odot {\boldsymbol{B}}$ denotes the Hadamard (i.e., componentwise) product between matrices.
For a function $f: {\mathcal{V}} \mapsto {\overline{{\mathbb{R}}}}$, the (convex) conjugate $f^{*}: {\mathcal{V}}^{*} \mapsto {\overline{{\mathbb{R}}}}$ is defined as $f^{*}(v^{*})=\sup_{v \in {\mathcal{V}}} \{\langle v^{*},v \rangle - f(v) \} $. Similarly, the biconjugate $f^{**}: {\mathcal{V}} \mapsto {\overline{{\mathbb{R}}}}$ is defined as $f^{**}(v)=\sup_{v^{*} \in {\mathcal{V}}^{*}} \{\langle v^{*},v \rangle - f^{*}(v^{*}) \} $. The characteristic function $\delta(\cdot|{\mathcal{A}})$ of a nonempty set ${\mathcal{A}} \subseteq {\mathcal{V}}$ is defined as $\delta(v|{\mathcal{A}})=0$ if $v \in {\mathcal{A}}$, and $+\infty$ otherwise. The support function of a nonempty set ${\mathcal{A}} \subseteq {\mathcal{V}}$ is defined as the convex conjugate of the characteristic function $\delta(\cdot|{\mathcal{A}})$: $\delta^{*}(v^{*}|{\mathcal{A}})=\sup_{v \in {\mathcal{V}}} \{\langle v^{*},v \rangle - \delta(v|{\mathcal{A}}) \}= \sup_{v \in {\mathcal{A}}} \langle v^{*},v \rangle $.
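As a quick numerical check of the last definition (our own example, not from the text), the support function of the box $[-1,1]^{d}$ evaluates to the $\ell_{1}$-norm of $v^{*}$, since the supremum of a linear form over a box is attained at a vertex.

```python
import numpy as np

# Our own example: support function of the box A = [-1, 1]^d, computed by
# enumerating the 2^d vertices (a linear form attains its sup at a vertex).
def support_function_box(v_star):
    d = len(v_star)
    best = -np.inf
    for k in range(2 ** d):
        v = np.array([1.0 if (k >> i) & 1 else -1.0 for i in range(d)])
        best = max(best, float(v_star @ v))
    return best

v_star = np.array([2.0, -3.0, 0.5])
print(support_function_box(v_star))   # 5.5, which equals ||v*||_1
assert abs(support_function_box(v_star) - np.abs(v_star).sum()) < 1e-12
```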
For $Q \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$, let ${\mathcal{L}}_{\infty}{\left( \Xi, {\mathcal{F}}, Q \right)}$ be the linear space of all essentially bounded ${\mathcal{F}}$-measurable functions $Z$. A function $Z$ is essentially bounded if $ \|Z\|_{\infty}:=\operatorname*{ess\,sup}_{s \in \Xi} |Z(s)|$ is finite, where $$\operatorname*{ess\,sup}_{s \in \Xi} |Z(s)|:=\inf\Bigg\{\sup_{s \in \Xi} |Z^{\prime}(s)| \; \Big| \; Z(s)=Z^{\prime}(s) \ \text{a.e.} \ s \in \Xi \Bigg\}.$$
We denote by $\|\cdot\|_{p}: {\mathbb{R}}^{d} \mapsto {\mathbb{R}}$ the $\ell_{p}$-norm on ${\mathbb{R}}^{d}$. That is, for a vector ${\boldsymbol{u}} \in {\mathbb{R}}^{d}$, $\|{\boldsymbol{u}}\|_{p}=\Big( \sum_{i=1}^{d} |u_{i}|^{p} \Big)^{\frac{1}{p}}$. We use $\Delta^{d}$ to denote the simplex in ${\mathbb{R}}^{d}$, i.e., $\Delta^{d}=\sset*{{\boldsymbol{u}} \in {\mathbb{R}}^{d}}{{\boldsymbol{e}}^{\top}{\boldsymbol{u}}=1, \; {\boldsymbol{u}} \ge {\boldsymbol{0}}}$, where ${\boldsymbol{e}}$ is a vector of ones in ${\mathbb{R}}^{d}$. Let $(\cdot)_{+}$ denote $\max\{0,\cdot\}$.
For a proper cone ${\mathcal{K}}$, the relation $x \preccurlyeq_{{\mathcal{K}}} y$ indicates that $y-x \in {\mathcal{K}}$. For simplicity, we drop ${\mathcal{K}}$ from the notation when ${\mathcal{K}}$ is the positive semidefinite cone. Let ${\mathcal{S}}_{+}^{n}$ denote the cone of symmetric positive semidefinite matrices in the $n \times n$ matrix space ${\mathbb{R}}^{n \times n}$. For a cone ${\mathcal{K}} \subset {\mathcal{V}}$, we define its dual cone as ${{\mathcal{K}}^{\prime}}:=\sset*{v^{*} \in {\mathcal{V}}^{*}}{\langle v^{*},v \rangle \ge 0, \; \forall v \in {\mathcal{K}} }$. The negative of the dual cone is called the polar cone and is denoted by ${{\mathcal{K}}^{\mathrm{o}}}$. The ${\mathcal{K}}$-epigraph of a function ${\boldsymbol{f}}: {\mathbb{R}}^{N} \mapsto {\mathbb{R}}^{M}$, for a proper cone ${\mathcal{K}}$, is conic-representable if the set $\sset*{({\boldsymbol{x}},{\boldsymbol{y}}) \in {\mathbb{R}}^{N}\times {\mathbb{R}}^{M}}{{\boldsymbol{f}}({\boldsymbol{x}}) \preccurlyeq_{{\mathcal{K}}} {\boldsymbol{y}}}$ can be expressed via conic inequalities, possibly involving a cone different from ${\mathcal{K}}$ and additional auxiliary variables.
For a set ${\mathcal{K}}$, we use ${\text{conv}({\mathcal{K}})}$ and ${\text{int}\left({\mathcal{K}}\right)}$ to denote the convex hull and the interior of ${\mathcal{K}}$, respectively.
Because we also review [DRO]{} papers in the context of statistical learning in this paper, we introduce some terminology from statistical learning. For every approach that uses a set of (training) data to prescribe a solution or to predict an outcome, it is important to assess the [*out-of-sample*]{} quality of the prescriber/predictor under a new set of (test) data, independent of the training set. Consider a given set of (training) data $\{{\boldsymbol{\xi}}^{i}\}_{i=1}^{N}$. Suppose that ${\mathbbmtt{P}}_{N}$ is the empirical probability distribution on $\{{\boldsymbol{\xi}}^{i}\}_{i=1}^{N}$. Data-driven approaches are interested in the performance of a data-driven solution (or, in-sample solution) $\hat{{\boldsymbol{x}}}_{N}$ that is constructed using $\{{\boldsymbol{\xi}}^{i}\}_{i=1}^{N}$. A primitive data-driven solution for an expected-value problem can be obtained by solving a [*sample average approximation*]{} (SAA) of that problem, where the underlying distribution is chosen to be ${\mathbbmtt{P}}_{N}$ [@shapiro2014SP]. Assessing the quality of this solution is well-studied in the context of SO, see, e.g., @bayraksan2006quality [@bayraksan2009quality; @homem2014montecarlo]. Here, we introduce the analogues of such performance measures, used to assess the quality of a solution in the context of a [DRO]{} model. For ease of exposition, let us focus on a [DRO]{} model with a worst-case expected cost objective. Consider a data-driven solution ${\boldsymbol{x}}_{N} \in {\mathcal{X}}$. Such a solution may be obtained by solving a data-driven version of the [DRO]{} model, where the ambiguity set ${\mathcal{P}}$ is constructed from the data, namely ${\mathcal{P}}_{N}$.
The out-of-sample performance of ${\boldsymbol{x}}_{N}$ is defined as ${\mathbb{E}_{{{\mathbbmtt{P}}^{\text{true}}}} \left[ h({\boldsymbol{x}}_{N},{\tilde{{\boldsymbol{\xi}}}}) \right]}$, which is the expected cost of ${\boldsymbol{x}}_{N}$ under a new (test) sample that is independent of $\{{\boldsymbol{\xi}}^{i}\}_{i=1}^{N}$ and drawn from the unknown true distribution ${{\mathbbmtt{P}}^{\text{true}}}:=P^{\text{true}} \circ {\tilde{{\boldsymbol{\xi}}}}^{-1}$. However, as ${{\mathbbmtt{P}}^{\text{true}}}$ is unknown, one needs to establish performance guarantees. One such guarantee, referred to as a [*finite-sample performance guarantee*]{} or [*generalization bound*]{}, is defined as $${\mathbbmtt{P}}_{N} \left \lbrace {\mathbb{E}_{{{\mathbbmtt{P}}^{\text{true}}}} \left[ h({\boldsymbol{x}}_{N},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le \hat{V}_{N} \right \rbrace \ge 1-\alpha,$$ which guarantees that an (in-sample) [*certificate*]{} $\hat{V}_{N}$ provides $(1-\alpha)$ confidence (with respect to the training sample) on the out-of-sample performance of ${\boldsymbol{x}}_{N}$. The certificate $\hat{V}_{N}$ may be chosen as the optimal value of the inner problem in [DRO]{}, where the worst-case is taken within ${\mathcal{P}}_{N}$, evaluated at ${\boldsymbol{x}}_{N}$, see, e.g., [@mohajerin2018]. The other guarantee, referred to as [*asymptotic consistency*]{}, ensures that as $N$ increases, the certificate $\hat{V}_{N}$ and the data-driven solution ${\boldsymbol{x}}_{N}$ converge, in an appropriate sense, to the optimal value and an optimal solution of the true (unambiguous) problem, see, e.g., [@mohajerin2018].
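A small Monte Carlo sketch (our own illustration, not an experiment from the cited papers) shows why a certificate needs to be inflated: a plain SAA certificate covers the out-of-sample expected cost only about half the time, while adding a robustness margin, playing the role of a worst-case certificate over a data-driven ambiguity set, raises the coverage substantially.

```python
import numpy as np

# Our own Monte Carlo illustration: cost h(x, xi) = (x - xi)^2 with
# xi ~ N(0, 1) and x fixed at 0, so the true expected cost is 1.
# Compare how often the in-sample certificate covers the true cost.
rng = np.random.default_rng(1)
x, N, trials, margin = 0.0, 50, 2000, 0.3
true_cost = 1.0
cover_saa = cover_dro = 0
for _ in range(trials):
    xi = rng.normal(size=N)
    v_saa = np.mean((x - xi) ** 2)          # SAA certificate
    cover_saa += true_cost <= v_saa
    cover_dro += true_cost <= v_saa + margin  # inflated certificate
print(cover_saa / trials, cover_dro / trials)
assert cover_saa < cover_dro
```

The margin here is an arbitrary constant chosen for illustration; the [DRO]{} certificate instead derives it from the size of the ambiguity set ${\mathcal{P}}_{N}$.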
Relationship with Game Theory, Risk-Aversion, Chance-Constrained Optimization, and Regularization {#sec: rev.game_risk_chance_reg}
=================================================================================================
Relationship with Game Theory {#sec: rev.game_theory}
-----------------------------
In this section, we present a game-theoretic interpretation of [DRO]{}. Indeed, a worst-case approach to SO may be viewed as having its roots in John von Neumann’s game theory. For ease of exposition, let us consider a model that minimizes the worst-case expected cost over an ambiguity set ${\mathcal{P}}$.
The decision maker, the first player in this setup, makes a decision ${\boldsymbol{x}} \in {\mathcal{X}}$ whose consequence (i.e., the cost $h$) depends on the outcome of the random vector ${\tilde{{\boldsymbol{\xi}}}}$. The decision maker assumes that ${\tilde{{\boldsymbol{\xi}}}}$ follows some distribution ${\mathbbmtt{P}} \in {\mathcal{P}}$, but does not know which distribution nature, the second player in this setup, will choose to represent the uncertainty in ${\tilde{{\boldsymbol{\xi}}}}$. Thus, on the one hand, the decision maker seeks a decision that minimizes the maximum expected cost with respect to ${\mathcal{P}}$; on the other hand, nature seeks a distribution that maximizes the minimum expected cost with respect to ${\mathcal{X}}$. Under suitable conditions, it can be shown that these two problems are duals of each other, and the solution to one problem provides the solution to the other. Such a solution $({\boldsymbol{x}}^{*}, {\mathbbmtt{P}}^{*})$ is called an [*equilibrium*]{} or [*saddle*]{} point. In other words, at this point the decision maker would not change the decision ${\boldsymbol{x}}^{*}$, knowing that nature chose ${\mathbbmtt{P}}^{*}$; similarly, nature would not change the distribution ${\mathbbmtt{P}}^{*}$, knowing that the decision maker chose ${\boldsymbol{x}}^{*}$. We state this result in the following theorem, which generalizes John von Neumann’s minimax theorem.
[(@sion1958 [Theorem 3.4])]{} Suppose that
1. ${\mathcal{X}}$ and ${\mathcal{P}}$ are convex and compact spaces,
2. ${\mathbbmtt{P}} \mapsto {{\mathcal{R}}_{{\mathbbmtt{P}}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$ is upper semicontinuous and quasiconcave on ${\mathcal{P}}$ for all ${\boldsymbol{x}} \in {\mathcal{X}}$, and
3. ${\boldsymbol{x}} \mapsto {{\mathcal{R}}_{{\mathbbmtt{P}}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$ is lower semicontinuous and quasiconvex on ${\mathcal{X}}$ for all ${\mathbbmtt{P}} \in {\mathcal{P}}$.
Then, $$\inf_{{\boldsymbol{x}} \in {\mathcal{X}} } \ \sup_{{\mathbbmtt{P}} \in {\mathcal{P}}} \ {{\mathcal{R}}_{{\mathbbmtt{P}}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}= \sup_{{\mathbbmtt{P}} \in {\mathcal{P}}} \ \inf_{{\boldsymbol{x}} \in {\mathcal{X}} } \ {{\mathcal{R}}_{{\mathbbmtt{P}}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}.$$
According to the above theorem, under appropriate conditions, exchanging the order of $\inf$ and $\sup$ does not change the optimal value of $\inf_{{\boldsymbol{x}} \in {\mathcal{X}} } \ \sup_{{\mathbbmtt{P}} \in {\mathcal{P}}} \ {\mathbb{E}_{{\mathbbmtt{P}}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$. We refer to @grunwald2004game for a variety of alternative regularity conditions under which this holds. The exchange of the order of $\inf$ and $\sup$ can be interpreted as follows [@grunwald2004game]: a probability distribution ${\mathbbmtt{P}}^{*}$ that maximizes the [*generalized entropy*]{} $\inf_{{\boldsymbol{x}} \in {\mathcal{X}}} \ {{\mathcal{R}}_{{\mathbbmtt{P}}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$ over ${\mathcal{P}}$ has an associated decision ${\boldsymbol{x}}^{*}$ that achieves $\inf_{{\boldsymbol{x}} \in {\mathcal{X}}} \ {{\mathcal{R}}_{{\mathbbmtt{P}}^{*}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$, and this decision also achieves $\inf_{{\boldsymbol{x}} \in {\mathcal{X}}} \ \sup_{{\mathbbmtt{P}} \in {\mathcal{P}}} \ {{\mathcal{R}}_{{\mathbbmtt{P}}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$.
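The exchange of $\inf$ and $\sup$ can be checked by brute force on a tiny bilinear game (our own toy, not from the references): for the matching-pennies payoff matrix over two mixed-strategy simplices, the grid-searched $\inf\sup$ and $\sup\inf$ coincide at the game value $0$.

```python
import numpy as np

# Our own toy check of the minimax exchange for a bilinear payoff p^T H q
# over two simplices, using the matching-pennies matrix (game value 0).
H = np.array([[1.0, -1.0], [-1.0, 1.0]])
grid = np.linspace(0.0, 1.0, 201)                  # mixed strategies (t, 1-t)
strategies = np.stack([grid, 1.0 - grid], axis=1)  # 201 x 2
payoff = strategies @ H @ strategies.T             # payoff[i, j] = p_i^T H q_j
inf_sup = payoff.max(axis=1).min()                 # min over p of max over q
sup_inf = payoff.min(axis=0).max()                 # max over q of min over p
print(inf_sup, sup_inf)                            # both 0.0
assert abs(inf_sup - sup_inf) < 1e-9
```

The payoff factors as $(2t-1)(2s-1)$, so both orders of optimization are pinned to $0$ at the uniform mixed strategies, which the grid contains.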
Relationship between DRO and RO {#sec: rev.dro_ro}
-------------------------------
In Section \[sec: rev.intro\], we mentioned that when the ambiguity set of probability distributions contains all probability distributions on the support of the uncertain parameters, [DRO]{} and RO are equivalent. In this section, we present a different perspective on the relationship between [DRO]{} and RO under the assumption that the sample space $\Xi$ is finite. For ease of exposition, we focus on . A similar argument follows for .
Suppose that $\Xi$ is a finite sample space with $M$ atoms, $\Xi=\{s_{1}, \ldots, s_{M}\}$. Then, for a fixed ${\boldsymbol{x}} \in {\mathcal{X}}$, $h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}})$ has $M$ possible outcomes $\{h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}(s_{1})), \ldots, h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}(s_{M})) \}$. For short, let us write these outcomes as a vector ${\boldsymbol{h}}({\boldsymbol{x}}) \in {\mathbb{R}}^{M}$, where $h_{m}({\boldsymbol{x}}):=h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}(s_{m}))$. In , ${\mathcal{P}}$ is a subset of all probability measures on ${\tilde{{\boldsymbol{\xi}}}}$. So, one can think of ${\mathcal{P}}$ as a subset of all discrete probability distributions ${\mathbbmtt{P}}$ on ${\mathbb{R}}^{d}$ induced by ${\tilde{{\boldsymbol{\xi}}}}$. That is, ${\mathbbmtt{P}}$ can be identified with a vector ${\boldsymbol{p}} \in {\mathbb{R}}^{M}$. Consequently, ${\mathcal{P}}$ may be interpreted as a subset of ${\mathbb{R}}^{M}$. With this interpretation, is written as $$\label{eq: rev.DRO_RO}
\inf_{{\boldsymbol{x}} \in {\mathcal{X}} } \ \sup_{{\boldsymbol{p}} \in {\mathcal{P}}} \ {\boldsymbol{p}}^{\top}{\boldsymbol{h}}({\boldsymbol{x}}).$$ By defining $f({\boldsymbol{x}}, {\boldsymbol{p}}):= {\boldsymbol{p}}^{\top}{\boldsymbol{h}}({\boldsymbol{x}})$, we can rewrite the above problem as $\inf_{{\boldsymbol{x}} \in {\mathcal{X}} } \ \sup_{{\boldsymbol{p}} \in {\mathcal{P}}} \ f({\boldsymbol{x}}, {\boldsymbol{p}}) $. This problem has the form of , where the probability vector ${\boldsymbol{p}}$ takes values in an “uncertainty set" ${\mathcal{P}}$. Techniques for specifying the uncertainty set in a RO model may now be used to specify ${\mathcal{P}}$ in , see, e.g., @ben2001Convex [@ben2000robust; @bertsimas2004norm; @chen2007robust]. We also refer to @bertsimas2018RO and Section \[sec: rev.rel\_chance\]. For a thorough treatment of different nonlinear functions $f({\boldsymbol{x}},{\boldsymbol{p}})$ and different uncertainty sets ${\mathcal{P}}$, we refer to @bental2015nonlinear. However, as we shall see below, DRO is rich enough to allow the use of techniques developed in the statistical literature to model the ambiguity. Moreover, its framework allows $\Xi$ to be continuous. We also refer to @xu2012optimization for a distributional interpretation of RO.
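To illustrate the reformulation above, note that for a fixed ${\boldsymbol{x}}$ the inner problem $\sup_{{\boldsymbol{p}} \in {\mathcal{P}}} \ {\boldsymbol{p}}^{\top}{\boldsymbol{h}}({\boldsymbol{x}})$ is a linear program whenever ${\mathcal{P}}$ is polyhedral. The following Python sketch solves it for a box ambiguity set around a nominal distribution ${\boldsymbol{q}}$, for which the LP admits an exact greedy solution; the set, the outcomes, and all numbers are hypothetical choices for illustration:

```python
# Minimal sketch of the inner problem sup_{p in P} p^T h(x) for fixed x,
# with the (illustrative) box ambiguity set
#   P = {p : |p_m - q_m| <= delta, p >= 0, sum_m p_m = 1}.
# This LP is solved exactly by pushing probability mass greedily
# toward the largest outcomes, subject to the per-atom bounds.

def worst_case_expectation(h, q, delta):
    M = len(h)
    lo = [max(0.0, qm - delta) for qm in q]   # per-atom lower bounds
    hi = [min(1.0, qm + delta) for qm in q]   # per-atom upper bounds
    p = lo[:]                                 # start at the lower bounds
    budget = 1.0 - sum(lo)                    # remaining mass to place
    # fill atoms in decreasing order of the outcome h_m
    for m in sorted(range(M), key=lambda m: -h[m]):
        add = min(hi[m] - lo[m], budget)
        p[m] += add
        budget -= add
    return sum(pm * hm for pm, hm in zip(p, h)), p

val, p = worst_case_expectation(h=[1.0, 2.0, 3.0], q=[1/3, 1/3, 1/3],
                                delta=0.2)
print(round(val, 6))  # 2.4
```

The worst-case distribution shifts as much mass as the box allows onto the costliest outcome, which is exactly the adversarial behavior the RO interpretation describes.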
Relationship with Risk-Aversion {#sec: rev.rel_risk}
-------------------------------
### Relationship between [DRO]{} and Coherent and Law Invariant Risk Measures {#sec: rev.dro_coherent}
Under mild conditions (e.g., real-valued cost functions, a convex and compact ambiguity set), the worst-case expectations given in or are equivalent to a [*coherent*]{} risk measure [@artzner1999; @rockafellar2007; @ruszczynski2006optimization]. Furthermore, under mild conditions, the worst-case expectations given in or are equivalent to a [*law invariant*]{} risk measure [@shapiro2017DRSP]. These results imply that [DRO]{} models have an equivalent risk-averse optimization problem. In order to explain the relationship between and and risk-averse optimization more precisely, we present some definitions and fundamental results.
[(@artzner1999 [Definition 2.4], @shapiro2014SP [Definition 6.4])]{} \[def: rev.coherent\] A (real-valued) risk measure $\rho: {\mathcal{Z}} \mapsto {\mathbb{R}}$ is called coherent if it satisfies the following axioms:
- [*Translation Equivariance:*]{} If $a \in {\mathbb{R}}$ and $Z \in {\mathcal{Z}}$, then $\rho(Z+a)=\rho(Z)+a$.
- [*Positive Homogeneity:*]{} If $t \ge 0$ and $Z \in {\mathcal{Z}}$, then $\rho(tZ)=t\rho(Z)$.
- [*Monotonicity:*]{} If $Z, Z^{\prime} \in {\mathcal{Z}}$ and $Z \ge Z^{\prime}$, then $\rho(Z) \ge \rho(Z^{\prime})$.
- [*Convexity:*]{} $\rho\left( tZ+(1-t)Z^{\prime} \right) \le t\rho(Z) + (1-t) \rho(Z^{\prime})$, for all $Z, Z^{\prime} \in {\mathcal{Z}}$ and all $t \in [0,1]$.
A risk measure $\rho$ is called convex if it satisfies all of the above axioms except possibly the positive homogeneity axiom.
In Definition \[def: rev.coherent\], the convexity axiom can be replaced with the [*subadditivity*]{} axiom: $\rho\left( Z+Z^{\prime} \right) \le \rho(Z) + \rho(Z^{\prime})$, for all $Z, Z^{\prime} \in {\mathcal{Z}}$. This is true because the convexity and positive homogeneity axioms imply the subadditivity axiom, and conversely, the positive homogeneity and subadditivity axioms imply the convexity axiom. @artzner1999 [Definition 2.4] defines a coherent risk measure with the subadditivity axiom, whereas @shapiro2014SP [Definition 6.4] defines a coherent risk measure with the convexity axiom.
[(@shapiro2017DRSP [Definition 2.1])]{} \[def: rev.law\_invariant\_measure\] A (real-valued) risk measure $\rho: {\mathcal{Z}} \mapsto {\mathbb{R}}$ is called law invariant if for all $Z, Z^{\prime} \in {\mathcal{Z}}$, $Z \overset{\text{d}}{\sim} Z^{\prime}$ implies that $\rho(Z)=\rho(Z^{\prime})$.
[(@shapiro2017DRSP [Definition 2.2])]{} \[def: rev.law\_invariant\_set\] A set ${\mathcal{M}}$ is called law invariant if $\zeta \in {\mathcal{M}}$ and $\zeta \overset{\text{d}}{\sim} \zeta^{\prime}$ implies that $\zeta^{\prime} \in {\mathcal{M}}$.
To relate the worst-case expectation with respect to a set of probability distributions induced by ${\tilde{{\boldsymbol{\xi}}}}$ to coherent risk measures, we adopt the following result from @shapiro2014SP [Theorem 6.7], @shapiro2012minimax [Theorem 3.1].
\[thm: rev.duality\_rho\] Let ${\mathcal{Z}}$ be the linear space of all essentially bounded ${\mathcal{F}}$-measurable functions $Z: \Xi \mapsto {\mathbb{R}}$ that are $P$-integrable for all $P \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$. Let ${\mathcal{Z}}^{*}$ be the space of all signed measures $P$ on ${\left( \Xi, {\mathcal{F}} \right)}$ such that $\int_{\Xi} |d P| < \infty$. Suppose that ${\mathcal{Z}}$ is paired with ${\mathcal{Z}}^{*}$ such that the bilinear form ${\mathbb{E}_{P} \left[ Z \right]}$ is well-defined. Moreover, suppose that ${\mathcal{Z}}$ and ${\mathcal{Z}}^{*}$ are equipped with the sup norm $\|\cdot\|_{\infty}$ and variation norm $\|\cdot\|_{1}$, respectively[^5]. Recall ${\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$ denotes the space of all probability measures on ${\left( \Xi, {\mathcal{F}} \right)}$: ${\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}=\sset*{ P \in {\mathcal{Z}}^{*}}{\int_{\Xi} d P=1, \; P \succcurlyeq 0} $. Let $\rho: {\mathcal{Z}} \mapsto {\overline{{\mathbb{R}}}}$. Then, $\rho$ is a real-valued coherent risk measure if and only if there exists a convex compact set ${\mathcal{M}} \subseteq {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$ (in the weakly\* topology of ${\mathcal{Z}}^{*}$) such that $$\label{eq: duality_rho}
\rho(Z)= \sup_{P \in {\mathcal{M}}} \ {\mathbb{E}_{P} \left[ Z \right]}, \; \forall Z \in {\mathcal{Z}}.$$ Moreover, given a real-valued coherent risk measure, the set ${\mathcal{M}}$ in can be written in the form $${\mathcal{M}}= \sset*{P \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}}{{\mathbb{E}_{P} \left[ Z \right]} \le \rho(Z), \; \forall Z \in {\mathcal{Z}}}.$$
First note that ${\mathcal{Z}}$ is a Banach space, paired with the dual space ${\mathcal{Z}}^{*}$, which is also a Banach space. Then, by a similar proof to @shapiro2014SP [Theorem 6.7], we can show that if $\rho$ is a proper and lower semicontinuous coherent risk measure, then holds when ${\mathcal{M}}$ is equal to the subdifferential of $\rho$ at $0 \in {\mathcal{Z}}$, i.e., ${\mathcal{M}}=\partial \rho(0)$, where $$\partial \rho(Z)=\operatorname*{arg\,max}_{P \in {\mathcal{M}}} {\mathbb{E}_{P} \left[ Z \right]}.$$ Now, we show that $\rho$ is a proper and lower semicontinuous coherent risk measure. Consider the cone ${\mathcal{C}} \subset {\mathcal{Z}}$ of nonnegative functions $Z$. This cone is closed, convex, and pointed, and it defines a partial order relation on ${\mathcal{Z}}$ that $Z \ge Z^{\prime}$ if and only if $Z(s) \ge Z^{\prime}(s) $ a.e. on $\Xi$. We let the least upper bound of $Z, Z^{\prime}$ be $Z \vee Z^{\prime}$, where $(Z \vee Z^{\prime})(s)=\max\{Z(s), Z^{\prime}(s)\}$. It follows that ${\mathcal{Z}}$ with cone ${\mathcal{C}}$ forms a Banach lattice[^6]. Thus, by @shapiro2014SP [Theorem 7.91], we conclude that $\rho$ is continuous and subdifferentiable on the interior of its domain. This, in turn, implies that the lower semicontinuity of $\rho$ is automatically satisfied. Moreover, by @shapiro2014SP [Theorem 7.85], the subdifferentials of $\rho$ at any point form a nonempty, convex, and weakly\* compact subset of ${\mathcal{Z}}^{*}$. In particular, ${\mathcal{M}}=\partial \rho(0)$ is a convex and weakly\* compact set ${\mathcal{M}} \subseteq {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$.
Conversely, suppose that holds with the set ${\mathcal{M}}$ being a convex and weakly\* compact subset of ${\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$. Then, $\rho$ is a real-valued coherent risk measure.
To prove the last part, notice that for any $Z \in {\mathcal{Z}}$, we have $\rho(Z) \ge \rho(0)+ {\mathbb{E}_{P} \left[ Z-0 \right]}$, for all $P \in \partial \rho(0)$. Now, by the facts that ${\mathcal{M}}=\partial \rho(0)$ and $\rho(0)=0$, we conclude ${\mathcal{M}}= \sset*{P \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}}{{\mathbb{E}_{P} \left[ Z \right]} \le \rho(Z), \; \forall Z \in {\mathcal{Z}}}$.
Before we proceed, let us characterize the set ${\mathcal{M}}$, as described in Theorem \[thm: rev.duality\_rho\], for three well-studied coherent risk measures, namely [*conditional value-at-risk*]{} (CVaR), see, e.g., @rockafellar2000 [@rockafellar2002; @rockafellar2007], convex combination of expectation and CVaR, see, e.g., @zhang2016, and [*mean-upper-absolute semideviation*]{}, see, e.g., @shapiro2014SP. CVaR at level $\beta$, $0<\beta<1$, denoted by ${\mathrm{CVaR}^{Q}_{\beta} \left[ \cdot \right]}$, is defined as ${\mathrm{CVaR}^{Q}_{\beta} \left[ Z \right]}:=\frac{1}{1-\beta}\int_\beta^1 {\mathrm{VaR}^{Q}_{\alpha} \left[ Z \right]}\,d\alpha$, where ${\mathrm{VaR}^{Q}_{\alpha} \left[ Z \right]}:=\inf\sset*{u}{Q\{Z \leq u\}\geq \alpha}$ is the Value-at-Risk (VaR) at level $\alpha$. The mean-upper-absolute semideviation is defined as ${\mathbb{E}_{Q} \left[ Z \right]} + c{\mathbb{E}_{Q} \left[ (Z-{\mathbb{E}_{Q} \left[ Z \right]})_{+} \right]}$, where $c \in [0,1]$.
\[ex: rev.CVaR\_dual\] Consider a probability space ${\left( \Xi, {\mathcal{F}}, Q \right)}$ and ${\mathcal{Z}}={\mathcal{L}}_{\infty}{\left( \Xi, {\mathcal{F}}, Q \right)}$. Suppose that $\Xi$ is a finite space with $M$ atoms. For a coherent risk measure $\rho$, we have $\rho(Z)= \sup_{{\boldsymbol{p}} \in {\mathcal{M}}} \ \left \lbrace \sum_{m=1}^{M} Z_{m} p_{m} \right \rbrace, \; \forall Z \in {\mathcal{Z}}$, where ${\mathcal{M}}$ is a closed convex subset of $${\mathcal{D}}:=\sset*{{\boldsymbol{p}} \in {\mathbb{R}}^{M}}{ {\boldsymbol{p}}^{\top}{\boldsymbol{e}}=1, \; {\boldsymbol{p}} \ge {\boldsymbol{0}}},$$ and ${\boldsymbol{e}}$ is a vector of ones.
- When $\rho(Z)={\mathrm{CVaR}^{Q}_{\beta} \left[ Z \right]}$, we have $${\mathcal{M}}= \sset*{{\boldsymbol{p}} \in {\mathcal{D}}}{ p_{m} \in [0, \frac{q_{m}}{1-\beta}], \; m=1, \ldots, M}.$$
- When $\rho(Z)={\mathbb{E}_{Q} \left[ Z \right]}+ \inf_{\tau \in {\mathbb{R}}} \ {\mathbb{E}_{Q} \left[ (1-\gamma_{1})(\tau-Z)_{+} + (\gamma_{2}-1)(Z-\tau)_{+} \right]}$, with $\gamma_{1} \in [0,1)$ and $\gamma_{2}>1$, we have $${\mathcal{M}}= \sset*{{\boldsymbol{p}} \in {\mathcal{D}}}{ p_{m} \in [q_{m}\gamma_{1}, q_{m}\gamma_{2} ], \; m=1, \ldots, M}.$$ The above risk measure is also equivalent to $\gamma_{1} {\mathbb{E}_{Q} \left[ Z \right]}+ (1-\gamma_{1}){\mathrm{CVaR}^{Q}_{\beta} \left[ Z \right]}$, where $\beta:=\frac{1-\gamma_{1}}{\gamma_{2}-\gamma_{1}}$.
- When $\rho(Z)={\mathbb{E}_{Q} \left[ Z \right]} + c{\mathbb{E}_{Q} \left[ (Z-{\mathbb{E}_{Q} \left[ Z \right]})_{+} \right]}$, we have $${\mathcal{M}}= \sset*{{\boldsymbol{p}}^{\prime} \in {\mathcal{D}}}{ {\boldsymbol{p}}^{\prime}= {\boldsymbol{q}} + {\boldsymbol{\zeta}} \odot {\boldsymbol{q}} - ({\boldsymbol{\zeta}}^{\top}{\boldsymbol{q}}) \odot {\boldsymbol{q}}, \; \|{\boldsymbol{\zeta}}\|_{\infty} \le c},$$ where ${\boldsymbol{a}} \odot {\boldsymbol{b}}$ denotes the componentwise product of two vectors ${\boldsymbol{a}}$ and ${\boldsymbol{b}}$.
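As a numerical sanity check of the first dual set above, the following Python sketch verifies, for an illustrative four-point uniform distribution, that the worst-case expectation over ${\mathcal{M}}$ (with per-atom bounds $p_{m} \in [0, q_{m}/(1-\beta)]$, again solved greedily) reproduces CVaR computed from its tail-average definition:

```python
# Check on a four-point uniform distribution (illustrative numbers):
# CVaR_beta[Z] computed as the average of the worst (1 - beta) tail
# equals the dual form sup_{p in M} sum_m Z_m p_m with
# p_m in [0, q_m / (1 - beta)].

def cvar_dual(Z, q, beta):
    cap = [qm / (1.0 - beta) for qm in q]  # per-atom upper bounds
    p = [0.0] * len(Z)
    budget = 1.0
    # place mass on the worst outcomes first, up to each atom's cap
    for m in sorted(range(len(Z)), key=lambda m: -Z[m]):
        p[m] = min(cap[m], budget)
        budget -= p[m]
    return sum(zm * pm for zm, pm in zip(Z, p))

Z, q, beta = [1.0, 2.0, 3.0, 4.0], [0.25] * 4, 0.5
dual_value = cvar_dual(Z, q, beta)  # worst-case expectation over M
direct = (3.0 + 4.0) / 2.0          # mean of the worst 50% of outcomes
print(dual_value, direct)  # 3.5 3.5
```

The agreement illustrates the general principle of Theorem \[thm: rev.duality\_rho\]: a coherent risk measure is a worst-case expectation over its dual set.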
Theorem \[thm: rev.duality\_rho\] relates problems and to risk-averse optimization problems, involving the coherent risk-measure $\rho$. Consider a fixed ${\boldsymbol{x}} \in {\mathcal{X}}$. With an appropriate transformation of measure ${\mathbbmtt{P}}=P \circ {\tilde{{\boldsymbol{\xi}}}}^{-1}$, we can write the inner problem $\sup_{{\mathbbmtt{P}} \in {\mathcal{P}}} \ {\mathbb{E}_{{\mathbbmtt{P}}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$ in as $\sup_{P \in {\mathcal{P}}} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},s) \right]}$, where in the former, ${\mathcal{P}}$ is a set of probability distributions induced by ${\tilde{{\boldsymbol{\xi}}}}$, while in the latter, ${\mathcal{P}}$ is a set of probability measures on ${\left( \Xi, {\mathcal{F}} \right)}$. Then, by applying Theorem \[thm: rev.duality\_rho\] and setting $Z=h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$, $\sup_{P \in {\mathcal{P}}} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},s) \right]}$ evaluates a (real-valued) coherent risk measure ${\rho \left[ h({\boldsymbol{x}},s) \right]}$, provided that ${\mathcal{P}} \subset {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$ is a convex compact set. It is easy to verify that such a function $\rho$ is coherent:
- [*Translation Equivariance:*]{} Consider ${\boldsymbol{x}} \in {\mathcal{X}}$ and $a \in {\mathbb{R}}$. Then, $ {\rho \left[ h({\boldsymbol{x}},s)+ a \right]}= \sup_{P \in {\mathcal{P}}} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},s)+a \right]}= \sup_{P \in {\mathcal{P}}} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},s) \right]} +a = {\rho \left[ h({\boldsymbol{x}},s) \right]} + a$.
- [*Positive Homogeneity:*]{} Consider ${\boldsymbol{x}} \in {\mathcal{X}}$ and $t \ge 0$. Then, ${\rho \left[ t h({\boldsymbol{x}},s) \right]}= \sup_{P \in {\mathcal{P}}} \ {\mathbb{E}_{P} \left[ t h({\boldsymbol{x}},s) \right]}= t \sup_{P \in {\mathcal{P}}} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},s) \right]}= t {\rho \left[ h({\boldsymbol{x}},s) \right]}$.
- [*Monotonicity:*]{} Consider ${\boldsymbol{x}}, {\boldsymbol{x}}^{\prime} \in {\mathcal{X}}$ such that $h({\boldsymbol{x}},s) \ge h({\boldsymbol{x}}^{\prime},s)$. Thus, ${\mathbb{E}_{P} \left[ h({\boldsymbol{x}},s) \right]} \ge {\mathbb{E}_{P} \left[ h({\boldsymbol{x}}^{\prime},s) \right]}$ for any $P \in {\mathcal{P}}$, which implies ${\rho \left[ h({\boldsymbol{x}},s) \right]}=\sup_{P \in {\mathcal{P}}} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},s) \right]} \ge \sup_{P \in {\mathcal{P}}} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}}^{\prime},s) \right]}={\rho \left[ h({\boldsymbol{x}}^{\prime},s) \right]}$.
- [*Convexity:*]{} Consider ${\boldsymbol{x}}, {\boldsymbol{x}}^{\prime} \in {\mathcal{X}}$ and $t \in [0,1]$. Then, we have $$\begin{aligned}
{\rho \left[ t h({\boldsymbol{x}},s)+ (1-t) h({\boldsymbol{x}}^{\prime},s) \right]} & = \sup_{P \in {\mathcal{P}}} \ {\mathbb{E}_{P} \left[ t h({\boldsymbol{x}},s) + (1-t) h({\boldsymbol{x}}^{\prime},s) \right]}\\
& \le \sup_{P \in {\mathcal{P}}} \ {\mathbb{E}_{P} \left[ t h({\boldsymbol{x}},s) \right]} + \sup_{P \in {\mathcal{P}}} \ {\mathbb{E}_{P} \left[ (1-t) h({\boldsymbol{x}}^{\prime},s) \right]} \\
& = t \sup_{P \in {\mathcal{P}}} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},s) \right]} + (1-t) \sup_{P \in {\mathcal{P}}} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}}^{\prime},s) \right]}\\
& = t {\rho \left[ h({\boldsymbol{x}},s) \right]} + (1-t) {\rho \left[ h({\boldsymbol{x}}^{\prime},s) \right]},
\end{aligned}$$ where we used the positive homogeneity property.
Consequently, is equivalent to minimizing a coherent risk measure. Similarly, is equivalent to a risk-averse optimization problem, subject to coherent risk constraints. Thus, a convex and compact ambiguity set of distributions gives rise to a coherent risk measure. Conversely, Theorem \[thm: rev.duality\_rho\] implies that given a risk preference that can be expressed in the form of a coherent risk measure as a primitive, we can construct a corresponding convex and compact ambiguity set ${\mathcal{P}}$ of probability distributions in a [DRO]{} framework. Thus, the ambiguity set becomes a consequence of the particular risk measure the decision maker selects.
It is worth noting that if $h$ is a convex random function in , i.e., $h(\cdot, {\boldsymbol{\xi}})$ is convex in ${\boldsymbol{x}}$ for almost every ${\boldsymbol{\xi}}$, then ${\rho \left[ h(\cdot,{\tilde{{\boldsymbol{\xi}}}}) \right]}$ is convex in ${\boldsymbol{x}}$. Convexity of ${\boldsymbol{g}}$ in also implies the convexity of the region induced by the risk constraints ${\rho \left[ {\boldsymbol{g}}(\cdot,{\tilde{{\boldsymbol{\xi}}}}) \right]} \le {\boldsymbol{0}}$. In our setup, neither $h(\cdot, {\boldsymbol{\xi}})$ nor ${\boldsymbol{g}}(\cdot, {\boldsymbol{\xi}})$ need be convex; for example, they may be indicator functions.
We now state the connection between the worst-case expectation with respect to a set of probability distributions induced by ${\tilde{{\boldsymbol{\xi}}}}$ and law invariant risk measures.
\[thm: duality\_rho\_law\][(@shapiro2017DRSP [Theorem 2.3])]{} Consider ${\mathcal{Z}}$ and ${\mathcal{Z}}^{*}$ as defined in Theorem \[thm: rev.duality\_rho\]. Also, consider $\rho: {\mathcal{Z}} \mapsto {\mathbb{R}}$, defined as $\rho(Z)= \sup_{P \in {\mathcal{P}}} {\mathbb{E}_{P} \left[ Z \right]}, \; \forall Z \in {\mathcal{Z}}.$ If the set ${\mathcal{P}}$ is law invariant, then the corresponding risk measure $\rho$ is law invariant. Conversely, if the risk measure $\rho$ is law invariant, and the set ${\mathcal{P}}$ is convex and weakly\* closed, then the set ${\mathcal{P}}$ is law invariant.
For the connection between a general multistage [DRO]{} model, risk-averse multistage programming with conditional coherent risk mappings, and the concept of time consistency of the problem and policies, we refer to @shapiro2012minimax [@shapiro2016rectangular; @shapiro2018tutorial].
### Relationship with Chance-Constrained Optimization {#sec: rev.rel_chance}
In the previous section, we discussed how [DRO]{} is connected to risk-averse optimization. In this section, we present another perspective that connects [DRO]{} to risk-averse optimization through a proper choice of the uncertainty set of the random variables ${\tilde{{\boldsymbol{\xi}}}}$, as in RO.
Many approaches in RO construct the uncertainty set for the parameters ${\tilde{{\boldsymbol{\xi}}}}$ such that the uncertainty set implies a probabilistic guarantee with respect to the true unknown distribution. To explain how this construction is related to risk and [DRO]{}, consider the uncertain constraints $g({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \le 0$ for a fixed ${\boldsymbol{x}}$. Suppose that ${\tilde{{\boldsymbol{\xi}}}}$ belongs to a bounded uncertainty set ${\mathcal{U}} \subseteq {\mathbb{R}}^{d}$, i.e., ${\mathcal{U}}$ is the support of ${\tilde{{\boldsymbol{\xi}}}}$. The RO counterpart of this constraint then can be formulated as $$\label{eq: RO_Cons1}
g({\boldsymbol{x}}, {\boldsymbol{\xi}}) \le 0, \; \forall {\boldsymbol{\xi}} \in {\mathcal{U}}.$$ Two criticisms of are that: (1) it treats all uncertain parameters ${\boldsymbol{\xi}} \in {\mathcal{U}}$ with equal weights and (2) all the parametrized constraints are hard, i.e., no violation is accepted. An alternative framework to reduce the conservatism caused by this approach is to use a chance constraint framework that allows a small probability of violation (with respect to the probability distribution of ${\tilde{{\boldsymbol{\xi}}}}$) instead of enforcing the constraint to be satisfied almost everywhere. Under the assumption that ${\tilde{{\boldsymbol{\xi}}}}$ is defined on a probability space $(\Xi, {\mathcal{F}}, P^{\text{true}})$, the chance constraint framework can be represented as follows: $$\label{eq: chance}
P^{\text{true}}\{g({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \le 0 \} \ge 1-\epsilon,$$ for some $0<\epsilon <1 $. The parameter $\epsilon$ controls the risk of violating the uncertain constraint $g({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \le 0$. In fact, as $\epsilon$ goes to zero, the set $${\mathcal{X}}_{\epsilon}:=\sset*{{\boldsymbol{x}} \in {\mathcal{X}} }{P^{\text{true}}\{g({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \le 0 \} \ge 1-\epsilon}$$ decreases to $${\mathcal{X}}({\mathcal{U}}):=\sset*{ {\boldsymbol{x}} \in {\mathcal{X}} }{ g({\boldsymbol{x}}, {\boldsymbol{\xi}}) \le 0, \; \forall {\boldsymbol{\xi}} \in {\mathcal{U}} }.$$ Motivated by the chance constraint framework , many approaches in RO construct an uncertainty set ${\mathcal{U}}_{\epsilon}$ such that a feasible solution to a problem of the form will also be feasible with probability at least $1-\epsilon$ with respect to $P^{\text{true}}$. More precisely, for any fixed ${\boldsymbol{x}}$, these constructions guarantee that the following implication holds: $$\label{eq: rev.prob.guarantee}
\text{If} \ g({\boldsymbol{x}}, {\boldsymbol{\xi}}) \le 0, \ \forall {\boldsymbol{\xi}} \in {\mathcal{U}}_{\epsilon}, \ \text{then,} \ P^{\text{true}}\{g({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \le 0 \} \ge 1-\epsilon. \tag{C1}$$
However, as we argued before, the probability measure $P^{\text{true}}$ cannot be known with certainty. As far as it is relevant to the scope and interest of this paper, there are two streams of research in order to handle the ambiguity about the true probability distribution and obtain a safe (or, conservative) approximation[^7] to [^8]: (1) scenario approximation scheme of based on Monte Carlo sampling, see, e.g., @campi2004 [@calafiore2005; @nemirovski2006scenario; @campi2008; @luedtke2008chance; @bental2009LMI], and (2) [DRO]{} approach to , see, e.g., @nemirovski2006convex [@erdougan2006]. Research on scenario approximation of focuses on providing probabilistic guarantee (with respect to the sample probability measure) that a solution to the sampled problem of is feasible to with a high probability. The [DRO]{} approach, on the other hand, forms a version of as follows: $$\label{eq: chance_DRO}
P\{g({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \le 0 \} \ge 1-\epsilon, \; \forall P \in {\mathcal{P}} \equiv \inf_{P \in {\mathcal{P}}} \ P\{g({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \le 0 \} \ge 1-\epsilon.$$ Let $\bar{{\mathcal{X}}}_{\epsilon}$ denote the feasibility set induced by : $$\bar{{\mathcal{X}}}_{\epsilon}:=\sset*{ {\boldsymbol{x}} \in {\mathcal{X}} }{\inf_{P \in {\mathcal{P}}} \ P\{g({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \le 0 \} \ge 1-\epsilon}.$$ If $P^{\text{true}} \in {\mathcal{P}}$, then, ${\boldsymbol{x}} \in \bar{{\mathcal{X}}}_{\epsilon} $ implies ${\boldsymbol{x}} \in {\mathcal{X}}_{\epsilon}$. That is, $\bar{{\mathcal{X}}}_{\epsilon} $ provides a conservative approximation to ${\mathcal{X}}_{\epsilon}$[^9]. By leveraging a goodness-of-fit test, @bertsimas2018RO construct a $(1-\alpha)$-confidence region ${\mathcal{P}}(\alpha)$ for $P^{\text{true}}$. Such a construction leads to an uncertainty set ${\mathcal{U}}_{\epsilon}(\alpha)$ that guarantees the implication [@bertsimas2018RO].
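When both the scenario space and the ambiguity set are finite, the distributionally robust chance constraint can be checked by direct enumeration: compute the satisfaction probability under each candidate distribution and compare the minimum against $1-\epsilon$. The following Python sketch does this for a hypothetical decision; the scenarios, the distributions, and $\epsilon$ are all illustrative choices:

```python
# Sketch of checking inf_{P in ambiguity} P{g(x, xi) <= 0} >= 1 - eps
# for a finite scenario space and a finite family of candidate
# distributions (all numbers illustrative).

def dr_chance_feasible(g_values, distributions, eps):
    # g_values[m] = g(x, xi_m) for scenario m, at the fixed decision x;
    # distributions = list of probability vectors over the scenarios.
    worst = min(
        sum(pm for pm, gm in zip(p, g_values) if gm <= 0.0)
        for p in distributions
    )
    return worst >= 1.0 - eps, worst

g_vals = [-1.0, -0.5, 2.0]  # constraint violated only in scenario 3
P_set = [[0.5, 0.4, 0.1], [0.6, 0.3, 0.1], [0.45, 0.45, 0.10]]
feasible, worst = dr_chance_feasible(g_vals, P_set, eps=0.15)
print(feasible, round(worst, 6))  # True 0.9
```

If the true distribution lies in the enumerated family, feasibility here implies feasibility of the nominal chance constraint, mirroring the inclusion $\bar{{\mathcal{X}}}_{\epsilon} \subseteq {\mathcal{X}}_{\epsilon}$ discussed above.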
Let us now assume that the sample space $\Xi$ is finite. By the relationship between RO and [DRO]{}, discussed in Section \[sec: rev.dro\_ro\], one may think of the parameter ${\boldsymbol{\xi}}$ in as representing a probability distribution ${\boldsymbol{p}}$ on ${\mathbb{R}}^{d}$, which is random. Accordingly, we may define $f({\boldsymbol{x}}, {\boldsymbol{p}}):={{\mathcal{R}}_{{\boldsymbol{p}}} \left[ g({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \right]}$. By leveraging the results in @bertsimas2018RO, we aim to construct a data-driven ambiguity set ${\mathcal{P}}_{\epsilon}$ that guarantees the following implication: $$\label{eq: rev.prob.guarantee_dro}
\text{If} \ {{\mathcal{R}}_{{\boldsymbol{p}}} \left[ g({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \right]} \le 0, \ \forall {\boldsymbol{p}} \in {\mathcal{P}}_{\epsilon}, \ \text{then,} \ P^{\text{true}}\{{{\mathcal{R}}_{{\tilde{{\boldsymbol{p}}}}} \left[ g({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \right]} \le 0 \} \ge 1-\epsilon. \tag{C2}$$
[(@bertsimas2018RO [Theorem 2])]{} \[thm: rev.chanceDRO\] Suppose that for any fixed ${\boldsymbol{x}}$, ${{\mathcal{R}}_{{\boldsymbol{p}}} \left[ g({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \right]}$ is concave in ${\boldsymbol{p}}$. Consider a set of data $\{{\boldsymbol{\xi}}^{i}\}_{i=1}^{N}$, drawn independently and identically distributed (i.i.d.) according to $P^{\text{true}}$. Let ${\mathcal{P}}_{\epsilon}(\alpha)$ be a $(1-\alpha)$-confidence region for $P^{\text{true}}$, constructed from a goodness-of-fit test on data. Moreover, for any ${\boldsymbol{y}} \in {\mathbb{R}}^{d}$, let $l_{\epsilon}({\boldsymbol{y}}; \alpha)$ be a closed, convex, finite-valued, and positively homogeneous (in ${\boldsymbol{y}}$) upper bound to the worst-case VaR of ${\boldsymbol{y}}^{\top} {\tilde{{\boldsymbol{p}}}}$ at level $1-\epsilon$ over ${\mathcal{P}}_{\epsilon}(\alpha)$, i.e., $\sup_{P \in {\mathcal{P}}_{\epsilon}(\alpha)} \ {\mathrm{VaR}^{P}_{1-\epsilon} \left[ {\boldsymbol{y}}^{\top} {\tilde{{\boldsymbol{p}}}} \right]} \le l_{\epsilon}({\boldsymbol{y}}; \alpha), \; {\boldsymbol{y}} \in {\mathbb{R}}^{d}$. Then, the closed, convex set ${\mathcal{P}}_{\epsilon}(\alpha)$ for which $\delta^{*}\big({\boldsymbol{y}} | {\mathcal{P}}_{\epsilon}(\alpha)\big)= l_{\epsilon}({\boldsymbol{y}}; \alpha)$ guarantees the implication with probability at least $(1-\alpha)$ (with respect to the sample probability measure).
As a byproduct of Theorem \[thm: rev.chanceDRO\], $\delta^{*}\big({\boldsymbol{y}} | {\mathcal{P}}_{\epsilon}(\alpha)\big) \le {\boldsymbol{b}}$ provides a safe approximation to $\sup_{P \in {\mathcal{P}}_{\epsilon}(\alpha)} \ P\{{\boldsymbol{y}}^{\top} {\tilde{{\boldsymbol{p}}}} \le {\boldsymbol{b}}\} \ge 1-\epsilon$. That is, there is a one-to-one correspondence between the ambiguity set ${\mathcal{P}}_{\epsilon}(\alpha)$ that satisfies the probabilistic guarantee and safe approximations to $\sup_{P \in {\mathcal{P}}_{\epsilon}(\alpha)} \ P\{{\boldsymbol{y}}^{\top} {\tilde{{\boldsymbol{p}}}} \le {\boldsymbol{b}}\} \ge 1-\epsilon$.
Relationship with Function Regularization {#sec: rev.rel_regularization}
-----------------------------------------
The goal of this section is to discuss the relationship of [DRO]{}/RO with the function regularization commonly used in machine learning.
### [DRO]{} and Regularization
Some papers have shown that [DRO]{} problems via the [*optimal transport discrepancy*]{} and [*$\phi$-divergences*]{} are connected to regularization. When the optimal transport discrepancy is used, as shown in @shafieezadeh2015 [@blanchet2016robust; @gao2016], many mainstream machine learning classification and regression models, including support vector machine (SVM), regularized logistic regression, and Least Absolute Shrinkage and Selection Operator (LASSO), have a direct distributionally robust interpretation that connects regularization to protection against perturbations in the data. To state this result, we first present a duality theorem, due to @blanchet2017DRO, and we relegate the technical details and assumptions to Section \[sec: rev.choice.ambiguity\]. On the other hand, when $\phi$-divergences are used, the [DRO]{} problem is connected to variance regularization, see, e.g., @duchi2016 [@namkoong2018variance].
Let us begin by defining the optimal transport discrepancy. Consider two probability measures $P_{1}, P_{2} \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$. Let $\Pi(P_{1}, P_{2})$ denote the set of all probability measures on ${\left( \Xi \times \Xi, {\mathcal{F}} \times {\mathcal{F}} \right)}$ whose marginals are $P_{1}$ and $P_{2}$: $$\Pi(P_{1}, P_{2})=\sset*{ \pi \in{\mathfrak{M}}{\left( \Xi \times \Xi, {\mathcal{F}} \times {\mathcal{F}} \right)}}{\pi(A\times\Xi) = P_{1}(A), \; \pi(\Xi\times A) = P_{2}(A), \; \forall A\in{\mathcal{F}} }.$$ An element of the above set is called a [*coupling*]{} or [*transport plan*]{}. Furthermore, suppose that there is a lower semicontinuous function $c: \Xi \times \Xi \mapsto {\mathbb{R}}_{+} \cup\{\infty\}$ with $c(s_{1},s_{2})=0$ if $s_{1}=s_{2}$. Then, the optimal transport discrepancy between $P_{1}$ and $P_{2}$ is defined as[^10]: $${\mathfrak{d}}^{\text{W}}_{c}(P_{1}, P_{2}):= \inf_{\pi\in \Pi(P_{1}, P_{2})} \int_{\Xi\times \Xi} c(s_1,s_2) \pi(d s_1\times d s_2). $$
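For distributions on the real line with cost $c(s_{1},s_{2})=|s_{1}-s_{2}|$, the optimal transport discrepancy admits the well-known closed form as the area between the two cumulative distribution functions, which makes it easy to compute without solving the infimum over couplings. The following Python sketch evaluates it for discrete distributions on a common sorted support; the supports and weights are illustrative:

```python
# Sketch: on the real line with cost c(s1, s2) = |s1 - s2|, the optimal
# transport discrepancy equals the area between the two CDFs. We compute
# it for discrete distributions supported on a common sorted grid.

def w1_discrete(support, p1, p2):
    # support must be sorted; p1, p2 are probability weights on it
    total, F1, F2 = 0.0, 0.0, 0.0
    for i in range(len(support) - 1):
        F1 += p1[i]
        F2 += p2[i]
        total += abs(F1 - F2) * (support[i + 1] - support[i])
    return total

# point mass at 0 vs point mass at 1: all mass travels distance 1
print(w1_discrete([0.0, 1.0], [1.0, 0.0], [0.0, 1.0]))  # 1.0
# identical distributions have zero discrepancy
print(w1_discrete([0.0, 1.0], [0.5, 0.5], [0.5, 0.5]))  # 0.0
```

For general costs $c$ and multivariate supports no such closed form exists, and the infimum over transport plans must be solved as a linear program.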
[(@blanchet2017DRO [Remark 1])]{} \[thm: rev.opt\_transport\_duality\_no\_details\] Consider an ambiguity set of probability measures as $${\mathcal{P}}^{\text{W}}(P_{0}; \epsilon):=\sset*{P\in{\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}}{{\mathfrak{d}}^{\text{W}}_{c}(P,P_{0})\le \epsilon},$$ formed via the optimal transport discrepancy ${\mathfrak{d}}^{\text{W}}_{c}(P,P_{0})$, where $c$ is the transportation cost function, $\epsilon$ is the size of the ambiguity set (i.e., level of robustness), and $P_{0}$ is a nominal probability measure. Then, for a fixed ${\boldsymbol{x}} \in {\mathcal{X}}$, we have $$ \sup_{P \in {\mathcal{P}}^{\text{W}}(P_{0};\epsilon) } \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},\cdot) \right]} =
\inf_{\lambda \ge 0} \ \left\lbrace \lambda \epsilon + {\mathbb{E}_{P_{0}} \left[ \sup_{s \in \Xi} \ \{h({\boldsymbol{x}},s)- \lambda c(\tilde{s},s)\} \right]} \right\rbrace.$$
We can use Theorem \[thm: rev.opt\_transport\_duality\_no\_details\] to explicitly state the connection between [DRO]{} and regularization. We adopt the following two theorems from @blanchet2017DRO, due to their generality. However, similar results are obtained in other papers, see, e.g., @shafieezadeh2015 [@gao2016].
[(@blanchet2016robust [Theorem 2–3])]{} \[thm: rev.lin\_reg\_square\_loss\_reg\_reg\] Consider a given set of data $\{{\boldsymbol{\xi}}^{i}:=({\boldsymbol{u}}^{i},y^{i})\}_{i=1}^{N}$, where ${\boldsymbol{u}}^{i} \in {\mathbb{R}}^{n}$ is a vector of covariates and $y^{i} \in {\mathbb{R}}$ is the response variable. Suppose that ${\mathbbmtt{P}}_{N}$ is the empirical probability distribution on $\{{\boldsymbol{\xi}}^{i}\}_{i=1}^{N}$, $c({\boldsymbol{\xi}}^{1},{\boldsymbol{\xi}}^{2}):=\| {\boldsymbol{u}}^{1} - {\boldsymbol{u}}^{2} \|_{q}^{2}$ if $y^{1}=y^{2}$, and $c({\boldsymbol{\xi}}^{1},{\boldsymbol{\xi}}^{2})=\infty$, otherwise. Let $\frac{1}{p}+\frac{1}{q}=1$. Then,
- For a linear regression model with a square loss function $h_{1}({\boldsymbol{x}}, {\boldsymbol{\xi}}):=(y-{\boldsymbol{x}}^{\top}{\boldsymbol{u}})^{2}$, we have $$\inf_{{\boldsymbol{x}} \in {\mathbb{R}}^{n} } \ \sup_{{\mathbbmtt{P}} \in {\mathcal{P}}^{\text{W}}({\mathbbmtt{P}}_{N}; \epsilon) } \ {\mathbb{E}_{{\mathbbmtt{P}}} \left[ h_{1}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} = \inf_{{\boldsymbol{x}} \in {\mathbb{R}}^{n}} \ \left\lbrace \epsilon^{\frac{1}{2}} \| {\boldsymbol{x}} \|_{p} + \Big( {\mathbb{E}_{{\mathbbmtt{P}}_{N}} \left[ h_{1}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \Big)^{\frac{1}{2}} \right\rbrace^{2},$$
- For a logistic regression model with cost function $h_{2}({\boldsymbol{x}}, {\boldsymbol{\xi}}):=\log(1+e^{-y{\boldsymbol{x}}^{\top}{\boldsymbol{u}} })$, we have $$\inf_{{\boldsymbol{x}} \in {\mathbb{R}}^{n} } \ \sup_{{\mathbbmtt{P}} \in {\mathcal{P}}^{\text{W}}({\mathbbmtt{P}}_{N}; \epsilon) } \ {\mathbb{E}_{{\mathbbmtt{P}}} \left[ h_{2}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} = \inf_{{\boldsymbol{x}} \in {\mathbb{R}}^{n}} \ \left\lbrace \epsilon \| {\boldsymbol{x}} \|_{p} + {\mathbb{E}_{{\mathbbmtt{P}}_{N}} \left[ h_{2}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \right\rbrace,$$
- For a SVM with Hinge loss $h_{3}({\boldsymbol{x}}, {\boldsymbol{\xi}}):=(1-y{\boldsymbol{x}}^{\top}{\boldsymbol{u}})_{+} $, we have $$\inf_{{\boldsymbol{x}} \in {\mathbb{R}}^{n} } \ \sup_{{\mathbbmtt{P}} \in {\mathcal{P}}^{\text{W}}({\mathbbmtt{P}}_{N}; \epsilon) } \ {\mathbb{E}_{{\mathbbmtt{P}}} \left[ h_{3}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} = \inf_{{\boldsymbol{x}} \in {\mathbb{R}}^{n}} \ \left\lbrace \epsilon \| {\boldsymbol{x}} \|_{p} + {\mathbb{E}_{{\mathbbmtt{P}}_{N}} \left[ h_{3}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \right\rbrace.$$
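To illustrate the first equivalence, the right-hand side of the square-loss case can be minimized directly. The following sketch is ours, not from @blanchet2016robust; the synthetic data and the choices $\epsilon=0.1$, $p=2$ are arbitrary. It contrasts the coefficients of the DRO-equivalent objective with the unregularized fit obtained at $\epsilon=0$.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data (u^i, y^i); the true coefficients are arbitrary.
rng = np.random.default_rng(0)
N, n = 50, 3
U = rng.normal(size=(N, n))
y = U @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=N)

def dro_objective(x, eps, p=2):
    # RHS of the square-loss equivalence:
    # (sqrt(eps) * ||x||_p + sqrt(E_{P_N}[(y - x'u)^2]))^2
    mse = np.mean((y - U @ x) ** 2)
    return (np.sqrt(eps) * np.linalg.norm(x, ord=p) + np.sqrt(mse)) ** 2

x0 = np.ones(n)
sol_dro = minimize(lambda x: dro_objective(x, eps=0.1), x0).x
sol_ols = minimize(lambda x: dro_objective(x, eps=0.0), x0).x
# A positive radius eps shrinks the coefficients toward zero,
# exactly as an explicit norm regularizer would.
```

As the theorem suggests, increasing the Wasserstein radius $\epsilon$ plays the role of increasing an explicit regularization weight.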
As stated in Theorem \[thm: rev.lin\_reg\_square\_loss\_reg\_reg\], we can rewrite an [*unconstrained*]{} [DRO]{} model with the optimal transport discrepancy as a minimization problem whose objective combines, on the one hand, an expected-cost term with respect to the empirical distribution and, on the other hand, a regularization term. Two other interesting results can be inferred from Theorem \[thm: rev.lin\_reg\_square\_loss\_reg\_reg\] about the connection between [DRO]{} and regularization: (i) the shape of the transportation cost $c$ in the definition of the optimal transport discrepancy directly determines the type of regularization, and (ii) the size of the ambiguity set is related to the regularization parameter. An important implication of these results is that one can judiciously choose an appropriate regularization parameter for the problem at hand by using the [DRO]{} equivalent reformulation. We review the papers that draw this conclusion in Section \[sec: rev.distance\].
Now, let us focus on [DRO]{} problems formulated via $\phi$-divergences. For two probability measures $P_{1}, P_{2} \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$, the $\phi$-divergence between $P_{1}$ and $P_{2}$ is defined as ${\mathfrak{d}}^{\phi}(P_{1}, P_{2}):=\int_{\Xi}\phi\left(\frac{d P_{1}}{d P_{2}}\right) d P_{2}$, where the $\phi$-divergence function $\phi : {\mathbb{R}}_{+} \rightarrow {\mathbb{R}}_{+} \cup \{+ \infty\}$ is convex, and it satisfies the following properties: $\phi(1)=0$, $0\phi\left(\frac{0}{0}\right):=0$, and $a\phi\left(\frac{a}{0}\right):=a \lim_{t \rightarrow \infty} \frac{\phi(t)}{t}$ if $a>0$[^11].
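On a finite support, the defining formula, together with the two conventions for zero denominators, can be evaluated directly. The sketch below is ours; the distributions are arbitrary, and it instantiates the definition for the Kullback–Leibler and modified $\chi^{2}$ divergence functions.

```python
import numpy as np

def phi_divergence(p1, p2, phi, phi_inf):
    """d_phi(P1, P2) = sum_i p2_i * phi(p1_i / p2_i) on a finite support,
    with 0*phi(0/0) := 0 and a*phi(a/0) := a * lim_{t->inf} phi(t)/t."""
    d = 0.0
    for a, b in zip(p1, p2):
        if b > 0:
            d += b * phi(a / b)
        elif a > 0:
            d += a * phi_inf
    return d

kl = lambda t: t * np.log(t) if t > 0 else 0.0   # Kullback-Leibler
chi2 = lambda t: (t - 1.0) ** 2                  # modified chi-squared

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.25, 0.25, 0.5])
# KL divergence of p from q is log 2; the chi^2-distance is 1.
```

Note that the asymmetry of the definition (it is not a metric in general) is visible here: swapping `p` and `q` changes the value.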
[(@duchi2016 [Theorem 2])]{} \[thm: rev.phi\_divergences\_reg\] Consider an ambiguity set of probability distributions as $${\mathcal{P}}^{\phi}({{\mathbbmtt{P}}_{0}}; \epsilon):= \sset*{{\mathbbmtt{P}} \in {{\mathfrak{P}}({\mathbb{R}}^{d},{\mathfrak{B}}({\mathbb{R}}^{d}))}}{ {\mathfrak{d}}^{\phi}({\mathbbmtt{P}},{{\mathbbmtt{P}}_{0}}) \le \epsilon},$$ formed via the $\phi$-divergence ${\mathfrak{d}}^{\phi}({\mathbbmtt{P}} ,{{\mathbbmtt{P}}_{0}})$, where $\epsilon$ is the size of the ambiguity set and ${{\mathbbmtt{P}}_{0}}$ is the empirical probability distribution on a set of independently and identically distributed (i.i.d.) data $\{{\boldsymbol{\xi}}^{i}\}_{i=1}^{N}$, drawn according to ${{\mathbbmtt{P}}^{\text{true}}}$. Furthermore, suppose that ${\mathcal{X}}$ is compact, there exists a measurable function $M: \Omega \mapsto {\mathbb{R}}_{+}$ such that for all ${\boldsymbol{\xi}} \in \Omega$, $h(\cdot, {\boldsymbol{\xi}})$ is $M({\boldsymbol{\xi}})$-Lipschitz with respect to some norm $\|\cdot\|$ on ${\mathcal{X}}$, ${\mathbb{E}_{{{\mathbbmtt{P}}^{\text{true}}}} \left[ M({\tilde{{\boldsymbol{\xi}}}})^{2} \right]}< \infty$, and ${\mathbb{E}_{{{\mathbbmtt{P}}^{\text{true}}}} \left[ |h({\boldsymbol{x}}_{0}, {\tilde{{\boldsymbol{\xi}}}})| \right]}<\infty$ for some ${\boldsymbol{x}}_{0} \in {\mathcal{X}}$. Then, $$\sup_{{\mathbbmtt{P}} \in {\mathcal{P}}^{\phi}({\mathbbmtt{P}}_{N}; \frac{\epsilon}{N}) } \ {\mathbb{E}_{{\mathbbmtt{P}}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}= {\mathbb{E}_{{\mathbbmtt{P}}_{N}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} + \Big( \frac{\epsilon}{N} {\mathrm{Var}_{{\mathbbmtt{P}}_{N}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \Big)^{\frac{1}{2}} + \gamma_{N}({\boldsymbol{x}}),$$ where $\gamma_{N}({\boldsymbol{x}})$ is such that $\sup_{{\boldsymbol{x}} \in {\mathcal{X}}} \sqrt{N} |\gamma_{N}({\boldsymbol{x}})| \rightarrow 0$ in probability.
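Up to the vanishing term $\gamma_{N}({\boldsymbol{x}})$, the right-hand side of this expansion is directly computable from a sample of losses. A minimal sketch (ours; the loss sample and the choice $\epsilon=2$ are arbitrary):

```python
import numpy as np

# Sample of losses h(x, xi^i) evaluated at a fixed candidate decision x.
rng = np.random.default_rng(7)
losses = rng.exponential(scale=1.0, size=1000)
N, eps = len(losses), 2.0

# Empirical mean plus a variance penalty approximates the worst case
# over the phi-divergence ball of radius eps/N (up to gamma_N).
robust = losses.mean() + np.sqrt(eps / N * losses.var())
```

The variance penalty vanishes at rate $N^{-1/2}$, so the robust estimate approaches the empirical mean as the sample grows.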
As Theorem \[thm: rev.phi\_divergences\_reg\] shows, we can rewrite the inner problem of a model of the form with $\phi$-divergences as the expected cost plus a regularization term that accounts for the standard deviation of the cost under the empirical distribution.
General Solution Techniques to Solve [DRO]{} Models {#sec: rev.solution}
====================================================
In this section, we discuss two approaches to solve . Let us first reformulate as follows:
\[eq: DRO\_Reformulation\] $$\begin{aligned}
\inf_{{\boldsymbol{x}} \in {\mathcal{X}}, \theta } \ & \theta \\
\text{s.t.} \quad & \theta \ge {{\mathcal{R}}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}, \; \forall P \in {\mathcal{P}}\\
& {{\mathcal{R}}_{P} \left[ {\boldsymbol{g}}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le {\boldsymbol{0}}, \; \forall P \in {\mathcal{P}}.
\end{aligned}$$
Reformulation is a semi-infinite program ([SIP]{}), and at first glance, obtaining an optimal solution to this problem seems out of reach[^12]. It is well known that even convex [SIP]{}s cannot be solved directly with numerical methods; in particular, they are not amenable to methods such as interior-point methods. Therefore, a key step of the solution techniques to handle the semi-infinite qualifier (i.e., $\forall P \in {\mathcal{P}}$) is to reformulate as an optimization problem that is amenable to available optimization techniques and off-the-shelf solvers. Of course, the complexity and tractability of such [SIP]{}s and their reformulations depend on the geometry and properties of both the ambiguity set ${\mathcal{P}}$ and the functions $h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$ and ${\boldsymbol{g}}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$. As we shall see in detail in Section \[sec: rev.choice.ambiguity\], proper assumptions on ${\mathcal{P}}$ and these functions are important in most studies on [DRO]{} in order to obtain a solvable reformulation or approximation of .
In the context of [DRO]{}, there are two main approaches to handle the semi-infinite quantifier $\forall P$ and to numerically solve . Both approaches have their roots in the [SIP]{} literature, and they both aim at getting rid of the quantifier $\forall P$, but in different ways.
Cutting-Surface Method
----------------------
The first approach replaces the quantifier $\forall P$ by [*for some finite atomic subset of*]{} ${\mathcal{P}}$. The idea is to successively solve a relaxed problem of over finitely generated inner approximations of the ambiguity set ${\mathcal{P}}$. To be precise, this approach approximates the semi-infinite constraints for all $P \in {\mathcal{P}}$ by finitely many ones over a finite set of probability distributions. In each iteration of this approach, a new probability distribution is added to this finite set until optimality criteria are met. We refer to this as a [*cutting-surface*]{} method (also known as [*exchange method*]{}, following the terminology in the [SIP]{} literature, see, e.g., @mehrotra2014semi [@hettich1993]). We refer to @pflug2007 [@rahimian2019; @bansal2018] as examples of this approach in the context of [DRO]{}.
The key requirements in order to use the cutting-surface method are the abilities to (i) solve a relaxation of with a finite number of probability distributions to optimality and (ii) generate an $\epsilon$-optimal solution[^13] to a distribution separation subproblem [@luo2019].
[(@luo2019 [Theorem 3.2])]{} \[thm: SIP\] Suppose that ${\mathcal{X}} \times {\mathcal{P}} $ is compact, and ${{\mathcal{R}}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$ and ${{\mathcal{R}}_{P} \left[ {\boldsymbol{g}}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$ are continuous on ${\mathcal{X}} \times {\mathcal{P}}$. Moreover, suppose that we have an oracle that generates an optimal solution $({\boldsymbol{x}}_{k}, \theta_{k})$ to a relaxation of problem for any finite set ${\mathcal{P}}_{k} \subseteq {\mathcal{P}}$, and an oracle that generates an $\epsilon$-optimal solution of the distribution separation subproblem $$\sup_{P \in {\mathcal{P}}} \max\Bigg\{{{\mathcal{R}}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}, {{\mathcal{R}}_{P} \left[ g_{1}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}, \ldots, {{\mathcal{R}}_{P} \left[ g_{m}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}\Bigg\}$$ for any ${\boldsymbol{x}} \in {\mathcal{X}}$ and $\epsilon>0$. Suppose that, iteratively, the relaxed master problem is solved to optimality and yields the solution $({\boldsymbol{x}}_{k}, \theta_{k})$, and the distribution separation subproblem is solved to $\frac{\epsilon}{2}$-optimality and yields the solution $P_{k}$. Then, the stopping criteria ${{\mathcal{R}}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le \theta_{k} + \frac{\epsilon}{2}$ and ${{\mathcal{R}}_{P} \left[ g_{j}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le \frac{\epsilon}{2}$, $j=1, \ldots, m$, guarantee that an $\epsilon$-feasible solution[^14] to problem , yielding an objective function value lower bounding the optimal value of , can be obtained in a finite number of iterations.
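To make the loop behind this result concrete, the following toy implementation (ours; the support, nominal distribution, loss $h(x,\xi)=(x-\xi)^{2}$, and $\ell_{1}$-ball ambiguity set are illustrative choices, not from @luo2019) alternates between a one-dimensional master problem over a finite set of distributions and an LP separation subproblem.

```python
import numpy as np
from scipy.optimize import linprog, minimize_scalar

# Finite support for xi, nominal distribution p0, and an ell_1 ball.
xi = np.array([-1.0, 0.0, 2.0, 3.0])
p0 = np.full(4, 0.25)
eps, K = 0.3, len(xi)

def loss(x):                       # h(x, xi) = (x - xi)^2, atom by atom
    return (x - xi) ** 2

def separation(x):
    """max_p p @ loss(x)  s.t.  p in simplex, ||p - p0||_1 <= eps."""
    # Variables [p, t] with |p_i - p0_i| <= t_i and sum_i t_i <= eps.
    c = np.concatenate([-loss(x), np.zeros(K)])
    A_ub = np.block([
        [np.eye(K), -np.eye(K)],              # p - p0 <= t
        [-np.eye(K), -np.eye(K)],             # p0 - p <= t
        [np.zeros((1, K)), np.ones((1, K))],  # sum_i t_i <= eps
    ])
    b_ub = np.concatenate([p0, -p0, [eps]])
    A_eq = np.block([[np.ones((1, K)), np.zeros((1, K))]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (2 * K))
    return res.x[:K], res.x[:K] @ loss(x)

def master(dists):
    """min_x max_{p in dists} p @ loss(x) over the finite set dists."""
    res = minimize_scalar(lambda x: max(p @ loss(x) for p in dists),
                          bounds=(xi.min(), xi.max()), method="bounded")
    return res.x, res.fun

dists, tol = [p0], 1e-6
for _ in range(100):
    x_k, theta_k = master(dists)      # relaxed master problem
    p_k, v_k = separation(x_k)        # distribution separation subproblem
    if v_k <= theta_k + tol:          # stopping criterion
        break
    dists.append(p_k)
```

Each iteration either certifies near-optimality or adds a new worst-case distribution (an LP vertex) to the finite inner approximation, so the loop terminates after finitely many cuts.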
It is worth noting that the distribution separation subproblem in the cutting-surface method may be a nonconvex optimization problem. One may efficiently solve through the cutting-surface method if the ambiguity set ${\mathcal{P}}$ can be convexified without changing the optimal value. The following lemma states that if ${{\mathcal{R}}_{P} \left[ \cdot \right]}$ is convex in $P$ on ${\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$, then it can be assumed without loss of generality that ${\mathcal{P}}$ is convex.
\[lem: G\_Convex\_Hull\] Consider . For a fixed $x \in {\mathcal{X}}$, suppose that ${{\mathcal{R}}_{P} \left[ \cdot \right]}$ is convex in $P$ on ${\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$. Then, ${\boldsymbol{x}}^{*} \in {\mathcal{X}}$ is an optimal solution to if and only if it is an optimal solution to the following problem: $$\label{eq: rev.conv_formulation}
\inf_{{\boldsymbol{x}} \in {\mathcal{X}} } \ \sup_{P \in {\text{conv}({\mathcal{P}})}} \ \sset*{{{\mathcal{R}}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}}{\sup_{P \in {\text{conv}({\mathcal{P}})} } \ {{\mathcal{R}}_{P } \left[ {\boldsymbol{g}}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le {\boldsymbol{0}}}.$$
Problems and can be reformulated, respectively, as $\min \sset*{\theta}{({\boldsymbol{x}},\theta) \in {\mathcal{G}}}$ and $\min \sset*{\theta}{({\boldsymbol{x}},\theta) \in {\mathcal{G}}^{\prime}}$, where $${\mathcal{G}}:=\sset*{({\boldsymbol{x}},\theta) \in {\mathbb{R}}^{n+1}}{{\boldsymbol{x}} \in {\mathcal{X}}, \; {{\mathcal{R}}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le \theta, \; {{\mathcal{R}}_{P} \left[ {\boldsymbol{g}}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le {\boldsymbol{0}}, \; \forall P \in {\mathcal{P}}},$$ and $${\mathcal{G}}^{\prime}:=\sset*{({\boldsymbol{x}},\theta) \in {\mathbb{R}}^{n+1}}{{\boldsymbol{x}} \in {\mathcal{X}}, \; {{\mathcal{R}}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le \theta, \; {{\mathcal{R}}_{P} \left[ {\boldsymbol{g}}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le {\boldsymbol{0}}, \; \forall P \in {\text{conv}({\mathcal{P}})}}.$$ Because ${\mathcal{P}} \subseteq {\text{conv}({\mathcal{P}})}$, we have ${\mathcal{G}}^{\prime} \subseteq {\mathcal{G}}$, and thus, an optimal solution to is optimal to . We now show that ${\mathcal{G}} \subseteq {\mathcal{G}}^{\prime} $. Consider an arbitrary $({\boldsymbol{x}},\theta) \in {\mathcal{G}}$. For an arbitrary $P \in {\text{conv}({\mathcal{P}})}$, there exists a collection $\{P^{i}\}_{i \in {\mathcal{I}}}$ such that $P=\sum_{i \in {\mathcal{I}}} \lambda^{i} P^{i}$, where $\sum_{i \in {\mathcal{I}}} \lambda^{i} =1$, $ P^{i} \in {\mathcal{P}}$, $\lambda^{i} \ge 0$, $i \in {\mathcal{I}}$.
Now, by the convexity of ${{\mathcal{R}}_{P} \left[ \cdot \right]}$ in $P$ on ${\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$, we have ${{\mathcal{R}}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le \sum_{i \in {\mathcal{I}}} \lambda^{i} {{\mathcal{R}}_{P^{i}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le \theta$ and ${{\mathcal{R}}_{P} \left[ {\boldsymbol{g}}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le \sum_{i \in {\mathcal{I}}} \lambda^{i} {{\mathcal{R}}_{P^{i}} \left[ {\boldsymbol{g}}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le {\boldsymbol{0}}$. Thus, it follows that $({\boldsymbol{x}},\theta) \in {\mathcal{G}}^{\prime}$, and hence, ${\mathcal{G}} \subseteq {\mathcal{G}}^{\prime} $.
Dual Method
-----------
The second approach to solve handles the quantifier $\forall P$ through the dualization of $\sup_{P \in {\mathcal{P}}} \ {{\mathcal{R}}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$ and $\sup_{P \in {\mathcal{P}}} \ {{\mathcal{R}}_{P} \left[ {\boldsymbol{g}}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le {\boldsymbol{0}}$. Under suitable regularity conditions, there is no duality gap between the primal problem and its dual, i.e., strong duality holds. Hence, each supremum can be replaced by an infimum, which must hold for at least one corresponding solution in the dual space. We refer to this approach as a [*dual method*]{}. Most of the existing papers in the [DRO]{} literature rely on the dual method, see, e.g., @delage2010 [@bertsimas2010minmax; @wiesemann2013; @ben2013]. A situation where one benefits from the application of the dual method to solve arises when the ambiguity set of probability distributions depends on the decision ${\boldsymbol{x}}$, as formulated below, see, e.g., @luo2018 [@noyan2018]: $$\label{eq: D3RO}
\inf_{{\boldsymbol{x}} \in {\mathcal{X}} } \ \sup_{P \in {\mathcal{P}}({\boldsymbol{x}})} \ \sset*{{{\mathcal{R}}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}}{\sup_{P \in {\mathcal{P}}({\boldsymbol{x}}) } \ {{\mathcal{R}}_{P} \left[ {\boldsymbol{g}}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le {\boldsymbol{0}}},$$ where, ${\mathcal{P}}({\boldsymbol{x}})$ denotes a [*decision-dependent*]{} ambiguity set of the probability distributions.
The papers that rely on the dual method exploit linear duality, Lagrangian duality, convex analysis (e.g., support functions, conjugate duality, Fenchel duality), and conic duality. A fundamental question is then under what conditions strong duality holds. One such condition is the existence of a probability measure that lies in the interior of the ambiguity set, i.e., the ambiguity set satisfies a Slater-type condition. We refer readers to optimization textbooks for results on linear and Lagrangian duality, see, e.g., @bazaraa2013NLP [@bertsekas1999NLP; @ruszczynski2006NLP; @rockafellar1974duality]. For detailed discussions of duality theory in infinite-dimensional convex problems, we refer to @rockafellar1974duality, and we refer to @isii1962 and @shapiro2001duality for duality theory in conic linear programs. Below, we briefly present the results from conic duality that are widely used in the dualization of [DRO]{} models.
[(@shapiro2001duality [Proposition 2.1])]{} \[thm: rev.conic\_duality\] For a linear mapping $A: {\mathcal{V}} \mapsto {\mathcal{W}}$, recall the definition of the adjoint mapping $A^{*}: {\mathcal{W}}^{*} \mapsto {\mathcal{V}}^{*}$, where $\langle w^{*}, Av \rangle= \langle A^{*}w^{*}, v \rangle$, $\forall v \in {\mathcal{V}}$. Consider a conic linear optimization problem of the form
\[eq: rev.conic\_primal\] $$\begin{aligned}
\min_{v \in {\mathcal{C}}} \ & \langle c,v \rangle \\
{\text{s.t.}}\quad & Av \succcurlyeq_{{\mathcal{K}}} b,\end{aligned}$$
where, ${\mathcal{C}} $ and ${\mathcal{K}}$ are convex cones and subsets of linear spaces ${\mathcal{V}}$ and ${\mathcal{W}}$, respectively, such that for any $w^{*} \in {\mathcal{W}}^{*}$, there exists a unique $v^{*} \in {\mathcal{V}}^{*}$ with $\langle w^{*},Av \rangle=\langle v^{*},v \rangle$, with $v^{*}=A^{*}w^{*}$, for all $v \in {\mathcal{V}}$. Then, the dual problem to is written as
\[eq: rev.conic\_dual\] $$\begin{aligned}
\max_{w^{*} \in {{\mathcal{K}}^{\prime}}} \ & \langle w^{*},b \rangle \\
{\text{s.t.}}\quad & A^{*}w^{*} \preccurlyeq_{{{\mathcal{C}}^{\prime}}} c.\end{aligned}$$
Moreover, there is no duality gap between and and both problems have optimal solutions if and only if there exists a feasible pair $(v, w^{*})$ such that $\langle w^{*},Av-b \rangle=0$ and $\langle c-A^{*}w^{*},v \rangle=0$.
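When ${\mathcal{V}}={\mathbb{R}}^{n}$, ${\mathcal{W}}={\mathbb{R}}^{m}$, and both cones are the nonnegative orthants, the primal–dual pair above reduces to ordinary LP duality. The following sketch (ours; the data are arbitrary) solves both problems and verifies the zero duality gap and the two complementarity conditions of the theorem.

```python
import numpy as np
from scipy.optimize import linprog

# Primal: min <c, v>  s.t.  A v >= b, v >= 0.
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([4.0, 5.0])
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=(0, None))

# Dual: max <w, b>  s.t.  A^T w <= c, w >= 0 (linprog minimizes, so use -b).
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=(0, None))

v, w = primal.x, dual.x
gap = primal.fun - (-dual.fun)                 # zero duality gap
comp1 = w @ (A @ v - b)                        # <w*, A v - b> = 0
comp2 = (c - A.T @ w) @ v                      # <c - A^T w*, v> = 0
```

The same mechanics carry over to the conic programs used in dualizing [DRO]{} models, with the orthants replaced by more general cones.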
It is worth noting that other numerical methods to solve an [SIP]{}, such as penalty methods, see, e.g., @lin2014 [@yang2016], smooth approximation and projection methods, see, e.g., @xu2014, and primal methods, see, e.g., @wang2015, have not been popular in the [DRO]{} literature, although there are a few exceptions. @liu2017primal propose to discretize [DRO]{} into a min-max problem in a finite-dimensional space, where the ambiguity set is replaced by a set of distributions on a discrete support set. Then, they consider lifting techniques to reformulate the discretized [DRO]{} as a saddle-point problem, if needed, and implement a primal-dual hybrid algorithm to solve the problem. They showcase this method for cases where the ambiguity set is formed via the moment constraints as in or the Wasserstein metric, and they present the quantitative convergence of the optimal values and optimal solutions. Other iterative primal methods that have been proposed to solve a [DRO]{} model include @lam2013 for the $\chi^{2}$-distance, and @ghosh2018sgd [@namkoong2018; @ghosh2018] for general $\phi$-divergences.
Choice of Ambiguity Set of Probability Distributions {#sec: rev.choice.ambiguity}
====================================================
The ambiguity set of distributions in a [DRO]{} model provides a flexible framework to model uncertainty by allowing the modelers to incorporate partial information about the uncertainty, obtained from historical data or domain-specific knowledge. This information includes, but is not limited to, the support of the uncertainty, the discrepancy from a reference distribution, descriptive statistics, and structural properties, such as symmetry and unimodality. Early [DRO]{} models considered ambiguity sets based on support and moment information, for which techniques from global optimization for polynomial optimization problems and the problem of moments are applied to obtain reformulations, see, e.g., @lasserre2001 [@bertsimas2006persistence; @bertsimas2005optimal; @popescu2005semidefinite; @popescu2007; @gilboa1989]. Since then, many researchers have incorporated information such as descriptive statistics as well as structural properties of the underlying unknown true distribution into the ambiguity set.
There are usually two principles guiding the choice of the ambiguity set: (1) ${\mathcal{P}}$ should be chosen as small as possible, and (2) ${\mathcal{P}}$ should contain the unknown true distribution with certainty (or at least, with high confidence). Abiding by these two principles not only reduces the conservatism of the problem but also robustifies it against the unknown true distribution. These two principles, in turn, give rise to two questions: (1) what should be the shape of the ambiguity set, and (2) what should be its size? We discuss the latter in Section \[sec: rev.calibration\], and focus on the shape of the ambiguity set in this section.
With a few exceptions, the common practice in constructing the ambiguity set is that, first, the shape of the set is determined by decision makers/modelers. In this step, data do not directly affect the choice of the shape of the ambiguity set. Then, the parameters that control the size of the ambiguity set are chosen in a data-driven fashion. We emphasize that, albeit a common practice, the size and shape of the ambiguity set need not be chosen separately. To make the transition between Sections \[sec: rev.choice.ambiguity\] and \[sec: rev.calibration\] somewhat smoother, we devote Section \[sec: rev.kernel\] to those papers that address these two questions simultaneously.
When dealing with the question of the shape of the ambiguity set, most researchers, on one hand, have focused on the ambiguity sets that facilitate a tractable (exact or conservative approximate) formulation, such as linear program (LP), second-order cone program (SOCP), or to a lesser degree, semidefinite program (SDP), so that efficient computational techniques can be developed. On the other hand, many researchers have focused on the expressiveness of the ambiguity set by incorporating information such as descriptive statistics as well as the structural properties of the underlying unknown true distribution. In what follows in this section, we review different approaches to model the distributional ambiguity. We acknowledge that the ambiguity sets in the literature are typically categorized in two groups: [*moment-based*]{} and [*discrepancy-based*]{} ambiguity sets. In short, moment-based ambiguity sets contain distributions whose moments satisfy certain properties, while discrepancy-based ambiguity sets contain distributions that are close to a nominal distribution in the sense of some [*discrepancy*]{} measure. Within these two groups, some specific ambiguity sets have been given names, see, e.g., @hanasusanto2015chance. For example,
- [*Markov*]{} ambiguity set contains all distributions with known mean and support,
- [*Chebyshev*]{} ambiguity set contains all distributions with bounds on the first- and second-order moments,
- [*Gauss*]{} ambiguity set contains all unimodal distributions from within the Chebyshev ambiguity set,
- [*Median-absolute deviation*]{} ambiguity set contains all symmetric distributions with known median and mean absolute deviation,
- [*Huber*]{} ambiguity set contains all distributions with known upper bound on the expected Huber loss function,
- [*Hoeffding*]{} ambiguity set contains all componentwise independent distributions with a box support,
- [*Bernstein*]{} ambiguity set contains all distributions from within the [*Hoeffding*]{} ambiguity set subject to marginal moment bounds,
- [*Choquet*]{} ambiguity set contains all distributions that can be written as an infinite convex combination of extremal distributions of the set,
- [*Mixture*]{} ambiguity set contains all distributions that can be written as a mixture of a parametric family of distributions.
While we use the above terminology in this paper, we categorize [DRO]{} papers into four groups:
- Discrepancy-based ambiguity sets (Section \[sec: rev.distance\]),
- Moment-based ambiguity sets (Section \[sec: rev.moment\]),
- Shape-preserving ambiguity sets (Section \[sec: rev.shape\]),
- Kernel-based ambiguity sets (Section \[sec: rev.kernel\]).
We briefly mentioned above what is meant by discrepancy-based and moment-based ambiguity sets. In short, shape-preserving ambiguity sets contain distributions with similar structural properties (e.g., unimodality, symmetry), while kernel-based ambiguity sets contain distributions formed via a kernel whose parameters are close to those of a nominal kernel function.
The above groups are not necessarily disjoint from a modeling perspective, and there is some overlap between them. However, we try to assign papers to these categories as closely as possible to what the authors explicitly or implicitly stated in their work. We review these four groups of ambiguity sets in Sections \[sec: rev.distance\]–\[sec: rev.kernel\]. Finally, we review papers that are general and do not consider a specific form for the ambiguity set in Section \[sec: rev.general\].
Discrepancy-Based Ambiguity Sets {#sec: rev.distance}
--------------------------------
In many situations, we have a [*nominal*]{} or [*baseline*]{} estimate of the underlying probability distribution. A natural way to hedge against the distributional ambiguity is then to consider a neighborhood of the nominal probability distribution by allowing some perturbations around it. So, the ambiguity set can be formed with all probability distributions whose [*discrepancy*]{} or [*dissimilarity*]{} to the nominal probability distribution is sufficiently small. More precisely, such an ambiguity set has the following generic form: $$\label{eq: rev.ambiguity_distance_generic}
{\mathcal{P}}^{{\mathfrak{d}}}(P_{0};\epsilon)=\sset*{P \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}}{{\mathfrak{d}} (P,P_{0}) \le \epsilon},$$ where $P_{0}$ denotes the nominal probability measure, and ${\mathfrak{d}} : {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}\times {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}\mapsto {\mathbb{R}}_{+} \cup \{\infty\}$ is a functional that measures the discrepancy between two probability measures $P, P_{0} \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$, dictating the shape of the ambiguity set. Moreover, parameter $\epsilon \in [0, \infty]$ controls the size of the ambiguity set, and it can be interpreted as the decision maker’s belief in $P_{0}$. Parameter $\epsilon$ is also referred to as the [*level of robustness*]{}.
A generic ambiguity set of the form has been widely studied in the [DRO]{} literature. We relegate the discussion about $P_{0}$ and $\epsilon$ to Section \[sec: rev.calibration\]. In this section, we review the different discrepancy functionals ${\mathfrak{d}}(\cdot, \cdot)$ that are used in the literature. These include (i) [*optimal transport discrepancy*]{}, (ii) [*$\phi$-divergences*]{}, (iii) [*total variation metric*]{}, (iv) [*goodness-of-fit test*]{}, (v) [*Prohorov metric*]{}, (vi) [*$\ell_{p}$-norm*]{}, (vii) [*$\zeta$-structure metric*]{}, (viii) [*Lévy metric*]{}, and (ix) [*contamination neighborhood*]{}. We emphasize that although all of the studied functionals ${\mathfrak{d}}$ quantify the discrepancy between two probability measures, they may or may not be metrics. For example, the Prohorov and total variation distances are probability metrics, see, e.g., @gibbs2002, while the [*Kullback-Leibler*]{} divergence and the [$\chi^{2}$-distance]{} from the family of $\phi$-divergences are not. Thus, we refer to the models of the form collectively as [*discrepancy-based*]{} ambiguity sets.
### Optimal Transport Discrepancy
We begin this section by providing more details on the optimal transport discrepancy. Consider two probability measures $P_{1}, P_{2} \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$. Let $\Pi(P_{1}, P_{2})$ denote the set of all probability measures on ${\left( \Xi \times \Xi, {\mathcal{F}} \times {\mathcal{F}} \right)}$ whose marginals are $P_{1}$ and $P_{2}$: $$\Pi(P_{1}, P_{2})=\sset*{ \pi \in{\mathfrak{M}}{\left( \Xi \times \Xi, {\mathcal{F}} \times {\mathcal{F}} \right)}}{ \pi(A\times\Xi) = P_{1}(A), \ \pi(\Xi\times A) = P_{2}(A), \ \forall A\in{\mathcal{F}} }.$$ Furthermore, suppose that there is a lower semicontinuous function $c: \Xi \times \Xi \mapsto {\mathbb{R}}_{+} \cup\{\infty\}$ with $c(s_{1},s_{2})=0$ if $s_{1}=s_{2}$. Then, the optimal transport discrepancy between $P_{1}$ and $P_{2}$ is defined as: $$\label{eq: rev.opt_transport}
{\mathfrak{d}}^{\text{W}}_{c}(P_{1}, P_{2}):= \inf_{\pi\in \Pi(P_{1}, P_{2})} \int_{\Xi\times \Xi} c(s_1,s_2) \pi(d s_1\times d s_2). $$ If, in addition, function $c$ is symmetric (i.e., $c(s_{1},s_{2})=c(s_{2},s_{1})$) and $c^{\frac{1}{r}}(\cdot)$ satisfies a triangle inequality for some $1 \le r < \infty$ (i.e., $c^{\frac{1}{r}}(s_{1},s_{2}) \le c^{\frac{1}{r}}(s_{1},s_{3}) + c^{\frac{1}{r}}(s_{3},s_{2})$), then, ${\mathfrak{d}}^{\text{W}}_{c^{{\frac{1}{r}}}}(P_{1}, P_{2})$ metricizes the weak convergence in ${\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$, see, e.g., @villani2008 [Theorem 6.9]. If $\Xi$ is equipped with a metric $d$ and $c(\cdot)=d^{r}(\cdot)$, then $ {\mathfrak{d}}^{\text{W}}_{c}(P_{1}, P_{2})$ is called [*Wasserstein metric of order $r$ or $r$-Wasserstein metric*]{}, for short[^15].
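On a finite support, the infimum in this definition is an ordinary transportation LP. The sketch below (ours) computes the $1$-Wasserstein distance between two point masses under $c(s_{1},s_{2})=|s_{1}-s_{2}|$.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_transport(p1, p2, C):
    """min <C, pi> over couplings pi with marginals p1 and p2."""
    m, n = C.shape
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):                 # row sums of pi equal p1
        A_eq[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):                 # column sums of pi equal p2
        A_eq[m + j, j::n] = 1.0
    b_eq = np.concatenate([p1, p2])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun

support = np.array([0.0, 1.0, 2.0])
C = np.abs(support[:, None] - support[None, :])   # c(s1, s2) = |s1 - s2|
p1 = np.array([1.0, 0.0, 0.0])                    # point mass at 0
p2 = np.array([0.0, 0.0, 1.0])                    # point mass at 2
# Moving all mass from 0 to 2 gives a 1-Wasserstein distance of 2.
```

For point masses the coupling is unique, so the distance coincides with the ground cost between the two atoms.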
The optimal transport discrepancy can be used to form an ambiguity set of probability measures as follows: $$\label{eq: rev.opt.transport.set}
{\mathcal{P}}^{\text{W}}(P_{0}; \epsilon):=\sset*{P\in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}}{{\mathfrak{d}}^{\text{W}}_{c}(P,P_{0})\le \epsilon}.$$ Over the past few years, there has been a significant growth in the popularity of the optimal transport discrepancy to model the distributional ambiguity in [DRO]{}, in both operations research and machine learning communities, see, e.g., @pflug2007 [@mehrotra2014; @mohajerin2018; @gao2016; @chen2018chance; @blanchet2018structural; @lee2015; @luo2019; @shafieezadeh2015; @sinha2018; @lee2018stat; @shafieezadeh2018; @singh2018]. Pioneered by the work of @pflug2007, most of the literature has focused on the Wasserstein metric. Before we review these papers, we present a duality result on $\sup_{P \in {\mathcal{P}}^{\text{W}}(P_{0}; \epsilon)} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$, proved in a general form in @blanchet2017DRO.
Because the infimum in the definition of is attained for a lower semicontinuous function $c$ [@villani2008; @rachev1998], we can rewrite $\sup_{P \in {\mathcal{P}}^{\text{W}}(P_{0}; \epsilon)} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$ as follows: $$\label{eq: rev.opt_transport_primal}
\sup_{\pi \in \Phi_{P_{0}, \epsilon} } \ \int_{\Xi} h({\boldsymbol{x}},s) \pi(\Xi \times d s),$$ where $$\begin{split}
& \Phi_{P_{0}, \epsilon} := \\
& {} \sset*{ \pi \in {\mathfrak{M}}{\left( \Xi \times \Xi, {\mathcal{F}} \times {\mathcal{F}} \right)}}{\pi \in \cup_{P \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}} \Pi(P_{0}, P), \; \int_{\Xi\times \Xi} c(s_1,s_2) \pi(d s_1\times d s_2) \le \epsilon}.
\end{split}$$ Recall that ${\mathcal{S}}{\left( \Xi, {\mathcal{F}} \right)}$ is the collection of all ${\mathcal{F}}$-measurable functions $Z: {\left( \Xi, {\mathcal{F}} \right)}\mapsto ({\overline{{\mathbb{R}}}}, {\mathcal{B}}(\overline{{\mathbb{R}}}))$. With the primal problem , we have a dual problem $$\label{eq: rev.opt_transport_dual}
\inf_{(\lambda, \phi) \in \Lambda_{c,h({\boldsymbol{x}}, \cdot)} } \ \left\lbrace \lambda \epsilon + \int_{\Xi} \phi(s) P_{0}(ds) \right\rbrace ,$$ where $$\Lambda_{c,h({\boldsymbol{x}}, \cdot)}:=\sset*{(\lambda, \phi)}{\lambda \ge 0, \ \phi \in {\mathcal{S}}{\left( \Xi, {\mathcal{F}} \right)}, \ \phi(s_{1}) + \lambda c(s_1,s_2) \ge h({\boldsymbol{x}}, s_{2}), \forall s_1,s_2 \in \Xi}.$$
[(@blanchet2017DRO [Theorem 1])]{} \[thm: rev.opt\_transport\_duality\] For a fixed ${\boldsymbol{x}} \in {\mathcal{X}}$, suppose that $h({\boldsymbol{x}}, \cdot)$ is upper semicontinuous and $P_{0}$-integrable, i.e., $\int_{\Xi} |h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}(s))| P_{0}(ds) < \infty$. Then, $$\sup_{\pi \in \Phi_{P_{0}, \epsilon} } \ \int_{\Xi} h({\boldsymbol{x}},s) \pi(\Xi \times d s) = \inf_{(\lambda, \phi) \in \Lambda_{c,h({\boldsymbol{x}}, \cdot)} } \ \left\lbrace \lambda \epsilon + \int_{\Xi} \phi(s) P_{0}(ds) \right\rbrace.$$ Moreover, there exists a dual optimal solution of the form $(\lambda, \phi_{\lambda})$, for some $\lambda \ge 0$, where $\phi_{\lambda}(s_{1}):=\sup_{s_2 \in \Xi} \ \{h({\boldsymbol{x}},s_{2})- \lambda c(s_{1},s_{2}) \}$. In addition, any feasible $\pi^{*} \in \Phi_{P_{0},\epsilon}$ and $(\lambda^{*}, \phi_{\lambda^{*}}) \in \Lambda_{c,h({\boldsymbol{x}}, \cdot)}$ are primal and dual optimizers, satisfying $$\int_{\Xi} h({\boldsymbol{x}},s) \pi^{*}(\Xi \times d s) = \lambda^{*} \epsilon + \int_{\Xi} \phi_{\lambda^{*}}(s) P_{0}(ds),$$ if and only if
\[eq: rev.opt\_trasnport\_conditions\] $$\begin{aligned}
& h({\boldsymbol{x}},s_{2})- \lambda^{*} c(s_{1},s_{2})= \sup_{s_{3} \in \Xi} \ \{h({\boldsymbol{x}},s_{3})- \lambda^{*} c(s_{1},s_{3}) \}, \ \pi^{*}\text{-almost surely},\\
& \lambda^{*} \Big( \int_{\Xi\times \Xi} c(s_1,s_2) \pi^{*}(d s_1\times d s_2) - \epsilon \Big)=0.
\end{aligned}$$
\[cor: rev.opt\_transport\_duality\] Suppose that $h({\boldsymbol{x}}, \cdot)$ is upper semicontinuous and $P_{0}$-integrable. Then, $$\label{eq: rev.opt_transport_duality_final}
\sup_{P \in {\mathcal{P}}^{\text{W}}(P_{0}; \epsilon)} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}= \inf_{\lambda \ge 0} \ \left\lbrace \lambda \epsilon + {\mathbb{E}_{P_{0}} \left[ \sup_{s \in \Xi} \ \{h({\boldsymbol{x}},s)- \lambda c(\tilde{s},s)\} \right]} \right\rbrace.$$
The importance of Theorem \[thm: rev.opt\_transport\_duality\] and Corollary \[cor: rev.opt\_transport\_duality\] is that (1) the transportation cost $c(\cdot, \cdot)$ is only required to be lower semicontinuous, (2) the function $h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$ is only assumed to be upper semicontinuous and integrable, and (3) $\Xi$ is a general Polish space. In other words, only mild conditions are imposed on $h({\boldsymbol{x}}, \cdot)$ and $c$, and $P_{0}$ can be any probability measure. Moreover, $\sup_{P \in {\mathcal{P}}^{\text{W}}(P_{0};\epsilon)} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$ can be obtained by solving a univariate reformulation of the dual problem , which involves an expectation with respect to $P_{0}$ and a term that is linear in the level of robustness $\epsilon$. We shall shortly comment on similar results in the literature, obtained under stronger assumptions. As shown in Section \[sec: rev.rel\_regularization\], by using Theorem \[thm: rev.opt\_transport\_duality\] or its weaker forms, researchers have shown that many mainstream machine learning algorithms, such as regularized logistic regression and LASSO, have a [DRO]{} representation; see, e.g., @blanchet2016robust [@blanchet2017groupwise; @blanchet2017Semi; @gao2017; @shafieezadeh2015; @shafieezadeh2017]. While @blanchet2017DRO provide a strong duality result for [DRO]{} formed via the optimal transport discrepancy under mild assumptions by utilizing Fenchel duality, @mohajerin2018 and @gao2016 are also notable contributions in this area. Below, we first highlight the main differences of @mohajerin2018 and @gao2016 from @blanchet2017DRO, and then comment on their main contributions.
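To make the univariate dual reformulation concrete, the following sketch (our illustration, not from the cited papers; Python with NumPy assumed) checks Corollary \[cor: rev.opt\_transport\_duality\] numerically for the toy choice $h({\boldsymbol{x}},s)=s$ and $c(s_1,s_2)=|s_1-s_2|$, for which the inner supremum is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
xi = rng.normal(size=1000)   # atoms of the empirical nominal distribution P0
eps = 0.3                    # radius of the Wasserstein ball

# Toy loss h(x, s) = s with transportation cost c(s1, s2) = |s1 - s2|.
# The inner sup_s {s - lam*|t - s|} equals t when lam >= 1 and +infinity
# otherwise, so the univariate dual is inf_{lam >= 1} {lam*eps + mean(xi)}.
lams = np.linspace(1.0, 5.0, 401)
dual_opt = (lams * eps + xi.mean()).min()

# Primal worst case: transport every atom to the right by eps
# (total transportation cost exactly eps), raising the expectation by eps.
primal = (xi + eps).mean()

assert abs(dual_opt - primal) < 1e-9
```

For losses without a closed-form inner supremum, that one-dimensional maximization over $s$ must itself be solved numerically inside the expectation, but the outer problem remains a univariate search over $\lambda \ge 0$.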
In @mohajerin2018, it is assumed that the transportation cost $c(\cdot, \cdot)$ is a norm on ${\mathbb{R}}^{n}$, the function $h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$ has a specific structure, and the nominal probability measure $P_{0}$ is the empirical distribution of data supported on ${\mathbb{R}}^{n}$. On the other hand, @gao2016 consider a more general setting than the one in @mohajerin2018, but a slightly more restricted one than that of @blanchet2016robust. More precisely, in contrast to @blanchet2016robust, it is assumed in @gao2016 that the transportation cost $c(\cdot, \cdot)$ forms a metric on the underlying Polish space.
@mohajerin2018 study data-driven [DRO]{} problems formed via $1$-Wasserstein metric utilizing an arbitrary norm on ${\mathbb{R}}^{n}$. The main contributions of @mohajerin2018 are proving a strong duality result for the studied problem and reformulating it as a finite-dimensional convex program for different cost functions, including pointwise maxima of finitely many concave functions, convex functions, and sums of maxima of concave functions. This contribution is of importance as most of the previous research on [DRO]{} formed via Wasserstein ambiguity sets reformulates the problem as a finite-dimensional nonconvex program and relies on global optimization techniques, such as difference-of-convex programming, to solve the problem, see, e.g., [@wozabal2012 Theorem 6]. In addition, @mohajerin2018 propose a procedure to construct an extremal distribution (respectively, a sequence of distributions) that attains the worst-case expectation precisely (respectively, asymptotically). They further show that their solutions enjoy finite-sample and asymptotic consistency guarantees. The results were applied to mean-risk portfolio optimization and uncertainty quantification problems.
@gao2016 study [DRO]{} problems formed via $p$-Wasserstein metric utilizing an arbitrary metric on a Polish space $\Xi$. Recognizing the fact that the ambiguity set should be chosen judiciously for the application at hand, they argue that by using the Wasserstein metric the resulting distributions hedged against are more reasonable than those resulting from other popular choices of sets, such as $\phi$-divergence-based sets; see Section \[sec: rev.phi\]. They prove a strong duality result for the studied problem by utilizing Lagrangian duality and approximate the worst-case distributions (or obtain a worst-case distribution, if it exists) explicitly via the first-order optimality conditions of the dual reformulation. Using this, they show that data-driven [DRO]{} problems can be approximated by robust optimization problems.
In addition to the papers by @blanchet2017DRO [@mohajerin2018; @gao2016], there is further research on [DRO]{} problems formed via the optimal transport discrepancy, under more restrictive assumptions, that moves the frontier of research in this area. In the following review, we mention the properties of the transportation cost $c(\cdot, \cdot)$ in the definition of the optimal transport discrepancy, the function ${\boldsymbol{g}}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$ or $h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$, and the nominal distribution $P_{0}$ and its underlying space as studied in these papers. @zhao2018wass study a data-driven distributionally robust two-stage stochastic linear program over a Wasserstein ambiguity set, with $1$-Wasserstein metric utilizing $\ell_1$-norm. By developing a strong duality result, they reformulate the problem as a semi-infinite linear two-stage robust optimization problem. In addition, under mild conditions, they derive a closed-form expression of the worst-case distribution whose parameters can be obtained by solving a traditional two-stage robust optimization model. They also show the convergence of the problem to the corresponding stochastic program under the true unknown probability distribution as the number of data points increases.
@hanasusanto2018 derive conic programming reformulations of distributionally robust two-stage stochastic linear programs formed via $p$-Wasserstein metric utilizing an arbitrary norm. In particular, by relying on the strong duality result from @mohajerin2018 and @gao2016, they show that when the ambiguity set is formed via the $2$-Wasserstein metric around a discrete distribution, the resulting model is equivalent to a copositive program of polynomial size (if the problem has complete recourse) or it can be approximated by a sequence of copositive programs of polynomial size (if for any fixed ${\boldsymbol{x}}$ and ${\boldsymbol{\xi}}$, the dual of the second-stage problem is feasible). Moreover, by using nested hierarchies of semidefinite approximations of the (intractable) copositive cones from the inside, they obtain sequences of tractable conservative approximations to the problem. They also show that the two-stage distributionally robust stochastic linear program with non-random cost function in the second stage, where the ambiguity set is formed via the $1$-Wasserstein metric around a discrete distribution, is equivalent to a linear program. They further extend their result to a case where optimized certainty equivalent (OCE) [@bental1986; @bental2007OCE] is used as a risk measure. As applications, they demonstrate their results for the least absolute deviations regression and multitask learning problems.
For random variables supported on a compact set and a bounded continuous function $h({\boldsymbol{x}}, \cdot)$, @luo2019 study the [DRO]{} problem formed via the $1$-Wasserstein metric utilizing an arbitrary norm, around the empirical distribution of data. They present an equivalent SIP reformulation of the problem by reformulating the inner problem as a conic linear program. In order to solve the resulting SIP, they propose a finitely convergent exchange method when the cost function $h$ is a general nonlinear function in ${\boldsymbol{x}}$, and a central cutting-surface method with a linear rate of convergence when the cost function $h(\cdot, {\boldsymbol{\xi}})$ is convex in ${\boldsymbol{x}}$ and ${\mathcal{X}}$ is convex. They investigate a logistic regression model to exemplify their algorithmic ideas and the benefits of using the $1$-Wasserstein metric.
@pflug2014 study a [DRO]{} approach to single- and two-stage stochastic programs formed via the $p$-Wasserstein metric utilizing an arbitrary norm. They assume that all probability distributions in the ambiguity set are supported on discrete, fixed atoms, while only the probabilities of atoms are changing in the ambiguity set. Hence, the ambiguity set can be represented as a subset of a finite-dimensional space. To solve the resulting problem, they apply the exchange method, proposed in @pflug2007. @mehrotra2014 study a distributionally robust ordinary least squares problem, where the ambiguity set of probability distribution is formed via $1$-Wasserstein metric utilizing $\ell_{1}$-norm. Similar to @pflug2014, they restrict the ambiguity set of distributions to all discrete distributions and show that the resulting problem can be solved by using an equivalent SOCP reformulation.
Unlike @pflug2014 and @mehrotra2014, which only allow varying the probabilities on atoms identical to those of the nominal distribution, in @wozabal2012 the ambiguity set is allowed to contain infinite-dimensional distributions. @wozabal2012 study a [DRO]{} approach to single-stage stochastic programs, where the distributional ambiguity in the constraints and objective function is modeled via $1$-Wasserstein metric utilizing $\ell_{1}$-norm around the empirical distribution. Because such a model has a higher complexity than those considered in @pflug2014 and @mehrotra2014, they propose to reformulate the problem into an equivalent finite-dimensional, nonconvex saddle-point optimization problem, under appropriate conditions. The key ideas in @wozabal2012 to obtain such a reformulation are that (i) at any level of precision and in the sense of Kantorovich distance, every distribution in the ambiguity set can be approximated via a probability distribution supported on a uniformly bounded number of atoms, and (ii) considering only the extremal distributions in the ambiguity set suffices to obtain the equivalent reformulation. Furthermore, for a portfolio selection problem complemented via a broad class of convex risk measures appearing in the constraints, they obtain an equivalent finite-dimensional, nonconvex, semidefinite saddle-point optimization problem. They propose to solve such a reformulated problem via the exchange method, proposed in @pflug2007. @pichler2017 study a [DRO]{} model with a distortion risk measure and form the ambiguity set of distributions via $p$-Wasserstein metric utilizing an arbitrary norm. They quantitatively investigate the effect of the variation of the ambiguity set on the optimal value and the optimal solution in the resulting optimization problem, as the number of data points increases. They illustrate their results in the context of a two-stage stochastic program with recourse.
A class of data-driven distributionally robust fractional optimization problems, representing a reward-risk ratio, is studied in @ji2017 as follows: $$\inf_{{\boldsymbol{x}} \in {\mathcal{X}} } \ \sup_{P \in {\mathcal{P}}} \ \frac{{\mathcal{R}}^{1}_{P}\left[h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})\right]}{{\mathcal{R}}^{2}_{P}\left[h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})\right]},$$ where ${\mathcal{R}}^{1}_{P}: {\mathcal{Z}} \mapsto {\mathbb{R}}$ is a reward measure and ${\mathcal{R}}^{2}_{P}: {\mathcal{Z}} \mapsto {\mathbb{R}}_{+}$ is a nonnegative risk measure. Assuming that the underlying distribution is discrete, @ji2017 model the ambiguity about discrete distributions using the $1$-Wasserstein metric utilizing $\ell_1$-norm, around the empirical distribution. They provide a nonconvex reformulation for the resulting model and propose a bisection algorithm to obtain the optimal value by solving a sequence of convex programming problems. As in @postek2016, the reformulation is obtained through investigating the support function of the ambiguity set and the convex conjugate of the ratio function. They further apply their results to portfolio optimization problem for the Sharpe ratio [@sharpe1966] and Omega ratio [@keating2002].
Motivated by the drawback of moment-based [DRO]{} problems, @gao2017dep study [DRO]{} formed via various ambiguity sets of probability distributions that incorporate the dependence structure between the uncertain parameters. In the case that there exists a linear dependence structure, they consider probability distributions around a nominal distribution, in the sense of $p$-Wasserstein metric utilizing an arbitrary norm, satisfying a second-order moment constraint. They also study cases with different rank dependencies between the uncertain parameters. They obtain tractable reformulations of these models and apply their results to a portfolio optimization problem. Along the same lines as @gao2017dep, @pflug2017review study a [DRO]{} approach to portfolio optimization via the $1$-Wasserstein metric utilizing an arbitrary norm. They address the case where the dependence structure between the assets is uncertain while the marginal distributions of the assets are known. @noyan2018 study a [DRO]{} model with a decision-dependent ambiguity set, where the ambiguity set is formed via the $p$-Wasserstein metric utilizing $\ell_{p}$-norm. They consider two types of ambiguity sets: (1) [*continuous*]{} ambiguity set, where there is ambiguity in both the probability distribution of ${\tilde{{\boldsymbol{\xi}}}}$ and its realizations, and (2) [*discrete*]{} ambiguity set, where there is only ambiguity in the probability distribution of ${\tilde{{\boldsymbol{\xi}}}}$, while the realizations are fixed. They apply their results to problems in machine scheduling and humanitarian logistics. @rujeerapaiboon2018reduction study continuous and discrete scenario reduction [@dupavcova2003scenario; @heitsch2003scenario; @heitsch2009modeling; @heitsch2009reduction; @arpon2018], where $p$-Wasserstein metric utilizing $\ell_{p}$-norm is used as a measure of discrepancy between distributions.
#### Discrete Problems
We now review [DRO]{} models over Wasserstein ambiguity sets, with discrete decisions. @bansal2018 study a distributionally robust integer program with pure binary first-stage and mixed-binary second-stage variables on a finite set of scenarios as follows: $$\min_{{\boldsymbol{x}}}\sset*{{\boldsymbol{c}}^{\top}{\boldsymbol{x}} +\max_{P \in {\mathcal{P}}} {\mathbb{E}_{P} \left[ h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \right]}}{{\boldsymbol{A}}{\boldsymbol{x}} \ge {\boldsymbol{b}}, \; {\boldsymbol{x}} \in \{0,1\}^{n}},$$ where $$h({\boldsymbol{x}},{\boldsymbol{\xi}})=\min_{{\boldsymbol{y}}}\sset*{{\boldsymbol{q}}^{\top}({\boldsymbol{\xi}}){\boldsymbol{y}}({\boldsymbol{\xi}})}{{\boldsymbol{W}}({\boldsymbol{\xi}}){\boldsymbol{y}}({\boldsymbol{\xi}}) \ge {\boldsymbol{r}}({\boldsymbol{\xi}}) - {\boldsymbol{T}}({\boldsymbol{\xi}}) {\boldsymbol{x}}, \; {\boldsymbol{y}}({\boldsymbol{\xi}}) \in \{0,1\}^{q_{1}} \times {\mathbb{R}}^{q-q_{1}}}.$$ They propose a decomposition-based L-shaped algorithm and a cutting surface algorithm to solve the resulting model. They investigate the conditions and the ambiguity sets under which the proposed algorithm is finitely convergent. They show that the ambiguity set of distributions formed via $1$-Wasserstein metric utilizing an arbitrary norm satisfies these conditions. @xu2018mip study a mixed 0-1 linear program, where the coefficients of the objective function are affinely dependent on the random vector ${\tilde{{\boldsymbol{\xi}}}}$. They seek a bound on the worst-case expected optimal value of this problem, where the worst case is taken with respect to an ambiguity set of discrete distributions formed via $2$-Wasserstein metric utilizing $\ell_{2}$-norm around the empirical distribution of data. Under mild assumptions, they reformulate the problem into a copositive program, which leads to a tractable semidefinite-based approximation.
#### Chance Constraints
In this section, we review distributionally robust chance-constrained programs over Wasserstein ambiguity sets, see, e.g., @jiang2016chance [@chen2018chance; @xie2018wass; @yang2018control]. @ji2018chance study a distributionally robust individual chance constraint, where the ambiguity set of distributions is formed via $1$-Wasserstein metric utilizing $\ell_{1}$-norm, and $g({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}})$ in is defined as $$g({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}):=\mathbbm{1}_{[{\boldsymbol{a}}({\tilde{{\boldsymbol{\xi}}}})^{\top}{\boldsymbol{x}} \le {\boldsymbol{b}}({\tilde{{\boldsymbol{\xi}}}})]}({\tilde{{\boldsymbol{\xi}}}}).$$ For the case that the underlying distribution is supported on the same atoms as those of the empirical distribution, they provide mixed-integer LP reformulations for the linear random right-hand side case, i.e., $g({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}):=\mathbbm{1}_{[{\boldsymbol{a}}^{\top}{\boldsymbol{x}} \le {\tilde{{\boldsymbol{\xi}}}}]}({\tilde{{\boldsymbol{\xi}}}})$, and the linear random technology matrix case, i.e., $g({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}):=\mathbbm{1}_{[{\tilde{{\boldsymbol{\xi}}}}^{\top}{\boldsymbol{x}} \le {\boldsymbol{b}}]}({\tilde{{\boldsymbol{\xi}}}})$, and provide techniques to strengthen the formulations. For the case that the underlying distribution is infinitely supported, they propose an exact mixed-integer SOCP reformulation for models with random right-hand side, while a relaxation is proposed for constraints with a random technology matrix. They show that this mixed-integer SOCP relaxation is exact when the decision variables are binary or bounded general integer.
@chen2018chance study data-driven distributionally robust chance constrained programs, where the ambiguity set of distributions is formed via $p$-Wasserstein metric utilizing an arbitrary norm. For individual linear chance constraints with affine dependency on the uncertainty, and for joint chance constraints with right-hand side affine uncertainty, they provide an exact deterministic reformulation as a mixed-integer conic program. When $\ell_{1}$-norm or $\ell_{\infty}$-norm are used as the transportation cost in the definition of Wasserstein metric, the chance-constrained program can be reformulated as a mixed-integer LP. They leverage the structural insights into the worst-case distributions, and show that both the CVaR and the Bonferroni approximation may give solutions that are inferior to the optimal solution of their proposed reformulation.
#### Statistical Learning
[DRO]{} problems formed via the optimal transport discrepancy have been widely studied in the context of statistical learning. We already mentioned @mehrotra2014 as an example in this area. Below, we review the latest developments of [DRO]{} in the context of statistical learning. A data-driven distributionally robust maximum likelihood estimation model to infer the inverse of the covariance matrix of a normal random vector is proposed in @nguyen2018. They form the ambiguity set of distributions with all normal distributions close enough to a nominal distribution characterized by the sample mean and sample covariance matrix, in the sense of the $2$-Wasserstein metric utilizing $\ell_2$-norm. By leveraging an analytical formula for the Wasserstein distance between two normal distributions, they obtain an equivalent SDP reformulation of the problem. When there is no prior sparsity information on the inverse covariance matrix, they propose a closed-form expression for the estimator that can be interpreted as a nonlinear shrinkage estimator. Otherwise, they propose a sequential quadratic approximation algorithm to obtain the estimator by solving the equivalent SDP. They apply their results to linear discriminant analysis, portfolio selection, and solar irradiation pattern inference problems.
@lee2015 study a distributionally robust framework for finding support vector machines via the $1$-Wasserstein metric. They provide SIP formulation of the resulting model and propose a cutting-plane algorithm to solve the problem. @lee2017 [@lee2018stat] study a distributionally robust statistical learning problem formed via the $p$-Wasserstein metric utilizing $\ell_{p}$-norm, motivated by a domain (i.e., measure) adaption problem. This problem arises when training data are generated according to an unknown source domain ${\mathbbmtt{P}}$, but the learned hypothesis is evaluated on another unknown but related target domain ${\mathbbmtt{Q}}$. In this problem, it is assumed that a set of labeled data (covariates and responses) is drawn from ${\mathbbmtt{P}}$ and a set of unlabeled covariates is drawn from ${\mathbbmtt{Q}}$. It is further assumed that the domain drift is due to an unknown deterministic transformation on the covariates space that preserves the distribution of the response conditioned on the covariates. Under these assumptions and some further regularity conditions, they prove a generalization bound and generalization error guarantees for the problem.
@gao2018hypothesis develop a novel distributionally robust framework for hypothesis testing where the ambiguity set of distributions is constructed via $1$-Wasserstein metric utilizing an arbitrary norm, around the empirical distribution. The goal is to obtain the optimal decision rule as well as the least favorable distribution by minimizing the maximum of the worst-case type-I and type-II errors. They develop a convex safe approximation of the resulting problem and show that such an approximation renders a nearly-optimal decision rule among the family of all possible tests. By exploiting the structure of the least favorable distribution, they also develop a finite-dimensional convex programming reformulation of the safe approximation. We now turn our attention to the connection between [DRO]{} and regularization in statistical learning. @pflug2012 [@pichler2013; @wozabal2014] draw the connection between robustification and regularization, where, as in Theorem \[thm: rev.lin\_reg\_square\_loss\_reg\_reg\], (i) the shape of the transportation cost in the definition of the optimal transport discrepancy directly implies the type of regularization, and (ii) the size of the ambiguity set dictates the regularization parameter. @pichler2013 studies worst-case values of lower semicontinuous and law-invariant risk measures, including spectral and distortion risk measures, over an ambiguity set of distributions formed via the $p$-Wasserstein metric utilizing an arbitrary norm around the empirical distribution. They show that when the function $h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$ is linear in ${\tilde{{\boldsymbol{\xi}}}}$, the worst-case value is the sum of the risk of $h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$ under the nominal distribution and a regularization term.
@pflug2012 and @wozabal2014 show that the worst-case value of a convex law-invariant risk measure over an ambiguity set of distributions, formed via the $p$-Wasserstein metric utilizing $\ell_{p}$-norm around the empirical distribution, reduces to the sum of the nominal risk and a regularization term whenever the function $h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$ is affine in ${\tilde{{\boldsymbol{\xi}}}}$. They provide closed-form expressions for risk measures such as expectation, sum of expectation and standard deviation, CVaR, distortion risk measure, Wang transform, proportional hazards transform, the Gini measure, and sum of expectation and mean absolute deviation from the median. They apply their results to a portfolio selection problem. Important parts of the derivation of results in @pflug2012 [@pichler2013; @wozabal2014] are Kusuoka’s representation of risk measures [@kusuoka2001; @shapiro2013kusuoka] and the Fenchel-Moreau theorem [@rockafellar1997; @ruszczynski2006optimization].
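The affine case underlying these robustification-equals-regularization results can be checked numerically. The following sketch (our illustration, assuming Python/NumPy) verifies that, for the expectation risk measure with $h({\boldsymbol{x}},{\boldsymbol{\xi}})={\boldsymbol{x}}^{\top}{\boldsymbol{\xi}}$ and Euclidean transportation cost, the worst-case expectation over a $1$-Wasserstein ball equals the nominal expectation plus $\epsilon$ times the dual norm of ${\boldsymbol{x}}$:

```python
import numpy as np

rng = np.random.default_rng(1)
Xi = rng.normal(size=(500, 3))     # atoms of the empirical nominal distribution
x = np.array([1.0, -2.0, 0.5])     # a fixed decision vector
eps = 0.2                          # radius of the 1-Wasserstein ball

# For h(x, xi) = x . xi and cost c(t, s) = ||t - s||_2, the inner supremum
# sup_s {x . s - lam * ||t - s||_2} equals x . t when lam >= ||x||_2 (the
# dual norm of the Euclidean norm) and +infinity otherwise.
nominal_mean = (Xi @ x).mean()
lams = np.linspace(np.linalg.norm(x), np.linalg.norm(x) + 3.0, 601)
dual_opt = (lams * eps + nominal_mean).min()

# Regularization form predicted by the theory: nominal risk + eps * ||x||_*.
reg_form = nominal_mean + eps * np.linalg.norm(x)

assert abs(dual_opt - reg_form) < 1e-9
```

Swapping the Euclidean cost for another norm replaces $\|{\boldsymbol{x}}\|_{2}$ by the corresponding dual norm, which is exactly how the shape of the transportation cost dictates the type of regularization.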
In the context of statistical learning, the connection between [DRO]{} and regularization was first made in @shafieezadeh2015, to the best of our knowledge. In fact, they study a distributionally robust logistic regression, where an ambiguity set of probability distributions, supported on an open set, is formed around the empirical distribution of data and via the $1$-Wasserstein metric utilizing an arbitrary norm. They show the resulting problem admits an equivalent reformulation as a tractable convex program. As stated in Theorem \[thm: rev.lin\_reg\_square\_loss\_reg\_reg\], this problem can be interpreted as a standard regularized logistic regression, where the size of the ambiguity set dictates the regularization parameter. They further propose a distributionally robust approach based on Wasserstein metric to compute upper and lower confidence bounds on the misclassification probability of the resulting classifier, based on the optimal values of two linear programs.
@shafieezadeh2017 extend the work of @shafieezadeh2015 and study distributionally robust supervised learning (regression and classification) models. They introduce a new generalization technique using ideas from [DRO]{}, whose ambiguity set contains all infinite-dimensional distributions in the Wasserstein neighborhood of the empirical distribution. They show that the classical robust and the distributionally robust learning models are equivalent if the data satisfies a dispersion condition (for regression) or a separability condition (for classification). By imposing a bound on the decision (i.e., hypothesis) space, they improve the upper confidence bound on the out-of-sample performance proposed in @mohajerin2018 and prove a generalization bound that does not rely on the complexity of the hypothesis space. This is unlike the traditional generalization bounds that are derived by controlling the complexity of the hypothesis space, in terms of Vapnik-Chervonenkis (VC)-dimension, covering numbers, or Rademacher complexities [@bartlett2002; @shalev2014ML], which are usually difficult to calculate and interpret in practice. They extend their results to the case where the unknown hypothesis is searched for in a space of nonlinear functionals. Given a symmetric and positive definite kernel function, such a setting gives rise to a lifted [DRO]{} problem that searches for a linear hypothesis over a [*reproducing kernel Hilbert space*]{} (RKHS).
@gao2017 study [DRO]{} problems formed via the $p$-Wasserstein metric utilizing an arbitrary norm, around the empirical distribution. They identify a broad class of cost functions for which such a [DRO]{} is asymptotically equivalent to a regularization problem with a gradient-norm penalty under the nominal distribution. For the class of linear functions, this equivalence is exact and results in a new interpretation for discrete choice models, including multinomial logit, nested logit, and generalized extreme value choice models. They also obtain lower and upper bounds on the worst-case expected cost in terms of regularization.
@mohajerinesfahani2018inverse study a data-driven inverse optimization problem to learn the objective function of the decision maker, given the historical data on uncertain parameters and decisions. In an environment with imperfect information, they propose a [DRO]{} model formed via the $p$-Wasserstein metric utilizing an arbitrary norm to minimize the worst-case risk of the predicted error. Such a model can be interpreted as a regularization of the corresponding empirical risk minimization problem. They present exact (or safe approximation) tractable convex programming reformulation for different combinations of risk measures and error functions.
@blanchet2017groupwise study group-square-root LASSO (group LASSO focuses on variable selection in settings where some predictive variables, if selected, must be chosen as a group). They model this problem as a [DRO]{} problem formed via the $p$-Wasserstein metric utilizing an arbitrary norm. A method for (semi-) supervised learning based on data-driven [DRO]{} via $p$-Wasserstein metric utilizing an arbitrary norm, is proposed in @blanchet2017Semi. This method enhances the generalization error by using the unlabeled data to restrict the support of the worst-case distribution in the resulting [DRO]{}. They select the level of robustness using cross-validation, and they discuss the nonparametric behavior of an optimal selection of the level of robustness.
@chen2018regression study a [DRO]{} approach to linear regression using an $\ell_{1}$-norm cost function, where the ambiguity set of distributions is formed via $p$-Wasserstein metric utilizing an arbitrary norm. They show that this [DRO]{} formulation can be relaxed to a convex optimization problem. By selecting proper norm spaces for the Wasserstein metric, they are able to recover several commonly used regularized regression models. They establish performance guarantees on both the out-of-sample behavior (prediction bias) and the discrepancy between the estimated and true regression planes (estimation bias), which elucidate the role of the regularizer. They study the application of the proposed model to outlier detection, arising from abnormally high radiation exposure in CT exams, and show that it achieves higher performance than M-estimation [@huber2009RobustStat].
#### Choice of the Transportation Cost
When forming a Wasserstein ambiguity set, the transportation cost function $c(\cdot, \cdot)$ must be chosen in addition to the nominal probability measure $P_{0}$ and the size of the ambiguity set $\epsilon$. @blanchet2017transport propose a comprehensive approach for designing the ambiguity set in a data-driven way, using the role of the transportation cost $c(\cdot,\cdot)$ in the definition of the $p$-Wasserstein metric. They apply various metric-learning procedures to estimate $c(\cdot,\cdot)$ from the training data, where they associate a relatively high transportation cost to two locations if transporting mass between these locations substantially impacts performance. This mechanism induces enhanced out-of-sample performance by focusing on regions of relevance, while improving the generalization error. Moreover, this approach connects the metric-learning procedure to the estimation of the parameters of adaptive regularized estimators. They select the level of robustness using cross-validation. @blanchet2017doubly propose a data-driven robust optimization approach to optimally inform the transportation cost in the definition of the $p$-Wasserstein metric. This additional layer of robustification within a suitable parametric family of transportation costs does not exist in the metric-learning approach proposed in @blanchet2017transport, and it makes it possible to enhance the generalization properties of regularized estimators while reducing the variability in the out-of-sample performance error.
#### Multistage Setting
The single- and two-stage stochastic programs in @pflug2014 are extended in @analui2014 and @pflug2014 to the multistage case, where the reference data and information structure are represented as a tree. In these papers it is assumed that the tree structure and scenario values are fixed, while the probabilities vary only within an ambiguous neighborhood of the reference model, defined by the multistage [*nested distance*]{}, formed via the Wasserstein metric. Both papers further apply their results to a multiperiod production/inventory control problem. Building on the above results, @glanzer2018 show that a scenario tree can be constructed out of data such that it converges (in terms of the nested distance) to the true model in probability at an exponential rate. @glanzer2018 also study a [DRO]{} framework formed via the nested distance that allows for setting up bid and ask prices for acceptability pricing of contingent claims. Another study of multistage linear optimization can be found in @baziermatte2018.
### Phi-Divergences {#sec: rev.phi}
Another popular way to model the distributional ambiguity is to use [*$\phi$-divergences*]{}, a class of discrepancy measures used in information theory. A $\phi$-divergence measures the discrepancy between two probability measures $P_{1}, P_{2} \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$ as ${\mathfrak{d}}^{\phi}(P_{1}, P_{2}):=\int_{\Xi}\phi\left(\frac{d P_{1}}{d P_{2}}\right) d P_{2}$[^16], where the $\phi$-divergence function $\phi : {\mathbb{R}}_{+} \rightarrow {\mathbb{R}}_{+} \cup \{+ \infty\}$ is convex and satisfies the following properties: $\phi(1)=0$[^17], $0\phi\left(\frac{0}{0}\right):=0$, and $a\phi\left(\frac{a}{0}\right):=a \lim_{t \rightarrow \infty} \frac{\phi(t)}{t}$ if $a>0$. Note that a $\phi$-divergence does not necessarily induce a metric on the underlying space. For detailed information on $\phi$-divergences, we refer to @read1988 [@vajda1989; @pardo2005].
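For a finite sample space, the integral in the definition above reduces to a weighted sum, which makes the conventions easy to see in code. The sketch below (hypothetical helper names, assuming NumPy) evaluates ${\mathfrak{d}}^{\phi}(P_{1},P_{2})$ for discrete distributions, using the convention $0\phi(0/0):=0$ and, for simplicity, requiring $P_{1}$ to be absolutely continuous with respect to $P_{2}$:

```python
import numpy as np

def phi_divergence(p1, p2, phi):
    """Evaluate d^phi(P1, P2) = sum_s phi(p1[s] / p2[s]) * p2[s] for two
    discrete distributions on a common finite support.  Terms with
    p2[s] = 0 use the convention 0 * phi(0/0) := 0; this sketch requires
    p1[s] = 0 whenever p2[s] = 0 (absolute continuity)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    if np.any(p1[p2 == 0] > 0):
        raise ValueError("P1 must be absolutely continuous w.r.t. P2")
    mask = p2 > 0
    return float(np.sum(phi(p1[mask] / p2[mask]) * p2[mask]))

# Two entries of Table [T: rev.phi]:
def phi_kl(t):
    """Kullback-Leibler: t log t - t + 1, with 0 log 0 := 0 (so phi_kl(0) = 1)."""
    t = np.asarray(t, float)
    return np.where(t > 0, t * np.log(np.where(t > 0, t, 1.0)) - t + 1.0, 1.0)

def phi_mc(t):
    """Modified chi-square distance: (t - 1)^2."""
    return (np.asarray(t, float) - 1.0) ** 2
```

By convexity of $\phi$ and $\phi(1)=0$, both divergences vanish when $P_{1}=P_{2}$ and are nonnegative otherwise (Jensen's inequality).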
A $\phi$-divergence can be used to model the distributional ambiguity as follows: $$\label{eq: rev.phi_set}
{\mathcal{P}}^{\phi}(P_{0};\epsilon):= \sset*{P \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}}{ {\mathfrak{d}}^{\phi}(P, P_{0}) \le \epsilon},$$ where as before $P_{0}$ is a nominal probability measure and $\epsilon$ controls the size of the ambiguity set. Table \[T: rev.phi\] presents a list of commonly used $\phi$-divergence functions in [DRO]{} and their conjugate functions $\phi^{*}$.
Before we review the papers that model the distributional ambiguity via the $\phi$-divergences, we present a duality result on $\sup_{P \in {\mathcal{P}}^{\phi}(P_{0};\epsilon)} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$.
\[thm: rev.phi\_duality\] Suppose that $\epsilon >0$ in the definition of ${\mathcal{P}}^{\phi}(P_{0};\epsilon)$. Then, for a fixed ${\boldsymbol{x}} \in {\mathcal{X}}$, we have $$\sup_{P \in {\mathcal{P}}^{\phi}(P_{0};\epsilon)} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} = \inf_{(\lambda, \mu) \in \Lambda_{\phi,h({\boldsymbol{x}}, \cdot)} } \ \left\lbrace \mu + \lambda \epsilon + \int_{\Xi} (\lambda\phi)^{*}( h({\boldsymbol{x}}, s) -\mu ) P_{0}(ds) \right\rbrace,$$ where $\Lambda_{\phi,h({\boldsymbol{x}}, \cdot)}:=\sset*{(\lambda, \mu)}{\lambda \ge 0, \ h({\boldsymbol{x}}, s) -\mu -\lambda \lim_{t \rightarrow \infty} \frac{\phi(t)}{t} \le 0, \forall s \in \Xi}$, with the interpretation that $(\lambda\phi)^{*}(a)=\lambda\phi^{*}(\frac{a}{\lambda})$ for $\lambda > 0$. Here, $(0\phi)^{*}(a)=0\phi^{*}(\frac{a}{0})$, which equals $0$ if $a\le 0$ and $+\infty$ if $a>0$.
The above result can be obtained by taking the Lagrangian dual of $\sup_{P \in {\mathcal{P}}^{\phi}(P_{0};\epsilon)} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$, and we refer the readers to @ben2013 [@bayraksan2015; @love2013] for a detailed derivation.
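As a concrete instance of this dual, for the Kullback-Leibler divergence ($\phi^{*}(a)=e^{a}-1$) the minimization over $\mu$ can be carried out in closed form, leaving the one-dimensional problem $\min_{\lambda>0}\ \lambda\epsilon+\lambda\log{\mathbb{E}}_{P_{0}}[e^{h/\lambda}]$. A minimal sketch on a finite support (hypothetical names, assuming NumPy and SciPy are available):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

def kl_worst_case_expectation(h, p0, eps):
    """Worst-case expectation sup { E_P[h] : d^kl(P, P0) <= eps } on a
    finite support, via the one-dimensional dual
        min_{lam > 0}  lam * eps + lam * log E_P0[exp(h / lam)].
    Optimizing over log(lam) keeps lam positive; logsumexp keeps the
    moment-generating term numerically stable."""
    h, p0 = np.asarray(h, float), np.asarray(p0, float)

    def dual(log_lam):
        lam = np.exp(log_lam)
        # logsumexp(h/lam, b=p0) = log sum_s p0[s] * exp(h[s]/lam)
        return lam * eps + lam * logsumexp(h / lam, b=p0)

    res = minimize_scalar(dual, bounds=(-10.0, 10.0), method="bounded")
    return float(res.fun)
```

The dual value lies between the nominal expectation ${\mathbb{E}}_{P_{0}}[h]$ (the limit as $\epsilon \to 0$) and $\max_{s} h(s)$, and it is nondecreasing in $\epsilon$.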
| Divergence | $\phi(t), \; t \ge 0$ | ${\mathfrak{d}}^{\phi}(P_{1}, P_{2})$ | $\phi^{*}(a)$ | [DRO]{} Counterpart |
|---|---|---|---|---|
| Kullback-Leibler $\phi_{\text{kl}}(t)$ | $t \log t - t + 1$ | $\int_{\Xi} \log\left(\frac{d P_{1}}{d P_{2}}\right) d P_{1}$ | $e^{a}-1$ | Convex program |
| Burg entropy $\phi_{\text{b}}(t)$ | $-\log t + t - 1$ | $\int_{\Xi} \log\left(\frac{d P_{2}}{d P_{1}}\right) d P_{2}$ | $-\log(1-a), \; a<1$ | Convex program |
| $J$-divergence $\phi_{\text{j}}(t)$ | $(t-1) \log t$ | $\int_{\Xi} \log\left(\frac{d P_{1}}{d P_{2}}\right) (d P_{1} - d P_{2})$ | No closed form | Convex program |
| $\chi^{2}$-distance $\phi_{\text{c}}(t)$ | $\frac{1}{t}(t-1)^{2}$ | $\int_{\Xi} \frac{(d P_{1} - d P_{2})^{2}}{d P_{1}}$ | $2-2\sqrt{1-a}, \; a<1$ | SOCP |
| Modified $\chi^{2}$-distance $\phi_{\text{mc}}(t)$ | $(t-1)^{2}$ | $\int_{\Xi} \frac{(d P_{1} - d P_{2})^{2}}{d P_{2}}$ | $-1$ if $a<-2$; $a+\frac{a^{2}}{4}$ if $a \ge -2$ | SOCP |
| Hellinger distance $\phi_{\text{h}}(t)$ | $(\sqrt{t}-1)^{2}$ | $\int_{\Xi} (\sqrt{d P_{1}} - \sqrt{d P_{2}})^{2}$ | $\frac{a}{1-a}, \; a<1$ | SOCP |
| $\chi$-divergence of order $\theta>1$ $\phi_{\text{ca}^{\theta}}(t)$ | $|t-1|^{\theta}$ | $\int_{\Xi} \left|1- \frac{d P_{1}}{d P_{2}}\right|^{\theta} d P_{2}$ | $a+(\theta-1)\left(\frac{|a|}{\theta}\right)^{\frac{\theta}{\theta-1}}$ | SOCP |
| Variation distance $\phi_{\text{v}}(t)$ | $|t-1|$ | $\int_{\Xi} |d P_{1} - d P_{2}|$ | $-1$ if $a\le -1$; $a$ if $-1 \le a \le 1$ | LP |
| Cressie-Read $\phi_{\text{cr}^{\theta}}(t)$ | $\frac{1-\theta+\theta t - t^{\theta}}{\theta(1-\theta)}$ | $\frac{1}{\theta(1-\theta)}\left(1-\int_{\Xi} d P_{1}^{\theta} \, d P_{2}^{1-\theta}\right)$ | $\frac{1}{\theta}\big(1-a(1-\theta)\big)^{\frac{\theta}{1-\theta}}-\frac{1}{\theta}, \; a < \frac{1}{1-\theta}$ | SOCP |
: Examples of $\phi$-divergence functions, their conjugates $\phi^{*}(a)$, and their [DRO]{} counterparts
\[T: rev.phi\]
The robust counterpart of linear and nonlinear optimization problems with an uncertainty set of parameters defined via a general $\phi$-divergence is studied in @ben2013. As presented in Table \[T: rev.phi\], when the uncertain parameter is a finite-dimensional probability vector, the robust counterpart is tractable for most choices of $\phi$-divergence function considered in the literature. The use of $\phi$-divergences to model the distributional ambiguity in [DRO]{} is systematically introduced in @bayraksan2015 and @love2015. To elucidate the use of $\phi$-divergences for models with different sources of data and decision makers with different risk preferences, they present a classification of $\phi$-divergences based on the notions of [*suppressing*]{} and [*popping*]{} a scenario: a scenario with a positive nominal probability that ends up having a zero worst-case probability is suppressed, whereas a scenario with a zero nominal probability that ends up having a positive worst-case probability is popped. These notions give rise to four categories of $\phi$-divergences. For example, they show that the variation distance can both suppress and pop scenarios, while the Kullback-Leibler divergence can only suppress scenarios. Furthermore, they analyze the value of data and propose a decomposition algorithm to solve the dual of the resulting [DRO]{} model formed via a general $\phi$-divergence.
Motivated by the difficulty in choosing the ambiguity set and the fact that all probability distributions in the set are treated equally (while those outside the set are completely ignored), @ben2010soft propose to minimize the expected cost under the nominal distribution while bounding from above the maximum expected cost over an infinite nested family of ambiguity sets, parameterized by $\epsilon$. More specifically, they allow a varying level of feasibility for each family of probability distributions, where the maximum allowed expected cost for distributions in a set with parameter $\epsilon$ is proportional to $\epsilon$. They refer to this approach as [*soft robust optimization*]{} and relate the feasibility region induced by this approach to convex risk measures. They illustrate that the ambiguity sets formed via $\phi$-divergences are related to an optimized certainty equivalent risk measure formed via $\phi$-functions [@bental2007OCE]. Furthermore, they show that the complexity of the soft robust approach is equivalent to that of solving a small number of standard [DRO]{} (i.e., [DRO]{} with one ambiguity set) problems. In fact, by showing that the standard [DRO]{} objective is concave in $\epsilon$, they solve the soft robust model by a bisection method. They also investigate how much larger a feasible region the soft robust approach can yield compared to the standard [DRO]{} approach, without compromising the objective value. Furthermore, they study the downside probability guarantees implied by both the soft robust and standard robust approaches. They also apply their results to portfolio optimization and asset allocation problems.
A data-driven [DRO]{} approach to chance-constrained problems modeled via $\phi$-divergences is studied in @yanikoglu2012. They propose safe approximations to these ambiguous chance constraints. Their approach is capable of handling joint chance constraints, dependent uncertain parameters, and a general nonlinear function ${\boldsymbol{g}}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$.
@hu2013ambiguous and @jiang2016chance show that distributionally robust chance-constrained programs formed via $\phi$-divergences can be transformed into a chance-constrained problem under the nominal distribution but with an adjusted risk level. For a general $\phi$-divergence, a bisection line search algorithm to obtain the adjusted risk level is proposed in @hu2013ambiguous [@jiang2016chance]. In addition, closed-form expressions for the adjusted risk level are obtained for the case of the variation distance (see @hu2013ambiguous and @jiang2016chance), and for the Kullback-Leibler divergence and the $\chi^2$-distance (see @jiang2016chance). For ambiguous probabilistic programs formed via $\phi$-divergences, results similar to those for the chance-constrained programs are shown in @hu2013ambiguous: the ambiguous probability minimization problem can be transformed into a corresponding problem under the nominal distribution. In particular, they show that these problems have the same complexity as the corresponding pure probabilistic programs.
#### Statistical Learning
@hu2018 study distributionally robust supervised learning, where the ambiguity set of distributions is formed via $\phi$-divergences. They prove that such a [DRO]{} model for a classification problem gives a classifier that is optimal for the training-set distribution rather than being robust against all distributions in the ambiguity set. They argue that this pessimism stems from two sources: the particular losses used in classification and the over-conservatism of the ambiguity set formed via $\phi$-divergences. Motivated by this observation, they propose an ambiguity set that incorporates prior expert structural information on the distribution. More precisely, they introduce a latent variable with a prior distribution; while this prior can vary within the ambiguity set, the joint distribution of the data conditioned on the latent variable is left intact. @duchi2016 show that the inner problem of a data-driven [DRO]{} formed around the empirical distribution, with $\epsilon=\frac{\chi^{2}_{1, 1-\alpha}}{N}$, has an almost-sure asymptotic expansion. Such an expansion is equivalent to the expected cost under the empirical distribution plus a regularization term that accounts for the standard deviation of the objective function. They also show that the set of optimal solutions of the [DRO]{} model converges to that of the stochastic program under the true underlying distribution, provided that $h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$ is lower semicontinuous.
#### Specific $\phi$-Divergences
In this section, we review papers that consider specific $\phi$-divergences.
##### Kullback-Leibler Divergence
@calafiore2007 investigates the optimal robust portfolio and worst-case distribution for a data-driven distributionally robust portfolio optimization problem with a mean-risk objective. Motivated by the application, the variance and the absolute deviation are considered as measures of risk.
@hu2012kullback study a variety of distributionally robust optimization problems, where the ambiguity is in either the objective function or the constraints. They show that the ambiguous chance-constrained problem can be reformulated as a chance-constrained problem under the nominal distribution but with an adjusted risk level. They further show that when the chance safe region is bi-affine in ${\boldsymbol{x}}$ and ${\tilde{{\boldsymbol{\xi}}}}$[^18], and the nominal distribution belongs to the exponential family of distributions, both the nominal and the worst-case distribution belong to the same family.
@blanchet2018structural study a [DRO]{} approach to extreme value analysis in order to estimate tail distributions and, consequently, extreme quantiles. They form the ambiguity set of distributions via the class of Rényi divergences [@pardo2005], which includes the Kullback-Leibler divergence as a special case[^19]. The Kullback-Leibler divergence is also used for the [DRO]{} approach to hypothesis testing in @levy2009 [@gul2017; @gul2017asymptotically].
##### Burg Entropy
@wang2016 model the distributional ambiguity via the Burg entropy to consider all probability distributions that make the observed data achieve a certain level of likelihood. They present statistical analyses of their model using Bayesian statistics and empirical likelihood theory. To test the performance of the model, they apply it to the newsvendor problem and the portfolio selection problem.
@wiesemann2013 study Markov decision processes where the transition kernel is ambiguous. They use the Burg entropy to construct a confidence region that contains the unknown probability distribution with a high probability, based on an observation history. It is shown in @lam2016 that a [DRO]{} model formed via the Burg entropy around the empirical distribution of the data gives rise to a confidence bound on the expected cost that recovers the exact asymptotic statistical guarantees provided by the Central Limit Theorem.
##### $\chi^2$-Distance
@hanasusanto2013 propose a robust data-driven dynamic programming approach that replaces the expectations in the dynamic programming recursions with worst-case expectations over an ambiguity set of distributions. Their motivation for such a scheme is to mitigate the poor out-of-sample performance of data-driven dynamic programming under sparse training data. The proposed method combines convex parametric function approximation (to model the dependence on the endogenous state) with a nonparametric kernel regression method (to model the dependence on the exogenous state). They establish conditions under which the resulting [DRO]{} model, formed via the $\chi^2$-distance, reduces to a tractable conic program. They apply their results to problems arising in index tracking and wind energy commitment applications. @klabjan2013 study optimal inventory control for a single-item multiperiod periodic review stochastic lot-sizing problem under uncertain demand, where the distributional ambiguity is modeled via the $\chi^{2}$-distance. They show that the resulting model generalizes the Bayesian model, and that it can be interpreted as minimizing demand-history-dependent risk measures.
##### Modified $\chi^2$-Distance
A [*stochastic dual dynamic programming*]{} (SDDP) approach to solve a distributionally robust multistage optimization model formed via the modified $\chi^{2}$-distance is proposed in @philpott2018.
##### Variation Distance
Variation distance, or $\ell_{1}$-norm, as defined in Table \[T: rev.phi\], can be used to safely approximate several ambiguity sets formed via $\phi$-divergences, including $\chi$-divergence of order 2, $J$-divergence, Kullback-Leibler divergence, and Hellinger distance. The following lemma states the above result more formally.
\[lem: rev.TV\] The following relationship holds between $\phi$-divergences, as defined in Table \[T: rev.phi\]: $$\label{eq: rev.TV}
\frac{1}{4}\big({\mathfrak{d}}^{\phi_{\text{v}}}(P ,P_{0}) \big)^{2} \le {\mathfrak{d}}^{\phi_{\text{h}}}(P , P_{0}) \le {\mathfrak{d}}^{\phi_{\text{kl}}}(P , P_{0}) \le {\mathfrak{d}}^{\phi_{\text{j}}}(P, P_{0}) \le {\mathfrak{d}}^{\phi_{\text{ca}^{2}}}(P , P_{0}),$$ which implies $$\label{eq: rev.TV_set}
{\mathcal{P}}^{\phi_{\text{ca}^{2}}}(P_{0}; \epsilon) \subseteq {\mathcal{P}}^{\phi_{\text{j}}}(P_{0} ;\epsilon) \subseteq {\mathcal{P}}^{\phi_{\text{kl}}}(P_{0}; \epsilon) \subseteq {\mathcal{P}}^{\phi_{\text{h}}}(P_{0}; \epsilon) \subseteq {\mathcal{P}}^{\phi_{\text{v}}}(P_{0};2\epsilon^{\frac{1}{2}}).$$
The first two inequalities can be found in, e.g., @reiss1989 [p. 99][^20], and the last two inequalities can be found in, e.g., @jiang2016 [Lemma 1]. The set inclusions then follow directly from the inequalities between the divergences.
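For discrete distributions all five divergences in the chain are finite sums, so the lemma can be checked numerically. A minimal sketch (hypothetical variable names, assuming NumPy and two fixed full-support distributions):

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])       # example distribution
q = np.array([1.0, 1.0, 1.0]) / 3   # full-support reference distribution

d_v   = np.sum(np.abs(p - q))                   # variation distance
d_h   = np.sum((np.sqrt(p) - np.sqrt(q))**2)    # Hellinger distance
d_kl  = np.sum(p * np.log(p / q))               # Kullback-Leibler divergence
d_j   = np.sum((p - q) * np.log(p / q))         # J-divergence
d_ca2 = np.sum((p - q)**2 / q)                  # chi-divergence of order 2

# The chain (1/4) d_v^2 <= d_h <= d_kl <= d_j <= d_ca2:
chain = [0.25 * d_v**2, d_h, d_kl, d_j, d_ca2]
assert all(a <= b + 1e-12 for a, b in zip(chain, chain[1:]))
```

For this pair of distributions the chain evaluates to approximately $0.028 \le 0.035 \le 0.069 \le 0.139 \le 0.140$; the last inequality is nearly tight.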
### Total Variation Distance
For two probability measures $P_{1}$, $P_{2} \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$, the total variation distance is defined as ${\mathfrak{d}}^{\text{TV}}(P_{1},P_{2}):=\sup_{A \in {\mathcal{F}}} \ |P_{1}(A)-P_{2}(A)|$. When $P_{1}$ and $P_{2}$ are absolutely continuous with respect to a measure $\nu \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$, with Radon-Nikodym derivatives $f_{1}$ and $f_{2}$, respectively, then ${\mathfrak{d}}^{\text{TV}}(P_{1},P_{2})=\frac{1}{2} \int_{\Xi} |f_{1}(s)-f_{2}(s)| \nu(ds)$. Note that the total variation distance can be obtained from other classes of probability metrics: (1) it is a $\phi$-divergence with $\phi(t)=\frac{1}{2}|t-1|$, (2) it is half of the $\ell_{1}$-norm, and (3) it is obtained from the optimal transport discrepancy with $$c(s_{1},s_{2})=\begin{cases} 0, & \text{if} \ s_{1}=s_{2},\\
1, & \text{if} \ s_{1} \neq s_{2}.
\end{cases}$$ The total variation distance can be used to model the distributional ambiguity as follows: $$\label{eq: rev.tv_set}
{\mathcal{P}}^{\text{TV}}(P_{0};\epsilon):= \sset*{P \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}}{ {\mathfrak{d}}^{\text{TV}}(P,P_{0}) \le \epsilon},$$ where as before $P_{0}$ is a nominal probability measure and $\epsilon$ controls the size of the ambiguity set.
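On a small finite sample space, the equivalence between the event-based definition $\sup_{A \in {\mathcal{F}}}|P_{1}(A)-P_{2}(A)|$ and the density-based formula $\frac{1}{2}\int_{\Xi}|f_{1}-f_{2}|\,d\nu$ can be verified by brute-force enumeration of all events. A minimal sketch (hypothetical names, assuming NumPy):

```python
import itertools
import numpy as np

def tv_by_events(p1, p2):
    """sup_{A} |P1(A) - P2(A)| by enumerating all 2^n events of a finite
    sample space (exponential in n -- for illustration only)."""
    n = len(p1)
    best = 0.0
    for r in range(n + 1):
        for A in itertools.combinations(range(n), r):
            best = max(best, abs(sum(p1[i] - p2[i] for i in A)))
    return best

def tv_by_density(p1, p2):
    """(1/2) * sum_s |p1[s] - p2[s]|: the Radon-Nikodym form with respect
    to the counting measure."""
    return 0.5 * float(np.sum(np.abs(np.asarray(p1) - np.asarray(p2))))
```

The supremum is attained at $A^{*}=\{s : f_{1}(s) > f_{2}(s)\}$, which is why the two formulas agree.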
The total variation distance between $P_{1}$ and $ P_{2}$ is also related to the [*one-sided*]{} variation distances $\frac{1}{2} \int_{\Xi} (f_{1}(s)-f_{2}(s))_{+} \nu(ds)$ and $\frac{1}{2} \int_{\Xi} (f_{2}(s)-f_{1}(s))_{+} \nu(ds)$ [@rahimian2019], which are $\phi$-divergences with $\phi(t)=\frac{1}{2}(t-1)_{+}$ and $\phi(t)=\frac{1}{2}(1-t)_{+}$, respectively. However, unlike the total variation distance, the one-sided variation distances are not probability metrics.
Before we review the papers that model the distributional ambiguity via the total variation distance, we present a duality result on $\sup_{P \in {\mathcal{P}}^{\text{TV}}(P_{0};\epsilon)} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$.
[(@jiang2018 [Theorems 1–2], @rahimian2019 [Proposition 3], @shapiro2017DRSP)]{} \[thm: rev.tv\_duality\] For a fixed ${\boldsymbol{x}} \in {\mathcal{X}}$, we have $$\begin{split}
\sup_{P \in {\mathcal{P}}^{\text{TV}}(P_{0};\epsilon)} \ & {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \\
& {}= \begin{cases}
{\mathbb{E}_{P_{0}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}, & \epsilon=0,\\
\epsilon \ \nu\textrm{-}\operatorname*{ess\,sup}_{s \in \Xi} h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}(s)) + (1-\epsilon) {\mathrm{CVaR}^{P_{0}}_{\epsilon} \left[ h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \right]}, & 0<\epsilon<1,\\
\nu\textrm{-}\operatorname*{ess\,sup}_{s \in \Xi} h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}(s)), & \epsilon\ge 1,
\end{cases}
\end{split}$$ where $\nu\textrm{-}\operatorname*{ess\,sup}_{s \in \Xi} h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}(s))=\inf\Big\{a \in {\mathbb{R}}: \nu\big\{s \in \Xi: h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}(s))>a\big\}=0 \Big\}$.
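On a finite sample space the middle case of this theorem can be evaluated directly: the essential supremum is a maximum, and ${\mathrm{CVaR}^{P_{0}}_{\epsilon}}[h]$ is the average of $h$ over its worst $1-\epsilon$ probability mass, i.e., $\min_{t}\{t+\frac{1}{1-\epsilon}{\mathbb{E}}_{P_{0}}[(h-t)_{+}]\}$, so that the formula interpolates between ${\mathbb{E}}_{P_{0}}[h]$ at $\epsilon=0$ and $\max h$ at $\epsilon=1$. A sketch with hypothetical names (assuming NumPy and this tail-average convention for CVaR):

```python
import numpy as np

def cvar(h, p, eps):
    """CVaR_eps^P[h] as the expectation of h over its worst (1 - eps)
    probability mass (so cvar(., ., 0) = E_P[h])."""
    order = np.argsort(h)[::-1]     # largest outcomes first
    tail = 1.0 - eps                # probability mass to average over
    acc, val = 0.0, 0.0
    for hi, pi in zip(h[order], p[order]):
        take = min(pi, tail - acc)
        val += take * hi
        acc += take
        if acc >= tail - 1e-15:
            break
    return val / tail

def tv_worst_case(h, p0, eps):
    """Worst-case E_P[h] over the total variation ball of radius eps,
    via the closed form of Theorem [thm: rev.tv_duality]."""
    h, p0 = np.asarray(h, float), np.asarray(p0, float)
    if eps <= 0.0:
        return float(p0 @ h)
    if eps >= 1.0:
        return float(h.max())
    return eps * float(h.max()) + (1.0 - eps) * cvar(h, p0, eps)
```

Intuitively, an adversary within the ball moves up to $\epsilon$ probability mass from the lowest outcomes of $h$ onto its essential supremum.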
[(@rahimian2019 [Proposition 3], @shapiro2017DRSP)]{} Let ${\mathcal{P}}^{\text{OTV}}(P_{0};\epsilon)$ denote the ambiguity set formed via either of the one-sided variation distances. Then, for a fixed ${\boldsymbol{x}} \in {\mathcal{X}}$, $\sup_{P \in {\mathcal{P}}^{\text{OTV}}(P_{0};\frac{\epsilon}{2})} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$ is given by the right-hand side of the result in Theorem \[thm: rev.tv\_duality\].
@jiang2018 study distributionally robust two-stage stochastic programs formed via the total variation distance. They discuss how to find the nominal probability distribution and analyze the convergence of the problem to the corresponding stochastic program under the true unknown probability distribution. @rahimian2019 study distributionally robust convex optimization problems with a finite sample space and investigate how the uncertain parameters affect the optimization. To do so, they define the notions of “effective” and “ineffective” scenarios. According to their definitions, a subset of scenarios is effective if its removal from the support of the worst-case distribution, by forcing the corresponding probabilities to zero in the ambiguity set, changes the optimal value of the [DRO]{} problem. They propose easy-to-check conditions to identify the effective and ineffective scenarios when the distributional ambiguity is modeled via the total variation distance. @rahimian2019NV extend the work of @rahimian2019 to distributionally robust newsvendor problems with a continuous sample space. They derive a closed-form expression for the optimal solution and identify the maximal effective subsets of demands.
### Goodness-of-Fit Test
@postek2016 review and derive computationally tractable reformulations of distributionally robust risk constraints over discrete probability distributions for various risk measures and ambiguity sets formed using statistical goodness-of-fit tests or probability metrics, including $\phi$-divergences, Kolmogorov-Smirnov, Wasserstein, Anderson-Darling, Cramer-von Mises, Watson, and Kuiper. They exemplify the results in portfolio optimization and antenna array design problems. @bertsimas2018RO and @bertsimas2018SAA propose a systematic view of how to choose a statistical goodness-of-fit test to construct an ambiguity set of distributions that guarantees the implication (recall Theorem \[thm: rev.chanceDRO\]). They consider the situations that (i) ${{\mathbbmtt{P}}^{\text{true}}}=P^{\text{true}} \circ {\tilde{{\boldsymbol{\xi}}}}^{-1}$ may have continuous support, and the components of ${\tilde{{\boldsymbol{\xi}}}}$ are independent, (ii) ${{\mathbbmtt{P}}^{\text{true}}}$ may have continuous support, and data are drawn from its marginal distributions asynchronously, and (iii) ${{\mathbbmtt{P}}^{\text{true}}}$ may have continuous support, and data are drawn from its joint distribution. They also study a wide range of statistical hypothesis tests, including the $\chi^{2}$, G, Kolmogorov-Smirnov, Kuiper, Cramer-von Mises, Watson, and Anderson-Darling goodness-of-fit tests, and they characterize the geometric shape of the corresponding ambiguity sets.
### Prohorov Metric
For two probability measures $P_{1}, P_{2} \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$, the Prohorov metric is defined as $${\mathfrak{d}}^{\text{p}}(P_{1},P_{2}):=\inf \sset*{\gamma >0 }{P_{1}\{A\} \le P_{2}\{A^{\gamma}\}+\gamma \ \text{and} \ P_{2}\{A\} \le P_{1}\{A^{\gamma}\}+\gamma \; \forall A \in {\mathcal{F}}},$$ where $A^{\gamma}:=\sset*{s \in \Xi}{\inf_{s^{\prime} \in A} \ d(s,s^{\prime}) \le \gamma}$ [@gibbs2002]. The Prohorov metric takes values in $[0,1]$ and can be used to model the distributional ambiguity as follows: $$\label{eq: rev.prohorov_set}
{\mathcal{P}}^{\text{p}}(P_{0};\epsilon) :=
\sset*{P \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}}{ {\mathfrak{d}}^{\text{p}}(P,P_{0}) \le \epsilon},$$ where as before $P_{0}$ is a nominal probability measure and $\epsilon$ controls the size of the ambiguity set. A specialization of the Prohorov metric to univariate distributions is the [*Lévy*]{} metric, defined as [@gibbs2002] $$\begin{split}
{\mathfrak{d}}^{\text{L}}&(P_{1},P_{2}) :=\\
& \inf \sset*{\gamma >0 }{P_{2}\{(-\infty,t-\gamma]\} -\gamma \le P_{1}\{(-\infty,t]\} \le P_{2}\{(-\infty,t+\gamma]\} + \gamma, \; \forall t \in {\mathbb{R}}}.
\end{split}$$ The Lévy metric can be used to model the distributional ambiguity as follows: $$\label{eq: rev.levy_set}
{\mathcal{P}}^{\text{L}}(P_{0};\epsilon) := \sset*{P \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}}{ {\mathfrak{d}}^{\text{L}}(P,P_{0}) \le \epsilon}.$$ @erdougan2006 study an optimization problem subject to a set of parameterized convex constraints. Similar to the argument in Section \[sec: rev.rel\_chance\], they study a [DRO]{} approach to this problem, where the distributional ambiguity is modeled by the Prohorov metric. They also consider a scenario approximation scheme of the problem. By extending the work of [@campi2004; @calafiore2005], they provide an upper bound on the number of samples required to guarantee that the sampled problem is a good approximation for the associated ambiguous chance-constrained problem with a high probability.
### Lp-Norm
@calafiore2006 study distributionally robust individual linear chance-constrained problems, and provide convex conditions that guarantee the satisfaction of the chance constraint within the family of radially symmetric nonincreasing densities whose supports are defined by means of the $\ell_{1}$- and $\ell_{\infty}$-norms[^21]. @mevissen2013 study distributionally robust polynomial optimization, where the distribution of the uncertain parameter is estimated using polynomial basis functions via the $\ell_{p}$-norm. They show that the optimal value of the problem is the limit of a sequence of tractable SDP relaxations of polynomial optimization problems. They also provide a finite-sample consistency guarantee for the data-driven uncertainty sets, and an asymptotic guarantee on the solutions of the SDP relaxations. They apply their techniques to a water network optimization problem.
@jiang2018 study distributionally robust two-stage stochastic programs formed via the $\ell_{\infty}$-norm. @huang2017 extend the work of @jiang2018 to the multistage setting. They reformulate the problem so that the objective function of each stage contains a convex combination of expectation and CVaR, which removes the nested multistage minmax structure. They analyze the convergence of the resulting [DRO]{} problem to the corresponding multistage stochastic program under the true unknown probability distribution. They test their results on a hydrothermal scheduling problem.
### Zeta-Structure Metrics
Consider $P_{1}, P_{2} \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$ and let ${\mathcal{Z}}$ be a family of real-valued measurable functions $z: {\left( {\mathbb{R}}^{d}, {\mathfrak{B}}({\mathbb{R}}^{d}) \right)} \mapsto ({\mathbb{R}},{\mathfrak{B}}({\mathbb{R}}))$. The $\zeta$-structure metric is defined as ${\mathfrak{d}}^{{\mathcal{Z}}}(P_{1},P_{2}):=\sup_{ z \in {\mathcal{Z}}} \Big| {\mathbb{E}_{P_{1}} \left[ z({\tilde{{\boldsymbol{\xi}}}}) \right]} - {\mathbb{E}_{P_{2}} \left[ z({\tilde{{\boldsymbol{\xi}}}}) \right]} \Big|$. A wide range of metrics in probability theory can be written as special cases of the above family of metrics [@zhao2015; @pichler2017]. Let us introduce them below.
- [*Total variation metric*]{} ${\mathfrak{d}}^{\text{TV}}(P_{1},P_{2})$: $${\mathcal{Z}}=\sset*{z}{\|z\|_{\infty} \le 1},$$ where $\|z\|_{\infty}= \sup_{{\boldsymbol{\xi}} \in \Omega } \ |z({\boldsymbol{\xi}})|$.
- [*Bounded Lipschitz metric*]{} ${\mathfrak{d}}^{\text{BL}}(P_{1},P_{2})$: $${\mathcal{Z}}=\sset*{z}{\|z\|_{\infty} \le 1, \; z \ \text{is Lipschitz continuous}, \; L_{1}(z) \le 1 },$$ where $L_{1}(z):= \sup \sset*{|z({\boldsymbol{u}})-z({\boldsymbol{v}})|/d({\boldsymbol{u}},{\boldsymbol{v}})}{ {\boldsymbol{u}} \neq {\boldsymbol{v}} }$ is the Lipschitz modulus.
- [*Kantorovich metric*]{} ${\mathfrak{d}}^{\text{K}}(P_{1},P_{2})$: $${\mathcal{Z}}=\sset*{z}{z \; \text{is Lipschitz continuous}, \; L_{1}(z) \le 1 }.$$
- [*Fortet-Mourier metric*]{} ${\mathfrak{d}}^{\text{FM}}(P_{1},P_{2})$: $${\mathcal{Z}}=\sset*{z}{z \; \text{is Lipschitz continuous}, \; L_{q}(z) \le 1},$$ where $$\begin{split}
L_{q}& (z):= \\
& \inf \sset*{L}{ |z({\boldsymbol{u}})-z({\boldsymbol{v}})| \le L \cdot d({\boldsymbol{u}},{\boldsymbol{v}}) \cdot \max(1, \|{\boldsymbol{u}}\|^{q-1}, \|{\boldsymbol{v}}\|^{q-1}), \forall {\boldsymbol{u}},{\boldsymbol{v}} \in {\mathbb{R}}^{d} },
\end{split}$$ with $\|\cdot\|$ as the Euclidean norm. Note that when $q=1$, Fortet-Mourier metric is the same as the Kantorovich metric.
- [*Uniform (Kolmogorov) metric*]{} ${\mathfrak{d}}^{\text{U}}(P_{1},P_{2})$: $${\mathcal{Z}}=\sset*{z}{z=\mathbbm{1}_{(-\infty, t]}, \; t \in {\mathbb{R}}^{n} }.$$
The class of $\zeta$-structure metrics may be used to model the distributional ambiguity as follows: $$\label{eq: rev.zeta_set}
{\mathcal{P}}^{{\mathcal{Z}}}(P_{0};\epsilon):= \sset*{P \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}}{ {\mathfrak{d}}^{{\mathcal{Z}}}(P,P_{0}) \le \epsilon},$$ where as before $P_{0}$ is a nominal probability measure and $\epsilon$ controls the size of the ambiguity set.
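For univariate discrete distributions, two members of this family reduce to simple functionals of the CDFs: the uniform (Kolmogorov) metric is $\sup_{t}|F_{1}(t)-F_{2}(t)|$, and in one dimension the Kantorovich metric equals $\int|F_{1}(t)-F_{2}(t)|\,dt$. A sketch with hypothetical names (assuming NumPy and a common, sorted support):

```python
import numpy as np

def zeta_metrics_1d(x, p1, p2):
    """Uniform (Kolmogorov) and Kantorovich metrics between two discrete
    distributions on common sorted support points x.  The CDF difference
    is piecewise constant between support points, so both functionals are
    exact finite sums."""
    x = np.asarray(x, float)
    F1, F2 = np.cumsum(p1), np.cumsum(p2)      # CDFs at support points
    kolmogorov = float(np.max(np.abs(F1 - F2)))
    kantorovich = float(np.sum(np.abs(F1 - F2)[:-1] * np.diff(x)))
    return kolmogorov, kantorovich
```

For example, two point masses at $0$ and $1$ give Kolmogorov distance $1$ and Kantorovich distance $1$ (all mass travels a distance of $1$).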
\[lem: rev.zeta\] Suppose that the support $\Omega$ of ${\tilde{{\boldsymbol{\xi}}}}$ is bounded with diameter $\theta$, i.e., $\theta:=\sup\{d({\boldsymbol{\xi}}_{1}, {\boldsymbol{\xi}}_{2}): {\boldsymbol{\xi}}_{1}, {\boldsymbol{\xi}}_{2} \in \Omega\}$, where $d$ is a metric. Then, the following relationships hold between $\zeta$-structure metrics:
\[eq: rev.zeta\] $$\begin{aligned}
& {\mathfrak{d}}^{\text{BL}}(P,P_0) \le {\mathfrak{d}}^{\text{K}}(P,P_0)\\
& {\mathfrak{d}}^{\text{K}}(P,P_0) \le {\mathfrak{d}}^{\text{TV}}(P,P_0)\\
& {\mathfrak{d}}^{\text{U}}(P,P_0) \le {\mathfrak{d}}^{\text{TV}}(P,P_0)\\
& {\mathfrak{d}}^{\text{K}}(P,P_0) \le {\mathfrak{d}}^{\text{FM}}(P,P_0)\\
& {\mathfrak{d}}^{\text{FM}}(P,P_0) \le \max\{1, \theta^{q-1}\}{\mathfrak{d}}^{\text{K}}(P,P_0).
\end{aligned}$$
The proof is immediate from @zhao2015 [Lemmas 1–4].
@zhao2015 study distributionally robust two-stage stochastic programs formed via $\zeta$-structure metrics. They discuss how to construct the ambiguity set from historical data while utilizing a family of $\zeta$-structure metrics. They propose solution approaches for the resulting problem when the true unknown distribution is discrete or continuous. They further analyze the convergence of the [DRO]{} problem to the corresponding stochastic program under the true unknown probability distribution. They test their results on newsvendor and facility location problems.
@pichler2017 study a [DRO]{} model with the expectation as the risk measure and form the ambiguity set of distributions via a $\zeta$-structure metric. They investigate how variations of the ambiguity set affect the optimal value and the optimal solutions of the resulting optimization problem. They illustrate their results in the context of a two-stage stochastic program with recourse.
### Contamination Neighborhood
The contamination neighborhood around a nominal probability measure $P_{0}$ is defined as $$\label{eq: rev.contamination_set}
{\mathcal{P}}^{\text{c}}(P_{0};\epsilon) = \sset*{P \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}} {P= (1-\epsilon) P_{0} + \epsilon Q, \; Q \in {\mathfrak{Q}}},$$ where ${\mathfrak{Q}} \subseteq {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$ and $\epsilon \in [0,1]$.
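The worst-case expectation over a contamination neighborhood is linear in the contamination: since $P=(1-\epsilon)P_{0}+\epsilon Q$, we have $\sup_{P}{\mathbb{E}}_{P}[h]=(1-\epsilon){\mathbb{E}}_{P_{0}}[h]+\epsilon\sup_{Q\in{\mathfrak{Q}}}{\mathbb{E}}_{Q}[h]$, and when ${\mathfrak{Q}}$ contains all distributions on a finite support, the inner supremum is attained by a point mass at a maximizer of $h$. A minimal sketch (hypothetical names, assuming NumPy):

```python
import numpy as np

def contamination_worst_case(h, p0, eps):
    """sup over P = (1 - eps) P0 + eps Q of E_P[h], with Q ranging over
    all distributions on the finite support: the adversarial Q is a
    point mass at argmax h, so the worst case is affine in eps."""
    h, p0 = np.asarray(h, float), np.asarray(p0, float)
    return (1.0 - eps) * float(p0 @ h) + eps * float(h.max())
```

The map $\epsilon \mapsto$ worst case interpolates affinely between the nominal expectation at $\epsilon=0$ and the fully robust bound $\max_{s} h(s)$ at $\epsilon=1$.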
This ambiguity set is extensively used in the context of robust statistics, see, e.g., @huber1973 [@huber2009RobustStat], and it has also been used in the economics literature, see, e.g., @nishimura2004search [@nishimura2004contamination]. @bose2009 study ambiguity aversion in a mechanism design problem using the maximin expected utility model of @gilboa1989. The contamination neighborhood is also used in the context of statistical learning, see, e.g., @duchi2019, and hypothesis testing, see, e.g., @huber1965.
### General Discrepancy-Based Ambiguity Sets
We devote this subsection to the papers that consider general discrepancy-based models. @postek2016 review and derive tractable reformulations of distributionally robust risk constraints over discrete probability distributions and for a given function ${\boldsymbol{g}}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$ of ${\tilde{{\boldsymbol{\xi}}}}$. They provide a comprehensive list of risk measures and ambiguity sets, formed using statistical goodness-of-fit tests or probability metrics. They consider risk measures such as (1) expectation, (2) sum of expectation and standard deviation/variance, (3) variance, (4) mean absolute deviation from the median, (5) Sharpe ratio, (6) lower partial moments, (7) certainty equivalent, (8) optimized certainty equivalent, (9) shortfall risk, (10) VaR, (11) CVaR, (12) entropic VaR, (13) mean absolute deviation from the mean, (14) distortion risk measures, (15) coherent risk measures, and (16) spectral risk measures. They also consider (1) $\phi$-divergences, (2) Kolmogorov-Smirnov, (3) Wasserstein, (4) Anderson-Darling, (5) Cramer-von Mises, (6) Watson, and (7) Kuiper to model the distributional ambiguity. For each pair of risk measure and ambiguity set, they obtain a tractable reformulation by relying on the conjugate duality for the risk measure and the support function of the ambiguity set (i.e., the convex conjugate of the indicator function of the ambiguity set). They exemplify the results in portfolio optimization and antenna array design problems.
A connection between [DRO]{} models formed via discrepancy-based ambiguity sets and law invariant risk measures is made in @shapiro2017DRSP as described in Theorem \[thm: duality\_rho\_law\]. They specifically derive law invariant risk measures for the cases when the Wasserstein metric, $\phi$-divergences, and the total variation distance are used to model the distributional ambiguity. They also propose a SAA approach to solve the corresponding dual of these problems, and establish the statistical properties of the optimal solutions and optimal value, similar to the results for the risk-neutral stochastic programs, see, e.g., @shapiro2014SP [@shapiro2003montecarlo].
Moment-Based Ambiguity Sets {#sec: rev.moment}
---------------------------
A common approach to model the ambiguity set is moment based, in which the ambiguity set contains all probability distributions whose moments satisfy certain properties. We categorize this type of models into several subgroups, although there are some overlaps.
### Chebyshev {#sec: rev.Chebyshev}
@scarf1958 models the distributional ambiguity in a newsvendor problem, where only the mean and variance of the random demand are known. He obtains a closed-form expression for the optimal order quantity and shows that the worst-case probability distribution is supported on only two points. Motivated by Scarf’s seminal work, other researchers have investigated the Chebyshev ambiguity set in the context of the newsvendor model. @gallego1993 study multiple extensions of the problem studied in @scarf1958. These include the situations where there is a recourse opportunity, a fixed ordering cost, a random production output, and a scarce resource for multiple competing products.
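Scarf’s rule admits a simple closed form; the sketch below implements it for the basic setting (unit cost $c$, selling price $p>c$, zero salvage value — the parameter names are ours, and readers should consult @scarf1958 for the exact derivation):

```python
import numpy as np

def scarf_order_quantity(mu, sigma, c, p):
    """Scarf's distribution-free order quantity for the newsvendor with unit
    cost c and selling price p (p > c), under known mean mu and standard
    deviation sigma of demand (zero salvage value)."""
    r = (p - c) / c  # profit-to-cost ratio
    return mu + (sigma / 2.0) * (np.sqrt(r) - 1.0 / np.sqrt(r))

# When the margin equals the cost (p = 2c), r = 1 and the rule orders
# exactly the mean demand; a larger margin pushes the order above the mean.
q = scarf_order_quantity(mu=100.0, sigma=20.0, c=1.0, p=2.0)
```

Note that for a small enough margin the rule can even prescribe ordering nothing, one way to see how conservative the distribution-free worst case is.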
Unlike the ambiguity sets studied in @scarf1958 and @gallego1993, the mean and covariance matrix can be unknown themselves and belong to some uncertainty sets. @ghaoui2003worst study a distributionally robust one-period portfolio optimization, where the worst-case VaR over an ambiguity set of distributions with a known mean and covariance matrix is minimized. They show that this problem can be reformulated as a SOCP. Moreover, they show that minimizing worst-case VaR with respect to such an ambiguity set can be interpreted as a RO model where the worst-case portfolio loss with respect to an ellipsoid uncertainty set is minimized. They extend their study to the case that the first two order moments are only known to belong to a convex (bounded) uncertainty set, and they show the conditions under which the resulting model can be cast as a SDP. In particular, for independent polytopic uncertainty sets for the mean and covariance (so that the mean and covariance belong to the Cartesian product of these two sets), the problem can be reformulated as a SOCP. Also, for sets with componentwise bound on the mean and covariance, they cast the problem as a SDP (see also @halldorsson2003 for a similar result). Moreover, they show that in the presence of additional information on the distribution, besides the first two order moments, including constraints on the support and Kullback-Leibler divergence, an upper bound on the worst-case VaR can be obtained by solving a SDP. Motivated by the work in @ghaoui2003worst, @li2016law showcases the results in the context of a risk-averse portfolio optimization problem. Unlike @ghaoui2003worst that considers polytopic and interval uncertainty sets for the mean and covariance, @lotfi2018 assume that the unknown mean and covariance belong to an ellipsoidal uncertainty set. They study the worst-case VaR and worst-case CVaR optimization problems, subject to an expected return constraint. 
They show that both problems can be reformulated as SOCPs.
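In the known-moments case, the quantity these SOCPs optimize admits a closed form: the worst-case VaR of the portfolio loss $-{\tilde{{\boldsymbol{\xi}}}}^{\top}{\boldsymbol{x}}$ over all distributions with mean ${\boldsymbol{\mu}}$ and covariance ${\boldsymbol{\Sigma}}$ is $\kappa(\beta)\sqrt{{\boldsymbol{x}}^{\top}{\boldsymbol{\Sigma}}{\boldsymbol{x}}}-{\boldsymbol{\mu}}^{\top}{\boldsymbol{x}}$ with $\kappa(\beta)=\sqrt{(1-\beta)/\beta}$. A minimal numerical sketch (illustrative data) evaluates it and checks that it dominates the Gaussian VaR at the same level:

```python
import numpy as np

def worst_case_var(x, mu, Sigma, beta):
    # Closed-form worst-case VaR at level beta for loss -xi'x, over all
    # distributions with mean mu and covariance Sigma:
    #   WCVaR_beta(x) = kappa * sqrt(x' Sigma x) - mu' x,
    #   kappa = sqrt((1 - beta) / beta).
    kappa = np.sqrt((1.0 - beta) / beta)
    return kappa * np.sqrt(x @ Sigma @ x) - mu @ x

mu = np.array([0.05, 0.08])
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
x = np.array([0.5, 0.5])
wc = worst_case_var(x, mu, Sigma, beta=0.05)

# The Gaussian VaR replaces kappa by the normal quantile (about 1.645 at
# the 5% level), which is smaller, so the worst-case VaR must dominate it.
gaussian_var = 1.645 * np.sqrt(x @ Sigma @ x) - mu @ x
assert wc > gaussian_var
```

The gap between $\kappa(\beta)$ and the normal quantile quantifies the price of distributional robustness in this setting.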
@goldfarb2003 study a distributionally robust portfolio selection problem, where the asset returns ${\tilde{{\boldsymbol{\xi}}}}$ are formed by a linear factor model of the form ${\tilde{{\boldsymbol{\xi}}}}={\boldsymbol{\mu}} + {\boldsymbol{A}} {\tilde{{\boldsymbol{f}}}} + {\tilde{{\boldsymbol{\epsilon}}}}$, where ${\boldsymbol{\mu}}$ is the vector of mean returns, ${\tilde{{\boldsymbol{f}}}} \sim N({\boldsymbol{0}}, {\boldsymbol{\Sigma}})$ is the vector of random returns that drives the market, ${\boldsymbol{A}}$ is the factor loading matrix, and ${\tilde{{\boldsymbol{\epsilon}}}} \sim N({\boldsymbol{0}}, {\boldsymbol{B}})$ is the vector of residual returns with a diagonal matrix ${\boldsymbol{B}}$. It is assumed that ${\tilde{{\boldsymbol{\epsilon}}}}$ is independent of ${\tilde{{\boldsymbol{f}}}}, {\boldsymbol{F}}$, and ${\boldsymbol{B}}$. Thus, ${\tilde{{\boldsymbol{\xi}}}}\sim N({\boldsymbol{\mu}}, {\boldsymbol{A}}{\boldsymbol{\Sigma}}{\boldsymbol{A}}^{\top} + {\boldsymbol{B}})$; hence, the uncertainty in the mean is independent of the uncertainty in the covariance matrix of the returns. Under the assumption that the covariance matrix ${\boldsymbol{\Sigma}}$ is known, @goldfarb2003 study three different models to form the uncertainty in ${\boldsymbol{B}}$, ${\boldsymbol{A}}$, and ${\boldsymbol{\mu}}$ as follows: $$\begin{aligned}
& {\mathcal{U}}_{{\boldsymbol{B}}}=\sset*{{\boldsymbol{B}}}{{\boldsymbol{B}}=\text{diag}({\boldsymbol{b}}), \; b_{i} \in [{\underline{b}}_{i}, {\overline{b}}_{i}], \; i=1, \ldots, d}, \label{eq: A}\\
& {\mathcal{U}}_{{\boldsymbol{A}}}=\sset*{{\boldsymbol{A}}}{{\boldsymbol{A}}={\boldsymbol{A}}_{0}+ {\boldsymbol{C}}, \; \|{\boldsymbol{c}}_{i}\|_{g} \le \rho_{i}, \; i=1, \ldots, d}, \label{eq: B}\\
& {\mathcal{U}}_{{\boldsymbol{\mu}}}=\sset*{{\boldsymbol{\mu}}}{{\boldsymbol{\mu}}={\boldsymbol{\mu}}_{0}+ {\boldsymbol{\zeta}}, \; |\zeta_{i}| \le \gamma_{i}, \; i=1, \ldots, d}, \label{eq: mu}\end{aligned}$$ where ${\boldsymbol{c}}_{i}$ denotes the $i$-th column of ${\boldsymbol{C}}$, and $\|{\boldsymbol{c}}_{i}\|_{g}=\sqrt{{\boldsymbol{c}}_{i}^{\top} {\boldsymbol{G}} {\boldsymbol{c}}_{i}}$ denotes the elliptic norm of ${\boldsymbol{c}}_{i}$ with respect to a symmetric positive definite matrix ${\boldsymbol{G}}$. Calibrating the uncertainty sets ${\mathcal{U}}_{{\boldsymbol{B}}}$, ${\mathcal{U}}_{{\boldsymbol{A}}}$, and ${\mathcal{U}}_{{\boldsymbol{\mu}}}$ involves choosing parameters ${\underline{b}}_{i}$, ${\overline{b}}_{i}$, $\rho_{i}$, $\gamma_{i}$, $i=1, \ldots, d$, vector ${\boldsymbol{\mu}}_{0}$, and matrices ${\boldsymbol{A}}_{0}$ and ${\boldsymbol{G}}$. Given this setup, @goldfarb2003 study a [DRO]{} approach to different portfolio optimization problems for the return ${\tilde{{\boldsymbol{\xi}}}}^{\top} {\boldsymbol{x}}$ on the portfolio ${\boldsymbol{x}}$, where $\sum_{i=1}^{n} x_{i}=1$.
This includes: (1) minimum variance, ${\mathrm{Var} \left[ \cdot \right]}$, subject to a minimum expected return constraint $$\min_{{\boldsymbol{x}} \ge {\boldsymbol{0}}} \ \max_{{\boldsymbol{A}} \in {\mathcal{U}}_{{\boldsymbol{A}}}, {\boldsymbol{B}} \in {\mathcal{U}}_{{\boldsymbol{B}}}} \sset*{{\mathrm{Var} \left[ {\tilde{{\boldsymbol{\xi}}}}^{\top} {\boldsymbol{x}} \right]}}{\min_{{\boldsymbol{\mu}} \in {\mathcal{U}}_{{\boldsymbol{\mu}}}} {\mathbb{E} \left[ {\tilde{{\boldsymbol{\xi}}}}^{\top} {\boldsymbol{x}} \right]} \ge \alpha , \; \sum_{i=1}^{n} x_{i}=1},$$ (2) maximum expected return subject to a maximum variance constraint $$\max_{{\boldsymbol{x}} \ge {\boldsymbol{0}}} \ \min_{{\boldsymbol{\mu}} \in {\mathcal{U}}_{{\boldsymbol{\mu}}}} \sset*{{\mathbb{E} \left[ {\tilde{{\boldsymbol{\xi}}}}^{\top} {\boldsymbol{x}} \right]}}{\max_{{\boldsymbol{A}} \in {\mathcal{U}}_{{\boldsymbol{A}}}, {\boldsymbol{B}} \in {\mathcal{U}}_{{\boldsymbol{B}}}} {\mathrm{Var} \left[ {\tilde{{\boldsymbol{\xi}}}}^{\top} {\boldsymbol{x}} \right]} \le \lambda, \; \sum_{i=1}^{n} x_{i}=1},$$ (3) maximum Sharpe ratio $$\max_{{\boldsymbol{x}} \ge {\boldsymbol{0}}} \ \min_{{\boldsymbol{\mu}} \in {\mathcal{U}}_{{\boldsymbol{\mu}}}, {\boldsymbol{A}} \in {\mathcal{U}}_{{\boldsymbol{A}}}, {\boldsymbol{B}} \in {\mathcal{U}}_{{\boldsymbol{B}}}} \sset*{\frac{{\mathbb{E} \left[ {\tilde{{\boldsymbol{\xi}}}}^{\top} {\boldsymbol{x}} \right]}-{\boldsymbol{\xi}}_{0}^{\top} {\boldsymbol{x}}}{\sqrt{{\mathrm{Var} \left[ {\tilde{{\boldsymbol{\xi}}}}^{\top}{\boldsymbol{x}} \right]}}}}{\sum_{i=1}^{n} x_{i}=1},$$ where ${\boldsymbol{\xi}}_{0}$ is a risk-free return rate, and (4) maximum expected return subject to a maximum VaR constraint $$\max_{{\boldsymbol{x}} \ge {\boldsymbol{0}}} \ \min_{{\boldsymbol{\mu}} \in {\mathcal{U}}_{{\boldsymbol{\mu}}}} \sset*{{\mathbb{E} \left[ {\tilde{{\boldsymbol{\xi}}}}^{\top} {\boldsymbol{x}} \right]}}{\max_{{\boldsymbol{\mu}} \in {\mathcal{U}}_{{\boldsymbol{\mu}}}, 
{\boldsymbol{A}} \in {\mathcal{U}}_{{\boldsymbol{A}}}, {\boldsymbol{B}} \in {\mathcal{U}}_{{\boldsymbol{B}}}} {\mathrm{VaR}_{\beta} \left[ {\tilde{{\boldsymbol{\xi}}}}^{\top} {\boldsymbol{x}} \right]} \ge \alpha, \; \sum_{i=1}^{n} x_{i}=1}.$$ Note that the constraint ${\mathrm{VaR}_{\beta} \left[ {\tilde{{\boldsymbol{\xi}}}}^{\top} {\boldsymbol{x}} \right]} \ge \alpha$ is equivalent to $P\{{\tilde{{\boldsymbol{\xi}}}}^{\top} {\boldsymbol{x}} \le \alpha\} \le \beta$. They show that all the above four classes of problems can be reformulated as SOCPs. They further consider the case where the covariance matrix ${\boldsymbol{\Sigma}}$ or its inverse is unknown and belongs to an ellipsoidal uncertainty set, and show that the above problems can still be reformulated as SOCPs. @ghaoui2003worst study a similar linear factor model as the one in @goldfarb2003, but they assume that the uncertainty in the mean is not independent of the uncertainty in the covariance matrix of the returns. When the factor matrix ${\boldsymbol{A}}$ belongs to an ellipsoidal uncertainty set, they show that an upper bound on the worst-case VaR can be computed by solving a SDP.
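Within this factor model, the worst case over ${\mathcal{U}}_{{\boldsymbol{B}}}$ alone has a transparent structure: since ${\boldsymbol{x}}^{\top}{\boldsymbol{B}}{\boldsymbol{x}}=\sum_{i} b_{i} x_{i}^{2}$ is nondecreasing in each $b_{i}$, the maximum is attained at the upper bounds ${\overline{b}}_{i}$. The following minimal sketch (illustrative data; function names ours) verifies this against brute force over the corners of the box:

```python
import numpy as np

def portfolio_variance(x, A, Sigma, b):
    # Var[xi' x] = x' (A Sigma A' + B) x under the factor model, B = diag(b).
    return x @ (A @ Sigma @ A.T + np.diag(b)) @ x

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # illustrative factor loadings
Sigma = np.eye(2)                 # known factor covariance
x = np.array([0.2, 0.3, 0.5])
b_lo, b_hi = np.full(3, 0.01), np.full(3, 0.05)

# Claimed worst case: residual variances at their upper bounds.
worst = portfolio_variance(x, A, Sigma, b_hi)

# Brute-force check: a linear function of b over a box is maximized at a
# corner, so enumerate all 2^3 corners.
corners = [np.where([(k >> i) & 1 for i in range(3)], b_hi, b_lo)
           for k in range(8)]
brute = max(portfolio_variance(x, A, Sigma, b) for b in corners)
assert np.isclose(worst, brute)
```

The same monotonicity argument is why only the upper endpoints ${\overline{b}}_{i}$ of ${\mathcal{U}}_{{\boldsymbol{B}}}$ matter in the variance-based formulations above.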
@li2013 study a distributionally robust approach for a single-period portfolio selection problem. They consider a set of reference means and variances, and they form the ambiguity set by all distributions whose means and variances are within a pre-specified distance from the reference means and variances set (in the regular sense of the distance of a point from a set via a norm). For the case that the moments take values outside the reference region, since evaluation based on worst-case performance alone can be overly conservative, they consider a penalty term that further accounts for the measure discrepancy between the moments inside and outside the reference region. Moreover, for the case that the reference region is a conic set, they obtain an equivalent SDP reformulation.
@grunwald2004game confine the ambiguity set to distributions with fixed first order moments ${\boldsymbol{\tau}}$. By varying ${\boldsymbol{\tau}}$, they obtain a collection of maximum generalized entropy distributions and relate this collection to the exponential family of distributions.
@rujeerapaiboon2018chebyshev derive Chebyshev-type bounds on the worst-case right and left tail of a product of nonnegative symmetric random variables. They assume that the mean is known, but the covariance matrix might be known or bounded above by a matrix inequality. They show that if both the mean and covariance matrix are known, these bounds can be obtained by solving a SDP. For the case that the covariance matrix is bounded above, they show that (i) the bound on the left tail is equal to the bound on the left tail under the known covariance setting, and (ii) the bound on the right tail is equal to the bound on the right tail under the known mean and covariance setting, for a sufficiently large tail. They extend their results to construct Chebyshev bounds for sums, minima, and maxima of nonnegative random variables.
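The classical one-sided (Cantelli) version of such Chebyshev-type bounds, together with the tight two-point worst-case distribution that attains it, can be checked directly:

```python
import numpy as np

# One-sided Chebyshev (Cantelli) bound: for any X with mean mu and
# variance s2,
#   P(X >= mu + t) <= s2 / (s2 + t^2),   t > 0,
# and the bound is tight: a two-point distribution attains it.
mu, s2, t = 0.0, 1.0, 2.0
bound = s2 / (s2 + t ** 2)

# Worst-case two-point distribution: mass p at mu + t, mass 1-p at mu - s2/t.
p = s2 / (s2 + t ** 2)
points = np.array([mu + t, mu - s2 / t])
probs = np.array([p, 1 - p])

assert np.isclose(probs @ points, mu)               # mean matches
assert np.isclose(probs @ (points - mu) ** 2, s2)   # variance matches
assert np.isclose(probs[0], bound)                  # tail attains the bound
```

Two-point extremal distributions of exactly this kind reappear throughout the moment-based DRO literature, from @scarf1958 onward.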
### Delage and Ye {#sec: rev.DelageYe}
Unlike the ambiguity sets studied in @scarf1958 and @gallego1993, @delage2010 allow the mean and covariance matrix to be unknown themselves. This ambiguity set is defined as follows [@delage2010]: $$\label{eq: rev.DelageYe}
{\mathcal{P}}^{DY}:=\sset*{P \in{\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}}{
\begin{aligned}
& P\{{\tilde{{\boldsymbol{\xi}}}}\in \Omega\}=1,\\
& \Big({\mathbb{E}_{P} \left[ {\tilde{{\boldsymbol{\xi}}}}\right]} - {\boldsymbol{\mu}}_{0}\Big)^{\top}{\boldsymbol{\Sigma}}_{0}^{-1}\Big({\mathbb{E}_{P} \left[ {\tilde{{\boldsymbol{\xi}}}}\right]} - {\boldsymbol{\mu}}_{0}\Big) \le \varrho_{1},\\
& {\mathbb{E}_{P} \left[ ({\tilde{{\boldsymbol{\xi}}}}-{\boldsymbol{\mu}}_{0})({\tilde{{\boldsymbol{\xi}}}}-{\boldsymbol{\mu}}_{0})^{\top} \right]} \preccurlyeq \varrho_{2} {\boldsymbol{\Sigma}}_{0}
\end{aligned}
}.$$ The first constraint states that ${\tilde{{\boldsymbol{\xi}}}}$ lies in $\Omega$ with probability one (w.p. $1$), where $\Omega \subseteq {\mathbb{R}}^{d}$ is the smallest closed convex set containing the support of ${\mathbbmtt{P}}=P \circ {\tilde{{\boldsymbol{\xi}}}}^{-1}$. The second constraint ensures that the mean of ${\tilde{{\boldsymbol{\xi}}}}$ lies in an ellipsoid of size $\varrho_{1}$ centered around the nominal mean estimate ${\boldsymbol{\mu}}_{0}$. Note that we can equivalently write this constraint as $${\mathbb{E}_{P} \left[ \begin{pmatrix}
-{\boldsymbol{\Sigma}}_{0} & {\boldsymbol{\mu}}_{0} -{\tilde{{\boldsymbol{\xi}}}}\\
({\boldsymbol{\mu}}_{0} -{\tilde{{\boldsymbol{\xi}}}})^{\top} & -\varrho_{1}
\end{pmatrix} \right]} \preccurlyeq {\boldsymbol{0}}.$$ The third constraint bounds the second central-moment matrix of ${\tilde{{\boldsymbol{\xi}}}}$ by a matrix inequality. The parameters $\varrho_{1}$ and $\varrho_{2}$ control the level of confidence in ${\boldsymbol{\mu}}_{0}$ and ${\boldsymbol{\Sigma}}_{0}$, respectively. Note that the ambiguity sets with a known mean and covariance matrix can be seen as a special case of ${\mathcal{P}}^{\text{DY}}$, with $\varrho_{1}=0$ and $\varrho_{2}=1$. @delage2010 propose data-driven methods to form confidence regions for the mean and the covariance matrix of the random vector ${\tilde{{\boldsymbol{\xi}}}}$ using the concentration inequalities of @mcdiarmid1998, and provide probabilistic guarantees that the solution found using the resulting [DRO]{} model yields an upper bound on the out-of-sample performance with respect to the true distribution of the random vector. A conic generalization of the ambiguity set ${\mathcal{P}}^{\text{DY}}$, beyond the first and second moment information, is also studied in @delage2009DY. Below, we present a duality result for $\sup_{{\mathbbmtt{P}} \in {\mathcal{P}}^{\text{DY}}} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$ given a fixed ${\boldsymbol{x}} \in {\mathcal{X}}$, due to @delage2010.
[(@delage2010 [Lemma 1])]{} \[thm: rev.dual\_DelageYe\] For a fixed ${\boldsymbol{x}} \in {\mathcal{X}}$, suppose that Slater’s constraint qualification conditions are satisfied, i.e., there exists a strictly feasible $P$ to ${\mathcal{P}}^{DY}$, and $h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}})$ is $P$-integrable for all $P \in {\mathcal{P}}^{DY}$. Then, $\sup_{P \in {\mathcal{P}}^{\text{DY}}} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$ is equal to the optimal value of the following semi-infinite convex conic optimization problem: $$\begin{aligned}
\inf_{{\boldsymbol{Y}},{\boldsymbol{y}},r,t} \ & r+ t \\
{\text{s.t.}}\quad & r \ge h({\boldsymbol{x}},{\boldsymbol{\xi}})-{\boldsymbol{\xi}}^{\top}{\boldsymbol{Y}}{\boldsymbol{\xi}} - {\boldsymbol{\xi}}^{\top} {\boldsymbol{y}}, \; \forall {\boldsymbol{\xi}} \in \Omega,\\
& t \ge (\varrho_{2} {\boldsymbol{\Sigma}}_{0} + {\boldsymbol{\mu}}_{0} {\boldsymbol{\mu}}_{0}^{\top})\bullet {\boldsymbol{Y}} + {\boldsymbol{\mu}}_{0}^{\top} {\boldsymbol{y}} + \sqrt{\varrho_{1}} \| {\boldsymbol{\Sigma}}_{0} ^{\frac{1}{2}}({\boldsymbol{y}}+ 2 {\boldsymbol{Y}}{\boldsymbol{\mu}}_{0})\|,\\
& {\boldsymbol{Y}} \succcurlyeq 0,
\end{aligned}$$ where ${\boldsymbol{Y}} \in {\mathbb{R}}^{d \times d}$ and ${\boldsymbol{y}} \in {\mathbb{R}}^{d}$.
The reformulated problem in Theorem \[thm: rev.dual\_DelageYe\] is polynomial-time solvable under the following assumptions [@delage2010]:
- The sets ${\mathcal{X}}$ and $\Omega$ are convex and compact, and are both equipped with oracles that confirm the feasibility of a point ${\boldsymbol{x}}$ and ${\tilde{{\boldsymbol{\xi}}}}$, or provide a hyperplane that separates the infeasible point from its corresponding feasible set in time polynomial in the dimension of the set.
- Function $h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}):=\max_{k \in \{1,\ldots, K\}} h_{k}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$ is a piecewise maximum such that for each $k$, $h_{k}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$ is convex in ${\boldsymbol{x}}$ and concave in ${\tilde{{\boldsymbol{\xi}}}}$. In addition, for any given pair $({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$, one can evaluate $h_{k}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$, find a supergradient of $h_{k}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$ in ${\tilde{{\boldsymbol{\xi}}}}$, and find a subgradient of $h_{k}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$ in ${\boldsymbol{x}}$, in time polynomial in the dimension of ${\mathcal{X}}$ and $\Omega$.
As a special case where $\Omega$ is an ellipsoid, the resulting reformulation in Theorem \[thm: rev.dual\_DelageYe\] reduces to a SDP of finite size. Motivated by the computational challenges of solving the semidefinite reformulation of the [DRO]{} model formed via this ambiguity set, @cheng2018 propose an approximation method to reduce the dimensionality of the resulting [DRO]{}. This approximation method relies on the principal component analysis for the optimal lower dimensional representation of the variability in random samples. They show that this approximation yields a relaxation of the original problem and give theoretical bounds on the gap between the original problem and its approximation.
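Checking whether a given (empirical) distribution belongs to ${\mathcal{P}}^{\text{DY}}$ reduces to two finite-dimensional tests — the mean ellipsoid and a matrix inequality on the second central moment. A minimal sketch (illustrative data; the function name is ours):

```python
import numpy as np

def in_delage_ye(samples, mu0, Sigma0, rho1, rho2):
    """Check the two moment constraints of the Delage-Ye set for the
    empirical distribution of `samples` (support constraint not checked)."""
    m = samples.mean(axis=0)
    dev = samples - mu0
    S = dev.T @ dev / len(samples)   # empirical E[(xi - mu0)(xi - mu0)']
    # Mean-ellipsoid constraint: (m - mu0)' Sigma0^{-1} (m - mu0) <= rho1.
    mean_ok = (m - mu0) @ np.linalg.solve(Sigma0, m - mu0) <= rho1
    # Matrix inequality S <= rho2 * Sigma0, checked via eigenvalues.
    cov_ok = np.all(np.linalg.eigvalsh(rho2 * Sigma0 - S) >= -1e-9)
    return bool(mean_ok and cov_ok)

rng = np.random.default_rng(1)
samples = rng.standard_normal((2000, 2))   # roughly matches (mu0, Sigma0)
member = in_delage_ye(samples, mu0=np.zeros(2), Sigma0=np.eye(2),
                      rho1=0.05, rho2=2.0)
```

The eigenvalue test is exactly how the semidefinite ordering $\preccurlyeq$ in the definition of ${\mathcal{P}}^{\text{DY}}$ is verified numerically.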
@popescu2007 study a class of stochastic optimization problems, where the objective function is characterized with one- or two-point support functions. They show that when the ambiguity set of distributions is formed with all distributions with known mean and covariance, the problem reduces to a deterministic parametric quadratic program. In particular, this result holds for increasing concave utilities with convex or concave-convex derivatives.
@goh2010tractable study a [DRO]{} approach to a stochastic linear optimization problem with expectation constraints, where the support and mean of the random parameters belong to a conic-representable set, while the covariance matrix is assumed to be known.
#### Discrete Problems
Under the assumption that the mean and covariance are known, @natarajan2017SDP investigate the worst-case expected value of the maximum of a linear function of random variables as follows: $$\sup_{P \in {\mathcal{P}}} \ {\mathbb{E}_{P} \left[ Z({\tilde{{\boldsymbol{\xi}}}}) \right]},$$ where $Z({\tilde{{\boldsymbol{\xi}}}})=\max\sset*{{\tilde{{\boldsymbol{\xi}}}}^{\top}{\boldsymbol{x}}}{{\boldsymbol{x}} \in {\mathcal{X}}}$. The set ${\mathcal{X}}$ is specified with either a finite number of points or a bounded feasible region to a mixed-integer LP. To obtain an upper bound, they approximate the copositive programming reformulation of the problem, presented in @natarajan2011mixed [Theorem 3.3], with a SDP. They show that the complexity of computing this bound is closely related to characterizing the convex hull of the quadratic forms of the points in the feasible region.
@xie2018integer study a [DRO]{} approach to a two-stage stochastic program with a simple integer round-up recourse function, defined as follows: $$\min_{{\boldsymbol{x}}}\sset*{{\boldsymbol{c}}^{\top}{\boldsymbol{x}} +\max_{P \in {\mathcal{P}}} {\mathbb{E}_{P} \left[ h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \right]}}{{\boldsymbol{A}}{\boldsymbol{x}} \ge {\boldsymbol{b}}, \; {\boldsymbol{x}} \in {\mathbb{R}}^{n}},$$ where $$h({\boldsymbol{x}},{\boldsymbol{\xi}})=\min_{{\boldsymbol{u}},{\boldsymbol{v}}}\sset*{{\boldsymbol{q}}^{\top}{\boldsymbol{u}}+ {\boldsymbol{r}}^{\top}{\boldsymbol{v}}}{{\boldsymbol{u}} \ge {\boldsymbol{\xi}}-{\boldsymbol{T}} {\boldsymbol{x}}, \; {\boldsymbol{v}} \ge {\boldsymbol{T}} {\boldsymbol{x}} -{\boldsymbol{\xi}}, \; {\boldsymbol{u}}, {\boldsymbol{v}} \in {\mathbb{Z}}_{+}^{q}}.$$ The ambiguity set is formed by the product of one-dimensional ambiguity sets for each component of the random parameter ${\tilde{{\boldsymbol{\xi}}}}$, formed with marginal distributions with known support and mean. They obtain a closed-form expression for the inner problem corresponding to each component, and they reformulate the problem as a mixed-integer SOCP.
@ahipasaoglu2016distributionally study distributionally robust project crashing problems. They assume the underlying joint probability distribution of the activity durations lies in an ambiguity set of distributions with the given mean, standard deviation, and correlation information. The goal is to select the means and standard deviations to minimize the worst-case expected makespan for the project network with respect to the ambiguity set of distributions. Unlike the typical use of the SDP solvers to directly solve the problem, they exploit the problem structure to reformulate it as a convex-concave saddle point problem over the first two moment variables in order to solve the formulation in polynomial time. A distributionally robust approach to an individual chance constraint with binary decisions is studied in @zhang2018. They consider individual chance constraints in which $g_{j}({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}})$, $j=1, \ldots, m$, is defined as $$g_{j}({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}):=\mathbbm{1}_{[{\tilde{{\boldsymbol{\xi}}}}^{\top}{\boldsymbol{x}} \le {\boldsymbol{b}}]}({\tilde{{\boldsymbol{\xi}}}}),$$ where ${\boldsymbol{x}} \in \{0,1\}^{n}$. They form the ambiguity set of distributions by all joint distributions whose marginal means and covariances satisfy the prescribed constraints. They reformulate the chance constraints as binary second-order conic (SOC) constraints.
#### Risk and Chance Constraints
Risk-based [DRO]{} models formed via the ambiguity set are also studied in the literature. @bertsimas2010minmax study a risk-averse distributionally robust two-stage stochastic linear optimization problem where the mean and the covariance matrix are known, and a convex nondecreasing piecewise linear disutility function is used to model risk. When the second-stage objective function’s coefficients are random, they obtain a tight polynomial-sized SDP formulation. They also provide an explicit construction for a sequence of (worst-case) distributions that asymptotically attain the optimal value. They prove that this problem is NP-hard when the right-hand side is random, and further show that under the special case that the extreme points of the dual of the second-stage problem are explicitly known, the problem admits a SDP reformulation. An explicit construction of the worst-case distributions is also given. The results are applied to the production-transportation problem and a single facility minimax distance problem. @li2016law obtains a closed-form expression for the worst-case of the class of law invariant coherent risk measures, where the worst case is taken with respect to all distributions with the same mean and covariance matrix.
@zymler2013VaR extend the work of @ghaoui2003worst with known first and second order moments to a portfolio of derivatives, and develop two worst-case VaR models to capture the nonlinear dependencies between the derivative returns and the underlying asset returns. They introduce worst-case polyhedral VaR with convex piecewise-linear relationship between the derivative return and the asset returns. They also show that minimizing worst-case polyhedral VaR is equivalent to a convex SOCP. A worst-case quadratic VaR with (possibly nonconvex) quadratic relationships between the derivative return and the asset returns is also introduced, and they show that minimizing worst-case quadratic VaR is equivalent to a convex SDP. These worst-case VaR measures are equivalent to the worst-case CVaR of the underlying polyhedral or quadratic loss function, and they are coherent. As in @ghaoui2003worst, @zymler2013VaR show that optimization of these new worst-case VaR has a RO interpretation over an uncertainty set, asymmetrically oriented around the mean values of the asset returns. Using the result from @zymler2013Chance, @rujeerapaiboon2016 show that the worst-case VaR of the quadratic approximation of a portfolio growth rate can be expressed as the optimal value of a SDP.
@chen2010joint summarize and develop different approximations to the individual chance constraint used in the robust optimization as the consequence of applying different bounds on CVaR. These bounds, in turn, can be written as an optimization problem over an uncertainty set. For instance, they show that when the uncertainties are characterized only by their means and covariance, the corresponding uncertainty set is an ellipsoid. @calafiore2006 provide explicit results for enforcement of the individual chance constraint over an ambiguity set of distributions. When only the information on the mean and covariance are considered, the worst-case chance constraint is equivalent to a convex second-order conic (SOC) constraint. With additional information on the symmetry, the worst-case chance constraint can be safely approximated via a convex SOC constraint. Additionally, when the means are known and individual elements are known to belong with probability one to independent bounded intervals, the worst-case chance constraint can be safely approximated via a convex SOC constraint.
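The known-moments case of @calafiore2006 yields a simple SOC feasibility test; a sketch with illustrative numbers (the equivalence stated in the comment is the cited result):

```python
import numpy as np

def drcc_holds(x, mu, Sigma, b, eps):
    # Distributionally robust individual chance constraint over all
    # distributions with mean mu and covariance Sigma:
    #   inf_P P{ xi' x <= b } >= 1 - eps
    # holds if and only if
    #   mu' x + sqrt((1 - eps) / eps) * sqrt(x' Sigma x) <= b.
    kappa = np.sqrt((1.0 - eps) / eps)
    return bool(mu @ x + kappa * np.sqrt(x @ Sigma @ x) <= b)

mu = np.array([1.0, 1.0])
Sigma = 0.01 * np.eye(2)
x = np.array([0.5, 0.5])
# mu'x = 1.0 and sqrt(x' Sigma x) ~ 0.071; with eps = 0.05, kappa ~ 4.36,
# so the left-hand side is roughly 1.31.
assert drcc_holds(x, mu, Sigma, b=1.5, eps=0.05)
assert not drcc_holds(x, mu, Sigma, b=1.2, eps=0.05)
```

The multiplier $\sqrt{(1-\epsilon)/\epsilon}$ plays the same role as $\kappa(\beta)$ in the worst-case VaR formula, reflecting the VaR/chance-constraint duality noted above.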
@zymler2013Chance study a safe approximation to distributionally robust individual and joint chance constraints based on the worst-case CVaR. Under the assumptions that the ambiguity set is formed via distributions with fixed mean and covariance, and the chance safe regions are bi-affine in ${\boldsymbol{x}}$ and ${\tilde{{\boldsymbol{\xi}}}}$, they obtain an exact SDP reformulation of the worst-case CVaR. They show that the CVaR approximation is in fact exact for individual chance constraints whose constraint functions are either convex or (possibly nonconvex) quadratic in ${\tilde{{\boldsymbol{\xi}}}}$ by relying on the nonlinear Farkas lemma and the ${\mathcal{S}}$-lemma, see, e.g., @polik2007survey.
@chen2010joint extend their idea to the joint chance constraint by using bounds for order statistics. They show that the resulting approximation for the joint chance constraint outperforms the Bonferroni approximation, and the constraints of the approximation are second-order conic-representable. @zymler2013Chance show that the CVaR approximation is exact for joint chance constraints whose constraint functions depend linearly on ${\tilde{{\boldsymbol{\xi}}}}$. They evaluate the performance of their approximation for joint chance constraint in the context of a water reservoir control problem for hydro power generation and show it outperforms the Bonferroni approximation and the method of @chen2010joint.
Motivated by the fact that chance constraints do not take into account the magnitude of the violation, @xu2012optimization study a probabilistic envelope constraint. This approach can be interpreted as a continuum of chance constraints with nondecreasing target values and probabilities. They show that when the first two order moments are known, an ambiguous probabilistic envelope constraint is equivalent to a deterministic SIP, which is called a [*comprehensive robust optimization*]{} problem [@ben2006comprehensive; @ben2010soft]. In other words, the ambiguous probabilistic envelope constraint alleviates the “all-or-nothing” view of the standard RO that ignores realizations outside of the uncertainty set. We refer to @yang2016chance for an extension of the work in @xu2012optimization to nonlinear inequalities.
#### Statistical Learning
@lanckriet2002 present a [DRO]{} approach to a binary classification problem to minimize the worst-case probability of misclassification where the mean and covariance matrix of each class are known. They show that for a linear hypothesis, the problem can be formulated as a SOCP. They also investigate the case where the mean and covariance are unknown and belong to convex uncertainty sets. They show that when the mean is unknown and belongs to an ellipsoid, the problem is a SOCP. On the other hand, when the mean is known and the covariance belongs to a matrix norm ball, the problem is a SOCP and includes a regularization term. For a nonlinear hypothesis, they seek a kernel function to map into a higher-dimensional covariates-response space such that a linear hypothesis in that space corresponds to a nonlinear hypothesis in the original covariate-response space. Using this idea, the model is reformulated as a SOCP.
#### Multistage Setting
@xin2018moment study a multistage distributionally robust newsvendor problem where the support and the first two order moments of the demand distribution are known at each stage. They provide a formal definition of the time consistency of the optimal policies and study this phenomenon in the context of the newsvendor problem. They further relate time consistency to rectangularity of measures, see, e.g., @shapiro2016rectangular, and provide sufficient conditions for time consistency. Unlike @xin2018moment, which assumes the demand process is stage-wise independent, @xin2018martingle assume that the demand process is a martingale. They form the ambiguity set by all distributions with a known support and mean at each stage. They obtain, in closed form, the optimal policy and a two-point worst-case probability distribution, one of whose support points is zero. They also show that for any initial inventory level, the optimal policy and random demand (distributed according to the worst-case distribution) are such that for all stages, either demand is greater than or equal to the inventory or demand is zero, meaning that all future demands are also zero.
@yang2018game and @vanparys2016constrained study a stochastic optimal control model to minimize the worst-case probability that a system remains in a safe region for all stages. @yang2018game forms the ambiguity set at each stage by all distributions for which the componentwise mean of random parameters is within an interval, while the covariance is in a positive semidefinite cone. @vanparys2016constrained form the ambiguity set by all distributions with a known mean and covariance.
### Generalized Moment and Measure Inequalities {#sec: rev. measure_marginal_moments}
In this section we review an ambiguity set that allows one to model the support of the random vector and to impose bounds on the probability measure as well as on functions of the random vector, as follows: $$\label{eq: moment-rob-set}
{\mathcal{P}}^{MM}:=\sset*{P\in{{\mathfrak{M}}_{+}(\Xi,{\mathcal{F}})}}{ \nu_1 \preceq P \preceq \nu_2,\; \int_{\Xi} {\boldsymbol{f}} d P \in [{\boldsymbol{l}}, {\boldsymbol{u}}]},$$ where $\nu_{1}, \nu_{2} \in{{\mathfrak{M}}_{+}(\Xi,{\mathcal{F}})}$ are two given measures that impose lower and upper bounds on a measure $P \in {{\mathfrak{M}}_{+}(\Xi,{\mathcal{F}})}$, and ${\boldsymbol{f}}:=[f_1,\ldots,f_m]$ is a vector of measurable functions on ${\left( \Xi, {\mathcal{F}} \right)}$, with $m \ge 1$. The first constraint in enforces a preference relationship between probability measures. To ensure that $P$ is a probability measure, i.e., $P \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$, we set $l_1=u_1=1$ and $f_1=1$ in the above definition of ${\mathcal{P}}^{MM}$. @shapiro2004minmax propose this framework, and special cases of it appear in @popescu2005semidefinite, @bertsimas2005optimal, @perakis2008regret, @mehrotra2014semi, among others. Note that if the first constraint in is disregarded (i.e., we only have $P \succeq 0$), then we can form the constraints of a classical problem of moments, see, e.g., @landau1987moments. Using this unified set, one can impose bounds on standard moments by setting the $i$th entry of ${\boldsymbol{f}}$ to have the form $f_i({\tilde{{\boldsymbol{\xi}}}}):=(\xi_1)^{k_{i1}}\cdot (\xi_2)^{k_{i2}} \cdots (\xi_d)^{k_{id}}$, where $k_{ij}$ is a nonnegative integer indicating the power of $\xi_j$ in the $i$th moment function. Other possible choices for the functions ${\boldsymbol{f}}$ include the mean absolute deviation, the (co-)variances, the semi-variance, higher-order moments, and the Huber loss function. Moreover, proper choices of ${\boldsymbol{f}}$ give the flexibility to impose structural properties on the probability distribution; see, e.g., @popescu2005semidefinite and @perakis2008regret, which model the unimodality and symmetry of distributions within this framework (see also Section \[sec: rev.shape\]).
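As a concrete sketch (hypothetical data; for discrete measures on a common finite set of atoms, the ordering $\nu_1 \preceq P \preceq \nu_2$ reduces to a pointwise comparison of atom masses), membership in ${\mathcal{P}}^{MM}$ with monomial moment functions can be checked directly:

```python
import numpy as np

def in_PMM(p, nu1, nu2, support, powers, l, u):
    """Check membership of a discrete measure p in P^MM: measure bounds
    nu1 <= p <= nu2 (pointwise on common atoms) and moment bounds
    l_i <= E[xi^powers[i]] <= u_i (monomial moment functions f_i)."""
    p, nu1, nu2, support = map(np.asarray, (p, nu1, nu2, support))
    if np.any(p < nu1 - 1e-12) or np.any(p > nu2 + 1e-12):
        return False
    moments = np.array([np.sum(p * support**k) for k in powers])
    return bool(np.all(moments >= np.asarray(l) - 1e-12) and
                np.all(moments <= np.asarray(u) + 1e-12))

support = np.array([0.0, 1.0, 2.0, 3.0])
nu1, nu2 = np.zeros(4), 0.5 * np.ones(4)
# powers (0, 1): f_1 = 1 forces a probability measure, f_2 = xi bounds the mean
ok  = in_PMM([0.1, 0.4, 0.3, 0.2], nu1, nu2, support, (0, 1), (1.0, 1.0), (1.0, 2.0))
bad = in_PMM([0.6, 0.2, 0.1, 0.1], nu1, nu2, support, (0, 1), (1.0, 1.0), (1.0, 2.0))
```

The first candidate satisfies both the mass bounds and the mean bound ($1.6 \in [1,2]$); the second violates the upper measure bound $\nu_2$ at the first atom.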
Below, we present a duality result on $\sup_{P \in {\mathcal{P}}^{\text{MM}}} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$, given a fixed ${\boldsymbol{x}} \in {\mathcal{X}}$.
[(@shapiro2004minmax [Proposition 2.1])]{} \[thm: rev.dual\_generalizedmoments\] For a fixed ${\boldsymbol{x}} \in {\mathcal{X}}$, suppose that $h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$ is $\nu_{2}$-integrable, i.e., $\int_{\Xi} |h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})| d \nu_{2} < \infty$, as defined in . Moreover, suppose that ${\boldsymbol{f}}$ is $\nu_{2}$-integrable, and there exists $\nu_1 \preceq P \preceq \nu_2$ such that $\int_{\Xi} {\boldsymbol{f}} d P \in ({\boldsymbol{l}}, {\boldsymbol{u}})$. If $\sup_{P \in {\mathcal{P}}^{\text{MM}}} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$ is finite, then, it can be written as the optimal value of the following problem: $$\begin{aligned}
\inf_{{\boldsymbol{r}},{\boldsymbol{t}}} \ & {\boldsymbol{r}}^{\top} {\boldsymbol{u}} -{\boldsymbol{t}}^{\top} {\boldsymbol{l}} + \Psi({\boldsymbol{r}},{\boldsymbol{t}}) \\
{\text{s.t.}}\quad & {\boldsymbol{r}},{\boldsymbol{t}} \ge {\boldsymbol{0}},
\end{aligned}$$ where $$\Psi({\boldsymbol{r}},{\boldsymbol{t}})=\int_{\Xi} \Big(h({\boldsymbol{x}},s)+ ({\boldsymbol{t}}-{\boldsymbol{r}})^{\top}{\boldsymbol{f}}(s) \Big)_{+} \nu_{2}(ds) - \int_{\Xi} \Big(-h({\boldsymbol{x}},s)- ({\boldsymbol{t}}-{\boldsymbol{r}})^{\top}{\boldsymbol{f}}(s) \Big)_{+} \nu_{1}(ds).$$
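To build intuition for the primal problem $\sup_{P \in {\mathcal{P}}^{\text{MM}}} {\mathbb{E}_{P}[h]}$, consider a hypothetical discrete instance (ours, purely illustrative): with finitely many atoms, the measure bounds become per-atom box constraints on the masses and the moment bounds become linear constraints, so the worst-case expectation is a linear program.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical discrete instance of sup_{P in P^MM} E_P[h(x, xi)]:
# atoms s, measure bounds nu1 <= p <= nu2 pointwise, total mass 1,
# and a mean constraint E[xi] in [1.0, 2.0].
s = np.array([0.0, 1.0, 2.0, 3.0])
h = s**2                                         # cost evaluated at the atoms
nu1, nu2 = 0.05, 0.60                            # per-atom mass bounds
res = linprog(
    c=-h,                                        # maximize E[h] <=> minimize -E[h]
    A_ub=np.vstack([s, -s]), b_ub=[2.0, -1.0],   # 1.0 <= E[xi] <= 2.0
    A_eq=np.ones((1, 4)), b_eq=[1.0],            # total mass one
    bounds=(nu1, nu2), method="highs")
worst = -res.fun                                 # worst-case expected cost
```

Here the optimum is $5.75$, with both the upper mean bound and the upper mass bound $\nu_2$ at the largest atom binding, which is the typical extreme-point structure that the dual multipliers ${\boldsymbol{r}},{\boldsymbol{t}}$ in the theorem price out.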
@shapiro2004minmax focus on a special case of , where the first constraint is written as $(1-\epsilon)P^{*} \preceq P \preceq (1+\epsilon)P^{*}$, for some reference measure $P^{*}$, and they identify the coherent risk measure corresponding to the studied [DRO]{}. They further study the class of problems with convex objective function $h$ and two-stage stochastic programs. @popescu2005semidefinite [@bertsimas2005optimal; @mehrotra2014semi] study the classical problem of moments, i.e., the ambiguity set is formed via only the second constraint in . When ${\boldsymbol{f}}$ are moment functions, @mehrotra2014semi show that under mild conditions (a continuous function $h$ and a compact support $\Omega$), the optimal value of a sequence of problems of the form , in which the ambiguity set is constructed via an increasing number of moments matched to those of a reference distribution, converges to the optimal value of a problem of the form under the reference distribution. Moreover, using the SIP reformulation of , @mehrotra2014semi propose a cutting surface method to solve a convex . This method can be applied to problems where bounds on moments of arbitrary order, and possibly on nonpolynomial moments, are available.
@royset2017 study a [DRO]{} model with a decision-dependent ambiguity set, where the ambiguity set has the form of , without the second set of constraints, and the first constraint is formed via the decision-dependent cumulative distribution functions (cdf). They establish the convergence properties of the solutions to this problem by exploiting and refining results in variational analysis.
Besides @shapiro2004minmax, there are other studies that focus on special types of cost function $h$. Two-stage stochastic programs have received much attention in this class. @chen2018discrete consider a two-stage stochastic linear complementarity problem, where the underlying random data are continuously distributed. They study a distributionally robust approach to this problem, where the ambiguity set of distributions is formed via without the first constraint, and propose a discretization scheme to solve the problem. They investigate the asymptotic behavior of the approximated solution as the number of discrete partitions of the sample space $\Xi$ grows. As an application, they study a robust game in a duopoly market where two players make strategic capacity decisions for future production in anticipation of Nash-Cournot-type competition after demand uncertainty is observed. There are also studies that consider only moments of order up to 2. @ardestanijaafari2016 study a distributionally robust multi-item newsvendor problem, where the ambiguity set of distributions contains all distributions with a known budgeted support, mean, and partial first-order moments. To reformulate the problem, they propose a conservative approximation scheme for maximizing the sum of piecewise linear functions over a polyhedral uncertainty set, based on the relaxation of an associated mixed-integer LP. They show that for the studied newsvendor problem this approximation is exact and reduces to a linear program.
#### Discrete Problems
@bansal2018 study a (two-stage) distributionally robust integer program with pure binary first-stage and mixed-binary second-stage decisions on a finite set of scenarios. They propose a decomposition-based L-shaped algorithm and a cutting surface algorithm to solve the resulting model. They investigate the conditions on the ambiguity set of distributions under which the proposed algorithms are finitely convergent. They show that ambiguity sets of distributions formed via , without the first constraint, satisfy these conditions. @hanasusanto2016 study a finite adaptability scheme to approximate the following two-stage distributionally robust linear program, with binary recourse decisions and an optimized certainty equivalent as a risk measure: $$\min_{{\boldsymbol{x}}} \ \max_{P \in {\mathcal{P}}} \sset*{{\tilde{{\boldsymbol{\xi}}}}^{\top} {\boldsymbol{C}} {\boldsymbol{x}} + {{\mathcal{R}}_{P} \left[ h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \right]}}{{\boldsymbol{A}}{\boldsymbol{x}} \ge {\boldsymbol{b}}, \; {\boldsymbol{x}} \in \{0,1\}^{q_{1}} \times {\mathbb{R}}^{n-q_{1}}},$$ where $$h({\boldsymbol{x}},{\boldsymbol{\xi}})=\min_{{\boldsymbol{y}}}\sset*{{\boldsymbol{q}}^{\top}{\boldsymbol{Q}}{\boldsymbol{y}}({\boldsymbol{\xi}})}{{\boldsymbol{W}}{\boldsymbol{y}}({\boldsymbol{\xi}}) \ge {\boldsymbol{R}}{\boldsymbol{\xi}} - {\boldsymbol{T}} {\boldsymbol{x}}, \; {\boldsymbol{y}}({\boldsymbol{\xi}}) \in \{0,1\}^{q_{2}} },$$ and ${{\mathcal{R}}_{P} \left[ h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \right]}$ is an optimized certainty equivalent risk measure corresponding to the utility function $u$: ${{\mathcal{R}}_{P} \left[ h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \right]}=\inf_{\eta \in {\mathbb{R}}} \eta + {\mathbb{E}_{P} \left[ u\big(h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}})-\eta\big) \right]}$ [@bental1986; @bental2007OCE].
As an alternative to the affine recourse approximation, they pre-determine a finite set of recourse decisions here-and-now, and implement the best among them after the realization is observed. They form the ambiguity set of distributions as in but without the first constraint, where the support is assumed to be a polytope and the functions $f_{i}$ are convex piecewise linear in ${\tilde{{\boldsymbol{\xi}}}}$. They derive an equivalent mixed-integer LP for the resulting model. They also obtain, as linear programs, upper and lower bounds on the probability with which any of these recourse decisions is chosen under any ambiguous distribution. @postek2018sip study a two-stage stochastic integer program, where the second-stage problem is a mixed-integer program. They model the distributional ambiguity by all distributions whose mean and mean-absolute deviation are known. While they show that the problem reduces to a two-stage stochastic program when there are no discrete variables, they develop a general approximation framework for the [DRO]{} problem with integer variables. They apply their results to a surgery block allocation problem.
#### Risk and Chance Constraints
@bertsimas2005optimal study the worst-case bound on the probability of a multivariate random vector falling outside a semialgebraic confidence region (i.e., a set described via polynomial inequalities) over an ambiguity set of the form , where the functions ${\boldsymbol{f}}$ are represented by all polynomials of degree up to $k$. For the univariate case, they obtain the bound as an SDP. In particular, they obtain closed-form bounds when $k \le 3$. For the multivariate case, they show that such a bound can be obtained via a family of SDP relaxations, yielding a sequence of increasingly stronger, asymptotically exact upper bounds, each of which is calculated via an SDP. A special case of @bertsimas2005optimal appears in @vandenberghe2007, where the confidence region is described via linear and quadratic inequalities, and the first two moments are assumed to be known within the ambiguity set.
Building from @chen2018discrete, @liu2017 study a distributionally robust reward-risk ratio model, based on a variation of the Sharpe ratio. The ambiguity set contains all distributions whose componentwise means and covariances are restricted to intervals. They turn this problem into a model with a distributionally robust inequality constraint, and further reformulate this model as a nonconvex SIP. They approximate the semi-infinite constraint with an entropic risk measure approximation[^22] and provide an iterative method to solve the resulting model. They provide a statistical analysis to assess the likelihood of the true probability distribution lying in the ambiguity set, and a convergence analysis of the optimal values and solutions of the data-driven distributionally robust reward-risk ratio problems. The results are applied to a portfolio optimization problem.
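The entropic device referenced above can be sketched numerically (a generic illustration of the entropic risk measure as a smooth surrogate for a maximum, not the specific construction in @liu2017): for a uniform finite sample, $\frac{1}{\theta}\log\frac{1}{n}\sum_i e^{\theta x_i}$ is sandwiched between $\max_i x_i - \frac{\log n}{\theta}$ and $\max_i x_i$, so it tightens as $\theta$ grows.

```python
import numpy as np

def entropic(x, theta):
    """Entropic risk of a uniform finite sample: (1/theta) * log mean(exp(theta*x)),
    computed stably by factoring out the maximum."""
    x = np.asarray(x, dtype=float)
    m = x.max()
    return m + np.log(np.mean(np.exp(theta * (x - m)))) / theta

x = np.array([0.3, -1.2, 0.9, 0.05, 0.87])
vals = [entropic(x, t) for t in (1.0, 10.0, 100.0, 1000.0)]
# sandwich: max(x) - log(n)/theta <= entropic(x, theta) <= max(x),
# and the approximation is nondecreasing in theta
```

The lower bound follows by keeping only the largest term in the sum; the upper bound by bounding every term by the largest.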
@nemirovski2006convex study a convex approximation, referred to as [*Bernstein*]{} approximation, to an ambiguous joint chance-constrained problem of the form
\[eq: DRO\_joint\_chance\] $$\begin{aligned}
\min_{{\boldsymbol{x}} \in {\mathcal{X}} } \ & h({\boldsymbol{x}}) \\
{\text{s.t.}}\quad & \inf_{P \in {\mathcal{P}} } \ P \left \lbrace {\tilde{{\boldsymbol{\xi}}}}: g_{i0}({\boldsymbol{x}}) + \sum_{j=1}^{d} \tilde{\xi}_{j} g_{ij}({\boldsymbol{x}}) \le 0, \; i=1, \ldots, m \right \rbrace \ge 1- \epsilon.\end{aligned}$$
[(@nemirovski2006convex [Theorem 6.2])]{} \[thm: rev.Bernstein\] Suppose that the ambiguous joint chance-constrained problem is such that (i) the components of the random vector ${\tilde{{\boldsymbol{\xi}}}}$ are independent of each other, with finite-valued moment generating functions, (ii) function $h({\boldsymbol{x}})$ and all functions $g_{ij}$, $i=1, \ldots, m$, $j=0, \ldots, d$, are convex and well defined on ${\mathcal{X}}$, and (iii) the ambiguity set of probability distributions ${\mathcal{P}}$ forms a convex set. Let $\epsilon_{i}$, $i=1, \ldots, m$, be positive real values such that $\sum_{i=1}^{m} \epsilon_{i} \le \epsilon$. Then, the problem $$\begin{aligned}
\min_{{\boldsymbol{x}} \in {\mathcal{X}} } \ & h({\boldsymbol{x}}) \\
{\text{s.t.}}\quad & \inf_{t >0 } \ \big [g_{i0}({\boldsymbol{x}}) + t \hat{\Psi} (t^{-1}{\boldsymbol{z}}^{i}[{\boldsymbol{x}}]) - t \log \epsilon_{i} \big] \le 0, \; i=1, \ldots, m,
\end{aligned}$$ where ${\boldsymbol{z}}^{i}({\boldsymbol{x}})=\big(g_{i1}({\boldsymbol{x}}), \ldots, g_{id}({\boldsymbol{x}})\big)$ and $$\hat{\Psi}({\boldsymbol{z}}):=\max_{Q_{1} \times \ldots \times Q_{d} \in {\mathcal{P}}} \sum_{j=1}^{d} \log \Big( \int_{\Xi} \exp\{z_{j}s\}d Q_{j}(s) \Big),$$ is a conservative approximation of problem , i.e., every feasible solution to the approximation is feasible for the chance-constrained problem . This approximation is a convex program and is efficiently solvable, provided that all $g_{ij}$ and $\hat{\Psi}$ are efficiently computable, and ${\mathcal{X}}$ is computationally tractable.
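A hypothetical worked instance of one Bernstein constraint (ours, for illustration): take the ambiguity set to be the singleton of independent standard normal components, so each log-moment-generating function is $z_j^2/2$ and $\hat{\Psi}({\boldsymbol{z}})=\sum_j z_j^2/2$. Then $\inf_{t>0}\big[g_{0} + t\hat{\Psi}(t^{-1}{\boldsymbol{z}}) - t\log\epsilon\big] = g_{0} + \|{\boldsymbol{z}}\|\sqrt{2\log(1/\epsilon)}$, which a direct one-dimensional minimization reproduces:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical instance: independent standard normal xi_j, so the
# worst-case log-MGF term is Psi_hat(z) = sum_j z_j^2 / 2 (singleton ambiguity set).
g0 = -15.0                       # g_{i0}(x) at a fixed candidate x
z = np.array([3.0, 4.0])         # z^i(x) = (g_{i1}(x), ..., g_{id}(x))
eps = 0.05

def bernstein(t):
    # g0 + t * Psi_hat(z / t) - t * log(eps)
    return g0 + t * np.sum((z / t) ** 2) / 2.0 - t * np.log(eps)

num = minimize_scalar(bernstein, bounds=(1e-6, 1e3), method="bounded").fun
closed = g0 + np.linalg.norm(z) * np.sqrt(2.0 * np.log(1.0 / eps))
# num matches closed; the constraint holds at this x since the bound is negative
```

The optimal multiplier is $t^{*}=\|{\boldsymbol{z}}\|/\sqrt{2\log(1/\epsilon)}$, obtained by setting the derivative of $\|{\boldsymbol{z}}\|^{2}/(2t)+t\log(1/\epsilon)$ to zero.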
@hanasusanto2017ambiguous study a distributionally robust joint chance-constrained stochastic program where each chance constraint is linear in ${\tilde{{\boldsymbol{\xi}}}}$, and the technology matrix and right-hand side are affine in ${\boldsymbol{x}}$. They form the ambiguity set of distributions as in without the first constraint. They show that the pessimistic model (i.e., the chance constraint holds for every distribution in the set) is conic-representable if the technology matrix is constant in ${\boldsymbol{x}}$, the support set is a cone, and $f_{i}$ is positively homogeneous. They also show that the optimistic model (i.e., the chance constraint holds for at least one distribution in the set) is conic-representable if the technology matrix is constant in ${\boldsymbol{x}}$. They apply their results to problems in project management and image reconstruction. While their formulation is exact for the distributionally robust chance-constrained project crashing problem, the size of the formulation grows with the number of paths in the network. For other research on chance-constrained optimization problems, we refer to @xie2017optimized [@xie2018joint].
#### Statistical Learning
@fathony2018 study a distributionally robust approach to graphical models for leveraging the graphical structure among the variables. The proposed model in @fathony2018 seeks a predictor that makes a probabilistic prediction $\hat{P}(\hat{y}|{\boldsymbol{u}})$ over all possible label assignments so as to minimize the worst-case conditional expectation of the prediction loss $l(\hat{y},\bar{y})$ with respect to $\bar{P}(\bar{y}|{\boldsymbol{u}})$ as follows: $$\begin{aligned}
\min_{\hat{P}(\hat{y}|{\boldsymbol{u}})} \ \max_{\bar{P}(\bar{y}|{\boldsymbol{u}})} \ & {\mathbb{E}_{\substack{{\boldsymbol{U}} \sim \breve{P} \\ \hat{Y}|
{\boldsymbol{U}} \sim \hat{P}\\ \bar{Y}|
{\boldsymbol{U}} \sim \bar{P}}} \left[ l(\hat{Y},\bar{Y}) \right]} \\
{\text{s.t.}}\quad & {\mathbb{E}_{\substack{{\boldsymbol{U}} \sim \breve{P} \\ \bar{Y}|
{\boldsymbol{U}} \sim \bar{P}}} \left[ \Phi({\boldsymbol{U}},Y) \right]} =\breve{\Phi}, \end{aligned}$$ where $\Phi({\boldsymbol{U}}, Y)$ is a given feature function and $\breve{\Phi}={\mathbb{E}_{({\boldsymbol{U}},Y) \sim \breve{P}} \left[ \Phi({\boldsymbol{U}},Y) \right]}$. The worst-case in the above formulation is taken with respect to all conditional distributions of the predictor, conditioned on the covariates. This conditional distribution $\bar{P}(\bar{y}|{\boldsymbol{u}})$ is such that the first-order moment of the feature function $\Phi({\boldsymbol{U}}, Y)$ matches the first-order moment under the empirical joint distribution of the covariates and labels, $\breve{P}$. @fathony2018 show that the [DRO]{} approach enjoys the consistency guarantees of probabilistic graphical models, see, e.g., @lafferty2001, and has the advantage of incorporating customized loss metrics during the training as in large margin models, see, e.g., @tsochantaridis2005.
### Moment Matrix Inequalities
In this section we review an ambiguity set that generalizes both the ambiguity set ${\mathcal{P}}^{\text{DY}}$ and the ambiguity set ${\mathcal{P}}^{\text{MM}}$ as follows: $$\label{eq: rev.MMI}
{\mathcal{P}}^{MMI}:=\sset*{P\in{{\mathfrak{M}}_{+}(\Xi,{\mathcal{F}})}}{ {\boldsymbol{L}} \preccurlyeq \int_{\Xi} {\boldsymbol{F}} d P \preccurlyeq {\boldsymbol{U}}},$$ where ${\boldsymbol{F}}:=[{\boldsymbol{F}}_1,\ldots,{\boldsymbol{F}}_m]$, with ${\boldsymbol{F}}_{i}$ being a symmetric matrix in ${\mathbb{R}}^{n_{i} \times n_{i}}$, or a scalar, with measurable components on ${\left( \Xi, {\mathcal{F}} \right)}$. Similarly, let ${\boldsymbol{L}}:=[{\boldsymbol{L}}_1,\ldots,{\boldsymbol{L}}_m]$ and ${\boldsymbol{U}}:=[{\boldsymbol{U}}_1,\ldots,{\boldsymbol{U}}_m]$ be the corresponding vectors of symmetric matrices or scalars. As in , to ensure that $P$ is a probability measure, i.e., $P \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$, we set ${\boldsymbol{L}}_1={\boldsymbol{U}}_{1}= [1]_{1 \times 1}$ and ${\boldsymbol{F}}_1=[1]_{1 \times 1}$ in the above definition of ${\mathcal{P}}^{MMI}$. We generalize this ambiguity set from the one proposed in @xu2018matrix, where the moment constraints are either equalities or upper bounds. Note that as a special case of ${\mathcal{P}}^{MMI}$, we can set ${\boldsymbol{F}}_{i}$, ${\boldsymbol{L}}_{i}$, and ${\boldsymbol{U}}_{i}$ to be scalars, $i=2, \ldots, m$, to recover the second constraint in the ambiguity set ${\mathcal{P}}^{MM}$, defined in . Moreover, by setting ${\boldsymbol{F}}_{2}$ to be a matrix as $\begin{pmatrix}
-{\boldsymbol{\Sigma}}_{0} & {\boldsymbol{\mu}}_{0} -{\tilde{{\boldsymbol{\xi}}}}\\
({\boldsymbol{\mu}}_{0} -{\tilde{{\boldsymbol{\xi}}}})^{\top} & -\varrho_{1}
\end{pmatrix}$, ${\boldsymbol{F}}_{3}$ to be a matrix as $({\tilde{{\boldsymbol{\xi}}}}-{\boldsymbol{\mu}}_{0})({\tilde{{\boldsymbol{\xi}}}}-{\boldsymbol{\mu}}_{0})^{\top}$, ${\boldsymbol{L}}_{2}=-{\boldsymbol{\infty}}$, ${\boldsymbol{U}}_{2}={\boldsymbol{L}}_{3}= {\boldsymbol{0}}$, and ${\boldsymbol{U}}_{3}=\varrho_{2} {\boldsymbol{\Sigma}}_{0}$, we can recover .
Below, we present a duality result on $\sup_{P \in {\mathcal{P}}^{\text{MMI}}} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$, given a fixed ${\boldsymbol{x}} \in {\mathcal{X}}$.
\[thm: rev.dual\_MMI\] For a fixed ${\boldsymbol{x}} \in {\mathcal{X}}$, suppose that $h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$ and ${\boldsymbol{F}}$ are integrable for all $P \in {\mathcal{P}}^{\text{MMI}}$. In addition, suppose that the following Slater-type condition holds: $$(-{\boldsymbol{U}}, {\boldsymbol{L}}) \in {\text{int}\left(\left \lbrace \Big(-\int_{\Xi} {\boldsymbol{F}} d P, \int_{\Xi} {\boldsymbol{F}} d P \Big) - {\mathcal{K}} \; \Big| \; P \in {{\mathfrak{M}}_{+}(\Xi,{\mathcal{F}})}\right \rbrace\right)},$$ where ${\mathcal{K}}:={\mathcal{S}}_{+}^{n_{1}} \times \ldots \times {\mathcal{S}}_{+}^{n_{m}} \times {\mathcal{S}}_{+}^{n_{1}} \times \ldots \times {\mathcal{S}}_{+}^{n_{m}}$. If $\sup_{P \in {\mathcal{P}}^{\text{MMI}}} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$ is finite, then, it can be written as the optimal value of the following problem: $$\begin{aligned}
\inf_{{\boldsymbol{W}},{\boldsymbol{Y}}} \ & \sum_{i=1}^{m} {\boldsymbol{W}}_{i} \bullet {\boldsymbol{U}}_{i} -\sum_{i=1}^{m} {\boldsymbol{Y}}_{i} \bullet {\boldsymbol{L}}_{i} \\
\begin{split}
{\text{s.t.}}\quad & \sum_{i=1}^{m} {\boldsymbol{W}}_{i} \bullet \int_{\Xi} {\boldsymbol{F}}_{i}(s) P(ds) - \sum_{i=1}^{m} {\boldsymbol{Y}}_{i} \bullet \int_{\Xi} {\boldsymbol{F}}_{i}(s) P(ds) \\
& \quad {} \ge \int_{\Xi} h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}(s)) P(ds), \; \forall P \in {{\mathfrak{M}}_{+}(\Xi,{\mathcal{F}})},
\end{split}\\
& {\boldsymbol{W}},{\boldsymbol{Y}} \succcurlyeq {\boldsymbol{0}}.
\end{aligned}$$
Using the conic duality results from Theorem \[thm: rev.conic\_duality\], we write the dual of $\sup_{P \in {\mathcal{P}}^{\text{MMI}}} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$ as $$\begin{aligned}
\inf_{{\boldsymbol{W}},{\boldsymbol{Y}}} \ & \sum_{i=1}^{m} {\boldsymbol{W}}_{i} \bullet {\boldsymbol{U}}_{i} -\sum_{i=1}^{m} {\boldsymbol{Y}}_{i} \bullet {\boldsymbol{L}}_{i} \\
{\text{s.t.}}\quad & \sum_{i=1}^{m} {\boldsymbol{W}}_{i} \bullet {\boldsymbol{F}}_{i} - \sum_{i=1}^{m} {\boldsymbol{Y}}_{i} \bullet {\boldsymbol{F}}_{i} \succcurlyeq_{{\mathfrak{M}}_{+}^{\prime}{\left( \Xi, {\mathcal{F}} \right)}} h({\boldsymbol{x}},\cdot), \\
& {\boldsymbol{W}},{\boldsymbol{Y}} \succcurlyeq {\boldsymbol{0}},
\end{aligned}$$ where ${\mathfrak{M}}_{+}^{\prime}{\left( \Xi, {\mathcal{F}} \right)}$ is the dual cone of ${\mathfrak{M}}_{+}{\left( \Xi, {\mathcal{F}} \right)}$: $${\mathfrak{M}}_{+}^{\prime}{\left( \Xi, {\mathcal{F}} \right)}=\sset*{Z \in {\mathcal{S}}{\left( \Xi, {\mathcal{F}} \right)}}{ \int_{\Xi} Z(s) P(ds) \ge 0, \; \forall P \in {{\mathfrak{M}}_{+}(\Xi,{\mathcal{F}})}}.$$ Thus, we can write the first constraint above as $$\begin{split}
& \sum_{i=1}^{m} {\boldsymbol{W}}_{i} \bullet \int_{\Xi} {\boldsymbol{F}}_{i}(s) P(ds) - \sum_{i=1}^{m} {\boldsymbol{Y}}_{i} \bullet \int_{\Xi} {\boldsymbol{F}}_{i}(s) P(ds) \\
& \quad {} \ge \int_{\Xi} h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}(s)) P(ds), \; \forall P \in {{\mathfrak{M}}_{+}(\Xi,{\mathcal{F}})}.
\end{split}$$
The Slater-type condition ensures that the strong duality holds [@shapiro2001duality].
Suppose that every finite subset of $\Xi$ is ${\mathcal{F}}$-measurable, i.e., for every $s \in \Xi$, the corresponding Dirac measure $\delta(s)$ (of mass one at point $s$) belongs to ${{\mathfrak{M}}_{+}(\Xi,{\mathcal{F}})}$. Then, the first constraint in Theorem \[thm: rev.dual\_MMI\] can be written as follows [@shapiro2001duality]: $$\sum_{i=1}^{m} {\boldsymbol{W}}_{i}^{*} \bullet {\boldsymbol{F}}_{i}(s) - \sum_{i=1}^{m} {\boldsymbol{Y}}_{i}^{*} \bullet {\boldsymbol{F}}_{i}(s) \ge h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}(s)), \quad \forall s \in \Xi.$$
Motivated by the difficulty of verifying the Slater-type conditions that guarantee strong duality for $\sup_{P \in {\mathcal{P}}^{\text{MMI}}} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$ and its dual, @xu2018matrix investigate the duality conditions from the perspective of lower semicontinuity of the optimal value function of the inner maximization problem, with a perturbed ambiguity set. While these conditions are restrictive in general, they show that they are satisfied in the case of a compact $\Xi$ or bounded ${\boldsymbol{F}}_{i}$. @xu2018matrix present two discretization schemes to solve the resulting [DRO]{} model: (1) a cutting-plane-based exchange method that discretizes the ambiguity set ${\mathcal{P}}^{\text{MMI}}$ and (2) a cutting-plane-based dual method that discretizes the semi-infinite constraint of the dual problem. For both methods, they show the convergence of the optimal values and optimal solutions as the sample size increases. They illustrate their results on portfolio optimization and multiproduct newsvendor problems.
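A minimal sketch of the second scheme, an exchange-type cutting-plane method on the dual semi-infinite constraint, can be given on a toy instance (ours, hypothetical: a single scalar mean constraint on a gridded support, so the exact value is available from the primal LP for cross-checking):

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: sup { E_P[h] : supp P in [0,1], E_P[xi] = mu }.
# Dual SIP: min_{a,b} a + b*mu  s.t.  a + b*s >= h(s) for all s in [0,1].
grid = np.linspace(0.0, 1.0, 401)
h = np.sin(3.0 * np.pi * grid)       # a nonconvex h makes several cuts necessary
mu = 0.3

# Primal LP over the grid (reference value).
primal = linprog(-h, A_eq=np.vstack([np.ones_like(grid), grid]),
                 b_eq=[1.0, mu], bounds=(0.0, None), method="highs")
primal_val = -primal.fun

# Exchange method: start with a coarse constraint set, repeatedly solve the
# relaxed dual and add the most violated support point as a new cut.
S = [0, len(grid) - 1]
for _ in range(50):
    A_ub = np.array([[-1.0, -grid[j]] for j in S])   # -(a + b*s_j) <= -h(s_j)
    b_ub = np.array([-h[j] for j in S])
    res = linprog([1.0, mu], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * 2, method="highs")
    a, b = res.x
    viol = h - (a + b * grid)        # separation: where is the SIP constraint violated?
    j_star = int(np.argmax(viol))
    if viol[j_star] <= 1e-9:
        break                        # no violation: current dual solution is feasible
    S.append(j_star)
dual_val = a + b * mu                # matches primal_val at termination
```

At termination the relaxed dual solution is feasible for all grid constraints, so its value coincides with the full dual, and hence, by LP duality, with the primal worst-case expectation.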
### Cross-Moment or Nested Moment {#sec: rev.Wiesemann}
In an attempt to unify modeling and solving [DRO]{} models, @wiesemann2014 propose a framework for modeling the ambiguity set of probability distributions as follows: $$\label{eq: rev.WKS}
{\mathcal{P}}^{\text{WKS}}:=\sset*{{\mathbbmtt{P}}\in{{\mathfrak{P}}\left( {\mathbb{R}}^{d} \times {\mathbb{R}}^{r}, {\mathfrak{B}}({\mathbb{R}}^{d}) \times {\mathfrak{B}}({\mathbb{R}}^{r}) \right)}}{
\begin{aligned}
& {\mathbb{E}_{{\mathbbmtt{P}}} \left[ {\boldsymbol{A}}{\tilde{{\boldsymbol{\xi}}}}+{\boldsymbol{B}} {\tilde{{\boldsymbol{u}}}} \right]}={\boldsymbol{b}},\\
& {\mathbbmtt{P}}\{({\tilde{{\boldsymbol{\xi}}}},{\tilde{{\boldsymbol{u}}}}) \in {\mathcal{C}}_{i} \} \in [{\underline{p}}_{i}, {\overline{p}}_{i}], \; i \in {\mathcal{I}}
\end{aligned}
},$$ where ${\mathbbmtt{P}}$ represents a joint probability distribution of ${\tilde{{\boldsymbol{\xi}}}}$ and some auxiliary random vector ${\tilde{{\boldsymbol{u}}}} \in {\mathbb{R}}^{r}$. Moreover, ${\boldsymbol{A}} \in {\mathbb{R}}^{s \times d}$, ${\boldsymbol{B}} \in {\mathbb{R}}^{s \times r}$, ${\boldsymbol{b}} \in {\mathbb{R}}^{s}$, and ${\mathcal{I}}=\{1, \ldots, I\}$, while the confidence sets ${\mathcal{C}}_{i}$ are defined as $$\label{eq: rev.WKS_Cone}
{\mathcal{C}}_{i}:=\sset*{({\boldsymbol{\xi}},{\boldsymbol{u}}) \in {\mathbb{R}}^{d} \times {\mathbb{R}}^{r} }{{\boldsymbol{C}}_{i} {\boldsymbol{\xi}} + {\boldsymbol{D}}_{i} {\boldsymbol{u}} \preccurlyeq_{{\mathcal{K}}_{i}} {\boldsymbol{c}}_{i}},$$ with ${\boldsymbol{C}}_{i} \in {\mathbb{R}}^{L_{i} \times d}$, ${\boldsymbol{D}}_{i} \in {\mathbb{R}}^{L_{i} \times r}$, ${\boldsymbol{c}}_{i} \in {\mathbb{R}}^{L_{i}}$, and ${\mathcal{K}}_{i}$ being a proper cone. By setting ${\underline{p}}_{I}={\overline{p}}_{I}=1$, they ensure that ${\mathcal{C}}_{I}$ contains the support of the joint random vector $({\tilde{{\boldsymbol{\xi}}}},{\tilde{{\boldsymbol{u}}}})$. This set contains all distributions with prescribed conic-representable confidence sets and with mean values residing on an affine manifold. An important aspect of is that the inclusion of an auxiliary random vector ${\tilde{{\boldsymbol{u}}}}$ gives the flexibility to model a rich variety of structural information about the marginal distribution of ${\tilde{{\boldsymbol{\xi}}}}$ in a unified manner. Using this framework, @wiesemann2014 show that many ambiguity sets studied in the literature can be represented by a projection of the ambiguity set on the space of ${\tilde{{\boldsymbol{\xi}}}}$. In other words, these ambiguity sets are special cases of the ambiguity set ${\mathcal{P}}^{\text{WKS}}$. This development is based on the following lifting result.
[(@wiesemann2014 [Theorem 5])]{} \[thm: rev.lifting\_WKS\] Let ${\boldsymbol{f}} \in {\mathbb{R}}^{N}$ and ${\boldsymbol{l}}: {\mathbb{R}}^{d} \mapsto {\mathbb{R}}^{N}$ be a function with a conic-representable ${\mathcal{K}}$-epigraph, and consider the following ambiguity set: $${\mathcal{P}}^{\prime}:=\sset*{{\mathbbmtt{P}}\in{{\mathfrak{P}}({\mathbb{R}}^{d},{\mathfrak{B}}({\mathbb{R}}^{d}))}}{
\begin{aligned}
& {\mathbb{E}_{{\mathbbmtt{P}}} \left[ {\boldsymbol{l}}({\tilde{{\boldsymbol{\xi}}}}) \right]} \preccurlyeq_{{\mathcal{K}}}{\boldsymbol{f}},\\
& {\mathbbmtt{P}}\{{\tilde{{\boldsymbol{\xi}}}}\in {\mathcal{C}}_{i} \} \in [{\underline{p}}_{i}, {\overline{p}}_{i}], \; i \in {\mathcal{I}}
\end{aligned}
},$$ as well as the lifted ambiguity set $${\mathcal{P}}:=\sset*{{\mathbbmtt{P}}\in {{\mathfrak{P}}\left( {\mathbb{R}}^{d} \times {\mathbb{R}}^{N}, {\mathfrak{B}}({\mathbb{R}}^{d}) \times {\mathfrak{B}}({\mathbb{R}}^{N}) \right)} }{
\begin{aligned}
& {\mathbb{E}_{{\mathbbmtt{P}}} \left[ {\tilde{{\boldsymbol{u}}}} \right]}={\boldsymbol{f}},\\
& {\mathbbmtt{P}}\{{\boldsymbol{l}}({\tilde{{\boldsymbol{\xi}}}}) \preccurlyeq_{{\mathcal{K}}} {\tilde{{\boldsymbol{u}}}} \} = 1,\\
& {\mathbbmtt{P}}\{{\tilde{{\boldsymbol{\xi}}}}\in {\mathcal{C}}_{i} \} \in [{\underline{p}}_{i}, {\overline{p}}_{i}], \; i \in {\mathcal{I}}
\end{aligned}
},$$ which involves the auxiliary random vector ${\tilde{{\boldsymbol{u}}}} \in {\mathbb{R}}^{N}$. We have that (i) ${\mathcal{P}}^{\prime}$ is the union of all marginal distributions of ${\tilde{{\boldsymbol{\xi}}}}$ under all ${\mathbbmtt{P}} \in {\mathcal{P}}$ and (ii) ${\mathcal{P}}$ can be formulated as an instance of the ambiguity set ${\mathcal{P}}^{\text{WKS}}$ in .
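The lifting can be made concrete with a tiny discrete example (hypothetical atoms, ours), taking ${\boldsymbol{l}}(\xi)=|\xi|$, ${\mathcal{K}}={\mathbb{R}}_{+}$, and a scalar bound $f$: any joint law with $u \ge |\xi|$ almost surely and $\mathbb{E}[u]=f$ has a $\xi$-marginal with $\mathbb{E}|\xi| \le f$, i.e., the marginal lies in the original, unlifted ambiguity set.

```python
import numpy as np

# Hypothetical lifted distribution: atoms of (xi, u) with masses p.
xi = np.array([-2.0, 1.0, 3.0])
u  = np.array([ 2.5, 1.0, 3.0])           # u >= |xi| holds atom-by-atom
p  = np.array([ 0.2, 0.5, 0.3])

f = float(p @ u)                          # E[u] = f pins the lifted moment
assert np.all(u >= np.abs(xi))            # P{ l(xi) <=_K u } = 1
marginal_moment = float(p @ np.abs(xi))   # E|xi| under the xi-marginal
# marginal_moment <= f: the marginal satisfies E[l(xi)] <= f
```

Here $\mathbb{E}[u]=1.9$ while $\mathbb{E}|\xi|=1.8$; the slack comes from the first atom, where $u$ strictly dominates $|\xi|$.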
Using Theorem \[thm: rev.lifting\_WKS\], @wiesemann2014 show how an ambiguity set of the form ${\mathcal{P}}^{\text{WKS}}$, defined in , with conic-representable expectation constraints and a collection of conic-representable confidence sets, can represent ambiguity sets formed via (1) $\phi$-divergences, (2) mean, (3) mean and upper bound on the covariance matrix (i.e., a special case of the ambiguity set ), (4) coefficient of variation (i.e., the inverse of signal-to-noise ratio from information theory), (5) absolute mean spread, and (6) higher-order moment information. Moreover, they illustrate that can capture information from robust statistics, such as (7) marginal median, (8) marginal median-absolute deviation, and (9) known upper bound on the expected Huber loss function. It is worth noting that does not cover ambiguity sets that impose infinitely many moment restrictions that would be required to describe symmetry, independence, or unimodality characteristics of the distributions [@chen2018infinite].
@wiesemann2014 determine conditions under which distributionally robust expectation constraints, formed via the proposed ambiguity set , can be solved in polynomial time as follows: (i) the cost function $g_{j}$, $j=1, \ldots, m$, is convex and piecewise affine in ${\boldsymbol{x}}$ and ${\tilde{{\boldsymbol{\xi}}}}$ (i.e., $g_{j}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}):=\max_{k \in \{1,\ldots, K\}} g_{jk}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$ with $g_{jk}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}):=s_{jk}({\tilde{{\boldsymbol{\xi}}}}){\boldsymbol{x}}+t_{jk}({\tilde{{\boldsymbol{\xi}}}})$ such that $s_{jk}({\tilde{{\boldsymbol{\xi}}}})$ and $t_{jk}({\tilde{{\boldsymbol{\xi}}}})$ are affine in ${\tilde{{\boldsymbol{\xi}}}}$) and (ii) the confidence sets ${\mathcal{C}}_{i}$ satisfy a strict nesting condition. Below, we present a duality result under the above assumptions and additional regularity conditions.
[(@wiesemann2014 [Theorem 1])]{} \[thm: rev.dual\_WKS\] Consider a fixed ${\boldsymbol{x}} \in {\mathcal{X}}$. Then, under suitable regularity conditions, $\sup_{{\mathbbmtt{P}} \in {\mathcal{P}}^{\text{WKS}}} \ {\mathbb{E}_{{\mathbbmtt{P}}} \left[ g_{j}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le 0$, $j=1, \ldots, m$, is satisfied if and only if there exists ${\boldsymbol{\beta}} \in {\mathbb{R}}^{K}$, ${\boldsymbol{\kappa}}, {\boldsymbol{\lambda}} \in {\mathbb{R}}_{+}^{I}$, and ${\boldsymbol{\alpha}}_{ik} \in {{\mathcal{K}}^{\prime}}_{i}$, $i \in {\mathcal{I}}$ and $k \in \{1, \ldots, K\}$, that satisfy the following systems: $$\begin{aligned}
& {\boldsymbol{b}}^{\top}{\boldsymbol{\beta}} + \sum_{i \in {\mathcal{I}}} ({\overline{p}}_{i} \kappa_{i}- {\underline{p}}_{i} \lambda_{i}) \le 0, \\
& {\boldsymbol{c}}_{i}^{\top}{\boldsymbol{\alpha}}_{ik}+ {\boldsymbol{s}}_{k}^{\top}{\boldsymbol{x}} + {\boldsymbol{t}}_{k} \le \sum_{i^{\prime} \in \{i\} \cup {\mathcal{A}}(i)} (\kappa_{i^{\prime}}-\lambda_{i^{\prime}}), \quad \forall i \in {\mathcal{I}}, \ k \in \{1, \ldots, K\},\\
& {\boldsymbol{C}}_{i}^{\top} {\boldsymbol{\alpha}}_{ik} + {\boldsymbol{A}}^{\top}{\boldsymbol{\beta}}= {\boldsymbol{S}}^{\top}_{k} {\boldsymbol{x}} + {\boldsymbol{t}}_{k}, \quad \forall i \in {\mathcal{I}}, \ k \in \{1, \ldots, K\},\\
&{\boldsymbol{D}}_{i}^{\top} {\boldsymbol{\alpha}}_{ik} + {\boldsymbol{B}}^{\top}{\boldsymbol{\beta}}=0, \quad \forall i \in {\mathcal{I}}, \ k \in \{1, \ldots, K\},
\end{aligned}$$ where ${\mathcal{A}}(i)$ denotes the set of all $i^{\prime} \in {\mathcal{I}}$ such that ${\mathcal{C}}_{i^{\prime}}$ is strictly contained in the interior of ${\mathcal{C}}_{i}$.
The tractability of the resulting system in Theorem \[thm: rev.dual\_WKS\] depends on how the confidence sets ${\mathcal{C}}_{i}$ are described; accordingly, the system gives rise to linear, conic-quadratic, or semidefinite programs for the corresponding confidence sets ${\mathcal{C}}_{i}$. @wiesemann2014 also provide tight tractable conservative approximations for problems that violate the nesting condition by proposing an outer approximation of . They discuss several mild modifications of the conditions on ${\boldsymbol{g}}$.
There are several papers that use the ambiguity set and consider its generalizations or special cases. @chen2018infinite introduce an ambiguity set of probability distributions that is characterized by conic-representable expectation constraints and a conic-representable support set, similar to the one studied in @wiesemann2014. However, unlike @wiesemann2014, an infinite number of expectation constraints can be incorporated into the ambiguity set to describe stochastic dominance, entropic dominance, and dispersion, among others. A main result in this work is that for any ambiguity set, there exists an infinitely constrained ambiguity set such that the worst-case expected values of $h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}})$ over the two sets are equal, provided that the objective function $h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}})$ is tractable and conic-representable in ${\tilde{{\boldsymbol{\xi}}}}$ for any ${\boldsymbol{x}} \in {\mathcal{X}}$. Reformulating the resulting [DRO]{} model formed via this infinitely constrained ambiguity set yields a conic optimization problem. To solve the model, @chen2018infinite propose a procedure that consists of solving a sequence of relaxed [DRO]{} problems—each of which considers a finitely constrained ambiguity set and results in a conic optimization reformulation—and converges to the optimal value of the original [DRO]{} model. When incorporating covariance and fourth-order moment information into the ambiguity set, they show that the relaxed [DRO]{} is a SOCP. This is in contrast to @delage2010, which shows that a [DRO]{} problem formed via a fixed mean and an upper bound on the covariance is reformulated as a SDP.
@postek2018 derive an exact reformulation of the worst-case expected constraints when the function $g({\boldsymbol{x}}, \cdot)$ is convex in ${\tilde{{\boldsymbol{\xi}}}}$ and the ambiguity set consists of all distributions of componentwise independent ${\tilde{{\boldsymbol{\xi}}}}$ with known support, mean, and mean-absolute deviation information. They also obtain an exact reformulation of the resulting model when $g({\boldsymbol{x}}, \cdot)$ is concave in ${\tilde{{\boldsymbol{\xi}}}}$ and there is additional information on the probability that a component is greater than or equal to its mean. These reformulations involve a number of terms that is exponential in the dimension of ${\tilde{{\boldsymbol{\xi}}}}$. By exploiting models in which the random variables are linearly aggregated and the function $g({\boldsymbol{x}}, \cdot)$ is convex, they show how to construct upper bounds that alleviate the independence restriction and require only a linear number of terms. Under the assumption of independent random variables, they use the above results for the worst-case expected constraints to derive safe approximations to the corresponding individual chance-constrained problems.
To reduce the conservatism of robust optimization due to its constraint-wise approach and the assumption that all constraints are hard for all scenarios in the uncertainty set, @roos2018reducing propose an approach that bounds the worst-case expected total violation of constraints from above and condenses all constraints into a single constraint. They form the ambiguity set with all distributions of ${\tilde{{\boldsymbol{\xi}}}}$ with known support, mean, and mean-absolute deviation information. When the right-hand side is uncertain, they use the results in @postek2018 to show that the proposed formulation is tractable. When the left-hand side is uncertain, they use the aggregation approach introduced in @postek2018 to derive tractable reformulations. We also refer to @sun2018 for a two-stage quadratic stochastic optimization problem and @demiguel2009portfolio for a portfolio optimization problem.
@bertsimas2018adaptiveDRO develop a modular and tractable framework for solving an adaptive distributionally robust two-stage linear optimization problem with recourse of the form $$\min_{{\boldsymbol{x}}}\sset*{{\boldsymbol{c}}^{\top}{\boldsymbol{x}} +\sup_{P \in {\mathcal{P}}} {\mathbb{E}_{P} \left[ h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \right]}}{ {\boldsymbol{x}} \in {\mathcal{X}}},$$ where $$h({\boldsymbol{x}},{\boldsymbol{\xi}})=\min_{{\boldsymbol{y}}}\sset*{{\boldsymbol{q}}^{\top}{\boldsymbol{y}}({\boldsymbol{\xi}})}{{\boldsymbol{W}}{\boldsymbol{y}}({\boldsymbol{\xi}}) \ge {\boldsymbol{r}}({\boldsymbol{\xi}}) - {\boldsymbol{T}}({\boldsymbol{\xi}}) {\boldsymbol{x}}, \; {\boldsymbol{y}}({\boldsymbol{\xi}}) \in {\mathbb{R}}^{q}},$$ and the functions ${\boldsymbol{r}}({\boldsymbol{\xi}})$ and ${\boldsymbol{T}}({\boldsymbol{\xi}})$ are affinely dependent on ${\boldsymbol{\xi}}$. Both the ambiguity set of probability distributions ${\mathcal{P}}$ and the support set are assumed to be second-order conic-representable. Such an ambiguity set is a special case of the conic-representable ambiguity set . They show that the studied [DRO]{} model can be formulated as a classical RO problem with a second-order conic-representable uncertainty set. To obtain a tractable formulation, they replace the recourse decision functions ${\boldsymbol{y}}({\boldsymbol{\xi}})$ with generalized linear decision rules that have affine dependency on the uncertain parameters ${\boldsymbol{\xi}}$ and some auxiliary random variables[^23]. By adopting the approach of @wiesemann2014 to lift the ambiguity set to an extended one by introducing additional auxiliary random variables, they improve the quality of solutions and show that one can transform the adaptive [DRO]{} problem to a classical RO problem with a second-order conic-representable uncertainty set. @bertsimas2018adaptiveDRO discuss extensions to the conic-representable ambiguity set and to multistage problems. 
They also apply their results to medical appointment scheduling and single-item multiperiod newsvendor problems.
Following the approach in @bertsimas2018adaptiveDRO, @zhen2018 reformulate an adaptive distributionally robust two-stage linear optimization problem with recourse into an adaptive robust two-stage optimization problem with recourse. Then, using Fourier-Motzkin elimination, they reformulate this problem into an equivalent problem with a reduced number of adjustable variables at the expense of an increased number of constraints. Although, from a theoretical perspective, every adaptive robust two-stage optimization problem with recourse admits an equivalent static reformulation, they propose to eliminate some of the adjustable variables and, for the remaining adjustable variables, impose linear decision rules to obtain an approximate solution. They show that for problems with simplex uncertainty sets, linear decision rules are optimal, and for problems with box uncertainty sets, there exist convex two-piecewise affine functions that are optimal for the adjustable variables. By studying the medical appointment scheduling problem considered in @bertsimas2018adaptiveDRO, they show that their approach improves the solutions obtained in @bertsimas2018adaptiveDRO.
#### Statistical Learning
@gong2018 study a distributionally robust multiple linear regression model with the least absolute value cost function. They form the ambiguity set of distributions using expectation constraints over a conic-representable support set as in . They reformulate the resulting model as a conic optimization problem, based on the results in @wiesemann2014.
#### Multistage Setting
A Markov decision process with unknown distributions for the transition probabilities and rewards for each state is studied in @xu2012MDP [@xu2010MDP]. It is assumed that the parameters are statewise independent and each state belongs to only one stage. Moreover, the parameters of each state are constrained to a sequence of nested sets, such that the parameters belong to the largest set with probability one, and there is a lower bound on the probability that they belong to each of the other sets, in an increasing manner. @yu2016dMDP extend the work in @xu2012MDP [@xu2010MDP] by forming the ambiguity set of distributions as in .
### Marginals (Fréchet)
All the moment-based ambiguity sets discussed so far model the ambiguity in the joint probability distribution of the random vector ${\tilde{{\boldsymbol{\xi}}}}$. The papers reviewed in this section assume that additional information on the marginal distributions is available. We refer to the class of joint distributions with fixed marginal distributions as the [*Fréchet*]{} class of distributions [@doan2015robustness].
#### Discrete Problems
@chen2018 study a problem of the form , where the cost function $h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$ denotes the optimal value of a linear or discrete optimization problem with random linear objective coefficients. They assume that the ambiguity set of distributions is formed by all distributions with known marginals. Using techniques from optimal transport theory, they identify a set of sufficient conditions for the polynomial-time solvability of this class of problems. This generalizes the tractability results under marginal information from 0-1 polytopes, studied in @bertsimas2004probabilistic, to a class of integral polytopes. They discuss their results on four polynomial-time solvable instances, arising in the appointment scheduling problem, the max-flow problem with random arc capacities, the ranking problem with random utilities, and project scheduling problems with irregular random starting-time costs.
#### Risk and Chance Constraints
@dhara2017 provide bounds on the worst-case CVaR over an ambiguity set of discrete distributions, where the ambiguity set contains all joint distributions whose univariate marginals are fixed and whose bivariate marginals are within a minimum Kullback-Leibler distance from the nominal bivariate marginals. They develop a convex reformulation for the resulting [DRO]{}. @doan2015robustness study a [DRO]{} model of the form with an objective function that is convex piecewise linear in ${\tilde{{\boldsymbol{\xi}}}}$ and affine in ${\boldsymbol{x}}$. They form the ambiguity set of joint distributions via a Fréchet class of discrete distributions with multivariate marginals, where the components of the random vector are partitioned into possibly overlapping groups. They show that the resulting [DRO]{} model for a portfolio optimization problem is efficiently solvable with linear programming. In particular, they develop a tight linear programming reformulation to find a bound on the worst-case CVaR over such an ambiguity set, provided that the structure of the marginals satisfies a regularity condition. 
@natarajan2014 study a distributionally robust approach to minimize the worst-case CVaR of regret in combinatorial optimization problems with uncertainty in the objective function coefficients, defined as follows: $$\min_{{\boldsymbol{x}} \in {\mathcal{X}}} \ \mathrm{WCVaR}_{\alpha}^{P}\left[h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}})\right],$$ where $h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}})=- {\tilde{{\boldsymbol{\xi}}}}^{\top} {\boldsymbol{x}}+ \max_{{\boldsymbol{y}} \in \{0,1\}^{q_{1}}} {\tilde{{\boldsymbol{\xi}}}}^{\top} {\boldsymbol{y}} $ and $$\mathrm{WCVaR}_{\alpha}^{P}\left[h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}})\right]=\sup_{P \in {\mathcal{P}}} {\mathrm{CVaR}^{P}_{\alpha} \left[ h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \right]}.$$ It is assumed that the ambiguity set is formed with the knowledge of marginal distributions, where the ambiguity for each marginal distribution is formed via . They reformulate the resulting problem as a polynomial-sized mixed-integer LP when (i) the support is known, (ii) the support and mean are known, or (iii) the support, mean, and mean absolute deviation are known; and as a mixed-integer SOCP when the support, mean, and standard deviation are known. They show that the maximum-weight subset selection problem is polynomially solvable under (i) and (ii). They illustrate their results on the subset selection and shortest path problems.
@zhang2015bin study a distributionally robust approach to a stochastic bin-packing problem subject to chance constraints on the total item sizes in the bins. They form the ambiguity set of all discrete distributions with known marginal means and variances for each item size. By showing that there exists a worst-case distribution supported on at most three points, they obtain a closed-form expression for the chance constraint and reformulate the problem as a mixed-binary program. They present a branch-and-price algorithm to solve the problem, and apply their results to a surgery scheduling problem for operating rooms.
#### Statistical Learning
@farnia2016 study a [DRO]{} approach in the context of supervised learning problems to infer a function (i.e., decision rule) that predicts a response variable given a set of covariates. Motivated by the game-theoretic interpretation of @grunwald2004game and the principle of maximum entropy, they seek a decision rule that predicts the response based on a distribution that maximizes a generalized entropy function over a set of probability distributions. However, because the covariate information is available, they apply the principle of maximum entropy to the conditional distribution of the response given the covariates; see also @globerson2004 for the case of Shannon entropy. @farnia2016 form the ambiguity set of distributions by matching the marginal of the covariates to the empirical marginal of the covariates while keeping the cross-moments between the response variable and the covariates close enough (with respect to some norm) to those of the joint empirical distribution. They show that the [DRO]{} approach admits a regularization interpretation for the maximum likelihood problem under the empirical distribution. As a result, @farnia2016 recover the regularized maximum likelihood problem for generalized linear models under the following loss functions: linear regression under the quadratic loss function, logistic regression under the logarithmic loss function, and SVM under the 0-1 loss function.
@eban2014 study a [DRO]{} approach to a classification problem to minimize the worst-case hinge loss of misclassification, where the ambiguity set of the joint probability distributions of the discrete covariates and response contains all distributions that agree with nominal pairwise marginals. They show that the proposed classifier provides a 2-approximation upper bound on the worst-case expected loss using a zero-one hinge loss. @razaviyayn2015 study a [DRO]{} approach to the binary classification problem, with an ambiguity set similar to that of @eban2014, to minimize the worst-case misclassification probability. By changing the order of $\inf$ and $\sup$, and smoothing the objective function, they obtain a probability distribution, based on which they propose a randomized classifier. They show that this randomized classifier enjoys a 2-approximation upper bound on the worst-case misclassification probability of the optimal solution to the studied [DRO]{}.
### Mixture Distribution
In this section, we study [DRO]{} models where the ambiguity set is formed via a [*mixture distribution*]{}. A mixture distribution is defined as a convex combination of pdfs, known as the [*mixture components*]{}. The weights associated with the mixture components are called [*mixture probabilities*]{} [@kapsos2014]. For example, a mixture model can be defined as the set of all mixtures of normal distributions with mean $\mu$ and standard deviation $\sigma$, with parameter ${\boldsymbol{a}}=(\mu, \sigma)$ in some compact set ${\mathcal{A}} \subset {\mathbb{R}}^{2}$. In a more generic framework, the distribution $P$ can be any mixture of probability distributions $Q_{{\boldsymbol{a}}} \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$, for some family of distributions $\{Q_{{\boldsymbol{a}}}\}_{{\boldsymbol{a}} \in {\mathcal{A}}} \subseteq {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$ that depends on the parameter vector ${\boldsymbol{a}} \in {\mathcal{A}}$, as follows: $$\label{eq: rev.mixture}
P(B)=\int_{{\mathcal{A}}} Q_{{\boldsymbol{a}}}(B) \ M (d {\boldsymbol{a}}), \quad B \in {\mathcal{F}},$$ where $M$ is any probability distribution on ${\mathcal{A}}$ [@lasserre2018representation]. Hence, modeling the ambiguity in the mixture probabilities may give rise to a [DRO]{} model over the [*resultant or barycenter*]{} $P$ of $M$ [@popescu2005semidefinite].
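As a small illustration of the resultant $P$ above, take $M$ to be a discrete mixing distribution over two normal components, so the integral reduces to a weighted sum. The components, weights, and event $B=(-\infty,b]$ in the sketch below are illustrative placeholders, not taken from any of the cited papers.

```python
import math

# A minimal sketch of the mixture ("barycenter") formula: for a discrete
# mixing distribution M, P(B) = sum_a M({a}) * Q_a(B), with each
# component Q_a a normal N(mu_a, sigma_a^2) and B = (-inf, b].

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

components = [(0.0, 1.0), (2.0, 0.5)]   # (mu, sigma) pairs, the set A
weights = [0.7, 0.3]                    # M({a}) for each a in A

def mixture_prob(b):
    """P((-inf, b]) for the mixture distribution P."""
    return sum(w * normal_cdf(b, mu, s)
               for w, (mu, s) in zip(weights, components))

p = mixture_prob(1.0)
```

Any functional of $P$ (expectations, tail probabilities) decomposes the same way, which is what makes ambiguity in the mixture weights alone comparatively easy to handle.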
#### Risk and Chance Constraints
@lasserre2018representation study a distributionally robust (individual and joint) chance-constrained program with a polynomial objective function, over a mixture ambiguity set and a semi-algebraic deterministic set. They approximate the ambiguous chance constraint with a polynomial whose vector of coefficients is an optimal solution of a SDP. They show that the feasibility set induced by a nested sequence of such polynomial optimization approximation problems converges to that of the ambiguous chance constraints as the degree of the approximating polynomials increases.
@kapsos2014 introduce a probability Omega ratio for portfolio optimization (i.e., a probability weighted ratio of gains versus losses for some threshold return target). They study a distributionally robust counterpart of this ratio, where each distribution of the ratio can be represented through a mixture of some known prespecified distributions with unknown mixture probabilities. In particular, they study a mixture model for a nominal discrete distribution, where the mixture probabilities are modeled via the box uncertainty and ellipsoidal uncertainty models. In the former case, they reformulate the problem as a linear program, and in the latter case, they reformulate the problem as a SOCP.
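For intuition on why the box-uncertainty case stays tractable, consider the simpler problem of a worst-case expectation (rather than the Omega ratio itself) over box-constrained mixture probabilities. That worst case is a small LP that a greedy pass solves exactly; the setup and numbers below are illustrative and not from @kapsos2014.

```python
# Worst-case expectation over box-constrained mixture weights: the
# distribution is a mixture of K known components with unknown weights
# w in the box [lb, ub] intersected with the simplex sum(w) == 1.
# Optimizing a weighted sum over this set is an LP solvable greedily:
# start every weight at its lower bound, then pour the remaining mass
# into components in order of their expected cost.

def worst_case_expectation(mu, lb, ub, maximize=True):
    """mu[k]: expected cost under mixture component k."""
    K = len(mu)
    w = list(lb)
    slack = 1.0 - sum(lb)
    assert slack >= 0 and sum(ub) >= 1.0, "box must intersect the simplex"
    order = sorted(range(K), key=lambda k: mu[k], reverse=maximize)
    for k in order:
        add = min(ub[k] - w[k], slack)
        w[k] += add
        slack -= add
    return sum(w[k] * mu[k] for k in range(K)), w

val, w = worst_case_expectation(mu=[1.0, 3.0, 2.0],
                                lb=[0.2, 0.2, 0.2],
                                ub=[0.6, 0.6, 0.6])
# Greedy pours the free 0.4 of mass into the component with mu = 3.0.
```

The Omega ratio itself is a ratio of two such linear functionals of the weights, which is why the box case yields an LP (via a Charnes-Cooper-type transformation) and the ellipsoidal case a SOCP.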
@hanasusanto2015NV study a distributionally robust newsvendor model with a mean-risk objective, defined as a convex combination of the worst-case CVaR and the worst-case expectation. The worst case is taken over all demand distributions within a [*multimodal*]{} ambiguity set, i.e., a mixture of a finite number of modes, where the conditional information on the ellipsoidal support, mean, and covariance of each mode is known. The ambiguity in each mode is modeled via . They cast the resulting model as an exact SDP, and obtain a conservative semidefinite approximation by using quadratic decision rules to approximate the recourse decisions. @hanasusanto2015NV further robustify their model against ambiguity in estimating the mean-covariance information, caused by ambiguity about the mixture weights. They assume that the mixture weights are close to a nominal probability vector in the sense of the $\chi^2$-distance. For this case, they also obtain an exact SDP reformulation as well as a conservative SDP approximation.
Shape-Preserving Models {#sec: rev.shape}
-----------------------
A few papers propose to model the distributional ambiguity in a way that all distributions in the ambiguity set share similar structural properties. We refer to such models as [*shape-preserving*]{} models to form the ambiguity set of probability distributions. @popescu2005semidefinite propose to incorporate structural distributional information, such as symmetry, unimodality, and convexity, into a moment-based ambiguity set. The proposed ambiguity set is of the following generic form: $$\label{eq: rev.shape_set}
{\mathcal{P}}^{SP}:=\sset*{P \in{{\mathfrak{M}}_{+}(\Xi,{\mathcal{F}})}}{ \int_{\Xi} {\boldsymbol{f}} d P = {\boldsymbol{a}} } \cap \{P \ \text{satisfies structural properties}\}.$$ @popescu2005semidefinite obtains upper and lower bounds on a generalized moment of a random vector (e.g., tail probabilities), given the moments and structural constraints in a convex subset of the proposed ambiguity set . @popescu2005semidefinite uses conic duality to evaluate such lower and upper bounds via SDPs. The key to the development in @popescu2005semidefinite is to focus on ambiguity sets that possess a [*Choquet representation*]{}, where every distribution in the ambiguity set can be written as a mixture (i.e., an infinite convex combination) of measures in a generating set, as in . For univariate distributions, it is assumed that the generating set is defined by a Markov kernel. It is shown that if the optimal value of the problem is attained, there exists a worst-case probability measure that is a convex combination of $m+1$ (recall that $m$ is the dimension of ${\boldsymbol{f}}$) (extremal) probability measures from the generating set. @popescu2005semidefinite uses the above result to obtain generalized Chebyshev inequality bounds for distributions of a univariate random variable that are (1) symmetric, (2) unimodal with a given mode, (3) unimodal with bounds on the mode, (4) unimodal and symmetric, or (5) convex/concave monotone densities with bounds on the slope of the densities. @popescu2005semidefinite further derives a generalized Chebyshev inequality for symmetric and unimodal distributions of multivariate random variables. A related notion to unimodality is $\alpha$-unimodality, which is defined as follows:
[@dharmadhikari1988unimodality]{} \[def: rev.alpha\_model\] For $\alpha>0$, a distribution ${\mathbbmtt{P}} \in {{\mathfrak{P}}({\mathbb{R}}^{d},{\mathfrak{B}}({\mathbb{R}}^{d}))}$ is called $\alpha$-unimodal with mode $a$ if $\frac{{\mathbbmtt{P}}\{t (A-a)\}}{t^{\alpha}}$ is nonincreasing in $t>0$ for all $A \in {\mathcal{B}}({\mathbb{R}}^{d})$.
@vanparys2016SDP further extend the work of @popescu2005semidefinite to obtain worst-case probability bounds over $\alpha$-unimodal multivariate distributions with the same mode, within the class of distributions in ${\mathcal{P}}^{\text{DY}}$, defined in , and on a polytopic support. They show that when the support of the random vector is an open polyhedron, this generalized Gauss bound can be obtained via a SDP. Similar to @popescu2005semidefinite, @vanparys2016SDP derive semidefinite representations for worst-case probability bounds using a Choquet representation of the ambiguity set. They demonstrate that the classical generalized Chebyshev and Gauss bounds[^24] can be obtained as special cases of their result. They also show how to obtain a SDP reformulation for the worst-case bound over $\alpha$-multimodal multivariate distributions, defined via a mixture distribution.
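For a concrete sense of these classical bounds, the tight one-sided Chebyshev (Cantelli) bound uses only the mean and variance ($m = 2$ moment constraints), and it is attained by a worst-case distribution supported on at most $m + 1$ points; a two-point distribution already suffices. The numbers below are illustrative only.

```python
# Cantelli's inequality: with only mean mu and standard deviation sigma
# known, sup_P P(X >= t) = sigma^2 / (sigma^2 + (t - mu)^2) for t > mu,
# and a two-point distribution attains the supremum.

def cantelli_bound(mu, sigma, t):
    """Tight worst-case tail probability P(X >= t), t > mu."""
    assert t > mu
    d = t - mu
    return sigma**2 / (sigma**2 + d**2)

def extremal_two_point(mu, sigma, t):
    """A worst-case distribution: mass p at t, mass 1 - p at x0."""
    p = cantelli_bound(mu, sigma, t)
    x0 = mu - sigma**2 / (t - mu)   # chosen so mean and variance match
    return [(t, p), (x0, 1.0 - p)]

pts = extremal_two_point(mu=0.0, sigma=1.0, t=2.0)
# Verify the extremal distribution reproduces the prescribed moments.
mean = sum(x * q for x, q in pts)
var = sum((x - mean)**2 * q for x, q in pts)
```

The discrete worst-case measure here is exactly the "convex combination of extremal measures" phenomenon exploited in the Choquet-representation arguments above.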
By relying on information from classical statistics as well as robust statistics, @hanasusanto2015chance propose a unifying canonical ambiguity set that contains many ambiguity sets studied in the literature as special cases, including the Gauss and median-absolute-deviation ambiguity sets. Such a canonical framework is characterized by intersecting the cross-moment ambiguity set, proposed in @wiesemann2014, with a structural ambiguity set on the marginal distributions, representing information such as symmetry and $\alpha$-unimodality. As in [@popescu2005semidefinite], the key to the development in @hanasusanto2015chance is to focus on structural ambiguity sets that possess a Choquet representation. They study distributionally robust uncertainty quantification (i.e., a probabilistic objective function) and chance-constrained programs over the proposed ambiguity sets, where the safe region is characterized by a bi-affine expression in ${\tilde{{\boldsymbol{\xi}}}}$ and ${\boldsymbol{x}}$. They identify the ambiguity sets over which the resulting problems admit conic programming reformulations. A summary of these results can be found in @hanasusanto2015chance [Table 2]. A by-product of their study is to recover some results from probability theory. For instance, by studying the worst-case probability of an event over the Chebyshev ambiguity set with a known mean and an upper bound on the covariance matrix, they recover the generalized Chebyshev inequality, discovered in @popescu2005semidefinite [@vandenberghe2007]. Similarly, they recover the generalized Gauss inequality, discovered in @vanparys2016SDP, by considering the Gauss ambiguity set. Furthermore, they propose computable conservative approximations for the chance-constrained problem. 
Recognizing that the uncertainty quantification problem is tractable over a broad range of ambiguity sets, their key idea for the proposed approximation scheme is to decompose the chance-constrained problem into an uncertainty quantification problem that evaluates the worst-case probability of the chance constraint for a fixed decision ${\boldsymbol{x}}$, followed by a decision improvement procedure.
@li2017 study distributionally robust chance- and CVaR-constrained stochastic programs, where the ambiguity set contains all $\alpha$-unimodal distributions with the same first two moments, and the safe region is bi-affine in both ${\tilde{{\boldsymbol{\xi}}}}$ and ${\boldsymbol{x}}$. They show that these two ambiguous risk constraints can be cast as infinite sets of SOC constraints. They propose a separation approach to find the violated SOC constraints in an algorithmic fashion. They also derive conservative and relaxed approximations of the two sets of SOC constraints with a finite number of constraints. The approximations for the CVaR-constrained problem are based on the results in @vanparys2017structured. @hu2015 study a data-driven newsvendor problem to decide on the optimal order quantity and price. They assume that demand depends on the price; however, there is ambiguity about the price-demand function. To hedge against misspecification of the demand function, they introduce a novel approach to this problem, called the [*functionally robust*]{} approach, where the demand-price function is only known to be decreasing convex or concave. The proposed modeling approach in @hu2018 also provides a systematic view of the risk-reward trade-off of coordinating pricing and order-quantity decisions based on the size of the ambiguity set. To solve the resulting minimax model, @hu2018 reduce the problem to a univariate problem that seeks the optimal price and develop a two-sided cutting surface algorithm that generates function cuts to shrink the set of admissible functions.
To overcome the difficulty in evaluating extremal performance due to the lack of data, @lam2017tail study the computation of worst-case bounds under the geometric premise of the tail convexity. They show that the worst-case convex tail behavior is in a sense either extremely light-tailed or extremely heavy-tailed.
Kernel-Based Models {#sec: rev.kernel}
-------------------
In Sections \[sec: rev.distance\]–\[sec: rev.shape\], we discussed different sets to model the distributional ambiguity. In all the papers we reviewed in those sections, the form of the ambiguity set is endogenously chosen by decision makers. However, when facing high-dimensional uncertain parameters, it may not be practical to fix the form of the ambiguity set a priori, a task further complicated by the calibration of the different parameters describing the set (see Section \[sec: rev.calibration\]). An alternative practice is to learn the form of the ambiguity set by using unsupervised learning algorithms on the historical data. Consider a given set of data $\{({\boldsymbol{u}}^{i},{\boldsymbol{\xi}}^{i})\}_{i=1}^{N}$, where ${\boldsymbol{u}}^{i} \in {\mathbb{R}}^{m}$ is a vector of covariates associated with the uncertain parameter of interest ${\boldsymbol{\xi}}^{i} \in {\mathbb{R}}^{d}$. Let $K: {\mathbb{R}}^{d} \times {\mathbb{R}}^{d} \mapsto {\mathbb{R}}$ be a [*kernel*]{} function. @bertsimas2018predictive propose a decision framework that incorporates the covariates ${\boldsymbol{u}}$ in addition to ${\boldsymbol{\xi}}$ into the optimization problem in the form of a conditional stochastic optimization problem, where the decision maker seeks a [*predictive prescription*]{} ${\boldsymbol{x}}({\boldsymbol{u}})$ that minimizes the conditional expectation of $h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}})$ in anticipation of the future, given the observation ${\boldsymbol{u}}$. However, the conditional distribution of ${\tilde{{\boldsymbol{\xi}}}}$ given ${\boldsymbol{u}}$ is not known and should be learned from data. Given $\{({\boldsymbol{u}}^{i}, {\boldsymbol{\xi}}^{i})\}_{i=1}^{N}$, they suggest finding a data-driven predictive prescription that minimizes $\sum_{i=1}^{k} w_{k}^{i}({\boldsymbol{u}}) h({\boldsymbol{x}}, {\boldsymbol{\xi}}^{i})$ over ${\mathcal{X}}$. 
Functions $w_{k}^{i}({\boldsymbol{u}})$ are weights learned locally from the data, in a way that predictions are made based on the mean or mode of the past observations that are in some way similar to the one at hand. @bertsimas2018predictive obtain these weight functions by methods that are motivated by $k$-nearest-neighbors regression, Nadaraya-Watson kernel regression, local linear regression (in particular, LOESS), classification and regression trees (in particular, CART), and random forests. For instance, the estimate of ${\mathbb{E}_{P} \left[ h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}})\Big|{\boldsymbol{u}} \right]}$ using the Nadaraya-Watson kernel regression is obtained as $$\sum_{i=1}^{N} \frac{K_{b}({\boldsymbol{u}}-{\boldsymbol{u}}^{i})}{\sum_{i=1}^{N} K_{b}({\boldsymbol{u}}-{\boldsymbol{u}}^{i})} h({\boldsymbol{x}}, {\boldsymbol{\xi}}^{i}),$$ where $K_{b}(\cdot):=\frac{K(\frac{\cdot}{b})}{b}$ is a kernel function with bandwidth $b$. Common kernel smoothing functions are
- Naive: $K(a)= \mathbbm{1}_{[\|a\| \le 1]}$,
- Epanechnikov: $K(a)=(1- \|a\|^{2}) \mathbbm{1}_{[\|a\| \le 1]}$,
- Tri-cubic: $K(a)=(1- \|a\|^{3})^{3} \mathbbm{1}_{[\|a\| \le 1]}$,
- Gaussian or radial basis function: $K(a)=\frac{1}{\sqrt{2\pi}} \exp(- \frac{\|a\|^{2}}{2} )$.
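The Nadaraya-Watson estimate above can be sketched in a few lines. The data, bandwidth, and cost function $h$ below are illustrative placeholders (one-dimensional covariates for simplicity), not taken from @bertsimas2018predictive.

```python
import math

# Sketch of the Nadaraya-Watson weights used to estimate
# E[h(x, xi) | u] as a weighted sum over historical pairs (u^i, xi^i).

def gaussian_kernel(a):
    return math.exp(-a * a / 2.0) / math.sqrt(2.0 * math.pi)

def nw_weights(u, us, b):
    """Normalized weights K_b(u - u^i) / sum_j K_b(u - u^j)."""
    raw = [gaussian_kernel((u - ui) / b) / b for ui in us]
    total = sum(raw)
    return [r / total for r in raw]

# Toy data: covariate u^i and realized uncertain parameter xi^i.
us  = [0.0, 0.5, 1.0, 1.5]
xis = [10.0, 12.0, 18.0, 20.0]

def estimate(u, x, h, b=0.5):
    """Plug-in estimate of E[h(x, xi) | u] via the kernel weights."""
    w = nw_weights(u, us, b)
    return sum(wi * h(x, xi) for wi, xi in zip(w, xis))

# Example cost: newsvendor-style mismatch between order x and demand xi.
val = estimate(u=0.9, x=15.0, h=lambda x, xi: abs(x - xi))
```

Observations whose covariates lie near the queried $u$ receive most of the weight, so the estimate adapts to the observed covariate rather than averaging all samples equally as SAA would.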
The general framework of the proposed data-driven model in @bertsimas2018predictive resembles SAA. They show that under mild conditions, the problem is polynomially solvable and the resulting predictive prescription is asymptotically optimal and consistent. However, it is worth noting that @bertsimas2018predictive illustrate that direct usage of SAA on $\{{\boldsymbol{\xi}}^{i}\}_{i=1}^{N}$, ignoring $\{{\boldsymbol{u}}^{i}\}_{i=1}^{N}$, can result in suboptimal decisions that are neither asymptotically optimal nor consistent.
A modeling framework similar to the conditional stochastic optimization problem studied in @bertsimas2018predictive is investigated in other papers, see, e.g., @hannah2010 [@deng2018LEO; @ban2019; @ho2019], to incorporate machine learning into decision making. @deng2018LEO use regression models such as $k$-nearest-neighbors regression to learn the conditional distribution of ${\tilde{{\boldsymbol{\xi}}}}$ given ${\boldsymbol{u}}$. They study the statistical optimality of the resulting solution and its generalization error, and they provide hypothesis-based tests for model validation and selection. In @hannah2010 [@ban2019; @ho2019], the weights are obtained by the Nadaraya-Watson kernel regression method. For a newsvendor problem, @ban2019 show that the SAA decision does not converge to the true optimal decision. This motivates them to derive generalization bounds for the out-of-sample performance of the cost and the finite-sample bias from the true optimal decision. @ban2019 apply their study to the staffing levels of nurses for a hospital emergency room.
@tulabandhula2013ML incorporate machine learning into decision making. However, different from @bertsimas2018predictive, they study a framework that simultaneously seeks a best statistical model and a corresponding decision policy. In their framework, in addition to $\{({\boldsymbol{u}}^{i},{\boldsymbol{\xi}}^{i})\}_{i=1}^{N}$, a new set of unlabeled data is available that in conjunction with the statistical model affects the cost. The minimum of such a cost function over the set of possible decisions is cast as a regularization term in the objective function of the learning algorithm. @tulabandhula2013ML show that under some conditions this problem is equivalent to a robust optimization model, where the uncertainty set of the statistical model contains all models that are within $\epsilon$-optimality from the predictive model describing $\{({\boldsymbol{u}}^{i},{\boldsymbol{\xi}}^{i})\}_{i=1}^{N}$. They illustrate the form of the uncertainty set for different loss functions used in the predictive statistical model, including least squares, 0-1, logistic, exponential, ramp, and hinge losses. @tulabandhula2014combining study the application of the framework studied in @tulabandhula2013ML to a travelling repairman problem, where a repair crew seeks an optimal route to repair the nodes on a graph while the failure probabilities are unknown.
Similar to @tulabandhula2013ML, @tulabandhula2014ML use a new set of unlabeled data in addition to $\{({\boldsymbol{u}}^{i},{\boldsymbol{\xi}}^{i})\}_{i=1}^{N}$ in order to combine machine learning and decision making. However, unlike @bertsimas2018predictive, @deng2018LEO, @tulabandhula2013ML, and @tulabandhula2014combining, @tulabandhula2014ML study a robust optimization framework. Their idea for forming the uncertainty set of ${\tilde{{\boldsymbol{\xi}}}}$ is to consider a class of “good" predictive models with low training error on the data set $\{({\boldsymbol{u}}^{i},{\boldsymbol{\xi}}^{i})\}_{i=1}^{N}$. Recognizing that the uncertainty can be decomposed into the predictive model uncertainty and residual uncertainty, they form the uncertainty set as the Minkowski sum of two sets: (1) predictions of the new data set with the class of “good" predictive models, and (2) residuals of the new data set with the class of “good" predictive models. To form the class of “good" predictive models, one can use loss functions such as the least squares and hinge losses.
Similar to @bertsimas2018predictive, @bertsimas2017 consider the problem of finding an optimal solution to a data-driven stochastic optimization problem, where the uncertain parameter is affected by a large number of covariates. They study a distributionally robust approach to this problem formed via the Kullback-Leibler divergence. By borrowing ideas from the statistical bootstrap, they propose two prescriptive methods, based on the Nadaraya-Watson and nearest-neighbors learning formulations first introduced by @bertsimas2018predictive, that safeguard against overfitting and lead to improved out-of-sample performance. Both resulting prescriptive methods reduce to tractable convex optimization problems.
Kernel density estimation (KDE) [@devroye1985] in combination with [*principal component analysis*]{} (PCA) is also used in the RO literature to construct the uncertainty set [@ning2018kernel]. PCA captures the correlation between uncertain parameters and transforms the data into their corresponding uncorrelated principal components. KDE then captures the distributional information of the transformed, uncorrelated uncertain parameters along the principal components by using kernel smoothing methods. @ning2018kernel propose to use a Gaussian kernel $K$ defined between the latent uncertainty along principal component $k$, $w_{k}$, and the projected data along principal component $k$, $t_{k}$[^25]. By incorporating forward and backward deviations to allow for asymmetry [@chen2007robust], @ning2018kernel propose the following polytopic uncertainty set, which resembles the intersection of box, so-called [*budget*]{}, and polyhedral uncertainty sets: $${\mathcal{U}}=\sset*{{\boldsymbol{u}}}{
\begin{aligned}
& {\boldsymbol{u}}={\boldsymbol{\mu}}_{0}+ {\boldsymbol{V}} {\boldsymbol{w}}, \; {\boldsymbol{w}}= {\boldsymbol{{\underline{w}}}} \odot {\boldsymbol{z}}^{-} + {\boldsymbol{{\overline{w}}}} \odot {\boldsymbol{z}}^{+}, \\
& {\boldsymbol{0}} \le {\boldsymbol{z}}^{-}, {\boldsymbol{z}}^{+} \le {\boldsymbol{1}}, \; {\boldsymbol{z}}^{-}+{\boldsymbol{z}}^{+} \le {\boldsymbol{1}}, \; {\boldsymbol{1}}^{\top} ({\boldsymbol{z}}^{-}+{\boldsymbol{z}}^{+}) \le \Gamma, \\
& {\underline{{\boldsymbol{w}}}}= [F^{-1}_{1}(\alpha), \ldots, F^{-1}_{m}(\alpha)]^{\top}, \\
& {\overline{{\boldsymbol{w}}}}= [F^{-1}_{1}(1-\alpha), \ldots, F^{-1}_{m}(1-\alpha)]^{\top}
\end{aligned}
}.$$ Let us define ${\boldsymbol{U}}=[{\boldsymbol{u}}^{1}, \ldots, {\boldsymbol{u}}^{N}]^{\top}$. In the set above, ${\boldsymbol{\mu}}_{0}=\frac{1}{N} \sum_{i=1}^{N} {\boldsymbol{u}}^{i}$, and ${\boldsymbol{V}}$ is a square matrix consisting of all $m$ eigenvectors (i.e., principal components) obtained from the eigenvalue decomposition of the sample covariance matrix ${\boldsymbol{S}}=\frac{1}{N-1} ({\boldsymbol{U}}-{\boldsymbol{1}}{\boldsymbol{\mu}}_{0}^{\top})^{\top}({\boldsymbol{U}}-{\boldsymbol{1}}{\boldsymbol{\mu}}_{0}^{\top}) $. Moreover, ${\boldsymbol{z}}^{-}$ is a backward deviation vector, ${\boldsymbol{z}}^{+}$ is a forward deviation vector, and $\Gamma$ is the uncertainty budget. In addition, $F^{-1}_{k}(\alpha):=\min\{w_{k}\,|\,F_{k}(w_{k}) \ge \alpha\}$, $k=1, \ldots, m$, where $F_{k}(w_{k})$ is the cdf of $w_{k}$, whose density is estimated using KDE as $f_{k}(w_{k})=\frac{1}{N} \sum_{i=1}^{N} K_{b}(w_{k}, t_{k}^{i})$. @ning2018kernel further extend their approach to data-driven static and adaptive robust optimization.
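To make the PCA-plus-KDE construction concrete, the following minimal sketch computes ${\boldsymbol{\mu}}_{0}$, ${\boldsymbol{V}}$, and the backward/forward deviations ${\underline{{\boldsymbol{w}}}}$, ${\overline{{\boldsymbol{w}}}}$ of the polytopic set. The grid-based numerical inversion of the KDE cdf and the bandwidth choice (SciPy's default Scott rule) are implementation assumptions, not prescriptions of @ning2018kernel.

```python
import numpy as np
from scipy.stats import gaussian_kde

def pca_kde_bounds(U, alpha=0.05):
    """Return mu_0, V, and the deviations (w_under, w_over) of the polytopic
    set in the text. Quantiles F_k^{-1}(alpha) and F_k^{-1}(1 - alpha) are
    inverted numerically from a Gaussian KDE along each principal component."""
    N, m = U.shape
    mu0 = U.mean(axis=0)
    S = np.cov(U, rowvar=False)                  # sample covariance, 1/(N-1)
    _, V = np.linalg.eigh(S)                     # columns = principal components
    T = (U - mu0) @ V                            # projected data t_k^i
    w_lo, w_hi = np.empty(m), np.empty(m)
    for k in range(m):
        kde = gaussian_kde(T[:, k])              # bandwidth: SciPy's Scott rule
        grid = np.linspace(T[:, k].min() - 1.0, T[:, k].max() + 1.0, 400)
        cdf = np.array([kde.integrate_box_1d(-np.inf, g) for g in grid])
        lo = min(np.searchsorted(cdf, alpha), len(grid) - 1)
        hi = min(np.searchsorted(cdf, 1.0 - alpha), len(grid) - 1)
        w_lo[k], w_hi[k] = grid[lo], grid[hi]    # F_k^{-1}(alpha), F_k^{-1}(1-alpha)
    return mu0, V, w_lo, w_hi
```

Any ${\boldsymbol{u}}$ in the set is then recovered as ${\boldsymbol{\mu}}_{0}+{\boldsymbol{V}}{\boldsymbol{w}}$ with ${\boldsymbol{w}}$ mixing the two deviation vectors subject to the budget $\Gamma$.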
In the context of RO, [*support vector clustering*]{} (SVC) is proposed to form the uncertainty set by seeking the sphere with the smallest radius that encloses all data points mapped into a feature space [@shang2017]. In SVC, to avoid overfitting, violations of the data outside the sphere are penalized by a regularization term as follows: $$\begin{aligned}
\min_{\delta, {\boldsymbol{s}}, {\boldsymbol{c}}} \ & \delta^{2} + \frac{1}{N\gamma} \sum_{i=1}^{N} s_{i} \\
{\text{s.t.}}\quad & \|\Phi({\boldsymbol{u}}^{i})-{\boldsymbol{c}}\|_{2}^{2} \le \delta^{2} +s_{i}, \; i=1, \ldots, N,\\
& {\boldsymbol{s}}\ge {\boldsymbol{0}}. \end{aligned}$$ Dualizing the problem of finding the smallest sphere using dual multipliers ${\boldsymbol{\pi}}$ results in a quadratic problem in which the kernel function appears in the objective. It is shown that kernel functions commonly used in SVC, such as the polynomial, radial basis function, and sigmoid kernels, lead to an intractable robust counterpart problem for the corresponding uncertainty set. Hence, @shang2017 propose to use a piecewise linear kernel, referred to as a [*weighted generalized intersection kernel*]{}, defined as follows: $$\label{eq: rev.kernel}
K({\boldsymbol{u}},{\boldsymbol{v}})=\sum_{k=1}^{m} l_{k} - \|{\boldsymbol{Q}}({\boldsymbol{u}}-{\boldsymbol{v}})\|_{1},$$ where ${\boldsymbol{Q}}={\boldsymbol{S}}^{-\frac{1}{2}}$, ${\boldsymbol{S}}$ is the sample covariance matrix ${\boldsymbol{S}}=\frac{1}{N-1} \Big[ \sum_{i=1}^{N} {\boldsymbol{u}}^{i} ({\boldsymbol{u}}^{i})^{\top} - \frac{1}{N}\big(\sum_{i=1}^{N}{\boldsymbol{u}}^{i}\big) \big(\sum_{i=1}^{N}{\boldsymbol{u}}^{i}\big)^{\top} \Big]$, and $l_{k}$, $k=1, \ldots, m$, is chosen such that $l_{k} > \max_{i=1}^{N} {\boldsymbol{Q}}_{\cdot k}^{\top}{\boldsymbol{u}}^{i} -\min_{i=1}^{N} {\boldsymbol{Q}}_{\cdot k}^{\top}{\boldsymbol{u}}^{i}$. Such a kernel not only incorporates covariance information, but also gives rise to the following results.
[(@shang2017 [Propositions 1 and 3–4])]{} \[thm: shang\_SVC\] Suppose that the kernel function is constructed as in . Then,
1. The kernel matrix induced by the kernel $K$ is positive definite.
2. The constructed uncertainty set $${\mathcal{U}}=\sset*{{\boldsymbol{u}}}{
\begin{aligned}
& \exists {\boldsymbol{v}}_{i}, \; i \in {\mathcal{S}} \ {\text{s.t.}}\\
& \sum_{i \in {\mathcal{S}}} \pi_{i} {\boldsymbol{v}}_{i}^{\top} {\boldsymbol{1}} \le \epsilon, \\
& -{\boldsymbol{v}}_{i} \le {\boldsymbol{Q}}({\boldsymbol{u}}-{\boldsymbol{u}}^{i}) \le {\boldsymbol{v}}_{i}, \; i \in {\mathcal{S}}
\end{aligned}
},$$ where ${\mathcal{S}}:=\sset*{i}{\pi_{i} > 0}$, $\epsilon=\sum_{i \in {\mathcal{S}}} \pi_{i} \|{\boldsymbol{Q}}({\boldsymbol{u}}^{j}-{\boldsymbol{u}}^{i})\|_{1}$, $j \in {\mathcal{B}}$, and ${\mathcal{B}}:=\sset*{i}{0 < \pi_{i} < \frac{1}{N\gamma}}$, is a polytope; hence, the robust counterpart $\max_{{\boldsymbol{u}} \in {\mathcal{U}}} \ {\boldsymbol{u}}^{\top} {\boldsymbol{x}} \le b$ has the same complexity as the deterministic problem.
3. The regularization parameter $\gamma$ gives an upper bound on the fraction of the outliers; hence, a feasible solution ${\boldsymbol{x}}$ in the robust counterpart $\max_{{\boldsymbol{u}} \in {\mathcal{U}}} \ {\boldsymbol{u}}^{\top} {\boldsymbol{x}} \le b$ is also feasible to an SAA-based chance-constrained problem $P\{{\tilde{{\boldsymbol{u}}}}^{\top} {\boldsymbol{x}} \le b\} \ge 1-\gamma$.
4. As the number of data points increases, the fraction of outliers converges to the regularization parameter $\gamma$ with probability one.
5. The regularization parameter $\gamma$ gives a lower bound on the fraction of the support vectors.
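The weighted generalized intersection kernel of Theorem \[thm: shang\_SVC\] is simple enough to construct directly from data. The sketch below builds the Gram matrix $K(u^{i},u^{j})=\sum_{k} l_{k}-\|{\boldsymbol{Q}}(u^{i}-u^{j})\|_{1}$ with ${\boldsymbol{Q}}={\boldsymbol{S}}^{-1/2}$; the `margin` slack used to enforce $l_{k}$ strictly above the data range is a hypothetical implementation choice. By Proposition 1, the resulting Gram matrix is positive definite, and its diagonal equals $\sum_{k} l_{k}$.

```python
import numpy as np

def wgi_kernel_matrix(U, margin=1.0):
    """Gram matrix of the weighted generalized intersection kernel
    K(u, v) = sum_k l_k - ||Q(u - v)||_1, with Q = S^{-1/2} and l_k chosen
    strictly larger than the data range along each projected coordinate."""
    N, m = U.shape
    S = np.cov(U, rowvar=False)                       # sample covariance
    evals, evecs = np.linalg.eigh(S)
    Q = evecs @ np.diag(evals ** -0.5) @ evecs.T      # symmetric S^{-1/2}
    P = U @ Q                                         # row i holds Q u^i (Q symmetric)
    l = (P.max(axis=0) - P.min(axis=0)) + margin      # l_k > max_i - min_i along Q_{.k}
    K = np.empty((N, N))
    for i in range(N):
        K[i] = l.sum() - np.abs(P[i] - P).sum(axis=1) # sum_k l_k - ||Q(u^i - u^j)||_1
    return K
```

The dual SVC problem is then a quadratic program over ${\boldsymbol{\pi}}$ with this Gram matrix in its objective.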
@shang2018predictive further propose to calibrate the radius of the uncertainty set and provide a probabilistic guarantee for the proposed uncertainty set. @shang2018SVC use PCA in combination with SVC to construct the uncertainty set. By employing PCA, the data space is decomposed into the principal subspace and residual subspace. Then, they utilize the uncertainty set formed in @shang2017 to explain the variation in the principal subspace, and utilize a polyhedral set to explain noise in the residual subspace. The proposed uncertainty set is then the intersection of the above two sets. @shang2018dDROscheduling adopt the ambiguity set proposed in @wiesemann2014, and propose to use PCA to calibrate the moment functions. In fact, a moment function in their model is a piecewise linear function, which is defined as a first-order deviation of the uncertain parameter along a certain projection direction, truncated at certain points. They propose to use PCA to determine the projection directions, and choose the truncation points symmetrically around the sample mean along each direction.
Applications of the proposed method in @ning2018kernel are studied in production scheduling [@ning2018kernel] and in process network planning [@ning2018kernel; @ning2018hedging; @ning2018PCA]. The proposed method in @shang2017 is used to construct the uncertainty set in different application domains, see, e.g., control of irrigation systems [@shang2018robust] and chemical process network planning [@shang2017]. Applications of the proposed method in @shang2018dDROscheduling are studied in production scheduling [@shang2018dDROscheduling; @shang2018process] and in process network planning [@shang2018dDROscheduling; @shang2018DRO].
General Ambiguity Sets {#sec: rev.general}
----------------------
In Sections \[sec: rev.distance\]–\[sec: rev.kernel\], we reviewed papers with specific distributional and structural properties for the random parameters, captured via discrepancy-based, moment-based, shape-preserving, and kernel-based ambiguity sets. In this section, we review papers that either do not consider any specific form for the ambiguity set or provide general results for a broad class of ambiguity sets.
A unified scenario-wise format for ambiguity sets to contain both the moment-based and discrepancy-based distributional information about the ambiguous distribution is proposed in @chen2018adaptive. It is shown that ambiguity sets formed via generalized moments, mixture distributions, the Wasserstein metric, $\phi$-divergences, and $k$-means clustering, among others, can all be represented under this unified ambiguity set. The key feature of this scenario-wise ambiguity set is the introduction of a discrete random variable, which represents a finite number of scenarios that would affect the distributional ambiguity of the underlying nominal random variable. This ambiguity set can be characterized by a finite number of (conditional) expectation constraints based on generalized moments [@wiesemann2014]. For practical purposes, they restrict the ambiguity set to be second-order conic representable. Based on the scenario-wise ambiguity set, they introduce an adaptive robust optimization format that unifies the classical SP and (distributionally) RO models with recourse. They also introduce a scenario-wise affine recourse approximation to provide tractable solutions to the adaptive robust optimization model. Besides @chen2018adaptive, there are some proposals for unified models in the context of discrepancy-based, moment-based, and shape-preserving models. As mentioned before, a broad class of moment-based ambiguity sets with conic-representable expectation constraints and a collection of nested conic-representable confidence sets is proposed in @wiesemann2014, and a broad class of shape-preserving ambiguity sets is proposed in @hanasusanto2015chance.
@luo2018 study [DRO]{} problems where the ambiguity set of probability distributions can depend on the decision variables. They consider a wide range of moment- and discrepancy-based ambiguity sets, such as those formed via (1) measure and moment inequalities (see Section \[sec: rev. measure\_marginal\_moments\]), (2) bounds on moment constraints (see Section \[sec: rev.Chebyshev\]), (3) the $1$-Wasserstein metric utilizing the $\ell_1$-norm, (4) $\phi$-divergences, and (5) the Kolmogorov-Smirnov test. They present equivalent reformulations for these problems by relying on duality results.
@pflug2007 study a [DRO]{} problem, where the ambiguity exists in both the objective function and constraints as in . To solve the model, they propose an exchange method to successively generate a finite inner approximation of the ambiguity set of distributions. They show that when the ambiguity set is compact and convex, and the risk measure is jointly continuous in both ${\boldsymbol{x}}$ and ${\mathbbmtt{P}}$, then the proposed algorithm is finitely convergent.
@bansal2018conic introduce two-stage stochastic integer programs in which the second-stage problem has $p$-order conic constraints as well as integer variables. They present sufficient conditions under which the addition of parametric (non)linear cutting planes along with the linear relaxation of the integrality constraints provides a convex programming equivalent for the second-stage problem. They show that this result is also valid for the distributionally robust counterpart of this problem. This paper generalizes the results on two-stage mixed-binary linear programs studied in @bansal2018.
@bansal2018solving introduce two-stage distributionally robust disjunctive programs with disjunctive constraints in both stages and a general ambiguity set for the probability distributions. To solve the resulting model, they develop decomposition algorithms, which utilize Balas’ linear programming equivalent for deterministic disjunctive programs or his sequential convexification approach within the L-shaped method. They demonstrate that the proposed algorithms are finitely convergent if a distribution separation subproblem can be solved in a finite number of iterations, as in sets formed via ${\mathcal{P}}^{\text{MM}}$, defined in , $1$-Wasserstein metric utilizing an arbitrary norm, and the total variation distance. These algorithms generalize the distributionally robust integer L-shaped algorithm of @bansal2018 for two-stage mixed binary linear programs.
@wang2019OR study a distributionally robust chance-constrained bin-packing problem with a finite number of scenarios, where the safe region of the chance constraint is bi-affine in ${\boldsymbol{x}}$ and ${\tilde{{\boldsymbol{\xi}}}}$, with a random technology matrix. They present a binary bilinear reformulation of the problem, where the feasible region is modeled as the intersection of multiple binary bilinear knapsack constraints, a cardinality constraint, and a general (probability) knapsack constraint. They propose lifted cover valid inequalities for the binary bilinear knapsack substructure induced by a given bin and scenario, and they further obtain lifted cover inequalities that are valid for the substructure induced by each bin. They obtain valid probability cuts and incorporate them with the lifted cover inequalities in a branch-and-cut framework to solve the model. They show that the proposed algorithm is finitely convergent if a distribution separation subproblem can be solved in a finite number of iterations. @wang2019OR apply their results to an operating room scheduling problem.
@guo2017convergence study the impacts of variations of the ambiguity set of probability distributions on the optimal value and optimal solution of stochastic programs with distributionally robust chance constraints. To establish the results, they present conditions under which a sequence of approximated ambiguity sets converges to the true ambiguity set under some discrepancy measures, including the Kolmogorov and total variation distances. They apply their convergence results to the ambiguity sets formed via and Kullback-Leibler divergence.
@delage2018mip study the value of using a randomized policy, as compared to a deterministic policy, for mixed-integer [DRO]{} problems. They show that the value of randomization for such [DRO]{} models with a convex cost function $h$ and a convex risk measure is bounded by the difference between the optimal values of the nominal [DRO]{} problem and that of its convex relaxation. They show that when the risk measure is an expectation and the cost function is affine in the decision vector, this bound is tight. They also develop a column generation algorithm for solving a two-stage mixed-integer linear [DRO]{} problem, formed via and $1$-Wasserstein metric utilizing an arbitrary norm. They test their results on the assignment problem, and on uncapacitated and capacitated facility location problems.
@long2014 study a distributionally robust binary stochastic program to minimize the entropic VaR, also known as Bernstein approximation for the chance constraint. They propose an approximation algorithm to solve the problem via solving a sequence of problems. They showcase their results for ambiguity set formed as in for a stochastic shortest path problem.
@shapiro2013worst study a multistage stochastic program, where the data process can be naturally separated into two components: one can be modeled as a random process, with a known probability distribution, and the other can be treated as a random process, with a known support and no distributional information. They propose a variant of the stochastic dual dynamic programming (SDDP) method to solve this problem.
Calibration of the Ambiguity Set of Probability Distributions {#sec: rev.calibration}
=============================================================
Choice of the Nominal Parameters
--------------------------------
All discrepancy-based ambiguity sets, studied in Section \[sec: rev.distance\], and some of the moment-based ambiguity sets, studied in Section \[sec: rev.moment\], rely on some nominal input parameters, for instance, the nominal distribution $P_{0}$ in the ambiguity set ${\mathcal{P}}^{\text{W}}(P_{0}, \epsilon)$, defined in , and parameters ${\boldsymbol{\mu}}_{0}$ and ${\boldsymbol{\Sigma}}_{0}$ in the ambiguity set ${\mathcal{P}}^{\text{DY}}$, defined in . In this section, we discuss how these parameters are chosen in a data-driven setting.
The nominal distribution $P_{0}$ in the discrepancy-based ambiguity sets is usually obtained by the maximum likelihood estimator of the true unknown distribution. In the discrete case, $P_{0}$ is typically chosen as the empirical distribution of the data. In the case that the true unknown distribution is continuous, @jiang2018 and @zhao2015 propose to obtain $P_{0}$ with nonparametric kernel density estimation methods, see, e.g., @devroye1985.
@delage2010 propose to estimate ${\boldsymbol{\mu}}_{0}$ and ${\boldsymbol{\Sigma}}_{0}$ by their empirical estimates (see Section \[sec: rev.robustness\_param\] for more details on how this choice of nominal parameters, in conjunction with other assumptions, ensures that the constructed ambiguity set ${\mathcal{P}}^{\text{DY}}$ contains the true unknown probability distribution with a high probability).
Choice of Robustness Parameters {#sec: rev.robustness_param}
-------------------------------
In Section \[sec: rev.choice.ambiguity\], we reviewed different approaches to form the ambiguity set of distributions. All discrepancy-based ambiguity sets, studied in Section \[sec: rev.distance\], and some of the moment-based ambiguity sets, studied in Section \[sec: rev.moment\], rely on parameters that control the size of the ambiguity set. For instance, parameter $\epsilon$ in the ambiguity set ${\mathcal{P}}^{\text{W}}(P_{0}; \epsilon)$, defined in , and parameters $\varrho_{1}$ and $\varrho_{2}$ in the ambiguity set ${\mathcal{P}}^{\text{DY}}$, defined in , control the size of their corresponding ambiguity sets. A judicious choice of these parameters reduces the level of conservatism of the resulting [DRO]{}. A natural question is then how to choose appropriate values for these parameters.
In this section, we review different approaches to choose the level-of-robustness parameters. To have a structured review, we make a distinction between data-driven [DRO]{} and non-data-driven [DRO]{}.
### Data-Driven [DRO]{}s
Data-driven [DRO]{}s usually propose a robustness parameter that is inversely proportional to the number of available data points. This construction is motivated from the asymptotic convergence of the optimal value of [DRO]{} to that of the corresponding model under the true unknown distribution, with an increasing number of data points, see, e.g., [@pflug2007; @delage2010; @bertsimas2018RO].
An underlying assumption in data-driven methods is that data points are independently and identically distributed (i.i.d.) from the unknown distribution. Given this assumption, data-driven approaches for discrepancy-based ambiguity sets propose to choose the level of robustness by analyzing the discrepancy—with respect to some metric—between the empirical distribution and the true unknown distribution[^26], asymptotically, see, e.g., @ben2013 [@shafieezadeh2015], or with a finite sample, see, e.g., @pflug2007. A direct consequence of such analysis is that it establishes a finite-sample probabilistic guarantee on the discrepancy between the empirical distribution and the true unknown distribution. Hence, it gives rise to a probabilistic guarantee on the inclusion of the unknown distribution in the constructed set, with respect to the empirical distribution. By construction, such an ambiguity set can be interpreted as a confidence set on the true unknown distribution. Moreover, such a construction implies a finite-sample guarantee on the out-of-sample performance, so that the current optimal value provides an upper bound on the out-of-sample performance of the current solution with a high probability. A similar idea is used in moment-based ambiguity sets, see, e.g., @goldfarb2003 and @delage2010. In a recent work, @gotoh2017 propose to choose the level of robustness by trading off between the mean and variance of the out-of-sample objective function value. We refer the readers to that paper for a review of calibration approaches in [DRO]{}.
Below, we review the data-driven approaches to choosing the level of robustness in more detail. In this section, we suppose that a set $\{{\boldsymbol{\xi}}^{i}\}_{i=1}^{N}$ of i.i.d. data, distributed according to ${{\mathbbmtt{P}}^{\text{true}}}$, is available, where ${\mathbbmtt{P}}_{N}$ denotes the empirical probability distribution of the data.
#### Optimal Transport Discrepancy
When the ambiguity set contains all discrete distributions around the empirical distribution in the sense of the Wasserstein metric, @pflug2007 and @pflug2012 propose to choose the level of robustness based on a probabilistic statement on the Wasserstein metric between the empirical and true distributions, due to @dudley1969, as $\epsilon=\frac{C N^{-\frac{1}{d}}}{\alpha}$. This choice of $\epsilon$ guarantees that ${\mathbbmtt{P}}\{ {\mathfrak{d}}^{\text{W}}_{c} ({\mathbbmtt{P}}, {\mathbbmtt{P}}_{N}) \ge \epsilon\} \le \alpha$. In addition to the confidence level $1-\alpha$ and the number of available data points $N$, the proposed level of robustness in [@pflug2007; @pflug2012] depends on the dimension of ${\tilde{{\boldsymbol{\xi}}}}$, $d$, and a constant $C$. For such a Wasserstein-based ambiguity set, one can also choose the size of the set by utilizing the probabilistic statement on the discrepancy between empirical distribution and the true unknown distribution, established in @fournier2015. Nevertheless, because all the utilized probabilistic statements rely on the exogenous constant $C$, the size of the ambiguity set calculated from the theoretical analysis may be very conservative; hence, such proposals are not practical.
Acknowledging the issue raised above, some researchers propose to choose the level of robustness without relying on exogenous constants. For the case where the ambiguity set contains all discrete distributions, supported on a compact space and centered around the empirical distribution, @ji2017 derive a closed-form expression for computing the size of the Wasserstein-based ambiguity set.
[(@ji2017 [Theorem 2])]{} \[thm: rev.Was.epsilon\] Suppose that the random vector ${\tilde{{\boldsymbol{\xi}}}}$ is supported on a finite Polish space $(\Omega, d)$, where $\Omega \subseteq {\mathbb{R}}^{d}$ and $d$ is the $\ell_{1}$-norm. Choose $c(\cdot, \cdot)=d(\cdot, \cdot)$ in the definition of the optimal transport discrepancy . Assume that $$\log \int_{\Omega} e^{\lambda d({\boldsymbol{\xi}},{\boldsymbol{\xi}}_{0})} {{\mathbbmtt{P}}^{\text{true}}}(d {\boldsymbol{\xi}}) < \infty, \quad \forall \lambda >0,$$ for some ${\boldsymbol{\xi}}_{0}$. Let $\theta:=\sup\{d({\boldsymbol{\xi}}_{1}, {\boldsymbol{\xi}}_{2}): {\boldsymbol{\xi}}_{1}, {\boldsymbol{\xi}}_{2} \in \Omega\}$ be the diameter of $\Omega$. Then, $${\mathbbmtt{P}}_{N}\{{\mathfrak{d}}^{\text{W}}_{d} ({{\mathbbmtt{P}}^{\text{true}}}, {\mathbbmtt{P}}_{N}) \le \epsilon \} \ge 1- \exp\Big\{- N \Big( \frac{\sqrt{4 \epsilon (4 \theta +3) + (4 \theta +3 )^{2} }}{4 \theta + 3 }-1\Big)^{2}\Big\}.$$ Moreover, if $$\epsilon \ge \Big( \theta + \frac{3}{4}\Big) \Big(-\frac{1}{N} \log \alpha + 2 \sqrt{-\frac{1}{N} \log \alpha}\Big),$$ then $${\mathbbmtt{P}}_{N}\{{\mathfrak{d}}^{\text{W}}_{d} ({{\mathbbmtt{P}}^{\text{true}}}, {\mathbbmtt{P}}_{N}) \le \epsilon \} \ge 1-\alpha.$$
Unlike the result in @pflug2007, the proposed level of robustness in @ji2017, stated in Theorem \[thm: rev.Was.epsilon\], depends only on the confidence level $\alpha$, the number of available data points, and the diameter of the compact support $\Omega$. @ji2017 obtain this result by bounding the Wasserstein distance between two probability distributions from above, using the properties of the weighted total variation [@bolley2005], and the weighted Csiszar-Kullback-Pinsker inequality [@villani2008], and consequently applying Sanov’s large deviation theorem [@dembo1998] to reach a probabilistic statement on the Wasserstein distance between two distributions. As stated in Theorem \[thm: rev.Was.epsilon\], such a result guarantees that the constructed set contains the unknown probability distribution with a high probability. Moreover, it implies a probabilistic guarantee on the true optimal value.
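The closed-form radius of Theorem \[thm: rev.Was.epsilon\] is straightforward to compute. The minimal sketch below (with hypothetical values for $N$, $\alpha$, and the diameter $\theta$) also shows why this choice works: substituting $\epsilon=(\theta+\tfrac{3}{4})(t+2\sqrt{t})$ with $t=-\tfrac{1}{N}\log\alpha$ into the exponential bound of the theorem collapses the exponent to $-Nt=\log\alpha$, so the guaranteed probability is exactly $1-\alpha$.

```python
import numpy as np

def wasserstein_radius(N, alpha, theta):
    """Closed-form radius of Theorem [thm: rev.Was.epsilon]: the smallest eps
    guaranteeing P_N{ d_W(P_true, P_N) <= eps } >= 1 - alpha, given N i.i.d.
    samples and a support of diameter theta."""
    t = -np.log(alpha) / N
    return (theta + 0.75) * (t + 2.0 * np.sqrt(t))
```

Note that the radius depends only on $N$, $\alpha$, and $\theta$, and shrinks as $N$ grows, with no exogenous constant.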
Another criticism of methods such as those proposed in @pflug2007 and @pflug2012 is that they merely rely on the discrepancy between two probability distributions, and the optimization framework plays no role in the prescription. By making a connection between the regularization parameter and the size of the ambiguity set for Wasserstein-based sets, @blanchet2016robust aim to optimally choose the regularization parameter. A key component of their analysis is a [*robust Wasserstein profile*]{} (RWP) function. At a given solution ${\boldsymbol{x}}$, this function calculates the minimum Wasserstein distance from the nominal distribution to the set of optimal probability distributions for the inner problem at ${\boldsymbol{x}}$. For any confidence level $\alpha$, they show that the size of the ambiguity set should be chosen as the $(1-\alpha)$-quantile of the RWP function at the optimal solution to the minimization problem under the true unknown distribution. Using this selection of $\epsilon$, the optimal solution to the true problem belongs to the set of optimal solutions to the [DRO]{} problem, with $(1-\alpha)$ confidence for all ${\mathbbmtt{P}} \in {\mathcal{P}}^{\text{W}}({\mathbbmtt{P}}_{N},\epsilon)$. As such a result is based on the true optimal solution, they study the asymptotic behavior of the RWP function and discuss how to use it to optimally choose the regularization parameter without cross validation. The work in @blanchet2016robust is extended in @blanchet2017groupwise [@blanchet2016SOS]. @blanchet2017groupwise utilize the RWP function to introduce a data-driven (statistical) criterion for the optimal choice of the regularization parameter and study its asymptotic behavior. For a [DRO]{} approach to linear regression, @chen2018regression give guidance on the selection of the regularization parameter from the standpoint of a confidence region.
#### Goodness-of-Fit Test
@bertsimas2018RO propose to form the ambiguity set of distributions using the confidence set of the unknown distribution via goodness-of-fit tests. With such an approach, one chooses the level of robustness as the threshold value of the corresponding test, depending on the confidence level $\alpha$, data, and the null hypothesis.
#### Phi-Divergences {#phi-divergences}
By noting that the class of $\phi$-divergences can be used in statistical hypothesis tests, a similar approach to the one in @bertsimas2018RO can be used to choose the level of robustness for $\phi$-divergence-based ambiguity sets. For the case that the distributional ambiguity in discrete distributions is modeled via $\phi$-divergences, some papers propose to choose the level of robustness by relying on the asymptotic behavior of the discrepancy between the empirical distribution and true unknown distribution, see, e.g., @ben2013 [@bayraksan2015; @yanikoglu2012].
Suppose that $\Xi$ is a finite sample space of size $m$ and the $\phi$-divergence function in is twice continuously differentiable in a neighborhood of $1$, with $\phi^{\prime\prime}(1)>0$. Then, it is shown in @pardo2005 that under the true distribution, the statistic $\frac{2N}{\phi^{\prime\prime}(1)}{\mathcal{D}}_{\phi}({{\mathbbmtt{P}}^{\text{true}}},{\mathbbmtt{P}}_{N})$ converges in distribution to a $\chi^{2}_{m-1}$-distribution, with $m-1$ degrees of freedom. Thus, at a given confidence level $\alpha$, one can set the level of robustness to $\frac{\phi^{\prime\prime}(1)}{2N}\chi^{2}_{m-1, 1-\alpha}$, where $\chi^{2}_{m-1, 1-\alpha}$ is the $(1-\alpha)$-quantile of $\chi^{2}_{m-1}$, to obtain an (approximate) confidence set on the true unknown distribution. @ben2013 show that such a choice of the level of robustness gives a one-sided confidence interval with (asymptotically) inexact coverage on the true optimal value of $\inf_{{\boldsymbol{x}} \in {\mathcal{X}}} \ {\mathbb{E}_{{{\mathbbmtt{P}}^{\text{true}}}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} $. For corrections for small sample sizes, we refer readers to @pardo2005.
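The chi-square calibration just described is a one-line computation. A minimal sketch follows; the function name and its defaults are illustrative, with $\phi^{\prime\prime}(1)=2$ corresponding to the $\chi^{2}$-distance and $\phi^{\prime\prime}(1)=1$ to the Kullback-Leibler divergence.

```python
from scipy.stats import chi2

def phi_divergence_radius(N, m, alpha, phi_pp_1=2.0):
    """Asymptotic radius eps = phi''(1)/(2N) * chi2_{m-1, 1-alpha}, so that
    {P : D_phi(P, P_N) <= eps} is an approximate (1 - alpha) confidence set
    for the true distribution on a sample space of size m.

    phi_pp_1 : phi''(1); e.g., 2 for the chi-square distance, 1 for KL.
    """
    return phi_pp_1 / (2.0 * N) * chi2.ppf(1.0 - alpha, df=m - 1)
```

As expected from the formula, the radius shrinks at rate $1/N$ and grows with the support size $m$ and the confidence level $1-\alpha$.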
By generalizing the empirical likelihood framework [@owen2001empirical] on a separable metric space (not necessarily finite), @duchi2016 propose to choose the level of robustness $\epsilon$ such that a confidence interval $[l_{N}, u_{N}]$ on the true optimal value of $\inf_{{\boldsymbol{x}} \in {\mathcal{X}}} \ {\mathbb{E}_{{{\mathbbmtt{P}}^{\text{true}}}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} $ has an asymptotically exact coverage $1-\alpha$, i.e., $\lim_{N \rightarrow \infty} {\mathbbmtt{P}}_{N}\{\inf_{{\boldsymbol{x}} \in {\mathcal{X}}} {\mathbb{E}_{{{\mathbbmtt{P}}^{\text{true}}}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \in [l_{N},u_{N}] \}=1-\alpha$, where $$u_{N}:= \inf_{{\boldsymbol{x}} \in {\mathcal{X}}} \ \sup_{{\mathbbmtt{P}} \in {\mathcal{P}}^{\phi}({\mathbbmtt{P}}_{N}; \epsilon) } \ {\mathbb{E}_{{\mathbbmtt{P}}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]},$$ $$l_{N}:= \inf_{{\boldsymbol{x}} \in {\mathcal{X}}} \ \inf_{{\mathbbmtt{P}} \in {\mathcal{P}}^{\phi}({\mathbbmtt{P}}_{N}; \epsilon) } \ {\mathbb{E}_{{\mathbbmtt{P}}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]},$$ and $${\mathcal{P}}^{\phi}({\mathbbmtt{P}}_{N}; \epsilon):= \sset*{{\mathbbmtt{P}} \in {{\mathfrak{P}}({\mathbb{R}}^{d},{\mathfrak{B}}({\mathbb{R}}^{d}))}}{ {\mathcal{D}}_{\phi}({\mathbbmtt{P}} \| {\mathbbmtt{P}}_{N}) \le \epsilon}.$$
[(@duchi2016 [Theorem 4])]{} \[thm: rev.phi.epsilon\] Suppose that the $\phi$ function is three times continuously differentiable in a neighborhood of $1$, and normalized with $\phi(1)=\phi^{\prime}(1)=0$[^27] and $\phi^{\prime\prime}(1)=2$. Furthermore, suppose that ${\mathcal{X}}$ is compact, there exists a measurable function $M: \Omega \mapsto {\mathbb{R}}_{+}$ such that for all ${\boldsymbol{\xi}} \in \Omega$, $h(\cdot, {\boldsymbol{\xi}})$ is $M({\boldsymbol{\xi}})$-Lipschitz with respect to some norm $\|\cdot\|$ on ${\mathcal{X}}$, ${\mathbb{E}_{{{\mathbbmtt{P}}^{\text{true}}}} \left[ M({\tilde{{\boldsymbol{\xi}}}})^{2} \right]}< \infty$, and ${\mathbb{E}_{{{\mathbbmtt{P}}^{\text{true}}}} \left[ |h({\boldsymbol{x}}_{0}, {\tilde{{\boldsymbol{\xi}}}})| \right]}<\infty$ for some ${\boldsymbol{x}}_{0} \in {\mathcal{X}}$. Additionally, suppose that $h(\cdot, {\boldsymbol{\xi}})$ is proper and lower semicontinuous for ${\boldsymbol{\xi}}$, ${{\mathbbmtt{P}}^{\text{true}}}$-almost surely. If $\inf_{{\boldsymbol{x}} \in {\mathcal{X}}} \ {\mathbb{E}_{{{\mathbbmtt{P}}^{\text{true}}}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} $ has a unique solution, then $$\lim_{N \rightarrow \infty} {\mathbbmtt{P}}_{N}\{\inf_{{\boldsymbol{x}} \in {\mathcal{X}}} {\mathbb{E}_{{{\mathbbmtt{P}}^{\text{true}}}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \le u_{N} \}= 1- \frac{1}{2} P(\chi^{2}_{1} \ge N \epsilon)$$ and $$\lim_{N \rightarrow \infty} {\mathbbmtt{P}}_{N}\{\inf_{{\boldsymbol{x}} \in {\mathcal{X}}} {\mathbb{E}_{{{\mathbbmtt{P}}^{\text{true}}}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} \ge l_{N} \}= 1- \frac{1}{2} P(\chi^{2}_{1} \ge N \epsilon).$$
According to Theorem \[thm: rev.phi.epsilon\], if $\inf_{{\boldsymbol{x}} \in {\mathcal{X}}} \ {\mathbb{E}_{{{\mathbbmtt{P}}^{\text{true}}}} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]} $ has a unique solution, the desired asymptotic guarantee is achieved with the choice $\epsilon=\frac{\chi^{2}_{1, 1-\alpha}}{N}$. @duchi2016 also give rates at which $u_{N} - l_{N} \rightarrow 0$. Moreover, the upper confidence interval $(-\infty, u_N]$ is a one-sided confidence interval with asymptotically exact coverage when $\epsilon=\frac{\chi^{2}_{1, 1-2\alpha}}{N}$.
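Under the stated assumptions, both choices in this paragraph reduce to quantiles of $\chi^{2}_{1}$; a minimal sketch (function name ours):

```python
# Sketch of the asymptotic radius choices of Duchi et al.:
# chi2_{1,1-alpha}/N for the two-sided interval and
# chi2_{1,1-2*alpha}/N for the one-sided upper interval.
from scipy.stats import chi2

def duchi_radius(N, alpha, one_sided=False):
    q = 1.0 - (2.0 * alpha if one_sided else alpha)
    return chi2.ppf(q, df=1) / N

eps_two = duchi_radius(N=1000, alpha=0.05)
eps_one = duchi_radius(N=1000, alpha=0.05, one_sided=True)
```

Since $\chi^{2}_{1, 1-2\alpha} < \chi^{2}_{1, 1-\alpha}$, the one-sided choice always gives a smaller, less conservative radius.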
On another note, it can be seen from Table \[T: rev.phi\] that the $\phi$-divergence function corresponding to the variation distance is not twice differentiable at 1. Hence, one cannot use the above result. However, by utilizing the first inequality in Lemma \[lem: rev.TV\], i.e., the relationship between the variation distance and the Hellinger distance, @jiang2018 propose to set the level of robustness to $\sqrt{\frac{1}{N}\chi^{2}_{m-1, 1-\alpha}}$ in order to obtain an (approximate) confidence set on the true unknown discrete distribution. The proposed choice of the level of robustness ensures that the unknown discrete distribution belongs to the ambiguity set with a high probability. For the case that ${\tilde{{\boldsymbol{\xi}}}}$ follows a continuous distribution, the proposed level of robustness in [@jiang2018] depends on some constants that appear in the probabilistic statement of the discrepancy between the empirical distributions and the true distribution.
#### Lp-Norm
For the case that $\ell_{\infty}$-norm is used to model the distributional ambiguity, @jiang2018 propose to choose the level of robustness based on a probabilistic statement on the discrepancy between the empirical distributions and the true distribution as $\epsilon=\frac{z_{1-\frac{\alpha}{2}}}{\sqrt{N}}\max_{i=1}^{m} \ \sqrt{{p_{0}}^{i}(1-{p_{0}}^{i})}$, where $z_{1-\frac{\alpha}{2}}$ represents the $(1-\frac{\alpha}{2})$-quantile of the standard normal distribution, and ${\boldsymbol{p}}_{0}:=[{p_{0}}^{1}, \ldots, {p_{0}}^{m}]$ denotes the empirical distribution of data. The proposed choice of the level of robustness ensures that the unknown discrete distribution belongs to the ambiguity set with a high probability. Similar to the $\ell_{1}$-norm (i.e., the variation distance) case, when ${\tilde{{\boldsymbol{\xi}}}}$ follows a continuous distribution, the proposed level of robustness depends on some constants that appear in the probabilistic statement of the discrepancy between the empirical distributions and the true distribution.
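The $\ell_{\infty}$ prescription above is a one-line computation once the empirical pmf is in hand; the sketch below (names ours) assumes a discrete distribution over $m$ atoms.

```python
# Sketch of the l_inf-norm calibration above: z_{1-alpha/2}/sqrt(N) times
# the largest binomial standard deviation sqrt(p(1-p)) over the atoms of
# the empirical pmf p0. Names are illustrative.
import numpy as np
from scipy.stats import norm

def linf_radius(p0, N, alpha):
    p0 = np.asarray(p0, dtype=float)
    z = norm.ppf(1.0 - alpha / 2.0)
    return z / np.sqrt(N) * np.sqrt(p0 * (1.0 - p0)).max()

eps = linf_radius([0.2, 0.3, 0.5], N=400, alpha=0.05)
```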
#### Zeta-Structure
By exploiting the relationship between different metrics in the $\zeta$-structure family, see, e.g., Lemma \[lem: rev.zeta\], @zhao2015 provide guidelines on how to choose the level of robustness for the ambiguity sets of the unknown discrete distribution formed via bounded Lipschitz, Kantorovich, and Fortet-Mourier metrics as follows.
\[thm: rev.zeta.epsilon\] Suppose that the random vector ${\tilde{{\boldsymbol{\xi}}}}$ is supported on a bounded space $\Omega$ and $\theta$ denotes the diameter of $\Omega$, as defined in Theorem \[thm: rev.Was.epsilon\].
1. if $\epsilon \ge \theta \sqrt{-2 \frac{\log \alpha}{N} }$, then ${\mathbbmtt{P}}_{N}\{{\mathfrak{d}}^{\text{K}} ({{\mathbbmtt{P}}^{\text{true}}}, {\mathbbmtt{P}}_{N}) \le \epsilon \} \ge 1-\alpha$ and ${\mathbbmtt{P}}_{N}\{{\mathfrak{d}}^{\text{BL}} ({{\mathbbmtt{P}}^{\text{true}}}, {\mathbbmtt{P}}_{N}) \le \epsilon \} \ge 1-\alpha$.
2. if $\epsilon \ge \theta \max\{1, \theta^{q-1}\} \sqrt{-2 \frac{\log \alpha}{N} }$, then ${\mathbbmtt{P}}_{N}\{{\mathfrak{d}}^{\text{FM}} ({{\mathbbmtt{P}}^{\text{true}}}, {\mathbbmtt{P}}_{N}) \le \epsilon \} \ge 1-\alpha$.
The proof is immediate from the relationship between $\zeta$-structure metrics, stated in Lemma \[lem: rev.zeta\], and the fact that ${\mathbbmtt{P}}_{N}\{{\mathfrak{d}}^{\text{K}} ({{\mathbbmtt{P}}^{\text{true}}}, {\mathbbmtt{P}}_{N}) \le \epsilon \} \ge 1- \exp\{-\frac{\epsilon^{2}N}{2 \theta^{2}}\}$ due to @zhao2015 [Proposition 3].
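The radius prescriptions of Theorem \[thm: rev.zeta.epsilon\] are simple closed forms in the support diameter $\theta$, sample size $N$, and confidence level $1-\alpha$; a sketch with our own function names:

```python
# Sketch of the zeta-structure radii above: theta*sqrt(-2*log(alpha)/N)
# for the Kantorovich and bounded Lipschitz metrics, inflated by
# max(1, theta^(q-1)) for the Fortet-Mourier metric.
import math

def zeta_radius_kantorovich(theta, N, alpha):
    """Smallest radius in item 1 (Kantorovich / bounded Lipschitz)."""
    return theta * math.sqrt(-2.0 * math.log(alpha) / N)

def zeta_radius_fortet_mourier(theta, N, alpha, q):
    """Smallest radius in item 2 (Fortet-Mourier of order q)."""
    return theta * max(1.0, theta ** (q - 1)) * math.sqrt(-2.0 * math.log(alpha) / N)
```

For $\theta \le 1$ the two radii coincide; otherwise the Fortet-Mourier radius is inflated by the factor $\theta^{q-1}$.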
As can be seen from Theorem \[thm: rev.zeta.epsilon\], the proposed levels of robustness for the case that the unknown distribution is discrete depend on the diameter of $\Omega$, the number of data points $N$, and the confidence level $1-\alpha$. However, the results in @zhao2015 for the continuous case suffer from similar practical issues as in [@pflug2007; @pflug2012; @jiang2018].
#### Chebyshev {#chebyshev}
A data-driven approach to construct a Chebyshev ambiguity set is proposed in @goldfarb2003. Recall the linear model for the asset returns ${\tilde{{\boldsymbol{\xi}}}}$ in @goldfarb2003: ${\tilde{{\boldsymbol{\xi}}}}={\boldsymbol{\mu}} + {\boldsymbol{A}} {\tilde{{\boldsymbol{f}}}} + {\tilde{{\boldsymbol{\epsilon}}}}$, where ${\boldsymbol{\mu}}$ is the vector of mean returns, ${\tilde{{\boldsymbol{f}}}} \sim N({\boldsymbol{0}}, {\boldsymbol{\Sigma}})$ is the vector of random returns that drive the market, ${\boldsymbol{A}}$ is the factor loading matrix, and ${\tilde{{\boldsymbol{\epsilon}}}} \sim N({\boldsymbol{0}}, {\boldsymbol{B}})$ is the vector of residual returns, with ${\boldsymbol{B}}$ a diagonal covariance matrix. Under the assumption that the covariance matrix ${\boldsymbol{\Sigma}}$ is known, recall that @goldfarb2003 study three different models to form the uncertainty in ${\boldsymbol{B}}$, ${\boldsymbol{A}}$, and ${\boldsymbol{\mu}}$ as follows: $$\begin{aligned}
& {\mathcal{U}}_{{\boldsymbol{B}}}=\sset*{{\boldsymbol{B}}}{{\boldsymbol{B}}=\text{diag}({\boldsymbol{b}}), \; b_{i} \in [{\underline{b}}_{i}, {\overline{b}}_{i}], \; i=1, \ldots, d},\\
& {\mathcal{U}}_{{\boldsymbol{A}}}=\sset*{{\boldsymbol{A}}}{{\boldsymbol{A}}={\boldsymbol{A}}_{0}+ {\boldsymbol{C}}, \; \|{\boldsymbol{c}}_{i}\|_{g} \le \rho_{i}, \; i=1, \ldots, d},\\
& {\mathcal{U}}_{{\boldsymbol{\mu}}}=\sset*{{\boldsymbol{\mu}}}{{\boldsymbol{\mu}}={\boldsymbol{\mu}}_{0}+ {\boldsymbol{\zeta}}, \; |\zeta_{i}| \le \gamma_{i}, \; i=1, \ldots, d},\end{aligned}$$ where ${\boldsymbol{c}}_{i}$ denotes the $i$-th column of ${\boldsymbol{C}}$, and $\|{\boldsymbol{c}}_{i}\|_{g}=\sqrt{{\boldsymbol{c}}_{i}^{\top} {\boldsymbol{G}} {\boldsymbol{c}}_{i}}$ denotes the elliptic norm of ${\boldsymbol{c}}_{i}$ with respect to a symmetric positive definite matrix ${\boldsymbol{G}}$. Calibrating the uncertainty sets ${\mathcal{U}}_{{\boldsymbol{B}}}$, ${\mathcal{U}}_{{\boldsymbol{A}}}$, and ${\mathcal{U}}_{{\boldsymbol{\mu}}}$ involves choosing parameters ${\underline{b}}_{i}$, ${\overline{b}}_{i}$, $\rho_{i}$, $\gamma_{i}$, $i=1, \ldots, d$, vector ${\boldsymbol{\mu}}_{0}$, and matrices ${\boldsymbol{A}}_{0}$ and ${\boldsymbol{G}}$. Assuming that a set of data points is available on ${\tilde{{\boldsymbol{\xi}}}}$ and ${\tilde{{\boldsymbol{f}}}}$, by relying on multivariate linear regression, @goldfarb2003 obtain least-squares estimates $({\boldsymbol{\mu}}_{0},{\boldsymbol{A}}_{0})$ of $({\boldsymbol{\mu}},{\boldsymbol{A}})$, respectively, and construct a multidimensional confidence region of $({\boldsymbol{\mu}},{\boldsymbol{A}})$ around $({\boldsymbol{\mu}}_{0},{\boldsymbol{A}}_{0})$. Now, projecting this confidence region along matrix ${\boldsymbol{A}}$ and vector ${\boldsymbol{\mu}}$ gives the corresponding uncertainty sets ${\mathcal{U}}_{{\boldsymbol{A}}}$ and ${\mathcal{U}}_{{\boldsymbol{\mu}}}$, respectively. To form the uncertainty set ${\mathcal{U}}_{{\boldsymbol{B}}}$, they propose to use a bootstrap confidence interval around the regression error of the residual.
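The regression step can be sketched as follows: a minimal least-squares fit of $({\boldsymbol{\mu}}_{0}, {\boldsymbol{A}}_{0})$ under the stated factor model, omitting the confidence-region projections and the bootstrap interval for the residual variances. All names are ours.

```python
# Minimal sketch of the regression step in the Goldfarb-Iyengar calibration:
# least-squares estimates (mu_0, A_0) of (mu, A) in xi = mu + A f + eps,
# from paired observations of asset returns xi and factor returns f.
import numpy as np

def factor_model_ls(xi, f):
    """xi: (N, d) asset returns, f: (N, k) factor returns.
    Returns mu_0 of shape (d,), A_0 of shape (d, k), and (N, d) residuals."""
    N = f.shape[0]
    X = np.hstack([np.ones((N, 1)), f])            # regress xi on [1, f]
    coef, *_ = np.linalg.lstsq(X, xi, rcond=None)  # coef has shape (k+1, d)
    mu0, A0 = coef[0], coef[1:].T
    resid = xi - X @ coef
    return mu0, A0, resid
```

The residuals feed the bootstrap step for ${\mathcal{U}}_{{\boldsymbol{B}}}$, which is not shown here.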
#### Delage and Ye {#delage-and-ye}
Data-driven methods to construct the ambiguity set ${\mathcal{P}}^{\text{DY}}$ are proposed in @delage2010.
[(@delage2010 [Corollary 4])]{} \[thm: rev.DelageYe.calibrration\] Suppose that the random vector ${\tilde{{\boldsymbol{\xi}}}}$ is supported on a bounded space $\Omega$. Consider the following parameters: $$\begin{aligned}
& \hat{{\boldsymbol{\mu}}}_{0}=\frac{1}{N} \sum_{i=1}^{N} {\boldsymbol{\xi}}^{i},\\
& \hat{{\boldsymbol{\Sigma}}}_{0}=\frac{1}{N-1} \sum_{i=1}^{N} ({\boldsymbol{\xi}}^{i}- \hat{{\boldsymbol{\mu}}}_{0})({\boldsymbol{\xi}}^{i}- \hat{{\boldsymbol{\mu}}}_{0})^{\top},\\
& \hat{\theta}= \max_{i=1, \ldots, N} \|\hat{{\boldsymbol{\Sigma}}}_{0}^{-\frac{1}{2}} ({\boldsymbol{\xi}}^{i}- \hat{{\boldsymbol{\mu}}}_{0})\|_{2},
\end{aligned}$$ where $\hat{{\boldsymbol{\mu}}}_{0}$, $\hat{{\boldsymbol{\Sigma}}}_{0}$, and $\hat{\theta}$ are estimates of the mean, covariance, and diameter of the support of ${\tilde{{\boldsymbol{\xi}}}}$, respectively. Moreover, for a confidence level $1-\alpha$, let us define $$\begin{aligned}
& \bar{\theta}=\Big( 1- (\hat{\theta}^{2}+2) \frac{2 + \sqrt{2 \log (\frac{4}{\bar{\alpha}})}}{\sqrt{N}}\Big)^{-\frac{1}{2}} \hat{\theta},\\
& \bar{\gamma}_{1}= \frac{\bar{\theta}^{2}}{\sqrt{N}} \Big( \sqrt{1- \frac{d}{\bar{\theta}^{4}}} + \sqrt{\log{(\frac{4}{\bar{\alpha}})}} \Big),\\
& \bar{\gamma}_{2}=\frac{\bar{\theta}^{2}}{N} \Big( 2 + \sqrt{2 \log{(\frac{2}{\bar{\alpha}})}}\Big),\\
& \bar{\varrho}_{1}= \frac{\bar{\gamma}_{2}}{1- \bar{\gamma}_{1}- \bar{\gamma}_{2}},\\
& \bar{\varrho}_{2}= \frac{1+ \bar{\gamma}_{2}}{1- \bar{\gamma}_{1}- \bar{\gamma}_{2}},
\end{aligned}$$ where $\bar{\alpha}=1- \sqrt{1-\alpha}$. Let ${\mathcal{P}}^{\text{DY}}(\Omega, \hat{{\boldsymbol{\mu}}}_{0}, \hat{{\boldsymbol{\Sigma}}}_{0}, \bar{\varrho}_{1}, \bar{\varrho}_{2})$ be the ambiguity set formed via , using parameters $\hat{{\boldsymbol{\mu}}}_{0}$, $\hat{{\boldsymbol{\Sigma}}}_{0}$, $\bar{\varrho}_{1}$, and $\bar{\varrho}_{2}$. Then, we have $${\mathbbmtt{P}}_{N}\{{{\mathbbmtt{P}}^{\text{true}}}\in {\mathcal{P}}^{\text{DY}}(\Omega, \hat{{\boldsymbol{\mu}}}_{0}, \hat{{\boldsymbol{\Sigma}}}_{0}, \bar{\varrho}_{1}, \bar{\varrho}_{2})\} \ge 1- \alpha.$$
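The parameter computations in the corollary translate directly to code. The sketch below is a plain transcription of the formulas above (names ours); it is well-defined only when the sample is large enough relative to the estimated diameter, so that the square roots stay real and $1-\bar{\gamma}_{1}-\bar{\gamma}_{2}$ stays positive.

```python
# Sketch: empirical inputs and inflation factors for the Delage-Ye ambiguity
# set, transcribed from the corollary above. Requires N large relative to the
# estimated diameter; otherwise the square roots or denominator fail.
import math
import numpy as np

def delage_ye_parameters(xi, alpha):
    N, d = xi.shape
    mu0 = xi.mean(axis=0)
    Sigma0 = np.cov(xi, rowvar=False)                  # 1/(N-1) normalization
    L_inv = np.linalg.inv(np.linalg.cholesky(Sigma0))
    theta_hat = np.linalg.norm((xi - mu0) @ L_inv.T, axis=1).max()
    abar = 1.0 - math.sqrt(1.0 - alpha)
    shrink = 1.0 - (theta_hat**2 + 2.0) * (2.0 + math.sqrt(2.0 * math.log(4.0 / abar))) / math.sqrt(N)
    theta_bar = theta_hat / math.sqrt(shrink)
    g1 = theta_bar**2 / math.sqrt(N) * (math.sqrt(1.0 - d / theta_bar**4) + math.sqrt(math.log(4.0 / abar)))
    g2 = theta_bar**2 / N * (2.0 + math.sqrt(2.0 * math.log(2.0 / abar)))
    rho1 = g2 / (1.0 - g1 - g2)
    rho2 = (1.0 + g2) / (1.0 - g1 - g2)
    return mu0, Sigma0, rho1, rho2
```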
### Non-Data-Driven [DRO]{}s
As mentioned before, data-driven [DRO]{}s typically assume that a set of i.i.d. sampled data is available from the unknown true distribution. In many situations, however, there is no guarantee that the future uncertainty is drawn from the same distribution. Recognizing this fact, some research is devoted to choosing the level of robustness in situations where the i.i.d. assumption is violated and data-driven methods to calibrate the level of robustness may be unsuitable.
@rahimian2019NV use the notions of maximal effective subsets and prices of optimism/pessimism and nominal/worst-case regrets to calibrate the level of robustness in discrepancy-based [DRO]{} models. The price of optimism is defined as the loss incurred by being too optimistic (i.e., using an SO model with the nominal distribution), and hence implementing the corresponding solution, while [DRO]{} accurately represents the ambiguity in the distribution. Similarly, the price of pessimism is defined as the loss incurred by being too pessimistic (i.e., using an RO model with no distributional information except for the support of the uncertainty). The nominal (worst-case) regret is defined as the loss of being unnecessarily ambiguous (not being ambiguous enough), and hence implementing the corresponding solution, while [DRO]{} is ill-calibrated. @rahimian2019NV suggest balancing the prices of optimism and pessimism if the decision-maker is indifferent to the error from using too optimistic or too pessimistic a solution. They refer to the smallest level of robustness for which such a balance happens as the [*indifferent-to-solution*]{} level of robustness. On the other hand, @rahimian2019NV propose to balance the nominal and worst-case regrets if the decision-maker wants to be indifferent to the error from using an ill-calibrated [DRO]{} model in either the optimistic or the pessimistic scenario. They refer to the smallest level of robustness for which such a balance happens as the [*indifferent-to-distribution*]{} level of robustness.
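The balancing idea can be sketched abstractly: given two error curves in the level of robustness, one nondecreasing and one nonincreasing, bisection finds the smallest level at which they meet. The curves below are stand-ins, not the actual prices or regrets of @rahimian2019NV.

```python
# Abstract sketch: bisect for the smallest epsilon balancing two monotone
# error curves (e.g., price of optimism vs. price of pessimism). The
# monotonicity assumptions and all names are ours, for illustration only.
def balance_level(rising_cost, falling_cost, lo, hi, tol=1e-9):
    """Find epsilon in [lo, hi] with rising_cost(eps) ~= falling_cost(eps),
    assuming rising_cost is nondecreasing and falling_cost nonincreasing."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rising_cost(mid) < falling_cost(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Stand-in curves: e^2 rises, 1 - e falls; they balance at (sqrt(5)-1)/2.
eps_star = balance_level(lambda e: e**2, lambda e: 1.0 - e, 0.0, 1.0)
```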
Cost Function of the Inner Problem {#sec: rev.cost_inner}
==================================
Recall formulation and the functional ${\mathcal{R}}_{P}: {\mathcal{Z}} \mapsto {\mathbb{R}}$. This functional quantifies the uncertainty in the outcomes of a fixed decision ${\boldsymbol{x}} \in {\mathcal{X}}$, for a given fixed probability measure $P \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$. As pointed out before in Section \[sec: rev.generic\_model\] for and , one choice for this functional is the expectation operator. Other functionals, such as the [*regret function*]{}, [*risk measure*]{}, and [*utility function*]{}, have also been used in the [DRO]{} literature. These functionals are closely related concepts; we refer to @bental2007OCE and [@rockafellar2015] for a comprehensive treatment of how one can be induced from another. In this section, we review some notable works where the regret function, risk measure, and utility function are used to capture the uncertainty in the outcomes of the decision.
Regret Function {#sec: rev.regret}
---------------
Given a decision ${\boldsymbol{x}} \in {\mathcal{X}}$ and a probability measure $P \in {\mathfrak{M}}{\left( \Xi, {\mathcal{F}} \right)}$, a regret functional ${\mathcal{V}}_{P}$ may quantify the expected displeasure or disappointment of the current decision with respect to a possible mix of future outcomes as follows: $${{\mathcal{V}}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}:= {\mathbb{E}_{P} \left[ h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}})- \min_{{\boldsymbol{x}}^{\prime} \in {\mathcal{X}}} \ h({\boldsymbol{x}}^{\prime}, {\tilde{{\boldsymbol{\xi}}}}) \right]}.$$ In other words, ${{\mathcal{V}}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}$ calculates the expected additional loss that could have been avoided had the uncertainty been observed beforehand. This definition of regret function is used in @natarajan2014 and @hu2011budget in the context of combinatorial optimization and multicriteria decision-making, respectively. Another way of formulating a regret function is $${{\mathcal{V}}_{P} \left[ h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \right]}:= {\mathbb{E}_{P} \left[ h({\boldsymbol{x}}, {\tilde{{\boldsymbol{\xi}}}}) \right]}- \min_{{\boldsymbol{x}}^{\prime} \in {\mathcal{X}}} \ {\mathbb{E}_{P} \left[ h({\boldsymbol{x}}^{\prime}, {\tilde{{\boldsymbol{\xi}}}}) \right]}.$$ This type of regret function is used in @perakis2008regret in the context of the newsvendor problem. @perakis2008regret obtain closed-form solutions to distributionally robust single-item newsvendor problems that minimize the worst-case expected regret of acting optimally, where only (1) support, (2) mean, (3) mean and median, and (4) mean and variance information is available. This information can be captured with the ambiguity set ${\mathcal{P}}^{\text{MM}}$, defined in .
@perakis2008regret also study the ambiguity sets that preserve the shape of the distribution, including information on (1) mean and symmetry, (2) support and unimodality with a given mode, (3) median and unimodality with a given mode, and (4) mean, symmetry, and unimodality with a given mode.
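On a finite scenario set, the two regret notions above can be computed side by side. The sketch below (names ours) also illustrates that the first, ex-post notion is never smaller than the second, ex-ante one, since the minimum inside the expectation can adapt to each scenario.

```python
# Sketch: ex-post regret E[h(x,xi) - min_x' h(x',xi)] versus ex-ante regret
# E[h(x,xi)] - min_x' E[h(x',xi)] on a finite decision/scenario grid.
import numpy as np

def regrets(H, x_idx, probs):
    """H[i, j]: loss of decision i in scenario j; x_idx: chosen decision;
    probs: scenario probabilities. Returns (ex_post, ex_ante) regret."""
    probs = np.asarray(probs, dtype=float)
    exp_loss = H @ probs                       # E[h(x, xi)] for each decision
    ex_post = (H[x_idx] - H.min(axis=0)) @ probs
    ex_ante = exp_loss[x_idx] - exp_loss.min()
    return ex_post, ex_ante

H = np.array([[1.0, 3.0], [2.0, 2.0]])
ex_post, ex_ante = regrets(H, x_idx=0, probs=[0.5, 0.5])
```

Here both decisions have expected loss $2$, so the ex-ante regret is $0$, while the ex-post regret is $0.5$.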
Risk Measure {#sec: rev.risk}
------------
As introduced in Section \[sec: rev.dro\_coherent\], a functional that quantifies the uncertainty in the outcomes of a decision is a risk measure @artzner1999 [@acerbi2002; @kusuoka2001; @shapiro2013kusuoka]. A risk measure $\rho_{P}$ usually satisfies some [*averseness*]{} property, i.e., ${\rho_{P} \left[ \cdot \right]}>{\mathbb{E}_{P} \left[ \cdot \right]}$ and imposes a preference order on random variables, i.e., if $Z, Z^{\prime} \in {\mathcal{Z}}$ and $Z \ge Z^{\prime}$, then ${\rho_{P} \left[ Z \right]} \ge {\rho_{P} \left[ Z^{\prime} \right]}$. Explicit incorporation of a risk measure into a [DRO]{} model has also received attention in the literature. We refer to @pflug2012 [@pichler2013; @wozabal2014; @pichler2017] for spectral and distortion risk measures, @calafiore2007 for variance, @calafiore2007 for mean absolute-deviation, @hanasusanto2016 [@wiesemann2014] for optimized certainty equivalent, @hanasusanto2015NV for CVaR, and @postek2016 for a variety of risk measures.
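As one concrete instance of such a functional, the sketch below computes an empirical CVaR on equally weighted scenarios and checks the averseness property numerically. The tail-average form and all names are ours; it coincides with the usual definition when $\alpha N$ is an integer.

```python
# Sketch: empirical CVaR_alpha as the average of the worst (1 - alpha)
# fraction of losses, one concrete risk measure usable as the functional
# R_P discussed above. Assumes equally weighted scenarios and alpha < 1.
import numpy as np

def cvar(losses, alpha):
    losses = np.sort(np.asarray(losses, dtype=float))
    k = int(np.ceil(alpha * len(losses)))      # drop the best k scenarios
    return losses[k:].mean()

losses = [1.0, 2.0, 3.0, 4.0]
risk = cvar(losses, alpha=0.5)                 # average of {3, 4} = 3.5
```

With $\alpha = 0$ the measure reduces to the plain expectation, and for $\alpha > 0$ it is never below it, matching the averseness property.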
Utility Function {#sec: rev.utility}
----------------
An alternative to using risk measures to compare random variables is to evaluate their expected utility @gilboa1989. As before, let us consider a probability space ${\left( \Xi, {\mathcal{F}}, P \right)}$. A random variable $Z \in {\mathcal{Z}}$ is preferred over a random variable $Z^{\prime} \in {\mathcal{Z}}$ if ${\mathbb{E}_{P} \left[ u(Z) \right]} \ge {\mathbb{E}_{P} \left[ u(Z^{\prime}) \right]}$ for a given univariate utility function $u$[^28]. A bounded utility function $u$ can be normalized to take values between $0$ and $1$, and hence, it can be interpreted as a cdf of a random variable $\zeta$, i.e., $u(t)=P\{\zeta \le t\}$ for $t \in {\mathbb{R}}$. Under this interpretation, with $\zeta$ independent of $Z$ and $Z^{\prime}$, $Z$ is preferred over $Z^{\prime}$ if $P\{Z \ge \zeta\} \ge P\{Z^{\prime} \ge \zeta\} $ because $${\mathbb{E}_{P} \left[ u(Z) \right]}={\mathbb{E}_{P} \left[ P\{\zeta \le Z| Z\} \right]}={\mathbb{E}_{P} \left[ {\mathbb{E}_{P} \left[ {\mathbbm{1}_{\{\zeta \le Z\}}}|Z \right]} \right]}={\mathbb{E}_{P} \left[ {\mathbbm{1}_{\{\zeta \le Z\}}} \right]}=P\{\zeta \le Z\}.$$ However, just as it is difficult in decision theory to have complete knowledge of a decision maker’s preference (i.e., utility function), it is also difficult to have complete knowledge of the cdf of $\zeta$. The notion of [*stochastic dominance*]{} handles this issue by comparing the expected utility of random variables for a given family ${\mathcal{U}}$ of utility functions, or equivalently, comparing the probability of exceeding the target random variable $\zeta$ for a given family of cdfs. Consequently, to address the problem of ambiguity in the decision maker’s utility or, equivalently, the cdf of the random variable $\zeta$, one can study $$\label{eq: Utility_Obj}
\min_{{\boldsymbol{x}} \in {\mathcal{X}} } \ \max_{\zeta \in {\mathcal{U}}} \ P\{h({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \ge \zeta\},$$ and $$\label{eq: Utility_Cons}
\min_{{\boldsymbol{x}} \in {\mathcal{X}} } \ \sset*{h({\boldsymbol{x}})}{\max_{\zeta \in {\mathcal{U}} } \ P\{{\boldsymbol{g}}({\boldsymbol{x}},{\tilde{{\boldsymbol{\xi}}}}) \ge \zeta\} \le {\boldsymbol{0}}},$$ where ${\mathcal{U}}$ denotes a given family of normalized and nondecreasing utility functions, or equivalently, a given family of cdfs. Note that problems and have the form of problems and , respectively. @hu2015utility study problems of the form , where ${\mathcal{U}}$ is further restricted to include concave utility functions, or equivalently, cdfs, that satisfy functional bounds on the utility and marginal utility functions (cdf and pdf of $\zeta$) as in . They provide a linear programming formulation of a particular case where the bounds on the utility function are piecewise linear increasing concave functions, and the bounds on all other functions are step functions. For the general continuous case, they study an approximation problem by discretizing the continuous functions, and analyze the convergence properties of the approximated problem. They apply their results to a portfolio optimization problem. Unlike @hu2015, in @hu2018, no shape restrictions on the utility function are assumed and only functional bounds on the utility function are enforced. @hu2018 show that an SAA approach to the Lagrangian dual of the resulting problem can be used while solving a mixed-integer LP. They study the convergence properties of this SAA problem, and illustrate their results using examples in portfolio optimization and a streaming bandwidth allocation problem. @bertsimas2010minmax study a [DRO]{} model of the form , where a convex nondecreasing disutility function is used to quantify the uncertainty in the decision. A utility function is closely related to risk measures [@hu2015utility]. 
For instance, for a given probability measure, the expected utility might have the form of a combination of expectation and expected excess beyond a target, or an optimized certainty equivalent risk measure. As shown in @bental2007OCE, under appropriate choices of utility functions, an optimized certainty equivalent risk measure can be reduced to the mean-variance and the mean-CVaR formulations. @wiesemann2014 study a [DRO]{} model formed via , where the decision maker is risk-averse via a nondecreasing convex piecewise affine disutility function. In particular, they investigate shortfall risk and optimized certainty equivalent risk measures.
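The reduction mentioned above can be checked numerically: in its loss-based form, the optimized certainty equivalent $\min_{\eta}\{\eta + \mathbb{E}[u(Z-\eta)]\}$ with the piecewise-linear disutility $u(t)=\max(t,0)/(1-\alpha)$ recovers $\mathrm{CVaR}_{\alpha}$ on a discrete distribution. The grid-search implementation and names below are ours; for a discrete $Z$, an optimal $\eta$ can be taken at an atom.

```python
# Sketch: loss-based optimized certainty equivalent min_eta {eta + E[u(Z-eta)]};
# with u(t) = max(t, 0)/(1 - alpha) it coincides with CVaR_alpha, as in the
# Ben-Tal and Teboulle reduction. Searching eta over atoms suffices here.
import numpy as np

def oce(losses, probs, u):
    losses = np.asarray(losses, dtype=float)
    probs = np.asarray(probs, dtype=float)
    return min(eta + probs @ u(losses - eta) for eta in losses)

losses, probs, alpha = [1.0, 2.0, 3.0, 4.0], [0.25] * 4, 0.5
u = lambda t: np.maximum(t, 0.0) / (1.0 - alpha)
val = oce(losses, probs, u)                    # CVaR_0.5 of {1,2,3,4} = 3.5
```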
Unlike the above discussion, many decision-making problems involve comparing random vectors. One can generalize the notion of utility-based comparison to random vectors by using multivariate utility functions [@armbruster2015]. Another approach to compare random vectors is based on the idea of the weighted scalarization of random vectors. For the case that the weights are deterministic and take value in an arbitrary set, we refer to @dentcheva2009 for unrestricted sets, @homemdemello2009 [@hu2011budget; @hu2012MO] for polyhedral sets, and @hu2012SAA for convex sets. For instance, @hu2011budget study a weighted sum approach to a multiobjective budget allocation problem under uncertain performance indicators of projects. They assume that the weights take value in the convex hull of the weights suggested by experts and study a minmax approach to the expected weighted sum problem, where the expectation is taken with respect to the uncertainty in the performance indicators and the worst-case is taken with respect to the weights. Note that the problem studied in @hu2011budget is in the framework of RO as the weights are deterministic.
The idea of using stochastic weights, governed by a probability measure that determines the relative importance of each vector of weights, is also introduced in @hu2012MO and @hu2014weighted. For instance, @hu2012MO study a [DRO]{} approach to stochastically weighted multiobjective deterministic and stochastic optimization problems, where the weights are perturbed along different rays from a reference weight vector. They study the reformulations of the deterministic problem for the cases where the weights take values in (1) a polyhedral set, including those induced by a simplex, $\ell_{1}$-norm, and $\ell_{\infty}$-norm, and (2) a conic-representable set, including those induced by a single cone (e.g., $\ell_{p}$-norm, ellipsoids), intersection of multiple cones, and union of multiple cones. They further study the stochastic optimization problem. For the case that the weights and random parameters are independent, and the ambiguity in the probability distribution of weights is modeled via , they obtain a reformulation of the problem using the result in @delage2010. For the case that the weights and random parameters are dependent, they also obtain reformulations of the resulting problem by utilizing the result from the deterministic case. They illustrate the ideas set forth in the paper using examples from disaster planning and agriculture revenue management problems.
Modeling Toolboxes {#sec: rev.toolboxes}
==================
@goh2011 develop a MATLAB-based algebraic modeling toolbox, named ROME, for a class of [DRO]{} problems with conic-representable sets for the support and mean, known covariance matrix, and upper bounds on the directional deviations studied in @goh2010tractable. @goh2011 illustrate the practicability of this toolbox in the context of (1) a service-constrained inventory management problem, (2) a project-crashing problem, and (3) a portfolio optimization problem. A C++-based algebraic modeling package, named ROC, is developed in @bertsimas2018adaptiveDRO to demonstrate the practicability and scalability of the studied adaptive [DRO]{} model. Some features of ROC include the declaration of uncertain parameters and linear decision rules, the transcription of ambiguity sets, and the reformulation of [DRO]{} problems using the results obtained in @bertsimas2018adaptiveDRO. A brief introduction to ROC, with illustrative examples of how to declare the objects of a model (variables, constraints, and the ambiguity set, among others), is given in an early version of @bertsimas2014practicable. XProg (<http://xprog.weebly.com>) is a MATLAB-based algebraic modeling package that also implements the model proposed in @bertsimas2018adaptiveDRO. @chen2018adaptive develop an algebraic modeling package, AROMA, to illustrate the modeling power of their proposed ambiguity set.
[^1]: Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, IL 60208 ().
[^2]: Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, IL 60208 ().
[^3]: We say a safe region of the form ${\boldsymbol{a}}({\boldsymbol{x}})^{\top} {\tilde{{\boldsymbol{\xi}}}}\le {\boldsymbol{b}}({\boldsymbol{x}})$ is bi-affine in ${\boldsymbol{x}}$ and ${\boldsymbol{\xi}}$ if ${\boldsymbol{a}}({\boldsymbol{x}}) $ and ${\boldsymbol{b}}({\boldsymbol{x}})$ are both affine in ${\boldsymbol{x}}$. Similarly, we say a safe region of the form ${\boldsymbol{a}}({\tilde{{\boldsymbol{\xi}}}})^{\top} {\boldsymbol{x}} \le {\boldsymbol{b}}({\tilde{{\boldsymbol{\xi}}}})$ is bi-affine in ${\boldsymbol{x}}$ and ${\boldsymbol{\xi}}$ if ${\boldsymbol{a}}({\tilde{{\boldsymbol{\xi}}}})$ and ${\boldsymbol{b}}({\tilde{{\boldsymbol{\xi}}}})$ are both affine in ${\tilde{{\boldsymbol{\xi}}}}$. Observe that a bi-affine safe region of the form ${\boldsymbol{a}}({\boldsymbol{x}})^{\top} {\tilde{{\boldsymbol{\xi}}}}\le {\boldsymbol{b}}({\boldsymbol{x}})$ can be equivalently written as a bi-affine safe region of the form ${\boldsymbol{a}}({\tilde{{\boldsymbol{\xi}}}})^{\top} {\boldsymbol{x}} \le {\boldsymbol{b}}({\tilde{{\boldsymbol{\xi}}}})$, and vice versa.
[^4]: In this paper, we use ${\mathcal{P}}$ to denote both an ambiguity set of probability measures and an ambiguity set of distributions induced by ${\tilde{{\boldsymbol{\xi}}}}$. Whether ${\mathcal{P}}$ denotes an ambiguity set of probability measures or an ambiguity set of distributions induced by ${\tilde{{\boldsymbol{\xi}}}}$ should be understood from the context and the distinction we make between the notation of a probability measure and a probability distribution.
[^5]: Recall that for a function $Z \in {\mathcal{Z}}$, $\|Z\|_{\infty}=\operatorname*{ess\,sup}_{s \in \Xi} |Z(s)|$, where $\operatorname*{ess\,sup}_{s \in \Xi} |Z(s)|=\inf\Big\{\sup_{s \in \Xi} |Z^{\prime}(s)| \; \Big| \; Z(s)=Z^{\prime}(s) \ \text{a.e.} \ s \in \Xi \Big\}$. Also, for a measure $P \in {\mathcal{Z}}^{*}$, $\|P\|_{1}=\int_{\Xi} |d P| $.
[^6]: It is said a partial order relation induces a [*lattice structure*]{} on ${\mathcal{Z}}$ if the least upper bound exists for any $Z, Z^{\prime} \in {\mathcal{Z}}$ [@shapiro2014SP]. A Banach space ${\mathcal{Z}}$ with lattice structure is called [*Banach lattice*]{} if $Z, Z^{\prime} \in {\mathcal{Z}}$ and $|Z| \ge |Z^{\prime}|$ implies $\|Z\| \ge \|Z^{\prime}\|$ [@shapiro2014SP].
[^7]: A set of constraints is called a safe or conservative approximation of the chance constraint if the feasible region induced by the approximation is a subset of the feasible region induced by the chance constraint.
[^8]: There is another stream of research that approximates by CVaR or its approximations, see, e.g., @chen2007robust [@chen2009goal; @chen2010joint] and references there in.
[^9]: One can in turn seek a safe approximation to . For example, one stream of such approximations includes using Chebyshev’s inequality, see, e.g., @popescu2005semidefinite [@bertsimas2005optimal], Bernstein’s inequality, see, e.g., @nemirovski2006convex, or Hoeffding’s inequality. We review such safe approximations to in Section \[sec: rev.choice.ambiguity\] in details.
[^10]: One can similarly define the optimal transport discrepancy between two probability distributions ${\mathbbmtt{P}}_{1}$ and ${\mathbbmtt{P}}_{2}$ induced by ${\tilde{{\boldsymbol{\xi}}}}$.
[^11]: One can similarly define the $\phi$-divergence between two probability distributions ${\mathbbmtt{P}}_{1}$ and ${\mathbbmtt{P}}_{2}$ induced by ${\tilde{{\boldsymbol{\xi}}}}$.
[^12]: The study of [SIP]{}s is pioneered by @haar1924, and followed up in @charnes1962duality [@charnes1963duality; @charnes1969theory], which focus on linear [SIP]{}s. The first- and second-order optimality conditions of general SIP are also obtained in @hettich1977conditions [@hettich1978SIP; @hettich1995; @nuernberger1985SIP; @nuernberger1985Opt; @still1999]. For reviews of the theory and methods for [SIP]{}s, we refer the readers to @hettich1993 [@reemtsen1998; @lopez2007].
[^13]: For an optimization problem of the form $z^{*}=\min\sset*{\alpha({\boldsymbol{x}})}{\beta({\boldsymbol{x}}) \le {\boldsymbol{0}}}$, a point ${\boldsymbol{x}}_{0}$ is an $\epsilon$-optimal solution if $\beta({\boldsymbol{x}}_{0}) \le {\boldsymbol{0}}$ and $\alpha({\boldsymbol{x}}_{0}) \le z^{*}+ \epsilon$.
[^14]: For an optimization problem of the form $z^{*}=\min\sset*{\alpha({\boldsymbol{x}})}{\beta({\boldsymbol{x}}) \le {\boldsymbol{0}}}$, a point ${\boldsymbol{x}}_{0}$ is an $\epsilon$-feasible solution if $\beta({\boldsymbol{x}}_{0}) \le {\boldsymbol{\epsilon}}$.
[^15]: Wasserstein metric of order $1$ is sometimes referred to as [*Kantorovich*]{} metric. Wasserstein metric of order $\infty$ is defined as $\inf_{\pi\in \Pi(P_{1},P_{2})} \pi\textrm{-}\operatorname*{ess\,sup}\ d(s_{1},s_{2})$, where $\pi\textrm{-}\operatorname*{ess\,sup}_{\Xi \times \Xi} \ [\cdot]$ is the essential supremum with respect to measure $\pi$: $\pi\textrm{-}\operatorname*{ess\,sup}_{\Xi \times \Xi} \ d(s_{1},s_{2})=\inf\{a \in {\mathbb{R}}: \pi(s \in \Xi: \exists s^{\prime} \in \Xi \ {\text{s.t.}}\ d(s,s^{\prime})>a)=0\}$.
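As a small illustration of ours (not part of the survey): for two one-dimensional empirical distributions with the same number of atoms, the order-$1$ (Kantorovich) Wasserstein distance reduces to the average absolute difference of the sorted samples, since the monotone coupling is optimal in 1-D; shifting a distribution by a constant $c$ therefore costs exactly $c$.

```python
import numpy as np

# Order-1 (Kantorovich) Wasserstein distance between two 1-D empirical
# distributions with equally many atoms: sort both samples and average
# the absolute differences (the monotone coupling is optimal in 1-D).
def w1(s1, s2):
    return np.mean(np.abs(np.sort(s1) - np.sort(s2)))

p1 = np.array([0.0, 1.0, 2.0])
p2 = p1 + 0.5                 # the same distribution shifted by 0.5

d = w1(p1, p2)                # shifting by c moves the distribution by exactly c
```
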
[^16]: One can similarly define the $\phi$-divergence between two probability distributions ${\mathbbmtt{P}}_{1}$ and ${\mathbbmtt{P}}_{2}$ induced by ${\tilde{{\boldsymbol{\xi}}}}$.
[^17]: The assumption $\phi(1)=0$ is without loss of generality because the function $\psi(t)= \phi(t)+ c (t-1)$ yields identical discrepancy measure to $\phi$ [@pardo2005].
[^18]: Recall the discussion following and , where we gave a characterization of $A({\boldsymbol{x}})$ as ${\boldsymbol{a}}({\boldsymbol{x}})^{\top}{\tilde{{\boldsymbol{\xi}}}}\le {\boldsymbol{b}}({\boldsymbol{x}})$ and ${\boldsymbol{a}}({\tilde{{\boldsymbol{\xi}}}})^{\top} {\boldsymbol{x}} \le {\boldsymbol{b}}({\tilde{{\boldsymbol{\xi}}}})$. A safe region characterized by a bi-affine expression in ${\tilde{{\boldsymbol{\xi}}}}$ and ${\boldsymbol{x}}$ means that both ${\boldsymbol{a}}({\boldsymbol{x}})$ and ${\boldsymbol{b}}({\boldsymbol{x}})$ are affine in ${\boldsymbol{x}}$ for the form ${\boldsymbol{a}}({\boldsymbol{x}})^{\top}{\tilde{{\boldsymbol{\xi}}}}\le {\boldsymbol{b}}({\boldsymbol{x}})$, and both ${\boldsymbol{a}}({\tilde{{\boldsymbol{\xi}}}})$ and ${\boldsymbol{b}}({\tilde{{\boldsymbol{\xi}}}})$ are affine in ${\tilde{{\boldsymbol{\xi}}}}$ for the form ${\boldsymbol{a}}({\tilde{{\boldsymbol{\xi}}}})^{\top}{\boldsymbol{x}} \le {\boldsymbol{b}}({\tilde{{\boldsymbol{\xi}}}})$.
[^19]: The class of Rényi divergences is defined as ${\mathfrak{d}}^{\text{R}}_{r}(P_{1}, P_{2}):=\frac{1}{1-r}\int_{\Xi}\left(\frac{d P_{1}}{d P_{2}}\right)^{r-1} d P_{1}$. This class is not a $\phi$-divergence, but ${\mathfrak{d}}^{\text{R}}_{r}(P_{1}, P_{2})$ can be rewritten as $h({\mathcal{D}}_{\phi}(P_{1}, P_{2}))$, where $h(t)=\frac{1}{r-1}\log [(r-1)t +1] $ and $\phi(t)=\frac{t^{r}-r(t-1)-1}{r-1}$ [@pardo2005].
[^20]: As shown, e.g., in @reiss1989 and [@gibbs2002], ${\mathfrak{d}}^{\phi_{\text{h}}}(P , P_{0}) \le {\mathfrak{d}}^{\phi_{\text{kl}}}(P , P_{0})$. However, in @jiang2016 [Lemma 1] this relationship is incorrectly stated as ${\mathfrak{d}}^{\phi_{\text{h}}}(P ,P_{0}) \le \big({\mathfrak{d}}^{\phi_{\text{kl}}}(P , P_{0})\big)^{\frac{1}{2}}$.
[^21]: Consider the sets ${\mathcal{H}}({\boldsymbol{A}},{\boldsymbol{\xi}}_{0}):=\sset*{{\boldsymbol{\xi}}={\boldsymbol{\xi}}_{0} + {\boldsymbol{A}} \omega}{\|\omega\|_{\infty} \le 1}$ and ${\mathcal{E}}({\boldsymbol{B}},{\boldsymbol{\xi}}_{0}):=\sset*{{\boldsymbol{\xi}}={\boldsymbol{\xi}}_{0} + {\boldsymbol{B}} \omega}{\|\omega\|_{1} \le 1}$, where ${\boldsymbol{A}}$ is a diagonal positive-definite matrix and ${\boldsymbol{B}}$ is a positive-definite matrix. A random vector ${\tilde{{\boldsymbol{\xi}}}}$ has a probability distribution $P$ within the class of radially-symmetric nonincreasing densities supported on ${\mathcal{H}}({\boldsymbol{A}},{\boldsymbol{\xi}}_{0})$ (respectively, ${\mathcal{E}}({\boldsymbol{B}},{\boldsymbol{\xi}}_{0})$) if ${\tilde{{\boldsymbol{\xi}}}}- {\mathbb{E}_{P} \left[ {\tilde{{\boldsymbol{\xi}}}}\right]}= {\boldsymbol{A}} \omega$ (respectively, ${\tilde{{\boldsymbol{\xi}}}}- {\mathbb{E}_{P} \left[ {\tilde{{\boldsymbol{\xi}}}}\right]}= {\boldsymbol{B}} \omega$), where $\omega$ is a random vector having the probability density $f_{\omega}$ such that $f_{\omega}(\omega)= t(\|\omega\|_{\infty})$ for $\|\omega\|_{\infty} \le 1$ and $0$ otherwise (respectively, $f_{\omega}(\omega)= t(\|\omega\|_{1})$ for $\|\omega\|_{1} \le 1$ and $0$ otherwise) and $t(\cdot)$ is a nonincreasing function. The class of radially-symmetric distributions contains, for example, the Gaussian, truncated Gaussian, uniform distribution on an ellipsoidal support, and nonunimodal densities [@calafiore2006].
[^22]: For a measurable function $Z \in {\mathcal{Z}}_{\infty}(Q)$, the entropic risk measure is defined as $\frac{1}{\gamma} \ln {\mathbb{E}_{Q} \left[ \exp{(-\gamma Z)} \right]}$, where $\gamma>0$ [@liu2017].
[^23]: Restricting the recourse decision function ${\boldsymbol{y}}({\boldsymbol{\xi}})$ to the class of functions that are affinely-dependent on ${\boldsymbol{\xi}}$, referred to as [*linear decision rules*]{}, is an approach to derive computationally tractable problems to approximate stochastic programming and robust optimization models [@chen2007robust; @chen2008linear; @ben2004adjustable]. Whether or not the linear decision rules are optimal depends on the problem [@shapiro2005complexity].
[^24]: The random variable differs from its mean by more than $k$ standard deviations.
[^25]: It is known that for any positive definite symmetric kernel $K$, there is a mapping $\Phi$ from the covariates space to a higher-dimensional space ${\mathbb{H}}$ such that $K(\xi_{k},t_{k})$ is equal to the inner product between $\Phi(\xi_{k})$ and $\Phi(t_{k})$, see, e.g., @mohri2018foundations [Theorem 5.2]. Such a space ${\mathbb{H}}$ is called [*reproducing kernel Hilbert space*]{}. A kernel is said to be positive definite symmetric if the induced kernel matrix is symmetric positive semidefinite.
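A quick numerical check of ours (using the common Gaussian kernel, which is not specific to the survey): the kernel matrix induced on any finite point set is symmetric and positive semidefinite.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.standard_normal((6, 3))   # six covariate vectors in R^3

# Gaussian kernel K(s, t) = exp(-||s - t||^2); its Gram matrix on any
# finite point set is symmetric positive semidefinite.
sq = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-sq)

is_symmetric = bool(np.allclose(K, K.T))
min_eig = float(np.linalg.eigvalsh(K).min())   # nonnegative up to round-off
```
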
[^26]: Some probability metrics, such as the Wasserstein metric, metrize weak convergence [@gibbs2002]. That is, convergence of a sequence of probability distributions with respect to such a metric is equivalent to its weak convergence (convergence in distribution).
[^27]: As in the definition of $\phi$-divergence, the assumptions $\phi(1)=\phi^{\prime}(1)=0$ are without loss of generality because the function $\psi(t)= \phi(t)- \phi^{\prime}(1) (t-1)$ yields the same discrepancy measure as $\phi$ [@pardo2005].
[^28]: For definitions in a multivariate case, we refer to @hu2012SAA [@hu2014weighted].
---
abstract: 'In a magnetic substance the gap in the Raman spectrum, $\Delta_R$, and the neutron scattering gap, $\Delta_S$, are related by $\Delta_R/\Delta_S\approx2$ if the magnetic excitations (magnons) are only weakly interacting. But for CuGeO$_3$ the experimentally observed ratio is of the order $\Delta_R/\Delta_S\sim1.49-1.78$, indicating attractive magnon-magnon interactions in the quasi-1D Spin-Peierls compound CuGeO$_3$. We present numerical estimates of $\Delta_R/\Delta_S$ from exact diagonalization studies for finite chains and find agreement with experiment for intermediate values of the frustration parameter $\alpha$. An analysis of the numerical Raman intensity leads us to postulate a [*continuum*]{} of two-magnon bound states in the Spin-Peierls phase. We discuss in detail the numerical method used, the dependence of the results on the model parameters and a novel matrix-element effect due to the dimerization of the Raman-operator in the Spin-Peierls phase.'
address:
- '$^1$ Institut für Physik, Universität Dortmund, 44221 Dortmund, Germany\'
- '$^2$ Physics Department, University of Wuppertal, 42097 Wuppertal, Germany\'
- '$^3$ 2. Physikalisches Institut, RWTH Aachen, 52056 Aachen, Germany\'
- '$^4$ FB Technische Physik, TH-Darmstadt, Hochschulstr. 8, 64289 Darmstadt, Germany\'
author:
- |
Claudius Gros$^1$, Wolfgang Wenzel$^1$, Andreas Fledderjohann$^2$\
P. Lemmens$^3$, M. Fischer$^3$, G. Güntherodt$^3$, M. Weiden$^4$, C. Geibel$^4$, F. Steglich$^4$
title: 'Magnon-magnon interactions in the Spin-Peierls compound CuGeO$_3$'
---
The Spin-Peierls compound CuGeO$_3$ has lately been studied intensively [@Hase] and found to exhibit well defined magnetic excitations (magnons) in the dimerized phase [@Nishi]. These magnons are found [@Ain] to be separated by a gap from the continuum of two-spinon excitations predicted for the Heisenberg-chain [@Mueller] and were recently observed in KCuF$_3$ [@KCuF3] and in CuGeO$_3$ [@Arai]. In this context it was realized [@Tsvelik; @Uhrig; @Fledderjohann] that the magnons in the dimerized Heisenberg chain can be regarded as two-spinon bound-states. While spinons are essentially free in a homogeneous spin chain they interact strongly in a (gapped) dimerized spin chain where magnons are well defined excitations with dispersion $\omega_q$ contributing a delta-function $\sim\delta(\omega-\omega_q)$ to the dynamical structure factor, $S(q,\omega)$.
It is then natural to investigate the interactions of two magnons in a dimerized spin-chain. Here we will present numerical and experimental evidence that magnons do strongly interact in dimerized spin chains, leading to a continuum of two-magnon bound states. For this purpose we will present data from exact diagonalization of chains with up to $N_s=28$ sites and experimental Raman spectra for CuGeO$_3$. We will, in particular, investigate the gap $\Delta_S$ observed in $S(q,\omega)$ and the gap $\Delta_R$ observed in the two-magnon Raman spectrum $I_R(\omega)$. We find generally that $\Delta_R/\Delta_S<2$, indicating strong magnon-magnon interactions in 1D dimerized spin systems.
As the minimal model for magnetic excitations in CuGeO$_3$ one can consider the frustrated 1D spin Hamiltonian $$H = J\sum_i\, [ (1+\delta(-1)^i)\,{\bf S}_i\cdot{\bf S}_{i+1}
+ \alpha\, {\bf S}_i\cdot{\bf S}_{i+2}
]~,
\label{H}$$ where $\delta$ is the dimerization parameter that vanishes above $T_{SP}$ [@Castilla; @Riera]. The special geometry [@Braden; @Khomskii] of the superexchange path in CuGeO$_3$ along the c-axis leads to a small value of the exchange integral $J\approx150$K and a substantial n.n.n. frustration term $\sim\alpha$ which competes with the n.n. antiferromagnetic exchange. The correct value of $\alpha$ suitable for CuGeO$_3$ is still under discussion. While Castilla [*et al.*]{} proposed $\alpha\approx0.24$ [@Castilla], a much larger value $\alpha\approx0.35$ was proposed by Riera and Dobry [@Riera] and, recently, by Brenig [*et al.*]{} [@Brenig]. The interchain couplings have been estimated to be small, $J_b\approx0.1J$ and $J_a\approx-0.01J$ for the interchain exchange constants along the b- and a-directions, respectively [@Nishi].
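For orientation, a minimal exact-diagonalization sketch of ours for Eq. (\[H\]) on a short periodic chain (the calculations in this paper instead use a generalized Lanczos method for up to $N_s=28$ sites; here a dense solver on $N_s=8$ suffices to exhibit a finite gap):

```python
import numpy as np
from functools import reduce

# Spin-1/2 operators.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    ops = [I2] * n
    ops[i] = op
    return reduce(np.kron, ops)

def heisenberg(n, alpha, delta, J=1.0):
    """Frustrated, dimerized Heisenberg chain of Eq. (H), periodic b.c."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        for j, c in [((i + 1) % n, J * (1 + delta * (-1) ** i)),  # n.n. bond
                     ((i + 2) % n, J * alpha)]:                   # n.n.n. bond
            for s in (sx, sy, sz):
                H += c * site_op(s, i, n) @ site_op(s, j, n)
    return H

E = np.linalg.eigvalsh(heisenberg(8, alpha=0.24, delta=0.0))
gap = E[1] - E[0]          # finite-size singlet-triplet gap
```
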
The phase diagram of $H$ in Eq. (\[H\]) has been calculated using the density-matrix renormalization-group method [@Chitra]. For $\delta=0$ and $\alpha<\alpha_c\approx0.2411$, the ground state is gapless and renormalizes to the Heisenberg fixed point. For $\alpha=0.5$ and $\delta=0$, the ground state is given by a valence-bond state with a gap of order $J/4$ induced by frustration.
An experimental method particularly suited for the study of magnetic excitations in an antiferromagnet is two-magnon Raman scattering. For CuGeO$_3$, the Raman operator in $A_{1g}$ symmetry [@Fleury] is proportional [@RC] to $$H_R = \sum_i\,(1+\gamma(-1)^i)\,{\bf S}_i\cdot{\bf S}_{i+1}~.
\label{H_R}$$ In the homogeneous state ($\delta=\gamma=0$) the Raman operator commutes with the Heisenberg Hamiltonian for the case $\alpha=0$ and there would be no Raman scattering [@RC; @Singh]. However when $\alpha \neq 0$, the model (\[H\]) leads to magnetic Raman scattering $\sim\alpha^2$ due to the presence of competing interactions which can be observed experimentally. Note the presence of the factor $\gamma$ in Eq. (\[H\_R\]) which appears for $T<T_{SP}$ because the exchange integral is sensitive to the inter-ionic distance.
We have exactly diagonalized Eq. (\[H\]) for chains with up to 28 sites by a generalized Lanczos method and evaluated the Raman spectral weight at zero temperature, $$I_R(\omega) \,=\, -{1\over\pi}Im\,
\langle0|H_R{1\over \omega+i\epsilon - (H-E_0)}H_R|0\rangle,
\label{I_R}$$ where $E_0$ is the ground state energy, $H$ the Hamiltonian given by Eq. (\[H\]) and $\epsilon\rightarrow0+$. We have also calculated the dynamical structure factor $$S(q,\omega) \,=\, \sum_n
\big|\langle n|S_{q}^z|0\rangle\big|^2\delta(\omega-(E_n-E_0))
\label{S(q,o)}$$ where $S_q^z=N_s^{-1/2}\sum_{l=1}^{N_s}\exp[iql]S_l^z$ and $|n\rangle,\ E_n$ are the eigenstates and eigenenergies of the spin chain, respectively.
We have evaluated $S(q,\omega)$ for chains with up to $N_s=24$ sites using an approximate scheme for the determination of the low lying excitation energies $E_n-E_0$ and the corresponding transition probabilities $w_n(q)=|\langle n|S_q^z|0\rangle|^2$ [@rec]. Using a recursion algorithm a set of orthogonal states is built starting with $S_q^z|0\rangle$. Coefficients occurring in this procedure form a tridiagonal matrix whose eigenvalues and eigenstates determine the excitation energies and transition probabilities [@Fledderjohann; @rec].
For a numerical evaluation of Eq. (\[I\_R\]), we have used the [*kernel polynomial approximation*]{}. Since the advantages of this method have been realized only recently [@Silver] we give here a brief account. We start by rescaling the Hamiltonian by $H=cX+d$ such that the eigenvalues of the rescaled Hamiltonian $X$ are in the interval $[-1,1]$. Similarly we define a rescaled energy and frequency by $E_0=cx_0+d$ and $\omega=cx+d$ and expand $I_R(x)$ in terms of Tschebycheff polynomials, $T_l(x)$: $$I_R(x) = {1\over\sqrt{1-x^2}}\sum_{l=0}^{N_p}\, a_l\, T_l(x+x_0),
\label{expansion}$$ where the number of polynomials retained, $N_p$, determines the accuracy of the approximation which becomes exact in the limit $N_p\rightarrow\infty$. The expansion coefficients $a_l$ are determined using the orthogonality relations for Tschebycheff polynomials to be $$a_l={2-\delta_{l,0}\over\pi}\,\langle0|\,H_R\,T_l(X)\,H_R\,|0\rangle,$$ which can be evaluated recursively via the formula $T_{l+1}(x)=2xT_l(x)-T_{l-1}(x)$. The advantage of an expansion in orthogonal polynomials is its numerical stability. It is indeed possible to evaluate several thousand $a_l$ recursively without encountering numerical instabilities. Often we will use only a limited number, $N_p=100$, for comparison with experimental data.
A truncated expansion in orthogonal polynomials will, in general, lead to unwelcome Gibbs oscillations for any finite $N_p<\infty$. These Gibbs oscillations have been studied carefully in the past [@Silver] and can be suppressed efficiently and reliably by the replacement $a_l\rightarrow a_l g(z_l)$ with $g(z_l) = [\sin(\pi z_l)/(\pi z_l)]^3$ and $z_l = l/(N_p+1)$ [@Silver]. Note that this replacement, which we use throughout this paper, still satisfies the correct limit $N_p\rightarrow\infty$.
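The moment recursion and the damped reconstruction can be sketched as follows (our illustration: a random symmetric matrix stands in for the rescaled Hamiltonian $X$, and a normalized random vector plays the role of $H_R|0\rangle$, so the reconstructed quantity is the spectral function of that vector rather than the Raman intensity itself):

```python
import numpy as np

rng = np.random.default_rng(1)
n, Np = 200, 100

# Random symmetric matrix standing in for X, spectrum safely inside [-1, 1].
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
X = A / (1.05 * np.linalg.norm(A, 2))

phi = rng.standard_normal(n)
phi /= np.linalg.norm(phi)              # plays the role of H_R|0>

# Moments mu_l = <phi| T_l(X) |phi> via T_{l+1} = 2 X T_l - T_{l-1}.
mu = np.empty(Np)
w_prev, w = phi, X @ phi
mu[0], mu[1] = phi @ phi, phi @ w
for l in range(2, Np):
    w_prev, w = w, 2 * (X @ w) - w_prev
    mu[l] = phi @ w

# Damping g(z_l) = [sin(pi z_l)/(pi z_l)]^3 suppressing Gibbs oscillations.
z = np.arange(Np) / (Np + 1)
g = np.ones(Np)
g[1:] = (np.sin(np.pi * z[1:]) / (np.pi * z[1:])) ** 3

# Reconstruct the (broadened) spectral function of phi on a grid in (-1, 1).
x = np.linspace(-0.99, 0.99, 400)
T = np.cos(np.arange(Np)[:, None] * np.arccos(x)[None, :])    # T_l(x)
S = g[0] * mu[0] * T[0] + 2 * (g[1:, None] * mu[1:, None] * T[1:]).sum(axis=0)
S /= np.pi * np.sqrt(1 - x ** 2)

total = S.sum() * (x[1] - x[0])         # integrates to ~ ||phi||^2 = 1
```
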
A finite $N_p<\infty$ in (\[expansion\]) does broaden the delta-poles in a finite system calculation. The dependence of $I_R(\omega)$ on $N_p$ is illustrated in Fig. \[N\_p\] for a chain with $N_s=28$ sites, $\alpha=0.24$ and $\delta=0=\gamma$. In the inset of Fig. \[N\_p\] we compare the broadening of a single pole obtained with the kernel polynomial approximation ($N_p=100$) with a Lorentzian $\epsilon/((\omega-\omega_i)^2+\epsilon^2)$ and $\epsilon=0.025$. Note the absence of the high-energy tails in the kernel polynomial approximation.
We have measured the Raman intensity on single-crystal CuGeO$_3$ using the $\lambda=514.5$ nm excitation line of an Ar laser with a laser power of 2.7 mW. We ensured that the incident radiation does not increase the temperature of the sample by more than 1.5 K. We used a DILOR-XY spectrometer and a nitrogen-cooled CCD (back illuminated) as a detector in a quasi-backscattering geometry, with the polarization of both the incident and the scattered light parallel to the c-axis, i.e., to the Cu-O chains.
In Fig. \[T=20\], we present the data for the two-magnon Raman continuum in the homogeneous state at $T=20$ K. Phonon lines [@Raman; @Lemmens] at 184cm$^{-1}$ and at 330cm$^{-1}$ are subtracted from the experimental data (squares). The experimental Raman spectrum presented in Fig. \[T=20\] is in agreement with other Raman studies on CuGeO$_3$ [@Raman]. We have included in Fig. \[T=20\] the numerical results for $I_R(\omega)$ obtained for chains with $N_s=24$ (dashed lines) and $N_s=28$ (solid lines) sites for $\delta=0=\gamma$ and $N_p=100$. Note that the finite-size effects are quite small. We show in Fig. \[T=20\] data for two parameter sets, namely $\alpha=0.24,\ J=150K$ and $\alpha=0.35,\ J=159K$. We note that $\alpha=0.35$, which is favoured by fits to the susceptibility [@Riera] and to the specific heat [@Brenig] does not agree well with the Raman spectrum. Similar results have been obtained previously with a solitonic mean-field approach to the frustrated Heisenberg chain [@RC; @susc].
In the dimerized phase $\delta\ne0$ we have found that the numerically obtained Raman spectrum depends very much on the dimerization parameter $\gamma$ in the Raman operator as we illustrate in Fig. \[gamma\] for a chain with $N_s=24$ sites, $\delta=0.03,\ \alpha=0.24$ and $N_p=100$. Between $\gamma=0$ and $\gamma=0.15$ the spectrum changes qualitatively and a low-energy peak can be resolved for $\gamma=0.12,0.15$, but not for $\gamma=0$.
In order to understand the dramatic dependence of the $I_R(\omega)$ on $\gamma$ observed in Fig. \[gamma\] we have analyzed the dependence of the seven lowest poles contributing to $I_R(\omega)$ on $\gamma$, illustrated in the inset of Fig. \[weight\_gamma\] for $N_s=24$, $\alpha=0.24$, $\delta=0$, $N_p=1000$ and $\gamma=0$.
We start by rewriting Eq. (\[I\_R\]) in the form $$I_R(\omega) \,=\, \sum_n
\big|\langle 0|H_R|n\rangle\big|^2\delta(\omega-(E_n-E_0))
\label{I_new}$$ and noting that we can decompose the Raman operator into two parts, $H_R=H^\prime+\gamma H^{\prime\prime}$. The weight of an excited state then becomes $$w_n =
\big|\langle 0| H^\prime+\gamma H^{\prime\prime} |n\rangle \big|^2
= \big|m^\prime+\gamma m^{\prime\prime}\big|^2,
\label{w_n}$$ and for any $n$ there is a $\gamma_0=-m^\prime/m^{\prime\prime}$ for which $w_n$ vanishes. We have analyzed the energy-dependence of $\gamma_0=\gamma_0(E_n)$ and found that for the five dominant poles, $n=1,2,4,6,7$ $$\gamma_0\ \approx\ \delta + {4\over 3000}\, E_n,
\label{gamma_0}$$ in inverse wavelength $[cm^{-1}]$ for $E_n$. This dependence of $w_n$ on $E_n$ is shown in Fig. \[weight\_gamma\] where we have plotted the weights as a function of $\gamma-\gamma_0(E_n)$. The data presented in Fig. \[weight\_gamma\] clearly indicates that the dominant contributions to $I_R(\omega)$ follow the scaling relation $$\rho(\gamma,E_i) = {I_{\gamma=0}\over I_\gamma}
\left({\gamma-\gamma_0(E_i)\over\gamma_0(E_i)}\right)^2,
\label{erasor}$$ where $I_\gamma$ is a normalization constant, which we have approximated by the constraint $\int_0^{\omega_c}d\omega\rho(\gamma,\omega)=1$, with $\omega_c=6J$. The scaling relation Eq. (\[erasor\]) constitutes, on the other hand, also the rescaling of the Raman intensity $I_R(\omega)=I_R(\gamma,\omega)$ with $\gamma$, such that we can write $$I_R(\gamma,\omega)\approx\rho(\gamma,\omega)I_R(0,\omega).
\label{analytic}$$ We see that the effect of $\gamma$ is to “burn a spectral hole” in $I_R(0,\omega)$ at the characteristic frequency where $\gamma_0(\omega)=\gamma$, in agreement with the numerical results presented in Fig. \[gamma\].
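As a numerical illustration of ours for this matrix-element effect, Eqs. (\[gamma\_0\]) and (\[erasor\]) place the spectral hole at the energy where $\gamma_0(\omega)=\gamma$; for the parameters of Fig. \[gamma\] ($\delta=0.03$, $\gamma=0.12$) this is $\omega=67.5\,{\rm cm}^{-1}$:

```python
import numpy as np

delta, gamma = 0.03, 0.12                    # parameters used in Fig. [gamma]

omega = np.linspace(1.0, 400.0, 4000)        # energy grid in cm^-1
gamma0 = delta + (4.0 / 3000.0) * omega      # Eq. (gamma_0)
rho = ((gamma - gamma0) / gamma0) ** 2       # Eq. (erasor), up to normalization

# The spectral hole sits where gamma_0(omega) = gamma:
omega_hole = (gamma - delta) * 3000.0 / 4.0  # 67.5 cm^-1 for these parameters
omega_min = omega[np.argmin(rho)]            # numerically found minimum of rho
```
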
We can actually find a complete analytic formula for $I_R(\gamma,\omega)$ by noting that for $\alpha=0.24$, $\delta=0.03$ we can approximate $I_R(0,\omega)$ by the expression (see Fig. \[gamma\] and [@RC]) $$I_R(\gamma=0,\omega) = A\theta(\omega-\Delta_R)
\left(1-\tanh[2(\omega-\omega_0)]\right),
\label{I_0}$$ with the values of the parameters $\Delta_R\approx30{\rm cm}^{-1}$, $\omega_0\approx312{\rm cm}^{-1}$ being determined by a fit to the numerical data (A is a normalization constant). The heavy-side function $\theta(\omega-\Delta_R)$ in Eq. (\[I\_0\]) reflects the absence of two-magnon excitations below $\Delta_R$ in the dimerized state.
In Fig. \[T=5\], we present the experimental Raman spectrum in the spin-Peierls phase at $T=5$K (with phonon lines subtracted, filled squares) in comparison with the analytic result for $\gamma=0.12$ (Eq. (\[analytic\]) together with Eq. (\[I\_0\]), solid line). Experimentally a two-magnon peak is observed at 30cm$^{-1}$ and a broad continuum at higher frequencies. These two features are well reproduced by the analytic result. There is, on the other hand, a new, probably magnetic line at 225cm$^{-1}$, which is not reproduced by our theoretical study of the one-dimensional spin model (\[H\]), compare also Fig. \[gamma\]. This fact has led us to speculate [@RC] that the interchain coupling $J_b$ might become relevant for $T<T_{SP}$ in CuGeO$_3$.
What is the physical significance of the peak at $\Delta_R\sim30{\rm cm}^{-1}$ in Fig. \[T=5\] observed both in experiment and in our analytical result? Neutron-scattering experiments [@Nishi; @Martin] indicate a one-magnon gap $\Delta_S\sim(2.1-2.5)$meV, i.e. $\Delta_S\sim(16.9-20.2){\rm cm}^{-1}$. The experimental ratio $${\Delta_R\over\Delta_S}\bigg|_{\rm Exp.}\ \sim\ 1.49-1.78
\label{ratio_exp}$$ is smaller than two, the value expected for non-interacting magnons. This indicates an attractive interaction between magnons in 1D dimerized spin chains, similar to the attraction between spinons. Uhrig and Schulz [@Uhrig] have indeed postulated an isolated singlet bound state with energy $E_s$ and $2\Delta_S>E_s>\Delta_s$ which might be a two-magnon bound state.
A singlet bound-state with zero total momentum would be observable by Raman scattering and would show up as a delta-function contribution to the Raman intensity (\[I\_new\]): $$I_R(\omega)\ =\ A\delta(\omega-E_s)\ +\ I_R^\prime(\omega),$$ with $I_R^\prime(\omega)$ being the continuum part of the Raman intensity. A finite weight $A$ implies also a finite value of the relative weight of the first pole, $$A_r\ =\ \lim_{N_s\rightarrow\infty}{w_1\over\sum_n w_n},
\label{A}$$ with $w_n=|\langle0|H_R|n\rangle|^2$. In the inset of Fig. \[T=5\] we show the relative weight as a function of $1/N_s^2$ for chains with up to $N_s=28$ sites and $\alpha=0.24$. We find no indication for an isolated bound state, i.e. for a finite value of $A_r$ both for $\delta=0$ (filled circles), as expected, and for $\delta=0.03$ (filled triangles). The data for $\delta=0.03$ shown in the inset of Fig. \[T=5\] have been calculated for $\gamma=0.12$. The data for $\gamma=0$ and $\delta=0.03$ are very similar.
We have therefore no indication from numerics for an isolated singlet bound state, in agreement with our interpretation of the 30cm$^{-1}$ peak as a continuum contribution truncated by a matrix-element effect, see Eq. (\[analytic\]). Our numerical results show, on the other hand, unambiguously that the lower edge of the Raman spectrum is pulled below the non-interacting two-magnon density of states. This fact, which is discussed in detail further below, indicates a [*continuum*]{} of two-magnon bound states with zero total momentum. This scenario is conceivable in view of the fact that the two-magnon spectrum is made up of four spinons.
The experimentally observed values of single- and two-magnon gap energies can be used, to a certain extent, to determine the value of the parameters $J$, $\delta$ and $\alpha$ entering Eq. (\[H\]). The ratio $\Delta_R/\Delta_S$ is independent of the coupling-constant $J$. We have plotted $\Delta_R/\Delta_S$ in Fig. \[ratio\] for various values of $\alpha$ as a function of $\delta$. The data are obtained for $N_s=24$; the finite-size corrections are very small (compare also [@Fledderjohann]). The experimentally possible values 1.49-1.78 are indicated as the shaded region in Fig. \[ratio\]. Again, we see that a larger value of $\alpha\sim0.35$ does not reproduce $\Delta_R/\Delta_S$ well. In using Fig. \[ratio\] for determining possible combinations of $\alpha$ and $\delta$ one has to keep in mind that the interchain coupling $J_b$, not included in our calculations, has an effect of order 10% on the values quoted in Fig. \[ratio\]. It is interesting to note that recent experiments on CuGeO$_3$ under pressure [@private] show that the frustration parameter $\alpha$ increases with pressure, as does the experimental ratio $\Delta_R/\Delta_S$, in accordance with the data presented in Fig. \[ratio\].
In conclusion, we have discussed in detail the effect of the dimerization of the Raman operator on the Raman spectrum in the spin-Peierls state of CuGeO$_3$. We have not found any numerical evidence for a singlet bound-state and conclude that the 30cm$^{-1}$ Raman line observed at $T=5$K in CuGeO$_3$ is a two-magnon line which is carved out of a continuum by matrix-element effects due to the dimerization of the Raman operator, $\gamma$. We have then confronted the observed ratio of the Raman gap to the neutron-scattering gap with our numerical results. We conclude that this ratio indicates an intermediate value for the frustration parameter $\alpha$.
This work was supported through the Deutsche Forschungsgemeinschaft, the Graduiertenkolleg “Festkörperspektroskopie”, SFB 341 and SFB 252, and by the BMBF 13N6586/8.
M. Hase, I. Terasaki and K. Uchinokura, Phys. Rev. Lett. [**70**]{}, 3651 (1993).
M. Nishi, O. Fujita and J. Akimitsu, Phys. Rev. B [**50**]{}, 6508 (1994).
M. Aïn, J.E. Lorenzo, L.P. Regnault, G. Dhalenne, A. Revcolevschi and Th. Jolicoeur, [*“Double gap and solitonic excitations in the spin-Peierls chain CuGeO$_3$”*]{}, preprint.
See, e.g. G. Müller, H. Thomas, H. Beck and J.C. Bonner, Phys. Rev. B [**24**]{}, 1429 (1981).
D.A. Tennant, R.A. Cowley, S.E. Nagler and A.M. Tsvelik, Phys. Rev. B [**52**]{}, 13 368 (1995).
M. Arai [*et al.*]{}, Phys. Rev. Lett. [**77**]{}, 3649 (1996).
A.M. Tsvelik, Phys. Rev. B [**45**]{}, 486 (1992).
G.S. Uhrig and H.J. Schulz, Phys. Rev. B [**54**]{}, R9624 (1996).
A. Fledderjohann and C. Gros, [*“Spin dynamics of dimerized Heisenberg chains”*]{}, Euro. Phys. Lett., (in press, Sissa preprint cond-mat/9612013)
G. Castilla, S. Chakravarty and V.J. Emery, Phys. Rev. Lett. [**75**]{}, 1823 (1995).
J. Riera and A. Dobry, Phys. Rev. B [**51**]{}, 16 098 (1995); The authors estimate a slightly larger value of $\alpha\approx0.36$ than Castilla [*et al.*]{} [@Castilla].
M. Braden, G. Wilkenorf, J. Lorenzana, M. Aïn, G.J. McIntyre, M. Behruzi, G. Heeger, G. Dhalenne and A. Revcolevschi, Phys. Rev. B [**54**]{}, 1105 (1996).
D. Khomskii, W. Geertsma and M. Mostovoy, Sissa preprint cond-mat \# 9609244.
W. Brenig [et al.]{}, personal communication.
R. Chitra [*et al.*]{}, Phys. Rev. B [**52**]{}, 6581 (1995).
P.A. Fleury and R. Loudon, Phys. Rev. [**166**]{}, 514 (1967).
V.N. Muthukumar, C. Gros, W. Wenzel, R. Valentí, P. Lemmens, B. Eisener, G. Güntherodt, M. Weiden, C. Geibel and F. Steglich, Phys. Rev. B [**54**]{}, R9635 (1996).
R.R.P. Singh, P. Prevlovsek and B.S. Shastry, Phys. Rev. Lett. [**77**]{}, 4086 (1996).
A. Fledderjohann, M. Karbach, K.-H. Mütter and P. Wielath, [J. Phys.: Condens. Matter]{} [**7**]{}, 8993 (1995); V.S. Viswanath, S. Zhang, J. Stolze and G. Müller, [Phys. Rev.]{} [**B**]{} 49, 9702 (1994).
R.N. Silver and H. Röder, Int. J. Mod. Phys. C [**5**]{}, 735 (1994); R.N. Silver, H. Röder, A.F. Voter and J.D. Kress, J. Comp. Phys. [****]{}, in press.
P. Lemmens, M. Udagawa, M. Fischer, G. Güntherodt, M. Weiden, W. Richter, C. Geibel and F. Steglich, J. Phys. [**46**]{}, 1979 (1996).
H. Kuroe [*et al.*]{}, Phys. Rev. B [**50**]{}, 16 468 (1994); P.H.M. van Loosdrecht [*et al.*]{}, Phys. Rev. Lett. [**76**]{}, 311 (1996).
P. Lemmens, B. Eisener, M. Brinkmann, L.V. Gasparov, P.v. Dongen, M. Weiden, W. Richter, C. Geibel and F. Steglich, Physica B [**223&224**]{}, 535 (1996).
V.N. Muthukumar, C. Gros, R. Valentí, M. Weiden, C. Geibel, F. Steglich, P. Lemmens, M. Fischer and G. Güntherodt, [*“The $J_1-J_2$ model revisited: Phenomenology of CuGeO$_3$”*]{}, Phys. Rev. B (in press).
M.C. Martin, G. Shirane, Y. Fujii, M. Nishi, O. Fujita, J. Akimitsu, M. Hase and K. Uchinokura, Phys. Rev. B [**53**]{}, R14 713 (1996).
P. van Loosdrecht, private communication.
---
abstract: 'We consider the problem of finding a low rank symmetric matrix satisfying a system of linear equations, as appears in phase retrieval. In particular, we solve the gauge dual formulation, but use a fast approximation of the spectral computations to achieve a noisy solution estimate. This estimate is then used as the initialization of an alternating gradient descent scheme over a nonconvex rank-1 matrix factorization formulation. Numerical results on small problems show consistent recovery, with very low computational cost.'
author:
- 'Ron Estrin, Yifan Sun, Halyun Jeong, Michael Friedlander'
bibliography:
- 'refs.bib'
title: Approximate methods for phase retrieval via gauge duality
---
Introduction
============
Consider the problem of finding a low rank symmetric matrix satisfying a system of linear equations $$\begin{array}{ll}
{\mathbf{find}}& X \\
{\mathbf{subject\; to}}& a_i^TX a_i = b_i,\quad i = 1,\hdots, m\\
& {\mathbf{rank}}(X) \leq r
\end{array}
\label{e-main}$$ Problems of form appear in applications like imaging [@walther1963question] and x-ray crystallography [@dierolf2010ptychographic], and finding $x$ is in general NP-hard [@vavasis2010complexity]. Convex relaxations of are obtained by omitting the rank constraint, and can often lead to a close approximation of $x$ (see [@ahmed2014blind; @candes2013phaselift]).
We consider two convex relaxations of , both of the form $$\begin{array}{ll}
{\underset{X}{\textbf{minimize}}}& \kappa(X)\\
{\mathbf{subject\; to}}& a_i^TX a_i = b_i,\quad i = 1,\hdots, m
\end{array}
\label{eq:main-primal}$$ where $\kappa(X)$ is a convex gauge function that promotes low-rank structure in $X$. Specifically, we consider two choices:
1. $\kappa(X) = \|X\|_*$ the nuclear norm, i.e., the sum of the singular values of $X$, and
2. $\kappa(X) = {\mathbf{tr}}(X) + \delta_+(X)$ where $$\delta_+(X) =
\begin{cases}
0 & X\succeq 0 \\
+\infty & \text{ else.}
\end{cases}$$
Using either gauge, we see that is a semidefinite optimization problem.
#### Application 1: Phase retrieval
Problems of this form appear in image processing as a convexification of the phase retrieval problem $$\begin{array}{ll}
{\mathbf{find}}& x\\
{\mathbf{subject\; to}}& |a_i^Tx|^2 = b_i,\quad i = 1,\hdots, m
\end{array}
\label{eq:phaseretrieval}$$ where $a_i$ may be complex- or real-valued measurement vectors, and $b_i$ are the squared magnitude readings. (See PhaseLift, [@candes2015phase].) In particular, it has been shown [@chen2001atomic] that low-rank estimates of the SDP can recover the exact source vector $x$ in both the noisy and the exact measurement regimes, for large enough $m$ and incoherent enough $a_i$. It has been previously observed (and numerically verified here) that the positive semidefinite formulation ($\kappa(X) = {\mathbf{tr}}(X) + \delta_+(X)$) provides better recovery results (recovering successfully for smaller $m$) than the nuclear norm formulation. However, we will see there are numerical advantages to using the nuclear norm formulation, and thus we consider both.
#### Application 2: Linear diagonal constrained
Many combinatorial problems can be relaxed to semidefinite programs with diagonal constraints. For example, the MAX-CUT problem can be written as $$\begin{array}{ll}
{\underset{W}{\textbf{minimize}}}& \langle C,W\rangle\\
{\mathbf{subject\; to}}& {\mathbf{diag}}(W) = b \\
& W\succeq 0
\end{array}
\label{eq:diag-cons}$$ for $b = {\mathbf{1} }$ and $C$ is a matrix related to the graph edge weights (e.g. Laplacian). Considering the a more generalized family of linear diagonally constrained problems, we see that replacing $C$ with $(C+C^T)/2 + {\mathbf{diag}}(v)$ does not alter the problem, for any $v$, since the diagonal of $W$ is fixed. Therefore we can assume without loss of generality that $C$ is symmetric positive definite with Cholesky factorization $C = LL^T$. Then is equivalent to where $a_i$ is the $i$th column of $L^{-1}$.
Problem statement
=================
Gauge duality
-------------
In general, the gauge primal and dual problem pair[@freund1987dual; @friedlander2014gauge; @aravkin2017foundations; @friedlander2016low] can be written as $$\begin{array}[t]{ll}
{\underset{X}{\textbf{min}}}& \displaystyle \kappa(X)\\
{\mathbf{st}}& {\mathcal A}(X) = b
\end{array}
\qquad
\begin{array}[t]{ll}
{\underset{y}{\textbf{min}}}& \displaystyle \kappa^\circ({\mathcal A}^*(y))\\
{\mathbf{st}}& \langle y,b\rangle = 1
\end{array}
\label{eq:gauge-pair}$$ where we use the shorthand $${\mathcal A}(X)_i := a_i^TXa_i,\; i = 1,\hdots, m, \qquad {\mathcal A}^*(y):=\sum_{i=1}^m y_i a_ia_i^T$$ for the linear operator and its adjoint. Here, $\kappa^\circ$ is the polar gauge of $\kappa$, defined as $$\kappa^\circ(Z) = \inf \{\mu \geq 0 : \langle X, Z\rangle \leq \mu \kappa(X)\; \forall X \}.$$ In particular, it is shown that if the feasible domains of both primal and dual have nontrivial relative interior, then at optimality the eigenspaces of the primal matrix variable $X^*$ and the transformed dual variable $Z^* = {\mathcal A}^*(y^*)$ are closely related, and can often be recovered easily.
#### Nuclear norm
When $\kappa(X)$ corresponds to a norm, $\kappa^\circ(X)$ is the dual norm. Therefore $$\kappa(X) = \|X\|_* \iff \kappa^\circ (Z) = \|Z\|_2,$$ the spectral norm of $Z$. Note that neither $X$ nor $Z$ is constrained to be positive semidefinite. At optimality, the singular vectors of the primal matrix variable $X^*$ and transformed dual variable $Z^* = {\mathcal A}^*(y^*)$ correspond closely; if $X^*$ has rank $r$, then $$X^* = \sum_{i=1}^r \sigma^P_i v_iv_i^T, \qquad Z^* = \sigma^D_{\max}\sum_{i=1}^r v_iv_i^T + \sum_{i=r+1}^n \sigma^D_i v_iv_i^T.$$ Here, $v_1,\hdots, v_n$ are the singular vectors of $X^*$ and $Z^*$, and $\sigma^P_i$ and $\sigma^D_{\max}$ are the primal singular values and the maximum dual singular value, respectively. Note that the singular vectors of the primal and dual variables are the *same*, so the range of $X^*$ can be recovered from either the primal or the dual optimal solution.
#### Symmetric PSD
In the second case, $$\kappa(X) = {\mathbf{tr}}(X) + \delta_+(X) \iff \kappa^\circ (Z) = \max\{0,\lambda_{\max}(Z)\}.$$ Through gauge duality, $X^*$ and $Z^* = {\mathcal A}^*(y^*)$ have a *simultaneous eigendecomposition*; that is, if $X^*$ has rank $r$, then $$X^* = \sum_{i=1}^r \lambda^P_i u_iu_i^T, \qquad Z^* = \lambda^D_{\max}\sum_{i=1}^r u_iu_i^T + \sum_{i=r+1}^n \lambda_i u_iu_i^T.$$ Here, $u_i$ are the eigenvectors of both $X^*$ and $Z^*$, and $\lambda^P_i$ and $\lambda^D_{\max}$ correspond to the primal eigenvalues and maximal dual eigenvalue, respectively.
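The polar pairs above can be sanity-checked against the defining inequality $\langle X, Z\rangle \leq \kappa^\circ(Z)\,\kappa(X)$. A small numerical check for the symmetric PSD case, with random PSD $X$ and symmetric $Z$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 7

# kappa(X) = tr(X) + indicator of PSD; on PSD X this is just tr(X)
G = rng.standard_normal((n, n))
X = G @ G.T                           # random PSD X

M = rng.standard_normal((n, n))
Z = (M + M.T) / 2                     # random symmetric Z

kappa = np.trace(X)
kappa_polar = max(0.0, np.linalg.eigvalsh(Z).max())   # max{0, lambda_max(Z)}

# Defining inequality of the polar gauge: <X, Z> <= kappa_polar * kappa
assert np.sum(X * Z) <= kappa_polar * kappa + 1e-9
```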
Additionally, strong gauge duality enforces $1 = \kappa(X^*)\kappa^\circ(Z^*)$ at optimality. Assuming the primal problem is feasible, $\kappa(X^*) < +\infty$, which forces $\kappa^\circ(Z^*) > 0$. Therefore, we can simplify the dual objective function to $$\kappa^\circ (Z) = \lambda_{\max}(Z)$$ over $Z$ where $\lambda_{\max}(Z) > 0$.
#### Unconstrained formulation
We now rewrite the dual in an unconstrained form $${\underset{z}{\textbf{minimize}}}\quad \displaystyle \kappa^\circ({\mathcal A}^*( Bz+\bar y ))
\label{eq:main-dual-uncons}$$ using a change of variables $y = Bz+\bar y$, for any $B$ with ${\mathbf{range}}(B) = {\mathbf{null}}(b^T)$ and any $\bar y$ such that $\langle \bar y,b\rangle = 1$. In this case, $\langle y,b\rangle = 1$ for any $z$.
Methods
=======
General overview
----------------
We consider three methods, described in "vanilla" form below.
1. *Projected gradient descent* on the constrained gauge dual $$y^{(k)} = {\mathbf{proj}}_{{\mathcal H}}(y^{(k-1)} - t\nabla_y \kappa^\circ (y^{(k-1)}))$$ where ${\mathcal H}= \{y : \langle y,b\rangle = 1\}$ is the constraint set. The Euclidean projection onto this set can be computed efficiently via $${\mathbf{proj}}_{{\mathcal H}}(s) = \left(I-\frac{1}{b^Tb} bb^T\right) s + \frac{1}{b^Tb} b.$$
2. *Gradient descent* on the unconstrained gauge dual $$z^{(k)} = z^{(k-1)} - tB^T g
\label{eq:unconstrained-descent-step}$$ where $$g = \nabla_y \kappa^\circ (y) \quad \text{ at }\quad y = Bz^{(k-1)} + \bar y.$$ An obvious choice for $B\in {\mathbb R}^{m-1,m}$ computed from a full QR of $b^T$ where $B^TB = I$ and $Bb = 0$.
3. *Coordinate descent* (Alg. \[a:cd\]) on the unconstrained gauge dual $$z^{(k)} = z^{(k-1)} - t g_z$$ where $$(g_z)_j =
\begin{cases}
\frac{\partial \kappa^\circ(y)}{\partial z_j} & j\in \widehat {\mathcal I}^{(k)}\\
0 & \text{ else.}
\end{cases}
\quad \text{ at }\quad y = Bz^{(k-1)} + \bar y,$$ where ${\mathcal I}^{(k)} = \{i : B_{ij} \neq 0 \text{ for some } j\in \widehat{\mathcal I}^{(k)}\}$ is the set of $y$-coordinates touched by the sampled $z$-coordinates $\widehat{\mathcal I}^{(k)}$. Here, in order to maintain scalability, we want $|{\mathcal I}^{(k)}|$ to be small whenever $|\widehat {\mathcal I}^{(k)}|$ is small, e.g. $B$ should be sparse. Note that $B$ does not have to be orthogonal for the unconstrained formulation to be equivalent to the constrained dual formulation. However, we have found much better results when $B$ is as close to $I$ as possible, so we pick the sparse basis $$B = {\left[\begin{matrix}}I_{i-1} & 0 \\ -\tilde b_{\mathrm{top}} & -\tilde b_{\mathrm{bot}} \\ 0 & I_{m-i} {\end{matrix}\right]},
\label{eq:sparseB}$$ where the row vectors $$\tilde b_{\mathrm{top}} = \left(\frac{b_1}{b_i},\hdots, \frac{b_{i-1}}{b_i}\right), \qquad \tilde b_{\mathrm{bot}} = \left(\frac{b_{i+1}}{b_i},\hdots, \frac{b_m}{b_i}\right)$$ occupy row $i$, and $i = {\underset{i}{\textbf{argmax}}}\; |b_i|$.
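A sketch of a sparse $B$ of this flavor, dense only in row $i$. The signs below are chosen so that $b^TB = 0$ holds exactly, which is the property required of $B$; this is one concrete construction consistent with the sparsity pattern described:

```python
import numpy as np

rng = np.random.default_rng(4)
m = 8
b = rng.standard_normal(m)
i = np.argmax(np.abs(b))

# Sparse basis for null(b^T): column for index j != i is e_j - (b_j / b_i) e_i.
# Only row i is dense, so each z-coordinate touches at most two y-coordinates.
cols = [j for j in range(m) if j != i]
B = np.zeros((m, m - 1))
for k, j in enumerate(cols):
    B[j, k] = 1.0
    B[i, k] = -b[j] / b[i]

assert np.allclose(b @ B, 0.0)                 # range(B) lies in null(b^T)
assert np.linalg.matrix_rank(B) == m - 1       # and spans all of it
```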
The main focus of this work is to exploit structural properties of the linear operator ${\mathcal A}$, and offer several “enhancements" that significantly improve the scalability of these methods. In particular, we will focus on
1. approximations for building the gradients $\nabla_y \kappa^\circ(y)$ (or subgradients $s\in \partial_y \kappa^\circ(y)$ in cases where the largest eigenvalue of the formed matrix is not simple)
2. picking $B$ in the unconstrained formulation so that multiplying by $B$ and $B^T$ is efficient, and
3. estimating the partial coordinates in the coordinate descent method. Our primary contribution is this third improvement, which in theory avoids any spectral computations (`svds` or `eigs`), is limited to small matrix products and a tiny QR computation, and maintains only low-rank approximations of all matrices. [^1]
Gradients of dual objective
---------------------------
Let us first consider $$f(y) = \kappa^\circ({\mathcal A}^*(y)) = \max\{\lambda_{\max}({\mathcal A}^*(y)),0\}.$$ Then computing $\nabla f(y)$ requires three steps.
1. Form the dual matrix variable $$W = {\mathcal A}^*(y)$$
2. Find $u$ where $Wu = \lambda_{\max}u$. Since this step is important, we will denote this operation $u = {\mathbf{evec}}_{\max}(W)$.
3. The gradient is now $$\nabla f(y) =
{\left[\begin{matrix}}(a_1^T u)^2\\
\vdots\\
(a_m^T u)^2.
{\end{matrix}\right]}$$ This step is comparatively cheap, so we will not discuss it.
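The three steps can be sketched directly, together with a finite-difference check of the gradient formula (valid when the top eigenvalue is simple and positive). A dense `eigh` stands in for an iterative eigensolver at this toy scale:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 5, 9
A = rng.standard_normal((n, m))       # columns are the measurement vectors a_i

def f(y):
    # f(y) = max{ lambda_max(A diag(y) A^T), 0 }
    W = A @ np.diag(y) @ A.T
    return max(np.linalg.eigvalsh(W).max(), 0.0)

def grad_f(y):
    # Steps 1-3: form W, find its top eigenvector u, return [(a_i^T u)^2]_i
    W = A @ np.diag(y) @ A.T
    _, V = np.linalg.eigh(W)
    u = V[:, -1]                      # eigenvector of the largest eigenvalue
    return (A.T @ u) ** 2

y = rng.standard_normal(m)
g = grad_f(y)

# Central finite differences agree with the analytic gradient
eps = 1e-6
fd = np.array([(f(y + eps * e) - f(y - eps * e)) / (2 * eps)
               for e in np.eye(m)])
assert np.allclose(g, fd, atol=1e-4)
```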
Now consider $m$ and $n$ both large, with $m > n$.
#### Building $W$
When $m$ is large, this step is computationally expensive ($O(n^2 m)$), and memory inefficient if $n$ is also large. Note that if $y \geq 0$, a simple computational shortcut is to form $$U = {\left[\begin{matrix}}\sqrt{y_1} a_1,\hdots , \sqrt{y_m}a_m{\end{matrix}\right]}, \qquad W = UU^T.$$ However, in general the intermediate $y^{(k)}$ and final $y^*$ are not nonnegative. We therefore try to estimate this quantity using $$\widehat W = \sum_{i\in {\mathcal B}} y_i a_ia_i^T
\label{eq:approx-matrix}$$ where ${\mathcal B}$ is some sample subset of $\{1,...,m\}$. In particular, we investigate three regimes:
1. ${\mathcal B}= \{1,...,m\}$ exact gradient computation
2. ${\mathcal B}= \{i : y_i \geq 0 \}$ for a PSD estimate of ${\mathcal A}^*(y)$
3. each $i\in {\mathcal B}$ is randomly picked from the set $\{i : y_i > 0\}$ with probability $y_i / \sum_{j : y_j > 0}y_j$.
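A sketch of the second regime: dropping the negative weights yields an estimate $\widehat W$ that is positive semidefinite by construction, regardless of the signs in $y$:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 6, 40
A = rng.standard_normal((n, m))       # columns a_i
y = rng.standard_normal(m)            # intermediate iterates need not be >= 0

# Regime 2: keep only the nonnegative weights, giving a PSD estimate of A*(y)
keep = y >= 0
W_hat = (A[:, keep] * y[keep]) @ A[:, keep].T

# Sum of PSD rank-1 terms, so W_hat is PSD up to roundoff
assert np.linalg.eigvalsh(W_hat).min() >= -1e-10
```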
#### Solving the eigenvalue problem
This step can be solved fairly quickly using a fast eigenvalue solver (such as a power method). However, a key issue when $W$ is indefinite is that the largest magnitude eigenvalue may be much larger than the largest algebraic eigenvalue. This is another key motivation behind the second choice of ${\mathcal B}$, to work with a PSD estimate $\widehat W$. In practice, we do not observe performance degradation with this estimation, and in fact observe considerable speedup in the convergence of `eigs`. (See also Figure \[fig:subsamp-grad\].)
![**Error in subsampling.** In this experiment, we consider $Z = \sum_{i=1}^m y_ia_ia_i^T$ where $m = 1000$ and $a_i\in {\mathbb R}^n$ and $y_i\in {\mathbb R}$ are i.i.d. Gaussian randomly generated vectors and scalars. Sorting the weights $y_{i_1} \geq y_{i_2} \geq \cdots$, we select ${\mathcal S}= \{i_1,...,i_{|S|} \}$ and $\hat Z = \sum_{i\in {\mathcal S}} y_ia_ia_i^T$, and compare the alignment between $u = {\mathbf{evec}}_{\max}(Z)$ and $\hat u = {\mathbf{evec}}_{\max}(\hat Z)$.[]{data-label="fig:subsamp-grad"}](subsamp_grad.png){width="3in"}
#### Extension to nuclear norm
The above discussion mostly also holds for $f(y) = \kappa^\circ({\mathcal A}^*(y)) = \|{\mathcal A}^*(y)\|_2$, replacing eigenvalue computations with singular value computations, and sampling based on $|y_i|$ rather than $y_i$. A key subtle advantage of using the nuclear norm formulation is that since singular values are always nonnegative, we do not need to worry as much about the convergence of `svds`. Note that though nuclear norm minimization is generally used for nonsymmetric matrix variables, here, because ${\mathcal A}^*(y)$ is symmetric, we are still only considering symmetric matrix variables. (The distinction is that we now run `eigs(W,1,'lm')` where previously we ran `eigs(W,1,'la')`.)
Coordinate descent
------------------
For applications where both $m$ and $n$ are large, we further parametrize $W$ with a low-rank approximation and use a block coordinate update at each iteration. This method is inspired by the following observation: if $z$ and $\hat z$ differ in at most $L$ elements, then for the sparse $B$ constructed above, $$y = Bz + \bar y \text{ and } \hat y = B\hat z + \bar y$$ differ in at most $2L$ elements, and $$W = {\mathcal A}^*(y) \text{ and } \widehat W = {\mathcal A}^*(\hat y)$$ differ by a term of rank at most $2L$. Now assume that at iteration $k$ we maintain a rank-$r$ approximation of $W^{(k)}$ as $$\begin{aligned}
U^{(k)}D^{(k)} (U^{(k)})^T &=& U^{(k-1)}D^{(k-1)} (U^{(k-1)})^T + A_{{\mathcal I}^{(k)}} {\mathbf{diag}}(\Delta y_{{\mathcal I}^{(k)}}) (A_{{\mathcal I}^{(k)}})^T\\
&=& {\left[\begin{matrix}}U^{(k-1)} & A_{{\mathcal I}^{(k)}}{\end{matrix}\right]}{\left[\begin{matrix}}D^{(k-1)} & 0 \\ 0 & {\mathbf{diag}}(\Delta y_{{\mathcal I}^{(k)}}) {\end{matrix}\right]}{\left[\begin{matrix}}(U^{(k-1)})^T \\ (A_{{\mathcal I}^{(k)}})^T{\end{matrix}\right]}\end{aligned}$$ where the columns of $A_{{\mathcal I}^{(k)}}$ are the $a_i$ for $i\in {\mathcal I}^{(k)}$ and $\Delta y = \hat y - y$. Packing $\widetilde D = {\left[\begin{matrix}}D^{(k-1)} & 0 \\ 0 & {\mathbf{diag}}(\Delta y_{{\mathcal I}^{(k)}}) {\end{matrix}\right]}$ and taking a QR factorization of $[U^{(k-1)}, A_{{\mathcal I}^{(k)}}] = QR$, we have $$\begin{aligned}
U^{(k)}D^{(k)} (U^{(k)})^T &=&
QR \widetilde D R^TQ^T\end{aligned}$$ where $R\widetilde D R^T$ is $(r+2L) \times (r+2L)$, and in general $r+2L \ll m$. Taking a "tiny eig" of this matrix $$R\widetilde D R^T = \widetilde U D^{(k)} \widetilde U^T$$ gives the new rank-$(r+2L)$ factorization of $W^{(k)}$ with $U^{(k)} = Q\widetilde U$. In the algorithm, we then prune $D^{(k)}$ and $U^{(k)}$ to the best rank-$r$ approximation.
#### Picking the coordinates
At each iteration, the coordinates ${\mathcal I}^{(k)}$ can be picked uniformly without replacement from $\{1,\hdots, m\}$, or according to a greedy method. In particular, the Gauss-Southwell "flavor" of coordinate descent picks $$i = {\underset{i}{\textbf{argmax}}}\; |(\nabla \kappa^\circ (y))_i|.$$ However, just making this selection requires computing a full gradient, which we never want to do. Therefore we approximate this operation by sampling coordinates at each iteration according to a *weighted* distribution, with weights $\max\{y_i,0\}$ when $\kappa = {\mathbf{tr}}+ \delta_+$ and $|y_i|$ when $\kappa = \|\cdot\|_*$.
$\bar y = e_{i_{\max}}/b_{i_{\max}}$, so that $\langle \bar y, b\rangle = 1$, where $i_{\max} = {\underset{i}{\textbf{argmax}}}\;|b_i|$.
$z^{(0)} = 0$, $y^{(0)} := \bar y$, $W^{(0)} := {\mathcal A}^*(y^{(0)})$. Compute the top-$r$ eigenvalue decomposition $$UDU^T = {\mathbf{proj}}_{{\mathbf{rank}}= r}(W^{(0)})$$ with the diagonal of $D$ in decreasing order.
Set $k = 0$. Sample $\widehat{\mathcal I}^{(k)}$ containing $L$ elements without replacement from $\{1,\hdots, m-1\}$ and update $${\mathcal I}^{(k)} = \{i : B_{ij} \neq 0 \text{ for some } j\in \widehat {\mathcal I}^{(k)}\}.$$
Compute partial gradients of $\kappa^\circ({\mathcal A}^*(y))$ with respect to $y$ and $z$, with $u = U[:,1]$ $$g_i = (a_i^T u)^2, \; i\in {\mathcal I}^{(k)},\qquad
\hat g_j =
\begin{cases}
\displaystyle \sum_{i:B_{ij}\neq 0} B_{ij} g_i,& j\in \widehat{\mathcal I}^{(k)} \\
0 & \mathrm{ else.}
\end{cases}$$
Update $z^{(k)}$ $$z^{(k)} := z^{(k-1)} - t \hat g.$$ Update the rank-$r$ approximation of $W^{(k)}$ $$\widetilde D = {\left[\begin{matrix}}D^{(k-1)} & 0 \\ 0 & {\mathbf{diag}}(\Delta y_{{\mathcal I}^{(k)}}) {\end{matrix}\right]},\qquad
QR = \texttt{qr}({\left[\begin{matrix}}U^{(k-1)} & A_{{\mathcal I}^{(k)}}{\end{matrix}\right]},0),\qquad
\Delta y = - t B\hat g$$ $$[\widetilde U,\widehat D] = \texttt{eig}(R*\widetilde D*R'),\qquad
\widehat U = Q \widetilde U$$ Prune to rank $r$ $$U^{(k)} D^{(k)} (U^{(k)})^T = {\mathbf{proj}}_{{\mathbf{rank}}=r}(\widehat U \widehat D {\widehat U}^T)$$
\[a:cd\]
Numerical results
=================
Musical note
------------
We begin by considering a small, simple problem of recovering an $11\times 11$ black and white image (Figure \[fig:musicalnote\]). This problem is small enough to be solved globally using an interior point method, which can serve as a baseline.
#### Fast methods do not give high enough fidelity solutions.
To evaluate how “fast" our methods work, we pick a fairly easy problem, with $m = 1000$ samples $a_i$ sampled uniformly without replacement from a Hadamard matrix.
- Figure \[fig:traj\] shows the trajectory of the dual objective error for the first-order and coordinate methods on this problem. We can see that when full gradients and full-rank methods are used, the global solution can be found, but the number of iterations is onerous, especially since this is a very small example. When partial gradients and low-rank approximations are used, the global solution is not found.
- Intermediate recovered images of the oversampled problem are given in Figure \[fig:projgrad-images\] (projected gradient), \[fig:uncgrad-images\] (reduced gradient), and \[fig:coord-images\] (coordinate descent). Again, we notice that while non-approximated methods can recover the true solution (after many iterations), their approximated versions do not reach high fidelity solutions.
![**Trajectory.** Oversampled musical note example, with $m = 1000$, $n = 121$. **Top:** Gradient methods, comparing full gradients vs $m/10$ subsampled gradients. PG = projected gradient, RG = reduced gradient. **Bottom:** Coordinate methods, with block size 100. []{data-label="fig:traj"}](music_gradmethods_trajectory_paper.png "fig:"){width="0.7\linewidth"} ![**Trajectory.** Oversampled musical note example, with $m = 1000$, $n = 121$. **Top:** Gradient methods, comparing full gradients vs $m/10$ subsampled gradients. PG = projected gradient, RG = reduced gradient. **Bottom:** Coordinate methods, with block size 100. []{data-label="fig:traj"}](music_coord_trajectory_paper.png "fig:"){width="0.7\linewidth"}
#### Fast methods approach good approximate solutions quickly.
One thing we do observe from the previous batch of experiments is that our fast methods are able to reach good approximate solutions almost immediately, suggesting they may provide good initializations to simpler nonconvex methods.
#### Nonconvex matrix completion
Specifically, a common approach to phase retrieval is to solve the following nonconvex rank-1 matrix completion problem $${\underset{u}{\textbf{minimize}}} \quad \sum_{i=1}^m ((a_i^Tu)^2-b_i)^2
\label{eq:nonconvex-matrix-completion}$$ Given some initial point $u^{(0)}$, we can minimize this objective iteratively using gradient steps $$u^{(k+1)} = u^{(k)} -\alpha^{(k)} \sum_{i=1}^m 4((a_i^Tu^{(k)})^2-b_i)\,(a_i^Tu^{(k)})\, a_i$$ where $\alpha^{(k)}$ is some decaying step size. This type of approach is often favored in practice because of its low per-iteration complexity ($O(mn)$) and storage $O(n)$ cost, and is often observed to recover very clean images. A disadvantage of this approach is that the quality of the solution is very sensitive to the choice of initialization. As an example, the Wirtinger flow algorithm of [@candes2015phase] recovers images using the initialization $$u^{(0)} = {\mathbf{evec}}_{\max}\left(\sum_{i=1}^m b_i a_ia_i^T\right).$$
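A sketch of this pipeline on synthetic data: spectral initialization followed by gradient steps on the quartic objective. Backtracking stands in for the decaying step size so that descent is guaranteed, and the scaling of the initial guess is a heuristic; neither choice is taken from the source:

```python
import numpy as np

rng = np.random.default_rng(8)
n, m = 10, 80
A = rng.standard_normal((m, n))       # rows a_i
x = rng.standard_normal(n)            # planted signal
b = (A @ x) ** 2                      # noiseless magnitude measurements

def obj(u):
    return np.sum(((A @ u) ** 2 - b) ** 2)

def grad(u):
    r = A @ u
    return 4 * A.T @ ((r ** 2 - b) * r)

# Spectral initialization: top eigenvector of sum_i b_i a_i a_i^T
W0 = (A.T * b) @ A
_, V = np.linalg.eigh(W0)
u = V[:, -1] * np.sqrt(b.mean())      # heuristic scale for the initial guess

# Gradient descent with backtracking, so the objective never increases
f0 = obj(u)
for _ in range(200):
    g = grad(u)
    t = 1e-3
    while obj(u - t * g) >= obj(u):
        t /= 2
        if t < 1e-16:
            break
    u = u - t * g

assert obj(u) < f0                    # progress from the spectral start
```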
We propose to recover images using the nonconvex matrix completion method with, as initialization, an approximate solution from a few iterations of our faster methods: $$u^{(0)} = {\mathbf{evec}}_{\max}\left(\sum_{i=1}^m y^{(K)}_i a_ia_i^T\right).$$ Figures \[fig:musicnote-init\] (random instance) and \[fig:musicnote-recovery\] (averaged over 250 trials) illustrate the competitive advantage of using an approximate solution from the fast methods as a higher-fidelity initialization of the matrix completion problem.
![**Solutions to nonconvex matrix completion.** Recovered images using a variety of initializations. Title = \# samples.[]{data-label="fig:musicnote-init"}](musicnotes_initialization.png){width="5.5in"}
![**Observed recovery rate.** k = \# samples in gradient. e = epochs, r = rank. WF = Wirtinger Flow initialization.[]{data-label="fig:musicnote-recovery"}](musicnote_recovery.png){width="5.5in"}
#### Slightly larger numerical results.
Figure \[fig:ubclogo\] and \[fig:tree\] repeat the experiment on slightly larger images, with different structural properties.
![**UBC Logo.** $n = 5220$. Relative objective = $f* / f*_{\mathrm{WF}}$. Overhead time refers to total time used to compute the initial point. The average runtime of the nonconvex matrix completion is about 30 seconds. All hyperparameters (number of iterations, step size decay scheme) are tuned to make each example as efficient and high quality as possible. []{data-label="fig:ubclogo"}](ubcphoto.png){width="6in"}
![**Tree.** $n = 4824$. Relative objective = $f* / f*_{\mathrm{WF}}$. Overhead time refers to total time used to compute the initial point. The average runtime of the nonconvex matrix completion is about 5 minutes. All hyperparameters (number of iterations, step size decay scheme) are tuned to make each example as efficient and high quality as possible.[]{data-label="fig:tree"}](tree_results.png){width="6in"}
Further directions
==================
This document represents a quick set of experiments on a simple idea for reducing the computational complexity of the SDP relaxation of the phase retrieval problem. There are several interesting directions for extension.
#### Gradient sampling
Currently, our gradient sampling approach is to sample each weight according to its positive contribution, normalized, with no other transformations. A more generalized class of sampling weights is to use softmax smoothing, where $$\mathrm{Pr}(j) = \frac{\exp(\frac{y_j}{\mu})}{\sum_k\exp(\frac{y_k}{\mu})}$$ and for a specific choice of $\mu$, reduces to our sampling scheme. This kind of sampling can be modeled using a Gumbel random variable, for example.
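The Gumbel connection can be made concrete: adding independent Gumbel noise to $y_j/\mu$ and taking the argmax samples exactly from the softmax distribution (the Gumbel-max trick). A quick empirical check with illustrative weights:

```python
import numpy as np

rng = np.random.default_rng(9)
y = np.array([2.0, 1.0, 0.5, -1.0])
mu = 0.5

# Softmax probabilities with temperature mu
p = np.exp(y / mu)
p /= p.sum()

# Gumbel-max trick: argmax(y/mu + Gumbel noise) ~ Categorical(softmax(y/mu))
N = 200_000
gumbel = -np.log(-np.log(rng.random((N, len(y)))))
counts = np.bincount(np.argmax(y / mu + gumbel, axis=1), minlength=len(y))

# Empirical frequencies match the softmax probabilities
assert np.allclose(counts / N, p, atol=0.01)
```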
#### Better choices of $B$
In the unconstrained dual formulation , there is a tradeoff in the choice of $B$ as incredibly sparse (improving its per-iteration complexity) and perfectly conditioned (ideally, orthogonal, and therefore dense). Further exploration here can be made to optimize this tradeoff.
#### Scalability
Thus far, we have seen the most success with very small images. With larger images, it is not clear how the approximation error scales, and whether it remains close enough to ensure a good initialization of the nonconvex problem.
#### Primal feasibility
In approximate dual methods, primal feasibility is usually not assured. Here, we just use the maximum eigenvector of ${\mathcal A}^*(y^*)$ to reconstruct the image, but first performing some projection to ensure primal feasibility may lead to better answers.
#### Other spectral approximation methods
Here, we experiment with a spectral approximation method unique to the phase retrieval problem, where in the limit ${\mathcal A}^*({\mathcal A}(xx^T))$ is approximately proportional to $xx^T$. We have not compared this to other spectral approximation methods, like sketching, subsampling, sparsification, etc.
[^1]: In practice, in order to reach the global solution, sometimes the matrix estimates deteriorate, and need to be “refreshed", so a full `eigs` is run. However, in phase retrieval we often don’t need this much precision.
---
abstract: 'Three planets have been directly imaged around the young star HR 8799. The planets are 5–13 [$M_{\rm Jup}$]{} and orbit the star at projected separations of 24–68 AU. While the initial detection occurred in 2007, two of the planets were recovered in a re-analysis of data obtained in 2004. Here we present a detection of the furthest planet of that system, HR 8799 b, in archival HST/NICMOS data from 1998. The detection was made using the locally-optimized combination of images algorithm to construct, from a large set of HST/NICMOS images of different stars taken from the archive, an optimized reference point-spread function image used to subtract the light of the primary star from the images of HR 8799. This new approach improves the sensitivity to planets at small separations by a factor of $\sim$10 compared to traditional roll deconvolution. The new detection provides an astrometry point 10 years before the most recent observations, and is consistent with a Keplerian circular orbit with $a\sim$70 AU and low orbital inclination. The new photometry point, in the F160W filter, is in good agreement with an atmosphere model with intermediate clouds and vertical stratification, and thus suggests the presence of significant water absorption in the planet’s atmosphere. The success of the new approach used here highlights a path for the search and characterization of exoplanets with future space telescopes, such as the James Webb Space Telescope or a Terrestrial Planet Finder.'
author:
- 'David Lafrenière, Christian Marois, René Doyon, Travis Barman'
title: 'HST/NICMOS detection of HR 8799 in 1998'
---
Introduction
============
After more than a decade marked by the great success of the radial velocity and transit planet detection techniques, the direct imaging technique has finally made its grand entry on the scene in late 2008 through a series of exciting exoplanet discoveries [@marois08; @kalas08; @lafreniere08; @lagrange08], including the spectacular discovery of the multiple-planet system HR 8799. This system showcases three planets of mass 5–13 [$M_{\rm Jup}$]{} orbiting at projected separations of $\sim$24, 38 and 68 AU. It is located 39.4 pc away from the Sun and appears to be seen nearly face-on. The planets all orbit the star in the same direction and likely in the same plane, consistent with formation within a circumstellar disk. The host is a 30–160 Myr-old star of spectral type A5, mass $\sim$1.5 [$M_{\odot}$]{} and luminosity $\sim$4.9 [$L_{\odot}$]{}; it is also classified as both $\gamma$ Dor and $\lambda$ Boo types. A large infrared excess is detected at $\sim$100 $\mu$m, suggesting that the planetary system is surrounded by a large dust disk. The reader is referred to @marois08 and references therein for further details on this system.
As the first multiple-planet system directly imaged, HR 8799 offers many new possibilities to advance our understanding of planets. First, multi-wavelength photometry and spectroscopy of these three planets, which are almost certainly of the same age and metallicity, will be of great value for the validation and calibration of both evolutionary and atmospheric models of giant planets. Also, with good measurements of the orbits of the planets, it could be possible to independently constrain their masses through dynamical studies, as already attempted by @fabrycky08.
Besides its high scientific interest, the discovery of the HR 8799 system represents a great technical achievement given the brightness ratios and angular separations between the planets and the star. The main obstacle to direct imaging of exoplanets, for both space- and ground-based telescopes, is the scattering of stellar light by irregularities on optical surfaces, which creates a halo of bright quasi-static speckles around the stellar core, masking the underlying fainter planets. The planets around HR 8799 were initially discovered using adaptive optics and the angular differential imaging (ADI) technique [@marois06], which uses the natural field-of-view rotation of an alt-az telescope to discriminate planets from quasi-static speckles. Effectively, this technique allows one to construct a high-fidelity image of the stellar point-spread function (PSF) that does not contain the signal of any planets; this reference PSF image is subtracted from the target image to remove the light from the star while preserving that of any planet. This technique, coupled with the locally-optimized combination of images (LOCI) algorithm [@lafreniere07a], has proved to be the most efficient approach to detect planets from the ground [see e.g. @lafreniere07b].
Following the initial detection of HR 8799 b and c in October 2007, a careful re-analysis of adaptive optics observations of HR 8799 obtained in July 2004, using improved PSF subtraction algorithms, made it possible to recover both planets, thus providing a baseline of four years relative to the most recent observations of September 2008. This four-year baseline firmly establishes common proper motion of the two outermost planets and clearly reveals their orbital motion. However, this baseline is still quite short compared to their orbital periods ($P>200$ yr), and astrometric measurements over a longer baseline would be highly valuable in better constraining their orbits, and consequently the star and planet masses. This will require several years of observations, but could be achieved more rapidly should a similar re-analysis be applied successfully to even earlier data.
HR 8799 was observed with HST/NICMOS in 1998 as part of a direct imaging survey for massive planets around young nearby stars (program 7226, PI Eric Becklin); the main results of this survey were published in @lowrance05. Although these observations have, in principle, sufficient angular resolution and flux sensitivity to see the planets of the HR 8799 system, the scattered light from the bright primary prevented their actual detection. This scattered light can be partly removed by subtracting images obtained at two different spacecraft roll angles, the so-called roll-deconvolution technique [@schneider03], but this is insufficient to reveal the planets because of important PSF evolution between the two roll angles. As part of the same HST program, many other stars were observed using the same instrumental configuration as for HR 8799, and as a result several images display PSFs very similar to that of HR 8799. As suggested in @lafreniere07a, such observations provide a very interesting basis for constructing an optimized reference PSF image using the LOCI algorithm. The LOCI algorithm, detailed in @lafreniere07a, determines the coefficients needed to linearly combine several reference images into an optimized reference PSF image whose subtraction from the target image will minimize the residual noise. An important and powerful feature of this algorithm is its flexibility to optimize the PSF subtraction locally over several sub-sections of the image. Applied to the HST data set mentioned above, the LOCI algorithm could potentially reduce the scattered light of HR 8799 sufficiently to reveal the planets. With this in mind we have re-analyzed these data and here we report the successful detection of the furthest of the three planets, HR 8799 b, thus extending astrometric measurements of this planet to ten years. 
Incidentally, this means that an exoplanet could have been discovered by direct imaging only a few years after the discovery of 51 Peg b by the radial velocity technique.
Data set and analysis {#sect:analysis}
=====================
All the data used in this study come from HST program 7226, which was a survey for giant planets and brown dwarfs around young nearby stars. The observations used the NICMOS medium resolution camera NIC2, the F160W filter and the 0.63″-diameter occulting spot to reduce the diffracted light from the primary star. Typically, three images of each target were obtained at each of two roll angles differing by 29.9°. A total of 39 targets were observed for this program. The pipeline-reduced images used here were retrieved from the HST archive housed at the CADC.[^1]
As mentioned above, the LOCI algorithm was used with this large set of well-correlated images to construct and subtract an optimized PSF for each of the six images of HR 8799. A priori, the set of reference images used by the LOCI algorithm could include all images of targets that are not HR 8799, but in practice we have omitted some targets from the set because they showed a second point source within a separation of 3″ from the star, were significantly fainter than HR 8799, or were not centered behind the coronagraph. The final set of reference images included 203 images from 23 unique sources. When analyzing a given image of HR 8799, we have added to this set the three images of HR 8799 obtained at the other roll angle. Before execution of the LOCI algorithm, all reference images were registered to the target image based on a cross-correlation of the secondary mirror support diffraction spikes. The algorithm was first applied with the specific goal of detecting HR 8799 b, and thus a single optimization region was used. This optimization region was defined as a half annulus extending radially from 1.275″ to 2.175″, centered on the calculated position of HR 8799 b but excluding pixels within a radius of 5 pixels of that position. The exclusion was used to prevent the algorithm from trying to subtract the companion itself, thus biasing the determination of the coefficients of the linear combination and affecting the residual flux of the companion. The area of the optimization region, $\sim$900 pixels, is equivalent to 225 PSF cores (i.e. the characteristic size of a speckle). This number of “degrees of freedom” in the optimization region may be compared with the number of different variables, 46 different observations (23 sources each at two roll angles), in the linear combination of the optimum reference image. The different observations do not represent free parameters of the model PSF, however, as they are highly correlated with each other.
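At its core, each LOCI subtraction is a linear least-squares fit over the optimization region: the coefficients of the reference-image combination minimize the residual noise. A schematic sketch with synthetic data (the dimensions loosely echo the $\sim$900-pixel region and 46 reference observations quoted above; no real images are modeled):

```python
import numpy as np

rng = np.random.default_rng(10)
npix, nref = 900, 46                  # pixels in region, reference observations

# Columns of R: reference PSF images restricted to the optimization region;
# t: target image over the same region, correlated with the references
R = rng.standard_normal((npix, nref))
t = R @ rng.standard_normal(nref) + 0.1 * rng.standard_normal(npix)

# LOCI coefficients: least-squares linear combination of the references
c, *_ = np.linalg.lstsq(R, t, rcond=None)
residual = t - R @ c                  # PSF-subtracted region

# The optimized subtraction can do no worse than any single reference image
assert np.linalg.norm(residual) <= min(
    np.linalg.norm(t - R[:, j]) for j in range(nref))
assert np.allclose(R.T @ residual, 0.0, atol=1e-8)
```

In the actual analysis the fit is repeated per sub-section of the image, with pixels near the companion excluded, as described above.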
The companion HR 8799 b was detected in all six residual images, and the change in its position between the two roll angles is precisely equal to the spacecraft rotation angle applied. The three residual images at each roll angle were co-added and the results are shown in Fig. \[fig:im\]. At the peak pixel of the companion in these co-added images, the detections are at the levels of 10$\sigma$ and 6$\sigma$ for the first and second roll angles, respectively. We have verified that the residual noise in the optimization region closely follows a Gaussian distribution; thus the significance of the detection is high. The improvement in PSF subtraction provided by the LOCI algorithm is obvious from Fig. \[fig:im\], in which the much higher residuals of the roll-subtracted image are apparent. At the separation of HR 8799 b, the residual noise in the LOCI-subtracted image is nine times lower than for the classical roll-deconvolution image (panel [*b*]{} of Fig. \[fig:im\]). It is interesting to note that the LOCI subtraction almost reaches the PSF photon noise. At the separation of HR 8799 b, we estimate that the total residual noise is $\sim$1.8 times larger than the local photon noise.
To confirm that the above detection is not an artifact of the image processing, we have repeated the analysis 1) without excluding the region containing the companion from the optimization half-annulus, and 2) using only half of the available reference images. The companion was still well detected in both cases, although at lower signal-to-noise ratios (S/Ns), as expected. We have repeated the above analysis with an optimization region designed specifically for HR 8799 c, at $\sim$0.9$\arcsec$, but the results were inconclusive. The same applies for d, which is at an even smaller separation.
Results {#sect:results}
=======
The astrometric and photometric analysis of the companion was done using model PSFs generated with the TinyTim[^2] software (version 7.0)[@krist93]. Model PSFs appropriate for the NIC2 pre-cryocooler camera with the F160W filter were generated for each of three approximate positions on the detector: the central star, the companion at the first roll angle, and the companion at the second roll angle. All model PSFs were generated for a width of 10$\arcsec$ and a pixel oversampling factor of 9 to minimize interpolation effects when shifting images. Accordingly, during the analysis all shifts were done on the oversampled images, which were then binned $9\times9$ pixels to match the actual NIC2 plate scale. Initially, the model PSFs are normalized to a total flux of one.
For each image, the appropriate companion model PSF was first spatially shifted and intensity scaled over a grid in $dx$, $dy$, and flux. Each model PSF in this 3D parameter space was then subtracted from the image, and the residual noise in a $7\times7$-pixel box centered on the companion was computed. The combination of $dx$, $dy$, and flux that yielded the minimum residual noise was found and provided the coordinates and flux of the companion. The uncertainty on the position of the companion, 5 mas, was estimated from the dispersion of the measurements made in all individual images. The center of the star was determined by shifting the appropriate model PSF to maximize the cross-correlation of its diffraction spikes with those of the actual image, as well as by a cross-correlation of the observed PSF diffraction spikes with themselves after a rotation of 180$\degr$ about the inferred center; the maximum difference between these two centroid measurements, $\sim$0.1 pixel or 7.5 mas, was taken as an estimate of the centroid accuracy. The pixel coordinates of the star and companion were converted to RA and DEC using the astrometry solution defined in the image FITS headers. An S/N-weighted mean of the relative position of HR 8799 b over the six images was finally calculated; the results are indicated in Table \[tbl:pos\].
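The companion fit described above is a brute-force search over the $(dx, dy, {\rm flux})$ grid. A hedged sketch of the search follows, with integer shifts standing in for the subpixel shifts that the actual analysis performs on the $9\times$-oversampled grid; the grids and box size are placeholders:

```python
import numpy as np

def fit_companion(image, model_psf, x0, y0, shifts, fluxes, box=7):
    """Brute-force fit of (dx, dy, flux): shift and scale the model
    PSF, subtract it from the image, and keep the parameters that
    minimize the residual noise in a (box x box) region centred on
    the approximate companion position (x0, y0)."""
    h = box // 2
    best_score, best_params = np.inf, None
    for dx in shifts:
        for dy in shifts:
            # Integer shift; in the actual analysis the shifts are done
            # on a 9x-oversampled grid, so subpixel steps become integer.
            shifted = np.roll(np.roll(model_psf, dy, axis=0), dx, axis=1)
            for f in fluxes:
                resid = image - f * shifted
                patch = resid[y0 - h:y0 + h + 1, x0 - h:x0 + h + 1]
                if patch.std() < best_score:
                    best_score, best_params = patch.std(), (dx, dy, f)
    return best_params
```

Applied to a synthetic image containing a scaled copy of the model PSF, the search recovers the injected shift and flux exactly.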
[lcccc]{} 1998.8285 & $1411\pm9$ & $986\pm9$ & $1721\pm12$ & $55.1\pm0.4$\
2004.5337 & $1471\pm5$ & $884\pm5$ & $1716\pm7$ & $59.00\pm0.23$\
2008.6267 & $1527\pm2$ & $800\pm2$ & $1724\pm3$ & $62.35\pm0.09$
Since the model PSFs span 10$\arcsec$, the companion flux found by the above procedure is effectively the same as that in an infinite aperture. The information in the [*NICMOS Data Handbook v7.0*]{} was then used to convert this flux ($20.5\pm2.5$ counts s$^{-1}$) to physical units, yielding $(4.2\pm0.5)\times10^{-5}$ Jy, or a magnitude of $18.54\pm0.12$. This magnitude is also indicated in Table \[tbl:phot\] along with the measurements reported by @marois08 and the expected magnitudes for two different atmosphere models (see §\[sect:discussion\] for more detail on models).
[lccc]{} $J$ & $19.28\pm0.16$ & 19.42 & 19.66\
F160W & $18.54\pm0.12$ & 18.58 & 18.31\
$H$ & $17.85\pm0.17$ & 18.30 & 18.08\
$K_{\rm s}$ & $17.03\pm0.08$ & 17.43 & 16.91\
$L^\prime$ & $15.64\pm0.11$ & 15.24 & 15.59
Discussion {#sect:discussion}
==========
As visible in Fig. \[fig:pos\], the new position is consistent with the previous ones, and still suggests a nearly circular orbit seen close to face-on. We have done very simple orbital fits to verify that the data are consistent with true Keplerian orbits. For simplicity, we have considered only circular orbits and assumed that the stellar mass is precisely equal to 1.5 [$M_{\odot}$]{}, and then we explored a range of semimajor axes ($a=60$–100 AU) and inclinations ($i=0\degr$–$45\degr$). The best fits, with a $\chi^2\sim3.2$, were found for $a\sim68$–74 AU and $i\sim13\degr$–$23\degr$; an example of such a fit is shown in Fig. \[fig:pos\]. This range of inclination is in line with what is expected for the equatorial plane of the star based on its measured $v \sin{i}$ (37.5 km s$^{-1}$, @gray99) which, for the range of true rotation velocities of A5 stars (100–300 km s$^{-1}$, @royer07), would yield $i=7\degr$–$22\degr$. We refrain from carrying out more detailed orbital fits as the limited time baseline and astrometric precision available would prevent us from reaching useful constraints on the orbits.
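A minimal version of such a circular-orbit fit is a $\chi^2$ grid search over semimajor axis, inclination, and orbital phase. The parameterization below (line of nodes along the $x$-axis, a distance of 39.4 pc, stellar mass 1.5 $M_\odot$) is an illustrative assumption, not the exact setup used for Fig. \[fig:pos\]:

```python
import numpy as np

def circular_orbit_xy(t, a_au, inc_deg, phi0=0.0, m_star=1.5,
                      d_pc=39.4, t0=2000.0):
    """Sky-plane offsets (mas) from the star for a circular orbit of
    semimajor axis a_au (AU) seen at inclination inc_deg, with the line
    of nodes along x.  Kepler's third law gives the period in years."""
    period = np.sqrt(a_au ** 3 / m_star)                # years
    theta = phi0 + 2 * np.pi * (t - t0) / period        # orbital phase
    x = a_au * np.cos(theta)
    y = a_au * np.sin(theta) * np.cos(np.radians(inc_deg))
    return 1000.0 * x / d_pc, 1000.0 * y / d_pc         # AU/pc -> mas

def chi2(params, epochs, x_obs, y_obs, sigma):
    """Chi-square of the measured offsets against a circular model;
    minimized over a grid of (a, i, phi0) in a simple fit."""
    a_au, inc_deg, phi0 = params
    x, y = circular_orbit_xy(epochs, a_au, inc_deg, phi0)
    return np.sum(((x - x_obs) ** 2 + (y - y_obs) ** 2) / sigma ** 2)
```

For synthetic astrometry generated from the model itself, the true parameters give $\chi^2=0$ and clearly beat a wrong semimajor axis.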
The low luminosity and red near-infrared color of HR 8799 b ($J-K_{\rm s} = 2.25$) are indicative of atmospheric dust, but constraining the dust properties (composition and distribution) requires a detailed comparison of its SED with various model atmosphere predictions. Figure \[fig:sed\] compares the new F160W photometric point, along with earlier $J$, $H$, $K_{\rm s}$ and $L^\prime$-band photometry, to an intermediate (vertically stratified) cloud model (Barman et al., in prep.) with parameters selected by comparing the planet’s age and luminosity to substellar evolution tracks [@marois08]. The observed and model photometry are listed in Table \[tbl:phot\]. The F160W filter is wide enough to incorporate a substantial fraction of the water band between the $J$ and $H$ bands. Since the depths of the water bands are greatly reduced as dust content increases, this filter, and other NICMOS band-passes, have greater potential for constraining the atmospheric dust content than the standard ground-based near-IR windows. As can be seen in Fig. \[fig:sed\] (and Table \[tbl:phot\]), the new F160W photometric point is in excellent agreement with this intermediate cloud model, and about 2$\sigma$ fainter than an extreme dusty atmosphere model having clouds that blanket most of the atmosphere. While the overall agreement with the extreme cloud atmosphere model appears relatively good based on Table \[tbl:phot\], this model requires $T_{\rm eff} = 1600$ K (cf. $\sim$800 K for the intermediate cloud model) and a very small radius ($\sim$4 Earth radii) to match the observed luminosity, both of which are inconsistent with formation and evolution models.
Concluding remarks
==================
The work presented here demonstrates that the LOCI algorithm can take HST data a step further, extending the sensitivity to planets beyond what has been achieved before. The good performance of the LOCI algorithm with HST data is due in large part to the high stability of the optical aberrations of HST and the consistent, accurate pointing between the different visits. Over the years, a large archive of HST observations aimed at detecting exoplanets has been assembled. Based on the present results, it would be interesting to look again at this set of data, using the LOCI algorithm, to see if any other sources have been missed by previous searches. On a related note, it could be interesting to attempt a re-analysis of archival ground-based AO observations using an approach similar to that used here. However, this could turn out to be much less efficient than for HST given the larger evolution of the PSF structure for ground-based telescopes.
Looking into the future, the approach presented here should definitely be an integral part of the strategy for the search and characterization of planets with future space telescopes, such as the James Webb Space Telescope (JWST) or a Terrestrial Planet Finder. JWST, for instance, is expected to be relatively stable in temperature and should not suffer from the breathing problem experienced by HST. Its PSF should therefore be more stable than that of HST and the LOCI algorithm will likely perform extremely well, especially for observations obtained within a given wavefront adjustment campaign, typically every 14 days [@gardner06]. A preliminary analysis of the JWST PSF temporal evolution done by @makidon08 indeed suggests that PSF subtraction should perform well. For this approach to work, enough attention should be paid to the accurate positioning of the different targets, in particular for observations made with the coronagraphic occulting masks. Having an efficient PSF subtraction strategy will be absolutely necessary for JWST given its complicated, speckled PSF structure arising from its segmented pupil, in addition to the speckles arising from the unavoidable optical aberrations. While roll-deconvolution could perform well for JWST in the absence of telescope breathing, the LOCI approach remains very interesting given the observatory’s maximum roll angle of only $\pm5\degr$, which will severely limit the use of roll-deconvolution at small separations.
The authors would like to thank Yanqin Wu and Andrew Shannon for discussions of the possible orbits of this system, Bruce Macintosh for discussions relating to the new NICMOS PSF subtraction approach used here, and Markus Janson for discussions about various aspects of the NICMOS data. D.L. and C.M. are supported in part through postdoctoral fellowships from the Fonds Québécois de la Recherche sur la Nature et les Technologies. R.D. is supported through a grant from the Natural Sciences and Engineering Research Council of Canada.
, D. C., & [Murray-Clay]{}, R. A. 2008, ArXiv e-prints, astro-ph/0812.0011
, J. P. et al. 2006, Space Science Reviews, 123, 485
, R. O., & [Kaye]{}, A. B. 1999, , 118, 2993
, P. et al. 2008, Science, 322, 1345
, J. 1993, in ASP Conf. Series, ed. R. J. [Hanisch]{}, R. J. V. [Brissenden]{}, & J. [Barnes]{}, Vol. 52, 536
, D. et al. 2007, , 670, 1367
, D. et al. 2007, , 660, 770
, D. et al. 2008, , 689, L153
, A. . et al. 2008, ArXiv e-prints, astro-ph/0705.4290
, P. J. et al. 2005, , 130, 1845
Makidon, R. B. et al. 2008, in Proc. SPIE, 7010, 70100O
, C. et al. 2006, , 641, 556
, C. et al. 2008, Science, 322, 1348
, F., [Zorec]{}, J., & [G[ó]{}mez]{}, A. E. 2007, , 463, 671
, G., & [Silverstone]{}, M. D. 2003, in Proc. SPIE, 4860, 1
[^1]: <http://www2.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/cadc/>
[^2]: <http://www.stsci.edu/software/tinytim/tinytim.html>
---
author:
- 'Kazuo [Hida]{}[^1], Ken’ichi [Takano]{}$^{1}$, and Hidenori [Suzuki]{}$^{1}$[^2]'
title: ' Haldane Phases and Ferrimagnetic Phases with Spontaneous Translational Symmetry Breakdown in Distorted Mixed Diamond Chains with Spins $1$ and $1/2$'
---
Introduction
============
Quantum magnetism in frustrated spin systems is a rapidly developing field of condensed matter physics.[@hfm2008; @diep] At first glance, one would expect that geometrical frustration enhances quantum fluctuation and drives an ordered state into a disordered state. However, recent progress in this field of physics has shown that this simple intuition is not always valid and that geometrical frustration induces a variety of exotic quantum phenomena, which are not easily predicted. Under an appropriate condition, it even stabilizes an unexpected magnetic long range order such as the frustration-induced ferrimagnetic and spin nematic orders.
To understand magnetism under the interplay of geometrical frustration and quantum fluctuation, it is desirable to begin with typical spin models with exact solutions. Among them, there exists a class of models whose ground states are exactly written down as spin cluster solid (SCS) states because of frustration. An SCS state is a tensor product state of exact local eigenstates of cluster spins. Well-known examples are the Majumdar-Ghosh model,[@mg] whose ground state is a prototype of spontaneously dimerized phases in one-dimensional frustrated magnets,[@hase] and the Shastry-Sutherland model,[@shs] which corresponds to the material SrCu$_2$(BO$_3$)$_2$.[@kage1; @kage2] In these models, the spin clusters are singlet dimers.
The diamond chain is another frustrated spin chain with exact SCS ground states. The lattice structure is shown in Fig. \[lattice\_structure\]. In a unit cell, there are two kinds of nonequivalent lattice sites occupied by spins with magnitudes $S$ and $\tau$; we denote the set of magnitudes by ($S$, $\tau$). One of the authors and coworkers[@takano; @Takano-K-S] introduced this lattice structure and investigated the general case of ($S$, $S$), i.e., the pure diamond chain (PDC). Any PDC is shown to have at least one exact SCS ground-state phase where each spin cluster has spin 0. In particular, in the case of (1/2, 1/2), they determined the full ground-state phase diagram by combining rigorous arguments with numerical calculations. After that, Niggemann et al.[@nig1; @nig2] investigated a series of diamond chains with ($S$, 1/2). For the special case of (1/2, 1/2), they reproduced the results of ref. .
![ Structure of the diamond chain. Spin magnitudes in a unit cell are indicated by $S$ and $\tau$; we denote the set of magnitudes by ($S$, $\tau$). The PDC is the case of $S = \tau$, while the MDC is the case of $S = 2\tau$ with an integer or half-odd integer $\tau$.[]{data-label="lattice_structure"}](64400Fig1.eps){width="4.5cm"}
The mixed diamond chain (MDC) is defined as a diamond chain with ($S$, $S/2$) for the integer $S$.[@tsh] The special case of (1, 1/2) was first investigated by Niggemann et al.[@nig1; @nig2] They considered it as one of the series of models with ($S$, 1/2). Recently, extensive investigation on the MDC has been carried out by the present authors.[@tsh; @hts; @htsalt] The MDC is of special interest among diamond chains, because only the MDC has the Haldane phase in the absence of frustration, so that we can observe the transition from the Haldane phase to a SCS phase induced by frustration. In contrast, diamond chains of other types have ferrimagnetic ground states for weak frustration.
The features common to all types of diamond chains are the infinite number of local conservation laws and the existence of more than two different types of exact SCS ground states, realized depending on the strength of frustration. For example, the $S=1/2$ PDC has a nonmagnetic phase accompanied by spontaneous translational symmetry breakdown (STSB) and a paramagnetic phase without STSB. This model also has a ferrimagnetic ground state in the less frustrated region.[@Takano-K-S] On the other hand, the MDC with spins 1 and $1/2$ has three different paramagnetic phases accompanied by STSB and one paramagnetic phase without STSB. This model also has a nonmagnetic Haldane ground state in a less frustrated region.[@nig1; @tsh] The SCS structures of the ground states are also reflected in characteristic thermal properties, as reported in ref. .
Modifications of the PDC and MDC have been examined by many authors. Among them, the spin-1/2 PDC with distortion has been thoroughly investigated by numerical methods.[@ottk; @otk; @sano] Azurite, a natural mineral, is known to consist of distorted PDCs with spin 1/2, and the magnetic properties of this material have been studied experimentally in detail.[@kiku; @ohta] Other materials have also been reported.[@izuoka; @uedia] The diamond chain is one of the simplest models compatible with the 4-spin cyclic interaction. The effects of this type of interaction on the PDC have recently been investigated by Ivanov [*et al.*]{}[@dia4spin] The present authors also investigated the MDC with bond-alternating distortion and found an infinite series of ground states with STSB.[@htsalt] In addition, as reviewed in ref. , the MDC is related to other important models of frustrated magnetism such as the dimer-plaquette model,[@plaq; @plaq2; @koga1; @koga2; @plaq3; @plaq4] frustrated Heisenberg ladders,[@frulad1] hybrid diamond chains consisting of Heisenberg bonds and Ising bonds,[@str1; @str2] and an Ising model on a hierarchical diamond lattice.[@fuku] Among them, the dimer-plaquette chain with ferromagnetic interplaquette interaction reduces to the MDC in the limit of strong interplaquette interaction.[@plaq4]
Thus far, in spite of the theoretical relevance of the MDC, no materials described by the MDC have been found. Nevertheless, synthesizing MDC materials is not an unrealistic expectation in view of the successful synthesis of many low-dimensional bimetallic magnetic compounds[@m-d] and organic magnetic compounds.[@cb] In general, it is natural to expect that the lattice may be distorted in real MDC compounds, as it is in azurite. From this viewpoint, it is important to present theoretical predictions on the ground states of distorted MDCs to widen the range of candidate MDC materials and to raise the possibility of their synthesis.
![Displacement modes of a diamond unit.[]{data-label="mode"}](64400Fig2.eps){width="6cm"}
We begin by classifying the distortion patterns by the normal modes of each diamond unit. Excluding two translations and one rigid-body rotation, we have 5 normal modes within the diamond plane, as depicted in Fig. \[mode\]. A distorted MDC may be realized as a result of the collective softening of these normal modes. In particular, the distortion patterns in (a) and (b) break the local conservation laws that hold in the undistorted MDC. Hence, these distortions induce effective interactions between the cluster spins in the whole lattice, and may form novel exotic phases. We investigate these interesting cases in the present paper. In what follows, we refer to the distortion patterns in (a) and (b) as type A and type B, respectively. The MDCs with type A and type B distortions are depicted in Figs. \[lattice\](a) and \[lattice\](b), respectively. The distortion patterns in Figs. \[mode\](d) and \[mode\](e) do not change the geometry of the original undistorted MDC. The distortion pattern in Fig. \[mode\](c) is of separate interest, since it induces bond alternation in the undistorted MDC without breaking the local conservation laws. This case has been investigated separately and published in a previous paper.[@htsalt]
![Structures of MDC with [$S=1$ and $\tau^{(1)}=\tau^{(2)}=1/2$]{} with (a) type A and (b) type B distortions.[]{data-label="lattice"}](64400Fig3.eps){width="6cm"}
This paper is organized as follows. In §2, the Hamiltonians for the MDCs with the type A and type B distortions are presented, and the structure of the ground states of the MDC without distortion is summarized. The ground-state phases for the MDC with the type A distortion are discussed in §3, and those for the MDC with the type B distortion are discussed in §4. The last section is devoted to summary and discussion.
Hamiltonian {#section:ham}
===========
The MDCs with the type A and type B distortions are described, respectively, by the following Hamiltonians: $$\begin{aligned}
{\cal H}_{\rm A} = &\sum_{l=1}^{N}
\Bigl[ (1+\deltaa)\v{S}_{l}\v{\tau}^{(1)}_{l}
+ (1-\deltaa)\v{\tau}^{(1)}_{l}\v{S}_{l+1}
\nonumber\\
&+(1-\deltaa)\v{S}_{l}\v{\tau}^{(2)}_{l}
+(1+\deltaa)\v{\tau}^{(2)}_{l}\v{S}_{l+1}
+ \lambda\v{\tau}^{(1)}_{l}\v{\tau}^{(2)}_{l} \Bigr] ,
\label{hama}\\
{\cal H}_{\rm B} =&\sum_{l=1}^{N}
\Bigl[ (1+\deltab)\v{S}_{l}\v{\tau}^{(1)}_{l}
+(1+\deltab)\v{\tau}^{(1)}_{l}\v{S}_{l+1}
\nonumber\\
&+(1-\deltab)\v{S}_{l}\v{\tau}^{(2)}_{l}
+(1-\deltab)\v{\tau}^{(2)}_{l}\v{S}_{l+1}
+ \lambda\v{\tau}^{(1)}_{l}\v{\tau}^{(2)}_{l} \Bigr] ,
\label{hamb}\end{aligned}$$ where $\v{S}_{l}$ is the spin-1 operator, and $\v{\tau}^{(1)}_{l}$ and $\v{\tau}^{(2)}_{l}$ are the spin-1/2 operators in the $l$th unit cell. The parameter $\deltaa$ ($\deltab$) represents the strength of the type A (type B) distortion, and is taken to be nonnegative without loss of generality. The number of unit cells is denoted by $N$, and then the total number of sites is $3N$. We will consider these systems in the large-$N$ limit.
For $\deltaa$ = 0 and $\deltab$ = 0, both eqs. (\[hama\]) and (\[hamb\]) reduce to the undistorted MDC Hamiltonian, $$\begin{aligned}
{\cal H}_0 &=\sum_{l=1}^{N} \left[\v{S}_{l}\v{T}_{l}+\v{T}_{l}\v{S}_{l+1}+ \frac{\lambda}{2}\left(\v{T}^2_{l}-\frac{3}{2}\right)\right]
\label{ham2}\end{aligned}$$ with the composite spin operators $\v{T}_l\equiv\v{\tau}^{(1)}_{l}+\v{\tau}^{(2)}_{l}$. Before going into the analysis of the distorted MDC, we briefly summarize the ground-state properties of the Hamiltonian (\[ham2\]) reported in ref. for convenience.
1. $\v{T}_l^2$ commutes with the Hamiltonian ${\cal H}_0$ for any $l$. Therefore, the composite spin magnitude $T_l$ defined by $\v{T}_l^2 = T_l(T_l+1)$ is a good quantum number that takes the values 0 or 1. Hence, each energy eigenstate has a definite set of $\{T_l\}$, i.e. a sequence of 0’s and 1’s with length $N$. A pair of $\v{\tau}^{(1)}_{l}$ and $\v{\tau}^{(2)}_{l}$ with $T_l=0$ is called a dimer. A cluster including $n$ successive $T_l=1$ pairs bounded by two $T_l=0$ pairs is called a cluster-$n$. The cluster-$n$ is equivalent to an antiferromagnetic spin-1 Heisenberg chain of length $2n+1$ with open boundary condition. Since a cluster-$n$ is decomposed into a sublattice consisting of $n+1$ sites with $\v{S}_l$’s and that consisting of $n$ sites with $\v{T}_l$’s, the ground states of a cluster-$n$ are spin triplet states with total spin unity on the basis of the Lieb-Mattis theorem.[@lm; @kene] This implies that each cluster-$n$ carries a spin-1 in its ground state.
2. There appear 5 distinct ground-state phases called dimer-cluster-$n$ (DC$n$) phases with $n=0,1,2,3$, and $\infty$. The DC$n$ state is an alternating array of dimers and cluster-$n$’s. The phase boundaries $\lambda_{\rm c}(n,n')$ between the DC$n$ and DC$n'$ phases are $$\begin{aligned}
&\lambda_{\rm c}(0,1) = 3, \nonumber\\
&\lambda_{\rm c}(1,2) \simeq 2.660425045542, \nonumber\\
&\lambda_{\rm c}(2,3) \simeq 2.58274585704, \nonumber\\
&\lambda_{\rm c}(3,\infty) \simeq 2.5773403291,
\label{lambdac}\end{aligned}$$ where $\lambda_{\rm c}(0,1)$ is obtained analytically and other values are calculated numerically.
3. In the DC$\infty$ ground state realized for $\lambda < \lambda_{\rm c}(3,\infty) $, $T_l=1$ for all $l$. This state is not accompanied by STSB and is equivalent to the Haldane state of an antiferromagnetic spin-1 Heisenberg chain with infinite length.
4. Each of the DC$n$ states with $0 \leq n \leq 3$ realized for $\lambda > \lambda_{\rm c}(3,\infty)$ is a uniform array of cluster-$n$’s with a common value of $n$ and dimers in between. In the DC$n$ phase with $1\leq n \leq 3$, $(n+1)$-fold STSB takes place. In the DC$0$ phase, no translational symmetry is broken.
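The local conservation law in item 1 can be verified directly on a small cluster. The sketch below builds the Hamiltonian of a single diamond unit ($\v{S}_l$, $\v{\tau}^{(1)}_l$, $\v{\tau}^{(2)}_l$, $\v{S}_{l+1}$, with an arbitrary value of $\lambda$) and checks numerically that $\v{T}_l^2$ commutes with it and takes only the eigenvalues $T_l(T_l+1)=0$ and $2$; this is a consistency check, not the numerical method used below:

```python
import numpy as np

def spin_ops(s):
    """Spin matrices (Sx, Sy, Sz) for spin quantum number s."""
    m = np.arange(s, -s - 1, -1)
    d = len(m)
    sp = np.zeros((d, d), complex)          # raising operator S+
    for i in range(d - 1):
        sp[i, i + 1] = np.sqrt(s * (s + 1) - m[i + 1] * (m[i + 1] + 1))
    sx = (sp + sp.conj().T) / 2
    sy = (sp - sp.conj().T) / 2j
    sz = np.diag(m).astype(complex)
    return sx, sy, sz

def embed(op, site, dims):
    """Lift a single-site operator to the full tensor-product space."""
    out = np.eye(1, dtype=complex)
    for k, d in enumerate(dims):
        out = np.kron(out, op if k == site else np.eye(d))
    return out

dims = (3, 2, 2, 3)                          # S_l, tau1_l, tau2_l, S_{l+1}
S1 = [embed(o, 0, dims) for o in spin_ops(1.0)]
t1 = [embed(o, 1, dims) for o in spin_ops(0.5)]
t2 = [embed(o, 2, dims) for o in spin_ops(0.5)]
S2 = [embed(o, 3, dims) for o in spin_ops(1.0)]

lam = 2.7                                    # arbitrary coupling
H = sum(S1[a] @ (t1[a] + t2[a]) + (t1[a] + t2[a]) @ S2[a]
        + lam * t1[a] @ t2[a] for a in range(3))
T2 = sum((t1[a] + t2[a]) @ (t1[a] + t2[a]) for a in range(3))
```

Since $\v{\tau}^{(1)}_l\v{\tau}^{(2)}_l=(\v{T}_l^2-3/2)/2$ and $\v{S}\cdot\v{T}_l$ commutes with $\v{T}_l^2$, the commutator vanishes identically.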
In what follows, we numerically examine various aspects of the type A and type B distortion effects on the MDC. Because the DC3 phase is only realized within a very narrow interval of $\lambda$, it is difficult to analyze the effect of distortion numerically in this phase. Hence, we do not consider the DC3 phase in the following numerical analysis.
Ground-State Properties of the MDC with Type A Distortion
=========================================================
Weak distortion regime
----------------------
\[t\]
![Effective bilinear interaction ($J_{\rm eff}$) and biquadratic interaction ($K_{\rm eff}$) between spin clusters for small $\delta_A$ in HDC0, HDC1, and HDC2 phases from top to bottom. The ratio $K_{\rm eff}/J_{\rm eff}$ is also shown.[]{data-label="jeff"}](64400Fig4.eps){width="6.5cm"}
We now inspect the nature of the effective interaction between two cluster-$n$’s separated by a dimer consisting of $\v{\tau}_l^{(1)}$ and $\v{\tau}_l^{(2)}$ in the presence of the weak type A distortion. For $\deltaa> 0$, ${\v{S}}_l$ (${\v{S}}_{l+1}$) tends to be antiparallel to $\v{\tau}_l^{(1)}$ ($\v{\tau}_l^{(2)}$) rather than to $\v{\tau}_l^{(2)}$ ($\v{\tau}_l^{(1)}$), as seen from Fig. \[lattice\](a). The spins $\v{\tau}_l^{(1)}$ and $\v{\tau}_l^{(2)}$ are antiparallel to each other because they form a singlet dimer. Therefore, ${\v{S}}_l$ and ${\v{S}}_{l+1}$ tend to be antiparallel to each other. In each cluster-$n$, the number of spins $\v{S}_l$’s is larger than the number of composite spins $\v{T}_l$’s by one. Hence, from the Lieb-Mattis theorem,[@lm] the total spin of the ground state of the cluster-$n$ points in the same direction as the ${\v{S}}_l$’s belonging to that cluster-$n$. Therefore, the total spins of the cluster-$n$’s on both sides of the dimer also tend to be antiparallel to each other. Thus, the effective coupling between the spins of neighboring cluster-$n$’s is antiferromagnetic. This physical argument will be confirmed numerically below.
In general, the interaction between two spins with a magnitude of 1 is the sum of bilinear and biquadratic terms. Therefore, the effective Hamiltonian for cluster-$n$’s in the phase that continues to the DC$n$ phase in the limit of $\deltaa\rightarrow 0$ is written as $$\begin{aligned}
{\cal H}^{\rm eff}&=\sum_{i=1}^{N_{\rm c}} {\cal H}^{\rm eff}(i,i+1), \\
{\cal H}^{\rm eff}(i,i+1)&=J_{\rm eff}\hat{\v{S}}_i\hat{\v{S}}_{i+1}+K_{\rm eff}\left(\hat{\v{S}}_i\hat{\v{S}}_{i+1}\right)^2, \end{aligned}$$ where $\hat{\v{S}}_i$ is the total spin of the $i$-th cluster-$n$ with a magnitude of 1, $N_{\rm c}$ is the total number of cluster-$n$’s, and $J_{\rm eff}$ and $K_{\rm eff}$ are effective coupling constants. From symmetry considerations, the sign of $\deltaa$ does not affect the signs of the effective coupling constants. Hence, these coupling constants are of the order of $\deltaaa$ for small $\deltaa$. We numerically calculated the ground-state energy of a pair of cluster-$n$’s with total spin $S_{\rm tot}$, and compared it with the corresponding eigenvalues of ${\cal H}^{\rm eff}(i,i+1)$. We thus confirmed that $J_{\rm eff}/\deltaaa$ and $K_{\rm eff}/\deltaaa$ are almost independent of $\deltaa$, typically for $\deltaa < 0.002$. The constant values of $J_{\rm eff}/\deltaaa$ and $K_{\rm eff}/\deltaaa$ are shown in Fig. \[jeff\] for the three phases ($n$=0, 1, and 2), which will be explained below.
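The eigenvalues of ${\cal H}^{\rm eff}(i,i+1)$ used in this comparison follow from $x=\hat{\v{S}}_i\hat{\v{S}}_{i+1}=[S_{\rm tot}(S_{\rm tot}+1)-4]/2\in\{-2,-1,+1\}$ for two spin-1 cluster spins, giving $E(S_{\rm tot})=J_{\rm eff}x+K_{\rm eff}x^2$ with degeneracies 1, 3, and 5. A quick numerical check of this spectrum (with illustrative values of the couplings):

```python
import numpy as np

# Spin-1 matrices
sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], complex) / np.sqrt(2)
sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def heff_pair(J, K):
    """J S.S' + K (S.S')^2 for two spin-1 cluster spins."""
    dot = sum(np.kron(a, a) for a in (sx, sy, sz))
    return J * dot + K * (dot @ dot)

def multiplet_energies(J, K):
    """E(S_tot) = J x + K x^2 with x = [S_tot(S_tot+1) - 4]/2,
    i.e. x = -2, -1, +1 for S_tot = 0, 1, 2."""
    return [J * x + K * x * x for x in (-2.0, -1.0, 1.0)]
```

Diagonalizing `heff_pair` reproduces the three multiplet energies with the expected degeneracies; for $0<K/J<1$ the singlet ($x=-2$) lies lowest, consistent with the Haldane-phase condition quoted below.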
Because the effective coupling constants satisfy $0 < \Keff/\Jeff <1$, the ground state is the Haldane state for small $\deltaa$.[@ft] In the Haldane state, each spin-1 degree of freedom is carried by a cluster-$n$ rather than by a single spin. We call the state the Haldane DC$n$ (HDC$n$) state. In the HDC$n$ state with $n \geq 1$, the $(n+1)$-fold translational symmetry is spontaneously broken unlike the conventional Haldane state without STSB. Both the HDC0 state for $\lambda > \lambda_{\rm c}(0,1)$ and the HDC$\infty$ state for $\lambda < \lambda_{\rm c}(3,\infty)$ are the Haldane states without STSB. In particular, the HDC$\infty$ state continues from the Haldane [state (DC$\infty$ state)]{} of the undistorted MDC mentioned in §\[section:ham\].[@tsh]
Connection to the strong distortion regime
------------------------------------------
In the strong distortion regime of $\deltaa \rightarrow 1$ and small $\lambda$, the three spins $\v{\tau}^{(2)}_{l-1}$, $\v{S}_{l}$, and $\v{\tau}^{(1)}_{l}$ form a singlet cluster. Hence, the ground state is a state with spin gap and without STSB. This nature is common to the HDC0 and HDC$\infty$ phases in §3.1. Furthermore, the HDC$\infty$ state is transformed into the HDC0 state only by rearranging two valence bonds within each diamond unit, as shown in Figs. \[valence\](a) and \[valence\](e). Therefore, the strong distortion, HDC0, and HDC$\infty$ regimes are considered to be different parts of a single phase. The continuity of the three regimes will be confirmed by the numerical analysis discussed in §3.3. In what follows, we call this phase the uniform Haldane (UH) phase as a whole.
![ Valence bond structures of the ground states of all phases for the MDC with type A distortion. A small filled circle represents a spin with a magnitude of 1/2. An original spin with a magnitude of 1 is represented by two decomposed 1/2 spins in an open circle indicating the symmetrization. A valence bond is represented by a dashed oval.[]{data-label="valence"}](64400Fig5.eps){width="8cm"}
Numerical phase diagram
-----------------------
![Profiles of $T_l$ for (a) $\lambda=2.8$ and (b) $\lambda=2.62$.[]{data-label="profile"}](64400Fig6.eps){width="7cm"}
![System size dependences of $\Delta T$ at (a) $\lambda = 2.85$ and (b) $\lambda = 2.62$. The data are plotted against $N^{-\beta/\nu}$ where $\beta$ and $\nu$ are the critical exponents of the order parameter and correlation length, respectively, for the 2-dimensional (a) Ising and (b) 3-clock model. []{data-label="ndep"}](64400Fig7.eps){width="7cm"}
![Phase diagram of the MDC with type A distortion. The triangles indicate the position of the phase boundary for $\deltaa=0$.[]{data-label="phase_stag"}](64400Fig8.eps){width="7cm"}
![Finite-size scaling plot of $\Delta T$ around the critical points. (a) Plot around the HDC1-UH phase boundary at $\lambda = 2.85$. The Ising critical exponents $\nu=1$ and $\beta=1/8$ are assumed. The critical point is set at $\delta_{\rm A}^{\rm c}=0.0307$. (b) Plot around the HDC2-UH phase boundary at $\lambda = 2.62$. The 3-state Potts critical exponents $\nu=5/6$ and $\beta=1/9$[@wu] are assumed. The critical point is set at $\delta_{\rm A}^{\rm c}=0.008248$.[]{data-label="z23"}](64400Fig9.eps){width="7cm"}
Under the periodic boundary condition, even in the parameter region where STSB takes place, the ground state of a finite chain is a superposition of the symmetry-broken states, and the translational symmetry is recovered. Under the open boundary condition, however, one of the symmetry broken states is selected by the boundary effect. Therefore, we employ the DMRG calculation with the open boundary condition to determine the phase diagram for finite $\deltaa$. The DMRG calculation is carried out using the finite-size algorithm up to 288 sites keeping 200 states in subsystems. We calculate the ground-state expectation values ${\left\langle {\v{T}_l^2} \right\rangle}$ and define the effective spin magnitude $T_l$ on the $l$-th diagonal bond by $T_l(T_l+1)={\left\langle {\v{T}_l^2} \right\rangle}$. A typical $l$ dependence of $T_l$ is shown in Fig. \[profile\] in each phase. With the increase in $\deltaa$, the translational symmetry is recovered as expected. For finite $\deltaa$, the ground-state phase is identified from the periodicity in the oscillation of $T_l$. In the HDC$n$ phase, the values of $T_l$ follow the sequence $$\begin{aligned}
...T_{\rm S} \, \underbrace{T_{\rm L} \cdots T_{\rm L}}_{n} \, T_{\rm S}, \underbrace{T_{\rm L} \cdots T_{\rm L}}_{n} \,.., \ (T_{\rm L} > T_{\rm S}).\end{aligned}$$ Thus, we define the order parameter of the HDC$n$ phase by $\Delta T=T_{\rm L}-T_{\rm S}$. In DMRG, $\Delta T$ is measured at the sites closest to the center of the chain.
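The conversion from ${\left\langle {\v{T}_l^2} \right\rangle}$ to the effective magnitude $T_l$ and to $\Delta T$ is the inversion of $T_l(T_l+1)={\left\langle {\v{T}_l^2} \right\rangle}$; a small sketch (the sample profile and the choice of central sites are illustrative):

```python
import numpy as np

def effective_spin(t2):
    """Effective magnitude T_l solving T_l(T_l + 1) = <T_l^2>."""
    return 0.5 * (np.sqrt(1.0 + 4.0 * np.asarray(t2, float)) - 1.0)

def order_parameter(t2_profile):
    """Delta_T = |T_L - T_S| from the two sites closest to the centre
    of the (open) chain, one on a 'large' and one on a 'small' bond."""
    T = effective_spin(t2_profile)
    mid = len(T) // 2
    return float(abs(T[mid - 1] - T[mid]))
```

For an ideal alternating profile of exact singlets ($\langle\v{T}_l^2\rangle=0$) and triplets ($\langle\v{T}_l^2\rangle=2$), this gives $\Delta T=1$.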
The valence bond structures for the HDC$n$ phases as well as the UH phase are shown in Fig. \[valence\]. We see the translational invariance of period $n+1$ in the HDC$n$ ground state in contrast to the period-1 invariance in the UH ground state. Hence, the $Z_{n+1}$ STSB takes place at the HDC$n$-UH phase boundary. We expect that this transition belongs to the 2-dimensional $(n+1)$-clock model universality class. The system size dependence of $\Delta T$ for $\lambda=2.85$ is shown in Fig. \[ndep\](a) around the HDC1-UH phase boundary. Here, the data are plotted against $N^{-\beta/\nu}$ with the order parameter critical exponent $\beta=1/8$ and the correlation length critical exponent $\nu=1$ for the two-dimensional Ising universality class. This shows that the critical value of $\deltaa$ lies between 0.0304 and 0.0308. A similar plot is shown in Fig. \[ndep\](b) for $\lambda=2.62$ around the HDC2-UH phase boundary, assuming the critical exponents of two-dimensional 3-clock model (equivalently 3-state Potts model[@wu]) with $\beta=1/9$ and $\nu=5/6$. This shows that the critical value of $\deltaa$ lies between 0.00822 and 0.00826. The critical points at other values of $\lambda$ are determined similarly. The results are shown in the phase diagram of Fig. \[phase\_stag\]. The error bars are within the size of the symbols.
To confirm the universality class, we carry out a finite-size scaling analysis of the order parameter $\Delta T$. According to the scaling hypothesis, the $\deltaa$ dependence of $\Delta T$ for finite-size systems near the critical point should obey the finite-size scaling law[@barbar] $$\begin{aligned}
\Delta T N^{\beta/\nu}
=f(N(\deltaa-\delta_{\rm A}^{\rm c})^{\nu}),\end{aligned}$$ in terms of the scaled variables $\Delta T N^{\beta/\nu}$ and $N(\deltaa-\delta_{\rm A}^{\rm c})^{\nu}$ and the scaling function $f(x)$. In Figs. \[z23\](a) and \[z23\](b), $\Delta T N^{\beta/\nu}$ is plotted against $N(\deltaa-\delta_{\rm A}^{\rm c})^{\nu}$ around the HDC1-UH and HDC2-UH phase boundaries assuming the Ising and 3-clock universality classes, respectively. The critical points $\delta_{\rm A}^{\rm c}$ = 0.0307 (Fig. \[z23\](a)) and 0.008248 (Fig. \[z23\](b)) are chosen so that all data fall on a single universal scaling curve as well as possible. These plots are consistent with the expected universality class.
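A minimal numerical sketch of such a data collapse, using a hypothetical scaling function and synthetic data (not our DMRG results) with the Ising exponents quoted above:

```python
import numpy as np

beta, nu = 1.0 / 8.0, 1.0   # two-dimensional Ising exponents (HDC1-UH boundary)
delta_c = 0.0307            # critical point quoted in the text

def f(x):
    """Hypothetical scaling function, used only to generate synthetic data."""
    return np.tanh(x)

sizes = np.array([72.0, 144.0, 288.0])
x = 0.5                                  # fixed scaled variable N*(d - d_c)^nu
deltas = delta_c + (x / sizes) ** (1.0 / nu)
dT = sizes ** (-beta / nu) * f(sizes * (deltas - delta_c) ** nu)

# After rescaling, all system sizes fall on the single curve f(x).
collapsed = dT * sizes ** (beta / nu)
print(np.allclose(collapsed, f(x)))  # True
```

In practice, $\delta_{\rm A}^{\rm c}$ is tuned until the measured $\Delta T N^{\beta/\nu}$ versus $N(\deltaa-\delta_{\rm A}^{\rm c})^{\nu}$ data collapse onto one curve in this way.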
![ Valence bond structures of the ground states of spin-1 bilinear-biquadratic chain in the (a) Haldane phase and (b) dimer phase. The spins with a magnitude of unity represented by open circles are decomposed into two spin-1/2 degrees of freedom represented by small filled circles. The valence bonds are represented by dashed ovals. The spins belonging to disconnected clusters in the dimer phase are connected by the valence bonds in the Haldane phase. []{data-label="bilbiq"}](64400Fig10.eps){width="7cm"}
The critical behavior at the HDC$1$-UH transition in our model should be compared with that of the $S=1$ bilinear-biquadratic chain at the Takhtajan-Babujian point[@takh; @babu]. Both transitions are accompanied by a $Z_2$-STSB which contributes 1/2 to the conformal charge. For the HDC$1$-UH transition in our model, the rearrangement of valence bonds takes place only within each diamond unit, as shown in Figs. \[valence\](a) and \[valence\](b). In contrast, in the $S=1$ bilinear-biquadratic chain, the spins belonging to disconnected clusters in the dimer phase are connected by the valence bonds in the Haldane phase, as shown in Fig. \[bilbiq\]. Apart from the $Z_2$-STSB, this is similar to the Gaussian criticality of the Haldane-dimer transition in the spin-1 alternating-bond Heisenberg chain, which contributes 1 to the conformal charge.[@ia; @yt; @kn] Therefore, the $S=1$ bilinear-biquadratic chain at the Takhtajan-Babujian point is described by the conformal field theory with $c=1/2+1=3/2$, while the HDC$1$-UH transition in our model is described by the $c=1/2$ Ising conformal field theory.
Ground-State Properties of the MDC with Type B Distortion
=========================================================
In the case of type B distortion, the effective interaction between the spins of two cluster-$n$’s separated by the dimer consisting of $\v{\tau}_l^{(1)}$ and $\v{\tau}_l^{(2)}$ is ferromagnetic, because both $\v{S}_l$ and $\v{S}_{l+1}$ tend to be antiparallel to $\v{\tau}_l^{(1)}$. Therefore, we expect a ferrimagnetic ground state with spontaneous magnetization quantized as $m=1/(n+1)$ per unit cell for small $\deltab$ in the range $\lambda_{\rm c}(n,n+1) < \lambda < \lambda_{\rm c}(n-1,n)$. We call this phase a ferrimagnetic DC$n$ phase (FDC$n$ phase). In contrast, the ground state for $\lambda <\lambda_{\rm c}(3,\infty)$ will remain in the Haldane phase, since a nonmagnetic gapped phase is generally robust against a weak distortion. For finite $\deltab$, we determined the ground-state phase diagram by numerical diagonalization for the system size $3N=18$, as shown in Fig. \[phase\_ferri\]. Among the system sizes tractable by numerical diagonalization, only $3N=18$ is compatible with all the ground-state structures with $n=0, 1$, and 2. As expected, the FDC$n$ phases with $m=1/(n+1)$ are found for these values of $n$.
![Phase diagram of the MDC with type B distortion with $3N=18$. The triangles indicate the position of the phase boundaries $\lambda_{\rm c}(n,n+1)$ for $\deltab=0$.[@tsh][]{data-label="phase_ferri"}](64400Fig11.eps){width="7cm"}
![Spontaneous magnetization for (a) $\deltab=0.2$ and (b) $\deltab=0.6$. The triangles on the vertical axes indicate the values of the spontaneous magnetization $m=1/(n+1)$ in the FDC$n$ phases.[]{data-label="mag"}](64400Fig12.eps){width="7cm"}
By inspecting the numerical data for the $3N=18$ system, we also find other narrow ferrimagnetic phases between the FDC$n$ and FDC$(n+1)$ phases with $n=$ 0, 1, and 2, although they are too narrow to be shown in Fig. \[phase\_ferri\]. In order to investigate these phases in detail, we employ the DMRG calculation for $3N=72$, keeping 120 states in each subsystem. Typical examples of the $\lambda$ dependence of the spontaneous magnetization are shown in Fig. \[mag\](a) for $\deltab=0.2$ and Fig. \[mag\](b) for $\deltab=0.6$. Between the FDC$n$ and FDC$(n+1)$ phases with $n=0,1,2$, we find a partial ferrimagnetic phase in which the spontaneous magnetization varies continuously with $\lambda$. Ferrimagnetic phases of this kind have been found in various frustrated one-dimensional quantum spin systems.[@ss; @bkk; @ir; @ym; @kh; @htsdec; @filho] In contrast, between the nonmagnetic phase and the FDC$3$ phase, we find no partial ferrimagnetic phase for small $\deltab$.
This can be understood as follows: At $\lambda=\lambda_{\rm c}(n,n+1)$, cluster-$n$’s and cluster-$(n+1)$’s can coexist. As stated above, it is physically evident that the effective magnetic interaction between the clusters is ferromagnetic. Therefore, we can restrict the states of each cluster to the maximally polarized ground state with $\hat{S}^z_i=1$. Hence, the ground state of the whole chain is described by specifying the arrangement of cluster-$n$’s and cluster-$(n+1)$’s. We map the two possible values of the length of the $i$-th cluster, $n_i=n$ and $n_i=n+1$, to the two possible values of a spin-1/2 pseudospin, $\sigma^z_i=1/2$ and $\sigma^z_i=-1/2$, respectively. Then, the total magnetization $M$ is equal to the number of clusters $N_{\rm c}$. The total number of unit cells, $N$, is related to the pseudospins $\sigma^z_i$ as $$\begin{aligned}
N=\sum_{i=1}^{N_{\rm c}} \left(n+1+\frac{1}{2}-\sigma^z_i\right)
=N_{\rm c}\left(n+\frac{3}{2}\right)-\sum_{i=1}^{N_{\rm c}} \sigma^z_i.
\label{eq:length}\end{aligned}$$ Therefore, the ground-state magnetization per unit cell $m$ is given by $$\begin{aligned}
m=\frac{N_{\rm c}}{{\left\langle {N} \right\rangle}}
=\frac{1}{n+\frac{3}{2}-\sigma} \end{aligned}$$ with $\sigma\equiv \sum_{i=1}^{N_{\rm c}} {\left\langle {\sigma^z_i} \right\rangle} /N_{\rm c}$. The bracket ${\left\langle {\cdots} \right\rangle}$ represents the ground-state expectation value. In the presence of $\deltab$, the length of neighboring clusters can exchange through a second order process in $\deltab$. This corresponds to the spin exchange in terms of pseudospins. In this case, the interaction between the pseudospins is approximated by the spin-$1/2$ XXZ Hamiltonian $$\begin{aligned}
\H_{\rm XXZ}&=
\sum_{i=1}^{N_{\rm c}}\H_{\rm XXZ}(i,i+1) , \\
\H_{\rm XXZ}(i,i+1)&=J_{z}^{\rm eff}\sigma^z_i\sigma^z_{i+1}+J_{\perp}^{\rm eff}(\sigma^x_i\sigma^x_{i+1}+\sigma^y_i\sigma^y_{i+1})\label{hameff0}\end{aligned}$$ up to the second order in $\deltab$. Here, further-neighbor interactions are neglected. We estimate the effective exchange constants by comparing the energy spectrum of the pair Hamiltonian $\H_{\rm XXZ}(i,i+1)$ with that of the corresponding pair of clusters as follows:
1. $\lambda=\lambda_{\rm c}(0, 1)$ $$\begin{aligned}
J_z^{\rm eff}&\simeq -0.039\deltab^2, \nonumber\\
J_{\perp}^{\rm eff}&\simeq 0.087\deltab^2.\label{j01}\end{aligned}$$
2. $\lambda=\lambda_{\rm c}(1, 2)$ $$\begin{aligned}
J_z^{\rm eff}&\simeq -0.0082\deltab^2, \nonumber\\
J_{\perp}^{\rm eff}&\simeq 0.069\deltab^2.\label{j12}\end{aligned}$$
3. $\lambda=\lambda_{\rm c}(2, 3)$ $$\begin{aligned}
J_z^{\rm eff}&\simeq -0.0029\deltab^2, \nonumber\\
J_{\perp}^{\rm eff}&\simeq 0.018\deltab^2.\label{j23}\end{aligned}$$
The details of the calculation are explained in the Appendix.
In all cases (i)–(iii), we find that the effective coupling constants satisfy the inequality $-|J_{\perp}^{\rm eff}| < J_{z}^{\rm eff} \leq |J_{\perp}^{\rm eff}|$. As is well known, the ground state of the spin-1/2 XXZ chain in this parameter regime is nonmagnetic and gapless in the absence of a magnetic field. Roughly speaking, $\Delta\lambda\equiv\lambda-\lambda_{\rm c}(n,n+1)$ corresponds to the effective magnetic field $h_{\rm eff}$ conjugate to the total pseudospin $\sum_i \sigma^z_i$, because the increase in $\lambda$ favors cluster-$n$ over cluster-$(n+1)$; however, this correspondence should not be taken literally. A more precise argument is also given in the Appendix. When $\Delta\lambda$ takes a large negative value, the pseudospins are fully polarized downward to give ${\left\langle {\sigma^z_i} \right\rangle}=-1/2$. This state corresponds to the FDC$(n+1)$ state with $m=1/(n+2)$. When $h_{\rm eff}$ reaches the critical value $h_{\rm c1}$ $\equiv -(|J_{\perp}^{\rm eff}|+J_{z}^{\rm eff})$, the magnetization starts to increase continuously until all pseudospins are fully polarized upward at the critical effective field $h_{\rm c2}\equiv |J_{\perp}^{\rm eff}|+J_{z}^{\rm eff}$. This corresponds to the FDC$n$ state with $m=1/(n+1)$.
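A small numerical check of these statements, using the effective couplings (\[j01\])-(\[j23\]) (in units of $\deltab^2$); the script only restates the gapless condition and the critical fields:

```python
# Effective couplings (Jz, Jperp) in units of delta_B^2, from eqs. (j01)-(j23)
couplings = {
    "lambda_c(0,1)": (-0.039, 0.087),
    "lambda_c(1,2)": (-0.0082, 0.069),
    "lambda_c(2,3)": (-0.0029, 0.018),
}

for point, (jz, jperp) in couplings.items():
    # The spin-1/2 XXZ chain is nonmagnetic and gapless for -|Jp| < Jz <= |Jp|.
    assert -abs(jperp) < jz <= abs(jperp)
    hc1 = -(abs(jperp) + jz)   # field at which the magnetization starts to rise
    hc2 = abs(jperp) + jz      # field at which the pseudospins saturate upward
    print(f"{point}: h_c1 = {hc1:+.4f}, h_c2 = {hc2:+.4f} (times delta_B^2)")
```

The window $h_{\rm c1} < h_{\rm eff} < h_{\rm c2}$ is where the pseudospin magnetization, and hence $m$, varies continuously.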
On the other hand, the magnetization jumps from 0 to $1/4$ at the phase boundary between the Haldane phase and the FDC3 phase for small $\deltab$. At this phase boundary, no finite-size clusters coexist with cluster-$3$. Therefore, no pseudospin degrees of freedom can be defined. Consequently, no partial ferrimagnetic phase can be realized. In contrast, for larger values of $\deltab$, we numerically find a partial ferrimagnetic phase between the FDC3 and UH phases. This can be ascribed to the contribution of other finite-length clusters with low-lying energies which come into play through higher-order processes in $\deltab$.
Summary and Discussion
======================
We introduced two types of distortion, type A and type B, into the MDC with spins 1 and 1/2, and investigated the ground-state phases. The phase diagrams are characteristic of the type A and type B distortions, respectively. For the type A distortion, the effective interaction between the cluster spins is antiferromagnetic with bilinear and biquadratic terms. The numerically estimated values of the effective couplings show that the DC$n$ ground states are transformed into the HDC$n$ ground states. The order parameters characterizing the HDC$n$ phases are defined and the UH-HDC$n$ phase boundaries are determined using the DMRG data. From the valence bond structure of each phase, we expect that the UH-HDC$n$ phase transition belongs to the universality class of the two-dimensional $(n+1)$-clock model. The finite-size scaling plot of the order parameter is consistent with this identification. For the type B distortion, the effective interaction between the cluster spins is ferromagnetic. In addition to the FDC$n$ phases with quantized spontaneous magnetization $m=1/(n+1)$, the partial ferrimagnetic phases are also found numerically between the FDC$n$ and FDC$(n+1)$ phases. A physical interpretation of the partial ferrimagnetic phase is given for small $\deltab$ by mapping onto an effective pseudospin-1/2 XXZ chain.
Generally, the introduction of lattice distortion into a physical model increases the possibility that a corresponding material is realized. In the MDC, there are three types of distortion modes affecting the exchange interactions. Among them, the two types investigated in the present paper are of generic nature, because the local conservation laws that hold in the undistorted MDC are broken. This suggests that the observation of the exotic phenomena predicted in the present paper is possible even if the corresponding material is not exactly described by the model Hamiltonians (\[hama\]) and (\[hamb\]).
If a distorted MDC material is synthesized, the distortion may be controlled by, e.g., applying pressure. If the distortion is of type A, the Curie constant vanishes as the DC$n$ ground state turns into one of the HDC$n$ ground states. The magnetic susceptibility and magnetic specific heat will have an activation-type temperature dependence with activation energy proportional to the effective coupling between the cluster spins, which is of the order of $\deltaa^2$. These HDC$n$ phases are not realized if the distortion $\deltaa$ exceeds $\sim 0.03$ even in the most robust case of $n=1$. In a real material, the STSB in the valence bond structure manifests itself as a magnetic superstructure. It is also possible that it is accompanied by a lattice superstructure of corresponding periodicity if the spin-lattice coupling is present. Therefore careful measurements of magnetic and lattice superstructures would help with the observation of HDC$n$ phases with $1 \leq n \leq 3$.
On the other hand, if the distortion of the material is of type B, the ground state is ferrimagnetic. At low but finite temperatures, however, the spontaneous magnetization vanishes owing to the one-dimensionality. As a precursor of ordering at $T=0$, the low-temperature magnetic susceptibility should diverge as $T^{-2}$ with a coefficient proportional to the effective coupling $\sim \deltab^2$ between the cluster spins.[@yamamoto; @yf; @takahashi] This means that even a weak magnetic field of the order of $H \sim T^2/\deltab^2$ drives the finite-temperature magnetization up to the value of the ground-state spontaneous magnetization. This enables the experimental estimation of the spontaneous magnetization in real materials. The quantized ferrimagnetic behavior should be observed for wide ranges of the parameters $\lambda$ and $\deltab$ as shown in Fig. \[phase\_ferri\], and should be easily observed if an appropriate material is synthesized. The partial ferrimagnetic phases are limited to narrow intervals of the parameters $\deltab$ and $\lambda$. Therefore, these can only be observed as a temperature-independent crossover between two quantized ferrimagnetic behaviors with careful exclusion of the thermal effect.
We have demonstrated that various exotic ground states and phase transitions between them are realized in the distorted MDC with spins 1 and 1/2, which has a strong frustration. The physical pictures of these phenomena have become clear. This is made possible because the ground state of the [*undistorted*]{} MDC is known exactly. Therefore, we expect that our model may provide a means of understanding, on firm ground, the similar exotic phenomena realized owing to the interplay of spin ordering, quantum fluctuation, and strong frustration in more general frustrated quantum chains. For example, partial ferrimagnetic phases are found in various one-dimensional frustrated quantum spin models[@ss; @bkk; @ir; @ym; @kh; @htsdec; @filho]. However, some of them have only been confirmed numerically, and no physical explanation has been given so far. We hope that the present study paves the way to a general understanding of these partial ferrimagnetic states.
We thank J. Richter for drawing our attention to ref. and related works. The numerical diagonalization program is based on the package TITPACK ver.2 coded by H. Nishimori. The numerical computation in this work has been carried out using the facilities of the Supercomputer Center, Institute for Solid State Physics, University of Tokyo and Supercomputing Division, Information Technology Center, University of Tokyo. KH is supported by a Grant-in-Aid for Scientific Research on Priority Areas, “Novel States of Matter Induced by Frustration” (20048003) from the Ministry of Education, Culture, Sports, Science and Technology of Japan and a Grant-in-Aid for Scientific Research (C) (21540379) from the Japan Society for the Promotion of Science. KT and HS are supported by a Fund for Project Research from Toyota Technological Institute.
Appendix
========

The Hamiltonian ${\cal H}_{\rm B}$ with the type B distortion is rewritten as $$\begin{aligned}
{\cal H}_{\rm B} = {\cal H}_0 +\delta {\cal H} , \end{aligned}$$ where $$\begin{aligned}
\delta{\cal H} = \deltab\sum_{l=1}^{N}
\bigl( \v{S}_{l} + \v{S}_{l+1} \bigr)
\bigl( \v{\tau}^{(1)}_{l} - \v{\tau}^{(2)}_{l} \bigr) .
\label{hamp}\end{aligned}$$ For small $\deltab$, the ground state around $\lambda=\lambda_{\rm c}(n,n+1)$ consists almost entirely of cluster-$n$’s and cluster-$(n+1)$’s. Hence, as a good approximation, we consider ${\cal H}_{\rm B}$ only in the restricted Hilbert space where each state involves no clusters except for cluster-$n$’s and cluster-$(n+1)$’s. Under the fixed cluster number $\Nc$ in this Hilbert space, ${\cal H}_0$ is equivalent to the following effective Hamiltonian expressed in terms of pseudospin operators: $$\begin{aligned}
{\cal H}^0_{\rm eff}&=E_{\rm G}^0(n+1 ; \lambda)\sum_{i=1}^{\Nc} \left(\frac{1}{2}-\sigma^z_i\right)\nonumber\\
& +E_{\rm G}^0(n ; \lambda)\sum_{i=1}^{\Nc} \left(\frac{1}{2}+\sigma^z_i\right) , \end{aligned}$$ where $\sigma^z_i=1/2$ and $\sigma^z_i=-1/2$ correspond to $n_i=n$ and $n_i=n+1$, respectively. $E_{\rm G}^0(n ; \lambda)$ is the ground-state energy of a cluster-$n$ and a dimer in the absence of distortion, and is given by $$\begin{aligned}
E_{\rm G}^0(n ; \lambda)&=\EHal(2n+1)+\frac{\lambda n}{4}-\frac{3\lambda }{4} , \end{aligned}$$ where $\EHal(2n+1)$ is the ground-state energy of the spin-1 antiferromagnetic Heisenberg chain with length $2n+1$.[@tsh]
The application of $\delta{\cal H}$ to the unperturbed ground state transforms one of the $T_l=0$ bonds to a $T_l=1$ bond or vice versa. Then the resulting states contain clusters with lengths less than $n$ or greater than $2n$. Since these states are outside the restricted Hilbert space, no correction to the ground-state energy is present within the first order in $\deltab$. Hence, the lowest-order correction is of the order of $\deltab^2$. Up to the second order in $\deltab$, the effective pseudospin Hamiltonian is given by $$\begin{aligned}
{\cal H}_{\rm eff}&=E_{\rm G}(n+1;\lambda,\deltab)\sum_{i=1}^{\Nc} \left(\frac{1}{2}-\sigma^z_i\right)\nonumber\\
& +E_{\rm G}(n ; \lambda,\deltab)\sum_{i=1}^{\Nc} \left(\frac{1}{2}+\sigma^z_i\right) +{\cal H}_{\rm XXZ}, \end{aligned}$$ where $E_{\rm G}(n ; \lambda,\deltab)$ is the ground-state energy of a cluster-$n$ and a dimer including the second order correction in $\deltab$. This is simply expressed as $$\begin{aligned}
{\cal H}_{\rm eff}
={\Nc}\bar{E}_{\rm G}+\Delta E_{\rm G}\sum_{i=1}^{\Nc} \sigma^z_i+{\cal H}_{\rm XXZ}, \end{aligned}$$ with $$\begin{aligned}
\bar{E}_{\rm G}&=\frac{1}{2}(E_G(n+1;\lambda,\deltab)+E_G(n ; \lambda,\deltab)) , \\
\Delta E_{\rm G}&=E_G(n ; \lambda,\deltab)-E_G(n+1;\lambda,\deltab).\end{aligned}$$ The effective coupling constants $J_z^{\rm eff}$ and $J_{\perp}^{\rm eff}$ in ${\cal H}_{\rm XXZ}$ are also of the second order in $\deltab$. We determine $J_z^{\rm eff}$ and $J_{\perp}^{\rm eff}$ so as to reproduce the low-lying energy spectrum of a pair of cluster-$n$’s by that of two-pseudospin Hamiltonian $$\begin{aligned}
{\cal H}_{\rm eff}(i,i+1)
&=2\bar{E}_{\rm G}+\Delta E_{\rm G} (\sigma^z_i+\sigma^z_{i+1})+{\cal H}_{\rm XXZ}(i,i+1) . \end{aligned}$$ In each of the subspaces $\sigma^z_i+\sigma^z_{i+1}=\pm 1$, we have $\sigma^z_i=\sigma^z_{i+1}=\sigma=\pm 1/2$. Therefore, the Hilbert space is one-dimensional and the eigenvalue of ${\cal H}_{\rm eff}(i,i+1)$ is simply $E_{\sigma\sigma}=2\bar{E}_{\rm G}+2\Delta E_{\rm G} \sigma + J_z^{\rm eff}/4$ with $\sigma=\pm 1/2$. In the subspace $\sigma^z_i+\sigma^z_{i+1}=0$, the Hilbert space is two-dimensional and the eigenvalues of ${\cal H}_{\rm eff}(i,i+1)$ are $E_{\pm}=2\bar{E}_{\rm G} - J_z^{\rm eff}/4 \pm J_{\perp}^{\rm eff}/2$.
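These closed-form eigenvalues can be cross-checked by direct diagonalization of the two-pseudospin Hamiltonian; the parameter values below are illustrative only, not fitted values:

```python
import numpy as np

# Spin-1/2 operators
sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
sy = np.array([[0, -0.5j], [0.5j, 0]])
sz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)
I2 = np.eye(2)

# Illustrative parameters (Ebar, dE, Jz, Jperp), not taken from the paper
Ebar, dE, Jz, Jp = -1.3, 0.2, -0.039, 0.087

H = (2 * Ebar * np.eye(4)
     + dE * (np.kron(sz, I2) + np.kron(I2, sz))
     + Jz * np.kron(sz, sz)
     + Jp * (np.kron(sx, sx) + np.kron(sy, sy)))

ev = np.sort(np.linalg.eigvalsh(H))
expected = np.sort([2*Ebar + dE + Jz/4,     # E_{1/2,1/2}
                    2*Ebar - dE + Jz/4,     # E_{-1/2,-1/2}
                    2*Ebar - Jz/4 - Jp/2,   # E_-
                    2*Ebar - Jz/4 + Jp/2])  # E_+
print(np.allclose(ev, expected))  # True
```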
The original Hamiltonian of the cluster consisting of a cluster-$n$ and a cluster-$n'$ is the distorted diamond chain with length $n+n'$. $$\begin{aligned}
{\cal H}&(n+n')=\sum_{l=1}^{n+n'+1}
\Bigl[ (1+\deltab)\v{S}_{l}\v{\tau}^{(1)}_{l}
+(1+\deltab)\v{\tau}^{(1)}_{l}\v{S}_{l+1}
\nonumber\\
&+(1-\deltab)\v{S}_{l}\v{\tau}^{(2)}_{l}
+(1-\deltab)\v{\tau}^{(2)}_{l}\v{S}_{l+1}
+ \lambda\v{\tau}^{(1)}_{l}\v{\tau}^{(2)}_{l} \Bigr] .
\label{hamnn}\end{aligned}$$ We denote the $\alpha$-th eigenvalue of ${\cal H}(n+n')$ as $E(n+n';\alpha)$. Comparing the corresponding expressions for the eigenvalues, we find $$\begin{aligned}
E(2n;0)&=E_{\frac{1}{2},\frac{1}{2}}=2\bar{E}_{\rm G}+\Delta E_{\rm G} +\frac{J_z^{\rm eff}}{4}\\
E(2n+2;0)&=E_{-\frac{1}{2},-\frac{1}{2}}=2\bar{E}_{\rm G}-\Delta E_{\rm G} +\frac{J_z^{\rm eff}}{4}\\
E(2n+1;0)&=E_{-}=2\bar{E}_{\rm G} -\frac{J_z^{\rm eff}}{4} - \frac{J_{\perp}^{\rm eff}}{2} \\
E(2n+1;1)&=E_{+}=2\bar{E}_{\rm G} -\frac{J_z^{\rm eff}}{4} + \frac{J_{\perp}^{\rm eff}}{2} .\end{aligned}$$ Solving this set of equations with respect to $J_z^{\rm eff}$ and $J_{\perp}^{\rm eff}$, we find $$\begin{aligned}
J_{\perp}^{\rm eff} &=E(2n+1;1)-E(2n+1;0)\label{jeff1} ,
\\
J_z^{\rm eff} &=2[E(2n+2;0)+E(2n;0)
\nonumber\\
& \qquad -E(2n+1;1)-E(2n+1;0)] .
\label{jeff2}\end{aligned}$$ Note that the right-hand sides of (\[jeff1\]) and (\[jeff2\]) vanish for $\deltab=0$. We numerically evaluated $E(2n;0)$, $E(2n+1;0)$, $E(2n+1;1)$, and $E(2n+2;0)$ at $\lambda=\lambda_{\rm c}(n,n+1)$ ($n = 0, 1, 2$) for small $\deltab$. Using these values in eqs. (\[jeff1\]) and (\[jeff2\]), we determined $J_{\perp}^{\rm eff}$ and $J_z^{\rm eff}$ as eqs. (\[j01\])-(\[j23\]).
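Equivalently, the four eigenvalue relations form a linear system in $(\bar{E}_{\rm G}, \Delta E_{\rm G}, J_z^{\rm eff}, J_{\perp}^{\rm eff})$ that can be inverted numerically; a sketch with synthetic pair energies (illustrative numbers, not the diagonalization data):

```python
import numpy as np

# Rows: E(2n;0), E(2n+2;0), E(2n+1;0), E(2n+1;1), each expressed in the
# unknowns (Ebar, dE, Jz, Jperp) according to the eigenvalue relations.
A = np.array([[2.0,  1.0,  0.25,  0.0],
              [2.0, -1.0,  0.25,  0.0],
              [2.0,  0.0, -0.25, -0.5],
              [2.0,  0.0, -0.25,  0.5]])

# Illustrative parameters (not taken from the paper)
params = np.array([-1.3, 0.2, -0.039, 0.087])
energies = A @ params                  # synthetic pair spectrum
recovered = np.linalg.solve(A, energies)
print(np.allclose(recovered, params))  # True
```

In the actual calculation the four energies come from exact diagonalization of ${\cal H}(n+n')$, and the couplings follow in the same way.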
For the whole MDC, the ground-state energy is written as $$\begin{aligned}
E_0&=\Nc \bar{E}_{\rm G}+\Nc\Delta E_G\sigma+\Nc\epsilon_{\rm XXZ}(\sigma) \end{aligned}$$ where $\epsilon_{\rm XXZ}(\sigma)$ is the ground-state energy per site of a magnetized spin-1/2 XXZ chain with ${\left\langle {\sigma_i^z} \right\rangle}=\sigma$. The number of unit cells, $N$, of the original MDC is given by the expectation value of eq. (\[eq:length\]) as $
N=N_{\rm c}(n+\frac{3}{2}-\sigma)
$. Therefore, we have $$\begin{aligned}
E_0&=\frac{N}{n+\frac{3}{2}-\sigma}\left(\bar{E}_{\rm G}+\Delta E_G\sigma+\epsilon_{\rm XXZ}(\sigma) \right).\end{aligned}$$ Minimizing this with respect to $\sigma$ with fixed $N$, we find $$\begin{aligned}
\Delta \lambda=\left(n+\frac{3}{2}-\sigma\right)\frac{\partial \epsilon_{\rm XXZ}(\sigma)}{\partial \sigma} +\epsilon_{\rm XXZ}(\sigma),\end{aligned}$$ where $\Delta\lambda=\lambda-\lambda_{\rm c}(n,n+1; \deltab)$ and $\lambda_{\rm c}(n,n+1; \deltab)$ is defined by $$\begin{aligned}
(n+2)E_G(n;\lambda_{\rm c},\deltab)-(n+1)E_G(n+1;\lambda_{\rm c},\deltab)=0 . \end{aligned}$$ To simplify the calculation, we replace $\epsilon_{\rm XXZ}(\sigma)$ by the ground-state energy of the spin-1/2 XY chain $\epsilon_{\rm XY}=- (J_{\perp}^{\rm eff}/\pi) \cos\pi\sigma$, because $|J^{\rm eff}_{\perp}|$ is substantially larger than $|J^{\rm eff}_{z}|$ in all cases. Then we find $$\begin{aligned}
\frac{\Delta \lambda}{J_{\perp}^{\rm eff}}=\left(n+\frac{3}{2}-\sigma\right)\sin\pi\sigma-\frac{1}{\pi}\cos\pi\sigma.\end{aligned}$$ This relation is plotted in Fig. \[effmag\] for $n=0, 1$ and 2. It is clear that $\sigma$ continuously increases from $-1/2$ to $1/2$ with an increase in $\lambda$.
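A small numerical check of this relation: the crossover window spans $\Delta\lambda/J_{\perp}^{\rm eff}$ from $-(n+2)$ to $n+1$, $\sigma$ increases monotonically inside it, and the endpoints reproduce the quantized plateaus $m=1/(n+2)$ and $m=1/(n+1)$:

```python
import math
from fractions import Fraction

def dlam_over_jperp(n, sigma):
    """Right-hand side of the Delta_lambda(sigma) relation (XY approximation)."""
    return ((n + 1.5 - sigma) * math.sin(math.pi * sigma)
            - math.cos(math.pi * sigma) / math.pi)

for n in (0, 1, 2):
    # Endpoints of the crossover window
    assert abs(dlam_over_jperp(n, -0.5) + (n + 2)) < 1e-12
    assert abs(dlam_over_jperp(n, 0.5) - (n + 1)) < 1e-12
    # sigma (hence m = 1/(n+3/2-sigma)) increases monotonically with lambda
    vals = [dlam_over_jperp(n, -0.5 + k / 100) for k in range(101)]
    assert all(a <= b for a, b in zip(vals, vals[1:]))
    # Endpoint magnetizations recover the quantized plateaus
    assert 1 / (Fraction(n) + 1) == Fraction(1, n + 1)
    assert 1 / (Fraction(n) + 2) == Fraction(1, n + 2)
print("continuous crossover between quantized plateaus")
```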
![Relationship between $\sigma$ and $\Delta \lambda$ for $n=0, 1$, and 2.[]{data-label="effmag"}](64400FigA1.eps){width="7cm"}
[50]{} , ,ed. H. T. Diep: (World Scientific, Singapore, 2005), Chaps. 5 and 6. J. Phys.: Conf. Series [**145**]{} (2009). C. K. Majumdar and D. K. Ghosh: J. Math. Phys. [**10**]{} (1969) 1399. For examples of experimental materials, see M. Hase, H. Kuroe, K. Ozawa, O. Suzuki, H. Kitazawa, G. Kido, and T. Sekine: Phys. Rev. B [**70**]{} (2004) 104426. B. S. Shastry and B. Sutherland: Physica B+C [**108**]{} (1981) 1069. H. Kageyama, K. Yoshimura, R. Stern, N.V. Mushnikov, K. Onizuka, M. Kato, K. Kosuge, C.P. Slichter, T. Goto, and Y. Ueda: Phys. Rev. Lett. [**82**]{} (1999) 3168. H. Kageyama, M. Nishi, N. Aso, K. Onizuka, T. Yosihama, K. Nukui, K. Kodama, K. Kakurai, and Y. Ueda: Phys. Rev. Lett. [**84**]{} (2000) 5876. K. Takano: J. Phys. A: Math. Gen. [**27**]{} (1994) L269. K. Takano, K. Kubo, and H. Sakamoto: J. Phys.: Condens. Matter [**8**]{} (1996) 6405. H. Niggemann, G. Uimin, and J. Zittartz: J. Phys.: Condens. Matter [**9**]{} (1997) 9031. H. Niggemann, G. Uimin, and J. Zittartz: J. Phys.: Condens. Matter [**10**]{} (1998) 5217. K. Takano, H. Suzuki, and K. Hida: Phys. Rev. B [**80**]{} (2009) 104410. K. Hida, K. Takano, and H. Suzuki: J. Phys. Soc. Jpn. [**78**]{} (2009) 084716 K. Hida, K. Takano, and H. Suzuki: J. Phys. Soc. Jpn. [**79**]{} (2010) 044702. K. Okamoto, T. Tonegawa, Y. Takahashi, and M. Kaburagi: J. Phys.: Condens. Matter [**11**]{} (1999) 10485. K. Okamoto, T. Tonegawa, and M. Kaburagi: J. Phys.: Condens. Matter [**15**]{} (2003) 5979. K. Sano and K. Takano: J. Phys. Soc. Jpn. [**69**]{} (2000) 2710. H. Kikuchi, Y. Fujii, M. Chiba, S. Mitsudo, T. Idehara, T. Tonegawa, K. Okamoto, T. Sakai, T. Kuwai, and H. Ohta: Phys. Rev. Lett. [**94**]{} (2005) 227201. H. Ohta, S. Okubo, T. Kamikawa, T. Kunimoto, Y. Inagaki, H. Kikuchi, T. Saito, M. Azuma, and M. Takano: J. Phys. Soc. Jpn. [**72**]{} (2003) 2464. A. Izuoka, M. Fukada, R. Kumai, M. Itakura, S. Hikami, and T. Sugawara: J. Am. Chem. Soc. [**116**]{} (1994) 2609. D. Uematsu and M. Sato: J. 
Phys. Soc. Jpn. [**76**]{} (2007) 084712. N. B. Ivanov, J. Richter, and J. Schulenburg: Phys. Rev. B [**79**]{} (2009) 104412. N.B. Ivanov and J. Richter: Phys. Lett. A [**232**]{} (1997) 308. J. Richter, N. B. Ivanov, and J. Schulenburg: J. Phys.: Condens. Matter [**10**]{} (1998) 3635. A. Koga, K. Okunishi, and N. Kawakami: Phys. Rev. B [**62**]{} (2000) 5558. A. Koga and N. Kawakami: Phys. Rev. B [**65**]{} (2002) 214415. J. Schulenburg and J. Richter: Phys. Rev. B [**65**]{} (2002) 054420. J. Schulenburg and J. Richter: Phys. Rev. B [**66**]{} (2002) 134419. T. Hakobyan, J. H. Hetherington, and M. Roger: Phys. Rev. B [**63**]{} (2001) 144433. L. Canovà, J. Strecka, and M. Jascŭr: J. Phys.: Condens. Matter [**18**]{} (2006) 4967. L. Canovà, J. Strecka, and T. Lucivjanský: Condens. Matter Phys. [**12**]{} (2009) 353. H. Kobayashi, Y. Fukumoto, and A. Oguchi: J. Phys. Soc. Jpn. [**78**]{} (2009) 074004. C. Mathonière, J.-P. Sutter, and J. V. Yakhmi: in [*Magnetism: Molecules to Materials IV,*]{} ed. J. S. Miller and M. Drillon (Wiley, Weinheim, 2003) p. 1. Y. Hosokoshi and K. Inoue: in [*Carbon Based Magnetism*]{}, ed. T. L. Makarova and F. Palacio (Elsevier B. V., Amsterdam, 2006) p. 107. E. Lieb and D. Mattis: J. Math. Phys. [**3**]{}, (1962) 749. T. Kennedy: J. Phys.: Condens. Matter [**2**]{} (1990) 5737. G. Fáth and J. Sólyom: Phys. Rev. B [**44**]{} (1991) 11836. F. Y. Wu: Rev. Mod. Phys. [**54**]{} (1982) 235. M. N. Barber: [*Phase Transitions and Critical Phenomena 8*]{}, ed. C. Domb and J. L. Lebowitz (Academic Press, London, 1983) p. 146. L.A. Takhtajan: Phys. Lett. [**87A**]{} (1982) 479.
H. M. Babujian: Phys. Lett. [**90A**]{} (1982) 479. I. Affleck and F. D. M. Haldane: Phys. Rev. [**B36**]{} (1987) 5291. Y. Kato and A. Tanaka: J. Phys. Soc. Jpn [**66**]{} (1997) 3944. A. Kitazawa and K. Nomura: J. Phys. Soc. Jpn. [**66**]{} (1997) 3944. S. Sachdev and T. Senthil: Ann. Phys. [**251**]{} (1996) 76. L. Bartosch, M. Kollar, and P. Kopietz: Phys. Rev. B [**67**]{} (2003) 092403. N. B. Ivanov and J. Richter: Phys. Rev. B [**69**]{} (2004) 214420. S. Yoshikawa and S. Miyashita: J. Phys. Soc. Jpn. Suppl. [**74**]{} (2005) 71. K. Hida: J. Phys. Condens. Matter: [**19**]{} (2007) 145225. K. Hida and K. Takano: Phys. Rev. B [**78**]{} (2008) 064407. R. R. Montenegro-Filho and M. D. Coutinho-Filho: Phys. Rev. B [**78**]{} (2008) 014418. M. Takahashi: Prog. Theor. Phys. Suppl. [**87**]{} (1986) 233. S. Yamamoto: Phys. Rev. B [**59**]{} (1999) 1024. S. Yamamoto and T. Fukui: Phys. Rev. B [**57**]{} (1998) R14008.
[^1]: E-mail address: hida@phy.saitama-u.ac.jp
[^2]: Present address: Department of Physics, College of Humanities and Sciences, Nihon University, Setagaya-ku, Tokyo 156-8550
---
abstract: 'Charge transport in electrorheological fluids is studied experimentally under strongly nonequilibrium conditions. By injecting an electrical current into a suspension of conducting nanoparticles we are able to initiate a process of self-organization which leads, in certain cases, to the formation of a stable pattern which consists of continuous conducting chains of particles. The evolution of the dissipative state in such a system is a complex process. It starts as an avalanche process characterized by nucleation, growth, and thermal destruction of such dissipative elements as continuous conducting chains of particles as well as electroconvective vortices. A power-law distribution of avalanche sizes and durations, observed at this stage of the evolution, indicates that the system is in a self-organized critical state. A sharp transition into an avalanche-free state with a stable pattern of conducting chains is observed when the power dissipated in the fluid reaches its maximum. We propose a simple evolution model which obeys the maximum power condition and also shows a power-law distribution of the avalanche sizes.'
address: |
Department of Physics and Division of Engineering and Applied Sciences,\
Harvard University, Cambridge, Massachusetts 02138
author:
- 'A. Bezryadin, R. M. Westervelt, and M. Tinkham'
date: 'October 21, 1998'
title: Evolution of avalanche conducting states in electrorheological liquids
---
[PACS numbers:]{} 64.60.Lx, 83.80.Gv, 82.70.Kj, 05.70.Ln.
Introduction
============
A breaking of translational or temporal symmetry often occurs when a homogeneous, spatially extended system is driven far from equilibrium. This results in pattern formation [@Cross]. The patterns (sometimes called “dissipative structures”) accelerate the energy dissipation and the motion of the system towards equilibrium. Spatiotemporal disorder, which occurs when the patterns vary in time and space, can involve the chaotic evolution of an amplitude field [@Steinberg; @Kolodner], or it can be connected with the dynamics of defects [@Porta]. Also, driven out-of-equilibrium systems with [*threshold*]{} dynamics exhibit a rich phenomenology, from synchronized behavior [@Strogatz] to self-organized criticality (SOC) [@Bak; @Olami], in which long-range correlations are manifested as power-law distributions of avalanche sizes and lifetimes [@Held; @Westervelt].
In this paper we study the evolution of dissipative structures in initially homogeneous electrorheological fluids [@Havelka] suddenly driven out of equilibrium by applying a strong electric field. The driving mechanism of the evolution is found to be a competition between the forces which attempt to order the system and the destructive influence of increased thermal fluctuations. The ordering forces appear when the system is driven out of equilibrium. In our case it is the electric field which polarizes the particles and leads to dipole-dipole attraction between them. The ordering leads to an increase in the dissipation rate. The increasing rate of the dissipation and associated temperature rise have an opposite, destructive effect on the self-organized structures.
Our attention will be restricted to systems with [*limited*]{} dissipation. They consist of two parts: an “adaptive” subsystem (the electrorheological fluid) and a “rigid” subsystem (in our experiments, a plain resistor connected in series with the fluid). The rigid part imposes an absolute limit on the power dissipated in the fluid. As a consequence, there is a global nonlinear interaction between all dissipative elements of the forming dissipative structure. Two types of collective behavior which lead to an increase in the dissipation rate have been encountered. These are (i) the conducting chains [@Halsey] which appear due to the dipole-dipole attraction and (ii) convective flows of electrically charged volumes of the liquid (see the illustration in Fig.1). The degree of order is characterized by the electrical current, which we can accurately measure. The charging of the nanoparticles and the associated [*repulsion*]{} between them competes with the dipole-dipole [*attraction*]{} and renders the chain formation less evident than in usual electrorheological liquids with zero conductivity [@Martin], where charging is not possible.
The new results presented in this article are the following: (i) Two qualitatively different current-carrying states are found in electrorheological liquids exposed to a strong electric field. A “scale-invariant” avalanche state (AS) appears at the beginning of the evolution. It resembles the SOC state observed previously in, for example, sandpiles [@Held] and is characterized by a power-law distribution of avalanche sizes and durations (even though there is no external flux-drive [@Sornette]). (ii) The AS can transform itself into a stable state (SS) with a visible pattern of strings of nanoparticles (Fig.2). (iii) This transformation (which can be considered as a pattern formation) takes place only if the power dissipated by the adaptive part (the fluid) reaches its maximum (imposed by the rigid part). (iv) We propose a simple evolution model which obeys the “maximum power” principle [@Odum; @Nagel] and shows an avalanche state with a power-law distribution of sizes and durations.
Experimental details
====================
The sample configuration is depicted in Fig.1. It consists of a pair of stainless-steel parallel cylindrical electrodes, 0.7 mm in diameter, separated by a distance of 10 mm and immersed over 10 mm into an electrorheological fluid. The fluid consists of a dielectric solvent (toluene) with ultrasonically dispersed conducting carbon nanoparticles [@Carbon] available commercially [@Carbon1]. The concentration of particles is $\approx 0.02$ mg/ml, which is far below the percolation threshold. Consequently, the initial resistance between the electrodes is high ($\sim 10^{12}\Omega$). At time $t=0$ a DC voltage $V_{0}=100$ V is applied to the electrodes through a series resistor $R_{s}$ (Fig.1). Then an evolution curve, current vs time, $I(t)$, is measured. In the following discussion we will distinguish between curves measured on the “same sample” and on “different samples”. In the first case a series of $I(t)$ curves is measured on the same hermetically closed bottle with electrodes and the fluid. To restore the homogeneity, the fluid is excited ultrasonically before each new $I(t)$ measurement [@Expl]. Measurements on “different samples” mean that a new, freshly prepared suspension is used for each sample.
Transport measurements
======================
Experimentally we find three main evolution scenarios. (i) ES1: The first measurement on a freshly prepared suspension shows a monotonic growth of the current with time, if the concentration of particles is high enough. An example of such behavior is given in Fig.3a, curve “A”. (ii) ES2: Subsequent measurements on the same sample show much more complicated curves with three different stages. For example, the curve “B” in Fig.3 was measured on the same sample as curve “A” after the fluid was again homogenized ultrasonically. Curves “C” and “D” were taken one after the other on a different sample, using a much higher series resistor $R_{s}$. They also illustrate the scenario ES2. The three stages observed in the ES2 case are described below. Stage 1: During the first few hundreds of seconds (or less) after the voltage is applied, the current is small and does not grow considerably. Stage 2 (avalanche state, AS): Strong fluctuations of the current (by a factor of $\approx100$ in some cases) appear; the averaged value of the current ($<I>$) gradually increases. Stage 3: The current rapidly increases to a still higher level and the fluctuations disappear. Thereafter the current continues to grow very slowly and monotonically. This new stable state (SS), which is the final stage of the evolution, is characterized by a visible and stable pattern of entangled strings composed of carbon particles. Examples of such strings are visible on the four bottom pictures of Fig.2. (iii) ES3: After a few successive measurements on the same sample the system cannot reach the stable state any more, but the first two evolution stages are the same. Examples of such evolution curves are shown in Fig.3b, curves “E” and “F”. These curves were taken after the curves “C” and “D”, on the same sample. The avalanche state in the ES3 case lasts $\sim 10^{5}$ s and finally, instead of the transition to the stable state, the current slowly decreases to zero; the conducting state “dies”.
This happens when most of the particles cluster and settle down. (iv) After many ($\sim 10$) measurements, the same sample shows no current growth at all.
The described evolution scenarios are quite general. They have been observed in liquids with different viscosity (toluene, hexadecane, mineral oil), with different electrodes (e.g. Pt, Sn), and at different values of the series resistor $R_{s}$. We have also found that these evolution scenarios can be observed not only by doing repetitive $I(t)$ measurements on the same sample (the “aging” technique described above) but also by reducing the concentration of the nanoparticles, while the $I(t)$ curve is measured only once on each new sample with a freshly prepared suspension. The concentration reduction leads to the same transitions between evolution scenarios as the aging (when a series of $I(t)$ measurements is made on the same sample). The aging approach was found to give much more reproducible results than the approach where the concentration is the control parameter.
Imaging of the pattern formation process
========================================
The evolution of patterns in electrorheological liquids can be observed directly with an optical microscope. Photographs shown in Fig.2 illustrate different stages for the evolution of the type ES2. The first image shows an aggregation process which takes place in the suspension of particles at zero electric field. Formation of fractal-like clusters is clearly visible.
The voltage was applied at time $t=0$ between the two electrodes (black). The applied field causes a strong polarization of the clusters (made of electrically conducting particles). The second photograph in Fig.2 shows that at $t \approx 2 \ s$, $i.e.$ immediately after the voltage was applied, all big aggregates break apart, so the fluid looks much more uniform. This rupture process is due to the polarization mentioned above. Since opposite sides of polarized clusters of nanoparticles carry opposite charges, big enough clusters are pulled apart if the applied electric field is strong enough.
During the first few hundreds of seconds the system shows some sort of collective behavior which may be called electroconvection or a “shuttling” effect. At this stage the electrical current is carried from one electrode to the other by macroscopic streams which develop in the fluid. Each stream carries many charged particles (or small clusters of particles) with the same charge. Upon contact with the oppositely charged electrode, the particles acquire the opposite charge and start to move toward the opposite electrode. Initially those streams are very unstable and the flow looks “turbulent”. With time, new streams nucleate, become stronger, and disappear. This shuttling effect is shown schematically in Fig.1. At this stage no stable strings were observed.
As time passes, the streams become bigger and slower. At some moment we observe an abrupt “stabilization” transition (which takes less than a second) when the turbulent electroconvection disappears and continuous strings of particles, extended from one electrode to the other, become visible. This is illustrated in Fig.2 (third image), which was taken a few seconds after the first stable strings became visible. Note that although we do not see the strings before the stabilization transition, we cannot exclude that some strings of particles are being formed for a short time and then destroyed by heating or convection. After the pattern is stabilized, the strings show a tendency to form bundles. As is shown in the fourth, fifth, and sixth images of Fig.2, these bundles grow continuously with time (which leads to the measured monotonic decrease of the sample resistance).
The maximum power principle
===========================
The electrical scheme of our setup is shown in Fig.1. The power dissipated by the electrorheological fluid can be written as $P_{f}=I(V_{0}-V_{s})=4P_{max}/(r+2+1/r)$, where $P_{max} \equiv V_{0}^{2}/4R_{s}$, $r \equiv R_{f}/R_{s}$, $R_{f} \equiv V_{f}/I$ is the time-dependent resistance of the fluid, $V_{0}$ is the battery voltage applied to the fluid and the resistor $R_{s}$ connected in series (see the schematic in Fig.1), and $V_{s}$ ($V_{f}$) is the voltage drop on the series resistor (fluid), so that $V_{0}=V_{s}+V_{f}$. The expression for $P_{f}$ has a single maximum, which is achieved when $r=1$ or $R_{f}=R_{s}$. Therefore the maximum power which can be dissipated by the fluid is $P_{max}=V_{0}^{2}/4R_{s}$. Note also that the expression for $P_{f}$ is symmetric under the substitution $r \rightarrow 1/r$ or, equivalently, $R_{f} \rightarrow R_{s}^{2}/R_{f}$. In other words, any allowed ($P_{f} \leq P_{max}$) level of the dissipated power $P_{f}$ (except the single point $P_{f}=P_{max}$) can be achieved in two physically different states of the fluid. Our measurements show that these two states are qualitatively different. All states with $R_{f}>R_{s}$ are characterized by strong avalanche-like current fluctuations. As soon as the fluid resistance decreases to the level $R_{f}=R_{s}$, where the dissipated power reaches its absolute maximum, the fluctuations disappear abruptly. At $R_{f}<R_{s}$ the fluid resistance continues to decrease, but slowly and monotonically. In Fig.4 we plot the power dissipated by the fluid (normalized by $P_{max}$) versus time. Curves “G” and “I” (which correspond to the ES2 scenario) illustrate the maximum power principle for two different values of the series resistance: $R_{s}=48.1M\Omega$ (curve “G”) and $R_{s}=1.04G\Omega$ (curve “I”). In both cases the huge current fluctuations disappear when the power reaches its maximum, $P_{f}/P_{max}=1$. The curve “H” shows the normalized power vs time in the ES3 case.
In this evolution scenario the power does not increase up to the $P_{f}=P_{max}$ level. Consequently the system never stabilizes. To summarize, the experiment shows that the choice between the two scenarios (ES2 or ES3) is determined by the ability of the adaptive part of the system with limited dissipation to reach the maximum rate of energy dissipation.
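The maximum power principle follows directly from the formula for $P_{f}$ and can be checked numerically. The sketch below is our own minimal verification (using the quoted battery voltage and the curve-“G” series resistor as illustrative inputs); it confirms the maximum at $r=1$ and the $r \rightarrow 1/r$ symmetry:

```python
# Check of P_f = 4*P_max/(r + 2 + 1/r), r = R_f/R_s, from the text.
# V0 and Rs are the experimental values quoted for curve "G".
V0 = 100.0            # applied DC voltage (V)
Rs = 48.1e6           # series resistance (ohm)
Pmax = V0**2 / (4 * Rs)

def P_f(r):
    """Power dissipated in the fluid as a function of r = R_f/R_s."""
    return 4 * Pmax / (r + 2 + 1 / r)

# The maximum is reached at r = 1, i.e. R_f = R_s ...
assert abs(P_f(1.0) - Pmax) < 1e-18
# ... and P_f is symmetric under r -> 1/r, so every power level below
# P_max is realized in two physically different states of the fluid.
for r in (0.1, 0.5, 3.0, 10.0):
    assert abs(P_f(r) - P_f(1 / r)) < 1e-12 * Pmax
    assert P_f(r) < Pmax
```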
Avalanche state and self-organized criticality
==============================================
It is interesting to compare the dynamics of fluctuations observed in our system to the critical behavior of sandpiles and other self-organized systems. We suggest that the huge current fluctuations measured before the pattern is stabilized constitute avalanche activity of the dissipative structure. To make a quantitative comparison, we analyze the distribution of avalanche sizes ($X$), defined as the amplitude (in Amperes) of each monotonic decrease of the current. This definition is acceptable since the noise level of our apparatus ($<1 \ pA$) is much lower than the amplitude of the current fluctuations. Similarly, the duration of an avalanche $T$ is defined as the duration (measured in seconds) of each monotonic current drop.
Statistical analysis shows that the avalanche activity in our system is scale invariant. This means that the avalanche distributions do not peak at any particular value. In the example of Fig.5b (see triangles), the avalanche-size probability density $D_{X}$ follows a power-law distribution $D_{X}\sim X^{-\alpha}$ (with $\alpha\approx 1$) over about four decades. This suggests that the dissipative structure (before it is stabilized) is in the self-organized critical state. To corroborate this, we have found the distribution $D_{T}$ of avalanche durations which is plotted in Fig.5c. It is also a power-law distribution: $D_{T}\sim T^{-\beta}$, as can be expected for a self-organized critical state. The exponent is larger in this case: ($\beta \approx 2.3$). Other samples have shown a very similar behavior.
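As an illustration of this analysis, the sketch below extracts avalanches from a sampled current trace using exactly the definition above (each maximal monotonic current drop) and estimates a power-law exponent from a log-log histogram. The synthetic trace and the least-squares fit are illustrative choices of ours, not the actual analysis of the measured data:

```python
import numpy as np

def avalanches(current):
    """Split a sampled current trace I(t) into maximal monotonic drops.

    Returns (sizes, durations): the amplitude X of each drop (in the units
    of the trace) and its length T in samples (seconds, if sampled once
    per second), matching the definitions in the text.
    """
    sizes, durations = [], []
    i = 0
    while i < len(current) - 1:
        if current[i + 1] < current[i]:              # a drop begins
            j = i
            while j < len(current) - 1 and current[j + 1] < current[j]:
                j += 1
            sizes.append(current[i] - current[j])
            durations.append(j - i)
            i = j
        else:
            i += 1
    return np.array(sizes), np.array(durations)

# Synthetic noisy trace, for illustration only (not measured data).
rng = np.random.default_rng(0)
I = np.cumsum(rng.normal(0.01, 1.0, 10_000))
X, T = avalanches(I)

# Crude exponent estimate from a least-squares fit on the log-log
# histogram; careful analyses would use maximum-likelihood estimators.
hist, edges = np.histogram(X, bins=np.logspace(-3, np.log10(X.max()), 20))
centers = np.sqrt(edges[:-1] * edges[1:])
mask = hist > 0
alpha = -np.polyfit(np.log(centers[mask]), np.log(hist[mask]), 1)[0]
```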
The distributions of avalanche sizes and durations presented above have been calculated for the evolution curves of the type ES3. In these cases the avalanche state lasts up to $10^{5} \ s$. In the ES2 case the avalanche state lasts for a much shorter period of time, but the distributions are similar to those in the ES3 case.
Dissipative elements as building blocks of the dissipative structure
====================================================================
Many properties of the pattern evolution, described above, can be understood by introducing the notion of “dissipative elements” (DE). The dissipative structure is assumed to be composed of relatively independent dissipative elements. In general, a DE is a region of space where any sort of self-organization or collective behavior (in our system it may be chain formation or electroconvection) leads to a strong increase of the local dissipation. The capability of each DE to dissipate energy $G_{i}$ (which is electrical conductance in our case) as well as the total number of DE’s are assumed to grow with time. This reflects the general tendency of ordering, observed in nonequilibrium systems. This tendency will force more and more particles to join the existing dissipative elements or to form new ones. This process of ordering may be limited by the heating associated with the activity of each DE. To build a simple evolution model (see below) we will assume that any DE burns out when the power dissipated in it reaches some critical value $P_{c}$. This constitutes the threshold dynamics of our system.
Under the assumptions, outlined above, the evolution consists of nucleation, growth, and destruction of dissipative elements. The stabilization transition, observed experimentally, can be understood in the following way. In the case when the total rate of the dissipation is limited (by the presence of a series resistor in our case), the pattern stabilizes if a sufficiently big number of DE’s with high enough $G_{i}$ values develops at the same time. In this case the total dissipated power (which is never bigger than $P_{max}$) will be shared between a big number of DE’s. Therefore the dissipation in each DE ($P_{i}$) can never become strong enough for it to burn out.
It remains to explain why the stabilization coincides with the point of maximum power and does not depend on the properties of the fluid. This follows from the fact that $P_{f}=4P_{max}/(r+2+1/r)$ and therefore the decreasing resistance (or increasing conductance) of the fluid causes an increase in the dissipation rate [*only*]{} if $r>1$ or $R_{f}>R_{s}$. Conversely, if $R_{f}<R_{s}$ then the increasing degree of order and the associated decrease in the fluid resistance lead to a [*decrease*]{} in the power dissipated in the fluid. Therefore the pattern stabilizes as soon as the power reaches the maximum. We assume here that the degree of order always increases with time if the system is driven far enough from equilibrium (meaning in our case that the applied voltage is strong enough). Also we assume that the ordered structures may be destroyed due to the heating, but [*only*]{} if the local power reaches some critical value (as was already explained above).
Evolution Model
===============
To confirm our hypothesis that the experimentally observed behavior is caused by nucleation, growth, and destruction of DE’s by the local heating associated with each DE, we suggest the following simple evolution model formulated in terms of electrical circuits. Let $R_{i}$ be the electrical resistance of the $i^{th}$ dissipative element. The conductance $G_{i}=1/R_{i}$ represents the efficiency of the $i^{th}$ DE to dissipate energy. The number of particles joining each DE increases with time and so $G_{i}$ increases as well. All DE’s are assumed to be connected in parallel. We consider a model where the power dissipated by all DE’s together cannot exceed some value $P_{max}$. In the model (as well as in the experiment) the power is limited by a resistor $R_{s}$ connected in series with the DE’s. The total current can be written as $I=V_{0}/(R_{s}+1/G_{f})$, where the total conductance of the fluid is $G_{f}=\sum_{i=1}^{N} G_{i}$. The sum is taken over all available dissipative elements. Their total number is $N$ ($N>>1$), but some of them may be switched off (meaning that $G_{i}=0$ for them). The “threshold dynamics” appears due to the assumption that the heating destroys the order in the $i^{th}$ dissipative element and its conductance goes to zero if the power $P_{i}=V_{f}^{2}G_{i}$ dissipated by this particular DE exceeds some critical value $P_{c}$ ($P_{c}<<P_{max}$). Since all DE’s are assumed to be connected in parallel, they are all biased with the same voltage $V_{f}=V_{0}-IR_{s}$, which depends on the total conductance of the fluid ($G_{f}=1/R_{f}$). Clearly this causes a global (and nonlinear) interaction between all DE’s. Indeed, if one of the DE’s burns out, then $V_{f}$ increases and therefore some other DE’s with a high conductance may burn out as well. This leads to a further increase of the voltage $V_{f}$ applied to the electrodes and may lead to the destruction of other DE’s. Such a “chain reaction” can explain the avalanches observed experimentally.
Our numerical model works as follows. At $t=0$ all dissipative elements have zero conductance ($G_{i}=0$). At each time step we choose randomly $N_{1}$ integer numbers $K_{m}$ such that $1\leq K_{m}\leq N$. Some of the $K_{m}$ numbers may be identical. Here $N_{1}$ is a fixed number, such that $1\leq N_{1} \leq N$. It controls the nucleation rate of dissipative elements. The $K_{m}$ numbers represent the dissipative elements whose conductance is going to be increased during the time step. The conductance of DE’s with corresponding numbers $K_{m}$ is increased in the following way: $G_{K_{m}}$ $\rightarrow$ $G_{K_{m}}+RND$. Here $RND$ is a random value, such that $0<RND<STEP$, and $STEP$ is a constant representing the growth rate of DE’s. Therefore the growth of each DE is a “biased random walk”. If two numbers $K_{m}$ are equal then $G_{K_{m}}$ will be increased twice, and so on. After the conductance of all randomly chosen DE’s is increased, following the algorithm explained above, we calculate the power dissipated in each DE using the expression $P_{i}=V_{f}^{2}G_{i}$. If a dissipative element for which $P_{i}>P_{c}$ is found, its conductance is put to zero, representing the destruction of this DE. After each such destruction event the voltage $V_{f}$, which is the same for all DE’s, is updated. We proceed to the next time step only when there are no DE’s with $P_{i}>P_{c}$ left.
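A minimal implementation of this model is sketched below. The nucleation rate $N_{1}=5$ and the critical power $P_{c}=1.8P_{max}/N$ follow the values quoted in the next section; the choices of $N$, $STEP$, $V_{0}$, $R_{s}$, the number of time steps, and the burn-out order are illustrative assumptions of ours:

```python
import numpy as np

N, N1 = 1000, 5                  # number of DE's; nucleation rate N1 = 5 as in the text
STEP = 1e-12                     # DE growth rate in siemens -- illustrative
V0, Rs = 100.0, 48.1e6           # battery voltage and series resistor -- illustrative
Pmax = V0**2 / (4 * Rs)
Pc = 1.8 * Pmax / N              # critical power per DE, as quoted in the text

rng = np.random.default_rng(1)
G = np.zeros(N)                  # at t = 0 all DE's are switched off (G_i = 0)

def fluid_voltage(G):
    """V_f = V0 - I*Rs with I = V0/(Rs + 1/G_f); reduces to V0 when G_f = 0."""
    return V0 / (1.0 + Rs * G.sum())

power_trace = []                 # P_f / P_max recorded after each time step
for step in range(20_000):
    # Growth: N1 randomly chosen DE's (repeats allowed) take one step of
    # the biased random walk G -> G + RND with 0 < RND < STEP.
    idx = rng.integers(0, N, size=N1)
    np.add.at(G, idx, rng.uniform(0.0, STEP, size=N1))

    # Destruction: while any DE dissipates P_i = V_f^2 G_i > P_c, burn one
    # out and update V_f; the rising V_f can trigger further burn-outs --
    # the "chain reaction" behind the avalanches.  We burn out the DE with
    # the largest conductance first (the ordering is our choice; the text
    # only requires V_f to be updated after each destruction event).
    while True:
        Vf = fluid_voltage(G)
        over = np.flatnonzero(Vf * Vf * G > Pc)
        if over.size == 0:
            break
        G[over[np.argmax(G[over])]] = 0.0

    power_trace.append(Vf * Vf * G.sum() / Pmax)
```

Because $P_{f}=V_{0}^{2}G_{f}/(1+R_{s}G_{f})^{2}$ is bounded by $P_{max}$, the recorded trace never exceeds unity, in agreement with the maximum power principle.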
It is remarkable that this simple model can produce evolution curves which are very similar to the experimental ones. Three examples of the power versus time dependence are shown in Fig.6. In all these examples the DE nucleation rate $N_{1}=5$ and the critical power $P_{c}=1.8P_{max}/N$ are the same. The parameter which is changed is the DE growth rate $STEP$. If the growth rate is low enough, the model generates a smooth evolution curve without avalanches, which looks similar to the experimental curves of the type $ES1$. Such an example is given in Fig.6a. The absence of avalanche activity is due to the low rate of the conductance growth, which means that a large number of DE’s can form before any particular DE reaches its critical power point. So the total dissipated power can reach the maximum before any DE burns out. After the total power reaches its maximum, no dissipative elements will be destroyed because the probability that the rate of the heat dissipation in each particular DE would increase goes to zero.
At higher growth rates (see Fig.6b) the model generates more complicated evolution curves. Now it shows the transition from an avalanche to the stable state, similar to the experimentally measured ES2 scenario. If the growth rate is chosen to be still higher (Fig.6c), the normalized power ($P_{f}/P_{max}$) always stays well below unity. Since it never reaches the maximum ($P_{f}/P_{max}=1$), the stabilization cannot be achieved. Consequently the curve shown in Fig.6c represents the ES3 scenario, in which the avalanche state lasts indefinitely.
In the framework of our model, the same three types of behavior (ES1, ES2, and ES3) can be observed if the growth rate is kept constant while other parameters are changed. For example, at low values of the critical power we always observe the ES3 scenario. By increasing the normalized critical power $NP_{c}/P_{max}$ it is possible to shift the system to the ES2 and even to the ES1 scenario at yet higher values of the normalized critical power. The maximum power principle is always obeyed: The stabilization takes place only if the power can reach the absolute maximum. The model-generated evolution curves are also characterized by a power-law distribution of avalanche sizes (see Fig.5b, solid dots). The power-law exponent $\alpha_{model}\approx 2$ is higher than the experimental value ($\alpha\approx 1$). In many systems described previously by other authors the opposite relation was observed: the theoretically predicted value of the exponent $\alpha$ was smaller than the value observed in experiments.
The model suggests three main parameters which control the transitions between different evolution scenarios. These parameters are the rates of nucleation and growth of dissipative elements and the normalized critical power $NP_{c}/P_{max}$ of the DE’s destruction. Experimentally we observe different evolution scenarios by aging the sample (see the discussion above). It is not well established which one of the control parameters changes during the aging. Preliminary observations suggest that in the process of the chain formation the particles can form stable clusters which can not be dissociated during subsequent ultrasonic excitation. This irreversible clustering leads to a decrease of the total number of independent particles participating in the chain formation, and consequently causes an effective decrease of the parameter $N$ which represents the maximum number of chains. The normalized critical power $NP_{c}/P_{max}$ decreases with decreasing $N$. This is one possible explanation for the aging process described in Section III which leads to the observed transitions from ES1 to ES2 and subsequently to the ES3 type scenario.
Our model possesses certain similarities to the models developed by D. Sornette [@Sornette]. He proposed a class of models in which the self-organized nature of the criticality stems from the fact that the critical point (defined as the point where the coherence length becomes infinite: $\xi \rightarrow +\infty$) attracts the nonlinear feedback dynamics. His models are based on the existence of a feedback of the order parameter on the control parameter. Our model also possesses a certain feedback mechanism, since the order parameter, say the dissipated power, tends to destroy the order in the system. This leads to a decrease of the conductance and therefore causes a change of the voltage applied to the fluid, which can be considered as a control parameter. On the other hand, our model is different since it is not spatially extended in the usual sense. In our model each dissipative element interacts with [*all*]{} other DE’s with the same strength, not only with neighboring DE’s. Therefore our model may be considered a zero-dimensional one, so that the notion of a critical state (which is used in Sornette’s models), defined as the state where the coherence length diverges ($\xi \rightarrow +\infty$), is not applicable to our system. Our model is based on the assumption that the order which develops in some [*nonequilibrium*]{} system may cause its own destruction due to the heat dissipated by the ordered structures themselves.
Conclusions
===========
In conclusion, we present an experimental study of the evolution of patterns in a system with limited dissipation. Experiments are done on a new type of electrorheological fluid. A transition from the SOC-type scale-invariant avalanche state to a stable pattern is observed. It takes place when the power dissipated in the adaptive part of the system reaches its maximum defined by the rigid part. A general model of the pattern evolution in nonequilibrium systems with limited dissipation is suggested.
We thank D. Weitz and S. Maslov for useful discussions. This work was supported in part by NSF Grants DMR-94-00396, DMR-97-01487, and PHY-98-71810.
M. C. Cross and P. C. Hohenberg, Rev. Mod. Phys. [**65**]{}, 851 (1993).
J. Fineberg, E. Moses, and V. Steinberg, [**61**]{}, 838 (1988).
P. Kolodner, J. A. Glazier, and H. Williams, , 1579 (1990).
A. La Porta and C. M. Surko, [**77**]{}, 2678 (1996).
S. H. Strogatz and I. Steward, Sci. Am. [**269**]{}, 102 (1993); A. V. M. Herz and J. J. Hopfield, [**75**]{}, 1222 (1995).
P. Bak, C. Tang, and K. Wiesenfeld, [**59**]{}, 381 (1987); H. Nakanishi, Phys. Rev. A [**41**]{}, 7086 (1990).
Z. Olami, et al., [**68**]{}, 1244 (1992); Kwan-tai Leung et al., [**80**]{}, 1916 (1998).
G. A. Held, et al., [**65**]{}, 1120 (1990).
K. L. Babcock and R. M. Westervelt, , 2168 (1990).
K. O. Havelka and F. E. Filisko, [*Progress in Electrorheology*]{} (Plenum Press, New York, 1995).
T. C. Halsey and W. Toor, [**65**]{}, 2820 (1990).
J. E. Martin and J. Odinek, [**75**]{}, 2827 (1995).
D. Sornette, J. Phys. I France [**2**]{}, 2065 (1992); D. Sornette, A. Johansen, and I. Dornic, J. Phys. I France [**5**]{}, 325 (1995).
H. T. Odum, Science [**242**]{}, 1132 (1988).
K. Nagel and M. Paczuski, Phys. Rev. E [**51**]{}, 2909 (1995).
M. S. Dresselhaus, G. Dresselhaus, and P. C. Eklund, [*Science of Fullerenes and Carbon Nanotubes*]{} (Academic Press, New York, 1996), pp. 28-29.
Graphitized carbon nanoparticles, 27-30 nm in diameter. Polysciences Inc. (http://www.polysciences.com), cat.\#08441.
Successive growths of chains followed by ultrasonic mixing change the properties of the particles and let us observe all possible evolution scenarios without changing the particle concentration.
---
title: 'Search for Charged Higgs bosons via decays to $W^\pm$ and a 125 GeV Higgs at the Large Hadron Collider'
---
Introduction
============
The observation of a Higgs boson ($\hobs$) [@Aad:2012tfa; @*Chatrchyan:2012ufa] at the Large Hadron Collider (LHC) may be just the first glimpse into the rich phenomenology of a larger Higgs sector. Indeed, many models require additional Higgs states, including charged Higgs bosons ($\hpm$), the observation of which would be a clear sign of physics beyond the standard model (SM). Experimental searches for $\hpm$ have largely focused on decays to $tb$ or $\tau\nu$, which dominate much of the parameter space of many models. However, when kinematically allowed, the decay $\hpm\to W^\pm \hobs$ can become significant for many parameter configurations. Earlier studies [@Drees:1999sb; @*Moretti:2000yg] demonstrated the potential of this channel, and the newfound knowledge of the Higgs mass, $m_{\hobs}\approx 125\gev$, provides an additional input for the analysis and a constraint on extensions of the Higgs sector (see also [@Coleppa:2014cca] for a recent discussion). Here we describe a collider analysis for the $\hpm\to W^\pm \hobs$ channel and determine its sensitivity at the LHC, which we compare to the possible signal strengths of several two Higgs doublet models (2HDMs) compatible with current experimental observations.
Collider Analysis
=================
The main production channel at the LHC for a charged Higgs above the top mass is typically $pp\to t(b)H^\pm$,[^1] which is possible through the coupling of the charged Higgs to third generation quarks. Focusing then on $H^\pm\to\hobs W^\pm$ decays, we consider the subsequent decay $\hobs\to b\bar{b}$, as both $b$-quarks are observable, allowing us to directly reconstruct the observed $125\gev$ state, and because SM-like Higgs bosons in this mass range decay dominantly in this channel.[^2] The process we then wish to search for is $pp\to (b)tH^\pm \to (b)b W^\mp W^\pm \hobs \to (b)bbb jj \ell\nu_\ell$, where one of the $W$-bosons (from either $H^\pm$ or top decay) decays leptonically and the other hadronically. The presence of a single lepton allows us to avoid multi-jet backgrounds, while requiring one hadronic $W$ avoids additional unseen neutrinos, making the event reconstruction more straightforward. The main background for this process is $t\bar{t}b(\bar{b})$, where either an additional $b$-tagged jet combines with a $b$-jet from a top decay or an additional $b\bar{b}$ pair mimics an $\hobs\to b\bar{b}$ decay.
To get a measure of the sensitivity that could be obtained at the $14\tev$ LHC, we generate the $t(b)\hpm$ signal using Pythia 6.4.28 [@Sjostrand:2006za] with the MATCHIG [@Alwall:2004xw] add-on to avoid double counting among $bg\to t\hpm$ and $gg\to tb\hpm$ processes, and all $t(b)WX,X\to b\bar{b}$ backgrounds with MadGraph5 [@Alwall:2011uj]. Both signal and background undergo parton showering and hadronization using Pythia 8 [@Sjostrand:2007gs] and are further processed with the DELPHES 3 [@deFavereau:2013fsa] detector simulation using experimental parameters based on the ATLAS experiment with modified $b$-tagging efficiencies.[^3] To reconstruct our signal events and reduce background, we use the following procedure, inspired by previous studies [@Drees:1999sb; @*Moretti:2000yg], with an additional top veto:
1. **Event selection:** Require events to have at least 3 $b$-tagged jets, at least 2 light jets, one lepton ($e/\mu$), and missing energy $\etmiss\ge 20\gev$. All objects must have transverse momentum $p_T\ge 20\gev$ and rapidity $|\eta|\leq 2.5$, with separation $\Delta R \ge 0.4$ from other objects.
2. **Hadronic ${W}$ reconstruction:** Choose the pair of light jets with invariant mass $m_{jj}$ closest to $m_W$, and reject the event if no pair satisfies $|m_{jj}-m_W|\leq 30\gev$.
3. **Leptonic $W$ reconstruction:** Attributing all $\etmiss$ to a neutrino from a $W$ decay, use the observed lepton to find the longitudinal component of the neutrino momentum, $p_{\nu,z}$, by imposing the mass constraint $m_{\ell\nu}=m_W$. The solution has a twofold ambiguity as a result of the quadratic nature of the constraint. For two real solutions, keep both. For complex solutions, discard the imaginary component and retain a single real $p_{\nu,z}$.
4. \[it:topveto\]**Top veto (high mass region, “veto first”):** If two top quarks can be reconstructed from reconstructed $W$’s and any unassigned jets, with both satisfying $|m_{Wj}-m_t|\leq 20\gev$, reject the event. The jets used may or may not be $b$-tagged.
5. **$\hobs$ reconstruction:** Choose the pair of $b$-tagged jets with invariant mass $m_{bb}$ closest to $m_{\hobs}\sim 125\gev$, and reject the event if no pair satisfies $|m_{bb}-m_{\hobs}|\leq 15\gev$.
6. **Top veto (low mass region, “veto second”):** Same as (\[it:topveto\].), but $b$-jets used in $\hobs$ reconstruction are excluded.
7. **Top reconstruction:** From the reconstructed $W$’s and remaining $b$-tagged jets, identify the best top quark candidate, determined by the $Wb$ combination with the invariant mass $m_{Wb}$ closest to $m_t$. If the selected combination includes one leptonic $W$ solution, discard the other. If there is no good candidate with $|m_{Wb}-m_t|\leq 30\gev$, reject the event.
8. **$\hpm$ reconstruction**: Combine the reconstructed $\hobs$ with the remaining $W$ to yield the discriminating variable $m_{W\hobs}$. If there are two leptonic $W$’s remaining, retain both values of $m_{W\hobs}$.
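Steps 2 and 3 of the procedure can be sketched as follows. The four-vector conventions and helper names are ours, not taken from the analysis code, and the $W$ mass value is rounded:

```python
import math

MW = 80.4  # W boson mass in GeV (rounded, illustrative)

def neutrino_pz(lep, met):
    """Step 3: solve m(l, nu) = m_W for the neutrino longitudinal momentum.

    lep = (px, py, pz, E) of the lepton (treated as massless); met = (px, py)
    of the missing transverse momentum, attributed entirely to the neutrino.
    Returns two real solutions, or a single solution (the real part) when
    the quadratic constraint has complex roots.
    """
    lx, ly, lz, le = lep
    nx, ny = met
    pt2 = lx * lx + ly * ly
    a = MW * MW / 2.0 + lx * nx + ly * ny
    disc = a * a - pt2 * (nx * nx + ny * ny)
    if disc < 0.0:                       # complex roots: keep the real part
        return [a * lz / pt2]
    root = le * math.sqrt(disc)
    return [(a * lz - root) / pt2, (a * lz + root) / pt2]

def best_w_jet_pair(jets, window=30.0):
    """Step 2: pick the light-jet pair with m_jj closest to m_W.

    jets is a list of (px, py, pz, E) tuples; returns the index pair, or
    None (reject the event) if no pair falls within the mass window.
    """
    best, best_d = None, window
    for i in range(len(jets)):
        for j in range(i + 1, len(jets)):
            px, py, pz, e = (jets[i][k] + jets[j][k] for k in range(4))
            m = math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))
            if abs(m - MW) <= best_d:
                best, best_d = (i, j), abs(m - MW)
    return best
```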
The background is often able to mimic the signal by combining a $b$-jet from a top decay with an additional $b$-tagged jet to reconstruct the $\hobs$. In order to remove this type of event, the top veto should be applied prior to the $\hobs$ reconstruction (“veto first”). However, because of the relative sizes of the masses involved, for charged Higgs masses not too far above the $\hpm\to\hobs W^\pm$ threshold, one of the resulting $b$-jets combines with the $W^\pm$ to give an invariant mass $m_{bW}\approx m_t$ in a large fraction of the available phase space. Such signal events are cut in the “veto first” scenario, negating the benefits of the background reduction. For lower mass searches, we then postpone the top veto until after the $\hobs$ reconstruction (“veto second”). In practice, we consider both top vetoes for a given mass and choose the one which maximizes the statistical significance, $S/\sqrt{B}$, and find that “veto second” is preferable for $\mhpm \lesssim 350\gev$.[^4] This is apparent in Fig. \[fig:sigback\], where the $\mhpm$ resonant peak is also evident. To further improve significance, for each $\mhpm$ we consider, we place a cut on the range of reconstructed $m_{W\hobs}$ which maximizes $S/\sqrt{B}$.
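The final cut can be implemented as a simple window scan; the helper below is our illustrative sketch (not the actual analysis code) of picking the contiguous window of a binned $m_{W\hobs}$ spectrum that maximizes $S/\sqrt{B}$:

```python
import numpy as np

def best_mass_window(sig, bkg, edges):
    """Return (low edge, high edge, S/sqrt(B)) of the contiguous bin window
    maximizing S/sqrt(B).  `sig` and `bkg` are expected signal and
    background counts per bin; `edges` are the bin edges of the
    reconstructed-mass histogram."""
    s, b = np.asarray(sig, float), np.asarray(bkg, float)
    best, best_val = (0, len(s)), -np.inf
    for i in range(len(s)):
        for j in range(i + 1, len(s) + 1):
            S, B = s[i:j].sum(), b[i:j].sum()
            if B > 0 and S / np.sqrt(B) > best_val:
                best, best_val = (i, j), S / np.sqrt(B)
    return edges[best[0]], edges[best[1]], best_val
```

For a resonant peak over a roughly flat background, the scan selects the bins around the peak, as one would expect.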
Models
======
One of the most straightforward extensions of the Higgs sector is the 2HDM, in which there are two scalar electroweak doublets, $\Phi_1$ and $\Phi_2$, which can each in general acquire a vacuum expectation value $v_i$ and couple to each other and standard model particles. After symmetry breaking, the Higgs sector in a 2HDM contains five states: two $CP$-even ($h,H$), one pseudoscalar ($A$), and two charged ($\hpm$). In general, either $h$ or $H$ could correspond to $\hobs$. Here we will focus on results for the case where $\hobs$ is the lighter state, $h$.
The Yukawa couplings to fermions are *a priori* free parameters of the theory but can easily lead to large tree-level flavor-changing neutral currents (FCNCs). One way to suppress FCNCs is to introduce a $Z_2$-symmetry which only allows each type of fermion to couple to a single doublet [@Glashow:1976nt; @*Paschos:1976ay]. There are four possible $Z_2$ assignments, and here we will consider two cases: Type I (2HDM-I), where fermions only couple to $\Phi_2$; and Type II (2HDM-II), where down-type quarks and leptons couple to $\Phi_1$ and up-type quarks couple to $\Phi_2$. In these models, the Yukawa couplings are determined entirely by the parameter $\tan\beta=v_2/v_1$. Another mechanism for controlling FCNCs is to require that the two Higgs doublets have Yukawa matrices which are proportional to one another, or aligned. Here we consider the case where all fermions couple to both $\Phi_1$ and $\Phi_2$ with aligned couplings, known as the A2HDM [@Pich:2009sp].
In order to see whether the $\hpm\to W^\pm \hobs$ channel is a useful probe of these models, we scan their parameter spaces for regions with a strong signal. We require that the lightest $CP$-even Higgs have a mass consistent with the observed state, $123\leq m_h \leq 127 \gev$, and that the heavier $H$ be non-degenerate, $135\leq m_H\leq 500\gev$. To satisfy electroweak constraints, we require $m_A=\mhpm$, and we consider $\hpm$ masses in the region above the $W^\pm h$ threshold, $200\leq \mhpm\leq 500\gev$. For the 2HDM-II, this is modified to $320\leq \mhpm\leq 500\gev$ to reflect $b$-physics constraints. In Type-I and II, we consider $1.5\leq\tan\beta\leq 6$, where the branching ratio $BR(\hpm\to W^\pm h)$ is typically largest.[^5]
Some of the strongest constraints on 2HDMs come from $b$-physics observables, and we subject the scans to 95% confidence limits on $BR(\bar{B}\to X_s\gamma)$, $BR(B_u\to \tau\nu)$, and $BR(B_s\to \mu^+\mu^-)$ given in [@superiso], and on $\Delta M_{B_d}$ from [@Mahmoudi:2009zx]. For $Z_2$-symmetric models, the parameter space scanned was chosen to satisfy these constraints, as described in [@Mahmoudi:2009zx], and for the A2HDM, $b$-physics observables were calculated with SuperIso-v3.4 [@superiso]. In addition, we subject all Higgs states other than $h$ to LEP, Tevatron, and LHC constraints using HiggsBounds-v4.1.3 [@Bechtle:2013wla]. Finally, we consider signal strength $\mu^X$ of $\hobs$ decay channels which have been recently measured, where $\mu^X=\sigma(pp\to\hobs\to X)/\sigma(pp\to \hsm\to X)$, with a $125\gev$ SM Higgs boson $\hsm$. We determine the theoretical counterparts of $\mu^X$ with HiggsSignals-v1.20 [@Bechtle:2013xfa] for $X= \gamma\gamma,\,ZZ$ and compare with the measurements of $\mu^{\gamma \gamma} = 1.13 \pm 0.24$, $\mu^{ZZ} = 1.0 \pm 0.29$ by CMS [@CMS-PAS-HIG-14-009].
Results
=======
Fig. \[fig:results\] shows the results of the parameter scans along with the sensitivity expected from the collider analysis. For the $Z_2$-symmetric 2HDMs, we find a large number of points which are potentially discoverable at a high-luminosity LHC. However, both of these models see deviations of $h$ from $\hsm$ for the points with the largest signal and consequently show less detection potential when the very SM-like CMS constraints are imposed. The A2HDM shows even stronger signals, well within reach of even the standard luminosity LHC. The effect of the CMS constraints is again severe, but some points still remain testable at lower luminosities. The $\hpm\to W^\pm\hobs$ channel can be a useful probe of 2HDMs at the LHC, particularly at high luminosities.
[^1]: This should be interpreted as $pp\to t(\bar{b})H^{-}+pp\to \bar{t}(b)H^{+}$. Throughout this text, we will not distinguish fermions and anti-fermions when their identity is unspecified and/or can be inferred.
[^2]: In principle, other decay channels could also be competitive. It has been suggested that, especially in analyses dominated by systematic errors, the $\hobs\to\tau^+\tau^-$ channel could be useful despite additional missing energy from $\tau$ decays and a reduced branching ratio, largely as a result of lower backgrounds [@Coleppa:2014cca].
[^3]: The $b$-tagging efficiency chosen is $\epsilon_\eta\tanh(0.03 p_T - 0.4)$, with $\epsilon_\eta = 0.7$ for central ($|\eta|\leq 1.2$) and $\epsilon_\eta = 0.6$ for forward ($1.2\leq|\eta|\leq 2.5$) jets, and the transverse momentum, $p_T$, in GeV. This is a conservative choice compared with high-luminosity projections.
[^4]: A full experimental analysis considering all sources of error may place greater emphasis on background reduction, which would likely shift this value.
[^5]: For a full description of the parameter scans, and results for $\hobs~=~H$ and supersymmetric models, see [@Enberg:2014pua].
---
abstract: 'We report the temperature($T$) and perpendicular magnetic field($B$) dependence of the Hall resistivity $\rho_{xy}(B)$ of dilute metallic two-dimensional(2D) holes in GaAs over a broad range of temperature(0.02-1.25K). The low $B$ Hall coefficient, $R_H$, is found to be enhanced when $T$ decreases. Strong magnetic fields further enhance the slope of $\rho_{xy}(B)$ at all temperatures studied. Coulomb interaction corrections of a Fermi liquid(FL) in the ballistic regime cannot explain the enhancement of $\rho_{xy}$, which occurs in the same regime as the anomalous metallic longitudinal conductivity. In particular, although the metallic conductivity in 2D systems has been attributed to electron interactions in a FL, these same interactions should reduce, [*not enhance*]{}, the slope of $\rho_{xy}(B)$ as $T$ decreases and/or $B$ increases.'
author:
- 'X. P. A. Gao'
- 'G. S. Boebinger'
- 'A. P. Mills Jr.'
- 'A. P. Ramirez, L. N. Pfeiffer, and K. W. West'
title: Temperature and Magnetic Field Enhanced Hall Slope of a Dilute 2D Hole System in the Ballistic Regime
---
The interplay between single particle localization and electron-electron interactions in disordered electronic systems has been under much investigation for two decades. Due to single particle localization, non-interacting 2D electron systems are predicted to be insulators at zero temperature in the presence of any disorder. It was also widely accepted that adding electron interactions does not change this conclusion and, thus, there is no true metallic state in 2D at $T$=0. It came as a surprise when a 2D metallic state and metal-insulator transition(MIT) were observed in various high mobility low density 2D systems after the initial discovery of Kravchenko [*et al.*]{}[@mitreview]. The strong Coulomb interactions in these low density metallic systems revived interest in the role of Coulomb interactions in disordered 2D systems.
A comprehensive theoretical understanding of the Coulomb interaction effects on the 2D electron transport has emerged over the years[@Altshuler; @Finkel; @Gold; @DasSarma; @Zala]. For diffusive electrons at low $T$, Coulomb interactions are known to give a ln$T$ conductivity correction $\delta\sigma(T)$, accompanying the similar ln$T$ correction from single particle interference in the weakly disordered regime[@Altshuler; @Finkel]. Recently Zala, Narozhny and Aleiner(ZNA) pointed out that the logarithmic Altshuler-Aronov interaction correction to $\sigma$ originates from coherent scattering of Friedel oscillations. They extended the calculation to intermediate temperatures where transport is ballistic($k_BT>\hbar/\tau$) instead of diffusive($k_BT<\hbar/\tau$)[@Zala]. For high mobility samples exhibiting 2D metallic conduction, the elastic scattering time $\tau$ is large and the sample is usually in the ballistic regime. In this regime, ZNA showed that $\delta\sigma(T)$, the interaction correction, could be positive(‘metallic’) or negative(‘insulating’), depending on the FL parameter $F_0^\sigma$ just as in the diffusive regime. The ZNA theory improves the previous screening theory of Coulomb interactions at intermediate temperatures\[5,6a,b\], and predicts a linear $T$-dependent $\delta\sigma(T)$ controlled by $F_0^\sigma$.
The interaction correction theory of FL systems in the ballistic regime[@Zala] was applied by various experimental groups to explain the zero magnetic field metallic conductivity[@proskuryakov; @coleridge; @kvon; @shashkin; @Noh; @pudalov; @vitkalov]. In these analyses, negative $F_0^\sigma$’s were obtained from fitting the metallic $\sigma(T)$ to a linear function of $T$ as predicted by the ZNA theory. In the FL theory, a negative(positive) $F_0^\sigma$ corresponds to ferromagnetic(antiferromagnetic) spin exchange interaction. While various scattering mechanisms besides the interaction correction can contribute to the longitudinal conductivity, the $T$ dependent Hall resistivity is a good probe for separating the Coulomb interaction effects[@Altshuler; @bishophall; @uren; @emeleus]. In this paper we present an analysis of the temperature dependent Hall resistivity together with the longitudinal conductivity of a metallic 2D hole system within the recent ballistic FL theory in both weak[@Zala] and strong perpendicular magnetic field[@Gornyi]. We found that for all the densities studied, the slope of $\rho_{xy}(B)$ is enhanced by a decreasing temperature and/or increasing magnetic field. When the $B$=0 metallic conductivity is used to fix the FL parameters, analysis shows that the enhanced slope of $\rho_{xy}(B)$ is qualitatively and quantitatively inconsistent with interaction corrections to Fermi liquid theory.
We performed the experiments on two dilute 2D hole systems in two 10nm wide GaAs quantum wells. The samples were made from the same wafer used in our previous study[@GaoPRL02]. The hole density $p$ was tuned by a gold backgate which is about 150$\mu$m underneath the quantum well. The two samples were measured in two different top-loading Helium3-4 dilution refrigerators: sample A was mounted on the copper tail of the mixing chamber of the refrigerator at UC-Riverside, while sample B was immersed in the liquid Helium3-4 mixture inside the mixing chamber of the refrigerator at LANL. The data collected from the two samples in the two refrigerators are consistent with each other even down to our lowest experimental temperature of 20mK. During the measurements, the voltage applied to the sample was always kept low (typically a few microvolts) such that the power delivered to the sample is less than a few fWatts/cm$^2$ to avoid overheating the holes.
In Fig.\[fig1\]a, we present the temperature dependent conductivity $\sigma(T)$ of sample A for various hole densities($p$=0.74-1.9$\times$10$^{10}$cm$^{-2}$) at $B$=0. The density is determined from the Shubnikov-de Haas(SdH) oscillations. For all the densities except 0.74$\times$10$^{10}$cm$^{-2}$, $\sigma(T)$ turns from insulating-like(d$\sigma(T)$/d$T>$0) to metallic-like(d$\sigma(T)$/d$T<$0) below a characteristic temperature $T^*$. The metallic $\sigma(T)$ for $p>p_c$ below $T^*$ was recently attributed by some authors to the Coulomb interaction correction of a Fermi liquid with $F_0^\sigma<$0 at intermediate temperatures according to the ZNA theory [@proskuryakov; @coleridge; @kvon; @shashkin; @Noh; @pudalov; @vitkalov]. Theoretically, interaction effects will also give a correction to the Hall resistivity. In the low $T$ diffusive limit, interactions have a correction $\delta R_H(T)\sim $ln$T$ to $R_H$, the Hall coefficient(the slope of $\rho_{xy}(B)$ in small $B$)[@Altshuler]. In the ballistic regime, $\delta R_H(T)$ is expected to change to a 1/$T$ dependence[@Zala]. Thus, depending on the value of $F_0^\sigma$, $R_H$ will increase or decrease towards the Drude Hall coefficient as $R_H(T)\sim 1/T$ when $T$ increases. Fig.\[fig1\]b presents the $R_H$ vs. $T$ data for four metallic densities in Fig.\[fig1\]a. $R_H$ was obtained by linearly fitting $\rho_{xy}(B)$ between -0.05T and +0.05T perpendicular field. It can be seen that at temperatures above 0.1K the measured $R_H(T)$ may be described as a $const.+1/T$ function(Fig.\[fig1\]b), although the fit fails at lower temperatures where the theory should apply best.
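The $const.+1/T$ description above amounts to a least-squares fit that is linear in the regressor $1/T$. A minimal sketch of such a fit, with synthetic data and illustrative function names (not the authors' actual fitting code):

```python
import numpy as np

def fit_const_plus_inverse_T(T, RH):
    """Least-squares fit of R_H(T) = a + b/T; linear in the regressor 1/T."""
    A = np.column_stack([np.ones_like(T), 1.0 / T])
    (a, b), *_ = np.linalg.lstsq(A, RH, rcond=None)
    return a, b
```

In the experiment the fit works above 0.1K but fails at lower temperatures, which is the central inconsistency with the ballistic interaction correction.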
Now we quantitatively discuss the longitudinal transport together with the Hall resistivity within the interaction correction theory of FL, using a density ($p$=1.65$\times$10$^{10}$cm$^{-2}$) in sample B as an example. Fig.\[fig2\]a presents $\sigma(T)$ at $B$=0. In the ballistic regime, the interaction correction to conductivity is[@Zala] $$\label{eq1}
\delta\sigma(T)=\sigma_D\left(1+\frac{3F_{\text{0}}^\sigma}{1+F_{\text{0}}^\sigma}\right)\frac{T}{T_F}.$$ Following the analyses of ref.[@proskuryakov; @coleridge; @kvon; @shashkin; @Noh; @pudalov; @vitkalov], we can also fit the $B$=0 conductivity data for 0.1K$<T<$0.2K to the linear dependence of Eq. \[eq1\], obtaining a Drude conductivity of 40 $e^2/h$ and $F_0^\sigma$=-0.6. The hole mass was set to be $m^*$=0.38$m_e$ in the fitting process, with $m_e$ being the free electron mass. In Fig.\[fig2\]b, $R_H$ vs $T$ data are plotted together with the predicted $R_H(T)$ (the gray line) according to ZNA theory with $\sigma_D$=40 $e^2/h$ and $F_0^\sigma$=-0.6. In the ZNA theory, the interaction correction to $R_H$ is the summation of the corrections from the singlet(charge) channel and the triplet(spin) channel: $\delta R_H=\delta R^{\rho}_H+\delta R^{\sigma}_H$. The singlet channel correction $\delta R^{\rho}_H$ and the triplet channel correction $\delta R^{\sigma}_H$ are given as Eqs. 17 and 18, respectively, in ref. 5c.
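Extracting $F_0^\sigma$ from the fitted slope amounts to inverting Eq. \[eq1\]: writing $c=(\mathrm{d}\sigma/\mathrm{d}T)\,T_F/\sigma_D$, one obtains $F_0^\sigma=(c-1)/(4-c)$. A sketch of this round trip; the Fermi temperature value $T_F=1.2$K used below is an illustrative placeholder, not a number taken from the paper:

```python
def zna_slope(F0, sigma_D, T_F):
    """Slope d(delta sigma)/dT of Eq. (1): sigma_D * (1 + 3F/(1+F)) / T_F."""
    return sigma_D * (1.0 + 3.0 * F0 / (1.0 + F0)) / T_F

def F0_from_slope(slope, sigma_D, T_F):
    """Invert Eq. (1): with c = slope * T_F / sigma_D, F0 = (c - 1)/(4 - c)."""
    c = slope * T_F / sigma_D
    return (c - 1.0) / (4.0 - c)
```

For $F_0^\sigma=-0.6$ the prefactor $1+3F/(1+F)=-3.5$ is negative, so $\delta\sigma$ decreases with $T$, i.e. metallic behavior, consistent with the fit described above.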
The discrepancy between the data and theoretical expectation in the metallic regime of Fig.2 is obvious. In fact, for $F_0^\sigma$=-0.6, the theory predicts a nearly flat but [*decreasing*]{} $R_H$ as temperature decreases in the experimental temperature range (20mK-1.2K). Note that the FL theory predicts the interaction correction to $R_H$ to be very small in the ballistic regime for large $\sigma_D$, consistent with the Hall coefficient measurements for metallic 2D electrons in high mobility Silicon-metal-oxide-semiconductor field-effect transistors(Si-MOSFET’s) [@Pudalovhall; @Sarachikhall; @Khodas].
It is important to know if the temperature enhanced $R_H$ is actually related to a varying carrier density effect. A standard way to measure carrier density is the SdH oscillations in the longitudinal magneto-resistivity $\rho_{xx}(B)$. From the positions of the SdH minima/maxima one can extract the carrier density. At 20mK we could observe SdH oscillation in $\rho_{xx}(B)$ down to $\sim$0.06T. Note that resolving SdH at low magnetic fields(high filling factors) is difficult for low density holes with large effective mass(and hence small cyclotron energy) because of the necessity to cool the holes to very low temperature. Fig.\[fig3\]a shows the index number vs. 1/$B$ for the positions of the SdH oscillations shown in the inset. We obtain the total hole density $p=1.74\times 10^{10}cm^{-2}$ and the majority/minority spin subband densities $p_{+/-}=1.15,0.59\times 10^{10}cm^{-2}$, via linear fitting of the index number vs. 1/$B$ following ref.[@stormer; @eisenstein]. The analysis of SdH beating is consistent with a fixed ($B$-independent) density(with 30$\%$ net spin polarization at $B$=0) in the regime of SdH oscillations and quantum Hall plateaus[@SdH]. However, the low-field($\leq$0.05T) Hall coefficient, $R_H(T)$, changes by more than 20$\%$ between 0.1 and 0.5K, temperatures sufficiently high that most SdH oscillations at high filling factors are no longer observable. Nevertheless, the positions of the SdH dips at $\nu$=1,2 do not move with $T$, and hence strongly imply a fixed ($T$-independent) carrier density. The $T$=20mK SdH oscillations and Hall resistivity $\rho_{xy}(B)$ are presented in Fig.\[fig3\]b. The data are averaged from both positive and negative magnetic field measurements to remove the admixture between $\rho_{xx}$ and $\rho_{xy}$. We see that the SdH dips and quantized Hall plateaus occur at the same magnetic fields.
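The density extraction from the index-vs-$1/B$ fit can be sketched as follows: the slope of the fit is the SdH frequency $f$ (in tesla), and a spin-resolved subband of degeneracy $g_s$ has density $p=g_s\,(e/h)\,f$. This is a generic illustration, assuming a spin-resolved subband with $g_s=1$; the function name and interface are assumptions made here.

```python
import numpy as np

E_CHARGE = 1.602176634e-19  # C
H_PLANCK = 6.62607015e-34   # J s

def density_from_sdh(indices, B_minima, spin_degeneracy=1):
    """Linear fit of SdH index number vs. 1/B; the slope is the SdH
    frequency f in tesla, giving the density p = g_s * (e/h) * f in m^-2."""
    inv_B = 1.0 / np.asarray(B_minima)
    f, _ = np.polyfit(inv_B, np.asarray(indices), 1)
    return spin_degeneracy * E_CHARGE * f / H_PLANCK
```

For the majority subband quoted above ($p_+\approx 1.15\times10^{10}$cm$^{-2}$), this corresponds to an SdH frequency of roughly $0.48$T.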
Note, however, that the extrapolation of the low $B$($\leq$0.05T) $\rho_{xy}$ (dashed line) intersects the Hall plateaus at magnetic fields higher than the plateau centers, indicating that the low field $R_H$ is smaller than that determined at high fields. While this 20$\%$ discrepancy could, in principle, be due to interaction corrections to $R_H$[@Altshuler; @bishophall; @uren; @emeleus], we have already shown that the $\sigma(T)$ and $R_H(T)$ data are not explained consistently within the interaction theory of FL.
While ZNA’s theory is only applicable in the low field limit ($\omega_c\tau<$1), Gornyi and Mirlin(GM) recently calculated the interaction correction to $\rho_{xy}$ into the high magnetic field regime($\omega_c\tau\gg$1) with $\omega_c=eB/m^*$ being the cyclotron frequency[@Gornyi]. We also investigated the behavior of $\rho_{xy}(B)$ in strong magnetic fields to further test the FL interaction correction theory for our sample.
In GM’s strong magnetic field theory, the interaction correction to $\rho_{xy}$ is separated into two parts. One part is $T$-dependent but $B$-independent, and the other part is $B$-dependent and $T$-independent. In Fig.\[fig4\], we plot the Hall slope, $\rho_{xy}/B$ vs. $B$ at various temperatures. To remove the admixture of $\rho_{xx}$ into $\rho_{xy}$, we antisymmetrized the $\rho_{xy}$ data from both $B>$0 and $B<$0 measurements to obtain Fig.\[fig4\] . The low field($B\leq$500G) $R_H$ data are also included. Fig.\[fig4\] shows that the $\rho_{xy}/B$ data indeed may be viewed as a $T$-independent magnetic field enhancement on the background of a $B$-independent temperature enhancement[@notehall]. The interaction correction to $\rho_{xy}$ at strong $B$ is also quantitatively related to the FL parameter $F_0^\sigma$ as in ZNA theory[@Gornyi]. The $T$-dependent part of the $\rho_{xy}$ correction in the ballistic regime and strong $B$ is[@Gornyi] $$\label{GMT}
\frac{\delta\rho_{xy}^T}{\rho_{xy}}=-7.117\frac{e^2/\hbar}{\sigma_D}\left(\frac{3F_0^\sigma}{F_0^\sigma+1}+1\right)\left(\frac{k_BT}{\hbar/\tau}\right)^{1/2}.$$ In this high field regime, as in the low field regime, theory predicts a decreasing slope of the Hall resistivity with decreasing temperature. However, the opposite behaviour, i.e. enhancement of the Hall resistivity, is observed when $T$ decreases. The $B$-dependent part of the GM correction to $\rho_{xy}$ is[@Gornyi] $$\label{GMB}
\frac{\delta\rho_{xy}^B}{\rho_{xy}}\approx\frac{e^2/\hbar}{\sigma_D}\left(\frac{3F_0^\sigma}{F_0^\sigma+1}+1\right)(\omega_c\tau)^{1/2}.$$ Fig.\[fig4\] also includes the theoretical curve from Eq.\[GMB\] for $F_0^\sigma=-0.6$, $\sigma_D=40$ and $\rho_{xy}/B$($B$=0) = 25k$\Omega$/T. One can see that $\delta\rho_{xy}^B/\rho_{xy}$ is expected to be negative for $F_0^\sigma$=-0.6 but the data show a positive increase as $B$ increases.
Fig.\[fig4\] also suggests that $\rho_{xy}$/$B$ is enhanced with decreasing $T$ at both weak and strong magnetic fields in a similar fashion. It is reasonable to conclude that the $T$ dependent $\rho_{xy}/B$ originates from the same mechanism for both magnetic field regimes. Since our temperature dependent SdH shows that the enhanced $\rho_{xy}/B$ at high $B$ is not related to a temperature dependent density, we further conclude that the enhanced low magnetic field Hall coefficient is not due to a density effect. In conclusion, for both the low magnetic field(ZNA) and high magnetic field(GM) regimes our combined resistivity and Hall data are inconsistent with the electron interaction corrections interpretation in a Fermi liquid.
Finally, we briefly comment on the relation between our data and several other FL-based models of the 2D metallic state, which do not invoke the FL parameters[@DasSarma; @Altshulertrap]. For our sample in the metallic state, $\sigma$ is enhanced by as much as a factor of three as $T$ is reduced, a result perhaps consistent with the screening theory of Das Sarma and Hwang[@DasSarma]; however, the behavior of $R_H$ has not yet been theoretically discussed within the screening theory. Alternatively, the enhanced $R_H$ at low $T$ could be interpreted as a carrier freeze out\[ref.6a\] or trapping effect[@Altshulertrap]; however, the field($B>$0.06T) and temperature independent density we observe in the SdH oscillations requires these effects to disappear above 0.06T and makes these interpretations seem highly unlikely.
The authors are pleased to thank I.L. Aleiner and A. Punnoose for valuable discussions. Work at UCR was supported by LANL-CARE program. The NHMFL is supported by the NSF and the State of Florida.
P.A. Lee and T.V. Ramakrishnan, [*Rev. Mod. Phys.*]{} [**57**]{}, 287 (1985).
E. Abrahams, S.V. Kravchenko, and M.P. Sarachik, *Rev. Mod. Phys.* **73**, 251 (2001).
B.L. Altshuler and A.G. Aronov, in [*Electron-Electron Interactions in Disordered Systems*]{}, edited by A.L. Efros and M. Pollak (North-Holland, Amsterdam, 1985). A.M. Finkel’stein, [*Sov. Phys. JETP*]{} [**57**]{}, 97 (1983). A. Gold and V. T. Dolgopolov, [*Phys. Rev. B*]{} [**33**]{}, 1076 (1986). S. Das Sarma and E. H. Hwang, (a) [*Phys. Rev. Lett.*]{} [**83**]{}, 164 (1999); (b)[*Phys. Rev. B*]{} [**61**]{}, R7838 (2000);(c) [*ibid*]{}, [**69**]{}, 195305 (2004).
G. Zala, B.N. Narozhny, and I.L. Aleiner,(a) [*Phys. Rev. B*]{} [**64**]{}, 214204 (2001);(b) [**65**]{}, 020201(R) (2002); (c)[**64**]{}, 201201 (2001).
Y.Y. Proskuryakov [*et al.*]{}, [*Phys. Rev. Lett.*]{} [**89**]{}, 076406 (2002). P.T. Coleridge, A.S. Sachrajda, and P. Zawadzki, [*Phys. Rev. B*]{} [**65**]{}, 125328 (2002). Z.D. Kvon,O. Estibals ,G.M. Gusev , and J.C. Portal, [*Phys. Rev. B*]{} [**65**]{}, R161304 (2002). A.A. Shashkin,S.V. Kravchenko,V.T. Dolgopolov and T.M. Klapwijk, [*Phys. Rev. B*]{} [**66**]{}, 073303 (2002). H. Noh [*et al.*]{}, [*Phys. Rev. B*]{} [**68**]{}, 165308 (2003). V. M. Pudalov [*et al.*]{}, [*Phys. Rev. Lett.*]{} [**91**]{}, 126403 (2003). S. A. Vitkalov, K. James, B. N. Narozhny, M. P. Sarachik, and T. M. Klapwijk,[*Phys. Rev. B*]{} [**67**]{}, 113310 (2003).
D.J. Bishop, D.C. Tsui and R.C. Dynes, [*Phys. Rev. Lett.*]{} [**46**]{}, 360 (1981).
M.J. Uren, R.A. Davies and M. Pepper, [*J. Phys. C:Solid St. Phys.*]{} [**13**]{}, L985 (1980).
C.J. Emeleus [*et al.*]{}, [*Phys. Rev. B*]{} [**47**]{}, 10016 (1993).
I.V. Gornyi and A.D. Mirlin, (a)[*Phys. Rev. Lett.*]{} [**90**]{}, 076801 (2003); (b)[*Phys. Rev. B*]{} [**69**]{}, 045313 (2004).
X. P.A. Gao, A.P. Mills, Jr., A.P. Ramirez, L.N. Pfeiffer, and K.W. West, *Phys. Rev. Lett.* **89**, 016801 (2002).
V.M. Pudalov, G. Brunthaler, A. Prinz, and G. Bauer, [*JETP Letters*]{} [**70**]{}, 48 (1999). M.P. Sarachik, D. Simonian, K.M. Mertes, S.V. Kravchenko, and T.M. Klapwijk, [*Physica B*]{} [**280**]{}, 301 (2000). M. Khodas and A.M. Finkel’stein, [*Phys. Rev. B*]{} [**68**]{}, 155114 (2003). H.L. Stormer [*et al.*]{}, [*Phys. Rev. Lett.*]{} [**51**]{}, 126 (1983). J. P. Eisenstein, H. L. Stormer, V. Narayanamurti, A. C. Gossard, and W. Wiegmann, [*Phys. Rev. Lett.*]{} [**53**]{}, 2579 (1984).
We found that the $B$=0 spin splitting increases from 20$\%$ to 32$\%$ as the density decreases from 2.35 to 1.35$\times$10$^{10}$cm$^{-2}$ for sample B. Note that the inversion asymmetry related Rashba spin splitting should be negligible for our symmetrically doped quantum well with low hole density[@eisenstein]. The $B$=0 spin splitting here is perhaps related to the strong ferromagnetic spin exchange interactions and ferromagnetic instability of 2D MIT in high $r_s$ 2D systems (A. A. Shashkin, S. V. Kravchenko, V. T. Dolgopolov, and T. M. Klapwijk [*Phys. Rev. Lett.*]{} [**87**]{}, 086801 (2001)).
The oscillatory behavior of $\rho_{xy}/B$ in Fig.\[fig4\] at low temperature comes from the onset of quantum Hall effects.
B. L. Altshuler and D. L. Maslov, [*Phys. Rev. Lett.*]{} [**82**]{}, 145 (1999).
---
abstract: 'A record in a permutation is a maximum or a minimum, from the left or from the right. The entries of a permutation can be partitioned into two types: the ones that are records are called external points, the others are called internal points. Permutations without internal points have been studied under the name of square permutations. Here, we explore permutations with a fixed number of internals points, called almost square permutations. Unlike with square permutations, a precise enumeration for the total number of almost square permutations of size $n+k$ with exactly $k$ internal points is not known. However, using a probabilistic approach, we are able to determine the asymptotic enumeration. This allows us to describe the permuton limit of almost square permutations with $k$ internal points, both when $k$ is fixed and when $k$ tends to infinity along a negligible sequence with respect to the size of the permutation. Finally, we show that our techniques are quite general by studying the set of $321$-avoiding permutations of size $n+k$ with exactly $k$ internal points ($k$ fixed). In this case we obtain an interesting asymptotic enumeration in terms of the Brownian excursion area. As a consequence, we show that the points of a uniform permutation in this set concentrate on the diagonal and the fluctuations of these points converge in distribution to a biased Brownian excursion.'
address:
- 'Institut für Mathematik, Universität Zürich, Winterthurerstr. 190, CH-8057 Zürich, Switzerland'
- 'Université de Paris Diderot, IRIF, Bâtiment Sophie Germain, 75013, Paris, France'
- 'Université de Paris Diderot, LPSM, Bâtiment Sophie Germain, 75013, Paris, France'
author:
- Jacopo Borga
- Enrica Duchi
- Erik Slivken
bibliography:
- 'pattern.bib'
title: Almost square permutations are typically square
---
Introduction
============
We look at permutations as diagrams, that is, if $n$ denotes the size of a permutation $\sigma$, we identify $\sigma$ with the set of points $\{(i,\sigma(i))\}_{i=1}^n$. The points of a permutation can be divided into two types, internal and external. The external points are the records of the permutation, either maximum or minimum, from the left or from the right. The internal points are the points that are not external. Square permutations are permutations where every point is external. Almost square permutations are permutations with some fixed number of internal points. We use the notation ${Sq}(n)$ to denote the set of square permutations of size $n$ and ${{ASq}(n,k)}$ to denote the set of almost square permutations of size $n+k$ with exactly $n$ external points and $k$ internal points.
Square permutations were first studied in [@mansour_square], and later in [@duchi_square1] and [@ALBERT2011715], where several approaches were used to find their generating function and to derive from it an explicit expression for $|{Sq}(n)|$, specifically $$|{Sq}(n)|=2(n+2)4^{n-3}-4(2n-5)\binom{2n-6}{n-3}.$$ More recently in [@duchi_square2] the second author of the present paper devised an enumerative approach through generating trees which highlights a fast sampling procedure for uniform random elements in ${Sq}(n)$. Finally, a probabilistic exploration in [@borga2019square] by the first and the third author of the present paper, found many interesting limiting objects for uniform random permutations in ${Sq}(n)$.
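The closed formula for $|{Sq}(n)|$ is easy to check against small cases: every permutation of size at most $4$ is square (the forbidden patterns have size $5$), and of the $120$ permutations of size $5$ exactly $16$ contain an internal point, leaving $104$ square ones. A direct evaluation of the formula:

```python
from math import comb

def num_square_permutations(n):
    """Closed-form count of square permutations of size n (n >= 3):
    2(n+2)4^(n-3) - 4(2n-5) * C(2n-6, n-3)."""
    return 2 * (n + 2) * 4 ** (n - 3) - 4 * (2 * n - 5) * comb(2 * n - 6, n - 3)
```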
In [@ALBERT2011715] square permutations were referred to as *convex permutations* and were described by pattern-avoidance. In particular, square permutations are permutations that avoid the sixteen permutations of size $5$ that have an internal point.
We recall the definition of pattern avoidance for permutations. Let ${\mathcal{S}}_n$ denote the set of permutations of size $n$. For $\pi\in {\mathcal{S}}_n$ and $\omega \in {\mathcal{S}}_k$ we say that $\pi$ contains an occurrence of $\omega$ if there exists a subsequence $i_1 < \ldots < i_k$ such that $(\pi(i_1), \ldots, \pi(i_k))$ has the same relative order as $\omega.$ We say that $\pi$ avoids the pattern $\omega$ if it contains no occurrences of $\omega$. We let ${A\!v_n}(\omega)$ denote the set of permutations of size $n$ that avoid $\omega$ and for a collection of patterns $\mathcal{B}$ we let ${A\!v_n}(\mathcal{B})$ denote the set of permutations of size $n$ that avoid every pattern in $\mathcal{B}$. See [@bona; @kit; @vatter2014permutation] for a proper introduction to the wide range of topics related to patterns in permutations.
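The definition of containment above translates directly into a brute-force check over subsequences. This is a minimal sketch for small sizes, not an efficient pattern-matching algorithm; the function names are ours.

```python
from itertools import combinations

def contains(perm, patt):
    """True if perm contains an occurrence of the pattern patt, i.e. a
    subsequence whose entries are in the same relative order as patt."""
    def standardize(seq):
        ranks = sorted(seq)
        return tuple(ranks.index(v) for v in seq)
    target = standardize(patt)
    return any(standardize(sub) == target
               for sub in combinations(perm, len(patt)))

def avoids(perm, patt):
    return not contains(perm, patt)
```

For example, $4231$ contains $321$ (via the subsequence $4,2,1$), while $231$ avoids it.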
Almost square permutations were studied for the first time in [@disanto2011permutations]. It was shown that, for fixed $k>0$, the generating function with respect to the size for ${{ASq}(n,k)}$ is algebraic of degree $2$, and this generating function was explicitly computed for $k=1,2,3$ (see Theorem \[squareInt123\]). However, computations become intractable for $k>3$.
Permutations that almost avoid a pattern were considered in [@brignall2009almost; @griffiths2011almost] from an enumerative point of view. A permutation is said to $k$-almost avoid a pattern (or set of patterns) if $k$ or fewer points can be deleted so that the resulting permutation avoids the pattern (or set of patterns). The notion of almost avoiding permutations has a deep relation with sorting algorithms (we refer to the introduction of [@brignall2009almost] for more details) which are widely studied in computer science and combinatorics. The notion of almost avoidance used in [@brignall2009almost; @griffiths2011almost] differs slightly from our definition of almost square permutations. A permutation is $k$-almost square if removing exactly $k$ *internal* points one obtains a square permutation. On the contrary, a permutation is a $k$-almost avoiding permutation if, removing $k$ or fewer points, *either internal or external*, one obtains a permutation that belongs to the appropriate class.
We finally point out that problems similar to the ones mentioned above, that is, involving the removal of some specific atoms from a discrete structure, have been extensively considered in graph theory. Classes of graphs, defined as follows, have been studied: for a graph class $\mathcal{G}$ and an integer $k$, define $\mathcal A_k(\mathcal G)$ as the class of all graphs in which the removal of $k$ well-chosen vertices leads to $\mathcal G$. We refer to the introduction of [@leivaditis2019minor] for a nice overview of the literature on graph classes of the form $\mathcal A_k(\mathcal G)$ and related problems. We mention that probably the most famous instance of this kind of problem is the study of $k$-apex graphs, i.e. graphs that can be made planar by the removal of exactly $k$ vertices.
The first main result of this paper uses the approach in [@borga2019square; @duchi_square2] to give the asymptotic enumeration of ${{ASq}(n,k)}$.
We write $a_n \sim b_n$ if $\lim_{n\to \infty} a_n/b_n = 1$, and $a_n = o(b_n)$ if $\lim_{n\to \infty} a_n/b_n = 0.$
\[approx\_size\] For $k=o(\sqrt n)$, as $n\to \infty,$
$$\label{approx_size_eq}
|{{ASq}(n,k)}| \sim \frac{k!2^{k+1}n^{2k+1}4^{n-3}}{(2k+1)!}\sim \frac{k!2^{k}n^{2k}}{(2k+1)!}|{Sq}(n)|.$$
When $k$ grows at least as fast as $\sqrt n$ the above result fails. Nevertheless, when $k=o(n)$, we can still obtain the following weaker asymptotic expansion that determines the behavior of the exponential growth.
\[approx\_size\_2\] For $k=o(n)$, as $n\to \infty,$
$$\log\left(|{ASq}(n,k)|\right)=\log\left(\frac{k!}{(2k+1)!}2^{k+1}n^{2k+1}4^{n-3}\right)+o(k).$$
In order to determine the above asymptotic enumerations, we use an understanding of the geometric structure of a typical square permutation. Specifically, we use some previous results (established in [@borga2019square]) about the precise description of the typical shape of a large square permutation and then we find bounds on the different possible ways of adding internal points. These two results lead to the desired asymptotic enumeration, and also give the description of the typical shape of a large almost square permutation.
For the latter, we utilize the language of *permutons* [@MR2995721]. A permuton is a probability measure on the unit square with uniform marginals. Every permutation $\sigma$ can be associated with the permuton $\mu_{\sigma}$ representing a scaled version of its diagram (see Section \[sect:permutons\] for a precise definition). Permuton limits have been widely studied in recent years, see for instance [@bassino2017universal; @borga2018localsubclose; @permuton] (we refer to our previous article [@borga2019square] for a detailed description of the literature on permutons).
Given $z\in(0,1)$ we denote by $\mu^{z}$ the permuton corresponding to a rectangle in $[0,1]^2$ with corners at $(z,0), (0,z),(1-z,1)$ and $(1,1-z)$ (for a rigorous construction we refer to Section \[sect:muz\]). In [@borga2019square] it was shown that the permuton limit of a uniform random square permutation is given by the random permuton $\mu^{\bm z}$, where $\bm{z}$ is chosen uniformly in the interval $(0,1)$. Permutations in ${{ASq}(n,k)}$ can be constructed starting from a permutation in ${Sq}(n)$ and adding internal points (shifting points appropriately). Intuitively, this suggests that the permuton limit of ${{ASq}(n,k)}$ is biased toward rectangles with larger area. This is confirmed by the following result.
\[fixedk\_thm\] Fix $k>0$. Let ${\bm{z}}^{(k)}$ denote the random variable in $(0,1)$ with density $$f_{{\bm{z}}^{(k)}}(t) = (2k+1){2k \choose k} (t(1-t))^k,$$ i.e., ${\bm{z}}^{(k)}$ is beta distributed with parameters $(k+1,k+1)$. If $\bm\sigma_n$ is uniform in ${{ASq}(n,k)}$, then as $n\to \infty,$ $$\mu_{\bm{\sigma}_n} \stackrel{d}{\longrightarrow} \mu^{{\bm{z}}^{(k)}}.$$
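As a quick numerical sanity check (a Python sketch; the helper names are ours), one can verify that the stated density integrates to $1$ and coincides with the Beta$(k+1,k+1)$ density written via the Gamma function:

```python
import math

def density_z_k(t, k):
    # Density of z^(k) from the theorem: (2k+1) * C(2k,k) * (t(1-t))^k.
    return (2 * k + 1) * math.comb(2 * k, k) * (t * (1 - t)) ** k

def beta_density(t, a, b):
    # Standard Beta(a, b) density, normalized by the Gamma function.
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return norm * t ** (a - 1) * (1 - t) ** (b - 1)

# Midpoint Riemann sum: the density should integrate to 1 over (0, 1).
steps = 10_000
for k in (1, 2, 5):
    integral = sum(density_z_k((i + 0.5) / steps, k) for i in range(steps)) / steps
    print(k, round(integral, 6))  # 1.0 for every k
```

The agreement with Beta$(k+1,k+1)$ reflects the identity $B(k+1,k+1)=k!^2/(2k+1)!=1/\big((2k+1)\binom{2k}{k}\big)$.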
As $k$ increases, the distribution of ${\bm{z}}^{(k)}$ puts more and more weight around the value $1/2$ (see Fig. \[fig:beta\_distrib\]). We therefore expect that, in the regime where $k\to\infty$ together with $n$ and $k=o(n)$, a uniform random almost square permutation with $k$ internal points tends to $\mu^{1/2}$. The following theorem establishes exactly this concentration result.
![The chart displays the density of the distribution of ${\bm{z}}^{(k)}$ for different values of $k$.[]{data-label="fig:beta_distrib"}](beta_distrib)
\[permuton\_limit\] Let $k$ and $n$ both tend to infinity with $k=o(n)$. If $\bm\sigma_n$ is uniform in ${{ASq}(n,k)}$ then $$\mu_{\bm{\sigma}_n} \stackrel{d}{\longrightarrow} \mu^{1/2}.$$
The probabilistic approach used to obtain our results has a wide range of possible applications beyond the set of square permutations. For example, in Section \[sect:321av\] of this paper, we apply our techniques to establish the asymptotic enumeration for permutations avoiding the pattern $321$ with $k$ additional internal points. Permutations avoiding a decreasing sequence of length three are extensively studied in the literature, see for instance [@borga2018local; @callan; @hoffman2017pattern; @HRS1; @HRS2; @Ja321; @madras_monotone; @mp; @mrs]. We recall that the points of a $321$-avoiding permutation can be partitioned into two increasing subsequences, one weakly above the diagonal and one strictly below the diagonal. Therefore $321$-avoiding permutations are particular instances of square permutations.
Let ${ASq({A\!v_n}(321),k)}$ denote the set of permutations avoiding the pattern $321$ with $n$ external points and $k$ additional internal points or, equivalently, the subset of permutations $\sigma$ in ${{ASq}(n,k)}$ where the pattern induced by the records of $\sigma$ is in ${A\!v_n}(321)$. For fixed $k$, we show (Theorem \[parallelgf123\]) that the generating function of these permutations is again algebraic of degree 2, and more precisely rational in the Catalan generating series. Explicit expressions are derived for $k=1,2,3$. As in the case of square permutations, for $k>3$, the computations to determine the generating function become intractable (see Section \[sect:genfunc\]). Nevertheless, using our new probabilistic approach, we are able to compute the first order approximation of the enumeration.
Let $c_n$ denote the $n$-th Catalan number, $c_n=\frac{1}{n+1}{2n\choose n}$, so that $|{A\!v_n}(321)| = c_n$.
\[thin red line\] Fix $k>0$. Then as $n\to\infty,$
$$|{ASq({A\!v_n}(321),k)}|\sim \frac{(2n)^{3k/2}}{k!}\cdot c_n\cdot \mathbb{E}\left[\left(\int_0^1\bm e(t)dt\right)^k\right],$$ where $\bm e(t)$ denotes the standard Brownian excursion on the interval $[0,1]$.
The $k$-th moment of the Brownian excursion area appearing on the right-hand side is evaluated in Section 2 of [@janson2007brownian], where the author shows that $$\mathbb{E}\left[\left(\int_0^1\bm e(t)dt\right)^k\right]=(36\sqrt{2})^{-k}\frac{2\sqrt{\pi}}{\Gamma((3k-1)/2)}\xi_k,$$ where $\xi_k$ satisfies the recurrence $$\label{svante constant}
\xi_r = \frac{12r}{6r-1}\frac{\Gamma(3r+1/2)}{\Gamma(r+1/2)}- \sum_{j=1}^{r-1}{r \choose j} \frac{\Gamma(3j+1/2)}{\Gamma(j+1/2)}\xi_{r-j}, \qquad r\geq 1.$$
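The recurrence is straightforward to evaluate numerically. The following Python sketch (our own helper names) computes $\xi_r$ and the corresponding moments; as a check, the first moment of the excursion area equals the classical value $\sqrt{\pi/8}\approx 0.6267$, and the second equals $5/12$:

```python
from math import gamma, sqrt, pi, comb

def xi(r_max):
    # xi_r from the recurrence above; index 0 is an unused placeholder.
    vals = [0.0] * (r_max + 1)
    for r in range(1, r_max + 1):
        vals[r] = 12 * r / (6 * r - 1) * gamma(3 * r + 0.5) / gamma(r + 0.5)
        for j in range(1, r):
            vals[r] -= comb(r, j) * gamma(3 * j + 0.5) / gamma(j + 0.5) * vals[r - j]
    return vals

def excursion_area_moment(k):
    # E[(integral of the excursion)^k] via the displayed formula.
    return (36 * sqrt(2)) ** (-k) * 2 * sqrt(pi) / gamma((3 * k - 1) / 2) * xi(k)[k]

print(excursion_area_moment(1))  # ~ 0.6267 = sqrt(pi/8)
```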
The final result of our paper is a generalization of Theorem 1.2 in [@hoffman2017pattern], where the authors proved that the points of a uniform random $321$-avoiding permutation concentrate on the diagonal and that the fluctuations of these points converge in distribution to a Brownian excursion. We generalize this result to uniform random permutations in ${ASq({A\!v_n}(321),k)}$.
We define for a permutation $\tau^k_n\in {ASq({A\!v_n}(321),k)}$ (with the convention that $\tau^k_n(0)=0$) and $t\in[0,1]$, $$F_{\tau^k_n}(t) \coloneqq \frac{1}{\sqrt{2(n+k)}}\big |\tau^k_n(s(t)) - s(t) \big|,$$ where $s(t)=\max\left\{m\leq \lfloor (n+k)t \rfloor|\tau^k_n(m)\text{ is an external point}\right\}$. Note that, heuristically, the function $F_{\tau^k_n}(t)$ interpolates only the external points of $\tau^k_n$, forgetting the internal ones. We also introduce the following biased Brownian excursion.
\[def:kbiasedex\] Let $k>0$. The $k$-biased Brownian excursion $(\bm{e}^k_t)_{t\in[0,1]}$ is a random variable in the space of right-continuous functions $D([0,1],\mathbb{R})$ with the following law: for every continuous bounded functional $G:D([0,1],\mathbb{R})\to{\mathbb{R}},$ $${\mathbb{E}}\left[G\left(\bm{e}^k_t\right)\right]=\mathbb{E}\left[\left(\int_0^1\bm e(t)dt\right)^k\right]^{-1}{\mathbb{E}}\left[G(\bm{e}_t)\cdot\left( \int_0^1 \bm{e}_t dt\right) ^k\right],$$ where $\bm{e}_t$ is the standard Brownian excursion on \[0,1\].
\[thm:fluctuations\] Fix $k>0$. Let $\bm{\tau}^k_n$ be a uniform random permutation in ${ASq({A\!v_n}(321),k)}$. Then $$\left(F_{\bm{\tau}^k_n}(t)\right)_{t\in [0,1]} \stackrel{d}{\longrightarrow} \left(\bm{e}^k_t\right)_{t\in [0,1]},$$ where $\bm{e}^k_t$ is the $k$-biased Brownian excursion on $[0,1]$, and the convergence holds in the space of right-continuous functions $D([0,1],\mathbb{R})$.
Further questions {#further-questions .unnumbered}
-----------------
We collect here some problems and open questions that we would like to investigate in future projects.
1. We studied the permuton limit for permutations with no internal points ([@borga2019square Theorem 4.4]), with a fixed number of internal points (Theorem \[fixedk\_thm\]) and with an increasing but negligible number of internal points (Theorem \[permuton\_limit\]). What can we say about the permuton limit of ${{ASq}(n,k)}$ when $k $ has order $n$ (or is larger than $n$)? We expect a phase transition in the permuton limit from the permuton $\mu^{1/2}$ to the permuton given by the Lebesgue measure on the unit square (which is the limit of uniform permutations). We believe that the precise analysis of this phase transition is an interesting and challenging question to address.
2. In Section \[sect:321av\] we investigate $321$-avoiding permutations with $k$ additional internal points for $k$ fixed. What about the case when $k\to\infty$? We believe that in this regime the fluctuations of the points for a uniform permutation are not of order $\sqrt n$ any more, drastically changing the behavior of the limiting shape of the permutation.
3. The $k$-th moment of the Brownian excursion area appearing in Theorem \[thin red line\] is known to be the continuous limit of the normalized $k$-th moment of the area under large Dyck paths: it would be interesting to establish a bijection between ${ASq({A\!v_n}(321),k)}$ and some specific set of Dyck paths covering $k$ marked points.
Outline of the paper {#outline-of-the-paper .unnumbered}
--------------------
Section \[sect:genfunc\] briefly explores the explicit generating functions for ${{ASq}(n,k)}$ and ${ASq}({A\!v_n}(321),k)$. Even for small $k$, the generating functions become quite complicated. Section \[sect:insert\] explains how to construct permutations in ${{ASq}(n,k)}$ starting from a permutation in $Sq(n)$. Section \[sect:perm\_to\_anchored\_seq\] recalls useful results from [@borga2019square] that are necessary in the subsequent sections. Section \[sect:asym\_enum\] contains the proof of Theorems \[approx\_size\] and \[approx\_size\_2\], while Section \[sect:permutons\] considers permutons and contains the proof of Theorems \[fixedk\_thm\] and \[permuton\_limit\]. Finally Theorems \[thin red line\] and \[thm:fluctuations\] are proved in Section \[sect:321av\].
Generating Functions for small values of $k$. {#sect:genfunc}
=============================================
In [@disanto2011permutations], Disanto et al. extended the linear recursive construction of square permutations that was given in [@duchi_square1] in order to enumerate the class ${{ASq}(n,k)}$. In particular they proved the following theorem for the generating function $ASq^{(k)}(t)=\sum_{n \ge 4}|{ASq}(n,k)| \cdot t^{n}$ of almost square permutations with $n$ external points and $k$ internal points:
\[squareInt123\] For all $k\geq0$, the generating function $ASq^{(k)}(t)$ of almost square permutations with $k$ internal points is algebraic of degree 2 and there exists a rational function $R^{(k)}(u)$ such that $$ASq^{(k)}(t)= R^{(k)}(C(t)),$$ where $C(t)$ is the Catalan generating function: $C(t)=\frac{1-\sqrt{1-4t}}{2t}$. In particular[^1], for $k=1,2,3$, $$\label{con:Exact}
R^{(k)}(u)=\frac{8(u-1)^{4}u^{k-3}}{(2-u)^{4+4k}}P^{(k)}(u),$$ where $P^{(1)}(u),P^{(2)}(u)$ and $P^{(3)}(u)$ are explicit polynomials given in [@disanto2011permutations].
It is tempting to conjecture that (\[con:Exact\]) holds for all $k$ with some polynomial $P^{(k)}(u)$, but we would like to stress that finding an explicit expression for $R^{(k)}(u)$ is still an open problem. Indeed, even though solving the system requires only suitable substitutions and applications of the kernel method, the computations are heavy because they involve derivatives and specializations of series in six variables, and the authors of [@disanto2011permutations] were not able to push them beyond the case $k=3$. However, assuming (\[con:Exact\]), they conjectured that $P^{(k)}(C(\frac{1}{4}))= 2^{(4k-1)}k!$, where $C(\frac{1}{4})=2$. Via singularity analysis [@FlSe09], this conjecture would imply Theorem \[approx\_size\] for the asymptotic enumeration of almost square permutations with $k$ internal points (for $k$ fixed); in particular, the known cases establish the theorem for $k=1,2,3$. However, as mentioned earlier, this direct approach appears to be intractable, as opposed to the probabilistic approach discussed in the rest of the present paper.
We now discuss the case of ${ASq({A\!v_n}(321),k)}$, which was not considered in [@disanto2011permutations]. Since the strategy is similar to the one used in [@disanto2011permutations], we do not provide all the details here and simply explain how to adapt the computations of [@disanto2011permutations] to obtain our result. The explicit computations can be found in the Maple files in [@maplecomp2019].
In the case of ${ASq({A\!v_n}(321),k)}$, the system of equations used in [@disanto2011permutations] takes a slightly simpler form. Let $ASq(Av(321),k)(t)$ denote the generating function for the enumeration sequence $|ASq(Av_n(321),k)|$. This generating function decomposes according to the five relevant subclasses of permutations identified in [@disanto2011permutations] as $$ASq(Av(321),k)(t)=F^{(k)}_A(t)+F^{(k)}_{B_\alpha}(t)+ F^{(k)}_{B_{a,u}}(t)+ F^{(k)}_{B_{a,c}}(t)+F^{(k)}_{B_{d,c}}(t).$$ Introducing five catalytic variables $u,v,x,y,z$, the multivariate power series $F_E^{(k)}(u,v;x,y,z)=F_E^{(k)}(t;u,v;x,y,z)$, $E\in\{A,B_{\alpha},B_{a,u},B_{a,c},B_{d,c}\}$, satisfy a system of linear equations of the form (see below for the notation) $$F_E^{(0)}(u,v;x,y,z)=t\cdot \Phi_E(\mathcal{G}^{(0)}(u,v);x,y,z)+t^2uv\cdot\delta_{E=A}+t^2\cdot\delta_{E=B_{\alpha}},$$ and for $k\geq1$ $$F_{E}^{(k)}(u,v;x,y,z)=t\cdot \Phi_E(\mathcal{G}^{(k)}(u,v);x,y,z)+\nabla(F_E^{(k-1)})(u,v;x,y,z).$$ We now explain the various terms appearing in the previous equations (noting that the system is very similar to the one obtained in [@disanto2011permutations Proposition 2.1]). We denote with $\delta_{\mathcal{P}}$ the indicator function of a property $\mathcal{P}$. For $k\geq 0$, $$\mathcal{G}^{(k)}(u,v)=\left\{G^{(k)}_A(u,v),G^{(k)}_{B_{\alpha}}(u,v),G^{(k)}_{B_{a,u}}(u,v),G^{(k)}_{B_{a,c}}(u,v),G^{(k)}_{B_{d,c}}(u,v)\right\},$$ where $G_E^{(k)}(u,v)=F_E^{(k)}(u,v;1,1,1)$ depends only on $u$ and $v$. Additionally, $\Phi_E(\mathcal{G}^{(k)}(u,v);x,y,z)$ is a linear combination with rational coefficients in $u,v,x,y,z$ of the series in the family $\mathcal{G}^{(k)}(a,b)$ using combinations of the substitutions $a\in\{uxy,u,xy,1\}$ and $b\in\{vz,v,z,1\}$. Finally, $\nabla$ is a mixed differential and divided difference operator, $$\begin{aligned}
\nabla(f)(u,v;x,y,z)=&\frac{x^2y^2z}{1-z}\left((\partial_xf)_{|x=xy,y=1,z=1}-(\partial_xf)_{|x=xyz^{-1}}\right)\\
&+\frac{xy^2}{1-xy^{-1}}\left(\partial_yf-xz^{-1}(\partial_xf)_{|x=xyz^{-1},y=z}\right)\\&+\frac{xy^2z^{-1}}{(1-yz^{-1})^2}\left(f-f_{|x=xyz^{-1},y=z}\right),\end{aligned}$$ where $\partial_x$ (resp. $\partial_y$) denotes the partial derivative w.r.t. $x$ (resp. $y$).
The first system for the series $F_E^{(0)}$ can be solved explicitly using the kernel method and yields as expected a refinement of the Catalan generating series. In order to deal with $k\geq1$ we first set $x=y=z=1$ and build a generic system where the terms $\nabla(F_E^{(k-1)})(u,v,1,1,1)$ are considered as parameters $h^{(k-1)}_E(u,v)$: $$G_E^{(k)}(u,v)=t\cdot \Phi_E(\mathcal{G}^{(k)}(u,v);1,1,1)+h^{(k-1)}_E(u,v).$$ We solve this system of equations using the kernel method on the series $G_E^{(k)}(u,v)$. This yields an explicit expression of the form $$G_E^{(k)}(u,v)=\sum_{a,b,D} g^{E}_{a,b,D}(U,u,v)\cdot h^{(k-1)}_D(a,b),$$ with $D\in\{A,B_{\alpha},B_{a,u},B_{a,c},B_{d,c}\}$, $b\in\{v,1\}$ and $a\in\{V,U,u,1\}$, with $V=1/(1-v(1-U)/U^2)$, and $U=1+tU^2$, and where the coefficients $g^E_{a,b,D}(U,u,v)$ are rational in $U,u,v$.
Returning to the original problem we have that $$F_{E}^{(k)}(u,v;x,y,z)=t\cdot \Phi_E\left(\sum_{a,b,D} g_{a,b,D}^E(U,u,v)\cdot h^{(k-1)}_D(a,b);x,y,z\right)+\nabla(F_E^{(k-1)})(u,v;x,y,z),$$ where $$\begin{aligned}
h^{(k-1)}_D(a,b)&=\lim_{x,y,z\to1}\nabla(F_D^{(k-1)})(a,b;x,y,z)
\\&=\left(\partial_x(\partial_y+\partial_z-1)-\frac12(\partial_x^2+\partial_y^2)\right)(F_D^{(k-1)})(a,b;1,1,1).\end{aligned}$$ This equation implies a result similar to Theorem \[squareInt123\] above and it can be iterated from the initial terms $F_E^{(0)}$ to get the successive series $F_E^{(k)}$ for $k\geq 1$.
\[parallelgf123\] For all $k\geq0$, the generating function $ASq(Av(321),k)(t)$ of square permutations avoiding $321$ with $k$ additional internal points is algebraic of degree 2 and there exists a rational function $R^{(k)}(u)$ such that $$ASq(Av(321),k)(t)= R^{(k)}(C(t)),$$ where $C(t)$ is the Catalan generating function: $C(t)=\frac{1-\sqrt{1-4t}}{2t}.$ In particular, for $k=1,2,3$, $$R^{(k)}(u)= \frac{(u-1)^4}{(u^2+1-u)^{2k}(u-2)^{3k-1}}P^{(k)}(u),$$ where $P^{(1)}(u),P^{(2)}(u)$ and $P^{(3)}(u)$ are explicit polynomials given in [@maplecomp2019].
Via singularity analysis, this theorem implies the cases $k=1,2,3$ of Theorem \[thin red line\]. But again, for $k > 3$, the computations become intractable.
Internal insertions and deletions for permutations {#sect:insert}
==================================================
To understand permutations in ${{ASq}(n,k)}$ we investigate how they can be obtained by adding points to a permutation in ${Sq}(n)$. Each permutation $\pi\in {{ASq}(n,k)}$ has a unique *exterior*, $\sigma={\text{ext}}(\pi)$, obtained by removing the internal points in $\pi$ and appropriately shifting the remaining points (keeping the relative position among them) so that the resulting set of points corresponds to a (square) permutation.
Consider the permutation $\pi=4752316=\begin{array}{lcr}
\begin{tikzpicture}
\begin{scope}[scale=.3]
{
\setcounter{indice}{0};
\foreach \i in {4,7,5,2,3,1,6}
\addtocounter{indice}{1};
\addtocounter{indice}{1}
\draw [help lines] (1,1) grid (\theindice,\theindice);
\setcounter{indice}{1};
\foreach \i in { 4,7,5,2,3,1,6 } {
\draw (\theindice+.5,\i+.5) [fill] circle (.2);
\addtocounter{indice}{1};
}
\addtocounter{indice}{-1};
}
\draw (3+.5,5+.5) [red, fill] circle (.3);
\draw (5+.5,3+.5) [red, fill] circle (.3);
\end{scope}
\end{tikzpicture}
\end{array}\in ASq(5,2)$, where we highlighted in red the two internal points. Then the exterior of $\pi$ is the permutation $$\sigma={\text{ext}}(\pi)=35214=\begin{array}{lcr}
\begin{tikzpicture}
\begin{scope}[scale=.3]
{
\setcounter{indice}{0};
\foreach \i in {3,5,2,1,4}
\addtocounter{indice}{1};
\addtocounter{indice}{1}
\draw [help lines] (1,1) grid (\theindice,\theindice);
\setcounter{indice}{1};
\foreach \i in { 3,5,2,1,4 } {
\draw (\theindice+.5,\i+.5) [fill] circle (.2);
\addtocounter{indice}{1};
}
\addtocounter{indice}{-1};
}
\end{scope}
\end{tikzpicture}
\end{array}.$$
In this section we define the *insertion* and *deletion* operation on permutations. These operations will allow us to grow certain classes of permutations from other well-understood classes.
For $n\in{{\mathbb N}}$, we denote by $[n]$ the set $\{1,2,\dots,n\}$. For a permutation $\sigma$ of size $n$ and a pair $(i,j) \in [n+1]^2$, the *insertion* of $(i,j)$ in $\sigma$ gives the permutation obtained by adding a point at $(i,j)$ and shifting the points in $\sigma$ at or to the right of column $i$ to the right by 1 and shifting the points in $\sigma$ at or above row $j$ up by $1$. We denote the permutation by ${\textbf{insert}}(\sigma,(i,j))$. For a permutation $\sigma'$ that contains the point $(i',j')$, the *deletion* of $(i',j')$ in $\sigma'$ gives the permutation obtained by removing the point $(i',j') = (i',\sigma'(i'))$ and shifting points to the right of column $i'$ to the left by $1$ and points above row $j'$ down by $1$. We denote this permutation ${\textbf{delete}}(\sigma',(i',j'))$. If $\sigma'$ is obtained by inserting $(i,j)$ in $\sigma$, then $\sigma$ is obtained by deleting $(i,j)$ from $\sigma'$. Note that any pair $(i,j)\in [n+1]^2$ is a valid insertion for $\sigma\in {\mathcal{S}}_n$, whereas only the points of the form $(i',\sigma'(i'))$ make for valid deletions in $\sigma'$.
We consider the permutation $\pi=215684793$ and we insert the point $(6,6)$ in it, obtaining (as shown in Fig. \[fig:insert\_6\_6\]) the permutation $\sigma={\textbf{insert}}(\pi,(6,6))=2\,1\,5\,7\,9\,6\,4\,8\,10\,3.$
![Insertion of the point $(6,6)$ (highlighted with a red circle) in the permutation $\pi=215684793$.\[fig:insert\_6\_6\]](insert_6_6 "fig:")\
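These operations are easy to implement. The following Python sketch works with permutations in one-line notation as lists (helper names are ours; our `delete` takes only the column $i$, since the value of the deleted point is then determined). It reproduces the insertion of $(6,6)$ into $215684793$:

```python
def insert(sigma, i, j):
    # Insert the point (i, j) (1-indexed) into sigma, given in one-line
    # notation: values >= j are shifted up by 1, then j is placed at column i.
    shifted = [v + 1 if v >= j else v for v in sigma]
    return shifted[: i - 1] + [j] + shifted[i - 1 :]

def delete(sigma, i):
    # Delete the point (i, sigma(i)) and close the gaps in rows and columns.
    j = sigma[i - 1]
    return [v - 1 if v > j else v for v in sigma[: i - 1] + sigma[i:]]

pi = [2, 1, 5, 6, 8, 4, 7, 9, 3]           # the permutation 215684793
print(insert(pi, 6, 6))                     # [2, 1, 5, 7, 9, 6, 4, 8, 10, 3]
assert delete(insert(pi, 6, 6), 6) == pi    # deletion undoes the insertion
```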
For a sequence of points, $J=\{(i_\ell,j_\ell)\}_{\ell=1}^k$, and a permutation $\sigma$ of size $n$, we call $J$ a *valid insertion sequence* for $\sigma$ if $(i_\ell,j_\ell) \in [n+\ell]^2$ for $\ell \in [k].$ A valid insertion sequence, $J$, gives a corresponding sequence of permutations, $(\sigma^0,\cdots, \sigma^k)$, defined by $\sigma^0 = \sigma$ and $\sigma^{\ell}={\textbf{insert}}(\sigma^{\ell-1},(i_\ell,j_\ell))$ for $1\leq \ell \leq k$. We denote the final permutation obtained in the sequence by ${\textbf{insert}}(\sigma,J)=\sigma^k.$
Similarly, for a permutation $\rho\in {\mathcal{S}}_{n+k}$ and a sequence, $J'=\{(i'_\ell,j'_\ell)\}_{\ell=1}^k$ with $(i'_\ell,j'_\ell)\in [n+k+1-\ell]^2$, we say $J'$ is a *valid deletion sequence* if there is a sequence of permutations $(\rho^0, \cdots, \rho^k)$ with $\rho^0 = \rho$ and, for $1\leq \ell \leq k$, $\rho^{\ell}={\textbf{delete}}(\rho^{\ell-1},(i'_\ell,j'_\ell)).$ If for some $\ell$, $(i'_\ell,j'_\ell)$ is not a valid deletion in $\rho^{\ell-1}$, we say $J'$ is an *invalid deletion sequence* for $\rho$. If $J'$ is a valid sequence of deletions we let ${\textbf{delete}}(\rho,J') = \rho^k$. If $J$ is an insertion sequence for $\sigma$ and $\pi = {\textbf{insert}}(\sigma,J)$, then the reverse of $J$, denoted ${\textbf{reverse}}(J)$, is a valid deletion sequence for $\pi$, with ${\textbf{delete}}(\pi,{\textbf{reverse}}(J)) = \sigma.$
We say an insertion, $(i,j)$, is *internal* for $\sigma$ if the point $(i,j)$ is internal in ${\textbf{insert}}(\sigma,(i,j))$. A sequence of insertions, $J$, is internal for $\sigma$ if for each $1\leq \ell \leq |J|$ the point $(i_\ell,j_\ell) \in J$ is internal in $\sigma^\ell.$ If $J$ is internal for $\sigma$, the corresponding sequence $(\sigma^0,\cdots, \sigma^{|J|})$ has permutations whose external points are exactly the external points of $\sigma$. In particular, if $\sigma$ is a square permutation, then the external points of the permutations in the sequence are exactly the points of $\sigma$. Lastly, we say a deletion sequence is internal if every deletion in the sequence comes from an internal point of the corresponding permutation in the sequence.
For a permutation $\sigma$, let $I(\sigma)$ denote the set of possible internal insertions for $\sigma$ and let ${\mathcal{J}}(\sigma,k)$ denote the set of possible internal insertion sequences of length $k$ for $\sigma$.
\[graves\]
Let $\pi \in {\mathcal{S}}_{n+k}$ and let $M$ be a collection of $k$ marked points in $\pi$. There are precisely $k!$ deletion sequences starting from $\pi$ that remove only the $k$ marked points.
The order in which the points of $M$ are removed uniquely determines the deletion sequence. There are $k!$ possible orders.
\[graves\_int\] Let $\pi \in {{ASq}(n,k)}$ and let $\sigma = {\text{ext}}(\pi)$. There are precisely $k!$ internal insertion sequences, $J$, such that $\pi = {\textbf{insert}}(\sigma,J).$
It is important to highlight that the insertion sequences in Corollary \[graves\_int\] are internal. For example, the permutation $\sigma = 4132$ has a unique internal insertion (the point $(3,3)$) that gives the permutation $51342$, while the insertion of $(4,4)$ in $4132$ also gives $51342$ but is not internal (see Fig. \[fig:Double\_poss\_ins\]).
![Two different insertions (highlighted with a red circle) that give the same permutation. The first insertion is internal, the second one is not.\[fig:Double\_poss\_ins\]](Double_poss_ins "fig:")\
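The notion of internal insertion can be made concrete in a few lines of Python (helper names are ours): a point is internal when it is none of the four kinds of records, and $I(\sigma)$ collects the insertions that create an internal point. On $\sigma=4132$ this recovers the unique internal insertion $(3,3)$:

```python
def insert(sigma, i, j):
    # Insert the point (i, j) (1-indexed) into sigma in one-line notation.
    shifted = [v + 1 if v >= j else v for v in sigma]
    return shifted[: i - 1] + [j] + shifted[i - 1 :]

def is_internal(sigma, i):
    # A point is internal when it is none of the four kinds of records.
    v = sigma[i - 1]
    left, right = sigma[: i - 1], sigma[i:]
    lr_max = all(w < v for w in left)
    lr_min = all(w > v for w in left)
    rl_max = all(w < v for w in right)
    rl_min = all(w > v for w in right)
    return not (lr_max or lr_min or rl_max or rl_min)

def internal_insertions(sigma):
    # I(sigma): the pairs (i, j) whose insertion produces an internal point.
    n = len(sigma)
    return {(i, j) for i in range(1, n + 2) for j in range(1, n + 2)
            if is_internal(insert(sigma, i, j), i)}

print(internal_insertions([4, 1, 3, 2]))    # {(3, 3)}
```

Note that the non-internal insertion $(4,4)$ produces the same permutation $51342$, but the inserted point is then a right-to-left record.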
Projection for square permutations and Petrov conditions {#sect:perm_to_anchored_seq}
========================================================
We recall in this section some results from [@borga2019square] that are useful for the next sections.
Projections for square permutations
-----------------------------------
We recall the following key definition.
An *anchored pair of sequences* of size $n$ is a triplet $(X,Y,z_0)$, where $X\in\{U,D\}^n$, $Y\in\{L,R\}^n$ and $z_0 \in [n].$ We say that the pair $(X,Y)$ is anchored at $z_0$.
Given a square permutation $\sigma\in Sq(n),$ we associate to it an anchored pair of sequences $(X,Y,z_0)$ of size $n$ (*cf.* Fig. \[Square\_perm\_sampling\_example\]) where the labels of $(X,Y)$ are determined by the record types (the sequence $X$ records if a point is a maximum ($U$) or a minimum ($D$) and the sequence $Y$ records if a point is a left-to-right record ($L$) or a right-to-left record ($R$)) and the anchor $z_0$ is determined by the value $\sigma^{-1}(1)$. As a convention, if a point is both a maximum and a minimum (resp. a left-to-right and a right-to-left record) we assign a $D$ (resp. an $L$). For a precise and rigorous definition we refer to [@borga2019square Section 2].
We denote by $\phi$ the injective map that associates to every square permutation the corresponding anchored pair of sequences; therefore $$\phi:Sq(n)\to\{U,D\}^n\times\{L,R\}^n\times[n].$$
![A square permutation $\sigma$ with the associated anchored pair of sequences $\phi(\sigma)=(X,Y,z_0).$ The sequence $X$ is under the diagram (read from left to right) of the permutation and the sequence $Y$ on the left (read from bottom to top).[]{data-label="Square_perm_sampling_example"}](Square_perm_sampling_example)
We say that an anchored pair of sequences $(X,Y,z_0)$ of size $n$ is *good* if $X_1 = X_n = X_{z_0}=D$ and $Y_1 = Y_n = L$. Note that $\phi(Sq(n))$ is contained in the set of good anchored pairs of size $n$. Note also that the total number of possible good anchored pairs $(X,Y,z_0)$ of size $n$ is $$\label{karl}
2^{n-2}(2\cdot 2^{n-2}+(n-2)\cdot 2^{n-3})= 2(n+2)4^{n-3}.$$
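The count above can be checked by brute force for small $n$ (a Python sketch; helper names are ours):

```python
from itertools import product

def count_good_anchored_pairs(n):
    # Brute-force count of triplets (X, Y, z0) with X in {U,D}^n,
    # Y in {L,R}^n, z0 in [n], X_1 = X_n = X_{z0} = D and Y_1 = Y_n = L.
    count = 0
    for X in product("UD", repeat=n):
        for Y in product("LR", repeat=n):
            for z0 in range(n):  # z0 shifted to 0-indexing
                if X[0] == X[-1] == X[z0] == "D" and Y[0] == Y[-1] == "L":
                    count += 1
    return count

for n in (4, 5, 6):
    # The two printed values agree: 48, 224, 1024 for n = 4, 5, 6.
    print(n, count_good_anchored_pairs(n), 2 * (n + 2) * 4 ** (n - 3))
```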
This map $\phi$ is not surjective, but we can identify subsets of good anchored pairs of sequences (called *regular*) and of square permutations where the projection map is a bijection. In order to do that we need to introduce the *Petrov conditions*.
Petrov conditions
-----------------
Let $X\in\{U,D\}^n$ and $Y\in\{L,R\}^n$. Let ${ct_D}(i)$ denote the number of $D$s in $X$ up to (and including) position $i$. Similarly define ${ct_U}(i)$, ${ct_L}(i)$ and ${ct_R}(i)$ for the number of $U$s in $X$ and the number of $L$s or $R$s in $Y$, respectively. Let ${pos_D}(i)$ denote the position of the $i$-th $D$ in $X$ with ${pos_D}(i) = n$ if there are fewer than $i$ indices labeled with $D$ in $X$. Similarly define ${pos_U}(i)$, ${pos_L}(i)$ and ${pos_R}(i)$ for the location of the indices of the other labels.
\[defn:petrov\] We say that the labels $D$ in $X$ satisfy the Petrov conditions if the following are true:
1. $|{ct_D}(i) - {ct_D}(j) - \frac12(i-j) | < n^{.4}$, for all $|i-j| < n^{.6}$;
2. $|{ct_D}(i) - {ct_D}(j) - \frac12(i-j) | < \frac{1}{2}|i-j|^{.6}$, for all $|i-j|> n^{.3}$;
3. $|{pos_D}(i) - {pos_D}(j) - 2(i-j)| < n^{.4}$, for all $|i-j| < n^{.6}$ and $i,j \leq {ct_D}(n)$;
4. $|{pos_D}(i) - {pos_D}(j) - 2(i-j)| < 2|i-j|^{.6}$, for all $|i-j|> n^{.3}$ and $i,j \leq {ct_D}(n)$.
A similar definition holds for the labels $U$ in $X$ and the labels $L$ and $R$ in $Y$ for the functions ${ct_U},{ct_L},{ct_R},$ and ${pos_U}, {pos_L}, {pos_R}$. We say the Petrov conditions hold for the pair of sequences $X$ and $Y$ if the Petrov conditions hold for the four types of labels of $X$ and $Y$.
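The counting and position functions, together with the first Petrov condition, can be sketched in Python as follows (helper names are ours; the remaining conditions are checked analogously):

```python
def ct(X, label, i):
    # ct_label(i): number of occurrences of `label` among X_1, ..., X_i.
    return sum(1 for x in X[:i] if x == label)

def pos(X, label, i):
    # pos_label(i): position of the i-th occurrence of `label`,
    # or n if there are fewer than i occurrences.
    seen = 0
    for p, x in enumerate(X, start=1):
        if x == label:
            seen += 1
            if seen == i:
                return p
    return len(X)

def petrov_condition_1(X, label):
    # First Petrov condition: |ct(i) - ct(j) - (i-j)/2| < n^0.4
    # for all pairs with |i - j| < n^0.6.
    n = len(X)
    return all(
        abs(ct(X, label, i) - ct(X, label, j) - (i - j) / 2) < n ** 0.4
        for i in range(1, n + 1)
        for j in range(1, n + 1)
        if abs(i - j) < n ** 0.6
    )

print(petrov_condition_1("DU" * 50, "D"))   # balanced sequence: True
print(petrov_condition_1("D" * 100, "D"))   # all-D sequence: False
```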
Given a permutation $\sigma\in{Sq}(n)$, we say that $\sigma$ has *regular projections* if the Petrov conditions hold for the corresponding pair of sequences $X$ and $Y$. For each $z_0 \in [n]$, let $\mathcal{R}(z_0)$ denote the subset of ${Sq}(n)$ consisting of permutations anchored at $z_0$ and having regular projections.
Given a sequence $k=k_n=o(n)$ (resp. $k=k_n=o(\sqrt{n})$), we fix now some sequence $\delta_n=\delta_n(k)$ such that $$\label{eq:cond_dn}
\delta_n=o(n),\quad \delta_n\geq n^{.9},\quad\text{and}\quad k=o(\delta_n)\quad(\text{resp.\ } k=o(\sqrt{\delta_n})).$$ Note that this is always possible taking for example $\delta_n=\max\{\sqrt{nk},n^{.9}\}$ (resp. $\delta_n=\max\{\sqrt{n}k,n^{.9}\}$).
Let ${\mathcal{R}_{irr}}$ denote the set of permutations of ${Sq}(n)$ that either do not have regular projections or whose anchor $z_0$ belongs to $[n] \backslash (\delta_n,n-\delta_n)$.
For $z_0$ in $(\delta_n,n-\delta_n)$, a uniform square permutation anchored at $z_0$ is in ${\mathcal{R}_{irr}}$ with probability at most $Ce^{-n^c}$ for some positive constants $c$ and $C$ independent of $z_0$ (this follows from the classical bounds for Petrov conditions, see for instance the proof of [@borga2019square Lemma 3.4]). Thus, for $n$ large enough, we have the following bound $$\label{eq:irr_bound}
|{\mathcal{R}_{irr}}|\leq 2\delta_n4^{n-2}+2n4^{n-2}Ce^{-n^c}\leq 2\delta_n4^n.$$
We also say that a good anchored pair of sequences $(X,Y,z_0)$ is *regular*[^2] if the Petrov conditions hold for $X$ and $Y$ and $z_0\in(\delta_n,n-\delta_n)$. In [@borga2019square Section 3.2] we constructed a simple algorithm to produce a square permutation from regular anchored pairs of sequences and we showed that the map $\phi^{-1}$ restricted to this set is bijective. We also showed (see [@borga2019square Lemma 3.8]) that asymptotically almost all square permutations can be constructed from regular anchored pairs of sequences; thus an anchored pair sampled uniformly from the set of regular anchored pairs of sequences produces, asymptotically, a uniform square permutation.
Asymptotic enumeration of almost square permutations {#sect:asym_enum}
====================================================
For $\sigma\in {Sq}(n)$, let ${ASq}(\sigma,k)$ denote the set of permutations in ${{ASq}(n,k)}$ that are of the form ${\textbf{insert}}(\sigma,J)$ for some valid insertion sequence $J$ that is internal with respect to $\sigma$. For a collection of permutations $\mathcal{S} \subseteq {Sq}(n)$, let ${ASq}(\mathcal{S},k) = \bigcup_{\sigma\in \mathcal{S}} {ASq}(\sigma,k),$ i.e. the set of permutations in ${{ASq}(n,k)}$ whose exterior lies in $\mathcal{S}$. By Corollary \[graves\_int\], for each $\pi \in {ASq}(\sigma,k) $, there are exactly $k!$ internal insertion sequences $J \in {\mathcal{J}}(\sigma,k)$ such that ${\textbf{insert}}(\sigma,J) = \pi$. These are the only ways to reach $\pi$ by an internal insertion sequence from a square permutation. Thus
$$\label{bumping}
|{{ASq}(n,k)}| = \sum_{\sigma \in {Sq}(n)}|{ASq}(\sigma,k)| = \sum_{\sigma\in {Sq}(n)}\frac{1}{k!}|{\mathcal{J}}(\sigma,k)|.$$
We proceed by finding upper and lower bounds for $|{\mathcal{J}}(\sigma,k)|$ for $\sigma\in {Sq}(n)$. The following lemma gives bounds on the size of $I(\sigma)$ for $\sigma\in{\mathcal{R}}(z_0)$ for some $z_0 \in (\delta_n,n-\delta_n)$.
\[regbound\] There exists $c>0$ such that for every $z_0\in (\delta_n,n-\delta_n)$, and every $\sigma \in {\mathcal{R}}(z_0)$, $$\label{regeq}
2(z_0 - cn^{.6})(n-z_0-cn^{.6})\leq |I(\sigma)| \leq 2(z_0 + cn^{.6})(n-z_0 +cn^{.6}).$$ As a consequence, there exists $\varepsilon>0$ such that $|I(\sigma)|\geq \varepsilon n\delta_n$, for every $z_0\in (\delta_n,n-\delta_n)$ and every $\sigma \in {\mathcal{R}}(z_0)$.
By [@borga2019square Lemma 3.6], square permutations with regular projections anchored at $z_0\in (\delta_n,n-\delta_n)$ have points which are contained between the lines
- $x+y = z_0 \pm cn^{.6}$;
- $x+y = 2n - z_0 \pm cn^{.6}$;
- $x-y = z_0 \pm cn^{.6}$;
- $y-x = z_0 \pm cn^{.6}$.
The lower and upper bounds in (\[regeq\]) are given by the areas of the smallest and largest rectangles determined by these lines, respectively.
The existence of $\varepsilon>0$ such that $|I(\sigma)|\geq \varepsilon n\delta_n$, for every $z_0\in (\delta_n,n-\delta_n)$ and every $\sigma \in {\mathcal{R}}(z_0)$, is a consequence of (\[regeq\]).
Note that, for $\sigma \in {\mathcal{R}_{irr}}$, we have the bound $$\label{irreq}
0\leq |I(\sigma)| \leq (n+1)^2.$$
As the following lemma shows, performing an insertion can only increase the number of possible insertions available at the next step of a sequence.
\[insert\_growth\] Let $\sigma\in\mathcal{S}_n$. Let $\sigma'$ be obtained by the insertion in $\sigma$ of some point in $I(\sigma)$. Then $$|I(\sigma)| \leq |I(\sigma')| \leq |I(\sigma)| + 2n.$$
For the lower bound let $(i,j)$ be a point in $I(\sigma)$ that is inserted into $\sigma$ and let $\sigma' = {\textbf{insert}}(\sigma,(i,j)).$ If $(i',j')\in I(\sigma)$ satisfies $i'<i$ and $j'<j$, then $(i',j')\in I(\sigma')$. On the other hand, if $(i',j')\in I(\sigma)$ with $i'\geq i$ and/or $j' \geq j,$ the points $(i'+1,j')$, $(i',j'+1)$ or $(i'+1,j'+1)$ are in $I(\sigma')$ depending on whether $i'>i$, $j'>j$ or both. Thus for every point in $I(\sigma)$ there is a unique corresponding point in $I(\sigma')$. All other points of $I(\sigma')$ have the form $(i,j')$ or $(i',j)$, giving the upper bound since there are at most $2n$ such points.
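Lemma \[insert\_growth\] can also be verified by brute force on small permutations. The following self-contained Python sketch (helper names are ours) checks the two bounds over all permutations of sizes $4$ and $5$ and all of their internal insertions:

```python
from itertools import permutations

def insert(sigma, i, j):
    # Insert the point (i, j) (1-indexed) into sigma in one-line notation.
    shifted = [v + 1 if v >= j else v for v in sigma]
    return shifted[: i - 1] + [j] + shifted[i - 1 :]

def is_internal(sigma, i):
    # Internal = not a record of any of the four kinds.
    v = sigma[i - 1]
    left, right = sigma[: i - 1], sigma[i:]
    return not (all(w < v for w in left) or all(w > v for w in left)
                or all(w < v for w in right) or all(w > v for w in right))

def I(sigma):
    # The set I(sigma) of internal insertions.
    n = len(sigma)
    return {(i, j) for i in range(1, n + 2) for j in range(1, n + 2)
            if is_internal(insert(sigma, i, j), i)}

# Check |I(sigma)| <= |I(sigma')| <= |I(sigma)| + 2n for every internal insertion.
for n in (4, 5):
    for perm in permutations(range(1, n + 1)):
        sigma = list(perm)
        for (i, j) in I(sigma):
            sigma_prime = insert(sigma, i, j)
            assert len(I(sigma)) <= len(I(sigma_prime)) <= len(I(sigma)) + 2 * n
print("bounds verified over S_4 and S_5")
```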
Thus subsequent insertions give the following bounds on the size of ${\mathcal{J}}(\sigma,k).$
\[insert\_bound\] Let $\sigma\in\mathcal{S}_n$ and $k\in{{\mathbb Z}}_{>0}$. It holds that $$\label{insertion_sequence_bound}
|I(\sigma)|^k \leq |{\mathcal{J}}(\sigma,k)| \leq ( |I(\sigma)| + 2nk + k^2)^k.$$
Let $i\leq k$ and $\sigma^i$ be a permutation obtained by the insertion of $i$ internal points in $\sigma$. From Lemma \[insert\_growth\] we have $$|I(\sigma)| \leq |I(\sigma^i)| \leq |I(\sigma)| + 2n+\dots+2(n+i-1)=|I(\sigma)| + i(2n+i-1).$$ Therefore $$|I(\sigma)|^k \leq |{\mathcal{J}}(\sigma,k)| \leq \prod_{i=1}^{k} ( |I(\sigma)| + i(2n+i-1) ).$$ Noting that $i(2n+i-1)\leq 2nk + k^2$, for all $i\leq k$, we obtain $\prod_{i=1}^{k} ( |I(\sigma)| + i(2n+i-1) )\leq ( |I(\sigma)| + 2nk + k^2)^k$ and we conclude the proof.
Fix now $\varepsilon >0$ and assume that $|I(\sigma)| \geq \varepsilon n\delta_n$. Then for $k\leq n,$ $$\label{eq:boundbound}
(|I(\sigma)| + 2nk +k^2)^k \leq |I(\sigma)|^{k}\left( 1 + \frac{2nk+k^2}{\varepsilon n\delta_n}\right)^k \leq |I(\sigma)|^k\cdot\exp\left(\tfrac{3k^2}{\varepsilon \delta_n}\right),$$ where in the last inequality we used that $(1+x)\leq e^x$ and $k\leq n$. Furthermore for $k = o(\sqrt{\delta_n})$, $$\label{eq:bound}
(|I(\sigma)| + 2nk +k^2)^k \leq |I(\sigma)|^k\cdot\exp(3/\varepsilon\cdot o(1)).$$
\[jarjar\] Let $\sigma\in\mathcal{S}_n,$ $\varepsilon >0$, $k = o(\sqrt n)$. If $|I(\sigma)| \geq \varepsilon n\delta_n$ then $$|{ASq}(\sigma,k)| \sim \frac{|I(\sigma)|^k}{k!}\;.$$
Note that from Corollary \[graves\_int\], $|{ASq}(\sigma,k)| = \frac{1}{k!}|{\mathcal{J}}(\sigma,k)|.$ We conclude combining the bounds in and (recalling that from we have that $k = o(\sqrt{\delta_n})$ since $k = o(\sqrt n)$).
We can now prove the main result of this section, that is, when $k=o(\sqrt{n})$ then $$|{{ASq}(n,k)}| \sim \frac{k!2^{k+1}n^{2k+1}4^{n-3}}{(2k+1)!}.$$
For $\sigma \in {\mathcal{R}_{irr}}$, we have from the rough bounds of $$0\leq |{\mathcal{J}}(\sigma,k)| \leq (n+k)^{2k}.$$ Therefore the contribution from ${\mathcal{R}_{irr}}$ to ${{ASq}(n,k)}$ is bounded above by
$$\label{eq:irrbound}
\sum_{\sigma \in {\mathcal{R}_{irr}}}|{ASq}(\sigma,k)|=\sum_{\sigma\in {\mathcal{R}_{irr}}} \frac{1}{k!}|{\mathcal{J}}(\sigma,k)| \leq \frac{1}{k!}2\delta_n4^n(n+k)^{2k} \leq \frac{1}{k!}2\delta_n4^n2^{2k}n^{2k},$$
where in the first inequality we also used the bound for $|{\mathcal{R}_{irr}}|$ obtained in and in the second inequality the fact that $k=o(\sqrt n)$.
We now focus on the contribution of $$\sum_{\sigma \in {Sq}(n)\setminus{\mathcal{R}_{irr}}}|{ASq}(\sigma,k)|=\sum_{z_0 \in (\delta_n,n-\delta_n)} \sum_{\sigma\in\mathcal{R}(z_0)}\frac{1}{k!}|{\mathcal{J}}(\sigma,k)|.$$ Using the bounds in Lemma \[insert\_bound\], we have $$\begin{gathered}
\label{eq:logbound_1}
\sum_{z_0 \in (\delta_n,n-\delta_n)} \sum_{\sigma\in\mathcal{R}(z_0)}\frac{1}{k!}|I(\sigma)|^k \\
\leq\sum_{\sigma \in {Sq}(n)\setminus{\mathcal{R}_{irr}}}|{ASq}(\sigma,k)|\leq\\
\sum_{z_0 \in (\delta_n,n-\delta_n)} \sum_{\sigma\in\mathcal{R}(z_0)}\frac{1}{k!}\left(|I(\sigma)|+2nk+k^2\right)^k .
\end{gathered}$$ From Lemma \[regbound\] we know that there exists $\varepsilon>0$ such that $|I(\sigma)|\geq \varepsilon n\delta_n$, for every $z_0\in (\delta_n,n-\delta_n)$ and every $\sigma \in {\mathcal{R}}(z_0)$. Therefore, using , the right-hand side of the above inequality is bounded by $$\label{eq:logbound_2}
\exp(3/\varepsilon\cdot o(1))\cdot\sum_{z_0 \in (\delta_n,n-\delta_n)} \sum_{\sigma\in\mathcal{R}(z_0)}\frac{1}{k!}|I(\sigma)|^k.$$
For any fixed anchor $z_0$ in $(\delta_n,n-\delta_n)$, the total number of permutations in $\mathcal{R}(z_0)$ is asymptotically $2\cdot4^{n-3} \cdot (1+o(1))$, where the error term is uniform in $z_0$ (again this follows from the classical bounds for Petrov conditions, see for instance the proof of [@borga2019square Lemma 3.4]). Using this result and the estimate in Lemma \[regbound\], we obtain that $$\label{reg_contribution}
\sum_{z_0 \in (\delta_n,n-\delta_n)} \sum_{\sigma\in\mathcal{R}(z_0)}\frac{1}{k!}|I(\sigma)|^k\sim \frac{1}{k!}2^{k+1}n^{2k+1}4^{n-3} \int_0^1 (t(1-t))^kdt,$$ where we used the standard Riemann integral approximation with the substitution $z_0=\lfloor nt \rfloor$ for $t\in(0,1)$. Combining , and , we conclude that $$\label{eq:keyapprox}
\sum_{\sigma \in {Sq}(n)\setminus{\mathcal{R}_{irr}}}|{ASq}(\sigma,k)|\sim \frac{1}{k!}2^{k+1}n^{2k+1}4^{n-3} \int_0^1 (t(1-t))^kdt .$$
The integral in evaluates to $(k!)^2/(2k+1)!$. Thus combining this with the (negligible) contribution from ${\mathcal{R}_{irr}}$ (since $\delta_n=o(n)$) we have $$|{{ASq}(n,k)}|\sim\frac{k!}{(2k+1)!}2^{k+1}n^{2k+1}4^{n-3}.\qedhere$$
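Both computational steps above, the exact evaluation of the Beta integral and the Riemann-sum approximation, are easy to verify independently. The following Python sketch (all helper names are ours) checks the integral identity with exact rational arithmetic, and the Riemann-sum step numerically:

```python
from fractions import Fraction
from math import comb, factorial

def beta_integral(k):
    # Exact value of int_0^1 (t(1-t))^k dt, expanding (1-t)^k by the binomial theorem
    return sum(Fraction((-1) ** j * comb(k, j), k + j + 1) for j in range(k + 1))

# The integral evaluates to (k!)^2 / (2k+1)!
for k in range(8):
    assert beta_integral(k) == Fraction(factorial(k) ** 2, factorial(2 * k + 1))

# Riemann-sum step: sum_{z_0} (2 z_0 (n - z_0))^k is close to 2^k n^{2k+1} (k!)^2/(2k+1)!
n, k = 4000, 3
lhs = sum((2 * z * (n - z)) ** k for z in range(1, n))
rhs = 2 ** k * n ** (2 * k + 1) * Fraction(factorial(k) ** 2, factorial(2 * k + 1))
assert abs(lhs / rhs - 1) < 1e-2
```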
A consequence of the proof above is the following
\[cor:asympt\] For $k=o(\sqrt n)$ and $s\in(0,1)$, $$\sum_{z_0 \in (\delta_n,ns)}|{ASq}(\mathcal{R}(z_0),k)|\sim \frac{1}{k!}2^{k+1}n^{2k+1}4^{n-3} \int_0^s (t(1-t))^kdt .$$
We finally investigate the case when $k=o(n),$ proving that $$\log\left(|{ASq}(n,k)|\right)=\log\left(\frac{k!}{(2k+1)!}2^{k+1}n^{2k+1}4^{n-3}\right)+o(k).$$
Similarly as before, from , , , and $ \int_0^1 (t(1-t))^kdt=\frac{(k!)^2}{(2k+1)!}$ we have that $$\begin{aligned}
|{ASq}(n,k)|&=\sum_{\sigma \in {Sq}(n)\setminus{\mathcal{R}_{irr}}}|{ASq}(\sigma,k)|+\sum_{\sigma \in {\mathcal{R}_{irr}}}|{ASq}(\sigma,k)|\\
&\leq\exp\left(\frac{3k^2}{\varepsilon\delta_n}\right)\cdot\frac{k!}{(2k+1)!}2^{k+1}n^{2k+1}4^{n-3} (1+o(1)).
\end{aligned}$$ Applying the logarithm we obtain $$\log\left(|{ASq}(n,k)|\right)\leq
\frac{3k^2}{\varepsilon\delta_n}+\log\left(\frac{k!}{(2k+1)!}2^{k+1}n^{2k+1}4^{n-3}\right) +o\left(1\right),$$ and so $$\label{eq:morebpunds}
\frac{\log\left(|{ASq}(n,k)|\right)-\log\left(\frac{k!}{(2k+1)!}2^{k+1}n^{2k+1}4^{n-3}\right)}{k}\leq \frac{\frac{3k^2}{\varepsilon\delta_n}+o(1)}{k}.$$ On the other hand, using again , and and $ \int_0^1 (t(1-t))^kdt=\frac{(k!)^2}{(2k+1)!}$, we have $$|{ASq}(n,k)|\geq\sum_{\sigma \in {Sq}(n)\setminus{\mathcal{R}_{irr}}}|{ASq}(\sigma,k)|\geq
\frac{k!}{(2k+1)!}2^{k+1}n^{2k+1}4^{n-3} (1+o(1))$$ and so $$\frac{\log\left(|{ASq}(n,k)|\right)-\log\left(\frac{k!}{(2k+1)!}2^{k+1}n^{2k+1}4^{n-3}\right)}{k}\geq \frac{o(1)}{k}.$$ Putting together the last bound with the bound in and recalling that when $k=o(n)$ then $\delta_n$ is a sequence such that $k=o(\delta_n)$ we obtain the desired result.
The permuton limit of almost square permutations {#sect:permutons}
================================================
We recall the minimal notions on permuton limits that we need for this section. For a complete introduction to permutons see [@bassino2017universal Section 2].
A *permuton* $\mu$ is a Borel probability measure on the unit square $[0,1]^2$ with uniform marginals, that is $$\mu( [0,1] \times [a,b] ) = \mu( [a,b] \times [0,1] ) = b-a,$$ for all $0 \le a \le b\le 1$. Any permutation $\sigma$ of size $n \ge 1$ may be interpreted as a permuton $\mu_\sigma$ given by the sum of Lebesgue area measures $$\label{eq:perdef}
\mu_\sigma= n \sum_{i=1}^n \operatorname{Leb}\big([(i-1)/n, i/n]\times[(\sigma(i)-1)/n,\sigma(i)/n]\big).$$
Let $\mathcal M$ be the set of permutons. We recall that a sequence of (deterministic) permutons $(\mu_n)_n$ converges *weakly* to $\mu$ (simply denoted $\mu_n \to \mu$) if $$\int_{[0,1]^2} f d\mu_n \to \int_{[0,1]^2} f d\mu,$$ for every bounded and continuous function $f: [0,1]^2 \to \mathbb{R}$. With this topology, $\mathcal M$ is compact and metrizable by the metric $d_{\square}$ defined, for every pair of permutons $(\mu,\mu'),$ by $$d_{\square}(\mu,\mu')=\sup_{R\in\mathcal{R}}|\mu(R)-\mu'(R)|,$$ where $\mathcal R$ denotes the set of rectangles contained in $[0,1]^2.$
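As an illustration of these definitions, the following Python sketch (function name ours) computes $\mu_\sigma(R)$ for an axis-parallel rectangle $R$ directly from the displayed definition of $\mu_\sigma$, and verifies the uniform-marginals property on examples:

```python
def mu_sigma(sigma, x1, x2, y1, y2):
    """Mass that the permuton of a permutation sigma (given as a 1-indexed list,
    so sigma[i-1] = sigma(i)) assigns to the rectangle [x1, x2] x [y1, y2]:
    n times the Lebesgue measure of the cells
    [(i-1)/n, i/n] x [(sigma(i)-1)/n, sigma(i)/n] intersected with the rectangle."""
    n = len(sigma)
    mass = 0.0
    for i, si in enumerate(sigma, start=1):
        dx = min(x2, i / n) - max(x1, (i - 1) / n)
        dy = min(y2, si / n) - max(y1, (si - 1) / n)
        if dx > 0 and dy > 0:
            mass += dx * dy
    return n * mass

# Uniform marginals: horizontal and vertical strips get exactly their width.
sigma = [3, 1, 4, 2]
for a, b in [(0.0, 1.0), (0.2, 0.7), (0.5, 0.9)]:
    assert abs(mu_sigma(sigma, a, b, 0.0, 1.0) - (b - a)) < 1e-9
    assert abs(mu_sigma(sigma, 0.0, 1.0, a, b) - (b - a)) < 1e-9
```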
The convergence for random permutations is defined as follows.
We say that a random permutation $\bm{\sigma}_n$ converges in distribution to a random permuton $\bm{\mu}$ as $n \to \infty$ if the random permuton $\mu_{\bm{\sigma}_n}$ converges in distribution to $\bm{\mu}$ with respect to the topology defined above.
Permuton convergence for square permutations with a fixed number of internal points {#sect:muz}
-----------------------------------------------------------------------------------
We prove in this section Theorem \[fixedk\_thm\]. We recall here the rigorous construction of the permuton $\mu^z$ mentioned in the introduction.
Let $z$ be a point in $[0,1]$. Let $L_1$ and $L_4$ denote the line segments with slope $-1$ connecting $(0,z)$ to $(z,0)$ and $(1-z,1)$ to $(1,1-z)$, respectively. Similarly let $L_2$ and $L_3$ denote the line segments with slope $1$ connecting $(0,z)$ to $(1-z,1)$ and $(z,0)$ to $(1,1-z)$, respectively. The union of $L_1$, $L_2$, $L_3$ and $L_4$ forms a rectangle in $[0,1]^2.$ For each of the line segments $L_i$ ($i=1,2,3$, or $4$) we will define a measure $\mu^z_i$ as a rescaled Lebesgue measure. Let $\nu$ be the Lebesgue measure on $[0,1]$. Let $S$ be a Borel measurable set on $[0,1]^2$. For each $i$, let $S_i = S\cap L_i$. Finally let $\pi_x(S_i)$ be the projection of $S_i$ onto the $x$-axis and $\pi_y(S_i)$ the projection onto the $y$-axis. As each line has slope $1$ or $-1$, the measures of the projections satisfy $\nu(\pi_x(S_i)) = \nu(\pi_y(S_i)).$ For each $i=1,2,3,4$, define $\mu^z_i(S) := \frac{1}{2} \nu( \pi_x( S_i ) ) = \frac12 \nu( \pi_y(S_i)).$ Finally we define the measure $\mu^z = \mu^z_1+\mu^z_2+\mu^z_3+\mu^z_4.$ The measure $\mu^z$ is a permuton (see [@borga2019square Lemma 4.2]).
Before proving our main result we need two technical lemmas.
\[lem:nochanges2\] Let $\sigma$ be a permutation of size $n$ and $\sigma'$ be a permutation obtained from $\sigma$ by adding a point (not necessarily internal) to the diagram of $\sigma$. Then $$d_{\square}(\mu_{\sigma},\mu_{\sigma'})\leq\frac 6 n.$$
Fix a rectangle $R\subset [0,1]^2$. Recall that by definition $\mu_{\sigma}$ is the permuton induced by the sum of area measures on points of $\sigma$ scaled to fit within $[0,1]^2$ (see ). Suppose that there are $\ell$ points of $\sigma$ contained in $R$. Therefore, keeping track of the possible area measures intersecting the boundaries of $R$, we have that $|\mu_{\sigma}(R)-\frac \ell n|\leq \frac{2}{n}$. Now, noting that the addition of one point to the diagram of $\sigma$ can change the number of points inside $R$ by at most $2$, we obtain that $|\mu_{\sigma'}(R)-\frac \ell n|\leq \frac{4}{n}.$ Therefore $|\mu_{\sigma}(R)-\mu_{\sigma'}(R)|\leq \frac{6}{n}$. Since the latter bound does not depend on the choice of $R$, we can conclude the proof.
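A brute-force numerical check of this lemma is straightforward: evaluate $\mu_\sigma(R)$ on a finite grid of rectangles, which lower-bounds $d_\square$. The Python sketch below (helper names ours; `add_point` is one natural way to add a point to the diagram) tests the looser threshold $12/n$ rather than the exact constant $6$, since we only want to witness the $O(1/n)$ decay:

```python
import random

def mu_sigma(sigma, x1, x2, y1, y2):
    # Permuton mass of [x1, x2] x [y1, y2] for a 1-indexed permutation list
    n = len(sigma)
    mass = 0.0
    for i, si in enumerate(sigma, start=1):
        dx = min(x2, i / n) - max(x1, (i - 1) / n)
        dy = min(y2, si / n) - max(y1, (si - 1) / n)
        if dx > 0 and dy > 0:
            mass += dx * dy
    return n * mass

def add_point(sigma, i, j):
    """Add the point (i, j) to the diagram of sigma (1-indexed): values >= j are
    shifted up by one and the value j is inserted at position i."""
    out = [s + 1 if s >= j else s for s in sigma]
    out.insert(i - 1, j)
    return out

def grid_dist(sa, sb, m=10):
    """Max of |mu_a(R) - mu_b(R)| over rectangles with corners on an m x m grid;
    this only lower-bounds d_box, which suffices for a consistency check."""
    pts = [t / m for t in range(m + 1)]
    return max(abs(mu_sigma(sa, x1, x2, y1, y2) - mu_sigma(sb, x1, x2, y1, y2))
               for x1 in pts for x2 in pts if x2 > x1
               for y1 in pts for y2 in pts if y2 > y1)

random.seed(1)
n = 20
sigma = list(range(1, n + 1))
random.shuffle(sigma)
tau = add_point(sigma, random.randint(1, n + 1), random.randint(1, n + 1))
assert grid_dist(sigma, tau) <= 12 / n  # observed distance is O(1/n)
```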
In [@borga2019square Lemma 4.3] we showed that for $\sigma_n\in Sq(n)\setminus{\mathcal{R}_{irr}}$ the permutons $\mu_{\sigma_n}$ and $\mu^{z_n}$, with $z_n = \sigma_n^{-1}(1)/n$, satisfy $d_{\square}(\mu_{\sigma_n},\mu^{z_n})\to 0$ as $n\to\infty$, uniformly over all choices of $\sigma_n$. We prove here that the same result holds for permutations in $ASq(Sq(n)\setminus{\mathcal{R}_{irr}},k)$ whenever $k=o(n)$.
\[lem:nochanges\] Let $k=o(n)$. The following limit holds $$\sup_{\sigma_n\in ASq(Sq(n)\setminus{\mathcal{R}_{irr}},k)}d_{\square}(\mu_{\sigma_n},\mu^{z_n})\to 0.$$
We have the following bound for every $\sigma_n\in ASq(Sq(n)\setminus{\mathcal{R}_{irr}},k)$ $$d_{\square}(\mu_{\sigma_n},\mu^{z_n})\leq d_{\square}(\mu_{\sigma_n},\mu_{{\text{ext}}(\sigma_n)})+d_{\square}(\mu_{{\text{ext}}(\sigma_n)},\mu^{z_n})$$ that translates into $$\sup_{\sigma_n\in ASq(Sq(n)\setminus{\mathcal{R}_{irr}},k)}d_{\square}(\mu_{\sigma_n},\mu^{z_n})\leq \sup_{\sigma_n\in ASq(Sq(n)\setminus{\mathcal{R}_{irr}},k)}d_{\square}(\mu_{\sigma_n},\mu_{{\text{ext}}(\sigma_n)})+\sup_{\sigma_n\in Sq(n)\setminus{\mathcal{R}_{irr}}}d_{\square}(\mu_{\sigma_n},\mu^{z_n}).$$ The second term in the right-hand side of the above equation tends to zero thanks to the aforementioned [@borga2019square Lemma 4.3] and the first term tends to zero because the addition of $k=o(n)$ internal points cannot modify the permuton limit of a sequence of permutations, as shown in Lemma \[lem:nochanges2\].
We can now prove the main result of this section, that is, if $k>0$ is fixed and $\bm\sigma_n$ is uniform in ${{ASq}(n,k)}$, then $\mu_{\bm{\sigma}_n} \stackrel{d}{\longrightarrow} \mu^{{\bm{z}}^{(k)}}$ as $n\to \infty$.
With the asymptotic formula for the cardinality of ${{ASq}(n,k)}$ (obtained in Theorem \[approx\_size\]) and Corollary \[cor:asympt\] we can determine the distribution for the value of $\bm \sigma^{-1}_n(1)$, for a uniform permutation $\bm{\sigma}_n$ in ${{ASq}(n,k)}$ when $k$ is fixed. Specifically, $$\label{coral}
{\mathbb{P}}(\bm{\sigma}_n^{-1}(1) \leq ns ) \sim \frac{ \frac{1}{k!}2^{k+1}n^{2k+1}4^{n-3}\int_0^s (t(1-t))^kdt}{\frac{k!}{(2k+1)!}2^{k+1}n^{2k+1}4^{n-3}} = (2k+1){2k \choose k}\int_{0}^s(t(1-t))^kdt,$$ where we used again the fact that the contribution to the numerator of permutations in ${\mathcal{R}_{irr}}$ is negligible w.r.t. the cardinality of ${{ASq}(n,k)}$ (see ). Therefore ${\bm{z}}_n=\frac{\bm{\sigma}_n^{-1}(1)}{n}\stackrel{d}{\longrightarrow}{\bm{z}}^{(k)}.$
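Note that the limiting density is proportional to $(t(1-t))^k$, i.e. ${\bm{z}}^{(k)}$ follows a Beta$(k+1,k+1)$ distribution. That the normalizing constant $(2k+1)\binom{2k}{k}$ indeed makes the right-hand side above a genuine CDF can be checked exactly, for instance with the following Python sketch (helper name ours):

```python
from fractions import Fraction
from math import comb

def limit_cdf(k, s):
    """Limiting CDF (2k+1) C(2k, k) int_0^s (t(1-t))^k dt, computed exactly for
    rational s by expanding (1-t)^k with the binomial theorem and integrating."""
    integral = sum(Fraction((-1) ** j * comb(k, j), k + j + 1) * s ** (k + j + 1)
                   for j in range(k + 1))
    return (2 * k + 1) * comb(2 * k, k) * integral

for k in range(6):
    assert limit_cdf(k, Fraction(1)) == 1                  # total mass one
    assert limit_cdf(k, Fraction(1, 2)) == Fraction(1, 2)  # symmetry about 1/2
```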
The map $z\to\mu^z$ is continuous as a function from $(0,1)$ to $\mathcal{M}$, and thus $\mu^{{\bm{z}}_n}$ converges in distribution to $\mu^{{\bm{z}}^{(k)}}.$ By Lemma \[lem:nochanges\], and again the fact that $|{\mathcal{R}_{irr}}|$ is negligible w.r.t. $|{{ASq}(n,k)}|$ , we also have that $d_{\square}(\mu_{\bm{\sigma}_n},\mu^{{\bm{z}}_n})$ converges almost surely to zero. Therefore, combining these results, we can conclude that $\mu_{\bm{\sigma}_n}$ converges in distribution to $\mu^{{\bm{z}}^{(k)}}.$
Permuton convergence for square permutations with a growing number of internal points
-------------------------------------------------------------------------------------
We prove in this section Theorem \[permuton\_limit\]. When the number $k$ of internal points tends to infinity, we have the following result.
\[lemm:conc\_result\] Let $k=o(n)$ and assume that $k\to\infty$. Then for $\bm{\sigma}_n$ uniform in ${{ASq}(n,k)}$ it holds $$\frac{\bm{\sigma}_n^{-1}(1)}{n}\stackrel{d}{\longrightarrow}1/2.$$
Note that for every $0<\lambda<\frac 1 2,$ $$\label{eq:startingpoint}
{\mathbb{P}}(\bm{\sigma}_n^{-1}(1) \leq n(1/2 - \lambda))\leq\frac{\sum_{z_0 \in (\delta_n,n(1/2 - \lambda))} \sum_{\sigma\in\mathcal{R}(z_0)}\frac{1}{k!}|{\mathcal{J}}(\sigma,k)|+\sum_{\sigma \in {\mathcal{R}_{irr}}}|{ASq}(\sigma,k)|}{|{{ASq}(n,k)}|}.$$ We focus on the term $\sum_{z_0 \in (\delta_n,n(1/2 - \lambda))} \sum_{\sigma\in\mathcal{R}(z_0)}\frac{1}{k!}|{\mathcal{J}}(\sigma,k)|$. From Lemma \[regbound\] we know that $|I(\sigma)|\geq \varepsilon n\delta_n$. Therefore, using Lemmas \[regbound\] and \[insert\_bound\], the estimates in and the fact that $|{R}(z_0)|\leq 2^{2n-5}$, we obtain $$\sum_{z_0 \in (\delta_n,n(1/2 - \lambda))} \sum_{\sigma\in\mathcal{R}(z_0)}\frac{1}{k!}|{\mathcal{J}}(\sigma,k)|\leq\frac{1}{k!}2^{2n-5}\exp\left(\tfrac{3k^2}{\varepsilon \delta_n}\right)\sum_{z_0 \in (\delta_n,n(1/2 - \lambda))}\left( 2(z_0 + cn^{.6})(n-z_0 +cn^{.6})\right)^k.$$ We also have the following asymptotic estimate $$\begin{gathered}
\frac{1}{k!}2^{2n-5}\exp\left(\tfrac{3k^2}{\varepsilon \delta_n}\right)\sum_{z_0 \in (\delta_n,n(1/2 - \lambda))}\left( 2(z_0 + cn^{.6})(n-z_0 +cn^{.6})\right)^k\\
\sim\frac{1}{k!}2^{2n-5}\exp\left(\tfrac{3k^2}{\varepsilon \delta_n}\right)2^kn^{2k+1}\int_{0}^{1/2-\lambda}(t(1-t))^kdt,
\end{gathered}$$ where we used again the standard Riemann integral approximation with the substitution $z_0=\lfloor nt \rfloor$ for $t\in(0,1/2-\lambda)$. Noting that for $t \leq 1/2 - \lambda$ we have $(t(1-t))^k \leq 4^{-k}(1-4\lambda^2)^k \leq 4^{-k}e^{-4\lambda^2k}$, from the two equations above we obtain that $$\label{eq:furtherbound}
\sum_{z_0 \in (\delta_n,n(1/2 - \lambda))} \sum_{\sigma\in\mathcal{R}(z_0)}\frac{1}{k!}|{\mathcal{J}}(\sigma,k)|\leq \frac{1}{k!}2^{2n-5}\exp\left(\tfrac{3k^2}{\varepsilon \delta_n}\right)2^kn^{2k+1}4^{-k}e^{-4\lambda^2k}(1+o(1)).$$ Using that $|{{ASq}(n,k)}| \sim \frac{k!2^{k+1}n^{2k+1}4^{n-3}}{(2k+1)!}$ and the bound in we can conclude from and that $${\mathbb{P}}(\bm{\sigma}_n^{-1}(1) \leq n(1/2 - \lambda))\leq\frac{(2k+1)e^{-4\lambda^2k}}{\sqrt{k \pi}}\exp\left(\tfrac{3k^2}{\varepsilon \delta_n}\right)(1+o(1)).$$ Therefore we can conclude that ${\mathbb{P}}(\bm{\sigma}_n^{-1}(1) \leq n(1/2 - \lambda))\to0$, for every $0<\lambda<\frac 1 2$.
Since $\bm{\sigma}_n^{-1}(1)\stackrel{d}{=}n+1-\bm{\sigma}_n^{-1}(1)$, the probability ${\mathbb{P}}(\bm{\sigma}_n^{-1}(1) \geq n(1/2 + \lambda))$ is equally small, and this concludes the proof.
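The two elementary estimates driving this proof, the pointwise bound $(t(1-t))^k\leq 4^{-k}e^{-4\lambda^2k}$ for $t\leq 1/2-\lambda$ and the decay in $k$ of the resulting tail bound, can be checked numerically; a Python sketch:

```python
import math

# t(1-t) is increasing on [0, 1/2], so its sup over t <= 1/2 - lam is at the
# endpoint, where t(1-t) = 1/4 - lam^2 <= (1/4) e^{-4 lam^2} since 1-x <= e^{-x}.
for k in (5, 20, 80):
    for lam in (0.05, 0.1, 0.25):
        t = 0.5 - lam
        cap = 0.25 ** k * math.exp(-4 * lam * lam * k)
        assert (t * (1 - t)) ** k <= cap * (1 + 1e-9)  # tiny slack for rounding

# The tail bound (2k+1) e^{-4 lam^2 k} / sqrt(pi k) vanishes as k -> infinity.
lam = 0.25
vals = [(2 * k + 1) * math.exp(-4 * lam ** 2 * k) / math.sqrt(math.pi * k)
        for k in (10, 100, 1000)]
assert vals[0] > vals[1] > vals[2] and vals[2] < 1e-100
```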
The proof is identical to the proof of Theorem \[fixedk\_thm\] above, using the concentration result for $\bm{\sigma}_n^{-1}(1)$ obtained in Lemma \[lemm:conc\_result\].
Insertions in 321-avoiding permutations {#sect:321av}
=======================================
Permutations in ${A\!v_n}(321)$ are in bijection with Dyck paths of size $2n$. A Dyck path of size $2n$ is a path with two types of steps, $(1,1)$ and $(1,-1)$, that starts at $(0,0),$ ends at $(2n,0)$, and remains non-negative in between. Among the many possible bijections between these two sets, we use the one from [@BJS], which we refer to as the Billey–Jockusch–Stanley (or BJS) bijection. For a Dyck path $\gamma_n$ of size $2n$, we let $\tau_n = \tau_{\gamma_n}$ denote the corresponding permutation in ${A\!v_n}(321)$ under the BJS bijection. In the other direction, for a permutation $\tau_n \in {A\!v_n}(321)$ we let $\gamma_n = \gamma_{\tau_n}$ denote the corresponding Dyck path under the inverse bijection. This bijection is used in [@hoffman2017pattern] to show that the points of a permutation avoiding a decreasing subsequence of length three converge, when properly scaled, to the Brownian excursion.
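For small $n$, the counting claim behind this bijection, namely that $321$-avoiding permutations of size $n$ and Dyck paths of size $2n$ are both counted by the Catalan number, can be confirmed by brute force; a Python sketch (helper names ours):

```python
from functools import lru_cache
from itertools import permutations
from math import comb

def avoids_321(p):
    """True if p contains no decreasing subsequence of length three."""
    n = len(p)
    return not any(p[a] > p[b] > p[c]
                   for a in range(n) for b in range(a + 1, n) for c in range(b + 1, n))

def dyck_count(n):
    """Count paths of 2n steps (1,1)/(1,-1) from height 0 back to 0, staying >= 0."""
    @lru_cache(maxsize=None)
    def rec(steps, h):
        if steps == 0:
            return 1 if h == 0 else 0
        if h > steps:  # cannot return to height 0 in time
            return 0
        ways = rec(steps - 1, h + 1)       # up-step
        if h > 0:
            ways += rec(steps - 1, h - 1)  # down-step, allowed only above 0
        return ways
    return rec(2 * n, 0)

for n in range(1, 8):
    catalan = comb(2 * n, n) // (n + 1)
    av321 = sum(avoids_321(p) for p in permutations(range(1, n + 1)))
    assert av321 == dyck_count(n) == catalan
```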
Specifically, extend the definition of the permutation $\tau_n$ so that $\tau_n(0)=0$ and for $t\in [0,1]$, let $$F_{\tau_n}(t) \coloneqq \frac{1}{\sqrt{2n}}\big |\tau_n(\lfloor nt \rfloor) - \lfloor nt \rfloor \big|.$$
\[chacha\] Let $\bm{\tau}_n$ be a uniformly random permutation in ${A\!v_n}(321)$. Then $$\left(F_{\bm{\tau}_n}(t)\right)_{t\in [0,1]} \stackrel{d}{\longrightarrow} \left(\bm{e}_t\right)_{t\in [0,1]},$$ where $\bm{e}_t$ is the Brownian excursion on $[0,1]$ and the convergence holds in the space of right-continuous functions $D([0,1],\mathbb{R})$.
The main step in the proof of Theorem \[chacha\] is showing that the function $F_{\bm{\tau}_n}(t)$ is often close to the corresponding scaled Dyck path $\gamma_{\bm{\tau}_n}$, which converges in distribution to the Brownian excursion [@Ka76]. The proof uses an alternative version of the Petrov conditions stated in terms of Dyck paths. We denote the Petrov conditions for Dyck paths with $PC'$ and the Petrov conditions for permutations used in this paper with $PC$ (see Definition \[defn:petrov\]). Translated to permutations, $PC'$ yields a slightly modified version of $PC$. We say that $\tau_n\in {A\!v_n}(321)$ satisfies $PC'$ if the corresponding Dyck path $\gamma_n$ under the BJS bijection satisfies $PC'$.
In what follows we say that $\tau_n\in {A\!v_n}(321)$ satisfies the Petrov conditions if it satisfies both $PC$ and $PC'$ (and we use the same convention for Dyck paths). The exact form of $PC'$ is not important for our results, though we point out that a uniform random permutation in ${A\!v_n}(321)$ has exponentially small probability of satisfying only one of the two sets of conditions $PC$ and $PC'$. This, together with Corollary 5.5 and Propositions 5.6 and 5.7 of [@hoffman2017pattern], implies that there exist positive constants $C$ and $\delta$ such that the probability that the Petrov conditions are not satisfied by a uniform random permutation $\bm \tau_n$ in ${A\!v_n}(321)$ is bounded above by $Ce^{-n^\delta}$.
\[flux\] Let $\gamma_n$ be a Dyck path of size $2n$ that satisfies the Petrov conditions, and let $\tau_n$ be the corresponding permutation in ${A\!v_n}(321)$. If $(j,\tau_n(j))$ is a left-to-right maximum, then $$|\tau_n(j) - j - \gamma_n(2j)| \leq 10n^{.4},$$ and if $(j,\tau_n(j))$ is a right-to-left minimum, then $$| \tau_n(j) - j + \gamma_n(2j)| \leq 10n^{.4}.$$ Therefore for all $j\leq n$, $$\gamma_n(2j)- 10n^{.4} \leq | \tau_n(j) - j| \leq \gamma_n(2j) + 10n^{.4}.$$
Let $M(\gamma_n) = \max_{1\leq j\leq n}\gamma_n(2j)$ be the maximum of $\gamma_n$ and $D(\tau_n) = \max_{1\leq j \leq n} | \tau_n(j) - j|$ be the maximum absolute displacement.
\[dyck\_max\] Let $\tau_n$ be a permutation in ${A\!v_n}(321)$ and let $\gamma_n = \gamma_{\tau_n}$ be the corresponding Dyck path of size $2n$. If $\tau_n$ (and thus $\gamma_n$) satisfies the Petrov conditions, then $D(\tau_n) \leq M(\gamma_n) + 10n^{.4}.$
Let $E^+(\tau_n)$ denote the set of left-to-right maxima of $\tau_n$ and $E^-(\tau_n)$ the complement of $E^+(\tau_n)$ (thus the points of $E^-(\tau_n)$ are all the right-to-left minima of $\tau_n$ that are not fixed points). For $1\leq i \leq n$ let $$i^+ = \max_{j\leq i} \{ j: (j,\tau_n(j)) \in E^+(\tau_n)\}.$$ Similarly let $$i^- =
\min_{j\geq i} \{ j: (j,\tau_n(j)) \in E^-(\tau_n)\},$$ with the exception that if $i$ is a fixed point, then $i^-=i$.
\[sandwich\] Let $\tau_n\in {A\!v_n}(321)$ satisfy the Petrov conditions. For $1\leq i \leq n$, $$\Big| |\tau_n(i^+) - i^+ | - |\tau_n(i) - i| \Big| < 25n^{.4},$$ and $$\Big| |\tau_n(i^-) - i^-| - |\tau_n(i) - i| \Big|< 25n^{.4}.$$
By [@hoffman2017pattern Lemma 2.5], any interval of length $n^{.3}$ must contain both a point in $E^+(\tau_n)$ and $E^-(\tau_n)$, and therefore $$\label{eq:swedish}
\max\{i-i^+, i^- -i\} \leq n^{.3}.$$ By the Petrov conditions for Dyck paths, if $|x-y| < 2n^{.6}$ then $|\gamma_n(x) - \gamma_n(y)|< n^{.4}.$ Therefore both $|\gamma_n(2i^+) - \gamma_n(2i)| < n^{.4}$ and $|\gamma_n(2i^-) - \gamma_n(2i)| < n^{.4}$. By Lemma \[flux\], $$\begin{aligned}
\Big|| \tau_n(i^+) - i^+| - |\tau_n(i) - i|\Big| & \leq |\gamma_n(2i^+) + 10n^{.4} - \gamma_n(2i) + 10n^{.4}|\\
& \leq |\gamma_n(2i^+) - \gamma_n(2i)| + 20n^{.4}\\
&< 25n^{.4},\end{aligned}$$ and similarly, $$\begin{aligned}
\Big|| \tau_n(i^-) - i^-| - |\tau_n(i) - i|\Big| & \leq |\gamma_n(2i^-) + 10n^{.4} - \gamma_n(2i) + 10n^{.4}|\\
& \leq |\gamma_n(2i^-) - \gamma_n(2i)| + 20n^{.4}\\
&< 25n^{.4}.\end{aligned}$$ This ends the proof.
\[historique\] Let $\tau_n$ be a permutation in ${A\!v_n}(321)$ that satisfies the Petrov conditions. Then $$\frac{1}{(2n)^{3/2}}|I(\tau_n)| = \int_0^1 F_{\tau_n}(t) dt + O(n^{-.1}).$$
Since $(i,j) \in I(\tau_n)$ if and only if $\tau_n(i^-)< j \leq \tau_n(i^+)$, we have $$|I(\tau_n)| = \sum_{i=1}^n \left(\tau_n(i^+) - \tau_n(i^-)\right).$$ For each $i$ we may use Lemma \[sandwich\] and to obtain the upper bound $$\begin{aligned}
\tau_n(i^+) - \tau_n(i^-) &= (\tau_n(i^+) -i^+) + (i^+ - i^-) + (i^- - \tau_n(i^-))\\
& \leq |\tau_n(i) - i| + 25n^{.4} + 2n^{.3}+ |\tau_n(i) - i| + 25n^{.4}\\
& \leq 2|\tau_n(i) - i| + 100n^{.4} \end{aligned}$$ as well as the lower bound $$\tau_n(i^+) - \tau_n(i^-) \geq 2|\tau_n(i) - i| - 100n^{.4}.$$
In terms of $F_{\tau_n}(\cdot)$, the above estimates can be rewritten as $|\tau_n(i^+) - \tau_n(i^-) - 2\sqrt{2n}F_{\tau_n}(i/n)| \leq 100n^{.4}$ and so $$|I(\tau_n)| = \sum_{i=1}^n \left(2\sqrt{2n}F_{\tau_n}(i/n) + O(n^{.4})\right).$$ For $t\in [\frac i n,\frac{i+1}{n})$, $F_{\tau_n}(t) = F_{\tau_n}(i/n)$ and therefore the above sum can be expressed exactly as an integral plus an error term that is at most $O(n^{1.4})$ giving $$|I(\tau_n)| = (2n)^{3/2}\int_0^1 F_{\tau_n}(t) dt + O(n^{1.4}).$$ Dividing by $(2n)^{3/2}$ finishes the proof.
Most permutations in ${A\!v_n}(321)$ satisfy the Petrov conditions and therefore Lemma \[historique\] applies to most permutations. This helps in determining the asymptotic behavior of ${ASq({A\!v_n}(321),k)}$.
\[city maps\] Fix $k>0$. Let $\tau_n \in {A\!v_n}(321)$ satisfy the Petrov conditions. Then $$\frac{1}{(2n)^{3k/2}}|{\mathcal{J}}(\tau_n,k)| = \left( \int_0^1 F_{\tau_n}(t) dt \right)^k + O(n^{-.1})$$ and thus $$\frac{k!}{(2n)^{3k/2}}|{ASq}(\tau_n,k)| = \left(\int_0^1 F_{\tau_n}(t)dt \right)^k + O(n^{-.1}).$$
This follows exactly from the previous lemma together with Lemma \[insert\_bound\].
Partition ${A\!v_n}(321)$ into two sets $A_n$ and $B_n$, where permutations in $A_n$ satisfy the Petrov conditions and permutations in $B_n$ do not. Let $c_n$ denote the $n$-th Catalan number $\frac{1}{n+1}{2n \choose n} \sim \frac{4^n}{\sqrt{\pi n^3}}.$ Let $a_n$ and $b_n$ denote the sizes of $A_n$ and $B_n$, respectively. For a uniform permutation in ${A\!v_n}(321)$, the Petrov conditions fail with probability at most $Ce^{-n^\delta}$ for some $C,\delta > 0$, thus we have that $b_n \leq Ce^{-n^\delta}c_n$.
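The Stirling-type Catalan asymptotics $c_n\sim 4^n/\sqrt{\pi n^3}$ can be checked numerically via log-Gamma; a Python sketch (helper names ours):

```python
import math

def log_catalan(n):
    # log of c_n = binom(2n, n) / (n + 1), via log-Gamma to avoid overflow
    return math.lgamma(2 * n + 1) - 2 * math.lgamma(n + 1) - math.log(n + 1)

def log_asymptotic(n):
    # log of 4^n / sqrt(pi n^3)
    return n * math.log(4) - 0.5 * math.log(math.pi * n ** 3)

assert round(math.exp(log_catalan(5))) == 42  # c_5 = 42
for n in (10 ** 3, 10 ** 4, 10 ** 5):
    # the log-scale discrepancy is of order 1/n
    assert abs(log_catalan(n) - log_asymptotic(n)) < 2 / n
```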
For any $\tau_n \in {A\!v_n}(321)$ we always have the upper bound $|{ASq}(\tau_n,k)| \leq (n+k)^{2k}$. Thus the contribution to $|{ASq({A\!v_n}(321),k)}|$ from permutations with external points in $B_n$, i.e. permutations in ${ASq}(B_n,k),$ is at most $c_n(n+k)^{2k} Ce^{-n^\delta}\leq c_n (2n)^{2k}Ce^{-n^\delta} = o(c_n)$. Using Lemma \[city maps\], we obtain $$\begin{aligned}
|{ASq({A\!v_n}(321),k)}| &= \sum_{\tau_n \in {A\!v_n}(321)} |{ASq}(\tau_n,k)|\nonumber \\
&= c_n \cdot {\mathbb{E}}\left[ |{ASq}(\bm{\tau}_n,k)| \right] \nonumber\\
&= \frac{(2n)^{3k/2}c_n}{k!}{\mathbb{E}}\left [\left ( \int_0^1 F_{\bm{\tau}_n}(t) dt \right)^k \Bigg | \bm \tau_n \in A_n \right]{\mathbb{P}}(\bm\tau_n \in A_n) + o\left(c_n n^{3k/2-.1}\right).\label{carlos}\end{aligned}$$
Using that ${\mathbb{P}}(\bm\tau_n\in B_n ) \leq Ce^{-n^{\delta}}$ and $F_{\bm\tau_n}(t) \leq n^{1/2}$, we have $$\label{eq:bound1}
{\mathbb{E}}\left [\left ( \int_0^1 F_{\bm{\tau}_n}(t) dt \right)^k \Bigg | \bm \tau_n \in B_n \right]{\mathbb{P}}(\bm\tau_n\in B_n )\leq Cn^{k/2}e^{-n^\delta}.$$ Rewriting the expectation ${\mathbb{E}}\left [\left ( \int_0^1 F_{\bm{\tau}_n}(t) dt \right)^k \right]$ as $$\label{eq:bound2}
{\mathbb{E}}\left [\left ( \int_0^1 F_{\bm{\tau}_n}(t) dt \right)^k \Bigg | \bm \tau_n \in A_n \right] {\mathbb{P}}(\bm\tau_n\in A_n )+{\mathbb{E}}\left [\left ( \int_0^1 F_{\bm{\tau}_n}(t) dt \right)^k \Bigg | \bm \tau_n \in B_n \right] {\mathbb{P}}(\bm\tau_n\in B_n )$$ we have that convergence of the $k$-th moment of $(\int_0^1 F_{\bm\tau_n}(t)dt \big | \bm\tau_n\in A_n)$ is equivalent to the convergence of the $k$-th moment of $\int_0^1 F_{\bm\tau_n}(t)dt$. Moreover, if the limits exist, they must agree. Suppose this is the case; then becomes
$$\label{dealio}
|{ASq({A\!v_n}(321),k)}| = \frac{(2n)^{3k/2}c_n}{k!}{\mathbb{E}}\left [ \left ( \int_0^1 F_{\bm\tau_n}(t) dt \right)^k\right] + o\left(c_n n^{3k/2-.1}\right).$$
It remains to show the existence of the limit of the $k$-th moment of the area $\int_0^1 F_{\bm\tau_n}(t) dt$. We have the simple upper bound $$\label{upside down}
\int_0^1 F_{\bm{\tau}_n}(t) dt \leq \sup_{t\in [0,1]} F_{\bm\tau_n}(t) = \frac{1}{\sqrt{2n}} D( \bm \tau_n ).$$ For each $k>0$ and for $n$ large enough, from Corollary \[dyck\_max\] $$\begin{aligned}
{\mathbb{E}}\left [ \left( \int_0^1 F_{\bm\tau_n}(t) dt \right)^k \Bigg | \bm\tau_n \in A_n \right] &\leq {\mathbb{E}}\left[ \left(\frac{1}{\sqrt{2n}}D( \bm\tau_n) \right)^k \Bigg | \bm\tau_n \in A_n \right] \nonumber \\
& \leq {\mathbb{E}}\left [ \left( \frac{1}{\sqrt{2n}}(M(\bm\gamma_n) + 10n^{.4})\right)^k \Bigg | \bm\tau_n \in A_n \right ] \nonumber \\
& \leq {\mathbb{E}}\left[ \left( \frac{1}{\sqrt{2n}} M(\bm\gamma_n) \right)^k \right ] \frac{(1 + O(n^{-.1}))}{{\mathbb{P}}(\bm\tau_n \in A_n)}\nonumber \\
& \leq \frac{2}{{\mathbb{P}}(\bm\tau_n \in A_n)}{\mathbb{E}}\left[ \left( \frac{1}{\sqrt{2n}} M(\bm\gamma_n) \right)^k \right ].\nonumber\end{aligned}$$ Therefore, from and we obtain the following bound $${\mathbb{E}}\left [\left ( \int_0^1 F_{\bm{\tau}_n}(t) dt \right)^k \right]\leq 2\cdot {\mathbb{E}}\left[ \left( \frac{1}{\sqrt{2n}} M(\bm\gamma_n) \right)^k \right ]+Cn^{k/2}e^{-n^\delta}.$$
By [@Khorunzhiy_Marckert Theorem 1] the exponential moment of $(2n)^{-1/2} M(\bm\gamma_n)$ is uniformly bounded in $n$, thus for any $k>0$, the $k$-th moment of $\int_0^1 F_{\bm\tau_n}(t)dt$ is uniformly bounded in $n$. This along with the convergence in distribution of $\int_0^1 F_{\bm\tau_n}(t) dt$ to $\int_0^1 \bm e_tdt$ implies convergence of the $k$-th moments:
$$\label{final countdown}
{\mathbb{E}}\left [\left(\int_0^1 F_{\bm{\tau}_n}(t) dt \right)^k\right] \longrightarrow {\mathbb{E}}\left[ \left( \int_0^1 \bm{e}_t dt\right) ^k\right ]$$
(see [@ChungP Theorem 4.5.2], for instance).
Dividing both sides of by $(2n)^{3k/2}c_n/k!$ gives $$\frac{|{ASq({A\!v_n}(321),k)}|}{(2n)^{3k/2}c_n/k!} = {\mathbb{E}}\left [\left(\int_0^1 F_{\bm{\tau}_n}(t) dt \right)^k\right] + o(1),$$ and letting $n$ tend to infinity finishes the proof.
We conclude this section proving Theorem \[thm:fluctuations\]. We recall that for a permutation $\tau_n\in Av_n(321)$ (with the convention that $\tau_n(0)=0$) we defined $$\label{eq:first_def}
F_{\tau_n}(t) \coloneqq \frac{1}{\sqrt{2n}}\big |\tau_n(\lfloor nt \rfloor) - \lfloor nt \rfloor \big|, \quad t\in[0,1].$$ We also generalized this definition, by setting, for a permutation $\tau^k_n\in {ASq({A\!v_n}(321),k)}$ (with the convention that $\tau^k_n(0)=0$), $$\label{eq:second_def}
F_{\tau^k_n}(t) \coloneqq \frac{1}{\sqrt{2(n+k)}}\big |\tau^k_n(s(t)) - s(t) \big|,\quad t\in[0,1],$$ where $s(t)=\max\left\{m\leq \lfloor (n+k)t \rfloor|\tau^k_n(m)\text{ is an external point}\right\}$. Note that, for permutations in $ Av_n(321)$, the definition given in coincides with the definition given in .
We need the following technical result.
\[lem:fluctuations\]Let $Reg_n^k$ be the set of permutations in ${ASq({A\!v_n}(321),k)}$ such that the exterior satisfies the Petrov conditions. As $n\to\infty,$ $$\sup_{\tau^k_n\in Reg^k_n}||F_{\tau^k_n}(t)-F_{{\text{ext}}(\tau^k_n)}(t)||_{\infty}\to 0,$$ where, for a function $f:[0,1]\to{\mathbb{R}}$, we denote $||f||_{\infty}=\sup_{t\in[0,1]}|f(t)|$.
Fix $t\in[0,1]$ and $\tau^k_n\in Reg_n^k$. Set $\tau_n={\text{ext}}(\tau^k_n)$. When we add an internal point to a permutation, we shift the points of the permutation diagram above and/or to the right by at most one cell. So, there exist two integers $m(t)$ and $\ell(t)$ such that $$\tau_n^k(s(t))=\tau_n(m(t))+\ell(t), \quad\text{with}\quad |m(t)-s(t)|\leq k\text{ and }|\ell(t)|\leq k.$$ Therefore $$\begin{aligned}
\left|F_{\tau^k_n}(t)-F_{\tau_n}(t)\right|&=\left|\frac{1}{\sqrt{2(n+k)}}\big |\tau^k_n(s(t)) - s(t) \big| -\frac{1}{\sqrt{2n}}\big |\tau_n(\lfloor nt \rfloor) - \lfloor nt \rfloor \big| \right|\\
&=\left|\frac{1}{\sqrt{2(n+k)}}\big |\tau_n(m(t))+\ell(t) - s(t) \big| -\frac{1}{\sqrt{2n}}\big |\tau_n(\lfloor nt \rfloor) - \lfloor nt \rfloor \big| \right|\\
&\leq \frac{1}{\sqrt{2n}}\bigg|\big |\tau_n(m(t))- m(t) \big| + 2k -\big |\tau_n(\lfloor nt \rfloor) - \lfloor nt \rfloor \big| \bigg|.
\end{aligned}$$ Let $\gamma_n$ be the Dyck path corresponding to $\tau_n$. By Lemma \[flux\], $$\Big|\big |\tau_n(m(t))-m(t) \big| +2k -\big |\tau_n(\lfloor nt \rfloor) - \lfloor nt \rfloor \big| \Big|\leq|\gamma_n(m(t))+10n^{.4}+2k-\gamma_n(\lfloor nt \rfloor)+10n^{.4}|.$$ By the Petrov conditions for Dyck paths, if $|x-y| < 2n^{.6}$ then $|\gamma_n(x) - \gamma_n(y)|< n^{.4}.$ Noting that $|m(t)-\lfloor nt \rfloor|\leq|m(t)-s(t)|+|s(t)-\lfloor nt \rfloor|\leq 3k$, then we obtain that, for $n$ large enough, $$\Big|\big |\tau_n(m(t))-m(t) \big| +2k -\big |\tau_n(\lfloor nt \rfloor) - \lfloor nt \rfloor \big| \Big|\leq 25n^{.4}$$ and so $$\begin{aligned}
\left|F_{\tau^k_n}(t)-F_{\tau_n}(t)\right|\leq\frac{25n^{.4}}{\sqrt{2n}}\to 0.
\end{aligned}$$ This bound is independent of $t$ and $\tau_n^k$, concluding the proof.
It is enough to show that for every continuous bounded functional $G:D([0,1],\mathbb{R})\to{\mathbb{R}},$ $${\mathbb{E}}\left[G\left(F_{\bm{\tau}^k_n}(t)\right)\right]\to{\mathbb{E}}\left[G\left(\bm{e}^k_t\right)\right].$$ Note that $$\begin{gathered}
\left|{\mathbb{E}}\left[G\left(F_{\bm{\tau}^k_n}(t)\right)\right]-{\mathbb{E}}\left[G\left(\bm{e}^k_t\right)\right]\right|\\
\leq{\mathbb{E}}\left[\left|G\left(F_{\bm{\tau}^k_n}(t)\right)-G\left(F_{{\text{ext}}(\bm{\tau}^k_n)}(t)\right)\right|\right]+\left|{\mathbb{E}}\left[G\left(F_{{\text{ext}}(\bm{\tau}^k_n)}(t)\right)\right]-{\mathbb{E}}\left[G\left(\bm{e}^k_t\right)\right]\right|.
\end{gathered}$$ We first show that $$\label{eq:gooal1}
{\mathbb{E}}\left[\left|G\left(F_{\bm{\tau}^k_n}(t)\right)-G\left(F_{{\text{ext}}(\bm{\tau}^k_n)}(t)\right)\right|\right]\to 0.$$ We have that $${\mathbb{E}}\left[\left|G\left(F_{\bm{\tau}^k_n}(t)\right)-G\left(F_{{\text{ext}}(\bm{\tau}^k_n)}(t)\right)\right|\right]=\sum_{\tau^k_n\in{ASq({A\!v_n}(321),k)}}\left|G\left(F_{\tau^k_n}(t)\right)-G\left(F_{{\text{ext}}(\tau^k_n)}(t)\right)\right|{\mathbb{P}}\left(\bm{\tau}^k_n=\tau^k_n\right).$$ The continuity of $G$ and Lemma \[lem:fluctuations\] show that the contribution to the sum vanishes as $n\to\infty$ for $\tau_n^k\in Reg_n^k$. Since $G$ is bounded and ${\mathbb{P}}(\bm{\tau}^k_n\notin Reg_n^k)\to 0$, we can conclude that the contribution to the sum for $\tau_n^k\notin Reg_n^k$ also vanishes as $n\to\infty$, and thus (\[eq:gooal1\]) holds.
It remains to prove that $$\label{eq:goaal2}
\left|{\mathbb{E}}\left[G\left(F_{{\text{ext}}(\bm{\tau}^k_n)}(t)\right)\right]-{\mathbb{E}}\left[G\left(\bm{e}^k_t\right)\right]\right|\to 0.$$ Note that $${\mathbb{E}}\left[G\left(F_{{\text{ext}}(\bm{\tau}^k_n)}(t)\right)\right]=\sum_{\tau^k_n\in{ASq({A\!v_n}(321),k)}}G\left(F_{{\text{ext}}(\tau^k_n)}(t)\right)\cdot{\mathbb{P}}\left(\bm{\tau}^k_n=\tau^k_n\right).$$ From Theorem \[thin red line\] we have that, uniformly for every $\tau^k_n\in{ASq({A\!v_n}(321),k)}$, $${\mathbb{P}}\left(\bm{\tau}^k_n=\tau^k_n\right)\sim \frac{k!}{(2n)^{3k/2}}\cdot\frac{1}{c_n}\cdot\mathbb{E}\left[\left(\int_0^1\bm e(t)dt\right)^k\right]^{-1},$$ and so, setting $Ar_k=\mathbb{E}\left[\left(\int_0^1\bm e(t)dt\right)^k\right]^{-1}$, we obtain $$\begin{aligned}
{\mathbb{E}}\left[G\left(F_{{\text{ext}}(\bm{\tau}^k_n)}(t)\right)\right]\sim& Ar_k\cdot\sum_{\tau^k_n\in{ASq({A\!v_n}(321),k)}}G\left(F_{{\text{ext}}(\tau^k_n)}(t)\right)\cdot\frac{k!}{(2n)^{3k/2}}\cdot\frac{1}{c_n}\\
&=Ar_k\cdot\sum_{\sigma_n\in Av_n(321)}G\left(F_{\sigma_n}(t)\right)\cdot\left|{ASq}(\sigma_n,k)\right|\cdot\frac{k!}{(2n)^{3k/2}}\cdot\frac{1}{c_n}.
\end{aligned}$$ From Lemma \[city maps\], for every $\sigma_n\in Av_n(321)$ that satisfies the Petrov conditions, it holds that $$\left|{ASq}(\sigma_n,k)\right| = \frac{(2n)^{3k/2}}{k!}\left(\left(\int_0^1 F_{\sigma_n}(t)dt \right)^k + O(n^{-.1})\right).$$ Therefore, using the asymptotic result above and recalling that the number of 321-avoiding permutations that do not satisfy the Petrov conditions is bounded by $Ce^{-n^\delta}c_n$, we obtain $$\begin{aligned}
{\mathbb{E}}\left[G\left(F_{{\text{ext}}(\bm{\tau}^k_n)}(t)\right)\right]\sim& Ar_k\cdot\sum_{\sigma_n\in Av_n(321)}G\left(F_{\sigma_n}(t)\right)\cdot \left(\int_0^1 F_{\sigma_n}(t)dt \right)^k\cdot\frac{1}{c_n}\\
&=Ar_k\cdot{\mathbb{E}}\left[G\left(F_{\bm\sigma_n}(t)\right)\cdot \left(\int_0^1 F_{\bm\sigma_n}(t)dt \right)^k\right],
\end{aligned}$$ where $\bm\sigma_n$ is a uniform permutation in $Av_n(321)$. Using similar arguments to the ones used for proving the result in (\[final countdown\]), we have that $${\mathbb{E}}\left[G\left(F_{\bm\sigma_n}(t)\right)\cdot \left(\int_0^1 F_{\bm\sigma_n}(t)dt \right)^k\right]\to{\mathbb{E}}\left[G\left(\bm{e}_t\right)\cdot \left(\int_0^1 \bm{e}_tdt \right)^k\right].$$ Finally, recalling the definition of $k$-biased excursion given in Definition \[def:kbiasedex\], we can conclude that (\[eq:goaal2\]) holds, finishing the proof.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors are very grateful to Mathilde Bouvel and Valentin Féray for the various discussions during the preparation of the paper.
The first author is supported by the SNF grant number 200021-172536, “Several aspects of the study of non-uniform random permutations". The second author is supported by the ANR “COMBINé” number 193951. The third author is supported by ERC Starting Grant 680275 “MALIG".
[^1]: The expression of $R^{(k)}(u)$ in [@disanto2011permutations] differs from ours by a factor $f(u)=((u-1)u^{-2})^{k-1}$, or equivalently by a factor $f(C(t))=t^{k-1}$, due to the fact that the authors of [@disanto2011permutations] considered permutations of size $n$ with $k$ internal points (we consider permutations of size $n+k$ with $k$ internal points); moreover, a factor $t$ is missing in their expression.
[^2]: In [@borga2019square] a regular anchored pair of sequences $(X,Y,z_0)$ satisfies $z_0\in(n^{.9},n-n^{.9})$ instead of $z_0\in(\delta_n,n-\delta_n)$. One can check that all the statements of [@borga2019square] are also true for this slightly more general definition.
---
abstract: 'The important task of determining the connectivity of gene networks, and at a more detailed level even the kind of interaction existing between genes, can nowadays be tackled by microarray-like technologies. Yet a single microarray experiment provides only a limited amount of data, and therefore reliable gene network retrieval procedures must integrate all the available biological knowledge, even when it comes from different sources and is of a different nature. In this paper we present a reverse engineering algorithm able to reveal the underlying gene network by using time-series datasets on gene expressions, considering the system response to different perturbations. The approach is able to determine the sparsity of the gene network, and to take into account possible [*a priori*]{} biological knowledge on it. The validity of the reverse engineering approach is highlighted through the deduction of the topology of several [*simulated*]{} gene networks, where we also discuss how the performance of the algorithm improves as the amount of data is enlarged or a priori knowledge is taken into account. We also apply the algorithm to experimental data on a nine-gene network in [*Escherichia coli*]{}.'
author:
- 'M. Pica Ciamarra$^{1}$[^1], G. Miele$^{1,2}$[^2], L. Milano$^{1}$[^3], M. Nicodemi$^{1,3}$[^4], G. Raiconi$^{4}$[^5]'
title: 'A statistical mechanics approach to reverse engineering: sparsity and biological priors on gene regulatory networks'
---
Introduction
============
The amount and the timing of appearance of the transcriptional product of a gene is mostly determined by regulatory proteins through biochemical reactions that enhance or block polymerase binding at the promoter region ([@Jacob61; @Dickson75]). Considering that many genes code for regulatory proteins that can activate or repress other genes, the emerging picture is conveniently summarized as a complex network where the genes are the nodes, and a link between two genes is present if they interact. The identification of these networks is becoming one of the most relevant tasks of new large-scale genomic technologies such as DNA microarrays, since gene networks can provide a detailed understanding of the cell regulatory system, and can help unveil the function of previously unknown genes and develop pharmaceutical compounds.
Different approaches have been proposed to describe gene networks (see ([@filkov]) for a review), and different procedures have been proposed ([@Tong02; @Lee02; @Ideker01; @Davidson02; @Arkin97; @Yeung02]) to determine the network from experimental data. This is a computationally daunting task, which we address in the present work. Here we describe the network via deterministic evolution equations ([@Tegner03; @Bansal06]), which encode both the strength and the direction of the interaction between two genes, and we discuss a novel reverse engineering procedure to extract the network from experimental data. This procedure, though remaining a quantitative one, realizes one of the most important goals of modern systems biology, which is the integration of data of different types and of knowledge obtained by different means.
We assume that the rate of synthesis of a transcript is determined by the concentrations of every transcript in a cell and by external perturbations. The levels of gene transcripts therefore form a dynamical system which, in the simplest scenario, is described by the following set of ordinary differential equations ([@deJong02]): $$\dot{X}(t) = {\mathcal{A}}X(t) + {\mathcal{B}}U(t) \label{eq-cont}$$ where $X(t) = (x_1(t),\ldots,x_{N_g}(t))$ is a vector encoding the expression levels of $N_g$ genes at time $t$, and $U$ a vector encoding the strength of $N_p$ external perturbations (for instance, every element $u_k$ could measure the density of a specific substance administered to the system). In this scenario the gene regulatory network is the matrix ${\mathcal{A}}$ (of dimension $N_g \times N_g$), as the element ${\mathcal{A}}_{ij}$ measures the influence of gene $j$ on gene $i$, with a positive ${\mathcal{A}}_{ij}$ indicating activation, a negative one indicating repression, and a zero indicating no interaction.
The matrix ${\mathcal{B}}$ (of dimension $N_g \times N_p$) encodes the coupling of the gene network with the $N_p$ external perturbations, as ${\mathcal{B}}_{ik}$ measures the influence of the $k$-th perturbation on the $i$-th gene.
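As a concrete illustration of Eq. (\[eq-cont\]), the following minimal sketch (with hypothetical matrices ${\mathcal{A}}$ and ${\mathcal{B}}$ not taken from the paper) integrates the linear model for a small network and checks that, for a stable ${\mathcal{A}}$ and a constant perturbation, the expression levels relax to the fixed point $-{\mathcal{A}}^{-1}{\mathcal{B}}U$:

```python
import numpy as np

# Hypothetical 3-gene network (illustrative values only): negative diagonal
# for stability, one activating and one repressing off-diagonal interaction.
A = np.array([[-1.0,  0.0,  0.0],
              [ 0.8, -1.0,  0.0],   # gene 0 activates gene 1
              [ 0.0, -0.5, -1.0]])  # gene 1 represses gene 2
B = np.array([[1.0],                # the single perturbation acts on gene 0
              [0.0],
              [0.0]])
u = np.array([1.0])                 # constant perturbation strength

def simulate(A, B, u, t_max=10.0, dt=0.01):
    """Integrate dX/dt = A X + B u from X(0) = 0 by forward Euler."""
    X = np.zeros(A.shape[0])
    traj = [X.copy()]
    for _ in range(int(t_max / dt)):
        X = X + dt * (A @ X + B @ u)
        traj.append(X.copy())
    return np.array(traj)

traj = simulate(A, B, u)
# All eigenvalues of A have negative real part, so X(t) relaxes to the
# fixed point X* solving A X* + B u = 0, i.e. X* = -A^{-1} B u.
x_star = -np.linalg.solve(A, B @ u)
```

The sign pattern of the off-diagonal entries directly implements the activation/repression convention described above.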
A critical step in our construction is the choice of a linear differential system. Even if such a model is based on particular assumptions about the complex dynamics of a gene network, it seems the only practical approach, given the lack of knowledge of the real interaction mechanisms between thousands of genes. Even a simple nonlinear approach would give rise to an intractable number of free parameters. However, it must also be recognized that all other approaches or models have weak points. For instance, Boolean models (which have been very recently applied to the inference of networks from time-series data, as in ([@martin])) strongly discretize the data and select, [*via*]{} the use of an arbitrary threshold, between active and inactive genes at every time-step. Dynamical Bayesian models, instead, are more data demanding than linear models due to their probabilistic nature. Moreover, their space complexity grows like $N_g^4$ (at least in the well-known Reveal algorithm by K.P. Murphy ([@Murphy01])), which makes this tool suitable only for small networks.
The linear model of Eq. (\[eq-cont\]) is suitable to describe the response of a system to small external perturbations. It can be recovered by expanding to first order, around the equilibrium condition $\dot{X}(t) = 0$, the dependency of $\dot{X}$ on $X$ and $U$, $\dot{X}(t) = f(X(t),U)$. Stability considerations ($X(t)$ must not diverge in time) require the eigenvalues of ${\mathcal{A}}$ to have negative real parts. Moreover, this derivation clarifies that if the perturbation $U$ is kept constant the model is not suitable to describe periodic systems, such as cell cycles, since in this case $X(t)$ asymptotically approaches a constant.
Unfortunately, data from a given cell type involve thousands of responsive genes $N_g$. This means that many different regulatory networks are activated at the same time by the perturbations, and the number of measurements (microarray hybridizations) in typical experiments is much smaller than $N_g$. Consequently, inference methods can be successful, but only if restricted to a subset of the genes (i.e. a specific network) ([@Basso05]), or to the dynamics of gene subsets. These subsets could be either gene clusters, created by grouping genes sharing similar time behavior, or the modes obtained by using singular value decomposition (SVD). In these cases it is still possible to use Eq. (\[eq-cont\]), but $X(t)$ must be interpreted as a vector encoding the time variation of the cluster centroids, or the time variation of the characteristic modes obtained via SVD.
In this paper we present a method for the determination of the matrices ${\mathcal{A}}$ and ${\mathcal{B}}$ starting from time-series experiments, using a global optimization approach to minimize an appropriate figure of merit. With respect to previous attempts, our algorithm explicitly uses the insight provided by earlier studies on gene regulatory networks ([@Barabasi00; @Barabasi01]), namely, that gene networks in most biological systems are sparse. In order to encode this type of feature, the problem itself must be formulated as a mixed-integer nonlinear optimization one ([@minlp]). Moreover, our approach is intended to explicitly incorporate prior biological knowledge: for instance, it is possible to impose that ${\mathcal{A}}_{ij} < 0$ $(= 0,> 0, \neq 0)$ if it is known that gene $j$ inhibits (does not influence, activates, influences) gene $i$. This means that the optimization problem is subject to inequality and/or equality constraints. Summing up, the problem we must solve is a high-dimensional, mixed-integer, nonlinear programming problem, for the exact solution of which no method exists. An approximate solution can be found efficiently using global optimization techniques ([@Pardalos95; @Pardalos02]) based on an intelligent stochastic search of the admissible set. As a consequence of the optimization method used, there is no difficulty in integrating different time-series data investigating the response of the same set of genes to different perturbations, even if different time series are sampled at different (and not equally spaced) time points. The integration of different time series is a major achievement, as it allows for the joint use of data obtained by different research groups. We believe that the integration of multiple time-series datasets in unveiling a gene network is a topic of great interest, as highlighted in recently published papers ([@shi]).
We illustrate and test the validity of our algorithm on computer-simulated gene expression data, and we apply it to an experimental gene expression data set obtained by perturbing the SOS system in the bacterium [*E. coli*]{}.
Methods
=======
The simplest assumption regarding the dynamical response of gene transcripts (initially in a steady state, $X(t) = 0$ for $t < 0$) to the appearance of an external perturbation $U(t)$ at time $t>0$ is given by Eq. (\[eq-cont\]). Since the state of the system is measured at discrete times $t = t_k$, $k=0,\ldots,N_t$, it is useful to consider the discrete form of Eq. (\[eq-cont\]): $$\label{eq-discrete} X(t_{k+1}) = A X(t_k) +
\widetilde{U}(t_k,t_{k+1}),$$ where $A$ is a matrix with dimension $N_g \times N_g$, and $\widetilde{U}$ is a function of the perturbations, namely $$\begin{aligned}
A &=& \exp({\mathcal{A}}\Delta t),\nonumber\\
\widetilde{U}(t_k,t_{k+1}) &=&
\int_{t_k}^{t_{k+1}}\exp\{{\mathcal{A}}(t_{k+1}- \tau)\} \, {\mathcal{B}}\, U(\tau) \,
d\tau .\label{eq-discrete-matrix}\end{aligned}$$ Here we have assumed, for simplicity's sake, $t_k = k\Delta t$, but the generalization to the general case is straightforward. In particular, for constant $U$ one gets $B \equiv
\widetilde{U}(t_k,t_{k+1})=\left( \exp\{ {\mathcal{A}}\Delta t\}-1 \right)
{\mathcal{A}}^{-1} {\mathcal{B}}\, U$.
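The continuous-to-discrete map of Eq. (\[eq-discrete-matrix\]) can be sketched as follows, with hypothetical matrices; the matrix exponential is computed by a simple truncated Taylor series (adequate here because $\|{\mathcal{A}}\Delta t\|$ is small; a production code would use a library routine), and the constant-$U$ formula for $B$ is cross-checked against a direct quadrature of the defining integral:

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via truncated Taylor series (fine for small ||M||)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Hypothetical continuous-time matrices and a constant perturbation.
Ac = np.array([[-1.0, 0.5],
               [0.0, -2.0]])
Bc = np.array([[1.0],
               [0.5]])
u = np.array([2.0])
dt = 0.1

# Discrete dynamics matrix A = exp(Ac dt) and, for constant U,
# B = (exp(Ac dt) - 1) Ac^{-1} Bc U.
A = expm(Ac * dt)
B = (A - np.eye(2)) @ np.linalg.solve(Ac, Bc @ u)

# Cross-check B against a trapezoidal quadrature of the integral defining
# the discrete input term over one sampling interval.
taus = np.linspace(0.0, dt, 1001)
vals = np.array([expm(Ac * (dt - tau)) @ (Bc @ u) for tau in taus])
h = taus[1] - taus[0]
B_quad = h * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])
```

Note how exponentiation fills in the matrix: even a sparse ${\mathcal{A}}$ generally yields a dense $A$, which is why sparsity constraints must be imposed on the continuous matrices.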
Due to the presence of noise, the measured $X(t_k)$ do not coincide with the true values $\overline{X}(t_k)$ expected to satisfy Eq. (\[eq-cont\]). If, for simplicity, we assume the observed samples to be affected by independent, zero-mean additive noise $\varepsilon_{k}$, namely $X(t_k) =
\overline{X}(t_k)+\varepsilon_{k}$, the matrices ruling the dynamics of Eq. (\[eq-cont\]) can be found by requiring the minimization of a suitably defined [*cost function*]{}.
Under the simplifying assumption of a constant external perturbation, previous works focused on the determination of $A$ and $B$ (from which ${\mathcal{A}}$ and ${\mathcal{B}}$ can be retrieved), as in ([@Bansal06; @Holter]). The matrices $A$ and $B$ have been assumed to be those minimizing the cost function $$CF(A,B) = \sum_{k = 0}^{N_t-1} |X(t_{k+1})-(AX(t_k)+BU)|^2.
\label{eq-cost}$$ The minimization of Eq. (\[eq-cost\]) is a standard linear least-squares estimation problem for $A, B$, whose solution can be found by computing the pseudoinverse of a suitable matrix, provided that the number of observations is sufficiently high: $N_{t}>N_{g}+N_{p}$.
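A minimal sketch of this baseline least-squares estimation (hypothetical discrete matrices, noiseless data for clarity): each time step contributes one row to a linear regression whose pseudoinverse solution recovers $A$ and $B$ exactly once $N_t > N_g + N_p$:

```python
import numpy as np

# Hypothetical stable discrete dynamics (spectral radius < 1) and input.
A_true = np.array([[0.5, 0.1, 0.0, 0.2],
                   [0.0, 0.4, 0.2, 0.0],
                   [0.1, 0.0, 0.3, 0.1],
                   [0.0, 0.2, 0.0, 0.5]])
B_true = np.array([[1.0], [0.5], [0.0], [0.2]])
U = np.array([1.0])                        # constant perturbation
Ng, Np, Nt = 4, 1, 12

# Noiseless time series from X(0) = 0 (steady state before the perturbation).
X = np.zeros((Nt, Ng))
for k in range(Nt - 1):
    X[k + 1] = A_true @ X[k] + B_true @ U

# Each step gives one row of the regression [A B] [X_k; U] ~ X_{k+1};
# np.linalg.lstsq solves it via the pseudoinverse.
Z = np.hstack([X[:-1], np.tile(U, (Nt - 1, 1))])   # (Nt-1) x (Ng+Np)
M, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
A_est, B_est = M.T[:, :Ng], M.T[:, Ng:]
```

With noisy data the same regression returns the minimizer of Eq. (\[eq-cost\]) rather than the exact matrices, which is the starting point the present work improves upon.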
In the present analysis we introduce a new reverse engineering approach to determine the matrices ${\mathcal{A}}$ and ${\mathcal{B}}$, which turns out to be more efficient and flexible than previous ones. Our approach is based on the following considerations, which have not been taken into account in previous works:
- Each gene expression time-series can in principle be scanned in both time directions, exploiting the [*time-reversibility*]{} of the dynamics.
- There is biological evidence suggesting that the matrix ${\mathcal{A}}$ is sparse ([@Barabasi00; @Barabasi01]). For this reason any reverse engineering algorithm has to be able to capture the proper sparsity of the gene regulatory network.
- In many situations there is prior biological information about bounds on the numerical values of some specific entries of ${\mathcal{A}}$ and ${\mathcal{B}}$. Such bounds must be taken into account in the solution procedure.
As a consequence of these considerations, we use as a cost function the reduced chi-square ${\chi^2_{\rm red}}$ defined as follows: $$\label{eq-rcs}
{\chi^2_{\rm red}}= \frac{\mathcal{CF}({\mathcal{A}},{\mathcal{B}})}{n_{\rm dof}\sigma^2},$$ where $$\begin{aligned}
\mathcal{CF}({\mathcal{A}},{\mathcal{B}}) &=&\sum_{k = 0}^{N_t-1}\left[
\left|X(t_{k+1})-\left(A \,
X(t_k)+\widetilde{U}(t_k,t_{k+1})\right) \right|^2 +
\left|X(t_k)-\left(A^{-1}X(t_{k+1}) +
\widetilde{U}(t_{k+1},t_k)\right)\right|^2\right].
\label{eq-mcost}\end{aligned}$$ Note that $A^{-1}\widetilde{U}(t_k,t_{k+1}) =
-\widetilde{U}(t_{k+1},t_k)$, and the quantities $A$ and $\widetilde{U}(t_k,t_{k+1})$ can be obtained from ${\mathcal{A}}$ and ${\mathcal{B}}$ by appropriate numerical approximation algorithms for Eqs. (\[eq-discrete-matrix\]). The quantity $\sigma$ denotes the standard deviation of the independent, additive noise affecting the dataset.
A straightforward optimization over the dynamics/input matrices of Eq. (\[eq-cont\]) is the main improvement of the proposed approach with respect to the previous ones. This is the only way that enables us to incorporate the sparseness requirement on ${\mathcal{A}},{\mathcal{B}}$ and the eventually available biological priors. Indeed, sparseness is destroyed by the exponentiation and integration involved in the continuous-discrete transformation of the problem, in the same way as simple bounds on the elements of ${\mathcal{A}},{\mathcal{B}}$ are transformed into highly complex nonlinear relations on $A,B$. The price paid for the flexibility of the approach is the computational effort required for each evaluation of the error function. This puts a premium on the efficiency of the optimization algorithm.
In Eq. (\[eq-mcost\]) the two contributions in square brackets account for the forward and backward propagation, respectively, and thus implement the time reversibility of the dynamics. Moreover, the sparsity of the gene network is taken into account via the number of degrees of freedom (d.o.f.) defined as $n_{\rm dof} \equiv n_{\rm eq} - n_{\rm par}$, with $n_{\rm eq} = N_g (N_t - 1)$ the number of equations (constraints), $n_{\rm par} = N_g(N_g + N_p) - n_{\rm zero}$ the number of free parameters, and $n_{\rm zero}$ the number of elements of ${\mathcal{A}}$ and ${\mathcal{B}}$ fixed to zero.
The generalization of the algorithm to the case in which there are different time-series, $X^\alpha(t_k)$, corresponding to the response of the same set of genes to similar and/or different perturbations ${\mathcal{B}}^\alpha$, with $\alpha = 1,\ldots,N_p$, is straightforward. In this case the cost function to be minimized is simply $${\chi^2_{\rm red}}= \frac{1}{2 n_{\rm dof}} \sum_{\alpha = 1}^{N_p}\frac{\mathcal{CF}({\mathcal{A}},{\mathcal{B}}^\alpha)}{\sigma^2_\alpha}.
\label{eq-min-molti}$$ Here we have assumed the noise to depend only on the time-series ($\alpha$). It is clearly possible, however, to introduce a time ($t_k$) and even a gene ($i$) dependence, i.e. to use $\sigma = \sigma^i_\alpha(t_k)$.
We now detail our procedure to find the sparse matrices ${\mathcal{A}}$ and ${\mathcal{B}}$ minimizing ${\chi^2_{\rm red}}$, which is in general a formidable task. The first difficulty is the determination of the number $n_{\rm par}$ of non-vanishing elements of ${\mathcal{A}}, {\mathcal{B}}$ (or equivalently the number of d.o.f. $n_{\rm dof}$). Having determined $n_{\rm par}$, the problem is still very complicated since there are $$\frac{(N_g(N_g+N_p))!}{n_{\rm par}! \, (N_g(N_g+N_p)-n_{\rm par})!}$$ different ways of choosing these $n_{\rm par}$ elements out of the $N_g(N_g+N_p)$ candidates. For typical values of the parameters, for instance $N_g = 10$ and $n_{\rm par} = 1/2N_g^2 = 50$, the number of possible combinations is of the order of $10^{32}$, so large that any kind of exhaustive algorithmic procedure is precluded. A practical approach to solve, at least approximately, this formidable problem is to resort to global optimization techniques based on a stochastic search of the admissible set; for a comprehensive review of such methods see ([@Pardalos95; @Pardalos02]). We have tackled this problem via the implementation of the most classical of such methods: a simulated annealing procedure ([@Kirkpatrick]), based on a Monte Carlo dynamics. For each possible value of the number of parameters $n_{\rm par}$, the algorithm searches for the matrices ${\mathcal{A}}$ and ${\mathcal{B}}$ with a total of $n_{\rm par}$ non-zero elements minimizing the cost function of Eq. (\[eq-mcost\]), as discussed below. We then easily determine ${\chi^2_{\rm red}}(n_{\rm par})$ and the minimizing matrices ${\mathcal{A}}^*$ and ${\mathcal{B}}^*$, which are our best estimates of the true matrices. In order to determine the matrices ${\mathcal{A}}$ and ${\mathcal{B}}$ with a total of $n_{\rm par}$ non-zero parameters which minimize the cost function, our simulated annealing procedure starts with two random matrices ${\mathcal{A}}$ and ${\mathcal{B}}$ with a total of $n_{\rm par}$ non-vanishing parameters, and changes the elements of these matrices according to two possible Monte Carlo moves. One move is the variation of the value of a non-vanishing element of the two matrices; the other one consists in setting to zero a previously non-zero element, and in setting a previously zero element to a random value. Each move, which involves a variation $\Delta \mathcal{CF}$ of the cost function, is accepted with a probability $\exp[-\Delta
\mathcal{CF}/T]$, where $T$ is an external parameter. As in standard optimization-by-annealing procedures, we start from a high value of $T$, of the order of the value of the cost function, and then slowly take the limit $T \to 0$. In the limit of an infinitesimally slow decrease of $T$ the algorithm is able to retrieve the true minimum of the cost function, while faster cooling rates yield approximations of the true minimum.
As the Monte Carlo moves attempt to change the values of the elements of ${\mathcal{A}}$ and ${\mathcal{B}}$, it is easy to introduce biological constraints on the values of ${\mathcal{A}}_{ij}$ and of ${\mathcal{B}}_{ik}$, as we will show in a following example. The algorithm requires the evaluation of the cost function $\mathcal{CF}$, which is a time-consuming operation, as the computation of the discrete matrix $A$ and of its inverse $A^{-1}$ is required. We have implemented this algorithm in C++, making use of the GNU Scientific Library (www.gnu.org/software/gsl).
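A minimal, self-contained sketch of the annealing loop described above (the actual implementation is in C++ with the GSL). For brevity we anneal only a small discrete matrix $A$ on noiseless data, use only the forward prediction error, and keep the two move types: a value perturbation and a support swap at fixed sparsity; the instance is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny instance: a sparse 3x3 discrete dynamics matrix with n_par = 4
# non-zero entries, and a noiseless time series started off equilibrium.
A_true = np.array([[0.5, 0.0, 0.0],
                   [0.3, 0.4, 0.0],
                   [0.0, 0.0, 0.6]])
Ng, Nt, n_par = 3, 20, 4
X = np.zeros((Nt, Ng))
X[0] = [1.0, -1.0, 2.0]
for k in range(Nt - 1):
    X[k + 1] = A_true @ X[k]

def cost(A):
    # Forward one-step prediction error only; the full method also adds
    # the backward term and the input matrix B.
    return sum(np.sum((X[k + 1] - A @ X[k]) ** 2) for k in range(Nt - 1))

# Random initial matrix with exactly n_par non-zero entries.
support = list(rng.choice(Ng * Ng, size=n_par, replace=False))
A = np.zeros(Ng * Ng)
A[support] = 0.1 * rng.standard_normal(n_par)
c = c0 = cost(A.reshape(Ng, Ng))
best_c, best_A = c, A.copy()

T = max(c, 1.0)                      # start T at the scale of the cost
for step in range(15000):
    T *= 0.9994                      # slow geometric cooling
    A_new, new_support = A.copy(), list(support)
    if rng.random() < 0.5:
        # Move 1: perturb the value of one non-zero entry.
        A_new[support[rng.integers(n_par)]] += 0.1 * rng.standard_normal()
    else:
        # Move 2: zero out a non-zero entry and activate a zero one,
        # keeping the sparsity n_par fixed.
        j = rng.integers(n_par)
        zeros = [i for i in range(Ng * Ng) if i not in support]
        i_new = zeros[rng.integers(len(zeros))]
        A_new[support[j]] = 0.0
        A_new[i_new] = 0.1 * rng.standard_normal()
        new_support[j] = i_new
    c_new = cost(A_new.reshape(Ng, Ng))
    # Metropolis acceptance: always keep improvements, sometimes accept
    # uphill moves while T is still large.
    if c_new < c or rng.random() < np.exp(-(c_new - c) / max(T, 1e-12)):
        A, support, c = A_new, new_support, c_new
        if c < best_c:
            best_c, best_A = c, A.copy()

A_est = best_A.reshape(Ng, Ng)
```

Repeating this search for every candidate $n_{\rm par}$, and comparing the resulting ${\chi^2_{\rm red}}(n_{\rm par})$, is what singles out the network sparsity in the examples below.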
Results
=======
In this section we illustrate our reverse engineering algorithm with three examples. The validity of our algorithm and of other known ones is evaluated by comparing the exact dynamical matrices ${\mathcal{A}}$ and ${\mathcal{B}}$ with their best estimates ${\mathcal{A}}^*$ and ${\mathcal{B}}^*$ obtained via the reverse engineering procedure. To this end, we have introduced the parameter $$\eta_{\mathcal{C}} =
\frac{|\mathcal{C}-\mathcal{C}^*|}{|\mathcal{C}|},
\label{eq-parameter}$$ where $\mathcal{C}^* = {\mathcal{A}}^*$ or ${\mathcal{B}}^*$ and $|\mathcal{C}|$ is the $L_2$ norm of the matrix $\mathcal{C}$. Clearly, $\eta_{\mathcal{C}} \ge 0$, with equality if and only if $\mathcal{C} = \mathcal{C}^*$. Since $\eta_{\mathcal{C}}$ is a measure of a relative error it has no upper bound, but the estimate of $\mathcal{C}$ becomes unreliable when $\eta_{\mathcal{C}}$ is above $1$, i.e. $|\mathcal{C} - \mathcal{C}^*| > |\mathcal{C}|$. This parameter allows for a faithful evaluation of the quality of the reverse engineering approach, as it summarizes the comparisons of all retrieved elements $\mathcal{C}^*_{ij}$ with their true values $\mathcal{C}_{ij}$.
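In code, the scoring parameter of Eq. (\[eq-parameter\]) is a one-liner; the $L_2$ matrix norm below is the Frobenius norm, and the matrices are hypothetical:

```python
import numpy as np

def eta(C_true, C_est):
    """Relative retrieval error: L2 (Frobenius) norm of the difference,
    normalised by the norm of the true matrix."""
    return np.linalg.norm(C_true - C_est) / np.linalg.norm(C_true)

# Toy example: a nearby estimate scores close to 0, the all-zero estimate
# scores exactly 1, and a wild estimate can exceed 1.
C_true = np.array([[1.0, 0.0], [0.5, -2.0]])
eta_good = eta(C_true, C_true + 0.01)
eta_zero = eta(C_true, np.zeros_like(C_true))
```

The all-zero case makes the "unreliable above 1" threshold concrete: an estimate scoring 1 is no better than reporting no network at all.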
We discuss three applications. First, we show how our algorithm works when applied to a single time series. In this case one can show that the cost function ${\chi^2_{\rm red}}({\mathcal{A}},{\mathcal{B}})$, which takes into account both the forward and the backward propagation, is more effective in determining the structure of the gene network than the usual cost function $CF(A,B)$ of Eq. (\[eq-cost\]), which only considers the forward propagation. The second example shows how we can easily take into account the presence of different time-series, while the last example shows how biological priors can be included. Before discussing the examples we briefly describe the procedure used to generate the synthetic dataset.
### Generation of a synthetic dataset
In order to generate a synthetic dataset $X(t_k)$ one must construct the matrices ${\mathcal{A}}$ and ${\mathcal{B}}$, from which it is possible to generate the noiseless time-series $\overline{X}(t_k)$. Hence, one gets $X(t_k) =
\overline{X}(t_k) + \varepsilon_k$ for $k = 1,\ldots,N_t$, where $\varepsilon_k$ are i.i.d. random variables with standard deviation $\sigma$.
While there are no constraints on ${\mathcal{B}}$, ${\mathcal{A}}$ must be a sparse random matrix whose complex eigenvalues have negative real parts. The generation of ${\mathcal{A}}$ proceeds according to the following steps. First, we generate an $N_g \times N_g$ block diagonal matrix ${\mathcal{A}}^{(0)}$, whose blocks are $2 \times 2$ matrices with diagonal elements $\lambda_r^\alpha$ and antisymmetric off-diagonal elements $\pm\lambda_i^\alpha$, or $1 \times 1$ negative real elements $\lambda$. By direct construction all of the $N_g$ eigenvalues of the matrix ${\mathcal{A}}^{(0)}$ have negative real parts. Then we generate a series $R_k$ of random unitary matrices, with only $4$ off-diagonal non-vanishing entries, and compute the matrices ${\mathcal{A}}^{(k)} = R_k{\mathcal{A}}^{(k-1)}R^{-1}_k$, all of them sharing the spectrum of ${\mathcal{A}}^{(0)}$. Clearly, as $k$ grows, the number of vanishing entries (the sparsity) of ${\mathcal{A}}^{(k)}$ decreases. We fix ${\mathcal{A}}$ as the matrix ${\mathcal{A}}^{(k)}$ characterized by the desired number of vanishing elements. By choosing typical values of $\lambda_r^\alpha$ and $\lambda_i^\alpha$ it is possible to control the time scale of the relaxation process of the system following the application of the perturbation.
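The construction above can be sketched as follows; the rotation matrices are taken to be Givens rotations, a simple concrete choice of orthogonal matrices with few off-diagonal entries, and the block sizes and eigenvalue ranges are illustrative. Each similarity transformation preserves the spectrum, so the final matrix keeps all eigenvalues in the left half-plane:

```python
import numpy as np

rng = np.random.default_rng(2)

def stable_sparse_generator(Ng, n_rotations):
    """Block diagonal A^(0): 2x2 blocks [[lr, li], [-li, lr]] (eigenvalues
    lr +/- i*li with lr < 0) or negative 1x1 entries; then conjugate by
    random Givens rotations, which preserve the spectrum while gradually
    filling in off-diagonal entries."""
    A = np.zeros((Ng, Ng))
    i = 0
    while i < Ng:
        lr = -rng.uniform(0.5, 2.0)
        if i + 1 < Ng and rng.random() < 0.5:
            li = rng.uniform(0.5, 2.0)
            A[i:i + 2, i:i + 2] = [[lr, li], [-li, lr]]
            i += 2
        else:
            A[i, i] = lr
            i += 1
    spectrum = np.sort_complex(np.linalg.eigvals(A))
    for _ in range(n_rotations):
        p, q = rng.choice(Ng, size=2, replace=False)
        th = rng.uniform(0.0, 2.0 * np.pi)
        R = np.eye(Ng)
        R[p, p] = R[q, q] = np.cos(th)
        R[p, q], R[q, p] = np.sin(th), -np.sin(th)
        A = R @ A @ R.T       # similarity transformation: same eigenvalues
    return A, spectrum

A, spectrum = stable_sparse_generator(6, 3)
```

Stopping after a few rotations leaves the matrix sparse; many rotations produce an essentially dense stable matrix with the same relaxation time scales.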
![\[fig1\] Synthetic time-series $X(t_k)$ with $N_g = 8$ elements measured at $N_t$ = 20 equally spaced time-points.](fig1.eps)
### Example 1: a single time series
Let us consider a simulated time-series $X(t_k)=
(x_1(t_k),\ldots,x_{N_g}(t_k))$ with $N_g = 8$ measured at $N_t =
20$ equally-spaced time-points, as shown in Fig. \[fig1\]. This dataset is generated by starting from a sparse gene network ${\mathcal{A}}$ (with only $49$ out of $N_g^2 = 64$ non-zero elements), a constant perturbation $U(t) = 1$ and a sparse external perturbation-coupling matrix ${\mathcal{B}}$ with a single non-vanishing entry. The white noise is characterized by a standard deviation $$\sigma(p) = p \sum_{i=1}^{N_g} \sum_{k=1}^{N_t} \frac{|
x_i(t_k)|}{N_g \, N_t}, \label{eq-noise}$$ measured in units of the mean absolute value of the expression levels of all genes. In particular the value $p = 0.05$ has been used.
We have applied our algorithm to this dataset. To this end, we have minimized the reduced chi-square ${\chi^2_{\rm red}}$, defined in Eq. (\[eq-rcs\]), for different values of the number of parameters $n_{\rm par}$ (i.e. of the number of degrees of freedom $n_{\rm
dof}$). Fig. \[fig2\] shows that ${\chi^2_{\rm red}}$ has a non-monotonic dependence on the number of parameters $n_{\rm par}$. This feature is a signature of the fact that networks with either too few or too many connections are bad descriptions of the actual gene regulatory system. Accordingly, our best estimate of the number of non-vanishing parameters is $n_{\rm par}^* = 39$, where ${\chi^2_{\rm red}}$ has its minimum, and the corresponding minimizing matrices ${\mathcal{A}}^*$ (with $33$ non-zero entries) and ${\mathcal{B}}^*$ (with $6$ non-zero elements) are our best estimates of the actual gene network encoding matrix ${\mathcal{A}}$ and of the matrix ${\mathcal{B}}$.
![\[fig2\] The main panel (inset) shows the dependence of ${\chi^2_{\rm red}}$ (of the minimum of the cost function) on the number of non-vanishing parameters $n_{\rm par}$, as determined by our algorithm when applied to the time-series shown in Fig. \[fig1\]. The fluctuations are due to the probabilistic nature of the Monte Carlo minimization procedure. The quantity ${\chi^2_{\rm red}}$ varies non-monotonically with $n_{\rm par}$, and has a minimum at $n_{\rm par} = 39$ parameters.](fig2.eps)
The estimators assume the values $\eta_{\mathcal{A}}= 0.76$ and $\eta_{\mathcal{B}}=
0.005$. These values indicate that, when applied to this small dataset, our algorithm is able to retrieve ${\mathcal{B}}$ to a very good approximation, and ${\mathcal{A}}$ with a comparatively larger error.
For comparison, we have also obtained the matrices $A$ and $B$ which exactly minimize $CF(A,B)$ via a linear algebraic approach, and retrieved the corresponding continuous matrices via the use of the bilinear transformation, obtaining the scores $\eta_{\mathcal{A}}= 2.1$ and $\eta_{\mathcal{B}}= 0.012$. These numbers prove that, by exploiting the time reversibility of the equation of motion and the sparseness of the gene network, it is possible to estimate the parameters of the network with greater accuracy, as also shown in Fig. \[fig3\], where we plot the best estimates ${\mathcal{A}}^*_{ij}$ obtained by both methods versus their true values ${\mathcal{A}}_{ij}$: in the case of perfect retrieval all of the points should lie on the $y=x$ line.\
![\[fig3\] We plot here the values of the elements of the estimated matrices $A^*_{ij}$, obtained both with the linear algebraic approach and with our algorithm, versus their true values $A_{ij}$. Ideally, the points should lie on the $y=x$ dotted line.](fig3.eps)
### Example 2: multiple time series
There are two major problems encountered when trying to infer a gene network via the analysis of time-series data. The first one is that there are usually too few time-points with respect to the large number of genes. The second one is associated with the fact that, when the system responds to an external perturbation, only the expression of the genes directly or indirectly linked to that perturbation changes, i.e., only a specific sub-network of the whole gene network is [*activated*]{} by the external perturbation. While through the study of the time-series it is possible to learn something about the regulatory role of the responding genes, nothing can be learnt about the regulatory role of the non-responding genes.
These problems can be addressed by using gene network retrieval procedures which are able to simultaneously analyze different time-series ([@Wang]), particularly if these measure the response of the system to different perturbations, as we expect different perturbations to activate different genes. Our reverse engineering approach naturally exploits the presence of multiple time series by requiring the minimization of Eq. (\[eq-min-molti\]).
Here we study the network discussed in the previous example by adding to the time-series shown in Fig. \[fig1\] other ones, generated by the application of two different perturbations. For the sake of simplicity all time-series are measured at equally-spaced time-points, but with the time elapsed between two consecutive data points depending on the particular time-series. Hence the problem cannot be reduced to that of a single average time-series by exploiting the linearity of Eq. (\[eq-cont\]).
As the number of time-series increases, our determination of the gene network ${\mathcal{A}}$ becomes more and more accurate. For instance, while by means of a single perturbation we obtain $\eta_{\mathcal{A}}= 0.76$ ($\eta_{{\mathcal{B}}_0} = 0.005$), by using two time-series we obtain $\eta_{\mathcal{A}}= 0.25$ ($\eta_{{\mathcal{B}}_0} = 0.004, \eta_{{\mathcal{B}}_1} = 0.003$), and by using three time series we get $\eta_{\mathcal{A}}= 0.13$ ($\eta_{{\mathcal{B}}_0} =
0.004, \eta_{{\mathcal{B}}_1} = 0.002, \eta_{{\mathcal{B}}_2} = 0.002$).
![\[fig\_priors\] Dependence of $\eta_{\mathcal{A}}$ on the fraction of priors, as obtained by analyzing one or two time-series. The scoring parameter $\eta_{\mathcal{A}}$ decreases as the number of priors increases, indicating that a better estimate of the gene network ${\mathcal{A}}$ is recovered.](fig4.eps)
\
\
### Example 3: biological priors
As the traditional approach to research in Molecular Biology has been an inherently local one, examining and collecting data on a single gene or a few genes, there are now many pairs of genes which are known to interact in a specific way, or not to interact at all. This information is nowadays easily available by consulting public databases such as Gene Ontology. Here we show that it is possible to integrate this non-analytical information in our reverse engineering approach, improving the accuracy of the retrieved network. To this end we consider again the gene network ${\mathcal{A}}$ but we introduce some constraints on a fraction $f$ of randomly selected elements of the matrices ${\mathcal{A}}$ and ${\mathcal{B}}$, namely $10\% \leq f \leq 40\%$. As our retrieval procedure tries to exchange vanishing and non-vanishing elements of ${\mathcal{A}}$ and ${\mathcal{B}}$, we introduce the constraints as follows: if the element is zero in the exact matrices then we set it to zero and never try to set it to a non-zero value; on the contrary, if the element is different from zero, its value is free to change and we never try to set it to zero. By using this approach we ensure that our best estimates of ${\mathcal{A}}$ and ${\mathcal{B}}$ are consistent with the previous knowledge. In order to highlight the greater improvement that can be obtained via the use of biological priors, we now consider the same gene network ${\mathcal{A}}$ and perturbations of examples 1 and 2, but we corrupt the noiseless dataset by adding a noise (see Eq. (\[eq-noise\])) characterized by $p = 0.1$, and not by $p = 0.05$ as before. Due to the high value of the noise the linear algebraic approach is no longer able to recover the gene network matrix, as it obtains a score $\eta_{\mathcal{A}}= 4.40$.
We show in Fig. \[fig\_priors\] the dependence of $\eta_{\mathcal{A}}$ on the fraction of randomly selected elements of ${\mathcal{A}}$ and ${\mathcal{B}}$ fixed to be either zero or non-zero, both for the case in which one and the case in which two perturbations have been used in the retrieval procedure. As expected, $\eta_{\mathcal{A}}$ decreases as the number of priors increases, showing that the reliability of our reverse engineering approach improves as more biological knowledge on the system of interest becomes available.
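The prior-constraint bookkeeping described above can be sketched as two Boolean masks over the entries of the true matrix: entries frozen to zero and entries frozen to non-zero. The function below is illustrative (the name is ours), not the actual implementation.

```python
import numpy as np

def make_prior_masks(A_true, f, rng):
    # Freeze the zero/non-zero status of a random fraction f of entries,
    # mimicking prior biological knowledge about (non-)interactions.
    n = A_true.size
    known = np.zeros(n, dtype=bool)
    known[rng.choice(n, size=int(f * n), replace=False)] = True
    known = known.reshape(A_true.shape)
    forced_zero = known & (A_true == 0)     # never proposed as non-zero
    forced_nonzero = known & (A_true != 0)  # never proposed as zero
    return forced_zero, forced_nonzero
```

During the search, moves that would flip an entry covered by either mask are simply never proposed, which keeps the estimates consistent with the priors by construction.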
### Results on Escherichia Coli
We applied our algorithm to a nine-gene network, part of the SOS network in [*E. Coli*]{}. The genes are $recA$, $lexA$, $Ssb$, $recF$, $dinI$, $umuDC$, $rpoD$, $rpoH$, $rpoS$, and the time-series used consists of six time measurements (in triplicate) of the expression level of these genes following treatment with Norfloxacin, a known antibiotic that acts by damaging the DNA. The time series is the same used in Ref. ([@Bansal06]), and experimental details can be found there.
Given $N_g = 9$ there are $90$ unknowns to be determined, as ${\mathcal{A}}$ is a $N_g \times N_g$ matrix, and ${\mathcal{B}}$ is a vector of length $N_g$. Since $N_t = 6$, the experimental data allows for the writing of $N_g(N_t -1) = 45$ equations, and for the determination of only $45$ unknowns, while a literature survey ([@Bansal06]) suggests that there are at least $52$ connections between the considered genes (including the self-feedback). As in previous works, we are therefore forced to use an interpolation technique to add new time measurements, creating a time series with $11$ time points.
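The interpolation scheme is not specified in this excerpt; as a minimal stand-in, a linear interpolation of each gene profile onto a denser, equally spaced grid can be sketched as follows (the function name is ours).

```python
import numpy as np

def densify(t, X, n_new):
    # Interpolate each gene profile (rows of X, sampled at times t)
    # onto n_new equally spaced time points; linear interpolation is
    # used here purely as a minimal stand-in for the actual scheme.
    t_new = np.linspace(t[0], t[-1], n_new)
    return t_new, np.vstack([np.interp(t_new, t, x) for x in X])
```

With $N_t = 11$ interpolated points, $N_g(N_t-1) = 90$ equations become available, matching the $90$ unknowns.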
When applied to this dataset, our algorithm found that ${\chi^2_{\rm red}}$ is minimized by a matrix ${\mathcal{A}}$ with $57$ non-vanishing entries, and a vector ${\mathcal{B}}$ with $6$ non-zero elements, which are given in Table \[table1\]. In the literature, there are $52$ known connections between the nine considered genes, including the self-feedback. We are able to find $37$ of these connections. Regarding the interaction with Norfloxacin, our algorithm found that the primary target is $recA$, as expected.
[l|ccccccccc||c]{} & recA & lexA & Ssb & recF & dinI & umuDC & rpoD & rpoH & rpoS & ${\mathcal{B}}$\
\
recA & -1.68 & - & -0.36 & 1.81 & 1.05 & 0.84 & - & - & -0.59 & 0.71\
\
lexA & -0.11 & -1.56 & 0.59 & 0.58 & 0.40 & - & -0.34 & - & - & 0.13\
\
Ssb & -0.47 & 1.82 & -2.83 & - & 0.60 & - & 0.96 & -1.71 & 1.29 & -\
\
recF & 0.68 & 0.42 & - & -0.93 & -0.52 & -0.40 & -0.30 & 1.13 & - & 0.38\
\
dinI & 1.18 & 0.72 & 0.39 & -0.96 & -1.71 & 0.42 & - & - & - & 0.34\
\
umuDC & 0.47 & -0.63 & -0.39 & -0.64 & 0.19 & -0.65 & 0.11 & - & 0.53 & -\
\
rpoD & -0.06 & -0.28 & - & 0.36 & - & - & -0.22 & - & - & 0.40\
\
rpoH & - & - & -1.10 & 1.60 & -0.32 & 0.92 & - & -3.46 & 1.46 & -\
\
rpoS & -0.39 & -0.43 & - & - & 0.18 & 0.92 & 0.26 & 0.82 & -0.72 & -0.11\
Conclusions
===========
In the framework of a linear deterministic description of the time evolution of gene expression levels, we have presented a reverse engineering approach for the determination of gene networks. This approach, based on the analysis of one or more time-series data, exploits the time-reversibility of the equation of motion of the system, the sparsity of the gene network and previous biological knowledge about the existence/absence of connections between genes. By taking into account this information the algorithm significantly improves the level of confidence in the determination of the gene network over previous works.
The drawback of our procedure is the computational cost, which at the moment limits the applicability of the algorithm to a small number of genes/clusters. There are two time-consuming procedures. One is the transformation of the continuous matrix ${\mathcal{A}}$ into the discrete matrix $A$, which we have avoided by using the bilinear transformation, but whose validity breaks down as the time interval between two consecutive measurements increases. The second one, which at the moment is the most expensive in time, is the computation of the inverse matrix $A^{-1}$, which we accomplish through the so-called LU decomposition, whose computational cost is $O(N^3)$. Alternative methods for exploiting the reversibility of the dynamics should therefore be devised for applications with a larger number of genes.\
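One standard way to soften the $O(N^3)$ bottleneck is to avoid forming $A^{-1}$ explicitly: a single LU factorization (performed internally by `np.linalg.solve`, via LAPACK) handles many right-hand sides at once. The matrix below is a synthetic well-conditioned example, not the gene-network matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
A = rng.normal(size=(N, N)) + N * np.eye(N)  # synthetic, well-conditioned
B = rng.normal(size=(N, 3))                  # several right-hand sides

# One LU factorization (O(N^3)) solves all right-hand sides together;
# each additional right-hand side then costs only O(N^2).
X = np.linalg.solve(A, B)
```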
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank D. di Bernardo for rousing our interest in this subject, and for helpful discussions.
[00]{}
Arkin,A. et al. 1997. A test case of correlation metric construction of a reaction pathway from measurements. [*Science*]{} 277, 1275–1279.
Bansal, M. et al. 2006. Inference of gene regulatory networks and compound mode of action from time course gene expression profiles. [*Bioinformatics*]{} 22, 815–822.
Basso, K. et al. 2005. Reverse engineering of regulatory networks in human B cells. [*Nat. Genet.*]{} 37 (4), 382–390.
Davidson, E.H. et al. 2002. A genomic regulatory network for development. [*Science*]{} 295, 1669–1678.
de Jong, H. 2002. Modeling and simulation of genetic regulatory systems: a literature review. [*J. Comp. Biol.*]{} 9, 67–103.
Dickson, R., Abelson, J., Barnes, W. and Reznikoff, W.S. 1975. Genetic regulation: the Lac control region. [*Science*]{} 187, 27–35.
Filkov V. 2005. In Handbook of Computational Molecular Biology. [*Chapman & Hall/CRC Press*]{}.
Hansen, P. et al. 1993. Constrained Nonlinear 0-1 Programming. [*ORSA Journal on Computing*]{} 5, 2.
Holter, H.S. et al. 2001. Dynamic modeling of gene expression data. [*Proc. Natl. Acad. Sci. USA*]{} 98, 1693–1698.
Horst, R. and Pardalos, P.M. (Eds.) 1995. Handbook of Global Optimization. [*Springer Publisher*]{}.
Ideker,T. et al. 2001. Integrated genomic and proteomic analyses of a systematically perturbed metabolic network. [*Science*]{} 292, 929–934.
Jacob, F. and Monod, J. 1961. Genetic regulatory mechanisms in the synthesis of proteins. [*J. Mol. Biol.*]{} 3, 318–356.
Jeong, H. et al. 2000. The large-scale organization of metabolic networks. [*Nature*]{} 407, 651–654.
Jeong, H. et al. 2001. Lethality and centrality in protein networks. [*Nature*]{} 411, 41–42.
Kirkpatrick, S. et al. 1983. Optimization by simulated annealing. [*Science*]{} 220, 671–680.
Lee,T.I. et al. 2002. Transcriptional regulatory networks in Saccharomyces cerevisiae. [*Science*]{} 298, 799–804.
Martin, S. et al. 2007. Boolean dynamics of genetic regulatory networks inferred from microarray time series data. [*Bioinformatics*]{} 23, 866–874.
Murphy, K.P. 2001. The Bayes Net Toolbox for Matlab. [*Computing Science and Statistics*]{} 33.
Pardalos, P.M. and Romeijn, H.E. (Eds.) 2002. Handbook of Global Optimization. Vol.2 [*Kluwer Academic Publisher*]{}.
Shi, Y. et al. 2007. Inferring pairwise regulatory relationships from multiple time series datasets. [*Bioinformatics*]{} 23, 755–763.
Tegner, J. et al. 2003. Reverse engineering gene networks: integrating genetic perturbations with dynamical modeling. [*Proc. Natl. Acad. Sci. USA*]{} 100, 5944–5949.
Tong,A.H.Y. et al. 2002. A combined experimental and computational strategy to define protein interaction networks for peptide recognition modules. [*Science*]{} 295, 321–324.
Yeung, M.K.S. et al. 2002. Reverse engineering gene networks using singular value decomposition and robust regression. [*Proc. Natl. Acad. Sci. USA*]{} 99, 6163–6168.
Wang, Y. et al. 2006. Inferring gene regulatory networks from multiple microarray datasets. [*Bioinformatics*]{} 22, 2413–2420.
[^1]: Corresponding author - tel. +39 081 676805; fax +39 081 676346; picaciam@na.infn.it
[^2]: tel. +39 081 676463; fax +39 081 676463; miele@na.infn.it
[^3]: tel. +39 081 676142; milano@na.infn.it
[^4]: tel. +39 081 676475; nicodem@na.infn.it
[^5]: tel. +39 089 963320; fax +39 089 963303; gianni@unisa.it
---
abstract: 'We address the dynamics of a [ qubit interacting with a quasi static random classical field having both a longitudinal and a transverse component and described by a Gaussian stochastic process.]{} In particular, we analyze in detail the conditions under which the dynamics may be effectively approximated by a unitary operation or a pure dephasing without relaxation.'
author:
- CLAUDIA BENEDETTI
- 'MATTEO G. A. PARIS'
title: EFFECTIVE DEPHASING FOR A QUBIT INTERACTING WITH A TRANSVERSE CLASSICAL FIELD
---
Introduction
============
Studying the interaction of a quantum system with its environment plays a fundamental role in the development of quantum technologies. In fact, the quantum features of a system, such as the presence of quantum correlations or superposition of states, are very fragile and may be destroyed by the action of the environmental noise. Decoherence may be induced by classical or quantum noise, i.e. by the interaction with an environment described classically or quantum-mechanically. The classical description is often more realistic for describing environments with a very large number of degrees of freedom, or quantum systems coupled to a classical fluctuating field. Recently, it has also been shown that even certain quantum environments may be described with equivalent classical models [@helm09; @joynt13; @sarma13]. Since the environment surrounding a quantum system is often composed of a large number of fluctuators, it is legitimate to assume Gaussian statistics for the noise [@tsai04]. Moreover, the Gaussian approximation is valid even in the presence of non-Gaussian noise, as long as the coupling with the environment is weak [@galp06; @abel08].
Among the different classes of open quantum systems, much attention has been paid to qubit systems subject to environmental noise inducing a dephasing dynamics [@averin04; @shnirma; @shibata; @sarma08; @sarma13b; @rev13]. In this framework, in studying the interaction of a qubit with an external field, it is often assumed that the typical frequencies of the system are larger than the characteristic frequencies of the environment. In these situations it is likely that the interaction with the environment induces [*decoherence*]{} through dephasing rather than [*relaxation*]{} via damping, i.e. by inducing transitions between the energy levels of the qubit. The effective Hamiltonian describing this kind of process may thus be written as $$H(t)=\omega_0\sigma_z+B_z(t)\sigma_z
\label{Hz}$$ where, $\omega_0$ is the natural frequency of the qubit and $B_z(t)$ is a classical stochastic field with a noise spectrum containing frequencies that are smaller than $\omega_0$. The overall evolution of the system is obtained by averaging the unitary evolution governed by the Hamiltonian (\[Hz\]) over the realizations of the stochastic process. The resulting map $\rho(t)={\cal E}_t (\rho_0)$ corresponds to a pure dephasing which, in turn, leads to a number of interesting phenomena [@rev13], including the abrupt vanishing of entanglement (the so-called entanglement sudden-death [@esd1; @esd2; @esd3]) and the sudden transition between classical and quantum decoherence [@sd1; @sd2]. Pure dephasing has been also used to describe the dynamics of qubit systems in colored environments [@ben12; @ben13] and to quantify their non-Markovian character [@ben13a].
In this paper we do not assume the Hamiltonian in Eq. (\[Hz\]), and address the dynamics of a qubit interacting with a Gaussian field with both a longitudinal and a transverse component and with a broad spectrum, possibly including the natural frequency $\omega_0$ of the qubit. In particular, we are interested in analyzing the conditions under which the dynamics may be effectively approximated by a unitary operation or a pure dephasing without relaxation. [ Addressing the problem for a generic transverse stochastic field is a challenging task [@cum1; @cum2] since a high order cumulant expansion is involved. We thus restrict attention to the quasi static regime, where the dynamics of the external field is assumed to be slow, and discuss in some detail the conditions to obtain an effective dephasing in this regime.]{}
The paper is structured as follows: In Section \[s:qb\] we describe the dynamics of a qubit interacting with an external random classical field having nonzero longitudinal and transverse components. In Section \[s:tr\] we assume a pure transverse field and analyze the conditions under which its effects may be neglected, i.e. the dynamics may be effectively approximated by a unitary operation or a dephasing, whereas in Section \[s:fl\] we consider both components and again analyze the regimes where the dynamics corresponds to dephasing without relaxation. Section \[s:out\] closes the paper with some concluding remarks.
Qubit interacting with a classical random field {#s:qb}
===============================================
Let us consider a two-level system interacting with an external fluctuating field $\vec{B}$, having both a longitudinal and a transverse component, denoted by $B_z(t)$ and $B_x(t)$ respectively. The system Hamiltonian is given by: $$\begin{aligned}
H(t)&=\omega_0\sigma_z+B_x(t)\sigma_x+B_z(t)\sigma_z \label{hamiltonian},\end{aligned}$$ where $\omega_0$ is the qubit energy and the $\sigma_i$ are Pauli matrices. Our purpose is to study under which conditions the dynamics governed by the Hamiltonian , and by the average over the stochastic processes $B_z(t)$ and $B_x(t)$, may be described by a dephasing map, such that the added term $B_x(t)\sigma_x$ does not affect the population of the qubit. The time-dependent coefficients $B_i(t)$ describe stationary Gaussian stochastic processes with zero mean and covariance $K(t,t')\equiv
K(t-t')$, namely $$\begin{aligned}
[B_i(t)]_{B_i}&=0\nonumber\\
[B_i(t)B_i(t')]_{B_i}&=K_i(t-t')\qquad i=x,z\end{aligned}$$ where the symbol $[\cdot]_{B_i}$ denotes the average over the process $B_i(t)$. A Gaussian process is a process which can be fully described by its second-order statistics. The characteristic function is given by [@puri] $$\left[ \exp\left(i\int_{t_0}^t\!\!\! ds\, J(s) B_i(s)\right)\right]_{B_i}=
\hbox{exp}\left(-\frac{1}{2}
\int_{t_0}^t
\int_{t_0}^t \!\!ds\,ds'\,J(s) K_i(s-s')J(s')\right).
\label{gauSt}$$ [ Upon assuming $t_0=0$, the evolution operator is expressed as: $$\begin{aligned}
U(t,\omega_0)& =\exp\left\{-i\, {\cal T}\int_0^t\!\!ds\, H(s)\right\}
\notag \\ &\simeq
\exp\left\{-i \left[\,\omega_0 t\, \sigma_z +\varphi_x(t)\,
\sigma_x+\varphi_z(t)\,\sigma_z\right]\right\}\label{uu}\end{aligned}$$ where ${\cal T}$ denotes time ordering operator and we have introduced the noise phases $$\varphi_i(t)=\int_0^t\!\! ds\, B_i(s)\,.$$ The second equality in Eq. (\[uu\]) is only approximated and is valid upon truncating the Dyson series at the first order, i.e. assuming that we are in the quasi static regime such that the two-time commutator $[H(t_1),H(t_2)]$ is negligible. If the external field is exactly static, i.e. it is random but it does not change in time, the phases are given by $\varphi_i(t)=
B_i\,t$ while in the quasi static regime they encompass the effects of the (slow) dynamics of the external field.]{} Because of the Gaussian nature of the considered process, the average of any functional of the noise phase $g[\varphi(t)]$ may be written as the average over the process $\varphi(t)$ with a Gaussian probability distribution: $$\begin{aligned}
[g(\varphi_i)]_{B_i}&=
\frac{1}{\sqrt{2\pi\beta_i(t)}}
\int\!\! d\varphi_i\, g(\varphi_i)\,
\exp\left\{-\frac{\varphi_i^2}{2\beta_i(t)}\right\}\end{aligned}$$ where we omitted the explicit dependency of $\varphi$ on time, and the variance function $\beta(t)$ is defined as: $$\begin{aligned}
\beta_i(t)=\int_0^t\int_0^t \!\!ds\,ds'\, K_i(s-s').
\label{beta}\end{aligned}$$ The evolution operator may be decomposed into the Pauli basis, $U(t,\omega_0)=\frac{1}{2}\sum_{j=0}^3
\hbox{Tr}[U(t,\omega_0)\sigma_j]\sigma_j$, with $\sigma_0$ corresponding to the identity matrix $\mathbb{I}$, and can thus be expressed as: $$\begin{aligned}
U(t,\omega_0)=f_I(t,\omega_0)\,\mathbb{I}+i\,f_x(t,\omega_0)\,\sigma_x
+i\,f_z(t,\omega_0)\,\sigma_z,\end{aligned}$$ where $$\begin{aligned}
f_I(t,\omega_0)&=\cos\left[\sqrt{\varphi_x^2+(\varphi_z+\omega_0 t)^2}\right]\\
f_x(t,\omega_0)&=-\frac{\varphi_x\sin\left[
\sqrt{\varphi_x^2+(\varphi_z+\omega_0 t)^2}
\right]}{\sqrt{\varphi_x^2+(\varphi_z+\omega_0 t)^2}}
\label{fx}\\
f_z(t,\omega_0)&=-\frac{(\varphi_z+\omega_0 t)\sin
\left[\sqrt{\varphi_x^2+(\varphi_z+\omega_0 t)^2}
\right]}{\sqrt{\varphi_x^2+(\varphi_z+\omega_0 t)^2}}\,.\end{aligned}$$ The qubit density matrix is then evaluated as the average of the evolved density matrix over the stochastic processes $\vec{B}=\{B_x,B_z\}$: $$\begin{aligned}
\rho(t)=\left[U(t,\omega_0)\rho_0U^{\dagger}(t,\omega_0)\right]_{\vec{B}}
\label{rhot}\end{aligned}$$ where $\rho_0=\sum_{j,k=1}^2\rho_{jk}{\vert j \rangle \! \langle k \vert}$ is the initial density operator. Since the average of any odd terms in $\varphi_x$ and $\varphi_z$ in Eq. vanishes, we have $$\begin{aligned}
&\rho(t)=
\Bigg[f_I^2\,\rho_0+f_x^2\,\sigma_x\rho_0\sigma_x+
f_z^2\,\sigma_z\rho_0\sigma_z+i\,f_If_z\,[\sigma_z,\rho_0]\,\Bigg]_{\vec{B}}
\label{evol}\end{aligned}$$ where we omitted the dependency of the $f$ functions on $t$ and $\omega_0$. After performing the average in Eq. , the evolved density matrix may be rewritten as: $$\begin{aligned}
\rho(t)=A_I\,\rho_0+A_x\, \sigma_x\rho_0\sigma_x+A_z\,\sigma_z\rho_0\sigma_z
+i A_{Iz}\,[\sigma_z,\rho_0]
\label{rho_A}\end{aligned}$$ where: $$\begin{aligned}
A_i&=A_i(t,\omega_0)=\Big[f_i(t,\omega_0)^2\Big]_{\vec{B}}\qquad i=I,x,z\label{Ai}\\
A_{Iz}&=A_{Iz}(t,\omega_0)=\Big[f_{I}(t,\omega_0)f_{z}(t,\omega_0)\Big]_{\vec{B}}\end{aligned}$$ and the condition $A_I+A_x+A_z=1$ must be satisfied in order to preserve the trace. Upon writing the density matrix explicitly $$\rho(t)=\left(\begin{array}{cc}
(A_I+A_z)\rho_{11}+A_x\rho_{22}&(A_I+2i\;A_{Iz}-A_z)\rho_{12}+A_x\rho_{21}\\
A_x\rho_{12}+(A_I-2i\;A_{Iz}-A_z)\rho_{21}&A_x\rho_{11}+(A_I+A_z)\rho_{22}
\end{array}
\right)\label{rho}$$ one immediately sees that whenever $A_x$ is vanishing or may be neglected, the Hamiltonian leads to a dephasing map, with a complex dephasing coefficient. In the next Section, we analyze whether this is also true under other conditions.
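As a quick numerical consistency check of the map defined by Eq. (\[rhot\]): sampling the noise phases from zero-mean Gaussians with variances $\beta_x(t)$, $\beta_z(t)$ and averaging the conjugated states must return a unit-trace, Hermitian density matrix. The sketch below assumes the quasi static form of $U$ given above; function names are ours.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def U_of(phi_x, phi_z, w0t):
    # Pauli decomposition U = f_I I + i f_x sigma_x + i f_z sigma_z,
    # with the f coefficients given in the text.
    a = np.sqrt(phi_x**2 + (phi_z + w0t)**2)
    fI = np.cos(a)
    fx = -phi_x * np.sin(a) / a
    fz = -(phi_z + w0t) * np.sin(a) / a
    return fI * I2 + 1j * fx * sx + 1j * fz * sz

def averaged_state(rho0, beta_x, beta_z, w0t, n=5000, seed=1):
    # Monte Carlo estimate of the average over the noise phases, each
    # drawn from a zero-mean Gaussian with variance beta_x or beta_z.
    rng = np.random.default_rng(seed)
    px = rng.normal(0.0, np.sqrt(beta_x), n)
    pz = rng.normal(0.0, np.sqrt(beta_z), n)
    rho = np.zeros((2, 2), dtype=complex)
    for x, z in zip(px, pz):
        U = U_of(x, z, w0t)
        rho += U @ rho0 @ U.conj().T
    return rho / n
```

Each sampled $U$ is unitary (since $f_I^2+f_x^2+f_z^2=1$), so the averaged state keeps trace one exactly, and hermiticity is preserved term by term.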
Interaction with a pure transverse field {#s:tr}
----------------------------------------
In order to gain insight into the dynamics of the system let us first consider the case of zero longitudinal field $B_z(t)=0$ and look for the conditions under which the effects of the transverse field may be neglected or subsumed by a dephasing. We set $\varphi_x=\varphi$ and evaluate $A_x(t)$ from Eq. , which now reads $$\begin{aligned}
A_x(t,\omega_0,\beta)&=
\frac{1}{\sqrt{2\pi\beta(t)}}\int_{-\infty}^{\infty}
\!\!\!\!d\varphi\; \varphi^2\,\frac{\sin^2\left[\sqrt{\varphi^2+(\omega_0 t)^2}\right]}
{\varphi^2+(\omega_0
t)^2}\,\exp\left(-\frac{\varphi^2}{2\beta(t)}\right)\,,\label{axx}\end{aligned}$$ where the exact functional form of the variance $\beta(t)$ depends on the specific features of the process $B_x(t)$. Upon inspecting Eq. (\[axx\]) one sees that $A_x(t,\omega_0,\beta)$ vanishes whenever $\omega_0 t\gg1$ or $\beta(t)\ll1$. The first condition corresponds to the assumption of a large qubit frequency (outside the spectrum of the noise), whereas the second one $\beta\ll1$ is related to the specific properties of the stochastic process describing the noise. In order to better understand the effects of the transverse field, we now evaluate the function $A_x(t,\omega_0,\beta)$ from Eq. for three classical Gaussian processes with Ornstein-Uhlenbeck (OU), Gaussian (G) and power-law (PL) autocorrelation functions, i.e. $$\begin{aligned}
K_{OU}(t-t', \gamma,\Gamma)&=\frac12\, \Gamma\gamma \,e^{-\gamma|t-t'|}\quad\\
K_{G}(t-t',\gamma,\Gamma)&=\frac{1}{\sqrt{\pi}}\,\Gamma\gamma\,e^{-\gamma^2(t-t')^2}\\
K_{PL}(t-t',\gamma,\Gamma,\alpha)&=\frac{1}{2}\,(\alpha-1)\,
\gamma\Gamma\,\frac{1}{\big(\gamma|t-t'|+1\big)^{\alpha}}\label{plc}\end{aligned}$$ which, by Eq. , give: $$\begin{aligned}
\beta_{OU}(\tau, R_{\Gamma})
&=R_{\Gamma}\left(\tau-1+e^{-\tau}\right)\label{beta_1}
\equiv R_\Gamma\, g_{OU} (\tau)\,,\\
\beta_{G}(\tau, R_{\Gamma})&=\frac{R_{\Gamma}}{\sqrt{\pi}}
\left[e^{- \tau^2}-1+\sqrt{\pi}\,\tau\, \hbox{Erf}(\tau)\right]
\equiv R_\Gamma\, g_{G} (\tau)\,,\\
\beta_{PL}(\tau, R_{\Gamma},\alpha)&=R_{\Gamma}\,
\frac{(1-\tau)^2+(1+\tau)^{\alpha}[\tau(\alpha-2)-1]}{(1+\tau)^{\alpha}(\alpha-2)}
\equiv R_\Gamma\, g_{PL} (\tau)\,,
\label{beta_3}\end{aligned}$$ where $\Gamma$ and $\gamma$ are the [*damping*]{} and the [*memory*]{} parameters of the processes, $\tau=\gamma t$ denotes the rescaled dimensionless time, $R_{\Gamma}=\frac{\Gamma}{\gamma}$, $\alpha>2$ is a real number and Erf(x) is the error function. [ The (quasi) static limit is obtained for vanishing $\gamma$ keeping $\Gamma\gamma$ finite.]{} The $g_x(\tau)$’s are functions of the rescaled time only, $x=OU,G,PL$. We have numerically evaluated the integral in Eq. for the three different processes as a function of rescaled time $\tau$ and the two ratios $R_{\omega}=\omega_0/\gamma$ and $R_{\Gamma}$. In particular, we want to see when $A_x(\tau,R_{\omega},R_{\Gamma})$ is negligible, as a function of the parameters $R_{\omega}$ and $R_{\Gamma}$, and to this aim, we have maximized the function over the time $\tau$ and determined where the maximum is smaller than a given threshold. In Fig. \[f1\] we show the region in the $R_\omega$–$R_\Gamma$ plane where $\max_\tau |A_x(\tau,R_{\omega},R_{\Gamma})|<10^{-3}$ for the three different processes. As it is apparent from the plots, the coefficient is negligible if $R_{\omega}\gg1$ and/or $R_{\Gamma}\ll1$, with the specific ranges depending on the chosen process.
![Region where the coefficient $\max_\tau
|A_x(\tau,R_{\omega_0},R_{\Gamma})|<10^{-3}$ for three different processes characterized by a) exponential b) Gaussian and c) power law ($\alpha=4$) autocorrelation function[]{data-label="f1"}](f1_OU.pdf "fig:"){width="32.00000%"} ![Region where the coefficient $\max_\tau
|A_x(\tau,R_{\omega_0},R_{\Gamma})|<10^{-3}$ for three different processes characterized by a) exponential b) Gaussian and c) power law ($\alpha=4$) autocorrelation function[]{data-label="f1"}](f1_gaus.pdf "fig:"){width="32.00000%"} ![Region where the coefficient $\max_\tau
|A_x(\tau,R_{\omega_0},R_{\Gamma})|<10^{-3}$ for three different processes characterized by a) exponential b) Gaussian and c) power law ($\alpha=4$) autocorrelation function[]{data-label="f1"}](f1_PL.pdf "fig:"){width="32.00000%"}
In Fig. \[f1\]c, we have shown results for a power-law process with $\alpha=4$. This is a good representative of the family , since different values of the parameter lead to the same conditions for an effective dephasing. The behaviour emerging from Fig. \[f1\] is in agreement with the qualitative considerations made above and with the fact that the condition $\beta\ll1$ is equivalent to $R_{\Gamma}\ll 1$.
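The closed form $\beta_{OU}(\tau)=R_\Gamma(\tau-1+e^{-\tau})$ can be checked numerically by integrating the OU kernel directly over $[0,t]^2$; the sketch below (our own helper names) uses a midpoint rule on a uniform grid.

```python
import numpy as np

def beta_ou_numeric(t, gamma, Gamma, n=500):
    # Midpoint-rule evaluation of the double integral of the OU kernel
    # K(s - s') = (1/2) Gamma * gamma * exp(-gamma |s - s'|) over [0, t]^2.
    ds = t / n
    s = (np.arange(n) + 0.5) * ds
    S, Sp = np.meshgrid(s, s)
    K = 0.5 * Gamma * gamma * np.exp(-gamma * np.abs(S - Sp))
    return K.sum() * ds * ds

def beta_ou_closed(tau, R_Gamma):
    # Closed form beta_OU(tau) = R_Gamma (tau - 1 + e^{-tau}).
    return R_Gamma * (tau - 1.0 + np.exp(-tau))
```

The same double-integration check applies verbatim to the Gaussian and power-law kernels against their closed forms.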
Since we assumed Gaussian processes with zero mean, we can Taylor-expand the function $f_x(t,\omega_0)^2$ around $\varphi=0$. By truncating the expansion at second order we may compute the integral analytically and obtain: $$\begin{aligned}
\tilde{A}_x(t,\omega_0) &\simeq\beta(t)\frac{\sin^2\omega_0t}{(\omega_0t)^2}.
\label{Ax}\end{aligned}$$ From Eq. we immediately see that the coefficient $\tilde{A}_x(t,\omega_0)$ vanishes for vanishing $\beta(t)$ or for $R_\omega \tau \equiv \omega_0t\gg1$. This is in agreement with the numerical results and shows that a second order expansion is sufficient to capture the two regimes where the effects of the transverse field on the populations may be neglected. In order to gain more insight on the possible differences between the two regimes we also expand the other $f$ functions up to second order in $\varphi$, arriving at $$\begin{aligned}
\tilde{A}_I(t,\omega_0)&\simeq\cos^2\omega_0 t
- \beta(t) \frac{\sin 2\omega_0t}{2\omega_0t}\\
\tilde{A}_z(t,\omega_0)&\simeq\sin^2\omega_0 t
- \beta(t) \left (
\frac{\sin^2 \omega_0t}{(\omega_0t)^2} -
\frac{\sin 2\omega_0t}{2\omega_0t}\right) \\
\tilde{A}_{Iz}(t,\omega_0)&\simeq-\frac{1}{2}\sin2\omega_0 t
- \beta(t) \left (
\frac{\cos 2\omega_0t}{2\omega_0t}-
\frac{\sin 2\omega_0t}{4(\omega_0t)^2}
\right)\,.\end{aligned}$$ In turn, the coefficient in the off-diagonal elements of the density matrix reads as follows $$\begin{aligned}
\label{offd}
A_I+2i\,A_{Iz}-A_z
&\simeq
e^{-2 i \omega_0 t} + \frac{\beta(t)}{2(\omega_0 t)^2}& R_\omega \gg 1
\\
&\simeq
e^{-2 i \omega_0 t} \left[
1-
\frac{\beta(t)}{2(\omega_0 t)^2}
-i\,\frac{\beta(t)}{\omega_0 t}
\right]
+ \frac{\beta(t)}{2(\omega_0 t)^2}
& R_\Gamma \ll 1\,.\label{offd1}\end{aligned}$$ The above expressions, together with Eq. (\[Ax\]) which is valid in both the limiting cases, illustrate the qualitative differences between the two regimes: for $R_\omega\gg1$ the leading terms in Eqs. (\[offd\]) and (\[Ax\]) are the same, meaning that either relaxation occurs or the dynamics is unitary, whereas for $R_\Gamma\ll1$ the multiplicative term in Eq. (\[offd1\]) reveals that the effective dynamics of the qubit corresponds to a dephasing. [ The expressions above correspond to situations where the effective dynamics is valid at all times. More generally, it may happen that the weaker conditions $R_{\omega}\tau \gg1$ and $R_{\Gamma}\,g_x(\tau)\ll1$ are satisfied only up to some value of $\tau$, corresponding to regimes where the effective dynamics appears only for a finite interaction time.]{}
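The small-$\beta$ approximation in Eq. (\[Ax\]) can be checked against a direct quadrature of the Gaussian average in Eq. (\[axx\]) with $B_z=0$; the helper names below are ours.

```python
import numpy as np

def A_x_exact(beta, w0t, n=20001):
    # Gaussian average of phi^2 sin^2(sqrt(phi^2 + (w0 t)^2)) /
    # (phi^2 + (w0 t)^2) with phi ~ N(0, beta), by uniform-grid quadrature
    # truncated at 12 standard deviations.
    L = 12.0 * np.sqrt(beta)
    phi = np.linspace(-L, L, n)
    r2 = phi**2 + w0t**2
    integrand = phi**2 * np.sin(np.sqrt(r2))**2 / r2
    weight = np.exp(-phi**2 / (2.0 * beta)) / np.sqrt(2.0 * np.pi * beta)
    return np.sum(integrand * weight) * (phi[1] - phi[0])

def A_x_approx(beta, w0t):
    # Second-order (small-beta) expansion: beta * sin^2(w0 t) / (w0 t)^2.
    return beta * np.sin(w0t)**2 / w0t**2
```

For small $\beta$ the two agree to within a relative correction of order $\beta$, consistent with the truncation of the Taylor expansion at second order.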
Effective dephasing in the general case {#s:fl}
---------------------------------------
We now consider the complete Hamiltonian , with the longitudinal term $B_z(t)\neq0$. The coefficient $A_x$, in this case, takes the form: $$\begin{aligned}
&A_x(\beta_x,\beta_z,t,\omega_0)=\nonumber\\
&\frac{1}{2\pi\sqrt{\beta_x(t)\beta_z(t)}}\int\!\!\int\!\!
d\varphi_x\,d\varphi_z\,
\exp\left(-\frac{\varphi_x^2}{2\beta_x(t)}-\frac{\varphi_z^2}{2\beta_z(t)}\right)\,
f_x^2(\varphi_x,\varphi_z,t,\omega_0)
\end{aligned}$$ where we explicitly wrote the dependency on the $\beta$ functions. Following the line of reasoning of the previous section, we expand the function $f_x$ in Eq. around $\varphi_x=0$ and $\varphi_z=0$, and we drop the expansion at the second order. Inserting this expansion in Eq. , we are able to write the analytical expression for $\tilde{A}_x$: $$\begin{aligned}
\tilde{A}_x(\beta_x,\beta_z,t,\omega_0)=& \beta_x (t)\frac{\sin^2\omega_0t}{\omega_0^2t^2}
+ \notag \\
& \frac{\beta_x(t)\beta_z(t)}{2 (\omega_0 t)^4} \Big[
3 + (2 \omega_0^2 t^2 -3)\cos 2 \omega_0 t - 4 \omega_0 t \sin 2 \omega_0 t
\Big]\end{aligned}$$ Upon expanding to the second order all the terms we may write the analytical expression of the evolved density matrix, where the off-diagonal coefficient $K=A_I+2i\,A_{Iz}-A_z$ is given by: $$\begin{aligned}
K\simeq&\,e^{-2i \omega_0 t}[1-2\beta_z(t)]+
\frac{\beta_x(t)}{2\omega_0^2t^2}&\quad R_{\omega}\gg1\label{r1}\\
\simeq&\, e^{-2i\,\omega_0t}\left[ 1-2\beta_z (t) -
\frac{\beta_x(t)}{2 \omega_0^2 t^2} - i \frac{\beta_x(t)}{\omega_0t}\right]
+\frac{\beta_x(t)}{2\omega_0^2t^2} &\quad R_{\Gamma}\ll 1\,.
\label{r2}\end{aligned}$$ Looking at Eq. (\[r1\]) and Eq. , one sees that when $R_\omega \gg1$ one may just neglect the effects of the transverse field, whereas for $R_\Gamma \ll 1$ one has an additional effective term in the coefficient $K$. [ [As for the previous case, the effective dynamics emerges if the above conditions are valid at all times. More general regimes can be written as $R_{\omega}\tau \gg1$ and $R_{\Gamma}\,g_x(\tau)\ll1$. ]{}]{}
Conclusions {#s:out}
===========
The effect of classical noise on a qubit system may be described as the interaction with a random field. In this paper we analyzed, in the quasi static regime, the conditions under which a general dynamics, including the interaction with a transverse field, may be approximated by an effective dephasing, without changes in the populations. In particular, we studied the time evolution of a qubit subject to a transverse and a longitudinal field. We found that the properties of the analyzed stochastic processes, i.e. their autocorrelation functions, play a role through the variance function $\beta(t)$. Whenever this function is small, the dynamics can be described as a dephasing. Moreover, we recovered the known condition of large system energy, [ $\omega_0t\gg1$]{}, which prevents jumps between the qubit levels. If these assumptions do not hold, the general dynamics is not a dephasing and relaxation phenomena may occur, with changes in the qubit populations as described by Eq. .
Acknowledgments {#acknowledgments .unnumbered}
===============
This work has been partially supported by MIUR (FIRB LiCHIS-RBFR10YQ3H) and by the Finnish Cultural Foundation (Science Workshop on Entanglement). The authors thank [Ł]{}ukasz Cywinski, P. Bordone, F. Buscemi, F. Caruso, A. D’Arrigo, S. Maniscalco and E. Paladino for discussions and suggestions, and the University of Modena and Reggio-Emilia for hospitality.
---
abstract: |
In this work, we generalize and apply the linear relations between LLT polynomials introduced by Lee [@Lee]. Since the chromatic quasisymmetric functions and the unicellular LLT polynomials are related via plethystic substitution, they satisfy the same linear relations, and so the relations can be applied to both families of functions.
As a result, on the chromatic quasisymmetric function side, we find a class of $e$-positive graphs, called *melting lollipop graphs*, and explicitly prove their $e$-unimodality. On the unicellular LLT side, we obtain Schur expansion formulas for the LLT polynomials corresponding to a certain class of graphs, namely complete graphs, path graphs, lollipop graphs and melting lollipop graphs.
address:
- 'Department of Mathematics, Ajou University, Suwon 16499 Republic of Korea'
- 'Department of Mathematics, Sogang University, Seoul 04107, Republic of Korea'
- ' Applied Algebra and Optimization Research Center, Sungkyunkwan University, Suwon 16420, Republic of Korea'
author:
- JiSun Huh
- 'Sun-Young Nam'
- Meesue Yoo
title: |
Melting lollipop chromatic quasisymmetric functions\
and Schur expansion of unicellular LLT polynomials
---
[^1]
Introduction
============
Shareshian and Wachs [@SW] introduced the *chromatic quasisymmetric function* as a refinement of Stanley’s chromatic symmetric function introduced in [@S1] by considering an extra parameter $q$. Recall that a *proper coloring* of a simple graph $G=(V,E)$ is any function $\kappa : V \rightarrow \{1,2,3,\dots\}$ satisfying that $\kappa(u)\neq \kappa(v)$ for any $u, v\in V$ such that $\{u,v\}\in E$. For a simple graph $G$ with a vertex set $V$ of positive integers and a proper coloring $\kappa$, we denote the number of edges $\{i,j\}$ of $G$ with $i<j$ and $\kappa(i)<\kappa(j)$ by ${\rm asc}(\kappa)$. Given a sequence ${{\bf x}}=(x_1,x_2,\dots)$ of commuting indeterminates, the chromatic quasisymmetric function of $G$ is defined as $$\label{eqn:quasichro}
X_G({\bf x};q)=\sum_{\kappa}q^{\rm{asc}(\kappa)} x^{\kappa},$$ where the sum is over all proper colorings $\kappa$. The function $X_G({{\bf x}};1)$ is called the *chromatic symmetric function* and denoted by $X_G({{\bf x}})$.
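For concreteness, the definition can be evaluated by brute force once the colorings are truncated to a finite palette $\{1,\dots,N\}$. The following is a minimal Python sketch (the function name `chromatic_qsym` is ours, not notation from the literature), recording for each color-count vector the coefficients of $q^0, q^1, \dots$:

```python
from itertools import product

def chromatic_qsym(n, edges, num_colors):
    """Brute-force X_G(x; q) truncated to colorings V -> {1..num_colors}.

    Returns a dict mapping each color-count vector (wt_1, ..., wt_N) to the
    list of coefficients of q^0, q^1, ... for that monomial."""
    terms = {}
    for kappa in product(range(1, num_colors + 1), repeat=n):
        # proper coloring: endpoints of every edge get distinct colors
        if any(kappa[i - 1] == kappa[j - 1] for i, j in edges):
            continue
        asc = sum(1 for i, j in edges if i < j and kappa[i - 1] < kappa[j - 1])
        wt = tuple(kappa.count(c) for c in range(1, num_colors + 1))
        coeffs = terms.setdefault(wt, [])
        while len(coeffs) <= asc:
            coeffs.append(0)
        coeffs[asc] += 1
    return terms

# K_2 (a single edge) in two variables: X = (1+q) x_1 x_2, i.e. [2]_q e_2
X = chromatic_qsym(2, [(1, 2)], 2)
```

For the single edge the two proper colorings contribute $x_1x_2$ and $q\,x_1x_2$, recovering the truncation of $[2]_q e_2$.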
Shareshian and Wachs showed that if $G$ is the incomparability graph of a natural unit interval order, then the coefficients of $q^i$ in $X_G({\bf x};q)$ are symmetric functions and form a palindromic sequence. They also made a conjecture (Conjecture \[conj:SW\]) on the $e$-positivity and the $e$-unimodality of $X_G({\bf x};q)$, which specializes to the famous $e$-positivity conjecture of Stanley and Stembridge on chromatic symmetric functions [@S1; @SS]. In [@Guay], Guay-Paquet proved that if Conjecture \[conj:SW\] holds, then Stanley and Stembridge’s conjecture also holds. This result has put a spotlight on the incomparability graphs of natural unit interval orders. As a result, the $e$-positivity (and $e$-unimodality) of several subclasses of the incomparability graphs of natural unit interval orders has been proved; see [@AP; @CH; @Dah; @DW; @FHM; @HP; @SW].
On the other hand, if we remove the properness condition on the colorings, the sum over all colorings $\kappa$ gives *the unicellular LLT polynomials*. The LLT polynomials were defined by Lascoux, Leclerc and Thibon in [@LLT] and can be considered as a $q$-deformation of a product of Schur functions. The LLT polynomials are indexed by a tuple of skew diagrams; when each skew diagram consists of a single cell, the corresponding LLT polynomial is called *unicellular*. In [@GH], Grojnowski and Haiman proved the Schur positivity of LLT polynomials using Kazhdan–Lusztig theory, but there is no known combinatorial description of the Schur coefficients, except in some special cases. Since a combinatorial description of the Schur coefficients of LLT polynomials would yield a combinatorial description of the $q,t$-Kostka polynomials, which are the Schur coefficients of the modified Macdonald polynomials, a combinatorial formula for the Schur expansion of LLT polynomials is highly desirable. In the case of the unicellular LLT polynomials, thanks to the correspondence between unicellular LLT diagrams and Dyck diagrams (cf. [@AP], [@Lee]), abundant connections to other branches of mathematics, such as Hopf algebras and Hessenberg varieties, have been uncovered.
The precise relationship between chromatic quasisymmetric functions and the unicellular LLT polynomials is given by Carlsson and Mellit [@CM] via plethystic substitution. This relationship explains the parallel phenomena between the chromatic quasisymmetric functions and the unicellular LLT polynomials. In particular, the two sets of polynomials satisfy the same linear relations.
Recently, Lee [@Lee] introduced local linear relations on LLT polynomials and used them to prove the $k$-Schur positivity of LLT polynomials for $k=2$. In this work, we generalize and utilize these local linear relations. On the chromatic quasisymmetric function side, these linear relations yield a class of $e$-positive graphs, called *melting lollipop graphs*. On the unicellular LLT side, we prove Schur expansion formulas for the LLT diagrams corresponding to a certain class of graphs.
The paper is organized as follows. In Section \[sec:pre\] we provide the necessary definitions and known results used throughout the paper. In Section \[sec:local\] we generalize and utilize the local linear relations. Using these linear relations, in Section \[sec:lollipopG\] we establish the $e$-positivity and $e$-unimodality of the chromatic quasisymmetric functions corresponding to lollipop graphs, and we find a new class of $e$-positive and $e$-unimodal graphs, called melting lollipop graphs. In Section \[sec:LLT\_Schur\] we then obtain a combinatorial interpretation for the Schur expansion of the unicellular LLT polynomials corresponding to melting lollipop graphs. To do this, we first obtain Schur expansion formulas for the unicellular LLT polynomials corresponding to a set of basic graphs, including complete graphs and path graphs. In the final section, as an aside, we introduce a combinatorial way to compute the coefficients of the Schur functions indexed by hook shapes in LLT polynomials.
Preliminary {#sec:pre}
===========
In this section we collect definitions and notions which are required to develop our arguments. More details can be found in [@AP; @HHL05; @SW].
Semistandard Young tableaux and Schur functions
----------------------------------------------
A *partition* of $n$ is a nonincreasing sequence ${\lambda}=({\lambda}_1, {\lambda}_2, \dots, {\lambda}_\ell)$ of positive integers such that $\sum_i {\lambda}_i =n$ and we use the notation ${\lambda}\vdash n$ to denote that ${\lambda}$ is a partition of $n$. Each $\lambda_i>0$ is called a *part* of $\lambda$, and we define the *length* $\ell (\lambda)$ of $\lambda$ to be the number of parts in $\lambda$.
When two partitions $\lambda$ and $\mu$ satisfy $\mu_i \leq \lambda_i$ for all $i \geq 1$, we write $\mu \subseteq \lambda$ and define the *skew shape* $\lambda/\mu$ to be the set-theoretic difference $\lambda\setminus \mu$. The *diagram* of $\lambda/\mu$ is defined to be the set $\{ (i,j)~ :~ 1\le i\le \ell(\lambda), ~ \mu_i < j\le \lambda_i\}$. Throughout this paper, we frequently identify a skew shape $\lambda/\mu$ with its diagram, and the elements of the diagram of $\lambda/\mu$ are visualized as boxes in the plane.
*(Figure: an example of a skew diagram, drawn as boxes in the plane.)*
Let $\nu$ be a skew shape $\lambda/\mu$. The *content* of a cell $u=(i,j)$ in $\nu$ is the integer $c(u)=i-j$. A *(semistandard) Young tableau* of shape $\nu$ is a filling of $\nu$ with letters from $\mathbb{Z}_+$ such that the entries are weakly increasing along each row and strictly increasing along each column. For a Young tableau $T$, we define the *weight* $\text{wt}(T)$ of $T$ to be the sequence $(\text{wt}_1(T), \text{wt}_2(T), \dots)$, where $\text{wt}_i(T)$ denotes the number of occurrences of $i$ in $T$. In particular, $T$ is called *standard* if its weight is $(1,1,\dots,1)$. We denote the set of Young tableaux of shape $\nu$ by ${\operatorname{SSYT}}(\nu)$ and the set of standard Young tableaux of shape $\nu$ by ${\operatorname{SYT}}(\nu)$. Given $T\in {\operatorname{SSYT}}(\nu)$, define its corresponding monomial $x^T = \prod_{u\in\nu} x_{T(u)}.$ Then the *Schur functions* are defined as follows: $$s_\lambda = \sum_{T \in {\operatorname{SSYT}}(\lambda)} {x}^T \, .$$ For partitions $\lambda, \mu$ and $\nu$, the *Littlewood-Richardson (LR) coefficients* $c_{\mu}^{\nu/\lambda}$ are the structure constants appearing in the Schur expansion of the product of two Schur functions $s_\lambda$ and $s_\mu$, that is, $$s_\lambda \cdot s_\mu = \sum_\nu c_\mu^{\nu/\lambda}s_\nu \, .$$ It is a well-established fact that the LR coefficients $c_\mu^{\nu/\lambda}$ are nonnegative integers, and their nonnegativity can be interpreted combinatorially by virtue of *Schützenberger’s jeu de taquin* sliding process. This sliding process is a combinatorial algorithm taking a Young tableau to another Young tableau with the same weight but a different shape, and it is carried out by applying the slides described in Figure \[jeu de taquin\] in succession.
*(Figure \[jeu de taquin\]: the two elementary jeu de taquin slides, one applied when $a \geq b$ and the other when $a < b$.)*
Given a Young tableau $T$ of skew shape, one can apply jeu de taquin slides to $T$ iteratively to get a Young tableau of partition shape, which is called the *rectification* of $T$ and denoted by $\text{Rect}(T)$. Let $R_\lambda$ be the standard Young tableau of shape $\lambda$, called the *row tableau* of shape $\lambda$, whose $i$th row consists of the entries $\lambda_0 + \cdots + \lambda_{i-1} + 1, \, \lambda_0 + \cdots + \lambda_{i-1} + 2, \,
\cdots , \, \lambda_0 + \cdots + \lambda_{i-1} + \lambda_i$ from left to right for every $1 \leq i \leq \ell$. Here $\lambda_0$ is set to be $0$. Then the LR coefficient $c_\mu^{\nu/\lambda}$ equals the number of standard Young tableaux $T$ of shape $\nu/\lambda$ satisfying $\text{Rect}(T) = R_\mu$.
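The tableau objects above are small enough to enumerate directly. The following Python sketch (function names `ssyt` and `row_tableau` are ours) generates all semistandard Young tableaux of a partition shape with bounded entries and builds the row tableau $R_\lambda$:

```python
from itertools import product

def ssyt(shape, max_entry):
    """All semistandard Young tableaux of partition `shape` with entries in
    {1..max_entry}: rows weakly increase, columns strictly increase."""
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    tableaux = []
    for filling in product(range(1, max_entry + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        rows_ok = all(T[(r, c)] <= T[(r, c + 1)]
                      for (r, c) in cells if (r, c + 1) in T)
        cols_ok = all(T[(r, c)] < T[(r + 1, c)]
                      for (r, c) in cells if (r + 1, c) in T)
        if rows_ok and cols_ok:
            tableaux.append(T)
    return tableaux

def row_tableau(shape):
    """The row tableau R_lambda: the i-th row is filled left to right with
    consecutive integers, continuing from the previous row."""
    rows, start = [], 1
    for part in shape:
        rows.append(list(range(start, start + part)))
        start += part
    return rows
```

For instance, `len(ssyt((2, 1), 3))` returns $8$, the number of monomials (with multiplicity) of $s_{(2,1)}(x_1,x_2,x_3)$, and `row_tableau((3, 2))` gives the rows $[1,2,3]$ and $[4,5]$.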
Natural unit interval orders {#sec:nui}
----------------------------
For a positive integer $n$, we set $[n]=\{1,2,\dots,n\}$ and $[n]_q=1+q+\cdots+q^{n-1}$. Let $P$ be a poset. The *incomparability graph* ${\rm inc}(P)$ of $P$ is a graph which has as vertices the elements of $P$, with edges connecting pairs of incomparable elements. For a list ${\bf m}:=(m_1, m_2, \dots, m_{n-1})$ of nondecreasing positive integers satisfying that $i\leq m_i \leq n$ for each $i\in[n-1]$, the corresponding *natural unit interval order* $P(\bf{m})$ is the poset on $[n]$ with the order relation given by $i<_{P(\bf{m})} j$ if $i<n$ and $j\in \{m_i +1, m_i +2, \dots, n\}$. For example, Figure \[fig:lollipop\] shows the graph ${\rm inc}(P)$ for the natural unit interval order $P=P(2,3,4,5,6,11,11,11,11,11)$.
*(Figure \[fig:lollipop\]: the incomparability graph of $P(2,3,4,5,6,11,11,11,11,11)$, a path on the vertices $1,\dots,6$ attached to a complete graph on the vertices $6,\dots,11$.)*
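The incomparability graph of $P({\bf m})$ can be generated directly from the list ${\bf m}$: for $i<j$, the pair is comparable exactly when $j>m_i$, so $\{i,j\}$ is an edge if and only if $j\le m_i$. A minimal Python sketch (the function name is ours):

```python
def inc_graph(m):
    """Edges of the incomparability graph of the natural unit interval order
    P(m_1, ..., m_{n-1}): for i < j, the pair {i, j} is an edge iff j <= m_i."""
    n = len(m) + 1
    return [(i, j) for i in range(1, n)
            for j in range(i + 1, n + 1) if j <= m[i - 1]]

# the example from the text: the lollipop graph in Figure [fig:lollipop]
edges = inc_graph((2, 3, 4, 5, 6, 11, 11, 11, 11, 11))
```

For this example one gets the $5$ path edges together with the $\binom{6}{2}=15$ edges of the complete graph on $\{6,\dots,11\}$, hence $20$ edges in total.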
It is well known that the set of *elementary symmetric functions* $\{e_{\lambda}\,:\, {\lambda}\vdash n\}$ is a basis of the subspace $\Lambda^n$ of symmetric functions of degree $n$. Given a basis $\{b_{\lambda}\}$ of $\Lambda^n$, a symmetric function $f({{\bf x}})\in\Lambda^n$ is said to be $b$-[*positive*]{} if the expansion of $f({\bf x})$ in the basis $\{b_{{\lambda}}\}$ has nonnegative coefficients. One of the main motivations for our study is to understand the following conjecture.
\[conj:SW\] For the incomparability graph $G$ of a natural unit interval order, if $X_G({\bf x};q)=\sum_{i=0}^{m}a_i({\bf x})q^i$ then $a_{i}({\bf x})$ is $e$-positive for all $i$, and $a_{i+1}({\bf x})-a_{i}({\bf x})$ is $e$-positive whenever $0\leq i < \frac{m-1}{2}$.
We denote the complete graph with $m$ vertices by $K_m$ and the path with $n$ vertices and the natural labelling by $P_n$. The conjecture above is known to hold for both $K_{m}$ and $P_{n}$.
[@SW Table 1]\[prop:basic\] Let $m$ and $n$ be nonnegative integers.
1. The chromatic quasisymmetric function of a complete graph $K_m$ is $$X_{K_m}({\bf x};q)=[m]_q!e_m.$$
2. The chromatic quasisymmetric function of a path $P_n$ is $$X_{P_n}({\bf x};q)=\sum_{m=1}^{\lfloor \frac{n+1}{2} \rfloor} \sum_{\substack{k_1,\dots,k_m \geq 2\\ \sum k_i=n+1}} e_{(k_1-1,k_2,\dots,k_m)} q^{m-1} \prod_{i=1}^{m}[k_i-1]_q.$$
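Part 1 of the proposition can be checked by brute force: every proper coloring of $K_m$ with colors in $[m]$ is a permutation, so the coefficient of $x_1\cdots x_m$ in $X_{K_m}({\bf x};q)$ is $\sum_{\sigma} q^{{\rm asc}(\sigma)}$, which should equal $[m]_q!$. A sketch (function names ours):

```python
from itertools import permutations

def k_graph_coeff(m):
    """Coefficient of x_1...x_m in X_{K_m}(x; q) as a list of q-coefficients:
    sum q^{asc} over all permutation colorings of K_m."""
    coeffs = [0] * (m * (m - 1) // 2 + 1)
    for kappa in permutations(range(1, m + 1)):
        asc = sum(1 for i in range(m) for j in range(i + 1, m)
                  if kappa[i] < kappa[j])
        coeffs[asc] += 1
    return coeffs

def q_factorial(m):
    """[m]_q! = [1]_q [2]_q ... [m]_q as a coefficient list."""
    poly = [1]
    for k in range(2, m + 1):
        new = [0] * (len(poly) + k - 1)
        for a, ca in enumerate(poly):
            for b in range(k):        # [k]_q has coefficients 1, ..., 1
                new[a + b] += ca
        poly = new
    return poly
```

For $m=3$ both computations give $1+2q+2q^2+q^3$.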
LLT polynomials
---------------
LLT polynomials are a family of symmetric functions introduced by Lascoux, Leclerc and Thibon in [@LLT] which naturally arise in the description of the power-sum plethysm operators on symmetric functions. The original definition of LLT polynomials uses the *cospin* statistic on ribbon tableaux, but Haiman and Bylund found an equivalent statistic, called *inv*, defined on $n$-tuples of semistandard Young tableaux of various skew shapes. Here, we use the inversion statistic of Haiman and Bylund to define the LLT polynomials used in this paper. For the proof that the two definitions agree, see [@HHLRU].
We consider a $k$-tuple of skew diagrams ${\boldsymbol{\nu}}= (\nu^{(1)},\dots, \nu^{(k)})$ and let $${\operatorname{SSYT}}({\boldsymbol{\nu}}) = {\operatorname{SSYT}}(\nu^{(1)})\times \cdots \times{\operatorname{SSYT}}(\nu^{(k)}).$$ Given ${\boldsymbol{T}}= (T^{(1)},\dots, T^{(k)})\in{\operatorname{SSYT}}({\boldsymbol{\nu}})$, we set $$x^{{\boldsymbol{T}}}=\prod_{i=1}^k x^{T^{(i)}}.$$ We define the *content reading order* on the cells in $\bigsqcup{\boldsymbol{\nu}}$ so that the bottom-left-most cell comes first and the order increases upward along diagonals, moving from left to right. Given ${\boldsymbol{T}}= (T^{(1)},\dots, T^{(k)})\in{\operatorname{SSYT}}({\boldsymbol{\nu}})$, we obtain the *content reading word* by reading the entries according to the content reading order. We say a pair of entries $T^{(i)}(u) > T^{(j)}(v)$ form an *inversion* if either
- $i<j$ and $c(u)=c(v)$, or
- $i> j$ and $c(u)=c(v)+1$.
*(Figure: the two inversion configurations for cells $u$ and $v$, one with $c(u)=c(v)$ and one with $c(u)=c(v)+1$.)*
Let ${\operatorname{inv}}({\boldsymbol{T}})$ be the number of inversions in ${\boldsymbol{T}}$.
The LLT polynomial indexed by ${\boldsymbol{\nu}}= (\nu^{(1)},\dots, \nu^{(k)})$ is $${\operatorname{LLT}}_{{\boldsymbol{\nu}}}({\bf x};q) = \sum_{{\boldsymbol{T}}\in{\operatorname{SSYT}}({\boldsymbol{\nu}})}q^{{\operatorname{inv}}({\boldsymbol{T}})} x^{{\boldsymbol{T}}}.$$
Relation between the LLT polynomials and chromatic quasisymmetric functions {#subsec:relation}
---------------------------------------------------------------------------
From now on, we only consider unicellular LLT polynomials, i.e., those for which each $\nu^{(i)}$ consists of a single cell. In [@AP], and independently in [@Lee], a bijective correspondence between unicellular LLT diagrams and skew shapes contained in a staircase partition was introduced. We explain it with an example.
\[ex:dg\] In Figure \[fig:example\_nu\], the labelling in the figure on the left indicates the content reading order of the given LLT diagram ${\boldsymbol{\nu}}$, and we identify those numbers with the numbers on the main diagonal in the figure on the right-hand side.
*(Figure \[fig:example\_nu\]: a unicellular LLT diagram ${\boldsymbol{\nu}}$ with its cells numbered $1,\dots,7$ in the content reading order, and the corresponding Dyck diagram $\pi_{{\boldsymbol{\nu}}}$ inside the staircase, with the crossed-out cells marked $\mathsf{X}$.)*
Starting from the top row (that is, from the largest number in the reading order), we cross out the cells in the same row corresponding to the numbers which cannot form an inversion pair with the given number of the row. In the end we obtain a (top-left justified) skew shape contained in a staircase shape.
We call the skew shape coming from an LLT diagram ${\boldsymbol{\nu}}$ the *Dyck diagram* and denote it by $\pi_{{\boldsymbol{\nu}}}$, since its outer boundary defines a Dyck path. Reading off the number of cells in the $i$th row from the top defines the *area sequence*, whose last entry is $0$. For instance, the area sequence of the Dyck diagram in Figure \[fig:example\_nu\] is $a=(2,2,2,1,1,1,0)$. We denote by $a_{{\boldsymbol{\nu}}}$ the area sequence of the Dyck diagram corresponding to the LLT diagram ${\boldsymbol{\nu}}$.
Given the Dyck diagram corresponding to an LLT diagram ${\boldsymbol{\nu}}$, we can associate a graph with vertex set $V=\{\nu^{(i)} :~ 1\le i \le n\}$ and the following edge set: label the vertices $\nu^{(i)}$ according to the reverse of the content reading order, and for $i<j$, put $(i,j)\in E$ if the cell in the column labeled $i$ and the row labeled $j$ is contained in the Dyck diagram. In other words, replace each diagonal entry $i$ of the Dyck diagram by $n+1-i$; with this labelling, $(i,j)\in E$ if the cell in row $i$ and column $j$ is in the Dyck diagram. We denote this graph by $G_{{\boldsymbol{\nu}}}$. We also use the notation ${\boldsymbol{\nu}}_G$ for the LLT diagram corresponding to a graph $G$. If there is no risk of confusion, we write ${\operatorname{LLT}}_{G}({\bf x};q)$ for ${\operatorname{LLT}}_{{\boldsymbol{\nu}}_G}({\bf x};q)$.
We keep considering the LLT diagram given in Figure \[fig:example\_nu\]. To obtain the graph corresponding to the LLT diagram ${\boldsymbol{\nu}}$, we relabel the main diagonal in reverse order (as in the left-hand side of Figure \[fig:example\_G\]), and then draw edges $(i,j)$ for $i<j$ whenever the cell in row $i$ and column $j$ is contained in the Dyck diagram $\pi_{{\boldsymbol{\nu}}}$. Note that the $i$th cell of ${\boldsymbol{\nu}}$ in the reading order corresponds to the $(n+1-i)$th vertex of $G_{{\boldsymbol{\nu}}}$.
*(Figure \[fig:example\_G\]: the Dyck diagram $\pi_{{\boldsymbol{\nu}}}$ with the main diagonal relabelled in reverse order, each cell marked with the edge $(i,j)$ it contributes, and the resulting graph $G_{{\boldsymbol{\nu}}}$.)*
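The passage from an area sequence to the graph $G_{{\boldsymbol{\nu}}}$ can be made concrete: in the reversed labelling, the row labeled $i$ (counted from the top) carries cells in the columns $i+1,\dots,i+a_i$, so the edges are exactly the pairs $\{i,j\}$ with $i<j\le i+a_i$. A minimal Python sketch of this reading (the function name is ours):

```python
def graph_from_area_sequence(a):
    """Edges of G_nu from the area sequence a = (a_1, ..., a_n) of its Dyck
    diagram, rows read from the top with the reversed diagonal labelling:
    row i contributes the edges {i, j} for i < j <= i + a_i."""
    return [(i, j) for i in range(1, len(a) + 1)
            for j in range(i + 1, i + a[i - 1] + 1)]

# the running example: a = (2, 2, 2, 1, 1, 1, 0)
edges = graph_from_area_sequence((2, 2, 2, 1, 1, 1, 0))
```

For the running example this reproduces the nine edges shown in Figure \[fig:example\_G\].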
We remark that given a coloring $\kappa$ of $G_{{\boldsymbol{\nu}}}$, the statistic $\rm{asc}(\kappa)$ is consistent with the inversion statistic ${\operatorname{inv}}(\boldsymbol{S})$ obtained from the Dyck diagram $\pi_{{\boldsymbol{\nu}}}$, where $\boldsymbol{S}$ is a word of length $n$, written in the main diagonal of $\pi_{{\boldsymbol{\nu}}}$ from the south-west cell to the north-east cell (as we did in Example \[ex:dg\]). In fact, every unicellular LLT polynomial can be written as $${\operatorname{LLT}}_{G}({\bf x};q) =\sum_{\kappa :V(G)\rightarrow \mathbb{Z}_{>0}} q^{\rm{asc}(\kappa)}x^{\kappa}$$ for some $G$ which is the incomparability graph of a natural unit interval order. By comparing to the definition of the chromatic quasisymmetric function $X_G ({\bf x};q)$, we can observe that the only difference is the *proper* condition on the coloring $\kappa$. The precise relationship via *plethysm* is given in [@CM Proposition 3.4].
[@CM Proposition 3.4] Let $G$ be the incomparability graph of a natural unit interval order with $n$ elements. Then we have $$X_G ({\bf x};q)= (q-1)^{-n}{\operatorname{LLT}}_G [(q-1)X;q].$$
Note that the square bracket on the right-hand side of the above equation denotes *plethystic substitution*, with the convention that $X=x_1 + x_2 +\cdots$. For a detailed explanation, we refer the reader to [@Hai2001; @Hag08].
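The coloring description of ${\operatorname{LLT}}_G$ above differs from $X_G$ only in dropping the properness condition, so it can be computed by the same brute force after truncating to finitely many colors (a minimal sketch; the function name is ours):

```python
from itertools import product

def unicellular_llt(n, edges, num_colors):
    """LLT_G(x; q) truncated to colorings V -> {1..num_colors}: the same sum
    as X_G, but over ALL colorings, proper or not."""
    terms = {}
    for kappa in product(range(1, num_colors + 1), repeat=n):
        asc = sum(1 for i, j in edges if i < j and kappa[i - 1] < kappa[j - 1])
        wt = tuple(kappa.count(c) for c in range(1, num_colors + 1))
        coeffs = terms.setdefault(wt, [0])
        while len(coeffs) <= asc:
            coeffs.append(0)
        coeffs[asc] += 1
    return terms

# single edge: LLT = x_1^2 + x_2^2 + (1+q) x_1 x_2 in two variables
L = unicellular_llt(2, [(1, 2)], 2)
```

Compared with `X = (1+q) x_1 x_2` for the same graph, the improper colorings contribute the extra square terms.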
Due to this relation between the LLT polynomials and the chromatic quasisymmetric functions, we have the following equivalence of linear relations.
[@AP Proposition 55]\[prop:AP\] Let $G_0,G_1,\dots,G_{\ell}$ be the incomparability graphs of natural unit interval orders. Then $$\sum_{i=0}^{\ell}c_i(q)X_{G_i}({\bf x};q)=0 \qquad\text{if and only if}\qquad \sum_{i=0}^{\ell}c_i(q){\operatorname{LLT}}_{G_i}({\bf x};q)=0,$$ for any coefficients $c_i(q)$.
The $k$-deletion property {#sec:local}
=========================
Recently, Lee [@Lee] provided a local linear relation between some unicellular LLT polynomials.
[@Lee Theorem 3.4](Local linear relation)\[thm:Lee\] For an area sequence $a=(a_1,a_2,\dots,a_n)$ and $i$ such that $a_{i-1}+1\leq a_{i}$ (we set $a_0=1$), let $a^{0}=a,a^{1},a^{2}$ be area sequences defined by $a^{z}_j=a_j$ if $j\neq i$ and $a^{z}_i=a_i-z$ for $z=0,1,2$. If $a_{i+a_i-1}=a_{i+a_i}+1$, then $${\operatorname{LLT}}_{{\boldsymbol{\nu}}^0}({\bf x};q)+q{\operatorname{LLT}}_{{\boldsymbol{\nu}}^2}({\bf x};q)=(1+q){\operatorname{LLT}}_{{\boldsymbol{\nu}}^1}({\bf x};q),$$ where ${\boldsymbol{\nu}}^{z}$ is a unicellular LLT diagram satisfying $a_{{\boldsymbol{\nu}}^{z}}=a^{z}$ for $z=0,1,2$.
Equivalently, if we let $G_{z}$ be the graph ${\boldsymbol{\nu}}^{z}_G$ for $z=0,1,2$, then $$X_{G_0}({\bf x};q)+q X_{G_2}({\bf x};q)=(1+q) X_{G_1}({\bf x};q).$$
Let $a=(2,3,3,2,1,1,0)$ be an area sequence and let $i=2$. Since $a_1+1\leq a_2$, $a^1=(2,2,3,2,1,1,0)$ and $a^2=(2,1,3,2,1,1,0)$ are well-defined. We note that $a_{2+a_2-1}=a_4=2$ and $a_{2+a_2}=a_5=1$ so that $a_{2+a_2-1}=a_{2+a_2}+1$. Therefore, $${\operatorname{LLT}}_{{\boldsymbol{\nu}}^0}({\bf x};q)+q{\operatorname{LLT}}_{{\boldsymbol{\nu}}^2}({\bf x};q)=(1+q){\operatorname{LLT}}_{{\boldsymbol{\nu}}^1}({\bf x};q),$$ where ${\boldsymbol{\nu}}^{0}$, ${\boldsymbol{\nu}}^{1}$, ${\boldsymbol{\nu}}^{2}$ are as in Figure \[fig:localex\].
*(Figure \[fig:localex\]: the three unicellular LLT diagrams ${\boldsymbol{\nu}}^0$, ${\boldsymbol{\nu}}^1$, ${\boldsymbol{\nu}}^2$, obtained by moving the cell $u$ along its diagonal relative to the cells $v_1$ and $v_2$.)*
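Using the coloring description of ${\operatorname{LLT}}_G$ from Section \[subsec:relation\] and the reading of an area sequence as the edge set $\{i,j\}$ with $i<j\le i+a_i$ (in the reversed labelling), the relation in this example can be verified numerically over colorings with two colors. A brute-force sketch (function names are ours):

```python
from itertools import product

def llt_terms(a, num_colors):
    """LLT_G truncated to colorings V -> {1..num_colors}, where G is the
    graph of the Dyck diagram with area sequence a: edges {i,j}, i<j<=i+a_i."""
    n = len(a)
    edges = [(i, j) for i in range(1, n + 1)
             for j in range(i + 1, i + a[i - 1] + 1)]
    terms = {}
    for kappa in product(range(1, num_colors + 1), repeat=n):
        asc = sum(1 for i, j in edges if kappa[i - 1] < kappa[j - 1])
        wt = tuple(kappa.count(c) for c in range(1, num_colors + 1))
        terms.setdefault(wt, [0] * (len(edges) + 1))[asc] += 1
    return terms

def plus(p1, s1, p2, s2, size=32):
    """Coefficientwise q^{s1}*p1 + q^{s2}*p2, coefficient lists padded to `size`."""
    out = {}
    for t, s in ((p1, s1), (p2, s2)):
        for wt, coeffs in t.items():
            acc = out.setdefault(wt, [0] * size)
            for d, c in enumerate(coeffs):
                acc[d + s] += c
    return out

# the area sequences of nu^0, nu^1, nu^2 from the example (i = 2)
a0, a1, a2 = (2, 3, 3, 2, 1, 1, 0), (2, 2, 3, 2, 1, 1, 0), (2, 1, 3, 2, 1, 1, 0)
lhs = plus(llt_terms(a0, 2), 0, llt_terms(a2, 2), 1)   # LLT_0 + q * LLT_2
rhs = plus(llt_terms(a1, 2), 0, llt_terms(a1, 2), 1)   # (1 + q) * LLT_1
```

Agreement of `lhs` and `rhs`, monomial by monomial, checks the relation in this two-variable truncation.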
We remark that the condition $a_{i+a_i-1}=a_{i+a_i}+1$ ensures that, between $v_1$ and $v_2$, no cell lies on the diagonal immediately to the left of the diagonal containing $v_1$ and $v_2$ in Figure \[fig:localex\]. If the condition is not satisfied, then the linear relation does not hold.
*(Figure \[fig:ctex\]: an LLT diagram with area sequence $(2,1,1,0)$ in which the cell $x$ violates the condition of Theorem \[thm:Lee\].)*
For example, the LLT diagram in Figure \[fig:ctex\] has the area sequence $a= (2,1,1,0)$, which satisfies the condition $a_{i-1}+1\le a_i$ for $i=1$. However, due to the existence of the cell containing $x$, the condition $a_{i+a_i -1}=a_{i+a_i}+1$ fails for $i=1$: $a_{1+a_1 -1}= a_2 =1$ while $a_{1+a_1}+1=a_{3}+1=2$. Thus, the linear relation does not hold among the LLT polynomials corresponding to the LLT diagrams ${\boldsymbol{\nu}}^0$, ${\boldsymbol{\nu}}^1$ and ${\boldsymbol{\nu}}^2$, where ${\boldsymbol{\nu}}^0$ is given in Figure \[fig:ctex\], and ${\boldsymbol{\nu}}^1$ and ${\boldsymbol{\nu}}^2$ are obtained by moving the cell $u$ upward along its diagonal so that the cell $v_1$, respectively both cells $v_1$ and $v_2$, lie to the lower left of the cell $u$.
We note that for the incomparability graphs of natural unit interval orders with some restrictions, Theorem \[thm:Lee\] is a refinement of the triple-deletion property:
[@OS Theorem 3.1](Triple-deletion property) \[prop:OS\]Let $G$ be a graph with edge set $E$ such that $e_1,e_2,e_3\in E$ form a triangle. Then $$X_{G}({\bf x})=X_{G-\{e_1\}}({\bf x})+X_{G-\{e_2\}}({\bf x})-X_{G-\{e_1,e_2\}}({\bf x}).$$
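The triple-deletion property can likewise be checked by brute force on a small graph, say a triangle with a pendant edge, by counting proper colorings with a bounded palette (a sketch; function names are ours):

```python
from itertools import product

def chromatic_sym(n, edges, num_colors):
    """X_G(x) truncated to colorings V -> {1..num_colors} (the q = 1 case):
    maps each color-count vector to its number of proper colorings."""
    terms = {}
    for kappa in product(range(1, num_colors + 1), repeat=n):
        if any(kappa[i - 1] == kappa[j - 1] for i, j in edges):
            continue
        wt = tuple(kappa.count(c) for c in range(1, num_colors + 1))
        terms[wt] = terms.get(wt, 0) + 1
    return terms

# triangle on {1,2,3} plus a pendant edge {3,4}; e1, e2, e3 form the triangle
e1, e2, e3 = (1, 2), (1, 3), (2, 3)
E = [e1, e2, e3, (3, 4)]

def drop(removed):
    return [e for e in E if e not in removed]

lhs = chromatic_sym(4, E, 3)
A = chromatic_sym(4, drop({e1}), 3)
B = chromatic_sym(4, drop({e2}), 3)
C = chromatic_sym(4, drop({e1, e2}), 3)
rhs = {k: A.get(k, 0) + B.get(k, 0) - C.get(k, 0)
       for k in set(A) | set(B) | set(C)}
```

The nonzero entries of `rhs` coincide with `lhs`, as the proposition predicts.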
We can further generalize the linear relations by applying Theorem \[thm:Lee\] iteratively.
\[thm:local\] For an area sequence $a=(a_1,a_2,\dots,a_n)$, $2\le \ell\le n-1 $, and $i$ with $a_{i-1}+\ell-1\leq a_{i}$, let $a^{0}=a,a^{1},\dots,a^{\ell}$ be area sequences defined by $a^{z}_j=a_j$ if $j\neq i$ and $a^{z}_i=a_i-z$ for $z=0,1,\dots,\ell$. If $$a_{i+a_i-1}=a_{i+a_i}+1,\quad a_{i+a_i-2}=a_{i+a_i-1}+1,\quad \dots,\quad a_{i+a_i-\ell+1}=a_{i+a_i-\ell+2}+1,$$ then, for $1\le k\le \ell -1$,
1. ${\operatorname{LLT}}_{{\boldsymbol{\nu}}^0}({\bf x};q)+q[k]_q {\operatorname{LLT}}_{{\boldsymbol{\nu}}^{k+1}}({\bf x};q)=[k+1]_q {\operatorname{LLT}}_{{\boldsymbol{\nu}}^{k}}({\bf x};q)$,\
2. $[\ell-k]_q {\operatorname{LLT}}_{{\boldsymbol{\nu}}^0}({\bf x};q)+q^{\ell-k}[k]_q {\operatorname{LLT}}_{{\boldsymbol{\nu}}^{\ell}}({\bf x};q)=[\ell]_q {\operatorname{LLT}}_{{\boldsymbol{\nu}}^k}({\bf x};q),$
where ${\boldsymbol{\nu}}^{z}$ is a unicellular LLT diagram satisfying that $a_{{\boldsymbol{\nu}}^{z}}=a^{z}$ for $z=0,1,\dots,\ell$.
Equivalently, for $z=0,1,\dots,\ell$, if we simply denote $G_{{\boldsymbol{\nu}}^{z}}$ by $G_{z}$, then for $1\le k\le \ell -1$,
1. $X_{G_0}({\bf x};q)+q[k]_q X_{G_{k+1}}({\bf x};q)=[k+1]_q X_{G_{k}}({\bf x};q)$,\
2. $[\ell-k]_q X_{G_0}({\bf x};q)+q^{\ell-k}[k]_q X_{G_{\ell}}({\bf x};q)=[\ell]_q X_{G_k}({\bf x};q).$
We note that for each $z\in \{0,1,\dots,\ell-2\}$, the area sequence $a^z$ satisfies the condition of Theorem \[thm:Lee\]. Therefore, we have $$\begin{aligned}
X_{G_0}({\bf x};q)+q X_{G_2}({\bf x};q)&=&[2]_q X_{G_1}({\bf x};q),\\
X_{G_1}({\bf x};q)+q X_{G_3}({\bf x};q)&=&[2]_q X_{G_2}({\bf x};q),\\
&\vdots&\\
X_{G_{\ell-2}}({\bf x};q)+q X_{G_{\ell}}({\bf x};q)&=&[2]_q X_{G_{\ell-1}}({\bf x};q).\end{aligned}$$
We prove the theorem by mathematical induction.
1. The case when $k=1$ is equivalent to Theorem \[thm:Lee\]. So we may assume that this statement holds for $1\leq k \leq \ell-2$; $$X_{G_0}({\bf x};q)+q[k]_q X_{G_{k+1}}({\bf x};q)=[k+1]_q X_{G_k}({\bf x};q).$$ From Theorem \[thm:Lee\], it follows that $$X_{G_0}({\bf x};q)+q[k]_q X_{G_{k+1}}({\bf x};q)=[k+1]_q([2]_q X_{G_{k+1}}({\bf x};q)-q X_{G_{k+2}}({\bf x};q)).$$ Then we obtain $$\begin{aligned}
X_{G_0}({\bf x};q)+q[k+1]_q X_{G_{k+2}}({\bf x};q)&=&([2]_q[k+1]_q-q[k]_q)X_{G_{k+1}}({\bf x};q)\\
&=&[k+2]_q X_{G_{k+1}}({\bf x};q),\end{aligned}$$ which completes the induction step and the proof.
2. By substituting $m=\ell-k$, the equation is equivalent to the following: $$[m]_q X_{G_0}({\bf x};q)+q^{m}[\ell-m]_q X_{G_{\ell}}({\bf x};q)=[\ell]_q X_{G_{\ell-m}}({\bf x};q).$$ If $m=1$, then it is true by (a$'$). Now assume that $$[m]_q X_{G_0}({\bf x};q)+q^{m}[\ell-m]_q X_{G_{\ell}}({\bf x};q)=[\ell]_q X_{G_{\ell-m}}({\bf x};q)$$ holds for some $m<\ell-1$. Then by (a$'$) we have $$[m]_q X_{G_0}({\bf x};q)+q^{m}[\ell-m]_q X_{G_{\ell}}({\bf x};q)=\frac{[\ell]_q}{q[\ell-m-1]_q}([\ell-m]_q X_{G_{\ell-m-1}}({\bf x};q)-X_{G_0}({\bf x};q)),$$ or equivalently, $$\begin{aligned}
&&[\ell]_q[\ell-m]_q X_{G_{\ell-m-1}}({\bf x};q)\\
&& \qquad \qquad \qquad =(q[\ell-m-1]_q[m]_q+[\ell]_q)X_{G_0}({\bf x};q)+q^{m+1}[\ell-m-1]_q[\ell-m]_q X_{G_{\ell}}({\bf x};q).\end{aligned}$$ Since $[\ell]_q=[m]_q+q^m[\ell-m]_q$, $$q[\ell-m-1]_q[m]_q+[\ell]_q=q[\ell-m-1]_q[m]_q+([m]_q+q^m[\ell-m]_q)=[m+1]_q[\ell-m]_q.$$ From this, we have $$[\ell]_q X_{G_{\ell-m-1}}({\bf x};q)=[m+1]_q X_{G_0}({\bf x};q)+q^{m+1}[\ell-m-1]_q X_{G_{\ell}}({\bf x};q),$$ as desired.
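The $q$-integer manipulations used in the induction above are easy to verify mechanically. The following sketch (plain Python with exact rational arithmetic; the helper `qint` is ours, not from the paper) checks the identities $[\ell]_q=[m]_q+q^m[\ell-m]_q$ and $q[\ell-m-1]_q[m]_q+[\ell]_q=[m+1]_q[\ell-m]_q$ at several rational values of $q$:

```python
from fractions import Fraction

def qint(n, q):
    """q-integer [n]_q = 1 + q + ... + q^(n-1); [0]_q = 0."""
    return sum(q**i for i in range(n))

# Exact checks at a few rational values of q and a range of l, m.
for q in (Fraction(1), Fraction(2), Fraction(3, 5)):
    for l in range(2, 8):
        for m in range(1, l):
            assert qint(l, q) == qint(m, q) + q**m * qint(l - m, q)
            assert (q * qint(l - m - 1, q) * qint(m, q) + qint(l, q)
                    == qint(m + 1, q) * qint(l - m, q))
```

Since both sides are polynomials in $q$ of bounded degree, agreement at enough values is equivalent to the polynomial identity.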
$e$-expansion of chromatic quasisymmetric functions related to certain graphs {#sec:lollipopG}
=============================================================================
For two graphs $G$ and $H$ with vertex sets $\{v_1,v_2,\dots,v_n\}$ and $\{v_n,v_{n+1},\dots,v_{n+m}\}$, respectively, let $G+H$ be the graph with $V(G+H)=\{v_{1},v_{2},\dots,v_{n+m}\}$ and $E(G+H)=E(G)\cup E(H)$.\
A graph $P_{n+1}+K_m$ is called a [*lollipop graph*]{} $L_{m,n}$ on $[m+n]$, where $P_{n+1}$ is a path on $[n+1]$ and $K_m$ is a complete graph with vertices $\{n+1,n+2,\dots,n+m\}$. We note that the lollipop graph $L_{m,n}$ on $[m+n]$ is the incomparability graph of the natural unit interval order $P(m_1,m_2,\dots,m_{m+n-1})$ such that $m_i=i+1$ for $i\leq n$ and $m_{i}=n+m$ for $i> n$. Figure \[fig:lollipop\] shows the lollipop graph $L_{6,5}$.
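To make the correspondence concrete, here is a small Python sketch (function names are ours) that builds the edge set of $L_{m,n}$ both directly and as the incomparability graph of $P(m_1,\dots,m_{m+n-1})$ with $m_i=i+1$ for $i\leq n$ and $m_i=n+m$ for $i>n$, and checks that they agree:

```python
from itertools import combinations

def unit_interval_graph(m_seq):
    """Incomparability graph of P(m_1,...,m_{N-1}):
    vertex i is joined to every j with i < j <= m_i."""
    N = len(m_seq) + 1
    return {(i, j) for i in range(1, N)
            for j in range(i + 1, m_seq[i - 1] + 1)}

def lollipop_edges(m, n):
    """Edge set of L_{m,n}: a path on [n+1] glued to K_m on {n+1,...,n+m}."""
    path = {(i, i + 1) for i in range(1, n + 1)}
    clique = set(combinations(range(n + 1, n + m + 1), 2))
    return path | clique

m, n = 6, 5
m_seq = [i + 1 if i <= n else n + m for i in range(1, n + m)]
assert unit_interval_graph(m_seq) == lollipop_edges(m, n)
```

For $L_{6,5}$ this gives the expected $5+\binom{6}{2}=20$ edges.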
Recently, Dahlberg and van Willigenburg [@DW] gave an explicit $e$-positive formula for the chromatic symmetric function of a lollipop graph by iterating the triple-deletion property.
[@DW Proposition 10] \[prop:lollipop\]For $m\geq 2$ and $n\geq 0$, $$X_{L_{m,n}}({\bf x})=\frac{(m-1)!}{(m+n-1)!}X_{K_{m+n}}({\bf x})+\sum_{i=0}^{n-1}\frac{(m+i-1)}{m(m+1)\cdots(m+i)} X_{P_{n-i}}({\bf x}) X_{K_{m+i}}({\bf x}).$$
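Since specializing $x_1=\dots=x_k=1$ (and all other variables to $0$) turns a chromatic symmetric function into the chromatic polynomial evaluated at $k$, the proposition above admits a quick numerical sanity check. The sketch below relies on the standard closed forms $\chi_{K_n}(k)=k(k-1)\cdots(k-n+1)$, $\chi_{P_n}(k)=k(k-1)^{n-1}$, and $\chi_{L_{m,n}}(k)=\chi_{K_m}(k)(k-1)^n$; the helper names are ours:

```python
from fractions import Fraction
from math import factorial

def ff(k, n):
    """Falling factorial k(k-1)...(k-n+1) = chi_{K_n}(k)."""
    r = 1
    for i in range(n):
        r *= k - i
    return r

def chi_path(k, n):
    """Chromatic polynomial of the path P_n (n >= 1)."""
    return k * (k - 1) ** (n - 1)

def chi_lollipop(k, m, n):
    """Chromatic polynomial of L_{m,n}: color K_m, then the n path vertices."""
    return ff(k, m) * (k - 1) ** n

for m in (2, 3, 4):
    for n in (0, 1, 2, 3):
        for k in (3, 4, 5, 7):
            rhs = Fraction(factorial(m - 1), factorial(m + n - 1)) * ff(k, m + n)
            for i in range(n):
                denom = 1
                for j in range(m, m + i + 1):  # m(m+1)...(m+i)
                    denom *= j
                rhs += Fraction(m + i - 1, denom) * chi_path(k, n - i) * ff(k, m + i)
            assert rhs == chi_lollipop(k, m, n)
```

Exact rational arithmetic avoids any floating-point ambiguity in the coefficients.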
In this section we give explicit $e$-positive and $e$-unimodal formulae for chromatic quasisymmetric functions of some graphs, generalizations of lollipop graphs, by using Theorem \[thm:local\].
Lollipop graphs {#sec:lollipop}
---------------
In this subsection we consider lollipop graphs.
For $m,n\geq0$, the *lollipop quasisymmetric function* is $X_{L_{m,n}}({\bf x};q)$, the chromatic quasisymmetric function of $L_{m,n}$. We simply denote $X_{L_{m,n}}({\bf x};1)$ by $X_{L_{m,n}}({\bf x})$.
By the definition, $X_{L_{m,0}}({\bf x};q)=X_{K_m}({\bf x};q)$ and $X_{L_{0,n}}({\bf x};q)=X_{P_n}({\bf x};q)$, both of which are $e$-positive and $e$-unimodal, see Proposition \[prop:basic\].\
From Theorem \[thm:local\] (a$'$), we have the following linear relation.
For integers $m\geq 2$ and $n\geq 0$, $$\label{eqn:lollipop}
X_{L_{m,n}}({\bf x};q)=\frac{1}{[m]_q} \left( X_{L_{m+1,n-1}}({\bf x};q)+q[m-1]_q X_{P_n \cup K_m}({\bf x};q) \right).$$
We first note that the LLT diagram corresponding to the lollipop graph $L_{m+1,n-1}$ has the area sequence $a=(a_1,a_2,\dots,a_{n+m})$ such that $a_{i}=1$ for $i< n$ and $a_{i}=n+m-i$ for $i\geq n$. If we take $\ell=m$ and $i=n$, then the area sequence $a$ satisfies the condition of Theorem \[thm:local\];
- $a_{i-1}+\ell-1=a_{n-1}+m-1=m\leq a_n$,
- $a_{i}=n+m-i=a_{i+1}+1$ for $i\in\{n+1, n+2,\dots,n+m-1\}$.
Thus, by Theorem \[thm:local\] (a$'$), we have $$X_{L_{m+1,n-1}}({\bf x};q)+q[m-1]_q X_{G_{m}}({\bf x};q)=[m]_q X_{G_{m-1}}({\bf x};q),$$ where $G_{m-1}$ is the lollipop graph $L_{m,n}$ and $G_{m}$ is the graph $P_n \cup K_m$.
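At $q=1$, under the same principal specialization $X_G(1^k)=\chi_G(k)$, the relation above reduces to $m\,\chi_{L_{m,n}}(k)=\chi_{L_{m+1,n-1}}(k)+(m-1)\,\chi_{P_n}(k)\,\chi_{K_m}(k)$, which the following sketch verifies (helper names are ours):

```python
def ff(k, n):
    """Falling factorial: chromatic polynomial of K_n at k colors."""
    r = 1
    for i in range(n):
        r *= k - i
    return r

def chi_path(k, n):
    """Chromatic polynomial of the path P_n (n >= 1)."""
    return k * (k - 1) ** (n - 1)

def chi_lollipop(k, m, n):
    """Chromatic polynomial of L_{m,n}."""
    return ff(k, m) * (k - 1) ** n

for m in (2, 3, 4, 5):
    for n in (1, 2, 3):
        for k in (3, 4, 6, 9):
            lhs = m * chi_lollipop(k, m, n)
            rhs = chi_lollipop(k, m + 1, n - 1) + (m - 1) * chi_path(k, n) * ff(k, m)
            assert lhs == rhs
```

The identity holds as polynomials in $k$, so agreement at these sample points is expected at every $k$.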
From Equation (\[eqn:lollipop\]), we obtain a formula for the lollipop quasisymmetric function.
\[prop:quasilollipop\] For $m\geq 2$ and $n\geq 0$, a lollipop quasisymmetric function is $$X_{L_{m,n}}({\bf x};q)=[m-1]_q!\left([m+n]_q \,e_{m+n}+\sum_{i=0}^{n-1}q[m+i-1]_q X_{P_{n-i}}({\bf x};q)\,e_{m+i}\right).$$ Consequently $X_{L_{m,n}}({\bf x};q)$ is $e$-positive and $e$-unimodal with center of symmetry $\frac{|E(L_{m,n})|}{2}$.
If we use Equation (\[eqn:lollipop\]) repeatedly, then we have a refinement of Proposition \[prop:lollipop\]: $$X_{L_{m,n}}({\bf x};q)=\frac{X_{L_{m+k,n-k}}({\bf x};q)}{[m]_q\cdots[m+k-1]_q}+\sum_{i=0}^{k-1}\frac{q[m+i-1]_q}{[m]_q\cdots[m+i]_q} X_{P_{n-i}}({\bf x};q) X_{K_{m+i}}({\bf x};q).$$ If we let $k=n$, then $X_{L_{m+n,0}}({\bf x};q)=X_{K_{m+n}}({\bf x};q)$. Since $X_{K_{n}}({\bf x};q)=[n]_q! \,e_n$, $$\begin{aligned}
X_{L_{m,n}}({\bf x};q)&=&\frac{[m+n]_q! \,e_{m+n}}{[m]_q\cdots[m+n]_q}+\sum_{i=0}^{n-1}\frac{q[m+i-1]_q}{[m]_q\cdots[m+i]_q}[m+i]_q!X_{P_{n-i}}({\bf x};q) \,e_{m+i}\\
&=&[m-1]_q!\left([m+n]_q \, e_{m+n}+\sum_{i=0}^{n-1}q[m+i-1]_q X_{P_{n-i}}({\bf x};q) \,e_{m+i}\right).\end{aligned}$$ By Proposition \[prop:basic\] (b), $X_{P_{n-i}}({\bf x};q)$ is $e$-positive and $e$-unimodal with center of symmetry $\frac{n-i-1}{2}$. Since both $[m+n]_q$ and $q[m+i-1]_q X_{P_{n-i}}({\bf x};q)$ are $e$-positive and $e$-unimodal with center of symmetry $\frac{n+m-1}{2}$, $X_{L_{m,n}}({\bf x};q)$ is $e$-positive and $e$-unimodal with center of symmetry $\frac{|E(L_{m,n})|}{2}$.
In [@CH], the first author and Cho obtained an $e$-positive and $e$-unimodal formula of the chromatic quasisymmetric function of $K_{r}+K_{n-r+1}$.
[@CH Corollary 4.4]\[lem:CH\] For $1\leq r \leq n-1$, let $G$ be the graph $K_{r}+K_{n-r+1}$. Then $$X_G({\bf x};q)=\sum_{i=0}^{\min \{n-r,r-1\}}q^{i}[n-r]_q![r-1]_q![n-2i]_q\, e_{(n-i,i)}.$$
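At $q=1$, with $e_j(1^c)=\binom{c}{j}$ and $[j]_q!\to j!$, Lemma \[lem:CH\] specializes to an identity for the chromatic polynomial of two cliques sharing a vertex, $\chi_{K_{r}+K_{n-r+1}}(c)=\chi_{K_r}(c)\,\chi_{K_{n-r+1}}(c)/c$. A quick computational check (a sketch; helper names are ours):

```python
from math import comb, factorial

def ff(c, n):
    """Falling factorial c(c-1)...(c-n+1) = chi_{K_n}(c)."""
    r = 1
    for i in range(n):
        r *= c - i
    return r

for n in range(2, 7):
    for r in range(1, n):
        for c in range(2, 8):
            lhs = sum(factorial(n - r) * factorial(r - 1) * (n - 2 * i)
                      * comb(c, n - i) * comb(c, i)
                      for i in range(min(n - r, r - 1) + 1))
            # K_r and K_{n-r+1} glued at one vertex: chi = ff(c,r) ff(c,n-r+1) / c
            assert lhs * c == ff(c, r) * ff(c, n - r + 1)
```

For instance, $n=3$, $r=2$ gives the graph $P_3$, and both sides evaluate to $c\cdot c(c-1)^2$.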
By combining Theorem \[thm:local\] and Lemma \[lem:CH\], we obtain a formula for the chromatic quasisymmetric function of the graph $K_{r}+L_{m,n}$.
\[thm:double\] For $m\geq 3$, $1\leq r \leq m$, and $n\geq 0$, let $G$ be the graph $K_{r}+L_{m,n}$. If we let $d=n+m+r-1$, then $$X_{G}({\bf x};q)=[m-1]_q!\left( \sum_{i=0}^{r-1}q^{i}[r-1]_q![d-2i]_q \,e_{(d-i,i)} +\sum_{j=0}^{n-1}q[n+m-j-2]_q X_{L_{r,j}}({\bf x};q) \,e_{n+m-j-1}\right).$$ Hence, $X_{G}({\bf x};q)$ is $e$-positive and $e$-unimodal with center of symmetry $\frac{|E(G)|}{2}$.
By Lemma \[lem:CH\], $$X_{K_{r}+L_{m+n,0}}({\bf x};q)=X_{K_{r}+K_{m+n}}({\bf x};q)=[n+m-1]_q!\sum_{i=0}^{r-1}q^{i}[r-1]_q![d-2i]_q \,e_{(d-i,i)}.$$ For convenience, we denote $X_{K_{r}+K_{m+n}}({\bf x};q)=[n+m-1]_q! \,f({\bf x};q)$.\
From Theorem \[thm:local\] (a$'$), we have the following relation for $0\leq k \leq n+m-3$,
$$\label{eqn:double}
X_{K_{r}+L_{n+m-k-1,k+1}}({\bf x};q)=\frac{X_{K_{r}+L_{n+m-k,k}}({\bf x};q)+q[n+m-k-2]_q X_{L_{r,k}}({\bf x};q)X_{K_{n+m-k-1}}({\bf x};q)}{[n+m-k-1]_q}.$$
If $k=0$, then (\[eqn:double\]) is equal to $$\begin{aligned}
X_{K_{r}+L_{n+m-1,1}}({\bf x};q)&=&\frac{X_{K_{r}+L_{n+m,0}}({\bf x};q)+q[n+m-2]_q X_{L_{r,0}}({\bf x};q)X_{K_{n+m-1}}({\bf x};q)}{[n+m-1]_q}\\
&=&[n+m-2]_q!\left(f({\bf x};q)+q[n+m-2]_q X_{L_{r,0}}({\bf x};q) \,e_{n+m-1}\right).\end{aligned}$$ If $k=1$, then (\[eqn:double\]) is equal to $$\begin{aligned}
X_{K_{r}+L_{n+m-2,2}}({\bf x};q)&=&\frac{X_{K_{r}+L_{n+m-1,1}}({\bf x};q)+q[n+m-3]_qX_{L_{r,1}}({\bf x};q)X_{K_{n+m-2}}({\bf x};q)}{[n+m-2]_q}\\
&=&[n+m-3]_q!\left(f({\bf x};q)+\sum_{j=0}^{1}q[n+m-j-2]_q X_{L_{r,j}}({\bf x};q)\,e_{n+m-j-1}\right).\end{aligned}$$ Iterating this process, we obtain $$\begin{aligned}
&&X_{K_{r}+L_{n+m-k-1,k+1}}({\bf x};q)\\
&&\hspace{30mm}=\frac{X_{K_{r}+L_{n+m-k,k}}({\bf x};q)+q[n+m-k-2]_q X_{L_{r,k}}({\bf x};q)X_{K_{n+m-k-1}}({\bf x};q)}{[n+m-k-1]_q}\\
&&\hspace{30mm}=[n+m-k-2]_q!\left(f({\bf x};q)+\sum_{j=0}^{k}q[n+m-j-2]_q X_{L_{r,j}}({\bf x};q) \,e_{n+m-j-1}\right),\end{aligned}$$ for $0\leq k\leq n-1$. In particular, when $k=n-1$ we get $$\begin{aligned}
X_{K_{r}+L_{m,n}}({\bf x};q)&=&\frac{X_{K_{r}+L_{m+1,n-1}}({\bf x};q)+q[m-1]_q X_{L_{r,n-1}}({\bf x};q)X_{K_{m}}({\bf x};q)}{[m]_q}\\
&=&[m-1]_q!\left(f({\bf x};q)+\sum_{j=0}^{n-1}q[n+m-j-2]_q X_{L_{r,j}}({\bf x};q) \,e_{n+m-j-1}\right),\end{aligned}$$ as desired.
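Theorem \[thm:double\] can likewise be sanity-checked at $q=1$ by brute-force counting proper colorings of $K_r+L_{m,n}$ and comparing with the principal specialization of the stated formula, using $e_j(1^c)=\binom{c}{j}$ and $X_{L_{r,j}}(1^c)=\chi_{L_{r,j}}(c)$. A sketch for $(r,m,n)=(2,3,1)$ (the vertex labels and helpers are ours):

```python
from itertools import combinations, product
from math import comb, factorial

def ff(c, n):
    r = 1
    for i in range(n):
        r *= c - i
    return r

def chi_lollipop(c, m, n):
    return ff(c, m) * (c - 1) ** n

def proper_colorings(edges, nverts, c):
    """Brute-force count of proper c-colorings of a graph on {1,...,nverts}."""
    return sum(all(col[a - 1] != col[b - 1] for a, b in edges)
               for col in product(range(c), repeat=nverts))

r, m, n = 2, 3, 1
d = n + m + r - 1                                        # number of vertices
edges = set(combinations(range(1, r + 1), 2))            # K_r on {1,...,r}
edges |= {(r + i, r + i + 1) for i in range(n)}          # path part of L_{m,n}
edges |= set(combinations(range(r + n, r + n + m), 2))   # K_m part of L_{m,n}
for c in (3, 4, 5):
    rhs = factorial(m - 1) * (
        sum(factorial(r - 1) * (d - 2 * i) * comb(c, d - i) * comb(c, i)
            for i in range(r))
        + sum((n + m - j - 2) * chi_lollipop(c, r, j) * comb(c, n + m - j - 1)
              for j in range(n)))
    assert proper_colorings(edges, d, c) == rhs
```

Here $\chi_G(c)=c(c-1)(c-2)(c-1)^2$ for this $G$, e.g. $216$ at $c=4$, matching the formula.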
Melting lollipop graphs {#melting}
-----------------------
For integers $m,n\geq 0$, and $0\leq k \leq m-1$, a *melting lollipop graph* $L^{(k)}_{m,n}$ on $[m+n]$ is obtained from the lollipop graph $L_{m,n}$ by deleting $k$ edges, $$\{n+1,n+m\}, ~\{n+1,n+m-1\},~ \dots,~ \{n+1,n+m-k+1\}.$$ We call $X_{L^{(k)}_{m,n}}({\bf x};q)$ the *melting lollipop quasisymmetric function*.
A melting lollipop graph $L^{(k)}_{m,n}$ is the incomparability graph of a natural unit interval order $P(m_1,m_2,\dots,m_{m+n-1})$ with $m_i=i+1$ for $i\leq n$, $m_{n+1}=m+n-k$, and $m_{i}=m+n$ for $i>n+1$. Figure \[fig:melting\] shows the melting lollipop graph $L^{(2)}_{6,5}$, for example.
By the definition, one can easily see that $L^{(0)}_{m,n}=L_{m,n}$, $L^{(m-2)}_{m,n}=L_{m-1,n+1}$, and $L^{(m-1)}_{m,n}$ is the disjoint union of $P_{n+1}$ and $K_{m-1}$. Therefore, from Theorem \[thm:local\] (b$'$), we have the following relation, $$\label{eqn:mlollipoplr}
[m-k-1]_q X_{L_{m,n}}({\bf x};q)+q^{m-k-1}[k]_q X_{P_{n+1}\cup K_{m-1}}({\bf x};q)=[m-1]_q X_{L^{(k)}_{m,n}}({\bf x};q),$$ which is equivalent to the following proposition.
$$\label{eqn:melting}
X_{L^{(k)}_{m,n}}({\bf x};q)=\frac{[m-k-1]_q}{[m-1]_q}X_{L_{m,n}}({\bf x};q)+q^{m-k-1}[k]_q[m-2]_q!X_{P_{n+1}}({\bf x};q) \,e_{m-1}.$$
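At $q=1$, the relation above reduces to $(m-k-1)\,\chi_{L_{m,n}}(c)+k\,\chi_{P_{n+1}}(c)\,\chi_{K_{m-1}}(c)=(m-1)\,\chi_{L^{(k)}_{m,n}}(c)$, which can be confirmed by brute force on a small instance (a sketch; helper names are ours):

```python
from itertools import combinations, product

def proper_colorings(edges, nverts, c):
    """Brute-force count of proper c-colorings of a graph on {1,...,nverts}."""
    return sum(all(col[a - 1] != col[b - 1] for a, b in edges)
               for col in product(range(c), repeat=nverts))

def melting_lollipop_edges(m, n, k):
    """L^{(k)}_{m,n}: L_{m,n} with edges {n+1,n+m},...,{n+1,n+m-k+1} removed."""
    edges = {(i, i + 1) for i in range(1, n + 1)}
    edges |= set(combinations(range(n + 1, n + m + 1), 2))
    edges -= {(n + 1, n + m - t) for t in range(k)}
    return edges

m, n = 4, 2
for k in range(m):
    for c in (3, 4):
        chi_L = proper_colorings(melting_lollipop_edges(m, n, 0), m + n, c)
        chi_Lk = proper_colorings(melting_lollipop_edges(m, n, k), m + n, c)
        chi_P = c * (c - 1) ** n          # chi of P_{n+1}
        chi_K = 1
        for i in range(m - 1):            # chi of K_{m-1}
            chi_K *= c - i
        assert (m - k - 1) * chi_L + k * chi_P * chi_K == (m - 1) * chi_Lk
```

The extreme cases $k=0$ and $k=m-1$ recover $L_{m,n}$ and $P_{n+1}\cup K_{m-1}$, respectively.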
From Equation (\[eqn:melting\]) and Proposition \[prop:quasilollipop\], we obtain formulae for melting lollipop quasisymmetric functions.
\[thm:melting\] For integers $m\geq2$, $n\geq0$, and $0\leq k\leq m-1$, a melting lollipop quasisymmetric function is $$X_{L^{(k)}_{m,n}}({\bf x};q)=[m-k-1]_q [m-2]_q!\left( [m+n]_q \,e_{m+n} +\sum_{i=0}^{n-1}q[m+i-1]_q X_{P_{n-i}}({\bf x};q)\,e_{m+i} \right)$$ $+q^{m-k-1}[k]_q [m-2]_q! X_{P_{n+1}}({\bf x};q)\,e_{m-1}$.\
Consequently $X_{L^{(k)}_{m,n}}({\bf x};q)$ is $e$-positive and $e$-unimodal with center of symmetry $\frac{|E(L^{(k)}_{m,n})|}{2}$.
Schur expansion of unicellular LLT polynomials related to certain graphs {#sec:LLT_Schur}
========================================================================
Due to the equivalence given in Proposition \[prop:AP\] between LLT polynomials and chromatic quasisymmetric functions, the relations satisfied by $X_G ({{\bf x}};q)$ given in Section \[sec:lollipopG\] can be restated in terms of LLT polynomials. In this section, we utilize those relations to prove Schur expansion formulas for LLT polynomials corresponding to certain graphs. We first define a statistic which will be used in the description of the Schur coefficients of LLT polynomials.
Given a partition $\lambda$, we define the *reading order* as the total ordering of the cells in $\lambda$ by reading them row by row, from top to bottom, and from left to right within each row. Then for a standard Young tableau $T\in {\operatorname{SYT}}(\lambda)$, the *descent set* $D(T)$ is defined by $$D(T)=\{i~:~ T^{-1}(i+1) \text{ precedes } T^{-1}(i) \text{ in the reading order}\}.$$
\[def:wt\] Given a unicellular LLT diagram ${\boldsymbol{\nu}}= (\nu^{(1)},\dots, \nu^{(n)})$, an $n$-tuple of single cells, let the corresponding Dyck diagram $\pi_{{\boldsymbol{\nu}}}$ have the area sequence $a=(a_1,\dots, a_{n-1},0)$. Then for any partition $\lambda\vdash n$ and a standard Young tableau $T\in{\operatorname{SYT}}(\lambda)$, define $$wt_{{\boldsymbol{\nu}}}(T) = \sum_{i \in D(T)}a_i.$$
Our purpose is to give a combinatorial interpretation of Schur coefficients appearing in the Schur expansion of unicellular LLT polynomials in terms of $q$ polynomials weighted by the statistic in Definition \[def:wt\]. More precisely, we will introduce certain classes of unicellular LLT diagrams ${\boldsymbol{\nu}}= (\nu^{(1)},\dots, \nu^{(n)})$ satisfying that $${\operatorname{LLT}}_{{\boldsymbol{\nu}}} ({\bf x};q) = \sum_{\lambda \, \vdash n} \left( \sum_{P \in {\operatorname{SYT}}(\lambda)} q^{wt_{{\boldsymbol{\nu}}}(P)} \right) s_\lambda \, .$$
To begin with, we introduce the following lemma which will play a key role in proving the Schur coefficients of LLT polynomials in the rest of this section.
\[lem:prod\_of\_LLT\] Let $G_1$ and $G_2$ be graphs of order $n$ and $m$ associated to the LLT diagrams ${\boldsymbol{\nu}}_1$ and ${\boldsymbol{\nu}}_2$, respectively. Let $G$ be the graph $G_1 \cup G_2$ of order $(n+m)$ and let ${\boldsymbol{\nu}}$ be the LLT diagram corresponding to $G$. If $${\operatorname{LLT}}_{{\boldsymbol{\nu}}_1}({\bf x};q) = \sum_{\lambda \, \vdash n} \left( \sum_{P \in {\operatorname{SYT}}(\lambda)} q^{wt_{{\boldsymbol{\nu}}_1}(P)} \right) s_\lambda
\quad \text{and} \quad
{\operatorname{LLT}}_{{\boldsymbol{\nu}}_2}({\bf x};q) = \sum_{\lambda \, \vdash m} \left( \sum_{Q \in {\operatorname{SYT}}(\lambda)} q^{wt_{{\boldsymbol{\nu}}_2}(Q)} \right) s_\lambda \, ,$$ then $${\operatorname{LLT}}_{{\boldsymbol{\nu}}}({\bf x};q) = \sum_{\lambda \, \vdash (n+m)} \left( \sum_{T \in {\operatorname{SYT}}(\lambda)} q^{wt_{{\boldsymbol{\nu}}}(T)} \right) s_\lambda \, .$$
To prove the above lemma, we briefly introduce the tableau switching algorithm on standard Young tableaux introduced by Benkart, Sottile, and Stroomer [@BSS], which is built upon Schützenberger's jeu de taquin sliding process. For partitions $\lambda \subseteq \mu \subseteq \nu$, let $T$ and $S$ be standard Young tableaux of shape $\mu/\lambda$ and $\nu/\mu$, respectively. The tableau switching on $(T,S)$ is a combinatorial algorithm that applies jeu de taquin slides to $S$ iteratively, choosing the empty boxes in the order given by the entries of $T$, from the largest entry to the smallest. For example, if $$\begin{tikzpicture}[scale=0.8]
\def \hhh{5mm} \def \vvv{5mm} \def \hhhhh{70mm} \node[] at (\hhh*0.8,-\vvv*0.4) {$T=$};
\draw[-,black!10] (\hhh*3,-\vvv*2) rectangle (\hhh*4,-\vvv*1);
\draw[fill=black!30] (\hhh*3,0) rectangle (\hhh*4,\vvv*1);
\draw[fill=black!30] (\hhh*3,-\vvv*1) rectangle (\hhh*4,\vvv*0);
\draw[fill=black!30] (\hhh*4,-\vvv*1) rectangle (\hhh*5,\vvv*0);
\draw[fill=black!30] (\hhh*4,-\vvv*2) rectangle (\hhh*5,-\vvv*1);
\node[] at (\hhh*3.5,\vvv*0.5) {$4$};
\node[] at (\hhh*3.5,-\vvv*0.5) {$2$};
\node[] at (\hhh*4.5,-\vvv*0.5) {$3$};
\node[] at (\hhh*4.5,-\vvv*1.5) {$1$};
\node[] at (\hhh*3.5+\hhhhh*0.5,-\vvv*0.4) {and};
\node[] at (\hhh*0.8+\hhhhh,-\vvv*0.4) {$S=$};
\draw[-,black!10] (\hhh*3+\hhhhh,0) rectangle (\hhh*4+\hhhhh,\vvv*1);
\draw[-,black!10] (\hhh*3+\hhhhh,-\vvv*1) rectangle (\hhh*4+\hhhhh,\vvv*0);
\draw[-,black!10] (\hhh*4+\hhhhh,-\vvv*1) rectangle (\hhh*5+\hhhhh,\vvv*0);
\draw[-] (\hhh*5+\hhhhh,-\vvv*1) rectangle (\hhh*6+\hhhhh,\vvv*0);
\draw[-] (\hhh*4+\hhhhh,0) rectangle (\hhh*5+\hhhhh,\vvv*1);
\draw[-] (\hhh*5+\hhhhh,0) rectangle (\hhh*6+\hhhhh,\vvv*1);
\draw[-,black!10] (\hhh*3+\hhhhh,-\vvv*2) rectangle (\hhh*4+\hhhhh,-\vvv*1);
\draw[-,black!10] (\hhh*4+\hhhhh,-\vvv*2) rectangle (\hhh*5+\hhhhh,-\vvv*1);
\draw[-] (\hhh*5+\hhhhh,-\vvv*2) rectangle (\hhh*6+\hhhhh,-\vvv*1);
\draw[-] (\hhh*6+\hhhhh,-\vvv*2) rectangle (\hhh*7+\hhhhh,-\vvv*1);
\node[] at (\hhh*4.5+\hhhhh,\vvv*0.5) {$4$};
\node[] at (\hhh*5.5+\hhhhh,\vvv*0.5) {$5$};
\node[] at (\hhh*5.5+\hhhhh,-\vvv*0.5) {$2$};
\node[] at (\hhh*5.5+\hhhhh,-\vvv*1.5) {$1$};
\node[] at (\hhh*6.5+\hhhhh,-\vvv*1.5) {$3$};
\end{tikzpicture} \, ,$$ then the followings illustrate how the tableau switching algorithm acts on $(T,S)$ step by step: $$\begin{tikzpicture}[scale=0.8]
\def \hhh{5mm} \def \vvv{5mm} \draw[-,black!10] (\hhh*3,-\vvv*2) rectangle (\hhh*4,-\vvv*1);
\draw[fill=black!30] (\hhh*3,0) rectangle (\hhh*4,\vvv*1);
\draw[-] (\hhh*4,0) rectangle (\hhh*5,\vvv*1);
\draw[-] (\hhh*5,0) rectangle (\hhh*6,\vvv*1);
\draw[fill=black!30] (\hhh*3,-\vvv*1) rectangle (\hhh*4,\vvv*0);
\draw[fill=black!30] (\hhh*4,-\vvv*1) rectangle (\hhh*5,\vvv*0);
\draw[-] (\hhh*5,-\vvv*1) rectangle (\hhh*6,\vvv*0);
\draw[fill=black!30] (\hhh*4,-\vvv*2) rectangle (\hhh*5,-\vvv*1);
\draw[-] (\hhh*5,-\vvv*2) rectangle (\hhh*6,-\vvv*1);
\draw[-] (\hhh*6,-\vvv*2) rectangle (\hhh*7,-\vvv*1);
\node[] at (\hhh*3.5,\vvv*0.5) {$4$};
\node[] at (\hhh*4.5,\vvv*0.5) {$4$};
\node[] at (\hhh*5.5,\vvv*0.5) {$5$};
\node[] at (\hhh*3.5,-\vvv*0.5) {$2$};
\node[] at (\hhh*4.5,-\vvv*0.5) {$3$};
\node[] at (\hhh*5.5,-\vvv*0.5) {$2$};
\node[] at (\hhh*4.5,-\vvv*1.5) {$1$};
\node[] at (\hhh*5.5,-\vvv*1.5) {$1$};
\node[] at (\hhh*6.5,-\vvv*1.5) {$3$};
\end{tikzpicture}
\begin{tikzpicture}[scale=0.8]
\def \hhh{5mm} \def \vvv{5mm} \draw[->,decorate,decoration={snake,amplitude=.4mm,segment length=2mm,post length=1mm}] (\hhh*0,-\vvv*0.5) -- (\hhh*1.7,-\vvv*0.5);
\draw[-,black!10] (\hhh*3,-\vvv*2) rectangle (\hhh*4,-\vvv*1);
\draw[-] (\hhh*3,0) rectangle (\hhh*4,\vvv*1);
\draw[-] (\hhh*4,0) rectangle (\hhh*5,\vvv*1);
\draw[fill=black!30] (\hhh*5,0) rectangle (\hhh*6,\vvv*1);
\draw[fill=black!30] (\hhh*3,-\vvv*1) rectangle (\hhh*4,\vvv*0);
\draw[fill=black!30] (\hhh*4,-\vvv*1) rectangle (\hhh*5,\vvv*0);
\draw[-] (\hhh*5,-\vvv*1) rectangle (\hhh*6,\vvv*0);
\draw[fill=black!30] (\hhh*4,-\vvv*2) rectangle (\hhh*5,-\vvv*1);
\draw[-] (\hhh*5,-\vvv*2) rectangle (\hhh*6,-\vvv*1);
\draw[-] (\hhh*6,-\vvv*2) rectangle (\hhh*7,-\vvv*1);
\node[] at (\hhh*3.5,\vvv*0.5) {$4$};
\node[] at (\hhh*4.5,\vvv*0.5) {$5$};
\node[] at (\hhh*5.5,\vvv*0.5) {$4$};
\node[] at (\hhh*3.5,-\vvv*0.5) {$2$};
\node[] at (\hhh*4.5,-\vvv*0.5) {$3$};
\node[] at (\hhh*5.5,-\vvv*0.5) {$2$};
\node[] at (\hhh*4.5,-\vvv*1.5) {$1$};
\node[] at (\hhh*5.5,-\vvv*1.5) {$1$};
\node[] at (\hhh*6.5,-\vvv*1.5) {$3$};
\end{tikzpicture}
\begin{tikzpicture}[scale=0.8]
\def \hhh{5mm} \def \vvv{5mm} \draw[->,decorate,decoration={snake,amplitude=.4mm,segment length=2mm,post length=1mm}] (\hhh*0,-\vvv*0.5) -- (\hhh*1.7,-\vvv*0.5);
\draw[-,black!10] (\hhh*3,-\vvv*2) rectangle (\hhh*4,-\vvv*1);
\draw[-] (\hhh*3,0) rectangle (\hhh*4,\vvv*1);
\draw[-] (\hhh*4,0) rectangle (\hhh*5,\vvv*1);
\draw[fill=black!30] (\hhh*5,0) rectangle (\hhh*6,\vvv*1);
\draw[fill=black!30] (\hhh*3,-\vvv*1) rectangle (\hhh*4,\vvv*0);
\draw[-] (\hhh*4,-\vvv*1) rectangle (\hhh*5,\vvv*0);
\draw[fill=black!30] (\hhh*5,-\vvv*1) rectangle (\hhh*6,\vvv*0);
\draw[fill=black!30] (\hhh*4,-\vvv*2) rectangle (\hhh*5,-\vvv*1);
\draw[-] (\hhh*5,-\vvv*2) rectangle (\hhh*6,-\vvv*1);
\draw[-] (\hhh*6,-\vvv*2) rectangle (\hhh*7,-\vvv*1);
\node[] at (\hhh*3.5,\vvv*0.5) {$4$};
\node[] at (\hhh*4.5,\vvv*0.5) {$5$};
\node[] at (\hhh*5.5,\vvv*0.5) {$4$};
\node[] at (\hhh*3.5,-\vvv*0.5) {$2$};
\node[] at (\hhh*4.5,-\vvv*0.5) {$2$};
\node[] at (\hhh*5.5,-\vvv*0.5) {$3$};
\node[] at (\hhh*4.5,-\vvv*1.5) {$1$};
\node[] at (\hhh*5.5,-\vvv*1.5) {$1$};
\node[] at (\hhh*6.5,-\vvv*1.5) {$3$};
\end{tikzpicture}
\begin{tikzpicture}[scale=0.8]
\def \hhh{5mm} \def \vvv{5mm} \draw[->,decorate,decoration={snake,amplitude=.4mm,segment length=2mm,post length=1mm}] (\hhh*0,-\vvv*0.5) -- (\hhh*1.7,-\vvv*0.5);
\draw[-,black!10] (\hhh*3,-\vvv*2) rectangle (\hhh*4,-\vvv*1);
\draw[-] (\hhh*3,0) rectangle (\hhh*4,\vvv*1);
\draw[fill=black!30] (\hhh*4,0) rectangle (\hhh*5,\vvv*1);
\draw[fill=black!30] (\hhh*5,0) rectangle (\hhh*6,\vvv*1);
\draw[-] (\hhh*3,-\vvv*1) rectangle (\hhh*4,\vvv*0);
\draw[-] (\hhh*4,-\vvv*1) rectangle (\hhh*5,\vvv*0);
\draw[fill=black!30] (\hhh*5,-\vvv*1) rectangle (\hhh*6,\vvv*0);
\draw[fill=black!30] (\hhh*4,-\vvv*2) rectangle (\hhh*5,-\vvv*1);
\draw[-] (\hhh*5,-\vvv*2) rectangle (\hhh*6,-\vvv*1);
\draw[-] (\hhh*6,-\vvv*2) rectangle (\hhh*7,-\vvv*1);
\node[] at (\hhh*3.5,\vvv*0.5) {$4$};
\node[] at (\hhh*4.5,\vvv*0.5) {$2$};
\node[] at (\hhh*5.5,\vvv*0.5) {$4$};
\node[] at (\hhh*3.5,-\vvv*0.5) {$2$};
\node[] at (\hhh*4.5,-\vvv*0.5) {$5$};
\node[] at (\hhh*5.5,-\vvv*0.5) {$3$};
\node[] at (\hhh*4.5,-\vvv*1.5) {$1$};
\node[] at (\hhh*5.5,-\vvv*1.5) {$1$};
\node[] at (\hhh*6.5,-\vvv*1.5) {$3$};
\end{tikzpicture}
\begin{tikzpicture}[scale=0.8]
\def \hhh{5mm} \def \vvv{5mm} \draw[->,decorate,decoration={snake,amplitude=.4mm,segment length=2mm,post length=1mm}] (\hhh*0,-\vvv*0.5) -- (\hhh*1.7,-\vvv*0.5) node[midway,below] {};
\draw[-,black!10] (\hhh*3,-\vvv*2) rectangle (\hhh*4,-\vvv*1);
\draw[-] (\hhh*3,0) rectangle (\hhh*4,\vvv*1);
\draw[fill=black!30] (\hhh*4,0) rectangle (\hhh*5,\vvv*1);
\draw[fill=black!30] (\hhh*5,0) rectangle (\hhh*6,\vvv*1);
\draw[-] (\hhh*3,-\vvv*1) rectangle (\hhh*4,\vvv*0);
\draw[-] (\hhh*4,-\vvv*1) rectangle (\hhh*5,\vvv*0);
\draw[fill=black!30] (\hhh*5,-\vvv*1) rectangle (\hhh*6,\vvv*0);
\draw[-] (\hhh*4,-\vvv*2) rectangle (\hhh*5,-\vvv*1);
\draw[-] (\hhh*5,-\vvv*2) rectangle (\hhh*6,-\vvv*1);
\draw[fill=black!30] (\hhh*6,-\vvv*2) rectangle (\hhh*7,-\vvv*1);
\node[] at (\hhh*3.5,\vvv*0.5) {$4$};
\node[] at (\hhh*4.5,\vvv*0.5) {$2$};
\node[] at (\hhh*5.5,\vvv*0.5) {$4$};
\node[] at (\hhh*3.5,-\vvv*0.5) {$2$};
\node[] at (\hhh*4.5,-\vvv*0.5) {$5$};
\node[] at (\hhh*5.5,-\vvv*0.5) {$3$};
\node[] at (\hhh*4.5,-\vvv*1.5) {$1$};
\node[] at (\hhh*5.5,-\vvv*1.5) {$3$};
\node[] at (\hhh*6.5,-\vvv*1.5) {$1$};
\end{tikzpicture}$$ As we can see in the above example, the tableau switching on $(T,S)$ yields another pair of standard Young tableaux, which we denote by $({}^TS,T_S)$ following the notation of [@BSS]. In the above case, $$\begin{tikzpicture}[scale=0.8]
\def \hhh{5mm} \def \vvv{5mm} \def \hhhhh{70mm} \node[] at (\hhh*0.8,-\vvv*0.4) {${}^TS=$};
\draw[-,black!10] (\hhh*3,-\vvv*2) rectangle (\hhh*4,-\vvv*1);
\draw[-] (\hhh*3,0) rectangle (\hhh*4,\vvv*1);
\draw[-] (\hhh*3,-\vvv*1) rectangle (\hhh*4,\vvv*0);
\draw[-] (\hhh*4,-\vvv*1) rectangle (\hhh*5,\vvv*0);
\draw[-] (\hhh*4,-\vvv*2) rectangle (\hhh*5,-\vvv*1);
\draw[-] (\hhh*5,-\vvv*2) rectangle (\hhh*6,-\vvv*1);
\node[] at (\hhh*3.5,\vvv*0.5) {$4$};
\node[] at (\hhh*3.5,-\vvv*0.5) {$2$};
\node[] at (\hhh*4.5,-\vvv*0.5) {$5$};
\node[] at (\hhh*4.5,-\vvv*1.5) {$1$};
\node[] at (\hhh*5.5,-\vvv*1.5) {$3$};
\node[] at (\hhh*3.5+\hhhhh*0.5,-\vvv*0.4) {and};
\node[] at (\hhh*0.8+\hhhhh,-\vvv*0.4) {$T_S=$};
\draw[-,black!10] (\hhh*3+\hhhhh,0) rectangle (\hhh*4+\hhhhh,\vvv*1);
\draw[-,black!10] (\hhh*3+\hhhhh,-\vvv*1) rectangle (\hhh*4+\hhhhh,\vvv*0);
\draw[-,black!10] (\hhh*4+\hhhhh,-\vvv*1) rectangle (\hhh*5+\hhhhh,\vvv*0);
\draw[-,black!10] (\hhh*3+\hhhhh,-\vvv*2) rectangle (\hhh*4+\hhhhh,-\vvv*1);
\draw[-,black!10] (\hhh*4+\hhhhh,-\vvv*2) rectangle (\hhh*5+\hhhhh,-\vvv*1);
\draw[-,black!10] (\hhh*5+\hhhhh,-\vvv*2) rectangle (\hhh*6+\hhhhh,-\vvv*1);
\draw[fill=black!30] (\hhh*4+\hhhhh,0) rectangle (\hhh*5+\hhhhh,\vvv*1);
\draw[fill=black!30] (\hhh*5+\hhhhh,0) rectangle (\hhh*6+\hhhhh,\vvv*1);
\draw[fill=black!30] (\hhh*5+\hhhhh,-\vvv*1) rectangle (\hhh*6+\hhhhh,\vvv*0);
\draw[fill=black!30] (\hhh*6+\hhhhh,-\vvv*2) rectangle (\hhh*7+\hhhhh,-\vvv*1);
\node[] at (\hhh*4.5+\hhhhh,\vvv*0.5) {$2$};
\node[] at (\hhh*5.5+\hhhhh,\vvv*0.5) {$4$};
\node[] at (\hhh*5.5+\hhhhh,-\vvv*0.5) {$3$};
\node[] at (\hhh*6.5+\hhhhh,-\vvv*1.5) {$1$};
\end{tikzpicture} \, .$$
\[lemma:switching\][@BSS] Let $T$ and $S$ be standard Young tableaux of shape $\mu/\lambda$ and $\nu/\mu$, respectively. Assume that the tableau switching on $(T,S)$ transforms $T$ into $T_S$ and $S$ into ${}^TS$. Then
1. $T$ and $T_S$ are Knuth equivalent, that is, they have the same rectification.
2. $S$ and ${}^TS$ are Knuth equivalent, that is, they have the same rectification.
3. The tableau switching on $({}^TS,T_S)$ transforms ${}^TS$ into $S$ and $T_S$ into $T$; that is, the tableau switching is an involution.
*Proof of Lemma \[lem:prod\_of\_LLT\].* For simplicity, we write $$c_\lambda(q) = \sum_{T \in {\operatorname{SYT}}(\lambda)} q^{wt_{{\boldsymbol{\nu}}_1}(T)}
\quad \text{and} \quad
d_\mu(q) = \sum_{T \in {\operatorname{SYT}}(\mu)} q^{wt_{{\boldsymbol{\nu}}_2}(T)} \, .$$ Then we have $$\begin{aligned}
{\operatorname{LLT}}_{{\boldsymbol{\nu}}}({\bf x};q) &= {\operatorname{LLT}}_{{\boldsymbol{\nu}}_{G_1 \cup G_2}}({\bf x};q) ={\operatorname{LLT}}_{{\boldsymbol{\nu}}_1}({\bf x};q) \cdot {\operatorname{LLT}}_{{\boldsymbol{\nu}}_2}({\bf x};q) \\
&= \left( \sum_{\lambda \, \vdash n} c_{\lambda}(q) s_\lambda \right)
\cdot \left( \sum_{\mu \, \vdash m} d_{\mu}(q) s_\mu \right) \\
&= \sum_{\nu \, \vdash (n+m)}\left( \sum_{\substack{(\lambda,\mu) \\ {\lambda \, \vdash n} \\
{\mu \, \vdash m}}}c_\lambda^{\nu/\mu} c_\lambda(q) d_\mu(q) \right) s_\nu \, .
\end{aligned}$$ Let $\mathcal{C}^{\nu/\mu}_\lambda$ be the set of all standard Young tableaux of shape $\nu/\mu$ whose rectification is the row tableau $R_\lambda$. To prove our assertion, it is enough to show that for each $\nu \vdash (n+m)$ there is a correspondence $$\varphi :
\bigcup_{\substack{(\lambda,\mu) \\ {\lambda \vdash n} \\ {\mu \vdash m}}}
\left\{ (R,P,Q) \, : \, R \in \mathcal{C}^{\nu/\mu}_\lambda , P \in {\operatorname{SYT}}(\lambda) \text{ and } Q \in {\operatorname{SYT}}(\mu) \right\} \rightarrow \{ T \, : \, T \in {\operatorname{SYT}}(\nu) \}$$ satisfying that
1. $\varphi$ is bijective, and
2. $\varphi$ is weight-preserving, that is, if $\varphi : (R,P,Q) \mapsto T $, then $wt_{{\boldsymbol{\nu}}_1}(P) + wt_{{\boldsymbol{\nu}}_2}(Q) = wt_{{\boldsymbol{\nu}}}(T)$.
Indeed, we construct such a bijection by means of the tableau switching as follows: note that the tableau switching on $(Q,R)$ yields a pair $(R_\lambda, Q_R)$ of standard Young tableaux such that $Q_R$ is of shape $\nu/\lambda$. Let $\hat{Q}_R$ be the filling obtained from $Q_R$ by replacing each entry $i$ with $n+i$. Obviously, $\hat{Q}_R$ is a standard Young tableau of shape $\nu/\lambda$ with entries from $\{ n+1, n+2, \cdots, n+m \}$. Now we define $\varphi((R,P,Q))$ by $P \cup \hat{Q}_R$, which is well defined because ${Q}_R$ is uniquely determined due to Lemma \[lemma:switching\](c).
On the other hand, for $T \in {\operatorname{SYT}}(\nu)$, let $T^{(n)}$ denote the subtableau of $T$ consisting of the entries $\{ 1, 2, \cdots , n \}$, and let $T^{(m)}$ be the filling obtained from $T\setminus T^{(n)}$ by replacing each entry $j$ with $j-n$. Then $T^{(n)}$ and $T^{(m)}$ are standard Young tableaux of shape $\lambda$ and $\nu/\lambda$, respectively, for some $\lambda \vdash n$. Applying the tableau switching on $(R_\lambda, T^{(m)})$, we get a pair $(\text{Rect}(T^{(m)}), R_{T^{(m)}})$ of standard Young tableaux, where $\text{Rect}(T^{(m)})$ is the rectification of $T^{(m)}$. Moreover, we know that $R_{T^{(m)}}$ is Knuth equivalent to $R_\lambda$ due to Lemma \[lemma:switching\](a). Thus $\text{Rect}(T^{(m)}) \in {\operatorname{SYT}}(\mu)$ for some $\mu \vdash m$ and $R_{T^{(m)}} \in \mathcal{C}_\lambda^{\nu/\mu}$. In all, we can conclude that for each $T \in {\operatorname{SYT}}(\nu)$ with $\nu \vdash (n+m)$, the tableau switching produces the triple $(R_{T^{(m)}}, T^{(n)}, \text{Rect}(T^{(m)}))$ such that $R_{T^{(m)}} \in \mathcal{C}_\lambda^{\nu/\mu}$, $T^{(n)} \in {\operatorname{SYT}}(\lambda)$ and $\text{Rect}(T^{(m)}) \in {\operatorname{SYT}}(\mu)$ for some $\lambda \vdash n$ and $\mu \vdash m$. Furthermore, it follows from the involutiveness of the tableau switching that $\varphi\left((R_{T^{(m)}}, T^{(n)}, \text{Rect}(T^{(m)}))\right) = T$, which shows that $\varphi$ is bijective.
In order to prove that $\varphi$ is weight-preserving, we recall the well known fact that the descent set of a given Young tableau is invariant under applying forward or reverse jeu de taquin slides. Hence, if $\varphi((R,P,Q))=T$, then $i \in D(P)$ (resp., $i-n \in D(Q)$) if and only if $i \in D(T)$ for $1 \leq i \leq n-1$ (resp., $n+1 \leq i \leq n+m-1$). If we let the area sequences of ${\boldsymbol{\nu}}_1$ and ${\boldsymbol{\nu}}_2$ be $a_{{\boldsymbol{\nu}}_1}= (\alpha_1, \alpha_2, \cdots, \alpha_{n-1},0)$ and $a_{{\boldsymbol{\nu}}_2}= (\beta_1, \beta_2, \cdots, \beta_{m-1},0)$, respectively, then the corresponding area sequence $a_{{\boldsymbol{\nu}}} = (a_1, a_2, \cdots, a_{n+m})$ is of the form $$a_i =
\begin{cases}
\alpha_i \quad \quad \text{if } 1\leq i \leq n-1 \\
0 \quad \quad \ \text{if } i=n \\
\beta_{i-n} \quad \text{if } n+1 \leq i \leq n+m-1 \\
0 \quad \ \quad \text{if } i=n+m
\end{cases} \, .$$ Therefore, $wt_{{\boldsymbol{\nu}}_1}(P) + wt_{{\boldsymbol{\nu}}_2}(Q)$ and $wt_{{\boldsymbol{\nu}}}(T)$ are the same. It should be noted that $n$ might be a descent of $T$, but this does not affect our assertion since $a_n$ is always $0$.
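At the level of cardinalities (set $q=1$), the decomposition of $T$ into its entries $\leq n$ and $>n$ gives the identity $f^{\nu}=\sum_{\lambda\vdash n} f^{\lambda}\, f^{\nu/\lambda}$, where $f^{\nu/\lambda}$ counts skew standard Young tableaux. This consequence is easy to confirm computationally; the following sketch (a naive corner-removal recursion; function names are ours) checks it for $\nu=(3,2,1)$ and $n=2$:

```python
def syt_count(outer, inner=()):
    """Count standard Young tableaux of skew shape outer/inner by removing
    corner cells one at a time (largest entry first)."""
    inner = tuple(inner) + (0,) * (len(outer) - len(inner))

    def rec(shape):
        if shape == inner:
            return 1
        total = 0
        s = list(shape)
        for r in range(len(s)):
            # a cell is removable if the result is still a skew shape
            if s[r] > inner[r] and (r + 1 == len(s) or s[r] > s[r + 1]):
                s[r] -= 1
                total += rec(tuple(s))
                s[r] += 1
        return total

    return rec(tuple(outer))

def partitions(n, maxpart=None):
    """All partitions of n as weakly decreasing tuples."""
    if maxpart is None:
        maxpart = n
    if n == 0:
        yield ()
        return
    for p in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - p, p):
            yield (p,) + rest

nu, n = (3, 2, 1), 2   # split {1,...,6} into entries <= 2 and > 2
total = sum(syt_count(nu, lam) * syt_count(lam)
            for lam in partitions(n)
            if len(lam) <= len(nu) and all(lam[r] <= nu[r] for r in range(len(lam))))
assert total == syt_count(nu)   # f^(3,2,1) = 16 by the hook length formula
```

The same recursion handles both straight and skew shapes, since only the inner boundary changes.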
[The properties of the tableau switching described in Lemma \[lemma:switching\] played a key role in proving Lemma \[lem:prod\_of\_LLT\]. These properties extend to the case of $m$-fold multitableaux in [@BSS], and hence one can prove the following: let $G_i$ be a graph of order $n_i$ associated to the LLT diagram ${\boldsymbol{\nu}}_i$ for $1 \leq i \leq m$, and let $G = \bigcup_i G_i$ be the graph of order $n := \sum_i n_i$. If, for every $1 \leq i \leq m$, ${\operatorname{LLT}}_{{\boldsymbol{\nu}}_i} ({\bf x};q) = \sum_{\lambda \, \vdash n_i} \left( \sum_{P \in {\operatorname{SYT}}(\lambda)} q^{wt_{{\boldsymbol{\nu}}_i}(P)} \right) s_\lambda$, then $${\operatorname{LLT}}_{{\boldsymbol{\nu}}}({\bf x};q) = \sum_{\lambda \, \vdash n} \left( \sum_{T \in {\operatorname{SYT}}(\lambda)} q^{wt_{{\boldsymbol{\nu}}}(T)} \right) s_\lambda \,$$ where ${\boldsymbol{\nu}}$ is the LLT diagram corresponding to $G$. This can be proved in the same way as Lemma \[lem:prod\_of\_LLT\], so we omit the details. ]{}
Complete graphs
---------------
We remark that the same LLT polynomial can be realized by different LLT diagrams, as long as the inversion relations are kept invariant. Keeping this in mind, the simplest LLT diagram corresponding to the complete graph $K_n$ is the one in which all $n$ cells lie on the same diagonal, which we denote by ${\boldsymbol{\nu}}_{K_n}$. The LLT polynomial of ${\boldsymbol{\nu}}_{K_n}$ is known to be the modified Hall-Littlewood polynomial indexed by the one-column shape, $\tilde{H}_{(1^n)}({{\bf x}};q)$. Let us give a simple derivation here.
Consider the *modified Macdonald polynomials* $\tilde{H}_{\mu}({\bf {\bf x}};q,t)$ which have the following expansion in terms of Schur functions $$\tilde{H}_{\mu}({\bf x};q,t) = \sum_{\lambda\vdash n}\tilde{K}_{\lambda\mu}(q,t)s_{\lambda},$$ where $\tilde{K}_{\lambda\mu}(q,t)$ are known as *modified $q,t$-Kostka polynomials*. The LLT expansion of $\tilde{H}_{\mu}({\bf x};q,t)$ is given in [@HHL05], and especially when the LLT diagram is ${\boldsymbol{\nu}}_{K_n}$, by considering the combinatorial description for the monomial expansion of $\tilde{H}_{\mu}({\bf x};q,t)$ given in [@HHL05], it is not very hard to see that $${\operatorname{LLT}}_{K_n}({\bf x};q)=\tilde{H}_{(n)}({\bf x};q,t).$$ For the details, we refer the readers to [@HHL05 Section 3]. In the case when $\mu=(n)$, the $t$-parameter does not occur and thus we can set $t=0$. Also, noting that $\tilde{K}_{\lambda\mu}(q,t)=\tilde{K}_{\lambda\mu'}(t,q)$, we obtain $$\begin{aligned}
{\operatorname{LLT}}_{K_n}({\bf x};q)&=\tilde{H}_{(n)}({\bf x};q,0)\\
&= \sum_{\lambda\vdash n}\tilde{K}_{\lambda,(n)}(q,0)s_{\lambda}\\
&= \sum_{\lambda\vdash n}\tilde{K}_{\lambda,(1^n)}(0,q)s_{\lambda}\\
&= \sum_{\lambda\vdash n}\tilde{K}_{\lambda, (1^n)}(q)s_{\lambda}=\tilde{H}_{(1^n)}({{\bf x}};q),\end{aligned}$$ where $$\tilde{K}_{\lambda, (1^n)}(q)=\sum_{T\in{\operatorname{SYT}}(\lambda)}q^{cocharge(T)}.$$ Hence, we have $${\operatorname{LLT}}_{K_n}({\bf x};q)= \sum_{\lambda\vdash n}\left(\sum_{T\in{\operatorname{SYT}}(\lambda)}q^{cocharge(T)}\right) s_{\lambda}.$$ For the detailed description of the cocharge statistic, see [@Hag08]. By considering how the cocharge statistic is defined and the fact that the Dyck diagram $\pi_{{\boldsymbol{\nu}}_{K_n}}$ has the area sequence $a=(n-1,n-2,\dots, 1,0)$, i.e., $a_i = n-i$, for $1\le i \le n$, we can check that the statistic in Definition \[def:wt\] gives another combinatorial description for the Schur coefficients in this case, namely, $$\label{eqn:LLT_Kn}
{\operatorname{LLT}}_{K_n}({\bf x};q)= \sum_{\lambda\vdash n}\left(\sum_{T\in{\operatorname{SYT}}(\lambda)}q^{wt_{{\boldsymbol{\nu}}_{K_n}}(T)}\right) s_{\lambda}.$$
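The weight statistic above is easy to check by brute force. The following Python sketch (our own illustration, not part of the text) generates all standard Young tableaux of each shape, takes $i \in D(T)$ when $i+1$ lies in a strictly lower row, and sums $a_i = n-i$ over descents:

```python
def syt(shape):
    """All standard Young tableaux of a partition shape, built by
    recursively removing the largest entry from a corner cell."""
    n = sum(shape)
    if n == 0:
        return [[]]
    result = []
    for r, row_len in enumerate(shape):
        # n can sit at the end of row r only if that cell is a corner
        if row_len > 0 and (r == len(shape) - 1 or shape[r + 1] < row_len):
            smaller = list(shape)
            smaller[r] -= 1
            while smaller and smaller[-1] == 0:
                smaller.pop()
            for t in syt(tuple(smaller)):
                t2 = [list(row) for row in t]
                while len(t2) <= r:
                    t2.append([])
                t2[r].append(n)
                result.append(t2)
    return result

def descent_set(tab):
    """D(T): entries i such that i+1 appears in a strictly lower row."""
    row_of = {v: r for r, row in enumerate(tab) for v in row}
    n = len(row_of)
    return {i for i in range(1, n) if row_of[i + 1] > row_of[i]}

def schur_coeffs_Kn(n):
    """Coefficient of s_lambda in LLT_{K_n}, as {shape: {exponent: coeff}}."""
    def partitions(m, mx):
        if m == 0:
            yield ()
            return
        for k in range(min(m, mx), 0, -1):
            for rest in partitions(m - k, k):
                yield (k,) + rest
    coeffs = {}
    for lam in partitions(n, n):
        poly = {}
        for t in syt(lam):
            wt = sum(n - i for i in descent_set(t))  # a_i = n - i
            poly[wt] = poly.get(wt, 0) + 1
        coeffs[lam] = poly
    return coeffs
```

For $n=3$ this reproduces the known expansion $\tilde{H}_{(1^3)}({\bf x};q) = s_{3} + (q+q^2)s_{21} + q^3 s_{111}$.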
Path graphs
-----------
In this subsection, we consider the LLT polynomial ${\operatorname{LLT}}_{P_n}({\bf x};q)$ corresponding to the path graph $P_n$ of order $n$. Definition \[def:wt\] also gives a combinatorial description for the Schur coefficients appearing in the Schur expansion of ${\operatorname{LLT}}_{P_n}({\bf x};q)$.
\[prop:path\] Let $P_n$ be the path graph of order $n$. Then $${\operatorname{LLT}}_{P_n}({\bf x};q)= \sum_{\lambda\vdash n}\left(\sum_{T\in{\operatorname{SYT}}(\lambda)}q^{wt_{{\boldsymbol{\nu}}_{P_n}}(T)}\right) s_{\lambda} \, .$$
For a word $w = w_1w_2 \cdots w_n \in \mathbb{Z}_{>0}^n$, an index $i \in \{ 1,2, \cdots, n-1 \}$ is said to be a *descent* of $w$ if $w_i > w_{i+1}$. If ${\boldsymbol{\nu}}$ is a unicellular LLT diagram of $n$ cells and $\boldsymbol{S} \in {\operatorname{SSYT}}{({\boldsymbol{\nu}})}$, then $\boldsymbol{S}$ can be regarded as a word of length $n$, say $w({\boldsymbol{S}})$, and ${\operatorname{inv}}(\boldsymbol{S})$ is equal to the number of pairs $(i,j)$ satisfying the following conditions:
1. the cell whose column is labeled by $i$ and whose row is labeled by $j$ is contained in $\pi_{{\boldsymbol{\nu}}}$, and
2. $i < j$ but $w({\boldsymbol{S}})_i > w({\boldsymbol{S}})_j $.
In the case where ${\boldsymbol{\nu}}= {\boldsymbol{\nu}}_{P_n}$, to count ${\operatorname{inv}}(\boldsymbol{S})$ it is enough to consider pairs of the form $(i, i+1)$ for $1 \leq i \leq n-1$. That is, for $\boldsymbol{S} \in {\operatorname{SSYT}}{({\boldsymbol{\nu}}_{P_n})}$ $${\operatorname{inv}}(\boldsymbol{S}) = | \, \{ (i, i+1) : w({\boldsymbol{S}})_i > w({\boldsymbol{S}})_{i+1} \} \, |$$ which counts the number of descents of $w({\boldsymbol{S}})$.
For each word $w$, we define $D(w)$ to be the set of all descents of $w$ and $\text{stan}(w)$ its standardization; that is, $\text{stan}(w)$ is the permutation in $S_n$ obtained by sorting the pairs $(w_i, i)$ in lexicographic order. It is well known that the three descent sets $D(w)$, $D(\text{stan}(w))$ and $D(Q(w))$ coincide, where $Q(w)$ denotes the recording tableau of $w$ under the Robinson–Schensted–Knuth (RSK) insertion algorithm.
Altogether, we have $$\begin{aligned}
{\operatorname{LLT}}_{P_n}({\bf x};q) &= \sum_{\boldsymbol{S} \in {\operatorname{SSYT}}({\boldsymbol{\nu}}_{P_n})} q^{{\operatorname{inv}}{(\boldsymbol{S}})} \, x^{\boldsymbol{S}} = \sum_{w \in \mathbb{Z}_{>0}^n} \, q^{|D(w)|} \, x^{w} \\
&= \sum_{\sigma \in S_n} \, q^{|D(\sigma)|} \left( \sum_{\substack{w \in \mathbb{Z}_{>0}^n \text{ such that }\\ \text{stan}(w)=\sigma}} x^{w}\right) \\
&= \sum_{\sigma \in S_n} \, q^{|D(Q(\sigma))|} \left( \sum_{\substack{w \in \mathbb{Z}_{>0}^n \text{ such that }\\ Q(w)=Q(\sigma)}} x^{w}\right) \\
&= \sum_{\lambda \vdash n} \, \sum_{Q \in {\operatorname{SYT}}(\lambda)} \, q^{|D(Q)|} \left( \sum_{\substack{w \in \mathbb{Z}_{>0}^n \text{ such that }\\ Q(w)=Q}} x^w \right) \,
= \sum_{\lambda \vdash n} \, \sum_{Q \in {\operatorname{SYT}}(\lambda)} \, q^{|D(Q)|} \, s_\lambda \, .
\end{aligned}$$ Our proposition follows from the fact that the area sequence is $a_{{\boldsymbol{\nu}}_{P_n}} = (1, 1, \cdots, 1, 0)$ and thus $wt_{{\boldsymbol{\nu}}_{P_n}}(Q) = \sum_{i \in D(Q)} 1 = |D(Q)|$.
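The equality of the three descent sets $D(w)=D(\text{stan}(w))=D(Q(w))$ invoked in this proof can be checked mechanically. Here is a small Python sketch (our own, using textbook RSK row insertion, not code from the paper):

```python
import bisect

def word_descents(w):
    return {i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1]}

def standardize(w):
    """stan(w): rank the pairs (w_i, i) in lexicographic order."""
    order = sorted(range(len(w)), key=lambda i: (w[i], i))
    stan = [0] * len(w)
    for rank, i in enumerate(order, start=1):
        stan[i] = rank
    return stan

def rsk_recording(w):
    """Recording tableau Q(w) of RSK row insertion."""
    P, Q = [], []
    for step, x in enumerate(w, start=1):
        r = 0
        while True:
            if r == len(P):
                P.append([])
                Q.append([])
            row = P[r]
            pos = bisect.bisect_right(row, x)  # leftmost entry > x
            if pos == len(row):                # x goes at the end of the row
                row.append(x)
                Q[r].append(step)
                break
            row[pos], x = x, row[pos]          # bump into the next row
            r += 1
    return Q

def tableau_descents(Q):
    row_of = {v: r for r, row in enumerate(Q) for v in row}
    n = len(row_of)
    return {i for i in range(1, n) if row_of[i + 1] > row_of[i]}

# the three descent sets coincide on every sample word
for w in [(2, 1, 2), (3, 1, 2, 2, 1), (1, 1, 1), (4, 3, 2, 1), (2, 3, 1, 3, 2, 1)]:
    assert word_descents(w) == word_descents(standardize(w)) \
        == tableau_descents(rsk_recording(w))
```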
Graphs related by linear relations
----------------------------------
Consider the LLT diagram ${\boldsymbol{\nu}}_{K_n}$ corresponding to the complete graph $K_n$ (for the sake of using the linear relation, we use the left-most figure in Figure \[fig:linearrel\]) and move the cell on the rightmost diagonal upward so that $k$ cells of the left diagonal lie below and to the left of the moved cell (see the middle figure in Figure \[fig:linearrel\]). In terms of the corresponding graphs, moving the cell to the second diagonal removes the edges joining the moved vertex to the $k$ vertices below it. We denote the resulting graph by $K_n ^{(k)}$. Then from Theorem \[thm:local\], we obtain the following linear relations.
(Figure \[fig:linearrel\]: the three LLT diagrams used in the linear relations — the diagram for $K_n$ with $n-1$ cells on a diagonal, the diagram for $K_n^{(k)}$ in which the moved cell has $k$ cells below and to its left and $n-k-1$ above, and the diagram for $K_n^{(n-1)}$.)
\[prop:lrs\] $${\operatorname{LLT}}_{K_n}({\bf x};q) +q[n-2]_q {\operatorname{LLT}}_{K_n ^{(n-1)}}({\bf x};q) = [n-1]_q {\operatorname{LLT}}_{K_n ^{(n-2)}}({\bf x};q).\label{eqn:lr1}$$ More generally, for $1\le k\le \ell-1$ and $2\le \ell \le n-1$, we have $$[\ell -k]_q {\operatorname{LLT}}_{K_n}({\bf x};q) +q^{\ell -k}[k]_q {\operatorname{LLT}}_{K_n ^{(\ell)}}({\bf x};q) = [\ell]_q {\operatorname{LLT}}_{K_n ^{(k)}}({\bf x};q).\label{eqn:lr2}$$
By using the linear relations in Proposition \[prop:lrs\], we can prove combinatorial formulas for the LLT polynomials of the diagrams ${\boldsymbol{\nu}}_{K_n ^{(k)}}$.
We have $${\operatorname{LLT}}_{K_n ^{(k)}} ({\bf x};q)=\sum_{\lambda\vdash n}\left( \sum_{T\in {\operatorname{SYT}}(\lambda)}q^{wt_{{\boldsymbol{\nu}}_{K_n ^{(k)}}}(T)}\right)s_\lambda,$$ where $$wt_{{\boldsymbol{\nu}}_{K_n ^{(k)}}}(T) =\sum_{i\in D(T)} a_i,$$ with the area sequence $a_1 = n-k-1$ and $a_i = n-i$ for $2\le i\le n$.
We use the linear relation with $\ell =n-1$: $$[n-k-1]_q {\operatorname{LLT}}_{K_n}({\bf x};q) +q^{n -k-1}[k]_q {\operatorname{LLT}}_{K_n ^{(n-1)}}({\bf x};q) = [n-1]_q {\operatorname{LLT}}_{K_n ^{(k)}}({\bf x};q).$$ In view of this linear relation, given $\lambda\vdash n$, for each $T\in{\operatorname{SYT}}(\lambda)$ we need to prove that $$q^{wt_{{\boldsymbol{\nu}}_{K_n ^{(k)}}}(T)}=\frac{1}{[n-1]_q}\left([n -k-1]_q\cdot q^{wt_{{\boldsymbol{\nu}}_{K_n }}(T)}+q^{n-k-1}[k]_q \cdot q^{wt_{{\boldsymbol{\nu}}_{K_n ^{(n-1)}}}(T)}\right).$$ First, observe that the $a_i$ values of the Dyck diagrams corresponding to ${\boldsymbol{\nu}}_{K_n}$, ${\boldsymbol{\nu}}_{K_n ^{(k)}}$ and ${\boldsymbol{\nu}}_{K_n ^{(n-1)}}$ agree for $2\le i\le n$, namely $a_i = n-i$. We divide into two cases according to whether $1\in D(T)$. If $1 \notin D(T)$, then $$\begin{aligned}
&\frac{1}{[n-1]_q}\left([n -k-1]_q\cdot q^{wt_{{\boldsymbol{\nu}}_{K_n }}(T)}+q^{n-k-1}[k]_q \cdot q^{wt_{{\boldsymbol{\nu}}_{K_n ^{(n-1)}}}(T)}\right) \\
&= \frac{q^{wt_{{\boldsymbol{\nu}}_{K_n}}(T)}}{[n-1]_q} ([n-k-1]_q +q^{n-k-1}[k]_q)\\
&= q^{wt_{{\boldsymbol{\nu}}_{K_n}}(T)}=q^{wt_{{\boldsymbol{\nu}}_{K_n ^{(k)}}}(T)}.\end{aligned}$$ If $1\in D(T)$, then $$\begin{aligned}
&\frac{1}{[n-1]_q}\left([n-k-1]_q \cdot q^{wt_{{\boldsymbol{\nu}}_{K_n}}(T)}+q^{n-k-1}[k]_q \cdot q^{wt_{{\boldsymbol{\nu}}_{K_n ^{(n-1)}}}(T)}\right) \\
&= \frac{1}{[n-1]_q} \left([n-k-1]_q \cdot q^{n-1}\cdot q^{wt_{{\boldsymbol{\nu}}_{K_n ^{(n-1)}}}(T)}+q^{n-k-1}[k]_q \cdot q^{wt_{{\boldsymbol{\nu}}_{K_n ^{(n-1)}}}(T)}\right)\\
&= \frac{q^{n-k-1}\cdot q^{wt_{{\boldsymbol{\nu}}_{K_n ^{(n-1)}}}(T)}}{[n-1]_q}(q^{k}[n-k-1]_q +[k]_q)\\
&=q^{n-k-1} \cdot q^{wt_{{\boldsymbol{\nu}}_{K_n ^{(n-1)}}}(T)}=q^{wt_{{\boldsymbol{\nu}}_{K_n ^{(k)}}}(T)}.\end{aligned}$$
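The two $q$-integer identities used above, $[n-k-1]_q +q^{n-k-1}[k]_q = [n-1]_q$ and $q^{k}[n-k-1]_q +[k]_q = [n-1]_q$, simply split $1+q+\cdots +q^{n-2}$ into an initial and a final block of terms. A quick Python check (our own sketch, with $[m]_q$ stored as a coefficient list):

```python
def qint(n):
    """[n]_q = 1 + q + ... + q^{n-1} as a coefficient list."""
    return [1] * n

def add(p, r):
    """Sum of two polynomials given as coefficient lists."""
    out = [0] * max(len(p), len(r))
    for i, c in enumerate(p):
        out[i] += c
    for i, c in enumerate(r):
        out[i] += c
    return out

def shift(p, k):
    """Multiply a coefficient list by q^k."""
    return [0] * k + list(p)

n = 9
for k in range(1, n - 1):
    # [n-k-1]_q + q^{n-k-1} [k]_q == [n-1]_q
    assert add(qint(n - k - 1), shift(qint(k), n - k - 1)) == qint(n - 1)
    # q^k [n-k-1]_q + [k]_q == [n-1]_q
    assert add(shift(qint(n - k - 1), k), qint(k)) == qint(n - 1)
```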
Lollipop graphs {#lollipop-graphs}
---------------
In this subsection, we consider the Schur expansion of LLT polynomials corresponding to lollipop graphs defined in Section \[sec:lollipopG\].
(Figure: the lollipop graph $L_{6,5}$, with the complete graph $K_6$ on vertices $6,\dots,11$ and the path $P_5$ on vertices $1,\dots,5$, together with its corresponding LLT diagram, whose cells are labeled $1,\dots,11$.)
\[prop:LLTlollipop\] We have $${\operatorname{LLT}}_{L_{m,n}}({\bf x};q) =\sum_{\lambda\vdash m+n}\left( \sum_{T\in {\operatorname{SYT}}(\lambda) }q^{wt_{L_{m,n}}(T)}\right) s_{\lambda},$$ where $wt_{L_{m,n}}(T)=\sum_{i\in D(T)}a_i$ with $$a_i = \begin{cases}
1 & \text{ for } 1\le i \le n,\\
m+n-i & \text{ for } n+1 \le i\le m+n.\end{cases}$$
Let us rewrite the linear relation given in Proposition \[prop:quasilollipop\] in terms of LLT polynomials: $$\label{eqn:lollipoplr1}
{\operatorname{LLT}}_{L_{m,n}}({\bf x};q) =\frac{1}{[m]_q} \left({\operatorname{LLT}}_{L_{m+1,n-1}}({\bf x};q) +q[m-1]_q {\operatorname{LLT}}_{K_m}({\bf x};q) \cdot {\operatorname{LLT}}_{P_n}({\bf x};q) \right).$$ We utilize this linear relation to prove the Schur coefficient formula. Note that it computes the LLT polynomial of the lollipop graph $L_{m,n}$ from that of $L_{m+1,n-1}$, which has a larger complete-graph part and a shorter path part. So, as the initial case, we prove a combinatorial formula for ${\operatorname{LLT}}_{L_{N-1,1}}({\bf x};q)$. In this case, the linear relation becomes $$\label{eqn:lollipoplr2}
{\operatorname{LLT}}_{L_{N-1,1}}({\bf x};q) =\frac{1}{[N-1]_q} \left({\operatorname{LLT}}_{K_N}({\bf x};q) +q[N-2]_q {\operatorname{LLT}}_{K_{N-1}}({\bf x};q) \cdot s_1 \right).$$ By Lemma \[lem:prod\_of\_LLT\], we know that $$\begin{aligned}
{\operatorname{LLT}}_{K_{N-1}}({\bf x};q) \cdot s_1 & = {\operatorname{LLT}}_{K_{N}^{(N-1)}}({\bf x};q)\\
&= \sum_{\lambda\vdash N}\left(\sum_{T\in{\operatorname{SYT}}(\lambda)} q^{wt_{K_N ^{(N-1)}}(T)} \right)s_\lambda, \end{aligned}$$ where $wt_{K_N ^{(N-1)}}(T)=\sum_{i\in D(T)}a_i$ with $a_1 =0$ and $a_i =N-i$ for $2\le i\le N$. We have already seen this type of Schur expansion for ${\operatorname{LLT}}_{K_N}({\bf x};q)$, with $wt_{K_N}(T)=\sum_{i\in D(T)}a_i$ for $a_i = N-i$, $1\le i \le N$. Observing that the $a_i$ values agree for $2\le i\le N$, we consider the two cases $1\in D(T)$ and $1\notin D(T)$ and prove that, for $\lambda\vdash N$ and $T\in {\operatorname{SYT}}(\lambda)$, $$q^{wt_{L_{N-1,1}}(T)}=\frac{1}{[N-1]_q} \left( q^{wt_{K_N}(T)}+q[N-2]_q \cdot q^{wt_{K_N ^{(N-1)}}(T)}\right).$$ If $1\notin D(T)$, then $$\begin{aligned}
& \frac{1}{[N-1]_q} \left( q^{wt_{K_N}(T)}+q[N-2]_q \cdot q^{wt_{K_N ^{(N-1)}}(T)}\right)\\
&= \frac{ q^{wt_{K_N}(T)}}{[N-1]_q} (1+q[N-2]_q)\\
&= q^{wt_{K_N}(T)} = q^{wt_{L_{N-1,1}}(T)}.\end{aligned}$$ If $1\in D(T)$, then $$\begin{aligned}
& \frac{1}{[N-1]_q} \left( q^{wt_{K_N}(T)}+q[N-2]_q \cdot q^{wt_{K_N ^{(N-1)}}(T)}\right)\\
&= \frac{1}{[N-1]_q} \left(q^{N-1}\cdot q^{wt_{K_N ^{(N-1)}}(T)}+q[N-2]_q \cdot q^{wt_{K_N ^{(N-1)}}(T)}\right)\\
&= \frac{ q^{1+ wt_{K_N ^{(N-1)}}(T)}}{[N-1]_q}(q^{N-2}+[N-2]_q)\\
&= q^{1+ wt_{K_N ^{(N-1)}}(T)} = q^{wt_{L_{N-1,1}}(T)}.\end{aligned}$$ Now, for $\lambda\vdash m+n$, we compute the coefficient of $s_\lambda$ on the right-hand side of the linear relation above (denote it by $\mathsf{RHS}$) and check that it agrees with $\sum_{T\in {\operatorname{SYT}}(\lambda) }q^{wt_{L_{m,n}}(T)}$. By Lemma \[lem:prod\_of\_LLT\], we know that $${\operatorname{LLT}}_{K_{m}}({\bf x};q) \cdot {\operatorname{LLT}}_{P_{n}}({\bf x};q) = \sum_{\lambda\vdash m+n}\left( \sum_{T\in {\operatorname{SYT}}(\lambda) }q^{wt_{K_{m} \cup P_{n}}(T)}\right) s_{\lambda},$$ where $wt_{K_{m} \cup P_{n}}(T)=\sum_{i\in D(T)}a_i$ with $$a_i = \begin{cases}
1 & \text{ for } 1\le i \le n-1,\\
0 & \text{ for } i = n,\\
m+n-i & \text{ for } n+1 \le i\le m+n.\end{cases}$$ So $$\begin{aligned}
\mathsf{RHS} &= \frac{1}{[m]_q}\left(\sum_{\substack{T\in{\operatorname{SYT}}(\lambda)\\ n\in D(T)}} q^{m-1}\cdot q^{wt_{L_{m,n}}(T)} +
\sum_{\substack{T\in{\operatorname{SYT}}(\lambda)\\ n\notin D(T)}} q^{wt_{L_{m,n}}(T)}\right.\\
& \qquad\qquad\qquad\qquad\qquad\left.+q[m-1]_q \sum_{T\in{\operatorname{SYT}}(\lambda)}q^{wt_{L_{m,n}}(T)-\chi (n\in D(T))}\right)\\
&= \frac{1}{[m]_q}\left(\sum_{\substack{T\in{\operatorname{SYT}}(\lambda)\\ n\in D(T)}} q^{m-1}\cdot q^{wt_{L_{m,n}}(T)} +
\sum_{\substack{T\in{\operatorname{SYT}}(\lambda)\\ n\notin D(T)}} q^{wt_{L_{m,n}}(T)} \right.\\
&\quad\qquad\qquad\left. + [m-1]_q \sum_{\substack{T\in{\operatorname{SYT}}(\lambda)\\ n\in D(T)}} q^{wt_{L_{m,n}}(T)} +q[m-1]_q
\sum_{\substack{T\in{\operatorname{SYT}}(\lambda)\\ n\notin D(T)}} q^{wt_{L_{m,n}}(T)}\right)\\
&=\frac{1}{[m]_q}\left(\sum_{\substack{T\in{\operatorname{SYT}}(\lambda)\\ n\in D(T)}} q^{wt_{L_{m,n}}(T)} (q^{m-1}+[m-1]_q)
+ \sum_{\substack{T\in{\operatorname{SYT}}(\lambda)\\ n\notin D(T)}} q^{wt_{L_{m,n}}(T)}(1+q[m-1]_q) \right)\\
&=\sum_{\substack{T\in{\operatorname{SYT}}(\lambda)\\ n\in D(T)}} q^{wt_{L_{m,n}}(T)}+ \sum_{\substack{T\in{\operatorname{SYT}}(\lambda)\\ n\notin D(T)}} q^{wt_{L_{m,n}}(T)}\\
&= \sum_{T\in{\operatorname{SYT}}(\lambda)} q^{wt_{L_{m,n}}(T)}.\end{aligned}$$
Melting lollipop graphs {#melting-lollipop-graphs}
-----------------------
The Schur coefficients of LLT polynomials corresponding to the melting lollipop graphs (see Section \[melting\] for the definition of melting lollipop graphs) can be described in a similar fashion.
\[prop:meltinglollipop\] We have $${\operatorname{LLT}}_{L_{m,n}^{(k)}}({\bf x};q) =\sum_{\lambda\vdash m+n}\left( \sum_{T\in {\operatorname{SYT}}(\lambda) }q^{wt_{L_{m,n}^{(k)}}(T)}\right) s_{\lambda},$$ where $wt_{L_{m,n}^{(k)}}(T)=\sum_{i\in D(T)}a_i$ with $$a_i = \begin{cases}
1 & \text{ for } 1\le i \le n,\\
m-k-1 & \text{ for } i = n+1,\\
m+n-i & \text{ for } n+2 \le i\le m+n.\end{cases}$$
To prove this Schur expansion formula, we rewrite the corresponding linear relation in terms of LLT polynomials: $$\begin{gathered}
\label{eqn:mlollipoplr1}
{\operatorname{LLT}}_{L_{m,n}^{(k)}}({\bf x};q) \\=\frac{1}{[m-1]_q} \left([m-k-1]_q {\operatorname{LLT}}_{L_{m,n}}({\bf x};q) +q^{m-k-1}[k]_q {\operatorname{LLT}}_{K_{m-1}}({\bf x};q) \cdot {\operatorname{LLT}}_{P_{n+1}}({\bf x};q) \right).\end{gathered}$$ We have already obtained the Schur expansion of ${\operatorname{LLT}}_{L_{m,n}}({\bf x};q)$ in Proposition \[prop:LLTlollipop\], and by Lemma \[lem:prod\_of\_LLT\] we have $${\operatorname{LLT}}_{K_{m-1}}({\bf x};q) \cdot {\operatorname{LLT}}_{P_{n+1}}({\bf x};q) = \sum_{\lambda\vdash m+n}\left( \sum_{T\in {\operatorname{SYT}}(\lambda) }q^{wt_{K_{m-1}\cup P_{n+1}}(T)}\right) s_{\lambda},$$ where $wt_{K_{m-1}\cup P_{n+1}}(T)=\sum_{i\in D(T)}a_i$ with $$a_i = \begin{cases}
1 & \text{ for } 1\le i \le n,\\
0 & \text{ for } i = n+1,\\
m+n-i & \text{ for } n+2 \le i\le m+n.\end{cases}$$ For $\lambda\vdash m+n$, we compute the coefficient of $s_\lambda$ on the right-hand side of the relation above and check that it agrees with $\sum_{T\in {\operatorname{SYT}}(\lambda) }q^{wt_{L_{m,n}^{(k)}}(T)}$. The proof proceeds similarly to that of Proposition \[prop:LLTlollipop\], by dividing into the cases $n+1\in D(T)$ and $n+1\notin D(T)$. We omit the details.
Appendix
========
In this section, we introduce a combinatorial way to compute the Schur coefficients of LLT polynomials when the Schur functions are indexed by hook shapes. We apply the result of Egge–Loehr–Warrington [@ELW] to the quasisymmetric expansion of LLT polynomials. In particular, when the LLT polynomials are unicellular, the weight statistic in Definition \[def:wt\] can also be used to describe the Schur coefficients indexed by hook shapes.
We recall the quasisymmetric expansion of LLT polynomials given in [@HHL05]. A semistandard tableau $\boldsymbol{S}$ is *standard* if it is a bijection $\boldsymbol{S}:\bigsqcup {\boldsymbol{\nu}}\rightarrow \{1,2,\dots, n\}$, where $n=|{\boldsymbol{\nu}}|=\sum_{j=1}^k |\nu^{(j)}|$. We denote the set of standard tableaux of shape ${\boldsymbol{\nu}}$ by ${\operatorname{SYT}}({\boldsymbol{\nu}})$.
Define the *descent set* $D(\boldsymbol{S})\subseteq \{1,2,\dots, n-1\}$ of $\boldsymbol{S}\in {\operatorname{SYT}}({\boldsymbol{\nu}})$ by $$D(\boldsymbol{S}) =\{ i ~:~ \boldsymbol{S}^{-1}(i+1)\text{ precedes } \boldsymbol{S}^{-1}(i) \text{ in the content reading order} \}.$$ Then, $$\label{eqn:LLTinQ}
{\operatorname{LLT}}_{{\boldsymbol{\nu}}}({\bf x};q) =\sum_{\boldsymbol{S}\in {\operatorname{SYT}}({\boldsymbol{\nu}})}q^{{\operatorname{inv}}(\boldsymbol{S})}F_{co(D(\boldsymbol{S}))}({\bf x}),$$ where $co(D(\boldsymbol{S}))$ is the composition corresponding to the set $D(\boldsymbol{S})$ and $F_\alpha ({\bf x})$ is the fundamental quasisymmetric function indexed by the composition $\alpha$.
Schur coefficients indexed by hook shapes
-----------------------------------------
Recall the result of Egge-Loehr-Warrington [@ELW] on obtaining the Schur expansion given the quasisymmetric expansion in terms of the fundamental quasisymmetric functions.
A skew diagram $\lambda/\mu$ is a *rim-hook* of $\lambda$ if $\lambda/\mu$ does not contain any $2\times 2$ subdiagram and any two consecutive cells of $\lambda/\mu$ share an edge. A rim-hook is *special* if it starts from a cell in the first column. The number of rows of a rim-hook $H$ is referred to as its *height*, denoted by $\text{ht}(H)$, and the sign of a rim-hook $H$ is defined to be $(-1)^{\text{ht}(H)-1}$. A *special rim-hook tableau* $S$ of shape $\lambda$ and content $\alpha$ is a partition of the diagram of $\lambda$ into special rim-hooks such that the length of the $i$th rim-hook from the bottom is $\alpha_i$. The sign of $S$ is the product of the signs of the rim-hooks of $S$.
(Figure: two examples of special rim-hook tableaux of the same shape.)
The result of Egge-Loehr-Warrington [@ELW] gives a combinatorial description of Schur coefficients, given a fundamental quasisymmetric expansion of any symmetric functions.
[@ELW Theorem 11]\[thm:ELW1\] Suppose $\mathbb{F}$ is a field, and we have a symmetric function $$f=\sum_{\lambda\vdash n}c_\lambda s_\lambda =\sum_{\alpha\models n }d_{\alpha}F_\alpha \quad(c_\lambda , d_\alpha\in \mathbb{F}).$$ Then we have $$c_\lambda=\sum_{\alpha\models n}d_\alpha K_n ^{\ast} (\alpha, \lambda)$$ for all $\lambda\vdash n$, where $$K_n ^{\ast} (\alpha, \lambda) =\sum_{\beta \text{ finer than } \alpha}K_n ' (\beta, \lambda),$$ and $K_n '$ is a right inverse of the Kostka matrix $K_n$ with entries $K'_n(\alpha,\lambda)$, the sum of the signs of the special rim-hook tableaux of shape $\lambda$ and content $\alpha$.
If each rim-hook contains exactly one cell in the first column of the diagram of $\lambda$, then we say that the rim-hook tableau $S$ of shape $\lambda$ and content $\alpha$ (or equivalently, $(\alpha, \lambda)$) is *flat*. Then we can simplify the description of $K_n ^{\ast}(\alpha, \lambda)$ even more.
[@ELW Theorem 15]\[thm:ELW2\] Let $\alpha\models n$, $\lambda\vdash n$. If $(\alpha,\lambda)$ is flat, then $K_n ^{\ast} (\alpha, \lambda)=K_n '(\alpha,\lambda)=\pm 1$. Otherwise, $K_n ^{\ast}(\alpha, \lambda)=0$. In particular, $K_n ^{\ast}(\alpha,\lambda)=\chi(\alpha=\lambda)$ when $\lambda$ is a hook.
Given the quasisymmetric expansion , we apply Theorem \[thm:ELW1\] and \[thm:ELW2\] to obtain the Schur coefficients of LLT polynomials when the Schur functions are indexed by hook shapes.
\[prop:hook\] Let $\lambda=(k, 1^{n-k})$ be a partition of hook shape. Then $$\langle {\operatorname{LLT}}_{\boldsymbol{\nu} }({\bf x};q) , s_{\lambda} \rangle = \sum_{\substack{\boldsymbol{S}\in{\operatorname{SYT}}(\boldsymbol{\nu} ) \text{ such that}\\ D(\boldsymbol{S}) = \{ k, k+1,\dots, n-1\}} } q^{{\operatorname{inv}}(\boldsymbol{S})}.$$
The Dyck diagram explained in Section \[subsec:relation\] can be used to compute $\langle {\operatorname{LLT}}_{\boldsymbol{\nu} }({\bf x};q) , s_{\lambda} \rangle$ in Proposition \[prop:hook\]. Since ${\boldsymbol{\nu}}$ is an $n$-tuple of single cells, $\boldsymbol{S}\in{\operatorname{SYT}}(\boldsymbol{\nu} )$ can be regarded as a word of length $n$, and to satisfy the condition $D(\boldsymbol{S}) = \{ k, k+1,\dots, n-1\}$, the reading word must be a shuffle of $(n,n-1,\dots, k+1)$ and $(1,2,\dots, k-1)$, followed by $k$ at the end. We denote by $D_k$ the set of all words obtained from such a shuffle product. To compute the inversion statistic, place a reading word in $D_k$ on the main diagonal, starting from the bottom-left corner of the Dyck diagram, and count the number of inversion pairs in this setting.
\[ex:hook\] We continue with the LLT diagram ${\boldsymbol{\nu}}$ of Example \[ex:dg\]. To obtain, for instance, the coefficient of $s_{2111}$, we have to consider the set of reading words $(1\shuffle (5,4,3))2$, i.e., $$\begin{aligned}
& 15432\\
&51432\\
&54132\\
&54312.\end{aligned}$$ We place those reading words on the diagonal of the Dyck diagram and compute the inversion statistic. $$\begin{tikzpicture}[scale=.46]
\draw (0,0)--(0,5)--(5,5);
\draw (0,4)--(4,4);
\draw (0,3)--(3,3);
\draw (0,2)--(2,2);
\draw (0,1)--(1,1);
\draw (1,1)--(1,5);
\draw (2,2)--(2,5);
\draw (3,3)--(3,5);
\draw (4,4)--(4,5);
\draw[thick] (0,1)--(0,4)--(1,4)--(1,5)--(4,5)--(4,4)--(3,4)--(3,3)--(2,3)--(2,2)--(1,2)--(1,1)--(0,1);
\node (7) at (.5, 4.5) {$\mathsf{X}$};
\node (8) at (.45, .5) {$1$};
\node (9) at (1.45, 1.5) {$5$};
\node (10) at (2.45, 2.5) {$4$};
\node (11) at (3.45, 3.5) {$3$};
\node (12) at (4.45, 4.5) {$2$};
\node (13) at (1.5,4.5) {$\bigcirc$};
\node (14) at (2.5,4.5) {$\bigcirc$};
\node (15) at (3.5,4.5) {$\bigcirc$};
\node (16) at (.5,3.5) {$\cdot$};
\node (17) at (1.5,3.5) {$\bigcirc$};
\node (18) at (2.5,3.5) {$\bigcirc$};
\node (19) at (.5,2.5) {$\cdot$};
\node (20) at (1.5,2.5) {$\bigcirc$};
\node (21) at (.5,1.5) {$\cdot$};
\node (22) at (4, 1) {$q^6$};
\end{tikzpicture}
\qquad
\begin{tikzpicture}[scale=.46]
\draw (0,0)--(0,5)--(5,5);
\draw (0,4)--(4,4);
\draw (0,3)--(3,3);
\draw (0,2)--(2,2);
\draw (0,1)--(1,1);
\draw (1,1)--(1,5);
\draw (2,2)--(2,5);
\draw (3,3)--(3,5);
\draw (4,4)--(4,5);
\draw[thick] (0,1)--(0,4)--(1,4)--(1,5)--(4,5)--(4,4)--(3,4)--(3,3)--(2,3)--(2,2)--(1,2)--(1,1)--(0,1);
\node (7) at (.5, 4.5) {$\mathsf{X}$};
\node (8) at (.45, .5) {$5$};
\node (9) at (1.45, 1.5) {$1$};
\node (10) at (2.45, 2.5) {$4$};
\node (11) at (3.45, 3.5) {$3$};
\node (12) at (4.45, 4.5) {$2$};
\node (13) at (1.5,4.5) {$\cdot$};
\node (14) at (2.5,4.5) {$\bigcirc$};
\node (15) at (3.5,4.5) {$\bigcirc$};
\node (16) at (.5,3.5) {$\bigcirc$};
\node (17) at (1.5,3.5) {$\cdot$};
\node (18) at (2.5,3.5) {$\bigcirc$};
\node (19) at (.5,2.5) {$\bigcirc$};
\node (20) at (1.5,2.5) {$\cdot$};
\node (21) at (.5,1.5) {$\bigcirc$};
\node (22) at (4, 1) {$q^6$};
\end{tikzpicture}
\qquad
\begin{tikzpicture}[scale=.46]
\draw (0,0)--(0,5)--(5,5);
\draw (0,4)--(4,4);
\draw (0,3)--(3,3);
\draw (0,2)--(2,2);
\draw (0,1)--(1,1);
\draw (1,1)--(1,5);
\draw (2,2)--(2,5);
\draw (3,3)--(3,5);
\draw (4,4)--(4,5);
\draw[thick] (0,1)--(0,4)--(1,4)--(1,5)--(4,5)--(4,4)--(3,4)--(3,3)--(2,3)--(2,2)--(1,2)--(1,1)--(0,1);
\node (7) at (.5, 4.5) {$\mathsf{X}$};
\node (8) at (.45, .5) {$5$};
\node (9) at (1.45, 1.5) {$4$};
\node (10) at (2.45, 2.5) {$1$};
\node (11) at (3.45, 3.5) {$3$};
\node (12) at (4.45, 4.5) {$2$};
\node (13) at (1.5,4.5) {$\bigcirc$};
\node (14) at (2.5,4.5) {$\cdot$};
\node (15) at (3.5,4.5) {$\bigcirc$};
\node (16) at (.5,3.5) {$\bigcirc$};
\node (17) at (1.5,3.5) {$\bigcirc$};
\node (18) at (2.5,3.5) {$\cdot$};
\node (19) at (.5,2.5) {$\bigcirc$};
\node (20) at (1.5,2.5) {$\bigcirc$};
\node (21) at (.5,1.5) {$\bigcirc$};
\node (22) at (4, 1) {$q^7$};
\end{tikzpicture}
\qquad
\begin{tikzpicture}[scale=.46]
\draw (0,0)--(0,5)--(5,5);
\draw (0,4)--(4,4);
\draw (0,3)--(3,3);
\draw (0,2)--(2,2);
\draw (0,1)--(1,1);
\draw (1,1)--(1,5);
\draw (2,2)--(2,5);
\draw (3,3)--(3,5);
\draw (4,4)--(4,5);
\draw[thick] (0,1)--(0,4)--(1,4)--(1,5)--(4,5)--(4,4)--(3,4)--(3,3)--(2,3)--(2,2)--(1,2)--(1,1)--(0,1);
\node (7) at (.5, 4.5) {$\mathsf{X}$};
\node (8) at (.45, .5) {$5$};
\node (9) at (1.45, 1.5) {$4$};
\node (10) at (2.45, 2.5) {$3$};
\node (11) at (3.45, 3.5) {$1$};
\node (12) at (4.45, 4.5) {$2$};
\node (13) at (1.5,4.5) {$\bigcirc$};
\node (14) at (2.5,4.5) {$\bigcirc$};
\node (15) at (3.5,4.5) {$\cdot$};
\node (16) at (.5,3.5) {$\bigcirc$};
\node (17) at (1.5,3.5) {$\bigcirc$};
\node (18) at (2.5,3.5) {$\bigcirc$};
\node (19) at (.5,2.5) {$\bigcirc$};
\node (20) at (1.5,2.5) {$\bigcirc$};
\node (21) at (.5,1.5) {$\bigcirc$};
\node (22) at (4, 1) {$q^8$};
\end{tikzpicture}$$ Hence, we obtain $$\langle {\operatorname{LLT}}_{{\boldsymbol{\nu}}}({\bf x};q), s_{2111}\rangle = 2q^6 +q^7+q^8.$$
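The computation in this example is easy to automate. The following Python sketch (our own) enumerates the reading words $(1\shuffle(5,4,3))2$ and counts inversion pairs over the cells of the Dyck diagram; here we encode the diagram by its area sequence $a=(3,3,2,1,0)$ and take $(i,j)$ to be a cell exactly when $j-i\le a_i$, which is our reading of the figures:

```python
from itertools import combinations
from collections import Counter

def shuffles(u, v):
    """All interleavings of u and v, each kept in its own order."""
    n = len(u) + len(v)
    for pos in combinations(range(n), len(u)):
        it_u, it_v, pset = iter(u), iter(v), set(pos)
        yield tuple(next(it_u) if i in pset else next(it_v) for i in range(n))

def dyck_inv(w, area):
    """Inversion pairs (i, j): i < j, cell (i, j) in the diagram, w_i > w_j.
    Cell membership is taken to be j - i <= a_i (here 0-indexed)."""
    n = len(w)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if j - i <= area[i] and w[i] > w[j])

area = (3, 3, 2, 1, 0)
words = [s + (2,) for s in shuffles((1,), (5, 4, 3))]
poly = Counter(dyck_inv(w, area) for w in words)
# poly == Counter({6: 2, 7: 1, 8: 1}), i.e. 2q^6 + q^7 + q^8
```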
To describe the Schur coefficients in Proposition \[prop:hook\] in terms of the weight statistic used in Section \[sec:LLT\_Schur\], we first recall the descent sets of words and of Young tableaux. For a word $u = u_1u_2 \cdots u_n$, an index $i \in \{ 1,2,\cdots,n-1 \}$ is a descent of $u$ if $u_i > u_{i+1}$, and $D(u)$ denotes the set of all descents of $u$. For a standard Young tableau $T$, an index $i \in \{ 1,2,\cdots,n-1 \}$ is a descent of $T$ if $i$ appears in a lower row of $T$ than $i+1$, and $D(T)$ denotes the set of all descents of $T$. Notice that if $u \in D_k$, then $u_n=k$; moreover, for $1 \leq i <j \leq k-1$ the letter $j$ cannot precede $i$ in $u$, and for $k+1 \leq i <j \leq n$ the letter $i$ cannot precede $j$ in $u$. Hence, whenever $u \in D_k$ and $i \in D(u)$, the letter $u_i$ must lie in $\{k+1,k+2,\cdots,n \}$ and $u_i > u_j$ for all $j>i$. This implies that if $u$ is the reading word of $\boldsymbol{S}\in{\operatorname{SYT}}(\boldsymbol{\nu} )$ with $D(\boldsymbol{S}) = \{ k, k+1,\dots, n-1\}$, then ${\operatorname{inv}}(\boldsymbol{S})$ counts the cells of $\pi_{\boldsymbol{\nu}}$ whose column is labeled by $i$ and whose row is labeled by $j$, over all $i \in D(u)$ and all $j <i$. Letting $\boldsymbol{\nu}^t$ be the LLT diagram whose corresponding Dyck diagram is the conjugate of $\pi_{\boldsymbol{\nu}}$ as a skew shape, we can see that $$\label{LLT_conjugate_hook}
\langle {\operatorname{LLT}}_{\boldsymbol{\nu}^t }({\bf x};q) , s_{\lambda} \rangle
= \sum_{u \in D_k} q^{wt_{\boldsymbol{\nu}}(u)}$$ where $wt_{\boldsymbol{\nu}}(u)$ is the sum of $a_i$ in $a_{\boldsymbol{\nu}}$ for all $i \in D(u)$.
On the other hand, from the definition of the shuffle product one can see that every word in $D_k$ can be obtained from the word $1 \, 2 \cdots (k-1) \, n \, (n-1) \cdots (k+1) \, k $ by repeatedly applying the following relation: replace $xzy$ by $zxy$, or vice versa, whenever $x < y < z$. This shows that any two words in $D_k$ are Knuth equivalent, so they result in the same insertion tableau under the Robinson–Schensted–Knuth (RSK) insertion algorithm. Moreover, since ${\operatorname{SYT}}(\lambda)$ and $D_k$ are equicardinal when $\lambda = (k, 1^{n-k})$, the RSK algorithm guarantees that ${\operatorname{SYT}}(\lambda)$ is exactly the set of recording tableaux $Q(u)$ for $u \in D_k$. Finally, combining this with the well-known facts that ${\operatorname{LLT}}_{\boldsymbol{\nu}^t }({\bf x};q) = {\operatorname{LLT}}_{\boldsymbol{\nu} }({\bf x};q)$ and that $i \in D(u)$ if and only if $i \in D(Q(u))$, we conclude the following proposition.
\[prop:hook\_unicellular\] Let $\lambda=(k, 1^{n-k})$ be a partition of a hook shape and $\boldsymbol{\nu}$ a unicellular LLT diagram. If $a_{\boldsymbol{\nu}} = (a_1, a_2, \cdots, a_n)$ is the area sequence of $\pi_{\boldsymbol{\nu}}$, then $$\langle {\operatorname{LLT}}_{\boldsymbol{\nu} }({\bf x};q) , s_{\lambda} \rangle =
\sum_{T \in {\operatorname{SYT}}(\lambda)} q^{wt_{\boldsymbol{\nu}}(T)} \, ,$$ where $wt_{\boldsymbol{\nu}}(T) = \sum_{i \in D(T)} a_i$.
\[exam:hook\_with\_descent\] We continue with the LLT diagram ${\boldsymbol{\nu}}$ and the hook partition $\lambda$ of Example \[ex:hook\]. The area sequence is $a_{{\boldsymbol{\nu}}} = (3,3,2,1,0)$, and there are the following four standard Young tableaux of shape $(2,1,1,1)$
(Figure: the four standard Young tableaux of shape $(2,1,1,1)$, whose first rows are $[1,2]$, $[1,3]$, $[1,4]$ and $[1,5]$ respectively; the circled entries mark the descents.)
with $\boldsymbol{\nu}$-weights $6$, $6$, $7$ and $8$, respectively. Once again, we obtain $$\langle {\operatorname{LLT}}_{{\boldsymbol{\nu}}}({\bf x};q), s_{2111}({\bf x})\rangle = 2q^6 +q^7+q^8.$$
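The weights in this example can be double-checked with a short script (helper names are ours), using the convention that $i$ is a descent of $T$ when $i+1$ lies in a strictly later row:

```python
# Area sequence a_nu = (3,3,2,1,0) from the example, indexed from 1.
a = {1: 3, 2: 3, 3: 2, 4: 1, 5: 0}

def descents(tableau):
    """i is a descent if i+1 sits in a strictly later row of the tableau."""
    row_of = {x: r for r, row in enumerate(tableau) for x in row}
    n = len(row_of)
    return {i for i in range(1, n) if row_of[i + 1] > row_of[i]}

# The four SYT of shape (2,1,1,1), listed row by row.
tableaux = [
    [[1, 2], [3], [4], [5]],
    [[1, 3], [2], [4], [5]],
    [[1, 4], [2], [3], [5]],
    [[1, 5], [2], [3], [4]],
]
weights = [sum(a[i] for i in descents(T)) for T in tableaux]
assert weights == [6, 6, 7, 8]   # giving 2q^6 + q^7 + q^8
```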
Acknowledgements {#sec:acknow .unnumbered}
================
The authors would like to thank Soojin Cho for her kind support and encouragement.
[99]{} P. Alexandersson and G. Panova, “[LLT]{} polynomials, chromatic quasisymmetric functions and graphs with cycles”, [*Discrete Math. *]{}**341** (2018), 3453–3482.
G. Benkart, F. Sottile and J. Stroomer, “Tableau switching: algorithms and applications", [*J. Combin. Theory Ser. A* ]{} **76** (1996), 11–43.
S. Cho and J. Huh, “On $e$-positivity and $e$-unimodality of chromatic quasisymmetric functions", [*Sém. Lothar. Combin. *]{}**80B** (2018), Art. 59, 12.
E. Carlsson and A. Mellit, “A proof of the shuffle conjecture", [*J. Amer. Math. Soc. *]{} **31** (2018), 661–697.
S. Dahlberg and S. van Willigenburg, “Lollipop and lariat symmetric functions", [*SIAM J. Discrete Math. *]{} **32** (2018), 1029–1039.
S. Dahlberg, “Triangular ladders ${P}_{d,2}$ are $e$-positive", preprint `arXiv:1811.04885v1`.
R. P. Stanley, [*Enumerative combinatorics. [V]{}ol. 2*]{}, Cambridge University Press, Cambridge, 1999.
E. Egge, N. A. Loehr and G. S. Warrington, “From quasisymmetric expansions to [S]{}chur expansions via a modified inverse [K]{}ostka matrix", [*European J. Combin. *]{}**31** (2010), 2014–2027.
A. M. Foley, C. T. Hoàng and O. D. Merkel, “Classes of graphs with $e$-positive chromatic symmetric function", preprint.
V. Gasharov, “Incomparability graphs of [$(3+1)$]{}-free posets are [$s$]{}-positive", [*Discrete Math. *]{} **157** (1996), 193–197.
I. Grojnowski and M. Haiman, “Affine [H]{}ecke algebras and positivity of [LLT]{} and [M]{}acdonald polynomials", preprint.
D. D. Gebhard and B. E. Sagan, “A chromatic symmetric function in noncommuting variables", [*J. Algebraic Combin. *]{} **13** (2001), 227–255.
M. Guay-Paquet, “A modular law for the chromatic symmetric functions of $(3+1)$-free posets", preprint `arXiv:1306.2400v1`.
M. Haiman, “Hecke algebra characters and immanant conjectures", [*J. Amer. Math. Soc. *]{} **6** (1993), 569–595.
M. Haiman, “Hilbert schemes, polygraphs and the [M]{}acdonald positivity conjecture", [*J. Amer. Math. Soc. *]{} **14** (2001), 941–1006.
J. Haglund, [*The [$q$]{},[$t$]{}-[C]{}atalan numbers and the space of diagonal harmonics*]{}, University Lecture Series, American Mathematical Society, 2008.
J. Haglund, M. Haiman and N. Loehr, “A combinatorial formula for [M]{}acdonald polynomials", [*J. Amer. Math. Soc. *]{} **18** (2005), 735–761.
J. Haglund, M. Haiman, N. Loehr, J. B. Remmel and A. Ulyanov, “A combinatorial formula for the character of the diagonal coinvariants", [*Duke Math. J. *]{} **126** (2005), 195–232.
M. Harada and M. Precup, “The cohomology of abelian [H]{}essenberg varieties and the [S]{}tanley-[S]{}tembridge conjecture", preprint `arXiv:1709.06736v1`.
S. Lee, “Linear relation on [LLT]{} polynomials and their $k$-[S]{}chur positivity for $k=2$", preprint `arXiv:1709.06736v1`.
A. Lascoux, B. Leclerc and J. Thibon, “Ribbon tableaux, [H]{}all-[L]{}ittlewood functions, quantum affine algebras, and unipotent varieties", [*J. Math. Phys. *]{} **38** (1997), 1041–1068.
R. Orellana and G. Scott, “Graphs with equal chromatic symmetric functions", [*Discrete Math. *]{} **320** (2014), 1–14.
J. Shareshian and M. Wachs, “Eulerian quasisymmetric functions", [*Adv. Math. *]{} **225** (2010), 2921–2966.
J. Shareshian and M. Wachs, “Chromatic quasisymmetric functions", [*Adv. Math. *]{} **295** (2016), 497–551.
R. P. Stanley, [*Log-concave and unimodal sequences in algebra, combinatorics, and geometry*]{}, Ann. New York Acad. Sci. **576** (1989), 500–535.
R. P. Stanley, “A symmetric function generalization of the chromatic polynomial of a graph", [*Adv. Math. *]{} **111** (1995), 166–194.
R. P. Stanley, [*Enumerative combinatorics. [V]{}olume 1*]{}, Cambridge Studies in Advanced Mathematics **49**, 2012.
R. P. Stanley and J. R. Stembridge, “On immanants of [J]{}acobi-[T]{}rudi matrices and permutations with restricted position", [*J. Combin. Theory Ser. A *]{} **62** (1993), 261–279.
[^1]: The first author was supported by NRF grant \#2015R1D1A1A01057476. The second author was supported by NRF grant \#2017R1D1A1B03030945. The third author was supported by NRF grants \#2016R1A5A1008055 and \#2017R1C1B2005653.
---
abstract: |
We present a distributed randomized algorithm finding *Minimum Spanning Tree* (MST) of a given graph in $O(1)$ rounds, with high probability, in the congested clique model.
The input graph in the congested clique model is a graph of $n$ nodes, where each node initially knows only its incident edges. The communication graph is a clique with limited edge bandwidth: each two nodes (not necessarily neighbours in the input graph) can exchange $O(\log n)$ bits.
As in previous works, the key part of the algorithm is an efficient *Connected Components* (CC) algorithm. However, unlike the former approaches, we do not aim at simulating the standard Borůvka algorithm, at least in the initial stages of the algorithm. Instead, we develop a new technique which combines connected components of sparse sample subgraphs of the input graph in order to accelerate the process of uncovering connected components of the original input graph. More specifically, we develop a sparsification technique which reduces an initial problem in $O(1)$ rounds to two restricted instances of the problem. The former instance has a graph with maximum degree $O(\log \log n)$ as the input – here our sample-combining technique helps. In the latter instance, a partition of the input graph into $O(n/\log\log n)$ connected components is known. This gives an opportunity to apply previous algorithms to determine connected components in $O(1)$ rounds.
[Our result addresses a problem proposed by Lotker et al. \[SPAA 2003; SICOMP 2005\] and]{} improves over the previous $O(\log^* n)$-round algorithm of Ghaffari et al. \[PODC 2016\] and the $O(\log \log \log n)$-round algorithm of Hegeman et al. \[PODC 2015\]. It also determines $\Theta(1)$ round complexity in the congested clique for connected components, as well as other graph problems, including bipartiteness, cut verification, s-t connectivity, and cycle containment.
author:
- 'Tomasz Jurdziński[^1]'
- 'Krzysztof Nowicki[^2]'
bibliography:
- 'references.bib'
title: 'MST in $O(1)$ Rounds of Congested Clique '
---
*Keywords:* [congested clique, connected components, minimum spanning tree, randomized algorithms, broadcast, unicast]{}
Introduction
============
[\[s:intro\]]{} The congested clique model of distributed computing has attracted much attention in the algorithmic community in recent years. Initially, each node knows its incident edges in the input graph $G(V,E)$. Unlike the classical CONGEST model [@peleg-book], the communication graph connects each pair of nodes, even if they are not neighbours in the input graph; i.e., in each round, each pair of nodes can exchange a message of $O(\log n)$ bits.
The main algorithm-theoretic motivation of the model is to understand the role of congestion in distributed computing. The well-known LOCAL model of distributed computing ignores congestion by allowing unlimited size of transmitted messages [@peleg-book; @Linial92] and focuses on *locality*. The CONGEST model, on the other hand, takes congestion into account by limiting the size of transmitted messages. At the same time, locality plays an important role in the CONGEST model as well, since direct communication is possible only between neighbors in the input graph. The congested clique is considered a complementary model, which focuses solely on congestion.
Some variants of, and complexity measures for, the congested clique have applications to the efficiency of algorithms in other models adjusted to current computing challenges, such as the $k$-machine big data model [@KlauckNP015] or MapReduce [@HegemanP15; @KarloffSV10].
Related work
------------
The general congested clique model, as well as its limited variant called the broadcast congested clique, has been studied in several papers, e.g. [@Lotker:2003:MCO:777412.777428; @HegemanPPSS15; @GhaffariParter2016; @DruckerKO13; @BeckerMRT14; @Lenzen:2013:ODR:2484239.2501983; @MT2016]. The recent constant-time routing and sorting algorithm of Lenzen [@Lenzen:2013:ODR:2484239.2501983] in the unicast congested clique exhibited the strength of this model and triggered a new wave of research.
Besides general interest in the congested clique, specific attention has been given to MST and connectivity. Lotker et al. [@Lotker:2003:MCO:777412.777428] proposed an $O(\log \log n)$ round deterministic algorithm for MST in the unicast model. An alternative solution of the same complexity has been presented recently [@Korhonen16]. The best known randomized solution for MST works in $O(\log^*n)$ rounds [@GhaffariParter2016], improving the recent $O(\log\log\log n)$ bound [@HegemanPPSS15]. The result from [@HegemanPPSS15] uses the concept of linear graph sketches [@AhnGM12], while the authors of [@GhaffariParter2016] introduce new sketches, which are sensitive to the degrees of nodes and adjusted to the congested clique model. Reducing the number of messages transmitted by the algorithms was studied in [@DBLP:conf/fsttcs/PS16]. In contrast to the general (unicast) congested clique, no sub-logarithmic round algorithm for MST is known in the broadcast congested clique, and the first sub-logarithmic solution for the problem in that model has been obtained only recently [@JurdzinskiN17]. An extreme scenario, in which the algorithm consists of a single round and each node can send only one message, has also been considered. As shown in [@AhnGM12], connectivity can be solved with public random bits in this model, provided nodes can send messages of size $\Theta(\log^3 n)$.
In [@DruckerKO13], a simulation of powerful classes of bounded-depth circuits in the congested clique is presented, which points to the power of the congested clique and explains the difficulty of obtaining lower bounds.
Our result
----------
The main result of the paper determines the $O(1)$ round complexity of the MST problem in the congested clique.
[\[t:MST\]]{} There is a randomized algorithm in the congested clique model that computes a minimum spanning tree in $O(1)$ rounds, with high probability.
Using standard reductions of some graph problems to the connectivity problem, we establish $O(1)$ round complexity of several graph problems in the congested clique model.
[\[c:reductions\]]{} There are randomized distributed algorithms that solve the following verification problems in the congested clique model in $O(1)$ rounds with high probability: bipartiteness verification, cut verification, s-t connectivity, and cycle containment.
Preliminaries
-------------
[\[s:prel\]]{} In this section we provide some terminology and tools for design of distributed algorithms in the congested clique.
Given a natural number $p$, $[p]$ denotes the set $\{1,2,\ldots,p\}$.
We use the following *Lenzen’s routing* result in our algorithms.
[@Lenzen:2013:ODR:2484239.2501983] [\[l:lenzen\]]{} Assume that each node of the congested clique is given a set of $O(n)$ messages with fixed destination nodes. Moreover, each node is the destination of $O(n)$ messages from other nodes. Then, it is possible to deliver all messages in $O(1)$ rounds of the congested clique.
Efficient congested clique algorithms often make use of auxiliary nodes. Below, we formalize this opportunity to facilitate the design of algorithms.
[\[l:auxiliary\]]{} Let $A$ be a congested clique algorithm which, in addition to the nodes $u_1,\ldots,u_n$ corresponding to the input graph, uses $O(n)$ auxiliary nodes $v_1,v_2,\ldots$ such that the auxiliary nodes initially have no knowledge of the input graph on the nodes $u_1,\ldots,u_n$. Then, each round of $A$ can be simulated in $O(1)$ rounds in the standard congested clique model, without auxiliary nodes.
Assume that there are at most $cn$ auxiliary nodes, for a constant $c\in{{\mathbb N}}$. We can assign the set of $c$ auxiliary nodes $V_j=\{v_{(j-1)c+1},\ldots,v_{jc}\}$ to $u_j$ for each $j\in[n]$ and assign internal IDs in the range $[c]$ to the elements of $V_j$. Then, a round of the original algorithm with auxiliary nodes is simulated in $c^2$ actual rounds indexed by pairs $(a,b)\in [c]^2$: in the round indexed $(a,b)$, the $a$th auxiliary nodes assigned to $u_1,\ldots,u_n$ transmit the messages addressed to the $b$th auxiliary nodes.
We consider randomized algorithms in which a computational unit in each node of the input network can use private random bits in its computation. We say that some event holds with high probability (whp) for an algorithm $A$ running on an input of size $n$ if this event holds with probability $1-1/n^c$ for a given constant $c$. We require here that the constant $c$ can be chosen arbitrarily large, without changing *asymptotic* complexity of the considered algorithm.
### Graph terminology
If not stated otherwise, we consider undirected graphs. Thus, in particular, an edge $(u,v)$ appears in the graph iff $(v,u)$ appears in that graph as well. For a node $v\in V$ of a graph $G(V,E)$, $N(v)$ denotes the set of neighbours of $v$ in $G$ and $d(v)$ denotes the degree of $v$, i.e., $d(v)=|N(v)|$. We say that a graph $G(V,E)$ has degree $\Delta$ if the degree of each node of $G$ is at most $\Delta$.
A *component* of a graph $G(V,E)$ is a connected subgraph of $G$. That is, $C\subseteq V$ corresponds to a *component* of $G$ iff the graph $G(C, E\cap (C\times C))$ is connected. A component $C$ of a graph $G(V,E)$ is *growable* if there are edges connecting $C$ with $V\setminus C$, i.e., the set $E\cap (C\times(V\setminus C))$ is non-empty. Otherwise, the component $C$ is *ungrowable*.
For a graph $G(V,E)$, sets $C_1, C_2, ..., C_k \subset V$ form a partition of $G$ into *components* if $C_i$s are pairwise disjoint, $\bigcup_{i\in[k]} C_i = V$, and $C_i$ is a component of $G(V,E)$ for each $i\in[k]$. A partition $C_1,\ldots,C_k$ of a graph $G(V,E)$ into components is the *complete partition* if $C_i$ is ungrowable for each $i\in[k]$. For a fixed $E'\subseteq E$, the *complete partition* of $G(V,E')$ will be also called the *complete partition with respect to* $E'$.
Given a partition $\mathbb{C}$ of a graph $G(V,E)$ into components and $v\in V$, $C(v)$ denotes the component of $\mathbb{C}$ containing $v$.
An edge $(u,v)$ is *incident* to a component $C$ with respect to a partition $\mathbb{C}$ of a graph if it connects $C$ with another component of $\mathbb{C}$, i.e., $C(u)\neq C(v)=C$ or $C(v)\neq C(u)=C$.
### Graph problems in the congested clique model
Graph problems in the congested clique model are considered in the following framework. The joint input to the $n$ nodes of the network is an undirected $n$-node weighted graph $G(V,E,c)$, where each node in $V=\{u_1,\ldots,u_n\}$ corresponds to a node of the communication network and weights $c(e)$ of edges are integers of polynomial size (i.e., each weight is a bit sequence of length $O(\log n)$). Each node $u_i$ initially knows the network size $n$, its unique ID $i\in[n]$, the list of IDs of its neighbors in the input graph and the weights of its incident edges. Specifically, $\text{ID}(v)=i$ for $v=u_i$. All graph problems are considered in this paper in accordance with this definition.
### Connected Components and Minimum Spanning Tree
In the paper, we consider the connected components problem (CC) and the minimum spanning tree problem (MST). A solution for the CC problem consists of the complete partition of the input graph $G(V,E)$ into connected components $C_1\cup\cdots\cup C_k=V$, accompanied by spanning trees of all components. Our goal is to compute CC or MST of the input graph, i.e., each node should know the set of edges inducing the CC/MST solution at the end of an execution of an algorithm.
For the purpose of fast simultaneous executions of many instances of the algorithms, we also consider the definition of the problem in which the spanning trees of all components are known only to a fixed node of the network. The presented solutions usually correspond to this weaker definition. However, for a single instance of the CC/MST problem, a spanning forest can be made known to all nodes in two rounds, provided it is known to a fixed node $v$. Namely, it is sufficient that $v$ fixes the roots of the spanning trees of all components. Then, in the first round, $v$ sends to each $u$ the ID of the parent of $u$ in the appropriate tree. In the second round, each $u$ sends the ID of its parent to all nodes of the network.
We also consider the situation in which some “initial” partition $\mathbb{C}$ of the input graph into connected components is known to a fixed node at the beginning of an execution of an algorithm.
We say that a component $C$ of a partition $\mathbb{C}$ which is known to all/some nodes of the network is an *active* component if $C$ is a growable component of the original input graph $G(V,E)$. Otherwise, if $C$ is not a growable component of $G(V,E)$, $C$ is an *inactive* component.
Below, we make a simple observation which we use in our algorithms.
[\[f:graph:union\]]{} Assume that solutions of the CC problem for the graphs $G(V,E_1)$ and $G(V,E_2)$ are available. Then, one can determine a solution of the CC problem for $G(V,E_1\cup E_2)$.
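A sequential sketch of this observation (names are illustrative): given spanning forests of $G(V,E_1)$ and $G(V,E_2)$, running union–find over the union of the two forests yields a spanning forest, and hence the complete partition, of $G(V,E_1\cup E_2)$.

```python
def components_union(n, forest1, forest2):
    """Combine CC solutions (spanning forests) of G(V,E1) and G(V,E2)
    into a spanning forest of G(V, E1 u E2) via union-find."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    forest = []
    for u, v in list(forest1) + list(forest2):
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge joins two components
            parent[ru] = rv
            forest.append((u, v))
    return forest, {find(v) for v in range(n)}

# E1 connects {0,1,2}; E2 connects {2,3} and {4,5}; the union thus has
# components {0,1,2,3} and {4,5}.
forest, roots = components_union(6, [(0, 1), (1, 2)], [(2, 3), (4, 5)])
assert len(roots) == 2
assert len(forest) == 4    # 6 nodes, 2 components -> 4 forest edges
```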
High-level description of our solution
======================================
In this section we describe our algorithm at the top level. The main technical result is an $O(1)$ round connectivity algorithm. The extension to MST (described in Section \[s:MST\]) is based on a known technique which requires $n^{1/2}$ simultaneous executions of the connectivity algorithm (on partially related instances). Thus, the key issue in the design of the ${\text{MST}}{}$ algorithm is to guarantee that such $n^{1/2}$ simultaneous executions of the connected components algorithm can be performed in $O(1)$ rounds.
The algorithm for connected components works in two phases: Sparsification Phase and Size-reduction Phase. In Sparsification Phase, we reduce the original connectivity problem to two specific instances of the CC problem (Lemma \[l:degreereduction\]). (The idea of this reduction can be implemented in the much weaker broadcast congested clique model and allows one to obtain new round-efficient algorithms in that model [@JurdzinskiN17].)
In the former instance, a partition of the input graph into $O(n/\log\log n)$ active and some inactive components is known. The connected components can be determined for such an instance in $O(1)$ rounds by the algorithm from [@GhaffariParter2016]. The latter instance is a graph with degree $O(\log\log n)$. Therefore, Size-reduction Phase gets a graph with degree $O(\log\log n)$ as the input. In this phase, the CC problem for such a sparse input graph is reduced to an instance of the CC problem in which, in addition to the input graph $G$, a partition of $G$ into $O(n/\log\log n)$ active components is known. Therefore, as before, the connected components can be determined for this final instance in $O(1)$ rounds by the algorithm from [@GhaffariParter2016].
#### Connected Components: Sparsification Phase.
Sparsification Phase is based on a simple *deterministic* procedure (see Alg. \[a:degreereduction\]):
- Firstly, for each node $v$, an edge $(u,v)$ connecting $v$ with its highest degree neighbour is determined and delivered to a fixed node called the *coordinator*.
- Then, the complete partition $\mathbb{C}$ with respect to the set of edges delivered to the coordinator is computed. The *degree* of each component of $\mathbb{C}$ is defined as the largest degree of its elements.
- Next, for each node $v$, an edge $(u,v)$ connecting $v$ with the highest-degree component $C\neq C(v)$ of $\mathbb{C}$ is determined and sent to the coordinator.
- The coordinator computes the complete partition $\mathbb{C}'$ with respect to the set of edges announced in all steps of this procedure, assigns IDs to the components of this partition. Then, the coordinator sends to each node the ID and the degree of its component. Finally, the nodes pass information obtained from the coordinator to their neighbours.
Let $C$ be a component of $\mathbb{C}'$. We say that $C$ is *awake* if the degree of $C$ (i.e., the largest degree of its elements) is at least $s$, for some fixed $s\in{{\mathbb N}}$. Otherwise, $C$ is *asleep*. A node $u$ is awake (asleep, respectively) iff $u$ belongs to an awake (asleep, respectively) component. [As we will show, each awake component of $\mathbb{C}'$ has size $\ge s$ and therefore the partition $\mathbb{C}'$ contains $O(n/s)$ awake components.]{} Moreover, the degree of the graph induced by the edges incident to nodes from asleep components is smaller than $s$. (This fact does not follow directly from the definitions, since that graph also contains neighbors of asleep nodes located in awake components.)
Using the above properties for $s=\log\log n$ we can split the input graph $G(V,E)$ into a subgraph $G_A$ containing $O(n/\log\log n)$ growable components and a subgraph $G_B$ with degree $O(\log\log n)$. For the former subgraph, we determine connected components in $O(1)$ rounds using the algorithm from [@GhaffariParter2016], based on graph sketches. The connected components of the latter subgraph are determined in Size-reduction Phase.
#### Connected Components: Size-reduction Phase.
The key technical novelty in our solution is an algorithm which reduces the CC problem for a graph of degree $O(\log\log n)$ to the CC problem for a graph with $O(n/\log\log n)$ components. This algorithm is the main ingredient of Size-reduction Phase. A pseudocode of the algorithm is presented in Alg. \[a:componentreduction\]. We find connected components of the graph $G_B$ described above by applying this reduction (Alg. \[a:componentreduction\]) and the algorithm from [@GhaffariParter2016] (which finds connected components of graphs with a known partition into $O(n/\log\log n)$ components in $O(1)$ rounds).
In order to describe the above reduction, assume that the (upper bound on) degree of an input graph $G(V,E)$ is $\Delta=O(\log\log n)$. The idea of the reduction is to calculate simultaneously $m=n^{1/2}$ spanning forests for randomly chosen sparse subgraphs $G_i=G(V,E_i)$ of the input graph (called *samples*), and use the results to build a partition of $G$ into $O(n / \log \log n)$ growable components and some non-growable components. Below, we describe the reduction in more detail.
First, we build random subgraphs $G_i=G(V,E_i)$ of $G$ for $i\leq m=\sqrt{n}$ such that, for each $i\in[m]$, the set of edges $E_i$ can be collected at a single node. As a node can receive only $O(n)$ messages in a round, the size of $E_i$ should be $O(n)$ as well. To ensure this property, each edge belongs to $E_i$, for each $i\in[m]$, with probability $1/ \log \log n$, and the random choices for each $i\in[m]$ and each edge are independent. For each $i\in[m]$, all edges from $E_i$ are sent to a fixed node $b_i$ (the $i$th boss) and $b_i$ computes connected components of $G_i$ locally. The difficulty is that only one message per round can be transmitted on each edge of the clique, while a node may choose a non-constant number of its incident edges as elements of the $i$th sample $E_i$ for some $i\in[m]$ (and all those edges should be delivered to $b_i$). This problem is solved by the fact that, whp, the random choices defining the graphs $G_i$ require each node to send $O(n)$ messages and receive $O(n)$ messages. (Note that the expected number of edges in $E_i$ is $O(n)$.) If this is the case, all messages can be delivered with the help of Lenzen’s routing algorithm [@Lenzen:2013:ODR:2484239.2501983] (Lemma \[l:lenzen\]).
Another challenge is how to combine the connected components of the graphs $G_i$ so that the number of growable components is reduced to $O(n/\log\log n)$. To this aim, each node is chosen to be a leader, independently of other nodes, with probability $1/\log\log n$. Thus, the number of leaders is $\Theta(n/ \log \log n)$ whp. Then, for each node $v$, if $v$ is in a connected component containing a leader in some graph $G_i$, information about its connection with some leader is delivered to the coordinator. Thus, if each node is connected to some leader in the partition determined by the coordinator, then we have $O(n/ \log \log n)$ connected components and we are done. Certainly, we cannot get such a guarantee. However, as we show in Section \[ss:leaderless\], the number of nodes from growable components which are not connected to any leader is $O(n/\log \log n)$, whp. The main part of the proof of this fact is to determine a sequence of *independent* random variables whose sum gives an upper bound on the number of nodes in growable components which are not connected to a leader in the final partition.
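A sequential toy model of this sample-combining step (all names are ours; the real algorithm delivers each sample to a separate boss node in parallel): each of $m$ random edge-samples is solved locally, nodes sharing a sample component with a leader are attached to that leader, and the coordinator merges all attachments.

```python
import random

def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def combine_samples(n, edges, p, m, seed=1):
    """Sequential sketch: combine components of m random edge-samples
    by attaching nodes to leaders (each node a leader w.p. p)."""
    rng = random.Random(seed)
    leaders = [v for v in range(n) if rng.random() < p]
    attach = []                              # recorded (node, leader) pairs
    for _ in range(m):
        parent = list(range(n))              # local DSU: one sample's CC
        for u, v in edges:
            if rng.random() < p:             # edge joins this sample
                parent[find(parent, u)] = find(parent, v)
        root_leader = {}
        for l in leaders:
            root_leader.setdefault(find(parent, l), l)
        for v in range(n):                   # v shares a component w/ leader
            r = find(parent, v)
            if r in root_leader:
                attach.append((v, root_leader[r]))
    parent = list(range(n))                  # the coordinator's partition
    for u, v in attach:
        parent[find(parent, u)] = find(parent, v)
    groups = {}
    for v in range(n):
        groups.setdefault(find(parent, v), set()).add(v)
    return list(groups.values())
```

Since every attachment stays inside a true component of the input graph, the resulting partition always refines the real connected components, regardless of the random choices.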
#### Minimum Spanning Tree.
As shown in [@Karger:1995:RLA:201019.201022; @HegemanPPSS15], it is possible to reduce the MST problem for an input graph to two instances of MST on graphs with $O(n^{3/2})$ edges. Then, MST for a graph with $O(n^{3/2})$ edges is reduced to $O(\sqrt{n})$ instances of the CC problem, where the set of edges in the $i$th instance is included in the set of edges of the $(i+1)$st instance. In Section \[s:MST\], we show that our algorithm can be executed in parallel on these specific $\sqrt{n}$ instances of the CC problem. The main challenge here is that a “naive” implementation of these parallel executions would require some nodes to send a superlinear number of messages. However, using Lenzen’s routing [@Lenzen:2013:ODR:2484239.2501983] (see Lemma \[l:lenzen\]) and the fact that the set of edges of the $(i+1)$st instance includes the set of edges of the $i$th instance for each $i\in[m]$, we show that connected components of all those instances can be computed in parallel.
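Sequentially, the nested-instance idea behaves like Kruskal's algorithm processed in weight-sorted batches: an edge of batch $i+1$ is useful only if its endpoints lie in different components of the union of the earlier batches (the $i$th CC instance). A toy sketch (function names are ours, not from the paper):

```python
def mst_by_nested_prefixes(n, weighted_edges, groups=3):
    """Sequential sketch: edges sorted by weight are processed in batches;
    batch i+1 only needs the components of the union of batches 1..i."""
    edges = sorted(weighted_edges, key=lambda e: e[2])
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    batch = max(1, len(edges) // groups)
    for start in range(0, len(edges), batch):
        for u, v, w in edges[start:start + batch]:
            ru, rv = find(u), find(v)
            if ru != rv:         # endpoints in different components so far
                parent[ru] = rv
                mst.append((u, v, w))
    return mst

edges = [(0, 1, 4), (1, 2, 1), (0, 2, 3), (2, 3, 2), (1, 3, 5)]
mst = mst_by_nested_prefixes(4, edges)
assert len(mst) == 3
assert sum(w for _, _, w in mst) == 6   # the edges of weight 1, 2, 3
```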
Connectivity in $O(1)$ rounds
=============================
In this section we describe our algorithm which leads to the following theorem.
[\[t:connectedcomponents\]]{} There is a randomized algorithm in the congested clique model that computes connected components in $O(1)$ rounds, with high probability.
The algorithm consists of two phases: Sparsification Phase and Size-reduction Phase.
In Sparsification Phase, we reduce the original problem to two specific instances of the CC problem. In the former instance, the problem has to be solved for a graph with degree $O(\log\log n)$. The latter instance is equipped with additional information about a partition of the considered graph into $O(n/\log\log n)$ components. The pseudocode of the appropriate algorithm is presented as Alg. \[a:degreereduction\]. The following lemma describes the reduction more precisely.
[\[l:degreereduction\]]{} There is a deterministic algorithm in the congested clique that reduces in $O(1)$ rounds the CC problem for an arbitrary graph $G(V,E)$ to instances of the CC problem for graphs $G_A(V,E_A)$ and $G_B(V,E_B)$ such that $E_A\cup E_B=E$ and
- a partition $\mathbb{C}_A$ of $G_A$ into $O(n/\log\log n)$ active components is known to a fixed node;
- each node knows which of its incident edges belong to $E_A$ and which of them belong to $E_B$;
- the degree of $G_B$ is $O(\log \log n)$.
An important building block of our solution comes from [@GhaffariParter2016], where the properties of graph sketches play the key role. It is an algorithm which determines connected components of the input graph in $O(1)$ rounds, provided that an initial partition of the input graph into $O(n/\log\log n)$ growable and some ungrowable components is known at the beginning. For further applications in a solution of the MST problem, we state a stronger result regarding several simultaneous executions of the algorithm.
[\[l:GP2016\]]{}[@GhaffariParter2016] There is a randomized algorithm in the congested clique model that computes connected components of a graph in $O(1)$ rounds with high probability, provided that a partition of the input graph into $O(n / \log \log n)$ growable (and some ungrowable) components is known to a fixed node at the beginning of an execution of the algorithm. Moreover, it is possible to execute $m=\sqrt{n}$ instances of the CC problem simultaneously in $O(1)$ rounds.
In [@GhaffariParter2016], the authors gave an $O(1)$ round procedure $\mathit{ReduceCC}(x)$, reducing the number of active components from $O(n / \log^2 x)$ to $n/x$, with high probability. Thus, by using $\mathit{ReduceCC}$ a constant number of times, it is possible to reduce the number of active components from $n / \log \log n$ to $0$ in $O(1)$ rounds. And if there are no active (i.e., growable) components in a partition, then that partition describes the connected components of the input graph (i.e., it is the complete partition of the input graph).
Let GPReduction denote the algorithm satisfying the properties from Lemma \[l:GP2016\]. Using GPReduction, we can determine connected components of the graph $G_A$ (described in Lemma \[l:degreereduction\]) in $O(1)$ rounds. Thus, in order to obtain an $O(1)$-round algorithm, it is sufficient to solve the CC problem for graphs with degree $O(\log\log n)$. This problem is addressed in Size-reduction Phase. In the following lemma, we show that the CC problem for a graph of degree $O(\log\log n)$ can be reduced in $O(1)$ rounds to the CC problem for a graph with $O(n/\log\log n)$ growable components. The pseudocode of the algorithm performing this reduction is given in Alg. \[a:componentreduction\].
[\[l:componentreduction\]]{} There is a randomized algorithm in the congested clique model that reduces in $O(1)$ rounds the CC problem for a graph with degree bounded by $\log \log n$ to an instance of the CC problem for which a partition with $O(n/\log \log n)$ active connected components is known to a fixed node, with high probability.
Then, the next application of the algorithm from [@GhaffariParter2016] (Lemma \[l:GP2016\]) gives the final partition of the input graph into connected components, as summarized in Alg. \[a:cc\].
The proofs of Lemma \[l:degreereduction\] and Lemma \[l:componentreduction\] are presented in Section \[s:degreereduction\] and Section \[s:componentreduction\], respectively. Using Lemmas \[l:GP2016\], \[l:degreereduction\], and \[l:componentreduction\], one can show that Alg. \[a:cc\] determines connected components of an input graph in $O(1)$ rounds, with high probability. This in turn gives the proof of Theorem \[t:connectedcomponents\].
**Sparsification Phase**

1.  Execute Alg. \[a:degreereduction\] on the input graph for $s=\log\log n$ \[s:cc1\]

2.  Let $G_A$ and $\mathbb{C}_A$ be the graph and its $O(n/\log\log n)$-size partition determined in Alg. \[a:degreereduction\]

3.  $G_B\gets $ the graph of degree $O(\log\log n)$ determined in Alg. \[a:degreereduction\]

4.  Execute Alg. GPReduction on $G_A$, using the partition $\mathbb{C}_A$ \[s:GP2016-1\]

**Size-reduction Phase**

5.  Execute Alg. \[a:componentreduction\] on $G_B$

6.  $G'\gets$ the graph obtained in Alg. \[a:componentreduction\], with its partition into $O(n/\log\log n)$ active components \[s:cc7\]

7.  Execute Alg. GPReduction on $G'$, using its partition determined by Alg. \[a:componentreduction\] \[s:GP2016-2\]

8.  Combine the connected components computed in steps \[s:GP2016-1\] and \[s:GP2016-2\]
Graph sparsification
--------------------
[\[s:degreereduction\]]{} In this section we describe Sparsification Phase and prove Lemma \[l:degreereduction\]. Let the *coordinator* be a fixed node of the input network. For a partition $\mathbb{C}$ of a graph $G(V,E)$ into components, we use the following notations:
- $d(C) = \max\limits_{v \in C} d(v)$ is the *degree* of the component $C$ of $\mathbb{C}$,
- $I(v)$ is the ID of the component $C(v)$, according to a fixed labeling of components of $\mathbb{C}$.
The general idea of the reduction is to build components from the edges determined in the following two stages:
- **Stage 1.** For each node $v$, choose an edge connecting $v$ to its neighbour with the largest degree. Then, determine the complete partition with respect to the set of chosen edges. Moreover, set the *degree* of each component of the obtained partition to the maximum of the degrees of its elements.
- **Stage 2.** For each node $v$, choose an edge connecting $v$ to a component $C\neq C(v)$ with the largest degree. Determine the complete partition with respect to the set of edges chosen in both stages.
For $s\le n$, we say that components with degree at least $s$ are *awake* components, while components with degree smaller than $s$ are *asleep* components. Similarly, all nodes from awake components are called *awake nodes* and nodes from asleep components are called *asleep nodes*. As we show below, for each $s\le n$, the complete partition $\mathbb{C}$ determined by the edges chosen in Stages 1 and 2 satisfies the following conditions:

1. the size of each awake component is larger than $s$,

2. no asleep node of $\mathbb{C}$ has a neighbor of degree at least $s$.

Algorithm \[a:degreereduction\] contains a pseudo-code of an implementation of the above described idea in the congested clique model in $O(1)$ rounds, in accordance with the requirements of Lemma \[l:degreereduction\]. The above properties 1–2 combined with Alg. \[a:degreereduction\] imply Lemma \[l:degreereduction\]. Below, we provide a formal proof of Lemma \[l:degreereduction\] based on these ideas.
1. coordinator $\gets u_1$ \[ag:s1\]
2. each node $v$ announces $d(v)$ to all nodes in $N(v)$ \[ag:s2\]

**Stage 1**

3. each node $v$ sends the edge $(u,v)$ to the coordinator, where $u$ is the node with the largest ID from the set of neighbors of $v$ with the highest degree, i.e., $\{w\in N(v)\, |\, d(w) = \max\limits_{t\in N(v)} d(t)\}$ \[ag:s3\]
4. the coordinator calculates the complete partition determined by the received edges and sends the message $(I(v), d(C(v)))$ to each $v$ \[ag:s4\]
5. each node $v$ announces the received message $(I(v), d(C(v)))$ to all nodes in $N(v)$ \[ag:s5\]

**Stage 2**

6. each node $v$ sends the edge $(u,v)$ to the coordinator, where $C(u)$ is the highest-degree component incident to $v$, i.e., $C(v) \neq C(u)$ and $d(C(u))$ is maximal among components incident to $v$ \[ag:s6\]
7. the coordinator calculates the complete partition determined by all received edges (i.e., from both stages) and sends the message $(I(u), d(C(u)))$ to each $u$ \[ag:s7\]
8. each node $v$ announces the received message $(I(v), d(C(v)))$ to the nodes in $N(v)$ \[ag:s8\]
9. **if** $d(C(v))\ge s$ **then** $v$ is awake **else** $v$ is asleep
10. $G_A\gets (V, E_A)$, where $E_A=\{ (u,v)\in E\,|\, u,v \text{ are awake}\}$
11. $\mathbb{C}_A\gets$ the partition consisting of the awake components \[ag:s11\]
12. $G_B\gets (V,E_B)$, where $E_B=\{ (u,v)\in E\,|\, u\text{ or }v \text{ is asleep}\}$
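Outside the congested clique model, the two stages can be simulated sequentially. The following Python sketch (our illustration; the name `two_stage_partition` and the `DSU` helper are ours) mirrors the edge choices of Stages 1 and 2 and the awake/asleep classification, replacing the coordinator by a union-find structure:

```python
from collections import defaultdict

class DSU:
    """Minimal union-find standing in for the coordinator's bookkeeping."""
    def __init__(self, nodes):
        self.p = {v: v for v in nodes}
    def find(self, v):
        while self.p[v] != v:
            self.p[v] = self.p[self.p[v]]    # path halving
            v = self.p[v]
        return v
    def union(self, u, v):
        self.p[self.find(u)] = self.find(v)

def two_stage_partition(adj, s):
    """Sequential simulation of Stages 1-2.  adj maps a node to its set
    of neighbours; returns (components keyed by root, awake roots)."""
    deg = {v: len(adj[v]) for v in adj}
    key = lambda v: (deg[v], v)              # lexicographic (degree, ID) order
    dsu = DSU(adj)
    # Stage 1: every node joins its (degree, ID)-maximal neighbour.
    for v in adj:
        if adj[v]:
            dsu.union(v, max(adj[v], key=key))
    cdeg = defaultdict(int)                  # component degree = max member degree
    for v in adj:
        cdeg[dsu.find(v)] = max(cdeg[dsu.find(v)], deg[v])
    # Stage 2: all nodes choose simultaneously, so collect choices first.
    joins = []
    for v in adj:
        cands = [u for u in adj[v] if dsu.find(u) != dsu.find(v)]
        if cands:
            joins.append((v, max(cands, key=lambda u: cdeg[dsu.find(u)])))
    for v, u in joins:
        dsu.union(v, u)
    cdeg = defaultdict(int)                  # recompute degrees after merging
    for v in adj:
        cdeg[dsu.find(v)] = max(cdeg[dsu.find(v)], deg[v])
    comp = defaultdict(set)
    for v in adj:
        comp[dsu.find(v)].add(v)
    awake = {r for r in comp if cdeg[r] >= s}
    return comp, awake
```

On a star with a center of degree $5$ plus an isolated edge, with $s=3$, the star becomes a single awake component of size $6 > s$, while the isolated edge stays asleep, matching properties (i)–(ii) above.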
[\[f:reductiontosparse\]]{} The following conditions are satisfied at the end of an execution of Alg. \[a:degreereduction\]: (i) there are at most $n/s$ awake components; (ii) the degree of the graph $G_B$ induced by edges incident to the asleep nodes is smaller than $s$.
Let $\mathbb{C}$ be a partition of the input graph obtained from edges announced in Stages 1 and 2. Let $\prec$ be the lexicographic ordering of the pairs $(d(v), \text{ID}(v))$ for $v\in V$.
Firstly, we show that each awake component $C$ of $\mathbb{C}$ has at least $s+1$ nodes, which implies (i). For an awake component $C$, let $v_{\text{max}}\in C$ be the element of $C$ corresponding to the largest tuple in the set $\{(d(v),\text{ID}(v))\,|\, v\in C\}$. We claim that $d(v_{\text{max}})\geq s$ and that each $u\in N(v_{\text{max}})$ belongs to $C(v_{\max})$ after Stage 1 of the algorithm. Indeed, assume to the contrary that one of the following holds:
- $d(v_{\text{max}})<s$:
Then $d(v)<s$ for each $v\in C$ and therefore $d(C)<s$ and $C$ is asleep. This contradicts the assumption that $C$ is awake.
- some neighbour $u$ of $v_{\max}$ does not belong to $C(v_{\max})$ after Stage 1:
Then, let $U$ be the set of neighbours of $v_{\max}$ which are not in $C(v_{\max})$ after Stage 1. In Stage 2, $v_{\max}$ sends an edge connecting it with some $u\in U$. According to the algorithm, each $u\in U$ sends an edge $(u,w)$ in Stage 1 such that $(d(v_{\text{max}}),\text{ID}(v_{\text{max}}))\prec(d(w),\text{ID}(w))$. Thus, $v_{\max}$ and such a $w$ are in the same component of the partition obtained after Stage 2, which contradicts the choice of $v_{\max}$ as the maximal element of its component.
Given that $d(v_{\text{max}})\geq s$, $v_{\text{max}}\in C$ and $N(v_{\text{max}})\subseteq C$, we see that the size of $C$ is larger than $s$.
Now, we prove property (ii). As the degrees of all asleep nodes are smaller than $s$, it is sufficient to show that the degrees of all neighbors of asleep nodes are smaller than $s$ as well. Assume to the contrary that a node $u$ is asleep, $v\in N(u)$, and $d(v)\ge s$. Then, $u$ reports a node $w$ in Stage 1 such that $(d(v),\text{ID}(v))\preceq (d(w),\text{ID}(w))$, which implies that $s\le d(v)\le d(w)$. This in turn implies that $u$ and $w$ are eventually in the same component and, by the fact that $d(w)\ge s$, they are both awake. This however contradicts the assumption that $u$ is asleep.
Now, we apply Fact \[f:reductiontosparse\] for $s=\log\log n$ to prove Lemma \[l:degreereduction\]. Let $G_A$ be the subgraph of $G$ containing edges whose both ends are awake. By Fact \[f:reductiontosparse\](i), the partition determined in the algorithm contains at most $n/s=O(n/\log\log n)$ awake components. That is, the partition $\mathbb{C}_A$ of $G_A$ has $O(n/\log\log n)$ active components (see step \[ag:s11\] of Alg. \[a:degreereduction\]). And, the partition $\mathbb{C}_A$ is known to the coordinator. As the nodes learn components’ IDs and degrees of their neighbours in step \[ag:s8\] of Alg. \[a:degreereduction\], they know which edges incident to them belong to $G_A$ and which to $G_B$. The graph $G_B$ contains the edges incident to asleep nodes. By Fact \[f:reductiontosparse\](ii), the degree of $G_B$ is smaller than $s=\log\log n$.
Size-reduction Phase
--------------------
[\[s:componentreduction\]]{}
In this section, we provide an $O(1)$-round algorithm reducing the number of active components for sparse graphs (Alg. \[a:componentreduction\]). Assuming that the degree of the input graph $G(V,E)$ is at most $\Delta \in O(\log \log n)$, our algorithm returns a partition of the input graph into $O\left(n / \log \log n \right)$ active components. Additionally, a fixed node (the coordinator) knows a spanning tree of each component in the final partition. Thus, Lemma \[l:componentreduction\] follows from the properties of the presented algorithm.
In Algorithm \[a:componentreduction\], $C_i(u_j)$ denotes the component of the node $u_j$ in the $i$th sample graph $G_i$. Moreover, for a node $u_i\in V$, let $e_{(i,1)},\ldots,e_{(i,r)}$ for $r\le |N(u_i)|$ denote all edges $(u_i,u_j)$ such that $j<i$.
The algorithm randomly selects $m=\sqrt{n}$ subgraphs $G_1,\ldots,G_m$ of the input graph $G$, called *samples*. Each sample will consist of $O(n)$ edges, with high probability. We will ensure that all edges of the sample $G_i$ are known to a fixed node called the *boss* $b_i$ (lines \[ss:send:edge:b\]–\[ss:send:edge\]). Therefore, for each sample, we can locally determine its connected components and its spanning forest. Finally, the results from samples are combined in order to obtain a partition of the input graph which consists of $O(n/\log \log n)$ active components. The key challenge here is how to combine knowledge about locally available components of sample graphs such that significant progress towards establishing components of the original input graph is achieved. To this aim, we select randomly $\Theta(n/\log \log n)$ leaders among nodes of the input network. More precisely, each node of the network assigns itself the status *leader* with probability $1/\log\log n$, independently of other nodes. Thus, the number of leaders is $\Theta(n/\log\log n)$, with high probability. Then, the idea is to build a (global) knowledge about connected components of the input graph by assigning nodes to the leaders which appear together with them in connected components of samples. [In order to determine connected components without their spanning trees, it is sufficient to apply the following procedure:]{} if the connected component of $u_j$ in the $i$th sample contains some leader, the boss $b_i$ will send a message to $u_j$ containing the ID of that leader. [However, as we want to determine spanning trees as well, we need a more complicated approach (see lines \[ss:compred:leader:b\]–\[ss:compred:leader:e\] of Alg. \[a:componentreduction\]):]{}
- If the connected component of $u_j$ in the $i$th sample contains a leader, the boss $b_i$ determines a shortest path $P$ connecting $u_j$ and a leader in $C_i(u_j)$.
- If the connected component of $u_j$ in the $i$th sample does *not* contain a leader, the boss $b_i$ determines a shortest path $P$ connecting $u_j$ and the node of $C_i(u_j)$ with the smallest ID.
- [Then, $b_i$ sends a message to $u_j$ containing the ID of the neighbour of $u_j$ in $P$. The message sent to $u_j$ contains some additional information which we need in order to deal with nodes which are connected to different leaders in various samples and nodes which are not connected to any leader. (Details are explained in proofs of Prop. \[p:leader:connect\] and Prop. \[p:small:uncovered\].)]{}
We will say that a component $C$ is *small* if $C$ is ungrowable and the size of $C$ is at most $s=\sqrt{\log n}$.[[^3]]{} In the following, we split nodes of the input graph into three subsets:
- $V_{\alpha}$: the nodes connected to a leader in at least one sample graph,
- $V_{\beta}$: the elements of small components of the input graph, [which do not belong to $V_{\alpha}$,]{}
- $V_{\gamma}$: the remaining nodes of the graph; thus, $v$ belongs to $V_\gamma$ when $v$ is not an element of a small component of the input graph and there are no leaders in connected components of $v$ in samples $G_1,\ldots,G_m$.
In the analysis of Alg. \[a:componentreduction\], we show that each node from $V_\alpha$ will belong to a component containing a leader in the final partition $\mathbb{C}$ determined by the coordinator.
[\[p:leader:connect\]]{} Assume that $C_i(v)$ (i.e., the connected component of $v\in V$ in $G_i$) for some $i\in[m]$ contains a leader. Then, the connected component of $v$ in the final partition $\mathbb{C}$ contains a leader as well.
Moreover, we show that small components of the input graph are uncovered by the coordinator with high probability, which determines the final components of nodes from $V_\beta$.
[\[p:small:uncovered\]]{} The following property holds with high probability for each small component $C$ (i.e., an [ungrowable]{} component of size at most $s=\sqrt{\log n}$) of the input graph: $C$ is a connected component of the final partition determined by the coordinator in Alg. \[a:componentreduction\] or (at least one) leader belongs to $C$.
While Prop. \[p:leader:connect\] and \[p:small:uncovered\] concern $V_\alpha$ and $V_\beta$, we give an estimation of the size of $V_\gamma$ in the following proposition.
[\[p:large:afew\]]{} The number of nodes from $V_\gamma$ is $O(n/\log\log n)$, with high probability.
We postpone the proofs of the above propositions and first show the properties of Alg. \[a:componentreduction\] that follow from them; this in turn directly implies Lemma \[l:componentreduction\].
1. $m\gets \sqrt{n}$; **for** $i\in[m]$ **do** $b_i\gets u_i$; the coordinator $\gets u_n$
2. **for** each $i\in[m]$, $j\in[m]$ and each edge $e_{(i,k)}$ **do** \[ss:send:edge:b\]
3.  $u_i$ adds $e_{(i,k)}$ to $G_j$ and sends it to $b_j$ with probability $1 / \log \log n$ \[ss:send:edge\]
4. **for** each $i\in[m]$ **do** $b_i$ calculates a spanning forest $F_i$ induced by the received edges
5. each node becomes a leader with probability $1 / \log \log n$, independently, and announces it to all bosses $b_i$
6. **for** each $i\in[m]$ and each node $u_j$ **do** \[ss:compred:leader:b\]
7.  **if** $C_i(u_j)$ contains a leader **then**
8.   $\textit{dist}\gets$ the length of a shortest path from $u_j$ to a leader in $C_i(u_j)$
9.   $p_i(u_j)\gets$ the first node on a shortest path from $u_j$ to the closest leader in $C_i(u_j)$
10.   $b_i$ sends the message $(n-\textit{dist}, |C_i(u_j)|, i, p_i(u_j))$ to $u_j$ \[a:sr:largest1\]
11.  **else**
12.   $p_i(u_j)\gets$ the first node on a shortest path from $u_j$ to the node with the smallest ID in $C_i(u_j)$
13.   $b_i$ sends the message $(0, |C_i(u_j)|, i, p_i(u_j))$ to $u_j$ \[ss:compred:leader:e\]
14. let $(x,|C_i(u_j)|,i,p_i(u_j))$ be the largest message according to the lexicographic order received by $u_j$ \[a:l:largest\]
15. $p(u_j)\gets p_i(u_j)$; $u_j$ sends the edge $(u_j,p(u_j))$ to the coordinator \[s:ujsends\]
16. the coordinator computes the components determined by the received edges
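The message-selection rule can be illustrated by a sequential Python sketch (ours, not the distributed implementation; the name `size_reduction` is invented, and for brevity the comparison key omits the parent field, which does not change the winner since the sample index $i$ already breaks ties):

```python
from collections import deque, defaultdict

def find(p, v):
    """Union-find lookup with path halving on a parent map p."""
    while p[v] != v:
        p[v] = p[p[v]]
        v = p[v]
    return v

def size_reduction(nodes, samples, leaders, n):
    """Sequential sketch of the message selection in the pseudocode above.
    samples: list of edge lists; leaders: set of node IDs.
    Returns the coordinator's parent map over the received edges."""
    best = {}                                    # v -> (message, first hop)
    for i, edges in enumerate(samples):
        nbr = defaultdict(set)
        for a, b in edges:
            nbr[a].add(b); nbr[b].add(a)
        for v in nodes:
            # BFS from v inside sample i
            dist, par = {v: 0}, {v: None}
            q = deque([v])
            while q:
                u = q.popleft()
                for w in nbr[u]:
                    if w not in dist:
                        dist[w], par[w] = dist[u] + 1, u
                        q.append(w)
            comp = set(dist)
            ls = comp & leaders
            if ls:                               # a leader is reachable here
                tgt = min(ls, key=lambda l: (dist[l], l))
                msg = (n - dist[tgt], len(comp), i)
            else:                                # fall back to the smallest ID
                tgt = min(comp)
                msg = (0, len(comp), i)
            path = [tgt]                         # walk back from tgt to v
            while par[path[-1]] is not None:
                path.append(par[path[-1]])
            hop = path[-2] if len(path) > 1 else v
            if v not in best or msg > best[v][0]:
                best[v] = (msg, hop)
    # coordinator: union the received (v, p(v)) edges
    p = {v: v for v in nodes}
    for v, (_, hop) in best.items():
        p[find(p, v)] = find(p, hop)
    return p
```

On a path $0$–$1$–$2$–$3$ with leader $3$ in a single sample, every node's first hop points toward the leader, so the coordinator merges all four nodes into one component containing the leader.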
[\[l:componentreductionalgcorrectness\]]{} At the end of Algorithm \[a:componentreduction\], a partition with at most $O(n/\log \log n)$ active components is determined, the coordinator knows this partition and a spanning tree for each component of the partition.
Let $\mathbb{C}$ be the final partition determined by the coordinator. Proposition \[p:small:uncovered\] implies that all small components of the input graph are also components of $\mathbb{C}$. Thus, they are inactive in $\mathbb{C}$. As there are $\Theta(n/\log\log n)$ leaders whp, Proposition \[p:leader:connect\] implies that all nodes from $V_\alpha$ belong to $O(n/\log\log n)$ active components. Finally, as there are only $O(n/\log\log n)$ nodes from $V_\gamma$, with high probability (Prop. \[p:large:afew\]), there are at most $O(n/\log\log n)$ components of $\mathbb{C}$ containing those nodes.
Finally, as the coordinator computes the final partition into connected components based on the received edges (step \[s:ujsends\]), it can also determine spanning trees of the components of this partition.
The remaining part of this section contains the proofs of Propositions \[p:leader:connect\], \[p:small:uncovered\], and \[p:large:afew\].
### Connections to leaders: Proof of Prop. \[p:leader:connect\]
Let $\text{dist}_{\text{leader}}(v)$ be the length of a shortest path connecting a node $v$ with a leader in the sample graphs $G_1,\ldots,G_m$, provided $v$ is connected with a leader in some sample.
We prove the proposition by induction with respect to the value of $\text{dist}_{\text{leader}}(v)$. The fact that $\text{dist}_{\text{leader}}(v)=0$ means that $v$ is a leader. Thus, $v$ certainly is in the connected component of the final partition $\mathbb{C}$ containing a leader. For the inductive step, assume that the proposition holds for each node $v$ such that $\text{dist}_{\text{leader}}(v)<j$ for some $j<n$. Let $v$ be a node connected to a leader in some sample such that $\text{dist}_{\text{leader}}(v)=j$. Thus, a shortest path connecting $v$ and a leader in a sample has length $j$. Therefore, the largest tuple according to lexicographic ordering obtained from the bosses by $v$ is $(n-j, |C_i(v)|, i, p_i(v))$ for some $i\in[m]$, where $p_i(v)$ is a neighbour of $v$ [in distance $j' < j$ from a leader]{}, i.e., $\text{dist}_{\text{leader}}(p_i(v))=j' < j$. Thus,
- $p(v)$ will be assigned the value $p_i(v)$ in the algorithm and $v$ sends an edge $(v,p_i(v))$ to the coordinator,
- as $\text{dist}_{\text{leader}}(p_i(v))= j' < j$, the inductive hypothesis guarantees that $p_i(v)$ is connected with a leader in the final partition $\mathbb{C}$ determined by the coordinator.
[Therefore]{}, as $p_i(v)$ is connected with a leader in the partition $\mathbb{C}$ determined by the coordinator and the edge $(v,p_i(v))$ is also known to the coordinator, $v$ is connected with a leader in $\mathbb{C}$ as well.
### Spanning trees of small components: Proof of Prop. \[p:small:uncovered\]
Before the formal proof of Prop. \[p:small:uncovered\], we give a general statement regarding a connected subgraph $G'$ of the input graph $G$ induced by a set $V'$ of $s\le 3\sqrt{\log n}$ nodes. Below, we show that a spanning tree of $G'$ appears in some sample, with high probability.
[\[p:stprobability\]]{} For a given set of nodes $V'$ of size $s \leq 3\sqrt{\log n}$ such that the subgraph of $G$ induced by $V'$ is connected, the probability that no sample contains a spanning tree of $V'$ is at most $O\left(\frac{1}{n^{\omega\left(1\right)}}\right)$.
Fix a spanning tree of the subgraph induced by $V'$; it consists of at most $s-1$ edges. Thus, it is present in a particular random sample with probability ${\text{Prob}}(\mathit{present}) \geq \left(\frac{1}{\log \log n}\right)^{s-1}$. Hence, it is absent from a particular random sample with probability at most $1-{\text{Prob}}(\mathit{present})$, and absent from all samples simultaneously with probability at most\
$$\begin{aligned}
\left(1-{\text{Prob}}\left( \mathit{present} \right) \right)^{\sqrt{n}} &\leq
\left(1-\left(\frac{1}{\log \log n}\right)^{s-1}\right)^{\sqrt{n}} = \left(1-\left(\frac{1}{\log \log n}\right)^{s-1}\right)^{ \left( \log \log n \right)^{s-1} \sqrt{n} \left( \log \log n \right)^{1-s} }\\
&\leq \left(\frac{1}{e}\right)^{\frac{\sqrt{n}}{ \left( \log \log n \right)^{s-1}}} =
O\left(\frac{1}{n^{\omega\left(1\right)}}\right)
\end{aligned}$$
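A quick numeric sanity check of the inequality chain above (ours; we take natural logarithms, which does not affect the asymptotic statement):

```python
import math

# A fixed spanning tree with s - 1 edges survives into one sample with
# probability p >= (1/loglog n)^{s-1}, hence it is missing from all
# sqrt(n) samples with probability (1 - p)^{sqrt(n)} <= exp(-sqrt(n) * p),
# using the elementary bound 1 - x <= e^{-x}.
n = 10**9
L = math.log(math.log(n))            # loglog n, roughly 3.03
for s in (6, 8, 10):
    p = L ** (1 - s)                 # lower bound on the sampling probability
    lhs = (1 - p) ** math.sqrt(n)    # tree missing from every sample
    rhs = math.exp(-math.sqrt(n) * p)
    assert lhs <= rhs
```

For $s=6$ and $n=10^9$ the bound is already below $10^{-50}$, illustrating the $O(1/n^{\omega(1)})$ behaviour.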
Using Prop. \[p:stprobability\], we now prove Prop. \[p:small:uncovered\]. Let $C$ be an ungrowable connected component of the input graph $G$ such that $|C| \leq \sqrt{\log n}$ [and no leader belongs to $C$.]{} By Prop. \[p:stprobability\], the probability that no sample contains a spanning tree of $C$ is at most $O(\frac{1}{n^{\omega(1)}})$. As there are at most $n$ small components, by the union bound, the probability that some small component of the input graph is not a component of any sample is at most $$n \cdot O\left(\frac{1}{n^{\omega(1)}}\right) = O\left(\frac{1}{n^{\omega(1)}}\right).$$ Therefore, with probability $1 - O(\frac{1}{n^{\omega(1)}})$, each small component of the input graph is a component of some sample. Thus, in order to prove Prop. \[p:small:uncovered\], it is sufficient to show the following fact: if a small component $C$ of the input graph is a component of some sample and no leader belongs to $C$, then $C$ is also a component of the partition $\mathbb{C}$ determined by the coordinator. Assume that $C$ is a small component of $G$, $C$ is a component of a sample $G_i$ for $i\in [m]$, and no leader belongs to $C$. W.l.o.g. assume that $i$ is the largest such index, i.e., $C$ is a component of $G_i$ but not of $G_j$ for any $j>i$. Let $v_{\min}$ be the node of $C$ with the smallest ID. Then, for each $v\in C$, the boss $b_i$ sends $(0, |C|, i, p_i(v))$ to $v$, where $p_i(v)$ is the parent of $v$ in a tree $T$ rooted at $v_{\min}$, consisting of shortest paths between $v_{\min}$ and the other elements of $C$. The choice of $i$ and the assumption that there are no leaders in $C$ guarantee that, for each $v\in C$, the message received by $v$ from $b_i$ is the largest message according to the lexicographic ordering among the messages received by $v$ from the bosses (see line \[a:l:largest\]). Thus, each $v\in C$ sends $p(v)=p_i(v)$ to the coordinator.
Thanks to this, the coordinator learns the above-described spanning tree $T$ of $C$. Hence, $C$ is a component of the partition determined by the coordinator. Therefore, the coordinator knows a spanning tree of every small connected component $C$ of $G$ with probability at least $1 - O(\frac{1}{n^{\omega(1)}})$.
### Leaderless nodes in large components: Proof of Prop. \[p:large:afew\]
[\[ss:leaderless\]]{}
We say that a node is *bad* if it belongs neither to a small component nor to a component containing a leader in the final partition $\mathbb{C}$ determined by the coordinator. That is, a node is bad iff it belongs to $V_\gamma$. In order to prove Prop. \[p:large:afew\], it is sufficient to show that there are $O(\frac{n}{\log \log n})$ bad nodes: even in the worst case, when each bad node forms its own active component, the bad nodes contribute only $O(\frac{n}{\log \log n})$ active components.
The outline of the proof is as follows. Firstly, we cover all nodes from non-small components (i.e., from components of size larger than $s=\sqrt{\log n}$) of the input graph by connected sets $V_1, V_2,\ldots, V_r$ of sizes in the range $[s, 3s]$ (Fact \[f:treepartition\]) such that the $V_i$'s are “almost pairwise disjoint” (a precise definition is provided below). Then, we associate a random $0/1$ variable $X_i$ with each set $V_i$ such that $X_i=0$ implies that no node from $V_i$ is bad. (In particular, $X_i=0$ holds when $V_i$ contains a leader and at least one sample graph $G_j$ contains a spanning tree of $V_i$. Thus, by Prop. \[p:leader:connect\], the nodes of $V_i$ are not bad if $X_i=0$.) Importantly, the variables $X_i$ are independent and the probabilities ${\text{Prob}}(X_i=1)$ are small. As the number of bad nodes is at most $$\sum_{i\in[r]} |V_i|X_i\le 3\sqrt{\log n}\sum_{i\in[r]} X_i,$$ we prove an upper bound on $\sum_i X_i$ which ensures that the number of bad nodes is $O(n/\log\log n)$, with high probability.
We start with a cover of non-small components by connected, “almost pairwise disjoint” sets of sizes in the range $[s,3s]$, called an *almost-partition*. More precisely, we say that sets $A_1,\ldots,A_k$ form an *almost-partition* of a set $A$ iff $\bigcup_{i=1}^k A_i=A$ and, for each $j\in[k]$, $A_j$ contains at most one element belonging to the other sets from $A_1,\ldots, A_k$, i.e., $|A_j\cap\bigcup_{i\neq j}A_i|\le 1$. The elements of $A_j\setminus \bigcup_{i\neq j}A_i$ are called *unique* for $A_j$.
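The definition can be restated as a small checker (our helper; the name `is_almost_partition` is invented):

```python
def is_almost_partition(sets, A):
    """True iff `sets` is an almost-partition of A: the sets cover A and
    each set shares at most one element with the union of the others."""
    sets = [set(S) for S in sets]
    if set().union(*sets) != set(A):
        return False                          # the cover condition fails
    return all(
        len(S & set().union(*(T for k, T in enumerate(sets) if k != j))) <= 1
        for j, S in enumerate(sets)
    )
```

For example, `[{1,2,3}, {3,4,5}]` is an almost-partition of $\{1,\ldots,5\}$ (the sets share only the element $3$), while `[{1,2,3}, {2,3,4}]` is not an almost-partition of $\{1,\ldots,4\}$ (two shared elements).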
[\[f:treepartition\]]{} Let $T$ be a tree of size at least $s\in{{\mathbb N}}$. Then, there exists an almost-partition $T_1, T_2, \dots, T_k$ of $T$ such that $$\label{e:almost}
T_i \mbox{ is a connected subgraph of } T \mbox{ and }
|T_i| \in [s, 3s]\mbox{ for each }i\in[k].$$
We prove the statement of the fact inductively. If the size of $T$ is in the interval $[s,3s]$, the almost-partition consisting of $T$ only satisfies the given constraints.
For the inductive step, assume that the fact holds for trees of size at most $n$ for $n>3s$. Let $T$ be a tree of size $|T|=n+1$ on a set of nodes $V$. In the following, we say that a subgraph $T'$ of $T$ induced by $V'\subseteq V$ is a *subtree* of $T$ iff $T'$ and the subgraph of $T$ induced by $V\setminus V'$ are trees. Assume that $T$ contains a subtree $T'$ such that $|T'| \ge s$ and $|T|-|T'|\ge s$. Then, by the inductive hypothesis, there exist almost-partitions of $T\setminus T'$ and of $T'$ satisfying (\[e:almost\]). Thus, an almost-partition of $T$ obtained from the almost-partitions of $T\setminus T'$ and of $T'$ satisfies (\[e:almost\]) as well.
Now, assume that $$\label{e:balans}
T \mbox{ does not contain a subtree } T' \mbox{ such that } |T'| \ge s \mbox{ and }|T|-|T'|\ge s.$$ For a tree with a fixed root $r$, $T(u)$ denotes a subtree of $T$ rooted at $u$. Now, we show an auxiliary property of trees satisfying (\[e:balans\]).
[\[cl:balans\]]{} Let $T$ be a tree satisfying (\[e:balans\]). Then, one can choose $r\in T$ as the root of $T$ such that
(a) $|T(v_i)|<s$ for each $i\in[k]$, where $\{v_1,\ldots,v_k\}$ is the set of children of $r$.
*Proof of Claim \[cl:balans\].* Let $r_0$ be an arbitrary node of a tree $T$ which satisfies (\[e:balans\]). If (a) is satisfied for $r=r_0$, we are done. Otherwise, we define the sequence $r_0, r_1,\ldots$ of nodes such that $r_{i+1}$ for $i\ge 0$ is the child of $r_i$ in $T$ (rooted at $r_0$) with the largest subtree. Then,
(i) $|T(r_i)|>|T(r_{i+1})|$, because $T(r_{i+1})$ is a proper subtree of $T(r_i)$,

(ii) if $|T(r_i)|\ge s$, then $|T\setminus T(r_i)|<s$, by the assumption (\[e:balans\]).
The condition (i) guarantees that $|T(r_{j})|\ge s$ and $|T(r_{j+1})|<s$ for some $j\ge 0$. Thus, $|T(v)|<s$ for all children of $r_{j}$, since $T(r_{j+1})$ has the largest size among subtrees rooted at children of $r_j$. The assumption (\[e:balans\]) implies also that $|T\setminus T(r_j)|<s$. Thus, (a) is satisfied for $T$ if the root $r$ is equal to $r_j$. (*Proof of Claim \[cl:balans\]*)
Using Claim \[cl:balans\], we can choose the root $r$ of $T$ such that $$s>|T(v_1)|\ge |T(v_2)|\ge\cdots\ge |T(v_k)|,$$ where $\{v_1,\ldots,v_k\}$ is the set of children of $r$ in $T$ (when $r$ is the root of $T$). Next, we split the set of trees $T(v_1),\ldots,T(v_k)$ into subsets such that the number of nodes in each subset is in the range $[s-1,3s-1]$. Such a splitting is possible thanks to the facts that $|T(v_i)|<s$ for each $i\in[k]$ and $\sum_{i=1}^k|T(v_i)|\ge 3s$. Finally, by adding the node $r$ to each subset, we obtain an almost-partition satisfying (\[e:almost\]).
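The greedy cutting behind the proof can be sketched in Python (our simplified version: the remainder at the root is kept as a possibly undersized extra group instead of being merged, and groups may share more than one anchor node, so this illustrates the mechanism rather than the exact $[s,3s]$ guarantee of Fact \[f:treepartition\]; the name `tree_groups` is ours):

```python
def tree_groups(children, root, s):
    """Cut a rooted tree into connected groups.  Every emitted group has
    size in [s, 2s); at most one leftover group at the root may be
    smaller.  Groups overlap only in their 'anchor' nodes."""
    groups = []
    def dfs(v):
        bucket = [v]                       # v anchors every group cut at v
        for c in children.get(v, []):
            bucket.extend(dfs(c))          # leftover of c's subtree, size < s
            if len(bucket) >= s:
                groups.append(bucket)
                bucket = [v]               # keep the anchor for connectivity
        return bucket                      # leftover passed up to the parent
    rest = dfs(root)
    if len(rest) > 1 or not groups:
        groups.append(rest)                # remainder containing the root
    return groups
```

On a path $0$–$1$–$\cdots$–$9$ with $s=3$, this yields groups such as $\{7,8,9\}$, $\{5,6,7\}$, $\{3,4,5\}$, $\{1,2,3\}$ and the remainder $\{0,1\}$: consecutive groups share exactly one anchor, as in an almost-partition.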
Using Fact \[f:treepartition\], we can eventually prove Prop. \[p:large:afew\]. Let $S_1, S_2, \dots, S_k$ be the non-small connected components of the input graph $G$, i.e., $|S_i| > \sqrt{\log n}$ for each $i\in[k]$. By Fact \[f:treepartition\], there exists an almost-partition of a spanning tree of each $S_i$ into trees of sizes from the interval $[\sqrt{\log n}, 3\sqrt{\log n}]$. Let $\mathbb{T}=\{T_1,T_2,\ldots\}$ be the set of trees equal to the union of all those almost-partitions. Observe that, according to the properties of almost-partitions, there are at least $\sqrt{\log n} - 1$ nodes *unique* for each $T_{i}$, i.e., nodes which belong to $T_{i}$ and do not belong to any other tree of the above specified almost-partitions of the components $S_1, S_2, \dots, S_k$. Thus, $|\mathbb{T}|=O(n/\sqrt{\log n})$. We associate random events $A_{i}$ and $B_{i}$ with each tree $T_{i}$, where
- $A_{i}$ is the event that all edges of $T_{i}$ appear together in at least one sample graph among $G_1,\ldots,G_m$ in an execution of Alg. \[a:componentreduction\];
- $B_{i}$ is the event that at least one element of the set of nodes unique for $T_{i}$ has the status leader in an execution of Alg. \[a:componentreduction\].
Importantly, all events $A_{i}$ and $B_{i}$ are independent, thanks to the facts that each edge is included in each sample graph independently, the sets of edges of the $T_{i}$'s are disjoint, the sets of nodes *unique* for the $T_{i}$'s are pairwise disjoint, and the random choices determining whether a node has the status leader are also independent.
By Prop. \[p:stprobability\], the probability of $A_{i}$ is ${\text{Prob}}(A_{i})=1-O(\frac{1}{n^{\omega(1)}})$. As $T_{i}$ has at least $\sqrt{\log n}-1$ unique nodes, the probability of $B_{i}$ is at least $${\text{Prob}}(B_{i})\ge 1-\left(1-\frac{1}{\log \log n}\right)^{\sqrt{\log n}-1}.$$ Observe that the conjunction of the events $A_{i}$ and $B_{i}$ guarantees that the nodes of $T_{i}$ are connected to a leader in the partition $\mathbb{C}$, i.e., they are not bad nodes. Let $X_{i}$ be a 0/1 variable, where $X_{i}=0$ iff both $A_{i}$ and $B_{i}$ hold. Thus, $X_{i}=0$ implies that no node from $T_{i}$ is bad. As the events $A_{i}$ and $B_{i}$ are independent, the probability that $X_{i}=0$ can be estimated as follows: $$\begin{aligned}
{2}
{\text{Prob}}(X_{i}=0) & = {\text{Prob}}(A_i)\cdot {\text{Prob}}(B_i)\\
& \geq
\left(1 -O\left(\frac{1}{n^{\omega\left(1\right)}}\right)\right) \cdot \left(1-\left(1-\frac{1}{\log \log n}\right)^{\sqrt{\log n}-1}\right) \\
& \geq
\left(1 -O\left(\frac{1}{n^{\omega\left(1\right)}}\right)\right)\cdot \left(1 -\frac{1}{e^{(\sqrt{\log n}-1)/ {\log \log n}}}\right) \\
& = 1 -O\left(\frac{1}{e^{(\sqrt{\log n}-1)/ {\log \log n}}}\right).\end{aligned}$$
Thus, ${\text{Prob}}(X_{i}=1) =O\left(\frac{1}{e^{(\sqrt{\log n}-1)/ {\log \log n}}}\right)$. The number of *bad* nodes is upper bounded by $$\label{e:xi}
\sum\limits_i X_{i}\cdot |T_{i}|
= O\left(\sqrt{\log n}\right) \sum\limits_i X_{i},$$ since $|T_{i}| \in \Theta(\sqrt{\log n})$ for each $i$ under consideration. The expected value of the sum of the variables $X_{i}$ can be estimated as $$\begin{aligned}
{2}
E\left[\sum\limits_i X_{i}\right] & =\sum\limits_i O\left(\frac{1}{e^{(\sqrt{\log n}-1)/ {\log \log n}}}\right)
= O\left(\frac{n}{\sqrt{\log n}}\right) O\left(\frac{1}{e^{(\sqrt{\log n}-1)/ {\log \log n}}}\right) \\
&= O\left(\frac{n}{\sqrt{\log n}\cdot e^{(\sqrt{\log n}-1)/ {\log \log n}}}\right).\end{aligned}$$ As $X_{i}$ are independent $0-1$ random variables, $$\label{e:chern}
\sum\limits_{i} X_{i} \in O\left(\frac{n}{\sqrt{\log n}\cdot e^{(\sqrt{\log n}-1)/ {\log \log n}}}\right)
\mbox{ with high probability}$$ by a standard Chernoff bound. Therefore, by (\[e:xi\]) and (\[e:chern\]), the number of *bad* nodes is $$O(\sqrt{\log n})\cdot O\left(\frac{n}{\sqrt{\log n}\cdot e^{(\sqrt{\log n}-1)/ \log \log n}}\right) = O\left(\frac{n}{e^{(\sqrt{\log n}-1)/ \log \log n}}\right) = O\left(\frac{n}{\log \log n}\right)$$ with high probability. This fact finishes the proof of Prop. \[p:large:afew\].
MST in $O(1)$ rounds
=================
[\[s:MST\]]{}
In order to find a minimum spanning tree (MST) of a given input graph, we will use our $O(1)$-round connected components algorithm and the reduction from [@HegemanPPSS15]. The MST problem for a given graph can be reduced, by using the KKT random sampling [@Karger:1995:RLA:201019.201022], to two consecutive instances of MST, each for a graph with $O(n^{3/2})$ edges. The authors of [@HegemanPPSS15] observed that the MST problem for a graph with $O(n^{3/2})$ edges can be reduced to $\sqrt{n}$ instances of the connected components problem, solved simultaneously. In each of those $\sqrt{n}$ instances, the set of neighbours of each node is a subset of the set of its neighbours in the original input graph with $O(n^{3/2})$ edges (a nice exposition of the reduction is also given in [@GhaffariParter2016]). For further reference, we state these reductions more precisely.
[@HegemanPPSS15][\[l:red1:hegeman\]]{} Let $G(V,E,c)$ be an instance of the MST problem. There are $O(1)$-round congested clique algorithms $A_1, A_2$ such that, with high probability,
1. $A_1$ builds $G_1(V,E_1,c)$ with $O(n^{3/2})$ edges such that $E_1\subseteq E$ and,
2. given a minimum spanning forest of $G_1$, $A_2$ builds $G_2(V,E_2,c)$ with $O(n^{3/2})$ edges such that $E_2\subseteq E$ and a minimum spanning tree of $G_2$ is also a minimum spanning tree of $G$.
[@HegemanPPSS15][\[l:red2:hegeman\]]{} Let $G(V,E,c)$ be an instance of MST, where $|E|=O(n^{3/2})$. There is a congested clique $O(1)$-round algorithm which reduces the MST problem for $G$ to $m=\sqrt{n}$ instances $G_i(V,E_i)$ for $i\in[m]$ of the connected components problem, such that (i) $E_1\subseteq E_2\subseteq\cdots\subseteq E_m=E$; (ii) each node $v$ knows the edges incident to $v$ in $E_i$ for each $i\in[m]$ at the end of an execution of the algorithm.
If we show that our algorithm can be executed simultaneously for $\sqrt{n}$ instances satisfying the properties from Lemma \[l:red2:hegeman\] in $O(1)$ rounds, then we obtain an $O(1)$-round randomized algorithm for MST.
As shown in Section 3 of [@GhaffariParter2016], the algorithm ReduceCC applied in GPReduction (Lemma \[l:GP2016\]) can be executed in parallel for $\sqrt{n}$ instances as above. As the algorithm GPReduction satisfying Lemma \[l:GP2016\] consists of $O(1)$ executions of ReduceCC (see the proof of Lemma \[l:GP2016\]), it can be executed in parallel for $\sqrt{n}$ instances of the problem satisfying the conditions from Lemma \[l:red2:hegeman\].
Therefore, in order to prove that our algorithm can be executed in parallel for the $\sqrt{n}$ instances described in Lemma \[l:red2:hegeman\], it is sufficient to show that [executions of Algorithm \[a:degreereduction\] and Algorithm \[a:componentreduction\] called in Alg. \[a:cc\] can be executed in parallel for such instances.]{} In Sections \[ss:parallel1\] and \[ss:parallel2\], we show that this is the case. Finally, in Section \[ss:parallel:final\], we discuss a parallel execution of the whole Alg. \[a:cc\] for all those instances. This gives an $O(1)$-round randomized algorithm determining a minimum spanning tree, which proves Theorem \[t:MST\].
Parallel executions of Alg. \[a:degreereduction\]
-------------------------------------------------
[\[ss:parallel1\]]{}
In this section we show that Alg. \[a:degreereduction\] can be executed for $\sqrt{n}$ related sparse instances of the problem in parallel, as stated in the following lemma.
[\[l:parallel1\]]{} Let $G_1(V,E_1), \ldots, G_m(V,E_m)$ for $m=\sqrt{n}$ be the input graphs in the congested clique model such that $|E_i|=O(n^{3/2})$ for each $i\in[m]$, $E_1\subseteq E_2\subseteq\cdots\subseteq E_m$, and $v_j$ knows its neighbours in each of the graphs $G_1,\ldots, G_m$. Then, Alg. \[a:degreereduction\] can be executed simultaneously for $G_1,\ldots,G_m$ in $O(1)$ rounds in the following framework:
- for each $i\in[m]$, $j\in[n]$, the node $u_j$ has assigned a node $\text{proxy}(i,j)\in\{u_1,\ldots,u_n\}$ such that $\text{proxy}(i,j)$ works on behalf of $u_j$ in the $i$th instance of the problem;
- for each $j\in[n]$, the node $u_j$ works [as the proxy on behalf of $O(1)$ nodes (possibly in many instances of the problem),]{} i.e., [$$|\{k\,|\,u_j=\text{proxy}(i,k)\}|=O(1).$$]{}
The proof of Lemma \[l:parallel1\] is presented in the remaining part of this section. As there are $m$ instances of the problem to solve (and therefore $m$ instances of Alg. \[a:degreereduction\]), we can set up $m$ coordinators $c_1,\ldots,c_m$ such that, e.g., the node $u_i$ acts as the coordinator $c_i$.
Steps \[ag:s3\], \[ag:s4\], \[ag:s6\] and \[ag:s7\] of Alg. \[a:degreereduction\] either do not require any communication (steps \[ag:s4\] and \[ag:s7\]) or each node transmits a single message to the coordinator (steps \[ag:s3\] and \[ag:s6\]). Thus, these steps can be executed in parallel in $O(1)$ rounds: instead of sending a message to one coordinator, each node can send the appropriate messages to the coordinators $c_1,\ldots,c_m$ in a single round, since $m=\sqrt{n}$.
The main problem with a parallel execution of steps \[ag:s2\], \[ag:s5\] and \[ag:s8\] of Alg. \[a:degreereduction\] is that each node sends a message to all its neighbours. Thus, a node with degree $\Delta=\omega(\sqrt{n})$ needs to send $$m\cdot \Delta=\sqrt{n}\cdot \omega(\sqrt{n})=\omega(n)$$ messages in $m$ parallel executions of Alg. \[a:degreereduction\], which cannot be done in $O(1)$ rounds, even with the help of, e.g., Lenzen’s routing, because of the limited bandwidth of edges. To overcome this problem, we take advantage of the fact that the number of edges in each of the $m=\sqrt{n}$ instances of the problem is $O(n^{3/2})$. Thus, the overall number of messages to send in all instances is $$O\left(\sum_{v\in V}\sum_{i\in[m]} d_{G_i}(v)\right)=O\left(\sum_{i\in[m]}\sum_{v\in V} d_{G_i}(v)\right)=O\left(\sum_{i\in[m]} |E_i|\right)=O(\sqrt{n}\cdot n^{3/2})=O(n^2).$$ Hence, the amount of communication fits into the quadratic number of edges of the (congested) clique. In our solution, we distribute the communication load among so-called *proxies*. Let $d(u_j)=|\{(u_j,v)\,|\, (u_j,v)\in E_m\}|$; due to the assumption $E_1\subseteq\cdots\subseteq E_m$, this is an upper bound on the degree of $u_j$ in all graphs $G_1,\ldots,G_m$. Assume that a pool of *proxy* nodes $v_1,v_2,\ldots$ is available (see Lemma \[l:auxiliary\]). We assign $l_j=\lceil d(u_j)/\sqrt{n}\rceil$ proxy nodes to $u_j$ for each $j\in[n]$. Altogether, we need $$\sum_{j\in[n]}\lceil d(u_j)/\sqrt{n}\rceil\le n+\frac1{\sqrt{n}}\sum_{j\in[n]}d(u_j)=n+O\left(\frac1{\sqrt{n}}\cdot n^{3/2}\right)=O(n)$$ proxy nodes. The key idea is that the work of the node $u_j$ is split between its proxies such that each proxy node is responsible for simulating $u_j$ in $\min\{\sqrt{n},\lceil n/d(u_j)\rceil\}$ instances of Alg. \[a:degreereduction\]. In order to guarantee the feasibility of a simulation of all $\sqrt{n}$ executions of Alg. \[a:degreereduction\] by the proxies in $O(1)$ rounds, we have to address the following issues:
a.  In order to simulate the nodes $\{u_1,\ldots,u_n\}$ in various instances of the problem, the proxies need to know the mapping between the nodes $\{u_1,\ldots,u_n\}$ and the proxies simulating them in the respective instances of the problem.
b.  In order to simulate $u_j$ in the $i$th instance of the problem, the appropriate proxy should know the neighbours of $u_j$ in $G_i$. Thus, information about the neighbours of the appropriate nodes should be delivered to the proxies before the actual executions of Alg. \[a:degreereduction\].
c.  Each proxy $v$ of $u_j$ should be able to simulate each step of $u_j$, in all instances of the problem in which $v$ works on behalf of $u_j$, in $O(1)$ rounds.
Regarding (a), note that the values $d(u_j)$ can be distributed to all nodes in a single round. Using this information, each node can locally compute which proxies are assigned to particular nodes of the network in the consecutive instances of the problem, assuming that the proxies are assigned in ascending order, i.e., $v_1,\ldots, v_{l_1}$ are assigned to $u_1$, $v_{l_1+1},\ldots, v_{l_1+l_2}$ are assigned to $u_2$, and so on.
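This local computation of the proxy mapping can be sketched as follows. The function name `assign_proxies` and the toy degree sequence are our own illustration, not part of the paper; we assume $n$ is a perfect square and all degrees are positive.

```python
import math

def assign_proxies(degrees):
    """Map each node j to its block of proxy indices, assigned in ascending
    order as in the text: node 0 gets proxies 0..l_0-1, node 1 the next l_1
    proxies, and so on, where l_j = ceil(d(u_j)/sqrt(n))."""
    n = len(degrees)
    s = math.isqrt(n)                      # sqrt(n), assuming n is a square
    blocks, nxt = [], 0
    for d in degrees:
        l = max(1, math.ceil(d / s))       # l_j proxies for node j
        blocks.append(range(nxt, nxt + l))
        nxt += l
    return blocks, nxt                     # nxt = total proxies used

# Every node can compute the same mapping locally from the degree vector.
degrees = [5, 40, 16, 1] * 4               # toy example: n = 16, sqrt(n) = 4
blocks, total = assign_proxies(degrees)
```

Since the blocks are consecutive, the mapping is determined by the degrees alone, which matches the claim that one round of broadcasting the $d(u_j)$ values suffices.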
As for (b), we rely on the fact that $E_1\subseteq\cdots\subseteq E_m$. The node $u_j$ encodes information about its neighbours in all sets $E_i$ in the following way: for each edge $e$, it is enough to remember the smallest $i$ such that $e \in E_i$. More formally, $u_j$ encodes it as the set [$$T_j=\{(v,l)\,|\, (u_j,v)\in E_{l'}\text{ for }l'\ge l, (u_j,v)\not\in E_{l'}\text{ for }l'<l\}.$$ That is, $(v,l)\in T_j$ iff $(u_j,v)\in E_l, E_{l+1},\ldots,E_m$ and $(u_j,v)\not\in E_1,\ldots,E_{l-1}$.]{} Thus, knowing $T_j$, it is possible to determine the neighbours of $u_j$ in $G_i$ for each $i\in[m]$. The set $T_j$ is delivered to all $\lceil d(u_j)/\sqrt{n}\rceil$ proxies of $u_j$ in the following way:
- **Stage 1.** The set $T_j$ is split into $\lceil d(u_j)/\sqrt{n}\rceil$ subsets of size at most $\sqrt{n}$, and each subset is delivered to a different proxy of $u_j$.
- **Stage 2.** Each subset of $T_j$ delivered to a proxy of $u_j$ in Stage 1 is then delivered to all other proxies of $u_j$.
In order to perform Stages 1 and 2 in $O(1)$ rounds, we apply Lenzen’s routing algorithm [@Lenzen:2013:ODR:2484239.2501983] (Lemma \[l:lenzen\]), which works in $O(1)$ rounds provided each node has $O(n)$ messages to send and $O(n)$ messages to receive. Note that $u_j$ has $d(u_j)=O(n)$ messages to be transmitted and each proxy has $O(\sqrt{n})=O(n)$ messages to receive in Stage 1. In Stage 2, each proxy of $u_j$ is supposed to deliver and receive $$O\left(\sqrt{n}\cdot\frac{d(u_j)}{\sqrt{n}}\right)=O(n)$$ messages.
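The encoding of $T_j$ and the recovery of per-instance neighbourhoods can be sketched as follows. The helper names are hypothetical; the code assumes the inclusion property $E_1\subseteq\cdots\subseteq E_m$ stated above.

```python
def encode_T(neighbor_sets):
    """neighbor_sets[i] = set of neighbours of u_j in G_{i+1}.
    Returns T_j as a dict: for each neighbour v, the smallest index l
    (1-based) with (u_j, v) in E_l.  Requires E_1 ⊆ E_2 ⊆ ... ⊆ E_m."""
    T = {}
    for l, nbrs in enumerate(neighbor_sets, start=1):
        for v in nbrs:
            T.setdefault(v, l)       # first (i.e., smallest) l wins
    return T

def neighbors_in(T, i):
    """Recover the neighbours of u_j in G_i from T_j alone."""
    return {v for v, l in T.items() if l <= i}

# Toy example with m = 3 nested edge sets.
T = encode_T([{1}, {1, 2}, {1, 2, 5}])
```

The point of the encoding is that $|T_j|=d(u_j)$ regardless of $m$, so the full history of $m$ neighbourhoods travels at the cost of a single one.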
Knowing that the above issues (a) and (b) are resolved, we can also address (c). Thanks to the presented solution for (a), there is global knowledge of which proxy nodes are responsible for particular nodes of the network in the various instances of the problem, and all proxies start with the knowledge of the nodes simulated by them. Thus, in each step of the algorithm, transmissions between nodes are replaced with transmissions between the appropriate proxy nodes. It remains to verify whether the proxy nodes are able to deliver messages on behalf of the actual nodes simulated by them in **all** instances of Alg. \[a:degreereduction\] simulated by them. The number of messages supposed to be sent and received by the node $u_j$ in a step of Alg. \[a:degreereduction\] is at most $d(u_j)$. As each proxy of $u_j$ simulates $u_j$ in $\min\{\sqrt{n},\lceil n/d(u_j)\rceil\}$ instances of Alg. \[a:degreereduction\], it is supposed to send/receive at most $$O\left(d(u_j)\cdot \min\{\sqrt{n},\lceil n/d(u_j)\rceil\}\right)=O(n)$$ messages in each round. Hence, using Lenzen’s routing [@Lenzen:2013:ODR:2484239.2501983] (Lemma \[l:lenzen\]), each step of each execution can be simulated in $O(1)$ rounds.
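The per-proxy load bound $d(u_j)\cdot\min\{\sqrt{n},\lceil n/d(u_j)\rceil\}=O(n)$ can be checked numerically; the sketch below (our own sanity check, not from the paper) verifies the slightly stronger pointwise bound $\le n+d$ for every degree.

```python
import math

# Whichever branch of the min applies, d * min(sqrt(n), ceil(n/d)) <= n + d:
# the ceil branch gives d * ceil(n/d) <= d * (n/d + 1) = n + d, and the
# sqrt(n) branch only applies when d < n/(sqrt(n) - 1).
n = 10_000
s = math.isqrt(n)
for d in range(1, n + 1):
    load = d * min(s, math.ceil(n / d))
    assert load <= n + d        # hence load = O(n), since d <= n
```

The loop is exhaustive for this $n$, so the asymptotic claim is witnessed at every degree rather than spot-checked.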
Parallel executions of Algorithm \[a:componentreduction\]
---------------------------------------------------------
[\[ss:parallel2\]]{} In this section, we show the feasibility of simulating $\sqrt{n}$ instances of Alg. \[a:componentreduction\] in $O(1)$ rounds.
[\[l:parallel2\]]{} Assume that $m=\sqrt{n}$ graphs of degree $O(\log\log n)$ are given in the congested clique model, i.e., each node knows its neighbors in each of the graphs. Then, Alg. \[a:componentreduction\] can be executed simultaneously on all those instances in $O(1)$ rounds.
In order to prove Lemma \[l:parallel2\], it is sufficient to analyze the number of messages which nodes send/receive in the case that they simulate $\sqrt{n}$ instances of Alg. \[a:componentreduction\] simultaneously. Since Alg. \[a:componentreduction\] is executed on graphs of degree $O(\log\log n)$, the number of edges in each instance is $O(n\log\log n)$.
As there are $\sqrt{n}$ bosses and one coordinator in the “original” Alg. \[a:componentreduction\], the number of bosses increases to $\sqrt{n}\cdot\sqrt{n}=O(n)$ and the number of coordinators to $\sqrt{n}$ when $\sqrt{n}$ instances are executed simultaneously. In [step \[ss:send:edge\] of Alg. \[a:componentreduction\], the node $u_j$ is supposed to deliver each of its $|N(u_j)|$ incident edges with probability $1/\log\log n$ to each of the bosses in an instance of the problem. ]{} Hence, $u_j$ has at most $\sqrt{n}|N(u_j)|=O(n)$ messages to send in all $\sqrt{n}$ instances of the problem. As each edge is sent to each boss independently with probability $1/\log\log n$ and all graphs have $O(n\log\log n)$ edges, each boss receives $$O\left(\frac1{\log\log n}\cdot n\log\log n\right)=O(n)$$ edges with high probability, by a standard Chernoff bound. [Thus, we can perform step \[ss:send:edge\] in parallel for all instances using Lenzen’s routing lemma [@Lenzen:2013:ODR:2484239.2501983].]{} In the remaining steps of Alg. \[a:componentreduction\]:
- the original nodes $u_j$ send a single message to each boss or to the coordinator;
- the bosses send a message to all nodes of the network.
As there are $O(n)$ bosses and coordinators (each of them participating in exactly one instance of the problem), all $\sqrt{n}$ executions can be performed without asymptotic slowdown of the algorithm.
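A small simulation of the edge-sampling in step \[ss:send:edge\] illustrates why each boss receives $O(n)$ edges. The parameter values below are illustrative choices of our own, not taken from the paper.

```python
import math, random

random.seed(0)
n = 256
ll = math.log(math.log(n))           # log log n, about 1.71 here
p = 1.0 / ll                         # each edge reaches a boss w.p. 1/loglog n
n_edges = int(n * ll)                # a graph of average degree O(log log n)
n_bosses = math.isqrt(n)

# Each boss independently samples every edge with probability p;
# the expected load per boss is n_edges * p = n.
loads = [sum(random.random() < p for _ in range(n_edges))
         for _ in range(n_bosses)]
```

Since `n_edges` is only about `n * loglog n` and the sampling probability is `1/loglog n`, every boss load concentrates around `n`, matching the Chernoff-bound argument in the text.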
Parallel executions of Algorithm \[a:cc\]
-----------------------------------------
[\[ss:parallel:final\]]{}
Using Lemma \[l:parallel1\], we can execute step \[s:cc1\] of Alg. \[a:cc\] in parallel for all instances from Lemma \[l:red2:hegeman\]. However, after such an execution, the results are distributed among the proxies rather than available at the original nodes of the network corresponding to the nodes of the graph. It might seem that, in order to perform the remaining steps of the $m=\sqrt{n}$ instances of Alg. \[a:cc\], we can collect the appropriate information back at the “original” nodes and use Lemma \[l:GP2016\] and Lemma \[l:parallel2\]. This, however, is not that simple, as we have to face the following obstacle. The original instances $G_i(V,E_i)$ satisfied the relationship $E_1\subseteq\cdots\subseteq E_m$, which helped to pass information about neighbourhoods to the proxies. After the executions of Alg. \[a:degreereduction\], this “inclusion property” does not hold any more. Therefore, we continue with proxies: both executions of GPReduction as well as an execution of Alg. \[a:componentreduction\] are performed in parallel in such a way that the proxies work on behalf of their “master” nodes. Thanks to the fact that each node is a proxy of $c=O(1)$ “original” nodes only, Lemma \[l:GP2016\] and Lemma \[l:parallel2\] can be applied here and give $O(1)$ round solutions for all instances with the help of Lemma \[l:auxiliary\]. Finally, when the spanning forests of all $m=\sqrt{n}$ instances of the problem are determined, each proxy knows the parent of the “simulated” node in the respective spanning trees. As we consider $m=\sqrt{n}$ simulations and each node is the proxy of $O(1)$ nodes of the input network, information about the parents of nodes in the spanning trees can be delivered from the proxies to the original nodes in $O(1)$ rounds using Lenzen’s routing algorithm [@Lenzen:2013:ODR:2484239.2501983] (Lemma \[l:lenzen\]).
Conclusions
===========
In this paper, we have established an $O(1)$ round complexity for randomized algorithms solving MST in the congested clique. In contrast to the recent progress on randomized algorithms for MST, the best deterministic solution has not been improved since 2003 [@Lotker:2003:MCO:777412.777428]. As shown in [@DBLP:conf/fsttcs/PS16], the $O(\log^* n)$ round MST algorithm from [@GhaffariParter2016] can be implemented with a relatively small number of messages transmitted over an execution of the algorithm. We believe that our solution might also be optimized in a similar way. However, to obtain such a result, a more refined analysis and adjustment of parameters are necessary. [An interesting research direction is also to study limited variants of the general congested clique model, better adjusted to real architectures.]{}
Acknowledgments {#acknowledgments .unnumbered}
===============
The work of the first author was supported by the Polish National Science Centre grant DEC-2012/07/B/ST6/01534.
[^1]: email: [tju@cs.uni.wroc.pl](tju@cs.uni.wroc.pl)
[^2]: email: [knowicki@cs.uni.wroc.pl](knowicki@cs.uni.wroc.pl)
[^3]: Our results should also work for smaller $s$, e.g., polynomial with respect to $\log\log n$. However, as it does not affect the round complexity of the algorithm, we choose the $s$ which makes the analysis easiest.
---
abstract: 'Feedback can be utilized to convert information into useful work, making it an effective tool for increasing the performance of thermodynamic engines. Using feedback reversibility as a guiding principle, we devise a method for designing optimal feedback protocols for thermodynamic engines that extract all the information gained during feedback as work. Our method is based on the observation that in a feedback-reversible process the measurement and the time-reversal of the ensuing protocol both prepare the system in the same probabilistic state. We illustrate the utility of our method with two examples of the multi-particle Szilard engine.'
address: 'Departamento de Física Atómica, Molecular y Nuclear and GISC, Universidad Complutense de Madrid, 28040 Madrid, Spain, EU'
author:
- 'Jordan M. Horowitz and Juan M. R. Parrondo'
bibliography:
- 'Feedback.bib'
- 'PhysicsTexts.bib'
title: 'Designing optimal discrete-feedback thermodynamic engines'
---
Introduction
============
An important application of feedback is to increase the performance of thermodynamic engines by converting the information gathered during feedback into mechanical work [@Leff; @Allahverdyan2008; @Suzuki2009; @Toyabe2010; @Kim2011; @Abreu2011; @Vaikuntanathan2011]. However, for feedback implemented discretely – through a series of feedback loops initiated at predetermined times – the second law of thermodynamics for discrete feedback limits the maximum amount of work that can be extracted [@Sagawa2008; @Sagawa2010; @Horowitz2010; @Suzuki2010; @Ponmurugan2010; @Sagawa2011b; @Sagawa2011]. Namely, the average work extracted $\langle W\rangle$ during a thermodynamic process with discrete feedback in which a system is driven from one equilibrium state at temperature $T$ to another equilibrium state at the same temperature is bounded by the difference between the information gained during feedback $\langle
I\rangle$ and the average free energy difference $\langle\Delta
F\rangle$: $$\label{eq:GenSecLaw}
\langle W\rangle \le kT\langle I\rangle -\langle \Delta F\rangle,$$ where $k$ is Boltzmann’s constant. Here, $\langle I \rangle$ is the mutual information between the microscopic state of the system and the measurement outcomes, and $\langle \Delta F\rangle$ is the average free energy difference between the initial equilibrium state and the final equilibrium state, which may differ for each measurement outcome. Notice that the bound is expressed in terms of the extracted work, since we have in mind applications to thermodynamic engines. This differs from the more common convention of using the work done on the system, which is minus the work extracted [@Sagawa2008; @Sagawa2010; @Horowitz2010; @Suzuki2010; @Ponmurugan2010; @Sagawa2011b; @Sagawa2011].
*Optimal* thermodynamic engines extract the maximum amount of work, saturating the bound: $\langle W\rangle =kT\langle I\rangle -\langle\Delta F\rangle$. Their design often proceeds in two steps. One first selects a physical observable $M$ to be measured. Then, associated to each measurement outcome $m$, one chooses a unique protocol for varying a set of external parameters $\lambda$ during a time interval from $t=0$ to $\tau$, $\Lambda^m=\{\lambda_t^m\}_{t=0}^\tau$. For the process to be optimal, the collection of protocols $\{\Lambda^m\}$ must be designed to extract as work all the information gained from the measurement.
While at first it may not be obvious how to design a collection of optimal protocols [@Kim2011; @Abreu2011], there is a generic procedure for constructing such a collection given a physical observable $M$ [@Abreu2011; @Jacobs2009; @Erez2010; @Hasegawa2010; @Takara2010; @Esposito2011]; specifically, the optimal protocol is to instantaneously switch the Hamiltonian immediately after the measurement – through an instantaneous change of the external parameters – so that the probabilistic state of the system conditioned on the measurement outcome is an equilibrium Boltzmann distribution with respect to the new Hamiltonian. The external parameters are then reversibly adjusted to their final values, completing the protocol. While such a protocol can always be constructed theoretically, it may be difficult to realize experimentally: one may need access to an infinite number of external parameters in order to effect the instantaneous switching of the Hamiltonian [@Esposito2011]. Furthermore, there are optimal protocols that cannot be constructed by implementing this generic procedure. Hence, it is worthwhile to develop alternative procedures for engineering collections of optimal protocols.
In a recent article, we characterized optimal feedback processes, demonstrating that they are *feedback reversible* – indistinguishable from their time-reversals [@Horowitz2011]. There we pointed to the possibility of exploiting feedback reversibility in the design of optimal thermodynamic engines. In this article, we take the next step by explicitly formulating a recipe for engineering a collection of optimal feedback protocols for a given observable $M$ using feedback reversibility as a guiding principle. We present our method in the next section, generalizing the generic procedure outlined in the previous paragraph. We then illustrate our method with two pedagogical models inspired by the multi-particle Szilard engine recently introduced in [@Kim2011], and subsequently analyzed in [@Kim2011b]: a classical two-particle Szilard engine with hard-core interactions, and a classical $N$-particle Szilard engine with short-ranged, repulsive interactions. In each model, we design a different collection of feedback protocols, demonstrating the utility and versatility of our method. Concluding remarks are offered in the final section, with a view towards potential applications of our method to quantum feedback.
Measurement and preparation {#sec:prep}
===========================
In this section, we describe a general method for designing optimal feedback protocols. Our analysis is based on a theoretical framework characterizing the thermodynamics of feedback formulated in [@Sagawa2010; @Horowitz2010; @Suzuki2010; @Ponmurugan2010; @Sagawa2011b; @Sagawa2011; @Horowitz2011].
Consider a classical system whose position in phase space at time $t$ is $z_t$. The system, initially in equilibrium at temperature $T$, is driven by varying a set of external control parameters $\lambda$ initially at $\lambda_0$ from time $t=0$ to $\tau$ using feedback. At time $t=t_m$, an observable $M$ is measured whose outcomes $m$ occur randomly with probability $P(m|z_{t_m})$ depending only on the state of the system at the time of measurement $z_{t_m}$. The protocol, denoted as $\Lambda^m=\{\lambda_t^m\}_{t=0}^\tau$, depends on the measurement outcome after time $t_m$. Thermal fluctuations cause the system to trace out a random trajectory through phase space $\gamma=\{z_t\}_{t=0}^\tau$. The work extracted along this trajectory is $W[\gamma; \Lambda^m]$, and the reduction in our uncertainty due to the measurement is [@Sagawa2010; @Horowitz2010; @Horowitz2011] $$\label{eq:I2}
I[\gamma;\Lambda^m]=\ln\frac{P(m|z_{t_m})}{P(m)},$$ where $P(m)$ is the probability of obtaining measurement outcome $m$. For error-free measurements, which we consider in our illustrative examples below, the measurement outcome is uniquely determined by the state of the system at the time of measurement. Consequently, $P(m|z_{t_m})$ is always either zero or one. When $P(m|z_{t_m})=1$, this expression reduces to $$\label{eq:I}
I[\gamma;\Lambda^m]=-\ln P(m).$$ When $P(m|z_{t_m})=0$, the information is divergent; however, this divergence occurs with zero probability, and therefore does not contribute to the average. Finally, the change in free energy from the initial equilibrium state, $F(\lambda_0)$, to the final equilibrium state, $F(\lambda^m_\tau)$, denoted as $\Delta F[\Lambda^m]=F(\lambda^m_\tau)-F(\lambda_0)$, is realization dependent, since the final external parameter value at time $\tau$ depends on the measurement outcome $m$.
Associated to the feedback process is a distinct thermodynamic process called the reverse process [@Horowitz2010; @Sagawa2011b; @Horowitz2011]. The reverse process begins by first randomly selecting a protocol $\Lambda^m$ according to $P(m)$. The system is then prepared in an equilibrium state at temperature $T$ with the external parameters set to $\lambda^m_\tau$. From time $t=0$ to $\tau$, the system is driven by varying the external parameters according to the time-reversed conjugate protocol $\tilde{\Lambda}^m=\{\tilde{\lambda}^m_t\}_{t=0}^\tau$, where $\tilde\lambda^m_t=\lambda^m_{\tau-t}$. For every trajectory $\gamma=\{z_t\}_{t=0}^\tau$ of the forward process there is a time-reversed conjugate trajectory $\tilde\gamma=\{\tilde{z}_t\}_{t=0}^\tau$, where $\tilde{z}_t=z_{\tau-t}^*$ and $*$ denotes momentum reversal.
A feedback process that is indistinguishable from its reverse process is called *feedback reversible* [@Horowitz2011]. A useful microscopic expression for the present considerations is in terms of the phase space densities along the feedback process and the corresponding reverse process. Namely, the phase space density of the feedback process at time $t$ conditioned on executing protocol $ \Lambda^m$, $\rho(z_t|\Lambda^m)$, is identical to the phase space density in the reverse process at time $ \tau-t$ conditioned on executing protocol $ \tilde\Lambda^m$, $\tilde\rho(\tilde{z}_{\tau-t}| \tilde\Lambda^m)$: $$\label{eq:reversible}
\rho(z_t|\Lambda^m)=\tilde\rho(\tilde{z}_{\tau-t}| \tilde\Lambda^m).$$ Additionally, $$\label{eq:WIequal}
W[\gamma,\Lambda^m]=kTI[\gamma,\Lambda^m]-\Delta F[\Lambda^m]$$ for every realization [@Horowitz2011]. For cyclic ($\Delta F=0$) feedback-reversible processes, such as our illustrative examples, this relation is simply $W[\gamma,\Lambda^m]=kTI[\gamma,\Lambda^m]$.
We now utilize these two relations to develop a method for designing optimal feedback processes (or equivalently feedback-reversible processes). Our method is based on the observation that the reversibility condition has a noteworthy interpretation at the measurement time $t=t_m$: $$\label{eq:revMeas}
\rho(z_{t_m}|\Lambda^m)=\tilde\rho(\tilde{z}_{\tau-t_m}|\tilde\Lambda^m).$$ Specifically, $\rho(z_{t_m}|\Lambda^m)$ is the phase space density of the system at the time of the measurement conditioned on implementing protocol $\Lambda^m$; it represents our knowledge about the microscopic state of the system immediately after the measurement. We therefore refer to it as the *post-measurement* state. The right-hand side, $\tilde\rho(\tilde{z}_{\tau-t_m}| \tilde\Lambda^m)$, is the phase space density at time $t=\tau-t_m$ produced by the reverse process when protocol $ \tilde\Lambda^m$ is executed; it is the probabilistic state of the system prepared (or produced) by using protocol $\tilde\Lambda^m$ in the reverse process. Thus, we refer to $\tilde\rho(\tilde{z}_{\tau-t_m}|\tilde\Lambda^m)$ as the *prepared* state. With this terminology, the condition states that for a process to be feedback reversible the state prepared by the reverse process must be identical to the post-measurement state. This insight is our main tool for designing optimal feedback protocols. Instead of focusing on the feedback process, we search for a protocol that prepares the post-measurement state. We call this procedure *preparation*. Once we have chosen our protocols, we can verify their effectiveness by checking this equality; the deviation from it is a measure of the reversibility of each of the protocols in $\{\Lambda^m\}$.
Applications to the multi-particle Szilard engine {#sec:ex}
=================================================
In this section, we apply the preparation method presented above to two classical extensions of the Szilard engine inspired by the quantum multi-particle Szilard engine considered by Kim *et al.* in [@Kim2011]. We first design a collection of optimal protocols for a classical Szilard engine composed of two square particles with hard-core interactions. We then analyze an $N$-particle Szilard engine consisting of ideal point particles with short-ranged, repulsive interactions. In both examples, we verify that our protocols are optimal through analytic calculations of the work and information.
Two-particle Szilard engine {#subsec:two}
---------------------------
To illustrate the utility of our method, we now analyze a two-particle Szilard engine. We have in mind two indistinguishable square hard-core particles with linear dimension $d$ confined to a two-dimensional box of width $L_x$ and height $L_y$, as pictured in the accompanying figure.
The particles have a hard-core interaction with the walls, entailing that the center of the particles must be at least a distance $d/2$ from the walls. The box is in weak thermal contact with a thermal reservoir at temperature $kT=1$.
Work is extracted using a cyclic, isothermal feedback protocol performed infinitely slowly, as illustrated in the accompanying figure. Since the process is cyclic, $\langle\Delta F\rangle=0$, and we only need to investigate the extracted work. In addition, since the process is infinitely slow and isothermal, the work can be expressed in terms of partition functions, as in [@Parrondo2001]. There are two configurational partition functions that will prove useful: the first, denoted $Z_2(x,y)$, is the partition function for the state when both particles are in the same box of width $x$ and height $y$; the second, $\bar{Z}_2(x,y)$, is the partition function for the state where the particles are in separate boxes, each of width $x$ and height $y$. The calculation of these partition functions is a straightforward though lengthy exercise in integral calculus, which we outline in \[sec:appendix\].
We initiate the feedback protocol with the engine in thermal equilibrium at temperature $kT=1$. We then infinitely slowly insert a thin partition from below, dividing the box into two equal halves along the horizontal direction, as depicted in the accompanying figure.
Because the particles are hard-bodied and of finite size, the insertion of the partition extracts work. As we slowly insert the partition, the system remains in equilibrium and able to explore its entire phase space until the leading tip of the partition is one particle length $d$ from the box’s top wall. At which point, the particles are too large to pass between the left and right half of the box. At that moment, each particle becomes trapped in one half of the box; either they both become trapped in the same half of the box, or each is trapped in a separate half of the box. The partition function at that moment, being a sum over all distinct microscopic configurations, is then the sum of the partition function when they both become trapped in the left (or right) half, $Z_2(L_x/2,L_y)$, plus the partition function when they become trapped in separate halves, $\bar{Z}_2(L_x/2,L_y)$: $2Z_2(L_x/2,L_y)+\bar{Z}_2(L_x/2,L_y)$. The work extracted up to that instant is determined from the ratio of the partition function at that moment to the initial partition function $Z_2(L_x,L_y)$ as $$\label{eq:Win}
W_{\rm part}(L_x,L_y)=\ln\left[\frac{2Z_2(L_x/2,L_y)+\bar{Z}_2(L_x/2,L_y)}{Z_2(L_x,L_y)}\right].$$ Once the distance between the leading tip of the partition and the far wall of the box is less than $d$, neither particle is able to fit in the space between the tip and the wall. The partition’s tip is no longer able to push on the particles, and as a result no additional work beyond $W_{\rm part}$ is extracted.
Next, we measure in which half of the box the two particles are located. There are three outcomes, which we label $A$, $B$, and $C$ (see the accompanying figure). Outcomes $A$ and $C$ occur when both particles are found in the same half of the box, whereas outcome $B$ occurs when each particle is found in a separate half of the box. Since the partition functions $Z_2$ and $\bar{Z}_2$ count the number of distinct microscopic configurations, we can express the change in uncertainty associated to each outcome by inserting these partition functions into the expression for the information: $$\begin{aligned}
\label{eq:IA}
I_A=I_C=-\ln\left[\frac{Z_2(L_x/2,L_y)}{2Z_2(L_x/2,L_y)+\bar{Z}_2(L_x/2,L_y)}\right], \\
\label{eq:IB}
I_B=-\ln\left[\frac{\bar{Z}_2(L_x/2,L_y)}{2Z_2(L_x/2,L_y)+\bar{Z}_2(L_x/2,L_y)}\right].\end{aligned}$$
If both particles are found in the same half of the box (outcome $A$ or $C$), the optimal protocol is to quasi-statically shift the partition to the opposite end of the box, as in the single-particle Szilard engine [@Szilard1964], extracting work $$\label{eq:WorkExp}
W_{\rm shift}=\ln\left[\frac{Z_2(L_x,L_y)}{Z_2(L_x/2,L_y)}\right].$$ Summing $W_{\rm part}$ and $W_{\rm shift}$, we find that the work extracted during the feedback protocol associated to measurement outcome $A$ (or $C$) is $$\begin{aligned}
W_A&=W_{\rm part}(L_x,L_y)+W_{\rm shift} \\
&=\ln\left[\frac{2Z_2(L_x/2,L_y)+\bar{Z}_2(L_x/2,L_y)}{Z_2(L_x/2,L_y)}\right],\end{aligned}$$ which equals $I_A$ above. Thus, according to the criterion $W=kTI-\Delta F$, this protocol is optimal, as expected, since this protocol when run in reverse clearly prepares the post-measurement state conditioned on $A$.
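As a consistency check, the identity $W_A=I_A$ can be evaluated numerically in the ideal limit $d\to 0$, where the configurational partition functions reduce to $Z_2(x,y)=(xy)^2/2$ and $\bar{Z}_2(x,y)=(xy)^2$ for indistinguishable point particles. This limit and the code are our own illustration, not part of the original analysis:

```python
import math

def Z2(x, y):
    """Two indistinguishable ideal point particles in one x-by-y box."""
    return (x * y) ** 2 / 2

def Zbar2(x, y):
    """One ideal point particle in each of two x-by-y boxes."""
    return (x * y) ** 2

Lx, Ly = 1.0, 1.0
total = 2 * Z2(Lx / 2, Ly) + Zbar2(Lx / 2, Ly)   # state after insertion

W_part = math.log(total / Z2(Lx, Ly))            # work of inserting partition
W_shift = math.log(Z2(Lx, Ly) / Z2(Lx / 2, Ly))  # work of shifting partition
I_A = -math.log(Z2(Lx / 2, Ly) / total)          # information, outcome A

W_A = W_part + W_shift
# In the ideal limit W_part vanishes and W_A = I_A = ln 4.
```

The same functions reproduce $I_B=\ln 2$ for outcome $B$, consistent with the mutual-information counting above.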
When each particle is found in a separate half of the box (outcome $B$), the optimal protocol is less clear. The motion of the piston in either direction requires work rather than extracts it. Kim *et al.*, for instance, opt to extract the partition without obtaining any useful work [@Kim2011]: the information in the measurement is wasted. However, our discussion in the previous section suggests a way to design an optimal cyclic protocol: the protocol must drive the system from the post-measurement state for outcome $B$ back to the initial state, and when run in reverse must prepare the state associated to outcome $B$ by segregating each particle into a different half of the box. When the particles do not interact, there is no obvious optimal protocol. However, in our model we can exploit the particle interactions. Specifically, due to the hard-core interactions, there is a greater likelihood of trapping the particles in separate halves of the box upon inserting the partition when the box is smaller. This observation suggests the following protocol executed in response to measurement outcome $B$.
After the partition is inserted, we infinitely slowly compress the box until its width is $l_x>2d$ and its height is $l_y>d$. The extracted work during compression is $$\label{eq:Wcomp}
W_{\rm comp}=\ln\left[\frac{\bar{Z}_2(l_x/2,l_y)}{\bar{Z}_2(L_x/2,L_y)}\right].$$ Next, the partition is removed infinitely slowly, extracting work $-W_{\rm part}(l_x,l_y)$. Finally, the box is expanded back to its original size, extracting $$\label{eq:Wexp2}
W_{\rm exp}=\ln\left[\frac{Z_2(L_x,L_y)}{Z_2(l_x,l_y)}\right].$$ Combining the sum of $W_{\rm part}(L_x,L_y)$, $W_{\rm comp}$, $-W_{\rm part}(l_x,l_y)$, and $W_{\rm exp}$ with $I_B$, we find, after a simple algebraic manipulation, that the deviation from reversibility can be expressed as $$\label{eq:WminusI}
W_B-I_B=-\ln\left[1+2\frac{Z_2(l_x/2,l_y)}{\bar{Z}_2(l_x/2,l_y)}\right].$$ Note that $W_B-I_B$ only depends on the size of the compressed box with dimensions $l_x\times l_y$. To investigate the reversibility of our protocol, we study the dependence of $W_B-I_B$ on the compressed box size. To simplify our analysis, we only consider boxes such that $l_x=2l_y$. In , we plot $W_B-I_B$ as a function of the box size parameter $\xi=l_x/d=2l_y/d$.
The smaller $\xi$, the smaller the box. Notice that $W_B-I_B\le0$. We also observe that the process becomes reversible ($W_B-I_B=0$) when $\xi<4$ ($l_x<4d$ and $l_y<2d$); the box is so small when $\xi<4$ that both particles cannot fit into the same half of the box. Consequently, when the partition is inserted during the reverse process, each particle is confined to a separate half of the box, preparing the post-measurement state with probability one.
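As a numerical illustration (our own sketch, not part of the original analysis; the function names and the choice $d=1$ are ours), the hard-core partition functions can be evaluated in closed form by inclusion-exclusion on the non-overlap condition, and $W_B-I_B$ then follows directly:

```python
import math

def Zbar2(Lx, Ly, d=1.0):
    # Each particle alone in its own Lx-by-Ly box: each center ranges
    # freely over an (Lx-d)-by-(Ly-d) rectangle.
    return (Lx - d)**2 * (Ly - d)**2

def Z2(Lx, Ly, d=1.0):
    # Two hard squares in one box.  Non-overlap means |x1-x2| >= d OR
    # |y1-y2| >= d; the inclusion-exclusion integrand factorizes into
    # independent x- and y-integrals.
    Ax, Ay = (Lx - d)**2, (Ly - d)**2      # unconstrained integrals
    Jx = max(Lx - 2*d, 0.0)**2             # configurations with |x1-x2| >= d
    Jy = max(Ly - 2*d, 0.0)**2
    return 0.5*(Jx*Ay + Ax*Jy - Jx*Jy)     # 1/2: indistinguishable particles

def WB_minus_IB(xi, d=1.0):
    # Deviation from reversibility for the compressed box with
    # lx = xi*d and ly = xi*d/2 (each half has width lx/2).
    lx, ly = xi*d, xi*d/2
    return -math.log(1.0 + 2.0*Z2(lx/2, ly)/Zbar2(lx/2, ly))

print(abs(WB_minus_IB(3.5)))       # 0.0: reversible, particles cannot share a half
print(round(WB_minus_IB(6.0), 3))  # -0.363: some information is wasted
```

For $\xi<4$ each half of the compressed box is too small to hold both particles, so $Z_2$ vanishes and the protocol becomes feedback reversible, in agreement with the discussion above.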
To confirm that our protocol can be optimal, we plot in the total average work extracted $\langle W\rangle = P_AW_A+P_BW_B+P_CW_C$ – where $P_j$ is the probability to implement protocol $j=A,B,C$ – as a function of the box size parameter $\xi$.
Again, we see that when $\xi<4$ our protocol becomes optimal: $\langle W\rangle = \langle I\rangle$. For comparison, we have included in the work extracted when implementing the protocol proposed in Ref. [@Kim2011], $\langle W_{\rm K}\rangle$, where the partition is slowly removed in response to outcome $B$.
Further insight can be gained by noting that the ratio $Z_2/\bar{Z}_2$ in , which controls the degree of reversibility, has a simple physical interpretation in terms of the change in free energy during an irreversible mixing of two indistinguishable particles, each in a separate box of size $l_x/2\times l_y$, into one box of the same size, $l_x/2\times l_y$: $$\Delta F_{\rm mix}=-\ln\left[\frac{Z_2(l_x/2,l_y)}{\bar{Z}_2(l_x/2,l_y)}\right].$$ Thus, this protocol is reversible when there is an infinite free energy difference between the states in which both particles are in the same box and where each particle is in a separate box. For an ideal gas $\Delta F_{\rm mix}=\ln2$: two indistinguishable ideal gas particles confined to the same box have half as many distinct microscopic configurations as when they are in separate boxes. For ideal gases our protocol is not optimal ($\Delta F_{\rm mix}\neq\infty$ and $W_B-I_B\neq0$), as it exploits particle interactions. Nevertheless, there may exist other protocols that are optimal for ideal gases. In particular, such a collection could be devised using the generic procedure outlined in the Introduction, where the Hamiltonian is instantaneously switched immediately after the measurement so that the post-measurement state is described by an equilibrium Boltzmann distribution with respect to the new Hamiltonian [@Abreu2011; @Jacobs2009; @Erez2010; @Hasegawa2010; @Takara2010; @Esposito2011]; however, this new Hamiltonian would contain an interaction potential that forces the particles to segregate themselves into opposite halves of the box.
$N$-particle Szilard engine {#subsec:many}
---------------------------
As a final illustration, we present an optimal feedback protocol for a classical $N$-particle Szilard engine. Consider $N$ indistinguishable, classical, point particles with short-ranged, repulsive interactions confined to a box of volume $V$ in weak thermal contact with a thermal reservoir at temperature $kT=1$. The protocol begins by quickly and isothermally inserting an infinitely thin partition into the box dividing it into two equal halves of volume $V/2$. Since this is performed rapidly and the particles are infinitely small, the particles never have an opportunity to interact with the partition implying that this insertion requires no work. We then measure the number of particles in the left half of the box. Based on the outcome, we implement a cyclic, isothermal feedback protocol.
The change in uncertainty when $n$ particles are found in the left half of the box ($N-n$ particles in the right half) is, from , $$\label{eq:In}
I_n=-\ln\left[\frac{1}{2^N}\frac{N!}{n!(N-n)!}\right].$$ This information can be extracted completely as work by implementing the following protocol. First, we slowly lower $n$ ($N-n$) localized potential minima or trapping potentials to a depth $E$ in the left (right) half of the box. The trapping potentials are assumed to be deep compared to the thermal energy ($E\gg kT$), but shallow compared to the interaction energy, so that only one particle is confined in each trapping potential, as depicted in .
The partition is then quickly removed, and the trapping potentials are slowly turned off.
Work is only extracted when the trapping potentials are turned on or off. Since these processes are very slow, the work extracted can be computed in terms of partition functions. Assuming that the volume $V$ of the box is large compared with the interaction length, we can approximate the configurational partition function for the equilibrium state prior to inserting the partition as $$Z(V)=\frac{V^N}{N!}.$$ After making the measurement and finding $n$ particles in the left half of the box, the configurational partition function is $$Z_n(V)=\frac{1}{n!(N-n)!}\left(\frac{V}{2}\right)^N.$$ After lowering the trapping potentials to a depth $E$, each particle is confined to a unique trapping potential of volume $\emph{v}$. At this point, the configurational partition function is $$\bar{Z}_n(\emph{v})=\emph{v}^Ne^{-NE}.$$ In terms of these partition functions, the work extracted while trapping the particles is $$\label{eq:trap}
W_{\rm trap}=\ln\left[\frac{\bar{Z}_n(\emph{v})}{Z_n(V)}\right]=\ln\left[2^N\left(\frac{\emph{v}}{V}\right)^Nn!(N-n)!e^{-NE}\right],$$ and the work extracted when the trapping potentials are turned off is $$\label{eq:off}
W_{\rm off}=\ln\left[\frac{Z(V)}{\bar{Z}_n(\emph{v})}\right]=\ln\left[\frac{1}{N!}\left(\frac{V}{\emph{v}}\right)^Ne^{NE}\right].$$ Summing and , we find the total work to be $$W_n=W_{\rm trap}+W_{\rm off}=\ln\left[2^N\frac{n!(N-n)!}{N!}\right],$$ which is independent of $E$ and is equal to the change in uncertainty $I_n$ in . This protocol is optimal and feedback reversible; run in reverse, the protocol confines exactly $n$ particles in the left half with certainty.
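The cancellation of the trap depth $E$ and the equality $W_n=I_n$ are easy to check numerically. A minimal sketch (the function names and sample parameters are ours):

```python
import math

def I_n(N, n):
    # Change in uncertainty when n of the N particles are found on the left.
    return -math.log(math.comb(N, n) / 2**N)

def W_trap(N, n, v, V, E):
    # ln[ Zbar_n(v) / Z_n(V) ] = ln[ 2^N (v/V)^N n! (N-n)! ] - N E
    return math.log(2**N * (v/V)**N
                    * math.factorial(n) * math.factorial(N - n)) - N*E

def W_off(N, n, v, V, E):
    # ln[ Z(V) / Zbar_n(v) ] = ln[ (V/v)^N / N! ] + N E
    return math.log((V/v)**N / math.factorial(N)) + N*E

N, n, v, V = 6, 2, 0.01, 10.0
for E in (5.0, 20.0):                  # any trap depth gives the same total
    W = W_trap(N, n, v, V, E) + W_off(N, n, v, V, E)
    assert abs(W - I_n(N, n)) < 1e-9   # W_n = I_n, independent of E
```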
At first it may be surprising that work can be extracted from this protocol, since we are merely adding and then removing potential minima. However, net work can be extracted, since the work extracted while slowly turning on or off a trapping potential depends on the total volume accessible to the particles. To see this, consider the simplest scenario of turning off one trapping potential with one particle confined to a box of volume $V$. As the depth of the potential minimum becomes shallower, work is done on the particle until it escapes from the range of the trapping potential. Once the particle leaves, turning off the potential requires no additional work until the particle returns. The time for the particle to return depends on the size of the box. For a box of larger volume, the time to return is longer, and the process requires less work. Going back to the $N$-particle protocol, the work extracted while turning on the trapping potentials after the partition has been inserted – when the available volume for each particle is $V/2$ – is more than the work done during the final step as the trapping potentials are removed, because the volume $V$ available for the particles to explore is larger.
When the number of trapping potentials is not equal to the number of particles $N$, this protocol is no longer optimal. The reason is that work can only be extracted when a particle can fall into a potential being lowered; the more trapping potentials a particle has access to, the more work can be extracted. If there were fewer trapping potentials than particles, less work would be extracted overall, as there would be fewer sites where energy is removed. If more than $N$ trapping potentials are lowered, we extract additional work while turning them on. However, after the partition is removed, each particle can explore an even greater number of trapping potentials, and the work to turn off the potentials would exceed that extracted by turning them on.
Conclusion {#sec:conclusion}
==========
Feedback-reversible processes are optimal, converting all the information acquired through feedback into work. In this article, we formulated a strategy, called preparation, for designing a collection of optimal protocols given a measured physical observable. In the preparation method, optimal protocols are selected by searching for an external parameter protocol whose time-reversal prepares the post-measurement state. To highlight the utility of the preparation method, we applied it to two pedagogical examples – a two- and $N$-particle Szilard engine – exhibiting a distinct collection of optimal protocols for each. In both examples, we addressed the simplest scenario of error-free measurements. When there are measurement errors – for example, if in the $N$-particle Szilard engine (), there were a chance to miscount the number of particles in the left half of the box – the preparation method still provides a useful procedure for selecting an optimal protocol. Furthermore, each of our optimal protocols contained at least one infinitely slow step. This is unavoidable as the process must be reversible before and after any measurements. Consequently, our method does not strictly apply to finite-time processes. However, the preparation method may still provide insight into the design of optimal finite-time processes, since an optimal finite-time protocol, roughly speaking, is as close to reversible as possible [@Abreu2011; @Schmiedl2007].
Generally, we expect the preparation method to be of use whenever the external parameter protocol forces a symmetry breaking in the system prior to the measurement, such as the insertion of the partition in the Szilard engine. Consider a thermodynamic process ${\cal P}$ during which a system is driven from an initial equilibrium state $A$ through a critical point, where the system chooses among several phases or macroscopic states $B_i$ with probability $p_i$. In addition, suppose there exists a collection of processes ${\cal P}'_i$ during which the symmetry is broken forcibly (not spontaneously), driving the system from $A$ to $B_i$ with probability one. Then, according to our recipe this spontaneous symmetry breaking transition can be exploited using the following optimal feedback protocol: start in state $A$, execute process ${\cal P}$, measure which state $B_i$ resulted from the symmetry breaking, and then run the corresponding process ${\cal P}_i^\prime$ in reverse to drive the system back to its initial state $A$. By construction, this process prepares the post-measurement state with unit probability, and therefore extracts as work $\langle W\rangle= - kT\sum_i p_i\log p_i$, which is $kT$ times the information gained in the measurement, $\langle I\rangle= - \sum_i p_i\log p_i$. One interesting instance of this setup is the Ising model, where a measurement of the system’s total magnetization after the symmetry breaking phase transition between the paramagnetic and ferromagnetic states can be exploited to extract work. This information can be utilized by modifying an external magnetic field, as demonstrated in [@Parrondo2001].
In the introduction, we outlined a general procedure for preparing a collection of optimal protocols, originally presented in [@Abreu2011; @Jacobs2009; @Erez2010; @Hasegawa2010; @Takara2010; @Esposito2011], in which the Hamiltonian is instantaneously changed immediately following the measurement in order to make the post-measurement state an equilibrium Boltzmann distribution, followed by a reversible switching of the external parameters to their final values. These protocols prepare the post-measurement states; as such, this generic procedure is a special case of the preparation method developed here. However, the implementation of the preparation method can lead to a wider variety of protocols. Take for example the two-particle Szilard engine discussed in . Imagine we make a measurement and find outcome $B$, where each particle is confined to a separate half of the box. Let $\rho_B(z)$ denote the phase space density conditioned on this measurement outcome. In the generic procedure, immediately after the measurement we would change the Hamiltonian to $H_B(z)=-\ln\rho_B(z)$, which is a strange Hamiltonian that assigns infinite energy to configurations where both particles are in the same half of the box. In contrast, the preparation method led to a physically realizable protocol, in which we vary the size of the box.
Finally, we formulated the preparation method only for classical systems. However, the second law of thermodynamics for discrete feedback was originally derived for quantum evolutions [@Sagawa2008]. Its mathematical structure resembles the classical version, which suggests that feedback-reversible processes are also optimal quantum feedback protocols and that the preparation method would also apply to quantum feedback engines. Applications of the preparation method to quantum systems hold interesting possibilities. For example, in both of the classical multi-particle Szilard engines analyzed here, the optimal protocols required repulsive particle interactions. In a quantum multi-particle Szilard engine composed of fermions, the Pauli exclusion principle induces a repulsive interaction of purely quantum origin, which could be exploited to develop a collection of optimal feedback protocols.
We acknowledge Hal Tasaki for suggesting the $N$-particle Szilard engine protocol. Financial support for this project came from Grant MOSAICO (Spanish Government) and MODELICO (Comunidad de Madrid).
Partition functions for two square hard-core particles in a two-dimensional box {#sec:appendix}
===============================================================================
In this appendix, we report the configurational partition functions employed in Sect. \[subsec:two\] for a gas composed of two square particles of width $d$ with hard-core interactions confined to a two-dimensional box of width $L_x$ and height $L_y$. The partition function for hard-core particles is the number of distinct microscopic configurations subject to the constraint that the particles do not overlap, i.e. that their centers be separated by at least $d$ along the $x$- or $y$-direction. In addition, the particles have a hard-core interaction with the walls enclosing the box, with the result that the center of each particle must be at least a distance $d/2$ from the edges of the box.
Two partition functions are utilized in our analysis in . The first is the partition function for the equilibrium state when each particle is confined to a separate box of dimensions $L_x \times L_y$: $$\begin{aligned}
\bar{Z}_2(L_x,L_y)&=\int_{d/2}^{L_x-d/2}dx_1\, \int_{d/2}^{L_y-d/2}dy_1\, \int_{d/2}^{L_x-d/2}dx_2\, \int_{d/2}^{L_y-d/2}dy_2 \\
&=(L_x-d)^2(L_y-d)^2.\end{aligned}$$ The second is for the equilibrium state when both particles are confined to the same box of dimensions $L_x \times L_y$. This partition function can be expressed as the integral $$\begin{aligned}
\nonumber
\fl Z_2(L_x,L_y)=&\frac{1}{2}\int_{d/2}^{L_x-d/2}dx_1\, \int_{d/2}^{L_y-d/2}dy_1\, \int_{d/2}^{L_x-d/2}dx_2\, \int_{d/2}^{L_y-d/2}dy_2\, \\ \nonumber
&\times[\Theta(|x_1-x_2|-d)+\Theta(|y_1-y_2|-d)-\Theta(|x_1-x_2|-d)\Theta(|y_1-y_2|-d)],\end{aligned}$$ where $\Theta(x)$ is the Heaviside step function and the preceding factor of $1/2$ is included because the particles are indistinguishable. The calculation of the above integral can be performed using standard methods of integral calculus, with the result, assuming $L_x>2d$, $$\fl
Z_2(L_x,L_y)=
\left\{
\begin{array}{ll}
\frac{1}{2}\Big[(L_x-2d)^2(L_y-2d)^2+2d(L_x-2d)(L_y-2d)(L_x+L_y-4d) \\
\, \, \, +d^2\left[(L_x-2d)^2+(L_y-2d)^2\right]\Big], & L_y \ge 2d \\
\frac{1}{2}(L_y-d)^2(L_x-2d)^2, & d\le L_y< 2d
\end{array}
\right. .$$
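As an independent cross-check (our own, not part of the paper; names and parameters are ours), the defining four-dimensional integral can be estimated by Monte Carlo sampling and compared with a closed-form evaluation obtained by inclusion-exclusion on the non-overlap condition:

```python
import random

def Z2_closed(Lx, Ly, d=1.0):
    # Non-overlap is |x1-x2| >= d OR |y1-y2| >= d; inclusion-exclusion
    # splits the integral into factorized x- and y-pieces.
    Ax, Ay = (Lx - d)**2, (Ly - d)**2
    Jx, Jy = max(Lx - 2*d, 0.0)**2, max(Ly - 2*d, 0.0)**2
    return 0.5*(Jx*Ay + Ax*Jy - Jx*Jy)

def Z2_mc(Lx, Ly, d=1.0, samples=200_000, seed=0):
    # Direct Monte Carlo estimate of the integral defining Z_2.
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x1, x2 = rng.uniform(d/2, Lx - d/2), rng.uniform(d/2, Lx - d/2)
        y1, y2 = rng.uniform(d/2, Ly - d/2), rng.uniform(d/2, Ly - d/2)
        if abs(x1 - x2) >= d or abs(y1 - y2) >= d:
            hits += 1
    volume = (Lx - d)**2 * (Ly - d)**2
    return 0.5 * volume * hits / samples

print(Z2_closed(5.0, 3.0))   # 21.5
print(Z2_mc(5.0, 3.0))       # agrees to within statistical error (~0.03)
```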
---
abstract: 'We clarify the structure of the four-dimensional low-energy effective action that encodes the conformal and $U(1)$ R-symmetry anomalies in an ${\mathcal{N}}=1$ supersymmetric field theory. The action depends on the dilaton, $\tau$, associated with broken conformal symmetry, and the Goldstone mode, $\beta$, of the broken $U(1)$ R-symmetry. We present the action for general curved spacetime and background gauge field up to and including all possible four-derivative terms. The result, constructed from basic principles, extends and clarifies the structure found by Schwimmer and Theisen in [@Schwimmer:2010za] using superfield methods. We show that the Goldstone mode $\beta$ does not interfere with the proof of the four-dimensional $a$-theorem based on $2 \to 2$ dilaton scattering. In fact, supersymmetry Ward identities ensure that a proof of the $a$-theorem can also be based on $2 \to 2$ Goldstone mode scattering when the low-energy theory preserves ${\mathcal{N}}=1$ supersymmetry. We find that even without supersymmetry, a Goldstone mode for any broken global $U(1)$ symmetry cannot interfere with the proof of the four-dimensional $a$-theorem.'
---
MCTP-13-43\
[**Dilaton Effective Action with $\mathcal{N}=1$ Supersymmetry**]{}\
[**Nikolay Bobev$^{1}$, Henriette Elvang$^{2}$, and Timothy M. Olson$^{2}$**]{}
$^{1}$Perimeter Institute for Theoretical Physics\
31 Caroline Street North, ON N2L 2Y5, Canada
$^{2}$Randall Laboratory of Physics, Department of Physics,\
University of Michigan, Ann Arbor, MI 48109, USA\
nbobev@perimeterinstitute.ca, elvang@umich.edu, timolson@umich.edu\
[ ]{}
Introduction and summary
========================
The dilaton-based proof [@Komargodski:2011vj; @Komargodski:2011xv] of the four-dimensional $a$-theorem has provided new insights into the behavior of quantum field theories under renormalization group (RG) flows, for example in studies of conformal versus scale invariance [@Luty:2012ww; @Dymarsky:2013pqa; @Farnsworth:2013osa]. The arguments in [@Komargodski:2011vj; @Komargodski:2011xv; @Luty:2012ww] exploit the fact that the structure of the effective action for the dilaton — introduced as a conformal compensator or as the Goldstone boson for spontaneously broken conformal symmetry — is determined by symmetries up to and including four-derivative terms. This is used to extract the change in the Euler central charge $\Delta a = a_\text{UV} - a_\text{IR}$ in an RG flow between UV and IR CFTs. The form of the dilaton action shows that the low-energy expansion of the scattering process of four dilatons is proportional to $\Delta a$, and a sum rule then allowed the authors of [@Komargodski:2011vj] to argue that $\Delta a >0$, thus proving the $a$-theorem.
It is worth exploring whether this argument can be affected by the presence of other massless modes in the low-energy theory, such as Goldstone bosons arising from the spontaneous breaking of other continuous global symmetries. This situation arises in ${\mathcal{N}}=1$ supersymmetric theories, because the stress tensor is in the same supermultiplet as the R-current, so the Goldstone boson $\beta$ for the broken $U(1)$ R-symmetry accompanies the dilaton $\tau$. In the low-energy effective action, there are couplings between $\tau$ and $\beta$, even in the flat-space limit, so one may wonder if this affects the proof of the $a$-theorem.
Since the Goldstone boson $\beta$ is a pseudo-scalar (an axion), we are quickly relieved of our worries: its presence cannot change the scattering of four scalars (the dilatons) through single-axion exchanges, which would be the only option in the low-energy effective action. But precisely how this works is less trivial, since the “naive" dilaton field $\tau$ is non-linearly coupled to the axion $\beta$, and to identify the physical modes one must disentangle the fields via a field redefinition. The result of course still holds true: the axion does not spoil the proof of the four-dimensional $a$-theorem presented in [@Komargodski:2011vj].
In this note, we consider in detail the form of the bosonic terms in the ${\mathcal{N}}=1$ supersymmetric extension of the four-dimensional dilaton effective action in order to fully illuminate the above questions and to clarify results in the previous work [@Schwimmer:2010za]. Our focus is four-dimensional $\mathcal{N}=1$ superconformal theories in which the conformal symmetry is broken by a relevant operator that preserves the ${\mathcal{N}}=1$ supersymmetry. We assume that the induced flow terminates in another ${\mathcal{N}}=1$ superconformal theory in the deep IR. The fields $\tau$ and $\beta$ form a complex scalar field which is the lowest component of a chiral Goldstone superfield $\Phi = (\tau + i\beta) + \dots$. We are interested in writing down the most general low-energy effective action for $\tau$ and $\beta$ in a general rigid four-dimensional curved space with background metric $g_{{\mu}{\nu}}$ and background $U(1)$ R-symmetry gauge potential $A_{\mu}$. Such an action has been studied previously by Schwimmer and Theisen using a superspace approach [@Schwimmer:2010za]. One of our goals is to derive the action in component form from basic symmetry principles and use this to clarify the structure of the result presented in [@Schwimmer:2010za].
The fundamental ideas we use to determine the effective action $S[\tau,\beta]$ are diffeomorphism invariance and the following three properties:
1. Weyl variation $(\delta_{\sigma}g_{{{\mu\nu}}}= 2{\sigma}g_{{\mu\nu}}$ and $\delta_{\sigma}{\tau}= {\sigma})$ produces the trace anomaly, i.e. $$\begin{aligned}
\delta_\sigma S = \int d^4x\, \sqrt{-g}\, \sigma\, \langle T_\mu{}^\mu \rangle \,.\end{aligned}$$ The expectation value of the trace of the stress tensor, $\langle T_\mu{}^\mu \rangle$, is a functional of the background fields, namely the metric $g_{{\mu\nu}}$ and the $U(1)_R$ gauge field $A_\mu$. It does not depend on $\tau$ or $\beta$. The full trace anomaly for an $\mathcal{N}=1$ SCFT with central charges $a$ and $c$ is[^1] $$\begin{aligned}
\langle T_\mu{}^\mu \rangle &= c W^2 - a E_4 + b' \square R - 6\,c\, (F_{\mu\nu})^2 \;.\end{aligned}$$ The coefficient of ${\square}R$ is non-physical as it can be removed by adding a local counterterm in the UV theory. Thus it is not an anomaly and we drop it henceforth.
2. Gauge transformations $(\delta_\alpha A_{\mu}= {\nabla}_{\mu}\alpha$ and $\delta_\alpha {\beta}= \alpha)$ generate the gauge anomaly: $$\label{gaugevary}
\delta_\alpha S
= \int d^4x\, \sqrt{-g}\, \alpha\,
\Big(2\,(5a-3c)\,F_{\mu\nu}\,\widetilde{F}^{\mu\nu} + (c-a)\,R_{\mu\nu\rho\sigma}\,\widetilde{R}^{\mu\nu\rho\sigma} \Big)
\;,$$ where the tilde denotes Hodge dualization with respect to the curved metric $g_{\mu\nu}$, $$\widetilde{R}_{{\mu\nu\rho\sigma}}\equiv \frac{1}{2}\epsilon_{{{\mu\nu}}\lambda\delta}R^{\lambda\delta}{}_{{\rho}{\sigma}}\,, \qquad \widetilde{F}_{{\mu\nu}}\equiv \frac{1}{2}\epsilon_{{{\mu\nu}}{\rho}{\sigma}}F^{{\rho}{\sigma}}\;.$$ The right-hand side of [(\[gaugevary\])]{} gives the gauge anomaly[^2] for the case of an $\mathcal{N}=1$ superconformal theory; it was derived in [@Anselmi:1997am] with slightly different normalization of $a$ and $c$ (see also [@Schwimmer:2010za; @Intriligator:2003jj; @Cassani:2013dba]).
3. The low-energy effective action must be invariant under ${\mathcal{N}}=1$ supersymmetry. Throughout this note we mostly ignore the fermionic degrees of freedom and focus entirely on the bosonic part of the action.
The first and second properties allow us to split the action into two parts $S = S_{\text{WZ}}+ S_{\text{inv}}$ where Weyl and gauge variations of $S_{\text{WZ}}$ produce the trace and gauge anomalies, respectively, while $S_{\text{inv}}$ is gauge and Weyl invariant. The general form of $S_{\text{inv}}$ is a linear combination of all possible gauge and Weyl invariant operators and the principles 1 and 2 above do not allow us to constrain the constant coefficients in this linear combination. However, the third property (supersymmetry) does fix certain relationships between the two parts of the action: some of the coefficients in $S_{\text{inv}}$ are determined in terms of the central charges $a$ and $c$. This still leaves the possible freedom of having gauge and Weyl invariant operators that are independently supersymmetric. We will show that no such operators contribute to the flat-space scattering process of four-particle dilaton and Goldstone modes at the four-derivative order. This means that such independently supersymmetric terms in the dilaton effective action (if they exist) cannot affect the proof of the $a$-theorem.
It is not easy to check whether a given four-derivative operator is supersymmetrizable. Thankfully, the power of the supersymmetry Ward identities allows us to test this question indirectly and to the extent we need it. As we show in Section \[sec:wards\], the supersymmetry Ward identities require that the scattering process of four dilatons is identical to the scattering process of the four associated R-symmetry Goldstone modes. This means that if an operator contributes only to one of these processes, it cannot possibly be supersymmetrizable on its own. We use this to exclude contributions from Weyl and gauge invariant operators that could otherwise affect the proof of the $a$-theorem in four-dimensional ${\mathcal{N}}=1$ supersymmetric theories.[^3]
Our work suggests several natural avenues for further exploration. First it will be interesting to analyze the effective actions for conformal field theories (not necessarily supersymmetric) with larger continuous global symmetry groups. For superconformal theories with $\mathcal{N}=1$ supersymmetry and more than one Abelian global symmetry one may hope that such an effective action will offer a new perspective on the principle of $a$-maximization [@Intriligator:2003jj]. It will also be of great interest to construct the dilaton effective action for four-dimensional SCFTs with extended supersymmetry, in particular for $\mathcal{N}=4$ SYM. In this context, one may be able to establish a more precise connection between the dilaton effective action and the Dirac-Born-Infeld action for SCFTs with holographic duals. Finally, one can also study the supersymmetric dilaton effective action for SCFTs in two and six dimensions.[^4] The methods of this paper should extend readily to two-dimensional SCFTs with $(0,2)$ or $(2,2)$ supersymmetry since these theories have Abelian R-symmetry. The extension to six-dimensional $(1,0)$ or $(2,0)$ SCFTs may prove more subtle, although in the latter case holography should provide useful insights.
Before delving into the construction of the dilaton effective action, we start by deriving supersymmetry Ward identities for on-shell scattering amplitudes in Section \[sec:wards\]. In Section \[sec:dea\] we derive the most general form of the dilaton effective action for $\mathcal{N}=1$ SCFTs up to four-derivative terms. We compare this action to the results of Schwimmer-Theisen in Section \[sec:matchingST\] to clarify the structure of their superspace-based result. In Section \[sec:amplitudes\], we show that the Ward identities from Section \[sec:wards\] confirm the supersymmetry of our result for the action in the flat-space limit. The resulting dilaton-axion effective action gives an explicit verification that the dilaton-based proof is not affected by $\beta$. Furthermore, we show that supersymmetry is actually not needed to reach this conclusion: the Goldstone mode of any broken global $U(1)$ symmetry cannot spoil the proof of the $a$-theorem. Finally, we note that supersymmetry requires that the $2\to 2$ axion scattering amplitude must equal the $2 \to 2$ dilaton amplitude, and this allows for a proof of the $a$-theorem based on the axion scattering for $\mathcal{N}=1$ SCFTs. In Appendix \[app:anomaly\], we present a way to derive the conformal anomaly for four-dimensional CFTs from basic principles.
Scattering constraints from supersymmetry {#sec:wards}
=========================================
Scattering amplitudes in supersymmetric theories obey supersymmetry Ward identities [@Grisaru:1976vm; @Grisaru:1977px]. We consider here an ${\mathcal{N}}=1$ chiral model with a complex scalar ${\zeta}$ and its fermionic superpartner $\lambda$. In Section \[sec:amplitudes\], the chiral scalar will be related to the dilaton and $U(1)$ Goldstone modes. As a result of the supersymmetry transformations of the free fields, it can be shown [@Elvang:2013cua] that the supersymmetry generators $Q$ and $Q^\dagger$ act on the states as[^5] $$\begin{aligned}
\begin{array}{rlcrl}
{[Q,{\zeta}]} ~= & [p|\,\lambda \;, & \qquad & [Q^\dagger,\,{\lambda}] &=~ |p\rangle\,{\zeta}\;,\\
{[Q,\,{\lambda}]} ~=& 0\;, && [Q^\dagger,{\zeta}] &= ~0\;,\\
{[Q,\overline{{\zeta}}]} ~=&0\;, & & [Q^\dagger,\,\overline{{\lambda}}] &=~0\;, \\
{[Q,\,\overline{{\lambda}}]} ~=& [p|\,\overline{{\zeta}}\;, && [Q^\dagger,\overline{{\zeta}}] &=~ |p\rangle\,\overline{\lambda}
\,,
\end{array}\end{aligned}$$ where the (anti)commutators are graded Lie brackets. The two-component spinors $|p\rangle$ and $[p|$ represent components of the particle momentum in the spinor-helicity formalism.[^6] More precisely, the on-shell four-momentum $p_{\mu}$ for a massless particle can be written in terms of a pair of two-component spinors $|p\rangle^{\dot{a}}$ and $[p|^b$ as $$p_\mu\,(\overline{{\sigma}}^{\mu})^{\dot{a}b} = - |p\rangle^{\dot{a}}[p|^b \;,
~~~~~\text{and}~~~~~
p_{\mu}\,({\sigma}^{\mu})_{a\dot{b}}= - |p]_a\langle p|_{\dot{b}} \,.$$ For two light-like four-vectors, $p^\mu$ and $q^\mu$, angle- and square-brackets are defined as $$\begin{aligned}
[pq] = [p|^a |q]_a \;,
~~~~~\text{and}~~~~~
\langle pq \rangle = \langle p| _{\dot{a}} |q\rangle^{\dot{a}}\,.\end{aligned}$$ These brackets are antisymmetric, $[ pq ] = - [ qp ]$ and $\langle pq \rangle = - \langle qp \rangle$, because spinor indices are raised and lowered with the two-dimensional Levi-Civita symbol.
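A minimal numerical realization of these definitions (our own sketch, not from the text; overall sign and phase conventions differ between references, so we only check convention-independent statements) assigns explicit spinors to real null momenta and verifies the rank-1 bispinor decomposition, bracket antisymmetry, and $|\langle pq\rangle|^2=|[pq]|^2=2\,p\cdot q$:

```python
import cmath, math

def momentum(E, theta, phi):
    # Real null four-momentum p = E (1, sin t cos f, sin t sin f, cos t).
    return (E, E*math.sin(theta)*math.cos(phi),
            E*math.sin(theta)*math.sin(phi), E*math.cos(theta))

def spinor(E, theta, phi):
    # Two-component spinor for the momentum above, up to an overall phase:
    # sqrt(2E) * (cos(t/2), sin(t/2) e^{i f}).
    return (math.sqrt(2*E)*math.cos(theta/2),
            math.sqrt(2*E)*math.sin(theta/2)*cmath.exp(1j*phi))

def bracket(lp, lq):
    # Antisymmetric contraction with the Levi-Civita symbol [[0,1],[-1,0]].
    return lp[0]*lq[1] - lp[1]*lq[0]

def dot(p, q):
    # Minkowski product, mostly-minus signature.
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

pa, pb = (3.0, 1.0, 0.7), (5.0, 2.0, -1.3)   # sample (E, theta, phi) labels
lp, lq = spinor(*pa), spinor(*pb)

# Rank-1 decomposition: the 2x2 bispinor of p equals the outer product
# lambda_i * conj(lambda_j), so det = p^2 = 0 automatically.
P = momentum(*pa)
M = ((P[0]+P[3], P[1]-1j*P[2]), (P[1]+1j*P[2], P[0]-P[3]))
for i in range(2):
    for j in range(2):
        assert abs(M[i][j] - lp[i]*lp[j].conjugate()) < 1e-9

assert abs(bracket(lp, lq) + bracket(lq, lp)) < 1e-12     # [pq] = -[qp]
assert abs(abs(bracket(lp, lq))**2
           - 2*dot(momentum(*pa), momentum(*pb))) < 1e-9  # |[pq]|^2 = 2 p.q
```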
Now assuming the vacuum is supersymmetric, i.e.$\,Q\boldsymbol{|0\rangle} = Q^\dagger \boldsymbol{|0\rangle}=0$, we can derive supersymmetry Ward identities for the amplitudes. For example (treating $\lambda$ and ${\zeta}$ as creation operators),[^7] $$\begin{aligned}
0=\boldsymbol{\langle0|}\,
\big[Q^\dagger,{\lambda}\,{\zeta}\,{\zeta}\,{\zeta}\big]\,\boldsymbol{|0\rangle}
=
\boldsymbol{\langle0|}\,
\big[Q^\dagger,{\lambda}\big]\, {\zeta}\,{\zeta}\,{\zeta}\,\boldsymbol{|0\rangle}
=
|p_1 \rangle \,
\boldsymbol{\langle0|}\,
{\zeta}\, {\zeta}\,{\zeta}\,{\zeta}\,\boldsymbol{|0\rangle} \;,\end{aligned}$$ where we have used that $Q^\dagger$ annihilates ${\zeta}$. This is simply the statement that at any loop-order, the on-shell four-scalar amplitude ${\mathcal{A}}_4({\zeta}\,{\zeta}\,{\zeta}\,{\zeta})$ must vanish (where now we mean the particles created by the field ${\zeta}$). Similarly, ${\mathcal{A}}_4(\overline{{\zeta}}\,\overline{{\zeta}}\,\overline{{\zeta}}\,\overline{{\zeta}})=0$.
The four-scalar amplitudes with three ${\zeta}$ and one $\overline{{\zeta}}$ also vanish. To see this, we write $$0 =
\boldsymbol{\langle 0|}\,\big[Q^\dagger,\overline{{\zeta}}\,{\lambda}\,{\zeta}\,{\zeta}\big]\,\boldsymbol{|0\rangle} =
|p_1\rangle\,\boldsymbol{\langle 0|}\, \overline{\lambda}\,{\lambda}\,{\zeta}\,{\zeta}\,\boldsymbol{|0\rangle} +|p_2\rangle\,\boldsymbol{\langle 0|}\,\overline{{\zeta}}\,{\zeta}\,{\zeta}\,{\zeta}\,\boldsymbol{|0\rangle}\,.
\label{SWI2}$$ Now dot in $\langle p_1|$ and use the antisymmetry of the angle bracket to eliminate the first term on the right hand side in [(\[SWI2\])]{}. For generic momenta, this leads to the statement that ${\mathcal{A}}_4(\,\overline{{\zeta}}\,{\zeta}\,{\zeta}\,{\zeta}\, )= 0$.
A similar story applies to scalar amplitudes with three $\overline{{\zeta}}$’s. Altogether, supersymmetry requires the following amplitudes to vanish: $$\begin{split}
&{\mathcal{A}}_4({\zeta}\,{\zeta}\,{\zeta}\,{\zeta})
~=~
{\mathcal{A}}_4(\overline{{\zeta}}\,\overline{{\zeta}}\,\overline{{\zeta}}\,\overline{{\zeta}}) ~=~ 0\,,
\\[1mm]
&{\mathcal{A}}_4(\overline{{\zeta}}\,{\zeta}\,{\zeta}\,{\zeta})
~=~{\mathcal{A}}_4({\zeta}\,\overline{{\zeta}}\,{\zeta}\,{\zeta})
~=~ \ldots
~=~ {\mathcal{A}}_4(\overline{{\zeta}}\,\overline{{\zeta}}\,\overline{{\zeta}}\,{\zeta})
= 0\,.
\end{split}
\label{SWI3}$$ The second line includes all four-point amplitudes with an odd number of ${\zeta}$’s. Amplitudes with two ${\zeta}$’s and two $\overline{{\zeta}}$’s, such as ${\mathcal{A}}_4(\overline{{\zeta}}\,\overline{{\zeta}}\,{\zeta}\,{\zeta})$, are permitted to be non-vanishing by supersymmetry. The reader may be puzzled: surely a supersymmetric Lagrangian can have interaction terms of the form ${\zeta}^4 + \overline{{\zeta}}^4$, so how can that be compatible with our claim above that for massless scalars ${\mathcal{A}}_4({\zeta}\,{\zeta}\,{\zeta}\,{\zeta}) = 0$? To see this in an example, consider an ${\mathcal{N}}=1$ theory with a canonical kinetic term $\Phi^\dagger \Phi$ and a superpotential $W = f \Phi + \tfrac{1}{5}\Phi^5$. The scalar potential $V = |dW/d{\zeta}|^2 = |f|^2 + \bar{f}\,{\zeta}^4 + f \,\overline{{\zeta}}^4 + {\zeta}^4 \overline{{\zeta}}^4$ has exactly the four-scalar interaction terms that our supersymmetry Ward identity argument appears to be incompatible with. However, the origin ${\zeta}=\overline{{\zeta}}=0$ is obviously not a supersymmetric vacuum, so the Ward identity — which used $Q^\dagger \boldsymbol{|0\rangle} =0$ — is not valid. If we expand around another vacuum, we generate mass-terms and we are only interested in the case of massless particles. This resolves the puzzle.
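The resolution can also be checked symbolically. The sketch below is ours (for simplicity the coupling $f$ is taken real): it expands $V=|dW/d\zeta|^2$ for the example superpotential and confirms both that the quartic scalar couplings are present and that $V$ does not vanish at the origin, so the origin is not a supersymmetric vacuum:

```python
import sympy as sp

x, y, f = sp.symbols('x y f', real=True)
zeta = x + sp.I * y
zbar = sp.conjugate(zeta)

# W = f*Phi + Phi^5/5 evaluated on the scalar: dW/dzeta = f + zeta^4
dW = f + zeta**4
V = sp.expand(dW * sp.conjugate(dW))

# The quartic couplings zeta^4 and zbar^4 are present, yet V(0) = f^2 != 0,
# so the SUSY Ward identity argument does not apply around this point.
assert sp.simplify(V - (f**2 + f*zeta**4 + f*zbar**4 + (zeta*zbar)**4)) == 0
assert V.subs({x: 0, y: 0}) == f**2
```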
Now suppose we decompose the complex scalar field ${\zeta}$ into its real and imaginary parts, ${\zeta}= {\varphi}+ i {\xi}$ and denote the corresponding scalar, ${\varphi}$, and pseudo-scalar, ${\xi}$, states by the same symbols. Expanding the supersymmetry constraints [(\[SWI3\])]{} then leads to the following non-trivial constraints on the amplitudes:[^8] $$\begin{aligned}
{\mathcal{A}}_4({\varphi}\,{\varphi}\,{\varphi}\,{\varphi}) ~&=~ {\mathcal{A}}_4({\xi}\,{\xi}\,{\xi}\,{\xi})\,, \label{wards1}
\\
{\mathcal{A}}_4({\varphi}\,{\varphi}\,{\varphi}\,{\varphi}) ~=~ {\mathcal{A}}_4({\varphi}\,{\varphi}\,{\xi}\,{\xi}) \,&+\, {\mathcal{A}}_4({\varphi}\,{\xi}\,{\varphi}\,{\xi}) \,+\, {\mathcal{A}}_4({\varphi}\,{\xi}\,{\xi}\,{\varphi})\,. \label{wards2}\end{aligned}$$
These linear relations between amplitudes will be very valuable in the analysis of the ${{\mathcal{N}}=1}$ low-energy effective action for the dilaton. In this context, ${\varphi}$ will be associated with the physical dilaton and ${\xi}$ with the R-symmetry Goldstone mode. Thus, without knowing any details of the form of the ${\mathcal{N}}=1$ supersymmetric dilaton effective action, we have already learned from the first identity that the four-dilaton amplitude must be equal to the four-axion amplitude. The second identity is important for testing that the explicit action we derive in Section \[sec:amplitudes\] is supersymmetric.
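The passage from the complex-basis constraints [(\[SWI3\])]{} to the real-basis identities [(\[wards1\])]{} and [(\[wards2\])]{} can be cross-checked symbolically. The sketch below is ours: it expands the four-point amplitudes by multilinearity in $\zeta = \varphi + i\xi$, identifies configurations related by the momentum relabelings $(12)(34)$, $(13)(24)$, $(14)(23)$ that leave $s,t,u$ fixed (Bose symmetry), and verifies that both identities lie in the linear span of the vanishing constraints:

```python
import itertools
import sympy as sp

# Relabelings of the four momenta that preserve the Mandelstam invariants
KLEIN = [(0, 1, 2, 3), (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)]

def canon(cfg):
    """Canonical representative of a field configuration under KLEIN."""
    return min(tuple(cfg[i] for i in perm) for perm in KLEIN)

# One amplitude symbol per orbit of {phi, xi}^4 ('p' = phi, 'x' = xi)
orbits = sorted({canon(c) for c in itertools.product('px', repeat=4)})
A = {o: sp.Symbol('A_' + ''.join(o)) for o in orbits}

def amp(signs):
    """A4 with slot j carrying phi + (sign_j) i xi, expanded by multilinearity."""
    total = sp.Integer(0)
    for cfg in itertools.product('px', repeat=4):
        coeff = sp.Integer(1)
        for c, sgn in zip(cfg, signs):
            if c == 'x':
                coeff *= sp.I * sgn
        total += coeff * A[canon(cfg)]
    return sp.expand(total)

# SUSY forces A4 to vanish unless exactly two slots carry zeta-bar
constraints = [amp(sgns) for sgns in itertools.product([1, -1], repeat=4)
               if sum(sgns) != 0]

syms = list(A.values())
M = sp.Matrix([[c.coeff(v) for v in syms] for c in constraints])

def implied(expr):
    row = sp.Matrix([[sp.expand(expr).coeff(v) for v in syms]])
    return M.col_join(row).rank() == M.rank()

P4, X4 = A[('p',) * 4], A[('x',) * 4]
wards1 = P4 - X4
wards2 = P4 - A[('p', 'p', 'x', 'x')] - A[('p', 'x', 'p', 'x')] - A[('p', 'x', 'x', 'p')]
assert implied(wards1)
assert implied(wards2)
```

The same rank computation also shows that the constraints force the amplitudes with an odd number of $\xi$'s to vanish, consistent with the footnote.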
The identities in [(\[wards1\])]{}–[(\[wards2\])]{} can also be used to test if a given candidate Weyl and gauge invariant operator is compatible with supersymmetry. If the on-shell four-point amplitudes resulting from the operator do not satisfy [(\[wards1\])]{}–[(\[wards2\])]{}, then the operator cannot be supersymmetrized. On the other hand, if the resulting amplitudes are compatible with [(\[wards1\])]{}–[(\[wards2\])]{}, then the operator has a supersymmetric extension at the level of four fields (though not necessarily beyond that order).
Dilaton effective action {#sec:dea}
========================
We turn now to the construction of an ${\mathcal{N}}=1$ supersymmetric effective action for the dilaton and axion fields ${\tau}$ and ${\beta}$ in the presence of a curved background metric $g_{{\mu\nu}}$ and background gauge field $A_{\mu}$. As noted in the Introduction, the dilaton effective action can be split into two parts $$S = S_{\text{WZ}}+ S_{\text{inv}}\;,
\label{Ssplit}$$ depending on whether gauge and Weyl transformations act non-trivially.
Wess-Zumino action {#sec:WZaction}
------------------
The Wess-Zumino part of the action is defined such that its gauge variation produces the anomaly for the $U(1)_R$ symmetry and its Weyl variation results in the conformal anomaly. It can be obtained either by iteratively applying transformations and adding terms to cancel extra variations, or by integrating the anomalies directly [@Wess:1971yu]. The result is the four-dimensional Wess-Zumino action for the dilaton and axion: $$\label{SWZdef}
\begin{split}
S_{\text{WZ}}=& \int d^4x\,\sqrt{-g}\,\bigg[ {\,\Delta c\,}\,\tau\, W^2\,-{\,\Delta a\,}\tau\,E_4
- 6\,{\,\Delta c\,}\,\tau\, F^2 \\
&\qquad\qquad~~~ + \beta\Big( 2\,(5{\,\Delta a\,}-3{\,\Delta c\,})F\widetilde{F} + ({\,\Delta c\,}-{\,\Delta a\,})R\widetilde{R}\Big) \\
&\qquad\qquad~~~-{\,\Delta a\,}\bigg(4\,\Big(R^{\mu\nu}-\frac{1}{2}R\,g^{\mu\nu}\Big)\nabla_\mu\tau\,\nabla_\nu\tau - 2\,(\nabla\tau)^2\Big(2\,\Box\tau - (\nabla\tau)^2\Big) \bigg) \bigg]\;.
\end{split}$$ Here $F=dA$ is the flux for the background $U(1)_R$ gauge field. Under a Weyl transformation, the variation of $\tau$ on the first line produces the conformal anomaly, while the Weyl tensor and field strength are inert. However, $E_4$ is not inert, but the Weyl variation of the third line cancels the contributions from $\tau\, \delta_{\sigma}(\sqrt{-g}E_4)$. The second line is Weyl invariant. Gauge transformations shift $\beta \to \beta + \alpha$, hence the second line in [(\[SWZdef\])]{} produces the $U(1)_R$ anomaly. When the flux and the axion vanish, one recovers the WZ action for the dilaton [@Schwimmer:2010za; @Komargodski:2011vj; @Komargodski:2011xv]. The coefficients ${\,\Delta a\,}=a_{\text{UV}}- a_{\text{IR}}$ and ${\,\Delta c\,}=c_{\text{UV}}- c_{\text{IR}}$ are the difference between the corresponding central charges of the UV and IR SCFTs, as required by the anomaly matching conditions [@Schwimmer:2010za; @Komargodski:2011vj].
Gauge and Weyl invariants
-------------------------
Since $S_{\text{WZ}}$ is determined by its variation, it is only specified up to terms whose gauge and Weyl variations vanish. We define $S_{\text{inv}}$ to be the sum of all independent gauge and Weyl invariant combinations of $\tau$, $\beta$, $g_{{\mu\nu}}$, and $A_{\mu}$. To facilitate the analysis, we define a Weyl invariant metric $\hat{g}_{{\mu\nu}}= e^{-2\tau}g_{{\mu\nu}}$, so that any curvature terms computed in terms of $\hat{g}_{{\mu\nu}}$ will be invariant. This procedure appeared in the analysis in [@Komargodski:2011vj] (see also [@Elvang:2012st; @Elvang:2012yc; @Baume:2013ika] for analogues in higher dimensions) where there were three possible four-derivative Weyl invariants with independent coefficients: $\sqrt{-\hat{g}}\hat{W}^2$, $\sqrt{-\hat{g}}\hat{R}^2$, and $\sqrt{-\hat{g}}\hat{E}_4$. (The Euler density $\hat{E}_4$ is a total derivative in four dimensions so it can be dropped.) In the present context, the additional fields can be used to construct invariants. Specifically, the combination $(A-{\nabla}\beta)_{\mu}$ is both gauge and Weyl invariant. This combination also suggests that we should treat $A_{\mu}$ on the same footing as a derivative in the low-energy effective action. With these building blocks we find the most general Ansatz for $S_{\text{inv}}$ including terms with at most four derivatives: $$\label{Sinv}
\begin{split}
S_{\text{inv}}= \int d^4x\,&\sqrt{-\hat{g}}\,\bigg[-\frac{f^2}{2}\,\bigg(\, \frac{\hat{R}}{6} + \hat{g}^{\mu\nu}\,(A-\nabla\beta)_\mu\,(A-\nabla\beta)_\nu \bigg) + \sum\limits_{i=1}^{9}\gamma_i\,W_i
+ \mathcal{O}({\nabla}^6) \bigg] \;,
\end{split}$$ where we have dropped total derivatives such as $\sqrt{-\hat{g}}\hat{E}_4$. The hatted two-derivative gauge-Weyl invariants produce the kinetic terms for the scalars when expanded in terms of the unhatted metric and the dilaton. The real constants $\gamma_1,\ldots,\gamma_9$ are arbitrary coefficients of the independent four-derivative gauge and Weyl invariant terms, $\sqrt{-\hat{g}}W_i$, defined by $$\begin{aligned}
\label{Weylinv}
\begin{array}{lcl}
W_1\equiv\hat{W}^2\;, & & W_2 \equiv\hat{R}^2\;, \\
W_3\equiv(A-\nabla\beta )_\mu\,\hat{\nabla}^\mu\hat{R}\;, & & W_4\equiv\Big(\hat{\nabla}^\mu(A-\nabla\beta)_\mu\Big)^2\;, \\
W_5 \equiv \hat{g}^{\mu\nu}\,(A-\nabla\beta)_\mu\, \hat{\square}\, (A-\nabla\beta)_\nu\;, & & W_6 \equiv \hat{R}^{\mu\nu}\, (A-\nabla\beta)_\mu\,(A-\nabla\beta)_\nu\;, \\
W_7 \equiv \hat{R}\,\hat{g}^{\mu\nu}\, (A-\nabla\beta)_\mu\,(A-\nabla\beta)_\nu\;, &\hspace{1.5cm} & W_8 \equiv \Big(\hat{g}^{\mu\nu}\,(A-\nabla\beta)_\mu\,(A-\nabla\beta)_\nu\Big)^2\;,\\
\multicolumn{3}{l}{W_9 \equiv \hat{g}^{\mu\nu}\,(A-\nabla\beta)_\mu\,(A-\nabla\beta)_\nu\,\hat{\nabla}^\lambda(A-\nabla\beta)_\lambda\;.}\\
\end{array}\end{aligned}$$ All other invariants can be written as linear combinations of the $W_i$ and total derivatives, e.g. the Bianchi identity implies ${\hat{R}^{{{\mu\nu}}}\, \hat{{\nabla}}_{\mu}(A-{\nabla}{\beta})_{\nu}= \hat{{\nabla}}_{\mu}\Big(\hat{R}^{{\mu\nu}}\,(A-{\nabla}{\beta})_{\nu}\Big) - \frac{1}{2}W_3}$.
This is the most general possible action written in terms of natural gauge and Weyl invariant objects constructed from the basic fields. So far, we have not imposed any supersymmetry on the Weyl+gauge invariant action $S_\text{inv}$. As we will see in the following sections, the constraints implied by ${\mathcal{N}}=1$ supersymmetry and the consequences for the $a$-theorem are easily expressed and understood in terms of the $W_i$ and their coefficients.
Matching to superspace calculation {#sec:matchingST}
==================================
The bosonic terms in the ${\mathcal{N}}=1$ supersymmetric version of the Wess-Zumino action were derived earlier by Schwimmer and Theisen [@Schwimmer:2010za]. They started with the Weyl anomaly in superspace and integrated it directly using the Wess-Zumino method [@Wess:1971yu]. This gives a superspace form of the Wess-Zumino action which was then expanded in component fields; the result is given in equation (3.23) of [@Schwimmer:2010za]. In that expression, it is easy to pick out the terms that match $S_{\text{WZ}}$ in [(\[SWZdef\])]{}. The two-derivative terms in [(\[Sinv\])]{} are also easily recognized. However, it is not *a priori* clear how to interpret the rest of the 4-derivative terms in (3.23) of [@Schwimmer:2010za]. Indeed, at first sight it may seem almost miraculous that these additional terms would not contribute to the anomaly under a gauge/Weyl transformation.
The correct interpretation of the rest of the terms in (3.23) of [@Schwimmer:2010za] is that they are a combination of gauge and Weyl invariants required for the supersymmetric completion of $S_{\text{WZ}}$ in [(\[SWZdef\])]{}. Thus, the extra terms in (3.23) of [@Schwimmer:2010za] are a particular linear combination of the operators $W_i$ from [(\[Weylinv\])]{}: there is a unique choice of $\gamma_i$ in $S_{\text{inv}}$ such that our action $S=S_{\text{WZ}}+S_{\text{inv}}$ agrees with (3.23) in [@Schwimmer:2010za].[^9] This choice is to set $$\label{gamma678}
\gamma_6 \,=\, -6\,\gamma_7 \,=\, 2\,\gamma_8 \,=\, -4{\,\Delta a\,}$$ and drop the other $W_i$’s. This yields the following action: $$\label{Sst}
\begin{split}
&S_0 = \int d^4x\,\bigg\{-f^2\,\sqrt{-\hat{g}}\,\bigg[\, \frac{1}{12}\hat{R} +\frac{1}{2}\Big(\hat{g}^{\mu\nu}\,(A-\nabla\beta)_\mu\,(A-\nabla\beta)_\nu\Big) \bigg]\\
&\qquad~~ +\sqrt{-{g}}\,\bigg[ {\,\Delta c\,}\,\tau\, W^2\,-{\,\Delta a\,}\tau\,E_4
- 6\,{\,\Delta c\,}\,\tau\, F^2 \\
&\hspace{4cm} + \beta\Big( 2\,(5{\,\Delta a\,}-3{\,\Delta c\,})F\widetilde{F} + ({\,\Delta c\,}-{\,\Delta a\,})R\widetilde{R}\Big) \\
&\hspace{4cm} -{\,\Delta a\,}\bigg(4\,\Big(R^{\mu\nu}-\frac{1}{2}R\,g^{\mu\nu}\Big)\nabla_\mu\tau\,\nabla_\nu\tau - 2\,(\nabla\tau)^2\Big(2\,\Box\tau - (\nabla\tau)^2\Big) \bigg) \bigg]\\
&\qquad~~ - 4{\,\Delta a\,}\sqrt{-\hat{g}}\,\bigg[ \Big(\hat{R}^{{\mu\nu}}- \frac{1}{6}\hat{R}\,\hat{g}^{{\mu\nu}}\Big) (A-\nabla\beta)_\mu\,(A-\nabla\beta)_\nu\\
&\hspace{4cm} + \frac{1}{2}\,\Big(\hat{g}^{\mu\nu}\,(A-\nabla\beta)_\mu\,(A-\nabla\beta)_\nu\Big)^2 \bigg]
+ \mathcal{O}({\nabla}^6)
\bigg\}\;.
\end{split}$$ The first line contains the kinetic terms. The second through fourth lines are the WZ action [(\[SWZdef\])]{}, whose Weyl and gauge variations respectively produce the conformal and $U(1)_{R}$ anomaly. The last two lines are gauge and Weyl invariant and can be viewed as the supersymmetric completion of the Wess-Zumino action.
Although the other $\gamma_i$ and $W_i$ do not appear in [(\[Sst\])]{}, this should not be interpreted as setting them equal to zero. Rather, the remaining $\gamma_i$ do not contribute to [(\[Sst\])]{} because the superspace calculation in [@Schwimmer:2010za] derived only the terms related to the anomaly in a general ${\mathcal{N}}=1$ theory. At present, the rest of the $\gamma_i$ are not fixed. We will see later that the supersymmetry Ward identities imply additional constraints.
One can now expand [(\[Sst\])]{} to facilitate comparison with equation (3.23) in [@Schwimmer:2010za]: $$\label{S0}
\begin{split}
S_0 &= -f^2\int d^4x\,\sqrt{-g}\,e^{-2\tau}\Big(\frac{1}{2}({\nabla}\tau)^2 + \frac{1}{12}R + \frac{1}{2}\big({\nabla}\beta - A \big)^2 \Big)\\
&\quad + \int d^4x\,\sqrt{-g}\,\Big[{\,\Delta c\,}\tau\,W^2 - {\,\Delta a\,}\tau\,E_4 - 6{\,\Delta c\,}\tau\,(F_{{\mu\nu}})^2\\
&\hspace{4cm} +\beta\,\Big(2\,(5{\,\Delta a\,}-3{\,\Delta c\,})\,F^{{\mu\nu}}\,\widetilde{F}_{{\mu\nu}}+ ({\,\Delta c\,}-{\,\Delta a\,})\, R^{{\mu\nu\rho\sigma}}\,\widetilde{R}_{{\mu\nu\rho\sigma}}\Big)\Big]\\
&\quad + 8{\,\Delta a\,}\int d^4x\,\sqrt{-g}\,\bigg(\Big[R^{\mu\nu}A_\nu -\frac{1}{6}R\,A^\mu + A^2\,A^\mu \Big]\,\nabla_\mu\beta - A^\mu\, A^\nu\, \nabla_\mu\nabla_\nu\tau \bigg) \\
&\quad +\,2 {\,\Delta a\,}\int d^4x\,\sqrt{-g}\, \bigg\{\bigg[ \Big(R+2\, A^2\Big)g^{\mu\nu} - 2\,\Big(R^{\mu\nu} + 2\,A^\mu\,A^\nu\Big) \bigg] \nabla_\mu\tau \,\nabla_\nu\tau \\
&\qquad~ + \,\bigg[ \Big(\frac{1}{3}R - 2\,A^2 \Big) g^{\mu\nu} - 2\,\Big(R^{\mu\nu} + 2\, A^\mu\,A^\nu\Big) \bigg] \nabla_\mu\beta\, \nabla_\nu\beta + 8\,A^\nu \nabla^\mu \beta \,\nabla_\nu\nabla_\mu \tau
\bigg\}
+ \ldots \;.
\end{split}$$ Here the dots denote terms with either no $\beta$’s and $\tau$’s, or more than two of them. Higher-derivative terms are also suppressed.
The comparison between our dilaton effective action and the result in [@Schwimmer:2010za] uniquely selects the three gauge-Weyl invariants $W_6$, $W_7$, and $W_8$ and fixes their coefficients as in [(\[gamma678\])]{}. If there are any other gauge-Weyl invariants in the low-energy dilaton-axion effective action, then their linear combination must be independently supersymmetrizable. We analyze this in the next section.
Dilaton and axion scattering in flat space {#sec:amplitudes}
==========================================
For the purposes of testing supersymmetry and investigating the $a$-theorem, we now take the theory on a flat background with vanishing gauge field. Then $\tau$ and $\beta$ will be the only fields involved. For the moment, we continue to ignore the other $W_i$ that did not contribute to [(\[Sst\])]{}. We will explain later why this is justified. The action [(\[Sst\])]{} encodes the familiar dilaton interactions, as well as new couplings to the axion $\beta$. These new interactions are present even in the flat-space limit with no background gauge field. Up to total derivatives, we find $$\label{flatST}
\begin{split}
S_0 = \int d^4x\bigg\{&-\frac{f^2}{2}\,e^{-2\tau}\Big[ (\partial\tau)^2 + (\partial\beta)^2 \Big]
+ 2{\,\Delta a\,}\Big[ 2\,\Box\tau\big((\partial\tau)^2- ({\partial}\beta)^2 \big)+4\, \Box\beta\,(\partial\tau\cdot\partial\beta) \\
&\quad -4\,(\partial\tau\cdot\partial\beta)^2 -\left((\partial\tau)^2 - (\partial\beta)^2\right)^2 \Big]+\mathcal{O}({\partial}^6)\bigg\}\;.
\end{split}$$ The fields $\tau$ and $\beta$ are coupled already at the two-derivative level through $e^{-2\tau}(\partial\beta)^2$, so the equations of motion mix $\tau$ and $\beta$: $$\square \tau = (\partial\tau)^2 - (\partial\beta)^2\;, \qquad\text{and}\qquad
\square \beta = 2(\partial\tau \cdot \partial\beta) \;.$$
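As a quick cross-check of these equations of motion (ours, not part of the text), one can work in a one-dimensional surrogate where $\Box\to d^2/dx^2$ and $(\partial f)^2 \to f'(x)^2$; the Euler-Lagrange equations of the two-derivative term, with the constant $f^2$ prefactor dropped, reproduce the mixing:

```python
import sympy as sp

x = sp.symbols('x')
tau = sp.Function('tau')(x)
beta = sp.Function('beta')(x)

# Two-derivative sector of the flat-space action in the 1d surrogate
L = -sp.Rational(1, 2) * sp.exp(-2 * tau) * (tau.diff(x)**2 + beta.diff(x)**2)

def euler_lagrange(L, q):
    return sp.diff(L, q) - sp.diff(L.diff(q.diff(x)), x)

eq_tau = sp.simplify(euler_lagrange(L, tau) * sp.exp(2 * tau))
eq_beta = sp.simplify(euler_lagrange(L, beta) * sp.exp(2 * tau))

# eq_tau = 0  <=>  tau'' = tau'^2 - beta'^2 ;  eq_beta = 0  <=>  beta'' = 2 tau' beta'
assert sp.simplify(eq_tau - (tau.diff(x, 2) - tau.diff(x)**2 + beta.diff(x)**2)) == 0
assert sp.simplify(eq_beta - (beta.diff(x, 2) - 2 * tau.diff(x) * beta.diff(x))) == 0
```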
Field redefinition
------------------
To facilitate the calculation of scattering amplitudes, we make a field redefinition to decouple the kinetic terms. This is easiest when we identify the complex scalar field $Z$ that produces the kinetic terms $$Z\equiv e^{-(\tau+i\,{\beta})} ~~\Rightarrow ~~ |{\partial}Z|^2 = e^{-2\tau}\Big((\partial\tau)^2 + (\partial\beta)^2\Big)\;.
\label{redef}$$ The action (\[flatST\]) can be rewritten in terms of $Z$ and its complex conjugate $\overline{Z}$ and takes a very simple form $$\begin{aligned}
S_0 = \int d^4x \,\bigg\{
-\frac{f^2}{2} \Big| {\partial}Z \Big|^2 + 2\Delta a
\bigg[
- \bigg(\frac{{\partial}Z}{Z}\bigg)^2 \frac{\Box \overline{Z}}{\overline{Z}}
- \bigg(\frac{{\partial}\overline{Z}}{\overline{Z}}\bigg)^2 \frac{\Box {Z}}{{Z}}
+\bigg|\frac{{\partial}Z}{Z}\,\bigg|^4 \,
\bigg]
+\mathcal{O}({\partial}^6)
\bigg\}\,.\label{STactZ}\end{aligned}$$ Note that when the Goldstone mode $\beta$ vanishes we have a real scalar $Z \to e^{-\tau} \equiv \Omega$ and the action (\[STactZ\]) reduces to the familiar form for the dilaton effective action in the flat space limit (see, for example, equation (2.8) in [@Luty:2012ww]).
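The kinetic identity in [(\[redef\])]{} is a one-line symbolic check (ours), with the real symbols `dtau` and `dbeta` standing in for the gradients that get contracted in $|\partial Z|^2$:

```python
import sympy as sp

tau, beta, dtau, dbeta = sp.symbols('tau beta dtau dbeta', real=True)

# Z = exp(-(tau + i beta)), so dZ = -Z (dtau + i dbeta); the cross terms in
# dZ conj(dZ) cancel, leaving the claimed sum of squares.
Z = sp.exp(-(tau + sp.I * beta))
dZ = -Z * (dtau + sp.I * dbeta)

lhs = sp.expand(dZ * sp.conjugate(dZ))
rhs = sp.exp(-2 * tau) * (dtau**2 + dbeta**2)
assert sp.simplify(lhs - rhs) == 0
```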
The field $Z$ is the compensator we introduce to restore the broken symmetries. We can expand about its constant vev[^10] $f$ with the fluctuating field ${\zeta}$, $$Z=1-\frac{{\zeta}}{f}\, ,\qquad\qquad {\zeta}= {\varphi}+ i\,{\xi}\;,$$ where ${\varphi}$ and ${\xi}$ are real scalar fields. Plugging this into the action and expanding up to fourth order in the fields, we find $$\label{phiST}
\begin{split}
S_0\to \int d^4x \,&\bigg\{ -\frac{1}{2}\bigg((\partial{\varphi})^2 + (\partial{\xi})^2 \bigg) + \frac{4{\,\Delta a\,}}{f^3}\bigg( {\square}{\varphi}\, \Big(({\partial}{\varphi})^2 - ({\partial}{\xi})^2 \Big)+ 2\,{\square}{\xi}\,({\partial}{\varphi}\cdot {\partial}{\xi}) \bigg)\\
&\quad+ \frac{2\,\Delta a}{f^4}\bigg[2\, {\square}{\varphi}\,\Big(3\,{\varphi}\,\Big(({\partial}{\varphi})^2-({\partial}{\xi})^2 \Big) -2\,{\xi}\,({\partial}{\varphi}\cdot{\partial}{\xi}) \Big) \\
&\hspace{1.6cm} + 2\,{\square}{\xi}\,\Big({\xi}\,\Big(({\partial}{\varphi})^2-({\partial}{\xi})^2 \Big) +6\,{\varphi}\,({\partial}{\varphi}\cdot{\partial}{\xi}) \Big) \\
&\hspace{1.6cm} + \Big(({\partial}{\varphi})^2 - ({\partial}{\xi})^2 \Big)^2 + 4\,({\partial}{\varphi}\cdot{\partial}{\xi})^2\bigg] + \mathcal{O}({\partial}^6)\bigg\}\;.
\end{split}$$ This parameterization decouples the equations of motion into those of free massless scalars $$\label{EOMphixi}
{\square}{\varphi}= 0\,,\qquad\qquad {\square}{\xi}= 0 \;.$$ As an effective action with a derivative expansion, we only include the two-derivative quadratic terms in the equations of motion. All other terms in the action involve three or more fields and give rise to interaction terms in the quantized theory. In [(\[phiST\])]{}, all such interactions involve at least four derivatives, so the amplitudes have no local contributions from pole diagrams until at least $\mathcal{O}(p^6)$.
Amplitudes
----------
We are interested in the four-point amplitudes. From the action [(\[phiST\])]{}, we see that the low-energy expansion starts at $\mathcal{O}(p^4)$. The equations of motion make it easy to read off the amplitudes from the contact terms in the last line of [(\[phiST\])]{}, which yield at $\mathcal{O}(p^4)$: $$\label{STamps}
\begin{split}
{\mathcal{A}}_4({\varphi}\,{\varphi}\,{\varphi}\,{\varphi}) &= \frac{4\Delta a}{f^4}(s^2+t^2+u^2)\;,\\
{\mathcal{A}}_4({\xi}\,{\xi}\,{\xi}\,{\xi}) &= \frac{4\Delta a}{f^4}(s^2+t^2+u^2)\;,\\
{\mathcal{A}}_4({\varphi}\,{\varphi}\,{\xi}\,{\xi}) &= \frac{4\Delta a}{f^4}(-s^2+t^2+u^2)\;,\\
{\mathcal{A}}_4({\varphi}\,{\xi}\,{\varphi}\,{\xi}) &= \frac{4\Delta a}{f^4}(s^2-t^2+u^2)\;,\\
{\mathcal{A}}_4({\varphi}\,{\xi}\,{\xi}\,{\varphi}) &= \frac{4\Delta a}{f^4}(s^2+t^2-u^2)\;.
\end{split}$$ We can now use these results to check if the action is compatible with supersymmetry. Combining the corresponding results from [(\[STamps\])]{}, we see that indeed the constraints [(\[wards1\])]{}–[(\[wards2\])]{} from the supersymmetry Ward identities are obeyed.
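This check takes only a few lines symbolically (the sketch is ours); note that the second identity holds term by term, even before imposing the massless kinematics $s+t+u=0$:

```python
import sympy as sp

s, t, Da, f = sp.symbols('s t Delta_a f', real=True)
u = -s - t          # massless 2 -> 2 kinematics
pre = 4 * Da / f**4

# The five amplitudes, with 'p' = dilaton phi and 'x' = axion xi
A = {'pppp': pre * (s**2 + t**2 + u**2),
     'xxxx': pre * (s**2 + t**2 + u**2),
     'ppxx': pre * (-s**2 + t**2 + u**2),
     'pxpx': pre * (s**2 - t**2 + u**2),
     'pxxp': pre * (s**2 + t**2 - u**2)}

# wards1: four-dilaton amplitude equals four-axion amplitude
assert sp.simplify(A['pppp'] - A['xxxx']) == 0
# wards2: A(pppp) = A(ppxx) + A(pxpx) + A(pxxp)
assert sp.simplify(A['pppp'] - A['ppxx'] - A['pxpx'] - A['pxxp']) == 0
```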
All three Weyl invariants, $W_{6,7,8}$, contributed to the amplitudes in a non-trivial way that ensures that the supersymmetry Ward identities are satisfied. Hence, this tests the supersymmetry of [(\[Sst\])]{}. The combination of Weyl invariants $W_i$ in [(\[Sst\])]{}-[(\[S0\])]{} was fixed via comparison with the superspace form given by Schwimmer and Theisen [@Schwimmer:2010za]. The match was obtained by comparing the last three lines of [(\[S0\])]{} with the corresponding expressions in [@Schwimmer:2010za]. Note that all the terms used explicitly in the match vanish in the flat-space limit with the background gauge potential turned off. However, as we have seen, $W_{6,7,8}$ also have flat-space contributions, so supersymmetry could also be tested via the Ward identities. Thus, in that limit, we have tested that our completion of the Schwimmer-Theisen terms does obey the supersymmetry constraints.
Supersymmetry and the other Weyl invariants
-------------------------------------------
So far we have considered only the part of the action that matched the superspace derivation of the Wess-Zumino action, fixing the values of $\gamma_6$, $\gamma_7$, and $\gamma_8$. The full dilaton effective action may have contributions from the other invariants $W_i$ as well. This is important because their flat-space limits could include additional dilaton and axion scattering beyond what we have considered so far, with potentially dangerous consequences for the $a$-theorem.
With that in mind, let us return to the list of gauge-Weyl invariants and evaluate them in the flat background. Applying the equations of motion , we find: $$\begin{aligned}
\label{Wiflat}
\begin{array}{l l}
W_1\to 0 \;,
&
W_2\to \frac{36}{f^4}({\partial}{\xi})^4\;,
\\
W_3\to 0\;,
&
W_4\to 0\;,
\\
W_5\to -\frac{2}{f^4}\Big( ({\partial}{\xi})^4 + ({\partial}{\varphi}\cdot{\partial}{\xi})^2 \Big) \;,
\hspace{0.5cm}&
\boldsymbol{W_6}\to \boldsymbol{-\frac{2}{f^4}\Big( ({\partial}{\xi})^4 + ({\partial}{\varphi}\cdot{\partial}{\xi})^2 \Big) }\;,
\\
\boldsymbol{W_7}\to \boldsymbol{-\frac{6}{f^4}({\partial}{\xi})^4 }\;,
&
\boldsymbol{W_8}\to \boldsymbol{\frac{1}{f^4}({\partial}{\xi})^4 }\;,
\\
W_9\to 0\;,
\end{array}\end{aligned}$$ where the three expressions in boldface are those already included in [(\[Sst\])]{}.
The first key feature to notice is that none of the invariants contain a $({\partial}{\varphi})^4$ interaction. Hence the four-scalar amplitude, ${\mathcal{A}}_4({\varphi}\,{\varphi}\,{\varphi}\,{\varphi})$ in [(\[STamps\])]{}, receives contributions only from the dilaton part of the Wess-Zumino action. It is completely blind to the presence of the axion. Thus it is not surprising that the resulting amplitude in [(\[STamps\])]{} matches exactly the one found in [@Komargodski:2011vj]. Moreover, this implies that the proof of the $a$-theorem using the four-dilaton amplitude is unaffected by the presence of the axion.
The second key feature is that any gauge+Weyl+supersymmetry invariant four-derivative term has to be a linear combination of the $W_i$’s, say $\mathcal{W}=\sum_{i=1}^9 b_i W_i$. Since [(\[Wiflat\])]{} tells us that the four-dilaton amplitude has zero contribution from $\mathcal{W}$, the supersymmetry Ward identity [(\[wards2\])]{} requires $b_5 + b_6 =0$, and consequently [(\[wards1\])]{} enforces $36\, b_2 - 6\, b_7+ b_8 = 0$. There are no constraints on the other $b_i$’s from four-particle supersymmetry Ward identities. In conclusion, any gauge+Weyl+supersymmetry invariant four-derivative operator (if it exists) does not contribute at all to the four-particle scattering processes, so from that point of view we can completely neglect it.
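The flat-space limits in [(\[Wiflat\])]{} can be fed through elementary contact-term combinatorics to re-derive these constraints. The sketch below is ours; the overall normalization of the contact Feynman rules drops out of the homogeneous conditions, so only the relative coefficients read off the table matter:

```python
import itertools
import sympy as sp

b = sp.symbols('b1:10')       # b[i-1] is the coefficient b_i of W_i
s, t = sp.symbols('s t')
u = -s - t

# Coefficients of (d xi)^4 and (d phi . d xi)^2 read off the flat-space table
# (in units of 1/f^4); only W_2, W_5, W_6, W_7, W_8 survive there.
c_xi4 = 36*b[1] - 2*b[4] - 2*b[5] - 6*b[6] + b[7]
c_mix = -2*b[4] - 2*b[5]

# p_i . p_j for massless all-outgoing momenta, in terms of Mandelstams
dots = {frozenset({1, 2}): s/2, frozenset({3, 4}): s/2,
        frozenset({1, 3}): t/2, frozenset({2, 4}): t/2,
        frozenset({1, 4}): u/2, frozenset({2, 3}): u/2}

def amp_xi4(c):
    """Contact amplitude of c*((d xi).(d xi))^2 for four xi legs."""
    return c * sum(dots[frozenset({q[0], q[1]})] * dots[frozenset({q[2], q[3]})]
                   for q in itertools.permutations((1, 2, 3, 4)))

def amp_mix(c, phis, xis):
    """Contact amplitude of c*((d phi).(d xi))^2 with phi legs phis, xi legs xis."""
    return c * sum(dots[frozenset({f1, x1})] * dots[frozenset({f2, x2})]
                   for f1, f2 in itertools.permutations(phis)
                   for x1, x2 in itertools.permutations(xis))

A_pppp = 0                                   # no W_i produces a (d phi)^4 term
A_xxxx = amp_xi4(c_xi4)
mixed = (amp_mix(c_mix, (1, 2), (3, 4)) + amp_mix(c_mix, (1, 3), (2, 4))
         + amp_mix(c_mix, (1, 4), (2, 3)))

# wards2: 0 = A_pppp - mixed, proportional to c_mix  =>  b5 + b6 = 0
assert sp.simplify((A_pppp - mixed) + c_mix * (s**2 + t**2 + u**2)) == 0
# wards1: 0 = A_pppp - A_xxxx, proportional to c_xi4  =>  second constraint
assert sp.simplify((A_pppp - A_xxxx) + 2 * c_xi4 * (s**2 + t**2 + u**2)) == 0
# with b6 = -b5, the (d xi)^4 coefficient collapses to the b2, b7, b8 combination
assert sp.simplify(c_xi4.subs(b[5], -b[4])) == 36*b[1] - 6*b[6] + b[7]
```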
Using general principles, we have shown that — up to four-derivative terms — the dilaton-axion effective action for $\mathcal{N}=1$ SCFTs takes the form $S=S_{\text{WZ}}+S_{\text{inv}}$, with $S_{\text{WZ}}$ and $S_{\text{inv}}$ given by [(\[SWZdef\])]{} and [(\[Sinv\])]{}, respectively. The results of [@Schwimmer:2010za] fix the coefficients $\gamma_i$ as in [(\[gamma678\])]{} to complete the Wess-Zumino action to an ${\mathcal{N}}=1$ supersymmetric form. The supersymmetry Ward identities can be applied in the flat-space limit to see that no supersymmetric linear combination of the $W_i$’s contributes to any four-particle process. However, we cannot eliminate the possibility of such supersymmetric combinations; we can only say that in the flat-space limit their four-field terms must be proportional to total derivatives and the EOM. It would be curious to know if such fully supersymmetric operators do exist, although we have established that for the proof of the $a$-theorem in four dimensions they do not matter.
We have demonstrated that the four-point axion scattering amplitude is given by the second line in [(\[STamps\])]{}. One can now use the same positivity arguments as in [@Komargodski:2011vj; @Komargodski:2011xv] to show that for $\mathcal{N}=1$ SCFTs $\Delta a=a_{\text{UV}}-a_{\text{IR}} >0$. This can be regarded as an alternative route to the $a$-theorem for four-dimensional SCFTs with $\mathcal{N}=1$ supersymmetry.
No supersymmetry
----------------
Suppose we do not assume ${\mathcal{N}}=1$ supersymmetry. Then the coefficients in the gauge anomaly [(\[gaugevary\])]{} are no longer fixed in terms of the trace anomalies $a$ and $c$. This affects only the second line of the WZ action [(\[SWZdef\])]{}, now with $\beta$ interpreted as the Goldstone mode of *some* broken $U(1)$ symmetry. Nothing else changes in the WZ action. The general form of the Weyl and gauge invariant action [(\[Sinv\])]{} is unchanged in the flat-space limit with $A_\mu=0$. (The relative normalization between $A_\mu$ and $\beta$ may change, but we do not have to worry about this when $A_\mu=0$.) Of course, there is no supersymmetry or other principle to fix the coefficients $\gamma_i$. However, that is not important for the Komargodski-Schwimmer proof of the $a$-theorem because [(\[Wiflat\])]{} shows that none of the Weyl+gauge invariants $W_i$ affect the $2 \to 2$ scattering amplitude of the physical dilaton at order $p^4$. Hence we conclude that even in the absence of supersymmetry the proof of the $a$-theorem is unaffected by the presence of Goldstone bosons for Abelian global symmetries.
Acknowledgements {#acknowledgements .unnumbered}
================
We are grateful to Stefan Theisen for correspondence and discussions on the methods and results in [@Schwimmer:2010za]. We thank Chris Beem, Zohar Komargodski, Finn Larsen, Jim Liu, and Balt van Rees for helpful discussions on the physics presented here. Most of this work was done while NB was a postdoc at the Simons Center for Geometry and Physics and he would like to thank this institution for its support and great working atmosphere. The work of NB is supported by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. HE is supported by NSF CAREER Grant PHY-0953232 and by a Cottrell Scholar Award from the Research Corporation for Science Advancement. Both HE and TMO are supported in part by the US Department of Energy under DoE Grant \#DE-SC0007859. TMO is supported by a Regents Fellowship from the University of Michigan and a National Science Foundation Graduate Research Fellowship under Grant \#F031543.
Conformal anomaly {#app:anomaly}
=================
The conformal anomaly in four-dimensional CFTs in the presence of background metric and gauge field is well-known (see, for example, [@Erdmenger:1996yc] for a summary). The goal here is to show how this result arises from imposing the WZ consistency condition [@Wess:1971yu] and compatibility with the anomaly for the global $U(1)$ symmetry associated with the background gauge field.
The trace anomaly ${\langle T_{\mu}{}^{\mu}\rangle}$ should be a function only of the background fields $g_{{\mu\nu}}$, $A_{\mu}$, and their derivatives. Since the gauge symmetry is broken, it is conceivable that one could have new gauge-noninvariant contributions to the trace anomaly in addition to the standard $W^2$, $E_4$, $(F_{{\mu\nu}})^2$, and $\square R$ terms.[^11] The possible new quantities should be constructed out of the following list with various choices of the coefficients $d_i$: $$\begin{aligned}
&d_1 \nabla_\mu(R)A^\mu + d_2\, R\, \nabla_\mu A^\mu
+ d_3 \nabla_\mu\square A^\mu + d_4 R^{\mu\nu}\,\nabla_\mu A_\nu + d_5 R\,(A_\mu)^2 + d_6 R_{\mu\nu}A^\mu A^\nu
\nonumber\\
& + d_7 \nabla_\mu(A^\mu) \nabla_\nu(A^\nu)
+ d_8 \nabla_\mu(A^\nu) \nabla_\nu(A^\mu) + d_{9} \nabla_\mu(A_\nu) \nabla^\mu(A^\nu) + d_{10} A_\mu \nabla_\nu\nabla^\nu A^\mu + d_{11} A^\nu \nabla_\mu \nabla_\nu A^\mu
\nonumber\\
& + d_{12} (A_\mu)^2\, \nabla_\nu A^\nu + d_{13} A^\mu \,A^\nu\, \nabla_\mu A_\nu + d_{14}\, (A_\mu)^4 \;.
\label{anoms}\end{aligned}$$ We will find, however, that none of these possibilities are allowed in the trace anomaly.
[[**WZ consistency conditions**]{}\
]{} The full action $S$ should satisfy the Wess-Zumino consistency conditions [@Wess:1971yu] (see also [@Cappelli:1988vw] for further discussion). In particular, since the Weyl variation of $S$ is the trace anomaly, the WZ conditions amount to the requirement $$\int d^4x\, \Big({\sigma}_2 \delta_{{\sigma}_1} - {\sigma}_1\delta_{{\sigma}_2} \Big) \sqrt{-g}\, \langle T_\mu{}^\mu\rangle = 0 \;.$$ The usual anomalies, $W^2$, $E_4$, $(F_{{\mu\nu}})^2$, and $\square R$, satisfy that constraint, but it remains to check whether any combination of the terms in $\eqref{anoms}$ might also work. In fact, one can verify that each of the following independently satisfies the constraint: $$\label{Kdef}
\begin{split}
K_1 &= \nabla_\mu \Big(3 R^{\mu\nu}\,A_\nu - R\, A^\mu + 3 \square A^\mu \Big)\;,\\
K_2 &= \nabla_\mu \Big( A^\nu\, \nabla_\nu A^\mu \Big)\;,\\
K_3 &= \nabla_\mu \Big( A^\mu\, \nabla_\nu A^\nu \Big)\;,\\
K_4 &= A_\mu\, \nabla_\nu(F^{\mu\nu})\;,\\
K_5 &= \nabla_\mu \Big(A^\mu\,(A_\nu)^2\Big)\;,\\
K_6 &= (A_{\mu})^4\;.\\
\end{split}$$ Therefore based on the WZ consistency conditions alone, the trace anomaly can take the form $$\begin{aligned}
c W^2 - a E_4 + b' \square R + \kappa_0 (F_{\mu\nu})^2 + \kappa_1 K_1 + \kappa_2 K_2 + \kappa_3 K_3 + \kappa_4 K_4 +\kappa_5 K_5 + \kappa_6 K_6 \;,
\label{generalAnomaly}\end{aligned}$$ where the first four terms are the standard conformal anomalies in the presence of a background gauge field and curved background for a theory with central charges $c$ and $a$ [@Erdmenger:1996yc]. The coefficient of $(F_{{{\mu\nu}}})^2$ is generally an independent physical quantity, although for $\mathcal{N}=1$ theories it is fixed in terms of $c$ and $a$.
[[**Constraints on ${\langle T_{\mu}{}^{\mu}\rangle}$ from the gauge anomaly**]{}\
]{} Just as the Weyl anomaly does not depend on either $\tau$ or $\beta$, the gauge anomaly should also be a function of just the background fields. Thus there cannot be gauge-dependent terms in [(\[generalAnomaly\])]{}: under a gauge variation such terms would generate $\tau$-dependent contributions to the gauge anomaly. To illustrate this point, let us consider an example. Suppose $\kappa_6\neq 0$, so ${\langle T_{\mu}{}^{\mu}\rangle}$ includes an $(A)^4$ anomaly. Since $\sqrt{-g}(A)^4$ is Weyl invariant, the action whose variation produces this anomaly is simply $$S_{{\text{WZ}},A^4} = \kappa_6\int d^4x\,\sqrt{-g}\,\tau\,(A)^4\;.$$ Now consider a gauge variation of this action, which would contribute to the gauge anomaly as $$\delta_\alpha S_{{\text{WZ}},A^4} \sim \kappa_6\int d^4x\,\sqrt{-g}\,\tau\,(A)^3\,{\nabla}\alpha \;,$$ which is $\tau$-dependent. The other new quantities have similar issues; in fact, no linear combination of $K_1,\ldots,K_6$ in [(\[Kdef\])]{} is gauge invariant. This forces us to set $\kappa_1=\kappa_2=\ldots=\kappa_6 = 0$ so that the trace anomaly is gauge invariant.
Since none of the new possibilities can contribute, we find that the trace anomaly for any $\mathcal{N}=1$ superconformal theory is $$\langle T_\mu{}^\mu \rangle =
c W^2 - a E_4 + b' \square R - 6\,c\, (F_{\mu\nu})^2 \;,$$ where the coefficient $\kappa_0=-6c$ of the last term is fixed by supersymmetry as in [@Schwimmer:2010za; @Anselmi:1997am; @Cassani:2013dba] (though with different normalization for the gauge field).
[9]{}
A. Schwimmer and S. Theisen, “Spontaneous Breaking of Conformal Invariance and Trace Anomaly Matching,” Nucl. Phys. B [**847**]{}, 590 (2011) \[arXiv:1011.0696 \[hep-th\]\]. Z. Komargodski and A. Schwimmer, “On Renormalization Group Flows in Four Dimensions,” JHEP [**1112**]{}, 099 (2011) \[arXiv:1107.3987 \[hep-th\]\]. Z. Komargodski, “The Constraints of Conformal Symmetry on RG Flows,” JHEP [**1207**]{}, 069 (2012) \[arXiv:1112.4538 \[hep-th\]\]. M. A. Luty, J. Polchinski and R. Rattazzi, “The $a$-theorem and the Asymptotics of 4D Quantum Field Theory,” JHEP [**1301**]{}, 152 (2013) \[arXiv:1204.5221 \[hep-th\]\]. A. Dymarsky, Z. Komargodski, A. Schwimmer and S. Theisen, “On Scale and Conformal Invariance in Four Dimensions,” arXiv:1309.2921 \[hep-th\]. K. Farnsworth, M. A. Luty and V. Prelipina, “Scale Invariance plus Unitarity Implies Conformal Invariance in Four Dimensions,” arXiv:1309.4095 \[hep-th\]. D. Anselmi, D. Z. Freedman, M. T. Grisaru and A. A. Johansen, “Nonperturbative formulas for central functions of supersymmetric gauge theories,” Nucl. Phys. B [**526**]{}, 543 (1998) \[hep-th/9708042\]. K. A. Intriligator and B. Wecht, “The Exact superconformal R symmetry maximizes a,” Nucl. Phys. B [**667**]{}, 183 (2003) \[hep-th/0304128\]. D. Cassani and D. Martelli, “Supersymmetry on curved spaces and superconformal anomalies,” arXiv:1307.6567 \[hep-th\]. H. Elvang, D. Z. Freedman and M. Kiermaier, “A simple approach to counterterms in N=8 supergravity,” JHEP [**1011**]{}, 016 (2010) \[arXiv:1003.5018 \[hep-th\]\]. M. T. Grisaru, H. N. Pendleton and P. van Nieuwenhuizen, “Supergravity and the S Matrix,” Phys. Rev. D [**15**]{}, 996 (1977). M. T. Grisaru and H. N. Pendleton, “Some Properties of Scattering Amplitudes in Supersymmetric Theories,” Nucl. Phys. B [**124**]{}, 81 (1977). H. Elvang and Y.-t. Huang, “Scattering Amplitudes,” arXiv:1308.1697 \[hep-th\]. L. J. Dixon, “A brief introduction to modern amplitude methods,” arXiv:1310.5353 \[hep-ph\]. J. 
Wess and B. Zumino, “Consequences of anomalous Ward identities,” Phys. Lett. B [**37**]{}, 95 (1971). H. Elvang, D. Z. Freedman, L. -Y. Hung, M. Kiermaier, R. C. Myers and S. Theisen, “On renormalization group flows and the a-theorem in 6d,” arXiv:1205.3994 \[hep-th\]. H. Elvang and T. M. Olson, “RG flows in d dimensions, the dilaton effective action, and the a-theorem,” JHEP [**1303**]{}, 034 (2013) \[arXiv:1209.3424 \[hep-th\]\]. F. Baume and B. Keren-Zur, “The dilaton Wess-Zumino action in higher dimensions,” arXiv:1307.0484 \[hep-th\].
J. Erdmenger and H. Osborn, “Conserved currents and the energy momentum tensor in conformally invariant theories for general dimensions,” Nucl. Phys. B [**483**]{}, 431 (1997) \[hep-th/9605009\]. A. Cappelli and A. Coste, “On The Stress Tensor Of Conformal Field Theories In Higher Dimensions,” Nucl. Phys. B [**314**]{}, 707 (1989).
[^1]: In Appendix \[app:anomaly\] we discuss why no other terms involving the gauge field are allowed.
[^2]: This is the ’t Hooft anomaly for the global $U(1)_R$ symmetry present in any $\mathcal{N}=1$ SCFT. With slight abuse of notation we will refer to it as the gauge anomaly.
[^3]: Very similar arguments were developed in [@Elvang:2010jv] to test supersymmetrization of candidate counterterms in ${\mathcal{N}}=8$ supergravity.
[^4]: There are no SCFTs in dimension greater than six and there are no conformal anomalies in odd dimensions. Thus dimensions two, four and six exhaust all cases of interest.
[^5]: We are abusing notation by using the same symbols to represent the fields and their corresponding creation and annihilation operators. Hopefully it is clear enough from context what we mean.
[^6]: See the reviews [@Elvang:2013cua; @Dixon:2013uaa] for more details about the spinor-helicity formalism and supersymmetry Ward identities.
[^7]: We are not including explicit momentum labels, but assume that the first state in the list has momentum $p_1^\mu$, the next $p_2^\mu$ etc.
[^8]: The Ward identities also imply certain relationships between the four-point amplitudes containing only one ${\varphi}$ or one ${\xi}$, e.g. ${\mathcal{A}}_4({\varphi}{\varphi}{\varphi}{\xi})=-{\mathcal{A}}_4({\xi}{\xi}{\xi}{\varphi})$. These relations are independent from those in –. However, they are trivially satisfied for our application because any amplitude with an odd number of pseudo-scalars ${\xi}$ vanishes in a parity-invariant theory.
[^9]: Our sign conventions differ from those of [@Schwimmer:2010za]. We use the curvature convention $[{\nabla}_{\mu},{\nabla}_{\nu}]\,V^{\rho}=R_{{{\mu\nu}}}{}^{\rho}{}_{\sigma}\,V^{\sigma}$. All equations shown here can be translated into the conventions of [@Schwimmer:2010za] by flipping the signs of the curvature tensors. We use a different normalization for $f$ and $A_{\mu}$, namely $f^2_{\text{here}} = 2f^2_{\text{ST}}$ and $A_{\text{here}} = \frac{2}{3}A_{\text{ST}}$. Also, our result [(\[S0\])]{} fixes minor typos in [@Schwimmer:2010za].
[^10]: Note that one can always choose the vev of $Z$ to be real using the global $U(1)$ symmetry in the action .
[^11]: The $\square R$ “anomaly” is non-physical because it can be removed by a local counterterm, but we include it here for completeness.
|
---
abstract: 'We investigate the heat current of a strongly interacting quantum dot in the presence of a voltage bias in the Kondo regime. Using the slave-boson mean-field theory, we discuss the behavior of the energy flow and the Joule heating. We find that both contributions to the heat current display interesting symmetry properties under reversal of the applied dc bias. We show that the symmetries arise from the behavior of the dot transmission function. Importantly, the transmission probability is a function of both energy and voltage. This allows us to analyze the heat current in the nonlinear regime of transport. We observe that nonlinearities appear already for voltages smaller than the Kondo temperature. Finally, we suggest using the contact and electric symmetry coefficients as a way to measure pure energy currents.'
address: 'IFISC (UIB-CSIC), Campus Universitat Illes Balears, E-07122 Palma de Mallorca, Spain'
author:
- 'Miguel A. Sierra and David Sánchez'
title: Heat current through an artificial Kondo impurity beyond linear response
---
Introduction
============
There is a growing interest in controlling and manipulating heat currents flowing in nanoscale devices [@review]. Efficient heat-to-work transformation, low-temperature thermometry, heat rectification and quantum cooling are but a few examples of possible beneficial applications foreseen within the realm of mesoscopic conductors [@review2]. An equally important motivation is the study of the fundamental mechanisms that govern energy transfer and dissipation beyond a purely (semi)classical description of particle transport.
In this work, we are concerned with heat fluxes driven by voltage biases. Current carriers (electrons) also carry energy and represent the main contribution to heat in quantum electronics at low temperatures. For very small shifts, the generated Peltier heat is reversible, its sign depending on the direction of the electric current. For larger shifts, heat is dominated by Joule power, which is always positive since it is connected to dissipation. The measured heat current then consists of two terms (energy flux and Joule heating) and it is thus natural to analyze their relative importance based on the transmission properties of the system. Here, we consider a single-level quantum dot setup, a prototypical mesoscopic system that allows us to analyze the role of strong electron-electron interactions in the generation of heat far from equilibrium.
The topic of voltage-driven quantum heat currents is interesting [@review3] and has been examined in a number of different systems [@kul94; @bog99; @cip04; @free06; @zeb07; @lei10; @whi13; @jia15; @zim16]. Quantum dot setups are especially appealing due to their experimental tunability and theoretical simplicity. Nonlinear heat currents in quantum dots were addressed in Refs. [@lop13; @mea13]. It was argued that nonequilibrium screening effects lead to charge buildups that affect the heat current–voltage characteristic beyond linear response. The scattering model considered weak interactions and was therefore valid for large dots only. Coulomb blockade effects were then analyzed by us in Refs. [@sie14; @sie15]. We found that Joule heating quickly surpasses the Peltier contribution as the applied voltage increases. The problem was also investigated recently [@ger15; @yam15]. We note in passing that the study of heat currents is strongly related to thermoelectrics due to reciprocity. In linear response, the Seebeck and Peltier coefficients are connected through the Kelvin-Onsager relation. Discussions beyond linear response are available in the literature [@sie15; @iyo10; @hwa13; @bed13; @cim14; @sel14]. The role of electron-electron interactions is crucial for the breakdown of reciprocal relations when the driving fields are large [@hwa13; @mat14].
Quantum dots with strong Coulomb interactions act as artificial quantum impurities. In the limit of very low temperatures and strong coupling, the many-body spin interaction between the localized electron in the quantum dot and the delocalized carriers in the attached reservoirs leads to the Kondo effect, which can be detected via electric conductance measurements [@exp1; @exp2; @exp3]. It is worth mentioning that studies of the generated heat current in Kondo systems are scarce [@boe576; @sai13] and focusing only on the linear response regime. Our aim here is to extend our work in Ref. [@sie085] and investigate beyond linear response the dissipated power of a quantum dot in the Fermi liquid fixed point where charge fluctuations are quenched and the Kondo singlet is well formed. This case is relevant for very low temperatures. We find that the symmetry properties of the local density of states with respect to voltage determine the energy flux and the dissipated power. Our results are relevant for the evaluation of heat asymmetries that take place when voltage is reversed or when heat is measured in different contacts, thus providing a better understanding for the nonlinear electrothermal response of a Kondo-correlated system.
Model Hamiltonian
=================
We consider a Kondo impurity in a quantum dot connected to two leads $\alpha = \{L,R\}$ characterized by their electrochemical potential $\mu_\alpha = \varepsilon_F +eV_\alpha$ and temperature $T_\alpha = T$. In the limit where interactions are infinite (charging energy $U\rightarrow\infty$), we consider the Anderson Hamiltonian in the slave-boson formalism [@hewson]. The full Hamiltonian reads
$$\begin{aligned}
\mathcal{H}=\sum_{\alpha k \sigma} \varepsilon_{\alpha k} C^\dagger_{\alpha k \sigma} C_{\alpha k \sigma} + \sum_{\sigma} \varepsilon_d f^\dagger_{\sigma} f_{\sigma} + \sum_{\alpha k \sigma} \mathcal{V}_{\alpha k} \left(C^\dagger_{\alpha k \sigma} b^\dagger f_\sigma + \mathrm{H.c.}\right) + \lambda \left(b^\dagger b + \sum_\sigma f^\dagger_\sigma f_\sigma -1\right)\, .\end{aligned}$$
Here, the first term on the right-hand side represents the fermionic reservoirs. The operator $C^\dagger_{\alpha k \sigma}$ ($C_{\alpha k \sigma}$) creates (destroys) electrons in the $\alpha$ reservoir with energy $\varepsilon_{\alpha k}$, where $k$ and $\sigma$ are the wavenumber and the spin, respectively. The next term is the dot Hamiltonian. In this case, $f^\dagger_\sigma$ ($f_\sigma$) creates (annihilates) a pseudofermion with energy $\varepsilon_d$ and spin $\sigma$. The tunneling between the dot and the reservoirs is described in the next term. $\mathcal{V}_{\alpha k}$ is the amplitude of electrons hopping on and off the reservoir $\alpha$. Additionally, we add the auxiliary bosonic field $b^\dagger$, which creates an empty state in the dot. Finally, in order to ensure that the quantum dot is occupied with only one electron (we recall that $U\rightarrow\infty$) we include a Lagrange multiplier $\lambda$ term in the Hamiltonian $\mathcal{H}$.
Let us consider the mean-field approach to the slave-boson Hamiltonian. This amounts to taking the leading order in a $1/N$ expansion ($N$ being the spin degeneracy). As a consequence, we replace the boson operator by its mean value $\langle b \rangle \equiv \tilde{b}$. This approximation neglects charge fluctuations since they are completely screened out. We remark that this approximation is valid in the Fermi liquid regime where temperatures are lower than the Kondo temperature $T_K$ and the dot energy level lies below the Fermi energy $\varepsilon_F$.
Within the nonequilibrium Green’s function framework, we derive the mean-field equations for the expectation value of $b$ and the Lagrange multiplier $\lambda$. First, we compute the evolution of the boson operator in the stationary limit, $\sum_{\alpha k \sigma} \tilde{\mathcal{V}}_{\alpha k} G^<_{f\sigma, \alpha k \sigma}(t,t) =-iN\lambda |\tilde{b}|^2/\hbar$, where $\tilde{\mathcal{V}}_{\alpha k}=\tilde{b}\,\mathcal{V}_{\alpha k}$ is the renormalized tunneling amplitude through the dot and $G^<_{f\sigma,\alpha k \sigma}(t',t)=-(1/\hbar)\langle C^\dagger_{\alpha k \sigma}(t') f_\sigma(t)\rangle$ is the lesser Green’s function for the tunneling process. The second expression corresponds to the single-occupancy condition enforced by the Lagrange term. Its mean-field equation reads $\sum_\sigma G^<_{f\sigma,f\sigma}(t,t) = i(1-N|\tilde{b}|^2)/\hbar$, where $G^<_{f\sigma,f\sigma}(t',t)=-(i/\hbar)\langle f_\sigma^\dagger(t') f_\sigma(t) \rangle$ is the dot pseudofermion lesser Green’s function. Both mean-field equations can be combined in a complex self-consistent equation in Fourier space,
$$\begin{aligned}
\frac{2}{\pi} \int_{-D}^D d\omega \frac{\mathcal{F}(\omega)}{\omega-\tilde{\varepsilon}_d+i\tilde{\Gamma}}=(\varepsilon_d-\tilde{\varepsilon}_d)\frac{N}{\Gamma}-i\left(1-N\frac{\tilde{\Gamma}}{\Gamma}\right) \, . \label{Eq:Selfeq}\end{aligned}$$
The parameters $\tilde{\varepsilon}_d=\varepsilon_d+\lambda$ and $\tilde{\Gamma}=|\tilde{b}|^2 \Gamma$ are the renormalized level position and width of the Kondo resonance at the dot. $\Gamma=\Gamma_L+\Gamma_R$ is the hybridization of the energy level due to tunneling, where $\Gamma_\alpha = \pi \rho_\alpha(\omega) |\mathcal{V}_\alpha(\omega)|^2$ depends on the tunneling amplitudes and the density of states $\rho_\alpha(\omega)$ of the lead $\alpha$. In the wide band limit, $\rho$ is constant and nonzero inside the bandwidth, $|\omega|<D$. Equation (\[Eq:Selfeq\]) also depends on the nonequilibrium distribution function $\mathcal{F}(\omega)=\sum_\alpha (\Gamma_\alpha/\Gamma) f_\alpha(\omega)$ which is, in this case, a weighted sum of the Fermi-Dirac distribution functions $f_\alpha(\omega)=1/\{1+\exp{[(\omega-\mu_\alpha)/(k_BT_\alpha)]}\}$ [@cab425]. In the slave-boson mean-field approach, the Kondo resonance arises when the auxiliary boson condenses and, as a result, $\tilde{\Gamma}$ becomes $k_B T_K$ and $\lambda$ shifts $\varepsilon_d$ up to the Fermi level [@coleman].
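Equation (\[Eq:Selfeq\]) must in general be solved numerically, but at $T=0$ and zero bias the frequency integral can be done in closed form, and the two real unknowns combine into a single complex variable. The sketch below (our illustration, not the authors' code; energies in units of $\Gamma$, with $\varepsilon_d$ and $D$ as in the figures) solves the resulting equation by complex Newton iteration:

```python
import math, cmath

# Parameters in units of Gamma = Gamma_L + Gamma_R (values as in the figures).
N, Gamma, eps_d, D = 2, 1.0, -2.5, 100.0
Tk = D * math.exp(-abs(eps_d) * math.pi / Gamma)   # Kondo scale k_B T_K

# At T = 0 and V = 0, F(w) = theta(-w), so the integral in the self-consistency
# equation gives a logarithm, and w = eps_tilde - i*Gamma_tilde obeys
#   (2/pi) log[w/(D+w)] - (N/Gamma)*eps_d + i + (N/Gamma)*w = 0 .
def f(w):
    return (2 / math.pi) * cmath.log(w / (D + w)) - (N / Gamma) * eps_d + 1j + (N / Gamma) * w

def df(w):
    return (2 / math.pi) * (1 / w - 1 / (D + w)) + N / Gamma

w = -1j * Tk                  # guess: resonance at the Fermi level, width ~ T_K
for _ in range(50):           # complex Newton iteration
    w = w - f(w) / df(w)

eps_tilde, Gamma_tilde = w.real, -w.imag
print(eps_tilde, Gamma_tilde, Tk)
```

With these parameters the iteration converges to $\tilde{\varepsilon}_d\approx 0.005\Gamma$ and $\tilde{\Gamma}\approx 0.038\Gamma$: a narrow resonance of width close to $k_BT_K$, pinned slightly above the Fermi level, as described in the text.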
Once we solve Eq. (\[Eq:Selfeq\]) for both $\lambda$ and $|\tilde{b}|$, the transport properties can be readily analyzed. We focus on the calculation of the heat current through the left reservoir $J\equiv J_L$, which is split into two different contributions: $J=J_E+J_{I}$. The first term is the energy current flowing through the lead, $J_E=d(\sum_{k\sigma} \varepsilon_{Lk} C_{Lk\sigma}^\dagger C_{Lk\sigma})/dt$. The second term is the Joule heating, $J_{I}=-I_LV_L$, where $V_L$ is the voltage applied to the left reservoir and $I_L=-e d(\sum_{k\sigma} C_{Lk\sigma}^\dagger C_{Lk\sigma})/dt$ is the charge current given by the evolution of the left lead occupation. Both charge and energy currents are conserved in the steady state whilst the heat fluxes satisfy the relation $\sum_\alpha (J_\alpha+I_\alpha V_\alpha)=0$.
We find $$\begin{aligned}
J_E&=&\frac{2}{h} \int_{-D}^D d\omega\, \mathcal{T}(\omega)(\omega-\varepsilon_F)[f_L(\omega)-f_R(\omega)]\, , \label{Eq:JE}\\
J_I&=&-\frac{2eV_L}{h} \int_{-D}^D d\omega\, \mathcal{T}(\omega)[f_L(\omega)-f_R(\omega)]\, .\label{Eq:JI}\end{aligned}$$ Both currents depend on the Fermi functions of the leads and the transmission function, which takes a particularly simple form: $\mathcal{T}(\omega)=4\tilde{\Gamma}_L\tilde{\Gamma}_R/[(\omega-\tilde{\varepsilon}_d)^2+\tilde{\Gamma}^2]$. It represents a Breit-Wigner lineshape with renormalized parameters ($\tilde{\varepsilon}_d$ and $\tilde{\Gamma}$). Unlike the noninteracting case, however, the parameters of this transmission function depend on the applied voltages and must be calculated from Eq. (\[Eq:Selfeq\]) for each dc bias.
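Once the renormalized parameters are known, Eqs. (\[Eq:JE\]) and (\[Eq:JI\]) reduce to one-dimensional quadratures. The following sketch (ours) evaluates them with a trapezoidal rule; for simplicity it freezes $\tilde{\varepsilon}_d$ and $\tilde{\Gamma}$ at illustrative equilibrium values, whereas in the full calculation they must be recomputed from Eq. (\[Eq:Selfeq\]) at every bias:

```python
import math

def fermi(w, mu, kT=0.01):                       # Fermi-Dirac function (k_B = 1)
    return 1.0 / (1.0 + math.exp((w - mu) / kT))

def transmission(w, eps_t=0.0046, gam_t=0.038):  # Breit-Wigner lineshape; for
    return gam_t**2 / ((w - eps_t)**2 + gam_t**2)  # symmetric couplings 4*GtL*GtR = gam_t**2

def heat_currents(V, lo=-0.3, hi=0.3, n=30001):
    """Return (J_E, J_I) in units where 2/h = e = 1 and energies are in Gamma."""
    h = (hi - lo) / (n - 1)
    JE = JI = 0.0
    for i in range(n):
        w = lo + i * h
        wgt = h if 0 < i < n - 1 else 0.5 * h        # trapezoidal weights
        df = fermi(w, V / 2) - fermi(w, -V / 2)      # f_L - f_R (symmetric bias, eps_F = 0)
        JE += wgt * transmission(w) * w * df         # energy-current integrand
        JI -= wgt * (V / 2) * transmission(w) * df   # Joule term, with V_L = V/2
    return JE, JI

JEp, JIp = heat_currents(0.02)    # |eV| below k_B T_K
JEm, JIm = heat_currents(-0.02)
```

Because the frozen transmission is trivially even in $V$, this toy version already displays the parities discussed in the Results section: $J_E$ is odd and $J_I$ is even (and negative) under $V\to -V$.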
Results
=======
![(a) Energy current $J_E$ and (b) Joule dissipation current $J_I$ as a function of the applied voltage bias $V$ for a given value of the energy level of the dot ($\varepsilon_d=-2.5\Gamma$). Parameters: temperature $k_BT=0.01\Gamma$ and bandwidth $D=100\Gamma$. []{data-label="fig:1"}](Figure1.eps)
We investigate the energy and Joule currents given by Eqs. (\[Eq:JE\]) and (\[Eq:JI\]) in response to a symmetrically applied voltage bias ($\mu_L=eV/2$ and $\mu_R=-eV/2$, setting $\varepsilon_F=0$ as the reference energy). Hereafter, we assume symmetric tunnel couplings $\Gamma_L=\Gamma_R=\Gamma/2$. In Fig. \[fig:1\], we depict $J_E$ and $J_I$ as a function of $V$ for a quantum dot energy level in the Kondo regime ($\varepsilon_d = -2.5\Gamma$). We focus on voltages smaller than the Kondo temperature $k_BT_K=D\exp{(-|\varepsilon_d|\pi/\Gamma)}$. For $V$ larger than $k_BT_K/e$ the boson field vanishes and the mean-field approach breaks down. For voltages smaller than $k_BT_K/e$ the boson is nonzero and the results are thus reliable.
The energy current in Fig. \[fig:1\](a) is an increasing function of $V$ since for positive voltages the energy flows from the left lead and this energy increases for higher values of $V$. For $V<0$ carriers predominantly impinge from the right lead and $J_E$ decreases as $V$ becomes more negative. Clearly, the energy current shows an antisymmetric shape around $V=0$. In contrast, the Joule term is always negative (with our sign convention, dissipation is negative) and symmetric under the replacement $V\to -V$. Let us examine in more detail the origin of these symmetry properties. First, we note that the difference of the Fermi functions in Eqs. (\[Eq:JE\]) and (\[Eq:JI\]), $f_L-f_R$, is an odd function of $V$ whereas the transmission function is even, $\mathcal{T}(\omega,V)=\mathcal{T}(\omega,-V)$, because the mean-field parameters that characterize the Kondo resonance are also even functions of the applied voltage \[see Figs. \[fig:2\](a) and \[fig:2\](b)\]. These findings are expected since the renormalized width should not depend on the direction of the voltage bias. Further, the position of the Kondo peak lies near the Fermi level and depends only weakly on $V$. As a result of the combination of an odd and an even function, we infer that the energy current is antisymmetric when $V\rightarrow -V$. On the other hand, the Joule term $J_I(V)$ is symmetric due to the fact that there is an additional $V$ factor in front of the integral in Eq. (\[Eq:JI\]).
The total heat current is shown in Fig. \[fig:2\](c). At very low voltages, $J_E(V)$ dominates the heat transport over $J_I(V)$, showing a linear dependence \[dashed line in Fig. \[fig:2\](c)\], which is the hallmark of the Peltier effect. Then, we can approximate $J(V)\simeq M_0 V$ with $M_0$ the electrothermal conductance [@butcher], directly connected to the thermopower via the Kelvin-Onsager relation. Now, the thermopower is nonzero only for asymmetric density of states. In our case, we find nonzero electrothermal conductances because the Kondo peak is not exactly located at the Fermi energy $\tilde{\varepsilon}_d\simeq\varepsilon_F$ \[see Fig. \[fig:2\](a)\] due to the potential scattering term in the quantum dot. This results in a nonsymmetric transmission function $\mathcal{T}(\omega,V)\neq\mathcal{T}(-\omega,V)$. Additionally, we find a positive heat current at positive voltages indicating $M_0>0$. Therefore, the Kondo dot at low $V>0$ might serve as a cooler, although this property is quickly dominated by Joule heating at increasing voltages. As seen in Fig. \[fig:2\](c), the Joule term overcomes the energy current inducing a negative heat flow. Importantly, $J(V)$ is not symmetric under the transformation $V\to -V$ because $J$ arises from the addition of symmetric and antisymmetric functions.
![(a) Renormalized level position $\tilde{\varepsilon}_d$ and (b) renormalized width $\tilde{\Gamma}$ of the Kondo resonance as a function of the voltage bias $V$ for values lower than the Kondo temperature. (c) Heat current $J$ as a function of the voltage bias $V$. The dashed green line exhibits the Peltier contribution to the heat current. The parameters are the same as in Fig. \[fig:1\].[]{data-label="fig:2"}](Figure2.eps)
Discussion
==========
The asymmetries shown in the calculated heat current can be analyzed in terms of contact and electric asymmetries, as suggested in Ref. [@lee13] for molecular tunnel junctions. The contact asymmetry $\Delta_C = J(V)-J(-V)$ measures the dissipated power under the reversal of the dc bias. The electric asymmetry $\Delta_E = J_L(V)-J_R(V)$ quantifies the different heat dissipation in both terminals as a function of $V$. Using the even and odd properties of $J_E(V)$ and $J_I(V)$ \[Eqs. (\[Eq:JE\]) and (\[Eq:JI\])\], we straightforwardly arrive at the relation $\Delta_C(V) = 2J_E(V)$. This implies that the behavior of the contact asymmetry is in fact given by the results in Fig. \[fig:1\](a). Remarkably, we find that the electric asymmetry is also proportional to the energy current, $\Delta_E(V) = 2J_E(V)$. This result is general [@jar165] in the case of both a symmetric voltage bias $V_L=-V_R=V/2$ and a symmetric transmission function. Any quantum conductor that satisfies these conditions will show similar contact and electric asymmetries. Here, we find that these symmetry conditions are satisfied for an artificial Kondo impurity due to the symmetric renormalization of the mean-field parameters. Therefore, the Kondo effect does not break these symmetries at least in the Fermi liquid regime. As a byproduct, our result would facilitate the experimental detection of the energy current from a measurement of either the heat current for different voltage directions or the heat current at different reservoirs for the same dc bias.
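Written out explicitly, using the parities $J_E(-V)=-J_E(V)$ and $J_I(-V)=J_I(V)$ established above, the first relation follows in one line: $$\Delta_C(V) = J(V)-J(-V) = \left[J_E(V)+J_I(V)\right]-\left[-J_E(V)+J_I(V)\right] = 2J_E(V)\, .$$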
Conclusions
===========
To sum up, we have examined the heat flow through an artificial Kondo impurity connected to two leads with a symmetric applied voltage. Within the mean-field slave-boson formalism, we have obtained the expression for the transmission and have computed both the energy current and the Joule dissipation current. We find an antisymmetric energy current and a symmetric Joule term as a function of the dc bias. This behavior can be understood from the symmetry properties of the transmission function which arises from the symmetry of the renormalized width and energy position of the Kondo resonance. Additionally, we show that at voltages smaller than the Kondo temperature the Peltier effect first dominates the heat current while Joule heating governs the heat current at higher biases. Then, the crossover from linear to quadratic behavior is controlled by the Kondo temperature. Our results could be tested with present techniques since there is a variety of available methods that measure heat currents in quantum conductors [@lee13; @mol92; @chi06; @mes09; @jez13; @cui17].
We acknowledge support from MINECO under grant No. FIS2014-52564 and a PhD grant from CAIB.
References {#references .unnumbered}
==========
[9]{} Benenti G, Casati G, Saito K, and Whitney R S 2017 *Phys. Rep.* **694** 1 Giazotto F, Heikkilä T T, Luukanen A , Savin A M and Pekola J P 2006 *Rev. Mod. Phys.* **78** 217 Sánchez D and López R 2016 *C. R. Physique* **17** 1060 Kulik I O 1994 *J. Phys.: Condens. Matter* **6** 9737 Bogachek E N, Scherbakov A G, and Landman U 1999 *Phys. Rev. B* **60** 11678 Çipiloǧlu M A, Turgut S, and Tomak M 2004 *Phys. Stat. Sol. (b)* **241** 2575 Freericks J K and Zlatic V, *Condensed Matter Physics* **9** 603 Zeberjadi M, Esfarjani K, and Shakouri A 2007 *Appl. Phys. Lett.* **91** 122104 Leijnse M, Wegewijs M R, and Flensberg K 2010 *Phys. Rev. B* **82** 045412 Whitney R S 2013 *Phys. Rev. B* **88** 064302 Jiang J H, Kulkarni M, Segal D, and Imry Y 2015 *Phys. Rev. B* **92** 045309 Zimbovskaya N 2016 *J. Phys.: Condens. Matter* **28** 183002 López R and Sánchez D 2013 *Phys. Rev. B* **88** 045129 Meair J and Jacquod P 2013 *J. Phys.: Condens. Matter* **25** 082201 Sierra M A and Sánchez D 2014 *Phys. Rev. B* **90** 115313 Sierra M A and Sánchez D 2015 *Materials Today: Proceedings* **2** 483 Gergs N M, Hörig C B M, Wegewijs M R, and Schuricht D 2015 *Phys. Rev. B* **91** 201107(R) Yamamoto K and Hatano N 2015 *Phys. Rev. E* **92** 042165 Iyoda E, Utsumi Y, and Kato T 2010 *J. Phys. Soc. Jpn.* **79** 045003 Hwang S Y, Sánchez D, Lee M, and López R 2013 *New J. Phys.* **15** 105012 Bedkihal S, Bandyopadhyay M, and Segal D 2013 *Eur. Phys. J. B* **86** 506 Cimmelli V A, Sellitto A, and Jou D 2014 *Proc. Royal Soc. A* **470** 0265 Sellitto A 2014 *Physica D* **283** 56 Matthews J, Battista F, Sánchez D, Samuelsson P, and Linke H 2014 *Phys. Rev. B* **90** 165428 Goldhaber-Gordon D *et al.* 1998 *Nature* **391** 156 Cronenwett S M, Oosterkamp T H, and Kouwenhoven L P 1998 *Science* **281** 540 Schmid J, Weis J, Eberl K, and Klitzing K v 1998 *Physica B* **256** 182 Boese D and Fazio R 2001 *Europhys. Lett.* **56** 576 Saito K and Kato T 2013 *Phys. Rev. 
Lett.* **111** 214301 Sierra M A, López R and Sánchez D 2017 *Phys. Rev. B* **96** 085416 Hewson A C 1997 *The Kondo Problem to Heavy Fermions* (Cambridge University Press) Balseiro C A, Usaj G, and Sánchez M J 2010 *J. Phys.: Condens. Matter* **22** 425602 Coleman P 1984 *Phys. Rev. B* **29** 3035 Butcher P N 1990 *J. Phys.: Condens. Matter* [**2**]{} 4869 Lee W, Kim K, Jeong W, Zotti L A, Pauly F, Cuevas J C, and Reddy P 2013 *Nature* **498** 209 Argüello-Luengo J, Sánchez D, and López R 2015 *Phys. Rev. B* **91** 165431 Molenkamp L W, Gravier T, van Houten H, Buijk O J A, Mabesoone M A A, and Foxon C T 1992 *Phys. Rev. Lett.* **68** 3765 Chiatti O, Nicholls J T, Proskuryakov Y Y, Lumpkin N, Farrer I, and Ritchie D A 2006 *Phys. Rev. Lett.* **97** 056601 Meschke M, Guichard W, and Pekola J P 2009 *Nature* **444** 187 Jezouin S, Parmentier F D, Anthore A, Gennser U, Cavanna A, Jin Y, and Pierre F 2013 *Science* **342** 601 Cui L, Jeong W, Hur S, Matt M, Klöckner J C, Pauly F, Nielaba P, Cuevas J C, Meyhofer E, and Reddy P 2017 *Science* **355** 1192
|
---
abstract: 'In this review, we give an introduction to the structural and functional properties of the biological networks. We focus on three major themes: topology of complex biological networks like the metabolic and protein-protein interaction networks, nonlinear dynamics in gene regulatory networks and in particular the design of synthetic genetic networks using the concepts and techniques of nonlinear physics and lastly the effect of stochasticity on the dynamics. The examples chosen illustrate the usefulness of interdisciplinary approaches in the study of biological networks.'
author:
- Indrani Bose
title: Biological Networks
---
Department of Physics, Bose Institute, 93/1, A.P.C. Road, Calcutta-700009, India
Introduction
============
Networks are widely prevalent in all spheres of life [@1; @2; @3]. A network of acquaintances is the simplest example one can think of. Social, economic and political networks of various kinds are part of human society. The internet, a network of information resources, plays a vital role in the gathering, sharing and transmission of information. A network consists of nodes connected by links. Figure 1 shows the example of a network in which the solid circles denote the nodes and the solid lines the links. Some examples of real life networks are as follows: in a network describing an electrical power grid, the generators, transformers and substations are the nodes and the high-voltage transmission lines connecting them the links. In the World Wide Web (WWW), the documents/pages constitute the nodes. These are connected to other documents/pages through links. In a collaboration graph of movie actors, the nodes represent the actors. Two actors are connected by a link if they appear in the same movie. In a citation network, the nodes are the papers published in refereed journals. A paper is linked to all the other papers it cites. Cellular processes are controlled by various types of biochemical networks. A metabolic network [@4] controls the processes which generate mass and energy from nutritive matter. The nodes in such a network are the substrates such as ATP, ADP and $ H_{2}O $. Two substrates are connected by a link if both of them participate in the same biochemical reaction. Traditional cell biology assigns specific functional roles to individual proteins, such as catalysts, signalling molecules and constituents of cellular matter. In the post-genomic era, there is an increasing emphasis on understanding the functions of proteins as parts of an interacting network and also on the collective, emergent properties of the network. In a protein-protein interaction network [@5], the nodes represent the proteins. 
A link exists between two nodes if the corresponding proteins have a direct physical interaction.
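These definitions are straightforward to make concrete. In the sketch below (toy data for illustration only, not taken from any real metabolic or protein database), a network is stored as an adjacency list, from which the degree of each node, i.e. its number of links, and the degree distribution follow directly:

```python
from collections import Counter

# Toy undirected network: nodes are substrates; a link joins two substrates
# that participate in the same reaction (names illustrative, not real data).
edges = [("ATP", "ADP"), ("ATP", "H2O"), ("ADP", "H2O"), ("ATP", "glucose")]

network = {}                       # adjacency list: node -> set of neighbours
for a, b in edges:
    network.setdefault(a, set()).add(b)
    network.setdefault(b, set()).add(a)

degree = {node: len(nbrs) for node, nbrs in network.items()}
# degree distribution P(k): fraction of nodes having exactly k links
counts = Counter(degree.values())
P = {k: c / len(network) for k, c in counts.items()}
print(degree)   # in this toy example ATP is the most connected node
```

Characterizations of large real networks, such as the degree distributions discussed in Section 2, are computed from databases of exactly this adjacency-list form, just with millions of nodes.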
The networks discussed above have complex topology. Spectacular advances in computerisation of data acquisition (the Human Genome Project is a prime example) have made it possible to construct large databases which contain information on the topology of real life networks. The advent of powerful computers has given rise to extensive investigations of networks containing millions of nodes. The interesting fact emerging out of these studies is that biological networks share common topological features with non-biological networks. There appears to be a general blueprint for the large scale organisation of several of these networks. In Section 2 of this review, we discuss two types of biological networks, namely, the metabolic networks of several organisms and the protein-protein interaction networks associated with the yeast S. cerevisiae [@6] and the human gastric pathogen H. Pylori [@7]. The major topological features of these networks are described and the similarity in the design principles of large-scale biological and non-biological networks pointed out.
Gene regulatory networks are the most significant examples of biological networks. Gene expression and regulation are the central activities of a living cell [@8]. Genes are fragments of $ DNA $ molecules and determine the structure of functional molecules like $ RNAs $ and proteins. In each cell, at any instant of time, only a subset of genes present is active in directing $ RNA $ / protein synthesis. The gene expression is “on” in such a case. The information present in the gene is expressed through the processes of transcription and translation. During transcription, the sequence along one of the strands of the $ DNA $ molecule is copied onto a $ RNA $ molecule ($ mRNA $ ). The sequence of the $ mRNA $ molecule is then translated into the sequence of amino acids, which determines the functional nature of the protein molecule produced. In a gene regulatory network, the protein encoded by one gene can regulate the expression of other genes. These genes in turn produce new regulatory proteins which control still other genes. A protein may also regulate its own level of production through an autoregulatory feedback process. The occurrence of cell differentiation, when an organism grows from its embryonic stage, depends upon the selective switching on of gene expression in individual cells. All these cells have identical sets of genes but follow different developmental pathways depending upon the patterns of gene expression in the cells. Thus distinct types of cells such as hair and skin cells are obtained. Gene expression is also regulated in metabolism and progression through the cell cycle as well as in responses to external signals. Infected cells can multiply because the expression of certain genes is “on” in these cells whereas in normal cells the expression of the same genes is “off”.
Despite a vast amount of experimental data, the complex dynamical processes involved in gene regulation are not fully understood as yet. A large number of theoretical studies have been undertaken [@9] but only a few of these make quantitative predictions in agreement with experimental results. Two key concepts which emerge out of the theoretical studies are: nonlinearity of the network dynamics and the role of stochasticity in gene expression and regulation [@10]. The variables of interest in the network dynamics are the concentrations of the $ mRNAs $, proteins and other biomolecules within the cell. The rate of change in the concentration of a biomolecular species is a nonlinear function of the other variables. The dynamics is governed by a set of coupled nonlinear differential equations which in most cases are solved numerically. Let $ U $ be the concentration of, say, a particular type of protein in the cell. The rate of change of $ U $ is given by
$ \frac{dU}{dt}= $ (Production - Loss/Decay ) of $ U $ per unit time.\
The production term is a nonlinear function of the other concentration variables and the loss term is usually proportional to $ U $. In Section 3 of this review, we briefly describe the major features of nonlinearity in the dynamics of gene networks. There is currently a significant emerging trend to utilise the concepts and techniques of nonlinear physics in the actual construction of synthetic gene regulatory networks with a variety of applications. In Section 3, a specific example of this, namely, the genetic toggle switch [@11] will be given.
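The structure of such a rate equation can be illustrated with a minimal sketch of our own (the rate constants are illustrative, not taken from a specific network): a protein produced at a constant rate $ k_{prod} $ and lost by first-order decay $ \gamma U $ relaxes to the steady state $ U^{*}=k_{prod}/\gamma $ under simple Euler integration.

```python
# Sketch of the generic rate equation dU/dt = (production) - (loss):
# here production is a constant k_prod and loss is first-order, gamma*U.
# The steady state is U* = k_prod / gamma.

def simulate_protein(k_prod=10.0, gamma=0.5, u0=0.0, dt=0.01, steps=5000):
    u = u0
    for _ in range(steps):
        u += dt * (k_prod - gamma * u)  # Euler step: production minus decay
    return u

u_final = simulate_protein()   # converges to k_prod / gamma = 20
```

In a real gene network the production term would itself be a nonlinear function of the other concentrations, which is the source of the rich dynamics discussed below.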
The biochemical rate equations which govern the dynamics of gene regulatory networks are deterministic in nature. Many molecules associated with the networks have low intracellular concentrations and consequently fluctuations in reaction rates are considerable. Gene expression involves a series of biochemical reactions and due to stochastic fluctuations in the reaction rates, proteins are produced in short bursts at random time intervals. In the last few years, there has been an increasing realization that stochasticity plays a significant role in biological processes [@10; @12]. To give an example, consider the situation in which two independently produced regulatory proteins A and B are in competition to control a developmental switch that selects between two pathways depending on which protein wins. The protein concentrations have to reach effective levels in order to activate the switch. Due to stochastic fluctuations, the amounts of proteins A and B produced as a function of time can vary widely from cell to cell. In some cells, protein A reaches the effective level first and activates the developmental switch along one specific pathway. In the other cells, protein B takes control and the other pathway is activated. Thus even a clonal cell population exhibits phenotypic variations as the cells follow different developmental pathways. Environmental signals can bias the probabilities of path choice in a regulatory circuit. Organisms make use of this mechanism to increase the probability of survival in a hostile environment. Cells often utilise fluctuations (noise) to randomize choices of developmental pathways when such randomization is desirable for the survival and growth of the organism. A well-known example is that of the phage $ \lambda $ lysis-lysogeny network [@13]. The bacterial E.coli cells, when infected by the virus phage $ \lambda $ , can follow two developmental pathways: lysis and lysogeny. In the lysogenic state, the infection is dormant.
Phage $ \lambda $ is inert and integrated into the host cell’s chromosome. It replicates along with the bacterial DNA and each new cell contains the dormant phage. In the lytic state, the infection proliferates. The viral DNA replicates using the host cell machinery giving rise to a large number of progeny phage. These in turn lyse or burst the host bacterium cell and the infection spreads to more cells. Again, due to stochastic fluctuations, the cell population divides into two subpopulations: lysogenic and lytic. The selection of a developmental pathway after the host cell is infected is not deterministic but probabilistic. In Section 4 of this review, the effect of stochasticity on the dynamics of the $ \lambda $-phage network will be briefly discussed. The network illustrates the competitive control of a developmental switch by two regulatory proteins.
As already mentioned before, gene expression/regulation involves several biochemical reactions with appreciable stochastic fluctuations. Gillespie [@14; @15] has proposed a Monte Carlo simulation algorithm to describe the kinetics of coupled stochastic reactions. This method is physically more rigorous than the conventional differential equation approach. The inherent assumption in the latter method is that the temporal changes in the concentrations of reacting molecules are both continuous and deterministic. The assumption is not true if the concentrations are small and the reaction rates slow or if the system undergoes large, rapid and discrete transitions. In the Gillespie algorithm, changes in the numbers of the reacting molecules occur in integral numbers brought about by random, distinct reaction events. The Gillespie algorithm is described in detail in Section 4 and some illustrative examples are given. Recent experiments at the level of a single cell have shown that gene expression occurs in abrupt stochastic bursts [@16; @17; @18; @19]. Further, in an ensemble of cells, the levels of proteins produced have a bimodal distribution. In a large fraction of cells, the gene expression is either off or has a high value. We have proposed a model of gene expression the essential features of which are stochasticity and cooperative binding of RNA polymerase, the molecule responsible for transcription [@20]. The model can reproduce the bimodality observed in experiments. We include a description of the model in Section 4 to give an additional example of the effect of stochasticity on gene regulation. Section 5 of the review contains concluding remarks.
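The essential steps of the Gillespie algorithm can be conveyed with a minimal sketch of our own (a birth-death toy process, not one of the models discussed in the text): at each step an exponential waiting time is drawn from the total propensity, and a reaction is then selected with probability proportional to its propensity.

```python
import random

# Minimal Gillespie simulation of the birth-death process
#   0 -> P (propensity k),   P -> 0 (propensity gamma * n).
# Returns the time-averaged copy number after a burn-in period;
# the exact stationary mean is k / gamma.

def gillespie_birth_death(k=20.0, gamma=1.0, t_end=200.0, t_burn=50.0, seed=1):
    random.seed(seed)
    t, n = 0.0, 0
    weighted_n, total_time = 0.0, 0.0
    while t < t_end:
        a1 = k                          # propensity of production
        a2 = gamma * n                  # propensity of degradation
        a0 = a1 + a2
        tau = random.expovariate(a0)    # exponential waiting time
        if t > t_burn:
            weighted_n += n * tau       # time-weighted average of n
            total_time += tau
        t += tau
        if random.random() * a0 < a1:   # pick reaction by propensity
            n += 1
        else:
            n -= 1
    return weighted_n / total_time

mean_n = gillespie_birth_death()        # close to k / gamma = 20
```

Unlike the deterministic rate equation, an individual run fluctuates around the mean; changes in the copy number occur in integral steps brought about by distinct reaction events, exactly as described above.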
The emphasis in this review is on recent studies of biological networks. There are a number of exhaustive reviews and books on earlier work. The three major themes that several recent studies focus on are: topology of networks, nonlinear dynamics and its consequences and the role of stochasticity in biological processes. The present review is meant to be an introduction to these themes and to highlight the fact that interdisciplinary approaches are essential to develop an integrated understanding of biological networks.
Topology of complex networks
============================
Many real life networks have a complex structure. The mathematicians Erdős and Rényi [@21] were the first to propose a model of a complex network known as a random graph. One starts with $ N $ nodes and connects every pair of nodes with probability $ p $. The graph thus has approximately $ \frac{pN(N-1)}{2} $ links distributed in a random manner. Studies of real life networks, however, reveal that these cannot be described as random graphs. This distinction is possible on the basis of quantitative measurements of certain topological features which we define below. Several complex networks including the random graph are described as small world networks [@1; @2; @3; @22]. The small world idea implies that though the networks are large in size (the number of nodes in a network is a measure of its size), any pair of nodes can be connected by a short path. The distance between two nodes is given by the number of links along the shortest path connecting the nodes. In Figure 1, the distance between the nodes A and B is three. The diameter of the network, also known as the average path length l, is the average of the distances between all pairs of nodes. The global population is huge but still we live in a small world as any random pair of individuals are connected to each other through a short path of intermediate acquaintances. This was first established by Stanley Milgram [@23] who found that the average path length, measured in intermediate acquaintances, is six. In a small world network, the diameter scales as the logarithm of the number of nodes.
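The random graph and its average path length can be constructed directly (a standard-library sketch of ours; for large networks one would use a dedicated graph package):

```python
import random
from collections import deque

# Erdos-Renyi random graph G(N, p): connect each pair of nodes with
# probability p, then measure the average path length by breadth-first
# search from every node.

def random_graph(n, p, seed=0):
    random.seed(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def average_path_length(adj):
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:                        # BFS from source s
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for node, d in dist.items() if node != s)
        pairs += len(dist) - 1
    return total / pairs

g = random_graph(200, 0.05)             # N = 200, average degree ~ 10
l = average_path_length(g)              # small: grows only like log N
```

For these parameters the average path length comes out between 2 and 3, illustrating the small world property: a 200-node network traversed in a couple of steps.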
The second measurable topological feature of a complex network is its degree distribution [@1; @2; @3]. The number of links by which a node is connected to the other nodes varies from node to node. Let $ P(k) $ be the probability that a randomly selected node has exactly $ k $ links. Equivalently, $ P(k) $ is the fraction of nodes, on an average, which has exactly $ k $ links. One can define an average degree $ \left\langle k\right\rangle $ of the network, the degree of a node being the number of links attached to the node. In a random graph, the links are established randomly and most of the nodes have degrees close to $ \left\langle k\right\rangle $ . The degree distribution $ P(k) $ vs. $ k $ is Poissonian. It is strongly peaked at $ k=\left\langle k\right\rangle $ and decays exponentially away from the peak, i.e., $ P(k)\sim e^{-k} $ for $ k\gg \left\langle k\right\rangle $ and $ k\ll \left\langle k\right\rangle $ . In many real life networks, the degree distribution $ P(k) $ has no well-defined peak but has a power-law distribution
$$\label{1}
P(k)\sim k^{-\gamma }$$
where the exponent $ \gamma $ is a numerical constant. Such networks are known as scale-free networks because they are not tied to a specific scale. $ P(k) $ has a finite value over a wide range of $ k $ values. The power-law form of the degree distribution implies that the networks are extremely inhomogeneous unlike in the case of a random graph. In a scale-free network, there are many nodes with few links and a few nodes with many links. The highly connected nodes play a key role in the functionality of the network. Both the random graph and the scale-free networks are small world networks.
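A standard mechanism producing degree distributions of the form of Eq. (1) is network growth with preferential attachment (the Barabási–Albert construction; it is not described in the text above, but illustrates how hubs arise). Each new node attaches to $ m $ existing nodes chosen with probability proportional to their current degree:

```python
import random

# Sketch of growth with preferential attachment (Barabasi-Albert).
# The list `repeated` contains one entry per link end, so choosing
# uniformly from it selects nodes in proportion to their degree.

def preferential_attachment(n=2000, m=3, seed=2):
    rng = random.Random(seed)
    degrees = [0] * n
    repeated = list(range(m))           # seed the selection pool
    targets = set(range(m))             # first new node links to nodes 0..m-1
    for new in range(m, n):
        for t in targets:
            degrees[new] += 1
            degrees[t] += 1
            repeated.append(new)
            repeated.append(t)
        targets = set()                 # pick m distinct, degree-biased targets
        while len(targets) < m:
            targets.add(rng.choice(repeated))
    return degrees

deg = preferential_attachment()
avg_deg = sum(deg) / len(deg)           # close to 2m = 6
max_deg = max(deg)                      # a hub, far above the average
```

The resulting network has many nodes with degree close to $ m $ and a few hubs whose degree exceeds the average many times over, in line with the inhomogeneity described above.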
The third topological quantity which is measurable is known as the clustering coefficient [@2; @22]. The coefficient is a measure of the tendency of the nodes of the network towards clustering. In a social network, the individuals are the nodes and two nodes are connected by a link if the individuals are acquainted with each other. In such a network, one’s friend’s friends are also likely to be one’s friends giving rise to a clustering of acquaintances. The clustering coefficient is defined in the following manner. Let us select a specific node $ i $ in the network which is connected by $ k_{i} $ links to $ k_{i} $ other nodes. If these first neighbours were all connected to one another, there would be $ \frac{k_{i}(k_{i}-1)}{2} $ links between them. The clustering coefficient $ C_{i} $ of node $ i $ is given by
$$\label{2}
C_{i}=\frac{2E_{i}}{k_{i}(k_{i}-1)}$$
where $ E_{i} $ is the number of actual links which exist between the $ k_{i} $ nodes. The clustering coefficient C of the whole network is obtained by taking an average over all the $ C_{i} $ values. The utility of the clustering coefficient is demonstrated in the following example. The neural network of the nematode worm C. elegans is small in size [@2; @24]. The number of neurons which constitute the nodes of the network is 282. A link exists between two nodes if the neurons are connected by either a synapse or a gap junction. The average degree of the network is $ \left\langle k\right\rangle = $ 14. Now consider a random graph of the same size and average degree. The average path lengths for the neural network and the random graph are similar, 2.65 and 2.25 respectively. Is the neural network then a random graph? The answer is no, as the clustering coefficient of the former has the value 0.28 which is much larger than the value 0.05 in the case of the latter network. Examples of real life networks which are scale-free are [@2; @25]: the collaboration graph of movie actors (size $ N $ of the network = 212 250 nodes, average degree $ \left\langle k\right\rangle = $ 28.78, the exponent $ \gamma $ in Eq.(1) is $ \gamma $ = 2.3 ), the WWW ($ N= $ 325 729, $ \left\langle k\right\rangle = $ 5.46, $ \gamma = $ 2.1) and the network of citations ($ N= $ 783 339 papers, $ \left\langle k\right\rangle = $ 8.57, $ \gamma = $ 3). The results are obtained from available databases. A more comprehensive and up to date list of networks is given in Ref. [@2].
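The clustering coefficient of Eq. (2) is straightforward to compute from an adjacency structure; the small worked example below (ours) is a triangle with one pendant node attached.

```python
# Average clustering coefficient, Eq. (2): for node i with k_i >= 2
# neighbours, C_i = 2 E_i / (k_i (k_i - 1)), where E_i is the number of
# links that actually exist between the k_i neighbours.

def average_clustering(adj):
    cs = []
    for i, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue                    # C_i undefined for k_i < 2
        e = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        cs.append(2.0 * e / (k * (k - 1)))
    return sum(cs) / len(cs)

# Triangle 0-1-2 with a pendant node 3 attached to node 0:
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
c = average_clustering(adj)             # (1/3 + 1 + 1) / 3 = 7/9
```

Node 0 has three neighbours but only one link among them ($ C_{0}=1/3 $), while nodes 1 and 2 each sit in a closed triangle ($ C=1 $); the pendant node is excluded since $ C_{i} $ is undefined for degree one.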
We now discuss some complex, biological networks. Recently, Jeong et al [@4] have systematically investigated the topological properties of the core metabolic networks of 43 different organisms representing all three domains of life. The data on these organisms are available in the WIT (What Is There) database. As already mentioned in the Introduction, the nodes of the metabolic network are the different substrates. Two substrates are connected by a link if they participate in the same biochemical reaction. The metabolic networks have different sizes, the less complex organisms having smaller sizes. There is considerable variation in the individual constituents and the pathways of the networks. Yet they display identical topological scaling properties which resemble those of complex non-biological networks. The metabolic networks have been found to belong to the class of scale-free networks. The probability that a substrate participates in k reactions has a power-law distribution. The links in a metabolic network are directed as many biochemical reactions are preferentially catalysed in one direction. For each node, one has to distinguish between incoming and outgoing links. Correspondingly, there are two exponents $ \gamma _{in} $ and $ \gamma _{out} $ . The exponents turn out to have the same value of 2.2.
In the metabolic network, the distance between two substrates is given by the number of links (reactions) in the shortest biochemical pathway connecting the two substrates. A surprising result obtained by Jeong et al is that the diameter of the metabolic network is the same for all the 43 organisms, i.e., it does not depend upon the number of substrates (nodes) belonging to the network. This is counterintuitive and only possible if with increasing organism complexity pre-existing individual substrates are increasingly connected in order to maintain a more or less constant network diameter. In support of this conjecture, Jeong et al found that the average number of reactions in which a certain substrate participates increases as the number of substrates in the organism increases. Conservation of the network diameter may be favourable for the survival and growth of an organism. A larger diameter would possibly diminish the organism’s ability to respond to changes in an efficient manner.
The scale-free character of the metabolic network implies that a few hubs which are highly connected play a dominant role in the functioning of the network. On sequential removal of these highly-connected nodes, the network diameter rises sharply and ultimately the network disintegrates into isolated fragments. On the other hand, the network diameter does not change appreciably when the nodes with a few links are removed from the network. Scale-free networks, in general, are robust against random mutations/errors but vulnerable to attacks targeted at highly connected nodes. Complex communication networks are surprisingly robust; local failures rarely hamper the global transmission of information. Organisms can grow and survive in hostile environments due to the error tolerance of the underlying metabolic network. A random graph is not as robust against random mutations/errors. Mutagenesis studies in-silico and in-vivo [@26] have established the remarkable error tolerance of the metabolic network of E.coli on removing a large number of metabolic enzymes. Jeong et al, in their study of the metabolic networks of organisms, found that only $ \sim 4\% $ of all the substrates occurring in the 43 organisms are present in every species. The striking fact is that these few substrates, common to all species, turn out to be the most highly connected ones. On the other hand, there are species-specific differences in the case of less-connected substrates.
Jeong et al in a separate study [@5] have investigated the protein-protein interaction network of the yeast S.cerevisiae. The network has 1870 proteins as nodes which are linked by 2240 direct physical interactions identified mostly by systematic two-hybrid experiments. Actual measurements show that the probability $ P(k) $ that a given yeast protein interacts with $ k $ other yeast proteins has a power-law distribution with an exponential cutoff at $ k_{c}= $ 20.
$$\label{3}
P(k)\sim (k+k_{0})^{-\gamma }e^{-\frac{(k+k_{0})}{k_{c}}}$$
with $ k_{0}= $ 1 and $ \gamma = $ 2.4. The protein-protein interaction network of the bacterium H. Pylori [@7] displays similar topology. For the metabolic networks, the exponent $ \gamma $ has the value 2.2. The value of $ \gamma $ falls in the range 2.0-2.5 for many scale-free networks. Like the metabolic network and other scale-free networks, the protein-protein interaction network is found to be immune to random mutations. The removal of highly connected nodes may, however, disrupt the network function. The protein product of the p53 tumor-suppressor gene is one of the most highly connected proteins found in human cells. Mutations of the p53 gene therefore affect cellular functions severely and are of major biomedical importance.
In fact, Jeong et al’s study on the protein-protein interaction network in yeast shows that proteins with five or fewer links constitute $ \sim $ 93% of the total number of proteins but only $ \sim $ 21% of them are essential, i.e., their removal proves to be lethal. In contrast, only $ \sim $ 0.7% of the total number of proteins have more than 15 links but single deletion of $ \sim $ 62% of these severely affects the functioning of the network. It is possible that the proteins which constitute the highly connected nodes in a network share common structural features. These features favour the binding of many different types of proteins to the proteins in question. The scale-free character of both the metabolic and protein-protein interaction networks suggests the evolutionary selection of a common large scale structure of biological networks. Studies of other biological networks are expected to provide further evidence for this idea.
Nonlinear dynamics
==================
The dynamics of gene regulatory networks are described by coupled nonlinear ordinary differential equations (o.d.e.’s) which can be collectively represented as
$$\label{4}
\frac{dX(t)}{dt}=f(X,R)$$
where $ X(t) $ is the $ N $ -component state vector $ (X_{1}(t),...,X_{N}(t)) $ and $ f $ is a set of nonlinear functions $ f_{1}(X,R),....,f_{N}(X,R) $ . There are thus N coupled o.d.e.’s and an individual o.d.e. is of the form
$$\label{5}
\frac{dX_{i}(t)}{dt}=f_{i}(X_{1},....X_{N},R)$$
There are in total N species of biochemical molecules participating in M reactions. $ X_{i}(t) $ $ (i=1,...,N) $ represents the concentration of the ith molecular species at time $ t $ . R represents a set of control parameters. The functions $ f_{i}'s $ are nonlinear functions of the $ X_{i}'s $ and the specific forms of the functions are determined by the structures and rate constants of the M chemical reactions. As an example consider the set of reactions
$$\label{6}
P\rightarrow A$$
$$\label{7}
A\rightarrow B$$
$$\label{8}
A+2B\rightarrow 3B$$
$$\label{9}
B\rightarrow C$$
The reactions represent the conversion of the precursor species $ P $ into a final product $ C $ via a sequence of four reactions involving two intermediates $ A $ and $ B $ . The third reaction is autocatalytic as $ B $ catalyses its own production. The second reaction represents the uncatalysed conversion of $ A $ to $ B $ and the last reaction shows that the catalyst $ B $ decays into the product $ C $ . The reactions are assumed to be irreversible. Also, the concentration of the reactant $ P $ is assumed to be constant over a reasonable period of time. This is possible if the initial concentration of $ P $ is large. Let $ p_{0} $ (constant), $ a $ and $ b $ denote the concentrations of the molecular species $ P $ , $ A $ and $ B $ . The decay product $ C $ does not participate in any further reaction and so does not influence the chemical kinetics. The rates of the four successive chemical reactions are $ k_{0}p_{0} $ , $ k_{u}a $ , $ k_{c}ab^{2} $ and $ k_{d}b $ respectively where $ k_{0} $ , $ k_{u} $ , $ k_{c} $ , $ k_{d} $ are the rate constants. The equations governing the chemical kinetics are
$$\label{10}
\frac{da}{dt}=k_{0}p_{0}-k_{c}ab^{2}-k_{u}a$$
$$\label{11}
\frac{db}{dt}=k_{u}a+k_{c}ab^{2}-k_{d}b$$
In the general scheme of o.d.e.’s shown in Eq. (5), $ N $ = 2, i.e., there are two molecular species $ A $ and $ B $ participating in $ M $ = 4 chemical reactions. $ X_{1}=a $ and $ X_{2}=b $ are the concentrations of the molecules $ A $ and $ B $. The r.h.s.’s of Eqs. (10) and (11) are the nonlinear functions $ f_{1}(X_{1},X_{2}) $ and $ f_{2}(X_{1},X_{2}), $ the nonlinearity arising from the autocatalytic term $ k_{c}ab^{2} $ . The rate constants together with $ p_{0} $ constitute the control parameters $ R $.
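Eqs. (10) and (11) can be integrated numerically; in the sketch below the rate constants are illustrative values of ours, chosen so that the steady state is stable (for other parameter values this autocatalytic scheme can oscillate). Adding the two equations shows that in the steady state $ k_{0}p_{0}=k_{d}b^{*} $, so $ b^{*}=k_{0}p_{0}/k_{d} $, and setting Eq. (10) to zero gives $ a^{*}=k_{0}p_{0}/(k_{u}+k_{c}b^{*2}) $.

```python
# Euler integration of Eqs. (10)-(11) with illustrative rate constants.

def integrate(k0p0=0.2, ku=0.1, kc=1.0, kd=1.0,
              a0=0.0, b0=0.0, dt=0.001, steps=100000):
    a, b = a0, b0
    for _ in range(steps):
        da = k0p0 - kc * a * b * b - ku * a    # Eq. (10)
        db = ku * a + kc * a * b * b - kd * b  # Eq. (11)
        a += dt * da
        b += dt * db
    return a, b

a_ss, b_ss = integrate()
b_star = 0.2 / 1.0                      # k0*p0 / kd
a_star = 0.2 / (0.1 + 1.0 * b_star ** 2)
```

The numerical trajectory relaxes (through damped oscillations) to the analytic steady state, confirming the fixed-point analysis described next.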
In the general case, imagine an abstract $ N $ - dimensional state space with axes $ X_{1},....,X_{N} $ . The state of the system at any instant of time, say $ t_{0} $, is given by the $ N $ -component state vector $ X(t_{0}) $. In the state space, this state is represented by a single point. The time evolution of the system gives rise to a trajectory in the state space. The trajectory may end up at a fixed point $ X^{*} $. At this point, the rates of change of all the variables in the system are exactly zero, i.e., the l.h.s.’s of the $ N $ equations in Eq.(5) are zero. The system is said to be in the steady state at the fixed point. At this point, the state of the system remains unchanged as a function of time. The only way of changing the state of the system is to apply perturbations to it. A fixed point is stable if small perturbations around the point eventually damp out. The stable fixed point acts as an attractor to the states in its vicinity. The corresponding region in the state space is called the basin of attraction. The nonlinear dynamics may give rise to more than one fixed point. If there are two stable fixed points, the system is bistable, i.e., two stable steady states are possible. One can similarly define multistability.
The other long-term possibilities for the trajectory in the state space are a limit cycle and a strange attractor. In the first case, the trajectory goes towards a closed loop and eventually circulates around it forever. In physical terms, this corresponds to stable oscillations in the system. The strange attractor is a set of states to which the trajectory is confined, never stopping or repeating. Such aperiodic motion is often indicative of chaos in the system. We now discuss the role of the control parameters R (Eqs. (4) and (5)) in the nonlinear dynamics of a system. By varying these parameters, one can bring about changes in the qualitative structure of the dynamics. Such changes are known as bifurcations. For example, as a parameter is changed, a steady state can become unstable and be replaced by stable oscillations. A system with one stable steady state changes over to multistability, i.e., the system can exist in multiple steady states. To give a simple example of bifurcation, consider the rate equation
$$\label{12}
\frac{dx}{dt}=\mu x-x^{2}$$
There are two fixed points of this equation: $ x^{*} $ = 0 and $ x^{*}=\mu $. To determine the stability of the fixed points, one undertakes what is known as the linear stability analysis. One determines the time evolution of a small perturbation $ \delta x(t)(=x(t)-x^{*}) $ around the fixed point. By substituting $ x(t)=x^{*}+\delta x(t) $ in Eq.(12) and ignoring terms of the order of $ (\delta x(t))^{2} $ , one obtains $ \delta x(t)\sim e^{\mu t} $ when $ x^{*} $ = 0. The fixed point is stable if $ \mu <0 $ since $ \delta x(t) $ reduces to zero during time evolution. The fixed point is unstable if $ \mu >0 $ and $ \mu _{c} $ = 0 is the bifurcation point. If $ x^{*}=\mu $, then $ \delta x(t)\sim e^{-\mu t} $ . Hence the fixed point is unstable if $ \mu <0 $ and stable for $ \mu >0 $. Different types of bifurcation are possible; a detailed discussion is given in standard textbooks and reviews [@27; @28; @29] on nonlinear dynamics.
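The linear stability analysis of Eq. (12) is easily verified numerically (a short sketch of ours): for $ \mu >0 $ a small perturbation of the fixed point $ x^{*} $ = 0 grows and the trajectory relaxes to the stable fixed point $ x^{*}=\mu $, while for $ \mu <0 $ the perturbation decays back to zero.

```python
# Euler integration of Eq. (12), dx/dt = mu*x - x^2, starting from a
# small perturbation of the fixed point x* = 0.

def evolve(x0, mu, dt=0.001, steps=20000):
    x = x0
    for _ in range(steps):
        x += dt * (mu * x - x * x)
    return x

x_pos = evolve(0.01, 1.0)    # mu > 0: x* = 0 unstable, flows to x* = mu = 1
x_neg = evolve(0.01, -1.0)   # mu < 0: x* = 0 stable, perturbation decays
```

The two runs land on opposite sides of the bifurcation point $ \mu _{c} $ = 0, matching the exchange of stability derived above.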
If there is more than one stable fixed point, a switch-like behaviour is possible. In the case of bistability, the system remains in one stable state until a sufficiently large perturbation drives the system to the other stable state. The system continues to remain in the latter state even after the perturbation is removed. The $ \lambda $-phage lysis-lysogeny network offers an example of bistability [@9]. The lytic and the lysogenic states are the two possible steady states. A transition from the lysogenic to the lytic state occurs on irradiating with ultra-violet light. In a gene regulatory network, a negative (positive) feedback implies that a gene product inhibits (promotes) its own level of activity. To give an example, a protein which represses the transcription of its own gene operates through negative feedback. It has been found that negative (positive) feedback increases (decreases) stability in gene regulatory systems [@30]. Real life gene regulatory networks are often complex. Some of the examples are the $ \lambda $-phage lysis-lysogeny circuit, the regulatory network for the activation of the tumour-suppressor protein p53 [@31] and the bacteriophage T7 (another lytic phage which infects E.coli ) network [@32]. Computational modelling studies of these networks have been undertaken with a view to explaining experimental results. The quantitative agreement between theory and experiment is most often not good. The reasons are twofold: the complex nature of the networks and the difficulty in carrying out actual experiments on them. Computational as well as mathematical modelling of simpler networks is more extensive. Such networks incorporate the essential features of their more complex counterparts. The models seek to explain experimental results at a qualitative level. There are also abstract mathematical models of gene expression/regulation which highlight the general principles and their outcomes.
There are already some good reviews and books on the computational and mathematical modelling of gene regulatory networks [@9; @33; @34]. For the purpose of this review, we pick just one example, that of a synthetic gene regulatory network which illustrates the importance of nonlinearity in the dynamics of the network.
Gardner et al [@11] have constructed and tested a synthetic, bistable gene regulatory network based on the predictions of a simple mathematical model. The network is called a genetic toggle switch and consists of two repressors (proteins) and two promoters. The enzyme RNA polymerase (RNAP) binds to the promoter region of a DNA sequence to initiate the process of transcription. The initial binding of RNAP to a promoter can be prevented by the binding of a regulatory protein to an overlapping segment of DNA, called operator. The gene expression is off in this case. Fig. 2 shows a simple sketch of the toggle network. The two promoters are designated as $ P_{L} $ and $ Ptrc-2 $ . $ P_{L} $ drives the expression of the $ lacI $ gene and $ Ptrc-2 $ that of the $ cI $ gene. The $ lacI $ and $ cI $ genes express the proteins of the same names. The proteins mutually inhibit the production of each other, hence the name repressor. The $ lacI $ proteins form tetramers and the tetramer binds to operator sites adjacent to the $ Ptrc-2 $ promoter, blocking the transcription of the $ cI $ gene in the process. The $ cI $ proteins, when produced, form dimers. The repressor dimer cooperatively binds to the operator sites in the vicinity of the $ P_{L} $ promoter. As a result, transcription of the $ lacI $ gene is not possible.
The nonlinear dynamics of the toggle network are governed by the following two equations:
$$\label{13}
\frac{dU}{dt}=\frac{\alpha _{1}}{1+V^{\beta }}-U$$
$$\label{14}
\frac{dV}{dt}=\frac{\alpha _{2}}{1+U^{\gamma }}-V$$
where $ U $ and $ V $ are the concentrations of $ lacI $ and $ cI $ proteins respectively, $ \alpha _{1} $ and $ \alpha _{2} $ are the effective rates of synthesis of $ lacI $ and $ cI $ proteins, $ \beta $ is the cooperativity of repression of the $ P_{L} $ promoter and $ \gamma $ the same in the case of the $ Ptrc-2 $ promoter. Fig. 3 reveals the origin of bistability in the system. The nullclines $ \frac{dU}{dt} $ = 0 and $ \frac{dV}{dt} $ = 0 intersect at three points. These are the fixed points (steady states) of the dynamics. Two of the fixed points are stable and the third unstable. The bistability occurs provided $ \beta ,\gamma $ > 1 (cooperative repression of transcription) and the rates of synthesis of the two repressors are balanced. If the rates are not balanced, the nullclines intersect at a single point giving rise to a single stable steady state (monostability).
In the region of bistability, the two stable steady states correspond to (1) State 1 (high $ V $ / low $ U $ ) and (2) State 2 ( low $ V $ / high $ U $ ) respectively. There are two basins of attraction, one above the separatrix and the other below it. In the $ log(\alpha _{1}) $ vs. $ log(\alpha _{2}) $ parameter space, bifurcation lines separate the monostable and bistable regions [@11]. The size of the bistable region decreases on reducing the cooperativity of repression ($ \beta $ and $ \gamma $ ). The parameters $ \alpha _{1},\alpha _{2},\beta $ and $ \gamma $ act as the control parameter R changing which a transition (bifurcation) between monostability and bistability occurs.
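The bistability of Eqs. (13) and (14) can be demonstrated by direct integration (the parameter values below are illustrative choices of ours, with balanced synthesis rates and cooperativities $ \beta ,\gamma $ > 1): two different initial conditions relax to the two different stable states.

```python
# Euler integration of the toggle-switch equations (13)-(14).

def toggle(u0, v0, a1=10.0, a2=10.0, beta=2.0, gamma=2.0,
           dt=0.01, steps=10000):
    u, v = u0, v0
    for _ in range(steps):
        du = a1 / (1.0 + v ** beta) - u       # Eq. (13)
        dv = a2 / (1.0 + u ** gamma) - v      # Eq. (14)
        u += dt * du
        v += dt * dv
    return u, v

u1, v1 = toggle(5.0, 0.1)   # lacI initially dominant -> State 2 (high U / low V)
u2, v2 = toggle(0.1, 5.0)   # cI initially dominant  -> State 1 (low U / high V)
```

With $ \beta =\gamma = $ 1 (no cooperativity) the same integration finds only a single steady state, consistent with the requirement of cooperative repression for bistability stated above.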
In the region of bistability, the toggle is flipped between the stable states (States 1 and 2) using transient chemical or thermal induction. The chemical agent isopropyl-$ \beta $ -D-thiogalactopyranoside (IPTG) can bind to $ lacI $ tetramers. As a result, the latter cannot bind to the operator region in the neighbourhood of the promoter $ Ptrc-2 $, i.e., $ lacI $ can no longer repress the production of the $ cI $ proteins. Suppose the bistable system is originally in the stable State 2 (high $ U $ ( $ lacI $ ), low $ V $ ( $ cI $ )). On the induction of IPTG, the concentration of the $ cI $ proteins increases as a function of time. The $ cI $ proteins in their turn repress the production of $ lacI $ proteins the concentration of which begins to fall. The dynamics ultimately leads the system to the other fixed point (State 1). The system remains in this stable steady state (low $ U $ / high $ V $ ) even after the removal of the IPTG stimulus. How can the toggle flip back to State 2 ? This is achieved by using a temperature-sensitive $ cI $ protein in the network. The degradation rate of this protein increases as temperature is raised. On raising the temperature to $ 42^{\circ }C $ (actual experiment), the concentration of $ cI $ proteins starts to fall. Since repression is less, the concentration of $ lacI $ proteins starts to go up.
The system finally reaches the fixed point corresponding to the stable steady State 2. After the steady state is reached, the temperature of the system is reduced ($ 32^{\circ }C $ in the experiment). The system continues to remain in the steady State 2. A full cycle of the switching process is now completed. The actual construction of the toggle switch has been accomplished in E.coli using the standard tools of molecular biology [@11]. There is a reasonable agreement between the theoretical predictions based on Eqs.(13) and (14) and the results obtained from experiments on the synthetic toggle network. The design of the network relies significantly on theoretical inputs like identification of the region of bistability, increasing the cooperativity in repression ($ \beta $ and $ \gamma $ ) to achieve bistability over a wider region in parameter space etc. As a practical device, the toggle switch may have applications in biotechnology, biocomputing and gene therapy. As a cellular memory unit, the toggle provides the basis for “genetic applets” which are self-contained, programmable synthetic gene networks used in the control of cell functions. In parallel with the toggle work, another synthetic network, the repressilator, has been designed and tested [@35]. The repressilator dynamics are again nonlinear and give rise to oscillations in the concentrations of the cellular proteins. The design of the network is based on a simple mathematical model of transcriptional regulation. The repressilator provides insight into the design principles of other oscillatory systems such as circadian clocks found in many organisms including cyanobacteria. The genetic toggle switch and the repressilator demonstrate that theoretical models can provide the design criteria for the actual construction of synthetic, gene regulatory networks.
These simple networks have applications as practical devices and also help us to understand the functional properties of the more complex, naturally-occurring networks.
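The toggle dynamics can be sketched numerically. Since Eqs. (13) and (14) are not reproduced in this section, the sketch below assumes the standard two-variable form of the toggle equations, $ \dot{U}=\alpha_{1}/(1+V^{\beta })-U $ and $ \dot{V}=\alpha_{2}/(1+U^{\gamma })-V $, with illustrative (not experimental) parameter values; IPTG induction is modelled crudely by switching off the repression of $ cI $ production by $ lacI $:

```python
def toggle_rhs(u, v, alpha1=10.0, alpha2=10.0, beta=2.0, gamma=2.0, iptg=False):
    # With IPTG bound to lacI, U can no longer repress the production of V.
    du = alpha1 / (1.0 + v**beta) - u
    dv = alpha2 / (1.0 if iptg else 1.0 + u**gamma) - v
    return du, dv

def integrate(u, v, t_end, dt=0.01, **kw):
    # Simple forward-Euler integration of the toggle equations.
    for _ in range(int(t_end / dt)):
        du, dv = toggle_rhs(u, v, **kw)
        u, v = u + dt * du, v + dt * dv
    return u, v

u, v = integrate(10.0, 0.1, t_end=20.0)          # settles in State 2 (high U)
u2, v2 = integrate(u, v, t_end=20.0, iptg=True)  # transient IPTG induction
u3, v3 = integrate(u2, v2, t_end=20.0)           # stimulus removed
```

After the pulse is removed the system remains in State 1 (low $ U $, high $ V $): this hysteresis is what makes the toggle usable as a memory unit.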
Nonlinear dynamics can give rise to various types of instability, one of which is the Turing instability. In 1952, Turing [@36] in a seminal paper proposed a mechanism for pattern formation in biological systems as well as the development of structure during the growth of an organism. Examples of biological patterns are the spots on the skin of a leopard, the stripes of a zebra, the arrangement of veins on the leaves of a tree etc. Structure formation is initiated by the process of cell differentiation; an example is the emergence of limbs as an organism grows from the featureless embryonic stage. The Turing mechanism involves both reaction as well as diffusion processes. To illustrate the mechanism, consider two chemical agents, the activator and the inhibitor. The activator is autocatalytic, i.e., it promotes its own production as well as that of the inhibitor. The inhibitor, as the name implies, is antagonistic to the activator and represses its production. Both the chemicals can diffuse but the inhibitor has a much larger diffusion coefficient. Consider a homogeneous distribution of the activator and the inhibitor in the system. Increase the concentration of the activator by a small amount in a local region. This gives rise to further increases in the local concentrations of the activator and the inhibitor. The inhibitor quickly diffuses to the surrounding region and prevents the activator from reaching there. Thus, in the steady state, islands of high activator concentration exist in a sea of high inhibitor concentration. The islands constitute what is known as the Turing pattern. Diffusion in general smooths out concentration differences in a system but the Turing process involving both reaction and diffusion gives rise to a steady pattern of concentration gradients. There is now increasing evidence that chemical gradients play a crucial role in the formation of patterns and cell differentiation in biological systems.
To give an example, the protein bicoid has been found to have a graded concentration distribution in the Drosophila melanogaster embryo. It is responsible for the organization of the anterior half of the fly and has been fully characterised [@37; @38]. Many reaction-diffusion (RD) models have been proposed based on the Turing mechanism and some of these can reproduce the patterns observed in nature [@39; @40; @41; @42]. The basic scale of a pattern is larger than the size of an individual cell and so the RD processes involve more than one cell. Cells possibly choose developmental pathways depending upon their location in the concentration gradient. Position-dependent activation of genetic switches in the cells may constitute an important step in both pattern and structure formation. Direct evidence for this, in terms of a detailed characterization of the genes involved and an identification of the actual biochemical reactions occurring in the cells, is, however, yet to be obtained. Turing patterns have so far been experimentally observed in certain chemical RD systems in the laboratory [@43; @44] and also in some biological systems [@45; @46].
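The requirement of a fast-diffusing inhibitor can be made quantitative by linearizing the reaction-diffusion equations about the homogeneous steady state. The sketch below uses the Jacobian of an illustrative activator-inhibitor model of the Gierer-Meinhardt type (our choice; Turing's argument is general) and shows that a band of perturbation wavenumbers grows only when the inhibitor diffuses much faster than the activator:

```python
import numpy as np

# Jacobian of illustrative Gierer-Meinhardt kinetics (da/dt = a**2/h - a,
# dh/dt = a**2 - mu*h) at the homogeneous steady state a* = h* = mu.
mu = 1.4
J = np.array([[1.0, -1.0],
              [2.0 * mu, -mu]])

def max_growth_rate(Da, Dh, ks):
    """Largest Re(lambda) of J - k^2 diag(Da, Dh) over wavenumbers k."""
    return max(np.linalg.eigvals(J - np.diag([Da, Dh]) * k**2).real.max()
               for k in ks)

ks = np.linspace(0.0, 10.0, 400)
equal = max_growth_rate(0.02, 0.02, ks)   # equal diffusion: always decaying
turing = max_growth_rate(0.02, 1.0, ks)   # fast inhibitor: a band of k grows
```

A positive maximal growth rate at finite wavenumber, together with a stable homogeneous state at $ k=0 $, is the Turing instability; the band of unstable wavelengths sets the basic scale of the pattern.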
Effect of stochasticity
=======================
As already mentioned in the Introduction, stochastic fluctuations in the dynamics of the gene regulatory network lead to a probabilistic selection of developmental pathways. The $ \lambda $-phage lysis-lysogeny network [@47] was discussed as an example. Figure 4 shows some of the key components of the network. The complexity of the full network is captured in Figure 1 of Ref. [@13]. We confine our attention to the simpler network. It consists of two $ \lambda $-phage genes $ cI $ and $ cro $. The corresponding promoters are $ P_{RM} $ and $ P_{R} $ respectively. Transcription of the gene $ cI $ ( $ cro $ ) expresses the regulatory protein $ \lambda $ repressor ( $ Cro $ ). Both the proteins are capable of binding to the operator regions $ O_{R}1 $ , $ O_{R}2 $ and $ O_{R}3 $. They act antagonistically to control promoter activity. Transcription of the $ cI $ gene, initiated from the promoter $ P_{RM} $ , takes place whenever there is no protein of either type binding to $ O_{R}3 $ . The $ \lambda $ repressor molecule has a dumbbell shape and there is a tendency for two such molecules to bind and form a dimer. The operator region $ O_{R}1 $ has the highest affinity for the binding of a $ \lambda $ repressor dimer. The binding increases the affinity of $ O_{R}2 $ for a second repressor dimer, i.e., a cooperative binding of dimers to the operator regions $ O_{R}1 $ and $ O_{R}2 $ takes place. The $ \lambda $ repressor has both negative and positive control. If the $ \lambda $ repressor is present at $ O_{R}2 $ , transcription of the $ cro $ gene is not possible. This is because the repressor covers part of the $ DNA $ that a $ RNAP $ molecule must have access to in order to recognize the promoter $ P_{R} $ , bind to it and initiate the transcription of the $ cro $ gene. The same repressor at $ O_{R}2 $ exhibits positive control in helping a RNAP molecule to bind to the promoter $ P_{RM} $ and begin transcription of the $ cI $ gene.
The increase in the transcription rate is approximately tenfold [@47]. If $ O_{R}2 $ is not occupied by the repressor, the transcription rate of the $ cI $ gene is low. The reason for the dramatic increase in the transcription rate is the following. The presence of a repressor dimer bound to $ O_{R}2 $ leads to an increased affinity of $ P_{RM} $ for $ RNAP $ because the polymerase is held at $ P_{RM} $ not only by its contacts with the $ DNA $ but also due to the protein-protein contact with the repressor. In summary, a repressor dimer bound to $ O_{R}2 $ , represses transcription from $ P_{R} $ but promotes transcription at $ P_{RM} $ .
The $ cro $ gene is transcribed only when the operator region $ O_{R}3 $ is either empty or has a $ Cro $ dimer bound to it. The transcription of the $ cI $ gene cannot take place if the $ O_{R}1 $ and $ O_{R}2 $ regions are occupied by either protein, $ \lambda $ repressor or $ Cro $. In the lysogenic state, all the phage genes are off except for one gene, $ cI $, which produces the protein $ \lambda $ repressor. The protein in turn binds to the operators $ O_{R}1 $ and $ O_{R}2 $ in the form of dimers and activates the transcription of its own gene at $ P_{RM} $. The bound $ \lambda $ repressor dimers further prevent transcription initiation at $ P_{R} $ . Irradiation of the lysogen with ultra-violet light inactivates the $ \lambda $ repressor, making the synthesis of the second regulatory protein $ Cro $ possible. $ Cro $ promotes lytic growth and competes with the $ \lambda $ repressor in occupying the same operator sites. Increased $ Cro $ production leads to a greater probability of $ Cro $ binding at $ O_{R}3 $ which prevents the initiation of transcription at $ P_{RM} $ . The concentration of the $ \lambda $ repressor starts to fall as a result. The concentration of $ Cro $ proteins increases and when it reaches a level such that the operator regions $ O_{R}1 $ and $ O_{R}2 $ begin to be occupied, the transcription at $ P_{R} $ is also halted. The switchover from the lysogenic to the lytic state is also possible through $ recA $-mediated degradation of the $ \lambda $ repressor ( $ recA $ is a catalytic protein).
Arkin et al [@13] have analysed the stochastic kinetics of the full $ \lambda $-phage network which consists of more genes and regulatory elements than shown in Figure 4. Their detailed investigations show that fluctuations in the rates of gene expression give rise to random patterns of protein production in individual cells and wide diversity in instantaneous protein concentrations across cell populations. Each cell has two developmental pathways: lytic and lysogenic. The pathway selection depends upon which protein, $ \lambda $ repressor or $ Cro $, takes control of the operator region. If it is the $ \lambda $ repressor, the lysogenic pathway is chosen. If the $ Cro $ takes control, the lytic pathway is selected. Due to stochastic fluctuations, the concentrations of $ \lambda $ repressor and $ Cro $ vary considerably from cell to cell tipping the balance in favour of one or the other pathway. As a result, initially homogeneous cell populations can partition randomly into distinct lytic and lysogenic subpopulations. Arkin et al have constructed a stochastic kinetic model of the $ \lambda $-phage circuit and based on model calculations predicted the fraction of infected cells selecting the lysogenic pathway at different phage:cell ratios. The theoretical results are consistent with the experimental results of Kourilsky [@48]. The kinetic model uses the stochastic formulation of chemical kinetics [@14; @15], stochastic mechanisms of gene expression [@12] and a statistical-thermodynamical model of promoter regulation [@49]. Probabilistic selection of developmental pathways occurs in several other gene regulatory networks producing stochastic phenotypic outcomes. Some examples are given in Table 4 of Ref. [@10].
We now describe the well-known Gillespie algorithm [@14; @15] which is increasingly being used by biologists in the stochastic kinetic approach to the study of gene expression and regulation in different systems. Let us consider a system of $ N $ chemicals participating in $ M $ reactions $ R_{\mu } $. The state of the system at any instant of time $ t $ is represented as $ (X_{1},...,X_{N}) $ where $ X_{i} $ is the number of molecules of the ith chemical species. Two questions have to be answered to determine how the system evolves in time: (1) when will the next reaction occur and (2) what type of reaction will it be? Let
$ C_{\mu }dt $ = the probability that an $ R_{\mu } $ $ (\mu =1,...,M) $ reaction occurs in the next infinitesimal time interval $ dt $ for a particular combination of the reactant molecules. Let $ h_{\mu } $ be the number of distinct combinations of molecules available in the state $ (X_{1},...X_{N}) $ for the $ R_{\mu } $ reaction.\
As an example, consider the reaction
$$\label{15}
A+B\rightarrow C$$
Let $ X_{1} $ and $ X_{2} $ be the number of molecules of types A and B respectively. Then $ h=X_{1}X_{2} $ . Let
$ a_{\mu }dt= $ $ h_{\mu }C_{\mu }dt $ be the probability that an $ R_{\mu } $ reaction occurs in time $ (t,t+dt) $ given the system is in the state $ (X_{1},...,X_{N}) $ at time $ t $.\
The reaction probability density function $ P(\tau ,\mu )d\tau $ is the probability that given the state $ (X_{1},...,X_{N}) $ at time $ t $, the next reaction will occur in the infinitesimal time interval $ (t+\tau ,t+\tau +d\tau ) $ and will be an $ R_{\mu } $ reaction,
$$\label{16}
P(\tau ,\mu )d\tau =P_{0}(\tau )a_{\mu }d\tau$$
where $ P_{0}(\tau ) $ is the probability that no reaction occurs in the time interval $ (t,t+\tau ) $ and $ a_{\mu }d\tau $ is the subsequent probability that an $ R_{\mu } $ reaction occurs in the time interval $ (t+\tau ,t+\tau +d\tau ) $. Now
$$\label{17}
P_{0}(\tau +d\tau )=P_{0}(\tau )\left[ 1-\sum ^{M}_{\nu =1}a_{\nu }d\tau \right]$$
where the expression inside the bracket is the probability that no reaction occurs in time $ d\tau $ from the state $ (X_{1},....,X_{N}) $. Eq. (17) can be solved to obtain
$$\label{18}
P_{0}(\tau )=exp\left[ -\sum ^{M}_{\nu =1}a_{\nu }\tau \right]$$
Substituting for $ P_{0}(\tau ) $ in Eq. (16), one gets
$$\label{19}
P(\tau ,\mu )=a_{\mu }exp\left( -a_{0}\tau \right)$$
if $ 0\leq \tau <\infty $, $ \mu =1,...,M $ and $ P(\tau ,\mu )=0 $ otherwise, where
$$\label{20}
a_{\mu }=h_{\mu }C_{\mu },(\mu =1,...,M)$$
and
$$\label{21}
a_{0}=\sum ^{M}_{\nu =1}a_{\nu }$$
Now the goal is to generate a pair of random numbers $ (\tau ,\mu ) $ according to the probability distribution (19). To do this, use the standard random number generator to obtain two random numbers $ r_{1} $ and $ r_{2} $ from the uniform distribution in the unit interval. Take
$$\label{22}
\tau =\frac{1}{a_{0}}ln\left( \frac{1}{r_{1}}\right)$$
and $ \mu $ is chosen to be the integer for which
$$\label{23}
\sum ^{\mu -1}_{\nu =1}a_{\nu }<r_{2}a_{0}\leq \sum ^{\mu }_{\nu =1}a_{\nu }$$
The pair of numbers $ (\tau ,\mu ) $, (Eqs. (22) and (23)), belongs to the set of random pairs described by the probability density function $ P(\tau ,\mu ) $. For a rigorous proof of this see Refs. [@14; @15]. Once $ (\tau ,\mu ) $ are known, put
$$\label{24}
t=t+\tau$$
and adjust the $ X_{i} $ values according to the $ R_{\mu } $ reaction. If the $ R_{\mu } $ reaction is the one shown in Eq.(15), both $ X_{1} $ and $ X_{2} $ have to be decreased by 1 and $ X_{3}, $ the number of molecules of C, increased by 1.
The input values at time $ t $ = 0 are $ h_{\nu },C_{\nu }(\nu =1,...,M) $ and the initial values of $ X_{i}(i=1,...,N) $. The steps of the Gillespie algorithm are:
Step 1\
Calculate $ a_{\nu }=h_{\nu }C_{\nu }(\nu =1,...,M) $ and $ a_{0}=\sum ^{M}_{\nu =1}a_{\nu } $.
Step 2\
Generate $ r_{1} $ and $ r_{2} $ with the help of a uniform random number generator. Calculate $ \tau $ and $ \mu $ according to the formulae in Eqs. (22) and (23).
Step 3\
Advance $ t $ by $ \tau $ (Eq.(24)) and adjust the $ X_{i} $ values according to $ R_{\mu } $. Then repeat the steps from Step 1 to further advance the system in time.
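The three steps above translate directly into code. The following is a minimal sketch of the direct method applied to the example reaction $ A+B\rightarrow C $ of Eq. (15); the function and variable names are our own:

```python
import math
import random

def gillespie(x, reactions, c, t_end, seed=0):
    """Direct-method stochastic simulation (Gillespie [@14; @15]).

    x         : list of molecule counts (X_1, ..., X_N), modified in place
    reactions : list of (h, stoich) pairs; h(x) returns the number h_mu of
                distinct reactant combinations, stoich the change in each X_i
    c         : list of stochastic rate constants C_mu
    """
    rng = random.Random(seed)
    t = 0.0
    while True:
        # Step 1: propensities a_mu = h_mu * C_mu and their sum a_0
        a = [h(x) * cm for (h, _), cm in zip(reactions, c)]
        a0 = sum(a)
        if a0 == 0.0:
            return t                              # no further reaction possible
        # Step 2: draw (tau, mu) from P(tau, mu) = a_mu * exp(-a_0 * tau)
        tau = math.log(1.0 / rng.random()) / a0   # Eq. (22)
        if t + tau > t_end:
            return t_end
        target, acc, mu = rng.random() * a0, 0.0, 0
        for i, ai in enumerate(a):                # Eq. (23)
            acc += ai
            if target <= acc:
                mu = i
                break
        # Step 3: advance time and apply the stoichiometry of R_mu
        t += tau
        for i, d in enumerate(reactions[mu][1]):
            x[i] += d

# The reaction of Eq. (15), A + B -> C, with h = X_1 * X_2:
x = [100, 80, 0]                                   # counts of (A, B, C)
gillespie(x, [(lambda s: s[0] * s[1], (-1, -1, +1))], c=[0.01], t_end=100.0)
```

For this rate constant and time window the reaction runs essentially to completion; throughout the run the counts conserve $ X_{A}-X_{B} $ and $ X_{A}+X_{C} $.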
Recently, Kierzek et al [@50] have used the Gillespie algorithm to study a stochastic kinetic model of prokaryotic gene expression. They explicitly considered ten biochemical reactions:
$$\label{25}
P+RNAP\rightarrow P_{-}RNAP$$
$$\label{26}
P_{-}RNAP\rightarrow P+RNAP$$
$$\label{27}
P_{-}RNAP\rightarrow TrRNAP$$
$$\label{28}
TrRNAP\rightarrow RBS+P+EIRNAP$$
where $ P $ denotes the promoter region of the gene and $ P_{-}RNAP $ the bound promoter-$ RNAP $ complex. Reaction 2 (Eq. (26)) describes $ RNAP $ dissociation and Reaction 3 the isomerization of “closed complex” to “open complex”; $ TrRNAP $ is the activated $ RNAP $-promoter complex. Reaction 4 describes clearance of the promoter region by $ RNAP $; $ EIRNAP $ stands for $ RNAP $ transcribing the gene and synthesizing the $ mRNA $ molecule and $ RBS $ is the ribosome binding site on $ mRNA $. The other reactions are:
$$\label{29}
Ribosome+RBS\rightarrow RibRBS$$
$$\label{30}
RibRBS\rightarrow RBS+Ribosome$$
$$\label{31}
RibRBS\rightarrow EIRib+RBS$$
$$\label{32}
RBS\rightarrow decay$$
$$\label{33}
EIRib\rightarrow protein$$
$$\label{34}
Protein\rightarrow decay$$
Reactions 5-10 (Eqs. (29)-(34)) describe translation, $ mRNA $ decay and protein degradation. Reaction 5 describes $ Ribosome $ binding to $ RBS $; the bound complex is designated as $ RibRBS $. Reaction 6 is the dissociation of the bound complex. Reaction 7 describes $ Ribosome $ binding site clearance; $ EIRib $ is the $ Ribosome $ which translates the $ mRNA $. Reaction 8 describes degradation of $ RBS $ by the enzyme $ RNAaseE $. $ RNAaseE $ and $ Ribosomes $ are in competition to occupy $ RBS $. If $ RNAaseE $ binds first, it initiates the degradation of $ mRNA $ but does not interfere with the movement of the already bound $ Ribosomes $ engaged in the process of translation. Every $ Ribosome $ which successfully binds to the $ RBS $ completes translation of the protein. Reaction 9 corresponds to the completion of protein synthesis. Reaction 10 represents protein decay. The stochastic rate constants $ C_{\mu }'s $ of the different reactions, needed as inputs to the Gillespie algorithm, can be calculated from the more familiar chemical rate constants listed in Kierzek et al’s paper [@50]. For first order chemical reactions, the stochastic rate constant is equal to the chemical rate constant. For second order reactions, the stochastic rate constant is equal to the chemical rate constant divided by the volume of the system (in this case a cell).
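The conversion rule quoted above can be stated concretely: for a first order reaction the stochastic rate constant equals the chemical one, while for a second order reaction one divides by the system volume; when the chemical rate constant is expressed in molar units, an Avogadro factor enters as well so that the propensity acts on molecule counts rather than concentrations. The numbers below are illustrative, not taken from Kierzek et al:

```python
AVOGADRO = 6.022e23   # molecules per mole

def stochastic_rate(k, order, volume_litres):
    """Convert a chemical rate constant into a stochastic rate constant C_mu."""
    if order == 1:    # k in s^-1: unchanged
        return k
    if order == 2:    # k in M^-1 s^-1: divide by N_A * V to act on counts
        return k / (AVOGADRO * volume_litres)
    raise ValueError("only first- and second-order reactions are handled here")

# Illustrative numbers: a near-diffusion-limited binding constant of
# 1e8 M^-1 s^-1 in a bacterium-sized volume of ~1e-15 litres.
c2 = stochastic_rate(1e8, 2, 1e-15)   # roughly 0.17 per reactant pair per second
```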
In the last part of this Section, we describe a cooperative stochastic model of gene expression proposed by us [@20]. As already explained in the Introduction, the model has been constructed to explain the bimodal distribution in gene expression observed in recent experiments. The model describes the transcription of a single gene with one promoter region. There is one operator region to which a regulatory protein $ R $ can bind. This prevents the binding of a $ RNAP $ to the promoter so that transcription of the gene cannot be initiated. There is a finite probability that the bound $ R $ molecule dissociates from the operator at any instant of time. A $ RNAP $ molecule then has a certain probability of binding to the promoter and initiating transcription.
Each of the possibilities described above actually involves a series of physico-chemical processes, a detailed characterization of which is not required for the model of gene expression proposed by us. We represent a gene by a one-dimensional lattice of $ n+2 $ sites. The first two sites represent the operator and promoter respectively. The lattice is a coarse-grained description of an actual gene. In reality, the operator and promoter regions may extend over a certain number of base pairs in the DNA and they can be overlapping or not. In our model, they are represented as single sites. Each of the other sites in the lattice represents a finite number of base pairs in the DNA molecule.
The different physico-chemical processes are lumped together into a few simple events which are random in nature. This lumping together avoids unnecessary complexity that has no bearing on the basic nature of the process. The operator $ (O) $ and the promoter $ (P) $ together can be in four possible configurations: $ 10,01,00 $ and $ 11 $. The numbers “$ 1 $” and “$ 0 $” stand for “occupied” and “unoccupied”. The configuration $ ij $ describes the occupation status of $ O $ $ (i) $ and $ P $ $ (j) $. For example, the configuration $ 10 $ corresponds to $ O $ being occupied by a $ R $ molecule and $ P $ being unoccupied. Similarly, in the configuration $ 01 $, $ O $ is unoccupied and $ P $ is occupied by a $ RNAP $ molecule. The binding of $ R $ and $ RNAP $ molecules is mutually exclusive so that the configuration $ 11 $ is strictly prohibited. Given a $ 00 $ configuration at time $ t $, the transition probabilities to configurations $ 10 $ and $ 01 $ at time $ t+1 $ are $ p_{1} $ and $ p_{2} $ respectively. The probability of remaining in the configuration $ 00 $ is $ 1-p_{1}-p_{2} $. A $ 10 $ configuration at time $ t $ goes to a $ 00 $ configuration at time $ t+1 $ with probability $ p_{3} $ and remains unchanged with probability $ 1-p_{3} $. We have assumed all the probabilities to be time-independent. The $ RNAP $ molecule once bound to the promoter initiates transcription in the next time step, i.e., the $ 01 $ configuration makes a transition to a $ 00 $ configuration with probability $ 1 $. The motion of $ RNAP $ is in the forward direction and the molecule covers a unit distance (the distance between two successive lattice sites) in each time step. Once the molecule reaches the last site of the lattice, the transcription ends and a $ mRNA $ is synthesized.
The second major feature of our model is the cooperative binding of $ RNAP $ to the promoter, when an adjacent $ RNAP $ molecule is present. This implies that there is a higher probability of binding of $ RNAP $ to the promoter in one time step if another $ RNAP $ molecule is present at the site next to the promoter. In our model, the probability of cooperative binding is $ p_{4} $ which is larger than $ p_{2} $. The probabilities $ p_{1} $ and $ 1-p_{1}-p_{2} $ are changed to new values $ p_{5} $ and $ 1-p_{4}-p_{5} $ respectively. Degradation of $ mRNA $ is taken into account by assuming the decay rate to be given by $ \mu N $, where $ N $ is the number of $ mRNAs $ present at time $ t $. The number of $ mRNAs $ produced as a function of time is studied by Monte Carlo simulation. For the sake of simplicity, we have not tried to simulate protein levels or enzymatic products thereof, i.e., we study gene expression up to the level of transcription ($ mRNA $ synthesis). Since the number of protein molecules and converted products should be proportional to the number of $ mRNA $ molecules, no loss of generality is introduced by this simplification. The lattice consists of 52 sites $ (n=50) $. Stochastic events are simulated with the help of a random number generator. The updating rule of our cellular automaton (CA) model is that in each time step $ t $, the occupation status ( $ 0 $ or $ 1 $) of each site (except for the $ O $ site) at time $ t-1 $ is transferred to the nearest-neighbour site towards the right. If the last site is $ 1 $ at $ t-1 $, a $ mRNA $ is synthesized at $ t $ and the number of $ mRNAs $ increases by one. In the same time step, the configuration $ ij $ of $ OP $ is determined with the probabilities already specified. Thus, in each time step, the $ RNAP $ molecule, if present on the gene, moves forward by unit lattice distance (progression of transcription) followed by the updating of the $ OP $ configuration.
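The rules above can be sketched as a short simulation (our own implementation, not the original code of Ref. [@20]; the decay rate $ \mu N $ is realized by letting each $ mRNA $ decay independently with probability $ \mu $ per time step):

```python
import random

# Site 0 is the operator O, site 1 the promoter P, sites 2..n+1 the gene body.
def simulate(p1, p2, p3, p4, p5, mu, n=50, steps=10000, seed=1):
    rng = random.Random(seed)
    gene = [0] * (n + 2)
    mrna = 0
    for _ in range(steps):
        if gene[-1]:                   # RNAP at the last site: mRNA completed
            mrna += 1
        for i in range(n + 1, 1, -1):  # shift RNAPs one site to the right
            gene[i] = gene[i - 1]
        gene[1] = 0                    # 01 -> 00 with probability 1
        # decay rate mu*N, sketched as independent decay with probability mu
        mrna = sum(1 for _ in range(mrna) if rng.random() > mu)
        coop = gene[2] == 1            # RNAP at the site next to the promoter?
        pr_rnap, pr_rep = (p4, p5) if coop else (p2, p1)
        if gene[0] == 1:               # config 10: R bound to the operator
            if rng.random() < p3:
                gene[0] = 0            # unbinding, 10 -> 00
        else:                          # config 00
            r = rng.random()
            if r < pr_rep:
                gene[0] = 1            # R binds O, 00 -> 10
            elif r < pr_rep + pr_rnap:
                gene[1] = 1            # RNAP binds P, 00 -> 01
    return mrna

m = simulate(p1=0.5, p2=0.5, p3=0.3, p4=0.85, p5=0.05, mu=0.4)
```

Note that the configuration $ 11 $ can never arise: the repressor branch never occupies the promoter, and $ RNAP $ binding is attempted only when the operator is empty.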
Figure 5 shows the concentration $ [mRNA] $ of $ mRNA $ molecules in the cell as a function of time for the parameter values $ p_{1}=0.5,p_{2}=0.5,p_{3}=0.3,p_{4}=0.85,p_{5}=0.05 $ and $ \mu =0.4 $. Note that an almost four-fold increase in the probability of $ RNAP $ binding is assumed due to cooperativity. The stochastic nature of the gene expression is evident from the figure, with random intervals between the bursts of activity. One also notices the presence of several bursts of large size. It is important to emphasize that the frequency of transition between high and low expression levels is a function of the parameter values chosen and may be low for certain parameter values. For the probability values considered, the two predominantly favourable states are when the gene expression is off (state 1) and when a large amount of gene expression takes place (state 2). In the absence of $ RNAP $ binding, state 1 has greater weight but with the chance binding of $ RNAP $ to the promoter (probability $ p_{2} $ for this is small), the weight shifts to state 2 until another stochastic event terminates cooperative binding and the gene reverts to state 1. The probability of obtaining a train of $ N $ successive transcribing $ RNAP $ molecules is $ p_{2}p^{N-1}_{4}(1-p_{4}) $. This is the geometric distribution function and the mean and the variance of the distribution are given by $ \frac{p_{2}}{1-p_{4}} $ and $ \frac{p_{2}(1+p_{4}-p_{2})}{(1-p_{4})^{2}} $ respectively. For the probability values already specified, the simulation has been repeated for an ensemble of 3000 cells. For each cell, the time evolution is up to 10000 time steps. Figure 6 shows the distribution of the number $ N(m) $ of cells versus the fraction $ m $ of the maximal number of $ mRNA $ molecules produced after $ 10000 $ time steps. Two distinct peaks are seen corresponding to zero and maximal gene expression. Such a bimodal distribution occurs over a range of parameter values.
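The quoted mean and variance can be checked directly from the distribution $ P(N)=p_{2}p^{N-1}_{4}(1-p_{4}) $ for $ N\geq 1 $; the remaining probability weight sits at $ N=0 $ and contributes nothing to the moments:

```python
p2, p4 = 0.5, 0.85

# P(N) = p2 * p4**(N-1) * (1 - p4) for N >= 1; the geometric tail beyond
# the truncation point is negligibly small.
probs = [(n, p2 * p4 ** (n - 1) * (1 - p4)) for n in range(1, 2000)]
mean = sum(n * p for n, p in probs)
var = sum(n * n * p for n, p in probs) - mean ** 2

mean_formula = p2 / (1 - p4)                       # = 10/3 for these values
var_formula = p2 * (1 + p4 - p2) / (1 - p4) ** 2   # = 30 for these values
```

The numerically summed moments agree with the closed-form expressions given in the text.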
Several theories have been proposed to explain the so-called “all or none” phenomenon in gene expression [@19; @51; @52]. These theories are mostly based on an autocatalytic feedback mechanism: synthesis of the gene product gives rise to the transport or production of an activator molecule. While such processes are certainly possible, the bimodal distribution is a much more general phenomenon and has now been found in many types of cells, from bacterial to eukaryotic, and for different types of promoters [@16; @17; @18]. The two major features of the model of gene expression that we have proposed are stochasticity and cooperative binding of $ RNAP $. There is by now enough experimental evidence of stochasticity in gene expression. Our suggestion of cooperative binding of $ RNAP $ is novel and there is no direct experimental verification of the proposal as yet. There are some recent experiments which provide indirect evidence and these are discussed in Ref. [@20].
Concluding remarks
==================
In this review we have given an elementary introduction to some of the major aspects of biological networks, namely, topological characteristics, nonlinear dynamics and the role of stochasticity in gene expression and regulation. The main aim of the review is to highlight the usefulness of interdisciplinary approaches in the study of both natural and synthetic biological networks. Some important features of such networks have not been discussed in the review. One of these is the operational reliability of networks in spite of randomness in basic regulatory mechanisms. Many regulatory pathways do have highly predictable outcomes even when stochastic fluctuations are considerable. Cells adopt various strategies like populational transcriptional cooperation, checkpoints to ensure that cascaded events are appropriately synchronised, redundancy and feedback to achieve regulatory determinism. Some of these ideas are discussed in Ref. [@10]. The complexity of biological networks raises the question of their functional stability. In particular, the issue of interest is the sensitivity of networks to variations in their biochemical parameters. Barkai and Leibler [@53] have studied a biochemical network responsible for bacterial chemotaxis and shown that the functional properties of the network are robust, i.e., relatively insensitive to changes in biochemical parameters like reaction rate constants and enzymatic concentrations. Bialek [@54] has shown that extremely stable biochemical switches can be constructed from small numbers of molecules though intuitively one expects such systems to be prone to instability due to the inherent noise.
Metzler [@55] in a recent paper has shown that spatial fluctuations in the distribution of regulatory molecules play a non-trivial role in genetic switching processes. Apart from internal stochastic fluctuations, external noise originating in random variations in the environment or in the externally set control parameters, may affect the functioning of a biological network. Hasty et al [@56] have proposed a synthetic genetic network in which external noise is utilised to operate a protein switch (short noise pulses are used to turn protein production “on” and “off”). In another novel application, external noise is used to amplify gene expression, i.e., protein production by a considerable amount.
Genetic networks with many components are difficult to analyze using conventional techniques. Many parallels have been drawn in the functioning of genetic and electrical circuits [@57; @58]. In electrical engineering, there are well developed techniques of circuit analysis which can be used to characterise the operation of complex electrical networks. Some of these techniques are increasingly being used to study genetic networks. Engineers are familiar with some of the design principles of biological networks. Rapid transitions between the two stable states of a system can be brought about by positive feedback loops. Negative feedback loops keep the value of an output parameter within a narrow range even if there are wide fluctuations in the input. Coincidence detection systems activate an output provided two or more events occur simultaneously. Parallel connections enable a device to remain functional in the event of failures in one of the lines. One can give analogous examples from biology. One set of positive feedback loops is responsible for the rapid transition of cells into mitosis (division of cell nucleus), another set brings about the exit from mitosis in an irreversible manner. Gene transcription in eukaryotes involves coincidence detection. A $ mRNA $ can be produced only if the promoters regulating gene expression are occupied by the different transcription factors. These examples indicate that general principles govern the functioning of genetic and electrical networks though there are other aspects of such networks which are not common to both. Biological networks constitute a field of research the interdisciplinary nature of which will become more evident as we progress into the twenty-first century.\
**Acknowledgement: The Author thanks Subhasis Banerjee for help in drawing the figures.**
[1] S. H. Strogatz, Nature 410, 268 (2001)
[2] R. Albert and A.-L. Barabási, cond-mat/0106096, to appear in Rev. Mod. Phys.
[3] S. N. Dorogovtsev and J. F. F. Mendes, cond-mat/0106144, to appear in Adv. Phys.
[4] H. Jeong, B. Tombor, R. Albert, Z. N. Oltvai and A.-L. Barabási, Nature 407, 651 (2000)
[5] H. Jeong, S. P. Mason, A.-L. Barabási and Z. N. Oltvai, Nature 411, 41 (2001)
[6] P. Uetz et al, Nature 403, 623 (2000)
[7] J.-C. Rain et al, Nature 409, 211 (2001)
[8] B. Levin, Genes V (Oxford University Press, New York 1994)
[9] J. Hasty, D. McMillen, F. Isaacs and J. J. Collins, Nature Reviews Genetics 2, 268 (2001)
[10] H. H. McAdams and A. Arkin, Trends in Genetics 15, 65 (1999)
[11] T. S. Gardner, C. R. Cantor and J. J. Collins, Nature 403, 339 (2000)
[12] H. McAdams and A. Arkin, Proc. Natl. Acad. Sci. 94, 814 (1997)
[13] A. Arkin, J. Ross and H. H. McAdams, Genetics 149, 1633 (1998)
[14] D. T. Gillespie, J. Comput. Phys. 22, 403 (1976)
[15] D. T. Gillespie, J. Phys. Chem. 81, 2240 (1977)
[16] G. Zlokarnik et al, Science 279, 84 (1998)
[17] P. A. Negulescu, N. Shastri and M. D. Cahalan, Proc. Natl. Acad. Sci. 91, 2873 (1994)
[18] J. Karttunen and N. Shastri, Proc. Natl. Acad. Sci. 88, 3972 (1991)
[19] A. Novick and M. Weiner, Proc. Natl. Acad. Sci. 43, 553 (1957)
[20] S. Roy, I. Bose and S. S. Manna, Int. J. Mod. Phys. C 12, 413 (2001)
[21] P. Erdös and A. Rényi, Publ. Math. Inst. Hung. Acad. Sci. 5, 17 (1960)
[22] M. E. J. Newman, J. Stat. Phys. 101, 819 (2000)
[23] S. Milgram, Psychology Today 2, 60 (1967)
[24] D. J. Watts and S. H. Strogatz, Nature 393, 440 (1998)
[25] A.-L. Barabási and R. Albert, Science 286, 509 (1999)
[26] J. S. Edwards and B. O. Palsson, Proc. Natl. Acad. Sci. 97, 5528 (2000)
[27] S. H. Strogatz, Nonlinear Dynamics and Chaos (Perseus, New York 1994)
[28] S. Wiggins, Introduction to Applied Nonlinear Dynamical Systems and Chaos (Springer, New York 1990)
[29] M. C. Cross and P. C. Hohenberg, Rev. Mod. Phys. 65, 851 (1993)
[30] M. A. Savageau, Nature 252, 546 (1974)
[31] B. Vogelstein, D. Lane and A. J. Levine, Nature 408, 307 (2000)
[32] D. Endy, L. You, J. Yin and I. J. Molineux, Proc. Natl. Acad. Sci. 97, 5375 (2000)
[33] G. Rowe, Theoretical Models in Biology (Clarendon Press, Oxford 1994); J. J. Tyson and H. G. Othmer, Prog. Theor. Biol. 5, 1 (1978)
[34] U. S. Bhalla and R. Iyengar, Science 283, 381 (1999)
[35] M. B. Elowitz and S. Leibler, Nature 403, 335 (2000); see also J. Hasty, F. Isaacs, M. Dolnik, D. McMillen and J. J. Collins, Chaos 11, 207 (2001)
[36] A. M. Turing, Philos. Trans. R. Soc. London, Ser. B 237, 37 (1952)
[37] W. Driever and Ch. Nüsslein-Volhard, Cell 54, 83 (1988)
[38] L. Boring, M. Weir and G. Schubiger, Mech. Dev. 42, 97 (1993)
[39] A. J. Koch and H. Meinhardt, Rev. Mod. Phys. 66, 1481 (1994)
[40] H. Meinhardt, Models of Biological Pattern Formation (Academic, New York 1982)
[41] J. Dziarmaga, physics/0002050
[42] I. Bose and I. Chaudhuri, Phys. Rev. E 55, 5291 (1997); I. Bose and I. Chaudhuri, Int. J. Mod. Phys. C 12, 247 (2001)
[43] V. Castets et al, Phys. Rev. Lett. 64, 2953 (1990)
[44] Q. Ouyang and H. L. Swinney, Nature 352, 610 (1991)
[45] S. Kondo and R. Asai, Nature 376, 765 (1995)
[46] S. Sawai, Y. Maeda and Y. Sawada, Phys. Rev. Lett. 85, 2212 (2000)
[47] M. Ptashne, A Genetic Switch: Phage $ \lambda $ and Higher Organisms (Cell Press, Cambridge, Massachusetts 1992)
[48] P. Kourilsky, Mol. Gen. Genet. 122, 183 (1973)
[49] M. A. Shea and G. K. Ackers, J. Mol. Biol. 181, 211 (1985)
[50] A. M. Kierzek, J. Zaim and P. Zielenkiewicz, J. Biol. Chem. 276, 8165 (2001)
[51] M. T. Beckman and K. Kirkegaard, J. Biol. Chem. 273, 6724 (1998)
[52] T. A. Carrier and J. D. Keasling, J. Theor. Biol. 201, 25 (1999)
[53] N. Barkai and S. Leibler, Nature 387, 913 (1997)
[54] W. Bialek, cond-mat/0005235
[55] R. Metzler, Phys. Rev. Lett. 87, 68103 (2001)
[56] J. Hasty, J. Pradines, M. Dolnik and J. J. Collins, Proc. Natl. Acad. Sci. 97, 2075 (2000)
[57] H. H. McAdams and L. Shapiro, Science 269, 650 (1995)
[58] L. H. Hartwell, J. J. Hopfield, S. Leibler and A. W. Murray, Nature 402 (Supplement), C47 (1999)\
Figure Captions\
Figure 1. Example of a network. The solid circles and lines denote the nodes and the links respectively.\
Figure 2. Schematic diagram of the synthetic genetic toggle switch network. There are two promoters $ P_{L} $ and $ Ptrc-2 $ and two genes $ lacI $ and $ cI $.\
Figure 3. Graphical representation of the toggle equations (Eqs. (13) and (14)). $ U,V $ are the concentrations of the $ lacI $ and $ cI $ proteins respectively. There are two stable steady states: State 1 (high $ V $/ low $ U $ ) and State 2 (low $ V $/ high $ U $ ) and one unstable steady state.\
Figure 4. Some key components of the $ \lambda $-phage lysis-lysogeny network: $ cI,cro $ are the two genes, $ P_{RM} $ and $ P_{R} $ are the two promoters and $ O_{R}1,O_{R}2 $ and $ O_{R}3 $ are the three operator regions.\
Figure 5. Concentration of $ mRNA $ molecules $ [mRNA] $ in arbitrary units as a function of time t. The parameter values are $ p_{1}=0.5,p_{2}=0.5,p_{3}=0.3,p_{4}=0.85,p_{5}=0.05 $ and $ \mu =0.4 $.\
Figure 6. Distribution of the number $ N(m) $ of cells expressing a fraction $ m $ of the maximal number of $ mRNA $ molecules produced after 10000 time steps. The total number of cells is 3000. The parameter values are as in Figure 5.
---
abstract: 'Quantum correlations have fundamental and technological interest, and hence many measures have been introduced to quantify them. Some hierarchical orderings of these measures have been established, e.g., discord is bigger than entanglement. Here we present a class of bipartite states, called premeasurement states, for which several of these hierarchies collapse to a single value. Because premeasurement states are the kind of states produced when a system interacts with a measurement device, the hierarchy collapse implies that the uncertainty of an observable is quantitatively connected to the quantum correlations (entanglement, discord, etc.) produced when that observable is measured. This fascinating connection between uncertainty and quantum correlations leads to a reinterpretation of entropic formulations of the uncertainty principle, so-called entropic uncertainty relations, including ones that allow for quantum memory. These relations can be thought of as lower bounds on the entanglement created when incompatible observables are measured. Hence, we find that entanglement creation exhibits *complementarity*, a concept that should encourage exploration into “entanglement complementarity relations".'
author:
- 'Patrick J. Coles'
bibliography:
- 'EntanglementUR.bib'
title: Collapse of the quantum correlation hierarchy links entropic uncertainty to entanglement creation
---
Introduction
============
As researchers attempt to develop the ultimate theory of information, encompassing both classical and quantum information, it is becoming increasingly apparent that quantum correlations, i.e., correlations that go beyond classical correlations, are of great fundamental and technological interest. Questions such as what gives the quantum advantage in computing tasks [@DatShaCav08] have motivated the definition and study of many quantitative measures of quantum correlations, ranging from entanglement [@HHHH09] to discord [@OllZur01] and other related measures [@ModiEtAl2011review]. Some of these measures are operationally motivated, e.g., the number of Einstein-Podolsky-Rosen (EPR) pairs that can be distilled from the state; others are geometrically motivated, like the distance to the nearest separable state or the nearest classical state; while still others are motivated by their ease of calculation. The zoo of quantum correlation measures is vast, and yet the story is simple for bipartite pure states, where the entropy of the reduced state essentially captures it all. While it would be nice if the correlations of mixed states shared the simplicity of those of pure states, in general we must settle for a hierarchical ordering of the various correlation measures, e.g., discord is bigger than entanglement [@PianiAdessoPRA.85.040301; @HorEtAl05], which in turn is bigger than coherent information [@DevWin05].
In the present article, we consider a class of bipartite states for which this zoo dramatically simplifies to a single number; various quantum correlation measures which are in general related by a hierarchy of *inequalities* become equal for these states, so we say that these states “collapse the quantum correlation hierarchy". Hence these states are like pure states in that their correlations are “simple", even though the set includes not only pure states but also some mixed states. Interestingly, the set of states that collapse the quantum correlation hierarchy corresponds precisely to the set of states that can be produced when a system interacts with a measurement device. These states have been called premeasurement states, since the unitary interaction (called premeasurement) that potentially correlates the system to the measurement device is the first step in the measurement process [@ZurekReview]. The fact that premeasurement states collapse the quantum correlation hierarchy has significant consequences, and much of this article is devoted to exploring these consequences.
The most interesting consequence is a connection to uncertainty and the uncertainty principle. While the study of quantum correlations has seen a revolution of sorts recently, so has the study of the uncertainty principle. In quantitative expressions of the uncertainty principle, so-called uncertainty relations, researchers have replaced the standard deviation, the uncertainty measure employed in the original formulations [@Heisenberg; @Robertson], with *entropy* measures, leading to a variety of different entropic uncertainty relations (EURs) [@EURreview1], which are more readily applied to information-processing tasks. Allowing the observer to possess “quantum memory" (a quantum system that may be entangled to the system of interest) has led to EURs [@RenesBoileau; @BertaEtAl; @TomRen2010; @ColesEtAl; @ColesColbeckYuZwolak2012PRL] with direct application in entanglement witnessing [@LXXLG; @PHCFR] and cryptography [@TLGR].
Our results allow us to establish a precise and general connection between the uncertainty of an observable and the quantum correlations, such as entanglement, created when that observable is measured (more precisely, premeasured). As a consequence, a wide variety of EURs, including those allowing for quantum memory, are subject to reinterpretation. The conventional interpretation is that EURs are lower bounds on our inability to predict the outcomes of incompatible measurements, but our results imply that EURs can also be thought of as lower bounds on the *entanglement created* in incompatible measurements.
It is helpful to illustrate this connection with a simple example. Consider a qubit in state ${|0\rangle}$, then the unitary associated with a $Z$-measurement is a controlled-not (CNOT) acting on a register qubit that is initially in state ${|0\rangle}$. In this case, the overall state evolves trivially: ${|0\rangle} {|0\rangle}\to {|0\rangle} {|0\rangle}$, producing no entanglement. But if instead we did an $X$-measurement, with a CNOT controlled by the $\{{|+\rangle}, {|-\rangle}\}$ basis, then the state evolves as ${|0\rangle} {|0\rangle}= ({|+\rangle}+{|-\rangle}) {|0\rangle}/\sqrt{2} \to ({|+\rangle} {|0\rangle} +{|-\rangle} {|1\rangle})/\sqrt{2} $, which is maximally entangled. Note that the uncertainty of the $Z$ ($X$) observable was zero (maximal), which is connected to the final entanglement being zero (maximal). This example shows the connection of uncertainty to entanglement creation, and it also shows the *complementarity* of entanglement creation: the $X$ measurement must create entanglement because the $Z$ measurement does not.
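This qubit example can be checked numerically. The following is a minimal sketch assuming NumPy; the helper `entanglement_entropy` is our illustrative name (not from the paper) and computes the entropy of the Schmidt coefficients of a two-qubit pure state:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket_plus = np.array([1.0, 1.0]) / np.sqrt(2)
ket_minus = np.array([1.0, -1.0]) / np.sqrt(2)

def entanglement_entropy(psi):
    """Entropy (in bits) of the reduced state of a two-qubit pure state."""
    m = psi.reshape(2, 2)                       # amplitudes c_{system,register}
    s = np.linalg.svd(m, compute_uv=False)      # Schmidt coefficients
    p = s**2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# Z-premeasurement of |0>: an ordinary CNOT leaves |0>|0> unchanged
psi_z = np.kron(ket0, ket0)

# X-premeasurement of |0>: CNOT controlled by the {|+>,|->} basis,
# |0>|0> -> (|+>|0> + |->|1>)/sqrt(2)
psi_x = (np.kron(ket_plus, [1.0, 0.0]) + np.kron(ket_minus, [0.0, 1.0])) / np.sqrt(2)

assert abs(entanglement_entropy(psi_z) - 0.0) < 1e-9   # no entanglement created
assert abs(entanglement_entropy(psi_x) - 1.0) < 1e-9   # one ebit created
```

The two assertions mirror the complementarity discussed above: zero uncertainty in $Z$ forces zero entanglement from the $Z$-premeasurement, while maximal uncertainty in $X$ yields maximal entanglement from the $X$-premeasurement.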
We remark that the entanglement created in measurements has been an area of interest previously [@ZurekReview; @VedralPRL2003], and there is renewed interest in this as it provides a general framework for quantifying discord [@PianiEtAl11; @StrKamBru11; @PianiAdessoPRA.85.040301]. It should, therefore, be of interest that our reinterpretation of EURs implies that the entanglement (and discord) created in measurements exhibits complementarity. This idea, which seems to be a general principle, suggests that there are classes of inequalities that capture the complementarity of quantum mechanics, which have yet to be explored and involve entanglement (or discord) creation. There is generally a trade-off; for a given quantum state, if one avoids creating quantum correlations in one measurement, then a complementary measurement will necessarily create such correlations.
In summary, we emphasize three main concepts in this article: (1) the quantum correlation hierarchy dramatically simplifies for premeasurement states, (2) an observable’s uncertainty quantifies the entanglement created upon measuring that observable, and (3) entanglement creation exhibits complementarity. Mathematically speaking, concept (1) implies concept (2) which in turn implies concept (3), as we will discuss.
The rest of the manuscript is organized as follows. In Section \[sct22\] we define various classes of bipartite quantum states, including premeasurement states. In Section \[sct5\] we consider several different quantum correlation hierarchies, and we show that premeasurement states collapse these hierarchies. In particular, we consider hierarchies of measures based on a generic relative entropy, measures related to the von Neumann entropy, and measures related to smooth entropies. In Section \[sct6\], we use these results to connect an observable’s uncertainty to the quantum correlations created when that observable is measured. Then we argue that this gives a reinterpretation for EURs in Section \[sct7\], focusing particularly on the complementarity of entanglement creation. Section \[sct8\] gives a few more implications of our results and discusses the future outlook for “entanglement complementarity relations". Section \[sct9\] gives some concluding remarks.
Classes of bipartite states {#sct22}
===========================
Classical, separable, and entangled states {#sct22a}
------------------------------------------
Since we will be considering various correlation measures, it is helpful to define particular classes of bipartite quantum states. First, consider the set of all separable states, hereafter denoted $\textsf{Sep}$, which have the general form of a convex combination of tensor products: $$\label{eqn1}
\rho_{AB}= \sum_j p_j \rho_{A,j} {\otimes}\rho_{B,j},$$ where $\{p_j\}$ is some probability distribution and $\rho_{A,j}$ and $\rho_{B,j}$ are density operators on systems $A$ and $B$. Entangled states are defined as those states that are not separable; we denote this set as $\textsf{Ent}$, the complement of $\textsf{Sep}$.
A special kind of separable state is a classical state, often called a classical-classical or $\textsf{CC}$ state, with the general form: $$\label{eqn2}
\rho_{AB}= \sum_{j,k} p_{j,k} {{|j\rangle}\!{\langle j|}} {\otimes}{{|k\rangle}\!{\langle k|}},$$ which is like the embedding of a classical joint probability distribution $\{p_{j,k}\}$ in a Hilbert space, where $\{{|j\rangle}\}$ and $\{{|k\rangle}\}$ are orthonormal bases on ${\mathcal{H}}_A$ and ${\mathcal{H}}_B$, respectively. More generally, a state can be classical with respect to one of the subsystems, e.g., of the form: $$\label{eqn3}
\rho_{AB}= \sum_{j} p_{j} {{|j\rangle}\!{\langle j|}} {\otimes}\rho_{B,j},$$ in which case it is called classical-quantum or $\textsf{CQ}$, and naturally is called quantum-classical or $\textsf{QC}$ if it is classical with respect to system $B$. The following relations between these sets should be clear from the above definitions: $$\label{eqn4}
\textsf{CQ}\subset \textsf{Sep}, \quad \textsf{QC}\subset \textsf{Sep}, \quad \textsf{CQ} \cap \textsf{QC} = \textsf{CC},$$ and a Venn diagram in Fig. \[fgr2\] depicts these relations.
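A defining feature of the classical-quantum form can be illustrated numerically: a $\textsf{CQ}$ state is left invariant by completely dephasing system $A$ in its classical basis. Below is a minimal sketch assuming NumPy; the helpers `random_density` and `dephase_A` are our illustrative names:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density(d):
    """Random full-rank density operator (illustrative helper)."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def dephase_A(rho_ab, dA, dB):
    """Fully dephase system A in the computational basis (a channel on A)."""
    out = np.zeros_like(rho_ab)
    for j in range(dA):
        P = np.zeros((dA, dA))
        P[j, j] = 1.0
        Pj = np.kron(P, np.eye(dB))
        out = out + Pj @ rho_ab @ Pj
    return out

dA = dB = 2
p = np.array([0.3, 0.7])
basis = np.eye(dA)
# A CQ state: classical on A, arbitrary conditional states on B
rho_cq = sum(p[j] * np.kron(np.outer(basis[j], basis[j]), random_density(dB))
             for j in range(dA))

# Dephasing A in its classical basis leaves a CQ state unchanged
assert np.allclose(dephase_A(rho_cq, dA, dB), rho_cq)
```

A generic (non-$\textsf{CQ}$) state would lose its $A$-coherences under the same channel, so this invariance can serve as a quick membership check.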
Pure states {#sct22aa}
-----------
Pure states can be either separable or entangled, though if a pure state is separable, it is necessarily a classical state (more specifically, a product state), in other words, $$\label{eqn5}
(\textsf{Pure} \cap \textsf{Sep}) \subset \textsf{CC},$$ as depicted in Fig. \[fgr2\]. The correlations of pure states are very well understood, e.g., see [@HHHH09; @NieChu00], and one of our contributions is to characterize a set of states whose correlations are somewhat analogous to those of pure states, a set that encompasses, but goes beyond, pure states. We discuss this set below.
![Venn diagram for several classes of bipartite states. Bipartite states are either separable ($\textsf{Sep}$) or non-separable ($\textsf{Ent}$). Subsets of $\textsf{Sep}$ include $\textsf{QC}$ and $\textsf{CQ}$, which are shaded with lines slanted up-to-the-right and up-to-the-left, respectively, and $\textsf{CC}$ is the intersection of these two sets. The set of pure states is shaded solid gray and is contained inside $\textsf{MM}$, the intersection of $\textsf{MQ}$ and $\textsf{QM}$, which are respectively shaded with small dots and large dots. Note: the figure is not to scale, and is only meant to convey the set relationships given in Eqs. , , and –.[]{data-label="fgr2"}](QStateClasses){width="2in"}
Premeasurement states {#sct22b}
---------------------
Consider the following set of bipartite states: $$\label{eqn6}
\textsf{MQ}:=\{\rho_{AB} : \rho_{AC}\in \textsf{CQ}\text{ for pure }\rho_{ABC} \}.$$ Here, $\rho_{ABC}$ is any purification of $\rho_{AB}$, so we are considering the set of states $\rho_{AB}$ such that there exists a purification $\rho_{ABC}$ whose marginal $\rho_{AC}$ is of the general form of , i.e., classical with respect to system $A$. (If $\rho_{AC}\in \textsf{CQ}$ for some purification of $\rho_{AB}$, then the same will be true for all other purifications.) If $A$ and $B$ change roles in , i.e., if $\rho_{BC} \in \textsf{CQ}$, then we denote this set as $\textsf{QM}$, and if both $\rho_{AC}$ and $\rho_{BC}$ are $\textsf{CQ}$, then we say that $\rho_{AB}\in \textsf{MM}$. In other words, $$\label{eqn7}
\textsf{MQ} \cap \textsf{QM} = \textsf{MM}.$$ It turns out, as we will see below, that $\textsf{MM}$ corresponds precisely to the “maximally correlated states", introduced by Rains [@RainsPhysRevA.60.179].
It is clear that if $\rho_{AB}$ is pure, then any purifying system $C$ will be in a tensor product with (uncorrelated with) $AB$, hence both marginals $\rho_{AC}$ and $\rho_{BC}$ will be classical. So all pure states are in $\textsf{MM}$, $$\label{eqn8}
\textsf{Pure}\subset \textsf{MM}.$$
Figure \[fgr2\] schematically depicts Eqs. and . Also captured by this figure is an extension of to $\textsf{MQ}$ and $\textsf{QM}$ states: $$\label{eqn9}
(\textsf{MQ} \cap \textsf{Sep}) \subset \textsf{CC},\quad (\textsf{QM} \cap \textsf{Sep}) \subset \textsf{CC}.$$ While is not at all obvious, it is a consequence of our results proven in Sec. \[sct5\].
Our curious notation $\textsf{MQ}$ is motivated by the fact that one of the subsystems, namely system $A$ in , is behaving like a measurement device in a way that we elaborate on below. Thus, one can read $\textsf{MQ}$ as “measurement device - quantum", analogous to how one reads $\textsf{CQ}$ as “classical - quantum".
\[Fig. \[fgr1\] (circuit diagram): the system $S$, initially in state $\rho_S$ and purified by the environment $E$ (with marginal $\rho_E$ and joint pure state ${|\psi\rangle}$), interacts with the measurement device $M_X$, initially in state ${|0\rangle}$, through the controlled-$X$ interaction; the output state on $M_X S$ is $\tilde{\rho}_{M_X S}$.\]{data-label="fgr1"}
To make the connection to measurement, it is helpful to switch to a more intuitive notation for the various subsystems. We consider the interaction of system $S$ with a device $M_X$ that measures observable $X=\{X_j\}$ of $S$, where the $X_j$ are orthogonal projectors that sum to the identity on ${\mathcal{H}}_S$. \[We emphasize that the $X_j$ are not necessarily rank-one; $X$ is a general projection valued measure (PVM).\] This can be modeled by considering a set of orthonormal states $\{{|j\rangle}\}$ on $ M_X$, and if $S$ is hit by projector $X_j$, then $M_X$ goes to the state ${|j\rangle}$, as follows: $$\label{eqn10}
{|0\rangle}_{M_X} {|\psi\rangle}_S \to \sum_j {|j\rangle}_{M_X} (X_j {|\psi\rangle})_S =V_X{|\psi\rangle}_S,$$ which is essentially a controlled-shift operation, and the notation is simplified by defining the isometry: $$\label{eqn11}
V_X = \sum_j {|j\rangle}_{M_X} {\otimes}(X_j)_S.$$ In we assumed that both $S$ and $M_X$ were initially described by pure states. More generally, either state could be mixed, although we could always lump the measurement device’s environment into system $M_X$ and hence purify the state of $M_X$ and call it the ${|0\rangle}$ state. We make this simplification throughout, although see [@VedralPRL2003] for a treatment allowing the measurement device to be in a mixed state. On the other hand, we find it convenient and natural to think of the system’s initial state as being a (possibly mixed) density operator $\rho_S$; then the final state after the interaction with $M_X$ is: $$\label{eqn12}
{\tilde{\rho}}_{M_X S} = V_X \rho_S V_X{^\dagger}.$$ The circuit diagram for this process is depicted in Fig. \[fgr1\], using the controlled-not (CNOT) symbol even though the process is slightly more general. Also shown is the quantum system that purifies $\rho_S$, called $E$. Because it is the first step in performing a measurement, this process has been called “premeasurement", and the resulting states ${\tilde{\rho}}_{M_X S}$ that are produced have been called “premeasurement states" [@ZurekReview].
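The isometry $V_X$ and the premeasurement map above can be sketched in a few lines of NumPy. The helper `premeasure_isometry` is our illustrative name; we take the qubit $Z$ observable (a rank-one PVM) for concreteness:

```python
import numpy as np

def premeasure_isometry(projectors):
    """V_X = sum_j |j>_{M_X} (x) X_j, built by stacking the X_j as row blocks."""
    d = projectors[0].shape[0]
    n = len(projectors)
    V = np.zeros((n * d, d), dtype=complex)
    for j, Xj in enumerate(projectors):
        V[j * d:(j + 1) * d, :] = Xj      # block j corresponds to |j>_{M_X}
    return V

# Z-observable PVM on a qubit
X0 = np.array([[1, 0], [0, 0]], dtype=complex)
X1 = np.array([[0, 0], [0, 1]], dtype=complex)
V = premeasure_isometry([X0, X1])

# V is an isometry because the X_j are projectors summing to the identity:
# V† V = sum_j X_j X_j = sum_j X_j = I
assert np.allclose(V.conj().T @ V, np.eye(2))

# Premeasurement state for a mixed system state rho_S
rho_S = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)
rho_tilde = V @ rho_S @ V.conj().T
assert abs(np.trace(rho_tilde).real - 1.0) < 1e-9   # trace-preserving
```

With this rank-one PVM the output `rho_tilde` is a maximally correlated state, the $\textsf{MM}$ case discussed below.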
Now suppose we consider the set of *all* premeasurement states, i.e., the set of all bipartite states that can be thought of as resulting from a process like that depicted in Fig. \[fgr1\]. It turns out that this set is precisely equivalent to $\textsf{MQ}$, as shown in Appendix \[app2\]. To write $\textsf{MQ}$ as the set of all premeasurement states, we revert to the notation $A$ and $B$ for the two subsystems, since we are being general and abstract again. Denote the set of all orthonormal bases $W=\{{|W_j\rangle}\}$ on ${\mathcal{H}}_A$ as ${\mathcal{W}}_A$, denote the set of all PVMs $X=\{X_j\}$ on system $B$ as ${\mathcal{X}}_B$, and denote the set of all premeasurement isometries $V_X:{\mathcal{H}}_B\to{\mathcal{H}}_{AB}$ as $$\label{eqn13}
{\mathcal{V}}=\{V_X: (\exists W \in {\mathcal{W}}_A)( \exists X\in {\mathcal{X}}_B)(V_X=\sum_j {|W_j\rangle}{\otimes}X_j ) \}.$$ Denoting the set of (normalized) density operators on $B$ as ${\mathcal{D}}_B$, then we have (see Appendix \[app2\] for the proof) $$\label{eqn14}
\textsf{MQ}=\{\rho_{AB} {\,\hbox{:}\,}(\exists {\sigma }_B \in {\mathcal{D}}_B)(\exists V_X\in {\mathcal{V}})(\rho_{AB}=V_X {\sigma }_B V_X{^\dagger}) \}.$$ In other words, the general form for states in $\textsf{MQ}$ is: $$\label{eqn15}
\rho_{AB}=\sum_{j,k} {{|W_j\rangle}\!{\langle W_k|}}{\otimes}X_j{\sigma }_BX_k$$ for some $W\in {\mathcal{W}}_A$, $X\in {\mathcal{X}}_B$, and ${\sigma }_B\in{\mathcal{D}}_B$. It is clear from that if all the $X_j$ projectors are rank-one and hence $X$ can be thought of as an orthonormal basis, then the state is a “maximally correlated state", in the sense that the $W$ basis on $A$ is perfectly correlated with the $X$ basis on $B$. So maximally correlated states are a special kind of $\textsf{MQ}$ state, corresponding to $\textsf{MM}$. But more generally, we can think of $\textsf{MQ}$ states as being one-way maximally correlated in the sense that an orthonormal basis on $A$ is perfectly correlated to some projective observable (not necessarily a basis) on $B$; again $W$ and $X$ are the two observables playing this role in .
We remark that $\textsf{MQ}$ is a strict subset of a set of states considered in Ref. [@CorDeOFan11], defined as follows $$\label{eqn16}
{\textsf{mQ}}:=\{\rho_{AB} : \rho_{AC}\in \textsf{Sep} \text{ for pure }\rho_{ABC} \},$$ i.e., the set of states $\rho_{AB}$ where $\rho_{AC}$ is separable for any purification $\rho_{ABC}$. Since $\textsf{CQ}\subset \textsf{Sep}$, it is clear that $\textsf{MQ} \subset {\textsf{mQ}}$. We believe it is important to make this connection with Ref. [@CorDeOFan11], because they showed an analog of , namely that $({\textsf{mQ}}\cap \textsf{Sep}) \subset \textsf{QC}$, a consequence of the fact that states in ${\textsf{mQ}}$ partially collapse the quantum correlation hierarchy. However, we note in Sec. \[sct5.3\] that ${\textsf{mQ}}$ states do not necessarily collapse the “full" quantum correlation hierarchy, and that the restriction of ${\textsf{mQ}}$ to $\textsf{MQ}$ is precisely what is needed in order to obtain the “full" collapse.
Our notation $\textsf{mQ}$ is motivated by the following observation. Unlike $\textsf{MQ}$ there are states in $\textsf{mQ}$ of the form: $$\label{eqn17}
\rho_{AB}=\sum_{j,k} {{|\phi_j\rangle}\!{\langle \phi_k|}}{\otimes}X_j{\sigma }_BX_k = \tilde{V}_X {\sigma }_B \tilde{V}_X{^\dagger}$$ where the ${|\phi_j\rangle}$ are non-orthogonal pure states, ${\sigma }_B\in{\mathcal{D}}_B$, $\{X_j\}\in {\mathcal{X}}_B$, and $\tilde{V}_X = \sum_j {|\phi_j\rangle} {\otimes}X_j$ is an isometry. States of the form of can be viewed as resulting from a sort of premeasurement, but where the conditional states on the measurement device $\{{|\phi_j\rangle}\}$, associated with the different $X_j$ projectors on the system being measured, are not necessarily orthogonal. Hence these states are obtained from doing a “weak" or “soft" premeasurement (i.e., not fully extracting the $X$ information), and the lower-case $\textsf{m}$ in $\textsf{mQ}$ emphasizes this.
Collapse of quantum correlation hierarchy {#sct5}
=========================================
Four types of quantum correlation measures {#sct5.0}
------------------------------------------
To what degree is the correlation between two systems different from that of a classical joint probability distribution? This is the basic question one aims to answer with quantum correlation measures. This difference can be quantified in a wide variety of ways, but let us consider four common paradigms. (This introduction is for completeness only; please see [@HHHH09; @ModiEtAl2011review] for review articles.)
One can quantify how far the quantum state is from the set of classical states, $\textsf{CC}$, either in terms of a distance or in terms of the information content of the states. These are sometimes called two-way quantumness or two-way discord measures, since they measure non-classicality with respect to both subsystems.
A second paradigm is to quantify how far the state is from either $\textsf{CQ}$ or $\textsf{QC}$; these are one-way quantumness (or discord) measures, since they measure the non-classicality with respect to just one subsystem.
A third paradigm is to quantify the distance to $\textsf{Sep}$; these are called entanglement measures. In practice, the label “entanglement measure" is restricted to those measures that are non-increasing under local operations and classical communication (LOCC) [@HHHH09], though there is some connection between this criterion and quantifying the distance to $\textsf{Sep}$ [@VedrPlen1998].
Finally there are measures of the form of the negative of a conditional entropy, which quantify the distance to a state of the form $\tau_A {\otimes}\rho_B$ where $\tau_A$ is the maximally mixed state (see below), and in the case of von Neumann entropy, the measure is called coherent information.
Basic structure of results {#sct5.1}
--------------------------
Within each of these four paradigms there are different quantitative measures, and below we discuss measures based on relative entropies, measures related to the von Neumann entropy, and measures related to smooth entropies. But in each case there is a basic structure: negative conditional entropy (coherent information) lower-bounds entanglement, which lower-bounds one-way discord, which in turn lower-bounds two-way discord. We call this the *quantum correlation hierarchy*; e.g., Ref. [@PianiAdessoPRA.85.040301] discussed this idea. Many of the inequalities in these hierarchies are well-known, although some require proof.
In what follows, we present our main technical results, that premeasurement states *collapse the quantum correlation hierarchy*. For these states, which we also call $\textsf{MQ}$ states, defined by or , the inequalities relating coherent information, entanglement, one-way discord, and two-way discord turn into *equalities*.
Geometrically speaking, the collapse is some reflection of the fact that the closest separable state to a premeasurement state is a $\textsf{CC}$ state. One can verify this claim (Appendix \[app5\]) using the Bures distance [@BenZyc06], a true metric, though in what follows we observe this phenomenon using various relative entropies as (pseudo) measures of distance.
Collapse of measures based on relative entropy {#sct5.2}
----------------------------------------------
Here we use the relative entropy to express various correlation measures as a distance to a certain class of states [@ModiEtAl2010]. In particular, we consider a generalized relative entropy $D_K(P||Q)$, a function that maps two positive-semidefinite operators $P$ and $Q$ to the real numbers, that satisfies the following two properties (also considered in [@ColesColbeckYuZwolak2012PRL]):
1. \[a\] Non-increasing under quantum channels ${\mathcal{E}}$: $D_K({\mathcal{E}}(P)||{\mathcal{E}}(Q)){\leqslant}D_K(P||Q)$.
2. \[b\] Being unaffected by null subspaces: $D_{K}(P \oplus 0 || Q\oplus Q')=D_{K}(P||Q)$, where $\oplus$ denotes direct sum.
These properties are satisfied by several important examples [@ColesColbeckYuZwolak2012PRL], and so there is power in formulating a general result that relies only on the properties. Examples include the von Neumann relative entropy [@VedralReview02; @NieChu00]: $$\label{eqn18}
D(P || Q) := {{\rm Tr}}(P \log P)- {{\rm Tr}}(P \log Q),$$ the Renyi relative entropies [@Renyi; @Petz84] within the range ${\alpha }\in (0,2]$: $$\label{eqn19}
D_{{\alpha }}(P || Q) := \frac{1}{{\alpha }-1}\log {{\rm Tr}}(P ^{{\alpha }} Q^{1-{\alpha }}),$$ and the relative entropies associated with the min- and max-entropies [@RennerThesis05; @KonRenSch09], respectively, $$\begin{aligned}
\label{eqn20}D_{\max}(P || Q)&:=\log\min\{{\lambda }: P{\leqslant}{\lambda }Q \},\\
\label{eqn21}D_{{\text{fid}}}(P || Q)&:= -2\log {{\rm Tr}}[(\sqrt{P} Q\sqrt{P})^{1/2}].\end{aligned}$$ We label as $D_{\max}$ (even though it is associated with the min-entropy) because in general $D_{\max}(P || Q) {\geqslant}D( P || Q)$ [@Datta09], and we label as $D_{{\text{fid}}}$ because it is closely related to the fidelity.
Consider an entanglement measure [@VedrPlen1998] based on $D_K$: $$\label{eqn22}
{\mathbb{E}}_K^{A|B}(\rho_{AB}):=\min_{{\sigma }_{AB}\in \textsf{Sep}}D_K(\rho_{AB}|| {\sigma }_{AB}).$$ Property \[a\] implies, for any LOCC ${\Lambda }$, $$\label{eqn23}
{\mathbb{E}}_K^{A|B}(\rho_{AB}){\geqslant}{\mathbb{E}}_K^{A|B}({\Lambda }(\rho_{AB})),$$ which is the well-known monotonicity property [@HHHH09]. Let us also define one-way and two-way measures of quantumness (a.k.a. discord) [@ModiEtAl2011review]: $$\begin{aligned}
\label{eqn24}{\Delta}_K^{\overrightarrow{A|B}}(\rho_{AB}):=\min_{{\sigma }_{AB}\in \textsf{CQ}}D_K(\rho_{AB}|| {\sigma }_{AB}),\\
\label{eqn25}{\Delta}_K^{\overleftrightarrow{A|B}}(\rho_{AB}):=\min_{{\sigma }_{AB}\in \textsf{CC}}D_K(\rho_{AB}|| {\sigma }_{AB}).\end{aligned}$$ Finally, let us define a conditional entropy [@ColesColbeckYuZwolak2012PRL], $$\label{eqn26}
H_K(A|B):=\max_{\sigma_B}[-D_K(\rho_{AB}||{\openone}{\otimes}\sigma_B)],$$ where the maximization is over all (normalized) density operators $\sigma_B$ on $B$.
To prove our result, we note two additional properties, which were discussed in [@ColesColbeckYuZwolak2012PRL]. If $D_K$ satisfies \[a\] and \[b\], and if $\tilde{Q}{\geqslant}Q$, then $$\label{eqn27}
D_{K}(P||Q){\geqslant}D_{K}(P||\tilde{Q}),$$ and if $\Pi_P$ is a projector onto a space that includes the support of $P$, then $$\label{eqn28}
D_{K}( P || Q ){\geqslant}D_{K}(P|| \Pi_P Q\Pi_P).$$
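The first of these properties can be checked numerically in the von Neumann case: since the logarithm is operator monotone, $\tilde{Q}{\geqslant}Q$ implies ${{\rm Tr}}(P\log\tilde{Q}){\geqslant}{{\rm Tr}}(P\log Q)$, so enlarging the second argument can only decrease the relative entropy. A minimal sketch assuming NumPy/SciPy:

```python
import numpy as np
from scipy.linalg import logm

def D(P, Q):
    """Von Neumann relative entropy (natural log, full-rank arguments)."""
    return np.trace(P @ (logm(P) - logm(Q))).real

rng = np.random.default_rng(1)
g = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
P = g @ g.conj().T
P /= np.trace(P).real                  # a random density operator
h = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Q = h @ h.conj().T                     # a random positive operator
Q_tilde = Q + 0.5 * np.eye(3)          # Q_tilde >= Q by construction

assert D(P, Q) >= D(P, Q_tilde)        # enlarging the second argument lowers D
```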
We now show that the correlation measures defined above form a hierarchy. This hierarchy, though interesting in itself, will be useful below in proving that the various correlation measures become equal in the special case of premeasurement states.
\[thm1\] Let $D_K$ satisfy \[a\] and \[b\], then for any $\rho_{AB}$, $$\label{eqn29}
-H_K(A|B) {\leqslant}{\mathbb{E}}_K^{A| B}{\leqslant}{\Delta}_K^{\overrightarrow{A|B}}, {\Delta}_K^{\overrightarrow{B|A}}{\leqslant}{\Delta}_K^{\overleftrightarrow{A|B}}.$$
The left-most inequality is proven by supposing ${\sigma }_{AB}\in\textsf{Sep}$ achieves the minimization in ${\mathbb{E}}_K^{A| B}(\rho_{AB})$, then $$\begin{aligned}
{\mathbb{E}}_K^{A| B}(\rho_{AB})&=D_K(\rho_{AB}|| {\sigma }_{AB})\notag\\
&{\geqslant}D_K(\rho_{AB}|| {\openone}{\otimes}{\sigma }_B){\geqslant}- H_{K}(A|B)\notag\end{aligned}$$ where we invoked and the fact that, if ${\sigma }_{AB}$ is separable, then ${\openone}{\otimes}{\sigma }_B{\geqslant}{\sigma }_{AB}$ with ${\sigma }_B={{\rm Tr}}_A({\sigma }_{AB})$. The other inequalities follow from $\textsf{CC} \subset \textsf{CQ} \subset \textsf{Sep}$.
Now we can state one of our main technical results, that the hierarchy in collapses onto a single value for $\textsf{MQ}$ states, which we also call premeasurement states (see Sec. \[sct22b\]) since system $A$ plays the role of a measurement device $M_X$ and $B$ is the system $S$ being measured.
\[thm2\] Let $D_K$ satisfy \[a\] and \[b\], then for any premeasurement state $\tilde{\rho}_{M_X S}= V_X\rho_S V_X{^\dagger}$, $$\begin{aligned}
\label{eqn30}
-H_K(M_X|S)= {\mathbb{E}}_K^{M_X| S}= {\Delta}_K^{\overrightarrow{M_X|S}}={\Delta}_K^{\overrightarrow{S|M_X}}= {\Delta}_K^{\overleftrightarrow{M_X|S}}.\end{aligned}$$
Let ${\sigma }_S$ be the state that achieves the optimization in $H_{K}(M_X|S)$, then $$\begin{aligned}
-H_K(M_X|S) &=D_K(\tilde{\rho}_{M_X S}|| {\openone}{\otimes}{\sigma }_S ) \notag\\
&{\geqslant}D_K(\tilde{\rho}_{M_X S}|| V_XV_X{^\dagger}({\openone}{\otimes}{\sigma }_S) V_XV_X{^\dagger}) \notag\\
&=D_K(\tilde{\rho}_{M_X S}|| \sum_j {{|j\rangle}\!{\langle j|}}{\otimes}X_j{\sigma }_SX_j)\notag\\
&{\geqslant}{\Delta}_K^{\overleftrightarrow{M_X|S}}. \notag\end{aligned}$$ We used in the second line, and the last inequality notes that $\sum_j {{|j\rangle}\!{\langle j|}}{\otimes}X_j{\sigma }_SX_j \in \textsf{CC}$. (Expand the $X_j{\sigma }_SX_j $ blocks in their eigenbasis to verify this.) But gives an inequality in the reverse direction, so the entire hierarchy in must collapse onto the same value.
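For the von Neumann case, the collapse asserted by Theorem \[thm2\] can be checked numerically on a maximally correlated ($\textsf{MM}$) premeasurement state: the coherent information $-H(M_X|S)$ should coincide with the relative entropy to the $\textsf{CC}$ state $\sum_j {{|j\rangle}\!{\langle j|}}{\otimes}X_j{\sigma }_SX_j$ appearing in the proof. Below is a sketch assuming NumPy/SciPy; the rank-one computational-basis PVM and the helper names are our illustrative choices:

```python
import numpy as np
from scipy.linalg import logm

def vn_entropy(rho):
    """Von Neumann entropy in nats."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(2)
d = 3
g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
sigma = g @ g.conj().T
sigma /= np.trace(sigma).real            # mixed state of system S

# Maximally correlated premeasurement state rho = V sigma V†, with
# V = sum_j |j>|j><j| (rank-one PVM in the computational basis)
V = np.zeros((d * d, d), dtype=complex)
for j in range(d):
    V[j * d + j, j] = 1.0
rho = V @ sigma @ V.conj().T

# Coherent information -H(M_X|S) = H(S) - H(M_X S)
rho_S = np.diag(np.diag(sigma))          # Tr_{M_X} rho = sum_j sigma_jj |j><j|
I_c = vn_entropy(rho_S) - vn_entropy(rho)

# Relative entropy to the CC state sum_j sigma_jj |jj><jj| from the proof,
# evaluated on the common support span{|jj>}, where rho restricts to sigma
# and the CC state restricts to diag(sigma)
p = np.diag(sigma).real
D_cc = np.trace(sigma @ (logm(sigma) - np.diag(np.log(p)))).real

assert abs(I_c - D_cc) < 1e-8            # the hierarchy has collapsed
```

Since Theorem \[thm1\] sandwiches every measure in the hierarchy between these two quantities, their numerical equality exhibits the collapse for this state.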
Theorem \[thm2\] applies to all the relative entropies listed in through . For example, in the case of von Neumann relative entropy, the quantities in , from left to right, are the coherent information [@NieChu00], the relative entropy of entanglement [@VedrPlen1998], the one-way information deficit [@HorEtAl05], and the relative entropy of quantumness [@HorEtAl05]. In the next subsection, we further extend our results for these von Neumann measures, including other measures into the hierarchy collapse.
Collapse of von Neumann measures {#sct5.3}
--------------------------------
### Long list of measures involved in the collapse
Here we elaborate on the hierarchy collapse for von Neumann measures, giving a long list of the measures involved. While operational or conceptual meanings of many of the measures can be found in [@HHHH09; @ModiEtAl2011review], this article is more concerned with the fact that they form a hierarchy and that this hierarchy collapses for ${\textsf{MQ}}$ states. To illustrate the dramatic effect of the collapse, we attempt to demonstrate it for as many measures as possible here, even though it comes at the expense of having to define many quantities.
In the previous subsection, we considered the coherent information $I_c$ [@NieChu00], relative entropy of entanglement ${\mathbb{E}}_R$ [@VedrPlen1998], one-way information deficit ${\Delta}^{\to}$ [@HorEtAl05], and relative entropy of quantumness ${\Delta}^{\leftrightarrow}$ [@HorEtAl05], respectively defined by: $$\begin{aligned}
\label{eqn31}
I_c^{\overrightarrow {A|B}}(\rho_{AB})&:=-H(A|B)=D(\rho_{AB}|| {\openone}{\otimes}\rho_B),\notag\\
{\mathbb{E}}_R^{A|B}(\rho_{AB})&:=\min_{{\sigma }_{AB}\in \textsf{Sep}}D(\rho_{AB}|| {\sigma }_{AB}), \notag\\
{\Delta}^{\overrightarrow{A|B}}(\rho_{AB})&:=\min_{{\sigma }_{AB}\in \textsf{CQ}}D(\rho_{AB}|| {\sigma }_{AB}), \notag\\
{\Delta}^{\overleftrightarrow{A|B}}(\rho_{AB})&:=\min_{{\sigma }_{AB}\in \textsf{CC}}D(\rho_{AB}|| {\sigma }_{AB}). \end{aligned}$$ We note that $I_c$ appears in the expression for the quantum capacity of a quantum channel [@Lloyd97], is related to the entanglement distillable through one-way hashing [@DevWin05], and has been interpreted as the entanglement gained in quantum state merging [@HorOppWin05].
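To make the first of these definitions concrete, the following numpy sketch (our own illustration) evaluates the coherent information from its entropic form $I_c = S(B) - S(AB)$ for a maximally entangled state and for a product of maximally mixed states; the latter shows that $I_c$ can be negative even though every entanglement and discord measure in the display vanishes.

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy in bits."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

def coherent_info(rho_AB, dA, dB):
    """I_c = -H(A|B) = S(B) - S(AB), in bits."""
    rho_B = np.trace(rho_AB.reshape(dA, dB, dA, dB), axis1=0, axis2=2)
    return vn_entropy(rho_B) - vn_entropy(rho_AB)

bell = np.zeros(4); bell[0] = bell[3] = 1/np.sqrt(2)
Ic_bell = coherent_info(np.outer(bell, bell), 2, 2)   # maximally entangled: +1 bit

rho_prod = np.eye(4) / 4                              # (1/2) (x) (1/2): uncorrelated
Ic_prod = coherent_info(rho_prod, 2, 2)               # -1 bit: negative
```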
We will also consider discord measures [@OllZur01; @WuEtAl2009] based on a difference of quantum mutual informations $I(\rho)$, defined as follows $$\begin{aligned}
\label{eqn32}
{\delta }^{\overrightarrow{A|B}}(\rho_{AB}):=\min_{{\mathcal{Y}}} \{I(\rho_{AB})-I[({\mathcal{Y}}{\otimes}{\openone})(\rho_{AB})]\},\notag\\
{\delta }^{\overleftrightarrow{A|B}}(\rho_{AB}):= \min_{{\mathcal{Y}},{\mathcal{Y}}'} \{I(\rho_{AB})-I[({\mathcal{Y}}{\otimes}{\mathcal{Y}}')(\rho_{AB})]\}.\end{aligned}$$ Here, we suppose that $\{Y_j\}$ and $\{Y'_k\}$ are positive operator valued measures (POVMs) on $A$ and $B$, respectively, and the quantum channels ${\mathcal{Y}}$ and ${\mathcal{Y}}'$ associated with these POVMs are defined such that $$\begin{aligned}
({\mathcal{Y}}{\otimes}{\openone})(\rho_{AB})&:=\sum_j {{|j\rangle}\!{\langle j|}}{\otimes}{{\rm Tr}}_A(Y_j \rho_{AB}), \notag \\
({\mathcal{Y}}{\otimes}{\mathcal{Y}}')(\rho_{AB})&:=\sum_{j,k} {{\rm Tr}}[(Y_j{\otimes}Y'_k) \rho_{AB}] {{|j\rangle}\!{\langle j|}}{\otimes}{{|k\rangle}\!{\langle k|}},\notag \end{aligned}$$ with $\{{|j\rangle}\}$ being the standard (orthonormal) basis.
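The channel $({\mathcal{Y}}{\otimes}{\openone})$ is easy to implement numerically. In the sketch below (our own illustration, with the Bell state and the computational-basis measurement as arbitrary choices), the unoptimized difference of mutual informations for a maximally entangled state already equals one bit; since the discord of a Bell state is known to be one bit, this particular measurement happens to achieve the optimum in ${\delta }^{\overrightarrow{A|B}}$.

```python
import numpy as np

def vn_entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

def mutual_info(rho_AB, dA, dB):
    """I(A:B) = S(A) + S(B) - S(AB), in bits."""
    R = rho_AB.reshape(dA, dB, dA, dB)
    rho_A = np.trace(R, axis1=1, axis2=3)
    rho_B = np.trace(R, axis1=0, axis2=2)
    return vn_entropy(rho_A) + vn_entropy(rho_B) - vn_entropy(rho_AB)

def measure_A(rho_AB, povm, dA, dB):
    """(Y (x) id)(rho): classical register |j><j| times Tr_A[(Y_j (x) 1) rho]."""
    R = rho_AB.reshape(dA, dB, dA, dB)
    out = np.zeros((len(povm)*dB, len(povm)*dB), dtype=complex)
    for j, Yj in enumerate(povm):
        blk = np.einsum('ak,kbad->bd', Yj, R)   # Tr_A[(Y_j (x) 1) rho]
        out[j*dB:(j+1)*dB, j*dB:(j+1)*dB] = blk
    return out

bell = np.zeros(4); bell[0] = bell[3] = 1/np.sqrt(2)
rho = np.outer(bell, bell)
Z_povm = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

I_before = mutual_info(rho, 2, 2)                          # 2 bits
I_after = mutual_info(measure_A(rho, Z_povm, 2, 2), 2, 2)  # 1 bit
delta_unopt = I_before - I_after                           # 1 bit
```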
Now we define regularized versions of these measures: $$\begin{aligned}
\label{eqn34}
I_{c,\infty}^{\overrightarrow {A|B}}(\rho_{AB})&:=\lim_{N\to \infty} (1/N) I_c^{\overrightarrow{A^{{\otimes}N}|B^{{\otimes}N}}}(\rho_{AB}^{{\otimes}N}), \notag\\
{\mathbb{E}}_{R,\infty}^{A|B}(\rho_{AB})&:= \lim_{N\to \infty}(1/N){\mathbb{E}}_R^{A^{{\otimes}N}|B^{{\otimes}N}}(\rho_{AB}^{{\otimes}N}), \notag\\
{\Delta}_{\infty}^{\overrightarrow{A|B}}(\rho_{AB})&:= \lim_{N\to \infty}(1/N) {\Delta}^{\overrightarrow{A^{{\otimes}N}|B^{{\otimes}N}}}(\rho_{AB}^{{\otimes}N}), \notag\\
{\Delta}_{\infty}^{\overleftrightarrow{A|B}}(\rho_{AB})&:= \lim_{N\to \infty}(1/N) {\Delta}^{\overleftrightarrow{A^{{\otimes}N}|B^{{\otimes}N}}}(\rho_{AB}^{{\otimes}N}), \notag\\
{\delta }_{\infty}^{\overrightarrow{A|B}}(\rho_{AB})&:= \lim_{N\to \infty}(1/N) {\delta }^{\overrightarrow{A^{{\otimes}N}|B^{{\otimes}N}}}(\rho_{AB}^{{\otimes}N}),\notag\\
{\delta }_{\infty}^{\overleftrightarrow{A|B}}(\rho_{AB})&:= \lim_{N\to \infty}(1/N) {\delta }^{\overleftrightarrow{A^{{\otimes}N}|B^{{\otimes}N}}}(\rho_{AB}^{{\otimes}N}) .\end{aligned}$$ From the additivity of the von Neumann relative entropy, we have $$I_{c,\infty}^{\overrightarrow {A|B}}=I_c^{\overrightarrow {A|B}},$$ and it was shown in [@Devetak07] and discussed in [@HorEtAl05] that $$\label{eqn35}
{\delta }_{\infty}^{\overrightarrow{A|B}}={\Delta}_{\infty}^{\overrightarrow{A|B}}.$$ In the asymptotic regime, ${\mathbb{E}}_{R,\infty}$ uniquely characterises the amount of entanglement in a state when all non-entangling transformations are allowed [@BranPlen2008], while ${\delta }_{\infty}^{\to}$ has been linked to entanglement irreversibility (when dilution and distillation are respectively done by LOCC and hashing) in a tripartite scenario [@CorDeOFan11].
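The additivity claim for $I_c$ under tensor powers can be checked directly on two copies. A small sketch (our own, using an arbitrary isotropic two-qubit state): since the von Neumann entropy depends only on eigenvalues, $S(\rho^{\otimes 2}) = 2S(\rho)$ without any index reordering, and the regularization is trivial.

```python
import numpy as np

def vn_entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

bell = np.zeros(4); bell[0] = bell[3] = 1/np.sqrt(2)
rho = 0.95*np.outer(bell, bell) + 0.05*np.eye(4)/4   # isotropic two-qubit state

rho_B = np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)
Ic_1 = vn_entropy(rho_B) - vn_entropy(rho)

# Von Neumann entropy is additive under tensor products, hence so is I_c;
# the B-marginal of two copies is rho_B (x) rho_B.
Ic_2 = vn_entropy(np.kron(rho_B, rho_B)) - vn_entropy(np.kron(rho, rho))
```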
In what follows, we also consider the distillable entanglement ${\mathbb{E}}_D$ and the distillable secret key $K_D$ [@HHHH09], both of which are asymptotic rates for conversion of many copies of $\rho_{AB}$ into some resource, where the resource is EPR pairs and bits of secret correlation, respectively, for ${\mathbb{E}}_D$ and $K_D$.
Now we consider some hierarchies satisfied by the above measures. As mentioned, the basic structure of these hierarchies is that coherent information lower-bounds entanglement, which lower-bounds one-way discord, which lower-bounds two-way discord, and each of the equations below has this form. The first involves discord based on relative entropy, the second involves discord based on a difference of mutual informations, and the third involves regularised versions of these measures. Thus, individually, each equation in the following Lemma, proved in Appendix \[app4\], can be regarded as a quantum correlation hierarchy.
\[thm3\] For any bipartite state $\rho_{AB}$, $$\begin{aligned}
\label{eqn36}&I_c^{\overrightarrow {A|B}}{\leqslant}{\mathbb{E}}_{D}^{A|B}{\leqslant}K_{D}^{A|B} {\leqslant}{\mathbb{E}}_{R}^{A|B}{\leqslant}{\Delta}^{\overrightarrow{A|B}}, {\Delta}^{\overrightarrow{B|A}}{\leqslant}{\Delta}^{\overleftrightarrow{A|B}},\\
\label{eqn37}&I_c^{\overrightarrow {A|B}}{\leqslant}{\mathbb{E}}_{D}^{A|B}{\leqslant}K_{D}^{A|B}{\leqslant}{\delta }^{\overrightarrow{A|B}}, {\delta }^{\overrightarrow{B|A}} {\leqslant}{\delta }^{\overleftrightarrow{A|B}} {\leqslant}{\Delta}^{\overleftrightarrow{A|B}},\\
\label{eqn38}&I_c^{\overrightarrow {A|B}} {\leqslant}{\mathbb{E}}_{R,\infty}^{A|B} {\leqslant}{\delta }_{\infty}^{\overrightarrow{A|B}}, {\delta }_{\infty}^{\overrightarrow{B|A}} {\leqslant}{\delta }_{\infty}^{\overleftrightarrow{A|B}} {\leqslant}{\Delta}_{\infty}^{\overleftrightarrow{A|B}} {\leqslant}{\Delta}^{\overleftrightarrow{A|B}}.\end{aligned}$$
We now see that each of these hierarchies collapses in the special case where the state is ${\textsf{MQ}}$. In fact, the hierarchies themselves are useful in proving the collapse. In Theorem \[thm2\], we showed that, if $\tilde{\rho}_{M_XS}\in \textsf{MQ}$, then $$I_c^{\overrightarrow{M_X|S}}= {\Delta}^{\overleftrightarrow{M_X|S}},$$ so combining this with Lemma \[thm3\] immediately implies the following result.
\[thm4\] For any state in $\textsf{MQ}$, i.e., any premeasurement state $\tilde{\rho}_{M_XS}= V_X\rho_S V_X{^\dagger}$, $$\begin{aligned}
\label{eqn39}
I_c^{\overrightarrow{M_X|S}}&= {\mathbb{E}}_{D}^{M_X|S}=K_{D}^{M_X|S}={\mathbb{E}}_{R,\infty}^{M_X|S}={\mathbb{E}}_{R}^{M_X|S}\notag\\
&= {\delta }_{\infty}^{\overrightarrow{M_X|S}}= {\delta }_{\infty}^{\overrightarrow{S|M_X}}={\delta }^{\overrightarrow{M_X|S}}= {\delta }^{\overrightarrow{S|M_X}} \notag\\
&= {\Delta}^{\overrightarrow{M_X|S}}= {\Delta}^{\overrightarrow{S|M_X}}={\delta }_{\infty}^{\overleftrightarrow{M_X|S}}={\Delta}_{\infty}^{\overleftrightarrow{M_X|S}} \notag\\
&={\delta }^{\overleftrightarrow{M_X|S}}={\Delta}^{\overleftrightarrow{M_X|S}}.\end{aligned}$$
While the list in Theorem \[thm4\] is very long, we note that not all measures participate in the collapse for $\textsf{MQ}$. For example, $I_c^{\overrightarrow{S|M_X}}=-H(S|M_X)$ need not be equal to the other correlation measures appearing above. One can see this as follows. For the state $\tilde{\rho}_{M_XS}$ which is purified by $E$ to the state $\tilde{\rho}_{M_XSE}$, we have: $$H(S|M_X)-H(M_X|S)=H(E|M_X)=\sum_j p_j H(\rho_{E,j}),\notag$$ where $\tilde{\rho}_{M_XE}=\sum_j p_j {{|j\rangle}\!{\langle j|}}{\otimes}\rho_{E,j}\in \textsf{CQ}$. Hence if the $\rho_{E,j}$ are non-pure, then $[-H(S|M_X)]$ will not collapse onto the other measures. Also, see [@CorDeOFan11] for a discussion of entanglement of formation and entanglement cost.
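The identity in the last display can be probed numerically. With a fine-grained (rank-one) $X$ on a purified system, measuring $S$ leaves $E$ pure and both sides vanish, so a coarse (rank-two) projective $X$ on a 4-dimensional $S$ is needed to see a nonzero gap. The following sketch (our own construction: the coarse observable, the state, and its purification are all illustrative choices) verifies $H(S|M_X)-H(M_X|S)=H(E|M_X)=\sum_j p_j H(\rho_{E,j})$, equal to one bit here.

```python
import numpy as np

def S(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

dM, dS, dE = 2, 4, 2
# Coarse (rank-2) projective observable X on the 4-dim system S
Pi = [np.diag([1., 1., 0., 0.]), np.diag([0., 0., 1., 1.])]
V = sum(np.kron(np.eye(dM)[:, [j]], Pi[j]) for j in range(dM))   # S -> M_X (x) S

# A purification |psi>_{SE} of a rank-2 mixed rho_S
phi0 = np.array([1., 0., 1., 0.]) / np.sqrt(2)
phi1 = np.array([0., 1., 0., 1.]) / np.sqrt(2)
psi_SE = (np.kron(phi0, [1., 0.]) + np.kron(phi1, [0., 1.])) / np.sqrt(2)

phi = np.kron(V, np.eye(dE)) @ psi_SE
R = np.outer(phi, phi).reshape(dM, dS, dE, dM, dS, dE)   # pure state on M,S,E

rho_M  = np.einsum('msense->mn', R)                       # trace out S,E
rho_S  = np.einsum('msemte->st', R)                       # trace out M,E
rho_MS = np.einsum('msente->msnt', R).reshape(dM*dS, dM*dS)
rho_ME = np.einsum('msensf->menf', R).reshape(dM*dE, dM*dE)

lhs = (S(rho_MS) - S(rho_M)) - (S(rho_MS) - S(rho_S))     # H(S|M_X) - H(M_X|S)
H_E_given_M = S(rho_ME) - S(rho_M)

# rho_ME is classical-quantum: read off p_j and rho_{E,j} from its diagonal blocks
mix = 0.0
for j in range(dM):
    blk = rho_ME[j*dE:(j+1)*dE, j*dE:(j+1)*dE]
    pj = np.trace(blk).real
    mix += pj * S(blk / pj)
```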
### Is the collapse unique to $\textsf{MQ}$?
Here we give a simple argument that $\textsf{MQ} $ is the *only* set of bipartite states for which $I_c^{\overrightarrow{A|B}}= {\Delta}^{\overleftrightarrow{A|B}} $, and hence the only set that collapses the *full* hierarchy as in Theorem \[thm4\]. Let $C$ purify $\rho_{AB}$, then it is straightforward to show that: $$\begin{aligned}
I_c^{\overrightarrow{A|B}}= {\delta }^{\overrightarrow{A|B}}-{\delta }^{\overrightarrow{A|C}},\end{aligned}$$ by noting that the optimization in ${\delta }^{\to}$ is achieved by a rank-one POVM, and in fact the same rank-one POVM achieves the optimization in both ${\delta }^{\overrightarrow{A|B}}$ and ${\delta }^{\overrightarrow{A|C}}$ [@ColesEtAl]. From Ref. [@DattaArxiv2010] and the definition of ${\textsf{MQ}}$, we have: $${\delta }^{\overrightarrow{A|C}}=0 \Leftrightarrow \rho_{AC}\in \textsf{CQ} \Leftrightarrow \rho_{AB}\in \textsf{MQ} .$$ Therefore, for any $\rho_{AB}$ that is *not* in $\textsf{MQ}$, we have ${\delta }^{\overrightarrow{A|C}}>0$ (for all purifications $\rho_{ABC}$ of $\rho_{AB}$) and $$\begin{aligned}
\label{eqn40}
I_c^{\overrightarrow{A|B}} < {\delta }^{\overrightarrow{A|B}} {\leqslant}{\Delta}^{\overleftrightarrow{A|B}},\end{aligned}$$ showing that $\textsf{MQ}$ is the only set of states for which $I_c^{\overrightarrow{A|B}} = {\delta }^{\overrightarrow{A|B}}$, and hence the only set for which $I_c^{\overrightarrow{A|B}}={\Delta}^{\overleftrightarrow{A|B}}$.
We wish to emphasize that other states besides $\textsf{MQ}$ states may collapse “part” of the hierarchy. For example, consider a tensor product of maximally mixed states, say, of the form $\rho_{AB}= ( {\openone}/ d) {\otimes}( {\openone}/ d)$. Clearly all measures of entanglement and discord are zero for this state. But the coherent information is $I_c^{\overrightarrow{A|B}} = - \log d$, and this state is not an $\textsf{MQ}$ state.
Likewise, as mentioned in Section \[sct22b\], a superset of ${\textsf{MQ}}$, denoted ${\textsf{mQ}}$, partially collapses the hierarchy, as shown in [@CorDeOFan11]. Specifically, Ref. [@CorDeOFan11] showed that $$\begin{aligned}
\label{eqn41}
I_c^{\overrightarrow{A|B}}= {\mathbb{E}}_{D}^{A|B}=K_{D}^{A|B}={\mathbb{E}}_{R,\infty}^{A|B}= {\delta }^{\overrightarrow{B|A}}_{\infty} ={\delta }^{\overrightarrow{B|A}}\end{aligned}$$ for $\rho_{AB}\in \textsf{mQ}$. However, indicates that, for those states in ${\textsf{mQ}}$ that are not in ${\textsf{MQ}}$, there is a gap between the “collapsed measures” appearing in and a particular one-way discord, ${\delta }^{\overrightarrow{A|B}}$.
Collapse of smooth measures {#sct5.4}
---------------------------
While there are various correlation hierarchies that we could investigate, we have been focusing on those that involve a conditional entropy as one of the measures. This is because we will ultimately be interested in using the hierarchy collapse to reinterpret entropic uncertainty relations (EURs), which are often formulated using conditional entropies. One such EUR has been formulated for smooth entropies [@TomRen2010], and so we will consider the correlation hierarchy related to smooth entropies in this subsection, again with the intention of giving a reinterpretation of this EUR.
Smooth entropies pose a dilemma in that they are highly powerful tools relevant to non-asymptotic information-processing tasks [@RennerThesis05URL; @TomamichelThesis2012], yet they are quite technical. We therefore give only the main results in this section, and relegate all proofs to (a lengthy) Appendix \[app44\].
We start with the min- and max-entropies [@KonRenSch09], $$\begin{aligned}
&H_{\min}(A|B)_{\rho}:= \max_{{\sigma }_B}[-D_{\max}(\rho_{AB}|| {\openone}{\otimes}{\sigma }_B)], \notag\\
&H_{\max}(A|B)_{\rho}:= \max_{{\sigma }_B}[-D_{{\text{fid}}}(\rho_{AB}|| {\openone}{\otimes}{\sigma }_B)], \notag\end{aligned}$$ where the maximization is over all normalized density operators ${\sigma }_B$, and $D_{\max}$ and $D_{{\text{fid}}}$ were defined in and .
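For intuition, when ${\sigma }$ is invertible $D_{\max}(\rho||{\sigma })$ has the closed form $\log_2 \|{\sigma }^{-1/2}\rho\,{\sigma }^{-1/2}\|_\infty$. The sketch below (our own illustration) evaluates $H_{\min}(A|B)$ for a diagonal product state by a brute-force scan over diagonal ${\sigma }_B$, recovering the expected value $-\log_2\|\rho_A\|_\infty$ at ${\sigma }_B=\rho_B$. In general the optimization is a semidefinite program; the grid search suffices here only because all operators commute.

```python
import numpy as np

def D_max(rho, sigma):
    """D_max(rho||sigma) = log2 of largest eigenvalue of sigma^{-1/2} rho sigma^{-1/2}."""
    lam, U = np.linalg.eigh(sigma)
    inv_sqrt = U @ np.diag(1.0/np.sqrt(lam)) @ U.conj().T
    return float(np.log2(np.linalg.eigvalsh(inv_sqrt @ rho @ inv_sqrt)[-1]))

rho_A = np.diag([0.9, 0.1])
rho_B = np.diag([0.7, 0.3])
rho_AB = np.kron(rho_A, rho_B)

# Brute-force the optimization in H_min(A|B) over diagonal sigma_B
# (sufficient here because everything is diagonal).
best = -np.inf
for q in np.linspace(0.01, 0.99, 99):
    sigma_B = np.diag([q, 1-q])
    best = max(best, -D_max(rho_AB, np.kron(np.eye(2), sigma_B)))

# For a product state the optimum is sigma_B = rho_B,
# giving H_min(A|B) = -log2(largest eigenvalue of rho_A).
```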
To define the smooth entropy of $\rho$, we optimize the entropy over a ball of radius ${\epsilon}$ centered around $\rho$ in the space of subnormalized positive operators, denoted ${\mathcal{B}}^{{\epsilon}}(\rho)$. We use the purified distance to define this ball [@TomColRen10], again with all the details in Appendix \[app44\]. Then, the smooth min- and max-entropies are defined as [@TomamichelThesis2012; @TomColRen10]: $$\begin{aligned}
&H^{{\epsilon}}_{\min}(A|B)_{\rho}:=\max_{{\sigma }_{AB} \in {\mathcal{B}}^{{\epsilon}}(\rho)} H_{\min}(A|B)_{{\sigma }}, \notag\\
&H^{{\epsilon}}_{\max}(A|B)_{\rho}:= \min_{{\sigma }_{AB} \in {\mathcal{B}}^{{\epsilon}}(\rho)} H_{\max}(A|B)_{{\sigma }}. \notag\end{aligned}$$ Note that a maximisation (minimisation) is performed for the smooth min (max) entropy; in this form these are the relevant quantities for characterising the operational tasks involved in quantum key distribution [@RennerThesis05URL; @TomamichelThesis2012].
To obtain results that are mathematically analogous to Lemma \[thm1\] and Theorem \[thm2\], we will need to define smooth measures of entanglement and discord. We note that smooth measures of entanglement were considered, e.g., in [@BusDatPRL2011; @BranDatt2011]. Consider first the unsmooth measures of entanglement and discord (one-way and two-way) based on the max relative entropy, respectively given by: $$\begin{aligned}
{\mathbb{E}}_{\max}^{A|B}(\rho_{AB}):=\min_{{\sigma }_{AB}\in \textsf{Sep}}D_{\max}(\rho_{AB}|| {\sigma }_{AB}),\notag\\
{\Delta}_{\max}^{\overrightarrow{A|B}}(\rho_{AB}):=\min_{{\sigma }_{AB}\in \textsf{CQ}}D_{\max}(\rho_{AB}|| {\sigma }_{AB}),\notag\\
{\Delta}_{\max}^{\overleftrightarrow{A|B}}(\rho_{AB}):=\min_{{\sigma }_{AB}\in \textsf{CC}}D_{\max}(\rho_{AB}|| {\sigma }_{AB}).\notag\end{aligned}$$ We also consider analogous quantities ${\mathbb{E}}_{{\text{fid}}}^{A|B}$, ${\Delta}_{{\text{fid}}}^{\overrightarrow{A|B}}$, and ${\Delta}_{{\text{fid}}}^{\overleftrightarrow{A|B}}$ defined similarly but with $D_{\max}$ replaced by $D_{{\text{fid}}}$. We note that ${\mathbb{E}}_{{\text{fid}}}$ and ${\mathbb{E}}_{\max}$ are non-increasing under LOCC due to Property \[a\], and ${\mathbb{E}}_{\max}$ was characterised in [@Datta09].
We now define smooth versions of these quantum correlation measures, as follows: $$\begin{aligned}
\label{eqn42}
_{{\epsilon}}{\mathbb{E}}_{\max}^{A|B}(\rho_{AB}):= \min_{{\sigma }_{AB} \in {\mathcal{B}}^{{\epsilon}}(\rho)} {\mathbb{E}}_{\max}^{A|B}({\sigma }_{AB}),\notag\\
_{{\epsilon}}{\Delta}_{\max}^{\overrightarrow{A|B}}(\rho_{AB}):= \min_{{\sigma }_{AB} \in {\mathcal{B}}^{{\epsilon}}(\rho)} {\Delta}_{\max}^{\overrightarrow{A|B}}({\sigma }_{AB}),\notag\\
_{{\epsilon}}{\Delta}_{\max}^{\overleftrightarrow{A|B}}(\rho_{AB}):= \min_{{\sigma }_{AB} \in {\mathcal{B}}^{{\epsilon}}(\rho)} {\Delta}_{\max}^{\overleftrightarrow{A|B}}({\sigma }_{AB}),\end{aligned}$$ and $$\begin{aligned}
\label{eqn43}
_{{\epsilon}}{\mathbb{E}}_{{\text{fid}}}^{A|B}(\rho_{AB}):= \max_{{\sigma }_{AB} \in {\mathcal{B}}^{{\epsilon}}(\rho)} {\mathbb{E}}_{{\text{fid}}}^{A|B}({\sigma }_{AB}),\notag\\
_{{\epsilon}}{\Delta}_{{\text{fid}}}^{\overrightarrow{A|B}}(\rho_{AB}):= \max_{{\sigma }_{AB} \in {\mathcal{B}}^{{\epsilon}}(\rho)} {\Delta}_{{\text{fid}}}^{\overrightarrow{A|B}}({\sigma }_{AB}),\notag\\
_{{\epsilon}}{\Delta}_{{\text{fid}}}^{\overleftrightarrow{A|B}}(\rho_{AB}):= \max_{{\sigma }_{AB} \in {\mathcal{B}}^{{\epsilon}}(\rho)} {\Delta}_{{\text{fid}}}^{\overleftrightarrow{A|B}}({\sigma }_{AB}).\end{aligned}$$ A smooth max entanglement defined similarly to the one in was previously given an operational meaning in terms of one-shot catalytic entanglement cost under non-entangling maps [@BranDatt2011]. We note that performing a minimisation in and a maximisation in appears to be necessary to obtain the generalisation of our results to smooth measures.
We now state an analog of Lemma \[thm1\] for smooth measures, where Eqs. and below can be viewed as quantum correlation hierarchies involving the smooth min and max entropies, respectively.
\[thm5\] For any bipartite state $\rho_{AB}$, $$\begin{aligned}
\label{eqn44}&-H^{{\epsilon}}_{\min}(A|B) {\leqslant}{}_{{\epsilon}}{\mathbb{E}}^{A|B}_{\max} {\leqslant}{}_{{\epsilon}}{\Delta}^{\overrightarrow{A|B}}_{\max}, {}_{{\epsilon}}{\Delta}^{\overrightarrow{B|A}}_{\max} {\leqslant}{}_{{\epsilon}}{\Delta}^{\overleftrightarrow{A|B}}_{\max}, \\
\label{eqn45}&-H^{{\epsilon}}_{\max}(A|B) {\leqslant}{}_{{\epsilon}}{\mathbb{E}}^{A|B}_{{\text{fid}}} {\leqslant}{}_{{\epsilon}}{\Delta}^{\overrightarrow{A|B}}_{{\text{fid}}}, {}_{{\epsilon}}{\Delta}^{\overrightarrow{B|A}}_{{\text{fid}}} {\leqslant}{}_{{\epsilon}}{\Delta}^{\overleftrightarrow{A|B}}_{{\text{fid}}}.\end{aligned}$$
Analogous to Theorem \[thm2\], we find that the hierarchies of smooth quantum correlation measures in and collapse in the special case of premeasurement states.
\[thm6\] For any state in $\textsf{MQ}$, i.e., any premeasurement state $\tilde{\rho}_{M_XS}= V_X\rho_S V_X{^\dagger}$, $$\begin{aligned}
\label{eqn46} -H^{{\epsilon}}_{\min}(M_X|S) &= {}_{{\epsilon}}{\mathbb{E}}^{M_X|S}_{\max} = {}_{{\epsilon}}{\Delta}^{\overrightarrow{M_X|S}}_{\max}\notag\\
&={}_{{\epsilon}}{\Delta}^{\overrightarrow{S|M_X}}_{\max} = {}_{{\epsilon}}{\Delta}^{\overleftrightarrow{M_X|S}}_{\max},\\
\label{eqn47} -H^{{\epsilon}}_{\max}(M_X|S) & = {}_{{\epsilon}}{\mathbb{E}}^{M_X|S}_{{\text{fid}}} = {}_{{\epsilon}}{\Delta}^{\overrightarrow{M_X|S}}_{{\text{fid}}}\notag\\
&={}_{{\epsilon}}{\Delta}^{\overrightarrow{S|M_X}}_{{\text{fid}}} = {}_{{\epsilon}}{\Delta}^{\overleftrightarrow{M_X|S}}_{{\text{fid}}}.\end{aligned}$$
We note that these smooth measures reduce to the corresponding non-smooth measures for ${\epsilon}= 0$. Hence, we had already proved Lemma \[thm5\] and Theorem \[thm6\] for the special case of ${\epsilon}= 0$ in Section \[sct5.2\], but the smooth versions of these results, valid for any ${\epsilon}{\geqslant}0$, are a significant generalization. While superficially it seems simple to add an ${\epsilon}$ as a superscript or subscript, let the reader beware that the proof of this result for smooth measures is non-trivial.
Connection to uncertainty {#sct6}
=========================
We have investigated several quantum correlation hierarchies, and in each case we found that premeasurement states collapse the hierarchy. We would now like to take advantage of the dynamic view, shown schematically in Fig. \[fgr1\], that these states are produced during the measurement process. In principle, premeasurement states can range from being maximally entangled to being only classically correlated to being completely uncorrelated. What features of the state *prior* to the controlled-shift operation in Fig. \[fgr1\] determine the correlations of the premeasurement state? As we will see, it is the *uncertainty* of the observable being measured that ultimately determines the correlations produced during the premeasurement.
The key property that allows us to connect uncertainty to the quantum correlations of premeasurement states is the tripartite duality of conditional entropy functions. For example, for the von Neumann entropy, we have: $$\label{eqn48}
H(A|B) = - H(A|C)$$ for any pure state on ${\mathcal{H}}_{ABC}$. Let us apply this duality to the pure state ${\tilde{\rho}}_{M_XSE} = V_X{{|\psi\rangle}\!{\langle \psi|}}V_X{^\dagger}$ shown in Fig. \[fgr1\], giving: $$\label{eqn49}
H(M_X|E)_{{\tilde{\rho}}} = - H(M_X|S)_{{\tilde{\rho}}}.$$ Now we note that the left side of is the standard way of defining the uncertainty of an observable conditioned on quantum memory [@RenesBoileau; @BertaEtAl; @TomRen2010; @ColesEtAl; @ColesColbeckYuZwolak2012PRL]. That is, one defines $H(X|E)_{\rho} := H(M_X|E)_{{\tilde{\rho}}}$: the uncertainty of observable $X$, when the observer is given access to system $E$, is the quantum conditional entropy of $M_X$ given $E$ at the end of the process depicted in Fig. \[fgr1\]. In addition, Theorem \[thm4\] showed that the right side of is equal to a long list of other quantum correlation measures, so we have: $$\begin{aligned}
\label{eqn50}
H(X|E)&= {\mathbb{E}}_{D}^{M_X|S}=K_{D}^{M_X|S}={\mathbb{E}}_{R,\infty}^{M_X|S}={\mathbb{E}}_{R}^{M_X|S}\notag\\
&= {\delta }_{\infty}^{\overrightarrow{M_X|S}}= {\delta }_{\infty}^{\overrightarrow{S|M_X}}={\delta }^{\overrightarrow{M_X|S}}= {\delta }^{\overrightarrow{S|M_X}} \notag\\
&= {\Delta}^{\overrightarrow{M_X|S}}= {\Delta}^{\overrightarrow{S|M_X}} ={\delta }_{\infty}^{\overleftrightarrow{M_X|S}}= {\Delta}_{\infty}^{\overleftrightarrow{M_X|S}}\notag\\
&={\delta }^{\overleftrightarrow{M_X|S}}={\Delta}^{\overleftrightarrow{M_X|S}},\end{aligned}$$ where it should be understood that the measures on the right side are applied to the state ${\tilde{\rho}}_{M_XS}$. We note that a preliminary version of appeared in Theorem 2 of Ref. [@ColesDecDisc2012], but our results here go significantly beyond the related results in [@ColesDecDisc2012].
This is a fascinating connection. It says that the uncertainty of an observable, given the environment, is a measure of quantum correlations (entanglement, discord, etc.) produced when that observable is measured. When the system’s initial state is pure, the environment $E$ can be ignored and the left side of becomes $H(X)$, the Shannon entropy of the $X$ observable.
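A numerical check of this connection (our own illustration; the mixed input state is an arbitrary choice): for a qubit $\rho_S$ measured in the computational basis, $H(X|E)$ computed from the $M_X E$ marginal agrees with $-H(M_X|S)$, and is strictly smaller than the unconditioned Shannon entropy $H(X)$ because $E$ holds side information about the outcome.

```python
import numpy as np

def S(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

d = 2
Pi = [np.outer(e, e) for e in np.eye(d)]                      # X = computational basis
V = sum(np.kron(np.eye(d)[:, [j]], Pi[j]) for j in range(d))  # premeasurement isometry

# rho_S = 0.7|+><+| + 0.3|-><-|, purified by a qubit E
plus = np.array([1., 1.])/np.sqrt(2); minus = np.array([1., -1.])/np.sqrt(2)
psi_SE = np.sqrt(0.7)*np.kron(plus, [1., 0.]) + np.sqrt(0.3)*np.kron(minus, [0., 1.])

phi = np.kron(V, np.eye(d)) @ psi_SE
R = np.outer(phi, phi).reshape(d, d, d, d, d, d)              # pure state on M,S,E

rho_M  = np.einsum('msense->mn', R)
rho_S  = np.einsum('msemte->st', R)
rho_MS = np.einsum('msente->msnt', R).reshape(d*d, d*d)
rho_ME = np.einsum('msensf->menf', R).reshape(d*d, d*d)
rho_E  = np.einsum('msemsf->ef', R)

H_X_given_E = S(rho_ME) - S(rho_E)   # uncertainty of X given the environment
neg_H_M_S   = S(rho_S) - S(rho_MS)   # -H(M_X|S): the collapsed correlation value
H_X         = S(rho_M)               # Shannon entropy of X (rho_M is diagonal)
```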
We remark that we have assumed $X$ is a projective (but not necessarily fine-grained) observable, i.e., a PVM. To generalize to the case where $X$ is a POVM, simply replace $S$ on the right side with a Naimark extension ${\mathbf{S}}$, i.e., an enlargement of the system’s Hilbert space that allows $X$ to be thought of as a projective observable. Such an extension can be found for any POVM.
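A Naimark extension can be written down explicitly. In the sketch below (the qubit “trine” POVM is our illustrative choice), the square roots of the POVM elements are stacked into an isometry $W=\sum_j |j\rangle{\otimes}\sqrt{Y_j}$; the projectors $|j\rangle\!\langle j|{\otimes}{\openone}$ on the extended space then reproduce the POVM statistics exactly.

```python
import numpy as np

# Trine POVM on a qubit: Y_j = (2/3)|v_j><v_j| with three equally spaced real unit vectors
angles = [0.0, 2*np.pi/3, 4*np.pi/3]
v = [np.array([np.cos(a), np.sin(a)]) for a in angles]
Y = [(2/3)*np.outer(u, u) for u in v]
assert np.allclose(sum(Y), np.eye(2))           # completeness: sum_j Y_j = 1

# Naimark isometry W = sum_j |j> (x) sqrt(Y_j); sqrt of a rank-1 element is easy
sqY = [np.sqrt(2/3)*np.outer(u, u) for u in v]
W = sum(np.kron(np.eye(3)[:, [j]], sqY[j]) for j in range(3))
assert np.allclose(W.T @ W, np.eye(2))          # W is an isometry

rho = np.diag([1.0, 0.0])                       # test state |0><0|
ext = W @ rho @ W.T
p_povm = [np.trace(Yj @ rho).real for Yj in Y]
# Projective measurement |j><j| (x) 1 on the extension: trace of the j-th block
p_proj = [np.trace(ext[2*j:2*j+2, 2*j:2*j+2]).real for j in range(3)]
```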
Now let us consider the analog of for other entropies. Consider the generic conditional entropy $H_K(A|B)$ introduced in Sect. \[sct5.2\], based on a generic relative entropy $D_K$. In [@ColesColbeckYuZwolak2012PRL], it was shown that, because of Properties \[a\] and \[b\], $H_K$ is guaranteed to have a *dual* entropy $H_{{\widehat{K}}}$ that is well-defined by $$\label{eqn51}
H_{{\widehat{K}}}(A|B):=-H_K(A|C),$$ where $\rho_{ABC}$ is a purification of $\rho_{AB}$. Again let us apply this duality to ${\tilde{\rho}}_{M_XSE}$ to obtain $H_{{\widehat{K}}}(M_X|E)_{{\tilde{\rho}}}=-H_K(M_X|S)_{{\tilde{\rho}}}$, invoke the standard definition for uncertainty with quantum memory $H_{{\widehat{K}}}(X|E)_{\rho} := H_{{\widehat{K}}}(M_X|E)_{{\tilde{\rho}}}$, and combine this with Theorem \[thm2\] to find: $$\begin{aligned}
\label{eqn52}
H_{{\widehat{K}}}(X|E)= {\mathbb{E}}_K^{M_X| S}= {\Delta}_K^{\overrightarrow{M_X|S}}={\Delta}_K^{\overrightarrow{S|M_X}}= {\Delta}_K^{\overleftrightarrow{M_X|S}}.\end{aligned}$$ This gives a fairly general connection between an observable’s uncertainty and the quantum correlations created upon its measurement, e.g., applicable when the correlation measures are based on any of the relative entropies in –. For example, because the min- and max-entropies are dual to each other [@KonRenSch09], implies that $$\begin{aligned}
\label{eqn53}
H_{\min}(X|E)&= {\mathbb{E}}_{{\text{fid}}}^{M_X| S}= {\Delta}_{{\text{fid}}}^{\overrightarrow{M_X|S}}={\Delta}_{{\text{fid}}}^{\overrightarrow{S|M_X}}= {\Delta}_{{\text{fid}}}^{\overleftrightarrow{M_X|S}},\notag\\
H_{\max}(X|E)&= {\mathbb{E}}_{\max}^{M_X| S}= {\Delta}_{\max}^{\overrightarrow{M_X|S}}={\Delta}_{\max}^{\overrightarrow{S|M_X}}= {\Delta}_{\max}^{\overleftrightarrow{M_X|S}}.\end{aligned}$$
Now consider the *smooth* min- and max- entropies discussed in Section \[sct5.4\]. They are dual to each other [@TomColRen10] in that $$H^{{\epsilon}}_{\max}(A|B) = -H^{{\epsilon}}_{\min}(A|C),$$ for pure $\rho_{ABC}$. Again applying this duality to ${\tilde{\rho}}_{M_XSE}$ and combining with Theorem \[thm6\] gives $$\begin{aligned}
\label{eqn54}
H^{{\epsilon}}_{\min}(X|E) = {}_{{\epsilon}}{\mathbb{E}}^{M_X|S}_{{\text{fid}}} = {}_{{\epsilon}}{\Delta}^{\overrightarrow{M_X|S}}_{{\text{fid}}}={}_{{\epsilon}}{\Delta}^{\overrightarrow{S|M_X}}_{{\text{fid}}} = {}_{{\epsilon}}{\Delta}^{\overleftrightarrow{M_X|S}}_{{\text{fid}}},\notag\\
H^{{\epsilon}}_{\max}(X|E) = {}_{{\epsilon}}{\mathbb{E}}^{M_X|S}_{\max} = {}_{{\epsilon}}{\Delta}^{\overrightarrow{M_X|S}}_{\max}={}_{{\epsilon}}{\Delta}^{\overrightarrow{S|M_X}}_{\max} = {}_{{\epsilon}}{\Delta}^{\overleftrightarrow{M_X|S}}_{\max}.\end{aligned}$$ This reduces to in the case ${\epsilon}= 0$, and hence generalizes the connection between uncertainty and the creation of quantum correlations to any ${\epsilon}{\geqslant}0$. It is worth remarking that the smooth entropies on the left side of have important operational meanings in one-shot randomness extraction and data compression [@TRSS10; @RenesRenner2012; @RennerThesis05URL], which is typically the motivation for their study. While we have not yet established operational meanings for the quantities on the right side of (though a similar smooth max-entanglement was given an operational meaning in [@BranDatt2011]), the connection nonetheless seems interesting, one reason being the validity for any value of ${\epsilon}$, suggesting that there truly is a deep connection between uncertainty and the quantum correlations produced in measurements.
Reinterpreting Entropic Uncertainty Relations {#sct7}
=============================================
Introduction {#sct7.1}
------------
The uncertainty principle plays a crucial role in our understanding of quantum mechanics, expressing a fundamental limit on our knowledge of certain pairs of observables. This idea, with no classical analog, has been captured quantitatively by so-called uncertainty relations, which in modern times typically take the form of a lower bound on the sum of the entropies of different observables and hence are called entropic uncertainty relations (EURs). Though the field dates back to Heisenberg, research on the uncertainty principle has seen a sort of revolution in recent years as it was realized [@RenesBoileau; @BertaEtAl] that the observer can possess quantum memory (a quantum system that could be entangled with the system of interest), and hence we should try to formulate the uncertainty principle within this more general context. This, along with the rise of quantum information theory, has led to a wide variety of EURs [@EURreview1] expressed using various entropy functions, some of which allow for quantum memory [@RenesBoileau; @BertaEtAl; @TomRen2010; @ColesEtAl; @ColesColbeckYuZwolak2012PRL].
The results of this article imply that there exists an interpretation of these EURs that is quite different from the typical one as constraints on our knowledge. The uncertainties appearing in these EURs have the form, for example, of the left-hand-sides of , , and . But we have shown that the uncertainty of an observable is quantitatively connected to, e.g., the entanglement created when that observable is measured. Hence, EURs have an interpretation that has nothing to do with uncertainty: they are lower bounds on the entanglement created when incompatible observables are measured! We illustrate this alternative view with a game, in what follows.
Entanglement distillation game {#sct7.2}
------------------------------
Here we focus on , in particular, the portion that reads: $$\label{eqn55}
H(X|E)= {\mathbb{E}}_{D}^{M_X|S}$$ where ${\mathbb{E}}_{D}$ is the distillable entanglement [@HHHH09], i.e., the optimal rate to distill EPR pairs using LOCC in the asymptotic limit (infinitely many copies of the state). Again, note that when the initial state of the system, $\rho_S$ in Fig. \[fgr1\], is pure then becomes $H(X)= {\mathbb{E}}_{D}^{M_X|S}$.
Equation gives an operational meaning to uncertainty relations written in terms of Shannon entropies [@EURreview1], or “Shannon uncertainty relations". We illustrate this with the following game, where Alice wants to establish entanglement with Bob but Eve (the adversary) wants to prevent this. Suppose the game is set up such that Eve feeds Alice an (unknown to Alice) pure state ${|\psi\rangle}_S$ of a qubit $S$ and a register qubit $M$ known to be in the ${|0\rangle}$ state. Alice is allowed to perform a CNOT between $S$ and $M$, such that some basis on $S$ controls the NOT on the $M$ qubit, and then she can send the $M$ qubit (over a perfect quantum channel) to Bob. The only freedom Alice is allowed is to change the basis that controls the CNOT. Suppose they repeat this $3N$ times, where $N$ is very large ($N\to \infty$), and each time Eve feeds Alice the same states. At the end of the game, Eve announces the state ${|\psi\rangle}_S$, and Alice’s and Bob’s task is now to distill at least $2N$ EPR pairs from their $3N$ pairs of qubits using LOCC, *no matter what ${|\psi\rangle}_S$ was*. A winning strategy is for Alice to use the $X$, $Y$, and $Z$ bases (three mutually orthogonal axes of the Bloch sphere) each $N$ times. Then, since ${\mathbb{E}}_D $ is additive here (see Appendix \[app1\]), the number of distillable EPR pairs is $N({\mathbb{E}}_D^{M_X|S}+{\mathbb{E}}_D^{M_Y|S}+{\mathbb{E}}_D^{M_Z|S})$. From and an uncertainty relation from [@SanchezRuiz1995] we have: $${\mathbb{E}}_D^{M_X|S}+{\mathbb{E}}_D^{M_Y|S}+{\mathbb{E}}_D^{M_Z|S} =H(X)+H(Y)+H(Z){\geqslant}2.$$ So, regardless of ${|\psi\rangle}_S$, the number of distillable EPR pairs is lower-bounded by $2N$. (This also gives an operational meaning to minimum uncertainty states of Shannon uncertainty relations [@ColesYuZwo2011]; in this example they have an EPR yield of precisely $2N$.) However, Eve can beat this protocol by feeding Alice a *mixed* state $\rho_S$ and keeping the purifying system $E$. 
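The Shannon bound underlying the winning strategy can be probed numerically (a sanity check, not a proof; the sampling scheme is ours): for random pure qubit states, the sum of the Shannon entropies of the $X$, $Y$, and $Z$ measurement statistics never dips below 2 bits, consistent with [@SanchezRuiz1995].

```python
import numpy as np

def shannon(p):
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# Pauli observables: their eigenbases are three mutually unbiased qubit bases
paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(7)
worst = np.inf
for _ in range(500):
    a = rng.normal(size=2) + 1j*rng.normal(size=2)
    psi = a / np.linalg.norm(a)                  # random pure qubit state
    total = 0.0
    for O in paulis:
        _, U = np.linalg.eigh(O)                 # columns of U: eigenbasis of O
        total += shannon(np.abs(U.conj().T @ psi)**2)
    worst = min(worst, total)

# H(X) + H(Y) + H(Z) >= 2 bits for every qubit state
```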
On the other hand, Alice can partially salvage the situation if she can somehow get a hold of a subsystem $E_1$ of $E=E_1E_2$ such that $SE_2$ is in a separable state. In this case, the uncertainty principle with quantum memory [@BertaEtAl], combined with , gives: $$\begin{aligned}
&{\mathbb{E}}_D^{M_X|SE_1}+{\mathbb{E}}_D^{M_Y|SE_1}+{\mathbb{E}}_D^{M_Z|SE_1} =\notag\\
& H(X|E_2) + H(Y|E_2)+H(Z|E_2) {\geqslant}\frac{3}{2}[1+H(S|E_2)]{\geqslant}\frac{3}{2},\notag\end{aligned}$$ since the von Neumann conditional entropy $H(S|E_2){\geqslant}0$ for separable states [@NieChu00]. So at least, in this case, Alice and Bob are assured to get $(3/2)N$ EPR pairs. This game illustrates that Shannon uncertainty relations are useful for designing protocols to create entanglement, whenever the state of one’s system is unknown.
Entanglement creation view of other EURs {#sct7.3}
----------------------------------------
We discussed EURs written in terms of Shannon entropies in the previous subsection, but EURs have been found for other entropies as well. Of particular interest are the min- and max-entropies because they have operational meanings [@KonRenSch09], and more generally, the smooth min- and max-entropies have operational meanings [@TRSS10; @RenesRenner2012; @RennerThesis05URL].
Let us consider an EUR proved by Tomamichel and Renner for the min- and max-entropies [@TomRen2010]. Consider any two POVMs $X=\{X_j\}$ and $Z=\{Z_k\}$ on system $S$ and any tripartite state $\rho_{SE_1E_2}$, then $$\label{eqn56}
H_{\min}(X|E_1)+ H_{\max}(Z|E_2) {\geqslant}\log\frac{1}{c(X,Z)},$$ where $c(X,Z)=\max_{j,k} \| \sqrt{Z_{k}} \sqrt{X_{j}}\|_\infty^2$ (the infinity norm of an operator is its largest singular value). Let us specialize to pure $\rho_{SE_1E_2}$, and combine with to obtain $$\label{eqn57}
{\mathbb{E}}_{{\text{fid}}}^{M_X| {\mathbf{S}}E_2} + {\mathbb{E}}_{\max}^{M_Z| {\mathbf{S}}E_1} {\geqslant}\log\frac{1}{c(X,Z)},$$ where ${\mathbf{S}}$ extends $S$ to allow $X$ and $Z$ to be projective. It is interesting that has nothing to do with uncertainty, and conceptually is just about entanglement, where ${\mathbb{E}}_{\max}$ has been given an operational meaning as a one-shot entanglement cost [@BranDatt2011], and ${\mathbb{E}}_{{\text{fid}}}$ is closely related to the geometric entanglement [@StreltsovEtAl2010NJP]. We note that is stronger than the inequality obtained from replacing $E_1$ and $E_2$ in with the joint system $E=E_1E_2$, since ${\mathbb{E}}_{{\text{fid}}}$ and ${\mathbb{E}}_{\max}$ are non-increasing under local partial trace, e.g., ${\mathbb{E}}^{M_X|{\mathbf{S}}E}_{{\text{fid}}} {\geqslant}{\mathbb{E}}^{M_X|{\mathbf{S}}E_2}_{{\text{fid}}}$. This strengthening of the inequality \[i.e., restricting $E$ to its subsystems as in \] corresponds precisely to the strengthening obtained from allowing quantum memory in the uncertainty relation, .
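The overlap $c(X,Z)$ is straightforward to evaluate. A small sketch (our own, using the two mutually unbiased qubit bases as an illustrative choice) gives $c=1/2$, so the right-hand side of the relation is one bit.

```python
import numpy as np

def psd_sqrt(A):
    """Square root of a positive semidefinite matrix via its eigendecomposition."""
    lam, U = np.linalg.eigh(A)
    return U @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ U.conj().T

def overlap_c(X_elems, Z_elems):
    """c(X,Z) = max_{j,k} ||sqrt(Z_k) sqrt(X_j)||_inf^2 (largest singular value, squared)."""
    return max(np.linalg.svd(psd_sqrt(Zk) @ psd_sqrt(Xj), compute_uv=False)[0]**2
               for Zk in Z_elems for Xj in X_elems)

H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
X_elems = [np.outer(np.eye(2)[j], np.eye(2)[j]) for j in range(2)]  # computational basis
Z_elems = [np.outer(H[:, k], H[:, k]) for k in range(2)]            # Hadamard basis

c = overlap_c(X_elems, Z_elems)
bound = np.log2(1/c)   # right-hand side of the relation: 1 bit for these bases
```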
In addition to proving , Tomamichel and Renner [@TomRen2010] generalized the uncertainty relation to the case of smooth entropies (${\epsilon}{\geqslant}0$): $$\label{eqn58}
H^{{\epsilon}}_{\min}(X|E_1) + H^{{\epsilon}}_{\max}(Z|E_2) {\geqslant}\log \frac{1}{c(X,Z)}.$$ This uncertainty relation has been received with significant excitement due to its application in proving the security of QKD even in the non-asymptotic case, where Alice and Bob do only a finite number of measurements [@TLGR]. It therefore seems interesting that we can rewrite in a way that takes on a completely different conceptual meaning. For pure $\rho_{SE_1E_2}$, this uncertainty relation combined with becomes $$\label{eqn59}
{}_{{\epsilon}}{\mathbb{E}}^{M_X|{\mathbf{S}}E_2}_{{\text{fid}}} + {}_{{\epsilon}}{\mathbb{E}}^{M_Z |{\mathbf{S}}E_1}_{\max} {\geqslant}\log \frac{1}{c(X,Z)},$$ which reduces to for the special case of ${\epsilon}=0$. Again, we note that is stronger than the inequality obtained from replacing $E_1$ and $E_2$ in with the joint system $E=E_1E_2$, since ${}_{{\epsilon}}{\mathbb{E}}_{{\text{fid}}}$ and ${}_{{\epsilon}}{\mathbb{E}}_{\max}$ are non-increasing under local partial trace (see Appendix \[app44\]).
Inequalities of the form of and bring to mind the paradigm of entanglement distribution, similar to the discussion in the previous subsection. Here one wishes to establish entanglement between distant locations by, for example, sending a carrier quantum system. A scenario to which and would be relevant is the following. Suppose that Alice, Bob, and Charlie, respectively, possess the $S$, $E_1$, and $E_2$ portions of two copies of a tripartite pure state ${|\psi\rangle}_{SE_1E_2}$, i.e., the overall state is ${|\psi\rangle}_{SE_1E_2}^{{\otimes}2}$. Suppose that they do not initially know what the state ${|\psi\rangle}_{SE_1E_2}$ is, but at the end of the protocol ${|\psi\rangle}_{SE_1E_2}$ is revealed to them. Perhaps Alice wishes to establish entanglement with Bob or with Charlie. She can perform a premeasurement of observable $X$ on one of her $S$ systems, keep the register $M_X$, and send the resulting $S$ system to Bob. On the other $S$ system, she premeasures observable $Z$, keeps the register $M_Z$, and sends the resulting $S$ system to Charlie. If $c(X,Z) < 1$, then she will have established entanglement with Bob and/or with Charlie (at least one of the two). This fact is guaranteed by and , and of course these inequalities quantitatively bound the amount of entanglement that is established.
Implications and future outlook {#sct8}
===============================
Entanglement complementarity relations
--------------------------------------
Our main technical results were given in Section \[sct5\], as theorems stating that a certain class of bipartite states called premeasurement states causes the quantum correlation hierarchy to collapse. However, our most important contribution may be the conceptual insight about the nature of uncertainty and uncertainty relations, discussed in Sections \[sct6\] and \[sct7\].
Apparently, many uncertainty relations, which are typically thought of as bounds on our knowledge of incompatible observables, can be reinterpreted as bounds on the entanglement created when incompatible observables are measured. This reinterpretation holds for any EUR for a finite-dimensional quantum system written, e.g., in terms of the Shannon entropy, smooth min-entropy, or smooth max-entropy.
Perhaps the most important implication is the idea that entanglement creation exhibits complementarity. Of course, researchers are somewhat familiar with the idea because of the so-called “measurement problem” and the fact that Schrödinger’s cat will get produced when a measurement device interacts with a system that is initially in a superposition state. But much of that discussion has been qualitative, whereas we have shown here that there are *precise* and *general* lower bounds on entanglement creation during measurement. It seems very interesting that complementarity, the idea that there are certain observables that are incompatible, can be expressed in a manner that has nothing to do with uncertainty. We think it is worthwhile to give these inequalities a name, say, entanglement complementarity relations (ECRs).
Even though each of the ECRs that we have presented in this article is equivalent to some EUR, it seems extremely likely that researchers will find ECRs in the future that have no obvious connection to an EUR. In other words, we think that ECRs are their own class of inequalities, and we believe there is plenty of room to explore them! This is especially true given that there is a vast zoo of entanglement measures [@HHHH09]. More generally, there is a vast zoo of quantum correlation measures [@ModiEtAl2011review], and so we should be open to possibly finding complementarity relations for entanglement, discord, and other related correlation measures.
As discussed in Section \[sct7\], it is possible that these ECRs could be useful for developing strategies to create and distribute entanglement, particularly if they are formulated with entanglement measures that have operational meanings.
Implications for quantum correlations
-------------------------------------
Here we mention a few more implications of our results in the field of quantum correlations, emphasizing that the connection of EURs to ECRs is the main implication.
One reason that pure states are so nice is that their entanglement, discord, and relative entropy of quantumness are so easy to calculate: each is just the entropy of the reduced state. The collapse of the hierarchy for $\textsf{MQ}$ states (which include but go beyond pure states) implies that their entanglement, discord, and relative entropy of quantumness are also quite easy to calculate. Doing so simply involves computing a conditional entropy, which, in the von Neumann case, does not involve any optimization process.
Another implication of the hierarchy collapse is that operational meanings get shared. That is, for $\textsf{MQ}$ states, entanglement measures inherit operational meanings of discord [@ModiEtAl2011review], and vice-versa.
Finally, we note that the entanglement created in premeasurements has been studied previously as a fairly general strategy to quantify discord [@PianiEtAl11; @StrKamBru11; @PianiAdessoPRA.85.040301]. The idea is that a state $\rho_{AB}$ is classical with respect to system $A$ if and only if there exists a premeasurement in some orthonormal basis $W$ on ${\mathcal{H}}_A$ that creates no entanglement between the register $M_W$ and the $AB$ system, i.e., if and only if ${\tilde{\rho}}_{M_W|AB}\in \textsf{Sep}$. On the other hand, our results (for example, Theorem \[thm4\]) imply that the following four conditions are equivalent: ${\tilde{\rho}}_{M_W |AB}\in \textsf{Sep} \Leftrightarrow {\tilde{\rho}}_{M_W |AB}\in \textsf{CQ} \Leftrightarrow {\tilde{\rho}}_{M_W |AB}\in \textsf{QC} \Leftrightarrow {\tilde{\rho}}_{M_W |AB}\in \textsf{CC} $. Thus, we have four equivalent classicality conditions. This naturally leads one to think of quantitative measures of the form $$\label{eqn60}
{\mathcal{D}}^{\overrightarrow{A|B}} (\rho_{AB}) = \min_{W\in {\mathcal{W}}_A }Q^{M_W |AB}({\tilde{\rho}}_{M_W AB})$$ where ${\mathcal{W}}_A$ is the set of all orthonormal bases on ${\mathcal{H}}_A$, and where $Q$ is any non-negative correlation measure that vanishes only on either $\textsf{Sep}$, $\textsf{CQ}$, $\textsf{QC}$, or $\textsf{CC}$. The quantity ${\mathcal{D}}^{\to}$ in can be thought of as a general one-way discord measure, with the generality of $Q$ giving a slightly more general framework than the case where $Q$ is restricted to be an entanglement measure.
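As an illustration of this family of measures, the following brute-force sketch (function names are ours, and the grid search is purely illustrative) takes $Q$ to be the entropy increase under local dephasing of $A$, giving a deficit-type quantity $\min_W H[({\mathcal{E}}_W{\otimes}{\rm id})(\rho_{AB})]-H(\rho_{AB})$, and evaluates it for a two-qubit Bell state, where every measurement basis yields one bit.

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def one_way_deficit(rho_AB, n_grid=40):
    """Brute-force minimum, over rank-one bases W on qubit A, of
    H[(E_W (x) id)(rho_AB)] - H(rho_AB)."""
    best = np.inf
    S0 = vn_entropy(rho_AB)
    for theta in np.linspace(0.0, np.pi, n_grid):
        for phi in np.linspace(0.0, 2 * np.pi, n_grid):
            v = np.array([np.cos(theta / 2),
                          np.exp(1j * phi) * np.sin(theta / 2)])
            P0 = np.kron(np.outer(v, v.conj()), np.eye(2))
            P1 = np.eye(4) - P0
            dephased = P0 @ rho_AB @ P0 + P1 @ rho_AB @ P1
            best = min(best, vn_entropy(dephased) - S0)
    return best

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): dephasing A in any basis raises
# the entropy from 0 to 1 bit, so the minimized deficit is exactly 1.
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell)
print(round(one_way_deficit(rho), 6))
```

For general states the minimization is over all bases and the coarse grid only upper-bounds the measure; for the Bell state the deficit is basis-independent, so the grid result is exact.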
Conclusions {#sct9}
===========
We have investigated the hierarchical ordering of quantum correlation measures, in which two-way discord is the broadest kind of quantum correlation (i.e., giving the largest value); progressively narrower (i.e., smaller in value) are one-way discord, then entanglement, and finally coherent information. Each of these four kinds of correlations can be quantified with different measures; for example, we have considered measures related to the von Neumann entropy, measures related to the smooth min- and max-entropies, and measures based on a generic relative entropy. In each case, we find a hierarchical ordering, and furthermore, we find that this hierarchy collapses to a single value for a special class of bipartite states called premeasurement states. In the case of measures related to von Neumann entropy, Section \[sct5.3\], we showed that these states are the *only* states that fully collapse the quantum correlation hierarchy, as in Theorem \[thm4\].
In addition to collapsing the hierarchy, these states are interesting because they can be thought of as being produced from the interaction of a system with a measurement device, schematically shown in Fig. \[fgr1\]. Indeed, they have been studied previously in the context of measurement, decoherence, and einselection [@ZurekReview]. Maximally correlated states are a special example of premeasurement states, though more generally premeasurement states can be asymmetric with respect to the two subsystems (one-way maximally correlated). In Section \[sct22b\] we discussed the relation of premeasurement states to a broader class of states considered in Ref. [@CorDeOFan11].
Considering the dynamic view that, indeed, the premeasurement state arose from the measurement process in Fig. \[fgr1\], we made the very interesting connection that the quantum correlations of the premeasurement state are precisely connected to the *uncertainty* of the observable being measured. As discussed in Section \[sct6\], this connection holds when the uncertainty is quantified, e.g., with Shannon / von Neumann entropy, smooth min-entropy, or smooth max-entropy. Though we gave a few preliminary results on this idea in [@ColesDecDisc2012] (see Theorem 2 of that article), the present article dramatically extends and generalizes this idea. We are left with the realization that uncertainty prior to a measurement implies that entanglement will be created in that measurement (one may need access to the purifying system to see the entanglement), and conversely the production of entanglement implies the lack of certainty about the observable being measured.
This intimate connection between uncertainty and the creation of entanglement (more generally, quantum correlations) has immediate consequences. Researchers have been proving stronger and stronger entropic uncertainty relations (EURs) over the past few decades. But these bounds on our knowledge of incompatible observables can be completely reinterpreted as bounds on the entanglement created when incompatible observables are measured. In Section \[sct7\], we illustrated this idea with a game, where Alice wanted to create and distribute entanglement to Bob, even when she has no idea what state she possesses. Measuring incompatible observables on different copies of her system, and then sending the registers to Bob, is a strategy for Alice to win this game, as guaranteed by “entanglement complementarity relations”, i.e., our reinterpretation of EURs in terms of entanglement creation. Section \[sct7\] discussed the reinterpretation of several EURs, including ones allowing for quantum memory [@BertaEtAl], and ones for the smooth min- and max-entropies [@TomRen2010].
Section \[sct8\] gives an optimistic future outlook for entanglement complementarity relations (ECRs). Even though every ECR presented here is linked to some EUR, is it possible to find ECRs that are not linked to some EUR? The present work shows that entanglement creation exhibits the phenomenon of complementarity; if this is a basic principle, then we would expect that there could be a whole class of yet-to-be-discovered inequalities that have nothing to do with uncertainty. These ECRs (or more generally, one can substitute any measure of quantum correlations in place of entanglement, and there is a vast zoo of such measures) offer a new way of capturing the complementarity of quantum mechanics. Exploration into ECRs could inspire strategies to generate and distribute entanglement, and perhaps more importantly, give deeper insight into the complementarity of quantum processes.
I thank Shiang Yong Looi, Roger Colbeck, and Marco Piani for helpful discussions, and I especially thank Eric Chitambar for helpful discussions as well as helpful comments on Section \[sct22b\]. I note that this work was partly inspired by Refs. [@PianiEtAl11; @StrKamBru11; @PianiAdessoPRA.85.040301]. Finally, I acknowledge support from the U.S. Office of Naval Research.
$\textsf{MQ}$ states {#app2}
====================
Here we show the equivalence of two alternative definitions of the set of $\textsf{MQ}$ states. Denote the definition given in and , respectively, as $ \textsf{MQ}_1 $ and $ \textsf{MQ}_2 $. We will now show that $\textsf{MQ}_1 = \textsf{MQ}_2 $. Consider some $\rho_{AB}\in \textsf{MQ}_1$ given by , then there exists an orthonormal basis $W=\{{|W_j\rangle}\}$ on ${\mathcal{H}}_A$ whose information is perfectly present in $B$ in the sense that the (unnormalized) conditional density operators on $B$: $$\label{eqn61}
\tau_{B,j}^W:={{\rm Tr}}_A({{|W_j\rangle}\!{\langle W_j|}}\rho_{AB})=X_j{\sigma }_B X_j$$ are all mutually orthogonal (i.e., $\tau_{B,j}^W \tau_{B,j'}^W = 0$ for $j\neq j'$). In [@ColesDecDisc2012], it was shown that, for pure $\rho_{ABC}$, $\rho_{AC}\in \textsf{CQ}$ iff the information about some orthonormal basis of ${\mathcal{H}}_A$ is perfectly present in $B$ \[see Eq. below for the argument\], hence we have shown that $\rho_{AB}\in \textsf{MQ}_2$ and that $\textsf{MQ}_1\subseteq \textsf{MQ}_2$.
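As a small numerical illustration of the $\textsf{MQ}_1$ form (a toy construction of ours, not taken from the references), one can build such a state with block projectors $X_j$ and check that the conditional operators are mutually orthogonal while the result remains a valid density operator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthogonal block projectors X_j on a 4-dim system B (they sum to the identity):
X = [np.diag([1.0, 1.0, 0.0, 0.0]), np.diag([0.0, 0.0, 1.0, 1.0])]

A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
sigma_B = A @ A.conj().T
sigma_B /= np.trace(sigma_B).real

# rho_AB = sum_{j,j'} |W_j><W_j'| (x) X_j sigma_B X_j', with W the computational
# basis on a qubit A:
ket = np.eye(2)
rho_AB = sum(np.kron(np.outer(ket[j], ket[k]), X[j] @ sigma_B @ X[k])
             for j in range(2) for k in range(2))

# The conditional operators tau_{B,j} = Tr_A(|W_j><W_j| rho_AB) are the diagonal
# blocks, and they are orthogonal for distinct j:
tau = [rho_AB[4 * j:4 * (j + 1), 4 * j:4 * (j + 1)] for j in range(2)]
print(np.allclose(tau[0] @ tau[1], 0))                       # True
print(np.isclose(np.trace(rho_AB).real, 1.0),
      np.linalg.eigvalsh(rho_AB)[0] > -1e-9)                 # valid density operator
```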
To show the converse that $\textsf{MQ}_2 \subseteq \textsf{MQ}_1$ is significantly more difficult. Suppose $\rho_{AB}\in \textsf{MQ}_2$, i.e., for some purification $\rho_{ABC}$, $\rho_{AC}$ has the form $\sum_j p_j{{|W_j\rangle}\!{\langle W_j|}}{\otimes}\rho_{C,j}$, for some $W\in {\mathcal{W}}_A$. It was shown in [@ColesDecDisc2012] that for pure $\rho_{ABC}$, $$\label{eqn62}
H(W|B) = D(\rho_{AC}|| \sum_j {{|W_j\rangle}\!{\langle W_j|}}\rho_{AC}{{|W_j\rangle}\!{\langle W_j|}}),$$ where ${{|W_j\rangle}\!{\langle W_j|}}$ is short-hand for ${{|W_j\rangle}\!{\langle W_j|}} {\otimes}{\openone}$. By our assumption that $\rho_{AC}=\sum_j p_j{{|W_j\rangle}\!{\langle W_j|}}{\otimes}\rho_{C,j}$, the right-hand-side of is zero, and hence $H(W|B)=0$, implying that the $W$ information is perfectly present in $B$, i.e., the conditional density operators $$\tau_{B,j}^W ={{\rm Tr}}_{AC}({{|W_j\rangle}\!{\langle W_j|}}\rho_{ABC})$$ are orthogonal for distinct $j$.
The task now is to show that this condition, $H(W|B)=0$, implies that $\rho_{AB}$ is of the form of . Our proof of this relies on the conditions for the relative-entropy monotonicity to be satisfied with equality [@Petz2003; @HaydenEtAl04] and is closely related to the study of minimum uncertainty states of entropic uncertainty relations [@ColesYuZwo2011; @RenesBoileau]. Let $Z=\{{|Z_k\rangle}\}$ be the orthonormal basis on ${\mathcal{H}}_A$ that is related to $W$ by the Fourier transform, i.e., $$\label{eqn63}
{|Z_k\rangle}=\sum_j \frac{{\omega }^{jk}}{\sqrt{d}}{|W_j\rangle},\quad {|W_j\rangle}=\sum_k \frac{{\omega }^{-jk}}{\sqrt{d}}{|Z_k\rangle},$$ where $d=\dim({\mathcal{H}}_A)$ and ${\omega }=e^{2\pi i/d}$. In general the following uncertainty relation holds for any bipartite state $\rho_{AB}$ [@BertaEtAl], $$\label{eqn64}
H(W|B)+H(Z|B){\geqslant}\log d + H(A|B).$$ Notice that if $H(W|B)=0$, then is satisfied with equality since, in general, $H(Z|B){\leqslant}\log d + H(A|B)$, implying in this case that $H(Z|B) = \log d + H(A|B)$. Under these conditions, i.e. when is satisfied with equality, we say that $\rho_{AB}$ is a minimum uncertainty state (MUS) of . Thus, all states for which $H(W|B)=0$ are MUSs of , so we now proceed to find an analytical form for the MUSs of . In fact, these MUSs were found previously in [@ColesYuZwo2011], but we repeat some of the discussion here for completeness.
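As a numerical sanity check of the relation above (the helper functions are ours), a maximally entangled two-qubit state with $W$ the computational basis and $Z$ its Fourier transform saturates it: $H(W|B)=H(Z|B)=0$ while $\log d + H(A|B)=1-1=0$, consistent with it being a MUS.

```python
import numpy as np

def vn_entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def ptrace_A(rho, dA, dB):
    """Trace out the first (A) tensor factor."""
    return np.einsum('ijik->jk', rho.reshape(dA, dB, dA, dB))

def measured_cond_entropy(rho_AB, basis, dA, dB):
    """H(W|B) = H(rho_WB) - H(rho_B) for rho_WB = sum_j |j><j| (x) tau_{B,j},
    where column j of `basis` is |W_j>."""
    rho_WB = np.zeros((dA * dB, dA * dB), dtype=complex)
    for j in range(dA):
        v = basis[:, j]
        M = np.kron(np.outer(v, v.conj()), np.eye(dB))
        rho_WB[j * dB:(j + 1) * dB, j * dB:(j + 1) * dB] = \
            ptrace_A(M @ rho_AB @ M, dA, dB)
    return vn_entropy(rho_WB) - vn_entropy(ptrace_A(rho_AB, dA, dB))

# Bell state; W = computational basis, Z = its Fourier transform (d = 2).
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell)
W = np.eye(2)
Z = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

HW = measured_cond_entropy(rho, W, 2, 2)
HZ = measured_cond_entropy(rho, Z, 2, 2)
HAB = vn_entropy(rho) - vn_entropy(ptrace_A(rho, 2, 2))
print(HW, HZ, np.log2(2) + HAB)  # both sides vanish: equality holds
```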
Let ${\mathcal{E}}_Z(\cdot) = \sum_k {{|Z_k\rangle}\!{\langle Z_k|}}(\cdot ){{|Z_k\rangle}\!{\langle Z_k|}}$ be the quantum channel that decoheres in the $Z$ basis. Then (see [@ColesYuZwo2011] for more details) the MUSs of are the states for which $$\begin{aligned}
\label{eqn65}
&D(\rho_{AB} || \sum_j {{|W_j\rangle}\!{\langle W_j|}} \rho_{AB} {{|W_j\rangle}\!{\langle W_j|}} )\notag\\
& = D({\mathcal{E}}_Z(\rho_{AB}) || {\mathcal{E}}_Z(\sum_j {{|W_j\rangle}\!{\langle W_j|}} \rho_{AB} {{|W_j\rangle}\!{\langle W_j|}} )),\end{aligned}$$ since, for any $\rho_{AB}$, the left-hand-side of equals $H(W|B)-H(A|B)$, and the right-hand-side equals $ \log d - H(Z|B)$. For some quantum channel ${\mathcal{E}}$, Petz showed [@Petz2003; @HaydenEtAl04] that $D(\rho||{\sigma })=D({\mathcal{E}}(\rho)||{\mathcal{E}}({\sigma }))$ if and only if there exists a quantum channel $\hat {\mathcal{E}}$ that undoes the action of ${\mathcal{E}}$ on $\rho$ and ${\sigma }$: $$\label{eqn66}
\hat{\mathcal{E}}{\mathcal{E}}\rho=\rho,\quad \hat{\mathcal{E}}{\mathcal{E}}{\sigma }= {\sigma }.$$ The construction given [@HaydenEtAl04] for this, defined on the support of ${\mathcal{E}}({\sigma })$, is $$\label{eqn67}
\hat{\mathcal{E}}(\rho)=\sqrt{{\sigma }}{\mathcal{E}}{^\dagger}({\mathcal{E}}({\sigma })^{-1/2}\rho{\mathcal{E}}({\sigma })^{-1/2})\sqrt{{\sigma }},$$ which automatically satisfies $\hat{\mathcal{E}}{\mathcal{E}}{\sigma }= {\sigma }$, so one just needs to solve $\hat{\mathcal{E}}{\mathcal{E}}\rho=\rho$. To apply this formula to , we set $\rho=\rho_{AB}$, ${\sigma }= \sum_j {{|W_j\rangle}\!{\langle W_j|}} \rho_{AB} {{|W_j\rangle}\!{\langle W_j|}}$, and ${\mathcal{E}}= {\mathcal{E}}_Z$. Solving $\rho_{AB} = \hat{\mathcal{E}}{\mathcal{E}}\rho_{AB}$ gives $$\begin{aligned}
\label{eqn68}
\rho_{AB}=& \sum_{j,j',k} \frac{{\omega }^{(j-j')k}}{d} {{|W_j\rangle}\!{\langle W_{j'}|}} {\otimes}\notag\\
&\sqrt{\tau_{B,j}^W}\rho_B^{-1/2} \tau_{B,k}^Z \rho_B^{-1/2}\sqrt{\tau_{B,j'}^W}\end{aligned}$$ where $\rho_B={{\rm Tr}}_A(\rho_{AB})$ and $\tau_{B,k}^Z:= {{\rm Tr}}_A({{|Z_k\rangle}\!{\langle Z_k|}}\rho_{AB})$. Again, the idea is that is the general form for all MUSs of , and so we can specialize this formula to the special case where $H(W|B)=0$. This corresponds to all the $\tau_{B,j}^W $ being orthogonal, hence $\rho_B = \bigoplus_j \tau_{B,j}^W$, and $$X_j:=\sqrt{\tau_{B,j}^W}\rho_B^{-1/2}= \rho_B^{-1/2} \sqrt{\tau_{B,j}^W}$$ is the projector onto the support of $\tau_{B,j}^W$. Thus the $\{X_j\}$ form a set of orthogonal projectors that sum to ${\openone}_B$, provided that, if $\rho_B$ is not full rank, we enlarge one of the projectors, say $X_1$, so that it also includes the space orthogonal to the support of $\rho_B$.
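The recovery-map construction is straightforward to check numerically. The sketch below (function names are ours) implements $\hat{\mathcal{E}}$ for the self-adjoint qubit dephasing channel and verifies that $\hat{\mathcal{E}}{\mathcal{E}}{\sigma }={\sigma }$ holds automatically:

```python
import numpy as np

def dephase(rho):
    """E_Z: decoherence in the computational basis; this channel is self-adjoint."""
    return np.diag(np.diag(rho))

def psd_pow(P, p):
    """P**p on the support of a positive-semidefinite P (pseudo-inverse style)."""
    w, v = np.linalg.eigh(P)
    wp = np.array([x ** p if x > 1e-12 else 0.0 for x in w])
    return (v * wp) @ v.conj().T

def petz_recovery(omega, sigma, channel, adjoint):
    """hat{E}(omega) = sqrt(sigma) E^dag( E(sigma)^{-1/2} omega E(sigma)^{-1/2} ) sqrt(sigma)."""
    inv_sqrt = psd_pow(channel(sigma), -0.5)
    s = psd_pow(sigma, 0.5)
    return s @ adjoint(inv_sqrt @ omega @ inv_sqrt) @ s

# A full-rank qubit sigma with coherence; the Petz map built from (sigma, E_Z)
# recovers sigma exactly from its dephased version.
sigma = np.array([[0.7, 0.2], [0.2, 0.3]])
recovered = petz_recovery(dephase(sigma), sigma, dephase, dephase)
print(np.allclose(recovered, sigma))  # True
```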
Thus, under the condition $H(W|B)=0$, becomes $$\begin{aligned}
\label{eqn69}
\rho_{AB}&= \sum_{j,j',k} \frac{{\omega }^{(j-j')k}}{d} {{|W_j\rangle}\!{\langle W_{j'}|}} {\otimes}X_j \tau_{B,k}^Z X_{j'}\notag\\
&= \sum_{j,j'} {{|W_j\rangle}\!{\langle W_{j'}|}} {\otimes}X_j {\sigma }_B X_{j'}\end{aligned}$$ where ${\sigma }_B$ can be defined block-by-block with $X_j{\sigma }_B X_{j'} = \sum_k (1/d) {\omega }^{(j-j')k} \tau_{B,k}^Z$. One can verify that ${\sigma }_B$ is a normalized density operator with ${\sigma }_B = \sum_{j,j'} X_j{\sigma }_B X_{j'} = d \tau_{B,0}^Z$, noting that ${{\rm Tr}}(\tau_{B,0}^Z) = (1/d)$ since $H(W|B)=0$ forces $Z$ to be uniformly distributed by the uncertainty relation. Thus, we have shown that $\rho_{AB}$ has the form of , so $\rho_{AB}\in \textsf{MQ}_1 $ and $\textsf{MQ}_2 \subseteq \textsf{MQ}_1 $, proving that $\textsf{MQ}_1 = \textsf{MQ}_2 $.
Bures distance for $\textsf{MQ}$ states {#app5}
=======================================
Here we show that the closest separable state to a $\textsf{MQ}$ state is a $\textsf{CC}$ state, as measured by the Bures distance, defined as [@BenZyc06]: $$\label{eqn70}
D_{B}(\rho ,{\sigma }):= \sqrt{2-2F(\rho , {\sigma })}.$$ where $\rho$ and ${\sigma }$ are (normalized) density operators, and $F(\rho , {\sigma })= {{\rm Tr}}(\sqrt{\rho} {\sigma }\sqrt{\rho})^{1/2} $ is the fidelity.
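Both quantities can be computed directly from their definitions; a minimal sketch (helper names are ours):

```python
import numpy as np

def sqrtm_psd(P):
    """Square root of a positive-semidefinite operator."""
    w, v = np.linalg.eigh(P)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def fidelity(rho, sigma):
    """F(rho, sigma) = Tr[(sqrt(rho) sigma sqrt(rho))^{1/2}]."""
    s = sqrtm_psd(rho)
    return float(np.real(np.trace(sqrtm_psd(s @ sigma @ s))))

def bures(rho, sigma):
    """D_B(rho, sigma) = sqrt(2 - 2 F(rho, sigma))."""
    return np.sqrt(max(2.0 - 2.0 * fidelity(rho, sigma), 0.0))

e0 = np.diag([1.0, 0.0])
e1 = np.diag([0.0, 1.0])
print(bures(e0, e0))                 # identical states: distance 0
print(bures(e0, e1))                 # orthogonal pure states: sqrt(2), the maximum
print(fidelity(e0, np.eye(2) / 2))   # pure state vs maximally mixed: 1/sqrt(2)
```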
Consider the following properties of the fidelity. For positive-semidefinite operators $P$ and $Q$, if $\tilde{Q}{\geqslant}Q$, then $$\label{eqn71}
F(P , Q){\leqslant}F(P , \tilde{Q}).$$ Also, suppose that $\Pi_{P}$ is a projector onto a space that includes the support of $P$, then $$\label{eqn72}
F(P ,Q)= F(P , \Pi_{P} Q\Pi_{P}).$$ Now consider a general $\textsf{MQ}$ state, which is of the form $\tilde{\rho}_{M_XS}=V_X\rho_S V_X{^\dagger}$, for some density operator $\rho_S$ on $S$ and some premeasurement isometry $V_X: {\mathcal{H}}_S\to{\mathcal{H}}_{M_XS}$, which has the form given in . In what follows, we first use , noting that if ${\sigma }_{M_XS}\in\textsf{Sep}$, then ${\openone}{\otimes}{\sigma }_S{\geqslant}{\sigma }_{M_XS}$, with ${\sigma }_{S} ={{\rm Tr}}_{M_X}({\sigma }_{M_XS})$. We then use , noting that $V_X V_X{^\dagger}$ projects onto a space that includes the support of $\tilde{\rho}_{M_XS}$. For some ${\sigma }_{M_XS} \in \textsf{Sep}$, we find: $$\begin{aligned}
\label{eqn73}
F(\tilde{\rho}_{M_XS} ,{\sigma }_{M_XS})&{\leqslant}F(\tilde{\rho}_{M_XS} , {\openone}{\otimes}{\sigma }_{S}) \notag\\
&= F(\tilde{\rho}_{M_XS} , V_X V_X{^\dagger}({\openone}{\otimes}{\sigma }_{S}) V_X V_X{^\dagger})\notag\\
&= F(\tilde{\rho}_{M_XS} ,{\alpha }_{M_XS} ),\end{aligned}$$ where $$\begin{aligned}
\label{eqn74}
{\alpha }_{M_XS} &:= V_X \sum_j X_j {\sigma }_{S} X_j V_X{^\dagger}\notag\\
&= \sum_j {{|j\rangle}\!{\langle j|}}{\otimes}X_j {\sigma }_{S} X_j \end{aligned}$$ is a $\textsf{CC}$ state, which can be verified by expanding the $X_j {\sigma }_{S} X_j$ blocks in their eigenbasis. Therefore, shows that, for any separable state ${\sigma }_{M_XS}$, there is a $\textsf{CC}$ state ${\alpha }_{M_XS}$ that is at least as close to $\tilde{\rho}_{M_XS}$, according to the fidelity. Since $D_B$ varies monotonically with $F$, this statement also holds for $D_B$. Thus, the closest $\textsf{Sep}$ state, according to $D_B$, is a $\textsf{CC}$ state.
Proof of Lemma \[thm3\] {#app4}
=======================
For , $I_c{\leqslant}{\mathbb{E}}_D$ was shown in [@DevWin05] and the fact that a bit of secret key can be obtained from an e-bit implies ${\mathbb{E}}_D{\leqslant}K_D$. Now it is obvious from that $E_R{\leqslant}{\Delta}^{\to} {\leqslant}{\Delta}^{\leftrightarrow}$, and the additivity of the von Neumann relative entropy implies that regularization cannot increase these measures: $E_{R,\infty} {\leqslant}E_R$, ${\Delta}^{\to}_{\infty}{\leqslant}{\Delta}^{\to}$, and ${\Delta}^{\leftrightarrow}_{\infty} {\leqslant}{\Delta}^{\leftrightarrow}$. Combining this with a result from [@HHHOprl05] that $K_D{\leqslant}E_{R,\infty}$ implies .
For , we have from that $E_{R,\infty}{\leqslant}{\Delta}^{\to}_{\infty}={\delta }^{\to}_{\infty}{\leqslant}{\delta }^{\to}$, where the last inequality follows from the additivity of the mutual information. Also, the relation ${\delta }^{\to}{\leqslant}{\delta }^{\leftrightarrow}$ follows from the Holevo bound [@NieChu00]. The right-most inequality in goes as follows. Note that ${\delta }^{\leftrightarrow}$ is smaller than the case where the minimization is performed over all rank-one projective measurements ${\mathcal{W}}$ and ${\mathcal{W}}'$ on $A$ and $B$, respectively, so $$\begin{aligned}
{\delta }^{\overleftrightarrow{A|B}}&{\leqslant}\min_{{\mathcal{W}}, {\mathcal{W}}'} \{ I(\rho_{AB}) - I[({\mathcal{W}}{\otimes}{\mathcal{W}}')(\rho_{AB})] \} \notag\\
&{\leqslant}\min_{{\mathcal{W}}, {\mathcal{W}}'} \{ H[ ({\mathcal{W}}{\otimes}{\mathcal{W}}' ) (\rho_{AB})] - H(\rho_{AB}) \} = {\Delta}^{\overleftrightarrow{A|B}}. \notag\end{aligned}$$
For , simply apply the above arguments, such as Eq. , to the state $\rho_{AB}^{{\otimes}N}$ for $N\to \infty$.
Smooth measures {#app44}
===============
Subnormalized states
--------------------
Let $\textsf{P}({\mathcal{H}})$ denote the set of positive semi-definite operators on Hilbert space ${\mathcal{H}}$. Let $\textsf{S}_{\le}({\mathcal{H}})$ and $\textsf{S}_{=}({\mathcal{H}})$, respectively, denote the sets of subnormalized and normalized positive operators on ${\mathcal{H}}$, i.e., $$\begin{aligned}
\textsf{S}_{\le}({\mathcal{H}}) & = \{{\sigma }\in \textsf{P}({\mathcal{H}}): {{\rm Tr}}({\sigma }) {\leqslant}1 \},\notag\\
\textsf{S}_{=}({\mathcal{H}}) &= \{{\sigma }\in \textsf{P}({\mathcal{H}}): {{\rm Tr}}({\sigma }) = 1 \}.\notag\end{aligned}$$ Sometimes we may drop the explicit dependence on the Hilbert space for $\textsf{S}_{\le}$ and $\textsf{S}_{=} $ when the space is obvious.
It is also useful to generalize the notion of $\textsf{MQ}$ states to subnormalized states. We denote this broader set as $\textsf{MQ}_{\le}$, containing all states of the form given in but allowing ${\sigma }_B\in \textsf{S}_{\le}({\mathcal{H}}_B)$ to be subnormalized, or equivalently, defined by but allowing $\rho_{AC}$ to be subnormalized.
Purified distance and ${\epsilon}$-balls
----------------------------------------
Smooth measures involve optimizing over a ball of states within some chosen distance ${\epsilon}$ from the state of interest. These balls of states are called ${\epsilon}$-balls, and the distance measure of choice for constructing them is the purified distance [@TomColRen10], which ensures that the ${\epsilon}$-balls are, to some degree, invariant under purifications or extensions (e.g., see Lemma \[thm9\]). The definitions and lemmas in this subsection are mostly due to the work of Tomamichel, Colbeck, and Renner [@TomColRen10]. They note that the purified distance between $\rho \in \textsf{S}_{\le}$ and ${\sigma }\in \textsf{S}_{\le}$ can be written as
$$P(\rho, {\sigma }) = \sqrt{1-{\overline{F}}(\rho, {\sigma })^2},$$
where ${\overline{F}}$ is a generalized fidelity, $${\overline{F}}(\rho, {\sigma }):=F(\rho,{\sigma })+\sqrt{(1-{{\rm Tr}}\rho)(1-{{\rm Tr}}{\sigma })}$$ with the standard fidelity given by $$F(\rho,{\sigma }) = {{\rm Tr}}[(\sqrt{{\sigma }}\rho \sqrt{{\sigma }})^{1/2}].$$ Several useful properties of the purified distance are worked out in [@TomColRen10]. For example, they give the following lemma.
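A direct transcription of these formulas (helper names are ours) shows how the correction term enters for subnormalized states:

```python
import numpy as np

def sqrtm_psd(P):
    w, v = np.linalg.eigh(P)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def fidelity(rho, sigma):
    s = sqrtm_psd(rho)
    return float(np.real(np.trace(sqrtm_psd(s @ sigma @ s))))

def gen_fidelity(rho, sigma):
    """Fbar(rho, sigma) = F(rho, sigma) + sqrt((1 - Tr rho)(1 - Tr sigma))."""
    tr, ts = np.real(np.trace(rho)), np.real(np.trace(sigma))
    return fidelity(rho, sigma) + np.sqrt(max(1 - tr, 0.0) * max(1 - ts, 0.0))

def purified_distance(rho, sigma):
    """P(rho, sigma) = sqrt(1 - Fbar(rho, sigma)^2)."""
    return np.sqrt(max(1.0 - gen_fidelity(rho, sigma) ** 2, 0.0))

rho = np.diag([0.5, 0.5])        # normalized
sigma = 0.5 * rho                # subnormalized, Tr sigma = 1/2
print(purified_distance(rho, rho))    # 0: identical states
print(purified_distance(rho, sigma))  # 1/sqrt(2)
```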
\[thm7\] The purified distance is non-increasing under trace-nonincreasing completely positive maps (TNICPMs). Since a projector $\Pi$ gives rise to a TNICPM of the form $\rho \to \Pi\rho \Pi$, we have that $$P(\rho , {\sigma }) {\geqslant}P( \Pi \rho \Pi , \Pi {\sigma }\Pi).$$
Now let us define the ${\epsilon}$-ball around $\rho\in\textsf{S}_{\le}$, $${\mathcal{B}}^{{\epsilon}}(\rho):=\{{\sigma }\in \textsf{S}_{\le}: P(\rho, {\sigma }){\leqslant}{\epsilon}\}.$$ The ball grows monotonically with ${\epsilon}$ and ${\mathcal{B}}^0(\rho) = \{\rho \}$.
Also, Lemma \[thm7\] implies that if ${\sigma }\in {\mathcal{B}}^{{\epsilon}}(\rho)$, then $\Pi {\sigma }\Pi \in {\mathcal{B}}^{{\epsilon}}(\rho)$ if $\Pi$ projects onto a space that includes the support of $\rho$. This fact is helpful when considering a subset of the ${\epsilon}$-ball that includes only those states confined to a particular subspace $\Pi$ that includes the support of $\rho$, defined as follows $${\mathcal{B}}^{{\epsilon}}_{\Pi}(\rho):=\{{\sigma }\in \textsf{S}_{\le}: P(\rho, {\sigma }){\leqslant}{\epsilon}, \Pi {\sigma }\Pi ={\sigma }\}.$$ Clearly ${\mathcal{B}}^{{\epsilon}}_{\Pi}(\rho)\subseteq {\mathcal{B}}^{{\epsilon}}(\rho)$, and setting $\Pi$ to the identity recovers the full ball, ${\mathcal{B}}^{{\epsilon}}_{{\openone}}(\rho)={\mathcal{B}}^{{\epsilon}}(\rho)$.
It will also be useful to define another subset of the ${\epsilon}$-ball in the special case where $\rho$ is pure, and that is a ball of pure states [@TomColRen10]: $${\mathcal{B}}^{{\epsilon}}_p(\rho):=\{{\sigma }\in \textsf{S}_{\le}: P(\rho, {\sigma }){\leqslant}{\epsilon},\text{rank}({\sigma })=1 \}.$$ Again, ${\mathcal{B}}^{{\epsilon}}_p(\rho) \subseteq {\mathcal{B}}^{{\epsilon}}(\rho)$. In fact it is helpful to combine the notions of ${\mathcal{B}}^{{\epsilon}}_{\Pi}(\rho)$ and ${\mathcal{B}}^{{\epsilon}}_p(\rho)$ as follows: $${\mathcal{B}}^{{\epsilon}}_{p,\Pi}(\rho):=\{{\sigma }\in \textsf{S}_{\le}: P(\rho, {\sigma }){\leqslant}{\epsilon},\text{rank}({\sigma })=1, \Pi {\sigma }\Pi ={\sigma }\},$$ assuming $\rho$ is pure and $\Pi \rho \Pi =\rho$.
The following two lemmas are from [@TomColRen10].
\[thm8\] Let $\rho, \tau \in \textsf{S}_{\le}({\mathcal{H}})$ and let $\phi_{\rho}\in \textsf{S}_{\le}({\mathcal{H}}{\otimes}{\mathcal{H}}' )$ with $\dim({\mathcal{H}}){\leqslant}\dim({\mathcal{H}}')$ be a purification of $\rho$, then there exists a purification of $\tau$, $\phi_{\tau}\in \textsf{S}_{\le}({\mathcal{H}}{\otimes}{\mathcal{H}}' )$, such that $P(\phi_{\rho}, \phi_{\tau}) = P(\rho,\tau)$.
\[thm9\] Let $\rho \in \textsf{S}_{\le}({\mathcal{H}})$ and let $\phi_{\rho}\in \textsf{S}_{\le}({\mathcal{H}}{\otimes}{\mathcal{H}}' )$ be a purification of $\rho$, then $${\mathcal{B}}^{{\epsilon}}(\rho) \supseteq \{{\sigma }\in \textsf{S}_{\le}({\mathcal{H}}): \exists \phi_{{\sigma }}\in {\mathcal{B}}^{{\epsilon}}_p(\phi_{\rho}) , {\sigma }={{\rm Tr}}_{{\mathcal{H}}'}(\phi_{{\sigma }}) \}$$ with the two sets being identical if $\dim({\mathcal{H}}){\leqslant}\dim({\mathcal{H}}')$.
We will need a generalization of Lemma \[thm9\].
\[thm10\] Let $\rho \in \textsf{S}_{\le}({\mathcal{H}})$ and let $\phi_{\rho}\in \textsf{S}_{\le}({\mathcal{H}}{\otimes}{\mathcal{H}}' )$ be a purification of $\rho$, let $\Pi$ be a projector such that $\Pi\rho\Pi = \rho$, then $$\label{eqn77}
{\mathcal{B}}^{{\epsilon}}_{\Pi}(\rho) \supseteq \{{\sigma }\in \textsf{S}_{\le}({\mathcal{H}}): \exists \phi_{{\sigma }}\in {\mathcal{B}}^{{\epsilon}}_{p,\Pi}(\phi_{\rho}) , {\sigma }={{\rm Tr}}_{{\mathcal{H}}'}(\phi_{{\sigma }}) \}$$ with the two sets being identical if $\dim({\mathcal{H}}){\leqslant}\dim({\mathcal{H}}')$.
Note that if ${\sigma }\in \textsf{S}_{\le}({\mathcal{H}})$, if $\Pi$ is a projector on ${\mathcal{H}}$, and if $\phi_{{\sigma }}\in \textsf{S}_{\le}({\mathcal{H}}{\otimes}{\mathcal{H}}') $ is a purification of ${\sigma }$, then $\Pi {\sigma }\Pi = {\sigma }$ if and only if $\Pi \phi_{{\sigma }}\Pi = \phi_{{\sigma }} $. So the set on the right side of will be contained in the $\Pi$ subspace and will be within ${\epsilon}$ of $\rho$, since the purified distance is non-increasing under partial trace; this proves . The equality for $\dim({\mathcal{H}}){\leqslant}\dim({\mathcal{H}}')$ follows from Lemma \[thm8\].
Additional properties of $D_{\max}$ and $D_{{\text{fid}}}$
----------------------------------------------------------
In addition to Properties \[a\] and \[b\], it is useful to note the following properties of $D_{\max}$ and $D_{{\text{fid}}}$.
For positive operators $P$ and $Q$, if $P' {\geqslant}P$, then $$\begin{aligned}
\label{eqn78}D_{\max}(P || Q ) &{\leqslant}D_{\max}(P' ||Q) \\
\label{eqn79}D_{{\text{fid}}}(P || Q ) &{\geqslant}D_{{\text{fid}}}(P' ||Q )\end{aligned}$$ Let $\Pi_P$ be a projector that includes the support of $P$, and define $\Pi_Q$ analogously, then $$\begin{aligned}
\label{eqn80}D_{{\text{fid}}}(P || Q ) &= D_{{\text{fid}}}(P || \Pi_P Q \Pi_P)\notag\\
&= D_{{\text{fid}}}(\Pi_Q P \Pi_Q || Q )\end{aligned}$$
Let $\Pi$ be any projector, then $$\begin{aligned}
\label{eqn81}D_{\max}(P || Q ) &{\geqslant}D_{\max}(\Pi P \Pi || \Pi Q \Pi )\end{aligned}$$
Consider the quantum channel ${\mathcal{F}}(\cdot ) := \Pi(\cdot )\Pi +({\openone}- \Pi)(\cdot )({\openone}- \Pi)$. We have $$\begin{aligned}
D_{\max}(P || Q) &{\geqslant}D_{\max}({\mathcal{F}}(P) || {\mathcal{F}}( Q))\notag \\
& {\geqslant}D_{\max}(\Pi P \Pi || \Pi Q \Pi) \notag\end{aligned}$$ where the second line follows by first invoking with ${\mathcal{F}}(P){\geqslant}\Pi P \Pi $, and then invoking Property \[b\].
Proof of Lemma \[thm5\]
-----------------------
In Lemma \[thm1\], we proved a correlation hierarchy for the min- and max-entropies (among others). The proof was for normalized states $\rho_{AB}$, but the exact same proof applies to subnormalized states. Hence we have the following lemma.
\[thm11\] For any bipartite state $\rho_{AB}\in \textsf{S}_{\le}$, $$\begin{aligned}
\label{eqn82}&-H_{\min}(A|B) {\leqslant}{\mathbb{E}}^{A|B}_{\max} {\leqslant}{\Delta}^{\overrightarrow{A|B}}_{\max}, {\Delta}^{\overrightarrow{B|A}}_{\max} {\leqslant}{\Delta}^{\overleftrightarrow{A|B}}_{\max},\\
\label{eqn83}&-H_{\max}(A|B) {\leqslant}{\mathbb{E}}^{A|B}_{{\text{fid}}} {\leqslant}{\Delta}^{\overrightarrow{A|B}}_{{\text{fid}}}, {\Delta}^{\overrightarrow{B|A}}_{{\text{fid}}} {\leqslant}{\Delta}^{\overleftrightarrow{A|B}}_{{\text{fid}}}.\end{aligned}$$
Now since Lemma \[thm11\] applies to each state in the ball, ${\sigma }_{AB}\in {\mathcal{B}}^{{\epsilon}}(\rho_{AB})$, Lemma \[thm5\] follows as a direct corollary.
Proof of Theorem \[thm6\]
-------------------------
Here we prove the collapse of the smooth correlation hierarchy for premeasurement states. It is helpful to note the following lemma, which extends Theorem \[thm2\] to subnormalized states. The proof is exactly the same as that given for Theorem \[thm2\], which did not rely on the normalization of the state.
\[thm12\] For any premeasurement state ${\tilde{\rho}}_{M_XS}\in \textsf{MQ}_{\le}$, $$\begin{aligned}
\label{eqn84}&-H_{\min}(M_X|S) = {\mathbb{E}}^{M_X|S}_{\max} = {\Delta}^{\overrightarrow{M_X|S}}_{\max}= {\Delta}^{\overrightarrow{S|M_X}}_{\max} = {\Delta}^{\overleftrightarrow{M_X|S}}_{\max},\\
\label{eqn85}&-H_{\max}(M_X|S) = {\mathbb{E}}^{M_X|S}_{{\text{fid}}} = {\Delta}^{\overrightarrow{M_X|S}}_{{\text{fid}}}= {\Delta}^{\overrightarrow{S|M_X}}_{{\text{fid}}} = {\Delta}^{\overleftrightarrow{M_X|S}}_{{\text{fid}}}.\end{aligned}$$
In what follows, we make use of ${\mathcal{B}}^{{\epsilon}}_{\Pi}({\tilde{\rho}}_{M_XS})$ where $\Pi =V_X V_X{^\dagger}$ includes the support of ${\tilde{\rho}}_{M_XS} = V_X\rho_S V_X{^\dagger}$, so it is helpful to state the following lemma.
\[thm13\] For ${\tilde{\rho}}_{M_XS} = V_X\rho_S V_X{^\dagger}\in \textsf{MQ}$ and $\Pi =V_X V_X{^\dagger}$, the ball ${\mathcal{B}}^{{\epsilon}}_{\Pi}(\tilde{\rho}_{M_XS})$ only contains $\textsf{MQ}_{\le}$ states, of the form $V_X \tau_S V_X{^\dagger}$ for some $\tau_S\in \textsf{S}_{\le}({\mathcal{H}}_S)$.
By definition, all states in ${\mathcal{B}}^{{\epsilon}}_{\Pi}(\tilde{\rho}_{M_XS})$ are of the form ${\sigma }_{M_XS} = V_X V_X{^\dagger}{\sigma }_{M_XS} V_X V_X{^\dagger}$ and hence of the form $V_X \tau_S V_X{^\dagger}$ where $\tau_S = V_X{^\dagger}{\sigma }_{M_XS} V_X \in \textsf{S}_{\le}({\mathcal{H}}_S)$.
Now we are ready to prove Theorem \[thm6\]. Let $\Pi = V_XV_X{^\dagger}$ in what follows. We first show the proof of . Let ${\overline{\sigma}}\in\textsf{S}_{\le}({\mathcal{H}}_{M_XS})$ and ${\overline{\tau}}\in \textsf{S}_{=}({\mathcal{H}}_S)$ be the two states that achieve the optimization in $H^{{\epsilon}}_{\min}(M_X|S)_{{\tilde{\rho}}}$, i.e., let $- H^{{\epsilon}}_{\min}(M_X|S)_{{\tilde{\rho}}} = D_{\min}({\overline{\sigma}}|| {\openone}{\otimes}{\overline{\tau}})$, where ${\tilde{\rho}}$ is short-hand for ${\tilde{\rho}}_{M_XS}$. Then we have $$\begin{aligned}
\label{eqn86}- H^{{\epsilon}}_{\min}(M_X|S)_{{\tilde{\rho}}} &= D_{\min}({\overline{\sigma}}|| {\openone}{\otimes}{\overline{\tau}}) \\
\label{eqn87}& {\geqslant}D_{\min}(\Pi {\overline{\sigma}}\Pi || \Pi ({\openone}{\otimes}{\overline{\tau}}) \Pi) \\
\label{eqn88}&= D_{\min}(\Pi {\overline{\sigma}}\Pi || {\openone}{\otimes}\sum_j X_j {\overline{\tau}}X_j)\\
\label{eqn89}&{\geqslant}\min_{{\sigma }\in{\mathcal{B}}^{{\epsilon}}_{\Pi}({\tilde{\rho}})} [-H_{\min}(M_X|S)_{{\sigma }}] \\
\label{eqn90}&= \min_{{\sigma }\in{\mathcal{B}}^{{\epsilon}}_{\Pi}({\tilde{\rho}})} {\Delta}^{\overleftrightarrow{M_X|S}}_{\max}({\sigma }) \\
\label{eqn91}&{\geqslant}{}_{{\epsilon}}{\Delta}^{\overleftrightarrow{M_X|S}}_{\max}({\tilde{\rho}})
\end{aligned}$$ Equation invoked , invoked Property \[b\], invoked Lemmas \[thm12\] and \[thm13\], and notes that ${\mathcal{B}}^{{\epsilon}}_{\Pi}({\tilde{\rho}}) \subset {\mathcal{B}}^{{\epsilon}}({\tilde{\rho}})$. Now note that gave an inequality in the reverse direction, so the inequalities must be equalities and the hierarchy in must collapse.
For the proof of , let us define $\textsf{CC}_{\Pi}\subset \textsf{CC}$ as the set $\{\tau\in \textsf{CC}: \Pi \tau \Pi = \tau\}$, i.e., only those $\textsf{CC}$ states that live in the subspace $\Pi$. Then we have $$\begin{aligned}
\label{eqn92}- H^{{\epsilon}}_{\max}(M_X|S)_{{\tilde{\rho}}} &{\geqslant}\max_{{\sigma }\in{\mathcal{B}}^{{\epsilon}}_{\Pi}({\tilde{\rho}})}[-H_{\max}(M_X|S)_{{\sigma }}] \\
\label{eqn93}&= \max_{{\sigma }\in{\mathcal{B}}^{{\epsilon}}_{\Pi}({\tilde{\rho}})} {\Delta}^{\overleftrightarrow{M_X|S}}_{{\text{fid}}}({\sigma })\\
\label{eqn94}&= \max_{{\sigma }\in{\mathcal{B}}^{{\epsilon}}_{\Pi}({\tilde{\rho}})} \min_{\tau\in \textsf{CC}} D_{{\text{fid}}}({\sigma }|| \tau )\\
\label{eqn95}&= \max_{{\sigma }\in{\mathcal{B}}^{{\epsilon}}_{\Pi}({\tilde{\rho}})} \min_{\tau\in \textsf{CC}_{\Pi}} D_{{\text{fid}}}({\sigma }|| \tau )\\
\label{eqn96}&= \max_{{\sigma }\in{\mathcal{B}}^{{\epsilon}}({\tilde{\rho}})} \min_{\tau\in \textsf{CC}_{\Pi}} D_{{\text{fid}}}({\sigma }|| \tau )\\
\label{eqn97}&{\geqslant}\max_{{\sigma }\in{\mathcal{B}}^{{\epsilon}}({\tilde{\rho}})} \min_{\tau\in \textsf{CC}} D_{{\text{fid}}}({\sigma }|| \tau )\\
\label{eqn98}&= {}_{{\epsilon}}{\Delta}^{\overleftrightarrow{M_X|S}}_{{\text{fid}}}({{\tilde{\rho}}})
\end{aligned}$$ Equation invoked Lemmas \[thm12\] and \[thm13\], follows from and the surrounding discussion, invoked , and used $\textsf{CC}_{\Pi}\subset \textsf{CC}$. Again, note that gave an inequality in the reverse direction, so the inequalities must be equalities and the hierarchy in must collapse. This completes the proof.
As an aside, we note that, because the above inequalities must be equalities, the optimization in the smooth min- and max-entropy of a premeasurement state can be restricted to the ball ${\mathcal{B}}^{{\epsilon}}_{\Pi}({\tilde{\rho}}_{M_XS})$, as in Eqs. and .
Properties of $_{{\epsilon}}{\mathbb{E}}_{\max}$ and $_{{\epsilon}}{\mathbb{E}}_{{\text{fid}}}$
-----------------------------------------------------------------------------------------------
Here we note a few useful properties of $_{{\epsilon}}{\mathbb{E}}_{\max}$ and $_{{\epsilon}}{\mathbb{E}}_{{\text{fid}}}$. In particular, $_{{\epsilon}}{\mathbb{E}}_{\max}$ is non-increasing under LOCC, $_{{\epsilon}}{\mathbb{E}}_{{\text{fid}}}$ is non-increasing under local quantum channels, and both $_{{\epsilon}}{\mathbb{E}}_{\max}$ and $_{{\epsilon}}{\mathbb{E}}_{{\text{fid}}}$ are invariant under local isometries.
Let $\rho_{AB}\in \textsf{S}_{=}({\mathcal{H}}_{AB})$,
\(i) Let ${\Lambda }$ be an LOCC operation, denote $\rho_{A'B'}={\Lambda }(\rho_{AB})$, then $$\begin{aligned}
{}_{{\epsilon}}{\mathbb{E}}_{\max}^{A|B}(\rho_{AB}) &{\geqslant}{}_{{\epsilon}}{\mathbb{E}}_{\max}^{A'|B'}(\rho_{A'B'})\end{aligned}$$
\(ii) Let ${\mathcal{E}}_A:{\mathcal{H}}_A\to{\mathcal{H}}_{A'}$ and ${\mathcal{E}}_B:{\mathcal{H}}_B\to{\mathcal{H}}_{B'}$ be local quantum channels on $A$ and $B$ respectively, denote $\rho_{A'B'}=({\mathcal{E}}_A{\otimes}{\mathcal{E}}_B) (\rho_{AB})$, then $$\begin{aligned}
{}_{{\epsilon}}{\mathbb{E}}_{{\text{fid}}}^{A|B}(\rho_{AB}) &{\geqslant}{}_{{\epsilon}}{\mathbb{E}}_{{\text{fid}}}^{A'|B'}(\rho_{A'B'})\end{aligned}$$
\(iii) Let $V_A:{\mathcal{H}}_A\to{\mathcal{H}}_{A'}$ and $V_B:{\mathcal{H}}_B\to{\mathcal{H}}_{B'}$ be local isometries on $A$ and $B$ respectively, denote $\rho_{A'B'}=(V_A{\otimes}V_B) \rho_{AB} (V_A{^\dagger}{\otimes}V_B {^\dagger})$, then $$\begin{aligned}
{}_{{\epsilon}}{\mathbb{E}}_{\max}^{A|B}(\rho_{AB}) &= {}_{{\epsilon}}{\mathbb{E}}_{\max}^{A'|B'}(\rho_{A'B'})\\
{}_{{\epsilon}}{\mathbb{E}}_{{\text{fid}}}^{A|B}(\rho_{AB}) &= {}_{{\epsilon}}{\mathbb{E}}_{{\text{fid}}}^{A'|B'}(\rho_{A'B'})\end{aligned}$$
\(i) $$\begin{aligned}
{}_{{\epsilon}}{\mathbb{E}}_{\max}^{A|B}(\rho_{AB}) &= \min_{{\sigma }\in {\mathcal{B}}^{{\epsilon}}(\rho_{AB})} {\mathbb{E}}_{\max}^{A|B}({\sigma })\notag\\
& {\geqslant}\min_{{\sigma }\in {\mathcal{B}}^{{\epsilon}}(\rho_{AB})} {\mathbb{E}}_{\max}^{A|B}({\Lambda }({\sigma }))\notag\\
& {\geqslant}{}_{{\epsilon}}{\mathbb{E}}_{\max}^{A'|B'}(\rho_{A'B'})\notag\end{aligned}$$ where the third line used the fact that if ${\sigma }\in {\mathcal{B}}^{{\epsilon}}(\rho_{AB})$ then ${\Lambda }({\sigma }) \in {\mathcal{B}}^{{\epsilon}}({\Lambda }(\rho_{AB}))$ due to Lemma \[thm7\].
\(ii) Using the Stinespring dilation, write ${\mathcal{E}}_A(\cdot) = {{\rm Tr}}_{E_A}[V_A(\cdot)V_A{^\dagger}]$ and ${\mathcal{E}}_B(\cdot) = {{\rm Tr}}_{E_B}[V_B(\cdot)V_B{^\dagger}]$ where $E_A$ and $E_B$ are ancillas and $V_A:{\mathcal{H}}_A \to {\mathcal{H}}_{A'E_A}$ and $V_B:{\mathcal{H}}_B \to {\mathcal{H}}_{B'E_B}$ are local isometries. Define $\rho_{E_A A'B' E_B}:= (V_A{\otimes}V_B)\rho_{AB}(V_A{^\dagger}{\otimes}V_B{^\dagger})$ and note that $\rho_{A'B' } = {{\rm Tr}}_{E_AE_B}(\rho_{E_A A'B' E_B}) = ({\mathcal{E}}_A{\otimes}{\mathcal{E}}_B)(\rho_{AB})$. Also define $\Pi := V_A V_A{^\dagger}{\otimes}V_BV_B{^\dagger}$, and denote $\textsf{Sep}_{\Pi}$ as the set of normalized separable states that live only in the subspace defined by $\Pi$. Then $$\begin{aligned}
{}_{{\epsilon}}{\mathbb{E}}_{{\text{fid}}}^{A|B}(\rho_{AB}) &= \max_{{\sigma }\in {\mathcal{B}}^{{\epsilon}}(\rho_{AB})} \min_{\tau\in \textsf{Sep}} D_{{\text{fid}}}({\sigma }||\tau ) \notag\\
&=\max_{{\sigma }\in {\mathcal{B}}^{{\epsilon}}_{\Pi}(\rho_{E_AA'B'E_B})} \min_{\tau\in \textsf{Sep}_{\Pi}} D_{{\text{fid}}}({\sigma }||\tau ) \notag\\
&=\max_{{\sigma }\in {\mathcal{B}}^{{\epsilon}}(\rho_{E_AA'B'E_B})} \min_{\tau\in \textsf{Sep}_{\Pi}} D_{{\text{fid}}}({\sigma }||\tau ) \notag\\
&{\geqslant}\max_{{\sigma }\in {\mathcal{B}}^{{\epsilon}}(\rho_{E_AA'B'E_B})} \min_{\tau\in \textsf{Sep}} D_{{\text{fid}}}({\sigma }||\tau ) \notag\\
&= \max_{{\sigma }\in {\mathcal{B}}^{{\epsilon}}(\rho_{E_AA'B'E_B})} E^{E_AA' | B'E_B}_{{\text{fid}}}({\sigma }) \notag\\
&{\geqslant}\max_{{\sigma }\in {\mathcal{B}}^{{\epsilon}}(\rho_{A'B'})} E^{A' | B'}_{{\text{fid}}}({\sigma }) = {}_{{\epsilon}}{\mathbb{E}}_{{\text{fid}}}^{A'|B'}(\rho_{A'B'})\notag\end{aligned}$$ The third line follows from and Lemma \[thm7\]. The last line follows from Lemma \[thm7\] and the fact that ${\mathbb{E}}_{{\text{fid}}}$ is non-increasing under local partial traces.
\(iii) This follows from parts (i) and (ii) of this Lemma, by invoking the fact that the entanglement measure is non-increasing under local quantum channels, *twice* in succession. That is, invoke it first with the channel that applies the local isometries $V_A {\otimes}V_B$, and invoke it again with the channel that undoes these local isometries to obtain ${\mathbb{E}}(\rho_{AB}) {\geqslant}{\mathbb{E}}[(V_A {\otimes}V_B) \rho_{AB} ( V_A{^\dagger}{\otimes}V_B{^\dagger})] {\geqslant}{\mathbb{E}}(\rho_{AB})$. Hence the inequalities are equalities.
Additivity of ${\mathbb{E}}_D$ for $\textsf{MQ}$ states {#app1}
=======================================================
In general, ${\mathbb{E}}_D$ is not additive [@ShorEtAl2001; @ShorEtAl2003], i.e., there exist states $\rho$ and ${\sigma }$ for which ${\mathbb{E}}_D(\rho{\otimes}{\sigma })\neq {\mathbb{E}}_D(\rho) +{\mathbb{E}}_D({\sigma })$. However, in special cases, e.g., when $\rho$ and ${\sigma }$ are $\textsf{MQ}$ states, ${\mathbb{E}}_D$ is additive. The basic idea is that if $\rho \in \textsf{MQ}$ and ${\sigma }\in \textsf{MQ}$, then $(\rho {\otimes}{\sigma }) \in \textsf{MQ} $, and hence from Theorem \[thm4\], ${\mathbb{E}}_D(\rho{\otimes}{\sigma })$ can be written as a conditional von Neumann entropy, and such entropies are additive, which in turn implies the additivity of ${\mathbb{E}}_D$. This argument of course applies to the state ${\tilde{\rho}}_{M_XS}{\otimes}{\tilde{\rho}}_{M_YS}{\otimes}{\tilde{\rho}}_{M_ZS}$, which is the state considered in the entanglement distillation game in Section \[sct7.2\].
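The additivity step in this argument can be checked numerically: for a product state $\rho_{AB}{\otimes}{\sigma }_{A'B'}$, the conditional von Neumann entropy satisfies $H(AA'|BB') = H(A|B)_{\rho} + H(A'|B')_{{\sigma }}$. The following sketch (assuming NumPy; random two-qubit states with arbitrary seeds) verifies this identity:

```python
import numpy as np

def rand_state(d, seed):
    # random full-rank density matrix
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def vn_entropy(rho):
    # von Neumann entropy in bits
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def trace_out_A(rho, dA, dB):
    # partial trace over the first (A) factor of a dA*dB system
    return np.einsum('ijik->jk', rho.reshape(dA, dB, dA, dB))

def cond_entropy(rho, dA, dB):
    # H(A|B) = H(AB) - H(B)
    return vn_entropy(rho) - vn_entropy(trace_out_A(rho, dA, dB))

rho = rand_state(4, 1)   # state on A (qubit) x B (qubit)
sig = rand_state(4, 2)   # state on A' x B'

# H(AA'|BB') for rho x sig: the joint entropy is that of the tensor
# product, and the BB' marginal of a product state is rho_B x sig_B'
joint = vn_entropy(np.kron(rho, sig))
marg = vn_entropy(np.kron(trace_out_A(rho, 2, 2), trace_out_A(sig, 2, 2)))
lhs = joint - marg
rhs = cond_entropy(rho, 2, 2) + cond_entropy(sig, 2, 2)
```

The two sides agree to numerical precision for any pair of input states.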
---
abstract: '[The flux-flux plot (FFP) method can provide model-independent clues regarding the X-ray variability of active galactic nuclei. To use it properly, the bin size of the light curves should be as short as possible, provided the average counts in the light curve bins are larger than $\sim 200$. We apply the FFP method to the 2013, simultaneous [*XMM-Newton*]{} and [*NuSTAR*]{} observations of the Seyfert galaxy MCG–6-30-15, in the 0.3–40 keV range. The FFPs above $\sim 1.6$ keV are well-described by a straight line. This result rules out spectral slope variations and the hypothesis of absorption driven variability. Our results are fully consistent with a power-law component varying in normalization only, with a spectral slope of $\sim 2$, plus a variable, relativistic reflection arising from the inner accretion disc around a rotating black hole. We also detect spectral components which remain constant over $\sim 4.5$days (at least). At energies above $\sim 1.5$ keV, the stable component is consistent with reflection from distant, neutral material. The constant component at low energies is consistent with a blackbody spectrum of $kT_{\rm BB} \sim 100$eV. The fluxes of these components are $\sim 10-20\%$ of the average continuum flux (in the respective bands). They should always be included in the models that are used to fit the spectrum of the source. The FFPs below 1.6keV are non-linear, which could be due to the variable warm absorber in this source.]{}'
author:
- |
E. S. Kammoun,$^{1}$[^1] and I. E. Papadakis$^{2,3}$\
$^{1}$SISSA, via Bonomea 265, I-34135 Trieste, Italy\
$^{2}$Department of Physics and Institute of Theoretical and Computational Physics, University of Crete, 71003, Heraklion, Greece\
$^{3}$IESL, Foundation of Research and Technology, 71110 Heraklion, Greece\
bibliography:
- 'ek-MCG-ref.bib'
date: 'Accepted XXX. Received YYY; in original form ZZZ'
title: 'The nature of X-ray spectral variability in MCG–6-30-15'
---
\[firstpage\]
galaxies: active – galaxies: individual: MCG–6-30-15 – galaxies: nuclei – galaxies: Seyfert – X-rays: galaxies
Introduction {#sec:intro}
============
According to the current paradigm, active galactic nuclei (AGN) are thought to be powered by accretion of matter, in the form of a disc, onto a central supermassive black hole (BH) of mass $M_{\rm BH} \sim 10^{6-9}\,M_{\odot}$. AGN are strong X-ray emitters, and it is widely accepted that the X–rays are produced by Compton up-scattering of ultraviolet (UV)/soft X-ray disc photons off hot electrons [$\sim 10^9$K; e.g. @Shap76; @Haa93]. Since the X–ray luminosity is a substantial part of the bolometric luminosity in these objects, it is believed that the X–ray source (which is usually referred to as the ‘X–ray corona’) is located close to the central black hole, where most of the accretion power is released. AGN are highly variable in X–rays, both in flux and spectral shape. The amplitude of the X–ray flux variations is the highest, and the variability time scales are the shortest, among the variations at all wavelengths in radio-quiet AGN. This observational characteristic indicates that the X–ray source should also be small. Because of these characteristics, it is believed that X–ray spectral and timing studies can provide important clues regarding the physical processes that operate in the innermost region of AGN.
In this work, we apply the flux–flux plot (FFP) method to the simultaneous [*XMM-Newton*]{} and [*NuSTAR*]{} observations of the Seyfert 1 galaxy MCG–6-30-15 ($z = 0.00775$), performed in January 2013. Our main objective is to study its X–ray flux and spectral variability properties. The FFP method was first developed by [@Chu01] and was applied to the study of the X–ray variability of the black hole binary Cygnus X-1. It was first applied to AGN studies by [@Tay03], with the aim of studying the X-ray spectral variability of X-ray bright Seyferts. It has been used since then in numerous AGN X–ray variability studies.
MCG–6-30-15 is the archetype of Seyferts with broad iron lines in their X–ray spectra. It was the first source where a broad Fe K$\alpha$ line with a red tail was detected. The line shape was interpreted as being due to relativistic reflection, implying an almost maximally spinning Kerr black hole [e.g. @Tana95; @Iwa96; @Iwa99; @Min07; @Mari14]. This interpretation was supported by the detection of short delays between the X–ray continuum and the soft band (i.e. X–rays below $\sim 1.5$ keV) emission [e.g. @Emmanou11; @Emma14; @Kara14]. [@Epitropakis16] showed that the iron line/continuum time delays are consistent with the delays between the hard (i.e. $>2$ keV) and soft band variations.
MCG-6-30-15 is highly variable in X–rays. It shows large amplitude flux and spectral variations on short (minutes/hours) and long (days/years) time scales. Its spectral variations have been interpreted within the context of a two component model which consists of: 1) a highly variable power-law (PL) continuum (with an almost constant spectral slope of $\Gamma \sim 2$), and 2) a less variable ionized reflection spectrum arising within a few gravitational radii [@Shih02; @Fab03; @Tay03; @Parker14]. The soft X-ray spectrum of the source is affected by a complex warm absorber [e.g. @Otani96; @Reynolds97; @Brand01; @Turner03; @Turner04], whose properties vary in time, and should add to the observed variability of the source. In fact, [@Mil08; @Mil09] proposed a complex absorption-dominated model in order to explain the red-tail of the iron line and the spectral variability of MCG–6-30-15. According to this model, partial-covering absorbers in the line of sight (having column densities in the $10^{22}-10^{24}\,{\rm cm^{-2}}$ range), can produce an apparent broadening of the Fe K$\alpha$ line similar to the one caused by relativistic effects [e.g. @Mil07; @Turner07]. Variability in the covering fraction of these absorbers could also explain the observed spectral variations.
We recently applied the FFP method to the narrow-line Seyfert 1 galaxy IRAS13224–3809 [@Kam15]. We found that, if the source is highly variable and the intrinsic FFP is non-linear, the shape of the observed FFPs may be affected by the light curves’ bin size. We suggested the use of the shortest possible bin size in the construction of the FFPs. In this work, we investigate the effects of Poisson noise on the observed FFPs, and we provide practical guidelines for their estimation.
Although the main objective in the past applications of the FFP method was the determination of constant spectral components, in this work we use the FFPs to also study the variable spectral components in the X–ray spectrum of MCG-6-30-15. FFPs can provide model-independent information on the origin of the spectral variability in AGN, and MCG–6-30-15 is an ideal target for this: it is highly variable and X–ray bright. As a result, we can use the [*NuSTAR*]{} data to study the FFPs at energies up to 40 keV. This was not possible to achieve in the case of IRAS13224–3809, whose flux is low above $\sim 3-4$ keV. We detect a constant component at energies above $\sim 1.5$ keV, which is indicative of X-ray reflection from neutral material. We find that the hard X-ray emission is variable in amplitude, but not in shape (contrary to IRAS13224–3809), and that it cannot be due to absorption related variations only. Similar to IRAS13224–3809, we find strong evidence of a variable X–ray reflection component originating from an ionized disc, which extends to the inner stable circular orbit around a maximally rotating BH. We also find evidence of a constant component at low energies, which may arise from the inner disc.
Observations and data reduction {#sec:obsred}
===============================
*XMM-Newton* {#subsec:XMMdata}
------------
The [*XMM-Newton*]{} satellite [@Jans01] observed MCG–6-30-15 simultaneously with [*NuSTAR*]{} [@Har13], starting on 2013 January 29 during three consecutive revolutions (Obs. IDs 0693781201, 0693781301, and 0693781401). The data are available in the [*XMM-Newton*]{} Science Archive[^2] (XSA). We considered data provided by the EPIC-pn camera [@Stru01] only, which was operating in small window/medium filter imaging mode. [We do not consider the data from the two EPIC-MOS [@Tur01] detectors because they were affected by a high level of pile-up [@Mari14].]{}
We reduced the data using the [*XMM-Newton*]{} Science Analysis System ([SAS]{}v15.0.1) and the latest calibration files. The data were cleaned for strong background flares and were selected using the criterion PATTERN$\leq$4. Source light curves were extracted from a circle of radius 40 arcsec, while the background light curves were extracted from an off-source circular region of radius 50 arcsec. We checked for pileup and we found it to be negligible in all observations. Background-subtracted light curves were produced using the [SAS]{} task [EPICLCCORR]{}.
*NuSTAR* {#subsec:Nustardata}
--------
MCG–6-30-15 was observed by [*NuSTAR*]{} with its two co-aligned telescopes with corresponding Focal Plane Modules A (FPMA) and B (FPMB) starting on 2013 January 29 (Obs. IDs 60001047002, 60001047003, and 60001047005). We reduced the [*NuSTAR*]{} data following the standard pipeline in the [*NuSTAR*]{} Data Analysis Software (NuSTARDASv1.6.0). We used the instrumental responses from the latest calibration files available in the [*NuSTAR*]{} calibration database (CALDB). The unfiltered event files were cleaned with the standard depth correction, which reduces the internal background at high energies, and we excluded South Atlantic Anomaly passages from our analysis. The source and background light curves were extracted from circular regions of radii 15 and 3, respectively, for both FPMA and FPMB, using the HEASoft task [NUPRODUCT]{}, and requiring an exposure fraction larger than 50%. We checked that the background-subtracted light curves of the two [*NuSTAR*]{} modules were consistent with each other as follows. We divided the FPMA over the FPMB light curves (binned at $\Delta t = 1\,{\rm ks}$), in all the energy bands we consider in this work (see next Section), and we fitted the ratio as a function of time with a constant, $C$. The fit was acceptable in all cases, indicating that the FPMA and FPMB light curves are consistent ($C$ being consistent with 1 in all cases). Given this result, we added the FPMA and FPMB light curves in the various energy bands considered in this work, using the [FTOOLS]{} [@Ftools] command [LCMATH]{}, in order to increase the signal-to-noise of the [*NuSTAR*]{} light curves.
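The constant-fit consistency check described above amounts to an inverse-variance weighted mean of the FPMA/FPMB ratio light curve plus a $\chi^2$ statistic. A minimal sketch with simulated light curves (all count rates and uncertainties below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(100.0)                          # 100 bins of 1 ks
rate = 0.2 * (1.0 + 0.3 * np.sin(t / 15.0))   # common intrinsic variability
sig_a = sig_b = 0.01                          # per-bin rate uncertainties
fpma = rate + rng.normal(0.0, sig_a, t.size)
fpmb = rate + rng.normal(0.0, sig_b, t.size)

ratio = fpma / fpmb
err = np.abs(ratio) * np.sqrt((sig_a / fpma) ** 2 + (sig_b / fpmb) ** 2)

# best-fit constant = inverse-variance weighted mean of the ratio
w = 1.0 / err ** 2
C = np.sum(w * ratio) / np.sum(w)
C_err = 1.0 / np.sqrt(np.sum(w))
chi2 = np.sum(((ratio - C) / err) ** 2)       # dof = t.size - 1
```

If the two modules track each other, $C$ comes out consistent with 1 within its error, and $\chi^2$ is comparable to the number of degrees of freedom.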
Figure\[fig:lightcurve\] shows the [*XMM-Newton*]{} and [*NuSTAR*]{} light curves in the 3–4keV band (chosen to be the reference band; see next Section), normalized to the mean count rate. We plot data during the four time periods when both satellites were observing the source (we considered data from these periods only, by merging the good time intervals tables of the two satellites using the [FTOOLS]{} command [MGTIME]{}). This figure shows the large variability range of the source (the max-to-min flux ratio is $\sim 7$) but also the consistency between the instruments.
Flux-flux analysis {#sec:FFA}
==================
Choice of the energy bands {#subsec:refband}
--------------------------
The first task in the flux-flux analysis is to define the reference band. Ideally, the flux in this band should be representative of the X-ray primary emission mainly, and should have the largest possible signal-to-noise ratio. In our case this band should also be common in both [*XMM-Newton*]{} and [*NuSTAR*]{} data. For these reasons, we chose 3–4keV as the reference band. Table\[table:log\] lists the net exposure time and the average 3–4keV count rate for each of the 4 time intervals and for the various detectors.
To construct the FFPs at energies above 4 keV (the high-energy FFPs, hereafter) we divided the 4–40keV band into 10 sub-bands. The first five were common to both [*XMM-Newton*]{} and [*NuSTAR*]{}, with $\Delta E =1$keV in the energy range 4–8keV, and $\Delta E =2$keV for the fifth sub-band (8–10keV). Using data from these bands and the reference band we constructed FFPs (plotted in Fig.\[figapp:commFFPs\]). At energies larger than 10keV, we used [*NuSTAR*]{} data only (Fig.\[figapp:nustarFFPs\]). We considered two sub-bands with $\Delta E =2$keV. Then we chose a width of $\Delta E=3$keV and 5keV for the following two sub-bands. We also considered the light curve in the 25–40keV sub-band ($\Delta E=15$keV). We did not consider the data at energies higher than 40 keV, because of the rapid decrease of the signal-to-noise ratio at these energies.
At energies below 3 keV, we extracted [*XMM-Newton*]{} light curves from 7 sub-bands in the energy range 0.3–1keV with a width of $\Delta E =0.1$keV. Then we considered two sub-bands with $\Delta E = 0.3$keV, one with $\Delta E = 0.4$keV, and $\Delta E=1$keV for the last sub-band (2–3 keV). Using these light curves, and the reference band, we constructed the low-energy FFPs (plotted in Fig.\[figapp:lowEFFPs\]).
| Int. | Exp. time (ks), EPIC-pn/FPMA,B | Mean 3–4 keV count rate, EPIC-pn | Mean 3–4 keV count rate, FPMA(B) |
|------|--------------------------------|----------------------------------|----------------------------------|
| 1    | 41/37                          | 1.23 $\pm$ 0.05                  | 0.21(0.22) $\pm$ 0.01            |
| 2    | 84/83                          | 1.66 $\pm$ 0.04                  | 0.30(0.30) $\pm$ 0.01            |
| 3    | 129/129                        | 0.93 $\pm$ 0.02                  | 0.17(0.17) $\pm$ 0.01            |
| 4    | 48/43                          | 0.76 $\pm$ 0.03                  | 0.13(0.14) $\pm$ 0.01            |

\[table:log\]
Choice of the time bin size {#sec:timebin}
---------------------------
The time bin size of the light curves, $\Delta t_{\rm bin}$, plays a significant role in the FFP analysis [@Kam15]. To investigate this issue, we used [*XMM-Newton*]{} and [*NuSTAR*]{} light curves with $\Delta t_{\rm bin} = 100$s, 1ks, and 5.8ks (equal to the [*NuSTAR*]{} orbit) to create the low and high-energy FFPs (the 100s, 1ks, and 5.8ks FFPs, hereafter). We fitted them with a power-law plus constant (PLc) model of the form, $$y = A_{\rm PLc}x^{\beta} + C_{\rm PLc},
\label{eq:PLc}$$ ($x$ in this, and all equations hereafter, represents the count rate in the reference band). We used the [MPFIT]{}[^3] package [@Mark09], taking into account the errors on the $y$-axis only.
In general, the best-fit parameters in the case of the 1 and 5.8ks high-energy FFPs are consistent with each other. This is not the case with the low-energy FFPs. This is similar to what was observed in IRAS 13224–3809 [@Kam15] and suggests that the intrinsic FFPs are not linear at energies below $\sim 2-3$ keV (see §\[subsec:lowEFFP\]). The model parameters from the best-fits to the 100s binned FFPs are significantly different, at all energies. As we demonstrate in Appendix\[app:poisson\], this discrepancy is due to Poisson noise effects, which become significant when the count rate is low and $\Delta t_{\rm bin}$ is small. We find that the average number of counts per bin in each light curve should be larger than $\sim 200$ photons in order to be able to determine the intrinsic FFP shape, without any distortions due to Poisson noise. Given this result, and the disagreement between the 1ks and 5.8ks results in the low-energy FFPs, we decided to study the FFPs which are constructed with the use of the 1ks binned light curves at all energy bands, except the two highest [*NuSTAR*]{} energy bands, where we used the 5.8ks binned light curves (to satisfy the high count rate criterion).
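The PLc fits above were performed with the IDL package MPFIT. As a hedged illustration of the procedure, and of the $\gtrsim 200$ counts-per-bin criterion, the sketch below fits the PLc model to a synthetic FFP with `scipy.optimize.curve_fit`; all count rates, uncertainties, and model parameters are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def plc(x, A, beta, C):
    # PLc model: y = A * x**beta + C
    return A * x ** beta + C

rng = np.random.default_rng(4)
x = rng.uniform(0.5, 3.0, 300)           # reference-band (3-4 keV) rate, count/s
sigma_y = 0.01
y = plc(x, 0.4, 1.0, 0.05) + rng.normal(0.0, sigma_y, x.size)

# fit, taking into account the errors on the y-axis only
popt, pcov = curve_fit(plc, x, y, p0=[0.3, 1.0, 0.0],
                       sigma=np.full(x.size, sigma_y), absolute_sigma=True)
A_fit, beta_fit, C_fit = popt

# Poisson-noise criterion from the text: the light-curve bin size should
# give >= ~200 counts per bin on average, otherwise the FFP shape is distorted
dt_bin = 1000.0                          # s (1 ks bins)
enough_counts = x.mean() * dt_bin >= 200.0
```

A best-fit slope $\beta$ consistent with one, as found for the high-energy FFPs, signals that a straight line describes the data equally well.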
The high-energy flux-flux plots {#subsec:highE-FFP}
-------------------------------
We fitted the high-energy FFPs with the PLc model (eq.\[eq:PLc\]). We fitted both the data of the four time intervals shown in Fig.\[fig:lightcurve\] separately, and the data from all intervals combined together. The fits were statistically accepted in all cases, and the best-fit slopes were consistent with one (at all energies). This result suggests that a straight line can also fit the FFPs. So we re-fitted them with a linear model of the form, $$y = A_{\rm L} x + C_{\rm L},
\label{eq:linear}$$ using the [MPFFITEXY]{} routine [@Williams10] which takes into account the errors on both $x$ and $y$ variables. Tables \[table:XMM-highE\] and \[table:Nustar-highE\] in Appendix\[app:tables\] list the best-fit results to the individual and the combined FFPs. The solid lines in Fig.\[figapp:commFFPs\] and \[figapp:nustarFFPs\] show the best-fit lines to the combined high-energy FFPs.
The resulting $A_{\rm L}$ and $C_{\rm L}$ values from the best-fits to the FFPs of the individual intervals were consistent within the errors, at all energy bands. Filled symbols in Fig.\[fig:weighted-highE\] show their weighted mean ($A_{\rm L,wm}$ and $C_{\rm L,wm}$) plotted as a function of the mean energy of each energy bin. Empty symbols in the same figure show the best-fit $A_{\rm L,all}$ and $C_{\rm L,all}$ values we get when we fit the combined FFPs (using the data from all four segments). They are consistent with $A_{\rm L,wm}$ and $C_{\rm L,wm}$ (within $3\sigma$). Since the errors of $A_{\rm L,all}$ and $C_{\rm L,all}$ are smaller than the errors of $A_{\rm L,wm}$ and $C_{\rm L,wm}$, we will use the former in our analysis.
In order to show the consistency of the results derived from the [*XMM-Newton*]{} and [*NuSTAR*]{} FFPs, we can re-write eq.\[eq:linear\] as follows, $$\frac{y}{\langle y \rangle} = \frac{A_{\rm L} \langle x \rangle }{\langle y \rangle} \frac{x}{\langle x \rangle} + \frac{C_{\rm L}}{\langle y \rangle},$$ where $\langle y \rangle$ and $\langle x \rangle$ are the mean count rates. Figure\[fig:XNcomp\] shows the normalized [*NuSTAR*]{} best-fit values (i.e. $A'=A_{\rm L,all}\langle x \rangle/\langle y \rangle$ and $C'=C_{\rm L,all}/\langle y \rangle$), versus the respective [*XMM-Newton*]{} values. This plot shows that the results from the analysis of the [*XMM-Newton*]{} FFPs are consistent with those from the [*NuSTAR*]{} FFPs.
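The linear fits with errors on both axes were done with the IDL routine MPFITEXY. A stand-in sketch using orthogonal distance regression (`scipy.odr`), with invented numbers, also computes the instrument-independent normalization $A' = A_{\rm L}\langle x\rangle/\langle y\rangle$ and $C' = C_{\rm L}/\langle y\rangle$ used in the comparison above:

```python
import numpy as np
from scipy import odr

rng = np.random.default_rng(5)
x_true = rng.uniform(0.5, 3.0, 200)      # reference-band rate (illustrative)
A_true, C_true = 0.6, 0.08
err_x = err_y = 0.02
x = x_true + rng.normal(0.0, err_x, x_true.size)
y = A_true * x_true + C_true + rng.normal(0.0, err_y, x_true.size)

# y = A_L * x + C_L, fitted with errors on both axes
linear = odr.Model(lambda p, x: p[0] * x + p[1])
data = odr.RealData(x, y, sx=np.full(x.size, err_x), sy=np.full(x.size, err_y))
out = odr.ODR(data, linear, beta0=[1.0, 0.0]).run()
A_fit, C_fit = out.beta

# normalized best-fit values, comparable across instruments
A_prime = A_fit * x.mean() / y.mean()
C_prime = C_fit / y.mean()
```

With consistent instruments, the normalized $(A', C')$ pairs from the two data sets should fall on the one-to-one line.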
The best-fit model constants, $C_{\rm L}$, are significantly larger than zero, even at the highest energy band. This result suggests the presence of a spectral component which is not variable, at least on time scales comparable to the duration of the MCG–6-30-15 observations ($\sim 4.5$ days). In addition, the high-energy FFPs are well described by a straight line. This is consistent with the hypothesis of a power-law like X–ray continuum which varies in normalization only. In this case, the slope of the line which fits the FFPs, $A_{\rm L}$, should be equal to the ratio of $y$ over $x$.
To investigate this issue further, we created fake power-law spectra using the XSPEC [@Arn96] command [FAKEIT]{}, assuming an absorbed PL model with $\Gamma$ in the range 1.95–2.2, with a step of $\Delta\Gamma=0.01$. We considered only Galactic absorption in the line of sight of the source [$N_{\rm H}=3.92 \times 10^{20}\,{\rm cm^{-2}}$; @Kal05], and the response matrices of EPIC-pn and FPMA/B. We estimated the expected count rate in each one of the high energy sub-bands, and we computed their ratio over the 3–4 keV model count rate. In this way, we were able to compute $A_{\rm L,mod}$, and then “$A_{\rm L,mod}-$vs–Energy" data sets for each $\Gamma$ value.
Then we fitted the observed $A_{\rm L,all}-E$ data (empty symbols in Fig.\[fig:weighted-highE\]) to the $A_{\rm L,mod}-E$ lines. We found that the observed $A_{\rm L}$’s are best reproduced in the case when $\Gamma_{\rm X} =2.04\pm 0.02$ ($\chi^2_{\rm X}/{\rm degrees\, of\, freedom\, (dof)}=7.4/4$), and $\Gamma_{\rm N} = 2.18\pm 0.02$ ($\chi^2_{\rm N}/{\rm dof}=22/9$) for the [*XMM-Newton*]{} and [*NuSTAR*]{} FFPs[^4], respectively. The [*XMM-Newton*]{} and [*NuSTAR*]{} best-fit $A_{\rm L,mod}-E$ models are plotted with the solid and dashed lines, respectively, in Fig.\[fig:weighted-highE\]. We note that the best-fit $A_{\rm L,mod}-E$ lines do not give a statistically accepted fit to the data ($\chi^2_{\rm X+N}=29.4/13$ dof, $p_{null}=5.7\times 10^{-3}$). The weighted mean of the residuals ratio ($|(A_{\rm L,mod}-A_{\rm L,all})/A_{\rm L,all}|$) over the 4–40 keV band is $(1.96 \pm 0.49)\%$. Therefore, a PL component which varies in normalization accounts for most, but not all, of the observed variations. We further discuss this issue in §\[subsec:Spec-highE\].
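Ignoring the detector responses and Galactic absorption that the [FAKEIT]{} simulations fold in, the expected FFP slope for a band $[E_1, E_2]$ is simply the ratio of the photon flux of $N(E)\propto E^{-\Gamma}$ in that band to the flux in the 3–4 keV reference band. A simplified sketch (the band edges below are illustrative, and realistic $A_{\rm L,mod}$ values require the response matrices):

```python
import numpy as np

def band_flux(e1, e2, gamma):
    # photon flux of N(E) ~ E**-gamma integrated over [e1, e2] keV
    if abs(gamma - 1.0) < 1e-12:
        return np.log(e2 / e1)
    return (e1 ** (1.0 - gamma) - e2 ** (1.0 - gamma)) / (gamma - 1.0)

def a_mod(e1, e2, gamma, ref=(3.0, 4.0)):
    # expected FFP slope: band count rate over the reference-band rate
    return band_flux(e1, e2, gamma) / band_flux(*ref, gamma)

# example sub-bands (illustrative) and a spectral slope of Gamma = 2
bands = [(4, 5), (5, 6), (6, 7), (7, 8), (8, 10), (10, 12), (25, 40)]
slopes = [a_mod(e1, e2, 2.0) for e1, e2 in bands]
# for Gamma = 2 the 4-5 keV slope is (1/4 - 1/5)/(1/3 - 1/4) = 0.6 exactly
```

Repeating this over a grid of $\Gamma$ values and comparing with the observed $A_{\rm L,all}$'s mirrors the fitting procedure described above.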
The low-energy flux-flux plots {#subsec:lowEFFP}
------------------------------
As with the high-energy FFPs, we first fitted a PLc model to the low-energy FFPs of the individual time intervals. Figure\[figapp:lowEFFPs\] in Appendix\[app:plots\] shows the resulting best-fit PLc models. The best-fit results, the mean value of the best-fit parameters, and the best-fit parameters obtained by fitting all the data together are listed in Table\[table:XMM-lowE\]. The model parameters from the best fits to the individual time intervals were consistent with each other in all bands. However, contrary to the high-energy FFPs, the best-fit values derived by fitting all the data together do not agree with the mean value of the parameters obtained by fitting the FFPs of the individual time intervals.
Strictly speaking, the PLc model is not statistically acceptable, either when we fit the individual or the combined low-energy FFPs. The residual plots show significant, random data fluctuations around the best-fit models, indicative of small-amplitude, fast variations in the low energy bands which are independent of the continuum variations. When we fit a straight line to the best-fit residuals of the individual FFPs, the best-fit slope turns out to be consistent with zero. This suggests that the PLc model represents rather well the general trend in the low-energy FFPs. It accounts for most of the observed variations in the soft bands, and does not result in any large-scale, systematic trends in the residual plots. On the other hand, the residuals from the best fits to the combined FFPs do show systematic trends. We therefore accept the best-fit results to the individual FFPs as representative of the low energy FFPs. Since the best-fit parameters are consistent (within $3\sigma$) at all low-energy FFPs, we use their arithmetic mean[^5] in our analysis. Filled symbols in Fig.\[fig:lowE-mean\] show the mean model parameters plotted as a function of the centroid energy of each energy bin.
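For illustration, a PLc model of the form $y=A\,x^{\beta}+C$ (the form of eq.\[eq:PLc\]) can be fitted to count-rate pairs with a standard least-squares routine. The sketch below uses synthetic data; all numbers are illustrative, not the MCG–6-30-15 measurements, and the fitting routine used in the paper may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch of a PLc fit, y = A * x**beta + C, to (x, y) count-rate
# pairs with (assumed constant) measurement errors.

def plc(x, a, beta, c):
    return a * x**beta + c

rng = np.random.default_rng(1)
x = rng.uniform(1.0, 5.0, 200)              # "soft-band" count rates
y_err = np.full_like(x, 0.03)               # illustrative error per point
y = 0.4 * x**1.3 + 0.05 + rng.normal(0.0, y_err)  # FFP steeper than linear

popt, pcov = curve_fit(plc, x, y, sigma=y_err, absolute_sigma=True,
                       p0=[0.5, 1.0, 0.0])
a_fit, beta_fit, c_fit = popt
red_chi2 = np.sum(((y - plc(x, *popt)) / y_err) ** 2) / (x.size - 3)
print(beta_fit, red_chi2)                   # slope > 1 recovered, chi2/dof ~ 1
```

A best-fit slope $\beta$ significantly larger than one, as found below $\sim 1.6$ keV, is what singles out the non-linear FFPs.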
The best-fit model slopes (middle panel in Fig.\[fig:lowE-mean\]) are significantly larger than one at energies below $\sim 1.6$keV. Non-linear FFPs can be produced by intrinsic spectral slope variations, as demonstrated by [@Kam15]. However, these authors showed that $\Gamma$ variations result in FFP slopes which are flatter than one. In addition, the high-energy FFPs argue against intrinsic $\Gamma$ variations. We therefore conclude that the non-linear FFPs are not the result of spectral slope variations.
The magenta solid lines in Fig.\[figapp:lowEFFPs\] show the expected FFPs assuming a power-law spectrum with $\Gamma_{\rm X} = 2.04$, which varies only in normalization, as is the case with the high-energy FFPs (the predicted FFP lines are plotted assuming the Galactic absorption, only). The open circles in the top panel of Fig.\[fig:lowE-mean\] show the resulting $A_{\rm PLc}$. At energies $\sim 1.6-3$ keV, the observed FFP slopes are consistent with one, and the observed $A_{\rm PLc}$ are consistent with the predicted values. Not surprisingly, the magenta solid lines are also (broadly) consistent with the observed FFPs. We therefore conclude that the FFPs down to $\sim 1.6$ keV are consistent with a power-law spectrum with $\Gamma_{\rm X} \sim 2$, which varies only in normalization.
The observed FFPs are [*below*]{} the magenta solid lines at energies between $\sim 0.6-1.6$keV. Furthermore, the observed $A_{\rm PLc}$ are below the expected values at all energies below $\sim 1.6$ keV. This result suggests that the count rate in these energies is smaller than what we would expect based on the variable PL model that is consistent with the high-energy FFPs (even when we take into account the Galactic absorption). The lower than expected count rate can be explained by the well-known variable warm absorber in MCG-6-30-15, which affects mainly the low energy spectrum of the source. At the same time, if the absorber is variable, it can result in FFP slopes which are steeper than one (as we show in Appendix\[app:warmabs\]).
The best-fit model constants ($C_{\rm PLc}$) are positive at all energies below $\sim 1$keV (bottom panel in Fig.\[fig:lowE-mean\]). This is indicative of the presence of a spectral component at low energies which does not vary on time scales shorter than the duration of the observations. This agrees with the fact that, despite the warm absorption, the observed FFPs are above the predicted ones (magenta line) in the 0.3–0.6 keV range. This can only be explained by the presence of an extra spectral component (in addition to the variable PL and the warm absorber).
Discussion {#sec:disc}
==========
Absorption-induced X–ray continuum variability
----------------------------------------------
The fact that a straight line fits the high energy FFPs well provides model-independent evidence against variable, clumpy absorption dominating the X–ray variability in MCG-6-30-15. If that were the case, the observed count rate, $y(t)$, at energy $E_y$, would be equal to: $$y(t) = \left\{\prod_{i=1}^{N} \exp\left[-n_{\rm H,i}(t)\sigma(E_y)\right]\right\} AE_y^{-\Gamma},$$ assuming $N$ obscuring clouds, each one with equivalent hydrogen column, $n_{\rm H,i}(t)$, which is variable in time, while the X–ray continuum spectrum remains constant ($\sigma(E_y)$ is the photo-electric cross-section). The above equation becomes, $$y(t) =\exp\left\{\left[-\sum_{i=1}^{N}n_{\rm H,i}(t)\right]\sigma(E_y)\right\} AE_y^{-\Gamma},
\label{ynh}$$ and should also hold for the count rate at energy $E_x$, $$x(t) = \exp\left\{\left[-\sum_{i=1}^{N} n_{\rm H,i}(t)\right]\sigma(E_x)\right\} AE_x^{-\Gamma}.
\label{xnh}$$ We can solve for $\left[-\sum n_{\rm H,i}(t)\right]$ using eq.\[xnh\], and substitute it in eq.\[ynh\] in order to reach the following relation between the count rates in the two bands, $$y = Cx^{\beta},
\label{xynh}$$ where $C$ is a constant, and $\beta=\sigma(E_y)/\sigma(E_x)$. Equation \[xynh\] predicts a non-linear relation between $y$ and $x$, contrary to our results. Even if $N$ varies with time, eq.\[xynh\] should still hold. Therefore, our results show that the hypothesis that the X–ray variability in MCG-6-30-15 is due to variable absorption only (on the time scales we probe, at least) is not valid.
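The argument above is straightforward to verify numerically. The following sketch, with arbitrary illustrative cross-sections and continuum parameters, confirms that pure column-density variability produces an exact power-law, i.e. non-linear, relation between the two band count rates:

```python
import numpy as np

# Numerical check: if only the total column N_H(t) varies while the
# power-law continuum stays constant, the band count rates obey
# y = C * x**beta exactly, with beta = sigma(E_y)/sigma(E_x).

sigma_x, sigma_y = 3.0, 1.0        # photo-electric cross-sections (arb. units)
A, gamma = 10.0, 2.0               # constant power-law continuum
Ex, Ey = 1.0, 2.0                  # band energies (arb. units)

nh = np.linspace(0.0, 1.0, 50)     # time-variable total column density
x = np.exp(-nh * sigma_x) * A * Ex**(-gamma)
y = np.exp(-nh * sigma_y) * A * Ey**(-gamma)

beta = sigma_y / sigma_x
C = (A * Ey**(-gamma)) / (A * Ex**(-gamma))**beta
assert np.allclose(y, C * x**beta)  # the power-law relation holds exactly
print(beta)
```

Since the photo-electric cross-section falls steeply with energy, $\beta < 1$ when $E_y > E_x$, so the predicted FFP bends away from a straight line, unlike the observed high-energy FFPs.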
The constant high energy X–ray component {#subsec:highE-constant}
----------------------------------------
-------------------------- -------------------------------------------- -------------------------------------------- -------------------------------------------- --------------------------------------------
$\Gamma$ $ 2.06 _{- 0.19 }^{+ 0.17 }$ $ 2.03 _{- 0.19 }^{+ 0.17 }$ $ 1.99 _{- 0.20 }^{+ 0.18 }$ $ 1.91 _{- 0.20 }^{+ 0.19 }$
\[0.1 cm\] $E_{\rm cut}$ $ 27 _{- 7 }^{+ 12 }$ $ 26 _{- 7 }^{+ 12 }$ $ 26 _{- 7 }^{+ 12 }$ $ 25 _{- 6 }^{+ 11 }$
(keV)
\[0.1 cm\] $A_{\rm Fe}$ $ 0.26 _{- 0.04 }^{+ 0.05 }$ $ 0.26 _{- 0.04 }^{+ 0.05 }$ $ 0.26 _{- 0.04 }^{+ 0.05 }$ $ 0.27 _{- 0.04 }^{+ 0.05 }$
\[0.1 cm\] $i(^\circ)$ $ {0^f }$ $ {30^f }$ $ {45^f }$ $ {60^f }$
\[0.1 cm\] Norm $ 0.026 _{- 0.007 }^{+ 0.009 }$ $ 0.025 _{- 0.007 }^{+ 0.009 }$ $ 0.025 _{- 0.007 }^{+ 0.009 }$ $ 0.026 _{- 0.007 }^{+ 0.009 }$
$\chi^2 / {\rm d.o.f.}$ 14.25/12 14.19/12 14.43/12 14.8/12
\[0.1cm\]
-------------------------- -------------------------------------------- -------------------------------------------- -------------------------------------------- --------------------------------------------
: The best-fit parameters obtained by fitting the high energy constant component with [pexmon]{}.
Fixed.
\[table:pexmon\]
The linear model defined by eq.\[eq:linear\] consists of two terms. The $C_{\rm L}$ term should be representative of a spectral component which is not variable (at least over the sampled time scales). We used the best-fit $C_{\rm L,all}$ values and the [FTOOLS]{} command [ascii2pha]{} to construct the spectrum of this component at energies above 1.6 keV (Fig.\[fig:pexmonSpec\]). We fitted the spectrum with the neutral reflection model [pexmon]{} [@Nan07]. We fixed the reflection fraction to one and the abundance of heavy elements to solar, but we let the iron abundance and the cutoff energy free to vary. We kept all the parameters tied between the [*XMM-Newton*]{} and [*NuSTAR*]{} spectra, but we included a multiplicative cross-calibration constant, which we fixed to unity for the [*XMM-Newton*]{} spectrum and let free to vary for the [*NuSTAR*]{} spectrum. We found it to be consistent with one for all the cases that we considered.
The best-fit results are listed in Table \[table:pexmon\] for various inclinations, up to 60 degrees. The model fits the data well in all cases, which implies that we cannot constrain the inclination. The photon index is consistent with 2 (within the errors), and the iron abundance is subsolar in all cases. We also found a low value for the high-energy cutoff, similar to the one found for the variable component in §\[subsec:Spec-highE\] (the 3$\sigma$ upper limit is 120keV). Our results indicate that the constant component can result from reflection off neutral material. This component is constant over at least $\sim 4.5$ days, which places a lower limit on the distance of the reflector from the central source. Assuming that the BH mass is $M_{\rm BH} \simeq 1.6\times 10^6\,M_{\rm \odot}$ [@Bentz16], this implies that the reflecting material is located at a distance $D \geq 5\times 10^4\,r_{\rm g}$ ($r_{\rm g}=GM_{\rm BH}/c^2$ is the gravitational radius). This is $\sim 1.7$ times larger than the broad line region radius in this source [@Mari14].
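The quoted lower limit on the reflector distance follows from a light-crossing argument: a component that remains constant over $t \sim 4.5$ days must be reprocessed at a radius of at least $c\,t$. A quick numerical check, using standard constants and the BH mass adopted above:

```python
# Light-crossing check: the minimum distance of a component constant over
# t ~ 4.5 d is c * t, expressed here in gravitational radii.
G = 6.674e-11                       # gravitational constant (SI)
c = 2.998e8                         # speed of light (m/s)
M_sun = 1.989e30                    # solar mass (kg)

M_bh = 1.6e6 * M_sun                # BH mass adopted in the text
r_g = G * M_bh / c**2               # gravitational radius (m)
t = 4.5 * 86400.0                   # 4.5 days in seconds
D_min = c * t / r_g                 # minimum distance in units of r_g
print(f"D >= {D_min:.1e} r_g")      # ~5e4 r_g, as quoted above
```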
The constant low energy X–ray component {#sec:spec-lowE}
---------------------------------------
The model defined by eq.\[eq:PLc\], which fits well the low-energy FFPs, also consists of two terms. The $C_{\rm PLc}$ term could be representative of a low-energy spectral component which remains constant on time-scales of a few days (at least). However, this is not straightforward in this case. The soft X-ray spectrum of MCG–6-30-15 is characterized by complex and variable warm absorption. We demonstrate in Appendix\[app:warmabs\] that variable warm absorption can result in non-linear FFPs at low energies, with slopes steeper than one, as observed. The simulated FFPs are well fitted by a PLc model, with either positive or negative constants, $C_{\rm PLc,sim}$. In the cases that we considered, the absolute value of these constants is much smaller than the constants we measure in the observed low-energy FFPs, $C_{\rm PLc,obs}$. Although we cannot prove that this will always be the case, it is possible that $C_{\rm PLc,obs}$ are indicative of a spectral component which does not vary, at least over the duration of the observations.
We used the best-fit $C_{\rm PLc,obs}$ values listed in Table\[table:XMM-lowE\] (and the [FTOOLS]{} command [ascii2pha]{}) to construct the low-energy, constant spectral component of MCG-6-30-15 (plotted in Fig.\[fig:BBSpec\]). We fitted the spectrum with an absorbed blackbody (BB) spectrum, taking into account the Galactic absorption only. The fit (blue solid line in Fig.\[fig:BBSpec\]) is statistically acceptable ($\chi^2/{\rm d.o.f.}=4.7/7$). The best-fit temperature and normalization are $kT_{\rm BB} = 100 \pm 6$eV and $N_{\rm BB} = (1.99 \pm 0.3)\times 10^{-4}$, respectively.
Such a component could be due to the intrinsic emission of the inner disc. In this case, this component should be variable on the local viscous time scale, which even for a source with a BH mass of the order of a million solar masses could be of the order of many days. [In order to investigate the possibility of this component being representative of disc emission, we considered the [optxagnf]{} model [@Done12], which gives the spectral energy distribution of an accretion disc around a rotating SMBH, assuming Novikov-Thorne emissivity. We fitted this model to the data, assuming a BH mass of $1.6\times 10^6\,M_{\odot}$, a spin parameter of 0.998, and the emission from the inner part of the disc only (i.e. we fixed the model parameter $r_{out}$ to 2$r_{\rm g}$). The model fits the data well ($\chi^2= 12.9/8$ dof, $p_{null} = 0.11$), with the best-fit Eddington ratio being $\log \lambda_{\rm Edd} = -1.19 \pm 0.02$. We therefore conclude that the constant component in the soft-band of MCG–6-30-15 can be indicative of the inner disc emission, if the BH is maximally rotating, and the accretion rate is $\sim 6$ per cent of the Eddington limit.]{}
We note that the best-fit residuals plot in Fig.\[fig:BBSpec\] indicates an absorption feature at energies $\sim 0.6-0.8$ keV. Although not statistically significant, this feature is reminiscent of warm absorption. It suggests that the constant soft spectral component is emitted by a region close to the central source, in agreement with the assumption that this is the intrinsic emission from the inner disc.
The variable X–ray spectral component {#subsec:Spec-highE}
-------------------------------------
The $A_{\rm PLc}x^{\beta}$ and $A_{\rm L}x$ terms in eqs.\[eq:PLc\] and \[eq:linear\] should account for the variable, X–ray continuum spectral component in MCG–6-30-15, at low and high energies, respectively. We considered the mean 3–4keV count rate with the best-fit $A_{\rm L,all}$ and $A_{\rm PLc}$ values (at energies above and below 1.6 keV, respectively), and we used the [FTOOLS]{} command [ascii2pha]{} to create the spectrum of this component. In principle, we could use any 3–4keV count rate value to create the spectrum. We chose the mean so that the resulting spectrum is representative of the variable component in the average-flux state of the source (during these observations). The high energy variable component, $y_{\rm var,h}$, is plotted with the filled symbols in the top panel of Fig.\[fig:meanvarspec\] (circles and triangles indicate the data using the best-fit [*XMM-Newton*]{} and [*NuSTAR*]{} $A_{\rm L,all}$ values, respectively). The low-energy variable component, $y_{\rm var,l}$, is plotted with the open circles in the same panel.
We fitted $ y_{\rm var,h}$ with a PL model, taking into consideration the Galactic absorption in the line of sight of the source. The model provides a rather poor fit to the data ($\chi^2=30$/15 dof; $p_{null}=0.01$), in agreement with the results we presented in §\[subsec:highE-FFP\]. The weighted mean of the residuals ratio in the 2–10keV band is $1.5\pm 0.5\%$. This is in agreement with the results from the principal component analysis (PCA) method, which reveals that variability in the normalization of the PL component can account for $\sim 97\%$ of the variability in this source [@Parker14; @Parker15].
The best-fit residuals (shown in the middle panel in Fig.\[fig:meanvarspec\]) indicate a deficit at $\sim 3$ keV and an excess at around $\sim 6.5$ and 20 keV, and are suggestive of an X-ray reflection component. We therefore re-fitted $ y_{\rm var,h}$ with [relxill]{} [@Daus13; @Gar14] (accounting for Galactic absorption). We assumed a maximally spinning black hole, a power-law emissivity profile with $q=3$, and a reflection fraction of 1. We fixed the inner and outer disc radius to the ISCO and to 400$r_{\rm g}$, respectively. The model fits the data well ($\chi^2/{\rm dof}=10.8/11$; the best-fit residuals are plotted in the bottom panel of Fig.\[fig:meanvarspec\]).
The best-fit results are listed in the second column of Table\[table:parvar\]. The best-fit spectral slopes are consistent with the spectral slopes we found in §\[subsec:highE-FFP\]. The best-fit PL cut-off energy is rather low when compared to other AGN [e.g. @Marinucci16] but it is not well constrained. The $3\sigma$ confidence range is \[34–295 keV\]. We note that the respective $E_{\rm cut}$ range from the [pexmon]{} best-fit to the constant component (for all inclinations) is \[12–120 keV\]. When combined together, the two results indicate a cut-off energy between 34–120 keV in MCG-6-30-15. We also note that the best-fit iron abundances from the [relxill]{} fit to the variable component and from the [pexmon]{} fit to the constant component are not in agreement. We cannot explain this discrepancy. It could either mean that our modelling is not complete, or it may be indicative of the degree to which one (or both) of the models actually approximates the respective spectral component.
[lccc]{}\
$\Gamma_{\rm X}$ & $ 2.03 \pm 0.03 $ & $ 2.03^f $ & $2.12 _{-0.04}^{+0.10}$\
$\Gamma_{\rm N}$ & $ 2.16 \pm 0.05 $ & $ 2.16^f $ & $2.25_{-0.05}^{+0.10}$\
$i (^\circ)$ & $ 42_{-10}^{+5} $ & $ 42^f $ & $44_{-9}^{+6}$\
$\log \xi_{\rm d}$ & $ 1.7_{-0.3}^{+0.2} $ & $ 1.7^f $ & $1.69_{-0.40}^{+0.38}$\
$A_{\rm Fe}$(solar) & $ 1.48_{-0.60}^{+0.89} $ & $ 1.48^f $ & $0.88_p^{+0.56} $\
$E_{\rm cut}$(keV) & $ 60_{-15}^{+23} $ & $ 60^f $ & $81_{-64}^{+200} $\
\
$N_{\rm H}\,(10^{21} \rm cm^{-2})$ & $ - $ & $ 5.1_{-0.6}^{+1.1} $ & $7.1_{-1.1}^{+2.3} $\
$\log \xi_{\rm abs}$ & $ - $ & $ 0.78 \pm 0.10 $ & $ 0.66_{-0.18}^{+0.10} $\
CF & $ - $ & $ 0.96_{-0.08}^p $ & $ 0.88 \pm 0.07 $\
$\chi^2/{\rm dof}$ & $ 10.8/11 $ & $ 17/23 $ & $ 10.8/17 $\
pegged to its maximum/minimum value.
fixed.
\[table:parvar\]
The extrapolation of the best-fit [relxill]{} model to low energies ($<1.6$keV) is indicated by the dotted blue line in the top panel of Fig.\[fig:meanvarspec\]. The model exceeds the average variable component in this energy range. This is due to the effects of the warm absorber. Hence, we fitted the full band (0.3–40keV) variable component with the model: [zxipcf $\times$ relxill]{} (accounting for Galactic absorption). First, we fixed the [relxill]{} parameters to their best-fit values obtained from fitting $ y_{\rm var,h}$. The fit was statistically acceptable ($\rm \chi^2/dof = 17/23$). The best-fit warm absorber parameters are listed in the third column of Table\[table:parvar\]. The best-fit model and the corresponding residuals are shown in the top and bottom panel of Fig.\[fig:meanvarspec\], respectively. We re-fitted the full band variable spectrum with the same model but letting the [relxill]{} parameters free. The fit was also acceptable ($\rm \chi^2/dof=11/17$). The best-fit parameters are reported in the last column of Table\[table:parvar\]. There are differences between the best-fit values listed in the first and third columns of Table\[table:parvar\], notably in the PL spectral slopes, but they are within 2$\sigma$.
Our results imply that the observed variations in MCG-6-30-15 are due to a PL continuum which is variable in normalization only, and a variable, X–ray reflection component from the (ionized) inner disc. Various studies in the past have detected short delays between the continuum and the soft band variations in this source [e.g. @Emma14; @Kara14]. Recently, [@Epitropakis16] also detected similar delays between the continuum and the iron line variations in MCG–6-30-15. To measure time lags, both the continuum and the reflection components must be variable. Our results confirm this scenario.
Conclusions {#sec:conclusion}
===========
To correctly estimate flux-flux plots, the mean counts per bin in both light curves must be larger than 200 in order to avoid distortions in the FFP shape due to the Poisson noise bias. As long as this criterion is fulfilled, the bin size of the light curves should be as small as possible, in order to avoid further distortions due to binning, in the case when the intrinsic FFP has a non-linear shape.
[The FFP analysis can provide model-independent information on both the constant and variable spectral components in the X–ray spectra of AGN. The latter possibility has not been explored in detail so far, although it has interesting advantages. For example, the FFP shape (linear or power-law like) can show conclusively, and in a model-independent way, whether variable absorption operates or not. The spectrum shown in Fig.\[fig:meanvarspec\] is not a traditional, observed spectrum. It is a representation of the spectral energy distribution of the source at a certain flux level, using the results from the FFP analysis. Its energy resolution is low, but it is free of non-variable spectral components that complicate the subsequent model fitting. We could construct these spectra at various flux levels, and study the spectral evolution of the source in this way. We plan to explore this possibility in detail in the future.]{} Our conclusions from the study of the MCG–6-30-15 FFPs are summarised below.\
[*A) The non-variable, X–ray spectral components in MCG–6-30-15.*]{}\
A1) We detect spectral component(s) that remain constant at least over the duration of the observations we study (i.e. $\sim 4.5$ days). At energies above $\sim 1.6$ keV the constant spectral component is consistent with reflection from cold, neutral material, located more than $5 \times 10^4\,r_{\rm g}$ away from the central source. Our results are consistent with the results of [@Tay03]. At energies below $\sim 1.6$keV, the constant component is well fitted by a black-body model with a temperature of $\sim 0.1$keV. This component cannot correspond to the soft-excess expected from X–ray reflection from a mildly ionized disc, as this should be variable (since the reflection at high energies is variable). It could be due to intrinsic thermal emission from the inner disc itself, if the disc extends to the ISCO around a maximally spinning BH.
A2) The 2–10 and 2–40keV flux of the high energy, constant component is $5\times 10^{-12}$ and $1.9\times 10^{-11}$ $\rm erg\,s^{\rm -1}cm^{-2}$, respectively, which is 10% and 20% of the average X–ray continuum flux. The 0.3–1.6 keV flux of the low energy component is $\sim 17\%$ of the average X–ray continuum flux in the same band. These are not negligible fractions so, in addition to a PL continuum plus a relativistically blurred reflection component, modelling of the X–ray spectrum of the source should also add: a) a constant reflection component from cold material, and b) a constant, blackbody-like component at low energies.\
[*B) The variable, X–ray spectral components in MCG–6-30-15*]{}.\
B1) The FFPs at energies above $\sim 1.6$ keV are well fitted with a straight line. This result proves that: a) there are no spectral slope variations, and b) the observed variations cannot be caused by variations of the number and/or the covering factor of absorbing clouds. These are straightforward results, which do not depend on any assumptions regarding the model fitting of the source’s spectrum.
B2) Both the low and the high energy FFPs are fully consistent with a PL continuum, which varies in normalization, plus a variable (on time scales as short as 1ks), X-ray reflection component, from ionized material close to the central BH. The variable reflection component is consistent with the detection of “soft" time lags in this source, since in order to detect delays between two components, both of them must vary. Part of the observed variations at energies below $\sim 1$keV are due to variations of the warm absorber. The presence of the variable warm absorber is supported by the fact that the FFPs at energies below 1.6keV are non-linear (as in IRAS13224–3809).\
[*C) The soft excess in MCG-6-30-15*]{}.\
It consists of both a constant and a variable component. Both could originate from the inner disc, as long as it extends to the ISCO around a fast rotating BH: the former could be due to the disc’s intrinsic emission, the latter due to X–ray reprocessing (from the same disc region). Using the best-fit results of the constant and variable components, we estimate that the 0.3–1 keV fluxes of the constant and the variable component, in excess of the PL, are $6.8\times 10^{-12}$ and $4.3\times 10^{-12}$ ergs s$^{-1}$ cm$^{-2}$, respectively. Therefore, $\sim 60$ and 40 per cent of the soft excess flux is due to these two components. We note that the variable component flux is based on the modeling of the variable component we reported in §\[subsec:Spec-highE\], when the source was in its average-flux state during the 2013 observations. Obviously, the contribution of the variable soft excess component (due to X–ray reprocessing) will be larger/smaller during higher/lower flux states of the source.
Acknowledgements {#acknowledgements .unnumbered}
================
We thank the anonymous referee for useful comments. This work made use of data from the [*NuSTAR*]{} mission, a project led by the California Institute of Technology, managed by the Jet Propulsion Laboratory, and funded by NASA, and of [*XMM-Newton*]{}, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. This research has made use of the [*NuSTAR*]{} Data Analysis Software (NuSTARDAS) jointly developed by the ASI Science Data Center (ASDC, Italy) and the California Institute of Technology (USA).
The Poisson noise effects to FFPs {#app:poisson}
=================================
We chose the 25–40 vs 3–4 keV [*NuSTAR*]{} FFP (bottom panel in Fig.\[figapp:nustarFFPs\]) to investigate the effects of the Poisson noise on the FFPs, because the mean count rate in these bands is the smallest among all FFPs. First we created simulated ([*NuSTAR*]{}) 3–4 keV band count rates assuming a log-normal distribution with mean and standard deviation equal to the mean and standard deviation of the observed count rates in this band. Using the resulting values we computed 25–40 keV band count rates based on the best-fit linear relation we obtained from fitting the observed, 1ks binned FFP. We multiplied the count rates in both bands by a factor of $1, 2, 3, 4$ and $5\times 10^3$ (i.e. the assumed bin size, in seconds), and drew the simulated counts from a Poisson distribution. We divided the resulting counts by the respective factor to get the final, simulated count rate in both bands, and we used them to construct 1, 2, 3, 4 and 5ks binned, simulated FFPs. Then, we fitted them with a linear model, exactly as we did with the observed FFPs.
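A simplified version of this simulation can be sketched as follows. The intrinsic FFP parameters, count-rate distribution and exposures below are illustrative, and a plain unweighted linear fit replaces the weighted fits used in the text:

```python
import numpy as np

# Sketch of the Poisson-noise experiment: draw log-normal "soft" rates,
# map them to "hard" rates through a known linear FFP, Poisson-ize the
# counts for a given exposure, and re-fit the FFP.

rng = np.random.default_rng(2)
A_true, C_true = 0.10, 0.02          # intrinsic linear FFP: y = A*x + C

def fit_ffp(exposure, n=500):
    """Simulate one FFP for a given exposure (s) and return [slope, const]."""
    x_true = rng.lognormal(mean=0.0, sigma=0.4, size=n)   # 3-4 keV rates
    y_true = A_true * x_true + C_true                     # 25-40 keV rates
    x = rng.poisson(x_true * exposure) / exposure         # Poisson-ized rates
    y = rng.poisson(y_true * exposure) / exposure
    return np.polyfit(x, y, 1)

a_short, c_short = fit_ffp(exposure=100.0)      # few counts per bin
a_long, c_long = fit_ffp(exposure=20000.0)      # >> 200 counts per bin
print(a_short, a_long)    # the long exposure recovers A_true accurately
```

With few counts per bin, the best-fit slope and constant can deviate from the intrinsic values, which is the bias examined in this Appendix.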
Figure \[fig:T100\] shows the best fit $A_{\rm L}$ and $C_{\rm L}$ values (top and bottom panels, respectively), as a function of the square root of the average counts in the (simulated) 25–40 keV band light curve. The two panels in Fig. \[fig:T100\] show that we can retrieve the intrinsic $A$ and $C$ values (indicated by the horizontal line in both panels) only when the average counts in the light curve is at least $\sim 200$. The mean count rate in the 3–4 keV [*NuSTAR*]{} band is ten times larger than the mean count rate in the 25–40 keV band (see Fig.\[figapp:nustarFFPs\]). In fact, the average counts in this band is larger than 200 even if we bin the data into 1ks bins. For that reason, the best-fit $A_{\rm L}$ and $C_{\rm L}$ values are consistent (within the error), irrespective of the bin size of the 25–40 keV light curves. However, they approach the intrinsic values only when the average counts in the 25–40keV band light curves reaches the limit of 200.
[@Kam15] suggested the use of light curves with the shortest possible bin size in order to recover the intrinsic FFP shape in the case of highly variable sources and an intrinsically non-linear relation. We show here that, in doing so, particular care should be taken regarding Poisson noise effects, which can affect the observed shape of the FFPs. Although it is usually assumed that the Poisson distribution approaches the Gaussian distribution when the mean (i.e. the average counts) is $\sim 20-50$, our results indicate that this assumption is not enough to guarantee the correct estimation of the model parameters when fitting FFPs. We suspect that the reason lies in the nature of the intrinsic count rate distribution. As indicated by the plots in Appendix\[app:plots\], there is usually a small number of high flux points, which can span a large range in fluxes. It appears that a truly large number of counts per bin is necessary to guarantee a good approximation to a (symmetric) Gaussian, so as not to bias the best-fit results to the FFPs towards steeper (than intrinsic) slopes (and hence smaller constants).
[We considered light curves affected by Poisson noise, because this is usually the case with X–ray light curves, such as the [*NuSTAR*]{} light curves. Our conclusions should be largely unaffected by the nature of the experimental noise (be it Poissonian or not): as long as the (mean) signal-to-noise ratio of the observed light curves, defined for example as the ratio of the mean over the mean error, is larger than $\sqrt{N}\simeq 14$ (for $N=200$ counts per bin), then the resulting FFPs should not be affected by the observational noise bias.]{}
Plots {#app:plots}
=====
[![Similar to Fig.\[figapp:commFFPs\] but for the [*NuSTAR*]{}-only FFPs, in the energy range 10–40keV.[]{data-label="figapp:nustarFFPs"}](plots/highE-FFPs/Nall-mpfit-10-12.eps "fig:"){width="23.50000%"}]{} [![Similar to Fig.\[figapp:commFFPs\] but for the [*NuSTAR*]{}-only FFPs, in the energy range 10–40keV.[]{data-label="figapp:nustarFFPs"}](plots/highE-FFPs/Nall-mpfit-12-15.eps "fig:"){width="23.50000%"}]{} [![Similar to Fig.\[figapp:commFFPs\] but for the [*NuSTAR*]{}-only FFPs, in the energy range 10–40keV.[]{data-label="figapp:nustarFFPs"}](plots/highE-FFPs/Nall-mpfit-15-20.eps "fig:"){width="23.50000%"}]{} [![Similar to Fig.\[figapp:commFFPs\] but for the [*NuSTAR*]{}-only FFPs, in the energy range 10–40keV.[]{data-label="figapp:nustarFFPs"}](plots/highE-FFPs/Nall-mpfit-20-25.eps "fig:"){width="23.50000%"}]{} [![Similar to Fig.\[figapp:commFFPs\] but for the [*NuSTAR*]{}-only FFPs, in the energy range 10–40keV.[]{data-label="figapp:nustarFFPs"}](plots/highE-FFPs/Nall-mpfit-25-40.eps "fig:"){width="23.50000%"}]{}
Tables {#app:tables}
======
------------------ ------ --------------------------- --------------------------------------- -----------------------------------
Energy Band Int. $ A_{\rm L} $ $ C_{\rm L} $ $ \chi^2/{\rm d.o.f.} $
\[0.2cm\] (keV) $ ({\rm Count\,s^{-1}}) $
\[-0.1cm\] 4 – 5 1 $ 0.66 \pm 0.02 $ $ 0.01 \pm 0.03 $ $ 43 / 39 $
\[0.2cm\] 2 $ 0.59 \pm 0.01 $ $ 0.08 \pm 0.02 $ $ 85 / 82 $
\[0.2cm\] 3 $ 0.65 \pm 0.01 $ $ 0.01 \pm 0.01 $ $ 128 / 127 $
\[0.2cm\] 4 $ 0.55 \pm 0.03 $ $ 0.10 \pm 0.02 $ $ 64 / 46 $
\[0.2cm\]
\[-0.2cm\] mean $ 0.625 \pm 0.006 $ $ 0.037 \pm 0.009 $ $ - $
\[0.2cm\] all $ 0.622 \pm 0.006 $ $ 0.044 \pm 0.006 $ $ 330 / 300 $
\[0.2cm\] 5 – 6 1 $ 0.40 \pm 0.02 $ $ 0.06 \pm 0.02 $ $ 54 / 39 $
\[0.2cm\] 2 $ 0.39 \pm 0.01 $ $ 0.08 \pm 0.02 $ $ 98 / 82 $
\[0.2cm\] 3 $ 0.43 \pm 0.01 $ $ 0.03 \pm 0.01 $ $ 115 / 127 $
\[0.2cm\] 4 $ 0.39 \pm 0.02 $ $ 0.08 \pm 0.02 $ $ 62 / 46 $
\[0.2cm\]
\[-0.2cm\] mean $ 0.406 \pm 0.005 $ $ 0.049 \pm 0.007 $ $ - $
\[0.2cm\] all $ 0.406 \pm 0.005 $ $ 0.056 \pm 0.005 $ $ 337 / 300 $
\[0.2cm\] 6 – 7 1 $ 0.24 \pm 0.01 $ $ 0.11 \pm 0.02 $ $ 35 / 39 $
\[0.2cm\] 2 $ 0.25 \pm 0.01 $ $ 0.09 \pm 0.01 $ $ 108 / 82 $
\[0.2cm\] 3 $ 0.27 \pm 0.01 $ $ 0.06 \pm 0.01 $ $ 142 / 127 $
\[0.2cm\] 4 $ 0.23 \pm 0.02 $ $ 0.10 \pm 0.01 $ $ 63 / 46 $
\[0.2cm\]
\[-0.2cm\] mean $ 0.257 \pm 0.005 $ $ 0.079 \pm 0.006 $ $ - $
\[0.2cm\] all $ 0.264 \pm 0.004 $ $ 0.075 \pm 0.004 $ $ 325 / 300 $
\[0.2cm\] 7 – 8 1 $ 0.15 \pm 0.01 $ $ 0.03 \pm 0.01 $ $ 49 / 39 $
\[0.2cm\] 2 $ 0.16 \pm 0.01 $ $ 0.03 \pm 0.01 $ $ 106 / 82 $
\[0.2cm\] 3 $ 0.15 \pm 0.01 $ $ 0.02 \pm 0.01 $ $ 126 / 127 $
\[0.2cm\] 4 $ 0.17 \pm 0.01 $ $ 0.02 \pm 0.01 $ $ 40 / 46 $
\[0.2cm\]
\[-0.2cm\] mean $ 0.155 \pm 0.004 $ $ 0.023 \pm 0.004 $ $ - $
\[0.2cm\] all $ 0.147 \pm 0.003 $ $ 0.031 \pm 0.003 $ $ 316 / 300 $
\[0.2cm\] 8 – 10 1 $ 0.12 \pm 0.01 $ $ 0.04 \pm 0.01 $ $ 42 / 39 $
\[0.2cm\] 2 $ 0.13 \pm 0.01 $ $ 0.03 \pm 0.01 $ $ 124 / 82 $
\[0.2cm\] 3 $ 0.15 \pm 0.01 $ $ 0.013 \pm 0.005 $ $ 188 / 127 $
\[0.2cm\] 4 $ 0.13 \pm 0.01 $ $ 0.04 \pm 0.01 $ $ 47 / 46 $
\[0.2cm\]
\[-0.2cm\] mean $ 0.136 \pm 0.004 $ $ 0.023 \pm 0.004 $ $ - $
\[0.2cm\] all $ 0.130 \pm 0.003 $ $ 0.037 \pm 0.003 $ $ 344 / 300 $
\[0.2cm\]
------------------ ------ --------------------------- --------------------------------------- -----------------------------------
: Results from the linear model best-fits to the individual and combined [*XMM-Newton*]{} high-energy FFPs.
\[table:XMM-highE\]
------------------ ------ --------------------------- --------------------------------------- -----------------------------------
Energy Band Int. $ A_{\rm L} $ $ C_{\rm L} $ $ \chi^2/{\rm d.o.f.} $
\[0.2cm\] (keV) $ ({\rm Count\,s^{-1}}) $
\[-0.1cm\] 4 – 5 1 $ 1.04 \pm 0.08 $ $ 0.01 \pm 0.03 $ $ 29 / 20 $
\[0.2cm\] 2 $ 1.02 \pm 0.05 $ $ 0.02 \pm 0.03 $ $ 40 / 42 $
\[0.2cm\] 3 $ 1.04 \pm 0.05 $ $ 0.02 \pm 0.01 $ $ 87 / 68 $
\[0.2cm\] 4 $ 0.93 \pm 0.10 $ $ 0.05 \pm 0.02 $ $ 27 / 24 $
\[0.2cm\]
\[-0.2cm\] mean $ 1.025 \pm 0.031 $ $ 0.021 \pm 0.011 $ $ - $
\[0.2cm\] all $ 1.020 \pm 0.020 $ $ 0.021 \pm 0.007 $ $ 162 / 160 $
\[0.2cm\] 5 – 6 1 $ 0.75 \pm 0.06 $ $ 0.10 \pm 0.03 $ $ 17 / 20 $
\[0.2cm\] 2 $ 0.86 \pm 0.05 $ $ 0.05 \pm 0.03 $ $ 45 / 42 $
\[0.2cm\] 3 $ 0.88 \pm 0.04 $ $ 0.04 \pm 0.01 $ $ 60 / 68 $
\[0.2cm\] 4 $ 0.84 \pm 0.09 $ $ 0.06 \pm 0.02 $ $ 31 / 24 $
\[0.2cm\]
\[-0.2cm\] mean $ 0.848 \pm 0.027 $ $ 0.055 \pm 0.010 $ $ - $
\[0.2cm\] all $ 0.863 \pm 0.018 $ $ 0.052 \pm 0.007 $ $ 136 / 160 $
\[0.2cm\] 6 – 7 1 $ 0.61 \pm 0.06 $ $ 0.12 \pm 0.02 $ $ 24 / 20 $
\[0.2cm\] 2 $ 0.71 \pm 0.04 $ $ 0.06 \pm 0.02 $ $ 42 / 42 $
\[0.2cm\] 3 $ 0.79 \pm 0.04 $ $ 0.03 \pm 0.01 $ $ 72 / 68 $
\[0.2cm\] 4 $ 0.58 \pm 0.08 $ $ 0.10 \pm 0.02 $ $ 31 / 24 $
\[0.2cm\]
\[-0.2cm\] mean $ 0.716 \pm 0.024 $ $ 0.063 \pm 0.009 $ $ - $
\[0.2cm\] all $ 0.724 \pm 0.016 $ $ 0.061 \pm 0.006 $ $ 162 / 160 $
\[0.2cm\] 7 – 8 1 $ 0.53 \pm 0.05 $ $ 0.03 \pm 0.02 $ $ 31 / 20 $
\[0.2cm\] 2 $ 0.55 \pm 0.03 $ $ 0.02 \pm 0.01 $ $ 64 / 42 $
\[0.2cm\] 3 $ 0.56 \pm 0.03 $ $ 0.03 \pm 0.01 $ $ 82 / 68 $
\[0.2cm\] 4 $ 0.67 \pm 0.07 $ $ 0.001 \pm 0.02 $ $ 22 / 24 $
\[0.2cm\]
\[-0.2cm\] mean $ 0.563 \pm 0.020 $ $ 0.023 \pm 0.008 $ $ - $
\[0.2cm\] all $ 0.536 \pm 0.013 $ $ 0.034 \pm 0.005 $ $ 182 / 160 $
\[0.2cm\] 8 – 10 1 $ 0.73 \pm 0.06 $ $ 0.05 \pm 0.02 $ $ 44 / 20 $
\[0.2cm\] 2 $ 0.79 \pm 0.04 $ $ 0.03 \pm 0.02 $ $ 55 / 42 $
\[0.2cm\] 3 $ 0.87 \pm 0.04 $ $ 0.02 \pm 0.01 $ $ 69 / 68 $
\[0.2cm\] 4 $ 0.88 \pm 0.09 $ $ 0.04 \pm 0.02 $ $ 35 / 24 $
\[0.2cm\]
\[-0.2cm\] mean $ 0.822 \pm 0.026 $ $ 0.034 \pm 0.010 $ $ - $
\[0.2cm\] all $ 0.765 \pm 0.017 $ $ 0.060 \pm 0.006 $ $ 187 / 160 $
\[0.2cm\]
------------------ ------ --------------------------- --------------------------------------- -----------------------------------
: Similar to Table \[table:XMM-highE\] but for [*NuSTAR*]{}.
\[table:Nustar-highE\]
-------------------- ------ --------------------------- --------------------------------------- -----------------------------------
Energy Band Int. $ A_{\rm L} $ $ C_{\rm L} $ $ \chi^2/{\rm d.o.f.} $
\[0.2cm\] (keV) $ ({\rm Count\,s^{-1}}) $
\[-0.1cm\] 10 – 12 1 $ 0.36 \pm 0.04 $ $ 0.07 \pm 0.02 $ $ 12 / 20 $
\[0.2cm\] 2 $ 0.48 \pm 0.03 $ $ 0.01 \pm 0.01 $ $ 43 / 42 $
\[0.2cm\] 3 $ 0.47 \pm 0.03 $ $ 0.04 \pm 0.01 $ $ 74 / 68 $
\[0.2cm\] 4 $ 0.53 \pm 0.07 $ $ 0.03 \pm 0.02 $ $ 29 / 24 $
\[0.2cm\]
\[-0.2cm\] mean $ 0.462 \pm 0.018 $ $ 0.036 \pm 0.007 $ $ - $
\[0.2cm\] all $ 0.435 \pm 0.012 $ $ 0.047 \pm 0.005 $ $ 161 / 160 $
\[0.2cm\] 12 – 15 1 $ 0.26 \pm 0.04 $ $ 0.08 \pm 0.01 $ $ 24 / 20 $
\[0.2cm\] 2 $ 0.34 \pm 0.02 $ $ 0.04 \pm 0.02 $ $ 55 / 42 $
\[0.2cm\] 3 $ 0.35 \pm 0.02 $ $ 0.040 \pm 0.008 $ $ 71 / 68 $
\[0.2cm\] 4 $ 0.51 \pm 0.06 $ $ 0.02 \pm 0.02 $ $ 18 / 24 $
\[0.2cm\]
\[-0.2cm\] mean $ 0.347 \pm 0.016 $ $ 0.044 \pm 0.006 $ $ - $
\[0.2cm\] all $ 0.338 \pm 0.011 $ $ 0.052 \pm 0.004 $ $ 180 / 160 $
\[0.2cm\] 15 – 20 1 $ 0.24 \pm 0.04 $ $ 0.06 \pm 0.01 $ $ 33 / 20 $
\[0.2cm\] 2 $ 0.30 \pm 0.02 $ $ 0.03 \pm 0.01 $ $ 52 / 42 $
\[0.2cm\] 3 $ 0.34 \pm 0.02 $ $ 0.025 \pm 0.008 $ $ 82 / 68 $
\[0.2cm\] 4 $ 0.37 \pm 0.06 $ $ 0.04 \pm 0.01 $ $ 26 / 24 $
\[0.2cm\]
\[-0.2cm\] mean $ 0.310 \pm 0.015 $ $ 0.032 \pm 0.006 $ $ - $
\[0.2cm\] all $ 0.270 \pm 0.009 $ $ 0.051 \pm 0.004 $ $ 182 / 160 $
\[0.2cm\] 20 – 25 1 $ 0.07 \pm 0.03 $ $ 0.04 \pm 0.01 $ $ 8 / 5 $
\[0.2cm\] 2 $ 0.15 \pm 0.02 $ $ 0.01 \pm 0.01 $ $ 11 / 12 $
\[0.2cm\] 3 $ 0.13 \pm 0.02 $ $ 0.020 \pm 0.006 $ $ 39 / 21 $
\[0.2cm\] 4 $ 0.15 \pm 0.04 $ $ 0.02 \pm 0.01 $ $ 2 / 6 $
\[0.2cm\]
\[-0.2cm\] mean $ 0.130 \pm 0.010 $ $ 0.019 \pm 0.004 $ $ - $
\[0.2cm\] all $ 0.120 \pm 0.006 $ $ 0.022 \pm 0.002 $ $ 83 / 50 $
\[0.2cm\] 25 – 40 1 $ 0.06 \pm 0.03 $ $ 0.011 \pm 0.01 $ $ 6 / 5 $
\[0.2cm\] 2 $ 0.09 \pm 0.02 $ $ -0.002 \pm 0.01 $ $ 26 / 12 $
\[0.2cm\] 3 $ 0.11 \pm 0.02 $ $ -0.005 \pm 0.006 $ $ 21 / 21 $
\[0.2cm\] 4 $ 0.07 \pm 0.05 $ $ 0.02 \pm 0.01 $ $ 5 / 6 $
\[0.2cm\]
\[-0.2cm\] mean $ 0.090 \pm 0.010 $ $ 0.019 \pm 0.004 $ $ - $
\[0.2cm\] all $ 0.075 \pm 0.006 $ $ 0.008 \pm 0.003 $ $ 73 / 50 $
\[0.2cm\]
-------------------- ------ --------------------------- --------------------------------------- -----------------------------------
\[table:cont-Nustar-highE\]
---------------------- ------ --------------------------- --------------------------- --------------------------------------- -----------------------------------
Energy Band Int. $ A_{\rm PLc} $ $ \beta$ $ C_{\rm PLc} $ $ \chi^2/{\rm d.o.f.} $
\[0.2cm\] (keV) $ $ $ ({\rm Count\,s^{-1}}) $
\[-0.1cm\] 0.3 – 0.4 1 $ 1.39 \pm 0.21 $ $ 1.34 \pm 0.15 $ $ 1.30 \pm 0.21 $ $ 968 / 40 $
\[0.2cm\] 2 $ 1.97 \pm 0.36 $ $ 0.93 \pm 0.11 $ $ 0.84 \pm 0.39 $ $ 1634 / 83 $
\[0.2cm\] 3 $ 1.18 \pm 0.06 $ $ 1.44 \pm 0.07 $ $ 1.14 \pm 0.05 $ $ 1651 / 128 $
\[0.2cm\] 4 $ 1.36 \pm 0.15 $ $ 1.49 \pm 0.27 $ $ 0.79 \pm 0.15 $ $ 188 / 47 $
\[0.2cm\]
\[-0.2cm\] mean $ 1.47 \pm 0.17 $ $ 1.30 \pm 0.13 $ $ 1.02 \pm 0.12 $ $ - $
\[0.2cm\] all $ 2.17 \pm 0.04 $ $ 1.01 \pm 0.02 $ $ 0.25 \pm 0.04 $ $ 6674 / 304 $
\[0.2cm\] 0.4 – 0.5 1 $ 1.67 \pm 0.22 $ $ 1.35 \pm 0.14 $ $ 1.32 \pm 0.22 $ $ 1166 / 40 $
\[0.2cm\] 2 $ 1.860 \pm 0.27 $ $ 1.11 \pm 0.10 $ $ 1.26 \pm 0.29 $ $ 2151 / 83 $
\[0.2cm\] 3 $ 1.45 \pm 0.05 $ $ 1.49 \pm 0.06 $ $ 1.15 \pm 0.05 $ $ 2156 / 128 $
\[0.2cm\] 4 $ 1.68 \pm 0.14 $ $ 1.52 \pm 0.22 $ $ 0.71 \pm 0.15 $ $ 296 / 47 $
\[0.2cm\]
\[-0.2cm\] mean $ 1.66 \pm 0.08 $ $ 1.37 \pm 0.09 $ $ 1.11 \pm 0.14 $ $ - $
\[0.2cm\] all $ 2.63 \pm 0.05 $ $ 1.01 \pm 0.02 $ $ 0.07 \pm 0.04 $ $ 8119 / 304 $
\[0.2cm\] 0.5 – 0.6 1 $ 1.23 \pm 0.14 $ $ 1.63 \pm 0.14 $ $ 1.26 \pm 0.14 $ $ 1080 / 40 $
\[0.2cm\] 2 $ 1.90 \pm 0.29 $ $ 1.04 \pm 0.10 $ $ 0.75 \pm 0.31 $ $ 1906 / 83 $
\[0.2cm\] 3 $ 1.33 \pm 0.05 $ $ 1.52 \pm 0.06 $ $ 0.86 \pm 0.05 $ $ 2063 / 128 $
\[0.2cm\] 4 $ 1.47 \pm 0.12 $ $ 1.61 \pm 0.23 $ $ 0.54 \pm 0.12 $ $ 304 / 47 $
\[0.2cm\]
\[-0.2cm\] mean $ 1.48 \pm 0.15 $ $ 1.45 \pm 0.14 $ $ 0.85 \pm 0.15 $ $ - $
\[0.2cm\] all $ 2.33 \pm 0.04 $ $ 1.04 \pm 0.02 $ $ -0.07 \pm 0.04 $ $ 7435 / 304 $
\[0.2cm\] 0.6 – 0.7 1 $ 1.12 \pm 0.15 $ $ 1.50 \pm 0.15 $ $ 0.82 \pm 0.15 $ $ 988 / 40 $
\[0.2cm\] 2 $ 1.21 \pm 0.18 $ $ 1.24 \pm 0.11 $ $ 0.91 \pm 0.20 $ $ 1524 / 83 $
\[0.2cm\] 3 $ 1.21 \pm 0.05 $ $ 1.40 \pm 0.06 $ $ 0.54 \pm 0.05 $ $ 1728 / 128 $
\[0.2cm\] 4 $ 1.68 \pm 0.25 $ $ 1.04 \pm 0.22 $ $ -0.11 \pm 0.26 $ $ 293 / 47 $
\[0.2cm\]
\[-0.2cm\] mean $ 1.31 \pm 0.13 $ $ 1.30 \pm 0.10 $ $ 0.54 \pm 0.23 $ $ - $
\[0.2cm\] all $ 1.99 \pm 0.04 $ $ 1.01 \pm 0.02 $ $ -0.20 \pm 0.04 $ $ 6072 / 304 $
\[0.2cm\] 0.7 – 0.8 1 $ 0.73 \pm 0.12 $ $ 1.52 \pm 0.17 $ $ 0.50 \pm 0.11 $ $ 489 / 40 $
\[0.2cm\] 2 $ 1.08 \pm 0.20 $ $ 1.06 \pm 0.13 $ $ 0.22 \pm 0.22 $ $ 1036 / 83 $
\[0.2cm\] 3 $ 0.76 \pm 0.03 $ $ 1.52 \pm 0.08 $ $ 0.39 \pm 0.03 $ $ 918 / 128 $
\[0.2cm\] 4 $ 0.96 \pm 0.11 $ $ 1.40 \pm 0.26 $ $ 0.09 \pm 0.11 $ $ 191 / 47 $
\[0.2cm\]
\[-0.2cm\] mean $ 0.88 \pm 0.08 $ $ 1.37 \pm 0.11 $ $ 0.30 \pm 0.09 $ $ - $
\[0.2cm\] all $ 1.29 \pm 0.03 $ $ 1.02 \pm 0.02 $ $ -0.12 \pm 0.03 $ $ 3435 / 304 $
\[0.2cm\]
---------------------- ------ --------------------------- --------------------------- --------------------------------------- -----------------------------------
\[table:XMM-lowE\]
---------------------- ------ --------------------------- --------------------------- --------------------------------------- -----------------------------------
Energy Band Int. $ A_{\rm PLc} $ $ \beta $ $ C_{\rm PLc} $ $ \chi^2/{\rm d.o.f.} $
\[0.2cm\] (keV) $ $ $ ({\rm Count\,s^{-1}}) $
\[-0.1cm\] 0.8 – 0.9 1 $ 0.83 \pm 0.16 $ $ 1.24 \pm 0.19 $ $ 0.23 \pm 0.16 $ $ 283 / 40 $
\[0.2cm\] 2 $ 0.81 \pm 0.15 $ $ 1.16 \pm 0.13 $ $ 0.27 \pm 0.17 $ $ 608 / 83 $
\[0.2cm\] 3 $ 0.67 \pm 0.04 $ $ 1.36 \pm 0.08 $ $ 0.31 \pm 0.04 $ $ 602 / 128 $
\[0.2cm\] 4 $ 0.74 \pm 0.07 $ $ 1.65 \pm 0.29 $ $ 0.16 \pm 0.08 $ $ 175 / 47 $
\[0.2cm\]
\[-0.2cm\] mean $ 0.76 \pm 0.04 $ $ 1.35 \pm 0.11 $ $ 0.24 \pm 0.03 $ $ - $
\[0.2cm\] all $ 1.05 \pm 0.03 $ $ 1.04 \pm 0.02 $ $ -0.07 \pm 0.03 $ $ 2415 / 304 $
\[0.2cm\] 0.9 – 1 1 $ 0.83 \pm 0.17 $ $ 1.17 \pm 0.19 $ $ 0.16 \pm 0.17 $ $ 211 / 40 $
\[0.2cm\] 2 $ 0.84 \pm 0.15 $ $ 1.12 \pm 0.13 $ $ 0.12 \pm 0.17 $ $ 452 / 83 $
\[0.2cm\] 3 $ 0.65 \pm 0.04 $ $ 1.34 \pm 0.08 $ $ 0.26 \pm 0.03 $ $ 610 / 128 $
\[0.2cm\] 4 $ 0.74 \pm 0.08 $ $ 1.57 \pm 0.29 $ $ 0.10 \pm 0.08 $ $ 198 / 47 $
\[0.2cm\]
\[-0.2cm\] mean $ 0.77 \pm 0.04 $ $ 1.30 \pm 0.10 $ $ 0.16 \pm 0.03 $ $ - $
\[0.2cm\] all $ 1.00 \pm 0.03 $ $ 1.05 \pm 0.02 $ $ -0.08 \pm 0.02 $ $ 2127 / 304 $
\[0.2cm\] 1 – 1.3 1 $ 2.35 \pm 0.31 $ $ 1.11 \pm 0.12 $ $ 0.19 \pm 0.30 $ $ 409 / 40 $
\[0.2cm\] 2 $ 2.08 \pm 0.21 $ $ 1.20 \pm 0.08 $ $ 0.40 \pm 0.24 $ $ 946 / 83 $
\[0.2cm\] 3 $ 1.94 \pm 0.07 $ $ 1.23 \pm 0.05 $ $ 0.48 \pm 0.06 $ $ 1018 / 128 $
\[0.2cm\] 4 $ 2.31 \pm 0.16 $ $ 1.38 \pm 0.16 $ $ -0.02 \pm 0.17 $ $ 412 / 47 $
\[0.2cm\]
\[-0.2cm\] mean $ 2.17 \pm 0.10 $ $ 1.23 \pm 0.06 $ $ 0.26 \pm 0.11 $ $ - $
\[0.2cm\] all $ 2.68 \pm 0.04 $ $ 1.04 \pm 0.01 $ $ -0.27 \pm 0.04 $ $ 4015 / 304 $
\[0.2cm\] 1.3 – 1.6 1 $ 1.70 \pm 0.25 $ $ 1.17 \pm 0.13 $ $ 0.30 \pm 0.24 $ $ 296 / 40 $
\[0.2cm\] 2 $ 2.06 \pm 0.26 $ $ 1.03 \pm 0.09 $ $ -0.10 \pm 0.29 $ $ 543 / 83 $
\[0.2cm\] 3 $ 1.51 \pm 0.05 $ $ 1.33 \pm 0.05 $ $ 0.41 \pm 0.05 $ $ 797 / 128 $
\[0.2cm\] 4 $ 2.40 \pm 0.36 $ $ 0.90 \pm 0.19 $ $ -0.57 \pm 0.37 $ $ 304 / 47 $
\[0.2cm\]
\[-0.2cm\] mean $ 1.92 \pm 0.20 $ $ 1.11 \pm 0.09 $ $ 0.01 \pm 0.22 $ $ - $
\[0.2cm\] all $ 2.11 \pm 0.04 $ $ 1.03 \pm 0.02 $ $ -0.19 \pm 0.04 $ $ 2447 / 304 $
\[0.2cm\] 1.6 – 2 1 $ 1.72 \pm 0.27 $ $ 1.09 \pm 0.14 $ $ 0.11 \pm 0.27 $ $ 189 / 40 $
\[0.2cm\] 2 $ 1.93 \pm 0.26 $ $ 1.03 \pm 0.09 $ $ -0.11 \pm 0.28 $ $ 393 / 83 $
\[0.2cm\] 3 $ 1.60 \pm 0.06 $ $ 1.21 \pm 0.05 $ $ 0.22 \pm 0.06 $ $ 549 / 128 $
\[0.2cm\] 4 $ 3.32 \pm 0.95 $ $ 0.55 \pm 0.19 $ $ -1.58 \pm 0.95 $ $ 255 / 47 $
\[0.2cm\]
\[-0.2cm\] mean $ 2.14 \pm 0.40 $ $ 0.97 \pm 0.14 $ $ -0.34 \pm 0.42 $ $ - $
\[0.2cm\] all $ 2.04 \pm 0.04 $ $ 0.99 \pm 0.02 $ $ -0.23 \pm 0.04 $ $ 1631 / 304 $
\[0.2cm\] 2 – 3 1 $ 1.97 \pm 0.31 $ $ 1.03 \pm 0.13 $ $ -0.02 \pm 0.31 $ $ 149 / 40 $
\[0.2cm\] 2 $ 1.27 \pm 0.15 $ $ 1.33 \pm 0.09 $ $ 0.75 \pm 0.17 $ $ 384 / 83 $
\[0.2cm\] 3 $ 1.74 \pm 0.06 $ $ 1.16 \pm 0.05 $ $ 0.18 \pm 0.06 $ $ 420 / 128 $
\[0.2cm\] 4 $ 2.51 \pm 0.95 $ $ 0.55 \pm 0.19 $ $ -1.58 \pm 0.95 $ $ 255 / 47 $
\[0.2cm\]
\[-0.2cm\] mean $ 1.88 \pm 0.26 $ $ 1.08 \pm 0.11 $ $ 0.08 \pm 0.28 $ $ - $
\[0.2cm\] all $ 2.08 \pm 0.04 $ $ 0.99 \pm 0.02 $ $ -0.15 \pm 0.04 $ $ 1170 / 304 $
\[0.2cm\]
---------------------- ------ --------------------------- --------------------------- --------------------------------------- -----------------------------------
The effects of the warm absorber on the FFPs {#app:warmabs}
============================================
In order to investigate the effect of a variable warm absorber on the low-energy FFPs, we created simulated spectra using the XSPEC command [FAKEIT]{}, and the EPIC-pn responses, assuming the following model (in [XSPEC]{} terminology): $${\tt model = TBabs \times zxipcf \times powerlaw},$$ where [TBabs]{} [@tbabs] and [zxipcf]{} [@zxipcf] account for the Galactic and the warm absorption, respectively. [powerlaw]{} varied in normalization ($N_{\rm PL}$) only, with $\Gamma$ fixed at 2.03. $N_{\rm PL}$ varied between $N_{\rm PL,min}$ and $N_{\rm PL, max}$, so that the respective model count rates in the 3–4 keV band were equal to the minimum/maximum observed count rates in the same band. As for [zxipcf]{}, we fixed $N_{\rm H}$ at $2 \times 10^{22}\,{\rm cm^{-2}}$ and we considered three different values for the covering fraction (CF): 0.4, 0.6 and 0.8. We assumed that the ionization parameter ($\xi$) is linearly proportional to the primary flux, as: $\log \xi = \log N_{\rm PL} + 2.97$. The constant was chosen so that the model count rate in the 0.6–0.7 keV band (when CF=0.6 and $N_{\rm PL}=N_{\rm PL,max}$) is equal to the observed largest value. Given the $N_{\rm PL,min}-N_{\rm PL,max}$ range, $\log \xi$ varied between 0.85 and 1.55.
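The assumed flux–ionization scaling can be written down directly. In the sketch below the $N_{\rm PL}$ endpoints are back-computed from the quoted $\log \xi$ range (0.85–1.55), so they are illustrative only and not the actual fitted normalizations:

```python
import numpy as np

# Assumed scaling between the power-law normalization and the warm-absorber
# ionization parameter: log(xi) = log(N_PL) + 2.97.
# The N_PL endpoints are back-computed from the quoted log(xi) range
# (0.85 to 1.55); they are illustrative, not the actual fitted values.
N_PL = np.logspace(0.85 - 2.97, 1.55 - 2.97, 10)   # 10 values of N_PL
log_xi = np.log10(N_PL) + 2.97                     # runs from 0.85 to 1.55
```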
To construct the model FFPs, we estimated the model count rate in the reference and the low-energy bands, assuming 10 different values of $N_{\rm PL}$ (between $N_{\rm PL,min}$ and $N_{\rm PL, max}$). Then we fitted them with a PLc model, exactly as we did with the observed FFPs. The best-fit simulated PLc parameters are plotted as empty symbols in Fig.\[figapp:warmabs\].
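A PLc fit of the kind just described, $y = A\,x^{\beta} + C$, can be sketched as follows. The data here are synthetic and noiseless, generated from made-up parameters purely to show the fitting step (`scipy` is assumed to be available):

```python
import numpy as np
from scipy.optimize import curve_fit

# Power-law-plus-constant (PLc) model used for the FFPs:
# y = A * x**beta + C, with x the reference-band count rate.
def plc(x, A, beta, C):
    return A * x ** beta + C

x = np.linspace(0.5, 2.0, 10)             # synthetic reference-band rates
y = plc(x, 1.2, 1.4, 0.3)                 # noiseless fake FFP data
popt, pcov = curve_fit(plc, x, y, p0=(1.0, 1.0, 0.0))
A_fit, beta_fit, C_fit = popt             # recovers (1.2, 1.4, 0.3)
```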
In general, the assumed variable warm absorber model results in FFPs which are, qualitatively, similar to the observed plots. In all cases, the $\beta_{\rm PLc,\, sim}$'s are steeper than one, as observed. Therefore, a variable warm absorber can produce non-linear FFPs, with slopes steeper than one. In addition, variable warm absorption can also result in non-zero, positive constants. However, at all energies below 1 keV, the value of $C_{\rm PLc,\, sim}$ is considerably smaller than $C_{\rm PLc,\, obs}$. We also tried different $N_{\rm H}$ and/or CF values, and we saw that in some cases, a variable warm absorber model may even result in negative $C_{\rm PLc,\, sim}$ in the FFPs. In this case, the amplitude of the intrinsic constant spectral component will be larger than what the $C_{\rm PLc,\, obs}$'s imply.
\[lastpage\]
[^1]: E-mail: <ekammoun@sissa.it>
[^2]: <http://nxsa.esac.esa.int/nxsa-web>
[^3]: <http://code.google.com/p/astrolibpy/source/browse/trunk/>
[^4]: The difference between the best-fit $\Gamma_{\rm X}$ and $\Gamma_{\rm N}$ slopes ($\Delta \Gamma = 0.14 \pm 0.03$) should be representative of the inter-calibration uncertainties between EPIC-pn and FPMA/B. For example, the difference we observe is consistent with the $\Delta\Gamma$ differences between the two instruments that [@Madsen2015] reported.
[^5]: Due to the large $\chi^2$ values, the error of the best-fit parameters does not represent their real uncertainty. For that reason we considered the arithmetic mean values of the best-fit parameters for the low-energy FFPs.
|
---
abstract: 'Ni$_{50}$Mn$_{34}$In$_{16}$ undergoes a martensitic transformation around 250 K and exhibits a field-induced reverse martensitic transformation and substantial magnetocaloric effects. We substitute small amounts of isoelectronic Ga for In to bring these technically important properties close to room temperature by shifting the martensitic transformation temperature.'
author:
- 'Seda Aksoy, Thorsten Krenke[@AdressKrenke], Mehmet Acet, Eberhard F. Wassermann'
- 'Xavier Moya, Lluís Mañosa, Antoni Planes'
title: Tailoring magnetic and magnetocaloric properties of martensitic transitions in ferromagnetic Heusler alloys
---
There is growing interest in searching for materials other than Ni-Mn-Ga which may have interesting properties concerning applications relevant to magnetic-field-induced strains. Such a search on Ni-Mn based Heusler systems has led to the observation of giant magnetocaloric effects (MCE) [@Hu00; @Marcos02; @Pareti03; @Krenke05b; @Han06; @Sharma07], large strains related to field-induced transformations, and substantial contributions to the understanding of martensitic transformations in ferromagnetic Heusler materials. The valence electron concentration ($e/a$) dependence of $M_s$ in NiMn$X$ is linear, but with a different slope for each $X$-species [@Krenke07c]. Therefore, it should be possible to manipulate $M_s$ not only by varying $e/a$, but also by holding $e/a$ constant and replacing one $X$ species with another. In this manner one may have the possibility of shifting and adjusting favorable features occurring around the martensitic transformation of a particular alloy to higher or lower temperatures. Ni$_{50}$Mn$_{34}$In$_{16}$ \[$(e/a)\approx 7.87$\] shows a field-induced reverse martensitic transformation at $M_s\approx 250$ K and, associated with it, a large field-induced strain and a magnetocaloric effect [@Krenke07; @Moya07]. In view of the technical interest, it would be desirable to shift the transition temperature to around room temperature without altering the favorable features. On the other hand, in view of understanding the electronic properties of such systems close to the martensitic transformation, it would be interesting to understand to what extent the valence electron concentration can be employed as a meaningful parameter. To test this possibility, we substitute 2% Ga for In in Ni$_{50}$Mn$_{34}$In$_{16}$. From interpolation at constant $(e/a)$, this amount of Ga is expected to shift $M_s$ to around room temperature.
We compare in this study the magnetic and magnetocaloric properties of the isoelectronic compounds Ni$_{50}$Mn$_{34}$In$_{16}$ [@Krenke07] (hereafter Ga0) and Ni$_{50}$Mn$_{34}$In$_{14}$Ga$_{2}$ (hereafter Ga2) and discuss to what extent the features around $M_s$ are preserved. The magnetocaloric properties are studied from entropy-change as well as from direct temperature-change measurements.
![\[MT\] (color online) ZFC, FC, and FH $M(T)$ in 5 mT of a) Ga0 and b) Ga2.](fig01.eps){width="8cm"}
The samples were prepared by arc melting pure metals under argon atmosphere. They were annealed at 1073 K for 2 hours and quenched in ice-water. The compositions of the alloys were determined by energy dispersive x-ray analysis.
Temperature dependent magnetization measurements $M(T)$ were carried out in 5 mT in the temperature range $4<T<400$ K, and magnetization isotherms $M(\mu_0 H)$ around the martensitic transformation were obtained in magnetic-fields up to 5 T using a superconducting quantum interference device magnetometer. The entropy change $\Delta S$ was obtained from the magnetization isotherms, and the direct temperature-change was measured with an adiabatic magneto-calorimeter.
Figures \[MT\]a and \[MT\]b show $M(T)$ in 5 mT taken in a zero-field-cooled (ZFC), field-cooled (FC), and field-heated (FH) sequence for Ga0 and Ga2 respectively. The curves corresponding to the ZFC and FC states for both samples deviate below $T_C^M$, whereas no appreciable deviation is found below $T_C^A$. The deviation below $T_C^M$ is related to the anisotropy that develops in the non-cubic martensitic phase of the alloys, so that cooling in zero-field and cooling in finite field lead to different spin configurations with different $M(T)$. For Ga0, $T_C^A\approx 308$ K, and this decreases to about 293 K for Ga2. On the other hand, $M_s$ increases from about 243 K for Ga0 to about 275 K for Ga2, but the fundamental features of the curve remain similar.
$M(T)$ was also measured in several fields $\mu_0 H\geq 1$ T to compare the field rate of shift of $M_s$, $dM_s/dH$, of both samples. The results are shown in Fig. \[MTht\]a. The heavy lines are drawn through the points joining the onset of decrease in $M(T)$ with decreasing temperature. These mark $M_s$ for each measuring field. The slopes of these lines in these magnetic-field ranges give $dM_s/dH\approx -6$ KT$^{-1}$ and $dM_s/dH\approx -2$ KT$^{-1}$ for Ga0 and Ga2 respectively. $M(T)$ curves in 5 T for the FC and FH states are shown in Fig. \[MTht\]b. The thermal hysteresis for Ga2 narrows with respect to that of Ga0. Furthermore, it is also seen that the magnetization in the martensitic state decreases when Ga is added.
![\[MTht\] $M(T)$ for Ga0 and Ga2 in high fields. a) Field-cooled $M(T)$ for Ga0 and Ga2. b) $M(T)$ for Ga0 and Ga2 in the FC and FH states. The thermal hysteresis is broader in Ga0.](fig02.eps){width="8cm"}
The magnetization isotherms in the vicinity of $M_s$ in Figs. \[MH\]a and \[MH\]b show that the overall magnetization is lower in Ga2 than in Ga0. The data shown with open circles in both figures correspond to $M(\mu_0 H)$ for $T<M_s$ (values printed in italic), and the filled circles correspond to $T>M_s$. The metamagnetic-like character of the feature in $M(\mu_0 H)$ at temperatures $T<M_s$ is associated with a field-induced reverse martensitic transformation. $M(\mu_0 H)$ initially increases with increasing field with decreasing curvature, until it reaches an inflection point at a field corresponding to the onset of the field-induced transformation. Above this point, $M(\mu_0 H)$ begins to increase faster with increasing magnetic-field. For Ga2, the field-induced transformation begins to take place at lower fields than those needed for Ga0, so that the steep rise in $M(\mu_0 H)$ begins already below 1 T. The narrower hysteresis in $M(T)$ for Ga2 compared to the broader hysteresis for Ga0 is the cause for the lower threshold of the transformation in Ga2.
![\[MH\] (color online) Magnetic-field dependence of the magnetization for a) Ga0 and b) Ga2. Open circles (red) and filled circles are data for $T<M_s$ and $T>M_s$ respectively.](fig03.eps){width="8cm"}
Using the data in Fig. \[MH\], the field induced entropy change $\Delta S$ is determined by integrating numerically $\Delta S(T,H) = \mu_0 \int^H_{0} ({\partial M}/{\partial T})_H dH$. $\Delta S(T,H)$ for Ga0 and Ga2 is shown in Figs. \[delS\]a and \[delS\]b. For both samples $\Delta S(T,H)$ is positive below $M_s$ (inverse MCE) and negative around $T_C^A$ (conventional MCE) with the crossover taking place at the temperature corresponding to $M_s$ determined from Figs. \[MT\]a and \[MT\]b. The magnitude of the entropy change below $M_s$ remains nearly unchanged for both samples, with a maximum value of 8 Jkg$^{-1}$K$^{-1}$. Above $M_s$, $\Delta S$ of Ga0 reaches a slightly higher value than that of Ga2, both being about $-$5 Jkg$^{-1}$K$^{-1}$ under 5 T. As expected, both samples cool on applying a magnetic-field below $M_s$ and heat on applying a field around $T_C$ as seen from the results of the direct magnetocaloric measurements in Figs. \[delT\]a and \[delT\]b for both samples. The maxima in $\Delta T$ below $M_s$ are nearly the same for both samples reaching a value of $-$2 K in 5 T. Around $T_C^A$, the maximum value is about 3.5 K for Ga0 and is slightly larger than 2 K for Ga2. This difference is consistent with the difference in $\Delta S$ above $M_s$.
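The numerical integration of the Maxwell relation can be sketched as follows. The magnetization grid here is a toy model ($M = TH$ in arbitrary units), not the measured isotherms; it is chosen only so the result can be checked against the analytic value $\mu_0 H_{\rm max}^2/2$:

```python
import numpy as np

# Maxwell-relation estimate Delta S(T, H_max) = mu0 * int_0^Hmax (dM/dT)_H dH
# from a grid of magnetization isotherms M(T, H).  All quantities are in
# arbitrary units; M = T*H is a toy model, not the measured data.
mu0 = 1.0
T = np.linspace(200.0, 300.0, 21)            # temperature grid
H = np.linspace(0.0, 5.0, 51)                # field grid, 0 .. H_max
M = np.outer(T, H)                           # M[i, j] = M(T_i, H_j)

dM_dT = np.gradient(M, T, axis=0)            # (dM/dT)_H at each grid point
dH = np.diff(H)
integrand = 0.5 * (dM_dT[:, 1:] + dM_dT[:, :-1])
delta_S = mu0 * np.sum(integrand * dH, axis=1)   # trapezoidal rule over H
# For M = T*H this gives mu0 * H_max**2 / 2 = 12.5 at every T.
```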
![\[delS\] Temperature dependence of the entropy change around $M_s$ and $T_C^A$ for a) Ga0 and b) Ga2.](fig04.eps){width="8cm"}
Investigations on quaternary Heusler-based systems have been undertaken previously both to improve material properties and to examine the interplay between magnetic and structural properties around the martensitic transformation [@Khan04; @Khan05; @Gao06; @Xuan07; @Krenke07b; @Kainuma06]. We provide in this study a method based on the varying $e/a$ dependence of $M_s$ for different group III-V elements, by which $M_s$ can be shifted so that favorable properties of a particular alloy can be brought to a desired temperature. Presently, this desired temperature is limited to below room temperature, since $T_C^A$ is limited to about 300-350 K in NiMn-based Heusler alloys. Nevertheless, we find indeed that at constant $e/a$, it is possible to preserve at a high temperature a favorable MCE property of Ni$_{50}$Mn$_{34}$In$_{16}$ occurring at a low temperature by substituting an element isoelectronic to In, namely Ga. The next step would be to devise a method to manipulate $T_C$, such as that involving the addition of small amounts of Co. Some work already gives evidence that replacing Ni with small amounts of Co tends to increase $T_C^A$ [@Yu07]. This would be particularly interesting, e.g., in a sample with about 4 at% Ga, where $M_s$ lies already within the paramagnetic regime of both austenite and martensite phases. By adding Co, such a system would regain its ferromagnetism in the austenitic state.
![\[delT\] Temperature dependence of the measured temperature change around $M_s$ and $T_C^A$ for a) Ga0 and b) Ga2.](fig05.eps){width="8cm"}
We find in the present studies that the maximum absolute values of $\Delta S$ and $\Delta T$ on both sides of $M_s$ for Ga0 and Ga2 are nearly the same. $dM_s/dH$ is smaller for Ga2 than for Ga0, meaning that the MCE in Ga2 should be smaller. It appears that the narrower temperature hysteresis for Ga2 with respect to that of Ga0 facilitates the field induced reverse transformation. As discussed in previous studies, the narrow hysteresis is favorable for a large MCE, and much effort is invested in reducing hysteresis-losses [@Gschneider05; @Provenzano04]. The reduced hysteresis in Ga2 compensates for its lower $dM_s/dH$ as compared to that of Ga0. As can be seen from the fast rise of the magnetization with increasing magnetic-field already in low fields in Fig. \[MH\], a lower threshold field is required for Ga2 than for Ga0 to induce a transformation with an external field.
This work was supported by Deutsche Forschungsgemeinschaft (GK 277 and SPP 1239) and CICyT (Spain), project MAT2007-61200. XM acknowledges support from DGICyT (Spain).
[99]{}
Present address: ThyssenKrupp Electrical Steel, Kurt-Schumacher-Str. 95, D-45881 Gelsenkirchen, Germany
F. Hu, B. Shen, J. Sun, Appl. Phys. Lett. **76**, 3460 (2000).
J. Marcos, A. Planes, L. Mañosa, F. Casanova, X. Batlle, A. Labarta, and B. Martínez, Phys. Rev. B **66**, 224413 (2002).
L. Pareti, M. Solzi, F. Albertini, A. Paoluzi, Eur. Phys. J. B, **32**, 303 (2003).
T. Krenke, M. Acet, E. F. Wassermann, X. Moya, L. Mañosa, A. Planes, Nature Materials **4**, 450 (2005).
Z. D. Han, D. H. Wang, C. L. Zhang, S. L. Tang, B. X. Gu, Y. W. Du, Appl. Phys. Lett. **89**, 182507 (2006).
V. K. Sharma, M. K. Chatttopadhyay, S. B. Roy, J. Phys. D: Appl. Phys **40**, 1869 (2007).
T. Krenke, X. Moya, S. Aksoy, M. Acet, P. Entel, Ll. Mañosa, A. Planes, Y. Elerman, A. Yücel, E.F. Wassermann, J. Magn. Magn. Mater. **310** 2788 (2007).
T. Krenke, E. Duman, M. Acet, E. F. Wassermann, X. Moya, L. Mañosa, A. Planes, E. Suard, B. Ouladdiaf, Phys. Rev. B. **75**, 104414 (2007).
X. Moya, L. Mañosa, A. Planes, S. Aksoy, T. Krenke, M. Acet, E. F. Wassermann, Phys. Rev. B **75**, 184412 (2007).
M. Khan, I. Dubenko, S. Stadler, N. Ali, J. Phys.: Condens. Matter. **16**, 5259 (2004).
M. Khan, I. Dubenko, S. Stadler, N. Ali, J. Appl. Phys. **97**, 10M304 (2005).
L. Gao, W. Cai, A. L. Liu, L. C. Zhao, J. Alloy. Comp. **425**, 314 (2006).
R. Kainuma, Y. Imano, W. Ito, Y. Sutou, H. Morito, S. Okamoto, O. Kitakami, K. Oikawa, A. Fujita, T. Kanomata, and K. Ishida, Nature **439**, 957 (2006).
H. C. Xuan, D. H. Wang, C. L. Zhang, Z. D. Han, H. S. Liu, B. X. Gu, Y. W. Du, Sol. State Comm. **142**, 591 (2007).
T. Krenke, E. Duman, M. Acet, E. F. Wassermann, X. Moya, L. Mañosa, A. Planes, J. Appl. Phys. **102**, 033903 (2007).
S. Y. Yu, L. Ma, G. D. Liu, Z. H. Liu, J. L. Chen, Z. X. Cao, G. H. Wu, B. Zhang, X. X. Zhang, Appl. Phys. Lett. **90**, 242501 (2007).
V. Provenzano, A. J. Shapiro, and R. D. Shull, Nature **429**, 853 (2004).
K. A. Gschneidner Jr., V. K. Pecharsky, and A. O. Tsokol, Rep. Prog. Phys. **68**, 1479 (2005).
|
---
abstract: 'It is well-known that polynomials decompose into spherical harmonics. This result is called separation of variables or the Fischer decomposition. In this paper we prove the Fischer decomposition for spinor valued polynomials in $k$ vector variables of ${\mathbb R}^m$ under the stable range condition $m\geq 2k$. Here the role of spherical harmonics is played by monogenic polynomials, that is, polynomial solutions of the Dirac equation in $k$ vector variables.'
address: |
Charles University, Faculty of Mathematics and Physics, Mathematical Institute\
Sokolovská 83, 186 75 Praha, Czech Republic
author:
- 'R. Lávička'
- 'V. Souček'
title: Fischer decomposition for spinor valued polynomials in several variables
---
[^1]
Introduction
============
Each polynomial $P$ in the Euclidean space ${\mathbb R}^m$ decomposes uniquely as $$P=H_0+r^2H_1+\cdots+r^{2j}H_j+\cdots$$ where $r^2=x_1^2+\cdots+x_m^2$, $(x_1,\ldots,x_m)\in{\mathbb R}^m$ and $H_j$ are harmonic polynomials in ${\mathbb R}^m$. Under the natural action of the orthogonal group ${{SO}}(m)$, this decomposition of polynomials is invariant, $r^2$ generates the algebra of invariant polynomials and the whole space of polynomials is the tensor product of invariants and spherical harmonics. An analogous result known as separation of variables or the Fischer decomposition was obtained in various cases and for other symmetry groups [@CW; @G; @Ho1; @KV; @Cou; @LS].
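As a minimal worked example of this decomposition (ours, for illustration): for the quadratic polynomial $x_1^2$ in ${\mathbb R}^m$, using $\Delta x_1^2=2$ and $\Delta r^2=2m$, one checks that $$x_1^2=\Big(x_1^2-\frac{r^2}{m}\Big)+r^2\,\frac{1}{m},\qquad \Delta\Big(x_1^2-\frac{r^2}{m}\Big)=2-\frac{2m}{m}=0,$$ so that here $H_0=x_1^2-r^2/m$ is harmonic and $H_1=1/m$ is a constant (degree-zero harmonic) polynomial.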
For example, spinor valued polynomials in one variable of ${\mathbb R}^m$ decompose into monogenic polynomials [@DSS]. A polynomial $P$ in ${\mathbb R}^m$ taking values in the spinor space ${\mathbb S}$ is called monogenic if it satisfies the equation ${\partial_{\underline{x}}}P=0$ where $${\partial_{\underline{x}}}:=\sum_{i=1}^m e_i\partial_{x_i}$$ is the Dirac operator in ${\mathbb R}^m$. Here $(e_1,\ldots,e_m)$ is an orthonormal basis for ${\mathbb R}^m$. Then each polynomial $P:{\mathbb R}^m\to{\mathbb S}$ decomposes uniquely as $$P=M_0+{\underline{x}}M_1+\cdots+{\underline{x}}^j M_j+\cdots$$ where ${\underline{x}}=x_1e_1+\cdots+x_me_m$ is the vector variable of ${\mathbb R}^m$ and $M_j$ are monogenic polynomials.
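The Clifford relations $e_ie_j+e_je_i=-2\delta_{ij}$ and the monogenicity condition can be checked concretely in a matrix realization. The following sketch (our illustration, not from the paper) uses $2\times 2$ complex matrices for $m=2$ and verifies that the degree-one polynomial $x_1-e_1e_2\,x_2$ is annihilated by the Dirac operator:

```python
import numpy as np

# Clifford generators for m = 2, realized as 2x2 matrices with
# e_i e_j + e_j e_i = -2 delta_ij (so e_i^2 = -1).
I2 = np.eye(2, dtype=complex)
e1 = np.array([[0, 1], [-1, 0]], dtype=complex)
e2 = np.array([[0, 1j], [1j, 0]], dtype=complex)

assert np.allclose(e1 @ e1, -I2) and np.allclose(e2 @ e2, -I2)
assert np.allclose(e1 @ e2 + e2 @ e1, np.zeros((2, 2)))

# M(x) = x1 - e1 e2 x2 acting on spinors in C^2; its partial derivatives
# are the constant matrix coefficients of x1 and x2.
dM_dx1 = I2
dM_dx2 = -(e1 @ e2)
dirac_M = e1 @ dM_dx1 + e2 @ dM_dx2   # Dirac operator applied to M

assert np.allclose(dirac_M, np.zeros((2, 2)))   # M is monogenic
```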
Function theory for the Dirac operator ${\partial_{\underline{x}}}$ is called Clifford analysis and it generalizes complex analysis to higher dimensions. An important feature of the Fischer decomposition is the fact that its pieces behave well under the action of the conformal spin group, which is the symmetry group of the Dirac equation. Its transcendental version is usually called the Almansi decomposition.
Clifford analysis in several variables is a natural generalization of the theory of functions of several complex variables. Its study started about half a century ago; however, its basic principles and facts are still not well understood. One of the most important basic questions is to formulate and to prove an analogue of the Fischer decomposition for spinor valued polynomials in $k$ vector variables of ${\mathbb R}^m$, that is, for polynomials $P:({\mathbb R}^m)^k\to{\mathbb S}$.
It soon became clear that it is necessary to distinguish different cases. The simplest case is the so-called stable range $m\geq 2k$. After a longer evolution, a corresponding conjecture was formulated in the book by F. Colombo, F. Sommen, I. Sabadini, D. Struppa ([@CSSS], Conj. 4.2.1, p. 236). Recently, the conjecture was proved in the case of two variables ([@L]).
The main purpose of the paper is to prove the Fischer decomposition for any number of variables in the stable range. The proof of the conjecture is based on the Fischer decomposition for scalar valued polynomials in several variables. The scalar case in the stable range is well known (see [@GW]). Recently, in the scalar case, an alternative approach leading to explicit formulae for different projections in the decomposition was developed in [@DER] for two variables.
Let us state the main result. Polynomials $P:({\mathbb R}^m)^k\to{\mathbb S}$ depend on $k$ vector variables ${\underline{x}}_1,\ldots,{\underline{x}}_k$ of ${\mathbb R}^m$ with ${\underline{x}}_j=x_{1j}e_1+\cdots+x_{mj}e_m$. Such a polynomial $P$ is called monogenic if it satisfies all the corresponding Dirac equations $$\partial_{{\underline{x}}_1} P=0,\ldots, \partial_{{\underline{x}}_k} P=0.$$ It is easy to see that the Dirac operators $\partial_{{\underline{x}}_j}$ and the left multiplication by the vector variables ${\underline{x}}_j$ are invariant operators on the space ${\mathcal{P}}(({\mathbb R}^m)^k)\otimes{\mathbb S}$ of spinor valued polynomials on $({\mathbb R}^m)^k$ under the natural action of the spin group ${{Spin}}(m)$, the double cover of ${{SO}}(m)$. Denote by ${\mathcal J}$ the algebra of invariants generated by the vector variables ${\underline{x}}_1,\ldots,{\underline{x}}_k$. Then, in the stable range, we prove that ${\mathcal{P}}(({\mathbb R}^m)^k)\otimes{\mathbb S}$ is isomorphic to the tensor product ${\mathcal J}\otimes{\mathcal{M}}$ where ${\mathcal{M}}$ is the space of spinor valued monogenic polynomials on $({\mathbb R}^m)^k.$ Indeed, we show the following result, which we refer to as the monogenic Fischer decomposition in the stable range.
\[t\_mfd\] For $J\subset\{1,2,\ldots,k\}$, $J=\{j_1<\cdots<j_r\}$, denote $${\underline{x}}_J={\underline{x}}_{j_1}\cdots {\underline{x}}_{j_r}.$$ For $1\leq i\leq j\leq k$, put $
r^2_{ij}:=x_{1i} x_{1j}+x_{2i} x_{2j}+\cdots+x_{mi} x_{mj}.
$
If $m\geq 2k$, then we have $$\label{e_mfd}
{\mathcal{P}}(({\mathbb R}^m)^k)\otimes{\mathbb S}=\bigoplus_{J,\{n_{ij}\}} \Big(\prod_{1\leq i\leq j\leq k}r_{ij}^{2n_{ij}}\Big) {\underline{x}}_J {\mathcal{M}}$$ where the direct sum is taken over all subsets $J$ of $\{1,2,\ldots,k\}$ and all sequences $\{n_{ij}|\ 1\leq i\leq j\leq k\}$ of numbers in ${\mathbb N}_0$.
For two variables, the stable range condition means that the dimension $m$ is greater than or equal to 4. In [@L], it was proved that the Fischer decomposition holds also in dimension 3. There are indications that, in the scalar case, the ’stable’ Fischer decomposition could hold in one dimension less than the stable range in general, that is, when $m\geq 2k-1$. If so, the same would be true for the monogenic Fischer decomposition as well. Outside the stable range, no reasonable conjecture is available and further study is needed.
In the space ${\mathcal{P}}(({\mathbb R}^m)^k)$ of scalar valued polynomials on $({\mathbb R}^m)^k$, the isotypic components of the action of ${{SO}}(m)$ have infinite multiplicities. The Howe duality theory removes the multiplicities and shows that the space ${\mathcal{P}}(({\mathbb R}^m)^k)$ decomposes under the action of the dual pair $SO(m)\times{\mathfrak{sp}}(2k)$ with multiplicity one ([@Ho1; @G; @HTW]). An analogous role for spinor valued polynomials ${\mathcal{P}}(({\mathbb R}^m)^k)\otimes{\mathbb S}$ is played by the pair ${{Spin}}(m)\times{\mathfrak{osp}}(1|2k)$ where the Lie superalgebra ${\mathfrak{osp}}(1|2k)$ is generated by the odd operators $\partial_{{\underline{x}}_j}$ and ${\underline{x}}_j$ for $j=1,\ldots,k$. Indeed, we can reformulate Theorem \[t\_mfd\] as a duality between finite dimensional representations of ${{Spin}}(m)$ and infinite dimensional representations of ${\mathfrak{osp}}(1|2k)$. In particular, in ${\mathcal{P}}(({\mathbb R}^m)^k)\otimes{\mathbb S}$, the isotypic components of ${{Spin}}(m)$ give explicit realizations of lowest weight modules of ${\mathfrak{osp}}(1|2k)$ with lowest weights $(a_1+(m/2),\ldots,a_k+(m/2))$ for integers $a_1\geq \cdots\geq a_k\geq 0$, see Theorem \[t\_isod\]. In this connection, we mention that there are not so many known constructions of such modules, see e.g. the paraboson Fock space [@LSJ], and for a classification, we refer to [@DZ; @DS].
In the paper, after preliminaries and notation in Section 2, the Fischer decomposition in the scalar case is reviewed in Section 3. A proof of the main result, that is, the monogenic Fischer decomposition in the stable range, is then contained in Section 4. In Section 5, we describe the structure of isotypic components for ${{Spin}}(m)$ in the space of spinor valued polynomials.
Preliminaries and notations
===========================
Clifford algebra and spinors
----------------------------
Consider the Euclidean space ${\mathbb R}^m$ with a fixed negative definite quadratic form $B.$ The corresponding (universal) Clifford algebra is denoted by ${\mathbb R}_{0,m},$ its complexification by ${\mathcal{C}}_m={\mathbb R}_{0,m}\otimes{\mathbb C}.$ If $(e_1,\ldots,e_m)$ is an orthonormal basis for ${\mathbb R}^m,$ its elements satisfy the relations $$e_ie_j+e_je_i=-2\delta_{ij},\ i,j=1,\ldots,m.$$ Let $a\to\bar{a}$ denote the main antiinvolution on ${\mathbb R}_{0,m}$ characterized by the properties $\overline{e_i}=-e_i,\;i=1,\ldots,m$ and $\overline{ab}=\bar{b}\bar{a},$ $a,b\in{\mathbb R}_{0,m}.$ The extension of the main antiinvolution to ${\mathcal{C}}_m$ is defined by $$\overline{a\otimes\alpha}=
\overline{a}\otimes \overline{\alpha},
\ a\in{\mathbb R}_{0,m},\alpha\in{\mathbb C},$$ where $\overline{\alpha}$ denotes the complex conjugation.
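To make the defining relations concrete, here is a small numerical check (illustrative only; the matrix realization $e_j=i\sigma_j$ via Pauli matrices for $m=3$ is a standard model, not taken from the paper):

```python
# Check the Clifford relations e_i e_j + e_j e_i = -2 delta_ij for m = 3,
# in the matrix model e_j = i * sigma_j (Pauli matrices), so e_j^2 = -1
# and distinct generators anticommute.

def mat_mul(A, B):
    """Multiply two 2x2 complex matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

I2 = [[1, 0], [0, 1]]
sigma = [
    [[0, 1], [1, 0]],        # sigma_x
    [[0, -1j], [1j, 0]],     # sigma_y
    [[1, 0], [0, -1]],       # sigma_z
]
e = [[[1j * entry for entry in row] for row in s] for s in sigma]

for i in range(3):
    for j in range(3):
        anti = mat_add(mat_mul(e[i], e[j]), mat_mul(e[j], e[i]))
        expected = [[-2 * I2[r][c] if i == j else 0 for c in range(2)]
                    for r in range(2)]
        assert anti == expected
```

In particular $e_i^2=-1$ for each $i$, which is exactly the relation with $i=j$.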
The Clifford algebra ${\mathcal{C}}_m$ is ${\mathbb Z}_2$-graded, it decomposes as ${\mathcal{C}}_m={\mathcal{C}}_m^+\oplus{\mathcal{C}}_m^-$ into even and odd parts. For $m$ odd, the even part ${\mathcal{C}}^+_m$ is (isomorphic to) a matrix algebra over ${\mathbb C}.$ For $m$ even, the algebra ${\mathcal{C}}_m$ is (isomorphic to) a matrix algebra over ${\mathbb C},$ and ${\mathcal{C}}_m^+$ is a sum of two matrix algebras.
Spinor valued fields considered in the paper have values in the space ${\mathbb S}$ defined as follows. For odd $m=2n+1$, we denote by ${\mathbb S}$ the unique irreducible module for ${\mathcal{C}}_m^+,$ while for even $m=2n$, we denote by ${\mathbb S}$ the unique irreducible module for ${\mathcal{C}}_m.$ In both cases, the dimension of ${\mathbb S}$ is equal to $2^n.$
The vector space ${\mathbb R}^m$ is embedded into ${\mathcal{C}}_m$ by $$x:=(x_1,\ldots, x_m)\mapsto {\underline{x}}:=\sum_{i=1}^m e_ix_i,$$ the Dirac operator is denoted by ${\partial_{\underline{x}}}:=\sum_{i=1}^m e_i\partial_{x_i}.$ See e.g. [@DSS] for a detailed account.
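As a small illustration of monogenicity (our own toy example, not from the paper): for $m=2$, $k=1$, the ${\operatorname{End}}({\mathbb S})$-valued polynomial $P(x)=x_1-e_1e_2\,x_2$ satisfies ${\partial_{\underline{x}}}P=0$, which can be verified in a matrix model of the Clifford algebra:

```python
# Verify that P(x) = x_1 - e_1 e_2 x_2 is monogenic for m = 2, i.e. satisfies
# e_1 dP/dx_1 + e_2 dP/dx_2 = 0, using e_1 = i*sigma_x, e_2 = i*sigma_y
# (one possible 2x2 matrix model, chosen for illustration).

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

I2 = [[1, 0], [0, 1]]
e1 = [[0, 1j], [1j, 0]]      # i * sigma_x
e2 = [[0, 1], [-1, 0]]       # i * sigma_y

# P is linear, so its partial derivatives are the constant matrices below.
dP_dx1 = I2
dP_dx2 = scale(-1, mul(e1, e2))

# Dirac operator applied to P: e1 * dP/dx1 + e2 * dP/dx2.
dirac_P = add(mul(e1, dP_dx1), mul(e2, dP_dx2))
assert all(dirac_P[i][j] == 0 for i in range(2) for j in range(2))
```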
Fischer inner product
---------------------
We identify $({\mathbb R}^m)^k$ with the vector space of all $m\times k$ real matrices. The columns $x_1,\ldots,x_k$ of a matrix $x\in({\mathbb R}^m)^k$ are hence vectors in the Euclidean space ${\mathbb R}^m$, that is, $x_j=(x_{1j},\ldots,x_{mj})\in{\mathbb R}^m$. Denote by ${\mathcal{P}}={\mathcal{P}}(({\mathbb R}^m)^k)$ the space of all complex valued polynomials on $({\mathbb R}^m)^k$. Then a polynomial $f\in{\mathcal{P}}(({\mathbb R}^m)^k)$ is of the form $f=\sum_{\alpha} c_\alpha x^\alpha,$ where $\alpha=(\alpha_{ij})\in{\mathbb N}_0^{m\times k}$ is a (matrix) multiindex, $x^\alpha=\Pi_{ij}(x_{ij})^{\alpha_{ij}}$ and only finitely many coefficients $c_\alpha\in{\mathbb C}$ are non-zero. The standard Fischer inner product on the space ${\mathcal{P}}(({\mathbb R}^m)^k)$ is given for $f=\sum_{\alpha} c_\alpha x^\alpha$ and $g=\sum_{\alpha} d_\alpha x^\alpha$ by $$\langle f,g\rangle=\sum_\alpha \alpha!\;\overline{c_{\alpha}}d_\alpha,$$ where $\alpha!=\Pi_{ij}(\alpha_{ij}!).$ It is a Hermitian (positive definite) inner product. It can also be written using an integral formula ([@G], Lemma 9.1).
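The Fischer inner product is easy to implement directly from the coefficient formula. The following sketch (helper names `fischer`, `mul_var`, `diff_var` are ours) also checks the key duality property behind many arguments below: multiplication by a variable is the Fischer adjoint of the corresponding partial derivative.

```python
# Fischer inner product on scalar polynomials stored as dicts {multiindex: coeff}:
# <f,g> = sum_alpha alpha! * conj(c_alpha) * d_alpha.
from math import factorial, prod

def fischer(f, g):
    return sum(prod(factorial(a) for a in alpha) * f[alpha].conjugate() * g[alpha]
               for alpha in f if alpha in g)

def mul_var(f, i):
    """Multiply f by the i-th variable."""
    out = {}
    for alpha, c in f.items():
        beta = list(alpha); beta[i] += 1
        out[tuple(beta)] = out.get(tuple(beta), 0) + c
    return out

def diff_var(f, i):
    """Differentiate f with respect to the i-th variable."""
    out = {}
    for alpha, c in f.items():
        if alpha[i] > 0:
            beta = list(alpha); beta[i] -= 1
            out[tuple(beta)] = out.get(tuple(beta), 0) + c * alpha[i]
    return out

# Example in 2 variables: f = x^2 y, so <f,f> = 2! * 1! = 2.
f = {(2, 1): 1}
assert fischer(f, f) == 2

# Adjointness: <x_i * h, g> == <h, d g / d x_i> on a sample pair.
g = {(3, 1): 5, (1, 2): -2}
h = {(2, 1): 3, (0, 2): 7}
assert fischer(mul_var(h, 0), g) == fischer(h, diff_var(g, 0))
```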
The Fischer inner product can be extended to polynomials with values in the Clifford algebra ${\mathcal{C}}_m$ by the formula $$\langle f,g\rangle=\sum_\alpha \alpha!\;[ \overline{c_{\alpha}}d_\alpha]_0$$ where, for a Clifford number $c\in{\mathcal{C}}_m$, $\overline{c}$ is the image of $c$ under the main antiinvolution and $[c]_0$ is its scalar part.
The spinor representation ${\mathbb S}$ can be realized as a suitable left ideal in ${\mathcal{C}}_m,$ so the Fischer inner product is well defined also for spinor valued fields. See [@DSS].
A realization of ${\mathfrak{sp}}(2k)$ {#sp2k}
--------------------------------------
For an account of representation theory and Howe duality, we refer e.g. to [@GW]. In the description of the Howe dual pair $({{SO}}(m),{\mathfrak{sp}}(2k))$ the second partner is realized inside the Weyl algebra ${\mathcal{D}}{\mathcal W}$ of differential operators with polynomial coefficients as follows. The group ${{SO}}(m) $ acts on the space $({\mathbb R}^m)^k$ by the left matrix multiplication, hence it induces the action on the space ${\mathcal{P}}={\mathcal{P}}(({\mathbb R}^m)^k)$ of all complex valued polynomials on $({\mathbb R}^m)^k$ by $$[g\cdot P](x):=P(g^t\,x),\;g\in{{SO}}(m),\; P\in{\mathcal{P}},\; x\in({\mathbb R}^m)^k.$$ We introduce the following differential operators on the space ${\mathcal{P}}:$ $$r^2_{ij}:=\sum_{l=1}^m x_{li} x_{lj},\
\Delta_{ij}:= \sum_{l=1}^m \partial_{x_{li}} \partial_{x_{lj}},\
E_{ij}:=\sum_{l=1}^m x_{li} \partial_{x_{lj}}$$ and $h_{ij}:=E_{ij}+\frac{m}{2}\delta_{ij}$ for $i,j=1,\ldots,k$. All these differential operators are elements of the Weyl algebra ${\mathcal{D}}{\mathcal W}$ generated by the partial derivatives $\partial_{x_{lj}}$ and by the operators of multiplication by $x_{lj}$ acting on the space ${\mathcal{P}}$ for $l=1,\ldots,m$, $j=1,\ldots,k$.
The action of the Lie algebra ${\mathfrak{t}}\simeq{\mathfrak{gl}}(k)$ on ${\mathcal{P}}$ given by $${\mathfrak{t}}:={\operatorname{span}}\{h_{ij}|\ i,j=1,\ldots k\}$$ extends to the action of the Lie algebra ${\mathfrak g}'\simeq {\mathfrak{sp}}(2k),$ where ${\mathfrak g}'={\mathfrak p}_-
\oplus {\mathfrak{t}}\oplus {\mathfrak p}_+,
$ and $${\mathfrak p}_+:={\operatorname{span}}\{r^2_{ij}|\ 1\leq i\leq j\leq k\},\;
{\mathfrak p}_-:={\operatorname{span}}\{\Delta_{ij}|\ 1\leq i\leq j\leq k\}.$$ All operators in ${\mathfrak{sp}}(2k)$ are ${{SO}}(m)$-invariant differential operators. The Lie algebra ${\mathfrak{t}}$ splits moreover as $${\mathfrak{t}}={\mathfrak{t}}_-\oplus{\mathfrak{t}}_0\oplus{\mathfrak{t}}_+\text{\ \ with\ \ }{\mathfrak{t}}_0={\operatorname{span}}\{h_{ii}|\ 1\leq i\leq k \},$$ $${\mathfrak{t}}_+={\operatorname{span}}\{h_{ji}|\ 1\leq i<j\leq k \},\ \ {\mathfrak{t}}_-={\operatorname{span}}\{h_{ij}|\ 1\leq i<j\leq k \}.$$
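For $k=1$ the operators $r^2$, $\Delta$ and $h=E+\frac m2$ form the familiar ${\mathfrak{sl}}(2)$ triple, with $[\Delta,r^2]=4E+2m$. The following sketch (dict-based polynomial representation and sample polynomial are ours) verifies this commutation relation for $m=3$:

```python
# Check [Delta, r^2] = 4E + 2m on a sample polynomial in m = 3 variables,
# with polynomials stored as dicts {multiindex: coefficient}.
m = 3

def add_poly(f, g, c=1):
    """Return f + c*g, dropping zero coefficients."""
    out = dict(f)
    for a, v in g.items():
        out[a] = out.get(a, 0) + c * v
    return {a: v for a, v in out.items() if v != 0}

def r2(f):
    """Multiplication by x_1^2 + ... + x_m^2."""
    out = {}
    for a, c in f.items():
        for i in range(m):
            b = list(a); b[i] += 2
            out[tuple(b)] = out.get(tuple(b), 0) + c
    return out

def laplace(f):
    out = {}
    for a, c in f.items():
        for i in range(m):
            if a[i] >= 2:
                b = list(a); b[i] -= 2
                out[tuple(b)] = out.get(tuple(b), 0) + c * a[i] * (a[i] - 1)
    return {a: v for a, v in out.items() if v != 0}

def euler(f):
    return {a: c * sum(a) for a, c in f.items() if sum(a) != 0}

f = {(2, 1, 0): 3, (0, 0, 4): -1, (1, 1, 1): 2}
lhs = add_poly(laplace(r2(f)), r2(laplace(f)), c=-1)   # [Delta, r^2] f
rhs = add_poly({a: 4 * c for a, c in euler(f).items()},
               {a: 2 * m * c for a, c in f.items()})
assert lhs == rhs
```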
Lie superalgebra ${\mathfrak{osp}}(1|2k)$ {#ss_osp}
-----------------------------------------
The Lie algebra ${\mathfrak{sp}}(2k)$ is the even part of the Lie superalgebra ${\mathfrak{osp}}(1|2k).$ There is a realization of this superalgebra by differential operators acting on the space of spinor valued polynomials ${\mathcal{P}}(({\mathbb R}^m)^k)\otimes {\mathbb S}.$
The even part ${\mathfrak{sp}}(2k)$ of ${\mathfrak{osp}}(1|2k)$ is realized as in Subsection \[sp2k\] above and its odd part is ${\mathfrak f}_+\oplus{\mathfrak f}_-,$ where $${\mathfrak f}_+:={\operatorname{span}}\{{\underline{x}}_i|\ i=1,\ldots,k \},
\; {\mathfrak f}_-:={\operatorname{span}}\{{{\partial_{\underline{x}}}}_i|\ i=1,\ldots,k \}.$$ So we have ${\mathfrak{osp}}(1|2k)={\mathfrak f}_-\oplus{\mathfrak p}_-\oplus{\mathfrak{t}}\oplus{\mathfrak p}_+\oplus{\mathfrak f}_+$.
Representations of ${{Spin}}(m)$
--------------------------------
We always assume that $m>2$, and $m=2n$ or $m=2n+1$. Finite dimensional irreducible representations of the group ${{Spin}}(m)$, the double cover of $SO(m)$, are denoted in the paper by $E_\lambda, $ where $\lambda=(\lambda_1,\ldots,\lambda_n)$ is the highest weight satisfying the usual dominance conditions depending on the parity of $m$: namely, $\lambda_1\geq\ldots\geq\lambda_{n-1}\geq |\lambda_n|$ for $m=2n,$ resp. $\lambda_1\geq\ldots\geq \lambda_n\geq 0$ for $m=2n+1.$ A representation $E_\lambda$ factorizes to a representation of ${{SO}}(m)$ if and only if its components are integral. If the components $\lambda_i$ are all elements of ${\mathbb Z}+\frac{1}{2},$ it is a genuine module for ${{Spin}}(m).$
The spinor representation ${\mathbb S}$ is irreducible for $m$ odd and its highest weight is $\mu=(\frac{1}{2},\ldots,\frac{1}{2}).$ For $m$ even, ${\mathbb S}={\mathbb S}^+\oplus
{\mathbb S}^-$ where ${\mathbb S}^{\pm}$ are irreducible submodules with the highest weights $\mu_\pm=(\frac{1}{2},\ldots,\frac{1}{2},\pm\frac{1}{2}),$ respectively.
All irreducible ${{Spin}}(m)$-modules can be realized using the basic function spaces studied in Clifford analysis, as shown in [@CLS]. To do this, we define the space ${\mathcal{H}}$ of harmonic polynomials on $({\mathbb R}^m)^k$ as $${\mathcal{H}}:={\operatorname{Ker}}{\mathfrak p}_-=\{P\in{\mathcal{P}}|\ LP=0\text{\ for all\ }L\in{\mathfrak p}_-\},$$ and the space of simplicial harmonics $${\mathcal{H}}^S:={\mathcal{H}}\cap{\operatorname{Ker}}{\mathfrak{t}}_-$$ where ${\mathfrak p}_-$ and ${\mathfrak{t}}_-$ are given as in Subsection \[ss\_osp\]. For $\ell\in{\mathbb N}_0$, denote by ${\mathcal{P}}_{\ell}$ the space of polynomials of ${\mathcal{P}}$ homogeneous of total degree $\ell$ and, for a multiindex $a=(a_1,\dots,a_k)\in{\mathbb N}_0^k$, ${\mathcal{P}}_a$ stands for the subspace of polynomials of ${\mathcal{P}}$ homogeneous in the variable $x_i$ of degree $a_i$ for each $i=1,\ldots,k$. We use an analogous notation for other spaces of polynomials, e.g., ${\mathcal{H}}_{\ell}$ and ${\mathcal{H}}^S_a$. For a partition $a\in{\mathbb N}_0^k$ (i.e., $a_1\geq a_2\geq \cdots\geq a_k$), we know that the space ${\mathcal{H}}^S_a$ of simplicial harmonic polynomials forms an irreducible $SO(m)$-module with the highest weight $a.$ Irreducible representations with half integral weights can be realized in a similar way using suitable spaces of monogenic polynomials. More details are presented below in Subsection \[s\_monog\].
The following result is needed in the proof of the main theorem.
\[Klimyk\] Suppose that $2k\leq m$, and $m=2n$ or $m=2n+1.$ Let $\Pi({\mathbb S})$ be the list of all $2^n$ weights of the module ${\mathbb S}.$ Then the tensor product $E_\lambda\otimes{\mathbb S}$ decomposes with multiplicity one as $$E_\lambda\otimes{\mathbb S}\simeq \bigoplus_{\nu\in A}E_\nu,$$ where $A$ is the set of all dominant weights $\nu$ for ${{Spin}}(m)$ of the form $\nu=\lambda+\alpha$ for some $\alpha\in \Pi({\mathbb S})$.
The number of summands is bounded by $2^k$, and it is equal to $2^k$ if $\lambda$ lies inside the dominant Weyl chamber.
Let $\rho$ denote the half sum of positive roots. The claim follows immediately from Klimyk’s formula [@FH Ex. 25.41, p. 428], because all weights $\alpha\in\Pi({\mathbb S})$ have multiplicity one and the sum $\lambda+\alpha+\rho$ is either inside or on a wall of the dominant Weyl chamber.
The first $k$ components of $\alpha\in\Pi({\mathbb S})$ are equal to $\pm\frac{1}{2}$, so there are at most $2^k$ such possibilities, and if $\lambda+\alpha$ is dominant, all the components $\alpha_i$, $i=k+1,\ldots,n-1$, have the sign plus. For a generic $\lambda$, all the sums $\lambda+\alpha$ are dominant.
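The lemma can be illustrated numerically via the Weyl dimension formula. In the following sketch (our example, with $m=5$, so $n=2$ and type $B_2$, and $\lambda=(1,0)$), the dominant sums $\lambda+\alpha$ are $(\frac32,\frac12)$ and $(\frac12,\frac12)$, and their dimensions $16+4$ add up to $\dim E_\lambda\cdot\dim{\mathbb S}=5\cdot 4$:

```python
# Dimension check of the Klimyk-type decomposition E_lambda (x) S for so(5).
from fractions import Fraction
from itertools import product

# Positive roots of so(5) in standard coordinates: e1-e2, e1+e2, e1, e2.
POS_ROOTS = [(1, -1), (1, 1), (1, 0), (0, 1)]
RHO = (Fraction(3, 2), Fraction(1, 2))

def weyl_dim(lam):
    """Weyl dimension formula for so(5) with highest weight lam = (l1, l2)."""
    lam = tuple(Fraction(x) for x in lam)
    d = Fraction(1)
    for alpha in POS_ROOTS:
        num = sum((l + r) * a for l, r, a in zip(lam, RHO, alpha))
        den = sum(r * a for r, a in zip(RHO, alpha))
        d *= num / den
    return d

def is_dominant(nu):
    return nu[0] >= nu[1] >= 0

lam = (Fraction(1), Fraction(0))
spin_weights = [(Fraction(s1, 2), Fraction(s2, 2))
                for s1, s2 in product((1, -1), repeat=2)]

summands = [tuple(l + a for l, a in zip(lam, alpha))
            for alpha in spin_weights
            if is_dominant(tuple(l + a for l, a in zip(lam, alpha)))]
assert sorted(summands) == [(Fraction(1, 2), Fraction(1, 2)),
                            (Fraction(3, 2), Fraction(1, 2))]
assert sum(weyl_dim(nu) for nu in summands) == weyl_dim(lam) * 4  # dim S = 2^2
```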
Representations of ${\mathfrak{gl}}(k)$
---------------------------------------
Finite dimensional irreducible representations of the group ${{GL}}(k)$ are denoted in the paper by $F_\lambda, $ where $\lambda=(\lambda_1,\ldots,\lambda_k)\in{\mathbb Z}^k$ is the highest weight satisfying the usual dominance condition $\lambda_1\geq\ldots\geq \lambda_k.$ For the Lie algebra ${\mathfrak{gl}}(k)$ of $k\times k$ matrices, there are more finite dimensional irreducible representations. In fact, for each $\delta\in{\mathbb C}$, we have a 1-dimensional representation $F_{\delta 1_k}$ of ${\mathfrak{gl}}(k)$ given by $$[g\cdot z]=\delta{\operatorname{tr}}(g)z,\ g\in{\mathfrak{gl}}(k), z\in{\mathbb C}$$ where $\delta 1_k=(\delta,\ldots,\delta)$ ($k$ numbers) and ${\operatorname{tr}}(g)$ is the trace of a matrix $g\in{\mathfrak{gl}}(k)$. Then each finite dimensional irreducible representation of ${\mathfrak{gl}}(k)$ is of the form $$F_{\lambda+\delta 1_k}=F_{\lambda}\otimes F_{\delta 1_k}$$ for some $\lambda=(\lambda_1,\ldots,\lambda_k)\in{\mathbb Z}^k$, $\lambda_1\geq\ldots\geq \lambda_k$ and $\delta\in{\mathbb C}$. In particular, we have $\dim F_{\lambda+\delta 1_k}=\dim F_{\lambda}$.
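The last equality is transparent from the classical product formula $\dim F_\lambda=\prod_{i<j}\frac{\lambda_i-\lambda_j+j-i}{j-i}$, which only involves differences of the components. A quick check (our illustrative ${\mathfrak{gl}}(3)$ example, with an integer shift $\delta$):

```python
# Dimension of F_lambda for gl(k) via the product formula; shifting lambda by
# delta*(1,...,1) does not change the dimension, since only differences enter.
from fractions import Fraction

def gl_dim(lam):
    k = len(lam)
    d = Fraction(1)
    for i in range(k):
        for j in range(i + 1, k):
            d *= Fraction(lam[i] - lam[j] + j - i, j - i)
    return d

lam = (2, 1, 0)
assert gl_dim(lam) == 8           # the adjoint representation of sl(3), twisted by det
delta = 5
assert gl_dim(tuple(l + delta for l in lam)) == gl_dim(lam)
```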
Harmonic Fischer decomposition
==============================
In this section, we present a review of results from classical invariant theory and the theory of Howe dual pairs needed for the proof of the Fischer decomposition for spinor valued polynomials. Results presented in this section are taken from [@Ho1; @G; @HTW].
Separation of variables
-----------------------
Classical invariant theory describes the set ${\mathcal I}:={\mathcal{P}}^{{{SO}}(m)}$ of invariant polynomials with respect to the action of the group ${{SO}}(m).$ The theorem below (traditionally called ’separation of variables’) describes polynomials in the space ${\mathcal{P}}$ as products of invariant and harmonic polynomials. Recall that ${\mathcal{H}}={\operatorname{Ker}}{\mathfrak p}_-$ is the space of harmonic polynomials on $({\mathbb R}^m)^k$, that is, those annihilated by all the differential operators $\Delta_{ij}$.
\[t\_hfd\*\]
\(i) The space ${\mathcal I}={\mathcal{P}}^{{{SO}}(m)}$ of invariant polynomials under the action of the group $SO(m)$ is the polynomial algebra ${\mathbb C}[r^2_{ij}] $, and it is also isomorphic to the symmetric algebra ${\mathcal{S}}({\mathfrak p}_+)$ over the space ${\mathfrak p}_+$.
\(ii) The linear map from ${\mathcal I}\otimes{\mathcal{H}}$ onto ${\mathcal{P}}$ $$I\otimes H\mapsto IH,$$ given by the multiplication of polynomials $I\in{\mathcal I}$ and $H\in{\mathcal{H}}$, is an isomorphism if the stable range condition $m\geq 2k$ holds.
Obviously, in the stable range, we can reformulate Theorem \[t\_hfd\*\] as follows.
\[t\_hfd\] If $m\geq 2k$, we have $$\label{e_hfd}
{\mathcal{P}}(({\mathbb R}^m)^k)=\bigoplus_{\{n_{ij}\}} \Big(\prod_{1\leq i\leq j\leq k}r_{ij}^{2n_{ij}}\Big) {\mathcal{H}}$$ where the sum is taken over all sequences $\{n_{ij}|\ 1\leq i\leq j\leq k\}$ of numbers in ${\mathbb N}_0$.
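For $k=1$ this is the classical decomposition ${\mathcal{P}}_\ell=\bigoplus_n r^{2n}{\mathcal{H}}_{\ell-2n}$, and the dimensions can be checked directly (a sketch of ours, using $\dim{\mathcal{P}}_\ell(\mathbb{R}^m)=\binom{\ell+m-1}{m-1}$ and $\dim{\mathcal{H}}_\ell=\dim{\mathcal{P}}_\ell-\dim{\mathcal{P}}_{\ell-2}$, so the sum telescopes):

```python
# Dimension count for the k = 1 harmonic Fischer decomposition:
# sum over n of dim H_{l-2n} equals dim P_l.
from math import comb

def dim_P(l, m):
    return comb(l + m - 1, m - 1) if l >= 0 else 0

def dim_H(l, m):
    return dim_P(l, m) - dim_P(l - 2, m)

for m in (3, 4, 7):
    for l in range(12):
        total = sum(dim_H(l - 2 * n, m) for n in range(l // 2 + 1))
        assert total == dim_P(l, m)
```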
Decomposition of spherical harmonics
------------------------------------
The space ${\mathcal{H}}$ of harmonic polynomials is invariant under the action of the product ${{SO}}(m)\times {\mathfrak{gl}}(k),$ and it decomposes into irreducible parts with multiplicity one as follows. Here the Lie algebra ${\mathfrak{gl}}(k)$ is realized as ${\mathfrak{t}}$, see Subsection \[sp2k\].
\[t\_isodh\] The space ${\mathcal{H}}$ of harmonic polynomials has a multiplicity free decomposition under the action of ${{SO}}(m)\times{\mathfrak{gl}}(k)$ $${\mathcal{H}}=\bigoplus_a {\mathcal{H}}_{(a)}\text{\ \ with\ \ } {\mathcal{H}}_{(a)}\simeq {\mathcal{H}}^S_{a}\otimes F_{a+\frac{m}{2}1_k},$$ where the sum is taken over all partitions $a\in{\mathbb N}_0^k$. Individual summands ${\mathcal{H}}_{(a)}$ are at the same time isotypic components of the ${{SO}}(m)$ action and the isotypic components under the action of ${\mathfrak{t}}\simeq{\mathfrak{gl}}(k)$ in the space ${\mathcal{H}}.$
The Howe duality ${{SO}}(m)\times{\mathfrak{sp}}(2k)$
-----------------------------------------------------
It is well-known that, under the joint action of ${{SO}}(m)$ and ${\mathfrak{sp}}(2k)$, the space ${\mathcal{P}}(({\mathbb R}^m)^k)$ of scalar valued polynomials has a multiplicity free decomposition. Indeed, we have
Assume $m\geq 2k$. Then the space ${\mathcal{P}}(({\mathbb R}^m)^k)$ decomposes under the action of the dual pair ${{SO}}(m)\times{\mathfrak{sp}}(2k)$ as $${\mathcal{P}}(({\mathbb R}^m)^k)\simeq\bigoplus_a {\mathcal{H}}^S_a\otimes {L}_{a+\frac m2 1_k},$$ where the sum is taken over all partitions $a\in {\mathbb N}_0^k$ and $L_{\lambda}$ is an irreducible lowest weight module with lowest weight $\lambda$ for ${\mathfrak{sp}}(2k)$.
Monogenic Fischer decomposition
===============================
In this section, we prove Theorem \[t\_mfd\], the Fischer decomposition for spinor valued polynomials in $k$ vector variables of ${\mathbb R}^m$. In what follows, we consider only the stable range case $m\geq 2k.$
Radial algebra
--------------
The vector variables ${\underline{x}}_j=\sum_{l=1}^{m}e_lx_{lj}$ are elements in the algebra ${\mathcal A}:={\mathcal{P}}(({\mathbb R}^m)^k)\otimes{\operatorname{End}}({\mathbb S})$ of ${\operatorname{End}}({\mathbb S})$-valued polynomials. They are invariant with respect to the action of the group ${{Spin}}(m).$ Let ${\mathcal J}$ be the subalgebra generated by $\{{\underline{x}}_1,\ldots,{\underline{x}}_k\}$ in ${\mathcal A}.$ By [@Som1], ${\mathcal J}$ is a realization of the (abstract) radial algebra ${\mathcal{R}}({\underline{x}}_1,\ldots,{\underline{x}}_k)$ in the vector variables ${\underline{x}}_1,\ldots,{\underline{x}}_k$. It is easy to see that $${\mathcal J}\simeq{\mathcal{S}}({\mathfrak p}_+)\otimes{\mbox{\Large $\wedge$}}({\mathfrak f}_+)$$ where ${\mathcal{S}}({\mathfrak p}_+)$ is the symmetric algebra over the space ${\mathfrak p}_+$ and ${\mbox{\Large $\wedge$}}({\mathfrak f}_+)$ is the exterior algebra over ${\mathfrak f}_+$. Actually, ${\mathcal J}$ may also be viewed as the symmetric superalgebra over the superspace $V=V_0\oplus V_1$ with $V_0={\mathfrak p}_+$ and $V_1={\mathfrak f}_+$.
Decomposition of spherical monogenics {#s_monog}
-------------------------------------
Before proving Theorem \[t\_mfd\] we describe an irreducible decomposition of monogenic polynomials with respect to the group ${{Spin}}(m)$ in Theorem \[t\_idm\] below. Recall that a polynomial $P:({\mathbb R}^m)^k\to{\mathbb S}$ is called monogenic (i.e., $P\in{\mathcal{M}}$) if it satisfies the Dirac equations $$\partial_{{\underline{x}}_1} P=0,\ldots, \partial_{{\underline{x}}_k} P=0.$$ We say that such a $P$ is simplicial monogenic if it holds in addition that $$h_{ij}\; P=0$$ for each $1\leq i<j\leq k$. Here the operators $h_{ij}$ are defined in Subsection \[sp2k\]. Then, for the space ${\mathcal{M}}^S$ of simplicial monogenics, we have $${\mathcal{M}}^S={\mathcal{M}}\cap{\operatorname{Ker}}{\mathfrak{t}}_-.$$ It is easy to see that ${\mathcal{M}}^S$ decomposes by homogeneity as $$\label{e_dsm}
{\mathcal{M}}^S:=\bigoplus_{a}{\mathcal{M}}^S_a$$ where the sum is taken over all partitions $a\in {\mathbb N}_0^k$. In addition, the space ${\mathcal{M}}^S_a$ is an irreducible ${{Spin}}(m)$-module with the highest weight $a' =
(a_1+\frac 12,\ldots,a_k+\frac 12,\frac 12,\ldots, \frac 12)$ ($n$ numbers) in odd dimension $m=2n+1,$ while in even dimension $m=2n,$ it decomposes into two irreducible components ${\mathcal{M}}^S_a= {\mathcal{M}}^{S,+}_a\oplus {\mathcal{M}}^{S,-}_a$ with the highest weights $a'_{\pm} =
(a_1+\frac 12,\ldots,a_k+\frac 12,\frac 12,\ldots, \frac 12,\pm\frac 12)$ ($n$ numbers). Here ${\mathcal{M}}^{S,\pm}_a={\mathcal{M}}^{S}_a\cap({\mathcal{P}}\otimes{\mathbb S}^{\pm})$. See [@CLS] for details.
\[l\_idhs\] For a partition $a\in{\mathbb N}_0^k$, the following $Spin(m)$-modules are isomorphic $${\mathcal{H}}^S_a\otimes{\mathbb S}\simeq\bigoplus_J {\mathcal{M}}^S_{a-\epsilon(J)}$$ where the direct sum is taken over all sets $J\subset\{1,2,\ldots,k\}$, and $\epsilon(J)=(\epsilon_1,\ldots,\epsilon_k)$ with $\epsilon_j=1$ for $j\in J$ and $\epsilon_j=0$ for $j\not\in J$. Here ${\mathcal{M}}^S_{b}=0$ unless $b\in{\mathbb N}_0^k$ is a partition.
This follows directly from Klimyk’s formula, see Lemma \[Klimyk\]. In the even dimension $m$, we use in addition the decompositions ${\mathbb S}={\mathbb S}^+\oplus{\mathbb S}^-$ and ${\mathcal{M}}^S_a= {\mathcal{M}}^{S,+}_a\oplus {\mathcal{M}}^{S,-}_a$.
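The lemma can be tested by a dimension count. The following sketch (our example: $m=5$, so $n=2$, and $k=2$, $a=(2,1)$) compares $\dim({\mathcal{H}}^S_a\otimes{\mathbb S})$ with $\sum_J\dim{\mathcal{M}}^S_{a-\epsilon(J)}$, using that ${\mathcal{H}}^S_a$ has highest weight $a$ and ${\mathcal{M}}^S_b$ has highest weight $(b_1+\frac12,b_2+\frac12)$ in odd dimension:

```python
# Dimension check of H^S_a (x) S = (+)_J M^S_{a - eps(J)} for so(5), a = (2, 1).
from fractions import Fraction
from itertools import combinations

POS_ROOTS = [(1, -1), (1, 1), (1, 0), (0, 1)]   # positive roots of so(5)
RHO = (Fraction(3, 2), Fraction(1, 2))

def weyl_dim(lam):
    lam = tuple(Fraction(x) for x in lam)
    if not (lam[0] >= lam[1] >= 0):
        return Fraction(0)                       # non-dominant weight: zero module
    d = Fraction(1)
    for alpha in POS_ROOTS:
        num = sum((l + r) * a for l, r, a in zip(lam, RHO, alpha))
        den = sum(r * a for r, a in zip(RHO, alpha))
        d *= num / den
    return d

a = (2, 1)
lhs = weyl_dim(a) * 4                            # dim S = 2^2 = 4
rhs = Fraction(0)
for j in range(3):
    for J in combinations(range(2), j):
        b = tuple(a[i] - (1 if i in J else 0) for i in range(2))
        rhs += weyl_dim(tuple(x + Fraction(1, 2) for x in b))
assert lhs == rhs == 140
```

Here the four summands have dimensions $64+20+40+16=140=35\cdot 4$.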
\[l\_odm\] Let $\ell\in{\mathbb N}_0$. (i) We have $${\mathcal{M}}_{\ell}={\mathcal{M}}^S_{\ell}\oplus\sum_{1\leq i<j\leq k} h_{ji}{\mathcal{M}}_{\ell}.$$ Here the sum in the second term on the right-hand side is not necessarily direct.
\(ii) In particular, we have ${\mathcal{M}}_{\ell}={\mathcal U}({\mathfrak{t}}_+){\mathcal{M}}^S_{\ell}$ where ${\mathcal U}({\mathfrak{t}}_+)$ is the universal enveloping algebra of ${\mathfrak{t}}_+$.
With respect to the Fischer inner product, we have inside ${\mathcal{M}}_{\ell}$ that $$(\sum_{1\leq i<j\leq k} h_{ji}{\mathcal{M}}_{\ell})^{\perp}={\mathcal{M}}^S_{\ell},$$ which implies (i).
By repeated application of the claim (i), we get (ii).
\[t\_idm\] Under the action of ${{Spin}}(m)\times {\mathfrak{gl}}(k)$, we get a decomposition of monogenic polynomials $$\label{idm}
{\mathcal{M}}=\bigoplus_a{\mathcal{M}}_{(a)}\text{\ \ with\ \ }{\mathcal{M}}_{(a)}\simeq{\mathcal{M}}^S_{a}\otimes F_{a+\frac m2 1_k}$$ where the direct sum is taken over all partitions $a\in{\mathbb N}_0^k$. The decomposition is irreducible in the odd dimension $m$. In the even dimension $m$, we have ${\mathcal{M}}_{(a)}={\mathcal{M}}^+_{(a)}\oplus{\mathcal{M}}^-_{(a)}$ with ${\mathcal{M}}^{\pm}_{(a)}\simeq{\mathcal{M}}^{S,\pm}_{a}\otimes F_{a+\frac m2 1_k}$.
For simplicity, we limit ourselves to the odd dimension $m$. The even dimensional case is proved in quite a similar way.
By Lemma \[l\_odm\] and the decomposition \[e\_dsm\], it is clear that we have $${\mathcal{M}}=\bigoplus_a{\mathcal{M}}_{(a)}$$ where we put ${\mathcal{M}}_{(a)}:={\mathcal U}({\mathfrak{t}}_+){\mathcal{M}}^S_{a}.$ Obviously, for a given partition $a\in{\mathbb N}_0^k$, ${\mathcal{M}}_{(a)}$ is a representation of ${{Spin}}(m)\times {\mathfrak{gl}}(k)$. Let $M_a$ be a highest weight vector of the irreducible ${{Spin}}(m)$-module ${\mathcal{M}}^S_{a}$ of weight $a'$. Then, under the action of ${\mathfrak{t}}\simeq{\mathfrak{gl}}(k)$, $M_a$ is also a singular vector in ${\mathcal{M}}_{(a)}$ (i.e., ${\mathfrak{t}}_-\cdot M_a=0$) of weight $a+\frac m2 1_k$. Actually, it is easy to see that, under the joint action of ${{Spin}}(m)\times {\mathfrak{gl}}(k)$, $M_a$ is a unique (up to a non-zero multiple) singular vector in ${\mathcal{M}}_{(a)}$. In other words, the ${{Spin}}(m)\times {\mathfrak{gl}}(k)$-module ${\mathcal{M}}_{(a)}$ is irreducible and $${\mathcal{M}}_{(a)}\simeq{\mathcal{M}}^S_{a}\otimes F_{a+\frac m2 1_k},$$ which completes the proof.
A proof of Theorem \[t\_mfd\]
-----------------------------
To prove the main result of the paper we need some auxiliary results.
\[l\_wmfd\] We have $${\mathcal{P}}\otimes{\mathbb S}=\sum_{J,\{n_{ij}\}} \Big(\prod_{1\leq i\leq j\leq k}r_{ij}^{2n_{ij}}\Big) {\underline{x}}_J {\mathcal{M}}$$ where the (not necessarily direct) sum is taken over all sets $J\subset\{1,2,\ldots,k\}$ and all sequences $\{n_{ij}|\ 1\leq i\leq j\leq k\}$ of numbers in ${\mathbb N}_0$.
Let $\ell\in{\mathbb N}_0$. With respect to the Fischer inner product, we have inside ${\mathcal{P}}_{\ell}\otimes{\mathbb S}$ that $$({\underline{x}}_1{\mathcal{P}}_{\ell-1}\otimes{\mathbb S}+\cdots+{\underline{x}}_k{\mathcal{P}}_{\ell-1}\otimes{\mathbb S})^{\perp}={\mathcal{M}}_{\ell}.$$ Therefore we get $${\mathcal{P}}\otimes{\mathbb S}={\mathcal{M}}\oplus({\underline{x}}_1{\mathcal{P}}\otimes{\mathbb S}+\cdots+{\underline{x}}_k{\mathcal{P}}\otimes{\mathbb S}).$$ By induction, we easily obtain that $${\mathcal{P}}\otimes{\mathbb S}=\sum w{\mathcal{M}}$$ where the sum is taken over all finite products $w$ of the variables ${\underline{x}}_1,{\underline{x}}_2,\ldots,{\underline{x}}_k$. To finish the proof we use the relations ${\underline{x}}_j{\underline{x}}_i=-{\underline{x}}_i{\underline{x}}_j-2r^2_{ij}$.
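The anticommutation relation of the vector variables, ${\underline{x}}_i{\underline{x}}_j+{\underline{x}}_j{\underline{x}}_i=-2r^2_{ij}$, follows from the Clifford relations $e_ae_b+e_be_a=-2\delta_{ab}$ of Section 2. A numerical check (illustrative; $m=3$ matrix model $e_j=i\sigma_j$ and the sample points are ours):

```python
# Evaluate the relation x_i x_j + x_j x_i = -2 r^2_ij at sample integer
# points of R^3, in a 2x2 matrix model of the Clifford algebra.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

e = [
    [[0, 1j], [1j, 0]],      # i * sigma_x
    [[0, 1], [-1, 0]],       # i * sigma_y
    [[1j, 0], [0, -1j]],     # i * sigma_z
]

def vec(x):
    """The Clifford vector x_1 e_1 + x_2 e_2 + x_3 e_3 as a 2x2 matrix."""
    return [[sum(x[a] * e[a][i][j] for a in range(3)) for j in range(2)]
            for i in range(2)]

xi, xj = (1, 2, -3), (4, 0, 5)
r2_ij = sum(u * v for u, v in zip(xi, xj))       # Euclidean inner product
Xi, Xj = vec(xi), vec(xj)
anti = [[mul(Xi, Xj)[r][c] + mul(Xj, Xi)[r][c] for c in range(2)]
        for r in range(2)]
assert anti == [[-2 * r2_ij, 0], [0, -2 * r2_ij]]
```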
To complete the proof of the main result, we need to decompose spinor valued spherical harmonics into monogenic polynomials. To do this, for $\ell\in{\mathbb N}_0$, it is easy to verify the decomposition $$\label{e_HarmDecomp}
{\mathcal{P}}_{\ell}\otimes{\mathbb S}=({\mathcal{H}}_{\ell}\otimes{\mathbb S})\oplus\sum_{1\leq i\leq j\leq k} r^2_{ij}\;({\mathcal{P}}_{\ell-2}\otimes{\mathbb S}).$$ Indeed, with respect to the Fischer inner product, we have inside ${\mathcal{P}}_{\ell}\otimes{\mathbb S}$ that $$\Big(\sum_{1\leq i\leq j\leq k} r^2_{ij}\;({\mathcal{P}}_{\ell-2}\otimes{\mathbb S})\Big)^{\perp}={\mathcal{H}}_{\ell}\otimes{\mathbb S}.$$ The projection from ${\mathcal{P}}_{\ell}\otimes{\mathbb S}$ onto ${\mathcal{H}}_{\ell}\otimes{\mathbb S}$ corresponding to the decomposition \[e\_HarmDecomp\] is denoted by $\pi$, and we call it the harmonic projection.
\[t\_idh\] For $\ell\in{\mathbb N}_0$, we have $${\mathcal{H}}_{\ell}\otimes{\mathbb S}=\bigoplus_J \pi({\underline{x}}_J{\mathcal{M}}_{\ell-|J|})$$ where the direct sum is taken over all sets $J\subset\{1,2,\ldots,k\}$ and $|J|$ is the number of elements of $J$. Here $\pi:{\mathcal{P}}_{\ell}\otimes{\mathbb S}\mapsto{\mathcal{H}}_{\ell}\otimes{\mathbb S}$ is the harmonic projection. In addition, for any $\ell\in{\mathbb N}_0$, the following vector spaces are isomorphic $$\pi({\underline{x}}_J{\mathcal{M}}_{\ell})\simeq {\underline{x}}_J{\mathcal{M}}_{\ell}\simeq{\mathcal{M}}_{\ell}.$$
By Lemma \[l\_wmfd\] and the decomposition \[e\_HarmDecomp\], for a given $\ell\in{\mathbb N}_0$, we have $${\mathcal{H}}_{\ell}\otimes{\mathbb S}=\sum_J \pi({\underline{x}}_J{\mathcal{M}}_{\ell-|J|})$$ where the sum is taken over all sets $J\subset\{1,2,\ldots,k\}$. To show that the sum is actually direct it is sufficient to prove that the ${{Spin}}(m)$-modules $M:={\mathcal{H}}_{\ell}\otimes{\mathbb S}$ and $$N:=\bigoplus_J {\mathcal{M}}_{\ell-|J|}$$ where the direct sum is taken over all sets $J\subset\{1,2,\ldots,k\}$ are isomorphic. To do this, we show that multiplicities of each submodule ${\mathcal{M}}^S_a$ in the modules $M$ and $N$ are the same. Indeed, let $j=1,2,\ldots,k$ and $a\in{\mathbb N}_0^k$ be a partition such that $\ell-j=|a|$ where $|a|=a_1+a_2+\cdots+a_k$. For ${{Spin}}(m)$-modules $A,B$, denote the multiplicity of $A$ in $B$ by $[A:B]$. Then, by Theorem \[t\_idm\], we have $$\label{multN}
[{\mathcal{M}}^S_a:N]=\sum_{J:|J|=j} [{\mathcal{M}}^S_a:{\mathcal{M}}_{\ell-|J|}]={k \choose j}\dim F_{a}$$ where the sum in the middle expression is taken over all subsets $J\subset\{1,2,\ldots,k\}$ with $j$ elements. Here we use the fact that $\dim F_{a+\frac m2 1_k}=\dim F_{a}$. On the other hand, using Lemma \[l\_idhs\], we get $$\label{multM}
[{\mathcal{M}}^S_a:M]=\sum_{J:|J|=j} \dim F_{a+\epsilon(J)}$$ because $[{\mathcal{M}}^S_a:{\mathcal{H}}^S_{a+\epsilon(J)}\otimes{\mathbb S}]=1$ and $[{\mathcal{H}}^S_{a+\epsilon(J)}\otimes{\mathbb S}:{\mathcal{H}}_{\ell}\otimes{\mathbb S}]=\dim F_{a+\epsilon(J)}$. Here $F_b=0$ and ${\mathcal{H}}^S_b=0$ unless $b\in{\mathbb N}_0^k$ is a partition. Hence to prove the equality $[{\mathcal{M}}^S_a:M]=[{\mathcal{M}}^S_a:N]$ we need to show that $${k \choose j}\dim F_{a}=\sum_{J:|J|=j} \dim F_{a+\epsilon(J)}.$$ But this follows directly from Pieri’s rule for ${{GL}}(k)$-modules $${\mbox{\Large $\wedge$}}^j({\mathbb C}^k)\otimes F_a\simeq\bigoplus_{J:|J|=j} F_{a+\epsilon(J)}$$ where ${\mbox{\Large $\wedge$}}^j({\mathbb C}^k)$ is the $j$-th antisymmetric power of the defining representation ${\mathbb C}^k$ for ${{GL}}(k)$ and $\dim{\mbox{\Large $\wedge$}}^j({\mathbb C}^k)={k \choose j}$, see [@HL].
In addition, for each $\ell\in{\mathbb N}_0$, we have proved that $\pi({\underline{x}}_J{\mathcal{M}}_{\ell})\simeq{\mathcal{M}}_{\ell}.$ Finally, the fact that ${\underline{x}}_J{\mathcal{M}}_{\ell}\simeq{\mathcal{M}}_{\ell}$ is trivial.
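The dimension identity behind the last step of the proof, ${k\choose j}\dim F_a=\sum_{J:|J|=j}\dim F_{a+\epsilon(J)}$, can be checked directly with the ${\mathfrak{gl}}(k)$ product formula (our illustrative example: $k=3$, $a=(2,1,0)$, with $F_b=0$ when $b$ is not a partition, as in the proof):

```python
# Check C(k, j) * dim F_a = sum over |J| = j of dim F_{a + eps(J)},
# a dimension consequence of Pieri's rule for GL(k).
from fractions import Fraction
from itertools import combinations
from math import comb

def gl_dim(lam):
    k = len(lam)
    if any(lam[i] < lam[i + 1] for i in range(k - 1)):
        return Fraction(0)                   # F_b = 0 unless b is a partition
    d = Fraction(1)
    for i in range(k):
        for j in range(i + 1, k):
            d *= Fraction(lam[i] - lam[j] + j - i, j - i)
    return d

k, a = 3, (2, 1, 0)
for j in range(k + 1):
    rhs = sum(gl_dim(tuple(a[i] + (1 if i in J else 0) for i in range(k)))
              for J in combinations(range(k), j))
    assert rhs == comb(k, j) * gl_dim(a)
```

For instance, for $j=2$ both sides equal $3\cdot 8=15+6+3=24$.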
The use of Pieri’s rule for ${{GL}}(k)$-modules in the proof of Theorem \[t\_idh\] is no coincidence. In fact, from the proof, it is not difficult to see that $${\mathcal{H}}\otimes{\mathbb S}\simeq{\mbox{\Large $\wedge$}}({\underline{x}}_1,{\underline{x}}_2,\ldots,{\underline{x}}_k)\otimes{\mathcal{M}}$$ as ${{Spin}}(m)\times {\mathfrak{gl}}(k)$-modules. Here the action of ${\mathfrak{gl}}(k)$ is trivial on ${\mathbb S}$ and the action of ${{Spin}}(m)$ is trivial on ${\mbox{\Large $\wedge$}}({\underline{x}}_1,{\underline{x}}_2,\ldots,{\underline{x}}_k)$.
Let $\ell\in{\mathbb N}_0$. By Lemma \[l\_wmfd\], we know that $$\label{e_mfd1}
{\mathcal{P}}_{\ell}\otimes{\mathbb S}=\sum \Big(\prod_{1\leq i\leq j\leq k}r_{ij}^{2n_{ij}}\Big) {\underline{x}}_J {\mathcal{M}}_t$$ where the sum is taken over all $J$ and $\{n_{ij}\}$ such that $\ell=t+|J|+2\sum_{1\leq i\leq j\leq k} n_{ij}$. On the other hand, by Theorems \[t\_hfd\] and \[t\_idh\], we have $$\label{e_mfd2}
{\mathcal{P}}_{\ell}\otimes{\mathbb S}=\bigoplus \Big(\prod_{1\leq i\leq j\leq k}r_{ij}^{2n_{ij}}\Big) \pi({\underline{x}}_J {\mathcal{M}}_t)\simeq \bigoplus\Big(\prod_{1\leq i\leq j\leq k}r_{ij}^{2n_{ij}}\Big) {\underline{x}}_J {\mathcal{M}}_t$$ where the direct sums are taken over the same set of parameters $J$ and $\{n_{ij}\}$ as in \[e\_mfd1\]. Finally, we get \[e\_mfd\] from \[e\_mfd1\] and \[e\_mfd2\].
Isotypic components
===================
In this section, we describe the structure of isotypic components for ${{Spin}}(m)$ in the space of spinor valued polynomials. To do this, recall the decomposition of ${\mathfrak{osp}}(1|2k)$ from Subsection \[ss\_osp\] $${\mathfrak{osp}}(1|2k)={\mathfrak f}_-\oplus{\mathfrak p}_-\oplus{\mathfrak{t}}\oplus{\mathfrak p}_+\oplus{\mathfrak f}_+.$$ Given a finite dimensional irreducible module $F_{\lambda}$ for ${\mathfrak{t}}\simeq{\mathfrak{gl}}(k)$, we extend the action of ${\mathfrak n}_{-}:={\mathfrak f}_-\oplus{\mathfrak p}_-$ on $F_{\lambda}$ trivially and define the generalized Verma module $V_{\lambda}$ for ${\mathfrak{osp}}(1|2k)$ as the induced module $$V_{\lambda}={\operatorname{Ind}}_{{\mathfrak n}_-\oplus{\mathfrak{t}}}^{{\mathfrak{osp}}(1|2k)} F_{\lambda}.$$
\[t\_isod\] If $m\geq 2k$, then we have $$\label{e_isod}
{\mathcal{P}}(({\mathbb R}^m)^k)\otimes{\mathbb S}\simeq\bigoplus_a {\mathcal{M}}^S_{a}\otimes V_{a+\frac m2 1_k}$$ where the direct sum is taken over all partitions $a\in{\mathbb N}_0^k$.
The decomposition follows directly from Theorems \[t\_mfd\] and \[t\_idm\] and PBW theorem for Lie superalgebras (see [@CW Theorem 1.36, p. 31]).
We expect that the ${\mathfrak{osp}}(1|2k)$-modules $V_{a+\frac m2 1_k}$ that occur in the decomposition are irreducible, but this question seems to remain open.
[99]{}
S.J. Cheng and W. Wang: Dualities and Representations of Lie Superalgebras, American Mathematical Soc., 2012.
D. Constales: The relative position of $L_2$-domains in complex and Clifford analysis, PhD. thesis, State Univ. Ghent, 1989-1990.
D. Constales, P. Van Lancker, F. Sommen: Models for irreducible ${{Spin}}(m)$-modules, Adv. Appl. Clifford Alg., 11 (2001), 271-289.
F. Colombo, F. Sommen, I. Sabadini, D. Struppa: [Analysis of Dirac systems and computational algebra,]{} Birkhäuser, Boston, 2004.
K. Coulembier: The orthosymplectic superalgebra in harmonic analysis, J. Lie Theory 23 (2013), 55-83.
R. Delanghe, F. Sommen, V. Souček: Clifford algebra and spinor-valued functions, Kluwer Academic Publishers, Dordrecht, 1992.
H. De Bie, D. Eelbode, M. Roels: The harmonic transvector algebra in two vector variables, Jour. of Algebra, 473 (2017), 247-282.
K. Dobrev, I. Salom: Positive Energy Unitary Irreducible Representations of the Superalgebras ${\mathfrak{osp}}(1|2n,{\mathbb R})$ and Character Formulae, Proceedings of the VIII Mathematical Physics Meeting, (Belgrade, 24-31 August 2014) SFIN XXVIII (A1), eds. B. Dragovich et al, (Belgrade Inst. Phys. 2015)
V.K. Dobrev, R.B. Zhang: Positive Energy Unitary Irreducible Representations of the Superalgebras ${\mathfrak{osp}}(1|2n, {\mathbb R})$, Phys. Atom. Nuclei, 68 (2005) 1660-1669.
W. Fulton, J. Harris: Representation Theory, a First Course (Graduate Texts in Mathematics 129), Springer-Verlag, New York, 1991.
R. Goodman: Multiplicity-free spaces and Schur-Weyl-Howe duality, in Representations of Real and p-adic Groups (ed. E-C Tan and C-B Zhu), Lecture Note Series–Institute for Mathematical Sciences, Vol. 2, World Scientific, Singapore, 2004.
R. Goodman, N.R. Wallach: Symmetry, Representations, and Invariants, Springer-Verlag, New York, 2009.
R. Howe: Remarks on classical invariant theory, Trans. AMS, 313, 2 (1989), 539-570.
R. Howe, S.T. Lee: Why should the Littlewood-Richardson rule be true? Bull. Amer. Math. Soc. (N.S.) 49 (2012), no. 2, 187-236.
R. Howe, E.-Ch. Tan, J. F. Willenbring: Reciprocity algebras and branching for classical symmetric pairs, in ’Group and analysis’, LMS Lecture Note Ser., 354, CUP, 2008, 191-231.
M. Kashiwara, M. Vergne: On the Segal-Shale-Weil representations and harmonic polynomials, Inv. Math., 44 (1978), 1-44.
P. Van Lancker: The Monogenic Fischer Decomposition: Two Vector Variables, Complex Anal. Oper. Theory 6 (2012), 425-446.
R. Lávička, D. Šmíd, Fischer decomposition for polynomials on superspace, J. Math. Phys. 56, 111704 (2015).
S. Lievens, N.I. Stoilov, J.V. der Jeugt: The Paraboson Fock Space and Unitary Irreducible Representations of the Lie Superalgebra ${\mathfrak{osp}}(1|2n)$, Commun. Math. Phys. 281 (2008), 805-826.
F. Sommen: An algebra of abstract vector variables, Portugal. Math. 54, 3 (1997), 287-310.
[^1]: The support of the grant GACR 17-01171S is gratefully acknowledged.
---
author:
- |
J.A. Gracey,\
Department of Applied Mathematics and Theoretical Physics,\
University of Liverpool,\
P.O. Box 147,\
Liverpool,\
L69 3BX,\
United Kingdom.
title: 'The QCD $\beta$-function at $O(1/N_{\! f})$'
---
[**Abstract.**]{} The leading order coefficients of the $\beta$-function of QCD are computed in a large $N_{\! f}$ expansion. They are in agreement with the three loop $\overline{\mbox{MS}}$ calculation. The method involves computing the anomalous dimension of the operator $(G^a_{\mu\nu})^2$ at the $d$-dimensional fixed point in the non-abelian Thirring model to which QCD is equivalent in this limit. The effect the $O(1/N_{\!f})$ corrections have on the location of the infrared stable fixed point for a range of $N_{\!f}$ is also examined.
The strong force of the standard model is described by quantum chromodynamics, (QCD), an $SU(3)$ gauge theory which is asymptotically free. At large energies the fundamental quarks behave as though they were non-interacting. In terms of the field theory itself this property is a consequence of the leading coefficient of the $\beta$-function being negative. The initial one loop calculation was carried out in [@1]. Higher order corrections have also been determined. The remaining scheme independent term, i.e. the two loop contribution, was calculated in [@2]. The three loop term was computed in the Feynman gauge in [@3] using dimensional regularization in the $\overline{\mbox{MS}}$ scheme. More recently this result was checked in [@4] in an arbitrary covariant gauge, where the remaining renormalization group functions, such as the gluon anomalous dimension, were also deduced. The three loop quark mass anomalous dimension, which is gauge independent, is available too, [@5; @6]. One reason for seeking such precise information is that, for example, it allows one to obtain a more accurate insight into the variation of quantities with energy scale. Fundamental in this respect is the $\beta$-function as it always appears in the appropriate renormalization group equation, (rge).
These higher order analytic calculations of the rge functions are exceedingly tedious, however, due to the huge number of Feynman diagrams that arise. For such results to be credible it is important to have independent checks on the expressions obtained, aside from the obvious one of performing another complete evaluation, which may be a waste of resources. In this letter we provide the results of such a procedure for the QCD $\beta$-function. This is the large ${N_{\!f}}$ technique of determining exact all orders results for the rge functions of gauge theories at successive orders in powers of $1/{N_{\!f}}$, where ${N_{\!f}}$ is the number of fundamental fields. The technique was initially developed for low dimensional models in a series of impressive papers, [@7; @8; @9]. Briefly, the method involves computing appropriate critical exponents at the $d$-dimensional fixed point of the $\beta$-function as ${N_{\!f}}$ $\rightarrow$ $\infty$. Through the critical rge these $d$-dimensional exponents encode all orders information on the coefficients of the corresponding rge function. Clearly the values will overlap with the lowest known orders, providing the partial check we have indicated. From a technical point of view one benefit of this approach is the exploitation of the conformal symmetry at the fixed point, which simplifies the resummation of the minimal set of Feynman diagrams comprising the relevant Schwinger-Dyson equation. The calculation of the $O(1/{N_{\!f}})$ QCD $\beta$-function given here completes the leading order analysis, as the quark, gluon and ghost dimensions were deduced in [@10] in the Landau gauge and agreed with the three loop results of [@11; @4]. Another motivation arises from a comment in [@4] in regard to future calculations, where it is indicated that the four loop QCD $\beta$-function is attainable. The main obstacle, though, would appear to be correctly generating and treating the vast numbers of Feynman diagrams. Therefore the new coefficients we will deduce from our results will be important in this respect.
We recall that the $O(1/{N_{\!f}})$ computation of the QED $\beta$-function is available, [@12]. That calculation was carried out by inserting the implicit bubble sum of the photon propagator in the $2$ and $3$ point functions and then deducing the $\overline{\mbox{MS}}$ coefficients of the renormalization constants using dimensional regularization. Those results have been reproduced in the critical point approach, [@13]. One interesting aspect of [@12] was the search for other fixed points in the strictly four dimensional QED $\beta$-function, for a range of values of the coupling. Although none were observed it would be a worthwhile exercise to repeat that analysis in the non-abelian case especially as at two loops such a point exists, [@14], for a range of ${N_{\!f}}$.
We recall the fundamental ingredients for treating QCD in large ${N_{\!f}}$ in our approach. The lagrangian is $$L ~=~ i \bar{\psi}^{iI} {D \!\!\!\! /}\psi^{iI} ~-~ \frac{(G^a_{\mu \nu})^2}{4e^2}$$ where $\psi^{iI}$ is the quark field, $A^a_\mu$ is the gluon field, the covariant derivative is $D_\mu$ $=$ $\partial_\mu$ $+$ $iT^a A_\mu^a$ with $T^a$ the generators of the colour gauge group and $G^a_{\mu\nu}$ $=$ $\partial_\mu A^a_\nu$ $-$ $\partial_\nu A^a_\mu$ $+$ $f^{abc}A^b_\mu A^c_\nu/e$ is the field strength with $f^{abc}$ the structure constants. The ranges of the indices are $1$ $\leq$ $i$ $\leq$ ${N_{\!f}}$, $1$ $\leq$ $a$ $\leq$ $(N^2_c-1)$ and $1$ $\leq$ $I$ $\leq$ $N_c$ and the Casimirs are $\mbox{tr}(T^aT^b)$ $=$ $T(R)\delta^{ab}$, $T^aT^a$ $=$ $C_2(R)$ and $f^{acd}f^{bcd}$ $=$ $C_2(G)
\delta^{ab}$. To three loops, in $d$-dimensions, \[1-4\], $$\begin{aligned}
\beta(g) &=& (d-4)g + \left[ \frac{2}{3}T(R){N_{\!f}}- \frac{11}{6}C_2(G) \right]
g^2 \nonumber \\
&+& \left[ \frac{1}{2}C_2(R)T(R){N_{\!f}}+ \frac{5}{6}C_2(G)T(R){N_{\!f}}- \frac{17}{12}C^2_2(G) \right] g^3 \nonumber \\
&-& \left[ \frac{11}{72} C_2(R)T^2(R){N_{\!f}}^2 + \frac{79}{432} C_2(G) T^2(R)
{N_{\!f}}^2 + \frac{1}{16} C^2_2(R) T(R) {N_{\!f}}\right. \nonumber \\
&&-~ \left. \frac{205}{288}C_2(R)C_2(G)T(R){N_{\!f}}- \frac{1415}{864} C^2_2(G)T(R){N_{\!f}}+ \frac{2857}{1728}C^3_2(G)
\right] g^4 + O(g^5) \end{aligned}$$ where our coupling $g$ is $g$ $=$ $(e/2\pi)^2$. The presence of the $O(g)$ term of (2), corresponding to the dimension of the coupling in $d$-dimensions, gives rise to our non-trivial fixed point, $g_c$. Explicitly $$\begin{aligned}
g_c &=& \frac{3\epsilon}{T(R){N_{\!f}}} + \frac{1}{4T^2(R){N_{\!f}}^2} \left[ \frac{}{}
33C_2(G)\epsilon - \left( 27C_2(R) + 45C_2(G)\right) \epsilon^2 \right.
\nonumber \\
&&+~ \left. \left( \frac{99}{4}C_2(R) + \frac{237}{8} C_2(G) \right)
\epsilon^3 + O(\epsilon^4) \right] + O \left( \frac{1}{{N_{\!f}}^3} \right)\end{aligned}$$ where $d$ $=$ $4$ $-$ $2\epsilon$. In the neighbourhood of this point the quark and gluon anomalous dimension were deduced in the Landau gauge as, at leading order in $1/{N_{\!f}}$, respectively, [@10], $$\begin{aligned}
\eta &=& \frac{C_2(R)\eta^{\mbox{o}}_1}{T(R){N_{\!f}}} \\
\eta \, + \, \chi &=& - \, \frac{C_2(G) \eta^{\mbox{o}}_1}{2(\mu-2)T(R){N_{\!f}}} \end{aligned}$$ where $\eta^{\mbox{o}}_1$ $=$ $(2\mu-1)(\mu-2)\Gamma(2\mu)/[4\Gamma^2(\mu)
\Gamma(\mu+1) \Gamma(2-\mu)]$ and $d$ $=$ $2\mu$. Moreover, the asymptotic scaling forms of the respective propagators are, as $k^2$ $\rightarrow$ $\infty$, $$\psi(k) ~\sim~ \frac{A{k \!\!\! /}}{(k^2)^{\mu-\alpha}} ~~,~~
A_{\mu\nu}(k) ~\sim~ \frac{B}{(k^2)^{\mu-\beta}}\left[ \eta_{\mu\nu}
- \frac{k_\mu k_\nu}{k^2} \right]$$ where $\alpha$ $=$ $\mu$ $+$ ${\mbox{\small{$\frac{1}{2}$}}}\eta$, $\beta$ $=$ $1$ $-$ $\eta$ $-$ $\chi$ and $A$ and $B$ are amplitudes, though only the combination $z$ $=$ $A^2B$ appears in calculations. Specifically $z$ $=$ $\Gamma(\mu+1)\eta^{\mbox{o}}_1/[2(2\mu-1)(\mu-2)T(R){N_{\!f}}]$.
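As a consistency check (ours, not part of the original letter), the expansion (3) can be recovered symbolically from the three loop $\beta$-function (2): substituting the ansatz $g_c = a/{N_{\!f}} + c/{N_{\!f}}^2$ and solving order by order in $1/{N_{\!f}}$ fixes $a = 3\epsilon/T(R)$ and reproduces the bracketed series for $c$. A minimal sympy sketch:

```python
import sympy as sp

# Symbols: x = 1/N_f, eps from d = 4 - 2*eps; T, CR, CG stand for T(R), C_2(R), C_2(G).
eps, x = sp.symbols('epsilon x', positive=True)
T, CR, CG = sp.symbols('T C_R C_G', positive=True)
a, c = sp.symbols('a c')

N = 1/x
# Coefficients of g^2, g^3, g^4 in the three loop beta-function (2)
b0 = sp.Rational(2, 3)*T*N - sp.Rational(11, 6)*CG
b1 = (sp.Rational(1, 2)*CR + sp.Rational(5, 6)*CG)*T*N - sp.Rational(17, 12)*CG**2
b2 = -(sp.Rational(11, 72)*CR*T**2*N**2 + sp.Rational(79, 432)*CG*T**2*N**2
       + sp.Rational(1, 16)*CR**2*T*N - sp.Rational(205, 288)*CR*CG*T*N
       - sp.Rational(1415, 864)*CG**2*T*N + sp.Rational(2857, 1728)*CG**3)

gc = a*x + c*x**2                      # ansatz g_c = a/N_f + c/N_f^2
beta = sp.expand(-2*eps*gc + b0*gc**2 + b1*gc**3 + b2*gc**4)

# O(1/N_f): -2*eps*a + (2/3)*T*a^2 = 0, so the non-trivial root is a = 3*eps/T
a_sol = [s for s in sp.solve(beta.coeff(x, 1), a) if s != 0][0]

# O(1/N_f^2) is linear in c; compare with the bracket of (3)
c_sol = sp.solve(beta.coeff(x, 2).subs(a, a_sol), c)[0]
target = (33*CG*eps - (27*CR + 45*CG)*eps**2
          + (sp.Rational(99, 4)*CR + sp.Rational(237, 8)*CG)*eps**3)/(4*T**2)

print(sp.simplify(a_sol - 3*eps/T), sp.simplify(c_sol - target))
```

Both differences simplify to zero, confirming (3) to the order displayed.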
One feature which simplifies the fixed point analysis, both for computing the $O(1/{N_{\!f}})$ corrections to $\beta(g)$ and (4,5), arises from the universality class to which QCD belongs. For instance, it is widely accepted that the $O(N)$ bosonic $\sigma$ model and the $O(N)$ $\phi^4$ model are equivalent, or in the same universality class, at the $d$-dimensional fixed point analogous to (3). In other words, critical exponents calculated in either field theory at this fixed point are the same. So if one wished to examine the critical behaviour of the three dimensional $O(3)$ Heisenberg ferromagnet, to which both are equivalent, then either model can be used. Another widely studied equivalence is the Yukawa interaction and the Gross-Neveu model with the same chiral properties, [@15]. Likewise the Thirring model and QED are equivalent and have been the subject of recent interest, [@16; @17]. For QCD the relevant model is the non-abelian Thirring model, (NATM), whose lagrangian is $$L ~=~ i \bar{\psi}^{iI} {\partial \!\!\! /}\psi^{iI} ~+~ \bar{\psi}^{iI} \gamma^\mu
T^a_{IJ} \psi^{iJ} A^a_\mu ~-~ \frac{(A^a_\mu)^2}{2\lambda}$$ where $A^a_\mu$ is an auxiliary field, which if eliminated leads to a $4$-fermi interaction, and $\lambda$ is the coupling whose dimension is $(d-2)$ when compared to the $(d-4)$ of the QCD coupling. It has been demonstrated in [@18] that it is equivalent to QCD in the large ${N_{\!f}}$ limit. One feature of that work was the correct reproduction of the three and four gluon vertices of non-abelian theories, which are absent in (7), by integrating out quark loops in the gluon $3$- and $4$-point functions. For our computation the major simplification is the fact that by calculating with (7) at $g_c$ the resulting exponents, which are universal, will be equivalent to those computed in (1). Then by decoding the exponent using (3), we can deduce information on the perturbative structure of the rge functions. A test of this argument will be the correct reproduction of the $\overline{\mbox{MS}}$ coefficients. Importantly though we need only consider graphs which are built out of the [*single*]{} interaction of (7).
We now turn to the details of the calculation. Ordinarily one computes $\omega$ $=$ $-$ ${\mbox{\small{$\frac{1}{2}$}}}\beta^\prime(g_c)$ by considering corrections to (6) in the Dyson equations, [@7; @13]. Equivalently one can identify the composite operator in the lagrangian whose coupling relates to the ordinary coupling constant, [@9]. Then the anomalous dimension of that operator is related via a scaling law deduced from the lagrangian to $\beta^\prime(g_c)$. In QCD the appropriate operator is $(G^a_{\mu\nu})^2$ as we use a formulation, (1), where the coupling is defined in such a way that the $3$-point interaction is $\bar{\psi}\gamma^\mu T^a\psi A^a_\mu$. Thus the canonical dimensions for the fields essentially satisfy the condition for conformal integration or uniqueness, [@19]. From the second term of (1), therefore, we have the scaling relation $$\omega ~=~ \eta ~+~ \chi ~+~ \chi_G$$ The quantity $\chi_G$ is the critical exponent corresponding to the renormalization of the pure (composite) operator $(G^a_{\mu\nu})^2$, whilst the gluon dimension arises because of the wave function renormalization of the constituent fields of the composite. Thus computing $\chi_G$ gives $\omega$ from (8).
To evaluate $\chi_G$ one substitutes the critical propagators (6) into the relevant $O(1/{N_{\!f}})$ set of Feynman diagrams and determines the residue of the simple pole in $\Delta$, [@8]. This regularizing parameter is introduced by the shift $\beta$ $\rightarrow$ $\beta$ $-$ $\Delta$. The contributing graphs are illustrated in fig. 1 and each is computed in the Landau gauge to avoid mixing with $(\partial^\mu A^a_\mu)^2$, [@9]. The first two graphs occur in QED and have been computed in [@13]. Here we note their respective colour group factors are $C_2(R)$ and $[C_2(R)$ $-$ $C_2(G)/2]$. The third graph arises from the cubic term of the operator and it and the final graph are purely non-abelian, each having group factors $C_2(G)$. The computation was carried out by the application of standard techniques for massless integrals including integration by parts and conformal methods. Nevertheless several difficult subintegrals lurk within the final graph which were tedious to determine. To verify that we had obtained the correct values for them we calculated several graphs of [@9] which contained the same subintegrals and checked that the total expression we computed agreed with the values given in [@9]. Useful in this and other respects were the packages [Reduce]{} [@20] and [Form]{} [@21]. We note that the values are respectively, $$\begin{aligned}
&& - \, \frac{\mu(\mu-1)(2\mu-1)\eta^{\mbox{o}}_1}{(\mu+1)} ~~,~~
\frac{(4\mu^2+\mu-9)\eta^{\mbox{o}}_1}{(\mu+1)} ~~,~~
- \, \frac{(4\mu^3-2\mu^2-4\mu+1)\eta^{\mbox{o}}_1}{2(\mu+1)(2\mu-1)(\mu-2)}
\nonumber \\
&& \quad \quad \quad \quad \quad \quad
\frac{(4\mu^6-6\mu^5+18\mu^4-67\mu^3+85\mu^2-38\mu+6)\eta^{\mbox{o}}_1}
{4(2\mu-1)(\mu+1)(\mu-1)(\mu-2)} \end{aligned}$$ Further, at leading order there are no ghost contributions. One can see this by attempting to include them in the formalism we will describe later and then observing that the first appearance of any contribution is at $O(1/{N_{\!f}}^2)$. This feature was also observed in [@9], where the non-abelian generalization of the $CP(N)$ $\sigma$ model was studied. Indeed we make use of some of the observations of [@9] here.
The final result is $$\omega ~=~ (\mu - 2) ~-~ \left[ (2\mu-3)(\mu-3)C_2(R)
- \frac{(4\mu^4 - 18\mu^3 + 44\mu^2 - 45\mu + 14)C_2(G)}{4(2\mu-1)(\mu-1)}
\right] \frac{\eta^{\mbox{o}}_1}{T(R)N_{\! f}}$$ A final check on (10) is that it correctly reproduces the $O(1/{N_{\!f}})$ terms of the three loop $\beta$-function of (2), \[1-4\]. This amounts to the terms with $C_2(G)$ as those with $C_2(R)$ have been verified for QED in [@12; @2]. In three dimensions $$\omega ~=~ - \, \frac{1}{2} ~-~ \frac{10C_2(G)}{3\pi^2T(R)N_{\! f}}$$ From (10) we can now deduce higher order coefficients which will appear in the $\overline{\mbox{MS}}$ $\beta$-function, by carrying out the $\epsilon$-expansion of (10) and using (3). Thus defining the leading order large ${N_{\!f}}$ coefficients by $a_n$, $$\beta(g) ~=~ \beta_0g^2 + \sum_{n=1}^\infty a_{n+1} [T(R){N_{\!f}}]^n g^{n+2}$$ with $\beta_0$ $=$ $[2T(R){N_{\!f}}/3$ $-$ $11C_2(G)/6]$, then $$\begin{aligned}
a_4 &=& - ~ \frac{[154C_2(R) + 53C_2(G)]}{3888} \nonumber \\
a_5 &=& \frac{[(288\zeta(3) + 214)C_2(R) + (480\zeta(3) - 229)C_2(G)]}{31104}
\nonumber \\
a_6 &=& \frac{1}{233280}[(864\zeta(4) - 1056\zeta(3) + 502)C_2(R)
+ (1440\zeta(4) - 1264\zeta(3) - 453)C_2(G)] \nonumber \\
a_7 &=& \frac{1}{1679616}[(3456\zeta(5) - 3168\zeta(4) - 2464\zeta(3)
+ 1206)C_2(R) \nonumber \\
&& ~~~~~~~~~~~~~ + \, (5760\zeta(5) - 3792\zeta(4) - 848\zeta(3)
- 885)C_2(G)] \end{aligned}$$
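As a sanity check on (10) (our verification, not part of the original), one can substitute $\mu = 3/2$: the $C_2(R)$ term vanishes, since $2\mu - 3 = 0$, and the remaining $C_2(G)$ term reduces to the three dimensional value quoted above. In sympy:

```python
import sympy as sp

mu = sp.Symbol('mu', positive=True)
CR, CG, T, Nf = sp.symbols('C_R C_G T N_f', positive=True)

# eta_1^o and omega as in (10), with d = 2*mu
eta1 = ((2*mu - 1)*(mu - 2)*sp.gamma(2*mu)
        / (4*sp.gamma(mu)**2*sp.gamma(mu + 1)*sp.gamma(2 - mu)))
omega = (mu - 2) - ((2*mu - 3)*(mu - 3)*CR
         - (4*mu**4 - 18*mu**3 + 44*mu**2 - 45*mu + 14)*CG
           / (4*(2*mu - 1)*(mu - 1)))*eta1/(T*Nf)

omega_3d = sp.simplify(omega.subs(mu, sp.Rational(3, 2)))
expected = -sp.Rational(1, 2) - 10*CG/(3*sp.pi**2*T*Nf)
print(sp.simplify(omega_3d - expected))
```

The difference simplifies to zero, in agreement with the three dimensional value of $\omega$.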
Having obtained the set $\{a_n\}$ for non-abelian theories we can examine the purely four dimensional $\beta$-function and search for fixed points other than the well known infrared stable point of Banks and Zaks, $g^{BZ}_c$, [@14]. Its existence is important for recent developments in supersymmetric theories in relation to electric-magnetic duality, [@23; @24]. Those infrared fixed points are determined by using exact non-perturbative arguments in the limit $N_c$, ${N_{\!f}}$ $\rightarrow$ $\infty$ with ${N_{\!f}}/3N_c$ held fixed, [@24]. (The one loop coefficient of the $\beta$-function for that model is $({N_{\!f}}$ $-$ $3N_c)$ for $SU(N_c)$.) Further, in the context of $1/{N_{\!f}}$ expansions, a $(16{\mbox{\small{$\frac{1}{2}$}}}$ $-$ ${N_{\!f}})$ expansion from $g^{BZ}_c$ has been used to obtain an estimate for $\alpha_S$ for low ${N_{\!f}}$, [@25]. Before studying the effect $O(1/{N_{\!f}})$ corrections have on $\beta(g)$, we first recall properties of $g^{BZ}_c$. In [@14] it was observed that for a range of ${N_{\!f}}$ the two terms of the two loop $\beta$-function have different signs, which therefore gives rise to a non-zero critical coupling, $g^{BZ}_c$. For $SU_c(3)$ this range is $8$ $<$ ${N_{\!f}}$ $<$ $17$, [@14], and we have recorded the explicit values of $g^{BZ}_c$ in this case in our notation in table 1. Subsequently one can also study the effect that the inclusion of the three loop term of the $\beta$-function has on the location of $g^{BZ}_c$. We have analysed (2) numerically and determined the three loop values of $g^{BZ}_c$ which are given in the second column of table 1. Several features are apparent. First, the range of ${N_{\!f}}$ for the existence of such an infrared fixed point is extended to $5$ $<$ ${N_{\!f}}$ $<$ $17$. Second, the effect the three loop correction has is to move the location of $g^{BZ}_c$ towards the origin, in other words to a region where perturbation theory would be valid.
These observations, however, ought to be qualified. It is not clear whether this picture is meaningful because as ${N_{\!f}}$ decreases $g^{BZ}_c$ clearly increases away from the region where perturbation theory is useful. In other words, for low values of ${N_{\!f}}$ we cannot make a reliable statement on even, say, the [*three*]{} loop range of ${N_{\!f}}$ for which $g^{BZ}_c$ occurs. One indication of where the perturbative picture may not be valid can be deduced from the values of the critical exponent $\beta^\prime(g^{BZ}_c)$, which is a physically meaningful and calculable quantity. In table 2 we have given the corresponding values for $\beta^\prime(g^{BZ}_c)$ as deduced from the two and three loop values of $g^{BZ}_c$ respectively. Clearly the three loop corrections do not affect the two loop values appreciably for ${N_{\!f}}$ $=$ $14$, $15$ and $16$, suggesting that higher loop corrections are small. For lower values the divergence is evident, indicating that the four and higher loop contributions would be needed to make an accurate estimate of the exponent. In light of the critical coupling being smaller over a larger range of ${N_{\!f}}$, it would seem better, though, to take the three loop values of $\beta^\prime(g^{BZ}_c)$ as being the more reliable.
Now we consider the effect that the $O(1/{N_{\!f}})$ corrections of (10) have. We have studied the case $N_c$ $=$ $3$ in various ways. First, as in [@12] we examined the $\beta$-function given by just taking all the leading order coefficients $a_n$, which was improved by non-abelianization, [@26]. This entails replacing ${N_{\!f}}$ by the one loop $\beta$-function coefficient through the shift ${N_{\!f}}$ $\rightarrow$ $({N_{\!f}}$ $-$ $11C_2(G)/[4T(R)])$. It turns out that in searching for zeroes of the four dimensional $\beta$-function the contributions from these $O(1/\beta_0)$ coefficients on their own are not sufficient for even obtaining a fixed point $g^{BZ}_c$. This is the same as was found in the QED case, [@12], in the range of couplings where the series was convergent. Instead, to improve this situation we took the two and three loop $\beta$-functions of (2) and then included all subsequent information contained in (10). The point of view is that one can at least study the effect the $O(1/{N_{\!f}})$ corrections have on the fixed point $g^{BZ}_c$ which is known to exist. It turns out that in this approach we did not observe any non-trivial fixed points other than $g^{BZ}_c$ in $g$ $>$ $0$, independent of the number of terms included. From a practical point of view, in our analysis we truncated the series for $\beta(g)$ at around $14$ terms. The effect of including more terms is negligible on the results we give in both tables until ${N_{\!f}}$ ${\mbox{\footnotesize{$\stackrel{<}{\sim}$}}}$ $8$, when perturbation theory cannot be regarded as reliable anyway. The remaining columns of our tables are the results of this analysis. Clearly the effect the $O(1/\beta_0)$ corrections have is not to move $g^{BZ}_c$ significantly from the three loop value for a large range of ${N_{\!f}}$. Also for ${N_{\!f}}$ $=$ $14$, $15$ and $16$ the values of $\beta^\prime(g^{BZ}_c)$ are not that different from the perturbative estimates of $\beta^\prime(g^{BZ}_c)$.
In conclusion, we have produced the leading order corrections to the QCD $\beta$-function in a $1/{N_{\!f}}$ expansion, which extends the calculation of [@12]. Consequently we examined the effect they had on the known infrared fixed point in the four dimensional $\beta$-function. It transpires that perturbation theory is valid for analysing the fixed point when the value of ${N_{\!f}}$ is near the upper bound for the existence of $g^{BZ}_c$, and estimates for a critical exponent were obtained there. It would be interesting, though, to compare the values of these exponents with results from other techniques, such as the lattice, which would be expected to reliably cover the lower part of the range. In this case a resummation would be necessary to try and improve the lack of convergence which is apparent when the three loop values are compared to the two loop ones. Further, we believe it would be useful to repeat our analysis for the supersymmetric extension of QCD in large ${N_{\!f}}$ in relation to [@23; @24]. Once the analogous expression to (10) is available it would be possible to study the effect the $O(1/{N_{\!f}})$ corrections have on the infrared fixed point when $N_c$ is large, as well as for orthogonal and symplectic gauge groups. As a first step one would need to determine which field theory supersymmetric QCD is equivalent to at the $d$-dimensional fixed point and verify, for example, that the correct triple and quartic interactions are obtained in the large ${N_{\!f}}$ limit, similar to [@18]. It would be hoped that there is a small set of interactions, as in the non-abelian Thirring model, to reduce the amount of calculation that would occur.
[**Acknowledgements.**]{} This work was carried out with the support of PPARC through an Advanced Fellowship. The author thanks Drs D.J. Broadhurst, D.R.T. Jones and H. Osborn for useful conversations and Dr T.J. Morris for drawing his attention to [@18]. The figures were designed using the package [FeynDiagram]{} version 1.21.
[99]{}
D.J. Gross & F.J. Wilczek, Phys. Rev. Lett. [**30**]{} (1973), 1343; H.D. Politzer, Phys. Rev. Lett. [**30**]{} (1973), 1346.
W.E. Caswell, Phys. Rev. Lett. [**33**]{} (1974), 244; D.R.T. Jones, Nucl. Phys. [**B75**]{} (1974), 531.
O.V. Tarasov, A.A. Vladimirov & A.Yu. Zharkov, Phys. Lett. [**93B**]{} (1980), 429.
S.A. Larin & J.A.M. Vermaseren, Phys. Lett. [**B303**]{} (1993), 334.
D.V. Nanopoulos & D.A. Ross, Nucl. Phys. [**B157**]{} (1979), 273; R. Tarrach, Nucl. Phys. [**B183**]{} (1981), 384.
O.V. Tarasov, JINR preprint P2-82-900 (in Russian).
A.N. Vasil’ev, Yu.M. Pis’mak & J.R. Honkonen, Theor. Math. Phys. [**46**]{} (1981), 157; [*ibid.*]{} [**47**]{} (1981), 291.
A.N. Vasil’ev & M.Yu. Nalimov, Theor. Math. Phys. [**55**]{} (1982), 423; [*ibid.*]{} [**56**]{} (1982), 643.
A.N. Vasil’ev, M.Yu. Nalimov & J.R. Honkonen, Theor. Math. Phys. [**58**]{} (1984), 111.
J.A. Gracey, Phys. Lett. [**B318**]{} (1993), 177.
E.S. Egorian & O.V. Tarasov, Theor. Math. Phys. [**41**]{} (1979), 26.
A. Palanques-Mestre & P. Pascual, Commun. Math. Phys. [**95**]{} (1984), 277.
J.A. Gracey, Int. J. Mod. Phys. [**A8**]{} (1993), 2465.
T. Banks & A. Zaks, Nucl. Phys. [**B196**]{} (1982), 189.
J. Zinn-Justin, Nucl. Phys. [**B367**]{} (1991), 105.
S.J. Hands, Phys. Rev. [**D51**]{} (1995), 5816.
K.-I. Kondo, Nucl. Phys. [**B450**]{} (1995), 251.
A. Hasenfratz & P. Hasenfratz, Phys. Lett. [**B297**]{} (1992), 166.
M. d’Eramo, L. Peliti & G. Parisi, Lett. Nuovo Cim. [**2**]{} (1971), 878.
A.C. Hearn, “[REDUCE]{} Users Manual” version 3.4, Rand publication CP78, (1991).
J.A.M. Vermaseren, “[FORM]{}” version 1.1, CAN publication, (1992).
S.G. Gorishny, A.L. Kataev, S.A. Larin & L.R. Surguladze, Phys. Lett. [**B256**]{} (1991), 81.
N. Seiberg & E. Witten, Nucl. Phys. [**B426**]{} (1994), 19.
N. Seiberg, Nucl. Phys. [**B435**]{} (1995), 129.
P.M. Stevenson, Phys. Lett. [**B331**]{} (1994), 187.
D.J. Broadhurst & A.G. Grozin, Phys. Rev. [**D52**]{} (1995), 4082.
----------- ---------- ------------ -------------------- --------------------
$N_{\!f}$ Two loop Three loop Two loop Three loop
$+$ $O(1/\beta_0)$ $+$ $O(1/\beta_0)$
6 - 81.682972 (15.663662) 8.095948
7 - 5.972522 (17.792600) 4.922952
8 - 2.658882 (16.790094) 2.717457
9 4.166667 1.475455 13.456718 1.510817
10 1.522523 0.871775 7.901656 0.879995
11 0.720238 0.516977 1.617025 0.518561
12 0.360000 0.295517 0.360750 0.295784
13 0.173759 0.155581 0.173789 0.155616
14 0.073746 0.069899 0.073751 0.069903
15 0.022727 0.022307 0.022727 0.022306
16 0.002208 0.002203 0.002207 0.002203
----------- ---------- ------------ -------------------- --------------------
[Table 2. Values of $\beta^\prime(g^{BZ}_c)$ for $SU_c(3)$.]{}
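The two loop column of table 2 can be reproduced directly (a numerical check of ours, not part of the original letter): truncating (2) to $\beta(g) = \beta_0 g^2 + \beta_1 g^3$ gives $g^{BZ}_c = -\beta_0/\beta_1$ and hence $\beta^\prime(g^{BZ}_c) = \beta_0^2/\beta_1$, where for $SU_c(3)$ one has $T(R) = 1/2$, $C_2(R) = 4/3$ and $C_2(G) = 3$:

```python
# Two loop beta'(g_c^BZ) = beta_0^2/beta_1 for SU_c(3)
from fractions import Fraction as F

T, CR, CG = F(1, 2), F(4, 3), F(3)

def beta0(nf):
    # one loop coefficient: 2*T(R)*Nf/3 - 11*C_2(G)/6
    return F(2, 3)*T*nf - F(11, 6)*CG

def beta1(nf):
    # two loop coefficient read off from (2)
    return (F(1, 2)*CR + F(5, 6)*CG)*T*nf - F(17, 12)*CG**2

def omega_two_loop(nf):
    return float(beta0(nf)**2 / beta1(nf))

for nf in range(9, 17):
    print(nf, f"{omega_two_loop(nf):.6f}")
```

The output matches the two loop column of table 2 entry by entry, from 4.166667 at ${N_{\!f}}$ $=$ $9$ down to 0.002208 at ${N_{\!f}}$ $=$ $16$.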
---
abstract: |
In this paper we use the Klazar-Marcus-Tardos method (see [@MT]) to prove that if a hereditary property of partitions $\P$ has super-exponential speed, then for every $k$-permutation $\pi$, $\P$ contains the partition of $[2k]$ with parts $\{\{i,\pi(i) + k\} : i \in [k]\}$. We also prove a similar jump, from exponential to factorial, in the possible speeds of monotone properties of ordered graphs, and of hereditary properties of ordered graphs not containing large complete, or complete bipartite ordered graphs.
Our results generalize the Stanley-Wilf Conjecture on the number of $n$-permutations avoiding a fixed permutation, which was recently proved by the combined results of Klazar [@Klaz] and Marcus and Tardos [@MT]. Our main results follow from a generalization to ordered hypergraphs of the theorem of Marcus and Tardos.
address:
- |
Department of Mathematics\
University of Illinois\
1409 W. Green Street\
Urbana, IL 61801
- |
Department of Mathematical Sciences\
The University of Memphis\
Memphis, TN 38152\
and\
Trinity College\
Cambridge CB2 1TQ\
England
- |
Department of Mathematical Sciences\
The University of Memphis\
Memphis, TN 38152
author:
- József Balogh
- Béla Bollobás
- Robert Morris
title: 'Hereditary properties of partitions, ordered graphs and ordered hypergraphs'
---
[^1]
Introduction {#S:intro}
============
In this paper we shall prove that a jump from exponential to factorial speed occurs for properties of combinatorial structures of various types. We request the reader’s patience while we make the various definitions necessary to state our results.
An *ordered hypergraph* $\HH = (V,E,<)$ is a hypergraph – a set of vertices $V$ and edges $E \subset \{A : A \subset V, |A| \ge 2\}$ – together with a linear order $<$ on its vertices. Note that we do not allow edges to be repeated, and that we do not allow edges to consist of a single vertex. An ordered hypergraph $\K = (U,F,<)$ is an *induced sub-hypergraph* of $\HH$ if $U \subset V$ (with the induced ordering), and $F = \{e \cap U : e \in E, |e \cap U| \ge 2\}$. $\K$ is a *sub-hypergraph* of $\HH$ if $U \subset V$ (again with the induced ordering), and $F \subset \{e \cap U : e \in E, |e \cap U| \ge 2\}$. Finally, $\K$ is *contained* in $\HH$ if there exists a sub-hypergraph $\L = (U,D,<)$ of $\HH$, with $|D| = |F| = t$, say, and $f_i \subset d_i$ for each $i \in [t]$ (where $D = \{d_1, \ldots, d_t\}$ and $F = \{f_1, \ldots, f_t\}$).
A collection of ordered hypergraphs is called a *property* if it is closed under order-preserving isomorphisms of the vertex set. A property of ordered hypergraphs $\P$ is called *hereditary* if it is closed under taking induced sub-hypergraphs; it is called *monotone* if it is closed under taking sub-hypergraphs; and it is called *strongly monotone* if it is closed under containment. Observe that any strongly monotone property is monotone, and any monotone property is hereditary.
An *ordered graph* is a graph together with a linear order $<$ on its vertices; equivalently, it is an ordered hypergraph in which each edge has size exactly 2. The definitions of hereditary and monotone properties are therefore inherited (note that in this case the definitions of monotone and strongly monotone coincide).
A *partition* of the set $[n] = \{1, \ldots, n\}$ is an (unordered) collection of disjoint, non-empty sets $\{A_1, \ldots, A_t\}$ such that $A_1 \cup \ldots \cup A_t = [n]$. It is easy to see that a partition may be thought of as an ordered graph in which each component is a clique, or as an ordered hypergraph in which the edges are pairwise disjoint. Thus we obtain the definition of a hereditary property of partitions. Since we have come some distance from the original definition, we remark that if $P = \{A_1, \ldots, A_t\}$ is a partition of $[n]$, and $S$ is a subset of $[n]$ with elements $s_1 < \ldots < s_k$, then the *sub-partition of $P$ induced by $S$* is the following partition of $[k]$. First let $\{B_1, \ldots, B_t\}$ satisfy $i \in B_j$ if and only if $s_i \in A_j$; then delete the empty classes. A property of partitions is hereditary if it is closed under taking sub-partitions.
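The induced sub-partition is easy to compute mechanically. The sketch below (function and variable names are ours, not the paper's) relabels the elements of $S$ by their rank and discards the empty classes:

```python
def induced_subpartition(P, S):
    """Sub-partition of P = {A_1,...,A_t} (a partition of [n]) induced by S.

    The elements s_1 < ... < s_k of S are relabelled i, and empty classes
    are deleted, exactly as in the definition in the text.
    """
    order = {s: i + 1 for i, s in enumerate(sorted(S))}   # s_i -> i
    blocks = [{order[x] for x in A if x in S} for A in P]
    return [B for B in blocks if B]                       # drop empty classes

# The sub-partition of {{1,3},{2,5},{4}} (a partition of [5]) induced by
# S = {2,3,5} relabels 2,3,5 -> 1,2,3, giving the partition {{2},{1,3}} of [3].
print(induced_subpartition([{1, 3}, {2, 5}, {4}], {2, 3, 5}))
```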
Now, given a property $\P$ of ordered hypergraphs, we write $\P_n$ for the collection of distinct (i.e., non-isomorphic) ordered hypergraphs on $n$ vertices in $\P$, and call the function $n \mapsto |\P_n|$ the *speed* (or unlabelled speed) of $\P$. An analogous definition can be made for other combinatorial structures (e.g., graphs, posets, permutations).
We are interested in the (surprising) phenomenon that for many such structures, only very ‘few’ speeds are possible. More precisely, there often exists a family $\Fset$ of functions $f : \N \to \N$ and another function $F : \N \to \N$ with $F(n)$ [*much*]{} larger than $f(n)$ for every $f \in \Fset$, such that if for each $f \in \Fset$ the speed is infinitely often larger than $f(n)$, then it is also larger than $F(n)$ for every $n \in \N$. Putting it concisely: the speed [*jumps*]{} from $\Fset$ to $F$.
The study of the speeds of monotone properties of labelled graphs was introduced over forty years ago by Erdős [@E], and continued by Erdős, Kleitman and Rothschild [@EKR], Erdős, Frankl and Rödl [@EFR], Kolaitis, Prömel and Rothschild [@KPR], Kleitman and Winston [@KW], Hundack, Prömel and Steger [@HPS] and more recently Balogh, Bollobás and Simonovits [@BBS]. A new direction was initiated by Scheinerman and Zito [@SZ], who were the first to study hereditary properties of graphs with speeds below $n^n$. A little later, considerably stronger results were proved by Balogh, Bollobás and Weinreich [@BBW], [@BBW4]. In the range $|\P_n| = 2^{cn^2}$, the main results were proved by Alekseev [@Alekseev], Bollobás and Thomason [@BTbox], [@BT1], and Prömel and Steger [@PS3], [@PS4], [@PS5]. For a review of the early results, see Bollobás [@ICM]. Hereditary properties of other combinatorial structures have not yet been studied in such great detail, but it is likely that many more beautiful theorems await discovery.
In this paper we shall prove that a jump of this type, from exponential to factorial speed, occurs for strongly monotone properties of ordered hypergraphs. As a result of this, we shall be able to prove similar jumps for hereditary properties of partitions, monotone properties of ordered graphs, and hereditary properties of ordered graphs not containing arbitrarily large complete, or complete bipartite ordered graphs. As we shall see, each of these theorems is a generalization of the Stanley-Wilf Conjecture (Theorem A), proved recently by the combined results of Klazar [@Klaz] and Marcus and Tardos [@MT].
Before we begin, we should remark that our main theorem has been proved independently (and at around the same time) by Klazar and Marcus [@KM]. Although we were unaware of their work until after ours was completed, we should note also that many of the ideas in this paper were inspired by the earlier work of Klazar [@Klaz] and of Marcus and Tardos [@MT]. The reader may also wish to refer to some other papers of Klazar [@Klaz2], [@Klaz3] which we later discovered contain some of the ideas (though none of the results) below.
For each $n \in \N$, let $\Pi_n$ denote the collection of all permutations of $[n]$, and let $\Pi = \bigcup_n \Pi_n$. Also, if $\P$ is a property of ordered hypergraphs, and $k, \ell \in \N \cup \{\infty\}$, let $$\P^{(k,\ell)} = \{G \in \P : \Delta(G) \le k\textup{ and } |e| \le \ell \textup{ for every }e \in E(G)\}$$ denote the sub-property consisting of the ordered hypergraphs in which each vertex is contained in at most $k$ edges, and each edge has size at most $\ell$. Note that if $\P$ is hereditary, or monotone, or strongly monotone, then so is $\P^{(k,\ell)}$.
Finally, if $\pi \in \Pi_k$, let $H(\pi)$ denote the ordered hypergraph on vertex set $[2k]$ and with edge set $\{ \{i, \pi(i) + k\} : i \in [k]\}$. We shall also write $H(\pi)$ for the ordered graph with the same vertex and edge sets, and for the partition $\{\{i, \pi(i) + k\} : i \in [k]\}$ of $[2k]$. It will always be clear which of these $H(\pi)$ is.
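As an illustration (ours, with permutations written as tuples of values), the edge set of $H(\pi)$ is immediate to generate:

```python
def H(pi):
    """Edge set of the ordered (hyper)graph H(pi) on [2k], where pi is a
    permutation of [k] given as the tuple (pi(1), ..., pi(k)).  The same
    list of pairwise disjoint pairs also describes the partition H(pi)."""
    k = len(pi)
    return [{i, pi[i - 1] + k} for i in range(1, k + 1)]

# For pi = 231 (pi(1)=2, pi(2)=3, pi(3)=1), H(pi) is a perfect matching
# on [6] between {1,2,3} and {4,5,6}:
print(H((2, 3, 1)))   # [{1, 5}, {2, 6}, {3, 4}]
```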
We are now ready to state the main result of this paper, which was conjectured by Klazar in [@Klaz3], and has been proved independently by Klazar and Marcus [@KM].
\[hypergraphs\] Let $\P$ be a strongly monotone property of ordered hypergraphs. If for every constant $c > 0$ there exists an $N = N(c) \in \N$ such that $|\P_N| > c^N$, then $\P$ contains the ordered hypergraph $H(\pi)$ for every $\pi \in \Pi$, and hence $$|\P_n| \: \ge \: |\P^{(1,2)}_n| \: \ge \: \ds\sum_{k=0}^{\lfloor n/2 \rfloor} {n \choose {2k}} k! \: = \: n^{n/2 + o(n)}$$ for every $n \in \N$. This lower bound is best possible, and there is a unique strongly monotone property of ordered hypergraphs with this speed.
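The lower bound in Theorem \[hypergraphs\] is easy to evaluate numerically; the following quick check (ours, not part of the proof) illustrates that the sum is indeed $n^{n/2 + o(n)}$:

```python
from math import comb, factorial, log

def speed_lower_bound(n):
    """The sum sum_{k=0}^{floor(n/2)} C(n, 2k) * k!: it counts the ordered
    graphs on [n] whose edges form a matching in which every left endpoint
    precedes every right endpoint (choose the 2k endpoints, then a
    permutation matching the first k to the last k)."""
    return sum(comb(n, 2 * k) * factorial(k) for k in range(n // 2 + 1))

# log of the sum divided by (n/2) log n tends to 1:
for n in (10, 100, 1000):
    print(n, round(log(speed_lower_bound(n)) / ((n / 2) * log(n)), 3))
```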
We remark that Scheinerman and Zito [@SZ] proved that a similar jump, from exponential to factorial speed, exists for hereditary properties of labelled graphs. For more involved results, see [@BBW].

Our proof of Theorem \[hypergraphs\] is based on the ideas of Klazar [@Klaz], [@Klaz3], and of Marcus and Tardos [@MT]. Theorem A, below, was proved in two stages: first Klazar [@Klaz] showed that the theorem was a consequence of a conjecture of Füredi and Hajnal [@FH]; then Marcus and Tardos [@MT] proved that conjecture. In Theorem \[genMT\] we shall prove a generalization of the theorem of Marcus and Tardos (Theorem B). We shall then deduce Theorem \[hypergraphs\] using the method of Klazar (see [@Klaz] and [@Klaz3]).
We shall state Theorem \[genMT\] in terms of $(0,1)$–matrices, though it can equally be thought of as a theorem about ordered hypergraphs. To simplify the statement, we need a little notation.
Let $k \in \N$. If $A = (a_{ij})$ and $B = (b_{ij})$ are $k \times k$ matrices, we shall write $(A,B)$ for the $k \times 2k$ matrix obtained by putting $A$ in front of $B$. Thus $(A,B)_{ij}$ is $a_{ij}$ if $j \le k$ and $b_{i(j-k)}$ if $j \ge k+1$. Call two matrices $C$ and $D$ equivalent (and write $C \sim D$) if $D$ is obtained from $C$ by permuting its rows. Let $\Mset(k)$ denote the set of equivalence classes (with respect to $\sim$) in the family of all matrices of the form $(K,L)$, where $K$ and $L$ are $k \times k$ permutation matrices. Note that every such matrix $(K,L)$ is equivalent to a unique matrix $(I,M)$, where $I = (\delta_{ij})$ is the $k \times k$ identity matrix, so $|\Mset(k)| = k!$.
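The normal form $(I,M)$ of an equivalence class can be exhibited computationally; the sketch below (ours) sorts the rows of $(K,L)$ so that the left half becomes the identity, and confirms $|\Mset(k)| = k!$ for small $k$:

```python
from itertools import permutations

def perm_matrix(pi):
    """k x k permutation matrix K with K[i][j] = 1 iff pi(i+1) = j+1."""
    k = len(pi)
    return tuple(tuple(1 if pi[i] == j + 1 else 0 for j in range(k))
                 for i in range(k))

def normal_form(K, L):
    """The unique matrix (I, M) equivalent to (K, L): sort the rows of the
    k x 2k matrix (K, L) by the position of the 1 in the left half."""
    rows = sorted((kr + lr for kr, lr in zip(K, L)),
                  key=lambda r: r.index(1))
    return tuple(rows)

# All pairs of 2x2 permutation matrices fall into |Mset(2)| = 2! classes:
classes = {normal_form(perm_matrix(p), perm_matrix(q))
           for p in permutations((1, 2)) for q in permutations((1, 2))}
print(len(classes))   # prints 2
```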
Finally, if $P$ and $Q$ are $(0,1)$–matrices, then we say that $Q$ is a *sub-matrix* of $P$ if $Q$ is obtained from $P$ by deleting rows and columns. We say that $P$ *contains* $Q = (q_{ij})$ if there exists a sub-matrix $R = (r_{ij})$ of $P$, the same size as $Q$, with $r_{ij} = 1$ whenever $q_{ij} = 1$. If we associate an ordered hypergraph $\HH$ with a $(0,1)$–matrix whose rows are the indicator functions of the edges of $\HH$, and consider two matrices to be the same if they are equivalent, then this concept coincides with hypergraph containment defined above. If $P$ does not contain $Q$ then we say that $P$ *avoids* $Q$.
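Matrix containment as just defined is straightforward to test by brute force; here is a small sketch (ours, adequate only for small matrices):

```python
from itertools import combinations

def contains(P, Q):
    """Does the (0,1)-matrix P contain Q?  That is, is there a sub-matrix
    R of P (rows and columns deleted, order kept), of the same size as Q,
    with R[i][j] = 1 wherever Q[i][j] = 1?"""
    pr, pc = len(P), len(P[0])
    qr, qc = len(Q), len(Q[0])
    for rows in combinations(range(pr), qr):
        for cols in combinations(range(pc), qc):
            if all(P[r][c] >= Q[i][j]
                   for i, r in enumerate(rows)
                   for j, c in enumerate(cols)):
                return True
    return False

# A 1 in position (0,0) and one in (1,2) give a copy of the 2x2 identity:
P = ((1, 1, 0), (0, 0, 1))
print(contains(P, ((1, 0), (0, 1))))   # True
```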
The following theorem is a reformulation of Klazar’s conjecture C4 in [@Klaz3]. It has also been proved independently by Klazar and Marcus [@KM].
\[genMT\] Let $k \in \N$. There exists a constant, $c_k$, such that if $m,n \in \N$ and $A$ is an $m \times n$ $(0,1)$–matrix satisfying
1. at least $c_kn$ of the entries of $A$ are $1$, and
2. each of the rows of $A$ is different,
then $A$ contains some member of each class of $\Mset(k)$.
Notice that Theorem \[genMT\] still holds if condition $(ii)$ is replaced by the condition\
$(ii')$ each of the rows of $A$ has at least $2k$ of its entries $1$,\
since if $A$ satisfies $(ii')$, and any row occurs at least $k$ times in $A$, then $A$ contains every $k \times 2k$ $(0,1)$–matrix.
Füredi and Hajnal [@FH] proved that the extremal number of $1$’s possible in an $n \times n$ $(0,1)$–matrix avoiding the matrix $$S_1 = \left( \begin{array}{cccc} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{array} \right)$$ is (up to a constant) $n\alpha(n)$, where $\alpha(n) \to \infty$ extremely slowly. A simple corollary of Theorem \[genMT\] (with $k = 2$) is that the extremal number of $1$’s if we avoid both $S_1$ and $$S_2 = \left( \begin{array}{cccc} 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \end{array} \right)$$ is $O(n)$. For many more results along these lines, see Tardos [@T].
We shall now note two important (and immediate) consequences of Theorem \[hypergraphs\]. The first of them was conjectured by Klazar in [@Klaz2], and the second was originally proved (although not stated!) by Klazar in [@Klaz] as a consequence of the Füredi–Hajnal Conjecture.
\[parts\] Let $\P$ be a hereditary property of partitions. If for every constant $c > 0$ there exists an $N = N(c) \in \N$ such that $|\P_N| > c^N$, then $\P$ contains the partition $H(\pi)$ for every $\pi \in \Pi$, and hence $$|\P_n| \: \ge \: |\P^{(1,2)}_n| \: \ge \: \ds\sum_{k=0}^{\lfloor n/2 \rfloor} {n \choose {2k}} k! \: = \: n^{n/2 + o(n)}$$ for every $n \in \N$. This lower bound is best possible, and there is a unique hereditary property of partitions with this speed.
\[mono\] Let $\P$ be a monotone property of ordered graphs. If for every constant $c > 0$ there exists an $N = N(c) \in \N$ such that $|\P_N| > c^N$, then $\P$ contains the ordered graph $H(\pi)$ for every $\pi \in \Pi$, and hence $$|\P_n| \: \ge \: |\P^{(1,2)}_n| \: \ge \: \ds\sum_{k=0}^{\lfloor n/2 \rfloor} {n \choose {2k}} k! \: = \: n^{n/2 + o(n)}$$ for every $n \in \N$. This lower bound is best possible, and there is a unique monotone property of ordered graphs with this speed.
For $t \in \N$, let $K_t$ denote the complete ordered graph on $t$ vertices, and let $K_{t,t}$ denote the complete ordered bipartite graph on $[2t]$ with edge set $E(K_{t,t}) = \{\{i,j\} : i \le t < j\}$. We shall deduce the following theorem from Theorem \[mono\].
\[noKt\] Let $\P$ be a hereditary property of ordered graphs such that for some $t \in \N$, neither $K_t$ nor $K_{t,t}$ is in $\P$. If for every constant $c > 0$ there exists an $N = N(c) \in \N$ such that $|\P_N| > c^N$, then $\P$ contains the ordered graph $H(\pi)$ for every $\pi \in \Pi$, and hence $$|\P_n| \: \ge \: |\P^{(1,2)}_n| \: \ge \: \ds\sum_{k=0}^{\lfloor n/2 \rfloor} {n \choose {2k}} k! \: = \: n^{n/2 + o(n)}$$ for every $n \in \N$. This lower bound is best possible, and there is a unique hereditary property containing neither $K_t$ nor $K_{t,t}$ with this speed.
We conjecture that Theorems \[hypergraphs\], \[parts\], \[mono\] and \[noKt\] have the following common generalization.
\[hyperconj\] Let $\P$ be a hereditary property of ordered hypergraphs. If for every constant $c > 0$ there exists an $N = N(c) \in \N$ such that $|\P_N| > c^N$, then $$|\P_n| \: \ge \: \ds\sum_{k=0}^{\lfloor n/2 \rfloor} {n \choose {2k}} k! \: = \: n^{n/2 + o(n)}$$ for every $n \in \N$.
The following statement is a special case of Conjecture \[hyperconj\], but still generalizes Theorems \[parts\], \[mono\] and \[noKt\] (since partitions can be represented by ordered graphs whose components are complete graphs), and would be very interesting in its own right. It was in fact our main motivation for studying ordered hypergraphs and partitions.
\[orderconj\] Let $\P$ be a hereditary property of ordered graphs. If for every constant $c > 0$ there exists an $N = N(c) \in \N$ such that $|\P_N| > c^N$, then $$|\P_n| \: \ge \: \ds\sum_{k=0}^{\lfloor n/2 \rfloor} {n \choose {2k}} k! \: = \: n^{n/2 + o(n)}$$ for every $n \in \N$.
Note that in both conjectures the lower bounds, if true, are best possible (by Lemma \[bound\], below). However it is *not* true that, under the conditions of the conjectures, $\P$ must contain the ordered graph $H(\pi)$ on $[2k]$ with edge set $\{\{i,\pi(i)+k\} : i \in [k]\}$ for every $k \in \N$ and $\pi \in \Pi_k$. To see this, call an ordered graph $G$ on $[n]$ a *co-matching* if $\{x_1,y_1\},\{x_2,y_2\} \in {{[n]} \choose 2} \setminus E(G)$ implies that $|\{x_1,y_1\} \cap \{x_2,y_2\}| = 0$ or $2$, and call $G$ a *star-matching* if (say) $\{x_1,y_1\} \in E(G)$ implies $\{x_1,y_2\} \in E(G)$ for every $y_1 \le y_2 \le n$. The collection of all co-matchings and the collection of all star-matchings are hereditary properties of ordered graphs with super-exponential speeds, but neither contains all the graphs $H(\pi)$.
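The speed of the co-matching property, for instance, can be computed exactly: the complement of a co-matching is a matching, so the speed equals the number of (partial) matchings on $[n]$, which satisfies the involution recurrence $I(n) = I(n-1) + (n-1)\,I(n-2)$ and grows like $n^{n/2 + o(n)}$. A short check (ours):

```python
from math import log

def num_matchings(n):
    """Number of ordered graphs on [n] whose edges form a matching; taking
    complements, this also counts the co-matchings on [n].  Recurrence:
    vertex n is either isolated or matched to one of the other n-1."""
    a, b = 1, 1                        # I(0), I(1)
    for m in range(2, n + 1):
        a, b = b, b + (m - 1) * a
    return b if n >= 1 else 1

# The speed is super-exponential: log I(n) / n is unbounded.
for n in (10, 20, 40):
    print(n, round(log(num_matchings(n)) / n, 3))
```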
For further details on the possible speeds of hereditary properties of ordered graphs see [@BBMord], which considers such properties with speed below $2^n$, and also those with speed above $2^{\eps n^2}$.
The rest of the paper is organised as follows. In Section \[KMT\] we shall state the Klazar-Marcus-Tardos and Marcus-Tardos Theorems, and show that the former is implied by each of Theorems \[parts\], \[mono\] and \[noKt\]; in Section \[genMTsec\] we shall prove Theorem \[genMT\] using the Marcus-Tardos theorem; in Section \[hypersec\] we shall deduce Theorem \[hypergraphs\] from Theorem \[genMT\]; and in Section \[rest\] we shall deduce Theorems \[parts\], \[mono\] and \[noKt\].
The Klazar-Marcus-Tardos and Marcus-Tardos theorems {#KMT}
===================================================
We begin by recalling the theorems of Marcus and Tardos [@MT].
Given $n \in \N$, we shall call a permutation of $[n]$ an $n$-permutation. An $n$-permutation $\pi$ is said to *contain* a $k$-permutation $\sigma$ if there are integers $1 \le a(1) < \ldots < a(k) \le n$ such that $\pi(a(i)) < \pi(a(j))$ if and only if $\sigma(i) < \sigma(j)$. Otherwise $\pi$ is said to *avoid* $\sigma$. A property of permutations is a collection of permutations, closed under isomorphism. A property of permutations is said to be hereditary if it is also closed under containment.
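Permutation containment as defined above is easy to test by brute force over index sets; the following sketch (ours, exponential-time, for illustration only) makes the definition concrete:

```python
from itertools import combinations

def perm_contains(pi, sigma):
    """Does the n-permutation pi contain the k-permutation sigma?  Both
    are tuples (pi(1), ..., pi(n)).  We search for indices a(1) < ... < a(k)
    whose values appear in the same relative order as sigma."""
    n, k = len(pi), len(sigma)
    for idx in combinations(range(n), k):
        if all((pi[idx[i]] < pi[idx[j]]) == (sigma[i] < sigma[j])
               for i in range(k) for j in range(i + 1, k)):
            return True
    return False

# 1324 contains the pattern 213 (e.g. via the entries 3, 2, 4) ...
print(perm_contains((1, 3, 2, 4), (2, 1, 3)))   # True
# ... but avoids 321: it has no decreasing subsequence of length 3.
print(perm_contains((1, 3, 2, 4), (3, 2, 1)))   # False
```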
The following theorem was conjectured by Stanley and Wilf around 1992 (see [@Arratia], [@Bona], [@MT]), and proved by Marcus and Tardos in 2004 (Corollary 2 of [@MT]), using a theorem of Klazar [@Klaz]. This result is usually known as the Stanley-Wilf Conjecture, but we shall refer to it as the Klazar-Marcus-Tardos Theorem, or simply as Theorem A.
Let $\P$ be a hereditary property of permutations. Either $\P$ is the set $\Pi$ of all permutations, so $|\P_n| = n!$ for every $n \in \N$, or there exists a constant $c = c(\P)$ such that $|\P_n| \le c^n$ for every $n \in \N$.
We also state here the theorem of Marcus and Tardos (Theorem 1 of [@MT]), which was originally conjectured by Füredi and Hajnal in [@FH]. In Section \[genMTsec\] we shall use it to prove Theorem \[genMT\].
For every permutation matrix $M$, there exists a constant $C = C(M)$ such that any $n \times n$ $(0,1)$–matrix with at least $Cn$ of its entries $1$ contains $M$.
In this section we shall show that the simplest case of Conjectures \[hyperconj\] and \[orderconj\] (the case in which every $G \in \P$ is an ordered graph with maximum degree at most one) is equivalent to the Klazar-Marcus-Tardos Theorem, and deduce that our main results generalize that theorem. We start however by proving the following lemma, which gives the final implication of Theorems \[hypergraphs\], \[parts\], \[mono\] and \[noKt\].
\[bound\] Let $\P$ be a hereditary property of ordered hypergraphs. If $H(\pi) \in \P$ for every $\pi \in \Pi$, then $$|\P_n| \ge |\P^{(1,2)}_n| \ge \ds\sum_{k=0}^{\lfloor n/2 \rfloor} {n \choose {2k}} k!$$ for every $n \in \N$. Moreover, there is a unique hereditary property containing every $H(\pi)$ with this speed.
Given integers $k,n \in \N$, a subset $A \subset [n]$ of size $2k$ (with elements $a(1) < \ldots < a(2k)$ say), and permutation $\pi \in \Pi_k$, define $G(n,A,\pi)$ to be the ordered hypergraph on vertex set $[n]$, and with edge set $\{\{a(i), a(\pi(i)+k)\} : i \in [k]\}$. Let $\P$ be a hereditary property of ordered hypergraphs with $H(\pi) \in \P$ for every $\pi \in \Pi$. We shall show that $G(n,A,\pi) \in \P$ for every such $n$, $A$ and $\pi$.
Indeed, let $n$, $A$ and $\pi$ be as described, let $X$ be the set of isolated vertices in $G = G(n,A,\pi)$, let $Y = \{v \in X : v < a(k)\}$ and let $Z = X \setminus Y$. Suppose $|Y| = r$ and $|Z| = s$, and consider an ordered graph $H$ on $[-s+1,n+r]$ formed by adding to $G$ an arbitrary matching between the vertices $\{-s+1, \ldots, 0\}$ and $Z$, and an arbitrary matching between the vertices $\{n+1, \ldots, n+r\}$ and $Y$. It is easy to see that $H$ is isomorphic to $H(\sigma)$ for some $\sigma \in \Pi_{k+r+s}$, so $H \in \P$, and that $G$ is an induced subgraph of $H$, so $G \in \P$.
Thus $\P$ contains the ordered hypergraph $G(n,A,\pi)$ for every $k \in \N$, $\pi \in \Pi_k$, $n \ge 2k$ and $A \subset [n]$ with $|A| = 2k$, and hence $$\begin{aligned}
|\P_n| \; \ge \; |\P^{(1,2)}_n| & \ge & |\{(k,\pi,A) : 2k \le n\textup{, }\pi \in \Pi_k\textup{ and }A \in [n]^{(2k)}\}|\\
& = & \ds\sum_{k=0}^{\lfloor n/2 \rfloor} {n \choose {2k}}
k!.\end{aligned}$$
Finally, note that the collection $\Q = \{G(n,A,\pi) : \pi \in \Pi_k$ and $A \in [n]^{(2k)}$ for some $k \le n/2\}$ forms a hereditary property of ordered hypergraphs, and $H(\pi) \in \Q$ for every $\pi \in \Pi$. By the argument above, if $H(\pi) \in \P$ for every $\pi \in \Pi$ then $\Q \subset \P$, so $\Q$ is the unique such hereditary property of ordered hypergraphs with this speed.
We shall now show that the simplest case of Conjecture \[hyperconj\] follows from the Klazar-Marcus-Tardos Theorem. We shall not need this to prove our main results, but the proof is short and has some independent value. Here we use $\G$ to denote a property of ordered graphs, to distinguish it from a property $\P$ of permutations.
\[deg1\] Let $\G$ be a hereditary property of ordered graphs of maximal degree at most $1$. If for every constant $c > 0$ there exists an $N = N(c) \in \N$ such that $|\G_N| > c^N$, then $\G$ contains the ordered graph $H(\pi)$ for every $\pi \in \Pi$, and hence $$|\G_n| \ge |\G^{(1,2)}_n| \ge \ds\sum_{k=0}^{\lfloor n/2 \rfloor} {n \choose {2k}} k!$$ for every $n \in \N$.
Let $\G$ be a hereditary property of ordered graphs of maximal degree at most one, and suppose that for every constant $c_1 > 0$ there exists an $N = N(c_1) \in \N$ such that $|\G_N| > c_1^N$. Given an ordered graph $G \in \G$, we define a permutation $\phi(G)$. Suppose $G$ has $k$ edges, $e_1 = \{a_1,b_1\}, \ldots, e_k = \{a_k,b_k\}$, (where $a_i < b_i$ for each $i \in [k]$), ordered by their left-endpoints, i.e., $a_i < a_j$ if and only if $i < j$ (recall that $\Delta(G) \le 1$). Let $\pi$ be the $k$-permutation such that $b_{\pi(1)} < \ldots < b_{\pi(k)}$, and define $\phi(G) = \pi^{-1}$.
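The map $G \mapsto \phi(G)$ can be made concrete with a short sketch (ours, with edges given as pairs $(a_i,b_i)$, $a_i < b_i$):

```python
def phi(edges):
    """The permutation phi(G) of an ordered graph G of maximal degree at
    most 1.  Edges are sorted by left endpoint; pi is the permutation with
    b_{pi(1)} < ... < b_{pi(k)}, and phi(G) = pi^{-1}."""
    edges = sorted(edges)                    # order by left endpoints a_i
    rights = [b for _, b in edges]
    # pi(j) = index (1-based) of the j-th smallest right endpoint
    pi = sorted(range(1, len(edges) + 1), key=lambda i: rights[i - 1])
    inv = [0] * len(pi)
    for j, i in enumerate(pi, start=1):      # invert pi
        inv[i - 1] = j
    return tuple(inv)

# G = H(21) on [4], with edges {1,4} and {2,3}: phi(G) = 21.
print(phi([(1, 4), (2, 3)]))   # (2, 1)
```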
Let $\P = \{\phi(G) : G \in \G\}$. Since $\G$ is hereditary, so is $\P$: removing an element from a permutation corresponds to removing one of the endpoints of the corresponding edge. By the Klazar-Marcus-Tardos Theorem, either $\P = \Pi$ or there exists a constant $c$ such that $|\P_n| \le c^n$ for every $n \in \N$.
Suppose the latter, so $|\P_n| \le c^n$ for every $n \in \N$. Assuming $c > 2$, we claim that in this case $$|\G_n| \: \le \: \ds\sum_{k=0}^{\lfloor n/2 \rfloor} {n \choose k}{{n-k} \choose k}c^k \: < \: n {n \choose {\lfloor n/2 \rfloor}}^2 c^{\lfloor n/2 \rfloor} \: < \: (4\sqrt{c})^n.$$ To see this, simply note that any ordered graph $G$ of maximal degree at most one is determined by its order, its left-endpoint set, its right-endpoint set, and the permutation $\phi(G)$. Hence, setting $c_1 = 4\sqrt{c}$, we have a contradiction to our assumption that $|\G_N| > c_1^N$ for some $N \in \N$.
Next, suppose that $\P = \Pi$. We want to show that $\G$ contains $H(\pi)$ for every $\pi \in \Pi$, so let us fix $k \in \N$ and $\pi \in \Pi_k$. Let $\pi'$ be the $(k+1)$-permutation defined as follows: $\pi'(i) = \pi(i) + 1$ for each $i \in [k]$, and $\pi'(k+1) = 1$. Since $\P = \Pi$, we have $\pi' \in \P$, so for some $G \in \G$ we have $\pi' = \phi(G)$.
Now notice that in $G$, all left-endpoints occur to the left of *all* right-endpoints, since $\pi'(k+1) = 1$. Therefore, letting $G'$ be the subgraph of $G$ induced by the first $k$ left-endpoints and the last $k$ right-endpoints, we have $G' = H(\pi)$.
In fact, for any permutation $\pi \in \Pi_k$, the number of ordered graphs $G$ of order $n$ and maximal degree at most one with $\phi(G) = \pi$ is at most ${n \choose k}Cat(k)$, where $Cat(k) = \frac{1}{k+1}{{2k} \choose k}$ is the $k^{th}$ Catalan number. To see this, we use the fact that there are exactly $Cat(k)$ legal sequences of $k$ left- and $k$ right-brackets (i.e., in any initial segment of the sequence there are at least as many left-brackets as right-brackets). Given any ordered graph $G \in \G$, we can define a corresponding sequence of brackets, $\psi(G)$, by taking a left-bracket for every vertex which is the left-endpoint of an edge, and a right-bracket for every right-endpoint.
Now, given $n$, $\phi = \phi(G)$, $\psi = \psi(G)$ and the (even-sized) subset $A = \{v \in [n] : d_G(v) = 1\}$, it is simple to reconstruct $G$: if the elements of $A$ are $a(1) < \ldots < a(2k)$, the left brackets of $\psi$ lie in positions $1 \le s(1) < \ldots < s(k) \le 2k$ and the right brackets lie in positions $1 \le t(1) < \ldots < t(k) \le 2k$ (so $\{s(1), \ldots, s(k), t(1), \ldots, t(k)\} = [2k]$), then the edge set is $\{\{a(s(i)), a(t(\phi(i)))\} : i \in [k]\}$. Note that although for many pairs $(\phi, \psi)$ no ordered graph $G$ has $\phi(G) = \phi$ and $\psi(G) = \psi$ (for example, $\phi = 21$ and $\psi = \textup{()()}$), for the identity permutation all $Cat(k)$ bracket sequences are realised.
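The reconstruction just described can be carried out mechanically; a sketch (ours), with $\psi$ given as a string of brackets:

```python
def reconstruct(n, phi, psi, A):
    """Rebuild the edge set of G from its order n, its permutation phi,
    its bracket sequence psi (a string of '(' and ')') and the support
    A = set of non-isolated vertices, following the text.  (The order n
    only determines the isolated vertices, so it is not used below.)"""
    a = sorted(A)                                         # a(1) < ... < a(2k)
    s = [p for p, ch in enumerate(psi, 1) if ch == '(']   # left brackets
    t = [p for p, ch in enumerate(psi, 1) if ch == ')']   # right brackets
    return [{a[s[i] - 1], a[t[phi[i] - 1] - 1]} for i in range(len(phi))]

# n = 4, phi = 21, psi = '(())', A = {1,2,3,4}: s = (1,2), t = (3,4), so
# the edges are {a(1), a(t(2))} = {1,4} and {a(2), a(t(1))} = {2,3}.
print(reconstruct(4, (2, 1), '(())', [1, 2, 3, 4]))   # [{1, 4}, {2, 3}]
```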
We proved Theorem \[deg1\] using Theorem A; we now prove the reverse implication. It will follow almost immediately that each of Theorems \[parts\], \[mono\] and \[noKt\] implies Theorem A.
\[genA\] Theorem \[deg1\] implies Theorem A.
Let $\P$ be a non-trivial hereditary property of permutations (i.e., different from $\Pi$), and assume that Theorem \[deg1\] holds. Let the ordered graphs $G(n,A,\pi)$ be as defined above, and let $$\G = \{G(n,A,\pi) : n \in \N, \: A \subset [n], \: \pi \in \P, \: |A| = 2|\pi|\}.$$ Because $\P$ is hereditary, $\G$ is also hereditary, since removing an isolated vertex from $G(n,A,\pi)$ gives $G(n-1,A',\pi)$ (for some $A' \subset [n-1]$), and removing a non-isolated vertex corresponds to removing an element from $\pi$.
Since $\P \neq \Pi$ there exists some $\pi \notin \P$, and by definition $\G$ does not contain $H(\pi)$. Hence, by Theorem \[deg1\], there exists a constant $c > 0$ such that $|\G_n| \le c^n$ for every $n \in \N$. But $|\G_{2n}| \ge |\P_n|$ for every $n \in \N$, so $|\P_n| \le c^{2n}$ for every $n \in \N$.
We can now deduce that our main theorems do indeed generalize the Klazar-Marcus-Tardos Theorem.
Each of the Theorems \[parts\], \[mono\] and \[noKt\] implies Theorem A.
To show that Theorem \[parts\] and Theorem \[noKt\] imply Theorem A, it suffices to observe that any hereditary property of ordered graphs of maximal degree at most one may be viewed as a hereditary property of partitions (with part sizes at most 2), or as a hereditary property of ordered graphs containing no $K_3$ and no $K_{2,2}$. The result then follows by Lemma \[genA\].
To show that Theorem \[mono\] implies Theorem A, let $\P$ be a hereditary property of ordered graphs of maximal degree one, and consider the minimal monotone property of ordered graphs $\P'$ containing $\P$. If $\P'$ contains the ordered graph $H(\pi)$ (for some $\pi \in \Pi$) then so does $\P$. Otherwise $|\P'_n| \le c^n$ for some $c > 0$ and every $n \in \N$ by Theorem \[mono\], and hence $|\P_n| \le |\P'_n| \le c^n$ for every $n \in \N$. The result again follows by Lemma \[genA\].
Proof of the generalized Marcus-Tardos Theorem {#genMTsec}
==============================================
In this section we shall prove Theorem \[genMT\]. Recall that by Theorem B, for each permutation matrix $M$, there exists a constant $C(M)$ such that any $n \times n$ $(0,1)$–matrix with at least $C(M)n$ of its entries $1$ contains $M$. For each $k \in \N$, let $C(k)$ be the constant obtained in Theorem B for $k \times k$ matrices, i.e., $C(k) = \max\{C(M) : M$ a $k \times k$ permutation matrix$\}$. We shall give our bounds on $c_k$ in terms of $C(k)$.
To obtain Theorem \[genMT\], we use Theorem B to prove it in the case that the rows each have a bounded number of $1$’s, and then use this result and the method of Marcus and Tardos [@MT] to prove the general case. First however, we need to show that Theorem B implies Theorem \[genMT\] in the case that each row has exactly two $1$’s; in fact these statements are equivalent.
\[MTdeg2\] Let $f: \N \to \N$ be any function. The following statements satisfy $(i) \Rightarrow (ii) \Rightarrow (iii)$.\
(i) For each $k,m,n \in \N$, any $m \times n$ $(0,1)$–matrix with at least $f(k)n$ of its entries $1$, and at most two $1$’s in each row, and with each row different, contains a member of each class of $\Mset(k)$.\
(ii) Theorem B holds with $C(M_k) = f(k)$ for each $k \times k$ permutation matrix $M_k$.\
(iii) For each $k,m,n \in \N$, any $m \times n$ $(0,1)$–matrix with at least $(2f(k+1)+1)n$ of its entries $1$, and at most two $1$’s in each row, and with each row different, contains a member of each class of $\Mset(k)$.
First we shall prove that $(i)$ implies $(ii)$. Let $k,m,n \in \N$, $M = (m_{ij})$ be a $k \times k$ permutation matrix, and $A = (a_{ij})$ be an $n \times n$ $(0,1)$–matrix with at least $f(k)n$ $1$’s, with $f(k)$ given by $(i)$. We wish to show that $A$ contains $M$. Suppose without loss of generality that there are more $1$’s above the top-left/bottom-right diagonal than below it (otherwise replace $A$ and $M$ by $A^T$ and $M^T$). Let the number of pairs $(i,j)$ for which $i \le j$ and $a_{ij} = 1$ be $m$, and label them $e_1, \ldots, e_m$ arbitrarily. We define an $m \times n$ $(0,1)$–matrix $B = (b_{ij})$ with at most two $1$’s in each row, and with each row different, by letting $b_{ij} = 1$ if and only if vertex $j$ is an endpoint of $e_i$. Note that at least $f(k)n$ of the entries of $B$ are $1$.
Applying $(i)$ to $B$, we see that $B$ must contain a matrix $(K,L) \sim (I,M)$, where $(I,M)$ is the $k \times 2k$ matrix obtained by putting the identity matrix in front of $M$ (so $(I,M)_{ij} = \delta_{ij}$ and $(I,M)_{i(j+k)} = m_{ij}$ for each $j \in [k]$). Suppose $(K,L)$ occurs in columns $a_1 < \ldots < a_k < b_1 < \ldots < b_k$ of $B$. Then $M$ occurs in the intersection of the rows $a_1,\dots,a_k$ and the columns $b_1, \ldots, b_k$ of $A$, and so we are done.
The proof that $(ii)$ implies $(iii)$ is similar. Again let $k,m,n \in \N$, $M = (m_{ij})$ be a $k \times k$ permutation matrix, and let $B = (b_{ij})$ be an $m \times n$ $(0,1)$–matrix with at most two $1$’s in each row, each row different, and at least $(2f(k+1)+1)n$ of its entries $1$. It will suffice to show that $B$ contains some matrix $(K,L) \sim (I,M)$.
We produce from $B$ an $n \times n$ $(0,1)$–matrix $A = (a_{ij})$, by letting $a_{ij} = 1$ if and only if $i < j$ and $b_{ri} = b_{rj} = 1$ for some $r \in [m]$. Note that at least $f(k+1)n$ of the entries of $A$ are $1$. Applying $(ii)$ to $A$, we see that $A$ must contain the $(k+1) \times (k+1)$ matrix $M'$, formed by putting $M$ in the top right-hand corner, and a single $1$ in the bottom left-hand corner. Thus $M'_{i(j+1)} = m_{ij}$ for $i,j \in [k]$, $M'_{(k+1)1} = 1$, and $M'_{ij} = 0$ otherwise. Suppose $M'$ occurs in rows $a_1 < \ldots < a_{k+1}$ and columns $b_1 < \ldots < b_{k+1}$ of $A$, and that the $1$’s corresponding to $1$-entries of $M$ correspond to rows $r_1 < \ldots < r_k$ of $B$. Since $A_{a_{k+1}b_1} = 1$ and $A$ is upper triangular, $a_{k+1} < b_1$. Therefore some $(K,L) \sim (I,M)$ occurs in the intersection of the rows $r_1,\dots,r_k$ and the columns $a_1, \ldots, a_k, b_2, \ldots, b_{k+1}$ of $B$, and we are again done.
Since Theorem B holds with $C(M) = C(k)$, we have the following immediate corollary.
\[MTdeg2(c)\] Let $k \in \N$. Any $m \times n$ $(0,1)$–matrix with at least $(2C(k+1)+1)n$ of its entries $1$, and at most two $1$’s in each row, and with each row different, contains a member of each class of $\Mset(k)$.
To prove the case where the rows have a bounded number of $1$’s, we shall use the following trivial observation.
\[match\] Let $G$ be a bipartite graph with parts $A$ and $B$. Suppose $d(v) \ge 1$ for each $v \in A$, and $d(v) \le m$ for each $v \in B$. Then there exists a matching in $G$ of size at least $|A|/m$.
If $d(v) > 1$ for any $v \in A$ then remove all but one of the edges incident to $v$. We now have a family of stars, each centred in $B$ and of order at most $m+1$. Take one edge from each.
For each $D \in \N$, define $g_D : \N \to \N$ by $g_D(x) = \sum_{i=0}^{D-1} (i+1){x \choose i}$. Note that $g_D(x) < 2D{x \choose {D-1}}$ if $x > 3D$. Let $\HH$ be an ordered hypergraph on $[n]$ in which every edge has size at most $D$. For each vertex $v \in [n]$, the $2$-degree of $v$ in $\HH$ is $d^{(2)}_{\HH}(v) = |\{u \in [n] : u \neq v$ and $\{u,v\} \subset E$ for some $E \in E(\HH)\}|$. Suppose a vertex of $2$-degree $x$ is removed from $\HH$; by how much can $\|\HH\| = \sum_{E \in E(\HH)} |E|$ decrease? For each $0 \le i \le D-1$, $v$ is contained in at most ${x \choose i}$ edges $E$ of $\HH$ of size $i+1$, and each of these must be removed entirely if $E \setminus v$ is also an edge. Hence $\|\HH\|$ decreases by at most $g_D(x)$.
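The function $g_D$ and the stated estimate $g_D(x) < 2D{x \choose {D-1}}$ for $x > 3D$ can be checked numerically; a quick sketch (ours):

```python
from math import comb

def g(D, x):
    """g_D(x) = sum_{i=0}^{D-1} (i+1) * C(x, i): an upper bound on the
    drop in ||H|| when a vertex of 2-degree x is deleted, all edges
    having size at most D."""
    return sum((i + 1) * comb(x, i) for i in range(D))

# Sanity check of the estimate g_D(x) < 2D * C(x, D-1) for x > 3D:
D = 4
for x in range(3 * D + 1, 3 * D + 20):
    assert g(D, x) < 2 * D * comb(x, D - 1)
print(g(4, 13))   # 1405
```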
For each $k \in \N$ let $C_1(k) = 2C(k+1) + 1$.
\[MTdegbdd\] Let $k,m,n,D \in \N$, with $D \ge 2$, and let $g_D: \N \to \N$ be as defined above. Any $m \times n$ $(0,1)$–matrix with at least $g_D\left( D(D - 1)C_1(k) \right)n$ of its entries $1$, and at most $D$ of the entries of each row $1$, and with each row different, contains a member of each class of $\Mset(k)$.
We shall use Corollary \[MTdeg2(c)\]. Let $k,m,n,D \in \N$, $M = (m_{ij})$ be a $k \times k$ permutation matrix, and $A = (a_{ij})$ be an $m \times n$ $(0,1)$–matrix with at least $g_D\left(D(D - 1) C_1(k) \right)n$ of its entries $1$, at most $D$ of the entries in each row $1$, and each row different. We shall show that $A$ contains a matrix in the equivalence class of $(I,M)$.
Consider the ordered hypergraph $\HH$ on vertex set $[n]$ with edge set $\{E_i : i \in [m]$ and $a_{ij} = 1 \Leftrightarrow j \in E_i\}$, so the rows of $A$ are the indicator functions of the edges. Note that $\|\HH\| = \sum_i |E_i| \ge g_D\left( D(D - 1) C_1(k) \right)n$. We first wish to find a subset $S$ of $[n]$ in which there are at least ${D \choose 2}C_1(k)|S|$ distinct pairs $\{i,j\}$, each contained in some edge of $\HH$. If $d^{(2)}_{\HH}(v) < D(D - 1)C_1(k)$ for some vertex $v$ then, by the comments above, removing $v$ from the ordered hypergraph causes $\|\HH\|$ to decrease by at most $g_D\left( D(D - 1)C_1(k) - 1 \right) < g_D\left( D(D - 1)C_1(k) \right)$. Thus removing $v$ causes the density of edges in the ordered hypergraph to increase.
Thus, if we repeatedly remove vertices of minimal $2$-degree from $\HH$, we must eventually produce an ordered hypergraph $\HH'$ on vertex set $S$ in which every vertex $v$ has $d^{(2)}_{\HH'}(v) \ge D(D - 1)C_1(k)$. By counting degrees, there are at least ${D \choose 2}C_1(k)|S|$ distinct pairs $\{i,j\} \subset S$, each contained in some edge of $\HH'$, and hence of $\HH$.
Let $\T = \{\{i,j\} \subset S : i,j \in E$ for some $E \in \HH\}$ be the set of such pairs. Let $B$ be the bipartite graph on sets $\T$ and $E(\HH)$ with edges corresponding to containment (i.e., $(\{i,j\},E)$ is an edge of $B$ iff $\{i,j\} \subset E$). By Lemma \[match\], there exists a matching $W$ in $B$ of size $t$, with $t \ge |\T|/{D \choose 2} \ge C_1(k)|S|$, since each edge has order at most $D$. Let $\T_1$ be the set of endpoints of $W$ lying in $\T$.
Let $s = |S|$, and define $P$ to be the $t \times s$ $(0,1)$–matrix in which the columns correspond to elements of $S$, and the rows are the indicator functions of the edges in $\T_1$. In $P$ all rows have exactly two $1$’s, all rows are different, and $2t \ge 2C_1(k)s$ of its entries are $1$, so by Corollary \[MTdeg2(c)\], $P$ contains some matrix $(K,L)$ equivalent to $(I,M)$.
Fix a copy of $(K,L)$ in $P$, and let the columns of $P$ containing this $(K,L)$ be those corresponding to the vertices $b(1) < \ldots < b(2k)$ of $\HH$ (note that here $b(i)$ is the *original* labelling of the vertex in $[n]$). We claim that the corresponding columns of $A$ contain a matrix equivalent to $(K,L)$. To see this, let the rows of $P$ containing the same copy of $(K,L)$ be $a(1) < \ldots < a(k)$, and let $p(i)$ be the pair in $\T_1$ corresponding to row $a(i)$ for $1 \le i \le k$. For each of these pairs $p(i)$, choose the edge $e(i) \in E(\HH)$ it was matched to by $W$. We have thus found $k$ distinct edges $e(1), \ldots, e(k)$, for which $p(i) \subset e(i)$. It follows immediately that the columns $b(1), \ldots, b(2k)$ of $A$ contain some matrix $(K',L') \sim (K,L) \sim (I,M)$. This completes the proof.
We are now ready to prove Theorem \[genMT\].
For each $n,k \in \N$, let $f(n,k)$ be the largest number of $1$’s possible in an $m \times n$ $(0,1)$–matrix $A$ (where $m \in \N$ is arbitrary), with each row different, not containing any member of some class of $\Mset(k)$. We wish to show that $f(n,k) = O(n)$, where $k$ is fixed and $n \to \infty$. The proof (that $f(n,k) < c_k n$, where $c_k$ will be determined later) will be by induction on $n$. Note that since each row of $A$ is different, at most $n2^n$ of the entries of $A$ can be $1$’s. We shall choose $c_k > 2^{8k^3}$, so the statement $f(n,k) < c_kn$ is (trivially) true for $n \le 8k^3$.
Let $k,m,n \in \N$ with $n \ge 8k^3$, $M = (m_{ij})$ be a $k \times k$ permutation matrix, and $A$ be an $m \times n$ $(0,1)$–matrix, with each row different, not containing any matrix equivalent to $(I,M)$. Following the method of Marcus and Tardos, we want to divide $A$ up into ‘fat’ and ‘skinny’ blocks of size $1 \times t$ for some $t$. In preparation for this, we must remove the rows with few $1$’s. Let $t \in \N$ and let $D = (2k-1)t$. (We shall eventually set $t = 2k^2$, but we postpone choosing this value until it is clear why the choice is being made. Our argument up to that point works for any $t \in \N$.) By Lemma \[MTdegbdd\], there are at most $g_D\left(D(D - 1)C_1(k) \right)n$ $1$’s in rows with at most $D$ of their entries $1$, for otherwise $A$ would contain some $(K,L) \sim (I,M)$, contradicting our assumption. Let $A' = (a_{ij}')$ be the $m' \times n$ matrix obtained from $A$ by deleting the rows with at most $D$ entries $1$.
Now, let $q$ and $r$ satisfy $n = qt + r$, with $q \in \N$ and $r \in [t] $, and partition $A'$ into $qm'$ blocks of size $1 \times t$ and $m'$ blocks of size $1 \times r$ as follows. Let the $1 \times t$ blocks be $S_{ij} = (a_{i\ell}' : \ell \in [(j-1)t+1,jt])$, for each $i \in [m']$ and $j \in [q]$, and the $1 \times r$ blocks be $S_{i(q+1)} = (a_{i\ell}' : \ell \in [qt+1,n])$, for each $i \in [m']$. Define $B = (b_{ij})$ to be the $m' \times (q+1)$ $(0,1)$–matrix obtained by assigning the value $1$ to a block if any entry of the block is $1$. Thus, for $j \in [q]$, $b_{ij} = 0$ if and only if $a_{i\ell}' = 0$ for every $\ell \in [(j-1)t+1,jt]$, and similarly for $j = q+1$.\
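The contraction from $A'$ to $B$ is mechanical; the following minimal Python sketch of the blocking step (the function name and encoding are ours, not from the paper) handles the ragged final block of width $r$ automatically, matching the $q+1$ blocks per row above:

```python
from math import ceil

def contract_blocks(A, t):
    """Contract each row of the (0,1)-matrix A (a list of 0/1 rows)
    into blocks of width t.  Block j of a row is 1 iff any of its
    entries is 1; the final block may be narrower when t does not
    divide n, matching the 1 x r blocks in the text."""
    n = len(A[0])
    q = ceil(n / t)  # number of blocks per row
    return [[int(any(row[j * t:(j + 1) * t])) for j in range(q)]
            for row in A]

# A 2x5 matrix contracted with t = 2: blocks of widths 2, 2, 1.
A = [[1, 0, 0, 0, 1],
     [0, 0, 1, 1, 0]]
B = contract_blocks(A, 2)  # -> [[1, 0, 1], [0, 1, 0]]
```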
Claim 1: $B$ contains no matrix equivalent to $(I,M)$.
This is Lemma 4 of Marcus and Tardos [@MT]. To spell it out, assume $B$ contains such a matrix $P$, and for each $1$ of $P$, choose an arbitrary non-zero entry from the corresponding block of $A'$. These entries form a copy of $P$ in $A$, a contradiction.
Call a block ‘fat’ if at least $2k$ of its entries are $1$.\
Claim 2: There are at most ${t \choose {2k}}(k-1)$ fat blocks in any column of blocks $(S_{ij} : i \in [m'])$.
This is Lemma 5 of [@MT]. If there are more than ${t \choose {2k}}(k-1)$ fat blocks in a given column of blocks, then there are at least $k$ fat blocks which contain $1$’s in the same $2k$ columns of $A'$. Hence $A'$ contains a complete $k \times 2k$ matrix (i.e., a matrix in which all entries are $1$), so $A$ contains every $k \times 2k$ $(0,1)$–matrix, another contradiction.
We wish to bound the number of $1$’s in $B$. $B$ may contain repeated rows, but since every row in $A'$ has at least $(2k-1)t + 1$ of its entries $1$, every row of $B$ must have at least $2k$ of its entries $1$. Thus if any row occurs in $B$ more than $k-1$ times, then $B$ contains a complete $k \times 2k$ matrix, contradicting Claim 1. If we let $B'$ be the matrix obtained from $B$ by deleting repeated rows, then $B'$ contains no matrix equivalent to $(I,M)$ and all rows of $B'$ are different, so at most $f(\lceil n/t \rceil,k)$ of the entries of $B'$ are $1$. Since each row in $B$ was repeated at most $(k-1)$ times, it follows that at most $(k-1)f(\lceil n/t \rceil,k)$ of the entries of $B$ are $1$. We have thus established the following recurrence: $$f(n,k) \le (2k-1)kf\left(\left\lceil \frac{n}{t} \right\rceil, k\right) + {t \choose {2k}} (k-1)n + g_D\left(2{D \choose 2}C_1(k)\right)n.$$
Let $t = 2k^2$. Assuming (by induction) that $$f \left( \left\lceil \frac{n}{2k^2} \right\rceil, k \right) < c_k \left\lceil \frac{n}{2k^2} \right\rceil,$$ and using the inequality $g_D(x) < 2D{x \choose {D-1}}$ noted earlier, we obtain $$\begin{aligned}
\label{eqn1}
f(n,k) & < & (2k^2-k)c_k \left\lceil \frac{n}{2k^2} \right\rceil + \left({{2k^2} \choose {2k}}k + 8k^3{{2{{4k^3 - 2k^2} \choose 2}C_1(k)} \choose {4k^3 - 2k^2 - 1}}\right)n \notag \\
& < & c_k n - \frac{c_k n}{2k} + (2k^2 - k)c_k + 8k^3{ {16k^6 C_1(k)} \choose {4k^3}}n.\end{aligned}$$ So if $$c_k \ge 32k^4{{16k^6C_1(k)} \choose {4k^3}},$$ then $$\label{eqn2}
\frac{c_k n}{2k} \: \ge \: \frac{c_k n}{4k} + 8k^3{{16k^6C_1(k)} \choose {4k^3}} \: \ge \: 2k^2c_k + 8k^3{{16k^6C_1(k)} \choose {4k^3}},$$ since $n \ge 8k^3$. Set $c_k = 32k^4{{16k^6C_1(k)} \choose {4k^3}}$, and note that $c_k > 2^{8k^3}$. Now inequalities (\[eqn1\]) and (\[eqn2\]) imply that $f(n,k) < c_kn$, so the induction step is complete.
Marcus and Tardos proved that $C(k) < 2k^4{{k^2} \choose k}$, so the explicit bound we obtain is roughly $c_k = O(k^{9k^4})$.
Ordered Hypergraphs {#hypersec}
===================
We now deduce Theorem \[hypergraphs\] from Theorem \[genMT\]. This implication may be read out of a proof of Klazar (Theorem 2.5 of [@Klaz3]), but for the sake of completeness we shall prove it (and in fact our proof is slightly different from that in [@Klaz3]).
Given an ordered hypergraph $\HH$ on $[n]$, say that $\HH$ *contains* a $k$-permutation $\pi$ if $\HH$ contains the ordered hypergraph $H(\pi)$. In other words, there exist $2k$ vertices $v_1, \ldots, v_{2k} \in [n]$, and $k$ distinct edges $E_1, \ldots, E_k \in E(\HH)$ such that, letting $e_i = \{v_i,v_{\pi(i)+k}\}$ denote the edges of $H(\pi)$, we have $e_i \subset E_i$ for each $i \in [k]$. Otherwise say that $\HH$ *avoids* $\pi$. For each permutation $\pi$, and each $n \in \N$, let $T_n(\pi)$ denote the family of ordered hypergraphs on $[n]$ avoiding $\pi$.
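As a concrete illustration of this containment notion, here is a brute-force Python checker (exponential in $k$, suitable for small examples only; the encoding of edges as frozensets is our own):

```python
from itertools import combinations, permutations

def contains_perm(edges, n, pi):
    """Return True iff the ordered hypergraph on [n] with the given
    edges (frozensets of vertices in 1..n) contains the k-permutation
    pi (one-indexed: pi[i-1] = pi(i)): i.e., there exist vertices
    v_1 < ... < v_2k and distinct edges E_1, ..., E_k such that
    {v_i, v_{pi(i)+k}} is contained in E_i for each i in [k]."""
    k = len(pi)
    for vs in combinations(range(1, n + 1), 2 * k):  # vs is increasing
        for Es in permutations(edges, k):            # k distinct edges
            if all({vs[i], vs[pi[i] + k - 1]} <= Es[i] for i in range(k)):
                return True
    return False

# H(21) itself: the edges {1,4} and {2,3} on [4] contain the permutation 21.
```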
\[mattohg\] Let $k \in \N$ and $\pi \in \Pi_k$. If $\HH \in T_n(\pi)$, then $$\|\HH\| \: = \: \sum_{E \in E(\HH)} |E| \: < \: c_kn.$$
The lemma is a simple corollary of Theorem \[genMT\]. To spell it out, let $k \in \N$, $\pi \in \Pi_k$ and $\HH \in T_n(\pi)$, let $m = |E(\HH)|$, and define $A$ to be an $m \times n$ $(0,1)$–matrix whose rows are the indicator functions of the edges of $\HH$. (Note that $A$ is unique up to permutations of its rows.)
Now, $A$ has exactly $\|\HH\|$ of its entries 1, and each of its rows are different, so by Theorem \[genMT\], if $\|\HH\| \ge c_kn$ then $A$ contains a sub-matrix $B \sim (I,M)$, where $M$ is the permutation matrix of $\pi$. Now, let $v_1, \ldots, v_{2k}$ be the vertices in $[n]$ corresponding to the columns of $B$, and $E_1, \ldots, E_k$ be the edges of $\HH$ corresponding to the rows of $B$, ordered so that $v_i \in E_i$ for each $i \in [k]$. Then for each $i \in [k]$ we have $v_i,v_{\pi(i)+k} \in E_i$, so $\HH$ contains $\pi$, a contradiction. Hence $\|\HH\| < c_kn$.
Let $n,k \in \N$, and $\pi \in \Pi_k$. We claim that $$\label{eqn}
|T_{2n}(\pi)| \: \le \: |T_n(\pi)| \: 3^{2c^2n},$$ where $c = c_k$ is the constant obtained in Theorem \[genMT\].
To prove inequality (\[eqn\]), we map each $\HH \in T_{2n}(\pi)$ to the ordered hypergraph $\K$ on $[n]$ with edge set $$\{E \subset [n] : \exists E' \in E(\HH)\textup{ with }i \in E \Leftrightarrow \{2i-1,2i\} \cap E' \neq \emptyset\}.$$ In other words, $\K$ is formed by identifying vertices $2i-1$ and $2i$ for every $i \in [n]$.
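The identification map is easy to state in code; a short Python sketch (our own encoding), in which duplicate image edges are automatically merged since the image edge set is a set:

```python
def identify_pairs(edges):
    """Contract an ordered hypergraph on [2n] (edges as frozensets)
    to one on [n] by identifying vertices 2i-1 and 2i: vertex i lies
    in the image of an edge E' iff E' meets {2i-1, 2i}."""
    return {frozenset((v + 1) // 2 for v in E) for E in edges}

# Both {1,4} and {2,3} on [4] map to the single edge {1,2} on [2].
```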
We claim that $\K \in T_n(\pi)$, i.e., that $\K$ avoids $\pi$. Indeed, suppose for a contradiction that there exist edges $E_1, \ldots, E_k \in E(\K)$ and vertices $v_1, \ldots, v_{2k} \in [n]$ such that for each $i \in [k]$ we have $e_i = \{v_i,v_{\pi(i)+k}\} \subset E_i$. For each edge $E_i$ choose an edge $F_i$ of $\HH$ such that $\{2j-1,2j\} \cap F_i \neq \emptyset$ if and only if $j \in E_i$ (such an $F_i$ exists by the definition of $\K$). The edges $F_i$ are distinct (since the edges $E_i$ are), and $e_i \subset F_i$ for each $i \in [k]$, so $\HH$ contains $\pi$, which is the desired contradiction.
Now, how many ordered hypergraphs $\HH$ map to the same ordered hypergraph $\K$? An edge $E$ of $\K$ is the image of $3^{|E|}$ different possible edges of $\HH$, since each vertex $v$ of $E$ may have come from $\{2v-1\}$, $\{2v\}$ or $\{2v-1,2v\}$. However, suppose at least $2c = 2c_k$ of these did in fact occur in $\HH$ for a given edge $E$. Each such edge has size at least $|E|$, so the ordered hypergraph $\HH'$ induced by $\HH$ on vertex set $\{2i-1, 2i \in [2n] : i \in E\}$ has $\|\HH'\| \ge 2c|E|$. But now $\HH' \notin T_{2|E|}(\pi)$ by Lemma \[mattohg\], so $\HH$ contains $\pi$, a contradiction.
Thus for each edge $E$ of $\K$, at most $2c - 1$ of the edges which map to it actually occur in $\HH$, so we have at most $$\sum_{i = 0}^{2c - 1} {{3^{|E|}} \choose i} < 3^{2c|E|}$$ choices for these edges.
Now, again by Lemma \[mattohg\], since $\K \in T_n(\pi)$ we have $\|\K\| \le cn$. Thus the maximum possible number of ordered hypergraphs $\HH$ which map to a given $\K$ is $$\prod_{E \in E(\K)} 3^{2c|E|} \: = \: 3^{2c \sum_E |E|} \: \le \: 3^{2c^2n}.$$ This proves inequality (\[eqn\]).
Now, let $\P$ be a strongly monotone property of ordered hypergraphs, let $k \in \N$, $\pi \in \Pi_k$, and suppose that $H(\pi) \notin \P$. Then $\P_n \subset T_n(\pi)$, and by inequality (\[eqn\]) and induction on $n$, $$|T_n(\pi)| \: \le \: 3^{3c^2n},$$ for every $n \in \N$, where again $c = c_k$, since $$|T_{2n-1}(\pi)| \: \le \: |T_{2n}(\pi)| \: \le \: |T_{n}(\pi)| \: 3^{2c^2n} \: \le \: 3^{5c^2n} \: \le \: 3^{3c^2(2n - 1)}$$ when $n \ge 3$. So $|\P_n| \le 3^{3c^2n}$ for every $n \in \N$, which proves Theorem \[hypergraphs\].
Partitions and ordered graphs {#rest}
=============================
We shall now deduce Theorems \[parts\], \[mono\] and \[noKt\] from Theorem \[hypergraphs\]. We begin with Theorem \[parts\]. The implication is very simple, but in any case we shall write out all the details.
Let $\P$ be a hereditary property of partitions. For each partition $P \in \P$, let $\HH(P)$ be the ordered hypergraph whose edges are the parts of $P$ of size at least two. To be precise, if $P$ is the partition $\{A_1, \ldots, A_t\}$ of $[n]$, then $\HH(P)$ has vertex set $[n]$ and edge set $\{A_i : i \in [t], |A_i| \ge 2\}$. Observe that for any permutation $\pi$, $\HH(P)$ contains $\pi$ if and only if $P$ contains the partition $H(\pi)$ as an induced subpartition.
Suppose that for some $\pi \in \Pi$, $\P$ does not contain $H(\pi)$. By the observation above, $\HH(P)$ avoids $\pi$ for every $P \in \P$. Now, let $T(\pi)$ denote the strongly monotone property of ordered hypergraphs consisting of all ordered hypergraphs avoiding $\pi$ (so $T(\pi) = \bigcup_n T_n(\pi)$, with $T_n(\pi)$ as in the previous section). Then $\HH(P) \in T(\pi)$ for every $P \in \P$.
Now apply Theorem \[hypergraphs\] to $T(\pi)$. Since $H(\pi) \notin T(\pi)$, there exists a constant $c$ such that $|T_n(\pi)| \le c^n$ for every $n \in \N$. But now we are done, since $$|\P_n|\: = \:|\{\HH(P) : P \in \P_n\}| \: \le \: |T_n(\pi)| \: \le \: c^n$$ for every $n \in \N$, since $\HH(P) \in T(\pi)$ for every $P \in \P$. This proves Theorem \[parts\].
If $\pi \in \Pi$, let $\P(\pi)$ be the largest hereditary property of partitions avoiding the partition $H(\pi)$. Let $c'_k$ be the smallest constant such that $|\P(\pi)_n| < (c_k')^n$ for every $n \in \N$ and $\pi \in \Pi_k$. We have shown that $c'_k = O(3^{k^{19k^4}})$.
We next deduce Theorem \[mono\] from Theorem \[hypergraphs\]. Since a monotone property of ordered graphs is a strongly monotone property of ordered hypergraphs, the implication is trivial.
Let $\P$ be a monotone property of ordered graphs, and for each $G \in \P$, let $G'$ be the ordered hypergraph with the same vertex and edge set as $G$. Define $\P' = \{G' : G \in \P\}$. Now $\P'$ is a strongly monotone property of ordered hypergraphs, since each edge of $G' \in \P'$ has size 2, so the only ordered hypergraphs contained in $G'$ are its subgraphs. The result now follows by applying Theorem \[hypergraphs\] to $\P'$.
In fact one can also prove Theorem \[mono\] without using Theorem \[hypergraphs\], but using the Marcus-Tardos and Klazar-Marcus-Tardos Theorems instead. Alternative proofs can often give new insight into the difficulties and the true nature of a problem, and for this reason we give a sketch of this second proof.
Let $\P$ be a monotone property of ordered graphs, let $\pi \in \Pi$, and suppose that $\P$ does not contain the ordered graph $H(\pi)$.
Suppose first that for some $n \in \N$ there exists an ordered graph $G \in \P_n$ with at least $C(k+1)n$ edges (where $C(k)$ is again the constant in Theorem B). In this case we can use Theorem B to find $H(\pi)$ in $G$, just as in the proof of Lemma \[MTdeg2\]. So assume that for every $G \in \P$, $e(G) < C(k+1)|G|$.
Let $\S(n,m)$ denote the family of sequences $(a_1, \ldots, a_n)$ such that $a_i \in \N \cup \{0\}$ for each $i$ and $\sum_i a_i = m$, and let $\S = \bigcup_{n,m} \S(n,m)$. We define a map $\varphi : \P \to \Pi \times \S \times \S$ as follows.
Let $G \in \P$ have $n$ vertices and $m$ edges. We put the following two linear orders, $<_\ell$ and $<_r$, on the edges of $G$. If $e = \{e_1,e_2\}$ and $f = \{f_1,f_2\}$ with $e_1 < e_2$ and $f_1 < f_2$, then $e <_\ell f$ if $e_1 < f_1$, or $e_1 = f_1$ and $e_2 < f_2$, while $e <_r f$ if $e_2 < f_2$, or $e_2 = f_2$ and $e_1 < f_1$. Let $\varphi_p(G)$ be the $m$-permutation which takes the order of the edges under $<_r$ to the order under $<_\ell$. Let $\varphi_\ell (G)$ be the left-endpoint degree sequence of $G$, i.e., the sequence $(a_1, \ldots, a_n)$ where $a_i$ is the number of edges of $G$ whose left-endpoint is vertex $i$, and let $\varphi_r (G)$ be the right-endpoint degree sequence of $G$. Let $\varphi(G) = (\varphi_p(G), \varphi_\ell (G), \varphi_r(G)) \in \Pi_m \times S(n,m) \times S(n,m)$.
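A Python sketch of the map $\varphi$ may make the two edge orders concrete (edges are encoded as pairs $(a,b)$ with $a<b$; the function names are ours):

```python
def phi(edges, n):
    """Compute (phi_p, phi_l, phi_r) for an ordered graph on [n], with
    edges given as tuples (a, b), a < b.  phi_p sends the rank of an
    edge under <_r to its rank under <_l, returned one-indexed."""
    by_l = sorted(edges)                                 # <_l: left endpoint, then right
    by_r = sorted(edges, key=lambda e: (e[1], e[0]))     # <_r: right endpoint, then left
    rank_l = {e: i + 1 for i, e in enumerate(by_l)}
    phi_p = [rank_l[e] for e in by_r]
    phi_l = [sum(1 for (a, b) in edges if a == v) for v in range(1, n + 1)]
    phi_r = [sum(1 for (a, b) in edges if b == v) for v in range(1, n + 1)]
    return phi_p, phi_l, phi_r

# The nested pair of edges (1,4), (2,3) on [4] has phi_p = 21.
```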
Let $\Q = \{\varphi_p(G) : G \in \P\}$. $\Q$ is a hereditary property of permutations, so by the Klazar-Marcus-Tardos Theorem, either $\Q = \Pi$, or there exists a constant $c$ such that $|\Q_n| \le c^n$ for every $n \in \N$.
Suppose first that $|\Q_n| \le c^n$ for every $n \in \N$. We claim that for any $\varphi \in \Pi \times \S \times \S$, there is at most one $G$ such that $\varphi(G) = \varphi$. We omit the proof, which is by induction on $m$. For the induction step, remove the first edge of $G$ in the order $<_r$.
So $|\P_n|$ is just $|\operatorname{Im}_{\varphi}(\P_n)|$, which can easily be bounded, since each $G \in \P_n$ has at most $C(k+1)n$ edges. Thus $$\begin{aligned}
|\operatorname{Im}_{\varphi_p}(\P_n)| & \le & |\bigcup_{m=0}^{C(k+1)n} \Q_m|\\
& \le & \sum_{m=0}^{C(k+1)n} c^m \: < \; 2c^{C(k+1)n},\end{aligned}$$ assuming (as we may) that $c > 2$, and $$|\operatorname{Im}_{\varphi_\ell}(\P_n)| \le {{m+n-1} \choose {n-1}} < {{(C(k+1)+1)n} \choose {n-1}} < 2^{(C(k+1)+1)n},$$ and similarly for $|\operatorname{Im}_{\varphi_r}(\P_n)|$. Hence $$\begin{aligned}
|\P_n| = |\operatorname{Im}_{\varphi}(\P_n)| & \le & |\operatorname{Im}_{\varphi_p}(\P_n)| \cdot |\operatorname{Im}_{\varphi_\ell}(\P_n)| \cdot |\operatorname{Im}_{\varphi_r}(\P_n)|\\[+1ex] & < & 2^{2(C(k+1)+1)n+1}c^{C(k+1)n},\end{aligned}$$ so we are done in this case.
Now suppose that $\Q = \Pi$. We want to show that $\P$ contains the ordered graph $H(\pi)$, and thus obtain a contradiction. To do this, define the $2k$-permutation $\sigma$ by $\sigma(2i-1) = 2\pi(i)$ and $\sigma(2i) = 2\pi(i)-1$ for $1 \le i \le k$ (so for example if $\pi = 213$ then $\sigma = 432165$). By assumption, there exists an ordered graph $G \in \P$ such that $\varphi_p(G) = \sigma$. Let the edges of $G$ be $e_1, \ldots, e_{2k}$ in the order $<_\ell$. Note that for each $i \in [k]$, the edges $e_{2i-1}$ and $e_{2i}$ do not share an endpoint, since $e_{2i-1} <_\ell e_{2i}$ and $e_{2i-1} >_r e_{2i}$.
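The doubling construction for $\sigma$ can be checked mechanically; this short Python sketch (the name is ours) reproduces the example in the text:

```python
def double_perm(pi):
    """Build the 2k-permutation sigma with sigma(2i-1) = 2*pi(i) and
    sigma(2i) = 2*pi(i) - 1 (all one-indexed) from the k-permutation
    pi, given and returned in one-line notation as a list."""
    sigma = []
    for x in pi:
        sigma.extend([2 * x, 2 * x - 1])
    return sigma

# pi = 213 gives sigma = 432165, as claimed above.
```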
Consider the edges $\{e_{2i-1} : i \in [k]\}$. Suppose two of them share a left (right) endpoint $v$. Then all edges between them in the order $<_\ell$ ($<_r$) also share that endpoint. This contradicts the previous observation that $e_{2i-1}$ and $e_{2i}$ do not share an endpoint, so these edges are in fact independent. Thus there exists $G \in \P$ with $\Delta(G) = 1$ and $\varphi_p(G) = \pi$.
Now apply the same technique to the $(k+1)$-permutation $\pi'$, where $\pi'(i) = \pi(i) + 1$ if $i \in [k]$, and $\pi'(k+1) = 1$. We obtain $G \in \P$ with $\Delta(G) = 1$ and $\varphi_p(G) = \pi'$. Again (as in the proof of Lemma \[deg1\]), notice that in $G$ all left-endpoints occur to the left of *all* right-endpoints, since $\pi'(k+1) = 1$. Therefore, letting $G'$ be the subgraph of $G$ induced by the first $k$ left-endpoints and the last $k$ right-endpoints, we have $G' = H(\pi)$.
So $H(\pi) \in \P$, and this gives us the desired contradiction.
Finally, we deduce Theorem \[noKt\] from Theorem \[mono\].
Let $t \in \N$, and let $\P$ be a hereditary property of ordered graphs such that $K_t \notin \P$ and $K_{t,t} \notin \P$. Let $\G = \G(\P)$ be the smallest monotone property containing $\P$. Note that (trivially) $|\G_n| \ge |\P_n|$.
By Theorem \[mono\], either $\G$ contains $H(\pi)$ for every $\pi \in \Pi$, or there exists a constant $c$ such that $|\G_n| < c^n$ for every $n \in \N$. Suppose the latter. Then $|\P_n| \le |\G_n| < c^n$ for every $n \in \N$, in which case we are done. So assume that $\G$ contains $H(\pi)$ for every $\pi \in \Pi$.
We wish to find, for each permutation $\pi$, a large permutation $\sigma$ such that an ordered graph on $[n]$ containing no induced copy of $K_t$ or $K_{t,t}$, and containing $H(\sigma)$ as a subgraph, contains $H(\pi)$ as an induced subgraph. We shall use Ramsey’s Theorem and the pigeonhole principle, so recall that for $\ell \in \N$, $R(\ell)$ denotes the smallest integer $n$ such that any graph on $n$ vertices contains either a clique on $\ell$ vertices, or an independent set of order $\ell$. We shall also write $S(\ell)$ for the smallest integer $n$ such that any bipartite graph with one part of order at least $2\ell - 1$ and the other of order at least $n$, contains either the complete bipartite graph $K_{\ell,\ell}$, or the empty bipartite graph $E_{\ell,\ell}$. It is easy to show that $R(\ell) < 4^\ell$ and $S(\ell) < \ell4^\ell$. For each $j \in \N$, let $R^{(j+1)}(\ell) = R(R^{(j)}(\ell))$, where $R^{(1)}(\ell) = R(\ell)$, and let $S^{(j)}(\ell)$ be defined similarly.
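For completeness, the bound $R(\ell) < 4^\ell$ is the classical Erdős–Szekeres estimate, $$R(\ell) \: = \: R(\ell,\ell) \: \le \: {{2\ell - 2} \choose {\ell - 1}} \: < \: 2^{2\ell - 2} \: < \: 4^\ell,$$ and $S(\ell) < \ell 4^\ell$ follows by the pigeonhole principle: each vertex of the larger part is adjacent, or non-adjacent, to at least $\ell$ of the $2\ell - 1$ vertices of the smaller part; there are at most $2{{2\ell - 1} \choose \ell} \le 4^\ell$ possible (set, type) pairs, so among $\ell 4^\ell$ vertices of the larger part some $\ell$ of them agree, giving a copy of $K_{\ell,\ell}$ or $E_{\ell,\ell}$.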
Let $k \in \N$ and $\pi \in \Pi_k$. We shall show that $H(\pi) \in \P$. Let $m' = S^{(K)}(t)$, where $K = {{2k} \choose 2} - k$, and $m = R^{(2)}(m')$. We define the $mk$-permutation $\sigma$ as follows: for each $i \in [k]$ and $j \in [0,m-1]$, let $\sigma(im - j) = \pi(i)m - j$. By assumption, $H(\sigma) \in \G$; let $G \in \P$ be an ordered graph on $[2mk]$ containing $H(\sigma)$. Such a $G$ must exist by the definition of $\G$. We know that for each $i \in [mk]$, the edge $\{i,\sigma(i) + mk\}$ is in $E(G)$. We want to show that for some subset $A = \{a(1), \ldots, a(2k)\} \subset [2mk]$ with $a(i) \in [(i-1)m + 1, im]$ for each $i \in [2k]$, and $a(\pi(i)+k) = a(i) + (k + \pi(i) - i)m$ for each $i \in [k]$, these are the only edges induced by $A$.
First we use Ramsey’s Theorem to find ‘matching’ independent subsets of $[(i-1)m + 1,im]$ for each $i \in [2k]$. (This step is necessary if we are to assume only that $K_t$ and $K_{t,t}$ are avoided; without it we would have to assume that every ordered graph containing $K_{t,t}$ is missing from $\P$.) We do this first for each $i \in [k]$. Since $m \ge R^{(2)}(m')$, by Ramsey’s Theorem there exists either a clique or an independent set $A^{(1)}_i$ of order $R(m')$ in $[(i-1)m + 1,im]$. Since $R(m') \ge t$, $K_t \notin \P$ and $\P$ is hereditary, $A^{(1)}_i$ must be an independent set. For $i \in [k+1,2k]$, let $A^{(1)}_i$ be the set $A^{(1)}_{\pi^{-1}(i-k)} + (k + \pi(i) - i)m$ (if $A$ is a set and $b \in \N$ then $A + b = \{a + b : a \in A\}$). Now, again by Ramsey’s Theorem, for each $i \in [k+1,2k]$ there exists a clique or independent set $A^{(2)}_i \subset A^{(1)}_i$ of order $m'$. Again this must be an independent set, since $m' \ge t$. For each $i \in [k]$, let $A^{(2)}_i$ be the set $A^{(2)}_{\pi(i)+k} - (k + \pi(i) - i)m$.
Thus we have found independent sets $A^{(2)}_i \subset [(i-1)m+1,im]$ of order $m'$ for each $i \in [2k]$ such that there is a matching between $A^{(2)}_i$ and $A^{(2)}_{\pi(i)+k}$ for each $i \in [k]$. We next apply the pigeonhole principle (aka bipartite Ramsey Theorem) to each of the pairs $A^{(2)}_x$ and $A^{(2)}_y$ with $1 \le x < y \le 2k$, $y \neq \pi(x) + k$, to find the desired subset $A$.
To be precise, let $\{e_1, \ldots, e_K\}$ be the set of pairs $\{x,y\}$ such that $1 \le x < y \le 2k$ and $y \neq \pi(x) + k$ (recall that $K = {{2k} \choose 2} - k$), and for each $i \in [2k]$ let $B^{(0)}_i = A^{(2)}_i$. We shall define inductively, for each $i \in [2k]$, a sequence of sets $B^{(0)}_i \supset B^{(1)}_i \supset \ldots \supset B^{(K)}_i$. For each pair $e_\ell = \{x(\ell), y(\ell)\}$ in turn (i.e., for each $\ell \in [K]$), define the sets $B^{(\ell)}_i$ as follows.
Let $x' = x'(\ell)$ and $y' = y'(\ell)$ be the elements matched to $x = x(\ell)$ and $y = y(\ell)$ respectively by $\pi$, so $x' = \pi(x) + k$ if $x \le k$ and $x' = \pi^{-1}(x-k)$ if $x \ge k+1$, and similarly for $y'$. If $i \notin \{x, y, x', y'\}$ then set $B^{(\ell)}_i = B^{(\ell - 1)}_i$. Let $B^{(\ell)}_{x}$ and $B^{(\ell)}_{y}$ be the parts of the largest empty bipartite graph induced by $G$ with $B^{(\ell)}_{x} \subset B^{(\ell-1)}_{x}$, $B^{(\ell)}_{y} \subset B^{(\ell-1)}_{y}$ and $|B^{(\ell)}_{x}| = |B^{(\ell)}_{y}|$. Let $B^{(\ell)}_{x'}$ be the set $B^{(\ell)}_{x} + (k + \pi(x) - x)m$ if $x \le k$ and the set $B^{(\ell)}_{x} - (k + \pi(x') - x')m$ if $x \ge k + 1$. Let $B^{(\ell)}_{y'}$ be defined from $B^{(\ell)}_{y}$ similarly. Note that $B^{(\ell)}_{x'} \subset B^{(\ell-1)}_{x'}$, and $B^{(\ell)}_{y'} \subset B^{(\ell-1)}_{y'}$.
We claim that $|B^{(j)}_i| \ge S^{(K - j)}(t)$ for each $i \in [2k]$ and $j \in [0,K]$, and prove it by induction on $j$. For $j = 0$ the statement is that $|B^{(0)}_i| \ge S^{(K)}(t) = m'$ for each $i \in [2k]$, so the base case holds (since each set $A^{(2)}_i$ has order $m'$). Assume the result holds for $j-1$. If $i \notin \{x, y, x', y'\}$, then $|B^{(j)}_i| = |B^{(j-1)}_i| \ge S^{(K - j + 1)}(t) > S^{(K - j)}(t)$, so we are done in this case. Now suppose $i \in \{x, y, x', y'\}$. By the induction hypothesis, $|B^{(j-1)}_\ell| \ge S^{(K - j + 1)}(t)$ for each $\ell \in \{x, y\}$, so by the definition of $S$, there exists either a complete bipartite or an empty bipartite graph in $G[B^{(j-1)}_{x},B^{(j-1)}_{y}]$ with each part having at least $S^{(K - j)}(t)$ vertices. Since $S^{(K - j)}(t) \ge t$, it cannot be complete (since the sets $B^{(j-1)}_{x}$ and $B^{(j-1)}_{y}$ are independent, a complete bipartite graph between them would give an induced copy of $K_{t,t}$ in $G$), so $|B^{(j)}_\ell| \ge S^{(K - j)}(t)$ for $\ell \in \{x,y\}$. Since $|B^{(j)}_{x'}| = |B^{(j)}_x|$ and $|B^{(j)}_{y'}| = |B^{(j)}_y|$, the induction step is complete.
It follows from the claim that $|B^{(K)}_i| \ge t \ge 1$ for each $i \in [2k]$. Observe also that for each $i \in [k]$, $B^{(K)}_i + (k + \pi(i) - i)m = B^{(K)}_{\pi(i)+k}$, and that for $\{i,j\} \in \{e_1, \ldots, e_K\}$, there are no edges in $G$ between $B^{(K)}_i$ and $B^{(K)}_j$. For each $i \in [k]$, choose a vertex $a(i) \in B^{(K)}_i$, and let $a(\pi(i)+k) = a(i) + (k + \pi(i) - i)m \in B^{(K)}_{\pi(i)+k}$. Let $A = \{a(1), \ldots, a(2k)\}$. The set $A$ induces the ordered graph $H(\pi)$ in $G$, so we are done.
For each permutation $\pi$, let $\tilde{c}(\pi, t)$ denote the smallest constant such that $|\P_n| < \tilde{c}(\pi, t)^n$ for every $n \in \N$ and every hereditary property $\P$ that satisfies the conditions of the theorem and avoids $H(\pi)$. The bounds given by our proof on the constant $\tilde{c}(\pi,t)$ are rather large. They could be improved somewhat by choosing the order in which the pairs $e_i$ are dealt with, and thus obtaining a much stronger inequality than the one we obtained ($|B^{(j)}_i| \ge S^{(K - j)}(t)$), but for simplicity of presentation (and because the actual bounds are not our main interest), we leave this as an exercise for the interested reader. Notice also that although we assumed $K_t \notin \P$, we only needed $K_{m'} \notin \P$.
We finish by noting an immediate consequence of Theorem \[noKt\].
Let $\P$ be a hereditary property of ordered graphs. If there exists a function $f: \N \to \N$ such that $$e(G) \le f(n) = o(n^2)$$ for every $G \in \P_n$ and every $n \in \N$, then Conjecture \[orderconj\] holds for $\P$.
Suppose there is such a function $f(n) = o(n^2)$, satisfying $e(G) \le f(n)$ for every $G \in \P_n$ and every $n \in \N$. Since $f(n) = o(n^2)$, there must exist $t \in \N$ such that $e(G) < n^2/4$ for every $G \in \P_n$ with $n \ge t$.
Assume $t \ge 2$. Now, $e(K_t) = {t \choose 2} \ge \frac{t^2}{4}$, and $e(K_{t,t}) = t^2$, so $K_t \notin \P_t$ and $K_{t,t} \notin \P_{2t}$. The result now follows by Theorem \[noKt\].
Acknowledgements
================
The authors would like to thank the anonymous referees for their careful reading of the manuscript and their many helpful comments, which included simplifying the original proof of Lemma \[bound\].
[99]{}
V.E. Alekseev, On the entropy values of hereditary classes of graphs, [*Discrete Math. Appl.*]{}, [**3**]{} (1993), 191–199.
R. Arratia, On the Stanley-Wilf conjecture for the number of permutations avoiding a given pattern, [*Electron. J. Combin.*]{}, [**6**]{} (1999), 4pp.
J. Balogh, B. Bollobás and R. Morris, Hereditary properties of ordered graphs, submitted to a Festschrift in honour of Jaroslav Nešetřil.
J. Balogh, B. Bollobás and M. Simonovits, On the number of graphs without forbidden subgraphs, [*J. Combin. Theory Ser. B.*]{}, **91** (2004), 1–24.
J. Balogh, B. Bollobás and D. Weinreich, The speed of hereditary properties of graphs, [*J. Combin. Theory Ser. B*]{}, [**79**]{} (2000), 131–156.
J. Balogh, B. Bollobás and D. Weinreich, A jump to the Bell number for hereditary graph properties, [*to appear in J. Combin. Theory Ser. B*]{}.
B. Bollobás, Hereditary properties of graphs: asymptotic enumeration, global structure and colouring, in [*Proceedings of the International Congress of Mathematicians*]{}, Vol. III (Berlin, 1998), [*Doc. Math.*]{} 1998, Extra Vol. III, 333–342 (electronic).
B. Bollobás and A. Thomason, Projections of bodies and hereditary properties of hypergraphs, [*Bull. London Math. Soc.*]{}, [**27**]{} (1995), 417–424.
B. Bollobás and A. Thomason, Hereditary and monotone properties of graphs, “The mathematics of Paul Erdős, II" (R.L. Graham and J. Nešetřil, Editors), [*Alg. and Combin.*]{}, Vol. 14, Springer-Verlag, New York/Berlin (1997), 70–78.
M. Bóna, Exact and asymptotic enumeration of permutations with subsequence conditions, Ph.D. Thesis, M.I.T. (1997).
P. Erdős, On extremal problems of graphs and generalized graphs, *Israel J. Math.*, **2** (1964), 183–190.
P. Erdős, P. Frankl and V. Rödl, The asymptotic number of graphs not containing a fixed subgraph and a problem for hypergraphs having no exponent, [*Graphs and Combin.*]{}, [**2**]{} (1986), 113–121.
P. Erdős, D.J. Kleitman and B.L. Rothschild, Asymptotic enumeration of $K_{n}$-free graphs, in [*Colloquio Internazionale sulle Teorie Combinatorie*]{} (Rome, 1973), Vol. II, pp. 19–27. [*Atti dei Convegni Lincei*]{}, [**17**]{}, Accad. Naz. Lincei, Rome, 1976.
Z. Füredi and P. Hajnal, Davenport–Schinzel theory of matrices, *Discrete Math.*, **103** (1992), 233–251.
C. Hundack, H.J. Prömel and A. Steger, Extremal graph problems for graphs with a color-critical vertex, [*Combin. Probab. Comput.*]{}, **2** (1993), 465–477.
M. Klazar, The Füredi–Hajnal conjecture implies the Stanley–Wilf conjecture, *Formal Power Series and Algebraic Combinatorics* (D. Krob, A. A. Mikhalev and A. V. Mikhalev, eds.), Springer, Berlin (2000), 250–255.
M. Klazar, Counting pattern-free set partitions I: A generalization of Stirling numbers of the second kind, [*Europ. J. Combin.*]{}, **21** (2000), 367–378.
M. Klazar, Counting pattern-free set partitions II: Non-crossing and other hypergraphs, [*Electron. J. Combin.*]{}, **33** (2000), 737–746.
M. Klazar and A. Marcus, Extensions of the linear bound in the Füredi-Hajnal conjecture, preprint.
D.J. Kleitman and K.J. Winston, On the number of graphs without $4$-cycles, [*Discrete Math.*]{}, **41** (1982), 167–172.
Ph.G. Kolaitis, H.J. Prömel and B.L. Rothschild, $K_{l+1}$-free graphs: asymptotic structure and a $0$-$1$ law, [*Trans. Amer. Math. Soc.*]{}, [**303**]{} (1987), 637–671.
A. Marcus and G. Tardos, Excluded permutation matrices and the Stanley-Wilf conjecture, *J. Combin. Theory Ser. A*, **107** (2004), 153–160.
H.J. Prömel and A. Steger, Excluding induced subgraphs III., A general asymptotic, [*Random Structures Algorithms*]{}, [**3**]{} (1992), 19–31.
H.J. Prömel and A. Steger, On the asymptotic structure of sparse triangle free graphs, [*J. Graph Theory*]{}, [**21**]{} (1996), 137–151.
H.J. Prömel and A. Steger, Counting $H$-free graphs, [*Discrete Math.*]{}, [**154**]{} (1996), 311–315.
E.R. Scheinerman and J. Zito, On the size of hereditary classes of graphs, [*J. Combin. Theory Ser. B*]{}, [**61**]{} (1994), 16–39.
G. Tardos, On 0-1 matrices and small excluded submatrices, *to appear in J. Combin. Theory Ser. A*.
[^1]: The first author was supported during this research by OTKA grant T049398 and NSF grant DMS-0302804, the second by NSF grant ITR 0225610, and the third by a Van Vleet Memorial Doctoral Fellowship
|
****
**Peng Zhang**\
\
*Institute of Theoretical Physics, College of Applied Sciences,\
Beijing University of Technology, Beijing 100124, P.R.China*
**Abstract**
By requiring the correct Regge behavior in both the meson and nucleon sectors, we determine the infrared asymptotic behavior of the various background fields in the soft-wall AdS/QCD model, including the dilaton, the warp factor, and the scalar VEV. We then use a simple parametrization which smoothly connects these IR limits with the usual UV limits. The resulting spectrum is compared with experimental data, and the agreement between them is good.
Introduction
============
Quantum chromodynamics (QCD) has been established as the theory of the strong interaction for nearly forty years, with quarks and gluons identified as the fundamental degrees of freedom. QCD is asymptotically free in the ultraviolet (UV) limit, so standard perturbation theory can be used to study processes with large momentum transfer, such as deep inelastic scattering. In the infrared (IR) region, however, the coupling constant becomes strong. There the effective participants in strong interactions are hadrons, such as the $\pi$, $\rho$ and $N$, while quarks and gluons are confined inside these particles. Perturbation theory cannot be used directly, and one must develop effective models to describe low energy hadron physics.
AdS/QCD is one of them and has been studied intensively in recent years. This methodology stems from 't Hooft's idea of the large $N$ expansion [@tH], and is directly motivated by the anti-de Sitter/conformal field theory (AdS/CFT) correspondence [@M; @GKP; @W] in string theory. AdS/QCD is a bottom-up approach. It associates QCD operators, such as the chiral currents and the quark condensate, with bulk fields propagating in a five-dimensional space which tends to AdS$_5$ as the fifth coordinate $z$ goes to zero. There are mainly two versions: the hard-wall model [@dTB; @EKSS; @DP1] and the soft-wall model [@KKSS]. The former correctly describes chiral symmetry breaking ($\chi$SB) and the low lying hadron states. The latter was developed to realize the meson Regge behavior that follows from linear confinement in QCD; it turns out to be necessary to introduce a dilaton background that grows quadratically in the deep IR region $z\rightarrow\infty$. The model was further studied in [@GKK] in order to correctly incorporate $\chi$SB. AdS/QCD also has interesting relations with light-front dynamics [@BdT1]. The UV limits of the various background fields are easily fixed. For instance, the warp factor should tend to that of AdS space in order to reflect the conformal invariance of the high energy fixed point of QCD, and the UV behavior of the vacuum expectation value (VEV) of the bulk scalar is determined by the pattern of $\chi$SB. Work on soft-wall models has therefore mainly focused on various improvements in the IR region, which nevertheless still seem somewhat arbitrary.
The main result of this paper is a way to fix the IR asymptotic behavior of the background fields: the dilaton $\Phi(z)$, the warp factor $a(z)$, and the scalar VEV $v(z)$. We achieve this simply by requiring that the model have the correct Regge-type spectrum in both the meson and nucleon sectors. Nucleons can also be realized [@HIY] in the AdS/QCD framework by introducing 5D Dirac spinors that correspond to the baryon operators. In [@Z] nucleons were incorporated into the soft-wall model with asymptotically linear spectra in both the meson and nucleon sectors. Some other works treating mesons and baryons at the same time can be found in [@FBF; @VS1; @BdT2]. The main drawback of the model in [@Z] is that, although both mesons and nucleons have linear spectra, the spectral slope of the vectors differs from that of the axial-vectors, which is inconsistent with experimental data. One can in fact argue that this cannot be cured by adjusting only the form of the potential and the scalar VEV. The way around this, which we actually *must* adopt to be consistent with the data, is to allow the mass of a bulk field to be $z$-dependent. This idea has also been suggested in the literature, e.g. [@CCW; @VS2]. Except for conserved currents, a generic operator has a nonzero anomalous dimension, which is scale-dependent due to the running coupling of QCD. According to the well-known dimension-mass relation, the mass of the corresponding bulk field should then be $z$-dependent, since the fifth coordinate $z$ can be interpreted as the inverse of the 4D energy scale. We find that, by requiring the Regge-type spectrum to be properly realized in both the meson and nucleon sectors, the IR asymptotic behaviors of the background fields are completely fixed. Using simple parametrizations that smoothly connect these IR limits with the usual UV limits, we can then make predictions and compare them with the observed data.
Our philosophy is to reduce the uncertainty of the model as much as possible by use of known facts.
The model and constraints
=========================
The soft-wall AdS/QCD model is defined in a five-dimensional bulk with the metric $$\begin{aligned}
ds^2=\,G_{MN}\,dx^M dx^N=a^2(z)\,(\,\eta_{\mu\nu}dx^{\mu}dx^{\nu}-dz^2)\,,\quad 0 < z < \infty\,.\end{aligned}$$ The factor $a(z)$ is called the warp factor and tends to $z^{-1}$ as $z\rightarrow0$. There is also a background dilaton $\Phi(z)$, assumed to be $O(z^2)$ as $z\rightarrow\infty$ in order to produce a Regge-type spectrum in the meson sector [@KKSS]. According to the general rules of the gauge/gravity duality, there are two 5D gauge fields, $L_M^a$ and $R_M^a$, which correspond to 4D chiral currents $J_{L}^{a\mu}=\bar{q}_L\gamma^{\mu}t^a q_L$ and $J_{R}^{a\mu}=\bar{q}_R\gamma^{\mu}t^a q_R$. The quark bilinear operator $\bar{q}_L^i q_R^j$ is also an important 4D operator for $\chi$SB. Its holographic dual is a 5D $2\times2$ matrix-valued complex scalar field $X=(X^{ij})$, which is in the bifundamental representation of the 5D gauge group $SU(N_f)_L \times SU(N_f)_R$ with $N_f$ being the number of quark flavors. The bulk action for the meson sector is $$\begin{aligned}
S_M=\int d^4x\,dz\, \sqrt{G}\,e^{-\Phi}\, \mathrm{Tr}\left\{-\frac{1}{4g_5^2}(\,F_L^2+F_R^2)+
|DX|^2-m_X^2|X|^2\,\right\}\,. \label{SM}\end{aligned}$$ By matching with QCD, $g_5^2=12\pi^2/N_c=4\pi^2$. The covariant derivative of $X$ is $D_M{X}=\p_M{X}-iL_M{X}+i{X}R_M$. $F_L$ and $F_R$ are the field strengths of the gauge potentials $L$ and $R$ respectively. The generator $t^a$ is normalized by $\mathrm{Tr}(\,t^at^b)=\frac{1}{2}\delta^{ab}$.
The bulk scalar $X$ is assumed to have a $z$-dependent VEV: $\langle{X}\rangle=\frac{1}{2}\,v(z)$. The function $v(z)$ satisfies the equation of motion (EOM) $$\begin{aligned}
\p_z(\,a^3 e^{-\Phi}\p_z v)-a^5 e^{-\Phi}m_X^2v=0\,. \label{EOMv}\end{aligned}$$ The mass-square $m_X^2$ may be $z$-dependent due to possible anomalous dimension of $\bar{q}_L q_R$. From (\[EOMv\]) we can express $m_X^2$ as $$\begin{aligned}
m_X^2=\,\frac{\,v''+(-\Phi'+3a'/a)\,v'}{a^2v}\,\,. \label{msq}\end{aligned}$$ To describe vector mesons, define $V_M=(L_M+R_M)/2$ and use the axial gauge $V_5=0$. Expand the field $V_\mu$ in terms of its Kaluza-Klein (KK) modes $V_\mu(x,z)=\sum_{n}\,\rho_\mu^{(n)}(x)f_V^{(n)}(z)$ with $f_V^{(n)}(z)$ being eigenfunctions of $-\p_5(ae^{-\Phi}\p_5f_V^{(n)})=ae^{-\Phi}M_V^{(n)2}f_V^{(n)}$. After integrating out the $z$-coordinate, we get an effective 4D action for a tower of massive vector fields $\rho_\mu^{(n)}(x)$, which can be identified as the fields of $\rho$ mesons with $M_V^{(n)}$ being their masses. By setting $f_V^{(n)}=e^{\omega/2}\psi_V^{(n)}$, the eigenvalue equation can be transformed into Schrödinger form $-\psi_V^{(n)\prime\prime}+V_V\psi_V^{(n)}=M_V^{(n)2}\psi_V^{(n)}$ with the potential $$\begin{aligned}
V_V=\frac{1}{4}\omega'^{\,2}-\frac{1}{2}\omega''\,,\label{VV}\end{aligned}$$ where $\omega=\Phi-\log{a}$. Similarly for axial-vectors, define $A_M=(L_M-R_M)/2$. Also use the axial gauge $A_5=0$, expand $A_\mu(x,z)=\sum_{n}\,a_\mu^{(n)}(x)f_A^{(n)}(z)$, and transform the eigenvalue problem for $f_A^{(n)}(z)$ into the Schrödinger form. The resulting potential $V_A$ for axial-vector mesons is $$\begin{aligned}
V_A=\frac{1}{4}\omega'^{\,2}-\frac{1}{2}\omega''+\,g_5^2\,a^2v^2\,. \label{VA}\end{aligned}$$ The corresponding eigenvalue is the mass-square $M_A^{(n)2}$ of the $a_1$ mesons. Note that there is an additional term $g_5^2\,a^2v^2$, which guarantees that the axial-vector resonance is heavier than the vector one with the same radial quantum number.
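As a numerical cross check of this Schrödinger reduction, the pure soft-wall background $a=1/z$, $\Phi=\kappa^2z^2$ of [@KKSS] is exactly solvable: there $\omega=\kappa^2z^2+\log z$, the potential (\[VV\]) becomes $V_V=\kappa^4z^2+3/(4z^2)$, and the spectrum is $M_V^{(n)2}=4\kappa^2(n+1)$. A minimal finite-difference sketch (our own illustration, not code from this paper) reproduces this:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

kappa = 1.0                    # soft-wall scale; sets the units
N, zmax = 4000, 12.0           # number of interior grid points and IR cutoff
h = zmax / (N + 1)
z = h * np.arange(1, N + 1)    # Dirichlet boundary conditions at 0 and zmax

# pure soft-wall: omega = kappa^2 z^2 + log z  =>  V_V = kappa^4 z^2 + 3/(4 z^2)
V = kappa**4 * z**2 + 0.75 / z**2

# three-point discretization of  -psi'' + V psi = M^2 psi
diag = 2.0 / h**2 + V
off = -np.ones(N - 1) / h**2
M2 = eigh_tridiagonal(diag, off, select='i', select_range=(0, 4))[0]

print(np.round(M2, 3))         # ≈ [4, 8, 12, 16, 20] = 4 kappa^2 (n+1)
```

The same solver applies verbatim to $V_A$ once a parametrization of $v(z)$ is specified.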
The spin-1/2 nucleon can also be realized in the AdS/QCD framework by introducing two 5D Dirac spinors $\Psi_{1,2}$, which are charged under the gauge fields $L_M$ and $R_M$ respectively. The nucleon sector action is [@HIY; @Z] $$\begin{aligned}
S_N&=&\int d^4x\,dz\, \sqrt{G}\,\left(\mathcal{L}_K+\mathcal{L}_I\right)\,, \nonumber\\
\mathcal{L}_K&=&i\overline{\Psi}_1\Gamma^M\nabla_M\Psi_1+i\overline{\Psi}_2\Gamma^M\nabla_M\Psi_2
-m_{\Psi}\overline{\Psi}_1\Psi_1+m_{\Psi}\overline{\Psi}_2\Psi_2 \,, \label{SN} \\[0.2cm]
\mathcal{L}_I&=&-g_\mathrm{Y}\overline{\Psi}_1X\Psi_2-g_\mathrm{Y}\overline{\Psi}_2X^\dag\Psi_1\,. \nonumber\end{aligned}$$ Here $\Gamma^M=e^M_A\Gamma^A=z\delta^M_A\Gamma^A$, and $\{\Gamma^A,\Gamma^B\}=2\eta^{AB}$ with $A=(a,5)$. We choose the representation as $\Gamma^A=(\gamma^a, -i\gamma^5)$ with $\gamma^5=\mathrm{diag}(I,-I)$. The covariant derivatives for spinors are $\nabla_M \Psi_1=\p_M\Psi_1+\frac{1}{2}\,\omega^{AB}_M\Sigma_{AB}\Psi_1-iL_M\Psi_1$ and $\nabla_M \Psi_2=\p_M\Psi_2+\frac{1}{2}\,\omega^{AB}_M\Sigma_{AB}\Psi_2-iR_M\Psi_2$. The only nonzero components of the spin connection $\omega^{AB}_M$ are $\omega^{a5}_\mu=-\omega^{5a}_\mu=\frac{1}{z}\,\delta^a_\mu$. The $\mathcal{L}_I$ part introduces the effects of $\chi$SB into the nucleon sector. In (\[SN\]) we also allow $m_\Psi$ to be $z$-dependent due to a possible anomalous dimension of the baryon operator. As in the meson sector, we expand the two spinors $\Psi_{a=1,2}$ in terms of their KK modes $$\begin{aligned}
\Psi_a(x,z)=\begin{pmatrix}\,\, \sum_n N_{L}^{(n)}(x)\,f_{aL}^{(n)}(z) \,\,\,\, \\[0.2cm]
\,\, \sum_n N_{R}^{(n)}(x)\,f_{aR}^{(n)}(z) \,\,\,\, \end{pmatrix} \,\,.\end{aligned}$$ The 4D spinors $N^{(n)}=(N_{L}^{(n)}\hspace{-0.1cm}, N_{R}^{(n)})^{\mathrm{T}}$ are interpreted as nucleon fields. The internal wave functions $f$ satisfy four coupled first-order differential equations. Acting with one more derivative and eliminating the two right-handed $f$'s, we get a coupled Sturm-Liouville eigenvalue problem for $f_{L}^{(n)}\equiv(f_{1L}^{(n)},f_{2L}^{(n)})^\mathrm{T}$. Defining $\chi_{L}^{(n)}=a^{2}f_{L}^{(n)}$, the coupled Schrödinger equation for $\chi_{L}^{(n)}$ is $-\chi_L^{(n)}{''}+V_N\chi^{(n)}_L=M_N^{(n)2}\chi^{(n)}_L$. The potential matrix $V_N$ is $$\begin{aligned}
V_N =\begin{pmatrix}\,\, m_{\Psi}^2a^2+(m_{\Psi}a)'+u^2 & u' \\[0.2cm]
u' & m_{\Psi}^2a^2-(m_{\Psi}a)'+u^2 \,\,\,\end{pmatrix}\,,\label{VN}\end{aligned}$$ with $u(z)=\frac{1}{2}\,g_\mathrm{Y}av$. The eigenvalue $M_N^{(n)2}$ is the mass-square of nucleon and its radial excitations.
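Numerically, the coupled problem can be treated with the same finite-difference machinery by stacking the two channels into a single $2N\times2N$ block matrix. The sketch below (our own generic illustration, not the authors' code) does this for an arbitrary potential matrix and checks, with toy harmonic wells, that a vanishing off-diagonal coupling reproduces two independent scalar problems:

```python
import numpy as np

def coupled_spectrum(V11, V12, V22, h, k=6):
    """Lowest k eigenvalues of -chi'' + V_N chi = M^2 chi for a 2x2
    potential matrix sampled on an interior grid (Dirichlet BCs)."""
    N = len(V11)
    lap = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
           - np.diag(np.ones(N - 1), -1)) / h**2
    H = np.block([[lap + np.diag(V11), np.diag(V12)],
                  [np.diag(V12), lap + np.diag(V22)]])
    return np.sort(np.linalg.eigvalsh(H))[:k]

# toy check: two harmonic wells with zero coupling decouple into two
# half-line oscillators with spectra {3, 7, 11, ...} and {4, 8, 12, ...}
N, zmax = 800, 10.0
h = zmax / (N + 1)
z = h * np.arange(1, N + 1)
M2 = coupled_spectrum(z**2, np.zeros(N), z**2 + 1.0, h)
print(np.round(M2, 3))   # ≈ [3, 4, 7, 8, 11, 12]
```

For the actual model one samples the entries of (\[VN\]) on the grid; the off-diagonal entry is $u'$.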
Now we start to analyze the asymptotic behavior of various background fields in the model, i.e. the dilaton $\Phi(z)$, the warp factor $a(z)$, and the scalar VEV $v(z)$. The UV limit is relatively simple to argue. For the warp factor $$\begin{aligned}
a(z)\sim \,\frac{L}{z}\,,\qquad z\rightarrow0\,. \label{aUV}\end{aligned}$$ This follows from the conformal invariance of the UV fixed point: the bulk space should be asymptotically AdS$_5$. The value of the characteristic length $L$ does not affect the resulting spectrum. For the scalar VEV $$\begin{aligned}
v(z)\sim \,Az+Bz^3\,,\qquad z\rightarrow0\,. \label{vUV}\end{aligned}$$ The linear term corresponds to the explicit $\chi$SB due to the quark mass, while the cubic term describes the spontaneous breaking by the nonzero quark condensate. Unlike the warp factor and the scalar VEV, the UV limit of the dilaton, however, cannot be uniquely fixed. The reason is as follows. Since QCD is asymptotically free, the conformal dimension of any operator, at the UV fixed point, is just its classical value, which is 3 for $\bar{q}_L q_R$. Therefore by the mass-dimension relation $m_X^2=\Delta(\Delta-4)$, we have $$\begin{aligned}
m_X^2(z)\sim\,-3\,,\qquad z\rightarrow0\,. \label{mXUV}\end{aligned}$$ From the expression (\[msq\]) we can see that the above equation (\[mXUV\]) holds if and only if $\Phi(z)\sim z^\alpha$ as $z\rightarrow0$ with $\alpha>0$. Actually the UV limit of the dilaton could also depend on the form of the scalar potential in the bulk action [@GKK; @Z]. So it is, generally speaking, model-dependent.
Having studied the UV behavior, we now turn to the IR. For the dilaton it must be $$\begin{aligned}
\Phi(z)\sim\,O(z^2)\,,\qquad z\rightarrow\infty\,,\end{aligned}$$ which guarantees $M_V^{(n)2}\hspace{-0.1cm}\sim O(n)$ as $n\rightarrow\infty$ for vector mesons [@KKSS]. If $a(z)\sim O(z^{\gamma})$ as $z\rightarrow\infty$,[^1] we always have $(\log{a}\hspace{-0.05cm})\,'\hspace{-0.1cm}\sim O(z^{-1})$ for any power $\gamma$. Therefore, from the expression (\[VV\]) of the potential, the vector mesons alone cannot constrain the IR behavior of the warp factor $a(z)$. One of the key observations of this paper is that the nucleon sector fixes it. Consider the nucleon potential matrix (\[VN\]). At the IR fixed point, QCD becomes a strongly coupled, but well-defined, conformal field theory. The dimension of the baryon operator should therefore be finite, which means that $m_\Psi$, although possibly $z$-dependent, must tend to a finite constant as $z\rightarrow\infty$. Consider the diagonal terms of (\[VN\]) first. $m_{\Psi}^2a^2$ must dominate $\pm(m_{\Psi}a)'$ at large $z$. The crucial issue, however, is whether the first term $m_{\Psi}^2a^2$ or the third term $u^2$ dominates. We settle this by contradiction. Suppose $u^2\propto{a^2v^2}$ dominates; then the asymptotic linearity of the nucleon spectrum forces $a^2v^2\sim O(z^2)$ in the IR. But since $u^2\propto{a^2v^2}$ also appears in the axial potential (\[VA\]), there would then be another $O(z^2)$ term in addition to the first term of $V_A(z)$. This implies that the axial-vector spectrum, although still linear, has a different slope from that of the vector mesons, which is inconsistent with experimental data. The conclusion is therefore that $m_{\Psi}^2a^2$ dominates $u^2$. Again by the spectral linearity of the nucleons, we obtain the desired IR limit of the warp factor as $$\begin{aligned}
a(z)\sim\,O(z)\,,\qquad z\rightarrow\infty\,. \label{aIR}\end{aligned}$$ By requiring that vectors and axial-vectors have the same spectral slope, we only learn that $v(z)\sim O(z^{1-\varepsilon})$ in the IR for some positive $\varepsilon$. To constrain it further, we assume that chiral symmetry is not asymptotically restored [@SV].[^2] This means that $V_A-V_V\propto a^2v^2$ should tend to a nonzero constant as $z\rightarrow\infty$. Therefore the IR behavior of the scalar VEV is $$\begin{aligned}
v(z)\sim\,O(z^{-1})\,,\qquad z\rightarrow\infty\,. \label{vIR}\end{aligned}$$ As a cross check, we find that $u'\propto(av)'$ tends to zero in the deep IR region. The two Schrödinger equations for the nucleons then decouple from each other, which means the mass difference between a nucleon state and its parity partner becomes smaller and smaller.[^3] This is consistent with the observed data.
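These IR statements can be verified symbolically. Taking the simple IR representatives $\Phi=\kappa^2z^2$, $a=\mu z$ and $v=c/z$ (illustrative forms consistent with (\[aIR\]) and (\[vIR\]); the constants $\mu$ and $c$ are placeholders), the potentials $V_V$ and $V_A$ share the leading $\kappa^4z^2$ term, so the slopes agree, while their difference is a finite constant:

```python
import sympy as sp

z, kappa, mu, c, g5 = sp.symbols('z kappa mu c g5', positive=True)

Phi = kappa**2 * z**2        # quadratic dilaton
a = mu * z                   # IR form of the warp factor, eq. (aIR)
v = c / z                    # IR form of the scalar VEV, eq. (vIR)

w = Phi - sp.log(a)          # omega = Phi - log a
V_V = sp.diff(w, z)**2 / 4 - sp.diff(w, z, 2) / 2
V_A = V_V + g5**2 * a**2 * v**2

# equal leading z^2 coefficients => equal asymptotic slopes 4 kappa^2
assert sp.limit(V_V / z**2, z, sp.oo) == kappa**4
assert sp.limit(V_A / z**2, z, sp.oo) == kappa**4

# V_A - V_V is the finite constant g5^2 mu^2 c^2: no symmetry restoration
assert sp.simplify(V_A - V_V - g5**2 * mu**2 * c**2) == 0
```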
Additionally it can be shown that, with these IR limits, the scalar and pseudoscalar mesons also have asymptotically linear spectral trajectories parallel to those of the vectors and axial-vectors. By directly applying the method of [@KKSS], it can further be shown that the relation between the mass-square and the total angular momentum quantum number $J$ for higher spin mesons is indeed of Regge type. These facts exhibit the consistency of the AdS/QCD model with our asymptotic relations (\[aIR\]) and (\[vIR\]), which are the main results of the present work.
A simple parametrization
========================
Having determined the asymptotic behaviors of the background fields, we now use simple parametrizations that smoothly connect these asymptotes from the UV to the IR. First we simply choose $$\begin{aligned}
\Phi=\kappa^2z^2\,. \label{Phi}\end{aligned}$$ It is shown in [@KKSS2] that the sign of the dilaton should be positive to avoid a spurious massless state in the vector sector. Since we allow $m_X^2$ to be $z$-dependent, the choice (\[Phi\]) does not raise difficulties for the correct realization of $\chi$SB [@VS2]. We parametrize the warp factor and the scalar VEV as $$\begin{aligned}
a(z)&=&\,\frac{\,1+\mu z^2}{z}\,\,,\\[0.1cm] \label{a}
v(z)&=&\,\frac{\,Az+Bz^3}{1+Cz^4}\,\,. \label{v}\end{aligned}$$ We have chosen the characteristic length $L$ of AdS$_5$ to be 1. All three parametrizations have the correct UV and IR behaviors determined in the previous section. By use of (\[msq\]), the $z$-dependence of $m_X^2$ is then fixed. Since the dilaton $\Phi$ grows with a positive power of $z$, $m_X^2$ has the correct UV limit $m_X^2\sim-3$; in the deep IR it can be shown that $m_X^2$ tends to zero. With these parametrizations we can numerically solve the Schrödinger equations with the potentials $V_V(z)$ in (\[VV\]) and $V_A(z)$ in (\[VA\]). By fitting the vector and axial-vector meson masses, we choose the following values for the five parameters $$\begin{aligned}
&\kappa=415.9\,\mathrm{MeV}\,,\quad \mu=860.4\,\mathrm{MeV}\,;& \nonumber \\
&A=2.1\,\mathrm{MeV}\,,\quad B=(411.9\,\mathrm{MeV})^3\,,\quad C=(733.6\,\mathrm{MeV})^4\,.& \label{para_m}\end{aligned}$$ Vector mesons and axial-vector mesons both have asymptotically linear mass-squares, with the same slope $4\kappa^2$. The resulting spectra together with the observed values are listed in Tables \[rho\] and \[a1\] respectively. The agreement is good, especially for the higher resonance states.
$\rho$ 0 1 2 3 4 5 6
------------------------- ------- ------- ------ ------ ------ ------ ------ --
$m_{\mathrm{th}}$ (MeV) 1003 1306 1550 1759 1947 2118 2276
$m_{\mathrm{ex}}$ (MeV) 775.5 1465 1570 1720 1909 2149 2265
error 29.3% 10.8% 1.3% 2.3% 2.0% 1.4% 0.5%
: []{data-label="rho"}
$a_1$ 0 1 2 3 4 5
------------------------- ------- ------ ------ ------ ------ ------
$m_{\mathrm{th}}$ (MeV) 1452 1646 1842 2022 2187 2340
$m_{\mathrm{ex}}$ (MeV) 1230 1647 1930 2096 2270 2340
error 18.1% 0.0% 4.6% 3.6% 3.7% 0.0%
: []{data-label="a1"}
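The vector-sector entries of Table \[rho\] depend on the background only through $\omega=\Phi-\log a$, i.e. only on $\kappa$ and $\mu$, so they are straightforward to reproduce with a generic Schrödinger solver. The sketch below (our own finite-difference illustration, not the authors' code) builds $V_V$ from the parametrization above and checks that successive mass-squares approach the slope $4\kappa^2\approx0.69\,\mathrm{GeV}^2$:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

kappa, mu = 0.4159, 0.8604      # GeV, from eq. (para_m)
N, zmax = 6000, 20.0            # z in GeV^-1; cutoff well past the turning point
h = zmax / (N + 1)
z = h * np.arange(1, N + 1)

# omega = kappa^2 z^2 - log((1 + mu z^2)/z), with analytic derivatives
dw = 2 * kappa**2 * z - 2 * mu * z / (1 + mu * z**2) + 1 / z
d2w = 2 * kappa**2 - 2 * mu * (1 - mu * z**2) / (1 + mu * z**2)**2 - 1 / z**2
V = 0.25 * dw**2 - 0.5 * d2w    # vector potential, eq. (VV)

diag = 2.0 / h**2 + V
off = -np.ones(N - 1) / h**2
M2 = eigh_tridiagonal(diag, off, select='i', select_range=(0, 7))[0]

masses = 1000.0 * np.sqrt(M2)   # in MeV, to compare against Table [rho]
slopes = np.diff(M2)            # should approach 4 kappa^2 ≈ 0.692 GeV^2
```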
For nucleons we parametrize the bulk spinor mass $m_\Psi$ as $$\begin{aligned}
m_\Psi=\,\frac{\,\frac{5}{2}+\mu_1z\,}{\,1+\mu_2z\,}\,. \label{mpsi}\end{aligned}$$ This parametrization gives the correct UV limit $5/2$, which corresponds to the classical dimension $9/2$ of the baryon operator via the mass-dimension relation for spinors, $m_\Psi=\Delta-2$. In the IR, $m_\Psi$ tends to the constant $\mu_1/\mu_2$ which, together with the parameter $\mu$ in the warp factor, determines the mass-square slope for nucleons. The coupled Schrödinger equation has to be solved numerically with proper boundary conditions [@Z]. We simply fix $\mu_1=1$ GeV, while $\mu_2$ and the Yukawa coupling constant $g_\mathrm{Y}$ are chosen as $$\begin{aligned}
g_\mathrm{Y}=9.2\,,\quad \mu_2=4573\,\mathrm{MeV}\,.\end{aligned}$$ The calculated nucleon masses and the corresponding data are listed in Table \[nucl\]. The agreement between them is also reasonable.
$N$ 0 1 2 3 4 5 6
------------------------ ------ ------ ------ ------ ------ ------ ------
$m_{\mathrm{th}}$(MeV) 937 1434 1583 1783 1842 2029 2065
$m_{\mathrm{ex}}$(MeV) 939 1440 1535 1650 1710 2090 2100
error 0.2% 0.5% 3.2% 8.0% 7.7% 2.9% 1.7%
: []{data-label="nucl"}
Conclusions
===========
By requiring that the model have the correct Regge-type spectrum in both the meson and nucleon sectors, we have determined the IR behavior of the background fields in the soft-wall AdS/QCD model: the dilaton $\Phi(z)$, the warp factor $a(z)$, and the scalar VEV $v(z)$. More precisely, our arguments are mainly based on: (i) $M_n^2\sim O(n)$ as $n\rightarrow\infty$ for both mesons and nucleons; (ii) the spectral slopes of the vector and axial-vector mesons are asymptotically equal; (iii) the difference between the mass-squares of a vector resonance and the corresponding axial-vector resonance tends to a finite nonzero constant. We use simple parametrizations that smoothly connect these IR limits with the usual UV limits. The agreement between the predicted values and the experimental data is good. In the present work we have restricted ourselves to the lowest order effective bulk action. Whether our arguments can also constrain higher dimensional terms, e.g. scalar potentials, is an interesting question for future work.
[99]{}
G. ’t Hooft, Nucl. Phys. B72, 461 (1974). J. M. Maldacena, Adv. Theor. Math. Phys. 2, 231 (1998) \[arXiv:hep-th/9711200\]. S. S. Gubser, I. R. Klebanov and A. M. Polyakov, Phys. Lett. B428, 105 (1998) \[arXiv:hep-th/9802109\]. E. Witten, Adv. Theor. Math. Phys. 2, 253 (1998) \[arXiv:hep-th/9802150\].
G. F. de Téramond and S. J. Brodsky, Phys. Rev. Lett. 94, 201601 (2005) \[arXiv:hep-th/0501022\]. J. Erlich, E. Katz, D. T. Son and M. A. Stephanov, Phys. Rev. Lett. 95, 261602 (2005) \[arXiv:hep-ph/0501128\]. L. Da Rold and A. Pomarol, Nucl. Phys. B721, 79 (2005) \[arXiv:hep-ph/0501218\].
A. Karch, E. Katz, D. T. Son and M. A. Stephanov, Phys. Rev. D74, 015005 (2006) \[arXiv:hep-ph/0602229\]. T. Gherghetta, J. I. Kapusta and T. M. Kelley, Phys. Rev. D79: 076003 (2009) \[arXiv:0902.1998\]. S. J. Brodsky and G. F. de Teramond, Phys. Rev. Lett. 96, 201601 (2006) \[arXiv:hep-ph/0602252\]; G. F. de Teramond, S. J. Brodsky, Phys. Rev. Lett. 102, 081601 (2009) \[arXiv:0809.4899\].
D. K. Hong, T. Inami and H.-U. Yee, Phys. Lett. B646, 165 (2007) \[arXiv:hep-ph/0609270\]. P. Zhang, JHEP 05 (2010) 039 \[arXiv:1003.0558\]; Phys. Rev. D82, 094013 (2010) \[arXiv:1007.2163\]. H. Forkel, M. Beyer and T. Frederico, JHEP 07 (2007) 077 \[arXiv:0705.1857\]. A. Vega and I. Schmidt, Phys. Rev. D79, 055003 (2009) \[arXiv:0811.4638\]. G. F. de Teramond and S. J. Brodsky, \[arXiv:1001.5193\]. A. Cherman, T. D. Cohen and E. S. Werbos, Phys. Rev. C79, 045203 (2009) \[arXiv:0804.1096\]. A. Vega and I. Schmidt, Phys. Rev. D82, 115023 (2010) \[arXiv:1005.3000\]. M. Shifman and A. Vainshtein, Phys. Rev. D77, 034002 (2008) \[arXiv:0710.0863\]. L. Y. Glozman, Phys. Rept. 444, 1 (2007) \[arXiv:hep-ph/0701081\]. R. L. Jaffe, D. Pirjol and A. Scardicchio, Phys. Rept. 435, 157 (2006) \[arXiv:hep-ph/0602010\]. E. Klempt, \[arXiv:1011.3644\].
A. Karch, E. Katz, D. T. Son and M. A. Stephanov, JHEP 04 (2011) 066 \[arXiv:1012.4813\].
[^1]: If assume non-power function, e.g. $a\sim e^z$, it will destroy the linear spectrum in the nucleon sector.
[^2]: There are some controversies about this issue among experts, see e.g. [@Gl] for the opposite opinion.
[^3]: This asymptotic degeneracy of nucleon states in a parity doublet does not necessarily imply, at least theoretically, the chiral symmetry restoration, see e.g. [@JPS; @K].
---
abstract: 'In this work we investigate protoneutron star properties within a modified version of the quark meson coupling (QMC) model that incorporates an $\omega-\rho$ interaction plus kaon condensed matter at finite temperature. Fixed entropy and trapped neutrinos are taken into account. Our results are compared with the ones obtained with the GM1 parametrization of the non-linear Walecka model for similar values of the symmetry energy slope. Contrary to GM1, within the QMC model the formation of low mass black holes during cooling is not probable. It is shown that the evolution of the protoneutron star may include the melting of the kaon condensate driven by neutrino diffusion, followed by the formation of a second condensate after cooling. The signature of this complex process could be a neutrino signal followed by a gamma ray burst. We have seen that both models can, in general, describe very massive stars.'
author:
- 'Prafulla K. Panda'
- 'Débora P. Menezes'
- Constança Providência
title: Effects of the symmetry energy on the kaon condensates in the QMC Model
---
Introduction
============
In recent years, all sorts of phenomenological equations of state (EoS), relativistic and non-relativistic, have been used to describe (proto)neutron star matter. These EoS are parameter dependent and are adjusted so as to reproduce nuclear matter bulk properties, such as the binding energy at the correct saturation density and the incompressibility, as well as ground state properties of some nuclei and their collective responses [@fitting; @nl3]. Attempts to constrain the EoS have been made, based either on finite nuclei experimental results, for instance the isoscalar monopole and isovector dipole giant resonances [@jorge2002] and the neutron skin thickness [@peles], or on astrophysical observations [@cottam; @sanwal]. Until 2010, when a star with a mass of almost 2 $M_\odot$ was confirmed [@demorest], most EoS were expected to produce maximum stellar masses just larger than 1.44 $M_\odot$ and radii of the order of 10 to 12 km. Some of them, such as the NL3 parametrization [@nl3] of the non-linear Walecka model (NLWM), were even discarded for being too hard and providing too large maximum masses. Recently a second very massive neutron star was detected [@antoniadis] and many parametrizations and models were revisited and readjusted to account for the new observations. Many other constraints, based on the above mentioned nuclear properties and also on the symmetry energy, its slope, skewness, dipole polarizabilities, heavy-ion collision flows, isobaric analog states, etc., have also been proposed [@lattimer; @outros].
As far as relativistic models are concerned, the $\omega-\rho$ interaction [@antigos; @IUFSU] can be adjusted to reproduce experimental values of the symmetry energy and its slope, the latter being strongly correlated with many nuclear [@review] and stellar properties [@rafael].
On the other hand, it is well known that the EoS and the internal constitution of neutron stars depend on the nature of the strong interaction. In a compact star, strangeness may occur in the form of baryons, such as the $\Lambda$ and $\Sigma^-$ hyperons, as a Bose condensate, i.e., a $K^-$ meson condensate, or in the form of strange quarks, in all cases influencing the star structure and its macroscopic properties [@prak97; @Glen00]. Some years ago, it was suggested that above some critical density the ground state of baryonic matter might contain a Bose-Einstein condensate of negatively charged kaons [@kaplan], provided a pion condensate also exists. In [@brown], a mechanism that allows kaon condensation without pion condensation was proposed for the first time. There is a strong attraction between $K^-$ mesons and baryons which increases with density and lowers the energy of the zero momentum state. A condensate is formed when this energy equals the kaon chemical potential $\mu$. When the electron chemical potential equals the effective kaon mass, kaons become favored because they help to maintain charge neutrality and, being bosons, can condense in the lowest energy state. For this reason $K^-$ mesons, which have the same electric charge as electrons, are the kaons that normally appear in a condensed state in stars.
It is well known that the onset of kaon condensation is model dependent and varies according to the strength of the kaon optical potential [@kaon; @gupta2012]. In the present work, we revisit the possibility that a hybrid compact star can be constituted by hadrons and kaon condensed matter at higher densities [@kaon] by using the quark meson coupling (QMC) model at finite temperature [@qmc; @panda2010] with the inclusion of the $\omega-\rho$ interaction [@antigos; @IUFSU]. The inclusion of this non-linear interaction was shown to soften the symmetry energy at high densities and to bring the QMC model properties closer to density dependent relativistic models [@panda2012]. As the inclusion of this term allows us to tune the slope of the symmetry energy, shown to be strongly correlated with some star properties, it is an important ingredient in the investigation that follows. In a previous work [@gupta2013], the onset of kaons and antikaons controlled by stiff and soft symmetry energies and EoS was discussed at zero temperature with four kinds of models: a standard relativistic mean field one, a density dependent model, a model with the $\omega-\rho$ interaction and a model with higher order coupling constants. It was found that although the last two models bear quite different symmetry energies, they yield very similar star masses and radii. In [@pons2001], it was seen that the effects of kaon condensation on metastable stars can be quite dramatic, resulting in different neutrino emission signals. Hence, for a better understanding of the role played by kaons inside a star, we use two different models, the QMC model and the GM1 [@Glen00] parametrization of the NLWM, for three values of fixed entropy that correspond to different snapshots of the star evolution, and discuss the effect of the symmetry energy. The properties of the stars are also studied.
In the QMC model, nucleons are described as a system of nonoverlapping MIT bags that interact through the scalar and vector mesons. The quark degrees of freedom are explicitly taken into account and the couplings are determined at the quark level. In the present work we also treat the kaons as MIT bags [@bag] and the couplings of the kaons with nucleons are determined in a self-consistent way.
In section II, we discuss the formalism employed in finite temperature calculations. In section III we present and discuss our results and compare them with the GM1 parametrization of the non-linear Walecka model [@Glen00]. For this comparison, in view of the fact that the symmetry energy slope regulates some physical properties, it was chosen to be similar in both models. In the final section, we draw our conclusions.
Formalism
=========
In the QMC model, the nucleon in the nuclear medium is assumed to be a static spherical MIT bag in which the quarks interact with the scalar $(\sigma)$ and vector $(\omega, \rho)$ fields, which are treated as classical fields in the mean-field approximation (MFA). The quark field ${\psi_q}(\vec{r},t)$ inside the bag satisfies the equation of motion:
$$[i\gamma_\mu \partial^\mu - (m^0_q - g^q_\sigma \sigma_0) -
\gamma^0 (g^q_\omega \omega_0 +
\frac{1}{2} g^q_\rho \tau_{3q} b_{03} )] {\psi_q}(\vec{r},t) = 0,$$
where ${\sigma}_0 ,{\omega}_0, b_{03} $ are the classical meson fields for $\sigma,~\omega,~\rho$ mesons, ${m^0}_q$ is the current quark mass and $ {\tau}_{3q}$ is the third component of the Pauli matrices. $ g^q_\sigma,~g^q_\omega,~g^q_\rho$ denote the quark coupling constants with $\sigma,~\omega,~\rho$. At finite temperature, quarks inside the bag can be thermally excited to higher angular momentum states and also quark – anti-quark pairs can be created. For simplicity, the bag is assumed to be spherical with radius $R$ which depends on the temperature. The single particle energies in units of $R^{-1}$ for the quarks and the anti-quarks are given as $$\epsilon_q^{nk}= \Omega_q^{nk} + R_N(V_\omega \pm \frac{1}{2} V_\rho)$$ and $$\epsilon_{\bar q}^{nk}=\Omega_q^{nk} - R_N(V_\omega \pm \frac{1}{2} V_\rho)$$ where $V_\sigma=g_\sigma^q\sigma_0$, $V_\omega=g_\omega^q\omega_0$ and $ V_\rho=g_\rho^q b_{03}$. The $+ve$ sign refers to the $u-$quarks and the $-ve$ sign to the $d-$quarks. The total energy from the quarks and anti-quarks at finite temperature is $$E_{tot}=\sum_{q,n,k}{\frac{\Omega_q^{nk}}{R_N}(f^q_{nk} + f^{\bar q}_{nk})}$$ where $$f^q_{nk} = \frac{1}{e^{[ \Omega_q^{nk}/R_N - \upsilon_q]/T} + 1 },$$ and $$f^{\bar q}_{nk}=\frac{1}{e^{[ \Omega_q^{nk}/R_N + \upsilon_q]/T} + 1 }$$ with $\Omega_q^{nk} = \sqrt{x_{nk}^2 + R_N^2 m_q^{*2}}$ and the eigenvalues $ x_{nk} $ for the state characterized by $n$ and $k$ are determined by the boundary condition at the bag surface. In the above, $ \upsilon_q=\mu_q - V_\omega -m_\tau^q V_\rho$ is the effective quark chemical potential and is related to the quark chemical potential, $\mu_q$. The energy of a static baryon bag consisting of three ground state quarks can be expressed as $$E_N^{bag} = E_{tot} - \frac{Z_N}{R_N} + \frac{4}{3} \pi {R_N}^3 B_N$$ where $Z_N $ is the parameter that accounts for zero point motion and $B_N $ is the bag constant. The entropy of the bag is defined as, $$\begin{aligned}
S^{bag} &=& -\sum_{q,n,k} [f^q_{nk} \ln{f^q_{nk}}+
(1-f^q_{nk})\ln{(1-f^q_{nk})}\nonumber\\
&+& {\bar f}^q_{nk}\ln{{\bar f}^q_{nk}}+(1-{\bar f}^q_{nk})
\ln{(1-{\bar f}^q}_{nk})].\end{aligned}$$ The free energy of the bag is given by $F_N^{bag} = E_N^{bag} - T S^{bag}$ and the effective mass of a nucleon bag at rest is taken to be $M_N^* = F_N^{bag}$. The equilibrium condition for the bag is obtained by minimizing the effective mass ${M_N}^*$ with respect to the bag radius $R_N$ $$\frac{\partial{M_N^*}}{\partial{R_N^*}} = 0.$$ Once the bag radius is obtained, the effective baryon mass is immediately determined. For a given temperature and scalar field $\sigma$, the effective quark chemical potentials $\nu_q$ are determined from the total number of quarks, isospin density and strangeness.
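For reference, the mode eigenvalues $x_{nk}$ follow from the bag boundary condition; for the lowest $s$-wave mode of a quark of mass $m$ it reduces to the standard MIT-bag relation $\tan x = x/\bigl(1-mR-\sqrt{x^2+(mR)^2}\bigr)$, with $x\approx2.04$ for a massless quark. A small sketch (our own illustration of this textbook ingredient, not the authors' code) finds the mode and evaluates the thermal occupations defined above:

```python
import numpy as np
from scipy.optimize import brentq

def lowest_mode(mR):
    """Lowest s-wave eigenvalue x from tan x = x/(1 - mR - sqrt(x^2 + (mR)^2))."""
    f = lambda x: np.tan(x) - x / (1.0 - mR - np.sqrt(x**2 + mR**2))
    return brentq(f, 1.6, 3.0)       # bracket lies between the tan singularities

x0 = lowest_mode(0.0)                # massless quark: x0 ≈ 2.043

def occupations(x, mR, R, T, nu):
    """Quark/antiquark Fermi factors for a mode of dimensionless energy Omega."""
    Omega = np.sqrt(x**2 + mR**2)
    f = 1.0 / (np.exp((Omega / R - nu) / T) + 1.0)
    fbar = 1.0 / (np.exp((Omega / R + nu) / T) + 1.0)
    return f, fbar

R = 0.6 / 0.19733                    # bag radius R_N = 0.6 fm, in GeV^-1
f, fbar = occupations(x0, 0.0, R, T=0.020, nu=0.0)
# at vanishing effective chemical potential, quarks and antiquarks
# are equally populated: f == fbar
```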
We next obtain the thermodynamic potential within the mean field approximation and perform a calculation similar to that carried out in [@kaon]. We assume that the kaons are described by the static MIT bag in the same way as the nucleons. Moreover, the $\sigma,~\omega$ and $\rho$ mesons couple only to the $u$ and $d$ quarks inside the kaons. The effective Lagrangian density for the kaon sector is $$\mathcal{L}_K = D_\mu^* K^* D^\mu K - M_K^{*2} K^* K$$ where the kaons are coupled to the meson fields via minimal coupling and the covariant derivative reads $$D_\mu = \partial_\mu+i g_{\omega K}\omega_\mu+i \frac{1}{2}g_{\rho K}
\vec{\tau} \cdot \vec b_\mu ,$$ with $$X_\mu=g_{\omega K}\omega_\mu+g_{\rho K}\vec\tau\cdot\vec b_\mu.$$ The energy of the static bag describing the kaon can be expressed as $$E_K^{bag} = \sum_{q,n,k}\frac{\Omega^{nk}_q}{R_K}
(f^q_{nk}+f^{\bar q}_{nk})-\frac{Z_K}{R_K}+\frac{4}{3}\pi R_K^3 B_K.$$ For our calculations, we have fixed the bag constant, $B_K$, to be the same as for the nucleon; from the kaon mass and the stability condition in the vacuum, we have obtained $Z_K=3.362$ and $R_K=0.457$ fm for $R_N=0.6$ fm. The free energy of the kaon bag is $F_K^{bag} = E_K^{bag} - T S^{bag}$ and the effective mass of a kaon bag at rest is taken to be $M_K^* = F_K^{bag}$. In analogy with the nucleonic sector, the equilibrium condition for the bag is $$\frac{\partial{M_K^*}}{\partial{R_K}} = 0.$$ The Bose occupation probability for particles $(f_{B^+})$ and anti-particles $(f_{B^-})$ appears naturally in the equation of state and reads $$\begin{aligned}
f_{B^\pm}&=&\frac{1}{(e^{\beta (\omega^\pm\mp\mu_K)}-1)}\end{aligned}$$ with $\beta=1/T$, $\epsilon_K^*=\sqrt{p^2+{M_K^*}^2}$ and $\omega^\pm=\epsilon_K^*\pm X_0$. In the above we define the kaon effective chemical potential $\nu_K=\mu_K+X_0$ where $X_0=g_{\omega K}\omega_0+g_{\rho K}b_{03}$.
In the mean field approximation, the kaon contribution to the thermodynamic potential is $$\begin{aligned}
&&\frac{\Omega_K}{V}=\zeta^2 \Big[{M_K^*}^2-(\mu_K+X_0)^2\Big] \nonumber\\
&+&T\int_0^\infty\frac{d^3p}{(2\pi)^3}\Big\{\ln[1-e^{-\beta(\omega^-+\mu_K)}]
+\ln[1-e^{-\beta(\omega^+-\mu_K)}]\Big\},\nonumber\\\end{aligned}$$ from which we get $$\begin{aligned}
P_K&=&-\frac{\Omega_K}{V}=
\zeta^2 \Big[(\mu_K+X_0)^2-{M_K^*}^2\Big] \\
&+&\frac{1}{3}\int_0^\infty\frac{d^3p}{(2\pi)^3}\frac{p^2}
{\epsilon_K^*}\Big[f_{B^+} +f_{B^-}\Big],\end{aligned}$$ and the kaon contribution to the energy density reads $$\begin{aligned}
\varepsilon_K &=& \zeta^2 \Big[{M_K^*}^2+(\mu_K^2+X_0^2)\Big] \\
&+ &\int_0^\infty\frac{d^3p}{(2\pi)^3}
\Big[\omega^+(p)f_{B^+}+\omega^-(p)f_{B^-}\Big]\nonumber\\
&\equiv& \zeta^2 \Big[{M_K^*}^2+(\mu_K^2+X_0^2)\Big] \nonumber \\
&+& \int_0^\infty\frac{d^3p}
{(2\pi)^3}\epsilon_K^*\Big[f_{B^+} +f_{B^-}\Big].\end{aligned}$$ The kaon number density is $$n_K=n_K^c+n_K^{th},$$ where $n_K^c= 2\zeta^2(\mu_K+X_0)$ is the condensate contribution and $n_K^{th}$ is the thermal contribution for the number density given by $$n_K^{th}=\int_0^\infty\frac{d^3p}{(2\pi)^3}\Big[f_{B^+} -f_{B^-}\Big].$$ Similarly the scalar density for the kaons is given by $$n_K^s=\int_0^\infty\frac{d^3p}{(2\pi)^3}\frac{M_K^*}{\epsilon_K^*}
\Big[f_{B^+} +f_{B^-}\Big].$$ The kaon entropy density is given by $S_K=\beta(\varepsilon_K+P_K-\mu_K n_K)$.
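The thermal integrals above are straightforward to evaluate numerically. The sketch below computes $n_K^{th}$ and $n_K^s$ with a simple trapezoidal rule; the temperature, effective mass, chemical potential and $X_0$ are illustrative values (in MeV), not outputs of the model.

```python
import math

# Numerical sketch of the thermal kaon densities n_K^th and n_K^s.
# All quantities in MeV (natural units); the values of T, M*, mu_K and X0
# below are illustrative, not fitted model outputs.
T, M_star, mu_K, X0 = 30.0, 400.0, 100.0, 50.0

def omega(p, sign):
    """omega^{+/-}(p) = sqrt(p^2 + M*^2) +/- X0."""
    return math.sqrt(p * p + M_star * M_star) + sign * X0

def f_B(p, sign):
    """Bose occupation: particles (sign = +1), antiparticles (sign = -1)."""
    return 1.0 / (math.exp((omega(p, sign) - sign * mu_K) / T) - 1.0)

def integrate(g, pmax=3000.0, n=3000):
    """Trapezoidal rule: int d^3p/(2 pi)^3 g(p) = int p^2 g(p) dp / (2 pi^2)."""
    h = pmax / n
    return sum(g(i * h) * (i * h) ** 2 for i in range(1, n)) * h / (2.0 * math.pi ** 2)

n_th = integrate(lambda p: f_B(p, +1) - f_B(p, -1))          # thermal number density
n_s = integrate(lambda p: M_star / math.sqrt(p * p + M_star ** 2)
                * (f_B(p, +1) + f_B(p, -1)))                 # scalar density
print(n_th, n_s)    # both positive (MeV^3) for these parameters
```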
The equations of motion for the meson fields are given by [@pmp] $$\begin{aligned}
{m_\sigma}^2 \sigma &=& -\sum_{i=p,n} \frac{\partial M_N^*}{\partial \sigma}
\nonumber\\
&\times&\frac{1} {\pi^2} \int d{\mathbf k}\,\frac{M_N^*}
{\left[k^2 + M_N^{*2}\right]^{1/2}} (f_i+\bar f_i) \nonumber \\
&+& g_{\sigma K}(n_K^c+n_K^s)\\
{m_\omega}^2 \omega_0 &=& \sum_{i=p,n} {g_{\omega} \rho_i}-g_{\omega K} n_K
-2\Lambda_vg_\omega^2 g_\rho^2 b_{03}^2\omega_0,\\
{m_\rho}^2b_{03} &=&\sum_{i=p,n}{g_{\rho} I_{3i}\rho_i}-
\frac{1}{2}g_{\rho K} n_K\nonumber\\
&-& 2\Lambda_vg_\omega ^2 g_\rho^2 b_{03}\omega_0^2,\end{aligned}$$ and $$\zeta\Big[\mu_K-\omega^+(0)\Big]\Big[\mu_K+\omega^-(0)\Big]=0,$$ where $f_i$ and $ \bar{f_i}$ are the thermal distribution functions for the baryons and antibaryons: $$f_i =\frac{1}{e^{(\epsilon^* - \upsilon)/T} + 1 }~~~~~~\mbox{and}~~~~~~
\bar f_i = \frac{1}{e^{(\epsilon^* + \upsilon)/T} + 1 },$$ where $\epsilon^* = \sqrt{\vec{k}^2 + M_N^{*2}}$ is the effective nucleon energy and $\upsilon = \mu_N - g_\omega\omega_0
-I_{3i} g_\rho b_{03} $ is the effective nucleon chemical potential. The term $\Lambda_v ~g_\omega^2 ~g_\rho^2 ~b_{03}^2 ~\omega_0^2$ accounts for the $\omega-\rho$ interaction as proposed in [@antigos; @IUFSU] and already considered in [@panda2012].
As for the parameters, we have used [@pmp] $g_\sigma^q=5.957$, $g_{\sigma}=3g_\sigma^q S_N(0)=8.58$, $g_{\omega}=8.981$, $g_{\rho}=8.651$ with $g_{\omega} = 3g_\omega^q$ and $g_{\rho} =
g_\rho^q$. We have taken the standard values for the meson masses, $m_\sigma=550$ MeV, $m_\omega=783$ MeV and $m_\rho=770$ MeV. Note that the $s$-quark is unaffected by the $\sigma$, $\omega$ and $\rho$ mesons, i.e., $g_\sigma^s=g_\omega^s=g_\rho^s=0$. The kaon couplings are given by $g_{\omega K}=\frac{1}{3}g_{\omega}$ and $g_{\rho
K}=g_{\rho}$, as in [@Glen00].
After a self-consistent calculation, the kaon effective mass $m^*_K$ can be parametrized as [@tsushima98] $$m^*_K=m_K-g_{\sigma K}(\sigma) \sigma \simeq m_K-\frac{1}{3} g_{\sigma}
\left(1-\frac{a_K}{2}g_{\sigma} \sigma \right) \sigma,$$ where $a_K=0.00045043\mbox{ MeV}^{-1}$ for $R_N=0.6$ fm. This determines $g_{\sigma K}$, which is a density dependent parameter.
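With the values quoted in the text ($g_\sigma=8.58$ and $a_K=0.00045043$ MeV$^{-1}$), the parametrization can be evaluated directly; the vacuum kaon mass $m_K=493.7$ MeV used below is an assumed input.

```python
# The parametrization of m*_K above as a function of the sigma field (MeV).
# g_sigma and a_K are the values quoted in the text; m_K = 493.7 MeV is an
# assumed vacuum kaon mass.
m_K, g_sigma, a_K = 493.7, 8.58, 0.00045043

def m_K_star(sigma):
    """m*_K = m_K - (1/3) g_sigma (1 - a_K g_sigma sigma / 2) sigma."""
    return m_K - g_sigma / 3.0 * (1.0 - 0.5 * a_K * g_sigma * sigma) * sigma

print(m_K_star(0.0), m_K_star(20.0))   # the mass decreases as sigma grows
```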
Finally, the total energy density of the nuclear matter with kaons at finite temperature becomes $$\varepsilon = \varepsilon_B +\varepsilon_K,$$ where [@panda2012] $$\begin{aligned}
\varepsilon_B &=& \frac{2}{{(2 \pi)}^3} \sum_{i=p,n}\int d^3 k
\left[\epsilon^* (f_i + \bar{f_i})\right]\nonumber\\
&+&\frac{1}{2} {m_\sigma}^2 \sigma^2 - \frac{1}{2} {m_\omega}^2 \omega_0^2
- \frac{1}{2} {m_\rho}^2 b_{03}^2\nonumber\\
&+&g_\omega \omega_0\rho_B +\frac{1}{2}g_\rho ~b_{03}~\rho_3
-\Lambda_v ~g_\omega^2 ~g_\rho^2 ~b_{03}^2 ~\omega_0^2.\end{aligned}$$
For neutron stars, the particle composition is determined by the requirements of charge neutrality and $\beta$-equilibrium under the weak processes $n\rightarrow p+l+\bar \nu_l$ and $p+l\rightarrow n+\nu_l$, implying that $$\mu_n =\mu_p + \mu_e$$ and $$\rho_e+\rho_\mu+\rho_K=\rho_p.$$ If neutrino trapping is imposed on the system, the $\beta$-equilibrium condition is altered to $$\mu_n =\mu_p + (\mu_e-\mu_{\nu_e}).$$
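A minimal numerical illustration of these conditions, for a free gas of neutrons, protons and electrons at $T=0$ (no meson fields, no muons or kaons), solves $\mu_n=\mu_p+\mu_e$ together with charge neutrality $\rho_e=\rho_p$ by bisection on the proton fraction:

```python
import math

# Schematic T = 0 check of the beta-equilibrium conditions for a free gas
# of n, p, e (no meson fields, no muons or kaons).  Masses in MeV, baryon
# density in fm^-3; hbar*c converts Fermi momenta to MeV.
hbarc, m_n, m_p, m_e = 197.327, 939.6, 938.3, 0.511

def mu(rho, m):
    """Chemical potential of a spin-1/2 free Fermi gas at T = 0."""
    kF = hbarc * (3.0 * math.pi ** 2 * rho) ** (1.0 / 3.0)
    return math.sqrt(kF * kF + m * m)

def beta_mismatch(xp, rho_B):
    """mu_n - mu_p - mu_e, with charge neutrality rho_e = rho_p = xp*rho_B."""
    rho_p = xp * rho_B
    return mu((1.0 - xp) * rho_B, m_n) - mu(rho_p, m_p) - mu(rho_p, m_e)

# Bisection for the equilibrium proton fraction at rho_B = 0.15 fm^-3.
lo, hi, rho_B = 1e-6, 0.5, 0.15
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if beta_mismatch(mid, rho_B) > 0.0:   # too few protons: raise xp
        lo = mid
    else:
        hi = mid
print(lo)   # a fraction of a percent, as expected for free Fermi gases
```

The full models replace the free-gas chemical potentials by the in-medium ones and add the kaon and muon contributions to the neutrality condition.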
In this work, different snapshots of the star evolution are simulated through different entropies per particle and trapped neutrinos. At first, the star is relatively warm (represented by fixed entropy per particle) and has a large number of trapped neutrinos (represented by fixed lepton fraction). As the trapped neutrinos diffuse out, they heat up the star [@prak97]. Finally, the star is considered cold (zero temperature) and deleptonized:
- $S/\rho_B=1$, $Y_l=0.4$,
- $S/\rho_B=2$, $\mu_{\nu_l}=0$,
- $S/\rho_B=0$, $\mu_{\nu_l}=0$.
The last scenario has already been considered in [@kaon]. We next discuss the first two cases and also the possibility (only for academic purposes) that the temperature is fixed throughout the star. The number of muons and muon neutrinos is negligible when the fraction of electrons and electron neutrinos is fixed at 0.4 and, therefore, in this case we do not include muons in the calculation. For the cases without neutrinos, muons are also considered.
Results and Discussion
======================
As mentioned in the Introduction, we discuss next the effect of the slope of the symmetry energy on the kaon condensation. The symmetry energy is defined as: $${\cal E}_{sym} = \frac{1}{2} \left [ \frac{\partial^2 ({\varepsilon_B}/\rho_B)}{\partial \alpha^2} \right ]_{\alpha = 0}=
\frac{k_F^2}{6 E_F}+\frac{g_\rho^2}{8{m^*_\rho}^2}\rho_B
\; ,$$ where ${\varepsilon_B}$ is the energy density, $\alpha$ is the asymmetry parameter $\alpha = (\rho_n - \rho_p)/\rho_B$, $\rho_B=\rho_n+\rho_p$, $E_F=\sqrt{k_F^2+{M^*_0}^2}$, where $M_0^*$ is the nucleon effective mass at saturation density and $k_F=(3\pi^2 \rho_B/2)^{1/3}$, ${m^*_\rho}^2=m_\rho^2 +2\Lambda_v g_\omega^2 g_\rho^2
\omega_0^2$, and the slope of the symmetry energy is: $$L = \left [ 3 \rho_B \frac{\partial {\cal E}_{sym}}{\partial \rho_B} \right ]_{\rho_B = \rho_0} \; .$$ We consider six different EOS, obtained from the QMC and the GM1 parametrization of the NLWM. Both models have a similar symmetry energy and corresponding slope at the saturation density. Including the $\omega\rho$ term, we build other EOS from these models by changing the symmetry energy slope at saturation and keeping the symmetry energy fixed at 21.74 MeV at $\rho=0.1$ fm$^{-3}$. For each model, QMC and GM1, we have fixed three different values of the slope $L$, roughly the same for both models, namely $L\sim 59.5,\, 70.5, \, 93.5$ MeV. With a similar behavior of the symmetry energy at saturation, we can discuss how the other properties of the EOS affect the properties of neutron stars containing a kaon condensate. In Table \[table0\] we display the $L$ values and the related $\Lambda_v$ and $g_\rho$ for the models we investigate.
model $L$ (MeV) $ {\cal E}_{sym}$ (MeV) $\rho_0$ (fm$^{-3}$) $\Lambda_v$ $g_\rho$
------- ----------- ------------------------- ---------------------- ------------- ----------
QMC 93.5 33.70 0.15 0.0 8.8606
QMC 70.5 31.88 0.15 0.03 9.2463
QMC 59.3 30.87 0.15 0.05 9.5335
GM1 93.8 32.47 0.153 0.0 8.0104
GM1 70.8 29.57 0.153 0.03 8.0104
GM1 59.6 27.80 0.153 0.037 8.0104
: \[table0\] Symmetry energy, its slope and related model parameters.
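The symmetry energy and its slope defined above can be checked numerically. The sketch below uses a constant effective mass $M^*_0$ and the $\Lambda_v=0$ case, with the $\rho$-meson contribution written as $g_\rho^2\rho_B/(8{m^*_\rho}^2)$ (the standard NLWM result); the parameter values are illustrative and only roughly reproduce the Table \[table0\] entries, since in the full models $M^*$ and $\omega_0$ run with density.

```python
import math

# Numerical check of E_sym(rho) and L = 3 rho dE_sym/drho at saturation,
# for Lambda_v = 0 and a constant (illustrative) effective mass M*_0.
hbarc = 197.327                                   # MeV fm
M_star0, g_rho, m_rho, rho0 = 0.7 * 939.0, 8.86, 770.0, 0.15

def E_sym(rho):
    kF = hbarc * (3.0 * math.pi ** 2 * rho / 2.0) ** (1.0 / 3.0)   # MeV
    EF = math.sqrt(kF * kF + M_star0 * M_star0)
    kinetic = kF * kF / (6.0 * EF)
    # g_rho^2 rho_B / (8 m_rho^2), converted to MeV via (hbar c)^3
    potential = g_rho ** 2 / (8.0 * m_rho ** 2) * rho * hbarc ** 3
    return kinetic + potential                    # MeV

def slope_L(rho, h=1e-5):
    """Central finite difference for L = 3 rho dE_sym/drho."""
    return 3.0 * rho * (E_sym(rho + h) - E_sym(rho - h)) / (2.0 * h)

print(E_sym(rho0), slope_L(rho0))   # a few tens of MeV each
```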
For the kaon-$\omega$ coupling we take $g_{\omega K}=g_{\omega}/3$. In GM1, the coupling to the scalar $\sigma$-field is fixed by a kaon optical potential in symmetric nuclear matter at saturation, $V_K = - 125$ MeV, a value suggested by chiral models [@chiral]. For this value of the kaon optical potential we obtain a second order phase transition from a pure hadronic phase to a hadronic phase with kaons. A more attractive potential lowers the kaon onset density and the transition to the hadronic phase becomes first order [@Glen00]. Within the QMC model this quantity is an output: with the present choice of parameters, $V_K=-123$ MeV [@kaon] at saturation, very close to the value taken for GM1.
![Particle fractions obtained with the QMC model for T=10 MeV, L=93.5 MeV (thick line) and L=70.5 MeV (thin line).[]{data-label="fig1"}](fig1.eps){width="0.8\linewidth"}
In [@predeal2012] it was shown, for $T=0$ MeV, that: a) the larger $L$ the larger the fraction of kaons at a given density; b) the onset of kaon condensation occurs at similar or slightly smaller densities for lower values of $L$. It was also pointed out that the EoS with the lower slope $L$ is softer, giving rise to stars with smaller radii and with a larger total strangeness content because of their larger central densities. In the following we discuss the effect of the slope $L$ on the properties of warm stars with a kaon condensate described by two different models, QMC and GM1.
![Warm matter with trapped neutrinos, $S=1$ and $Y_l=0.4$: a) particle fractions versus the baryonic density within QMC for two different values of the slope $L$; b) the kaon fraction for both QMC and GM1.[]{data-label="fig2"}](fig2a.eps "fig:"){width="0.8\linewidth"}\
![Warm matter with trapped neutrinos, $S=1$ and $Y_l=0.4$: a) particle fractions versus the baryonic density within QMC for two different values of the slope $L$; b) the kaon fraction for both QMC and GM1.[]{data-label="fig2"}](fig2b.eps "fig:"){width="0.8\linewidth"}
We first discuss the effect of the temperature on the kaon onset. The EOS is calculated at a fixed temperature, even though inside a compact object the temperature is not expected to be constant. In Fig. \[fig1\] we plot the particle fractions for a fixed temperature $T=10$ MeV and two different $L$ values within the QMC model. The temperature helps the appearance of kaons (strangeness): they appear at a smaller density as compared with the results presented in [@kaon] for a zero temperature system. Similar results were obtained previously [@pons2001; @banik]. Moreover, it is clear that a larger slope enhances the production of kaons. A larger $L$ favors larger proton and electron fractions and, therefore, we may also expect a larger kaon fraction, since kaons replace the electrons.
![Kaon effective mass versus density in $\beta$-equilibrium matter for different values of the entropy per baryon $S$ and the slope $L$. Matter for $S=1$ contains trapped neutrinos and all other curves were obtained for neutrino free matter.[]{data-label="fig3"}](fig3.eps){width="0.8\linewidth"}
After a short initial time, the entropy is practically constant inside the star, and therefore in the following we present results obtained at fixed entropy per baryon, both for matter with trapped neutrinos at $S=1$ and for neutrino free matter at $S=2$ [@prak97]. In Fig. \[fig2\]a), the results for the kaon fractions obtained at a fixed entropy per baryon $S=1$ and a lepton fraction of $Y_l= 0.4$ are shown for the QMC and GM1 models and two different values of $L$. The onset of a condensate of kaons occurs only at a density $\sim 0.15-0.2$ fm$^{-3}$ above the onset of thermal kaons. Within GM1, kaons appear at lower densities and, for a given density, in larger amounts than within QMC, even when the same slope $L$ is chosen. Hence, stars described by the QMC model present a lower amount of strangeness. This behavior results from a softer EOS at densities above saturation within QMC. From Fig. \[fig2\]b), it is also seen that smaller values of $L$ give rise to smaller amounts of strangeness, as already seen in [@panda2012], where hyperons (instead of kaons) were considered. The only exception is $S=2$: in this case the kaon condensation for $L=93$ MeV occurs at a density above the range of densities shown and, therefore, the kaon fraction is always below the values obtained for $L=70$ MeV, which predicts a kaon condensate above 0.6 fm$^{-3}$. The presence of neutrinos also disfavors the formation of kaons, as seen if one compares the $S=1$ curves obtained with trapped neutrinos with the ones obtained for $S=0$ and $S=2$ neutrino free matter. The total amount of kaons in the star is ultimately dictated by the central density, which is larger for the softer EOS.
![Neutrino chemical potential versus baryon density for $\beta$-equilibrium matter at $S=1$ with the lepton fraction $Y_l=0.4$. Results for GM1 and QMC are shown for three different values of $L$.[]{data-label="fig5"}](fig4.eps){width="0.8\linewidth"}
In Fig. \[fig3\] the density dependence of the kaon effective mass is shown for several values of the entropy per baryon and for different values of the symmetry energy slope. Up to twice the saturation density the curves are practically identical, i.e., no dependence on the slope or temperature is noticed, but at high densities the mass decreases faster for lower temperatures. In particular, for $S=2$ and $L=93$ MeV, the mass remains quite high and, consequently, no kaon condensate is formed, since a larger mass delays the onset of kaons.
![Temperature versus density for neutrino free matter with $S=2$ and matter with neutrino trapped matter with $S=1$.[]{data-label="fig6"}](fig5.eps){width="0.8\linewidth"}
model   type             $L$     $M_{max}$   $M_{b,max}$   $R$     $\varepsilon$   $\varepsilon^K$   $M^K_{max}$   $R_{1.4}$
------- ---------------- ------- ----------- ------------- ------- --------------- ----------------- ------------- -----------
QMC     T=10 MeV         93.5    2.03        2.37          12.8    5.66            2.80              1.86          
QMC     S=1, $Y_l=0.4$   93.5    2.03        2.25          11.7    5.26            4.09              1.99          13.12
QMC     S=1, $Y_l=0.4$   70.5    2.05        2.19          11.1    5.97            4.41              1.98          12.56
QMC     S=1, $Y_l=0.4$   59.03   2.04        2.25          10.9    5.97            4.30              1.97          11.91
QMC     S=2              93.5    2.51        3.11          12.2    5.35            3.03              2.30          14.15
QMC     S=2              70.5    2.15        2.51          11.8    5.86            3.44              2.04          13.69
QMC     S=2              59.03   2.13        2.48          11.7    5.99            3.57              2.03          13.54
QMC     S=0              93.5    1.98        2.25          12.08   5.41            2.93              1.86          13.58
QMC     S=0              70.5    1.94        2.12          11.85   5.61            2.96              1.81          13.19
QMC     S=0              59.03   1.95        2.18          11.7    5.75            3.07              1.82          13.06
GM1     S=1, $Y_l=0.4$   93.8    2.24        2.52          11.9    5.52            3.70              2.17          13.13
GM1     S=1, $Y_l=0.4$   70.8    2.23        2.61          11.7    5.64            3.91              2.18          12.96
GM1     S=1, $Y_l=0.4$   59.6    2.24        2.53          11.7    5.64            3.92              2.18          12.88
GM1     S=2              93.8    2.31        2.65          12.7    4.88            3.97              2.30          13.98
GM1     S=2              70.8    2.26        2.60          12.3    5.11            4.09              2.25          13.51
GM1     S=2              59.6    2.27        2.60          12.2    5.13            4.41              2.25          13.27
GM1     S=0              93.8    2.14        2.46          12.8    4.63            2.70              2.03          13.82
GM1     S=0              70.8    2.06        2.39          12.4    4.86            2.94              2.02          13.23
GM1     S=0              59.6    2.06        2.39          12.3    4.96            2.96              1.95          13.06

  : \[table1\] Slope $L$ (MeV), maximum gravitational mass $M_{max}$ and corresponding baryonic mass $M_{b,max}$ (both in $M_\odot$), radius $R$ (km) and central energy density $\varepsilon$ (fm$^{-4}$) of the maximum mass configuration, central energy density $\varepsilon^K$ (fm$^{-4}$) and gravitational mass $M^K_{max}$ ($M_\odot$) at the onset of the kaon condensate, and radius $R_{1.4}$ (km) of the canonical $1.4 M_\odot$ star.
In Fig. \[fig5\] we plot the neutrino chemical potential as a function of the baryonic chemical potential for $S=1$ and $Y_l=0.4$, for different values of the symmetry energy slope with the QMC and GM1 models. Lower values of the slope correspond to larger amounts of neutrinos in matter with a fixed fraction of leptons, because a smaller $L$ favors smaller amounts of protons and electrons at large densities. The kink at a chemical potential above 1300 MeV occurs at the onset of the kaon condensate. After its onset, the number of electrons decreases rapidly due to the charge neutrality condition. For a fixed lepton fraction, the neutrino abundance increases to compensate the decrease of the electrons. Comparing the models, it is clear that within GM1 the amount of neutrinos is larger, and therefore a larger neutrino chemical potential is obtained. We point out that the kink in the chemical potential due to the onset of the condensate is more pronounced for GM1 and occurs at lower densities. This is due to the onset of kaon condensation at lower densities and the larger amounts of condensed kaons at a given density. Therefore, we may expect that during cooling a smaller amount of neutrinos is emitted within QMC and for smaller values of $L$ and, as a consequence, that the probability of the occurrence of a black hole will be smaller within QMC. We come back to this point later.
![Mass-radius profiles for stars with $S=1$ and fixed lepton fraction $Y_l=0.4$ (red full lines), $S=2$ and neutrino free matter (green dashed lines) and cold matter (blue dotted lines), for different values of the symmetry energy slope: $L=59$ (thick), 70 (medium) and 93 (thin) MeV, with QMC (top panel) and GM1 (bottom panel). The black dots indicate the onset of the kaon condensate.[]{data-label="fig7"}](fig6a.eps "fig:"){width="0.8\linewidth"}\
![Mass-radius profiles for stars with $S=1$ and fixed lepton fraction $Y_l=0.4$ (red full lines), $S=2$ and neutrino free matter (green dashed lines) and cold matter (blue dotted lines), for different values of the symmetry energy slope: $L=59$ (thick), 70 (medium) and 93 (thin) MeV, with QMC (top panel) and GM1 (bottom panel). The black dots indicate the onset of the kaon condensate.[]{data-label="fig7"}](fig6b.eps "fig:"){width="0.8\linewidth"}
In Fig. \[fig6\] the temperature of the system is depicted as a function of the baryonic density for $S=2$ neutrino free $\beta$-equilibrium matter and $S=1$ $\beta$-equilibrium matter with trapped neutrinos, with both models under investigation. As compared with the results shown in [@panda2010], the system with kaons can reach temperatures inside the star that are much higher than the ones attained when hyperons are included in its core, in which case the highest temperature is 35 MeV. Kaon condensation is the reason for this behavior. It is also seen that for $S=2$ and $L=93$ MeV the temperature does not increase as much, because the kaons do not condense in the range of densities shown.
We also see from Fig. \[fig6\] that the temperature values are somewhat similar, but still slightly larger for the smaller $L$ values if $S=2$. In matter with larger values of $L$ the number of neutrons and protons is closer, which corresponds to a smaller temperature if we fix the entropy. No $L$ differences are seen in matter with $S=1$, because in this case the entropy has an important contribution from the neutrinos due to their almost zero mass.
Once the EOS are determined, we use them as input to the TOV equations to obtain the stellar macroscopic properties, which are shown in Table \[table1\] and Figs. \[fig7\] and \[fig8\]. The result for $T=10$ MeV is added only for completeness. The maximum masses for the cases with fixed entropy do not show a clear behavior with the slope, but for a stable zero temperature system they tend to decrease with the decrease of the slope for both models. If the slope $L$ is small the trend may invert due to two competing effects: a) a smaller slope $L$ means a softer EOS and, therefore, smaller maximum masses; b) however, the onset of kaons for a softer EOS occurs at larger densities, as seen in Table \[table1\] from the central energy densities of the threshold stars for the kaon onset. As a result, stars obtained from an EOS with larger $L$ contain larger amounts of kaons, which soften the EOS. Therefore, decreasing $L$ reduces the maximum star mass until a critical value of $L$ at which the onset of kaons is shifted to much larger densities, so that the overall softening due to the kaon fraction becomes negligible. $L=59$ MeV is one of these critical values of the slope. This effect is present in both GM1 and QMC and for cold and warm stars. A comment is also in order concerning the differences of the maximum masses with $L$. Within QMC the maximum mass difference is quite small, in fact not larger than $\sim 0.04$ M$_\odot$, except for the $S=2$, $L=93$ MeV case, when the much larger mass is due to the absence of kaon condensation in the star, since only thermal kaons are present. Within GM1 the differences are larger, but the main considerations above remain valid. The difference in this case comes from the fact that the nucleonic GM1 EOS is harder than QMC at intermediate densities.
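For orientation, a minimal TOV integration is sketched below in geometrized units ($G=c=1$; energy density and pressure in km$^{-2}$, mass and radius in km, with $1\,M_\odot\simeq 1.4766$ km). A toy polytrope $p=K\varepsilon^\Gamma$ stands in for the tabulated QMC/GM1 EOS; $K$, $\Gamma$ and the central pressure are illustrative values.

```python
import math

# Minimal TOV integration sketch (G = c = 1): energy density and pressure
# in km^-2, mass and radius in km; 1 M_sun ~ 1.4766 km.  A toy polytrope
# p = K eps^Gamma stands in for a tabulated EoS.
K, Gamma = 100.0, 2.0

def eps_of_p(p):
    return (p / K) ** (1.0 / Gamma)

def tov_star(p_c, dr=0.001):
    """Euler-integrate dm/dr and dp/dr outward; return (M/M_sun, R in km)."""
    r, m, p = dr, 0.0, p_c
    while p > 1e-12:
        e = eps_of_p(p)
        dm = 4.0 * math.pi * r * r * e
        dp = -(e + p) * (m + 4.0 * math.pi * r ** 3 * p) / (r * (r - 2.0 * m))
        m, p, r = m + dm * dr, p + dp * dr, r + dr
    return m / 1.4766, r

M, R = tov_star(1.6e-4)   # illustrative central pressure
print(M, R)               # of the order of 1-2 M_sun and ~10 km
```

A production calculation would replace the polytrope by the tabulated EOS and the Euler step by a higher order integrator, and would scan the central pressure to trace out the mass-radius curve.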
The star radii, on the other hand, decrease with the decrease of the slope, see Fig. \[fig7\], as already seen in [@antigos; @rafael] for different parametrizations of the NLWM (excluding IU-FSU). In these figures the black dots indicate the onset of thermal kaons, which occurs for quite massive stars. Therefore, we may state that the radius difference between the families of stars is mainly due to the different behavior of the symmetry energy with density and not to the presence of the kaons. According to [@Hebeler], the radii of canonical $1.4M_\odot$ neutron stars should lie within the range 9.7-13.9 km. In Table \[table1\] we show our results for these canonical stars and see that, except for the $S=2$, $L=93.5/93.8$ case, a star which is not stable, our results fall inside the expected range. On the other hand, two other analyses of five quiescent low-mass X-ray binaries in globular clusters were performed to establish possible neutron star radii ranges. In the first analysis [@guillot], all neutron stars were assumed to have the same radius, in the range $R=9.1^{+1.3}_{-1.5}$ km. The second calculation [@Lattimer2013], based on a Bayesian analysis, estimates that neutron star radii should lie between 10.9 and 12.7 km. It is important to have in mind that the measurement and assessment of neutron star radii still remain to be better understood but, if the above mentioned constraints are to be validated, our results would only be in accordance with the second analysis.
The kaon onset energy density increases with decreasing $L$, as already mentioned, see $\varepsilon^K$ in Table \[table1\]. However, since the existence of kaons softens the EOS, the kaon threshold star mass is generally smaller for the smaller $L$. Notice that both models, even with the inclusion of kaons and the $\omega-\rho$ interaction, known to soften the EOS, may account for the description of very massive stars.
In Fig. \[fig8\] we display, both for the QMC and GM1 models and two values of the symmetry energy slope, the baryonic versus gravitational masses for stars at different snapshots of their lives [@prakash01]: a) immediately after the core bounce, with trapped neutrinos, $S=1$ and fixed lepton fraction $Y_l=0.4$; b) after neutrino diffusion, $Y_\nu=0$, and core heating due to deleptonization, the maximum entropy per baryon $S=2$ being attained at $t\sim 15$ s; c) after core cooling, with $S=0$ and $Y_\nu=0$. If no accretion occurs during the cooling process, the transition between the different stages occurs at constant baryonic mass, i.e., along vertical lines [@prak97]. We identify with a full black dot and a red asterisk, respectively, the minimal configuration with a kaon condensate in the center and with a fraction of at least $10^{-8}$ of kaons.
Some configurations obtained for $S=2$ and neutrino-free matter cannot be populated, since their baryonic mass is larger than the maximum baryonic mass obtained with $S=1$ and trapped neutrinos. In particular, all stars with $M>2.05\, M_\odot$ for $S=2$ and $L=93$ MeV belong to this set of stars. It is also seen that within the QMC model (but not GM1) a star may have a kaon condensate in its center during the neutrino trapped phase; after the neutrinos diffuse out the condensate melts, and it is once more formed after cooling. These transformations should result in neutrino signals after the supernova explosion and before the cooling of the star.
Some conclusions are in order: a) from both Fig. \[fig7\] and Table \[table1\] we conclude that, for $L=93$ MeV, within QMC no black hole will be formed during the cooling process, since the maximum baryonic mass at $S=1$ and $Y_l=0.4$ is not larger than the maximum baryonic mass at $S=0$. This is not the case for GM1: this model predicts the formation of a low mass black hole during cooling; b) decreasing the symmetry energy slope $L$ may modify some of the above conclusions. In particular, for $L=70$ MeV (but no longer for 59 MeV), within QMC there is a small range of star configurations ($\Delta M\sim 0.07
\ M_\odot$) that will decay to a black hole during cooling. Within GM1 the number of star configurations that decay into black holes increases when $L$ goes from 93 to 70 MeV, but decreases if $L$ is further reduced to 59 MeV. This is due to the smaller kaon content in these last stars, together with a central density that is not much larger; c) the cooling of the stars that contain a kaon condensate, in some cases, involves the melting of the condensate at an intermediate stage ($S=2$) and a second formation of the condensate at $T=0$.
We should point out that we are studying the evolution of the stars without considering finite size effects as in [@maruyama06]. In the present calculation the kaon potential is not strong enough to give rise to a first order phase transition; the transition to the condensed phase is of second order.
Conclusions
===========
In the present work we have revisited the QMC model at finite temperature to investigate the thermal kaon effects on stellar properties. The $\omega-\rho$ interaction was included because it softens the very hard QMC symmetry energy at high densities and can be used to tune the values of the slope of the symmetry energy. As had already been seen in [@panda2012; @rafael], lower values of the slope yield smaller amounts of strangeness if hyperons are considered for a given density. The same conclusion is here obtained if kaons are the carriers of strangeness instead of the hyperons.
As compared with the results obtained with the GM1 parametrization of the NLWM, the QMC EOS is generally softer, the only exception being the $S=2$ case for $L=93.5$ MeV (see Table \[table1\]), which is due to the fact that no kaon condensate is formed, because the central temperatures of the star lie above the melting temperature of the condensate. A softer EOS at intermediate densities implies a smaller amount of kaons and, as a consequence, within QMC no black hole formation is expected during the cooling of a protoneutron star with a kaon condensate in the core, if $L$ is large enough. For smaller values of $L$, but not too small, the set of stars that could cool to a black hole is very reduced and certainly much smaller than what is expected with GM1.
It is interesting to identify the role of the density dependence of the symmetry energy in the possible evolution of a compact star with a kaon condensate. Within QMC no black hole is formed whether $L$ is large or small. This is due to the balance between the softening of the EOS when $L$ is smaller and a less pronounced softening of the EOS because fewer kaons are formed. Within GM1 there is always a quite large set of stars that cool to a black hole, although this set is larger for intermediate values of $L$.
We have also shown that the complex evolution of the star may include the melting and formation of a new kaon condensate. The first transformation is driven by the neutrino diffusion and the second is due to cooling. These processes could be responsible for a neutrino signal followed by a gamma ray burst after the supernova explosion.
Finally, we point out that both models can describe very massive stars, namely stars as massive as the pulsars PSR J1614–2230 [@demorest] and PSR J0348+0432 [@antoniadis].
ACKNOWLEDGMENTS {#acknowledgments .unnumbered}
===============
This work was partially supported by the initiative QREN financed by the UE/FEDER through the Programme COMPETE under the project PTDC/FIS/113292/2009, by CNPq (Brazil) and FAPESC (Brazil) under project 2716/2012,TR 2012000344, and by NEW COMPSTAR, a COST initiative. P.K.P. acknowledges the warm hospitality at both the University of Coimbra and Universidade Federal de Santa Catarina and D.P.M. at the Universidad de Alicante, where parts of this work were carried out.
[99]{}
B. G. Todd-Rutel and J. Piekarewicz, Phys. Rev. Lett. 95, 122501 (2005); F. J. Fattoyev and J. Piekarewicz, Phys. Rev. C 82, 025805 (2010).
G. A. Lalazissis, J. Konig, and P. Ring, Phys. Rev. C 55, 540 (1997).
J. Piekarewicz, Phys. Rev. C 66, 034305 (2002); 69, 041301 (2004).
S.S. Avancini, J.R. Marinelli, D.P. Menezes, M.M.W. Moraes and A.S. Schneider, Phys. Rev. C 76, 064318 (2007) ; S.S. Avancini, J.R. Marinelli, D.P. Menezes, M.M.W. Moraes and C. Providência, Phys. Rev. C 75, 055805 (2007).
J. Cottam, F. Paerels and M. Mendez, Nature [**420**]{}, 51 (2002). D. Sanwal, G.G. Pavlov, V.E. Zavlin and M.A. Teter, Astrophys. J [**574**]{}, L 61 (2002).
Paul Demorest, Tim Pennucci, Scott Ransom, Mallory Roberts, and Jason Hessels, Nature (London) 467, 1081 (2010).
J. Antoniadis et al., Science 340, 6131 (2013).
J.M. Lattimer and Y. Lim, arXiv:1203.4286.
M.B. Tsang et al., Phys. Rev. C 86, 015803 (2012); W.G. Newton, M. Gearheart, J. Hooker and B.-A. Li, in "Neutron Star Crust", ed. C. A. Bertulani and J. Piekarewicz, Nova Publishers (2012), arXiv:1112.2018 (2011); I. Vidaña, C. Providência, A. Polls and A. Rios, Phys. Rev. C 80, 045806 (2009); C. Ducoin, J. Margueron, C. Providência and I. Vidaña, Phys. Rev. C 83, 045810 (2011).
C. J. Horowitz and J. Piekarewicz, Phys. Rev. Lett. 86, 5647 (2001); J.K. Bunta and S. Gmuca, Phys. Rev. C 68, 054318 (2003).
F. J. Fattoyev, C. J. Horowitz, J. Piekarewicz, and G. Shen, Phys. Rev. C 82, 055803 (2010).
C. Providência et al, Eur. Phys. J. (2013), in press, arXiv:1307.1436\[nucl-th\].
R. Cavagnoli, C. Providência and D.P. Menezes, Phys. Rev. C 84, 065810 (2011).
M. Prakash, I. Bombaci, M. Prakash, P. J. Ellis, J. M. Lattimer and R. Knorren, Phys. Rep. [**280**]{}, 1 (1997).
N. K. Glendenning, Compact Stars, Springer-Verlag, New-York, (2000).
D.B. Kaplan and A.E. Nelson, Phys. Lett. [**B 175**]{} 57 (1986); [**B 179**]{} 409(E) (1986).
G.E. Brown, K. Kubodera, M. Rho and V. Thorsson, Phys. Lett. B 291 (1992) 355.
D. P. Menezes, P. K. Panda, and C. Providencia, Phys. Rev. C 72, 035802 (2005).
N. Gupta and P. Arumugam, Phys. Rev. [**C 85**]{}, 015804 (2012).
P. A. M. Guichon, Phys. Lett. [**B 200**]{}, 235 (1988); K. Saito and A.W. Thomas, Phys. Lett. [**B 327**]{}, 9 (1994); P.K. Panda, A. Mishra, J.M. Eisenberg, W. Greiner, Phys. Rev. [**C 56**]{}, 3134 (1997); K. Tsushima, K. Saito, A.W. Thomas and S.V. Wright, Phys. Lett. [**B 429**]{}, 239 (1998).
P.K. Panda, C. Providência and D.P. Menezes, Phys. Rev C [**82**]{}, 045801 (2010).
P.K. Panda, A.M.S. Santos, C. Providência and D.P. Menezes, Phys. Rev. [**C 85**]{}, 055802 (2012).
N. Gupta and P. Arumugam, Phys. Rev. [**C 87**]{}, 045802 (2013).
J. A. Pons, J. A. Miralles, M. Prakash and J.M. Lattimer, Astrophys. J. 553: 382 (2001).
A. Chodos, R.L. Jaffe, K. Johnson, C.B. Thorne and V.F. Weisskopf, Phys. Rev. [**D 9**]{}, 3471 (1974).
P.K. Panda, D.P. Menezes and C. Providencia, Phys. Rev C [**69**]{}, 025207 (2004).
K. Tsushima, K. Saito, A.W. Thomas and S.V. Wright, Phys. Lett. B [**429**]{} 239 (1998).
T. Waas and W. Weise, Nucl Phys A 625, 287 (1997); W. Weise and R. Hartle, Nucl Phys A 835, 51 (2010).
M. Prakash, J.M. Lattimer, J.A. Pons, A.W. Steiner, S. Reddy, Lect. Notes Phys. 578, 364 (2001).
Sarmistha Banik, Rana Nandi, and Debades Bandyopadhyay Phys. Rev. C 86, 045803 (2012); Sarmistha Banik, Walter Greiner, and Debades Bandyopadhyay Phys. Rev. C 78, 065804 (2008). I. Bombaci and B. Datta, Astrophys. J. [**530**]{}, L69 (2000). Z. Berezhiani, I. Bombaci, A. Drago, F. Frontera, and A. Lavagno, Astrophys. J. [**586**]{}, 1250 (2003); I. Bombaci, I. Parenti, and I. Vidaña, Astrophys. J. [**614**]{}, 314 (2004); D.P. Menezes, D.B. Melrose, C. Providência and K. Wu, Phys. Rev. C [**73**]{}, 025806 (2006).
R. Cavagnoli, C. Providência and D.P. Menezes, Phys. Rev. C [**84**]{}, 065810 (2011).
C. Providência [*et al.*]{}, J. Phys.: Conf. Ser. 413, 012023 (2013)
K. Hebeler et al: Phys. Rev. Lett. **105**, 161102 (2010)
S. Guillot, M. Servillat, N.A. Webb and R.E. Rutledge, arXiv: 1302.0023v2\[astro-ph.HE\]
J.M. Lattimer and A.W. Steiner, arXiv:1305.3242\[astro-ph.HE\]
Toshiki Maruyama, Toshitaka Tatsumi, Dmitri N. Voskresensky, Tomonori Tanigawa, Tomoki Endo, and Satoshi Chiba Phys. Rev. C 73, 035802 (2006)
|
---
abstract: 'Inspired by the number series tests used to measure human intelligence, we suggest number sequence prediction tasks to assess neural network models’ computational powers for solving algorithmic problems. We define the complexity and difficulty of a number sequence prediction task with the structure of the smallest automaton that can generate the sequence. We suggest two types of number sequence prediction problems: the number-level and the digit-level problems. The number-level problems format sequences as 2-dimensional grids of digits, and the digit-level problems provide a single digit input per time step. The complexity of a number-level sequence prediction can be defined with the depth of an equivalent combinatorial logic, and the complexity of a digit-level sequence prediction can be defined with an equivalent state automaton for the generation rule. Experiments with number-level sequences suggest that CNN models are capable of learning the compound operations of sequence generation rules, but the depths of the compound operations are limited. For the digit-level problems, simple GRU and LSTM models can solve some problems with the complexity of finite state automata. Memory-augmented models such as Stack-RNN, Attention, and Neural Turing Machines can solve the reverse-order task, which has the complexity of a simple pushdown automaton. However, none of the above can solve general-Fibonacci, arithmetic, or geometric sequence generation problems, which represent the complexity of queue automata or Turing machines. The results show that our number sequence prediction problems effectively evaluate machine learning models’ computational capabilities.'
author:
- |
Hyoungwook Nam\
College of Liberal Studies\
Seoul National University\
Seoul, Korea\
`hwnam831@snu.ac.kr`\
Segwang Kim\
Department of Electrical and\
Computer Engineering\
Seoul National University\
Seoul, Korea\
`ksk5693@snu.ac.kr`\
Kyomin Jung\
Department of Electrical and\
Computer Engineering\
Seoul National University\
Seoul, Korea\
`kjung@snu.ac.kr`\
bibliography:
- 'aaai2019.bib'
title: Number Sequence Prediction Problems for Evaluating Computational Powers of Neural Networks
---
Introduction
============
Well-defined machine learning tasks have been crucial for machine learning research. Major deep learning breakthroughs in the field of computer vision such as AlexNet [@alexnet], VGGNet [@vgg] and ResNet [@he2016deep] would not have been possible without the ImageNet dataset and challenges [@imagenet]. In the field of reinforcement learning, open-source platforms like MuJoCo [@mujoco] and Deepmind Lab [@deepmindlab] provide challenging environments for such studies. However, it is hard to find a machine learning task suite for algorithmic reasoning, although reasoning has always been a significant subject for many machine learning studies.
It is theoretically proven that carefully designed neural network models can simulate any Turing machine [@siegelmann1995computational]. Hence, there have been studies applying neural network models to solve algorithmic tasks such as learning context-sensitive languages [@gers2001lstm], solving graph questions [@graves2016hybrid], and composing low-level programs [@reednpi]. Also, there have been attempts to train neural networks with simple numerical rules such as copy, addition or multiplication [@stackrnn; @neuralgpu; @gridlstm; @neuralturing]. However, it has been unclear whether the proposed models express computational powers equivalent to Turing machines in practice. To provide a method to test the computational powers of neural network models, we propose a set of number sequence prediction problems designed to fit deep learning methods.
A number sequence prediction problem is a kind of intelligence test for machine learning models inspired by number series tests, which are conventional methods to evaluate non-verbal human intelligence [@nonverbal]. A typical number series test gives a sequence of numbers with a certain rule and requires a person to infer the rule and fill in the blanks. Similarly, a number sequence prediction problem requires a machine learning model to predict the following numbers from a given sequence. The numbers are represented as a sequence of digit symbols; hence the model has to learn discrete transition rules between the symbols such as carry or borrow rules.
To be specific, we suggest two types of number sequence prediction problems: the number-level problems and the digit-level problems. A number-level problem provides a two-dimensional grid of digits as an input, where each row of the grid represents a multi-digit number. The target is a grid of the same format filled with the following numbers. Solving a number-level problem is equivalent to constructing the combinatorial logic for the transition rules. On the other hand, a digit-level problem provides a single digit as an input at each time step. A model needs to simulate a sequential state automaton to predict the outputs. The type of state machine required can vary from a finite state machine to a Turing machine, depending on the generation rule of the sequence.
The number sequence prediction problems are good machine learning tasks for several reasons. First, typical deep learning models can easily fit into the problems. Generative models for 2D images can be directly applied to solve the number-level problems, and recurrent language models can fit into the digit-level problems after minimal modifications. Next, it is possible to define the complexity and difficulty of a problem. Like Kolmogorov complexity [@chaitin1977algorithmic], we can define the complexity of a problem with the structure of the minimal automaton that needs to be simulated. Finally, we can generate an arbitrarily large number of examples, which is hard for many machine learning tasks.
To empirically prove that the number sequence prediction problems can effectively evaluate the computational capabilities of machine learning models, we conduct experiments with typical deep learning methods. We apply residual convolutional neural network (CNN) [@he2016deep] models to the number-level problems, and recurrent neural network (RNN) models with GRU [@gru] or LSTM [@hochreiter1997long] cells to the digit-level problems. We also augment the RNN models with a stack [@stackrnn], external memory [@neuralturing] and attention [@bahdanau2014neural], which might help the models solve more complex digit-level sequence prediction tasks. One-dimensional CNN models can be applied to digit-level sequences, but this is not equivalent to solving the digit-level problems because a CNN receives all the data in parallel at the same time, losing the sequential nature of the problems. For each type of sequence, we measure its complexity by designing an automaton equivalent to the generation rule. In the experiments on the number-level problems, sequences are generated by various linear homogeneous recurrence relations. Since the digit transition rules of the relations can be implemented with combinatorial logic, we measure the complexity and the difficulty of a sequence from the width and the depth of the logic. Experiments show that CNN models are capable of learning the compound operations of number-level sequence generation rules but are limited to a certain complexity. Digit-level sequence prediction problems can be solved with state automata. Therefore, we define the complexity of a problem with the computing power of the automaton and choose sequences with the complexities of finite state automata, pushdown automata, and linearly bounded Turing machines. The contributions of this work are as follows:
- We propose a set of number sequence prediction problems for evaluating a machine learning model’s algorithmic computing power.
- We define methods to measure complexities and difficulties of the problems based on the structure of automata to be simulated, which can predict the difficulty of training.
- Number-level sequence prediction experiments show that CNN models can simulate deep combinatorial logics up to certain depth.
- Digit-level sequence prediction tasks reveal that the computational powers of existing recurrent neural network models are limited to that of finite state automata or pushdown automata.
Overall, the set of our problems can be a well-defined method to verify whether a new machine learning architecture extends the computing power of previous models. There are some possible directions to extend the computational capabilities of neural network models. The first way is to apply training methods other than the typical methods we used in the experiments. For instance, reinforcement learning methods can be applied to the algorithmic tasks [@zaremba2016learning]. Next, non-backpropagation methods such as dynamic routing [@capsnet] might help neural network models learn more complex rules. Our number sequence prediction tasks would provide a well-defined basis for those possible future works.
Problem Definition
==================
Number-level Sequence Prediction
--------------------------------
![ Input and target sequence examples of a number-level problem with the Fibonacci sequence. The number-level sequence example is with length $n=4$, shift $s=2$ and digit $l=4$. A number in a cell is represented by an one-hot vector. \[fig\_numberproblem\] ](fig_numberproblem){width="0.65\linewidth"}
Figure \[fig\_numberproblem\] illustrates a number-level sequence prediction problem. The model is given an input sequence $A_{1} \cdots A_{n}$ formatted as a two-dimensional grid with $n$ rows. Each row corresponds to a term $A_{i}$, a multi-digit number of $l$ digits. A digit cell is a one-hot vector whose number of channels equals the base $b$ of the digits. The target data $A_{n+1} \cdots A_{n+s}$ is the sequence of the following $s$ numbers (the shift) in the same data layout. In the experiments, we use sequence data of $n=8$, $l=8$ and $s=4$. We denote this $\{0,1\}^{l\times b}$ binary one-hot row tensor representation of a natural number $A$ as $\langle A\rangle$.
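As a concrete illustration of this data layout, the following minimal sketch (our own code, not from the paper; names such as `generate_sequence` and `encode_number` are hypothetical) builds a number-level example with the Fibonacci rule from Figure \[fig\_numberproblem\], encoding each term as an $l\times b$ one-hot digit grid, least significant digit first:

```python
import numpy as np

def generate_sequence(coeffs, initial, length):
    """Extend the initial terms with A_n = c_1*A_{n-1} + ... + c_k*A_{n-k}."""
    terms = list(initial)
    k = len(coeffs)
    while len(terms) < length:
        recent = terms[-1:-k - 1:-1]  # A_{n-1}, ..., A_{n-k}
        terms.append(sum(c * a for c, a in zip(coeffs, recent)))
    return terms

def encode_number(a, l=8, b=10):
    """Encode a non-negative integer as an l x b one-hot grid (row i = digit i)."""
    grid = np.zeros((l, b), dtype=np.int8)
    for i in range(l):
        grid[i, a % b] = 1
        a //= b
    return grid

# Fibonacci rule (c1 = c2 = 1): rows A_1..A_8 form the input grid and
# rows A_9..A_12 the target grid, matching n = 8, l = 8, s = 4.
terms = generate_sequence([1, 1], [3, 5], length=12)
inputs = np.stack([encode_number(t) for t in terms[:8]])   # shape (8, 8, 10)
targets = np.stack([encode_number(t) for t in terms[8:]])  # shape (4, 8, 10)
```

Other generation rules only change the coefficient list, e.g. `[2, -1]` for an arithmetic progression.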
We use various order-$k$ homogeneous linear recurrences of the form $A_n=c_1 A_{n-1}+\cdots+c_k A_{n-k}$ with constant integer coefficients $c_1, \dots, c_k$ to generate number sequences starting from randomly selected initial terms $A_1, \dots, A_k$. For instance, $k=2, \ c_1=1, \ c_2=1$ implies a general-Fibonacci sequence and $k=2, \ c_1=2, \ c_2=-1$ gives an arithmetic progression. Likewise, a progression whose difference is itself an arithmetic sequence, with recurrence $A_{n}-A_{n-1}=A_{n-1}-A_{n-2}+c$, can be rewritten as $A_{n}=3A_{n-1}-3A_{n-2}+A_{n-3}$. From the perspective of combinatorial logic, the generation rules of the sequences can be seen as $k$-ary operations on the binary tensors. For example, the generation rule of arithmetic sequences can be represented with the binary operation $(\langle A\rangle, \langle B\rangle) \mapsto \langle 2A-B\rangle$. Since all inputs and outputs of an operation are binary, there exists a shortest disjunctive normal form (DNF) for the operation. We first define the combinatorial width of an operation with its disjunctive normal form, i.e. a sum of minterms[^1].
If the smallest DNF of a function $f : \{0,1\}^n \rightarrow \{0,1\}^m$ has $\Theta(w)$[^2] minterms, $\Theta(w)$ is called the **combinatorial width** of the function. If functions $f_1, \dots, f_k$ have corresponding widths of $\Theta(w_1), \dots, \Theta(w_k)$, the **compound width** of a composition $f_1 \circ \dots \circ f_k$ is defined as $\Theta(w_1+\dots+w_k)$.
![ Conceptual schema of a binary operation (left), a ternary operation (middle) and an equivalent composition of two binary operations (right). The formulas represent combinatorial widths. \[fig\_logic\] ](fig_comb){width="0.7\linewidth"}
The decimal digit addition, for example, requires at least $\Theta(10^2)$ products since it has to memorize the consequences of all possible digit pair inputs. Therefore, the combinatorial width of a linear binary operation is $\Theta(b^2)$ where $b$ is the base of the digits. Note that the compound width of a function is not unique. Consider a logical circuit for the ternary operation $(\langle A\rangle, \langle B\rangle, \langle C\rangle) \mapsto \langle 2A-B+C\rangle$. As seen in Figure \[fig\_logic\], the operation can be implemented with a single function of combinatorial width $\Theta(b^3)$, or as a compound of two binary operations resulting in a compound width of $\Theta(b^2)$ at the cost of a deeper data path. This depth of the path can define the complexity of the operation.
The **complexity** of a function $f : \{0,1\}^n \rightarrow \{0,1\}^m$ is the minimum number $n$ of functions which make the compound width of $f_1 \circ \dots \circ f_n = f$ the smallest. Such smallest compound width is called the **difficulty** of the function.
For example, the length of a row $l$ is the complexity of the carry rule since the carry digit of the most significant digit sequentially depends on all other digits. To eliminate the dependence on the dimensions, we ignore the carry or borrow rule while calculating a complexity. Since a logical product can be approximated with a neuron with a nonlinear activation, the difficulty should correspond to the number of neurons in the network. Also, since the complexity reflects the depth of a logical circuit, it should correspond to the number of layers in the network. Note that it is possible to compromise the width for the depth as seen in Figure \[fig\_logic\]. We expect deep neural networks to learn narrow but deeper representations.
Digit-level Sequence Prediction
-------------------------------
![ Input and target sequence examples of a digit-level problem with the Fibonacci sequence. The example is with $n=8$ and $s=4$. The order of the digits is little-endian (least significant digits first). \[fig\_digitproblem\] ](fig_digitproblem){width="0.8\linewidth"}
Figure \[fig\_digitproblem\] illustrates a digit-level sequence prediction problem. The model is given sequential inputs $a_{1} \dots a_{n}$, each of which is an integer corresponding to a character. With base $b$, the numbers $0 \dots b-1$ correspond to the digits. The second-to-last number $b$ is a blank, and the last number $b+1$ is a delimiter. After the $n$ inputs, we give delimiters as inputs for $s$ time steps. The target sequence consists of $n$ delimiters followed by $a_{n+1} \dots a_{n+s}$. Because digit calculations must start from the smallest digit, we order the digits in little-endian order, the reverse of the typical digit order. In the experiments, we use sequences of $n=12$ and $s=12$.
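This encoding can be sketched as follows (illustrative code of our own; the exact role of the blank symbol in the original layout is not detailed in the text, so we merely reserve it, and the helper names are hypothetical):

```python
def to_digits(a, b=10):
    """Little-endian digit list of a non-negative integer (at least one digit)."""
    digits = []
    while True:
        digits.append(a % b)
        a //= b
        if a == 0:
            return digits

def make_digit_level_example(terms, n, s, b=10):
    """Flatten terms into input/target streams of length n + s.

    Symbols 0..b-1 are digits, b is the blank (reserved, e.g. for padding,
    which is our guess), and b + 1 is the delimiter placed between numbers.
    The target is n delimiters followed by the s symbols to predict."""
    delim = b + 1
    stream = []
    for t in terms:
        stream.extend(to_digits(t, b))
        stream.append(delim)
    inputs = stream[:n] + [delim] * s
    targets = [delim] * n + stream[n:n + s]
    return inputs, targets
```

For example, `make_digit_level_example([21, 34, 55, 89], n=8, s=4)` produces the little-endian digit stream `1 2 <d> 4 3 <d> 5 5 ...` as the input prefix.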
The sequential nature of the data makes the problems more difficult to solve. Since the model has to retain information from the previous inputs, solving the problem is equivalent to modeling a sequential state automaton of the generation rule. The computing power of a state automaton falls into one of four categories: finite state machine, pushdown automaton, linear bounded automaton, and Turing machine. All Turing machines are linearly bounded in the problems because the computation time is linearly bounded by the length of the sequence. Therefore, three levels of state automata are possible in the digit-level sequence prediction problems. We define the complexity of a sequence by the smallest state automaton required.
The **complexity** of a number sequence prediction problem is the complexity of the state automaton that can simulate the sequence generation rule with the smallest number of states. The **minimal grammar** of the sequences is the formal grammar that can be recognized with that automaton.
To illustrate, consider the most straightforward sequence: number counting. If the numbers have at most $l$ digits of base $b$, the counter can be implemented with $\Theta(lb)$ shift registers, which can be translated into a non-deterministic finite state automaton with the same number of states. Hence, the complexity of counting numbers is the complexity of finite state automata, and its minimal grammar is a regular grammar. In the experiments, we use progressions with a fixed difference because they can be understood as generalized forms of number counting sequences. Arithmetic, geometric and general-Fibonacci sequences can also be represented as digit-level sequences. The most straightforward automata capable of generating them are queue automata, which share the same computational power as Turing machines. Since Turing machines must be linearly bounded in the digit-level problems, the minimal grammars of both arithmetic and general-Fibonacci sequences are context-sensitive grammars.
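To see why a queue automaton suffices for general-Fibonacci sequences, the following sketch (our own illustration, not the paper's code) generates successive terms digit by digit while storing only the FIFO digit queues of the two most recent terms:

```python
from collections import deque

def to_digits(a, b=10):
    """Little-endian digit list of a non-negative integer."""
    digits = []
    while True:
        digits.append(a % b)
        a //= b
        if a == 0:
            return digits

def add_digit_queues(qa, qb, b=10):
    """Digit-wise addition of two little-endian digit queues with a carry."""
    qa, qb, out, carry = deque(qa), deque(qb), deque(), 0
    while qa or qb or carry:
        s = (qa.popleft() if qa else 0) + (qb.popleft() if qb else 0) + carry
        out.append(s % b)
        carry = s // b
    return out

def fibonacci_digit_stream(a0, a1, steps, b=10):
    """Yield digits of successive general-Fibonacci terms while keeping only
    the two most recent terms, i.e. the FIFO memory of a queue automaton."""
    prev, cur = deque(to_digits(a0, b)), deque(to_digits(a1, b))
    for _ in range(steps):
        prev, cur = cur, add_digit_queues(prev, cur, b)
        yield list(cur)
```

Starting from 3 and 5, the stream yields the digits of 8, 13, 21, and so on, processing least significant digits first so the carry can propagate forward.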
![ Nondeterministic finite state automaton that can solve reverse-order task with $n=2$ and $b=2$. Automata for fixed difference arithmetic sequence can be built in similar manners. \[fig\_fsm\] ](fig_fsm){width="0.9\linewidth"}
Between regular and context-sensitive languages lie the context-free languages, which require pushdown automata. Palindromes are proper examples of context-free languages that cannot be expressed by lesser languages. Therefore, we add the experiment of a reverse-order task where the target sequence is the reverse of the input sequence. The input data consists of $n$ random digits followed by $n$ delimiters, and the target data is $n$ delimiters followed by the $n$ digits in reverse order. If $n$ is limited, it is possible to solve the reverse-order task with a finite state automaton as seen in Figure \[fig\_fsm\]. Therefore, we train the models with $n=1 \dots 12$ and validate with $n=16$ to force the complexity of the problem to be equivalent to that of a pushdown automaton.
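A reverse-order example can be generated as follows (a sketch with our own naming; the delimiter symbol $b+1$ follows the digit-level encoding described earlier):

```python
import random

def make_reverse_example(n, b=10):
    """Input: n random digits then n delimiters; target: n delimiters then
    the same digits reversed.  The delimiter symbol is b + 1."""
    delim = b + 1
    digits = [random.randrange(b) for _ in range(n)]
    inputs = digits + [delim] * n
    targets = [delim] * n + digits[::-1]
    return inputs, targets
```

Training draws `n` from 1 to 12 while validation uses `n = 16`, so a model that merely memorizes fixed-length palindromes with a finite automaton fails on the validation set.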
Method
======
Model Architecture
------------------
![ Schematic of number-level CNN models. The number of neurons in the convolution layers can be one of ${64, 128, 192}$. The residual blocks can be repeated once, twice or thrice, making 12, 21 or 30-layer CNN model. \[fig\_cnnmodel\] ](fig_cnnmodel){width="0.8\linewidth"}
The number-level sequence prediction models described in Figure \[fig\_cnnmodel\] are based on the WaveNet model [@wavenet], which is also a generative model for sequential data. Since the data layout of number-level sequences is two-dimensional, we use 3$\times$3 convolution kernels with dilation on the second dimension of the kernels, where large receptive fields are necessary for the carry rules. Unlike WaveNet, we use ReLU activation because we empirically observe that the gate activation slows down the training speed but shows no improvement in accuracy. The number of neurons per convolution layer can be 64, 128 or 192, which can correspond to the difficulty of a problem. Inspired by the bottleneck architectures of residual CNNs [@he2016deep], the first layer of each residual block has half the number of neurons. By stacking more residual blocks onto the model, we can change the depth of the model. The base 12-layer model has three residual blocks of dilation (0,2,4), and they can be repeated to make 21-layer and 30-layer models. BatchNorm [@batchnorm] and Dropout [@dropout] methods are applied to all residual blocks.
The digit-level sequence prediction models on the left side of Figure \[fig\_rnnmodel\] are based on the simple character-level RNN language model [@char-rnn] with minimal modifications. LSTM [@hochreiter1997long], GRU [@gru], Stack-RNN [@stackrnn] and Neural Turing Machine (NTM) [@neuralturing] are used for the recurrent modules in the middle. A Stack-RNN module uses a number of stacks equal to the base $b$, and an NTM module uses 4 read and write heads. A digit-level model with attention [@bahdanau2014neural] follows the encoder-decoder architecture on the right side of Figure \[fig\_rnnmodel\]. The first half of an input sequence and the second half of a target sequence beginning with the delimiter ($\langle$Go$\rangle$ symbol) are fed into the encoder and the decoder. We use both unidirectional and bidirectional LSTM modules for the models with attention. We set the number of neurons in all hidden layers to 128.
![ Schematics of digit-level neural network models. A recurrent module in a digit-level model can be either LSTM, GRU, Stack-RNN or Neural Turing Machine. Unlike other digit-level models, an attention model must follow the encoder-decoder structure which is illustrated on the right side. \[fig\_rnnmodel\] ](fig_digitmodel){width="0.9\linewidth"}
{width="\linewidth"}
Training and Validation Method
------------------------------
We follow an end-to-end training approach; thus, the models have to learn the logical rules without any domain-specific prior knowledge. A batch of size 32 is randomly generated for each iteration by choosing the initial numbers and applying the generation rules. The space of all possible training sequences should be large enough to avoid overfitting. We evaluate the validation prediction error rate on a pre-defined validation dataset after every 32 iterations. We define the prediction error rate as the ratio of wrong predictions to total predictions. The total predictions are counted as $l\times s = 32$ in number-level sequences and $s = 12$ in digit-level sequences. A prediction is determined by the digit channel with the maximum output value. Both number-level and digit-level models are trained to minimize the cross-entropy loss function. The validation dataset is also randomly generated, from a space outside of the training data space. For example, we choose the first two terms of number-level arithmetic sequences from the range $(0,20000)$ for the training dataset, but from the range $(20000, 30000)$ for the validation dataset.
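The error-rate computation described above can be sketched in NumPy (a helper of our own; it assumes model outputs and one-hot targets share the shape `(batch, positions, channels)`, with the prediction taken as the argmax channel):

```python
import numpy as np

def prediction_error_rate(outputs, target_onehot):
    """Fraction of positions where the argmax channel of the model output
    disagrees with the one-hot target."""
    pred = outputs.argmax(axis=-1)
    true = target_onehot.argmax(axis=-1)
    return float((pred != true).mean())
```

With $l\times s = 32$ predicted digit cells per number-level example, a single wrong digit in one example contributes $1/32$ to that example's error rate.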
Experiment
==========
![ The learning curves of the 12-layer number-level model with 64 neurons on the five types of the basic sequences. $(p,q)$ denotes the coefficients of binary operations $(A,B) \mapsto pA+qB$. $(1,0,1)$ denotes the relation of $A_n=A_{n-1}+A_{n-3}$. \[fig\_lv2\] ](plot_lv2){width="0.9\linewidth"}
Number-level Sequence Prediction Experiment
-------------------------------------------
#### Setup
The objective of the experiments is to verify that the complexity and difficulty of a number-level problem correspond to the depth and the parameter size of a CNN model. A total of eight types of sequences are used in this part of the experiments. The first four types of sequences have recurrence relations of the form $A_n=p A_{n-1}+q A_{n-2}$ where $(p,q)\in \{ (1,1),(2,-1),(3,-2),(1,2) \}$. These four sequences represent binary operations with a complexity of one. The fifth type has the relation $A_n=A_{n-1}+A_{n-3}$, which represents a binary operation with a complexity of two because the model has to see through at least two layers to catch the relation between $A_{n-1}$ and $A_{n-3}$ with $3\times3$ convolution kernels. The sixth type is a mixture of the first four types of sequences. This is equivalent to building a ternary combinatorial logic with four times the width. For comparison, the seventh type is generated by the recurrence relation $A_n=2A_{n-1}-A_{n-2}+A_{n-3}$, which can be a compound of two binary operations. The last type is a progression with the relation $A_n=4A_{n-1}-6A_{n-2}+4A_{n-3}-A_{n-4}$, whose general term can be calculated with a fourth-order polynomial. For the training data, the first $k$ terms[^3] of the sequences are chosen from $(0,20000)$, while they are chosen from $(20000,30000)$ in the validation dataset. We compare the learning curve patterns over various model configurations.
![ The error examples from number-level model trained with general-Fibonacci sequences. Shaded cells show the locations of the errors. The numbers are shown in little-endian order. \[fig\_lv2err\] ](fig_lv2err){width="0.7\linewidth"}
#### Result
Figure \[fig\_lv2\] and Figure \[fig\_lv2err\] show the validation error curves and the error examples of a CNN number-level model during training on the five types of sequences generated by the binary operations. Although the number of possible sequences exceeds a hundred million, the model can achieve error rates near zero in less than a hundred thousand examples. Since the validation data comes from outside the training data space, we can conclude that the model learns the exact logic rules for the operations. The error examples show that it is hard to catch long-term carry rules, which is expected because the carry rules have complexities equal to $l=8$. Deploying deeper models reduces the errors from those long-term carry rules, occasionally achieving a zero prediction error. The fifth sequence, with rule $A_{n-1}+A_{n-3}$, shows a different learning curve pattern since $3\times3$ convolution kernels force the model to simulate logic with a complexity of two. The results show that the complexities of number-level sequence prediction problems can effectively predict the hardness of learning.
Figure \[fig\_mixlv3\] compares the learning curves of the models with various configurations and sequence data. The models successfully learn the rules from both the mixed set of primary sequences and the sequences generated by a ternary relation. However, the patterns of the learning curves are different. With the mixed set of primary sequences, the learning curves of the models show uniform convexity without a saddle point. Also, there is no clear advantage in using deeper models. However, the learning curves for the sequences with a compound rule have saddle points, where we suspect the models find breakthroughs. Moreover, we can observe the advantages of using deeper models. Therefore, it can be concluded that deep learning models tend to learn complex but less difficult combinatorial logic, rather than the equivalent shallow but wide representations. Meanwhile, the last learning curves show that the CNN model finds it hard to learn logic with a complexity of more than three. The quaternary operator with base 5 has a smaller combinatorial width than a decimal ternary operator, but the model cannot learn the rule of the former.
  | Tasks                      | Reverse-order (training) | Geometric | Arithmetic | Fibonacci |
  |----------------------------|--------------------------|-----------|------------|-----------|
  | LSTM                       | 28.4% (1.2%)             | 79.4%     | 77.1%      | 80.5%     |
  | GRU                        | 51.9% (0.9%)             | 69.0%     | 77.1%      | 79.3%     |
  | Attention (unidirectional) | 42.0% (8.8%)             | 62.8%     | 77.0%      | 69.3%     |
  | Attention (bidirectional)  | 0.0% (0.0%)              | 51.0%     | 72.9%      | 60.9%     |
  | Stack-RNN                  | **0.0%** (0.0%)          | 64.1%     | 63.8%      | 69.4%     |
  | NTM                        | **0.0%** (0.0%)          | 57.1%     | 65.7%      | 68.1%     |

  : Validation error rates of the digit-level models (training error rates of the reverse-order task in parentheses). \[table\_err\]
![ Validation error curves of LSTM and GRU digit-level sequence prediction models on the arithmetic sequences with fixed difference of 17. \[fig\_count\] ](plot_count){width="0.9\linewidth"}
![ Error examples from the digit-level LSTM model trained with general-Fibonacci sequences. The numbers are shown in little-endian order. Shaded cells show locations of the errors. \[fig\_digitfib\] ](fig_lstmfiberr){width="0.9\linewidth"}
Digit-level Sequence Prediction Experiment
------------------------------------------
#### Setup
The purpose of the digit-level sequence prediction experiments is to find the complexity limits of the models. The first type of sequence is a progression with a fixed difference, which can be understood as a variation of number counting sequences. We use a difference of 17 to observe the carry rules more often. The first term of a training example is chosen from the range $(0,9000)$, and that of a validation example from $(9000,9900)$. In the second experiment, we use arithmetic sequences or general-Fibonacci sequences. The first two terms are chosen from the range $(0,4000)$ during training and $(4000,6000)$ for validation. Since it is impractical to build finite state automata for all cases, the model must simulate queue automata to solve the problems. The third experiment uses rounded geometric sequences with the relation $A_{n+1} = \lfloor1.3A_n\rfloor$, where the first terms are randomly chosen from $(0,4000)$ during training and $(4000,6000)$ for validation. This task requires a smaller queue automaton since it has to remember only one previous number at a time. The last experiment tests the models with the reverse-order task, which has the complexity of a pushdown automaton. Since a reverse-order problem of fixed length can be solved by a finite automaton, we train the models with $n\in\{1 \dots 12\}$ and validate the models with $n=16$ to force the models to learn a pushdown automaton.
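The first and third sequence types, together with the disjoint train/validation first-term ranges of the fixed-difference task, can be sketched as follows (helper names are ours; the geometric update is computed in exact integer arithmetic, since $\lfloor 1.3A\rfloor = \lfloor 13A/10\rfloor$):

```python
import random

def fixed_difference_terms(first, length, diff=17):
    """Progression with a fixed difference (finite-state complexity)."""
    return [first + diff * i for i in range(length)]

def rounded_geometric_terms(first, length):
    """Rounded geometric sequence A_{n+1} = floor(1.3 * A_n),
    computed exactly as (13 * A_n) // 10 to avoid float error."""
    terms = [first]
    for _ in range(length - 1):
        terms.append(13 * terms[-1] // 10)
    return terms

def sample_first_term(training):
    # Disjoint first-term ranges of the fixed-difference task keep the
    # validation data outside the training space.
    return random.randrange(0, 9000) if training else random.randrange(9000, 9900)
```

The other tasks use the same pattern with their own recurrences and the $(0,4000)$ / $(4000,6000)$ ranges quoted above.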
#### Result
Figure \[fig\_count\] shows that GRU and LSTM based models are capable of simulating finite state automata. Although the GRU model performs better than the LSTM model, neither is able to solve the problems that require queue or pushdown automata, as seen in Table \[table\_err\]. The training error rates of the GRU and LSTM models on the reverse-order task converge to around 0.01, suggesting that the models are capable of simulating finite state automata for generating palindromes of limited length. The error examples from the general-Fibonacci sequence prediction task in Figure \[fig\_digitfib\] show the strategies of the models. The models remember relationships between the most significant digits, while relationships between the least significant digits are more critical for the digit computations. We can conclude that the computational powers of typical RNN models are limited to those of finite state automata if they are trained with typical training methods. The encoder-decoder model with attention and the Stack-RNN and NTM models are capable of solving the reverse-order task, but they are no better than typical RNN models on problems that require queue automata. The model with attention shows no significant difference if it uses a unidirectional LSTM; using a bidirectional LSTM seems to be crucial for simulating pushdown automata in the models with attention.
Conclusion
==========
We introduced effective machine learning problems of number sequence prediction which can evaluate a machine learning model's capability of solving algorithmic tasks. We also introduced methods to define the complexities and difficulties of the number sequence prediction problems. The structure of the combinational logic that generates a sequence measures the complexity and the difficulty of a number-level sequence prediction problem. Experiments with the CNN models showed that these measures effectively predict the hardness of learning and correspond to the structures of neural network models. The complexity of a digit-level sequence prediction task can be defined as the complexity of a minimal state automaton which can solve it. The experimental results showed that the computational powers of typical RNN models are limited to those of finite automata. While models augmented with external memory could solve the problems that require pushdown automata, none of the models were capable of simulating queue automata, which are equivalent to Turing machines. To sum up, our number sequence prediction tasks proved to be effective and well-defined tests of neural network models' computational capabilities.
There are a few possible ways we suggest to proceed with these problems. The first is to propose and test a new network architecture to solve the tasks that could not be solved in this study. If a neural network model can solve the digit-level arithmetic and geometric sequence prediction tasks, we can say that the model extends the computational capabilities of neural networks. Another way is to apply training methods other than the typical methods we used in the experiments. Sequence-to-sequence training methods for the digit-level prediction problems limit the capability of models to that of linear bounded automata, since the computation time is linearly bounded by the length of a sequence. Training methods that decouple computation time from the number of outputs might expand the capability of the neural network models. Finally, non-backpropagation methods such as dynamic routing [@capsnet] might be able to expand the computing power of neural network models. Our number sequence prediction tasks would provide a well-defined basis for such future work.
[^1]: A logical AND of literals in which each variable appears exactly once in true or complemented form [@logicdesign]
[^2]: $w$ is a function of input and output dimensions
[^3]: $k$ is the order of a recurrence relation
**Odd-dimensional Charney-Davis Conjecture**
Światosław R. Gal (partially supported by Polish grant [n201 012 32/0718]{}) & Tadeusz Januszkiewicz (partially supported by the [nsf]{} grant [dms-0706259]{})
2000 [*Mathematics Subject Classification:*]{} 52b70 (52b11, 06a07). [*Key phrases: flag complex, h-vector, Charney–Davis Conjecture*]{}.
[ [Abstract:]{} More than once we have heard that the Charney-Davis Conjecture makes sense only for odd-dimensional spheres. This note points out that in fact it is also a statement about even-dimensional spheres.]{}
A conjecture of Heinz Hopf asserts that the sign of the Euler characteristic of a closed Riemannian $2d$-dimensional manifold of non-positive sectional curvature is the same as that of a product of non-positively curved surfaces: $$(-1)^d\chi (M^{2d})\geq 0.$$
For Riemannian manifolds the condition of non-positive sectional curvature is equivalent to being locally [cat(0)]{}. The Hopf Conjecture has subsequently been generalized to closed, piecewise Euclidean, locally [cat(0)]{} manifolds.
By work of M. W. Davis \[D\], Coxeter groups provide a rich source of piecewise Euclidean, locally [cat(0)]{} spaces. Given a [*flag*]{} triangulation of a sphere $L^{n-1}$, a construction of Davis gives a reflection orbifold ${\cal O}^n$, with many (generalized homology) manifold covers.
The Euler characteristic of $\cal O$ is given by the f- and h-polynomials of $L$ as follows: $$\chi({\cal O})=f_L(-1/2)=h_L(-1).$$
Charney and Davis emphasized the combinatorial implications of the Hopf Conjecture in this context.
Conjecture ([\[CD, Conj. D, p. 135\]]{}). If $L$ is a flag triangulation of a sphere of dimension $2d-1$ then $(-1)^dh_L(-1)\geq 0$.
The Hopf Conjecture does not say anything about odd-dimensional manifolds. So on the face of it, the Charney-Davis Conjecture should not say anything about even-dimensional spheres. The point we want to make in this note is that in fact it does.
Theorem. The Charney-Davis Conjecture is equivalent to the following statement. Let $L$ be a sphere of dimension $2d$. Let $h_L(t)$ be its h-polynomial, and let $\widetilde h_L(t)$ be defined by $h_L(t)=(1+t)\widetilde h_L(t)$. Then $$(-1)^d\widetilde h_L(-1)\geq 0.$$
[*Proof:*]{} Let $L$ be a suspension of $Ł$. Since the h-polynomial is multiplicative for joins, $(1+t)\widetilde h_L(t)=h_L(t)=(1+t)h_{Ł}(t)$, so $\widetilde h_L=h_{Ł}$. The statement $(-1)^dh_{Ł}(-1)=(-1)^d\widetilde h_L(-1)\geq 0$ is then just the Charney-Davis Conjecture for $Ł$.
To prove the other implication recall that f-polynomial and h-polynomial are related by the formula $$(2+t)(1+t)^{2d-1}\widetilde h_L\left({1\over1+t}\right)=
(1+t)^{2d}h_L\left({1\over1+t}\right)=t^{2d}f_L\left({1\over t}\right).$$ Differentiating both sides we get $$(2+t)\left[(1+t)^{2d-1}\widetilde h_L\left({1\over1+t}\right)\right]'
+(1+t)^{2d-1}\widetilde h_L\left({1\over1+t}\right)
=2d\,t^{2d-1}f_L\left({1\over t}\right)-t^{2d-2}f_L'\left({1\over t}\right)$$ where $\left[(1+t)^{2d-1}\widetilde h_L(1/(1+t))\right]'$ is a polynomial. Substitute $t=-2$ and use the fact that, by Dehn-Sommerville, $f_L(-1/2)=0$ to get $$(-1)^{2d-1}\widetilde h_L(-1)=-(-2)^{2d-2}f_L'(-1/2).$$
We omit the proof of the following straightforward claim. The sum of f-polynomials of links of vertices of $L$ is equal to the derivative of the f-polynomial of $L$.
Applying the above claim to the preceding equality gives $$(-1)^d\widetilde h_L(-1)=4^{d-1}\sum_v (-1)^d h_{\mathop{\rm Lk}_v}(-1).$$ The right hand side is non-negative by the Charney-Davis Conjecture applied to the links, which are $(2d-1)$-dimensional spheres. This completes the proof. $\square$
Remark. The quantity $(-1)^d\widetilde h_L(-1)$ is equal to $\gamma_d(L)$, the top coefficient of the $\gamma$-polynomial introduced in \[G, Def. 2.1.4\]. The calculation proving that $\gamma_d(L)\geq 0$ provided the Charney-Davis Conjecture holds for links of all vertices in $L$ was mentioned in \[G, Cor. 2.2.2\] without relating it to the h-polynomial of $L$.
In view of the above the Charney-Davis Conjecture for even-dimensional spheres is essentially included in (though perhaps a dramatic restatement of) the Conjecture 2.1.7 in \[G\] which treats equally even- and odd-dimensional spheres and provides further strengthenings of the Charney-Davis Conjecture.
One may speculate about the geometric interpretation of $\widetilde h_L(-1)$. Note that presumably $\widetilde h_L$ is an h-polynomial of a $(2d-1)$-dimensional sphere. For example, if $L$ is an icosahedron, $\widetilde h_L$ is the h-polynomial of the decagon. On the other hand, the geometries of the Davis orbifolds for the icosahedron and for a suspension of the decagon are very different: the former is hyperbolic and the latter is a product. Thus there is no hope of relating $\widetilde h_L(-1)$ to $\ell^2$-torsion.
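The icosahedron/decagon example can be checked by direct computation. The sketch below (ours, not part of the note) converts an f-vector to an h-vector via the standard identity $h(t)=\sum_i f_{i-1}t^i(1-t)^{n-i}$ for an $(n-1)$-sphere, and divides out the factor $(1+t)$:

```python
from math import comb

def h_poly(f, n):
    """h-polynomial coefficients of an (n-1)-sphere from its f-vector,
    where f[0] = 1 counts the empty face: h(t) = sum_i f_{i-1} t^i (1-t)^{n-i}."""
    h = [0] * (n + 1)
    for i, fi in enumerate(f):
        for j in range(n - i + 1):          # expand (1-t)^{n-i} binomially
            h[i + j] += fi * (-1) ** j * comb(n - i, j)
    return h

def divide_by_one_plus_t(h):
    """Return h~ with h(t) = (1+t) h~(t); asserts the division is exact."""
    q, r = [], 0
    for c in h:
        q.append(c - r)
        r = q[-1]
    assert r == 0, "h-polynomial not divisible by (1+t)"
    return q[:-1]

h_ico = h_poly([1, 12, 30, 20], 3)       # icosahedron: [1, 9, 9, 1]
h_tilde = divide_by_one_plus_t(h_ico)    # [1, 8, 1]
h_dec = h_poly([1, 10, 10], 2)           # decagon: [1, 8, 1], matching h_tilde
```

Here $\widetilde h_L(-1)=1-8+1=-6$, so $(-1)^d\widetilde h_L(-1)=6\geq 0$ with $d=1$, as the Theorem requires.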
References
[\[CD\]]{} [R. Charney & M. Davis]{}, [*The Euler characteristic of a non-positively curved, piecewise Euclidean manifold*]{}, Pacific J. Math. [**171**]{} (1995), pp. 117–137.
[\[D\]]{} [Michael W. Davis]{}, [*The geometry and topology of Coxeter groups*]{}, [*London Mathematical Society Monographs Series*]{}, [**32**]{}, Princeton University Press, Princeton, [nj]{}, 2008.
[\[G\]]{} [Ś. R. Gal]{}, [*Real Root Conjecture fails for five and higher dimensional spheres*]{}, Discrete & Computational Geometry [**34**]{} (2005), pp. 269–284.
Mathematical Institute, Wrocław University, pl. Grunwaldzki 2/4, 50-384 Wrocław, Poland. [sgal@math.uni.wroc.pl]{}
[Tadeusz Januszkiewicz:]{} Department of Mathematics, The Ohio State University, 231 [w 18th Ave]{}, Columbus, [oh 43210, usa]{}, and the Mathematical Institute of Polish Academy of Sciences; on leave from Mathematical Institute, Wrocław University. [tjan@math.ohio-state.edu]{}
---
abstract: 'We introduce and demonstrate a variational autoencoder (VAE) for probabilistic non-negative matrix factorisation (PAE-NMF). We design a network which can perform non-negative matrix factorisation (NMF) and add in aspects of a VAE to make the coefficients of the latent space probabilistic. By restricting the weights in the final layer of the network to be non-negative and using the non-negative Weibull distribution we produce a probabilistic form of NMF which allows us to generate new data and find a probability distribution that effectively links the latent and input variables. We demonstrate the effectiveness of PAE-NMF on three heterogeneous datasets: images, financial time series and genomic data.'
address: 'University of Southampton, University Road, Southampton, UK'
author:
- 'Steven Squires, Adam Prügel-Bennett and Mahesan Niranjan'
title: 'A Variational Autoencoder for Probabilistic Non-Negative Matrix Factorisation'
---
non-negative matrix factorisation, variational autoencoder, dimensionality reduction.
Introduction {#sec:PAE-NMF:intro}
============
Non-Negative Matrix Factorization
---------------------------------
There has been a considerable increase in interest in NMF since the publication of the seminal work of [@lee1999learning] (although [@paatero1994positive] had studied the field earlier), in part because NMF tends to produce a sparse, parts-based representation of the data. This is in contrast to other dimensionality reduction techniques, such as principal component analysis, which tend to produce a holistic representation. The parts should represent features of the data; NMF can therefore represent the data as an addition of extracted features, a representation that may be considerably more interpretable than more holistic approaches.
Consider a data matrix $\textbf{V} \in \mathbb{R} ^{m \times n}$ with $m$ dimensions and $n$ data points which has only non-negative elements. If we define two matrices, also with only non-negative elements: $\textbf{W} \in \mathbb{R} ^{m \times r}$ and $\textbf{H} \in \mathbb{R} ^{r \times n}$, then non-negative matrix factorisation (NMF) can reduce the dimensionality of $\textbf{V}$ through the approximation $\textbf{V} \approx \textbf{WH}$ where, generally, $r<\min(m,n)$.
The columns of $\textbf{W}$ make up the new basis directions of the dimensions we are projecting onto. Each column of $\textbf{H}$ represents the coefficients of each data point in this new subspace. There are a range of algorithms to conduct NMF, most of them involving minimising an objective function such as $$\min_{\textbf{W},\textbf{H}}||\textbf{V}-\textbf{WH}||^2_{\text{F}} \text{ subject to } W_{i,j}\geq 0, H_{i,j}\geq 0.
\label{Rank_eq:frobenius}$$
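For reference, this objective can be minimised directly with the classic multiplicative update rules of Lee and Seung; a minimal NumPy sketch (the small constant `eps` guarding against division by zero is our addition):

```python
import numpy as np

def nmf(V, r, iters=200, eps=1e-9, seed=0):
    """Minimise ||V - WH||_F^2 by the Lee-Seung multiplicative updates.
    Entries stay non-negative because each update multiplies by a
    non-negative ratio; eps guards against division by zero."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because each update multiplies by a non-negative ratio, $\textbf{W}$ and $\textbf{H}$ remain non-negative automatically and the Frobenius error is non-increasing.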
Using Autoencoders for NMF
--------------------------
Several authors have studied the addition of extra constraints to an autoencoder to perform NMF [@lemme2012online; @ayinde2016visualizing; @hosseini2016deep; @smaragdis2017neural]. These methods show some potential advantages over standard NMF, including the implicit creation of the $\textbf{H}$ matrix and straightforward adaptation to online techniques.
In Figure \[neuralNet1\] we show a representation of a one-hidden-layer autoencoder for performing NMF. We feed in a data-point $\textbf{v}\in \mathbb{R} ^{m \times 1}$; the latent representation is produced by $\textbf{h}=f(\textbf{W}_{1}\textbf{v})$, where $f$ is an element-wise (linear or non-linear) function with non-negative outputs, $\textbf{h}\in \mathbb{R}^{r \times 1}$ is the representation in the latent space and $\textbf{W}_{1}\in \mathbb{R}^{r \times m}$ contains the weights of the first layer. It is also possible to add layers to make a deeper network with multiple hidden layers before arriving at the constriction layer which produces the $\textbf{h}$ output. The final set of weights must be kept non-negative, with an identity activation function, such that $\textbf{x}=\textbf{W}_{f}\textbf{h}$ where $\textbf{W}_{f}\in \mathbb{R} ^{m \times r}$. The final weights of the autoencoder can then be interpreted as the dimensions of the new subspace, with the elements of $\textbf{h}$ as the coefficients in that subspace. The network is trained so that $\textbf{x}\approx\textbf{v}$.
![Diagram of an autoencoder designed to perform NMF. The weights of the final layer, $\textbf{W}_f$, become the directions of the subspace, with the outputs of the hidden layer, $\textbf{h}$, as the coefficients in that new subspace. The activation function that produces $\textbf{h}$ must be non-negative as must all the elements of $\textbf{W}_f$.[]{data-label="neuralNet1"}](NeuralNet1)
Variational Autoencoders
------------------------
NMF and autoencoders both produce a lower dimensional representation of some input data. However, neither produces a probability model, just a deterministic mapping, and it is not obvious how to generate new data from these lower dimensional spaces. Generative models solve both of these problems, providing a probability distribution linking the input and latent spaces while enabling new data to be created. One of the most popular recent generative models is the variational autoencoder (VAE) [@kingma2013auto; @rezende2014stochastic]. A VAE is a probabilistic model which utilises the autoencoder framework of a neural network to find the probabilistic mappings from the input to the latent layers and on to the output layer. Unlike a standard autoencoder, the VAE finds a distribution between the latent and seen variables, which also enables the production of new data by sampling from the latent distributions.
Probabilistic Non-Negative Matrix Factorisation
-----------------------------------------------
Several authors have presented versions of NMF with a probabilistic element [@paisley2014bayesian; @cegmil2008bayesian; @schmidt2009bayesian] which involve the use of various sampling techniques to estimate posteriors. Other work has been done using hidden Markov models to produce a probabilistic NMF [@mohammadiha2013gamma] and utilising probabilistic NMF to perform topic modelling [@luo2017probabilistic]. However, to our knowledge no one has utilised the ideas behind the VAE to perform NMF.
Although probabilistic methods for NMF have been developed, even a full Bayesian framework still faces the problem that, for the vast majority of problems where NMF is used, we have little idea of the appropriate prior. We would therefore be forced to do model selection, or to introduce hyperparameters and perform inference (maximum likelihood or Bayesian) based on the evidence. However, as the posterior in such cases is unlikely to be analytic, this would probably involve highly time-consuming Monte Carlo sampling. In doing so we would expect to get results close to those we obtain using PAE-NMF. However, for machine learning algorithms to be of value they must be practical. Our approach, following a minimum description length methodology, provides a principled method for achieving automatic regularisation. Because it fits within the framework of deep learning it is relatively straightforward and quick to implement (using software such as Keras or PyTorch with built-in automatic differentiation, fast gradient descent algorithms, and GPU support). In addition, our approach provides a considerable degree of flexibility (e.g. in continuous updating or including exogenous data), which we believe would be much more complicated to achieve in a fully probabilistic approach.
The next section lays out the key alterations needed to a VAE to allow it to perform NMF and is our main contribution in this paper.
PAE-NMF {#PAE_NMF_sec:PAE}
=======
The model proposed in this paper provides advantages both to VAEs and to NMF. For VAEs, by forcing a non-negative latent space we inherit many of the beneficial properties of NMF; namely, we find representations that tend to be sparse and often capture a parts-based representation of the objects being represented. For NMF, we introduce a probabilistic representation of the vectors $\textbf{h}$ which models the uncertainty in the parameters of the model due to the limited data.
Ideas Behind PAE-NMF
--------------------
[@kingma2013auto] proposed the VAE; their aim was to perform inference where the latent variables have intractable posteriors and the data-sets are too large to manage easily. Two of their contributions are showing that the “reparameterization trick” allows the use of standard stochastic gradient descent methods through the autoencoder and that the intractable posterior can be estimated.
In a standard autoencoder we take some data point $\textbf{v}\in \mathbb{R}^{m}$ and run it through a neural network to produce a latent variable $\textbf{h}=f(\textbf{v})\in\mathbb{R}^r$, where $r<m$ and $f$ is some non-linear element-wise function produced by the neural network. This is the encoding part of the network. We then run $\textbf{h}$ through another neural network to produce an output $\textbf{\^{v}}=g(\textbf{h})\in \mathbb{R}^{m}$, which is the decoding part of the network. The hope is that, because $r<m$, the latent variables will capture real structure in the data.
A standard VAE differs in that, instead of encoding a deterministic variable $h$, we find the mean, $\mu$, and variance, $\sigma^2$, of a Gaussian distribution. To generate new data we can then sample from this distribution to get $h$ and run $h$ through the decoding part of the network. PAE-NMF utilises the same objective function as a standard VAE, so for each data-point we are minimising:
$$\begin{aligned}
\text{obj}=&\mathbb{E}_{q_{\phi}(\textbf{h}|\textbf{v})}\left(\strut-\log\left(p_{\theta}(\textbf{v}|\textbf{h})\strut\right)\right)+\text{D}_\text{KL}(q_{\phi}(\textbf{h}|\textbf{v})||p(\textbf{h})) \\
\approx& \frac{1}{2\sigma^2}||\textbf{v}-\textbf{\^{v}}||^{2}+\text{D}_\text{KL}(q_{\phi}(\textbf{h}|\textbf{v})||p(\textbf{h}))
\label{objFun}\end{aligned}$$
where $\text{D}_\text{KL}$ is the KL divergence, $\phi$ and $\theta$ represent the parameters of the encoder and decoder, respectively, and $\textbf{v}$ is an input vector with $\textbf{\^v}$ the reconstructed vector, created by sampling from $q_{\phi}(\textbf{h}|\textbf{v})$ and running the resulting $\textbf{h}$ through the decoding part of the network. The first term represents the reconstruction error between the original and recreated data-point, which we assume to be approximately Gaussian. The second term is the KL divergence between our prior expectation, $p(\textbf{h})$, of the distribution of $\textbf{h}$ and the representation created by the encoding part of our neural network, $q_{\phi}(\textbf{h}|\textbf{v})$. We can interpret this as a regularisation term: it prevents much of the probability density being located far from the origin, assuming we select a sensible prior. Another way of looking at it is that it prevents the distributions for different datapoints being very different from one another, as they are all forced to remain close to the prior distribution.
This objective function can also be interpreted as the amount of information needed to communicate the data [@kingma2014semi]. We can think of vectors in latent space as code words. Thus to communicate our data, we can send a code word and an error term (as the errors fall in a more concentrated distribution than the original message they can be communicated more accurately). The log-probability term can be interpreted as the message length for the errors. By associating a probability with the code words we reduce the length of the message needed to communicate the latent variables (intuitively we can think of sending the latent variables with a smaller number of significant figures). The KL divergence (or relative entropy) measures the length of code to communicate a latent variable with probability distribution $q_{\phi}(\textbf{h}|\textbf{v})$. Thus by minimising the objective function we learn a model that extracts all the useful information from the data (i.e. information that allows the data to be compressed), but will not over-fit the data. This interpretation shares considerable similarity with previous work which used a model of the minimum description length within the objective function when performing matrix factorisation [@squires2019minimum].
Structure of the PAE-NMF
------------------------
The structure of our PAE-NMF is given in Figure \[P\_AE\_NMF:diagram2\]. The input is fed through an encoding neural network which produces two vectors, $\textbf{k}$ and $\boldsymbol\lambda$, which are the parameters of the $q_{\phi}(\textbf{h}|\textbf{v})$ distribution. The latent vector for that data-point, $\textbf{h}$, is then created by sampling from that distribution. However, this causes a problem when training the network, because backpropagation requires the ability to differentiate through the entire network. In other words, during backpropagation we need to be able to find $\frac{\partial h_i}{\partial k_i}$ and $\frac{\partial h_i}{\partial\lambda_i}$, where $i$ refers to a dimension. If we sample from the distribution we cannot compute these derivatives. In variational autoencoders this problem is removed by the “reparameterization trick” [@kingma2013auto], which pushes the stochasticity to an input node, which does not need to be differentiated through, rather than leaving it in the middle of the network. A standard variational autoencoder uses Gaussian distributions. For a univariate Gaussian the trick is to turn the latent variable $h\sim q(h|v)=\mathcal{N}(\mu,\sigma^2)$, which cannot be differentiated through, into $h=\mu+\sigma\epsilon$ where $\epsilon\sim\mathcal{N}(0,1)$. The parameters $\mu$ and $\sigma$ are both deterministic and can be differentiated through, and the stochasticity is added from outside.
We cannot use the Gaussian distribution because we need the $h_i$ terms to be non-negative to fulfil the requirement of NMF. There are several choices of probability distribution which produce only non-negative samples; we use the Weibull distribution for reasons detailed later in this section. We can sample from the Weibull distribution using its inverse cumulative distribution function and a uniform variate supplied at an input node. In Figure \[P\_AE\_NMF:diagram2\] we display this method of imposing stochasticity from an input node through $\boldsymbol\epsilon$. As when using a standard autoencoder to perform NMF, the same restrictions, such as $\textbf{W}_f$ being forced to stay non-negative, apply to the PAE-NMF.
![General structure of the PAE-NMF with stochasticity provided by the input vector $\boldsymbol\epsilon$.[]{data-label="P_AE_NMF:diagram2"}](diagram2new.eps)
Details of the PAE-NMF
----------------------
In this paper we have utilised the Weibull distribution which has a probability density function (PDF) of
$$f(x) =
\begin{cases}
\frac{k}{\lambda}\big(\frac{x}{\lambda}\big)^{k-1}\exp{(-(x/\lambda)^k)} & \text{if } x \geq 0 \\
0 & \text{if } x < 0
\end{cases}$$
with shape parameter $k$ and scale parameter $\lambda$. The Weibull distribution satisfies our requirements: the PDF is zero below $x=0$, falls towards 0 as $x$ becomes large, and is flexible enough to enable various shapes of distribution. For each data point there will be $r$ Weibull distributions generated, one for each of the subspace dimensions.
To perform PAE-NMF we need an analytical form of the KL divergence, so that we can differentiate the objective function, and a way of extracting samples using some outside source of stochasticity. The KL divergence between two Weibull distributions is given by [@bauckhage2014computing]
$$\begin{split}
D_{KL}(F_1||F_2) =&\int^\infty_0f_1(x|k_1,\lambda_1)\log\left(\frac{f_1(x|k_1,\lambda_1)}{f_2(x|k_2,\lambda_2)}\right)dx \\
=&\log\!\left(\frac{k_1}{\lambda_1^{k_1}}\right)-\log\!\left(\frac{k_2}{\lambda_2^{k_2}}\right)+(k_1-k_2)\left[\log\!\left(\lambda_1\right)-\frac{\gamma}{k_1}\right] \\&+\left(\frac{\lambda_1}{\lambda_2}\right)^{k_2}\Gamma\left(\frac{k_2}{k_1}+1\right)-1
\end{split}$$
where $\gamma\approx0.5772$ is the Euler-Mascheroni constant and $\Gamma$ is the gamma function.
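As a sanity check, the closed form above can be transcribed directly; a minimal sketch (the function name is ours):

```python
import math

def kl_weibull(k1, lam1, k2, lam2):
    """D_KL(F1 || F2) for F1 = Weibull(k1, lam1), F2 = Weibull(k2, lam2),
    transcribed term by term from the closed form above."""
    euler_gamma = 0.5772156649015329  # Euler-Mascheroni constant
    return (math.log(k1 / lam1 ** k1)
            - math.log(k2 / lam2 ** k2)
            + (k1 - k2) * (math.log(lam1) - euler_gamma / k1)
            + (lam1 / lam2) ** k2 * math.gamma(k2 / k1 + 1.0)
            - 1.0)
```

Setting $k_1=k_2$ and $\lambda_1=\lambda_2$ makes the log terms cancel and the last two terms give $\Gamma(2)-1=0$, as the KL divergence of a distribution with itself must.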
In other probabilistic treatments of NMF the gamma distribution has been used [@squires2017rank]. The reason we have chosen the Weibull distribution is that, while it is possible to apply variations on the reparameterization trick to the gamma distribution [@figurnov2018implicit], it is simpler to use the Weibull distribution with inverse transform sampling. To sample from the Weibull distribution all we need is the inverse cumulative function, $C^{-1}(\epsilon)=\lambda(-\ln(\epsilon))^{1/k}$. We generate a uniform random variable $\epsilon$ at an input node and then sample from the Weibull distribution with parameters $\lambda$ and $k$ via $C^{-1}(\epsilon)$. So, referring to Figure \[P\_AE\_NMF:diagram2\], we have $\epsilon_i\sim\mathcal{U}(0,1)$ and each dimension of $\textbf{h}$ is found by $h_i=C^{-1}_i(\epsilon_i)$.
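A minimal sketch of this inverse-transform sampler (ours), checked against the known Weibull mean $\lambda\,\Gamma(1+1/k)$:

```python
import math, random

def sample_weibull(k, lam, eps):
    """Reparameterised Weibull draw: C^{-1}(eps) = lam * (-ln eps)^{1/k}.
    The draw is a deterministic function of (k, lam), so gradients can flow
    through the parameters; all stochasticity enters via eps ~ U(0, 1)."""
    return lam * (-math.log(eps)) ** (1.0 / k)

random.seed(0)
# 1 - random() lies in (0, 1], avoiding log(0)
draws = [sample_weibull(2.0, 1.0, 1.0 - random.random()) for _ in range(100000)]
mean = sum(draws) / len(draws)   # approaches Gamma(1.5) for k=2, lam=1
```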
It is worth considering exactly how and where PAE-NMF differs from standard NMF. From Figure \[neuralNet1\] we see that, as far as the outputs of interest ($\textbf{W}$ and $\textbf{H}$) are concerned, what occurs before the constriction layer is fairly unimportant: the aim of the design before that point is to allow the network to find the representation that achieves the lowest value of the objective function. There are effectively two parts which make this a probabilistic method: the choice of objective function and the fact that we sample from the distribution. Without those two parts, the link between the distribution parameters and $\textbf{h}$ would just be a non-linear function, which might do better or worse than any other choice but would not make this a probabilistic method.
Methodology {#PAE-NMF:sec:method}
-----------
We now have the structure of our network in Figure \[P\_AE\_NMF:diagram2\] and the distribution (Weibull) that we are using. The basic flow through the network with one hidden layer, for an input datapoint $\textbf{v}$, looks like:
$$\begin{aligned}
\boldsymbol{\lambda}=f(\textbf{W}_{\boldsymbol\lambda}\textbf{v}), \quad
\boldsymbol{k}=g(\textbf{W}_{\textbf{k}}\textbf{v}), \quad
\textbf{h}=C_{{\boldsymbol\lambda},\textbf{k}}^{-1}(\boldsymbol\epsilon), \quad
\textbf{\^{v}}=\textbf{W}_f\textbf{h}\end{aligned}$$
where $f$ and $g$ are non-linear functions that work element-wise. The inverse cumulative function, $C^{-1}_{{\boldsymbol\lambda},\textbf{k}}$, also works element-wise.
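The flow above can be sketched in NumPy as follows. This is an illustration only: the layer sizes, weight scales and the small positive floor on $\boldsymbol\lambda$ and $\textbf{k}$ (a ReLU output of exactly zero would not give a valid Weibull distribution) are our assumptions, not choices made in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, r = 361, 9                    # 19x19 inputs, r latent dimensions (illustrative)
v = rng.random(m)                # one non-negative input data-point

W_lam = rng.normal(0.0, 0.1, (r, m))        # encoder head for the scale parameters
W_k = rng.normal(0.0, 0.1, (r, m))          # encoder head for the shape parameters
W_f = np.abs(rng.normal(0.0, 0.1, (m, r)))  # decoder weights, kept non-negative

def positive(x, floor=1e-3):
    # ReLU plus a small floor: Weibull parameters must be strictly positive
    # (the floor is our assumption; the paper does not specify this detail).
    return np.maximum(x, 0.0) + floor

lam = positive(W_lam @ v)              # lambda = f(W_lambda v)
k = positive(W_k @ v)                  # k = g(W_k v)
eps = rng.uniform(1e-9, 1.0, r)        # external stochasticity at an input node
h = lam * (-np.log(eps)) ** (1.0 / k)  # h = C^{-1}(eps), element-wise
v_hat = W_f @ h                        # reconstruction, non-negative by construction
```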
There is a range of choices to make for this network; to demonstrate the value of our technique we have kept them as simple as possible. We use rectified linear units (ReLU) as the activation function, as these are currently the most popular choice [@nair2010rectified]. To keep the network simple we have used only one hidden layer. We update the weights and biases using gradient descent. We keep the $\textbf{W}_f$ values non-negative by setting any negative terms to zero after updating the weights. We experimented with using multiplicative updates [@lee1999learning] for the $\textbf{W}_f$ weights but found no clear improvement. We use the whole data-set in each batch, as in standard NMF. We initialise the $\textbf{W}_f$ weights using the $\textbf{W}$ found from standard NMF, with added noise. We use a prior of $k=1$ and $\lambda=1$. We calculate the subspace size individually for each data-set using the method of [@squires2017rank]. The learning rates are chosen by trial and error.
To demonstrate the use of our technique we have tested it on three heterogeneous data-sets, summarised in Table \[table1\]. The faces data-set is a group of $19\times 19$ grey-scale images of faces. The genes data-set contains the 5000 gene expressions of 38 samples from a leukaemia data-set, and the FTSE 100 data comprises the share prices of 94 different companies over 1305 days.
  **Name**   **Type**     *m*    *n*    **Source**
  ---------- ------------ ------ ------ -----------------------------------------------------------
  Faces      Image        361    2429   http://cbcl.mit.edu/software-datasets/FaceData2.html
  Genes      Biological   5000   38     http://www.broadinstitute.org/cgi-bin/cancer/datasets.cgi
  FTSE 100   Financial    1305   94     Bloomberg information terminal

  : Data-set names, data type, number of dimensions $m$, number of data-points $n$, and source.[]{data-label="table1"}
Results and Discussion
======================
First we demonstrate that PAE-NMF produces a reasonable recreation of the original data. We would expect the accuracy of the reconstruction to be worse than for standard NMF, because we impose the KL divergence term and we sample from the distribution rather than taking the latent outputs deterministically. These two impositions should help to reduce overfitting, at the cost of a higher reconstruction error.
In Figure \[ICLR:fig3\] we show recreations of the original data for the three data-sets from Table \[table1\]. The faces plots show five original images chosen at random above the recreated versions (with $r=81$). The bottom left plot shows nine stocks from the FTSE 100, with the original data as a black dashed line and the recreated results as a red dotted line ($r=9$); the results follow the trend well and appear to ignore some of the noise in the data. The bottom right plot shows 1000 elements of the recreated matrix plotted against the equivalent original elements ($r=3$). There is significant noise in this data, but there is also a clear trend; the black line shows what the results would be if the recreation were perfect.
![(Top) Top five are original faces with the equivalent recreated faces below. (Bottom left) Nine stocks with original values (black dashed) and recreated values (red dotted). (Bottom right) 1000 recreated elements plotted against the equivalent original.[]{data-label="ICLR:fig3"}](ICLRFigFacesRecreate.eps "fig:") ![(Top) Top five are original faces with the equivalent recreated faces below. (Bottom left) Nine stocks with original values (black dashed) and recreated values (red dotted). (Bottom right) 1000 recreated elements plotted against the equivalent original.[]{data-label="ICLR:fig3"}](ICLRFigFinRecreate.eps "fig:") ![(Top) Top five are original faces with the equivalent recreated faces below. (Bottom left) Nine stocks with original values (black dashed) and recreated values (red dotted). (Bottom right) 1000 recreated elements plotted against the equivalent original.[]{data-label="ICLR:fig3"}](ICLRFigGenesRecreate.eps "fig:")
Secondly, we want to look at the $\textbf{W}_f$ matrices (the weights of the final layer). In NMF the columns of $\textbf{W}$ represent the dimensions of the new subspace we are projecting onto. In many circumstances we hope these will be interpretable, which is one of the key features of NMF. In Figure \[ICLR:fig4\] we show the $\textbf{W}_f$ matrices for the faces data-set (left), where each of the 81 columns of $\textbf{W}_f$ has been converted into a $19\times 19$ image, and for the FTSE 100 data-set (right), where the nine columns are shown. The weights of the final layer do what we expect, in that these results are similar to those found using standard NMF [@lee1999learning]: the faces data-set produces a representation which can be considered to be parts of a face, and the FTSE 100 columns can be viewed as showing certain trends in the stock market.
![(Left) Each small image is one of the 81 reshaped columns of $\textbf{W}_f$ for the faces data-set. The features we see are very similar to what you get in standard NMF. (Right) Each plot is a column of $\textbf{W}_f$ for the FTSE 100 data-set.[]{data-label="ICLR:fig4"}](ICLRFigFacesW.eps "fig:") ![(Left) Each small image is one of the 81 reshaped columns of $\textbf{W}_f$ for the faces data-set. The features we see are very similar to what you get in standard NMF. (Right) Each plot is a column of $\textbf{W}_f$ for the FTSE 100 data-set.[]{data-label="ICLR:fig4"}](ICLRFigFinW.eps "fig:")
We now want to consider empirically the effect of the sampling and of the KL divergence term. In Figure \[ICLR:fig5\] we show the distributions of the latent space of one randomly chosen datapoint from the faces data-set. We use $r=9$ so that the distributions are easier to inspect. The left and right sets of plots show results with and without the KL divergence term respectively. The black dashed lines show results when $\textbf{h}$ is taken deterministically at the median of the distributions, and the blue lines show results when $\textbf{h}$ is sampled randomly. The inclusion of the KL divergence term has several effects. First, it reduces the scale of the distributions: the values in the left-hand plots are significantly lower, which has the effect of moving the distributions closer together in space. The imposition of randomness through the uniform random variable $\boldsymbol\epsilon$ has the effect of reducing the variance of the distributions. With the KL divergence term the distributions stay fairly close to the prior, although once stochasticity is added they can no longer match it exactly, because a widely spread sampling distribution makes effective learning harder. Without the KL divergence term the network tightens each distribution so that its spread is as small as possible; in the limit each distribution collapses towards a point, which would return us towards standard NMF. Requiring both the KL divergence term and the stochasticity means that we obtain proper distributions, and the results are prevented from simply reducing to a standard NMF formulation.
![The distributions of $\textbf{h}$ for one data-point from the faces data-set with $r=9$. (Left) These plots show the distributions when we include the $D_{KL}$ term. (Right) The $D_{KL}$ term is not applied during training. The black dashed line shows results when we train the network deterministically using the median value of the distribution and the blue line is when we trained with random samples.[]{data-label="ICLR:fig5"}](ICLRFig12a.eps "fig:") ![The distributions of $\textbf{h}$ for one data-point from the faces data-set with $r=9$. (Left) These plots show the distributions when we include the $D_{KL}$ term. (Right) The $D_{KL}$ term is not applied during training. The black dashed line shows results when we train the network deterministically using the median value of the distribution and the blue line is when we trained with random samples.[]{data-label="ICLR:fig5"}](ICLRFig12b.eps "fig:")
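The sampling mechanism just described can be sketched concretely. The latent family is not restated in this section, so we assume a Weibull distribution purely for illustration; its inverse CDF is available in closed form, which is what makes the reparameterisation through a uniform $\boldsymbol\epsilon$ cheap:

```python
import numpy as np

def sample_h(k, lam, eps):
    """Reparameterised draw from an assumed Weibull(k, lam) latent: push a
    uniform eps through the inverse CDF, so the randomness is external to
    the parameters (k, lam) being learned."""
    return lam * (-np.log1p(-eps)) ** (1.0 / k)

rng = np.random.default_rng(0)
k, lam = 2.0, 0.5                              # illustrative parameters
h = sample_h(k, lam, rng.random(100_000))      # stochastic draws
h_det = sample_h(k, lam, 0.5)                  # deterministic "median" draw
median = lam * np.log(2.0) ** (1.0 / k)        # closed-form Weibull median
```

Setting $\epsilon=0.5$ recovers the median of the distribution exactly, corresponding to the deterministic (black dashed) curves described above.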
A potentially valuable effect of PAE-NMF is that we would expect similar data-points to produce similar distributions. In Figure \[ICLR:Fig6\] we show the distributions of the $\textbf{h}$ vectors for two pairs of faces; the two faces within each pair are similar to one another and very different from the faces in the other pair. Next to them we plot the distributions: those for the two faces on the left are the black dashed lines, and those for the two faces on the right are the red dotted lines. There is a very clear similarity in distribution between the similar faces, and significant differences between the dissimilar pairs.
![The left images (top and bottom) are similar to one another and we plot their distributions as black dashed lines in the plots to the right. The right images are very different to the left, we plot their distributions as red dotted lines. The similar images have very similar distributions for these data-points.[]{data-label="ICLR:Fig6"}](ICLRFig4a.eps "fig:") ![The left images (top and bottom) are similar to one another and we plot their distributions as black dashed lines in the plots to the right. The right images are very different to the left, we plot their distributions as red dotted lines. The similar images have very similar distributions for these data-points.[]{data-label="ICLR:Fig6"}](ICLRFig4b.eps "fig:")
Finally, we want to discuss the generation of new data using this model. We show results for the faces and FTSE 100 data-sets in Figure \[ICLR:Fig7\]. For the faces we use a low value of $r=9$ and show four random faces (left column) next to their median reconstructions, where the median $\textbf{h}$ is drawn deterministically from the mid-point of the inverse cumulative distribution. The three image columns to the right are then drawn randomly from the distributions. Four stocks from the FTSE 100 data are shown on the right of the figure; the solid lines are the original data and the dotted lines represent sampled data. This ability to sample directly from the distributions and produce plausible new data-points is one of the most useful features of PAE-NMF over standard NMF.
![(Left) Sampling from the distributions of the faces data-set with $r=9$. Four original faces are on the left column, with faces drawn deterministically from the centre of the distribution next to them and three sampled faces along the next three columns. (Right) Sampling from the FTSE 100 data-set with $r=9$ for four different stocks. The solid black line is the real data and the dotted lines show three sampled versions.[]{data-label="ICLR:Fig7"}](ICLRFig5.eps "fig:") ![(Left) Sampling from the distributions of the faces data-set with $r=9$. Four original faces are on the left column, with faces drawn deterministically from the centre of the distribution next to them and three sampled faces along the next three columns. (Right) Sampling from the FTSE 100 data-set with $r=9$ for four different stocks. The solid black line is the real data and the dotted lines show three sampled versions.[]{data-label="ICLR:Fig7"}](ICLRFigFinLast.eps "fig:")
Summary
=======
We have demonstrated a novel method of probabilistic NMF using a variational autoencoder. This model extends NMF by providing uncertainties on our $\textbf{h}$ vectors, a principled form of regularisation, and the ability to sample new data. The advantage over a VAE is that we should see improved interpretability due to the sparse and parts-based nature of the representation formed, especially in that we can interpret the columns of $\textbf{W}_f$ as the dimensions of the projected subspace.
Our method extracts the useful information from the data without over-fitting due to the combination of the log-probability with the KL divergence. While the log-probability works to minimise the error, the KL divergence term acts to prevent the model over-fitting by forcing the distribution $q_{\phi}(\textbf{h}|\textbf{x})$ to remain close to its prior distribution. This provides a principled regularisation mechanism. Other alternatives for probabilistic NMF still require the choice of an appropriate prior. This could be done using model selection through the evidence term, but this would require computing the full posterior, which is likely to require techniques such as Markov Chain Monte Carlo and could be prohibitively time consuming.
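To make the regularisation mechanism concrete: if no closed form is used for $D_{KL}$, it can be estimated by Monte Carlo with samples drawn from $q_{\phi}(\textbf{h}|\textbf{x})$ itself. The sketch below assumes Weibull densities for both the posterior and the prior (an assumption for illustration; closed forms for related non-negative families exist, cf. the Bauckhage entry in the references):

```python
import numpy as np

def weibull_logpdf(x, k, lam):
    """Log-density of a Weibull(k, lam) distribution at x > 0."""
    return np.log(k / lam) + (k - 1.0) * np.log(x / lam) - (x / lam) ** k

def mc_kl(k, lam, k0, lam0, n=200_000, seed=0):
    """Monte Carlo estimate of D_KL(q || p) between two Weibull densities,
    sampling from q by the inverse-CDF reparameterisation."""
    rng = np.random.default_rng(seed)
    u = np.clip(rng.random(n), 1e-12, None)        # avoid x = 0 exactly
    x = lam * (-np.log1p(-u)) ** (1.0 / k)
    return np.mean(weibull_logpdf(x, k, lam) - weibull_logpdf(x, k0, lam0))

kl_same = mc_kl(2.0, 0.5, 2.0, 0.5)   # q identical to p: zero pointwise
kl_diff = mc_kl(2.0, 0.5, 1.0, 1.0)   # q away from the prior: positive
```

With identical distributions the estimate is exactly zero, and it grows as $q_{\phi}$ drifts away from the prior, which is precisely the pressure that prevents collapse back towards standard NMF.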
Another advantage of this approach is that, as the objective function measures the description length, we could use it to select the appropriate size for the latent space. This would provide an alternative to other description length methods [@squires2017rank]. However, as the distribution $q_{\phi}(\textbf{h}|\textbf{x})$ provides self-regularisation, the choice of latent-space size is likely to be less critical.
Babajide O Ayinde, Ehsan Hosseini-Asl, and Jacek M Zurada. Visualizing and understanding nonnegativity constrained sparse autoencoder in deep learning. In *International Conference on Artificial Intelligence and Soft Computing*, pp. 3–14. Springer, 2016.
Christian Bauckhage. Computing the kullback-leibler divergence between two generalized gamma distributions. *arXiv preprint arXiv:1401.6853*, 2014.
A Taylan Cemgil. Bayesian inference in non-negative matrix factorization models. *Computational Intelligence and Neuroscience*, 2008.
Michael Figurnov, Shakir Mohamed, and Andriy Mnih. Implicit reparameterization gradients. *arXiv preprint arXiv:1805.08498*, 2018.
Ehsan Hosseini-Asl, Jacek M Zurada, and Olfa Nasraoui. Deep learning of part-based representation of data using sparse autoencoders with nonnegativity constraints. *IEEE Transactions on Neural Networks and Learning Systems*, 27(12): 2486–2498, 2016.
Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013.
Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In *Advances in Neural Information Processing Systems*, pp. 3581–3589, 2014.
Daniel D Lee and H Sebastian Seung. Learning the parts of objects by non-negative matrix factorization. *Nature*, 401(6755): 788–791, 1999.
Andre Lemme, René Felix Reinhart, and Jochen Jakob Steil. Online learning and generalization of parts-based image representations by non-negative sparse autoencoders. *Neural Networks*, 33: 194–203, 2012.
Minnan Luo, Feiping Nie, Xiaojun Chang, Yi Yang, Alexander G Hauptmann, and Qinghua Zheng. Probabilistic non-negative matrix factorization and its robust extensions for topic modeling. In *AAAI*, pp. 2308–2314, 2017.
Nasser Mohammadiha, W Bastiaan Kleijn, and Arne Leijon. Gamma hidden markov model as a probabilistic nonnegative matrix factorization. In *Signal Processing Conference (EUSIPCO), 2013 Proceedings of the 21st European*, pp. 1–5. IEEE, 2013.
Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In *Proceedings of the 27th international conference on machine learning (ICML-10)*, pp. 807–814, 2010.
Pentti Paatero and Unto Tapper. Positive matrix factorization: A non-negative factor model with optimal utilization of error estimates of data values. *Environmetrics*, 5(2): 111–126, 1994.
John Paisley, D Blei, and Michael I Jordan. Bayesian nonnegative matrix factorization with stochastic variational inference. *Handbook of Mixed Membership Models and Their Applications. Chapman and Hall/CRC*, 2014.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. *arXiv preprint arXiv:1401.4082*, 2014.
Mikkel N Schmidt, Ole Winther, and Lars Kai Hansen. Bayesian non-negative matrix factorization. In *International Conference on Independent Component Analysis and Signal Separation*, pp. 540–547. Springer, 2009.
Paris Smaragdis and Shrikant Venkataramani. A neural network alternative to non-negative audio models. In *Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on*, pp. 86–90. IEEE, 2017.
Steven Squires, Adam Prügel-Bennett, and Mahesan Niranjan. Rank selection in nonnegative matrix factorization using minimum description length. *Neural Computation*, 2017.
Steven Squires, Adam Prügel-Bennett, and Mahesan Niranjan. Minimum description length as an objective function for non-negative matrix factorization. 2019.
---
abstract: 'The data analysis software (DAS) for VLT ESPRESSO aims to set a new benchmark in the treatment of spectroscopic data towards the extremely-large-telescope era, providing carefully designed, fully interactive recipes to take care of complex analysis operations (e.g. radial velocity estimation in stellar spectra, interpretation of the absorption features in quasar spectra). A few months away from the instrument’s first light, the DAS is now mature for science validation, with most algorithms already implemented and operational. In this paper, I will showcase the DAS features which are currently employed on high-resolution HARPS and UVES spectra to assess the scientific reliability of the recipes and their range of application. I will give a glimpse of the science that will be possible when ESPRESSO data become available, with a particular focus on the novel approach that has been adopted to simultaneously fit the emission continuum and the absorption lines in the Lyman-alpha forest of quasar spectra.'
author:
- 'Guido Cupani,$^1$ Valentina D’Odorico,$^1$ Stefano Cristiani,$^1$ Jonay I. González Hernández,$^2$ Christophe Lovis,$^3$ Sérgio Sousa,$^4$ Paolo Di Marcantonio,$^1$ and Denis Mégevand$^3$'
bibliography:
- 'P1-13.bib'
title: Field tests for the ESPRESSO data analysis software
---
ESPRESSO in a nutshell {#espresso-in-a-nutshell .unnumbered}
======================
ESPRESSO [@2013Msngr.153....6P] is an ultra-stable, high-resolution spectrograph ($R\sim 55,000$ to $200,000$) for the coudé combined focus of the Very Large Telescope (VLT) of the European Southern Observatory (ESO). Its driving scientific objectives are (i) the search for Earth-like exoplanets and (ii) the exploration of new physics beyond the standard model, through a measure of the possible variation of the dimensionless constants $\alpha$ (fine-structure constant) and $\mu$ (proton-to-electron mass ratio). The latter science case depends on the accurate analysis of the absorption features produced by the inter-galactic and circum-galactic medium on the spectrum of background bright sources such as quasars (QSOs). The same analysis provides a valuable insight into the physical and chemical state of the baryonic matter from the reionization epoch onwards and into its interplay with galaxies.
Since the inception of its development, ESPRESSO has been conceived as a “science machine” able to produce scientific results within minutes from the end of observations. To this aim the instrument is equipped with dedicated software tools to handle both the data reduction and the data analysis, the latter covering both stellar and QSO spectral analysis [@2012SPIE.8448E..1OD; @2014SPIE.9149E..1QD]. The ESPRESSO Data Analysis Software (DAS) has been introduced in a series of previous articles [@2015MmSAI..86..502C; @2015ASPC..495..289C; @2016SPIE_cupani_1]. In this article we discuss the status of its development at the instrument integration stage and present some results of the first science assessment on test data, a few months before ESPRESSO is commissioned and sees its first light. We will focus in particular on the QSO spectral analysis, as it embraces some of the most interesting features of the DAS, both in its algorithms and in its interface.
The DAS concept {#the-das-concept .unnumbered}
===============
The ESPRESSO DAS is meant to set a benchmark in the treatment of spectroscopic data towards the ELT era, providing carefully designed, fully interactive recipes to take care of complex analysis operations. Those are (i) for stellar spectra: computation of the radial velocity, the stellar activity indexes, the equivalent width of absorption lines, and the stellar parameters (effective temperature, \[Fe$/$H\]); continuum fitting and re-computation of the radial velocity by comparison with synthetic spectra; (ii) for QSO spectra: detection of the absorption lines; determination of the emission continuum level; identification and fitting of the absorption systems.
Together with the DRS, the DAS enforces a “pixel conservation” paradigm, in which the information collected by the individual pixels of the detector is preserved throughout the reduction cascade [@2016SPIE_cupani_2]. Whenever the information from different pixels is merged, potentially disrupting the flux statistics (such as in re-binning and co-addition of multiple exposures), the software propagates the information both in merged and non-merged form, to allow for a correct assessment of theoretical models (such as Voigt profiles for absorption lines) with standard best-fit techniques.
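For readers less familiar with the line model referenced here: a Voigt profile is the convolution of a Gaussian (thermal or instrumental broadening) with a Lorentzian (natural broadening). A numpy-only numerical sketch with illustrative grid and widths (not DAS code, which fits analytic profiles):

```python
import numpy as np

# Build a Voigt profile numerically as Gaussian * Lorentzian (convolution).
x = np.linspace(-80.0, 80.0, 8001)            # velocity grid [km/s], illustrative
dx = x[1] - x[0]
sigma, gamma = 3.0, 1.5                       # assumed Gaussian/Lorentzian widths
gauss = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
lorentz = gamma / (np.pi * (x**2 + gamma**2))
voigt = np.convolve(gauss, lorentz, mode="same") * dx   # unit-area profile
area = voigt.sum() * dx                       # ~1, minus truncated Lorentzian tails
```

The convolved profile remains centred and close to unit area (up to the truncated Lorentzian tails), and is broader and lower-peaked than either component, which is why fitting it correctly is sensitive to the flux statistics preserved by the pixel conservation paradigm.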
The bulk of the DAS code is written in ANSI C. Most of the code is developed using the ESO Common Pipeline Library [@2004SPIE.5493..444M], which provides low-level tools for data handling; a library of higher-level functions has been designed to address individual tasks (e.g. spectral rebinning, curve smoothing, line fitting), which are organized into self-standing modules (“recipes”). The cascade of recipes is typically run within the ESO Reflex environment, which provides an intuitive workflow interface complemented with Python scripts to visualize the results and adjust the parameters.
| Comp. | $z$ | $\Delta z$ \[$10^{-6}$\] | $\log{N}$ | $\Delta\log{N}$ \[$10^{-2}$\] | $b$ \[km s$^{-1}$\] | $\Delta b$ \[$10^{-1}$ km s$^{-1}$\] |
|:-----:|:----------------:|:----:|:------------:|:----:|:-------------:|:----:|
| a | $2.220040\,(6)$ | $-1$ | $12.65\,(3)$ | | $9.5\,(9)$ | $-1$ |
| b | $2.220253\,(4)$ | $-1$ | $12.66\,(3)$ | | $6.7\,(6)$ | $-1$ |
| c | $2.220501\,(2)$ | | $13.48\,(2)$ | | $6.8\,(3)$ | $-2$ |
| d | $2.220645\,(2)$ | | $13.58\,(1)$ | | $5.8\,(2)$ | $-1$ |
| e | $2.221055\,(11)$ | $-2$ | $12.63\,(4)$ | $-1$ | $14.0\,(1.6)$ | $-5$ |
| f | $2.221322\,(3)$ | | $13.27\,(7)$ | | $6.8\,(5)$ | $+1$ |
| g | $2.221501\,(36)$ | $-3$ | $13.26\,(5)$ | $-1$ | $15.0\,(2.0)$ | $+1$ |
| h | $2.221619\,(3)$ | $+1$ | $13.07\,(7)$ | $+2$ | $5.5\,(6)$ | $-1$ |
: Line parameters fitted by the DAS on the paired components of the C <span style="font-variant:small-caps;">iv</span> doublet shown in Fig. \[line\_fit\], with the $1\sigma$ uncertainty of the last significant digit(s) in parentheses. The component IDs correspond to the labels in the figure. $z$: line redshift; $N$: Voigt-profile column density; $b$: Voigt-profile thermal broadening (turbulence broadening has been neglected). Next to each parameter, we list the difference (when measurable) between the values fitted by the DAS and those fitted by the ESO MIDAS FITLYMAN package. In all cases, this difference is well below the fit uncertainty.[]{data-label="fitlyman"}
Validating the analysis of QSO spectra {#validating-the-analysis-of-qso-spectra .unnumbered}
======================================
As of November 2016, eight internal releases of the DAS have been issued for verification by ESO. All recipes but one have been coded, and three out of four Reflex workflows (two for the stellar branch, one for the QSO branch) are already in operation. As the integration of the instrument progresses, the code is being validated both on reduced high-resolution spectra from VLT UVES and TNG HARPS-North (for scientific purposes) and on the first ESPRESSO test data processed by the DRS (to check the DRS/DAS interface). The first public release is foreseen for the instrument commissioning (2017).
The QSO branch of the DAS includes new algorithms to automatically fit the continuum emission and the absorption features produced by the intervening structures. By nature, such analysis can only be performed iteratively: the continuum level is determined by fitting and removing all the absorption lines, while the lines are fitted with respect to a previously determined continuum. In practice, the workflow operates as follows: (i) the continuum is first determined by making initial assumptions on the nature of the lines (distribution of column densities in the forest of Lyman-$\alpha$ absorbers; guesses on other Voigt parameters) and the spectrum is then normalized; (ii) associated lines (corresponding to different atomic transitions at the same redshift) are selected to define absorption systems; (iii) absorption systems are then fitted with Voigt profiles, adjusting the number of components and the constraints among line parameters. The information from line fitting can finally be used to refine the continuum estimation. Both continuum and line fitting are validated (by a $\chi^2$ test) on the non-rebinned spectra, according to the pixel conservation paradigm, to allow a correct modelling of the flux variance.
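A toy version of this iteration can be written down compactly: fit a smooth continuum, reject pixels that fall significantly below it (the absorption lines), and refit. All parameters below are illustrative choices for a synthetic spectrum, not DAS defaults:

```python
import numpy as np

rng = np.random.default_rng(0)
wave = np.linspace(4000.0, 4100.0, 2000)                # wavelength grid [A]
cont_true = 1.0 + 0.3 * np.sin(wave / 40.0)             # smooth emission continuum
flux = cont_true.copy()
for c in rng.uniform(4005.0, 4095.0, 15):               # 15 Gaussian absorption lines
    flux = flux * (1.0 - 0.8 * np.exp(-0.5 * ((wave - c) / 0.3) ** 2))
flux += rng.normal(0.0, 0.01, wave.size)                # photon noise

xw = (wave - 4050.0) / 50.0                             # rescale for a stable polyfit
mask = np.ones(wave.size, dtype=bool)
for _ in range(8):
    coeff = np.polyfit(xw[mask], flux[mask], deg=5)     # fit unmasked pixels only
    cont = np.polyval(coeff, xw)
    sigma = np.std((flux - cont)[mask])
    mask = flux - cont > -2.0 * sigma                    # reject absorbed pixels
norm = flux / cont                                       # normalised spectrum
```

On this synthetic spectrum the recovered continuum agrees with the input to within a few per cent, after which the normalised spectrum `norm` would be ready for line identification and Voigt fitting.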
Multiple tests have been conducted on observations of QSO HE 0940$-$1050 (observed with UVES). The automatic continuum estimation in the Lyman-$\alpha$ forest (which includes an estimation of the residual optical depth not accounted for by fitted lines, modelled from the distribution of the neutral hydrogen column density) is consistent with the results of visual estimation, providing in addition a $\chi^2$ goodness-of-fit assessment. Within the Reflex environment, we developed a user-friendly interface to line fitting which allows the user to interactively select the transitions and set up constraints among the Voigt parameters (Fig. \[line\_fit\]). Comparison with other packages for Voigt-profile fitting (such as the ESO MIDAS FITLYMAN package, @1995Msngr..80...37F; see Table \[fitlyman\]) shows perfect consistency. More tests are currently ongoing, taking advantage of a larger UVES test data set.
---
abstract: |
In the absence of constraints from the binary companion or supernova remnant, the standard method for estimating pulsar ages is to infer an age from the rate of spin-down. While the generic spin-down age may give realistic estimates for normal pulsars, it can fail for pulsars with very short periods. Details of the spin-up process during the low mass X-ray binary (LMXB) phase pose additional constraints on the period ($P$) and spin-down rates [($\dot{P}$) ]{}that may consequently affect the age estimate. Here, we propose a new recipe to estimate millisecond pulsar (MSP) ages that parametrically incorporates constraints arising from binary evolution and limiting physics. We show that the standard method can be improved by this approach to achieve age estimates closer to the true age while the standard spin-down age may overestimate or underestimate the age of the pulsar by more than a factor of $\sim$10 in the millisecond regime.
We use this approach to analyze the population on a broader scale. For instance, in order to understand the dominant energy loss mechanism after the onset of radio emission, we test for a range of plausible braking indices. We find that a braking index of n=3 is consistent with the observed MSP population. We demonstrate the existence and quantify the potential contributions of two main sources of age corruption: the previously known “age bias” due to secular acceleration and “age contamination” driven by sub-Eddington progenitor accretion rates. We explicitly show that descendants of LMXBs that have accreted at very low rates ($\dot{m}\ll \dot{M}_{Edd}$) will exhibit ages that appear older than the age of the Galaxy. We further elaborate on this technique, the implications and potential solutions it offers regarding MSP evolution, the underlying age distribution and the post-accretion energy loss mechanism. [^1]
author:
- 'Bülent Kızıltan & Stephen E. Thorsett'
title: 'Millisecond Pulsar Ages: Implications of Binary Evolution and a Maximum Spin Limit'
---
\[sec:intro\]Introduction
=========================
An accurate determination of pulsar ages plays a critical role in our understanding of advanced stages of stellar evolution, supernova explosions and remnants, white dwarf (WD) atmospheres and cooling models, binary evolution, planet formation around compact objects, and pulsar evolution in general.
Typically, pulsar ages are estimated by calculating the amount of energy lost during their spin-down. Consequently, the spin-down age of a pulsar can be formulated as: $$\label{eq:age}
\tau\; = \; \frac{P}{(n-1)\;\dot{P}}\left[1-\left(\frac{P_0}{P}\right)^{n-1}\right]$$ where the period ($P$) and the spin-down rate [($\dot{P}$) ]{}are the two main observables acquired by pulsar timing measurements. In the standard approach, the unknown initial spin period ($P_{0}$) of the pulsar is assumed to be much smaller ($P_{0}\ll P$) than the observed period. The dominant energy loss mechanism is analytically captured by the braking index, which is $n=3$ for pure dipole radiation and is implicitly adopted by the characteristic age $\tau_{c}$. The age of a pulsar can then be conveniently approximated by its characteristic age, where: $$\label{eq:cage}
\tau \longrightarrow \tau_{c}\equiv \frac{P}{2\;\dot{P}} \;\;{\rm for}\;\; P_{0}\ll P,\; n=3.$$
While this approach may give reliable estimates for some normal pulsars (e.g. Crab Pulsar-PSR B0531+21: $\tau_{c}\sim 1240$ yr whereas the age from the supernova remnant (SNR) $\tau_{SNR}\sim 955$ yr; [@WM77]), it should be kept in mind that characteristic ages for some other pulsars will suffer, dramatically in some cases (e.g. PSR J0205+6449: $\tau_{c}\sim 5370$ yr, $\tau_{SNR}\sim 820$ yr; [@MSS02]), because of the assumptions that render the standard approach less accurate, especially in the millisecond regime. Therefore, it is useful to develop a more comprehensive and detailed framework by which we can quantitatively understand MSP spin evolution.
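Equations (\[eq:age\]) and (\[eq:cage\]) are straightforward to evaluate. The sketch below reproduces the Crab figure quoted above from approximate published timing values, and illustrates how strongly a non-negligible $P_{0}$ suppresses the age of a millisecond pulsar (the MSP values are purely illustrative):

```python
SEC_PER_YR = 3.156e7  # seconds per year

def spindown_age(P, Pdot, P0=0.0, n=3):
    """Spin-down age of Equation (eq:age), in years. With P0 << P and n = 3
    this reduces to the characteristic age tau_c = P / (2 Pdot)."""
    return P / ((n - 1) * Pdot) * (1.0 - (P0 / P) ** (n - 1)) / SEC_PER_YR

# Crab pulsar, with approximate published timing values
# (P = 33.1 ms, Pdot = 4.23e-13 s/s): tau_c ~ 1240 yr.
tau_c_crab = spindown_age(0.0331, 4.23e-13)

# An MSP re-born close to its observed period (illustrative values):
# the spin-down age is far smaller than the characteristic age.
tau_c_msp = spindown_age(3.0e-3, 1.0e-20)             # assumes P0 << P
tau_msp = spindown_age(3.0e-3, 1.0e-20, P0=2.5e-3)    # P0 comparable to P
```

With $P_{0}=2.5$ ms and $P=3$ ms, the age is only $1-(P_{0}/P)^{2}\approx 31\%$ of the characteristic age, previewing the millisecond-regime failure discussed below.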
In our previous work, we laid out a framework by which we demonstrated that the proper inclusion of evolutionary constraints alone gives a deeper insight into the subsequent spin evolution [@KT09].
In this paper, we propose a recipe to estimate pulsar ages that parametrically incorporates additional evolutionary and physical constraints. We show that the combined effect of a possible spin-up process during the LMXB phase and a maximum spin period due to the limiting centrifugal forces imparts meaningful constraints on the joint period ($P$) and spin-down [($\dot{P}$) ]{}values that MSPs can attain, which ultimately are used to estimate their ages. We detail the contribution this new approach offers to our understanding of MSP evolution and elaborate on the ramifications on several related problems such as the dominant energy loss mechanism and braking indices of millisecond radio pulsars, WD atmospheres and cooling models, the underlying age distribution, the enigma of MSPs that appear older than the galaxy they reside in, and the sources of MSP age corruption.
\[sec:spin\] The Spin Up Process
================================
The period and spin-down [($P-\dot{P}$) ]{}relation of a particular MSP at the epoch when it turns on its radio emission can be scaled as $$\dot{P} \;\propto\; P_{0}^{4/3}, \label{eq:spin}$$ as the neutron star can ultimately be spun-up no further than the spin period delineated by the Keplerian velocity at the Alfvén radius [@GL92]. We will use the least conservative upper bound of this scaling factor as an upper limit for the region where MSPs may be re-born. A single birth line instead would imply that these MSPs have accreted at near-Eddington rates ($\dot{m}\simeq \dot{M}_{Edd}$), which appears unreasonable, at least for a significant fraction of the observed MSP population as indicated by the paucity of sources near the spin-up line (see $\S$\[sec:cont\] for discussion).
We also now know that the majority of the observed MSPs most likely do not initially re-appear in the vicinity of the spin-up line [@KT09]. The region where MSPs are re-born as radio sources on the [$P-\dot{P}$ ]{}diagram is strongly correlated with the dominant accretion rate ($\dot{m}$) experienced during the last phases of LMXB evolution, which is poorly constrained. For the scope of this paper, using the spin-up line as a marginal upper boundary for the region where MSPs are re-born as radio sources (i.e. where the spin-down trajectories on the [$P-\dot{P}$ ]{}plane start) will adequately account for possible nonlinear offsets due to other inherent uncertainties and assumptions made in Equation (\[eq:spin\]) regarding the accretion geometry (streams of hot plasma flowing onto the neutron stars’ polar caps instead of uniform spherical or wind accretion) or the opacity of the accreted material (sensitive to whether the companion has an H- or He-rich envelope attached to a CO or ONeMg core, see $\S$: \[sec:wd\]).
On the other hand, the cumulative uncertainty of a presumed spin-up line that would offset or tilt a particular re-birth line cannot be arbitrarily large [@ACW99; @FKR02]. Also, within the context of the standard recycling scenario [@ACR82; @RS82; @BH91 see for review], the “spin-up line” is merely an upper boundary below which MSPs are expected to be born, rather than the line of culmination. To this day, there are no recycled pulsars observed above the spin-up line, except for a few in globular clusters (GCs), which have very uncertain evolutionary histories. Therefore, we will limit ourselves to Galactic MSPs whose spin-down history, orbital dynamics and Galactic kinematics remain unperturbed by gravitational encounters.
\[sec:age\] Millisecond Pulsar Ages
===================================
The standard approach to estimate pulsar ages, in the absence of additional constraints from either a possible association to an SNR or a stellar companion, has been to use the characteristic age as a proxy to the true age. The main goal of our work is to better understand the non-trivial relationship between the physically important true age and the observationally accessible characteristic age. We will refer to the time that has passed since the cessation of accretion as the “(true) age: $\tau_{t}$” of a recycled pulsar.
\[sec:altmeth\]Alternative Methods to Estimate Pulsar Ages
----------------------------------------------------------
### \[sec:kin\]Ages From Kinematics and Supernova Remnants
For some young pulsars that have reliable proper motion and distance measurements, a kinematic age estimate can be made by tracing the pulsar’s trajectory in the galactic gravitational potential. However, without a firmly established birth site, kinematic ages are at best an indirect means to constrain pulsar ages.
For MSPs, that have no associated SNRs and have ages that are much longer than the orbital timescales in the Galactic potential, all kinematic age information has been lost.
### \[sec:wd\]White Dwarf Cooling Ages
After the discovery of optical emission from pulsar companions [@K86], WDs were soon recognized as an alternative means to estimate the age of an MSP. Once active accretion has ceased in LMXBs with a recycled pulsar primary, the WD burns off its remaining envelope and subsequently radiates away its internal heat. The beginning of WD cooling therefore also marks the epoch when spin-down starts for its companion. In principle, WD cooling ages are thus expected to be consistent with the ages of their MSP companions [@HP98.1; @HP98.2].
The basics of WD cooling models are potentially accessible to theoretical understanding because of the simple thermal structure of the WD. The whole system is kept isothermal due to the efficient heat conduction of degenerate electrons. However, in practice, cooling ages remain difficult to estimate once realistic (and complex) effects of surface physics and stellar structure are included, leading to discrepancies and controversies in the interpretation of specific observations [@W92; @SAM98; @SEG00; @SDB00; @ASB01.1; @ASB01.2; @KBJ05].
Possibly more promising than the original goal of using WD cooling ages to constrain the properties of MSPs, one might instead hope to use better constrained MSP ages ($\widetilde{\tau}$, see $\S$: \[sec:rea\]) to understand WD atmospheres and cooling models.
\[sec:cha\] Characteristic Ages: Idealized Pulsar Spin-down
-----------------------------------------------------------
For millisecond radio pulsars, unbiased[^2] characteristic ages ($\tau_{c}'$) become upper limits to the true ages ($\tau_{t}$) as the spin-down trajectories are truncated below the spin-up line. MSPs can be re-born on the spin-up line only if they accrete at an Eddington rate ($\dot{m}=\dot{M}_{Edd}$) during recycling (see Equation (\[eq:spin\])). Because of likely sub-Eddington accretion rates experienced during the LMXB phase [@KT09], the majority of the spin-down trajectories start well below the spin-up line. A considerable fraction of millisecond radio pulsars ($\sim$30%) would even be expected to be born below the Hubble line (see $\S$: \[sec:cont\] for discussion). In this approximation, $\tau_{c}$ is derived by implicitly assuming pure dipole spin-down in the absence of other forms of additional torques and braking that might affect the apparent age. Other potential spin-down torques during the early stages after recycling such as gravitational or multipole radiation [@B98; @K91] are also assumed to be absent when $\tau_{c}$ is inferred from the observed [$P-\dot{P}$ ]{}values. Possible non-monotonic field decay before magnetic stability sets in may also contribute to the corruption of characteristic ages.
One bias for which we can properly correct is the effect of secular acceleration, i.e., the Shklovskii effect [@S70]. The observed $\dot{P}$ values include an additional apparent spin-down term introduced by the increasing projected distance between the pulsar and the solar system barycenter. This leads to a quadratic centrifugal term [@CTK94], $$\begin{aligned}
\label{eq:pc}
\dot P_{s}= 1.08\times 10^{-18} \times \left( \frac{v_{t}} {100} \right)^{2} \times D_{kpc} ^{-1 }\times P\end{aligned}$$ which enters the observed spin-down rate as $$\label{eq:pdot}
\frac{\dot{P}}{P} \approx \frac{ \dot{P}' }{P} + \frac{v_{t}^2}{c D} \equiv \frac{\dot{P}' }{P} + \frac{\dot{P}_{s} }{P}$$ where $\dot{P}$ and $\dot{P}'$ are the measured and unbiased spin-down rates for a pulsar at a distance $D$ (in kpc) with a transverse velocity $v_{t}$ (in km s$^{-1}$).
Figure \[fig:obs\] shows the observed MSP population and the extent of the bias introduced by secular acceleration. MSPs that combine relatively high transverse velocities with small distances have correspondingly larger correction factors; e.g., for PSR J0034$-$0534 (D=0.98 kpc, $v_{t}\simeq$146.3 km s$^{-1}$) and PSR B1257+12 (D=0.77 kpc, $v_{t}\simeq$350.6 km s$^{-1}$) the correction factors are $\dot{P}/\dot{P}'\sim$9.3 and 16.9, respectively.
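As a small numerical sketch of Equations (\[eq:pc\]) and (\[eq:pdot\]) — the function and argument names below are our own choices, with only the coefficient taken from the text — the secular-acceleration term and the corresponding unbiased spin-down rate can be computed as:

```python
def shklovskii_pdot(P, v_t, D_kpc):
    """Apparent spin-down term from secular acceleration, Eq. [eq:pc].

    P     : spin period in seconds
    v_t   : transverse velocity in km/s
    D_kpc : distance in kpc
    """
    return 1.08e-18 * (v_t / 100.0) ** 2 / D_kpc * P

def unbias_pdot(P, Pdot_obs, v_t, D_kpc):
    """Unbiased (intrinsic) spin-down rate, Pdot' = Pdot - Pdot_s, Eq. [eq:pdot]."""
    return Pdot_obs - shklovskii_pdot(P, v_t, D_kpc)
```

For nearby, fast-moving MSPs the resulting correction factor $\dot{P}/\dot{P}'$ can reach an order of magnitude, as for the two pulsars quoted above.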
\[sec:rea\] A Realistic Age for Millisecond Pulsars ($\widetilde{\tau}$)
------------------------------------------------------------------------
One of the reasons why characteristic age estimates become less reliable for MSPs, even after the observed spin-down rates are unbiased for secular acceleration, is the assumption that the birth periods are much smaller than the currently observed spin periods (P$_{0}\ll$P), which fails for a considerable fraction of the population (see $\S$ \[sec:cont\]). [^3]
In fact, the predicted MSP spin-down age is proportional to the integrated (and normalized) spin-down path from P$_{0}$ to P. The difference in the integrated trajectories will depend on the initial spin period P$_{0}$, which remains an unknown parameter in most cases. When recycling and a limiting maximum spin are taken into account, the spin-down trajectory is more compressed than the standard approach predicts. Therefore, with tighter upper limits on the true age, $\widetilde{\tau}$ will give an age estimate that is closer to $\tau_{t}$.
Given enough time, MSPs may reach a limiting equilibrium phase where they begin to lose the angular momentum gained from accretion by shedding mass. A maximum spin limit beyond which a neutron star cannot be spun-up due to this continuous loss of excess angular momentum will truncate the spin-down trajectories vertically. Several authors [@HZ89; @FI92; @CST94] have constructed equilibrium sequences for neutron stars where a range of mass shedding periods are calculated for different equations of state including the effects of rapid rotation and large deviations from spherical symmetry. While the theoretical best-fit values range between P$_{sh}=$1.28 and 1.32 ms for neutron stars with realistic configurations, for some extreme cases they find that a limiting period of P$_{sh}\gtrsim$0.85 ms may be plausible. [@CMM03] find evidence for a statistically significant upper limit at P$_{sh}\simeq$1.32 ms. We therefore use a putative value of P$_{sh}\sim$1 ms and include it parametrically in our calculations.
The critical magnetic field B$_{c}$ will be the locus of points extending from the intersection of the diagonal spin-up line and the vertical mass shedding limit. MSPs with magnetic fields below B$_{c}$ can only be spun-up to the limiting mass shedding period. The critical magnetic field is B$_{c}$= (3.36$^{+0.7}_{-0.9}$)$\times10^{8}$ G for P=1 ms where the spin-up line is prescribed as $\dot{P}=\alpha P^{4/3}$ for $\alpha=(1.1\pm0.5)\times10^{-15}$[**s**]{}$^{-4/3}$ [@ACW99].
We can formulate an age estimate that implements the constraints arising from recycling and mass shedding: $$\begin{aligned}
\widetilde{\tau}\,(B > B_{c})&=&\frac{P}{(n-1)\;\dot{P}}\left[1-\left(\frac{\widetilde{\alpha}\,
\dot{P}^{3/7}}{P^{4/7}}\right)^{n-1}\right]\label{eq:mage1} \\
\widetilde{\tau}\, (B < B_{c})&=&\frac{P}{(n-1)\;\dot{P}}\left[1-\left(\frac{P_{sh}}{P}\right)^{n-1}\right]\label{eq:mage2}\end{aligned}$$ where P$_{sh}$ is the mass shedding limit. We parametrically adopt the re-normalized coefficient $\widetilde{\alpha}=2.6^{+0.7}_{-0.4} \times10^{6}{\bf s^{4/7}}$ and use it as a fiducial value. Although $\widetilde{\alpha}$ inherently has numerous sources of uncertainty, the corresponding minimum post-accretion period that defines the spin-up line depends on the total accreted mass, but is insensitive to other parameters [see @PK94]. [@ACW99] find an upper limit of $\alpha=1.6\times10^{-15}$[**s**]{}$^{-4/3}$ to be empirically reasonable which we adopt to marginalize $\widetilde{\alpha}$.
At low magnetic fields (B$\lesssim$ B$_{c}$), the Alfvén radii ($r_{A}\propto B_{s}^{4/7}$) shrink essentially down to the surface of the neutron star and angular momentum is transferred more efficiently and stably on longer timescales. Therefore the spin-down trajectories for neutron stars with lower magnetic fields are more likely to start closer to the vertical re-birth line ($\sim$P$_{sh}$).
The level of truncation of the spin-down trajectories can be more dramatic for neutron stars with very low magnetic fields and short periods (P$\lesssim3$ ms). These MSPs with B$\lesssim10^{8}$G are expected then to be born with periods very close to their observed ones (P$\simeq$P$_{0}$) and therefore are more likely to be much younger than they appear.
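Equations (\[eq:mage1\]) and (\[eq:mage2\]) can be sketched as follows. The function name and argument conventions are our own, and the defaults simply encode the fiducial values quoted above ($\widetilde{\alpha}=2.6\times10^{6}$ s$^{4/7}$, P$_{sh}=1$ ms, B$_{c}=3.36\times10^{8}$ G); this is an illustration, not the production pipeline used for the tables.

```python
def msp_age(P, Pdot, n=3.0, alpha_tilde=2.6e6, P_sh=1.0e-3, B=None, B_c=3.36e8):
    """Realistic MSP age, Eqs. [eq:mage1]-[eq:mage2]; returns an age in seconds.

    P, P_sh in seconds; Pdot dimensionless (s/s); B, B_c in gauss.
    For B < B_c the trajectory starts at the mass-shedding limit P_sh;
    otherwise it starts on the spin-up line Pdot = alpha * P^{4/3}.
    """
    if B is not None and B < B_c:
        p0_over_p = P_sh / P                                              # Eq. [eq:mage2]
    else:
        p0_over_p = alpha_tilde * Pdot ** (3.0 / 7.0) / P ** (4.0 / 7.0)  # Eq. [eq:mage1]
    return P / ((n - 1.0) * Pdot) * (1.0 - p0_over_p ** (n - 1.0))
```

By construction $\widetilde{\tau}\le\tau_{c}=P/(2\dot{P})$ for n=3, and a low-field MSP observed near P$_{sh}$ gets $\widetilde{\tau}\simeq0$, i.e., P$_{0}\simeq$P.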
In Figure \[fig:obs\] the sets of blue and red lines are MSP age ($\mage$) lines for braking indices n=3 (red: dash) and 5 (blue: dash-dot). It is worth pointing out that for MSPs whose angular momentum and energy loss is dominated by multipole or gravitational wave radiation, the scaling factor in Equation ( \[eq:spin\]) can be different. The blue line in Figure \[fig:obs\] demonstrates the potential level of additional bias for the case in which gravitational wave radiation is the dominant mechanism for energy loss as opposed to braking due to pure dipole radiation. MSPs that lose energy via more efficient processes will traverse the spin-down path much faster, and consequently mimic older ages. As a result, the contribution of more efficient processes will exacerbate the age overestimate further.
Figure \[fig:sim\] shows the true age trend for MSPs. The synthetic population is produced with the method described in [@KT09], where the MSP evolution is parameterized by the [evolution functional]{} $\mathcal{E}(D, \dot{M}, R)$, in which $D$ is the initial period (P$_{0}$) distribution of the progenitor population, $\dot{M}$ is the distribution of predominant accretion rates experienced during the latest phases before the onset of radio emission, and $R$ is the galactic birth rate. The parameter space ($D, \dot{M}, R$) is sampled by producing purely random synthetic populations. Then, only the input parameters that produce synthetic samples consistent with the observed MSP population are used for the construction of the underlying [$P-\dot{P}$ ]{}demographics. For the filtration process, we use a [ multi-dimensional Kolmogorov-Smirnov (K-S) test]{} as the consistency criterion for the multi-layered Monte-Carlo integration scheme [@FF87 and references therein].
We choose sub-samples that are uniformly selected from the underlying MSP population to produce Figures \[fig:sim\] and \[fig:multi\]. While selection biases play a role in which MSPs are preferentially observed, the reflected age trend will remain unaffected. Therefore, the age trend of the underlying population both in Figures \[fig:sim\] and \[fig:multi\] is expected to be a realistic reflection of the true age.
\[sec:dis\]Discussion
=====================
\[sec:cont\]Sources of Age Corruption: Bias and Contamination
-------------------------------------------------------------
Older MSPs tend to appear younger if the spin-down rates ($\dot{P}$) are not properly corrected for the contribution due to secular acceleration. This causes an upward “age bias” on the [$P-\dot{P}$ ]{}plane. The Shklovskii effect is more pronounced for MSPs that are closer to the solar system barycenter and have higher transverse velocities.
We can unbias our measurements and correct for this corruption by calculating the centrifugal term (Equations (\[eq:pc\]) and (\[eq:pdot\])) once both the distance and transverse velocity terms are accurately known. The proper motion measurements for MSPs with poorly constrained distances pose a stiffer challenge than previously predicted [@DTB09]. To avoid underestimating the potential age bias, we introduced conservative random deviations to the observed distance and transverse velocity measurements by up to $\pm 0.4D$ and $\pm 0.05 v_{t}$ [@BBG02; @BFG03; @DTB09] to produce the sample population in Figure \[fig:multi\].
Figure \[fig:multi\](a) reflects the potential age bias we estimate for the underlying population. In some cases, relatively old MSPs may even appear above the spin-up line if the observed spin-down rates are not properly unbiased. The age bias caused by the Shklovskii effect will tend to push older MSPs upward by amounts proportional to their correction term ($\dot{P}/\dot{P}'$) and hence make them appear younger.
Throughout this work, we have excluded MSPs in GCs due to their complicated spin evolution. The cluster’s compact gravitational well, and to a lesser extent the larger cross section for gravitational interaction in GCs relative to the Galactic disk, effectively perturb the spin evolution of these MSPs by injecting a cumulative corruption to their spin-down rates. Conversely, the spin-down trajectories of MSPs in the Galaxy will include less corrupted information about the initial formation. Even though the level of age bias for Galactic MSPs appears not as dramatic as the ones that are gravitationally perturbed in GCs, it still mixes the apparent age distribution quite efficiently (Figure \[fig:multi\](a)).
Unbiasing will push old MSPs that appear younger to their corresponding age ($\mage$) lines. This correction process will recover the final positions of the true spin-down trajectories but will effectively exacerbate the downward age contamination which was artificially diluted by secular acceleration. The main source of the downward age contamination, though, will be the sub-Eddington progenitor accretion rates experienced during the LMXB phase.
These MSPs are expected to be born with smaller spin-down rates and consequently traverse shorter trajectories. We predict that $\sim$30% of MSP ages overestimate the true age by more than a factor of 2. Therefore, we argue that MSPs, which were presumed as the oldest sub-population, have a flatter true age distribution than previously thought.
While we can reverse the upward “age bias”, the downward “age contamination”, on the other hand, reflects the intrinsic spin-down rates and is real (Figures \[fig:multi\](a) and (b)). Unless we have a unique leverage to accurately determine the period at birth (P$_{0}$) along with the respective progenitor accretion rate, the inferred MSP ages will remain strict overestimates (Figure \[fig:multi\](b)).
Figure \[fig:dist\] quantifies the level of bias and contamination of MSP ages before and after correcting for the contribution due to secular acceleration. Both the upward bias and downward contamination will simultaneously remain before the spin-down rates are properly corrected for the Shklovskii effect. We predict that 20% of the measured [$P-\dot{P}$ ]{}values for MSPs (dotted line in Figure \[fig:dist\]) will overestimate the age by more than a factor of two, whereas about 10% will underestimate the age at the same level. Old and young MSPs are practically indistinguishable before the spin-down rates are properly unbiased. Some very young MSPs may appear even below the Hubble line whereas much older MSPs may be observed above the spin-up line (Figure \[fig:multi\](a)). The asymmetric wings of the dotted line that extend in both directions in Figure \[fig:dist\] quantify this bidirectional corruption.
The ages obtained from intrinsic [$P-\dot{P}$ ]{}values will represent strict upper limits to the true age ($\tau_{i}=\tau_{t} \le \widetilde{\tau}'\le \tau_{c}'$). The truncated solid line in Figure \[fig:dist\] shows at what level we overestimate the ages for the whole MSP population. After properly unbiasing the observed spin-down rates, we expect to overestimate the age of 30% of MSPs by more than a factor of two. Table 1 shows $\cage$ and $\mage$ for observed millisecond radio pulsars before and after unbiasing. Table 2 marginalizes the potential bias for an assumed $v_{t}\sim$100 km s$^{-1}$ for millisecond radio pulsars that have no proper motion measurements.
The intrinsic [$P-\dot{P}$ ]{}values of the underlying MSP population suggest that $\sim$30% of MSPs will be born with apparent ages older than the age of the Galaxy. The true age distribution of MSPs with $\tau_{c}\ge 10^{10}$ yr is relatively well mixed as opposed to MSPs that lie on or just above the Hubble line. Hence, the sources that appear below the $10^{10}$yr line might not be among the oldest within the MSP sub-population.
We predict younger ages for pulsars with $\tau_{c}/\mage'>$1 (Table 1). The majority of MSPs for which we predict younger ages are the ones that are closer to the spin-up line. These MSPs have a significant fraction of their spin-down trajectories truncated because the timescale a pulsar spends close to the spin-up line is much shorter than for MSPs with smaller spin-down rates. Some of the younger sources with $\tau_{t}\le\mage'<\cage$ are PSRs J0218+4232, J0737$-$3039A, J1023+0038, B1534+12, B1913+16, and B1937+21.
For MSPs with no proper motion measurements, Table 2 shows the potential biases for an assumed $v_{t}$=100 km s$^{-1}$. In this category, PSR J1841+0130 may have the strongest age corruption with $\cage/\mage'\sim$5.85.
There are two sources in particular that have been of considerable interest:
[*PSR J1012+5307*]{}: The range of cooling ages for possibly the best-studied example, PSR J1012+5307, is $\tau_{wd}\sim$0.3-7 Gyr [@LL95; @ASH96; @BKW96]. We derive an age of $\widetilde{\tau}'\sim$ 6.25 Gyr (Table 1) for PSR J1012+5307 which is consistent with $\tau_{c}'$. This implies that the true age has to be $\le$ 6.25 Gyr. One cannot exclude younger ages by either the Shklovskii corrected (unbiased) characteristic age ($\tau_{c}'$) or MSP age ($\widetilde{\tau}'$) approach, as PSR J1012+5307 may have been born with a P$_{0}$ very close to its currently observed period (P$_{0}\simeq$P) due to low accretion rates experienced during the LMXB phase (see $\S$\[sec:spin\]).
[*PSR B1257+12*]{}: The cumulative correction to the age of PSR B1257+12 is among the most significant of MSPs that have proper motion measurements ($\dot{P}/\dot{P}'\sim$16.6). We infer an upper limit of $\tau_{t}\le\mcage=1.42\times10^{10}$ yr for the age. This implies that the age of PSR B1257+12 cannot be constrained solely from its spin-down history. In general, when interpreting ages inferred from spin-down histories for single MSPs, one has to consider the possibility that an evolutionary process which produces MSPs without a binary companion may affect the spin-down evolution. For instance, an encounter by which the companion is ejected will perturb the spin-down rate of the compact primary. In the exceptional case of PSR B1257+12, the process that led to the formation of the planets may have affected the spin-down evolution.
\[sec:bra\] Braking Index
-------------------------
We tested whether alternative energy loss mechanisms other than pure dipole braking (n=3) are required to account for the MSP age distribution. While more efficient processes (n$>$ 3) may also contribute to the downward age contamination, it would seem unnecessary to invoke higher braking indices to account for ages that appear older than the Galaxy. We do not rule out that gravitational wave radiation may expedite spin-down during the very early stages after re-birth before magnetic stability sets in [@L95; @LM95; @B98]. However, for MSPs following standard spin-up, the contribution of gravitational wave radiation to age contamination may not exceed the offset between the blue and red $\mage$ age lines in Figure \[fig:obs\]. We explicitly show that lower preferred accretion rates during the active accretion phase can produce the paradoxically older appearing MSPs (Figures \[fig:sim\] and \[fig:multi\]).
Based on the [$P-\dot{P}$ ]{}characteristics of MSPs, we find no compelling evidence that energy loss has been dominantly driven by multipole or gravitational wave radiation during a significant portion of the lifetime of these sources.
\[sec:con\]Conclusions
======================
We have implemented constraints arising from the spin-up process and a limiting maximum spin limit into the standard method to obtain a more realistic age ($\mage$) estimate for MSPs. There are a range of ramifications that follow:
[*Age distribution*]{}: The unbiased characteristic ages are only upper limits to the true age. The new age estimate gives tighter upper limits and hence is closer to the true age ($\tau_{t}\le\mcage\le\cage'$). This flattens and shifts the age distribution toward younger ages while the age corruption scrambles the positions on the [$P-\dot{P}$ ]{}plane quite efficiently. We predict that a significant fraction of MSPs are born with older apparent ages. The true age distribution of MSPs does not appear to peak at $\sim10^{10}$yr as sharply as expected for a sub-population recycled from a first generation of pulsars with features indicative of a population that is already old, at least in a dynamic sense [@HP97]. MSPs that appear older than the Galaxy can be reconciled with very low ($\dot{m}\ll\dot{M}_{Edd}$) progenitor accretion rates experienced during the latest phases of the LMXB evolution. We expect $\sim$30% of the population to be born with $\tau_{c}'\ge 10^{10}$ yr.
[*Age corruption*]{}: There are two sources of age corruption: (1) the previously known “age bias”, which appears to be more prominent than previously predicted and (2) “age contamination” which is driven by lower progenitor accretion rates. Age contamination, which effectively disguises young MSPs as old ones, is not correctable in the absence of additional constraints that may give us insight into the details of individual prior accretion histories. On the other hand, the correctable age bias manifests itself as reverse contamination and will disguise old MSPs as younger sources. The downward contamination will remain as the main source of confusion with regards to MSP ages. We expect to overestimate the true age of MSPs by more than a factor of 2 for $\sim$30% of the population. As a consequence, the birth and merger rates of NS–NS systems based on $\cage$ are most likely underestimates which therefore will have ramifications for potential LIGO sources.
[*Braking indices*]{}: The millisecond radio pulsar demographics are consistent with the canonical spin-down model (n=3). The challenge of disentangling possibly mixed sub-populations of MSPs that may have experienced dissimilar energy loss histories (n$\le$3 or n=5) is mainly due to the paucity of sources. Therefore, an early short phase when MSP energy loss is dominated by gravitational wave or multipole emission remains a potentially contributing source of age contamination.
The research presented here has made extensive use of the 2009 August version of the ATNF Pulsar Catalogue [@MHT93]. BK thanks Athanasios Kottas for long discussions on Bayesian statistics which have seeded the idea and subsequently given birth to the proper parameterization of millisecond pulsar evolution. We thank the anonymous referee for useful comments. The authors acknowledge NASA and NSF grants AST-0506453.
Alberts, F., Savonije, G. J., & van den Heuvel, E. P. J. 1996, , 380, 676
Alpar, M. A., Cheng, A. F., Ruderman, M. A., & Shaham, J. 1982, , 300, 728
Althaus, L. G., Serenelli, A. M., & Benvenuto, O. G. 2001, , 323, 471
Althaus, L. G., Serenelli, A. M., & Benvenuto, O. G. 2001, , 324, 617
Arzoumanian, Z., Cordes, J. M., & Wasserman, I. 1999, , 520, 696
Bhattacharya, D., & van den Heuvel, E. P. J. 1991, , 203, 1
Bildsten, L. 1998, , 501, L89
Brisken, W. F., Benson, J. M., Goss, W. M., & Thorsett, S. E. 2002, , 571, 906
Brisken, W. F., Fruchter, A. S., Goss, W. M., Herrnstein, R. M., & Thorsett, S. E. 2003, , 126, 3090
Burderi, L., King, A. R., & Wynn, G. A. 1996, , 283, L63
Camilo, F., Thorsett, S. E., & Kulkarni, S. R. 1994, , 421, L15
Chakrabarty, D., Morgan, E. H., Muno, M. P., Galloway, D. K., Wijnands, R., van der Klis, M., & Markwardt, C. B. 2003, , 424, 42
Cook, G. B., Shapiro, S. L., & Teukolsky, S. A. 1994, , 423, L117
Deller, A. T., Tingay, S. J., Bailes, M., & Reynolds, J. E. 2009, , 701, 1243
Fasano, G., & Franceschini, A. 1987, , 225, 155
Frank, J., King, A., & Raine, D. J. 2002, Accretion Power in Astrophysics (Cambridge: Cambridge University Press)
Friedman, J. L., & Ipser, J. R. 1992, Royal Society of London Philosophical Transactions Series A, 340, 391
Ghosh, P., & Lamb, F. K. 1992, in X-Ray Binaries and the Formation of Binary and Millisecond Radio Pulsars (NATO ARW), 487
Haensel, P., & Zdunik, J. L. 1989, , 340, 617
Hansen, B. M. S., & Phinney, E. S. 1997, , 291, 569
Hansen, B. M. S., & Phinney, E. S. 1998, , 294, 557
Hansen, B. M. S., & Phinney, E. S. 1998, , 294, 569
K[i]{}z[i]{}ltan, B., & Thorsett, S. E. 2009, , 693, L109
Krolik, J. H. 1991, , 373, L69
Kulkarni, S. R. 1986, , 306, L85
Lindblom, L. 1995, , 438, 265
Lindblom, L., & Mendell, G. 1995, , 444, 804
Lorimer, D. R., Lyne, A. G., Festin, L., & Nicastro, L. 1995 , 376, 393
Manchester, R. N., Hobbs, G. B., Teoh, A., & Hobbs, M. 2005, , 129, 1993
Murray, S. S., Slane, P. O., Seward, F. D., Ransom, S. M., & Gaensler, B. M. 2002, , 568, 226
Phinney, E. S., & Kulkarni, S. R. 1994, , 32, 591
Radhakrishnan, V., & Srinivasan, G. 1982, Current Science, 51, 1096
Sarna, M. J., Antipova, J., & Muslimov, A. 1998, , 499, 407
Sarna, M. J., Ergma, E., & Ger[š]{}kevit[š]{}-Antipova, J. 2000, , 316, 84
Shklovskii, I. S. 1970, Soviet Astronomy, 13, 562
Sch[ö]{}nberner, D., Driebe, T., & Bl[ö]{}cker, T. 2000, , 356, 929
van Kerkwijk, M. H., Bassa, C. G., Jacoby, B. A., & Jonker, P. G. 2005, Binary Radio Pulsars (ASP Conf. Ser. 328), ed. F. A. Rasio & I. H. Stairs (San Francisco, CA: ASP), 357
Wood, M. A. 1992, , 386, 539
Wyckoff, S., & Murray, C. A. 1977, , 180, 717
\[t!\] {width="4.9in"} \[fig:obs\]

{width="4.9in"} \[fig:sim\]

{width="6.4in"} \[fig:multi\]

{width="4.9in"} \[fig:dist\]
[lccccc]{} \[tab:1\] PSR & $\tau_{c}$ (Gyr) & $\tau_{c}'$ (Gyr) & $\widetilde{\tau}$ (Gyr) & $\widetilde{\tau}'$ (Gyr) & $\tau_{c}/\widetilde{\tau}'$\
\* J0030+0451 & 7.56 & 7.65 & 7.24 & 7.32 & 1.03\
\* J0034$-$0534 & 6.00 & 55.71 & 4.29 & 39.90 & 0.15\
\* J0218+4232 & 0.48 & 0.48 & 0.34$_{-0.09}^{+0.04}$ & 0.35$_{-0.09}^{+0.04}$ & 1.37$_{-0.14}^{+0.48}$\
\* J0437$-$4715 & 1.59 & 6.12 & 1.47 & 5.94 & 0.27\
\* J0610$-$2100 & 4.93 & 17.92 & 4.60 & 16.72 & 0.30\
\* J0613$-$0200 & 5.06 & 5.28 & 4.52 & 4.72 & 1.07\
\* J0621+1002 & 9.67 & 10.01 & 9.56 & 9.91 & 0.98\
\* J0711$-$6830 & 5.80 & 10.46 & 5.61 & 10.11 & 0.57\
\* J0737$-$3039A & 0.20 & 0.20 & 0.14$_{-0.04}^{+0.02}$ & 0.14$_{-0.04}^{+0.02}$ & 1.43$_{-0.18}^{+0.57}$\
\* J0751+1807 & 7.08 & 7.25 & 6.49 & 6.66 & 1.06\
\* J1012+5307 & 4.87 & 6.48 & 4.69 & 6.25 & 0.78\
\* J1023+0038 & 2.23 & 2.50 & 1.45 & 1.62 & 1.37\
\* J1045$-$4509 & 6.77 & 10.91 & 6.63 & 10.71 & 0.63\
\* B1257+12 & 0.86 & 14.58 & 0.75 & 14.21 & 0.06\
\* J1453+1902 & 7.91 & 8.46 & 7.68 & 8.21 & 0.96\
\* J1455$-$3330 & 5.21 & 8.07 & 5.07 & 7.93 & 0.66\
\* J1518+4904 & 23.84 & 29.34 & 23.74 & 29.23 & 0.82\
\* B1534+12 & 0.25 & 0.25 & 0.19$_{-0.04}^{+0.01}$ & 0.20$_{-0.04}^{+0.01}$ & 1.24$_{-0.05}^{+0.32}$\
\* J1600$-$3053 & 6.00 & 6.76 & 5.54 & 6.24 & 0.96\
\* J1603$-$7202 & 14.98 & 17.94 & 14.85 & 17.81 & 0.84\
\* J1640+2224 & 17.71 & 30.59 & 15.94 & 27.53 & 0.64\
\* J1643$-$1224 & 3.96 & 5.04 & 3.77 & 4.81 & 0.82\
\* J1709+2313 & 20.21 & 49.45 & 19.27 & 47.15 & 0.43\
\* J1713+0747 & 8.49 & 9.01 & 8.08 & 8.58 & 0.99\
\* J1738+0333 & 3.85 & 4.07 & 3.71 & 3.93 & 0.98\
\* J1744$-$1134 & 7.24 & 9.36 & 6.80 & 8.80 & 0.82\
\* B1855+09 & 4.80 & 4.92 & 4.63 & 4.75 & 1.01\
\* J1909$-$3744 & 3.34 & 17.08 & 2.95 & 15.12 & 0.22\
\* J1911$-$1114 & 4.05 & 9.13 & 3.74 & 8.44 & 0.48\
\* B1913+16 & 0.11 & 0.11 & 0.07$_{-0.03}^{+0.01}$ & 0.07$_{-0.03}^{+0.01}$ & 1.66$_{-0.29}^{+1.09}$\
\* B1937+21 & 0.24 & 0.24 & 0.10$_{-0.09}^{+0.04}$ & 0.10$_{-0.09}^{+0.04}$ & 2.37$_{-0.66}^{+21.63}$\
\* J1944+0907 & 4.80 & 8.59 & 4.63 & 8.27 & 0.58\
\* B1953+29 & 3.27 & 3.41 & 3.14 & 3.27 & 1.00\
\* B1957+20 & 1.51 & 2.23 & 0.92 & 1.37 & 1.10\
\* J2019+2425 & 8.88 & 24.34 & 8.31 & 22.77 & 0.39\
\* J2051$-$0827 & 5.63 & 5.81 & 5.35 & 5.52 & 1.02\
\* J2124$-$3358 & 3.79 & 6.25 & 3.64 & 5.99 & 0.63\
\* J2129$-$5721 & 1.98 & 2.09 & 1.84 & 1.94 & 1.02\
\* J2145$-$0750 & 8.53 & 9.81 & 8.42 & 9.69 & 0.88\
\* J2235+1506 & 5.99 & 9.13 & 5.92 & 9.04 & 0.66\
\* J2317+1439 & 22.55 & 36.18 & 20.65 & 33.13 & 0.68\
\* J2322+2057 & 7.85 & 18.49 & 7.51 & 17.69 & 0.44\
[lccccc]{} \[tab:2\] PSR & $\tau_{c}$ (Gyr) & $\tau_{c}'$ (Gyr) & $\widetilde{\tau}$ (Gyr) & $\widetilde{\tau}'$ (Gyr) & $\tau_{c}/\widetilde{\tau}'$\
\* J0407+1607 & 5.15 & 5.64 & 5.06 & 5.55 & 0.93\
\* J0609+2130 & 3.76 & 4.37 & 3.68 & 4.30 & 0.87\
\* J0900$-$3144 & 3.59 & 5.11 & 3.47 & 4.99 & 0.72\
\* J1038+0032 & 6.82 & 8.50 & 6.73 & 8.40 & 0.81\
\* J1125$-$6014 & 10.39 & 16.37 & 8.89 & 14.00 & 0.74\
\* J1157$-$5112 & 4.83 & 5.85 & 4.75 & 5.77 & 0.84\
\* J1232$-$6501 & 1.72 & 1.75 & 1.67 & 1.69 & 1.02\
\* J1420$-$5625 & 8.01 & 11.67 & 7.91 & 11.57 & 0.69\
\* J1435$-$6100 & 6.05 & 6.92 & 5.92 & 6.79 & 0.89\
\* J1439$-$5501 & 3.20 & 4.46 & 3.11 & 4.36 & 0.73\
\* J1454$-$5846 & 0.88 & 0.89 & 0.81 & 0.83 & 1.06\
\* J1528$-$3146 & 3.87 & 5.28 & 3.80 & 5.20 & 0.74\
\* J1629$-$6902 & 9.51 & 18.16 & 9.24 & 17.66 & 0.54\
\* J1721$-$2457 & 9.39 & 15.93 & 8.62 & 14.62 & 0.64\
\* J1730$-$2304 & 6.37 & 42.92 & 6.24 & 42.27 & 0.15\
\* J1732$-$5049 & 6.10 & 7.92 & 5.88 & 7.64 & 0.80\
\* J1745$-$0952 & 3.23 & 3.56 & 3.14 & 3.46 & 0.93\
\* J1751$-$2857 & 5.49 & 7.42 & 5.13 & 6.93 & 0.79\
\* J1753$-$1914 & 0.49 & 0.50 & 0.44 & 0.45 & 1.10\
\* J1753$-$2240 & 1.91 & 1.98 & 1.85 & 1.93 & 0.99\
\* J1756$-$2251 & 0.44 & 0.45 & 0.38 & 0.38 & 1.16\
\* J1757$-$5322 & 5.34 & 7.30 & 5.21 & 7.16 & 0.75\
\* J1802$-$2124 & 2.78 & 2.95 & 2.68 & 2.84 & 0.98\
\* J1804$-$2717 & 3.62 & 4.59 & 3.50 & 4.46 & 0.81\
\* J1810$-$2005 & 3.44 & 3.66 & 3.36 & 3.57 & 0.96\
\* J1841+0130 & 0.058 & 0.058 & 0.010$^{+0.013}_{-0.010}$ & 0.010$^{+0.013}_{-0.010}$ & 5.85$^{+\infty}_{-3.33}$\
\* J1843$-$1113 & 3.05 & 3.41 & 2.15 & 2.41 & 1.27\
\* J1853+1303 & 7.33 & 10.65 & 6.89 & 10.01 & 0.73\
\* J1903+0327 & 1.81 & 1.85 & 1.42 & 1.45 & 1.25\
\* J1904+0412 & 10.24 & 12.40 & 10.16 & 12.32 & 0.83\
\* J1905+0400 & 12.34 & 33.12 & 11.47 & 30.81 & 0.40\
\* J1910+1256 & 8.08 & 11.27 & 7.76 & 10.81 & 0.75\
\* J1911+1347 & 4.29 & 5.24 & 4.09 & 4.99 & 0.86\
\* J1918$-$0642 & 5.05 & 6.69 & 4.91 & 6.55 & 0.77\
\* J2010$-$1323 & 17.17 & 185.03 & 16.54 & 178.24 & 0.10\
\* J2033+17 & 8.57 & 14.86 & 8.33 & 14.44 & 0.59\
[^1]: Full resolution color figures and movies available at URL: http://www.kiziltan.org/research/MSP/ages.html
[^2]: For consistency, we designate unbiased values that are corrected for secular acceleration (i.e., the Shklovskii effect) by adding “ $'$ ” to the parameter, instead of referring to them as “intrinsic” values. While the unbiased spin-down rates do represent the intrinsic values (i.e., $\dot{P}'=\dot{P}_{i}$), the unbiased characteristic age $\tau_{c}'=P/ 2\dot{P}'$ is neither the intrinsic nor the true age (i.e., $\tau_{c}'\ne \tau_{i}=\tau_{t}$, see § \[sec:cont\] for discussion).
[^3]: See a time-lapse movie for the true age evolution of millisecond pulsars at URL: http://www.kiziltan.org/research/MSP/ages.html
---
address:
- |
Department of Statistics\
The Wharton School\
University of Pennsylvania\
Philadelphia, Pennsylvania 19104\
USA\
- |
Department of Statistics\
University of Wisconsin–Madison\
Madison, Wisconsin 53706\
USA\
author:
-
-
title: 'Discussion: “A significance test for the lasso”'
---
We congratulate the authors for an interesting article and an innovative proposal for testing the significance of the predictor variables selected by the Lasso. There is much material for thought and exploration. Research on high-dimensional regression has been very active in recent years, but most of the efforts have so far focused on estimation. Despite the popularity of the Lasso as a variable selection technique, the problem of making valid inference for a model chosen by the Lasso is largely unsettled. The current paper pinpoints some of the challenges in making valid inference in the high-dimensional setting and presents a thought-provoking approach to address them.
Following the notation used in the paper, let $A$ be the model selected at the $k$th step of either the Lasso or forward stepwise regression and $j$ be the index of the variable to be added in the next step. This paper considers the problem of testing the null hypothesis that the underlying model corresponding to the true regression coefficient vector $\beta^\ast$ is nested in the currently selected model, that is, $$H_0\dvtx \supp\bigl(\beta^\ast\bigr)\subseteq A.$$
As pointed out in the paper, a classical approach to testing two fixed nested models $A$ and $A\cup\{j\}$ is the chi-squared test, which is based on the test statistic $$R_j=(\mathrm{RSS}_A-\mathrm{RSS}_{A\cup\{j\}})/
\sigma^2$$ and compares it to the quantile of the $\chi^2_1$ distribution. The test fails, as noted, when applied to forward stepwise regression or the Lasso in a vanilla fashion because it does not account for the fact that neither $A$ nor $\{j\}$ is fixed. The randomness of $A$ can be addressed using a conditional argument, as suggested by the authors. The effect of the way that the new index $j$ is selected is more subtle. The seeming lack of a remedy to this problem motivates the authors to focus on the Lasso and to propose the so-called covariance test statistic $$\begin{aligned}
\label{Tkdecomp}
T_k&=& \bigl( \bigl\langle y, X\hat{\beta}(\lambda_{k+1})
\bigr\rangle- \bigl\langle y, X_A\tilde{\beta}_A(
\lambda_{k+1}) \bigr\rangle\bigr)/\sigma^2
\nonumber\\[-8pt]\\[-8pt]
&=&R_j-\lambda_{k+1} \bigl( \bigl\langle s_{A\cup\{j\}},
\hat{\beta}^{\mathrm{LS}}_{A\cup\{j\}} \bigr\rangle- \bigl\langle
s_{A},\hat{\beta}^{\mathrm{LS}}_{A} \bigr\rangle\bigr)/
\sigma^2,\nonumber\end{aligned}$$ where $s_A$ and $s_{A\cup\{j\}}$ are, respectively, the vector of signs of the nonzero regression coefficients for the Lasso at the $k$th and $(k+1)$st steps, and $\hat{\beta}^{\mathrm{LS}}_{M}=(X_M^\top
X_M)^{-1}X_M^\top y$ is the least squares estimate under model $M$. In effect, the second term on the right-hand side of (\[Tkdecomp\]) can be viewed as a correction factor to account for the fact that the next index $j$ is not fixed, but selected through the penalized $\ell
_1$ minimization. It is shown in the present paper that under $H_0$, the limiting null distribution of $T_k$ is either $\operatorname{Exp}(1)$ or stochastically smaller than $\operatorname{Exp}(1)$, and the paper proposed a test for the null hypothesis $H_0$ based on this fact.
In this discussion, we introduce and explore a perhaps simpler and more generic correction factor; its simplicity makes it an appealing alternative to the current proposal. Furthermore, it can be easily extended to other settings such as logistic regression and Cox proportional hazards regression.
An alternative test {#an-alternative-test .unnumbered}
-------------------
Our proposal is based on the observation that for a given subset $A$, the next selected index $j$ is not an arbitrary index in $A^c$. It is instructive to first look at the case of orthogonal design where it is clear that for both forward stepwise regression and the Lasso, $j$ can be identified with $$R_j=\max_{m\in A^c} R_m.$$ As a result, although for a fixed index $m\in A^c$, $R_m$ is a $\chi
^2_1$ distributed random variable, $R_j$, which is the maximum of $R_m$ for all $m\in A^c$, is not $\chi^2_1$ distributed. Note that, conditioning on the design matrix $X$, $R_m$’s are independent $\chi
^2_1$ random variables. Therefore, the conditional distribution of $R_j$ given $X$ can be easily deduced from the distribution of the maxima of independent Gaussian random variables \[see, e.g., @HF06\]. In particular, in a high-dimensional setting where $p$ is large and $|A|$ is relatively small, the null distribution of $R_j$ can be well approximated by a Gumbel distribution (of type I). More specifically, it can be shown that $$\label{eqasy} \qquad R_j-2\log\bigl(\bigl|A^c\bigr|
\bigr)+\log\log\bigl(\bigl|A^c\bigr| \bigr) \stackrel{d} {\to} \operatorname{Gumbel}(-
\log\pi, 2)\qquad\mbox{as } p\to\infty,$$ where the distribution function of a random variable $G$ following $\operatorname{Gumbel}(-\log\pi, 2)$ is given by $$\mathbb{P}(G\le x)=\exp\bigl(-\exp\bigl(-(x+\log\pi)/2 \bigr) \bigr).$$ This motivates us to consider the following test statistic: $$\label{eqnewtest} \widetilde{T}_k=R_j-2\log
\bigl(\bigl|A^c\bigr| \bigr)+\log\log\bigl(\bigl|A^c\bigr| \bigr)$$ and compare $\widetilde{T}_k$ with the quantile of $\operatorname{Gumbel}(-\log\pi, 2)$ distribution for testing the null hypothesis $H_0$. More specifically, for any given $0<\alpha< 1$, we will reject $H_0$ at the $\alpha$ level if and only if $\widetilde{T}_k\ge q_{1-\alpha}^G$ where $q_{1-\alpha}^G$ is the $1-\alpha$ quantile of $\operatorname{Gumbel}(-\log\pi, 2)$.
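The proposed statistic and its Gumbel reference distribution are simple to evaluate in closed form; a short sketch (function names are ours, not from the paper):

```python
import numpy as np

def gumbel_cdf(x):
    """CDF of the Gumbel(-log(pi), 2) reference distribution."""
    return np.exp(-np.exp(-(x + np.log(np.pi)) / 2.0))

def gumbel_quantile(q):
    """Inverse CDF: the q-quantile of Gumbel(-log(pi), 2)."""
    return -2.0 * np.log(-np.log(q)) - np.log(np.pi)

def alternative_test(R_j, n_remaining, alpha=0.05):
    """T~_k = R_j - 2 log|A^c| + log log|A^c|; reject H0 iff T~_k >= q_{1-alpha}."""
    T = R_j - 2.0 * np.log(n_remaining) + np.log(np.log(n_remaining))
    return T, T >= gumbel_quantile(1.0 - alpha)
```

Note that the correction term depends only on $|A^c|$, the number of remaining variables, which is what makes the statistic easy to port to other models.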
To illustrate the accuracy of the reference distribution, we first repeated the experiment considered in the paper with $n=100$ observations and $p=50$ variables under the orthogonal design. When the true model is $\beta^\ast=0$ and, therefore, the null hypothesis holds, we computed $\widetilde{T}_1$ for $500$ simulated datasets. The Q–Q plot of the observed $\widetilde{T}_1$ versus its reference distribution $\operatorname{Gumbel}(-\log\pi, 2)$ is given in the left panel of Figure \[figqqplot\]. Similarly, the right panel of Figure \[figqqplot\] gives the Q–Q plot for $\widetilde{T}_4$, again computed from 500 simulated datasets, when $\beta^\ast=(6,6,6,0,\ldots)^\top$.
![Comparisons of the empirical distributions with the reference distribution for $\widetilde{T}_k$ under the orthogonal design.[]{data-label="figqqplot"}](1175bf01.eps)
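The null experiment under the orthogonal design can be reproduced in a few lines, since there the $R_m$ are i.i.d. $\chi^2_1$ and $R_j$ is their maximum. A sketch (seed and replication count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
p, reps = 50, 2000

# Under H0 with an orthogonal design and A empty, the R_m are i.i.d. chi^2_1,
# so R_j is the maximum of p of them; T~_1 applies the centering of (eqnewtest).
R_max = rng.chisquare(df=1, size=(reps, p)).max(axis=1)
T1 = R_max - 2 * np.log(p) + np.log(np.log(p))

# Asymptotically T~_1 ~ Gumbel(-log(pi), 2), whose mean -log(pi) + 2*gamma is ~0.
print(T1.mean())
```

Comparing the empirical quantiles of `T1` against $\operatorname{Gumbel}(-\log\pi,2)$ reproduces the kind of Q–Q plot shown in the figure.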
The strength of $\widetilde{T}_k$ comes from the robustness of its limiting distribution under correlated designs. When $X^\top X\neq I$, $R_m$’s are no longer independent but they are still marginally $\chi
^2_1$ distributed random variables. The distribution of $R_j=\max_{m\in
A^c}R_m$ again can be deduced from that of the maxima of a Gaussian process. In particular, it can be shown that the limiting Gumbel distribution given by (\[eqasy\]) continues to hold under fairly weak conditions on the dependence structure \[see, e.g., @LLR83\]. To verify the accuracy of the Gumbel approximation under dependency, we repeated the previous example with $\beta^\ast=(6,6,6,0,\ldots)^\top$. But instead of the orthogonal design, the design matrix is now generated from a multivariate normal distribution with mean zero and covariances $\operatorname{cov}(X_i, X_j)=\rho^{|i-j|}$. The left panel of Figure \[figcorr\] corresponds to $\rho=0.2$ and right panel to $\rho=0.8$, both suggesting that the limiting distribution $\operatorname{Gumbel}(-\log\pi, 2)$ continues to provide a reasonable approximation to the null distribution of $\widetilde{T}_k$. In contrast, numerical results show that the distribution of $T_k$ could deviate significantly from the reference distribution $\operatorname{Exp}(1)$ under the correlated designs, and thus comparing it to $\operatorname{Exp}(1)$ could be rather conservative in the correlated case.
![Comparisons of the empirical distributions with the reference distribution for $\widetilde{T}_4$ under the null $H_0\dvtx \operatorname{supp}(\beta
^\ast)\subseteq A$ with the ${\mathrm{AR}}(1)$ design.[]{data-label="figcorr"}](1175bf02.eps)
General nonlinear $\ell_1$ regularization problems {#general-nonlinear-ell_1-regularization-problems .unnumbered}
--------------------------------------------------
The advantages of the test statistic $\widetilde{T}_k$ proposed in (\[eqnewtest\]) are in its simplicity and generality. The correction factor utilized by $\widetilde{T}_k$ depends only on the number of remaining variables, and is straightforward to evaluate. This makes it particularly appealing when considering extensions to more general nonlinear $\ell_1$ regularization problems where the exact tuning parameter $\lambda_{k+1}$ for the next knot is typically not known in closed form and often has to be approximated using an iterative procedure. On the other hand, the validity of the Gumbel distribution as the reference distribution under $H_0$ remains when $R_j$ is replaced by the commonly used likelihood ratio test statistics.
To illustrate this point, we consider a logistic regression model where the true regression parameter is $\beta^\ast=0$. We generated $n=100$ observations on a binary response together with $p=50$ covariates drawn independently from the standard normal distribution. As before, the experiment was repeated 500 times; the Q–Q plot of the resulting statistic $\widetilde{T}_1$ with respect to the $\operatorname{Gumbel}(-\log\pi, 2)$ distribution is given in the left panel of Figure \[figglm\]. The right panel of Figure \[figglm\] shows the results from a similar experiment for Cox proportional hazards regression where the response was generated from $\operatorname{Exp}(1)$ with 10% censoring. In both cases, the reference $\operatorname{Gumbel}(-\log\pi, 2)$ distribution provides a good approximation to the null distribution of the test statistic $\widetilde{T}_1$.
![Reference distribution for $\widetilde{T}_1$ under $H_0\dvtx \beta
^\ast=0$ for logistic regression and Cox’s proportional hazards model.[]{data-label="figglm"}](1175bf03.eps)
Summary {#summary .unnumbered}
-------
The Lasso is a popular method for high-dimensional linear regression, and it is important to make statistical inference for a model chosen by the Lasso. The authors raise intriguing inferential questions in the paper and propose a novel method to address them. The work sheds new light on high-dimensional model selection using the Lasso and will definitely stimulate new ideas in the future. The alternative test based on the test statistic $\widetilde{T}_k$ given in (\[eqnewtest\]) merits further investigation for linear regression, logistic regression and Cox proportional hazards regression, under the high-dimensional setting. We thank the authors for their interesting work.
[**Signature of *f*-electron conductance in $\alpha$-Ce single-atom contacts**]{}\
Sebastian Kuntz$^{1}$, Oliver Berg$^{1}$, Christoph Sürgers$^{1\ast}$, and Hilbert v. Löhneysen$^{1,2}$\
$^1$Karlsruhe Institute of Technology, Physikalisches Institut, P.O. Box 6980, D-76049 Karlsruhe, Germany\
$^2$Karlsruhe Institute of Technology, Institut für Festkörperphysik,\
P.O. Box 3640, D-76021 Karlsruhe, Germany\
$^{\ast}$e-mail: christoph.suergers@kit.edu
**Cerium is a fascinating element exhibiting, with its different phases, long-range magnetic order and superconductivity in bulk form. The coupling of the 4*f* electron to *sd* conduction electrons and to the lattice is responsible for unique structural and electronic properties like the isostructural first-order solid-solid transition from the cubic $\gamma$ phase to the cubic $\alpha$ phase, which is accompanied by a huge volume collapse of 14 %. While the $\gamma - \alpha$ phase transition has been investigated for decades, experiments aiming at disentangling the 4*f* contribution to the electric conductance of the different phases have not been performed. Here we report on the strongly enhanced conductance of single-atom Ce contacts. By controlling the content of $\alpha$-Ce employing different rates of cooling, we find a strong correlation between the fraction of $\alpha$-Ce and the magnitude of the last conductance plateau before the contact breaks. We attribute the enhanced conductance of $\alpha$-Ce to the additional contribution of the 4*f* level.**
Cerium is perhaps the elemental material that exhibits the most pronounced configurational changes. Under ambient pressure, Ce is in the fcc phase ($\gamma$-Ce) in the configuration \[Xe\](6s5d)$^3$4f$^1$, with the 4$f$ electron strongly localized, and exhibits Curie-Weiss-type paramagnetism [@lawson_concerning_1949; @koskenmaki_chapter_1978]. However, below $\sim$200 K the ground state of $\alpha$-Ce is \[Xe\](6s5d)$^4$4f$^0$ and the 4$f$ electron is delocalized [@koskimaki_heat_1975]. $\alpha$-Ce has the same fcc structure as $\gamma$-Ce, but the lattice constant $a$ changes from 5.15 to 4.85 Å. Notably, $\alpha'$-Ce, a high-pressure variant of the $\alpha$ phase, is even superconducting with $T_c$ = 1.7 K [@wittig_superconductivity_1968; @Loa_lattice_2012]. Cerium is thus a paradigm of the interplay of magnetism and superconductivity.
The early proposal describing the $\gamma \rightarrow \alpha$ phase transition by the promotion of the *f* electron to the *sd* conduction band [@lawson_concerning_1949; @schuch_structure_1950] was found to be in disagreement with subsequent experiments [@gustafson_positron_1969; @kornstadt_investigation_1980; @podloucky_band_1983; @patthey_low-energy_1985] and, furthermore, not confirmed by band-structure calculations [@min_total-energy_1986]. Instead, a delocalization of the 4*f* electron into a 4*f* band in $\alpha$-Ce was suggested pointing towards an orbitally selective Mott transition (MT) [@gustafson_positron_1969; @johansson1974].
The nature of the $\gamma-\alpha$ transition, which can be tuned at ambient temperature by hydrostatic pressure, is still under debate [@held_cerium_2001; @de_medici_mott_2005; @lanata_2013]. The issue is complicated by the existence of an intervening ($320 \geq T \geq 170$ K) dhcp phase ($\beta$-Ce). The $\beta$ phase has similar electronic properties as the $\gamma$ phase, with localized 4*f* moments that order antiferromagnetically below 12.5 K [@wilkinson_neutron_1961; @gibbons_magnetic_1987]. This has been confirmed by density functional theory (DFT) in the local density approximation taking the onsite Hubbard interaction into account (LDA + U) [@amadon_ensuremathgamma_2008]. The $\gamma - \alpha$ transition proceeds much faster than the $\gamma - \beta$ transition. Since the transitions between these phases are of first order, it is very difficult to obtain single-phase Ce modifications [@wilkinson_neutron_1961; @koskimaki_preparation_1974].
Although the structural and electronic properties of $\alpha$-, $\beta$-, and $\gamma$-Ce have been studied experimentally and theoretically for decades, the different contributions of *s*, *d*, and *f* electrons to the total conductance have not been resolved. Mechanically controlled break junctions (MCBJ) offer the possibility to repeatedly open and close a contact established in a thin metallic wire (yet of macroscopic dimensions) and approach the quantum regime where the magnitude of the conductance is of the order of a few conductance quanta G$_0 = 2 e^2/h$ [@agrait_quantum_2003]. Here the conductance $G$ exhibits plateau-like features when the contact is gradually opened mechanically, interrupted by sharp steps to lower $G$. It has been demonstrated for many different metals that the transport at the last plateau before breakage is due to current flow through a single atom [@agrait_quantum_2003]. Further increasing the distance between the electrodes yields vacuum tunneling through the opened contact as evidenced by the exponential distance dependence of $G \ll {\rm G}_0$. The conductance on the last plateau, on the order of $G_0$, depends on the number of atomic valence orbitals at or close to the Fermi level $E_{\rm F}$ and on the transmission coefficients of the orbitals [@scheer_signature_1998]. Therefore, this technique is well suited to investigate atomic-size contacts of elemental metals with different electron configurations like cerium. We note in passing that the situation in metals differs distinctly from that in semiconductors. In the latter, conductance quantization, i.e., $G = n$ $G_0$ with $n$ integer, has been observed because the inverse Fermi wave number $k_{\rm F}^{-1}$ is much larger than the interatomic distance *a* due to the small conduction-electron density. In metals, on the other hand, $k_{\rm F}^{-1} \approx a$ leading to a strong intertwining of electronic properties and atomic structure in the contact.
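For reference, the conductance quantum that sets the scale of all plateau values follows directly from the exact SI values of the defining constants:

```python
# Conductance quantum G0 = 2 e^2 / h, the natural unit for single-atom contacts.
e = 1.602176634e-19   # elementary charge in C (exact SI value)
h = 6.62607015e-34    # Planck constant in J s (exact SI value)

G0 = 2 * e**2 / h     # conductance quantum in siemens
R0 = 1 / G0           # corresponding resistance in ohm (~12.9 kOhm)
print(G0, R0)
```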
Here we report on the conductance of Ce MCBJ contacts on the last plateau, i.e., of single-atom contacts, obtained from polycrystalline wires (Fig. \[fig1\]a).
Figure \[fig1\]b shows the conductance $G$ in units of $G_0$ vs. the electrode distance $\Delta x$ of cerium contacts at 4.2 K. Here, $\Delta x$ was determined from conductance measurements in the tunneling regime and the distance zero was arbitrarily set to the onset of conductance upon closing the contact signaled by a discontinuous jump from $G \ll {\rm G}_0$ to $G \approx {\rm G}_0$. The curves are typical of conductance curves of many elemental metals [@agrait_quantum_2003]. Upon stretching the contact, the conductance decreases in several steps due to the structural relaxation of the material and reformation of the atomic structure at the neck until finally a last plateau (Fig. \[fig1\]b, red arrow) is reached before the contact eventually breaks and the conductance jumps to zero. Usually this behaviour is evaluated statistically on a large number of curves to reveal the most frequently occurring conductance values [@agrait_quantum_2003].
Here we focus on the conductance *G’* of the last plateau before breaking at 4.2 K, characteristic of the conductance of a single-atom contact, for some rare-earth metal contacts in Fig. \[fig1\]c. We find, again as usual, a broad distribution of conductances. While for ferromagnetic Gd, ferromagnetic dysprosium [@muller_switching_2011], and nonmagnetic yttrium we observe a distribution of *G’* with a maximum at $\bar{G'} = 0.6\, G_0$, $\bar{G'} = 0.91\, G_0$, or $\bar{G'} = 1.11 \, G_0$, respectively, a broader distribution with a maximum at $\bar{G'} = 1.79 \, G_0$ is observed for this particular contact of cerium. The values of $\bar{G'}$ for Dy and Y are close to $G_0$ as likewise observed for 3*d* transition metals [@agrait_quantum_2003]. The conductance for Gd is in agreement with a recent investigation of lithographically prepared Gd MCBJ, where the conductance histogram - taking into account all plateaus observed below 20 $G_0$ - revealed a maximum at 0.75 $G_0$ [@olivera_electronic_2016]. The low conductance of Gd is attributed to the hybridization between $s$ and $p_z$ conduction channels which reduces the conductance of the pure $s$ channels as inferred from DFT calculations. The electronic transport through atomic contacts is usually considered ballistic where the conductance is expressed by transport channels with a certain transmission of the electronic wave function in the Landauer-Büttiker theory [@agrait_quantum_2003]. The strongly enhanced conductance of Ce single-atom contacts compared to transition metals and to the rare-earth metals Gd and Dy with localized 4$f$ conduction electrons suggests that additional transport channels contribute to the total conductance of Ce which are attributed to the 4*f* orbital.
In order to investigate the effect of the 4$f$ configuration on single-atom Ce contacts, we employed the content of $\alpha$-Ce in our Ce samples as a control parameter. To this end, we subjected the Ce ingots to different heat treatments, see Methods section for details. q-Ce samples had been quenched from the melt to room temperature. These samples had passed very quickly through the $\gamma-\beta$ transition and are expected to contain a large volume fraction of $\alpha$-Ce at low $T$. a-Ce samples had been annealed at 600 $^{\circ}$C and slowly cooled to 100 $^{\circ}$C. The $\gamma-\beta$ transition ($T_{\gamma \beta}$ = 60 $^{\circ}$C) was passed even more slowly to 40 $^{\circ}$C at a rate of 3 $^{\circ}$C/h. These samples are expected to contain a larger volume fraction of $\beta$-Ce and a smaller fraction of $\alpha$-Ce at low temperatures. The samples were then cooled from room temperature to low temperatures and their structure was checked by x-ray diffraction shown in Fig. \[fig2\].
At room temperature $T$ = 300 K all Bragg reflections can be assigned to the $\gamma$ phase of cerium, see inset of Fig. \[fig2\]a. The intensities of the individual Bragg reflections deviate from the intensities expected for a powder of randomly distributed grains due to a preferred orientation of some crystallites along the \[111\] direction in the polycrystalline ingot. Upon cooling to 20 K, the $\gamma$(111) reflection shifts to larger angles (smaller lattice-plane distances) and is at 20 K attributed to the $\beta$(004) reflection. Below 100 K an additional peak develops around $2 \theta = 32^{\circ}$, see inset of Fig. \[fig2\]d, due to the transformation to the $\alpha$ phase below 100 K. Figs. \[fig2\] a and d clearly show that a phase mixture of $\alpha$-Ce and $\gamma$-Ce forms at low temperatures with a higher fraction of $\alpha$-Ce in q-Ce compared to a-Ce. Furthermore, the fraction depends on whether the sample was cooled fast (qf-Ce, af-Ce, coloured areas) or slowly (qs-Ce, as-Ce, solid lines). For a quantitative estimate of the fraction of $\alpha$-Ce at 20 K, we estimate the ratio $S_{\alpha}$ of the integrated intensities $I$ of the two peaks centered at $2 \theta = 30.3^{\circ}$ and $32.1^{\circ}$, $S_{\alpha} = I(32.1^{\circ})/[I(32.1^{\circ})+I(30.3^{\circ})]$. Table \[table1\] shows that the fast cooled qf-Ce has a much larger $S_{\alpha}$ and, hence, fraction of $\alpha$-Ce than the slowly cooled as-Ce. These results are in perfect agreement with earlier investigations by Gschneidner et al. [@gschneidner_effects_1962].
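The phase-fraction estimate $S_{\alpha}$ is simply a ratio of integrated peak intensities; a minimal sketch (the $\pm 0.5^{\circ}$ integration window and uniform $2\theta$ grid are our assumptions, not the authors' procedure):

```python
import numpy as np

def S_alpha(two_theta, counts, c_alpha=32.1, c_gamma=30.3, half_width=0.5):
    """S_alpha = I(32.1 deg) / [I(32.1 deg) + I(30.3 deg)] from integrated
    peak intensities; assumes a uniform 2-theta grid."""
    step = two_theta[1] - two_theta[0]

    def intensity(center):
        # Rectangle-rule integral of the counts in a window around the peak.
        mask = np.abs(two_theta - center) <= half_width
        return counts[mask].sum() * step

    I_a, I_g = intensity(c_alpha), intensity(c_gamma)
    return I_a / (I_a + I_g)
```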
The phase transformations are also observed in the temperature dependence of the resistivity, see Figs. \[fig2\]b and e, for which the annealed a-Ce exhibits a factor-of-five lower resistivity than q-Ce which was rapidly cooled from the melt. In both samples, the resistivity drops while cooling through the $\gamma \rightarrow \alpha$ transition and strongly increases while heating through the $\alpha \rightarrow \gamma$ transition with a hysteresis characteristic for a first-order phase transition. For q-Ce, the temperatures where the resistive transitions set in, agree again very well with data by Gschneidner et al. [@gschneidner_effects_1962]. In contrast, the hysteresis observed for a-Ce is much broader, possibly due to the larger amount of $\beta$ phase hampering and delaying the transformation from $\gamma$- to $\alpha$-Ce [@james_resistivity_1952].
At temperatures below 20 K, a-Ce shows a kink in $\rho(T)$ around 12 K (Fig. \[fig2\]f) characteristic for the onset of antiferromagnetic order in $\beta$-Ce [@james_resistivity_1952; @wilkinson_neutron_1961]. In the antiferromagnetic regime, the resistivity follows a $\rho \propto T^2$ dependence due to antiferromagnetic spin waves [@ueda_electrical_1977]. The kink is only weakly observed in q-Ce (Fig. \[fig2\]c). We use the resistivity ratio $RR_{\beta} = \rho(15\, {\rm K})/\rho(2\, {\rm K})$ to indicate the amount of $\beta$ phase in the sample. Table \[table1\] shows that the relative amount of $\beta$ phase decreases from slowly cooled as-Ce to fast cooled qf-Ce in agreement with the corresponding increase of the fraction of $\alpha$-Ce estimated from the x-ray intensity ratio $S_{\alpha}$. In summary, samples with different volume fractions of $\alpha$-Ce and $\beta$-Ce at low temperatures have been successfully prepared and characterized by x-ray diffraction and resistivity data to disentangle their respective contributions to the total conductance of Ce atomic contacts.
We now turn to the conductance properties of the different Ce nanocontacts. We proceed by considering the conductance value of the last plateau $G'$ (Fig. \[fig1\]b) and plot in Fig. \[fig3\] histograms of four representative samples, thus focusing on single-atom-contact histograms instead of those of full conductance curves $G(\Delta x)$. These histograms comprise opening curves only because closing curves generally show much larger last-plateau values extending up to 5$G_0$. This suggests that upon closing, several atoms instantaneously form the contact.
We first note that the conductance values $G'$ of the a-Ce samples follow a Gaussian distribution $$N(G') = \frac{A}{\sigma\sqrt{2\pi}}\,\exp\left[-\frac{(G'-\bar{G'})^2}{2\sigma^2}\right]
\label{eqn1}$$ ($A$: area, $\bar{G'}$: mean value, $\sigma$: standard deviation) much more closely and smoothly than the q-Ce samples. As mentioned above, the former contain more $\beta$ phase with localized stable Ce$^{3+}$ moments. It is highly plausible that the local environment of atoms in the contact with reduced number of nearest neighbours leads to a further stabilization of localized moments. This observation suggests that it is not the hybridization of 4$f$ electrons with intra-atomic $6s/5d$ orbitals but rather the hybridization with nearest-neighbour orbitals which is essential. The hybridization possibly further decreases while opening the contact due to the elongation of interatomic bonds when a single atom forms the contact. Indeed, scanning tunneling spectroscopy on single Ce or Co impurities on the Ag or Cu surface show a strong reduction of the Kondo temperature $T_{\rm K}$ for thin layers and for single atoms and clusters as compared to $T_{\rm K}$ of the corresponding bulk solids, due to the reduction of the number of nearest neighbours and the ensuing decrease of hybridization of the magnetic impurity with respect to the bulk electronic system of the host crystal [@li_kondo_1998; @schneider_kondo_2005; @ternes_spectroscopic_2009]. Vice versa, for individual Co adatoms on Cu(100), $T_{\rm K}$ strongly increases toward the bulk value upon decreasing the tip-adatom distance due to the stronger hybridization between the Co 3*d* level and the conduction-electron states of the Cu substrate and the W tip [@neel_conductance_2007]. In line with these arguments a substantial change of the 4$f$ electronic structure at the surface of $\alpha$-Ce towards a $\gamma$-like behaviour was observed in photoemission experiments [@weschke_surface_1991]. Therefore, in the as-samples with a large fraction of $\beta$-Ce and stabilization of Ce$^{3+}$ moments in the contact region, a homogeneous distribution of $G'$ is expected.
The af sample is actually better described by a dominant Gaussian centered at $\bar{G'_1}$ = 1.6 $G_0$ and a three-times smaller Gaussian centered at $\bar{G'_2}$ = 2.1 $G_0$, indicating that fast cooling from room temperature also favours the $\alpha$-phase formation to some extent. On the other hand, the histograms of the q samples can be clearly decomposed into two Gaussians, one centered around $\bar{G'_1} \approx$ 1.5 $G_0$ and the other around $\bar{G'_2}$ = 2.1 $G_0$. These two components would then correspond to stable Ce moments reminiscent of the $\beta$ phase ($G'_1$) and strongly hybridized moments as in the $\alpha$ phase ($G'_2$) with a finite transmission probability due to their more delocalized nature, respectively. Table \[table1\] summarizes this behaviour, i.e., samples with a larger volume fraction of $\alpha$-Ce, represented by a large $S_{\alpha}$ and a small $RR_{\beta}$, more frequently show a higher conductance $G'_2$ of the last plateau inferred from the ratio of the areas $A_1$ and $A_2$ of the two distribution functions. The enhanced conductance clearly depends on the sample treatment and the different volume fraction of electronically different phases. We now discuss the implication of our results for the $\gamma - \alpha$ transition. In both the Kondo volume collapse (KVC) and MT models the *f* electrons are strongly correlated in the $\alpha$ and $\gamma$ phases and both models are in qualitative agreement with the localization-delocalization picture. Specifically, the MT model of *f* electrons considers localized nonbonding *f* states in $\gamma$-Ce which are favoured by an on-site *f-f* Coulomb interaction $U$ being larger than the *f*-hybridization energy [@johansson1974]. While $U$, being an intra-atomic quantity, might be considered the same for bulk Ce and single-atom Ce contacts, the *f*-hybridization energy may be strongly reduced in the latter via the decrease of 4*f*-(*sd*)$^3$ hybridization, see below.
Furthermore, the effectively one-dimensional contact would entail an additional reduction of the bandwidth which, in a tight-binding model, is proportional to the number of nearest neighbours [@ashcroft_mermin], i.e., two in the single-atom contact vs. twelve in bulk fcc cerium. This would lead to a strong tendency of pushing the contact towards the Mott-insulating side compared to bulk. Thus, our observation of nearly delocalized 4*f* electrons participating in electronic conduction is at variance with the MT model for the $\gamma - \alpha$ transition in the contact region. In the Kondo volume collapse (KVC) model, the 4*f* electrons are nearly localized and exhibit a stable moment in both $\alpha$- and $\gamma$-Ce but experience a different screening by the *sd* conduction electrons resulting in unscreened moments in $\gamma$-Ce and screened moments in $\alpha$-Ce. Spin fluctuations give rise to the phase transition [@lavagna_volume_1982; @allen_kondo_1982]. The KVC is corroborated by x-ray diffraction and x-ray emission spectroscopy [@lipp_thermal_2008; @lipp_x-ray_2012] and has been predicted to occur at the nanoscale down to the dimer level [@casadei_density-functional_2012]. Experiments and first-principles calculations suggest that Ce has a low-temperature critical point at negative pressures [@thompson_two_1983; @lashley_tricritical_2006] and is therefore close to being quantum critical [@lanata_2013]. The latter work has pointed out that spin-orbit coupling plays an important role in hampering the local fluctuations induced in the $f$ local space of the large-volume $\gamma$ phase.
One would expect that, since the 4$f$-($sd$)$^3$ hybridization is primarily with neighbouring atoms, the Kondo temperature $T_{\rm K} \approx 790$K for $\alpha$-Ce [@allen_kondo_1982] would be strongly reduced. However, Kondo-like behaviour might still be observed as long as $T_{\rm K}$ exceeds the measuring temperature of 4.2 K. It is important to point out that in one-dimensional metals the Kondo effect leads to an *enhancement* of the conductance as shown in numerous examples [@iqbal_odd_2013; @park_coulomb_2002; @liang_kondo_2002; @wiel_kondo_2000]. We are therefore led to the conclusion that although the hybridization between conduction electrons and 4$f$ electron at the single atom of the contact very likely is reduced, the $\alpha$-phase-rich q-Ce contacts facilitate $f$-electron transport across the junction which is qualitatively in line with the KVC model.\
**Methods**\
All samples were prepared from the same Ce starting material (purity 99.99 %, Atlantic Equipment Engineers). This was first melted several times in an arc furnace under argon atmosphere to obtain a homogeneous ingot. After heating, the melt solidified rapidly by cooling to 18 $^\circ$C in approximately one minute by contact with a water-cooled copper plate. The material prepared in this manner was labeled q-Ce. The ingot was cut into two halves, one of which was thermally annealed. For this purpose, it was put in an alumina crucible and sealed in a quartz tube under argon atmosphere ($p\approx 5\cdot 10^{-2}$ mbar). The quartz tube was thermally annealed for several days. First, the temperature was raised to $T=600\,^\circ$C to cross the $\gamma$-$\beta$ phase boundary and then held there for 8 hours. Afterwards the temperature was lowered over a period of about 4 days to $T=100\,^\circ$C and then lowered even more slowly, with a cooling rate of $\partial T / \partial t=3\,^\circ\textnormal{C}/\textnormal{h}$, while crossing the $\gamma$-$\beta$ phase boundary ($T_{\gamma,\beta}\approx 60\,^\circ$C). The material prepared in this manner was labeled a-Ce.\
X-ray diffraction was done using a Siemens D500 powder diffractometer equipped with a $^4$He flow cryostat and with the sample under high vacuum ($p=10^{-7}$ mbar). Pieces of 0.5-mm thickness were cut from the ingot along several arbitrary spatial directions. They were properly cleaned in acetone and their oxide layers were carefully removed with a scalpel and then polished with abrasive sandpaper. Afterwards they were immediately covered with a thin film of highly diluted GE varnish to protect them against further oxidation. X-ray diffractograms were obtained in $\theta$-$2\theta$ Bragg-Brentano mode using Cu-K$_\alpha$ radiation and a Ni foil to reduce the contribution from Cu-K$_\beta$ radiation.\
Resistivity data were taken on bulk Ce samples, having been subjected to the different heat treatments described above, in a physical-property measurement system (PPMS, Quantum Design) in a four-point probe setup. Cerium pieces of 0.5 mm $\times$ 0.5 mm $\times$ 10 mm size were cut off the ingots and contacted with copper wires using a conductive epoxy EPO-TEK H20E.\
For the MCBJ, thin wires of $0.1 \times 0.1$ mm $^2$ cross section and 8 mm length were cut from the ingot. A notch was cut in the middle of the wire as a predetermined point where it should break during bending. The wire was glued with Stycast epoxy to a flexible 0.3-mm thick copper-bronze substrate coated with a 2-$\mu$m thick durimide film for electrical insulation. It was crucial to heavily coat the sample with Stycast for stabilization of the wire sustaining the structural $\gamma \rightarrow \alpha$ phase transition (with intervening $\beta$ phase) which is accompanied by a huge volume change and thus generation of mechanical stress. The Ce wire was connected with conductive epoxy EPO-TEK H20E to four copper leads in order to perform four-point conductance measurements. This assembly was then mounted in a MCBJ device with countersupports 8 mm apart and cooled to 4.2 K in a $^4$He bath cryostat. The substrate was bent mechanically by pushing a piston against the back of the substrate and fine tuning of the bending was achieved by using a piezo stack controlled by a voltage $V_{\rm p}$. A voltage of 10 $\mu$V was applied to the junction and the current through the junction was measured. The electrode distance $\Delta x$ was obtained from $G(V_{\rm p})$ in the tunneling regime by using appropriate work functions of the materials as described earlier [@muller_switching_2011]. All conductance measurements on the MCBJ devices were carried out at 4.2 K.\
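The distance calibration described above exploits the exponential decay of the vacuum-tunneling conductance, $G \propto \exp(-2\kappa\Delta x)$ with $\kappa = \sqrt{2m\phi}/\hbar$. A minimal sketch ($\phi$ = 3 eV is an illustrative work function, not the value used by the authors):

```python
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant, J s
m_e = 9.1093837015e-31   # electron mass, kg
eV = 1.602176634e-19     # J per eV

def kappa(phi_eV):
    """Inverse decay length of vacuum tunneling, kappa = sqrt(2 m phi) / hbar, in 1/m."""
    return np.sqrt(2.0 * m_e * phi_eV * eV) / hbar

def delta_x(G1, G2, phi_eV):
    """Electrode displacement from G ~ exp(-2 kappa x): dx = ln(G1/G2) / (2 kappa)."""
    return np.log(G1 / G2) / (2.0 * kappa(phi_eV))
```

For a 3 eV barrier, a decade drop in $G$ corresponds to roughly 1.3 Å of electrode displacement, which is why the tunneling regime gives a sensitive length calibration.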
**Acknowledgements**\
We thank K. Held and J. Schmalian for valuable discussions and W. Kittler for help with the annealing of the Ce ingot.\
-------- ---------------- -------------- -------------- -------------------------- -------------------------- -----------
Sample Cooling rate $S_{\alpha}$ $RR_{\beta}$ $\bar{G'_1}\pm \sigma_1$ $\bar{G'_2}\pm \sigma_2$ $A_2/A_1$
(K min$^{-1})$ ($G_0$) ($G_0$)
as-Ce 1 0.49 1.62 $1.52\pm 0.46$ - -
af-Ce 10 0.51 1.59 $1.61\pm 0.29$ $2.10 \pm 0.26$ 0.28
qs-Ce 1 0.65 1.47 $1.45 \pm 0.25$ $2.05 \pm 0.29$ 1.29
qf-Ce 10 0.67 1.39 $1.56 \pm 0.28$ $2.16 \pm 0.25$ 0.50
-------- ---------------- -------------- -------------- -------------------------- -------------------------- -----------
: \[table1\]**Characteristic parameters of single-atom Ce contacts.** Ratio $S_{\alpha}$ of integrated x-ray intensities and resistivity ratio $RR_{\beta}$ for the different Ce samples cooled with different rates from 300 K to 20 K. $\bar{G'_1}$ and $\bar{G'_2}$ are the mean values of the two Gaussian distribution functions and $A_2/A_1$ is the ratio of their areas.
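The two-Gaussian decomposition summarized in the table can be sketched as follows. The synthetic conductance samples are loosely modelled on the qs-Ce row (peaks near 1.45 and 2.05 $G_0$); the counts, widths, and bin choices are illustrative assumptions, not the measured histograms:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(g, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian peaks, as used for the G' histograms in the table."""
    return (a1 * np.exp(-0.5 * ((g - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((g - mu2) / s2) ** 2))

rng = np.random.default_rng(0)
# Synthetic single-atom-contact conductances (units of G0), hypothetical
# population sizes for the two peaks.
samples = np.concatenate([rng.normal(1.45, 0.25, 4000),
                          rng.normal(2.05, 0.29, 5000)])
counts, edges = np.histogram(samples, bins=80, range=(0.5, 3.0))
centers = 0.5 * (edges[:-1] + edges[1:])

p0 = [counts.max(), 1.4, 0.2, counts.max(), 2.1, 0.3]
popt, _ = curve_fit(two_gaussians, centers, counts, p0=p0)
mu1, mu2 = sorted([popt[1], popt[4]])
print(f"peak positions: {mu1:.2f} G0 and {mu2:.2f} G0")
```

The area ratio $A_2/A_1$ of the table would follow from the fitted amplitude-width products $a_2 s_2 / (a_1 s_1)$ of the two components.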
---
abstract: 'Vibrational properties of the iron-chalcogenide superconductor K$_{0.75}$Fe$_{1.75}$Se$_{2}$ with $T_{c}\sim$ 30 K have been measured by Raman and optical spectroscopies over the temperature range 3-300 K. The sample undergoes an *I4/m* $\to $ *I4* structural phase transition, accompanied by a loss of inversion symmetry, at $T_{1}$ below 250 K, observed as the appearance of a new fully symmetric Raman mode at $\sim$ 165 cm$^{-1}$. Small vibrational-mode anomalies are also observed at $T_{2}\sim$ 160 K. From a first-principles vibrational analysis of antiferromagnetic K$_{0.8}$Fe$_{1.6}$Se$_{2}$ utilizing pseudopotentials, all observed Raman and infrared modes have been assigned, and the displacement pattern of the new Raman mode is identified as involving predominantly the Se atoms.'
author:
- 'A. Ignatov$^{1}$, A. Kumar$^{1}$, P. Lubik$^{1}$, R. H. Yuan$^{2}$, W. T. Guo$^{2}$, N. L. Wang$^{2}$, K. Rabe$^{1}$, and G. Blumberg$^{1}$'
title: 'Structural phase transition below 250 K in superconducting K$_{0.75}$Fe$_{1.75}$Se$_{2}$'
---
INTRODUCTION
============
The discovery of high-$T_{c}$ superconductivity in the iron-based chalcogenides A$_{y}$Fe$_{1.6+x}$Se$_{2}$ (A=K, Rb, Cs, and Tl) Ref. \[\] has attracted considerable attention since these materials exhibit unusual physical properties. The parent compound ($y=$1, $x=$0) is an insulator, [@transport; @gap] crystallizes into the $\sqrt 5$x$\sqrt 5$x1 *I4/m* Fe vacancy-ordered structure, and exhibits antiferromagnetic (AFM) order below a Néel temperature of $\sim$ 560 K Ref.\[\]. Doping with alkali metals or Tl ($y<$1) apparently preserves the Fe vacancy ordering and gives rise to superconductivity in samples with close to 2:4:5 stoichiometry [@compos]. Early transport [@transport] and neutron diffraction [@structure] studies suggested that the superconductivity coexists with the AFM order. Alternatively, the doping is discussed in terms of a microscopic phase separation [@TEM; @XRD; @XRD1; @NQR0; @optics]: a mixture of a vacancy-ordered AFM insulating phase and a superconducting (SC) phase. Based on recent experimental evidence [@mSR; @NQR; @INS; @STM], a consensus seems to emerge: the AFM and SC phases are spatially separated, the AFM phase occupies from $\sim$ 80 Ref. \[\] to 95 % Ref. \[\] of the sample volume, and the SC phase is homogeneous and contains neither Fe vacancies nor magnetic moments. [@NQR; @INS; @STM]
A Raman scattering study of superconducting K$_{0.8}$Fe$_{1.6}$Se$_{2}$ observed at least 13 phonon modes [@Raman1]. The crystal symmetry of the sample was determined to be $C_{4h}$ or lower. Zhang *et al.* performed an LDA vibrational analysis of the nonmagnetic *I4/m* K$_{0.8}$Fe$_{1.6}$Se$_{2}$ phase and assigned the majority of the observed Raman modes. A study of the vibrational properties of K$_{0.88}$Fe$_{1.63}$S$_{2}$, isostructural to K$_{0.8}$Fe$_{1.6}$Se$_{2}$, confirmed the Fe-vacancy ordering: 14 Raman-active modes predicted by factor-group analysis were observed and assigned. The authors concluded that the phonon energies in the range of 80-300 K are driven by anharmonicity effects without any signature of electron-phonon interaction [@Raman2]. The impact of the iron and potassium composition on the Raman vibrational spectra of A$_{0.8}$Fe$_{1.6}$Se$_{2}$ (A=K, Rb, and Tl) was presented in Ref. \[\].
The optical studies to date have shown at least ten IR-active modes at low temperatures. [@optics1; @optics2] The in-plane optical conductivity of a superconducting sample ($T_{c}=$31 K) is incoherent at 300 K, dominated by IR-active modes and high-frequency excitations [@optics2], but becomes coherent just above $T_{c}$. The small carrier concentration prompted the authors [@optics; @optics2] to suggest that the global superconductivity is due to Josephson coupling of a nanoscale superconducting phase embedded in the AFM-ordered insulating phase.
In this paper we report on Raman scattering and *ab*-plane optical conductivity studies of superconducting K$_{0.75}$Fe$_{1.75}$Se$_{2}$ ($T_{c}\sim$ 30 K) in the $T$-range from 3 to 300 K. At least 19 Raman-active and 12 IR-active modes are observed at 3 K. The $\sim$ 136, 143, 242, and 277 cm$^{-1}$ Raman modes and the $\sim$ 208 cm$^{-1}$ IR mode exhibit Fano-like shapes. The Raman Fano modes are due to vibrational coupling to AFM spin fluctuations, while the IR mode is coupled to charge carriers in the low-frequency part of the optical conductivity. The Raman phonon linewidths contain approximately equal contributions from two-phonon lattice anharmonicity on one hand and bare self-energy plus broadening due to intrinsic defects on the other, except for the $\sim$ 100 cm$^{-1}$ mode, which is dominated by inhomogeneous broadening. We show that K$_{0.75}$Fe$_{1.75}$Se$_{2}$ undergoes an *I4/m* $\to $ *I4* structural phase transition at $T_{1}$ below 250 K. Several modes that are not Raman or IR active in the measured geometry in *I4/m* become clearly visible in the *I4* phase. The symmetry of the Se-Fe slab is broken at $T_{1}$. At $T_{2}\sim$ 160 K the Raman vibrational modes exhibit weak anomalies, seen as small discontinuities of the vibrational frequencies and as changes in the intensity-versus-temperature dependencies. The Raman intensities of a few modes increase between $T_{1}$ and $T_{2}$, saturating below $T_{2}$, except for three modes dominated by $c$-axis atomic displacements: $c$-axis structural distortions within the slab appear to build up on cooling down to 3 K. The low-frequency optical conductivity displays a weak temperature dependence above $T_{1}$ followed by a faster increase below $T_{2}$.
EXPERIMENTAL
============
Crystals of the iron-chalcogenide superconductor were grown by a self-melting method with a nominal concentration of 0.8:2.1:2.0 (K:Fe:Se). The actual chemical composition was determined by EDXS to be K$_{0.75}$Fe$_{1.75}$Se$_{2}$ (KFS). A two-step transition is seen in the resistivity curve [@optics]: a sharp drop at 42 K is followed by the major superconducting transition with $T_{c}\sim$ 30 K. Further details of the sample characterization can be found elsewhere. [@optics]
The KFS crystals were never exposed to air. The sealed vial was opened under 99.999[%]{} N$_{2}$; the crystal was removed and glued onto the replaceable copper sample holder of a helium Oxford Instruments cryostat, dried and cleaved along the *ab*-plane, transferred to the He-flow cryostat, and quickly cooled below the freezing temperature of water. Raman data were obtained on two single crystals. The data presented in this paper refer to the sample with the more detailed temperature-dependence records. It is worth mentioning that the results obtained on the second sample are consistent with the findings reported here.
Raman spectra were excited with the Kr$^{+}$ laser line of $\lambda =$647.1 nm ($E=$1.92 eV), with less than 10 mW of incident laser power focused into a spot of $\sim$ 50x100 $\mu $m$^{2}$ on the freshly cleaved *ab*-plane crystal surface. The scattered light, collected close to the backscattering geometry, was focused onto the 100x240 $\mu $m$^{2}$ entrance slit of a custom triple-stage spectrometer equipped with 1800 lines/mm gratings. The instrumental resolution was $\sim$ 1.4 cm$^{-1}$. To record symmetry-resolved Raman spectra we employed circularly polarized light, with optical configurations selecting either the same or opposite chirality for the incident and scattered light. The former is referred to as the right-right (RR) and the latter as the right-left (RL) configuration. For the $C_{4h}$ point group the $B_{g}$ ($A_{g})$ symmetry is probed in the RL (RR) scattering geometry. Temperature-dependent Raman spectra were collected at 3, 45, 100, 150, 160, 180, 200, 260, and 300 K with a $T$-stability better than 0.1 K. The estimated local heating in the laser spot did not exceed 4 K.
Optical measurements were performed with a Bruker Vertex 80v spectrometer in the frequency range from 25 to 10000 cm$^{-1}$. The sample was kept under a vacuum of 2$\times$10$^{-5}$ Pa. An in-situ gold and aluminum over-coating technique was used to obtain the reflectance $R(\omega )$ for light polarized in the KFS (*ab*)-planes. The real part of the conductivity, $\sigma_{1}(\omega )$, was obtained by the Kramers-Kronig transformation of $R(\omega )$. Optical spectra were collected at 8, 35, 170, and 300 K.
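The Kramers-Kronig step can be illustrated on a model response for which the answer is known analytically. The sketch below applies the dielectric-function form of the transform to a Lorentz oscillator (arbitrary units, illustrative parameters); it is not the full reflectance-phase procedure used for the data, which additionally requires low- and high-frequency extrapolations of $R(\omega)$:

```python
import numpy as np

def kk_eps1(w_eval, w, eps2):
    """Kramers-Kronig: eps1(w) = 1 + (2/pi) P int w' eps2(w') / (w'^2 - w^2) dw'.
    Evaluating at points interleaved with the integration grid makes the
    principal value need no special treatment (Maclaurin-style rule)."""
    dw = w[1] - w[0]
    out = np.empty_like(w_eval)
    for i, we in enumerate(w_eval):
        out[i] = 1.0 + (2.0 / np.pi) * np.sum(w * eps2 / (w**2 - we**2)) * dw
    return out

# Lorentz oscillator with known eps1 to check the transform against
w0, wp, gamma = 200.0, 150.0, 20.0
w = np.linspace(0.5, 2000.0, 4000)          # integration grid
D = (w0**2 - w**2) ** 2 + (gamma * w) ** 2
eps2 = wp**2 * gamma * w / D

w_eval = w[:-1] + 0.5 * (w[1] - w[0])       # midpoints: off the grid
eps1_kk = kk_eps1(w_eval, w, eps2)
D_eval = (w0**2 - w_eval**2) ** 2 + (gamma * w_eval) ** 2
eps1_exact = 1.0 + wp**2 * (w0**2 - w_eval**2) / D_eval

window = (w_eval > 50) & (w_eval < 1000)
err = np.max(np.abs(eps1_kk - eps1_exact)[window])
print(f"max KK error in 50-1000 window: {err:.3f}")
```

Away from the truncated integration limits the numerical transform reproduces the analytic $\varepsilon_1(\omega)$ closely, which is the self-consistency check usually applied before trusting a Kramers-Kronig-derived $\sigma_1(\omega)$.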
RESULTS
=======
K$_{0.8}$Fe$_{1.6}$Se$_{2}$ crystallizes into the tetragonal structure *I4/m* (space group [\#]{}87) Ref. \[\], resulting in the irreducible vibrational representation: $$\Gamma_{\text{vib}} = 9A_{g} \oplus 8B_{g} \oplus 8E_{g} \oplus 9A_{u} \oplus 7B_{u} \oplus 10E_{u}$$ All $g$-modes are Raman active, but only the $A_{g}$ and $B_{g}$ modes are selected with the RR and RL polarizations in the measured geometry. The $A_{u}$ and $E_{u}$ vibrations are infrared active along the $c$-axis and in the $ab$-plane, respectively. The $B_{u}$ modes are silent. Fe(1)-vacancy-related vibrational modes are excluded. Throughout this paper we adopt the commonly used site designation: K(1), K(2), Fe(1), Fe(2), Se(1), and Se(2) stand, respectively, for the Wyckoff positions 2$a$, 8$h$, 4$d$, 16$i$, 4$e$, and 16$i$; refer to the legend of Fig.\[fig1\].
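The bookkeeping implied by Eq. (1) and the selection rules above can be tallied explicitly (a trivial sketch; note that these counts still include the acoustic modes, which the comparison with computed frequencies later omits):

```python
# Mode tally for the I4/m decomposition of Eq. (1):
# Gamma_vib = 9Ag + 8Bg + 8Eg + 9Au + 7Bu + 10Eu
gamma_vib = {"Ag": 9, "Bg": 8, "Eg": 8, "Au": 9, "Bu": 7, "Eu": 10}

raman_active = {k: n for k, n in gamma_vib.items() if k.endswith("g")}
rr_selected = gamma_vib["Ag"]   # right-right circular polarization -> Ag
rl_selected = gamma_vib["Bg"]   # right-left  circular polarization -> Bg
ir_c_axis = gamma_vib["Au"]     # infrared active along the c-axis
ir_ab_plane = gamma_vib["Eu"]   # infrared active in the ab-plane
silent = gamma_vib["Bu"]

print(sum(raman_active.values()), rr_selected + rl_selected, ir_ab_plane)
```

Of the 25 Raman-active modes, only the 9 $A_g$ + 8 $B_g$ = 17 are accessible in the RR/RL backscattering geometry, which is why the $E_g$ modes do not appear in the room-temperature spectra.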
{width="18.0cm"}
Raman spectra of K$_{0.75}$Fe$_{1.75}$Se$_{2}$ are shown in Figs. \[fig1\](a) and \[fig1\](b) for the RL and RR polarizations, respectively. At 300 K, at least 7(9) modes are observed in RL (RR), in good agreement with the 9$A_{g}$+8$B_{g}$ expected in the *I4/m* $\sqrt 5$x$\sqrt 5$ cell of K$_{0.8}$Fe$_{1.6}$Se$_{2}$ [@Raman1]. The $A_{g}$ modes at $\sim$ 112 and 267 cm$^{-1}$ are dominated by both chiral and breathing displacements of the K(2) and Fe(2) atoms, respectively, Fig. \[fig1\](d,e). Below 200 K new modes appear (marked with red arrows): at $\sim$ 201 cm$^{-1}$ in RL and at $\sim$ 165 and $\sim$ 211 cm$^{-1}$ in RR. The $\sim$ 136 and $\sim$ 277 cm$^{-1}$ phonons in RL and the $\sim$ 144 and $\sim$ 242 cm$^{-1}$ phonons in RR exhibit Fano shapes over the whole temperature range of this study. The Fano modes become more symmetric as the temperature decreases. The phonon-mode parameters, derived from least-squares fits to the experimental data, are summarized in Table \[tab1\].
The low-frequency region of the optical conductivity, adopted from Fig. 2 in Ref. \[\], is shown in Fig \[fig1\](c). In agreement with previous studies [@optics; @optics2], $\sigma_{1}(\omega )$ is small (characteristic of a poor metal) and is dominated by the infrared-active vibrations and by interband features at higher energies. At 170 and 300 K, 9 IR-active modes are observed. The $\sim$ 208 cm$^{-1}$ mode exhibits a Fano-like shape, becoming more asymmetric on cooling. At 170 K and below, at least three new modes (red arrows) are formed, Fig. \[fig1\](c). An inspection of Table \[tab1\] reveals that the Raman and IR modes reported in this work do not overlap. It is therefore tempting to conclude that inversion symmetry is preserved. In Section IV we argue that inversion symmetry is actually broken below $T_{1}\sim$ 250 K. The conductivity displays a relatively weak temperature dependence above 170 K, followed by an approximately two-fold increase of the continuum as the temperature drops from 170 to 35 K. In agreement with previous studies [@optics1; @optics2], a Drude-like peak is seen in the 35 K data, shortly before the sample becomes superconducting.
AFM, *I4/m* Raman ($\omega $, $\Gamma )$ Ref.\[\] IR ($\omega $, $\Gamma )$ Ref.\[\]
-------------------- ------------------------------ ---------- --------------------------- ----------
63.6 $A_{g}$ 67.6, 3.5 66.3
79.9 $A_{g}$ 81.0, 6.1
89.0 $A_{g}$ 111.8, 15.0
108.2 $A_{g}$ 124.8, 11.1 123.8
126.0 $A_{g}$ Fano 135.9 134.6
173.5 $A_{g}$ 182.5, 3.4
211.1 $A_{g}$ 205.3, 3.6 202.9
236.0 $A_{g}$ Fano 242.3 239.4
265.9 $A_{g}$ 267.0, 5.3 264.6
57.9 $B_{g}$ 63.1, 6.2 61.4
66.4 $B_{g}$ - -
98.2 $B_{g}$ 100.6, 12.8 100.6
117.3 $B_{g}$ 103.3, 2.1
134.9 $B_{g}$ Fano 143.6 141.7
206.0 $B_{g}$ 195.3, 2.6
224.0 $B_{g}$ 216.1, 3.0 214.3
262.7 $B_{g}$ Fano 277.1 274.9
59.0 $E_{g}$
79.9 $E_{g}$
95.1 $E_{g}^{\#}$ 98.9, 8.2 102.3
104.5 $E_{g}$
156.9 $E_{g}^{\#}$ 171.2, 4.9
206.5 $E_{g}$
224.0 $E_{g}$
251.3 $E_{g}^{\#}$ 246.3, 5.
61.1 $A_{u}$
92.7 $A_{u}$
96.3 $A_{u}$
172.4 $A_{u}^{+}$ 165.2, 3.6
212.4 $A_{u}^{+}$ 211.0, 1.4
249.7 $A_{u}$
271.0 $A_{u}$
67.7 $B_{u}$
76.0 $B_{u}$
89.3 $B_{u}$
116.2 $B_{u}$
181.9 $B_{u}^{+}$ 200.5, 3.4
240.5 $B_{u}$
273.0 $B_{u}$
62.5 $E_{u}$ 63.8, 7.1 65.2
73.9 $E_{u}$ 74.2, 6.1 73.6
85.9 $E_{u}$ 91.3, 16. 93.7
96.5 $E_{u}$ 118.5,10. 121.9
140.7 $E_{u}$ 150.1, 4.0 151.7
219.3 $E_{u}$ Fano 207.6 208.3
233.3 $E_{u}$ 236.6, 4.4 238.3
268.3 $E_{u}$ 267.2, 6.1 267.1
276.1, 4.9 278.6
: Assignment of the observed Raman and IR vibrational modes in K$_{0.75}$Fe$_{1.75}$Se$_{2}$ based on comparison with first-principles calculations utilizing pseudopotentials. Raman and IR modes which appear below $T_{1}\sim$ 250 K are marked by $^{+}$ and $^{\#}$. The Lorentz parameters for the Raman and *ab*-plane IR modes, at 3 and 35 K respectively, were obtained from fits to the experimental data shown in Fig. \[fig1\]. Here $\omega_{i}$ and $\Gamma_{i}$ (in cm$^{-1}$) are the frequency and FWHM of the $i$-th mode. Error bars estimated from the covariance are 0.2-0.4 and 0.4-2.0 cm$^{-1}$ for $\omega_{i}$ and $\Gamma_{i}$. Raman data from Ref. \[\] and IR data from Ref. \[\] are shown for comparison. Computed vibrational frequencies are shown for the *I4/m* structure. The total number of modes is less than in Eq. (1) because the K(1) and acoustic modes are not computed or are omitted. *I4* vibrational frequencies that are less than 10 cm$^{-1}$ apart from their *I4/m* counterparts are not shown in this Table. The $A_{g}$, $B_{g}$, and $E_{u}$ modes in *I4/m* become, respectively, $A$, $B$, and $E$ modes in *I4*; refer to Sect. IV.
\[tab1\]
First-principles phonon modes analysis
--------------------------------------
We performed first-principles density functional theory (DFT) calculations using the local density approximation (LDA) with the Perdew-Zunger (PZ) parameterization of the exchange-correlation energy functional, as implemented in the *Quantum Espresso* simulation package [@PWSCF]. We used ultrasoft pseudopotentials [@USP] for K and Fe and a norm-conserving pseudopotential [@NCP] for Se to describe the interaction between the ionic cores and the valence electrons. The pseudopotentials include 9 valence electrons for K ($3s^{2}$, $3p^6$, $4s^2$), 16 for Fe ($3s^{2}$, $3p^6$, $3d^6$, $4s^2$), and 6 for Se ($4s^{2}$, $4p^4$) atoms. We used a plane-wave basis with energy cutoffs of 40 Ry for the wave functions and 360 Ry for the charge density, and a $6\times 6\times 4$ Monkhorst-Pack [@mh-pack] $k$-point mesh for the Brillouin-zone (BZ) integration.
We optimized the structure of K$_{0.8}$Fe$_{1.6}$Se$_{2}$ with the four-spin-cluster AFM ordering discussed by Bao *et al*. [@structure], using the experimental lattice constants obtained at 11 K. The calculation was done using a primitive unit cell of 22 atoms with K(1) and Fe(1) vacancies at the 2$a$ and 4$b$ Wyckoff sites of space group *I4/m*, respectively. The structural optimization was carried out through minimization of the energy using the Hellmann-Feynman forces on each atom in the Broyden-Fletcher-Goldfarb-Shanno scheme. The optimized structure shows very good agreement with experiment. We also allowed inversion-symmetry-breaking displacements and found that the *I4* structure has a slightly lower energy (3.5 meV) than *I4/m*; however, the splitting of the atomic coordinates was very small. We find that both structures exhibit a band gap of $\sim$ 0.4 eV.
The frequencies of the zone-center phonons, determined using the linear-response method [@LR] for the relaxed structures, are listed in the first column of Table \[tab1\] for AFM *I4/m* K$_{0.8}$Fe$_{1.6}$Se$_{2}$. Since all nine $A_{g}$ modes anticipated in the parent AFM *I4/m* K$_{0.8}$Fe$_{1.6}$Se$_{2}$ are observed at room temperature, there is a unique correspondence between the computed and measured vibrational frequencies, summarized in Table \[tab1\]. The computed $B_{g}$ mode at 66.4 cm$^{-1}$ is not observed. The $B_{g}$ vibration at $\sim$ 117.3 cm$^{-1}$ submerges into the broad $B_{g}$ mode at $\sim$ 100 cm$^{-1}$, Fig. \[fig1\](a), and is revealed in the fit. Importantly, the two $A_{g}$ modes at 79.9 and 89.0 cm$^{-1}$ cannot be reproduced in the non-magnetic (NM) *I4/m* $\sqrt 5$x$\sqrt 5$ structure of either undoped K$_{0.8}$Fe$_{1.6}$Se$_{2}$ or vacancy-free KFe$_{2}$Se$_{2}$, also computed in this work but not listed in Table \[tab1\]. Therefore, accounting for the spin degree of freedom is essential for an accurate mapping of the observed Raman modes. Eight out of the 9 IR-active modes observed at 300 K are assigned, Table \[tab1\] and Fig. \[fig1\](c). The remaining $E_{u}$ mode at $\sim$ 278 cm$^{-1}$ is likely due to a finite Fe(1) population in the superconducting sample: an extra $E_{u}$ mode appears in NM vacancy-free *I4/m* at 294 cm$^{-1}$. In summary, the observed Raman and IR vibrational frequencies above $\sim$ 200-250 K are in good agreement with the computed frequencies. Below 200-250 K, new Raman modes at $\sim$ 165, 201, and 211 cm$^{-1}$ and IR-active modes at $\sim$ 99, 171, and 246 cm$^{-1}$ show up. Their vibrational frequencies correspond well to the computed frequencies of the Raman-active $A$ and $B$ and IR-active $E$ modes in the *I4* structure, Table \[tab1\].
Displacement patterns of selected vibrational modes are illustrated in Fig. \[fig1\]. The Raman modes shown in (a1) and (b1) correspond to the Fe B$_{1g}$ and As A$_{1g}$ vibrations in the 122 iron-arsenides [@modes122]. In AFM *I4/m* both modes acquire finite ($x$,$y$) displacements. The $B_{g}\sim$ 100 cm$^{-1}$ mode is dominated by in-plane K displacements with some admixture of Se and Fe displacements. Its large $T$-independent linewidth is related to the static disorder associated with the K(2) sites. The patterns of the $A_{u}\sim$ 172.4 cm$^{-1}$ mode are shown in (b2). Being non-Raman-active in the AFM *I4/m* phase, it becomes the new Raman mode $A$ in the low-$T$ AFM *I4* phase, Fig. \[fig1\](b2). Fig. \[fig1\] visualizes the atomic displacements of one IR-active (c1) and two out of the four observed Raman-active Fano modes (a3 and b3) discussed in this paper. The striking feature of all but the Se-based $A_{g}\sim$ 136 cm$^{-1}$ mode is the essential involvement of Fe(2) atomic displacements. In K$_{0.8}$Fe$_{1.6}$Se$_{2}$ the Fe atoms carry a magnetic moment as large as 3.3$\mu_{B}$ Ref. \[\], while the electronic structure near $E_{f}$ is dominated by Fe $d$-states [@gap]. The coupling of the Fano modes to the electronic and magnetic degrees of freedom is explored in the following section.
Origin of Fano vibration modes
------------------------------
Asymmetric line shapes are characteristic of Fano resonances arising from coupling between phonons and a continuum, electronic or magnetic in origin. The dipole transition in IR absorption does not directly couple to the AFM excitations, but it does couple to charge carriers. Raman scattering probes both electronic and magnetic excitations. The 208 cm$^{-1}$ IR and the $\sim$ 144, 242, and 277 cm$^{-1}$ Raman Fano modes are dominated by Fe(2) atomic displacements. Interestingly, the IR mode becomes more asymmetric \[Fig. \[fig2-ir\]\] while the four Raman modes \[Fig. \[fig2\](a-d)\] become more symmetric on cooling.
The optical conductivity at 35 and 170 K is shown in Fig. \[fig2-ir\]. Taking the 170 K spectrum as an example, the experimental conductivity is fitted as a sum of a Drude peak (dashed green), a broad Lorentz component (dashed black) describing an interband transition at $\sim$ 400 cm$^{-1}$ (the onset of the mid-infrared (MIR) peak), and eleven Lorentz and one Fano phonon modes. Both the Drude and MIR components become slightly more coherent and better pronounced as the temperature decreases to 35 K. At the frequency of the Fano mode, the Drude contribution increases, while the MIR contribution slightly decreases. Therefore, the enhancement of the IR Fano peak asymmetry is due to the vibrational coupling to charge carriers in the Drude tail.
![(Color online) Optical conductivity at 35 K (blue) and 170 K (red) along with a fitting curve (dark green dots). Drude components (green) and MIR Lorentz terms (black) are shown with solid (dashed) curves for 35 (170) K. Inset: $c$-axis view of the *ab*-plane displacement pattern (not to scale) of the Fe(2) atoms.[]{data-label="fig2-ir"}](Fig2.pdf){width="8.5cm"}
![(Color online) Raman Fano modes at (a) $\sim$ 142 and (b) $\sim$ 277 cm$^{-1}$ in RL and (c) $\sim$ 137 and (d) $\sim$ 242 cm$^{-1}$ in RR. Insets: $c$-axis view of the *ab*-plane displacement patterns (not to scale) of the Fe(2) atoms. The 3D displacement patterns are shown in Fig. \[fig1\].[]{data-label="fig2"}](Fig3.pdf){width="8.5cm"}
The four Fano Raman modes are presented in Fig. \[fig2\](a)-(d) at 3 and 160 K. They were obtained by removing the fitted phonons from the data shown in Fig 1(a,b). Clearly, all Raman modes exhibit a similar $T$ dependence: (a) they are less symmetric, and (b) they are characterized by a larger background, at 160 K than at 3 K. The observed behavior is reminiscent of the $T$-dependence of a new mode observed in the AFM 122 systems. That mode appears at $T_{N}$ with a Fano shape and becomes progressively more symmetric as the temperature decreases. The Fano peak derives from vibrational coupling to a magnetic continuum, the AFM spin fluctuations.
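The trend from asymmetric toward symmetric line shapes can be illustrated with the standard Breit-Wigner-Fano profile; a larger asymmetry parameter $|q|$ corresponds to weaker phonon-continuum coupling and a more Lorentzian peak. The sketch below uses purely illustrative parameters, not the fitted values for K$_{0.75}$Fe$_{1.75}$Se$_{2}$:

```python
import numpy as np

def fano(w, w0, gamma, q, a):
    """Breit-Wigner-Fano lineshape a * (q + e)^2 / (1 + e^2),
    with e = (w - w0) / (gamma / 2). |q| -> inf gives a symmetric Lorentzian."""
    e = (w - w0) / (gamma / 2.0)
    return a * (q + e) ** 2 / (1.0 + e ** 2)

def asymmetry(w0, gamma, q):
    """Intensity imbalance between the two flanks at w0 +/- gamma;
    zero for a perfectly symmetric (Lorentzian-like) peak."""
    lo = fano(w0 - gamma, w0, gamma, q, 1.0)
    hi = fano(w0 + gamma, w0, gamma, q, 1.0)
    return abs(hi - lo) / (hi + lo)

# Larger |q| (weaker coupling to the continuum) -> more symmetric peak,
# qualitatively like the Raman Fano modes on cooling from 160 K to 3 K.
for q in (2.0, 5.0, 20.0):
    print(q, round(asymmetry(242.0, 10.0, q), 3))
```

For this flank measure the imbalance reduces analytically to $4q/(q^2+4)$, so it decays toward zero as $|q|$ grows, mirroring the symmetrization of the Raman Fano modes at low temperature.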
Temperature dependence of Raman mode linewidth and phonon frequencies
---------------------------------------------------------------------
Selected linewidths and phonon frequencies as a function of temperature are shown in Fig. \[fig3\]. The $T$-dependent phonon frequencies qualitatively agree with those reported by Zhang *et al* in K$_{0.8}$Fe$_{1.6}$Se$_{2}$ (Fig 5 in Ref. \[\]) and by Lazarević *et al* in isostructural K$_{0.88}$Fe$_{1.63}$S$_{2}$ (Fig. 3(b-j) in Ref. \[\]). In the latter work, the authors concluded that the Raman-active phonon energies in the range of 80-300 K are fully driven by anharmonicity effects [@Raman2]. The interpretation offered in the present work is different: the residual linewidth is comparable to \[Fig. \[fig3\](a,c)\] or larger than \[Fig. \[fig3\](e)\] the temperature-dependent increment between 3 and 300 K. Therefore, the self-energy of the non-Fano phonons (i.e., at $\sim$ 195 and 216 cm$^{-1}$ in the $B_{g}$ and at $\sim$ 68, 205, and 267 cm$^{-1}$ in the $A_{g}$ channels) consists of approximately equal contributions from two-phonon lattice anharmonicity on one hand and bare self-energy plus broadening due to intrinsic defects on the other. The self-energy of the $\sim$ 100 cm$^{-1}$ mode, involving the K(2) atomic displacements, is dominated by inhomogeneous broadening. The new $\sim$ 165 cm$^{-1}$ mode appearing at $T_{1}$, in the range of 200 to 250 K, becomes fully coherent below $\sim$ 40-60 K: the linewidth presented in Fig. \[fig3\](g) is quickly reduced by a factor of $\sim$ 5 as the temperature decreases from 200 to 60 K, followed by saturation below $\sim$ 40 K. The mode hardens on cooling by $\sim$ 1.0 cm$^{-1}$ \[Fig. \[fig3\](h)\] over the traceable $T$-range.
![(Color online) Linewidths (first row) and corresponding phonon frequencies (second row) of the Fe(2) $B_{g}\sim$ 216, Se(2) $A_{g}\sim$ 182, K(2) $B_{g}\sim$ 100, and Se(1) $A_{g}\sim$ 165 cm$^{-1}$ modes. Solid red lines describe two-phonon anharmonic decay [@gamma]. Dashed green lines are guides to the eye. The atomic displacement patterns of the modes are visualized in Fig \[fig1\] (a1, b1, a2, and b2).[]{data-label="fig3"}](Fig4_r2.pdf){width="8.5cm"}
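The two-phonon (symmetric, Klemens-type) anharmonic-decay channel used for such linewidth fits takes the form $\Gamma(T) = \Gamma_{0} + \Gamma_{\rm anh}\,[1 + 2n(\omega_0/2, T)]$, where $n$ is the Bose occupation and $\Gamma_0$ collects the $T$-independent residual broadening. A minimal sketch with illustrative (not fitted) parameters for a 216 cm$^{-1}$ mode:

```python
import numpy as np

K_PER_CM = 1.43877688  # 1 cm^-1 of phonon energy corresponds to 1.43878 K

def bose(omega_cm, T):
    """Bose-Einstein occupation for a phonon of energy omega_cm (in cm^-1)."""
    return 1.0 / np.expm1(omega_cm * K_PER_CM / T)

def anharmonic_fwhm(T, omega0_cm, gamma_anh, gamma_res):
    """Symmetric two-phonon (Klemens) decay into two omega0/2 phonons, plus a
    T-independent residual linewidth from defects and bare self-energy:
    Gamma(T) = gamma_res + gamma_anh * (1 + 2 n(omega0/2, T))."""
    return gamma_res + gamma_anh * (1.0 + 2.0 * bose(omega0_cm / 2.0, T))

# Hypothetical parameters chosen so that the residual term is comparable to
# the 3-300 K anharmonic increment, as argued for the non-Fano phonons.
for T in (3.0, 150.0, 300.0):
    print(T, round(anharmonic_fwhm(T, 216.0, 1.0, 3.0), 2))
```

At 3 K the Bose factor is negligible, so the linewidth reduces to $\Gamma_0 + \Gamma_{\rm anh}$; the argument in the text compares the high-temperature increment against this residual value.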
DISCUSSION
==========
The new 165 cm$^{-1}$ mode appearing at $T_{1}$ in the range of 200-250 K \[inset of Fig. \[fig1\](b)\], referred to throughout this paper as appearing at $T_{1}$ below $\sim$ 250 K, is not Raman active in the *I4/m* phase (Table \[tab1\]), is fully symmetric in character, and quickly becomes coherent as $T$ decreases \[Fig. \[fig3\](g)\]. The question arises whether this mode signifies a lowering of the crystal symmetry at a structural phase transition. If it is associated with symmetry lowering, it would become an allowed phonon mode in one of the subgroups of $C_{4h}$. The subgroups of $C_{4h}$ comprise $C_{4}$ (loss of inversion and rotation-reflection), $C_{2h}$ (loss of the 4$^{th}$-order rotation axis), $S_{4}$ (loss of inversion and the 4$^{th}$-order rotation axis), $C_{2}$ (loss of inversion, the 4$^{th}$-order rotation axis, and rotation-reflection), and $C_{1}$ (trivial). We did not observe a breaking of the four-fold axis symmetry, which would result in cross-polarization intensity leakages beyond the small leakages of the polarization optics that do not correlate with the temperature dependence of the 165 cm$^{-1}$ mode. Thus, the $C_{2h}$, $C_{2}$, and $C_{1}$ subgroups are excluded. The $S_{4}$ subgroup (space group [\#]{}82) is excluded because there is no new $A$-type Se(1) mode associated with the transition. Therefore, *I4/m* ($C_{4h}$, space group [\#]{}87) becomes *I4* ($C_{4}$, space group [\#]{}79). From the factor-group analysis, the $A_{g}$, $B_{g}$, and $E_{u}$ modes in $C_{4h}$ become, respectively, $A$, $B$, and $E$ modes in $C_{4}$. Instead of the 9$A_{g}+$8$B_{g}$ Raman-active and 8$E_{u}$ infrared-active modes of the high-$T$ *I4/m* phase, one would expect to encounter 17$A$+15$B$ Raman and 17$E$ IR-active modes in the low-$T$ phase in the measured geometry. Here we excluded the acoustic and Fe(1)-related modes, since the Fe(1) site is mostly empty in K$_{0.75}$Fe$_{1.75}$Se$_{2}$. The Raman-active modes do not overlap with the *ab*-plane IR-active modes, not only in the high- but also in the low-$T$ phase. 
This explains the seemingly puzzling absence of IR-mode leakages into the Raman spectra noted in Section 3. Since the new Raman modes (at $\sim$ 165, 201, and 211 cm$^{-1}$) and IR-active modes (at $\sim$ 99, 171, and 246 cm$^{-1}$) appear below $T_{1}$, and those modes are non-Raman-active $A_{u}$ (non-IR-active $E_{g}$) or silent $B_{u}$ in *I4/m* (Table \[tab1\]), we suggest that K$_{0.75}$Fe$_{1.75}$Se$_{2}$ undergoes an *I4/m* $\to $ *I4* structural phase transition accompanied by a loss of inversion symmetry at $T_{1}$ below $\sim$ 250 K. Our first-principles calculations utilizing pseudopotentials also narrowly favor the *I4* over the *I4/m* structure. The small total-energy difference is likely because the computations do not include all correlations and/or because the calculations are performed for the undoped K$_{0.8}$Fe$_{1.6}$Se$_{2}$.
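The factor-group bookkeeping behind the quoted 17$A$+15$B$ Raman and 17$E$ IR counts can be checked explicitly (a sketch; the $u$ and $g$ irreducible representations merge once inversion is lost, and only the acoustic modes are subtracted since Eq. (1) already excludes Fe(1)-related modes):

```python
# Correlation of mode symmetries on lowering C4h (I4/m) to C4 (I4):
# with inversion lost, Ag,Au -> A; Bg,Bu -> B; Eg,Eu -> E.
c4h_counts = {"Ag": 9, "Bg": 8, "Eg": 8, "Au": 9, "Bu": 7, "Eu": 10}
correlation = {"Ag": "A", "Au": "A", "Bg": "B", "Bu": "B", "Eg": "E", "Eu": "E"}

c4_counts = {"A": 0, "B": 0, "E": 0}
for irrep, n in c4h_counts.items():
    c4_counts[correlation[irrep]] += n

# Subtract the acoustic modes (one A along z, one doubly degenerate E in-plane)
# to obtain the expected optical-mode tally in I4.
acoustic = {"A": 1, "B": 0, "E": 1}
optical = {k: c4_counts[k] - acoustic[k] for k in c4_counts}
print(optical)  # {'A': 17, 'B': 15, 'E': 17}
```

In $C_4$ the $A$ and $B$ modes are Raman active in the RR/RL geometry and the $E$ modes are IR active in the *ab*-plane, so the Raman and in-plane IR mode sets remain disjoint even without inversion symmetry, consistent with the absence of overlap in Table 1.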
The temperature dependence of selected phonon frequencies and intensities is shown in Fig. \[fig4\](a,b) and \[fig4\](c,d), respectively. Apart from the structural phase transition at $T_{1}$ below $\sim$ 250 K, there is clearly a second characteristic temperature, $T_{2}\sim$ 160 K. At $T_{2}$ the majority of the phonon vibrational frequencies exhibit a consistent discontinuity of up to $\sim$ 0.3 cm$^{-1}$ \[Fig. \[fig4\](a, b)\], while quite a few modes display slope changes in their intensity-versus-temperature dependencies \[Fig. \[fig4\](c, d)\]. Since no new vibrational modes (Raman or IR) are observed below $T_{2}$, the $T_{2}$ does not constitute a structural phase transition; we refer to it instead as a phonon-anomaly temperature. An anomaly of a single $A_{g}$ mode at 66 cm$^{-1}$ at 160 K was mentioned by Zhang *et al.* [@Raman1]. We would like to point out that the phonon anomalies seen at $T_{2}\sim$ 160 K involve the majority of the measured Raman modes.
![(Color online) Temperature dependence of selected phonon frequencies (a,b) and intensities (c,d). Some error bars are omitted in (b) for clarity. $T_{1}$ and $T_{2}$ mark the temperatures of the structural phase transition at $\sim$ 250 K and of the phonon anomalies at $\sim$ 160 K. In (c) the intensity of the Fe(2)-based B(B$_{g})\sim$ 216 cm$^{-1}$ mode (solid blue line) scales almost perfectly to 3.5$\pm$ 0.2 times the intensity of the Se(2)-based $A(A_{g})\sim$ 183 cm$^{-1}$ mode (dashed green line) in the range of 45 to 250 K. It also scales satisfactorily to 13$\pm$ 1 times the intensity of the Se(1)-based $A\sim$ 165 cm$^{-1}$ mode (dot-dashed blue line) in the range of $\sim$ 100 to 200 K. Note that there are four Se(2) atoms per Se(1) atom.[]{data-label="fig4"}](Fig5_r2.pdf){width="8.5cm"}
From the experimental data at hand we can point out two implications of the observed structural phase transition for the low-$T$ properties of K$_{0.75}$Fe$_{1.75}$Se$_{2}$. First, the symmetry of the Se(1,2)-Fe(2) slab is broken at $T_{1}$, the sample becomes ferroelectric, and $c$-axis structural distortions within the slab appear to build up on cooling. This is seen in the Raman phonon peak intensities, Fig. \[fig4\](c,d), which are directly proportional to the polarizability tensor. As the sample enters the low-$T$ phase ($T<T_{1}$), the polarizabilities of quite a few Raman-active modes build up until $\sim T_{2}$, followed by saturation at $T<T_{2}$ ($B$-symmetry at 63, 100.6, and 277; $A$-symmetry at 81 and 267; Fano-shaped 136 and 242 cm$^{-1}$) or by reduction ($A$-symmetry at 68, 112, and 125 cm$^{-1}$). However, the polarizabilities of the Fe(2)-based $B$ mode at $\sim$ 216, the Se(2)-based $A$ mode at $\sim$ 183, and the Se(1)-based $A$ mode at $\sim$ 165 cm$^{-1}$ continue to build up down to $\sim$ 45, 3, and 3 K, respectively. The scaling relationships among the three are shown in Fig. \[fig4\](c). The scaling is not surprising, given the similar displacement patterns of the modes along the $c$-axis \[Fig.\[fig1\](a1), (b1), and (b2)\], so that their Raman activities are driven by the polarizability of the electronic orbitals forming the Fe-Se slabs. In the iron-arsenides (122 systems) the Fe-As slab is perfectly symmetric and the Raman-active As-based $A_{1g}$ mode has an extremely low intensity (polarizability) when measured in the same geometry [@modes122]. It becomes visible upon doping, which destroys the symmetry of the slab. In K$_{0.75}$Fe$_{1.75}$Se$_{2}$, the intrinsic population of Fe and K vacancies makes the Se(2)-based $A_{g}\sim$ 183 cm$^{-1}$ mode easily visible at room temperature. The atomic displacements associated with the *I4/m* $\to $ *I4* phase transition that have sizable $ab$-plane components are likely quenched below $T_{2}$, while the $c$-axis displacements continue to build up on cooling. 
The second implication concerns the temperature dependence of the low-frequency optical conductivity shown in Fig. \[fig1\](c). The conductivity displays a weak temperature dependence above $T_{1}$ followed by a faster increase below $T_{2}$, in agreement with the similar temperature dependence reported by Homes *et al*. [@optics2]
The onset of superconductivity at $\sim$ 30 K has little effect on the phonons. Using the 3 and 45 K data points, the upper estimates of the phonon energy shifts are -0.3 $\pm$ 0.4 cm$^{-1}$ ($\Delta \omega $/$\omega \sim$ 0.44[%]{}) for the 67.6 cm$^{-1}$ mode and $+$0.6 $\pm$ 0.4 cm$^{-1}$ ($\Delta \omega $/$\omega \sim$ 0.36[%]{}) for the 165.0 cm$^{-1}$ mode. The small frequency renormalization implies either a weak *e-ph* interaction or that the phonons used in our analysis belong to the AFM phase of the phase-separated models [@TEM; @XRD; @optics; @NQR; @INS]: spectator AFM phonons would not feel the onset of superconductivity, except via the proximity effect.
CONCLUSIONS
===========
Raman scattering and optical conductivity were used to determine the lattice vibration frequencies of the superconducting crystal K$_{0.75}$Fe$_{1.75}$Se$_{2}$ in the temperature range from 3 to 300 K. 19 Raman-active and 12 IR-active modes are observed at 3 K. The $\sim$ 136, 143, 242, and 277 cm$^{-1}$ Raman modes and the $\sim$ 208 cm$^{-1}$ IR mode exhibit a Fano-like shape. The Raman Fano modes are due to the coupling of the vibrations to AFM spin fluctuations, while the IR mode is coupled to the charge carriers in the low-frequency part of the optical conductivity. The Raman phonon linewidths contain approximately equal contributions from two-phonon lattice anharmonicity on one hand and from the bare self-energy and broadening due to intrinsic defects on the other. K$_{0.75}$Fe$_{1.75}$Se$_{2}$ undergoes an *I4/m* (space group [\#]{}87) $\to $ *I4* (space group [\#]{}79) structural phase transition at $T_{1}$, below $\sim$ 250 K. Several modes which are not Raman- or IR-active in the measured geometry in the *I4/m* phase become visible in the *I4* phase, including Raman modes at $\sim$ 165, 201, and 211 cm$^{-1}$ and IR-active modes at $\sim$ 99, 171, and 246 cm$^{-1}$. Weak phonon anomalies are also observed at $T_{2} \sim$ 160 K. The symmetry of the Se(1,2)-Fe(2) slab is broken at $T_{1}$. The $ab$-plane structural distortions are likely quenched below $T_{2}$, while the $c$-axis structural distortions within the slab continue to build up on cooling down to 3 K.
ACKNOWLEDGMENTS
===============
Research at Rutgers was supported by the U.S. DOE, office of BES, Division of Materials Science and Engineering under award DE-SC0005463. Research at Beijing National Laboratory for Condensed Matter Physics was supported by the NSFC and 973 projects of MOST (Grant No. 2011CB921701, 2012CB821403).
[99]{}
J. Guo, S. Jin, G. Wang, S. Wang, K. Zhu, T. Zhou, M. He, and X. Chen, Phys. Rev. B **82**, 180520(R) (2010); A. F. Wang, J. J. Ying, Y. J. Yan, R. H. Liu, X. G. Luo, Z. Y. Li, X. F. Wang, M. Zhang, G. J. Ye, P. Cheng, Z. J. Xiang, and X. H. Chen, ibid. **83**, 060512(R) (2011).
R. H. Liu, X. G. Luo, M. Zhang, A. F. Wang, J. J. Ying, X. F. Wang, Y. J. Yan, Z. J. Xiang, P. Cheng, G. J. Ye, Z. Y. Li, and X. H. Chen, Europhys. Lett. **94**, 27008 (2011).
X.-W. Yan, M. Gao, Z.-Y. Lu, and T. Xiang, Phys. Rev. B **83**, 233205 (2011).
W. Bao, Q.-Z. Huang, G.-F. Chen, M. A. Green, D.-M. Wang, J.-B. He, and Y.-M. Qiu, Chin. Phys. Lett. **28**, 086104 (2011).
Y. J. Yan, M. Zhang, A. F. Wang, J. J. Ying, Z. Y. Li, W. Qin, X. G. Luo, J. Q. Li, J. Hu, and X. H. Chen, Sci. Rep. **2**, 212 (2012).
Z. Wang, Y. J. Song, H. L. Shi, Z. W. Wang, Z. Chen, H. F. Tian, G. F. Chen, J. G. Guo, H. X. Yang, and J. Q. Li, Phys. Rev. B **83**, 140505 (2011).
A. Ricci, N. Poccia, G. Campi, B. Joseph, G. Arrighetti, L. Barba, M. Reynolds, M. Burghammer, H. Takeya, Y. Mizuguchi, Y. Takano, M. Colapietro, N. L. Saini, and A. Bianconi, Phys. Rev. B **84**, 060511 (2011).
Jun Zhao, Huibo Cao, E. Bourret-Courchesne, D.-H. Lee, and R. J. Birgeneau, arXiv:1205.5992.
D. A. Torchetti, M. Fu, D. C. Christensen, K. J. Nelson, T. Imai, H. C. Lei, and C. Petrovic, Phys. Rev. B **83**, 104508 (2011).
R. H. Yuan, T. Dong, Y. J. Song, P. Zheng, G. F. Chen, J. P. Hu, J. Q. Li, and N. L. Wang, Sci. Rep. **2**, 221 (2012).
A. Charnukha, A. Cvitkovic, T. Prokscha, D. Pröpper, N. Ocelic, A. Suter, Z. Salman, E. Morenzoni, J. Deisenhofer, V. Tsurkan, A. Loidl, B. Keimer, and A. V. Boris, Phys. Rev. Lett. **109**, 017003 (2012).
Y. Texier, J. Deisenhofer, V. Tsurkan, A. Loidl, D. S. Inosov, G. Friemel, and J. Bobroff, arXiv:1203.1834.
G. Friemel, J. T. Park, T. A. Maier, V. Tsurkan, Yuan Li, J. Deisenhofer, H.-A. Krug von Nidda, A. Loidl, A. Ivanov, B. Keimer, and D. S. Inosov, Phys. Rev. B **85**, 140511(R) (2012).
Wei Li, Hao Ding, Peng Deng, Kai Chang, Canli Song, Ke He, Lili Wang, Xucun Ma, Jiang-Ping Hu, Xi Chen, and Qi-Kun Xue, Nature Physics **8**, 126 (2012).
A. M. Zhang, K. Liu, J. H. Xiao, J. B. He, D. M. Wang, G. F. Chen, B. Normand, and Q. M. Zhang, Phys. Rev. B **85**, 024518 (2012).
N. Lazarević, Hechang Lei, C. Petrovic, and Z. V. Popović, Phys. Rev. B **84**, 214305 (2011).
A. M. Zhang, K. Liu, J. H. Xiao, J. B. He, D. M. Wang, G. F. Chen, B. Normand, and Q. M. Zhang, arXiv:1105.1198.
Z. G. Chen, R. H. Yuan, T. Dong, G. Xu, Y. G. Shi, P. Zheng, J. L. Luo, J. G. Guo, X. L. Chen, and N. L. Wang, Phys. Rev. B **83**, 220507(R) (2011).
C. C. Homes, Z. J. Xu, J. S. Wen, and G. D. Gu, Phys. Rev. B **85**, 180510(R) (2012).
S. Baroni, A. Dal Corso, S. de Gironcoli, and P. Giannozzi, 2001, http://www.pwscf.org.
D. Vanderbilt, Phys. Rev. B **41**, 7892 (1990).
D. R. Hamann, M. Schlüter, and C. Chiang, Phys. Rev. Lett. **43**, 1494 (1979).
H. J. Monkhorst and J. D. Pack, Phys. Rev. B **13**, 5188 (1976); H. J. Monkhorst and J. D. Pack, ibid. **16**, 1748 (1977).
S. Baroni, S. de Gironcoli, A. Dal Corso, and P. Giannozzi, Rev. Mod. Phys. **73**, 515 (2001).
A. P. Litvinchuk, V. G. Hadjiev, M. N. Iliev, Bing Lv, A. M. Guloy, and C. W. Chu, Phys. Rev. B **78**, 060503(R) (2008).
M. Balkanski, R. F. Wallis, and E. Haro, Phys. Rev. B **28**, 1928 (1983).
---
abstract: 'The presence of a saddle-node bifurcation cascade in the logistic equation is associated with an intermittency cascade, in the same way that a saddle-node bifurcation is associated with an intermittency. We merge the concepts of bifurcation cascade and intermittency. The mathematical tools developed in the process describe the structure of the Myrberg-Feigenbaum point.'
address:
- 'Departamento de Matemática Aplicada, E.U.I.T.I, Universidad Politécnica de Madrid. Ronda de Valencia 3, 28012 Madrid Spain'
- 'Departamento de Física Matemática y Fluidos, U.N.E.D. Senda del Rey 9, 28040 Madrid Spain'
author:
- 'Jesús San-Martín'
title: |
Universal Scaling in Saddle-Node Bifurcation Cascades (II)\
Intermittency Cascade
---
Intermittency Cascade. Saddle-Node bifurcation cascade. Attractor of attractors. Structure of Myrberg-Feigenbaum points.
Introduction
============
Period-doubling [@Feigenbaum78; @Feigenbaum79], quasi-periodicity [@Ruelle71; @Newhouse78] and intermittency [@Manneville79; @Pomeau80] are well-known routes of transition from periodic to chaotic behavior, whose origin lies in local bifurcations. Initially, the system has a stable limit cycle for a range of the control parameter $r$. As this parameter is increased beyond a critical value $r_{c}$, the system behavior changes according to the local bifurcation that occurs at $r_{c}$.
In order to study the genesis of the transition we resort to the Poincaré section. In this section, the original stable limit cycle of the system generates a fixed point, whose evolution is studied as the control parameter changes.
Different kinds of local bifurcation give rise to different transitions to chaos. Thus, if the fixed point shows successive pitchfork bifurcations, which repeatedly double the period of the original orbit, the Feigenbaum period-doubling cascade is obtained. The final outcome is a period-$\infty$ orbit, a chaotic attractor.
Quasi-periodicity occurs when a new Hopf bifurcation generates a second frequency in the system, incommensurate with the original system frequency. If the winding number is fixed at an irrational value, the system goes to chaos through this route.
Finally, intermittency is a chaotic regime characterized by apparently regular behavior that undergoes irregular bursts from time to time. This alternation between two behaviors gives this kind of chaos its name.
The regular behavior, or laminar regime, corresponds to an evolution of the system in a narrow region or channel of phase space. Such regular behavior stems from the fact that the system maintains a “ghost” of the periodic orbit that existed before the bifurcation.
Whereas in the other transitions to chaos, period-doubling and quasi-periodicity, the system is totally regular before the transition and chaotic afterwards, this is not the case for intermittency. Intermittency shows a continuous transition from regular to chaotic behavior. The smaller the value $\varepsilon=r-r_{c}$ is, the longer the laminar regime will be and the less it will be altered by chaotic behavior, due to the fact that the average duration of the laminar regime $\left\langle l\right\rangle$ scales as $\varepsilon^{-\beta}$ for small $\varepsilon$, with $\beta>0$. Therefore, beyond $r_{c}$ the laminar behavior alternates with irregular bursts: the smaller (larger) the value of $\varepsilon$ is, the longer (shorter) the laminar regime $\left\langle l\right\rangle$ is.
The intermittency transition was discussed by Pomeau and Manneville [@Manneville79; @Pomeau80], who pointed out three kinds of intermittency. The system is periodic for parameter values smaller than the critical point, $r<r_{c}$. This periodic behavior generates a stable fixed point in the Poincaré section. As the parameter reaches the critical value $r=r_{c}$, the fixed point loses its stability. The loss of stability occurs when the modulus of an eigenvalue of the linearized Poincaré map becomes larger than unity. This may happen in three different ways.
i\) A real eigenvalue crosses the unit circle through $+1$. A saddle-node bifurcation is generated, which has type-I intermittency associated with it. In this case the average duration of the laminar regime is $\left\langle l\right\rangle \sim\varepsilon^{-\frac{1}{2}}$.
ii\) A pair of complex-conjugate eigenvalues crosses the unit circle. This circumstance is associated with the birth of a Hopf bifurcation and involves type-II intermittency, with $\left\langle l\right\rangle \sim\varepsilon^{-1}$.
iii\) A real eigenvalue crosses the unit circle through $-1$, generating a flip bifurcation. In this case type-III intermittency occurs and $\left\langle l\right\rangle \sim\varepsilon^{-1}$.
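The $\varepsilon^{-\frac{1}{2}}$ law quoted in i\) can be recovered from the standard normal-form argument (a textbook derivation, not taken from this paper; $u>0$ and the channel half-width $c$ are map-dependent constants):

```latex
% Type-I normal form near the tangency, approximated by a flow for small steps:
y_{n+1} = y_n + \varepsilon + u\,y_n^{2}
\quad\longrightarrow\quad
\frac{dy}{dl} = \varepsilon + u\,y^{2}.
% Integrating across the channel from -c to c gives the passage time:
l(\varepsilon) = \int_{-c}^{c}\frac{dy}{\varepsilon + u\,y^{2}}
= \frac{2}{\sqrt{u\varepsilon}}\,
  \arctan\!\Bigl(c\sqrt{\tfrac{u}{\varepsilon}}\Bigr)
\ \xrightarrow[\varepsilon\to 0]{}\ \frac{\pi}{\sqrt{u\varepsilon}}
\;\sim\;\varepsilon^{-1/2}.
```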
Other kinds of intermittency have been studied, such as type-X [@Anil97], which shows a transition with hysteresis, and type-V [@Bauer92], for discontinuous maps. Intermittencies whose laminar regime occurs alternately in two channels, as a result of a symmetry of the problem, have also been studied [@SanMartin99].
The natural generalization of this latter problem is to find intermittencies with an arbitrary number of channels. Furthermore, it is desirable to have a different number of channels for different values of the control parameter in the same system, rather than looking for different *ad hoc* systems with the appropriate symmetries showing a fixed number of channels. Such behavior is desirable in a single system because, if a change of the control parameter implies an increase of the number of channels, what is obtained is an intermittency cascade, in the same way that a change of the control parameter in the logistic map generates a period-doubling cascade.
In this paper we are going to show that such a phenomenon occurs in the logistic map, benefiting from the fact that this map shows saddle-node bifurcation cascades [@SanMartin05] and that type-I intermittency is associated with the saddle-node bifurcation. We will characterize the control parameter values at which successive intermittencies are generated, how the intermittencies are related to one another, the number of channels, the average duration of the laminar regime of each intermittency, and the relationship between such regimes for different intermittencies. To answer these questions we will use the universal properties of the logistic map [@Feigenbaum78; @Feigenbaum79] and the saddle-node bifurcation cascade this map shows [@SanMartin05].
The saddle-node bifurcation cascade is a sequence of saddle-node bifurcations in which the number of fixed points undergoing this kind of bifurcation is doubled at each step. The successive elements of the sequence are given by an equation identical to the one Feigenbaum found for the period-doubling cascade [@SanMartin05].
We proceed as follows. As mentioned above, type-I intermittency is associated with a saddle-node bifurcation. Therefore, each element of the saddle-node bifurcation cascade has a type-I intermittency associated with it. The number of channels of this intermittency coincides with the number of fixed points that simultaneously undergo a saddle-node bifurcation. For instance, the saddle-node bifurcation cascade symbolized by the sequence $3$, $3\cdot2$, $3\cdot2^{2}$, $3\cdot2^{3}$, ..., $3\cdot2^{q}$ indicates that there are $3$ fixed points at a first parameter value $r=r_{3}$, $3\cdot2$ at $r=r_{3\cdot2}$, and so on. Each saddle-node fixed point contributes one channel to the intermittency; accordingly, in this intermittency cascade there will be a sequence of intermittencies with $3$, $3\cdot2$, $3\cdot2^{2}$, $3\cdot2^{3}$, ..., $3\cdot2^{q}$, ... channels.
The channels responsible for the laminar regime are close to the critical points of the logistic map (Fig. \[cap:fig2\]). The way the neighborhoods of these critical points are contracted in the successive iterates of the map determines how the channels are contracted, and it allows us to find the connection between them. The scaling of the neighborhood of the critical points for the iterated map was calculated near a pitchfork bifurcation by Feigenbaum [@Feigenbaum78]. We will follow this work to calculate the scaling near a saddle-node bifurcation, because it is there that intermittency occurs.
As many iterated one-dimensional maps are nearly quadratic under renormalization [@Gukenheimer87], we expect the intermittency cascade to be a common phenomenon in many natural processes.
Consider the logistic equation $x_{n+1}=f(x_{n})=r\, x_{n}(1-x_{n})$. The graph of $f^{3}$ ($f^{n}=f\circ\cdots\circ f$, $n$ times) is shown in Fig. \[cap:fig1\], where it is tangent to the line $x_{n+1}=x_{n}$ at $r=r_{c}=1+\sqrt{8}$, which means a period-$3$ orbit exists. There are three saddle-node fixed points, and we are at the genesis of a saddle-node bifurcation. For $r>r_{c}$ the valleys and hills of $f^{3}$ are sharper than at $r=r_{c}$, and each saddle-node point has generated two points: one saddle and one node. If we decreased $r$ from $r>r_{c}$ to $r<r_{c}$ we would observe the saddle point approach the node, touching it at $r=r_{c}$, where the saddle-node bifurcation occurs. For $r<r_{c}$ the valleys and hills are pulled away from the diagonal and the saddle-node fixed points disappear. After the fixed points have disappeared, three narrow channels, delimited by the graph of $f^{3}$ and the diagonal, remain; they are responsible for the laminar regime of the intermittency. This is what we meant when we said earlier that each saddle-node point is responsible for generating one channel of the intermittency cascade.
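The tangency at $r_{c}=1+\sqrt{8}$ can be checked numerically; a minimal sketch (our own, not the paper's numerics; the test value $r=3.83$ and the tolerances are arbitrary choices):

```python
# Verify that f^3 for the logistic map becomes tangent to y = x at
# r_c = 1 + sqrt(8) (birth of the period-3 window) by checking that a
# stable period-3 orbit exists just above r_c.

import math

def f(x, r):
    """One logistic-map step f(x) = r x (1 - x)."""
    return r * x * (1.0 - x)

def f_iter(x, r, n):
    """n-fold composition f^n evaluated at x."""
    for _ in range(n):
        x = f(x, r)
    return x

r_c = 1.0 + math.sqrt(8.0)  # ~3.828427: saddle-node bifurcation of f^3

# Just above r_c each saddle-node pair has split into a saddle and a node;
# long-run iterates should settle onto the attracting 3-cycle.
r = 3.83
x = 0.5
for _ in range(100_000):     # discard the transient
    x = f(x, r)
residual = abs(f_iter(x, r, 3) - x)  # ~0 once the orbit is period 3
```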
The iterates of the logistic map take a long time to go through these channels (see Fig. \[cap:fig2\]). The average number of iterates inside a channel is given by $\left\langle l\right\rangle \sim\varepsilon^{-\frac{1}{2}}$ [@Pomeau80], and the same holds for the set of three channels.
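This scaling is easy to observe in the type-I normal form to which $f^{3}$ reduces near each tangency; a hedged sketch (the entry and exit points $\pm 0.5$ are arbitrary choices, and this is the normal form, not the logistic map itself):

```python
# Count the iterates the type-I normal form y -> y + eps + y^2 needs to
# traverse a channel, and check that multiplying eps by 4 roughly halves
# the passage time, i.e. <l> ~ eps^(-1/2).

def passage_time(eps, y_in=-0.5, y_out=0.5):
    """Number of normal-form iterates needed to cross the channel."""
    y, steps = y_in, 0
    while y < y_out:
        y += eps + y * y
        steps += 1
    return steps

t1 = passage_time(1e-4)  # roughly pi / sqrt(eps) ~ 3e2 iterates
t2 = passage_time(4e-4)  # eps four times larger -> about half the time
ratio = t1 / t2          # ~2
```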
The saddle-node bifurcation cascade involves saddle-node bifurcations with $q$, $q\cdot2$, $q\cdot2^{2}$, $q\cdot2^{3}$, ..., $q\cdot2^{n}$, $q\neq2^{m}$, fixed points for the maps $f^{q}$, $f^{2q}$, ..., $f^{q2^{n}}$, respectively, and the same number of channels for the intermittency. Figs. \[cap:fig3\] and \[cap:fig4\] show saddle-node bifurcations for $f^{3\cdot2}=f^{6}$ and $f^{3\cdot2^{2}}=f^{12}$, respectively.
We want to connect the duration of the laminar regime $\left\langle l\right\rangle$ of one intermittency with the laminar regimes of the other intermittencies present in the cascade. If we look at the neighborhood of the point $\left(\frac{1}{2},\frac{1}{2}\right)$ in Fig. \[cap:fig3\] we will see that the graph of $f^{3}$ of Fig. \[cap:fig1\] at $r=r_{c}$ is reproduced, scaled by a factor $\frac{1}{\alpha}$, $\alpha>1$. The same can be said of the neighborhood of the point $\left(\frac{1}{2},\frac{1}{2}\right)$ in Fig. \[cap:fig4\]. As the iterates $f^{q2^{n}}$ with $n\longrightarrow\infty$ are considered, the constant $\alpha$ we obtain is the Feigenbaum constant (see appendix).
If we come back to Fig. \[cap:fig3\] we will notice that $f^{3\cdot2}$ reproduces the graph of $f^{3}$ again in the neighborhood of $\left(f(\frac{1}{2}),f(\frac{1}{2})\right)$, and that this copy is not scaled by the same factor as the one in the neighborhood of the point $\left(\frac{1}{2},\frac{1}{2}\right)$. From one iterate to the following one, that is, from $f^{3\cdot2^{n}}$ to $f^{3\cdot2^{n+1}}$, half of the neighborhoods of the critical points scale approximately as $\frac{1}{\alpha}$ and the other half as $\frac{1}{\alpha^{2}}$ (see appendix). We will be able to answer our questions because $f^{3}$ is reproduced in the neighborhoods of the critical points of $f^{3\cdot2^{n}}$ and also because we know how these neighborhoods scale in the successive elements of the cascade.
Intermittency Cascade
=====================
Let $r_{q\cdot2^{n},SN}$ be the parameter value at which $f^{q\cdot2^{n}}$, $q\neq2^{m}$, has a saddle-node bifurcation, that is, at which $f^{q2^{n}}$ has a saddle-node orbit with $q\cdot2^{n}$ points. The points of this orbit are located right where the function $f^{q2^{n}}$ is tangent to the line $y=x$. The $q\cdot2^{n}$ points can be classified into $2^{n}$ sets, each of them having $q$ points. The $2^{n}$ sets correspond to the $2^{n}$ critical points of $f^{2^{n}}$ closest to the line $y=x$. The $q$ saddle-node points are captured in a neighborhood of each of these critical points; in other words, the graph of $f^{q}$ is captured in every one of such neighborhoods. For instance, for $f^{3\cdot2^{2}}$ we notice how the graph of $f^{3}$ is captured in the neighborhoods of the $2^{2}$ critical points of $f^{2^{2}}$ (Fig. \[cap:fig4\]).
If we choose $$r=r_{q\cdot2^{n},SN}-\varepsilon\,\,\,\,0<\varepsilon\ll1\label{r}$$ then the saddle-node bifurcation will be about to occur. Under these conditions, there are $q\cdot2^{n}$ points where $f^{q2^{n}}$ is almost tangent to the line $y=x$. At such points there are $q\cdot2^{n}$ channels, formed by the graph of $f^{q2^{n}}$ and the line $y=x$. The smaller $\varepsilon$ is in Eq. (\[r\]), the narrower these channels are, and in them the laminar regime occurs. The time to cross a channel depends on $\varepsilon$.
Let $\left\langle l\right\rangle _{n}$ be the average time that the iterates of $x_{n+1}=f(x_{n})$ take to cross the $q\cdot2^{n}$ channels generated by $f^{q2^{n}}$. If we consider the laminar regime of an intermittency of $f^{q\cdot2^{n+1}}$ then the number of channels is duplicated, because $f^{q\cdot2^{n+1}}=f^{q\cdot2^{n}}\circ f^{q\cdot2^{n}}$; in other words, the graph of $f^{q}$ is duplicated close to the critical points of $f^{2^{n}}$. But in the doubling process the replicas of the graph of $f^{q}$ are contracted, half of them as $\frac{1}{\alpha}$ and the other half as $\frac{1}{\alpha^{2}}$ as $n\rightarrow\infty$, where $\alpha$ is the Feigenbaum constant (see appendix). Accordingly, for $\varepsilon=r_{q\cdot2^{n+1},SN}-r$ the intermittency of $f^{q\cdot2^{n+1}}$ shows the channels of the intermittency of $f^{q\cdot2^{n}}$ duplicated, but half of them contracted as $\frac{1}{\alpha}$ and the other half as $\frac{1}{\alpha^{2}}$. As the average time for the intermittency of $f^{q\cdot2^{n}}$ is $\left\langle l\right\rangle _{n}$, it turns out that the average time for the intermittency of $f^{q\cdot2^{n+1}}$ will be $\frac{\left\langle l\right\rangle _{n}}{\alpha}$, coming from the channels contracted by $\frac{1}{\alpha}$, plus $\frac{\left\langle l\right\rangle _{n}}{\alpha^{2}}$, given by the channels contracted by $\frac{1}{\alpha^{2}}$. In conclusion, the average duration of the laminar regime of the intermittency of $f^{q\cdot2^{n+1}}$ is $\left\langle l\right\rangle _{n}(\frac{1}{\alpha}+\frac{1}{\alpha^{2}})$, where $\left\langle l\right\rangle _{n}$ is the average duration of the laminar regime of $f^{q\cdot2^{n}}$, and both $f^{q\cdot2^{n}}$ and $f^{q\cdot2^{n+1}}$ are at the same distance $\varepsilon$ from the corresponding saddle-node bifurcation in parameter space, that is, $r=r_{q\cdot2^{n+1},SN}-\varepsilon$ and $r=r_{q\cdot2^{n},SN}-\varepsilon$.
Notice that in a saddle-node bifurcation cascade the laminar regime from one intermittency to the next one in the sequence is decreased by a factor $(\frac{1}{\alpha}+\frac{1}{\alpha^{2}})$. Accordingly, the average time for the intermittency of $f^{q\cdot2^{n+m}}$ is $$\left\langle l\right\rangle _{n+m}=\left\langle l\right\rangle _{n}(\frac{1}{\alpha}+\frac{1}{\alpha^{2}})^{m}\label{long-n-m}$$
Given the saddle-node bifurcation cascade of $f^{q\cdot2^{n}}$, $f^{q\cdot2^{n+1}}$, ..., $f^{q\cdot2^{n+m}}$, ..., if the bifurcation parameter of $f^{q\cdot2^{n}}$ is at $r_{q\cdot2^{n},SN}$ then the other bifurcation parameters are given by [@SanMartin05]
$$r_{q\cdot2^{n+1},SN}=\frac{1}{\delta}r_{q\cdot2^{n},SN}+(1-\frac{1}{\delta})r_{\infty}\label{Fei-Jesus}$$
where $\delta$ is the Feigenbaum constant, and $r_{\infty}$ is the Myrberg-Feigenbaum point of the canonical window, where the Feigenbaum cascade finishes and where all saddle-node bifurcation cascades also finish, whatever $q\neq2^{m}$ is.
Eqs. (\[long-n-m\]) and (\[Fei-Jesus\]) determine the intermittency cascade, because the parameter values at which it occurs and the average durations of the corresponding laminar regimes are known.
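Both quantities are straightforward to generate numerically; a hedged sketch for the $3\cdot2^{n}$ cascade (the constants $\delta$, $\alpha$ and $r_{\infty}$ are the standard Feigenbaum and logistic-map values, quoted here to limited precision):

```python
# Generate the bifurcation parameters of the 3*2^n saddle-node cascade
# from Eq. (Fei-Jesus), starting at r_{3,SN} = 1 + sqrt(8), and the
# laminar-time contraction factor of Eq. (long-n-m).

import math

delta = 4.669201609   # Feigenbaum delta
alpha = 2.502907875   # Feigenbaum alpha
r_inf = 3.569945672   # Myrberg-Feigenbaum point of the canonical window

def next_r(r_n):
    """Eq. (Fei-Jesus): r_{q 2^{n+1},SN} from r_{q 2^n,SN}."""
    return r_n / delta + (1.0 - 1.0 / delta) * r_inf

rs = [1.0 + math.sqrt(8.0)]   # r_{3,SN}
for _ in range(10):
    rs.append(next_r(rs[-1]))  # geometric approach to r_inf

# Eq. (long-n-m): each stage shrinks the mean laminar time by this factor.
shrink = 1.0 / alpha + 1.0 / alpha ** 2   # ~0.559
```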
The former results remain valid if an intermittency cascade occurs in a period-$j$ window instead of the canonical window, because both the scaling of the laminar regime $(\frac{1}{\alpha}+\frac{1}{\alpha^{2}})$ and Eq. (\[Fei-Jesus\]) are valid in a period-$j$ window, although Eq. (\[Fei-Jesus\]) turns into $$r_{q\cdot2^{n+1},SN}=\frac{1}{\delta}r_{q\cdot2^{n},SN}+(1-\frac{1}{\delta})r_{\infty,j}$$ to indicate that the convergence is now to the Myrberg-Feigenbaum point $r_{\infty,j}$ of the period-$j$ window.
There is a second way of changing the control parameter in the intermittency cascade, which is more relevant for experimenters.
Eq. (\[long-n-m\]) gives the average laminar times of an intermittency cascade associated with the saddle-node bifurcation cascade of $f^{q\cdot2^{n}}$, $f^{q\cdot2^{n+1}}$, ..., $f^{q\cdot2^{n+m}}$, ..., if the value of $\varepsilon$ is held constant. This value gives the distance from the control parameter to the saddle-node bifurcation parameter. Nonetheless, it is possible to change $\varepsilon$ at the successive saddle-node bifurcations in such a way that the average duration of the laminar regime is kept constant, and equal to $\left\langle l\right\rangle _{n}$, for the whole intermittency cascade. For that purpose, all we need is for the value of $\varepsilon$, taken at the first intermittency, to be rescaled by a factor $(\frac{1}{\alpha}+\frac{1}{\alpha^{2}})$ at each of the successive intermittencies of the cascade, that is, to take the values $$\varepsilon,\varepsilon(\frac{1}{\alpha}+\frac{1}{\alpha^{2}}),...,\varepsilon(\frac{1}{\alpha}+\frac{1}{\alpha^{2}})^{m},...\label{epsilones}$$ for $m=0,1,2,3,...$. This is so because $$\left\langle l\right\rangle \propto\frac{1}{\varepsilon}\label{eq:lm}$$ for the type-I intermittency present in the logistic equation. If the values of (\[epsilones\]) are introduced in Eq. (\[eq:lm\]) then the laminar regime is increased by the same factor that contracts it according to Eq. (\[long-n-m\]). The result is that the average duration of the laminar regime stays constant.
It is necessary to change $\varepsilon$ in this way. Since Eq. (\[Fei-Jesus\]) defines a geometric progression of ratio $\frac{1}{\delta}$, if we held $\varepsilon$ constant then very quickly the value $r=r_{q\cdot2^{n},SN}-\varepsilon$ would not lie within the parameter interval $\left[r_{q\cdot2^{n+m+1},SN},r_{q\cdot2^{n+m},SN}\right]$ and we would not observe the channels corresponding to two successive saddle-node bifurcations of the cascade. The result would be that $r\ll r_{q\cdot2^{n+m+1},SN}$ and the intermittency cascade would not be observed. Obviously, this is vital for experimenters and for the development of numerical experiments as well.
Bear in mind that when $\varepsilon$ changes as in Eq. (\[epsilones\]), it follows a geometric progression of ratio $(\frac{1}{\alpha}+\frac{1}{\alpha^{2}})$. This geometric progression converges faster than the progression of Eq. (\[Fei-Jesus\]). Accordingly, the parameter value $r$ can be kept such that $r\in\left[r_{q\cdot2^{n+m+1},SN},r_{q\cdot2^{n+m},SN}\right]$, and therefore the channels associated with the saddle-node bifurcation at $r_{q\cdot2^{n+m},SN}$ can be observed. It is necessary for the experimenter to change the parameter as in Eq. (\[epsilones\]) in order to stay close to the successive saddle-node bifurcations of the cascade and obtain a constant value of $\left\langle l\right\rangle$. It is easy to implement this variation because the bifurcation parameters are given by Eq. (\[Fei-Jesus\]).
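A minimal numerical sketch of this schedule for the $3\cdot2^{m}$ cascade (ours, not from the paper; the starting offset $\varepsilon_{0}=10^{-3}$ is an illustrative choice, and we only check the first few stages):

```python
# Rescale eps by (1/alpha + 1/alpha^2) at each stage, as in Eq. (epsilones),
# and check that r_m = r_{3*2^m,SN} - eps_m stays inside the window
# [r_{3*2^{m+1},SN}, r_{3*2^m,SN}] for the first stages of the cascade.

import math

delta, alpha, r_inf = 4.669201609, 2.502907875, 3.569945672
c = 1.0 / alpha + 1.0 / alpha ** 2   # laminar-time contraction factor

rs = [1.0 + math.sqrt(8.0)]          # r_{3,SN} and its successors
for _ in range(6):
    rs.append(rs[-1] / delta + (1.0 - 1.0 / delta) * r_inf)

eps0 = 1e-3
# r_m inside the window <=> eps_m smaller than the window width
inside = [rs[m + 1] < rs[m] - eps0 * c ** m < rs[m] for m in range(4)]
```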
Myrberg-Feigenbaum point structure
==================================
Fig. \[cap:fig1\] shows a saddle-node bifurcation of $f^{3}$. This same figure appears twice in Fig. \[cap:fig2\]. They are the first two elements of the saddle-node bifurcation cascade of $f^{3\cdot2^{n}}$. The bigger $n$ is, the more times Fig. \[cap:fig1\] is replicated in the graph of $f^{3\cdot2^{n}}$ along the line $y=x$.
As shown in the appendix, the number of copies of Fig. \[cap:fig1\] doubles every time we move one stage along a saddle-node bifurcation cascade; half of the copies are contracted by $\frac{1}{\alpha}$ and the other half by $\frac{1}{\alpha^{2}}$. The outcome is that in a saddle-node bifurcation cascade the points tangent to the line $y=x$ duplicate at the same time as the regions they are placed in contract.
We might expect to find some kind of Cantor set and, what is worse, one Cantor set for each period-$q\cdot2^{n}$, $q\neq2^{m}$, saddle-node bifurcation cascade, because all saddle-node bifurcation cascades approach the Myrberg-Feigenbaum point $r_{\infty}$ as $n\rightarrow\infty$. Nonetheless, the solution is extraordinarily simple at the limiting value $r_{\infty}$.
If we consider the cascade $q$, $q\cdot2$, ..., $q\cdot2^{n}$, ..., $q\neq2^{m}$, the graph of $f^{q}$ is reproduced in the neighborhoods of the critical points of $f^{\,2^{n}}$, which correspond to the points of the restricted supercycle given by $\left\{ \frac{1}{2},f(\frac{1}{2}),f^{2}(\frac{1}{2}),....,f^{2^{n}-1}(\frac{1}{2})\right\}$ (see appendix). Half of these neighborhoods are contracted by $\frac{1}{\alpha}$ and the other half by $\frac{1}{\alpha^{2}}$ each time we move one stage along the saddle-node bifurcation cascade, that is, each time the saddle-node points duplicate. Therefore, in the limit $n\rightarrow\infty$ each neighborhood has collapsed to a point of $\left\{ \frac{1}{2},f(\frac{1}{2}),f^{2}(\frac{1}{2}),....,f^{2^{n}-1}(\frac{1}{2})\right\} _{n\rightarrow\infty}$. The points of this set coincide with the period-doubling orbit as $n\rightarrow\infty$. The outcome is that the period-$2^{n}$ ($n\rightarrow\infty$) chaotic orbit coincides with the period-$q\cdot2^{n}$, $q\neq2^{m}$, $n\rightarrow\infty$ orbit (in the sense of the limit); that is, the limits cannot be told apart. We have the same limit orbit, both for the one which comes from the period-doubling cascade at $r<r_{\infty}$ and for the one which comes from the saddle-node bifurcation cascades at $r>r_{\infty}$.
The fact that the period-$q\cdot2^{n}$ saddle-node orbit tends to the period-$2^{n}$ orbit as $n\rightarrow\infty$ hides another fact: the collapse of the period-$q\cdot2^{n}$ window to a point at the Myrberg-Feigenbaum point $r_{\infty}$. This is so because the period-$q\cdot2^{n}$ window starts with the birth of the period-$q\cdot2^{n}$ saddle-node orbit and the period-$q\cdot2^{n+1}$ window starts with the birth of the period-$q\cdot2^{n+1}$ saddle-node orbit. As the births of both saddle-node orbits tend to $r_{\infty}$ as $n\rightarrow\infty$, the window length tends to zero. This result is captured in the expression (see [@SanMartin05]) $$\frac{L_{n}}{L_{n+1}}=\delta$$ which shows that the lengths of two successive windows of a saddle-node bifurcation cascade are contracted by a factor $\delta$, $\delta$ being the Feigenbaum constant; that is, the window length tends to zero.
If we consider the period-$q\cdot2^{n}$ window, it will have a Myrberg-Feigenbaum point $r_{\infty,q\cdot2^{n}}$ at which its corresponding Feigenbaum cascade finishes. As $n\rightarrow\infty$, the window length tends to zero, which has the following consequences and interpretation relative to the period-$q\cdot2^{n}$ window.
i\) The whole period-doubling process also collapses to a point, and the same happens to the rest of the saddle-node bifurcation cascades present in the period-$q\cdot2^{n}$ window, because the process occurs in a window whose length tends (collapses) to zero.
ii\) The Myrberg-Feigenbaum point $r_{\infty}$ of the canonical window and the Myrberg-Feigenbaum point $r_{\infty,q\cdot2^{n}}$ of the period-$q\cdot2^{n}$ window get closer to each other as $n$ grows; the distance between them tends to zero as $n\rightarrow\infty$. The same happens with every one of the saddle-node bifurcation cascades located in the period-$q\cdot2^{n}$ window. Accordingly, the accumulation point of every saddle-node bifurcation cascade (a new Myrberg-Feigenbaum point) tends to $r_{\infty,q\cdot2^{n}}$, and therefore to $r_{\infty}$. The process applies again to the new windows born from a saddle-node bifurcation cascade, and so forth.
This explains why a Myrberg-Feigenbaum point is an attractor of other Myrberg-Feigenbaum points, which in turn are attractors of other Myrberg-Feigenbaum points, and so forth (see [@SanMartin05]).
The former convergence process has been expounded for a fixed saddle-node bifurcation cascade with $q\neq2^{m}$, but it is valid for any value of $q$. Therefore there are infinitely many sequences which mimic the former process, one for each value of $q\neq2^{m}$.
The approach to the Myrberg-Feigenbaum point, and the multiplicity of convergent sequences, completely explain the mechanism of the attractor of attractors introduced in [@SanMartin05].
CONCLUSIONS
===========
The presence of a saddle-node bifurcation cascade in the logistic equation assures the genesis of an intermittency cascade. Each saddle-node bifurcation of the bifurcation cascade is associated with an intermittency. As the locations of the saddle-node bifurcations are known, so is the genesis of the intermittency cascade. The a priori knowledge of the duration of the laminar regime in the type-I intermittency, and of the scaling of the peaks and valleys of the successive iterates of the logistic map, allows us to establish the duration of the laminar regime in the intermittency cascade.
The intermittency cascade is a phenomenon that takes place in all windows of the logistic map, and not only in windows associated with first-occurrence orbits.
It is proved that the windows collapse to the Myrberg-Feigenbaum points, this mechanism being responsible for the fact that Myrberg-Feigenbaum points are attractors of attractors.
Acknowledgments {#acknowledgments .unnumbered}
===============
The author wishes to thank Daniel Rodríguez-Pérez for helpful discussions and help in the preparation of the manuscript.
APPENDIX {#appendix .unnumbered}
========
Let us find the scaling law of high-order cycles in the saddle-node bifurcation cascade. To do so we will follow Feigenbaum's work [@Feigenbaum80], which is carried out close to a pitchfork bifurcation.
The scaling law is not determined by the location of the elements on the $x$-axis, but by their order as iterates of $x=\frac{1}{2}$ (or of $x=0$ after a coordinate translation that moves $x=\frac{1}{2}$ to $x=0$). This is the point which necessarily belongs to any supercycle. Feigenbaum denotes the distance from the $m$-th element of a $2^{n}$-supercycle to its nearest neighbor by $$d_{n}(m)=x_{m}-f_{R_{n}}^{2^{n-1}}(x_{m})$$ where $R_{n}$ is the control parameter value at which the supercycle occurs.
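The $d_{n}(m)$ scaling can be checked numerically for $m=0$, the critical point $x=\frac{1}{2}$; a hedged sketch (our own secant-based search for the superstable parameters $R_{n}$, not the paper's procedure):

```python
# Locate the superstable parameters R_n of the period-2^n cycles of the
# logistic map, compute d_n = f^{2^{n-1}}_{R_n}(1/2) - 1/2, and check that
# successive ratios d_n / d_{n+1} approach -alpha ~ -2.5029.

def f_iter(x, r, n):
    """n-fold iterate of the logistic map f(x) = r x (1 - x)."""
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

def supercycle(n, guess):
    """Secant solve of f^{2^n}_R(1/2) = 1/2 for the parameter R."""
    g = lambda r: f_iter(0.5, r, 2 ** n) - 0.5
    r0, r1 = guess, guess + 1e-4
    for _ in range(80):
        f0, f1 = g(r0), g(r1)
        if abs(f1) < 1e-13 or f1 == f0:
            break
        r0, r1 = r1, r1 - f1 * (r1 - r0) / (f1 - f0)
    return r1

# R_0 (period 1) and R_1 (period 2) are known in closed form.
R = [2.0, 1.0 + 5.0 ** 0.5]
for n in range(2, 6):                       # R_2 .. R_5 (periods 4 .. 32)
    guess = R[-1] + (R[-1] - R[-2]) / 4.67  # extrapolate with delta ~ 4.67
    R.append(supercycle(n, guess))

d = [f_iter(0.5, R[n], 2 ** (n - 1)) - 0.5 for n in range(1, 6)]
ratios = [d[i] / d[i + 1] for i in range(len(d) - 1)]  # -> -alpha
```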
To generalize the latter definition to the period-$q\cdot2^{n}$ saddle-node orbit, which is in the saddle-node bifurcation cascade, we are not going to take into account all its points, but only a few very particular ones.
As a period-$q\cdot2^{n}$ saddle-node orbit undergoes a period-doubling process, there will be a control parameter value, prior to the duplication, at which the orbit will be a period-$q\cdot2^{n}$ supercycle. Let $R_{n,q}$ be such a parameter, and let $$\left\{ \frac{1}{2},f(\frac{1}{2}),\ldots,f^{q\cdot2^{n}-1}(\frac{1}{2})\right\} \label{s1}$$ be the supercycle in question.
We extract from the supercycle the sequence $$\left\{ x_{m,q}\right\} _{m=1}^{m=2^{n}}=\left\{ x_{1,q}=\frac{1}{2},\,x_{2,q}=f(\frac{1}{2}),\,x_{3,q}=f^{2}(\frac{1}{2}),\ldots,x_{2^{n},q}=f^{2^{n}-1}(\frac{1}{2})\right\}$$ which we will name the “restricted supercycle”. The restricted supercycle consists of $2^{n}$ points, for every one of which the function $f^{2^{n}}$ has a critical point close to the line $y=x$. The restricted supercycle is similar to the supercycle with which Feigenbaum works in the period-doubling cascade, but it differs in that there are $q$ points of the supercycle (\[s1\]) around every critical point close to the line $y=x$. Furthermore, the neighborhood of every critical point is visited $q$ times if the order given by the supercycle (\[s1\]) is followed. In other words, the graph of $f^{q}$ appears in the neighborhood of every point of a restricted supercycle. Accordingly, the scaling law of the restricted supercycle gives the scaling law of the supercycle (\[s1\]).
What we are doing is to classify the $q\cdot2^{n}$ points of the saddle-node orbit into $2^{n}$ sets, each one having $q$ points. The $2^{n}$ sets correspond to the $2^{n}$ critical points of $f^{2^{n}}$ closest to the line $y=x$. The neighborhood of each one of these critical points captures $q$ saddle-node points; in other words, the graph of $f^{q}$ is captured in every one of such neighborhoods. For instance, in $f^{3\cdot2^{2}}$ we notice that the graph of $f^{3}$ is captured in the neighborhoods of the $2^{2}$ critical points of $f^{2^{2}}$ (Fig. \[cap:fig4\]).
Let’s denote the distance from the m-th element of a $2^{n}$-restricted-supercycle to its nearest neighbor by
$$d_{n,q}(m)=x_{m,q}-f_{R_{n,q}}^{2^{n-1}}(x_{m,q})$$
Let’s define the scaling (see Eq. (56) of [@Feigenbaum80]) by
$$\sigma_{n,q}(m)=\frac{d_{n+1,q}(m)}{d_{n,q}(m)}$$
Bearing in mind that $x_{m}=f_{R_{n,q}}^{m}(0)$, if we set $m=2^{n-i}$, with $1\ll i\ll n$, then $\sigma_{n,q}(2^{n-i})$ can be approximated by (see Eq. (57) of [@Feigenbaum80])
$$\sigma_{n,q}(2^{n-i})\sim\frac{g_{i+1,q}(0)-g_{i+1,q}\left[(-\alpha)^{-i}g_{1,q}(0)\right]}{g_{i,q}(0)-g_{i,q}\left[(-\alpha)^{-i+1}g_{1,q}(0)\right]}$$ where $g_{i,q}$ are the functions defined in [@SanMartin05].
The new variable
$$t_{n}(m)=\frac{m}{2^{n}}$$ or $$t_{n}(2^{n-i})=2^{-i}$$ rescales the axis of iterates in such a way that all $2^{n+1}$ iterates are within a unit interval.
Defining $\sigma_{,q}(t_{n}(m))\sim\sigma_{n,q}(m)$ (as $n\rightarrow\infty$), it turns out that (see Eq. (60) of [@Feigenbaum80])
$$\sigma_{,q}(-2^{-i-1})=\frac{g_{i+1,q}(0)-g_{i+1,q}\left[(-\alpha)^{-i}g_{1,q}(0)\right]}{g_{i,q}(0)-g_{i,q}\left[(-\alpha)^{-i+1}g_{1,q}(0)\right]}$$
In the limit $i\rightarrow\infty$ it yields
$$\sigma_{,q}(-2^{-i-1})_{i\rightarrow\infty}=\frac{g(0)-g\left[(-\alpha)^{-i}g_{1,q}(0)\right]}{g(0)-g\left[(-\alpha)^{-i+1}g_{1,q}(0)\right]}=\frac{1}{\alpha^{2}}$$ where
$$g_{i+1,q}(x)\rightarrow_{i\rightarrow\infty}g(x)$$ has been used (see [@SanMartin05]), together with the fact that $g(x)$ has a quadratic maximum, so that
$$g\left[(-\alpha)^{-i}g_{1,q}(0)\right]\simeq g(0)+\frac{1}{2}g^{''}(0)\cdot(-\alpha)^{-2i}g_{1,q}^{2}(0)$$
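Substituting this quadratic expansion into both the numerator and the denominator of the preceding ratio makes the cancellation explicit: the constant terms $g(0)$ cancel, as do $\frac{1}{2}g^{''}(0)$ and $g_{1,q}^{2}(0)$, leaving only the powers of $\alpha$,
$$\frac{g(0)-g\left[(-\alpha)^{-i}g_{1,q}(0)\right]}{g(0)-g\left[(-\alpha)^{-i+1}g_{1,q}(0)\right]}\simeq\frac{-\frac{1}{2}g^{''}(0)\,\alpha^{-2i}\,g_{1,q}^{2}(0)}{-\frac{1}{2}g^{''}(0)\,\alpha^{-2i+2}\,g_{1,q}^{2}(0)}=\frac{\alpha^{-2i}}{\alpha^{-2i+2}}=\frac{1}{\alpha^{2}}$$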
Let’s notice that the scaling law does not depend on $q$, so we can drop this label and simply write
$$\sigma(-2^{-i-1})_{i\rightarrow\infty}=\frac{g(0)-g\left[(-\alpha)^{-i}g_{1,q}(0)\right]}{g(0)-g\left[(-\alpha)^{-i+1}g_{1,q}(0)\right]}=\frac{1}{\alpha^{2}}$$
The independence of $q$ is essential to show that the whole set of saddle-node bifurcations (for any $q$) scales as the set of pitchfork bifurcations described by Feigenbaum. Once $\sigma(-2^{-i-1})$ has been calculated, Feigenbaum [@Feigenbaum80] writes the numbers in binary expansion and demonstrates that $\left|\sigma\right|$ behaves as $\sim\frac{1}{\alpha}$ half the time and as $\sim\frac{1}{\alpha^{2}}$ the other half.
[10]{}
M. J. Feigenbaum, "Quantitative Universality for a Class of Nonlinear Transformations". J. Stat. Phys. 19, 25 (1978)
M. J. Feigenbaum, "The Universal Metric Properties of Nonlinear Transformations". J. Stat. Phys. 21, 669-706 (1979)
D. Ruelle and F. Takens, "On the Nature of Turbulence". Commun. Math. Phys. 20, 167-192 (1971)
S. E. Newhouse, D. Ruelle and F. Takens, "Occurrence of Strange Axiom A Attractors near Quasi-Periodic Flows on $T^{m}$ ($m=3$ or more)". Commun. Math. Phys. 64, 35 (1978)
P. Manneville and Y. Pomeau, "Intermittency and the Lorenz Model". Phys. Lett. 75A, 1-2 (1979)
Y. Pomeau and P. Manneville, "Intermittent transition to turbulence in dissipative dynamical systems". Commun. Math. Phys. 74, 189-197 (1980)
C. V. Anil Kumar and T. R. Ramamohan, "New Class I intermittency in the dynamics of periodically forced spheroids in simple shear flow". Phys. Lett. A 227, 72-78 (1997)
M. Bauer, S. Habip, D. R. He and W. Martienssen, "New type of intermittency in discontinuous maps". Phys. Rev. Lett. 68, 1625-1628 (1992)
J. San-Martín and J. C. Antoranz, "Type-I and Type-II Intermittencies with Two Channels of Reinjection". Chaos, Solitons & Fractals 10, 1539-1544 (1999)
J. San-Martín, "Universal Scaling in Saddle-Node Bifurcation Cascades (I)". \[nlin.CD/0501035\]
J. Guckenheimer, "Renormalization of one dimensional mappings". Contemp. Math. 58, pt. III, 143-160 (1987)
M. J. Feigenbaum, "Universal Behavior in Nonlinear Systems". Los Alamos Science 1, 4-27 (1980)