---
abstract: 'The function of a real network depends not only on the reliability of its own components, but also on the simultaneous operation of the other real networks coupled with it. Robustness of systems composed of interdependent network layers has been extensively studied in recent years. However, the theoretical frameworks developed so far apply only to special models in the limit of infinite sizes. These methods are therefore of little help in practical contexts, given that real interconnected networks have finite size and their structures are generally not compatible with those of graph toy models. Here, we introduce a theoretical method that takes as inputs the adjacency matrices of the layers to draw the entire phase diagram for the interconnected network, without the need to actually simulate any percolation process. We demonstrate that percolation transitions in arbitrary interdependent networks can be understood by decomposing these systems into uncoupled graphs: the intersection among the layers, and the remainders of the layers. When the intersection dominates the remainders, an interconnected network undergoes a continuous percolation transition. Conversely, if the intersection is dominated by the contribution of the remainders, the transition becomes abrupt even in systems of finite size. We provide examples of real systems that have developed interdependent networks sharing a core of “high quality” edges to prevent catastrophic failures.'
author:
- Filippo Radicchi
title: Percolation in real interdependent networks
---

Percolation is among the most studied topics in statistical physics [@stauffer1991introduction]. The model used to mimic percolation processes assumes the existence of an underlying network of arbitrary structure. Regular grids are traditionally considered to model percolation in materials [@kirkpatrick1973percolation; @berkowitz1995analysis]. 
Complex graphs are instead assumed as underlying supports in the analysis of spreading phenomena in social environments [@pastor2001epidemic; @newman2002spread], or in robustness studies of technological and infrastructural systems [@albert2000error; @cohen2000resilience; @callaway2000network]. Once the network has been specified, a configuration of the percolation model is generated assuming nodes (or sites) present with probability $p$. For $p=0$, only a disconnected configuration is possible. For $p=1$ instead, all nodes are within the same connected cluster. As the occupation probability varies, the network undergoes a structural transition between these two extreme configurations. Although there are special substrates, e.g., one-dimensional lattices, where the percolation transition may be discontinuous, in the majority of the cases, random percolation models give rise to continuous structural changes [@dorogovtsev2008critical]. This means that the size of the largest cluster in the network, used as a proxy for the connectedness of the system, increases from the non-percolating to the percolating phases in a smooth fashion. ![Decomposition of interconnected networks into uncoupled graphs. [**a)**]{} Schematic example of two coupled networks $A$ and $B$. In this representation, nodes of the same color are one-to-one interdependent. [**b)**]{} In the percolation model, the interconnected system is equivalent to a set of three graphs that do not share any edge: the remainders of the network layers $A$ and $B$, and their intersection. []{data-label="fig:1"}](fig1.pdf){width="45.00000%"} The percolation transition may become discontinuous in a slightly different model involving not just a single network, but a system composed of two or more interdependent graphs [@buldyrev2010catastrophic]. This is a very realistic scenario considering that many, if not all, real graphs are “coupled” with other real networks. 
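As a concrete aside before moving to coupled layers, ordinary site percolation on a single graph can be simulated directly: occupy each node with probability $p$ and measure the relative size of the largest cluster of occupied nodes. Below is a minimal pure-Python sketch (an illustration only, with function names of our choosing, not the paper's code):

```python
import random
from collections import defaultdict

def er_graph(n, k_avg, rng):
    """Erdos-Renyi layer: each pair of nodes linked with probability k_avg / n."""
    adj = defaultdict(set)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < k_avg / n:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def largest_cluster_size(adj, occupied):
    """Size of the largest connected cluster among occupied nodes (DFS)."""
    seen, best = set(), 0
    for start in occupied:
        if start in seen:
            continue
        seen.add(start)
        stack, size = [start], 0
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w in occupied and w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best

def p_infinity(adj, n, p, rng):
    """One realization of the order parameter at occupation probability p."""
    occupied = {i for i in range(n) if rng.random() < p}
    return largest_cluster_size(adj, occupied) / n
```

Averaging `p_infinity` over many realizations at each $p$ traces the smooth percolation diagram described above.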
Examples can be found in several domains: social networks (e.g., Facebook, Twitter, etc.) are coupled because they share the same actors [@szell2010multirelational]; multimodal transportation networks are composed of different layers (e.g., bus, subway, etc.) that share the same locations [@barthelemy2011spatial; @de2014navigability]; communication systems and power grids depend on each other to function [@buldyrev2010catastrophic]. In the simplest case, one considers an interconnected system composed of only two network layers. Nodes in both layers are uniquely identified, but the way they are connected to the other vertices is not necessarily identical (see Fig. 1). In the percolation model defined on this system, nodes are still present with probability $p$. Since the networks are interdependent, the presence of a node in one layer implies the presence of the same vertex in the other layer. However, as $p$ varies, one node may not be simultaneously within the largest clusters of both layers. In such a case, the vertex is said to be outside the largest cluster of mutually connected nodes. This is a set of nodes identified in a recursive manner, and composed of vertices that are simultaneously in the largest clusters of both network layers thanks only to connections with other nodes within the set. It has been proved that, in infinitely large interconnected systems composed of two uncorrelated random networks, the percolation transition, monitored through the size of the largest cluster of mutually connected nodes, is discontinuous [@buldyrev2010catastrophic; @gao2012networks; @son2012percolation]. This result has, however, been shown not to apply to more general network models that account for degree correlations [@PhysRevX.4.021014; @reis2014avoiding]. Unfortunately, all these theoretical approaches have been developed under two special, and unrealistic, assumptions. 
First, they hypothesize that network layers are generated according to some kind of graph toy model whose topology is not specified by a one-zero adjacency matrix, but rather by a list of probabilities for pairs of nodes to be connected. Second, they apply only to the case of infinitely large systems. Real interdependent networks, on the other hand, are composed of layers very different from those that can be generated with toy models, and they clearly have finite size. In this paper, we introduce a novel theoretical approach directly applicable to the study of percolation transitions in real interdependent networks. To illustrate our methodology, we start from a simplified version of a recent theory developed for percolation in real isolated networks [@bollobas2010percolation; @PhysRevLett.113.208702; @PhysRevLett.113.208701]. Consider an undirected and unweighted graph composed of $N$ nodes and $E$ edges. The structure of the graph is encoded by the adjacency matrix $A$, a symmetric $N \times N$ matrix whose generic element $A_{ij}$ equals one if vertices $i$ and $j$ share an edge, and zero otherwise. Without loss of generality, we assume that, when all nodes are present in the network, the graph is formed by a single connected component. Let us consider an arbitrary value of the site occupation probability $p \in (0,1)$, and indicate with $s_i$ the probability that the generic node $i$ is part of the largest cluster. The order parameter of the percolation transition is simply defined as the average of these probabilities over all nodes in the graph, i.e., $P_\infty = \frac{1}{N} \sum_i s_i$. Note that $s_i$ is a function of $p$, but, in the following, we omit this dependence for brevity of notation. 
As a first attempt, we can say that the probability $s_i$ for node $i$ to be part of the largest cluster is given by $$s_i = p \, [ 1 - \prod_{j \in \mathcal{N}_i} \, (1 - s_j) \, ] \; , \label{eq:site_dense}$$ where $\mathcal{N}_i$ is the set of neighbors of vertex $i$. The probability $s_i$ is written as the product of two contributions: (i) the probability that the node is occupied; (ii) the probability that at least one of its neighbors is part of the largest cluster. The ansatz of Eq. (\[eq:site\_dense\]) relies on the so-called locally tree-like approximation [@dorogovtsev2008critical]. In this ansatz, neighbors of node $i$ are not directly connected, and this allows us to consider the probabilities $s_j$ as independent variables. This approximation typically holds in real networks [@PhysRevLett.113.208702], but does not apply to regular lattices. Introducing the vectors $\vec{u}$ and $\vec{q}$, whose $i$-th components are respectively $u_i = \ln(1-s_i)$ and $q_i = \ln(1 - s_i/p)$, we can recast the set of coupled equations (\[eq:site\_dense\]) as the single vectorial equation $$\vec{q} = A \, \vec{u} \;. \label{eq:site_dense_vec}$$ A trivial solution of Eq. (\[eq:site\_dense\_vec\]) is given by the configuration $\vec{u} = \vec{q} = \vec{0}$, corresponding to $\vec{s} = \vec{0}$, i.e., $s_i=0$ for all $i=1, \ldots, N$. In the proximity of this configuration, we can make use of the truncated Taylor expansion $\ln{(1-x)} \simeq - x$, and Eq. (\[eq:site\_dense\_vec\]) reads as $$\vec{s} = p A \vec{s} \;, \label{eq:stability_site_dense}$$ which is an eigenvalue/eigenvector equation. By the Perron-Frobenius theorem, the only physically meaningful solution of this equation is obtained by setting $p = 1/\lambda$ and $\vec{s} = \vec{l}$, with $(\lambda, \vec{l})$ principal eigenpair of the adjacency matrix $A$. This tells us that the solution of Eq. 
(\[eq:site\_dense\]) is $s_i = 0$, for all $i=1, \ldots, N$, if the site occupation probability is smaller than $1/\lambda$. In this region, the network is in the non-percolating regime. Slightly on the right of $1/\lambda$, the vector of probabilities $\vec{s}$ starts to grow in the direction of the principal eigenvector of the adjacency matrix, and the order parameter is no longer zero. For any value of the site occupation probability larger than $1/\lambda$, the network is in the percolating phase. The site percolation threshold obtained using the approximation of Eq. (\[eq:site\_dense\]) is thus $p_c = 1/\lambda$. Further, Eq. (\[eq:site\_dense\]) can be solved numerically to draw the percolation diagram of the network in this approximation. The most serious limitation of Eq. (\[eq:site\_dense\]) is that it introduces a positive feedback among the probabilities. An increment in the probability $s_i$ produces an increase in the probabilities $s_j$ of the neighbors, which in turn causes an increment in the probability $s_i$, and so on. To avoid the presence of this self-reinforcement effect, we can rewrite Eq. (\[eq:site\_dense\]) as $$s_i = p \,[\, 1 - \prod_{j \in \mathcal{N}_i} \, ( 1 - r_{i \to j}) \, ] \; . \label{eq:site_sparse1}$$ This equation still relies on the locally tree-like ansatz. Here, $r_{i \to j}$ stands for the probability that node $j$ is part of the largest cluster independently of vertex $i$. We note that, while this quantity can be defined for any pair of nodes, only contributions given by adjacent vertices play a role in Eq. (\[eq:site\_sparse1\]). We can think of $r_{i \to j}$ as one of the $2E$ components of a vector $\vec{r}$. In the definition of $\vec{r}$, every edge $(i,j)$ in the graph is responsible for two entries, i.e., $r_{i \to j}$ and $r_{j \to i}$. For consistency, the probability $r_{i \to j}$ obeys $$r_{i \to j} = p \, [ 1 - \prod_{k \in \mathcal{N}_j \setminus \{i\}} \, ( 1 - r_{j \to k}) \, ] \; . 
\label{eq:site_sparse2}$$ The product of the r.h.s. of the last equation runs over all neighbors of node $j$ excluding vertex $i$. It is convenient to rewrite Eq. (\[eq:site\_sparse2\]) as $\ln{(1 - r_{i \to j} / p)} = \sum_k A_{jk} \, \ln{(1 - r_{j \to k})} - A_{ji} \ln{(1 - r_{j \to i})}$. Defining the vectors $\vec{w}$ and $\vec{t}$ such that their $(i \to j)$-th components are respectively given by $w_{i \to j} = \ln(1 - r_{i \to j})$ and $t_{i \to j} = \ln(1 - r_{i \to j} /p )$, the system of equations (\[eq:site\_sparse2\]) becomes equivalent to the vectorial equation $$\vec{t} = M \, \vec{w} \;. \label{eq:site_sparse2_vec}$$ The generic element of the $2E \times 2E$ square matrix $M$ is given by $$M_{i \to j, k \to \ell} = \delta_{j,k} (1 - \delta_{i,\ell}) \; , \label{eq:nonback}$$ where $\delta_{x,y}$ is the Kronecker delta function defined as $\delta_{x,y} =1$ if $x=y$, and $\delta_{x,y} =0$, otherwise. Thus, the generic entry of the matrix $M$ is different from zero only if the ending node of the edge $i \to j$ corresponds to the starting vertex of the edge $k \to \ell$, but the starting and ending nodes $i$ and $\ell$ are different. This matrix is known as the non-backtracking matrix of the graph [@hashimoto1989zeta; @krzakala2013spectral]. A trivial solution of the preceding equation is given by $\vec{r}=\vec{0}$, which in turn leads to $\vec{s}=\vec{0}$. In the proximity of this configuration, we can still make use of the truncated Taylor expansion of the logarithm, and rewrite Eqs. (\[eq:site\_sparse1\]) and  (\[eq:site\_sparse2\_vec\]) respectively as $$s_{i} = p \sum_j A_{ij} r_{i \to j} \quad \textrm{and} \quad \vec{r} = p \, M \, \vec{r} \;. \label{eq:stability_site_sparse}$$ Using arguments similar to those applied to Eq. (\[eq:stability\_site\_dense\]), we can say that, according to Eq. 
(\[eq:stability\_site\_sparse\]), the percolation threshold equals $p_c = 1/\mu$, with $\mu$ principal eigenvalue of the non-backtracking matrix of the graph, and that slightly on the right of the critical point the probability $s_i$ grows linearly with the sum of the components of the principal eigenvector of the non-backtracking matrix corresponding to edges pointing out from node $i$. The entire percolation diagram can be instead obtained by numerically solving the system of Eqs. (\[eq:site\_sparse1\]) and  (\[eq:site\_sparse2\]). To summarize, the results presented so far convey two main messages. First, the difference between the two approaches resides only in the inclusion or exclusion of self-reinforcement effects among local variables. In this sense, Eqs. (\[eq:site\_sparse1\]) and (\[eq:site\_sparse2\]) represent an improvement over Eq. (\[eq:site\_dense\]), but both approaches are based on the same principles and approximations. This first observation serves to unify recent predictions on percolation thresholds within the same theory [@bollobas2010percolation; @PhysRevLett.113.208702; @PhysRevLett.113.208701]. Second, the way in which individual probabilities behave slightly on the right of the critical point allows us to understand why the prediction of the percolation threshold of Eq. (\[eq:stability\_site\_sparse\]) may become inaccurate in networks with localized eigenstates of the non-backtracking matrix [@radicchi2014predicting]. Next, we propose the generalization of the previous equations to describe percolation transitions in two interdependent networks. Indicate with $A$ and $B$ the adjacency matrices of the two network layers. 
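For a single isolated layer, both threshold estimates derived above ($p_c = 1/\lambda$ from the adjacency matrix and $p_c = 1/\mu$ from the non-backtracking matrix) reduce to the computation of a principal eigenvalue. A minimal pure-Python sketch via power iteration (an illustration of the two formulas under the stated approximations, not the author's code; convergence of the plain power iteration is assumed):

```python
def power_eig(mult, dim, iters=200):
    """Principal eigenvalue of a non-negative matrix, given as a function
    `mult` that multiplies the matrix by a vector (power iteration)."""
    v = [1.0] * dim
    lam = 0.0
    for _ in range(iters):
        w = mult(v)
        lam = max(abs(x) for x in w)
        if lam == 0.0:
            return 0.0
        v = [x / lam for x in w]
    return lam

def adjacency_threshold(A):
    """p_c = 1 / lambda, with lambda the principal eigenvalue of A."""
    n = len(A)
    mult = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return 1.0 / power_eig(mult, n)

def nonbacktracking_threshold(A):
    """p_c = 1 / mu, with mu the principal eigenvalue of the 2E x 2E
    non-backtracking matrix M_{i->j, k->l} = delta_{j,k} (1 - delta_{i,l})."""
    edges = [(i, j) for i in range(len(A)) for j in range(len(A)) if A[i][j]]
    def mult(v):
        return [sum(v[b] for b, (k, l) in enumerate(edges) if k == j and l != i)
                for (i, j) in edges]
    return 1.0 / power_eig(mult, len(edges))
```

For a ring, for instance, the two estimates give $p_c = 1/2$ and $p_c = 1$ respectively, illustrating how the non-backtracking estimate removes the spurious contribution of backtracking paths.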
Our first attempt to write the probability $s_i$ that node $i$ is in the largest mutually connected cluster of the system is given by $$s_i = p \; [ S_{\mathcal{AB}_i} + (1- S_{\mathcal{AB}_i}) \; S_{\mathcal{A-B}_i} \; S_{\mathcal{B-A}_i} ] \, , \label{eq:interd_dense}$$ where $S_{\mathcal{X}} = 1 - \prod_{j \in \mathcal{X}} (1 - s_j)$ is the probability that at least one of the nodes $j$ in the set $\mathcal{X}$ is part of the largest cluster (for the empty set $\emptyset$, we have $S_{\emptyset} = 0$). In the definition of Eq. (\[eq:interd\_dense\]), we have implicitly defined three disjoint sets of nodes: $\mathcal{AB}_i = \mathcal{N}^{A}_i \cap \mathcal{N}^{B}_i$ is the set of nodes that are neighbors of vertex $i$ in both layers, $\mathcal{A-B}_i = \mathcal{N}^{A}_i \setminus \mathcal{AB}_i$ is the set of nodes connected to vertex $i$ only in layer $A$ but not in $B$, and $\mathcal{B-A}_i = \mathcal{N}^{B}_i \setminus \mathcal{AB}_i$ is the set of nodes that are neighbors of vertex $i$ in layer $B$ but not in $A$. Eq. (\[eq:interd\_dense\]) essentially states that, given that the vertex is occupied, the probability $s_i$ for node $i$ of being part of the largest mutually connected cluster is given by the sum of two contributions: (i) the probability to be connected to the largest cluster thanks to at least one vertex that is connected to $i$ in both layers; (ii) if the latter condition is not true, the probability that node $i$ is connected to the largest cluster through at least one node $k$ in layer $A$ and one node $\ell$ in layer $B$, with $k \neq \ell$. Note that, if the network layers are identical, then Eq. (\[eq:interd\_dense\]) correctly reduces to Eq. (\[eq:site\_dense\]). In other terms, one can split the set of edges in the system into three different subsets, and then construct three different graphs on the basis of this unique division (see Fig. 
1): the intersection graph with adjacency matrix given by the Hadamard product of the matrices $A$ and $B$ \[i.e., the $(i,j)$-th element of the adjacency matrix is $A_{ij}B_{ij}$\]; the remnant of network $A$, where edges between nodes $i$ and $j$ are present only if $A_{ij}(1-B_{ij})=1$; the remainder of graph $B$, whose $(i,j)$-th adjacency matrix element equals $B_{ij}(1-A_{ij})$. If we make use of the vector $\vec{u}$ previously defined, we can write $S_{\mathcal{AB}_i} = 1 - \exp{[\sum_j A_{ij} B_{ij} u_j]}$, $S_{\mathcal{A - B}_i} = 1 - \exp{[\sum_j A_{ij} (1-B_{ij}) u_j]}$ and $S_{\mathcal{B - A}_i} = 1 - \exp{[\sum_j B_{ij} (1-A_{ij}) u_j]}$. Thus, the numerical solution of Eq. (\[eq:interd\_dense\]) can be obtained in a certain number of iterations, each having a computational complexity that grows at most as the number of edges present in the denser layer. Unfortunately, the Taylor expansion of the r.h.s. of Eq. (\[eq:interd\_dense\]) gives us only some insights about the structure of the solution, but it does not allow us to reduce the original problem to a simple eigenvalue/eigenvector equation as in the case of isolated networks (see Appendix). ![image](fig2.pdf){width="75.00000%"} Also here, we can avoid the presence of self-reinforcing mechanisms among variables by excluding already visited edges. The equations read as $$s_i = p \; [ R_{\mathcal{AB}_i} + (1- R_{\mathcal{AB}_i}) \; R_{\mathcal{A-B}_i} \; R_{\mathcal{B-A}_i} ] \, , \label{eq:interd_sparse1}$$ and $$r_{i \to j} = p \; [ R_{\mathcal{AB}_{j} \setminus \{i\}} + (1- R_{\mathcal{AB}_{j} \setminus \{i\}}) \; R_{\mathcal{A-B}_{j} \setminus \{i\}} \; R_{\mathcal{B-A}_{j} \setminus \{i\}} ] \, , \label{eq:interd_sparse2}$$ with $R_{\mathcal{X}_i} = 1 - \prod_{j \in \mathcal{X}_i} \, (1 - r_{i \to j})$, and the three sets $\mathcal{AB}_i$, $\mathcal{A-B}_i$ and $\mathcal{B-A}_i$ are defined as above. If the network layers are identical, then Eqs. 
(\[eq:interd\_sparse1\]) and (\[eq:interd\_sparse2\]) reduce to Eqs. (\[eq:site\_sparse1\]) and (\[eq:site\_sparse2\]). If we indicate with $\vec{r}^{(AB)}$ the vector whose components are generated by edges present in the intersection graph, $\vec{w}^{(AB)}$ the vector with entries of the type $\vec{w}^{(AB)}_{i \to j} = \ln(1 - \vec{r}^{(AB)}_{i \to j} )$, and $M^{(AB)}$ the non-backtracking matrix obtained from the adjacency matrix of the intersection between the layers, we can write $R_{\mathcal{AB}_{j} \setminus \{i\}} = 1 - \exp{[(M^{(AB)} \, \vec{w}^{(AB)})_{i \to j}]}$. In a similar spirit, we can also write $R_{\mathcal{A-B}_{j} \setminus \{i\}} = 1 - \exp{[(M^{(A-B)} \, \vec{w}^{(A-B)})_{i \to j}]}$ and $R_{\mathcal{B-A}_{j} \setminus \{i\}} = 1 - \exp{[(M^{(B-A)} \, \vec{w}^{(B-A)})_{i \to j}]}$, where these equations are valid only for edges that belong to either layer $A$ or layer $B$. Obtaining a numerical solution of Eqs. (\[eq:interd\_sparse1\]) and (\[eq:interd\_sparse2\]) by iteration is thus relatively fast, since every iteration has a computational complexity at most equal to twice the number of edges present in the denser network layer. This is a substantial advantage given the high computational cost of the algorithms necessary to draw the phase diagram for the percolation process in interdependent networks by means of direct numerical simulations [@hwang2014efficient]. ![ Percolation transition in interdependent biological networks. [**a)**]{} Phase diagram for the multilayer [*H. sapiens*]{} protein interaction network [@stark2006biogrid; @de2014muxviz]. Edges in different layers represent diverse types of connections among proteins: direct interaction, physical association, and colocalization. When analyzing a multiplex with two of these layers, we restrict our attention only to the set of nodes present in both layers. 
For each of the three systems formed by two interconnected networks that we can generate with this data, we draw the percolation diagram by means of numerical simulations (large symbols) and numerical solution of our equations (small symbols). [**b)**]{} Phase diagram for the multilayer network of the [*C. elegans*]{} connectome [@de2014muxviz]. Edges in different layers represent different types of synaptic junctions among the neurons: electrical, chemical monadic, and chemical polyadic. [**c)**]{} Decomposition of the multilayer [*C. elegans*]{} connectome: remnant of the layer corresponding to electrical junctions, [**d)**]{} intersection among the layers corresponding to electrical and chemical monadic interactions, and [**e)**]{} remainder of the layer corresponding to chemical monadic junctions. In the various panels, nodes belonging to the largest connected component are visualized with red circles. All other nodes are instead represented with blue squares. []{data-label="fig:3"}](fig3.pdf){width="45.00000%"} ![ Percolation transition in interconnected transportation networks. [**a)**]{} The system is obtained by combining Delta and American Airlines routes. We consider only US domestic flights operated in January 2014 [@flights], and construct an interconnected network where airports are nodes, and connections on the layers are determined by the existence of at least one flight between the two locations. In the percolation diagram, large red circles are results of numerical simulations, whereas small red circles represent the solutions of our equations. Blue squares represent susceptibility, a measure of the fluctuations across realizations of the percolation model, whose peak location is often used as a proxy for the identification of the critical threshold $p_c$. [**b)**]{} Same as in a, but for the combination of Delta and United flights. [**c)**]{} Same as in a, but for the combination of American Airlines and United flights. 
[**d, e, and f)**]{} Intersection graphs for the systems analyzed respectively in panels a, b and c. In the various network visualizations, nodes belonging to the largest connected component are visualized with red circles. All other nodes are instead represented with blue squares. []{data-label="fig:4"}](fig4.pdf){width="45.00000%"} Phase diagrams obtained through the numerical solution of Eqs. (\[eq:interd\_sparse1\]) and (\[eq:interd\_sparse2\]) reproduce the results of numerical simulations very accurately. In Fig. 2a for example, we consider systems composed of two independent Erdős-Rényi network models with different values of the average degree $\langle k \rangle$, where each network layer is generated by connecting pairs of vertices with probability $\langle k \rangle / N$. A fundamental feature that the diagrams reveal is the presence of a sudden jump in the order parameter $P_\infty$ at a certain threshold $p_c$. We stress that our equations predict the existence of first-order percolation transitions in networks of finite size, and not just in the thermodynamic limit. As Figs. 2b and c show, the location of the critical point $p_c$ and the height of the jump of the order parameter are well described by predictions valid for this type of graph model in the limit of infinite size [@buldyrev2010catastrophic; @gao2012networks; @son2012percolation]. We argue that a sudden jump in the order parameter is present only if the contribution of the remainders dominates that of the intersection. This condition is certainly verified in interdependent Erdős-Rényi graphs, where the intersection is composed of a very small number of edges, roughly equal to $\langle k \rangle^2 / 2$, while the number of edges in each remnant is proportional to $\langle k \rangle \, N /2$. Our intuition is fully supported by the results of Fig. 2d. 
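The direct simulations we compare against require, at each value of $p$, the largest cluster of mutually connected nodes; its recursive definition translates into an alternating pruning over the two layers. A minimal sketch (function names ours; this is the standard pruning heuristic, not the optimized algorithm of [@hwang2014efficient]):

```python
def components(nodes, adj):
    """Connected components of the subgraph induced by `nodes`."""
    seen, comps = set(), []
    for s in nodes:
        if s in seen:
            continue
        seen.add(s)
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            comp.add(v)
            for w in adj.get(v, ()):
                if w in nodes and w not in seen:
                    seen.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps

def mutual_giant(nodes, adj_a, adj_b):
    """Largest cluster of mutually connected nodes of two interdependent
    layers: alternately restrict the active set to the largest component
    of each layer until the set stops changing (fixed point)."""
    active = set(nodes)
    while active:
        prev = set(active)
        for adj in (adj_a, adj_b):
            comps = components(active, adj)
            active = max(comps, key=len) if comps else set()
        if active == prev:
            return active
    return set()
```

Running `mutual_giant` on the surviving nodes of each random configuration, and averaging over configurations, yields the simulated order parameter plotted in the diagrams.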
In Fig. 2d, we control for the weight of the intersection with respect to that of the remnants in a simple fashion [@bianconi2014percolation]. The two layers are given by exactly the same network structure. The indices of interdependent nodes are, however, shuffled with a given probability $q$. As $q$ grows, the percolation transition changes its features: we pass from a continuous phase transition for small values of $q$, through a mixture of second- and first-order structural change at intermediate values of $q$, to a discontinuous phase transition for sufficiently large values of $q$. The same considerations hold when network layers are scale-free random graphs. The transition is always discontinuous if the layers are uncorrelated, so that only remainders are present (Fig. 2e). This is apparent from the existence of a finite value of the critical threshold $p_c$ (Fig. 2f), and a jump of non-null height of the order parameter at criticality (Fig. 2g). Still, the nature of the phase transition can be tuned from first to second order by simply decreasing the density of the intersection relative to that of the remainders (Fig. 2h). Our argument about the dependence of the nature of the transition on the weight of the intersection compared to that of the remnants may serve to explain why real interdependent networks are not exposed to catastrophic failures [@reis2014avoiding]. In Fig. 3, we draw the percolation diagrams for two interconnected systems of interest in the biological sciences: the [*H. sapiens*]{} protein interaction network [@stark2006biogrid; @de2014muxviz], and the [*C. elegans*]{} connectome [@de2014muxviz]. Both these interconnected systems undergo continuous percolation transitions. Interestingly, this behavior is not caused by the amount of redundancy among layers, but rather by the “quality” of the edges shared across layers. 
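The comparison between intersection and remainders invoked throughout is computable directly from the two adjacency matrices via the Hadamard products defined earlier. A short sketch (helper names ours):

```python
def decompose(A, B):
    """Split two layers into intersection and the two remainders.

    Returns three adjacency matrices: the intersection A_ij * B_ij,
    the remainder of A: A_ij * (1 - B_ij), and the remainder of B:
    B_ij * (1 - A_ij), as in the decomposition of Fig. 1."""
    n = len(A)
    inter = [[A[i][j] * B[i][j] for j in range(n)] for i in range(n)]
    rem_a = [[A[i][j] * (1 - B[i][j]) for j in range(n)] for i in range(n)]
    rem_b = [[B[i][j] * (1 - A[i][j]) for j in range(n)] for i in range(n)]
    return inter, rem_a, rem_b

def edge_count(M):
    """Number of undirected edges in a symmetric 0/1 adjacency matrix."""
    return sum(M[i][j] for i in range(len(M)) for j in range(i + 1, len(M)))
```

Comparing `edge_count` of the intersection with that of the two remainders gives a first, rough diagnostic of whether a continuous or an abrupt transition should be expected.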
Connections in the intersection graph account, in fact, for less than $10\%$ of the edges in five out of the six interdependent networks analyzed here. It seems therefore that these organisms have developed interconnected networks sharing a core of “high quality” edges to prevent catastrophic failures. Whereas the robustness we observe in biological networks can be viewed as the result of a selective evolutionary process, one may argue that man-made interdependent systems may instead not have been designed to withstand random damage of their components. This is indeed what arises from the analysis of the multilayer air transportation network within the US (Fig. 4) [@guimera2005worldwide; @colizza2006role]. The system shows fragility, with a sudden jump of the order parameter. On the other hand, the height of the jump is not as dramatic as observed in random uncorrelated graphs. Major airports all belong to the largest connected component of the intersection graph, and their connections constitute a set of high quality edges that prevents truly catastrophic changes in the connectedness of the entire interdependent system. From these examples, it seems therefore that real interdependent networks may not be as fragile as previously believed. 
Appendix {#appendix .unnumbered}
========

  $\gamma$   $p_c$    $\alpha$   $P_\infty$   $\beta$
  ---------- -------- ---------- ------------ ---------
  $2.3$      $0.18$   $0.34$     $0.01$       $0.33$
  $2.7$      $0.25$   $0.43$     $0.07$       $0.39$
  $3.5$      $0.33$   $0.60$     $0.16$       $0.54$
  $100.0$    $0.47$   $0.95$     $0.32$       $1.09$

  : In panel f of Fig. 2, we fitted empirical estimations of the percolation threshold $p_c(N)$ computed at various system sizes $N$ with the function $p_c(N) = p_c + N^{-\alpha}$. In Fig. 2g, we perform instead the fit $P_\infty(N) = P_\infty + N^{-\beta}$ for the height of the order parameter at criticality. Here, we report the values of the best estimates of $p_c$, $\alpha$, $P_\infty$ and $\beta$ for the different values of the degree exponent $\gamma$ used in the generation of the scale-free networks.

Taylor expansions {#taylor-expansions .unnumbered}
-----------------

An alternative way to arrive at the results of Eq. (\[eq:stability\_site\_dense\]) is to use the multidimensional Taylor expansion of the r.h.s. of Eq. (\[eq:site\_dense\]) around the trivial solution $\vec{s} = \vec{0}$ as $$\begin{array}{l} [ 1 - \prod_{j \in \mathcal{N}_i} \, (1 - s_j) \, ] \\ = \sum_k \, s_k \, \left. \frac{d}{d s_k} \, [ 1 - \prod_{j \in \mathcal{N}_i} \, (1 - s_j) \, ]\right|_{\vec{s} = \vec{0}} + o(s_i^2) \\ \simeq \sum_j A_{ij} \, s_j \end{array} \; .$$ Truncated multidimensional Taylor expansions can be used also to reduce Eq. (\[eq:site\_sparse2\]) to Eq. (\[eq:stability\_site\_sparse\]). The only difference here is that the derivatives are taken with respect to the variables $r_{i \to j}$, and the expansion is made around the trivial solution $\vec{r} = \vec{0}$. When dealing with Eq. (\[eq:interd\_dense\]), the Taylor expansion should be instead extended to at least the second order. Let us first imagine that the intersection graph does not contain edges, so that Eq. 
(\[eq:interd\_dense\]) reads as $$s_i = p \; S_{\mathcal{A-B}_i} \; S_{\mathcal{B-A}_i} \;.$$ Since $S_{\mathcal{A-B}_i}$ and $S_{\mathcal{B-A}_i}$ calculated at $\vec{s} = \vec{0}$ are zero, the first derivatives of the r.h.s. calculated in $\vec{s} = \vec{0}$ are automatically zero. The Taylor expansion of the r.h.s. is thus $$\begin{array}{l} \frac{1}{2} \, \sum_j \sum_k \; \, s_j s_k \; \left. \frac{d^2}{ds_j\, ds_k} \, S_{\mathcal{A-B}_i} \; S_{\mathcal{B-A}_i} \right|_{\vec{s}=\vec{0}} + o(s_i^3) = \\ \sum_j \sum_k \; \, s_j s_k \; \left. \frac{d}{ds_j} \, S_{\mathcal{A-B}_i} \; \frac{d}{ds_k} S_{\mathcal{B-A}_i} \right|_{\vec{s}=\vec{0}} + o(s_i^3) \end{array} \; ,$$ where the second equality is justified by the fact that $S_{\mathcal{A-B}_i}$ and $S_{\mathcal{B-A}_i}$ are zero at $\vec{s} = \vec{0}$, so that the only surviving second derivatives are the two symmetric cross terms $\frac{d S_{\mathcal{A-B}_i}}{ds_j} \frac{d S_{\mathcal{B-A}_i}}{ds_k} + \frac{d S_{\mathcal{A-B}_i}}{ds_k} \frac{d S_{\mathcal{B-A}_i}}{ds_j}$, whose sum cancels the factor $1/2$. Using the definitions of $S_{\mathcal{A-B}_i}$ and $S_{\mathcal{B-A}_i}$ we have that $$\left. \frac{d}{ds_j} S_{\mathcal{A-B}_i}\right|_{\vec{s}=\vec{0}} = A_{ij}(1-B_{ij})$$ and $$\left. \frac{d}{ds_j} S_{\mathcal{B-A}_i}\right|_{\vec{s}=\vec{0}} = B_{ij}(1-A_{ij}) \; ,$$ where $A$ and $B$ are the adjacency matrices of the network layers. In conclusion, we can approximate Eq. (\[eq:interd\_dense\]) in absence of the intersection term as $$s_i = p \, [ \sum_j s_j \, A_{ij}(1-B_{ij}) ] \; [ \sum_j s_j \, B_{ij}(1-A_{ij}) ] \;.$$ With straightforward considerations, we can also insert the term accounting for the intersection graph, and write $$s_i = p \, \sum_j A_{ij}B_{ij} s_j + p \, [ \sum_j s_j \, A_{ij}(1-B_{ij}) ] \; [ \sum_j s_j \, B_{ij}(1-A_{ij}) ] \;.$$ This last equation gives us some insights about the structure of the solution, but it does not allow us to reduce the original problem to a simple eigenvalue/eigenvector equation as in the case of isolated networks. Similar considerations can be deduced by taking the Taylor expansion of Eq. (\[eq:interd\_sparse2\]).
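The expansion can be checked numerically: for small $s$, the full r.h.s. of Eq. (\[eq:interd\_dense\]) should approach the sum of the linear intersection term and the quadratic remainder term. In the sketch below (an illustration, with function names of our choosing) the quadratic prefactor is $p$, as obtained when both symmetric second-derivative cross terms are retained:

```python
def prob_any(s, members):
    """S_X = 1 - prod_{j in X} (1 - s_j): at least one node of X in the cluster."""
    out = 1.0
    for j in members:
        out *= 1.0 - s[j]
    return 1.0 - out

def rhs_exact(p, s, inter, a_only, b_only):
    """Full r.h.s. of the interdependent equation for a single node i,
    given its intersection neighbors and layer-exclusive neighbors."""
    s_ab = prob_any(s, inter)
    return p * (s_ab + (1.0 - s_ab) * prob_any(s, a_only) * prob_any(s, b_only))

def rhs_expanded(p, s, inter, a_only, b_only):
    """Linear intersection term plus quadratic remainder term."""
    linear = sum(s[j] for j in inter)
    quadratic = sum(s[j] for j in a_only) * sum(s[j] for j in b_only)
    return p * (linear + quadratic)
```

For $s_j \sim 10^{-4}$ the two expressions agree to a relative accuracy of order $s$, both with and without intersection neighbors.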
---
abstract: |
    The immense success of deep learning based methods in computer vision heavily relies on large scale training datasets. These richly annotated datasets help the network learn discriminative visual features. Collecting and annotating such datasets requires a tremendous amount of human effort, and annotations are limited to a popular set of classes. As an alternative, learning visual features by designing auxiliary tasks which make use of freely available self-supervision has become increasingly popular in the computer vision community. In this paper, we put forward the idea of taking advantage of multi-modal context to provide self-supervision for the training of computer vision algorithms. We show that adequate visual features can be learned efficiently by training a CNN to predict the semantic textual context in which a particular image is most likely to appear as an illustration. More specifically, we use popular text embedding techniques to provide the self-supervision for the training of a deep CNN. Our experiments demonstrate state-of-the-art performance in image classification, object detection, and multi-modal retrieval compared to recent self-supervised or naturally-supervised approaches.
author:
- Yash Patel
- Lluis Gomez
- Raul Gomez
- Marçal Rusiñol
- Dimosthenis Karatzas
- 'C.V. Jawahar'
bibliography:
- 'bibliography.bib'
date: 'Received: date / Accepted: date'
title: 'TextTopicNet - Self-Supervised Learning of Visual Features Through Embedding Images on Semantic Text Spaces'
---

Introduction {#sec:introduction}
============

The emergence of large-scale annotated datasets [@deng2009imagenet; @zhou2014learning; @lin2014microsoft] has undoubtedly been one of the key ingredients for the tremendous impact of deep learning on almost every computer vision task. However, there is a major issue with the supervised learning setup in large scale datasets: collecting and manually annotating those datasets requires a great amount of human effort. 
![image](texttopicnet_overall.pdf){width="\textwidth" height="5cm"} As an alternative, self-supervised learning aims at learning discriminative visual features by designing auxiliary tasks where the target labels are free to obtain. These labels provide supervision for the training of computer vision models just as in supervised learning, but they can be directly obtained from the training data, either from the image itself [@doersch2015unsupervised; @pathak2016context] or from a complementary modality that is found naturally correlated with it [@agrawal2015learning; @owens2016ambient]. Unlike supervised learning, which learns visual features from human-generated semantic labels, the self-supervised learning scheme mines them from the nature of the data. Another class of methods is weakly-supervised learning, where training makes use of low-level human annotations for solving more complex computer vision tasks. One such example is making use of per-image class labels for object detection [@bilen2016weakly; @oquab2015object] in natural scene images. In most cases, human generated data annotations consist of semantic entities in the form of textual information with different granularity depending on the vision task at hand: a single word to identify an object/place (classification), a list of words that describe the image (labeling), or a descriptive phrase of the scene shown (captioning). In this paper we propose that text found in illustrated articles can be leveraged as a form of image annotation to provide self-supervision, albeit a very noisy one. The key benefit of this approach is that these annotations can be obtained for “free”. Illustrated articles are ubiquitous in our culture: for example in newspapers, encyclopedia entries, web pages, etc. Their visual and textual content complement each other and provide an enhanced semantic context to the reader.
In this paper we propose to leverage all this freely available multi-modal content to train computer vision algorithms. Surprisingly, naturally co-occurring textual and visual information has not yet been fully exploited for self-supervised learning. The goal of this paper is to propose an alternative solution to fully supervised training of CNNs by leveraging the correlation between images and texts found in illustrated articles. Our main objective is to explore the strength of language semantics in unstructured text articles as a supervisory signal to learn visual features. We present a method we call TextTopicNet, which performs self-supervised learning of visual features by mining a large scale corpus of multi-modal web documents (Wikipedia articles). TextTopicNet makes use of freely available unstructured multi-modal content for learning visual features in a self-supervised learning setup. We claim that it is feasible to learn discriminative features by training a CNN to predict the semantic context in which a particular image is more probable to appear as an illustration. As illustrated in Figure \[fig:overall\_method\], our method consists of applying a text embedding algorithm to the textual part to obtain a vectorial text representation, and then using this representation as the supervisory signal for visual learning of a CNN. We investigate the use of various document level and word level text embeddings of articles, and we empirically find that the best practice is to represent the textual information at the topic level, by leveraging the hidden semantic structures discovered by the Latent Dirichlet Allocation (LDA) topic modeling framework [@blei2003latent]. As illustrated in Figure \[fig:wiki\_samples\], the intuition behind using topic-level semantic descriptors is that the amount of visual data available about specific objects or fine-grained classes (e.g.
a particular animal) is limited in our data collection, while it would be easy to find enough images representative of broader object categories (e.g. “mammals”). As a result of this approach, the visual features that we learn are expected to be generic for a given topic, but still useful for other, more specific, computer vision tasks. By training a CNN to directly project images into a textual semantic space, TextTopicNet is not only able to learn visual features from scratch without any annotated dataset, but it can also perform multi-modal retrieval in a natural way without requiring extra annotation or learning efforts. This paper is an extended version of the work previously published in CVPR 2017 [@gomez2017self]. The contributions of this paper are as follows: - We provide an extension of our previous method [@gomez2017self] and show that the idea of self-supervised learning using illustrated articles is scalable and can be extended to a larger training dataset (such as the entire English Wikipedia). - We experimentally demonstrate that TextTopicNet outperforms recent self-supervised or naturally supervised methods on standard benchmark evaluations. We extend our previous analysis to the more challenging SUN397 [@xiao2010sun] dataset, where TextTopicNet substantially reduces the performance gap between self-supervised and supervised training on ImageNet [@deng2009imagenet]. - We show that using textual context based representations for training helps the network to automatically learn semantic multi-modal retrieval. On the task of image-text retrieval, TextTopicNet outperforms unsupervised methods and shows competitive performance compared to supervised approaches without making use of any class specific information. - We provide a baseline comparison across different text embeddings such as word2vec [@mikolov2013efficient], GloVe [@pennington2014glove], FastText [@joulin2016bag], and doc2vec [@le2014distributed] for the purpose of self-supervised learning.
- We publicly release an image-text article co-occurring dataset which consists of 4.2 million images and is obtained from the entire English Wikipedia. The rest of the paper is structured as follows. In Section \[sec:rel\_work\], previous work is reviewed. In Section \[sec:wiki\_data\], details of the training dataset and scraping setup are given. The TextTopicNet method is presented in Section \[sec:texttopicnet\] and is evaluated in Section \[sec:experiments\]. Finally, conclusions are drawn in Section \[sec:conclusion\]. Related Work {#sec:rel_work} ============ Self-Supervised Learning ------------------------ Work in unsupervised data-dependent methods for learning visual features has mainly focused on algorithms that learn filters one layer at a time. A number of unsupervised algorithms have been proposed to that effect, such as sparse coding, restricted Boltzmann machines (RBMs), auto-encoders [@zhao2015stacked], and K-means clustering [@coates2010analysis; @dundar2015convolutional; @krahenbuhl2015data]. However, despite the success of such methods in several unsupervised learning benchmark datasets, a generic unsupervised method that works well with real-world images does not exist. As an alternative to fully-unsupervised algorithms, there has recently been a growing interest in self-supervised or naturally-supervised approaches that make use of non-visual signals, intrinsically correlated to the image, as a form to supervise visual feature learning. Agrawal  [@agrawal2015learning] draw inspiration from the biological observation that living organisms learn visual perception for the purpose of moving and interacting with the environment. They make use of egomotion information obtained by odometry sensors mounted on a vehicle. The agent, that is, the vehicle, can be considered as a moving camera. Thus, they train a network using a contrastive loss formulation [@mobahi2009deep] to predict the camera transformations between pairs of images.
Wang & Gupta  [@wang2015unsupervised] make use of videos as training data and use the relative motion of objects as a supervisory signal for training. Their general idea is that two image patches connected by a tracker may contain the same object or object parts. The relative motion information is obtained by using a standard unsupervised tracking algorithm. A Siamese-triplet network is then trained using a ranking loss function. In a further extension, Wang & Gupta  [@wang2017transitive] model two different variations: (a) inter-instance variations (two objects in the same class should have similar features) and (b) intra-instance variations (viewpoint, pose, deformations, illumination, etc.). They generate a data graph over object instances with two kinds of edges: (a) different viewpoints of the same object instance and (b) the same viewpoint of different object instances. Similar to [@wang2015unsupervised], they train a VGG-16 [@simonyan2014very] based Siamese-triplet network using a ranking loss function for the two different types of data-triplet-pairs. Doersch  [@doersch2015unsupervised] use spatial context, such as the relative position of patches within an image, to make the network learn objects and object parts. They make use of an unlabeled collection of images and train a network to predict the relative position of the second patch given the first patch. Owens  [@owens2016ambient] make use of sound as a modality to provide a supervisory signal. They do so by training a deep CNN to predict a hand-crafted statistical summary of the sound associated with a video frame. Pathak & Efros  [@pathak2016context] take inspiration from auto-encoders and propose a context encoder. They train a network using a combination of L2 loss and adversarial loss to generate arbitrary image regions conditioned on their surroundings. Bojanowski & Joulin  [@bojanowski2017unsupervised] present an approach for unsupervised learning of visual features using Noise As Target (NAT) labels for training.
Their approach is domain agnostic and makes use of a fixed set of target labels for training. They make use of a stochastic batch reassignment strategy and a separable square loss function. In this paper we explore a different modality, text, for self-supervision of CNN feature learning. As mentioned earlier, text is the default choice for image annotation in many computer vision tasks. This includes classical image classification [@deng2009imagenet; @everingham2010pascal], annotation [@duygulu2002object; @huiskes2008mir], and captioning [@Ordonez2011im2text; @lin2014microsoft]. In this paper, we extend this to a larger level of abstraction by capturing text semantics with topic models. Moreover, we avoid using any human supervision by leveraging the correlation between images and text in a largely abundant corpus of illustrated web articles. Deep Learning Image-Text Embeddings ----------------------------------- Joint image and text embeddings have lately been an active research area. The possibilities of learning jointly from different kinds of data have motivated this field of study, where both general and applied research has been done. DeViSE [@frome2013devise] proposes a pipeline that, instead of learning to predict ImageNet classes, learns to infer the Word2Vec [@mikolov2013efficient] representations of their labels. The result is a model that makes semantically relevant predictions even when it makes errors, and generalizes to classes outside of its labeled training set. A similar idea is explored in the work of Gordo & Larlus  [@gordo2017], where image captions are leveraged to learn a global visual representation for semantic retrieval. They use a *tf-idf* based BoW representation over the image captions as a semantic similarity measure between images, and they train a CNN to minimize a margin loss based on the distances of triplets of query-similar-dissimilar images.
Wang  [@wang2016learning] propose a method to learn a joint embedding of images and text for image-to-text and text-to-image retrieval, by training a neural network to embed in the same space Word2Vec [@mikolov2013efficient] text representations and CNN extracted features. Other than semantic retrieval, joint image-text embeddings have also been used in more specific applications. Gordo  [@gordo2015lewis] embed word images in a semantic space, relying on the graph taxonomy provided by WordNet, to perform text recognition. In a more specific application, Salvador  [@salvador2017learning] propose a joint embedding of food images and their recipes to identify ingredients, using Word2Vec [@mikolov2013efficient] and LSTM representations to encode ingredient names and cooking instructions and a CNN to extract visual features from the associated images. Our work differs from the previous image-text embedding methods in that we aim to learn generic and discriminative features in a self-supervised fashion without making use of any annotated dataset. Topic Modeling -------------- ![image](imageCLEF_topics.pdf){width="\textwidth"} Our method is also related to various image retrieval and annotation algorithms that make use of a topic modeling framework in order to embed text and images in a common space. Multi-modal LDA (mmLDA) and correspondence LDA (cLDA) [@blei2003modeling] methods learn the joint distribution of image features and text captions by finding correlations between the two sets of hidden topics. Supervised variations of LDA are presented in  [@rasiwasia2013latent; @wang2011max; @putthividhy2010topic], where the discovered topics are driven by the semantic regularities of interest for the classification task. Sivic  [@sivic2005discovering] adopt a BoW representation of images for discovering objects in images, using pLSA [@hofmann2001unsupervised] for topic modeling. Feng  [@feng2010topic] use the joint BoW representation of text and images for learning LDA.
Most cross-modal retrieval methods work with the idea of representing data of different modalities in a common space, where data related to the same topic of interest tend to appear together. The unsupervised methods in this domain utilize co-occurrence information to learn a common representation across different modalities. Verma  [@verma2014im2text] perform image-to-text and text-to-image retrieval using LDA [@blei2003latent] for data representation. Methods such as those presented in  [@rasiwasia2010new; @gong2014multi; @pereira2014role; @li2011face] use Canonical Correlation Analysis (CCA) for establishing relationships between data of different modalities. Rasiwasia  [@rasiwasia2010new] propose a method for cross-modal retrieval by representing text using LDA [@blei2003latent], images using BoW, and CCA for finding correlations across different modalities. In one of our prior publications [@patel2016dynamic], we presented an approach for dynamic lexicon generation to improve scene text recognition systems. We used image captions of the MS-COCO [@lin2014microsoft] dataset and fine-tuned an ImageNet [@deng2009imagenet] pre-trained Inception network [@szegedy2015going] to predict the topic probabilities of an LDA model [@blei2003latent] directly from the images. Then, using the word probabilities from the LDA model, we predicted the probability of occurrence of each word given an image. Our proposed method is related to these image annotation and image retrieval methods in the way that we use LDA [@blei2003latent] topic probabilities as a common representation for both images and text. However, we differ from all these methods in that we use the topic level representations of text to supervise the visual feature learning of a Convolutional Neural Network. Our CNN model, by learning to predict the semantic context in which images appear as illustrations, learns generic visual features that can be leveraged for other visual tasks.
Wikipedia Image-Text Data {#sec:wiki_data} ========================= TextTopicNet leverages the semantic correlation of image-text pairs for self-supervised learning of visual features. Thus it requires a large scale dataset of multimodal content. In this paper we propose to use the Wikipedia web site as the source of such a dataset. Wikipedia is a multilingual, web-based encyclopedia project currently composed of over 40 million articles across 299 different languages. Wikipedia articles usually comprise text and other kinds of multimedia objects (image, audio, and video files), and can thus be treated as multimodal documents. For our experiments we make use of two different sets of Wikipedia articles’ collections: (a) the ImageCLEF 2010 Wikipedia collection  [@tsikrika2011overview] and (b) our own contributed dataset, Wikipedia Image-Text Co-occurrence, which is made publicly available and consists of $4.2$ million image-text pairs obtained from the entire English Wikipedia. ImageCLEF Wikipedia Collection ------------------------------ The ImageCLEF 2010 Wikipedia collection [@tsikrika2011overview] consists of $237,434$ Wikipedia images and the Wikipedia articles that contain these images. An important observation is that the data collection and filtering is not semantically driven. The original ImageCLEF dataset contains all Wikipedia articles which have versions in three languages (English, German and French) and are illustrated with at least one image in each version. Thus, we have a broad distribution of semantic subjects, expected to be similar to that of the entire Wikipedia or other general-knowledge data collections. A semantic analysis of the data, extracted from the ground-truth of relevance assessments for the ImageCLEF retrieval queries, is shown in Figure \[fig:dataset\_analysis\]. Although the dataset also provides human-generated annotations, in this paper we train CNNs from scratch using only the raw Wikipedia articles and their images.
We consider only the English articles of the ImageCLEF Wikipedia collection. We also filter out small images ($< 256$ pixels) and images with formats other than JPG (Wikipedia stores photographic images as JPG, and uses other formats for digital-born imagery). This way our subset of the ImageCLEF training dataset is composed of $100,785$ images and $35,582$ unique articles. Throughout the paper, we refer to this dataset as “ImageCLEF”. Full English Wikipedia dump --------------------------- In order to show that the idea of self-supervised learning using illustrated articles is scalable and can be extended to larger datasets than the ImageCLEF collection, we have built a new dataset by scraping the entire English Wikipedia. With $5,614,418$ articles, the English Wikipedia is the largest among the 290 different Wikipedia encyclopedias. While a proper semantic analysis of the Wikipedia content is out of the scope of this paper, we consider it relevant to highlight its broad and highly extensive coverage of human knowledge. For this, in Figure \[fig:enwiki\_topics\] we show the distribution of articles among the $11$ top level categories as computed with the algorithm proposed by Kittur  [@kittur2009s]. ![Distribution of articles among the $11$ top level categories in the English Wikipedia [@kittur2009s] computed from the page-category assignments.[]{data-label="fig:enwiki_topics"}](enwiki_topics.png){width="\linewidth"} In order to obtain the training dataset for TextTopicNet we scrape the entire English Wikipedia, but we consider only articles with at least $50$ words and illustrated with at least one image. Similarly to the preprocessing of the ImageCLEF dataset, we filter out small images ($< 256$ pixels) and images with formats other than JPG. This way our training data is composed of $4.2$ million images and $1.7$ million unique articles, made publicly available[^1]. On average, each text article is illustrated with $2.3$ images.
Throughout the rest of the paper, we refer to this dataset as “Wikipedia”. ![image](topicsvis_2.pdf){width="\textwidth"} TextTopicNet {#sec:texttopicnet} ============ The proposed method learns visual features in a self-supervised fashion by predicting the semantic textual context in which an image is more probable to appear as an illustration. As illustrated in Figure \[fig:overall\_method\], our CNN is trained on images to directly predict the vectorial representation of their corresponding text documents. In Section \[sec:exp\_compate\_text\_emb\] we experimentally investigate the effect of various document level and word level text embeddings of articles for providing the training supervision to the CNN. We provide a baseline comparison on the use of Word2Vec [@mikolov2013efficient], GloVe [@pennington2014glove], FastText [@joulin2016bag], Doc2Vec [@le2014distributed] and LDA [@blei2003latent] for the purpose of self-supervised learning. In this experiment we observe that using Latent Dirichlet Allocation (LDA) for text representation yields the best performance. On average, a Wikipedia article contains a few hundred words, and thus averaging word level representations such as Word2Vec [@mikolov2013efficient] loses semantic meaning. On the other hand, LDA [@blei2003latent] discovers the distribution of documents over latent topics. Given a text document, this underlying distribution gives us a better semantic representation of the entire text article. In this section we discuss the specific details of the TextTopicNet pipeline using LDA for representing text articles. First, we describe how we learn a Latent Dirichlet Allocation (LDA) [@blei2003latent] topic model on all the text documents in our dataset. Then we detail how the LDA topic model is used to generate the target labels for training our Convolutional Neural Network (CNN).
Latent Dirichlet Allocation (LDA) topic modeling {#sec:method_lda} ------------------------------------------------ Our self-supervised learning framework assumes that the textual information associated with the images in our dataset is generated by a mixture of hidden topics. Similar to various image annotation and image retrieval methods, we make use of the Latent Dirichlet Allocation (LDA) [@blei2003latent] algorithm for discovering those latent topics and representing the textual information associated with a given image as a probability distribution over the set of discovered topics. Representing text at the topic level instead of at the word level (BoW) provides us with: (1) a more compact representation (dimensionality reduction), and (2) a more semantically meaningful interpretation of the descriptors. LDA is a generative statistical model of a text corpus where each document can be viewed as a mixture of various topics, and each topic is characterized by a probability distribution over words. LDA can be represented as a three level hierarchical Bayesian model. Given a text corpus consisting of $M$ documents, LDA defines the generative process for a document $d$ of $N$ words as follows: - [Choose $\theta \sim Dirichlet(\alpha)$.]{} - [For each of the $N$ words $w_n$ in $d$:]{} - [Choose a topic $z_n \sim Multinomial(\theta)$.]{} - [Choose a word $w_n$ from $P(w_n \mid z_n, \beta)$, a multinomial probability conditioned on the topic $z_n$.]{} where $\theta$ is the mixing proportion drawn from a Dirichlet prior with parameter $\alpha$, and both $\alpha$ and $\beta$ are corpus level parameters, sampled once in the process of generating a corpus. Each document is then generated according to its topic proportions $\theta$ and the per-topic word probabilities $\beta$.
The probability of a document $d$ in a corpus is defined as: $$P(d\mid\alpha, \beta) = \int_{\theta}P(\theta \mid\alpha)\left(\prod_{n=1}^{N}\sum_{z_{n}} P(z_{n} \mid \theta)P(w_{n}\mid z_{n},\beta)\right)d\theta$$ Learning LDA on a document corpus provides two sets of parameters: word probabilities given topics, $P(w\mid z_{1:K})$, and topic probabilities given documents, $P(z_{1:K} \mid d)$. Therefore each document is represented in terms of topic probabilities $z_{1:K}$ ($K$ being the number of topics) and word probabilities over topics. Any new (unseen) document can be represented in terms of a probability distribution over the topics of the learned LDA model by projecting it into the topic space. Self Supervised Learning of Visual Features using LDA Topic Probabilities {#sec:method_cnn} ------------------------------------------------------------------------- We train a CNN to predict text representations (topic probability distributions) from images. Our intuition is that we can learn useful visual features by training the CNN to predict the semantic context in which a particular image is more probable to appear as an illustration. For our experiments we make use of two different architectures. One is the 8-layer CNN CaffeNet [@jia2014caffe], a replication of the AlexNet [@krizhevsky2012imagenet] model with some differences (it does not train with the relighting data-augmentation, and the order of the pooling and normalization layers is switched). The other architecture is a 6-layer CNN resulting from removing the first 2 convolutional layers from CaffeNet. This smaller network is used to perform experiments with tiny images. The choice of AlexNet is justified because most of the existing self-supervised methods make use of this same architecture [@owens2016ambient; @agrawal2015learning; @pathak2016context; @wang2015unsupervised], which allows us to offer a direct comparison with them.
For learning to predict the target topic probability distributions, we minimize a sigmoid cross-entropy loss on our image dataset. We use a Stochastic Gradient Descent (SGD) optimizer, with a base learning rate of $0.001$, multiplied by $0.1$ every $250,000$ iterations, and a momentum of $0.9$. The batch size is set to $128$. With these settings the network converges after $520,000$ iterations. ![image](topicsvis.pdf){width="\textwidth"} In an attempt to visualize the semantic nature of the supervisory signal provided by the LDA model, we show in Figures \[fig:topic\_vis\] and  \[fig:topic\_words\_imgs\] the top-5 most relevant words for the topics discovered by LDA and the corresponding most relevant images for such topics. The analysis is done individually on each of the datasets introduced in Section \[sec:wiki\_data\]. We appreciate that the discovered topics correspond to broad semantic categories for which, a priori, it is difficult to find the most appropriate illustration. Still, we observe that the most representative images for each topic present some regularities and thus allow the CNN to learn discriminative features for broader object classes, despite the possible noise introduced by outlier images that may appear in articles from the same topic. Further, by comparing the topics discovered from the ImageCLEF dataset (Figure \[fig:topic\_words\_imgs\]) to the ones discovered in the Wikipedia dataset (Figure \[fig:topic\_vis\]), we can appreciate that the two LDA models share some common topics (e.g., words like “music”, “album”, “song” are prominent in one of the topics in both LDA models). This observation supports the claim made in Section \[sec:wiki\_data\] that both datasets have a similar distribution of semantic subjects. It is important to notice that a given image will rarely correspond to a single semantic topic, because by definition the topics discovered by LDA have a certain semantic overlap.
In this sense, we can think of the problem of predicting topic probabilities as a multi-label classification problem in which all classes exhibit a large intra-class variability. These intuitions motivate our choice of a sigmoid cross-entropy loss for predicting targets interpreted as topic probabilities instead of a one-hot vector for a single topic. Once the TextTopicNet model has been trained, it can be straightforwardly used in an image retrieval setting. Furthermore, it can be potentially extended to an image annotation [@patel2016dynamic] or captioning system by leveraging the common topic space into which text and images can be projected respectively by the LDA and CNN models. However, in this paper we are more interested in analyzing the qualities of the visual features that we have learned by training the network to predict semantic topic distributions. We claim that the learned features, beyond the common topic space, are not only of sufficient discriminative power but also carry more semantic information than features learned with other state-of-the-art self-supervised and unsupervised approaches. The proposed self-supervised learning framework will thus have broad applicability in different computer vision tasks. With this spirit, we propose the use of TextTopicNet as a convolutional feature extractor and as a CNN pre-training method. We evaluate these scenarios in the next section and compare the obtained results in different benchmarks with the state of the art. Experiments {#sec:experiments} =========== In this section we perform extensive experimentation in order to demonstrate the quality of the visual features learned by the TextTopicNet model. Our aim is to demonstrate that the learned visual features are both discriminative and robust towards unseen or uncommon classes. First, we compare various text embeddings for the purpose of self-supervised learning from pairs of images and texts.
Second, we perform a baseline analysis of TextTopicNet top layers’ features for image classification on the PASCAL VOC 2007 dataset [@everingham2010pascal] to find the optimal number of topics of the LDA model. Third, we compare our method with state-of-the-art self-supervised methods and unsupervised learning algorithms for image classification on the PASCAL, SUN397 [@xiao2010sun], and STL-10 [@coates2010analysis] datasets, and for object detection on PASCAL. Finally, we perform experiments on image retrieval from visual and textual queries on the Wikipedia retrieval dataset [@rasiwasia2010new]. Comparing Text-Embeddings for Self-Supervised Visual Feature Learning {#sec:exp_compate_text_emb} --------------------------------------------------------------------- As we have previously mentioned in the review of the state of the art, there exist several text-image embedding pipelines that share the basic design of TextTopicNet but make use of other text representations instead of LDA topic probabilities. Thus our first objective is to understand the strength of using different text embeddings to provide self-supervision for CNN training. In order to do so, we train AlexNet [@krizhevsky2012imagenet], as explained in Section \[sec:method\_cnn\], on the ImageCLEF dataset using different text embeddings: LDA [@blei2003latent], Word2Vec [@mikolov2013efficient], Doc2Vec [@le2014distributed], GloVe [@pennington2014glove] and FastText [@joulin2016bag]. For the evaluation of these trained models, we train one-vs-rest SVMs on the image representations obtained from different layers, using the PASCAL VOC 2007 dataset [@everingham2010pascal]. The PASCAL VOC 2007 dataset [@everingham2010pascal] consists of 9,963 images, split into 50% for training/validation and 50% for testing.
Each image has been annotated with a bounding box and an object class label for each object belonging to one of the twenty classes present in the image: *“person”, “bird”, “cat”, “cow”, “dog”, “horse”, “sheep”, “aeroplane”, “bicycle”, “boat”, “bus”, “car”, “motorbike”, “train”, “bottle”, “chair”, “dining table”, “potted plant”, “sofa”,* and *“tv/monitor”*. The dataset is a standard benchmark for image classification and object detection tasks, and its relatively small size for training makes it especially well suited for the evaluation of self-supervised algorithms as well as for transfer learning methods. Popular text vectorization methods in these pipelines are diverse in terms of architecture and the text structure they are designed to deal with. Some methods are oriented to generate representations of individual words [@joulin2016bag; @mikolov2013efficient; @pennington2014glove] and others to vectorize entire text articles or paragraphs [@blei2003latent; @le2014distributed]. In our analysis we consider the top-performing text embeddings and test them in our pipeline to evaluate the performance of the learned visual features. Briefly, the main characteristics of each text embedding method used in this experiment are as follows: - **Word2Vec** [@mikolov2013efficient]: Using large amounts of unannotated plain text, Word2Vec learns relationships between words automatically using a feed-forward neural network. It builds distributed semantic representations of words using their context, considering words both before and after the target word. - **Doc2Vec** [@le2014distributed]: Extends the Word2Vec idea to documents. Instead of learning feature representations for words, it learns them for sentences or documents. - **GloVe** [@pennington2014glove]: It is a count-based model. It learns the vectors by essentially doing dimensionality reduction on the co-occurrence counts matrix. Training is performed on aggregated global word-word co-occurrence statistics from a corpus.
- **FastText** [@joulin2016bag]: An extension of Word2Vec that treats each word as composed of character n-grams, learning representations for n-grams instead of words. The vector for a word is the sum of its character n-gram vectors, so it can generate embeddings for out-of-vocabulary words.

While LDA [@blei2003latent] and Doc2Vec [@le2014distributed] can directly generate text-article-level embeddings, Word2Vec [@mikolov2013efficient], GloVe [@pennington2014glove] and FastText [@joulin2016bag] generate only word-level embeddings. In order to obtain a representation of an entire text article for supervision, we use the mean embedding of all words within the article. For all embeddings except LDA, we test two representation dimensions: (a) 40, the optimal number of topics when using LDA [@blei2003latent; @gomez2017self]; and (b) 300, the standard dimension of the models trained in the original implementations. We make use of the Gensim[^2] implementations of Word2Vec [@mikolov2013efficient], FastText [@joulin2016bag] and Doc2Vec [@le2014distributed], and the GloVe implementation provided by Maciej Kula[^3]. For each of these text embeddings we train a CNN as explained in Section \[sec:method\_cnn\]. Once the CNN is trained, we learn one-vs-all SVMs on features obtained from different layers of the network. Table \[pascal\_SVM\_mAP\_comparison\] shows the PASCAL VOC2007 image classification performance using the different text embeddings. We observe that the Latent Dirichlet Allocation (LDA) [@blei2003latent] based embedding serves as the best global representation for self-supervised learning of visual features.
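The mean-embedding step just described is a simple average over the in-vocabulary word vectors of an article. A minimal NumPy sketch (the toy `emb` table and its 4-dimensional vectors are illustrative, not the trained embeddings):

```python
import numpy as np

def article_embedding(tokens, emb, dim):
    """Mean of the word vectors of all in-vocabulary tokens in an article."""
    vecs = [emb[t] for t in tokens if t in emb]  # skip out-of-vocabulary words
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

# Toy 4-d embedding table (illustrative only).
emb = {
    "plane": np.array([1.0, 0.0, 0.0, 0.0]),
    "sky":   np.array([0.0, 1.0, 0.0, 0.0]),
}
v = article_embedding(["plane", "sky", "the"], emb, dim=4)
```

With FastText the out-of-vocabulary skip would be unnecessary, since character n-grams provide a vector for every token.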
  Text Representation    pool5      fc6        fc7
  ---------------------- ---------- ---------- ----------
  LDA [@gomez2017self]   **47.4**   **48.1**   **48.5**
  Word2Vec (40)          44.1       45.1       36.9
  Word2Vec (300)         41.1       36.6       32.2
  Doc2Vec (40)           41.8       40.0       33.3
  Doc2Vec (300)          43.7       35.4       33.1
  GloVe (40)             41.6       40.6       34.7
  GloVe (300)            36.2       30.3       29.4
  FastText (40)          45.3       46.2       38.7
  FastText (300)         40.4       34.5       34.0

  : TextTopicNet comparison using different text embeddings. PASCAL VOC2007 %mAP image classification.[]{data-label="pascal_SVM_mAP_comparison"}

LDA Hyper-parameter Settings {#sec:exp_lda_params}
----------------------------

As observed in Section \[sec:exp\_compate\_text\_emb\], the LDA [@blei2003latent] based global representation of entire text articles provides the best supervision. Here we perform a baseline analysis for parameter optimization using the standard train/validation split of the PASCAL VOC 2007 dataset. In this experiment we train an LDA topic model on the corpus of $35,582$ articles from the ImageCLEF Wikipedia collection [@tsikrika2011overview]. From the raw articles we remove stop-words and punctuation, and perform lemmatization of words. The word dictionary ($50,913$ words) is built from the processed text corpus by filtering out those words that appear in fewer than $20$ articles or in more than $50\%$ of the articles. When choosing the number of topics in our model we must consider that, as the number of topics increases, the documents of the training corpus are partitioned into finer collections, and increasing the number of topics may also cause an increase in model perplexity [@blei2003latent]. Thus, the number of topics is an important parameter of our model.

![One vs.
Rest linear SVM validation %mAP on PASCAL VOC2007 when varying the number of topics of LDA [@blei2003latent] in our method.[]{data-label="plot_map_num_topics"}](plot_map_num_topics.pdf){width="50.00000%"}

We take a practical approach and empirically determine the optimal number of topics in our model by leveraging validation data. Figure \[plot\_map\_num\_topics\] shows the validation accuracy of SVM classification using *fc7* features for different numbers of topics in our model. We appreciate that the best validation performance is obtained with the $40$-topic LDA model. This configuration is kept for the LDA models on both the ImageCLEF and Wikipedia datasets for the rest of our experiments. We do not repeat this hyper-parameter optimization of LDA on the newly introduced entire Wikipedia dataset due to the high training time on its $4.2$ million images.

Image Classification {#sec:exp_classification}
--------------------

In this set of experiments we evaluate how good the learned visual features of the 6-layer CNN (CaffeNet) are for image classification when trained with the self-supervised method explained in Section \[sec:texttopicnet\]. Image classification is evaluated using two standard protocols: (1) training one-vs-all SVMs on representations obtained from different layers such as *conv5, fc6, fc7*; (2) fine-tuning the network on different datasets using TextTopicNet-initialized weights.

### Unsupervised Features for Image Classification

Starting from the TextTopicNet model trained as detailed in Section \[sec:texttopicnet\], we extract features from the top layers of the CNN (fc7, fc6, pool5, etc.) for each image of the dataset. Then, for each class we perform a grid search over the parameter space of a one-vs-all linear SVM classifier [^4] to optimize its validation accuracy. Finally, we use the best performing parameters to retrain the one-vs-all SVM on both training and validation images.
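The dictionary filtering described earlier (keep words that occur in at least $20$ articles and in at most $50\%$ of them) can be sketched in a few lines of pure Python; thresholds are lowered here to fit the toy corpus, and in practice the same effect is obtained with Gensim's `Dictionary.filter_extremes(no_below=20, no_above=0.5)`:

```python
from collections import Counter

def build_dictionary(corpus, no_below, no_above):
    """Keep words appearing in >= no_below documents and <= no_above fraction of them.

    `corpus` is a list of token lists (one per article); the paper uses
    no_below=20 and no_above=0.5 on the 35,582 ImageCLEF articles.
    """
    n_docs = len(corpus)
    doc_freq = Counter()
    for doc in corpus:
        doc_freq.update(set(doc))  # count each word once per document
    return sorted(w for w, df in doc_freq.items()
                  if df >= no_below and df / n_docs <= no_above)

# Toy corpus with lowered thresholds for illustration.
corpus = [["cat", "sat", "mat"], ["cat", "dog"], ["cat", "dog", "mat"], ["fish"]]
vocab = build_dictionary(corpus, no_below=2, no_above=0.5)
```

Note that "cat" is dropped by the upper threshold (it occurs in 3 of 4 documents), while "sat" and "fish" are dropped by the lower one; the LDA model is then trained on the bag-of-words corpus restricted to this vocabulary.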
Tables \[pascal\_pool5\_AP\] and \[pascal\_SVM\_mAP\] compare our results on the PASCAL test set with different state-of-the-art self-supervised learning algorithms, using features from different top layers and SVM classifiers. Scores for all other methods are taken from [@owens2016ambient]. We appreciate in Table \[pascal\_SVM\_mAP\] that using text semantics as supervision for visual feature learning outperforms all other modalities in this experiment. In Table \[pascal\_pool5\_AP\], attention is drawn to the fact that our *pool5* features are substantially more discriminative than the rest for the most difficult classes, see e.g. “bottle”, “pottedplant” or “cow”. Indeed, in the case of “bottle” our method outperforms fully supervised networks. Additionally, for commonly occurring classes such as “aeroplane”, “car” and “person”, TextTopicNet substantially outperforms previous self-supervised approaches and shows competitive performance to supervised training.

  Method                                  max5       pool5      fc6        fc7
  --------------------------------------- ---------- ---------- ---------- ----------
  TextTopicNet (Wikipedia)                -          **51.9**   **54.2**   **55.8**
  TextTopicNet (ImageCLEF)                -          47.4       48.1       48.5
  Sound [@owens2016ambient]               39.4       46.7       47.1       47.4
  Texton-CNN                              28.9       37.5       35.3       32.5
  K-means [@krahenbuhl2015data]           27.5       34.8       33.9       32.1
  Tracking [@wang2015unsupervised]        33.5       42.2       42.4       40.2
  Patch pos. [@doersch2015unsupervised]   26.8       46.1       -          -
  Egomotion [@agrawal2015learning]        22.7       31.1       -          -
  TextTopicNet (MS-COCO)                  -          **50.7**   **53.1**   **55.4**
  ImageNet [@krizhevsky2012imagenet]      **63.6**   **65.6**   **69.6**   **73.6**
  Places [@zhou2014learning]              59.0       63.2       65.3       66.2

  : PASCAL VOC2007 %mAP for image classification.[]{data-label="pascal_SVM_mAP"}

TextTopicNet (Wikipedia) and TextTopicNet (ImageCLEF) in Table \[pascal\_SVM\_mAP\] correspond to the models trained respectively on each of the datasets detailed in Section \[sec:wiki\_data\]. We appreciate that our model greatly benefits from the larger scale of the entire Wikipedia dataset.
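The evaluation protocol behind these tables — a per-class grid search for a one-vs-all linear classifier on frozen CNN features, followed by retraining with the best parameters on train+validation — can be sketched as follows. For self-containedness, a regularized least-squares linear scorer stands in for the liblinear SVM actually used, and random blobs stand in for *fc7* features:

```python
import numpy as np

def fit_linear(X, y, reg):
    """Regularized least-squares scorer w for labels y in {-1,+1} (SVM stand-in)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ y)

def one_vs_all(Xtr, ytr, Xva, yva, classes, grid=(0.01, 0.1, 1.0, 10.0)):
    """Grid-search the regularizer per class on validation accuracy, refit on train+val."""
    X_all = np.vstack([Xtr, Xva])
    y_all = np.concatenate([ytr, yva])
    W = []
    for c in classes:
        best_reg, best_acc = None, -1.0
        for reg in grid:
            w = fit_linear(Xtr, np.where(ytr == c, 1.0, -1.0), reg)
            acc = np.mean((Xva @ w > 0) == (yva == c))
            if acc > best_acc:
                best_reg, best_acc = reg, acc
        W.append(fit_linear(X_all, np.where(y_all == c, 1.0, -1.0), best_reg))
    return np.array(W)  # one scorer per class; predict with argmax of the scores

rng = np.random.default_rng(0)
Xtr = rng.normal(size=(40, 3)) + np.array([[4, 0, 0]] * 20 + [[0, 4, 0]] * 20)
ytr = np.array([0] * 20 + [1] * 20)
W = one_vs_all(Xtr, ytr, Xtr, ytr, classes=[0, 1])
pred = np.argmax(Xtr @ W.T, axis=1)
```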
The TextTopicNet (MS-COCO) entry corresponds to a model trained with MS-COCO [@lin2014microsoft] images and their ground-truth caption annotations as textual content. Since MS-COCO captions are generated by human annotators, this entry cannot be considered a self-supervised method, but rather a kind of weakly supervised approach. Our interest in training this model is to show that having more specific textual content, like image captions, helps TextTopicNet learn better features. In other words, there is an obvious correlation between the noise introduced in the self-supervisory signal of our method and the quality of the learned features. Actually, the ImageNet entry in Table \[pascal\_SVM\_mAP\] can be seen as a model with a complete absence of noise, i.e. each image corresponds exactly to one topic and each topic corresponds exactly to one class (a single word). Still, the TextTopicNet (Wikipedia) features, learned from a very noisy signal, surprisingly outperform those of the TextTopicNet (MS-COCO) model. As an additional experiment we have calculated the classification performance obtained by combining the TextTopicNet and Sound entries of Table \[pascal\_SVM\_mAP\]. Here we seek insight into how complementary the features learned with two different modalities of supervisory signals are. By concatenating the *fc7* features of the TextTopicNet (ImageCLEF) and Sound models the mAP increases to 54.81%; concatenating the *fc7* features of the TextTopicNet (Wikipedia) and Sound models, the mAP reaches 57.38%. This improvement in performance indicates a certain degree of complementarity. Table \[tab:sun\_SVM\_mAP\] compares our results on the SUN397 [@xiao2010sun] test set with state-of-the-art self-supervised learning algorithms. SUN397 [@xiao2010sun] consists of 50 training and 50 test images for each of the 397 scene classes.
We follow the same evaluation protocol as [@owens2016ambient; @agrawal2015learning] and make use of 20 images per class for training and the remaining 30 for validation. We evaluate TextTopicNet on three different partitions of training and testing and report the average performance. This scene classification dataset is suitable for the evaluation of self-supervised approaches as it contains less frequently occurring classes and is thus more challenging than the PASCAL VOC 2007 dataset. We appreciate that TextTopicNet outperforms all other modalities of supervision in this experiment. We observe that using features from the fc6 layer of TextTopicNet gives better performance than using features from the fc7 layer. This indicates that the fc6 and pool5 layers of TextTopicNet are more robust towards uncommon classes.

  Method                                  max5       pool5      fc6        fc7
  --------------------------------------- ---------- ---------- ---------- ----------
  TextTopicNet (Wikipedia)                -          **28.8**   **32.2**   **27.7**
  Sound [@owens2016ambient]               17.1       22.5       21.3       21.4
  Texton-CNN                              10.7       15.2       11.4       7.6
  K-means [@krahenbuhl2015data]           11.6       14.9       12.8       12.4
  Tracking [@wang2015unsupervised]        14.1       18.7       16.2       15.1
  Patch pos. [@doersch2015unsupervised]   10.0       22.4       -          -
  Egomotion [@agrawal2015learning]        9.1        11.3       -          -
  ImageNet [@krizhevsky2012imagenet]      29.8       34.0       37.8       37.8
  Places [@zhou2014learning]              **39.4**   **42.1**   **46.1**   **48.8**

  : SUN397 accuracy for image classification.[]{data-label="tab:sun_SVM_mAP"}

### Self-Supervised pre-training for Image Classification

In knowledge transfer, other than using the CNN as a feature extractor together with SVM classifiers, another standard procedure to evaluate the quality of CNN visual features is to fine-tune the network on the target domain. We analyze the performance of TextTopicNet for image classification by fine-tuning the CNN weights to specific datasets (PASCAL and STL-10).
For fine-tuning our network we use the following optimization strategy: Stochastic Gradient Descent (SGD) for $120,000$ iterations with an initial learning rate of $0.0001$ (reduced by a factor of $0.1$ every $30,000$ iterations), a batch size of $64$, and a momentum of $0.9$. We use data augmentation by random crops and mirroring. At test time we follow the standard procedure of averaging the net responses at $10$ random crops. Table \[pascal\_finetuning\_mAP\] compares our results for image classification on PASCAL when fine-tuning the weights learned with different self-supervised learning algorithms. Image classification using AlexNet trained only on the PASCAL VOC dataset with randomly initialized weights achieves a performance of $53.4$ %mAP. We appreciate that TextTopicNet substantially improves the classification performance over this baseline.

  Method                                      Fine-tuning
  ------------------------------------------- -------------
  TextTopicNet (Wikipedia)                    **61.0**
  TextTopicNet (ImageCLEF) [@gomez2017self]   55.7
  K-means [@krahenbuhl2015data]               56.6
  Tracking [@wang2015unsupervised]            55.6
  Patch pos. [@doersch2015unsupervised]       55.1
  Egomotion [@agrawal2015learning]            31.0
  NAT [@bojanowski2017unsupervised]           56.7
  Context Encoder [@pathak2016context]        56.5
  ImageNet [@krizhevsky2012imagenet]          **78.2**

  : Fine-tuning results on PASCAL VOC 2007.[]{data-label="pascal_finetuning_mAP"}

Table \[stl10\_finetuning\_Acc\] compares our classification accuracy on STL-10 with different state-of-the-art unsupervised learning algorithms. In this experiment we make use of the shortened 6-layer network in order to better adapt to the image size of this dataset ($96\times96$ pixels). We fine-tune with the same hyper-parameters as described above. The standard procedure on STL-10 is to perform unsupervised training on a provided set of $100,000$ unlabeled images, and then supervised training on the labeled data.
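The optimization schedule above (base learning rate $10^{-4}$, decayed by $0.1$ every $30{,}000$ iterations, momentum $0.9$) is a standard step schedule; a minimal sketch of the learning-rate function and the momentum update it drives (the quadratic toy gradient is a placeholder, not the actual network gradient):

```python
def step_lr(it, base_lr=1e-4, gamma=0.1, step=30000):
    """Learning rate at iteration `it` for a step-decay schedule."""
    return base_lr * gamma ** (it // step)

def sgd_momentum_update(w, g, v, it, mu=0.9):
    """One SGD-with-momentum step; returns updated weights and velocity."""
    v = mu * v - step_lr(it) * g
    return w + v, v

# Minimize w**2 for a few iterations (gradient is 2*w).
w, v = 1.0, 0.0
for it in range(5):
    w, v = sgd_momentum_update(w, g=2.0 * w, v=v, it=it)
```

Over the $120{,}000$ iterations used here, the schedule passes through the four rates $10^{-4}, 10^{-5}, 10^{-6}, 10^{-7}$.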
While our method is not directly comparable with the unsupervised and semi-supervised methods in Table \[stl10\_finetuning\_Acc\], because of its distinct approach (self-supervision), the experiment provides insight about the added value of self-supervision compared with fully-unsupervised data-driven algorithms. It is important to notice that we do not make use of the STL-10 unlabeled data in our training.

  Method                                               Acc.
  ---------------------------------------------------- ------------
  TextTopicNet (ImageCLEF) - CNN-finetuning \*         **76.51%**
  TextTopicNet (ImageCLEF) - fc7+SVM \*                66.00%
  Semi-supervised auto-encoder [@zhao2015stacked]      **74.33%**
  Convolutional k-means [@dundar2015convolutional]     74.10%
  CNN with Target Coding [@yang2015deep]               73.15%
  Exemplar convnets [@dosovitskiy2014discriminative]   72.80%
  Unsupervised pre-training [@paine2014analysis]       70.20%
  Swersky [@swersky2013multi] \*                       70.10%
  C-SVDDNet [@wang2016unsupervised]                    68.23%
  K-means (Single layer net) [@coates2010analysis]     51.50%
  Raw pixels                                           31.80%

  : STL-10 classification accuracy. Methods with an asterisk mark make use of external (unlabeled) data.[]{data-label="stl10_finetuning_Acc"}

### Visual Features Analysis

We further analyze the qualities of the learned features by visualizing the receptive field segmentation of TextTopicNet convolutional units using the methodology of [@zhou2014object; @owens2016ambient]. The purpose of this experiment is to gain insight into what our CNN has learned to detect. Figure \[rf\_segmentations\] shows a selection of neurons in the *fc7* layer of our model. We appreciate that our network units are quite generic, mainly selective to textures, shapes and object-parts, although some object-selective units are also present (e.g. faces).

![Top-5 activations for five units in *fc7* layer of TextTopicNet(ImageCLEF) model.
While most TextTopicNet units are selective to generic textures, like grass or water, some of them are also selective for specific shapes, objects, and object-parts.[]{data-label="rf_segmentations"}](RFsegmentation_unitID96_new.jpg "fig:"){width="\columnwidth"}\ ![Top-5 activations for five units in *fc7* layer of TextTopicNet(ImageCLEF) model. While most TextTopicNet units are selective to generic textures, like grass or water, some of them are also selective for specific shapes, objects, and object-parts.[]{data-label="rf_segmentations"}](RFsegmentation_unitID137_new.jpg "fig:"){width="\columnwidth"}\ ![Top-5 activations for five units in *fc7* layer of TextTopicNet(ImageCLEF) model. While most TextTopicNet units are selective to generic textures, like grass or water, some of them are also selective for specific shapes, objects, and object-parts.[]{data-label="rf_segmentations"}](RFsegmentation_unitID142_new.jpg "fig:"){width="\columnwidth"}\ ![Top-5 activations for five units in *fc7* layer of TextTopicNet(ImageCLEF) model. While most TextTopicNet units are selective to generic textures, like grass or water, some of them are also selective for specific shapes, objects, and object-parts.[]{data-label="rf_segmentations"}](RFsegmentation_unitID147_new.jpg "fig:"){width="\columnwidth"}\ ![Top-5 activations for five units in *fc7* layer of TextTopicNet(ImageCLEF) model. While most TextTopicNet units are selective to generic textures, like grass or water, some of them are also selective for specific shapes, objects, and object-parts.[]{data-label="rf_segmentations"}](RFsegmentation_unitID190_new.jpg "fig:"){width="\columnwidth"} Object Detection {#sec:exp_detection} ---------------- Similar to other self-supervised approaches, for object detection we make use of Fast R-CNN [@girshick2015fast]. 
We replace the ImageNet-initialized weights of Fast R-CNN with the weights of TextTopicNet and train the network with default parameters for $40,000$ iterations on the training and validation sets of PASCAL VOC 2007. Table \[tab:pascal\_finetuning\_detection\] compares our results for object detection on the test set of PASCAL VOC 2007 with different self-supervised learning algorithms. Object detection using Fast R-CNN [@girshick2015fast] trained only on the PASCAL VOC dataset with randomly initialized weights achieves a performance of $40.7$ %mAP. We appreciate that TextTopicNet and other self-supervised methods enhance the detection performance over this baseline.

  Method                                  Detection
  --------------------------------------- -----------
  TextTopicNet (Wikipedia)                44.3
  TextTopicNet (ImageCLEF)                43.0
  Sound [@owens2016ambient]               44.1
  K-means [@krahenbuhl2015data]           45.6
  Tracking [@wang2015unsupervised]        47.4
  Patch pos. [@doersch2015unsupervised]   46.6
  Egomotion [@agrawal2015learning]        41.8
  NAT [@bojanowski2017unsupervised]       **49.4**
  Context Encoder [@pathak2016context]    44.5
  ImageNet [@krizhevsky2012imagenet]      **56.8**

  : PASCAL VOC2007 finetuning %mAP for object detection.[]{data-label="tab:pascal_finetuning_detection"}

Multi-Modal Retrieval {#sec:exp_retrieval}
---------------------

![image](Image_to_image_nn.pdf){width="\textwidth"}

![image](Text_to_image_nn.pdf){width="\textwidth"}

We evaluate our learned self-supervised visual features for two types of multi-modal retrieval tasks: (1) image query vs. text database; (2) text query vs. image database. For this purpose, we use the Wikipedia retrieval dataset [@rasiwasia2010new], which consists of 2,866 image-document pairs split into a train and a test set of 2,173 and 693 pairs respectively. Further, each image-document pair is labeled with one of ten semantic classes [@rasiwasia2010new].
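The retrieval step used in this section — projecting the query and every database item into the shared topic space and ranking the database by KL divergence to the query — reduces to a few lines; a NumPy sketch with illustrative 3-topic distributions in place of the 40-topic ones:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete topic distributions (smoothed for zeros)."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

def retrieve(query, database, k=2):
    """Indices of the k database items closest to the query in KL divergence."""
    dists = [kl_divergence(query, d) for d in database]
    return [int(i) for i in np.argsort(dists)[:k]]

query = [0.7, 0.2, 0.1]          # e.g. an image's topic probabilities
database = [[0.1, 0.8, 0.1],     # e.g. text articles' topic probabilities
            [0.6, 0.3, 0.1],
            [0.2, 0.2, 0.6]]
ranking = retrieve(query, database, k=2)
```

Because both modalities live in the same topic space, the same routine serves image-to-text and text-to-image queries.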
As illustrated in Figure \[fig:multi\_modal\], for retrieval we project images and documents into the learned topic space and compute the KL divergence between the query (image or text) and all the entities in the database.

![TextTopicNet projects images onto the same topic probability representation as that of the co-occurring text article.[]{data-label="fig:multi_modal"}](multi_modal.pdf){width="50.00000%"}

In Table \[table:multi\_modal\_retrieval\] we compare our results with the supervised and unsupervised multi-modal retrieval methods discussed in [@wang2016comprehensive] and [@kang2015cross]. Supervised methods make use of the class or categorical information associated with each image-document pair, whereas unsupervised methods do not. All of these methods use LDA for text representation and CNN features from a pre-trained CaffeNet, which is trained on the ImageNet dataset in a supervised setting. We appreciate that our self-supervised method outperforms the unsupervised approaches, and has competitive performance with the supervised methods without using any labeled data.

  Method                             Image query        Text query         Average
  ---------------------------------- ------------------ ------------------ ------------------
  TextTopicNet (Wikipedia)           $37.63$            $\textbf{40.25}$   $38.94$
  TextTopicNet (ImageCLEF)           $39.58$            $38.16$            $38.87$
  CCA [@rasiwasia2010new]            $19.70$            $17.84$            $18.77$
  PLS [@rosipal2006overview]         $30.55$            $28.03$            $29.29$
  SCM\* [@rasiwasia2010new]          $37.13$            $28.23$            $32.68$
  GMMFA\* [@sharma2012generalized]   $38.74$            $31.09$            $34.91$
  CCA-3V\* [@gong2014multi]          $40.49$            $36.51$            $38.50$
  GMLDA\* [@sharma2012generalized]   $40.84$            $36.93$            $38.88$
  LCFS\* [@wang2013learning]         $41.32$            $38.45$            $39.88$
  JFSSL\* [@wang2016joint]           $\textbf{42.79}$   $39.57$            $\textbf{41.18}$

  : MAP comparison on Wikipedia dataset [@rasiwasia2010new] with supervised (bottom) and unsupervised (middle) methods.
Methods marked with an asterisk make use of document (image-text) class category information.[]{data-label="table:multi_modal_retrieval"}

Finally, in order to better analyze the nature of the features learned by our self-supervised TextTopicNet, we perform additional qualitative experiments for the image retrieval task. Figure \[fig:img2img\] shows the 4 nearest neighbors for a given query image (left-most), where each row makes use of features obtained from a different layer of TextTopicNet (without fine-tuning); from top to bottom: prob, fc7, fc6, pool5. Query images are randomly selected from the PASCAL VOC 2007 dataset and were never seen at training time. It can be appreciated that when retrieval is performed in the topic space layer (prob, 40 dimensions, top row), the results are semantically close, although not necessarily visually similar. As features from earlier layers are used, the results tend to be more visually similar to the query image. Further, we appreciate that without any supervision from the PASCAL VOC 2007 classes, TextTopicNet learns to retrieve images belonging to the correct class of the input image. Figure \[fig:txt2img\] shows the 12 nearest neighbors for a given text query in the topic space of TextTopicNet (again, without fine-tuning). Interestingly, the list of retrieved images for the first query (“airplane”) is almost the same for related words and synonyms such as “flight”, “airway”, or “aircraft”. By leveraging textual semantic information our method learns a polysemic representation of images. Further, it can be appreciated that TextTopicNet is capable of handling semantic text queries for retrieval, such as “airplane” + “fighter” or “fly” + “sky”.

Conclusions {#sec:conclusion}
===========

In this paper we provide an extension of our CVPR 2017 paper [@gomez2017self] on self-supervised learning using text topic spaces learned by the LDA [@blei2003latent] topic model.
The presented method, TextTopicNet, is able to take advantage of freely available multi-modal content to train computer vision algorithms without human supervision. By considering the text found in illustrated articles as noisy image annotations, the proposed method learns visual features by training a CNN to predict the semantic context in which a particular image is more probable to appear as an illustration. Here we experimentally demonstrate that our method is scalable to larger and more diverse training datasets. The contributed experiments show that although the learned visual features are generic for broad topics, they can be used for more specific computer vision tasks such as image classification, object detection, and multi-modal retrieval. Our results are superior to those of state-of-the-art self-supervised algorithms for visual feature learning. [TextTopicNet source code, pre-trained models and the introduced Wikipedia dataset (Section \[sec:wiki\_data\]) are publicly available at]{} <https://github.com/lluisgomez/TextTopicNet>.

This work has been partially supported by the Spanish research project TIN2014-52072-P, the CERCA Programme / Generalitat de Catalunya, the H2020 Marie Skłodowska-Curie actions of the European Union, grant agreement No 712949 (TECNIOspring PLUS), the Agency for Business Competitiveness of the Government of Catalonia (ACCIO), CEFIPRA Project 5302-1 and the project “aBSINTHE - AYUDAS FUNDACI[Ó]{}N BBVA A EQUIPOS DE INVESTIGACION CIENTIFICA 2017”. We gratefully acknowledge the support of the NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.

[^1]: https://github.com/lluisgomez/TextTopicNet

[^2]: <http://radimrehurek.com/gensim>

[^3]: <http://github.com/maciejkula/glove-python>

[^4]: Liblinear implementation from <http://scikit-learn.org/>
--- abstract: 'We consider semilinear Schrödinger equations with nonlinearity that is a polynomial in the unknown function and its complex conjugate, on $\mathbb{R}^d$ or on the torus. Norm inflation (ill-posedness) of the associated initial value problem is proved in Sobolev spaces of negative indices. To this end, we apply the argument of Iwabuchi and Ogawa (2012), who treated quadratic nonlinearities. This method can be applied whether the spatial domain is non-periodic or periodic and whether the nonlinearity is gauge/scale-invariant or not.' author: - Nobu Kishimoto title: A remark on norm inflation for nonlinear Schrödinger equations --- Introduction ============ We consider the initial value problem for semilinear Schrödinger equations: $$\label{NLS'} \left\{ \begin{array}{@{\,}r@{\;}l} i{\partial}_tu+\Delta u&=F (u,\bar{u}),\qquad (t,x)\in [0,T] \times Z,\\ u(0,x)&=\phi (x), \end{array} \right.$$ where the spatial domain $Z$ is of the form $Z={{\mathbb{R}}}^{d_1}\times {{\mathbb{T}}}^{d_2}$, $d_1+d_2=d$, and $F (u,\bar{u})$ is a polynomial in $u,\bar{u}$ without constant and linear terms, explicitly given by [$$\begin{split} F (u,\bar{u})=\sum _{j=1}^n\nu _ju^{q_j}\bar{u}^{p_j-q_j} \end{split}$$]{} with mutually different indices $(p_1,q_1),\dots ,(p_n,q_n)$ satisfying $p_j\ge 2$, $0\le q_j\le p_j$ and non-zero complex constants $\nu _1,\dots ,\nu _n$. The aim of this article is to prove *norm inflation* for the initial value problem in some negative Sobolev spaces. We say norm inflation in $H^s(Z)$ (“*NI$_s$*” for short) occurs if for any ${\delta}>0$ there exist $\phi \in H^{\infty}$ and $T>0$ satisfying [$$\begin{split} {\| \phi \| _{H^s}}<{\delta},\qquad 0<T<{\delta}\end{split}$$]{} such that the corresponding smooth solution $u$ to exists on $[0,T]$ and [$$\begin{split} {\| u(T) \| _{H^s}}>{\delta}^{-1}. 
\end{split}$$]{} Clearly, NI$_s$ implies the discontinuity of the solution map $\phi \mapsto u$ (which is uniquely defined for smooth $\phi$ locally in time) at the origin in the $H^s$ topology, and hence the ill-posedness of in $H^s$. However, NI$_s$ is a stronger instability property of the flow than the discontinuity, which only requires $0<T{\lesssim}1$ and ${\| u(T) \| _{H^s}}{\gtrsim}1$. Let us begin with the case of single-term nonlinearity: $$\label{NLS} \left\{ \begin{array}{@{\,}r@{\;}l} i{\partial}_tu+\Delta u&=\nu u^q\bar{u}^{p-q},\qquad (t,x)\in [0,T] \times Z,\\ u(0,x)&=\phi (x), \end{array} \right.$$ where $p\ge 2$ and $0\le q\le p$ are integers, $\nu \in {\mathbb{C}}\setminus \{0\}$ is a constant. The equation is invariant under the scaling transformation $u(t,x)\mapsto {\lambda}^{\frac{2}{p-1}}u({\lambda}^2t,{\lambda}x)$ (${\lambda}>0$), and the critical Sobolev index $s$ for which ${\| {\lambda}^{\frac{2}{p-1}}\phi ({\lambda}\cdot ) \| _{\dot{H}^{s}}}={\| \phi \| _{\dot{H}^{s}}}$ is given by [$$\begin{split} s=s_c(d,p):=\tfrac{d}{2}-\tfrac{2}{p-1}. \end{split}$$]{} The scaling heuristic suggests that the flow becomes unstable in $H^s$ for $s<s_c(d,p)$. In addition, we will demonstrate norm inflation phenomena by tracking the transfer of energy from high to low frequencies (the so-called “high-to-low frequency cascade”), which naturally restricts us to negative Sobolev spaces. In fact, we will show NI$_s$ with any $s<\min {\{ s_c(d,p),0\}}$ for any $Z$ and $(p,q)$, as well as with some negative but scale-subcritical regularities for specific nonlinearities. Precisely, our result reads as follows: \[thm:main0\] Let $Z$ be a spatial domain of the form ${{\mathbb{R}}}^{d_1}\times {{\mathbb{T}}}^{d_2}$ with $d_1+d_2=d\ge 1$, and let $p\ge 2$, $0\le q\le p$ be integers. Then, the initial value problem exhibits NI$_s$ in the following cases: 1. $Z$ and $(p,q)$ are arbitrary, $s<\min {\{ s_c(d,p),0\}}$. 2.
$d,p,s$ satisfy $s=s_c(d,p)=-\frac{d}{2}$; that is, $(d,p,s)=(1,3,-\frac{1}{2})$ and $(2,2,-1)$. 3. $d=1$, $(p,q)=(2,0),(2,2)$ and $s<-1$. 4. $Z={{\mathbb{R}}}^d$ with $1\le d\le 3$, $(p,q)=(2,1)$ and $s<-\frac{1}{4}$. 5. $Z={{\mathbb{R}}}^{d_1}\times {{\mathbb{T}}}^{d_2}$ with $d_1+d_2\le 3$, $d_2\ge 1$, $(p,q)=(2,1)$ and $s<0$. 6. $Z={{\mathbb{T}}}$, $(p,q)=(4,1),(4,2),(4,3)$ and $s<0$. There is an extensive literature on the ill-posedness of nonlinear Schrödinger equations, and a part of the above theorem has been proved in previous works. Concerning ill-posedness in the sense of norm inflation, Christ, Colliander, and Tao [@CCT03p-1] treated the case of gauge-invariant power-type nonlinearities $\pm |u|^{p-1}u$ on ${{\mathbb{R}}}^d$ and proved NI$_s$ when $0<s<s_c(d,p)$ or $s\le -\frac{d}{2}$ (with some additional restriction on $s$ if $p$ is not an odd integer). For the remaining range of regularities $-\frac{d}{2}<s<0$ (when $s_c\ge 0$) they proved the failure of uniform continuity of the solution map. Note that this milder form of ill-posedness is not necessarily incompatible with well-posedness in the sense of Hadamard, for which continuity of the solution map is required. Moreover, since their argument is based on scaling consideration and some ODE analysis, it does not apply in any obvious way to the cases of periodic domains,[^1] non gauge-invariant nonlinearities, and complex coefficients. Later, Carles, Dumas, and Sparber [@CDS12] and Carles and Kappeler [@CK17] studied norm inflation in Sobolev spaces of negative indices for the problem with smooth nonlinearities (i.e., $\pm |u|^{p-1}u$ with an odd integer $p\ge 3$) in ${{\mathbb{R}}}^d$ and in ${{\mathbb{T}}}^d$, respectively. They used a geometric optics approach to obtain NI$_s$ for $d\ge 2$ and $s<-\frac{1}{p}$ in the ${{\mathbb{R}}}^d$ case[^2] and for $s<0$ in the ${{\mathbb{T}}}^d$ case with the exception of $(d,p)=(1,3)$ for which $s<-\frac{2}{3}$ was assumed. 
(See [@C07; @AC09] for related ill-posedness results.) In fact, they showed a stronger instability property than NI$_s$ for these cases; that is, norm inflation *with infinite loss of regularity* (see Proposition \[prop:niilr\] below for the definition). Our argument, which evaluates each term in the power series expansion of the solution directly, is different from the aforementioned works. Note that, for smooth nonlinearities, Theorem \[thm:main0\] covers all the remaining cases in the range $s<\min {\{ s_c(d,p),0\}}$ and extends the result to the (partially) periodic setting as well as to the case of general nonlinearities with complex coefficients. Moreover, our argument also gives another proof of the results in [@CDS12; @CK17] on NI$_s$ with infinite loss of regularity; see Proposition \[prop:niilr\] for the precise statement. The one-dimensional cubic equation with nonlinearity $\pm |u|^2u$ has been attracting particular attention due to its various physical backgrounds and complete integrability. Note also that this is the only $L^2$-subcritical case among smooth and gauge-invariant nonlinearities. In spite of the $L^2$ subcriticality, the equation becomes unstable below $L^2$ due to the Galilean invariance, both in ${{\mathbb{R}}}$ and in ${{\mathbb{T}}}$. In fact, the initial value problem was shown to be globally well-posed in $L^2$ [@T87; @B93-1], whereas it was shown in [@KPV01; @CCT03] for ${{\mathbb{R}}}$ and in [@BGT02; @CCT03] for ${{\mathbb{T}}}$ that the solution map fails to be uniformly continuous below $L^2$. Ill-posedness below $L^2({{\mathbb{T}}})$ was established in the periodic case by the lack of continuity of the solution map [@CCT03p-2; @M09] and by the non-existence of solutions [@GO18]. Nevertheless, one can show an a priori bound in some Sobolev spaces below $L^2$ [@KT07; @CCT08; @KT12; @GO18], which prevents norm inflation.
Recent results in [@KT16p; @KVZ17p] finally gave an a priori bound in $H^s$ for $s>-\frac{1}{2}$, both in ${{\mathbb{R}}}$ and in ${{\mathbb{T}}}$. We remark that the NI$_s$ at $s=-\frac{1}{2}$ shown in Theorem \[thm:main0\] ensures the optimality of these results.[^3] In [@KVZ17p Theorem 4.7], Killip, Vişan and Zhang also derived an a priori bound on the solutions in a norm which is logarithmically stronger than the critical $H^{-\frac{1}{2}}$. Motivated by this result, in addition to Theorem \[thm:main0\] (ii) we also show norm inflation for the one-dimensional cubic equation in some “logarithmically subcritical” spaces; see Proposition \[prop:A\] below. Since the work of Kenig, Ponce, and Vega [@KPV96-NLS], non gauge-invariant nonlinearities have also been intensively studied. In [@BT06], Bejenaru and Tao proposed an abstract framework for proving ill-posedness in the sense of discontinuity of the solution map. They considered the quadratic NLS on ${{\mathbb{R}}}$ with nonlinearity $u^2$ and obtained a complete dichotomy in the Sobolev index $s$ into locally well-posed ($s\ge -1$) and ill-posed ($s<-1$) in the sense mentioned above. Their argument is based on the power series expansion of the solution, and they proved ill-posedness by observing that high-to-low frequency cascades break the continuity of the first nonlinear term in the series. A similar dichotomy was shown for the other quadratic nonlinearities $\bar{u}^2$, $u\bar{u}$ in [@K09; @KT10] by employing the idea of [@BT06]. Later, Iwabuchi and Ogawa [@IO15] considered the nonlinearities $u^2$, $\bar{u}^2$ in ${{\mathbb{R}}}$, ${{\mathbb{R}}}^2$ and refined the idea of [@BT06] to prove ill-posedness in the sense of NI$_s$ for $s<-1$ in ${{\mathbb{R}}}$ and $s\le -1$ in ${{\mathbb{R}}}^2$. In particular, in the two-dimensional case they could complement the local well-posedness result in $H^s({{\mathbb{R}}}^2)$, $s>-1$, which had been obtained in [@K09].
Note that the original argument of [@BT06] is unlikely to yield norm inflation phenomena or discontinuity of the solution map at the threshold regularity such as $s=-1$ in the above ${{\mathbb{R}}}^2$ case. We will discuss this issue further in the next section. Another quadratic nonlinearity $u\bar{u}$ was investigated by the same method in [@IU15], where for ${{\mathbb{R}}}^d$ with $d=1,2,3$ they proved norm inflation in Besov spaces $B^{-1/4}_{2,{\sigma}}$ of regularity $-\frac{1}{4}$ with $4<{\sigma}\le {\infty}$.[^4] It turns out that the method of Iwabuchi and Ogawa [@IO15] for proving norm inflation has a wide applicability. The purpose of the present article is to apply this method to NLS with general nonlinearities. In the last few years the method has been applied to a wide range of equations; see for instance [@MO15; @MO16; @HMO16; @CP16; @Ok17].[^5] In [@O17; @Ok17], norm inflation based at general initial data was proved for NLS and some other equations.[^6] We make some additional remarks on Theorem \[thm:main0\]. \(i) Concerning the one-dimensional periodic cubic NLS below $L^2$, the renormalized (or Wick ordered) equation [$$\begin{split} i{\partial}_tu+{\partial}_x^2u=\pm \big( |u|^2-2-\hspace{-13pt}\int _{{{\mathbb{T}}}}|u|^2\big) u \end{split}$$]{} is known to behave better than the original one with nonlinearity $\pm |u|^2u$; see [@OS12] for a detailed discussion. We note that our proof can also be applied to the renormalized cubic NLS. In fact, the solutions constructed in Theorem \[thm:main0\] are smooth and their $L^2$ norm is conserved. Then, a suitable gauge transformation, which does not change the $H^s$ norm at any time, gives smooth solutions to the renormalized equation that exhibit norm inflation. \(ii) In the periodic setting, our proof does not rely on any number theoretic consideration. 
Hence, it can be easily adapted to the problem on general anisotropic tori, whether rational or irrational; that is, $Z={{\mathbb{R}}}^{d_1}\times [{{\mathbb{R}}}^{d_2}/({\gamma}_1{\mathbb{Z}})\times \cdots \times ({\gamma}_{d_2}{\mathbb{Z}})]$ for any ${\gamma}_1,\dots ,{\gamma}_{d_2}>0$. \(iii) When $Z={{\mathbb{R}}}$ and $(p,q)=(4,2)$, the example in [@G00p Example 5.3] suggests that a high-to-low frequency cascade leads to instability of the solution map when $s<-\frac{1}{8}$. However, our argument does not imply NI$_s$ for $-\frac{1}{6}\le s<-\frac{1}{8}$ so far. There are far fewer results on ill-posedness for multi-term nonlinearities than for single-term ones. However, such nonlinear terms naturally appear in applications. For instance, the nonlinearity $6u^5-4u^3$ appears in a model related to shape-memory alloys [@FLS87], and $(u+2\bar{u}+u\bar{u})u$ is relevant in the study of asymptotic behavior for the Gross-Pitaevskii equation (see [e.g.]{} [@GNT09]). Note that norm inflation for a multi-term nonlinearity does not immediately follow from that for each nonlinear term. Our next result concerns the equation in full generality: \[thm:main\] The initial value problem exhibits NI$_s$ whenever $s$ satisfies the condition in Theorem \[thm:main0\] for at least one term $u^{q_j}\bar{u}^{p_j-q_j}$ in $F (u,\bar{u})$, except for the case where $Z={{\mathbb{T}}}$ and $F (u,\bar{u})$ contains $u\bar{u}$. When $Z={{\mathbb{T}}}$ and $F (u,\bar{u})$ contains $u\bar{u}$, NI$_s$ occurs in the following cases: 1. $s<0$ if $F (u,\bar{u})$ has a quintic or higher term, or one of $u^3\bar{u}$, $u^2\bar{u}^2$, $u\bar{u}^3$. 2. $s<-\frac{1}{6}$ if $F (u,\bar{u})$ has $u^4$ or $\bar{u}^4$ but no other quartic or higher terms. 3. $s\le -\frac{1}{2}$ if $F (u,\bar{u})$ has a cubic term but no quartic or higher terms. 4. $s<0$ if $F (u,\bar{u})$ has no cubic or higher terms. 
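As a concrete application of the theorem to the first example above (this worked example is ours): for $F(u,\bar{u})=6u^5-4u^3$ on $Z={{\mathbb{R}}}$, so that $d=1$ and $F$ does not contain $u\bar{u}$, the critical regularities of the two terms are [$$\begin{split} s_c(1,5)=\frac{1}{2}-\frac{2}{5-1}=0,\qquad s_c(1,3)=\frac{1}{2}-\frac{2}{3-1}=-\frac{1}{2}. \end{split}$$]{} The quintic term thus satisfies the condition in Theorem \[thm:main0\] for every $s<0$, and hence Theorem \[thm:main\] yields NI$_s$ for all $s<0$, a wider range than the one provided by the cubic term alone.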
In the above theorem, the range of regularities is restricted when $Z={{\mathbb{T}}}$ and $F (u,\bar{u})$ has $u\bar{u}$; note that the nonlinear term $u\bar{u}$ by itself leads to NI$_s$ for $s<0$ as shown in Theorem \[thm:main0\]. This restriction seems unnatural and is likely an artifact of our argument. The rest of this article is organized as follows. In the next section, we recall the ideas of [@BT06] and [@IO15] and discuss their common features and differences. Section \[sec:proof0\] is devoted to the proof of Theorem \[thm:main0\] for the single-term nonlinearities. Then, in Section \[sec:proof\] we see how to treat the multi-term nonlinearities, proving Theorem \[thm:main\]. In the appendices, we consider norm inflation with infinite loss of regularity in Section \[sec:niilr\] and inflation of various norms at the critical regularity for the one-dimensional cubic problem in Section \[sec:ap\]. Strategy for proof {#BT-IO} ================== We will use the power series expansion of the solutions to prove norm inflation. To see the idea, let us consider the simplest case of a quadratic nonlinearity $u^2$. This amounts to considering the integral equation [$$\label{eq:ie} \begin{split} u(t)&=e^{it\Delta}\phi -i\int _0^te^{i(t-\tau )\Delta}\big( u(\tau )\cdot u(\tau )\big) \,d\tau \\ &=:{\mathcal{L}}[\phi ](t)+{\mathcal{N}}[u,u](t),\qquad t\in [0,T]. \end{split}$$]{} We first recall the argument of Bejenaru and Tao [@BT06]. 
By Picard’s iteration, the power series $\sum _{k=1}^{\infty}U_k[\phi ]$ with [$$\begin{split} &U_1[\phi ]:={\mathcal{L}}[\phi ], \qquad U_2[\phi ]:={\mathcal{N}}[{\mathcal{L}}[\phi ],{\mathcal{L}}[\phi ]],\\ &U_3[\phi ]:={\mathcal{N}}[{\mathcal{L}}[\phi ],{\mathcal{N}}[{\mathcal{L}}[\phi ],{\mathcal{L}}[\phi ]]]+{\mathcal{N}}[{\mathcal{N}}[{\mathcal{L}}[\phi ],{\mathcal{L}}[\phi ]],{\mathcal{L}}[\phi ]],\\ &\quad \vdots \\ &U_k[\phi ]:=\sum _{k_1,k_2\ge 1;\,k_1+k_2=k}{\mathcal{N}}[U_{k_1}[\phi ],U_{k_2}[\phi ]] \qquad (k\ge 2) \end{split}$$]{} formally gives a solution to the integral equation. To justify this, we basically need the linear and bilinear estimates [$$\label{est:qwp} \begin{split} {\big\| {\mathcal{L}}[\phi ] \big\| _{S}}\le C{\big\| \phi \big\| _{D}},\qquad {\big\| {\mathcal{N}}[u_1,u_2] \big\| _{S}}\le C{\big\| u_1 \big\| _{S}}{\big\| u_2 \big\| _{S}} \end{split}$$]{} for the space of initial data $D$ and some space $S\subset C([0,T];D)$ in which we construct a solution. In fact, they showed (roughly speaking) the following: > Assume that the above estimates hold with the Banach space $D$ of initial data and some Banach space $S$. Then, (i) for any $k\ge 1$ the operators $U_k:D\to S$ are well-defined and satisfy ${\| U_k[\phi ] \| _{S}}\le (C{\| \phi \| _{D}})^k$, and (ii) there exists ${\varepsilon}_0>0$ (depending on the constants in the estimates) such that the solution map $\phi \mapsto u[\phi ]:=\sum _{k=1}^{\infty}U_k[\phi ]$ is well-defined on $B_D({\varepsilon}_0):={\big\{ \, \phi \in D \, \big| \, {\| \phi \| _{D}}\le {\varepsilon}_0 \, \big\}}$ and gives a solution to the integral equation. Next, consider some coarser topologies on $D$ and $S$ induced by the norms ${\| ~ \| _{D'}}$ and ${\| ~ \| _{S'}}$ weaker than ${\| ~ \| _{D}}$ and ${\| ~ \| _{S}}$, respectively. 
They claimed the following: > Assume further that the solution map $\phi \mapsto u[\phi ]$ given above is continuous from $(B_D({\varepsilon}_0),{\| ~ \| _{D'}})$ (i.e., $B_D({\varepsilon}_0)$ equipped with the $D'$ topology) to $(S,{\| ~ \| _{S'}})$. Then, for each $k$ the operator $U_k$ is continuous from $(B_D({\varepsilon}_0),{\| ~ \| _{D'}})$ to $(S,{\| ~ \| _{S'}})$. To show the continuity of $U_k$ in coarser topologies, by its homogeneity one can restrict to sufficiently small initial data. Then, by the above estimates, the contribution of the higher-order terms $\sum _{k'>k}U_{k'}[\phi ]$ can be made arbitrarily small compared to $U_k[\phi ]$. Combining this fact with the hypothesis that $\sum _{k\ge 1}U_k[\phi ]$ is continuous, one can show the claim by an induction argument on $k$. Now, this claim gives a way to prove ill-posedness in coarse topologies. Namely, one can show the discontinuity of the solution map $\phi \mapsto \sum _{k=1}^{\infty}U_k[\phi ]$ in coarse topologies by simply establishing the discontinuity of the (more explicit) map $\phi \mapsto U_k[\phi ]$ for at least one $k$.[^7] We notice that this proof of ill-posedness includes evaluating the higher-order terms by using the estimates, that is, well-posedness in a stronger topology. Here, we observe two facts about this method. First, it cannot yield norm inflation in coarse topologies. This is because the image of the continuous solution map with domain $B_D({\varepsilon}_0)$ is bounded in $S$, and hence it must be bounded in weaker norms. Secondly, the ‘well-posedness’ estimates in $D,S$ and discontinuity of some $U_k$ in $D',S'$ would imply the discontinuity of $U_k$ in any ‘intermediate’ norms $D'',S''$ satisfying [$$\begin{split} {\| \phi \| _{D'}}{\lesssim}{\| \phi \| _{D''}}{\lesssim}{\| \phi \| _{D}}^{\theta}{\| \phi \| _{D'}}^{1-{\theta}},\qquad {\| u \| _{S'}}{\lesssim}{\| u \| _{S''}}{\lesssim}{\| u \| _{S}} \end{split}$$]{} for some $0<{\theta}<1$. 
In fact, if $U_k:(B_D({\varepsilon}_0),{\| ~ \| _{D'}})\to S'$ is not continuous, there exist ${\{ \phi _n\}}\subset B_D({\varepsilon}_0)$ and $\phi _{\infty}\in B_D({\varepsilon}_0)$ such that ${\| \phi _n-\phi _{\infty}\| _{D'}}\to 0$ ($n\to {\infty}$) but ${\| U_k[\phi _n]-U_k[\phi _{\infty}] \| _{S'}}{\gtrsim}1$. Since ${\{ \phi _n\}}$ is bounded in $D$, this implies that ${\| \phi _n-\phi _{\infty}\| _{D''}}\to 0$ and ${\| U_k[\phi _n]-U_k[\phi _{\infty}] \| _{S''}}{\gtrsim}1$. In particular, if we work in Sobolev spaces: [$$\begin{split} D=H^{s_0},\quad S\hookrightarrow C([0,T];H^{s_0}),\quad D'=H^{s_1},\quad S'=C([0,T];H^{s_1})\qquad (s_0>s_1), \end{split}$$]{} then ill-posedness in $H^{s_1}$ as a consequence of the argument in [@BT06] should actually yield ill-posedness in any $H^s$, $s_1\le s<s_0$, while we have well-posedness in $H^{s_0}$. Therefore, the regularity $s_0$ at which we invoke the well-posedness estimates must automatically be the threshold regularity for well-/ill-posedness. This explains why the same argument cannot be applied to the two-dimensional quadratic NLS with nonlinearity $u^2$. In fact, as mentioned in the Introduction, the required estimates are obtained in $D=H^s$ when $s>-1$ (with a suitable $S$) but fail if $s\le -1$ (for any $S$ continuously embedded into $C([0,T];H^s)$), and hence well-posedness at the threshold regularity is not available in this case. We next recall Iwabuchi and Ogawa’s result [@IO15], which settled the aforementioned two-dimensional case. Indeed, the argument in [@IO15] is similar to that of [@BT06] in that it exploits the power series expansion and shows that one term in the series exhibits instability and dominates all the other terms. Now, we notice that the existence time $T>0$ is allowed to shrink for the purpose of establishing norm inflation, while in [@BT06] it is fixed and uniform with respect to the initial data. 
The main difference of the argument in [@IO15] from that of [@BT06] is that they worked with estimates of the form [$$\label{est:qwp'} \begin{split} {\big\| {\mathcal{L}}[\phi ] \big\| _{S_T}}\le C{\big\| \phi \big\| _{D}},\qquad {\big\| {\mathcal{N}}[u_1,u_2] \big\| _{S_T}}\le CT^{\delta}{\big\| u_1 \big\| _{S_T}}{\big\| u_2 \big\| _{S_T}} \end{split}$$]{} for the data space $D$, $S_T\subset C([0,T];D)$, and ${\delta}>0$, and considered the expansion up to different times $T$ according to the initial data. In fact, this enables us to take a sequence of initial data which is unbounded in $D$ (but converges to $0$ in a weaker norm), and such a set of initial data actually yields an unbounded sequence of solutions. Another feature of the argument in [@IO15] is that the higher-order terms were estimated directly in $D'$ by using properties of the specific initial data they chose; in [@BT06] these terms were simply estimated in $D$ by the estimates, which hold for general functions.[^8] At a technical level, another novelty in [@IO15] is the use of the modulation space $M_{2,1}$ as $D$ instead of Sobolev spaces. The bilinear estimate is then straightforward thanks to the algebra property of $M_{2,1}$. Finally, we remark that the strategies of [@BT06; @IO15] work well in the case where the operator $U_k$ involves a significant high-to-low frequency cascade, as mentioned in [@BT06]. However, the situation is different in the case of a *system* of equations, as there is more than one regularity index and one cannot simply order two pairs of regularity indices; see e.g. [@MO15], where the argument of [@IO15] was employed to derive norm inflation from nonlinear interactions of “high$\times$low$\to$high” type. Proof of Theorem \[thm:main0\] {#sec:proof0} ============================== Let us first consider the case of a single-term nonlinearity and prove Theorem \[thm:main0\]. The argument in this section basically follows that in [@IO15]. 
Since the coefficient $\nu \neq 0$ plays no role in our proof, we assume $\nu =1$ for simplicity. We write [$$\begin{split} \mu _{p,q}(z_1,\dots ,z_p):=\prod _{l=1}^qz_l\prod _{m=q+1}^p\bar{z}_m,\qquad \mu _{p,q}(z):=\mu _{p,q}(z,\dots ,z), \end{split}$$]{} so that $u^q\bar{u}^{p-q}=\mu _{p,q}(u)$. \[defn:U\_k\] For $\phi \in L^2(Z)$, we (formally) define [$$\begin{split} U_1[\phi ](t)&:=e^{it\Delta}\phi ,\\ U_k[\phi ](t)&:=-i\sum _{{\begin{smallmatrix} k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k \end{smallmatrix}}}\int _0^t e^{i(t-\tau )\Delta}\mu _{p,q}\big( U_{k_1}[\phi ],\dots ,U_{k_p}[\phi ]\big) (\tau )\,d\tau ,\qquad k\ge 2. \end{split}$$]{} Note that $U_{k}[\phi ]=0$ unless $k\equiv 1\mod p-1$. The expansion $u=\sum _{k=1}^{\infty}U_k[\phi ]$ of the (unique) solution $u$ will play a crucial role in the proof. To make sense of this representation, we use modulation spaces. The notion of modulation spaces was introduced by Feichtinger in the 1980s [@F83] and has since become one of the common tools in the study of nonlinear evolution PDEs; see e.g. the survey [@RSW12] and references therein. Let $A>0$ be a dyadic number. Define the space $M_A$ as the completion of $C_0^{\infty}(Z)$ with respect to the norm $${\big\| f \big\| _{M_A}}:=\sum _{\xi \in A{\mathbb{Z}}^d}{\big\| {\widehat}{f} \big\| _{L^2(\xi +Q_A)}},$$ where $Q_A:=[-\frac{A}{2},\frac{A}{2})^d$. We consider the space $M_A$ with $A<1$ only when $Z={{\mathbb{R}}}^d$. For $Z={{\mathbb{R}}}^{d_1}\times {{\mathbb{T}}}^{d_2}$, the $L^2(\xi +Q_A)$ norm in the above definition means the $L^2$ norm restricted onto $(\xi +Q_A)\cap {\widehat}{Z}$, where ${\widehat}{Z}:={{\mathbb{R}}}^{d_1}\times {\mathbb{Z}} ^{d_2}$. If $Z={{\mathbb{T}}}^d$, the space $M_1$ coincides with the Wiener algebra ${{\mathcal{F}}}L^1({{\mathbb{T}}}^d)$. We will use only the following properties of the space $M_A$; the proof is elementary and thus omitted. 
\[lem:M\_A\] (i) $M_A\cong _AM_1$, $H^{\frac{d}{2}+{\varepsilon}}\hookrightarrow M_1\hookrightarrow L^2$ (${\varepsilon}>0$). \(ii) There exists $C=C(d)>0$ such that for any $f,g\in M_A$, we have $${\big\| fg \big\| _{M_A}}\le CA^{\frac{d}{2}}{\big\| f \big\| _{M_A}}{\big\| g \big\| _{M_A}}.$$ Since the space $M_A$ is a Banach algebra and the linear propagator $e^{it\Delta}$ is unitary in $M_A$, we can easily show the following multilinear estimates. \[lem:U\_k\] Let $A\ge 1$ be a dyadic number and $\phi \in M_A$ with ${\| \phi \| _{M_A}}\le M$. Then, there exists $C>0$ independent of $A$ and $M$ such that [$$\begin{split} {\big\| U_k[\phi ](t) \big\| _{M_A}}\le t^{\frac{k-1}{p-1}}(CA^{\frac{d}{2}}M)^{k-1}M \end{split}$$]{} for any $t\ge 0$ and $k\ge 1$. Let ${\{ a_k\}} _{k=1}^{\infty}$ be the sequence defined by $$a_1=1,\qquad a_k=\frac{p-1}{k-1}\sum _{{\begin{smallmatrix} k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k \end{smallmatrix}}}a_{k_1}\cdots a_{k_p}\qquad (k\ge 2).$$ As observed in [@BT06 Eq. (16)], one can show inductively that $a_k\le C^k$ for some $C>0$. To be more precise, we state it as the following lemma. The $p=2$ case can be found in [@MO16 Lemma 4.2] with a detailed proof. \[lem:a\_k\] Let ${\{ b_k\}}_{k=1}^{\infty}$ be a sequence of nonnegative real numbers such that $$b_{k} \le C\sum _{{\begin{smallmatrix} k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k \end{smallmatrix}}}b_{k_1}\cdots b_{k_p},\qquad k\ge 2$$ for some $p\ge 2$ and $C>0$. Then, we have $$b_k\le b_1C_0^{k-1},\qquad k\ge 1;\qquad C_0:=\frac{\pi ^2}{6}(Cp^2)^{\frac{1}{p-1}}b_1.$$ By Lemma \[lem:a\_k\], it holds that $a_k\le C_0^{k-1}$ for some $C_0>0$. Thus, it suffices to show [$$\begin{split} {\big\| U_{k}[\phi ](t) \big\| _{M_A}}\le a_kt^{\frac{k-1}{p-1}}(C_1A^{\frac{d}{2}}M)^{k-1}M,\qquad t\ge 0,\quad k\ge 1 \end{split}$$]{} for some $C_1>0$. This is trivial if $k=1$. Let $k\ge 2$, and assume the above estimate for $U_1,U_2,\dots ,U_{k-1}$. 
Using Lemma \[lem:M\_A\], we have [$$\begin{split} {\big\| U_{k}[\phi ](t) \big\| _{M_A}}&\le CA^{\frac{d}{2}(p-1)}\sum _{{\begin{smallmatrix} k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k \end{smallmatrix}}}\int _0^t \prod _{j=1}^p{\big\| U_{k_j}[\phi ](\tau ) \big\| _{M_A}}\,d\tau \\ &\le CA^{\frac{d}{2}(p-1)}(C_1A^{\frac{d}{2}}M)^{k-p}M^p\sum _{{\begin{smallmatrix} k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k \end{smallmatrix}}}a_{k_1}\cdots a_{k_p}\int _0^t \tau ^{\frac{k-p}{p-1}}\,d\tau \\ &=Ca_kC_1^{k-p}(A^{\frac{d}{2}}M)^{k-1}Mt^{\frac{k-1}{p-1}}. \end{split}$$]{} The estimate for $U_k$ follows by setting $C_1$ to be $C^{\frac{1}{p-1}}$ with the constant $C$ in the last line, which is independent of $k$. A standard argument (cf. [@BT06 Theorem 3]) with Lemma \[lem:M\_A\] (ii) and Lemma \[lem:U\_k\] shows the following local well-posedness in $M_A$. \[cor:lwp\] Let $A\ge 1$ be dyadic, and $M>0$. If $0<T\ll (A^{d/2}M)^{-(p-1)}$, then for any $\phi \in M_A$ with ${\| \phi \| _{M_A}}\le M$ the following holds. \(i) A unique solution $u$ to the associated integral equation [$$\label{eq:ie'} \begin{split} u(t)=e^{it\Delta}\phi -i\int _0^t e^{i(t-\tau )\Delta}\mu _{p,q}(u(\tau ))\,d\tau ,\qquad t\in [0,T] \end{split}$$]{} exists in $C([0,T];M_A)$. \(ii) The solution $u$ given in (i) has the expression [$$\begin{split} u=\sum _{k=1}^{\infty}U_{k}[\phi ]=\sum _{l=0}^{\infty}U_{(p-1)l+1}[\phi ], \end{split}$$]{} which converges absolutely in $C([0,T];M_A)$. \(i) Let [$$\begin{split} \Psi _{\phi}[u](t):= e^{it\Delta}\phi -i\int _0^t e^{i(t-\tau )\Delta}\mu _{p,q}(u(\tau ))\,d\tau , \end{split}$$]{} then from Lemma \[lem:M\_A\] (ii) we have [$$\begin{split} {\big\| \Psi _\phi [u] \big\| _{L^{\infty}(0,T;M_A)}}\le {\| \phi \| _{M_A}}+CTA^{\frac{d}{2}(p-1)}{\| u \| _{L^{\infty}(0,T;M_A)}}^p \end{split}$$]{} and that $\Psi _\phi$ is a contraction on a ball in $C([0,T];M_A)$ if $TA^{\frac{d}{2}(p-1)}{\| \phi \| _{M_A}}^{p-1}\ll 1$. 
\(ii) The series $u=\sum _{k\ge 1}U_k[\phi ]$ converges in $C([0,T];M_A)$ by virtue of Lemma \[lem:U\_k\]. By uniqueness, it suffices to show that $u$ solves the equation . Let $u_K:=\sum _{k=1}^KU_k[\phi ]$, so that $u=\lim _{K\to {\infty}}u_K$ in $C([0,T];M_A)$. We see that $\Psi _\phi [u_K]-u_K$ consists of $k$-linear terms in $\phi$ with $K+1\le k\le pK$, and we can show [$$\begin{split} {\| \Psi _\phi [u_K]-u_K \| _{L^{\infty}(0,T;M_A)}}\le C(CT^{\frac{1}{p-1}}A^{\frac{d}{2}}M)^KM \end{split}$$]{} by an argument similar to Lemma \[lem:U\_k\]. By letting $K\to {\infty}$, we obtain $\Psi _\phi [u]=u$. \(i) In $M_A$ we have *unconditional* local well-posedness. In particular, the embedding (Lemma \[lem:M\_A\] (i)) shows that the unique solution with initial data in some high-regularity Sobolev space exists on a time interval $[0,T]$ and coincides with the solution constructed in Corollary \[cor:lwp\]. \(ii) In the following proof of Theorem \[thm:main0\] we will take initial data that are localized in frequency on several cubes of side length $O(A)$ located in ${\{ |\xi |\gg \max (1,A)\}}$. For such initial data the $L^2$ norm is comparable with the $M_A$ norm, but much smaller than the Sobolev norms of positive indices. In the $L^2$-supercritical cases (i.e., $s_c(d,p)>0$), no reasonable well-posedness is expected in $L^2$, while the use of higher Sobolev space would verify the power series expansion only on a smaller time interval. In this regard, the space $M_A$ is suitable for our purpose. Let $N,A$ be dyadic numbers to be specified so that $N\gg 1$ and $0< A\ll N$ ($1\le A\ll N$ when $Z$ has a periodic direction). 
In the proof of norm inflation, we will use initial data $\phi$ of the following form: [$$\label{cond:phi} \begin{split} &{\widehat}{\phi}=rA^{-\frac{d}{2}}N^{-s}\chi _\Omega \quad \text{with a positive constant $r$ and a set $\Omega$ satisfying}\\ &\Omega = \bigcup _{\eta \in \Sigma}(\eta +Q_A){\hspace{10pt}}\text{for some $\Sigma \subset {\{ \xi \in {{\mathbb{R}}}^d:|\xi |\sim N\}}$ s.t. $\# \Sigma \le 3$}. \end{split}$$]{} Note that ${\| \phi \| _{M_A}}\sim rN^{-s}$, ${\| \phi \| _{H^s}}\sim r$. We derive Sobolev bounds of $U_k[\phi ](t)$ with $\phi$ satisfying the above condition. \[lem:supp\] There exists $C>0$ such that for any $\phi$ satisfying the above condition and any $k\ge 1$, we have [$$\begin{split} \big| {\operatorname{supp}\> {\widehat}{U_{k}[\phi]}(t)}\big| \le C^kA^d,\qquad t\ge 0. \end{split}$$]{} Since the $\xi$-support of ${\widehat}{U_{k}[\phi]}$ is determined by a convolution of $k$ copies of $\hat{\phi}$ or $\hat{\bar{\phi}}={\overline}{\hat{\phi}(-\cdot )}$, it is easily seen that [$$\begin{split} {{\operatorname{supp}\> {\widehat}{U_{k}[\phi]}(t)}\subset \bigcup _{\eta \in {\mathcal{S}}_k}\big( \eta +Q_{kA}\big)} \end{split}$$]{} for all $t\ge 0$, where ${\mathcal{S}}_1:=\Sigma$ and [$$\begin{split} {\mathcal{S}}_k:=&{\big\{ \, \eta \in {{\mathbb{R}}}^d \, \big| \, \eta =\sum _{l=1}^k\eta _l,\,\eta _l\in \Sigma \cup (-\Sigma )\;(1\le l\le k) \, \big\}},\qquad k\ge 2. \end{split}$$]{} Since $\# {\mathcal{S}}_k\le 6^k$, we have [$$\begin{split} \big| {\operatorname{supp}\> {\widehat}{U_{k}[\phi]}(t)}\big| \le \big| Q_{kA}\big| \# {\mathcal{S}}_k\le (kA)^d6^k\le C^kA^d.\qedhere \end{split}$$]{} \[lem:U\_k\_H\^s\] Let $\phi$ satisfy the above condition, and assume that $s<0$. Then, there exists $C>0$ depending only on $d,p,s$ such that the following holds. 1. ${\big\| U_1[\phi ](T) \big\| _{H^s}}\le Cr$ for any $T\ge 0$. 2. 
${\big\| U_{k}[\phi](T) \big\| _{H^s}}\le Cr(C\rho)^{k-1}A^{-\frac{d}{2}}N^{-s}f_s(A)$for any $T\ge 0$ and $k\ge 2$, where [$$\begin{split} \rho :=rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{p-1}},\qquad f_s(A):={\big\| {{\langle \xi \rangle }}^s \big\| _{L^2({\{ |\xi |\le A\}})}}. \end{split}$$]{} \(i) is easily verified. For (ii), we see that [$$\begin{split} &{\big\| U_{k}[\phi](t) \big\| _{H^s}}\le {\big\| {{\langle \xi \rangle }}^s \big\| _{L^2({\operatorname{supp}\> {\widehat}{U_{k}[\phi]}(t)})}}\sup _{\xi \in {{\mathbb{R}}}^d}\big| {\widehat}{U_k[\phi]}(t,\xi )\big| \\ &\le {\big\| {{\langle \xi \rangle }}^s \big\| _{L^2({\operatorname{supp}\> {\widehat}{U_{k}[\phi]}(t)})}}\sum _{{\begin{smallmatrix} k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k \end{smallmatrix}}}\int _0^t {\big\| \big| v_{k_1}(\tau )\big| *\cdots *\big| v_{k_p}(\tau )\big| \big\| _{L^{\infty}}}\,d\tau , \end{split}$$]{} where $v_{k_l}$ is either ${\widehat}{U_{k_l}[\phi]}$ or ${\widehat}{{\overline}{U_{k_l}[\phi ]}}$. By Young’s inequality, the above is bounded by [$$\begin{split} &{\big\| {{\langle \xi \rangle }}^s \big\| _{L^2({\operatorname{supp}\> {\widehat}{U_{k}[\phi]}(t)})}}\sum _{{\begin{smallmatrix} k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k \end{smallmatrix}}}\int _0^t {\big\| v_{k_1}(\tau ) \big\| _{L^2}}{\big\| v_{k_2}(\tau ) \big\| _{L^2}}\prod _{l=3}^{p}{\big\| v_{k_l}(\tau ) \big\| _{L^1}}\,d\tau \\ &\le {\big\| {{\langle \xi \rangle }}^s \big\| _{L^2({\operatorname{supp}\> {\widehat}{U_{k}[\phi]}(t)})}}\sum _{{\begin{smallmatrix} k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k \end{smallmatrix}}}\int _0^t \prod _{l=3}^p\big| {\operatorname{supp}\> {\widehat}{U_{k_l}[\phi]}(\tau )}\big| ^{\frac{1}{2}}\prod _{l=1}^p{\big\| {\widehat}{U_{k_l}[\phi]}(\tau ) \big\| _{L^2}}\,d\tau . 
\end{split}$$]{} Since $s<0$, for any bounded set $D\subset {{\mathbb{R}}}^d$ it holds that [$$\begin{split} \big| {\{ {{\langle \xi \rangle }}^{s}>{\lambda}\}}\cap D\big| \le \big| {\{ {{\langle \xi \rangle }}^s>{\lambda}\}}\cap B_D\big| \qquad ({\lambda}>0), \end{split}$$]{} where $B_D\subset {{\mathbb{R}}}^d$ is the ball centered at the origin with $|D|=|B_D|$. This implies that ${\| {{\langle \xi \rangle }}^s \| _{L^2(D)}}\le {\| {{\langle \xi \rangle }}^s \| _{L^2(B_D)}}$. Moreover, it follows from Lemma \[lem:U\_k\] with $M=CrN^{-s}$ that [$$\begin{split} {\big\| U_{k}[\phi](t) \big\| _{L^2}}\le {\big\| U_{k}[\phi](t) \big\| _{M_A}}\le Ct^{\frac{k-1}{p-1}}(CrA^{\frac{d}{2}}N^{-s})^{k-1}rN^{-s},\qquad k\ge 1 . \end{split}$$]{} Hence, we apply Lemma \[lem:supp\] to bound the above by [$$\begin{split} &{\big\| {{\langle \xi \rangle }}^s \big\| _{L^2({\{ |\xi |\le C^{\frac{k}{d}}A\}})}}\cdot C^{\frac{k}{2}}A^{\frac{d(p-2)}{2}}\sum _{{\begin{smallmatrix} k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k \end{smallmatrix}}}\int _0^t \prod _{l=1}^p\big[ C\tau ^{\frac{k_l-1}{p-1}}(CrA^{\frac{d}{2}}N^{-s})^{k_l-1}rN^{-s}\big] \,d\tau \\ &\le C^k{\big\| {{\langle \xi \rangle }}^s \big\| _{L^2({\{ |\xi |\le A\}})}}A^{\frac{d(p-2)}{2}+\frac{d}{2}(k-p)}(rN^{-s})^k\int _0^t\tau ^{\frac{k-p}{p-1}}\,d\tau \\ &\le f_s(A)A^{\frac{d}{2}(k-2)}(CrN^{-s})^kt^{\frac{k-1}{p-1}}, \end{split}$$]{} which is the desired bound. We observe the following lower bounds on the $H^s$ norm of the first nonlinear term in the expansion of the solution. \[lem:U\_p\] The following estimates hold for any $s\in {{\mathbb{R}}}$. 1. Let $(p,q)$ and $Z={{\mathbb{R}}}^{d_1}\times {{\mathbb{T}}}^{d_2}$ be arbitrary. For $1\le A\ll N$, we define the initial data $\phi$ as in the condition above, with $\Sigma ={\{ Ne_d, -Ne_d, 2Ne_d\}}$, where $e_d:=(0,\dots ,0,1)\in {{\mathbb{R}}}^d$. If $0<T\ll N^{-2}$, then we have [$$\begin{split} {\big\| U_p[\phi ](T) \big\| _{H^s}}{\gtrsim}r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A). \end{split}$$]{} 2. 
Let $(p,q)=(2,1)$ and $Z={{\mathbb{R}}}^d$, $1\le d\le 3$. For $N\gg 1$, define $\phi$ by [$$\begin{split} {\widehat}{\phi}:=rN^{\frac{1}{2}-s}\chi _{Ne_d+{\widetilde}{Q}_{N^{-1}}}{\hspace{10pt}}\text{with}{\hspace{10pt}}r>0,{\hspace{10pt}}{\widetilde}{Q}_{N^{-1}}:=[-\tfrac{1}{2},\tfrac{1}{2})^{d-1}\times [-\tfrac{1}{2N},\tfrac{1}{2N}). \end{split}$$]{} Then, for any $0<T\ll 1$ we have [$$\begin{split} {\big\| U_2[\phi ](T) \big\| _{H^s}}{\gtrsim}r^2N^{-2s-\frac{1}{2}}T. \end{split}$$]{} 3. Let $(p,q)=(2,1)$ and $Z={{\mathbb{R}}}^{d_1}\times {{\mathbb{T}}}^{d_2}$ with $d_1+d_2\le 3$, $d_2\ge 1$. Define $\phi$ as in the condition above, with $A=1$, $\Sigma ={\{ Ne_d\}}$. Then, for any $0<T\ll 1$ we have [$$\begin{split} {\big\| U_2[\phi ](T) \big\| _{H^s}}{\gtrsim}r^2N^{-2s}T. \end{split}$$]{} 4. Let $(p,q)=(4,1)$ or $(4,2)$ or $(4,3)$ and $Z={{\mathbb{T}}}$. Define $\phi$ as in the condition above, with $A=1$, $\Sigma ={\{ -N,2N,3N\}}$. Then, for any $T>0$ we have [$$\begin{split} {\big\| U_4[\phi ](T) \big\| _{H^s}}{\gtrsim}r^4N^{-4s}T. \end{split}$$]{} Note that [$$\begin{split} {\widehat}{U_p[\phi ]}(T,\xi )=ce^{-iT|\xi |^2}\int _{{\Gamma}}\prod _{l=1}^q{\widehat}{\phi}(\xi _l)\prod _{m=q+1}^p{\overline}{{\widehat}{\phi}(\xi _m)}\int _0^T e^{it\Phi}\,dt, \end{split}$$]{} where [$$\begin{gathered} {\Gamma}:={\big\{ \, (\xi _1,\dots ,\xi _p) \, \big| \, \sum _{l=1}^q\xi _l-\sum _{m=q+1}^p\xi _m=\xi \, \big\}},\quad \Phi :=|\xi |^2-\sum _{l=1}^q|\xi _l|^2+\sum _{m=q+1}^p|\xi _m|^2. 
\end{gathered}$$]{} \(i) If we restrict $\xi$ to $Q_A$, we have [$$\begin{split} {\widehat}{U_p[\phi ]}(T,\xi )=c(rA^{-\frac{d}{2}}N^{-s})^pe^{-iT|\xi |^2}\sum _{(\eta _1,\dots ,\eta _p)}\int _{{\Gamma}}\prod _{l=1}^p\chi _{\eta _l+Q_A}(\xi _l)\int _0^T e^{it\Phi}\,dt, \end{split}$$]{} where the sum is taken over the set [$$\begin{split} {\big\{ \, (\eta _1,\dots ,\eta _p)\in {\{ \pm Ne_d ,2Ne_d\}}^p \, \big| \, \sum _{l=1}^q\eta _l-\sum _{m=q+1}^p\eta _m=0 \, \big\}}, \end{split}$$]{} which is non-empty for any $(p,q)$.[^9] Since $|\Phi| {\lesssim}N^2$ in the integral, for $0<T\ll N^{-2}$ we have [$$\begin{split} |{\widehat}{U_p[\phi ]}(T,\xi )|{\gtrsim}(rA^{-\frac{d}{2}}N^{-s})^p(A^d)^{p-1}T\chi _{p^{-1}Q_{A}}(\xi ), \end{split}$$]{} and thus [$$\begin{split} {\big\| U_p[\phi ](T) \big\| _{H^s}}{\gtrsim}(rA^{-\frac{d}{2}}N^{-s})^p(A^d)^{p-1}T{\big\| {{\langle \xi \rangle }}^s \big\| _{L^2(p^{-1}Q_{A})}}\sim r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A). \end{split}$$]{} \(ii) In this case we have [$$\begin{split} {\widehat}{U_2[\phi ]}(T,\xi )=c(rN^{\frac{1}{2}-s})^2e^{-iT|\xi |^2}\int _{\xi _1-\xi _2=\xi}\chi _{{\widetilde}{Q}_{N^{-1}}}(\xi _1-Ne_d)\chi _{{\widetilde}{Q}_{N^{-1}}}(\xi _2-Ne_d)\int _0^T e^{it\Phi}\,dt, \end{split}$$]{} and in the integral, for $\xi =\xi _1-\xi _2\in {\widetilde}{Q}_{N^{-1}}$, [$$\begin{split} \Phi =|\xi |^2-|\xi _1|^2+|\xi _2|^2=|\xi |^2-|\xi _1-Ne_d|^2+|\xi _2-Ne_d|^2-2(\xi _1-\xi _2)\cdot Ne_d=O(1). \end{split}$$]{} Hence, if $0<T\ll 1$, we have [$$\begin{split} |{\widehat}{U_2[\phi ]}(T)|{\gtrsim}(rN^{\frac{1}{2}-s})^2N^{-1}T\chi _{2^{-1}{\widetilde}{Q}_{N^{-1}}},\qquad {\big\| U_2[\phi ](T) \big\| _{H^s}}{\gtrsim}(rN^{\frac{1}{2}-s})^2N^{-\frac{3}{2}}T \end{split}$$]{} for any $s\in {{\mathbb{R}}}$. 
\(iii) Similarly to (ii), we see that [$$\begin{split} {\widehat}{U_2[\phi ]}(T,(\xi ',0))=c(rN^{-s})^2e^{-iT|\xi |^2}\int _{\xi _1'-\xi _2'=\xi '}\chi _{[-1/2,1/2 )^{d-1}}(\xi _1')\chi _{[-1/2,1/2 )^{d-1}}(\xi _2')\int _0^T e^{it\Phi}\,dt, \end{split}$$]{} where the integral in $\xi '=(\xi _1,\dots ,\xi _{d-1})$ vanishes if $Z={{\mathbb{T}}}$. In the integral, [$$\begin{split} \Phi =|(\xi ',0)|^2-|(\xi _1',N)|^2+|(\xi _2',N)|^2=O(1). \end{split}$$]{} Hence, if $0<T\ll 1$, we have [$$\begin{split} {\big\| U_2[\phi ](T) \big\| _{H^s}}\ge {\big\| {{\langle \xi \rangle }}^s{\widehat}{U_2[\phi ]}(T) \big\| _{L^2(Q_{1/2})}}{\gtrsim}(rN^{-s})^2T \end{split}$$]{} for any $s\in {{\mathbb{R}}}$. \(iv) We first consider $(p,q)=(4,1)$; the case of $(4,3)$ is treated in the same way. Observe that [$$\begin{split} &{\big\{ \, (\eta _1,\dots ,\eta _4)\in {\{ -N,2N,3N\}}^4 \, \big| \, \eta _1-\eta _2-\eta _3-\eta _4=0 \, \big\}}\\ &={\{ (3N,-N,2N,2N),\,(3N,2N,-N,2N),\,(3N,2N,2N,-N)\}}. \end{split}$$]{} Therefore, we have [$$\begin{split} {\widehat}{U_4[\phi ]}(T,0)&=c(rN^{-s})^4\sum _{{\begin{smallmatrix} \xi _1,\dots ,\xi _4\in {\mathbb{Z}}\\ \xi _1-\xi _2-\xi _3-\xi _4=0 \end{smallmatrix}}}\prod _{l=1}^4\chi _{{\{ -N,2N,3N\}}}(\xi _l)\int _0^T e^{it\Phi}\,dt\\ &=3c(rN^{-s})^4\int _0^T e^{it\{ 0^2-(3N)^2+(-N)^2+(2N)^2+(2N)^2\}}\,dt =3c(rN^{-s})^4T, \end{split}$$]{} which implies [$$\begin{split} {\big\| U_4[\phi ](T) \big\| _{H^s}}{\gtrsim}(rN^{-s})^4T \end{split}$$]{} for any $s\in {{\mathbb{R}}}$ and $T>0$. Next, we consider $(p,q)=(4,2)$, which is very similar to the above. 
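The resonance counts behind the factors $3c$ above and $15c$ below can be double-checked by brute force. The following short script (our own verification sketch, not part of the proof) enumerates the admissible quadruples over $\Sigma ={\{ -N,2N,3N\}}$ in units of $N$, which suffices since the constraints are homogeneous in $N$, and also confirms that the phase $\Phi$ vanishes at $\xi =0$ on every such quadruple, so that $\int _0^Te^{it\Phi}\,dt=T$ exactly:

```python
from itertools import product

# Frequencies Sigma = {-N, 2N, 3N} in units of N (the counts do not depend on N).
sigma = [-1, 2, 3]

def phase(q, e):
    # Phi at xi = 0 for mu_{4,q}: -sum_{l<=q} |xi_l|^2 + sum_{m>q} |xi_m|^2.
    return -sum(x * x for x in e[:q]) + sum(x * x for x in e[q:])

# (p, q) = (4, 1): quadruples with eta_1 - eta_2 - eta_3 - eta_4 = 0.
res_41 = [e for e in product(sigma, repeat=4) if e[0] - e[1] - e[2] - e[3] == 0]

# (p, q) = (4, 2): quadruples with eta_1 + eta_2 - eta_3 - eta_4 = 0.
res_42 = [e for e in product(sigma, repeat=4) if e[0] + e[1] - e[2] - e[3] == 0]

count_41, count_42 = len(res_41), len(res_42)
print(count_41, count_42)  # -> 3 15

# Every contributing quadruple is fully resonant (Phi = 0 at xi = 0).
assert all(phase(1, e) == 0 for e in res_41)
assert all(phase(2, e) == 0 for e in res_42)
```

The full resonance ($\Phi =0$ on all contributing quadruples) is what makes the lower bound in (iv) valid for arbitrary $T>0$, in contrast to the time restrictions in parts (i)–(iii).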
Since [$$\begin{split} &{\big\{ \, (\eta _1,\dots ,\eta _4)\in {\{ -N,2N,3N\}}^4 \, \big| \, \eta _1+\eta _2-\eta _3-\eta _4=0 \, \big\}}\\ &={\big\{ \, (\eta _1,\dots ,\eta _4)\in {\{ -N,2N,3N\}}^4 \, \big| \, {\{ \eta _1,\eta _2\}}={\{ \eta _3,\eta _4\}} \, \big\}}, \end{split}$$]{} we have [$$\begin{split} {\widehat}{U_4[\phi ]}(T,0)&=c(rN^{-s})^4\sum _{{\begin{smallmatrix} \xi _1,\dots ,\xi _4\in {\mathbb{Z}}\\ \xi _1+\xi _2-\xi _3-\xi _4=0 \end{smallmatrix}}}\prod _{l=1}^4\chi _{{\{ -N,2N,3N\}}}(\xi _l)\int _0^T e^{it\Phi}\,dt=15c(rN^{-s})^4T, \end{split}$$]{} and the same estimate holds. Now, we are in a position to prove norm inflation. We first recall that $U_k[\phi]=0$ unless $k\equiv 1\mod p-1$. If the initial data $\phi$ satisfies the condition above, Corollary \[cor:lwp\] guarantees existence of the solution and the power series expansion in $M_A$ up to time $T$ whenever $\rho =rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{p-1}}\ll 1$. Case 1: General $Z$ and $(p,q)$, $s<\min {\{ s_c(d,p),0\}}$. Take $\phi$ as in Lemma \[lem:U\_p\] (i). From Lemmas \[lem:U\_k\_H\^s\] and \[lem:U\_p\], under the conditions [$$\label{cond:1} \begin{split} T\ll N^{-2},\quad \rho \ll 1,\quad r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\gg r, \end{split}$$]{} we have [$$\begin{split} {\big\| u(T) \big\| _{H^s}}\sim {\big\| U_p[\phi ](T) \big\| _{H^s}}\sim r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A). \end{split}$$]{} Now, we set [$$\begin{split} r=(\log N)^{-1},\quad A\sim (\log N)^{-\frac{p+1}{|s|}}N,\quad T=(A^{-\frac{d}{2}}N^s)^{p-1}, \end{split}$$]{} so that $\rho = (\log N)^{-1}\ll 1$. The super-critical assumption $s<s_c(d,p)=\frac{d}{2}-\frac{2}{p-1}$ ensures that [$$\begin{split} T\sim (\log N)^{\frac{d(p+1)}{2|s|}(p-1)}N^{(s-\frac{d}{2})(p-1)}\ll N^{-2}. \end{split}$$]{} Moreover, since $f_s(A){\gtrsim}A^{\frac{d}{2}+s}$ for any $s<0$ and $A\ge 1$, we see that [$$\begin{split} r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A){\gtrsim}r\rho ^{p-1}A^{s}N^{-s}\sim \log N\gg (\log N)^{-1}=r. 
\end{split}$$]{} Therefore, is fulfilled and we have ${\| u(T) \| _{H^s}}{\gtrsim}\log N$. Noticing ${\| \phi \| _{H^s}}\sim r=(\log N)^{-1}$ and $T\ll N^{-2}$, we show norm inflation by letting $N\to {\infty}$. : $Z={{\mathbb{R}}}$ or ${{\mathbb{T}}}$, $(p,q)=(2,0)$ or $(2,2)$, $-\frac{3}{2}\le s<-1$. We take the same initial data $\phi$ as in Case 1, but with [$$\begin{split} r=(\log N)^{-1},\quad A=1,\quad T=(\log N)^{-1}N^{-2}. \end{split}$$]{} Then, $T\ll N^{-2}$, $\rho =(\log N)^{-2}N^{-2-s}\ll 1$ by $s\ge -\frac{3}{2}$ and [$$\begin{split} r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\sim r\rho N^{-s}= (\log N)^{-3}N^{-2-2s}\gg 1\gg r \end{split}$$]{} by $s<-1$. Hence, holds and we have ${\| u(T) \| _{H^{s}}}\sim (\log N)^{-3}N^{-2-2s}\gg 1$, which together with ${\| \phi \| _{H^s}}\sim r\ll 1$ and $T\ll 1$ shows norm inflation by taking $N$ large. : $Z={{\mathbb{R}}}$ or ${{\mathbb{T}}}$, $p=3$, $s=-\frac{1}{2}$. Take the same $\phi$ as in Case 1, but with [$$\begin{split} r=(\log N)^{-\frac{1}{12}},\quad A\sim (\log N)^{-\frac{1}{4}}N,\quad T=(\log N)^{-\frac{1}{12}}N^{-2}. \end{split}$$]{} Then, $T\ll N^{-2}$, $\rho \sim (\log N)^{-\frac{1}{4}}\ll 1$ and [$$\begin{split} r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\sim r\rho ^2A^{-\frac{1}{2}}N^{\frac{1}{2}}(\log A)^{\frac{1}{2}}\sim (\log N)^{\frac{1}{24}}\gg 1\gg r. \end{split}$$]{} Hence, holds and we have ${\| u(T) \| _{H^{-\frac{1}{2}}}}\sim (\log N)^{\frac{1}{24}}\gg 1$, which implies norm inflation as well. : $Z={{\mathbb{R}}}^2$ or ${{\mathbb{R}}}\times {{\mathbb{T}}}$ or ${{\mathbb{T}}}^2$, $(p,q)=(2,0)$ or $(2,2)$, $s=-1$. We follow the argument in Case 1 again, but with [$$\begin{split} r=(\log N)^{-\frac{1}{12}},\quad A\sim (\log N)^{-\frac{1}{4}}N,\quad T=(\log N)^{-\frac{1}{6}}N^{-2}. \end{split}$$]{} Then, $T\ll N^{-2}$, $\rho \sim (\log N)^{-\frac{1}{2}}\ll 1$ and [$$\begin{split} r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\sim r\rho A^{-1}N(\log A)^{\frac{1}{2}}\sim (\log N)^{\frac{1}{6}}\gg 1\gg r. 
\end{split}$$]{} Hence, holds and we have ${\| u(T) \| _{H^{-1}}}\sim (\log N)^{\frac{1}{6}}\gg 1$, which shows NI$_{-1}$. : $Z={{\mathbb{R}}}^{d_1}\times {{\mathbb{T}}}^{d_2}$ with $d_1+d_2\le 3$, $d_2\ge 1$, $(p,q)=(2,1)$, and $\frac{d}{2}-2\le s<0$. Take $\phi$ as in Lemma \[lem:U\_p\] (iii) and choose $r,T$ as $r=(\log N)^{-1}$ and $T=N^s$, which implies [$$\begin{split} T\ll 1,\quad \rho \sim rN^{-s}T=(\log N)^{-1}\ll 1,\quad r\rho N^{-s}\sim (\log N)^{-2}N^{-s}\gg 1\gg r. \end{split}$$]{} From Lemmas \[lem:U\_k\_H\^s\] and \[lem:U\_p\], we have ${\| u(T) \| _{H^s}}\sim {\big\| U_2[\phi ](T) \big\| _{H^s}}\sim (\log N)^{-2}N^{-s}\gg 1$, and norm inflation occurs. : $Z={{\mathbb{T}}}$, $(p,q)=(4,1)$ or $(4,2)$ or $(4,3)$, and $-\frac{1}{6}\le s<0$. Take $\phi$ as in Lemma \[lem:U\_p\] (iv), and then take $r=(\log N)^{-1}$ and $T=N^{3s}$, which implies [$$\begin{split} T\ll 1,\quad \rho \sim rN^{-s}T^{\frac{1}{3}}=(\log N)^{-1}\ll 1,\quad r\rho ^3N^{-s}\sim (\log N)^{-4}N^{-s}\gg 1\gg r. \end{split}$$]{} Again, we have ${\| u(T) \| _{H^s}}\sim {\big\| U_4[\phi ](T) \big\| _{H^s}}\sim (\log N)^{-4}N^{-s}\gg 1$. : $Z={{\mathbb{R}}}^d$ with $1\le d\le 3$, $(p,q)=(2,1)$, and $\frac{d}{2}-2\le s<-\frac{1}{4}$. In this case the data $\phi$ is taken as in Lemma \[lem:U\_p\] (ii) and does not satisfy , so we need to modify the previous argument. We use the anisotropic modulation space ${\widetilde}{M}$ defined by the norm [$$\begin{split} {\big\| f \big\| _{{\widetilde}{M}}}:=\sum _{\xi \in {\mathbb{Z}}^{d-1}\times N^{-1}{\mathbb{Z}}}{\big\| {\widehat}{f} \big\| _{L^2(\xi +{\widetilde}{Q}_{N^{-1}})}}. \end{split}$$]{} We have the product estimate [$$\begin{split} {\| fg \| _{{\widetilde}{M}}}{\lesssim}N^{-\frac{1}{2}}{\| f \| _{{\widetilde}{M}}}{\| g \| _{{\widetilde}{M}}} \end{split}$$]{} in this space.
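The gain $N^{-\frac{1}{2}}$ in this product estimate reflects the small measure of the anisotropic cells: each cell $\xi +{\widetilde}{Q}_{N^{-1}}$ has measure $O(N^{-1})$. A short sketch (following the proof of the algebra property of $M_A$): decomposing ${\widehat}{f}=\sum _\eta {\widehat}{f}_\eta$ with ${\widehat}{f}_\eta :={\widehat}{f}\chi _{\eta +{\widetilde}{Q}_{N^{-1}}}$, and similarly for $g$, each piece ${\widehat}{f}_\eta *{\widehat}{g}_\zeta$ is supported in a bounded number of cells around $\eta +\zeta$, and Young's and the Cauchy–Schwarz inequalities give [$$\begin{split} {\| {\widehat}{f}_\eta *{\widehat}{g}_\zeta \| _{L^2}}\le {\| {\widehat}{f}_\eta \| _{L^1}}{\| {\widehat}{g}_\zeta \| _{L^2}}\le |{\widetilde}{Q}_{N^{-1}}|^{\frac{1}{2}}{\| {\widehat}{f}_\eta \| _{L^2}}{\| {\widehat}{g}_\zeta \| _{L^2}}{\lesssim}N^{-\frac{1}{2}}{\| {\widehat}{f}_\eta \| _{L^2}}{\| {\widehat}{g}_\zeta \| _{L^2}}; \end{split}$$]{} summing over $\eta ,\zeta$ yields the estimate.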
Thus, we follow the proof of Lemma \[lem:U\_k\] to obtain [$$\begin{split} {\| U_k[\phi ](t) \| _{{\widetilde}{M}}}\le Cr(CrN^{-\frac{1}{2}-s}t)^{k-1}N^{-s} \end{split}$$]{} for any $k\ge 1$, which is used to justify the expansion of the solution in ${\widetilde}{M}$ up to time $T$ such that ${\widetilde}{\rho}:=rN^{-\frac{1}{2}-s}T\ll 1$. Then, by the same argument as in the proofs of Lemmas \[lem:supp\] and \[lem:U\_k\_H\^s\], we see that [$$\begin{split} |{\operatorname{supp}\> {\widehat}{U_k[\phi ]}(t)}|\le C^kN^{-1},\qquad {\| U_k[\phi ](T) \| _{H^s}}\le Cr(C{\widetilde}{\rho})^{k-1}N^{-s}. \end{split}$$]{} In particular, ${\| U_2[\phi ](T) \| _{H^s}}\sim r{\widetilde}{\rho}N^{-s}$ for $0<T\ll 1$ by Lemma \[lem:U\_p\] (iii). Now, we take $r=(\log N)^{-1}\ll 1$, $T=(\log N)^3N^{2s+\frac{1}{2}}\ll 1$, so that ${\widetilde}{\rho}=(\log N)^2N^s\ll 1$, $r{\widetilde}{\rho}N^{-s}=\log N \gg r$. From the estimates above, we have ${\| u(T) \| _{H^s}}\sim \log N \gg 1$, which shows norm inflation. Proof of Theorem \[thm:main\] {#sec:proof} ============================= Here, we see how to use the estimates for single-term nonlinearities for the proof in the multi-term cases. We write $p:=\max _{1\le j\le n}p_j$. For the initial value problem , the $k$-th order term $U_k[\phi ]$ in the expansion of the solution is given by $U_1[\phi ]:=e^{it\Delta}\phi$ and [$$\begin{split} U_k[\phi ]:=-i\sum _{j=1}^n\nu _j\sum _{{\begin{smallmatrix} k_1,\dots ,k_{p_j}\ge 1\\ k_1+\dots +k_{p_j}=k \end{smallmatrix}}}\int _0^t e^{i(t-\tau )\Delta}\mu _{p_j,q_j}\big( U_{k_1}[\phi ](\tau ),\dots ,U_{k_{p_j}}[\phi ](\tau )\big) \,d\tau \end{split}$$]{} for $k\ge 2$ inductively. The following lemmas are verified in the same manner as Lemmas \[lem:a\_k\], \[lem:U\_k\], and Corollary \[cor:lwp\]. 
\[lem:a\_k’\] Let ${\{ b_k\}}_{k=1}^{\infty}$ be a sequence of nonnegative real numbers such that $$b_{k} \le \sum _{j=1}^nC_j\sum _{{\begin{smallmatrix} k_1,\dots ,k_{p_j}\ge 1\\ k_1+\dots +k_{p_j}=k \end{smallmatrix}}}b_{k_1}\cdots b_{k_{p_j}},\qquad k\ge 2$$ for some $p_1,\dots ,p_n\ge 2$ and $C_1,\dots ,C_n>0$. Then, we have $$b_k\le b_1C_0^{k-1},\qquad k\ge 1,\qquad C_0=\max _{1\le j\le n}\frac{\pi ^2}{6}(nC_jp_j^2)^{\frac{1}{p_j-1}}b_1.$$ \[lem:U\_k’\] There exists $C>0$ such that for any $\phi \in M_A$ with ${\| \phi \| _{M_A}}\le M$ we have [$$\begin{split} {\big\| U_k[\phi ](t) \big\| _{M_A}}\le t^{\frac{k-1}{p-1}}(CA^{\frac{d}{2}}M)^{k-1}M \end{split}$$]{} for any $0\le t\le 1$ and $k\ge 1$. \[lem:MAlwp’\] Let $\phi \in M_A$ with ${\| \phi \| _{M_A}}\le M$. If $T>0$ satisfies $A^{\frac{d}{2}}MT^{\frac{1}{p-1}}\ll 1$, then a unique solution $u\in C([0,T];M_A)$ to exists and has the expansion $u=\sum _{k=1}^{\infty}U_k[\phi ]$. The next lemma can be verified similarly to Lemma \[lem:U\_k\_H\^s\]. \[lem:U\_k\_H\^s’\] Let $\phi$ satisfy and $s<0$. Then, the following holds. 1. ${\big\| U_1[\phi ](T) \big\| _{H^s}}\le Cr$for any $T\ge 0$. 2. ${\big\| U_k[\phi ](T) \big\| _{H^s}}\le Cr(C\rho)^{k-1}A^{-\frac{d}{2}}N^{-s}f_s(A)$for any $0\le T\le 1$ and $k\ge 2$, where [$$\begin{split} \rho =rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{p-1}}\quad (p=\max _{1\le j\le n}p_j),\qquad f_s(A)={\big\| {{\langle \xi \rangle }}^s \big\| _{L^2({\{ |\xi |\le A\}})}}. \end{split}$$]{} We now begin to prove Theorem \[thm:main\]. We divide the proof into two cases: (I) One of the terms of order $p$ (highest order) is responsible for norm inflation, or (II) a lower order term determines the range of regularities for norm inflation. Note that (II) occurs only when $Z={{\mathbb{R}}}$, $p=3$, $F (u,\bar{u})$ has the term $u\bar{u}$ and $s\in (-\frac{1}{2},-\frac{1}{4})$. 
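The combinatorial bound in Lemma \[lem:a\_k'\] can be sanity-checked numerically by running the recursion as an equality (the extremal case) and comparing with the claimed geometric bound; the parameters below ($n=1$, $C_1=1$, with $p=2$ and $p=3$) are illustrative choices, not taken from the text:

```python
import math

def compositions(k, p):
    # ordered p-tuples of positive integers summing to k
    if p == 1:
        if k >= 1:
            yield (k,)
        return
    for first in range(1, k - p + 2):
        for rest in compositions(k - first, p - 1):
            yield (first,) + rest

def max_ratio(p, b1, K=16, C=1.0, n=1):
    # run the recursion as an equality: b_k = C * sum over ordered
    # decompositions k_1 + ... + k_p = k (k_i >= 1) of b_{k_1} ... b_{k_p}
    b = {1: b1}
    for k in range(2, K + 1):
        b[k] = C * sum(math.prod(b[ki] for ki in comp)
                       for comp in compositions(k, p))
    C0 = (math.pi**2 / 6) * (n * C * p**2) ** (1 / (p - 1)) * b1
    # compare with the claimed bound b_k <= b_1 * C0**(k-1)
    return max(b[k] / (b1 * C0 ** (k - 1)) for k in range(1, K + 1))

for p in (2, 3):
    for b1 in (0.5, 1.0, 2.0):
        assert max_ratio(p, b1) <= 1.0
print("recursion bound of Lemma a_k' verified on samples")
```

For $p=2$ the equality recursion produces the Catalan numbers, which grow like $4^{k-1}$, comfortably below the rate $C_0\approx 6.58\, b_1$.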
(I): Rewrite the nonlinear terms as [$$\begin{split} F (u,\bar{u})=\sum _{q=0}^p\nu _{p,q}\mu _{p,q}(u)+\text{(terms of order less than $p$)}. \end{split}$$]{} Note that $\nu _{p,q}$ may be zero but $(\nu _{p,0},\dots ,\nu _{p,p})\neq (0,\dots ,0)$. We divide the series into four parts: [$$\begin{split} \sum _{k=1}^{\infty}U_k[\phi ]&=U_1[\phi ]+\Big\{ \sum _{k=2}^{p}U_k[\phi ]-\Big( -i\sum _{q=0}^p\nu _{p,q}\int _0^te^{i(t-\tau )\Delta}\mu _{p,q}\big( U_1[\phi ](\tau )\big) \,d\tau \Big) \Big\} \\ &{\hspace{10pt}}+\Big( -i\sum _{q=0}^p\nu _{p,q}\int _0^te^{i(t-\tau )\Delta}\mu _{p,q}\big( U_1[\phi ](\tau )\big) \,d\tau \Big) +\sum _{k=p+1}^{\infty}U_k[\phi ]\\ &=:U_1[\phi ]+U_{low}[\phi ]+U_{main}[\phi ]+U_{high}[\phi ]. \end{split}$$]{} Note that $U_{low}=0$ if $p=2$. The following lemma indicates how $U_{low}$ is dominated by $U_{main}$, and how the contributions of the $(p+1)$ terms in $U_{main}$ can be ‘separated’. \[lem:U\_p’\] We have the following: 1. Let $\phi$ satisfy and $s<0$. Let $0<T\le 1$, and assume that $\rho =rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{p-1}}\ll 1$. Then, (if $p\ge 3$,) [$$\begin{split} {\big\| U_{low}[\phi](T) \big\| _{H^s}}{\lesssim}r^2N^{-2s}f_s(A)T^{\frac{1}{p-2}}. \end{split}$$]{} 2. Let $q_*\in {\{ 0,1,\dots ,p\}}$ be such that $\nu _{p,q_*}\neq 0$. Then, for any $T\ge 0$ there exists $j\in {\{ 0,1,\dots ,p\}}$ such that [$$\begin{split} {\big\| U_{main}[e^{i\frac{j\pi}{p+1}}\phi ](T) \big\| _{H^s}}{\gtrsim}{\| G_{q_*}[\phi ](T) \| _{H^s}}, \end{split}$$]{} where [$$\begin{split} G_q[\phi ](t):=-i\int _0^te^{i(t-\tau )\Delta}\mu _{p,q}(U_1[\phi ](\tau ))\,d\tau ;\quad U_{main}[\phi ]=\sum _{q=0}^p\nu _{p,q}G_q[\phi ](t). \end{split}$$]{} \(i) We notice that the nonlinear terms of highest order $p$ have nothing to do with $U_{low}[\phi ]$. 
Hence, we estimate by Lemma \[lem:U\_k\_H\^s’\] (ii) with $p$ replaced by $p-1$ and have [$$\begin{split} {\big\| U_{low}[\phi ](T) \big\| _{H^s}}\le \sum _{k=2}^pCr(CrA^{\frac{d}{2}}N^{-s}T^{\frac{1}{(p-1)-1}})^{k-1}A^{-\frac{d}{2}}N^{-s}f_s(A). \end{split}$$]{} Since $0<T\le 1$ implies $rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{(p-1)-1}}\le \rho \ll 1$, we have [$$\begin{split} {\big\| U_{low}[\phi ](T) \big\| _{H^s}}{\lesssim}r\cdot rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{p-2}}\cdot A^{-\frac{d}{2}}N^{-s}f_s(A). \end{split}$$]{} \(ii) We observe that $\zeta _p:=e^{i\frac{\pi}{p+1}}$ satisfies $\sum _{j=0}^{p}\zeta _p^{2qj}=0$ if $q\not\equiv 0\mod p+1$. Since $G_q[\zeta _p^j\phi ]=\zeta _p^{(p-2q)j}G_q[\phi ]$, for any $0\le q_*\le p$ it holds that [$$\begin{split} \sum _{j=0}^p\zeta _p^{(2q_*-p)j}U_{main}[\zeta _p^j\phi ]&=\sum _{q=0}^{p}\sum _{j=0}^p\zeta _p^{2(q_*-q)j}\nu _{p,q}G_q[\phi ]=(p+1)\nu _{p,q_*}G_{q_*}[\phi ]. \end{split}$$]{} Hence, if $\nu _{p,q_*}\neq 0$, by the triangle inequality we see that [$$\begin{split} \sum _{j=0}^p{\big\| U_{main}[\zeta _p^j\phi ](T) \big\| _{H^s}}\ge (p+1)|\nu _{p,q_*}|{\big\| G_{q_*}[\phi ](T) \big\| _{H^s}}. \end{split}$$]{} This implies the claim. By Lemma \[lem:U\_p’\], the proof is almost reduced to the case of single-term nonlinearities, as we see below. : General $Z$ and $p$, $s<\min {\{ s_c(d,p),0\}}$. Let us take the initial data $\phi$ as in Lemma \[lem:U\_p\] (i), and assume $\rho =rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{p-1}}\ll 1$, $0<T\ll N^{-2}$. Lemma \[lem:U\_k\_H\^s’\] (ii) yields that [$$\begin{split} {\big\| U_{high}[\zeta _p^j\phi ](T) \big\| _{H^s}}{\lesssim}r\rho ^pA^{-\frac{d}{2}}N^{-s}f_s(A), \end{split}$$]{} while Lemma \[lem:U\_p’\] (ii) and Lemma \[lem:U\_p\] (i) imply that [$$\begin{split} {\big\| U_{main}[\zeta _p^j\phi ](T) \big\| _{H^s}}\sim r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\gg {\big\| U_{high}[\zeta _p^j\phi ](T) \big\| _{H^s}} \end{split}$$]{} for an appropriate $j$.
Hence, from Lemma \[lem:U\_k\_H\^s’\] (i) and Lemma \[lem:U\_p’\] (i), [$$\begin{split} {\big\| u(T) \big\| _{H^s}}&\ge \tfrac{1}{2}{\big\| U_{main}[\zeta _p^j\phi ](T) \big\| _{H^s}}-{\big\| U_{low}[\zeta _p^j\phi ](T) \big\| _{H^s}}-{\big\| U_1[\zeta _p^j\phi ](T) \big\| _{H^s}}\\ &\ge C^{-1}r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)-C\big( r^2N^{-2s}f_s(A)T^{\frac{1}{p-2}}+r\big) . \end{split}$$]{} If we take the same choice for $r,A,T$ as in Case 1 of the proof of Theorem \[thm:main0\]; [$$\begin{split} r=(\log N)^{-1},\quad A\sim (\log N)^{-\frac{p+1}{|s|}}N,\quad T=(A^{-\frac{d}{2}}N^s)^{p-1};\quad \text{so that}{\hspace{10pt}}\rho = (\log N)^{-1}, \end{split}$$]{} all the required conditions for norm inflation are satisfied when $p=2$. Even for $p\ge 3$, it suffices to check that [$$\begin{split} r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\gg r^2N^{-2s}f_s(A)T^{\frac{1}{p-2}}. \end{split}$$]{} This is equivalent to $\rho ^{p-2}\gg T^{\frac{1}{p-2}-\frac{1}{p-1}}$, which we can easily show. : $p=2$. We need to deal with the following situations: - $d=1$, $\nu _{2,1}=0$, $-\frac{3}{2}\le s<-1$; - $d=2$, $\nu _{2,1}=0$, $s=-1$; - $Z={{\mathbb{R}}}^{d_1}\times {{\mathbb{T}}}^{d_2}$ with $d_1+d_2\le 3$, $d_2\ge 1$, $\nu _{2,1}\neq 0$, and $\frac{d}{2}-2\le s<0$; - $Z={{\mathbb{R}}}^d$, $1\le d\le 3$, $\nu _{2,1}\neq 0$, $\frac{d}{2}-2\le s<-\frac{1}{4}$, which correspond to Cases 2, 4, 5, and 7 in the proof of Theorem \[thm:main0\], respectively. As seen in the preceding case, we do not have to care about $U_{low}$ and the proof is the same as the single-term cases, except that we need to pick up the appropriate one among $u^2$, $u\bar{u}$, $\bar{u}^2$ by using Lemma \[lem:U\_p’\] (ii). : $d=1$, $p=3$, $s=-\frac{1}{2}$. We take the initial data $e^{i\frac{j\pi}{4}}\phi$ with $\phi$ as in and parameters $r,A,T$ as in Case 3 for Theorem \[thm:main0\]. 
Following the argument in Case 1, it suffices to check the condition for ${\| U_{main} \| _{H^s}}\gg {\| U_{low} \| _{H^s}}$; [$$\begin{split} r\rho ^2A^{-\frac{1}{2}}N^{\frac{1}{2}}f_{-\frac{1}{2}}(A)\gg r^2Nf_{-\frac{1}{2}}(A)T. \end{split}$$]{} Actually, we see that $\text{L.H.S.}\sim (\log N)^{\frac{1}{24}}\gg (\log N)^{\frac{1}{4}}N^{-1}\sim \text{R.H.S.}$ : $Z={{\mathbb{T}}}$, $p=4$, $(\nu _{4,1},\nu _{4,2},\nu _{4,3})\neq (0,0,0)$, $s\in [-\frac{1}{6},0)$. Similarly, we take $e^{i\frac{j\pi}{5}}\phi$ with parameters $r,A,T$ as in Case 6 for Theorem \[thm:main0\]. It suffices to verify the condition [$$\begin{split} r\rho ^{3}N^{-s}\gg r^2N^{-2s}T^{\frac{1}{2}}, \end{split}$$]{} and in fact it holds that $\text{L.H.S.}\sim (\log N)^{-4}N^{-s}\gg (\log N)^{-2}N^{-\frac{s}{2}}\sim \text{R.H.S.}$ (II): Recall that we claim NI$_s$ for $s\in (-\frac{1}{2},-\frac{1}{4})$ in the case of $Z={{\mathbb{R}}}$, $p=3$, and $F (u,\bar{u})$ has the term $u\bar{u}$. We take $\phi$ as in with $A=N^{-1}$ and $\Sigma ={\{ N\}}$ (same as in Case 7 for the single-term nonlinearity). By Lemmas \[lem:MAlwp’\] and \[lem:U\_k\_H\^s’\], we can expand the solution whenever $\rho =rN^{-\frac{1}{2}-s}T^{\frac{1}{2}}\ll 1$ and we have [$$\begin{split} \sum _{k\ge 4}{\| U_k[\phi ](T) \| _{H^s}}{\lesssim}r\rho ^3N^{\frac{1}{2}-s}f_s(N^{-1})\sim r^4N^{-\frac{3}{2}-4s}T^{\frac{3}{2}} \end{split}$$]{} for $0<T\le 1$. For $U_3$, observing that the Fourier support is in the region $|\xi |\sim N$, we modify the estimate in Lemma \[lem:U\_k\_H\^s’\] to obtain [$$\begin{split} {\| U_3[\phi ](T) \| _{H^s}}{\lesssim}r\rho ^2N^{\frac{1}{2}-s}\cdot {\| {{\langle \xi \rangle }}^s \| _{L^2({\operatorname{supp}\> {\widehat}}{U_3[\phi ]})}}\sim r^3N^{-1-2s}T. \end{split}$$]{} For $U_2$ the contribution from $u^2$ and $\bar{u}^2$ has the Fourier support in high frequency, thus being dominated by the contribution from $u\bar{u}$. 
By Lemma \[lem:U\_p\] (ii), we have [$$\begin{split} {\| U_2[\phi ](T) \| _{H^s}}{\gtrsim}r^2N^{-\frac{1}{2}-2s}T \end{split}$$]{} if $0<T\ll 1$. We set $r=(\log N)^{-1}$ and $T=(\log N)^3N^{2s+\frac{1}{2}}$ as before (Case 7 in the single-term case), then it holds that $T\ll 1$, $\rho =(\log N)^{\frac{1}{2}}N^{-\frac{1}{4}}\ll 1$ and [$$\begin{split} {\| u(T) \| _{H^s}}\ge C^{-1}r^2N^{-\frac{1}{2}-2s}T-C\big( r+r^3N^{-1-2s}T+r^4N^{-\frac{3}{2}-4s}T^{\frac{3}{2}}\big) {\gtrsim}\log N\gg 1 \end{split}$$]{} for $s\in [-\frac{3}{4},-\frac{1}{4})$, which gives the claimed norm inflation. This concludes the proof of Theorem \[thm:main\]. Norm inflation with infinite loss of regularity {#sec:niilr} =============================================== In this section, we derive norm inflation with infinite loss of regularity for the problem with smooth gauge-invariant nonlinearities: $$\label{nuNLS} \left\{ \begin{array}{@{\,}r@{\;}l} i{\partial}_tu+\Delta u&=\pm |u|^{2\nu}u,\qquad t\in [0,T],\quad x\in Z={{\mathbb{R}}}^{d-d_2}\times {{\mathbb{T}}}^{d_2},\\ u(0,x)&=\phi (x), \end{array} \right.$$ where $\nu$ is a positive integer. The initial value problem on ${{\mathbb{R}}}^d$ is invariant under the scaling $u(t,x)\mapsto {\lambda}^{\frac{1}{\nu}}u({\lambda}^2t,{\lambda}x)$, and the critical Sobolev index is $s_c(d,2\nu +1)=\frac{d}{2}-\frac{1}{\nu}$, which is non-negative except for the case $d=\nu =1$. \[prop:niilr\] We assume the following condition on $s$: - If $d=\nu =1$, then $s<-\frac{2}{3}$; - if $d\ge 2$, $\nu =1$ and $d_2=0,1$ (i.e., $Z={{\mathbb{R}}}^d$ or ${{\mathbb{R}}}^{d-1}\times {{\mathbb{T}}}$), then $s<-\frac{1}{3}$; - if $d\ge 1$, $\nu \ge 2$ and $d_2=0$ (i.e., $Z={{\mathbb{R}}}^d$), then $s<-\frac{1}{2\nu +1}$; - otherwise, $s<0$. 
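For the reader's convenience, the value of $s_c$ can be read off from this scaling: if $u$ solves the equation on ${{\mathbb{R}}}^d$, then so does $u_{\lambda}(t,x)={\lambda}^{\frac{1}{\nu}}u({\lambda}^2t,{\lambda}x)$, and a change of variables on the Fourier side gives [$$\begin{split} {\big\| u_{\lambda}(0) \big\| _{\dot{H}^s({{\mathbb{R}}}^d)}}={\lambda}^{s+\frac{1}{\nu}-\frac{d}{2}}{\big\| u(0) \big\| _{\dot{H}^s({{\mathbb{R}}}^d)}}, \end{split}$$]{} so the homogeneous $\dot{H}^s$ norm is invariant precisely when $s=\frac{d}{2}-\frac{1}{\nu}=s_c(d,2\nu +1)$; the inequality $\frac{d}{2}<\frac{1}{\nu}$, i.e., $d\nu <2$, holds only for $d=\nu =1$, where $s_c=-\frac{1}{2}$.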
Then, NI$_s$ with infinite loss of regularity occurs for the initial value problem : For any ${\delta}>0$ there exist $\phi \in H^{\infty}$ and $T>0$ satisfying ${\| \phi \| _{H^s}}<{\delta}$, $0<T<{\delta}$ such that the corresponding smooth solution $u$ to exists on $[0,T]$ and ${\| u(T) \| _{H^{\sigma}}}>{\delta}^{-1}$ for *all* ${\sigma}\in {{\mathbb{R}}}$.[^10] \(i) The proofs of Theorems \[thm:main0\] and \[thm:main\] are easily adapted to yield NI$_s$ with *finite* loss of regularity in most cases. However, we only consider here *infinite* loss of regularity. \(ii) The coefficient of the nonlinearity is not important in the proof, and the same result holds for any non-zero complex constant. \(iii) To show infinite loss of regularity, we need to use the nonlinear interactions of very high frequencies which create a significant output in low frequency $\{ |\xi |\le 1\}$. Except for the case $d=\nu =1$, there are such interactions that are also *resonant*; i.e., there exist non-zero vectors $k_1,\dots ,k_{2\nu +1}\in {\mathbb{Z}}^{d}$ satisfying [$$\begin{split} \sum _{j=0}^{\nu}k_{2j+1}=\sum _{l=1}^{\nu}k_{2l},\qquad \sum _{j=0}^{\nu}|k_{2j+1}|^2=\sum _{l=1}^{\nu}|k_{2l}|^2. \end{split}$$]{} This is also the key ingredient in the proof of the previous results [@CDS12; @CK17], and hence the restriction on the range of $s$ in Proposition \[prop:niilr\] is the same as that in [@CDS12; @CK17]. A complete characterization of the resonant set [$$\begin{split} {\mathcal{R}}_{d,\nu}(k):={\big\{ \, (k_m)_{m=1}^{2\nu +1}\in ({\mathbb{Z}}^d)^{2\nu +1} \, \big| \, k=\sum _{m=1}^{2\nu +1}(-1)^{m+1}k_m,\, |k|^2=\sum _{m=1}^{2\nu +1}(-1)^{m+1}|k_m|^2 \, \big\}} \end{split}$$]{} (for $k\in {\mathbb{Z}}^d$ given) is easily obtained in the $\nu =1$ case; see [@CK17 Proposition 4.1] for instance. In Proposition \[prop:char\] below, we will provide a complete characterization of the set ${\mathcal{R}}_{1,2}(0)$, which may be of interest in itself. 
Since $(k_m)_{m=1}^5\in {\mathcal{R}}_{1,2}(k)$ if and only if $(k_m-k)_{m=1}^5\in {\mathcal{R}}_{1,2}(0)$, we have a characterization of ${\mathcal{R}}_{1,2}(k)$ for any $k\in {\mathbb{Z}}$ as well. However, in the proof of Proposition \[prop:niilr\] we only need the fact that ${\mathcal{R}}_{d,\nu}(0)$ has an element consisting of non-zero vectors in ${\mathbb{Z}}^d$, except for $(d,\nu )=(1,1)$. We follow the proof of Theorem \[thm:main0\] but take different initial data to show infinite loss of regularity. Let $N\gg 1$ be a large positive integer and define $\phi \in H^\infty (Z)$ by [$$\begin{split} {\widehat}{\phi}:=rN^{-s}\chi _{\Sigma +Q_1}, \end{split}$$]{} where $r=r(N)>0$ is a constant to be chosen later, $Q_1:=[-\tfrac{1}{2},\tfrac{1}{2})^d$, and [$$\begin{gathered} \Sigma := \begin{cases} {\{ N,2N\}} &\text{if $d=\nu =1$},\\ {\{ Ne_{d-1},\,Ne_d,\,N(e_{d-1}+e_d)\}} &\text{if $d\ge 2$, $\nu =1$},\\ {\{ Ne_d,\,3Ne_d,\,4Ne_d\}} &\text{if $d\ge 1$, $\nu \ge 2$}, \end{cases}\\ e_d:=(\underbrace{0,\dots ,0}_{d-1},1),\qquad e_{d-1}:=(\underbrace{0,\dots ,0}_{d-2},1,0). \end{gathered}$$]{} The argument in Section \[sec:proof0\] (with $A=1$) shows the following: - The unique solution $u=u[\phi ]$ to exists on $[0,T]$ and has the power series expansion $u=\sum _{k=1}^{\infty}U_k[\phi ]$ if $\rho :=rN^{-s}T^{\frac{1}{2\nu}}\ll 1$. - ${\| U_1[\phi ](T) \| _{H^s}}={\| \phi \| _{H^s}}\sim r$ for any $T\ge 0$. - ${\| U_k[\phi ](T) \| _{H^s}}\le C\rho ^{k-1}rN^{-s}$ for any $T\ge 0$ and $k\ge 2$. 
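As a sanity check on the choice of $\Sigma$, one can verify numerically that in each case the centers of the cells admit a frequency combination with output $\xi =0$ for which the phase $\Phi =|\xi |^2-\sum _j|\xi _{2j+1}|^2+\sum _l|\xi _{2l}|^2$ is as small as claimed; perturbations within a cell only contribute the lower-order corrections to $\Phi$. The script below fixes the illustrative values $N=100$, $d=2$, $\nu =2$; the specific combinations are chosen by inspection:

```python
N = 100  # illustrative large frequency parameter (any N >> 1 works)

def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def output_and_phase(odd, even):
    # output frequency: xi = sum of odd-slot entries - sum of even-slot entries;
    # phase: Phi = |xi|^2 - sum_j |xi_{2j+1}|^2 + sum_l |xi_{2l}|^2
    d = len(odd[0])
    xi = tuple(sum(v[i] for v in odd) - sum(v[i] for v in even) for i in range(d))
    Phi = dot(xi, xi) - sum(dot(v, v) for v in odd) + sum(dot(v, v) for v in even)
    return xi, Phi

# d = nu = 1, Sigma = {N, 2N}: combination (xi1, xi2, xi3) = (N, 2N, N)
xi, Phi = output_and_phase([(N,), (N,)], [(2 * N,)])
assert xi == (0,) and Phi == 2 * N**2        # non-resonant: Phi = O(N^2)

# d = 2, nu = 1, Sigma = {N e_1, N e_2, N(e_1 + e_2)}
xi, Phi = output_and_phase([(N, 0), (0, N)], [(N, N)])
assert xi == (0, 0) and Phi == 0             # exactly resonant at the centers

# nu = 2 (quintic), Sigma = {N, 3N, 4N}: odd slots (N, N, 4N), even slots (3N, 3N)
xi, Phi = output_and_phase([(N,), (N,), (4 * N,)], [(3 * N,), (3 * N,)])
assert xi == (0,) and Phi == 0               # "almost resonant" interaction

# a non-resonant quintic combination with zero output: odd (3N, N, N), even (4N, N)
xi, Phi = output_and_phase([(3 * N,), (N,), (N,)], [(4 * N,), (N,)])
assert xi == (0,) and abs(Phi) == 6 * N**2   # |Phi| ~ N^2
print("frequency combinations behave as claimed")
```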
For the first nonlinear term $U_{2\nu +1}[\phi]$, we observe that [$$\begin{split} |{\widehat}{U_{2\nu +1}[\phi ]}(T,\xi )|&=c(rN^{-s})^{2\nu +1}\Big| \int _\Gamma \prod _{m=1}^{2\nu+1}\chi _{\Sigma +Q_1}(\xi _m) \Big( \int _0^Te^{it\Phi}\,dt\Big) \,d\xi _1\dots d\xi _{2\nu +1}\Big| , \end{split}$$]{} where [$$\begin{split} \Gamma :={\big\{ \, (\xi _1,\dots ,\xi _{2\nu +1}) \, \big| \, \sum _{j=0}^\nu \xi _{2j+1}-\sum _{l=1}^\nu \xi _{2l}=\xi \, \big\}},\quad \Phi :=|\xi |^2-\sum _{j=0}^\nu |\xi _{2j+1}|^2+\sum _{l=1}^\nu |\xi _{2l}|^2. \end{split}$$]{} Now, we restrict $\xi$ to the low-frequency region $Q_{1/2}$. If $d=\nu =1$, then we have [$$\begin{split} &\chi _{Q_{1/2}}(\xi )\int _\Gamma \prod _{m=1}^{2\nu+1}\chi _{\Sigma +Q_1}(\xi _m) \int _0^Te^{it\Phi}\,dt\\ &=2\chi _{Q_{1/2}}(\xi )\int _\Gamma \chi _{N+Q_1}(\xi _1)\chi _{2N+Q_1}(\xi _2)\chi _{N+Q_1}(\xi _3)\int _0^Te^{it\Phi}\,dt, \end{split}$$]{} and $\Phi =O(N^2)$ in the integral. If $d\ge 2$ and $\nu =1$, we have [$$\begin{split} &\chi _{Q_{1/2}}(\xi )\int _\Gamma \prod _{m=1}^{2\nu+1}\chi _{\Sigma +Q_1}(\xi _m) \int _0^Te^{it\Phi}\,dt\\ &=2\chi _{Q_{1/2}}(\xi )\int _\Gamma \chi _{Ne_{d-1}+Q_1}(\xi _1)\chi _{N(e_{d-1}+e_d)+Q_1}(\xi _2)\chi _{Ne_d+Q_1}(\xi _3)\int _0^Te^{it\Phi}\,dt, \end{split}$$]{} and the resonant property implies that [$$\begin{split} \Phi =\begin{cases} O(N) &\text{if $d_2=0,1$},\\ O(1) &\text{if $d_2\ge 2$} \end{cases} \end{split}$$]{} in the integral. Therefore, in these cases we have the following lower bound: [$$\label{est:A} \begin{split} {\big\| {\widehat}{U_{2\nu +1}[\phi ]}(T) \big\| _{L^2(Q_{1/2})}}\ge cT(rN^{-s})^{2\nu +1}=c\rho ^{2\nu}rN^{-s} \end{split}$$]{} [$$\begin{split} \text{for any}\quad 0<T\ll \begin{cases} N^{-2} &\text{if $d=\nu =1$},\\ N^{-1} &\text{if $d\ge 2$, $\nu =1$, $d_2=0,1$},\\ ~1 &\text{if $d\ge 2$, $\nu =1$, $d_2\ge 2$}. \end{cases} \end{split}$$]{} The quintic and higher cases are slightly different. 
On one hand, there are “almost resonant” interactions such as [$$\begin{split} \prod _{j=1,3}\chi _{Ne_d+Q_1}(\xi _j)\prod _{l=2,4}\chi _{3Ne_d+Q_1}(\xi _l)\prod _{m=5}^{2\nu +1}\chi _{4Ne_d+Q_1}(\xi _m), \end{split}$$]{} for which it holds [$$\begin{split} \Phi =\begin{cases} O(N) &\text{if $d_2=0$},\\ O(1) &\text{if $d_2\ge 1$} \end{cases} \end{split}$$]{} in the integral. On the other hand, some non-resonant interactions such as [$$\begin{split} \chi _{3Ne_d+Q_1}(\xi _1)\chi _{4Ne_d+Q_1}(\xi _2)\prod _{m=3}^{2\nu +1}\chi _{Ne_d+Q_1}(\xi _m) \end{split}$$]{} also create low-frequency modes, with $|\Phi |\sim N^2$ in the integral. Hence, if we choose $T>0$ as [$$\begin{split} N^{-2}\ll T\ll \begin{cases} N^{-1} &\text{if $d\ge 1$, $\nu \ge 2$, $d_2=0$},\\ ~1 &\text{if $d\ge 1$, $\nu \ge 2$, $d_2\ge 1$}, \end{cases} \end{split}$$]{} then [$$\begin{split} \begin{cases} \Re \Big( \displaystyle\int _0^Te^{it\Phi}\,dt\Big) \ge \frac{1}{2}T &\text{for ``almost resonant'' interactions},\\[10pt] \Big| \displaystyle\int _0^Te^{it\Phi}\,dt\Big| \le CN^{-2}\ll T &\text{for non-resonant interactions}, \end{cases} \end{split}$$]{} so that no cancellation occurs among “almost resonant” interactions, which dominate the non-resonant interactions. Therefore, we have for such $T$ as above. Finally, we set [$$\begin{split} \begin{cases} r:=N^{s+\frac{2}{3}}\log N,\quad T:=N^{-2}(\log N)^{-1} &\text{if $d=\nu =1$},\\ r:=N^{s+\frac{1}{2\nu +1}}\log N,\quad T:=N^{-1}(\log N)^{-1} &\text{if $d\ge 2$, $\nu =1$, $d_2=0,1$}\\[-5pt] &\quad \text{or $d\ge 1$, $\nu \ge 2$, $d_2=0$},\\ r:=N^s\log N,\quad T:=(\log N)^{-(2\nu +\frac{1}{2})} &\text{otherwise}. 
\end{cases} \end{split}$$]{} We see that, under the assumption on $s$, ${\| \phi \| _{H^s}}\sim r\ll 1$, $T\ll 1$, $\rho \ll 1$, and [$$\begin{split} {\big\| {\widehat}{u}(T) \big\| _{L^2(Q_{1/2})}}&\ge c{\big\| {\widehat}{U_{2\nu +1}[\phi ]}(T) \big\| _{L^2(Q_{1/2})}}-C\Big( {\big\| U_1[\phi ](T) \big\| _{H^s}}+\sum _{l\ge 2} {\big\| U_{2\nu l+1}[\phi ](T) \big\| _{H^s}}\Big) \\ &\ge c{\big\| {\widehat}{U_{2\nu +1}[\phi ]}(T) \big\| _{L^2(Q_{1/2})}}\gg 1. \end{split}$$]{} We conclude the proof by letting $N\to {\infty}$. At the end of this section, we give a characterization of resonant interactions creating the zero mode in the one-dimensional quintic case. \[prop:char\] The quintuplet $(k_1,\dots ,k_5)\in {\mathbb{Z}}^5$ satisfies [$$\label{cond:res} \begin{split} k_1+k_3+k_5=k_2+k_4,\qquad k_1^2+k_3^2+k_5^2=k_2^2+k_4^2 \end{split}$$]{} if and only if [$$\label{char} \begin{split} {\{ k_1,\,k_3,\,k_5\}}&={\{ ap,\,bq,\,(a+b)(p+q)\}},\\ {\{ k_2,\,k_4\}}&={\{ ap+(a+b)q,\,(a+b)p+bq\}} \end{split}$$]{} for some $a,b,p,q\in {\mathbb{Z}}$. \(i) Taking $a=p=b=q=1$ in , we have the quintuplet $(1,3,1,3,4)$ which has appeared in the proof of Proposition \[prop:niilr\] above. Also, with $(a,b,p,q)=(-1,2,-2,1)$ we have $(2,3,2,0,-1)$, which gives a resonant interaction for quartic nonlinearities $u^3\bar{u}$, $u\bar{u}^3$ exploited in the proof of Lemma \[lem:U\_p\] (iv) above. \(ii) The quintuplets $(pq,-q^2,-pq,p^2,p^2-q^2)$ given in [@CK17 Lemma 4.2] can be obtained by setting $a=-q$, $b=p$ in . The *if* part is verified by a direct computation, so we show the *only if* part. Let $(k_1,\dots ,k_5)\in {\mathbb{Z}}^5$ satisfy . We start with observing that at least one of $k_1,k_3,k_5$ is an even integer; otherwise, we would have [$$\begin{split} k_1^2+k_3^2+k_5^2\equiv 3\not\equiv 1\equiv k_2^2+k_4^2\mod 4, \end{split}$$]{} contradicting . 
Without loss of generality, we assume $k_5$ to be even and set [$$\begin{split} n_j:=k_j-\tfrac{1}{2}k_5\in {\mathbb{Z}}\quad (j=1,\dots ,5),\qquad n_6:=-\tfrac{1}{2}k_5\in {\mathbb{Z}}. \end{split}$$]{} From we see that [$$\begin{split} n_1+n_3+n_5=n_2+n_4+n_6,\quad n_1^2+n_3^2=n_2^2+n_4^2,\quad n_5=-n_6. \end{split}$$]{} The second equality implies that two vectors $(n_1-n_2,n_3-n_4), (n_1+n_2,n_3+n_4)\in {\mathbb{Z}}^2$ are orthogonal to each other (unless one of them is zero), which allows us to write [$$\label{id:A} \begin{split} (n_1-n_2,n_3-n_4)={\alpha}(q,p),\quad (n_1+n_2,n_3+n_4)={\beta}(-p,q) \end{split}$$]{} with ${\alpha},{\beta},p,q\in {\mathbb{Z}}$. Note that $n_1,\dots ,n_4$ are then written as [$$\begin{split} n_1=\tfrac{1}{2}({\alpha}q-{\beta}p),\qquad n_2=-\tfrac{1}{2}({\alpha}q+{\beta}p),\\ n_3=\tfrac{1}{2}({\alpha}p+{\beta}q),\qquad n_4=-\tfrac{1}{2}({\alpha}p-{\beta}q), \end{split}$$]{} and that [$$\begin{split} n_5=-n_6=\tfrac{1}{2}(n_5-n_6)=-\tfrac{1}{2}\big\{ (n_1-n_2)+(n_3-n_4)\big\} =-\tfrac{1}{2}{\alpha}(p+q). \end{split}$$]{} Recalling $k_j=n_j-n_6$ ($j=1,\dots ,5$), we have [$$\label{ks} \begin{split} &k_1=-\tfrac{1}{2}({\alpha}+{\beta})p,\qquad k_3=-\tfrac{1}{2}({\alpha}-{\beta})q,\qquad k_5=-{\alpha}(p+q),\\ &\qquad k_2=-\tfrac{1}{2}({\alpha}+{\beta})p-{\alpha}q,\qquad k_4=-\tfrac{1}{2}({\alpha}-{\beta})q-{\alpha}p. \end{split}$$]{} We next claim that the integers ${\alpha},{\beta},p,q$ can be chosen in so that ${\alpha}$ and ${\beta}$ have the same parity. To see this, we notice that the four integers $n_1\pm n_2$, $n_3\pm n_4$ are of the same parity, since all of [$$\begin{gathered} (n_1+n_2)+(n_1-n_2)=2n_1,\qquad (n_3+n_4)+(n_3-n_4)=2n_3,\\ (n_1-n_2)+(n_3-n_4)=n_6-n_5=2n_6 \end{gathered}$$]{} are even. If $n_1\pm n_2$, $n_3\pm n_4$ are odd integers, then by ${\alpha}$ and ${\beta}$ must be odd. So, we assume that they are all even. If one of $p,q$ is odd, then both ${\alpha}$ and ${\beta}$ must be even. 
If both $p$ and $q$ are even, we replace $({\alpha},{\beta},p,q)$ with $(2{\alpha},2{\beta},p/2,q/2)$ to obtain another expression with both ${\alpha}$ and ${\beta}$ being even. Hence, the claim is proved. Finally, we set $a:=-\frac{1}{2}({\alpha}+{\beta})$, $b:=-\frac{1}{2}({\alpha}-{\beta})$, both of which are integers. Inserting them into , we find the expression . Norm inflation for 1D cubic NLS at the critical regularity {#sec:ap} ========================================================== In this section, we consider the particular equation $$\label{cNLS} \left\{ \begin{array}{@{\,}r@{\;}l} i{\partial}_tu+{\partial}_x^2 u&=\pm |u|^2u,\qquad t\in [0,T],\quad x\in Z={{\mathbb{R}}}\text{~or~} {{\mathbb{T}}},\\ u(0,x)&=\phi (x). \end{array} \right.$$ We will show the inflation of the Besov-type scale-critical Sobolev and Fourier-Lebesgue norms with an additional logarithmic factor: For $1\le p<{\infty}$, $1\le q\le {\infty}$ and ${\alpha}\in {{\mathbb{R}}}$, define the $D^{[{\alpha}]}_{p,q}$-norm by [$$\begin{split} {\big\| f \big\| _{D^{[{\alpha}]}_{p,q}}}:=\Big\| N^{-\frac{1}{p}}{{\langle \log N \rangle }}^{\alpha}{\big\| {\widehat}{f} \big\| _{L^p_\xi (\{ N\le {{\langle \xi \rangle }}<2N\} )}}\Big\| _{\ell ^q_N(2^{{\mathbb{Z}}_{\ge 0}})} . \end{split}$$]{} We also define the $D^{s}_{p,q}$-norm for $s\in {{\mathbb{R}}}$ by [$$\begin{split} {\big\| f \big\| _{D^{s}_{p,q}}}:=\Big\| N^{s}{\big\| {\widehat}{f} \big\| _{L^p_\xi (\{ N\le {{\langle \xi \rangle }}<2N\} )}}\Big\| _{\ell ^q_N(2^{{\mathbb{Z}}_{\ge 0}})} . \end{split}$$]{} \(i) We see that $D^{[0]}_{2,q}=D^{-\frac{1}{2}}_{2,q}=B^{-\frac{1}{2}}_{2,q}$ (Besov norm) and $D^{[0]}_{p,p}={{\mathcal{F}}}L^{-\frac{1}{p},p}$ (Fourier-Lebesgue norm). In the case of $Z={{\mathbb{R}}}$, the homogeneous version of $D^{[0]}_{p,q}$ is scale invariant for any $p,q$. 
\(ii) We have the embeddings $D^{[{\alpha}]}_{p_2,q}\hookrightarrow D^{[{\alpha}]}_{p_1,q}$ if $p_1\le p_2$, $D^{[{\alpha}]}_{p,q_1}\hookrightarrow D^{[{\alpha}]}_{p,q_2}$ if $q_1\le q_2$. \(iii) We will not consider the space $D^{[{\alpha}]}_{p,q}$ with $p={\infty}$ here, since our argument seems valid only in the space of negative regularity. \[prop:A\] For the Cauchy problem , norm inflation occurs in the following cases: \(i) In $D^{[{\alpha}]}_{p,q}$ for any $1\le q\le {\infty}$ and ${\alpha}<\frac{1}{2q}$, if $\frac{3}{2}\le p<{\infty}$. \(ii) In $D^{[{\alpha}]}_{p,q}$ and $D^s_{p,q}$ for any $1\le q\le {\infty}$, ${\alpha}\in {{\mathbb{R}}}$ and $s<-\frac{2}{3}$, if $1\le p<\frac{3}{2}$. \(i) If $\frac{3}{2}\le p<{\infty}$ and $1\le q<{\infty}$, Proposition \[prop:A\] shows inflation of a “logarithmically subcritical” norm (i.e., $D^{[{\alpha}]}_{p,q}$ with ${\alpha}>0$). Moreover, if $1\le p<\frac{3}{2}$ we show norm inflation in $D^{s}_{p,q}$ for subcritical regularities $-\frac{2}{3}>s>-\frac{1}{p}$. However, for $q={\infty}$ and $p\ge \frac{3}{2}$, inflation is not detected even in the critical norm $D^{[0]}_{p,{\infty}}$. \(ii) In [@KVZ17p Theorem 4.7], a global-in-time a priori bound was established in $D^{[\frac{3}{2}]}_{2,2}$ and $D^{[2]}_{2,{\infty}}$. Recently, Oh and Wang [@OW18p] proved a global-in-time bound in ${{\mathcal{F}}}L^{0,p}$ for $Z={{\mathbb{T}}}$ and $2\le p<{\infty}$. There are still some gaps between these results and ours. In fact, Proposition \[prop:A\] shows inflation of the $D^{[\frac{1}{4}-]}_{2,2}$ and $D^{[0-]}_{2,{\infty}}$ norms, as well as of a norm only logarithmically stronger than ${{\mathcal{F}}}L^{-\frac{1}{p},p}$ for $p\ge 2$. \(iii) Guo [@G17] also studied on ${{\mathbb{R}}}$ in “almost critical” spaces.
It would be interesting to compare our result with [@G17 Theorem 1.8], where he showed well-posedness (and hence a priori bound) in some Orlicz-type generalized modulation spaces which are barely smaller than the critical one $M_{2,{\infty}}$. There is no conflict between these results, because the function spaces for which norm inflation is claimed in Proposition \[prop:A\] are not included in $M_{2,{\infty}}$ due to negative regularity. Note also that the function spaces in [@G17 Theorem 1.8] admit the initial data $\phi$ of the form ${\widehat}{\phi}(\xi )=[\log (2+|\xi |)]^{-\gamma}$ only for ${\gamma}>2$ (see [@G17 Remark 1.9]), while it belongs to $D_{p,q}^{[{\alpha}]}$ if ${\gamma}>{\alpha}+\frac{1}{q}$. \(iv) In contrast to the results in [@KVZ17p; @OW18p], complete integrability of the equation will play no role in our argument. In particular, Proposition \[prop:A\] still holds if we replace the nonlinearity in with any of the other cubic terms $u^3,\bar{u}^3,u\bar{u}^2$ or any linear combination of them with complex coefficients. We follow the argument in Section \[sec:proof0\]. For $1\le \rho <{\infty}$ and $A>0$, let $M^\rho _A$ be the rescaled modulation space defined by the norm [$$\begin{split} {\big\| f \big\| _{M^\rho _A}}:=\sum _{\xi \in A{\mathbb{Z}}}{\big\| {\widehat}{f} \big\| _{L^\rho (\xi +I_A)}},\qquad I_A:=[ -\tfrac{A}{2},\tfrac{A}{2}) . \end{split}$$]{} It is easy to see that $M_A^\rho $ is a Banach algebra with a product estimate: [$$\begin{split} {\big\| fg \big\| _{M_A^\rho}}\le CA^{1-\frac{1}{\rho}}{\big\| f \big\| _{M^\rho _A}}{\big\| g \big\| _{M^\rho _A}}. \end{split}$$]{} Mimicking the proof of Lemma \[lem:U\_k\], we see that the operators $U_k$ defined as in Definition \[defn:U\_k\] satisfy [$$\label{est:A1} \begin{split} {\big\| U_k[\phi ](t) \big\| _{M_A^\rho}}\le t^{\frac{k-1}{2}}\big( CA^{1-\frac{1}{\rho}}{\| \phi \| _{M_A^\rho}}\big) ^{k-1}{\| \phi \| _{M_A^\rho}},\qquad t\ge 0,\quad k\ge 1. 
\end{split}$$]{} We also recall that from Corollary \[cor:lwp\], the power series expansion of the solution map $u[\phi ]=\sum _{k\ge 1}U_k[\phi ]$ is verified in $C([0,T];M_A^2)$ whenever [$$\label{cond:A1} \begin{split} 0<T\ll \big( A^{\frac{1}{2}}{\| \phi \| _{M_A^2}}\big) ^{-2}. \end{split}$$]{} For the proof of norm inflation in $D^{[{\alpha}]}_{p,q}$, we restrict the initial data $\phi$ to those of the form ; for given $N\gg 1$, we set [$$\begin{split} {\widehat}{\phi}:=rA^{-\frac{1}{p}}N^{\frac{1}{p}}\chi_{(N+I_A)\cup (2N+I_A)}, \end{split}$$]{} where $r>0$ and $1\ll A\ll N$ will be specified later according to $N$. Then, since ${\| \phi \| _{M_A^2}}\sim rA^{\frac{1}{2}-\frac{1}{p}}N^{\frac{1}{p}}$, the condition is equivalent to [$$\label{cond:A2} \begin{split} 0<r(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}\ll 1. \end{split}$$]{} Moreover, it holds that [$$\label{est:A2} \begin{split} {\big\| U_1[\phi ](T) \big\| _{D^{[{\alpha}]}_{p,q}}}={\big\| \phi \big\| _{D^{[{\alpha}]}_{p,q}}}\sim r(\log N)^{\alpha},\qquad T\ge 0, \end{split}$$]{} and similarly to Lemma \[lem:U\_p\] (i), that [$$\label{est:A3} \begin{split} {\big\| U_3[\phi ](T) \big\| _{D^{[{\alpha}]}_{p,q}}}&\ge cT\big( rA^{-\frac{1}{p}}N^{\frac{1}{p}}\big) ^3A^2{\big\| {{\mathcal{F}}}^{-1}\chi _{I_{A/2}} \big\| _{D^{[{\alpha}]}_{p,q}}},\qquad 0<T\le \tfrac{1}{100}N^{-2},\\ &=c\Big[ r(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}\Big] ^2r\Big( \frac{A}{N}\Big) ^{-\frac{1}{p}}f_{p,q}^{\alpha}(A), \end{split}$$]{} where [$$\begin{split} f_{p,q}^{\alpha}(A):={\big\| {{\mathcal{F}}}^{-1}\chi _{I_{A/2}} \big\| _{D^{[{\alpha}]}_{p,q}}}\sim \begin{cases} (\log A)^{{\alpha}+\frac{1}{q}}, &{\alpha}>-\frac{1}{q},\\ (\log \log A)^{\frac{1}{q}}, &{\alpha}=-\frac{1}{q},\\ ~1, &{\alpha}<-\frac{1}{q}.\end{cases} \end{split}$$]{} For estimating $U_{2l+1}[\phi]$, $l\ge 2$ in $D^{[{\alpha}]}_{p,q}$, we first observe that [$$\begin{split} {\big\| U_k[\phi ](T) \big\| 
_{D^{[{\alpha}]}_{p,q}}}\le {\big\| {{\mathcal{F}}}^{-1}\chi _{{\operatorname{supp}\> {\widehat}{U_k[\phi ]}(T)}} \big\| _{D^{[{\alpha}]}_{p,q}}}{\big\| {\widehat}{U_k[\phi ]}(T) \big\| _{L^{\infty}}}. \end{split}$$]{} A simple computation yields that [$$\begin{split} {\big\| {{\mathcal{F}}}^{-1}\chi _\Omega \big\| _{D^{[{\alpha}]}_{p,q}}}\le C{\big\| {{\mathcal{F}}}^{-1}\chi _{I_{|\Omega |}} \big\| _{D^{[{\alpha}]}_{p,q}}} \end{split}$$]{} for any measurable set $\Omega \subset {{\mathbb{R}}}$ of finite measure. From Lemma \[lem:supp\], we have [$$\begin{split} \big| {\operatorname{supp}\> {\widehat}{U_k[\phi ]}(T)}\big| \le C^kA, \qquad T\ge 0,\quad k\ge 1, \end{split}$$]{} and hence, [$$\begin{split} {\big\| {{\mathcal{F}}}^{-1}\chi _{{\operatorname{supp}\> {\widehat}{U_k[\phi ]}(T)}} \big\| _{D^{[{\alpha}]}_{p,q}}}\le C{\big\| {{\mathcal{F}}}^{-1}\chi _{I_{C^kA}} \big\| _{D^{[{\alpha}]}_{p,q}}}\le C^kf_{p,q}^{\alpha}(A). \end{split}$$]{} Moreover, similarly to Lemma \[lem:U\_k\_H\^s\] (ii), we use Young’s inequality, and Lemma \[lem:a\_k\] to obtain [$$\begin{split} {\big\| {\widehat}{U_k[\phi ]}(T) \big\| _{L^{\infty}}}&\le \sum _{{\begin{smallmatrix} k_1,k_2,k_3\ge 1\\k_1+k_2+k_3=k \end{smallmatrix}}}\int _0^T{\big\| {\widehat}{U_{k_1}[\phi ]}(t) \big\| _{M^{\frac{3}{2}}_A}}{\big\| {\widehat}{U_{k_2}[\phi ]}(t) \big\| _{M^{\frac{3}{2}}_A}}{\big\| {\widehat}{U_{k_3}[\phi ]}(t) \big\| _{M^{\frac{3}{2}}_A}}\,dt\\ &\le \int _0^Tt^{\frac{k-3}{2}}\,dt\cdot \big( CrA^{1-\frac{1}{p}}N^{\frac{1}{p}}\big) ^{k-3}\big( CrA^{\frac{2}{3}-\frac{1}{p}}N^{\frac{1}{p}}\big) ^{3}\\ &\le C\big( CrT^{\frac{1}{2}}A^{1-\frac{1}{p}}N^{\frac{1}{p}}\big) ^{k-1}rA^{-\frac{1}{p}}N^{\frac{1}{p}},\qquad T\ge 0,\quad k\ge 3. 
\end{split}$$]{} Hence, we have [$$\label{est:A4} \begin{split} {\big\| U_k[\phi ](T) \big\| _{D^{[{\alpha}]}_{p,q}}}\le C\Big[ Cr(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}\Big] ^{k-1}r\Big( \frac{A}{N}\Big) ^{-\frac{1}{p}}f_{p,q}^{\alpha}(A),\quad T\ge 0,~~k\ge 3. \end{split}$$]{} From –, we only need to check if there exist $r,A,T$ such that [$$\label{cond:A3} \begin{split} 1\ll A\ll N,\qquad r\ll (\log N)^{-{\alpha}} ,\qquad (TN^2)\le \tfrac{1}{100},\qquad\qquad \\ \Big[ r(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}\Big] ^2\ll 1 \ll \Big[ r(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}\Big] ^2r\Big( \frac{A}{N}\Big) ^{-\frac{1}{p}}f^{\alpha}_{p,q}(A). \end{split}$$]{} When $1\le p<\frac{3}{2}$, it holds that $2(1-\frac{1}{p})\ge 0>2(1-\frac{1}{p})-\frac{1}{p}$. Hence, we may choose [$$\begin{gathered} r=(\log N)^{\min \{ -{\alpha},0\} -1},\quad A=N^{\frac{1}{2}},\quad T=\tfrac{1}{100}N^{-2}, \end{gathered}$$]{} which clearly satisfies . (Note that $f^{\alpha}_{p,q}(A){\gtrsim}1$ for any $p,q,{\alpha}$.) If $\frac{3}{2}\le p<{\infty}$, would imply that [$$\begin{split} 1 \ll \Big[ r(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}\Big] ^2r\Big( \frac{A}{N}\Big) ^{-\frac{1}{p}}f^{\alpha}_{p,q}(A){\lesssim}r^3f^{\alpha}_{p,q}(A)\ll (\log N)^{-3{\alpha}}f^{\alpha}_{p,q}(A). \end{split}$$]{} In particular, when ${\alpha}>-\frac{1}{q}$ this condition requires [$$\begin{split} (\log N)^{3{\alpha}}\ll (\log N)^{{\alpha}+\frac{1}{q}}, \end{split}$$]{} which shows the necessity of the restriction ${\alpha}<\frac{1}{2q}$ in our argument. We now see the possibility of choosing $r,A,T$ with the condition in the following two cases separately: (a) If $1\le q<{\infty}$ and $0\le {\alpha}<\frac{1}{2q}$, we may take for instance [$$\begin{split} r=(\log N)^{-{\alpha}}(\log \log N)^{-1},\quad A=N(\log \log N)^{-1},\quad T=\tfrac{1}{100}N^{-2}. 
\end{split}$$]{} (Note that $f^{\alpha}_{p,q}(A)\sim f^{\alpha}_{p,q}(N)\sim (\log N)^{{\alpha}+\frac{1}{q}}$.) (b) If ${\alpha}<0$, we take [$$\begin{split} r=(\log N)^{-{\alpha}}(\log \log N)^{-1},\quad A=N(\log N)^{{\alpha}(1-\frac{1}{p})^{-1}},\quad T=\tfrac{1}{100}N^{-2}. \end{split}$$]{} In both cases we easily show . Finally, we assume $1\le p<\frac{3}{2}$ and prove norm inflation in $D^s_{p,q}$ for $s<-\frac{2}{3}$. We use the initial data $\phi$ of the form [$$\begin{split} {\widehat}{\phi}:=rN^{-s}\chi_{[N,N+1]\cup [2N,2N+1]}. \end{split}$$]{} Then, the condition with $A=1$ is equivalent to [$$\begin{split} 0<T^{\frac{1}{2}}rN^{-s}=(TN^2)^{\frac{1}{2}}rN^{-s-1}\ll 1. \end{split}$$]{} Repeating the argument above we also verify that [$$\begin{gathered} {\big\| U_1[\phi ](T) \big\| _{D^s_{p,q}}}={\big\| \phi \big\| _{D^s_{p,q}}}\sim r,\\ {\big\| U_3[\phi ](T) \big\| _{D^s_{p,q}}}\ge c\big( T^\frac{1}{2}rN^{-s}\big) ^2rN^{-s}=c(TN^2)r^3N^{-3s-2}\qquad \text{if $T\le \tfrac{1}{100}N^{-2}$},\\ {\big\| U_k[\phi ](T) \big\| _{D^s_{p,q}}}\le C\big( CT^\frac{1}{2}rN^{-s}\big) ^{k-1}rN^{-s},\qquad T\ge 0,~k\ge 3. \end{gathered}$$]{} Hence, we set [$$\begin{split} r=N^{s+\frac{2}{3}}\log N,\qquad T=\tfrac{1}{100}N^{-2}, \end{split}$$]{} so that for $s<-\frac{2}{3}$ we have [$$\begin{gathered} {\big\| U_1[\phi ](T) \big\| _{D^s_{p,q}}}\sim N^{s+\frac{2}{3}}\log N\ll 1,\qquad {\big\| U_3[\phi ](T) \big\| _{D^s_{p,q}}}{\gtrsim}(\log N)^3\gg 1, \\ \sum _{l\ge 2}{\big\| U_{2l+1}[\phi ](T) \big\| _{D^s_{p,q}}}{\lesssim}N^{-\frac{2}{3}}(\log N)^5\ll 1, \end{gathered}$$]{} from which norm inflation is detected by letting $N\to {\infty}$. **Acknowledgments:** The author would like to thank Tadahiro Oh for his generous suggestion and encouragement. This work is partially supported by JSPS KAKENHI Grant-in-Aid for Young Researchers (B) No. 24740086 and No. 16K17626. [00]{} T. Alazard and R. Carles, *Loss of regularity for supercritical nonlinear Schrödinger equations*, Math. 
Ann. **343** (2009), no. 2, 397–420. I. Bejenaru and T. Tao, *Sharp well-posedness and ill-posedness results for a quadratic non-linear Schrödinger equation*, J. Funct. Anal. **233** (2006), no. 1, 228–259. The latest version is in `arXiv:math/0508210` J. Bourgain, *Fourier transform restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations, I, Schrödinger equations*, Geom. Funct. Anal. **3** (1993), 107–156. N. Burq, P. Gérard, and N. Tzvetkov, *An instability property of the nonlinear Schrödinger equation on $S^d$*, Math. Res. Lett. **9** (2002), no. 2-3, 323–335. R. Carles, *Geometric optics and instability for semi-classical Schrödinger equations*, Arch. Ration. Mech. Anal. **183** (2007), no. 3, 525–553. R. Carles, E. Dumas, and C. Sparber, *Geometric optics and instability for NLS and Davey-Stewartson models*, J. Eur. Math. Soc. **14** (2012), no. 6, 1885–1921. R. Carles and T. Kappeler, *Norm-inflation with infinite loss of regularity for periodic NLS equations in negative Sobolev spaces*, Bull. Soc. Math. France **145** (2017), no. 4, 623–642. A. Choffrut and O. Pocovnicu, *Ill-posedness of the cubic nonlinear half-wave equation and other fractional NLS on the real line*, Int. Math. Res. Not. IMRN (**2018**), no. 3, 699–738. M. Christ, J. Colliander, and T. Tao, *Asymptotics, frequency modulation, and low regularity ill-posedness for canonical defocusing equations*, Amer. J. Math. **125** (2003), no. 6, 1235–1293. M. Christ, J. Colliander, and T. Tao, *Ill-posedness for nonlinear Schrödinger and wave equations*, preprint (2003). `arXiv:math/0311048` M. Christ, J. Colliander, and T. Tao, *Instability of the periodic nonlinear Schrödinger equation*, preprint (2003). `arXiv:math/0311227` M. Christ, J. Colliander, and T. Tao, *A priori bounds and weak solutions for the nonlinear Schrödinger equation in Sobolev spaces of negative order*, J. Funct. Anal. **254** (2008), no. 2, 368–395. F. Falk, E.W. Laedke, and K.H. 
Spatschek, *Stability of solitary-wave pulses in shape-memory alloys*, Phys. Rev. B **36** (1987), no. 6, 3031–3041. H.G. Feichtinger, *Modulation spaces on locally compact Abelian groups*, Technical Report, University of Vienna, 1983; Published in “Proc. Internat. Conf. on Wavelets and Applications”, New Delhi Allied Publishers, 2003, 1–56. A. Grünrock, *Some local wellposedness results for nonlinear Schrödinger equations below $L^2$*, preprint (2000). `arXiv:math/0011157` S. Guo, *On the 1D cubic nonlinear Schrödinger equation in an almost critical space*, J. Fourier Anal. Appl. **23** (2017), no. 1, 91–124. Z. Guo and T. Oh, *Non-existence of solutions for the periodic cubic NLS below $L^2$*, Int. Math. Res. Not. IMRN (**2018**), no. 6, 1656–1729. S. Gustafson, K. Nakanishi, and T.P. Tsai, *Scattering theory for the Gross-Pitaevskii equation in three dimensions*, Commun. Contemp. Math. **11** (2009), no. 4, 657–707. H. Huh, S. Machihara, and M. Okamoto, *Well-posedness and ill-posedness of the Cauchy problem for the generalized Thirring model*, Differential Integral Equations **29** (2016), no. 5-6, 401–420. T. Iwabuchi and T. Ogawa, *Ill-posedness for the nonlinear Schrödinger equation with quadratic non-linearity in low dimensions*, Trans. Amer. Math. Soc. **367** (2015), no. 4, 2613–2630. T. Iwabuchi and K. Uriya, *Ill-posedness for the quadratic nonlinear Schrödinger equation with nonlinearity $|u|^2$*, Commun. Pure Appl. Anal. **14** (2015), no. 4, 1395–1405. C.E. Kenig, G. Ponce, and L. Vega, *Quadratic forms for the $1$-D semilinear Schrödinger equation*, Trans. Amer. Math. Soc. **348** (1996), no. 8, 3323–3353. C.E. Kenig, G. Ponce, and L. Vega, *On the ill-posedness of some canonical dispersive equations*, Duke Math. J. **106** (2001), no. 3, 617–633. R. Killip, M. Vişan, and X. Zhang, *Low regularity conservation laws for integrable PDE*, preprint (2017). `arXiv:1708.05362` N. 
Kishimoto, *Low-regularity bilinear estimates for a quadratic nonlinear Schrödinger equation*, J. Differential Equations **247** (2009), no. 5, 1397–1439. N. Kishimoto and K. Tsugawa, *Local well-posedness for quadratic nonlinear Schrödinger equations and the “good” Boussinesq equation*, Differential Integral Equations **23** (2010), no. 5-6, 463–493. H. Koch and D. Tataru, *A priori bounds for the 1D cubic NLS in negative Sobolev spaces*, Int. Math. Res. Not. IMRN **2007**, no. 16, Art.ID rnm053, 36 pp. H. Koch and D. Tataru, *Energy and local energy bounds for the 1-d cubic NLS equation in $H^{-1/4}$*, Ann. Inst. H. Poincaré Anal. Non Linéaire **29** (2012), no. 6, 955–988. H. Koch and D. Tataru, *Conserved energies for the cubic NLS in 1-d*, preprint (2016). `arXiv:1607.02534` S. Machihara and M. Okamoto, *Ill-posedness of the Cauchy problem for the Chern-Simons-Dirac system in one dimension*, J. Differential Equations **258** (2015), no. 4, 1356–1394. S. Machihara and M. Okamoto, *Sharp well-posedness and ill-posedness for the Chern-Simons-Dirac system in one dimension*, Int. Math. Res. Not. (**2016**), no. 6, 1640–1694. L. Molinet, *On ill-posedness for the one-dimensional periodic cubic Schrödinger equation*, Math. Res. Lett. **16** (2009), no. 1, 111–120. T. Oh, *A remark on norm inflation with general initial data for the cubic nonlinear Schrödinger equations in negative Sobolev spaces*, Funkcial. Ekvac. **60** (2017), 259–277. T. Oh and C. Sulem, *On the one-dimensional cubic nonlinear Schrödinger equation below $L^2$*, Kyoto J. Math. **52** (2012), no. 1, 99–115. T. Oh and Y. Wang, *On the ill-posedness of the cubic nonlinear Schrödinger equation on the circle*, to appear in An. Ştiinţ. Univ. Al. I. Cuza Iaşi. Mat. (N.S.). T. Oh and Y. Wang, *Global well-posedness of the one-dimensional cubic nonlinear Schrödinger equation in almost critical spaces*, preprint (2018). `arXiv:1806.08761` M. 
Okamoto, *Norm inflation for the generalized Boussinesq and Kawahara equations*, Nonlinear Anal. **157** (2017), 44–61. M. Ruzhansky, M. Sugimoto, and B. Wang, *Modulation spaces and nonlinear evolution equations*, Evolution equations of hyperbolic and Schrödinger type, 267–283, Progr. Math., **301**, Birkhäuser/Springer Basel AG, Basel, 2012. Y. Tsutsumi, *$L^2$-solutions for nonlinear Schrödinger equations and nonlinear groups*, Funkcial. Ekvac. **30** (1987), no. 1, 115–125. [^1]: One can still adapt their idea to the periodic setting with additional care. Moreover, although their original argument did not apply to the 1d cubic case with the scaling critical regularity $s=-\frac{1}{2}$, one can modify the argument to cover that case. See [@OW15p] for details. [^2]: In [@CDS12] they also proved norm inflation for generalized nonlinear Schrödinger equations and the Davey-Stewartson system including non-elliptic Laplacian. [^3]: The one-dimensional cubic problem was not treated in the first version of this article. We would like to thank T. Oh for drawing our attention to this case. [^4]: Essentially, they also proved NI$_s$ for $s<-\frac{1}{4}$, i.e., the case (iv) of our Theorem \[thm:main0\]. [^5]: In the first version of this article, we only considered gauge-invariant smooth nonlinearities $\nu |u|^{2k}u$, $k\in {\mathbb{Z}}_{>0}$ and linear combinations of them. Note, however, that the method of Iwabuchi and Ogawa [@IO15] had been applied before only to quadratic nonlinearities and it was the first result dealing with nonlinearities of general degrees in a unified manner. The authors of [@CP16; @O17] informed us that their proofs of norm inflation results followed the argument in the first version of this article. We also remark that an estimate proved in the first version (Lemma \[lem:a\_k\] below) was employed later in [@MO16; @HMO16; @Ok17]. [^6]: In [@Ok17] non gauge-invariant nonlinearities were first treated in a general setting. 
In fact Theorem \[thm:main0\] follows as a corollary of [@Ok17 Proposition 2.5 and Corollary 2.10]. However, we decide to include the non gauge-invariant cases in the present version in order to state Theorem \[thm:main\] (for multi-term nonlinearities) with more generality. [^7]: It is worth noticing that the continuity of $U_k$ from $(B_D({\varepsilon}_0),{\| ~ \| _{D'}})$ to $(S,{\| ~ \| _{S'}})$ does not imply its continuity from $(D,{\| ~ \| _{D'}})$ to $(S,{\| ~ \| _{S'}})$ in general, even though $U_k$ can be defined for all functions in $D$. By the $k$-linearity of $U_k$, the latter continuity is equivalent to the *boundedness*: ${\| U_k[\phi ] \| _{S'}}\le C{\| \phi \| _{D'}}^k$. Hence, only disproving the boundedness of $U_k$ in coarse topologies (which may imply that the solution map is not $k$ times differentiable) is not sufficient to conclude the discontinuity of the solution map. [^8]: In fact, we do not need ‘well-posedness in $D$’, i.e., such estimates as that hold for *all* functions in $D$ and $S$. It is enough to estimate the terms $U_k[\phi ]$ just for particularly chosen initial data $\phi$. In some problems this consideration becomes essential; see [@Ok17], Theorem 1.2 and its proof. [^9]: If $p$ is even, we can choose $\eta _l$ to be $Ne_d$ or $-Ne_d$ so that $\sum _{l=1}^q\eta _l-\sum _{m=q+1}^p\eta _m=0$. If $p$ is odd, we choose $\eta _1=2Ne_d$ and $\eta _2$ to be $Ne_d$ or $-Ne_d$ so that the output from these two frequencies is either $Ne_d$ or $-Ne_d$. Then, the other $\eta _j$ can be chosen as for $p$ even. [^10]: More precisely, we show ${\| {\widehat}{u}(T) \| _{L^2(\{ |\xi |\le 1\} )}}>{\delta}^{-1}$. This implies the claim if we define the Sobolev norm of negative indices $\sigma$ as ${\| f \| _{H^{\sigma}}}:={\| \min \{ 1,\,|\xi |^{{\sigma}}\} {\widehat}{f}(\xi ) \| _{L^2}}$.
---
abstract: 'Edge machine learning can deliver low-latency and private artificial intelligence (AI) services for mobile devices by leveraging computation and storage resources at the network edge. This paper presents an energy-efficient edge processing framework to execute deep learning inference tasks at the edge computing nodes whose wireless connections to mobile devices are prone to channel uncertainties. Aimed at minimizing the sum of computation and transmission power consumption with probabilistic quality-of-service (QoS) constraints, we formulate a joint inference tasking and downlink beamforming problem that is characterized by a group sparse objective function. We provide a statistical learning based robust optimization approach to approximate the highly intractable probabilistic-QoS constraints by nonconvex quadratic constraints, which are further reformulated as matrix inequalities with a rank-one constraint via matrix lifting. We design a reweighted power minimization approach that iterates between reweighted $\ell_1$ minimization with difference-of-convex-functions (DC) regularization and weight updating, where the reweighted approach is adopted for enhancing group sparsity whereas the DC regularization is designed for inducing rank-one solutions. Numerical results demonstrate that the proposed approach outperforms other state-of-the-art approaches.'
author:
- 'Kai Yang,  Yuanming Shi,  Wei Yu,  and Zhi Ding,  [^1] [^2] [^3] [^4]'
bibliography:
- 'reliable\_edge\_processing.bib'
title: 'Energy-Efficient Processing and Robust Wireless Cooperative Transmission for Edge Inference'
---

Edge intelligence, energy efficiency, robust communication, group sparse beamforming, robust optimization, difference-of-convex-functions

Introduction
============

Machine learning has transformed many aspects of our daily lives by taking advantage of abundant data and computing power in cloud data centers.
In particular, the strong capability of deep neural networks [@lecun2015deep] to capture representations of data for detection or classification has led to impressive gains in face recognition, natural language processing, and related tasks. With the explosion of mobile data and the increasing edge computing capability, there is an emerging trend of *edge intelligence* [@zhou2019edge; @park2018edgeai]. Instead of uploading all data collected by mobile devices to a remote cloud data center, edge intelligence emphasizes the use of computation and storage resources at network edges to provide low-latency and reliable artificial intelligence (AI) services [@kang2019incentive; @kang2019reliable] for privacy- and security-sensitive devices, such as wearable devices, augmented reality, smart vehicles, and drones. However, since mobile devices are usually equipped with limited computation power, storage, and energy [@park2018edgeai], it is usually infeasible to deploy deep learning models, i.e., deep neural networks (DNNs), at resource-constrained mobile devices and execute inference tasks locally. A promising solution is to enable processing at the mobile network access points to facilitate deep learning inference, which is termed *edge inference* [@xu2018scaling; @xu2019edgesanitizer]. In this paper, we present an edge processing framework for edge inference (as illustrated in Fig. \[fig:framework\]) in which the input (e.g., a rough doodle) of each mobile user is uploaded to wireless access points (e.g., base stations) serving as edge computing nodes, each task is performed with a pre-trained deep learning model (e.g., Nvidia’s AI system GauGAN [@GauGANwebsite2019] for turning rough doodles into photorealistic landscapes) at multiple edge computing nodes, and the output results (e.g., landscape images) are transmitted to mobile users via coordinated beamforming among multiple access points.
In such a system, the provisioning of wireless transmissions in both the uplink and the downlink is an important design consideration. In addition to the low-latency requirement, improving the energy efficiency [@sze2017efficient] is also critical due to the high computational complexity of processing DNNs, a challenge that a number of works address via model compression methods [@han2015deep; @Zhang_SPM18].

![Illustration of our energy-efficient processing and robust wireless cooperative transmission framework for edge inference.[]{data-label="fig:framework"}](framework){width="\columnwidth"}

There is a communication and computation tradeoff for the edge inference system in the downlink. In particular, performing an inference task at more edge computing nodes can achieve a higher quality-of-service (QoS) through cooperative downlink transmission for delivering the output results to mobile users. This, however, results in more computation power consumption for executing the deep learning models. We thus propose to jointly decide on the task allocation strategy at edge nodes and design the downlink beamforming vectors by minimizing the sum of transmission power consumption and computation power consumption. In particular, the power consumption of deep learning inference tasks can be determined through the estimated energy [@yang2017designing] and computation time. We observe that there is an intrinsic connection between the group sparse structure [@Yuanming_TWC2014; @tao2016content] of the aggregated downlink beamforming vector and the combinatorial variable, i.e., the set of tasks performed at edge nodes. The cooperative transmission strategies require global channel state information (CSI), whereas uncertainty in CSI acquisition is inevitable in practice due to training-based channel estimation [@yangfuqian2018pilot], limited feedback [@mo2018limited], partial CSI acquisition [@shi2015optimal], and CSI acquisition delays [@maddah2012completely].
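The correspondence noted above between the group-sparsity pattern of the aggregated beamforming vector and the combinatorial task-allocation variable can be made concrete with a short sketch. This is illustrative only; the block layout, the dictionary representation, and the numerical tolerance are our assumptions, not notation from the paper:

```python
import numpy as np

def active_tasks(v_blocks, tol=1e-8):
    """Recover the task-allocation set from a block-structured beamformer.

    v_blocks maps (n, k) to the beamforming vector used by AP n for user
    k's task; AP n executes task k iff its block is (numerically) nonzero.
    """
    return {(n, k) for (n, k), v in v_blocks.items()
            if np.linalg.norm(v) > tol}

# Toy example: AP 1 serves user 1, AP 2 serves user 2,
# and AP 1's block for user 2 is zero, so AP 1 skips that task.
v = {(1, 1): np.array([0.3 + 0.1j, 0.2]),
     (1, 2): np.zeros(2),
     (2, 2): np.array([0.5, 0.0])}
```

Driving entire blocks ${{\bm{v}}}_{nk}$ to zero thus simultaneously switches off the corresponding inference tasks, which is why a group-sparsity-inducing objective encodes the combinatorial task selection.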
We thus formulate the joint task selection and downlink beamforming problem for energy-efficient processing and robust transmission against CSI errors in the edge inference system as a group sparse beamforming problem with probabilistic-QoS constraints [@shi2015optimal]. The joint chance constraints make the formulated probabilistic group sparse beamforming problem highly intractable since they generally admit no closed-form expression. To address the chance-constrained programs, a number of works focus on finding computationally tractable approximations based on the collected samples of the random variables. A well-known scenario generation (SG) approach [@nemirovski2006convex] uses a collection of sampled constraints to approximate the original chance constraints. However, SG is over-conservative since the volume of the feasible region shrinks as the sample size increases, which leads to the deterioration of its performance. In addition, given the pre-specified probability $1-\epsilon$ and the confidence level $1-\delta$ for the probabilistic-QoS constraints, the required sample size of SG should satisfy $\sum_{i=1}^{NKL-1}\binom{T}{i}\epsilon^i(1-\epsilon)^{T-i}\leq \delta$, which increases roughly linearly with $1/\epsilon$. In [@shi2015optimal], a stochastic optimization approach is provided to address the over-conservativeness of SG. However, its computational cost grows linearly with the sample size, which is not scalable for obtaining high-robustness solutions. Moreover, its statistical guarantee under finite sample size is still not available. To overcome the limitations of existing methods, we present a robust optimization approximation approach for the joint chance constraints by enforcing the QoS constraints for any element within a high probability region. The high probability region is further determined by adopting a statistical learning [@hong2017learning] approach.
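The sample-size requirements of these approximation schemes can be compared numerically. The sketch below assumes the standard binomial-tail form of the SG bound (summing over $i=0,\dots,d-1$ for $d$ decision dimensions) and the elementary bound $D \geq \log\delta/\log(1-\epsilon)$ for estimating a $(1-\epsilon)$-quantile from i.i.d. samples with confidence $1-\delta$; the function names are ours:

```python
import math

def min_samples_quantile(eps, delta):
    # Smallest D with (1 - eps)**D <= delta: with D i.i.d. samples, the
    # sample maximum exceeds the (1 - eps)-quantile with prob. >= 1 - delta.
    return math.ceil(math.log(delta) / math.log(1.0 - eps))

def min_samples_sg(eps, delta, d):
    # Scenario generation: smallest T such that
    #   sum_{i=0}^{d-1} C(T, i) * eps^i * (1 - eps)^(T - i) <= delta,
    # where d plays the role of the decision dimension (here N*K*L).
    T = d
    while sum(math.comb(T, i) * eps ** i * (1.0 - eps) ** (T - i)
              for i in range(d)) > delta:
        T += 1
    return T

# With eps = delta = 0.05 the quantile bound needs only 59 samples.
D_quantile = min_samples_quantile(0.05, 0.05)
```

For instance, with $\epsilon=\delta=0.05$ the quantile-based bound requires 59 samples, whereas the SG requirement grows with the dimension $d=NKL$ and already reaches several hundred samples for a modest $d=10$.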
This approach enjoys the benefits that the minimum required sample size is only $\log\delta/\log(1-\epsilon)$, and the computational cost is independent of the sample size. With the statistical learning based robust optimization approximation approach, the resulting robust group sparse beamforming problem has nonconvex quadratic constraints and a nonconvex group sparse objective function. We find that the nonconvex quadratic constraints can be convexified by matrix lifting and semidefinite relaxation (SDR) [@luo2007approximation]. Specifically, the nonconvex quadratic robust QoS constraints can be lifted as convex constraints in terms of a rank-one positive semidefinite matrix variable, which is then convexified by simply dropping the rank-one constraint. However, the SDR approach cannot guarantee that the obtained solution is feasible with respect to the original nonconvex quadratic constraints. The mixed $\ell_1/\ell_2$-norm [@bach2012optimization] is a well-known convex group sparsity inducing norm, which has been successfully applied in green cloud radio access networks [@Yuanming_TWC2014] and cooperative wireless cellular network [@sanjabi2014joint]. However, the SDR approach requires a quadratic form of the objective function, which makes the mixed $\ell_1/\ell_2$-norm minimization approach inapplicable. To overcome this problem, a quadratic variational form of weighted mixed $\ell_1/\ell_2$-norm is proposed in [@shi2015robust_TSP] to induce group sparsity. Note that [@shi2015robust_TSP] also considers a group sparse beamforming problem with nonconvex quadratic constraints. However, the performance of a quadratic variational form of weighted mixed $\ell_1/\ell_2$-norm minimization with SDR is still not satisfactory. To address the limitations of existing approaches, we propose a reweighted power minimization approach to enhance the group sparsity as well as improve the feasibility of nonconvex quadratic constraints. 
Specifically, we first adopt the iteratively reweighted $\ell_1$ minimization approach for enhancing group sparsity [@dai2016energy; @shi2016smoothed]. To further guarantee the feasibility of the original nonconvex quadratic constraints, we exploit the matrix lifting technique to recast the nonconvex quadratic constraints as convex constraints with respect to a rank-one positive semidefinite matrix, and propose a novel difference-of-convex-functions (DC) regularization approach to induce rank-one solutions. Numerical results demonstrate that the proposed approach improves the probability of feasibility by avoiding the over-conservativeness of SG. Benefiting from both the reweighted $\ell_1$ minimization and the DC regularization, the proposed approach achieves a much lower total power consumption than the algorithm proposed in [@shi2015robust_TSP] and has a better capability of inducing group sparsity with nonconvex quadratic constraints.

Contributions
-------------

In this work, we consider an edge computing system that executes deep learning inference tasks for resource-constrained mobile devices. In order to provide energy-efficient processing and robust wireless cooperative transmission service for edge inference, we propose to jointly design the downlink beamforming vector and the set of inference tasks performed at each edge computing node under probabilistic-QoS constraints. We provide a statistical learning based robust optimization approximation for the highly intractable joint chance constraints, which guarantees that the probabilistic-QoS constraints are feasible with a certain confidence level. The resulting problem turns out to be a group sparse beamforming problem with nonconvex quadratic constraints. We propose a reweighted power minimization approach based on the principles of iteratively reweighted $\ell_1$ minimization for inducing group sparsity, the matrix lifting technique, and a novel DC representation for rank-one positive semidefinite matrices.
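The reweighting mechanism can be illustrated in isolation on a toy problem. The sketch below applies reweighted mixed $\ell_1/\ell_2$ minimization to a simple denoising objective with an identity design matrix, for which each inner weighted problem is solved exactly by block soft-thresholding; it deliberately omits the matrix-lifting/SDR and DC-regularization parts of the actual algorithm, and all parameter values are illustrative:

```python
import numpy as np

def block_soft(b, thresh):
    """Prox of thresh * ||.||_2: shrink a whole block toward zero."""
    nrm = np.linalg.norm(b)
    return np.zeros_like(b) if nrm <= thresh else (1.0 - thresh / nrm) * b

def reweighted_group_sparse(b, groups, lam=0.5, n_outer=5, tau=0.1):
    """Toy reweighted mixed l1/l2 minimization:
        min_x 0.5 * ||x - b||^2 + lam * sum_g w_g * ||x_g||_2,
    solved in closed form per group, with weights w_g = 1/(||x_g|| + tau)
    refreshed after every round to sharpen the group-sparsity pattern.
    """
    w = {g: 1.0 for g in groups}
    x = b.astype(float).copy()
    for _ in range(n_outer):
        for g, idx in groups.items():
            x[idx] = block_soft(b[idx], lam * w[g])
        w = {g: 1.0 / (np.linalg.norm(x[idx]) + tau)
             for g, idx in groups.items()}
    return x

# Three blocks of size 3; the middle block carries little energy.
b = np.array([3.0, 0, 0, 0.1, 0.1, 0, 0, 2.0, 2.0])
groups = {0: slice(0, 3), 1: slice(3, 6), 2: slice(6, 9)}
x = reweighted_group_sparse(b, groups)
```

Blocks whose energy falls below the (reweighted) threshold are driven exactly to zero while strong blocks are barely shrunk, mirroring how entire beamforming blocks, and hence inference tasks, are switched off in the power minimization.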
The proposed approach can enhance group sparsity and induce rank-one solutions. We summarize the major contributions of this paper as follows:

1. We propose an energy-efficient processing and robust transmission approach for executing deep learning inference tasks at possibly multiple edge computing enabled wireless access points. The selection of the optimal set of access points for each task is formulated as a group sparse beamforming problem with joint chance constraints.

2. We provide a robust optimization counterpart to approximate the joint chance constraints, followed by a statistical learning approach to learn the parameters from data samples of the random channel coefficients. This yields a nonconvex group sparse beamforming problem with nonconvex quadratic constraints.

3. We show that the nonconvex quadratic constraints can be reformulated as convex constraints with a rank-one constraint, where the rank-one constraint admits a novel DC representation. To enhance group sparsity and induce rank-one solutions, we propose a reweighted power minimization approach that iterates between reweighted $\ell_1$ minimization with DC regularization and weight updating.

4. We conduct extensive numerical experiments to demonstrate the advantages of the proposed approach in providing energy-efficient and robust transmission service for edge inference.

Organization and Notations
--------------------------

The rest of this work is organized as follows. In Section II, we introduce the system model and the power consumption model of edge inference, and formulate the energy-efficient processing and robust cooperative transmission problem as a group sparse beamforming problem with joint chance constraints. Section III provides a statistical learning based robust optimization approach to approximate the joint chance constraints. In Section IV, we design a reweighted power minimization approach for solving the robust group sparse beamforming problem.
The simulation results are illustrated in Section V to demonstrate the superiority of the proposed approach over other state-of-the-art approaches. Finally, we conclude this work in Section VI. Throughout this paper, we use lower-case bold letters (e.g., ${{\bm{v}}}$) to denote column vectors and letters with one subscript to denote their subvectors (e.g., ${{\bm{v}}}_{k}$). We further use lower-case bold letters with two subscripts to denote the subvectors of subvectors (e.g., ${{\bm{v}}}_{nk}$ is a subvector of ${{\bm{v}}}_k$). We denote scalars with lower-case letters, matrices with capital letters (e.g., ${{\bm{V}}}$), and sets with calligraphic letters (e.g., $\mathcal{A}$). The conjugate transpose of a vector or matrix, the $\ell_2$-norm of a vector, and the spectral norm of a matrix are denoted as $(\cdot)^{\sf{H}},\|\cdot\|_2$ and $\|\cdot\|$, respectively. Table \[tab:notations\] summarizes the notations used in this paper.

| Notation | Explanation |
|---|---|
| $N,K,L$ | the number of APs, MUs, and AP antennas, respectively |
| $[K]$ | the set $\{1,\cdots,K\}$ |
| $\mathcal{A}$ | task allocation of APs |
| $P_{nk}^c$ | power consumption of performing the $k$-th user’s task at the $n$-th AP |
| $P_n^{\text{Tx}}$ | maximum transmit power of the $n$-th AP |
| $\eta_n$ | power amplifier efficiency |
| $P^c$ | total computation power consumption at APs |
| $P$ | total power consumption |
| ${{\bm{v}}}_{nk},{{\bm{v}}}_{k},{{\bm{v}}}$ | beamforming vectors at the APs |
| ${{\bm{V}}}_{ij}[s,t],{{\bm{V}}}_{ij},{{\bm{V}}}$ | lifted matrices of beamforming vectors |
| ${{\bm{h}}}_{kn},{{\bm{h}}}_{k},{{\bm{h}}}$ | downlink channel coefficient vectors between APs and MUs |
| $\hat{{{\bm{h}}}}_{kn},\hat{{{\bm{h}}}}_k,\hat{{{\bm{h}}}}$ | estimated channel coefficient vectors |
| ${{\bm{e}}}_{kn},{{\bm{e}}}$ | random errors of CSI |
| $\gamma_k,\zeta$ | the target QoS and its target tolerance level |
$\epsilon,\delta$ the tolerance level and its confidence level $\mathcal{U}_k$ high probability region of ${{\bm{h}}}_k$ $\mathcal{D}$ the data set consisting of $D$ i.i.d. samples of ${{\bm{h}}}$ $\mathcal{D}^1,\mathcal{D}^2$ the partitioned two parts of the data set $\mathcal{D}$ with size $D_1$ and $D_2=D-D_1$, respectively $\tilde{{{\bm{h}}}}^{(j)}$ the $j$-th data sample $q_{1-\epsilon}$ $(1-\epsilon)$-quantile : Notations used in the paper[]{data-label="tab:notations"} System Model and Problem Formulation ==================================== This section provides the system model and power consumption model of edge inference for deep neural networks, followed by the proposal of the energy-efficient edge processing under probabilistic-QoS constraints. System Model ------------ Consider the edge processing network consisting of $N$ $L$-antenna edge computing enabled wireless access points (APs) and $K$ single-antenna mobile users (MUs), as shown in Fig. \[fig:system\]. Each MU $k$ has a deep learning inference task $\phi_k(d_k)$ with input $d_k$. Instead of relying on a cloud data center, we execute deep learning tasks at the APs to address latency and privacy concerns for high-stake applications such as drones and smart vehicles [@park2018edgeai]. In this paper, we propose to store the trained deep neural network (DNN) models $\phi_k$’s to APs in advance. Each AP collects all inputs $\{d_k\}_{k=1}^{K}$ from each MU in the first phase. In the second phase, each AP will selectively execute some inference tasks and transmit the output results to the MUs through cooperative downlink transmission, thereby providing low-latency intelligent services for MUs. The point is that the same inference task can be executed at multiple APs, so that the multiple APs can jointly transmit the result to the MUs through beamforming, thus improving the downlink transmission efficiency (at the expenses of the larger energy consumption due to executing the same task at multiple APs.) 
This paper focuses on the joint task selection and downlink transmit beamforming problem in the second phase.

![System model of edge inference for deep neural networks. This paper focuses on the computing and downlink transmission phase.[]{data-label="fig:system"}](system){width="\columnwidth"}

Let $\phi_k(d_k)$ be the requested output for MU $k$, $s_k\in\mathbb{C}$ be the encoded scalar to be transmitted, and ${{\bm{v}}}_{nk}\in\mathbb{C}^{L}$ be the beamforming vector for message $\phi_k(d_k)$ at the $n$-th AP. We consider the downlink communication scenario, where all inputs $d_k$’s have already been collected at the APs. Then the received signal at MU $k$ is given by $$y_k = \sum_{n=1}^{N}\sum_{l=1}^{K}{{\bm{h}}}_{kn}^{\sf{H}}{{\bm{v}}}_{nl}s_l+z_k, \label{eq:io1}$$ where ${{\bm{h}}}_{kn}\in\mathbb{C}^{L}$ is the channel coefficient vector between the $n$-th AP and the $k$-th MU, and $z_k\sim\mathcal{CN}(0,\sigma_k^2)$ is the additive isotropic white Gaussian noise. Suppose all data symbols $s_k$’s are mutually independent with unit power, i.e., $\mathbb{E}[|s_k|^2]=1$, and also independent of the noise. Denote $[K]$ as the set $\{1,\cdots,K\}$. Let $\mathcal{A}\subseteq\{(n,k):n\in[N],k\in[K]\}$ denote a feasible allocation of the inference tasks to APs, i.e., computational task $\phi_k$ shall be performed at the $n$-th AP for $(n,k)\in\mathcal{A}$. In terms of the group sparsity structure of the aggregative beamforming vector $${{\bm{v}}}=[{{\bm{v}}}_{11}^{\sf{H}},\cdots,{{\bm{v}}}_{N1}^{\sf{H}},\cdots,{{\bm{v}}}_{NK}^{\sf{H}}]^{\sf{H}}\in\mathbb{C}^{NKL},$$ if the inference task $k$ is not performed at AP $n$, i.e., $(n,k)\notin\mathcal{A}$, the beamforming vector ${{\bm{v}}}_{nk}$ is set to zero.
Let $\mathcal{T}({{\bm{v}}})$ be the group sparsity pattern of ${{\bm{v}}}$ given as $$\mathcal{T}({{\bm{v}}})=\{(n,k)|{{\bm{v}}}_{nk}\ne{{\bm{0}}}\}.$$ The signal-to-interference-plus-noise-ratio (SINR) for mobile device $k$ is given by $$\textrm{SINR}_k({{\bm{v}}};{{\bm{h}}}_k) = \frac{|{{\bm{h}}}_{k}^{\sf{H}}{{\bm{v}}}_{k}|^2}{\sum_{l\ne k}|{{\bm{h}}}_{k}^{\sf{H}}{{\bm{v}}}_{l}|^2+\sigma_k^2},$$ where ${{\bm{h}}}_{k}$ and ${{\bm{v}}}_k$ are given by $$\begin{aligned} {{\bm{h}}}_{k}&=[{{\bm{h}}}_{k1}^{\sf{H}},\cdots,{{\bm{h}}}_{kN}^{\sf{H}}]^{\sf{H}}\in\mathbb{C}^{NL}, \\ {{\bm{v}}}_{k}&=\begin{bmatrix} {{\bm{v}}}_{1k}^{\sf{H}} & \cdots & {{\bm{v}}}_{Nk}^{\sf{H}} \end{bmatrix}^{\sf{H}}\in\mathbb{C}^{NL}, \end{aligned}$$ and the aggregative channel coefficient vector is denoted as $${{\bm{h}}}=[{{\bm{h}}}_{1}^{\sf{H}},\cdots,{{\bm{h}}}_{K}^{\sf{H}}]^{\sf{H}}\in\mathbb{C}^{NKL}.$$ The transmit power constraint at the $n$-th AP is given by $$\mathbb{E}\left[\sum_{l=1}^{K}\|{{\bm{v}}}_{nl}s_l\|_2^2\right] = \sum_{l=1}^{K}\|{{\bm{v}}}_{nl}\|_2^2\leq P_n^{\text{Tx}}, n\in[N], \label{constraint:transpower}$$ where $P_n^{\text{Tx}}$ is the maximum transmit power.

Power Consumption Model
-----------------------

Although widespread applications of deep learning present numerous opportunities for intelligent systems, energy consumption has become one of the main concerns [@xu2018scaling]. Indeed, the energy consumption of performing DNN inference is dominated by memory accesses. As pointed out in [@han2015deep], a memory access to 32-bit dynamic random access memory (DRAM) consumes 640 pJ, while a cache access to 32-bit static random access memory (SRAM) consumes 5 pJ and a 32-bit floating point add operation consumes 0.9 pJ. Large DNN models typically cannot fit in the storage of a mobile device, which necessitates the more costly DRAM accesses. Therefore, small models can be directly deployed on mobile devices, whereas large models are preferably executed at the powerful edge nodes.
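The SINR expression and the per-AP transmit power constraint defined earlier in this section can be checked numerically. The following is a minimal numpy sketch; the dimensions, noise powers, and randomly drawn channels and beamformers are illustrative placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, L = 3, 2, 4                 # illustrative sizes: 3 APs, 2 MUs, 4 antennas
NL = N * L
sigma2 = np.full(K, 0.1)          # noise powers sigma_k^2

# Aggregated channel h_k and beamformer v_k for each user, both in C^{NL}
h = rng.standard_normal((K, NL)) + 1j * rng.standard_normal((K, NL))
v = rng.standard_normal((K, NL)) + 1j * rng.standard_normal((K, NL))

def sinr(k, v, h, sigma2):
    # SINR_k(v; h_k) = |h_k^H v_k|^2 / (sum_{l != k} |h_k^H v_l|^2 + sigma_k^2)
    signal = abs(np.vdot(h[k], v[k])) ** 2
    interference = sum(abs(np.vdot(h[k], v[l])) ** 2
                       for l in range(len(v)) if l != k)
    return signal / (interference + sigma2[k])

print([sinr(k, v, h, sigma2) for k in range(K)])
```

As a sanity check, zeroing an interfering user's beamformer can only increase (never decrease) another user's SINR, since only the denominator shrinks.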
Let the power consumption of computing task $\phi_k$ at the $n$-th edge computing node be $P_{nk}^{\text{c}}$. The total computation power consumption for all edge computing nodes is thus given by $$P^{\text{c}}=\sum_{n,k}P_{nk}^{\text{c}}I_{(n,k)\in\mathcal{T}({{\bm{v}}})},$$ where the indicator function $I$ is $1$ if $(n,k)\in\mathcal{T}({{\bm{v}}})$ and $0$ otherwise. Therefore, the total power consumption consists of the transmission power consumption for delivering the output results and the computation power consumption for executing the deep learning tasks, which is given by $$P =\sum_{n,k}\frac{1}{\eta_n}\|{{\bm{v}}}_{nk}\|_2^2+ \sum_{n,k}P_{nk}^{\text{c}}I_{(n,k)\in\mathcal{T}({{\bm{v}}})},$$ where $\eta_n$ is the power amplifier efficiency. Deep neural networks, especially deep convolutional neural networks (CNNs), have become an indispensable and state-of-the-art paradigm for real-world intelligent services. Their high energy cost has attracted much interest in designing energy-efficient neural network structures [@han2015deep]. Estimating the energy consumption of a neural network is thus critical for inference at the edge, for which an estimation tool is developed in [@EnergyEstimationWebsite]. The energy consumption of performing an inference task consists of a computation part and a data movement part [@yang2017designing]. The computation energy can be calculated by counting the number of multiply-and-accumulate (MAC) operations in each layer and weighting it with the energy consumption of one MAC operation in the computation core. The energy consumption of data movement is calculated by counting the number of accesses to each level of the memory hierarchy in the corresponding hardware and weighting each count with the energy consumption of accessing memory at that level.
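The total power model $P$ introduced at the beginning of this subsection can be sketched directly in code. The efficiencies $\eta_n$, computation powers $P_{nk}^{\text{c}}$, and the zeroed group below are illustrative assumptions, chosen only to show how the sparsity pattern $\mathcal{T}({{\bm{v}}})$ gates the computation power term.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, L = 3, 2, 4
eta = np.full(N, 0.5)          # illustrative amplifier efficiencies eta_n
P_comp = np.full((N, K), 0.8)  # illustrative computation powers P_nk^c

v = rng.standard_normal((N, K, L)) + 1j * rng.standard_normal((N, K, L))
v[2, 0] = 0                    # task 0 is not executed at AP 2 -> zero group

def total_power(v, eta, P_comp, tol=1e-12):
    # P = sum_{n,k} ||v_nk||_2^2 / eta_n  +  sum_{(n,k) in T(v)} P_nk^c
    pattern = [(n, k) for n in range(v.shape[0]) for k in range(v.shape[1])
               if np.linalg.norm(v[n, k]) > tol]      # sparsity pattern T(v)
    tx = sum(np.linalg.norm(v[n, k]) ** 2 / eta[n]
             for n in range(v.shape[0]) for k in range(v.shape[1]))
    comp = sum(P_comp[n, k] for (n, k) in pattern)
    return tx + comp, pattern

P, pattern = total_power(v, eta, P_comp)
print(round(P, 3), len(pattern))
```

Driving more groups ${{\bm{v}}}_{nk}$ to zero removes their $P_{nk}^{\text{c}}$ terms, which is exactly the computation saving that group sparse beamforming exploits.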
Here we illustrate how to estimate the computation power consumption of performing image classification tasks using a classic CNN (i.e., AlexNet, consisting of $5$ convolutional layers and $3$ fully-connected layers) on the Eyeriss chip. The energy estimation tool takes the network configuration as input and outputs the estimated energy breakdown of each layer in terms of the computation part and the data movement part of three data types (weights, input feature maps, output feature maps). Figure \[fig:EnergyDecomp\] demonstrates the estimated energy of each layer running on the Eyeriss chip, and the overall energy consumption is the sum of the four parts. The unit of energy is normalized by the energy for one MAC. Based on the total energy consumption, the computation power consumption can be further determined by dividing the energy consumption by the computation time.

![Energy consumption breakdown of the AlexNet [@krizhevsky2012imagenet]. The unit of energy is normalized by the energy for one MAC operation (i.e., $10^2$ = energy of 100 MACs).[]{data-label="fig:EnergyDecomp"}](energy.png){width="\columnwidth"}

Channel Uncertainty Model {#subsec:channel_uncertainty_model}
-------------------------

For high-stake intelligent applications such as autonomous driving and automation, robustness is a critical requirement. In practice, there is inevitably uncertainty in the available channel state information (CSI) ${{\bm{h}}}$, which is taken into consideration in this paper to provide robust transmission. The uncertainty may originate from training based channel estimation [@yangfuqian2018pilot], limited precision of feedback [@mo2018limited], partial CSI acquisition [@shi2015optimal] and delays in CSI acquisition [@maddah2012completely].
In this work, we adopt the additive error model [@liu2018energy; @fang2017joint] for the channel imperfection, i.e., $$\label{eq:additive_model} {{\bm{h}}}=\hat{{{\bm{h}}}}+{{\bm{e}}},$$ where $\hat{{{\bm{h}}}}\in\mathbb{C}^{NKL}$ is the estimated aggregative channel vector and ${{\bm{e}}}\in\mathbb{C}^{NKL}$ is the random CSI error, which has an unknown distribution and zero mean. We apply the probabilistic quality-of-service (QoS) constraints [@shi2015optimal] to characterize the robustness of delivering the inference results to the MUs: $$\label{eq:pcr} \textrm{Pr}\left(\textrm{SINR}_{k}({{\bm{v}}};{{\bm{h}}}_k) \geq \gamma_k\right)\geq 1-\zeta, \forall k\in[K].$$ Here $\zeta$ is the tolerance level and the event $\textrm{SINR}_{k} \geq \gamma_k$ is called the safe condition.

Problem Formulation
-------------------

In the proposed edge processing framework for deep learning inference tasks, there is a fundamental tradeoff between computation and communication. Specifically, executing the same inference task at multiple edge nodes requires higher computation power consumption, while the downlink transmission power consumption is reduced due to the cooperative transmission gains. In this paper, we propose an energy-efficient processing and robust transmission approach to minimize the total network power consumption, while satisfying the probabilistic QoS constraints and transmit power constraints.
It is formulated as the following probabilistic group sparse beamforming problem: $$\begin{aligned} \!\!\!\!\!\!\!\!\!\mathscr{P}_{\text{CCP}}:\!\!\mathop{\textrm{min}}_{{{\bm{v}}}\in\mathbb{C}^{NKL}} && \!\!\sum_{n,k}\frac{1}{\eta_n}\|{{\bm{v}}}_{nk}\|_2^2+ \sum_{n,k}P_{nk}^{\text{c}}I_{(n,k)\in\mathcal{T}({{\bm{v}}})}\nonumber \\ \textrm{s.t.~~~~} && \!\!\!\!\textrm{Pr}\left(\textrm{SINR}_{k}({{\bm{v}}};{{\bm{h}}}_k) \geq \gamma_k\right)\geq 1-\zeta, k\in[K] \label{constraint:probQoS} \\ && \sum_{k=1}^{K}\|{{\bm{v}}}_{nk}\|_2^2\leq P_n^{\text{Tx}}, n\in[N].\end{aligned}$$ In edge inference, data privacy is another main concern for high-stake applications such as smart vehicles and drones. Mobile users in these applications may be reluctant to send their raw data to APs. To avoid the exposure of raw data, hierarchical distributed structure has been studied in the literature, such as [@li2019edge], by determining a partition point of a DNN model and deploying the partitioned model across the mobile device and the edge computing enabled AP. The data privacy is protected since only the output of the layers before the partition point is uploaded to APs. Note that our proposed framework is also applicable to the privacy-preserving hierarchical distributed structure. In this case, the input $d_k$ becomes the locally computed output of the layers before the partition point. The computation task $\phi_k$ becomes the task of computing the inference result with the layers after the partition point. To achieve the robustness of QoS against CSI errors, we shall collect $D$ i.i.d. (independent and identically distributed) samples of the imperfect channel state information as the data set $\mathcal{D}=\{\tilde{{{\bm{h}}}}^{(1)},\cdots,\tilde{{{\bm{h}}}}^{(D)}\}$ to learn the uncertainty model of CSI before providing edge inference service. 
Based on the data set $\mathcal{D}$, we aim to design a beamforming vector ${{\bm{v}}}$ such that the safe condition is satisfied with probability at least $1-\zeta$. However, since we do not know the prior distribution of the random errors, the statistical guarantee of a given approach is usually expressed as a confidence level $1-\delta$ for a tolerance level $\epsilon$, e.g., in the scenario generation approach [@nemirovski2006convex]. That is, the confidence level that $$\label{relaxed_Prob_QoS} \textrm{Pr}\left(\textrm{SINR}_{k}({{\bm{v}}};{{\bm{h}}}_k) \geq \gamma_k\right)\geq 1-\epsilon$$ holds is no less than $1-\delta$ for some ${{\bm{v}}}$, $D$, $0<\epsilon<1$ and $0<\delta<1$. Thus the violation probability of the safe condition is upper bounded by $$\textrm{Pr}(\textrm{SINR}_{k}({{\bm{v}}};{{\bm{h}}}_k) < \gamma_k)\leq \delta + \epsilon(1-\delta).$$ By setting $\epsilon$ and $\delta$ such that $\zeta>\delta+\epsilon(1-\delta)$, the safe condition (\[eq:pcr\]) is guaranteed to be met. We consider the block fading channel where the channel distribution is assumed invariant [@liu2019two] within $T_s$ blocks and the channel coefficient vector remains unchanged within each block. Note that training by collecting $D$ channel samples within each block would result in high signaling overhead. We will show in Section \[subsec:sampling\_strategy\] that our proposed approach for addressing the probabilistic-QoS constraints can be integrated with a cost-effective channel sampling strategy.

Problem Analysis {#subsec:problem_analysis}
----------------

Directly solving the joint chance constraints (\[constraint:probQoS\]) is usually a highly intractable task [@nemirovski2006convex], especially when there is no exact knowledge about the uncertainty. In this work, we shall propose a general framework for edge inference without assuming the prior distribution of the random errors.
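While the chance constraint itself is hard to optimize over, for a fixed candidate beamformer it is straightforward to estimate empirically by Monte Carlo. The sketch below does this under a synthetic Gaussian CSI error, which is purely an illustrative assumption standing in for the unknown error distribution; the matched beamformers and all parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
NL, K = 6, 2
gamma, sigma2 = 0.5, 0.1          # illustrative target QoS and noise power

h_hat = rng.standard_normal((K, NL)) + 1j * rng.standard_normal((K, NL))
v = 2.0 * h_hat                   # naive matched beamformers, illustration only

def sinr(k, hk):
    sig = abs(np.vdot(hk, v[k])) ** 2
    itf = sum(abs(np.vdot(hk, v[l])) ** 2 for l in range(K) if l != k)
    return sig / (itf + sigma2)

def empirical_violation(k, trials=2000, err_std=0.01):
    # h = h_hat + e, with a synthetic Gaussian error standing in for the
    # unknown CSI error distribution (an assumption made for illustration)
    bad = 0
    for _ in range(trials):
        e = err_std * (rng.standard_normal(NL) + 1j * rng.standard_normal(NL))
        if sinr(k, h_hat[k] + e) < gamma:
            bad += 1
    return bad / trials

print([empirical_violation(k) for k in range(K)])
```

Such an estimate can only validate a given ${{\bm{v}}}$ after the fact; the rest of the paper instead constructs a tractable surrogate that can be optimized directly.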
A natural idea is to find a computationally tractable approximation for the probabilistic QoS constraints (\[constraint:probQoS\]).

### Scenario Generation

Scenario generation [@nemirovski2006convex] is a well-known approach that draws $D$ independent samples of the random channel coefficient vector ${{\bm{h}}}$ and imposes the target QoS constraints $\textrm{SINR}_{k} \geq \gamma_k,k\in[K]$ for each sample. However, because it ensures robustness in the minimax sense, it becomes overly conservative when a large number of samples are drawn, since the volume of the feasible region shrinks, which may result in the infeasibility of problem $\mathscr{P}_{\text{CCP}}$. In addition, the sample size $D$ should be chosen such that $\sum_{i=0}^{NKL-1}\binom{D}{i}\epsilon^i(1-\epsilon)^{D-i}\leq \delta$, where $1-\delta$ gives the confidence level for the probabilistic-QoS constraints defined in equation (\[eq:pcr\]). Therefore, the scenario generation approach has a scalability issue, since the required minimum sample size $D$ increases roughly linearly with $1/\epsilon$ for small $\epsilon$ and also with $NKL$.

### Stochastic Programming

To address the over-conservativeness of the scenario generation approach, a stochastic programming approach is further provided in [@shi2015optimal] by finding a difference-of-convex-functions (DC) approximation for the chance constraints. The resulting DC constrained stochastic program can be solved by successive convex approximation with a Monte Carlo step at each iteration. However, its computation cost grows linearly with the number of samples $D$, which is not scalable for obtaining highly robust solutions, and the statistical guarantee is not available for the joint chance constraints under finite sample size. To address the limitations of the existing works, we shall present a robust optimization approach in Section \[sec:ropt\] to approximate the chance constraints via a statistical learning approach [@hong2017learning].
This approach enjoys two main advantages: the minimum required number of observations is only $\log\delta/\log(1-\epsilon)$, and the computational cost is independent of the sample size.

Learning-Based Robust Optimization Approximation for Joint Chance Constraints {#sec:ropt}
=============================================================================

In this section, we provide a robust optimization approximation for the joint chance constraints in problem $\mathscr{P}_{\text{CCP}}$, followed by a statistical learning approach to learn the shape and size of the high probability region.

Approximating Joint Chance Constraints via Robust Optimization
--------------------------------------------------------------

Robust optimization [@hong2017learning] uses a safe approximation, imposing that the safe conditions are satisfied whenever the random variables lie in some geometric set. Specifically, the robust optimization approximation of the joint chance constraints (\[constraint:probQoS\]) is given by $$\textrm{SINR}_{k}({{\bm{v}}};{{\bm{h}}}_k) \geq \gamma_k, \forall{{\bm{h}}}_k \in \mathcal{U}_k,\forall k \in [K]\label{constraint:RO_QoS}$$ where $\mathcal{U}_k$ is the high probability region in which ${{\bm{h}}}_k$ lies. The robust optimization approximation for the joint chance constraints should yield a solution such that the probabilistic QoS constraint is satisfied with high confidence. This is realized by constructing a high probability region $\mathcal{U}_k$ from the data set $\mathcal{D}$ such that $\mathcal{U}_k$ covers a $1-\epsilon$ content of ${{\bm{h}}}_k$, i.e., $$\label{eq:prob_cover} \textrm{Pr}({{\bm{h}}}_k \in \mathcal{U}_k)\geq 1-\epsilon,$$ with confidence level at least $1-\delta$.
More precisely, since $\mathcal{U}_k$ is generated from data and is therefore random, we require that (\[eq:prob\_cover\]) holds in at least a proportion $1-\delta$ of the repeated applications of the data generation and high probability region construction procedure. By imposing the QoS constraints for every element of the high probability region, as in (\[constraint:RO\_QoS\]), the confidence level for the probabilistic-QoS constraints (\[relaxed\_Prob\_QoS\]) will be at least $1-\delta$. We thus obtain the robust optimization approximation for problem $\mathscr{P}_{\text{CCP}}$ as $$\begin{aligned} \!\!\!\mathscr{P}_{\text{RO}}:\mathop{\textrm{minimize}}_{{{\bm{v}}}} && \sum_{n,k}\frac{1}{\eta_n}\|{{\bm{v}}}_{nk}\|_2^2+ \sum_{n,k}P_{nk}^{\text{c}}I_{(n,k)\in\mathcal{T}({{\bm{v}}})}\nonumber \\ \textrm{subject to} && \textrm{SINR}_{k}({{\bm{v}}};{{\bm{h}}}_k) \geq \gamma_k, \forall{{\bm{h}}}_k \in \mathcal{U}_k, k\in[K] \nonumber \\ && \sum_{k=1}^{K}\|{{\bm{v}}}_{nk}\|_2^2\leq P_n^{\text{Tx}}, n\in[N].\end{aligned}$$ The choice of the geometric shape of the uncertainty set $\mathcal{U}_k$ is critical to the performance and the tractability of the robust optimization approximation. Motivated by the tractability of robust optimization, ellipsoids and polytopes are commonly chosen as the basic uncertainty sets. The uncertainty set can be further augmented as unions or intersections of these basic sets. In this paper, we choose an ellipsoidal uncertainty set to model the uncertainty of each group of channel coefficients ${{\bm{h}}}_{k}$, both for its wide use in modeling CSI uncertainties [@shi2015robust_TSP; @hanif2013efficient] and for its tractability as shown in Section \[subsec:tractable\_reformulation\]. The high probability region $\mathcal{U}_k$ is parameterized as $$\mathcal{U}_{k}=\{{{\bm{h}}}_{k}:{{\bm{h}}}_{k}=\hat{{{\bm{h}}}}_{k}+{{\bm{B}}}_{k}{{\bm{u}}}_{k}, {{\bm{u}}}_{k}^{\sf{H}}{{\bm{u}}}_{k} \leq 1\}.
\label{eq:uncertainty_set1}$$ Here the parameters ${{\bm{B}}}_{k}\in\mathbb{C}^{NL\times NL}$ and $\hat{{{\bm{h}}}}_{k}\in\mathbb{C}^{NL}$ shall be learned from the data set $\mathcal{D}$, which will be presented in Section \[subsec:learninguncertainty\]. We will then present the tractable reformulation of the robust optimization counterpart problem $\mathscr{P}_{\text{RO}}$ in Section \[subsec:tractable\_reformulation\]. Learning the High Probability Region from Data Samples {#subsec:learninguncertainty} ------------------------------------------------------ Note that (\[constraint:RO\_QoS\]) only gives a feasibility guarantee for the joint chance constraints with statistical confidence at least $1-\delta$, but its conservativeness is still a challenging problem. Generally speaking, problem $\mathscr{P}_{\text{RO}}$ is a less conservative approximation for problem $\mathscr{P}_{\text{CCP}}$ if it has a larger feasible region. Therefore, we prefer a smaller volume of the high probability region $\mathcal{U}$ which provides a larger feasible region. In our problem formulation, we set the volume of the high probability region such that the statistical confidence for the probabilistic-QoS constraints is close to $1-\delta$. In this paper, we propose to use a statistical learning approach [@hong2017learning] for the parameters of the high probability region $\mathcal{U}$, which consists of a shape learning procedure and a size calibration procedure via quantile estimation. First of all, we split the samples in data set $\mathcal{D}$ into two parts, i.e., $ \mathcal{D}^{1}=\{\tilde{{{\bm{h}}}}^{(1)},\cdots,\tilde{{{\bm{h}}}}^{(D_1)}\}$ and $ \mathcal{D}^{2}=\{\tilde{{{\bm{h}}}}^{(D_1+1)},\cdots,\tilde{{{\bm{h}}}}^{(D)}\}$, each for one procedure. 
### Shape Learning

Each ellipsoid set $\mathcal{U}_{k}$ can be re-parameterized as $$\mathcal{U}_{k}=\{{{\bm{h}}}_{k}:({{\bm{h}}}_{k}-\hat{{{\bm{h}}}}_{k})^{\sf{H}}{{\bm{\Sigma}}}_{k}^{-1}({{\bm{h}}}_{k}-\hat{{{\bm{h}}}}_{k})\leq s_{k}\},$$ where $\hat{{{\bm{h}}}}_{k}$ and ${{\bm{\Sigma}}}_{k}$ are the shape parameters of the ellipsoid $\mathcal{U}_{k}$, $s_{k}>0$ determines its size, and ${{\bm{\Sigma}}}_{k}/{s}_{k}={{\bm{B}}}_{k}{{\bm{B}}}_{k}^{\sf{H}}$. Suppose the observations of ${{\bm{h}}}_{k}$ are given by $\mathcal{D}_{k}=\mathcal{D}_{k}^{1}\cup \mathcal{D}_{k}^{2}=\{\tilde{{{\bm{h}}}}_{k}^{(j)}\}_{j=1}^{D}$. The shape parameter $\hat{{{\bm{h}}}}_{k}$ can be chosen as the sample mean over $\mathcal{D}_{k}^{1}$, i.e., $$\begin{aligned} \hat{{{\bm{h}}}}_{k}&=\frac{1}{D_1}\sum_{j=1}^{D_1}\tilde{{{\bm{h}}}}_{k}^{(j)}.\label{eq:HPR_mean}\end{aligned}$$ To reduce the complexity of the ellipsoid, we omit the correlations among the ${{\bm{h}}}_{kn}$’s and choose ${{\bm{\Sigma}}}_{k}$ as the block diagonal matrix whose $n$-th diagonal block is the sample covariance of ${{\bm{h}}}_{kn}$ over the first part of the data set, i.e., $$\begin{aligned} {{\bm{\Sigma}}}_{k}&=\begin{bmatrix} {{\bm{\Sigma}}}_{k1} & & \\ & \ddots & \\ & & {{\bm{\Sigma}}}_{kN} \end{bmatrix}, ~\text{where}~~\nonumber\\ {{\bm{\Sigma}}}_{kn}&=\frac{1}{D_1-1}\sum_{j=1}^{D_1}(\tilde{{{\bm{h}}}}_{kn}^{(j)}-\hat{{{\bm{h}}}}_{kn})(\tilde{{{\bm{h}}}}_{kn}^{(j)}-\hat{{{\bm{h}}}}_{kn})^{\sf{H}}.\label{eq:HPR_covariance}\end{aligned}$$

### Size Calibration via Quantile Estimation

We then use the second part of the data set, $\mathcal{D}_{k}^2$, to calibrate the ellipsoid size $s_{k}$. The key idea is to estimate a $1-\epsilon$ quantile, with confidence $1-\delta$, of a transformation of the data samples in $\mathcal{D}_{k}^2$. Let $$\label{eq:map} \mathcal{G}(\xi)=(\xi-\hat{{{\bm{h}}}}_{k})^{\sf{H}}{{\bm{\Sigma}}}_{k}^{-1}(\xi-\hat{{{\bm{h}}}}_{k})$$ be the map from the random space in which ${{\bm{h}}}_{k}$ lies to $\mathbb{R}$.
The size parameter $s_{k}$ is chosen as an estimated $(1-\epsilon)$-quantile of the underlying distribution of $\mathcal{G}(\xi)$ based on the data samples in $\mathcal{D}_{k}^2$, where the $(1-\epsilon)$-quantile $q_{1-\epsilon}$ is defined by $$\textrm{Pr}(\mathcal{G}(\xi)\leq q_{1-\epsilon}) = 1-\epsilon.$$ Specifically, by evaluating $\mathcal{G}$ on each sample of $\mathcal{D}_{k}^2$, we obtain the observations $G_{1},\cdots,G_{D-D_1}$, where $G_{j}=\mathcal{G}(\tilde{{{\bm{h}}}}_{k}^{(D_1+j)})$. Then the $j^\star$-th value of the observations ranked in ascending order, $G_{(1)}\leq \cdots \leq G_{(D-D_1)}$, denoted as $G_{(j^\star)}$, is an upper bound of the $(1-\epsilon)$-quantile of the underlying distribution of $\mathcal{G}(\xi)$ based on the following proposition: $s_{k}$ is an upper bound of the $(1-\epsilon)$-quantile of the underlying distribution with $1-\delta$ confidence, i.e., $$\textrm{Pr}(s_{k}\geq q_{1-\epsilon})\geq 1-\delta,$$ if $s_{k}$ is set as $$\begin{aligned} &s_{k} = G_{(j^\star)},~~\text{where}~j^\star~\text{is given by} \nonumber\\ &\min_{1\leq j\leq D-D_1}\!\!\left\{j:\sum_{k=0}^{j-1}\binom{D-D_1}{k}(1-\epsilon)^k\epsilon^{D-D_1-k}\geq 1-\delta \right\}.\label{eq:HPR_size}\end{aligned}$$ Indeed, since each observation falls below $q_{1-\epsilon}$ independently with probability $1-\epsilon$, we have $$\begin{aligned} &\textrm{Pr}(G_{(j)}\geq q_{1-\epsilon}) \nonumber\\ =&\textrm{Pr}(\text{at most}~j-1~\text{observations are below}~q_{1-\epsilon})\nonumber\\ =&\sum_{k=0}^{j-1}\binom{D-D_1}{k}(1-\epsilon)^k\epsilon^{D-D_1-k}. \end{aligned}$$ Therefore $G_{(j^\star)}$ is the smallest among all upper bounds of the $(1-\epsilon)$-quantile of the underlying distribution with $1-\delta$ confidence. Using the two procedures presented above, we learn a high probability region $\mathcal{U}_k$ of the random channel coefficient vectors ${{\bm{h}}}_{k}$’s.
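The two procedures (shape learning on $\mathcal{D}^1$, size calibration on $\mathcal{D}^2$) can be sketched end-to-end as follows. The dimensions, the split $D_1$, the values of $\epsilon,\delta$, and the synthetic Gaussian samples are all illustrative assumptions made so the sketch is self-contained.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(3)
N, L, D, D1 = 2, 3, 300, 150      # illustrative sizes; D2 = D - D1
NL = N * L
eps, delta = 0.1, 0.05

# Synthetic i.i.d. samples of the aggregated channel h_k (illustrative)
samples = rng.standard_normal((D, NL)) + 1j * rng.standard_normal((D, NL))
part1, part2 = samples[:D1], samples[D1:]

# --- Shape learning on D^1: sample mean and block-diagonal covariance ---
h_hat = part1.mean(axis=0)
Sigma = np.zeros((NL, NL), dtype=complex)
for n in range(N):
    c = part1[:, n * L:(n + 1) * L] - h_hat[n * L:(n + 1) * L]
    Sigma[n * L:(n + 1) * L, n * L:(n + 1) * L] = (c.T @ c.conj()) / (D1 - 1)

# --- Size calibration on D^2: rank the values of G and pick index j* ---
Sigma_inv = np.linalg.inv(Sigma)
G = np.sort([((x - h_hat).conj() @ Sigma_inv @ (x - h_hat)).real
             for x in part2])
M = D - D1
acc, j_star = 0.0, None
for j in range(1, M + 1):
    # accumulate sum_{k=0}^{j-1} C(M,k) (1-eps)^k eps^(M-k)
    acc += comb(M, j - 1) * (1 - eps) ** (j - 1) * eps ** (M - j + 1)
    if acc >= 1 - delta:
        j_star = j
        break
s_k = G[j_star - 1]               # the j*-th smallest observation
print(j_star, round(s_k, 2))
```

Note that $j^\star$ depends only on $M$, $\epsilon$ and $\delta$, so it can be precomputed once and reused for every user $k$.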
The statistical guarantee of this statistical learning based robust optimization approximation is given by the following proposition: Suppose the data samples in the data set $\mathcal{D}_{k}$ are i.i.d. and drawn from a continuous distribution for any $k$, and the data set is split into two independent parts $\mathcal{D}_{k}^1$ and $\mathcal{D}_{k}^2$. Each uncertainty set is chosen as $\mathcal{U}_{k}=\{{{\bm{h}}}_{k}:({{\bm{h}}}_{k}-\hat{{{\bm{h}}}}_{k})^{\sf{H}}{{\bm{\Sigma}}}_{k}^{-1}({{\bm{h}}}_{k}-\hat{{{\bm{h}}}}_{k})\leq s_{k}\}$, whose parameters $\hat{{{\bm{h}}}}_{k},{{\bm{\Sigma}}}_{k},$ and $s_{k}$ are determined by equation (\[eq:HPR\_mean\]), equation (\[eq:HPR\_covariance\]), and equation (\[eq:HPR\_size\]), respectively. Then any feasible solution to problem $\mathscr{P}_{\text{RO}}$ guarantees that the probabilistic-QoS constraints (\[relaxed\_Prob\_QoS\]) are satisfied with confidence at least $1-\delta$. Indeed, let $\mathcal{V}$ denote the feasible region of problem $\mathscr{P}_{\text{RO}}$. Since $\mathcal{G}$ depends only on $\mathcal{D}_{k}^{1}$, we have $$\begin{aligned} &\textrm{Pr}_{\mathcal{D}_{k}^{2}}({{\bm{v}}}\in\mathcal{V}) = \textrm{Pr}_{\mathcal{D}_{k}^{2}}(G_{(j^\star)}\geq q_{1-\epsilon}) \geq 1-\delta. \end{aligned}$$ Therefore, $\textrm{Pr}(\textrm{SINR}_k\geq \gamma_k)\geq 1-\epsilon$ holds with confidence at least $1-\delta$. Note that $j^\star$ exists only if $$\sum_{k=0}^{D-D_1-1}\binom{D-D_1}{k}(1-\epsilon)^k\epsilon^{D-D_1-k}\geq 1-\delta,$$ which is equivalent to $1-(1-\epsilon)^{D-D_1}\geq 1-\delta$. In other words, the required minimum number of samples is $D>D-D_1\geq \log{\delta}/\log{(1-\epsilon)}$ to achieve the $1-\delta$ confidence for the probabilistic QoS constraint (\[constraint:probQoS\]). The matrix ${{\bm{B}}}_{k}$ can be computed as $$\label{eq:HPR_B} {{\bm{B}}}_{k}=\sqrt{s_{k}}{{\bm{\Delta}}}_{k},$$ where ${{\bm{\Delta}}}_k$ is the Cholesky factor of ${{\bm{\Sigma}}}_{k}$, i.e., ${{\bm{\Sigma}}}_{k}={{\bm{\Delta}}}_{k}{{\bm{\Delta}}}_{k}^{\sf{H}}$.
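The sample-size condition and the construction of ${{\bm{B}}}_{k}$ via the Cholesky factor can be sketched in a few lines; the covariance matrix and calibrated size below are illustrative placeholder values.

```python
import math
import numpy as np

def min_samples(eps, delta):
    # Smallest integer M with (1 - eps)^M <= delta,
    # i.e. M >= log(delta) / log(1 - eps)
    return math.ceil(math.log(delta) / math.log(1 - eps))

print(min_samples(0.1, 0.05))     # prints 29

# B_k = sqrt(s_k) * Delta_k with Sigma_k = Delta_k Delta_k^H (Cholesky)
Sigma_k = np.array([[2.0, 0.5], [0.5, 1.0]])   # illustrative 2x2 covariance
s_k = 3.7                                       # illustrative calibrated size
Delta_k = np.linalg.cholesky(Sigma_k)
B_k = math.sqrt(s_k) * Delta_k
```

For $\epsilon=0.1$ and $\delta=0.05$, about 29 calibration samples already suffice, in contrast to the sample sizes that grow with $NKL$ required by scenario generation.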
We summarize the whole procedure for learning the high probability region $\mathcal{U}$ from the data set $\mathcal{D}$ in Algorithm \[algorithm:robust\]. **Input:** the data set $\mathcal{D}=\{\tilde{{{\bm{h}}}}^{(1)},\cdots,\tilde{{{\bm{h}}}}^{(D)}\}$.\ **Output:** $\hat{{{\bm{h}}}}_{k}$, ${{\bm{B}}}_{k}$ for all $k$.

Tractable Reformulations for the Robust Optimization Problem {#subsec:tractable_reformulation}
--------------------------------------------------------

According to the ellipsoidal uncertainty model (\[eq:uncertainty\_set1\]), the robust optimization approximation (\[constraint:RO\_QoS\]) can be rewritten as $$\begin{aligned} &{{\bm{h}}}_k^{\sf{H}}\bigg(\frac{1}{\gamma_k}{{\bm{v}}}_k{{\bm{v}}}_k^{\sf{H}}-\sum_{l\ne k} {{\bm{v}}}_l{{\bm{v}}}_l^{\sf{H}}\bigg){{\bm{h}}}_k\geq \sigma_k^2, \label{eq:QoS1} \\ &{{\bm{h}}}_{k} = \hat{{{\bm{h}}}}_{k}+ {{\bm{B}}}_{k}{{\bm{u}}}_{k}, {{\bm{u}}}_{k}^{\sf{H}}{{\bm{u}}}_{k} \leq 1, \label{eq:uncertainty1}\end{aligned}$$ where ${{\bm{u}}}_{k}\in\mathbb{C}^{NL}$.
By defining the matrices $$\label{eq:HPR_H} {{\bm{H}}}_{k}=\begin{bmatrix} \hat{{{\bm{h}}}}_{k} & {{\bm{B}}}_{k} \end{bmatrix}\in\mathbb{C}^{NL\times (NL+1)}$$ and using the S-procedure [@boyd2004convex], we obtain the following equivalent tractable reformulation of (\[eq:QoS1\]) and (\[eq:uncertainty1\]): $$\begin{aligned} &{{\bm{H}}}_{k}^{\sf{H}}\bigg(\frac{1}{\gamma_k}{{\bm{v}}}_{k}{{\bm{v}}}_{k}^{\sf{H}}-\sum_{l\ne k}{{\bm{v}}}_{l}{{\bm{v}}}_{l}^{\sf{H}}\bigg){{\bm{H}}}_{k}\succeq {{\bm{Q}}}_k, \label{constraint:nonconvex_sdp} \\ &\lambda_k\geq 0, \label{constraint:non_negative}\end{aligned}$$ where ${{\bm{\lambda}}}=[\lambda_{k}]\in\mathbb{R}_{+}^{K}$ and ${{\bm{Q}}}_k$ is given by $${{\bm{Q}}}_k=\begin{bmatrix} \lambda_{k}+\sigma_k^2 & {{\bm{0}}} \\ {{\bm{0}}} & -\lambda_{k}{{\bm{I}}}_{NL} \end{bmatrix}\in\mathbb{C}^{(NL+1)\times(NL+1)}.$$ The details of deriving (\[constraint:nonconvex\_sdp\]) and (\[constraint:non\_negative\]) from (\[eq:QoS1\]) and (\[eq:uncertainty1\]) are relegated to Appendix \[append:Sprocedure\]. Thus the proposed robust optimization approximation of problem $\mathscr{P}_{\text{CCP}}$ is given by the following group sparse beamforming problem with nonconvex quadratic constraints: $$\begin{aligned} \mathscr{P}_{\text{RGS}}:\!\!\!\!\!\mathop{\textrm{minimize}}_{{{\bm{v}}}\in\mathbb{C}^{NKL},{{\bm{\lambda}}}\in\mathbb{R}^{K}}\!\!\! && \sum_{n,l}\frac{1}{\eta_n}\|{{\bm{v}}}_{nl}\|_2^2+ \sum_{n,l}P_{nl}^{\text{c}}I_{(n,l)\in\mathcal{T}({{\bm{v}}})} \nonumber \\ \textrm{subject to~~~} && (\ref{constraint:nonconvex_sdp}),\lambda_k\geq0, \forall k\in[K] \label{con:nonconvex_qudratic} \\ && \sum_{l=1}^{K}\|{{\bm{v}}}_{nl}\|_2^2\leq P_n^{\text{Tx}}, \forall n\in[N]. \label{con:transmit_power}\end{aligned}$$ The computational complexity of solving problem $\mathscr{P}_{\text{RGS}}$ is independent of the sample size $D$.
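For a given candidate pair $({{\bm{v}}},{{\bm{\lambda}}})$, the reformulated constraint (\[constraint:nonconvex\_sdp\]) can be verified numerically by a minimum-eigenvalue test. The sketch below is a feasibility checker only, not a solver, and all data (channels, ellipsoids, candidate beamformers) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
NL, K = 4, 2
gamma = np.full(K, 0.5)
sigma2 = np.full(K, 0.1)

h_hat = rng.standard_normal((K, NL)) + 1j * rng.standard_normal((K, NL))
B = [0.01 * np.eye(NL, dtype=complex) for _ in range(K)]  # small ellipsoids
v = 3.0 * h_hat                   # candidate beamformers, illustration only

def robust_qos_holds(k, lam_k):
    # Check H_k^H (v_k v_k^H / gamma_k - sum_{l != k} v_l v_l^H) H_k >= Q_k
    H = np.hstack([h_hat[k][:, None], B[k]])              # H_k = [h_hat_k, B_k]
    A = np.outer(v[k], v[k].conj()) / gamma[k] \
        - sum(np.outer(v[l], v[l].conj()) for l in range(K) if l != k)
    Q = np.zeros((NL + 1, NL + 1), dtype=complex)
    Q[0, 0] = lam_k + sigma2[k]
    Q[1:, 1:] = -lam_k * np.eye(NL)
    M = H.conj().T @ A @ H - Q
    return bool(np.linalg.eigvalsh(M).min() >= -1e-9)     # PSD up to tolerance

print([robust_qos_holds(k, 1e-3) for k in range(K)])
```

Such a check is useful for validating solutions returned by a relaxation, since a matrix extracted from a non-rank-one SDR solution need not satisfy the original constraint.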
An effective approach for obtaining an approximate solution of a nonconvex quadratically constrained quadratic program is to lift the aggregative beamforming vector into a rank-one positive semidefinite matrix ${{\bm{V}}}={{\bm{v}}}{{\bm{v}}}^{\sf{H}}$ and simply drop the rank-one constraint, which is termed the semidefinite relaxation (SDR) technique [@luo2007approximation]. The obtained solution, however, may fail to be rank-one, in which case the extracted beamformer may be infeasible for the original nonconvex quadratic constraints. To induce the group sparsity with nonconvex quadratic constraints, a quadratic variational form of the weighted mixed $\ell_1/\ell_2$-norm is adopted in [@shi2015robust_TSP]. In this paper, we will adopt an iteratively reweighted minimization approach, which has demonstrated its effectiveness in cloud radio access networks [@dai2016energy; @shi2016smoothed], to further enhance the group sparsity of the aggregative beamforming vector. In addition, to improve the feasibility of the nonconvex quadratic constraint in each subproblem of the reweighted approach, we shall provide a novel difference-of-convex-functions (DC) approach for inducing rank-one solutions. It should be mentioned that the uplink-downlink duality is not applicable to efficiently addressing the robust QoS constraints (\[constraint:nonconvex\_sdp\]) due to the CSI uncertainty.

Integrating the Robust Optimization Approximation with a Cost-Effective Sampling Strategy {#subsec:sampling_strategy}
-----------------------------------------------------------------------------------------

Consider the block fading channel where the channel distribution is assumed invariant [@liu2019two] within the *coherence interval for channel statistics*. The coherence interval for channel statistics consists of $T_s$ blocks, where each block is called a *coherence interval for CSI* and the channel coefficient vector remains unchanged within each block. However, collecting $D$ channel samples within each block leads to high signaling overhead.
To address this issue, we provide a cost-effective sampling strategy for enabling robust transmission, whose timeline is illustrated in Fig. \[fig:time\_scale\]. At the beginning of the coherence interval for channel statistics, we collect $D$ i.i.d. channel samples as $\mathcal{D}$. Based on the data set $\mathcal{D}$, we can learn the estimated channel coefficient vector $\hat{{{\bm{h}}}}_k$ from equation (\[eq:HPR\_mean\]) and the estimated high probability region of the error ${{\bm{e}}}_k$ as ${{\bm{B}}}_{k}$ from equation (\[eq:HPR\_B\]). For the transmission in the first block, we can obtain $\{{{\bm{H}}}_k\}$ by combining these two parts following equation (\[eq:HPR\_H\]) and solve the resulting problem $\mathscr{P}_{\text{RGS}}$. For any other block $t>1$, we can obtain the estimated channel coefficient $\hat{{{\bm{h}}}}_k[t]$ as the sample mean by collecting as few as one sample of the channel coefficient vector. By updating the estimated channel coefficient $\hat{{{\bm{h}}}}_k$ while keeping the error information $\{{{\bm{B}}}_{k}:k\in[K]\}$, we can construct the parameters $\{{{\bm{H}}}_k[t]\}$ at the $t$-th block as $$\label{eq:efficient_sampling} {{\bm{H}}}_{k}[t]=\begin{bmatrix} \hat{{{\bm{h}}}}_{k}[t] & {{\bm{B}}}_{k} \end{bmatrix},\forall k\in[K],$$ and design the transmitter beamformer by solving problem $\mathscr{P}_{\text{RGS}}(\{{{\bm{H}}}_{k}[t]\})$, which significantly reduces the signaling overhead for channel sampling. The effectiveness of this cost-effective scheme will be demonstrated numerically in Section \[subsec:sim\_uncertainty\].
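The sampling strategy above amounts to learning the uncertainty shape once and refreshing only the channel estimate per block. A minimal sketch, with hypothetical helper names and a simple Cholesky factor of the sample covariance standing in for the learned ${{\bm{B}}}_k$ (the paper's actual construction follows (\[eq:HPR\_B\])):

```python
import numpy as np

def learn_uncertainty(samples):
    """Return (hhat, B) from D i.i.d. channel samples stored as rows."""
    hhat = samples.mean(axis=0)                  # sample-mean channel estimate
    centred = samples - hhat
    cov = centred.conj().T @ centred / len(samples)
    # Illustrative shape matrix: Cholesky factor of the (jittered) covariance.
    B = np.linalg.cholesky(cov + 1e-9 * np.eye(samples.shape[1]))
    return hhat, B

def build_H(hhat, B):
    """Stack H_k = [hhat_k, B_k] as in the text."""
    return np.column_stack([hhat, B])

rng = np.random.default_rng(0)
NL, D = 4, 50
samples = rng.normal(size=(D, NL)) + 1j * rng.normal(size=(D, NL))
hhat, B = learn_uncertainty(samples)
H_first = build_H(hhat, B)                       # first block: full data set

# Block t > 1: a single new sample updates the estimate; B_k is reused.
one_new_sample = rng.normal(size=(1, NL)) + 1j * rng.normal(size=(1, NL))
hhat_t = one_new_sample.mean(axis=0)
H_t = build_H(hhat_t, B)
```

Only the first column of $H_k[t]$ changes between blocks, which is what saves the signaling overhead.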
![Timeline of a cost-effective channel sampling strategy.[]{data-label="fig:time_scale"}](time_scale){width="\columnwidth"}

Reweighted Power Minimization for Group Sparse Beamforming with Nonconvex Quadratic Constraints
===============================================================================================

This section presents a reweighted power minimization approach to induce the group sparsity structure for problem $\mathscr{P}_{\text{RGS}}$. We further demonstrate that the nonconvex quadratic constraints can be reformulated as convex constraints with respect to a rank-one positive semidefinite matrix using a matrix lifting technique, followed by a DC approach to induce rank-one solutions.

Matrix Lifting for Nonconvex Quadratic Constraints
--------------------------------------------------

We observe that constraints (\[constraint:nonconvex\_sdp\]) are convex with respect to ${{\bm{v}}}{{\bm{v}}}^{\sf{H}}$ despite their nonconvexity with respect to ${{\bm{v}}}$. This motivates us to adopt the matrix lifting technique [@luo2007approximation] to address the nonconvex quadratic constraints in problem $\mathscr{P}_{\text{RGS}}$ by denoting $$\begin{aligned} &{{\bm{V}}}_{ij}[s,t]={{\bm{v}}}_{si}{{\bm{v}}}_{tj}^{\sf{H}}\in\mathbb{C}^{L\times L}\\ &{{\bm{V}}}_{ij}=\begin{bmatrix} {{\bm{V}}}_{ij}[1,1] & \cdots & {{\bm{V}}}_{ij}[1,N] \\ \vdots & \ddots & \vdots \\ {{\bm{V}}}_{ij}[N,1] & \cdots & {{\bm{V}}}_{ij}[N,N] \end{bmatrix}={{\bm{v}}}_{i}{{\bm{v}}}_{j}^{\sf{H}}\in\mathbb{C}^{NL\times NL}\\ &{{\bm{V}}}={{\bm{v}}}{{\bm{v}}}^{\sf{H}}=\begin{bmatrix} {{\bm{V}}}_{11} & \cdots & {{\bm{V}}}_{1K} \\ \vdots & \ddots & \vdots \\ {{\bm{V}}}_{K1} & \cdots & {{\bm{V}}}_{KK} \end{bmatrix}\in\mathbb{S}_+^{NKL},\end{aligned}$$ where $\mathbb{S}_+^{NKL}$ denotes the set of Hermitian positive semidefinite (PSD) matrices. The aggregative beamforming vector ${{\bm{v}}}$ is thus lifted as a rank-one PSD matrix ${{\bm{V}}}$.
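The key property behind the lifting is that any quadratic form in ${{\bm{v}}}$ becomes a linear (trace) function of the lifted matrix ${{\bm{V}}}={{\bm{v}}}{{\bm{v}}}^{\sf{H}}$, which is Hermitian PSD with rank one. A minimal numerical check with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
v = rng.normal(size=n) + 1j * rng.normal(size=n)
V = np.outer(v, v.conj())          # lifted matrix V = v v^H: rank-one, PSD

A = rng.normal(size=(n, n))
A = A + A.T                        # a real symmetric (hence Hermitian) test matrix

quad = (v.conj() @ A @ v).real     # quadratic form v^H A v
lifted = np.trace(A @ V).real      # Tr(A V): the same value, but linear in V
```

The identity `quad == lifted` is exactly why the QoS constraints become convex (indeed linear) in ${{\bm{V}}}$ once the rank-one constraint is set aside.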
The constraint $\mathcal{C}_k$ of problem $\mathscr{P}_{\text{RGS}}$, which is given by (\[constraint:nonconvex\_sdp\]), can be equivalently rewritten as the following PSD constraint $$\label{constraint:lifted_sdp} {{\bm{H}}}_{k}^{\sf{H}}\bigg(\frac{1}{\gamma_k}{{\bm{V}}}_{kk}-\sum_{l\ne k}{{\bm{V}}}_{ll}\bigg){{\bm{H}}}_{k}\succeq {{\bm{Q}}}_k,$$ and the transmit power constraint (\[con:transmit\_power\]) can be equivalently rewritten as $$\sum_{l=1}^{K}\|{{\bm{v}}}_{nl}\|_2^2=\sum_{l=1}^{K}\textrm{Tr}({{\bm{V}}}_{ll}[n,n])\leq P_n^{\text{Tx}}, \forall n\in[N].$$ Therefore, using the matrix lifting technique, we obtain an equivalent reformulation for problem $\mathscr{P}_{\text{RGS}}$ as $$\begin{aligned} \mathscr{P}:\mathop{\textrm{minimize}}_{{{\bm{V}}},{{\bm{\lambda}}}} && \sum_{n,l}\left(\frac{1}{\eta_n}\textrm{Tr}({{\bm{V}}}_{ll}[n,n])+ P_{nl}^{\text{c}}I_{\textrm{Tr}({{\bm{V}}}_{ll}[n,n])\ne 0}\right) \nonumber \\ \textrm{subject to} && (\ref{constraint:lifted_sdp}),\lambda_k\geq 0, \forall k\in[K] \label{con:QoS_sdp}\\ && \sum_{l=1}^{K}\textrm{Tr}({{\bm{V}}}_{ll}[n,n])\leq P_n^{\text{Tx}}, \forall n\in[N] \label{con:transmit_power_sdp}\\ && {{\bm{V}}}\succeq{{\bm{0}}}, \textrm{rank}({{\bm{V}}})=1. \label{prob:psd}\end{aligned}$$ Note that the constraints are still nonconvex due to the rank-one constraint.

DC Representations for Rank-One Constraint
------------------------------------------

For a positive semidefinite matrix ${{\bm{V}}}\in\mathbb{S}_+^{NKL}$, its rank is one if and only if it has only one non-zero singular value, i.e., $$\sigma_i({{\bm{V}}})=0, i=2,\cdots,NKL,$$ where $\sigma_i({{\bm{V}}})$ is the $i$-th largest singular value of ${{\bm{V}}}$.
The trace norm and spectral norm of the positive semidefinite matrix ${{\bm{V}}}$ are respectively given by $$\textrm{Tr}({{\bm{V}}})=\sum_{i=1}^{NKL}\sigma_i({{\bm{V}}}), \|{{\bm{V}}}\| = \sigma_1({{\bm{V}}}).$$ Thus we obtain an equivalent DC representation for the rank-one constraint of ${{\bm{V}}}$: $$\label{eq:DCrepresentation} \mathcal{R}({{\bm{V}}})=\textrm{Tr}({{\bm{V}}})-\|{{\bm{V}}}\|=0.$$ $\mathcal{R}$ is a DC function of ${{\bm{V}}}$ since both the trace norm and the spectral norm are convex.

Reweighted $\ell_1$ Minimization for Inducing Group Sparsity
------------------------------------------------------------

The reweighted $\ell_1$ minimization approach has shown its advantages in enhancing group sparsity for improving the energy efficiency of cloud radio access networks [@dai2016energy; @shi2016smoothed]. The $\ell_1$-norm is a well-recognized convex surrogate for the $\ell_0$-norm. To further enhance the sparsity, reweighted $\ell_1$ minimization iteratively minimizes a weighted $\ell_1$-norm and updates the weights. For the objective function of problem $\mathscr{P}$, we observe that the indicator function $I_{\textrm{Tr}({{\bm{V}}}_{ll}[n,n])\ne 0}$ can be interpreted as the $\ell_0$-norm of $\textrm{Tr}({{\bm{V}}}_{ll}[n,n])$. We can thus use the reweighted $\ell_1$ minimization technique by approximating $I_{\textrm{Tr}({{\bm{V}}}_{ll}[n,n])\ne 0}$ with $w_{nl}\textrm{Tr}({{\bm{V}}}_{ll}[n,n])$, which consists of alternately minimizing the approximated objective function and updating the weight as $$\label{eq:update_weights} w_{nl}=\frac{c}{\textrm{Tr}({{\bm{V}}}_{ll}[n,n])+\tau},$$ where $\tau>0$ is a constant regularization factor and $c>0$ is a constant. If $\textrm{Tr}({{\bm{V}}}_{ll}[n,n])$ is small, the reweighted $\ell_1$ minimization approach puts a larger weight on the transceiver pair $(n,l)$, which discourages executing the inference task $l$ at the $n$-th edge node.
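The DC characterisation above is easy to verify numerically: for a Hermitian PSD matrix, $\mathcal{R}({{\bm{V}}})=\textrm{Tr}({{\bm{V}}})-\|{{\bm{V}}}\|$ vanishes exactly in the rank-one case and is strictly positive otherwise. A small sketch with illustrative sizes:

```python
import numpy as np

def dc_penalty(V):
    # Trace norm minus spectral norm; for PSD V this is sum of all
    # eigenvalues minus the largest one, hence 0 iff rank(V) <= 1.
    return np.trace(V).real - np.linalg.norm(V, 2)

rng = np.random.default_rng(2)
n = 5
v = rng.normal(size=n) + 1j * rng.normal(size=n)
V_rank1 = np.outer(v, v.conj())                  # rank-one PSD

M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
V_full = M @ M.conj().T                          # full-rank PSD (almost surely)
```

Adding $\mu\,\mathcal{R}({{\bm{V}}})$ to the objective therefore penalises exactly the departure from rank one.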
Proposed Reweighted Power Minimization Approach
-----------------------------------------------

In this subsection, we provide a reweighted power minimization approach by combining the matrix lifting, DC representation and reweighted $\ell_1$ minimization techniques. In the $j$-th step, we update ${{\bm{V}}}^{[j+1]}$ by solving $$\begin{aligned} \mathop{\textrm{minimize}}_{{{\bm{V}}},{{\bm{\lambda}}}} && \sum_{n,l}\Big(\frac{1}{\eta_n}+ w_{nl}^{[j]}P_{nl}^{\text{c}}\Big)\textrm{Tr}({{\bm{V}}}_{ll}[n,n]) \nonumber \\ \textrm{subject to} && (\ref{constraint:lifted_sdp}),\lambda_k\geq 0, \forall k\in[K] \nonumber \\ && \sum_{l=1}^{K}\textrm{Tr}({{\bm{V}}}_{ll}[n,n])\leq P_n^{\text{Tx}}, \forall n\in[N] \nonumber\\ && {{\bm{V}}}\succeq{{\bm{0}}}, \textrm{rank}({{\bm{V}}})=1,\label{reweighted_rank_one}\end{aligned}$$ where the weights $\{w_{nl}^{[j]}\}$ are updated following (\[eq:update\_weights\]) and are all initialized to $1$. To solve problem (\[reweighted\_rank\_one\]) with the nonconvex rank-one constraint, we propose to use the DC representation (\[eq:DCrepresentation\]) by solving the following DC program $$\begin{aligned} \mathscr{P}_{\text{DC}}:\mathop{\textrm{minimize}}_{{{\bm{V}}},{{\bm{\lambda}}}} && \sum_{n,l}\Big(\frac{1}{\eta_n}+ w_{nl}^{[j]}P_{nl}^{\text{c}}\Big)\textrm{Tr}({{\bm{V}}}_{ll}[n,n])+\mu\mathcal{R}({{\bm{V}}}) \nonumber \\ \textrm{subject to} && (\ref{constraint:lifted_sdp}),\lambda_k\geq 0, \forall k\in[K] \nonumber\\ && \sum_{l=1}^{K}\textrm{Tr}({{\bm{V}}}_{ll}[n,n])\leq P_n^{\text{Tx}}, \forall n\in[N] \nonumber\\ && {{\bm{V}}}\succeq{{\bm{0}}},\end{aligned}$$ where $\mu>0$ is the regularization parameter. Despite its nonconvexity, problem $\mathscr{P}_{\text{DC}}$ can be efficiently solved by the simplified DC algorithm, i.e., by iteratively linearizing the concave part [@tao1997convex].
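The concave part $-\|{{\bm{V}}}\|$ is linearized using a subgradient of the spectral norm, which for a Hermitian PSD matrix can be taken as ${{\bm{u}}}_1{{\bm{u}}}_1^{\sf{H}}$ with ${{\bm{u}}}_1$ a unit eigenvector of the largest eigenvalue, so that $\textrm{Tr}(G{{\bm{V}}})=\|{{\bm{V}}}\|$ at the current iterate. A quick numerical check, with illustrative sizes:

```python
import numpy as np

def spectral_subgradient(V):
    """Return G = u1 u1^H for the top eigenvector u1 of Hermitian PSD V."""
    w, U = np.linalg.eigh(V)         # eigenvalues in ascending order
    u1 = U[:, -1]
    return np.outer(u1, u1.conj())

rng = np.random.default_rng(3)
n = 5
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
V = M @ M.conj().T                   # Hermitian PSD iterate
G = spectral_subgradient(V)          # Tr(G V) equals the spectral norm of V
```

Because $\textrm{Tr}(G{{\bm{V}}})$ is linear in ${{\bm{V}}}$, replacing $-\|{{\bm{V}}}\|$ by $-\textrm{Tr}(G{{\bm{V}}})$ yields a convex subproblem at each iteration.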
At the $t$-th iteration, we solve $$\begin{aligned} \mathop{\textrm{minimize}}_{{{\bm{V}}},{{\bm{\lambda}}}} && \sum_{n,l}\Big(\frac{1}{\eta_n}+ w_{nl}^{[j]}P_{nl}^{\text{c}}\Big)\textrm{Tr}({{\bm{V}}}_{ll}[n,n]) \nonumber \\ &&\quad\quad+\mu(\textrm{Tr}({{\bm{V}}})- \textrm{Tr}(G^{(t)}{{\bm{V}}}))\nonumber \\ \textrm{subject to} && (\ref{constraint:lifted_sdp}),\lambda_k\geq 0, \forall k\in[K] \nonumber\\ && \sum_{l=1}^{K}\textrm{Tr}({{\bm{V}}}_{ll}[n,n])\leq P_n^{\text{Tx}}, \forall n\in[N] \nonumber\\ && {{\bm{V}}}\succeq{{\bm{0}}}, \label{prob:DCiter}\end{aligned}$$ where $G^{(t)}$ is a subgradient of the spectral norm at ${{\bm{V}}}^{(t)}$. It can be computed as $G^{(t)}={{\bm{u}}}_1{{\bm{u}}}_1^{\sf{H}}$, where ${{\bm{u}}}_1$ is the eigenvector corresponding to the largest eigenvalue of the matrix ${{\bm{V}}}^{(t)}$. This DC algorithm is guaranteed to converge to a stationary point of problem $\mathscr{P}_{\text{DC}}$ from an arbitrary initial point [@tao1997convex]. When the reweighted $\ell_1$ minimization algorithm converges to a rank-one solution ${{\bm{V}}}^{[j]}$, we can extract the aggregative beamforming vector ${{\bm{v}}}^\star$ from the Cholesky decomposition ${{\bm{V}}}^{[j]}={{\bm{v}}}^\star{{{\bm{v}}}^\star}^{\sf{H}}$. The whole procedure of the proposed reweighted power minimization approach is summarized in Algorithm \[algorithm:proposed\].

**Initialization:** ${{\bm{V}}}^{[0]},w_{nl}$.\
Obtain ${{\bm{v}}}^\star$ through the Cholesky decomposition ${{\bm{V}}}^{[j]}={{\bm{v}}}^\star{{{\bm{v}}}^\star}^{\sf{H}}$.\
**Output:** ${{\bm{v}}}^\star$.

Numerical Results {#sec:simulation}
=================

In this section, we provide numerical experiments comparing the proposed framework with other state-of-the-art approaches. We generate the edge inference system with $N=4$ APs located at $(\pm400,\pm400)$ meters and $K=4$ mobile users randomly located in the $[-800, 800]\times [-800, 800]$ meters square region.
Each AP is equipped with $L=2$ antennas. The imperfection model of the channel coefficient vector between the $n$-th AP and the $k$-th mobile user is chosen as ${{\bm{h}}}_{kn}=10^{-L(d_{kn})/20}({{\bm{c}}}_{kn}+{{\bm{e}}}_{kn})$. The path loss model is given by $L(d_{kn})=128.1+37.6\log_{10}{d_{kn}}$, the Rayleigh small scale fading coefficient is given by ${{\bm{c}}}_{kn}\sim\mathcal{CN}({{\bm{0}}},{{\bm{I}}})$, and the additive error is given by ${{\bm{e}}}_{kn}\sim\mathcal{CN}({{\bm{0}}},10^{-4}{{\bm{I}}})$. As presented in Section \[subsec:learninguncertainty\], $D_1$ determines the accuracy of the learned shape of the uncertainty set, while $D_2$ determines the accuracy of the calibrated size of the uncertainty set. To balance these two aspects, the collected $D$ independent samples of ${{\bm{h}}}_{kn}$’s are split evenly for learning the shape and size of the uncertainty ellipsoids, respectively, i.e., $D_1=D_2=D/2$. For each AP, the power amplifier efficiency is chosen as $\eta_1=\cdots=\eta_N=1/4$, the average maximum transmit power is chosen as $P_1^{\text{Tx}}=\cdots=P_N^{\text{Tx}}=1W$, and the computation power consumption for each task $\phi_k$ at the $n$-th AP is chosen as $P_{nk}^{\text{c}}=0.60W$. We set the target SINR as $\gamma_1=\cdots=\gamma_K=\gamma$, the tolerance level as $\epsilon=0.05$, and the confidence level as $\delta=0.05$. The regularization parameter $\tau$ is set to $10^{-6}$ and $\mu$ is set to $10$.

Benefits of Taking CSI Uncertainty into Consideration {#subsec:sim_uncertainty}
-----------------------------------------------------

In this paper, we consider the CSI uncertainty in channel sampling and propose to address it with a learning-based robust optimization approximation approach. To further reduce the channel sampling overhead, we provide a cost-effective sampling strategy in Section \[subsec:sampling\_strategy\].
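The simulation channel model above can be sketched as follows. The helper names and the distance value are illustrative; a $\mathcal{CN}({{\bm{0}}},\sigma^2{{\bm{I}}})$ draw is realised with independent real and imaginary parts each of variance $\sigma^2/2$:

```python
import numpy as np

def pathloss_db(d):
    # L(d) = 128.1 + 37.6 log10(d), as stated in the text.
    return 128.1 + 37.6 * np.log10(d)

def draw_channel(d_kn, L, rng):
    # c ~ CN(0, I): Rayleigh small-scale fading.
    c = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2)
    # e ~ CN(0, 1e-4 I): additive channel error (std 1e-2 per component).
    e = 1e-2 * (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2)
    return 10 ** (-pathloss_db(d_kn) / 20) * (c + e)

rng = np.random.default_rng(4)
h = draw_channel(d_kn=400.0, L=2, rng=rng)   # one AP-user link, L = 2 antennas
```

With this construction, the learned uncertainty ellipsoids capture the additive error ${{\bm{e}}}_{kn}$ on top of the deterministic path loss scaling.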
We now evaluate its advantages over the beamformer design that does not take the CSI error into consideration, supposing that each task is performed at all APs. Specifically, we collect $D=200$ i.i.d. channel samples in the training phase within one coherence interval for channel statistics. In the test phase, we only collect one channel sample ${{\bm{h}}}^{(1)}$, construct ${{\bm{H}}}_k$’s following equation (\[eq:efficient\_sampling\]) and solve the problem $$\begin{aligned} \mathop{\textrm{minimize}}_{{{\bm{V}}},{{\bm{\lambda}}}} && \sum_{n,l}\left(\frac{1}{\eta_n}\textrm{Tr}({{\bm{V}}}_{ll}[n,n])+ P_{nl}^{\text{c}}\right) \nonumber \\ \textrm{subject to} && (\ref{constraint:lifted_sdp}),\lambda_k\geq 0, \forall k\in[K] \nonumber\\ && \sum_{l=1}^{K}\textrm{Tr}({{\bm{V}}}_{ll}[n,n])\leq P_n^{\text{Tx}}, \forall n\in[N] \nonumber\\ && {{\bm{V}}}\succeq{{\bm{0}}}.\end{aligned}$$ For comparison, the beamforming design without taking uncertainty into consideration is given by solving the problem $$\begin{aligned} \mathop{\textrm{minimize}}_{{{\bm{V}}},{{\bm{\lambda}}}} && \sum_{n,l}\left(\frac{1}{\eta_n}\textrm{Tr}({{\bm{V}}}_{ll}[n,n])+ P_{nl}^{\text{c}}\right) \nonumber \\ \textrm{subject to} && {{{\bm{h}}}_{k}^{(1)}}^{\sf{H}}\Big(\frac{1}{\gamma_k}{{\bm{V}}}_{kk}-\sum_{l\ne k}{{\bm{V}}}_{ll}\Big){{{\bm{h}}}_{k}^{(1)}}\geq \sigma_k^2, ~\forall k \nonumber\\ && \sum_{l=1}^{K}\textrm{Tr}({{\bm{V}}}_{ll}[n,n])\leq P_n^{\text{Tx}}, ~\forall n, \nonumber\\ && {{\bm{V}}}\succeq{{\bm{0}}}.\end{aligned}$$ Note that we use SDR for both approaches for fairness. We compare the two approaches by generating $40000$ realizations of i.i.d. channel samples for testing, and regenerate the training data set for the proposed approach every $200$ realizations.
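The test-phase evaluation can be sketched as follows, assuming the standard downlink SINR expression $\textrm{SINR}_k({{\bm{v}}};{{\bm{h}}})=|{{\bm{h}}}_k^{\sf{H}}{{\bm{v}}}_k|^2/(\sum_{l\ne k}|{{\bm{h}}}_k^{\sf{H}}{{\bm{v}}}_l|^2+\sigma_k^2)$; the toy channels and beams here are hypothetical:

```python
import numpy as np

def sinr(h_k, beams, k, sigma2):
    """SINR of user k given its true channel h_k and all beams."""
    sig = abs(h_k.conj() @ beams[k]) ** 2
    intf = sum(abs(h_k.conj() @ beams[l]) ** 2
               for l in range(len(beams)) if l != k)
    return sig / (intf + sigma2)

# Toy case: orthogonal channels with matched beams, so there is no
# inter-user interference and each SINR is simply |h_k^H v_k|^2 / sigma^2.
h = [np.array([1.0 + 0j, 0.0]), np.array([0.0, 1.0 + 0j])]
v = [np.array([1.0 + 0j, 0.0]), np.array([0.0, 1.0 + 0j])]
gamma, sigma2 = 1.0, 0.5
met = sum(1 for k in range(2) if sinr(h[k], v, k, sigma2) >= gamma)
```

Counting the realizations for which `sinr(...) >= gamma` holds for every user gives the QoS statistics reported in the table.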
We compute the achieved SINR for each mobile device using the solution of each approach, i.e., $\textrm{SINR}_k({{\bm{v}}};\tilde{{{\bm{h}}}})$, where $\tilde{{{\bm{h}}}}$ is the true channel coefficient vector, and count the number of realizations in which the target QoS is met for each device, i.e., $\textrm{SINR}_k\geq \gamma_k$. The results shown in Table \[tab:uncertainty\] demonstrate that the proposed robust approximation approach considerably improves the robustness of the QoS against CSI errors while using a cost-effective sampling strategy.

  User Index                        1       2       3       4
  --------------------------------- ------- ------- ------- -------
  Proposed Approach                 39946   39946   39946   39946
  Without considering uncertainty   15205   15123   15197   15214

  : Number of tests that QoS is met[]{data-label="tab:uncertainty"}

Overcoming the Over-Conservativeness of Scenario Generation {#subsec:SG}
-----------------------------------------------------------

As we point out in Section \[subsec:problem\_analysis\], the scenario generation approach is over-conservative since it requires the target QoS constraints to be satisfied for all samples, which leads to a smaller feasible region. Here we use numerical experiments to demonstrate the advantage of the presented robust optimization approximation approach in overcoming this over-conservativeness.
Consider the feasibility problem of the robust optimization approximation approach given by $$\begin{aligned} \mathop{\textrm{find}} && {{\bm{V}}},{{\bm{\lambda}}} \nonumber \\ \textrm{subject to} && (\ref{constraint:lifted_sdp}),\lambda_k\geq 0, \forall k\in[K], \nonumber\\ && \sum_{l=1}^{K}\textrm{Tr}({{\bm{V}}}_{ll}[n,n])\leq P_n^{\text{Tx}}, \forall n\in[N], \nonumber\\ && {{\bm{V}}}\succeq{{\bm{0}}},\end{aligned}$$ and the feasibility problem of the scenario approach given by $$\begin{aligned} \mathop{\textrm{find}} && {{\bm{V}}} \nonumber \\ \textrm{subject to} && {{{\bm{h}}}_{k}^{(i)}}^{\sf{H}}\Big(\frac{1}{\gamma_k}{{\bm{V}}}_{kk}-\sum_{l\ne k}{{\bm{V}}}_{ll}\Big){{{\bm{h}}}_{k}^{(i)}}\geq \sigma_k^2, ~\forall k, i \nonumber\\ && \sum_{l=1}^{K}\textrm{Tr}({{\bm{V}}}_{ll}[n,n])\leq P_n^{\text{Tx}}, ~\forall n, \nonumber\\ && {{\bm{V}}}\succeq{{\bm{0}}}.\end{aligned}$$ Note that we adopt the SDR technique in both approaches for fairness. We collect $D=200$ i.i.d. channel samples for each realization, run both algorithms for $25$ random realizations of the data set, and compare the probability of yielding feasible solutions using the scenario generation approach and the presented robust optimization approximation approach. The results in Fig. \[fig:feasibility\] reveal that the statistical learning based robust approximation considerably improves the probability of feasibility compared with the scenario generation approach, even though we only obtain sufficient conditions for the robust optimization counterpart using the S-procedure in Section \[subsec:tractable\_reformulation\].
![Probability of feasibility using scenario generation and the robust optimization approximation approach over the target SINR $\gamma$.[]{data-label="fig:feasibility"}](feasibility){width="\columnwidth"}

Convergence Behavior {#subsec:convergence}
--------------------

By choosing the reweighting parameter as $c=1/\ln(1+\tau^{-1})$, the proposed reweighted power minimization approach, i.e., Algorithm \[algorithm:proposed\], essentially approximates the $\ell_0$-norm according to $I_{x\ne 0} = \|x\|_0 = \lim_{\tau \rightarrow 0}{\ln(1+x\tau^{-1})}/{\ln(1+\tau^{-1})}$, and minimizes the approximated objective function $$\begin{aligned} f({{\bm{V}}})&=\sum_{n,l}\Big(\frac{1}{\eta_n}\textrm{Tr}({{\bm{V}}}_{ll}[n,n])\nonumber\\ &\quad+ P_{nl}^{\text{c}}\frac{\ln(1+\tau^{-1}\textrm{Tr}({{\bm{V}}}_{ll}[n,n]))}{\ln(1+\tau^{-1})}\Big)+\mu\mathcal{R}({{\bm{V}}})\label{eq:obj_value}\end{aligned}$$ under constraints (\[con:QoS\_sdp\]) and (\[con:transmit\_power\_sdp\]) using a majorization-minimization (MM) technique as stated in [@dai2016energy]. Fig. \[fig:convergence\_obj\] illustrates the convergence behavior of the proposed reweighted power minimization approach in terms of the objective function $f$ with $D=200$ collected channel samples. We also plot the corresponding trajectories of the group sparsity of the aggregative beamforming vector ${{\bm{v}}}$ in Fig. \[fig:convergence\_num\], i.e., the total number of inference tasks performed at all edge computing nodes. We observe that the number of tasks to be performed at edge computing nodes increases with the target QoS $\gamma$, which leads to higher total power consumption of the edge inference system.
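The smooth $\ell_0$ surrogate used above is easy to check numerically: $\ln(1+x\tau^{-1})/\ln(1+\tau^{-1})$ equals $0$ at $x=0$, equals $1$ at $x=1$, and tends to $I_{x\ne 0}$ for fixed $x>0$ as $\tau\to 0$:

```python
import math

def l0_approx(x, tau):
    # Smooth surrogate of the indicator I_{x != 0} for x >= 0.
    return math.log1p(x / tau) / math.log1p(1.0 / tau)

tau = 1e-9
vals = [l0_approx(x, tau) for x in (0.0, 0.5, 1.0, 10.0)]
```

For small $\tau$, all positive arguments map close to $1$, which is why minimising the weighted surrogate drives small groups exactly to zero.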
![Convergence behavior of the proposed reweighted power minimization approach with different target SINR $\gamma$.[]{data-label="fig:convergence_obj"}](convergence){width="\columnwidth"}

![Trajectories of the total number of inference tasks performed at all edge computing nodes with different target SINR $\gamma$.[]{data-label="fig:convergence_num"}](convergence_tasks){width="\columnwidth"}

Total Power Consumption over Target SINR
----------------------------------------

We then conduct numerical experiments to compare the performance of different algorithms for problem $\mathscr{P}$ with $D=200$ i.i.d. channel samples, including the proposed reweighted power minimization approach, termed “reweighted+DC”, and other state-of-the-art algorithms listed below:

- “mixed $\ell_1/\ell_2$+SDR”: This algorithm is proposed in [@shi2015robust_TSP], which adopts the quadratic variational form of the weighted mixed $\ell_1/\ell_2$-norm for inducing group sparsity and SDR to address the nonconvex quadratic constraints.

- “reweighted+SDR”: We adopt the iteratively reweighted minimization algorithm [@dai2016energy], originally proposed to improve the energy efficiency of downlink transmission in cloud-RAN, for inducing the group sparsity, and SDR [@luo2007approximation] for the nonconvex quadratic constraints.

- “CB+SDR”: We assume that all tasks are performed at each AP and conduct coordinated beamforming for minimizing the transmission power consumption under probabilistic-QoS constraints.

We also set $c=1/\ln(1+\tau^{-1})$ as stated in Sec. \[subsec:convergence\]. The performances of all algorithms averaged over $100$ channel realizations are illustrated in Fig. \[fig:totalpower\] and Fig. \[fig:num\_tasks\]. Fig. \[fig:totalpower\] presents the total power consumption of each algorithm and demonstrates that the proposed DC algorithm yields lower total power consumption than the other approaches, owing to its better capability to induce group sparsity, as shown in Fig. \[fig:num\_tasks\].
Note that the total number of tasks for the “CB+SDR” algorithm is always $KN=16$.

![Total power consumption over target SINR.[]{data-label="fig:totalpower"}](total_power){width="\columnwidth"}

![Total \# of tasks performed at APs over target SINR.[]{data-label="fig:num_tasks"}](num_tasks){width="\columnwidth"}

Across all numerical results, we have observed considerable advantages of the presented statistical learning based robust optimization approximation and the proposed reweighted power minimization algorithm in providing energy-efficient processing and robust transmission service for edge inference.

Conclusion
==========

In this work, we presented an energy-efficient processing and robust cooperative transmission framework for executing deep learning inference tasks for mobile devices. Specifically, we proposed to minimize the sum of computation power and transmission power consumption under probabilistic-QoS constraints via adaptive task selection and coordinated beamforming design. The joint chance constraints therein were further addressed by a statistical learning based robust optimization approximation approach. This yielded a group sparse beamforming problem with nonconvex quadratic constraints. We then developed a reweighted power minimization approach by iteratively solving a DC regularized reweighted $\ell_1$ minimization problem and updating the weights, thereby tackling both the group sparse objective function and the nonconvex quadratic constraints. Numerical results demonstrated that the proposed approach achieves the lowest total power consumption among the compared state-of-the-art algorithms, and avoids the drawbacks of other methods for joint chance-constrained programs. There are still some open problems to be studied:

- This work considers the architecture in which each inference task is performed at multiple base stations separately.
An interesting problem is to consider the hierarchical distributed structure of deep neural networks over the cloud and the edge [@teerapittayanon2017distributed].

- In this work, we consider a basic ellipsoid model for each uncertain channel coefficient vector. It is interesting to use a data-driven approach with a more complicated model of the high probability region to reduce its volume, such as clustering the data samples and using a union of ellipsoids as the high probability region.

- It is still an open problem to provide a theoretical guarantee for the proposed reweighted power minimization algorithm, since the conditions for the convergence guarantee of the reweighted approach in [@dai2016energy; @Yuanming_cvxsmooth18] are not met.

Derivation of (\[constraint:nonconvex\_sdp\]) Using S-Procedure {#append:Sprocedure}
===============================================================

We first rewrite (\[eq:uncertainty1\]) as $${{\bm{h}}}_{k}\tau_k = \hat{{{\bm{h}}}}_{k}\tau_k+ {{\bm{B}}}_{k}\tilde{{{\bm{u}}}}_{k}, \tilde{{{\bm{u}}}}_{k}^{\sf{H}}\tilde{{{\bm{u}}}}_{k}\leq \tau_{k}^2,$$ where ${{\bm{u}}}_{k}=\tilde{{{\bm{u}}}}_{k}/\tau_{k}\in\mathbb{C}^{NL},\tau_k>0$.
Let $${{\bm{x}}}_{k}=\begin{bmatrix} \tau_{k} & \tilde{{{\bm{u}}}}_{k}^{\sf{H}} \end{bmatrix}^{\sf{H}}\in\mathbb{C}^{NL+1},$$ so that $${{\bm{h}}}_k\tau_k = {{\bm{H}}}_{k}{{\bm{x}}}_k.$$ Thus we have $$\begin{aligned} &{{\bm{h}}}_k^{\sf{H}}(\frac{1}{\gamma_k}{{\bm{v}}}_k{{\bm{v}}}_k^{\sf{H}}-\sum_{l\ne k} {{\bm{v}}}_l{{\bm{v}}}_l^{\sf{H}}){{\bm{h}}}_k- \sigma_k^2\geq 0 \\ \Leftrightarrow&({{\bm{h}}}_k\tau_k)^{\sf{H}}(\frac{1}{\gamma_k}{{\bm{v}}}_k{{\bm{v}}}_k^{\sf{H}}-\sum_{l\ne k} {{\bm{v}}}_l{{\bm{v}}}_l^{\sf{H}}){{\bm{h}}}_k\tau_k- \sigma_k^2\tau_k^2\geq 0 \\ \Leftrightarrow& ({{\bm{H}}}_{k}{{\bm{x}}}_k)^{\sf{H}}(\frac{1}{\gamma_k}{{\bm{v}}}_k{{\bm{v}}}_k^{\sf{H}}-\sum_{l\ne k} {{\bm{v}}}_l{{\bm{v}}}_l^{\sf{H}}){{\bm{H}}}_{k}{{\bm{x}}}_k- \sigma_k^2\tau_k^2\geq 0 \\ \Leftrightarrow&{{\bm{x}}}_{k}^{\sf{H}}{{\bm{P}}}_k^{0}{{\bm{x}}}_{k}\geq 0,\end{aligned}$$ where ${{\bm{P}}}_k^{0}\in\mathbb{S}^{NL+1}$ is given by $${{\bm{H}}}_{k}^{\sf{H}}(\frac{1}{\gamma_k}{{\bm{v}}}_k{{\bm{v}}}_k^{\sf{H}}-\sum_{l\ne k} {{\bm{v}}}_l{{\bm{v}}}_l^{\sf{H}}){{\bm{H}}}_{k}-\begin{bmatrix} \sigma_k^2 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}.$$ Likewise, $\tilde{{{\bm{u}}}}_{k}^{\sf{H}}\tilde{{{\bm{u}}}}_{k}\leq \tau_{k}^2$ can be rewritten as $${{\bm{x}}}_{k}^{\sf{H}}{{\bm{P}}}_k^{1}{{\bm{x}}}_{k}\geq 0,$$ where ${{\bm{P}}}_k^{1}\in\mathbb{S}^{NL+1}$ is given by $${{\bm{P}}}_k^{1}=\begin{bmatrix} 1 & \\ & -{{\bm{I}}}_{NL} \end{bmatrix}.$$ Thus, by the S-procedure, the implication $${{\bm{x}}}_{k}^{\sf{H}}{{\bm{P}}}_k^{1}{{\bm{x}}}_{k}\geq 0 \Longrightarrow {{\bm{x}}}_{k}^{\sf{H}}{{\bm{P}}}_k^{0}{{\bm{x}}}_{k}\geq 0$$ holds if $${{\bm{P}}}_k^{0}\succeq \lambda_{k}{{\bm{P}}}_k^{1}, \quad \lambda_{k}\geq 0.$$ Therefore, we obtain the tractable reformulation for the joint chance constraints (\[constraint:probQoS\]) as $$\begin{aligned}
&{{\bm{H}}}_{k}^{\sf{H}}(\frac{1}{\gamma_k}{{\bm{v}}}_{k}{{\bm{v}}}_{k}^{\sf{H}}-\sum_{l\ne k}{{\bm{v}}}_{l}{{\bm{v}}}_{l}^{\sf{H}}){{\bm{H}}}_{k}\succeq {{\bm{Q}}}_k,\end{aligned}$$ where ${{\bm{\lambda}}}=[\lambda_{k}]\in\mathbb{R}_{+}^{K}$ and ${{\bm{Q}}}_k$ is given by $${{\bm{Q}}}_k=\begin{bmatrix} \lambda_{k}+\sigma_k^2 & \\ & -\lambda_{k}{{\bm{I}}}_{NL} \end{bmatrix}\in\mathbb{C}^{(NL+1)\times(NL+1)}.$$

[^1]: K. Yang is with the School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China, also with the Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 200050, China, and also with the University of Chinese Academy of Sciences, Beijing 100049, China (e-mail: yangkai@shanghaitech.edu.cn).

[^2]: Y. Shi is with the School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China (e-mail: shiym@shanghaitech.edu.cn).

[^3]: W. Yu is with the Electrical and Computer Engineering Department, University of Toronto, Toronto, ON M5S 3G4, Canada (e-mail: weiyu@comm.utoronto.ca).

[^4]: Z. Ding is with the Department of Electrical and Computer Engineering, University of California at Davis, Davis, CA 95616 USA (e-mail: zding@ucdavis.edu).
---
abstract: 'An existence result is presented for the worst-case error of lattice rules for high dimensional integration over the unit cube, in an unanchored weighted space of functions with square-integrable mixed first derivatives. Existing studies rely on random shifting of the lattice to simplify the analysis, whereas in this paper neither shifting nor any other form of randomisation is considered. Given that a certain number-theoretic conjecture holds, it is shown that there exists an $N$-point rank-one lattice rule which gives a worst-case error of order $1/\sqrt{N}$ up to a (dimension-independent) logarithmic factor. Numerical results suggest that the conjecture is plausible.'
author:
- 'Yoshihito Kazashi, Frances Y. Kuo, and Ian H. Sloan'
date:
- 13 November 2018
title: 'Worst-case error for unshifted lattice rules without randomisation'
---

Introduction
============

This paper is concerned with an error estimate for a numerical integration rule for functions defined on the high-dimensional hypercube $[0,1)^{s}$, $s\in\mathbb{N}$, $$\label{eq:int} \int_{[0,1)^{s}}f({{\boldsymbol{x}}})\,{{\mathrm{d}}}{{\boldsymbol{x}}}.$$ More specifically, we consider the worst-case error for rank-one lattice rules. The main contribution of this paper is the analysis of unshifted lattice rules without randomisation; we allow neither shifting nor any other form of randomisation. Given the truth of a certain conjecture with a number-theoretic flavour (Conjecture \[conj:alpha\]), our results show the existence of a deterministic cubature point set that attains a worst-case error of order $1/\sqrt{N}$, up to a logarithmic factor, where $N$ is the number of cubature points, with a dimension-independent constant (Corollary \[cor:final-cor\]).
An $N$-point rank-one lattice rule in $s$ dimensions is an equal-weight cubature rule for approximating the integral — a quasi-Monte Carlo rule — of the form $$\label{eq:qmc} \frac1{N}\sum_{k=0}^{N-1}f({{\boldsymbol{t}}}_k),$$ with cubature points $$\label{eq:lat} {{\boldsymbol{t}}}_{k} =\bigg\{\frac{k{{\boldsymbol{z}}}}{N}\bigg\} ,\quad k=0,\dotsc, N-1,$$ for some ${{\boldsymbol{z}}}\in \{1,\dotsc ,N-1\}^{s}$, where $\{{{\boldsymbol{x}}}\}\in [0,1)^s$ for ${{\boldsymbol{x}}}=(x_1,\dots, x_s)\in [0,\infty)^{s}$ denotes the vector consisting of the fractional part of each component of ${{\boldsymbol{x}}}$. The choice of ${{\boldsymbol{z}}}$, known as the *generating vector*, completely determines the cubature points, and thus the quality of the cubature rule. Our interest in this paper lies in proving the existence of a good generating vector ${{\boldsymbol{z}}}\in \{1,\dotsc ,N-1\}^{s}$. The figure of merit we consider is the so-called worst-case error, defined by $$e(N,{{\boldsymbol{z}}}):= e(N,(\{k{{\boldsymbol{z}}}/N\})_k):= \sup\limits_{f\in H_{s,{{\boldsymbol{\gamma}}}},\,\|f\|_{H_{s,{{\boldsymbol{\gamma}}}}}\le 1} \bigg| \int_{[0,1)^{s}} f({{\boldsymbol{x}}})\,{{\mathrm{d}}}{{\boldsymbol{x}}}- \frac1{N}\sum_{k=0}^{N-1}f(\{{k{{\boldsymbol{z}}}}/{N}\}) \bigg|,$$ where $H_{s,{{\boldsymbol{\gamma}}}}$ is a suitable normed space consisting of non-periodic functions over $[0,1)^s$, specified below. As is standard nowadays, we will assume that the norm incorporates certain parameters $\gamma_{{\mathrm{\mathfrak{u}}}}$, one for each subset ${{\mathrm{\mathfrak{u}}}}\subseteq \{1,2,\ldots,s\}$, since without weights integration problems are often intractable; see [@Dick.J_etal_2013_Acta; @SW98] for more details. It is natural to seek a generating vector ${{\boldsymbol{z}}}$ that makes the worst-case error small.
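The construction of the cubature points and the equal-weight rule can be sketched in a few lines; the generating vector below is illustrative, not an optimised choice:

```python
import math

def lattice_points(N, z):
    """Rank-one lattice points t_k = {k z / N}, k = 0, ..., N-1."""
    s = len(z)
    return [[(k * z[j] % N) / N for j in range(s)] for k in range(N)]

def qmc(f, N, z):
    """Equal-weight QMC estimate (1/N) sum_k f(t_k)."""
    return sum(f(t) for t in lattice_points(N, z)) / N

N, z = 128, (1, 47, 101)
pts = lattice_points(N, z)
# f(x) = prod_j x_j has exact integral (1/2)^s = 0.125 over [0,1)^3.
est = qmc(lambda x: math.prod(x), N, z)
```

All $N$ points lie in $[0,1)^s$, and the rule reproduces constants exactly; for the smooth test integrand the estimate is close to the exact value even at this modest $N$.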
If $H_{s,{{\boldsymbol{\gamma}}}}$ is a reproducing kernel Hilbert space then the worst-case error $e(N,{{\boldsymbol{z}}})$ can be computed for any value of ${{\boldsymbol{z}}}$ (see below), but there is no known formula that gives a good value of ${{\boldsymbol{z}}}$ for general $s$. The strategy we take in this paper is to prove an existence result, by considering the average of $e^2(N,{{\boldsymbol{z}}})$ over all possible generating vectors ${{\boldsymbol{z}}}\in Z_{N}^s$, with $Z_N:=\{1,2, \ldots,N-1 \}$, i.e., we compute $$\label{eq:avg} \overline{e}^{2}(N ):=\frac1{(N-1)^s}\sum_{{{\boldsymbol{z}}}\in Z_{N}^s} e^2(N,{{\boldsymbol{z}}});$$ and then use the well known principle that there must exist one choice of ${{\boldsymbol{z}}}$ that is at least as good as the average. With the support of a certain number-theoretic conjecture (Conjecture \[conj:alpha\], which does not depend on the choice of ${{\boldsymbol{z}}}$), we will show that $$\overline{e}^{2}(N ) \le \frac{C\,(\ln N)^{\alpha}}{N},$$ with $C$ independent of $N$, where $\alpha>0$ is an exponent appearing in the conjecture that depends on neither $s$ nor $N$. Moreover, $C$ is independent of $s$ for suitable weights $\gamma_{{\mathrm{\mathfrak{u}}}}$. It follows that there exists a generating vector ${{\boldsymbol{z}}}^*$ for which the worst-case error $e(N,{{\boldsymbol{z}}}^*)$ is bounded by $\sqrt{C}(\ln N)^{\alpha/2}/\sqrt{N}$ (Corollary \[cor:final-cor\]). For periodic function spaces, error estimates for rank-one lattice rules are well known; see [@Dick.J_etal_2013_Acta; @Hic98b; @Nie92; @Sloan.I.H_Joe_1994_book] and references therein. For non-periodic functions, with the aid of *shifting*—changing the cubature points from $\{k{{\boldsymbol{z}}}/N\}$ to $\{k{{\boldsymbol{z}}}/N + {\boldsymbol{\Delta}}\}$ with a shift ${\boldsymbol{\Delta}}\in [0,1)^s$—good results have been obtained for shift-averaged worst-case errors; see [@Dick.J_etal_2013_Acta] and references therein for more details.
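The averaging argument can be illustrated with an explicitly computable surrogate from the periodic setting just mentioned: for the weighted Korobov space with smoothness $\alpha=2$ and product weights, a standard formula from the lattice rule literature gives $e^2(N,{{\boldsymbol{z}}})=-1+\frac1N\sum_{k=0}^{N-1}\prod_j\big(1+2\pi^2\gamma_j B_2(\{kz_j/N\})\big)$ with $B_2(x)=x^2-x+1/6$. (The non-periodic space of this paper requires the analysis in the text; this is only an analogue.) There must then exist a ${{\boldsymbol{z}}}$ at least as good as the average:

```python
import math

def B2(x):
    # Second Bernoulli polynomial.
    return x * x - x + 1.0 / 6.0

def e2(N, z, gamma):
    """Squared worst-case error in the weighted Korobov space, alpha = 2."""
    total = 0.0
    for k in range(N):
        prod = 1.0
        for j, zj in enumerate(z):
            prod *= 1.0 + 2.0 * math.pi ** 2 * gamma[j] * B2((k * zj % N) / N)
        total += prod
    return total / N - 1.0

N = 31                                   # prime, so every z_j is coprime to N
gamma = [1.0, 0.5]
errors = {(z1, z2): e2(N, (z1, z2), gamma)
          for z1 in range(1, N) for z2 in range(1, N)}
avg = sum(errors.values()) / len(errors)
best = min(errors.values())              # at least one z is as good as average
```

Exhaustive search over all $(N-1)^2$ generating vectors confirms the elementary principle: the best squared error never exceeds the average.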
In the present paper, however, the function space is not periodic, and the worst-case error we consider is not shift-averaged. Approaches to estimating the error for lattice rules for non-periodic functions without randomisation include [@Dick.J_etal_2014_tent; @Goda.T_etal_2017_tent], where a change of variable called the tent transform was applied to the integrand; in this paper we do not transform the integrand. The shift-averaged worst-case error mentioned above is the expected worst-case error for *randomly shifted* lattice rules, see [@Dick.J_etal_2013_Acta]. The present paper is a first step in our project to “*derandomise*” randomly shifted lattice rules—that is, to produce explicit shifts (for an untransformed rule) that give worst-case errors that lose no accuracy compared to the shift-averaged worst-case errors. While randomly shifted lattice rules have the advantages of providing an online error estimator and of being simple to analyse and construct, they are less efficient than a good deterministic rule, because of the need in practice to repeat the calculations of integrals with fixed ${{\boldsymbol{z}}}$ for some number (say $30$) of random shifts. In this first step of the programme, we study the case of zero shifts. (Experience suggests that this is a poor choice—perhaps the worst!) There are related works [@Joe04; @Joe06] in which a quantity called ‘$R$’, connected to the so-called (weighted) star discrepancy, was considered as the error criterion. In the weighted setting of [@Joe06], lattice rules can be constructed to achieve an $O(N^{-1+\delta})$ convergence rate for any $\delta >0$, with the implied constant independent of $s$ and $N$ for suitable weights. After establishing the setting in Section \[sec:preliminaries\], the conjecture and the main results are stated in Section \[sec:estimate\]. Section \[sec:num\] provides numerical evidence relating to the conjecture. Section \[sec:conclusion\] concludes the paper.
Preliminaries {#sec:preliminaries} ============= In this section, we introduce the setting and recall some facts on lattice rules that will be needed later. Throughout this paper, we assume that $N$, the number of cubature points, is a prime number. Let us start with a general reproducing kernel Hilbert space (RKHS) $H_{s}$ with a reproducing kernel $K\colon[0,1]^s\times [0,1]^s\to \mathbb{R}$ that satisfies $$\int_{[0,1]^s}\int_{[0,1]^s} K({{\boldsymbol{x}}},{{\boldsymbol{y}}}) \,{{\mathrm{d}}}{{\boldsymbol{x}}}\,{{\mathrm{d}}}{{\boldsymbol{y}}}< \infty.$$ It is well known that for a general quasi-Monte Carlo (QMC) rule \[eq:qmc\], the square of the worst-case error in $H_s$, $$e(N,({{\boldsymbol{t}}}_k)_k):=\sup\limits_{f\in H_{s},\, \|f\|_{H_{s}}\le 1} \bigg| \int_{[0,1)^{s}} f({{\boldsymbol{x}}})\,{{\mathrm{d}}}{{\boldsymbol{x}}}- \frac1{N}\sum_{k=0}^{N-1}f({{\boldsymbol{t}}}_k) \bigg|,$$ is given by $$\begin{aligned} &e^{2}(N,({{\boldsymbol{t}}}_k)_k) \\ &= \int_{[0,1]^{s}}\int_{[0,1]^{s}} K({{\boldsymbol{x}}},{{\boldsymbol{y}}})\,{{\mathrm{d}}}{{\boldsymbol{x}}}\,{{\mathrm{d}}}{{\boldsymbol{y}}}-\frac{2}{N}\sum_{k=0}^{N-1} \int_{[0,1]^{s}} K({{\boldsymbol{t}}}_{k} ,{{\boldsymbol{x}}})\,{{\mathrm{d}}}{{\boldsymbol{x}}}+\frac{1}{N^{2}}\sum_{k=0}^{N-1}\sum_{k'=0}^{N-1} K({{\boldsymbol{t}}}_{k} ,{{\boldsymbol{t}}}_{k'}),\end{aligned}$$ see for example [@Dick.J_etal_2013_Acta Theorem 3.5]. We specialise to the case $$\int_{[0,1]^{s}} K({{\boldsymbol{x}}},{{\boldsymbol{y}}})\,{{\mathrm{d}}}{{\boldsymbol{x}}}=1 \qquad\text{for any } {{\boldsymbol{y}}}\in [0,1]^{s} ,$$ to obtain $$\begin{aligned} \label{eq:err1} e^{2}(N,({{\boldsymbol{t}}}_k)_k) &= \frac{1}{N^{2}}\sum_{k=0}^{N-1}\sum_{k'=0}^{N-1} K({{\boldsymbol{t}}}_{k},{{\boldsymbol{t}}}_{k'}) -1.\end{aligned}$$ In particular, for the QMC rule we here take an unshifted lattice rule with cubature points given by \[eq:lat\] for some ${{\boldsymbol{z}}}\in Z_N^s$.
Then, we have $$e^{2}(N,{{\boldsymbol{z}}}) = \frac{1}{N^{2}} \sum_{k=0}^{N-1}\sum_{k'=0}^{N-1} K\left(\bigg\{\frac{k{{\boldsymbol{z}}}}{N}\bigg\} ,\bigg\{\frac{k'{{\boldsymbol{z}}}}{N}\bigg\}\right) -1. \label{eq:decomp-err-in-u}$$ Now we further specialise the RKHS to $H_{s,{{\boldsymbol{\gamma}}}}$ with kernel $$K_{s,{{\boldsymbol{\gamma}}}}({{\boldsymbol{x}}},{{\boldsymbol{y}}}) =\sum_{{{\mathrm{\mathfrak{u}}}}\subseteq \{1:s\}} \gamma_{{{\mathrm{\mathfrak{u}}}}}\prod_{j\in {{\mathrm{\mathfrak{u}}}}} \eta ( x_{j} ,y_{j}) , \label{eq:def-kernel}$$ where $$\eta(x,y) :=\frac{1}{2} B_{2}( |x-y|) +\Big( x-\frac{1}{2}\Big)\Big( y-\frac{1}{2}\Big) ,\qquad x,y\in [0,1] .$$ Here $B_2(t)=t^2-t+1/6$, $t\in\mathbb{R}$, is the Bernoulli polynomial of degree $2$, $\{1:s\}$ is a shorthand notation for $\{1,2,...,s\}$, and the sum in \[eq:def-kernel\] is over all subsets ${{\mathrm{\mathfrak{u}}}}\subseteq\{1:s\}$, including the empty set; and ${{\boldsymbol{\gamma}}}= \{\gamma_{{{\mathrm{\mathfrak{u}}}}}\}_{{{\mathrm{\mathfrak{u}}}}\subset \mathbb{N}}$ is an arbitrary collection of positive numbers called *weights* with $\gamma_{\emptyset}=1$. The choice of weights plays an important role in deriving a dimension-independent error estimate; see Corollary \[cor:final-cor\]. This space, discussed fully in [@Dick.J_etal_2013_Acta], is an “unanchored” space of functions on the unit cube with square integrable mixed first derivatives. We again refer the reader to [@Dick.J_etal_2013_Acta] for more details.
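The kernel \[eq:def-kernel\] and the error formula \[eq:decomp-err-in-u\] can be evaluated directly. The Python sketch below assumes, for simplicity, *product* weights $\gamma_{{{\mathrm{\mathfrak{u}}}}}=\prod_{j\in{{\mathrm{\mathfrak{u}}}}}\gamma_j$ (a special case of the general weights above), for which the sum over subsets collapses into a product over coordinates; the function names are ours.

```python
def bernoulli2(t):
    """Bernoulli polynomial B_2(t) = t^2 - t + 1/6."""
    return t * t - t + 1.0 / 6.0

def eta(x, y):
    """Building block of the kernel: (1/2) B_2(|x - y|) + (x - 1/2)(y - 1/2)."""
    return 0.5 * bernoulli2(abs(x - y)) + (x - 0.5) * (y - 0.5)

def kernel(x, y, gamma):
    """K(x, y) for product weights: prod_j (1 + gamma_j * eta(x_j, y_j)).

    This product equals the sum over all subsets u of prod_{j in u} gamma_j * eta,
    with the empty set contributing gamma_emptyset = 1.
    """
    value = 1.0
    for xj, yj, gj in zip(x, y, gamma):
        value *= 1.0 + gj * eta(xj, yj)
    return value

def squared_worst_case_error(N, z, gamma):
    """e^2(N, z) = (1/N^2) sum_{k,k'} K(t_k, t_{k'}) - 1 at the lattice points."""
    pts = [[(k * zj / N) % 1.0 for zj in z] for k in range(N)]
    return sum(kernel(p, q, gamma) for p in pts for q in pts) / N**2 - 1.0

# The squared worst-case error is a squared norm, hence nonnegative.
val = squared_worst_case_error(31, [1, 14], [1.0, 1.0])
```

Being an $O(N^2 s)$ computation, this is only practical for small $N$, but it provides a reference implementation against which faster formulas can be checked.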
For this space it follows from \[eq:decomp-err-in-u\] that $$e^{2}(N,{{\boldsymbol{z}}}) =\sum_{{\emptyset \neq } {{\mathrm{\mathfrak{u}}}}\subseteq \{1:s\}} \gamma_{{{\mathrm{\mathfrak{u}}}}} \, e^{2}_{{{\mathrm{\mathfrak{u}}}}}(N,{{\boldsymbol{z}}}_{{\mathrm{\mathfrak{u}}}}) , \label{eq:def-e2}$$ where for ${{\mathrm{\mathfrak{u}}}}\subseteq \{1:s\}$ and ${{\boldsymbol{z}}}_{{\mathrm{\mathfrak{u}}}}= (z_j)_{j\in{{\mathrm{\mathfrak{u}}}}}$, from \[eq:decomp-err-in-u\] and \[eq:def-kernel\] $$\begin{aligned} & e^{2}_{{{\mathrm{\mathfrak{u}}}}}(N,{{\boldsymbol{z}}}_{{\mathrm{\mathfrak{u}}}})\notag\\ & := \frac{1}{N^{2}} \sum_{k=0}^{N-1} \sum_{k'=0}^{N-1}\prod_{j\in {{\mathrm{\mathfrak{u}}}}} \bigg[ \frac{1}{2} B_{2}\bigg( \bigg|\bigg\{\frac{kz_{j}}{N}\bigg\} -\bigg\{\frac{k'z_{j}}{N}\bigg\} \bigg| \bigg) +\left(\bigg\{\frac{kz_{j}}{N} \bigg\} -\frac{1}{2}\right) \left(\bigg\{\frac{k'z_{j}}{N} \bigg\} -\frac{1}{2}\right) \bigg] . \label{eq:def-eu2}\end{aligned}$$ Thus the quantity $e^{2}_{{{\mathrm{\mathfrak{u}}}}}(N,{{\boldsymbol{z}}}_{{\mathrm{\mathfrak{u}}}})$ is a key to deriving an estimate for $e^{2}(N,{{\boldsymbol{z}}})$. Existence result for worst-case error {#sec:estimate} ===================================== In this section, we derive an existence result for the worst-case error. We first note the following property. \[prop:property-B2\] Let $g$ be a function that satisfies $g(t)=g(1-t)$ for $t\in [0,1]$. Then for $a,b\ge0$ we have $$g(|\{a\} -\{b\}|) = g(\{a-b\}),$$ where, as before, the braces indicate that we take the fractional part of the real number. Note first that $\{a\},\{b\}\in[0,1)$ and therefore $\{a\} -\{b\}\in (-1,1)$. It is clear that $\{a\} -\{b\}$ differs from $\{a-b\}$ by $1$ or $0$. If $\{a\}=\{b\}$, then $\{a-b\}=0$ and the result is trivial. If $\{a\}>\{b\}$, then $\{a\}-\{b\}\in (0,1)$, and so $\{a\}-\{b\}=\{a-b\}$. Thus, again the result is trivial. If $\{a\}<\{b\}$, then $|\{a\}-\{b\}|=\{b\}-\{a\}\in (0,1)$ and so $|\{a\}-\{b\}|=\{b-a\}$.
Thus, using $g(t)=g(1-t)$, $t\in [0,1]$, we have $$g(|\{a\}-\{b\}|) = g(\{b\}-\{a\}) = g(\{b-a\}) = g(1-\{b-a\}) = g(\{a-b\}),$$ where in the last step we used the identity $\{t\}+\{-t\}=1$ for $t\not\in \mathbb{Z}$. In particular, Proposition \[prop:property-B2\] applies to the function $B_2(\cdot)$ so we can rewrite \[eq:def-eu2\] as $$\begin{aligned} &e^{2}_{{{\mathrm{\mathfrak{u}}}}}(N,{{\boldsymbol{z}}}_{{\mathrm{\mathfrak{u}}}}) \notag\\ &=\frac{1}{N^{2}}\sum_{k=0}^{N-1}\sum_{k'=0}^{N-1}\prod_{j\in {{\mathrm{\mathfrak{u}}}}} \bigg[ \frac{1}{2} B_{2}\left( \bigg\{\frac{( k-k') z_{j}}{N}\bigg\} \right) +\left(\bigg\{\frac{kz_{j}}{N} \bigg\} -\frac{1}{2}\right)\left(\bigg\{\frac{k'z_{j}}{N} \bigg\} -\frac{1}{2}\right) \bigg].\label{eq:(6)}\end{aligned}$$ Now we obtain the average over ${{\boldsymbol{z}}}\in Z_{N}^s$. From \[eq:avg\] and \[eq:def-e2\] we have $$\overline{e}^{2}(N) =\sum_{\emptyset \neq {{\mathrm{\mathfrak{u}}}}\subseteq \{1:s\}} \gamma_{{{\mathrm{\mathfrak{u}}}}}\, \overline{e}^{2}_{{{\mathrm{\mathfrak{u}}}}}(N), \label{eq:def-bar-e2N}$$ where $$\begin{aligned} \overline{e}^{2}_{{{\mathrm{\mathfrak{u}}}}}(N) := \frac{1}{(N-1)^{s}}\sum_{{{\boldsymbol{z}}}\in Z_{N}^s} e^{2}_{{{\mathrm{\mathfrak{u}}}}}(N,{{\boldsymbol{z}}}_{{\mathrm{\mathfrak{u}}}}) &= \frac{1}{(N-1)^{|{{\mathrm{\mathfrak{u}}}}|}}\sum_{{{\boldsymbol{z}}}_{{\mathrm{\mathfrak{u}}}}\in Z_N^{|{{\mathrm{\mathfrak{u}}}}|}} e^{2}_{{{\mathrm{\mathfrak{u}}}}}(N,{{\boldsymbol{z}}}_{{\mathrm{\mathfrak{u}}}}) \nonumber\\ & = \frac{1}{N^{2}} \sum_{k=0}^{N-1} \sum_{k'=0}^{N-1} ( X_{N;k,k'} + J_{N;k,k'} )^{|{{\mathrm{\mathfrak{u}}}}|} ,\label{eq:def-bar-e-u}\end{aligned}$$ with $$X_{N;k,k'}:= \frac{1}{2(N-1)}\sum ^{N-1}_{z =1} B_{2}\left( \bigg\{\frac{( k-k') z}{N}\bigg\} \right) , \label{eq:def-XNkk'}$$ and $$J_{N;k,k'} := \frac{1}{N-1}\sum_{z=1}^{N-1} \bigg(\bigg\{\frac{kz}{N} \bigg\}-\frac{1}{2}\bigg) \bigg(\bigg\{\frac{k'z}{N} \bigg\}-\frac{1}{2}\bigg).
\label{eq:def-JNkk'}$$ Further, the binomial theorem gives $$\overline{e}^{2}_{{{\mathrm{\mathfrak{u}}}}}(N) = \frac{1}{N^{2}} \sum_{k=0}^{N-1} \sum_{k'=0}^{N-1} \sum_{ {{\mathrm{\mathfrak{v}}}}\subseteq {{\mathrm{\mathfrak{u}}}}} (X_{N;k,k'})^{|{{\mathrm{\mathfrak{u}}}}\setminus {{\mathrm{\mathfrak{v}}}}|} (J_{N;k,k'})^{|{{\mathrm{\mathfrak{v}}}}|} .\label{eq:e2binom}$$ In seeking an error estimate for the generating-vector-averaged worst-case error $\overline{e}^{2}(N)$, we take the point of view that terms of order $1/N$ or better are relatively harmless, so we concentrate on isolating the terms that converge more slowly. In the following two subsections, we derive estimates for $X_{N;k,k'}$ and $J_{N;k,k'}$. It turns out that, roughly speaking, the terms $(X_{N;k,k'})^{|{{\mathrm{\mathfrak{u}}}}\setminus {{\mathrm{\mathfrak{v}}}}|}$ yield the order $1/N$. The terms $(J_{N;k,k'})^{|{{\mathrm{\mathfrak{v}}}}|}$ seem to converge more slowly, and require more detailed analysis. Estimates for $X_{N;k,k'}$ {#estimaesfor} -------------------------- We have the following expression for $X_{N;k,k'}$. \[lem:estim-XNkk’\] For $N$ prime and $k,k'\in \{0,1,\dots,N-1\}$, the quantity $X_{N;k,k'}$ defined in \[eq:def-XNkk'\] satisfies $$X_{N;k,k'}= \begin{cases} \displaystyle\frac1{12} & \mbox{if } k=k', \\[5pt] -\displaystyle\frac1{12N} & \mbox{if } k\not=k'. \end{cases}$$ For $k=k'$, every term in the sum equals $B_2(0)=\frac16$, and hence $X_{N;k,k}=\frac{1}{2}\cdot\frac16=\frac1{12}$.
For $k\not=k'$, recalling the (absolutely convergent) series representation $$B_{2}(x)=\frac{1}{2\pi^{2}}\sum_{h\ne 0}\frac{\exp(2\pi \mathrm{i}hx)}{h^{2}}, \qquad x\in [0,1],$$ we have $$\begin{aligned} X_{N;k,k'} &=\frac{1}{4\pi^{2}(N-1)}\sum_{h\ne 0}\frac{1}{h^{2}} \sum_{z=1}^{N-1}\exp( 2\pi \mathrm{i}h( k-k') z/N)\\ &=\frac{1}{4\pi^{2}(N-1)}\sum_{h\ne 0}\frac{1}{h^{2}}\left(\sum ^{N-1}_{z=0}\exp( 2\pi \mathrm{i}zh( k-k') /N) -1\right) ,\end{aligned}$$with $$\sum ^{N-1}_{z=0}\exp( 2\pi \mathrm{i}zh(k-k')/N) =\begin{cases} N & \text{if}\ \ h(k-k')\equiv_N 0, \\ 0 & \text{if}\ \ h(k-k')\not\equiv_N 0. \end{cases} $$ Throughout this paper, the notation $a \equiv_N b$ means that $a\equiv b \pmod N$, and similarly $a \not\equiv_N b$ means that $a\not\equiv b \pmod N$. Since $N$ is prime and $k\ne k'$, we conclude that all possible values of $k-k'$, namely, $\pm 1,\pm 2, \ldots$, $\pm (N-1)$, are relatively prime to $N$, and so $h(k-k')\equiv_N 0 \iff h\equiv_N 0$. Thus $$\begin{aligned} X_{N;k,k'} &= \frac{1}{4\pi^{2}(N-1)} \Bigg(N\sum_{\substack{ h\neq 0\\ h\equiv_N 0 }}\frac{1}{h^{2}} -\sum_{h\ne 0}\frac{1}{h^{2}}\Bigg)\\ &=\frac{1}{4\pi^{2}(N-1)}\bigg(N\sum_{\ell \not{=} 0}\frac{1}{(\ell N)^{2}} -\frac{\pi^{2}}{3}\bigg) =\frac{1}{4\pi^{2}(N-1)}\left(\frac{N}{N^{2}}\frac{\pi^{2}}{3} -\frac{\pi^{2}}{3}\right) =-\frac{1}{12N},\end{aligned}$$which completes the proof. We deduce the following estimate for $\overline{e}^{2}_{{{\mathrm{\mathfrak{u}}}}}(N)$. 
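In passing, Lemma \[lem:estim-XNkk’\] is easy to confirm numerically from the definition \[eq:def-XNkk'\]; a short Python check for the prime $N=13$ (the helper names are ours):

```python
def B2(t):
    """Bernoulli polynomial of degree 2: B_2(t) = t^2 - t + 1/6."""
    return t * t - t + 1.0 / 6.0

def X(N, k, kp):
    """X_{N;k,k'} = (1 / (2(N-1))) * sum_{z=1}^{N-1} B_2({(k - k') z / N})."""
    return sum(B2(((k - kp) * z / N) % 1.0) for z in range(1, N)) / (2.0 * (N - 1))

# Per the lemma: diagonal entries equal 1/12, off-diagonal entries -1/(12N).
diag = X(13, 4, 4)
off = X(13, 2, 7)
```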
\[prop:bd-1st-term-ofe2\] For $N$ prime, the quantity $\overline{e}^{2}_{{{\mathrm{\mathfrak{u}}}}}(N)$ defined in \[eq:def-bar-e-u\] satisfies $$\overline{e}^{2}_{{{\mathrm{\mathfrak{u}}}}}(N) \le c_{{{\mathrm{\mathfrak{u}}}}} \frac1N + \frac1{N^2} \sum_{k=1}^{N-1}\sum_{k'=1}^{N-1} (J_{N;k,k'})^{|{{\mathrm{\mathfrak{u}}}}|}, \qquad \mbox{with}\quad c_{{{\mathrm{\mathfrak{u}}}}}:=\frac{2}{3^{|{{\mathrm{\mathfrak{u}}}}|}} + \frac{1}{4^{|{{\mathrm{\mathfrak{u}}}}|}}.$$ On separating out the diagonal terms of \[eq:e2binom\], we have $$\begin{aligned} \label{eq:two-terms} \overline{e}^{2}_{{{\mathrm{\mathfrak{u}}}}}(N) & =\frac{1}{N^{2}}\sum_{k=0}^{N-1} (X_{N;k,k} +J_{N;k,k} )^{|{{\mathrm{\mathfrak{u}}}}|} +\frac{1}{N^{2}}\sum_{k=0}^{N-1}\sum\limits ^{N-1}_{\substack{ k'=0\\ k'\neq k}}\sum_{{{\mathrm{\mathfrak{v}}}}\subseteq {{\mathrm{\mathfrak{u}}}}} (X_{N;k,k'})^{|{{\mathrm{\mathfrak{u}}}}\setminus {{\mathrm{\mathfrak{v}}}}|} (J_{N;k,k'})^{|{{\mathrm{\mathfrak{v}}}}|} .\end{aligned}$$ From $X_{N;k,k} =\frac{1}{12}$ and $0\le J_{N;k,k}\leq \frac{1}{N-1}\sum ^{N-1}_{z=1}\frac{1}{4} =\frac{1}{4}$, the first term in \[eq:two-terms\] can be bounded by $$\frac{1}{N^{2}}\sum_{k=0}^{N-1} (X_{N;k,k} +J_{N;k,k} )^{|{{\mathrm{\mathfrak{u}}}}|}\leq \frac{1}{3^{|{{\mathrm{\mathfrak{u}}}}|} N} .$$ For the second term in \[eq:two-terms\], noting $ |J_{N;k,k'} |\leq \frac{1}{4}$, from Lemma \[lem:estim-XNkk’\] we have for any ${{\mathrm{\mathfrak{v}}}}\subseteq {{\mathrm{\mathfrak{u}}}}$ $$\big| (X_{N;k,k'})^{|{{\mathrm{\mathfrak{u}}}}\setminus {{\mathrm{\mathfrak{v}}}}|}(J_{N;k,k'})^{|{{\mathrm{\mathfrak{v}}}}|} \big| \leq \frac{1}{(12N)^{|{{\mathrm{\mathfrak{u}}}}\setminus {{\mathrm{\mathfrak{v}}}}|}\, 4^{|{{\mathrm{\mathfrak{v}}}}|}} ,$$ and thus summing over $\displaystyle \mathfrak{v\subsetneq u}$ and estimating $N^{-|{{\mathrm{\mathfrak{u}}}}\setminus{{\mathrm{\mathfrak{v}}}}|}$ by $N^{-1}$ we obtain $$\displaystyle \sum_{\mathfrak{v\subsetneq u}} \big| (X_{N;k,k'})^{|{{\mathrm{\mathfrak{u}}}}\setminus {{\mathrm{\mathfrak{v}}}}|}
(J_{N;k,k'})^{|{{\mathrm{\mathfrak{v}}}}|} \big| \leq \frac{1}{N}\sum_{\mathfrak{v\subsetneq u}}\frac{1}{12^{|{{\mathrm{\mathfrak{u}}}}\setminus {{\mathrm{\mathfrak{v}}}}|}\, 4^{|{{\mathrm{\mathfrak{v}}}}|}}.$$ Further, from the binomial theorem we have $\sum_{\mathfrak{v\subsetneq u}}\frac{1}{12^{|{{\mathrm{\mathfrak{u}}}}\setminus {{\mathrm{\mathfrak{v}}}}|} 4^{|{{\mathrm{\mathfrak{v}}}}|}}=\big(\frac{1}{12}+\frac14\big)^{|{{\mathrm{\mathfrak{u}}}}|}-\frac1{4^{|{{\mathrm{\mathfrak{u}}}}|}} = \frac1{3^{|{{\mathrm{\mathfrak{u}}}}|}} -\frac1{4^{|{{\mathrm{\mathfrak{u}}}}|}} $. Using this, together with the case $\displaystyle {{\mathrm{\mathfrak{v}}}}={{\mathrm{\mathfrak{u}}}}$, we obtain $$\overline{e}^{2}_{{{\mathrm{\mathfrak{u}}}}} (N)\leq \left(\frac{2}{3^{|{{\mathrm{\mathfrak{u}}}}|}} - \frac{1}{4^{|{{\mathrm{\mathfrak{u}}}}|}}\right) \frac{1}{N} +\frac{1}{N^{2}}\sum_{k=0}^{N-1}\sum\limits ^{N-1}_{\substack{ k'=0\\ k'\neq k}} (J_{N;k,k'})^{|{{\mathrm{\mathfrak{u}}}}|}.$$Using again $|J_{N;k,k'}|\le 1/4$, we can separate out the contributions for $k=0$ or $k'=0$, to obtain $$\frac{1}{N^{2}}\sum_{k'=1}^{N-1} |J_{N;0,k'} |^{|{{\mathrm{\mathfrak{u}}}}|} \leq \frac{N-1}{4^{|{{\mathrm{\mathfrak{u}}}}|} N^2} \leq \frac{1}{4^{|{{\mathrm{\mathfrak{u}}}}|} N} \qquad\mbox{and}\qquad \frac{1}{N^{2}}\sum^{N-1}_{k=1} |J_{N;k,0} |^{|{{\mathrm{\mathfrak{u}}}}|}\leq \frac{1}{4^{|{{\mathrm{\mathfrak{u}}}}|}N}.$$ Finally noting $(J_{N;k,k})^{|{{\mathrm{\mathfrak{u}}}}|}\geq 0$ yields the desired result. Estimates for $J_{N;k,k'}$ -------------------------- In this subsection, we derive estimates for $J_{N;k,k'}$ for $k,k'\ge 1$. In the following we will make use of the Fourier series for the real $1$-periodic sawtooth function, defined on $[0,1)$ by $$b(x):=\begin{cases} x-1/2 & \mbox{if } x\in(0,1),\\ 0 & \mbox{if } x=0, \end{cases}$$ and then extended to the whole of ${{\mathbb{R}}}$ by $b(x) = b(x+1)$ for all $x\in{{\mathbb{R}}}$.
Thus $b(x)$ is the periodic version of the first-degree Bernoulli polynomial $B_1(x) = x-1/2$. It is well known (following, for example, from the Dini criterion) that the symmetric partial sums of its Fourier series converge to $b(x)$ pointwise for all $x\in \mathbb{R}$, that is $$b(x) = \lim_{M\to\infty} \frac{\mathrm{i}}{2\pi} \sum_{\substack{h=-M\\h\ne 0}}^M \frac{\exp(2\pi \mathrm{i} h x)}{h},\quad x \in \mathbb{R}.$$ For notational simplicity we shall often omit the limit, writing simply $$b(x) = \frac{\mathrm{i}}{2\pi} \sum_{h\ne 0} \frac{\exp(2\pi \mathrm{i} h x)}{h},\quad x \in \mathbb{R},$$ but this is always to be understood as the limit of the symmetric partial sum. We have the following expression for $J_{N;k,k'}$, $k,k'\in \{1,\dotsc ,N-1\}$. \[lem:identity-J\] For $N$ prime and $k,k'\in \{1,\dotsc ,N-1\}$, the quantity $J_{N;k,k'}$ defined in \[eq:def-JNkk'\] satisfies $$\begin{aligned} J_{N;k,k'}& = \frac{1}{4\pi^{2}}\frac{N}{N-1} \sum_{h \neq 0} \sum_{ \substack{ h'\neq 0\\ h'k'\equiv_N\, hk }} \frac{1}{hh'},\label{eq:J-nonzero}\end{aligned}$$ where the double sum is to be interpreted as the double limit $$\sum_{h\neq 0} \sum_{\substack{ h'\neq 0\\ h'k' \equiv_N\, hk }} \frac{1}{hh'} := \lim\limits_{M\to\infty} \lim\limits_{M'\to\infty} \sum_{h\in\{-M,\dots,M\}\setminus\{0\}} \sum_{\substack{ h' \in\{-M' ,\dots,M' \}\setminus\{0\}\\ h'k' \equiv_N\, hk }} \frac{1}{hh'}.$$ For $(x,y)\in(0,1)^2$ we have $$\begin{aligned} B_{1}( x)B_{1}( y) &\,= \frac{1}{4\pi^2 } \sum_{h\not{=} 0}\sum_{h'\not{=} 0} \frac{e^{2\pi \mathrm{i}hx}}{h} \frac{e^{-2\pi \mathrm{i}h'y}}{h'} \\ &:= \lim_{M\to\infty} \lim_{M'\to\infty} \frac{1}{4\pi^2 } \sum_{h \in\{-M ,\dots,M \}\setminus\{0\}} \sum_{h'\in\{-M',\dots,M'\}\setminus\{0\}} \frac{e^{2\pi \mathrm{i}hx}}{h} \frac{e^{-2\pi \mathrm{i}h'y}}{h'}.\end{aligned}$$ Thus for any $k,k'=1,\dots,N-1$ we have, noting that the finite sum over $z$ may be interchanged with the implied limits, $$\begin{aligned} J_{N;k,k'} & =\frac{1}{4\pi^{2}(N-1)}
\sum_{h\not{=} 0}\sum_{h'\not{=} 0} \frac{1}{hh'} \sum^{N-1}_{z=1} \exp\bigg(2\pi \mathrm{i}\Big(\frac{hk-h'k'}{N}\Big) z\bigg) \\ & =-\frac{1}{4\pi^{2}(N-1)} \sum_{h\not{=} 0}\sum_{h'\not{=} 0} \frac{1}{hh'} + \frac{1}{4\pi^{2}(N-1)} \sum_{h\not{=} 0}\sum_{h'\not{=} 0} \frac{1}{hh'} \sum ^{N-1}_{z=0} \exp\bigg(2\pi \mathrm{i}\Big(\frac{hk-h'k'}{N}\Big) z\bigg).\end{aligned}$$ The first term vanishes because it has as a factor the limit of the product of symmetric partial sums of the odd function $1/h$, each of which is zero. For the second term we use $$\sum ^{N-1}_{z=0} \exp(2\pi \mathrm{i} z(hk-h'k')/N) = \begin{cases} N & \text{if } hk-h'k'\equiv_N 0,\\ 0 & \text{if } hk-h'k'\not\equiv_N 0, \end{cases} $$ which leads to the desired formula. We now want to estimate $J_{N;k,k'}$ for $k,k'\ge 1$ using \[eq:J-nonzero\]. It turns out that it suffices to consider $J_{N;\kappa,1}$, for $\kappa=1,\dots,N-1$. \[prop:bd-e2-by-Jkappa1\] For $N$ prime, the quantity $\overline{e}^{2}_{{{\mathrm{\mathfrak{u}}}}}(N)$ defined in \[eq:def-bar-e-u\] satisfies $$\overline{e}^{2}_{{{\mathrm{\mathfrak{u}}}}}(N) \le c_{{{\mathrm{\mathfrak{u}}}}} \frac1N + \frac{1}{N} \sum_{\kappa=1}^{N-1} |J_{N;\kappa,1}|^{|{{\mathrm{\mathfrak{u}}}}|}.
\label{eq:bd-e2-by-Jkappa1}$$ Because $N$ is prime, for each $k'\in \{1,\ldots,N-1\}$ there is a unique inverse ${k'}^{-1} \in \{1,\dotsc ,N-1\}$ such that $k'{k'}^{-1} \equiv_N 1$, and therefore $$h'k'\equiv_N hk \quad\Leftrightarrow \quad h'\equiv_N h ( k{k'}^{-1}).$$ It follows from \[eq:J-nonzero\] that $$J_{N;k,k'} =J_{N; \kappa,1} , \qquad\mbox{with}\qquad \kappa := k{k'}^{-1} \mod{N},$$ and since $\kappa$ runs over all of $\{1,\dots,N-1\}$ as $k'$ runs over $\{1,\dots,N-1\}$, we have $$\frac1{N^2}\sum_{k=1}^{N-1}\sum_{k'=1}^{N-1} (J_{N;k,k'})^{|{{\mathrm{\mathfrak{u}}}}|} = \frac{N-1}{N^2} \sum_{\kappa=1}^{N-1} (J_{N;\kappa,1})^{|{{\mathrm{\mathfrak{u}}}}|} \le \frac{1}{N} \sum_{\kappa=1}^{N-1} |J_{N;\kappa,1}|^{|{{\mathrm{\mathfrak{u}}}}|}.$$ Applying this to Proposition \[prop:bd-1st-term-ofe2\] yields the desired result. From Lemma \[lem:identity-J\] we have $$\begin{aligned} \label{eq:JNkappa1} J_{N;\kappa,1} &= \frac{1}{4\pi^{2}}\frac{N}{N-1} \sum_{h\neq 0} \sum_{\substack{h'\neq 0\\ h'\equiv_N \, h\kappa}} \frac{1}{hh'} = \frac{1}{4\pi^{2}}\frac{N}{N-1} \lim_{M\to\infty} \lim_{M'\to\infty} S(M,M'),\end{aligned}$$ where $$\begin{aligned} \label{eq:S-def} S(M,M') := \sum_{h\in\{-M,\dots,M\}\setminus\{0\}} \sum_{\substack{ h'\in\{-M',\dots,M'\}\setminus\{0\}\\ h'\equiv_N h\kappa }} \frac{1}{hh'}.\end{aligned}$$ To further simplify $J_{N;\kappa,1}$, we note that for $h,h'$ satisfying $h'\equiv_N h\kappa$ with $\kappa \in \{1,\ldots,N-1\}$ we have $$h\equiv_N 0 \ \ \ \Leftrightarrow \ \ \ h\kappa \equiv_N 0 \ \ \ \Leftrightarrow \ \ \ h'\equiv_N 0.$$ Hence, for the $h\equiv_N 0$ contribution to the double sum we have $$\sum_{\substack{ h\in\{-M,\dots,M\}\setminus\{0\}\\ h\equiv_N 0 }} \frac{1}{h} \sum_{\substack{ h'\in\{-M',\dots,M'\}\setminus\{0\}\\ h'\equiv_N 0 }}\frac{1}{h'} =0.$$ Thus, we can restrict the double sum to $h\not\equiv_N 0$ so that $$\begin{aligned} S(M,M') = \sum_{\substack{ h\in\{-M,\dots,M\}\setminus\{0\}\\ h\not\equiv_N 0 }} \sum_{\substack{
h'\in\{-M',\dots,M'\}\setminus\{0\}\\ h'\equiv_N\, h\kappa }} \frac{1}{hh'}. \label{eq:S-simp}\end{aligned}$$ We now assume $N\ge 3$ so that $N-1$ is even for $N$ prime. We can write $h\not\equiv_N 0$ as $$h=\ell N+q,\quad\mbox{with}\quad \ell\in\mathbb{Z} \quad\mbox{and}\quad q\in \left\{-\tfrac{N-1}{2} ,...,\tfrac{N-1}{2}\right\} \setminus \{0\} =:R_N. \label{eq:h'-with-ell'}$$ Then, we can write $h'\equiv_N h\kappa$ with $h\not\equiv_N 0$ as $$h'=\ell' N+ r(q\kappa,N), \quad\mbox{with}\quad \ell'\in\mathbb{Z},$$ where $r(j,N)$ is the unique integer congruent to $j\bmod N$ with the smallest magnitude. More precisely, the function $r(\cdot,N)\colon\mathbb{Z} \to R_N \cup \{0\}$ is defined for $j\ge 0$ by $$\begin{aligned} \label{eq:r-def} r(j,N):= \begin{cases} j \bmod N & \text{ if}\quad j\bmod N \le \frac{N-1}{2}, \\ j \bmod N - N & \text{ if}\quad j\bmod N > \frac{N-1}{2}, \end{cases}\end{aligned}$$ and extended to all integers $j$ by $r(j,N) =r(j+N,N)$. It follows that for $j>0$ we have $r(-j,N) = r(N-j\bmod N,N) = -r(j,N)$. Hence the function is both $N$-periodic and odd. If $N$ divides $j$, then we have $r(j,N)= 0$, but otherwise $r(j,N) \in R_N$. Using these representations of $h$ and $h'$, the double limit in $J_{N;\kappa,1}$ as in \[eq:JNkappa1\] can be rewritten as follows. \[lem:J1\] For $N\ge 3$ prime and $\kappa\in\{1,\dots,N-1\}$, the quantity $J_{N;\kappa,1}$ given by \[eq:JNkappa1\] satisfies $$\begin{aligned} \label{eq:J1} J_{N;\kappa,1} = \frac{1}{2\pi^{2}}\frac{N}{N-1} \sum_{q=1}^{(N-1)/2} \Bigg(\frac{1}{q} - \sum_{\ell=1}^\infty \frac{2q}{(\ell N)^2-q^2 }\Bigg) \Bigg(\frac{1}{r(q\kappa,N)} - \sum_{\ell'=1}^\infty \frac{2\,r(q\kappa,N)}{(\ell' N)^2-r(q\kappa,N)^2 }\Bigg),\end{aligned}$$ where $r(\cdot,N)$ is defined as in \[eq:r-def\]. We begin with the expression for $J_{N;\kappa,1}$.
Writing $M =L N + Q$ and $M'=L'N+Q'$ with $L,L'\in{{\mathbb{N}}}$ and $Q,Q'\in R_N\cup\{0\}$, the double sum can be rewritten as $$\begin{aligned} \label{eq:S-prod} S(M,M') &= \sum_{\substack{\ell\in{{\mathbb{Z}}},\, q\in R_N\\|\ell N+q|\le LN+Q}}\; \sum_{\substack{\ell'\in{{\mathbb{Z}}},\, q'\in R_N\\ |\ell' N+q'|\le L'N+Q'\\ q'\equiv_N\, q\kappa}} \frac{1}{\ell N+q}\; \frac{1}{\ell' N + q'} \nonumber\\ &= \sum_{{\stackrel{\scriptstyle{q,q'\in R_N}}{\scriptstyle{q'\equiv_N\, q\kappa}}}} \Bigg(\sum_{{\stackrel{\scriptstyle{\ell=-L}}{\scriptstyle{|\ell N+q| \le LN+Q}}}}^L \frac{1}{\ell N+q }\Bigg) \Bigg(\sum_{{\stackrel{\scriptstyle{\ell'=-L'}}{\scriptstyle{|\ell' N+q'| \le L'N+Q'}}}}^{L'} \frac{1}{\ell' N + q'}\Bigg),\end{aligned}$$ where we used the fact that the inequalities in the summation conditions cannot hold if $|\ell|> L$ or $|\ell'|> L'$. First we consider the sum over $\ell$ in \[eq:S-prod\]. Since the condition $|\ell N+q|\le LN + Q$ always holds for $|\ell|\le L-1$, we can write $$\sum_{{\stackrel{\scriptstyle{\ell=-L}}{\scriptstyle{|\ell N+q| \le LN+Q}}}}^L \frac{1}{\ell N+q } = \sum_{\ell=-L}^L \frac{1}{\ell N+q } - \sum_{{\stackrel{\scriptstyle{\ell=\pm L}}{\scriptstyle{|\ell N+q| > LN+Q}}}} \frac{1}{\ell N+q},$$ where we have $$\sum_{\ell=-L}^L \frac{1}{\ell N+q } = \frac{1}{q} + \sum_{\ell=1}^L \left(\frac{1}{\ell N+q } + \frac{1}{-\ell N+q }\right) = \frac{1}{q} - \sum_{\ell=1}^L \frac{2q}{(\ell N)^2-q^2 }$$ and $$\Bigg|\sum_{{\stackrel{\scriptstyle{\ell=\pm L}}{\scriptstyle{|\ell N+q| > LN+Q}}}} \frac{1}{\ell N+q} \Bigg| \le \frac{2}{LN+Q} \le \frac{2}{LN - N/2} \to 0 \qquad\mbox{as}\qquad L\to\infty.$$ Thus we conclude that $$\lim_{L\to\infty} \sum_{{\stackrel{\scriptstyle{\ell=-L}}{\scriptstyle{|\ell N+q| \le LN+Q}}}}^L \frac{1}{\ell N+q } = \frac{1}{q} - \lim_{L\to\infty} \sum_{\ell=1}^L \frac{2q}{(\ell N)^2-q^2 } = \frac{1}{q} - \sum_{\ell=1}^\infty \frac{2q}{(\ell N)^2-q^2 } =: P_N(q).$$ The sum over $\ell'$ in \[eq:S-prod\] is similar.
Now since the double limit of $S(M,M')$ exists as $M\to\infty$ and $M'\to\infty$, it must equal the double limit of the last expression in \[eq:S-prod\] as $L\to\infty$ and $L'\to\infty$, with arbitrary $Q$ and $Q'$. (This is because for a particular pair $(Q,Q')$, the last expression in \[eq:S-prod\], when interpreted as a sequence in the double index $(L,L')$, can be considered as a subsequence of the convergent sequence $S(M,M')$ with double index $(M,M')$.) Hence we obtain $$\lim_{M\to\infty} \lim_{M'\to\infty} S(M,M') = \sum_{{\stackrel{\scriptstyle{q,q'\in R_N}}{\scriptstyle{q'\equiv_N\, q\kappa}}}} P_N(q)\, P_N(q') = \sum_{q\in R_N} P_N(q)\, P_N(r(q\kappa,N)),$$ where we used the fact that for a given $q\in R_N$, the only value of $q'\in R_N$ that satisfies $q'\equiv_N q\kappa$ is $q' = r(q\kappa,N)$. Finally, we observe that $P_N(-q) = - P_N(q)$, and $P_N(r(-q\kappa,N)) = -P_N(r(q\kappa,N))$ since $r(-q\kappa,N) = -r(q\kappa,N)$. Thus the contributions of $q$ and $-q$ to the sum are the same, and so we only need to sum over the positive values of $q$ and then double the result. Applying the result in \[eq:JNkappa1\] completes the proof. Now we estimate the magnitude of $J_{N;\kappa,1}$.
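Before doing so, we note that the identity of Lemma \[lem:J1\] lends itself to a direct numerical check against the definition \[eq:def-JNkk'\]: truncating the two infinite series at a large cutoff (an arbitrary choice below) reproduces $J_{N;\kappa,1}$ up to the truncation error. A Python sketch, with helper names of our own choosing:

```python
import math

def r(j, N):
    """Integer congruent to j mod N with the smallest magnitude."""
    m = j % N
    return m if m <= (N - 1) // 2 else m - N

def J_direct(N, kappa):
    """J_{N;kappa,1} computed straight from the defining sum over z."""
    return sum((kappa * z / N % 1.0 - 0.5) * (z / N - 0.5)
               for z in range(1, N)) / (N - 1)

def P(q, N, cutoff=10**5):
    """P_N(q) = 1/q - sum_{l>=1} 2q / ((lN)^2 - q^2), series truncated at `cutoff`."""
    return 1.0 / q - sum(2.0 * q / ((l * N) ** 2 - q * q) for l in range(1, cutoff))

def J_formula(N, kappa):
    """Right-hand side of the identity in Lemma [lem:J1] (truncated series)."""
    total = sum(P(q, N) * P(r(q * kappa, N), N)
                for q in range(1, (N - 1) // 2 + 1))
    return total * N / (2.0 * math.pi ** 2 * (N - 1))

# Cross-check for N = 7, kappa = 2 (hand computation gives J = 1/84 here).
gap = abs(J_direct(7, 2) - J_formula(7, 2))
```

Note that `P` accepts negative arguments, as required when $r(q\kappa,N)<0$; both terms of $P_N$ are odd in $q$, so no special handling is needed.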
\[lem:J1-bound\] For $N\ge 3$ prime and $\kappa\in\{1,\dots,N-1\}$, the quantity $J_{N;\kappa,1}$ from \[eq:J1\] satisfies $$|J_{N;\kappa,1}| \le \frac{1}{2\pi^2} \frac{N}{N-1} \left(T_N(\kappa) + \frac{10\pi^2 \ln N}{9N}\right),$$ where $$\label{eq:T-def} T_N(\kappa) := \sum_{q=1}^{(N-1)/2} \frac{1}{q\,|r(q\kappa,N)|} < \frac{\pi^2}{6}.$$ We expand the two factors in the sum over $q$ in \[eq:J1\] and then apply the triangle inequality to obtain $$|J_{N;\kappa,1}| \le \frac{1}{2\pi^2} \frac{N}{N-1} \big(T_N(\kappa) + A_1 + A_2 + A_3\big),$$ with $$\begin{aligned} A_1 &:= \sum_{q=1}^{(N-1)/2} \frac{1}{|r(q\kappa,N)|} \Bigg(\sum_{\ell=1}^\infty \frac{2q}{(\ell N)^2-q^2 }\Bigg), \\ A_2 &:= \sum_{q=1}^{(N-1)/2} \frac{1}{q} \Bigg(\sum_{\ell'=1}^\infty \frac{2\,|r(q\kappa,N)|}{(\ell' N)^2-r(q\kappa,N)^2 }\Bigg), \\ A_3 &:= \sum_{q=1}^{(N-1)/2} \Bigg(\sum_{\ell=1}^\infty \frac{2q}{(\ell N)^2-q^2 }\Bigg) \Bigg(\sum_{\ell'=1}^\infty \frac{2\,|r(q\kappa,N)|}{(\ell' N)^2-r(q\kappa,N)^2 }\Bigg).\end{aligned}$$ Since $q\le N/2\le \ell N/2$ and $|r(q\kappa,N)|\le N/2\le\ell' N/2$, we have $$\sum_{\ell=1}^\infty \frac{2q}{(\ell N)^2-q^2 } \le \sum_{\ell=1}^\infty \frac{N}{(\ell N)^2-(\ell N/2)^2 } = \frac{4}{3N} \sum_{\ell=1}^\infty \frac{1}{\ell^2} = \frac{2\pi^2}{9N},$$ and $$\sum_{\ell'=1}^\infty \frac{2\,|r(q\kappa,N)|}{(\ell' N)^2-r(q\kappa,N)^2} \le \sum_{\ell'=1}^\infty \frac{N}{(\ell' N)^2-(\ell' N/2)^2 } = \frac{2\pi^2}{9N}.$$ Moreover, we have $$\sum_{q=1}^{(N-1)/2} \frac{1}{q} \le 1 + \int_1^{(N-1)/2} \frac{1}{t}\,{{\mathrm{d}}}t \le 2\ln N \quad\mbox{and}\quad \sum_{q=1}^{(N-1)/2} \frac{1}{|r(q\kappa,N)|} = \sum_{t=1}^{(N-1)/2} \frac{1}{t} \le 2\ln N,$$ where in the penultimate step we used the fact that $|r(q\kappa,N)|$ takes all the values from $1$ to $(N-1)/2$ exactly once as $q$ runs from $1$ to $(N-1)/2$.
These estimates lead to $$A_1 + A_2 + A_3 \le \frac{4\pi^2\,\ln N}{9N} + \frac{4\pi^2\,\ln N}{9N} + \frac{N-1}{2}\frac{4\pi^4}{81N^2} \le \frac{\pi^2\,\ln N}{9N} \left(8 + \frac{2\pi^2}{9\ln 3}\right) \le \frac{10\pi^2\,\ln N}{9N}.$$ On the other hand, a crude estimate for $T_N(\kappa)$ follows from the Cauchy-Schwarz inequality: $$\begin{aligned} T_N(\kappa) & \leq \Bigg(\sum ^{(N-1) /2}_{q=1}\frac{1}{q^{2}}\Bigg)^{1/2} \Bigg(\sum ^{(N-1) /2}_{q=1}\frac{1}{r(q\kappa,N)^{2}}\Bigg)^{1/2} =\Bigg(\sum ^{(N-1)/2}_{q=1}\frac{1}{q^{2}}\Bigg)^{1/2} \Bigg(\sum ^{(N-1) /2}_{t=1}\frac{1}{t^{2}}\Bigg)^{1/2} < \frac{\pi^{2}}{6}.\end{aligned}$$ This completes the proof. Numerical experiments show that the value of $T_N(\kappa)$ is much smaller than the crude bound $\pi^2/6$ for most values of $\kappa$, and have led us to the following conjecture. Note that we have $r(q(N-\kappa),N) = r(-q\kappa,N) = -r(q\kappa,N)$, and so $T_N(N-\kappa) = T_N(\kappa)$. Moreover, from \[eq:J1\] we conclude that $$J_{N;N-\kappa,1} = - J_{N;\kappa,1}.$$ Since we are only interested in the magnitude of $J_{N;\kappa,1}$ (see Proposition \[prop:bd-e2-by-Jkappa1\]), it suffices to consider only $\kappa \in R_N^+ := \{1,2,\ldots,(N-1)/2\}$. \[conj:alpha\] For $N\ge 3$ prime and $\kappa\in R_N^+$, with $T_N(\kappa)$ as defined in \[eq:T-def\], let $( \kappa_{j})$ for $j\in R^{+}_{N}$ be an ordering of the elements of $R_N^+$ such that $(T_{N}(\kappa _{j}))$ is non-increasing.
The conjecture is that there exist $C_{1}, C_{2}>0$ and $\alpha\ge 2$ independent of $N$ such that $$T_{N}(\kappa _{j}) \le C_{1}\frac{(\ln N)^{\alpha}}{N} \ \ \text{ for all } \ \ \ j >C_{2}\,(\ln N)^{\alpha}.\label{eq:conj}$$ Conjecture \[conj:alpha\] together with Lemma \[lem:J1-bound\] leads to an estimate for $|J_{N;\kappa_j,1}|$ of the following form: $$|J_{N;\kappa_j,1}| \le \begin{cases} C_3 & \text{ for } j\le C_2\, (\ln N)^\alpha, \\ C_4 \displaystyle\frac{(\ln N)^\alpha }{N} &\text{ for } j>C_2\, (\ln N)^\alpha,\\ \end{cases}$$ where $C_3$ and $C_4$ are known numerical constants. We will use this bound in the next subsection to obtain the desired result for the mean of the worst-case error. Final results ------------- Now we are ready to state our main results. \[thm:bd-bar-e-u-N\] Suppose that Conjecture \[conj:alpha\] holds with some $\alpha\ge2$. For arbitrary ${{\mathrm{\mathfrak{u}}}}\subseteq\{1:s\}$ and any prime number $N\ge 3$ such that ${(\ln N)^{\alpha}}/{N}\le 1$, the quantity $\overline{e}_{{{\mathrm{\mathfrak{u}}}}}(N)$ defined in \[eq:def-bar-e-u\] satisfies $$\overline{e}_{{{\mathrm{\mathfrak{u}}}}}(N) \le C_{{{\mathrm{\mathfrak{u}}}}} \frac{(\ln N)^{\alpha/2}}{\sqrt{N}},$$ where $$C_{{{\mathrm{\mathfrak{u}}}}}:= \sqrt{ c_{{{\mathrm{\mathfrak{u}}}}} + 2C_{2}\Big(\frac{23}{24}\Big)^{|{{\mathrm{\mathfrak{u}}}}|} +\Big( \frac{3C_{1}}{4\pi^{2}} +\frac{5}{6}\Big)^{|{{\mathrm{\mathfrak{u}}}}|} }.$$ Here, the constant $c_{{{\mathrm{\mathfrak{u}}}}}$ is as in Proposition \[prop:bd-1st-term-ofe2\], and $C_1,C_2$ are as in Conjecture \[conj:alpha\]. From Proposition \[prop:bd-e2-by-Jkappa1\] together with $J_{N;N-\kappa,1} = - J_{N;\kappa,1}$, we have $$\overline{e}^{2}_{{{\mathrm{\mathfrak{u}}}}}(N) \le c_{{{\mathrm{\mathfrak{u}}}}} \frac1N + \frac{2}{N} \sum_{j=1}^{(N-1)/2} |J_{N;\kappa_j,1}|^{|{{\mathrm{\mathfrak{u}}}}|}.
\label{eq:bd-e2-sorted}$$ For $j\leq C_{2}(\ln N)^{\alpha}$, we use $T_N(\kappa_j)\le \pi^2/6$, $\ln N/N\le 1$ and $N/(N-1)\le 3/2$ in Lemma \[lem:J1-bound\] to obtain $$|J_{N;\kappa_{j},1}| \le \frac{1}{2\pi^{2}}\frac{N}{N-1}\bigg( \frac{\pi^2}{6} +\frac{10\pi^{2}}{9}\frac{\ln N}{N}\bigg) \le \frac{1}{2\pi^{2}}\frac{3}{2}\bigg( \frac{\pi^2}{6} +\frac{10\pi^{2}}{9}\bigg) = \frac{23}{24}.$$ For $j > C_{2}(\ln N)^{\alpha }$, we use $\ln N\geq 1$, $N/(N-1)\le 3/2$ and Conjecture \[conj:alpha\] to obtain $$\begin{aligned} |J_{N;\kappa_{j},1}| \le \frac{1}{2\pi^{2}}\frac{N}{N-1}\bigg( C_{1}\frac{(\ln N)^{\alpha }}{N} +\frac{10\pi^{2}}{9}\frac{\ln N}{N}\bigg) \le \bigg( \frac{3C_1}{4\pi^2} +\frac{5}{6}\bigg)\frac{(\ln N)^{\alpha}}{N}.\end{aligned}$$ Combining these and using $(\ln N)^{\alpha }/{N} \leq 1$, we obtain $$\begin{aligned} \sum_{j=1}^{(N-1)/2} |J_{N;\kappa_j,1} |^{|{{\mathrm{\mathfrak{u}}}}|} &\le \sum_{1\leq j\leq C_{2}(\ln N)^{\alpha }} \bigg(\frac{23}{24}\bigg)^{|{{\mathrm{\mathfrak{u}}}}|} + \sum_{C_{2}(\ln N)^{\alpha } < j\leq (N-1) /2} \bigg( \frac{3C_1}{4\pi^2} +\frac{5}{6}\bigg)^{|{{\mathrm{\mathfrak{u}}}}|} \bigg(\frac{(\ln N)^{\alpha}}{N}\bigg)^{|{{\mathrm{\mathfrak{u}}}}|} \\ &\le C_{2}(\ln N)^{\alpha }\left(\frac{23}{24}\right)^{|{{\mathrm{\mathfrak{u}}}}|} + \frac{N-1}{2}\bigg( \frac{3C_1}{4\pi^2} +\frac{5}{6}\bigg)^{|{{\mathrm{\mathfrak{u}}}}|} \frac{(\ln N)^{\alpha}}{N}\\ &\le \bigg( C_{2}\left(\frac{23}{24}\right)^{|{{\mathrm{\mathfrak{u}}}}|} + \frac{1}{2} \left(\frac{3C_{1}}{4\pi^{2}} + \frac{5}{6}\right)^{|{{\mathrm{\mathfrak{u}}}}|} \bigg) (\ln N)^{\alpha }.\end{aligned}$$ This together with \[eq:bd-e2-sorted\] yields the required result. \[cor:final-cor\] Suppose that Conjecture \[conj:alpha\] holds with some $\alpha\ge 2$. Let $N\ge 3$ be a prime number.
Suppose that the weights ${{\boldsymbol{\gamma}}}= (\gamma_{{\mathrm{\mathfrak{u}}}})_{{{\mathrm{\mathfrak{u}}}}}$ satisfy $$C := \sum_{|{{\mathrm{\mathfrak{u}}}}|<\infty} \gamma_{{\mathrm{\mathfrak{u}}}}\, C_{{\mathrm{\mathfrak{u}}}}< \infty,$$ where $C_{{{\mathrm{\mathfrak{u}}}}}$ is the constant as in Theorem \[thm:bd-bar-e-u-N\]. Then, the generating-vector-averaged worst-case error $\overline{e}^{2}(N) $ defined as in satisfies $$\overline{e}^{2}(N)\le C \frac{(\ln N)^{\alpha}}{{N}},$$ with $C>0$ independent of $s$ and $N$. As a consequence, there exists a generating vector ${{\boldsymbol{z}}}^*\in Z_{N}^s=\{z\in \mathbb{Z}\mid 1\le z\le N-1 \}^s$ that attains the worst-case error $${e}(N,{{\boldsymbol{z}}}^*)\le \sqrt{C} \frac{(\ln N)^{\alpha/2}}{\sqrt{N}}.$$ From and Theorem \[thm:bd-bar-e-u-N\], we have $$\overline{e}^{2}(N) \le \sum_{\emptyset \neq {{\mathrm{\mathfrak{u}}}}\subseteq \{1:s\}} \gamma_{{{\mathrm{\mathfrak{u}}}}} C_{{{\mathrm{\mathfrak{u}}}}} \frac{(\ln N)^{\alpha}}{{N}} \le C \frac{(\ln N)^{\alpha}}{{N}}.$$ Now, recall that $\overline{e}^{2}(N)$ is defined in as the average of ${e}^2(N,{{\boldsymbol{z}}})$ over all possible ${{\boldsymbol{z}}}$. Thus, there must be at least one ${{\boldsymbol{z}}}^*$ such that $${e}^2(N,{{\boldsymbol{z}}}^*)\le C \frac{(\ln N)^{\alpha}}{{N}},$$ which yields the second statement. Numerical experiments on the conjecture {#sec:num} ======================================= In this section, we present numerical evidence relating to [Conjecture \[conj:alpha\]]{}. We compute the numbers $\{T_N(\kappa)\}_{\kappa=1}^{(N-1)/2}$, given by for varying $N$. For each fixed $N$, we sort these values in non-increasing order, which we write as $(T_N(\kappa_j))_{j=1,\dots,(N-1)/2}$, plot the values, and estimate the constants $C_1$, $C_2$ in [Conjecture \[conj:alpha\]]{}. We used Julia 0.6.2 for the experiments below.
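The same check can be sketched outside Julia. The following Python fragment is a hypothetical re-implementation of the test, not the code actually used: since the defining formula for $T_N(\kappa)$ appears in an equation not reproduced in this section, the already sorted values $(T_N(\kappa_j))_j$ are taken as an input array, and the conjectured bound is tested for candidate constants.

```python
import math

def conjecture_holds(T_sorted, N, alpha=3.0, C1=20.0, C2=10.0):
    """Check T_N(kappa_j) <= C1 (ln N)^alpha / N for all j > C2 (ln N)^alpha.

    T_sorted -- the values T_N(kappa_j), j = 1, ..., (N-1)/2,
                already sorted in non-increasing order.
    """
    cutoff = C2 * math.log(N) ** alpha          # indices j below this are exempt
    threshold = C1 * math.log(N) ** alpha / N   # conjectured bound on the tail
    return all(t <= threshold
               for j, t in enumerate(T_sorted, start=1)
               if j > cutoff)
```

With the constants read off from the plots ($C_1=20$, $C_2=10$), the function returns `True` precisely when every sorted value beyond the cutoff index lies below $C_1(\ln N)^{\alpha}/N$; note that for small $N$ the condition can hold vacuously, since $(N-1)/2$ may not exceed $C_2(\ln N)^{\alpha}$.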
Figure \[fig:50kto100k\] shows the values of $\frac{N}{(\ln N)^\alpha }T_N(\kappa_j)$ against $j/(\ln N)^\alpha$ for $j=1,\dots,(N-1)/2$ with $\alpha=2,3$, and $N=50021,74687,99991$. We see that for both $\alpha=2$ and $\alpha=3$ and these values of $N$ we can take constants $C_1$, $C_2$ such that for all $j/(\ln N)^\alpha>C_2$ with $j=1,\dots,(N-1)/2$ we have $T_N(\kappa_j){N}/{(\ln N)^\alpha }\le C_1$: for example, $C_1=20$ and $C_2=10$. This is consistent with [Conjecture \[conj:alpha\]]{}, especially for $\alpha = 3$. Of course, we cannot be certain even in this case that the bounds will hold for very large $N$, with these or any constants. But even if the conjecture fails, the numerical experiments give us confidence, even for $\alpha=2$, that the bounds in Theorem \[thm:bd-bar-e-u-N\] will hold with $C_1=20$ and $C_2=10$ for $N$ up to at least a few hundred thousand. Conclusion {#sec:conclusion} ========== In this paper, we considered the worst-case error for unshifted lattice rules without randomisation. A conjecture to support the error estimate was proposed. Given the conjecture, we showed the existence of a generating vector that attains a worst-case error of order $1/\sqrt{N}$, up to a logarithmic factor. Numerical experiments suggest that the conjecture is plausible. Acknowledgements {#acknowledgements .unnumbered} ================ We gratefully acknowledge the financial support from the Australian Research Council (FT130100655 and DP180101356).
--- abstract: 'We re-investigate the evolution of strongly degenerate neutrinos in the early universe. With larger degeneracy, the neutrino number freezes out at higher temperatures because the neutrino annihilation rate decreases. We consider degeneracy so large that the neutrino number freezes before events in which the particle degrees of freedom in the universe decrease (e.g. the muon annihilation and the quark-hadron phase transition). In such a case, the degeneracy by the time of nucleosynthesis becomes smaller than the initial degeneracy. We calculate how much it decreases from the initial value on the basis of the conservation of the neutrino number and the total entropy. We find a large drop in the degeneracy, but it is not large enough to affect the current constraints on the neutrino degeneracy from BBN and CMBR.' author: - Kazuhide Ichikawa - 'M. Kawasaki' title: Remarks on the Cosmic Density of Degenerate Neutrinos --- introduction ============ It is well known that the evolution of the universe depends deeply on the properties of neutrinos (see Ref. [@Dolgov2002] for a review). In the standard cosmology, we assume three types of massless neutrinos with equal numbers of particles and antiparticles. In this paper, we deviate from the last assumption and consider cosmological effects caused by neutrino-antineutrino number asymmetry. We use the term “degeneracy” for this asymmetry. We also assume, just for simplicity of description, that the neutrino number is larger than the antineutrino number ([*i.e. *]{}the neutrino has positive chemical potential). When considering the opposite case, we just need to exchange the roles of neutrinos and antineutrinos.
Since degeneracy increases the sum of the energy densities of neutrinos and antineutrinos, it significantly affects standard predictions such as big bang nucleosynthesis and the cosmic microwave background (in addition, electron-type neutrinos destroy neutrons, so nucleosynthesis depends strongly on their number density). Naively, these effects increase monotonically with the degree of degeneracy, so the observations can put upper bounds on the degeneracy in each generation of neutrinos. However, things are not as simple as this because the neutrino number freeze-out temperature also increases with the degeneracy (the rise in the freeze-out temperature is caused by the scarcity of the antineutrinos, the annihilation partners of the neutrinos). A complication occurs when the freeze-out takes place before the muon-antimuon annihilation ends. In this case, when muons and antimuons annihilate, the already frozen neutrino number cannot change, but entropy is transferred to the neutrinos through elastic scattering with electrons and positrons, so the neutrinos keep the same temperature as the rest of the cosmic plasma. In order to conserve the number while the temperature falls more slowly than the inverse of the scale factor, the neutrino degeneracy parameter, the chemical potential divided by the temperature, decreases. As a result, by the onset of nucleosynthesis, the neutrino energy density becomes much lower than the value calculated using the initial degeneracy. Therefore, there is a logical possibility that very large initial degeneracy cannot be ruled out by the observations. This possibility is pursued in Ref. [@Kang1992] but with an incorrect picture of the “neutrino decoupling” when very large degeneracy exists (the same lines of argument are still found in a few papers [@Orito2000] [@Orito2002] [@etc]). They have regarded the neutrino number freeze-out and its kinematical decoupling from the rest of the cosmic plasma as occurring simultaneously.
The correct picture is the one described in the previous paragraph: kinetic equilibrium holds well after chemical equilibrium ceases to hold. The reason is that there are few annihilation partners (antineutrinos), while elastic scattering partners (electrons and positrons) are abundant. This is pointed out in Ref. [@Dolgov2002], but they have not calculated the relation between the initial degeneracy and the final degeneracy after the entropy producing events. Obtaining this relation is the main purpose of this paper. In the next section, we calculate the neutrino number freeze-out temperature and show that it increases exponentially with the initial degeneracy parameter. We also justify that the neutrinos and antineutrinos are in kinetic equilibrium during the annihilation by demonstrating that their elastic scattering rate is sufficiently large. In Sec. \[sec:change\], for a certain range of the initial degeneracy parameter, we calculate its final value after the entropy producing events using the neutrino number conservation and the total entropy conservation. This is our main result. In Sec. \[sec:conclusion\], we summarize the results and discuss why the current constraints on the neutrino degeneracy are not affected. neutrino number freeze-out {#sec:freezeout} ========================== We calculate the neutrino number freeze-out temperature $T_f$ from $\Gamma_A(T_f)=H(T_f)$, where $\Gamma_A$ is the rate of change of the neutrino number density $n_{\nu}$ through the neutrino-antineutrino annihilation processes and $H$ is the cosmic expansion rate. In order to calculate $\Gamma_A$, we have to sum the rates of all the annihilation processes. Assuming that one neutrino type $\nu_{l}$ has degeneracy, we consider the annihilation into fermion-antifermion pairs, $\nu_{l}(p_1) + \bar{\nu}_{l}(p_2) \rightarrow F(p_3) + \bar{F}(p_4)$, where the variables in the brackets denote the four-momentum of each particle in the comoving frame.
For $F$, there are electrons and two non-degenerate types of neutrinos. In addition, we consider the annihilation into muon-antimuon pairs when $T>m_{\mu}/3$ and into quark-antiquark pairs when $T>T_{\rm QCD}$, where we assume the quark-hadron phase transition to occur instantaneously at $T_{\rm QCD}=200$ MeV and the quark-gluon phase to contain u, d and s quarks. The contribution of this process to $\Gamma_A$ is, $$\begin{aligned} \Gamma_{\nu_l\bar{\nu_l}\rightarrow F\bar{F}}&=&-\left( \frac{\dot{n}_{\nu_l}}{n_{\nu_l}} \right)_{\nu\bar{\nu}\rightarrow F\bar{F}} \nonumber \\ &=&-\frac{1}{n_{\nu_l}} \int \frac{d^3p_1}{(2\pi)^3} \frac{d^3p_2}{(2\pi)^3} \frac{d^3p_3}{(2\pi)^3} \frac{d^3p_4}{(2\pi)^3} |{\cal M}(\nu_l + \bar{\nu_l} \rightarrow F + \bar{F})|^2 \nonumber \\ & &\times(2\pi)^4 \delta^{(4)}(p_1+p_2-p_3-p_4)f_{\nu_l}(E_1) f_{\bar{\nu_l}}(E_2) [1-f_{F}(E_3)] [1-f_{\bar{F}}(E_4)]. \label{eq:rateann}\end{aligned}$$ $F$’s are well-approximated to be massless, so the square of the invariant matrix element $|{\cal M}(\nu_l + \bar{\nu_l} \rightarrow F + \bar{F})|^2$ can be written in the form $32G_F^2 [b(p_1 {\cdot} p_3)^2 + c(p_1 {\cdot} p_4)^2]$, where $G_F=(292.80 {\rm GeV})^{-2}$ is the Fermi coupling constant. For $F\neq l$, only the neutral current contributes, so that $b=(C_V^F-C_A^F)^2$ and $c=(C_V^F+C_A^F)^2$; for $F=l$, the charged current contributes in addition, so that $b=(C_V^l-C_A^l)^2$ and $c=(C_V^l+C_A^l+2)^2$. The vector and axial-vector couplings $(C_V^F,C_A^F)$ are $(1/2,1/2)$ for $F=\nu$, $(-1/2,-1/2+2\sin^2\theta)$ for $F=e$ and $\mu$, $(1/2,1/2-(4/3)\sin^2\theta)$ for $F=u$, and $(-1/2,-1/2-(2/3)\sin^2\theta)$ for $F=d$ and $s$, where the weak-mixing angle is $\sin^2\theta=0.231$.
$f$’s are the distribution functions of the particle species indicated by the subscripts, and we use the equilibrium forms, $f_{F,\bar{F}}=1/[\exp(E/T)+1]$ and $f_{\nu_l,\bar{\nu_l}}=1/[\exp((E\pm\mu)/T)+1]$, where $\mu$ is the chemical potential and the signs are $-$ for $\nu_l$ and $+$ for $\bar{\nu_l}$. Here, we assume $\mu=\mu_{\nu_l}=-\mu_{\bar{\nu_l}}$, expecting that chemical equilibrium holds initially at very high temperature. Some of the integrations can be performed analytically and we obtain $$\begin{aligned} \dot{n}_{\nu_l}|_{\nu_l\bar{\nu_l}\rightarrow F\bar{F}} &=&T^8\frac{G_F^2}{64\pi^5} \int_0^{\infty} dx \int_0^{\infty} dy \int_{-1}^{1} dz \int_{f_-}^{f_+} dt\ f_{\nu_l}(x) f_{\bar{\nu_l}}(y) x^3 y^3(1-z)^2\delta^{-5} \nonumber \\ & & \times \Big[ (b+c)\left\{ 3\delta^{4}+3(x-y)^2(2t-x-y)^2-(x-y)^2\delta^{2}-(2t-x-y)^2\delta^{2} \right\} \nonumber \\ & &-4(b-c)(x-y)(2t-x-y)\delta^{2} \Big] [1-f_{F}(t)][1-f_{\bar{F}}(x+y-t)] \nonumber \\ &\equiv&T^8 G_F^2 L(\xi) \label{eq:dnnudt}\end{aligned}$$ where $x=E_1/T$, $y=E_2/T$, $z=\cos\theta$, $t=E_3/T$, $\delta(x,y,z)=(x^2+y^2+2xyz)^{1/2}$ and $f_{\pm}=(x+y\pm\delta)/2$. With these variables, the distribution functions are $f_{\nu_l,\bar{\nu_l}}(x)=1/[\exp(x\pm\xi)+1]$ and $f_{F,\bar{F}}(x)=1/[\exp(x)+1]$, where we define the degeneracy parameter $\xi\equiv \mu/T$. Finally, the degenerate neutrino number density is $$n_{\nu_l}(\xi)=\frac{T^3}{2\pi^2}\int\frac{x^2}{e^{x-\xi}+1} dx \equiv M(\xi)T^3. \label{eq:nnu}$$ The cosmic expansion rate $H$ is determined by the total energy density of the universe through the Einstein equation, $$H=\sqrt{\frac{8\pi\rho_{tot}}{3M_P^2}}=\sqrt{\frac{4\pi^3 g_{\ast}(\xi)}{45}}\frac{T^2}{M_P}, \label{eq:hubble}$$ where $M_P=1.2\times10^{19}$ GeV is the Planck energy and we have introduced $g_{\ast}$ to express the total energy density as $\rho_{tot}=g_{\ast}(\xi)(\pi^2/30)T^4$.
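As a quick numerical sanity check of Eq. (\[eq:nnu\]) — a sketch using a crude midpoint quadrature, not the code behind the figures — the dimensionless factor $M(\xi)$ can be evaluated directly; at $\xi=0$ it reproduces the familiar value $3\zeta(3)/(4\pi^2)\approx 0.0913$ for a single fermionic degree of freedom.

```python
import math

def M_num(xi, x_max=100.0, n=200_000):
    """Midpoint-rule evaluation of
        M(xi) = (1/2 pi^2) * int_0^inf x^2 / (e^(x - xi) + 1) dx,
    so that n_nu = M(xi) T^3.  The range is truncated at x_max, where the
    integrand is exponentially small for the xi of interest here."""
    h = x_max / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x * x / (math.exp(x - xi) + 1.0)
    return total * h / (2.0 * math.pi ** 2)
```

For large $\xi$ the result approaches the Sommerfeld-type value $(\xi/6)(\xi^2/\pi^2+1)$ up to exponentially small terms; at $\xi=10$ the two agree to better than $0.1\%$.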
In calculating the total energy density, we do not have to worry about the temperature dependence of $g_{\ast}$ because, when the freeze-out temperature becomes high enough for muons to contribute as a relativistic degree of freedom, $\xi$ is so large that the energy density of the muons (and of the particle species other than the degenerate neutrino) is negligible compared to that of the degenerate neutrino. The same is true for pions and the quark-gluon plasma. Therefore we include the contributions from photons, electrons, positrons, two types of non-degenerate (anti)neutrinos and the degenerate (anti)neutrinos, $$\begin{aligned} g_{\ast}(\xi)=9+ \frac{15}{\pi^4}\int \left( \frac{x^3}{e^{x-\xi}+1} + \frac{x^3}{e^{x+\xi}+1} \right)dx. \label{eq:gstar}\end{aligned}$$ This gives the usual value $g_{\ast}(0)=43/4$. Combining Eqs. (\[eq:rateann\]) $\sim$ (\[eq:gstar\]) gives the freeze-out temperature of the neutrino number as a function of $\xi$: $$\begin{aligned} T_f(\xi)=0.999756\ g_{\ast}(\xi)^{1/6} \left[\frac{M(\xi)}{L(\xi)}\right]^{1/3} \quad{\rm MeV}. \label{eq:Tfexact}\end{aligned}$$ The results are shown in Fig. \[fig:Tf\]. We find that $T_f$ increases exponentially with $\xi$. This is expected because the number density of the antineutrinos, the annihilation partners of the neutrinos, is exponentially suppressed due to the degeneracy. To see this more explicitly, and for later convenience, we here derive the approximate expression for $T_f$ when $\xi$ is large. First, Eq. (\[eq:dnnudt\]) is simplified by setting $f_{\bar{\nu_l}}(y) \approx \exp(-y) \exp(-\xi)$ and neglecting the $e^{\pm}$ Pauli blocking factors.
Using the integration approximation formula for a function satisfying $\varphi(0)=0$, $$\int_0^{\infty} \varphi^{\prime}(x)f_{\nu}(x)dx=\int_0^{\infty} \frac{\varphi^{\prime}(x)dx}{\exp(x-\xi)+1}=\varphi(\xi)+\frac{\pi^2}{6}\varphi^{\prime\prime}(\xi)+\frac{7\pi^4}{360}\varphi^{(4)}(\xi)+\cdots ,$$ we obtain $$\begin{aligned} L(\xi) &\approx& \frac{2}{3\pi^5}(b+c)\ \exp(-\xi) \left[ \frac{\xi^4}{4}+\frac{\pi^2 \xi^2}{2}+\frac{7\pi^4}{60} \right], \\ M(\xi) &\approx& \frac{\xi}{6} \left[ \frac{\xi^2}{\pi^2} + 1 \right].\end{aligned}$$ For $g_{\ast}(\xi)$, we neglect the exponentially suppressed contribution from antineutrinos and obtain $$\begin{aligned} g_{\ast}(\xi) \approx \frac{43}{4} + \frac{15}{4} \left[ \left(\frac{\xi}{\pi}\right)^4+2\left(\frac{\xi}{\pi}\right)^2 \right] .\end{aligned}$$ We obtain the expression for $T_f$ by inserting these approximations into Eq. (\[eq:Tfexact\]); it closely reproduces the numerical result plotted in Fig. \[fig:Tf\] when $\xi \gtrsim 5$. ![The relation between the neutrino degeneracy and the number freeze-out temperature. The solid, dotted and small-dotted lines show the cases in which $\nu_e$, $\nu_{\mu}$ and $\nu_{\tau}$, respectively, carry the degeneracy. The freeze-out temperature increases essentially exponentially with the degeneracy. Near $T\sim m_{\mu}/3$ and $T_{\rm QCD}$, the growth of the freeze-out temperature with degeneracy is retarded, reflecting the change in the relativistic degrees of freedom of the fermions (muons and quarks) into which neutrinos can annihilate.[]{data-label="fig:Tf"}](TdannREV) Now we explicitly write down the expression for $T_f$ to leading order in $\xi$, to compare with those of Refs. [@Dolgov2002], [@Kang1992] and [@Freese1983]. Our analysis shows $$\begin{aligned} T_f \approx 1.37408 \left(\frac{b+c}{2.3432}\right)^{-\frac{1}{3}} \xi^{\frac{1}{3}} \exp(\xi/3)\ {\rm MeV}.
\end{aligned}$$ Our expression and theirs all agree on $T_f \propto \exp(\xi/3)$, which originates from the exponentially suppressed degenerate $n_{\bar{\nu}}$. Our expression contains the factor $\xi^{1/3}$, which originates from the degenerate $\nu$ distribution affecting both $n_{\nu}$ and $H$, but it does not appear in the others. However, the expression of Ref. [@Dolgov2002] agrees with ours if the thermally averaged momentum divided by $T$, $\langle p/T \rangle$, is calculated using the degenerate $\nu$ distribution so that $\langle p/T \rangle \approx (3/4)\xi$ (they use $\langle p/T \rangle \approx 3$, which is true when the degeneracy is small) and $g_{\ast}(\xi)$ is approximated as $\propto \xi^4$. As for Ref. [@Freese1983], $T_f \propto \xi^{-2/3}$, but this is thought to originate from an incorrect division by $n_e$ when deriving $\Gamma_A$. This makes no difference when calculating $T_f$ with no degeneracy but is not appropriate when strong degeneracy exists. Ref. [@Kang1992] has not found such a power of $\xi$, which is thought to originate from calculating the annihilation rate in the center-of-mass frame. In addition, they also calculate $\dot{n_e}/n_e$, so, as Ref. [@Orito2000] has pointed out, their result has to be corrected anyway. Until now, we have not considered the possibility of neutrino spectral distortion and have used thermal distributions to calculate various quantities. We now justify this procedure. Since neutrinos with larger energies have a higher probability to annihilate (from Eq. (\[eq:dnnudt\]), we see $\dot{f_{\nu}} / f_{\nu} \propto E_1$), they would seem to freeze out later, and the neutrino spectrum would be strongly distorted. But as we discuss next, neutrino-electron elastic scattering occurs sufficiently often that the thermal distribution of neutrinos is preserved at the time of annihilation.
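The prefactor quoted above can be verified numerically. The following sketch (an illustrative check, not the code used for the figures) evaluates Eq. (\[eq:Tfexact\]) with the large-$\xi$ approximations for $L$, $M$ and $g_{\ast}$ inserted, taking $b+c=2.3432$, the normalization used in the last equation, and compares it with the leading-order formula:

```python
import math

PI = math.pi

def L_approx(xi, bc=2.3432):
    # large-xi approximation of L(xi)
    return (2.0 / (3.0 * PI**5)) * bc * math.exp(-xi) * (
        xi**4 / 4.0 + PI**2 * xi**2 / 2.0 + 7.0 * PI**4 / 60.0)

def M_approx(xi):
    # large-xi approximation of M(xi)
    return (xi / 6.0) * (xi**2 / PI**2 + 1.0)

def gstar_approx(xi):
    # g_*(xi) with the exponentially suppressed antineutrino part dropped
    return 43.0 / 4.0 + (15.0 / 4.0) * ((xi / PI)**4 + 2.0 * (xi / PI)**2)

def Tf_approx(xi, bc=2.3432):
    # Eq. (Tfexact) with the approximations above inserted; result in MeV
    return (0.999756 * gstar_approx(xi)**(1.0 / 6.0)
            * (M_approx(xi) / L_approx(xi, bc))**(1.0 / 3.0))

def Tf_leading(xi, bc=2.3432):
    # leading order in xi
    return 1.37408 * (bc / 2.3432)**(-1.0 / 3.0) * xi**(1.0 / 3.0) * math.exp(xi / 3.0)
```

Keeping only the highest powers of $\xi$ in $M$, $L$ and $g_{\ast}$ gives the prefactor $0.999756\,(15/4)^{1/6}\pi^{1/3}(b+c)^{-1/3}$, which for $b+c=2.3432$ evaluates to $1.37408$; already at $\xi\sim 30$ the full approximation and the leading-order formula agree to better than a percent.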
Since, even in the presence of large neutrino degeneracy, the neutrino-antineutrino annihilation proceeds with the distributions preserving their equilibrium form, the estimate of the neutrino number freeze-out temperature from $\Gamma_A=H$ is expected to be good. We investigate how the degeneracy parameters change in the next section. Before we proceed, we estimate the temperature at which the neutrinos decouple kinematically from $e^{\pm}$. This temperature is expected to be much lower than the number freeze-out temperature because the partners of the elastic processes, electrons and positrons, are abundant regardless of the neutrino degeneracy. So the neutrinos remain coupled down to a relatively low temperature and share the temperature of the rest of the plasma even after their number has frozen out at a higher temperature. This decoupling temperature is well estimated from the rate of the elastic scattering processes because the energy transfer between $e^{\pm}$ and $\nu$ occurs efficiently in this case. The energy transfer efficiency is estimated as follows. The center-of-mass frame differential cross section for the elastic processes is $$\left( \frac{d\sigma_{CM}}{d\cos\theta} \right)_{\nu + e^{-} \rightarrow \nu + e^{-}} + \left( \frac{d\sigma_{CM}}{d\cos\theta} \right)_{\nu + e^{+} \rightarrow \nu + e^{+}}= \frac{s}{16\pi}G_F^2(C_V^2+C_A^2) \left\{ 4+(1+\cos\theta)^2 \right\}, \label{eq:cselastic}$$ where $s$ and $\theta$ are the total initial energy squared and the polar angle in the center-of-mass frame. The energy transfer from $e^{\pm}$ to $\nu$ with initial energies (in the laboratory frame) $E_{e^{\pm}}$ and $E_{\nu}$ is $$E_{{\rm transfer}}=\frac{1}{2}(E_{e^{\pm}} - E_{\nu})(1-\cos\theta).
\label{eq:etransfer}$$ From (\[eq:cselastic\]) and (\[eq:etransfer\]), the expectation value of the energy transferred in one collision is computed as $$\langle E_{{\rm transfer}} \rangle = \frac{\displaystyle{\int E_{{\rm transfer}} \frac{d\sigma}{d\cos\theta} d(\cos\theta)}}{\displaystyle{\int \frac{d\sigma}{d\cos\theta} d(\cos\theta)}} = \frac{7}{16}(E_{e^{\pm}} - E_{\nu}).$$ Therefore, roughly speaking, $\nu$ becomes almost as energetic as $e^{\pm}$ after one collision. Now, we compute the rate of the elastic scattering processes $$\Gamma_{\nu,{\rm elastic}}=-\left[\left(\frac{dn_{\nu}}{dt}\right)_{\nu + e^- \rightarrow \nu + e^-} + \left(\frac{dn_{\nu}}{dt}\right)_{\nu + e^+ \rightarrow \nu + e^+}\right] \Bigg/n_{\nu}$$ and find the temperature $T_d$ at which it becomes equal to the expansion rate, just as we did for the annihilation process. The result is shown in Fig. \[fig:Td\]. We see that $T_d$ increases with $\xi$, but rather slowly. Therefore, as in the case of no degeneracy, the neutrinos are in kinetic equilibrium, holding the same temperature as the rest of the plasma, until at least the muon-antimuon annihilation ends. ![The decoupling temperatures of the degenerate neutrinos. The neutrinos are in kinetic equilibrium with $e^{\pm}$, and $T_{\nu}$ follows the variation of $T_{e^{\pm}}$, above these temperatures. Note that they increase with $\xi$ but do not exceed $m_{\mu}/3$.[]{data-label="fig:Td"}](Tdelastic) the change in chemical potential and energy density {#sec:change} =================================================== In this section, we study how the neutrino degeneracy parameter $\xi$ evolves after the neutrino number has frozen out. $\xi$ is conserved while the annihilation occurs frequently enough that chemical equilibrium holds. Even after the annihilation practically ceases, $\xi$ does not vary as long as the temperature $T$ falls as $T \propto a^{-1}$, where $a$ is the cosmological scale factor.
But this condition is not satisfied when, for example, the muon annihilation progresses, because the temperature then falls more slowly (recall from the end of the last section that the neutrino temperature evolves along with the $e^{\pm}$ temperature because they are in kinetic equilibrium). When the freeze-out takes place before the muon annihilation ends, there is a period in which the neutrino number, frozen at the freeze-out temperature, is conserved while the temperature falls more slowly than $a^{-1}$. This causes $\xi$ to decrease during the muon annihilation or other entropy producing processes. We can compute the value of $\xi$ after the muon annihilation on the basis of neutrino number and total entropy conservation after the freeze-out. We briefly explain why the latter holds. The second law of thermodynamics for the total gas in a physical volume $V$ is [@EarlyUniverse] $$T dS = d(\rho V)+p dV - \mu d(nV),$$ where $\rho$ is the energy density, $p$ is the pressure and $n$ is the number density. The first two terms on the right-hand side sum to zero by total energy conservation. For the last term, there could be contributions from particle species with non-zero chemical potential, but $d(nV)=0$ for neutrinos because their number has already frozen out, and $d(nV)$ is negligible for antineutrinos because their number density is exponentially suppressed ($n_{\bar{\nu}}=(T^3/\pi^2)e^{\xi}$ for $\xi\ll -1$) for a degeneracy large enough to make the neutrino number freeze out before the muon-antimuon annihilation ends. The total entropy conservation is expressed as $$a_{f}^3 s_{tot} \left( T_{f}(\xi_{\rm initial}), \xi_{\rm initial} \right)=a_{\rm final}^3 s_{tot}(T_{\rm final} ,\xi_{\rm final}), \label{eq:entropyconservation}$$ where the subscript “$f$” denotes the value at the neutrino number freeze-out and “final” the value after the $\mu^{\pm}$ annihilation but before the $e^{\pm}$ annihilation.
$\xi$ is conserved while chemical equilibrium holds, so we have written $\xi_{\rm initial}$ instead of $\xi_f$. $s_{tot}$ is the sum of the entropy densities $s=(\rho+p-\mu n)/T$ of all the particle species present. On the right-hand side, only relativistic particles exist (photons, $e^{\pm}$, and neutrinos: one type with degeneracy and two types without), so $T^3$ can be scaled out as $$s_{tot}(T,\xi) \approx \frac{2\pi^2}{45} T^3 \left( g_{\ast}(\xi) -\xi \frac{45}{4\pi^4}\int \frac{x^2}{e^{x-\xi}+1} dx \right) \equiv \frac{2\pi^2}{45}K(\xi)T^3, \label{eq:stotal1}$$ where $g_{\ast}(\xi)$ is the same effective degrees of freedom as appeared in Eq. (\[eq:gstar\]). We neglect the exponentially suppressed antineutrino contribution. When we calculate the left-hand side of Eq. (\[eq:entropyconservation\]), the finite masses of muons ($m_{\mu}=106$ MeV) and pions ($m_{\pi}=135$ MeV) are included in order to treat the annihilation of these particles continuously. We assume the quark-hadron phase transition to occur instantaneously at $T_{\rm QCD}=200$ MeV and the quark-gluon phase to contain u, d and s quarks, which are well approximated as massless. Then $$\begin{aligned} s_{tot}(T,\xi) &\approx& \frac{2\pi^2}{45} T^3 \Bigg( g_{\ast}(\xi)-\xi \frac{45}{4\pi^4}\int \frac{x^2}{e^{x-\xi}+1} dx +\frac{95}{2}\theta(T-T_{\rm QCD})\nonumber \\ & &+4 I^{+}_{\mu}(T)+3I^{-}_{\pi}(T) \theta(T_{\rm QCD}-T)\Bigg) \equiv \frac{2\pi^2}{45}J(T,\xi)T^3, \label{eq:stotal2}\end{aligned}$$ where $\theta(x)$ is 1 for $x>0$ and 0 otherwise. $I^{\pm}_i$ denotes the contribution from the massive particle species $i$: $$\begin{aligned} I^{\pm}_i(T)=\frac{45}{4\pi^4} \left[ \int_0^{\infty}dx \frac{x^2}{\exp \left(\sqrt{x^2+\alpha_i^2} \right)\pm 1} \left( \sqrt{x^2+\alpha_i^2}+\frac{x^2}{3\sqrt{x^2+\alpha_i^2}} \right) \right], \end{aligned}$$ where $\alpha_i=m_i/T$ and the sign is $+/-$ when $i$ is a fermion/boson.
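The factors $I^{\pm}_i$ are simple one-dimensional integrals; the following midpoint-rule sketch (illustrative only, not the production code) evaluates them and, as a check, recovers the familiar massless limits $I^{-}_i \to 1$ and $I^{+}_i \to 7/8$, the usual boson and fermion entropy weights:

```python
import math

def I_massive(alpha, fermion=True, x_max=60.0, n=60_000):
    """Entropy factor I^{+/-}_i(T) of a species with alpha = m_i / T,
    normalized so that a single massless bosonic degree of freedom gives 1.
    Midpoint rule on [0, x_max]; the tail beyond x_max is exponentially small."""
    sign = 1.0 if fermion else -1.0
    h = x_max / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        e = math.sqrt(x * x + alpha * alpha)
        total += x * x / (math.exp(e) + sign) * (e + x * x / (3.0 * e))
    return 45.0 / (4.0 * math.pi**4) * total * h
```

As the temperature drops through $\alpha_i = m_i/T \sim 1$, $I^{\pm}_i$ falls below its massless value, which is how the continuous disappearance of the muons and pions enters Eq. (\[eq:stotal2\]).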
The neutrino number conservation is $$a_f^3 n_{\nu} \left( T_{f}(\xi_{\rm initial}), \xi_{\rm initial} \right)=a_{\rm final}^3 n_{\nu}(T_{\rm final},\xi_{\rm final}), \label{eq:numberconservation}$$ and $n_{\nu}$ is calculated from Eq. (\[eq:nnu\]). Dividing each side of Eq. (\[eq:entropyconservation\]) by the corresponding side of Eq. (\[eq:numberconservation\]), we obtain an equation for $\xi_{\rm final}$ given $\xi_{\rm initial}$, $$J(T_f(\xi_{\rm initial}),\xi_{\rm initial})M(\xi_{\rm final})-K(\xi_{\rm final})M(\xi_{\rm initial})=0.$$ We numerically solve this for $\xi_{\rm final}$ in the range $0\le\xi_{\rm initial}\le15$. The result is shown in Fig. \[fig:xijump\], and we see a considerable difference between the initial and final values of $\xi_{\nu}$. ![The relation between the degeneracy which exists initially ($\xi_{\rm initial}$) and that which remains after the muon-antimuon annihilation ($\xi_{\rm final}$). For $\xi_{\rm initial}\gtrsim 6$, since the neutrino number freezes while the universe contains relativistic particle degrees of freedom other than those of photons, $e^{\pm}$ and neutrinos, $\xi_{\rm final}$ becomes smaller than $\xi_{\rm initial}$.[]{data-label="fig:xijump"}](xijumpREV) We translate this result into the neutrino energy density, as shown in Fig. \[fig:rhojump\] (for the degenerate family only). We show the final value of the energy density of the degenerate neutrinos plus antineutrinos divided by that with no degeneracy (the so-called effective number of neutrino types minus two). ![The final energy density of the degenerate type of neutrinos and antineutrinos normalized to the one with no degeneracy: $\rho_{\nu+\bar{\nu}}(\xi)/\rho_{\nu+\bar{\nu}}(0)$.[]{data-label="fig:rhojump"}](rhojumpREV) For comparison, in Fig. \[fig:rhojumpcomp\], we also show the degenerate neutrino final energy densities obtained under different assumptions.
We compute the case in which the degeneracy parameter $\xi$ maintains its initial value, and the case in which we assume, as in Refs. [@Kang1992][@Orito2000], that the neutrinos are not heated by annihilation processes after their number has frozen out. Our result demonstrates that, for some range of the degeneracy, the final neutrino energy density can be lower than the one implied by the initial degeneracy, but not as low as the previous analysis [@Orito2000] has shown. ![A comparison with the results obtained under different assumptions (for $\nu_{e}$). The solid line is our result. The dotted line is the energy density calculated from the initial degeneracy. The dot-dashed line is the one expected when the neutrinos are assumed to decouple kinematically simultaneously with their number freeze-out.[]{data-label="fig:rhojumpcomp"}](rhojumpcompREV) Finally, we comment on the evolution of the antineutrino degeneracy parameter, $\xi_{\bar{\nu}}$. At the initial stage, chemical equilibrium holds, so $\xi_{\bar{\nu}}=-\xi_{\nu}$. This relation could be broken after the neutrino number freeze-out because the neutrino is no longer in chemical equilibrium. However, since the frequent elastic scattering with electrons ensures that neutrinos and antineutrinos remain in kinetic equilibrium, and annihilation is still efficient for antineutrinos, the relation $\xi_{\bar{\nu}}=-\xi_{\nu}$ continues to hold. Therefore, as $\xi_{\nu}$ decreases during, for example, the muon-antimuon annihilation, $\xi_{\bar{\nu}}$ follows its variation with the opposite sign. Meanwhile, when the temperature drops down to the electron-positron annihilation, the elastic scattering is not efficient enough to keep $\nu$ (and also $\bar{\nu}$ if the degeneracy is not very large) in kinetic equilibrium with $e^{\pm}$. In this case, the momentum distribution is distorted from the equilibrium form, so, strictly speaking, the notion of the chemical potential or the degeneracy parameter is lost.
But the distortion can be usefully expressed as a momentum-dependent chemical potential, and in this sense $\xi_{\nu}+\xi_{\bar{\nu}}\neq 0$, as shown in Ref. [@Esposito2000], where the evolution of the neutrino spectrum is fully simulated for $\xi \lesssim 1$. conclusion {#sec:conclusion} ========== We have re-investigated some properties of the thermal history of the early universe with very large neutrino degeneracy. We have justified and adopted the correct picture of the neutrino number freeze-out as explained in Ref. [@Dolgov2002]. We have made some arguments and calculations complementary to those found in Ref. [@Dolgov2002] and have obtained results concerning the evolution of the strong neutrino degeneracy and, in turn, its energy density. We find there are cases in which the neutrino degeneracy parameter $\xi$ becomes smaller than its initial value after the muon annihilation. However, they do not seem to require alteration of the cosmological bounds on the neutrino degeneracy, such as $-0.01\leqslant \xi_{\nu_e}\leqslant 0.22$ and $|\xi_{\nu_{\mu,\tau}}|\leqslant 2.6$ [@Hansen2001] (a more stringent bound, $|\xi_{\nu}|\lesssim 0.07$ for all three neutrino types, is likely to apply when neutrino oscillations with maximal mixing are taken into account [@Dolgov2002a]. The analysis of how the bounds are modified in the region of the Large Mixing Angle solution to the solar neutrino problem is found in Refs. [@Wong2002] and [@Abazajian2002]), because, in Fig. \[fig:xijump\], the local minimum of the final degeneracy caused by the quark-hadron phase transition does not fall below the present upper bound. Finally, we note that our results stem from the fact that the degenerate neutrinos are in kinetic equilibrium with the rest of the cosmic plasma well after their number has frozen out. The same is true for other possibly degenerate species of stable particles, and our analysis is applicable to them.
So, although our analysis of the neutrinos has turned out not to affect the cosmological bound on their degeneracy, the remark made in this paper is worth bearing in mind. A. D. Dolgov, Phys. Rep. [**370**]{}, 333 (2002). H-S. Kang and G. Steigman, Nucl. Phys. [**B372**]{}, 494 (1992). M. Orito, T. Kajino, G. J. Mathews, and R. N. Boyd, astro-ph/0005446. K. Freese, E. W. Kolb, and M. S. Turner, Phys. Rev. D [**27**]{}, 1689 (1983). E. W. Kolb and M. S. Turner, [*The Early Universe*]{} (Addison Wesley, Reading, MA, 1990). S. Esposito, G. Miele, S. Pastor, M. Peloso and O. Pisanti, Nucl. Phys. [**B590**]{}, 539 (2000). M. Orito, T. Kajino, G. J. Mathews, and Y. Wang, Phys. Rev. D [**65**]{}, 123504 (2002). J. Lesgourgues and S. Pastor, Phys. Rev. D [**60**]{}, 103521 (1999); S. Pastor and J. Lesgourgues, Nucl. Phys. Proc. Suppl. [**81**]{}, 47 (2000). S. H. Hansen, G. Mangano, A. Melchiorri, G. Miele, and O. Pisanti, Phys. Rev. D [**65**]{}, 023511 (2001). A. D. Dolgov, S. H. Hansen, S. Pastor, S. T. Petcov, G. G. Raffelt, and D. V. Semikoz, Nucl. Phys. [**B632**]{}, 363 (2002). Y. Y. Y. Wong, Phys. Rev. D [**66**]{}, 025015 (2002). K. N. Abazajian, J. F. Beacom, and N. F. Bell, Phys. Rev. D [**66**]{}, 013008 (2002).
Recently, interest in the two-dimensional (2D) square-lattice Heisenberg antiferromagnet has intensified due in large part to the discovery of high-temperature superconductivity in the doped lamellar cuprates and the subsequent realization of the near-ideal 2D Heisenberg nature of their parent compounds.[@Manousakis91] The nearest-neighbor Heisenberg model is defined as: $$H = J \sum_{<i,j>} {\bf S}_{i} \cdot {\bf S}_{j}$$ where $J$ is the nearest-neighbor coupling, which is positive for an antiferromagnet. Classically, ${\bf S}_{i}$ is a three-component vector of magnitude $\sqrt{S(S+1)}$ representing the spin at site $i$, while quantum mechanically ${\bf S}_{i}$ is the quantum spin operator. As a result of a symbiotic interplay among theory, simulation, and experiment, great progress in understanding the instantaneous spin correlations of the 2D Heisenberg antiferromagnet has been made in recent years. Chakravarty, Halperin and Nelson (CHN)[@Chakravarty] developed an effective field theory from which an exact low-temperature expression for the instantaneous correlation length, $\xi$, has been found [@Hasenfratz91]. While this expression agrees closely with experiments on spin-1/2 Heisenberg systems [@Greven95], measurements on systems with $S>1/2$ display strong deviations from the predicted behavior [@Greven95; @Leheny99]. Subsequent work [@Elstner95; @Cuccoli96] has pointed towards a broad crossover from classical behavior at high temperature to the renormalized classical regime where the field theory is valid. Recently, Hasenfratz [@Hasenfratz99] incorporated cutoff effects into the field-theory formalism to describe the behavior in this crossover region. The dynamics of the 2D Heisenberg antiferromagnet has likewise been the subject of detailed theoretical work. 
Tyc, Halperin and Chakravarty (THC) [@Tyc89] combined renormalization-group analysis and dynamic scaling theory[@Hohenberg77] with simulations of the classical lattice rotor model to predict a form for the dynamic structure factor. Classical molecular dynamics[@Wysin90] and quantum Monte Carlo simulations [@Makivic92] have lent credence to their predictions; however, due to a lack of suitable systems, comparatively few experimental studies of dynamics in 2D Heisenberg antiferromagnets have been performed. Some of the most ideal 2D Heisenberg systems (La$_2$CuO$_4$ and Sr$_2$CuO$_2$Cl$_2$) have a very large intersite coupling $J$ ($J \approx 1500$ K), making quantitative results using conventional neutron scattering techniques difficult to obtain. Consequently, previous experiments have not resolved the quasielastic scattering from the long-wavelength spin-wave excitations [@Yamada89; @Thurber97]. In this communication, we present a neutron scattering study of Rb$_2$MnF$_4$, a quasi-two-dimensional spin-5/2 system with an effective spin anisotropy that can be tuned to zero using an external magnetic field. Our results provide a detailed characterization of the dynamic structure factor in the quasielastic region. Previous studies[@Leheny99; @Birgeneau70] indicate that this system behaves like a nearly ideal 2D Heisenberg antiferromagnet. Accordingly, we compare our findings with the current theoretical understanding of 2D Heisenberg critical dynamics. Following a strategy introduced in our previous work [@Leheny99], we exploit the presence of a bicritical point in the field-temperature phase diagram of Rb$_2$MnF$_4$ to make possible a study of the dynamic spin correlations of a near-ideal Heisenberg system over a large range of correlation lengths. Rb$_2$MnF$_4$ has the tetragonal K$_2$NiF$_4$ crystal structure with an in-plane lattice constant of $a=4.215$ Å and an out-of-plane lattice constant of 13.77 Å. 
The large ratio of the out-of-plane to the in-plane lattice constant combines with the frustration due to the body-centered stacking to make it a nearly two-dimensional magnetic system, with an interplane coupling of less than $10^{-4}J$. At zero field, Rb$_2$MnF$_4$ is a weakly Ising antiferromagnet with $J=0.63$ meV[@Cowley77]. This interaction energy $J$ is more than two orders of magnitude smaller than that of the lamellar copper oxide Heisenberg systems, thus making the energy scale of the dynamics much more accessible for neutron scattering studies. The principal spin anisotropy is a magnetic dipole interaction, with $g\mu_BH_A = 0.032$ meV[@Cowley77] along the $c$ axis (perpendicular to the magnetic plane). Correspondingly, when a field of approximately 5.5 T (depending on temperature) is applied parallel to the $c$ axis, the spins flop into the plane. Above this spin-flop transition, the system has XY symmetry. Precisely along the spin-flop line, and on the extension of the line into the paramagnetic phase, the anisotropy is effectively zero, so that the system should be in the 2D Heisenberg universality class. (See Fig.  1.) Experiments were conducted at the NIST Center for Neutron Research using NIST’s 7 Tesla superconducting magnet. We aligned the $c$ axis within $0.5^{\circ}$ of the magnetic field to minimize any induced in-plane anisotropies. We took field scans at several temperatures to confirm the phase diagram and found the line of zero anisotropy in Tesla (shown in Fig.  1) to be approximately: $H = \sqrt{28.09 + 0.23 T}$ where $T$ is the temperature in Kelvin. This is in accordance with the form given by Cowley [*et al.*]{} [@Cowley93]. Studies of the quasielastic scattering were performed with the thermal neutron triple-axis spectrometer BT9 and the cold neutron spectrometer SPINS. At BT9, we used a fixed initial energy of either 13.7 or 14.8 meV with a pyrolytic graphite filter before the sample to remove higher harmonics in the incident beam. 
Collimations of 40’-27’-Sample-24’-60’ were typical, giving an energy resolution of 0.8 meV full width at half maximum (FWHM). For lower temperatures where higher resolution was needed, we used SPINS with a fixed final energy of 4 meV and collimations of guide-20’-S-20’-open, which gave a resolution of 0.12 meV FWHM. Figure 2 shows the scattered intensity as a function of energy at the antiferromagnetic zone center for several temperatures. The two-dimensional Heisenberg antiferromagnet, in accordance with the Hohenberg-Mermin-Wagner theorem, has no transition to long-range order above zero temperature. At non-zero temperature it has correlated regions whose characteristic length scale diverges exponentially with inverse temperature. These correlated regions have a finite lifetime which translates into a non-zero energy width of the quasielastic peak produced in the dynamic structure factor. As the temperature is lowered towards zero, the correlated regions become progressively more stable, and the energy width of the peak decreases. The measurements in Fig.  2 display this critical slowing down. According to dynamic scaling theory, the functional form of the structure factor is independent of temperature. The temperature dependence enters only through the reduced reciprocal space position $k$ (2D reciprocal space distance from the magnetic zone center) and frequency $\omega$, which are scaled by $\xi$, the correlation length, and $\omega_0$, the characteristic frequency, respectively [@Hohenberg77]: $q \equiv k\xi$ and $\nu \equiv \omega/\omega_0$, so that $q$ and $\nu$ are both dimensionless. In addition, the characteristic frequency is predicted to scale with the correlation length to a power $-z$, with $z=d/2$ for a Heisenberg antiferromagnet, where $d$ is the spatial dimension. 
In accordance with these predictions, we fit the energy scans through the quasielastic peak at (0 1 0) to the dynamic structure factor: $$S(k,\omega) = \omega_{0}^{-1} S(q) \Phi(q,\nu).$$ We took Lorentzian forms for $S(q)$ and $\Phi(q,\nu)$: $$\begin{aligned} S(q) = \frac{S_0}{1 + q^2} \\ \Phi(q,\nu) = \frac{\gamma_q^{-1}}{1 + \frac{\nu^2}{\gamma_q^2}} \end{aligned}$$ with $\gamma_q = (1 + \mu q^2)^{1/2}$. ($\mu$ is a dimensionless fit parameter.) Due to the finite resolution, $q$-dependent contributions were needed to reproduce the observed lineshape accurately. $\mu = 1.7 \pm 0.2$ gave the best fit at all temperatures, in agreement with the values ($1.4$ and $2.0$) found by THC in their analyses of the classical lattice rotor model. For the fits, $\xi$ was fixed at the values determined in our previous study[@Leheny99], and the temperature dependence of $S_0$ was found to agree well with the results of those measurements. Over the temperature range accessible for studying the dynamics, the correlation length varied over $1<\xi/a<60$. As Fig.  2 demonstrates, fits to this form convolved with the experimental resolution are quite good at all temperatures. Figure 3 shows the results for the energy widths extracted from these fits. Note that data taken at SPINS and BT9 agree closely in the overlapping region. This agreement gives us confidence that we have correctly accounted for the very different experimental resolutions in the two measurements. [@Note] When the temperature is scaled by $JS(S+1)$, the temperature dependence of $\omega_0(T)$ agrees well with the results of classical molecular dynamics simulations carried out by Wysin and Bishop [@Wysin90]. They predict a temperature scaling factor of $JS^2$, but when scaled by this factor, our data show a much stronger temperature dependence than that exhibited by the simulation data. 
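The fitted model just described, a Lorentzian $S(q)$ multiplied by a Lorentzian-like spectral shape $\Phi(q,\nu)$ with $\gamma_q=(1+\mu q^2)^{1/2}$, can be written out as a short numerical sketch. This is our own illustration, not the authors' analysis code: the parameter values are illustrative and the convolution with the experimental resolution is omitted.

```python
import math

def dynamic_structure_factor(k, omega, xi, omega0, S0=1.0, mu=1.7):
    """Model S(k, omega) = omega0^{-1} S(q) Phi(q, nu) in reduced variables."""
    q = k * xi                        # reduced wavevector q = k * xi
    nu = omega / omega0               # reduced frequency nu = omega / omega0
    S_q = S0 / (1.0 + q ** 2)         # Lorentzian instantaneous structure factor
    gamma_q = math.sqrt(1.0 + mu * q ** 2)
    Phi = (1.0 / gamma_q) / (1.0 + (nu / gamma_q) ** 2)
    return S_q * Phi / omega0
```

At fixed $k$, reducing $\omega_0$ (critical slowing down) narrows the quasielastic peak in $\omega$, reproducing the trend seen in the energy scans.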
Normalizing temperature by the classical spin stiffness $JS(S+1)$ has been shown [@Elstner95] to collapse the instantaneous correlation length data for 2D quantum Heisenberg antiferromagnets with $S>1$ onto the classical results at high temperatures, and here again succeeds in reconciling the spin-5/2 data with corresponding results for the classical system. Thus, as with $\xi$, the dynamic behavior follows classical scaling at high temperature ($1<\xi/a<10$). Similar measurements of the quasielastic energy width have been performed by Fulton [*et al.*]{} [@Fulton94] on KFeF$_4$, another 2D spin-5/2 antiferromagnet. These results agree with our data at the highest temperatures, but deviate strongly at lower temperatures. We believe that this discrepancy results from a crossover to Ising critical behavior in KFeF$_4$. Studies of Rb$_2$MnF$_4$ in zero field [@Lee98] show that the Ising crossover occurs near $1.2 T_N$. KFeF$_4$ has nearly the same reduced Ising anisotropy as Rb$_2$MnF$_4$, and hence would also be expected to enter a region of Ising critical behavior below $1.2 T_N$. All but the highest temperatures from the KFeF$_4$ study therefore lie below the Ising crossover region. As mentioned above, dynamic scaling theory predicts $\omega_0 \propto \xi^{-z}$ with $z=1$ for the 2D Heisenberg antiferromagnet [@Hohenberg77]. In addition, CHN predict corrections to scaling which go as $T^{1/2}$: $\omega_{0} = c\xi^{-1}(\frac{T}{2\pi\rho_s})^{1/2}$ where $c$ is the spin-wave velocity, and $\rho_s$ is the spin stiffness. Figure 4a shows a plot of $\omega_0$ versus $1/\xi$; the best fit to the simple form $\omega_0 \propto \xi^{-z}$ gives $z=1.35 \pm 0.02$. This value for $z$ is intermediate between the values for the 2D ($z=1$) and 3D ($z=1.5$) Heisenberg antiferromagnets. However, as detailed below we believe only a 2D model is relevant here. 
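Extracting the dynamic exponent from $\omega_0 \propto \xi^{-z}$ amounts to a straight-line fit in log-log coordinates. The sketch below (ours, with synthetic illustrative data rather than the measured points) shows the procedure:

```python
import math

def fit_dynamic_exponent(xis, omega0s):
    """Least-squares slope of log(omega0) vs log(xi); returns z in omega0 ~ xi^(-z)."""
    xs = [math.log(x) for x in xis]
    ys = [math.log(w) for w in omega0s]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope

# synthetic data generated with z = 1.35, the fitted value quoted above
xis = [2.0, 5.0, 10.0, 20.0, 40.0]
omega0s = [x ** -1.35 for x in xis]
```

On noisy experimental points the same regression yields $z$ together with a statistical uncertainty from the residuals.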
Figure 4b, which shows the product $\omega_0\xi$ versus temperature, demonstrates the corrections to scaling if $z=1$ is assumed. Clearly, the corrections are stronger than the $T^{1/2}$ predicted by CHN. Simulations of the classical model, as mentioned above, agree with our data for $\omega_0$, yet they report a different temperature dependence for the product $\omega_0\xi$. This is most likely due to their use of a form for $\xi(T)$ that has since been shown to be inaccurate in this temperature range. Monte Carlo studies on a spin-1/2 system [@Makivic92] have also indicated that $z=1$, but with a temperature-independent prefactor in agreement with predictions by Arovas and Auerbach [@Arovas88]. The data in Figs.  3 and 4, taken at face value, may suggest that $z>1$ for Rb$_2$MnF$_4$ along the bicritical line, or that there is a crossover to some other critical behavior with a non-zero phase transition temperature. Explanations involving a crossover to three-dimensional behavior seem unlikely in light of previous studies[@Birgeneau70; @Lee98] at zero field showing that Rb$_2$MnF$_4$ behaves as a nearly ideal two-dimensional system to very large correlation lengths. Likewise, 2D Ising or 2D XY behavior is precluded by the high temperature at which we observe deviations from $z=1$ behavior, as compared to the scales at which these crossovers should occur, as well as by our previous results on the statics[@Leheny99], which agree very well with theory and simulation for the 2D Heisenberg model. However, the dynamic scaling near the bicritical point could still conceivably differ from that of the ideal 2D Heisenberg antiferromagnet. While the universality class for static critical behavior is determined solely by the symmetry properties and the spatial dimension, the dynamics can also be affected by conserved quantities and the Poisson-bracket relations they satisfy [@Hohenberg77]. 
The bicritical region differs from a true isotropic system due to the non-zero, conserved uniform magnetization along the applied field direction. Noting this distinction, Dohm and Janssen [@Dohm] performed a renormalization group study of bicritical dynamics in $4-\epsilon$ dimensions. They found that dynamic scaling was obeyed, but that the exponent for the 3D bicritical point was larger than that for the 3D Heisenberg model. To explore the possibility that we might be seeing a similar effect in our 2D system, we measured $\omega_0$ in zero field at temperatures above the Ising-Heisenberg crossover. These results, shown in Fig.  4a, overlap closely with the data taken at the same temperatures on the bicritical line. This indicates that, for this temperature range, the magnetic field itself is not measurably affecting the quasielastic width. Clearly, additional theoretical work on 2D bicritical dynamics and corrections to dynamic scaling for the 2D Heisenberg antiferromagnet would greatly elucidate the findings from these measurements. With these measurements of the dynamic spin correlations in Rb$_2$MnF$_4$ near the bicritical point, we have provided the first experimental study of the quasielastic behavior in a 2D isotropic antiferromagnet. These results are largely consistent with the current theoretical understanding of the dynamics of the 2D Heisenberg model, but also raise some questions. The shape of the dynamic structure factor in the quasielastic region obeys a form consistent with dynamic scaling, and the temperature dependence of the characteristic frequency $\omega_0$ is consistent with the anticipated form $\xi^{-z}$, though with $z$ larger than the predicted value $z=1$. 
To establish whether the difference in $z$ originates in stronger corrections to scaling than predicted or indicates a distinction between ideal 2D Heisenberg dynamic scaling and the dynamic behavior near a 2D bicritical point will require further theoretical work as well as experimental studies of other Heisenberg antiferromagnets. For a review, see M. A. Kastner, R. J. Birgeneau, G. Shirane, and Y. Endoh, Rev. Mod. Phys. [**70**]{}, 897 (1998). S. Chakravarty, B. I. Halperin, and D. R. Nelson, Phys. Rev. B [**39**]{}, 2344 (1989). P. Hasenfratz and F. Niedermayer, Phys. Lett. B [**268**]{}, 231 (1991). M. Greven [*et al.*]{}, Z. Phys. B [**96**]{}, 465 (1995). R. L. Leheny, R. J. Christianson, R. J. Birgeneau, and R. W. Erwin, Phys. Rev. Lett. [**82**]{}, 418 (1999). N. Elstner [*et al.*]{}, Phys. Rev. Lett. [**75**]{}, 938 (1995). A. Cuccoli, V. Tognetti, R. Vaia, and P. Verrucchi, Phys. Rev. Lett. [**77**]{}, 3439 (1996). P. Hasenfratz, Eur. Phys. J. B [**13**]{}, 11 (1999). S. Tyc, B. I. Halperin, and S. Chakravarty, Phys. Rev. Lett. [**62**]{}, 835 (1989). P. C. Hohenberg and B. I. Halperin, Rev. Mod. Phys. [**49**]{}, 435 (1977). G. M. Wysin and A. R. Bishop, Phys. Rev. B [**42**]{}, 810 (1990). M. Makivic and M. Jarrell, Phys. Rev. Lett. [**68**]{}, 1770 (1992). S. M. Hayden [*et al.*]{}, Phys. Rev. Lett. [**67**]{}, 3622 (1991). K. Yamada [*et al.*]{}, Phys. Rev. B [**40**]{}, 4557 (1989). K. R. Thurber [*et al.*]{}, Phys. Rev. Lett. [**79**]{}, 171 (1997). R. J. Birgeneau, H. J. Guggenheim, and G. Shirane, Phys. Rev. B [**1**]{}, 2211 (1970). R. A. Cowley, G. Shirane, R. J. Birgeneau, and H. J. Guggenheim, Phys. Rev. B [**15**]{}, 4292 (1977). R. A. Cowley [*et al.*]{}, Z. Phys. B [**93**]{}, 5 (1993). Our results agree quantitatively with this study of the phase diagram of Rb$_2$MnF$_4$ provided that we shift the bicritical point by about 0.15 T. 
This shift likely arises from minor differences in sample quality and small differences in the calibrations of the magnets used for the experiments. In addition, the sample was remounted in the magnet twice during the experiment. The good reproducibility indicates that any small possible misalignments of the sample in the magnetic field do not adversely affect the data. S. Fulton, R. A. Cowley, A. Desert, and T. Mason, J. Phys.: Condens. Matter [**6**]{}, 6679 (1994). Y. S. Lee [*et al.*]{}, Eur. Phys. J. B [**5**]{}, 15 (1998). D. P. Arovas and A. Auerbach, Phys. Rev. B [**38**]{}, 316 (1988). V. Dohm and H. K. Janssen, Phys. Rev. Lett. [**39**]{}, 946 (1977).
--- abstract: 'We discuss the interest of escort distributions and Rényi entropy in the context of source coding. We first recall a source coding theorem by Campbell relating a generalized measure of length to the Rényi-Tsallis entropy. We show that the associated optimal codes can be obtained using considerations on escort distributions. We propose a new family of length measures involving escort distributions, and we show that these generalized lengths are also bounded below by the Rényi entropy. Furthermore, we show that the standard Shannon code lengths are optimal for the new generalized length measures, whatever the entropic index. Finally, we show that there exists in this setting an interplay between standard and escort distributions.' address: | Université Paris-Est, LIGM, UMR CNRS 8049, ESIEE-Paris\ 5 bd Descartes, 77454 Marne la Vallée Cedex 2, France author: - 'J.-F. Bercher' title: Source Coding with Escort Distributions and Rényi Entropy Bounds --- Source coding, Rényi-Tsallis entropies, Escort distributions, [05.90.+m]{}, [89.70.+c]{} Introduction ============ Rényi and Tsallis entropies extend the standard Shannon-Boltzmann entropy, making it possible to build generalized thermostatistics that include the standard one as a special case. This has received considerable attention, and there is a wide variety of applications where experiments, numerical results and analytical derivations fairly agree with these new formalisms [@tsallis_introduction_2009]. These results have also raised interest in the general study of information measures and their applications. The definition of the Tsallis entropy was originally inspired by multifractals, whereas the Rényi entropy is an essential ingredient of the multifractal formalism [@harte_multifractals:_2001; @jizba_world_2004], e.g. via the definition of the Rényi dimension. 
For a distribution $p$ of a discrete variable with $N$ possible microstates, the Rényi entropy of order $\alpha$, with $\alpha\geq 0$, is defined by $$H_\alpha(p)=\frac{1}{1-\alpha}\log \sum_{i=1}^N p_i^\alpha. \label{eq:renyi}$$ By L’Hospital’s rule, for $\alpha=1$, we recover the Shannon entropy $$H_1(p)=-\sum_{i=1}^N p_i\log p_i.$$ The base of the logarithm is arbitrary. In the following, we will denote by $\log_D$ the base-$D$ logarithm. The Tsallis entropy is a simple transformation of the Rényi entropy, but is nonextensive. Often associated with these entropies, and central in the formulation of nonextensive statistical mechanics, is the concept of escort distributions: if $\{p_i \}$ is the original distribution, then its escort distribution $P$ is defined by $$P_i = \frac{p_i^q}{\sum_{i=1}^N p_i^q}. \label{eq:escort}$$ The parameter $q$ behaves as a microscope for exploring different regions of the measure $p$ [@chhabra_direct_1989]: for $q>1$, the more singular regions are amplified, while for $q<1$ the less singular regions are accentuated. The escort distributions were introduced as a tool in the context of multifractals. Interesting connections with standard thermodynamics are discussed in [@chhabra_direct_1989; @beck_thermodynamics_1993]. A discussion of their geometric properties can also be found in [@abe_geometry_2003]. It is also interesting to note that the escort distributions can be obtained as the result of a maximum entropy problem with a constraint on the expected value of a logarithmic quantity, see [@harte_multifractals:_2001 p. 53] in the context of multifractals, or [@bercher_tsallis_2008] for a different view. We shall also point out that ‘deformed’ information measures like the Rényi entropy (\[eq:renyi\]) and the escort distribution (\[eq:escort\]) are originally two distinct concepts, as indicated here by the different notations $\alpha$ and $q$. 
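As a concrete sketch, the two definitions above translate into a few lines of code (the function names are ours):

```python
import math

def renyi_entropy(p, alpha, base=2):
    """Rényi entropy H_alpha(p); the alpha -> 1 limit is the Shannon entropy."""
    if abs(alpha - 1.0) < 1e-12:
        return -sum(pi * math.log(pi, base) for pi in p if pi > 0)
    return math.log(sum(pi ** alpha for pi in p if pi > 0), base) / (1.0 - alpha)

def escort(p, q):
    """Escort distribution P_i = p_i^q / sum_j p_j^q."""
    w = [pi ** q for pi in p]
    Z = sum(w)
    return [wi / Z for wi in w]
```

For $q>1$ the escort distribution amplifies the most probable states; for $q<1$ it accentuates the least probable ones, in line with the 'microscope' interpretation recalled above.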
There is a lengthy discussion on this point in [@pennini_semiclassical_2007].\ In the information theory of communication, the entropy is the measure of the quantity of information in a message, and a primary aim is to represent the possible messages in an efficient manner, that is, to find a compact representation of the information according to a measure of ‘compactness’. This is the role of source coding. In this note, we discuss the interest of escort distributions and alternative entropies in this context. This suggests possible connections between coding theory and the measure of complexity in nonextensive statistical mechanics. Related works are the study of generalized channel capacities [@landsberg_distributions_1998], the notion of nonadditive information content [@yamano_information_2001], and the presentation of a generalized rate distortion theory [@venkatesan_generalized_2009]. The first section is devoted to a very short presentation of the source coding context and of the fundamental Shannon source coding theorem. In section \[sec:Campbell\], we describe a source coding theorem relating a new measure of length to the Rényi entropy. In the next section, we show that it is possible to obtain the very same optimum codes, as well as a practical procedure, using a reasoning based on the nonextensive generalized mean as the measure of length. In section \[sec:yetanother\], we introduce another measure of length, involving the escort distribution, and obtain general inequalities for this measure, where the lower bound, once again, is a Rényi entropy. We show that the corresponding optimum codes are the standard Shannon codes. Finally, in section \[sec:connections\] we discuss the connections between these different results. 
Source coding ============= In source coding, one considers a set of symbols $\mathcal{X}=\{x_1, x_2, \ldots x_N\}$, and a source that produces symbols $x_i$ from $\mathcal{X}$ with probabilities $p_i$, where $\sum_{i=1}^N p_i = 1$. The aim of source coding is to encode the source using an alphabet of size $D$, that is, to map each symbol $x_i$ to a codeword $c_i$ of length $l_i$ expressed using the $D$ letters of the alphabet. It is known that if the set of lengths $l_i$ satisfies the Kraft-McMillan inequality $$\sum_{i=1}^N D^{-l_i} \leq 1, \label{eq:kraft}$$ then there exists a uniquely decodable code with these lengths, which means that any sequence $c_{i1}c_{i2}\ldots c_{in}$ can be decoded unambiguously into a sequence of symbols $x_{i1}x_{i2}\ldots x_{in}$. Conversely, any uniquely decodable code satisfies the Kraft-McMillan inequality (\[eq:kraft\]). The Shannon source coding theorem (noiseless coding theorem) indicates that the expected length of the code, $\bar{L}$, is bounded below by the entropy of the source, $H_1(p)$, and that the best uniquely decodable code satisfies $${H_1(p)} \leq \bar{L}=\sum_i p_i l_i < {H_1(p)} + 1, \label{eq:shannon_theo}$$ where the logarithm in the definition of the Shannon entropy is taken in base $D$. This result indicates that the Shannon entropy $H_1(p)$ is the fundamental limit on the minimum average length of any code constructed for the source. The lengths of the individual codewords, also called ‘bit-numbers’ [@beck_thermodynamics_1993 p. 46], are given by $$l_i = -\log_D p_i \label{eq:bitnumberp}$$ where $\log_D$ denotes the logarithm in base $D$. Obviously, these code lengths attain the entropy bound on the left of the inequality (\[eq:shannon\_theo\]). The characteristic of these optimum codes is that they assign the shorter codewords to the most likely symbols and the longer codewords to unlikely symbols. The uniquely decodable code can be chosen to have the prefix property, i.e. 
the property that no codeword is a prefix of another codeword. Source coding with Campbell measure of length {#sec:Campbell} ============================================= It is well known that Huffman coding yields a prefix code which minimizes the expected length and approaches the optimum limit $l_i=-\log_D p_i$. What is much less well known is that other forms of lengths have been considered [@baer_source_2006], the first and definitely fundamental contribution being the paper of Campbell [@campbell_coding_1965]. Since the codeword lengths obey the relation (\[eq:bitnumberp\]), low probabilities yield very long codewords. But the cost of using a codeword is not necessarily a linear function of its length, and it is possible that adding a letter to a long codeword costs much more than adding a letter to a shorter one. This led Campbell to propose a new average length measure, featuring an exponential weighting of the elementary lengths of the codewords. This length, which is called a $\beta$-exponential mean or Campbell length, is a Kolmogorov-Nagumo generalized mean associated with an exponential function. It is defined by $$C_\beta=\frac{1}{\beta} \log_D \sum_{i=1}^N p_i D^{\beta l_i}, \label{eq:CambellLength}$$ where $\beta$ is a strictly positive parameter. The remarkable result [@campbell_coding_1965] is that, just as the Shannon entropy is the lower bound on the average codeword length of a uniquely decodable code, the Rényi entropy of order $q$, with $q=1/(\beta+1)$, is the lower bound on the exponentially weighted codeword length (\[eq:CambellLength\]): $$C_\beta \geq H_q(p). \label{eq:Campbell_theo}$$ A simple proof of this result will be given below. It is easy to check that the equality is achieved by choosing the $l_i$ such that $$D^{- l_i} = P_i = \frac{p_i^q}{\sum_{j=1}^N p_j^q},$$ that is $$l_i= -q \log_D p_i + (1-q) H_q(p). 
\label{eq:opt_li_Pi}$$ Obviously, the individual lengths obtained this way can be made smaller than the Shannon lengths $l_i=-\log_D p_i$, especially for small $p_i$, by selecting a sufficiently small value of $q$. Hence, the procedure effectively penalizes the longer codewords and yields a code different from Shannon’s code, with possibly shorter codewords associated with the low probabilities. Source coding with nonextensive generalized mean {#sec:generalized} ================================================ In the standard measure of average length $\bar{L}=\sum_i p_i l_i$, we have a linear combination of the individual lengths, with the probabilities $p_i$ as weights. In order to increase the impact of the longer lengths with low probabilities, Campbell’s length uses an exponential of the length. A different approach to the problem is to modify the weights in the linear combination, so as to raise the importance of the terms with low probabilities. A simple way to achieve this is to deform (flatten) the original probability distribution and use the new distribution as weights rather than the $p_i$. Of course, a very good candidate is the escort distribution, which leads us to the ‘average length measure’ $$M_q = \sum_{i=1}^N \frac{p_i^q}{\sum_j p_j^q} l_i = \sum_{i=1}^N P_i l_i,$$ which is nothing but the generalized expected value of nonextensive statistical mechanics, according to the third choice of mean values of Tsallis, Mendes and Plastino [@tsallis_role_1998]. For the virtual source with distribution $P$, the standard expected length is $M_q$, and the classical Shannon noiseless source coding theorem immediately applies, leading to $$M_q \geq H_1(P), \label{eq:MqH1}$$ with equality if $$l_i=-\log_D P_i \label{eq:bitnumberP}$$ which are exactly the lengths (\[eq:opt\_li\_Pi\]) obtained via Campbell’s measure. 
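These relations can be checked numerically. The sketch below (our own; the function names are not from the paper) verifies the equality $C_\beta = H_q(p)$ at the escort bit-numbers under the real-valued-length idealization, and then runs an ordinary binary Huffman coder on the escort distribution to obtain integer lengths; Huffman tie-breaking may permute codewords of equal length.

```python
import heapq
import math

def huffman_lengths(p):
    """Codeword lengths of a binary (D = 2) Huffman code for distribution p."""
    # heap entries: (probability, unique counter for tie-breaking, symbol indices)
    heap = [(pi, i, [i]) for i, pi in enumerate(p)]
    heapq.heapify(heap)
    lengths = [0] * len(p)
    count = len(p)
    while len(heap) > 1:
        pa, _, sa = heapq.heappop(heap)
        pb, _, sb = heapq.heappop(heap)
        for s in sa + sb:          # each merge adds one bit to all merged symbols
            lengths[s] += 1
        heapq.heappush(heap, (pa + pb, count, sa + sb))
        count += 1
    return lengths

def escort(p, q):
    """Escort distribution P_i = p_i^q / sum_j p_j^q."""
    Z = sum(pi ** q for pi in p)
    return [pi ** q / Z for pi in p]

def campbell_length(p, lengths, beta, D=2):
    """Campbell's beta-exponential mean length C_beta."""
    return math.log(sum(pi * D ** (beta * li)
                        for pi, li in zip(p, lengths)), D) / beta

def renyi(p, q, D=2):
    """Rényi entropy of order q, base D."""
    return math.log(sum(pi ** q for pi in p), D) / (1.0 - q)

p = [0.48, 0.3, 0.1, 0.05, 0.05, 0.01, 0.01]
q = 0.7
beta = 1.0 / q - 1.0                     # q = 1/(beta + 1)
P = escort(p, q)
l_opt = [-math.log2(Pi) for Pi in P]     # real-valued optimal lengths
# equality C_beta = H_q(p) is reached at the escort bit-numbers
assert abs(campbell_length(p, l_opt, beta) - renyi(p, q)) < 1e-9
# and M_q then equals the Shannon bound H_1(P) of the virtual source P
M_q = sum(Pi * li for Pi, li in zip(P, l_opt))
assert abs(M_q + sum(Pi * math.log2(Pi) for Pi in P)) < 1e-9
# integer-length codes adapted to C_beta / M_q: run Huffman on P instead of p
print(huffman_lengths(escort(p, 0.4)))
```

With $q=0.4$ the Huffman lengths computed on the escort distribution come out as $(2, 2, 3, 3, 3, 4, 4)$, matching the codeword lengths of the example discussed in the text.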
This easy result has also been mentioned in [@yamano_information_2001].[^1] The simple relation $l_i=-\log_D P_i$ for the minimization of $M_q$ subject to the Kraft-McMillan inequality has a direct practical implication. Indeed, it suffices to feed a standard coding algorithm, namely a Huffman coder, with the escort distribution $P$ instead of the natural distribution $p$, to obtain as a result a code tailored to the Campbell length measure $C_\beta$, or equivalently to the length measure $M_q$. A simple example, with $D=2$, is reported in Table \[tab:ComparWords\]: we used a standard Huffman algorithm with the original distribution and with the escort distributions for $q=0.7$ and $q=0.4$.

   $p_i$   $q=1$    $q=0.7$   $q=0.4$
  ------- -------- --------- ---------
   0.48    0        0         00
   0.3     10       10        01
   0.1     110      1100      100
   0.05    1110     1101      101
   0.05    11110    1110      110
   0.01    111110   11110     1110
   0.01    111111   11111     1111

It is worth noting that some specific algorithms have been developed for Campbell’s length [@humblet_generalization_1981; @blumer_renyi_1988; @baer_source_2006]. The remark above gives an easy alternative. An important point is that these new codes have direct applications: they are optimum for minimizing the probability of buffer overflows [@humblet_generalization_1981] or, with $q>1$, for maximizing the chance of the reception of a message in a single snapshot [@baer_optimal_2008]. In the second case, the choice $q>1$ accentuates the main features of the probability distribution, leading to the selection of more short codewords for the highest probabilities; this maximizes the chance of a complete reception of a message in a single transmission of limited size. Another measure of length with Rényi bounds {#sec:yetanother} =========================================== Given these results, it is now interesting to introduce a new measure of average length, similar to Campbell’s length but mixing both an exponential weight of the individual lengths $l_i$ and an escort distribution. 
This measure is defined by $$L_q = \frac{1}{q-1} \log_D \left[ \sum_{i=1}^N \frac{p_i^q}{\sum_j p_j^q} D^{(q-1)l_i} \right]. \label{eq:Blength}$$ Some specific values are as follows. It is easy to see that $L_0 = -\log_D \sum_i D^{-l_i} + \log_D N$. When $q \rightarrow +\infty$, the maximum of the probabilities, say $p_k$ with $k=\mathrm{arg\,max}_i\, p_i$, dominates, and $L_\infty = l_k$, where $l_k$ is the length associated with the maximum probability $p_k$. By L’Hospital’s rule, we also obtain that $L_1=\bar{L}=\sum_i p_i l_i$. As for Campbell’s measure, it is possible to show that $L_q$ is bounded below by the Rényi entropy. As in Campbell’s original proof, let us consider the Hölder inequality $$\biggl( \sum_{i=1}^N |x_i|^p \biggr)^{\!1/p\;} \biggl( \sum_{i=1}^N |y_i|^{p'} \biggr)^{\!1/{p'}} \le \sum_{i=1}^N |x_i\,y_i| \text{ for all sequences }(x_1,\ldots,x_N),(y_1,\ldots,y_N)\in\mathbb{R}^N$$ for $p$ or ${p'}$ in $(0,1)$ and such that $1/p+1/{p'}=1$. Note that the reverse inequality holds when $p$ and ${p'}$ are in $[1,+\infty)$. Suppose that the $l_i$ are the lengths of the codewords in a uniquely decodable code, which means that they satisfy the Kraft inequality (\[eq:kraft\]). If we now let $x_i=p_i^\alpha D^{- l_i}$ and $y_i=p_i^{-\alpha}$, we obtain $$\biggl( \sum_{i=1}^N p_i^{\alpha p} D^{-p l_i} \biggr)^{\!1/p\;} \biggl( \sum_{i=1}^N p_i^{-\alpha {p'}} \biggr)^{\!1/{p'}} \le \sum_{i=1}^N D^{- l_i} \le 1, \label{eq:ineqH}$$ where the last inequality on the right is the Kraft inequality. If we let $\alpha p=1$, then $\alpha=-1/\beta$, and $-\alpha {p'}=\alpha/(\alpha-1)=1/(\beta+1)$. Then, (\[eq:ineqH\]) reduces to $$\biggl( \sum_{i=1}^N p_i D^{\beta l_i} \biggr)^{\!-1/\beta\;} \biggl( \sum_{i=1}^N p_i^{1/(\beta+1)} \biggr)^{\!(\beta+1)/\beta} \le 1. $$ Taking the base-$D$ logarithm, we obtain Campbell’s theorem $C_\beta \geq H_q(p)$, with $q=1/(\beta+1)$. 
If we now take $\alpha p = q$ and choose $-\alpha {p'}=1$, we obtain $$\biggl( \sum_{i=1}^N p_i^{q} D^{-p l_i} \biggr)^{\!1/p\;} \le 1, $$ where we have used the fact that the probabilities sum to one. The condition $1/p+1/p'=1$ easily gives $p=1-q$. Dividing the two sides by $(\sum_i p_i^q)^{1/(1-q)}$, taking the logarithm and changing the sign of the inequality, we finally obtain $$\frac{1}{q-1} \log_D \biggl( \sum_{i=1}^N \frac{p_i^q}{\sum_j p_j^q} D^{(q-1) l_i} \biggr) \geq \frac{1}{1-q} \log_D \sum_{i=1}^N p_i^q,$$ which gives the simple inequality $$L_q \geq H_q. \label{eq:Btheorem}$$ Hence the new length measure of order $q$ is bounded below by the Rényi entropy of the same order. Note that this result includes Shannon’s result in the special case $q=1$. Interestingly, it is easy to check that equality holds in (\[eq:Btheorem\]) for $l_i=-\log_D p_i$, which are nothing but the optimal lengths in the Shannon coding theorem. Hence, it is remarkable that the whole family of inequalities (\[eq:Btheorem\]), indexed by $q$, becomes an equality for the choice $l_i=-\log_D p_i$, which appears as a kind of universal value in this context. This result can draw attention to alternative coding algorithms, based on the minimization of $L_q$, or to alternative characterizations of the optimal code. For instance, the inequality (\[eq:Btheorem\]) shows, as a direct consequence, that the Shannon code with $l_i=-\log_D p_i$ minimizes the length of the codeword associated with the maximum probability. Indeed, when $q\rightarrow+\infty$, $L_\infty \rightarrow l_k$, the length of the codeword of maximum probability, and $L_\infty$ is minimum when $l_k$ has its minimum value $H_\infty=-\log_D p_k$. Since the Rényi and Tsallis entropies are related by a simple monotone transformation, inequalities similar to (\[eq:Campbell\_theo\]) and (\[eq:Btheorem\]) exist with Tsallis entropy bounds.
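The inequality (\[eq:Btheorem\]) and its saturation at the Shannon lengths are easy to check numerically. The sketch below (the helper names are ours) evaluates $L_q$ and $H_q$ for a random distribution, once with the ideal real-valued lengths $l_i=-\log_D p_i$, and once with the integer lengths $\lceil -\log_D p_i\rceil$, which also satisfy the Kraft inequality:

```python
import math
import random

def renyi_entropy(p, q, D=2):
    """Renyi entropy H_q(p) = (1/(1-q)) log_D sum_i p_i^q, q != 1."""
    return math.log(sum(pi ** q for pi in p), D) / (1 - q)

def L_measure(p, lengths, q, D=2):
    """L_q = (1/(q-1)) log_D sum_i (p_i^q / Z) D^{(q-1) l_i}, Z = sum_j p_j^q."""
    Z = sum(pi ** q for pi in p)
    s = sum((pi ** q / Z) * D ** ((q - 1) * li) for pi, li in zip(p, lengths))
    return math.log(s, D) / (q - 1)

random.seed(0)
w = [random.random() for _ in range(8)]
p = [wi / sum(w) for wi in w]

shannon = [-math.log2(pi) for pi in p]        # ideal (real-valued) lengths
rounded = [math.ceil(li) for li in shannon]   # integer lengths; Kraft holds

for q in (0.3, 0.7, 2.0, 5.0):
    assert abs(L_measure(p, shannon, q) - renyi_entropy(p, q)) < 1e-9
    assert L_measure(p, rounded, q) >= renyi_entropy(p, q) - 1e-12
```

The first assertion checks the equality $L_q=H_q$ at the Shannon lengths for every tested order $q$; the second checks the inequality for admissible integer lengths.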
Connections between the different length measures {#sec:connections}
=================================================

It is finally useful to exhibit an interplay between the two length measures, their minimizers, and the standard and escort distributions. Campbell’s measure in (\[eq:CambellLength\]) involves the distribution $p$ and an exponential weight with index $\beta$. The optimum lengths that achieve equality in the inequality (\[eq:Campbell\_theo\]) are the bit-numbers associated with the escort distribution, $l_i=-\log_D P_i$. On the other hand, the measure (\[eq:Blength\]) involves the escort distribution $P$ instead of $p$, has an index $q$, and the optimum lengths that achieve equality in the extended source coding inequality (\[eq:Btheorem\]) are the bit-numbers $l_i=-\log_D p_i$ associated with the original distribution. We know that the transformation $q \leftrightarrow 1/q$ [@tsallis_role_1998 p. 543] links the original and escort distributions; that is, the distribution $p$ is the escort distribution with index $1/q$ of the distribution $P$. This remark makes it possible to establish an equivalence between thermostatistical formalisms based on linear and generalized averages [@raggio_equivalence_1999; @naudts_dual_2002]. Here, when we substitute $q$ by $1/q$ in (\[eq:Blength\]), and therefore $P$ by $p$, we recover Campbell’s length (\[eq:CambellLength\]) with $q=1/(\beta+1)$. Concerning the entropy bounds in (\[eq:Campbell\_theo\]) and (\[eq:Btheorem\]), we also observe that $H_{\frac{1}{q}}(P)=H_q(p)$, so that the two inequalities (\[eq:Campbell\_theo\]) and (\[eq:Btheorem\]) are finally equivalent. This is a new illustration of the duality between standard and escort distributions.
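This substitution can be verified numerically. In the sketch below (helper names are ours), Campbell's measure is taken in its usual form $C_\beta=\beta^{-1}\log_D\sum_i p_i D^{\beta l_i}$, our transcription of (\[eq:CambellLength\]); the check is purely algebraic, so any set of lengths will do:

```python
import math
import random

def escort(p, q):
    """Escort distribution P_i = p_i^q / sum_j p_j^q."""
    w = [pi ** q for pi in p]
    s = sum(w)
    return [wi / s for wi in w]

def campbell(p, lengths, beta, D=2):
    """Campbell's length C_beta = (1/beta) log_D sum_i p_i D^{beta l_i}."""
    s = sum(pi * D ** (beta * li) for pi, li in zip(p, lengths))
    return math.log(s, D) / beta

def L_measure(p, lengths, q, D=2):
    """The measure L_q of the previous section."""
    Z = sum(pi ** q for pi in p)
    s = sum((pi ** q / Z) * D ** ((q - 1) * li) for pi, li in zip(p, lengths))
    return math.log(s, D) / (q - 1)

random.seed(1)
w = [random.random() for _ in range(6)]
p = [wi / sum(w) for wi in w]
lengths = [1, 2, 3, 4, 5, 5]   # integer lengths saturating Kraft for D = 2

q = 0.6
beta = (1 - q) / q             # i.e. q = 1/(beta + 1)
P = escort(p, q)
# substituting q -> 1/q and p -> P in L_q reproduces Campbell's length of p
assert abs(L_measure(P, lengths, 1 / q) - campbell(p, lengths, beta)) < 1e-10
```

The assertion confirms that $L_{1/q}$ evaluated on the escort distribution coincides with $C_\beta$ evaluated on the original distribution.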
As a last remark, let us mention that if we apply Jensen’s inequality to the exponential function in the sum defining $L_q$ (\[eq:Blength\]), we obtain $M_q \geq L_q$, where $M_q$ is the generalized mean taken with respect to the escort distribution, and we have $$M_q \geq L_q \geq H_q.$$ Equality in $M_q \geq L_q$ requires the transformation in Jensen’s inequality to be a straight line, which means $q=1$. In that case, we recover $M_1 \geq H_1(p)$, which is nothing but the standard Shannon coding theorem.

Conclusions
===========

In this Letter, we have pointed out the relevance of Rényi entropy and escort distributions in the context of source coding. This suggests possible connections between coding theory and the main tools of nonextensive statistical mechanics. We have first outlined an overlooked result by Campbell that gave the first operational characterization of Rényi entropy, as the lower bound in the minimization of a deformed measure of length. We then considered some alternative definitions of the measure of length. We showed that Campbell’s optimum codes can also be obtained using another natural measure of length based on escort distributions. Interestingly, this provides an easy practical procedure for the computation of these codes. Next, we introduced a third measure of length involving both an exponentiation, as in Campbell’s case, and escort distributions. We showed that this length is also bounded below by a Rényi entropy. Finally, we showed that the duality between standard and escort distributions connects some of these results. Further work should consider the extension of these results, namely the new length definitions, in the context of channel coding. With these new lengths, we also intend to investigate the problem of model selection, as in Rissanen’s MDL (Minimum Description Length) procedures.

C. Tsallis, Introduction to Nonextensive Statistical Mechanics, 1st Edition, Springer, 2009.
D. Harte, Multifractals: Theory and Applications, 1st Edition, Chapman & Hall, 2001. P. Jizba, T. Arimitsu, Ann Phys 312 (2004) 17. A. Chhabra, R. V. Jensen, Phys Rev Lett 62 (1989) 1327. C. Beck, F. Schloegl, Thermodynamics of Chaotic Systems, Cambridge University Press, 1993. S. Abe, Phys Rev E 68 (2003) 031101. J.-F. Bercher, Phys Lett A 372 (2008) 5657. F. Pennini, A. Plastino, G. Ferri, Physica A 383 (2007) 782. P. T. Landsberg, V. Vedral, Phys Lett A 247 (1998) 211. T. Yamano, Phys Rev E 63 (2001) 046105. R. Venkatesan, A. Plastino, Physica A 388 (2009) 2337. M. Baer, IEEE Trans Inf Theory 52 (2006) 4380. L. L. Campbell, Inf Control 8 (1965) 423. C. Tsallis, R. S. Mendes, A. R. Plastino, Physica A 261 (1998) 534. P. Humblet, IEEE Trans Inf Theory 27 (1981) 230. A. Blumer, R. McEliece, IEEE Trans Inf Theory 34 (1988) 1242. M. Baer, IEEE Trans Inf Theory 54 (2008) 1273. G. A. Raggio, On equivalence of thermostatistical formalisms, http://arxiv.org/abs/cond-mat/9909161 (1999). J. Naudts, Chaos Solitons Fractals 13 (2002) 445. [^1]: In this interesting paper, another inequality is given for the generalized mean: $M_q \geq S_q(p)$, where $S_q$ is the normalized version of Tsallis entropy. In fact, this is only true under the condition $\sum_i \exp_q (-l_i) \leq 1$, with equality occurring for $l_i=-\ln_q(p_i)$, where $\exp_q$ and $\ln_q$ denote the standard nonextensive $q$-deformed exponential and logarithm. When these lengths $l_i$ also fulfill the Kraft-Mac Millan inequality, we have $M_q=S_q(p)>H_1(P)$.
--- abstract: 'A general strategy is formulated for computing bound state spectra in the framework of the functional renormalisation group (FRG). Dynamical “coordinates” characterising bound states are introduced as coupling parameters in the $n$-point functions of effective fields representing the bound states in an extended effective action functional. Their scale dependence is computed with functional renormalisation group equations. In the infrared an interaction potential among the constituting fields is extracted as a smooth function of the coupling parameters. Eventually quantised bound state solutions are found by solving the Schrödinger eigenvalue problem formulated for the coupling parameters transmuted into coordinates. The proposed strategy is exemplified through the analysis of a recently published FRG study of the one-flavor chiral Nambu–Jona-Lasinio model.' author: - | A. Jakovác and A. Patkós\ Institute of Physics, Eötvös University,\ Budapest, H-1117, Hungary\ E-mail: antal.jakovac@gmail.com,patkos@hector.elte.hu title: | Bound-state spectra of field theories\ through separation of external and internal dynamics ---

Introduction and motivation
===========================

Bound states of two particles appear as (complex) pole singularities in two-particle propagators. In practice this propagator is parametrised with a finite number of intuitively chosen parameters. An approximate solution of the bound state problem consists of finding optimised values of the parameters, reflecting the expected qualitative physical features. A widely used procedure in quantum chemistry is the Born-Oppenheimer approximation[@BO1927], where the electronic wave function is parametrised at a fixed static distance between the two nuclei. The corresponding Schrödinger energy eigenvalues are smooth functions of the nuclear coordinates.
The optimisation step consists of the determination of the probability amplitude of the distance distribution, which is realised by solving the Schrödinger equation for the quantum motion of the nuclei. A similar approach is used in heavy quark spectroscopy of QCD, where in a first step the interquark potential is determined through the exchange of dynamical gluons and quarks between static colored sources, and next the Schrödinger problem of the quark sources is solved in this potential[@HQ-spectr]. The interquark potential is found by measuring the correlator of two Polyakov lines on a space-time lattice, and extracting the renormalised interaction potential after appropriately subtracting the self-energies of the individual lines[@Polyakov-renorm]. In both physical problems the binding energy (the missing mass) is orders of magnitude smaller than the complete mass of the composite. The quality of these approximate scenarios depends critically on the decoupling of the dynamics of the “force field sources” from the rest of the dynamical degrees of freedom, and also on the hierarchical ordering of the subsequent contributions to the complete energy. It is highly desirable to construct a systematic procedure which can test whether the internal dynamics of the candidate subsystems is influenced only through some collective effects (like the interaction potential) emerging from the motion of the rest of the complete system. One possible approach is to introduce collective fields/wave functions representing the prospective bound state with the help of an auxiliary function. In this short note a strategy is put forward to determine a smooth interrelation between the different parameters characterising the propagator and the coupling of the auxiliary field representing the composite (bound) degree of freedom to the original fields. Functional renormalisation group equations are ideal tools for the search for these functional relations.
On the basis of such stable, physically meaningful relations one can proceed to the second stage and solve the quantum equations for the reduced set of parameters describing the internal quantum dynamics of the composite field. The proposed general strategy is described in detail in section 2. It is applied in section 3 to the symmetric phase of the chiral Nambu–Jona-Lasinio model, for which an interaction potential among the fermionic constituents has recently been determined non-perturbatively[@jakovac20]. This specific model, where chiral symmetry forces the defining fields to be massless, is of some additional interest to us. The very accurate [*ab initio*]{} reconstruction of the lowest lying baryon spectra with light quarks[@fodor08] is one of the greatest successes of lattice field theory. However, the lattice approach does not offer any insight into the emergence of the concept of the constituent quark mass, which is the basis of the most widely used non-relativistic quark models. Effective models of chiral dynamics from the earliest days[@chiral-eff-models] relate this mass to the chiral condensate of strong interactions, which raises, however, the question of the existence of hadronic bound states in the phase of restored chiral symmetry. In the broader context of the Standard Model, the mechanism producing vector and scalar bound states of light (relative to the Planck mass) fermions was a central problem also for V.N. Gribov[@gribov-bonn-95].

The strategy
============

Consider a quantum theory of defining fields $\varphi(x)$. Its solution is encoded into the effective action $\Gamma[\varphi]$, from which all $n$-point functions can be extracted through appropriate functional derivatives. The most direct way to look for bound states formed with $N=2,3,...$ constituent fields is to look into the analytic structure of the $2N$-point functions. This program is technically difficult, maybe even impossible to realize.
It is more realistic to consider a collective field introduced in the channel where one searches for the existence of a bound state: $$H(x)\leftrightarrow\int[\Pi_{i=1}^{N} dy_i]O(x-y_1,...,x-y_N)\varphi(y_1)...\varphi(y_N) \label{multifield-replacement}$$ where $O(z_1,...,z_N)$ characterizes the space-time structure of the compound system. The collective field is introduced into the theory as an auxiliary field, for instance with the help of the quadratic expression: $$\begin{aligned} &\displaystyle \Delta\Gamma[H,\varphi]=\nonumber\\ &\displaystyle \frac{M_H^2}{2}\int dx\left[H(x)-\frac{g}{M_H^2}\int[\Pi_{i=1}^{N} dy_i]O(x-y_1,...,x-y_N)\varphi(y_1)...\varphi(y_N)\right]^2, \label{delta-eff-action}\end{aligned}$$ with scale dependent new couplings $M_H^2$ and $g$. The hunt for the bound state now focuses on the two-point function of $H(x)$, taking into account the effect of the quantum fluctuations of the field $\varphi$ by running the renormalisation group equations formulated for the Euclidean theory [@wetterich93; @morris94]. Theories extended this way were used for investigating bound states in Refs.[@ellwanger94; @gies02; @pawlowski07] (see also the recent careful analysis of Ref.[@jakovac19]). A detailed discussion of the mesonic bound state spectra based on QCD transformed in this way was first given in Ref.[@jungnickel96]. In that investigation the composite fields were defined locally, without any internal structure. As a consequence, only the first stage of the strategy to be outlined below was realized, and the renormalized mass spectra were fitted to the observed meson spectra without including any effect of the internal dynamics of the constituents.
The trial two-point function is parametrised in the Euclidean version of the theory in a way reflecting the expected occurrence of a pole: $$G_H(p)=\frac{Z_H}{p^2+M_H^2}+{\textrm{polynomial background}}.$$ The infrared values of the scale dependent parameters are controlled by the evolution of the coupling (vertex) function $O(z_1,...,z_N)$ weighting the contributions emerging from the interacting defining fields. For instance, the simple trial form for the Fourier transformed vertex function $$O(q_1,q_2,...,q_N)\sim \Pi_{i\neq j} e^{-\alpha_{ij} (q_i-q_j)^2}\times e^{-\beta(Q-\sum q_i)^2} \label{vertex-parametrisation}$$ introduces a spatio-temporal range $\alpha=\sup_{ij}\{\alpha_{ij},\beta\}$ to which all constituents are restricted. One can easily devise restrictions involving higher cluster distances. The interesting case is when the scale dependent parameters stay close to their classical (UV) values and their infrared values change smoothly (slowly) with the input (initial) values. Under this assumed behavior one can anticipate the existence of a smooth functional relation between the important structural parameters of the effective action, for instance $$M_H^2(IR)=M_H^2(\alpha(IR)).$$ This function is the central object of the proposed strategy. Its existence cannot be guaranteed; it is expected only intuitively. The next task is to carefully deconstruct this function. For large $\alpha_0(UV)$ one expects the particles corresponding to the $N$ fields to fill uniformly a region of linear size $\alpha_0^{1/2}$. One has to expect that $M_H^2(IR,0)$ tends to a limiting value when $\alpha_0$ increases beyond any bound. For this limiting case the change $\delta M_H^2(0)\equiv M_H^2(IR,0)-M_H^2(UV,0)$ can be interpreted as the sum of one-particle self-energy contributions to the invariant squared “mass”, since the initial value $M_H^2(UV)$ is the classical squared mass of the $N$-field complex.
The interesting question is what happens to this difference when $\alpha$ (i.e. the available volume) is gradually diminished. Generically, for $\alpha_1(UV)<\alpha_0(UV)$ one finds $\delta M_H^2(1)\equiv M_H^2(IR,1)-M_H^2(UV,1)\neq \delta M_H^2(0)$. The difference reflects the interaction among the constituents. One can map out the dependence of this interaction squared energy on the squared size by gradually changing $\alpha(UV)$. This simple disentanglement of the different contributions can be expressed formally for any given $\alpha$ as $$M_H^2(IR) = M_H^2(UV)+\delta M_H^2(0) +\Delta M_H^2({\textrm{interaction}}). \label{energy-disentanglement}$$ The second part of the strategy consists of solving the quantum mechanical $N$-body problem with identical particles of squared classical mass $m_H^2=(M_H^2(UV)+\delta M_H^2(0))/N$ moving in the generalized potential $[\Delta M_H^2(\{\alpha_{ij}\})]^{1/2}$. Here $m_H$ corresponds to the dynamically formed constituent mass. If one uses the relative coordinates $(\alpha_{i<j})^{1/2}$, the simplest Hamiltonian defining the quantum mechanical problem is $$\hat H=\sum_{i<j}\frac{\hat \pi_{ij}^2}{2m_H}+[\Delta M_H^2({\textrm{interaction}})]^{1/2}, \label{reduced-Hamiltonian}$$ where $\hat\pi_{ij}$ is the momentum conjugate to $\alpha_{ij}^{1/2}$. Bound state solutions of this system correspond to eigenvalues lower than $(M_H^2(UV)+\delta M_H^2(0))^{1/2}$; the difference is the binding energy. By the examples quoted in the introductory part, one expects good quality results if $m_H\gg[\Delta M_H^2({\textrm{interaction}})]^{1/2}$. In conclusion, this general strategy, based on extracting the renormalised interaction potential with the help of renormalisation group equations, in principle offers a procedure equivalent to the numerical simulation and renormalisation of the appropriate correlators in the framework of lattice field theory. In the next section a concrete realisation of the above general strategy is presented.
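Before turning to the concrete model, the second stage can be previewed in the simplest setting of a single relative coordinate. For a reduced Hamiltonian of the form $\epsilon(x,p)=\frac{x}{2K}p^2+\frac{1}{2}\Omega^3x^2+\epsilon_{min}$ (the form that will arise in the next section), the Heisenberg substitution $x\to 1/p$ gives the closed-form minimum $\epsilon=\frac{3}{2}(2K)^{-2/3}\Omega+\epsilon_{min}$ at $p_*=(2K)^{1/3}\Omega$. The sketch below confirms the closed form numerically; all parameter values are purely illustrative (in particular $\Omega$ is a placeholder, since its physical value is read off a figure):

```python
def eps(p, K, Omega, eps_min):
    """epsilon(p) = p/(2K) + Omega**3/(2 p**2) + eps_min, after x -> 1/p."""
    return p / (2 * K) + Omega ** 3 / (2 * p ** 2) + eps_min

# illustrative values: K ~ 2 and eps_min ~ -0.007 are suggested by the
# figures of the next section; Omega is a placeholder
K, Omega, eps_min = 2.0, 0.1, -0.007

# closed form: p* = (2K)^{1/3} Omega, eps* = (3/2)(2K)^{-2/3} Omega + eps_min
p_star = (2 * K) ** (1 / 3) * Omega
eps_star = 1.5 * (2 * K) ** (-2 / 3) * Omega + eps_min

# a dense scan of eps(p) reproduces the analytic minimum
grid = [p_star * (0.2 + 0.002 * i) for i in range(2000)]
p_num = min(grid, key=lambda p: eps(p, K, Omega, eps_min))
assert abs(eps(p_num, K, Omega, eps_min) - eps_star) < 1e-6
```

The scan lands on the analytic minimiser $p_*$ to grid accuracy, which is the content of the extremum condition derived in the next section.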
Two-particle bound state in the symmetric phase of the one-flavor chiral NJL model
==================================================================================

In our recent paper we have introduced a collective field in the sense of (\[multifield-replacement\]) for the degenerate scalar-pseudoscalar two-fermion sector of the chiral NJL model. A single “slow” variable $\alpha(UV)$ was introduced in a Gaussian ansatz like (\[vertex-parametrisation\]). Infrared values of $M_S^2=M_{PS}^2\equiv M_C^2(t=-\infty)$ were obtained by solving a coupled set of Wetterich equations. Slowly tuning $\alpha(UV)$ from 350 down to 2.85, a smooth variation of $M_C^2(IR,\alpha(UV))$ was detected (see Fig.\[alpha\_mphys2\_vs\_t\]). ![RG-variation (as a function of $t=\ln(k/\Lambda)$) of the quantum contribution to the squared composite mass $\delta M_C^2(\alpha(k=\Lambda))=M_C^2(t=-\infty,\alpha(k=\Lambda))-M_C^2(t=0,\alpha(k=\Lambda))$ in units of $\Lambda^2$. From the top to the bottom, curves with diminishing $\alpha_r(k=\Lambda)$ are presented. From Ref.[@jakovac20][]{data-label="alpha_mphys2_vs_t"}](grMC2high-rev.pdf){width="2in"} Next we performed the analysis summarized in Eq.(\[energy-disentanglement\]), resulting in the smooth function $\Delta M^2_C(\alpha(UV)\Lambda^2)/\Lambda^2$ (see Fig.\[int-energy-size\]): ![Dependence of the interaction energy of the composite on $\alpha(t=0)\Lambda^2=\alpha(k=\Lambda)\Lambda^2$, the cut-off value of the size parameter. From Ref.[@jakovac20] []{data-label="int-energy-size"}](grdiff1-rev.pdf){width="2.4in"} From this figure one can extract the dimensionless curvature $\Lambda^{-3}\,d^2[\Delta M_C^2]^{1/2}/d(\alpha^{1/2})^2$ at the minimum, denoted (for dimensional reasons) by $\Omega^3$. To a very good approximation the dimensionless reduced mass of the two-particle state is $\mu_C\equiv [M_C^2]^{1/2}/2\Lambda$.
The Schrödinger eigenvalue equation for the dimensionless binding energy $\epsilon=E/\Lambda$ can be written with the help of the dimensionless radial momentum $p=\pi/\Lambda$ and the dimensionless radial distance $x=\alpha^{1/2}\Lambda$ as $$\left(\frac{1}{2}\frac{p^2}{\mu_C}+\frac{1}{2}\Omega^3x^2\right)\Psi=\epsilon \Psi.$$ It is important to note that near the minimum $(\alpha(t=0)\Lambda^2)^{-1}\sim 1/2$, which is much larger than $|\Delta M_C^2/\Lambda^2|\sim 0.007$; therefore one can consistently use non-relativistic quantum theory for dealing with the internal dynamics. One can estimate the ground state energy using the Heisenberg uncertainty principle in the form $px\sim 1$. An important peculiarity of the present NJL system is the observation made on Fig.\[MC-alpha-v\], namely that in the infrared limit one has $x\mu_C\rightarrow {\textrm{const.}}\equiv K$. ![RG-variation of the product of the squared boson mass and of the width of the composite “wave function”. From Ref.[@jakovac20][]{data-label="MC-alpha-v"}](galphamuC2.pdf){width="2in"} The limiting value of the constant is reached very slowly and depends only very weakly on $\alpha(UV)$; it equals approximately 2. The best approximation is to take $K=x_{min}\mu_C(x_{min})$. This leads to $$\hat H=\Lambda\left(\frac{x}{2K}p^2+\frac{1}{2}\Omega^3x^2+\epsilon_{min}\right),$$ where $\epsilon_{min}\approx -0.007$ from the figure. Replacing $x$ everywhere by $1/p$ in view of the uncertainty relation, one finds the condition for the extremum of $\epsilon(p)$: $$\frac{d\epsilon}{dp}=\frac{1}{2K}-\frac{1}{p^3}\Omega^3=0,$$ which gives for the ground state energy $$\epsilon=\frac{3}{2}(2K)^{-2/3}\Omega+\epsilon_{min}.$$

Conclusions
===========

In this note we proposed a strategy for extracting the renormalised interaction potential of the constituting objects of a bound state from the renormalised squared mass parameter $M_H^2$ defined in (\[delta-eff-action\]).
This potential might turn out to be a smooth continuous function of the length-like parameters $\sqrt{\alpha_{ij}}$ characterising the function $O(x-y_1,...,x-y_N)$ linking the constituents to the composite field $H(x)$ representing the bound state. The renormalisation group evolution of the parameters $M_H^2,\alpha_{ij}$, as described, for instance, by the Wetterich equation, is the result of the action of the fluctuations of the elementary fields defining the model. The internal quantum dynamics of the constituents of the bound state leads to discrete energy levels. This dynamics is defined in an admittedly intuitive step: non-relativistic kinetic terms, built from the momentum variables canonically conjugate to the length-like parameters and from a constituent mass emerging from the infrared limit of $M_H^2$, are added to the potential energy. Of course, the consistency of the non-relativistic nature of the dynamics should be checked.

Acknowledgements {#acknowledgements .unnumbered}
================

This research was supported by the Hungarian Research Fund under the contract K104292. The authors are indebted to Dr. Júlia Nyiri for the invitation to contribute to the Gribov-90 Memorial Volume.

M. Born and R. Oppenheimer, Annalen d. Physik [**398**]{} (1927) J.L. Richardson, Phys.Lett. B[**82**]{} (1979) 272-274 O. Kaczmarek, F. Karsch, P. Petreczky, and F. Zantow, Phys. Lett. B[**543**]{} (2002) 41 A. Jakovác and A. Patkós, Mod. Phys. Lett. A[**35**]{} (2020) 2050130 S. Durr, Z. Fodor, J. Frison, C. Hoelbling, R. Hoffmann, S.D. Katz, S. Krieg, T. Kurth, L. Lellouch, T. Lippert, K.K. Szabo, G. Vulvert, Science [**322**]{} (2008) 1224 Y. Nambu and G. Jona-Lasinio, Phys. Rev. [**122**]{} (1961) 345, [*ibid.*]{} [**124**]{} (1961) 246 V.N. Gribov, [*Bound States of Massless Fermions as a Source for New Physics*]{}, Bonn TK-95-35, published in: V.N. Gribov, Gauge Theories and Quark Confinement, PHASIS, Moscow, 2002, pp.483-496 C. Wetterich, Phys. Lett.
B[**301**]{} (1993) 90 T.R. Morris, Int. J. Mod. Phys. A[**6**]{} (1994) 2411 U. Ellwanger and C. Wetterich, Nucl. Phys. B[**423**]{} (1994) 137 H. Gies and C. Wetterich, Phys. Rev. D[**65**]{} (2002) 065001 J.M. Pawlowski, Ann. Phys. (N.Y.) [**322**]{} (2007) 2831 A. Jakovác and A. Patkós, Int. J. Mod. Phys. A[**34**]{} (2019) 1950154 D.U. Jungnickel and C. Wetterich, Phys. Rev. D[**53**]{} (1996) 5142
--- abstract: | We prove that any limit-interface corresponding to a locally uniformly bounded, locally energy-bounded sequence of stable critical points of the van der Waals–Cahn–Hilliard energy functionals with perturbation parameter $\to 0^{+}$ is supported by an embedded smooth stable minimal hypersurface in low dimensions and an embedded smooth stable minimal hypersurface away from a closed singular set of co-dimension $\geq 7$ in general dimensions. This result was previously known in case the critical points are local minimizers of energy, in which case the limit-interface is locally area minimizing and its (normalized) multiplicity is 1 a.e. Our theorem uses earlier work of the first author establishing stability of the limit-interface as an integral varifold, and relies on a recent general theorem of the second author for its regularity conclusions in the presence of higher multiplicity. address: - 'Department of Mathematics, Hokkaido University, Sapporo 060-0810 Japan.' - 'Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, Cambridge, CB3 0WB, United Kingdom.' author: - Yoshihiro Tonegawa - Neshan Wickramasekera title: 'Stable phase interfaces in the van der Waals–Cahn–Hilliard theory' ---

Introduction
============

Let $\Omega \subset {\mathbb R}^{n}$ ($n\geq 2$) be a bounded domain and consider the family of energy functionals $E_{\e}$, $\e \in (0, 1)$, arising in the van der Waals–Cahn–Hilliard theory of phase transitions ([@CahnHilliard]; see also [@ModicaMortola]), given by $$E_{\e}(u) =\int_{\O}\frac{\e|\nabla u|^2}{2}+\frac{W(u)}{\e}\, d{x}, \label{energy}$$ where $u:\O\to\R$ belongs to the Sobolev space $H^1(\O)=\{u\in L^2(\O)\,:\, \nabla u\in L^2(\O)\}$ and $W:\R\to\R^+\cup\{0\}$ is a given $C^{3}$ double-well potential function with (precisely two) strict minima at $\pm 1$ with $W(\pm 1)=0$.
When $\e \to 0^{+}$ with $E_{\e}(u_{\e})$ remaining bounded independently of $\e$, it is clear (from the bound on the second term of the integral above) that $u_{\e}$ must stay close to $\pm 1$ on a bulk region in $\O$ and typically (i.e. in case the sets $\{u_{\e} \approx 1\}$ and $\{u_{\e} \approx -1\}$ each has measure $\geq$ a fixed proportion of the measure of $\O$) there is a transition layer of thickness $O(\e),$ which we may call an “interface region” or a “diffused interface”. In the past few decades it has been established that, in the presence of a uniform bound on the energy $E_{\e}(u_{\e})$ and under natural variational hypotheses on $u_{\e}$ of varying degrees of generality, for small $\e> 0$ the interface region corresponding to $u_{\e}$ is close to a generalized minimal hypersurface $V$ of $\O$ (the “limit-interface” as $\e \to 0^{+}$), and that $E_{\e}(u_{\e})$ approximates a fixed multiple of the $(n-1)$-dimensional area of this hypersurface. L. Modica ([@Modica]) and P. Sternberg ([@Sternberg]) established this, in the framework of $\Gamma$-convergence, for absolutely energy minimizing $u_{\e}$ satisfying a uniform volume constraint; they proved that in this case the limit-interface $V$ is area minimizing in an appropriate class. R. Kohn and P. Sternberg ([@KohnSternberg]) studied the locally energy minimizing case, again in the context of $\Gamma$-convergence. More recently, J. Hutchinson and the first author ([@HutchinsonTonegawa]) showed that $V$ is a stationary integral varifold if the $u_{\e}$ are assumed to be merely volume-unconstrained critical points of $E_{\e}$ (Theorem \[thm1\] below), and that $V$ is an integral varifold with constant generalized mean curvature when the $u_{\e}$ are critical points subject to a volume constraint (see also [@RoegerTonegawa Theorem 7.1]).
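These facts are completely explicit in one space dimension. For the standard quartic potential $W(u)=\frac{1}{4}(1-u^2)^2$ (which satisfies the hypotheses above), the profile $u_\e(x)=\tanh(x/(\e\sqrt{2}))$ solves $\e u''=W'(u)/\e$, and its energy converges, as the domain grows, to the constant $\sigma=\int_{-1}^{1}\sqrt{2W(u)}\,du=2\sqrt{2}/3$, the “fixed multiple” referred to above. A quick numerical check (a sketch for illustration, not part of the analysis below):

```python
import math

eps = 0.05
W  = lambda u: 0.25 * (1 - u * u) ** 2
Wp = lambda u: u ** 3 - u                       # W'(u)
u  = lambda x: math.tanh(x / (eps * math.sqrt(2)))

# residual of  eps * u'' = W'(u) / eps,  via central differences
h = 1e-4
def residual(x):
    upp = (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2
    return eps * upp - Wp(u(x)) / eps

assert max(abs(residual(0.1 * k)) for k in range(-20, 21)) < 1e-2

# E_eps(u_eps) approaches sigma = 2*sqrt(2)/3 on a large interval
n, L = 20000, 2.0
dx = 2 * L / n
xs = [-L + (i + 0.5) * dx for i in range(n)]
up = lambda x: (1 - u(x) ** 2) / (eps * math.sqrt(2))   # exact derivative
E = sum(eps * up(x) ** 2 / 2 + W(u(x)) / eps for x in xs) * dx
assert abs(E - 2 * math.sqrt(2) / 3) < 1e-3
```

The first assertion verifies the Euler-Lagrange equation pointwise; the second exhibits the energy concentration on the transition layer of thickness $O(\e)$.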
Subsequently, the first author ([@Tonegawa]) showed that whenever the $u_{\e}$ are unconstrained stable critical points of $E_{\e},$ the limit stationary integral varifold $V$ is stable in the sense that $V$ admits a generalized second fundamental form which satisfies the stability inequality (Theorem \[stabilitythm\] below). With regard to smoothness of $V$ in the absence of an energy minimizing hypothesis, little has been known beyond the following theorem of the first author ([@Tonegawa]): *Suppose that $n=2$, $\e_{i} \to 0^{+}$ as $i \to \infty$ and that for each $i=1, 2, 3, \ldots,$ $u_{\e_i} \in H^{1}(\Omega)$ is a [stable]{} critical point of $E_{\e_i}$ with $\sup_{\O} |u_{\e_{i}}| + E_{\e_i}(u_{\e_i})\leq c$ for some $c>0$ independent of $i$. Then there exists a locally finite union $L$ of non-intersecting lines of $\Omega$ such that after passing to a subsequence of $\{\e_{i}\}$ without changing notation, for any $0<s<1,$ the sequence of sets $\{{x}\in \O\,:\, |u_{\e_i}({x})|\leq s \}$ converges locally in Hausdorff distance to $L$.* Thus in case $n=2,$ any stable diffused interface must be close to non-intersecting lines for sufficiently small positive values of the parameter $\e$. It has remained an open question whether one can make analogous conclusions in dimensions $n>2$. Here we give an affirmative answer to this question in all dimensions. 
Specifically, we prove (in Theorem \[thm2\] below) that [*if $u_{\e_{i}}$ are uniformly bounded stable critical points of $E_{\e_{i}}$ with no volume constraint and with uniformly bounded energy, then for $2 \leq n \leq 7$, there exists an [*embedded smooth stable minimal hypersurface*]{} $M$ of $\Omega$ such that after passing to a subsequence of $\{\e_{i}\}$ without changing notation, for each fixed $s \in (0, 1)$, the sequence of interface regions $\{{x} \in \Omega \, : \, |u_{\e_{i}}({x})| < s\}$ converges locally in Hausdorff distance to $M$; for $n\geq 8,$ the limit stable minimal hypersurface $M$ may carry an interior singular set, which is discrete if $n=8$ and has Hausdorff dimension at most $n-8$ if $n \geq 9.$*]{} This regularity result was known for the limit-interfaces corresponding to sequences $\{u_{\e_{i}}\}$ of energy minimizers since in that case the limit-interfaces are area-minimizing and the well known regularity theory for locally area minimizing currents is applicable. The new result in this paper is that the stability hypothesis, which is much weaker than any energy minimizing hypothesis, suffices to guarantee the same regularity for the limit-interfaces. The main reason why, in [@Tonegawa], the interface regularity was established only in case $n=2$ and not for $n>2$ was that while in case $n=2$ (i.e. when the interface is a 1-dimensional varifold), the structure theorem (due to W. Allard and F. Almgren [@AllardAlmgren]) for stationary 1-dimensional varifolds is applicable to the limit-interface, there was no sufficiently general regularity theory available at the time for higher dimensional stable integral varifolds. In contrast to limit-interfaces corresponding to sequences of locally energy minimizing critical points of $E_{\e}$, a general stable limit-interface may develop higher multiplicity, [*a priori*]{} variable even locally. 
This fact gives rise to significant difficulties that need to be overcome in understanding smoothness properties of a stable limit-interface, and is the reason why the regularity question for stable limit-interfaces in arbitrary dimension remained unresolved prior to the present work. Note that the Schoen–Simon regularity theory ([@SchoenSimon]), which was the most general theory available for stable hypersurfaces at the time when the work in [@Tonegawa] was carried out, requires knowing [*a priori*]{} that the singular set (in particular the set of those singular points where the varifold has tangent hyperplanes of multiplicity $\geq 2$) is sufficiently small, a hypothesis which appears to be difficult to verify directly for a stable limit-interface. The key new input to this problem is the recent work of the second author ([@Wickramasekera]), which gives a necessary and sufficient geometric structural condition for a general stable codimension 1 integral varifold to be regular (Theorem \[wick\] below). Here we show that the limit-interface in question satisfies precisely this structural condition; its regularity then follows directly from the general theory of [@Wickramasekera]. While the present work, as well as the series of works mentioned above ([@Modica; @Sternberg; @HutchinsonTonegawa; @RoegerTonegawa; @Tonegawa]), investigates the general character of limit-interfaces, there have been a number of articles which address the question of existence of critical points whose interface regions converge to a given minimal hypersurface. In this direction we mention the work by F. Pacard and M. Ritoré ([@PacardRitore]), M. Kowalczyk ([@Kowalczyk]) and a number of recent joint works by M. del Pino, M. Kowalczyk, F. Pacard, J. Wei and J. Yang (see for example [@DPKW]). We refer the reader to the recent survey paper by Pacard [@Pacard] for a complete list of references.
Hypotheses and the main results =============================== In this section, we state the hypotheses on $W$ and $u_{\e}$, state our main theorem (Theorem \[thm2\]) and recall some definitions and known results needed for its proof. We will give the proof of Theorem \[thm2\] in Sections \[sec3\] and \[sec4\]. We assume: - (A1) $W\in C^3({\mathbb R})$, $W \geq 0$ and $W$ has precisely three critical points, two of which are minima at $\pm 1$ with $W(\pm 1)=0$ and $W''(\pm 1)>0$, and the third a local maximum between $\pm 1$.\ - (A2) $\e_{1}, \e_{2}, \e_{3}, \ldots$ are positive numbers with $\lim_{i\->\00}\e_i=0,$ the constants $c_{1}, c_{2}$ are positive and for each $i=1, 2, 3, \ldots,$ the function $u_{\e_{i}}$ belongs to $H^{1}(\O)$ and satisfies $E_{\e_i}(u_{\e_i})\leq c_1$ and $\sup_{\O}|u_{\e_i}|\leq c_2$.\ - (A3) $u_{\e_{i}}$ is a stable critical point of $E_{\e_{i}}$ for each $i=1, 2, 3, \ldots$ Thus $u_{\e_{i}}$ solves, weakly, $$-\e_i\Delta u_{\e_i}+\frac{W'(u_{\e_i})}{\e_i}=0\hspace{.5cm}\mbox{on $\O$} \label{critical}$$ and satisfies $$\int_{\Omega} \e_{i}|\nabla \phi|^{2} + \frac{W^{\prime\prime}(u_{\e_{i}})}{\e_{i}}\phi^{2} \geq 0 \hspace{.5cm} \mbox{for each} \;\; \phi\in C^1_c(\O). \label{secondvar}$$ [**Remarks:**]{} [**(1)**]{} Hypotheses (\[critical\]) and (\[secondvar\]) are equivalent, respectively, to the conditions $$\left.\frac{d}{dt}\right|_{t=0} \, E_{\e_{i}}(u_{\e_{i}} + t\phi) = 0 \;\; {\rm and}$$ $$\left.\frac{d^2}{dt^2}\right|_{t=0} E_{\e_i}(u_{\e_i}+t\phi)\geq 0$$ for each $\phi \in C^{1}_{c}(\Omega).$ [**(2)**]{} Since $u_{\e_{i}}$ is bounded (by hypothesis (A2)), it follows from standard elliptic theory that $u_{\e_{i}} \in C^{3}(\Omega)$ with (\[critical\]) satisfied pointwise in $\O.$ We shall use the following notation throughout the paper: A general point in $\R^n$ is denoted by ${x}=(x_1,\cdots,x_n)$ or by $({y},{z})$ with ${ y}=(y_1,y_2)\in \R^2$ and ${z}=(z_1,\cdots,z_{n-2})\in \R^{n-2}$.
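The hypotheses (A1)-(A3) admit a standard model example, which we record for orientation only (it is not used in the proofs): the potential $W(u) = \frac{1}{4}(1-u^{2})^{2}$ satisfies (A1), and the one-dimensional profiles $$u_{\e_{i}}({x}) = q\left(\frac{x_{1}}{\e_{i}}\right), \hspace{.5cm} q(t) = \tanh\left(\frac{t}{\sqrt{2}}\right)$$ solve (\[critical\]), since a direct computation gives $q^{\prime\prime} = W^{\prime}(q)$. Moreover, differentiating (\[critical\]) in $x_{1}$ shows that $w = \frac{\partial \, u_{\e_{i}}}{\partial x_{1}} > 0$ solves the linearized equation $-\e_{i}\Delta w + \e_{i}^{-1}W^{\prime\prime}(u_{\e_{i}})w = 0$, so writing an arbitrary $\phi \in C^{1}_{c}(\O)$ as $\phi = w\psi$ and integrating by parts gives $$\int_{\Omega} \e_{i}|\nabla \phi|^{2} + \frac{W^{\prime\prime}(u_{\e_{i}})}{\e_{i}}\phi^{2} = \int_{\Omega} \e_{i}\, w^{2}|\nabla \psi|^{2} \geq 0,$$ which is (\[secondvar\]); for bounded $\O$ the energy and sup bounds required by (A2) are clear as well.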
For ${x}\in {\mathbb R}^{n}$ and $r>0$, we let $B_r({x})=\{\tilde{x}\in \R^n\,:\, |\tilde{x}-{x}|<r\}$ and abbreviate $B_r({0})$ as $B_{r}$. For $k \neq n$, ${w}\in \R^k$ and $r>0$, we let $B^k_r({w})=\{\tilde{w}\in\R^k\,:\, |\tilde{w}-{w}|<r\}$ and abbreviate $B^{k}_{r}(0)$ as $B_r^k$. The $k$-dimensional Lebesgue measure on $\R^k$ will be denoted by ${\mathcal L}^{k}$ and $\o_k = \L^k(B_1^k)$. The $k$-dimensional Hausdorff measure on $\R^n$ is denoted by ${\mathcal H}^{k}$. In order to characterize the limit-interfaces, we use the notion of varifolds ([@Allard; @Simon]). Let $G_{n-1}(\O)$ be the product space $\O\times G(n,n-1)$ with the product topology, where $G(n,n-1)$ is the space of $(n-1)$-dimensional subspaces in $\R^n$. We identify each element $S \in G(n,n-1)$ with the $n\times n$ matrix corresponding to the orthogonal projection of ${\mathbb R}^{n}$ onto $S$. A Radon measure $V$ on $G_{n-1}(\O)$ is called an [*$(n-1)$-varifold*]{} (henceforth just called a [*varifold*]{}). For a varifold $V$, $\|V\|$ shall denote the associated [*weight measure*]{} on $\O$, defined by $$\|V\|(\phi)=\int_{G_{n-1}(\O)}\phi({x})\, dV({x,S})\hspace{.5cm} \mbox{for $\phi\in C_c(\O)$.}$$ The support of the measure $\|V\|$ in $\O$ is denoted by ${\rm spt}\,\|V\|$. We say that $V$ is [*integral*]{} if there exists a countably $(n-1)$-rectifiable subset $M$ of $\O$ and an $\H^{n-1}$ measurable, positive integer valued function $\theta \, : \, M \to \Z^{+}$ such that $V$ is given by $$V(\phi)=\int_M\phi({x},T_{x}M)\theta({x})\, d\H^{n-1}({x}) \hspace{1cm}\mbox{for $\phi\in C_c(G_{n-1}(\O)),$} \label{integral}$$ where $T_{x}M$ is the approximate tangent space of $M$ at ${x}.$ The function $\theta$ is called the [*multiplicity*]{} of $V$. For an $(n-1)$-dimensional $C^{1}$ submanifold $M$ in $\O$, $|M|$ denotes the multiplicity 1 integral varifold associated with $M,$ as in (\[integral\]) with $\theta \equiv 1$.
We say that $V$ is [*stationary*]{} if $V$ has zero first variation with respect to area under deformation by any $C^{1}$ vector field of $\Omega$ with compact support, namely (see [@Simon]), if $$\int_{G_{n-1}(\O)}{\rm tr}\, (\nabla{g}({x})\cdot{S})\, dV({x,S})=0 \hspace{1cm}\mbox{for all ${g}\in C^1_c(\O;\R^n)$}.$$ Here $\cdot$ is the usual matrix multiplication and ${\rm tr}$ is the trace operator. For $V\in G_{n-1}(\O)$, let ${\rm reg}\, V\subset\O$ be the set of regular points of ${\rm spt}\, \|V\| \cap \O$. Thus, ${x}\in {\rm reg}\, V$ if and only if $x \in {\rm spt} \, \|V\|$ and there exists some open ball $B_r({x})\subset \O$ such that ${\rm spt}\, \|V\|\cap \overline{B_r({x})}$ is a compact, connected, embedded smooth hypersurface with boundary contained in $\partial B_r({x})$. The interior singular set ${\rm sing} \, V$ of $V$ is defined by $${\rm sing}\, V=({\rm spt}\, \|V\|\setminus {\rm reg}\, V) \cap \Omega.$$ By definition, ${\rm sing}\, V$ is closed in $\Omega$. To each $u_{\e_i}$ satisfying (A1)-(A3), we associate the varifold $V_{\e_i}\in G_{n-1}(\O)$ defined by $$V_{\e_i}(\phi) =\frac{1}{\sigma}\int_{\{|\nabla u_{\e_i}|>0\}} \phi({x},{\bf I}-{\bf n}_{\e_i}({x})\otimes{\bf n}_{\e_i}({x})) \frac{\e_i}{2}|\nabla u_{\e_i}|^2\, d{x} \hspace{1in} (\star)$$ for $\phi\in C_c(G_{n-1}(\O))$, where ${\bf n}_{\e_i}({x})= \frac{\nabla u_{\e_i}({x})}{|\nabla u_{\e_i}({x})|}$, $\sigma=\int_{-1}^1\sqrt{W(s)/2}\, ds$, ${\bf I}$ is the $n\times n$ identity matrix and $\otimes$ is the tensor product. Note that $\|V_{\e_i}\|$ then corresponds simply to $\frac{1}{\sigma}\frac{\e_i}{2}|\nabla u_{\e_i}|^2\, d{x}$. As a consequence of hypothesis (A2), there exists a subsequence of $\{V_{\e_i}\}_{i=1}^{\00}$ converging as varifolds on $\O$ (i.e. as Radon measures on $G_{n-1}(\O)$) to some $V \in G_{n-1}(\O)$. Our main result, which concerns regularity of $V$, is the following: Let the hypotheses be as in (A1)-(A3) and let $V_{\e_{i}}$ be defined by ($\star$). 
Let $V \in G_{n-1}(\O)$ be such that $\lim_{i\to \infty} \, V_{\e_{i}} =V,$ where the convergence is as varifolds on $\O.$ Then ${\rm sing}\, V$ is empty if $2\leq n\leq 7$, ${\rm sing}\, V$ is a discrete set of points if $n=8$ and $\H^{n-8+\gamma}({\rm sing}\, V)=0$ for each $\gamma>0$ if $n\geq 9$; furthermore, ${\rm reg}\,V =({\rm spt}\, \|V\|\setminus{\rm sing}\, V) \cap \O$ is an embedded smooth stable minimal hypersurface of $\Omega$. \[thm2\] [**Remarks:**]{} [**(1)**]{} As just mentioned, if the hypotheses are as in (A1)-(A3), then after passing to a subsequence of $\{\e_{i}\}$ without changing notation, we obtain $V \in G_{n-1}(\O)$ such that $V_{\e_{i}} \to V$ as varifolds on $\O.$ It is of course possible that $V = 0$. [**(2)**]{} There exists $u_{0} \in BV(\O)$ such that after passing to a subsequence of $\{\e_{i}\}$ without changing notation, $u_{\e_{i}} \to u_{0}$ in $L^{1}(\O);$ in fact $u_{0}(x) = \pm 1$ for a.e. $x \in \O,$ and hence the sets $\{u_{0} = 1\}$ and $\{u_{0} = -1\}$ have finite perimeter in $\O$ (see the discussion in [@HutchinsonTonegawa], pp. 51-52) and ${\rm spt} \,\|\partial \, \{u_{0} = 1\}\| \cap \O \subset {\rm spt} \, \|V\|$ (see [@HutchinsonTonegawa], Theorem 1). Thus in particular, $V \neq 0$ is implied by the condition that $u_{0} \not\equiv 1$ on $\O$ and $u_{0} \not\equiv -1$ on $\O.$ We now further discuss known results we shall need for the proof of Theorem \[thm2\]. The following theorem, due to Hutchinson and the first author ([@HutchinsonTonegawa]), says among other things that a limit varifold $V$ corresponding to a sequence of critical points of $E_{\e}$ (i.e. 
$V \in G_{n-1}(\O)$ arising as the varifold limit of a sequence $\{V_{\e_{i}}\},$ where $\e_{i} \to 0^{+}$ and $V_{\e_{i}} \in G_{n-1}(\O)$ is defined by ($\star$) with $u_{\e_{i}}$ satisfying (\[critical\])) is a stationary integral varifold, and that ${\rm spt} \, \|V\|$ indeed is the limit-interface corresponding to $\{u_{\e_{i}}\}$ in the sense made precise in part (3) of the theorem. Note that no stability hypothesis is necessary for this result. [(]{}[@HutchinsonTonegawa Theorem 1 and Proposition 4.3][)]{} Suppose that (A1), (A2) and (\[critical\]) hold, and let $V \in G_{n-1}(\O)$ be such that $V = \lim_{i\to \infty} \, V_{\e_{i}},$ where $V_{\e_{i}}$ is as in ($\star$) and the convergence is as varifolds on $\O.$ Let $U$ be an open subset of $\O$ such that the closure of $U$ is contained in $\O.$ Then - (1) $V$ is a stationary integral varifold on $\O$. - (2) $$\lim_{i\->\00}\int_U\left|\e_i\frac{|\nabla u_{\e_i}|^2}{2}- \frac{W(u_{\e_i})}{\e_i}\right|\, d{x}=0. \label{discrepancy}$$ - (3) For each $s \in (0, 1)$, $\{|u_{\e_i}|\leq s\}\cap U$ converges to ${\rm spt}\, \|V\|\cap U$ in Hausdorff distance. \[thm1\] In order to discuss the additional known results relevant to us concerning *stable* critical points of $E_{\e}$, it is convenient to introduce the following notation: For $u\in C^2(\O),$ let ${\mathbf B}_{u}$ be the non-negative function defined by $${\mathbf B}_{u}^2 =\frac{1}{|\nabla u|^2}\sum_{i,j=1}^n u_{x_i x_j}^2-\frac{1}{|\nabla u|^4}\sum_{i=1}^n\left(\sum_{j=1}^n u_{x_j}u_{x_i x_j}\right)^2 \label{defsec}$$ on $\{|\nabla u|>0\}$ and ${\mathbf B}_{u} =0$ on $\{|\nabla u|=0\}$. Here and subsequently, $u_{x_{i}},$ $u_{x_{i}x_{j}}$ denote the partial derivatives $\frac{\partial \, u}{\partial \, x_{i}},$ $\frac{\partial^{2} \, u}{\partial x_{j} \partial x_{i}}$ respectively. Note that the expression on the right hand side of (\[defsec\]) is non-negative when $\nabla \, u \neq 0$ and is invariant under orthogonal transformations of ${\mathbb R}^{n}$.
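For orientation, note what (\[defsec\]) gives for a one-dimensional solution $u({x}) = q(x_{1}/\e)$ (with $q$ a function of one real variable; a model computation only): here $u_{x_{i}x_{j}} = 0$ unless $i = j = 1$ and $|\nabla u|^{2} = u_{x_{1}}^{2}$, so on $\{|\nabla u| > 0\}$ $${\mathbf B}_{u}^{2} = \frac{u_{x_{1}x_{1}}^{2}}{u_{x_{1}}^{2}} - \frac{\left(u_{x_{1}}u_{x_{1}x_{1}}\right)^{2}}{u_{x_{1}}^{4}} = 0,$$ in keeping with the role of ${\mathbf B}_{u}$ as a measure of the curvature of the level sets of $u$, which in this example are hyperplanes.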
We have the following: [(]{}[@PadillaTonegawa][)]{}\[stab\] Let $\e \in (0, 1)$, $u \in C^{2}(\O)$ and suppose that $u$ is a stable critical point of $E_{\e}$ in the sense that (\[critical\]) and (\[secondvar\]) are satisfied with $\e$ in place of $\e_{i}$ and $u$ in place of $u_{\e_{i}}.$ Then $$\int_{\O}{\mathbf B}_{u}^2 |\nabla u|^2\phi^2\, d{x} \leq \int_{\O} |\nabla\phi|^2|\nabla u|^2\, d{x} \label{secondest1}$$ for each $\phi\in C^1_c(\O).$ One proves (\[secondest1\]) by taking $|\nabla u|\phi$ in place of $\phi$ in the inequality (\[secondvar\]) and utilizing equation (\[critical\]). See [@PadillaTonegawa] or  [@Tonegawa] for details. Let the hypotheses be as in (A1)-(A3), and write $${\mathbf B}_{\e_i} = {\mathbf B}_{u_{\e_{i}}}.$$ In view of hypothesis (A2), Lemma \[stab\] implies that the $L^1$-norm of $\e_i{\mathbf B}_{\e_i}^2|\nabla u_{\e_i}|^2$ is locally uniformly bounded. Let $\nu$ be a subsequential limit (as Radon measures on $\O$) of the sequence $\e_{i}{\mathbf B}_{\e_{i}}^{2}\left|\nabla u_{\e_{i}}\right|^{2} dx.$ Thus after re-indexing, $$\nu(\phi)=\lim_{i\->\00}\int_{\O}\e_i\phi{\mathbf B}_{\e_i}^2|\nabla u_{\e_i}|^2\, d{x} \label{secondest2}$$ for $\phi\in C_c(\O)$.\ The following crucial stability inequality is established in [@Tonegawa]: [(]{}[@Tonegawa Theorem 3] [)]{} \[stable\] Let the hypotheses be as in (A1)-(A3) and let $V_{\e_{i}}$ be defined by ($\star$). Let $V \in G_{n-1}(\O)$ be such that $V = \lim_{i\to \infty} \, V_{\e_{i}},$ where the convergence is as varifolds on $\O.$ Then $V$ has a generalized second fundamental form ${A}$ with its length $|{A}|$ satisfying $$\int_{\O}|{A}|^2\phi^2\, d\|V\|\leq \int_{\O}|\nabla \phi|^2\, d\|V\| \label{stability}$$ for all $\phi\in C^1_c(\O)$. \[stabilitythm\] We refer the reader to [@Tonegawa Section 2] for the definition of the generalized second fundamental form of a varifold. See also [@Hutchinson] where the notion was defined originally.
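As a consistency check on the normalization in ($\star$) and on Theorem \[stabilitythm\] (a model computation only, not used in the sequel), consider again the one-dimensional profiles $u_{\e_{i}}({x}) = q(x_{1}/\e_{i})$, where $q^{\prime\prime} = W^{\prime}(q)$ and $q(\pm\infty) = \pm 1$. Multiplying the ODE by $q^{\prime}$ and integrating gives the equipartition identity $\frac{1}{2}(q^{\prime})^{2} = W(q)$, and hence, by the change of variables $s = q(t)$, $$\int_{\R}\frac{\e_{i}}{2}\left|\nabla u_{\e_{i}}\right|^{2}\, dx_{1} = \frac{1}{2}\int_{\R}(q^{\prime}(t))^{2}\, dt = \int_{-1}^{1}\sqrt{W(s)/2}\, ds = \sigma.$$ Thus by ($\star$) we have $V_{\e_{i}} \to V = |\{x_{1} = 0\}\cap\O|$ with multiplicity $1$; moreover ${\mathbf B}_{\e_{i}} \equiv 0$, so $\nu = 0$ in (\[secondest2\]), and $V$ has $A \equiv 0$, so that (\[stability\]) holds trivially in this case.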
We end this section with the following elementary consequence of (\[critical\]) which we shall need later: If $u \in C^{2}(\O)$ and (\[critical\]) holds with $\e \in (0, 1)$ in place of $\e_{i}$ and $u$ in place of $u_{\e_{i}}$, then $$\left|\nabla\left(\e\frac{\left|\nabla u\right|^2}{2}-\frac{W(u)}{\e}\right)\right| \leq \e\sqrt{n-1}\,|\nabla u|^2{\mathbf B}_{u}\hspace{.2in} \mbox{in $\O$}. \label{discrepancyest}$$ [*Proof*]{}. For any $n\times n$ symmetric matrix $M$ and any unit vector ${\bf m} \in {\mathbb R}^{n}$, one has that $$|M\cdot {\bf m}-({\rm tr}\, M){\bf m}|\leq \sqrt{n-1}({\rm tr}\, (M^2)- {\bf m}^t\cdot M^2\cdot{\bf m})^{\frac12}$$ (to verify this, compute both sides in an orthonormal basis whose last vector is ${\bf m}$ and apply the Cauchy-Schwarz inequality to the sum of the first $n-1$ diagonal entries of $M$). Using this and (\[critical\]), we see that on the set $\{\nabla u \neq 0\}$, $$\begin{split} &\left|\nabla\left(\e\frac{\left|\nabla u\right|^2}{2}-\frac{W(u)}{\e}\right)\right| =\e\left|\nabla^2u\cdot \nabla u-\Delta u\nabla u\right|\\ &\leq \e\sqrt{n-1}\, |\nabla u|\left({\rm tr}\, ((\nabla^2 u)^2)-\frac{1}{|\nabla u|^2} \nabla u^t\cdot (\nabla^2 u)^2\cdot\nabla u\right)^{\frac12}=\e\sqrt{n-1}\,|\nabla u|^2 {\mathbf B}_{u}. \end{split}$$ If $\nabla u = 0$, the inequality holds trivially.\ Regularity of stable codimension 1 integral varifolds and the proof of the main theorem {#sec3} ======================================================================================= In this section we recall the main content (Theorem \[wick\] below) of the regularity theory of the second author ([@Wickramasekera]) for stable codimension 1 integral varifolds and show how it implies our main result (Theorem \[thm2\]) concerning regularity of the limit-interfaces, modulo verification of a certain structural condition satisfied by the limit-interfaces. This structural condition is precisely given in Proposition \[mainprop\] below, and is necessary to apply Theorem \[wick\]. We shall establish its validity in the next section. Fix an integer $m \geq 2$ and $\a\in (0,1)$.
Denote by ${\mathcal S}_{\a}$ the collection of all integral $m$-varifolds $V$ on the open unit ball $B^{m+1}_{1} \subset {\mathbb R}^{m+1}$ with ${0}\in {\rm spt}\, \|V\|$, $\|V\|(B^{m+1}_{1})<\00$ and satisfying the following conditions: - (${\mathcal S} \, 1$) (Stationarity) $V$ is a critical point of the $m$-dimensional area functional in $B^{m+1}_{1}$, viz. $V$ is a stationary integral $m$-varifold on $B^{m+1}_1$.\ - (${\mathcal S} \, 2$) (Stability) $V$ satisfies $$\int_{{\rm reg}\, V}|{A}|^2\phi^2\, d\H^{m}\leq \int_{{\rm reg}\,V}|\nabla^{{\rm reg} \, V}\phi|^2\, d\H^{m} \label{stabreg}$$ for all $\phi\in C^1_c({\rm reg}\, V)$, where ${A}$ denotes the (classical) second fundamental form of ${\rm reg}\, V,$ $|{A}|$ its length and $\nabla^{{\rm reg}\, V}$ is the gradient operator on ${\rm reg} \, V$.\ - (${\mathcal S} \, 3$) ($\a$-Structural Hypothesis) For each ${x}\in {\rm sing}\, V$, there exists no $\rho>0$ such that ${\rm spt}\, \|V\|\cap B_{\rho}({x})$ is equal to the union of a finite number of $m$-dimensional embedded $C^{1,\a}$ submanifolds-with-boundary of $B_{\rho}({x})$ all having common boundary in $B_{\rho}({x})$ equal to an $(m-1)$-dimensional embedded $C^{1,\a}$ submanifold of $B_{\rho}({x})$ containing ${x}$, and no two intersecting except along their common boundary. With these hypotheses, we have the following: [(]{}[@Wickramasekera Theorem 3.1][)]{} If $V\in {\mathcal S}_{\a},$ then ${\rm sing}\, V\cap B^{m+1}_1$ is empty if $2\leq m \leq 6$, ${\rm sing} \, V \cap B_{1}$ is a discrete set of points if $m=7$ and $\H^{m-7+\gamma}({\rm sing}\, V\cap B_1)=0$ for each $\gamma>0$ if $m\geq 8$. \[wick\] An obvious yet extremely useful feature of Theorem \[wick\] is that it suffices, when applying the theorem, to verify the $\alpha$-Structural Hypothesis for points $x \in {\rm sing} \, V$ in the complement of a set $Z$ having $(m-1)$-dimensional Hausdorff measure zero; we do not need to know that such $Z$ is closed.
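The role of the $\a$-Structural Hypothesis may be illustrated by a standard example (included for orientation only): take the unit vectors ${\bf q}_{j} = \left(\cos\frac{2\pi j}{3}, \sin\frac{2\pi j}{3}\right) \in \R^{2}$, $j = 1, 2, 3,$ and let $$V_{Y} = \sum_{j=1}^{3}\left|\left(\{t{\bf q}_{j}\, : \, t\geq 0\}\times\R^{m-1}\right)\cap B^{m+1}_{1}\right|.$$ Since ${\bf q}_{1} + {\bf q}_{2} + {\bf q}_{3} = 0$, the varifold $V_{Y}$ is stationary, and ${\rm reg}\, V_{Y}$ consists of flat pieces, so $A \equiv 0$ and (\[stabreg\]) holds trivially; yet ${\rm sing}\, V_{Y} = (\{{0}\}\times\R^{m-1})\cap B^{m+1}_{1} \neq \emptyset$. Thus $V_{Y}$ satisfies (${\mathcal S} \, 1$) and (${\mathcal S} \, 2$) but not the conclusion of Theorem \[wick\] when $2 \leq m \leq 6$, and it is precisely the $\a$-Structural Hypothesis, which $V_{Y}$ violates at every singular point, that excludes it from ${\mathcal S}_{\a}$.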
We rely on this fact in an essential way in the present application, in which the $\alpha$-Structural Hypothesis is verified by way of the following proposition. We shall prove this proposition in the next section. Let $V$ be as in Theorem \[thm2\]. There exists a (possibly empty) Borel set $Z\subset{\rm spt}\,\|V\| \cap \O$ with $\H^{n-2}(Z)=0$ such that for each ${x}\in ({\rm spt}\, \|V\|\setminus Z) \cap \O$ and each tangent cone ${\mathbf C}$ to $V$ at ${x}$, ${\rm spt}\, \|{\mathbf C}\|$ is not equal to a union of three or more half-hyperplanes of $\R^n$ meeting along an $(n-2)$-dimensional affine subspace. \[mainprop\] Theorem \[thm2\] follows directly from Proposition \[mainprop\] and Theorem \[wick\]. *Proof of Theorem \[thm2\]*. Let $V$ be as in Theorem \[thm2\], $y \in {\rm spt} \, \|V\| \cap \Omega$ and $\rho \in (0, {\rm dist} \, (y, \, \partial \, \O))$. In order to prove Theorem \[thm2\], it clearly suffices to establish its conclusions with $\widetilde{V} = \eta_{y, \rho \; \#} \, V$ in place of $V$, which we can achieve by Theorem \[wick\] if we can show that $\widetilde{V} \in {\mathcal S}_{\alpha}$ for some $\alpha \in (0, 1)$. Here and subsequently, $\eta_{y, \rho} \, : \, {\mathbb R}^{n} \to {\mathbb R}^{n}$ is the map defined by $\eta_{y, \rho}(x) = \rho^{-1}(x - y)$ and $\eta_{y, \rho \, \#}$ denotes the push-forward of $V$ by $\eta_{y, \rho}.$ It follows from Theorem \[thm1\] that $\widetilde{V}$ satisfies (${\mathcal S} \, 1$). To verify that $\widetilde{V}$ satisfies (${\mathcal S} \, 2$), note the following two facts: (i) on the regular part of the varifold, the generalized second fundamental form in (\[stability\]) coincides with the classical second fundamental form ([@Hutchinson]); (ii) by the constancy theorem ([@Simon], Theorem 41.1), the multiplicity function of $V$ is constant on each connected component of ${\rm reg}\, V$ so we can replace $d\|V\|$ in (\[stability\]) by $d\H^{n-1}$ whenever $\phi \in C^{1}_{c}(\Omega \setminus {\rm sing} \, V)$.
Thus given $\phi \in C^{1}_{c} \, ({\rm reg} \, \widetilde{V})$, we may choose any extension $\widetilde{\phi} \in C^{1}_{c}(B_{1} \setminus {\rm sing} \, \widetilde{V})$ of $\phi$ such that $\nabla \widetilde{\phi} = \nabla^{{\rm reg} \, \widetilde{V}} \phi$ on ${\rm reg} \, \widetilde{V}$ and use (\[stability\]) with $\widetilde{\phi} \circ \eta_{y, \, \rho}$ in place of $\phi$ to deduce that $\widetilde{V}$ satisfies (${\mathcal S} \, 2$). To verify that $V$ (and hence also $\widetilde{V}$) satisfies (${\mathcal S} \, 3$) (for any $\alpha \in (0, 1)$), let $\alpha \in (0, 1)$ and suppose that there is a point ${x}\in {\rm spt}\, \|V\|\cap\O$ and $\rho \in (0, {\rm dist}({x},\partial \O))$ such that ${\rm spt}\, \|V\|\cap B_{\rho}({x})$ is a union of three or more $C^{1,\a}$ hypersurfaces-with-boundary meeting along a common $(n-2)$-dimensional $C^{1,\a}$ submanifold $L$ of $B_{\rho}({x})$ with ${x}\in L$. It is standard to see, with the help of the Hopf boundary point lemma for divergence form elliptic operators ([@FinnGilbarg Lemma 7], see also [@HardtSimon Lemma 10.1]) that at every point along $L$, these hypersurfaces-with-boundary must meet transversely. Hence the unique tangent cone to $V$ at any $\tilde{x}\in L$ is supported by a union of three or more half-hyperplanes meeting along a common $(n-2)$-dimensional subspace. For any $\widetilde{x} \in L\setminus Z$, this directly contradicts Proposition \[mainprop\], where $Z$ is the set as in Proposition \[mainprop\]. Note that $L \setminus Z \neq \emptyset$ since $\H^{n-2}(Z)=0$ by Proposition \[mainprop\]. Thus $V$ must satisfy (${\mathcal S} \, 3$). Hence $\widetilde{V} \in {\mathcal S}_{\alpha}$, and Theorem \[thm2\] follows from Theorem \[wick\]. Structural condition for the stable limit-interfaces {#sec4} ==================================================== To complete the proof of Theorem \[thm2\], it only remains to give a proof of Proposition \[mainprop\], which we shall do in this section.
Let the hypotheses be as in (A1)-(A3) and let $\nu$ be the Radon measure on $\O$ defined by (\[secondest2\]). Let $V$ be as in Theorem \[thm2\], obtained possibly after passing to a suitable subsequence of $\{\e_{i}\}$ and the corresponding subsequence of $\{u_{\e_{i}}\}.$ Let $$Z =\left\{{x}\in {\rm spt}\,\|V\| \cap \O\, :\, \limsup_{r\-> 0}\frac{\nu(B_r({x}))}{r^{n-3}} >0\right\}.$$ It is standard to see that $\H^{n-3+\gamma}(Z)=0$ for each $\gamma>0$ and thus in particular that $\H^{n-2}(Z)=0$. We will show that Proposition \[mainprop\] holds with this $Z.$ To obtain a contradiction assume that we have a point ${x}\in{\rm spt}\, \|V\|\setminus Z$ where a tangent cone ${\mathbf C}$ to $V$ has the property that ${\rm spt}\, \|{\mathbf C}\|$ is equal to a union of three or more half-hyperplanes meeting along a common $(n-2)$-dimensional subspace $S({\mathbf C})$. Without loss of generality we may assume that ${x}={0}$ and that $S({\mathbf C}) = \{0\} \times {\mathbb R}^{n-2}.$ Thus we may write $${\rm spt}\, \|{\mathbf C}\|= \cup_{j=1}^{N} P_{j}$$ for some $N \geq 3,$ where for each $j=1, 2, \ldots, N,$ $$P_{j} = \{t{\mathbf p}_{j} \, : \, t \geq 0\} \times {\mathbb R}^{n-2}$$ with ${\bf p}_1,\cdots,{\bf p}_N\in \R^2$ distinct vectors such that $|{\bf p}_{j}|=\frac12$ for $j=1, 2, \ldots, N.$ By the definition of tangent cone, there exists a sequence $r_i\->0$ such that $\lim_{i\->\00}\y_{r_i\,\#}V={\mathbf C}$. Here $\eta_{r}$ is the map $x \mapsto r^{-1}x.$ Since $V_{\e_i}\-> V$, we may choose a subsequence of $\{\e_{i}\}$ for which, after relabeling, we have that $\lim_{i\->\00}\y_{r_i\,\#} \, V_{\e_i}={\mathbf C}$ and $\lim_{i\->\00}\frac{\e_i}{r_i} =0$.
Letting $\tilde{\e}_i=\frac{\e_i}{r_i}$ and defining $\tilde{u}_{\tilde{\e}_i}(\tilde{x})=u_{\e_i}(r_i\tilde{x})$, we then have that $$\y_{r_i\,\#} \, V_{\e_i}(\phi)=\frac{1}{\sigma}\int_{\{|\nabla \tilde{u}_{\tilde{\e}_{i}}| > 0\}} \phi(\tilde{x},{\bf I} -\tilde{\mathbf n}_{\tilde{\e}_i}\otimes\tilde{\mathbf n}_{\tilde{\e}_i})\frac{\tilde{\e}_i}{2} |\nabla \tilde{u}_{\tilde{\e}_i}|^2\, d{\tilde{x}}$$ for $\phi\in C_c(G_{n-1}(r_i^{-1}\O))$, where $\tilde{\mathbf n}_{\tilde{\e}_{i}} = \frac{\nabla \tilde{u}_{\tilde{\e}_{i}}}{|\nabla \tilde{u}_{\tilde{\e}_{i}}|}$. Since $\lim_{i\->\00}\frac{\nu(B_{2r_i})}{(2r_i)^{n-3}}=0$, we may choose a further subsequence of $\{\e_{i}\}$ without changing notation such that $$\lim_{i\->\00}\frac{1}{(2r_i)^{n-3}}\int_{B_{2r_i}}\e_i{\mathbf B}_{\e_i}^2 |\nabla u_{\e_i}|^2\, d{x}=0. \label{secdec}$$ With the change of variables as above, this is equivalent to $$\lim_{i\->\00}\int_{B_2}\tilde{\e}_i{\tilde{\mathbf B}}_{\tilde{\e}_i}^2|\nabla \tilde{u}_{\tilde{\e}_i}|^2\, d\tilde{x}=0,$$ where ${\tilde{\mathbf B}}_{\tilde{\e}_i}$ is defined by (\[defsec\]) with $\tilde{u}_{\tilde{\e}_i}$ in place of $u$. By [@HutchinsonTonegawa Prop.
3.4], for each open set $U$ with $U \subset\subset \O$, there exist constants $c = c(c_{1}, n, {\rm dist}\, (U, \partial \, \O))$ and $\e_{0} = \e_{0}(c_{1}, n, {\rm dist} \, (U, \partial \, \O))$ such that if $B_{r}({x_{0}}) \subset U$ and $s \in (0, r]$, then for all $i$ sufficiently large to ensure $\e_{i} \leq \e_{0}$, $$\begin{aligned} \label{monotonicity} &&r^{1-n} \int_{B_{r}(x_{0})} \left(\e_{i}\frac{|\nabla u_{\e_{i}}|^{2}}{2} + \frac{W(u_{\e_{i}})}{\e_{i}}\right) \, dx - s^{1-n} \int_{B_{s}(x_{0})} \left(\e_{i}\frac{|\nabla u_{\e_{i}}|^{2}}{2} + \frac{W(u_{\e_{i}})}{\e_{i}}\right) \, dx\nonumber\\ &&\hspace{.5in} \geq \int_{s}^{r} \left(\t^{-n}\int_{B_{\t}(x_{0})} \left(\frac{W(u_{\e_{i}})}{\e_{i}} - \frac{\e_{i}}{2}|\nabla u_{\e_{i}}|^{2}\right)^{+} \, dx\right) \, d\t -cr\nonumber\\ &&\hspace{2.5in} + \, \e_{i}\int_{B_{r}(x_{0}) \setminus B_{s}(x_{0})} \frac{\left((y - x_{0}) \cdot \nabla u_{\e_{i}}\right)^{2}}{|y - x_{0}|^{n+1}} \, dy.\end{aligned}$$ This implies in particular that $$\int_{B_{2}} \frac{\tilde{\e}_{i}}{2}|\nabla \tilde{u}_{\tilde{\e}_{i}}|^{2} + \frac{W(\tilde{u}_{\tilde{\e}_{i}})}{\tilde{\e}_{i}} \leq C$$ for all sufficiently large $i$, where $C$ is a positive constant depending only on $n$, $c_{1}$ and $c_{2}.$ Thus hypotheses (A1)-(A3) are satisfied with $\tilde{\e}_{i}$ in place of $\e_{i}$, $\tilde{u}_{\tilde{\e}_{i}}$ in place of $u_{{\e}_{i}}$ and $C$ in place of $c_{1},$ so by replacing the original sequences $\{\e_{i}\},$ $\{u_{\e_{i}}\}$ with the new sequences $\{\tilde{\e}_{i}\},$ $\{\tilde{u}_{\tilde{\e}_{i}}\}$ and the constant $c_{1}$ with $C$, we have that (A1)-(A3) hold with $\O = B_{2}$, together with the additional facts that $$\label{cone} \lim_{i\->\00}V_{\e_i}={\mathbf C}$$ where the convergence is as varifolds on $B_{2}$ and that $$\label{secdec2} \lim_{i\->\00}\int_{B_2}{\e}_i{{\mathbf B}}_{{\e}_i}^2|\nabla {u}_{{\e}_i}|^2 =0.$$ *For the rest of the discussion we shall assume that $W$, $\{\e_{i}\}$, 
$\{u_{\e_{i}}\}$ satisfy (A1)-(A3) with $\O = B_{2},$ as well as (\[cone\]) and (\[secdec2\])*. Our goal is to derive a contradiction. Set $c_3=\frac12 \sqrt{\min_{|t|\leq\frac34}W(t)}>0$ and let $$D_{\e_i}=\left\{{z}\in B_1^{n-2}\,:\, |\nabla u_{\e_i}({y},{z})| \geq \frac{c_3}{\e_i}\mbox{ holds for all }{y}\in B_1^2\mbox{ with }|u_{\e_i} ({y},{z})|\leq \frac12\right\}.$$ Then we have that $$\lim_{i\->\00}\L^{n-2}(B_1^{n-2}\setminus D_{\e_i})=0.$$ \[lemma1\] [**Remark:**]{} Note that $D_{\e_i}$ contains the set $D^{\prime}_{\e_{i}}$ of points ${z} \in B_{1}^{n-2}$ where $|u_{\e_i}({y},{z})|>\frac12$ for all ${y}\in B_1^2$. We shall prove that $D^{\prime}_{\e_{i}}$ is small in Lemma \[lemma2\] below.\ [*Proof*]{}. For each $i=1, 2, 3, \ldots,$ let $\{B_{\e_i}^{n-2}({z}_{i,j})\}_{j=1}^{J_i}$ be a maximal pairwise disjoint collection of balls such that ${z}_{i,j}\in B_1^{n-2}\setminus D_{\e_i}$ for $j=1,\cdots,J_i.$ Then $B_1^{n-2}\setminus D_{\e_i}\subset \cup_{j=1}^{J_i}B_{2\e_i}^{n-2}({z}_{i,j})$ and by the definition of $D_{\e_i},$ there exists, for each $j=1, 2, \ldots, J_{i}$, a point ${y}_{i,j}\in B_1^2$ such that $|u_{\e_i}({y}_{i,j},{z}_{i,j})|\leq\frac12$ and $|\nabla u_{\e_i}({y}_{i,j},{z}_{i,j})|<\frac{c_3}{\e_i}$. By standard elliptic estimates we have that $$\sup_{B_{15/8}} \, |\nabla^{2} \, u_{\e_{i}}| \leq C \sup_{B_{2}} \, \left(|u_{\e_{i}}| + \frac{|W^{\prime}(u_{\e_{i}})|}{\e_{i}^{2}}\right)$$ where $C = C(n)$, whence in view of the hypothesis $\sup_{B_{2}} \, |u_{\e_{i}}| \leq c_{2}$, there exists a fixed number $r_{0} \in (0, 1],$ depending only on $n$, $W$ and $c_2,$ such that $$|u_{\e_i}({x})|\leq \frac34\mbox{ and }|\nabla u_{\e_i}({x})|< \frac{2c_3}{\e_i} \label{small}$$ for each ${x}\in B_{2r_0 \e_i}({y}_{i,j},{z}_{i,j})$. On this ball we have $$v_{\e_i}\equiv \frac{W(u_{\e_i})}{\e_i}-\frac{\e_i|\nabla u_{\e_i}|^2}{2} \geq \frac{1}{\e_i}\left(\min_{|t|\leq \frac34}W(t)-2c_3^2\right) \geq \frac{2c_3^2}{\e_i}.
\label{big}$$ Since $B^2_{r_0 \e_i}({y}_{i,j})\times\{{z}\}\subset B_{2r_0\e_i}({y}_{i,j}, {z}_{i, j})$ for each ${z}\in B_{r_0 \e_i}^{n-2}({z}_{i,j})$, we have by (\[big\]) that $$2c_3^2 \sqrt{\pi}\, r_0\leq \left(\int_{B_{r_0\e_i}^2 ({y}_{i,j})} (v_{\e_i}({y},{z}))^2\, d{y}\right)^{\frac12} \label{big2}$$ for each ${z}\in B_{r_0 \e_i}^{n-2}({z}_{i,j}).$ On the other hand, by the relevant 2-dimensional Sobolev inequality and (\[discrepancyest\]), $$\begin{split} &\left(\int_{B^2_{r_0\e_i}({y}_{i,j})}(v_{\e_i}({y},{z}))^2\, d{y}\right)^{\frac12} \leq \left(\int_{B^2_{1+r_0\e_i}}(v_{\e_i}({y},{z}))^2 \, d{y}\right)^{\frac12}\\ &\hspace{.5in} \leq C\int_{B^{2}_{1+r_{0}\e_{i}}}|v_{\e_{i}}(y, z)| + |\nabla_{y} v_{\e_i}({y},{z})|\, d{y}\\ &\hspace{.5in} \leq C\int_{B^{2}_{1+r_{0}\e_{i}}} |v_{\e_{i}}(y, z)| \, dy + C\sqrt{n-1}\e_{i}\int_{B^2_{1+r_{0}\e_{i}}}{\mathbf B}_{\e_i}(y, z)|\nabla u_{\e_i}(y, z)|^2\, d{y} \end{split} \label{big3}$$ where $C$ is the relevant Sobolev constant. Combining (\[big2\]) and (\[big3\]) we obtain that $$c_3^2\sqrt{\pi}r_0\leq C\int_{B^{2}_{1+r_{0}\e_{i}}} |v_{\e_{i}}(y, z)| \, dy + C\sqrt{n-1}\e_{i}\int_{B^2_{1+r_{0}\e_{i}}}{\mathbf B}_{\e_i}(y, z)|\nabla u_{\e_i}(y, z)|^2\, d{y}$$ for all ${z}\in B^{n-2}_{r_0 \e_i}({z}_{i,j})$.
Integrating this over $B_{r_0\e_i}^{n-2}({z}_{i,j})$ first, summing over $j$ and using the fact that $\{B_{r_0\e_i}^{n-2}({z}_{i,j})\}_{j=1}^{J_i}$ are pairwise disjoint, we obtain with the help of the Cauchy-Schwarz inequality that $$\begin{split} &c_3^2\sqrt{\pi}r_0^{n-1}\o_{n-2}\e_i^{n-2}J_i\leq C\int_{B^{2}_{1 + r_{0}\e_{i}} \times B^{n-2}_{1 + r_{0}\e_{i}}} |v_{\e_{i}}| \, dx + C\sqrt{n-1}\e_{i}\int_{B^2_{1 + r_{0}\e_{i}}\times B^{n-2}_{1 + r_{0}\e_{i}}}{\mathbf B}_{\e_i}|\nabla u_{\e_i}|^2\, d{x}\\ &\hspace{.5in} \leq C\int_{B_{15/8}}|v_{\e_{i}}| \, dx + C\sqrt{n-1}\left(\int_{B_{15/8}}\e_{i}|\nabla \, u_{\e_{i}}|^{2} \, dx \right)^{1/2}\left(\int_{B_{15/8}}\e_{i}{\mathbf B}_{\e_{i}}^{2}|\nabla u_{\e_{i}}|^{2} \, dx\right)^{1/2}\\ &\hspace{.5in} \leq C\int_{B_{15/8}}|v_{\e_{i}}| \, dx + Cc_{1}\sqrt{n-1}\left(\int_{B_{15/8}}\e_{i}{\mathbf B}_{\e_{i}}^{2}|\nabla u_{\e_{i}}|^{2} \, dx \right)^{1/2}. \end{split} \label{big4}$$ Since $\L^{n-2}(B_1^{n-2}\setminus D_{\e_i})\leq J_i\o_{n-2}(2\e_i)^{n-2}$, the lemma follows from Theorem \[thm1\](2), (\[secdec2\]) and (\[big4\]).\ Choose $\d>0$ small enough depending on ${\mathbf C}$ so that $\{B_{2\d}^2({\bf p}_j)\}_{j=1}^{N}$ are disjoint, and define $$Q_{\e_i}=\left\{{z}\in B_1^{n-2}\,:\, \forall t\in [-1/2,1/2 ],\, \forall j\in\{1,\cdots,N\},\, \exists {y}\in B_{\d}^2({\bf p}_j)\, \mbox{ s.t. }u_{\e_i}({y},{z})=t\right\}.$$ The next lemma is obtained by re-examining [@HutchinsonTonegawa Section 5]. With $Q_{\e_{i}}$ as above, we have that $$\lim_{i\->\00}\L^{n-2}(B_1^{n-2}\setminus Q_{\e_i})=0.$$ \[lemma2\] [*Proof*]{}. For $j=1, 2, \ldots, N$, let $$Q_{\e_i,j} =\left\{{z}\in B^{n-2}_1\,:\, \forall t\in [-1/2,1/2],\, \exists {y}\in B^2_{\d}({\bf p}_j)\mbox{ s.t. }u_{\e_i}({y},{z})=t\right\}.$$ It suffices to prove that $\lim_{i\->\00}\L^{n-2}(B_1^{n-2}\setminus Q_{\e_i,j})=0$ for each $j=1, 2, \ldots, N$. Thus without loss of generality, we may assume $j=1$, $P_1=\{y_2=0\}\cap\{y_1 \geq 0\}$ and ${\bf p}_1=(1/2,0)$.
With these coordinates, ${\rm spt}\, \|{\mathbf C}\| \cap B_{\d}({\bf p}_1,{z})=\{y_2=0\}\cap B_{\d}({\bf p}_1,{z})$ for each ${z}\in \R^{n-2}$. On $B_{\d}({\bf p}_1,{z})$ with ${z}\in B^{n-2}_1$, $V_{\e_i}$ converge to $\theta_1 |P_1|$ as varifolds, where $\theta_1 \in {\mathbb N}$ is the multiplicity of ${\mathbf C}$ on $P_1$. By Theorem \[thm1\], the sets $B_{\d}({\bf p}_1,{z})\cap\{|u_{\e_i}|\leq \frac12\}$ converge to $P_1 \cap B_{\d}({\bf p}_1,{z})$ in Hausdorff distance. Note that $u_{\e_i}({x})$ converges to different values ($\pm 1$) uniformly on $\left(\cup_{z \in B_{1}^{n-2}} \, B_{\d}({\bf p}_1,{z})\right)\cap\{y_2 >\frac{\d}{2}\}$ and $\left(\cup_{z \in B_{1}^{n-2}} \, B_{\d}({\bf p}_1,{z})\right) \cap \{y_2<-\frac{\d}{2}\}$ in case $\theta_1$ is odd, and to the same value if $\theta_{1}$ is even (see the discussion in [@HutchinsonTonegawa p. 78]). Hence if $\theta_{1}$ is odd, by continuity of $u_{\e_i}$, the function $y_2 \mapsto u_{\e_i}(\frac12,y_2,{z})$ as $y_2$ ranges over $[-\frac{\d}{2}, \frac{\d}{2}]$ takes all values between $-\frac12$ and $\frac12$, so that in this case we see that $B^{n-2}_1= Q_{\e_i,1}$ for all sufficiently large $i,$ proving the lemma. If $\theta_1$ is even, we need to utilize results in [@HutchinsonTonegawa Section 5]. Assume without loss of generality that $u_{\e_i}$ converges to $+1$ on both sides of $\{y_2>0\}$ and $\{y_2<0\}$ on $B_{\delta}({\bf p}_1,z)$. Let $$\begin{split} &\hat{B}_{\delta/2}({\bf p}_1,z)=\{(\hat{y}_1, \hat{y}_2,\hat{z})\,:\, (\hat{y}_1-1/2)^2+|\hat{z}-z|^2<(\delta/2)^2, \,\, |\hat{y}_2|<\delta/2\} \;\; {\rm and}\\ &S_i=\left\{x\in B_{\delta/2}({\bf p}_1,z)\cap P_1\,:\, \exists t\in [-1/2, 1/2],\,\, \mbox{with} \,\, \{u_{\e_i}=t\}\cap T_1^{-1}(x)\cap \hat{B}_{\delta/2}({\bf p}_1,z)=\emptyset\right\}. \end{split}$$ Here $T_1$ is the orthogonal projection ${\mathbb R}^n\rightarrow \{y_{2} = 0\}$. 
By the continuity of the $u_{\e_i}$’s and their local uniform convergence to $+1$ away from $P_1$, we have for any $b \in (0, 1/2)$ that $S_i\subset S_i^b\cup \hat{S}_i^b$, where $$\begin{split} S_i^b=&\left\{x\in B_{\delta/2}({\bf p}_1,z)\cap P_1 \,:\, \{u_{\e_i}\leq 1-b\} \cap T_1^{-1}(x)\cap \hat{B}_{\delta/2}({\bf p}_1,z)=\emptyset\right\} \;\; {\rm and}\\ \hat{S}_i^b=&\left\{x\in B_{\delta/2}({\bf p}_1,z)\cap P_1\,:\, \inf_{T_1^{-1}(x)\cap \hat{B}_{\delta/2}({\bf p}_1,z)} u_{\e_i}\in [-1/2,1-b] \right\}. \end{split}$$ We claim that for any given sufficiently small $s>0,$ we can choose small $b = b(s, W)>0$ such that $$\limsup_{i\rightarrow\infty}{\mathcal L}^{n-1}(S_i^b) \leq c(\sigma,n,\theta_1)s. \label{Si1}$$ To see this, we argue as follows: Note first that for any given $s \in (0,1)$ we have the estimates (5.5)-(5.8) of [@HutchinsonTonegawa] with $B_{\delta} ({\bf p}_1,z)$ in place of $B_{3}$, where $b = b(s, W) >0$ in (5.8) is given by [@HutchinsonTonegawa Prop. 5.1]. Choose $\eta = \eta(s, W, \delta, \theta_{1}) \in (0, 1)$ and $L = L(s, W) \in (1, \infty)$ as in [@HutchinsonTonegawa Prop. 5.5, 5.6] with $R = \delta$, $N = \theta_{1},$ and define $G_i$ by $$\begin{split} G_i=&\hat{B}_{\delta/2}({\bf p}_1,z) \cap\{|u_{\e_i}|\leq 1-b\}\cap\\ &\left\{x\,: \, \int_{B_r(x)}\left(\left|\frac{\e_i|\nabla u_{\e_i}|^2}{2} -\frac{W(u_{\e_i})}{\e_i}\right|+(1-\nu_{2, \, i}^2)\e_i|\nabla u_{\e_i}|^2\right) \leq \eta r^{n-1}\,\mbox{ if }\, 4\e_i L\leq r\leq \delta\right\} \end{split} \label{Gi}$$ where $\nu_{2, \, i} = |\nabla u_{\e_{i}}|^{-1}\frac{\partial \, u_{\e_{i}}}{\partial y_{2}}$ if $\nabla \, u_{\e_{i}} \neq 0$ and $\nu_{2, \, i} = 0$ otherwise. With the help of the Besicovitch covering theorem and (\[monotonicity\]) we see that\ $$\|V_{\e_i}\|(\hat{B}_{\delta/2}({\bf p}_1,z)\cap\{|u_{\e_i}|\leq 1-b\}\setminus G_i)+{\mathcal L}^{n-1}(T_1(\hat{B}_{\delta/2}({\bf p}_1,z)\cap \{|u_{\e_i}|\leq 1-b\}\setminus G_i))\rightarrow 0. 
\label{Gigoto0}$$ We also note that there exists $c=c(b,s,W)$ with the following property: for a.e. $t \in [-1+b, 1 - b],$ the level set $\{u_{\e_{i}} = t\}$ is an $(n-1)$-dimensional $C^3$ surface and for any $x\in G_i$ with $u_{\e_i}(x)=t$, the set $\{u_{\e_i}=t\} \cap B_{L\e_{i}}(x)$ is a graph $y_2=f(y_1,z_1,\cdots,z_{n-2})$ of a $C^{1}$ function $f \, : \, T_{1}(\{u_{\e_{i}} =t\} \cap B_{L\e_{i}}(x)) \to {\mathbb R}$ with $|\nabla f| \leq c\eta^{1/(n+1)}$ on $T_{1}(\{u_{\e_{i}} =t\} \cap B_{L\e_{i}}(x))$. This follows from the proof of [@HutchinsonTonegawa Prop. 5.6], which yields that for any $x = (y_{1}^{\star}, y_{2}^{\star}, z^{\star})\in G_{i}$, the function $u_{\e_i}$ in the neighborhood $B_{L\e_{i}}(x)$ is $C^{1}$ close to $\pm q((y_2-y_{2}^{\star})/\e_i),$ where $q$ is the standing wave solution defined by the ODE $q''=W'(q)$ with $q(\pm \infty)=\pm 1$; specifically, letting $\tilde{u}_{\e_i} (\tilde{y}_1,\tilde{y}_2,\tilde{z})=u_{\e_i}(\e_i\tilde{y}_1+y_1^{\star}, \e_i\tilde{y}_2+y_2^{\star},\e_i\tilde{z}+z^{\star})$ and $\tilde{q} (\tilde{y}_1,\tilde{y}_2,\tilde{z})=\pm q(\tilde{y}_2+c)$ (so that $q(c)=t$), we have that $\|\tilde{u}_{\e_i}-\tilde{q}\|_{C^1(B_L)}\leq c\eta^{1/(n+1)}$. In particular, we choose $\eta = \eta(s, W, \delta, \theta_{1}) \in (0, 1)$ so small that $$\sqrt{1+|\nabla f|^2}\leq 1+s. \label{gradsmall}$$ For $x\in P_1\cap B_{\delta/2} ({\bf p}_1,z)$ and $|t|\leq 1-b$, define $Y^i_x(t)=T_1^{-1}(x) \cap G_i\cap\{u_{\e_i}=t\}$. We claim that the cardinality $\# Y_x^i(t)$ of $Y_x^i(t)$ is $\leq \theta_1$. To see this, assume for a contradiction that $\# Y_x^i (t)\geq \theta_1+1$, and let $Y'$ be any subset of $Y_x^i (t)$ such that $\# Y'=\theta_1+1$. Then we have by [@HutchinsonTonegawa Prop. 5.6] $$I \equiv \sum_{\tilde{x}\in Y'}\frac{1}{\omega_{n-1}(L\e_i)^{n-1}} \int_{B_{L\e_i}(\tilde{x})}\frac{\e_i|\nabla u_{\e_i}|^2}{2}+\frac{W(u_{\e_i})}{\e_i} \geq (\# Y')(2\sigma -s) \label{Iineq1}$$ while by [@HutchinsonTonegawa Prop. 
5.5], $$\begin{split} I&\leq s+\frac{1+s}{\omega_{n-1}\delta^{n-1}} \int_{\{\tilde{x}\,|\, {\rm dist}\,(Y',\tilde{x})<\delta\}} \frac{\e_i|\nabla u_{\e_i}|^2}{2}+\frac{W(u_{\e_i})}{\e_i}\\ &\leq s+\frac{1+s}{\omega_{n-1}\delta^{n-1}}2\|V_{\e_i}\| (B_{\delta+o(1)}(x))+o(1) \end{split} \label{Iineq2}$$ where $o(1)\rightarrow 0$ as $i\rightarrow\infty$ uniformly in $x \in B_{\delta/2}({\bf p}_1,z)\cap P_1$. Since $$\|V_{\e_i}\| (B_{\delta+o(1)}(x))\leq \sigma\theta_1\delta^{n-1}\omega_{n-1} +o(1), \label{Iineq3}$$ having $\# Y'= \theta_1 +1$ would contradict - for all sufficiently large $i,$ provided $s>0$ is smaller than a number depending only on $\theta_1,\, n,\,\delta$ and $\sigma$. We may of course assume that $s>0$ is smaller than this number to begin with. Now defining $w^i$ as in [@HutchinsonTonegawa page 52], we have by [@HutchinsonTonegawa (5.8)] and that $$\begin{split} (\delta/2)^{n-1}\omega_{n-1}\theta_1\sigma&=\lim_{i\rightarrow\infty} \int_{\hat{B}_{\delta/2}({\bf p}_1,z)}|\nabla w^i|\\ &\leq s+\liminf_{i\rightarrow\infty}\int_{G_i}|\nabla u_{\e_i}|\sqrt{W(u_{\e_i})/2}. \end{split} \label{inty1}$$ Using the co-area formula, , the fact that $\# Y^i_x(t)\leq \theta_1$ and , we see that $$\begin{split} (\delta/2)^{n-1}\omega_{n-1}\theta_1\sigma&\leq s+\liminf_{i\rightarrow\00} \int_{-1+b}^{1-b}{\mathcal H}^{n-1}(\{u_{\e_i}=t\}\cap G_i)\sqrt{W(t)/2}\, dt\\ &\leq s+\sigma\theta_1(1+s)\liminf_{i\rightarrow\00}{\mathcal L}^{n-1}(T_1(G_i)). \end{split} \label{t3}$$ Note that $T_1(G_i)\cap S^b_i=\emptyset$ by the definition of $G_{i}$ and $S_{i}^{b}$, and hence ${\mathcal L}^{n-1}(T_1(G_i))\leq \omega_{n-1}(\delta/2)^{n-1}-{\mathcal L}^{n-1} (S_i^b)$. In view of , this implies that $\limsup_{i\rightarrow\00} {\mathcal L}^{n-1}(S_i^b)\leq c(\sigma,n,\theta_1)s$, completing the proof of . 
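As an illustrative aside, the standing wave $q$ invoked above satisfies $q''=W'(q)$ with $q(\pm\infty)=\pm1$. For the prototypical double-well potential $W(u)=\frac14(1-u^2)^2$ — our choice for illustration, since this excerpt does not fix $W$ — the solution is $q(y)=\tanh(y/\sqrt2)$, which can be checked by finite differences:

```python
import numpy as np

# Illustration only: the excerpt does not fix W, so we use the prototypical
# double well W(u) = (1 - u^2)^2 / 4, for which the standing wave solution of
# q'' = W'(q), q(+-inf) = +-1, is q(y) = tanh(y / sqrt(2)).
def q(y):
    return np.tanh(y / np.sqrt(2.0))

def W_prime(u):
    return -u * (1.0 - u ** 2)   # derivative of (1 - u^2)^2 / 4

y = np.linspace(-4.0, 4.0, 801)
h = y[1] - y[0]
q_second = (q(y + h) - 2.0 * q(y) + q(y - h)) / h ** 2   # central difference

max_err = np.max(np.abs(q_second - W_prime(q(y))))
print(max_err)   # O(h^2): the ODE q'' = W'(q) holds to discretization error
```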
We next verify that $$\hat{S}_i^b\subset T_1(\{|u_{\e_i}|\leq 1-b\}\cap\hat{B}_{\delta/2}( {\bf p}_1,z)\setminus G_i) \label{Si2}$$ as follows: For any $x=(\hat{y}_{1},0,\hat{z})\in \hat{S}_i^b$, there exist $\hat{y}_2$ with $|\hat{y}_2|\leq \delta/4$ and $t\in [-1/2,1-b]$ with $u_{\e_i}(\hat{y}_1,\hat{y}_2,\hat{z})=t$. If $(\hat{y}_1,\hat{y}_2,\hat{z})\in G_i$, again as above we have by [@HutchinsonTonegawa Prop. 5.6] that $u_{\e_i}$ is $C^1$ close to $q((y_2-\hat{y}_{2})/\e_i)$ in the $L\e_i$-neighborhood of $(\hat{y}_1,\hat{y}_2,\hat{z})$. In particular, we would then have $T_1^{-1}(x)\cap\{u_{\e_i}=-3/4\}\cap\hat{B}_{\delta/2}({\bf p}_1,z)\neq \emptyset$, contradicting the assumption that $x \in \hat{S}_i^b$. Thus $(\hat{y}_1,\hat{y}_2,\hat{z})\in \{|u_{\e_i}|\leq 1-b\}\cap\hat{B}_{\delta/2}( {\bf p}_1,z)\setminus G_i$, proving . It follows from and that $$\lim_{i\rightarrow\00}{\mathcal L}^{n-1}(\hat{S}_i^b)=0. \label{Si3}$$ Since $S_i\subset S_i^b\cup\hat{S}_i^b$, it follows from , and arbitrariness of $s>0$ that $$\lim_{i\rightarrow\00}{\mathcal L}^{n-1}(S_i)=0. \label{Sif}$$ Now to complete the proof, assume contrary to the assertion of the lemma that $$\limsup_{i\rightarrow\00}{\mathcal L}^{n-2}(B_1^{n-2}\setminus Q_{\e_i,1})>0.$$ Then for some $z\in B_1^{n-2},$ we must have $$\limsup_{i\rightarrow\00}{\mathcal L}^{n-2}(B_{\delta/4}^{n-2}(z) \setminus Q_{\e_i,1})>0. \label{Sig}$$ Take any $z'\in B_{\delta/4}^{n-2}(z)\setminus Q_{\e_i,1}$. For any $y_1$ with $|y_1 - 1/2|<\delta/4$, we have $$x=(y_1,0,z')\in S_i; \label{Sig2}$$ for if not, there would exist $y_1$ with $|y_1 - 1/2|<\delta/4$ such that $x=(y_1,0,z')\notin S_i$ so that $u_{\e_i}(y_1,y_2,z')$ must take all values $t\in [-1/2,1/2]$ as $y_2$ ranges over $[-\delta/2,\delta/2]$. But this would mean that $z'\in Q_{\e_i,1},$ contrary to our assumption. 
Thus the claim holds, and says that $$Z_i \equiv \{(y_1,0,z')\,:\, z'\in B_{\delta/4}^{n-2}(z)\setminus Q_{\e_i,1}, \,\, |y_1 - 1/2|<\delta/4\}\subset S_i. \label{Sig3}$$ But then since ${\mathcal L}^{n-1}(Z_i)=\frac{\delta}{2}{\mathcal L}^{n-2}(B_{\delta/4}^{n-2}(z)\setminus Q_{\e_i,1})$, the statements , and are contradictory, completing the proof of the lemma.\ In Lemma \[lemma3\] below we shall make hypotheses and use notation as follows: Let $u\in C^2(B^2_1\times B^{n-2}_1)$ and suppose that $t$ is a regular value of $u$ with $M = u^{-1}(t) \neq \emptyset,$ so that $M$ is an $(n-1)$-dimensional embedded $C^2$ submanifold of $B^{2}_{1} \times B^{n-2}_{1}$. Let ${\mathbf B}_{u}$ be the function defined by . Let $L$ be the set of points $z \in B_{1}^{n-2}$ satisfying the following two requirements: (i) $T_{2}^{-1}(z) \cap M \neq \emptyset$ and (ii) $z$ is a regular value of the map $\left.T_{2}\right|_{M} \, : \, M \to {\mathbb R}^{n-2}$, where $T_{2} \, : \, {\mathbb R}^{n} \to {\mathbb R}^{n-2}$ is the orthogonal projection. Then for each $z \in L,$ $\ell_{z} \equiv T_2^{-1}({z})\cap M$ is a $C^{2}$ 1-manifold. For $z \in L,$ let $\k_{z}(p)$ denote the curvature of $\ell_{z}$ at $p \in \ell_{z}$. Let the hypotheses and notation be as described in the preceding paragraph. Then we have for any Borel set $G\subset L,$ $$\int_G\, d{z}\int_{\ell_{z}}|\k_{z}|\, ds\leq \left(\int_{M\cap T_2^{-1}(G)}{\mathbf B}_{u}^2 \, d\H^{n-1}\right)^{\frac12} \left(\H^{n-1}(M\cap T_2^{-1}(G))\right)^{\frac12}.$$ \[lemma3\] [*Proof*]{}. Since $L$ is open in $B^{n-2}_1$, we may choose an increasing sequence of open sets $L_i\subset\subset L$ such that $\cup_{i=1}^{\00}L_i=L$. Then $M_i=T_2^{-1}(\overline{L_i})\cap M$ is a $C^2$ submanifold-with-boundary $\partial M_i=T_2^{-1}(\partial L_i) \cap M$ in $B_1^2\times B^{n-2}_1$. On $M_i$, $(u_{y_1},u_{y_2})\neq (0,0)$. 
Hence for each point $x \in M_{i}$, there exists $\rho_{x} >0$ such that $M \cap B_{\rho_{x}}(x)$ is the graph of a $C^{2}$ function $v$ defined over an open subset $U$ of either the plane $\{y_{1} = 0\}$ or the plane $\{y_{2} = 0\}.$ We may cover $M_i$ with a finite number of such coordinate charts $M_{i} \cap B_{\rho_{x}}(x)$. Let us now fix such a chart, and assume without loss of generality that $U \subset \{y_{2} = 0\}$ for that chart, so that $v = v(y_{1}, z)$ for $(y_{1}, z) \in U$ and by the definition of $M,$ $v$ satisfies $u(y_{1}, v(y_{1}, z), z) =t$ for all $(y_{1}, z) \in U.$ In particular, for each $z \in T_{2}(U)$, we have that $\ell_{z} \cap B_{\rho}(x) = \{(y_{1}, v(y_{1}, z), z) \, : \, y_{1} \in U \cap T_{2}^{-1}(z)\}$ and hence $\k_{z}=v_{y_1 y_1}(1+v_{y_1}^2)^{-\frac32}.$ Using the identity $u(y_{1}, v(y_{1}, z), z) \equiv t$ on $U$, this can be expressed in terms of $u$ as $$\k_{z} = -\frac{u_{y_2 y_2}u_{y_1}^2-2u_{y_1 y_2}u_{y_1}u_{y_2}+u_{y_1 y_1}u_{y_2}^2} {(u_{y_1}^2+u_{y_2}^2)^{\frac32}}.$$ Since the length element $ds$ is given by $$ds = \sqrt{1+v_{y_1}^2}\, dy_1=\frac{\sqrt{u_{y_1}^2+u_{y_2}^2}}{|u_{y_2}|}\, dy_1,$$ we have that $$|\k_{z}|ds=\frac{|u_{y_2 y_2}u_{y_1}^2-2u_{y_1 y_2}u_{y_1}u_{y_2}+u_{y_1 y_1} u_{y_2}^2|}{|u_{y_2}|(u_{y_1}^2+u_{y_2}^2)}\, dy_1.$$ Next for unit vector ${\bf m}\in {\mathbb R}^n$ with ${\bf m}\perp \nabla u$ and $M=(\nabla^2 u)$, we have $$({\bf m}^t M{\bf m})^2\leq |M{\bf m}|^2 ={\bf m}^t M^2 {\bf m}\leq {\rm tr}(M^2)-\frac{1}{|\nabla u|^2} (\nabla u)^t M^2 (\nabla u)=|\nabla u|^2|{\bf B}_u|^2.$$ Note that here we used the non-negativity of the eigenvalues of $M^2$. 
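As a quick numerical sanity check of the level-set curvature formula above (illustrative only, not part of the proof): for $u(y_1,y_2)=y_1^2+y_2^2$ the level sets are circles of radius $r$, so the formula should return $|\k_{z}|=1/r$.

```python
def kappa_z(u1, u2, u11, u12, u22):
    """Curvature of the level curve {u = t} in the (y1, y2)-plane, using the
    formula from the text:
    kappa_z = -(u_{y2y2} u_{y1}^2 - 2 u_{y1y2} u_{y1} u_{y2} + u_{y1y1} u_{y2}^2)
              / (u_{y1}^2 + u_{y2}^2)^{3/2}."""
    num = u22 * u1 ** 2 - 2.0 * u12 * u1 * u2 + u11 * u2 ** 2
    return -num / (u1 ** 2 + u2 ** 2) ** 1.5

# For u(y1, y2) = y1^2 + y2^2: u_{y1} = 2 y1, u_{y2} = 2 y2,
# u_{y1y1} = u_{y2y2} = 2, u_{y1y2} = 0.  On the circle of radius
# r = 5 the formula should give |kappa_z| = 1/5.
y1, y2 = 3.0, 4.0
k = kappa_z(2.0 * y1, 2.0 * y2, 2.0, 0.0, 2.0)
print(abs(k))   # 0.2
```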
Since $(u_{y_2},-u_{y_1},0,\cdots,0)\perp \nabla u$, we deduce that $$|u_{y_2 y_2}u_{y_1}^2-2u_{y_1 y_2}u_{y_1}u_{y_2}+u_{y_1 y_1} u_{y_2}^2|^2\leq (u_{y_1}^2+u_{y_2}^2)^2{\mathbf B}_{u}^2 |\nabla u|^2,$$ which implies that $$\frac{|u_{y_2 y_2}u_{y_1}^2-2u_{y_1 y_2}u_{y_1}u_{y_2}+u_{y_1 y_1} u_{y_2}^2|}{|u_{y_2}|(u_{y_1}^2+u_{y_2}^2)} \leq \frac{{\mathbf B}_{u}|\nabla u|}{|u_{y_2}|}.$$ We also note that $$\sqrt{1+v_{y_1}^2+v_{z_1}^2+\cdots+v_{z_{n-2}}^2}=\frac{|\nabla u|}{|u_{y_2}|},$$ so that $$\frac{|u_{y_2 y_2}u_{y_1}^2-2u_{y_1 y_2}u_{y_1}u_{y_2}+u_{y_1 y_1} u_{y_2}^2|}{|u_{y_2}|(u_{y_1}^2+u_{y_2}^2)} \leq {\mathbf B}_u \sqrt{1+v_{y_1}^2+v_{z_1}^2+\cdots+v_{z_{n-2}}^2},$$ where both the expression on the left hand side and the function ${\bf B}_{u}$ are evaluated at $(y_{1}, v(y_{1}, z), z).$ After integrating over $(y_1,z)\in U\cap \{z\in L_{i} \cap G\}$ and summing over the finitely many coordinate charts employing a suitable partition of unity subordinate to the coordinate charts, we obtain $$\int_{L_i\cap G}d{z}\int_{\ell_{z}}|\k_{z}|ds\leq \int_{M_i\cap T_2^{-1}(G)}{\mathbf B}_{u}\, d\H^{n-1}.$$ By letting $i\->\00$ in this and using the Cauchy-Schwarz inequality on the right hand side, we deduce the desired estimate. $\Box$\ We now proceed to derive the contradiction necessary to prove Proposition \[mainprop\]. First note that by Lemmas \[lemma1\] and \[lemma2\] we have that $$\lim_{i\->\00}\L^{n-2}(B_1^{n-2}\setminus(D_{\e_i}\cap Q_{\e_i}))=0. \label{van}$$ For the rest of the proof let $T_2:B_{1}^{2} \times B_{1}^{n-2}\-> B_{1}^{n-2}$ be the orthogonal projection. By the defining property of $D_{\e_i}$ we have that $$\int_{\Ti \cap\{|u_{\e_i}|\leq \frac12\}}{\mathbf B}_{\e_i}^2|\nabla u_{\e_i}|\, d{x}\leq \frac{\e_i}{c_3}\int_{B_1^2\times B_1^{n-2}} {\mathbf B}_{\e_i}^2 |\nabla u_{\e_i}|^2\, d{x}$$ which by tends to 0 as $i\->\00$. 
By the co-area formula it then follows that $$\lim_{i\->\00}\int_{-\frac12}^{\frac12}dt\int_{\Ti\cap\{u_{\e_i}=t\}} {\mathbf B}_{\e_i}^2\, d\H^{n-1}=0.$$ Now choose a generic $t \in (-1/2, 1/2)$ such that $\{u_{\e_i}=t\}$ is a $C^3$ hypersurface of $B_1^2\times B_1^{n-2}$ for each $i=1, 2, 3, \ldots,$ $$\lim_{i\->\00}\int_{\Ti\cap\{u_{\e_i}=t\}}{\mathbf B}_{\e_i}^2\, d\H^{n-1}=0$$ and $$\liminf_{i\->\00} \, \H^{n-1}(\Ti\cap\{u_{\e_i}=t\})<\00.$$ This last requirement can be met since by the co-area formula and Fatou’s lemma, $$\begin{aligned} &&\int_{-1/2}^{1/2} \liminf_{i \to \infty} \, {\mathcal H}^{n-1}(\Ti\cap\{u_{\e_i}=t\}) \, dt\\ &&\hspace{1.5in} \leq \limsup_{i\->\00}\int_{\Ti\cap\{|u_{\e_i}|\leq \frac12\}}|\nabla u_{\e_i}|\, d{x}\\ &&\hspace{1.5in} \leq \limsup_{i\->\00}\frac{\e_i}{c_3}\int_{B_2}|\nabla u_{\e_i}|^2\, d{x}<\00.\end{aligned}$$ Applying Lemma \[lemma3\] with $u = u_{\e_{i}}$ and $G=D_{\e_i}\cap Q_{\e_i}$, we see in view of that after passing to a subsequence without changing notation, there is a point ${z}_i\in D_{\e_i}\cap Q_{\e_i},$ $i =1, 2, 3, \ldots,$ such that $$\label{flat} \lim_{i\->\00}\int_{\ell^{i}_{{z}_i}}|\k^{i}_{{z}_i}|\, ds=0,$$ where $\ell^{i}_{z} = T_{2}^{-1}(z) \cap \{u_{\e_{i}} = t\}$ and $\k^{i}_{z}$ is the curvature of the curve $\ell^{i}_{z}$.
Note that ${\ell}^{i}_{{z}_{i}}$ is the union of disjoint, connected, embedded planar curves having at least one point in each of the disjoint balls $B^{2}_{\delta}({\mathbf p}_{j}) \times \{z_{i}\}$, $j=1, 2, \ldots, N,$ and no boundary point in $B_{1}^{2} \times B_{1}^{n-2}.$ Since $N \geq 3$, there must exist at least one connected component $\gamma_{i}$ of $\ell^{i}_{{ z}_{i}}$ such that as one moves along $\gamma_{i} \cap \overline{B_{1}^{2} \times B_{1}^{n-2}}$ from one end point to the other, or once around $\gamma_{i}$ in case $\gamma_{i}$ is closed, (a continuous choice of) the unit normal $\nu_{i}$ to $\gamma_{i}$ changes by a fixed amount depending only on the cone ${\mathbf C}$ and independent of $i.$ By integrating the derivative $\nu_{i}^{\prime}(s)$ along $\gamma_{i}(s),$ where $s$ is the arc length parameter, and using the fact that $\nu_{i}^{\prime}(s) = \k^{i}_{z_{i}}(s)$, we then obtain a fixed positive lower bound for $\int_{\gamma_{i}} |\kappa_{\gamma_{i}}(s)| ds$ independent of $i$, contradicting . This completes the proof of Proposition \[mainprop\]. $\Box$ [99]{} Allard, W. K., *On the first variation of a varifold*, Ann. of Math. (2) **95** (1972) 417–491. Allard, W. K.; Almgren, F. J. Jr., *The structure of stationary one dimensional varifolds with positive density*, Invent. Math. **34** (1976), no. 2, 83–97. Cahn, J. W.; Hilliard, J. E., *Free energy of a nonuniform system I. Interfacial free energy*, J. Chem. Phys. **28** (1958) 258–267. del Pino, M.; Kowalczyk, M.; Wei, J., *On De Giorgi conjecture in dimension $N \geq 9$*, Preprint, 2009. arXiv:0806.3141v2. Finn, R.; Gilbarg, D., *Subsonic flows*, Comm. Pure & Appl. Math. **10** (1957), ?, 23–63. Hardt, R.; Simon, L., *Boundary regularity and embedded solutions for the oriented Plateau problem*, Ann. of Math. (2) **110** (1979) Hutchinson, J. E., *Second fundamental form for varifolds and the existence of surfaces minimizing curvature*, Indiana Univ. Math. J. **35** (1986), no. 
1, 45–71. Hutchinson, J. E.; Tonegawa, Y., *Convergence of phase interfaces in the van der Waals-Cahn-Hilliard theory*, Calc. Var. PDE **10** (2000), no. 1, 49–84. Kohn, R. V., Sternberg, P., *Local minimizers and singular perturbations*, Proc. Roy. Soc. Edinburgh Sect. A **111** (1989), no. 1-2, 69–84. Kowalczyk, M., *On the existence and Morse index of solutions to the Allen-Cahn equation in two dimensions*, Ann. Mat. Pura Appl. (4) **184** (2005), no. 1, 17–52. Modica, L., *The gradient theory of phase transitions and the minimal interface criterion*, Arch. Rational Mech. Anal. **98** (1987), no. 2, 123–142. Modica, L.; Mortola, S., *Un esempio di $\G$-convergenza*, Boll. Un. Mat. Ital. B (5) **14(1)** (1977) 285–299. Pacard, F. *Geometric aspects of the Allen-Cahn equation*, Preprint, 2009. Pacard, F.; Ritoré, R., *From the constant mean curvature hypersurfaces to the gradient theory of phase transitions*, J. Differential Geom. **64** (2003), no. 3, 356–423. Padilla, P.; Tonegawa, Y., *On the convergence of stable phase transitions*, Comm. Pure & Appl. Math. **51** (1998), no. 6, 551–579. Röger, M.; Tonegawa, Y., *Convergence of phase-field approximations to the Gibbs-Thomson law*, Calc. Var. PDE **32** (2008), no. 1, 111–136. Schoen, R.; Simon, L., *Regularity of stable minimal hypersurfaces*, Comm. Pure & Appl. Math. **34** (1981), 741–797. Simon, L., *Lectures on geometric measure theory*, Proc. Centre Math. Anal. Austral. Nat. Univ. **3** (1983). Sternberg, P., *The effect of a singular perturbation on nonconvex variational problems*, Arch. Rational Mech. Anal. **101** (1988), no. 3, 209–260. Tonegawa, Y., *On stable critical points for a singular perturbation problem*, Comm. Analysis and Geometry **13** (2005), no. 2, 439–459. Wickramasekera, N., *A general regularity theory for stable codimension 1 integral varifolds*, Preprint, 2009. arXiv:0911.4883.
--- abstract: 'We investigate the ground-state property of an $e_{\rm g}$-orbital Hubbard model at quarter filling on a zigzag chain by exploiting the density matrix renormalization group method. When the two orbitals are degenerate, the zigzag chain is decoupled into a double-chain spin system to suppress the spin frustration due to the spatial anisotropy of the occupied orbital. On the other hand, when the level splitting is increased and the orbital anisotropy disappears, a characteristic change in the spin incommensurability is observed due to the revival of the spin frustration.' address: 'Advanced Science Research Center, Japan Atomic Energy Research Institute, Tokai, Ibaraki 319-1195, Japan' author: - Hiroaki Onishi - Takashi Hotta title: 'Spin-charge-orbital ordering on triangle-based lattices' --- Geometrical frustration; Orbital ordering; Spin incommensurability; Density matrix renormalization group method. PACS: 75.10.-b; 71.10.Fd; 75.30.Et The magnetic property of frustrated spin systems has been one of the central issues for many years in the research field of condensed matter physics [@ref-Diep-review]. For example, it is well known that in the antiferromagnetic (AFM) Ising model on a triangular lattice, there occurs macroscopic degeneracy for possible spin configurations in the ground state [@ref-Wannier-TI]. In general, however, such high degeneracy is lifted to suppress the spin frustration, since the lattice is deformed to lower the lattice symmetry due to the spin-lattice coupling. On the other hand, recently there has been a rapid increase of interest in the interplay of spin and orbital degrees of freedom [@ref-Dagotto-review]. It has been emphasized that the orbital anisotropy plays a significant role in causing a variety of cooperative phenomena in realistic materials.
In particular, in geometrically frustrated lattices, it is expected that orbital ordering occurs to affect the spin frustration due to the spin-orbital coupling, since the orbital anisotropy leads to the non-uniform exchange interactions. In this paper, to clarify the key role of the orbital anisotropy in geometrically frustrated lattices, we investigate an $e_{\rm g}$-orbital Hubbard model on a zigzag chain with one electron per site (quarter filling). When the Hund’s rule coupling $J$ is small, the ground state is found to be paramagnetic (PM) [@ref-Onishi-1; @ref-Onishi-2], which is relevant to a geometrically frustrated antiferromagnet. Here we study the property of the PM phase and set $J$=0 for simplicity. The effect of $J$ has been investigated for an $e_{\rm g}$-orbital degenerate model [@ref-Onishi-2]. The Hamiltonian considered here is given by $$\begin{aligned} H &=& \sum_{{\bf i},{\bf a},\gamma,\gamma',\sigma} t_{\gamma\gamma'}^{\bf a} d_{{\bf i}\gamma\sigma}^{\dag} d_{{\bf i}+{\bf a}\gamma'\sigma} -(\Delta/2) \sum_{\bf i} (\rho_{{\bf i}a}-\rho_{{\bf i}b}) \nonumber\\ && +U \sum_{{\bf i},\gamma} \rho_{{\bf i}\gamma\uparrow} \rho_{{\bf i}\gamma\downarrow} +U' \sum_{{\bf i}} \rho_{{\bf i}a} \rho_{{\bf i}b}, % +J \sum_{{\bf i},\sigma,\sigma'} % d_{{\bf i}a\sigma}^{\dag} d_{{\bf i}b\sigma'}^{\dag} % d_{{\bf i}a\sigma'} d_{{\bf i}b\sigma} % +J' \sum_{{\bf i},\gamma \ne \gamma'} % d_{{\bf i}\gamma\uparrow}^{\dag} d_{{\bf i}\gamma\downarrow}^{\dag} % d_{{\bf i}\gamma'\downarrow} d_{{\bf i}\gamma'\uparrow}\end{aligned}$$ where $d_{{\bf i}a\sigma}$($d_{{\bf i}b\sigma}$) is the annihilation operator for an electron with spin $\sigma$ in the 3$z^2$$-$$r^2$($x^2$$-$$y^2$) orbital at site ${\bf i}$, $\rho_{{\bf i}\gamma\sigma}$=$d_{{\bf i}\gamma\sigma}^{\dag}d_{{\bf i}\gamma\sigma}$, and $\rho_{{\bf i}\gamma}$=$\sum_{\sigma}\rho_{{\bf i}\gamma\sigma}$. 
$t_{\gamma\gamma'}^{\bf a}$ is the nearest-neighbor hopping amplitude between $\gamma$ and $\gamma'$ orbitals along the ${\bf a}$ direction. Note that the zigzag chain is considered as a double chain connected by a zigzag path. The hopping amplitudes are given by $t_{aa}^{\bf x}$=$t/4$, $t_{ab}^{\bf x}$=$t_{ba}^{\bf x}$=$-\sqrt{3}t/4$, $t_{bb}^{\bf x}$=$3t/4$ for the double-chain direction and $t_{aa}^{\bf u}$=$t/4$, $t_{ab}^{\bf u}$=$t_{ba}^{\bf u}$=$\sqrt{3}t/8$, $t_{bb}^{\bf u}$=$3t/16$ along the zigzag path. Hereafter, $t$ is taken as the energy unit. $\Delta$ is the level splitting between 3$z^2$$-$$r^2$ and $x^2$$-$$y^2$ orbitals, $U$ is the intraorbital Coulomb interaction, and $U'$ is the interorbital Coulomb interaction. We analyze the model with $N$ sites under open boundary conditions by using the density matrix renormalization group (DMRG) method [@ref-Schollwock-review]. To reduce the size of the superblock Hilbert space, we treat each orbital as a site. We employ the finite-system algorithm, keeping up to 200 states per block; the truncation error is estimated to be 4$\times10^{-6}$ at most. ![Orbital structure for (a) $\Delta$=0 and (b) $\Delta$=1 from the DMRG results for $N$=40 at $U'$=$U$=20. Spin correlation for (c) $\Delta$=0 and (d) $\Delta$=1. Electron densities in (e) 3$z^2$$-$$r^2$ and $x^2$$-$$y^2$ orbitals and (f) optimal $\tilde{a}$ and $\tilde{b}$ orbitals.](onishi-fig1.eps){width="45.00000%"} In order to determine the orbital structure, we introduce new operators by using an angle $\theta_{\bf i}$ such as $\tilde{d}_{{\bf i}\tilde{a}\sigma}= e^{i\theta_{\bf i}/2} [\cos(\theta_{\bf i}/2)d_{{\bf i}a\sigma}+ \sin(\theta_{\bf i}/2)d_{{\bf i}b\sigma}]$ and $\tilde{d}_{{\bf i}\tilde{b}\sigma}= e^{i\theta_{\bf i}/2} [-\sin(\theta_{\bf i}/2)d_{{\bf i}a\sigma}+ \cos(\theta_{\bf i}/2)d_{{\bf i}b\sigma}]$ [@ref-Hotta-berryphase].
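A short numerical sketch of where the exchange ratio $J_2/J_1$=$64^2$ quoted in the text comes from. We assume the standard strong-coupling estimate $J\propto t_{\rm eff}^2/U$ (not spelled out in this excerpt): projecting the hopping matrices above onto the orbital that diagonalizes the double-chain hopping — the 3$x^2$$-$$r^2$-type combination — gives effective hoppings $t$ along the double chain and $t/64$ along the zigzag path.

```python
import numpy as np

s3 = np.sqrt(3.0)
t = 1.0
# Hopping matrices in the (a, b) = (3z^2-r^2, x^2-y^2) basis, from the text:
Tx = (t / 4.0) * np.array([[1.0, -s3],
                           [-s3, 3.0]])          # double-chain direction x
Tu = t * np.array([[1.0 / 4.0, s3 / 8.0],
                   [s3 / 8.0, 3.0 / 16.0]])      # zigzag path u

# Tx is rank one: its eigenvector with nonzero eigenvalue is the
# 3x^2-r^2-type combination that the text says is selectively occupied.
w, v = np.linalg.eigh(Tx)
phi = v[:, np.argmax(w)]

t2 = phi @ Tx @ phi   # effective hopping along the double chain  (= t)
t1 = phi @ Tu @ phi   # effective hopping along the zigzag path   (= t/64)
print(t2 / t1, (t2 / t1) ** 2)   # ~64 and ~4096 = 64^2
```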
The optimal orbitals, $\tilde{a}$ and $\tilde{b}$, are determined so as to maximize the orbital correlation $T({\bf q})= \sum_{{\bf i},{\bf j}} e^{i{\bf q}\cdot({\bf i}-{\bf j})} \langle \tilde{T}_{\bf i}^{z}\tilde{T}_{\bf j}^{z} \rangle/N^2$ with $\tilde{T}_{\bf i}^z= \sum_{\sigma} (\tilde{d}_{{\bf i}\tilde{a}\sigma}^{\dag} \tilde{d}_{{\bf i}\tilde{a}\sigma}- \tilde{d}_{{\bf i}\tilde{b}\sigma}^{\dag} \tilde{d}_{{\bf i}\tilde{b}\sigma})/2$. In Figs. 1(a) and (b), the orbital structure is shown for $\Delta$=0 and 1, respectively. When the two orbitals are degenerate at $\Delta$=0, the orbital degree of freedom is active, but the 3$x^2$$-$$r^2$ orbital is selectively occupied to suppress the spin frustration, as shown in Fig. 1(a). Namely, the orbital shape extends just along the double-chain direction, and the zigzag chain is decoupled into a double-chain spin system due to the orbital anisotropy. In fact, the ratio of the AFM exchange interaction along the double-chain direction $J_2$ to that along the zigzag path $J_1$ is estimated as $J_2/J_1$=$64^2$. On the other hand, for $\Delta$=1, the lower-energy 3$z^2$$-$$r^2$ orbital is preferentially occupied, as shown in Fig. 1(b). Note that when the 3$z^2$$-$$r^2$ orbital is fully occupied for infinite $\Delta$, the orbital anisotropy disappears in the $xy$ plane, i.e., $J_2/J_1$=1, and the spin frustration becomes effective. In accordance with the variation in the orbital shape, the spin state is also changed. To clarify this point, it is convenient to reduce the present model to a spin system on the orbital-ordered background. Then, the present system is described by the zigzag spin chain, in which the spin correlation has a commensurate peak at $q$=$\pi$ for 0$\leq$$J_2/J_1$$\leq$1/2, but the peak is gradually changed to an incommensurate one for $J_2/J_1$$\geq$1/2, and eventually, we find the incommensurate peak at $q$=$\pi/2$ for infinite $J_2/J_1$ [@ref-Tonegawa-zigzag; @ref-White-zigzag]. In Fig.
1(c), we show our DMRG result for the spin correlation $S({\bf q})= \sum_{{\bf i},{\bf j}} e^{i{\bf q}\cdot({\bf i}-{\bf j})} \langle S_{\bf i}^{z}S_{\bf j}^{z} \rangle /N^2$ with $S_{\bf i}^z= \sum_{\gamma} (\rho_{{\bf i}\gamma\uparrow}-\rho_{{\bf i}\gamma\downarrow})/2$ for $\Delta$=0. We find a clear peak at $q$=$\pi/2$, consistent with that of the zigzag spin chain with large $J_2/J_1$, since $J_2/J_1$=$64^2$ for $\Delta$=0. On the other hand, as shown in Fig. 1(d), the peak position for $\Delta$=1 changes from $q$=$\pi/2$ toward $q$=$\pi$, as expected by analogy with the zigzag spin chain, since we estimate $J_2/J_1$=1.61 for $\Delta$=1. The details of the $\Delta$ dependence will be discussed elsewhere in the future. Finally, let us consider how the orbital state changes in the intermediate region. In Fig. 1(e), we show the $\Delta$ dependence of the electron densities $n_{\gamma}=\sum_{\bf i}\langle\rho_{{\bf i}\gamma}\rangle/N$ in the 3$z^2$$-$$r^2$ and $x^2$$-$$y^2$ orbitals. With increasing $\Delta$, electrons are forced into the lower 3$z^2$$-$$r^2$ level, but the electron density in each orbital is found to change gradually without any singularity. To understand this behavior, we evaluate the electron densities for the optimal orbitals. As shown in Fig. 1(f), it is found that one of the optimal orbitals is occupied irrespective of $\Delta$ and the fluctuation is very small even in the intermediate region. Namely, the present system is always regarded as a one-orbital system, although we have considered the multi-orbital system. In summary, for $\Delta$=0, the 3$x^2$$-$$r^2$ orbital is selectively occupied to suppress the spin frustration and the zigzag chain is decoupled into a double chain due to the orbital anisotropy. For large $\Delta$, the 3$z^2$$-$$r^2$ orbital is occupied and the spin frustration revives, leading to the change in the spin commensurability. T.H.
is supported by the Japan Society for the Promotion of Science and by the Ministry of Education, Culture, Sports, Science, and Technology of Japan. [99]{} H. T. Diep (Ed.), Frustrated Spin Systems, World Scientific, Singapore, 2004. G. H. Wannier, Phys. Rev. [**79**]{} (1950) 357. E. Dagotto [*et al*]{}., Phys. Rep. [**344**]{} (2001) 1. H. Onishi, T. Hotta, Physica B [**359-361**]{} (2005) 669. H. Onishi, T. Hotta, Phys. Rev. B [**71**]{} (2005) 180410(R). U. Schollwöck, Rev. Mod. Phys. [**77**]{} (2005) 259. T. Hotta [*et al*]{}., Int. J. Mod. Phys. B [**12**]{} (1998) 3437. T. Tonegawa, I. Harada, J. Phys. Soc. Jpn. [**56**]{} (1987) 2153. S. R. White, I. Affleck, Phys. Rev. B [**54**]{} (1996) 9862.
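The spin structure factor $S({\bf q})$ used above can be illustrated with a minimal classical sketch (our own construction: we evaluate $S(q)$ on fixed classical spin patterns, whereas the paper computes the quantum ground-state correlator with DMRG). A Néel pattern peaks at $q$=$\pi$ and an up-up-down-down pattern peaks at $q$=$\pi/2$, the two limits discussed for the zigzag spin chain:

```python
import numpy as np

def S_q(sz, qs):
    # S(q) = sum_{i,j} e^{i q (i - j)} <S_i^z S_j^z> / N^2, specialized to a
    # single classical configuration, for which the double sum factorizes
    # into |sum_i e^{i q i} sz_i|^2 / N^2.
    N = len(sz)
    idx = np.arange(N)
    return np.array([np.abs(np.exp(1j * q * idx) @ sz) ** 2 / N ** 2
                     for q in qs])

N = 40
neel = 0.5 * (-1.0) ** np.arange(N)                              # up-down-up-down
uudd = 0.5 * np.where(np.arange(N) % 4 < 2, 1.0, -1.0)           # up-up-down-down
qs = np.array([np.pi / 2.0, np.pi])
print(S_q(neel, qs))   # weight concentrated at q = pi
print(S_q(uudd, qs))   # weight concentrated at q = pi/2
```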
--- abstract: 'We measure the correlation between the arrival directions of the highest energy cosmic rays detected by the Pierre Auger Observatory and the positions of the galaxies in the HI Parkes All Sky Survey (HIPASS) catalogue, weighted for their HI flux and the Auger exposure. The use of this absorption–free catalogue, complete also along the galactic plane, allows us to use all the Auger events. The correlation is significant, being 86.2% for the entire sample of HI galaxies, and becoming 99% when considering the galaxies richest in HI content, or 98% with those lying at distances between 40 and 55 Mpc. We interpret this result as evidence that spiral galaxies are the hosts of the producers of UHECR and we briefly discuss classical (i.e. energetic and distant) long Gamma Ray Bursts (GRBs), short GRBs, as well as newly born or late flaring magnetars as possible sources of the Auger events. With the caveat that these events are still very few, and that the theoretical uncertainties are conspicuous, we find that newly born magnetars are the best candidates. If so, they could also be associated with sub–energetic, spectrally soft, nearby, long GRBs. We finally discuss why there is a clustering of Auger events in the direction of the radio–galaxy Cen A and an absence of events in the direction of the radio–galaxy M87.' author: - | G. Ghisellini,$^1$[^1] G. Ghirlanda,$^1$ F. Tavecchio,$^{1}$ F. Fraternali$^2$ and G. Pareschi$^1$\ $^1$INAF – Osservatorio Astronomico di Brera, Via Bianchi 46 Merate, Italy\ $^2$Dept. of Astronomy, University of Bologna, via Ranzani 1, 40127 Bologna, Italy title: 'Ultra–High Energy Cosmic Rays, Spiral Galaxies and Magnetars' --- cosmic rays – gamma–rays: bursts – galaxies: statistics – radio lines: galaxies Introduction ============ The origin of ultra–high energy cosmic rays (UHECR), exceeding 10 EeV (1 EeV=$10^{18}$ eV), has been a mystery for decades, but the recent findings of the large area detectors, such as AGASA (Ohoka et al.
1997), HIRes (Abu–Zayyad et al. 2000), and especially the Pierre Auger Southern Observatory (Abraham et al. 2004), began to disclose crucial clues about the association of the highest energy events with cosmic sources. The Auger collaboration (Abraham et al. 2007) found a positive correlation between the arrival directions of UHECR with energies greater than 57 EeV and nearby AGNs (in the optical catalogue of Veron–Cetty & Veron 2006). Although this result has not been confirmed by HIRes (Abbasi et al. 2008) and it has been criticised by Gorbunov et al. (2008), it received an important confirmation from George et al. (2008), who considered a complete sample of nearby hard X–ray emitting AGNs detected by the BAT instrument onboard [*Swift*]{}. This sample is much less affected by absorption than any optical sample although, to identify an AGN as such, one still relies on optical identification. Moreover, George et al. (2008) found a correlation not simply with the AGN locations, but by weighting them for the X–ray flux and the Auger exposure. This association, if real, is surprising, since the large majority of the correlating AGNs are radio–quiet, a class of objects not showing, in their electromagnetic spectrum, any sign of non–thermal high energy emission: no radio–quiet AGN was detected by the EGRET instrument onboard the [*Compton Gamma Ray Observatory*]{} (Hartman et al. 1999). Therefore they must accelerate particles (protons, nuclei, and presumably their accompanying electrons) to ultra–high energies without any noticeable radiative emission from these very same particles. Radio–loud AGNs, instead, together with Gamma Ray Bursts (of both the long and short category) do show high energy non–thermal emission, and have long been considered better candidates as UHECR sources (Vietri 1995; Waxman 1995; Milgrom & Usov 1995; Wang, Razzaque & Mészáros 2008; Murase et al.
2008; Torres & Anchordoqui 2004 and Dermer 2007 for reviews, and Nagar & Matulich 2008 and Moskalenko et al. 2008 for the possible association of the Auger events with radio–loud AGNs). Note also that some short GRBs could be due to the giant flares of highly magnetised neutron stars (“magnetars”, such as the 27 Dec 2004 event from 1806–20; Borkowski et al. 2004; Hurley et al. 2005; Terasawa et al. 2005), and that, at birth, a rapidly spinning magnetar can be much more energetic than when, later, it produces giant flares (Arons 2003). The possibility that GRBs and magnetars are the sites of production of UHECR would directly imply the association of these events with (normal) galaxies. In this case, the observed association of UHECR with nearby AGNs might then be due to the fact that local AGNs just trace the distribution of galaxies. The aim of the present paper is to test this possibility by directly correlating the locations of the ultra–high energy Auger events with a well defined, complete, and possibly absorption–free sample of galaxies. For this purpose we use the sample of HI–emitting galaxies compiled using the Parkes 64–m radio telescope (Barnes et al. 2001; Staveley–Smith et al. 1996), which is conveniently located in the southern hemisphere, like the Auger Observatory. The entire sample covers the portion of the sky visible to Auger, making it possible to use, for the correlation analysis, all the 27 UHECR events with energies larger than 57 EeV detected by Auger, without excluding the galactic plane, as is instead necessary when dealing with AGNs or optically selected galaxies. Note that the presence of neutral hydrogen strongly favours spirals (or, more generally, gas–rich galaxies) with respect to elliptical galaxies. We use a cosmology with $\rm{h}_0=\Omega_\Lambda=0.7$ and $\Omega_{\rm M}=0.3$. Data ==== UHECR events ------------ The Auger Observatory (Abraham et al.
2004, 2008), operating in Argentina since 2004, is located at latitude $-35.2\degr$ and has a maximum zenith angle acceptance of $60\degr$. The relative exposure is independent of the energy of the detected events and is nearly uniform in right ascension. The dependence on declination is given by Sommers (2001). The Observatory can detect cosmic rays from sources with declination $\delta<24.8^\circ$. The available Auger list of UHECR events (Abraham et al. 2008) comprises 27 events with energies in excess of $5.7\times{}10^{19}\ev$ from an integrated exposure of $9000\crexp$. The event arrival directions are determined with an angular resolution better than $1\degr$. However, magnetic fields of unknown strength deflect charged particles along their trajectories through space. The advantage of studying the highest energy events is that this deflection is minimised, but it can still be up to $\sim{}10\degr$ in the Galactic field. The 27 UHECRs detected by Auger are distributed in the range $\delta\in[-61,9.6]$ (or at galactic latitudes $b\in[-78.6,54.1]$ – open circles in Fig. \[HIn\] and Fig. \[HIflux\]). HIPASS catalogue ---------------- We compare the arrival directions of the Auger UHECRs with the locations of the sources of the HI Parkes All–Sky Survey (HIPASS – Meyer et al. 2004). This is a blind survey of HI sources covering the whole sky at $\delta<25^\circ$, which is the same sky area accessed by the Auger Observatory. The full catalogue is composed of a list of 4315 sources at $\delta<2^\circ$ (HICAT – Meyer et al. 2004; Zwaan et al. 2004) and of its extension to the northern sky up to $\delta=25^\circ$ (NHICAT – Wong et al. 2006), which includes 1002 sources. All sources are shown in Fig. \[HIn\] together with the 27 UHECRs detected by Auger. The HICAT and NHICAT have different levels of completeness.
To have a catalogue complete in flux at the 95% level, we cut the HICAT at $S_{\rm int}>7.4$ Jy km s$^{-1}$ and the NHICAT at $S_{\rm int}>15$ Jy km s$^{-1}$, as discussed in Zwaan et al. (2004) and Wong et al. (2006). $S_{\rm int}$ represents the total HI line flux. For the purposes of this paper we also considered only the sources within 100 Mpc, which is the maximum distance at which UHECRs of $E>57$ EeV can survive the GZK suppression effect (see e.g. Harari et al. 2006). We call this sample 95HIPASS: it contains 2414 sources from the HICAT and 290 sources from the NHICAT, for a total of 2704 sources, and covers the entire sky at $\delta<25^\circ$. We will also consider the southern sky sample alone, which is more complete and can be cut at 99% completeness for $S_{\rm int}>9.4$ Jy km s$^{-1}$ (again considering only sources at $<$100 Mpc). This sample contains 1946 sources and is called 99HICAT. Analysis ======== To quantify the possible correlation between the UHECR Auger events and the distribution of local HI galaxies we use the method adopted by George et al. (2008). In order to quantify the probability that two sets of sources are drawn from the same parent population of objects, we perform the two-dimensional generalisation of the Kolmogorov–Smirnov (K–S) test (Peacock 1983) proposed by Fasano & Franceschini (1987). In our case the test is used to compare two data samples, i.e. the UHECRs and the HI galaxies. This test can therefore measure both whether UHECRs have a galaxy counterpart and, vice versa, whether a concentration of galaxies has a UHECR counterpart. The test relies on the statistic $D$, also used for the one-dimensional K–S test, which represents the maximum difference between the cumulative distributions of the two data samples. For each UHECR data point $j$ we compute a set of four numbers $d_{j,i}$ ($i=1,\ldots,4$), defined as the difference of the relative fractions of UHECRs and HI galaxies found in the four natural quadrants defined around point $j$.
Hence, $D=\max(d_{j,i})$ over all the data points considered. Defining $Z_{\rm {n}}=D\sqrt{n}$, the strength of the correlation between two catalogues is quantified by the integral probability $P(D\sqrt{n}>{\rm observed})$, where $n=N_1N_2/(N_1+N_2)$, and $N_1$ and $N_2$ are the numbers of data points in the two sets. This measurement can be used to determine the similarity of sets of positions on the sky. The probability can be computed analytically for large data sets ($n>$80 – Fasano & Franceschini 1987). In our case, having only 27 UHECRs, we have to rely on Monte Carlo simulations. We generate a large set of random UHECR events according to the relative Auger exposure. For each synthetic UHECR sample we compute $Z_{\rm {n}}$ by correlating it with the catalogue of HI galaxies. The probability of the observed $Z_{\rm {n}}$ is given by the fraction of times we find a value of $Z_{\rm {n}}$ larger than the observed one. This is the probability that the correlation between the (real) UHECR sample and the HI galaxies is not by chance. Large (low) values of the probability indicate a good (poor) correlation between the Auger UHECRs and the given HI galaxy sample. As noted by George et al. (2008), the two–dimensional K–S test can be performed with the number of data points or with the flux of the sources in the comparison sample. In the latter case $D$ represents the maximum difference between the number of UHECRs and the sum of the galaxies weighted for their flux and for the relative Auger exposure. The advantage of using the weighted flux of the sources is that it accounts for their distance. George et al. (2008) found that the UHECRs are more correlated with the weighted flux of the [*Swift*]{} AGNs than with their positions. In Fig. \[HIflux\] we show the map of the flux of the HIPASS catalogue weighted for the Auger relative exposure.
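The procedure described above can be sketched in a few lines. The following is a minimal illustration, not the code used in the analysis: the function names are ours, `generate_events` draws synthetic arrival directions modulated by the Sommers (2001) declination-dependent exposure via rejection sampling, and the $\sqrt{n}$ factor is dropped because real and synthetic samples have equal sizes.

```python
import math
import random

LAT = math.radians(-35.2)        # Auger latitude
THETA_MAX = math.radians(60.0)   # maximum zenith angle acceptance

def relative_exposure(dec):
    """Declination dependence of the exposure of a full-time surface
    array (Sommers 2001); dec in radians, zero beyond dec = 24.8 deg."""
    xi = (math.cos(THETA_MAX) - math.sin(LAT) * math.sin(dec)) / (
        math.cos(LAT) * math.cos(dec))
    alpha_m = 0.0 if xi > 1 else math.pi if xi < -1 else math.acos(xi)
    return (math.cos(LAT) * math.cos(dec) * math.sin(alpha_m)
            + alpha_m * math.sin(LAT) * math.sin(dec))

# upper bound of the exposure, for rejection sampling
W_MAX = max(relative_exposure(math.radians(d)) for d in range(-89, 25))

def generate_events(n):
    """Synthetic (RA, dec) directions in radians: isotropic in RA,
    exposure-weighted in declination."""
    events = []
    while len(events) < n:
        dec = math.asin(random.uniform(-1.0, 1.0))   # isotropic in sin(dec)
        if random.uniform(0.0, W_MAX) < relative_exposure(dec):
            events.append((random.uniform(0.0, 2.0 * math.pi), dec))
    return events

def ks2d(s1, s2):
    """Fasano & Franceschini (1987) 2D K-S statistic D: the maximum
    quadrant-fraction difference over all data points of both samples."""
    def fracs(x0, y0, pts):
        q = [0, 0, 0, 0]
        for x, y in pts:
            q[(x > x0) + 2 * (y > y0)] += 1
        return [c / len(pts) for c in q]
    return max(abs(a - b)
               for x0, y0 in s1 + s2
               for a, b in zip(fracs(x0, y0, s1), fracs(x0, y0, s2)))

def correlation_probability(events, galaxies, n_trials=1000):
    """Monte Carlo estimate of P(Z_n > observed): the fraction of
    synthetic event sets less correlated with the galaxies than the
    real one. Large (low) values = good (poor) correlation."""
    d_obs = ks2d(events, galaxies)
    hits = sum(ks2d(generate_events(len(events)), galaxies) > d_obs
               for _ in range(n_trials))
    return hits / n_trials
```

In this sketch the unweighted (position-only) version of the test is shown; the flux-weighted variant would replace the quadrant counts by flux-and-exposure-weighted sums.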
Results ======= We found that, with the 95HIPASS catalogue (2704 HI sources complete in flux at 95%), the probability that UHECRs are correlated with HI galaxies is 71.6% when using the weighted flux of the HI sources. Considering the more complete 99HICAT (1946 HI sources complete in flux at 99%) distributed within 100 Mpc, and the 25 UHECRs distributed in the same sky region, we find a larger flux–weighted probability of 87.8%. This probability is slightly smaller than that found with local AGNs by George et al. (2008). However, having a large sample of HI galaxies, we can study whether the correlation probability changes when considering different sub–samples of galaxies selected according to their distance or luminosity. We have considered four distance bins with an equal number of sources ($\sim$500) per bin. The correlation probability shows a maximum of 95% (97.8% for the 99HICAT) for sources distributed between 37.8 and 55 Mpc. We show these results in Fig. \[bella\] (open circles and stars in the bottom panel). Similarly, we defined four equally populated luminosity bins or, equivalently, four bins of HI mass content, since we can use $M/M_\odot = 2.36\times 10^5 D_{\rm Mpc}^2 S_{\rm int}$ to estimate the HI mass (here $S_{\rm int}$ is measured in Jy km s$^{-1}$). We find that the probability (left panel in Fig. \[bella\]) is maximised by the most HI luminous or massive sources (98% and 99% for the 95HIPASS and 99HICAT samples, respectively, for $M> 1.1\times 10^{10}\, M_{\odot}$). Selecting those HI galaxies located within two $20^\circ \times 20^\circ$ boxes centred on the radio–galaxies Cen A and M87 (green boxes in Fig. \[HIn\]), we show where they lie in the luminosity–distance plane in Fig. \[bella\] (orange and green dots, respectively). While there is no clustering of points at the distances of Cen A and Virgo, the HI galaxies in the direction of Cen A do cluster at distances of 40–50 Mpc, where the Centaurus cluster is.
This could explain why some UHECR events [*appear*]{} to be associated with the radio–galaxy Cen A, and none with M87: beyond Cen A there is the Centaurus cluster, richer in HI–emitting spirals than the Virgo cluster. The ratio of the integrated HI fluxes from the two $20^\circ\times20^\circ$ boxes (Virgo/Cen A) is 5.9. To this, a further factor of $\sim$3 must be applied for the lower Auger exposure in the direction of Virgo. The sample has too few galaxies beyond 100 Mpc to test the GZK effect (which would be revealed by the absence of correlation for these galaxies). Discussion ========== The 27 Auger events above 57 EeV, with a total exposure of $9000\crexp$, correspond to an integrated flux, in CGS units: $$F_{\rm A}(E>57\, {\rm EeV})\, \sim \, 1.1 \times 10^{-11} \,\,\, {\rm erg\,\, cm^{-2}\, s^{-1}} \label{fa}$$ This flux is smaller than the electromagnetic flux that we receive from nearby radio–quiet AGNs in hard X–rays (see e.g. Tueller et al. 2008). We now compare this flux with the expected flux of other candidate sources. We will consider flaring or bursting sources, that is, impulsive events; however, the spreading of the arrival times of UHECRs from a source located at a distance $D$, $\Delta t \sim D \theta^2/2 c$, due to even tiny magnetic deflections, ensures that we can treat all candidate sources as continuous. We will estimate the predicted flux in two different ways. First, assume that a class of sources is characterised by a pulse of emission of UHECRs, of average total energy $<E>$. Assume also that these events occur at a rate $R$ per galaxy per year, and consider those events occurring within the GZK radius $D_c$. We have: $$F \, = \, <E> {R \over 3.15 \times 10^7} \, { N_{\rm g}(D<D_c) \over 4\pi (a D_c)^2}$$ where $3.15\times 10^7$ is the number of seconds in one year and $N_{\rm g}(D<D_c)$ is the number of galaxies of $L_*$ luminosity within $D_c$. The average distance of the sources is $a D_c$ ($a=3/4$ for sources homogeneously distributed).
Setting the mean local galaxy density $n_{\rm g}= N_{\rm g}/(4\pi D_c^3/3)= 10^{-2} n_{\rm g, -2}$ Mpc$^{-3}$ we have: $$F \sim 1.2 \times 10^{-57} <E> R\, n_{\rm g, -2} {D_{c, 100} \over a^2} \,\,\, {\rm erg\, cm^{-2}\, s^{-1}} \label{rate}$$ where $D_c =100 D_{c,100}$ Mpc. The second estimate of the predicted UHECR flux uses the electromagnetic flux as a proxy. Assume that we detect, for a typical member of a class of sources, an average fluence $<{\cal F}>$, and that there are $N$ events per year. If a fraction $\eta$ of these events comes from sources within $D_c$, we have $$F\, = \, {\eta <{\cal F}> N \over 3.15 \times 10^7} \label{m2}$$ This estimate is more appropriate when dealing with sources, such as long and short GRBs, whose fluences and occurrence rates are known, while Eq. \[rate\] is more appropriate when dealing with possible sources of unknown electromagnetic output but predicted energetics and rates, such as newborn magnetars (Arons 2003) or giant flares from old magnetars. Let us consider the above classes of sources in turn, starting from short GRBs. The BATSE catalog (cossc.gsfc.nasa.gov/docs/cgro/batse/BATSE\_Ctlg/flux.html) lists 490 short GRBs with a total fluence of $5.5\times 10^{-4}$ erg cm$^{-2}$ in 9 years of operation. Tanvir et al. (2005) correlated these short GRBs with local optically selected galaxies, finding that a fraction between 5 and 25% of BATSE short GRBs might be nearby, i.e. at $z < 0.025$, corresponding to 109 Mpc. Considering that BATSE saw half of the sky, and setting $\eta=0.1$, we obtain an average flux of $3.9\times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$. Then, if the UHECR flux is similar to the electromagnetic one, short GRBs do not match the required flux.
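The numerical coefficient of Eq. \[rate\] and the short-GRB estimate above can be checked with a few lines of back-of-the-envelope arithmetic (an illustrative sketch; the function and variable names are ours):

```python
import math

MPC_CM = 3.086e24   # centimetres per megaparsec
YR_S = 3.15e7       # seconds per year

def uhecr_flux(E, R, n_g, D_c, a=0.75):
    """Eq. (rate): continuous flux (erg cm^-2 s^-1) from impulsive
    sources of mean energy E (erg), occurring R times per galaxy per
    year, with galaxy density n_g (Mpc^-3) inside the GZK radius D_c (Mpc)."""
    N_g = n_g * 4.0 / 3.0 * math.pi * D_c**3          # galaxies within D_c
    return E * R / YR_S * N_g / (4.0 * math.pi * (a * D_c * MPC_CM)**2)

# coefficient of Eq. (rate): F for <E> = R = n_g,-2 = D_c,100 = a = 1
coeff = uhecr_flux(1.0, 1.0, 1e-2, 100.0) * 0.75**2   # ~1.1e-57

# short GRBs: BATSE fluence 5.5e-4 erg cm^-2 in 9 yr, half the sky,
# nearby fraction eta = 0.1
F_sgrb = 2.0 * 0.1 * 5.5e-4 / (9.0 * YR_S)            # ~3.9e-13
```

Both numbers reproduce the values quoted in the text to within rounding, and $F_{\rm sgrb}$ falls well short of $F_{\rm A}\sim1.1\times10^{-11}$ erg cm$^{-2}$ s$^{-1}$.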
Classical long GRBs (namely, energetic GRBs at $z\gsim 1$) in the BATSE sample have a total fluence of 0.024 erg cm$^{-2}$ (for the 1490 long GRBs listed in the BATSE catalog), corresponding to an average (all sky) flux of $1.7\times 10^{-10}$ erg cm$^{-2}$ s$^{-1}$, larger than the one given by Eq. \[fa\]. However, for long BATSE GRBs, $\eta$ must be much smaller than 0.1, as directly suggested by the paucity of nearby events and by the lack of correlation with nearby galaxies and clusters (Ghirlanda et al. 2006). While we cannot dismiss them as sources of UHECRs, it seems likely that classical BATSE bursts are too distant (but see below). Consider now giant flares from relatively “old” magnetars. The giant flare from SGR 1806–20 of Dec 27, 2004 emitted an energy $E\sim 10^{46}$ erg in less than a second. The radio afterglow convincingly demonstrated the formation of an (at least mildly) relativistic fireball. With the current hard X–ray instruments, such flares can be detected up to $\sim$30–40 Mpc (Hurley et al. 2005). Eq. \[rate\], with $D_{c,100}=0.3$ and $<E>=10^{46}$ erg, would require $R\sim 1$ event per galaxy per year, while an approximate limit to the rate is $R< 1/30$ yr$^{-1}$ (see e.g. Lazzati et al. 2005). Finally, consider rapidly spinning, newly born magnetars, whose rotational energy can exceed $10^{52}$ erg, with a rate of $R=10^{-4}$ events per galaxy per year (Arons 2003). With the estimated galaxy density ($n_{\rm g, -2}\sim 0.7$ for $L\sim L_*$; Blanton et al. 2001) there should be about one event per year within 100 Mpc. If each magnetar produces $10^{50}$ erg in UHECRs, then this class of sources can be the progenitor of the Auger events (Eq. \[rate\]). This is independent of collimation, since the reduced rate of events pointing at us is compensated by an increase of the apparent energetics.
But if an equal amount of energy is released in electromagnetic form, at energies detectable by BATSE, then these events should constitute a significant fraction of all BATSE GRBs. Since the birth of a magnetar should be accompanied by a supernova, [*these events should be associated with long, rather than short, GRBs,*]{} for which no associated supernova has ever been seen. If the radiative output is isotropic, they will all be nearby, sub–energetic GRBs. The required fluence of these sub–energetic nearby long GRBs, to match the UHECR flux, should be $$<{\cal F}>\, \sim\, 3.15\times 10^{-4} \epsilon_{\rm CR} \, {F_{\rm A, -11} \over \eta N} \,\,\, {\rm erg\,\, cm^{-2}}$$ where $\epsilon_{\rm CR}$ is the ratio of the energy emitted in radiation to that in UHECRs. If $\eta\sim \epsilon_{\rm CR}\sim 1$, these events would constitute a sizeable fraction of the total fluence of all long BATSE GRBs in one year (which is ${\cal F} \sim 0.024/9\sim 2.7\times 10^{-3}$ erg cm$^{-2}$). Since we know that the large majority of long GRBs are not nearby, newly born magnetars should not constitute conspicuous events in hard X–rays. Their fluence must be mostly emitted in another energy range. GRB 060218 (Campana et al. 2006), with an energy of a few $\times 10^{49}$ erg at a distance of 145 Mpc, could be one of these events, and Soderberg et al. (2006) and Toma et al. (2007) already suggested that this GRB was powered by a newly born magnetar. The spectrum of its prompt emission peaked at $\sim$5 keV, i.e. its fluence in relatively soft X–rays exceeded the 15–150 keV fluence. It was also very long and slowly rising, and would not have been detected by BATSE. Soderberg et al.
(2006) pointed out that these sub–energetic long GRBs should not be strongly beamed (so as not to exceed the rate of SN Ib,c), and should occur at a rate of $230^{+490}_{-190}$ Gpc$^{-3}$ yr$^{-1}$, corresponding to $R \approx 10^{-5}$ events per $L_*$ galaxy per year, about ten times larger than the rate of classical long GRBs, whose radiation is collimated into 1% of the sky. According to this rate, Eq. \[rate\] would then demand $<E>\sim 6\times 10^{50}$ erg in UHECRs to match the observed flux. Conclusion ========== We have correlated the cosmic rays with $E>57$ EeV detected by the Auger Observatory with a complete, absorption–free sample of HI–selected galaxies. We found a significant correlation when using the HI [*flux*]{} of the galaxies of our sample. Considering the larger 95HIPASS catalogue and the 27 UHECRs, we find a weak correlation (probability of 72%), while a larger significance (87.8%) is reached if we consider the most complete 99HICAT sample of galaxies (though with 25 UHECRs). These probabilities are maximised by cutting the HI sample into distance or luminosity bins: the probability becomes 99% when considering the 500 most luminous (or most HI massive) galaxies (1/4 of the sample), and 98% when considering the 500 galaxies lying between 38 and 54 Mpc, where the Centaurus cluster of galaxies is. Thus there is the possibility that the UHECRs coming from the direction of Cen A are instead coming from the more distant Centaurus cluster. Galaxies of this cluster are richer in HI than Virgo galaxies, explaining why there is no UHECR event from the direction of Virgo. Our sample is formed by HI–emitting galaxies and is therefore biased against ellipticals. The correlation found with these galaxies does not, per se, disprove the correlation with AGNs (Abraham et al. 2007, 2008; George et al. 2008), since AGNs also trace the local distribution of matter, as spiral galaxies do.
On the other hand, it opens up the possibility, on an equal footing, that UHECRs are produced by GRBs or newly born magnetars (see also Singh et al. 2004, who used AGASA events). With the caveat that it is premature, with so few events and large theoretical uncertainties, to draw strong conclusions, we have pointed out that although classical (i.e. energetic) long GRBs and short GRBs have difficulties in producing the required UHECR flux, newly born magnetars can. If so, the sources could also be a subclass of [*long*]{} GRBs, possibly sub–energetic and relatively nearby, powered by rapidly spinning, newborn magnetars. The increased statistics of future UHECR arrival directions will help to discriminate among the different proposed progenitors, especially through the presence (or absence) of an excess of events close to the radio core and/or lobes of Cen A. Acknowledgments {#acknowledgments .unnumbered} =============== We thank the referee for constructive comments. We acknowledge the ASI I/088/06/0 and the 2007 PRIN–INAF grants, and thank Ivy Wong and Martin Zwaan for providing the NHICAT catalogue. The Parkes telescope is part of the Australia Telescope, which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. Abbasi R.U., et al., 2008, preprint (astro–ph/0804.0382) Abraham J., et al., 2004, Nucl. Instrum. Methods A, 523, 50 Abraham J., et al., 2007, Science, 318, 938 Abraham J., et al., 2008, Astroparticle Physics, 29, 188 Abu–Zayyad T., et al., 2000, Nucl. Instrum. Methods A, 450, 253 Arons J., 2003, ApJ, 589, 871 Barnes D.G., et al., 2001, MNRAS, 322, 486 Blanton M.R., et al., 2001, AJ, 121, 2358 Borkowski J., Gotz D., Mereghetti S., Mowlavi N., Shaw S., Turler M., 2004, GCN Report No. 2920 Campana S., et al., 2006, Nat, 442, 1008 Cannizzo J.K., et al., 2006, GCN rep. 20.1 Dermer C.D., 2007, preprint (astro–ph/0711.2804) Fasano G. & Franceschini A., 1987, MNRAS, 225, 155 George M.R., Fabian A.C., Baumgartner W.H., Mushotzky R.F.
& Tueller J., 2008, MNRAS, 388, L59 Ghirlanda G., Magliocchetti M., Ghisellini G., Guzzo L., 2006, MNRAS, 386, L20 Gorbunov D., Tinyakov P., Tkachev I. & Troitsky S., 2008, preprint (astro–ph/0711.4060) Harari D., Mollerach S. & Roulet E., 2006, JCAP, 11, 12 Hartman R.C., et al., 1999, ApJS, 123, 79 Hurley K., et al., 2005, Nat, 434, 1098 Lazzati D., Ghirlanda G. & Ghisellini G., 2005, MNRAS, 362, L8 Meyer M.J., et al., 2004, MNRAS, 350, 1195 Milgrom M. & Usov V., 1995, ApJ, 449, L37 Moskalenko I.V., Stawarz L., Porter T.A. & Cheung C.C., 2008, subm. to ApJ (astro–ph/0805.1260) Murase K., Ioka K., Nagataki S. & Nakamura T., 2008, PRD, 78, id. 023005 (astro–ph/0801.2861) Nagar N.M. & Matulich J., 2008, A&A in press (astro–ph/0806.3220) Ohoka H., et al., 1997, Nucl. Instrum. Methods A, 385, 268 Peacock J.A., 1983, MNRAS, 202, 615 Singh S., Ma C.-P. & Arons J., 2004, Phys. Rev. D, 69, 063003 Soderberg A., et al., 2006, Nat, 442, 1014 Sommers P., 2001, Astroparticle Physics, 14, 271 Staveley–Smith L., et al., 1996, Publ. Astron. Soc. Aust., 13, 243 Tanvir N.R., Chapman R., Levan A.J., Priddey R.S., 2005, Nat, 438, 991 Terasawa T., et al., 2005, Nat, 434, 1110 Toma K., Ioka K., Sakamoto T. & Nakamura T., 2007, ApJ, 659, 1420 Torres D.F. & Anchordoqui L.A., 2004, RPPh, 67, 1663 (astro–ph/0402371) Tueller J., Mushotzky R.F., Barthelmy S., Cannizzo J.K., Gehrels N., Markwardt C.B. & Winter L.M., 2008, ApJ, 681, 113 Véron–Cetty M.-P. & Véron P., 2006, A&A, 455, 773 Vietri M., 1995, ApJ, 453, 883 Wang X.–Y., Razzaque S. & Mészáros P., 2008, ApJ, 677, 432 Wong O.I., et al., 2006, MNRAS, 371, 1855 Waxman E., 1995, Phys. Rev. Lett., 75, 386 Zwaan M.A., et al., 2004, MNRAS, 350, 1210 [^1]: Email: gabriele.ghisellini@brera.inaf.it
PUPT-2118\ hep-th/0405106 [A.M. Polyakov\ ]{} Joseph Henry Laboratories\ Princeton University\ Princeton, New Jersey 08544 [Abstract]{} In this article we discuss gauge/string correspondence based on non-critical strings. With this goal we present several remarkable sigma models with AdS target spaces. The models have kappa symmetry and are completely integrable. The radius of the AdS space is fixed, and thus they describe isolated conformal fixed points of gauge theories in various dimensions. May 2004 Soon after the proposal for gauge/strings correspondence \[1\] and its spectacular implementation in N=4 Yang-Mills theory \[2\] (which was also based on the earlier findings \[3\]), it was suggested that some non-supersymmetric gauge theories may become conformal at a fixed coupling \[4,5\]. The conjecture was based on the one-loop estimate of the $\beta$ function in the $AdS_{p}\otimes S_{q}$ sigma model of the non-critical string ($p+q<10$). The effective equations of motion have been shown to have a solution with particular values of the radii of $AdS_{p}$ and $S_{q}$. Unfortunately this regime takes place at curvatures of the order of the string scale, where the one-loop approximation can be used only as an order-of-magnitude estimate. Still, the counting of the parameters in \[5\] makes this result plausible. More recently this conjecture was discussed in the zero-dimensional model \[6\]. In this letter I will take the next step and give further arguments that the above sigma models are conformal. They are also shown to be completely integrable. The bosonic part of the AdS sigma model is the familiar action for the unit vector field $\overrightarrow{n}$, which in this case is hyperbolic, satisfying the relation $\overrightarrow{n}^{2}=-1$. This hyperboloid is embedded in the $p+1$ dimensional flat space with the signature (p,1) or (p-1,2), depending on whether the dual gauge theory is assumed to be in the Euclidean or Minkowskian space.
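The constraint $\overrightarrow{n}^{2}=-1$ can be illustrated numerically: in the signature $(p,1)$, any point of the form $(\sinh\rho\,\hat u,\ \cosh\rho)$, with $\hat u$ a unit vector in $\mathbb{R}^p$, lies on the hyperboloid. The following minimal sketch (function names are ours, not part of the original text) verifies this:

```python
import math
import random

def minkowski_sq(v):
    """n.n with signature (p,1): the last component is time-like."""
    return sum(x * x for x in v[:-1]) - v[-1] ** 2

def hyperboloid_point(p, rho):
    """A point with n.n = -1 on the p-dimensional hyperboloid embedded
    in the (p+1)-dimensional flat space of signature (p,1)."""
    u = [random.gauss(0.0, 1.0) for _ in range(p)]   # random direction
    norm = math.sqrt(sum(x * x for x in u))
    return [math.sinh(rho) * x / norm for x in u] + [math.cosh(rho)]

n = hyperboloid_point(4, 1.3)
assert abs(minkowski_sq(n) + 1.0) < 1e-9   # n^2 = -1 holds
```

The identity $\sinh^{2}\rho-\cosh^{2}\rho=-1$ guarantees the constraint for any $\rho$, which is why the check passes regardless of the random direction drawn.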
It is convenient to use the Cartan moving frame, defined by $$\begin{aligned} dn & =B^{a}e^{a}\\ de^{a} & =A^{ab}e^{b}-B^{a}n\nonumber\end{aligned}$$ Here the set of vectors $e_{a}$, $a=1,\ldots,p$, is orthogonal to $n$. The one-forms $B^{a}=A^{a,p+1}$ and $A^{ab}$ form a zero curvature connection with values in $SO(p+1)$. The gauge symmetry related to $A^{ab}$ corresponds to the rotation of the $e$-vectors by the $SO(p)$ group, and thus we treat $\overrightarrow{n}$ as an element of the coset space $SO(p+1)/SO(p)$. The Maurer–Cartan equations (the zero curvature conditions) are $$\begin{aligned} dB^{a} & =A^{ab}B^{b}\\ dA^{ab} & =\frac{1}{2}[AA]^{ab}-B^{a}B^{b}\nonumber\end{aligned}$$ where all the products are exterior products of 1-forms. The gauge invariant Lagrangian for the $\overrightarrow{n}$-field has the form $$L=\frac{1}{2\gamma}B_{\alpha}^{a}B_{\alpha}^{a}$$ where $\gamma$ is a coupling constant and the $SO(p)$ gauge symmetry is explicit, since there are no derivatives in this expression. The first variation of this action is $$\delta S\sim{\displaystyle\int} B_{\alpha}^{a}\nabla_{\alpha}\omega^{a}$$ where $$\delta B_{\alpha}^{a}=\nabla_{\alpha}\omega^{a}=\partial_{\alpha}\omega ^{a}-A_{\alpha}^{ab}\omega^{b}$$ This gives the equation of motion $$\nabla_{\alpha}B_{\alpha}^{a}=0$$ In order to calculate the $\beta$-function we have to calculate the second variation of the action $$\delta^{2}S\sim\int\nabla_{\alpha}\omega^{a}\nabla_{\alpha}\omega^{a}-\delta A_{\alpha}^{ab}\omega^{b}B_{\alpha}^{a}=\int\nabla\omega^{a}\nabla\omega ^{a}+(\omega_{a}\omega_{b}-\omega^{2}\delta_{ab})B^{a}B^{b}$$ where $\delta A^{ab}=\omega^{a}B^{b}-\omega^{b}B^{a}$ is the corresponding gauge transformation.
Using the fact that $<\omega^{a}\omega^{b}>=\frac {\delta_{ab}}{2\pi}\log\Lambda$, where $\Lambda$ is a cut-off, we obtain the divergent counterterm defining the $\beta$-function $$\Delta S=-\frac{p-1}{2\pi}\log\Lambda\int B_{\alpha}^{a}B_{\alpha}^{a}$$ We have recalled this 30-year-old derivation because, when done this way, it has a direct generalization to the case of interest. Before coming to that we need one more recollection: the Wess–Zumino terms and their contribution to the $\beta$-function. In the bosonic case the WZ terms exist only for p=3, which corresponds to the coset $\frac{SO(4)}{SO(3)}$. In this case we construct a 3-form (following Novikov and Witten) $$\Omega_{3}=e^{abc}B^{a}B^{b}B^{c}$$ where the exterior product of 1-forms is used. It is obvious that, due to the structure equations, this form is closed. It is also not exact. This last statement requires some explanation. In the compact case its meaning is obvious: one cannot represent $\Omega_{3}=d\Omega_{2}$ with a non-singular $\Omega_{2}$. But what is the meaning of this in the non-compact case and on supermanifolds? The definition of cohomology which we adopt below is as follows. We say that the 3-form is not exact if one cannot find an $\Omega_{2}$ which can be *locally* expressed in terms of the connections. This definition is motivated by the renormalization group, as we will see below. The key to the renormalization properties is the variation of $\Omega_{3}$. Under the gauge transformation we find $$\delta\Omega_{3}=e^{abc}d(\omega^{a}B^{b}B^{c})$$ Hence $$\delta S_{WZ}\sim e^{abc}\int\omega^{a}B^{b}B^{c}d^{2}\xi$$ The second variation gives $$\delta^{2}S_{WZ}\sim e^{abc}e^{\alpha\beta}\int\omega^{a}\nabla_{\alpha}\omega^{b}B_{\beta}^{c}d^{2}\xi$$ This term generates a $\log\Lambda$ contribution in the second order in $B$, which has an opposite sign to (6).
Choosing the action in the form $$S=\frac{1}{2\gamma}(\int B^{2}d^{2}\xi+\kappa\int\Omega_{3})$$ we find the $\beta$-function $$\beta(\gamma)=\frac{1}{2\pi}\gamma^{2}(1-\kappa^{2})+...$$ In the compact case the coefficient $\frac{\kappa}{\gamma}$ must be quantized and thus (according to the standard argument) cannot be renormalized. In the general case we can modify this argument by saying that the counterterms must depend on the connection locally, and thus cannot create a cohomologically non-trivial $\Omega_{3}$. There is a nice interplay between the cohomology (in the sense above) and the renormalization. In the bosonic case the above construction of the conformal $\overrightarrow{n}$-field theory is limited to the group $SO(4)$, since there are no invariant 3-tensors in the higher-dimensional case ($H^{3}(\frac{SO(p+1)}{SO(p)})=0$). In superspace the situation is different. Our main goal is to find the Wess–Zumino terms in the various superspaces of both critical and non-critical dimensions, which will provide us with the conformally invariant $\overrightarrow{n}$-field theories on the world sheet, corresponding in the hyperbolic cases to various gauge theories in space-time. The “critical” case of $AdS_{5}\times S_{5}$ has already been examined in the important work by Metsaev and Tseytlin \[7\]. Our approach in this case leads to some drastic simplifications, while being consistent with their results. Our first non-trivial example is based on the supergroup $OSp(2\mid4)$. Its bosonic part $Sp(4)\approx SO(3,2)$ acts on $AdS_{4}$, thus describing some 3d gauge theory. It also has the R-symmetry $SO(2)$. This is the simplest choice because, as we will see below, there are no closed 3-forms without R-symmetry (as in $OSp(1\mid4)$), and the case of the simpler supergroup $OSp(1\mid2)$, which was recently considered in \[6\], is somewhat degenerate and may require special consideration.
Roughly speaking, the extra invariant tensors in superspace are simply the elements of the $\gamma$-matrices, while the invariance condition is given by the famous $\gamma-\gamma$ identities \[8\]. The set of connections in $OSp(2\mid4)$ contains, as before, the 1-forms $B^{a}=A^{a5}$ and $A^{ab}$, where the latter is the gauge connection for $SO(3,1)$ ($a=1,\ldots,4$); directions 1 and 5 are assumed to be time-like. This set is complemented by the two gravitino 1-forms $\psi_{i}$, $i=1,2$; each form is also a four-component Majorana spinor. Finally, we have a connection $C$ of the R-symmetry $SO(2)$. The Maurer–Cartan equations have the form (they are easily read off the standard commutation relations of the $OSp$ algebra \[9\]) $$\begin{aligned} dB^{a} & =A^{ab}B^{b}+\overline{\psi_{i}}\gamma^{a}\psi_{i}\\ dA^{ab} & =\frac{1}{2}[AA]^{ab}-B^{a}B^{b}+\overline{\psi_{i}}\gamma ^{ab}\psi_{i}\\ d\psi_{i} & =(\gamma^{ab}A^{ab}+\gamma^{a}B^{a})\psi_{i}+Ce^{ij}\psi_{j}\\ dC & =e^{ij}\overline{\psi_{i}}\psi_{j}\end{aligned}$$ The closed 3-form which replaces (7) in this case is given by $$\Omega_{3}=e^{ij}B^{a}\overline{\psi}_{i}\gamma^{a}\gamma^{5}\psi_{j}$$ Here $\gamma^{ab}=\frac{1}{4}[\gamma^{a},\gamma^{b}]$ and $\gamma^{a}$ is the set of four real gamma matrices; everywhere the antisymmetric product of differential forms is assumed. The form $\Omega_{3}$ has explicit gauge symmetry under $SO(3,1)\times SO(2)$ and is thus defined on the coset space $\frac{OSp(2\mid4)}{SO(3,1)\times SO(2)}$. Let us now calculate $d\Omega_{3}$ by using the above relations. First of all, due to its explicit gauge symmetry, the terms containing $A^{ab}$ and $C$ vanish trivially. The non-trivial part is related to two identities.
First, from the $dB$ term comes the contribution $$d\Omega_{3}=(\overline{\psi}_{i}\gamma^{a}\psi_{i})e^{kl}(\overline{\psi_{k}}\gamma^{a}\gamma^{5}\psi_{l})+...$$ It can be rewritten as a sum of terms like $$(\overline{\psi}_{1}\gamma^{a}\psi_{1})(\overline{\psi}_{1}\gamma^{a}\chi)$$ where $\chi=\gamma^{5}\psi_{2}$. Since the product of 1-forms is cyclically symmetric, this expression is precisely the $\gamma-\gamma$ identity \[8\] and is equal to zero. Another dangerous term comes from the pieces $d\psi=\gamma^{a}B^{a}\psi+...$ and $d\overline{\psi}=-B^{a}\overline{\psi}\gamma^{a}+...$; its contribution is given by $$d\Omega_{3}=e^{ij}B^{a}(\overline{\psi}_{i}\gamma^{a}\gamma^{5}d\psi _{j}-d\overline{\psi}_{i}\gamma^{a}\gamma^{5}\psi_{j})$$ (the minus sign in this formula comes from the fact that we are differentiating 1-forms). By plugging in the above expression for $d\psi$ we see that we get zero (due to the presence of $\gamma^{5}$). In the simpler case of $OSp(1\mid4)$ this would not be possible, since the expression with $\gamma^{5}$ is identically zero and without it the contribution (20) would not cancel. Let us also notice that our WZ term is parity-conserving, since the effect of $\gamma^{5}$ is compensated by the orientation dependence of the exterior products. One might think that we have found a cohomology, but this is not the case. It is easy to see that $\Omega_{3}=d\Omega_{2}$, where $\Omega_{2}=e^{ij}\overline{\psi}^{i}\gamma^{5}\psi^{j}$. The $\gamma-\gamma$ identity turns out to be a part of the Jacobi identities for $OSp(2|4)$. Now we can choose the action for our sigma model in a remarkably simple form $$S=\frac{1}{2\gamma}(\int B_{\alpha}^{a}B_{\alpha}^{a}d^{2}\xi+\kappa\int \Omega_{2}d^{2}\xi)$$ The next step is to find the first and the second variations of this action. As in the bosonic case, we have to consider the change of $\Omega_{2}$ under infinitesimal gauge transformations.
These transformations are given by $$\begin{aligned} \delta B^{a} & =\nabla\omega^{a}+\overline{\psi}_{i}\gamma^{a}\varepsilon_{i}\\ \delta\psi_{i} & =\nabla\varepsilon_{i}+\gamma^{a}B^{a}\varepsilon _{i}+\gamma^{a}\omega^{a}\psi_{i}\end{aligned}$$ The bosonic field $\omega^{a}$ and the fermionic fields $\varepsilon_{i}$ (which are two Majorana spinors) will become the degrees of freedom of our sigma model. The variation of $\Omega_{2}$ is given by $$\delta\Omega_{2}=e^{ij}(\omega^{a}\overline{\psi}_{i}\gamma^{a}\gamma^{5}\psi_{j}+B^{a}\overline{\psi}_{i}\gamma^{a}\gamma^{5}\varepsilon_{j}+d(\psi_{i}\gamma^{5}\varepsilon_{j}))$$ The last term does not contribute to the action, and the equations of motion take the form $$\begin{aligned} \nabla_{\alpha}B_{\alpha}^{a}+\kappa e_{\alpha\beta}e^{ij}\overline{\psi }_{i\alpha}\gamma^{a}\gamma^{5}\psi_{j\beta} & =0\\ \gamma^{a}B_{\alpha}^{a}\psi_{i\alpha}+\kappa e_{\alpha\beta}e^{ij}B_{\alpha }^{a}\gamma^{a}\gamma^{5}\psi_{j\beta} & =0\end{aligned}$$ To calculate the $\beta$-function we need the second variation of the action. It is sufficient to find it in background fields for which all the fermionic components are set to zero. As a result, the answer is a sum of two terms, one of which is quadratic in $\omega$ and given by (5), while the other is quadratic in $\varepsilon$. It is convenient to pass to complex notations, $\psi=\psi_{1}+i\psi_{2}$, and to introduce left and right components of a spinor, $\psi=\frac{1+\gamma_{5}}{2}\psi_{L}+\frac {1-\gamma_{5}}{2}\psi_{R}$. We also set $\kappa=1$ since, as we will see, this is necessary for conformal symmetry. It is straightforward to vary (21) and to find the second variation of the action.
In the Weyl notations it is given by$$\delta^{2}S\sim\int(\overline{\varepsilon}_{L}\widehat{B}_{+}\nabla_{-}\varepsilon_{L}+\overline{\varepsilon}_{R}\widehat{B}_{-}\nabla_{+}\varepsilon_{R}+\overline{\varepsilon}_{L}\widehat{B}_{+}\widehat{B}_{-}\varepsilon_{R})d^{2}\xi$$ where $\widehat{B}=\gamma^{a}B^{a}.$ In order to calculate the $\beta$-function it is sufficient to treat the case in which the background fields $B$ are constant matrices. This follows from the fact that the counterterms can’t contain the gradients of $B$, which would have higher dimension. Another constraint on the string-theoretic background is that the energy-momentum tensors are zero$$T_{\pm\pm}=\widehat{B}_{\pm}^{2}=0$$ This condition implies that our Lagrangian is degenerate and we must fix the $\kappa$-symmetry. In the present context this is quite simple. Redefine the fields and matrices in the following way$$\begin{aligned} \widehat{B}_{\pm} & =\sqrt{B_{+}^{a}B_{-}^{a}}\gamma_{\pm}\\ \varepsilon_{L,R} & =(B_{+}^{a}B_{-}^{a})^{-\frac{1}{4}}\phi_{L,R}\end{aligned}$$ where $\{\gamma_{+},\gamma_{-}\}=2;\ \gamma_{\pm}^{2}=0.$ Just as is done in the light cone gauge, we can impose conditions on $\phi$ which kill one half of its components. Namely, we take $\gamma_{-}\phi_{L}=0;\ \gamma_{+}\phi_{R}=0.$ With these constraints the action (27) takes the form$$\delta^{2}S\sim\int(\phi_{L}^{\dagger}\nabla_{+}\phi_{L}+\phi_{R}^{\dagger}\nabla_{-}\phi_{R}+(\phi_{L}^{\dagger}\phi_{R}+\phi_{R}^{\dagger}\phi_{L})\sqrt{B_{+}^{a}B_{-}^{a}})d^{2}\xi$$ It is instructive to count the number of degrees of freedom in this case. We see that after fixing the $\kappa$-symmetry we are left with two left movers and two right movers (we are counting real components of the spinors). On the bosonic side we have four d.o.f. coming from the $AdS_{4}$, which are reduced to two by the Virasoro constraints. So there is a match between bosons and fermions. 
The contribution of fermionic fluctuations to the $\beta$-function comes from, and only from, the second order iteration of the mass term $$\int d^{2}\xi\langle\phi_{L}^{\dagger}\phi_{R}(0)\phi_{R}^{\dagger}\phi_{L}(\xi)\rangle\sim\log\Lambda$$ It has a sign opposite to (6) and cancels it. Beyond one loop we need a more general argument, since our action is cohomologically trivial. Let us return to the first variation of the action and write it, using Weyl’s notations, in the form$$\delta S=\int(\overline{\psi}_{L-}\widehat{B}_{+}\varepsilon_{L}+\overline{\psi}_{R+}\widehat{B}_{-}\varepsilon_{R})d^{2}\xi$$ The $\kappa$-symmetry of this action is immediately seen by setting $\varepsilon_{L}=\widehat{B}_{+}\kappa_{-}$, $\varepsilon_{R}=\widehat{B}_{-}\kappa_{+}$. We obtain a contribution proportional to the world-sheet energy-momentum tensor $T_{\pm\pm}=(B_{\pm}^{a})^{2}$, which can be cancelled by a shift of the world-sheet metric. This symmetry will be lost if we generate a counterterm explicitly dependent on the metric. Thus a non-zero $\beta$-function, which introduces an explicit dependence on the Liouville field, must be forbidden. This, however, is not conclusive, since we can’t exclude an anomaly in the $\kappa$-symmetry. While we lack a complete proof, let us add another argument in favor of conformal symmetry. The variation of the action (33) doesn’t depend on $\psi_{L+}$ and $\psi_{R-}$. This independence persists to the second variation. Perhaps one can prove it in all orders. If this is the case, conformal symmetry follows immediately. Indeed, the logarithmically divergent counterterm must contain terms like $\overline{\psi}_{R-}\psi_{L+}$ and thus can’t appear. Notice that conformal symmetry of the familiar WZNW model can be proved by a very similar argument. However, at present conformal symmetry is still a conjecture. It is important to realize that before fixing the $\kappa$-symmetry the model is not renormalizable. 
At first glance this seems strange, since both the bosonic and fermionic connections entering (21) have dimension one and thus the coupling constant is dimensionless. On the other hand, even in flat space the GS action contains quartic fermionic terms with derivatives, which naively would give power-like divergences. Similar terms appear in our formalism if we continue the loop expansion by further variations of the action. The reason for this discrepancy is that the leading term in the kinetic energy for the fermionic excitations vanishes. Indeed, while $\psi\sim\partial\varepsilon$, the term $(\partial\varepsilon)^{2}$ is absent from the action due to the properties of Majorana spinors. Instead we get a kinetic term with first derivatives only (in contrast with the bosonic part). As a result, the UV dimension of $\varepsilon$ is 1/2 instead of zero. This is the source of the power-like UV behaviour. These power-like counterterms are quite unusual: by dimensional counting they are seen to contain *negative powers* of the background field $B.$ After fixing the $\kappa$-gauge most of the non-linear terms should disappear. We know that this happens in the light-cone gauge in flat space, and in the leading UV order the curvature is irrelevant. However, in general the right choice of the $\kappa$-gauge and renormalizability is a non-trivial problem. I plan to analyze it in a separate article. Let us stress that explicit renormalizability may depend on the gauge choice. For example, the Nambu action of the bosonic string is renormalizable in the conformal gauge and apparently non-renormalizable in the Monge gauge. These considerations show that only $\kappa$-symmetric actions are allowed. Another reason for that is the fact that in Minkowski space-time the Green-Schwarz fermions contain negative norms, and these are eliminated by the $\kappa$-symmetry. It is interesting to notice that $\kappa$-symmetric models are completely integrable. 
In the critical case $AdS_{5}\times S_{5}$ it was known for some time that this model has a hidden symmetry (\[16\] and A. Polyakov, unpublished). In the non-critical case this is also true and can be demonstrated in a very simple way. Generally, hidden symmetry follows either from the Lax representation or from the zero curvature representation with the spectral parameter $\lambda$ \[17\]. In the latter case we need to construct a family of $\lambda$-dependent flat connections, such that at $\lambda=1$ they coincide with our original set (13-16), while flatness for other $\lambda$ implies the equations of motion. Let us do it for $OSp(2|4).$ The relevant zero curvature equations in the complex Weyl notations have the form$$\begin{aligned} \nabla_{+}B_{-}^{a}-\nabla_{-}B_{+}^{a} & =\overline{\psi}_{L}\gamma^{a}\psi_{L}+\overline{\psi}_{R}\gamma^{a}\psi_{R}\\ \nabla_{+}\psi_{L-}-\nabla_{-}\psi_{L+} & =\widehat{B}_{+}\psi_{R-}-\widehat{B}_{-}\psi_{R+}\\ \nabla_{+}\psi_{R-}-\nabla_{-}\psi_{R+} & =\widehat{B}_{+}\psi_{L-}-\widehat{B}_{-}\psi_{L+}\end{aligned}$$ Now let us introduce the spectral deformation of these connections in the following way$$\begin{aligned} B_{-} & \Rightarrow\lambda B_{-};\quad B_{+}\Rightarrow\lambda^{-1}B_{+}\\ \psi_{R\pm} & \Rightarrow\lambda^{\frac{1}{2}}\psi_{R\pm};\quad\psi_{L\pm}\Rightarrow\lambda^{-\frac{1}{2}}\psi_{L\pm}\end{aligned}$$ while all other connections remain unchanged. These deformations preserve the zero curvature conditions if the following equations of motion are satisfied$$\begin{aligned} \nabla_{+}B_{-}^{a} & =\overline{\psi}_{R}\gamma^{a}\psi_{R};\quad\nabla_{-}B_{+}^{a}=\overline{\psi}_{L}\gamma^{a}\psi_{L}\\ \widehat{B}_{+}\psi_{L-} & =0;\quad\widehat{B}_{-}\psi_{R+}=0\end{aligned}$$ which are just the equations of motion for the $OSp(2|4)$ model. Notice also that if the fermions are set to zero we get the standard zero curvature representation for the $\overrightarrow{n}$-field and the sine-Gordon equations \[17\]. 
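The mechanism can be made explicit by tracking the $\lambda$-weights that the deformation assigns to the terms of the first zero-curvature equation (schematic bookkeeping only; relative signs depend on the conventions implicit in the form notation):$$\nabla_{+}B_{-}^{a}\rightarrow\lambda\,\nabla_{+}B_{-}^{a},\qquad\nabla_{-}B_{+}^{a}\rightarrow\lambda^{-1}\nabla_{-}B_{+}^{a},\qquad\overline{\psi}_{R}\gamma^{a}\psi_{R}\rightarrow\lambda\,\overline{\psi}_{R}\gamma^{a}\psi_{R},\qquad\overline{\psi}_{L}\gamma^{a}\psi_{L}\rightarrow\lambda^{-1}\overline{\psi}_{L}\gamma^{a}\psi_{L}.$$ Demanding flatness at every $\lambda$ forces the $\lambda^{+1}$ and $\lambda^{-1}$ pieces to vanish separately, pairing $\nabla_{+}B_{-}^{a}$ with $\overline{\psi}_{R}\gamma^{a}\psi_{R}$ and $\nabla_{-}B_{+}^{a}$ with $\overline{\psi}_{L}\gamma^{a}\psi_{L}$, which is exactly the split realized by the first pair of equations of motion.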
Existence of the $\lambda$-dependent flat connections easily leads to an infinite number of conserved currents \[17\]. In principle, with these formulae one can start the heavy machinery of the inverse scattering method. But even in the bosonic case this is not straightforward because of possible quantum anomalies. We will not proceed with it here and only notice that this hidden symmetry must manifest itself in the spectrum of anomalous dimensions. The above scheme generalizes to the sigma models on $AdS_{5}$ describing 4d gauge theories. In this case the relevant supergroups are $SU(2,2\mid N)$. The bosonic part is $SO(4,2)\times U(N)$ (for $N=4$ the right factor is $SU(4)$). It is convenient to use the Majorana representation of $SO(4,2)$ provided by the $8\times8$ real $\gamma$-matrices. The conjugation rule in this case is $\overline{\psi}=\psi^{T}\beta,$ $\widetilde{M}=\beta^{-1}M^{T}\beta$, where $\beta=\gamma^{1}\gamma^{6}$ (we assume that $1$ and $6$ are the time-like directions). The odd matrices under this conjugation consist of $\gamma_{pq},\gamma_{pqr},\gamma_{7}$ (they form the algebra $Sp(8)$); all other tensors are even. The Cartan-Maurer equations are almost the same as before. We will write explicitly their fermionic part only $$\begin{aligned} d\psi_{k} & =(C_{kl}+i\gamma^{7}D_{kl})\psi_{l}+...\\ dB^{a} & =\overline{\psi}_{k}\gamma^{a}\gamma^{6}\psi_{k}+...\\ dA^{ab} & =\overline{\psi}_{k}\gamma^{\lbrack a}\gamma^{b]}\psi_{k}+...\\ dC_{kl} & =\overline{\psi}_{k}\psi_{l}+...\\ dD_{kl} & =\overline{\psi}_{k}\gamma^{7}\psi_{l}-\frac{1}{4}\delta_{kl}\overline{\psi}_{n}\gamma^{7}\psi_{n}+...\end{aligned}$$ where $C$ is antisymmetric and $D$ is symmetric in $k,l.$ The $U(N)$ connection is just $C+iD.$ Let us now discuss the WZ term. Its form depends on the type of the supercoset space we are looking for, that is, on the part of the R-symmetry group which we want to gauge. This choice must be consistent with the $\kappa$-symmetry. 
The general expression for the 2-form defined on $AdS_{5}$ is given by$$\Omega_{2}=\overline{\psi}_{k}\gamma^{6}(E^{kl}+i\gamma^{7}F^{kl})\psi_{l}$$ where $E$ and $F$ are some antisymmetric matrices. Since we do not have a general classification of all possible matrices, let us discuss some interesting examples. First of all, the simplest supergroup is $SU(2,2\mid1)$, which is the symmetry of the N=1 Yang-Mills theory. In this case the WZ term doesn’t exist: $\Omega_{2}$ vanishes because $\gamma^{6}$ is an even matrix and the result must be antisymmetric. For the case N=2 we have a natural action with $E^{kl}=e^{kl}.$ It is easy to see that this differential form is invariant under the subgroup $SU(2)$ of the $R$-symmetry (which is described by the traceless part of the above connection) and under $SO(4,1)$ transformations of space-time (this symmetry is explicit in (46)). As a result, the Goldstone modes will, as before, include bosonic fluctuations $\omega^{a}$ with $a=1...5,$ two Majorana 8-spinors $\varepsilon_{k}$, and also the $U(1)$ remainder of the $R$-symmetry, the angle $\alpha.$ Thus our action describes a sigma model on $AdS_{5}\times S_{1}.$ The gauge variations needed to derive the equations of motion are given by$$\begin{aligned} \delta\psi_{k} & =\gamma^{b6}(B^{b}\varepsilon_{k}+\omega^{b}\psi_{k})+e^{kl}\gamma^{7}(C\varepsilon_{l}+\alpha\psi_{l})+\nabla\varepsilon_{k}\\ \delta B^{a} & =\nabla\omega^{a}+\overline{\psi}_{k}\gamma^{a6}\varepsilon_{k}\\ \delta C & =\nabla\alpha+e^{kl}\overline{\psi}_{k}\gamma^{7}\varepsilon_{l}\end{aligned}$$ We see that in order to have $\kappa$-symmetry the action must have the form$$S=\frac{1}{2\gamma}\int((B^{a})^{2}+C^{2}+\Omega_{2})d^{2}\xi$$ It is convenient at this stage to replace the Majorana 8-spinors by Weyl 4-spinors. With these modifications the first and second variations are the same as in the previous case, except that the spinors are larger and the extra connection $C$ is added. 
Once again we have a Fermi-Bose match: there are 6 bosons from $AdS_{5}\times S_{1}$, reduced to 4 by the constraints, and 4 physical fermions. Our next example is the group $SU(2,2\mid4)$, the case already examined in \[7\]. As is well known, the $R$-symmetry in this case is reduced to $SU(4)\approx SO(6).$ It is convenient to introduce the Clifford algebra of $O(6)$, which allows the Majorana representation with purely imaginary antisymmetric $8\times8$ matrices which we will call $\beta^{n}$, $n=1...6.$ We will now repackage the set of $\psi_{k}$, $k=1,...4$ connections (each of which is a Majorana 8-spinor of $SO(4,2)$). We consider a set of 64 Majorana fields $\Psi$ which are the direct product of 8-spinors in $SO(4,2)\times SO(6).$ The Weyl condition, which reduces the number of fields to the desired 32, is given by $\gamma^{7}\beta^{7}\Psi=\Psi.$ The set of bosonic connections is simply doubled. We have now to find the WZ term. As we saw before, we need an antisymmetric tensor to write the needed 2-form. The key observation is that it is provided by the matrix $\beta^{6}$. The 2-form with the right properties is $$\Omega_{2}=\overline{\Psi}\beta^{6}\gamma^{6}\Psi$$ This form is explicitly invariant under $SO(4,1)\times SO(5)$ rotations forming a gauge group. The full action is remarkably simple$$S=\frac{1}{2\gamma}\int((B^{a})^{2}+(C^{n})^{2}+\Omega_{2})d^{2}\xi$$ The key difference (apart from the different choice of variables) with \[7\] is that in that paper the WZ term was written as a 3-form. Here we notice that this 3-form is exact, and this greatly simplifies the matter. Let us also notice that in \[7\] the authors worked with a pair of Majorana-Weyl 16-spinors, $L_{1}$ and $L_{2}.$ In these variables (linearly related to ours) the form $\Omega_{2}=\overline{L}_{1}L_{2}$. We will not repeat the calculations of the second variation and of the $\kappa$-symmetry, since they are practically identical to the derivations given above. 
So far we discussed only the $\beta$-function, but for string theory we must also have the correct central charge $c(\gamma)=26.$ In principle this relation determines the value of $\gamma$ and thus fixes the curvature of $AdS_{5}$. In the corresponding gauge theory this means that, unlike the N=4 Yang-Mills theory, we are discussing 4d theories with isolated zeroes of their (4d) $\beta$-functions. It is clearly important to calculate $c(\gamma).$ In the WZNW model this problem was solved long ago. In the present case we still lack the necessary tools. All we can do at the moment is to find this function at $\gamma\rightarrow0.$ In this limit the bosonic part of the action gives a contribution simply equal to the number of degrees of freedom. However, there is a subtlety with the fermionic part. The action (27) in the UV limit looks like the action for the world-sheet fermions. The latter have central charge $\frac{1}{2}$. So naively one should get $c=\frac{1}{2}\times$(number of Fermi fields). This counting is wrong (see also related comments in \[10\]). To get the right one, let us notice that the dependence on the Liouville field in this Lagrangian appears through the Pauli-Villars regulators. We introduce heavy fermions $\chi$ with mass equal to the cut-off $\Lambda.$ The mass term in their Lagrangian has the form$$S_{PV}\sim\Lambda\int e^{\varphi}\overline{\chi}\chi d^{2}\xi$$ since these fermions are scalars from the world-sheet point of view. For standard world-sheet fermions, which are spinors, we would get an $e^{\frac{\varphi}{2}}$ factor in the corresponding expression. Since the central charge is the coefficient in front of the Liouville action, which is quadratic in $\varphi,$ we conclude that the right formula for $c$ in the limit of zero coupling is $c=n_{B}+2n_{F}$, in which the contribution of the GS fermions is *four times larger* than the central charge of the world-sheet fermions. 
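To spell out the factor of four implied by this argument (a restatement of the counting above, not an additional assumption): a regulator mass term carrying the Liouville factor $e^{q\varphi}$ feeds into the induced Liouville action quadratically in $q$, so the central charge contribution of a single Majorana component scales as $q^{2}$. A world-sheet fermion has $q=\frac{1}{2}$ and contributes $c=\frac{1}{2}$; a GS fermion, being a world-sheet scalar, has $q=1$, hence$$c_{\mathrm{GS}}=\left(\frac{1}{1/2}\right)^{2}\cdot\frac{1}{2}=2\qquad\Rightarrow\qquad c=n_{B}+2n_{F}.$$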
In the case of the flat 10d space that indeed gives $c=(10+2\times8)=26$ (after the $\kappa$-symmetry is gauge fixed, we remain with 8 fermions in each direction). The sigma models we described, provided that the conjecture of conformal invariance is correct, describe gauge theories in various dimensions. Some more work is needed to identify their matter content. In most cases known today, this issue is resolved by appealing to the D-brane picture in flat space and then replacing the D-branes by the corresponding fluxes. This approach works for the weak coupling, when the supergravity approximation is applicable. However, as was stressed in \[4\], D-branes, while useful, are neither necessary nor sufficient for the gauge/strings correspondence. In general one has to analyze the edge states of the sigma model. As was argued in \[1\], they are described by the open string vertex operators and correspond to the various fields on the gauge theory side. Such operators can be studied at weak coupling, although even that is non-trivial. These calculations have not been done so far. The only thing we know at present is the symmetry of the above models. To avoid confusion one should clearly distinguish the explicit symmetries of the above actions from the global symmetry of the theory. The explicit symmetries are in fact gauge symmetries coming with the coset space. They are related to the right supergroup action. On the other hand, our Lagrangians are written in terms of the left-invariant connections. Thus the global supergroup $SU(2,2|N)$ of left multiplications is not visible but definitely present (even in the standard bosonic $\frac{SO(3)}{SO(2)}$ case the explicit symmetry is $SO(2)$, while the global symmetry is $SO(3)$). At the same time, there is a simple way to pass to non-supersymmetric models. It was pointed out in \[4\] that for the gauge/strings correspondence it is necessary to eliminate the open string tachyon from the edge states. 
The minimal way to achieve it in the NSR formalism is to exploit the non-chiral GSO projection leading to the Type 0 strings without supersymmetry. The closed string tachyon may be either of the “good variety” \[4\], in which case it is harmless, or of the bad variety, corresponding to relevant operators on the gauge theory side. In the latter case the gauge theory requires a fine-tuning to be conformal. In the present context the Type 0 construction in the Green-Schwarz formalism corresponds to the summation over the spin structures for the GS fermions (recall that in the standard supersymmetric case one must take only positive spin structures). The summation preserves modular invariance and projects out the states with an odd number of GS fermions (see an alternative discussion in \[11\]). Above we discussed only the simplest supercosets. They are cohomologically trivial, and for that reason we couldn’t prove non-renormalization of the WZ term. They also contained no free parameters. It would be very interesting to find cases without these limitations. A free parameter must appear in the theories describing gauge fields with fundamental matter. In this case the sigma model must contain a parameter $N_{f}/N_{c}$. It is interesting to notice that the structure of some simple supergroups indeed depends on a free parameter \[9\]. Conformal gauge theories described above may find various applications. They are useful for the further decoding of the gauge/strings correspondence, in particular for testing the strong coupling limit, which I will discuss elsewhere. One might also think of using 3d conformal gauge theories for the holographic description of the early universe. Another interesting problem related to the above models is QCD with $\vartheta=\pi.$ However, first we must learn much more about their dynamics (after all, we didn’t really prove that the $\beta$-function is zero and didn’t compute the central charge). 
After I wrote this paper I learned (from A. Tseytlin) that the quadratic form of the WZ term has some history \[12-14\]. I refer the reader to these valuable papers. However, neither our models nor the issues of conformal symmetry have been discussed before. Also, the supercoset models were analyzed in \[15\] in the Berkovits formalism. The relation of this impressive paper to the present one is unclear to me. It is a pleasure to thank Ig. Klebanov, J. Maldacena, A. Tseytlin and H. Verlinde for very useful discussions. This work was partially supported by the NSF grant 0243680. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.

REFERENCES

\[1\] A. M. Polyakov, Nucl. Phys. Proc. Suppl. 68 (1998), hep-th/9711002

\[2\] J. Maldacena, Adv. Theor. Math. Phys. 2, 231 (1998), hep-th/9711200

\[3\] Ig. Klebanov, Nucl. Phys. B496, 231 (1997)

\[4\] A. M. Polyakov, Int. J. Mod. Phys. A14, 645 (1999)

\[5\] A. M. Polyakov, Phys. Atom. Nucl. 64, 540 (2001), hep-th/0006132

\[6\] H. Verlinde, hep-th/0403024

\[7\] R. Metsaev, A. Tseytlin, Nucl. Phys. B533, 109 (1998)

\[8\] L. Brink, J. Schwarz, J. Scherk, Nucl. Phys. B121, 253 (1977)

\[9\] P. van Nieuwenhuizen, Phys. Rep. 68, 189 (1981)

\[10\] N. Drukker, D. J. Gross, A. Tseytlin, JHEP 0004 (2000) 021, hep-th/0001204

\[11\] Ig. Klebanov, A. Tseytlin, JHEP 9903 (1999) 015, hep-th/9901101

\[12\] R. Roiban, W. Siegel, JHEP 0011 (2000) 024, hep-th/0010104

\[13\] M. Hatsuda, K. Kamimura, M. Sakaguchi, Phys. Rev. D62 (2000) 105024, hep-th/0007009

\[14\] M. Hatsuda, M. Sakaguchi, hep-th/0205092

\[15\] N. Berkovits, C. Vafa, E. Witten, JHEP 9903 (1999) 018, hep-th/9902098

\[16\] I. Bena, R. Roiban, J. Polchinski, Phys. Rev. D 69, 046002 (2004), hep-th/0305116

\[17\] B. Dubrovin, I. Krichever, S. Novikov, in “Encyclopedia of Mathematical Sciences”, Vol. 4, Springer Verlag (1990)
--- abstract: 'We show that Steady-state Ab initio Laser Theory (SALT) can be applied to find the stationary multimode lasing properties of an $N$-level laser. This is achieved by mapping the $N$-level rate equations to an effective two-level model of the type solved by the SALT algorithm. This mapping yields excellent agreement with more computationally demanding $N$-level time domain solutions for the steady state.' address: - '${}^1$Department of Applied Physics, Yale University, New Haven, CT 06520' - '${}^2$Department of Electrical Engineering, Princeton University, Princeton, NJ 08544' - '${}^*$Corresponding author: douglas.stone@yale.edu' author: - 'Alexander Cerjan,$^1$ Yidong Chong,$^1$ Li Ge,$^2$ and A. Douglas Stone$^{1*}$' title: 'Steady-State Ab Initio Laser Theory for N-level Lasers' --- [99]{} H. Haken, *Light: Laser Dynamics* Vol. 2 (North-Holland Phys. Publishing, New York, 1985). A. E. Siegman, *Lasers* (University Science Books, Mill Valley - California, 1986). A. S. Nagra and R. A. York, “FDTD analysis of wave propagation in nonlinear absorbing and gain media,” IEEE Trans. Antennas Propag. **46**, 334-340 (1998). K. S. Yee, “Numerical solution of the initial boundary value problems involving Maxwell’s equations in isotropic media,” IEEE Trans Antennas Propag. **14**, 302-307 (1966). H. E. Türeci, A. D. Stone, and B. Collier, “Self-consistent multimode lasing theory for complex or random lasing media,” Phys. Rev. A **74**, 043822 (2006). H. E. Türeci, A. D. Stone, and L. Ge, “Theory of the spatial structure of nonlinear lasing modes,” Phys. Rev. A **76**, 013813 (2007). H. E. Türeci, L. Ge, S. Rotter, and A. D. Stone, “Strong interactions in multimode random lasers,” Science **320**, 643-646 (2008). L. Ge, Y. D. Chong, and A. D. Stone, “Steady-state ab initio laser theory: generalizations and analytic results,” Phys. Rev. A **82**, 063824 (2010). H. Cao, “Review on the latest developments in random lasers with coherent feedback,” J. Phys. 
A **38**, 10497-10535 (2005). O. Painter, R. K. Lee, A. Scherer, A. Yariv, J. D. O’Brien, P. D. Dapkus, and I. Kim, “Two-dimensional photonic band-gap defect mode laser,” Science **284**, 1819-1821 (1999). S. Chua, Y. D. Chong, A. D. Stone, M Soljacić, and J. Bravo-Abad, “Low-threshold lasing action in photonic crystal slabs enabled by Fano resonances,” Opt. Express **19**, 1539 (2011). C. Gmachl, F. Capasso, E. E. Narimanov, J. U. Nöckel, A. D. Stone, J. Faist, D. L. Sivco, and A. Y. Cho, “High-power directional emission from microlasers with chaotic resonators,” Science **280**, 1556-1564 (1998). Li Ge, Yale PhD thesis, 2010. L. Ge, R. J. Tandy, A. D. Stone, and H. E. Türeci, “Quantitative verification of ab initio self-consistent laser theory,” Opt. Express **16**, 16895 (2008). The equations are written for the TM case, the modifications for TE are straightforward. The observation that coherent and incoherent pumping are nearly equivalent for most systems, is invalid when the coherent pumping is supplied at a similar frequency to the atomic lasing transition and thus interactions between the lasing field and pumping field must be taken into account. H. Fu and H. Haken, “Multifrequency operations in a short-cavity standing-wave laser,” Phys. Rev. A **43**, 2446-2454 (1991). Y. I. Khanin, *Principles of Laser Dynamics* (Elsevier, Amsterdam, 1995). B. Bidégaray, “Time discretizations for Maxwell-Bloch equations,” Numer. Meth. Partial Differential Equations **19**, 284-300 (2003). For any 1D cavity which is uniformly pumped the TCF states for solving SALT can also be found using a transfer matrix method which does not require discretizing space. We use a more general TCF solver in the calculations presented here which does discretize space. X. Jiang and C. M. Soukoulis, “Time Dependent Theory for Random Lasers,” Phys. Rev. Lett. **85**, 70 (2000). X. Jiang, S. Feng, C. M. Soukoulis, J. Zi, J. D. Joannopoulos, and H. 
Cao, “Coupling, competition, and stability of modes in random lasers,” Phys. Rev. B **69**, 104202 (2004). R. W. Boyd, *Nonlinear Optics* (Academic Press, New York, 2008). Introduction ============ Semiclassical laser theory, which neglects the quantum fluctuations of the electromagnetic field, is widely used to describe and simulate lasers [@haken; @lamb63]. In principle, it correctly describes the laser thresholds and frequencies, the spatial pattern of the lasing modes, and the laser output power, including all classical non-linear effects, such as spatial hole-burning, gain saturation, and mode and phase locking. Essentially the theory describes Maxwell’s equations in an open cavity, coupled to the non-linear polarization of the gain medium. The gain polarization can be described using either a classical non-linear oscillator model [@siegman], or a quantum-mechanical model of $N$ atomic levels in which the polarization and level populations obey the equations of motion of the quantum density matrix. The simplest version of the theory, used widely in textbooks, is the two-level Maxwell-Bloch (MB) model [@haken]; however, most design and characterization simulations of lasers use models with $N = 3$ or more levels. In addition, most theoretical solutions for the semiclassical laser equations employ a large number of simplifying assumptions in order to make them analytically tractable, most notably neglecting the openness of the cavity and/or treating only simple one-dimensional (1D) or ring cavities, as well as approximating the non-linear interactions to cubic order. The results are typically not useful for quantitative modeling. Until recently, the only useful way to obtain quantitative results for non-trivial laser structures was to integrate the semiclassical laser equations in space and time. 
For novel and interesting new laser structures with non-trivial 2D and 3D cavity geometries, such simulations are at the limits of computational feasibility, making it difficult to study a large parameter space or ensemble of designs. In the past five years a new approach to finding the stationary solutions of the semiclassical laser equations has been developed, known as Steady-state Ab initio Laser Theory (SALT) [@salt1; @salt2; @saltsci; @spasalt]. SALT treats the openness of the cavity exactly, and the multimode non-linear interactions to infinite order (with two approximations, to be discussed below). It is applicable to cavities of arbitrary complexity in 2D and 3D, although we will discuss here only the scalar wave equation coupled to a gain medium. Importantly, this approach eliminates the need to perform a time integration to steady state, dramatically reducing the computational effort and allowing one to study cavities with high spatial complexity, such as 2D random lasers [@saltsci; @cao], photonic crystal lasers [@phot; @pcsel] and chaotic disk microlasers [@disk; @li_thesis]. SALT was originally formulated to solve the steady-state two-level Maxwell-Bloch equations [@salt1; @salt2] in the standard slowly-varying envelope approximation (SVEA), and an iterative algorithm was developed [@salt2] to solve the resulting SALT equations. Subsequently it was realized that the SVEA afforded no advantage in numerical solutions of the SALT equations, so this approximation was dropped, leading to slightly different SALT equations [@li_thesis; @tandy], which were used in all subsequent work [@saltsci; @spasalt; @pcsel]. Recently an important generalization of the SALT solution algorithm was developed, improving its performance for laser cavities with complex spatial index variation and inhomogeneous pumping [@spasalt]. 
In its present form, SALT only employs two approximations, the stationary inversion approximation (SIA) and the rotating wave approximation (RWA), which are both well satisfied for most lasers of interest. In Ref. [@tandy], the results of SALT were compared to the full time-dependent solution of the MB equations for a simple 1D cavity in the multimode regime, well above threshold. Excellent agreement was found in the parameter regime for which the SIA holds. This was, to our knowledge, the first demonstration of a frequency-domain method which agrees with exact time-domain methods for above-threshold multimode lasing. Previous applications of SALT have focused on two-level gain media. In order for SALT to be a useful modeling tool, it is necessary to demonstrate that it can be applied to $N$-level lasers. In the current work, we show analytically that the steady-state equations for an $N$-level laser can be reduced to those for an effective two-level system, and hence solved using the efficient SALT algorithm with essentially the same degree of computational effort. We also explore how this effective two-level system differs from the ordinary two-level laser. Next, we present a numerical comparison between the results of SALT calculations and exact $N$-level finite-difference time-domain (FDTD) calculations, for the same simple 1D laser studied in [@tandy], as well as for a 1D random laser. We note that a similar comparison between SALT and FDTD has been performed for a four-level high-Q single-mode photonic crystal laser in Ref. [@pcsel], with good agreement found. Here, we test SALT’s accuracy in treating the more challenging case of multimode lasing in low-Q and random cavities. Effective two-level systems =========================== Four level system analysis -------------------------- We illustrate the approach using the semi-classical laser equations [@haken] for the four-level atomic gain medium shown in Fig. 
\[schem\]: $$\begin{aligned} 4\pi \ddot{\mathbf{P}}^+ &=& c^2 \nabla^2 \mathbf{E}^+ - \varepsilon_c(\mathbf{r}) \ddot{\mathbf{E}}^+ \label{waveqn} \\ \dot{\mathbf{P}}^{+} &=& -\left(i\omega_a + \gamma_{\perp}\right)\mathbf{P}^{+} + \frac{g^2}{i \hbar}\mathbf{E}^+\left(\rho_{22} - \rho_{11}\right) \label{poleqn} \\ \dot \rho_{33} &=& \mathcal{P} \left( \rho_{00} - \rho_{33} \right) - \gamma_{23} \rho_{33} \label{rbeqn} \\ \dot \rho_{22} &=& \gamma_{23} \rho_{33} - \gamma_{12} \rho_{22} - \frac{1}{i\hbar}\mathbf{E}^+ \cdot \left((\mathbf{P}^{+})^* - \mathbf{P}^{+} \right) \\ \dot \rho_{11} &=& \gamma_{12} \rho_{22} - \gamma_{01} \rho_{11} + \frac{1}{i\hbar}\mathbf{E}^+ \cdot \left((\mathbf{P}^{+})^* - \mathbf{P}^{+} \right) \\ \dot \rho_{00} &=& \gamma_{01} \rho_{11} - \mathcal{P} \left( \rho_{00} - \rho_{33} \right). \label{reeqn}\end{aligned}$$ The RWA has already been made, and we have assumed a lasing structure with one or two directions of translational symmetry, so that TM and TE polarizations are conserved and Maxwell’s equations reduce to a scalar Helmholtz equation [@TE]. $\mathbf{E}^+$ and $\mathbf{P}^+$ are the positive frequency components of the scalar electric and polarization fields respectively. $\rho_{ii}$ is the population density of level $|i \rangle$, $\omega_a$ is the frequency of the gain center, $\gamma_\perp$ is the gain width (polarization dephasing rate), $g$ is the dipole matrix element, $\mathcal{P}$ is the pump rate, and $\gamma_{ij}$ is the decay rate from level $|j \rangle$ to level $|i \rangle$. The four levels are labelled from $0 - 3$ in order of increasing energy (Fig. \[schem\]). The polarization equation (\[poleqn\]) is obtained from the four-level density matrix equation of motion, assuming that only the level $2 \to 1$ transition will be inverted and lase. 
Often in FDTD calculations a real classical oscillating dipole equation is used to describe the polarization [@pcsel]; in Appendix A, we show that this yields essentially the same results, with an appropriate identification of parameters. In the rate equations (\[rbeqn\])-(\[reeqn\]), the pump is coherently acting between levels $|0\rangle$ and $|3\rangle$. ![Schematic of a four-level gain medium. \[schem\]](nlv_schematic.eps){width="70.00000%"} The polarization equation (\[poleqn\]) incorporates the inversion $D(x,t) \equiv \rho_{22} (x,t) - \rho_{11} (x,t)$, similar to the polarization equation for a two-level laser. By assuming that the non-lasing populations are stationary, $\dot\rho_{00} = \dot\rho_{33} = 0$, we can show that $D(x,t)$ obeys $$\dot D = -\gamma_{\parallel}'(D - D_0') - \frac{2}{i\hbar}\mathbf{E}^+ \cdot \left((\mathbf{P}^{+})^* - \mathbf{P}^{+} \right), \label{efinv}$$ which is precisely the form of the inversion equation for the two-level medium [@haken; @fu]. The parameters $D_0'$ and $\gamma_\parallel'$ serve as an effective equilibrium inversion and inversion relaxation rate respectively, and are given by [@li_thesis]: $$\begin{aligned} \gamma_\parallel' &=& 2 \gamma_{12} \left( 1 + \frac{S}{2 + \frac{\gamma_{01}}{ \mathcal{P}} + 2 \frac{\gamma_{01}}{\gamma_{23}}}\right) \label{eq8} \\ D_0' &=& \frac{S \mathcal{P} \, n}{\gamma_{01} + \left(S + 2 + 2\frac{\gamma_{01}} {\gamma_{23}} \right) \mathcal{P}}, \label{eq9}\end{aligned}$$ where $S = (\gamma_{01} - \gamma_{12})/\gamma_{12}$ and $n = \sum_i \rho_{ii}$ is the total density of gain atoms. Eq. (\[eq9\]) for the inversion in the absence of laser emission (which here acts as the effective pump parameter) has been discussed by Siegman [@siegman], while Eq. (\[eq8\]) for the effective relaxation rate has been derived for a special case by Khanin [@khanin]. 
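The effective parameters of Eqs. (\[eq8\])-(\[eq9\]) are straightforward to evaluate numerically. The following sketch (ours, not part of the original calculation; NumPy assumed, rate values illustrative) implements them and checks the expected limits in the regime $\gamma_{23} \sim \gamma_{01} \gg \gamma_{12} \gg \mathcal{P}$, where $\gamma_\parallel' \approx 2\gamma_{12}$ and $D_0' \approx (\mathcal{P}/\gamma_{12})\,n$:

```python
import numpy as np

def effective_two_level(g01, g12, g23, pump, n=1.0):
    """Effective inversion relaxation rate and equilibrium inversion for the
    coherently pumped four-level medium, Eqs. (8)-(9) of the text."""
    S = (g01 - g12) / g12
    gamma_par = 2.0 * g12 * (1.0 + S / (2.0 + g01 / pump + 2.0 * g01 / g23))
    D0 = S * pump * n / (g01 + (S + 2.0 + 2.0 * g01 / g23) * pump)
    return gamma_par, D0

# Illustrative dimensionless rates (cf. Appendix D): fast decays into and out
# of the lasing levels, slow decay between them, weak pump.
g01 = g23 = 0.8
g12 = 5e-4
pump = 1e-6
gamma_par, D0 = effective_two_level(g01, g12, g23, pump)
# In this regime gamma_par is close to 2*g12 and D0 close to (pump/g12)*n.
```

For an incoherent pump, per the discussion that follows, one would simply drop the factor of $2$ in front of each $\gamma_{01}/\gamma_{23}$ ratio.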
These expressions have not been used previously to solve the four-level lasing equations in terms of the two-level solutions, as we do here. If an incoherent pump is used instead, the $ \mathcal{P} (\rho_{00} - \rho_{33})$ terms in (\[rbeqn\]) and (\[reeqn\]) are replaced by $\mathcal{P}\rho_{00}$; the effect of this is the removal of the factor of $2$ preceding the ratio $\gamma_{01}/\gamma_{23}$ in the denominators of (\[eq8\]) and (\[eq9\]). For typical laser systems, each denominator is dominated by the $\gamma_{01}/\mathcal{P}$ term, so the difference between coherently and incoherently pumping the system is negligible [@caveat]. Given the parameters $\{\gamma_{01},\gamma_{12},\gamma_{23}, \mathcal{P}\}$ describing the four-level medium of Fig. \[schem\], we can calculate the effective pump and relaxation rates, and feed those (together with the cavity dielectric function, lasing transition frequency $\omega_a$, and gain linewidth $\gamma_{\perp}$) into the SALT algorithm. That will yield the steady-state lasing properties of this four-level laser.

Arbitrary number of levels
--------------------------

A general $N$-level laser can be treated via an analogous procedure. Suppose we have a gain medium with an arbitrary number of levels, $N$. We assume that there is only a single lasing transition, between two levels which we denote by $|u\rangle$ and $|l\rangle$. The population of each of the $N-2$ non-lasing levels obeys $$\dot \rho_{ii} = \sum_{j} \gamma_{ij}\rho_{jj} - \sum_{j} \gamma_{ji} \rho_{ii} + \gamma_{iu} \rho_{uu} + \gamma_{il} \rho_{ll} - (\gamma_{ui}+\gamma_{li})\rho_{ii},$$ where the sums are taken over all of the non-lasing states, and $\gamma_{ij}$ is the rate at which atoms transition from state $|j \rangle$ to state $|i \rangle$; depending on the relative energies of the states, this is either a decay rate or a pump rate. In this way, we can incorporate decay and pump processes between any levels.
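Because $\gamma_{ij}$ is just a transition-rate matrix, the zero-field equilibrium inversion of any level scheme can also be obtained directly by solving for the stationary population vector of the corresponding master equation. The sketch below is our own numerical cross-check, not a method from the text (NumPy assumed; the rate values are the illustrative four-level ones of Appendix D). It represents the coherent pump between $|0\rangle$ and $|3\rangle$ as a pair of equal rates $\mathcal{P}$, and reproduces the closed form of Eq. (\[eq9\]):

```python
import numpy as np

def stationary_inversion(G, upper, lower, n=1.0):
    """Zero-field steady-state inversion rho_uu - rho_ll for an N-level
    gain medium. G[i, j] is the transition rate from level j to level i."""
    G = np.asarray(G, dtype=float)
    N = G.shape[0]
    A = G - np.diag(G.sum(axis=0))   # master equation: rho_dot = A @ rho
    A[-1, :] = 1.0                   # replace one redundant row by sum(rho) = n
    b = np.zeros(N)
    b[-1] = n
    rho = np.linalg.solve(A, b)
    return rho[upper] - rho[lower]

# Four-level scheme of Fig. [schem]: decays 3->2->1->0, coherent pump
# modeled as equal rates P between levels 0 and 3 (illustrative values).
g01, g12, g23, P = 0.8, 5e-4, 0.8, 1e-5
G = np.zeros((4, 4))
G[0, 1], G[1, 2], G[2, 3] = g01, g12, g23
G[3, 0] = G[0, 3] = P
D0_numeric = stationary_inversion(G, upper=2, lower=1)
```

The same routine applies unchanged to any other level scheme, e.g. the six-level medium used in the simulations below.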
We again assume that $\dot{\rho}_{ii} = 0$ for all non-lasing levels. Then $$\sum_j \left[(s_i + \gamma_{ui} + \gamma_{li}) \delta_{ij} - \gamma_{ij}\right] \rho_{jj} = \gamma_{iu} \rho_{uu} + \gamma_{il}\rho_{ll}, \label{rhoii matrix}$$ where $s_i = \Sigma_j \gamma_{ji}$ and $\delta_{ij}$ is the Kronecker delta. The term in brackets on the left hand side corresponds to an $(N-2)\times(N-2)$ matrix, which we denote as $R$. Upon inverting Eq. (\[rhoii matrix\]) and substituting it into the equations of motion for the lasing levels, we obtain $$\gamma_\parallel' = \frac{B_l T_u - B_u T_l}{T_u + T_l}, \qquad\; D_0' = \frac{B_u + B_l}{T_u + T_l} \; \frac{n}{\gamma_\parallel'},$$ where $$\begin{aligned} T_{u/l} &=& 1 + \sum_{ij} [R^{-1}]_{ij} \, \gamma_{j,u/l} \\ B_u &=& - s_u + \sum_{ij} (\gamma_{ui} - \gamma_{li}) [R^{-1}]_{ij} \gamma_{ju}, \\ B_l &=& s_l + \sum_{ij} (\gamma_{ui} - \gamma_{li}) [R^{-1}]_{ij} \gamma_{jl}.\end{aligned}$$ The details of this calculation are given in Appendix B. Physical Limits of Interest =========================== Returning to the typical four-level case, we take note of two important physical regimes. The first, the linear regime, is $\gamma_{23} \sim \gamma_{01} \gg \gamma_{12} \gg \mathcal{P}$, for which one recovers the expected behavior that the equilibrium inversion increases linearly with the pump and that $\gamma_\parallel'$ is a constant: $$\begin{aligned} \gamma_\parallel' &\approx& 2 \gamma_{12}, \label{l1} \\ D_0' &\approx& \frac{\mathcal{P}}{\gamma_{12}}n \label{l2}.\end{aligned}$$ In this case, varying the equilibrium inversion and the pump strength are essentially equivalent. The second regime of interest, the non-linear regime, is $\gamma_{23} \sim \gamma_{01} \gg \gamma_{12} \sim \mathcal{P}$, i.e. when the slow decay rate between the lasing levels is on the same order as the pump rate. 
In this regime, $\gamma_\parallel'$ increases with increasing pump and $D_0'$ saturates with increasing pump: $$\begin{aligned} \gamma_\parallel' &\approx& 2 \left(\gamma_{12} + \mathcal{P} \right), \label{gp1} \\ D_0' &\approx& \frac{1}{1+ \frac{\mathcal{P}}{\gamma_{12}}} \left( \frac{\mathcal{P}} {\gamma_{12}} \right) n. \label{gp2}\end{aligned}$$ This regime is also interesting from the viewpoint of SALT. As $\gamma_\parallel'$ increases with $\mathcal{P}$, a laser could satisfy the inequality $\gamma_\parallel' \ll \gamma_\perp$ near threshold, leading to stationary inversion and an accurate solution via SALT, but fail to satisfy the inequality as the pump becomes stronger, leading to a decrease in the accuracy of SALT. For a system with an arbitrary number of levels, the first regime always occurs at sufficiently small pump values; the second regime is obtainable if electrons in the upper lasing level are relatively long-lived compared to electrons in other levels. Brief Summary of SALT ===================== For completeness we briefly outline SALT. The $E^+$ and $P^+$ fields are assumed to obey a multi-mode ansatz $$\begin{aligned} \begin{aligned} E^+(\vec{r},t) &= \sum_{\mu=1}^M \Psi_\mu(\vec{r})\,e^{-i\omega_\mu t}, \\ P^+(\vec{r},t) &= \sum_{\mu=1}^M p_\mu(\vec{r})\,e^{-i\omega_\mu t}, \label{mode ansatz} \end{aligned}\end{aligned}$$ where the indices $\mu = 1, 2, \cdots, M$ label the different lasing modes, and the field and polarization are now explicitly scalar quantities. The total number of modes, $M$, is not given, but increases in unit steps from zero as we increase the pump strength $D_0$. The values of $D_0$ at which each step occurs are the (interacting) modal thresholds, to be determined self-consistently from the theory. The real numbers $\omega_\mu$ are the lasing frequencies of the modes (henceforth $c=1$), which will also be determined self-consistently. 
We insert the ansatz (\[mode ansatz\]) into the two-level laser equations, and employ the stationary inversion approximation (SIA) $\dot{D} = 0$. The result is a set of coupled nonlinear differential equations, which are the fundamental equations of SALT [@spasalt]: $$\begin{aligned} \left[\nabla^2 + \left(\epsilon_c(\vec{r}) + \frac{{\gamma_\perp}D(\vec{r})}{k_\mu - k_a + i\gamma_\perp} \right)k_\mu^2\right] \Psi_\mu(\vec{r}) = 0, \label{TSG1} \\ D(\vec{r}) = D_0(\vec{r}) \, \left[1 +\sum_{\nu=1}^M \Gamma_\nu |\Psi_\nu(\vec{r})|^2\right]^{-1}. \label{TSG2}\end{aligned}$$ $\Psi$ and $D$ are now dimensionless, measured in their natural units $E_c = \hbar\sqrt{{\gamma_\parallel}{\gamma_\perp}}/(2g)$ and $D_c = \hbar{\gamma_\perp}/(4\pi g^2)$, and $\Gamma_\nu \equiv {\gamma_\perp}^2/({\gamma_\perp}^2 + (\omega_\nu-\omega_a)^2)$ is the Lorentzian gain curve evaluated at frequency $\omega_\nu$. Note that these equations are time-independent; Eq. (\[TSG1\]) is a stationary wave equation for the electric field mode $\Psi_\mu$, with an effective dielectric function consisting of both the “passive” contribution $\epsilon_c(\vec{r})$ and an “active” contribution from the gain medium. The latter is frequency-dependent, and has both a real part and a negative (amplifying) imaginary part. It also includes infinite-order nonlinear “hole-burning” modal interactions, seen in the $|\Psi_\nu|^2$ dependence of (\[TSG2\]). In addition, we make the key requirement that $\Psi_\mu$ must be purely out-going outside the cavity; it is this condition that makes the problem non-Hermitian. It is worth noting that the SIA is not needed until at least two modes are above threshold, so (\[TSG2\]) is exact for single-mode lasing up to and including the second threshold (aside from the well-obeyed RWA). These equations are solved efficiently by projecting them onto a complete biorthogonal set of purely outgoing states with external wavevectors, $k_\mu$, equal to the lasing frequencies. 
We refer to these states as the [*threshold constant flux*]{} (TCF) states, because one member of the basis set is always equal to the (non-interacting) threshold lasing mode, leading to very rapid convergence of the basis expansion above threshold [@spasalt]. The major computational effort in solving the SALT equations by this approach is in calculating the (linear) TCF states. The SALT solutions are obtained for successive values of the pump increasing from the first threshold; at each step, the coefficients of the lasing modes in the TCF basis are obtained using a standard non-linear solver, with the coefficients from the previous step as an initial guess (which is never far from the correct solution). Unlike FDTD, the current version of SALT cannot directly solve at a fixed pump value well above threshold. However, even with this limitation, SALT is much more efficient, and provides substantial physical insight, as we will discuss below.

Numerical comparison
====================

To perform a well-controlled comparison between SALT and the four-level laser equations (\[waveqn\])-(\[reeqn\]), as well as $N$-level generalizations, we studied 1D microcavity lasers for which the FDTD calculations are tractable and fast enough to generate extensive steady-state data. We first consider the same simple edge-emitting uniform-index laser treated in Refs. [@salt1; @salt2; @tandy], with a perfect mirror at the origin and an active region of length $L$ terminating abruptly in air (see schematic, Fig. \[fig1\]). The simulations were carried out using standard FDTD for the electromagnetic field, and Crank-Nicolson discretization for the polarization and rate equations based on the method of Bidégaray [@bid] (in which the polarization and inversion are spatially aligned with the electric field but updated at the same time steps as the magnetic field).
The reported modal intensities are calculated by Fourier transforming the electric field at the cavity boundary after the simulation has reached steady state (see §\[efficiency section\] for a discussion of the steady-state criterion). The lasing transition frequency $\omega_a$ is chosen so that $n_0 k_a L = 60$, corresponding to roughly ten wavelengths of radiation within the cavity. Physical quantities are reported in terms of their natural scales, $D_c = \hbar \gamma_\perp / (4 \pi g^2)$ and $E_c = (\hbar/2g)\sqrt{\gamma_\perp \gamma_\parallel'}$. In addition, we take $c = \hbar = 1$ and measure rates in dimensionless units, i.e. $\gamma_{\textrm{meas}} = \gamma_{\textrm{real}} L / c$. We note that the parameters chosen accurately reflect those of real microcavities at optical frequencies [@pcsel]; the complete set of simulation parameters is given in Appendix D. ![Modal intensities as functions of the normalized equilibrium inversion $D_0' / D_{c}$ (effective pump) in a 1D microcavity edge-emitting laser (schematic inset). The cavity is bounded on one side by a perfect mirror and on the other side by air, and has uniform refractive index $n=1.5$. Solid lines are results obtained by the time-independent SALT method; open circles are results of FDTD simulations with a coherently pumped four-level medium (Fig. \[schem\]); solid triangles are results of FDTD simulations with a coherently pumped six-level medium whose lasing transition is between $|3 \rangle$ and $|1 \rangle$. Simulation parameters are given in Appendix D. Both the four-level and six-level media are chosen to satisfy the SIA. The dephasing rate is $\gamma_\perp = 4.0$. The four-level system is in the linear regime described by Eq. (\[l1\])-(\[l2\]). The six-level system is calculated using the formulas of Appendix B, but is in the non-linear regime described by Eq. (\[gp1\])-(\[gp2\]). The spectra at $D_0 / D_c = 0.488$, and the gain curve, are shown in the upper left inset.
\[fig1\]](nlv_n1p5cav.eps){width="70.00000%"} As shown in Fig. \[fig1\], we find close agreement between SALT calculations and FDTD simulations. At a representative pump strength $D_0' = 0.488 D_c$, the mode intensities produced by SALT differ from those of the four- and six-level FDTD simulations by $\sim 1 \%$, while the frequencies differ by $< 0.1 \%$. The difference in mode frequencies between SALT and FDTD also exists at the first lasing threshold, for which an analytical value can be calculated. There, we find that the FDTD simulation has a $0.2 \%$ error in the first mode frequency, while SALT has a $0.08 \%$ error; this error arises from the spatial discretization of the cavity employed in both approaches [@1d]. It is worth emphasizing that SALT treats the non-linearity to infinite order; in the earlier work on the Maxwell-Bloch model [@tandy] it was shown that for this same cavity the common cubic approximation for the non-linearity fails both quantitatively and qualitatively. These results demonstrate that so long as the system satisfies the SIA, the mapping from systems with an arbitrary number of levels to an effective two-level system is nearly exact, and SALT is able to very accurately determine the steady-state properties of the cavity. If two cavities, each with an arbitrary number of levels, have the same effective parameters $D_0'$ and $\gamma_\parallel'$, and otherwise have the same polarization relaxation rate and atomic transition frequency, the cavities are equivalent from the electromagnetic point of view, and will have identical lasing properties. The six-level simulations shown in Fig. \[fig1\] occupy the non-linear parameter regime of Eq. (\[gp1\])-(\[gp2\]), i.e. $\gamma_\parallel'$ is a linear function and $D_0'$ a non-linear function of $\mathcal{P}$. However, the unscaled modal intensity leaving the cavity is still, to leading order, linear in $\mathcal{P}$.
This can be seen by rearranging (\[TSG2\]), inserting the expressions for $\gamma_\parallel'$ and $D_0'$, and noting that at the end of the cavity the inversion is roughly independent of the pump strength. This result is discussed further in Appendix C. ![Breakdown of the agreement between SALT and FDTD when the SIA is not valid, shown in two different ways. Modal intensities as a function of the normalized equilibrium inversion $D_0' / D_{c}$ (effective pump) are shown for a 1D microcavity edge-emitting laser with $\gamma_\perp = 4.0$ and $n=1.5$. Solid lines again represent results obtained from SALT, while open circles represent FDTD simulations of a simple four-level system with $\gamma_\parallel' = 0.1$. Triangles represent FDTD simulations of a six-level system in the non-linear parameter regime, in which $\gamma_\parallel' \sim 0.001$ for $D_0' \le 0.1$, thus satisfying the SIA, but $\gamma_\parallel' \sim 0.01$ for $D_0' \ge 0.45$, consequently no longer satisfying the SIA. \[fig2\]](nlv_gpar.eps){width="70.00000%"} The mapping between the $N$-level laser and two-level SALT breaks down at large pump strengths, when the condition $\gamma_\parallel' \ll \gamma_\perp$ is violated due to the increase of $\gamma_\parallel'$ with $\mathcal{P}$. In Ref. [@tandy], following an argument by Haken [@fu], it was demonstrated that violating this condition causes the SIA for the two-level model to break down. This effect can be seen in the four-level laser data in Fig. \[fig2\], where $\gamma_\parallel' = 0.1$, $\gamma_\perp = 4.0$, and accuracy is already lost for the third lasing mode. For the six-level data of Fig. \[fig2\], which is in the non-linear parameter regime, the SIA is satisfied and SALT agrees with the FDTD simulations for small values of the normalized equilibrium inversion; for larger values of $D_0'$, the SALT and FDTD results begin to diverge.
Finally, to demonstrate that the mapping to an effective two-level model works equally well for a complex laser cavity, Fig. \[fig3\] shows a comparison between SALT and FDTD simulations for a four-level gain medium in a 1D random dielectric structure. A number of studies have been published on random lasers using such simulations [@1drl1; @1drl2]; SALT provides a much more efficient method for such studies, which often require generating a statistical ensemble of lasers. Here, the passive cavity dielectric function contains $\sim 31$ layers, alternating randomly between regions with refractive indices $n_1 = 1.25$ and $n_2 = 1$. Each random layer was generated according to the formula $d_{1,2} = \langle d_{1,2} \rangle (1 + \eta \zeta)$ where $\langle d_1 \rangle = (1/3)(L/30)$ and $\langle d_2 \rangle = (2/3)(L/30) $ are the average thicknesses of the layers, $\eta = 0.9$ represents the degree of randomness of the cavity, and $\zeta \in [ -1, 1 ]$ is a randomly generated number. The gain medium was added uniformly to the entire cavity, and the coherent pump was likewise uniform. The transition frequency was chosen such that $n_1 k_a L = 120$, corresponding to roughly $20$ wavelengths inside of the cavity. We find only small discrepancies between the SALT and FDTD results, with $\sim 1.1 \%$ difference in the modal intensities. These differences did not vary significantly between different realizations of the random laser. ![SALT and FDTD results for a 1D random laser. Modal intensities are plotted against the normalized equilibrium inversion $D_0' / D_{c}$ (effective pump). Solid lines represent SALT results, and circles represent FDTD simulations for a four-level system with $\gamma_\parallel' = 0.001$. The refractive index distribution of the edge emitting random laser is described in the text. The gain medium has $\gamma_\perp = 4.0$ and is in the regime described by Eq. (\[l1\])-(\[l2\]). 
Left inset: log-log plot of the indicated region where three modes turn on in close proximity. Right inset: schematic of the cavity structure. \[fig3\]](nlv_rand.eps){width="70.00000%"}

Computational efficiency of SALT for N-level systems {#efficiency section}
====================================================

In this section we present a set of benchmarks comparing the computational efficiency of SALT to FDTD. SALT calculations enjoy three main advantages over FDTD simulations of the semiclassical laser equations. First and foremost, SALT directly finds the steady-state solutions, so no time integration is involved, which substantially decreases computational effort. Second, SALT unambiguously determines how many modes are lasing at a given pump, whereas it can be difficult to determine, especially for multimode lasing, when an FDTD simulation has reached the steady state with all modes that will lase “on". Third, within SALT, with minimal additional computational effort, it is possible to monitor modes which are [*below*]{} threshold via a modified threshold matrix [@saltsci], and hence to ascertain if more modes are likely to turn on in some interval of pump. It is important to note that the implementation of SALT used in this study has also not yet been fully optimized. For instance, it requires calculating the entire lasing intensity and frequency spectrum starting from the first lasing threshold. This is not necessary, however: SALT could instead be implemented to take an initial guess of the number of modes and their relative intensities and then allow the algorithm to flow to the correct solution, as the SALT algorithm has been demonstrated to be rather robust [@li_thesis]. Thus, while one might assume from Fig. \[rtfig\] that there would be a crossover pump value, high above threshold, where it would be more efficient to calculate the steady-state solutions using FDTD, this is not necessarily the case.
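The robustness to initial guesses can be illustrated with a toy gain-clamping model (our sketch, not the actual SALT equations; all numbers are hypothetical): in a spatially uniform caricature of Eq. (\[TSG2\]), each lasing mode's saturated gain is clamped to its loss, giving coupled equations for the modal intensities that a standard non-linear solver handles directly at a pump value well above threshold, starting from a rough guess.

```python
import numpy as np
from scipy.optimize import fsolve

# Toy gain-clamping conditions: mode mu lases when D0*Gamma[mu] equals its
# saturated loss 1 + (A @ I)[mu].  Gamma: Lorentzian gain factors of two
# hypothetical modes; A: cross-saturation (hole-burning) matrix.
Gamma = np.array([1.0, 0.9])
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])

def clamped_gain(I, D0):
    return D0 * Gamma - (1.0 + A @ I)

D0 = 2.0                                  # pump well above both thresholds
I = fsolve(clamped_gain, x0=[0.5, 0.5], args=(D0,))
# The solver lands on the multimode steady state (both intensities positive)
# directly, without ramping the pump up from the first threshold.
```

In the real SALT equations the modal interactions are spatially resolved and the number of lasing modes must be determined self-consistently, but the solver strategy is analogous.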
SALT does have one disadvantage relative to FDTD, assuming the cavity is not in the chaotic regime. The convergence time in FDTD is determined by the longest time scale in the problem, which is the greater of the beating period between consecutive modes and the relaxation oscillation time. These time scales are relatively independent of the number of modes lasing in the cavity, so the efficiency is largely independent of the pump in the multimode regime. This is not the case for SALT, as the computational time increases as $M^2$, where $M$ is the number of lasing modes. However, as we see in Fig. \[rtfig\], even a non-optimal implementation of SALT is substantially more efficient than FDTD, even when calculating the steady state at a single pump value. ![Comparison of SALT and FDTD run-times. Modal intensities are shown as a function of the run-time for SALT (squares) and four-level FDTD simulations (circles), using the parameters of Fig. \[fig1\]. FDTD simulations that have not begun to lase are marked as crosses. Plot (a) shows data for $D_0/D_c = 0.071$, just above the first lasing threshold. SALT determined the steady-state single-mode intensity in under three minutes, while FDTD required $\sim 5000$ minutes to reach steady state. Plot (b) shows data for $D_0/D_c = 0.486$, well above the third lasing threshold. SALT calculated all data up to and including this pump value in under 90 minutes, whereas FDTD required $> 500$ minutes for the first two modes to reach steady state, with the third mode intensity (green circles) still fluctuating after $5000$ minutes (not shown). \[rtfig\]](nlv_rt.eps){width="100.00000%"} Calculating full modal intensity/frequency curves as a function of the pump strength, such as in Fig. \[fig1\], is generally much more efficient using SALT. For example, in order to generate the curves seen in Fig. \[fig1\], SALT ran for a little under 2 processor hours.
Generating all of the FDTD data for the four-level simulations took 267 processor days. If one is attempting to explore a large parameter space of designs or system parameters, SALT may make studies feasible which are simply impractical using FDTD, particularly in more realistic 2D and 3D structures. As mentioned before, the bulk of the computational effort required for the SALT algorithm, especially in higher dimensions, is in solving for the TCF states. While the difficulty of solving for the TCF states does scale with the dimensionality of the system, one need only solve the associated generalized eigenvalue problem for $\sim 100$ TCF states (even in higher dimensions) to have a sufficiently complete basis for all pump values, whereas in FDTD one needs to solve an $O(n^d)$ problem at each of many thousands, if not millions, of time steps. Furthermore, switching to a finite-element method in higher dimensions is not only possible, but likely to result in increased computational efficiency. Once one has a TCF basis library, the cost of using the SALT algorithm to iterate above threshold does not scale directly with the dimensionality of the system. Finally, while current implementations of SALT assume the electric field is perpendicular to the direction of wave propagation, the vectorial generalization of SALT is under investigation.

Conclusion
==========

We have found that using stationarity conditions on the non-lasing level populations, the rate equations for an $N$-level laser can be mapped to an effective two-level model, for which the steady-state multimode lasing properties are efficiently solvable using Steady-state Ab initio Laser Theory (SALT). Using this mapping, we found excellent agreement between SALT and FDTD simulations for the modal frequencies, thresholds, and above-threshold intensities, in the expected domain of validity.
SALT is typically several orders of magnitude more efficient computationally than time-domain solution of the laser-rate equations, assuming only steady-state properties are needed.

Acknowledgments {#acknowledgments .unnumbered}
===============

We thank Robert J. Tandy, Hui Cao, and Peter Bermel for helpful discussions. This work was partially supported by NSF grant No. DMR-0908437.

Classical polarization in the laser-rate equations
==================================================

Throughout this paper, we have used the density matrix equations of motion for the polarization field. However, much of the literature uses the classical oscillating dipole equation, in which the gain atoms are assumed to be dipoles undergoing harmonic oscillations and the quantum density matrix is neglected. This appendix briefly demonstrates that the two models are equivalent, and derives the relevant parameter redefinitions. A more thorough discussion has been given by Boyd [@boyd]. The polarization equation used here, $$\dot{P}^{+} = -\left(i\omega_a + \gamma_{\perp}\right)P^{+} + \frac{g^2}{i \hbar}E^+\left(\rho_{uu} - \rho_{ll}\right)$$ is derived from the density matrix equation $$\dot \rho_{ul} = -\left( i \omega_a + \gamma_\perp \right) \rho_{ul} - \frac{i}{\hbar} g E (\rho_{uu} - \rho_{ll}) \label{dme}$$ where $|u \rangle$ and $|l \rangle$ are the upper and lower lasing states, $\rho_{ij}$ is the density matrix element $ij$, and $g_{ul} = g_{lu} = g$ is the coupling constant of the lasing states to the electric field, which allows for the definition $P^+ = g \rho_{ul}$. Alternatively, following the notation of Boyd [@boyd], we could consider the equation of motion for $M = g\rho_{ul} + \textrm{c.c.} = P^+ + P^-$, which is the expectation value of the dipole moment induced by the applied field, i.e. the classical oscillating dipole field.
Thus $$\dot M = g \dot \rho_{ul} + \textrm{c.c.} = -\left( i \omega_a + \gamma_\perp \right) g \rho_{ul} - \frac{i}{\hbar} g^2 E D + \textrm{c.c.},$$ using (\[dme\]), and $$\ddot M = \left(-\omega_a^2 + 2i\omega_a \gamma_\perp + \gamma_\perp^2 \right) g \rho_{ul} - \frac{\omega_a}{\hbar}g^2 E D + \textrm{c.c.}.$$ This can be rewritten as $$\ddot M + 2 \gamma_\perp \dot M + \omega_a^2 M = -\gamma_\perp^2 M - \frac{2\omega_a g^2}{\hbar} ED,$$ where $D = \rho_{uu} - \rho_{ll}$ is the inversion density of the lasing states. For $\omega_a^2 \gg \gamma_\perp^2$, we can discard the term $\gamma_\perp^2 M$, resulting in the traditional form of the classical oscillating dipole polarization field equation. This gives the classical coupling constant $\sigma = 2 \omega_a g^2 / \hbar $ [@siegman]. A similar analysis can be used to show that the inversion equation can be rewritten as $$\dot D = -\gamma_\parallel(D - D_0) + \frac{2 E}{\hbar \omega_a} \dot M,$$ which demonstrates how the change in the inversion is dependent upon the classical polarization field. Therefore, so long as $\omega_a^2 \gg \gamma_\perp^2$, a condition that is usually satisfied, these two formulations of the polarization dynamics are equivalent.

Effective two-level parameters from N-level rate equations
==========================================================

This appendix derives the effective equilibrium inversion and relaxation rate for a gain medium with an arbitrary number of levels. We allow decays between any two levels, even if they are not adjacent. We assume there is a single lasing transition, between levels $|u\rangle$ and $|l\rangle$, which need not be adjacent. The rate equation for an arbitrary *non-lasing* level in the system is $$\dot \rho_{ii} = \sum_{j} \gamma_{ij}\rho_{jj} - \sum_{j} \gamma_{ji} \rho_{ii} + \gamma_{iu} \rho_{uu} + \gamma_{il} \rho_{ll} - (\gamma_{ui}+\gamma_{li})\rho_{ii}, \label{non lasing rhodot}$$ where the summations are taken over all non-lasing levels.
Here we do not distinguish between decay rates and pumping rates; $\gamma_{ij}$ is simply interpreted as the rate at which level $|j \rangle$ transitions into level $|i \rangle$, regardless of the energies of those states. If the populations of all the non-lasing levels are stationary, i.e. $\dot \rho_{ii} = 0$, then we can rewrite (\[non lasing rhodot\]) as $$\begin{aligned} \sum_j R_{ij} \rho_{jj} &=& \gamma_{iu} \rho_{uu} + \gamma_{il}\rho_{ll}, \label{genrate} \\ R_{ij} &\equiv& (s_i + \gamma_{ui} + \gamma_{li}) \delta_{ij} - \gamma_{ij}.\end{aligned}$$ Here, $s_i \equiv \Sigma_j \gamma_{ji}$ and $\delta_{ij}$ is the Kronecker delta. Inverting (\[genrate\]) gives $$\rho_{ii} = \sum_j [R^{-1}]_{ij} (\gamma_{ju} \rho_{uu} + \gamma_{jl} \rho_{ll}). \label{rhoii}$$ Hence, we can express the total number density of gain atoms as $$\begin{aligned} n &=& \sum_i \rho_{ii} + \rho_{uu} + \rho_{ll} \\ &=& T_u \rho_{uu} + T_l \rho_{ll}\end{aligned}$$ where $$\begin{aligned} T_u &=& 1 + \sum_{ij} [R^{-1}]_{ij} \gamma_{ju}, \\ T_l &=& 1 + \sum_{ij} [R^{-1}]_{ij} \gamma_{jl}.\end{aligned}$$ Noting that $D = \rho_{uu} - \rho_{ll}$, we can write the populations of the lasing states as $$\begin{aligned} \rho_{uu} &=& \frac{n + T_l D}{T_l + T_u}, \label{rhouu}\\ \rho_{ll} &=& \frac{n - T_u D}{T_l + T_u}. \label{rholl}\end{aligned}$$ From the equations of motion for the lasing levels, we have the inversion equation $$\dot D = \dot \rho_{uu} - \dot \rho_{ll} = \sum_i(\gamma_{ui} - \gamma_{li})\rho_{ii} - s_u \rho_{uu} + s_l \rho_{ll} - \frac{2}{i\hbar}\mathbf{E}^+ \cdot \left((\mathbf{P}^{+})^* - \mathbf{P}^{+} \right),$$ where $s_u = \Sigma_j \gamma_{ju} + 2\gamma_{lu}$ and $s_l = \Sigma_j \gamma_{jl} + 2\gamma_{ul}$, with the sums again over the non-lasing levels; the decay rate $\gamma_{lu}$ between the lasing levels enters twice because a $|u \rangle \to |l \rangle$ transition simultaneously depletes the upper level and fills the lower one.
Inserting (\[rhoii\]) into this equation gives $$\dot D = B_u \rho_{uu} + B_l \rho_{ll} - \frac{2}{i\hbar}\mathbf{E}^+ \cdot \left((\mathbf{P}^{+})^* - \mathbf{P}^{+} \right),$$ where $$\begin{aligned} B_u &\equiv& - s_u + \sum_{ij} (\gamma_{ui} - \gamma_{li}) [R^{-1}]_{ij} \gamma_{ju}, \\ B_l &\equiv& s_l + \sum_{ij} (\gamma_{ui} - \gamma_{li}) [R^{-1}]_{ij} \gamma_{jl}.\end{aligned}$$ Plugging in (\[rhouu\]) and (\[rholl\]) now yields the inversion equation in the desired form (\[efinv\]), with $$\begin{aligned} \gamma_\parallel' &=& \frac{B_l T_u - B_u T_l}{T_u + T_l}, \\ D_0' &=& \frac{B_u + B_l}{B_lT_u - B_uT_l} n.\end{aligned}$$

Inversion as a function of the pump
===================================

In this appendix we discuss the surprising result that the unscaled modal intensities of the six-level simulations of Fig. \[fig1\] are, as shown in Fig. \[intvpump\], approximately linear in the pump, even though these six-level simulations are in the non-linear parameter regime. In simple treatments of lasers [@siegman] the inversion is assumed to be clamped above the first lasing threshold. However, a more detailed analysis leads to the inclusion of spatial hole-burning effects, which force the inversion in the cavity to continue changing beyond the first lasing threshold in a non-linear manner. The result that the unscaled modal intensity is nevertheless linear in the pump rate can be understood from the observation that all of the lasing modes have a field maximum at the end of the cavity, $\vec r = L$, so that the inversion is effectively clamped at this point beyond the first lasing threshold, as can be seen in Fig. \[intvpump\](b).
To understand this formally, we begin by rewriting (\[TSG2\]); removing the scaling factors $E_c$ and $D_c$ gives $$\frac{2g^2}{\hbar^2} \sum_{\nu=1}^N \Gamma_\nu |\Psi_\nu(\vec r)|^2 = \gamma_\parallel' \left( \frac{D_0'}{D(\vec r)} -1 \right).$$ Substituting in (\[gp1\]) and (\[gp2\]), which are valid for this simulation, gives $$\frac{g^2}{\hbar^2}\sum_{\nu=1}^N \Gamma_\nu |\Psi_\nu(\vec r)|^2 = \mathcal{P} \left(\frac{N}{D(\vec r)} -1 \right) - \gamma_{12}.$$ The inversion $D$ is a function of both position and the pump. However, at the cavity edge $\vec{r}=L$, $D$ should be mostly independent of the pump, as at this location every mode is at its maximum intensity and the effect of spatial hole-burning is most pronounced. The FDTD simulation results, shown in Fig. \[intvpump\], demonstrate that $D$ indeed varies very weakly with $\mathcal{P}$ at the cavity edge. ![(a) Unscaled modal intensity of the six-level simulations from Fig. \[fig1\] as a function of the pump. A cross sectional area of $1 \textrm{m}^2$ is assumed to calculate the power. (b) Inversion as a function of the pump at the cavity boundary. Dashed lines in plots a and b correspond to the pump values shown in plot c. (c) Inversion as a function of position within the cavity for three different pump values: cyan corresponds to $\mathcal{P} = 3.75 \times 10^8 s^{-1}$, magenta to $\mathcal{P} = 1.65 \times 10^9 s^{-1}$, and orange to $\mathcal{P} = 4.85 \times 10^9 s^{-1}$, showing the evolution of the inversion within the cavity as a function of the pump strength. \[intvpump\]](nlv_f6.eps){width="100.00000%"} Simulation constants ==================== In this appendix we list the parameters used in each of the FDTD simulations described above. These constants are matrices, with $\gamma_{ij}$ denoting the decay rate from $|j \rangle$ to $|i \rangle$. These values are given in their dimensionless form, i.e. $\gamma_{\textrm{meas}} = \gamma_{\textrm{real}} L / c$.
Unlisted entries are zero. We also note that throughout this paper $|0 \rangle$ denotes the ground state, so these matrices are $0$ indexed. For the four-level simulations in Fig. \[fig1\], $$\gamma_{4\textrm{lv, fig \ref{fig1}}} = \left( \begin{array}{cccc} \cdot& 0.8 & \cdot& \cdot\\ \cdot& \cdot& 5 \times 10^{-4} & \cdot\\ \cdot& \cdot & \cdot& 0.8 \\ \cdot& \cdot& \cdot& \cdot \\ \end{array} \right).$$ The dipole matrix element is $g = 2.3 \cdot 10^{-12} \textrm{m}^{3/2}$, and the number of gain atoms is $n = 5 \cdot 10^{23} \textrm{m}^{-3}$. The pump $\mathcal{P}$ was varied between $3 \cdot 10^{-6}$ and $3 \cdot 10^{-5}$. Thus, for an optical wavelength of $\lambda = 628\textrm{nm}$, the requirement in Fig. \[fig1\] that $n_0kL = 60$ means that $L = 4 \mu \textrm{m}$. Using this length, the decay rates can be converted to their unit-full values as $\gamma_\perp = 3 \cdot 10^{14} \textrm{s}^{-1}$, $\gamma_{23} = \gamma_{01} = 6 \cdot 10^{13} \textrm{s}^{-1}$, $\gamma_{12} = 3.75 \cdot 10^{10} \textrm{s}^{-1}$, and the pump at threshold is $\mathcal{P} = 3 \cdot 10^8 \textrm{s}^{-1}$. Similarly, the dipole matrix element also acquires units of inverse time, and can be expressed as $g^2 / \hbar = 3.98 \cdot 10^{-9} \textrm{m}^3 / \textrm{s}$, which corresponds to a coupling constant in the classical oscillating dipole picture of $\sigma = 10^{-4} \textrm{C}^2/\textrm{kg}$. These constants can be seen to be similar to those used in other studies of optical microcavities [@york; @pcsel]. For the six-level simulations in Fig. 
\[fig1\], $$\gamma_{6\textrm{lv, fig \ref{fig1}}} = \left( \begin{array}{cccccc} \cdot& 0.8 & 10^{-5} & 10^{-5} & 10^{-5} & 10^{-5} \\ \cdot& \cdot& 0.8 & 10^{-5} & 10^{-5} & 10^{-4} \\ \cdot& \cdot& \cdot& 5 \times 10^{-5} & 10^{-5} & 10^{-5} \\ \cdot& \cdot& \cdot& \cdot& 0.8 & 10^{-5}\\ \cdot& \cdot& \cdot& \cdot& \cdot& 0.8\\ \cdot& \cdot& \cdot& \cdot& \cdot& \cdot\\ \end{array} \right).$$ Furthermore, $\gamma_{15} = 10^{-4}$, and the lasing transition is between levels $|3 \rangle$ and $|1 \rangle$ (where the ground state is again $|0 \rangle$ and the states are numbered in order of increasing energy). For the four-level simulations in Fig. \[fig2\], $$\gamma_{4\textrm{lv, fig \ref{fig2}}} = \left( \begin{array}{cccc} \cdot& 0.8 & \cdot&\cdot \\ \cdot& \cdot& 5 \times 10^{-2} & \cdot\\ \cdot& \cdot& \cdot& 0.8 \\ \cdot& \cdot& \cdot&\cdot \\ \end{array} \right).$$ For the six-level simulations in Fig. \[fig2\], $$\gamma_{6\textrm{lv, fig \ref{fig2}}} = \left( \begin{array}{cccccc} \cdot& 0.8 & 10^{-5} & 10^{-5} & 10^{-5} & 10^{-5} \\ \cdot&\cdot & 0.8 & 10^{-5} & 10^{-5} & 10^{-4} \\ \cdot&\cdot & \cdot& 5 \times 10^{-5} & 10^{-5} & 10^{-5} \\ \cdot&\cdot & \cdot&\cdot & 0.8 & 10^{-5}\\ \cdot&\cdot & \cdot&\cdot &\cdot & 0.8\\ \cdot&\cdot & \cdot&\cdot &\cdot &\cdot \\ \end{array} \right).$$ The four-level simulations of the random cavity in Fig. \[fig3\] used the same parameters as the four-level simulations in Fig. \[fig1\].
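Given these rate matrices, the reduction to the effective two-level parameters $\gamma_\parallel'$ and $D_0'$ described in the first appendix can be carried out numerically. The sketch below (not part of the original FDTD code) does this for the four-level matrix of Fig. \[fig1\]; the pump entry $\gamma_{30}=\mathcal{P}$ and the identification of the lasing levels as $u=2$, $l=1$ are our assumptions, consistent with a standard four-level scheme.

```python
import numpy as np

def effective_two_level(gamma, u, l, n=1.0):
    """Effective two-level parameters (gamma_parallel', D0') from a rate
    matrix gamma, where gamma[i, j] is the rate for |j> -> |i>."""
    N = gamma.shape[0]
    nl = [i for i in range(N) if i not in (u, l)]   # non-lasing levels
    out = gamma.sum(axis=0)                         # total decay rate of each level
    R = np.diag(out[nl]) - gamma[np.ix_(nl, nl)]    # R_ij of Eq. (genrate)
    Rinv = np.linalg.inv(R)
    Tu = 1.0 + Rinv.dot(gamma[nl, u]).sum()
    Tl = 1.0 + Rinv.dot(gamma[nl, l]).sum()
    w = gamma[u, nl] - gamma[l, nl]                 # (gamma_ui - gamma_li)
    Bu = -out[u] + w.dot(Rinv).dot(gamma[nl, u])
    Bl = out[l] + w.dot(Rinv).dot(gamma[nl, l])
    gpar = (Bl * Tu - Bu * Tl) / (Tu + Tl)          # gamma_parallel'
    D0 = (Bu + Bl) / (Bl * Tu - Bu * Tl) * n        # D_0'
    return gpar, D0

# dimensionless four-level rates listed above
g = np.zeros((4, 4))
g[0, 1], g[1, 2], g[2, 3] = 0.8, 5e-4, 0.8
g[3, 0] = 3e-6          # pump entry (assumed 0 -> 3), low end of the quoted range
gpar, D0 = effective_two_level(g, u=2, l=1)
print(gpar, D0)
```

Multiplying the returned dimensionless rates by $c/L$ restores physical units, per $\gamma_{\textrm{meas}} = \gamma_{\textrm{real}} L / c$.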
--- abstract: | We discuss non-perturbative corrections to the gauge kinetic functions in a four-dimensional $\mathcal{N} = 2$ gauge theory realized with a system of D7/D3-branes in a compactification of type I$^\prime$ theory on $\mathcal{T}_4/\mathbb{Z}_2\times \mathbb{Z}_2$. The non-perturbative contributions arise when D(–1) branes, corresponding to stringy instantons, are added to the system; such contributions can be explicitly evaluated using localization techniques and precisely match the results predicted by the heterotic/type I$^\prime$ duality. This agreement represents a very non-trivial test of the stringy multi-instanton calculus. address: | Dipartimento di Fisica Teorica, Università di Torino and I.N.F.N., sezione di Torino\ via P. Giuria 1, I-10125 Torino (Italy) author: - 'Marialuisa Frau [^1]' title: Stringy instantons and dualities --- Introduction ============ It has been recently found [@Blumenhagen:2009qh] that certain classes of D-brane instantons arising in intersecting brane models can generate effective interactions at energies that are not linked to the gauge theory scale, and for this reason they are usually called “stringy” or “exotic” instantons. This feature is very welcome in the search of semi-realistic string scenarios for the physics beyond the Standard Model. It is therefore of the greatest importance to devise techniques to determine quantitatively such exotic non-perturbative corrections through their explicit realization at the string level. In this context both the usual gauge instantons and the exotic ones can be obtained from Euclidean branes entirely wrapping some cycle of the internal space. Depending on whether this cycle coincides or not with the one wrapped by the space-filling D-branes on which the gauge theory is defined, the Euclidean branes correspond to gauge or exotic instantons, respectively. 
In the simplest cases, $4d$ gauge instantons can be realized with bound states of space-filling D3-branes and point-like D(-1)-branes [@Witten:1995gx]. In these systems the massless sector of open strings having at least one endpoint on the D(-1)’s is in one-to-one correspondence with the moduli of the gauge instanton solution. Actually, also the effective action on the moduli space, the rules of the instanton calculus and the profile of the classical solution can be explicitly obtained in this way [@Green:2000ke; @Billo:2002hm]. In the exotic cases, the gauge and instantonic branes intersect non-trivially in the internal space and thus the open strings stretching between them have extra “twisted” directions and some instanton moduli (specifically those related to sizes and gauge orientations) disappear from the spectrum. Their supersymmetric fermionic partners remain massless, and when integrated out, they can lead to the effective interactions we alluded to above. A very simple example of this phenomenon occurs in the D(-1)/D7 brane system, which exhibits the world-sheet features of exotic instantons since mixed open strings have eight twisted directions. By adding O7-planes, this system can be embedded in type I$^\prime$ string theory, a setup which possesses a computable perturbative heterotic dual. The non-perturbative contributions of D-instantons to the effective action on the D7-branes can be explicitly computed as integrals over the moduli space via localization techniques, in strict analogy with what is done for usual gauge instantons [@Nekrasov:2002qd]. One finds that all D-instanton numbers correct the quartic gauge couplings of the $8d$ gauge theory of the D7-branes[@Billo':2009gc; @Billo:2009di; @Fucito:2009rs], and this whole series of terms matches the perturbative result obtained in the dual heterotic string theory. 
In this contribution we briefly describe one example of exotic instanton calculus in a $4d$ setup that has been developed in [@Billo':2010bd]. We consider a perturbatively conformal ${{\mathcal{N}}}=2$ gauge theory that admits a brane realization in which exotic instantons generate a whole series of corrections to the quadratic gauge couplings, and that possesses a calculable heterotic dual against which these corrections can be checked. This provides a very non-trivial test of this approach to the exotic instanton calculus. An $\mathcal{N}=2$ conformal model from an orbifold of type I$^\prime$ {#sec:Imodel} ===================================================================== We consider an $\mathcal N=2$ orientifold compactification of type IIB string theory on ${{\mathcal{T}}}_4\times {{\mathcal{T}}}_2$. The action of the orientifold generators selects 4 O7-planes located at the invariant points of ${{\mathcal{T}}}_2$ and 64 O3-planes located at the fixed points of ${{\mathcal{T}}}_4\times {{\mathcal{T}}}_2$. The global cancellation of the RR tadpoles requires the presence of 16 dynamical D7-branes transverse to ${{\mathcal{T}}}_2$ and of 16 dynamical “half” D3-branes transverse to the internal 6-torus. We choose to cancel *locally* the RR charges in ${{\mathcal{T}}}_2$ by placing exactly 4 D7-branes and 4 half D3-branes on top of each O7-plane. The D3’s could then be distributed over the 16 orbifold fixed points that are common to a given O7-plane. For the sake of simplicity we place them at distinct points of ${{\mathcal{T}}}_4$ and we focus on the gauge theory living on one of the O7 fixed planes. The $4d$ field theory living on the D7 world-volume at the selected fixed plane is a conformal ${{\mathcal{N}}}= 2$ $\mathrm{U}(4)$ SYM theory containing one adjoint vector multiplet, two antisymmetric hypermultiplets and four fundamental hypermultiplets which are charged under a $\mathrm{U}(1)^4$ flavour group.
The quadratic effective action for the gauge fields can be described by holomorphic Wilsonian couplings [@Dixon:1990pc] that have the following structure: $$f = f_{(0)} + f_{(1)}+f_{\mathrm{n. p.}} \label{f01}$$ where the subscripts $_{(0)}$ and $_{(1)}$ refer to the tree-level and 1-loop contributions, while the last term accounts for possible non-perturbative corrections. Writing the effective action in terms of the $\mathcal{N}=2$ multiplet encoding the $\mathrm{U}(4)$ gauge degrees of freedom, $\Phi(x,\theta) = \phi(x) + \theta^\alpha \Lambda_\alpha(x) + (\theta \gamma^{\mu\nu} \theta)\, F_{\mu\nu}(x)$, we see that there are two possible colour structures, each with its own coupling: $$\label{action} S = \int d^4x d^4\theta \left\{f\, {\mathrm{Tr}\,}\Phi^2 + f'({\mathrm{Tr}\,}\Phi)^2\right\} +\mathrm{c.c}~.$$ The tree-level value for the single trace coupling $f$ can be deduced from the Born-Infeld action and is $$f_{(0)}= -{\rm{i}}\,t \ \ \ \ \ \ \ \ \ \ \ ({\rm Re}\,t=\frac{\theta_{ YM}}{2\pi}~,~~ {\rm Im}\,t=\frac{4\pi}{g_{ YM}^2} \sim \frac{Vol({\mathcal T}_4)}{g_s}) ~; \label{f0}$$ on the other hand, the tree-level value for the double trace coupling $f'$ is vanishing. The only perturbative corrections to $f$ and $f'$ come from the 1-loop threshold corrections, which in turn are related in a universal way to the string 1-loop two-point functions. The correction to the single trace coupling $f_{(1)}$ is expected to vanish, since the gauge theory is conformal, and in fact the 1-loop string diagrams that contribute to the single trace coupling add up to zero. On the contrary, the 1-loop diagrams that contribute to the double trace structure give a non-vanishing result, due to the massless states winding on ${\mathcal T}_2$, and we have $$f'_{(1)} = -8 \log \eta(U) ~, \label{fIaa}$$ where $U$ is the complex structure of ${\mathcal T}_2$. In the next sections we will study the non-perturbative corrections $f_{\mathrm{n. p.}}$ and $f'_{\mathrm{n. p.}}$ induced by D-instantons. D-instantons and their moduli spectrum {#sec:Dmod} ====================================== The orientifold projection that defines our model is compatible both with Euclidean E3-branes wrapped on ${{\mathcal{T}}}_4$, which represent ordinary gauge instantons for the field theory living on the D7-branes, and with D-instantons, which describe truly stringy instanton configurations for the D7-brane gauge theory [@Billo':2009gc; @Billo:2009di; @Fucito:2009rs]. Here we only discuss the contributions produced by the D(-1)-branes, and show that they correct non-perturbatively the gauge kinetic functions of the $\mathcal N=2$ $\mathrm{U}(4)$ theory discussed in Sect. \[sec:Imodel\]. Again we focus on the four D7-branes located at one of the orientifold fixed points, and place on them a number of fractional D-instantons. However, since there are also four D3-branes distributed over four different orbifold fixed points, we have to distinguish between two possibilities, depending on whether the D-instantons sit at an empty fixed point or occupy the same position as one of the D3-branes. In the first case (case *a)*) only the (-1)/(-1) and (-1)/7 open strings support massless moduli, because the (-1)/3 strings always have a non-vanishing stretching energy due to the separation between their endpoints. In the second case (case *b)*) we can find massless excitations also in the spectrum of the (-1)/3 strings. Consistently with the orientifold projections, when we set $k$ “half” D-instantons at a given fixed point, the instanton moduli organize into representations of $\mathrm{U}(k)$ and of the Lorentz symmetry group, which in our local system is broken to $\mathrm{SO}(4)\times \widehat{\mathrm{SO}}(4) \times \mathrm{SO}(2) = \mathrm{SU}(2)_+\times \mathrm{SU}(2)_- \times \widehat{\mathrm{SU}}(2)_+\times \widehat{\mathrm{SU}}(2)_- \times \mathrm{SO}(2)$.
The (-1)/(-1) moduli form the so-called neutral moduli sector, since they do not transform under the $\mathrm{U}(4)$ gauge group and are common to both case *a)* and *b)*. They comprise four complex scalars transforming as vectors of the $\mathrm{SO}(4) \times \widehat{\mathrm{SO}}(4)$ groups that rotate the coordinates of the D7 world volume, a complex scalar $\chi$ and their fermionic partners. The charged moduli sector accounts for the (-1)/7 open strings. Since there are eight directions with mixed boundary conditions we only find a physical fermionic modulus $\mu'$. Finally, the flavored sector of the instanton moduli space arises from the (-1)/3 open strings. In our model, this sector exists only in case *b)*, when the D(-1)’s and the D3’s occupy the same fixed point. It will be useful however to consider the generalized case with $m$ half D3-branes supporting a $\mathrm{U}(m)$ symmetry, so that the configuration *a)* corresponds to $m=0$ and the configuration *b)* corresponds to $m=1$. In this case, one finds two complex variables transforming as chiral spinors with respect to $\mathrm{SO}(4)$, and two spinors of opposite chiralities with respect to $\widehat{\mathrm{SO}}(4)$. All physical moduli and their transformation properties are summarized in Tab. 1. 
[|c|c|c|c|c|c| c| ]{} neutral & $\mathrm{SU}(2)^4$ & $\;\;\;\;\;\mathrm{U}(k)\;\;\;\;\;$ & & charged & $\mathrm{SU}(2)^4$ & $\!\!\!\!\!\!\mathrm{U}(k)\times \mathrm{U}(4)\!\!\!\!\!\!$ \ $\begin{array}{c} B_\ell \\ M_{\dot \alpha a} \end{array}$ & $\begin{array}{c}{(\mathbf{2},\mathbf{2},\mathbf{1},\mathbf{1})} \\ {(\mathbf{1},\mathbf{2},\mathbf{2},\mathbf{1})}\end{array}$ & adjoint & &$\begin{array}{c}\mu'\end{array}$ & ${(\mathbf{1},\mathbf{1},\mathbf{1},\mathbf{1})}$ & $({\tableau{1}},\overline{{\tableau{1}}})$\ $\begin{array}{c} N_{\dot \alpha \dot a} \end{array}$ & $\begin{array}{c}{(\mathbf{1},\mathbf{2},\mathbf{1},\mathbf{2})} \end{array}$ & ${\tableau{1 1}}+ \overline{{\tableau{1 1}}}$ & & flavoured & $\mathrm{SU}(2)^4$ & $\mathrm{U}(k)\times \mathrm{U}(m)$ \ $\begin{array}{c} B_{\dot\ell} \\ M_{\alpha \dot a} \end{array}$ & $\begin{array}{c}{(\mathbf{1},\mathbf{1},\mathbf{2},\mathbf{2})} \\ {(\mathbf{2},\mathbf{1},\mathbf{1},\mathbf{2})}\end{array}$ & ${\tableau{2}}+ \overline{{\tableau{2}}}$ & &$\begin{array}{c} w_\alpha \\ \mu_a \end{array}$ & $\begin{array}{c} {(\mathbf{2},\mathbf{1},\mathbf{1},\mathbf{1})} \\ {(\mathbf{1},\mathbf{1},\mathbf{2},\mathbf{1})} \end{array}$ & (${\tableau{1}},\overline{{\tableau{1}}})$\ $\begin{array}{c} N_{\alpha a} \\ \bar \chi \end{array}$ & $\begin{array}{c}{(\mathbf{2},\mathbf{1},\mathbf{2},\mathbf{1})} \\ {(\mathbf{1},\mathbf{1},\mathbf{1},\mathbf{1})}\end{array}$ & adjoint & & $\begin{array}{c} \mu_{\dot a} \end{array}$ & ${(\mathbf{1},\mathbf{1},\mathbf{1},\mathbf{2})} $ & $({\tableau{1}},{\tableau{1}})$\ \[tab:2\] Note that in addition to the physical moduli we have to consider extra auxiliary fields, $d_{m}$, $D_{\dot \alpha \dot a}$, $h_a$ and $h'$, that linearize the quartic interactions among the moduli and whose equations of motion generalize the ADHM constraints on the ordinary instanton moduli space. 
Non-perturbative corrections from localization formulæ {#sec:loc} ====================================================== The corrections induced by D-instantons can be encoded in a non-perturbative prepotential ${{\mathcal{F}}}_{\mathrm{n.p.}}(\Phi)$, which, taking into account the different instanton configurations and their multiplicity, can be written as $$\label{prepsum} {{\mathcal{F}}}_{\mathrm{n.p.}}(\Phi) = 12\, {{\mathcal{F}}}^{(m=0)}(\Phi) + 4\, {{\mathcal{F}}}^{(m=1)}(\Phi)~.$$ The prepotentials ${{\mathcal{F}}}^{(m)}(\Phi)$ can be expressed as integrals over the “centered” moduli space (containing all moduli except the “center of mass” coordinates $x$ and $\theta$) of the instantonic branes. To compute ${{\mathcal{F}}}_{\mathrm{n.p.}}(\Phi)$ we exploit the fact [@Nekrasov:2002qd] that, after suitable deformations of the instanton action, the integrals over the moduli space localize around isolated points. To obtain explicit formulas we first take $\Phi={{\rm diag}\,}(a_1,\ldots, a_4,-a_1,\ldots, -a_4)$, where the $a_u$ are constant expectation values along the Cartan directions of $\mathrm{U}(4)$, and then consider the $\epsilon$-deformed instanton partition function $$\label{ipf0} Z^{(m)}(a,\epsilon) =\sum_k q^k \,Z^{(m)}_k(a,\epsilon) = \sum_k q^k \int \!d{{\mathcal{M}}}_{k,m} ~{\rm e}^{-{S_{\rm mod}}^{\epsilon}({{\mathcal{M}}}_{k,m},a)}~,$$ where ${S_{\rm mod}}^{\epsilon}$ is obtained by deforming the moduli action with Lorentz breaking terms parameterized by four parameters $\epsilon_I$ describing rotations along the four Cartan directions of $\mathrm{SO}(4) \times \widehat{\mathrm{SO}}(4)$. From the string perspective, these deformations can be obtained by switching on suitable RR background fluxes on the D7-branes, as shown in [@Billo:2006jm; @Billo:2009di; @Billo':2008sp]. Notice that the integrals in (\[ipf0\]) run over all moduli, including $x$ and $\theta$.
In presence of the $\epsilon$-deformations it is rather easy to see that the integration over the super-space yields a volume factor growing as $1/(\epsilon_1\epsilon_2)$ in the limit of small $\epsilon_{1,2}$. Therefore, to obtain the integral over the centered moduli this factor has to be removed. In addition, we have to notice that the $k$-th order in the $q$-expansion receives contributions not only from genuine $k$-instanton configurations but also from disconnected ones. Thus, we are led to consider $$\label{ipf} {{\mathcal{F}}}^{(m)}(a,\epsilon) =\epsilon_1 \epsilon_2 \log Z^{(m)}(a,\epsilon) ~.$$ The prepotential will be extracted from ${{\mathcal{F}}}^{(m)}(a,\epsilon)$ by sending $\epsilon_I \to 0$ and $a \to \Phi$. The localization procedure is based on the co-homological structure of the instanton moduli action which is exact with respect to a suitable BRST charge $Q$, namely $${S_{\rm mod}}= Q \Xi~. \label{q}$$ We can choose as $Q$ any component of the supersymmetry charges preserved on the brane system. Of course, since these charges transform as spinors of $\mathrm{SO}(4) \times \widehat{\mathrm{SO}}(4)$, the choice of $Q$ breaks this symmetry to the $\mathrm{SU}(2)^3 \equiv \mathrm{SU}(2)_- \times \widehat{\mathrm{SU}}(2)_- \times {{\rm diag}\,}\left[\mathrm{SU}(2)_+ \times \widehat{\mathrm{SU}}(2)_+\right] $ subgroup which preserves this spinor. After this identification is made we can see that all the moduli but $\chi$ form BRST doublets, which we will schematically denote as $\big(\phi,\psi\equiv Q\phi\big)$, and the moduli action can indeed be written in the form (\[q\]). To localize the integral over moduli space, it is necessary to make the charge $Q$ equivariant with respect to all symmetries, which in our case are the gauge symmetry $\mathrm{U}(k)\times \mathrm{U}(4)\times \mathrm{U}(m)$ and the residual Lorentz symmetry $\mathrm{SU}(2)^3$. 
After the equivariant deformation, the charge $Q$ becomes nilpotent up to an element of the symmetry group. In the basis provided by the weights $\vec q \equiv \bigl({\vec q}_{\mathrm{U}(k)}, \vec q_{\mathrm{U}(4)}, {\vec q}_{\mathrm{U}(m)}, {\vec q}_{\mathrm{SU}(2)^3} \bigr)$, $Q$ acts diagonally $$\label{brspair3} Q\phi_q = \psi_q~,~~~ Q\psi_q = \Omega_q \phi_q~,$$ where $\Omega_q = \vec\chi\cdot {\vec q}_{\mathrm{U}(k)} + \vec a \cdot \vec q_{\mathrm{U}(4)} + \vec b \cdot {\vec q}_{\mathrm{U}(m)} + \vec \epsilon \cdot {\vec q}_{\mathrm{SU}(2)^3}$, parametrize the equivariant deformation in terms of the Cartan components of the group parameters $\vec \chi$, $\vec{b}$, $\vec{a}$ and $\vec \epsilon$. [^2] [|c|c|c|c| ]{} $(\phi,\psi)$ & $\mathrm{U}(k) \times \mathrm{U}(4)\times \mathrm{U}(m)$ & $\mathrm{SU}(2)^3$ & $\vec\epsilon\cdot \vec{q}_{SU(2)^3}$ \ $(B_\ell,M_\ell)$ & $\bigl(\mbox{adj}, {\mathbf{1}}, {\mathbf{1}}\bigr)$ & ${(\mathbf{2},\mathbf{1},\mathbf{2})}$ & $ \epsilon_1,\epsilon_2$\ $(B_{\dot\ell},M_{\dot\ell})$ & $\bigl({\tableau{2}}, {\mathbf{1}}, {\mathbf{1}}\bigr) + \mbox{h.c.}$ & ${(\mathbf{1},\mathbf{2},\mathbf{2})}$ & $ \epsilon_3,\epsilon_4$\ $(N_{\dot\alpha\dot a},D_{\dot\alpha\dot a})$ & $\bigl({\tableau{1 1}}, {\mathbf{1}}, {\mathbf{1}}\bigr) + \mbox{h.c.}$ & ${(\mathbf{2},\mathbf{2},\mathbf{1})}$ & $ \epsilon_2+\epsilon_3,\epsilon_1+\epsilon_3$\ $(N_{m},d_{m})$ & $\bigl(\mbox{adj}, {\mathbf{1}}, {\mathbf{1}}\bigr)$ & ${(\mathbf{1},\mathbf{1},\mathbf{3})}$ & $ 0_{{{\mathbb R}}},\epsilon_1+\epsilon_2$\ $(\bar\chi,\eta)$ & $\bigl(\mbox{adj}, {\mathbf{1}}, {\mathbf{1}}\bigr)$ & ${(\mathbf{1},\mathbf{1},\mathbf{1})}$ & $ 0_{{{\mathbb R}}} $\ $(\mu',h')$ & $\bigl({\tableau{1}}, \overline{{\tableau{1}}}, {\mathbf{1}}\bigr) + \mbox{h.c.}$ & ${(\mathbf{1},\mathbf{1},\mathbf{1})}$ & $ 0$\ $(w_\alpha,\mu_\alpha)$ & $\bigl({\tableau{1}}, {\mathbf{1}}, \overline{{\tableau{1}}}\bigr) + \mbox{h.c.}$ & ${(\mathbf{1},\mathbf{1},\mathbf{2})}$ & $ 
(\epsilon_1+\epsilon_2 )/2$\ $(\mu_{\dot a},h_{\dot a})$ & $\bigl({\tableau{1}}, {\mathbf{1}}, {\tableau{1}}\bigr) + \mbox{h.c.}$ & ${(\mathbf{1},\mathbf{2},\mathbf{1})}$ & $ (\epsilon_3-\epsilon_4 )/2$\ \[tab:bs\] After the complete localization the integral is given by the (super)-determinant of $Q^2$ evaluated at the fixed points of $Q$ [@Nekrasov:2002qd; @Bruzzo:2002xf], and its explicit expression can be deduced by considering, for each modulus $\phi$ in Tab. 2, the set of weights corresponding to its symmetry representation. The explicit result is $$\begin{aligned} Z_k^{(m)}(a,b,\epsilon) &=& \left( \frac{s_3}{\epsilon_1 \epsilon_2 } \right)^k\int \prod_{i=1}^k \!\frac{d{\chi_i}}{2\pi{\mathrm{i}}} ~ \prod_{i<j}^{k} \big(\chi_i -\chi_j\big)^2\,\Big( (\chi_i -\chi_j)^2-s_{3}^2 \Big)\, \nonumber\\ && \times \prod_{i<j}^{k}\, \prod_{\ell=1}^{2} \frac{ \Big((\chi_i +\chi_j)^2-s^2_{\ell}\Big)} {\Big((\chi_i-\chi_j)^2-\epsilon_{\ell}^2\Big) \Big( (\chi_i+\chi_j)^2-\epsilon_{\ell+2}^2\Big)}\label{Z} \\ &&\times \, \prod_{i=1}^k\left[\,\prod_{\ell=1}^{2} \frac{1}{\Big(4\chi_i^2 -\epsilon^2_{\ell+2}\Big)} \, \prod_{r=1}^{m} \frac{\Big(( \chi_i +b_r)^2-\frac{(\epsilon_3-\epsilon_4)^2}{4}\Big)}{\Big((\chi_i -b_r)^2-\frac{(\epsilon_1+\epsilon_2)^2}{4}\Big)} \,\prod_{u=1}^{n} \Big(\chi_i-a_u\Big) \right]~. \nonumber\end{aligned}$$ The integral over $\chi_i$ in this expression has to be thought of as a multiple contour integral, according to the prescription introduced in Ref. [@Moore:1998et]. In order to obtain the non-perturbative prepotential from the partition function $Z^{(m)}(a,b,\epsilon)$, we set $b_r=0$, since the D3-branes are fixed at one of the orbifold fixed-points and we take the limit $\epsilon_I\to 0$ to remove the Lorentz breaking deformations. A simple inspection of the explicit results for $\log Z^{(m)}(a,\epsilon)$ [@Billo':2010bd] shows that this expression diverges as $1/(\epsilon_1 \epsilon_2\epsilon_3 \epsilon_4)$ in this limit. 
Such a divergence is typical of interactions in eight dimensions, where the ${{\mathcal{N}}}=2$ super-space volume grows like $\int d^8x d^8 \theta \sim 1/(\epsilon_1 \epsilon_2\epsilon_3 \epsilon_4)$. These contributions can be thought of as coming from regular D(–1)-instantons moving in the full eight-dimensional world-volume of the D7-branes and can in fact be associated to a universal quartic prepotential ${\mathcal F}_{\mathrm{IV}}(a)$ [@Billo:2009di] defined as $$\label{F4} {\mathcal F}_{\mathrm{IV}}(a) = \lim_{\epsilon_I\to 0} \epsilon_1 \epsilon_2\epsilon_3 \epsilon_4 \log Z^{(m)}(a,\epsilon)~.$$ We can then extract a finite quadratic prepotential by subtracting the divergence coming from ${\mathcal F}_{\mathrm{IV}}(a)$: $$\label{F2} {\mathcal F}_{\mathrm{II}}^{(m)}(a) = \lim_{\epsilon_{I}\to 0} \Big( \epsilon_1 \epsilon_2 \log Z^{(m)}(a,\epsilon) -\frac{1}{\epsilon_3 \epsilon_4} {\cal F}_{\mathrm{IV}}(a) \Big)~.$$ Since the moduli measure is dimensionless no dynamically generated scale may appear and the contributions at *all* instanton numbers must be constructed only out of the $a$’s; we find $$\begin{aligned} \!\!{\mathcal F}_{\mathrm{II}}^{(m=0)}(a) &=\Big(\!\!-\sum_{i<j}a_ia_j \Big) \, q + \Big(\sum_{i<j}a_ia_j-\frac14\,\sum_ia_i^2 \Big)\,q^2 +\Big(\!\!-\frac{4}{3}\sum_{i<j}a_ia_j \Big)\,q^3+\cdots~,\\ \!\!{\mathcal F}_{\mathrm{II}}^{(m=1)}(a) &=\Big(3\sum_{i<j}a_ia_j \Big) \, q + \Big(\sum_{i<j}a_ia_j+\frac74\,\sum_ia_i^2 \Big)\,q^2 +\Big(4\sum_{i<j}a_ia_j \Big)\,q^3+\cdots~. \end{aligned} \label{F2b}$$ We can now promote the vacuum expectation values $a$’s to the dynamical superfield $\Phi(x,\theta)$ and determine ${\mathcal F}_{\mathrm{n.p.}}(\Phi)$ taking into account the contributions from the various $m=0,1$ configurations according to [Eq. (\[prepsum\])]{}. 
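The cancellation pattern implied by Eq. (\[prepsum\]) can be verified symbolically. The following sketch (sympy, with symbols $a_1,\dots,a_4$ standing for the Cartan vevs of $\mathrm{U}(4)$) combines the two expansions above and checks that the $k=1$ and $k=3$ terms cancel, while the $k=2$ term reproduces the single- and double-trace coefficients $-4$ and $+8$:

```python
import sympy as sp

q = sp.symbols('q')
a = sp.symbols('a1:5')  # Cartan vevs a_1, ..., a_4 of U(4)
S1 = sum(ai**2 for ai in a)                                   # sum_i a_i^2
S2 = sum(a[i]*a[j] for i in range(4) for j in range(i+1, 4))  # sum_{i<j} a_i a_j

F0 = -S2*q + (S2 - S1/4)*q**2 - sp.Rational(4, 3)*S2*q**3     # case a), m = 0
F1 = 3*S2*q + (S2 + sp.Rational(7, 4)*S1)*q**2 + 4*S2*q**3    # case b), m = 1
Fnp = sp.expand(12*F0 + 4*F1)                                 # Eq. (prepsum)

# k = 1 and k = 3 contributions cancel between the two configurations
print(Fnp.coeff(q, 1), Fnp.coeff(q, 3))  # 0 0
# k = 2: with tr(Phi^2) -> S1 and (tr Phi)^2 -> S1 + 2*S2, the coefficient
# 16*S2 + 4*S1 = 8*(tr Phi)^2 - 4*tr(Phi^2) reproduces f = -4 q^2, f' = +8 q^2
print(sp.expand(Fnp.coeff(q, 2) - (8*(S1 + 2*S2) - 4*S1)))  # 0
```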
Performing the $\theta$-integration, we then obtain the quadratic non-perturbative action $$\label{snpq} S_{\mathrm{n.p.}}= 4 \int d^4x\, \Big[ 2\big({\mathrm{tr}\,}F\big)^2 - {\mathrm{tr}\,}F^2 \Big] \,q^2 + O(q^4)~~+~ \mathrm{c.c.}$$ from which we can read off the non-perturbative part of the holomorphic couplings. Considering also the perturbative contributions written above, we have $$\label{concl_ffpI} f = -{\mathrm{i}}t - 4\,q^2 + O(q^4)\phantom{\vdots}~,\ \ \ \ \ \ \ \ \ \ f'= -8\log \eta(U)^2 +8 \,q^2 + O(q^4)\phantom{\vdots}~.$$ We would like to stress that the vanishing of the contributions at the one- and three-instanton levels is due to non-trivial cancellations between the contributions of configurations *a)* and *b)*. The heterotic model dual to the type I$^\prime$ description of the previous sections can be built from the U(16) compactification of the SO(32) heterotic string on ${{\mathcal{T}}}_4/\mathbb Z_2$ (with standard embedding of the orbifold curvature into the gauge bundle), further reduced on ${{\mathcal{T}}}_2$ with Wilson lines that break U(16) to $\mathrm{U}(4)^4$. The gauge kinetic terms in this heterotic set-up are corrected at 1-loop by an infinite tower of world-sheet instantons wrapping ${{\mathcal{T}}}_2$, which are dual to the D-instantons of the type I$^\prime$ theory [@Bachas:1997mc] and read [@Billo':2010bd]: $$\label{concl_ffphet} f = -{\mathrm{i}}S + 8 \log\Bigg(\frac{\eta({{\textstyle\frac{T}{4}}})^2}{\eta ({{\textstyle\frac{T}{2}}})^2}\Bigg)~,\ \ \ \ \ \ \ \ \ \ f'= -8\log \eta(U)^2 + 8 \log\Bigg(\frac{\eta({{\textstyle\frac{T}{2}}})^2}{\eta ({{\textstyle\frac{T}{4}}})^4}\Bigg)~.$$ These couplings are exact and do not receive corrections beyond 1-loop. Therefore they must contain all information, both perturbative and non-perturbative, on the corresponding type I$^\prime$ couplings, including the (exotic) instanton corrections computed above.
Indeed, when we expand for large values of $T$ and use the duality map that relates the Kähler modulus $T$ of the heterotic theory to the axio-dilaton $\lambda$ of the type I$^\prime$ model, ${T}/{4} \longleftrightarrow \lambda$, these heterotic formulas predict no instanton corrections at $k=1$ and $k=3$, and a relative coefficient $-2$ between the $k=2$ corrections to $f$ and $f'$, in perfect agreement with the results obtained in the type I$^\prime$ setting. We regard these results as a nice and non-trivial confirmation of the validity of the exotic instanton calculus, which can then be applied with confidence also to four-dimensional theories and to models for which the heterotic dual is not known or does not exist. R. Blumenhagen, M. Cvetic, S. Kachru, and T. Weigand, [Ann. Rev. Nucl. Part. Sci. [**59**]{} (2009) 269–296]{}. E. Witten, [Nucl. Phys. [**B460**]{} (1996) 541–559]{}; M. R. Douglas, [arXiv:hep-th/9512077]{}. M. B. Green and M. Gutperle, JHEP [**02**]{} (2000) 014. M. Billo, M. Frau, I. Pesando, F. Fucito, A. Lerda, and A. Liccardo, JHEP [**02**]{} (2003) 045. N. A. Nekrasov, Adv. Theor. Math. Phys. [**7**]{} (2004) 831–864. M. Billo, M. Frau, L. Gallot, A. Lerda, and I. Pesando, [JHEP [**03**]{} (2009) 056]{}. M. Billo, L. Ferro, M. Frau, L. Gallot, A. Lerda, and I. Pesando, [JHEP [**07**]{} (2009) 092]{}. F. Fucito, J. F. Morales, and R. Poghossian, [JHEP [**10**]{} (2009) 041]{}. M. Billo, M. Frau, F. Fucito, A. Lerda, J. F. Morales and R. Poghossian, [JHEP [**05**]{} (2010) 107]{}. L. J. Dixon, V. Kaplunovsky, and J. Louis, [Nucl. Phys. [**B355**]{} (1991) 649–688]{}. M. Billo, M. Frau, F. Fucito, and A. Lerda, JHEP [**11**]{} (2006) 012. M. Billo, L. Ferro, M. Frau, F. Fucito, A. Lerda, and J. F. Morales, [JHEP [**10**]{} (2008) 112]{}; [JHEP [**12**]{} (2008) 102]{}. U. Bruzzo, F. Fucito, J. F. Morales, and A. Tanzini, JHEP [**05**]{} (2003) 054; U. Bruzzo and F. Fucito, [Nucl. Phys. [**B678**]{} (2004) 638–655]{}. G. W. Moore, N.
Nekrasov, and S. Shatashvili, [Commun. Math. Phys. [**209**]{} (2000) 77–95]{}. C. Bachas, C. Fabre, E. Kiritsis, N. A. Obers, and P. Vanhove, [Nucl. Phys. [**B509**]{} (1998) 33–52]{}; C. Bachas, [Nucl. Phys. Proc.Suppl. [**68**]{} (1998) 348–354]{}. [^1]: E-mail:  [^2]: The Cartan directions of the residual Lorentz group $\mathrm{SU}(2)^3$ are parametrized by $\epsilon_I$ ($I=1,\ldots,4$) subject to the constraint $\epsilon_1+ \epsilon_2+ \epsilon_3 +\epsilon_4=0$.
--- abstract: | We consider the “convection-diffusion” equation $u_t=J*u-u-uu_x,$ where $J$ is a probability density. We supplement this equation with step-like initial conditions and prove the convergence of the corresponding solution towards a rarefaction wave, [*i.e.*]{} the unique entropy solution of the Riemann problem for the nonviscous Burgers equation. Methods and tools used in this paper are inspired by those used in \[Karch, Miao and Xu, SIAM J. Math. Anal. [**39**]{} (2008), no. 5, 1536–1549.\], where the fractal Burgers equation was studied. [**AMS Subject Classification 2000:**]{} 35B40, 35K55, 60J60 [**Key words:**]{} asymptotic behaviour of solutions, rarefaction waves, Riemann problem, long range interactions. author: - | Anna Pudełko title: '**Rarefaction waves in nonlocal convection-diffusion equations** ' --- Introduction ============ The goal of this work is to study asymptotic properties of solutions to the Cauchy problem for the following nonlocal convection-diffusion equation $$\label{rownanie} u_t={\cal L} u-uu_x, \qquad x\in \mathbb{R},~t>0,$$ where the nonlocal operator ${\cal L}$ is defined by the formula $$\label{operator} {\cal L} u=J*u-u, \quad \text {with} \quad J\in L^1(\mathbb R),~ J\geqslant 0,$$ and “ \* ” denotes the convolution with respect to the space variable. We supplement this problem with the step-like initial condition satisfying $$\label{warunek} u(x,0)=u_0(x)\to u_{\pm} \qquad \text{when}\quad x\to\pm \infty$$ with some constants $u_{-}<u_{+}$. The precise meaning of this condition is given in (\[as u01\]) and (\[as u02\]), below.
Equation (\[rownanie\]) with the particular kernel $J(x)=\frac{1}{2}e^{-\vert x\vert}$ can be obtained from the following system modelling a radiating gas [@H] $$\label{numer} u_t+uu_x+q_x=0,\quad -q_{xx}+q+u_x=0 \qquad\text{for} \quad x\in \mathbb R,~t\geqslant 0.$$ Indeed, the second equation in (\[numer\]) can be formally solved to obtain $q=-\tilde J u_x,$ where $\tilde J$ denotes the convolution with the kernel $J(x)=\frac{1}{2}e^{-\vert x\vert},$ the fundamental solution of the operator $-\frac{d^2}{dx^2}+I.$ Thus, substituting $ q_x=-\tilde Ju_{xx}=u-J*u$ into the first equation in (\[numer\]) we obtain an equation which is formally equivalent to (\[rownanie\])-(\[operator\]). The derivation of system (\[numer\]) from the Euler system for a perfect compressible fluid coupled with an elliptic equation for the temperature can be found in [@KT]. In this work, we consider more general kernels (see our assumptions (\[as J\]), below), because the general integral operator ${\cal L} u=J*u-u$ models long range interactions and appears in many problems ranging from micro-magnetism [@MGP; @MOPT_1; @MOPT_2], neural networks [@EM], and hydrodynamics [@R] to ecology [@CMS; @C; @DK; @KM; @M; @SSN]. For example, in some population dynamic models, such an operator is used to model the dispersal of individuals through their environment [@F_1; @F_2; @HMMV]. We also refer the reader to a series of papers [@AB; @BFRW; @Ch; @Co_1; @Co_2; @CoD; @CoDM] on travelling fronts and to [@CoDM_2] on pulsating fronts for the equation $u_t=J*u-u+f(x,u).$ The equation in (\[rownanie\])-(\[operator\]) with the particular kernel $J(x)=\frac{1}{2}e^{-\vert x\vert}$ (thus in the context of modelling radiating gases) has recently been intensively studied for various classes of initial data. For existence and uniqueness results, we refer the reader to [@KN] and [@LM].
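As a side check (an illustration only, not needed for the argument), the radiating-gas kernel $J(x)=\frac{1}{2}e^{-\vert x\vert}$ indeed satisfies the assumptions (\[as J\]) imposed below, and solves $-J''+J=0$ away from the origin; this can be verified symbolically:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# J(x) = exp(-|x|)/2; by symmetry it suffices to integrate over (0, oo).
J_half = sp.exp(-x) / 2

mass = 2 * sp.integrate(J_half, (x, 0, sp.oo))                  # \int J dx = 1
second_moment = 2 * sp.integrate(x**2 * J_half, (x, 0, sp.oo))  # \int x^2 J dx = 2 < oo

# For x > 0, J solves -J'' + J = 0, consistent with J being the
# fundamental solution of the operator -d^2/dx^2 + I.
residual = sp.simplify(-sp.diff(J_half, x, 2) + J_half)

print(mass, second_moment, residual)
```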
In [@CH], Chmaj gave an answer to an open problem stated by Serre in [@S_2] concerning existence of travelling wave solutions to equation (\[rownanie\])-(\[operator\]) with a more general kernel. Here, we refer the reader to the recent work [@ChJ], for generalizations of those results and for additional references. The large time behaviour of solutions to equation (\[rownanie\])-(\[operator\]) was considered [*e.g.*]{} in [@KN; @S; @L; @KT]. In the case of initial data $u_0$ satisfying $u_0(x)\to u_{\pm}$ when $x\to\pm \infty,$ with $u_->u_+$, Serre [@S] showed the $L^1$-stability of shock profiles. Asymptotic stability of smooth travelling waves was proved in [@KN]. For initial data $u_0\in L^1(\mathbb R)\cap L^\infty(\mathbb R),$ Laurençot [@L] showed the convergence of integrable and bounded weak solutions of (\[rownanie\])-(\[operator\]) towards a source-type solution to the viscous Burgers equation. Here, we recall also the recent works [@IR; @IIS-D], where a doubly nonlocal version of equation (\[rownanie\]) (namely, where the Burgers flux is replaced by a nonlinear term in convolution form) was studied together with initial conditions from $L^1(\mathbb R)\cap L^\infty(\mathbb R).$ The large time behaviour of solutions to problem (\[rownanie\])-(\[warunek\]) when $J(x)=\frac{1}{2}e^{-\vert x\vert}$ and $u_-<u_+$ was studied by Kawashima and Tanaka [@KT], where a specific structure of this model was used to show the convergence of solutions towards rarefaction waves, under suitable smallness conditions on the initial data. The goal of this work is to generalize the result from [@KT] by considering a less regular initial condition with no smallness assumption and a more general kernel $J.$ To deal with such a problem, we develop methods and tools which are inspired by those used in [@KMX], where the fractal Burgers equation was studied.
Main result =========== First, we recall that the explicit function $$\label{rarefaction} w^R(x,t)=\left\{ \begin{aligned} &u_-\,,\quad &&x/t\leq u_-,\\ &x/t\,,\quad &&u_-\leq x/t\leq u_+, \\ & u_+\,,\quad &&x/t\geq u_+, \end{aligned} \right.$$ is called a rarefaction wave and satisfies the following Riemann problem $$\begin{aligned} &&w^R_t+w^R w^R_x=0,\label{eq-rar}\\ &&w^R(x,0)=w_0^R(x)=\left\{ \begin{aligned} u_-\,,\quad x<0,\\ u_+\,,\quad x>0, \end{aligned}\right.\label{ini-rar}\end{aligned}$$ in a weak (distributional) sense. Moreover, it is the unique entropy solution. Such rarefaction waves appear as asymptotic profiles when $t\to\infty$ of solutions to the viscous Burgers equation $$\label{burgers} u_t-u_{xx}+uu_x=0$$ supplemented with an initial datum $u(x,0)=u_0(x)$ satisfying $u_0-u_-\in L^1((-\infty,0))$ and $u_0-u_+\in L^1((0,\infty))$ (cf. [@HN; @IO] and Lemma \[w\_R\], below). Below, we also use the following regularized problem $$\begin{aligned} &&w_t-w_{xx}+w w_x=0,\label{eq-app}\\ &&w(x,0)=w_0(x)= \left\{ \begin{aligned} u_-\,,\quad x<0,\\ u_+\,,\quad x>0, \end{aligned}\right. \label{ini-app}\end{aligned}$$ whose solution is called the smooth approximation of the rarefaction wave (\[rarefaction\]). The purpose of this paper is to show that weak solutions of the nonlocal Cauchy problem (\[rownanie\])-(\[warunek\]) exist for all $t\geqslant 0$ and converge as $t\to\infty$ towards the rarefaction wave.
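As a quick illustration (not needed for the proofs), one can verify symbolically that the fan $w=x/t$ solves $w_t+ww_x=0$ in the middle region of (\[rarefaction\]); outside this region $w^R$ is constant, so equation (\[eq-rar\]) holds trivially there:

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)

# In the fan region u_- <= x/t <= u_+ the rarefaction wave equals x/t:
# w_t = -x/t**2 and w*w_x = (x/t)*(1/t) cancel exactly.
w = x / t
residual = sp.simplify(sp.diff(w, t) + w * sp.diff(w, x))
print(residual)  # 0
```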
Here, as usual, a function $u\in L^\infty(\mathbb R\times [0,\infty))$ is called a weak solution to problem (\[rownanie\])-(\[warunek\]) if for every test function $\varphi\in C^\infty_c(\mathbb R\times [0,\infty))$ we have $$\label{ue_slabe_2} -\int_\mathbb R\int_0^\infty u{\varphi}_t dtdx-\int_\mathbb R u_0(x){\varphi}(x,0)~dx= \int_\mathbb R\int_0^\infty u {\cal L}{\varphi}~dtdx+\frac{1}{2}\int_\mathbb R\int_0^\infty u^2{\varphi}_x~dtdx.$$ In the following, we assume that ${\cal L}u=J*u-u$ with $$\label{as J} \begin{split} &J,~\vert x\vert^2 J\in L^1(\mathbb R), \quad \int_{\mathbb R} J(x) dx=1,\\ &J(x)=J(-x)\quad \text{and}\quad J(x)\geqslant 0\quad \text{for all} \quad x\in\mathbb R. \end{split}$$ Moreover, we consider initial conditions satisfying $$\label{as u01} u_0-u_-\in L^1((-\infty,0))\quad \text{and} \quad u_0-u_+\in L^1((0,\infty))$$ as well as $$\label{as u02} u_{0,x}\in L^1(\mathbb R) \quad \text{and}\quad u_{0,x}(x)\geqslant 0\quad \text{a.e. in} ~~\mathbb R.$$ Now, we formulate the main result of this work on the rate of convergence of solutions to problem (\[rownanie\])-(\[warunek\]) towards the rarefaction wave (\[rarefaction\]). \[th main\] Assume that the kernel $J$ satisfies (\[as J\]) and the initial datum $u_0$ has the properties stated in (\[as u01\]) and (\[as u02\]). Then, there exists a unique weak solution $u=u(x,t)$ of problem (\[rownanie\])-(\[warunek\]) with the following property: for every $p\in [1,\infty]$ there is a constant $C>0$ such that $$\label{glowna nierownosc} \Vert u(t)-w^R(t)\Vert_p\leqslant C t^{-(1-1/p)/2}[\log(2+t)]^{(1+1/p)/2}$$ for all $t>0.$ Although the nonlocal operator ${\cal L}u=J*u-u$ has no regularizing property, unlike [*e.g.*]{} the Laplace operator, we still have global-in-time continuous solutions, because, for a non-decreasing initial condition, the nonlinear term in equation (\[rownanie\]) does not develop shocks in finite time. The paper is organized as follows.
In the next section, we gather results concerning an equation regularized by the usual viscosity term and auxiliary lemmas on properties of the nonlocal operator ${\cal L}$. The main result on the large time behaviour of solutions to the regularized problem is shown in Section 4. The convergence of regularized solutions to a weak solution of the nonlocal problem (\[rownanie\])-(\[warunek\]) and Theorem \[th main\] are proved in Section 5. [**Notation.**]{} By $\|\cdot\|_p$ we denote the $L^p$-norm of a function defined on $\R.$ Integrals without integration limits are taken over the whole line $\mathbb R.$ Various numerical constants are denoted by $C$. Regularized problem =================== In this section, we consider the regularized problem $$\label{rownanie zregularyzowane} u_t=\varepsilon u_{xx}+{\cal L} u-uu_x, \qquad x\in \mathbb{R},~t>0$$ $$\label{warunek ue} u(x,0)=u_0(x)$$ with fixed $\varepsilon>0.$ Our first goal is to show that this initial value problem has a unique smooth global-in-time solution.
([*Existence of solutions.*]{}) \[istnienie zregularyzowanego\] If $u_0\in L^\infty(\mathbb R)$ and ${\cal L} u=J*u-u,$ where the kernel $J$ satisfies (\[as J\]), then the regularized problem (\[rownanie zregularyzowane\])-(\[warunek ue\]) has a solution ${u^\varepsilon}\in L^\infty(\mathbb R\times[0,\infty)).$ Moreover, this solution satisfies: (i) ${u^\varepsilon}\in C^\infty(\mathbb R\times (0,\infty))$ and all its derivatives are bounded on $\mathbb R\times(t_0,\infty)$ for all $t_0 > 0;$ (ii) for all $(x,t)\in\mathbb R\times [0,\infty)$ $$\label{L} \underset{x\in\mathbb R}{{\rm essinf}~ u_0}\leqslant {u^\varepsilon}(x,t)\leqslant \underset{x\in\mathbb R}{{\rm esssup}~u_0};$$ (iii) ${u^\varepsilon}$ satisfies the equation (\[rownanie zregularyzowane\]) in the classical sense; (iv) ${u^\varepsilon}(t)\to u_0$ as $t\to 0,$ in $L^\infty (\mathbb R)$ weak-\* and in $L^p_{loc}(\mathbb R)$ for all $p\in [1,\infty).$ This is a unique solution of problem (\[rownanie zregularyzowane\])-(\[warunek ue\]) in the sense of the integral formulation (\[duhamel\]), below. In the following theorem we collect other properties of solutions to the regularized problem. \[wlasnosci rozwiazania\] Assume that the kernel $J$ satisfies (\[as J\]).
Let ${u^\varepsilon}$ be a solution of the regularized problem corresponding to an initial condition $u_0$ satisfying (\[as u01\]). If $ u_{0,x}\in L^1(\mathbb R)$ then $$\label{prawo zach} \int {u^\varepsilon}_x(x,t) ~dx=\int u_{0,x}(x)~ dx.$$ If $u_{0,x}\geqslant 0$ then $${u^\varepsilon}_x(x,t)\geqslant 0$$ for all $x\in \mathbb R$ and $t\geqslant 0.$ Moreover, for two initial conditions $u_0, ~\bar u_0$ satisfying (\[as u01\])-(\[as u02\]), the corresponding solutions ${u^\varepsilon},~\bar u^\varepsilon$ satisfy $$\label{kontrakcja} \Vert {u^\varepsilon}(t)-\bar u^\varepsilon(t)\Vert_1\leqslant \Vert u_0-\bar u_0\Vert_1.$$ [*Proof of Theorem \[istnienie zregularyzowanego\].*]{} Following the usual procedure, based on the Duhamel principle, we rewrite problem (\[rownanie zregularyzowane\])-(\[warunek ue\]) in the integral form $$\label{duhamel} \begin{split} {u^\varepsilon}(x,t)=({G^\varepsilon}(\cdot,t)*{u^\varepsilon}_0)(x)&+\int_0^t({G^\varepsilon}(\cdot,t-s)*{\cal L}{u^\varepsilon}(\cdot,s))(x)~ ds\\ &-\int_0^t ({G^\varepsilon}(\cdot,t-s)*{u^\varepsilon}(\cdot,s){u^\varepsilon}_x(\cdot,s))(x)~ ds , \end{split}$$ where ${G^\varepsilon}(x,t)=(4\pi\varepsilon t)^{-1/2} e^{-\frac{\vert x\vert^2}{4\varepsilon t}}$ is the fundamental solution of the heat equation $u_t=~\varepsilon u_{xx}.$ By a completely standard reasoning based on the Banach contraction principle (details can be found, for example, in [@DGV Section 5]), the integral equation (\[duhamel\]) has a unique local-in-time regular solution on $[0,T]$ with the properties stated in (i), (iii) and (iv). Here, one should notice that the second term on the right hand side of the equation (\[duhamel\]) does not cause any problem in adapting the arguments from [@DGV Section 5] to our case. This is due to the fact that the convolution operator ${\cal L}$ is bounded on $L^\infty(\mathbb R).$ Hence, we skip these details.
This solution is global-in-time because of estimates (\[L\]) which we are going to prove below. In the proof of the comparison principle expressed by inequalities (\[L\]) we adapt ideas described in [@K_eveq]. It is based on the following auxiliary results. \[lem:pass\] Let $\varphi \in C^3_b(\R).$ If the sequence $\{x_n\}\subset \R$ satisfies $\varphi(x_n)\to \underset{x\in\R}{\sup}~\varphi(x)$ then (i) $\underset{n\to\infty}{\lim}~\varphi'(x_n)=0;$ (ii) $\underset{n\to\infty}{\limsup}~\varphi''(x_n)\leqslant 0;$ (iii) $\underset{n\to\infty}{\limsup}~{\cal L}\varphi(x_n)\leqslant 0.$ Since $\varphi''$ is bounded, there exists $C>0$ such that $$\label{Taylor_1} \underset{x\in\R}{\sup}~\varphi(x)\geqslant \varphi(x_n-z)\geqslant \varphi(x_n)-\varphi'(x_n)~z-Cz^2$$ for every $z\in \R.$ Since the sequence $\{\varphi'(x_n)\}$ is bounded, passing to a subsequence, we can assume that $\varphi'(x_n)\to p.$ Consequently, passing to the limit in (\[Taylor\_1\]) we obtain the inequality $$0\geqslant -pz-Cz^2 \quad\quad\mbox{for every} \quad\quad z\in \R,$$ which immediately implies $p=0$. To prove inequality (ii), we use an analogous argument involving the inequality $$\label{Taylor_2} \underset{x\in\R}{\sup}~\varphi(x)\geqslant \varphi(x_n-z)\geqslant \varphi(x_n)-\varphi'(x_n)~z+\frac{1}{2} \varphi''(x_n)~z^2-Cz^3$$ for all $z>0,$ where $C=\frac{1}{6}\Vert \varphi'''\Vert_{\infty}.$ Passing to the limit superior in (\[Taylor\_2\]), denoting $q=\underset{n\to\infty}{\limsup}~ \varphi''(x_n)$ and using (i) we obtain the inequality $$0\geqslant \frac{1}{2}qz^2-Cz^3 \quad\quad\mbox{for all} \quad\quad z>0.$$ Choosing $z>0$ arbitrarily small we deduce from this inequality that $q\leqslant 0,$ which completes the proof of (ii).
Now, we prove that $\underset{n\to\infty}{\limsup}~{\cal L}\varphi(x_n)\leqslant 0.$ Note first that, by the definition of the sequence $\{x_n\}$, we have $$\varphi(x_n-z)-\varphi(x_n)\leqslant \underset{x\in\R}{\sup}~\varphi(x) -\varphi(x_n)\to 0 \quad\text{as}\quad n\to\infty.$$ Hence, $\underset{n\to\infty}{\limsup} \Big(\varphi(x_n-z)-\varphi(x_n) \Big)\leqslant 0. $ Applying the Fatou lemma to the expression $${\cal L}\varphi(x_n)= \int\Big( \varphi(x_n-z)-\varphi(x_n)\Big)J(z)~dz$$ ends the proof of (iii). We are now in a position to prove the comparison principle for equations with the nonlocal operator ${\cal L}.$ \[eveq\] Assume that $u\in C_b(\R\times [0,T])\cap C^3_b(\R\times [\varepsilon ,T])$ is a solution of the equation $$\label{eq:lin} u_t=u_{xx}+{\cal L} u-b(x,t)u_x,$$ where ${\cal L}$ is the nonlocal convolution operator given by (\[operator\]) and $b=b(x,t)$ is a given and sufficiently regular real-valued function. Then $$u(x,0)\leqslant 0 \quad\mbox{implies}\quad u(x,t)\leqslant 0 \quad \mbox{for all}\quad x\in \R,~t\in [0,T].$$ The function $\Phi(t)=\underset{x\in\R}{\sup}~u(x,t)$ is well-defined and continuous. Our goal is to show that $\Phi$ is locally Lipschitz and $\Phi'(t)\leqslant 0$ almost everywhere. To show the Lipschitz continuity of $\Phi$, for every $\varepsilon>0$ we choose $x_\varepsilon$ such that $$\underset{x\in\R}{\sup}~u(x,t)\leqslant u(x_\varepsilon,t)+\varepsilon.$$ Now, we fix $t,s\in I$, where $I\subset (0,T)$ is a bounded and closed interval and we suppose (without loss of generality) that $\Phi(t)\geq \Phi(s)$. Using the definition of $\Phi$ and the regularity of $u$ we obtain $$\begin{split} 0\leq \Phi(t)-\Phi(s) &= \sup_{x\in\R} u(x,t)-\sup_{x\in\R} u(x,s)\\ &\leq \varepsilon +u(x_\varepsilon, t)-u(x_\varepsilon, s)\\ &\leq \varepsilon +\sup_{x\in\R} |u(x,t)-u(x,s)|\\ &\leq \varepsilon + |t-s| \sup_{x\in\R,t\in I}|u_t(x,t)|.
\end{split}$$ Since $\varepsilon>0$ and $t,s\in I$ are arbitrary, we immediately obtain that the function $\Phi$ is locally Lipschitz, hence, by the Rademacher theorem, differentiable almost everywhere as well. Let us now differentiate $\Phi(t)=\underset{x\in\R}{\sup}~u(x,t)$ with respect to $t>0$. By the Taylor expansion, for $0<s<t$, we have $$u(x,t)\leqslant u(x,t-s)+s u_t(x,t)+Cs^2.$$ Hence, using equation (\[eq:lin\]), we obtain $$\label{es1} u(x,t)\leqslant\underset{x\in\R}{\sup}~u(x,t-s) +s \Big(u_{xx}(x,t)+{\cal L} u(x,t)-b(x,t)u_x(x,t)\Big) +Cs^2.$$ Substituting $x=x_n$ in (\[es1\]), where $u(x_n,t)\to \underset{x\in\R}{\sup}~u(x,t)$ as $n\to\infty$, and passing to the limit using Lemma \[lem:pass\], we obtain the inequality $$\underset{x\in\R}{\sup}~u(x,t)\leq \underset{x\in\R}{\sup}~u(x,t-s) +Cs^2$$ which can be transformed into $$\frac{\Phi(t)-\Phi(t-s)}{s}\leq Cs.$$ For $s\searrow 0$, we obtain $\Phi'(t)\leq 0$ at those $t$ where $\Phi$ is differentiable. [*Proof of Inequalities (\[L\]).*]{} Let $m=\underset{x\in\mathbb R}{{\rm esssup}~u_0}.$ Then, since ${\cal L}m=0$, the function ${v^\varepsilon}(x,t)={u^\varepsilon}(x,t)-m$ satisfies the equation $${v^\varepsilon}_t=\varepsilon{v^\varepsilon}_{xx}+{\cal L}{v^\varepsilon}-({v^\varepsilon}+m){v^\varepsilon}_x.$$ Now, we use Proposition \[eveq\] with $b(x,t)={v^\varepsilon}(x,t)+m$ (its proof applies verbatim when $u_{xx}$ in (\[eq:lin\]) carries the coefficient $\varepsilon>0$) to conclude that ${v^\varepsilon}(x,t)\leqslant 0,$ so ${u^\varepsilon}(x,t)\leqslant m$ for all $x\in \mathbb R,~ t\in [0,T],$ for arbitrary $T>0.$ The proof of the second inequality $\underset{x\in\mathbb R}{{\rm essinf}~u_0}\leqslant {u^\varepsilon}(x,t)$ is completely analogous, hence we skip it.
[*Proof of Theorem \[wlasnosci rozwiazania\].*]{} In order to show the equality (\[prawo zach\]), we differentiate Duhamel’s formula (\[duhamel\]) with respect to $x,$ and we obtain $$\label{---} \begin{split} {u^\varepsilon}_x(x,t)=({G^\varepsilon}(\cdot,t)*{u^\varepsilon}_{0,x})(x)&+\int_0^t({G^\varepsilon}(\cdot,t-s)*{\cal L}{u^\varepsilon}_x(\cdot,s))(x)~ ds\\ &-\int_0^t ({G^\varepsilon}_x(\cdot,t-s)*{u^\varepsilon}(\cdot,s){u^\varepsilon}_x(\cdot,s))(x)~ ds. \end{split}$$ Then, integrating (\[---\]) over $\mathbb R,$ we have $$\label{pochodna} \begin{split} \int {u^\varepsilon}_x(x,t)~dx&=\int ({G^\varepsilon}(\cdot,t)*{u^\varepsilon}_{0,x})(x)~dx+ \int_0^t \int ({G^\varepsilon}(\cdot,t-s)*{\cal L}{u^\varepsilon}_x(\cdot,s))(x)~dxds\\ &-\int_0^t \int ({G^\varepsilon}_x(\cdot,t-s)*{u^\varepsilon}(\cdot,s){u^\varepsilon}_x(\cdot,s))(x))~dxds. \end{split}$$ Since $\int {G^\varepsilon}(x,t)~dx=1,$ the second term on the right hand side of (\[pochodna\]) is equal to zero by the equality (\[zero\]). Now, using the equality $\int {G^\varepsilon}_x(x,t)~dx=0,$ we see that the last term on the right hand side of (\[pochodna\]) vanishes as well, and that ends the proof of (\[prawo zach\]).
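The two heat-kernel identities invoked above, $\int {G^\varepsilon}(x,t)~dx=1$ and $\int {G^\varepsilon}_x(x,t)~dx=0,$ can be confirmed symbolically; a small sketch (illustrative only, not part of the proof):

```python
import sympy as sp

x, t, eps = sp.symbols('x t epsilon', positive=True)

# Heat kernel of u_t = eps * u_xx.
G = sp.exp(-x**2 / (4 * eps * t)) / sp.sqrt(4 * sp.pi * eps * t)

mass = sp.integrate(G, (x, -sp.oo, sp.oo))                  # unit mass
mass_Gx = sp.integrate(sp.diff(G, x), (x, -sp.oo, sp.oo))   # odd integrand
print(sp.simplify(mass), sp.simplify(mass_Gx))
```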
To prove nonnegativity of ${u^\varepsilon}_x,$ we first differentiate equation (\[rownanie zregularyzowane\]) with respect to $x,$ and we have $$\label{+} ({u^\varepsilon}_x)_t=\varepsilon ({u^\varepsilon}_{xx})_x+{\cal L} {u^\varepsilon}_x-\left({u^\varepsilon}{u^\varepsilon}_x\right)_x, \qquad x\in \mathbb{R},~t>0.$$ Next, we multiply (\[+\]) by $({u^\varepsilon}_x)^-=\max\{-{u^\varepsilon}_x,0\},$ and we integrate the resulting equation over $\mathbb R,$ to obtain $$\label{++} \int ({u^\varepsilon}_x)_t({u^\varepsilon}_x)^-~dx=\varepsilon \int ({u^\varepsilon}_{xx})_x~({u^\varepsilon}_x)^-~dx+\int ({u^\varepsilon}_x)^-~{\cal L} {u^\varepsilon}_x~dx-\int \left({u^\varepsilon}{u^\varepsilon}_x\right)_x~({u^\varepsilon}_x)^-~dx.$$ Now, we notice that the integral on the left-hand side of (\[++\]) is equal to $\frac{1}{2}\frac{d}{dt}\int_{{u^\varepsilon}_x\leqslant 0} \left[({u^\varepsilon}_x)^-\right]^2~ dx.$ Straightforward calculations, based on the integration by parts in the first and third terms on the right-hand side of (\[++\]), lead to $$\label{+++} \frac{1}{2}\frac{d}{dt}\int_{{u^\varepsilon}_x \leqslant 0} \left[({u^\varepsilon}_x)^-\right]^2 dx=-\int_{{u^\varepsilon}_x \leqslant 0} \left[({u^\varepsilon}_x)^-\right]^2dx+ \int_{{u^\varepsilon}_x \leqslant 0} ({u^\varepsilon}_x)^-~{\cal L} {u^\varepsilon}_xdx-\frac{1}{2} \int_{{u^\varepsilon}_x \leqslant 0} \left[({u^\varepsilon}_x)^-\right]^3dx.$$ By Lemmas \[convex\] and \[Kato\], we have $\int_{{u^\varepsilon}_x \leqslant 0} ({u^\varepsilon}_x)^-~{\cal L} {u^\varepsilon}_x~dx\leqslant 0.$ As a consequence, we obtain $$\label{4+} \frac{1}{2}\frac{d}{dt}\int_{{u^\varepsilon}_x \leqslant 0} \left[({u^\varepsilon}_x)^-\right]^2 dx \leqslant 0,$$ and this immediately implies $$\int_{{u^\varepsilon}_x \leqslant 0} \left[({u^\varepsilon}_x)^-\right]^2 dx\leqslant \int_{{u^\varepsilon}_x \leqslant 0} \left[(u_{0,x})^-\right]^2 dx.$$ By the nonnegativity assumption on $u_{0,x}$ from (\[as u02\]), we have
$({u^\varepsilon}_x)^-=0$ on the set $\{{u^\varepsilon}_x \leqslant 0\},$ and, as a consequence, ${u^\varepsilon}_x(x,t)\geqslant 0$ for all $x\in \mathbb R$ and $t>0.$ To prove the $L^1$-contraction property (\[kontrakcja\]), it is sufficient to repeat the reasoning from Lemma \[log\] below, hence we do not reproduce it here. Convergence of regularized solutions towards rarefaction wave ============================================================= Now, we show that a solution to the regularized problem satisfies certain decay estimates and converges towards a rarefaction wave, with all estimates independent of $\varepsilon>0.$ The main result of this section reads as follows. \[jeszcze nie wiem jaki label\] Let $u={u^\varepsilon}(x,t)$ be the solution of the regularized problem (\[rownanie zregularyzowane\])-(\[warunek ue\]) from Theorem \[istnienie zregularyzowanego\], with the kernel $J$ satisfying (\[as J\]) and the initial datum $u_0$ satisfying (\[as u01\])-(\[as u02\]). For every $p\in[1,\infty]$ there exists $C=C(p)>0$ independent of $t$ and of $\varepsilon>0$ such that $$\label{decay zregularyzowanego} \Vert {u^\varepsilon}_x(t)\Vert_p\leqslant t^{-1+1/p}\Vert u_{0,x}\Vert_1^{1/p}$$ and $$\label{log zregularyzowanego} \Vert {u^\varepsilon}(t)-w^R(t)\Vert_p\leqslant C t^{-(1-1/p)/2}[\log(2+t)]^{(1+1/p)/2}$$ for all $t>0,$ where $w^R=w^R(x,t)$ is the rarefaction wave (\[rarefaction\]).
We precede the proof of this theorem with preliminary inequalities involving the nonlocal operator ${\cal L}.$ \[Kato\] For every $\varphi\in L^1(\mathbb R)$ we have ${\cal L}\varphi\in L^1(\mathbb R).$ Moreover, $$\label{zero} \int {\cal L} \varphi~dx = 0$$ and $$\label{signum} \int {\cal L} \varphi~{{\rm sgn}\,}\varphi ~dx\leqslant 0.$$ The function ${\cal L}\varphi$ is integrable by the Young inequality and the following calculation $$\Vert {\cal L} \varphi\Vert_1\leqslant \Vert\varphi\Vert_1+\Vert J*\varphi\Vert_1\leqslant \Vert\varphi\Vert_1(1+\Vert J\Vert_1).$$ Since $\int J(x)~dx=1$, we obtain (\[zero\]) immediately by applying the Fubini theorem. Since ${\cal L}\varphi=J*\varphi-\varphi,$ to prove inequality (\[signum\]), it is sufficient to use the estimate $$\left\vert \int J*\varphi\cdot{{\rm sgn}\,}\varphi~dx\right\vert\leqslant\int\int J(y)\vert\varphi(x-y)\vert~dxdy=\int \vert \varphi(x)\vert~dx,$$ which follows from the Fubini theorem and assumptions (\[as J\]). \[convex\] Let $\varphi\in L^1(\mathbb R)$ and let $g\in C^2(\mathbb R)$ be a convex function. Then $$\label{convex inequality} {\cal L} g(\varphi)\geqslant g'(\varphi){\cal L} \varphi \qquad a.e.$$ The convexity of the function $g$ leads to the following inequality $$g(\varphi(x-y))-g(\varphi(x))\geqslant g'(\varphi(x))[\varphi(x-y)-\varphi(x)].$$ Multiplying this inequality by $J(y)$ and integrating it with respect to $y$ over $\mathbb R,$ we obtain the inequality (\[convex inequality\]). For simplicity of the exposition, we first formulate some auxiliary lemmas. We start with known results concerning the initial value problem for the viscous Burgers equation (\[eq-app\])-(\[ini-app\]). The following estimates can be deduced from the explicit formula for solutions to the problem (\[eq-app\])-(\[ini-app\]). We refer the reader to [@HN] for detailed calculations, and to [@KT] for additional improvements.
\[w\_R\] Problem (\[eq-app\])-(\[ini-app\]) with $u_-<u_+$ has a unique solution $w(x,t)$ satisfying $u_-<w(x,t)<u_+$ and $w_x(x,t)>0$ for all $(x,t)\in\R\times(0,\infty)$. Moreover, for every $p\in[1,\;\infty]$, there is a constant $C=C(p,u_-,u_+)>0$ such that $$\|w_x(t)\|_{p}\leq C t^{-1+1/p}, \quad \|w_{xx}(t)\|_{p}\leq C t^{-3/2+1/(2p)}$$ and $$\|w(t)-w^R(t)\|_{p}\leq C t^{-(1-1/p)/2},$$ for all $t>0$, where $w^R(x,t)$ is the rarefaction wave (\[rarefaction\]). Our goal is to estimate $\Vert {u^\varepsilon}(t)-w(t)\Vert_{p},$ where ${u^\varepsilon}={u^\varepsilon}(x,t)$ is a solution of the regularized problem (\[rownanie zregularyzowane\])-(\[warunek ue\]) and $w=w(x,t)$ is the smooth approximation of the rarefaction wave $w^R.$ First, we deal with the $L^1$-norm. \[log\] Assume that ${u^\varepsilon}={u^\varepsilon}(x,t)$ is a solution of problem (\[rownanie zregularyzowane\])-(\[warunek ue\]) from Theorem \[istnienie zregularyzowanego\]. Let $w=w(x,t)$ be the smooth approximation of the rarefaction wave. Then, there exists a constant $C>0$ independent of $t$ and of $\varepsilon>0$ such that $$\label{log_nierownosc} \Vert {u^\varepsilon}(t)-w(t)\Vert_1\leqslant C\log (2+t)\qquad \text{for all} \quad t>0.$$ The function ${v^\varepsilon}(x,t)={u^\varepsilon}(x,t)-w(x,t)$ satisfies the following equation $${v^\varepsilon}_t-{\cal L} {v^\varepsilon}+\left(\frac{({v^\varepsilon})^2}{2}+{v^\varepsilon}w\right)_x={\cal L} w-w_{xx}.$$ We multiply it by ${{\rm sgn}\,}{v^\varepsilon}$ and we integrate over $\mathbb R$ to obtain $$\label{*} \begin{split} \frac{d}{dt}\int \vert {v^\varepsilon}\vert ~dx-\int {\cal L} {v^\varepsilon}~{{\rm sgn}\,}{v^\varepsilon}dx&+ \frac{1}{2}\int \left(({v^\varepsilon})^2+2{v^\varepsilon}w\right)_x ~{{\rm sgn}\,}{v^\varepsilon}dx\\ &=\int ({\cal L} w-w_{xx})~{{\rm sgn}\,}{v^\varepsilon}dx. \end{split}$$ By Lemma \[Kato\], the second term on the left-hand side of (\[\*\]) is non-negative.
For the third term, we approximate the sgn function by a smooth and nondecreasing function $\varphi=\varphi(x)$. Thus, we obtain $$\begin{aligned} \int [({v^\varepsilon})^2+2{v^\varepsilon}w]_x \varphi({v^\varepsilon}) dx &=-\int (({v^\varepsilon})^2+2{v^\varepsilon}w)\varphi'({v^\varepsilon}){v^\varepsilon}_x \;dx\\ &=- \int \Psi({v^\varepsilon})_x\; dx+ \int w_x \Phi({v^\varepsilon})\; dx,\end{aligned}$$ where $\Psi(s)=\int_0^sz^2\varphi'(z)\,dz$ and $\Phi(s)=\int_0^s2z\varphi'(z)\,dz$. Here, the first term on the right hand side equals zero and the second one is nonnegative because $w_x\geq 0$ and $\Phi(s)\geq 0$ for all $s\in \R$. Hence, an approximation argument gives $\int[({v^\varepsilon})^2+~2{v^\varepsilon}w]_x {{\rm sgn}\,}{v^\varepsilon}\,dx\geq 0$. Now, we estimate the term on the right hand side of (\[\*\]). First, we notice that using the Taylor formula, we have $$\begin{split} {\cal L} w(x,s)&=(J*w-w)(x,s)=\int J(y)[w(x-y,s)-w(x,s)]dy\\ &=\int J(y) y w_x(x,s)~dy+ \int J(y) \frac{y^2}{2} w_{xx}(x+\theta y,s)~dy, \end{split}$$ where $\int J(y) y w_x(x,s) dy=w_x(x,s)\int J(y) y dy=0$ by the symmetry assumption from (\[as J\]). Therefore, by the moment assumption in (\[as J\]), we can estimate the integral on the right-hand side of (\[\*\]) as follows $$\begin{split} \left\vert \int ({\cal L} w-w_{xx})~{{\rm sgn}\,}{v^\varepsilon}~dx\right\vert&\leqslant \int J(y) \frac{y^2}{2} \int \vert w_{xx}(x+\theta y,s)\vert~dxdy+\Vert w_{xx}\Vert_1\\ &\leqslant C \Vert w_{xx}\Vert_1. \end{split}$$ Consequently, applying these estimates to inequality (\[\*\]) we obtain the following differential inequality $$\label{**} \frac{d}{dt} \Vert {v^\varepsilon}(t)\Vert_1\leqslant C \Vert w_{xx}(t)\Vert_1.$$ Now, by Lemma \[w\_R\], we have the inequality $\Vert w_{xx}(t)\Vert_1\leqslant Ct^{-1}$ for all $t>0,$ which combined with (\[\*\*\]) after integration completes the proof of Lemma \[log\].
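The sign properties (\[zero\]) and (\[signum\]) of Lemma \[Kato\], which drive the estimate above, can also be observed numerically. The following sketch is an illustration only; the triangular kernel, the test function and the tolerances are our own choices, not taken from the paper:

```python
import numpy as np

dx = 0.05
x = np.arange(-20, 20, dx)

# Symmetric, nonnegative, unit-mass kernel (triangular on [-2, 2]),
# an example satisfying the assumptions (as J).
z = np.arange(-2, 2 + dx, dx)
J = np.maximum(0.0, 1 - np.abs(z) / 2)
J /= J.sum() * dx

# A sign-changing, integrable test function phi.
phi = np.exp(-x**2 / 10) * np.sin(x)
Lphi = dx * np.convolve(phi, J, mode='same') - phi

I_zero = Lphi.sum() * dx                    # discrete version of (zero): ~ 0
I_sgn = (Lphi * np.sign(phi)).sum() * dx    # discrete version of (signum): <= 0
print(I_zero, I_sgn)
```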
Now, we are in a position to prove the convergence of regularized solutions towards the rarefaction wave. [*Proof of Theorem \[jeszcze nie wiem jaki label\].*]{} [*Part I. Decay estimates.*]{} In the case $p=1$, we use the equality (\[prawo zach\]) from Theorem \[wlasnosci rozwiazania\]. Since ${u^\varepsilon}_x\geqslant 0,$ we have $$\label{-} \Vert \partial_x {u^\varepsilon}(t)\Vert_1=\Vert \partial_x u_0\Vert_1 \qquad \text{for all} \quad t\geqslant 0.$$ In order to show the inequality (\[decay zregularyzowanego\]) for $p\in(1,\infty),$ we multiply equation (\[+\]) by $({u^\varepsilon}_x)^{p-1},$ and integrate the resulting equation over $\mathbb R$ to obtain $$\label{!!} \begin{split} \frac{1}{p}\frac{d}{dt} \int ({u^\varepsilon}_x)^p dx&=\varepsilon\int ({u^\varepsilon}_{xx})_x({u^\varepsilon}_x)^{p-1}dx+\int ({u^\varepsilon}_x)^{p-1}{\cal L} {u^\varepsilon}_x dx\\ &-\int \left(({u^\varepsilon}_x)^2+{u^\varepsilon}{u^\varepsilon}_{xx}\right)({u^\varepsilon}_x)^{p-1} dx. \end{split}$$ After an integration by parts, the first integral on the right-hand side of (\[!!\]) is formally equal to $-\varepsilon(p-1)\int ({u^\varepsilon}_x)^{p-2}({u^\varepsilon}_{xx})^2dx\leqslant 0,$ so this term may be dropped. The second integral on the right-hand side of (\[!!\]) is non-positive by inequalities (\[convex inequality\]) and (\[zero\]) as well as by the assumptions on the kernel of the operator ${\cal L}$ from (\[as J\]).
Thus, since ${u^\varepsilon}_x$ is integrable and nonnegative, after the following calculations involving the integration by parts on the third integral on the right-hand side of (\[!!\]) $$\int \left(({u^\varepsilon}_x)^2+{u^\varepsilon}{u^\varepsilon}_{xx}\right)({u^\varepsilon}_x)^{p-1} dx=\int ({u^\varepsilon}_x)^{p+1} dx+\int {u^\varepsilon}\left(\frac{({u^\varepsilon}_x)^p}{p}\right)_x dx=\left(1-\frac{1}{p}\right)\int({u^\varepsilon}_x)^{p+1}dx$$ we arrive at the inequality $$\label{A} \frac{1}{p}\frac{d}{dt} \Vert {u^\varepsilon}_x(t) \Vert_p^p\leqslant -\left(1-\frac{1}{p}\right)\Vert {u^\varepsilon}_x(t) \Vert_{p+1}^{p+1}.$$ Combining inequality (\[A\]) with the interpolation inequality $$\Vert {u^\varepsilon}_x(t) \Vert_p^\frac{p^2}{p-1}\leqslant \Vert {u^\varepsilon}_x(t) \Vert_{p+1}^{p+1}\Vert {u^\varepsilon}_x(t) \Vert_1^\frac{1}{p-1}$$ and with the conservation of the $L^1$-norm in (\[-\]) we obtain the following differential inequality $$\label{nierownosc rozniczkowa} \frac{d}{dt} \Vert {u^\varepsilon}_x(t) \Vert_p^p\leqslant -(p-1)\left( \Vert {u^\varepsilon}_x(t) \Vert_p^p\right)^\frac{p}{p-1} \Vert u_{0,x} \Vert_1^{-\frac{1}{p-1}}.$$ Consequently, the decay estimate (\[decay zregularyzowanego\]) results from inequality (\[nierownosc rozniczkowa\]) by standard calculations: setting $y(t)=\Vert {u^\varepsilon}_x(t) \Vert_p^p$ and $M=\Vert u_{0,x}\Vert_1,$ inequality (\[nierownosc rozniczkowa\]) gives $\frac{d}{dt}\, y^{-\frac{1}{p-1}}\geqslant M^{-\frac{1}{p-1}},$ hence $y(t)\leqslant M t^{-(p-1)},$ which is (\[decay zregularyzowanego\]) after taking the $p$-th root. The case $p=\infty$ in inequality (\[decay zregularyzowanego\]) follows immediately by passing to the limit $p\to\infty.$ [*Part II. Convergence towards the rarefaction wave.*]{} First, we recall that by Lemma \[w\_R\] the large time asymptotics of $w(t)$ is described in $L^p(\R)$ by the rarefaction wave $w^R(t)$ and the rate of this convergence is $t^{-(1-1/p)/2}$. Thus, it is enough to estimate the $L^p$-norm of the difference of the solution ${u^\varepsilon}$ of problem (\[rownanie zregularyzowane\])-(\[warunek ue\]) and of the smooth approximation of the rarefaction wave satisfying (\[eq-app\])-(\[ini-app\]).
To this end, using the following Gagliardo-Nirenberg-Sobolev inequality $$\label{GN} \|v\|_p\leq C\|v_x\|^a_\infty\|v\|^{1-a}_1,$$ valid for every $1<p\leq \infty$ and for $a=\frac{1}{2}(1-1/p),$ inequality (\[decay zregularyzowanego\]), and Lemma \[w\_R\], we have $$\begin{aligned} \| {u^\varepsilon}(t)-w(t)\|_p& \leq C(\| {u^\varepsilon}_x(t)\|_\infty+\|w_x(t)\|_\infty)^a\| {u^\varepsilon}(t)-w(t)\|_1^{1-a} \\ &\leq Ct^{-a}\| {u^\varepsilon}(t)-w(t)\|^{1-a}_1.\end{aligned}$$ Finally, the logarithmic estimate of the $L^1$-norm from Lemma \[log\] yields $\| {u^\varepsilon}(t)-w(t)\|_p\leq Ct^{-(1-1/p)/2}[\log(2+t)]^{(1+1/p)/2},$ because $1-a=(1+1/p)/2,$ which completes the proof. Passage to the limit $\varepsilon\to 0$ ======================================= Here, we prove a result on the convergence as $\varepsilon\to 0$ of solutions ${u^\varepsilon}$ of the regularized problem (\[rownanie zregularyzowane\])-(\[warunek ue\]) towards a weak solution to problem (\[rownanie\])-(\[warunek\]). \[zALS\] Let the assumptions on the initial data $u_0$ and the kernel $J$ from (\[as J\])-(\[as u02\]) hold true and let ${u^\varepsilon}={u^\varepsilon}(x,t)$ be a solution to problem (\[rownanie zregularyzowane\])-(\[warunek ue\]) with $\varepsilon>0.$ Then, there exists a sequence $\varepsilon_n\to 0$ such that $u^{\varepsilon_n}\to u$ in $C([t_1,t_2],L^1_{loc}(\mathbb R)),$ for every $t_2>t_1>0,$ as well as $u^{\varepsilon_n}\to u$ [*a.e.*]{} in $\mathbb R\times (0,\infty),$ where $u$ is a weak solution of problem (\[rownanie\])-(\[warunek\]). In the proof of this theorem, the following version of the Aubin-Lions-Simon compactness theorem will be used. \[ALS\] Let $T > 0, 1 < p \leqslant\infty,$ and $1 \leqslant q \leqslant\infty.$ Assume that $Y\subset X\subset Z$ are Banach spaces such that $Y$ is compactly embedded in $X$ and $X$ is continuously embedded in $Z$. If $A$ is a bounded subset of $W^{1,p}([0,T],Z)$ and of $L^q([0,T],Y),$ then $A$ is relatively compact in $L^q([0,T],X)$ and in $C([0,T],X)$ if $q=\infty$. The proof of Theorem \[ALS\] can be found in [@Si].
[*Proof of Theorem \[zALS\].*]{} First, we show the relative compactness of the family $\mathcal{F}=\{ {u^\varepsilon}: \varepsilon\in(0,1]\}$ in the space $C((0,+\infty),L^1_{loc}(\mathbb R)),$ and next, we pass to the limit $\varepsilon\to 0,$ using the Lebesgue dominated convergence theorem. [*Step*]{} 1. We check the assumptions of the Aubin-Lions-Simon theorem in the case $p=q=\infty,$ $Y=W^{1,1}(K),$ $A={\bf 1}_{K\times [t_1,t_2]}\cal{F},$ $X=L^1(K)$ and $Z=(C^2_K)^*,$ with arbitrary $t_2>t_1>0,$ where $K\subset\mathbb R$ is a compact set and $(C^2_K)^*$ is the topological dual space of the space of $C^2$ functions with compact support in $K$ (with its natural norm). First, we notice that $L^1(K)$ is obviously continuously embedded in $(C^2_K)^*,$ and by the Rellich-Kondrachov theorem $W^{1,1}(K)$ is compactly embedded in $L^1(K).$ By inequality (\[L\]), we have $$\left\vert \int_K {u^\varepsilon}(t){\varphi}dx\right\vert\leqslant\Vert{\varphi}\Vert_{C^2_K}\Vert u_0\Vert_\infty\vert K\vert$$ for every ${\varphi}\in C^\infty_c (\mathbb R).$ Hence, the family $\cal{F}$ is bounded in $L^\infty([t_1,t_2],(C^2_K)^*).$ Now, we check that $\{{u^\varepsilon}_t\}$ is bounded in $L^\infty([t_1,t_2],(C^2_K)^*).$ To this end, we multiply equation (\[rownanie zregularyzowane\]) by ${\varphi}\in C^\infty_c (\mathbb R)$ and integrate over $\mathbb R.$ Applying the integration by parts formula, we obtain the following estimate $$\label{,} \left\vert \int_K {u^\varepsilon}_t(t){\varphi}dx\right \vert\leqslant \Vert{\varphi}\Vert_{C^2_K}\left(\varepsilon\int_K \vert{u^\varepsilon}\vert dx+\int_K \vert{\cal L} {u^\varepsilon}\vert dx+\int_K {({u^\varepsilon})}^2dx\right).$$ By the assumption imposed on the kernel $J$ in (\[as J\]), the Young inequality, and inequality (\[L\]), the right-hand side of inequality (\[,\]) can be estimated by $\Vert{\varphi}\Vert_{C^2_K}\Vert u_0\Vert_\infty|K|\left(\Vert J\Vert_1+1+\varepsilon+\frac{1}{2}\Vert u_0\Vert_\infty\right).$ Now, again by inequality
(\[L\]) we have $\int_K\vert{u^\varepsilon}(t)\vert dx\leqslant\Vert u_0\Vert_\infty\vert K\vert.$ Moreover, from the decay estimate (\[decay zregularyzowanego\]) for $p=\infty$ we obtain $\int_K\vert{u^\varepsilon}_x(t)\vert dx\leqslant\frac{1}{t_1}\vert K\vert.$ All these estimates imply that $\cal{F}$ is bounded in $L^\infty([t_1,t_2],W^{1,1}(K)).$ Thus, the Aubin-Lions-Simon theorem ensures that $\cal{F}$ is relatively compact in $C([t_1,t_2],L^1(K))$ for all $t_2>t_1>0,$ and all compact sets $K\subset\mathbb R.$ [*Step*]{} 2. We deduce from Step 1 and from the Cantor diagonal argument that there exists a sequence $\varepsilon_n\to 0$ and a function $u\in C((0,+\infty),L^1_{loc}(\mathbb R))$ such that $u^{\varepsilon_n}$ converges as $\varepsilon_n\to 0$ towards $u$ in $C([t_1,t_2],L^1(K))$ for all $t_2>t_1>0,$ and all compact $K\subset\mathbb R.$ Up to another subsequence, we can also assume that $u^{\varepsilon_n}\to u$ [*a.e.*]{} on $\mathbb R\times (0,\infty).$ This convergence and inequality (\[L\]) imply that $u\in L^\infty(\mathbb R\times (0,+\infty)).$ Now, we prove that the function $u$ is a weak solution of problem (\[rownanie\])-(\[warunek\]). To this end, we multiply equation (\[rownanie zregularyzowane\]) by ${\varphi}\in C^\infty_c(\mathbb R\times [0,\infty)),$ integrate the resulting equation over $\mathbb R\times [0,\infty),$ and integrate by parts, to obtain $$\begin{split} \label{ue_slabe_2} &-\int_\mathbb R\int_0^\infty u^{\varepsilon_n}{\varphi}_t dtdx-\int_\mathbb R u_0(x){\varphi}(x,0)~dx= \\ &=\varepsilon_n \int_\mathbb R\int_0^\infty u^{\varepsilon_n}{\varphi}_{xx}~ dtdx+\int_\mathbb R\int_0^\infty u^{\varepsilon_n} {\cal L}{\varphi}~dtdx+\frac{1}{2}\int_\mathbb R\int_0^\infty (u^{\varepsilon_n})^2{\varphi}_x~dtdx.
\end{split}$$ Thus, since $u^{\varepsilon_n}\to u$ [*a.e.*]{} as $\varepsilon_n\to 0,$ the sequence $\{u^{\varepsilon_n}\}$ is bounded in $L^\infty$-norm by $\Vert u_0\Vert_\infty,$ and ${\cal L}{\varphi}$ is integrable, the Lebesgue dominated convergence theorem allows us to pass to the limit in equality (\[ue\_slabe\_2\]). This completes the proof of Theorem \[zALS\]. Now, we are in a position to prove Theorem \[th main\]. [*Proof of Theorem \[th main\].*]{} Denote by $u^{\varepsilon_n}$ the solution of the regularized problem (\[rownanie zregularyzowane\])-(\[warunek ue\]) and by $u$ the weak solution of problem (\[rownanie\])-(\[warunek\]). By Theorem \[zALS\], we know that $u^{\varepsilon_n}\to u$ [*a.e.*]{} on $\mathbb R\times (0,\infty)$ for a sequence $\varepsilon_n\to 0.$ Therefore, by the Fatou lemma and Theorem \[jeszcze nie wiem jaki label\], we have for each $R>0$ and $p\in[1,\infty]$ and for all $t>0$ the following estimate $$\Vert u(t)-w^R(t)\Vert_{L^p(-R,R)}\leqslant \liminf_{\varepsilon_n\to 0} \Vert u^{\varepsilon_n}(t)-w^R(t)\Vert_{L^p(-R,R)}\leqslant C t^{-(1-1/p)/2}[\log(2+t)]^{(1+1/p)/2}.$$ Since $R > 0$ is arbitrary and the right-hand side of this inequality does not depend on $R$, we complete the proof of inequality (\[glowna nierownosc\]) by letting $R\to \infty.$ Since solutions of the regularized problem satisfy the $L^1$-contraction property stated in Theorem \[wlasnosci rozwiazania\], by an analogous passage to the limit $\varepsilon_n\to 0$ as described above, we obtain the $L^1$-contraction inequality for weak solutions to the nonlocal problem (\[rownanie\])-(\[warunek\]). Hence, the weak solution to (\[rownanie\])-(\[warunek\]) is unique. [**Acknowledgements.**]{} This work was supported by the MNiSW grant No. IdP2011/000661. [00]{} G. Alberti, G. Bellettini, [*A nonlocal anisotropic model for phase transitions. I. The optimal profile problem*]{}, Math. Ann. [**310**]{} (1998), 3, 527–560. P. W. Bates, P. C. Fife, X. Ren, X.
Wang, [*Traveling waves in a convolution model for phase transitions*]{}, Arch. Rational Mech. Anal. [**138**]{} (1997), no. 2, 105–136. M. L. Cain, B. G. Milligan, A. E. Strand, [*Long-distance seed dispersal in plant populations*]{}, Am. J. Bot. [**87**]{} (2000), 9, 1217–1227. X. Chen, [*Existence, uniqueness, and asymptotic stability of traveling waves in nonlocal evolution equations*]{}, Adv. Differential Equations [**2**]{} (1997), no. 1, 125–160. X. Chen, H. Jiang, [*Traveling waves of a non-local conservation law*]{}, Differential Integral Equations [**25**]{} (2012), no. 11-12, 1143–1174. A. J. J. Chmaj, [*Existence of traveling waves for the nonlocal Burgers equation*]{}, Appl. Math. Lett. [**20**]{} (2007), 4, 439–444. J. S. Clark, [*Why Trees Migrate So Fast: Confronting Theory with Dispersal Biology and the Paleorecord*]{}, The American Naturalist [**152**]{} (1998), 2, 204–224. J. Coville, [*On uniqueness and monotonicity of solutions of non-local reaction diffusion equation*]{} [**185**]{} (2006), 3, 461–485. J. Coville, [*Travelling fronts in asymmetric nonlocal reaction diffusion equation: The bistable and ignition case*]{}, Preprint of the CMM. J. Coville, L. Dupaigne, [*On a non-local reaction diffusion equation arising in population dynamics*]{}, Proc. Roy. Soc. Edinburgh Sect. A [**137**]{} (2007), no. 4, 727–755. J. Coville, J. Dávila, S. Martínez, [*Nonlocal anisotropic dispersal with monostable nonlinearity*]{}, J. Differential Equations [**244**]{} (2008), no. 12, 3080–3118. J. Coville, J. Dávila, S. Martínez, [*Pulsating fronts for nonlocal dispersion and KPP nonlinearity*]{}, arXiv:1302.1053v1 \[math.AP\] 5 Feb 2013. C. Deveaux, E. Klein, [*Estimation de la dispersion de pollen à longue distance à l'échelle d'un paysage agricole : une approche expérimentale*]{}, Publication du Laboratoire Ecologie, Systématique et Evolution, 2004. J. Droniou, T. Gallouët, J.
Vovelle, [*Global solution and smoothing effect for a non-local regularization of a hyperbolic equation*]{}, J. Evol. Equ. [**3**]{} (2002), 499–521. G. B. Ermentrout, J. B. McLeod, [*Existence and uniqueness of travelling waves for a neural network*]{}, Proc. Roy. Soc. Edinburgh Sect. A [**123**]{} (1993), 3, 461–478. P. C. Fife, Mathematical aspects of reacting and diffusing systems, Lecture Notes in Biomathematics, 28, Springer-Verlag, Berlin, 1979. P. C. Fife, [*An integrodifferential analog of semilinear parabolic PDEs*]{}, Partial differential equations and applications, Lecture Notes in Pure and Appl. Math. [**177**]{}, 137–145, Dekker, New York, 1996. K. Hamer, [*Nonlinear effects on the propagation of sound waves in a radiating gas*]{}, Quart. J. Mech. Appl. Math. [**24**]{} (1971), 155–168. Y. Hattori and K. Nishihara, [*A note on the stability of the rarefaction wave of the Burgers equation*]{}, Japan J. Indust. Appl. Math. [**8**]{} (1991), 85–96. V. Hutson, S. Martinez, K. Mischaikow, G. T. Vickers, [*The evolution of dispersal*]{}, J. Math. Biol. [**47**]{} (2003), 6, 483–517. L. Ignat, T. Ignat, D. Stancu-Dumitru, [*Asymptotic behavior for a nonlocal convection-diffusion equation*]{}, arXiv:1301.6019v1 \[math.AP\] 25 Jan 2013. L. Ignat and J. Rossi, [*A nonlocal convection-diffusion equation*]{}, J. Func. Anal. [**251**]{} (2007), 399–437. A. M. Il’in and O. A. Oleinik, [*Asymptotic behaviour of solutions of the Cauchy problem for some quasi-linear equations for large values of time*]{}, Mat. Sb. [**51**]{} (1960), 191–216. G. Karch, Nonlinear evolution equations with anomalous diffusion, [*Qualitative properties of solutions to partial differential equations*]{}, Jindřich Nečas Cent. Math. Model. Lect. Notes, [**5**]{}, Matfyzpress, Prague, 2009, 2–68. G. Karch, Ch. Miao, X. Xu, [*On convergence of solutions of fractal Burgers equation toward rarefaction waves*]{}, SIAM J. Math. Anal. [**39**]{} (2008), no. 5, 1536–1549. S. Kawashima and S.
Nishibata, [*Shock waves for a model system of radiating gas*]{}, SIAM J. Math. Anal. [**30**]{} (1999), 95–117. S. Kawashima and Y. Tanaka, [*Stability of rarefaction waves for a model system of a radiating gas*]{}, Kyushu J. Math. [**58**]{} (2004), 211–250. M. Kot, J. Medlock, [*Spreading disease: integro-differential equations old and new*]{}, Math. Biosci. [**184**]{} (2003), 2, 201–222. C. Lattanzio and P. Marcati, [*Global well-posedness and relaxation limits of a model for radiating gas*]{}, J. Differential Equations [**190**]{} (2003), 439–465. P. Laurençot, [*Asymptotic self-similarity for a simplified model for radiating gas*]{}, Asymptot. Anal. [**42**]{} (2005), 251–262. A. De Masi, T. Gobron, E. Presutti, [*Travelling fronts in non-local evolution equations*]{}, Arch. Rational Mech. Anal. [**132**]{} (1995), 2, 143–205. A. De Masi, E. Orlandi, E. Presutti, L. Triolo, [*Glauber evolution with Kac potentials. I. Mesoscopic and macroscopic limits, interface dynamics*]{}, Nonlinearity [**7**]{} (1994), 633–696. A. De Masi, E. Orlandi, E. Presutti, L. Triolo, [*Uniqueness and global stability of the instanton in nonlocal evolution equations*]{}, Rend. Mat. Appl. (7) [**14**]{} (1994), 4, 693–723. J. D. Murray, Mathematical biology, Biomathematics, 19, Second Ed., Springer-Verlag, Berlin, 1993. F. M. Schurr, O. Steinitz, R. Nathan, [*Plant fecundity and seed dispersal in spatially heterogeneous environments: models, mechanisms and estimation*]{}, J. Ecol. [**96**]{} (2008), 4, 628–641. D. Serre, [*$L^1$-stability of constants in a model for radiating gases*]{}, Comm. Math. Sci. [**1**]{} (2003), 197–205. D. Serre, [*$L^1$-stability of non-linear waves in scalar conservation laws*]{}, Evolutionary Equations, Vol. I, pp. 473–553, Handbook Diff. Eqns., North-Holland, Amsterdam, 2004. J. Simon, [*Compact sets in the space $L^p(0, T;B)$*]{}, Ann. Mat. Pura Appl. (4) [**146**]{} (1987), 65–96. P.
Rosenau, [*Extending hydrodynamics via the regularization of the Chapman-Enskog expansion*]{}, Phys. Rev. A [**40**]{} (1989), 7193–7196.
--- abstract: 'Most problems in gravitational lensing require numerical solutions. The most frequent types of problems are (1) finding multiple images of a single source and classifying the images according to their properties like magnification or distortion; (2) propagating light rays through large cosmological simulations; and (3) reconstructing mass distributions from their tidal field. This lecture describes methods for solving such problems. Emphasis is put on using adaptive-grid methods for finding images, issues of spatial resolution and reliability of statistics for weak lensing by large-scale structures, and methodical questions related to shear-inversion techniques.' author: - | Matthias Bartelmann\ MPI für Astrophysik, P.O. Box 1317, D–85740 Garching, Germany date: '*Proceedings Contribution, Gravitational Lensing Winter School, Aussois 2003*' title: Numerical Methods in Gravitational Lensing --- Introduction ============ Only for very special lens models can numerical methods be avoided in gravitational lensing studies. There are three essential reasons for that. One is the non-linearity of gravitational lensing, i.e. the fact that image and source positions are related to one another in a non-linear fashion. This gives rise to the well-known phenomena of multiple imaging, strong image distortions, and so forth. The second reason is that lenses exist which are themselves best described by numerical models. Galaxy clusters are one example, lensing by large-scale structures is another. Although it is true that many aspects of gravitational lensing by large-scale structures can be derived analytically, detailed simulations require numerical techniques. The third reason is that the interpretation of gravitational lensing effects or events often requires the application of sophisticated algorithms to ever-growing amounts of data.
One example is the reconstruction of the projected mass density distribution of a galaxy cluster from the observed image distortions due to gravitational shear. Needless to say, there are many more aspects of numerical methods related to gravitational lensing than I can cover in this review. Outstanding examples are the highly elaborate methods that have been developed over recent years for determining image shapes of faint background galaxies on CCD frames, and for extracting the gravitational shear signal from them. This is a whole branch of data analysis on its own. Here, I can only deal with numerical methods for relating mass distributions to their gravitational lensing effects. Consequently, the outline of this lecture is as follows: First, I shall discuss methods for studying individual lenses, i.e. their imaging properties, their critical curves and caustics. In particular, the use of adaptive grids and techniques for searching and characterising images will be discussed. Second, I shall describe how extended lenses can be treated numerically using the multiple-lens plane theory. This will lead to the basic equations for tracing light rays through (simulated) cosmological volumes. A large fraction of the discussion will be devoted to issues of resolution and noise, and to spurious effects in simulated lensing statistics. Finally, third, I shall describe inversion techniques, i.e. methods for reconstructing the projected mass distribution of lenses whose distortion has been measured. The classic Kaiser-Squires method will be described, and also maximum-likelihood techniques and maximum-entropy methods. General lensing theory and the theory of weak lensing are covered by Koenraad Kuijken’s and Peter Schneider’s lectures in this volume. Basic references on lensing include the textbook by Schneider et al. (1992) and the lecture by Narayan & Bartelmann (1999); reviews of weak lensing are Mellier (1999) and Bartelmann & Schneider (2001).
Individual Lenses ================= Assumptions ----------- A brief reminder of the basic assumptions underlying the theory of individual lenses may be in order. There are three main assumptions. First, the Newtonian gravitational potential of the lens be small, $|\Phi|\ll c^2$. Second, velocities in the gravitational lens system, both of constituents within the lenses and of the lenses with respect to the rest frame of the microwave background, be small $v\ll c$. Third, the extent of the lenses along the line-of-sight be small compared to the other distances in the system, which are usually cosmological and thus comparable to the Hubble radius, $c/H_0=3\,h^{-1}\,\mathrm{Gpc}$, with $H_0$ being the Hubble constant and $h=H_0/100\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$. It is worth noting how well these assumptions are satisfied in ordinary lensing situations. Consider a galaxy cluster with mass $M=10^{15}\,h^{-1}M_\odot$. Assuming spherical symmetry, the Newtonian potential at a distance $R=1\,h^{-1}\,\mathrm{Mpc}$ from its centre is $$|\Phi|\approx\frac{G\,M}{R}\approx (2\times10^3\,\mathrm{km\,s^{-1}})^2\;, \label{eq:1}$$ evidently much smaller than the speed of light squared. A typical length scale for the radius of a galaxy cluster is $1-1.5\,h^{-1}\,\mathrm{Mpc}$, which is several hundred times smaller than typical distances in a cluster-lensing system. Finally, peculiar velocities of galaxy clusters with respect to the Hubble flow are of order several hundred $\mathrm{km\,s^{-1}}$, and typical velocities of galaxies within galaxy clusters reach of order $10^3\,\mathrm{km\,s^{-1}}$, but both velocities are way below the speed of light. The above assumptions hold even better for lensing by galaxies, of course. We can thus safely assume the above conditions to be satisfied. It is then possible to project the lensing mass distribution onto a plane perpendicular to the line-of-sight, the lens plane, and describe it by its surface mass density $\Sigma$. 
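The estimate in Eq. (1) is easy to verify directly; the following minimal Python sketch (rounded SI constants, $h=1$ for simplicity) checks the weak-field condition for the quoted cluster parameters:

```python
# Back-of-the-envelope check of the weak-field condition |Phi| << c^2
# for the cluster lens quoted above (h = 1; SI constants rounded).
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8          # speed of light [m/s]
M_sun = 1.989e30     # solar mass [kg]
Mpc = 3.086e22       # megaparsec [m]

M = 1e15 * M_sun     # cluster mass
R = 1.0 * Mpc        # distance from the cluster centre

Phi = G * M / R                      # |Phi| for a spherical mass
v_equiv_kms = Phi ** 0.5 / 1e3       # equivalent velocity scale [km/s]
ratio = Phi / c ** 2

print(f"|Phi| ~ ({v_equiv_kms:.0f} km/s)^2, |Phi|/c^2 ~ {ratio:.1e}")
```

This reproduces $|\Phi|\approx(2\times10^3\,\mathrm{km\,s^{-1}})^2$, i.e. $|\Phi|/c^2$ of order $10^{-5}$.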
Sources are assumed to be located on a corresponding plane, the source plane. A typical lens system is sketched in Fig. \[fig:1\]. ![Schematic view of a gravitational lens system. The lens is projected onto the lens plane perpendicular to the line-of-sight, sources are located on the parallel source plane. There are three distances required to describe the geometry of the system, i.e. the distances $D_\mathrm{d,s,ds}$ between the observer and the lens, the observer and the source, and between the lens and the source, respectively. Due to space-time curvature, these distances are generally not additive.[]{data-label="fig:1"}](fig1.eps){width="\hsize"} The three distances $D_\mathrm{d,s,ds}$ shown in Fig. \[fig:1\] and explained in its caption are generally not additive because of space-time curvature, thus $D_\mathrm{s}\ne D_\mathrm{d}+D_\mathrm{ds}$ in contrast to flat space-time. Coordinates and Notation ------------------------ Let us now introduce physical coordinates $\vec\xi$ and $\vec\eta$ on the lens and source planes, respectively. Alternatively, it is often convenient to introduce angular coordinates $\vec\theta$ and $\vec\beta$, which are obviously related to $\vec\xi$ and $\vec\eta$ through $$\vec\xi=D_\mathrm{d}\,\vec\theta\;,\quad \vec\eta=D_\mathrm{s}\,\vec\beta\;. \label{eq:2}$$ Dimensional coordinates are of course not suitable for numerical calculations, which can only handle numbers. We thus have to introduce a length scale $\xi_0$, or alternatively an angular scale $\theta_0$, in the lens plane. This length scale is so far *arbitrary*. It implies a length or angular scale $$\eta_0=\frac{D_\mathrm{s}}{D_\mathrm{d}}\,\xi_0\;, \quad\mbox{or}\quad \beta_0=\frac{\eta_0}{D_\mathrm{s}}= \frac{\xi_0}{D_\mathrm{d}}=\theta_0 \label{eq:3}$$ in the source plane. 
Dimension-less coordinates are then defined by $$\vec x=\frac{\vec\xi}{\xi_0}=\frac{\vec\theta}{\theta_0}\;, \quad\mbox{or}\quad \vec y=\frac{\vec\eta}{\eta_0}=\frac{\vec\beta}{\theta_0} \label{eq:4}$$ in the lens and source planes, respectively. The numerical code will have to deal with the dimension-less vectors $\vec x$ and $\vec y$. It helps numerical accuracy greatly if these numbers are of order unity. Thus, the first challenge in setting up a lensing simulation is to choose an appropriate length- or angular scale $\xi_0$ or $\theta_0$, which should both be adapted to the physical problem at hand, and to the requirement that numerical codes work most accurately if the numbers they are dealing with are neither too large nor too small, compared to machine accuracy. Choosing inappropriate length scales can, for instance, render image searches unsuccessful. The Lensing Potential --------------------- It will be convenient for the following discussion to introduce the lensing potential $\psi$ as the basic physical quantity for lensing studies. It is the scaled, projected Newtonian gravitational potential of the lens, $$\psi(\vec x)=\frac{2}{c^2}\, \frac{D_\mathrm{d}D_\mathrm{ds}}{\xi_0^2\,D_\mathrm{s}}\, \int\,\Phi(\xi_0\vec x,l)\,{\mathrm{d}}l\;. \label{eq:5}$$ The so-called *reduced* (i.e. appropriately scaled) deflection angle is the gradient of the potential, $$\vec\alpha(\vec x)=\nabla_{\vec x}\psi(\vec x)\;, \label{eq:6}$$ and the lensing convergence (i.e. the appropriately scaled surface-mass density) is $$\kappa(\vec x)=\frac{1}{2}\,\nabla_{\vec x}^2\psi(\vec x)= \frac{1}{2}\,\nabla_{\vec x}\cdot\vec\alpha(\vec x)\;.
\label{eq:7}$$ Finally, the gravitational tidal field is described by the two-component shear, $$\gamma_1(\vec x)=\frac{1}{2}\left(\psi_{,11}-\psi_{,22}\right)= \frac{1}{2}\left(\alpha_{1,1}-\alpha_{2,2}\right)\;,\quad \gamma_2(\vec x)=\psi_{,12}=\alpha_{1,2}\;, \label{eq:8}$$ where the convention was used that $f_{i,j}$ is the derivative of the $i$-th component of $\vec f$ with respect to the coordinate $x_j$. It is important to note that the fact that all lensing quantities can be derived from the scalar lensing potential establishes relations between all of them. This will be exploited several times later. Note that the lensing quantities must be rescaled in case the coordinate scale $\xi_0$ is changed. Suppose $\xi'_0$ is introduced instead of $\xi_0$. Since the physical surface-mass density of the lens must remain the same at any given physical location, the reduced deflection angle must transform as $$\vec\alpha(\vec x')=\frac{\xi_0'}{\xi_0}\,\vec\alpha(\vec x)\;, \label{eq:9}$$ and convergence and shear transform as $$\left[\kappa(\vec x')\,,\,\gamma_i(\vec x')\right]= \left(\frac{\xi_0'}{\xi_0}\right)^2\, \left[\kappa(\vec x)\,,\,\gamma_i(\vec x)\right]\;. \label{eq:10}$$ Imaging ------- Suppose now we were given some description of the lensing potential $\psi(\vec x)$, or of the deflection angle $\vec\alpha(\vec x)$. This description could be an analytical formula, or it could be in the form of an array, i.e. a set of numbers given at grid points $(x_i,x_j)$. We wish to know how the given lens images its background. We introduce a coordinate grid $\vec x_{ij}$ on the lens plane subject to the condition that it be sufficiently well resolved. This means that the smallest features in the lens must be covered by at least a few grid points. Since we are given the deflection angle as a function of position, we can compute a deflection-angle grid, $\vec\alpha_{ij}=\vec\alpha(\vec x_{ij})$.
The mapped grid on the source plane is then simply $\vec y_{ij}=\vec x_{ij}-\vec\alpha_{ij}$. This mapped grid will appear as a distorted image of the regular grid in the lens plane, as the example in the left panel of Fig. \[fig:2\] shows. ![*Left panel:* A regular grid in the lens plane (blue dots) is mapped onto the source plane (red dots) using a numerical description of a deflection-angle field. Distortions are clearly visible. *Right panel:* For each point in the lens plane, those points of a regular grid in the source plane (blue) are searched which surround its mapped point in the source plane (red).[]{data-label="fig:2"}](fig2a.eps "fig:"){width="0.49\hsize"} ![*Left panel:* A regular grid in the lens plane (blue dots) is mapped onto the source plane (red dots) using a numerical description of a deflection-angle field. Distortions are clearly visible. *Right panel:* For each point in the lens plane, those points of a regular grid in the source plane (blue) are searched which surround its mapped point in the source plane (red).[]{data-label="fig:2"}](fig2b.eps "fig:"){width="0.49\hsize"} The mapping process must now be reversed in order to obtain an image created by the lens. For doing so, the *source* plane is first covered with a regular grid, $\vec y_{ij}'$. Next, we loop over all grid points $\vec x_{ij}$ in the *lens* plane, find for each its mapped source point $\vec y_{ij}$ in the source plane, and search for the nearest neighbours $\vec y_{kl}'$ surrounding $\vec y_{ij}$ in the source plane. This is illustrated in the right panel of Fig. \[fig:2\]. The surface brightness of the source, known at the positions $\vec y_{kl}'$, can then be interpolated to $\vec y_{ij}$ and the result assigned to the image point $\vec x_{ij}$. That way, the surface brightness at all points in the lens plane can be determined, and thus the lensed image be constructed. Fig. \[fig:3\] shows an example.
![image](fig3.eps){width="\hsize"} The left panel of the figure shows a simulated CMB temperature fluctuation field of $10'\times10'$ angular size. The temperature increases from white to red. In essence, the temperature fluctuation corresponds to a fairly smooth gradient across the field. The right panel shows the gravitational lensing signature imprinted on the CMB at such angular scales by a galaxy cluster. The temperature visible at an angular position $\vec\theta$ on the sky, $T'(\vec\theta)$, is related to the intrinsic temperature $T$ through $T'(\vec\theta)=T[\vec\theta-\vec\alpha(\vec\theta)]$. Thus, the light deflection by the cluster causes the visible temperature distribution to be rearranged, yielding a highly specific pattern (Seljak & Zaldarriaga 2000). Critical Curves and Caustics ---------------------------- As mentioned in the introduction, the deflection-angle field contains full information on the lensing mass distribution. All other quantities like convergence and shear, but also image magnifications, follow from the deflection angle via differentiation. It is thus a common task to compute numerical derivatives. Suppose a function $f(\vec x)$ is tabulated on a grid, so that we are given the values $f_{ij}$ at the grid points $\vec x_{ij}$. The derivative of $f(\vec x)$ at a particular point $\vec x_{00}$ in the first coordinate direction is approximated by $$\left.\frac{\partial f(\vec x)}{\partial x_1} \right|_{\vec x_{00}}=\frac{1}{2h}\,\left(f_{10}-f_{-10}\right)+ \mathcal{O}(h^2)\;\, \label{eq:11}$$ where $h$ is the separation of the grid points in the chosen direction; cf. the left panel of Fig. \[fig:4\]. This centred difference has the advantage compared to the more straightforward one-sided differences $f_{10}-f_{00}$ or $f_{00}-f_{-10}$ of being second-order in the grid separation $h$. There are higher-order differencing schemes using function values at more than two grid points, but the second-order scheme is usually sufficient. 
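In array code, the centred difference of Eq. (11) and the relations of Eqs. (7)-(8) amount to only a few lines. The sketch below is a toy example: a circular Gaussian stands in for the lensing potential (an illustrative assumption, not a realistic lens), $\vec\alpha$ is formed by centred differences, and convergence and shear follow from it:

```python
import numpy as np

def d_dx1(f, h):
    """Centred difference along the first coordinate, O(h^2) accurate;
    one-sided O(h) differences are used at the grid edges."""
    out = np.empty_like(f)
    out[:, 1:-1] = (f[:, 2:] - f[:, :-2]) / (2 * h)
    out[:, 0] = (f[:, 1] - f[:, 0]) / h
    out[:, -1] = (f[:, -1] - f[:, -2]) / h
    return out

def d_dx2(f, h):
    return d_dx1(f.T, h).T

n, L = 128, 4.0
h = L / (n - 1)
x1, x2 = np.meshgrid(np.linspace(-L/2, L/2, n), np.linspace(-L/2, L/2, n))
psi = np.exp(-(x1**2 + x2**2) / 2.0)      # toy test potential

a1, a2 = d_dx1(psi, h), d_dx2(psi, h)     # alpha = grad psi, Eq. (6)
kappa = 0.5 * (d_dx1(a1, h) + d_dx2(a2, h))   # Eq. (7)
g1 = 0.5 * (d_dx1(a1, h) - d_dx2(a2, h))      # Eq. (8)
g2 = 0.5 * (d_dx2(a1, h) + d_dx1(a2, h))      # symmetrised psi_{,12}
```

For this potential, $\kappa=\frac{1}{2}(r^2-2)\,e^{-r^2/2}$ analytically, which the grid values reproduce to the expected $\mathcal{O}(h^2)$ accuracy; the symmetrised form of `g2` anticipates the recommendation below to average $\alpha_{1,2}$ and $\alpha_{2,1}$.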
No lensing quantity should vary strongly between two adjacent grid points because otherwise the resolution of the grid would be grossly insufficient. ![*Left panel:* Second-order numerical differentiation using centred differences. *Right panel:* A simple method for finding points in the lens plane next to a critical curve uses sign changes of the Jacobian determinant between the point considered and its four nearest neighbours.[]{data-label="fig:4"}](fig4a.eps "fig:"){width="0.49\hsize"} ![*Left panel:* Second-order numerical differentiation using centred differences. *Right panel:* A simple method for finding points in the lens plane next to a critical curve uses sign changes of the Jacobian determinant between the point considered and its four nearest neighbours.[]{data-label="fig:4"}](fig4b.eps "fig:"){width="0.49\hsize"} We will typically need derivatives of the deflection angle $\vec\alpha$. Since $\vec\alpha$ is itself the gradient of a scalar potential, its derivatives must satisfy $\alpha_{1,2}=\psi_{,12}=\psi_{,21}=\alpha_{2,1}$. It is thus usually preferable to check that this relation is satisfied within numerical accuracy, and to use $(\alpha_{1,2}+\alpha_{2,1})/2$ instead of either $\alpha_{1,2}$ or $\alpha_{2,1}$ alone. Critical curves in the lens plane are defined by the condition that the Jacobian determinant of the lens mapping vanish there, $\det\mathcal{A}(\vec x)=0$. The elements of the Jacobian matrix are $\mathcal{A}_{ij}=\delta_{ij}-\alpha_{i,j}$, thus the Jacobian determinant is $$D\equiv\det\mathcal{A}= (1-\alpha_{1,1})(1-\alpha_{2,2})-\alpha_{1,2}^2\;. \label{eq:12}$$ It can be computed once the (numerical) derivatives of both deflection-angle components have been determined. One method of identifying grid points in the lens plane next to the critical curve proceeds as follows. Let $S=\mathrm{sign}(D)$, and consider one particular grid point $\vec x_{00}$ in the lens plane.
The point is next to the critical curve if, and only if, the sign of the Jacobian determinant changes between it and one or more of its nearest neighbours. Hence, if the condition $$S_{00}(S_{-10}+S_{10}+S_{0-1}+S_{01})<4 \label{eq:13}$$ is satisfied, the grid point $\vec x_{00}$ is next to a critical curve (cf. the right panel of Fig. \[fig:4\]). Of course, $\vec x_{00}$ is not itself *on* the critical curve, but to the positional accuracy determined by the grid resolution, the position of the critical curve can be constrained that way. Points on the source plane next to the caustic curve are then easily found via the lens equation, $\vec y_{\mathrm{C}ij}=\vec x_{\mathrm{C}ij}-\vec\alpha(\vec x_{\mathrm{C}ij})$, where the $\vec x_{\mathrm{C}ij}$ are the grid points in the lens plane next to critical curves. As an example, consider a lens model for a spiral galaxy, consisting of a spherical halo and a flat disk seen almost edge-on (Bartelmann & Loeb 1998). The deflection-angle field of such a lens can be given analytically (cf. Keeton & Kochanek 1998). Convergence and total shear $(\gamma_1^2+\gamma_2^2)^{1/2}$ as determined by numerical differentiation are shown together with the modulus of the deflection angle in Fig. \[fig:5\]. ![image](fig5.eps){width="\hsize"} The critical curves and caustics of that lens model as determined with the method described above are shown in Fig. \[fig:6\]. ![Critical curves (left) and caustics (right) of the spiral-galaxy lens model illustrated in Fig. \[fig:5\].[]{data-label="fig:6"}](fig6a.eps "fig:"){width="0.49\hsize"} ![Critical curves (left) and caustics (right) of the spiral-galaxy lens model illustrated in Fig.
\[fig:5\].[]{data-label="fig:6"}](fig6b.eps "fig:"){width="0.49\hsize"} Adaptive Source Grids --------------------- One of the most prominent goals of gravitational lensing studies with individual strong lenses is to determine the imaging statistics of a given lens model, for example the abundance of highly magnified events, the occurrence of multiple imaging with the images satisfying certain conditions, and the like. This is done in principle by distributing many sources across the source plane, imaging them as described before, and determining the image properties. However, such events are rare. If one were to cover the entire source plane with a regular grid of sources, this grid would have to have a very high resolution for rare events to be reliably found. In turn, most of the sources probed would produce images failing the criteria imposed, so by far the largest fraction of the CPU time used would be wasted. This situation calls for adaptive grids. We know in advance that any strongly lensed image will occur near a critical curve, or any strongly lensed source near a caustic. It is those sources that we need to treat in detail, while those far from caustic curves are usually only required to normalise the statistics properly. One approach for defining an adaptive grid, and there may be others more suitable for a particular lensing situation, proceeds as follows. Again, we assume that we know the deflection angle of the lens, either because it was provided numerically or because it is described by a known analytic formula. Then, we saw in the preceding subsection how grid points can easily be identified which are close to a critical curve in the lens plane, or a caustic curve in the source plane. In order to save computational time, the source plane is first covered with a coarse grid. 
This grid should obviously be fine enough for the caustics to be properly resolved; for instance, it must not be so coarse that the two typical types of caustic curve, the radial and the tangential one, are separated by less than a few times the grid spacing. Next, those points on that coarse grid are identified and saved which are next to a caustic curve. This can, for instance, be done by masking, i.e. by attaching a logical variable to all grid points and setting it to either *true* or *false* depending on whether it is or is not next to a caustic curve. One can then cover the source plane with a grid whose resolution is doubled in both dimensions, and keep only those points which are identical with, or surrounded by, points of the coarse grid which were masked in the preceding step. This procedure can be repeated as often as desired, i.e. until the finest grid level reaches the ultimately required resolution. Note that it is not the grids and their masks which need to be saved, but only the coordinates of those grid points which are either part of the coarse initial grid, or whose logical mask values are *true*. That way, lists of source positions can be constructed which are to be probed later for the images they give rise to. Naturally, this can only be a basic recipe which needs to be adapted to the situation at hand. For instance, the condition that grid points need to be next to a caustic can be replaced by the condition that the absolute value of the Jacobian determinant be less than a given threshold which can be lowered at each step of grid refinement. Such a criterion would naturally increase the grid resolution near such grid positions where sources are certain to be highly magnified. Of course, if statistics is the ultimate goal, one has to take into account that sources near caustic curves were positioned such as to have an unfair advantage over sources far from caustics.
Since we have chosen to double the grid resolution at each refinement step, each source on a refined grid represents only a quarter of the area on the source plane represented by a source on the next coarser grid. Assigning a statistical weight of unity to the sources on the finest grid, the weight must quadruple for each coarser level. If the grid was refined $N$ times, the weight of sources on the coarsest grid is thus $w_i=2^{2N}$. Each source is assigned a statistical weight $w_i$ in that way, and counts $w_i$ times in the final statistical evaluation. The left panel in Fig. \[fig:7\] shows the source locations chosen for evaluating image statistics of the spiral lens model illustrated in Fig. \[fig:6\]. ![image](fig7a.eps){width="0.49\hsize"} ![image](fig7b.eps){width="0.49\hsize"} Finding Images -------------- The principle of finding the images of a given source is simple: Given a source at position $\vec y_\mathrm{s}$, find those grid points $\vec x_{ij}$ on the lens plane which are mapped sufficiently close to $\vec y_\mathrm{s}$, i.e. whose mapped points $\vec y_{ij}$ are within a specified distance from $\vec y_\mathrm{s}$. The problem with this approach is that a square-shaped or rectangular grid cell from the image plane is mapped onto a distorted figure in the source plane. In most cases, this figure will be a parallelogram, but in rare cases, opposing corners of the original rectangle may even be interchanged on the source plane. How can it then be decided whether a given point in the source plane is inside or outside the mapped grid cell, or in other words, whether the image of the given source falls within that particular grid cell on the lens plane? The solution is to split each grid cell in the lens plane into two triangles, because a mapped triangle always remains a triangle, which always has a well-defined interior (cf. Schneider et al. 1992). ![Illustration of the technique for finding images described in the text.
Grid cells in the lens plane are split into triangles (left panel), which have a well-defined interior after being mapped back onto the source plane (right panel). This would not necessarily be the case for rectangular grid cells. A source is contained by a triangle if all mixed cross products $\vec d_i\times\vec d_j$ for the shown vectors $\vec d_i$ are positive.[]{data-label="fig:8"}](fig8.eps){width="\hsize"} Consider Fig. \[fig:8\]. The three grid points marked on the lens plane in the left panel of the figure are mapped to the distorted triangle shown on the right panel, which contains the source position. Call $\vec d_{1,2,3}$ the three vectors from the mapped triangle’s corners towards the source position. It can be shown that the source is inside the mapped triangle if the three vector products $$\vec d_1\times\vec d_2\;,\;\vec d_1\times\vec d_3\;,\; \vec d_2\times\vec d_3 \label{eq:14}$$ are all positive, with the vector product in two dimensions being defined as $$\vec a\times\vec b\equiv a_1b_2-a_2b_1\;. \label{eq:15}$$ One straightforward way to verify this condition is to convince one’s self that the source point is inside the triangle if all vectors $\vec d_i$ point within the angles spanned by the adjacent sides of the triangle, and that this condition translates to Eq. (\[eq:14\]) above. This algorithm for finding images works well as long as the separation between images is larger than the size of the grid cells in the lens plane. Very close images can be contained within the same grid cell, in which case the algorithm would find only one. Of course, this potential problem can be remedied by increasing the grid resolution, but then a very large number of grid cells would have to be checked in vain for containing an image. Again, a viable solution uses adaptive grids. One can start with a coarse grid on the lens plane. 
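The triangle test of Eqs. (\[eq:14\]) and (\[eq:15\]) can be sketched in a few lines. The variant below checks that the source lies on the same side of all three edges; this is equivalent to the all-positive criterion when the mapped triangle is positively oriented, but does not assume a particular orientation:

```python
def cross2d(a, b):
    """Two-dimensional vector product, Eq. (15): a x b = a1*b2 - a2*b1."""
    return a[0] * b[1] - a[1] * b[0]

def point_in_triangle(p, a, b, c):
    """True if point p lies inside (or on the edge of) triangle (a, b, c).

    The point is inside if it lies on the same side of all three edges,
    i.e. if the three cross products below all have the same sign.
    """
    s1 = cross2d((b[0] - a[0], b[1] - a[1]), (p[0] - a[0], p[1] - a[1]))
    s2 = cross2d((c[0] - b[0], c[1] - b[1]), (p[0] - b[0], p[1] - b[1]))
    s3 = cross2d((a[0] - c[0], a[1] - c[1]), (p[0] - c[0], p[1] - c[1]))
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or \
           (s1 <= 0 and s2 <= 0 and s3 <= 0)

# A mapped (distorted) triangle in the source plane, hypothetical corners:
a, b, c = (0.0, 0.0), (1.0, 0.1), (0.3, 0.9)
print(point_in_triangle((0.4, 0.3), a, b, c))  # True: source inside
print(point_in_triangle((1.0, 1.0), a, b, c))  # False: source outside
```

In practice this test is applied to the three mapped corners of each half grid cell, for every cell on the lens plane.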
Searching for images on that coarse grid will almost certainly not yield all images of a multiply imaged source, but those missed will be closer than the grid separation to those found. Then, those grid cells containing images can individually be covered with a highly resolved grid, and the image search repeated on those sub-grids. Hence, the first step represents a coarse scan of the lens plane for grid cells containing at least one image, and the second step scans only those regions on the lens plane in detail where images are sure to be found. If needed, further sub-grids can be similarly nested. Of course, even though this procedure is highly adaptive and efficient, it always has a remaining resolution limit, and images closer than that will not be resolved. It is then important to adapt the resolution of the finest sub-grid to the situation at hand, for instance such that any images left unresolved by the algorithm would not be resolved by observations either. The right panel in Fig. \[fig:7\] illustrates the result of an adaptive image search for all sources at the positions shown in the figure’s left panel. Colours denote image numbers: Black means one image, blue three, and red five, while green shows source positions for which an even number of images has been found, in contradiction to the necessarily odd image number produced by non-singular lenses. Such events are rare, but they do occur because of the finite resolution limit of the algorithm applied. Figure \[fig:9\] gives an example for possible results of that adaptive technique for finding images. Colour-coded is the total magnification of point sources in the source plane behind the almost edge-on spiral lensing galaxy introduced above. The increasing spatial resolution towards the caustic curves is evident.
The panel inserted into the figure shows caustics (blue) and critical curves (red) of the lens, the source position as a blue dot just inside the right-hand “naked” cusp, and the three images as red hexagons whose size logarithmically encodes the image magnification. ![The colour encodes the total magnification of a point source lensed by an almost edge-on spiral galaxy; blue means a magnification near unity, yellow means very high magnification. The adaptive resolution of grid cells on the source plane is clearly visible. The size of the grid cells decreases substantially towards regions of high magnification. The inserted panel shows caustics and critical curves of the same lens (blue and red, respectively), a source position close to the right-hand “naked” cusp, and the three images as red hexagons, whose size logarithmically encodes their magnification.[]{data-label="fig:9"}](fig9.eps){width="\hsize"} Asymmetric Lenses ----------------- So far, we have used a model for a spiral galaxy as an example for a complex lens whose properties need to be determined numerically. Despite its complexity, the model is still highly symmetric; and what is more, its deflection angle is given as an analytic formula. Sources were so far assumed to be point-like. Let us now increase the level of complexity and use a numerically simulated galaxy cluster to gravitationally lens extended sources. Again, we assume the deflection angle to be given and postpone the question as to how it can be determined from an $N$-body simulation. All techniques described above for computing convergence and shear from the deflection angle, for finding critical curves and caustics, for placing sources on an adaptive grid, and for finding images within grid cells split into triangles remain valid unchanged. Figure \[fig:10\] shows an example. 
![image](fig10a.eps){width="0.49\hsize"} ![image](fig10b.eps){width="0.49\hsize"} The modulus of the cluster’s deflection angle is shown as the colour plot in the left panel. The right panel shows a section of the source plane with the dots marking source positions, and their colour illustrating the image number. Black, blue and red means one, three, or five images, respectively. The caustic structures can clearly be identified as the boundaries between black and blue and between blue and red, respectively. Imaging Extended Sources ------------------------ Extended sources can be described in a variety of ways. What follows is a simple description for elliptical sources, but alternative source models can easily be constructed along similar lines. We assume that source positions $\vec y_\mathrm{s}$ have already been found, preferentially on an adaptive grid as described before. Also, we need to be sure that the grid resolution in the source plane is sufficiently high as to resolve the smallest sources to be considered. Elliptical sources are described by three more parameters, viz. their size, their ellipticity, and their position angle $\phi$. Let us describe the ellipticity by $e=b/a$, with $a$ and $b$ being the semi-major and semi-minor axes of the ellipse, respectively. Finally, we introduce an effective radius $r$ by demanding that a circle of radius $r$ have the same area as the ellipse, hence $r=\sqrt{ab}$. By rotating by an angle $\phi$ an ellipse centred on the coordinate origin whose axes are aligned with the coordinate axes, it can straightforwardly be shown that a grid point $\vec y_{ij}$ is enclosed by the ellipse if the condition $$\begin{aligned} \cos^2\phi\left(\frac{\delta y_1^2}{e}+e\delta y_2^2\right)&+& \sin^2\phi\left(\frac{\delta y_2^2}{e}+e\delta y_1^2\right) \nonumber\\ &+& 2\delta y_1\delta y_2\sin\phi\cos\phi\left(\frac{1}{e}-e\right) \le r^2 \label{eq:16}\end{aligned}$$ is satisfied, where $\delta\vec y\equiv\vec y_{ij}-\vec y_\mathrm{s}$. 
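A direct transcription of the condition (\[eq:16\]), with hypothetical source parameters; note that in this convention the semi-minor axis lies along $\delta y_1$ for $\phi=0$:

```python
import numpy as np

def inside_ellipse(dy1, dy2, e, r, phi):
    """Evaluate the elliptical-source condition of Eq. (16).

    dy1, dy2 : components of delta y = y_ij - y_s
    e        : axis ratio e = b/a
    r        : effective radius r = sqrt(a*b)
    phi      : position angle of the ellipse
    """
    c, s = np.cos(phi), np.sin(phi)
    lhs = (c**2 * (dy1**2 / e + e * dy2**2)
           + s**2 * (dy2**2 / e + e * dy1**2)
           + 2.0 * dy1 * dy2 * s * c * (1.0 / e - e))
    return lhs <= r**2

# Hypothetical source with a = 2 and b = 0.5, so e = 0.25 and r = 1:
e, r = 0.25, 1.0
print(inside_ellipse(0.49, 0.0, e, r, 0.0))  # True: just inside the minor axis
print(inside_ellipse(0.6, 0.0, e, r, 0.0))   # False: outside
print(inside_ellipse(0.0, 1.99, e, r, 0.0))  # True: just inside the major axis
```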
If the grid point $\vec x_{ij}$, whose image in the source plane is $\vec y_{ij}$, satisfies Eq. (\[eq:16\]), the image point $\vec x_{ij}$ is part of the source, and the image can be constructed by assigning the source’s surface brightness at $\vec y_{ij}$ to the image point $\vec x_{ij}$. By mapping the entire lens plane onto the source plane and checking Eq. (\[eq:16\]) for each individual imaged grid point $\vec y_{ij}$, all image points belonging to the given source can be identified. It is often desired for statistical purposes to automatically characterise a large number of images. An example is the determination of cross sections for the formation of large gravitational arcs by a numerically simulated galaxy cluster, for which a large number of sources need to be imaged and the image properties automatically quantified to search for the rare “giant” arcs. Most of the methods described here have been introduced and used extensively e.g. by Bartelmann & Weiss (1994), Bartelmann et al. (1995, 1998), Meneghetti et al. (2000, 2001); see also the contribution by Massimo Meneghetti to this volume. A source may have multiple images, thus the point sets in the lens plane found by imaging extended sources need not be connected. The first step is therefore to group the image points into images. This can be done with a variant of the classical friends-of-friends algorithm: Pick one arbitrary point out of any given set of image points and search for another image point which is at most $\sqrt{2}h$ grid units away from the first point; $h$ is the grid size in the lens plane. If there is such a point, it is called a “friend” and grouped into the same image as the first point. Now take the “friend” and repeat until no further “friends” can be found and the image is complete. If more image points are left on the lens plane, pick one of those and repeat the process until all image points have been grouped. 
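The friends-of-friends grouping just described can be sketched as follows; this is a minimal Python version in which the points carry lens-plane coordinates and the linking length is $\sqrt{2}h$:

```python
def group_image_points(points, h):
    """Group image points into separate images, friends-of-friends style.

    points : iterable of (x1, x2) coordinates of image points
    h      : grid spacing in the lens plane; two points are friends if
             they are at most sqrt(2)*h apart (adjacent or diagonal cells)
    """
    remaining = set(points)
    images = []
    while remaining:
        queue = [remaining.pop()]        # pick an arbitrary starting point
        image = list(queue)
        while queue:
            p = queue.pop()
            # all remaining points within the linking length of p
            friends = {q for q in remaining
                       if (p[0] - q[0])**2 + (p[1] - q[1])**2 <= 2.0 * h * h}
            remaining -= friends
            image.extend(friends)
            queue.extend(friends)
        images.append(image)
    return images

# Two well-separated clumps of image points on a unit grid (h = 1):
pts = [(0, 0), (1, 0), (1, 1), (10, 10), (11, 11)]
images = group_image_points(pts, h=1.0)
print(sorted(len(im) for im in images))  # [2, 3]
```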
If the image is large enough, and the grid resolution on the lens plane is high enough for the image to consist of many points, the image magnification is simply the ratio between the number of pixels covered by an image and the number of pixels covered by the source. Once all image points belonging to a single image have been identified, it is often useful to determine the boundary points of that image, e.g. by identifying those points inside an image which have a neighbour outside the image. By suitably ordering the boundary points, a boundary line can be found whose length can be measured and used in further steps of the automatic image classification. Next, the curvature of the image can be found by first identifying the image point which is the image of the source centre, then searching for the boundary point most distant from the so-defined image centre, and finally searching for the boundary point most distant from the first boundary point. These three points uniquely define a circle whose radius can be used as an approximation for the arc radius. And so on, you get the drift: Once image points are grouped into individual images and boundary curves have been determined, images can be classified by adapting elementary geometrical figures to them. Deflection Angles of Asymmetric Lenses -------------------------------------- So far, we have assumed the deflection angle to be given either as an analytic expression or as two two-dimensional arrays of numbers giving its two components as a function of position in the lens plane. We now need to describe methods for obtaining the deflection angle of a numerically simulated lens. The first issue to be discussed is the spatial resolution.
Since the simulated lens is composed of discrete particles which represent a smooth mass distribution in reality, the deflection angle must not be computed by simply summing up the deflection angles of the individual particles: The result would be a collection of microlenses rather than a single macrolens, having many spurious and undesired imaging features. Rather, the collection of particles has to be projected onto a lens plane, on which it needs to be smoothed in some way. We will return later to the issue of how particles should be sorted into grid cells. An important point to be addressed beforehand is how large the grid cells should be chosen. They should be small enough for important features of the lens to remain identifiable; they should be large enough for the surface density to lose the “graininess” due to its being composed of individual particles, and they should be large enough so that Poisson errors are smaller than a certain threshold. If the surface number density of projected particles is $n$, a cell of side length $h$ contains $nh^2$ particles on average, with a Poisson fluctuation of $\sqrt{nh^2}$; thus the discreteness of the particles gives rise to fluctuations in the surface-mass density. Demanding that the relative fluctuations of the density should be smaller than $\epsilon\ll1$, the cell size $h$ has to be chosen such as to satisfy $(nh^2)^{-1/2}\le\epsilon$. It is impossible to give a general rule applicable to the majority of lensing situations, but it is clear that resolution, smoothing and particle noise have to be carefully balanced by choosing the grid cell size appropriately. Assigning particle masses to grid points in order to obtain a smooth density distribution is an art of its own (cf. Hockney & Eastwood 1988). In principle, the particle mass could simply be attributed to the single grid point next to its position.
This “nearest grid point” (NGP) method is appropriate for particles near the centre of a cell, but particles near cell boundaries should be attributed to the cell and its neighbour(s) in order to avoid boundary effects like density discontinuities. Numerous schemes for interpolating particles across cells have been proposed. They are generally of the form $$Q(\vec x)=\sum_i\,W\left(\vec x-\vec x_i\right)\, Q(\vec x_i)\;, \label{eq:17}$$ where $Q$ is the quantity to be interpolated onto a point $\vec x$, e.g. the particle mass, the sum extends over all particles sufficiently close to the point of interest $\vec x$, and $W(\vec x-\vec x_i)$ is a smoothing or interpolation kernel depending on the separation vector between the particle position $\vec x_i$ and $\vec x$. The kernel is decomposed into three factors, $$W(\delta\vec x)=w(\delta x_1)w(\delta x_2)w(\delta x_3)\;, \label{eq:17a}$$ one for each dimension, the $i$-th of which depends only on the $i$-component of the separation vector. Interpolation methods can now be classified according to the kernel factors $w(\delta x)$ and their width. ![The “cloud-in-cell” (left panel) and “triangular shaped cloud” (right panel) interpolation schemes are illustrated here. The (projected) particle position is marked red, the grid points to which the particle mass is assigned are marked blue and green. The CIC and TSC schemes assign the particle mass to the eight and 27 nearest neighbours, respectively (in three dimensions).[]{data-label="fig:11"}](fig11a.eps "fig:"){width="0.49\hsize"} ![The “cloud-in-cell” (left panel) and “triangular shaped cloud” (right panel) interpolation schemes are illustrated here. The (projected) particle position is marked red, the grid points to which the particle mass is assigned are marked blue and green.
The CIC and TSC schemes assign the particle mass to the eight and 27 nearest neighbours, respectively (in three dimensions).[]{data-label="fig:11"}](fig11b.eps "fig:"){width="0.49\hsize"} The “cloud-in-cell” (CIC) scheme uses the kernel factors $$w_\mathrm{CIC}(\delta x)=\left\{\begin{array}{ll} 1-|\delta x|/h & \mbox{for}\quad |\delta x|<h \\ 0 & \mbox{otherwise} \end{array}\right.\;, \label{eq:18}$$ which implies that the particle is distributed over the four nearest grid points (in two dimensions). A more elaborate scheme is the “triangular shaped cloud” (TSC) method, which uses the kernel factors $$w_\mathrm{TSC}(\delta x)=\left\{\begin{array}{ll} 3/4-\delta x^2/h^2 & \mbox{for}\quad |\delta x|\le h/2 \\ (3/2-|\delta x|/h)^2/2 & \mbox{for}\quad h/2\le|\delta x|<3h/2 \\ 0 & \mbox{otherwise} \end{array}\right.\;. \label{eq:19}$$ The CIC and TSC interpolation schemes are illustrated for two dimensions in Fig. \[fig:11\]. For all schemes, the kernel has to be normalised such that all particle mass fractions add up to unity. Suppose now we have obtained the surface mass density on a grid $\kappa_{ij}=\kappa(\vec x_{ij})$, then the deflection angle can most straightforwardly be determined by direct summation as $$\vec\alpha_{ij}=\frac{1}{\pi}\,\sum_{kl}\,\kappa_{kl}\, \frac{\vec x_{ij}-\vec x_{kl}} {\left|\vec x_{ij}-\vec x_{kl}\right|^2}\;, \label{eq:20}$$ where the singular term $kl=ij$ is omitted from the sum. Depending on the number of grid cells, the direct summation can be prohibitively slow. In many circumstances of astrophysical interest, fast-Fourier techniques can then be applied. In order to see how this works, note that the deflection angle can be written as a convolution of the convergence $\kappa(\vec x)$ with a kernel $$\vec K(\vec x)=\frac{1}{\pi}\,\frac{\vec x}{\left|\vec x\right|^2}\;.
\label{eq:21}$$ This allows the Fourier convolution theorem to be applied, which holds that the Fourier transform of a convolution is the product of the Fourier transforms of the functions to be convolved, hence $$\hat{\vec\alpha}(\vec k)=\hat\kappa(\vec k)\hat{\vec K}(\vec k)\;. \label{eq:22}$$ The Fourier transform of the kernel $\vec K$ can be determined and tabulated once. Using fast-Fourier techniques to determine the Fourier transform of the convergence $\hat\kappa(\vec k)$ requires the convergence to be periodic on the lens plane. In many cases, this can be safely assumed or arranged. Often, lens planes are constructed from large-scale $N$-body simulations which have periodic boundary conditions by design, or the lens is an isolated object like a galaxy cluster, which can be surrounded by a sufficiently large field for the convergence to drop near zero everywhere around the edges of the field. Fast-Fourier methods speed up the computation of the deflection angle considerably. If necessary, derivatives of the deflection angle field can also be determined in Fourier space. Once the convergence has been Fourier transformed, one can employ the two-dimensional Poisson equation to compute the Fourier transform of the lensing potential, $$\hat\psi=-\frac{2}{k^2}\,\hat\kappa\;, \label{eq:23}$$ from which the Fourier transforms of the deflection angle and the shear components can easily be determined, $$\hat{\vec\alpha}=-\mathrm{i}\,\vec k\,\hat\psi\;,\quad \hat\gamma_1=-\frac{1}{2}\left(k_1^2-k_2^2\right)\,\hat\psi\;,\quad \hat\gamma_2=-k_1k_2\,\hat\psi\;. \label{eq:24}$$ Relations like those and the exploitation of fast-Fourier methods are particularly relevant for simulating gravitational lensing by large-scale structures. 
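The Fourier-space relations (\[eq:23\]) and (\[eq:24\]) translate almost literally into code. The sketch below assumes a periodic convergence field and uses numpy's FFT sign convention, in which the deflection angle becomes $\hat{\vec\alpha}=+\mathrm{i}\,\vec k\,\hat\psi$ while the shear relations, involving second derivatives, are unchanged; the consistency check uses a single Fourier mode, for which the shear is known analytically:

```python
import numpy as np

def lensing_from_convergence(kappa, L):
    """Deflection angle and shear from the convergence via FFTs.

    kappa : convergence on an N x N grid, assumed periodic
    L     : side length of the field

    Solves the two-dimensional Poisson equation in Fourier space,
    psi_hat = -2 kappa_hat / k^2 (Eq. 23), then differentiates:
    alpha_hat = i k psi_hat (numpy's FFT sign convention) and the
    shear relations of Eq. (24).
    """
    N = kappa.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    k1, k2 = np.meshgrid(k, k, indexing="ij")
    ksq = k1**2 + k2**2
    ksq[0, 0] = 1.0                       # avoid 0/0 for the mean mode
    psi_hat = -2.0 * np.fft.fft2(kappa) / ksq
    psi_hat[0, 0] = 0.0                   # the k = 0 mode is unconstrained
    alpha1 = np.fft.ifft2(1j * k1 * psi_hat).real
    alpha2 = np.fft.ifft2(1j * k2 * psi_hat).real
    gamma1 = np.fft.ifft2(-0.5 * (k1**2 - k2**2) * psi_hat).real
    gamma2 = np.fft.ifft2(-k1 * k2 * psi_hat).real
    return alpha1, alpha2, gamma1, gamma2

# Consistency check with a single mode, kappa = cos(2 pi x1 / L),
# for which gamma1 = kappa and gamma2 = 0 by Eq. (24):
N, L = 64, 1.0
x = np.arange(N) * L / N
X1, X2 = np.meshgrid(x, x, indexing="ij")
kappa = np.cos(2.0 * np.pi * X1 / L)
a1, a2, g1, g2 = lensing_from_convergence(kappa, L)
print(np.allclose(g1, kappa), np.allclose(g2, 0.0))  # True True
```

Setting the $k=0$ mode of the potential to zero simply fixes the mean convergence, which the periodic field cannot constrain anyway.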
Lensing by Large-Scale Structures ================================= Resolution Issues ----------------- Obviously, the thin-lens approximation that we have been using so far breaks down if one wishes to study gravitational lensing by large-scale structures. The solution then is to cover the complete cosmic volume whose lensing effects one wants to simulate with simulation boxes stacked along the line-of-sight, to project suitable slices on individual lens planes, and to use multiple lens-plane theory for describing light propagation. The multiplicity of lens planes, and the general weakness of lensing by large-scale structures, make questions of angular and mass resolution particularly relevant for cosmic lensing. For instance, lens planes close to the observer are typically poorly resolved because even small grid cells span a large solid angle near the observer, and making grid cells smaller is not generally an acceptable solution because then the number of particles per grid cell becomes small, and the shot noise possibly unacceptably large. However, lens planes near the observer are less efficient than lens planes approximately half-way to the source because the lensing efficiency function is zero at the observer and source redshifts and peaks in between. Yet, structures grow over time, so the effective lensing efficiency is skewed towards lower redshifts: although structures at low redshift are geometrically less efficient lenses, their density contrast keeps growing. By a related argument, sources at very high redshifts do not require the entire cosmological volume between them and the observer to be filled with lens planes because lens planes at very high redshift are geometrically inefficient and have a low density contrast. The left panel of Fig. \[fig:12\] shows two examples for the lensing efficiency function times the linear growth factor, which is the relevant quantity combining structure growth with geometrical efficiency.
![image](fig12a.eps){width="0.49\hsize"} ![image](fig12b.eps){width="0.49\hsize"} Similarly, the effective angular resolution of the simulation is dominated by the angular resolution of those lens planes near the peak in the combined efficiency function, i.e. the product of geometrical efficiency and linear growth factor. The shot noise caused by the discretisation of mass into particles is particularly important for studies of weak lensing by large-scale structures. Even in absence of density inhomogeneities, shot noise leads to density fluctuations. They need to be sufficiently smaller than the signal, i.e. the convergence fluctuations which cause weak lensing. In essence, this requirement also imposes a resolution limit. Suppose we wish to quantify the weak-lensing signal within a solid angle $\delta\Omega$. The volume spanned by $\delta\Omega$ within redshifts $z$ and $z+{\mathrm{d}}z$ is $${\mathrm{d}}V(z)=\delta\Omega\,D^2(z)\, \left|\frac{{\mathrm{d}}D_\mathrm{prop}}{{\mathrm{d}}z}\right|\,{\mathrm{d}}z\;, \label{eq:25}$$ where $D(z)$ and $D_\mathrm{prop}(z)$ are the angular diameter and proper distances to redshift $z$. In absence of density inhomogeneities, this volume element contains ${\mathrm{d}}N(z)$ particles, with $${\mathrm{d}}N(z)={\mathrm{d}}V(z)\,\frac{\bar\rho(z)}{m_\mathrm{p}}\;, \label{eq:26}$$ where $\bar\rho(z)$ is the mean matter density at redshift $z$, and $m_\mathrm{p}$ is the mass of an $N$-body particle in the simulation. The contribution to the lensing convergence by these particles has to be weighted by the effective lensing distance, $D_\mathrm{eff}(z,z_\mathrm{s})$, and by numerical factors. Poisson fluctuations in the particle number thus cause convergence fluctuations whose variance is $$\delta^2\kappa\propto\int_0^{z_\mathrm{s}} {\mathrm{d}}z\,D_\mathrm{eff}^2(z,z_\mathrm{s})\,{\mathrm{d}}N(z)\;. 
\label{eq:27}$$ These fluctuations need to be compared with, and smaller than, the convergence fluctuations due to large-scale structure, which are typically of order $\langle\kappa^2\rangle^{1/2}\approx5\%$ for sources near redshift unity and angular scales of order $1'$. According to Eqs. (\[eq:25\]) through (\[eq:27\]), the *rms* shot noise $\langle\delta^2\kappa\rangle^{1/2}$ scales like $\delta\Omega^{-1/2}$ (the numerical factors omitted from Eq. (\[eq:27\]) contain the squared per-particle convergence, which varies as $\delta\Omega^{-2}$, while ${\mathrm{d}}N\propto\delta\Omega$), thus the requirement that the signal-to-noise ratio $$\frac{\mathrm{S}}{\mathrm{N}}=\left( \frac{\langle\kappa^2\rangle}{\langle\delta^2\kappa\rangle} \right)^{1/2} \label{eq:28}$$ exceed a specified threshold translates into a lower limit to the solid angle $\delta\Omega$ which can reasonably be resolved by the simulation. The smallness of the *rms* cosmic convergence $\kappa_\mathrm{rms}=\langle\kappa^2\rangle^{1/2}$ implies that many particles need to be enclosed by the “cone” spanned by $\delta\Omega$ for the simulation to be reliable. The right panel of Fig. \[fig:12\] shows an example. The *rms* cosmic convergence in per cent and the *noise-to-signal* ratio are plotted as functions of angular scale. The noise level was adapted to an $N$-body simulation with particle mass $m_\mathrm{p}=6.8\times10^{10}\,h^{-1}\,M_\odot$. The curves show that the noise-to-signal ratio drops below unity for sources at redshift $z_\mathrm{s}=1$ only if the angular resolution is lowered to $\gtrsim5'$, while an angular resolution of $\gtrsim0.8'$ can be achieved for $z_\mathrm{s}=1000$ (i.e. for weak lensing of the CMB; Pfrommer 2002). Multiple Lens-Plane Theory -------------------------- Weak lensing by large-scale structures requires the cosmic volume to be split into multiple lens planes rather than a single one (for general reference on multiple lens-plane theory, see Schneider et al. 1992). The lens plane closest to the observer is the image plane which represents the observer’s sky.
A light ray piercing the image plane at a physical coordinate $\vec\xi_1$ is multiply deflected on $N$ lens planes and finally reaches the source plane at the physical coordinate $$\vec\eta(\vec\xi_1)=\frac{D_\mathrm{s}}{D_1}\vec\xi_1- \sum_{i=1}^N\,D_{i\mathrm{s}}\,\vec{\hat\alpha}(\vec\xi_i)\;, \label{eq:29}$$ where the $D_i$ and $D_{i\mathrm{s}}$ are the angular diameter distances from the observer to the $i$-th lens plane, and from the $i$-th lens plane to the source, respectively, and the sign convention follows the lens equation $\vec y=\vec x-\vec\alpha(\vec x)$ used above. The light ray passes the $i$-th plane at $\vec\xi_i$, where it is deflected by $\vec{\hat\alpha}(\vec\xi_i)$. Similarly, the $\vec\xi_i$ are determined by $$\vec\xi_j(\vec\xi_1)=\frac{D_j}{D_1}\vec\xi_1- \sum_{i=1}^{j-1}\,D_{ij}\,\vec{\hat\alpha}(\vec\xi_i)\;, \label{eq:30}$$ where $D_{ij}$ is the angular diameter distance from the $i$-th to the $j$-th lens plane. Introducing angular coordinates $\vec\theta_i=\vec\xi_i/D_i$ yields $$\vec\theta_j(\vec\theta_1)=\vec\theta_1- \sum_{i=1}^{j-1}\,\frac{D_{ij}D_\mathrm{s}}{D_jD_\mathrm{is}}\, \vec\alpha_i(\vec\theta_i)\;, \label{eq:31}$$ where we have introduced the *reduced* deflection angle $\vec\alpha_i=(D_{i\mathrm{s}}/D_\mathrm{s})\hat{\vec\alpha}$. We now define the matrices $$\mathcal{A}_i=\frac{\partial\vec\theta_i}{\partial\vec\theta_1} \;,\quad \mathcal{U}_i=\frac{\partial\vec\alpha_i}{\partial\vec\theta_i}\;. \label{eq:32}$$ Clearly, $\mathcal{A}_i$ is the Jacobian matrix of the lens mapping between the $i$-th lens plane and the image plane, thus $\mathcal{A}_N$ is the Jacobian matrix of the mapping between the source and image planes. The goal is thus to determine $\mathcal{A}_N$ in order to obtain convergence, shear, and magnification for a light ray starting out into direction $\vec\theta_1$.
The ray-tracing equation (\[eq:31\]) implies the recursion relation $$\mathcal{A}_j=\mathcal{I}- \sum_{i=1}^{j-1}\,\frac{D_{ij}D_\mathrm{s}}{D_jD_\mathrm{is}}\, \mathcal{U}_i\mathcal{A}_i\;, \label{eq:33}$$ starting with $\mathcal{A}_1=\mathcal{I}$, the identity matrix. In summary, the deflection-angle fields $\vec\alpha_i$ on the $N$ lens planes can be used to construct the matrices $\mathcal{U}_i$ according to Eq. (\[eq:32\]), then Eq. (\[eq:33\]) can be used to determine the lensing experienced by a light ray starting out into any direction on the image plane. The left panel in Fig. \[fig:13\] shows the total convergence experienced by sources at $z_\mathrm{s}=5$ on a lens plane with a side length of $4.25^\circ$, obtained from an $N$-body simulation (Pfrommer 2002). ![image](fig13a.eps){width="0.49\hsize"} ![image](fig13b.eps){width="0.49\hsize"} The right panel in Fig. \[fig:13\] shows numerically determined power spectra for the effective convergence as functions of wave number $l$, which is the Fourier conjugate variable to the angular scale. The lines in this figure show the theoretically expected power spectra. The agreement between the numerical and theoretical results is very good over a limited range of wave numbers. Once the wave numbers increase beyond the limit set by the angular resolution, the simulated convergence fields lack power and the numerical results fall below the theoretical ones. This happens at lower $l$ for smaller source redshifts, because a fixed angular scale, and thus wave number $l$, corresponds to smaller physical scales at lower distances. On the low-$l$ end, i.e. for large structures, the errors on the numerically determined power spectra increase because the number of independent modes available in the simulated convergence field decreases towards larger scales.
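The recursion (\[eq:33\]) itself amounts to only a few lines of code. In the sketch below the source plane is treated as plane $N+1$, the distances are supplied by a user-defined function, and the single-plane limit serves as a sanity check; the example uses toy additive distances (real angular-diameter distances are not additive in general):

```python
import numpy as np

def multiplane_jacobians(U_list, dist):
    """Evaluate the recursion of Eq. (33) for the matrices A_j.

    U_list : U_list[i-1] is the 2x2 matrix U_i = d alpha_i / d theta_i
             on lens plane i (i = 1 ... N)
    dist   : dist(i, j) gives the angular diameter distance from plane i
             to plane j, with plane 0 the observer; the source plane is
             treated as plane N + 1 here.

    Returns [A_1, ..., A_{N+1}]; the last entry is the Jacobian of the
    full mapping from the image plane to the source plane.
    """
    N = len(U_list)
    Ds = dist(0, N + 1)
    A = [np.eye(2)]                                   # A_1 = identity
    for j in range(2, N + 2):
        Dj = dist(0, j)
        S = np.zeros((2, 2))
        for i in range(1, j):
            S += (dist(i, j) * Ds / (Dj * dist(i, N + 1))) \
                 * (U_list[i - 1] @ A[i - 1])
        A.append(np.eye(2) - S)
    return A

# Single-plane sanity check with toy additive distances d = [0, 1, 2]:
# Eq. (33) must then reduce to A = I - U, the usual single-lens Jacobian.
d = [0.0, 1.0, 2.0]
dist = lambda i, j: d[j] - d[i]
U1 = np.array([[0.3, 0.0], [0.0, 0.1]])
A = multiplane_jacobians([U1], dist)
print(np.allclose(A[-1], np.eye(2) - U1))  # True
```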
This example should suffice to demonstrate that numerical simulations of gravitational lensing by large-scale structures should be carefully designed to match their final purpose. Inversion Techniques ==================== Let us conclude with a brief discussion of inversion techniques. They are typically less demanding numerically, but the methods which have been developed for this purpose are interesting in their own right. Shear Deconvolution ------------------- We have seen before in Eqs. (\[eq:23\]) and (\[eq:24\]) that convergence and shear are related because they are both linear combinations of second derivatives of the scalar lensing potential $\psi$. In Fourier space, the relations are algebraic and can easily be combined to eliminate the Fourier transform $\hat\psi$ of the potential. Transforming back into configuration space, the convergence turns out to be a convolution of the shear with a well-known kernel, $$\kappa(\vec\theta)=\frac{1}{\pi}\, \int{\mathrm{d}}^2\theta'\left[ \mathcal{D}_1(\vec\theta-\vec\theta')\gamma_1(\vec\theta')+ \mathcal{D}_2(\vec\theta-\vec\theta')\gamma_2(\vec\theta') \right]\;, \label{eq:34}$$ with $$\mathcal{D}_1(\vec\theta)= -\frac{\theta_1^2-\theta_2^2}{|\vec\theta|^4}\;,\quad \mathcal{D}_2(\vec\theta)= -\frac{2\theta_1\theta_2}{|\vec\theta|^4}\;. \label{eq:35}$$ This is the classic Kaiser & Squires (1993) shear inversion equation. Its limitations have been discussed in detail and removed to satisfaction by modifying e.g. the kernel components $\mathcal{D}_i$; they are not of interest for the discussion here (cf. Peter Schneider’s lecture in this volume). A suitable practical approximation of (\[eq:34\]) using measured galaxy ellipticities $\epsilon_i$ ($i=1,2$) is $$\kappa(\vec\theta)\approx\frac{1}{n\pi}\,\sum_{i=1}^N\left[ \mathcal{D}_1(\vec\theta-\vec\theta_i)\epsilon_{1,i}+ \mathcal{D}_2(\vec\theta-\vec\theta_i)\epsilon_{2,i} \right]\;, \label{eq:36}$$ where $n$ is the number density of lensed galaxies on the sky and the $\vec\theta_i$ are the galaxy positions.
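On gridded, periodic shear maps, the convolution (\[eq:34\]) is most conveniently carried out in Fourier space, where the relations (\[eq:24\]) can be solved directly for $\hat\kappa$. A minimal sketch, tested on a single Fourier mode:

```python
import numpy as np

def kaiser_squires(gamma1, gamma2, L):
    """Fourier-space version of the Kaiser & Squires inversion, Eq. (34).

    Eliminating psi_hat from the relations (24) gives
        kappa_hat = [(k1^2 - k2^2) g1_hat + 2 k1 k2 g2_hat] / k^2.
    The shear maps are assumed periodic on an N x N grid of side L.
    The k = 0 mode (the mean convergence) is not constrained by the
    shear and is set to zero: the mass-sheet degeneracy.
    """
    N = gamma1.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    k1, k2 = np.meshgrid(k, k, indexing="ij")
    ksq = k1**2 + k2**2
    ksq[0, 0] = 1.0
    kappa_hat = ((k1**2 - k2**2) * np.fft.fft2(gamma1)
                 + 2.0 * k1 * k2 * np.fft.fft2(gamma2)) / ksq
    kappa_hat[0, 0] = 0.0
    return np.fft.ifft2(kappa_hat).real

# Round trip for the single mode kappa = cos(2 pi x1 / L), whose shear
# is gamma1 = kappa and gamma2 = 0 by Eq. (24):
N, L = 64, 1.0
x = np.arange(N) * L / N
X1, X2 = np.meshgrid(x, x, indexing="ij")
kappa_true = np.cos(2.0 * np.pi * X1 / L)
kappa_rec = kaiser_squires(kappa_true, np.zeros((N, N)), L)
print(np.allclose(kappa_rec, kappa_true))  # True
```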
In practice, however, it turns out that an approximation like (\[eq:36\]) would have infinite noise because of the random sampling of the shear components $\gamma_i$ by $N$ galaxy ellipticities $\epsilon_i$. This can be remedied by introducing a smoothed kernel $\mathcal{D}'$ instead of $\mathcal{D}$, e.g. $$\mathcal{D}'=\left[ 1-\left(1+\frac{\theta^2}{\theta_\mathrm{s}^2}\right) \exp\left(-\frac{\theta^2}{\theta_\mathrm{s}^2}\right) \right]\,\mathcal{D}\;, \label{eq:37}$$ where $\theta_\mathrm{s}$ is the angular smoothing scale (Seitz & Schneider 1995). The noise covariance matrix between the convergence values at two different grid points $\vec\theta_i$ and $\vec\theta_j$ is then $$\left\langle\kappa(\vec\theta_i)\kappa(\vec\theta_j)\right\rangle= \frac{\sigma_\epsilon^2}{4\pi\theta_\mathrm{s}^2n}\, \exp\left[ -\frac{(\vec\theta_i-\vec\theta_j)^2}{2\theta_\mathrm{s}^2} \right]\;, \label{eq:38}$$ where $\sigma_\epsilon$ is the scatter of the intrinsic galaxy ellipticities (van Waerbeke 2000). This expression demonstrates that smoothing introduces correlations in the convergence map on the angular scale $\theta_\mathrm{s}$, but the variance of $\kappa$ can become very high if $\theta_\mathrm{s}$ is chosen too small. A careful balance between the local variance and non-local correlations is necessary in order to arrive at a convergence map with the required properties. Maximum-Likelihood Lens Inversion --------------------------------- An entirely different approach to lens inversion uses the maximum-likelihood technique (Bartelmann et al. 1996). Each lensed background galaxy $i$ provides a measurement of two ellipticity components $(\epsilon_{1,i},\epsilon_{2,i})$ and its angular size. Comparing the size of a galaxy behind a galaxy cluster to the average size of unlensed galaxies of the same surface brightness, one can derive an estimate $r_i$ of the inverse magnification of the lensed galaxy.
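Returning for a moment to the noise formula, Eq. (\[eq:38\]): a quick worked example at zero lag, with assumed survey values $\sigma_\epsilon=0.3$, $n=30\,\mathrm{arcmin}^{-2}$ and $\theta_\mathrm{s}=1'$ (illustrative numbers, not from the text), gives the per-pixel noise rms in $\kappa$:

```python
from math import pi, sqrt, exp

# Zero-lag noise of the smoothed convergence map, Eq. (38).
# The survey numbers are assumed, illustrative values.
sigma_eps = 0.3     # intrinsic ellipticity scatter
n_gal = 30.0        # galaxies per arcmin^2
theta_s = 1.0       # smoothing scale in arcmin

var_kappa = sigma_eps**2 / (4 * pi * theta_s**2 * n_gal)
rms_kappa = sqrt(var_kappa)                 # about 0.015

# correlation between two points one smoothing length apart
corr = var_kappa * exp(-1.0 / 2.0)
```

So only convergence signals above roughly the per-cent level are detectable at this smoothing scale, which makes the trade-off between local variance and non-local correlations concrete.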
Thus $N$ galaxies provide a $3N$-dimensional data vector $$\vec d=(\epsilon_{1,1},\epsilon_{2,1},r_1,\ldots, \epsilon_{1,N},\epsilon_{2,N},r_N)\;. \label{eq:39}$$ The goal of the lens inversion is then to find a two-dimensional array $\psi_{jk}$ of lensing potential values such that the ellipticities and inverse magnifications caused by that potential at the positions $\vec\theta_i$ of the real galaxies optimally reproduce the measured ellipticities and inverse magnifications. In other words, the potential values $\psi_{jk}$ have to be determined such as to minimise the mean-square difference between the data vector $\vec d$ and the model data vector $\vec d[\psi_{jk}(\vec x_i)]$, $$\chi^2(\psi_{jk})=\sum_{i=1}^{3N}\left\{ \frac{[d_i-d_i(\psi_{jk})]^2}{\sigma_i^2} \right\}\;, \label{eq:40}$$ where the errors $\sigma_i$ can be estimated from the data themselves. The minimisation of $\chi^2$ with respect to the potential values $\psi_{jk}$ can be done with any minimisation algorithm, such as the downhill simplex method. For large fields, the number of potential values can become very large. In that case, conjugate-gradient methods are preferred, which make use of the fact that the derivatives of $\chi^2$ with respect to the $\psi_{jk}$ are known analytically. Such methods can speed up the minimisation sufficiently to render it feasible even for large potential arrays (cf. Press et al. 1992). Maximum-Entropy Methods ----------------------- The minimisation of $\chi^2$ is a special case of the maximum-likelihood technique for assumed Gaussian deviations of the measured data around the model values.
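For a linearised model $\vec d(\psi)=\mathsf{A}\psi$ the $\chi^2$ minimisation of Eq. (\[eq:40\]) reduces to weighted least squares, which is also the solution a conjugate-gradient solver converges to. A minimal sketch with synthetic stand-in data (the matrix `A`, the grid sizes and the noise level are invented for illustration; `A` is not a real lens mapping):

```python
import numpy as np

# chi^2 of Eq. (40) for a *linear* toy model d(psi) = A psi.
rng = np.random.default_rng(1)
n_data, n_psi = 120, 40            # 3N data values, potential grid values
psi_true = rng.normal(size=n_psi)
A = rng.normal(size=(n_data, n_psi))
sigma = np.full(n_data, 0.3)
d = A @ psi_true + rng.normal(0, 0.3, n_data)

# whitening by sigma turns chi^2 into an ordinary least-squares problem
W = A / sigma[:, None]
psi_fit, *_ = np.linalg.lstsq(W, d / sigma, rcond=None)

chi2 = np.sum(((d - A @ psi_fit) / sigma) ** 2)
# chi^2 per degree of freedom should be of order unity
assert chi2 / (n_data - n_psi) < 2.0
```

For the real, nonlinear problem the same $\chi^2$ would be fed to an iterative minimiser, with the analytic derivatives mentioned above supplied as gradients.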
Improvements of the maximum-likelihood technique can be derived starting from Bayes’ theorem, $$P(\psi|\vec d)=\frac{P(\vec d|\psi)}{P(\vec d)}\,P(\psi)\;, \label{eq:41}$$ which states that the probability $P(\psi|\vec d)$ of finding the potential $\psi$ given the data $\vec d$ is proportional to the probability $P(\vec d|\psi)$ of obtaining the data given the potential, times the probability $P(\psi)$ for finding the potential. The denominator $P(\vec d)$ is called the *evidence* and simply normalises Eq. (\[eq:41\]). $P(\psi)$ is called the prior, quantifying any *a priori* information one has or assumes on the potential $\psi$, $P(\vec d|\psi)$ is called the likelihood, and $P(\psi|\vec d)$ is the *posterior* probability. The goal is now to maximise the latter, which is equivalent to maximising the product $P(\vec d|\psi)\,P(\psi)$ of likelihood and prior. If we have or can assume Gaussian noise and a diagonal noise correlation matrix, the likelihood reduces to $P(\vec d|\psi)=\exp(-\chi^2/2)$. It can now be shown that in the absence of any further information, the best, i.e. least prejudiced, prior is the maximum-entropy prior, $$P(\psi)\propto\exp\left[\alpha\,S(\psi,\vec m)\right]\;, \label{eq:42}$$ with the *cross entropy* $$S(\psi,\vec m)=\sum_{i=1}^{3N}\left( \psi_i-m_i-\psi_i\,\ln\frac{\psi_i}{m_i}\right)\;, \label{eq:43}$$ where $\vec m$ is a model vector for the potential which can encode expectations on the potential, or simply be chosen to be uniform for all $i$. The potential array is then determined by maximising $\exp(-\chi^2/2+\alpha S)$, or equivalently by minimising $$F\equiv\frac{1}{2}\chi^2-\alpha S \label{eq:44}$$ instead of the simple $\chi^2$ in Eq. (\[eq:40\]). The parameter $\alpha$ can be included in the minimisation. Bayesian theory implies that a good approximation to the optimal choice for $\alpha$ is determined such that $F\sim3N/2$ at the potential minimum $\bar\psi$.
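The structure of the functional (\[eq:44\]) and the basic property of the cross entropy (\[eq:43\]) — $S\leq 0$, with $S=0$ exactly at $\psi=\vec m$, so that the prior pulls the solution towards the model — can be sketched as follows. All data, sizes and the linear mapping are synthetic stand-ins, not a real lens model.

```python
import numpy as np

# F = chi^2/2 - alpha*S, Eq. (44), with the cross entropy of Eq. (43).
rng = np.random.default_rng(0)
n_psi, n_data = 50, 30
m = np.full(n_psi, 1.0)                     # uniform model vector
A = rng.normal(size=(n_data, n_psi))        # hypothetical linear data mapping
d_obs = A @ m
sigma = np.full(n_data, 0.5)

def cross_entropy(psi):
    return np.sum(psi - m - psi * np.log(psi / m))

def F(psi, alpha=0.1):
    chi2 = np.sum(((d_obs - A @ psi) / sigma) ** 2)
    return 0.5 * chi2 - alpha * cross_entropy(psi)

psi = np.abs(rng.normal(1.0, 0.2, n_psi))   # a positive trial potential
assert abs(cross_entropy(m)) < 1e-12        # S = 0 at psi = m
assert cross_entropy(psi) < 0.0             # and S < 0 away from the model
assert F(psi) > F(m)                        # F penalises departures from m
```

Minimising `F` with a gradient-based routine, and tuning `alpha` until $F\sim3N/2$ at the minimum, would complete the scheme described above.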
The error covariance matrix for the potential $\psi$ is given by the inverse curvature matrix of $F$, $$\left\langle(\psi-\bar\psi)(\psi-\bar\psi)^\mathrm{T}\right\rangle \approx \left(\frac{\partial^2F}{\partial\psi_i\partial\psi_j}\right)^{-1}\;. \label{eq:45}$$ Maximum-entropy methods have been suggested and used for regularising shear-inversion techniques such that their spatial resolution is adapted to the strength of the lensing signal (Bridle et al. 1998; Seitz et al. 1998). Concluding Remarks ================== Many numerical methods have been used for gravitational lensing studies which I was not able to cover during the limited time of the lecture. Among them are the hierarchical tree-code methods introduced into microlensing by Wambsganss et al. (1990) and the methods for constraining cluster mass distributions from multiple arc systems (e.g. Kneib et al. 1993; see also Jean-Paul Kneib’s presentation in this volume). Despite this unavoidable incompleteness, I hope to have given a flavour of how numerical methods can be used for lensing, and what the main problem areas are.

[99]{}
Bartelmann, M., Weiss, A. 1994, A&A 287, 1
Bartelmann, M., Steinmetz, M., Weiss, A. 1995, A&A 297, 1
Bartelmann, M., Narayan, R., Seitz, S., Schneider, P. 1996, ApJ 464, L115
Bartelmann, M., Loeb, A. 1998, ApJ 503, 48
Bartelmann, M., Schneider, P. 2001, Phys. Rep. 340, 291
Bridle, S.L., Hobson, M.P., Lasenby, A.N., Saunders, R. 1998, MNRAS 299, 895
Hockney, R.W., Eastwood, J.W. 1988, Computer Simulations using Particles (Bristol: Hilger)
Kaiser, N., Squires, G. 1993, ApJ 404, 441
Keeton, C.R., Kochanek, C.S. 1998, ApJ 495, 157
Kneib, J.-P., Mellier, Y., Fort, B., Mathez, G. 1993, A&A 273, 367
Mellier, Y. 1999, Ann. Rev. Astr. Ap. 37, 127
Meneghetti, M., Yoshida, N., Bartelmann, M., Moscardini, L., Springel, V., Tormen, G., White, S.D.M. 2001, MNRAS 325, 435
Meneghetti, M., Bolzonella, M., Bartelmann, M., Moscardini, L., Tormen, G. 2000, MNRAS 314, 338
Narayan, R., Bartelmann, M. 1999, in: Formation of Structure in the Universe, eds. A. Dekel and J.P. Ostriker, p. 360 (Cambridge: University Press)
Pfrommer, C. 2002, *Diploma Thesis*, Munich University
Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P. 1992, Numerical Recipes (Cambridge: University Press)
Schneider, P., Ehlers, J., Falco, E.E. 1992, Gravitational Lenses (Heidelberg: Springer Verlag)
Seitz, C., Schneider, P. 1995, A&A 297, 287
Seitz, S., Schneider, P., Bartelmann, M. 1998, A&A 337, 325
Seljak, U., Zaldarriaga, M. 2000, ApJ 538, 57
van Waerbeke, L. 2000, MNRAS 313, 524
Wambsganss, J., Paczyński, B., Katz, N. 1990, ApJ 352, 407
--- abstract: 'We derive new results on the characterization of Gelfand–Shilov spaces $\mathcal{S}^\mu_\nu (\R^n)$, $\mu,\nu >0$, $\mu+\nu \geq 1$ by Gevrey estimates of the $L^2$ norms of iterates of $(m,k)$ anisotropic globally elliptic Shubin (or $\Gamma$) type operators, $(-\Delta)^{m/2} +| x |^k$ with $m,k\in 2\N$ being a model operator, and on the decay of the Fourier coefficients in the related eigenfunction expansions. Similar results are obtained for the spaces $\Sigma^\mu_\nu (\R^n)$, $\mu,\nu >0$, $\mu+\nu > 1$, cf. . In contrast to the symmetric case $\mu = \nu$ and $k=m$ (classical Shubin operators) we encounter resonance type phenomena involving the ratio $\kappa:=\mu/\nu$; namely we obtain a characterization of $\mathcal{S}^\mu_\nu(\R^n)$ and $\Sigma^\mu_\nu(\R^n)$ in the case $\mu=kt/(k+m), \nu= mt/(k+m), t \geq 1$, that is, when $\kappa=k/m \in \Q$.' address: - 'Dipartimento di Matematica, Università di Torino, Via Carlo Alberto 10, 10123 Torino, Italy' - 'Dipartimento di Matematica e Informatica, Università di Cagliari, Via Ospedale 72, 09124 Cagliari, Italy' - 'Institute of Mathematics, University of Novi Sad, trg. D. 
Obradovica 4, 21000 Novi Sad, Serbia' - 'Dipartimento di Matematica, Università di Torino, Via Carlo Alberto 10, 10123 Torino, Italy' author: - Marco Cappiello - Todor Gramchev - Stevan Pilipovic - Luigi Rodino title: 'ANISOTROPIC SHUBIN OPERATORS AND EIGENFUNCTION EXPANSIONS IN GELFAND-SHILOV SPACES' --- Introduction and statement of the results ========================================= The main goal of the paper is to prove results on the characterization of the non-symmetric ($\mu \neq \nu)$ Gelfand–Shilov spaces $\mathcal{S}^\mu_\nu (\R^n)$, $\mu,\nu >0$, $\mu+\nu \geq 1$ by Gevrey estimates of the $L^2$ norms of the iterates $P^\ell u $, $\ell =1,2,\ldots, u\in \cS(\R^n),$ of positive anisotropic globally elliptic Shubin differential operators $P$ of the type $(m,k)$, $m,k$ being even natural numbers, and on the decay of the Fourier coefficients $u_j$, $j\in \N$, in the eigenfunction expansions $u = \sum_{j=1}^\infty u_j \varphi_j$, where $\{ \varphi_j\}_{j=1}^\infty$ stands for an orthonormal basis of eigenfunctions associated to the operator $P$. The $(m,k)$ Shubin elliptic differential operators are modelled by $$\begin{aligned} {\mathcal H}^{m,k}_n:= (-\Delta)^{m/2} + |x|^{k}, \ \ \ |x| = \sqrt{ x_1^2 +\ldots +x_n^2}, \, k, m\in 2\N. \label{anismod1}\end{aligned}$$ We recall that for $\mu >0, \nu>0,$ the inductive (respectively, projective) Gelfand-Shilov classes $\mathcal{S}^\mu_\nu({\mathbb R}^n), \; \mu+\nu\geq 1$ (respectively, $\Sigma^\mu_\nu (\R^n), \; \mu +\nu >1$), are defined as the set of all $u\in \cS(\R^n)$ for which there exist $A>0, C>0$ (respectively, for every $A>0$ there exists $C>0$) such that $$\label{GSdef} |x^\beta\partial_x^\alpha u(x)|\leq CA^{|\alpha|+|\beta|}(\alpha!)^\mu(\beta!)^\nu, \;\; \alpha, \beta \in {\mathbb N}^n,$$ see [@GeShi; @A; @CCK; @CPRT; @Mi] and [@NR Chapter 6]. 
These spaces have recently gained a wide importance in view of the fact that they represent a suitable functional setting both for microlocal analysis and PDE and for Fourier and time-frequency analysis [@AC; @BG; @CGR1; @CGR2; @CGR3; @CNR; @CaRo; @CT; @GZ2; @T2]. Concerning the investigation in the present paper, we can cite several sources of motivation. First, we recall the fundamental work of Seeley [@S2] on eigenfunction expansions of real analytic functions on compact manifolds (see also the recent paper of Dasgupta and Ruzhansky [@DaRu1], extending the result of [@S2] to all Gevrey spaces $G^\sigma$, $\sigma >1$, on compact Lie groups). Secondly, we mention the work [@gpr1] on the characterization of symmetric Gelfand-Shilov spaces $\mathcal{S}^\mu_\mu (\R^n)$ by means of estimates of iterates and of the decay of the Fourier coefficients in the eigenfunction expansions associated to globally elliptic (or $\Gamma$ elliptic) differential operators. We also refer to [@vv], where general Gevrey sequences $M_p$ are used. Finally, we mention as additional motivation the results on hypoellipticity in $\mathcal{S}^\mu_\nu (\R^n)$ for elliptic operators of the type $\mathcal{H}^{m,k}_n$ for $\mu \geq k/(m+k)$, $\nu \geq m/(m+k)$, $k,m$ being even natural numbers, cf. [@CGR2] (see also the older work [@CGR1]). Before stating our main results we need some preliminaries. As a counterpart of an elliptic operator on a compact manifold, we consider in $\R^n$ the decay of the Fourier coefficients in the eigenfunction expansions associated to $\mathcal{H}^{m,k}_n$. In contrast to the symmetric case $\mu = \nu$ and $k=m$ (classical Shubin operators) we encounter new resonance type phenomena involving $\kappa:=\mu/\nu$; namely, we can characterize the spaces $\mathcal{S}^\mu_\nu(\R^n)$, $\mu +\nu \geq 1$ (respectively $\Sigma^\mu_\nu(\R^n)$, $\mu +\nu >1$) by iterates and eigenfunction expansions defined by $\mathcal{H}^{m,k}_n$ iff $\kappa$ is a rational number, $\kappa=k/m$.
Our basic example of operator will be the anisotropic quantum harmonic oscillator appearing in Quantum Mechanics $$\begin{aligned} \label{eq1.1} {\mathcal H}^{2,k}_n=-\triangle+| x|^k, \qquad k \in 2\N,\end{aligned}$$ recovering for $k=2$ the standard harmonic oscillator, whose eigenfunctions are the Hermite functions $$\begin{aligned} \label{eq1.2} h_{\alpha}(x)=H_\alpha(x)e^{-|x|^2/2}, \;\;\; \alpha=(\alpha_1,...,\alpha_n)\in {\mathbb N}^n,\end{aligned}$$ where $H_\alpha(x)$ is the $\alpha$-th Hermite polynomial. See for example [@lang; @pil88; @reedsimon] for related Hermite expansions as well as [@gpr2; @Wpar] for connections with a degenerate harmonic oscillator. Here we shall consider a more general class of operators with polynomial coefficients in ${\mathbb R}^n$, namely $(m,k)$ anisotropic operators: $$\begin{aligned} \label{eq1.3} P=\sum\limits_{\frac{|\alpha|}{m} +\frac{|\beta|}{k} \leq 1}c_{\alpha\beta}x^\beta D_x^\alpha, \;\;\; D^\alpha=(-i)^{|\alpha|}\partial_x^\alpha.\end{aligned}$$ Set $$\begin{aligned} \label{w1} \Lambda_{m,k}(x,\xi) & = & (1 + | x|^{2k} + | \xi| ^{2m})^{1/2}, \quad (x,\xi)\in \R^{2n}, \ \textrm{$m,k\in 2\N$.}\end{aligned}$$ The global ellipticity for $P$ in (\[eq1.3\]) is defined by imposing $$\begin{aligned} \label{eq1.4} \sum\limits_{\frac{|\alpha|}{m}+\frac{|\beta|}{k}=1}c_{\alpha\beta}x^\beta\xi^\alpha \neq 0 \;\;\; \mbox{for} \;\; (x,\xi)\neq (0,0),\end{aligned}$$ or, equivalently, by requiring that there exist $C_1>0, C_2>0, R>0$ such that $$\begin{aligned} \label{eq1.4a} C_2 \leq \frac{|p(x,\xi)|}{\Lambda_{m,k}(x,\xi)}\leq C_1, \quad |(x,\xi)|\geq R.\end{aligned}$$ Under the assumption (or ), the following estimate holds for every $u \in \cS(\R^n)$: $$\label{ellipticestimate} \sum_{\frac{|\alpha|}{m} +\frac{|\beta|}{k} \leq 1} \| x^\beta D_x^\alpha u \|_{L^2} \leq C( \|Pu\|_{L^2} + \|u \|_{L^2}),$$ cf. [@BBR].
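As a quick numerical sanity check (a sketch with ad hoc grid and tolerances, not part of the original text), one can verify that in dimension one the Hermite functions (\[eq1.2\]) satisfy $-h_n''+x^2h_n=(2n+1)h_n$, i.e. they are eigenfunctions of ${\mathcal H}^{2,2}_1$:

```python
import numpy as np
from numpy.polynomial.hermite import hermval

# Check -h'' + x^2 h = (2n+1) h for h_n(x) = H_n(x) e^{-x^2/2},
# using central finite differences; grid and tolerances are ad hoc.
x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]
for n in range(4):
    c = np.zeros(n + 1); c[n] = 1.0
    h = hermval(x, c) * np.exp(-x**2 / 2)        # unnormalised Hermite function
    hpp = np.zeros_like(h)
    hpp[1:-1] = (h[2:] - 2 * h[1:-1] + h[:-2]) / dx**2
    Hh = -hpp + x**2 * h                         # apply the oscillator
    mask = np.abs(h) > 0.05 * np.max(np.abs(h))  # stay away from zeros of h
    assert np.allclose(Hh[mask] / h[mask], 2 * n + 1, atol=2e-2)
```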
For these operators, the counterparts of the standard Sobolev spaces are the spaces $ Q^{s}_{m,k}({\mathbb R}^n), s \in \R,$ defined, for example, by requiring that $$\begin{aligned} \label{eq1.5a} \left\|\Lambda(x,D)^s u\right\|_{L^2}<\infty,\end{aligned}$$ where $$\begin{aligned} \label{sublin1} \Lambda(x,\xi) = (1 + | x|^{2k}+ | \xi|^{2m})^{1/2\max \{k, m \}}, \quad k,m\in 2\N.\end{aligned}$$ Under the global ellipticity assumption (\[eq1.4\]), $$P:Q^{s}_{m,k}({\mathbb R}^n) \to L^2({\mathbb R}^n),\; s = \max\{k,m\},$$ is a Fredholm operator. The finite-dimensional null-space $\textrm{Ker}\, P$ consists of functions in the Schwartz space $\cS({\mathbb R}^n)$. We assume, as in [@gpr1], that $P$ is a positive anisotropic elliptic operator, which implies that $k$ and $m$ are even numbers. This guarantees the existence of an orthonormal basis of eigenfunctions $\varphi_j$, $j\in \N$, with eigenvalues $\lambda_j$, $\lim\limits_{j\to \infty }\lambda_j = +\infty$ (see [@shubin]). Moreover we have that $$\label{aseigen} \lambda_j \sim C j^{\frac{mk}{n(m+k)}} \qquad \textit{as} \quad j \to +\infty$$ for some $C>0$, cf. [@BBR; @shubin]. Hence, given $u \in L^2({\mathbb R}^n)$, or $u \in \cS '({\mathbb R}^n)$, we can expand $$\begin{aligned} \label{eq1.7} u=\sum\limits_{j=1}^\infty u_j \varphi_j\end{aligned}$$ where the Fourier coefficients $u_j \in \mathbb C$ are defined by $$\begin{aligned} \label{eq1.8} u_j=(u,\varphi_j)_{L^2}, \;\; j=1,2,\ldots\end{aligned}$$ with convergence in $L^2({\mathbb R}^n)$ or $\cS '({\mathbb R}^n)$ for (\[eq1.7\]). By the hypoellipticity results of [@CGR2] the eigenfunctions $\varphi_j$ belong to $\mathcal{S}^{k/(m+k)}_{m/(m+k)}(\R^n)$. We first state an assertion on the characterization of the anisotropic Sobolev spaces $Q^s_{m,k} (\R^n)$ and the Schwartz class $\cS(\R^n)$. \[t1.1\] Suppose that $P$ is $(m,k)$-globally elliptic, cf. , , and positive.
Then:

- $u \in Q^s_{m,k}({\mathbb R}^n) \Longleftrightarrow \sum\limits_{j=1}^\infty |u_j|^2 \lambda_j^{s/\max \{ m,k \} }< \infty $, $s \in {\mathbb N}$.

- $u \in \cS ({\mathbb R}^n) \Longleftrightarrow |u_j|= O(\lambda_j^{-s}), \, j\to \infty \Longleftrightarrow |u_j|= O(j^{-s}), \, j\to \infty$ for all $s\in \N$.

Let us now come to the characterization of the spaces $\mathcal{S}^\mu_\nu(\R^n)$ and $\Sigma^\mu_\nu(\R^n)$ in the case $\kappa:=\mu/\nu \in \Q.$ We may link $\mu, \nu$ with an operator of the form for a suitable choice of $k$ and $m$. In fact, observe first that we may write $\mu=t\mu_o, \nu=t \nu_o$ for some $t >0$ with $\mu_o=\kappa/(1+\kappa), \nu_o=1/(1+\kappa)$ so that $\mu_o+\nu_o = 1$. If $\mu+\nu \geq 1$ we have $t \geq 1$; if $\mu+\nu >1$, then $t >1$. On the other hand, for any given $\mu_o \in \Q$ we may write $\mu_o =k/(k+m)$ for two positive integers $k$ and $m$, and consequently $\nu_o=1-\mu_o=m/(k+m).$ Multiples of $k$ and $m$ work as well; in particular we may assume $k$ and $m$ to be even natural numbers, so that the symbol of $\Lambda_{m,k}$ in (\[w1\]) is a smooth function, which is necessary for the proof of the hypoellipticity result of [@CGR2]. So we have $$\mu=\frac{kt}{k+m}, \quad \nu = \frac{mt}{k+m}.$$ For given even integers $k$ and $m$, an example of globally elliptic positive operator is given by .\ The first main result of the paper characterizes the Gelfand-Shilov spaces in terms of estimates of the iterates of $P$ and reads as follows. \[t1\] Let $P$ be an operator of the form for some integers $k \geq 1, m \geq 1$, globally elliptic, namely satisfying , and let $u \in \cS(\R^n)$.
Then $u \in \mathcal{S}^{\frac{kt}{k+m}}_{\frac{mt}{k+m}}(\R^n), t \geq 1$ (respectively $u \in \Sigma^{\frac{kt}{k+m}}_{\frac{mt}{k+m}}(\R^n), t > 1$) if and only if there exist $C>0, R>0$ (respectively for every $C>0$ there exists $R>0$) such that: $$\label{iteratesestimate} \|P^M u\|_{L^2}\leq RC^{M} (M!)^{\frac{kmt}{k+m}}$$ for every integer $M \geq 1$. Theorem \[t1\] suggests the possibility of considering new function spaces defined by the estimates also for $0< t <1$ (respectively $0<t \leq 1$). The corresponding Gelfand-Shilov classes are empty in that case, as is well known from [@GeShi], and the equivalence in Theorem \[t1\] fails. Nevertheless such a definition in terms of deserves interest, cf. also [@CST; @T1]. Using Theorem \[t1\] we can prove the following result. \[t2\] Let $P$ be a positive operator of the form for some integers $k \geq 1, m \geq 1$, satisfying , and let $u \in \cS(\R^n)$. Let the eigenvalues $\lambda_j$ and the Fourier coefficients $u_j$ be defined as before. The following conditions are equivalent:\ i) $u \in \mathcal{S}^{\frac{kt}{k+m}}_{\frac{mt}{k+m}}(\R^n), t \geq 1$ (respectively $u \in \Sigma^{\frac{kt}{k+m}}_{\frac{mt}{k+m}}(\R^n), t > 1$);\ ii) there exists $\varepsilon>0$ such that (respectively for every $\varepsilon >0$) we have $$\label{aseigenexp1} \sum_{j=1}^\infty |u_j|^2 e^{\varepsilon \lambda_j^{\frac{k+m}{kmt}}} < \infty;$$ iii) there exists $\varepsilon>0$ such that (respectively for every $\varepsilon >0$) we have $$\label{aseigenexp2} \sup_{j\in \N} |u_j|^2 e^{\varepsilon \lambda_j^{\frac{k+m}{kmt}}} < \infty;$$ iv) there exists $\varepsilon>0$ such that (respectively for every $\varepsilon >0$) we have for some $C>0$: $$|u_j| \leq C e^{-\varepsilon j^{\frac{1}{tn}}}, \qquad j \in \N.$$ The somewhat surprising fact that in $iv)$ the estimates do not depend on the couple $(m,k)$, that is on $(\mu,\nu)$, may find an intuitive explanation in the $\mathcal{S}^\mu_\nu$ regularity of the eigenfunctions $\varphi_j$, cf. [@CGR2].
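Both characterizations can be probed numerically for the one-dimensional harmonic oscillator ($n=1$, $k=m=2$, for which $\lambda_j=2j-1$ exactly) with $t=1$, so that $kmt/(k+m)=1$. In the sketch below the test function $e^{-x^2}$, the model coefficient sequence and all tolerances are our own ad hoc choices; part (a) illustrates the geometric decay in $iv)$ of Theorem \[t2\], part (b) the factorial growth of the iterates in Theorem \[t1\].

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

# (a) Theorem t2 iv): the Hermite coefficients of u(x) = e^{-x^2},
#     a function of S^{1/2}_{1/2} type (t = 1), decay geometrically.
def hermite_fn(n, x):
    c = np.zeros(n + 1); c[n] = 1.0
    norm = sqrt(2.0**n * factorial(n) * sqrt(pi))
    return hermval(x, c) * np.exp(-x**2 / 2) / norm

x = np.linspace(-12, 12, 20001)
dx = x[1] - x[0]
u = np.exp(-x**2)
coeff = np.array([np.sum(u * hermite_fn(n, x)) * dx for n in range(0, 40, 2)])
slopes = np.diff(np.log(np.abs(coeff)))   # odd-order coefficients vanish
assert np.all(slopes < -0.5)              # |u_{2k}| decays geometrically

# (b) Theorem t1: for a *model* coefficient sequence |u_j| = e^{-j},
#     the Parseval identity ||P^M u||^2 = sum_j lambda_j^{2M} |u_j|^2
#     gives norms growing like C^M (M!)^{kmt/(k+m)} = C^M M!.
j = np.arange(1.0, 4001.0)
lam = 2 * j - 1
norms = [np.sqrt(np.sum(lam**(2 * M) * np.exp(-2 * j))) for M in range(1, 16)]
for M in range(1, 16):
    assert 0.5**M * factorial(M) <= norms[M - 1] <= 2.0**M * factorial(M)
```

The coefficient sequence in (b) is an assumed model, not computed from a specific $u$; it realises condition $iv)$ with $t=1$ and makes the equivalence with the iterate estimates concrete.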
Proof of the main results ========================= *Proof of Theorem \[t1.1\].* The proof of Theorem \[t1.1\] is easy, by using the $r$-th power of $P, r \in \R$, that we may define as $$P^r u = \sum_{j=1}^\infty \lambda_j^r u_j \varphi_j,$$ and by observing that the norms $\|P^r u\|_{L^2}, r= s/\max\{k,m \}$ and $\| \Lambda(x,D)^s u\|_{L^2}$ are equivalent, see [@BBR; @NR; @shubin]. On the other hand, by the Parseval identity $$\|P^r u\|_{L^2}^2 = \| \sum_{j=1}^\infty \lambda_j^r u_j \varphi_j\|_{L^2}^2 = \sum_{j=1}^\infty \lambda_j^{2r}|u_j|^2$$ and $i)$ follows. Since $\cS(\R^n) = \bigcap\limits_{s \in \N} Q_{m,k}^s(\R^n)$ we also obtain $ii)$.\ The proof of Theorem \[t1\] needs some preparation. We first define, for fixed $ r \geq 0 $ and $u \in L^2(\R^n)$: $$|u|_r = \sum_{\frac{|\alpha|}{m}+\frac{|\beta|}{k}=r} \| x^{\beta}D^\alpha u \|_{L^2}.$$ It is useful first to characterize Gelfand-Shilov spaces in terms of the norms $|u|_r$ as follows. \[char\] Let $u \in L^2(\R^n).$ Then $u \in \mathcal{S}^{\frac{kt}{k+m}}_{\frac{mt}{k+m}}(\R^n), t \geq 1$ (respectively $u \in \Sigma^{\frac{kt}{k+m}}_{\frac{mt}{k+m}}(\R^n), t > 1$) if and only if there exist $C>0, R>0$ (respectively for every $C>0$ there exists $R>0$) such that $$\label{charest} |u|_r \leq RC^{r} r^{\frac{kmrt}{k+m}}$$ for every $ r>0$. We have the following preliminary result. \[AntiWick\] There exists a constant $C>0$ such that, for any given $p \in \N$ and $r$ with $p<r<p+1$, and for every $\varepsilon >0$, the following estimate holds true: $$\label{AW} |u|_r \leq \varepsilon |u|_{p+1}+C \varepsilon^{-\frac{r-p}{p+1-r}}|u|_p + C^p (p+1)!^{\frac{km}{k+m}}|u|_0$$ for all $u \in \cS(\R^n)$. The proof follows the same lines as the proof of Proposition 2.1 in [@CR], cf. also [@KN], and it is omitted.
Next, for fixed $\lambda >0, p \in \N$ and $ u \in L^2(\R^n)$, we set: $$\sigma_p(u, \lambda) = \lambda^{-p} (p!)^{-\frac{kmt}{k+m}} |u|_p.$$ \[sigma\] For every $p \in \N$ and for $\lambda >0$ sufficiently large, we have: $$\label{stimasigma} \sigma_{p+1}( u, \lambda) \leq (p+1)^{-\frac{kmt}{k+m}} \sigma_p (Pu, \lambda) + \sum_{h=0}^{p} \sigma_h (u,\lambda)$$ for every $u \in \cS(\R^n).$ For $p=0$ the assertion is a direct consequence of if $\lambda$ is large enough. Fix now $p \in \N, p \geq 1$ and let $\alpha, \beta \in \N^n$ be such that $|\alpha|/m+|\beta|/k =p+1.$ It is easy to verify that we can find $\gamma, \delta \in \N^n,$ with $\gamma \leq \alpha, \delta \leq \beta$, such that $|\gamma|/m+|\delta|/k =p$ and $|\alpha-\gamma|/m+|\beta-\delta|/k =1.$ Then by we can write $$\begin{aligned} \| x^{\beta}D^\alpha u \|_{L^2} &\leq & \| x^{\beta-\delta}D^{\alpha-\gamma}(x^\delta D^\gamma u) \|_{L^2} + \| x^{\beta-\delta}[x^\delta, D^{\alpha-\gamma}]D^\gamma u \|_{L^2} \\ & \leq & C\| P(x^\delta D^\gamma u) \|_{L^2} + \| x^{\beta-\delta}[ x^\delta, D^{\alpha-\gamma}]D^\gamma u\|_{L^2} \\ & \leq & I_1 + I_2 +I_3,\end{aligned}$$ where $$I_1 = C \|x^\delta D^\gamma (Pu) \|_{L^2}, \qquad I_2 =C \| [P, x^\delta D^\gamma ] u \|_{L^2}, \qquad I_3 = \| x^{\beta-\delta}[ x^\delta, D^{\alpha-\gamma}]D^\gamma u\|_{L^2}.$$ Let now $$J_h = \sum_{\frac{|\alpha|}{m}+\frac{|\beta|}{k} =p+1}I_h, \qquad Y_h = \lambda^{-p-1} (p+1)!^{-\frac{kmt}{k+m}}J_h, \quad h=1,2,3.$$ Then, obviously, we have $$|u|_{p+1} \leq J_1+J_2+J_3, \qquad \sigma_{p+1} (u, \lambda) \leq Y_1+Y_2+Y_3.$$ Now, since $J_1 \leq C_1|Pu|_p$ for some $C_1 >0$, we have $Y_1 \leq (p+1)^{-\frac{kmt}{k+m}} \sigma_p(Pu, \lambda),$ if $\lambda \geq C_1$.
To estimate $J_2$ and $Y_2$ we observe that $$[P, x^\delta D^\gamma ]u = \sum_{\frac{|\tilde{\alpha}|}{m}+\frac{|\tilde{\beta}|}{k} \leq 1} c_{\tilde{\alpha} \tilde{\beta}} [x^{\tilde{\beta}}D^{\tilde{\alpha}}, x^\delta D^\gamma] u,$$ and that $$[x^{\tilde{\beta}}D^{\tilde{\alpha}}, x^\delta D^\gamma] u = \hskip-3pt \sum_{0 \neq \tau \leq \tilde{\alpha}, \tau \leq \delta} \hskip-4pt C_{\tilde{\alpha} \delta \tau}x^{\delta+\tilde{\beta}-\tau}D^{\gamma + \tilde{\alpha}-\tau} u - \hskip-3pt \sum_{0 \neq \tau \leq \tilde{\beta}, \tau \leq \gamma} \hskip-4pt C_{\tilde{\beta} \gamma \tau}x^{\delta+\tilde{\beta}-\tau}D^{\gamma + \tilde{\alpha}-\tau} u,$$ where the constants $|C_{\tilde{\alpha} \delta \tau}|$ and $| C_{\tilde{\beta} \gamma \tau}|$ can be estimated by $C_2 \, p^{|\tau|}$ for some positive constant $C_2$ independent of $p$. We observe now that in both the sums above we have $$r= \frac{|\gamma + \tilde{\alpha}-\tau|}{m} + \frac{|\delta+\tilde{\beta}-\tau|}{k} = p+ \frac{| \tilde{\alpha}|}{m} + \frac{ |\tilde{\beta}|}{k} - \frac{m+k}{km} |\tau| \leq p+1- \frac{m+k}{km} |\tau|,$$ hence in particular we have $ 0 \leq r <p+1 $ since $|\tau|>0$.
Moreover, we have $$|\tau| \leq \frac{km}{m+k}(p+1-r).$$ In view of these considerations, we easily obtain $$J_2 \leq C_3 ( J'_2 + p^{\frac{km}{k+m} }|u|_p + J''_2),$$ where $$J'_2 = \sum_{p<r<p+1} p^{\frac{km}{k+m}(p+1-r)}|u|_r,$$ $$J''_2 =\sum_{0 \leq r < p}p^{\frac{km}{k+m}(p+1-r)}|u|_r.$$ Now, applying Lemma \[AntiWick\] to $J'_2$ with $$\varepsilon =(4C_3)^{-1} p^{-\frac{km}{k+m}(p+1-r)},$$ and using standard factorial inequalities we obtain $$J'_2 \leq (4C_3)^{-1}|u|_{p+1} + C_4 p^{\frac{km}{k+m}}|u|_p + C_5^{p+1} (p+1)!^{\frac{km}{k+m}}|u|_0.$$ Similarly, writing $$J''_2 = p^{\frac{km}{k+m}(p+1)}|u|_0 + \sum_{q=0}^{p-1}\sum_{q < r < q+1}p^{\frac{km}{k+m}(p+1-r)}|u|_r$$ and applying Lemma \[AntiWick\] to each term of the sum above with $$\varepsilon = p^{-\frac{km}{k+m}(q+1-r)},$$ we get $$\begin{aligned} J''_2 &\leq& C_6^{p+1} (p+1)!^{\frac{km}{k+m}}|u|_0 + C_7\sum_{q=0}^{p-1}\left[p^{\frac{km}{k+m}(p-q)}|u|_{q+1} + p^{\frac{km}{k+m}(p-q+1)}|u|_q \right] \\ &\leq& C_8^{p+1} (p+1)!^{\frac{km}{k+m}}|u|_0 + C_9 \sum_{q=1}^{p} p^{\frac{km}{k+m}(p-q+1)}|u|_q,\end{aligned}$$ from which we get $$J_2 \leq \frac14 |u|_{p+1} + \tilde{C}^{p+1} (p+1)!^{\frac{km}{k+m}}|u|_0 + C' \sum_{q=1}^{p} p^{\frac{km}{k+m}(p-q+1)}|u|_q$$ for some positive constants $C', \tilde{C}$ independent of $p$. From the estimates above, taking $\lambda$ sufficiently large and using the fact that $t \geq 1$, we obtain $$Y_2 = \lambda^{-p-1}(p+1)!^{-\frac{kmt}{k+m}}J_2 \leq \frac14 \sum_{h=0}^{p+1} \sigma_h(u, \lambda).$$ Analogous estimates can be derived for $Y_3$ and yield . We leave the details to the reader. Starting from and arguing by induction on $p$ it is easy to prove the following result. We omit the proof for the sake of brevity.
\[inductiveest\] For every $p \in \N, t \geq 1$ and $\lambda >0$ sufficiently large we have $$\sigma_p(u, \lambda) \leq 2^{p} \sigma_0(u, \lambda) + \sum_{\ell=1}^p 2^{p-\ell} \binom{p}{\ell} (\ell !)^{-\frac{kmt}{k+m}}\sigma_0(P^\ell u, \lambda).$$ *Proof of Theorem \[t1\].* The fact that the Gelfand-Shilov regularity of $u$ implies is easy to prove and we omit the details. In the opposite direction, by Proposition \[char\] it is sufficient to prove that $u$ satisfies for every $r>0$. From Lemma \[inductiveest\] and , we have, for every $p \in \N$: $$\sigma_p(u, \lambda) \leq C + \sum_{\ell=1}^p 2^{p-\ell} \binom{p}{\ell} C^{\ell+1} \leq C(2+C)^{p+1}.$$ Therefore $$|u|_p \leq C^{p+1}p!^{\frac{kmt}{k+m}}$$ for a new constant $C>0$, which gives in the case $r \in \N$. If $ r>0$ is not an integer, then $p<r<p+1$ for some $ p \in \N$ and we can apply Lemma \[AntiWick\], which yields $$\begin{aligned} |u|_r &\leq& \varepsilon |u|_{p+1}+C \varepsilon^{-\frac{r-p}{p+1-r}}|u|_p + C^p (p!)^{\frac{km}{k+m}}|u|_0 \\ &\leq& \varepsilon C_1^{p+1}(p+1)!^{\frac{kmt}{k+m}} + C_1^p \varepsilon^{-\frac{r-p}{p+1-r}} (p+1)!^{\frac{kmt}{k+m}} + C_1^p (p+1)!^{\frac{kmt}{k+m}} \leq C_2^{r+1} r^{\frac{kmrt}{k+m}}.\end{aligned}$$ Then, by Proposition \[char\] we conclude that $u \in \mathcal{S}^{\frac{kt}{k+m}}_{\frac{mt}{k+m}}(\R^n)$. Similarly we argue for $u \in \Sigma^{\frac{kt}{k+m}}_{\frac{mt}{k+m}}(\R^n)$.\ *Proof of Theorem \[t2\]*. The equivalence between $ii)$ and $iii)$ is obvious. Moreover $iii)$ is equivalent to $iv)$ in view of . The arguments are similar for the $\mathcal{S}^{\frac{kt}{k+m}}_{\frac{mt}{k+m}}(\R^n)$ and $\Sigma^{\frac{kt}{k+m}}_{\frac{mt}{k+m}}(\R^n)$ classes. To conclude the proof we will show the equivalence between $i)$ and $iv)$. We first observe that $$\|P^Mu\|^2_{L^2}=\|\sum\limits_{j=1}^\infty u_jP^M \varphi_j \|_{L^2}^2=\sum\limits_{j=1}^\infty \lambda_j^{2M}|u_j|^2,$$ in view of the Parseval identity.
By it follows that $$\begin{aligned} \label{eq3.2} C_1\|P^Mu\|^2_{L^2}\leq \sum\limits_{j=1}^\infty j^{2Mkm/(n(k+m))}|u_j|^2 \leq C_2\|P^Mu\|^2_{L^2}\end{aligned}$$ for suitable positive constants $C_1,C_2$. Now if $iv)$ holds, then we have $$|u_j|^2\leq e^{-\varepsilon j^{1/(nt)}}$$ for some new constant $\varepsilon>0$. Then from the first estimate in (\[eq3.2\]) we have for some $C>0$ $$\begin{aligned} \|P^Mu\|^2_{L^2} &\leq & C\sum\limits_{j=1}^\infty j^{2Mkm/(n(m+k))}e^{-\varepsilon j^{1/(nt)}} \\ &\leq& \tilde{C}\sup_{j \in \N}j^{2Mmk/(n(m+k))}e^{-\varepsilon j^{1/(nt)}} \label{eq3.3} \end{aligned}$$ with $$\tilde{C}=C\sum\limits_{j=1}^\infty e^{-\varepsilon j^{1/(nt)}}.$$ Moreover, for any fixed $\omega >0$ we have $$e^{\omega j^{1/(nt)}}=\sum\limits_{M=0}^\infty \frac{\omega^Mj^{M/(nt)}}{M!}.$$ This implies that for every $M \in {\mathbb N}$: $$\begin{aligned} \label{eq3.4} j^{M/(nt)}e^{-\omega j^{1/(nt)}}\leq \omega^{-M}M!\end{aligned}$$ Taking the $2kmt/(k+m)$-th power of both sides of (\[eq3.4\]) and applying the resulting bound in the last estimate in (\[eq3.3\]) with $$\omega =2\varepsilon kmt/(k+m),$$ we obtain $$\begin{aligned} \|P^Mu\|^2_{L^2}\leq \tilde{C}\omega^{-\frac{2Mkmt}{k+m}}(M!)^{\frac{2mkt}{m+k}},\end{aligned}$$ which gives $i)$ in view of Theorem \[t1\].\ $i) \Rightarrow iv)$ Conversely, assume that $u \in \mathcal{S}^{\frac{kt}{k+m}}_{\frac{mt}{k+m}}(\R^n)$. It is sufficient to show that $$\label{bord}\sup_{j \in \N} |u_j|^2 e^{\varepsilon j^{\frac1{nt}}}< +\infty.$$ Theorem \[t1\] and the second inequality in imply that $$\frac{j^{\frac{2Mkm}{n(k+m)}}}{C^M (M!)^{\frac{2kmt}{k+m}}}|u_j|^2 \leq C$$ for every $j, M \in \N$ and for some $C$ independent of $j$ and $M$. Taking the supremum of the left-hand side over $M$ we get with $\varepsilon =\frac{2kmt}{k+m}C^{-\frac{k+m}{2kmt}}.$ This concludes the proof. Generalizations =============== We list some possible generalizations of the preceding results.
First, one can replace the hypothesis of positivity for the operator $P$ by assuming that $P$ is normal, i.e. $P^*P = PP^*$. This guarantees the existence of an orthonormal basis of eigenfunctions $\varphi_j, j \in \N$, with eigenvalues $\lambda_j, \lim\limits_{j \to \infty}|\lambda_j| =+\infty$, see [@shubin], and we may then proceed as before, cf. [@S2].\ Another possible generalization consists in replacing $L^2$ norms with $L^p$ norms, $1<p<\infty$. Let us observe that the basic estimate is valid also for $L^p$ norms, see [@GM; @M], and it seems easy to extend Theorem \[t1\] in this direction.\ A much more challenging problem is an analogous characterization of the classes $\mathcal{S}^\mu_\nu( \R^n)$ when $\kappa = \mu/\nu = k/m $ is irrational. First difficulty, in this case, is given by an appropriate choice of the operator $P$. In fact, the natural candidates $$P= (-\Delta)^{m/2} +(1+|x|^2)^{k/2}, \qquad m \in 2\N, k >0, k \notin 2\N$$ can be easily treated in the setting of temperate distributions but results of Gelfand-Shilov regularity, extending those in [@CGR2], are missing for them. 0.3cm **Note.** With great sorrow, Marco Cappiello, Stevan Pilipovic and Luigi Rodino inform that their friend Todor Gramchev passed away on October 17, 2015. He inspired and collaborated to the initial version of the present paper and appears here as co-author. [10]{} A.Ascanelli, M.Cappiello, *Hölder continuity in time for SG hyperbolic systems*, J. Differential Equations **244** (2008), 2091–2121. A. Avantaggiati, *$S$-spaces by means of the behaviour of Hermite-Fourier coefficients*, Boll. Un. Mat. Ital. **6** (1985), 487–495. H.A. Biagioni, T. Gramchev, *Fractional derivative estimates in Gevrey spaces, global regularity and decay for solutions to semilinear equations in $\R^n$*, J. Differential Equations **194** (2003), 140–165. P. Boggiatto, E. Buzano, L. Rodino, *Global hypoellipticity and spectral theory.* Math. Res. **92**, Akademie Verlag, Berlin, 1996. D. 
Calvo, L. Rodino, [*Iterates of operators and Gelfand-Shilov functions*]{}, Integral Transforms Spec. Funct. **22** (2011), 269–276. M. Cappiello, T. Gramchev, L. Rodino, *Super-exponential decay and holomorphic extensions for semilinear equations with polynomial coefficients.* J. Funct. Anal. **237** (2006), 634–654. M. Cappiello, T. Gramchev, L. Rodino, *Entire extensions and exponential decay for semilinear elliptic equations*, J. Anal. Math. **111** (2010), 339–367. M. Cappiello, T. Gramchev, L. Rodino, *Sub-exponential decay and uniform holomorphic extensions for semilinear pseudodifferential equations*, Comm. Partial Differential Equations **35** (2010), n. 5, 846–877. M. Cappiello, L. Rodino, *SG-pseudo-differential operators and Gelfand-Shilov spaces*, Rocky Mountain J. Math. **36** (2006), n. 4, 1117–1148. M. Cappiello, J. Toft, *Pseudo-differential operators in a Gelfand-Shilov setting*, Math. Nachr. (2016). To appear. Y. Chen, M. Signahl, J. Toft, *Factorizations and singular value estimates of operators with Gelfand–Shilov and Pilipović kernels*, arXiv:1511.06257 (2016). J. Chung, S. Y. Chung, D. Kim, *Characterization of the Gelfand-Shilov spaces via Fourier transforms*, Proc. Am. Math. Soc. **124** (1996), 2101–2108. E. Cordero, F. Nicola, L. Rodino, *Wave packet analysis of Schrödinger equations in analytic function spaces*, Adv. Math. **278** (2015), 182–209. E. Cordero, S. Pilipović, L. Rodino, N. Teofanov, *Localization operators and exponential weights for modulation spaces*, Mediterranean J. Math. **2** (2005), 381–394. A. Dasgupta, M. Ruzhansky, *Eigenfunction expansions of ultradifferentiable functions and ultradistributions*, Trans. Amer. Math. Soc., to appear. Available at https://arxiv.org/abs/1410.2637. G. Garello, A. Morando, *$L^p$-bounded pseudo-differential operators and regularity for multi-quasi-elliptic equations*, Integral Equations Operator Theory **51** (2005), 501–517. I.M. Gelfand, G.E.
Shilov, *Generalized functions II.* Academic Press, New York, 1968. T. Gramchev, S. Pilipović, L. Rodino, [*Global Regularity and Stability in S-Spaces for Classes of Degenerate Shubin Operators*]{}. Pseudo-Differential Operators: Complex Analysis and Partial Differential Equations, Operator Theory: Advances and Applications **205** (2010), 81–90. T. Gramchev, S. Pilipović, L. Rodino, [*Eigenfunction expansions in $\R^n$*]{}, Proc. Amer. Math. Soc. **139** (2011), 4361–4368. K. Gröchenig, G. Zimmermann, *Spaces of test functions via the STFT*, J. Funct. Spaces Appl. **2** (2005), 1671–1716. B. Helffer, *Théorie spectrale pour des opérateurs globalement elliptiques.* Astérisque **112**, Société Mathématique de France, Paris, 1984. H. Komatsu, [*A proof of Kotake and Narasimhan’s Theorem*]{}. Proc. Japan Acad. **38** (1962), 615–618. T. Kotake, M.S. Narasimhan, [*Regularity theorems for fractional powers of a linear elliptic operator*]{}. Bull. Soc. Math. France **90** (1962), 449–471. M. Langenbruch, *Hermite functions and weighted spaces of generalized functions.* Manuscripta Math. **119** (2006), 269–285. B.S. Mitjagin, *Nuclearity and other properties of spaces of type $S$*, Amer. Math. Soc. Transl., Ser. 2, **93** (1970), 45–59. A. Morando, *$L^p$-regularity for a class of pseudo-differential operators in $\R^n$*, J. Partial Differential Equations **18** (2005), 241–262. F. Nicola, L. Rodino, [*Global pseudo-differential calculus on Euclidean spaces*]{}, Birkhäuser, Basel, 2010. S. Pilipović, *Generalization of Zemanian spaces of generalized functions which have orthonormal series expansions*, SIAM J. Math. Anal. **17** (1986), 477–484. S. Pilipović, *Tempered ultradistributions.* Boll. Unione Mat. Ital. VII. Ser. B [**2**]{} (1988), 235–251. S. Pilipović, N. Teofanov, *Pseudodifferential operators on ultramodulation spaces*, J. Funct. Anal. **208** (2004), 194–228. M. Reed, B. Simon, *Methods of modern mathematical physics*, Vol. 1.
Academic Press, San Diego, CA, 1975. R.T. Seeley, [*Integro-differential operators on vector bundles*]{}, Trans. Am. Math. Soc. **117** (1965), 167–204. R.T. Seeley, [*Eigenfunction expansions of analytic functions*]{}, Proc. Am. Math. Soc. **21** (1969), 734–738. M. Shubin, *Pseudodifferential operators and spectral theory.* Springer Series in Soviet Mathematics, Springer Verlag, Berlin, 1987. J. Toft, *Multiplication properties in Gelfand-Shilov pseudo-differential calculus.* In: Molahajloo, S., Pilipović, S., Toft, J., Wong, M.W. (eds.) Pseudo-Differential Operators, Generalized Functions and Asymptotics, Operator Theory: Advances and Applications, Birkhäuser, Basel, **231** (2013), 117–172. J. Toft, *Images of function and distribution spaces under the Bargmann transform*, J. Pseudo-Differ. Oper. Appl. DOI 10.1007/s11868-016-0165-9 (2016). J. Vindas, Dj. Vuckovic, *Eigenfunction expansions of ultradifferentiable functions and ultradistributions in $\mathbb R^n$*, arXiv:1512.01684 (2016). M.W. Wong, *The heat equation for the Hermite operator on the Heisenberg group.* Hokkaido Math. J. **34** (2005), 393–404.
[NORDITA-2013-062]{}\ [KCL-MTH-13-08]{}\ [IHES/P/13/27]{}\ [arXiv:1308.4972]{}\ [Jesper Greitz${}^{a}$[^1], Paul Howe${}^{b}$[^2] and Jakob Palmkvist${}^c$[^3]]{} ${}^a$[*Nordita* ]{} [*Roslagstullsbacken 23, SE-106 91 Stockholm, Sweden* ]{} ${}^b$[*Department of Mathematics, King’s College London*]{} [*The Strand, London WC2R 2LS, UK*]{} ${}^c$[*Institut des Hautes Études Scientifiques*]{} [*35, Route de Chartres, FR-91440 Bures-sur-Yvette, France* ]{} [**Abstract**]{} A compact formulation of the field-strengths, Bianchi identities and gauge transformations for tensor hierarchies in gauged maximal supergravity theories is given. A key role in the construction is played by the recently-introduced tensor hierarchy algebra. It has been known for many years that the forms in $D$-dimensional maximal supergravity theories, when the duals of the physical forms are included, are associated with algebraic structures [@Cremmer:1997ct; @Cremmer:1998px]. These structures have been interpreted as sub-algebras of Borcherds algebras [@HenryLabordere:2002dk; @HenryLabordere:2002xh] and in terms of extended $E$-series algebras [@Julia:1997cy; @West:2001as; @Damour:2002cu; @Riccioni:2007au; @Bergshoeff:2007qi; @Bergshoeff:2007vb; @Riccioni:2007ni; @Bergshoeff:2008xv; @Riccioni:2009xr]. It has been found that there are also $(D-1)$-form potentials (de-forms), associated with deformations, and $D$-forms, otherwise known as top forms, both carrying no physical degrees of freedom, whose existence is implied by these algebraic structures (these were first observed in $D=10$ [@Bergshoeff:2005ac; @Bergshoeff:2006qw]). In general, the potential forms transform under representations $\cR_{\ell}$ of the duality group of the given supergravity theory where the level number $\ell$ coincides with the form-degree.
In a separate, but related, development, studies of the general structure of gauged supergravities [@deWit:1981eq; @Gunaydin:1984qu; @Pernici:1984xx; @Hull:1984vg; @Hull:1984qz; @deWit:1983gs; @Nicolai:2000sc; @Nicolai:2001sv; @deWit:2002vt; @deWit:2003hr] have revealed that the same sets of forms are needed in that context (with two exceptions for $D=3$) and that the gauge transformations of the potentials at level $\ell$ involve parameters up to level $(\ell+1)$, the whole set of forms giving rise to a tensor hierarchy [@deWit:2005hv; @deWit:2008ta; @deWit:2008gc]. A key feature of this general construction is the use of the embedding tensor that specifies how the gauge group $G_0$ is embedded in the duality group $G$. The embedding tensor is treated as a spurionic object that transforms under a representation of the duality group, although in a given gauging it becomes fixed and symmetry under $G$ is lost. This technique allows the formalism to be developed generally for an arbitrary gauging. In reference [@Henneaux:2010ys] it was shown how one could derive Borcherds algebras for maximal supergravity theories starting from $E_{11}$, while in [@Palmkvist:2011vz], it was shown how to go in the other direction. More recently, it was argued in [@Kleinschmidt:2013em] that the Borcherds algebras given in [@HenryLabordere:2002dk; @Henneaux:2010ys] for $D>7$ do not agree with those obtained by oxidation from lower dimensions. It has also become clear that the Lie superalgebras determined by the forms do not imply unique Borcherds algebras for these cases. Moreover, these Lie superalgebras of forms can be extended in a different way that is not symmetrical about $\ell=0$ (as the Borcherds algebras are). 
The resulting new algebras, called tensor hierarchy algebras (THAs) [@Palmkvist:2013vya], have the property that they encode the sequence of maps, $Y_{\ell+1,\ell}:\cR_{\ell+1}\rightarrow\cR_{\ell}$, that appear in the formulae for the field-strengths in the tensor hierarchy, in a simple way, namely as the adjoint action of a level $-1$ element corresponding to the embedding tensor. A forerunner of this type of algebra extension was given in [@Lavrinenko:1999xi] in the context of massive IIA supergravity where a level $-1$ element was used to describe the deformations of the field-strengths with respect to the massless case.[^4] Borcherds algebras, extended $E$-series algebras and THAs are all $\bbZ$-graded algebras, where an integer $\ell\in\bbZ$ labels a non-zero subspace, and at the same time can be interpreted as the degree of a form. With such an interpretation, these algebras are therefore truncated in a spacetime context, but in superspace there is no limit to the degree a form can have, so it is natural to include all of them [@Greitz:2011da]. This latter point of view has some advantages, one of which is that the top forms can be treated gauge-covariantly because their $(D+1)$-form field-strengths make perfectly good sense in superspace. Moreover, even in the context of on-shell maximal supergravity, there can be over-the-top forms. For example, in IIA supergravity there is a twelve-form RR field-strength tensor that has a non-zero superspace component [@Greitz:2011da]. This fact allows the Lie superalgebra of forms to be discussed without the complications of gauge symmetries or truncation. Moreover, it is quite possible that higher-degree forms will become non-trivial when higher-order string corrections are taken into account [@Greitz:2012vp]. In the context of gauging, a superspace framework allows one to discuss the complete hierarchy in a natural way without truncation [@Greitz:2011vh; @Greitz:2012vp]. 
In the current note, we shall extend the ideas of [@Cremmer:1997ct; @Cremmer:1998px], taking into account the algebraic point of view of [@Palmkvist:2013vya], in order to develop a simple formalism for tensor hierarchies in maximal supergravity theories, focusing on the simplest cases, $3\leq D \leq 7$. We give compact formulae for the full (infinite) sets of field-strengths, gauge transformations and Bianchi identities. These formulae are valid in both spacetime and superspace, although, as we have mentioned, the latter framework allows one to avoid issues of truncation. For maximal supergravity theories in $3\leq D\leq7$ dimensions the forms determine a (proper) Lie superalgebra generated by the level-one elements, subject to the supersymmetry constraint, and the duality algebra $\gg$ is simple and finite-dimensional.[^5] However, the formalism can be easily adapted to higher dimensions and should be applicable in other cases such as half-maximal supergravity theories [@Nicolai:2001ac; @deWit:2003ja; @Weidner:2006rp; @Bergshoeff:2007vb; @Greitz:2012vp]. On the other hand, it does not generalise straightforwardly to the conformal tensor hierarchies which have been studied recently in $D=6\ (1,0)$ supersymmetry [@Samtleben:2011fj; @Samtleben:2012mi; @Samtleben:2012fb; @Bandos:2013jva; @Bandos:2013sia; @Palmer:2013pka].[^6] As mentioned above, the set of forms in any maximal supergravity theory determines a Lie superalgebra, $\gf$, which is graded, not only as a superalgebra, but also with a subspace for each positive integer $\ell$, called the level. This can be most easily described in terms of the field-strengths $F_{\ell+1}$, where the subscript denotes the form-degree. At each level there will be a set of forms determined by a representation $\cR_{\ell}$, which is generically reducible. 
The Bianchi identities are $$dF_{\ell+1}=\sum_{m+n=\ell} F_{m+1}\, F_{n+1}\ ,$$ while the consistency of these identities, $d^2=0$, requires, schematically, $$\sum_{p+q+r=\ell} F_{p+1}\, F_{q+1}\, F_{r+1}=0\ ,$$ where the wedge product between the forms is understood. These two equations determine the Lie bracket and the Jacobi identity respectively for the Lie superalgebra $\gf$. One must also require that the Bianchi identities are soluble, and one can show, using superspace cohomology, that this implies that there is a further constraint on the representations that are allowed at level two [@Greitz:2011vh; @Greitz:2012vp]. This is the so-called supersymmetry constraint. Let $e_{\cM}$ denote the basis elements of $\gf$ at level one, $\cM=1,\ldots, {\rm dim}\, \cR_1$. Then the basis elements at higher levels are determined sequentially by imposing the supersymmetry constraint at level two and the Jacobi identity. So at level two we have $[e_{\cM},e_{\cN}]=e_{\cM\cN}$, and at higher levels we write $$\begin{aligned}\label{nesting} [e_{\cM_1},[e_{\cM_2},\ldots,[e_{\cM_{\ell-1}},e_{\cM_{\ell}}]\cdots]] = e_{\langle\cM_1 \cdots \cM_{\ell}\rangle}\ .\end{aligned}$$ Here and elsewhere the brackets are understood to be graded. For example, $[e_{\cM},e_{\cN}]$ is symmetric, since the level one basis elements $e_{\cM}$ are odd. The level two basis elements $e_{\cM\cN}$ are then even, and the indices are projected onto the representation $\cR_2$, which is contained in the symmetric product of two $\cR_1$ representations. However, at higher levels $e_{\cM_1\cdots \cM_{\ell}}$ will not be fully symmetric on the indices. The notation $\langle \cM_1\cdots \cM_{\ell} \rangle$ will be used to denote the projection of $\cR_1{}^{\otimes \ell}$ onto $\cR_{\ell}$. We denote the Lie algebra of the duality group $G$ by $\gg$. We can obtain a new algebra $\gg_{\gf}$ by taking the semi-direct sum of $\gg$ with $\gf$.
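A familiar instance of the representations $\cR_{\ell}$ encountered above, given here purely for orientation (standard facts about maximal supergravity; conjugation conventions vary between references):

```latex
% D = 5 maximal supergravity, duality group G = E_{6(6)}:
\cR_1 = \overline{\bf 27}\quad (\hbox{vectors}), \qquad
\cR_2 = {\bf 27}\quad (\hbox{two-forms}), \qquad
\cR_3 = {\bf 78}\quad (\hbox{three-forms, the adjoint}).
% In accordance with \cR_2 \subset (\cR_1 \otimes \cR_1)_{\rm sym}, the
% symmetric product of two \overline{\bf 27}'s contains the {\bf 27}
% (via the E_{6(6)} cubic invariant).
```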
The action of $\gg$ on $e_{\cM}$ is given by $$[t_m,e_{\cN}]=t_{m\cN}{}^{\cP}\, e_{\cP}\ ,$$ where $t_m$, $m=1,\ldots, {\rm dim}\, \gg$, is a basis for $\gg$ and where $t_{m\cN}{}^{\cP}$ represents this basis in the representation $\cR_1$. Since $\gf$ is infinite-dimensional so is $\gg_{\gf}$. The THA is constructed from $\gg_{\gf}$ by appending a subspace at level $-1$, with a basis $\phi_m{}^\cM$, corresponding to a representation $\cR_{-1}$ of $\gg$, contained in the tensor product of the adjoint and the dual representation of $\cR_1$. The bracket of this subspace at level $-1$ with the level-one subspace is defined by $$\begin{aligned}\label{bracket+1-1} [\phi_m{}^{\cM},e_{\cN}]=\langle \delta_{\cN}{}^{\cM}\, t_{m}\rangle\ ,\end{aligned}$$ where the diagonal hook brackets denote projection on $\cR_{-1}$. The level $-1$ subspace then generates an extension of $\gg_{\gf}$ to all negative levels, but here we will only consider the subalgebra $\hat{\gg}$ of the THA generated by $\gg_{\gf}$ and a single element $\Th$ at level $-1$, such that $[\Th,\Th]=0$. Thus $\hat{\gg}$ does not have any lower levels. We set $\Th=\Th_{\cM}{}^m \phi_m{}^{\cM}$, where we identify $\Th_{\cM}{}^m$ with the embedding tensor, a constant tensor that describes how the gauge group is embedded into the duality group (times a coupling strength). The representation $\cR_{-1}$ is determined by $\cR_{2}$ since the Jacobi identity forces the elements $$[\phi_m{}^{\cM},e_{\cN\cP}] - [[\phi_m{}^{\cM},e_{\cN}],e_{\cP}] - [[\phi_m{}^{\cM},e_{\cP}],e_{\cN}]$$ to vanish. Thus the supersymmetry constraint is not only a constraint on the elements at level two, but also a constraint on the embedding tensor at level $-1$ (as such, it is also known as the representation constraint). By the Jacobi identity it follows from (\[bracket+1-1\]) that the bracket with $\gg$ at level zero is given by $$\begin{aligned}\label{4} [t_m,\phi_n{}^{\cM}]=f_{mn}{}^p\, \phi_p{}^{\cM} - t_{m\cN}{}^{\cM}\, \phi_{n}{}^{\cN}\ .\end{aligned}$$ Contracted with $\Th_\cM{}^m$, this simply says that the embedding tensor transforms in the representation $\cR_{-1}$.
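As a quick consistency check of (\[bracket+1-1\]) (a sketch; at this level the projection simply returns $t_m$), contracting with the embedding tensor reproduces the generators of the gauge algebra used below:

```latex
% Bracket of \Th = \Th_{\cM}{}^m \phi_m{}^{\cM} with a level-one element:
X_{\cM} \;=\; [\Th, e_{\cM}]
        \;=\; \Th_{\cN}{}^{n}\,[\phi_n{}^{\cN}, e_{\cM}]
        \;=\; \Th_{\cM}{}^{m}\, t_{m}\ ,
% i.e. the standard embedding-tensor form of the gauge generators,
% with \Th_{\cM}{}^m selecting the gauged subalgebra \gg_0 of \gg.
```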
When one combines [(\[4\])]{} with the fact that $[\Th,\Th]=0$, one sees that the embedding tensor is invariant under the gauge algebra $\gg_0$ which is spanned by $X_\cM= \Th_\cM{}^m t_m = [\Th,e_\cM]$. Since $\Th$ is an element at level $-1$, its brackets with the basis elements at level $\ell$ will be at level $(\ell-1)$, $$[\Th,e_{\cM_1\cdots\cM_{\ell}}]=(-1)^{\ell}\, Y_{\cM_1\cdots\cM_{\ell}}{}^{\cN_{\ell-1}\cdots\cN_{1}}\, e_{\cN_1\cdots\cN_{\ell-1}}\ ,$$ where the sign factor is included for later convenience. Using the Jacobi identity and $[\Th,\Th]=0$ we see that $Y_{\ell+1,\ell} Y_{\ell,\ell-1}=0$, so that we can identify the $Y_{\ell+1,\ell}$ with the intertwiner that maps $\cR_{\ell+1}\rightarrow \cR_{\ell}$. We therefore see that the THA encodes in a very concise manner the properties of the embedding tensor and the intertwiners [@deWit:2005hv; @deWit:2008ta; @deWit:2008gc]. We refer to [@Palmkvist:2013vya] for a full derivation of the THA and its properties.[^7] Let $\O$ denote the associative superalgebra of forms and $\cU_{\hat \gg}$ the enveloping algebra of $\hat\gg$. We shall be interested in objects that take their values in the tensor product $\O\otimes\hat\gg:=\O_{\hat\gg}$, which can be viewed as a Lie superalgebra, or in the tensor product $\O\otimes\cU_{\hat\gg}$, which can be viewed as an associative superalgebra. The degree of an element in $\O_{\hat\gg}$ (or $\O\otimes\cU_{\hat\gg}$) is then the sum of the degrees of its constituents in $\O$ and $\hat \gg$ (or $\O$ and $\cU_{\hat\gg}$). In particular, note that odd forms anti-commute with odd elements of $\cU_{\hat\gg}$. We shall assume that the exterior derivative acts from the right (as in superspace), and also define another odd derivation acting from the right, $L_{\Th}$, that takes the bracket of a given element with $\Th$. Because $\Th$ is constant it is easy to check that $$\begin{aligned}\label{cross-terms} d\, L_{\Th}+L_{\Th}\, d=0\ ,\end{aligned}$$ and as $L_{\Th}{}^2=0$ as well, it follows that the operator $d_{\Th}:=d+L_{\Th}$ is nilpotent.
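The nilpotency just used rests on two facts: the cross-terms cancel because $\Th$ is constant, and $L_{\Th}{}^2=0$ follows from $[\Th,\Th]=0$ since $\mathrm{ad}_{\Th}^{\,2}=\tfrac12\,\mathrm{ad}_{[\Th,\Th]}$ for an odd element. A toy matrix illustration of the latter (our own, in the smallest super setting $gl(1|1)$, with graded commutator $[X,Y]=XY-(-1)^{|X||Y|}YX$):

```python
import numpy as np

def sbracket(X, Y, px, py):
    # Graded commutator [X,Y] = XY - (-1)^{|X||Y|} YX; px, py are parities (0 or 1).
    return X @ Y - (-1) ** (px * py) * Y @ X

# An odd, square-zero 2x2 matrix playing the role of Theta in gl(1|1)
# (diagonal blocks are even, off-diagonal blocks are odd).
Theta = np.array([[0.0, 1.0], [0.0, 0.0]])

# [Theta,Theta] = 2 Theta^2 = 0, the analogue of the condition in the text.
assert np.allclose(sbracket(Theta, Theta, 1, 1), 0)

# L_Theta u := [u,Theta]; check L_Theta^2 u = [[u,Theta],Theta] = 0
# on both an even and an odd test element u.
u_even = np.diag([2.0, -3.0])
u_odd = np.array([[0.0, 0.0], [5.0, 0.0]])
for u, p in [(u_even, 0), (u_odd, 1)]:
    LTu = sbracket(u, Theta, p, 1)            # ad_Theta flips the parity of u
    LT2u = sbracket(LTu, Theta, (p + 1) % 2, 1)
    assert np.allclose(LT2u, 0)
print("L_Theta^2 = 0 on gl(1|1) test elements")
```

The same computation goes through for any odd square-zero element of any matrix superalgebra; only $[\Th,\Th]=0$ is used.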
The potentials $A_\ell$, gauge parameters $\L_{\ell-1}$ and field-strengths $F_{\ell+1}$ that we consider are actually forms (with the form degrees given by the subscripts) contracted with the basis elements of ${\hat\gg}$ at level $\ell$. In order to minimise signs it is convenient to write the basis elements to the left, so for any form $\o$ at level $\ell$ we set $$\o=e_{\cM_{\ell}\cdots\cM_1}\, \o^{\cM_1\cdots\cM_{\ell}}\ .$$ We have also used here the superspace convention of summing the indices from the inside out, although since these indices are not super themselves, this is not really necessary. This convention means that when we apply $d$ to a $\cU_{\hat\gg}$-valued form it starts from the right and lands directly on $\o^{\cM_1\ldots\cM_{\ell}}$. Thus $A_\ell$ are even elements of $\O_{\hat\gg}$, while $\L_{\ell-1}$ and $F_{\ell+1}$ are odd, and the same of course holds for their sums $$\begin{aligned} A&=\sum_{\ell\geq1} A_{\ell}\ , & \L&=\sum_{\ell\geq1} \L_{\ell-1}\ , & F&=\sum_{\ell\geq1} F_{\ell+1}\ . \end{aligned}$$ Note that none of these objects has a level-zero or minus one component. We begin with the ungauged case. The formalism is essentially the same as that of [@Cremmer:1997ct; @Cremmer:1998px]. We put $$\begin{aligned}\label{7} F=d\, e^{A}\, e^{-A}\ .\end{aligned}$$ This can be considered to be a modified Maurer-Cartan form. It clearly satisfies $$\begin{aligned}\label{8} dF+F^2=0\ .\end{aligned}$$ Equation (\[8\]) gives the Bianchi identities for all of the field-strength forms. These identities are consistent because the underlying algebra $\gf$ is the Lie superalgebra of forms that was derived from the Bianchis in the first place. (Equivalently, one could view (\[7\]) as the solution to these identities in terms of potentials.) Defining $$\d e^{A}e^{-A}=Z\ ,$$ we find that $$\d F=dZ + [Z,F]\ .$$ We want the $F$s to be gauge-invariant, so we require $dZ + [Z,F]=0$. This is solved by $$Z=d\L+ [\L,F]\ .$$ The invariance of the $F$s then follows straightforwardly using (\[8\]).
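For orientation, expanding (\[7\]) level by level (a sketch using the standard expansion $d(e^{A})\,e^{-A}=\sum_{n\geq0}\tfrac{1}{(n+1)!}\,\mathrm{ad}_{A}^{\,n}(dA)$; overall signs depend on the right-action conventions above):

```latex
% Lowest-level field strengths in the ungauged case, A = A_1 + A_2 + A_3 + ...
F_2 = dA_1\ , \qquad
F_3 = dA_2 + \tfrac{1}{2}[A_1, dA_1]\ , \qquad
F_4 = dA_3 + \tfrac{1}{2}\big([A_1, dA_2] + [A_2, dA_1]\big)
           + \tfrac{1}{6}[A_1,[A_1, dA_1]]\ .
% Each line is the level-\ell part of F = d e^{A} e^{-A}; setting the
% embedding tensor to zero in the gauged expressions below reproduces these.
```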
We can include the scalars in the picture in a covariant fashion by encoding the scalar fields in an element $\cV$ of the duality group $G$. The latter acts on $\cV$ to the right globally, while the local R-symmetry group $H$ acts on the left, $\cV\rightarrow h^{-1}\cV g$. If we now set $$\F = d(\cV e^{A})\, e^{-A}\, \cV^{-1}\ ,$$ then clearly $d\F+\F^2=0$. The Maurer-Cartan form $\F$ can be rewritten as $$\F = d\cV\, \cV^{-1} + \cV F \cV^{-1}\ .$$ Now $d\cV \cV^{-1}=P+Q$, where $Q$ is the composite connection for $\gh$, the Lie algebra of $H$, while $P$, which takes its values in the quotient of $\gg$ by $\gh$, can be considered as the one-form field-strength tensor for the scalar fields. Note that $\F$ is invariant under $G$, so that we can consider $\cV F\cV^{-1}:=\tilde F$ to be the field-strength forms in the $H$-basis. The Maurer-Cartan equation for $\F$ then gives $$\begin{aligned} R+ DP+ P^2&=0\ ,\\ D\tilde F + \tilde F^2 + [\tilde F,P]&=0\ ,\end{aligned}$$ where $R=dQ+Q^2$ is the $\gh$-curvature and $D$ the $\gh$-covariant derivative. The generalisation of the above to the gauged case is fairly straightforward. We define $A$ as before but then put $$\begin{aligned}\label{15} F'=d_{\Th}\, e^{A}\, e^{-A}\ .\end{aligned}$$ Note that $F'$ now has a level-zero component $[A_1,\Th]:=\cA$. This is the gauge field for $G_0$. So $F'=F+\cA$, where $F$ is the sum of the field-strength forms starting at level one. We have $$\begin{aligned}\label{16} d_{\Th}F' + F'^2=0\end{aligned}$$ because $d_{\Th}{}^2=0$ as noted above in [(\[6\])]{}. As for the ungauged case we can define gauge transformations by $$\d e^{A}\, e^{-A}=Z\ ,$$ and if we choose $$\begin{aligned}\label{17} Z=d_{\Th}\L-[\L_0,\Th] +[\L,F']\end{aligned}$$ then we find, using [(\[15\])]{}, that $$\begin{aligned} \d F&=[F,[\L_0,\Th]]\ ,\\ \d\cA&=[d\L_0 +[\L_0,[A_1,\Th]],\Th]\ . \label{19.1}\end{aligned}$$ The $\gg_0$-covariant derivative on a field, $\L$, for $\ell>1$, is $\cD\L=d\L + [\L,[A_1,\Th]]=d\L+[\L,\cA]$, so that [(\[19.1\])]{} is the standard formula for the gauge transformation of $\cA$ with parameter $[\L_0,\Th]$. Note that we can write $$\begin{aligned}\label{20} Z=\cD\L+ [\L,\Th] + [F,\L]\ ,\end{aligned}$$ or, for each level, $$\begin{aligned}\label{20.1} Z_\ell=\cD\L_{\ell-1} + [\L_{\ell},\Th] + \sum_{m=0}^{\ell-2} [F_{\ell-m},\L_m]\ .\end{aligned}$$
The Bianchi identities, when written out, are $$\begin{aligned}\label{21} \cD F_{\ell+1}+ (F^2)_{\ell+1} + [F_{\ell+2},\Th]=0\end{aligned}$$ for $\ell\geq 1$. At level zero we just get the identification of $\cF=d\cA+\cA^2$ with $-[F_2,\Th]$. These Bianchi identities are indeed what one expects for the tensor gauge hierarchy. Expanding out [(\[15\])]{} we find for the first three $F$s $$\begin{aligned}\label{22} F_2&=d A_1 + \tfrac12[A_1,[A_1,\Th]] + [A_2,\Th]\ ,\nonumber\\ F_3&=\cD A_2 + \tfrac12[A_1,dA_1] + \tfrac16[A_1,[A_1,[A_1,\Th]]] + [A'_3,\Th]\ ,\nonumber\\ F_4&=\cD A'_3+[A_2,F_2] -\tfrac12[A_2,[A_2,\Th]]+ \tfrac16[A_1,[A_1,dA_1]]\nonumber\\ &\quad +\tfrac1{24}[A_1,[A_1,[A_1,[A_1,\Th]]]]+ [A'_4,\Th]\ ,\end{aligned}$$ where $$\begin{aligned} A'_3&=A_3+\tfrac12[A_1,A_2]\ ,\\ A'_4&=A_4+\tfrac12[A_1,A_3]+\tfrac16[A_1,[A_1,A_2]]\ .\end{aligned}$$ These formulae, expressed in terms of the redefined gauge potentials, are the standard ones for the field-strengths in the hierarchy [@deWit:2005hv; @deWit:2008ta; @deWit:2008gc]. The first three variations are given by $$\begin{aligned} Z_1&=\cD\L_0 + [\L_1,\Th]\ ,\nonumber\\ Z_2&=\cD\L_1 +[\L_2,\Th] + [F_2,\L_0]\ ,\nonumber\\ Z_3&=\cD\L_2 +[\L_3,\Th] + [F_3,\L_0] + [F_2,\L_1]\ .\end{aligned}$$ These variations, and indeed all the $Z_{\ell}$s in [(\[20.1\])]{}, are actually the covariant variations for the potentials, $\Delta A_{\ell}$, given in the literature [@deWit:2005hv; @deWit:2008ta]. In fact they are the covariant variations for the redefined potentials $A'$ (which should be identified with the ones that are introduced from the beginning in the standard formalism). To include the scalars in the gauged case we put $$\begin{aligned}\label{MCf-scalars-gauged} \F&=\Th+d_{\Th}(\cV e^{A})\,e^{-A}\,\cV^{-1}\nonumber\\ &=\cD\cV\,\cV^{-1} +\cV(\Th+ F)\cV^{-1}\nonumber\\ &=\cP+\cQ+\tilde\Th+\tilde F\ .\end{aligned}$$ The extra $\Th$ term on the first line is necessary in order to obtain $\cV\Th\cV^{-1}$ on the second line.[^8] Conjugation with $\cV$ converts $\Th$ and $F$ from the $G$-basis to the $H$-basis as indicated on the third line. The $\cA$ gauge-field in $\cD$ acting on the scalars comes from the level-zero term in $d_{\Th}e^A e^{-A}$. It is not difficult to show that $\F$ satisfies a standard Maurer-Cartan equation $d\F+\F^2=0$.
Written out it gives $$\begin{aligned}\label{MCeq-scalars-gauged} R+D\cP+\cP^2&= -[\tilde F_2,\tilde\Th]= \cV\cF\cV^{-1}\ ,\nonumber\\ D\tilde F+ \tilde F^2 +[\tilde F,\cP] + [\tilde F_{\ell+2},\tilde\Th]&= 0\ ,\nonumber\\ D\tilde\Th+ [\tilde\Th,\cP]&=0\ ,\end{aligned}$$ where $D=d+\cQ$ is the $\gh$-covariant derivative for the gauged theory, and $R=d\cQ +\cQ^2$. Note that the tilded quantities do not transform under $G$ and, as a result, are also invariant under $G_0$. We shall now write out a few of the above formulae in components to facilitate comparison with the literature. For the covariant derivative one finds (recalling [(\[26\])]{}) $$\cD\o^{\cM_1\cdots\cM_{\ell}}=d\o^{\cM_1\cdots\cM_{\ell}} +\big( \o^{\cM_1\cdots\cM_{\ell-1}|\cN}\, A_1^{\cP}\, X_{\cP\cN}{}^{|\cM_{\ell}} + (\ell-1)\ {\rm terms}\big)\ ,$$ where $X_{\cM\cN}{}^{\cP}=\Th_{\cM}{}^m t_{m\cN}{}^{\cP}$ as usual. In deriving this we have used the fact that the gauge potential $\cA$ is $$\cA=[A_1,\Th]=[e_{\cM}\, A_1^{\cM}, \phi_n{}^{\cN}\, \Th_{\cN}{}^n]=-A_1^{\cM}\,\Th_{\cM}{}^m\, t_m\ ,$$ where the minus sign arises in taking the odd form $A_1^{\cM}$ past the odd basis element $\phi_n{}^{\cN}$. Using these rules one finds for the first two field-strength forms $$\begin{aligned}\label{29} F_2{}^{\cM}&=d A_1{}^{\cM} +A_1{}^{\cN}\, A_1{}^{\cP}\, X_{\cN\cP}{}^{\cM} + A_2{}^{\cN\cP}\, Y_{\cN\cP}{}^{\cM}\ ,\nonumber\\ F_3{}^{\cM\cN}&=\cD A_2{}^{\cM\cN}-A_1{}^{\langle\cM}\big(d A_1{}^{\cN\rangle}+A_1{}^{\cP}\, A_1{}^{\cQ}\, X_{\cP\cQ}{}^{|\cN\rangle}\big) + {A}'_3{}^{\cP\cQ\cR}\, Y_{\cP\cQ\cR}{}^{\cM\cN}\ .\end{aligned}$$ For the form indices we use the superspace convention of writing the basis forms $dx^\m$ to the left, so for a $p$-form $\o$, $$\o= dx^{\m_p}\cdots dx^{\m_1}\,\o_{\m_1\cdots\m_p}\ .$$ For the field-strengths we then find, at levels one and two, $$\begin{aligned} F_{\m\n}{}^{\cM}&=2\partial_{[\m} A_{\n]}{}^{\cM} -A_{[\m}{}^{\cN}A_{\n]}{}^{\cP}X_{\cN\cP}{}^{\cM} + A_{\m\n}{}^{\cN\cP}\, Y_{\cN\cP}{}^{\cM}\ ,\nonumber\\ F_{\m\n\r}{}^{\cM\cN}&=3\partial_{[\m} A_{\n\r]}{}^{\cM\cN}-3A_{[\m}{}^{\langle\cM}\big(\partial_{\n} A_{\r]}{}^{|\cN\rangle}- A_{\n}{}^{\cP}A_{\r]}{}^{\cQ}X_{\cP\cQ}{}^{|\cN\rangle}\big)\nonumber\\ &\quad + {A'}_{\m\n\r}{}^{\cP\cQ\cR}\, Y_{\cP\cQ\cR}{}^{\cM\cN}\ .\end{aligned}$$ The minus signs in the middle terms are due to the superspace summation convention. To get superspace formulae one simply has to substitute super-indices, $M,N$, etc, running over both $x$ and $\th$ coordinates, for $\m,\n$, etc.
In summary, the field-strengths for the tensor hierarchy in gauged maximal supergravity theories are given by the generalised Maurer-Cartan form [(\[15\])]{}, the Bianchi identities by the generalised Maurer-Cartan equation [(\[16\])]{} and the covariant gauge transformations by $Z$ defined in [(\[17\])]{}. In more detail, the gauge transformations and the Bianchi identities are given by [(\[20\])]{} and [(\[21\])]{} respectively, while the expressions for the field-strengths have to be extracted from the general definition [(\[15\])]{} level by level. The first three are given in [(\[22\])]{}. These are the standard expressions as we confirmed by writing the first two out after removing the basis elements in [(\[29\])]{}. Finally, we note that the scalars can also be incorporated in a manifestly covariant fashion by including them as a superspace (or spacetime)-dependent element $\cV$ of the duality group in the Maurer-Cartan form $\F$. Acknowledgments {#acknowledgments .unnumbered} --------------- JP would like to thank B. Julia, S. Lavau and H. Samtleben for discussions. [99]{} E. Cremmer, B. Julia, H. Lü and C. N. Pope, [*Dualisation of dualities. I*]{}, Nucl. Phys.  B [**523**]{} (1998) 73 \[[arXiv:hep-th/9710119]{}\]. E. Cremmer, B. Julia, H. Lü and C. N. Pope, [*Dualisation of dualities. II: Twisted self-duality of doubled fields and superdualities*]{}, Nucl. Phys.  B [**535**]{} (1998) 242 \[[arXiv:hep-th/9806106]{}\]. P. Henry-Labordere, B. Julia and L. Paulot, [*Borcherds symmetries in M-theory*]{}, JHEP [**0204**]{} (2002) 049 \[[arXiv:hep-th/0203070]{}\]. P. Henry-Labordere, B. Julia and L. Paulot, [*Real Borcherds superalgebras and M-theory*]{}, JHEP [**0304**]{} (2003) 060 \[[arXiv:hep-th/0212346]{}\]. B. L. Julia, [*Dualities in the classical supergravity limits: Dualisations, dualities and a detour via $4k+2$ dimensions*]{}, \[[arXiv:hep-th/9805083]{}\]. P. C. West, [*$E_{11}$ and M-theory*]{}, Class. Quant. Grav.  
[**18**]{} (2001) 4443 \[[arXiv:hep-th/0104081]{}\]. T. Damour, M. Henneaux and H. Nicolai, [*$E_{10}$ and a ‘small tension expansion’ of M-theory*]{}, Phys. Rev. Lett.  [**89**]{} (2002) 221601 \[[arXiv:hep-th/0207267]{}\]. F. Riccioni and P. C. West, [*The $E_{11}$ origin of all maximal supergravities*]{}, JHEP [**0707**]{} (2007) 063 \[[arXiv:0705.0752\[hep-th\]]{}\]. E. A. Bergshoeff, I. De Baetselier and T. A. Nutma, [*$E_{11}$ and the embedding tensor*]{}, JHEP [**0709**]{} (2007) 047 \[[arXiv:0705.1304\[hep-th\]]{}\]. E. A. Bergshoeff, J. Gomis, T. A. Nutma and D. Roest, [*Kac-Moody Spectrum of (Half-)Maximal Supergravities*]{}, JHEP [**0802**]{} (2008) 069 \[[arXiv:0711.2035\[hep-th\]]{}\]. F. Riccioni and P. C. West, [*$E_{11}$-extended spacetime and gauged supergravities*]{}, JHEP [**0802**]{} (2008) 039 \[[arXiv:0712.1795\[hep-th\]]{}\]. E. A. Bergshoeff, O. Hohm, A. Kleinschmidt, H. Nicolai, T. A. Nutma and J. Palmkvist, [*$E_{10}$ and Gauged Maximal Supergravity*]{}, JHEP [**0901**]{} (2009) 020 \[[arXiv:0810.5767\[hep-th\]]{}\]. F. Riccioni, D. Steele and P. West, [*The $E_{11}$ origin of all maximal supergravities – the hierarchy of field-strengths*]{}, JHEP [**0909**]{} (2009) 095 \[[arXiv:0906.1177\[hep-th\]]{}\]. E. A. Bergshoeff, M. de Roo, S. F. Kerstan and F. Riccioni, [*IIB Supergravity Revisited*]{}, JHEP [**0508**]{} (2005) 098 \[[arXiv:hep-th/0506013]{}\]. E. A. Bergshoeff, M. de Roo, S. F. Kerstan, T. Ortin and F. Riccioni, [*IIA ten-forms and the gauge algebras of maximal supergravity theories*]{}, JHEP [**0607**]{} (2006) 018 \[[arXiv:hep-th/0602280]{}\]. B. de Wit and H. Nicolai, [*$N=8$ Supergravity with Local ${\rm SO}(8) \times {\rm SO}(8)$ Invariance*]{}, Phys. Lett. [**B108**]{}, 285 (1982). M. Günaydin, L. J. Romans and N. P. Warner, [*Gauged $N=8$ Supergravity in Five Dimensions*]{}, Phys. Lett. B [**154**]{} (1985) 268. M. Pernici, K. Pilch and P. 
van Nieuwenhuizen, [*Gauged Maximally Extended Supergravity In Seven Dimensions*]{}, Phys. Lett. B [**143**]{} (1984) 103. C. M. Hull, [*Noncompact Gaugings Of $N=8$ Supergravity*]{}, Phys. Lett. B [**142**]{} (1984) 39. C. M. Hull, [*More Gaugings Of $N=8$ Supergravity*]{}, Phys. Lett. B [**148**]{} (1984) 297. B. de Wit and H. Nicolai, [*The Parallelizing $S^7$ Torsion In Gauged $N=8$ Supergravity*]{}, Nucl. Phys. B [**231**]{} (1984) 506. H. Nicolai and H. Samtleben, [*Maximal gauged supergravity in three dimensions*]{}, Phys. Rev. Lett.  [**86**]{} (2001) 1686 \[[arXiv:hep-th/0010076]{}\]. H. Nicolai and H. Samtleben, [*Compact and noncompact gauged maximal supergravities in three dimensions*]{}, JHEP [**0104**]{} (2001) 022 \[[arXiv:hep-th/0103032]{}\]. B. de Wit, H. Samtleben and M. Trigiante, [*On Lagrangians and gaugings of maximal supergravities*]{}, Nucl. Phys. B [**655**]{} (2003) 93 \[[arXiv:hep-th/0212239]{}\]. B. de Wit, H. Samtleben and M. Trigiante, [*Gauging maximal supergravities*]{}, Fortsch. Phys.  [**52**]{} (2004) 489 \[[arXiv:hep-th/0311225]{}\]. B. de Wit and H. Samtleben, [*Gauged maximal supergravities and hierarchies of non-\ abelian vector-tensor systems*]{}, Fortsch. Phys. [**53**]{} (2005) 442 \[[arXiv:hep-th/0501243]{}\]. B. de Wit, H. Nicolai and H. Samtleben, [*Gauged Supergravities, Tensor Hierarchies, and M-Theory*]{}, JHEP [**0802**]{} (2008) 044 \[[arXiv:0801.1294\[hep-th\]]{}\]. B. de Wit and H. Samtleben, [*The end of the $p$-form hierarchy*]{}, JHEP [**0808**]{} (2008) 015 \[[arXiv:0805.4767\[hep-th\]]{}\]. M. Henneaux, B. L. Julia and J. Levie, [*$E_{11}$, Borcherds algebras and maximal supergravity*]{}, \[[arXiv:1007.5241\[hep-th\]]{}\]. J. Palmkvist, [*Tensor hierarchies, Borcherds algebras and $E_{11}$*]{}, JHEP [**1202**]{} (2012) 066 \[[arXiv:1110.4892\[hep-th\]]{}\]. A. Kleinschmidt and J. Palmkvist, [*Oxidizing Borcherds symmetries*]{}, JHEP [**1303**]{} (2013) 044 \[[arXiv:1301.1346\[hep-th\]]{}\]. J. 
Palmkvist, [*The tensor hierarchy algebra*]{}, J. Math. Phys. [**55**]{} (2014) 011701 \[[arXiv:1305.0018\[hep-th\]]{}\]. I. V. Lavrinenko, H. Lü, C. N. Pope and K. S. Stelle, [*Superdualities, brane tensions and massive IIA/IIB duality*]{}, Nucl. Phys. B [**555**]{} (1999) 201 \[[arXiv:hep-th/9903057]{}\]. J. Greitz and P. S. Howe, [*Maximal supergravity in $D=10$: Forms, Borcherds algebras and superspace cohomology*]{}, JHEP [**1108**]{} (2011) 146 \[[arXiv:1103.5053\[hep-th\]]{}\]. J. Greitz and P. S. Howe, [*Half-maximal supergravity in three dimensions: supergeometry, differential forms and algebraic structure*]{}, JHEP [**1206**]{} (2012) 177 \[[arXiv:1203.5585\[hep-th\]]{}\]. J. Greitz and P. S. Howe, [*Maximal supergravity in three dimensions: supergeometry and differential forms*]{}, JHEP [**1107**]{} (2011) 071 \[[arXiv:1103.2730\[hep-th\]]{}\]. H. Nicolai and H. Samtleben, [*$N=8$ matter coupled $AdS_3$ supergravities*]{}, Phys. Lett. B [**514**]{} (2001) 165 \[[arXiv:hep-th/0106153]{}\]. B. de Wit, I. Herger and H. Samtleben, [*Gauged locally supersymmetric $D = 3$ nonlinear sigma models*]{}, Nucl. Phys. B [**671**]{} (2003) 175 \[[arXiv:hep-th/0307006]{}\]. M. Weidner, [*Gauged supergravities in various spacetime dimensions*]{}, Fortsch. Phys.  [**55**]{} (2007) 843 \[[arXiv:hep-th/0702084]{}\]. H. Samtleben, E. Sezgin and R. Wimmer, [*$(1,0)$ superconformal models in six dimensions*]{}, JHEP [**1112**]{} (2011) 062 \[[arXiv:1108.4060\[hep-th\]]{}\]. H. Samtleben, E. Sezgin, R. Wimmer and L. Wulff, [*New superconformal models in six dimensions: Gauge group and representation structure*]{}, PoS CORFU [**2011**]{} (2011) 071 \[[arXiv:1204.0542\[hep-th\]]{}\]. H. Samtleben, E. Sezgin and R. Wimmer, [*Six-dimensional superconformal couplings of non-abelian tensor and hypermultiplets*]{}, JHEP [**1303**]{} (2013) 068 \[[arXiv:1212.5199\[hep-th\]]{}\]. I. Bandos, H. Samtleben and D. 
Sorokin, [*Duality-symmetric actions for non-Abelian tensor fields*]{}, Phys. Rev. D [**88**]{} (2013) 025024 \[[arXiv:1305.1304\[hep-th\]]{}\]. I. A. Bandos, [*Non-Abelian tensor hierarchy in $(1,0)$ $D=6$ superspace*]{}, JHEP [**1311**]{} (2013) 203 \[[arXiv:1308.2397\[hep-th\]]{}\]. S. Palmer and C. Saemann, [*Six-Dimensional $(1,0)$ Superconformal Models and Higher Gauge Theory*]{}, J. Math. Phys. [**54**]{} (2013) 113509 \[[arXiv:1308.2622\[hep-th\]]{}\]. B. Julia, [*Kac-Moody Symmetry Of Gravitation And Supergravity Theories*]{}, LPTENS-82-22. H. Nicolai, [*The Integrability Of N=16 Supergravity*]{}, Phys. Lett. B [**194**]{} (1987) 402. [^1]: email: jesper.greitz@nordita.org [^2]: email: paul.howe@kcl.ac.uk [^3]: email: palmkvist@ihes.fr [^4]: We are grateful to B. Julia for pointing out this similarity. [^5]: In $D=8,9$, $\gg$ is not simple and in $D=10$ IIA supergravity the Lie superalgebra of forms is not generated by the level-one forms alone. In IIB the levels are even, so the Lie superalgebra of forms is not proper (has no odd elements) and in $D=11$ there is no duality group and the Lie superalgebra has only two non-empty levels, three and six. In the last two cases there are no vectors and therefore no gaugings. Below $D=3$ the duality groups become infinite-dimensional [@Julia:1982gx; @Nicolai:1987kz]. [^6]: An underlying reason for this is that the dimensions of the forms change so that the Bianchi identities no longer define Lie superalgebras. This is not usually seen in components since the hierarchy is truncated, but in superspace one can see that at level four one could have a cubic term in the Bianchi identity of the form $dF_5 \sim (F_2)^3 + \ldots$. [^7]: The THA defined in [@Palmkvist:2013vya] differs from $\hat{\gg}$ at the positive levels by the maximal ideal of $\hat{\gg}$ contained in $\gf$. 
This ideal corresponds to representations that are present in the Borcherds algebra, but not seen by the tensor hierarchy, particularly a singlet and an adjoint at levels two and three for $D=3$. [^8]: This dressed version of the embedding tensor, $\tilde\Th$, is in fact the original one, known as the T-tensor [@deWit:1981eq; @deWit:2002vt; @deWit:2003hr].
--- abstract: 'This paper considers a model consisting of a kinetic term, Rashba spin-orbit coupling and short-range Coulomb interaction at zero temperature. The Coulomb interaction is decoupled by a mean-field approximation in the spin channel using field theory methods. The results feature a first-order phase transition for any finite value of the chemical potential and quantum criticality for vanishing chemical potential. The Hall conductivity is also computed using the Kubo formula in a mean-field effective Hamiltonian. In the limit of infinite mass the kinetic term vanishes and all the phase transitions are of second order; in this case the spontaneous symmetry breaking mechanism adds a ferromagnetic metallic phase to the system and features a zero-temperature quantization of the Hall conductivity in the insulating one.' author: - Edgar Marcelino title: 'First and second-order metal-insulator phase transitions and topological aspects of a Hubbard-Rashba system' --- Introduction ============ Two-dimensional spin-orbit coupled systems are typically well described by spin-orbit interactions (SOIs) of Rashba and Dresselhaus types [@Dresselhaus; @Rashba; @SOI], which play an important role in many aspects of semiconductor physics, such as spin interferometers [@Interf1; @Interf2], persistent spin helix [@Helix1; @Helix2], and classical and quantum spin Hall effects [@Hall1; @Hall2; @Hall3; @Kane_TI2D; @Kane; @Zhang_TI2D]. SOI is closely related to interesting properties of materials yielding different kinds of topological orders; this ensures the presence of robust metallic edge states due to the impossibility for the system to interpolate between two phases with different topological orders without closing the gap.
The first example of this kind of topological order appearing in Condensed Matter Physics was given by the integer quantum Hall effect [@Klitizing], characterized by the TKNN index [@TKNN] and featuring a quantization of the Hall conductivity related to the Chern numbers of the bands located below the chemical potential. The ideas in [@TKNN], first restricted to lattice systems, were later generalized to continuous systems and obtained in situations where a strong transverse magnetic field, such as in [@Klitizing], could be absent. Indeed, one main ingredient for the manifestation of this topological order in two-dimensional systems is the breaking of time reversal (TR) symmetry [@Haldane], and the analogous phases of matter, which do not require any external magnetic field at all, are the so-called Chern insulators or anomalous quantum Hall effect [@Chern-Insulators]. When TR symmetry is present another kind of topological order, characterized by a $Z_{2}$ topological index, may manifest. This was first proposed in [@Kane_TI2D; @Kane], yielding a so-called topological insulator (TI). In this case spin is locked to momentum, leading to TR symmetry-protected gapless edge states coexisting with gapped bulk states [@Hasan-Kane-RMP; @Zhang-RMP-2011], which is provided by the band structure of several materials, even in the absence of electron-electron interactions [@Zhang-RMP-2011]. When TR is present in two-dimensional systems, charge carriers with opposite spin flow in opposite directions, featuring a quantized spin Hall conductivity, with a vanishing Hall conductivity being ensured after considering the contribution from both spins. One important example of an interacting system including SOI that exhibits a rich phase diagram is the so-called Kane-Mele-Hubbard model [@Stephan-Rachel-1], where an SOI of the Rashba type is implemented in a honeycomb lattice.
In this case the phase diagram features a quantum spin-Hall phase and also a Mott insulating antiferromagnetic one [@Stephan-Rachel-1; @Stephan-Rachel-2; @Assaad-KMH]. There was also a discussion about a spin liquid phase in the Kane-Mele Hubbard model [@Assaad-KMH], but this idea was rejected later [@Sorella-KMH; @Assaad-KMH-2]. Such a rich phase structure also arises in the square lattice for the Bernevig-Hughes-Zhang (BHZ) model [@BHZ] when Hubbard interactions are included [@Tada; @Yoshida; @Miyakoshi; @Budich]. A considerably simpler framework allowing an explicit analytic treatment is a Rashba-Hubbard system defined directly as a continuum field theory, with a second-quantized Hamiltonian given by ($\hbar=v_{F}=1$), $$\label{Eq:Rashba-Hubbard} {\cal H}=\frac{1}{2m}\nablab\psi^\dagger\cdot\nablab\psi-\mu\psi^\dagger\psi-i\alpha\psi^\dagger(\nablab\times\hat {z})\cdot\sigmab \psi -\frac{2g}{3}{\bf S}^2,$$ where $\psi^\dagger=[\psi_\uparrow^\dagger~\psi^\dagger_\downarrow]$, $\sigmab=(\sigma_x,\sigma_y,\sigma_z)$ is a vector of Pauli matrices, and ${\bf S}=(1/2)\psi^\dagger\sigmab\psi$. The last term is just a repulsive Hubbard interaction in disguise, since ${\bf S}^2=(3/2)\psi^\dagger \psi-(3/4)(\psi^\dagger\psi)^2$. In this paper this model is studied using a mean-field approximation within an effective action formalism obtained by means of a Hubbard-Stratonovich transformation in the spin channel and integrating out the fermionic fields. The mean-field results feature a first-order metal-insulator phase transition for any finite value of the chemical potential. For a vanishing chemical potential a second-order phase transition is obtained. The Hall conductivity of the system is also computed and it remains finite only in the insulating phase. Since the system is gapless in the metallic phase, no quantization of the Hall conductivity is obtained in this case. 
The limit of infinite mass, in which case the kinetic term may be neglected, is also studied and leads to a model that is TR invariant only in the paramagnetic phase. In this case spontaneous TR symmetry breaking leads to a metallic ferromagnetic phase in addition to the paramagnetic and ferromagnetic insulating ones. The Hall conductivity varies linearly with the gap in the metallic phase, being quantized in the insulating regime, where it assumes only two distinct values. In the paramagnetic case the system has TR symmetry and the Hall conductivity vanishes, as expected. The results obtained here are quite distinct from other ones following from standard models of magnetism in the literature. The Hubbard model typically yields a second-order phase transition, but not a first-order one. The Stoner model does not even exhibit a phase transition at zero temperature. Despite the fact that the results in this paper are quite different from others in the literature, they partially resemble the ones from a discontinuous metal-insulator transition proposed by Mott, in which he considers an array of hydrogen-like atoms with arbitrary lattice constant. In Mott’s paper it is assumed that the transition between two states, with all electrons being trapped or free, would occur when the screened Yukawa-like potential around each atom ceases to bind an electron. The screening behavior is determined by the Thomas-Fermi method. Details about the Mott transition can be found in Ref. [@Mott] and references therein. Mott also later studied whether or not this phase transition would become continuous in the presence of disorder. In the next section the Lagrangian density of the system with a chemical potential is considered and an effective action at finite temperature is obtained after performing a Hubbard-Stratonovich transformation. The saddle-point equations for the gap and charge density are derived after integrating out the fermions.
In section III the mean-field equations are solved and the phase transitions of the system are studied at zero temperature and also in the limit of infinite mass. In section IV the Hall conductivity is computed by considering a mean-field effective Hamiltonian and using the Kubo formula. The last section contains the conclusions and some discussions about future perspectives for this work. Effective action and gap equation ================================= In order to study the model at the mean-field level and obtain systematic fluctuation corrections, it is more convenient to use a functional integral formalism where the fermionic operators are replaced by Grassmann fields in a standard fashion. In this case, upon performing a Hubbard-Stratonovich (HS) transformation, a formally quadratic action in the fermionic fields is obtained. Using an imaginary time formalism at finite temperature, the Lagrangian featuring the HS field, $\Phib$, is obtained as, $$\label{Eq:L} {\cal L}=\psi^\dagger\left[\partial_\tau-\mu+H(\Phib)\right]\psi+\frac{1}{2}\Phib^2,$$ where $H$ represents the Hamiltonian operator, $$\label{Eq:H} H(\Phib)=-\frac{\nabla^2}{2m}-\left(i\alpha\nablab\times\hat{\bf z}+ \sqrt{\frac{g}{3}}\Phib\right)\cdot\sigmab.$$ After performing the Gaussian integral in the Grassmann fields, the [*exact*]{} effective action is given by, $$\label{Seff} S_{\rm eff}(\Phib)=-{\rm Tr}\ln\left[\partial_\tau-\mu+H(\Phib)\right]+\frac{1}{2}\int_0^\beta d\tau\int d^2r~\Phib^2.$$ A mean-field theory is obtained by assuming $\Phib$ to be uniform within a lowest order approximation, in which case the tracelog above can be evaluated more easily. 
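For completeness, the Gaussian identity behind this decoupling reads, schematically (up to a normalization constant absorbed into the functional measure), $$\exp\left(\frac{2g}{3}\int_0^\beta d\tau\int d^2r~{\bf S}^2\right)=\int{\cal D}\Phib~\exp\left[-\int_0^\beta d\tau\int d^2r\left(\frac{1}{2}\Phib^2-2\sqrt{\frac{g}{3}}~\Phib\cdot{\bf S}\right)\right],$$ as follows from completing the square in $\Phib$; the coupling $-2\sqrt{g/3}~\Phib\cdot{\bf S}=-\sqrt{g/3}~\psi^\dagger\Phib\cdot\sigmab\psi$ is precisely the $\Phib$-dependent term of Eq. (\[Eq:H\]).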
Indeed, for uniform $\Phib$, $$\label{Tracelog} \frac{1}{A}{\rm Tr}\ln\left[\partial_\tau-\mu+H(\Phib)\right]\approx\beta\sum_{\sigma=\pm}\int\frac{d^2k}{(2\pi)^2} \ln\left[1+e^{-\beta E_\sigma({\bf k})}\right],$$ where $A$ is an (infinite) area, and, $$\label{Energy_Spec} E_{\pm}(\mathbf{k})=\frac{\mathbf{k}^{2}}{2m}-\mu \pm \eta(\mathbf{k}),$$ with $$\eta(\mathbf{k})=\sqrt{\left( \alpha k_{x}-\sqrt{\frac{g}{3}} \Phi_{y} \right)^{2}+\left( \alpha k_{y}+\sqrt{\frac{g}{3}} \Phi_{x} \right)^{2}+\frac{g}{3}\Phi_{z}^{2}}.$$ The mean-field theory is determined by the “equation of motion” for $\Phib$, i.e., $$\Phi_j=2\sqrt{\frac{g}{3}}\langle S_j\rangle.$$ Thus, using the above result in the Heisenberg equations of motion for the electronic spin, the following equations are derived, $$\partial_t\langle S_x\rangle=2\alpha k_x\langle S_x\rangle,$$ $$\partial_t\langle S_y\rangle=2\alpha k_y\langle S_y\rangle,$$ $$\partial_t\langle S_z\rangle=2\alpha {\bf k}\cdot\langle {\bf S}\rangle.$$ Therefore, in order to have $\Phib$ also uniform in time, we need $\Phi_x=\Phi_y=0$, such that only $\Phi_z=\phi$ is nonzero. Under this assumption, the free energy density is minimized to obtain the gap equation, $$\label{gap_equation} \phi=\frac{g \phi}{6 \pi} \int_{0}^{\infty} \frac{f_{-}(k)-f_{+}(k)}{\epsilon(k)}kdk,$$ where $f_{\pm}(k)=1/[e^{\beta (k^{2}/2m-\mu \pm \epsilon(k))}+1]$ and $\epsilon(k)=\sqrt{\alpha^{2} k^{2}+(g/3)\phi^{2}}$. The charge density can also be easily computed from the free energy and leads to, $$\label{density_equation} n=\frac{1}{2 \pi} \int_{0}^{\infty} \left(f_{-}(k)+f_{+}(k) \right)kdk.$$ In the next section the zero-temperature solutions of the saddle-point equations (Eq. \[gap\_equation\] and Eq. \[density\_equation\]) will be derived analytically and the corresponding phase transitions will be discussed.
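As a numerical sanity check (with hypothetical parameter values $m=\alpha=1$, $\mu=0.5$ and $M\equiv\sqrt{g/3}\,\phi=1$ in units $\hbar=v_F=1$; $M$ is treated here as a free parameter, not the self-consistent solution), the sketch below evaluates the $T\to 0$ limit of the integral in Eq. (\[gap\_equation\]), where $f_{\mp}\to\Theta(\mu-k^{2}/2m\pm\epsilon(k))$, and compares $\alpha^{2}\int_{0}^{\infty}[f_{-}(k)-f_{+}(k)]\,k\,dk/\epsilon(k)$ with the closed form $m\alpha^{2}-|M|+\sqrt{m^{2}\alpha^{4}+2m\alpha^{2}\mu+M^{2}}$ obtained in the next section for $\mu\leq|M|$.

```python
import math

# Hypothetical parameter values (units hbar = v_F = 1); M is a free
# parameter here, not the self-consistent solution of the gap equation.
m, alpha, mu, M = 1.0, 1.0, 0.5, 1.0   # mu <= |M|: insulating branch

def eps(k):
    # epsilon(k) with M = sqrt(g/3) * phi
    return math.sqrt(alpha**2 * k**2 + M**2)

def integrand(k):
    # T -> 0 limit of f_-(k) - f_+(k): occupation steps of the two bands
    f_minus = 1.0 if mu - k**2 / (2*m) + eps(k) > 0 else 0.0
    f_plus = 1.0 if mu - k**2 / (2*m) - eps(k) > 0 else 0.0
    return (f_minus - f_plus) * k / eps(k)

# Trapezoidal rule; the step function cuts the integrand off at finite k
# (here near k ~ 2.54), so a finite k_cut suffices.
N, k_cut = 100_000, 6.0
h = k_cut / N
integral = h * (0.5 * integrand(0.0)
                + sum(integrand(i * h) for i in range(1, N))
                + 0.5 * integrand(k_cut))

Delta = math.sqrt(m**2 * alpha**4 + 2*m * alpha**2 * mu + M**2)
closed_form = m * alpha**2 - abs(M) + Delta   # mu <= |M| branch
numeric = alpha**2 * integral
print(numeric, closed_form)
```

For these parameters both sides equal $\sqrt{3}$, and the agreement is limited only by the discretization of the integral.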
First-order phase transition and quantum critical point ======================================================= At zero temperature a nonzero value of the gap, corresponding to a ferromagnetic solution, satisfies, $$\begin{aligned} \label{mean-field(T=0)} \frac{6 \pi}{g}=\sum_{\sigma= \pm 1} \sigma \int_{0}^{\infty} \frac{kdk}{\epsilon(k)} \Theta \left(\mu-\frac{k^{2}}{2m}+\sigma \epsilon(k) \right),\end{aligned}$$ where $\Theta(x)$ is the Heaviside function. The charge density can also be evaluated in the same way, $$\begin{aligned} \label{mean-field(T=0)2} n &=& \sum_{\sigma= \pm 1} \int_{0}^{\infty} \frac{kdk}{2 \pi} \Theta \left(\mu-\frac{k^{2}}{2m}+\sigma \epsilon(k) \right).\end{aligned}$$ Denoting $M=\phi \sqrt{\frac{g}{3}}$ and performing the integrals above, one may obtain for the gap equation, $$\label{gap_T=0} \frac{6 \pi\alpha^2}{g}= \begin{cases} m\alpha^2-|M|+\sqrt{m^2\alpha^4+2m\alpha^2\mu+|M|^2} & (\mu \leq |M|) \\ 2m\alpha^2 & (\mu \geq |M|) \end{cases}$$ and for the charge density, $$\label{density_T=0} \frac{2 \pi n}{m} = \begin{cases} m\alpha^2+\mu+\sqrt{m^2\alpha^4+2m\alpha^2\mu+|M|^2} & (\mu \leq |M|) \\ 2(\mu+m\alpha^2) & (\mu \geq |M|). \end{cases}$$ Using Eq. (\[density\_T=0\]) one can write the chemical potential as a function of the charge density in the insulating phase $(\mu \leq |M|)$ and in the metallic one $(\mu \geq |M|)$. At first it may seem that there will be more than one solution in each case but since the chemical potential and the density should be positive and respect the corresponding inequality of each phase, given by the Heaviside functions, it is possible to find a unique solution and comparison with Eq. 
(\[gap\_T=0\]) leads to, $$\label{g_T=0} \frac{6 \pi\alpha^2}{g}=\begin{cases}2m \alpha^{2} & \left( \frac{\pi n}{m}-m \alpha^{2}-|M| \geq 0 \right) \\ \sqrt{4 \pi n \alpha^{2}+M^{2}}-|M| & \left(|M|-\frac{\pi n}{m} +m \alpha^{2} \geq 0 \right), \end{cases}$$ where $\mu=\frac{\pi n}{m}-m \alpha^{2} \geq |M| \geq 0$ in the metallic phase and $|M|-\frac{\pi n}{m} +m \alpha^{2} \geq 0$ in the insulating one $\left(\mu= \frac{2 \pi n}{m}-\sqrt{4 \pi n \alpha^{2}+M^{2}} \leq |M| \right)$. A quick look at Eq. (\[g\_T=0\]) shows an apparent independence of the gap with respect to the coupling constant in the metallic phase, which does not seem reasonable. To understand this better, one can write $\phi$ as a function of $g$ in the insulating phase and notice that $g=g_c=3 \pi/m$ only for $\phi=M=0$, which corresponds to the value of the coupling constant in the metallic phase. The conclusion is that $\phi=0$ for $g<g_c$ and the metallic phase coincides with the paramagnetic one. Thus, Eq. (\[g\_T=0\]) can be inverted, $$\label{M_T=0} |M|=\left( \frac{gn}{3}-\frac{3 \pi \alpha^{2}}{g} \right) \Theta \left(g-g_{c} \right).$$ The solution in Eq. (\[M\_T=0\]) is interesting because it yields for $\mu>0$ a first-order metal-insulator transition where the insulating phase is ferromagnetic. For $\mu=0$ it features a second-order phase transition and $g=g_c$ becomes a quantum critical point. The behavior of the gap with the coupling constant $g$ is shown in Fig. \[phi\_x\_g(T=0)\]. For $g=g_c$ and vanishing chemical potential $(\mu=M=0)$ the system is gapless; the critical spin-orbit coupling is then completely determined, as is easy to see from Eq. (\[density\_T=0\]) or from Eq. (\[M\_T=0\]), and is given by $\alpha^2=\alpha_{c}^{2}=\frac{\pi n}{m^{2}}$. For $g \geq g_{c}$ the magnetization decreases with $\alpha$ until it vanishes, as shown in Fig.
\[M\_x\_a\_T=0\] for $\frac{mg}{3 \pi}=1.0$; notice that the magnetization reaches zero exactly at $\alpha^2=\alpha_{c}^{2}=\frac{\pi n}{m^{2}}$. ![(Color online) Behavior of the gap \[Eq. (\[M\_T=0\])\] with the coupling constant $g$ for different values of the density. The black line corresponding to $\alpha^2=\alpha_c^2=\frac{\pi n}{m^{2}}$ leads to $\mu=0$ and shows a second-order phase transition where $g=g_c=\frac{3 \pi}{m}$ is a quantum critical point.[]{data-label="phi_x_g(T=0)"}](phi_x_g_T=0){width="50.00000%"} ![(Color online) Behavior of the gap \[Eq. (\[M\_T=0\])\] with the spin-orbit coupling constant $\alpha$ for different values of the density and $\frac{mg}{3 \pi}=1.0$.[]{data-label="M_x_a_T=0"}](M_x_a_T=0){width="50.00000%"} A limit case of interest is the one where the SOI is very strong, being formally given by the infinite mass limit ($m\to\infty$). In this regime the system is TR invariant only in the paramagnetic phase $(\phi=0)$ and spontaneous symmetry breaking allows an additional ferromagnetic metallic phase. Furthermore, as will now be shown, this case exhibits a behavior quite distinct from the one given by Eq. (\[gap\_T=0\]), as it always features a quantum critical point. The reason for these differences lies in the fact that the limit $m\to\infty$ is singular. Indeed, in order to evaluate the integrals in Eq. (\[mean-field(T=0)\]) and Eq. (\[mean-field(T=0)2\]) for $m\to\infty$ we have to introduce a large momentum cutoff ($\Lambda$), $$\begin{aligned} \label{gap_iso(T=0)} \frac{6 \pi\alpha^{2}}{g}=\int_{|M|}^{\sqrt{\alpha^{2} \Lambda^{2}+M^{2}}} [\Theta(\mu+\epsilon)-\Theta(\mu-\epsilon)]d \epsilon, \\ \label{n_iso(T=0)} 2 \pi n \alpha^{2}= \int_{|M|}^{\sqrt{\alpha^{2} \Lambda^{2}+M^{2}}} [\Theta(\mu+\epsilon)+\Theta(\mu-\epsilon)] \epsilon d \epsilon.\end{aligned}$$ The metallic and insulating regimes will be analyzed separately, since they describe distinct critical behavior. The insulating regime will be studied first.
Assuming $|M|\ll \alpha\Lambda$, Eq. (\[gap\_iso(T=0)\]) and Eq. (\[n\_iso(T=0)\]) reduce to, $$\begin{aligned} \frac{6\pi\alpha^2}{g}=\alpha\Lambda-|M|, \\ n=\frac{\Lambda^{2}}{4 \pi},\end{aligned}$$ which yields, $$\begin{aligned} |M|=6\pi\alpha^2\left(\frac{1}{g_{ci}}-\frac{1}{g}\right), \\ n=\frac{9 \pi \alpha^{2}}{g_{ci}^{2}}, \label{n_TI1}\end{aligned}$$ where $g_{ci}=6\pi\alpha/\Lambda$ is the critical point for the insulating regime. Since $|M|\geq 0$, we have that the gap vanishes for $g\leq g_{ci}$. On the other hand, for the ordered metallic regime Eq. (\[gap\_iso(T=0)\]) and Eq. (\[n\_iso(T=0)\]) become, $$\begin{aligned} \frac{6\pi\alpha^2}{g}=\sqrt{\alpha^{2} \Lambda^{2}+M^{2}}-\mu, \\ n=\frac{1}{4 \pi \alpha^{2}} \left[\alpha^{2} \Lambda^{2} + \mu^{2}-M^{2}\right],\end{aligned}$$ and the quantum critical point is now located at, $$\label{Eq:gc-m_large} g_{cm}=\frac{6\pi\alpha^2}{\alpha\Lambda-\mu},$$ such that for $g\geq g_{cm}$, $$\begin{aligned} |M|=\sqrt{\left(\frac{6\pi\alpha^2}{g}+\mu \right)^{2}-\left(\frac{6\pi\alpha^2}{g_{cm}}+\mu \right)^{2}}, \\ n=\frac{1}{4 \pi \alpha^{2}} \left[ \left( \frac{6\pi\alpha^2}{g_{cm}}+\mu \right)^{2}+\mu^{2}-M^{2} \right]. \label{n_TI2}\end{aligned}$$ Note that for $M=0$ and $\mu \rightarrow 0^{+}$, the critical couplings in each regime become the same ($g_{cm}=g_{ci}$) and Eq. (\[n\_TI2\]) reduces to Eq. (\[n\_TI1\]). The chemical potential does not need to be small as compared to $\alpha\Lambda$ in the metallic regime, but Eq. (\[Eq:gc-m\_large\]) requires $\mu<\alpha\Lambda$, in order to keep $g_{cm}$ positive. Thus, the ferromagnetic order parameter in the metallic case exhibits a different power law from the ferromagnetic insulating one. This jump in the critical behavior of the order parameter replaces the jump in the order parameter found in the finite $m$ case, reflecting the Fermi surface singularity of the metallic phase. 
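The reduction of Eq. (\[n\_TI2\]) to Eq. (\[n\_TI1\]) as $\mu\to 0^{+}$ with $M=0$ can also be checked numerically; the sketch below uses hypothetical values $\alpha=1$ and $\Lambda=10$ in the paper's units.

```python
import math

alpha, Lam = 1.0, 10.0                 # hypothetical values (paper units)
g_ci = 6 * math.pi * alpha / Lam       # critical coupling, insulating regime

def n_metallic(mu, M=0.0):
    # Eq. (n_TI2): charge density in the ordered metallic regime,
    # with g evaluated at the critical coupling g_cm of Eq. (gc-m_large)
    g_cm = 6 * math.pi * alpha**2 / (alpha * Lam - mu)
    return ((6 * math.pi * alpha**2 / g_cm + mu)**2
            + mu**2 - M**2) / (4 * math.pi * alpha**2)

n_insulating = 9 * math.pi * alpha**2 / g_ci**2   # Eq. (n_TI1) = Lambda^2/(4 pi)

print(n_metallic(1e-6), n_insulating)
```

The two densities agree up to corrections of order $\mu^{2}$, consistent with the statement that $g_{cm}\to g_{ci}$ and Eq. (\[n\_TI2\]) $\to$ Eq. (\[n\_TI1\]) in this limit.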
Topological charge and Hall conductivity ======================================== From Eq. (\[Eq:L\]) and Eq. (\[Eq:H\]) it is possible to write a mean-field effective Hamiltonian in the form, $$\label{H_mean-field} H=\epsilon(k) I+\mathbf{d}\cdot \sigmab$$ where $\epsilon(k)=\frac{\mathbf{k}^{2}}{2m}-\mu+\frac{\phi^{2}}{2}$ and $\mathbf{d}=(\alpha k_{y},- \alpha k_{x},M)$, leading to the following energy spectrum: $$\begin{aligned} \label{Energy_Spectrum} E_{\pm}(\mathbf{k})=\frac{\mathbf{k}^{2}}{2m}-\mu \pm \sqrt{\alpha^{2} \mathbf{k}^{2}+M^{2}}.\end{aligned}$$ For a Hamiltonian such as the one given by Eq. (\[H\_mean-field\]) the Kubo formula yields a Hall conductivity $\sigma_{xy}$ [@Qi(Kubo_formula)], $$\label{Hall} \sigma_{xy}(T,M)=\frac{e^{2}}{2h} \int d^{2} \mathbf{k} Q_{xy}(\mathbf{k}) \left[n_{F}(-|\mathbf{d}|)-n_{F}(|\mathbf{d}|) \right]$$ where $n_{F}(x)=1/[1+e^{\beta(\mathbf{k}^{2}/2m-\mu+x)}]$, and $Q_{xy}(\mathbf{k})$ is the topological charge given by, $$Q_{xy}(\mathbf{k})=\frac{1}{2 \pi} \hat{d}(\mathbf{k})\cdot [\partial_{k_{x}} \hat{d}(\mathbf{k}) \times \partial_{k_{y}} \hat{d}(\mathbf{k})]$$ with $\hat{d} \equiv \mathbf{d}/|\mathbf{d}|$. Thus, we obtain, $$\label{Top_charge} Q_{xy}(\mathbf{k})=\frac{M \alpha^{2} }{2 \pi (\alpha^{2} \mathbf{k}^{2}+M^{2})^{3/2} }.$$ At zero temperature Eq. (\[Hall\]), with the topological charge of Eq. (\[Top\_charge\]), can be solved analytically, leading to, $$\begin{aligned} \label{Hall(T=0)} \frac{2h}{e^2}\sigma_{xy}(M,T=0)&=&M\left[ \frac{\Theta \left( 1-|M|/\mu \right)}{\Delta-\alpha^{2}m}- \frac{1}{\Delta+\alpha^{2}m} \right] \nonumber\\ &+&{\rm sgn}(M) \Theta \left(|M|/\mu-1 \right).\end{aligned}$$ where $\Delta=\sqrt{m^2\alpha^4+2m\alpha^2\mu+|M|^2}$. Using Eq. (\[M\_T=0\]) it is easy to see that the Hall conductivity in Eq.
(\[Hall(T=0)\]) is different from zero only in the ferromagnetic insulating phase and actually reduces to, $$\label{Conductivity} \frac{2h}{e^2}\sigma_{xy}(M,T=0)=\left( {\rm sgn}(M)-\frac{M}{\Delta+\alpha^{2}m} \right) \Theta (g-g_{c}),$$ which is plotted in Fig. \[c\_x\_M(T=0)\], Fig. \[c\_x\_n(T=0)\] and Fig. \[c\_x\_a(T=0)\]. The limit $m\to\infty$ is also straightforward to obtain, $$\frac{2h}{e^2}\sigma_{xy}(M,T=0)|_{m\to\infty}=\frac{M}{\mu}+\left( \frac{M}{|M|}-\frac{M}{\mu} \right) \theta(|M|-\mu)$$ and leads to a broken-symmetry metallic solution; see Fig. \[c(T=0,m=inf)\]. ![(Color online) Hall conductivity as a function of the coupling constant for $\alpha=1$ and different values of $\frac{\pi n}{m^{2} \alpha^{2}}$ at zero temperature. []{data-label="c_x_M(T=0)"}](c_x_g_T=0.pdf){width="47.00000%"} ![(Color online) Hall conductivity as a function of $\frac{\pi n}{m^{2}}$ for $\alpha=1$ and different values of the coupling constant at zero temperature. []{data-label="c_x_n(T=0)"}](c_x_n_T=0.pdf){width="47.00000%"} ![(Color online) Hall conductivity as a function of the spin-orbit coupling $\alpha$ for $\frac{mg}{3 \pi}=1$ and different values of $\frac{\pi n}{m^{2}}$ at zero temperature. []{data-label="c_x_a(T=0)"}](c_x_a_T=0.pdf){width="47.00000%"} ![(Color online) Hall conductivity as a function of the ratio between the gap and the chemical potential in the $m \rightarrow \infty$ limit at zero temperature.[]{data-label="c(T=0,m=inf)"}](c_T=0,m=inf.pdf){width="49.00000%"} In the massive case the Hall conductivity remains finite only for $g>g_c$, since there is no ferromagnetic metallic phase; although the gap increases with the coupling constant, Eq. (\[Conductivity\]) ensures that the conductivity grows only up to a certain maximum and then decreases, with the asymptotic behavior $g \rightarrow \infty \Rightarrow |M| \rightarrow \infty \Rightarrow \frac{2h \sigma_{xy}}{e^2} \rightarrow 0$, as shown in Fig. \[c\_x\_M(T=0)\].
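As an independent check of Eq. (\[Hall(T=0)\]), the sketch below (hypothetical values $m=\alpha=1$ in units $\hbar=v_F=1$, with $M$ and $\mu$ treated as free parameters rather than the self-consistent mean-field solution) performs the zero-temperature momentum integral of Eq. (\[Hall\]) with the topological charge of Eq. (\[Top\_charge\]) and compares it with the closed form, in both the insulating ($\mu<|M|$) and metallic ($\mu>|M|$) regimes.

```python
import math

m, alpha = 1.0, 1.0   # hypothetical values (units hbar = v_F = 1)

def sigma_numeric(mu, M, N=100_000, k_cut=10.0):
    # (2h/e^2) sigma_xy at T = 0: radial integral of 2*pi*k*Q_xy(k) times
    # the difference of the zero-temperature band occupations in Eq. (Hall)
    h = k_cut / N
    total = 0.0
    for i in range(N + 1):
        k = i * h
        eps = math.sqrt(alpha**2 * k**2 + M**2)
        occ = (1.0 if mu + eps - k**2 / (2*m) > 0 else 0.0) \
            - (1.0 if mu - eps - k**2 / (2*m) > 0 else 0.0)
        weight = 0.5 if i in (0, N) else 1.0   # trapezoidal rule
        total += weight * M * alpha**2 * k / eps**3 * occ
    return h * total

def sigma_closed(mu, M):
    # Eq. (Hall(T=0)); mu > 0 assumed
    Delta = math.sqrt(m**2 * alpha**4 + 2*m * alpha**2 * mu + M**2)
    insulating = 1.0 if abs(M) > mu else 0.0
    metallic = 1.0 - insulating
    return M * (metallic / (Delta - alpha**2 * m)
                - 1.0 / (Delta + alpha**2 * m)) \
        + math.copysign(1.0, M) * insulating

print(sigma_numeric(0.5, 1.0), sigma_closed(0.5, 1.0))  # insulating: mu < |M|
print(sigma_numeric(2.0, 1.0), sigma_closed(2.0, 1.0))  # metallic:   mu > |M|
```

In the metallic example the closed form simplifies to $2Mm\alpha^{2}/(\Delta^{2}-m^{2}\alpha^{4})$, which equals $2/5$ for the chosen parameters.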
For $\alpha^{2}=\alpha_{c}^{2}=\frac{\pi n}{m^{2}}$ and $g=g_{c}=\frac{3 \pi}{m}$ the magnetization vanishes (and consequently the Hall conductivity as well), as discussed before and as one can see in Fig. \[c\_x\_n(T=0)\] and Fig. \[c\_x\_a(T=0)\]. In the $m \rightarrow \infty$ limit spontaneous TR symmetry breaking is responsible for the appearance of a ferromagnetic metallic solution, in which the Hall conductivity varies linearly with the ratio $|M|/ \mu$; in the insulating phase the Hall conductivity remains quantized and assumes the values $\sigma_{xy} =\pm e^{2}/2h$. Although the effective Hamiltonian in Eq. (\[H\_mean-field\]) is quite general, the particular choice for $\mathbf{d}$ in this paper is related to a continuum model instead of some periodic lattice. For a real lattice insulating model the gap is opened in such a way that the energy minimum of one band is larger than the maximum of the other band for any momentum in the first Brillouin zone; this ensures that $n_{F}(-|\mathbf{d}|) \rightarrow 1$ and $n_{F}(|\mathbf{d}|) \rightarrow 0$ in the zero-temperature limit, and Eq. (\[Hall\]) leads to a quantized Hall conductivity independent of the details of the lattice. The model presented in this paper may be viewed as an effective theory obtained after an expansion around the $\Gamma$ point, for example, for a block related to a quantum well Hamiltonian [@Quantum_Well], and features different results from the lattice. First, the unbounded kinetic term $\frac{\mathbf{k}^{2}}{2m}$ ensures that the energy minimum of one band is not always larger than the energy of the other for all momenta $(\mathbf{k})$. Thus, the band gap is not fully opened, unlike the case of a lattice insulating model, and the proper zero-temperature limits of the Fermi distributions, necessary for the system to feature a quantization of the Hall conductivity, are not achieved.
Second, in the $m \rightarrow \infty$ limit the band is fully gapped and the quantization of the Hall conductivity is achieved in the insulating phase, but leads to half-integer values. This is also expected since the two-band Hamiltonian here is just one block of a whole quantum well Hamiltonian; thus the number of edge states is reduced, and consequently so is the total winding number related to the quantization of the Hall conductivity, as discussed in [@Qi(Kubo_formula)]. On the other hand, effective theories such as the one presented here may be useful to describe the two-dimensional surface of a three-dimensional topological insulator in which TR symmetry is broken due to the proximity effect to some ferromagnetic material [@Nogueira_Hall]; notice that the system in this paper in the $m \rightarrow \infty$ limit is TR invariant only in the paramagnetic phase $(\phi=0)$. In this case each surface of the topological insulator would give a half-integer contribution to the Hall conductivity, such as the one obtained here. Conclusion ========== We investigated a Hubbard-Rashba model with short-range Coulomb interaction. The Coulomb interaction is decoupled by a saddle-point approximation in the effective action, obtained after a Hubbard-Stratonovich transformation in the spin channel and integration of the fermionic fields. Under this approximation the gap equation and the charge density are solved analytically and the Hall conductivity is computed. The limit of infinite mass, which is a way to realize strong spin-orbit coupling (a common situation for those interested in topological properties), is also considered. For finite mass our system always features a first-order metal-insulator phase transition, except in the absence of chemical potential, in which case quantum criticality occurs.
In the limit of infinite mass the ferromagnetic phase is split into a ferromagnetic insulating phase, which also appears for a finite mass, and a metallic ferromagnetic one (absent for finite mass) due to spontaneous TR symmetry breaking. The Hall conductivity is quantized only in the insulating phase in the infinite mass limit. In the metallic region the Hall conductivity remains zero for a finite mass, since the whole metallic phase is paramagnetic, but varies linearly with the ratio $M/\mu$ for $m \to \infty$. Our results are quite different from the ones obtained with standard models of magnetism such as the Hubbard, Stoner, Heisenberg and Kane-Mele ones, but a first-order metal-insulator transition featuring an important dependence on the density has already been found on the lattice by Mott [@Mott]. The next step is to investigate the phase transitions of this system using other techniques, such as the renormalization group, and to carry out the same analysis for a cubic Rashba SOI, since recent experiments [@Cubic-Rashba] allow for a better investigation of the effects of the cubic Rashba coupling in Ge/SiGe compounds. Acknowledgement {#acknowledgement .unnumbered} =============== The author thanks Ilya Eremin and Flavio Nogueira from RUB (Ruhr-Universität Bochum) for very helpful and interesting discussions about this work and general topics on Condensed Matter Physics. The author also acknowledges support from the Brazilian agency CAPES (CSF 11763/13-2). [99]{} G. Dresselhaus, Phys. Rev. [**100**]{}, 580 (1955). E. I. Rashba, Sov. Phys. Solid State [**2**]{}, 1109 (1960). R. Winkler, [*Spin-orbit coupling effects in two-dimensional electron and hole systems*]{} (Springer, Berlin, 2003). T. Koga, J. Nitta and M. van Veenhuizen, Phys. Rev. B [**70**]{}, 161302 (2004). T. Koga, Y. Sekine and J. Nitta, Phys. Rev. B [**74**]{}, 041302 (2006). B. A. Bernevig, J. Orenstein and S.-C. Zhang, Phys. Rev. Lett. [**97**]{}, 236601 (2006). J. D.
Koralek, C. P. Weber, J. Orenstein, B. A. Bernevig, S.-C. Zhang, S. Mack and D. D. Awschalom, Nature [**458**]{}, 610 (2009). J. E. Hirsch, Phys. Rev. Lett. [**83**]{}, 1834 (1999). S. Murakami, N. Nagaosa and S.-C. Zhang, Science [**301**]{}, 1348 (2003). Y. K. Kato, R. C. Myers, A. C. Gossard and D. D. Awschalom, Science [**306**]{}, 1910 (2004). C. L. Kane and E. J. Mele, Phys. Rev. Lett. [**95**]{}, 146802 (2005). C. L. Kane, E. J. Mele, Phys. Rev. Lett. [**95**]{}, 226801 (2005). B. A. Bernevig and S.-C. Zhang, Phys. Rev. Lett. [**96**]{}, 106802 (2006). K. V. Klitzing, G. Dorda and M. Pepper, Phys. Rev. Lett. [**45**]{}, 494-497 (1980). D. J. Thouless, M. Kohmoto, M. P. Nightingale and M. den Nijs, Phys. Rev. Lett. [**49**]{}, 405-408 (1982). F. D. M. Haldane, Phys. Rev. Lett. [**61**]{}, 2015-2018 (1988). C.-Z. Chang, J. Zhang, X. Feng, J. Shen, Z. Zhang, M. Guo, K. Li, Y. Ou, P. Wei, L. -L. Wang, Z. -Q. Ji, Y. Feng, S. Ji, X. Chen, J. Jia, X. Dai, Z. Fang, S. -C. Zhang, K. He, Y. Wang, L. Lu, X.-C. Ma, Q.-K. Xue, Science [**340**]{}, 167-170 (2013). M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. [**82**]{}, 3045 (2010). X.-L. Qi and S.-C. Zhang, Rev. Mod. Phys. [**83**]{}, 1057 (2011). X.-L. Qi, Y.-S. Wu, and S.-C. Zhang, Phys. Rev. B [**74**]{}, 085308 (2006). S. Rachel and K. Le Hur, Phys. Rev. B [**82**]{}, 075106 (2010). M. Laubach, J. Reuther, R. Thomale, and S. Rachel, Phys. Rev. B [**90**]{}, 165136 (2014). M. Hohenadler, T. C. Lang, and F. F. Assaad, Phys. Rev. Lett. [**106**]{}, 100403 (2011). S. Sorella, Y. Otsuka, and S. Yunoki, Sci. Rep. [**2**]{}, 992 (2012). F. F. Assaad and I. F. Herbut, Phys. Rev. X [**3**]{}, 031010 (2013). B. A. Bernevig, T. L. Hughes and S. C. Zhang, Science [**314**]{} 1757 (2006). Y. Tada, R. Peters, M. Oshikawa, A. Koga, N. Kawakami and S. Fujimoto, Phys. Rev. B [**85**]{} 165138 (2012). T. Yoshida, R. Peters, S. Fujimoto and N. Kawakami, Phys. Rev. B [**87**]{} 085134 (2013). S. Miyakoshi and Y. Ohta, Phys. Rev.
B [**87**]{}, 195133 (2013). J. C. Budich, B. Trauzettel and G. Sangiovanni, Phys. Rev. B [**87**]{}, 235104 (2013). N. F. Mott, [*Metal-Insulator Transitions*]{}, 2nd edition (Taylor & Francis, London, New York, Philadelphia, 1990). M. König, S. Wiedmann, C. Brüne, A. Roth, H. Buhmann, L. W. Molenkamp, X.-L. Qi and S.-C. Zhang, Science [**318**]{}, 766 (2007). F. S. Nogueira and I. Eremin, Phys. Rev. B [**90**]{}, 014431 (2014). R. Moriya, K. Sawano, Y. Hoshi, S. Masubuchi, Y. Shiraki, A. Wild, C. Neumann, G. Abstreiter, D. Bougeard, T. Koga and T. Machida, Phys. Rev. Lett. [**113**]{}, 086601 (2014).
--- abstract: 'In [@kiani2015unitary], the authors claim that the unitary Cayley graph $Cay(M_{n}(F),GL_{n}(F))$ of a matrix algebra over a finite field $F$ is strongly regular only when $n=2$. However, they verify only two special cases, namely $n = 2$ and $n = 3$, and do not cover the general case (i.e., $n \neq 2$ and $n \neq 3$). In this paper, we prove that the unitary Cayley graph of a matrix algebra over a finite field $F$ is strongly regular iff $n=2$.' address: 'School of Mathematics and Computational Science, Xiangtan University, Xiangtan, Hunan, 411105, PR China' author: - Yihan Chen - Bicheng Zhang bibliography: - 'A\_note\_on\_unitary\_Cayley\_graphs\_of\_matrix\_algebras.bib' nocite: - '[@morrison2006integer]' - '[@kiani2015unitary]' title: A note on unitary Cayley graphs of matrix algebras ---

Strongly regular graph; Unitary Cayley graph; Finite field; Matrix algebra

Introduction
============

In graph theory, it is of great significance to study the construction and characterization of strongly regular graphs (SRGs). The abstract of [@kiani2015unitary] states that $n=2$ is a necessary and sufficient condition, but in fact only the sufficiency is proved there (see Theorem 2.3 in [@kiani2015unitary]). So here, we prove that $Cay(M_{n}(F),GL_{n}(F))$ is an SRG iff $n=2$. Let $F$ be a finite field, $M_{n}(F)$ the matrix algebra over $F$, and $GL_{n}(F)$ the general linear group.

(Unitary Cayley graph) We denote by $G_{M_{n}(F)}=Cay(M_{n}(F),GL_{n}(F))$ the unitary Cayley graph of $M_{n}(F)$, which is the graph with vertex set $M_{n}(F)$ and edge set $\{\{A,B\} \mid A-B \in GL_{n}(F)\}$.

(Strongly regular graph) A graph $G$ of order $n$ is called a strongly regular graph with parameters $(n,k,\lambda,\mu)$ if:

- Every vertex is adjacent to exactly $k$ vertices.

- For any two adjacent vertices $x,y$, there are exactly $\lambda$ vertices adjacent to both $x$ and $y$.
- For any two non-adjacent vertices $x,y$, there are exactly $\mu$ vertices adjacent to both $x$ and $y$.

(Linear derangement) We call $A \in M_{n}(F)$ a linear derangement if $A \in GL_{n}(F)$ and $A$ does not fix any non-zero vector; in other words, $0$ and $1$ are not eigenvalues of $A$. We denote by $e_{n}$ the number of linear derangements in $M_{n}(F)$, with the convention $e_{0}=1$.

(See [@morrison2006integer].) Let $F$ be a finite field of order $q$; then $e_{n}=e_{n-1}(q^n-1)q^{n-1}+(-1)^nq^{\frac{n(n-1)}{2}}.$

Preliminaries
=============

(See [@kiani2015unitary].) Let $n$ be a positive integer and $F$ a finite field. Then $G=Cay(M_{n}(F),GL_{n}(F))$ is $|GL_{n}(F)|$-regular, and for any two adjacent vertices $x,y$ there exist exactly $e_{n}$ vertices adjacent to both of them.

Let $A=(a_{1}, a_{2},\dots, a_{n})$ be an $n\times n$ matrix, where the $a_{i}$ are $n$-dimensional column vectors, $F$ a field with $q$ elements, and $e_{1}=(1,0,\dots,0)^T$. Then $A \in GL_{n}(F)$ and $A+diag\{1,0,\dots,0\} \notin GL_{n}(F)$ iff ${a_1} = \sum\limits_{i = 2}^n {{k_i}} {a_i} - {e_1}$, where $k_{i} \in F$ and $det({e_{1}},{a_{2}},\dots,{a_{n}}) \neq 0$.

First we prove the necessity: if $A \in GL_{n}(F)$ and $A+diag\{1,0,\dots,0\} \notin GL_{n}(F)$, then ${a_1} = \sum\limits_{i = 2}^n {{k_i}} {a_i} - {e_1}$ with $k_{i} \in F$ and $det({e_{1}},{a_{2}},\dots,{a_{n}}) \neq 0$. Let $E_{ij}$ be the matrix whose $(i,j)$ entry equals $1$ and whose other entries equal $0$. Assume $det({{e}_{1}},{{a}_{2}},\ldots ,{{a}_{n}})=0$; since $A \in GL_n(F)$, the columns $a_{2},\dots,a_{n}$ are linearly independent, so $e_{1}$ is then a linear combination of $a_{2},\dots,a_{n}$. Since $A+E_{11}=({a_1}+{e_1},{a_2},\ldots,{a_n})$ is singular, $0 = det({a_1} + {e_1},{a_2}, \ldots ,{a_n}) = det({a_1},{a_2}, \ldots ,{a_n})$, contradicting $A \in GL_n(F)$. Therefore $det({{e}_{1}},{{a}_{2}},\ldots ,{{a}_{n}})\ne 0$. Moreover, since $A+E_{11}$ is singular while $a_{2},\dots,a_{n}$ are linearly independent, $a_{1}+e_{1}$ is a linear combination of $a_{2},\dots,a_{n}$, i.e., ${a_1} = \sum\limits_{i = 2}^n {{k_i}} {a_i} - {e_1}$, $k_{i} \in F$. This proves the necessity of the proposition.
Now we prove the sufficiency. By assumption, $A+E_{11}=({a_1}+{e_1},{a_2},\dots,{a_n}) \notin GL_{n}(F)$ and $a_{2},\dots,a_{n}$ are linearly independent. Assume $det(A)=0$; then $a_{1}$ is a linear combination of $a_{2},\dots,a_{n}$, so $det({e_1},{a_2}, \ldots ,{a_n}) = det({a_1} + {e_1},{a_2}, \ldots ,{a_n}) = 0$, a contradiction. Hence $det(A)\ne 0$, i.e., $A\in G{{L}_{n}}(F).$

Results
=======

Let $F$ be a finite field of order $q$. We have $N: = |({E_{11}} + G{L_n}(F)) \cap G{L_n}(F)| = ({q^n} - {q^{n - 1}} - 1)\prod\limits_{k = 1}^{n - 1} {({q^n} - {q^k})} $.

Let $N_{1}$ be the number of matrices $A$ with $A \in GL_{n}(F)$ and $E_{11}+A \notin GL_{n}(F)$, and $N$ the number of matrices $A$ with $A \in GL_{n}(F)$ and ${{E}_{11}}+A\in G{{L}_{n}}(F)$. Let $N_{2}$ be the number of tuples $(a_{2},\dots,a_{n})$ such that $det(e_{1},a_{2},\dots,a_{n}) \neq 0$, and $N_{3}$ the number of $F$-linear combinations of $n-1$ linearly independent vectors. Clearly ${N_3} = {q^{n - 1}}$. For $N_{2}$: to construct such a tuple, for $2\leq k \leq n$ the $k$th column can be any vector in $F^n$ except for the $q^{k-1}$ linear combinations of the previous $k-1$ columns (counting $e_1$ as the first column), hence ${N_2} = ({q^n} - q)({q^n} - {q^2}) \cdots ({q^n} - {q^{n - 1}}) = \prod\limits_{k = 1}^{n - 1} {({q^n} - {q^k})} $. By Lemma 2.2, ${N_1} = {N_2}{N_3}$.
Hence $$\begin{aligned} N &= |{E_{11}} + G{L_n}(F)| - {N_1} \\ &= |G{L_n}(F)| - {N_1} \\ &= \prod\limits_{k = 1}^n {({q^n} - {q^{k - 1}})} - {q^{n - 1}}\prod\limits_{k = 1}^{n - 1} {({q^n} - {q^k})} \\ &= ({q^n} - 1)\prod\limits_{k = 2}^n {({q^n} - {q^{k - 1}})} - {q^{n - 1}}\prod\limits_{k = 1}^{n - 1} {({q^n} - {q^k})} \\ &= ({q^n} - 1)\prod\limits_{k = 1}^{n - 1} {({q^n} - {q^k})} - {q^{n - 1}}\prod\limits_{k = 1}^{n - 1} {({q^n} - {q^k})} \\ &= ({q^n} - {q^{n - 1}} - 1)\prod\limits_{k = 1}^{n - 1} {({q^n} - {q^k})}\end{aligned}$$

$$\begin{aligned} M: &= |(diag\{ 1,1,0, \ldots ,0\} + G{L_n}(F)) \cap G{L_n}(F)| \\ &= \{ {e_2}{q^{2n - 4}} + ({q^{n - 2}} - 1)({q^{n - 2}} - q) + [({q^2} - 1)({q^2} - q) - {e_2} - 1]{q^{n - 2}}({q^{n - 2}} - 1)\} \prod\limits_{k = 2}^{n - 1} {({q^n} - {q^k})} \\ &= ({q^{2n}} - {q^{2n - 1}} - {q^{2n - 2}} + {q^{2n - 3}} + {q^{n - 1}} - {q^{n + 1}} + q)\prod\limits_{k = 2}^{n - 1} {({q^n} - {q^k})} \end{aligned}$$

Let $D=diag\{1,1,0,\dots,0\}$; then $M$ is the number of matrices $A$ such that $A \in GL_{n}(F)$ and $A+D \in GL_{n}(F)$. For $A=(a_{ij}) \in GL_{n}(F)$ we have $A+D \in GL_{n}(F) \Leftrightarrow I+A^{-1}D \in GL_{n}(F)$, and since $A \mapsto A^{-1}$ is a bijection of $GL_{n}(F)$, $M = |\{ A = ({a_{ij}}) \in G{L_n}(F)|I + AD \in G{L_n}(F)\} |$. Since $AD$ has nonzero entries only in its first two columns, $I+AD$ is block lower triangular with diagonal blocks $\left( {\begin{array}{*{20}{c}} {{a_{11}} + 1}&{{a_{12}}}\\ {{a_{21}}}&{{a_{22}} + 1} \end{array}} \right)$ and $I_{n-2}$; hence $I + AD \in G{L_n}(F) \Leftrightarrow \left( {\begin{array}{*{20}{c}} {{a_{11}} + 1}&{{a_{12}}}\\ {{a_{21}}}&{{a_{22}} + 1} \end{array}} \right) \in G{L_2}(F)$. Let ${A_1} = \left( {\begin{array}{*{20}{c}} {{a_{11}}}&{{a_{12}}}\\ {{a_{21}}}&{{a_{22}}} \end{array}} \right)$; then $M = |\{ A \in G{L_n}(F)|I + {A_1} \in G{L_2}(F)\} |$. By Lemma 2.1, the number of matrices $A_{1}$ such that $A_{1} \in GL_{2}(F)$ and $A_{1}+I \in GL_{2}(F)$ is ${e_2} = {q^4} - 2{q^3} - {q^2} + 3q$. Let $M_{1}$ be the number of matrices $B = ({b_{ij}})$ such that $\left( {\begin{array}{*{20}{c}} {{b_{11}} + 1}&{{b_{12}}}\\ {{b_{21}}}&{{b_{22}} + 1} \end{array}} \right)\in GL_{2}(F)$.
To construct such a matrix, we can choose as the first column of $B$ any vector in $F^2$ except $(-1,0)^T$ (so that the first column of $B+I$ is nonzero); the second column of $B$ can then be any vector in $F^2$ except for the $q$ vectors that make the second column of $B+I$ a multiple of the first. Hence ${M_1} = ({q^2} - 1)({q^2} - q)$.

1. Take a matrix $A_{1}$ such that $A_{1} \in GL_{2}(F)$ and $A_{1}+I \in GL_{2}(F)$; the number of matrices in $GL_{n}(F)$ having $A_{1}$ as their leading principal $2 \times 2$ submatrix is ${q^{2n - 4}}\prod\limits_{k = 2}^{n - 1} {({q^n} - {q^k})} .$ So the number of matrices $A$ where *(1)* $A \in GL_{n}(F)$, *(2)* the leading principal $2 \times 2$ submatrix of $A$ is invertible, *(3)* $A+D \in GL_{n}(F)$, is ${e_2}{q^{2n - 4}}\prod\limits_{k = 2}^{n - 1} {({q^n} - {q^k})} .$

2. The number of matrices $A$ where *(1)* $A \in GL_{n}(F)$, *(2)* the leading principal $2 \times 2$ submatrix of $A$ is $\textbf{0}$, *(3)* $A+D \in GL_{n}(F)$, is $({q^{n - 2}} - 1)({q^{n - 2}} - q)\prod\limits_{k = 2}^{n - 1} {({q^n} - {q^k})} .$

3. The number of matrices $A$ where *(1)* $A \in GL_{n}(F)$, *(2)* the leading principal $2 \times 2$ submatrix of $A$ has rank $1$, *(3)* $A+D \in GL_{n}(F)$, is $({M_1} - {e_2} - 1){q^{n - 2}}({q^{n - 2}} - 1)\prod\limits_{k = 2}^{n - 1} {({q^n} - {q^k})} = [({q^2} - 1)({q^2} - q) - {e_2} - 1]{q^{n - 2}}({q^{n - 2}} - 1)\prod\limits_{k = 2}^{n - 1} {({q^n} - {q^k})} $.

Hence $$\begin{aligned} M &= \{ {e_2}{q^{2n - 4}} + ({q^{n - 2}} - 1)({q^{n - 2}} - q) + [({q^2} - 1)({q^2} - q) - {e_2} - 1]{q^{n - 2}}({q^{n - 2}} - 1)\} \prod\limits_{k = 2}^{n - 1} {({q^n} - {q^k})} \\ &= ({q^{2n}} - {q^{2n - 1}} - {q^{2n - 2}} + {q^{2n - 3}} + {q^{n - 1}} - {q^{n + 1}} + q)\prod\limits_{k = 2}^{n - 1} {({q^n} - {q^k})} \end{aligned}$$

Let $A,B$ be two non-adjacent vertices of $G_{M_{n}(F)}$; then the number of paths of length $2$ between $A$ and $B$ is $$W: = \left| {\left( {\left( {\begin{array}{*{20}{c}} {{I_r}}&0\\ 0&0 \end{array}} \right) + G{L_n}(F)} \right) \cap G{L_n}(F)} \right|$$ where $r=rank(A-B)$.
Let $N(A)$ be the neighbourhood of $A$; then $W=|N(A)\cap N(B)|=|(A+G{{L}_{n}}(F))\cap (B+G{{L}_{n}}(F))|.$ Let $d=(A+GL_n(F)) \cap(B+GL_n(F))$ and $H=(A-B+G{{L}_{n}}(F))\cap G{{L}_{n}}(F)$, and consider the map $\begin{array}{l} \phi :d \to H\\ \;\;M \mapsto M - B \end{array}$ Clearly $\phi$ is injective, and for every $K\in H$ we have $K=A-B+X$ with $X\in GL_n(F)$, hence $K=\phi(A+X)$ with $A+X \in d$, so $\phi$ is surjective. Therefore $\phi$ is a bijection and $W=|H|$. There exist $P,Q\in G{{L}_{n}}(F)$ such that $P(A-B)Q= \begin{pmatrix} I_r & 0\\0& 0 \end{pmatrix}$ where $r=rank(A-B)$. Let $S = \left( {\left( {\begin{array}{*{20}{c}} {{I_r}}&0\\ 0&0 \end{array}} \right) + G{L_n}(F)} \right) \cap G{L_n}(F)$ and define the map $\begin{array}{l} \psi :H \to S\\ \;\;\;\;h\;\;\; \mapsto PhQ \end{array}$ Clearly $\psi$ is bijective. Hence $W = |S| = \left| {\left( {\left( {\begin{array}{*{20}{c}} {{I_r}}&0\\ 0&0 \end{array}} \right) + G{L_n}(F)} \right) \cap G{L_n}(F)} \right|.$

For $n>2$, $G_{M_{n}(F)}$ is not an SRG. Let $A={{E}_{11}}$ and $B=diag\{1,1,0,\cdots ,0\}$; then $A$ is not adjacent to $\textbf{0}$, and neither is $B$. $$\begin{aligned} |N(A) \cap N(O)| &= |({E_{11}} + G{L_n}(F)) \cap G{L_n}(F)| \\ &=({q^n} - {q^{n - 1}} - 1)\prod\limits_{k = 1}^{n - 1} {({q^n} - {q^k})} \\ &:= a\end{aligned}$$ $$\begin{aligned} |N(B) \cap N(O)| &= |(diag\{ 1,1,0,...,0\} + G{L_n}(F)) \cap G{L_n}(F)| \\ &=({q^{2n}} - {q^{2n - 1}} - {q^{2n - 2}} + {q^{2n - 3}} + {q^{n - 1}} - {q^{n + 1}} + q)\prod\limits_{k = 2}^{n - 1} {({q^n} - {q^k})} \\ &:= b\end{aligned}$$ For $n>2$ we have ${{q}^{2n-3}}+{{q}^{n-1}}-{{q}^{2n-2}}\ne 0$, so $a\neq b$. Since in a strongly regular graph any two non-adjacent vertices have exactly $\mu$ common neighbours, $G_{M_{n}(F)}$ is not an SRG.

For $n=1$, $G_{M_{n}(F)}$ is not an SRG. Indeed, $Cay(M_{1}(F),GL_{1}(F))$ is a complete graph, hence not an SRG.

By Theorem 2.3 in [@kiani2015unitary] together with Theorems 3.4 and 3.5, we obtain the following theorem. $Cay(M_{n}(F),GL_{n}(F))$ is an SRG iff $n=2$.
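The failure of strong regularity can also be confirmed by exhaustive enumeration in the smallest open case. The following script (an illustrative sketch of ours, not part of the paper) enumerates $M_{3}(\mathbb{F}_2)$ and checks that the common-neighbour counts of the two non-adjacent pairs above differ, matching the closed forms for $N$ and $M$ evaluated at $n=3$, $q=2$.

```python
from itertools import product

q = 2  # brute-force check of the smallest open case: 3x3 matrices over F_2

def det3(A):
    # cofactor expansion of a 3x3 determinant, reduced mod q
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
            - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
            + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0])) % q

mats = [(e[0:3], e[3:6], e[6:9]) for e in product(range(q), repeat=9)]
GL = [A for A in mats if det3(A) != 0]
assert len(GL) == (q**3 - 1) * (q**3 - q) * (q**3 - q**2)  # |GL_3(F_2)| = 168

def plus_diag(A, k):
    # add 1 (mod q) to the first k diagonal entries of A
    return tuple(tuple((A[i][j] + (1 if i == j and i < k else 0)) % q
                       for j in range(3)) for i in range(3))

# Over F_2, -X = X, so C is adjacent to D iff C + D is invertible.
# Common neighbours of the pair (0, E_11) and of the pair (0, diag(1,1,0)):
a = sum(1 for C in GL if det3(plus_diag(C, 1)) != 0)
b = sum(1 for C in GL if det3(plus_diag(C, 2)) != 0)

# compare with the closed forms for N and M at n = 3, q = 2
assert a == (q**3 - q**2 - 1) * (q**3 - q) * (q**3 - q**2)
assert b == (q**6 - q**5 - 2 * q**4 + q**3 + q**2 + q) * (q**3 - q**2)
assert a != b  # so G_{M_3(F_2)} is not strongly regular
```

Both counts agree with the formulas, and since they differ, no single $\mu$ can serve for all non-adjacent pairs.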
So far, we have characterized the strongly regular unitary Cayley graphs of matrix algebras over a finite field $F$.

Acknowledgements {#acknowledgements .unnumbered}
================

The authors are grateful to everyone who has supported us.

References {#references .unnumbered}
==========
--- abstract: 'In our earlier work [@Fareed17], we proposed an incremental SVD algorithm with respect to a weighted inner product to compute the proper orthogonal decomposition (POD) of a set of simulation data for a partial differential equation (PDE) without storing the data. In this work, we perform an error analysis of the incremental SVD algorithm. We also modify the algorithm to incrementally update both the SVD and an error bound when a new column of data is added. We show the algorithm produces the exact SVD of an approximate data matrix, and the operator norm error between the approximate and exact data matrices is bounded above by the computed error bound. This error bound also allows us to bound the error in the incrementally computed singular values and singular vectors. We illustrate our analysis with numerical results for three simulation data sets from a 1D FitzHugh-Nagumo PDE system with various choices of the algorithm truncation tolerances.' author: - 'Hiba Fareed [^1]' - 'John R. Singler' bibliography: - 'incremental\_POD.bib' title: Error Analysis of an Incremental POD Algorithm for PDE Simulation Data --- Introduction ============ Proper orthogonal decomposition (POD) is a method to find an optimal low order basis to approximate a given set of data. The basis elements are called POD modes, and they are often used to create low order models of high-dimensional systems of ordinary differential equations or partial differential equations (PDEs) that can be simulated easily and even used for real-time applications. For more about the applications of POD in engineering and applied sciences and POD model order reduction, see, e.g., [@colonius02; @ZIMMERMANN10; @Zimmermann14; @Peng16; @Christensen99; @Kalashnikova14; @Amsallem16; @Daescu08; @Holmes12; @Barone09; @Calo14; @Farhat15; @XieWellsWangIliescu18; @MohebujjamanRebholzXieIliescu17; @GunzburgerJiangSchneier17; @Kostova-VassilevskaOxberry18]. 
There is a close relationship between the singular value decomposition (SVD) of a set of data and the POD eigenvalues and modes of the data. Due to applications involving functional data and PDEs, many researchers discuss this relationship in weighted inner product spaces and general Hilbert spaces [@QuarteroniManzoniNegri16; @Singler14; @KunischVolkwein02; @GubischVolkwein17]. For the POD calculation, it is important to determine an inner product that is appropriate for the application [@Zlatko17; @Tabandeh16; @Serre12; @Amsallem15; @Kalashnikova14]. Since the size of data sets continues to increase in applications, many researchers have proposed and developed more efficient algorithms for POD computations, the SVD, and other related methods [@Brand02; @Brand06; @BakerGallivan12; @Chahlaoui03; @Iwen16; @Mastronardi05; @Mastronardi08; @Fahl01; @BeattieBorggaard06; @WangMcBeeIliescu16; @HimpeLeibnerRave_pp]. These algorithms have been recently applied in conjunction with techniques such as POD model order reduction and the dynamic mode decomposition, which often consider simulation data from a PDE [@PlaczekTranOhayon11; @Amsallem15; @CoriglianoDossiMariani15; @PeherstorferWillcox15; @PeherstorferWillcox16; @Schmidt17; @ZahrFarhat15; @Zimmermann17; @ZimmermannPeherstorferWillcox17; @NME:NME5283]. In our earlier work [@Fareed17], we proposed an incremental SVD algorithm for computing POD eigenvalues and modes in a weighted inner product space. Specifically, we considered Galerkin-type PDE simulation data, initialized the SVD on a small amount of the data, and then used an incremental approach to approximately update the SVD with respect to a weighted inner product as new data arrives. The algorithm involves minimal data storage; the PDE simulation data does not need to be stored. The algorithm also involves truncation, and therefore produces approximate POD eigenvalues and modes. We proved the SVD update is exact without truncation.
In this paper, we study the effectiveness of the truncations and deduce error bounds for the SVD approximation. To handle the computational challenge raised by large data sets, we bound the error incrementally. Specifically, we extend the incremental SVD algorithm for a weighted inner product in [@Fareed17] to compute an error bound incrementally without storing the data set; see , . We also perform an error analysis in that clarifies the effect of truncation at each step, and provides more insight into the accuracy of the algorithm with truncation and the choices of the two tolerances. We prove the algorithm produces the exact SVD of an approximate data set, and the operator norm error between the exact and approximate data set is bounded above by the incrementally computed error bound. This yields error bounds for the approximate POD eigenvalues and modes. To illustrate the analysis, we present numerical results in for a set of PDE simulation data using various choices of the tolerances. Finally, we present conclusions in . Background and Algorithm {#Section:basic} ======================== We begin by setting notation, recalling background material, and discussing the algorithm. For a matrix $A \in \mathbb{R}^{m \times n}$, let $ A_{(p : q, r : s)} $ denote the submatrix of $ A $ consisting of the entries of $ A $ from rows $ p, \ldots, q $ and columns $ r, \ldots, s $. Also, if $ p $ and $ q $ are omitted, then the submatrix should consist of the entries from all rows. A similar convention applies for the columns if $ r $ and $ s $ are omitted. Let $ M \in \mathbb{R}^{m \times m} $ be symmetric positive definite, and let $ \mathbb{R}^m_M $ denote the Hilbert space $ \mathbb{R}^m $ with weighted inner product $ (x,y)_M = y^T M x $ and corresponding norm $ \| x \|_{ M } = ( x^T M x )^{1/2} $. For a matrix $ P \in \mathbb{R}^{m \times n} $, we can consider $ P $ as a linear operator $ P : \mathbb{R}^{n} \to \mathbb{R}_{M}^{m}$. 
In this case, the operator norm of $ P $ is $$\| P \|_{ \mathcal{L}( \mathbb{R}^{n}, \mathbb{R}_{M}^{m}) }= \sup_{\|x\|=1} \| P x \|_{ M }.$$ We note that $ \mathbb{R}^n $ without a subscript should be understood to have the standard inner product $ (x,y) = y^T x $ and Euclidean norm $ \| x \| = (x^T x)^{1/2} $. The Hilbert adjoint operator of the matrix $ P: \mathbb{R}^{n} \to \mathbb{R}_{M}^{m}$ is the matrix $P^{*} : \mathbb{R}_{M}^{m} \to \mathbb{R}^{n}$ given by $P^* = P^{T} M $. We have $ (Px,y)_M = (x,P^*y) $ for all $ x \in \mathbb{R}^n $ and $ y \in \mathbb{R}^m_M $. In our earlier work [@Fareed17], we discussed how the proper orthogonal decomposition of a set of PDE simulation data can be reformulated as the SVD of a matrix with respect to a weighted inner product. We do not give the details of the reformulation here, but we do briefly recall the SVD with respect to a weighted inner product since we use this concept throughout this work. A *core SVD* of a matrix $ P: \mathbb{R}^{n} \to \mathbb{R}_{M}^{m} $ is a decomposition $ P = V \Sigma W^T $, where $ V \in \mathbb{R}^{m \times k} $, $ \Sigma \in \mathbb{R}^{k \times k} $, and $ W \in \mathbb{R}^{n \times k} $ satisfy $$V^T M V = I, \quad W^T W = I, \quad \Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_k),$$ where $ \sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_k > 0 $. The values $ \{\sigma_i\} $ are called the (positive) singular values of $ P $ and the columns of $ V $ and $ W $ are called the corresponding singular vectors of $ P $. Since POD applications do not typically require the zero singular values, we do not consider the full SVD of $ P: \mathbb{R}^{n} \to \mathbb{R}_{M}^{m} $ in this work. We do note that the SVD of $ P: \mathbb{R}^{n} \to \mathbb{R}_{M}^{m} $ is closely related to the eigenvalue decompositions of $ P^* P $ and $ P P^* $. See [@Fareed17 Section 2.1] for more details. 
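One standard way to compute such a core SVD in practice is to reduce it to an unweighted problem through a Cholesky factorization $ M = L L^T $: if $ L^T P = \bar{V} \Sigma W^T $ is a standard SVD, then $ V = L^{-T} \bar{V} $ has $ M $-orthonormal columns and $ P = V \Sigma W^T $. The sketch below (our illustration on random data; it is not taken from [@Fareed17]) verifies the defining properties numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
m, r = 5, 3

# a random symmetric positive definite weight matrix M and data matrix P
B = rng.standard_normal((m, m))
M = B @ B.T + m * np.eye(m)
P = rng.standard_normal((m, r))

# reduce the weighted problem to a standard SVD via M = L L^T
L = np.linalg.cholesky(M)
Vbar, s, Wt = np.linalg.svd(L.T @ P, full_matrices=False)
V = np.linalg.solve(L.T, Vbar)      # V = L^{-T} Vbar
Sigma, W = np.diag(s), Wt.T

# defining properties of a core SVD with respect to (x, y)_M = y^T M x
assert np.allclose(V.T @ M @ V, np.eye(r))    # M-orthonormal left vectors
assert np.allclose(W.T @ W, np.eye(r))        # orthonormal right vectors
assert np.allclose(V @ Sigma @ W.T, P)        # P = V Sigma W^T
```

The Cholesky reduction is convenient for a one-shot computation; the incremental algorithm discussed next avoids forming the full data matrix at all.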
Also, when we consider the SVD (or core SVD) of a matrix without weighted inner products we refer to this as the standard SVD (or standard core SVD). We consider approximately computing the SVD of a dataset $ U $ incrementally by updating the core SVD when each new column $ c $ of data is added to the data set. This incremental procedure is performed without forming or storing the original data matrix. Specifically, we focus on the incremental SVD algorithm with a weighted inner product proposed in Algorithm 4 of [@Fareed17]. The algorithm is based on the following fundamental identity: if $ U = V \Sigma W^T $ is a core SVD, then $$\begin{aligned} [\, U \, c \, ] &= [\, V \Sigma W^T \, c \, ]\\ &= [\, V \,j \,] \left[\begin{array}{cc} \Sigma & V^{*}c\\ 0 & p \end{array}\right] \left[\begin{array}{cc} W & 0\\ 0 & 1 \end{array}\right]^T,\end{aligned}$$ where $ j = ( c - V V^* c )/ p $ and $ p = \| c - V V^* c \|_{M} $ [@Fareed17]. The algorithm is a modified version of Brand’s incremental SVD algorithm [@Brand02] that directly treats the weighted inner product. Brand’s incremental SVD algorithm without a weighted inner product has been used for POD computations in [@ZahrFarhat15; @NME:NME5283], and our implementation strategy follows the algorithm in [@NME:NME5283]. Below, we consider a slight modification of the algorithm from [@Fareed17]; specifically, we update the algorithm to include a computable error bound $ e $. We show in this work that the algorithm produces the exact core SVD of a matrix $ \tilde{U} $ such that $ \| U - \tilde{U} \|_{\mathcal{L}(\mathbb{R}^s,\mathbb{R}^m_M)} \leq e $, where $ U $ is the true data matrix. This error bound gives information about the approximation error for the singular values and singular vectors; see for details.
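The fundamental identity is easy to test numerically. The following sketch (our illustration with randomly generated $M$, $U$, and $c$; the Cholesky-based construction of the initial core SVD is our assumption, not taken from [@Fareed17]) performs one exact update and checks that the product reproduces $[\,U\ c\,]$ and that $[\,V\ j\,]$ remains $M$-orthonormal.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 3

B = rng.standard_normal((m, m))
M = B @ B.T + m * np.eye(m)      # SPD weight matrix (randomly generated)
U = rng.standard_normal((m, n))
c = rng.standard_normal(m)

# initial core SVD of U with respect to M, via the Cholesky factor M = L L^T
L = np.linalg.cholesky(M)
Vb, s, Wt = np.linalg.svd(L.T @ U, full_matrices=False)
V, Sigma, W = np.linalg.solve(L.T, Vb), np.diag(s), Wt.T

# one exact update step: [U c] = [V j] [[Sigma, d], [0, p]] [[W, 0], [0, 1]]^T
d = V.T @ M @ c                  # d = V^* c
h = c - V @ d
p = np.sqrt(h @ M @ h)           # p > 0 almost surely for a random column c
j = h / p

Q = np.block([[Sigma, d[:, None]], [np.zeros((1, n)), np.array([[p]])]])
Wext = np.block([[W, np.zeros((n, 1))], [np.zeros((1, n)), np.array([[1.0]])]])
Vj = np.column_stack([V, j])

lhs = np.column_stack([U, c])
rhs = Vj @ Q @ Wext.T
assert np.allclose(lhs, rhs)                      # the identity holds exactly
assert np.allclose(Vj.T @ M @ Vj, np.eye(n + 1))  # [V j] is M-orthonormal
```

Taking a standard SVD of the small matrix $Q$ then restores diagonal form, which is exactly the update step of the algorithm below.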
We take the first step in the incremental SVD algorithm by initializing the SVD and the error bound with a single column $ c \neq 0 $ as follows: $$\Sigma = \left\Vert \,c\,\right\Vert_{M} = (| c^{T} M c | )^{1/2}, \quad V = c \Sigma^{-1}, \quad W = 1, \quad e = 0.$$ Here, the error bound $ e $ is set to zero since the initial SVD is exact. Also, as mentioned in [@Fareed17], even though $ M $ is positive definite it is possible for round off errors to cause $ c^T M c $ to be very small and negative; we use the absolute value here and throughout the algorithm to avoid this issue. Then we incrementally update the SVD and the error bound by applying when a new column is added. Most of the algorithm is taken directly from [@Fareed17 Algorithm 4]; we refer to that work for a detailed discussion of the algorithm and details about the implementation. We note the following: - The input is an existing SVD $ V $, $ \Sigma $, and $ W $, a new column $ c $, the weight matrix $ M $, two positive tolerances, and an error bound $ e $. - Lines 10, 15, 18, 21, and 26 are new, and are simple computations used to update the error bound $ e $. - In the SVD update stage (lines 1–16), $ e_p $ is the error due to $ p $-truncation in line $ 3 $. - In the singular value truncation stage (lines 17–22), $ e_{sv} $ is the error due to the singular value truncation in line $19$. - In the orthogonalization stage (lines 23–25), a modified Gram-Schmidt algorithm with reorthogonalization is used; see Section 4.2 in [@Fareed17]. - The output is the updated SVD and error bound. - The columns of $ V $ are the $ M $-orthonormal POD modes, and the squares of the singular values are the POD eigenvalues. - If only the POD eigenvalues and modes are required, then the computations involving $ W $ can be skipped; however, $ W $ is needed if an approximate reconstruction of the entire data set is desired. 
- As new columns continue to be added, a user can monitor the computed error bound and lower the tolerances if desired.

Input: $ V \in \mathbb{R}^{m \times k} $, $ \Sigma \in \mathbb{R}^{k \times k} $, $ W \in \mathbb{R}^{n \times k}$, $ c \in \mathbb{R}^m $, $ M \in \mathbb{R}^{m \times m} $, $ \mathrm{tol} $, $ \mathrm{tol}_\mathrm{sv} $, $ e $

% Prepare for SVD update
$d=V^{T} M c$, $ p = \mathrm{sqrt}( |( c-Vd )^{T} M ( c-Vd )| )$
if $ p < \mathrm{tol} $ then $ Q = \begin{bmatrix} \Sigma & d \\ 0 & 0 \end{bmatrix} $ else $ Q = \begin{bmatrix} \Sigma & d \\ 0 & p \end{bmatrix} $
$[\,V_{Q},\Sigma_{Q},W_{Q}\,]=\mathrm{svd}(Q)$

% SVD update
if $ p < \mathrm{tol} $ then $ V = V V_{Q_{(1:k,1:k)}}$, $\Sigma = \Sigma_{Q_{(1:k,1:k)}}$, $ W =\begin{bmatrix} W & 0\\ 0 & 1 \end{bmatrix} W_{Q_{(:,1:k)}} $, $ e_{p} = p $
else $ j=(c-Vd) / p $, $ V= [V \, j] V_{Q} $, $ \Sigma = \Sigma_{Q}$, $ W = \begin{bmatrix} W & 0\\ 0 & 1 \end{bmatrix} W_{Q} $, $ k=k+1 $, $ e_{p} = 0 $

% Neglect small singular values: truncation
if $ \Sigma_{(r+1,r+1)} < \mathrm{tol}_\mathrm{sv} $ for some index $ r $ then $ e_{sv} = \Sigma_{(r+1,r+1)} $, $ \Sigma = \Sigma_{(1:r,1:r)}$, $V = V_{(:,1:r)}$, $W = W_{(:,1:r)}$
else $ e_{sv} = 0 $

% Orthogonalize if necessary
$ V = \mathrm{modifiedGSweighted}(V,M) $
$ e = e + e_{p} + e_{sv} $

Output: $ V $, $ \Sigma $, $ W $, $ e $

Error Analysis {#subsec:Error Analysis}
==============

In this section, we perform an error analysis of . We show the algorithm produces the exact SVD of another matrix $ \tilde{U} $, and bound the error between the matrices. We assume all computations in the algorithm are performed in exact arithmetic. Therefore, the Gram-Schmidt orthogonalization stage (in lines 23–25) is not considered here. We note that in [@Fareed17], we considered a Gram-Schmidt procedure with reorthogonalization to minimize the effect of round-off errors; see, e.g., [@GiraudLangou02; @GiraudLangou05; @GiraudLangouRozloznik05; @RozloznikTuma12]. We leave an analysis of round-off errors in to be considered elsewhere. We begin our analysis in by analyzing the error due to each individual truncation step in the algorithm.
Then we provide error bounds for the algorithm in . Individual Truncation Errors {#subsec:ind_trunc_errors} ---------------------------- We begin our analysis of the incremental SVD algorithm by recalling a result from [@Fareed17]. This result shows that a single column incremental update to the SVD is exact without truncation when $ p = \| c-VV^{*}c \|_{M} > 0 $. \[thm:svd\_exact\_update\] Let $ U: \mathbb{R}^{n}\longrightarrow \mathbb{R}_{M}^{m}$, and suppose $ U = V \Sigma W^{T}$ is an exact core SVD of $ U $, where $ V^{T} M V = I $ for $V \in \mathbb{R}^{m \times k}$, $ W^{T} W = I$ for $W \in \mathbb{R}^{n \times k}$, and $\Sigma \in \mathbb{R}^{k \times k}$. Let $ c \in \mathbb{R}^{m}_M $ and define $$h = c-VV^{*}c, \quad p = \| h \|_{M}, \quad Q=\begin{bmatrix} ~\Sigma & V^{*}c\\ 0 & p \end{bmatrix},$$ where $ V^* = V^T M $. If $ p > 0 $ and a standard core SVD of $ Q \in \mathbb{R}^{k+1 \times k+1} $ is given by $$\label{eq:4} Q =V _{Q}\,\Sigma_{Q}\,W_{Q}^{T}, $$ then a core SVD of $ [ \,U \,\, c \, ] : \mathbb{R}^{n+1}\longrightarrow \mathbb{R}_{M}^{m} $ is given by $$[\,U \,\, c \,] = V_{u} \Sigma_{Q} W_{u}^{T},$$ where $$V_{u} = [\, V \,\, j \,] ~V_{Q}, \quad j = h/p, \quad W_{u} = \left[\begin{array}{cc} W & 0 \\ 0 & 1 \end{array}\right] W_{Q}.$$ Next, we analyze the incremental SVD update in the case when the added column $ c $ satisfies $ p = \| c - V V^* c \|_M = 0 $. \[prop:2.2\] Let $ U = V \Sigma W^{T} $, $ c $, $ h $, $ p $, and $ Q $ be given as in , and assume $ p = \| c - V V^* c \|_M = 0 $. 
If the full standard SVD of $ Q \in \mathbb{R}^{k+1 \times k+1} $ is given by $ Q = V_Q \Sigma_Q W_Q^T $, where $ V_Q, \Sigma_Q, W_Q \in \mathbb{R}^{k+1 \times k+1} $, then $$ V_Q = \begin{bmatrix} V_{Q_{(1:k,1:k)}} & 0 \\ 0 & 1 \end{bmatrix}, \quad \Sigma_Q = \begin{bmatrix} \Sigma_{Q_{(1:k,1:k)}} & 0 \\ 0 & 0 \end{bmatrix}, \quad \Sigma_{Q_{(1:k,1:k)}} > 0, $$ and a standard core SVD of $ R = Q_{(1:k,1:k+1)} = [ \, \Sigma \,\,\, V^* c \,] \in \mathbb{R}^{k \times k+1} $ is given by $$R = V_{Q_{(1:k,1:k)}} \Sigma_{Q_{(1:k,1:k)}} (W_{Q_{(:,1:k)}})^T.$$ Let $ \sigma_{Q_1} \geq \sigma_{Q_2} \geq \cdots \geq \sigma_{Q_{k+1}} \geq 0 $ be the singular values of $ Q $ so that $ \Sigma_Q = \mathrm{diag}(\sigma_{Q_{1}}, ..., \sigma_{Q_{k+1}})$. Also, let $ \{ v_{Q_j} \} $ and $ \{ w_{Q_j} \} $ be the corresponding orthonormal singular vectors in $ \mathbb{R}^{k+1} $, so that $$V_Q = [ v_{Q_{1}}, \ldots, v_{Q_{(k+1)}} ], \quad W_Q = [ w_{Q_{1}}, \ldots, w_{Q_{(k+1)}} ],$$ with $ V_Q^{T} V_Q = I $ and $ W_Q^{T} W_Q = I $. First, we show $ Q $ has exactly one zero singular value. Since we know $$\begin{aligned} Q^{T} v_{Q_j} &= \sigma_{Q_j} w_{Q_j},\label{Q_SVD_1} \\ Q w_{Q_j} &= \sigma_{Q_j} v_{Q_j}, \label{Q_SVD_2}\end{aligned}$$ for $ j = 1, \ldots, k+1 $, the number of zero singular values of $ Q $ is precisely equal to the dimension of the nullspace of $ Q^T $. Suppose $ v = [v_1, \ldots, v_{k+1}]^T \in \mathbb{R}^{k+1} $ satisfies $ Q^T v = 0 $. Recall $ \Sigma = \mathrm{diag}(\sigma_1,\sigma_2, \ldots, \sigma_k) > 0 $, and let $ d = V^* c = [d_1, \ldots, d_k]^T $. Then $ Q^T v = 0 $ implies $$\begin{bmatrix} \sigma_{1} v_1 \\ \sigma_{2} v_2 \\ \vdots \\ \sigma_{k} v_k \\ d_{1} v_1 + d_{2} v_2 + \ldots + d_{k} v_k \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 0 \end{bmatrix}.$$ Since $ \sigma_{1}\geq \cdots \geq \sigma_{k} > 0 $, we have $ v_j = 0 $ for $ j = 1,\ldots, k $. 
This implies the nullspace of $ Q^T $ is exactly the span of $ e_{k+1} = [0, \ldots, 0, 1]^T \in \mathbb{R}^{k+1} $. Therefore, the nullspace is one dimensional and $ Q $ has exactly one zero singular value, i.e., $ \sigma_{Q_{k+1}} = 0 $ and $ \sigma_{Q_1} \geq \sigma_{Q_2} \geq \cdots \geq \sigma_{Q_{k}} > 0 $. Next, $ Q w_{Q_{j}} = \sigma_{Q_j} v_{Q_{j}} $ for $ j = 1, \ldots, k $ gives $$\begin{bmatrix} \sigma_{1} w_{Q_{j,1}} + d_{1} w_{Q_{j,k+1}} \\ \sigma_{2} w_{Q_{j,2}} + d_{2} w_{Q_{j,k+1}} \\ \vdots \\ \sigma_{k} w_{Q_{j,k}} + d_{k} w_{Q_{j,k+1}} \\ 0 \end{bmatrix} = \begin{bmatrix} \sigma_{Q_j} v_{Q_{j,1}} \\ \sigma_{Q_j} v_{Q_{j,2}} \\ \vdots \\ \sigma_{Q_j} v_{Q_{j,k}} \\ \sigma_{Q_j} v_{Q_{j,k+1}} \end{bmatrix}.$$ The last equation gives $ v_{Q_{j,k+1}} = 0 $ since $ \sigma_{Q_j} > 0 $ for $ j = 1, \ldots , k $. Therefore, for $ j = 1, \ldots , k $, $$v_{Q_{j}} = [ v_{Q_{j,1}}, v_{Q_{j,2}}, \ldots, v_{Q_{j,k}}, 0 ]^T,$$ and $$v_{Q_{k+1}} = [ 0, 0, \ldots, 0, 1 ]^T.$$ This implies $$V_Q = \begin{bmatrix} V_{Q_{(1:k,1:k)}} & 0 \\ 0 & 1 \end{bmatrix},$$ and so the SVD of $ Q $ is given by $$\begin{aligned} Q &= \begin{bmatrix} V_{Q_{(1:k,1:k)}} & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \Sigma_{Q_{(1:k,1:k)}} & 0 \\ 0 & 0 \end{bmatrix} W_{Q}^T.\end{aligned}$$ This gives $ R = Q_{(1:k,1:k+1)} = \check{V}_{Q} \check{\Sigma}_{Q} \check{W}_{Q}^T $, where $ \check{V}_{Q} = V_{Q_{(1:k,1:k)}} $, $ \check{\Sigma}_{Q} = \Sigma_{Q_{(1:k,1:k)}} $, and $ \check{W}_{Q} = W_{Q_{(1:k+1,1:k)}} $. It can be checked that $ \check{V}_{Q}^{T} \check{V}_{Q} = I $ and $ \check{W}_{Q}^{T} \check{W}_{Q} = I $ since $ V_Q^{T} V_Q = I $ and $ W_Q^{T} W_Q = I $. Therefore, a standard core SVD of $ R \in \mathbb{R}^{k \times k+1} $ is given by $ R = \check{V}_{Q} \check{\Sigma}_{Q} \check{W}_{Q}^T $. The following result is nearly identical to Proposition 2.3 in [@Fareed17]; the proof is also almost identical and is omitted.
\[prop:2.3\] Suppose $ V \in \mathbb{R}^{m \times k} $ has $ M $-orthonormal columns and $ W \in \mathbb{R}^{n \times l} $ has orthonormal columns. If $ R \in \mathbb{R}^{k \times l} $ has standard core SVD $ R = V_{R} \Sigma_{R} W_{R}^T $ and $ P: \mathbb{R}^{n} \to \mathbb{R}_{M}^{m}$ is defined by $ P = V R W^T $, then $$\label{eqn:P_coreSVD_product} P = V_u \Sigma_u W_u^T, \quad V_u = V V_{R}, \quad \Sigma_u = \Sigma_R, \quad W_u = W W_{R},$$ is a core SVD of $ P $. Next, we complete the analysis of the $ p = 0 $ case: \[prop:p\_zero\] Let $ U = V \Sigma W^{T} $, $ c $, $ h $, $ p $, and $ Q $ be given as in , and assume $ p = \| c - V V^* c \|_M = 0 $. If the full standard SVD of $ Q \in \mathbb{R}^{k+1 \times k+1} $ is given by $ Q = V_Q \Sigma_Q W_Q^T $, where $ V_Q, \Sigma_Q, W_Q \in \mathbb{R}^{k+1 \times k+1} $, then a core SVD of $ [\, U \,\, c \, ] : \mathbb{R}^{n+1} \to \mathbb{R}^m_M $ is given by $$[\,U \,\, c \,] = V_{u} \Sigma_u W_{u}^{T},$$ where $$V_{u} = V V_{Q_{(1:k,1:k)}}, \quad \Sigma_u = \Sigma_{Q_{(1:k,1:k)}}, \quad W_{u} = \left[\begin{array}{cc} W & 0 \\ 0 & 1 \end{array}\right] W_{Q_{(:,1:k)}}.$$ Since $ p = 0 $, we have $ c = V V^* c $ and therefore $$\left[\begin{array}{cc} U & c \end{array}\right] = \left[\begin{array}{cc} V \Sigma W^T & V V^* c \end{array}\right] = V \left[\begin{array}{cc} \Sigma & V^{*}c \end{array}\right] \left[\begin{array}{cc} W & 0\\ 0 & 1 \end{array}\right]^T.$$ The result follows from and by taking $ P = [ \, U \,\, c \, ] $ and $ R = [ \, \Sigma \,\, V^* c \, ] $. **Truncation part 1.** Next, we analyze the incremental SVD update in the case when the added column $ c $ satisfies $ p = \| c - V V^* c \|_M < \mathrm{tol} $. In this case, does not compute the SVD of $ [ \, U \,\, c \, ] $. Instead, sets $ p = 0 $ and returns the exact SVD of $ \tilde{U} = [ \, U \,\,\, V V^* c \,] $. The approximation error in the operator norm is given in the next result.
\[trn\_1\] Let $ U: \mathbb{R}^{n}\longrightarrow \mathbb{R}_{M}^{m}$, and suppose $ U = V \Sigma W^{T} $ is a core SVD of $ U $. If $ c \in \mathbb{R}^m_M $, $ p = \| c - VV^{*}c \|_{M} $, and $$\tilde{U} = [ \, U \,\, VV^{*}c\,],$$ then $$\| [ \,U \,\, c\,] - \tilde{U} \|_{ \mathcal{L}( \mathbb{R}^{n+1}, \mathbb{R}_{M}^{m})} = p.$$ For $ x = [x_1, \ldots, x_{n+1} ]^T \in \mathbb{R}^{n+1} $, we have $$\begin{aligned} \| [ \,U \,\, c\,] - \tilde{U} \|_{ \mathcal{L}( \mathbb{R}^{n+1}, \mathbb{R}_{M}^{m})} & = \sup_{\|x\|=1} \big\| [ \,0 \,\,\, (c-VV^{*}c) \,] x \big\|_{M}\\ & = \sup_{\|x\|=1} \| c - VV^{*}c \|_{M} \, | x_{n+1} |\\ & = \| c - VV^{*}c \|_{M}, \end{aligned}$$ where the $ \sup $ is clearly attained by $ x = [0, \ldots, 0, 1 ]^T \in \mathbb{R}^{n+1} $. **Truncation part 2.** In , after the SVD update due to an added column, the algorithm truncates any singular values that are smaller than a given tolerance, $ \mathrm{tol}_\mathrm{sv} $. For the matrix case with unweighted inner products, the operator norm error caused by this truncation is well known to equal the first neglected singular value. This result is also true for a compact linear operator mapping between two Hilbert spaces; see, e.g., [@GohbergGoldbergKaashoek90 Chapters VI–VIII], [@Lax02 Chapter 30], and [@ReedSimon80 Sections VI.5–VI.6] for more information about the SVD for compact operators. This gives the following result: \[trn\_2\] Let $ U: \mathbb{R}^{n}\longrightarrow \mathbb{R}_{M}^{m}$, and suppose $ U = V \Sigma W^{T} $ is a core SVD of $ U $. For a given $ r > 0 $, let $ \tilde{U} $ be the rank $ r $ truncated SVD of $ U $, i.e., $$\tilde{U} = V_{(:,1:r)} \Sigma_{(1:r,1:r)} (W_{(:,1:r)})^{T}.$$ Then $$\| U - \tilde{U} \|_{ \mathcal{L}( \mathbb{R}^{n}, \mathbb{R}_{M}^{m})} = \Sigma_{(r+1,r+1)}.$$

Error Bounds {#subsection:error_bounds}
------------

Next, we fully explain the computed error bound in .
In a typical application of the algorithm, many new columns of data are added and the POD is updated many times. In the following result, we assume we are at the $ k $th step of this procedure and we have an existing error bound. We prove that produces a correct update of the error bound. More specifically, let $ k \in \mathbb{N} $, let $ U_k, \tilde{U}_k : \mathbb{R}^k \to \mathbb{R}^m_M $, and assume $$U_k = V_k \Sigma_k W_k^{T}, \quad \tilde{U}_k = \tilde{V}_k \tilde{\Sigma}_k \tilde{W}_k^{T}$$ are core SVDs of $ U_k $ and $ \tilde{U}_k $. Let $ c_k \in \mathbb{R}^m_M $ and define $ U_{k+1} := [ U_k \,\,c_k ] : \mathbb{R}^{k+1} \to \mathbb{R}^m_M $. Furthermore, let $ \tilde{U}_{k+1} : \mathbb{R}^{k+1} \to \mathbb{R}^m_M $ be the result of one step of the incremental SVD update applied to $ \tilde{U}_{k} $ so that $$\tilde{U}_{k+1} = \tilde{V}_{k+1} \tilde{\Sigma}_{k+1} \tilde{W}_{k+1}^{T}.$$ Therefore, we consider the sequence $ \{ U_k \} $ to be the exact data matrices, and the sequence $ \{ \tilde{U}_k \} $ to be the result produced (in exact arithmetic) by . In exact arithmetic, there are two stages to . The first stage is the SVD update in lines 1–16. This stage of the algorithm takes $ \tilde{U}_k $ and the added column $ c_k $ and produces the update $ \hat{U}_{k+1} $. There are two possible results for $ \hat{U}_{k+1} $ depending on the value of $ p $ in line 1. The second stage is the singular value truncation applied to $ \hat{U}_{k+1} $ (lines 17–22), which produces the final update $ \tilde{U}_{k+1} $. Again, there are two possible results for $ \tilde{U}_{k+1} $, depending on the singular values of $ \hat{U}_{k+1} $. We analyze the error bound for each possible outcome of the algorithm in the result below. Let the positive tolerances $ \mathrm{tol} $ and $ \mathrm{tol}_\mathrm{sv} $ be fixed. Below, we let $ p_k $ denote the value $ p $ in line 1 of . We say that $ p $ truncation is applied if $ p_k < \mathrm{tol} $.
We say the singular value truncation is applied if any of the singular values of $ \hat{U}_{k+1} $ are less than $ \mathrm{tol}_\mathrm{sv} $. In this case, we find a value $ r $ so that the $ r $ largest singular values of $ \hat{U}_{k+1} $ are greater than $ \mathrm{tol}_\mathrm{sv} $, while the remaining singular values are less than or equal to $ \mathrm{tol}_\mathrm{sv} $. We let $ \hat{\sigma}_{r+1} $ denote the largest singular value of $ \hat{U}_{k+1} $ such that $ \hat{\sigma}_{r+1} \leq \mathrm{tol}_\mathrm{sv} $. \[Lemma:1step\_errorbound\] If $$\| U_k - \tilde{U}_k \|_{\mathcal{L}(\mathbb{R}^k,\mathbb{R}_M^m)} \leq e_k, \quad p_k = \| c_k - \tilde{V}_k \tilde{V}_k^* c_k \|_{M},$$ then $$\| U_{k+1} - \tilde{U}_{k+1} \|_{\mathcal{L}(\mathbb{R}^{k+1},\mathbb{R}_M^m)} \leq e_{k+1},$$ where $$e_{k+1} = \begin{cases} e_k, & \text{if no truncation is applied,}\\ e_k + p_k, & \text{if only $ p $ truncation is applied,}\\ e_k + \hat{\sigma}_{r+1}, & \text{if only the singular value truncation is applied,}\\ e_k + p_k + \hat{\sigma}_{r+1}, & \text{if both truncations are applied.} \end{cases}$$ Stage 1 of (lines 1–16) takes $ \tilde{U}_k $ and produces $ \hat{U}_{k+1} $. If $ p_k \geq \mathrm{tol} $, then gives that the core SVD is updated exactly, i.e., $$\hat{U}_{k+1} = [ \, \tilde{U}_k \, \, c_k \, ] \quad \mbox{if $ p_k \geq \mathrm{tol} $.}$$ Otherwise, if $ p_k < \mathrm{tol} $, then implies $$\hat{U}_{k+1} = [ \, \tilde{U}_k \, \, \tilde{V}_k \tilde{V}_k^* c_k \, ] \quad \mbox{if $ p_k < \mathrm{tol} $,}$$ and the error is given by $$\| [ \, \tilde{U}_k \, \, c_k \, ] - \hat{U}_{k+1} \|_{\mathcal{L}(\mathbb{R}^{k+1},\mathbb{R}_M^m)} = p_k.$$ Stage 2 of (lines 17–22) takes $ \hat{U}_{k+1} $ and produces $ \tilde{U}_{k+1} $. If all of the singular values of $ \hat{U}_{k+1} $ are greater than $ \mathrm{tol}_\mathrm{sv} $, then $ \hat{U}_{k+1} = \tilde{U}_{k+1} $ and there is no error in this stage.
Otherwise, let $ \hat{\sigma}_{r+1} $ denote the largest singular value of $ \hat{U}_{k+1} $ such that $ \hat{\sigma}_{r+1} \leq \mathrm{tol}_\mathrm{sv} $. In this case, $ \tilde{U}_{k+1} $ is simply the $ r $th order truncated SVD of $ \hat{U}_{k+1} $, and the error is given by : $$\| \tilde{U}_{k+1} - \hat{U}_{k+1} \|_{\mathcal{L}(\mathbb{R}^{k+1},\mathbb{R}_M^m)} = \hat{\sigma}_{r+1}.$$ Below, for ease of notation, let $ \| \cdot \| $ denote the $ \mathcal{L}(\mathbb{R}^{k+1},\mathbb{R}_M^m) $ operator norm. The error between $ U_{k+1} $ and $ \tilde{U}_{k+1} $ in the operator norm can be bounded as follows: $$\| U_{k+1} - \tilde{U}_{k+1} \| \leq \| U_{k+1} - [ \, \tilde{U}_k \, \, c_k \,] \| + \| [ \, \tilde{U}_k \, \, c_k \,] - \hat{U}_{k+1} \| + \| \hat{U}_{k+1} - \tilde{U}_{k+1} \|.$$ As noted above, the second error term is either zero if $ p $ truncation is not applied or $ p_k $ otherwise. Also, the third error term is either zero if the singular value truncation is not applied or $ \hat{\sigma}_{r+1} $ otherwise. For the first term, we have $$\begin{aligned} \| U_{k+1} - [ \, \tilde{U}_k \, \, c_k \,] \| &= \| [ \, U_k \, \, c_k \,] - [ \, \tilde{U}_k \, \, c_k \,] \|\\ &= \| [ \, ( U_k - \tilde{U}_k ) \, \, 0 \, ] \|\\ &= \sup_{ \| x \| = 1 } \big\| [ \, ( U_k - \tilde{U}_k ) \, \, 0 \, ] x \big\|_{M}\\ &\leq \| U_k - \tilde{U}_k \|_{ \mathcal{L}(\mathbb{R}^{k},\mathbb{R}_M^m) } \leq e_k. \end{aligned}$$ This completes the proof. The result above explains the update of the error bound in one step of . Now we assume the SVD is initialized exactly when $ k = 1 $, and then the algorithm is applied for a sequence of added columns $ \{ c_k \} \subset \mathbb{R}_M^m $, for $ k = 2, \ldots, s $. Let $ \mathrm{tol} $ and $ \mathrm{tol}_\mathrm{sv} $ be fixed positive constants, and let $ \{ c_k \} \subset \mathbb{R}_M^m $, for $ k = 1, \ldots, s $, be the columns of a matrix $ U $.
For $ k = 1 $, assume the SVD $ \tilde{U}_1 = \tilde{V}_1 \tilde{\Sigma}_1 \tilde{W}_1^T $ and error bound $ e_1 = 0 $ are initialized exactly as described in . For $ k = 1, \ldots, s-1 $, let $ \tilde{U}_{k+1} = \tilde{V}_{k+1} \tilde{\Sigma}_{k+1} \tilde{W}_{k+1}^T $ and $ e_{k+1} $ be the output of applied to the input $ \tilde{U}_k = \tilde{V}_k \tilde{\Sigma}_k \tilde{W}_k^T $ and $ e_k $. If $ T_p $ represents the total number of times $ p $ truncation is applied and $ T_\mathrm{sv} $ represents the total number of times the singular value truncation is applied, then $$\| U - \tilde{V}_{s} \tilde{\Sigma}_{s} \tilde{W}_{s}^T \|_{ \mathcal{L}(\mathbb{R}^{s},\mathbb{R}_M^m) } \leq T_{p} \mathrm{tol} + T_\mathrm{sv} \mathrm{tol}_\mathrm{sv}.$$ The proof follows immediately from the previous result, using $ p_k < \mathrm{tol} $ and $ \hat{\sigma}_{r+1} \leq \mathrm{tol}_\mathrm{sv} $. The error bound in the result above is not as precise as the error bound computed using since the tolerances are only upper bounds on the errors in each step. However, this result does provide some insight into the choice of the tolerances for the algorithm. Specifically, in general there is no reason to expect one of $ T_p $ or $ T_\mathrm{sv} $ to be significantly larger than the other; therefore, it seems reasonable to choose equal values for the tolerances. Furthermore, for a very large number of added columns, it is possible that $ T_p $ and $ T_\mathrm{sv} $ can be large; therefore, small tolerances should be chosen to preserve accuracy. computes an upper bound on the operator norm error between the exact data matrix $ U $ and the approximate truncated SVD $ \tilde{U} = \tilde{V} \tilde{\Sigma} \tilde{W}^T $ of the data matrix. (The above corollary also provides another upper bound on the error.) This error bound allows us to bound the error in the incrementally computed singular values and singular vectors.
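The two-stage update and the error-bound bookkeeping analyzed above can be summarized in code. The following is a minimal NumPy sketch of one update step, not the paper's Matlab implementation; the function name `incsvd_step`, the dense `np.linalg.svd` calls, and the argument conventions are our own illustrative choices, and no attempt is made at efficiency:

```python
import numpy as np

def incsvd_step(V, S, W, c, M, e, tol=1e-10, tol_sv=1e-10):
    """One step of the weighted incremental SVD with error-bound tracking.

    V (m x k) has M-orthonormal columns, S holds the k singular values,
    W (n x k) has orthonormal columns, c is the new column, e is the bound.
    """
    k = S.size
    d = V.T @ (M @ c)                      # d = V^* c  (M-weighted adjoint)
    r = c - V @ d                          # residual of c off range(V)
    p = np.sqrt(max(r @ (M @ r), 0.0))     # p = || c - V V^* c ||_M

    if p < tol:                            # stage 1, "p truncation" branch
        e += p                             # exact error of using V V^* c for c
        Q = np.column_stack([np.diag(S), d])             # k x (k+1)
        Vq, s, Wqt = np.linalg.svd(Q, full_matrices=False)
        V = V @ Vq
    else:                                  # stage 1, exact update branch
        j = r / p                          # new M-orthonormal direction
        Q = np.block([[np.diag(S), d[:, None]],
                      [np.zeros((1, k)), np.array([[p]])]])
        Vq, s, Wqt = np.linalg.svd(Q)
        V = np.column_stack([V, j]) @ Vq

    Wbig = np.block([[W, np.zeros((W.shape[0], 1))],
                     [np.zeros((1, k)), np.ones((1, 1))]])
    W, S = Wbig @ Wqt.T, s

    keep = S > tol_sv                      # stage 2, singular value truncation
    if not np.all(keep):
        e += S[~keep].max()                # first neglected singular value
        V, S, W = V[:, keep], S[keep], W[:, keep]
    return V, S, W, e
```

Initializing from the first column ($ V = c_1 / \| c_1 \|_M $, $ S = [\| c_1 \|_M] $, $ W = [1] $, $ e = 0 $) and feeding the remaining columns one at a time then reproduces, up to the tracked bound $ e $, the weighted SVD of the full data matrix.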
Let $\{\sigma_k, v_k, w_k \}_{k \geq 1}$ and $\{\tilde{\sigma}_k, \tilde{v}_k, \tilde{w}_k \}_{k \geq 1}$ denote the ordered singular values and corresponding orthonormal singular vectors of $ U, \tilde{U} : \mathbb{R}^s \to \mathbb{R}^m_M $ in the result below. The following result follows directly from general results about error bounds for singular values and singular vectors of compact linear operators in . Let $ k \geq 1 $, and let $ \varepsilon > 0 $ be such that $ \| U - \tilde{U} \|_{ \mathcal{L}( \mathbb{R}^s, \mathbb{R}^m_M)} \leq \varepsilon $. Then $$| \sigma_\ell - \tilde{\sigma}_\ell | \leq \varepsilon \quad \mbox{for all $ \ell \geq 1 $.}$$ Also, for $ j = 1, \ldots, k $, define $$\varepsilon_j = j \varepsilon + 2 \sum_{i=1}^{j-1} \left( \varepsilon_i + \sigma_{i} E_i^{1/2} \right), \quad E_j = 2 \left( 1 - \sqrt{ \frac{ ( \sigma_j - 2 \varepsilon_j )^2 - \sigma_{j+1}^2 }{ \sigma_j^2 - \sigma_{j+1}^2 } } \right).$$ If the first $ k+1 $ singular values of $ U $ are distinct and positive, the singular vector pairs $ \{ \tilde{v}_j, \tilde{w}_j \}_{j=1}^k $ are suitably normalized, and $$\varepsilon_j \leq \frac{ \sigma_j - \sigma_{j+1} }{ 2 } \quad \mbox{for $ j = 1, \ldots, k $,}$$ then $$\label{diff_kthsvectors} \| v_j - \tilde{v}_j \|_M \leq E_j^{1/2}, \quad \| w_j - \tilde{w}_j \| \leq E_j^{1/2} + 2 \sigma_j^{-1} \varepsilon_j, \quad \mbox{for $ j = 1, \ldots, k $.}$$ This result indicates we should expect accurate approximate singular values and also accurate approximate singular vectors if $ \varepsilon $ is small and there is not a small gap in the singular values. We note that POD singular values often decay to zero quickly, and therefore we expect to see lower accuracy in the computed POD modes for smaller singular values due to the small gap. The examples in our first work [@Fareed17] and the new examples below show both of these expected behaviors for the errors in the approximate singular vectors.
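The recursive quantities $ \varepsilon_j $ and $ E_j $ above are straightforward to evaluate; the following small routine (our own naming, assuming distinct positive singular values in decreasing order, as in the result) checks the gap condition and computes the bounds:

```python
import math

def singular_vector_error_bounds(sigmas, eps, k):
    """Evaluate the recursive bounds eps_j and E_j for j = 1, ..., k.

    sigmas must contain at least k+1 distinct positive singular values in
    decreasing order; eps bounds the operator-norm perturbation.
    """
    eps_list, E_list = [], []
    for j in range(1, k + 1):
        # eps_j = j*eps + 2 * sum_{i<j} ( eps_i + sigma_i * sqrt(E_i) )
        eps_j = j * eps + 2 * sum(eps_list[i] + sigmas[i] * math.sqrt(E_list[i])
                                  for i in range(j - 1))
        if eps_j > (sigmas[j - 1] - sigmas[j]) / 2:
            raise ValueError(f"gap condition fails at j = {j}")
        E_j = 2 * (1 - math.sqrt(((sigmas[j - 1] - 2 * eps_j) ** 2 - sigmas[j] ** 2)
                                 / (sigmas[j - 1] ** 2 - sigmas[j] ** 2)))
        eps_list.append(eps_j)
        E_list.append(E_j)
    return eps_list, E_list
```

Note how quickly $ \varepsilon_j $ grows with $ j $: for example, for $ \sigma = (3, 2, 1) $ and $ \varepsilon = 10^{-4} $ one finds $ \varepsilon_1 = 10^{-4} $ but $ \varepsilon_2 \approx 9.3 \times 10^{-2} $, which reflects the comment above that accuracy degrades for the later singular vectors.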
Numerical Results {#sec:numerical_results}
=================

We consider the 1D FitzHugh-Nagumo system $$\begin{aligned} \frac{\partial v(t,x)}{\partial t} &= \mu \frac{\partial^{2}v(t,x)}{\partial x^{2}} - \frac{1}{\mu} w(t,x) + \frac{1}{\mu} f(v) + \frac{c}{\mu}, \quad 0<x<1,\\ \frac{\partial w(t,x)}{\partial t} &= b v(t,x) - \gamma w(t,x) + c, \quad 0<x<1,\end{aligned}$$ where $f(v) = v(v - 0.1)(1 - v)$, $\mu = 0.015 $, $ b = 0.5 $, $ \gamma = 2 $, $ c = 0.05 $, the boundary conditions are $ v_x(t,0) = - 50000 t^{3} e^{-15t} $, $ v_x(t,1) = 0 $, and the initial conditions are zero. This example problem was considered in [@Wang15], and we used the interpolated coefficient finite element method from that work to discretize the problem in space. For the finite element method we used continuous piecewise linear basis functions with equally spaced nodes, and we used Matlab’s `ode23s` to approximate the solution of the resulting nonlinear ODE system on different time intervals. For the POD computations, we consider the data $ z(t,x) = [ v(t,x), w(t,x) ] $ in the Hilbert space $ L^2(0,1) \times L^2(0,1) $ with the standard inner product. Now we follow the procedure in our first work [@Fareed17] to arrive at the weighted SVD problem. At each time step, we rescale the approximate solution data by the square root of the time step; see [@Fareed17 Section 5.1]. We expand the approximate solution in the finite element basis to obtain the weight matrix $ M $ as in [@Fareed17 Section 5.2]. To compute the POD of the approximate solution data, we compute the SVD of the finite element solution coefficient matrix $ U : \mathbb{R}^s \to \mathbb{R}^m_M $, where $ s $ is the number of time steps (snapshots) and $ m $ is two times the number of finite element nodes.
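For testing, the exact weighted SVD is obtained below from a Cholesky factorization of the weight matrix, following Algorithm 1 in [@Fareed17]. One way to realize this (a NumPy sketch; the function name and setup are ours): with $ M = L L^T $, take an ordinary SVD of $ L^T U $ and map the left singular vectors back by $ L^{-T} $, which makes them $ M $-orthonormal.

```python
import numpy as np

def weighted_svd(U, M):
    """Core SVD of U viewed as a map into R^m with the M-weighted inner product.

    Returns V, S, W with U = V diag(S) W^T, where V has M-orthonormal
    columns (V^T M V = I) and W has ordinary orthonormal columns.
    """
    L = np.linalg.cholesky(M)               # M = L L^T
    Vt, S, Wt = np.linalg.svd(L.T @ U, full_matrices=False)
    V = np.linalg.solve(L.T, Vt)            # V = L^{-T} Vt, so V^T M V = I
    return V, S, Wt.T
```

Consistent with this construction, the $ M $-weighted operator norm of any $ U - \tilde{U} $ is the largest singular value of $ L^T ( U - \tilde{U} ) $, which is how an exact error of the kind reported in the tables below can be evaluated.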
To illustrate our analysis of the incremental SVD algorithm, we consider three examples:

Example 1
:   $5000$ finite element nodes and $ s = 491 $ snapshots in the time interval $[ 0 , 10 ]$

Example 2
:   $10000$ finite element nodes and $ s = 710 $ snapshots in the time interval $[ 0 , 15 ]$

Example 3
:   $50000$ finite element nodes and $ s = 1275 $ snapshots in the time interval $[ 0 , 28 ]$

We consider relatively small values of $ m = 2 \times \text{nodes} $ and $ s $ in order to test the incremental algorithm against exact SVD computations. Let $ U $ denote the finite element solution coefficient matrix, and let $ \tilde{U} = \tilde{V} \tilde{\Sigma} \tilde{W}^T $ denote the incrementally computed approximate SVD of $ U : \mathbb{R}^s \to \mathbb{R}^m_M $ produced by . For each example, we choose various tolerances and compute: $$\begin{gathered} \mbox{Rank $ = \mathrm{rank}(\tilde{U}) $,} \quad \mbox{Exact error $ = \| U - \tilde{U} \|_{ \mathcal{L}(\mathbb{R}^s,\mathbb{R}^m_M) } $,}\\ \mbox{Incr.\ error bound $ = e $ computed by \Cref{algorithm:incrSVD_Error_weightedinner} at the final snapshot.} \end{gathered}$$ The exact SVD of $ U : \mathbb{R}^s \to \mathbb{R}^m_M $ and the exact error are both computed using a Cholesky factorization of the weight matrix $ M $ following Algorithm 1 in [@Fareed17]. The exact computations are for testing only since they require storing all of the data. – display the computed quantities listed above for the three examples with various choices of the $ p $ truncation tolerance, $ \mathrm{tol}$, and the singular value truncation tolerance, $ \mathrm{tol}_{\mathrm{sv}} $. We set each tolerance to $ 10^{-8} $, $ 10^{-10} $, or $ 10^{-12} $, for a total of nine tests for each example. In all of the tests, the incrementally computed error bound is larger than the exact error and the error bound is small. Also, the tests indicate that there is no benefit from choosing one tolerance different than the other.
  $\mathrm{tol}$   $\mathrm{tol}_\mathrm{sv}$   Rank   Exact error    Incr. error bound
  ---------------- ---------------------------- ------ -------------- -------------------
  $10^{-8}$        $10^{-8}$                    $36$   $3.6924e-07$   $2.8029e-06$
  $10^{-8}$        $10^{-10}$                   $66$   $3.1932e-07$   $1.1826e-06$
  $10^{-8}$        $10^{-12}$                   $61$   $8.5938e-07$   $9.0495e-07$
  $10^{-10}$       $10^{-8}$                    $30$   $3.9090e-08$   $1.4908e-06$
  $10^{-10}$       $10^{-10}$                   $44$   $4.4893e-10$   $2.7417e-08$
  $10^{-10}$       $10^{-12}$                   $71$   $3.9349e-10$   $8.9680e-09$
  $10^{-12}$       $10^{-8}$                    $30$   $3.9090e-08$   $1.4908e-06$
  $10^{-12}$       $10^{-10}$                   $41$   $4.5256e-10$   $1.5511e-08$
  $10^{-12}$       $10^{-12}$                   $55$   $4.4334e-12$   $2.8596e-10$

  : Example 1 – error between true and incremental SVD[]{data-label="table:Error_SVD ex1"}

  $\mathrm{tol}$   $\mathrm{tol}_\mathrm{sv}$   Rank   Exact error    Incr. error bound
  ---------------- ---------------------------- ------ -------------- -------------------
  $10^{-8}$        $10^{-8}$                    $35$   $3.0859e-07$   $3.6931e-06$
  $10^{-8}$        $10^{-10}$                   $66$   $1.3881e-07$   $1.1429e-06$
  $10^{-8}$        $10^{-12}$                   $64$   $3.4657e-07$   $1.5321e-06$
  $10^{-10}$       $10^{-8}$                    $31$   $4.1497e-08$   $1.7368e-06$
  $10^{-10}$       $10^{-10}$                   $45$   $5.3142e-10$   $3.6491e-08$
  $10^{-10}$       $10^{-12}$                   $74$   $7.7348e-10$   $1.1523e-08$
  $10^{-12}$       $10^{-8}$                    $30$   $4.1497e-08$   $1.7368e-06$
  $10^{-12}$       $10^{-10}$                   $41$   $4.6086e-10$   $1.8671e-08$
  $10^{-12}$       $10^{-12}$                   $59$   $4.8658e-12$   $3.4880e-10$

  : Example 2 – error between true and incremental SVD[]{data-label="table:Error_SVD_ex2"}

  $\mathrm{tol}$   $\mathrm{tol}_\mathrm{sv}$   Rank   Exact error    Incr. error bound
  ---------------- ---------------------------- ------ -------------- -------------------
  $10^{-8}$        $10^{-8}$                    $38$   $6.5705e-08$   $4.3271e-06$
  $10^{-8}$        $10^{-10}$                   $72$   $6.8271e-07$   $1.1523e-06$
  $10^{-8}$        $10^{-12}$                   $67$   $3.6916e-07$   $2.3847e-06$
  $10^{-10}$       $10^{-8}$                    $31$   $4.7018e-08$   $2.2388e-06$
  $10^{-10}$       $10^{-10}$                   $49$   $4.8302e-10$   $4.3655e-08$
  $10^{-10}$       $10^{-12}$                   $78$   $2.4473e-08$   $2.6825e-08$
  $10^{-12}$       $10^{-8}$                    $31$   $4.7018e-08$   $2.2388e-06$
  $10^{-12}$       $10^{-10}$                   $41$   $4.9660e-10$   $2.5022e-08$
  $10^{-12}$       $10^{-12}$                   $60$   $6.3200e-12$   $5.7438e-10$

  : Example 3 – error between true and incremental SVD[]{data-label="table:Error_SVD_ex3"}

shows the exact and incrementally computed POD singular values and also the weighted norm error between the exact and incrementally computed POD modes with $ \mathrm{tol} $ and $ \mathrm{tol}_\mathrm{sv} $ both equal to $10^{-12}$. The errors for the POD modes corresponding to the largest singular values are extremely small (approximately $10^{-12}$). The errors in the POD modes increase slowly as the corresponding singular values approach zero. There are many accurate POD modes; the first $30$ modes are computed to an accuracy level of at least $ 10^{-5} $. The POD singular value and mode errors behaved similarly for other cases.

Conclusion {#sec:conclusion}
==========

In our earlier work [@Fareed17], we proposed computing the SVD with respect to a weighted inner product incrementally to obtain the POD eigenvalues and modes of a set of PDE simulation data. In this work, we extended the algorithm to update the SVD and an error bound incrementally when a new column is added. We also performed an error analysis of this algorithm by analyzing the error due to each individual truncation.
We showed that the algorithm produces the exact SVD of a matrix $ \tilde{U} $ such that $ \| U - \tilde{U} \|_{\mathcal{L}(\mathbb{R}^s,\mathbb{R}^m_M)} \leq e $, where $ U $ is the true data matrix, $ M $ is the weight matrix, and $ e $ is the computed error bound. We also proved error bounds for the incrementally computed singular values and singular vectors. We tested our approach on three example data sets from a 1D FitzHugh-Nagumo PDE system with various choices of the two truncation tolerances. In all of the tests, the incrementally computed error bound was larger than the exact error and the error bound was small. Furthermore, the approximate singular values and dominant singular vectors were accurate. Also, our analysis and the numerical tests suggest that there is no benefit from choosing one algorithm tolerance different than the other.

Acknowledgement {#acknowledgement .unnumbered}
===============

The authors thank Mark Opmeer for a helpful discussion.

Appendix {#Appen}
========

Let $ X $ and $ Y $ be two separable Hilbert spaces, with inner products $ (\cdot,\cdot)_X $ and $ (\cdot,\cdot)_Y $ and corresponding norms $ \| \cdot \|_X $ and $ \| \cdot \|_Y $. Below, we drop the subscripts on the inner products and the norms since the space will be clear from the context. Assume $ H, H_\varepsilon : X \to Y $ are compact linear operators. In this section, we prove bounds on the error between the singular vectors of $ H $ and $ H_\varepsilon $ assuming the singular values are distinct. Our results rely on techniques from [@glover88; @Pedro08]. Let $\{\sigma_k, v_k, w_k \}_{k \geq 1}$ and $\{\sigma^{\varepsilon}_k, v^{\varepsilon}_k, w^{\varepsilon}_k \}_{k \geq 1}$ be the ordered singular values and corresponding orthonormal singular vectors of $H$ and $H_\varepsilon$.
They satisfy $$\label{eqn:svd_H_Heps} H v_k = \sigma_k w_k, \quad H^* w_k = \sigma_k v_k, \quad H_\varepsilon v^\varepsilon_k = \sigma^\varepsilon_k w^\varepsilon_k, \quad H^*_\varepsilon w^\varepsilon_k = \sigma^\varepsilon_k v^\varepsilon_k,$$ where the star denotes the Hilbert adjoint operator. Also, if $ \sigma_k > 0 $, then $ \sigma_k^2 $ is the $ k $th ordered eigenvalue of the self-adjoint nonnegative compact operators $ H H^* $ and $ H^* H $. First, we recall a well-known bound on the singular values; see, e.g., [@GohbergKrein69 page 30] and [@GohbergGoldbergKaashoek90 page 99]. Let $ \varepsilon > 0 $ be such that $ \| H - H_\varepsilon \|_{ \mathcal{L}( X, Y)} \leq \varepsilon $. Then for all $ k \geq 1 $ we have $$\label{diff_kthsvalue} | \sigma_{k} - \sigma^{\varepsilon}_{k} | \leq \varepsilon.$$ In the results below, we require that the singular vectors $ \{ v_k^\varepsilon, w_k^\varepsilon \} $ be suitably normalized. We note that any pair $ \{ v_k^\varepsilon, w_k^\varepsilon \} $ of singular vectors for a fixed value of $ k $ can be rescaled by a constant of unit magnitude and remain a pair of singular vectors. However, due to the relationship , we note that both vectors in the pair must be rescaled by the same constant. The proof of the following result is largely contained in [@glover88 Appendix 2], but we include the proof here to be complete. \[lemma2\] Let $ \varepsilon > 0 $ be such that $ \| H - H_\varepsilon \|_{ \mathcal{L}( X, Y)} \leq \varepsilon $. If $ \sigma_1 > \sigma_2 > 0 $, $ v_1^\varepsilon $ and $ w_1^\varepsilon $ are suitably normalized, and $$\label{eqn:eps_small_condition} \varepsilon \leq \frac{ \sigma_1 - \sigma_2 }{ 2 },$$ then $$\label{diff_1stsvectors} \| v_1 - v^{\varepsilon}_1 \| \leq E_1^{1/2}, \quad \| w_1 - w^{\varepsilon}_1 \| \leq E_1^{1/2} + 2 \sigma_1^{-1} \varepsilon, \quad E_1 = 2 \left( 1 - \sqrt{ \frac{ ( \sigma_1 - 2 \varepsilon )^2 - \sigma_2^2 }{ \sigma_1^2 - \sigma_2^2 } } \right).
$$ The larger error bound for $ \| w_1 - w_1^\varepsilon \| $ is due to the way we assume the singular vectors are normalized in the proof. It is possible to use a different normalization and make the error bound larger for $ \| v_1 - v_1^\varepsilon \| $ instead. We comment on the normalization in the proof. Define $ V_1 = \mathrm{span}\{ v_1 \} \subset X $. We have $ X = V_1 \oplus V_1^{\perp} $, and therefore $ v^{\varepsilon}_1 = r_\varepsilon v_1 + x_\varepsilon $ for some constant $ r_\varepsilon $ and some $ x_\varepsilon \in X $ satisfying $ ( x_\varepsilon , v_1 ) = 0 $. This gives $ \| x_\varepsilon\|^2 = 1 - | r_\varepsilon |^2 $ and also $ | r_\varepsilon | \leq 1 $. Then $$\begin{aligned} \label{diff_v1} \| v_1 - v^{\varepsilon}_1\|^2 & = \| v_1 - r_\varepsilon v_1 - x_\varepsilon \|^2 \nonumber \\ & = | 1 - r_\varepsilon|^2 \| v_1 \|^2 + \| x_\varepsilon \|^2 \nonumber\\ & = 2 ( 1 - \mathrm{Re}(r_\varepsilon) ). \end{aligned}$$ Note $ \|\sigma^\varepsilon_1 w^\varepsilon_1\| = \| H_\varepsilon v^\varepsilon_1 \| $ implies $$\begin{aligned} \sigma^\varepsilon_1 & = \| H_\varepsilon v^\varepsilon_1 + H v^\varepsilon_1 - H v^\varepsilon_1 \| \\ & \leq \|H v^\varepsilon_1 \| + \| H - H_\varepsilon \| \| v^\varepsilon_1 \|\\ & \leq \|H (r_\varepsilon v_1 + x_\varepsilon) \| + \varepsilon \\ & = \|r_\varepsilon \sigma_1 w_1 + H x_\varepsilon\| + \varepsilon. \end{aligned}$$ To estimate this norm, we use $ ( H x_\varepsilon , w_1) = ( x_\varepsilon , H^* w_1) = \sigma_1 ( x_\varepsilon , v_1) = 0 $ and also $$\| H x_\varepsilon \|^2 = \frac{ (H^* H x_\varepsilon, x_\varepsilon) }{ \| x_\varepsilon \|^2 } \| x_\varepsilon \|^2 \leq \sup_{ x \in V_1^\perp, \: x \neq 0 } \frac{ (H^* H x, x) }{ \| x \|^2 } \| x_\varepsilon \|^2 = \sigma_2^2 \, \| x_\varepsilon \|^2,$$ where we used the variational characterization of the second eigenvalue $ \sigma_2^2 $ of the self-adjoint compact nonnegative operator $ H^* H $ [@Lax02 Chapter 28].
These results give $$\begin{aligned} \|r_\varepsilon \sigma_1 w_1 + H x_\varepsilon \|^2 & = |r_\varepsilon|^2 \sigma^2_1 + \|H x_\varepsilon\|^2\\ & \leq |r_\varepsilon|^2 \sigma^2_1 + \sigma_2^2 \, \|x_\varepsilon\|^2\\ & = \big( \sigma_1^2 - \sigma_2^2 \big) |r_\varepsilon|^2 + \sigma_2^2. \end{aligned}$$ Next, the assumption for $ \varepsilon $ gives $ \varepsilon \leq (\sigma_1 - \sigma_2) / 2 \leq \sigma_1/2 $, and therefore $ \sigma_1 - 2 \varepsilon \geq 0 $. Also, gives $ -\varepsilon \leq \sigma_1^\varepsilon - \sigma_1 $, or $ \sigma_1^\varepsilon - \varepsilon \geq \sigma_1 - 2 \varepsilon \geq 0 $. This gives $ ( \sigma_1^\varepsilon - \varepsilon )^2 \geq ( \sigma_1 - 2 \varepsilon )^2 $, and therefore $$| r_\varepsilon |^2 \geq \frac{ ( \sigma_1^\varepsilon - \varepsilon )^2 - \sigma_2^2 }{ \sigma_1^2 - \sigma_2^2 } \geq \frac{ ( \sigma_1 - 2 \varepsilon )^2 - \sigma_2^2 }{ \sigma_1^2 - \sigma_2^2 }.$$ Note that the assumption for $ \varepsilon $ guarantees that we can take a square root of this estimate. If $ v_1^\varepsilon $ is normalized so that $ r_\varepsilon $ is a nonnegative real number, then , $ 1-\mathrm{Re}(r_\varepsilon) = 1-| r_\varepsilon | $, and the above inequality give the desired estimate for $ \| v_1 - v^{\varepsilon}_1\| $. If $ r_\varepsilon $ is not a nonnegative real number, then rescale the singular vector pair $ \{v_1^\varepsilon,w_1^\varepsilon\} $ by $ \overline{r}_\varepsilon/ | r_\varepsilon | $ to obtain the proper normalization and the bound for $ \| v_1 - v^{\varepsilon}_1\| $. For $ w_1 $ and $ w_1^\varepsilon $, it does not appear that we can use a similar proof strategy since we have already rescaled the singular vector pair $ \{v_1^\varepsilon,w_1^\varepsilon\} $. Specifically, we can obtain $ w_1^\varepsilon = s_\varepsilon w_1 + y_\varepsilon $, but it is not clear that $ s_\varepsilon $ will be a nonnegative real number and we are unable to rescale again.
Therefore, we use $ \| H \| = \sigma_1 $, $ \| H - H_\varepsilon \| \leq \varepsilon $, and $ | \sigma_1 - \sigma_1^\varepsilon | \leq \varepsilon $ to directly estimate: $$\begin{aligned} \| w_1 - w^{\varepsilon}_1 \| &= \| \sigma_1^{-1} H v_1 - (\sigma^\varepsilon_1)^{-1} H_\varepsilon v^{\varepsilon}_1 \|\\ &\leq \| \sigma_1^{-1} H v_1 - \sigma_1^{-1} H v^{\varepsilon}_1 \| + \| \sigma_1^{-1} H v^{\varepsilon}_1 - \sigma_1^{-1} H_\varepsilon v^{\varepsilon}_1 \| + \| \sigma_1^{-1} H_\varepsilon v^{\varepsilon}_1 - (\sigma^\varepsilon_1)^{-1} H_\varepsilon v^{\varepsilon}_1 \|\\ &\leq \| v_1 - v_1^\varepsilon \| + \sigma_1^{-1} \varepsilon + | \sigma_1^\varepsilon \sigma_1^{-1} - 1 |\\ &\leq \| v_1 - v_1^\varepsilon \| + 2 \sigma_1^{-1} \varepsilon. \end{aligned}$$ In the result below, note that $ \varepsilon_1 = \varepsilon $ and $ E_1 $ is as defined in above. Let $ k \geq 1 $, and let $ \varepsilon > 0 $ be such that $ \| H - H_\varepsilon \|_{ \mathcal{L}( X, Y)} \leq \varepsilon $. For $ j = 1, \ldots, k $, define $$\varepsilon_j = j \varepsilon + 2 \sum_{i=1}^{j-1} \left( \varepsilon_i + \sigma_{i} E_i^{1/2} \right), \quad E_j = 2 \left( 1 - \sqrt{ \frac{ ( \sigma_j - 2 \varepsilon_j )^2 - \sigma_{j+1}^2 }{ \sigma_j^2 - \sigma_{j+1}^2 } } \right).$$ If the first $ k+1 $ singular values of $ H $ are distinct and positive, the singular vector pairs $ \{ v_j^\varepsilon, w_j^\varepsilon \}_{j=1}^k $ are suitably normalized, and $$\varepsilon_j \leq \frac{ \sigma_j - \sigma_{j+1} }{ 2 } \quad \mbox{for $ j = 1, \ldots, k $,}$$ then $$\label{diff_kthsvectors} \| v_j - v^{\varepsilon}_j \| \leq E_j^{1/2}, \quad \| w_j - w^{\varepsilon}_j \| \leq E_j^{1/2} + 2 \sigma_j^{-1} \varepsilon_j, \quad \mbox{for $ j = 1, \ldots, k $.}$$ The proof is by induction. First, the result is true for $ k = 1 $ by . Next, assume the result is true for all $ j = 1 , \ldots , k-1 $.
Define compact linear operators for $ j = 2, \ldots, k $ by $$H^{j}x = H x - \sum_{i=1}^{j-1} \sigma_{i}(x,v_i) w_i, \quad H^{j}_\varepsilon x = H_\varepsilon x - \sum_{i=1}^{j-1} \sigma^{\varepsilon}_{i}(x,v^{\varepsilon}_i) w^{\varepsilon}_i,$$ for all $ x \in X $. Then the ordered singular values and corresponding singular vectors of $ H^j $ and $ H^j_\varepsilon $ are $ \{ \sigma_i, v_i, w_i \}_{i \geq j} $ and $ \{ \sigma_i^\varepsilon, v_i^\varepsilon, w_i^\varepsilon \}_{i \geq j} $. Note that $$\begin{aligned} \| H^{k} x - H^{k}_\varepsilon x \| &\leq \| ( H - H_\varepsilon ) x \| + \sum_{i=1}^{k-1} \| \sigma^{\varepsilon}_{i}(x,v^{\varepsilon}_i) w^{\varepsilon}_i - \sigma_{i}(x,v_i) w_i \| \\ &\leq \varepsilon \| x \| + \| x \| \sum_{i=1}^{k-1} \big( | \sigma^{\varepsilon}_{i} - \sigma_{i} | + \sigma_{i} \|v^{\varepsilon}_i - v_i \| + \sigma_{i} \|w^{\varepsilon}_i - w_i \| \big). \end{aligned}$$ Then since the result is true for all $ j = 1 , \ldots , k-1 $, we have $ \| H^k - H^k_\varepsilon \| \leq \varepsilon_k $, where $$\begin{aligned} \varepsilon_k &= \varepsilon + \sum_{i=1}^{k-1} \left( \varepsilon + \sigma_{i} E_i^{1/2} + \sigma_{i} \big( E_i^{1/2} + 2 \sigma_i^{-1} \varepsilon_i \big) \right) \\ &= k \varepsilon + 2 \sum_{i=1}^{k-1} \left( \varepsilon_i + \sigma_{i} E_i^{1/2} \right). \end{aligned}$$ Applying to $ H^k $ and $ H^k_\varepsilon $ with $ \| H^k - H^k_\varepsilon \| \leq \varepsilon_k $ completes the proof. [^1]: Department of Mathematics and Statistics, Missouri University of Science and Technology, Rolla, MO (, ).
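The first-vector bound of the appendix can also be checked numerically on matrices, which are a special case of compact operators. The sketch below (NumPy; the construction of $ H $ with gap $ \sigma_1 - \sigma_2 = 2 $, the perturbation of operator norm $ \varepsilon = 10^{-3} $, and the sign alignment playing the role of the normalization in the proof are all our own choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 30, 20

# Build H with a controlled gap between the two largest singular values.
Qm, _ = np.linalg.qr(rng.standard_normal((m, m)))
Qn, _ = np.linalg.qr(rng.standard_normal((n, n)))
sig = np.sort(np.concatenate(([3.0, 1.0], rng.uniform(0.1, 0.9, n - 2))))[::-1]
H = Qm[:, :n] @ np.diag(sig) @ Qn.T

# Perturbation with operator norm exactly eps.
eps = 1e-3
E = rng.standard_normal((m, n))
E *= eps / np.linalg.norm(E, 2)
Heps = H + E

U1, s1, _ = np.linalg.svd(H)
U2, s2, _ = np.linalg.svd(Heps)
v1, v1e = U1[:, 0], U2[:, 0]
if v1 @ v1e < 0:                     # normalize so that (v1eps, v1) >= 0
    v1e = -v1e

sigma1, sigma2 = s1[0], s1[1]
assert eps <= (sigma1 - sigma2) / 2  # the small-perturbation condition
E1 = 2 * (1 - np.sqrt(((sigma1 - 2 * eps) ** 2 - sigma2 ** 2)
                      / (sigma1 ** 2 - sigma2 ** 2)))
assert abs(s1[0] - s2[0]) <= eps                  # singular value bound
assert np.linalg.norm(v1 - v1e) <= np.sqrt(E1)    # bound on v1 from the lemma
```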
---
abstract: |
    We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and a variety of sub-optimal decoders. Specifically, we consider iterative belief propagation (BP) decoders, Gallager type decoders, and linear programming (LP) decoders performing over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instantons of the same code under different channels and decoders are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to the decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures, and thus have less significant error floors.
author:
- 'Shashi Kiran Chilappagari,  [^1]  Michael Chertkov,  [^2]  Mikhail G. Stepanov, [^3] and  Bane Vasic,   [^4]'
title: 'Instanton-based Techniques for Analysis and Reduction of Error Floors of LDPC Codes'
---

Low-density parity-check codes, Error Floor, Iterative Decoding, Linear Programming Decoding, Instantons, Pseudo-Codewords, Trapping Sets

Introduction {#section1}
============

LDPC codes [@63Gallager], [@99Mackay] have been the focus of intense research over the past decade because they can approach theoretical limits of reliable transmission over various channels even when decoded by sub-optimal low-complexity algorithms.
Two important classes of such algorithms are (i) iterative decoding algorithms, which include message passing algorithms (variants of the BP algorithm [@88Pearl] and Gallager type algorithms [@63Gallager]) and bit flipping algorithms [@76ZP; @96SS] (serial and parallel), as well as (ii) the LP decoding algorithm [@05FWK]. Characterization of the error performance of sub-optimal algorithms (or simply decoders) is still an open problem, and has been addressed both for LDPC code ensembles and for individual codes [@08RU]. The error performance of LDPC codes in the asymptotic limit of the code length is well characterized for a large class of sub-optimal decoders over different channels (the interested reader is referred to [@63Gallager; @01RU; @01RSU; @04BRU] for the general theory of message passing algorithms, [@76ZP; @96SS; @01BM; @08Burshtein] for analysis of bit flipping algorithms and expander based arguments, and [@05FS; @07FMSSW; @08DDKW] for analysis of the LP decoder). A common feature of all the analysis methods used in deriving the asymptotic results is that the underlying assumptions hold in the limit of infinitely long codes and/or are applicable to an ensemble of codes. Hence, they are of limited use for the analysis of a given finite-length code. The performance of a code under a particular decoding algorithm is characterized by the bit-error-rate (BER) or the frame-error-rate (FER) curve plotted as a function of the signal-to-noise ratio (SNR). A typical BER/FER vs SNR curve consists of two distinct regions. At small SNR, the error probability decreases rapidly with SNR, with the curve looking like a *waterfall*. The decrease slows down at moderate values, turning into the *error floor* asymptotic at very large SNR [@03Richardson]. This transient behavior and the error floor asymptotic originate from the sub-optimality of the decoder, i.e., the ideal maximum-likelihood (ML) curve would not show such a dramatic change in the BER/FER with the SNR increase.
While the slope of the BER/FER curve in the waterfall region is the same for almost all the codes in the ensemble, there can be a huge variation in the slopes for different codes in the error floor region [@08RU]. Since for sufficiently long codes the error floor phenomenon manifests itself in a domain unreachable by brute-force Monte-Carlo (MC) simulations, analytical methods are necessary to characterize the FER performance. Finite length analysis of LDPC codes is well understood for decoding over the binary erasure channel (BEC). The decoder failures in the error floor domain are governed by combinatorial structures known as stopping sets [@02DPRTU]. Stopping set distributions of various LDPC ensembles have been studied by Orlitsky *et al.* (see [@05OVZ] and references therein for related works). Unfortunately, such a level of understanding of the decoding failures has not been achieved for other important channels such as the BSC and the AWGNC. In this paper, we focus on the decoding failures of LDPC codes for iterative as well as LP decoders over the BSC and the AWGNC. Failures of iterative decoders for graph based codes were first studied by Wiberg [@96Wiberg], who introduced the notions of *computation trees* and *pseudo-codewords*. Subsequent analysis of the computation trees was carried out by Frey *et al.* [@01FKV] and Forney *et al.* [@01Forney]. The failures of the LP decoder can be understood in terms of the vertices of the so-called *fundamental polytope*, which are also known as pseudo-codewords [@05FWK]. Vontobel and Koetter [@05VK] introduced a theoretical tool known as the graph cover approach and used it to establish connections between the LP and the message passing decoders using the notion of the fundamental polytope. They showed that the pseudo-codewords arising from the Tanner graph covers are identical to the pseudo-codewords of the LP decoder. Vontobel and Koetter [@04VK] also studied the relation between the LP and the min-sum decoders.
For iterative decoding on the AWGNC, MacKay and Postol [@03MP] were the first to discover that certain “near codewords” are to blame for the high error floor in the Margulis code. Richardson [@03Richardson] reproduced their results and developed a computation technique to predict the performance of a given LDPC code in the error floor domain. He characterized the troublesome noise configurations leading to the error floor using combinatorial objects termed trapping sets and described a technique (of a Monte-Carlo importance sampling type) to evaluate the error rate associated with a particular class of trapping sets. The method from [@03Richardson] was further refined for the AWGNC by Stepanov *et al.* [@05SCCV], who introduced the notion of *instantons*. In a nutshell, an instanton is a configuration of the noise which is positioned in between a codeword (say the zero codeword) and another pseudo-codeword (which is not necessarily a codeword). An incremental shift (allowed by the channel) from this configuration toward the zero codeword leads to correct decoding (into the zero-codeword), while an incremental shift in the opposite direction leads to a failure. In principle, one can find this dangerous configuration of the noise by exploring the domain of correct decoding surrounding the zero codeword, and finding the borders of this domain – the so-called error-surface. If the channel is continuous, the error-surface consists of continuous patches, while the configuration of the noise maximizing the error probability over a patch is called an instanton. The term instanton, introduced initially in the context of disordered systems, is also known under the names of *saddle-point* or *optimal fluctuation*, and is common in modern theoretical physics (see [@05SCCV] and references therein). As stated above, the instantons that affect the decoder performance in the error floor region are extremely rare, and hence identifying and enumerating them is a challenging task.
However, once this difficulty is overcome, the knowledge of the trapping set/pseudo-codeword distribution can be used to evaluate the performance of the code. It can also be used to guide optimization of the code and design of improved decoding strategies. In this paper, we focus on the methods used to identify the most relevant noise configurations for various decoders and channel models. Previous investigations of the problem include the work by Kelley and Sridhara [@07KS], who studied pseudo-codewords arising from graph covers and derived bounds on the minimum pseudo-codeword weight in terms of the girth and the minimum left-degree of the underlying Tanner graph. The bounds were further investigated by Xia and Fu [@08XF]. Smarandache and Vontobel [@07SV] found pseudo-codeword distributions for the special cases of codes from Euclidean and projective planes. Pseudo-codeword analysis has also been extended to convolutional LDPC codes by Smarandache *et al.* [@06SPVC]. Milenkovic *et al.* [@07MSW] studied the asymptotic distribution of trapping sets in regular and irregular ensembles. Wang *et al.* [@06WKP] proposed an algorithm to exhaustively enumerate certain trapping sets. Chernyak *et al.* [@04CCSV] and Stepanov *et al.* [@05SCCV] suggested posing the problem of finding the instantons as a special optimization problem. This optimization method was built in the spirit of the general methodology, borrowed from statistical physics, guiding the exploration of rare events which contribute the most to the BER/FER. The optimization method allowed the discovery in [@05SCCV] of the set of most probable instantons for the AWGNC and an iterative decoder. The operational utility of the method was illustrated on a number of moderate-size examples, and the dependence of the instanton structure on the number of iterations was observed.
The general optimization method was substantially improved and refined in [@08CS] for the LP decoder over continuous channels (with the main enabling example chosen to be the AWGNC). The pseudo-codeword-search (PCS) algorithm of [@08CS] essentially explored, in an iterative way, the Wiberg formula, treating an instanton configuration as a median between a pseudo-codeword and the zero-codeword. It was shown empirically that, initiated with a sufficiently noisy configuration, the algorithm converges to an instanton in a sufficiently small number of steps, independent of or weakly dependent on the code size. Repeated multiple times, the method outputs a set of instanton configurations which can further be used to estimate the BER/FER performance in the transient and error floor domains. The definition of the instantons and the instanton search method were extended in [@08CCV] to the BSC. In this special case, the instanton search algorithm is provably efficient, in the sense that it outputs an instanton in a small number of steps, and that the weight of the pseudo-codeword found in the intermediate steps is monotonically decreasing. (See also [@07Vontobel] for an exhaustive list of references for this and related subjects.) In this paper, we discuss failures of iterative decoders (specifically the BP algorithm and the Gallager A/B algorithms) as well as LP decoding over the BSC and the AWGNC. We explain the notion of instanton and elaborate on the connections between instantons and trapping sets as well as pseudo-codewords. We then describe algorithms to search for instantons. Using the $[155,64,20]$ Tanner code [@01TSF] as an enabling example, we illustrate the performance of the instanton search technique, which outputs the set of most probable instantons. By identifying that all decoding failures can be attributed to the presence of a certain subgraph, we construct a code avoiding this subgraph and show that this code outperforms the original code.
Throughout the paper, we focus on the BSC and the AWGNC; while the underlying approach is similar for both channels, rigorous statements can be made for the BSC [@08CCV], whereas the respective AWGNC statements come from experiments only. The rest of the paper is organized as follows. In Section \[section2\], we introduce the notation and provide the required background material. The notions of decoding failures and instantons are discussed in Section \[section3\], followed by a description of instanton search algorithms for different decoders in Section \[section4\]. We illustrate numerical results in Section \[section5\] and conclude in Section \[section6\]. Preliminaries {#section2} ============= LDPC Codes ---------- LDPC codes belong to the class of linear block codes which can be defined by sparse bipartite graphs [@81Tanner]. The Tanner graph [@81Tanner] $G$ of an LDPC code $\mathcal{C}$ is a bipartite graph with two sets of nodes: the set of variable nodes $V=\{1,2,\ldots,n\}$ and the set of check nodes $C=\{1,2,\ldots,m\}$. The check nodes (variable nodes resp.) connected to a variable node (check node resp.) are referred to as its neighbors. The set of neighbors of a node $u$ is denoted by $\mathcal{N}(u)$. The degree $d_u$ of a node $u$ is the number of its neighbors. A vector $\mathbf{v}=(v_1,v_2,\ldots,v_n)$ is a codeword if and only if for each check node, the modulo two sum of its neighbors is zero. An $(n,d_v,d_c)$ regular LDPC code has a Tanner graph with $n$ variable nodes each of degree $d_v$ and $nd_v/d_c$ check nodes each of degree $d_c$. This code has length $n$ and rate $r \geq 1-d_v/d_c$ [@63Gallager]. It should be noted that the Tanner graph is not uniquely defined by the code, and when we say the Tanner graph of an LDPC code, we mean only one possible graphical representation. Channel Assumptions ------------------- We assume that a binary codeword $\mathbf{y}$ is transmitted over a noisy channel and is received as $\mathbf{\hat{y}}$.
The support of a vector $\mathbf{y}=(y_1,y_2,\ldots,y_n)$, denoted by ${\mbox{supp}}(\mathbf{y})$, is defined as the set of all positions $i$ such that $y_i\neq 0$. In this paper, we consider binary input memoryless channels with discrete or continuous output alphabet. As the channel is memoryless, we have $$\Pr(\hat{\mathbf{y}}| \mathbf{y}) = \prod_{i \in V} \Pr(\hat{y}_i| y_i)$$ and hence the channel can be characterized by $\Pr(\hat{y}_i|y_i)$, the probability that $\hat{y}_i$ is received given that $y_i$ was sent. The negative log-likelihood ratio (LLR) corresponding to the variable node $i \in V$ is given by $$\gamma_i=\log\left(\frac{\Pr(\hat{y}_i| y_i=0)}{\Pr(\hat{y}_i| y_i=1)}\right).$$ Two binary input memoryless channels of interest are the BSC with output alphabet $\{0,1\}$ and the AWGNC with output alphabet $\mathbb{R}$. On the BSC with transition probability $\epsilon$, every transmitted bit $y_i \in \{0,1\}$ is flipped [^5] with probability $\epsilon$ and is received as $\hat{y}_i \in \{0,1\}$. Hence, we have $$\gamma_i = \left \{ \begin{array}{cl} \log \left( \frac{1-\epsilon}{\epsilon}\right) & \mbox{~if~} \hat{y}_i=0 \\ \log \left( \frac{\epsilon}{1-\epsilon}\right) & \mbox{~if~} \hat{y}_i=1 \end{array}\right.$$ For the AWGNC, we assume that each bit $y_i \in \{0,1\}$ is modulated using binary phase shift keying (BPSK) and transmitted as $\overline{y}_i = 1-2y_i$ and is received as $\hat{y}_i = \overline{y}_i + n_i$, where $\{n_i\}$ are i.i.d. $N(0,\sigma^2)$ random variables. Hence, we have $$\gamma_i = \frac{2\hat{y}_i}{\sigma^2}.$$ Decoding Algorithms ------------------- ### Message Passing Decoders Message passing decoders operate by passing messages along the edges of the Tanner graph representation of the code. Gallager in [@63Gallager] proposed two simple binary message passing algorithms for decoding over the BSC: Gallager A and Gallager B.
There exist a large number of message passing algorithms (the BP algorithm, the min-sum algorithm, quantized decoding algorithms, decoders with erasures [@01RU], to name a few) in which the messages belong to a larger alphabet. Let $\mathbf{\hat{y}}=(\hat{y}_1,\hat{y}_2,\ldots,\hat{y}_n)$, an $n$-tuple, be the input to the decoder. Let $\omega^{(k)}_{i \to \alpha}$ denote the message passed by a variable node $i \in V$ to its neighboring check node $\alpha \in C$ in the $k^{th}$ iteration and $\varpi^{(k)}_{\alpha \to i}$ denote the message passed by a check node $\alpha$ to its neighboring variable node $i$. Additionally, let $\varpi^{(k)}_{* \to i}$ denote the set of all incoming messages to variable node $i$ and $\varpi^{(k)}_{* \backslash \alpha \to i}$ denote the set of all incoming messages to variable node $i$ except from check node $\alpha$. **Gallager A/B Algorithm:** The Gallager A/B algorithms are hard-decision-decoding algorithms in which all the messages are binary. With a slight abuse of notation, let $|\varpi_{* \to i}=m|$ denote the number of incoming messages to $i$ which are equal to $m \in \{0,1\}$. Associated with every decoding round $k$ and variable degree $d_i$ is a threshold $b_{k,d_i}$. The Gallager B algorithm is defined as follows. $$\begin{aligned} \omega_{i \to \alpha}^{(0)}&=&\hat{y}_i \nonumber \\ \varpi^{(k)}_{\alpha \to i}&=& \left(\sum_{j \in \mathcal{N}(\alpha)\backslash i} \omega^{(k-1)}_{j \to \alpha}\right) \mbox{mod } 2 \nonumber \\ \omega^{(k)}_{i \to \alpha} &=& \left \{ \begin{array}{cl} 1, & \mbox{if } |\varpi^{(k)}_{* \backslash \alpha \to i}=1| \geq b_{k,d_i}\\ 0, & \mbox{if } |\varpi^{(k)}_{* \backslash \alpha \to i}=0| \geq b_{k,d_i}\\ \hat{y}_i, & \mbox{otherwise} \end{array}\right. \nonumber \end{aligned}$$ The Gallager A algorithm is a special case of the Gallager B algorithm with $b_{k,d_i}=d_i-1$ for all $k$.
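The update rules above can be sketched in code. The following is a minimal Python illustration of the Gallager A special case ($b_{k,d_i}=d_i-1$, i.e., a variable-to-check message differs from the channel bit only when all other incoming check messages agree on the opposite value). The adjacency-list graph representation and the majority tentative decision are our own illustrative conventions, not taken from the paper.

```python
def gallager_a(checks, var_nbrs, y, max_iter=20):
    """Sketch of the Gallager A decoder on a Tanner graph.
    checks[a]   : list of variable indices attached to check a
    var_nbrs[i] : list of check indices attached to variable i
    y           : received hard-decision word (list of 0/1)."""
    n = len(var_nbrs)
    # omega[(i, a)]: variable-to-check message, initialized to the channel bits
    omega = {(i, a): y[i] for i in range(n) for a in var_nbrs[i]}
    x = list(y)
    for _ in range(max_iter):
        # check-to-variable: parity of the *other* incoming variable messages
        varpi = {(a, i): sum(omega[(j, a)] for j in nbrs if j != i) % 2
                 for a, nbrs in enumerate(checks) for i in nbrs}
        # variable-to-check: Gallager A rule (all other checks must agree)
        for i in range(n):
            for a in var_nbrs[i]:
                others = [varpi[(c, i)] for c in var_nbrs[i] if c != a]
                if others and all(m == others[0] for m in others):
                    omega[(i, a)] = others[0]
                else:
                    omega[(i, a)] = y[i]
        # tentative decision: majority over the channel bit and all incoming messages
        x = [int(sum(varpi[(a, i)] for a in var_nbrs[i]) + y[i]
                 > (len(var_nbrs[i]) + 1) / 2.0) for i in range(n)]
        # stop as soon as the syndrome is zero (a valid codeword)
        if all(sum(x[j] for j in nbrs) % 2 == 0 for nbrs in checks):
            break
    return x
```

For instance, on the toy length-3 cycle code with checks $v_1+v_2$, $v_2+v_3$, $v_1+v_3$, a single flipped bit is corrected within one iteration.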
At the end of each iteration, a decision on the value of each variable node is made based on all the incoming messages and possibly the received value. **The BP Algorithm:** A soft-decision-decoding algorithm, which is the best possible one if the messages are calculated locally in the Tanner graph of the code, is the BP algorithm (also known as the sum-product algorithm). With a moderate abuse of notation, the messages passed in the BP algorithm are described below: $$\begin{aligned} \omega^{(0)}_{i \to \alpha} & = & \gamma_i \nonumber \\ \varpi^{(k)}_{\alpha \to i} & = & 2\tanh^{-1} \left( \prod_{j \in \mathcal{N}(\alpha)\backslash i} \tanh \left( \frac{1}{2}\omega^{(k-1)}_{j \to \alpha} \right) \right) \nonumber \\ \omega^{(k)}_{i \to \alpha} & = & \gamma_i + \sum_{\delta \in \mathcal{N}(i)\backslash \alpha} \varpi^{(k)}_{\delta \to i} \nonumber\end{aligned}$$ The result of decoding after $k$ iterations, denoted by $\mathbf{x}^{(k)}$, is determined by the sign of $m_i^{(k)} = \gamma_i + \sum_{\alpha \in \mathcal{N}(i)} \varpi^{(k)}_{\alpha \to i}$. If $m_i^{(k)} > 0$ then $x_i^{(k)}=0$, otherwise $x_i^{(k)}=1$. In the limit of high SNR, when the absolute value of the messages is large, the BP algorithm becomes the min-sum algorithm, where the message from the check $\alpha$ to the bit $i$ reads: $$\begin{aligned} \varpi^{(k)}_{\alpha \to i} & = & \min \big| \omega^{(k-1)}_{* \backslash i \to \alpha} \big| \cdot \prod_{j \in \mathcal{N}(\alpha)\backslash i} {\rm sign} \big( \omega^{(k-1)}_{j \to \alpha} \big) \nonumber\end{aligned}$$ The min-sum algorithm has a property that the Gallager A/B and the LP decoders also possess: if we multiply all the likelihoods $\gamma_i$ by a factor, the decoding proceeds as before and produces the same result. Note that we do not have this “scaling” property in the BP algorithm.
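The two check-node rules above, together with the channel LLRs from the Channel Assumptions subsection, can be sketched in a few lines of Python; the function names are ours and the code is an illustrative simplification, not the paper's implementation.

```python
import math

def llr_bsc(y_hat, eps):
    """Channel LLR gamma_i for a BSC output bit y_hat in {0, 1}."""
    l = math.log((1 - eps) / eps)
    return l if y_hat == 0 else -l

def bp_check_to_var(incoming):
    """BP (sum-product) check-node rule in the tanh domain: each outgoing
    message excludes the recipient's own incoming message."""
    out = []
    for k in range(len(incoming)):
        prod = 1.0
        for j, m in enumerate(incoming):
            if j != k:
                prod *= math.tanh(0.5 * m)
        out.append(2.0 * math.atanh(prod))
    return out

def minsum_check_to_var(incoming):
    """Min-sum approximation: minimum of the other magnitudes times the
    product of the other signs."""
    out = []
    for k in range(len(incoming)):
        others = incoming[:k] + incoming[k + 1:]
        sign = 1.0
        for m in others:
            sign *= 1.0 if m >= 0 else -1.0
        out.append(sign * min(abs(m) for m in others))
    return out
```

One can check directly that `minsum_check_to_var` commutes with a rescaling of all messages (the "scaling" property noted above), while `bp_check_to_var` does not; the min-sum magnitude also upper-bounds the BP magnitude while agreeing with it in sign.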
To decode the message in complicated cases (when the message distortion is large), we may need a large number of iterations, although typically a few iterations would be sufficient. To speed up the decoding process, one may check after each iteration whether the output of the decoder is a valid codeword, and, if so, halt decoding. ### Linear Programming Decoder The ML decoding of the code $\mathcal{C}$ allows a convenient LP formulation in terms of the *codeword polytope* $\mbox{poly}(\mathcal{C})$ [@05FWK], whose vertices correspond to the codewords in $\mathcal{C}$. The ML-LP decoder finds $\mathbf{f}=(f_1,\ldots,f_n)$ minimizing the cost function $\sum_{i=1}^{n}\gamma_if_i$ subject to the $\mathbf{f}\in \mbox{poly}(\mathcal{C})$ constraint. The formulation is compact but impractical, as the number of constraints is exponential in the code length. Hence, a *relaxed* polytope is defined as the intersection of all the polytopes associated with the local codes introduced for all the checks of the original code. Associating $(f_1,\ldots,f_n)$ with the bits of the code, we require $$\label{eq1} 0 \leq f_i \leq 1, ~~\forall i \in V$$ For every check node $\alpha$, let $\mathcal{N}(\alpha)$ denote the set of variable nodes which are neighbors of $\alpha$. Let $E_\alpha=\{T \subseteq \mathcal{N}(\alpha): |T| \mbox{~is even}\}$. The polytope $Q_\alpha$ associated with the check node $\alpha$ is defined as the set of points $(\mathbf{f},\mathbf{w})$ for which the following constraints hold $$\begin{aligned} &0 \leq w_{\alpha,T} \leq 1,& \forall T \in E_\alpha \\ &\sum_{T \in E_\alpha} w_{\alpha,T}=1& \\ &f_i=\sum_{T \in E_\alpha, T \ni i }w_{\alpha,T},& \forall i \in \mathcal{N}(\alpha) \label{eq4}\end{aligned}$$ Now, let $Q=\cap_\alpha Q_\alpha$ be the set of points $(\mathbf{f},\mathbf{w})$ such that (\[eq1\])-(\[eq4\]) hold for all $\alpha \in C$.
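As a small aside illustrating the constraints just introduced: for a single check $\alpha$, the polytope $Q_\alpha$ is exactly the convex hull of the indicator vectors of the even-size subsets in $E_\alpha$ (the local codewords), so the linear cost is minimized at one of them and the LP can be solved by direct enumeration. The sketch below is our own simplification and is feasible only for this one-check toy case, not for a full code.

```python
from itertools import combinations

def even_subsets(nbrs):
    """E_alpha: all even-cardinality subsets of a check's neighborhood."""
    subsets = []
    for r in range(0, len(nbrs) + 1, 2):
        subsets.extend(combinations(nbrs, r))
    return subsets

def lp_decode_single_check(gamma):
    """Minimize sum_i gamma_i * f_i over Q_alpha for one check: since Q_alpha
    is the convex hull of the local (even-weight) codewords, it suffices to
    enumerate them.  Ties are broken toward the zero codeword (cost 0)."""
    n = len(gamma)
    best_T, best_cost = (), 0.0  # empty set = zero codeword
    for T in even_subsets(range(n)):
        cost = sum(gamma[i] for i in T)
        if cost < best_cost:
            best_T, best_cost = T, cost
    f = [1 if i in best_T else 0 for i in range(n)]
    return f, best_cost
```

For LLRs $\gamma=(1,-2,-3,0.5)$, the minimizing even-weight configuration flips the two strongly negative positions, with cost $-5$.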
(Note that $Q$, which is also referred to as the fundamental polytope [@03KV; @05VK], is a function of the Tanner graph $G$ and consequently of the parity-check matrix $H$ representing the code $\mathcal{C}$.) The linear code linear program (LCLP) can be stated as $$\min\limits_{(\mathbf{f},\mathbf{w})} \sum_{i \in V}\gamma_i f_i, \mbox{~s.t.~} (\mathbf{f},\mathbf{w}) \in Q.$$ For the sake of brevity, the decoder based on the LCLP is referred to in the following as the LP decoder. A solution $(\mathbf{f},\mathbf{w})$ to the LCLP such that all $f_i$s and $w_{\alpha,T}$s are integers is known as an integer solution. An integer solution represents a codeword [@05FWK]. It was also shown in [@05FWK] that the LP decoder has the ML certificate, i.e., if the output of the decoder is a codeword, then the ML decoder would decode into the same codeword. The LCLP can fail, generating an output which is not a codeword. It is appropriate to mention here that the LCLP can be viewed as the zero-temperature version of the BP decoder, looking for the global minimum of the so-called Bethe free energy functional [@03WJ]. Decoding Failures and Instantons {#section3} ================================ To characterize the performance of a coding/decoding scheme for linear codes over any output symmetric channel, one can assume, without loss of generality, the transmission of the all-zero-codeword, i.e., ${\bm y}={\bm 0}$, when the decoding algorithm satisfies certain symmetry conditions (see Definition 1 and Lemma 1 in [@01RU]). The iterative decoding algorithms that we consider in this paper satisfy these symmetry conditions. The assumption of the transmission of the all-zero-codeword also holds for the LP decoding of linear codes on output symmetric channels, as the polytope $Q$ is highly symmetric and looks exactly the same from any codeword (see [@05FWK] for a proof). Henceforth, we assume that ${\bm y}={\bm 0}$.
A decoding failure is said to have occurred if the output of the decoder is not equal to the transmitted codeword (the all-zero-codeword). The probability of a decoder failure, or the frame error rate, as a function of the SNR $s (=E_b/N_0)$ can be expressed as: $$\begin{aligned} FER(s)=\sum_{\hat{\bm y}} P_{s}(\hat{\bm y})\theta(\hat{\bm y}), \label{FER}\end{aligned}$$ where the sum goes over all the possible outputs of the channel for the zero-codeword input. In the case of a continuous output channel, the sum becomes an integral: $\sum\to\int d\hat{\bm y}$, and the channel probability mass function becomes a probability density function: $\int d\hat{\bm y} P_{s}(\hat{\bm y})=1$. $\theta(\hat{\bm y})$ in Eq. (\[FER\]) is defined to be zero in the case of successful decoding, and unity in the case of failure. $P_{s}(\hat{\bm y})$ is the probability of observing $\hat{\bm y}$ at the output of a channel characterized by the SNR $s$ [^6]. Calculating the above sum/integral exactly is not feasible, and the instanton-based approach consists of approximating the sum/integral by a finite number of terms corresponding to the most probable failures – the instantons. This approximation becomes asymptotically exact in the limit of large SNR, while at smaller SNRs, more terms are needed to obtain an accurate approximation for the FER. Note that the details of the approximate evaluations are different for discrete and continuous channels. In the discrete case, the number of terms is finite. We account for the $k$ most probable configurations, and $FER(s)\approx \sum_{\beta=1}^k {\cal N}_\beta P_{s}(\hat{\bm y}_\beta)$, where the multiplicity factor ${\cal N}_\beta$ counts the number of instantons equivalent under bit permutations. For continuous channels, an instanton is a stationary point of the respective integrand. By stationary point, we mean a local maximum of the noise probability density function.
Hence, for the AWGNC, an instanton is defined as the noise configuration with a minimal (possibly only locally) value of the $L^2$ norm of $\hat{\mathbf{y}}$ that leads to a decoding failure. The $L^2$ norm of a vector $\hat{\mathbf{y}}$ is equal to $\sqrt{\sum_{i \in V}\hat{y_i}^2}$. Note that for the AWGNC, the smaller the $L^2$ norm, the more probable the noise configuration is. ![Illustration of error surface.[]{data-label="Errorsurface"}](fig1.eps){width="2.2in"} The FER approximation should also include, in addition to the multiplicities, the curvature corrections around the stationary point (e.g., within a Gaussian approximation) [@04CCSV; @06SC]. In other words, $ FER(s)\approx \sum_{\beta=1}^k {\cal N}_\beta {\cal C}_s(\hat{\bm y}_\beta) P_s(\hat{\bm y}_\beta)$, where ${\cal C}_s(\hat{\bm y}_\beta)$ is the curvature factor. The multiplicities of the instantons are determined by the symmetry group of the code, and if nothing is known about it, one may assume that the group is trivial and all multiplicities are equal to $1$. In the continuous case, the curvatures are determined by the geometry of the error surface in the vicinity of the instanton (this subject has not been studied numerically so far, as the most important information about the instanton is its weight, which determines the slope of the FER [*vs.*]{} SNR curve in the asymptotic regime). Intuitively, in the case of the AWGNC and $s\to\infty$, ${\cal C}_s(\hat{\bm y}_\beta)=O(1/\sqrt{s})$, the decay of the noise correlations is exponential along one direction (orthogonal to the error surface) and quadratic along the remaining $N-1$ components of the noise vector (see Fig. \[Errorsurface\] for an illustration of the error surface). Consistent with the above statements, instantons $\hat{\bm y}_s$ can also be defined as special configurations of the noise resulting in decoding failures such that any incremental (and channel specific) shift of the noise toward the zero-codeword results in correct decoding.
It is thus useful to also introduce a respective output, $\tilde{\bm y}_s=\mbox{dec}(\hat{\bm y}_s)$, called a pseudo-codeword. It should be noted that this informal definition of the pseudo-codewords is generic and applicable to any channel and decoder. While the output of the LP decoder is well defined and does not suffer from numerical issues, the iterative decoder can exhibit oscillations, i.e., the bits which are decoded wrongly can differ from one iteration to another. As a way to streamline the description of decoding failures in the presence of rounding and iterative uncertainties, Richardson [@03Richardson] suggested the proxy notion of the trapping set, a combinatorial object that accounts for the decoder output over iterations. In the subsequent discussion, we formally define trapping sets and pseudo-codewords and also provide some BSC-specific definitions. If an instanton of a channel/decoder is known, the respective pseudo-codeword can be easily found, and conversely, if a pseudo-codeword is given (i.e., we know for sure that there exists a configuration of the noise which is sandwiched in between the pseudo-codeword and the zero-codeword), the respective instanton can be restored. In fact, this inversion is at the core of the pseudo-codeword/instanton search algorithms discussed in Section \[section4\]. Trapping Sets for Iterative Decoders ------------------------------------ In practice, we assume that the iterative decoder performs a finite number $D$ of iterations. Let $\mathbf{\hat{y}}=(\hat{y}_1,\hat{y}_2,\ldots,\hat{y}_n)$ be a vector which is the input to the iterative decoder and let $\mathbf{x}^{(k)}=(x_1^{(k)},x_2^{(k)},\ldots, x_n^{(k)})$, $k \leq D$, be the output binary vector at the $k^{th}$ iteration. A variable node $i$ is said to be *eventually correct* if there exists a positive integer $K$ such that for all $k \geq K$, $x^{(k)}_i =0$ [@03Richardson].
Formally, a decoder failure is said to have occurred if there does not exist $k$ such that ${\mbox{supp}}(\mathbf{x}^{(k)})=\emptyset$ [@03Richardson]. [@03Richardson](Trapping sets for iterative decoders:) For an input $\mathbf{\hat{y}}$, let $\mathbf{T}(\mathbf{\hat{y}})$ denote the set of variable nodes that are not eventually correct. If $\mathbf{T}(\mathbf{\hat{y}})\neq \emptyset$, then $\mathbf{T}(\mathbf{\hat{y}})$ is a trapping set. If $a=|\mathbf{T}(\mathbf{\hat{y}})|$ and $b$ is the number of odd degree check nodes in the sub-graph induced by $\mathbf{T}(\mathbf{\hat{y}})$, we say $\mathbf{T}(\mathbf{\hat{y}})$ is an $(a,b)$ trapping set. For the BSC, since the input to the decoder as well as the messages passed are discrete, it is easier to define instantons in terms of the number of bits flipped in the input to the decoder. The instantons with the least number of flips will be dominant in the error floor region. We formalize this intuition below. (Critical number for Gallager A/B algorithm) Let $\mathcal{T}$ be a trapping set for the Gallager A/B algorithm and let $\mathbf{\hat{y}} \in GF(2)^n$. Let $\mathbf{Y}(\mathcal{T})=\{\mathbf{\hat{y}}|\mathbf{T}(\mathbf{\hat{y}})=\mathcal{T}\}$. The critical number $m(\mathcal{T})$ of trapping set $\mathcal{T}$ for the Gallager A/B algorithm is the minimum number of variable nodes that have to be initially in error for the decoder to end up in the trapping set $\mathcal{T}$, i.e., $$\displaystyle m(\mathcal{T})=\min_{\mathbf{Y}(\mathcal{T})}{|{\mbox{supp}}(\mathbf{\hat{y}})|}.$$ The most relevant trapping set in the error floor region is the trapping set with the least critical number. (Instanton for Gallager A/B over the BSC) An instanton is a binary vector $\bm{i}$ such that $\mathbf{T}(\bm{i}) = \mathcal{T}$ for some trapping set $\mathcal{T}$ and, for any binary vector $\bm{r}$ such that ${\mbox{supp}}(\bm{r}) \subset {\mbox{supp}}(\bm{i})$, $ \mathbf{T}(\bm{r})=\emptyset$.
The size of an instanton is the cardinality of its support. Given a trapping set, one can consider vectors whose support is a subset of the trapping set as input to the decoder and check whether such vectors are instantons. While rigorous statements cannot be made about finding the smallest-size instantons, the above method gives instantons in most cases (see [@06CSV] for some illustrations). Intuitively, this seems reasonable, as we do not expect inputs to the decoder without errors in the variable nodes of a trapping set to end up in that trapping set. Pseudo-codewords for LP Decoders -------------------------------- In contrast to the iterative decoders, the output of the LP decoder is well defined in terms of pseudo-codewords. [@05FWK] An *integer pseudo-codeword* is a vector $\mathbf{p}=(p_1,\ldots,p_n)$ of non-negative integers such that, for every parity check $\alpha \in C$, the neighborhood $\{p_i: i \in \mathcal{N}(\alpha)\}$ is a sum of local codewords. The interested reader is referred to Section V in [@05FWK] for more details and examples. Alternatively, one may choose to define a *re-scaled pseudo-codeword*, $\mathbf{p}=(p_1,\ldots,p_n)$ where $0 \leq p_i \leq 1, \forall i \in V$, simply equal to the output of the LCLP. In the following, we adopt the re-scaled definition. The cost associated with LP decoding of a vector $\mathbf{\hat {y}}$ to a pseudo-codeword $\mathbf{p}$ is given by $${\mbox{cost}}(\mathbf{\hat{y}},\mathbf{p})= \sum_{i \in V} \gamma_i p_i. \nonumber$$ For an input $\mathbf{\hat{y}}$, the LP decoder outputs the pseudo-codeword $\mathbf{p}$ with minimum ${\mbox{cost}}(\mathbf{\hat{y}},\mathbf{p})$. Since the cost associated with LP decoding of $\mathbf{\hat {y}}$ to the all-zero-codeword is zero, a decoder failure occurs on the input $\mathbf{\hat{y}}$ if and only if there exists a pseudo-codeword $\mathbf{p}$ with ${\mbox{cost}}(\mathbf{\hat{y}},\mathbf{p}) \leq 0$.
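Both ingredients just described, the non-positive-cost failure test and the subset-search heuristic over a trapping set's support, can be sketched compactly. In the code below, `decoder_fails` and the pseudo-codeword list in the usage example are hypothetical stand-ins for a real decoder; the function names are ours.

```python
from itertools import combinations

def lp_cost(gamma, p):
    """cost(y_hat, p) = sum_i gamma_i * p_i for LLR vector gamma."""
    return sum(g * x for g, x in zip(gamma, p))

def lp_failure(gamma, pseudo_codewords):
    """LP decoding fails iff some pseudo-codeword has non-positive cost."""
    return any(lp_cost(gamma, p) <= 0 for p in pseudo_codewords)

def min_failing_support(candidate_bits, n, decoder_fails):
    """Smallest set of flipped bits inside `candidate_bits` (e.g. a trapping
    set) that triggers the caller-supplied failure predicate; brute force
    over subsets in order of increasing size, as in the subset heuristic."""
    for m in range(1, len(candidate_bits) + 1):
        for S in combinations(candidate_bits, m):
            y = [0] * n
            for i in S:
                y[i] = 1
            if decoder_fails(y):
                return set(S)
    return None
```

The size of the returned support plays the role of the critical number when `candidate_bits` is a trapping set; the brute-force cost grows combinatorially, which is why the structured searches of Section \[section4\] are needed in practice.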
A given code $\mathcal{C}$ may have different Tanner graph representations and consequently potentially different fundamental polytopes. Hence, we refer to the pseudo-codewords as corresponding to a particular Tanner graph $G$ of $\mathcal{C}$. [@01Forney] Let $\mathbf{p}=(p_1,\ldots,p_n)$ be a pseudo-codeword distinct from the all-zero-codeword of the code $\mathcal{C}$ represented by Tanner graph $G$. Then, the *pseudo-codeword weight* of $\mathbf{p}$ is defined as follows: - $w_{BSC}(\mathbf{p})$ for the BSC is $$w_{BSC}(\mathbf{p})=\left\{ \begin{array}{cl} 2e,& \mbox{~if~} P_e=\left(\sum_{i \in V}p_i\right)/2; \\ 2e-1,&\mbox{~if~} P_e>\left(\sum_{i \in V}p_i\right)/2, \end{array}\right.$$ where $P_e$ denotes the sum of the $e$ largest components of $\mathbf{p}$, and $e$ is the smallest number such that $P_e \geq \left(\sum_{i \in V}p_i\right)/2$. - $w_{AWGN}(\mathbf{p})$ for the AWGNC is $$w_{AWGN}(\mathbf{p})=\frac{(p_1+p_2+\ldots + p_n)^2}{(p_1^2+p_2^2+\ldots+ p_n^2)}$$ The minimum pseudo-codeword weight of $G$, denoted by $w_{min}^{BSC/AWGN}$, is the minimum over all the non-zero pseudo-codewords of $G$. We now give definitions specific to the BSC. (Median for LP decoding over the BSC) The median noise vector (or simply the median) $M(\mathbf{p})$ of a pseudo-codeword $\mathbf{p}$ distinct from the all-zero-codeword is a binary vector with support $S=\{i_1,i_2,\ldots,i_e\}$, such that $p_{i_1},\ldots,p_{i_e}$ are the $e(=\lceil \left(w_{BSC}(\mathbf{p})+1\right)/2\rceil)$ largest components of $\mathbf{p}$. Note that for an input $\mathbf{\hat{y}}=M(\mathbf{p})$ for some non-zero pseudo-codeword $\mathbf{p}$, we have ${\mbox{cost}}(\mathbf{\hat{y}},\mathbf{p})\leq 0$, and hence the input leads to a decoding failure (the output of the decoder, however, need not be the pseudo-codeword we start with).
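The two weights and the median follow directly from their definitions; a minimal sketch (function names are ours) is:

```python
def w_awgn(p):
    """AWGNC pseudo-codeword weight: (sum p_i)^2 / (sum p_i^2)."""
    s = sum(p)
    return s * s / sum(x * x for x in p)

def w_bsc(p):
    """BSC pseudo-codeword weight via the smallest e such that the e largest
    components sum (P_e) reaches half of the total: 2e if P_e equals the
    half-total exactly, 2e - 1 otherwise."""
    half = sum(p) / 2.0
    acc, e = 0.0, 0
    for x in sorted(p, reverse=True):
        acc += x
        e += 1
        if acc >= half:
            break
    return 2 * e if acc == half else 2 * e - 1

def median_support(p):
    """Support of the median noise vector M(p): the indices of the
    e = ceil((w_BSC(p) + 1) / 2) largest components."""
    e = -(-(w_bsc(p) + 1) // 2)  # ceiling division
    order = sorted(range(len(p)), key=lambda i: p[i], reverse=True)
    return set(order[:e])
```

For an actual codeword of weight $3$, e.g. $\mathbf{p}=(1,1,1,0,0)$, both weights reduce to the Hamming weight: $w_{AWGN}=9/3=3$ and $w_{BSC}=3$, consistent with the definitions.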
(Instanton for LP decoding over the BSC) The BSC *instanton* $\mathbf{i}$ is a binary vector with the following properties: (1) There exists a pseudo-codeword $\mathbf{p}$ such that ${\mbox{cost}}(\mathbf{i},\mathbf{p})\leq {\mbox{cost}}(\mathbf{i},\mathbf{0})=0$; (2) For any binary vector $\mathbf{r}$ such that ${\mbox{supp}}(\mathbf{r}) \subset {\mbox{supp}}(\mathbf{i})$, there exists no pseudo-codeword with ${\mbox{cost}}(\mathbf{r},\mathbf{p})\leq 0$. The size of an instanton is the cardinality of its support. An attractive feature of LP decoding over the BSC is that any input whose support contains an instanton leads to a decoding failure (which is not the case for Gallager A decoding over the BSC) [@08CCV]. This important property is in fact used in searching for instantons. To summarize, evaluating FER vs SNR approximately reduces to finding the set of most probable instantons and evaluating their probabilities, multiplicities and, in the continuous case, also the respective curvatures. Specifically, for LP decoding over the BSC and the Gallager algorithm, the slope of the FER curve in the error floor region is equal to the cardinality of the smallest-size instanton (see [@08ICV] for a formal description). Understanding that the knowledge of the instantons allows efficient approximation of the FER vs SNR dependence (which is our main task), we now discuss approaches to finding the set of instantons for a given error-correction setting in Section \[section4\]. Searching for Instantons {#section4} ======================== As explained in Section \[section3\], the instantons that control the large SNR asymptotic of the FER are the most probable noise configurations corresponding to decoder failures. Stated this way, the problem of finding an instanton becomes an optimization problem, and all the remaining details of this section are related to an efficient implementation of this, generally difficult, optimization problem.
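For the BSC, once a list of instantons is in hand, the discrete approximation $FER(s)\approx \sum_{\beta} {\cal N}_\beta P_{s}(\hat{\bm y}_\beta)$ of Section \[section3\] becomes fully explicit: an instanton with $w$ flipped bits in a length-$n$ code has probability $\epsilon^{w}(1-\epsilon)^{n-w}$. The following one-liner is our own packaging of that formula, not code from the paper.

```python
def fer_estimate_bsc(n, instantons, eps):
    """Instanton-based FER approximation over a BSC with crossover eps:
    FER ~ sum_beta N_beta * eps^{w_beta} * (1 - eps)^{n - w_beta},
    where each entry of `instantons` is (w_beta, N_beta): the instanton
    size (number of flipped bits) and its multiplicity."""
    return sum(mult * eps ** w * (1.0 - eps) ** (n - w)
               for w, mult in instantons)
```

At small $\epsilon$ the term with the smallest $w$ dominates the sum, which is exactly why the error-floor slope equals the cardinality of the smallest instanton.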
Instanton search for iterative decoding over continuous channels ---------------------------------------------------------------- A straightforward method for finding instantons in the case of a continuous channel is based on the standard (amoeba) optimization [@92PBTV] and was discussed by Stepanov and Chertkov in [@06SC]. The main idea of this direct technique is as follows. One randomly draws a unit-length configuration of the noise and finds the scale-up factor which positions the re-scaled noise configuration exactly at the error-surface, so that an incremental increase/decrease of the rescaling factor leads to decoding failure or recovery, respectively. Such a configuration and its probability are recorded, and this operation is repeated $(N-2)$ times, thus generating $N-1$ vertices of a simplex with respective probabilities assigned. Then, aiming to find a more probable point in the interior of the simplex, the current point is transformed according to the standard amoeba rules. The process is repeated until the size of the simplex becomes smaller than a preset accuracy, and the resulting most probable configuration is output as an instanton. The whole optimization is repeated many times, each run generating an instanton. The main advantage of the method is its generality (it can be used for any continuous channel and any soft decoding algorithm). However, implementing the method is costly. Although amoeba optimization could also be used for LP decoding, the PCS method described in Section \[section4c\] is far more effective there, owing to a special property of LP decoding: it is easy to find the instanton (noise realization) corresponding to a decoder output that is a pseudo-codeword. The instanton-amoeba method easily finds the instantons for a code if the number of iterations in decoding is not large (less than $20$). Increasing the number of iterations, $n_{\mbox{\scriptsize it}}$, simply means longer computations. 
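The first step of the amoeba setup — positioning a noise configuration exactly on the error-surface — can be done by a simple bisection on the rescaling factor. The sketch below is a control-flow illustration only: the `fails` predicate stands in for a full decoder run and is an assumption of ours (here a toy threshold rule), not the decoder used in [@06SC].

```python
def scale_to_error_surface(unit_noise, fails, lo=0.0, hi=100.0, tol=1e-8):
    """Bisect for the rescaling factor s* such that decoding recovers for
    s < s* and fails for s > s* along the ray s * unit_noise."""
    assert not fails([lo * x for x in unit_noise])  # weak noise: recovery
    assert fails([hi * x for x in unit_noise])      # strong noise: failure
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fails([mid * x for x in unit_noise]):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Toy "decoder" (an assumption standing in for an actual decoding run):
# fails whenever the total noise exceeds 3.
fails = lambda n: sum(n) > 3.0
s_star = scale_to_error_surface([0.6, 0.8], fails)
print(s_star)  # ~ 3 / 1.4 ≈ 2.1429
```

In the real procedure each bisection query is one decoding run, which is what makes the method expensive for decoders with many iterations.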
A second, more important effect is the enhancement of an irregular, stochastic component in decoding, observed as $n_{\mbox{\scriptsize it}}$ increases. One finds that already a slight variation in the noise can drastically change the result. This makes the function we have to optimize very irregular, which dramatically slows down the optimization process. In the case of a large number of decoder iterations, with a check for a codeword in each iteration, it is not easy to come up with a good starting point for the amoeba. The configurations obtained from amoeba runs with a small number of iterations (when the method is quite effective) are not very useful, as the decoder eventually outputs a codeword. The following two ways to find such configurations were developed, both based on observations from numerical experiments. 1\) Input an instanton for the LP decoder to the min-sum iterative decoder. The instantons for LP decoding (as they have low weight) serve as good seeding noise configurations for the amoeba, since they lie in the erroneous domain of the noise space even for a decoder with a very large number of iterations. 2\) Restrict the noise configuration to the bits where the instanton for a low number of iterations is supported, and work with an optimization problem on these bits only, setting the noise value on all other bits to zero. In this way the number of variables is much lower, so the optimization procedure is far easier to carry out: the smaller the dimension of the space in which the amoeba optimization is done, the easier the problem. The instantons for a low number of iterations usually have noise in only a few bit locations. One can hope that if the noise level on these selected bits is increased (while the noise at all other bits is kept exactly zero), the noise configuration will “survive” many more iterations. The $12.45$-weight instanton (supported on $12$ bits) for the AWGNC and the $410$-iteration decoder reported in [@06SC] was found this way. 
Instanton search for Gallager A/B decoders over the BSC ------------------------------------------------------- In contrast to iterative decoding with a continuous alphabet, the trapping sets and instantons for the Gallager A/B decoder can be found using certain combinatorial considerations, which were first pointed out by Richardson [@03Richardson] and later investigated in detail in [@06CSV; @08CNVM1; @08CNVM2; @08CV]. The trapping sets for Gallager-type decoders are closely related to trapping sets for the bit flipping decoders. Instanton search for LP decoding over the AWGNC {#section4c} ----------------------------------------------- For LP decoding over the AWGNC, another suggestion for solving the difficult optimization problem faster was formulated by Chertkov and Stepanov in [@08CS]. This pseudo-codeword search (PCS) algorithm, originally stated for the continuous channel model, is based on the aforementioned relation between instantons and respective pseudo-codewords. Specifically, if a pseudo-codeword ${\bm p}$ corresponding to an instanton is known, then reconstructing the respective instanton $\tilde {\bm p}$ is equivalent to maximizing the probability of the noise under the condition that the probabilities of the noise configuration counted from the zero-codeword and from the pseudo-codeword ${\bm p}$ are identical, i.e. $$\tilde{\bm p}=\left.\mbox{argmax}_{\bm n} P({\bm n})\right|_{P({\bm n})=P({\bm n}+{\bm p});\quad {\bm p}\neq {\bm 0}}. \label{max_prob}$$ The idea of the method of [@08CS] consists of generating a sufficiently strong configuration of the noise (so that the resulting decoding is erroneous), decoding it into a pseudo-codeword, and then assuming that the pseudo-codeword shares an error-surface with the zero-codeword. The projected instanton is then reconstructed using Eq. (\[max\_prob\]), even though the noise configuration, especially after the first iteration, is not an actual instanton. 
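For Gaussian noise, the constrained maximization in Eq. (\[max\_prob\]) admits a simple closed form if one assumes, as we do in this sketch (our simplification, not the general construction of [@08CS]), that decoding fails exactly when the linear LP cost $\sum_i p_i(1-n_i)\leq 0$: the most probable noise point on that error-surface is proportional to the pseudo-codeword itself, and its squared norm reproduces $w_{AWGN}(\mathbf{p})$.

```python
def awgn_instanton(p):
    """Closest (most probable Gaussian) noise point on the hyperplane
    sum_i p_i * (1 - n_i) = 0, under the linear-cost failure model:
    n_i = p_i * (sum_j p_j) / (sum_j p_j^2)."""
    scale = sum(p) / sum(x * x for x in p)
    return [scale * x for x in p]

# Same hypothetical pseudo-codeword as before:
p = [1.0, 1.0, 0.5, 0.5, 0.0]
n = awgn_instanton(p)
# The squared norm of the instanton equals the AWGNC pseudo-codeword
# weight (sum p)^2 / (sum p^2):
print(sum(x * x for x in n))  # ≈ 3.6
```

This consistency between the projected noise and $w_{AWGN}$ is what makes each PCS iteration cheap: one LP decoding plus one closed-form projection.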
This procedure is repeated until the input and the output of an iteration coincide. It was shown empirically in [@08CS] that such a scheme, formulated for the LP decoder, outputs a sequence of noise configurations whose probabilities increase monotonically with the number of iterations, converging in a small number of iterations to an instanton. Instanton search for LP decoding over the BSC --------------------------------------------- The PCS was extended to the case of LP decoding over the BSC by Chilappagari *et al.* in [@08CCV]. The algorithm proposed in [@08CCV], termed the instanton search algorithm (ISA), is provably efficient and outputs an instanton in a bounded number of steps. We summarize the algorithm below. \ **Initialization step**: Initialize to a binary input vector $\mathbf{r}$ containing a sufficient number of flips so that the LP decoder decodes it into a pseudo-codeword different from the all-zero-codeword. Apply the LP decoder to $\mathbf{r}$ and denote the pseudo-codeword output of LP by $\mathbf{p}^{1}$.\ **Step $l$**: Take the pseudo-codeword $\mathbf{p}^l$ (the output of step $l-1$) and calculate its median $M(\mathbf{p}^l)$. Apply the LP decoder to $M(\mathbf{p}^l)$ and denote the output by $\mathbf{p}_{M_l}$. Only two cases arise: - $w_{BSC}(\mathbf{p}_{M_l}) < w_{BSC}(\mathbf{p}^l)$. Then $\mathbf{p}^{l+1}=\mathbf{p}_{M_l}$ becomes the output of step $l$ and the input of step $l+1$. - $w_{BSC}(\mathbf{p}_{M_l})=w_{BSC}(\mathbf{p}^l)$. Let the support of $M(\mathbf{p}^l)$ be $S=\{i_1,\ldots,i_{k_l}\}$. Let $S_{i_t}=S \backslash \{i_t\}$ for some $i_t \in S$. Let $\mathbf{r}_{i_t}$ be a binary vector with support $S_{i_t}$. Apply the LP decoder to all $\mathbf{r}_{i_t}$ and denote the $i_t$-th output by $\mathbf{p}_{i_t}$. If $\mathbf{p}_{i_t}=\mathbf{0}, \forall i_t$, then $M(\mathbf{p}^l)$ is the desired instanton and the algorithm halts. Otherwise, some $\mathbf{p}_{i_t} \neq \mathbf{0}$ becomes the output of step $l$ and the input of step $l+1$. 
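The loop above can be sketched compactly. The sketch below illustrates only the control flow: `lp_decode` is a stand-in stub of our own devising (a real implementation solves the LP relaxation over the fundamental polytope), and the small helpers repeat the weight/median definitions so the snippet is self-contained.

```python
import math

def w_bsc(p):
    # BSC pseudo-codeword weight, from the definition given earlier
    half = sum(p) / 2.0
    run, e = 0.0, 0
    for x in sorted(p, reverse=True):
        run, e = run + x, e + 1
        if run >= half:
            break
    return 2 * e if math.isclose(run, half) else 2 * e - 1

def median(p):
    # flips on the e = ceil((w_BSC + 1)/2) largest components
    e = math.ceil((w_bsc(p) + 1) / 2)
    m = [0] * len(p)
    for i in sorted(range(len(p)), key=lambda i: -p[i])[:e]:
        m[i] = 1
    return m

def isa(r, lp_decode):
    """ISA skeleton: lp_decode maps a binary input to a pseudo-codeword
    (the all-zero vector denotes successful decoding)."""
    p = lp_decode(r)
    while True:
        m = median(p)
        p_m = lp_decode(m)
        if w_bsc(p_m) < w_bsc(p):
            p = p_m                  # case 1: strictly lighter pseudo-codeword
            continue
        for i in [j for j, b in enumerate(m) if b]:
            r_i = list(m)
            r_i[i] = 0               # remove one flip from supp(M(p))
            p_i = lp_decode(r_i)
            if any(p_i):             # a strict sub-pattern still fails
                p = p_i
                break
        else:
            return m                 # no sub-pattern fails: m is an instanton

# Toy 5-bit decoder with a purely illustrative failure rule (our assumption):
def lp_decode(r):
    s = {i for i, b in enumerate(r) if b}
    if {0, 1, 2} <= s:
        return [1.0, 1.0, 1.0, 0.5, 0.0]
    if {0, 1} <= s:
        return [1.0, 1.0, 0.0, 0.0, 0.0]
    return [0.0] * 5

print(isa([1, 1, 1, 0, 0], lp_decode))  # [1, 1, 0, 0, 0] — a size-2 instanton
```

Note that a valid LP decoder guarantees the median of a nonzero pseudo-codeword again decodes to a nonzero pseudo-codeword, which is what keeps the loop well-defined.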
The interested reader is referred to [@08CCV] for a discussion of the various issues that arise in the implementation of the ISA. Numerical Results {#section5} ================= This section summarizes the statistics of instantons found for the $[155,64,20]$ Tanner code [@01TSF] performing over the BSC and the AWGNC and decoded by iterative and LP decoders. The Tanner code is a $(3,5)$-regular code whose Tanner graph has girth $8$ [@01TSF]. Instanton Statistics for the Tanner Code ---------------------------------------- **Gallager A algorithm:** The most dominant trapping set in the error floor domain is the $(5,3)$ trapping set, which has critical number $3$. There are a total of $155$ $(5,3)$ trapping sets, each of which has an instanton of weight $3$ [@06CSV] (see Fig. \[InstantonBSCGalA\]). There are $465$ $(4,4)$ trapping sets, each with critical number $4$. Hence, the slope of the FER curve in the error floor region is dominated by the $(5,3)$ trapping sets and is equal to $3$. The trapping sets for the Gallager A algorithm are found by a combination of simulations and combinatorial considerations (see [@03Richardson; @06CSV] for more details). **Iterative BP:** The instantons for the $4$-iteration decoder were analyzed by the instanton-amoeba method in [@05SCCV]. The three lowest instantons were found, all of which contained a specific characteristic $12$-bit structure. It turns out that this structure is responsible for errors even for a very large number of iterations [@06SC]. MC simulations show that the error floor asymptotic for the Tanner code under the iterative decoder with a large number of iterations is determined by these structures (resulting in an effective distance of $12.45$ [@06SC]). All the trapping sets corresponding to the lowest weight instantons contain an $(8,2)$ trapping set, which is shown in Fig. \[InstantonAWGNIterative\]. 
For more details on the FER curves for different numbers of iterations and the relevant discussion, the interested reader is referred to [@06SC]. **LP decoder over BSC:** The ISA described in Section \[section4\] found $155$ distinct instantons of size $5$ (the corresponding pseudo-codewords have BSC weight $9$). The support of each of these instantons is a $(5,3)$ trapping set, shown in Fig. \[InstantonBSCLP\] (from the symmetry of the Tanner code it can be verified that there are exactly $155$ such structures present in the Tanner graph). The ISA also discovered higher-weight instantons (see [@08CCV] for more details), but the instantons of size $5$ are the most dominant ones in the error floor region. **LP decoder over AWGNC:** The PCS algorithm of [@08CS] found many low-weight pseudo-codewords ($16.4037$ being the least weight found by the PCS). The weighted-median noise configurations (instantons; see [@08CS]) corresponding to the various low-weight pseudo-codewords have high noise at the $5$ variable nodes of the $(5,3)$ trapping sets. In fact, the respective BSC weight-$9$ pseudo-codewords have low weight on the AWGNC as well (though not the absolute lowest!). The support of each of the lowest-weight pseudo-codewords is large, but the components at the variable nodes corresponding to the $(5,3)$ trapping set have maximum value (illustrated in Fig. \[InstantonAWGNLP\]). An important insight gained from this comparison is that the decoding failures of various algorithms on different channels are closely related and depend on only a few topological structures. These relations can be exploited to find instantons for a given decoder on a given channel based on the knowledge of instantons for another, already analyzed decoder, which may even be operating over another channel. This relation also suggests the design of a better code, an idea substantiated in the next subsection. 
Code Design for Increasing the Smallest Instanton Size ------------------------------------------------------ In [@07KS; @08CNVM2], it was shown that the minimum pseudo-codeword weight (for LP decoding) and the minimum critical number (for Gallager A/B decoding) of a code increase with the girth of the underlying Tanner graph. While girth-optimized codes are known to perform better in general, the code length and degree distribution place a fundamental restriction on the best achievable girth. Observing that the instantons for different decoding algorithms performing over different channels have a common underlying topological structure (e.g. the $(5,3)$ trapping set in the case of the $[155,64,20]$ code), it is natural to design a similar but new code which excludes these troublesome structures. This suggests a natural code optimization technique aimed at an improved instanton distribution. Starting with a reasonably good code (constructed either algebraically or by the progressive edge growth (PEG) method [@05HEA]), we find the most damaging instantons and their underlying topological structure. We then construct a new code avoiding such subgraphs (either by swapping edges, by increasing the code length, or by a combination of both). We iterate this procedure until the code can no longer be improved or the complexity becomes computationally prohibitive. For Gallager A decoding, it has been proved in [@08CKV] that codes with Tanner graphs of girth $8$ which avoid the $(5,3)$ trapping set and weight-$8$ codewords can correct all error patterns of weight $3$ or less. While proving a similar result might be difficult for the iterative decoder over the AWGNC and for the LP decoder, such considerations nonetheless play a role in our code design strategy. An algorithm for constructing a code meeting the Gallager A-related conditions was provided in [@08CKV]. 
This algorithm can be seen as a generalization of the PEG algorithm [@05HEA]. Given a list of forbidden subgraphs, at every step of the algorithm an edge is established such that the resulting graph at that stage does not contain any of the forbidden subgraphs. (The PEG algorithm is a special case forbidding cycles shorter than a given threshold.) Using the algorithm proposed in [@08CKV], we constructed a new code of length $155$ with uniform left degree $3$ and with most check nodes of degree $5$. By construction, this code avoids $(5,3)$ trapping sets. This results in a steeper FER slope of $4$ in the error floor domain under the Gallager A decoder, as shown in Fig. \[BSCPerformanceGalA\]. The dominant trapping set for the new code, with critical number $4$, is the $(4,4)$ trapping set (an eight-cycle) and has multiplicity $662$. Fig. \[BSCPerformanceGalA\] also shows the predicted performance at very low $\epsilon$ (the method to predict the error floor performance using the trapping set statistics is described in detail in [@06CSV]). ![Comparison of the FER performance of the Tanner code and the new code under the Gallager A algorithm. 
Plotted also is the asymptotic prediction made using the statistics of the lowest weight instantons.[]{data-label="BSCPerformanceGalA"}](fig3.eps){width="3.2in"} ![Instanton weight distribution for the Tanner code and the new code for LP decoding over the BSC, as found by running the ISA 2000 times.[]{data-label="OldvsNewBSC"}](fig4.eps){width="3.4in"} ![Pseudo-codeword weight distribution for the Tanner code and the new code on the AWGNC, as found by running the PCS algorithm 2000 times.[]{data-label="OldvsNewAWGN"}](fig5.eps){width="3.4in"} \[table:parameters2\]

| Code        | Instantons | 5   | 6   | 7   | 8   | 9   | 10  | 11 | 12 | 13 |
|-------------|------------|-----|-----|-----|-----|-----|-----|----|----|----|
| Tanner code | Total      | 715 | 194 | 248 | 230 | 295 | 201 | 74 | 10 | 1  |
| Tanner code | Unique     | 155 | 177 | 238 | 228 | 295 | 201 | 74 | 10 | 1  |
| New code    | Total      |     | 106 | 409 | 622 | 508 | 247 | 62 | 11 | 1  |
| New code    | Unique     |     | 80  | 397 | 617 | 508 | 247 | 62 | 11 | 1  |

![Comparison of the FER performance of the Tanner code and the new code under LP decoding over the BSC. Plotted also is the asymptotic prediction using instanton statistics for the Tanner code. For the new code, as the total number of instantons of weight six is unknown, different curves are plotted (labeled 1, 2, 3) assuming different numbers (200, 2000, and 5000, respectively) of weight-six instantons.[]{data-label="BSCPerformanceLP"}](fig6.eps){width="3.2in"} The minimum weight of an instanton for LP decoding of the new code over the BSC, as found by the ISA, is $6$ (we have independently verified by exhaustive search that the code is in fact capable of correcting all error patterns with up to five errors). Table \[table:parameters2\] shows the instanton statistics, by weight, for the Tanner code and the new code, found by running the ISA 2000 times with 20 random flips. The numbers of unique instantons for the two codes are illustrated as a histogram in Fig. \[OldvsNewBSC\]. The pseudo-codeword weight distribution for LP decoding over the AWGNC for the two codes is shown in Fig. \[OldvsNewAWGN\]. The FER performance of the Tanner code and the new code under the LP decoder over the BSC is shown in Fig. 
\[BSCPerformanceLP\]. While all the lowest-weight instantons for the Tanner code have been found by the ISA, the same cannot be said for the new code [^7]. Hence, we can only predict the slope of the FER curve in the error floor region, and not its exact value. This can be remedied by running more trials of the ISA, or by studying the automorphism group of the code and exploiting the structure of the code to find the multiplicity of the lowest-weight instantons. In Fig. \[BSCPerformanceLP\], we have plotted the predicted FER curve assuming different values (200, 2000 and 5000, respectively) for the number of instantons of weight six. All the above statistics illustrate the superiority of the new code. Conclusion {#section6} ========== In this paper, we presented a comprehensive description of various instanton-based techniques for the analysis and reduction of error floors of LDPC codes. The most powerful method discussed is the pseudo-codeword/instanton search algorithm, designed specifically for the LP decoder. Using the instanton-based technique for the analysis of sample (intermediate-size) codes, e.g. the $[155,64,20]$ Tanner code, we conclude that the underlying topological structures of the most probable instantons, found for the same code but different channels and decoders, are related to each other. Understanding the graphical structure of the instantons and their relation to decoding failures leads to a method for constructing codes whose Tanner graphs are free of these structures. The instanton technique, applied to such a code and complemented by direct Monte Carlo simulations, confirms the success of the new code improvement strategy. 
Future work includes: (1) refining the above techniques and applying them to longer codes; (2) developing improved semi-analytical methods for FER estimation, more specifically combining instantons and MC in order to obtain a good approximation of the entire FER curve; (3) optimizing decoders to reduce error floors; and (4) finding other combinatorial strategies for designing low error floor codes, such as the judicious removal of lines in a finite geometry, leading to a point-line incidence matrix free of trapping sets. Acknowledgment {#acknowledgment .unnumbered} ============== Part of the work by S. K. Chilappagari was performed when he was a summer GRA at LANL. The work at LANL, by S. K. Chilappagari and M. Chertkov, was carried out under the auspices of the National Nuclear Security Administration of the U.S. Department of Energy at Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396. B. Vasic and S. K. Chilappagari would like to acknowledge the financial support of the NSF (Grants CCF-0634969 and IHCS-0725405) and Seagate Technology. M. G. Stepanov would like to acknowledge the support of NSF grant DMS-0807592. The authors would like to thank A. R. Krishnan for providing the modified code and the anonymous reviewers for their suggestions. R. G. Gallager, *Low Density Parity Check Codes*. Cambridge, MA: M.I.T. Press, 1963. D. J. C. Mackay, “Good error-correcting codes based on very sparse matrices,” *IEEE Trans. Inform. Theory*, vol. 45, no. 2, pp. 399–431, 1999. J. Pearl, *Probabilistic Reasoning in Intelligent Systems*. San Francisco, CA: Kaufmann, 1988. V. V. Zyablov and M. S. Pinsker, “Estimation of the error-correction complexity for [G]{}allager low-density codes,” *Problems of Information Transmission*, vol. 11, no. 1, pp. 18–28, 1976. M. Sipser and D. Spielman, “Expander codes,” *IEEE Trans. Inform. 
Theory*, vol. 42, no. 6, pp. 1710–1722, 1996. J. Feldman, M. Wainwright, and D. Karger, “Using linear programming to decode binary linear codes,” *IEEE Trans. Inform. Theory*, vol. 51, no. 3, pp. 954–972, March 2005. T. Richardson and R. Urbanke, *Modern Coding Theory*. Cambridge University Press, March 2008. \[Online\]. Available: <http://lthcwww.epfl.ch/mct/index.php> T. J. Richardson and R. Urbanke, “The capacity of low-density parity-check codes under message-passing decoding,” *IEEE Trans. Inform. Theory*, vol. 47, no. 2, pp. 599–618, 2001. T. J. Richardson, M. Shokrollahi, and R. Urbanke, “Design of capacity-approaching irregular low-density parity-check codes,” *IEEE Trans. Inform. Theory*, vol. 47, no. 2, pp. 638–656, 2001. L. Bazzi, T. Richardson, and R. Urbanke, “Exact thresholds and optimal codes for the binary symmetric channel and [Gallager’s]{} decoding algorithm [A]{},” *IEEE Trans. Inform. Theory*, vol. 50, pp. 2010–2021, 2004. D. Burshtein and G. Miller, “Expander graph arguments for message-passing algorithms,” *IEEE Trans. Inform. Theory*, vol. 47, no. 2, pp. 782–790, 2001. D. Burshtein, “On the error correction of regular [LDPC]{} codes using the flipping algorithm,” *IEEE Trans. Inform. Theory*, vol. 54, no. 2, pp. 517–530, Feb. 2008. J. Feldman and C. Stein, “[LP]{} decoding achieves capacity,” in *SODA ’05: Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms*. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, 2005, pp. 460–469. J. Feldman, T. Malkin, R. A. Servedio, C. Stein, and M. J. Wainwright, “[LP]{} decoding corrects a constant fraction of errors,” *IEEE Trans. Inform. Theory*, vol. 53, no. 1, pp. 82–89, 2007. C. Daskalakis, A. G. Dimakis, R. M. Karp, and M. J. Wainwright, “Probabilistic analysis of linear programming decoding,” *IEEE Trans. Inform. Theory*, vol. 54, no. 8, pp. 3565–3578, August 2008. \[Online\]. 
Available: <http://dblp.uni-trier.de/db/journals/tit/tit54.html#DaskalakisDKW08> T. J. Richardson, “Error floors of [LDPC]{} codes,” in *Proc. 41st Annual Allerton Conf. on Communications, Control and Computing*, 2003, pp. 1426–1435. \[Online\]. Available: <http://www.hpl.hp.com/personal/Pascal_Vontobel/pseudocodewords/papers> C. Di, D. Proietti, T. Richardson, E. Telatar, and R. Urbanke, “Finite length analysis of low-density parity-check codes on the binary erasure channel,” *IEEE Trans. Inform. Theory*, vol. 48, pp. 1570–1579, 2002. A. Orlitsky, K. Viswanathan, and J. Zhang, “Stopping set distribution of [LDPC]{} code ensembles,” *IEEE Trans. Inform. Theory*, vol. 51, no. 3, pp. 929–953, March 2005. N. Wiberg, “Codes and decoding on general graphs,” Ph.D. dissertation, Univ. Linköping, Sweden, Dept. Elec. Eng., 1996. B. Frey, R. Koetter, and A. Vardy, “Signal-space characterization of iterative decoding,” *IEEE Trans. Inform. Theory*, vol. 47, no. 2, pp. 766–781, Feb. 2001. G. D. Forney, R. Koetter, F. R. Kschischang, and A. Reznik, “On the effective weights of pseudocodewords for codes defined on graphs with cycles,” in *Codes, systems and graphical models*. Springer, 2001, pp. 101–112. P. O. Vontobel and R. Koetter, “Graph-cover decoding and finite length analysis of message-passing iterative decoding of [LDPC]{} codes,” 2005. \[Online\]. Available: <http://arxiv.org/abs/cs.IT/0512078> P. Vontobel and R. Koetter, “On the relationship between linear programming decoding and min-sum algorithm decoding,” in *Proc. of the Int. Symp. on Inform. Theory and its Appl.*, Oct. 10-13 2004, pp. 991–996. D. J. C. MacKay and M. J. Postol, “Weaknesses of [M]{}argulis and [R]{}amanujan–[M]{}argulis low-density parity-check codes,” in *Proceedings of MFCSIT2002, Galway*, ser. Electronic Notes in Theoretical Computer Science, vol. 74. Elsevier, 2003. \[Online\]. 
Available: <http://www.inference.phy.cam.ac.uk/mackay/abstracts/margulis.html> M. G. Stepanov, V. Chernyak, M. Chertkov, and B. Vasic, “Diagnosis of weaknesses in modern error correction codes: A physics approach,” *Phys. Rev. Lett.*, vol. 95, p. 228701, 2005. C. Kelley and D. Sridhara, “Pseudocodewords of [T]{}anner graphs,” *IEEE Trans. Inform. Theory*, vol. 53, no. 11, pp. 4013–4038, Nov. 2007. S.-T. Xia and F.-W. Fu, “Minimum pseudoweight and minimum pseudocodewords of [LDPC]{} codes,” *IEEE Trans. Inform. Theory*, vol. 54, no. 1, pp. 480–485, Jan. 2008. R. Smarandache and P. O. Vontobel, “Pseudo-codeword analysis of [Tanner]{} graphs from projective and [E]{}uclidean planes,” *IEEE Trans. Inform. Theory*, vol. 53, no. 7, pp. 2376–2393, 2007. R. Smarandache, A. E. Pusane, P. O. Vontobel, and D. J. Costello, “Pseudo-codewords in [LDPC]{} convolutional codes,” 2006, pp. 1364–1368. O. Milenkovic, E. Soljanin, and P. Whiting, “Asymptotic spectra of trapping sets in regular and irregular [LDPC]{} code ensembles,” *IEEE Transactions on Information Theory*, vol. 53, no. 1, pp. 39–55, 2007. C.-C. Wang, S. R. Kulkarni, and H. V. Poor, “Exhaustion of error-prone patterns in [LDPC]{} codes,” to appear in IEEE Trans. Info. Theory. \[Online\]. Available: <http://arxiv.org/abs/cs/0609046> V. Chernyak, M. Chertkov, M. Stepanov, and B. Vasic, “Instanton method of post-error-correction analytical evaluation,” in *Proc. IEEE Inform. Theory Workshop*, 2004, pp. 220–224. M. Chertkov and M. Stepanov, “An efficient pseudocodeword search algorithm for linear programming decoding of [LDPC]{} codes,” *IEEE Trans. Inform. Theory*, vol. 54, no. 4, pp. 1514–1520, April 2008. S. K. Chilappagari, M. Chertkov, and B. Vasic, “Provably efficient instanton search algorithm for [LP]{} decoding of [LDPC]{} codes over the [BSC]{},” 2008, submitted to *IEEE Trans. Inform. Theory*. \[Online\]. Available: <http://arxiv.org/abs/0808.2515> P. O. Vontobel, “Papers on pseudo-codewords.” \[Online\]. 
Available: <http://www.hpl.hp.com/personal/Pascal_Vontobel/pseudocodewords/papers> R. M. Tanner, D. Sridhara, and T. Fuja, “A class of group-structured [LDPC]{} codes,” in *Proc. ISCTA*, 2001. \[Online\]. Available: <http://www.soe.ucsc.edu/~tanner/isctaGrpStrLDPC.pdf> R. M. Tanner, “A recursive approach to low complexity codes,” *IEEE Trans. Inform. Theory*, vol. 27, no. 5, pp. 533–547, 1981. R. Koetter and P. O. Vontobel, “Graph covers and iterative decoding of finite-length codes,” in *Proc. of the 3rd Intern. Conf. on Turbo Codes and Related Topics*, Sept. 1-5 2003, pp. 75–82. M. J. Wainwright and M. I. Jordan, “Variational inference in graphical models: the view from the marginal polytope,” in *Proc. 40th Allerton Conf. on Communications, Control, and Computing*, 2003. \[Online\]. Available: <http://www.hpl.hp.com/personal/Pascal_Vontobel/pseudocodewords/papers> M. Stepanov and M. Chertkov, “Instanton analysis of low-density-parity-check codes in the error-floor regime,” in *Proc. of the Int. Symp. on Inform. Theory*, July 9-14 2006, pp. 9–14. S. K. Chilappagari, S. Sankaranarayanan, and B. Vasic, “Error floors of [LDPC]{} codes on the binary symmetric channel,” in *Proc. Int. Conf. on Communications*, vol. 3, 2006, pp. 1089–1094. M. Ivkovic, S. K. Chilappagari, and B. Vasic, “Eliminating trapping sets in low-density parity-check codes by using [Tanner]{} graph covers,” *IEEE Trans. Inf. Theory*, vol. 54, no. 8, pp. 3763–3768, 2008. \[Online\]. Available: <http://dx.doi.org/10.1109/TIT.2008.926319> W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, *Numerical Recipes in C: The Art of Scientific Computing*, October 1992. S. K. Chilappagari, D. V. Nguyen, B. Vasic, and M. W. Marcellin, “On trapping sets and guaranteed error correction capability of [LDPC]{} codes and [GLDPC]{} codes,” May 2008. \[Online\]. 
Available: <http://arxiv.org/abs/0805.2427> ——, “Error correction capability of column-weight-three [LDPC]{} codes: [Part II]{},” Jul 2008. \[Online\]. Available: <http://arxiv.org/abs/0807.3582> S. K. Chilappagari and B. Vasic, “Error correction capability of column-weight-three [LDPC]{} codes,” *IEEE Trans. Inform. Theory*, accepted for publication, Nov. 2008. \[Online\]. Available: <http://arxiv.org/abs/0710.3427> X. Y. Hu, E. Eleftheriou, and D. M. Arnold, “Regular and irregular progressive edge-growth [Tanner]{} graphs,” *IEEE Trans. Inform. Theory*, vol. 51, no. 1, pp. 386–398, 2005. S. K. Chilappagari, A. R. Krishnan, and B. Vasic, “[LDPC]{} codes which can correct three errors under iterative decoding,” in *Proc. IEEE Inform. Theory Workshop*, May 2008, pp. 406–410. [^1]: Manuscript received October 1, 2008. Revised January 15, 2009 and March 6, 2009. S. K. Chilappagari \[shashic@ece.arizona.edu\] is with the Electrical and Computer Engineering Department, University of Arizona, Tucson, AZ, 85721, USA. [^2]: M. Chertkov \[chertkov@lanl.gov\] is with Theory Division & CNLS, LANL, Los Alamos, NM, 87545 USA. [^3]: M. G. Stepanov \[stepanov@math.arizona.edu\] is with the Department of Mathematics, University of Arizona, Tucson, AZ, 85721, USA. [^4]: B. Vasic \[vasic@ece.arizona.edu\] is with the Electrical and Computer Engineering Department, University of Arizona, Tucson, AZ, 85721, USA. [^5]: The event of a bit changing from $0$ to $1$ and vice-versa is known as flipping. [^6]: Note that for the BSC, the transition probability $\epsilon$ is a measure of the SNR. For code rate $r$ and BPSK modulated transmission over the AWGNC with noise variance $\sigma^2$, we have $E_b/N_0=1/(2r\sigma^2)$. [^7]: The standard way to find out whether our instanton search exhausted all the unique configurations is as follows. Assume that there are $N$ unique instantons of a given weight and that in each trial the ISA finds each of them with equal probability. 
To estimate the number of ISA runs required for finding all $N$ instantons, one notices that if $N-1$ instantons are already found, the number of trials required to find the last instanton is $\approx N$. If all but two instantons are already found, the number of ISA trials required is $N/2$. Therefore, the average number of ISA trials required to find all the instantons is estimated as $N + N/2 + N/3 + \cdots + N/(N-1) + 1 = N(1 + 1/2 + 1/3 + \cdots + 1/N)$, which turns into $N\ln N$ as $N\to\infty$; i.e. in about $N\ln N$ trials the ISA reliably finds all $N$ instantons.
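The harmonic-sum estimate in the footnote is easy to evaluate numerically. The snippet below (illustrative, under the footnote's equal-probability assumption) checks the $N\ln N$ scaling for $N=155$, the multiplicity of the weight-5 instantons of the Tanner code:

```python
from math import log

def expected_isa_trials(n):
    """Coupon-collector estimate N * (1 + 1/2 + ... + 1/N) of the average
    number of ISA runs needed to find all N equally likely instantons."""
    return n * sum(1.0 / k for k in range(1, n + 1))

n = 155
print(expected_isa_trials(n), n * log(n))  # ≈ 871.7 vs the N ln N limit ≈ 781.7
```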
--- abstract: 'The purpose of the present study is to compare the predictions of different models of the star formation rate (SFR) history in the universe with the Super-Kamiokande upper limit on the neutrino background. To this aim we have calculated the expected neutrino density for the most popular models of the SFR history: Hogg et al., Glazebrook et al., Cole et al., Yuksel et al., Hernquist et al. and Kaplinghat et al. Different from previous studies, we have used the $ \Lambda $CDM model with $ \Omega_{\Lambda}=0.7 $. We have assumed that the detector used for detecting the neutrino flux is SuperK, and also that the electron neutrinos produced in the supernovae oscillate equally into the three standard model flavors. Under these assumptions all models stay below the SuperK upper limit on the event rate, and the supernova relic neutrino background (SRNB) remains undetected. Future neutrino detectors such as KM3NeT will be able to detect the SRNB and distinguish between the models of the SFR history.' author: - 'Jafar Khodagholizadeh $^{1}$[^1], Sepehr Arbabi $^{2}$ [^2] and Zamri Zainal Abidin$^{3} $[^3]' title: Supernova Neutrino Background Bound on the SFR History --- Introduction ============ Neutrino telescopes are at present the only possibility of exploring the distant universe independently of photons. By studying the relic neutrinos stemming from distant supernovae we can derive information about the history of the universe and its star formation rate (SFR). It is a well established fact that core collapse supernovae, like Type II SNe, emit $99\%$ of their energy in the form of neutrinos. These neutrinos form the so-called relic $ \bar{\nu}_{e} $ background, which is created by Type II supernovae and is searched for in large neutrino detectors such as SuperK, SNOLAB and IceCube, as well as future detectors such as KM3NeT. 
The relic $ \bar{\nu}_{e} $ flux depends on the rate of Type II supernovae ($ SN_{II} $) as a function of redshift, which is itself a function of the SFR history of the universe. The metal-enrichment history and the cosmic chemical evolution are also closely related to the SFR and play an important role in the study of the universe. In the recent past a wealth of methods for estimating the star formation rate history have been applied. For a recent review of the observational methods used in this topic, cf. Madau & Dickinson 2014 [@Madau14]. There is a general consensus that the star formation rate increases from now to redshift unity and decreases exponentially towards higher redshifts and earlier times. The star formation history of the universe was the subject of a pioneering work by Madau et al. 1996 [@Madau96]. Since then a large number of studies have been undertaken. For example, Pei and Fall compared models of cosmic chemical evolution with observational indicators: the abundance of neutral hydrogen, heavy elements and dust in damped $Ly\alpha$ systems and present-day galaxies [@Pei; @Fall]. Totani et al. [@Totani; @Totani1] used a time-dependent supernova rate from a model of galaxy evolution based on the population synthesis method. In a similar way, in the work of Bisnovatyi-Kogan and Seidov [@Bisnovatyi], galaxy evolution is considered and the supernova rate is assumed to depend on redshift as $(1+z)^{A}$. In their model, the supernova rate is much higher in the early phase of elliptical galaxies, and more than half of all supernovae explode during the initial 1 $Gyr$ after the formation of galaxies. Thereafter the supernova rate in spiral galaxies becomes dominant, and the total number of supernovae until the present is consistent with the requirements of nucleosynthesis. Hartmann and Woosley [@Hartmann] also compared a model of cosmic chemical evolution with observations of $Ly\alpha$ systems and faint galaxy surveys. 
They obtain a power-law SFR with a time dependence SFR $\propto t^{-2.5}$ [@Lilly]. In most models the maximum of the SFR takes place at a redshift of order unity, where the overall rate was about 10 times higher than it is today. In another analysis of the SFR, Hogg considered measurements of radio, infrared and ultraviolet broad-band photometric indicators, and visible and near-ultraviolet line-emission indicators, from redshift unity to the present day [@Hogg]. Assuming that the SFR is proportional to $(1+z)^{\beta}$, the best-fit exponent obtained was $\beta=2.7\pm0.7$. Cole et al. measured galaxy luminosity functions in the near infrared from a combined 2MASS-2dFGRS selected galaxy catalogue [@Cole]. On this basis they determined the mass of stars formed until today and obtained $\dot{\rho}_{\star}=(a+bz)/[1+(z/c)^{d}]h M_{\bigodot}yr^{-1} Mpc^{-3}$ where $(a,b,c,d)=(0.0166,0.1848,1.9474,2.6316)$. Hernquist and Springel used analytical physical arguments to model the SFR, $\dot{\rho}_{\star}$ [@Hern]. They found that at early times $\dot{\rho}_{\star}$ generically rises exponentially as $z$ decreases, independent of the details of the physical model for star formation, but dependent on the normalization and shape of the cosmological power spectrum. They conclude that at lower redshifts the star formation rate scales approximately as $\dot{\rho}_{\star}\propto H(z)^{4/3}$. In this model the peak of the SFR depends on the model parameters, but half of the stars have formed at redshifts higher than $z\simeq2.2$. Glazebrook et al. studied the overall spectrum of galaxies obtained from the red-selected Sloan Digital Sky Survey and compared the results with the blue-selected 2dF Galaxy Redshift Survey [@Glaze]. They used a double power-law parametrization of the SFR with a break at redshift unity: SFR $\propto (1+z)^{\beta}$ for $z<1$, $\propto (1+z)^{\alpha}$ for $1<z<5$, and zero star formation for $z>5$. 
Finally, Yuksel et al. studied the SFR in relation to gamma-ray bursts [@Yuksel]. Gamma-ray bursts have the advantage that they can be observed at higher redshifts, allowing the SFR to be determined at $z=4-7$. The result reported there was that a steep drop exists in the SFR out to at least $z\sim 6.5$. In the present study we have chosen the analytic models mentioned above for studying the SFR. We start out with models that have few parameters, which can be constrained by comparing such models with observations, which in our case means the relic supernova background. This paper is organized as follows. In the next section we calculate the supernova relic neutrino density for the different models of the SFR. In section 3, we determine the predictions of the models for the neutrino event rate at the Super-Kamiokande detector. The results are presented in section 4, followed by a short discussion in section 5. The Supernovae Relic Neutrino Spectrum {#S2} ====================================== If the supernova rate per unit comoving volume at redshift $ z$ is $ N_{SN}(z) $ and the neutrino energy distribution at the source (at energy $ \epsilon $) is $ L_{\nu}^{S}(\epsilon) $, then the expected flux of relic neutrinos on Earth is given by [@Kaplin]: $$\begin{aligned} \label{1} j_{\nu}(\epsilon)=\frac{C}{H_{0}}\int dz\frac{N_{SN}(z)<L^{S}_{\nu} (\epsilon')>}{(1+z)\sqrt{\Omega_{\Lambda}+(1+z)^{3}\Omega_{M}}}\end{aligned}$$ where $ \epsilon'= (1+z)\epsilon $ is the rest-frame neutrino energy and the neutrinos are assumed to be massless. The spectrum of the neutrinos is parameterized as a Fermi-Dirac distribution with zero chemical potential, normalized to the total energy in a particular neutrino species $ (E_{\nu}) $ emitted by the supernova, i.e. $ \int L_{\nu}^{S}(\epsilon)\, \epsilon\, d\epsilon=E_{\nu}$. 
For each neutrino species $\nu_i$ the energy distribution is given by: $$L^{S}_{\nu_i}=E_{\nu_i} \times \frac{120}{7\pi^{4}}\frac{\epsilon'^{2}}{T_{\nu_i}^{4}}[\exp(\frac{\epsilon'}{T_{\nu_i}})+1]^{-1}$$ From the determination of the SFR we extract the SN rate: $$N_{SN}(z)\propto \dot{\rho}(z)$$ where $ \dot{\rho}(z) $ is the star formation rate. Averaging over a Salpeter Initial Mass Function (IMF) for $ M >8M_{\odot} $, the supernova rate is $ N_{SN}(z)=(\frac{0.013}{M_{\odot}})\dot{\rho}(z) $, while the star formation rate is measured in solar masses. Following [@Totani; @Totani1; @Bisnovatyi; @Hartmann; @Hogg; @Baldry], we parameterize the SFR as: $$\dot{\rho}(z)\propto (1+z)^{A}$$ where the constant $A$ has a different value for $ z<1 $ and $ 1<z<2 $. We also assume that the behavior of $A$ for $ 1<z<2 $ continues to higher redshifts. For $z<1$, Hogg [@Hogg] has compiled measurements of the UV and $H_\alpha $ luminosity density and obtained the $ 68 \% $ C.L. limits $ A=2.7\pm0.7 $ for the $\Lambda$CDM model and $ A=3.3\pm0.8 $ for the $\Omega_{M}=1$ model. From the optical spectrographic measurements in the Sloan Digital Sky Survey (SDSS), limits on $A$ were found to be $ 2-3 $ for $ z<1 $ and $ 0-1 $ for $ z>1 $ [@Glaze]. With these assumptions the rate of relic neutrinos on Earth will be: $$\begin{aligned} J_{\nu}=\frac{C}{H_{0}} \frac{120}{7\pi^{4}}\frac{<E_{\nu}>}{<T_{\nu}>^{4}}\frac{0.013}{M_{\odot}} \int dz\frac{(1+z)^{A} \epsilon '^{2}}{(1+z)\sqrt{\Omega_{\Lambda}+ (1+z)^{3}\Omega_{M}}}\frac{1}{\exp(\frac{\epsilon'}{T_{\nu}})+1}\end{aligned}$$ As before, $ \epsilon'=(1+z)\epsilon $. 
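The redshift integral above can be evaluated numerically. A minimal sketch for the power-law SFR, keeping only the dimensionless shape of the integral (the prefactor $\frac{C}{H_0}\frac{120}{7\pi^4}\frac{\langle E_\nu\rangle}{\langle T_\nu\rangle^4}\frac{0.013}{M_\odot}$ is omitted) and assuming $T_\nu = 5.3$ MeV and $\Omega_M = 0.3$, $\Omega_\Lambda = 0.7$; the cutoff $x_{\max}$ and grid size are numerical choices, not physics inputs:

```python
import math

OMEGA_M, OMEGA_L = 0.3, 0.7  # LambdaCDM parameters used in the paper

def flux_shape(eps, A, T=5.3, x_max=50.0, n=20000):
    """Dimensionless shape of the relic-neutrino flux integral,
    I(eps) = int_1^xmax dx x^(A+1) / sqrt(OL + OM x^3) / (exp(eps*x/T) + 1),
    with x = 1 + z and energies in MeV, via the trapezoidal rule."""
    dx = (x_max - 1.0) / n
    total = 0.0
    for i in range(n + 1):
        x = 1.0 + i * dx
        f = x ** (A + 1) / math.sqrt(OMEGA_L + OMEGA_M * x ** 3) \
            / (math.exp(eps * x / T) + 1.0)
        total += f * (0.5 if i in (0, n) else 1.0)
    return total * dx

# The spectrum falls steeply with neutrino energy:
for eps in (5.0, 10.0, 20.0, 36.0):
    print(eps, flux_shape(eps, A=2.7))
```

The Fermi factor $[\exp(\epsilon x/T_\nu)+1]^{-1}$ implements the redshifted energy $\epsilon' = (1+z)\epsilon$, so the integral decreases monotonically with $\epsilon$ and grows with the SFR exponent $A$.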
By setting $ x=1+z $, we can rewrite the integral: $$\label{nuflux} J_{\nu}=\frac{C}{H_{0}} \frac{120}{7\pi^{4}}\frac{<E_{\nu}>}{<T_{\nu}>^{4}}\frac{0.013}{M_{\odot}} \epsilon ^{2}\int_{1}^{\infty} dx\frac{x^{A+1} }{\sqrt{\Omega_{\Lambda}+ x^{3}\Omega_{M}}}\frac{1}{\exp(\frac{\epsilon x}{T_{\nu}})+1}$$ The data from $ SN1987A $ gave $ E_{\nu}= 8 \times 10^{52}~ergs $ and $ T_{\nu}=4.8 $ MeV, while the model introduced by Woosley et al. [@woos] predicts for a $ 25M_{\odot} $ supernova progenitor: $ E_{\nu}= 11\times 10^{52}~erg $, $ T_{\nu}=5.3 $ MeV. As an approximate solution of (\[nuflux\]) we find: $$\begin{aligned} J_{\nu}=\frac{C}{H_{0}} \frac{120}{7\pi^{4}}\frac{<E_{\nu}>}{<T_{\nu}>^{3}}\frac{0.013}{M_{\odot}} \frac{\epsilon}{\Omega_{M}} [1.8\sqrt{\Omega_{\Lambda}+8\Omega_{M}}-0.5\sqrt{\Omega_{\Lambda}+9\Omega_{M}}]\end{aligned}$$ But the SFR is not always this simple, and it has a more complex shape in the general case. Cole et al. obtained, from high-$z$ galaxy and gamma-ray burst data and assuming a Salpeter initial mass function [@Cole], $$\begin{aligned} \dot{\rho}(z)=(a+bz)h/[1+(z/c)^{d}]\end{aligned}$$ with $a=0.0389 $, $b=0.0545 $, $c=2.973 $ and $d=3.655 $. Hernquist and Springel found [@Hern] $$\begin{aligned} \dot{\rho}(z)=\dot{\rho}_{0}\chi^{2}/[1+\alpha(\chi -1)^{3}\exp(\beta \chi^{7/4})]\end{aligned}$$ where $ \chi=[H(z)/H_{0}]^{2/3} $, $ \dot{\rho}_{0}=0.030 $, $ \alpha=0.323 $ and $ \beta=0.051 $. In the Yuksel et al. model, the SFR is [@Yuksel] $$\begin{aligned} \dot{\rho}(z)=\dot{\rho}_{0}[(1+z)^{\alpha\eta}+\lbrace(1+z)/B\rbrace^{\beta\eta}+\lbrace(1+z)/C\rbrace^{\gamma\eta}]^{1/\eta}\end{aligned}$$ where $ B=(1+z_{1})^{1-\alpha/\beta} $, $ C=(1+z_{1})^{(\beta-\alpha)/\gamma}(1+z_{2})^{1-\beta/\gamma} $, while $ \dot{\rho}_{0}=0.0285 $, $ \alpha=1.6 $, $ \beta=-1.2 $, $ \gamma=-5.7 $, $ z_{1}=1.7 $, $ z_{2}=5.0 $ and $ \eta=-1.62 $. 
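The Cole et al. and Hernquist & Springel fitting formulas above can be evaluated directly. A minimal sketch, using the parameter values quoted in the text and assuming the $\Omega_M = 0.3$, $\Omega_\Lambda = 0.7$, $h = 0.7$ cosmology adopted in this paper:

```python
import math

OMEGA_M, OMEGA_L, H = 0.3, 0.7, 0.7  # cosmology assumed in the paper

def sfr_cole(z, a=0.0389, b=0.0545, c=2.973, d=3.655):
    """Cole et al. fit: rho_dot(z) = (a + b z) h / [1 + (z/c)^d]."""
    return (a + b * z) * H / (1.0 + (z / c) ** d)

def sfr_hernquist(z, rho0=0.030, alpha=0.323, beta=0.051):
    """Hernquist & Springel fit with chi = [H(z)/H0]^(2/3),
    where H(z)/H0 = sqrt(OL + OM (1+z)^3)."""
    chi = (OMEGA_L + OMEGA_M * (1.0 + z) ** 3) ** (1.0 / 3.0)
    return rho0 * chi ** 2 / (1.0 + alpha * (chi - 1.0) ** 3
                              * math.exp(beta * chi ** 1.75))

# Both fits rise from z = 0 towards a peak and then decline:
print(sfr_cole(0.0), sfr_cole(1.0), sfr_cole(6.0))
print(sfr_hernquist(0.0), sfr_hernquist(1.0), sfr_hernquist(6.0))
```

At $z=0$ the Hernquist & Springel formula reduces to $\dot\rho_0$ exactly, since $\chi=1$ there; this is a quick sanity check on the implementation.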
Therefore the relation (\[1\]) becomes $$\label{2} J_{\nu}=\frac{C}{H_{0}} \frac{120}{7\pi^{4}}\frac{<E_{\nu}>}{<T_{\nu}>^{4}}\frac{0.013}{M_{\odot}} \epsilon ^{2}\int_{1}^{\infty} dx\frac{x \dot{\rho}(x) }{\sqrt{\Omega_{\Lambda}+ x^{3}\Omega_{M}}}\frac{1}{\exp(\frac{\epsilon x}{T_{\nu}})+1}$$ The neutrino density function for each model is plotted in Figure 1. The results presented in section 4 are based on numerical calculations of (\[nuflux\]) and (\[2\]) over the intervals $z=0$ to $ z=\infty$, i.e. $x=1$ to $ x=\infty$. ![Neutrino density predictions at different redshifts for each model as explained in section 2.[]{data-label="fig:3"}](2) Event Rate at Super Kamiokande Detector ======================================= State-of-the-art neutrino telescopes are water Cherenkov detectors, of which two general types can be distinguished. The dominant reaction in a light-water Cherenkov detector such as SuperK is $ \bar{\nu}_{e}p\longrightarrow ne^{+} $, with a cross section two orders of magnitude larger than that of the scattering reaction $ \nu_{e}e\longrightarrow\nu_{e}e $; the dominant reaction in a heavy-water Cherenkov detector such as SNO is $ \bar{\nu}D\longrightarrow nn e^{+} $. The cross sections of these reactions are denoted $\sigma_{i} $, where $ i $ refers to the positron for a SuperK-type detector and to deuterium for the SNO type. For simplicity, in the calculation of the event rate $R$ the efficiency of the detectors is assumed to be $ 100\% $ in the observable energy window. SuperK does not detect supernova relic neutrinos at all energies. Below $ 10 $ MeV, the $ \bar{\nu}_{e} $ flux is dominated by nuclear reactors, and other sources of $ \bar{\nu}_{e} $ arriving on Earth are not distinguishable. Between $ 10 $ MeV and $ 19 $ MeV the neutrino background is due to solar neutrinos and to cosmic muons in the detectors. 
Between $ 19 $ MeV and nearly $ 20.3 $ MeV the background is due to atmospheric neutrinos, and the flux of neutrinos at energies greater than $ 36 $ MeV falls off rapidly, so that the observable flux of neutrinos in the SuperK detector is from $20.3$ to $36.3 $ MeV, where $ \epsilon=E_{e}+1.3~MeV $, $ E_{e} $ is the energy of the positron, and $ 1.3 $ MeV is the neutron-proton mass difference. Therefore the differential event rate in the interval $d\epsilon $ at SuperK is $$\begin{aligned} R_{SuperK} = B(1.51\times 10^{33})(9.52 \times 10^{-44})\frac{<E_{\nu}>}{<T_{\nu}>^{4}}(\frac{0.013}{M_{\odot}}) \int_{0}^{\infty} \frac{\dot{\rho}(z)\,dz}{(1+z)\sqrt{\Omega_{M}(1+z)^{3}+\Omega_{\Lambda}}}~~~~~~~~ \nonumber\\ \times \int_{20.3}^{36.3} \frac{\epsilon ^{2}(\epsilon-1.3)^{2}d\epsilon} {\exp(\frac{\epsilon x}{<T_{\nu}>})+1}\nonumber\\ = B(1.51\times 10^{33})(9.52 \times 10^{-44})\frac{<E_{\nu}>}{<T_{\nu}>^{4}}(\frac{0.013}{M_{\odot}}) \int_{1}^{\infty} \frac{x \dot{\rho}(x)dx}{\sqrt{\Omega_{M}x^{3}+\Omega_{\Lambda}}} \times \int_{20.3}^{36.3} \frac{\epsilon ^{2}(\epsilon-1.3)^{2}d\epsilon} {\exp(\frac{\epsilon x}{<T_{\nu}>})+1}\nonumber\\\end{aligned}$$ where $ B=(\frac{120}{7\pi^{4}})C H_{0} ^{-1}=1056 h_{50}^{-1}$, and we use $ \sigma_{p}(\epsilon)= 9.52 \times 10^{-44} E_{e}p_{e}~cm^{2}$ [@vogel], $ <E_{\nu}>=11 \times 10^{52} $ ergs and $<T_{\nu}>=5.3$ MeV. So the differential event rate in the interval $ d\epsilon $ is $ N_{p}\sigma_{p}J_{\nu}(\epsilon) d\epsilon $, and the predicted event rate at the detector is given by: $$R=B N_{i}\frac{<E_{\nu}>}{<T_{\nu}>^{4}}(\frac{0.013}{M_{\odot}})\int \frac{x \dot{\rho}(x)~dx }{\sqrt{\Omega_{\Lambda}+\Omega_{M}x^{3}}}\int \frac{\epsilon^{2}\sigma_{i}(\epsilon)\,d\epsilon}{\exp(\frac{x\epsilon}{<T_{\nu}>})+1}$$ The integration over $\epsilon$ runs over the energy window from 20.3 to 36.3 MeV, and $ N_{p} $ is the number of free protons in the SuperK detector, with a sensitive water mass of $ 22.5 $ kton and $ N_{p}=1.51 \times 10^{33} $. 
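The double integral in the SuperK event rate can be evaluated numerically. A minimal sketch for a power-law SFR $\dot\rho\propto(1+z)^A$, keeping only the dimensionless double integral (the prefactor $B N_p \langle E_\nu\rangle/\langle T_\nu\rangle^4 (0.013/M_\odot)$ and the SFR normalization are omitted) and assuming $\langle T_\nu\rangle = 5.3$ MeV; the cutoff and grid sizes are numerical choices:

```python
import math

OMEGA_M, OMEGA_L = 0.3, 0.7

def rate_shape(A, T=5.3, x_max=50.0, nx=1000, ne=200):
    """Dimensionless shape of the SuperK event-rate double integral for a
    power-law SFR rho_dot ~ (1+z)^A:
      int_1^xmax dx x^(A+1)/sqrt(OL + OM x^3)
        * int_20.3^36.3 de e^2 (e-1.3)^2 / (exp(e*x/T) + 1)
    with x = 1 + z and energies in MeV, via nested trapezoidal rules."""
    dx = (x_max - 1.0) / nx
    de = (36.3 - 20.3) / ne
    total = 0.0
    for i in range(nx + 1):
        x = 1.0 + i * dx
        wx = 0.5 if i in (0, nx) else 1.0
        inner = 0.0
        for j in range(ne + 1):
            e = 20.3 + j * de
            wj = 0.5 if j in (0, ne) else 1.0
            inner += wj * e * e * (e - 1.3) ** 2 / (math.exp(e * x / T) + 1.0)
        total += wx * x ** (A + 1) / math.sqrt(OMEGA_L + OMEGA_M * x ** 3) \
            * inner * de
    return total * dx

print(rate_shape(2.7), rate_shape(0.0))
```

A steeper SFR (larger $A$) weights higher redshifts more heavily and increases the integral, which is why the Hogg-type models give the largest predicted rates.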
The SN relic $ \bar{\nu}_{e} $ event rate at SuperK in the $ \Lambda $CDM model can be written as $$\begin{aligned} R=0.063(\frac{M_{\odot}}{\langle M_{z}\rangle})(\frac{\langle E_{\nu}\rangle}{10^{53} ergs})(\frac{\langle T_{\nu}\rangle}{MeV})\frac{events}{22.5~kton\textrm{-}year}\end{aligned}$$ We have set $ h_{50}=1 $, and the average metal yield per supernova has been taken to be $ 1 M_{\odot} $ in the interest of obtaining an upper bound on the event rate; the first number on the right side of the above expression becomes 0.066 for $ \Omega_{M}=1 $. More recent estimates by Totani et al. [@Totani], using the population synthesis method to model the evolution of star formation in galaxies, obtained a prediction for the flux of SRN at SuperK (in the energy interval from 15 to 40 MeV) of $ 1.2~yr^{-1} $, and the “most optimistic” prediction of their model was an event rate of $ 4.7~yr^{-1} $. Malaney [@Malaney] used the Pei and Fall result [@Pei] to parametrize the evolution of the cosmic gas density rate, integrated over all energies, and found a total SRN flux of $ 2.0-5.4~cm^{-2}s^{-1} $. Hartmann and Woosley [@Hartmann] used an SN rate proportional to $ (1+z)^{4} $, and their best estimate was $ \sim 0.2~cm^{-2} sec^{-1} $. Kaplinghat et al. [@Kaplin] assumed that the supernova rate tracks the metal enrichment rate, $ N_{SN}(z)=\dot{\rho}_{z}(z)/\langle M_{z}\rangle $, where $ \langle M_{z}\rangle $ is the average yield of “metals” $ (Z > 6) $ per supernova and $ \dot{\rho}_{z} $ is the metal enrichment rate per unit comoving volume. They obtained a neutrino flux at SuperK of $ 1.6~cm^{-2}sec^{-1} $, i.e. an event rate of $ R<4 $ events per $ 22.5 $ kton-year for $ 19~MeV< E_{e}<35~MeV$, while over all energies the SRN flux at SuperK is $ 54 ~cm^{-2} sec^{-1} $. Results ======== Here, to obtain the most optimistic SRN event rate at SuperK, we consider neutrino oscillation as a mechanism for maximizing the SRN flux. 
We have assumed that $ \Omega_{M}=0.3 $ and $ \Omega_{\Lambda}=0.7 $. For the Kaplinghat et al. model with $ \Omega_{\Lambda} $, our estimate gives a neutrino flux at SuperK of $ 1.54~cm^{-2}sec^{-1} $ and an event rate of $ R<3 $ events. For Hogg’s model, which compiled the UV and $H_{\alpha}$ luminosity density and obtained the SFR as $(1+z)^{2.7\pm 0.7}$, the flux at the detector is 108.57 $cm^{-2}s^{-1}$. For the SFR obtained from the optical spectrographic measurements in the Sloan Digital Sky Survey (SDSS), the maximum flux of SRN is 111.43 $cm^{-2}s^{-1}$ and the minimum flux is 91.1 $cm^{-2}s^{-1}$. With the Cole model the flux is 3.02 $cm^{-2}s^{-1}$, with the Yuksel model the flux is 80.32 $cm^{-2}s^{-1}$, and for the Hernquist model the flux at the detector is 70.13 $cm^{-2}s^{-1}$. The upper bounds from the models of Kaplinghat, Hogg, Glazebrook, Cole, Yuksel and Hernquist are then 3, 212, 217, 6, 156 and 136 events, respectively, as summarized in Table \[tab\].

Star Formation Rate    Neutrino flux ($cm^{-2}\,sec^{-1}$)   Event rate ($R<$)
---------------------- ------------------------------------- -------------------
Kaplinghat’s model     1.54                                  3
Hogg’s model           108.57                                212
Glazebrook’s model     111.43                                217
Cole’s model           3.02                                  6
Yuksel’s model         80.32                                 156
Hernquist’s model      70.13                                 136

: The neutrino flux and upper bound for the SRN at SuperK in $\Lambda$CDM cosmology\[tab\]

Discussion ========== In the present work we have studied the predictions of the different models of the SFR for the number of relic neutrinos expected in large neutrino detectors such as SuperK. These relic neutrinos have been produced in every supernova explosion throughout the history of the universe; supernovae transfer almost all of their energy to neutrinos. 
Here we have calculated the neutrino production for the different SFR models and have found that the models of Glazebrook [@Glaze] and Hogg [@Hogg] yield estimates for the number of relic neutrinos closest to the present upper limit given by SuperK. As the relic neutrino background is an independent way of constraining the SFR of the universe, this method can be regarded as complementary to the more standard methods. As the construction of more modern neutrino telescopes is underway, the predictions of the SFR models for the relic neutrinos can soon be tested with much more abundant and better data from these telescopes. In this perspective, the present study can be expected to be extended soon and to yield more fruitful results in the light of new data. Acknowledgement {#acknowledgement .unnumbered} =============== Author Abidin Z. Z. would like to acknowledge the University of Malaya’s HIR grant UM.S/625/3/HIR/28 for funding. [00]{} Piero Madau, Mark Dickinson, “Cosmic Star-Formation History”, Annual Review of Astronomy and Astrophysics, Vol. 52: 415-486, 2014. Piero Madau, Henry C. Ferguson, Mark E. Dickinson, Mauro Giavalisco, Charles C. Steidel, and Andrew Fruchter, “High-Redshift Galaxies in the Hubble Deep Field: Color Selection and Star Formation History to z $ \sim $ 4”, Monthly Notices of the Royal Astronomical Society, 283, 1388, 1996. Y. C. Pei, S. M. Fall [*et al.*]{}, ApJ 454, 69 (1995). S. M. Fall, “A global perspective on star formation,” \[astro-ph/9611155\]. T. Totani, K. Sato and Y. Yoshii, “Spectrum of the supernova relic neutrino background and evolution of galaxies,” Astrophys. J.  [**460**]{}, 303 (1996) \[astro-ph/9509130\]. T. Totani, K. Sato, Astroparticle Phys. 3, 367 (Paper I) (1995). G. S. Bisnovatyi-Kogan, S. F. Seidov, Ann. NY Acad. Sci. 422, 319 (1984). D. H. Hartmann, S. E. Woosley, “The cosmic supernova neutrino background”, Astroparticle Phys. 7, 137 (1997). S. J. Lilly, O. Le Fevre, E. Hammer and D. 
Crampton, Ap. J. 460 (1996). D. W. Hogg, “A meta-analysis of cosmic star-formation history,” \[astro-ph/0105280\]. S. Cole [*et al.*]{} \[2dFGRS Collaboration\], “The 2dF Galaxy Redshift Survey: Near infrared galaxy luminosity functions,” Mon. Not. Roy. Astron. Soc.  [**326**]{}, 255 (2001) \[astro-ph/0012429\]. L. Hernquist and V. Springel, “An analytical model for the history of cosmic star formation,” Mon. Not. Roy. Astron. Soc.  [**341**]{}, 1253 (2003) \[astro-ph/0209183\]. K. Glazebrook [*et al.*]{} \[SDSS Collaboration\], “The Sloan Digital Sky Survey: The Cosmic spectrum and star-formation history,” Astrophys. J.  [**587**]{}, 55 (2003) \[astro-ph/0301005\]. H. Yuksel, M. D. Kistler, J. F. Beacom and A. M. Hopkins, “Revealing the High-Redshift Star Formation Rate with Gamma-Ray Bursts,” Astrophys. J.  [**683**]{}, L5 (2008) \[arXiv:0804.4008 \[astro-ph\]\]. M. Kaplinghat, G. Steigman and T. P. Walker, “The Supernova relic neutrino background,” Phys. Rev. D [**62**]{}, 043001 (2000) \[astro-ph/9912391\]. I. K. Baldry, K. Glazebrook, C. M. Baugh, J. Bland-Hawthorn, T. Bridges, R. Cannon, S. Cole and M. Colless [*et al.*]{}, “The 2dF Galaxy Redshift Survey: Constraints on cosmic star-formation history from the cosmic spectrum,” Astrophys. J.  [**569**]{}, 582 (2002) \[astro-ph/0110676\]. S. E. Woosley [*et al.*]{}, 1986, ApJ 302, 19. P. Vogel and J. F. Beacom, “Angular distribution of neutron inverse beta decay, $\overline{\nu}_{e} + p \to e^{+} + n$,” Phys. Rev. D [**60**]{}, 053003 (1999) \[hep-ph/9903554\]. R. A. Malaney, “Evolution of the cosmic gas and the relic supernova neutrino background,” Astropart. Phys.  [**7**]{}, 125 (1997) \[astro-ph/9612012\]. M. Malek [*et al.*]{} \[Super-Kamiokande Collaboration\], “Search for supernova relic neutrinos at SUPER-KAMIOKANDE,” Phys. Rev. Lett.  [**90**]{}, 061101 (2003) \[hep-ex/0209028\]. W. Zhang, et al., “Experimental Limit on the Flux of Relic Anti-neutrinos From Past Supernovae,” Phys. Rev. Lett. 
61 385-388, (1988). [^1]: Lecturer in Shahid Beheshti branch, Farhangian University, Tehran, Iran: gholizadeh@ipm.ir [^2]: arbabi@ipm.ir [^3]: zzaa@um.edu.my
[**1. Introduction**]{} Al(rich)-Mn and Al(rich)-Si-Mn systems contain many crystalline approximants of quasicrystals. These phases are good examples for analysing the effect of the position of transition metal (TM) atoms in stabilizing complex structures related to quasiperiodicity. The origin of the stabilization of quasicrystals is still unclear in spite of many experimental and theoretical studies. For Al-based quasicrystals, a Hume-Rothery mechanism [@Massalski78; @Paxton97] has been shown to play a significant role (see for instance [@Mayou94; @Belin02; @Mizutani02] and Refs. therein). In these phases, the average number of electrons per atom (ratio $e/a$) is an important parameter. Indeed, the occurrence of phases related to quasicrystals is explained by the fact that they are electron compounds with similar $e/a$ ratios in spite of different constituents and different atomic concentrations [@Tsai91; @Gratias93]. A band-energy minimisation occurs when the Fermi sphere touches a pseudo-Brillouin zone, constructed from Bragg vectors ${\bf K}_p$ corresponding to intense peaks in the experimental diffraction pattern. The Hume-Rothery condition for alloying is then $2k_F \simeq K_p$. Assuming a free-electron valence band, the Fermi momentum, $k_F$, is calculated from $e/a$. In sp Hume-Rothery alloys, the valence electrons (sp electrons) are nearly free. Their density of states (DOS) is well described by the Jones theory [@Massalski78; @Paxton97]. The Fermi-sphere/pseudo-Brillouin-zone interaction creates a depletion in the DOS, called a pseudogap, near the Fermi energy $E_F$. Such a pseudogap has been found experimentally and from first-principles calculations in many sp quasicrystals and approximants [@Fujiwara91; @Hafner92; @Mizutani02]. It has also been found in many icosahedral approximants containing TM elements [@FujiAlMnSi; @Krajci97_dAlPdMn; @Mizutani02], whereas there are contradictory results for decagonal phases (Ref. [@Guy_beta] and Refs. therein). 
But the treatment of Al(rich)-TM alloys is more complicated, as the d states of the TM are not nearly-free states. In the case of crystals and icosahedral quasicrystals it has been shown [@GuyEuroPhys93; @GuyPRB95] that sp-d hybridisation increases the pseudogap. In some particular cases a pseudogap may even be induced by the sp-d hybridisation [@Duc92; @HafnerICQ8]. The Hume-Rothery stabilization can also be viewed as a consequence of oscillating pair interactions between atoms (Refs. [@Mayou94; @Galher02] and Refs. therein). Along this line, Zou and Carlsson have shown that an indirect Mn-Mn interaction, mediated by the sp states of Al, is strong enough to favour Mn-Mn distances close to 4.7$\rm \AA$ in Al(rich)-Mn quasicrystals and approximants. Here, it is shown that an indirect Mn-Mn interaction extending up to 10-20$\rm \AA$ induces a pseudogap at $E_F$ in the approximants cubic $\rm Al_{12}Mn$ [@GuyPRB95], orthorhombic o-$\rm Al_6Mn$ [@GuyPRB95], and cubic $\alpha$-$\rm Al_9Mn_2Si$ [@Cooper66]. The importance of the Mn-Mn interaction up to large distances shows the complexity of the stabilizing process. A “frustration” mechanism should then occur that may favour complex atomic structures. As $\rm Al_{12}Mn$, o-$\rm Al_6Mn$ and $\alpha$-$\rm Al_9Mn_2Si$ are related to quasicrystals, this study suggests that a Hume-Rothery stabilization, expressed in terms of the Mn-Mn interaction, is intrinsically linked to the emergence of quasiperiodic structures in Al(Si)-Mn systems. [**2. Effective Bragg potential for Al(rich)-Mn alloys**]{} For sp Hume-Rothery alloys, the valence states (sp states) are nearly-free states scattered by a weak potential (the Bragg potential, $V_B$). In this section, we show briefly that in sp-d Hume-Rothery alloys, sp electrons feel an [*“effective Bragg potential”*]{} [@GuyPRB95; @Guy_beta] that takes into account the strong effect of TM atoms via the sp-d hybridisation. 
Following a classical approximation [@Friedel56; @Anderson61] for Al(Si)-Mn alloys, a simplified model is considered where sp states are nearly free and d states are localized on the Mn sites $i$. The effective hamiltonian for the sp states is written: $$\begin{aligned} H_{eff(sp)}= \frac{\hbar^2\,k^2}{2m} + V_{B,eff} \label{Hamil_eff_sp} \end{aligned}$$ where $V_{B,eff}$ is an effective Bragg potential that takes into account the scattering of sp states by the strong potential of the Mn atoms. $V_{B,eff}$ thus depends on the positions ${\bf r}_i$ of the Mn atoms. Assuming that all Mn atoms are equivalent and that no two Mn atoms are first-neighbours, one obtains [@GuyPRB95; @Guy_beta]: $$\begin{aligned} { V_{B,eff}({\bf r}) = \sum_{\bf K} V_{B,eff}({\bf K}) e^{i{\bf K}.{\bf r}}, } \\ { V_{B,eff}({\bf K}) = V_B({\bf K}) + \frac{|t_{{\bf K}}|^2}{E - E_d} \sum_i e^{-i {\bf K}.{\bf r}_{i}}, } \label{EqVeffectif} \end{aligned}$$ where the vectors ${\bf K}$ belong to the reciprocal lattice, $t_{{\bf K}}$ is an average matrix element that couples the sp states ${\bf k}$ and ${\bf k}-{\bf K}$ via the sp-d hybridisation, and $E_d$ is the energy of the d states. The term $V_B({\bf K})$ is a weak potential independent of the energy $E$. It corresponds to the Bragg potential for sp Hume-Rothery compounds. The last term in equation (\[EqVeffectif\]) is due to the d resonance of the wave function by the potential of the Mn atoms. It is strong in an energy range $ E_d-\Gamma \leq E \leq E_d+\Gamma$, where $2\Gamma$ is the width of the d resonance. This term is essential, as it represents the diffraction of the sp electrons by a network of d orbitals, i.e. the factor $\left(\sum_i e^{-i {\bf K}.{\bf r}_{i}}\right)$ corresponds to the structure factor of the TM-atom sub-lattice. As the d band of Mn is almost half filled, $E_F \simeq E_d$, and this factor is important for energies close to $E_F$. 
Note that the Bragg planes associated with the second term of equation (\[EqVeffectif\]) correspond to the Bragg planes determined by diffraction. This analysis shows that both the sp-d hybridisation and the diffraction of sp states by the sub-lattice of Mn atoms are essential to understand the electronic structure of Al(Si)-Mn alloys [@Guy_beta]. The strong effect of sp-d hybridisation on the pseudogap is then understood in the framework of the Hume-Rothery mechanism. [**3. Two Mn in the Al(Si) matrix**]{} As a Hume-Rothery stabilization is a consequence of oscillations of the charge density of the valence electrons with energy close to $E_F$, the most stable atomic structure is obtained when distances between atoms are multiples of the wavelength of electrons with energy close to $E_F$. Since the scattering of valence sp states by the Mn sub-lattice is strong, the Friedel oscillations of the sp-electron charge around Mn must have a strong effect on the stabilization. Therefore a Hume-Rothery mechanism in Al(rich)-Mn compounds may be analysed in terms of a Mn-Mn pair interaction resulting from the strong sp-d hybridisation. Zou and Carlsson [@ZouPRL93; @Zou94] have calculated this interaction from an Anderson model hamiltonian with two impurities, using a Green’s function method. It is found that a specific Mn-Mn distance of 4.7$\rm \AA$ favours the stabilization of Al-Mn approximants [@ZouPRL93]. As 4.7$\rm \AA$ is larger than first-neighbour distances, this shows the existence of an indirect medium-range Mn-Mn interaction. The indirect interaction is mediated by sp-d hybridisation, where the sp states are mainly Al states. We calculated the indirect Mn-Mn pair interaction $\Phi_{Mn\textrm{-}Mn}$ from the transfer matrix $T$ of two Mn atoms in the free-electron matrix by using the Lloyd formula [@GuyPRB97] (Fig.\[PotMn\_Mn\_m0\]). Following the classical approximation for metals, a phenomenological short-range repulsive term should be added. 
But this term is not important in the present study, as we analyse only the medium-range order, i.e. distances larger than first-neighbour distances (see Fig.\[PotMn\_Mn\_m0\]). The parameters of the calculation are: the Fermi energy $E_F$, fixed by the Al matrix ($E_F=11.7$eV); the width of the d resonance $2\Gamma$, which increases as the sp-d hybridisation increases ($2\Gamma = 2.7$eV); and the energy $E_d$ of the d resonance, which depends on the nature of the transition-metal atom ($E_d=11.37$eV, corresponding to $\sim$5.8 d electrons per Mn atom). A small variation of these parameters does not qualitatively modify the results presented in the following. In this paper only non-magnetic Mn atoms are considered, as most Mn are non-magnetic in quasicrystals and approximants [@Virginie2; @Hippert99; @Prejean02]. In particular, $\rm Al_{12}Mn$, o-$\rm Al_6Mn$ and $\alpha$-$\rm Al_9Mn_2Si$ are non-magnetic [@Virginie2]. Because of the sharp Fermi surface of Al, $\Phi_{TM\textrm{-}TM}$ oscillates (Friedel oscillations of the charge density). Its asymptotic form at large TM-TM distance ($r$) is: $$\begin{aligned} \Phi_{TM\textrm{-}TM}(r) \propto \frac{\cos (2k_F\,r-\delta)}{r^3}\;. \label{EqVeffOscil} \end{aligned}$$ The phase shift $\delta$ depends on the nature of the TM atom and varies from $2\pi$ to $0$ as the d band fills. The magnitude of the medium-range interaction is larger for Mn-Mn than for other transition metals (Cr, Fe, Co, Ni, Cu), because the number of d electrons close to $E_F$ is largest for Mn, and the most delocalized electrons are those with energy near the Fermi energy. The effect analysed here is thus more important for Al-rich alloys containing Mn than for alloys containing other TM elements. ![Mn-Mn pair interaction of two non-magnetic manganese atoms in a free-electron matrix, simulating the aluminium (and silicon) host. 
$2\Gamma = 2.7$eV, $E_d=11.37$eV and $E_F=11.7$eV.[]{data-label="PotMn_Mn_m0"}](PotMn_Mn_m0.eps){width="7cm"} The total DOS of two Mn atoms in the free-electron matrix is: $$\begin{aligned} n(E,r)=n_{sp}^0(E) + \Delta n_{2Mn}(E,r)\,, \label{Eq_DOS_2Mn} \end{aligned}$$ where $n_{sp}^0$ is the free-electron DOS and $\Delta n_{2Mn}$ the variation of the total DOS due to the two Mn atoms. $\Delta n_{2Mn}$ depends on the Mn-Mn distance $r$. When $r$ is very large (almost infinite), each Mn is similar to an isolated Mn impurity, and thus $\Delta n_{2Mn} = 2 \Delta n_{1Mn}$, where $\Delta n_{1Mn}$ is the well-known Lorentzian of the virtual-bound states. But small deviations from the Lorentzian occur for finite $r$. In Fig.\[DOS\_2Mn\_imp\], $\Delta n_{2Mn}(E)$ is drawn for different values of $r$. $r = 3.8\,$${\rm \AA}$ and $ r = 5.8$$\rm \,\AA$ correspond to positive values of the Mn-Mn interaction (Fig.\[PotMn\_Mn\_m0\]). These distances are thus unstable, and the corresponding DOSs at $E_F$ increase with respect to the Lorentzian value. On the other hand, $r = 4.8$$\rm \,\AA$ and $r = 6.7$$\rm \,\AA$ are more stable (minima of the interaction), and the corresponding DOSs at $E_F$ are lower than the Lorentzian value. ![Variation of the DOS, $\Delta n_{2Mn}(E)$, due to 2 Mn impurities in a free-electron matrix. The Mn-Mn distances $r = 3.8\,$${\rm \AA}$ and $ r = 5.8$$\rm \,\AA$ correspond to positive Mn-Mn interaction, whereas $r = 4.8$$\rm \,\AA$ and $r = 6.7$$\rm \,\AA$ correspond to minima of the interaction (see arrows on Fig. \[PotMn\_Mn\_m0\]). []{data-label="DOS_2Mn_imp"}](DOS_2Mn_imp.eps){width="7cm"} [**4. Effect of the Mn sub-lattice on the electronic structure of approximants**]{} [*4.1. Density of states*]{} In this section, the effect of the indirect Mn-Mn interaction on the DOS of approximants is analysed. We focus on the cases of cubic $\rm Al_{12}Mn$, orthorhombic o-$\rm Al_6Mn$ and cubic $\alpha$-$\rm Al_9Mn_2Si$. In each of these phases, Mn sites are similar and Mn atoms are not first-neighbours. 
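The asymptotic pair interaction (\[EqVeffOscil\]) is easy to explore numerically. A minimal sketch, assuming a free-electron $k_F$ obtained from $E_F = 11.7$ eV via $\hbar^2/2m \approx 3.81$ eV$\,$Å$^2$, an arbitrary overall prefactor, and an illustrative phase shift $\delta = 0$ (not a fitted value); successive minima of $\cos(2k_F r - \delta)/r^3$ are then separated by roughly $\pi/k_F \approx 1.8\,$Å:

```python
import math

HBAR2_OVER_2M = 3.81  # eV * Angstrom^2, free-electron units
E_F = 11.7            # eV, fixed by the Al matrix
K_F = math.sqrt(E_F / HBAR2_OVER_2M)  # ~1.75 1/Angstrom

def phi(r, delta=0.0):
    """Asymptotic Friedel-oscillation pair interaction cos(2 kF r - delta)/r^3
    (arbitrary overall prefactor; delta is an illustrative phase shift)."""
    return math.cos(2.0 * K_F * r - delta) / r ** 3

# Locate local minima of phi(r) on a fine grid between 5 and 20 Angstrom.
rs = [5.0 + i * 1e-3 for i in range(15001)]
vals = [phi(r) for r in rs]
minima = [rs[i] for i in range(1, len(rs) - 1)
          if vals[i] < vals[i - 1] and vals[i] < vals[i + 1]]
spacings = [b - a for a, b in zip(minima, minima[1:])]
print(K_F, minima[:3], spacings[0])
```

A nonzero $\delta$ rigidly shifts the whole pattern of minima, which is how a preferred distance such as 4.7 Å can be selected for a particular TM element.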
In metallic alloys, the main features of the DOS are consequences of short-range and medium-range atomic order. The effect of the medium-range order on the pseudogap at the Fermi energy is estimated from a simple model that takes into account only Mn-Mn pair effects, with Mn-Mn distances larger than first-neighbour distances. An important question is to determine the distance up to which the indirect Mn-Mn interaction is essential. Assuming a Hume-Rothery mechanism for the stabilization, the electronic energy is a sum of pair interactions. As the interaction magnitudes are larger for Mn-Mn than for Al-Mn and Al-Al [@Mihalkovic96], $\Phi_{Mn\textrm{-}Mn}$ has a major effect on the electronic energy, and $\Phi_{Al\textrm{-}Al}$, $\Phi_{Al\textrm{-}Mn}$ are neglected. Triplet effects, quadruplet effects, etc., which might be important for transition-metal concentrations larger than 25% [@Widom98], are also neglected. In this model, the total DOS, $n_R(E)$, is calculated as the sum of the variations of the DOS due to each Mn-Mn pair: $$\begin{aligned} \lefteqn{n_{R}(E) = n_{sp}^0(E) + \Delta n_R(E),} \\ \lefteqn{\Delta n_R(E) = x \Delta n_{1Mn}(E) }\nonumber \\ & & + \sum_{r_{ij} < R}\Big( \Delta n_{2Mn}(E,r_{ij}) - 2 \Delta n_{1Mn}(E) \Big), \end{aligned}$$ where $i$, $j$ are indices of the Mn atoms, $r_{ij}$ is the $\rm Mn_i$-$\rm Mn_j$ distance, and $x$ the number of Mn atoms. $\Delta n_{2Mn}$ is defined by equation (\[Eq\_DOS\_2Mn\]). $\Delta n_{1Mn}$ is the variation of the DOS due to one Mn impurity in the free-electron matrix, i.e. the virtual bound state (V.B.S.): a Lorentzian centred at the energy $E_d$ with a full width at half maximum equal to $2\Gamma$. $n_{R}$ is the total DOS computed by taking into account all Mn-Mn interactions up to a Mn-Mn distance equal to $R$, and $\Delta n_R$ is the part of $n_{R}$ due to the Mn atoms.
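For reference, the V.B.S. contribution has a standard closed form. The following is a sketch, assuming the usual Friedel-Anderson result for a non-magnetic, spin-degenerate $d$ level of degeneracy $2(2l+1)=10$; the model above only uses that it is a Lorentzian centred at $E_d$ with full width at half maximum $2\Gamma$:

```latex
% Virtual-bound-state (Lorentzian) variation of the DOS due to one Mn
% impurity; the degeneracy 10 assumes a spin-degenerate d level (l = 2).
\begin{equation}
  \Delta n_{1Mn}(E) \simeq \frac{10}{\pi}\,
    \frac{\Gamma}{(E-E_d)^{2}+\Gamma^{2}} ,
  \qquad
  \int_{-\infty}^{+\infty} \Delta n_{1Mn}(E)\,\mathrm{d}E \simeq 10 .
\end{equation}
```

The normalisation simply states that the virtual bound state can accommodate up to 10 $d$ electrons.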
$\Delta n_{R}(E)$ of $\rm Al_{12}Mn$, o-$\rm Al_6Mn$ and $\rm \alpha$-$\rm Al_9Mn_2Si$ are shown in Fig.\[DOS\_Al12Mn\_Al6Mn\_alpha\] for different values of the distance $R$. The first Mn-Mn distance is 6.47$\rm \,\AA$ in $\rm Al_{12}Mn$, 4.47$\rm \,\AA$ in o-$\rm Al_6Mn$ and 4.61$\rm \,\AA$ in $\alpha$-$\rm Al_9Mn_2Si$, but a well-pronounced pseudogap appears only when the Mn-Mn interactions up to 10-20$\rm \,\AA$ are taken into account. Negative values of $\Delta n_{R}(E)$ induce a reduction of the total DOS with respect to the free-electron value $n_{sp}^0$. For o-$\rm Al_6Mn$, the minimum of the pseudogap corresponds to $\Delta n_{R} \simeq 0$; the total DOS at the minimum of the pseudogap is thus similar to the pure Al DOS, in agreement with first-principles calculations [@GuyPRB95]. But for $\rm Al_{12}Mn$ and $\alpha$-$\rm Al_9Mn_2Si$, as $\Delta n_{R} < 0$, a reduction of the total DOS with respect to the free-electron case is due to the Mn-Mn medium-range interaction. First-principles studies [@GuyPRB95; @FujiAlMnSi] have already shown such a reduction; the present work highlights the particular role of the Mn atoms in these [*ab initio*]{} results. [*4.2. Energy*]{} The [*“structural energy”*]{}, $\cal{E}$, of the Mn sub-lattice in the Al host is defined as the energy needed to build the Mn sub-lattice, starting from isolated Mn atoms, in the metallic host that simulates the Al (and Si) host. Per unit cell, $$\begin{aligned} {\cal E} = \sum_{i,j\,(i\neq j)} \frac{1}{2} \, \Phi_{Mn\textrm{-}Mn}(r_{ij})~e^{-\frac{r_{ij}}{L}}\;, \label{EquationEStruturale} \end{aligned}$$ where $L$ is the mean free path of the electrons due to scattering by static disorder or phonons [@Guy_beta]. $L$ depends on the structural quality and on the temperature, and can be estimated to be larger than 10$\rm \,\AA$. ${\cal E}(L)$ for $\rm Al_{12}Mn$, o-$\rm Al_6Mn$ and $\alpha$-$\rm Al_9Mn_2Si$ is shown in Fig.\[FigE\_Al12Mn\_Al6Mn\_alpha\].
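A minimal numerical sketch of equation (\[EquationEStruturale\]): the routine below damps each pair term by $e^{-r_{ij}/L}$ and sums over ordered pairs. The pair potential used here is a hypothetical Friedel-oscillation form, with placeholder amplitude, $k_F$ and phase; it is not the Mn-Mn potential computed in this work.

```python
import math

def phi_mnmn(r, amplitude=1.0, k_f=1.75, phase=0.0):
    # Hypothetical Friedel-oscillation pair potential (placeholder form,
    # NOT the fitted Mn-Mn potential of the text):
    #   Phi(r) = amplitude * cos(2 k_F r + phase) / r^3
    return amplitude * math.cos(2.0 * k_f * r + phase) / r ** 3

def structural_energy(distances, L):
    # Structural energy per unit cell:
    #   E = sum over ordered pairs (i, j), i != j, of
    #       (1/2) Phi(r_ij) exp(-r_ij / L)
    # `distances` lists every ordered-pair distance r_ij; the exponential
    # damping models a finite electron mean free path L.
    return sum(0.5 * phi_mnmn(r) * math.exp(-r / L) for r in distances)
```

With the actual $\Phi_{Mn\textrm{-}Mn}$ in place of the placeholder, scanning `L` would produce curves of the type ${\cal E}(L)$ discussed in the text.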
${\cal E}$ is always negative, with a magnitude large enough to give a significant contribution to the band energy. This result is in good agreement with the effect of the Mn-Mn interactions on the pseudogap shown previously: according to a Hume-Rothery mechanism, one expects a well-pronounced pseudogap for a large value of $|{\cal E}|$. [**5. Conclusion**]{} A simple model is presented that highlights the effects of Mn atoms on the electronic structure of Al(rich)-Mn phases related to quasicrystals. It is shown that an indirect Mn-Mn interaction up to distances of 10-20$\rm \,\AA$ is essential for the stabilization, as it creates a Hume-Rothery pseudogap close to $E_F$; the band energy is then minimised. The effect of the indirect Mn-Mn interaction has also been studied in previous works [@ZouPRL93; @Zou94; @Mihalkovic96; @Guy_beta; @GuyPRL00]. Recently [@Guy_beta], it explained the origin of large vacancies in the hexagonal $\beta$-$\rm Al_9Mn_3Si$ and $\varphi$-$\rm Al_{10}Mn_3$ phases, whereas similar sites are occupied by Mn in $\rm \mu\,Al_{4.12}Mn$ and $\rm \lambda\,Al_4Mn$, and by Co in $\rm Al_5Co_2$. On the other hand, the medium-range indirect Mn-Mn interaction is also decisive for the existence or not of magnetic moments in Al-Mn quasicrystals and approximants [@GuyPRL00]. As the structures of Al(rich)-Mn phases are related to those of quasicrystals, this suggests that a Hume-Rothery stabilization, governed by this Mn-Mn interaction, is intrinsically linked to the emergence of quasiperiodicity. [99]{} 0.2ex T.B. Massalski, U. Mizutani, [ Prog. Mater. Sci.]{} [ 22]{} (1978) 151. A.T. Paxton, M. Methfessel, D.G. Pettifor, [ Proc. R. Soc. Lond.]{} A [453]{} (1997) 1493. D. Mayou, in [*Lecture on Quasicrystals*]{} ed. F. Hippert and D. Gratias (Les Ulis, Les Editions de Physique, 1994) p 417. E. Belin-Ferré, in this conference. U. Mizutani, T. Takeuchi, H. Sato, in this conference. A.P. Tsai, A. Inoue, T. Masumoto, [ Sci. Rep. Ritu.]{} A [36]{} (1991) 99. D. Gratias, Y.
Calvayrac, J. Devaud-Rzepski, F. Faudot, M. Harmelin, A. Quivy, P.A. Bancel, [ J. Non-cryst. Solid]{} [153]{}-[154]{} (1993) 482. T. Fujiwara, T. Yokokawa, [ Phys. Rev. Lett.]{} [ 66]{} (1991) 333. J. Hafner, M. Krajčí, [ Europhys. Lett.]{} [ 17]{} (1992) 145. T. Fujiwara, [ Phys. Rev. B]{} [ 40]{} (1989) 942. M. Krajčí, J. Hafner, M. Mihalkovič, [ Phys. Rev.]{} B [ 55]{} (1997) 843. G. Trambly de Laissardière, arXiv:cond-mat/0202240. G. Trambly de Laissardière, D. Nguyen Manh, D. Mayou, [ Europhys. Lett.]{} [ 21]{} (1993) 25. G. Trambly de Laissardière, D. Nguyen Manh, L. Magaud, J.P. Julien, F. Cyrot-Lackmann, D. Mayou, [ Phys. Rev.]{} B [ 52]{} (1995) 7920. D. Nguyen Manh, G. Trambly de Laissardière, J.P. Julien, D. Mayou, F. Cyrot-Lackmann, [Solid State Commun.]{} [ 82]{} (1992) 329. J. Friedel, Can. J. Phys. 34 (1956) 1190. P.W. Anderson, Phys. Rev. 124 (1961) 41. J. Hafner, M. Krajčí, in this conference (nb 51). F. Gähler, S. Hocker, in this conference. M. Cooper, K. Robinson, [Acta Cryst. ]{} [ 20]{} (1966) 614; P. Guyot, M. Audier, [ Philos. Mag.]{} B [ 52]{} (1985) L15; V. Elser, C.L. Henley, [ Phys. Rev. Lett.]{} [ 55]{} (1985) 2883. J. Zou, A.E. Carlsson, [Phys. Rev. Lett.]{} [ 70]{} (1993) 3748. J. Zou, A.E. Carlsson, [ Phys. Rev.]{} B [ 50]{} (1994) 99. G. Trambly de Laissardière, D. Mayou, Phys. Rev. B [55]{} (1997) 2890; G. Trambly de Laissardière, S. Roche, D. Mayou, Mat. Sci. Eng. A 226-228 (1997) 986. V. Simonet, F. Hippert, M. Audier, G. Trambly de Laissardière, Phys. Rev. B [ 58]{} (1998) R8865. F. Hippert, V. Simonet, G. Trambly de Laissardière, M. Audier, Y. Calvayrac, J. Phys.: Cond. Mat. [11]{} (1999) 10419; F. Hippert, et al., in this conference. J.J. Préjean, C. Berger, A. Sulpice, Y. Calvayrac, [Phys. Rev.]{} B [65]{} (2002) 140203(R). M. Mihalkovič, W.J. Zhu, C.L. Henley, R. Phillips, [Phys. Rev.]{} B [53]{} (1996) 9021. M. Widom, J.A. Moriarty, Phys. Rev. B 58 (1998) 8967. G. Trambly de Laissardière, D.
Mayou, [ Phys. Rev. Lett.]{} [ 85]{} (2000) 3273.
--- abstract: 'The classical Hadwiger conjecture, dating back to the 1940s, states that any graph of chromatic number at least $r$ has the clique of order $r$ as a minor. Hadwiger’s conjecture is an example of a well studied class of problems asking how large a clique minor one can guarantee in a graph with certain restrictions. One problem of this type asks what is the largest size of a clique minor in a graph on $n$ vertices with independence number $\alpha(G)$ at most $r$. If true, Hadwiger’s conjecture would imply the existence of a clique minor of order $n/\alpha(G)$. Results of Kuhn and Osthus and of Krivelevich and Sudakov imply that if one assumes in addition that $G$ is $H$-free for some bipartite graph $H$, then one can find a polynomially larger clique minor. This has recently been extended to triangle-free graphs by Dvořák and Yepremyan, answering a question of Norin. We complete the picture and show that the same is true for an arbitrary graph $H$, answering a question of Dvořák and Yepremyan. In particular, we show that any $K_s$-free graph has a clique minor of order $c_s(n/\alpha(G))^{1+\frac{1}{10(s-2)}}$, for some constant $c_s$ depending only on $s$. The exponent in this result is tight up to a constant factor in front of the $\frac{1}{s-2}$ term.' author: - 'Matija Bucić[^1]' - 'Jacob Fox[^2]' - 'Benny Sudakov[^3]' title: Clique minors in graphs with a forbidden subgraph --- Introduction ============ A graph $\Gamma$ is said to be a *minor* of a graph $G$ if for every vertex $v$ of $\Gamma$ we can choose a connected subgraph $G_v$ of $G$, such that the subgraphs $G_v$ are pairwise vertex disjoint and $G$ contains an edge between $G_v$ and $G_{v'}$ whenever $v$ and $v'$ form an edge in $\Gamma$. The notion of graph minors is one of the most fundamental concepts of modern graph theory and has found many applications in topology, geometry, theoretical computer science and optimisation; for more details, see the excellent surveys [@lovasz-survey; @norin-survey].
Many of these applications have their roots in the celebrated Robertson-Seymour theory of graph minors, developed over more than two decades and culminating in the proof of Wagner’s conjecture [@R-S]. One of several equivalent ways of stating this conjecture is that every family of graphs closed under taking minors can be characterised by a finite family of excluded minors. A forerunner to this result is Kuratowski’s theorem [@kuratowski], one of the most classical results of graph theory, dating back to 1930. In a reformulation due to Wagner [@wagner-kuratowski] it postulates that a graph is planar if and only if neither $K_5$ nor $K_{3,3}$ is its minor. Another cornerstone of graph theory is the famous $4$-colour theorem, dating back to 1852, which was finally settled with the aid of computers in 1976 [@four-color]. It states that every planar graph $G$ has chromatic number[^4] at most four. In light of Kuratowski’s theorem, Wagner [@wagner-4-col] showed that the $4$-colour theorem is in fact equivalent to showing that any graph without $K_5$ as a minor has $\chi(G) \le 4$. In 1943 Hadwiger proposed a natural generalisation, namely that any graph with $\chi(G) \ge r$ has $K_{r}$ as a minor. Hadwiger’s conjecture is known for $r \le 5$ (for the case of $r=5$ see [@hadwiger-5]), but despite receiving considerable attention over the years it is still widely open for $r \ge 6$; see [@hadwiger-survey] for the current state of affairs. In this paper we study the question of how large a clique minor one can guarantee to find in a graph $G$ which belongs to a certain restricted family of graphs. A prime example of this type of problem is Hadwiger’s conjecture itself. Another natural example asks what happens if instead of restricting the chromatic number we assume a lower bound on the average degree. Note that $\chi(G) \ge r$ implies that $G$ has a subgraph of minimum degree at least $r-1$.
So the restriction in this problem is weaker than in Hadwiger’s conjecture, and we are interested in how far this condition can take us. This question, first considered by Mader [@mader] in 1968, was answered in the 1980s independently by Kostochka [@kost] and Thomason [@thom], who showed that a graph of average degree $r$ has a clique minor of order $\Theta(r/\sqrt{\log r})$. This is best possible up to a constant factor, as can be seen by considering a random graph with appropriate edge density (whose largest clique minor was analysed by Bollobás, Catlin and Erdős in [@BCE]). This unfortunately means that this approach is not strong enough to prove Hadwiger’s conjecture for all graphs. For almost four decades, bounding the chromatic number through the average degree and using the Kostochka-Thomason theorem gave the best known lower bound on the order of a clique minor in terms of the chromatic number. Very recently, Norin and Song [@norinsong] got beyond this barrier, and Postle [@Postle] gave a further improvement, showing that every graph of chromatic number $r$ has a clique minor of size $r/(\log r)^{1/4+o(1)}$. This still falls short of proving Hadwiger’s conjecture for all graphs. However, if we impose some additional restrictions on the graph, it turns out we can do much better. One of the most natural restrictions, frequently studied in combinatorics, is to require our graph $G$ to be $H$-free for some other, small graph $H$. This problem was first considered by Kuhn and Osthus [@K-O], who showed that every $K_{s,t}$-free graph with average degree $r$ has a clique minor of order $\Omega \left(r^{1+2/(s-1)}/\log^3 r\right).$ The polylog factor in this result was subsequently improved by Krivelevich and Sudakov [@B-M], who obtain in a certain sense the best possible bound. They also obtain tight results for the case of $C_{2k}$-free graphs. These results show that Hadwiger’s conjecture holds in a stronger form for any $H$-free graph, provided $H$ is bipartite.
On the other hand, if $H$ is not bipartite, then taking $G$ to be a random bipartite graph shows the bound of Kostochka [@kost] and Thomason [@thom] cannot be improved. A natural next question is whether we can do better if we assume a somewhat stronger condition than a bound on the average degree or the chromatic number. A natural candidate is an upper bound on $\alpha(G)$, the size of a largest independent set in $G$. Indeed, the chromatic number of a graph $G$ is at least $n/\alpha(G)$. An old conjecture, implied by Hadwiger’s conjecture (see [@hadwiger-survey]), states that if $\alpha(G) \le r$ then $G$ has a clique minor of order $n/r$. Duchet and Meyniel [@D-M] showed in 1982 that this conjecture holds within a factor of $2$, which was subsequently improved in [@fox; @B-G; @BLW; @Pedersentoft; @PlummerStiToft; @kawa1; @kawa2; @woodall; @maffraymeyniel]; most notably, Fox [@fox] gave the first improvement of the multiplicative constant $2$. Building upon the ideas of [@fox], Balogh and Kostochka [@B-G] obtained the best known bound to date. In light of these results, Norin asked whether in this case assuming additionally that $G$ is triangle-free allows for a better bound. This question was answered in the affirmative by Dvořák and Yepremyan [@D-Y], who showed that for $r$ large enough, a triangle-free $n$-vertex graph with $\alpha(G) \le r$ has a clique minor of order $(n/r)^{1+\frac{1}{26}}.$ They naturally ask if the same holds if instead of triangle-free graphs we consider $K_s$-free graphs. We show that this is indeed the case. \[thm:main\] Let $s\ge 3$ be an integer and $r$ be large enough. Any $K_s$-free graph $G$ with $\alpha(G) \le r$ has a clique minor of order at least $ (n/r)^{1+ \frac1{10(s-2)}}.$ For the case of $s=3$ our result has a simpler proof and gives a better constant in the exponent compared to that in [@D-Y].
As an additional illustration we also use our strategy to obtain a short proof of a result of Kuhn and Osthus [@K-O] about finding clique minors in $K_{s,t}$-free graphs. The two examples above put quite different restrictions on the structure of the underlying graph; nevertheless, our approach performs well in both cases. This leads us to believe that our strategy, or minor modifications of it, could provide a useful tool for finding clique minors in graphs under other structural restrictions as well. The strategy is simple enough to be worth describing in the Introduction. Method {#method .unnumbered} ------ Given a graph $G$ our strategy for finding minors of large average degree goes as follows: 1. We independently colour every vertex of $G$ red with probability $p$ and blue otherwise. 2. Every blue vertex chooses independently one of its red neighbours (if one exists) uniformly at random. This decomposes the graph into stars: either stars centred at a red vertex, whose leaves are the blue vertices that chose the central vertex as their red neighbour, or isolated blue vertices which had no red neighbour to choose from. We obtain our random minor ${\mathcal{M}}(G,p)$ by contracting each star into a single vertex and deleting the isolated blue vertices. We note that similar strategies were employed by both Kuhn and Osthus [@K-O], and Dvořák and Yepremyan [@D-Y]. Our strategy above streamlines their approaches for finding dense minors. This helps us develop a new way of analysing the outcome, allowing us to answer the above question of Dvořák and Yepremyan [@D-Y] as well as to obtain simpler proofs of the results of both [@K-O] and [@D-Y]. **Notation:** For a graph $G$, we denote by $V(G)$ its vertex set, by $E(G)$ its edge set, and by $\delta(G)$, $d(G)$ and $\Delta(G)$ its minimum, average and maximum degree, respectively. For $v \in V(G)$ let $d_G(v)$ be the degree of $v$ in $G$ (we often write just $d(v)$ when the underlying graph is clear).
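The two-step procedure above can be sketched directly in code. The following is a minimal Python illustration (the function names and the adjacency-dict representation are ours, not the paper's); it returns the star centres and the edge set of one sample of ${\mathcal{M}}(G,p)$, with parallel edges merged:

```python
import random

def random_minor(adj, p, seed=None):
    # One sample of the random minor M(G, p):
    #  1. colour each vertex red with probability p (blue otherwise);
    #  2. each blue vertex joins the star of a uniformly random red
    #     neighbour (blue vertices with no red neighbour are discarded);
    #  3. contract every star; keep one edge between each pair of stars.
    # `adj` maps a vertex to the set of its neighbours (simple graph).
    rng = random.Random(seed)
    red = {v for v in adj if rng.random() < p}
    owner = {v: v for v in red}          # star centre of each kept vertex
    for v in adj:
        if v in red:
            continue
        red_nbrs = [u for u in adj[v] if u in red]
        if red_nbrs:
            owner[v] = rng.choice(red_nbrs)
    edges = set()
    for v in adj:
        for u in adj[v]:
            if v in owner and u in owner and owner[v] != owner[u]:
                edges.add(frozenset((owner[v], owner[u])))
    return red, edges
```

With $p=1$ every vertex becomes its own star and the sample is $G$ itself; the analysis in the text takes $p$ much smaller and, rather than merging parallel edges as done here, carefully counts them.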
If $V(G)$ is red-blue coloured, we denote by $d_r(v)$ the number of red neighbours of $v$. For a subset $S \subseteq V(G)$, let $N(S)$ be the set of vertices adjacent to at least one vertex in $S$. For us an $\ell$-path is a path of length $\ell$, which consists of $\ell$ edges and $\ell+1$ vertices. For a path $v_1v_2\ldots v_n$ we say $v_1$ and $v_n$ are its endvertices and $v_2,\ldots,v_{n-1}$ are its internal vertices. Setting up the framework and an example ======================================= We begin the section by stating some well known tools which we are going to use. \[lem:chernoff\] Let $X_1, \ldots, X_d$ be independent random variables, taking value $1$ with probability $p$ and $0$ otherwise and let $X= \sum_{i=1}^d X_i$. Then ${\mathbb{P}}(X>2pd) \le e^{-pd/3}$ and ${\mathbb{P}}(X<\frac12pd) \le e^{-pd/8}$. \[lem:maxcut\] Any graph $G$ has a spanning bipartite subgraph in which every vertex has degree at least half as big as it had in $G$. Choose a partition $X,Y$ of $V(G)$ which maximises the number of edges from $X$ to $Y$. We obtain the desired subgraph by deleting all the edges within $X$ and $Y$. Any vertex $v$ satisfies the property as otherwise we could move it to the other part and increase the number of edges between the parts. \[KST\] Let $s\le t$ be positive integers. Any bipartite graph with parts of size $n$ and at least $(t-1)n^{2-1/s}+sn$ edges has $K_{s,t}$ as a subgraph. We now introduce some notation and give an overview of how we analyse ${\mathcal{M}}(G,p).$ If $G$ has $n$ vertices, then $M \sim {\mathcal{M}}(G,p)$ will almost surely have roughly $np$ vertices. In terms of edges, any edge $xy$ of $G$ which got both its endpoints coloured blue will become an edge in $M$ between the red vertices that $x$ and $y$ picked (assuming $x$ and $y$ each have red neighbours). This means that, provided we pick $p$ carefully, $M$ will have roughly as many edges as $G$. The main issue is that $M$ might not be a simple graph. 
In other words, between some vertices of $M$ there could be many parallel edges and the main part of the analysis of the above process is to control the number of such parallel edges. Given $M \sim {\mathcal{M}}(G,p)$, we say that a $3$-path $vxyu$ in $G$ is *activated* if both vertices $v$ and $u$ got coloured red, both vertices $x$ and $y$ got coloured blue and $x$ chose $v$ and $y$ chose $u$ as their red neighbours. Note that a path $vxyu$ being activated means there is an edge between $v$ and $u$ in $M$. With this in mind our general strategy for the analysis of ${\mathcal{M}}(G,p)$ will be to try to find a collection $\mathcal{P}$ of many $3$-paths in our graph, such that not too many of these paths have the same endvertices. The chance that a fixed $3$-path activates will be rather small, but the chance that $2$ such paths simultaneously activate is even smaller. This means that with the right choice of parameters we still expect to see many activated paths from $\mathcal{P}$ but only very few between the same pairs of vertices, which lets us conclude there are many edges in $M$ but few parallel ones. The key part of the approach is to choose $\mathcal{P}$ correctly, which will depend on the assumptions we make on our graph $G$. The technical part of the approach is mostly contained in the following two lemmas. The first one gives a lower bound on the probability that a single $3$-path activates while the second gives an upper bound on the chance that several paths with same endvertices activate simultaneously. \[lem:activation-lb\] Let $G$ be a graph with $\Delta(G) \le d$. The chance that a path $vxyu$ activates in ${\mathcal{M}}(G,p)$ is at least $\frac{1}{2^7d^2}$ given $\frac4d \le p\le \frac12$. Let us first consider what happens in the colouring stage of our procedure. We say that a colouring is well-behaved (w.r.t. $vxyu$) if $v,u$ got coloured red, $x,y$ got coloured blue and both $d_r(x),d_r(y) \le 2pd+2$. 
The probability that the right colours got assigned to $v,x,y,u$ is $p^2(1-p)^2$, and given this, the probability that $d_r(x) > 2p(d-2)+2$ is by the Chernoff bound (\[lem:chernoff\]) at most $e^{-p(d-2)/3}\le e^{-pd/4}$, since $x$ has at least $d-2$ neighbours which are not yet coloured or are coloured blue. So the probability that a colouring is well-behaved is at least $$p^2(1-p)^2(1-2e^{-pd/4}) \ge p^2/16.$$ Given that we have obtained a well-behaved colouring in the first stage, the probability that $vxyu$ activates is at least $\frac{1}{(2pd+2)^2}\ge \frac{1}{8p^2d^2}$. Putting things together, $${\mathbb{P}}(vxyu \text{ activates}) \ge \frac{1}{8p^2d^2}\cdot \frac{p^2}{16} \ge \frac{1}{2^7d^2}.$$ \[lem:activation-ub\] Let $G$ be a graph, $d' \le \delta(G)$ and $\mathcal{P}$ be a non-empty collection of $3$-paths between vertices $v$ and $u$ of $G$. If the set of internal vertices of all paths in $\mathcal{P}$ has size $m$ and $2^7 m\log m<pd'$, then the chance that all paths in $\mathcal{P}$ simultaneously activate in ${\mathcal{M}}(G,p)$ is at most $2p^2/(d'p/4)^m$. Let us denote the paths in $\mathcal{P}$ by $vx_iy_iu$. Let $X$ denote the set of $x_i$’s and $Y$ the set of $y_i$’s. The condition of the lemma tells us that $|X \cup Y|=m$. Notice that all paths $vx_iy_iu$ simultaneously activate only if $v,u$ get coloured red, each vertex in $X\cup Y$ gets coloured blue, all vertices in $X$ choose $v$ as their red neighbour and all vertices in $Y$ choose $u$ as their red neighbour. In particular, unless $X \cap Y = \emptyset$ the probability of simultaneous activation is $0$. So we may assume $X \cap Y = \emptyset$. We say that a colouring is feasible if $v,u$ get coloured red and all vertices in $X \cup Y$ get coloured blue. We say it is well-behaved if in addition all vertices in $X \cup Y$ have at least $(d'-m)p/2$ red neighbours. The probability that a colouring is feasible is $p^2(1-p)^m$.
Given that a colouring is feasible, the probability that $d_r(w) < p(d'-m)/2$ for a vertex $w\in X \cup Y$ is by the Chernoff bound (\[lem:chernoff\]) at most $e^{-p(d'-m)/8}\le e^{-pd'/16},$ since $w$ has at least $d'-m$ neighbours whose colour we do not yet know or which we know are red. So the probability that a colouring is not well-behaved, given that it is feasible, is by a union bound at most $$me^{-pd'/16} \le \frac{m}{(d'p)^m},$$ which follows since $2^7 m\log m<pd'$ implies $pd'/\log (pd') > 16m$, using the fact that $m \ge 2$. Given a well-behaved colouring, the probability that each vertex in $X \cup Y$ chooses the right red neighbour is at most $\left(\frac{4}{d'p}\right)^m$, since $(d'-m)p/2 \ge d'p/4.$ The chance that all paths in $\mathcal{P}$ activate, given that the colouring is feasible, is at most the probability that all paths in $\mathcal{P}$ activate given that the colouring is well-behaved, plus the probability that the colouring is not well-behaved given that it is feasible. By the above bounds, this probability is at most $$\frac{4^m}{(d'p)^m}+\frac{m}{(d'p)^m}\le \frac{2^{2m+1}}{(d'p)^m}.$$ Finally, this implies that the chance that all paths in $\mathcal{P}$ activate is at most $\frac{2^{2m+1}}{(d'p)^m}\cdot p^2(1-p)^m \le \frac{2^{2m+1}p^2}{(d'p)^m}.$ Minors in $K_{s,t}$-free graphs --------------------------- In this subsection we illustrate our approach by giving a simpler proof of a result of Kuhn and Osthus [@K-O] on dense minors in $K_{s,t}$-free graphs. A graph $G$ is said to be *almost regular* if $\Delta(G) \le 2\delta(G)$. For $2\le s \le t$ there is a constant $c=c(s,t)>0$ such that any $K_{s,t}$-free, almost regular graph $G$ with average degree $d$ has a minor of average degree at least $cd^{1+\frac{1}{2(s-1)}}.$ We choose $c=c(s,t)=\frac{1}{2^{56}st^2}.$ If $cd^{2/(s-1)} \le 1$ then $G$ itself provides us with the desired minor. So we may assume that $d^{2/(s-1)} \ge c^{-1}=2^{56}st^2$.
While we will work with the above explicit value of $c$, for the sake of clarity the reader may assume throughout the argument that $d$ is sufficiently large compared to $s$ and $t$. We have $d \le \Delta(G) \le 2\delta(G) \le 2d$, which in particular gives $\delta(G) \ge d/2$ and $\Delta(G) \le 2d$. By (\[lem:maxcut\]) there is a spanning bipartite subgraph $G'$ of $G$ which has $d(G') \ge d(G)/2=d/2$ and $\delta(G') \ge \delta(G)/2\ge d/4.$ Let us first observe a few easy counts, all but the first of which follow from the fact that $G'$ is $K_{s,t}$-free. 1. The number of $3$-paths in $G'$ is at most $2nd^3$, where $n=|G'|$.\ This follows since the number of edges in $G'$ is at most $nd/2$, and for any choice of an edge as the middle edge of a $3$-path we have at most $4d^2$ choices for its endvertices. 2. The number of cycles of length $6$ is at most $\frac43tnd^{5-1/(s-1)}$.\ Indeed, each path of length $3$, of which there are at most $2nd^3$, completes into a $6$-cycle in at most $t(2d)^{2-1/(s-1)}$ many ways (and we counted each cycle $6$ times). This follows since, given a $3$-path from $v$ to $u$, each such cycle corresponds to an edge between the neighbourhoods of $v$ and $u$, both of which have size at most $2d$. The subgraph induced by these neighbourhoods is bipartite and $K_{s-1,t}$-free, since any $K_{s-1,t}$ together with $v$ or $u$ would constitute a copy of $K_{s,t}$ in $G$. The claimed bound now follows by the Kövári-Sós-Turán theorem (\[KST\]). 3. The number of copies of $K_{2,s}$ is at most $ntd^{s}$.\ This time we count, for every vertex $v$, the number of stars $K_{1,s}$ with centre in the second neighbourhood $N_2$ of $v$ and all leaves in its first neighbourhood $N_1$. Let $k=|N_2|$ and let $d_1,\ldots, d_k$ denote the numbers of neighbours of the vertices of $N_2$ within $N_1$. The number of such stars equals $\sum \binom{d_i}{s}$, which is at most $t\binom{d(v)}{s} \le t\binom{2d}{s}\le 2td^s$, as otherwise we would get a $K_{s,t}$ in $G$.
Taking the sum over all vertices and noticing that we count each $K_{2,s}$ twice, we get the claimed bound. Let us consider $M \sim {\mathcal{M}}(G',p)$ with $p=2^{12}\sqrt{t}d^{-\frac1{2(s-1)}}\le2^{12}\sqrt{t}c^{\frac14}\le \frac14$. Let $X$ denote the number of activated $3$-paths in $M$. Since $G'$ has at least $nd/4$ edges and each contributes a distinct activated $3$-path, provided its endpoints are blue and have a red neighbour, we deduce $${\mathbb{E}}X \ge (1-p)^2(1-2(1-p)^{d/4-1})nd/4\ge (1-p)^2(1-2e^{-pd/8})nd/4 \ge nd/8.$$ We say a $6$-cycle in $G'$ activates if it consists of two edge-disjoint activated $3$-paths. Let $Y$ count the number of activated $6$-cycles. The chance for two edge-disjoint $3$-paths with endvertices $v,u$ to simultaneously activate is at most $2p^2/(dp/16)^4=\frac{2^{17}}{p^2d^4}$, by (\[lem:activation-ub\]) with $m=4$ and $d'=d/4$, which applies since $2^{12} \le pd$. Each cycle has three possible pairs of such paths, so the chance that a $6$-cycle activates is at most $3\cdot \frac{2^{17}}{p^2d^4}$. In particular, $${\mathbb{E}}Y \le 2^{19}tnd^{1-1/(s-1)}/p^2.$$ Given a star $K_{1,s}$ that appears between the neighbourhoods of vertices $v,u \in G'$, we say it is activated for $v$ and $u$ if all $3$-paths between $v$ and $u$ through an edge of the star activated. Let $Z$ count the number of triples consisting of such a star and vertices $v,u$ such that the star was activated for $v,u$. Each such triple corresponds to a $K_{2,s}$ with a vertex appended to one of its left vertices. In particular, there are at most $4ntd^{s+1}$ plausible triples, each of which activates with probability at most $2p^{2}/(dp/16)^{s+1}=2^{4s+5}p^{1-s}d^{-s-1}$, by (\[lem:activation-ub\]) with $m=s+1$ and $d'=d/4$. Here we require $2^9(s+1)\log (s+1) \le pd$ in order to be able to apply the lemma, which holds since $pd \ge 2^{12}\sqrt{d}$ and $d$ is large enough compared to $s$.
In particular we get $${\mathbb{E}}Z \le 2^{4s+7}ntp^{1-s}.$$ For any activated $6$-cycle we delete the middle edge of one of its activated $3$-paths, in total deleting at most $Y$ edges. For any activated star we delete one of its edges; in total at most $Z$ edges get deleted. So we are left with at least $X-Y-Z$ edges in $M$, and the expected number of remaining edges is at least $$\begin{aligned} nd/8-2^{19}tnd^{1-1/(s-1)}/p^2-2^{4s+7}ntp^{1-s}&=nd/8-nd/32-2^{4s+7}ntp^{1-s}\\ &\ge nd/8-nd/32-nd/32 = nd/16,\end{aligned}$$ where the last inequality is equivalent to $d \ge 2^{4s+12}tp^{1-s}$, which after plugging in our choice of $p$ becomes $d \ge t^{3-s}2^{48-16s}$, and this holds by our choice of $c$. **Claim.** After this process, between any two vertices $v,u\in M$ there are at most $s-1$ parallel edges. To see why this is true, consider the bipartite subgraph whose parts are the neighbourhoods of two vertices $v,u$ in $G'$ and whose edge set consists of the edges of $G'$ which were the middle edge of an activated $3$-path from $v$ to $u$. There are no $2$ independent edges in this graph, as otherwise we would have an activated $6$-cycle in which we did not delete an edge. This means that this graph is a star. If the star had size $s$ or more, we would get an activated star for which we have not deleted an edge. So this graph is a star with at most $s-1$ edges. This means that, after deleting all but one parallel edge between each pair of vertices, we are left with a simple graph whose expected number of edges is at least $\frac{nd}{16s}$. By the Chernoff bound (\[lem:chernoff\]), $M$ has more than $2pn$ vertices with probability at most $e^{-np/3}$, and each such outcome can contribute at most $n^2$ edges to our expectation. So we can find an outcome with at most $2pn$ vertices and at least $\frac{nd}{16s}-n^2e^{-np/3}$ edges.
This gives us a minor of average degree at least $$\frac{d}{32sp}-np^{-1}e^{-np/3}\ge d^{1+\frac{1}{2(s-1)}}/(2^{17}s\sqrt{t})-p^{-2} \ge cd^{1+\frac{1}{2(s-1)}}$$ where in the first inequality we used $np e^{-np/3}\le 1$, since $np \ge dp \ge 12.$ **Remark:** The above theorem also holds without the almost regularity condition, as shown in [@B-M]. The approach presented above, with some additional ideas, can be used to obtain an alternative, slightly simpler proof of this result. However, since this is a known result and the purpose of including the above argument was mainly illustrative, we chose not to include the details. The $K_s$-free case ================== In this section, we prove our main result, (\[thm:main\]). We restate an equivalent version below that is more convenient to work with. \[thm:main-r\] Let $s\ge 3$ be an integer, ${\varepsilon}< \frac1{10(s-2)}$ and $d$ be large enough. Any $K_s$-free graph $G$ without $K_d$ as a minor has $\alpha(G) \ge \frac n{d^{1-{\varepsilon}}}$. In this setting we will need to take a slightly different approach when analysing ${\mathcal{M}}(G,p)$. The reason is that, when working with $K_s$-free graphs, we lack a good way of bounding the number of $3$-paths between an arbitrary pair of vertices, a bound we previously obtained using the fact that our graph was $K_{s,t}$-free. In the present setting we will use the fact that $\alpha(G)$ is bounded to show that independent sets must expand well. Additionally, being $K_s$-free allows us to show that we can cover at least half of any collection of vertices using few independent sets. With these two facts, we fix a vertex, consider a carefully selected collection of $3$-paths starting at this vertex, and then repeat an argument similar to that of the previous section to show that we expect to see many non-parallel edges incident to this vertex in $M$. The following standard Ramsey lemma quantifies the second fact mentioned above. We prove it for completeness. \[lem:ramsey-asym\] Let $s \ge 2$.
It is possible to cover half of the vertices of any $n$-vertex $K_s$-free graph with at most $4n^{1-\frac{1}{(s-1)}}$ independent sets. We will show that any $K_s$-free graph with $m$ vertices contains an independent set of size at least $m^{\frac1{s-1}}/2$. It follows that, as long as we have at least $n/2$ vertices left, we can find an independent set of size at least $n^{1/(s-1)}/4$ and remove it from the graph. Once we stop, we have removed at most $4n^{1-\frac{1}{(s-1)}}$ independent sets which cover at least half of the graph, as desired. We prove the above claim by induction on $s$. For $s=2$, the graph being $K_2$-free means there are no edges, so there is an independent set of size $m$, as desired. Assume now that $s \ge 3$ and that the lemma holds for $s-1.$ If there is a vertex which has degree at least $m^{\frac{s-2}{s-1}}$ then, since we know its neighbourhood is $K_{s-1}$-free, we are done by induction. On the other hand, if all vertices have degree less than $m^{\frac{s-2}{s-1}}$, then by Turán’s theorem (see [@alon-spencer]) we know that there is an independent set of size $\frac m{m^{\frac{s-2}{s-1}}+1}\ge m^{\frac1{s-1}}/2$. We will also make use of the result of Kostochka [@kost] and Thomason [@thom] mentioned in the introduction, which lets one pass from dense minors to clique minors. \[thm:dense-to-clique\] Any graph with average degree at least $(3+o(1))t \sqrt{\log t}$ has $K_t$ as a minor. We say that a graph $G$ is *$D$-independent set expanding* if for any independent set $S$ in $G$ we have $|N(S)| \ge (D-1) |S|.$ We now prove our main result with a restriction on the maximum degree and assuming that independent sets of our graph expand. Both these assumptions will be easy to remove later. \[thm:main-max-deg\] Let $s\ge 3$ be an integer, ${\varepsilon}< \frac1{10(s-2)}$ and $d$ be large enough. Any $K_s$-free, $d^{1-{\varepsilon}}$-independent set expanding graph $G$ with $\Delta(G)\le d$ has $K_d$ as a minor.
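The covering procedure in the lemma just proved is constructive. A sketch for the triangle-free case $s=3$, tested on a random bipartite (hence triangle-free) graph; all function names are ours:

```python
import random

def greedy_independent_set(adj, verts):
    """Pick vertices in order of current degree, discarding neighbours;
    returns an independent set of size at least |verts|/(max degree + 1)."""
    order = sorted(verts, key=lambda v: len(adj[v] & verts))
    chosen, blocked = [], set()
    for v in order:
        if v not in blocked:
            chosen.append(v)
            blocked.add(v)
            blocked |= adj[v]
    return chosen

def cover_half_triangle_free(adj, n):
    """Lemma's covering procedure for s = 3: peel off independent sets
    (a high-degree vertex has an independent neighbourhood) until at
    least half of the vertices are covered."""
    remaining = set(range(n))
    pieces = []
    while len(remaining) > n / 2:
        m = len(remaining)
        hub = next((v for v in remaining
                    if len(adj[v] & remaining) >= m ** 0.5), None)
        if hub is not None:
            piece = list(adj[hub] & remaining)   # independent: no triangles
        else:
            piece = greedy_independent_set(adj, remaining)
        pieces.append(piece)
        remaining -= set(piece)
    return pieces

# Test instance: a random bipartite graph on 200 vertices.
random.seed(1)
n = 200
adj = {v: set() for v in range(n)}
for u in range(n // 2):
    for w in range(n // 2, n):
        if random.random() < 0.05:
            adj[u].add(w)
            adj[w].add(u)
pieces = cover_half_triangle_free(adj, n)
```

Each peeled set has size at least $\sqrt{m}/2$, so the number of pieces stays below the $4\sqrt{n}$ guarantee of the lemma.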
Note first that since independent sets expand we have $\delta(G)\ge d'=d^{1-{\varepsilon}}-1.$ Fix a vertex $v$. Since $G$ is $K_s$-free, we know that $N(v)$ must be $K_{s-1}$-free. By Lemma \[lem:ramsey-asym\], this means there exist disjoint independent sets of vertices $S_1, \ldots, S_t \subseteq N(v)$ such that $|S_1|+\ldots +|S_t| \ge |N(v)|/2$ and $t \le 4d(v)^{1-1/(s-2)}$. Let $N_i=N(S_i) \setminus (\{v\} \cup N(v))$. For any vertex in $N_i$ we delete all but $1$ edge towards $S_i$, leaving us with at least $|N_i|$ edges between $S_i$ and $N_i$ which split into stars with centres in $S_i$. In particular, this means that the remaining neighbourhoods of vertices in $S_i$ partition $N_i$. For each vertex $w \in S_i$ we apply Lemma \[lem:ramsey-asym\] to find a collection of $t_w \le 4d(w)^{1-1/(s-2)}$ disjoint independent sets $S_{w1},\ldots,S_{wt_w}\subseteq N(w)\cap N_i$ which cover at least half of $N(w)\cap N_i$, where we again used the fact that $N(w)$ is $K_{s-1}$-free. For every vertex in $N_{wj}:=N(S_{wj})\setminus (\{v\} \cup N(v))$ we mark one of its edges towards $S_{wj}$ as *permissible* for $S_{wj}$. See the figure below for an illustration.
*\[Figure: the construction around the fixed vertex $v$. The disjoint independent sets $S_1,\ldots,S_t$ cover at least half of $N(v)$; the edges kept between $S_i$ and $N_i$ form stars with centres in $S_i$; and each vertex $u$ outside $\{v\}\cup N(v)$ sends at most one permissible edge towards each $S_{wj}$.\]*

We now build a collection of $3$-paths $\mathcal{P}_i$ as follows. Any path in $\mathcal{P}_i$ starts with $v$, then proceeds to a $w \in S_i$, then to a vertex in $S_{wj}$ for some $j$ along one of the remaining edges between $S_i$ and $N_i$, and finally follows an $S_{wj}$-permissible edge. Let us first show some properties of $\mathcal{P}_i$ that we will use. \[claim:stars\] Middle edges of paths in $\mathcal{P}_i$ span stars. This is immediate since we removed all but one edge of any vertex in $N_i$ towards $S_i$. \[claim:multiple-paths\] For any $w \in S_i$ and $u \in V(G)\setminus (\{v\} \cup N(v))$ there are at most $4d^{1-\frac1{s-2}}$ paths in $\mathcal{P}_i$ passing through $w$ and ending with $u$. This claim follows since the last edge of any such path in $\mathcal{P}_i$ must be $S_{wj}$-permissible for some $j$ and any vertex $u$ sends at most one permissible edge towards each $S_{wj}$. Therefore there can be at most $t_w \le 4d^{1-\frac1{s-2}}$ such paths in $\mathcal{P}_i$. \[claim:size\] $|\mathcal{P}_i| \ge d'^2|S_i|/4-2d'd$. This follows since any $S_{wj}$-permissible edge gives rise to a distinct $3$-path in our construction. Since we mark precisely one such edge for every vertex in $N_{wj}$, we get $$\begin{aligned} |\mathcal{P}_i|= \sum_{w \in S_i, j \in [t_w]}|N_{wj}| &\ge \sum_{w \in S_i, j \in [t_w]} (d'|S_{wj}|-(d+1))\\ &\ge d'|N_i|/2-|S_i|\cdot 4d^{1-1/(s-2)}(d+1) \\ &\ge d'^2|S_i|/2-d'(d+1)-|S_i|\cdot 8 d^{2-1/(s-2)} \ge d'^2|S_i|/4-2d'd\end{aligned}$$ where in the first inequality we used the fact that $S_{wj}$ is independent, so by the expansion property $|N_{wj}| \ge d'|S_{wj}|-|N(v)|-1$.
In the second inequality we used the fact that $\sum_{j \in [t_w]} |S_{wj}| \ge |N(w) \cap N_i|/2$ and $\cup_{w \in S_i} N(w) \supseteq N_i$ to bound the first term, and $t_w \le 4d^{1-1/(s-2)}$ to bound the second. In the third inequality we used $|N_i|\ge |S_i|d'-(|N(v)|+1)$, which holds by the expansion property since $S_i$ is independent. In the last inequality we used ${\varepsilon}< \frac{1}{2(s-2)}$ and $d$ (so also $d'$) large enough. Let $\mathcal{P}:=\bigcup_i \mathcal{P}_i.$ Applying the above claim for each $i$ we obtain that for large enough $d$: $$\label{eq:number-of-paths} |\mathcal{P}| \ge \frac{d'^2}{4}\sum_{i=1}^t|S_{i}|-2td'd\ge \frac{d'^3}{8}-8d'd^{2-1/(s-2)} \ge \frac{d'^3}{16}.$$ Let us fix another vertex $u$ and look at the collection $\mathcal{P}_{vu}$ of paths in $\mathcal{P}$ ending in $u$. Let $X$ be the set of vertices following $v$ (so in $N(v)$) on these paths and $Y$ the set of vertices preceding $u$ on these paths. Since we excluded $N(v)$ from our $N_i$’s, we have $X \cap Y = \emptyset.$ Consider the bipartite subgraph $B$ with bipartition given by $X$ and $Y$ and edges coming from middle edges of paths in $\mathcal{P}_{vu}$. \[claim:max-deg\] $\Delta(B)\le 4d^{1-1/(s-2)}.$ For any $w$ in $X$, we know by Claim \[claim:multiple-paths\] that there are at most $4d^{1-1/(s-2)}$ paths in $\mathcal{P}_{vu}$ passing through $w$, so $w$ can have at most this many neighbours in $B$. For any vertex $y \in Y$, Claim \[claim:stars\] implies $y$ has at most one edge towards any $S_i$; in particular, it has degree at most $t\le 4d^{1-1/(s-2)}$ in $B$. Let $p=2^{10}d^{2{\varepsilon}-\frac{1}{2(s-2)}}$ and $M\sim {\mathcal{M}}(G,p).$ \[claim:bound\] The probability that some path in $\mathcal{P}_{vu}$ activates for $M$ is at least $\frac{|\mathcal{P}_{vu}|}{2^{9}d^2}.$ Let $a:=|\mathcal{P}_{vu}|$, so by Claim \[claim:max-deg\] and the fact that the parts of $B$ have size at most $d$, we get $|E(B)|=a \le 4d^{2-1/(s-2)}$. We trivially have at most $a^2$ pairs of independent edges in $B$.
On the other hand, if we denote by $d_1,\ldots, d_{|X|+|Y|}$ the degrees in $B$, the number of pairs of edges sharing a vertex is equal to $$\sum_{i=1}^{|X|+|Y|} \binom{d_i}{2} \le { \left\lceil \frac{2a}{\Delta(B)} \right\rceil } \frac{\Delta(B)^2}{2}\le 2a\Delta(B) \le 8ad^{1-1/(s-2)},$$ where we used $\sum_{i=1}^{|X|+|Y|} d_i =2a,$ $d_i\le \Delta(B)$ and the fact that a sum of squares of nonnegative numbers subject to a given sum is maximised when as many terms as possible are maximum and the rest are zero. Let us denote by $A_e$ the event that edge $e$ in $B$ activated the path $veu.$ By the inclusion-exclusion principle, $$\begin{aligned} {\mathbb{P}}\left(\bigcup_{e\in B} A_e\right) &\ge \sum_{e\in B}{\mathbb{P}}(A_e)-\sum_{\{e, f\}\subseteq B}{\mathbb{P}}(A_e \cap A_f)\\ & = \sum_{e\in B}{\mathbb{P}}(A_e)-\sum_{\substack{\{e, f\}\subseteq B, \\e \cap f = \emptyset}}{\mathbb{P}}(A_e \cap A_f)-\sum_{\substack{\{e, f\}\subseteq B, \\|e \cap f|=1}}{\mathbb{P}}(A_e \cap A_f)\\ &\ge a \cdot \frac{1}{2^7d^2}-a^2\cdot \frac{2^9}{p^2d'^4}-8ad^{1-1/(s-2)}\cdot \frac{2^{7}}{pd'^3} \ge \frac{a}{2^{9}d^2},\end{aligned}$$ where in the second inequality we used our earlier estimate for the probability that a single path activates to bound ${\mathbb{P}}(A_e)$, while to bound ${\mathbb{P}}(A_e \cap A_f)$ we used the corresponding estimate for pairs of paths, with $m=4$ if $e \cap f=\emptyset$ and with $m=3$ if $|e \cap f|=1$; the conditions of the lemmas are easily seen to hold for our choice of $p$ when $d$ is large enough. In the last inequality we used $\frac{a}{p^2d'^4} \le \frac{4d^{2-1/(s-2)}}{p^2d'^4} \le 8d^{-2+4{\varepsilon}-1/(s-2)}/p^2= 2^{-17}/d^2$ and $\frac{d^{1-1/(s-2)}}{pd'^3}\le 2^{-11}d^{-2+{\varepsilon}-\frac1{2(s-2)}} \le 2^{-19}/d^2$, for large enough $d$.
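The convexity step above — subject to $\sum_i d_i = 2a$ and $d_i\le\Delta(B)$, the sum $\sum_i\binom{d_i}{2}$ is maximised by packing the degrees into $\lceil 2a/\Delta(B)\rceil$ blocks of size $\Delta(B)$ — is easy to stress-test on random degree sequences:

```python
import random
from math import ceil, comb

def pairs_sharing_a_vertex(degrees):
    """Number of pairs of edges that share an endpoint: sum of C(d_i, 2)."""
    return sum(comb(d, 2) for d in degrees)

random.seed(0)
for _ in range(2000):
    Delta = random.randint(1, 30)
    degrees = [random.randint(0, Delta) for _ in range(random.randint(1, 60))]
    two_a = sum(degrees)
    if two_a == 0:
        continue
    # Extremal sequence: as many Delta's as possible, the rest zero.
    bound = ceil(two_a / Delta) * Delta**2 / 2
    assert pairs_sharing_a_vertex(degrees) <= bound
```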
Using this claim and the lower bound on $|\mathcal{P}|$, we obtain that the expected number of distinct neighbours of $v$ in $M$ is at least $\frac{\sum_u |\mathcal{P}_{vu}|}{2^{9}d^2}\ge \frac{d'^3}{2^{13}d^2}\ge 2^{-14}d^{1-3 {\varepsilon}}.$ As $v$ was an arbitrary fixed vertex of $G$ in the above argument, we conclude that the expected number of non-parallel edges in $M$ is at least $n2^{-15}d^{1-3 {\varepsilon}}$. By the Chernoff bound, $M$ has more than $2pn$ vertices with probability at most $e^{-np/3}$, and each such outcome can contribute at most $n^2$ edges to our expectation. So we can find an outcome with at most $2pn$ vertices and at least $n2^{-15}d^{1-3 {\varepsilon}}-n^2e^{-np/3}$ edges, so of average degree at least $n2^{-14}d^{1-3 {\varepsilon}}/(2pn)-np^{-1}e^{-pn/3}\ge 2^{-26}d^{1-5 {\varepsilon}+\frac{1}{2(s-2)}} \gg d \sqrt{\log d}$ since ${\varepsilon}<\frac1{10(s-2)}$. So we can find a $K_d$ minor by Theorem \[thm:dense-to-clique\]. Let us first remove the requirement that independent sets expand. Let $s\ge 3$ be an integer, ${\varepsilon}< \frac1{10(s-2)}$ and $d$ be large enough. Any $K_s$-free graph $G$ on $n$ vertices with $\Delta(G)\le d$ and no $K_d$ minor has $\alpha(G) \ge \frac n{d^{1-{\varepsilon}}}$. The proof is by induction on the number $n$ of vertices of $G$. The claim is trivial in the base case $n \le d^{1-{\varepsilon}}$. As $G$ has no $K_d$ minor, Theorem \[thm:main-max-deg\] implies it cannot be $d^{1-{\varepsilon}}$-independent set expanding. In other words, there is an independent set $S$ in $G$ with $|N(S)| < (d^{1-{\varepsilon}}-1)|S|$. Then $G'=G \setminus (S \cup N(S))$ still has $\Delta(G') \le \Delta(G) \le d$, is $K_s$-free and has no $K_d$ minor, so by the inductive assumption it has an independent set $S'$ of size at least $|G'|/d^{1-{\varepsilon}}$. There are no edges between $S$ and $S'$ since $N(S) \cap G' = \emptyset$, so $S \cup S'$ is an independent set of size $|S|+|G'|/d^{1-{\varepsilon}} \ge (|S|+|N(S)|+|G'|)/d^{1-{\varepsilon}}=n/d^{1-{\varepsilon}}$, as desired.
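The induction just given can be mirrored by a small peeling routine: repeatedly take a non-expanding independent set $S$, remove $S \cup N(S)$, and continue. A minimal sketch (here the non-expanding set is searched over single vertices only, which suffices when the degree threshold exceeds the maximum degree; all names are ours):

```python
def independent_set_by_peeling(adj, D):
    """Toy version of the induction: repeatedly take a non-expanding
    independent set S (here only singletons are tried, i.e. a vertex of
    degree < D - 1), remove S and N(S), and continue."""
    verts = set(adj)
    result = []
    while verts:
        v = next((u for u in sorted(verts)
                  if len(adj[u] & verts) < D - 1), None)
        if v is None:      # every vertex expands; the argument would now
            break          # hand us a large clique minor instead
        result.append(v)
        verts -= {v} | adj[v]
    return result

# Path on 10 vertices; with D = 4 every vertex has degree <= 2 < D - 1.
path_adj = {i: {j for j in (i - 1, i + 1) if 0 <= j < 10} for i in range(10)}
S = independent_set_by_peeling(path_adj, 4)
```

Each peeling step removes at most $1+|N(S)| < D|S|$ vertices while adding $|S|$ vertices to the independent set, which is exactly the $n/D$ accounting in the proof.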
Let us finally remove the restriction on the maximum degree to obtain our main result. Note that $G$ has average degree at most $d\sqrt{\log d}$, as otherwise Theorem \[thm:dense-to-clique\] implies it has a $K_d$ minor. This implies that there can be at most $n/2$ vertices of degree at least $2d \sqrt{\log d}$ in $G$. In particular, there is an induced subgraph of order at least $n/2$ with maximum degree at most $2d \sqrt{\log d}.$ Applying the previous result (with $2d \sqrt{\log d}$ in place of $d$ and for some ${\varepsilon}'$ which satisfies ${\varepsilon}< {\varepsilon}' < \frac1{10(s-2)}$), we obtain the desired result.

Concluding remarks
==================

We proved that any $K_s$-free graph $G$ on $n$ vertices has a clique minor of order polynomially larger than $n/\alpha(G)$, which would be implied by Hadwiger’s conjecture. In particular, we can take any power smaller than $1+\frac{1}{10(s-2)-1}.$ Examples based on random graphs used to bound the Ramsey number $R(s,k)$ for $s$ fixed and $k$ large (see [@bohman-keevash] and references therein) are $K_s$-free and have no clique minor of order $(n/\alpha(G))^{1+\frac{1}{s-1}+o(1)}$, showing that our result is best possible up to a constant factor in front of the $\frac{1}{s-2}$ term. In terms of the constant factor, being more careful with our bounds one can easily improve the $1/10$ factor to $1/8$, and with more work even a bit further. It would be interesting to determine the best possible exponent of $n/\alpha(G)$ in our result. [10]{} N. Alon and J. H. Spencer, *The probabilistic method.* John Wiley & Sons, 2016. 82 (1976), 711–712. 311 (2011), 2203–2215. 31 (2011), 639–674. T. Bohman and P. Keevash, *The early evolution of the H-free process.* Inventiones Math. 181 (2010), 291–336. B. Bollobás, P. Catlin and P. Erdős, [*Hadwiger’s conjecture is true for almost every graph.*]{} [European J. Combin.]{} 1 (1980), 195–199. In [Graph Theory]{}, vol. 62 of [North-Holland Mathematics Studies]{}. North-Holland, (1982), 71–73. Z. Dvořák and L.
Yepremyan, *Independence number in triangle-free graphs avoiding a minor*, preprint, arXiv:1907.12999. 24 (2010), 1313–1321. conjecture. 95 (2005), 152–167. 56 (2007), 219–226. A. V. Kostochka, *The minimum [H]{}adwiger number for graphs with a given mean degree of vertices.* Metody Diskret. Analiz. no. 38 (1982), 37–58. T. K[ö]{}v[á]{}ri, V. S[ó]{}s, and P. Tur[á]{}n, *On a problem of K. Zarankiewicz.* Colloq. Math. 3 (1954), 50–57. K. Kuratowski, Sur le probleme des courbes gauches en topologie. [*Fund. Math.*]{} 16 (1930), 271–283. M. Krivelevich and B. Sudakov, *Minors in expanding graphs.* Geom. Funct. Anal. 19 (2009), 294–331. D. Kühn and D. Osthus, *Complete minors in [$K_{s,s}$]{}-free graphs.* Combinatorica 25 (2005), 49–64. L. Lovász, *Graph minor theory.* Bull. Amer. Math. Soc. 43 (2006), 75–86. W. Mader, [*Homomorphiesätze für Graphen.*]{} [Math. Ann. 178]{} (1968), 154–168. 64 (1987), 39–42. S. Norin, *New tools and results in graph minor structure theory.* Surveys in combinatorics 2015, London Math. Soc. Lecture Note Ser., vol. 424, Cambridge Univ. Press, Cambridge, 2015, 221–260. S. Norin and X. Song, Breaking the degeneracy barrier for coloring graphs with no $K_t$ minor, preprint, arXiv:1910.09378. 310 (2010), 480 – 488. 23 (2003), 333–363. L. Postle, Halfway to Hadwiger’s conjecture, preprint, arXiv:1911.01491. N. Robertson and P. D. Seymour, *Graph minors. [XX]{}. [W]{}agner’s conjecture.* J. Combin. Theory Ser. B 92 (2004), 325–357. 14 (1993), 279–361. (2015), 417––437. A. Thomason, *An extremal function for contractions of graphs.* Math. Proc. Cambridge Philos. Soc. 95 (1984), 261–265. 114 (1937), 570–590. K. Wagner, Über eine Eigenschaft des Satzes von Kuratowski. [*Deutsche Mathematik*]{} 2 (1937), 280–285. 11 (1987), 197–204. [^1]: Department of Mathematics, ETH, Zürich, Switzerland. Email: [](mailto:matija.bucic@math.ethz.ch). [^2]: Department of Mathematics, Stanford University, Stanford, CA 94305. Email: [jacobfox@stanford.edu]{}. 
Research supported by a Packard Fellowship and by NSF Award DMS-1855635. [^3]: Department of Mathematics, ETH, Zürich, Switzerland. Email: [](mailto:benjamin.sudakov@math.ethz.ch). Research supported in part by SNSF grant 200021-175573. [^4]: The chromatic number of a graph $G$, denoted $\chi(G)$, is the minimum number of colours required to colour vertices of $G$ so that there are no adjacent vertices of the same colour.
--- abstract: 'Let $F(s)=\sum_n a_n/\lambda_n^s$ be a general Dirichlet series which is absolutely convergent on $\Re(s)>1$. Assume that $F(s)$ has an analytic continuation and satisfies a growth condition, which gives rise to certain invariants namely the degree $d_F$ and conductor $\alpha_F$. In this paper, we show that there are at most $2d_F$ general Dirichlet series with a given degree $d_F$, conductor $\alpha_F$ and residue $\rho_F$ at $s=1$. As a corollary, we get that elements in the extended Selberg class with positive Dirichlet coefficients are determined by their degree, conductor and the residue at $s=1$.' address: | Department of Mathematics and Statistics\ Queen’s University\ Jefferey Hall, 48 University Ave\ Kingston\ Canada, ON\ K7L 3N6 author: - 'Anup B. Dixit' bibliography: - 'extremal.bib' title: A uniqueness property of general Dirichlet series --- **Introduction** ================ The study of $L$-functions plays a central role in number theory. These are functions attached to arithmetic and geometric objects. Such functions are typically defined as a Dirichlet series of the form $$F(s) = \sum_{n=1}^{\infty} \frac{a_n}{n^s},$$ which is absolutely convergent on a half plane $\Re(s) > \sigma_0$, where the coefficients $a_n$ arise from the underlying object. Then we try to analytically continue $F(s)$ to the whole complex plane. If this is possible, the value distribution of $F(s)$ sheds light on many important arithmetic properties of the underlying structure.\ The most common example of an $L$-function is the Riemann zeta-function, defined on $\Re(s)>1$ as $$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}.$$ It has an analytic continuation to $\mathbb{C}$ except for a simple pole at $s=1$ with residue $1$. Furthermore, it satisfies a functional equation of the following form. 
If $$\Psi(s) := \pi^{-s/2}\, \Gamma\left(\frac{s}{2}\right)\, \zeta(s),$$ then $$\Psi(s) = \Psi(1-s).$$ Since $\Gamma(s/2)$ has poles at all non-positive even integers, we get that $$\label{trivial-zeros} \zeta(-2n) = 0$$ for all $n\in \mathbb{N}$. These are called the trivial zeros of $\zeta(s)$. Moreover, using properties of the gamma function, we get $$\label{growthzeta} \max_{|s|=r} |\zeta(s)| \ll \frac{\Gamma(r)}{(2\pi)^r},$$ for $r\geq 3/2$. However, this growth condition fails if $2\pi$ in the above bound is replaced by $2\pi + \epsilon$ for any $\epsilon>0$, i.e., for any constant $\eta >0$ there exists an infinite sequence $\{r_m\}$ of positive real numbers going to infinity such that $$\label{notgrowthzeta} \max_{|s|=r_m} |\zeta(s)| \geq \frac{\eta\, \Gamma(r_m)}{(2\pi + \epsilon)^{r_m}}.$$ This gives rise to a converse question, which was answered affirmatively by A. Beurling [@beurling] in 1950. He proved the following. \[beurling-theorem\] Consider a function $F(s)$ satisfying the following properties. 1. For $0<\lambda_1 < \lambda_2 <\cdots$, let $$F(s) = \sum_{n=1}^{\infty} \frac{1}{\lambda_n^s}$$ be absolutely convergent for $\Re(s)>1$. 2. $F(s)$ has an analytic continuation to $\mathbb{C}$ except for a simple pole at $s=1$ with residue $1$. 3. $F(-2n)=0$ for all $n\in\mathbb{N}$. 4. $F(s)$ satisfies the two growth conditions above. Then $$F(s) = \zeta(s) \hspace{4mm} \text{or}\hspace{4mm} F(s) = (2^s-1)\, \zeta(s).$$ The goal of this paper is to extend this result to any general Dirichlet series of the form $$F(s) = \sum_{n=1}^{\infty} \frac{a_n}{\lambda_n^s},$$ which is absolutely convergent on $\Re(s)>1$. In order to state the main theorem, we first introduce some growth parameters.

**Growth Parameters**
---------------------

Let $F(s)$ be a general Dirichlet series given by $$F(s) = \sum_{n=1}^{\infty} \frac{a_n}{\lambda_n^s},$$ absolutely convergent on $\Re(s)>1$. Suppose $F(s)$ has an analytic continuation to $\mathbb{C}$, except for a simple pole at $s=1$ with residue $\rho_F$.
We say that $F(s)$ satisfies the *growth condition* $(\dagger)$ if there exists a positive integer $d_F$ and a positive real number $\alpha_F$ such that $$\label{degree-conductor-growth-condition} \max_{|s|=r} |F(s)| \ll \frac{\Gamma(r)^{d_F}}{\alpha_F^r},$$ for $r\geq 3/2$, and for any $\eta,\epsilon > 0$, there exists an infinite sequence $\{r_m\}$ of positive real numbers going to infinity such that $$\max_{|s|=r_m} |F(s)| \geq \frac{\eta \, \Gamma(r_m)^{d_F}}{(\alpha_F+ \epsilon)^{r_m}}.$$ If $F(s)$ satisfies $(\dagger)$, we call $d_F$ the degree of $F(s)$ and $\alpha_F$ the conductor of $F(s)$. These are closely related to the notion of degree and conductor arising from the functional equation of elements in the Selberg class.\ In 1989, Selberg [@selberg] introduced a class of $L$-functions $\mathbb{S}$ which is expected to encapsulate all familiar $L$-functions arising from arithmetic and geometry. For instance, the Riemann zeta-function $\zeta(s)$, the Dirichlet $L$-functions $L(s,\chi)$, the Dedekind zeta-functions $\zeta_K(s)$ etc. are all members of the Selberg class $\mathbb{S}$. This class has been extensively studied over the past few decades. For a precise definition of $\mathbb{S}$ and recent developments, the reader may refer to the excellent survey articles [@kaczorowski-survey], [@Rm3], [@Perelli2] and [@Perelli1].
For $F\in\mathbb{S}$, there exist real numbers $Q>0$, $\alpha_i\geq 0$, complex numbers $\beta_i$ for $1\leq i \leq m$ and $w\in\mathbb{C}$, with $\Re(\beta_i) \geq 0$ and $|w| =1$, such that $$\label{fneq} \Phi(s) := Q^s \prod_{i=1}^m \Gamma(\alpha_i s + \beta_i) F(s)$$ satisfies the functional equation $$\Phi(s) = w \overline{\Phi}(1-\overline{s}).$$ Although the functional equation is not unique, because of the duplication formula of the $\Gamma$-function, we have some well-defined invariants, namely the degree of $F(s)$, denoted $d_F$, defined as (see [@Conrey]) $$\label{degree-functional-eqn} d_F = 2\sum_{i=1}^m \alpha_i,$$ and the conductor of $F(s)$, denoted $q_F$, defined as (see [@Per2]) $$\label{conductor-functional-eqn} q_F := (2\pi)^{d_F} Q^2 \prod_{i=1}^m \alpha_i^{2\alpha_i}.$$ It is an intriguing conjecture that for $F\in\mathbb{S}$, $d_F$ and $q_F$ are always positive integers. It is easily seen that if $F(s)$ satisfies a functional equation of the type above, then $F(s)$ also satisfies the growth condition $(\dagger)$. Moreover, the notions of degree $d_F$ in both cases coincide and $$\alpha_F = \frac{(2\pi)^{d_F}}{q_F}.$$ It is worth emphasizing here that the growth condition $(\dagger)$ is a far less restrictive condition than the functional equation. This is evident from [@Km], where V. K. Murty introduced a class of $L$-functions $\mathbb{M}$ based on growth conditions, which contains the Selberg class $\mathbb{S}$. He proved that $\mathbb{M}$ is closed under addition and has a ring structure, which is not the case for $\mathbb{S}$. A more extensive study of this class is undertaken in [@anup-thesis].

**The class $\mathbb{B}$**
--------------------------

Let $\mathbb{B}$ denote the class of meromorphic functions $F(s)$ satisfying the following properties. 1.
[**General Dirichlet series**]{} - It can be expressed as a general Dirichlet series $$F(s) = \sum_{n=1}^{\infty} \frac{a_n}{\lambda_n^s},$$ which is absolutely convergent on $\Re(s)>1$, where $0<\lambda_1 < \lambda_2 < \cdots$ and $a_n > 0$. We also normalize the leading coefficient, $a_1=1$. 2. [**Analytic continuation**]{} - It has an analytic continuation to $\mathbb{C}$ except for a *simple* pole at $s=1$ with residue $\rho_F$. 3. [**Growth condition**]{} - It satisfies the growth condition $(\dagger)$ with associated invariants $d_F$ and $\alpha_F$. 4. [**Trivial zeros**]{} - $F(-2n d_F)=0$ for all $n\in\mathbb{N}$. In this paper, we consider the question of how many $F\in \mathbb{B}$ can have the same values of degree $d_F$, conductor $\alpha_F$ and residue $\rho_F$ at $s=1$.\ This question is motivated by the classification problem in the Selberg class $\mathbb{S}$. The degree conjecture in $\mathbb{S}$ asserts that for any $F\in\mathbb{S}$, the degree $d_F$ is a non-negative integer. Towards this conjecture, it was shown by Conrey and Ghosh [@Conrey] that there is no $F\in \mathbb{S}$ with $0<d_F<1$. Later, Kaczorowski and Perelli [@Perelli3] showed that there is no $F\in\mathbb{S}$ with $1<d_F < 2$. A more intricate question is to classify all elements in $\mathbb{S}$ with a given degree. In this direction, Kaczorowski and Perelli [@Perelli4] showed that in the Selberg class if $d_F=1$, then $F(s) = \zeta(s)$ or $F(s) = L(s + i\theta, \chi)$, where $\chi$ is a non-principal irreducible Dirichlet character modulo $q$ and $\theta\in\mathbb{R}$. However, no such classification is known for elements in $\mathbb{S}$ with degree $2$ or higher.\ In this paper, we restrict ourselves to the class $\mathbb{B}$, where the functions have positive generalized Dirichlet coefficients and a simple pole at $s=1$. Note that in $\mathbb{B}$, we no longer have the restriction of Euler product or functional equation.
Instead, we enforce a weaker condition on the growth of the function. Surprisingly, we show that the degree $d_F$, the conductor $\alpha_F$ and the residue $\rho_F$, together with a certain additional condition, essentially determine the function $F(s)$. The proof is inspired by the work of A. Beurling [@beurling].

Main Theorem
------------

\[main-theorem\] Suppose $F\in \mathbb{B}$ satisfies $$\label{main-condition} \alpha_F > \left(\frac{\pi \rho_F}{\sin \frac{\pi}{2d_F}}\right)^{d_F}.$$ Then there are at most $2d_F$ elements $g\in\mathbb{B}$ such that $d_F = d_g$, $\alpha_F = \alpha_g$ and $\rho_F = \rho_g$. Note that if $F(s)=\zeta(s)$, then Theorem \[main-theorem\] gives Theorem \[beurling-theorem\] of Beurling. The *extended Selberg class* $\mathbb{S}^{\#}$, defined in [@Per2], consists of all functions $F(s)$ which have a Dirichlet series representation $$F(s)= \sum_{n=1}^{\infty}\frac{a_n}{n^s}$$ on $\Re(s)>1$ with $a_1=1$, can be analytically continued to the whole complex plane $\mathbb{C}$ except for a possible pole at $s=1$, and satisfy a functional equation of the type considered above. In the context of $\mathbb{S}^{\#}$, Theorem \[main-theorem\] gives There are at most $2d_F$ elements in the extended Selberg class $\mathbb{S}^{\#}$ having non-negative Dirichlet coefficients, with degree $d_F$, conductor $q_F$ and a simple pole at $s=1$ with residue $\rho_F$, satisfying $$q_F^{-1/d_F} > \frac{ \rho_F}{2\sin \frac{\pi}{2d_F}}.$$

**Preliminaries**
=================

In this section, we state and prove some lemmas that will be useful in the proof of Theorem \[main-theorem\]. \[derivative-bound\] Suppose $g(s)\in \mathbb{B}$ satisfies the growth condition $(\dagger)$. Then $$\max_{|s| = r} \frac{|g^{(k)}(s)|}{k!} \ll \frac{ 2^{k+1}\,(\Gamma(r+1))^{d_g}}{\alpha_g^r},$$ for $r\geq 2$.
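The lemma is a direct Cauchy estimate on a circle of radius $1/2$. Cauchy's coefficient formula itself is easy to verify numerically; here is a check for $g=\exp$, whose $k$-th Taylor coefficient at $a$ is $e^a/k!$ (the trapezoid rule is spectrally accurate on the circle; names are ours):

```python
import cmath
import math

def taylor_coefficient(f, a, k, r=0.5, n_pts=4096):
    """k-th Taylor coefficient of f at a via Cauchy's integral formula,
    (1/2*pi*i) * contour integral of f(z)/(z-a)^{k+1} on |z-a| = r,
    discretised by the trapezoid rule."""
    acc = 0j
    for m in range(n_pts):
        theta = 2 * math.pi * m / n_pts
        w = cmath.exp(1j * theta)                 # point on the unit circle
        # f(z) (z-a)^{-k-1} dz with dz = i r e^{i theta} d(theta); the
        # factors of i and 2*pi cancel, leaving an average of f(z) (rw)^{-k}.
        acc += f(a + r * w) * (r * w) ** (-k)
    return acc / n_pts

coeff = taylor_coefficient(cmath.exp, a=1.0, k=3)   # should equal e / 3!
```

The lemma's bound then follows by taking absolute values inside the integral, exactly as in the proof that follows.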
Since $g(s)$ is analytic in the region $|s|\geq 3/2$, using Cauchy’s formula, for any $a\in\mathbb{C}$ with $|a|\geq 2$, we have $$g^{(k)}(a) = \frac{k!}{2\pi i} \int_{C(a, 1/2)} \frac{g(z)}{(z-a)^{k+1}} \, dz,$$ where $C(a,1/2)$ is the circle of radius $1/2$ centered at $a$. Therefore, writing $r=|a|$, we have $$\begin{aligned} \frac{|g^{(k)}(a)|}{k!} \leq \frac{1}{2\pi} \int_{C(a, 1/2)} \frac{|g(z)|}{|z-a|^{k+1}} \, |dz| \ll \frac{ 2^{k+1}(\Gamma(r+1/2))^{d_g}}{\alpha_g^r} \leq \frac{ 2^{k+1}(\Gamma(r+1))^{d_g}}{\alpha_g^r}.\end{aligned}$$ We also use the following bound on $\Gamma(s)$. \[gamma-bound\] For $\sigma \geq 2$, $$\int_{-\infty}^{\infty} |\Gamma(\sigma + 2 + it) | \, dt \ll \sigma^3 \Gamma(\sigma).$$ Let $s=\sigma+2+it$. For $\sigma \geq 2$ and $t> 2 (\sigma +2)$, by Stirling’s approximation, we get $$\Gamma(s) = \sqrt{2\pi}\, s^{s-1/2}\, e^{-s}\left(1+O\left(\frac{1}{|s|}\right)\right).$$ Therefore, we get $$\label{gamma-bound-one} |\Gamma(s)| = \sqrt{2\pi}\, |s|^{(\sigma + 2-1/2)}\, e^{-t\, \arg(s)}\, e^{-(\sigma +2)}\left(1+O\left(\frac{1}{|s|}\right)\right).$$ Since $t\geq 2(\sigma + 2)$, we have $$\frac{\pi}{2} > \arg(\sigma + 2 + it) \geq \frac{\pi}{3} \hspace{5mm} \text{ and } \hspace{5mm} |\sigma + 2 + it| \leq \sqrt{5} |t|.$$ Using this in the expression above, we have $$|\Gamma(\sigma + 2 + it)| \ll t^{(\sigma + 2 -1/2)}\, e^{-t}.$$ Furthermore, since $\Gamma(\overline{z}) = \overline{\Gamma(z)}$, for $s=\sigma +2+ it$, $\sigma\geq 2$ and $t\leq -2(\sigma +2)$, we have $$|\Gamma(\sigma + 2 + it)| \ll |t|^{\sigma + 2 -1/2}\, e^{-|t|}.$$ Thus, we get $$\int_{2(\sigma+2)}^{\infty} |\Gamma(\sigma + 2 + it)| \, dt \ll \int_{\sigma+2}^{\infty} t^{\sigma + 2 -1/2} e^{-t} \, dt \leq \int_{0}^{\infty} t^{\sigma + 2 -1/2} e^{-t} \, dt = \Gamma\left(\sigma + 2 - \frac{1}{2}\right) \ll \sigma^3\, \Gamma(\sigma).$$ Similarly, we also have $$\int_{-\infty}^{-2(\sigma+2)} |\Gamma(\sigma + 2 + it)| \, dt \ll \sigma^3\, \Gamma(\sigma).$$ For $|t|< 2(\sigma +2)$, we use the trivial estimate
$$|\Gamma(\sigma+2+it)| \leq \Gamma(\sigma+2),$$ to get $$\int_{-2(\sigma+2)}^{2(\sigma + 2)} |\Gamma(\sigma +2 +it)| \, dt \ll (\sigma+2) \Gamma(\sigma +2) \ll \sigma^3 \Gamma(\sigma).$$ This completes the proof of the lemma. We also recall the Phragmén-Lindelöf principle (see [@Titchmarsh-functions section 5.61]). \[Phragmen\] Let $f(z)$ be a function of $z=re^{i\theta}$, analytic in the region $D$ between two straight lines making an angle $\pi/\alpha$ at the origin, and on the lines themselves. Suppose that $$\label{phragmen-inequality} |f(z)| \leq M$$ on the lines, and that, as $r\to\infty$, $$f(z) = O\left( e^{r^{\beta}}\right),$$ where $\beta < \alpha$, uniformly in the angle. Then the inequality $|f(z)| \leq M$ holds throughout the region $D$. A well-known consequence of the Phragmén-Lindelöf theorem is the following theorem due to Carlson (see [@Titchmarsh-functions section 5.8]), which will also be useful. \[carlson\] Let $f(z)$ be entire and of the form $O\left(e^{k|z|}\right)$; and let $f(z) = O\left(e^{-a|z|}\right)$, where $a>0$, on the real axis. Then $f(z)=0$ identically. We also use the following lemma, which is similar to a result of Beurling [@beurling], and whose proof follows a similar argument. \[uniqueness-growth\] There are only two entire functions $f$ of exponential type with $f(0)=1$ which on the real axis satisfy a relation of the form $$\label{uniqueness-hrowth-condition} f(x) - a |x|^p e^{\beta |x|} = O(e^{-\delta |x|}),$$ where $a>0, \beta>0, \delta >0$ and $p$ are real constants, viz.: $$f(z) = \frac{e^{\beta z} - e^{-\beta z}}{2\beta z}$$ and $$f(z) = \frac{e^{\beta z} + e^{-\beta z}}{2}.$$ Since $f(x)$ satisfies the above relation on the real line, we have $$f(x)-f(-x) = O(e^{-\delta|x|})$$ on the real line. Therefore, by Carlson’s Theorem \[carlson\], we conclude that $f$ is an even function. Since $f(z)$ is of exponential type, there exists $A>0$ such that $$\log |f(z)| < A |z|$$ for $|z|>1$.
Thus, on the boundary of the region $\Re(z)>1$, $\Im(z)>0$, the function $h(z) := f(z) - a z^p e^{\beta z}$ satisfies, for $z=x+iy$, $$|h(z)| \ll e^{Ay-\delta x}.$$ By applying the Phragmén-Lindelöf Theorem \[Phragmen\], we get that the bound $$\label{boundary-inequality-1} |h(z)| \ll e^{Ay-\delta x}$$ holds in the whole region $\Re(z)>1$, $\Im(z)>0$. By a similar argument, we also have the analogous bound, with $|y|$ in place of $y$, in the region $\Re(z)>1$, $\Im(z)<0$. Hence, in the strip $|\Im(z)|<1$, $\Re(z)>1$, we have $h(z) = O(e^{-\delta x})$. Thus, using Cauchy’s formula, $$h^{(n)}(x) = \frac{n!}{2\pi i} \int_{C(x,1)} \frac{h(z)}{(z-x)^{n+1}} \, dz,$$ where $C(x,1)$ is a circle of radius $1$ with center $x$, we get $$h^{(n)}(x) = O(e^{-\delta x})$$ for all real $x>1$. Now, note that for real $x\neq 0$, $|x|^p e^{\beta x}$ and $|x|^p e^{-\beta x}$ are both solutions to the linear differential equation $$L(u) = x^2 u'' - 2 p x u' - (\beta^2 x^2 - p(p+1)) u =0.$$ Therefore, for real $x>1$, we have $$L(f) = L(h) = O\left(x^2 e^{-\delta x}\right),$$ and since $f$ is even and $L$ is invariant under $x\mapsto -x$, we get $L(f) = O\left(x^2 e^{-\delta |x|}\right)$ for all real $x$ with $|x|>1$. Since $L(f)$ is itself an entire function of exponential type, by Carlson’s Theorem \[carlson\], $L(f)$ must vanish identically. Thus, for $x>0$, $f(x)$ is of the form $$f(x) = x^p (c_1 e^{\beta x} + c_2 e^{-\beta x}).$$ Since $f$ is entire, even and satisfies $f(0)=1$, the only choices we have are $$p=-1, \quad c_1 = -c_2 = \frac{1}{2\beta},$$ or $$p=0, \quad c_1 = c_2 = \frac{1}{2}.$$ This proves the lemma.

**Proof of Theorem \[main-theorem\]**
=====================================

Let $F(s)\in \mathbb{B}$ be fixed. Suppose $g(s) \in \mathbb{B}$ satisfies $d_g = d_F$, $\alpha_g = \alpha_F$ and $\rho_g=\rho_F$. Our goal is to show that there are at most $2 d_F$ possibilities for $g(s)$. Let the general Dirichlet series for $g(s)$ be given by $$g(s) = \sum_{n=1}^{\infty} \frac{a_n}{\lambda_n^s},$$ which is absolutely convergent for $\Re(s)>1$.
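As a concrete illustration of Lemma \[uniqueness-growth\]: in the model case $g=\zeta$ (so $\lambda_n=n$, $a_n=1$, $d_F=1$), the product $f(z)=\prod_n(1+z^2/\lambda_n^{2d_F})^{a_n}$ constructed just below equals $\sinh(\pi z)/(\pi z)$, which is exactly the first solution of the lemma with $\beta=\pi$. A numerical check of this (function names are ours):

```python
import math

def beurling_product(x, d_f=1, n_terms=200_000):
    """Partial product over n <= N of (1 + x^2 / lambda_n^{2 d_F}) for
    lambda_n = n and a_n = 1, i.e. the zeta case; the full product
    equals sinh(pi x)/(pi x)."""
    out = 1.0
    for n in range(1, n_terms + 1):
        out *= 1.0 + x * x / float(n) ** (2 * d_f)
    return out

x = 1.7
exact = math.sinh(math.pi * x) / (math.pi * x)
approx = beurling_product(x)
```

The truncation error of the partial product is roughly $e^{x^2/N}-1$, so $N=2\cdot 10^5$ terms already give four to five correct digits.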
Thus, there exists a $\delta>0$ such that $\lambda_n/n > \delta$ for all $n$. Now consider the function $$f(z) := \prod_{n=1}^{\infty} \left( 1 + \frac{z^2}{\lambda_n^{2d_F}}\right)^{a_n}.$$ Since $\lambda_n/n >\delta >0$ and $a_n \geq 0$ for all $n$, $f(z)$ is an entire function of exponential type. Using the well-known identity (see [@tables_mellin_transform section 1.4, formula 4.4]) $$\int_0^{\infty} \log (1+x^2) \, \frac{dx}{x^{1+s}} = \frac{\pi}{s \sin \frac{\pi s}{2}}$$ for $0<\Re(s)<2$, we have $$\int_0^{\infty} \log (1+x^2) \, \frac{dx}{x^{1+s/d_F}} = \frac{\pi\, d_F}{s \sin \frac{\pi s}{2d_F}}$$ for $0< \Re(s) < 2d_F$. Changing variables $x \mapsto x/\lambda_n^{d_F}$, we get $$\int_0^{\infty} \log \left(1+\frac{x^2}{\lambda_n^{2d_F}}\right)^{a_n} \, \frac{dx}{x^{1+s/d_F}} = \frac{a_n}{\lambda_n^s}\left(\frac{\pi \, d_F}{s \sin \frac{\pi s}{2d_F}}\right)$$ for $0<\Re(s)<2d_F$. Summing over $n$, we have $$\int_0^{\infty}\log f(x)\, \frac{dx}{x^{1+s/d_F}} = \left(\frac{\pi d_F}{s \sin \frac{\pi s}{2d_F}}\right) g(s),$$ for $1<\Re(s)<2d_F$. Now define $$\psi(s) := \left(\frac{\pi d_F}{s \sin \frac{\pi s}{2d_F}}\right) g(s).$$ Since $g(s)$ has an analytic continuation to $\mathbb{C}$ except for a simple pole at $s=1$, we have a meromorphic continuation for $\psi(s)$ in the region $\Re(s) < 2d_F$. Furthermore, since $g(-2nd_F)=0$ for all $n$, the zeros of $\sin (\pi s/2d_F)$ do not generate poles for $\psi(s)$ in this region. Therefore, $\psi(s)$ has an analytic continuation on $\Re(s)< 2d_F$, except for poles at $s=0$ and $s=1$, with principal parts $$\frac{2\,d_F^2 \,g(0)}{s^2} + \frac{2\,d_F^2 \,g'(0)}{s}\hspace{5mm}\text{and}\hspace{5mm} \frac{1}{(s-1)} \left(\frac{\pi \rho_F\, d_F }{\sin \frac{\pi}{2d_F}}\right)$$ respectively.\ We now capture the growth of $\psi(s)$. 
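For completeness, the principal part at $s=0$ can be read off from the expansions $\sin\frac{\pi s}{2d_F} = \frac{\pi s}{2d_F}\left(1+O(s^2)\right)$ and $g(s) = g(0) + g'(0)\,s + O(s^2)$: $$\frac{\pi d_F}{s \sin \frac{\pi s}{2d_F}}\, g(s) = \frac{2 d_F^2}{s^2}\left(1+O(s^2)\right)\left(g(0)+g'(0)\,s+O(s^2)\right) = \frac{2\,d_F^2\, g(0)}{s^2} + \frac{2\,d_F^2\, g'(0)}{s} + O(1),$$ while at $s=1$ the factor $\pi d_F/(s \sin \frac{\pi s}{2d_F})$ is regular, and the simple pole of $g(s)$ with residue $\rho_F$ produces the stated residue $\pi \rho_F d_F / \sin \frac{\pi}{2d_F}$.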
Observe that on the boundary of the region $\Im(s)>0$, $\Re(s)< 3/2$ and $|s|>2$, $$\label{bound-for-psi} \left | \frac{\psi(s)}{\alpha_F^s \, \Gamma(2-s)^{d_F}} \right| \ll 1.$$ Indeed, on the vertical line $s= 3/2 + it$, $|s|>2$, we have $|g(s)|\ll 1$. We also have $$\left |\Gamma\left(\frac{1}{2} +it \right)\right| > \sqrt{\pi} e^{-\frac{\pi}{2} |t|} \hspace{5mm}\text{ and }\hspace{5mm} \left|\sin \left(\frac{\pi}{2d_F}\left(\frac{3}{2} + it\right)\right)\right| > \frac{1}{4} e^{-\frac{\pi}{2d_F}|t|}.$$ Therefore, we get the bound \eqref{bound-for-psi} for $s= 3/2 + it$ and $|s|>2$.\ On the negative real axis, we consider the following two cases. If $|\sigma + 2nd_F| \leq \frac{1}{4d_F}$ for some $n\in \mathbb{N}$, then $|(\sigma + 2nd_F)/\sin (\pi \sigma/2d_F)|$ is bounded above and below by positive constants. Using the Taylor expansion of $g(s)$ at $s=-2nd_F$ and Lemma \[derivative-bound\], we get $$\left| \frac{g(\sigma)}{\sin \frac{\pi \sigma}{2d_F}} \right| \ll \left|(\sigma+2nd_F) g'(-2nd_F) + (\sigma+2nd_F)^2\, \frac{g''(-2nd_F)}{2} + \cdots \right| \ll \left|\frac{(\Gamma(r))^{d_F}}{(k+\epsilon)^r}\right |,$$ for every $\epsilon>0$. On the other hand, if $|\sigma + 2nd_F| > \frac{1}{4d_F}$ for every $n \in \mathbb{N}$, then $$\left| \frac{1}{\sin (\pi \sigma/2d_F)}\right| \ll 1.$$ Now, using Euler’s reflection formula $$\Gamma(z)\Gamma(1-z) = \frac{\pi}{\sin \pi z}$$ for $z\notin \mathbb{Z}$ and the growth condition $(\dagger)$, we get for $\sigma<0$, $$\begin{aligned} \left | \frac{\psi(\sigma)}{\alpha^{\sigma} \Gamma(2- \sigma)^{d_F}} \right| \ll \left | \frac{g(\sigma)}{|\sigma|\, \alpha^{\sigma} \sin (\frac{\pi \sigma}{2d_F}) \Gamma(2- \sigma)^{d_F}} \right| \ll \left | \frac{(\Gamma(-\sigma))^{d_F}}{|\sigma|\,\alpha^{\sigma} \alpha^{-\sigma} (\Gamma(2- \sigma))^{d_F}} \right| \ll 1.\end{aligned}$$ This proves the bound \eqref{bound-for-psi}.
Applying the Phragmén-Lindelöf principle, we get that all $s$ in the region $\Im(s)>0$, $\Re(s)<3/2$ and $|s|>2$ satisfy $$\label{boun-psi-region} \left | \frac{\psi(s)}{\alpha_F^s \, \Gamma(2-s)^{d_F}} \right| \ll 1.$$ By symmetry, \eqref{boun-psi-region} in fact holds in the region $\Re(s)<3/2$ and $|s|>2$.\ Now, using Lemma \[gamma-bound\], we have for $\sigma \geq 2$, $$\label{psi-bound} \int_{-\infty}^{\infty} |\psi(-\sigma + it)| \, dt \ll \frac{\sigma^{3d_F} (\Gamma(\sigma))^{d_F}}{\alpha^{\sigma}}.$$ Applying Mellin inversion, for $d_F <c< 2d_F$, $$\log f(x) = \frac{1}{2\pi i d_F} \int_{(c)} \psi(s) x^{s/d_F} \, ds.$$ Moving the line of integration to the left, we get for $c<0$, $$\frac{1}{2\pi i d_F} \int_{(c)} \psi(s) x^{s/d_F} \, ds = \log f(x) - \frac{\pi \,\rho_F\, d_F}{\sin \frac{\pi}{2d_F}} x^{1/d_F} - 2 d_F\, g(0) \log x - 2 d_F^2 \,g'(0).$$ Set $a=2\, d_F \,g(0)$ and $b= e^{2 \,d_F^2 \, g'(0)}$ and $$m = \frac{\pi \, \rho_F \, d_F}{\sin \frac{\pi}{2d_F}}.$$ Using \eqref{psi-bound}, for $c>2$, $$\left| \log f(x) - \log (b x^a e^{m x^{1/d_F}}) \right| \ll \frac{c^{3d_F} (\Gamma(c))^{d_F}}{(\alpha_F x)^c}.$$ Choose $c = (\alpha_F \, x)^{1/d_F}$. Using Stirling’s formula and taking $x\to \infty$, we get $$\frac{c^{3d_F} (\Gamma(c))^{d_F}}{(\alpha_F x)^c} \ll e^{-((\alpha_F -\epsilon) x)^{1/d_F} d_F },$$ for every $\epsilon>0$. Thus $$\left| \log f(x) - \log (b x^a e^{m x^{1/d_F}}) \right| \ll e^{-((\alpha_F-\epsilon) x)^{1/d_F} d_F },$$ for every $\epsilon>0$. If $f(x)>b x^a e^{m x^{1/d_F}}$, exponentiating both sides gives $$\frac{f(x)}{b x^a e^{mx^{1/d_F}}} = 1 + O\left( e^{-((\alpha_F-\epsilon) x)^{1/d_F} d_F }\right).$$ Using the condition in the theorem and the fact that $e^y = 1 + O(y)$ for $y\ll 1$, we have for $x\to\infty$ $$\label{equation-1} f(x) = b x^a e^{mx^{1/d_F}} + O\left( e^{-\delta x^{1/d_F}}\right).$$ Moreover, if $f(x) < b x^a e^{m x^{1/d_F}}$, using the same condition, we also get \eqref{equation-1}. Since $f$ is even, $$\left| f(x) - b |x|^a e^{m|x|^{1/d_F}}\right| \ll e^{-\delta |x|^{1/d_F}}$$ for all real $x \neq 0$.
From the definition of $f(z)$, note that $h(z) := f(z^{d_F})$ is also an entire function of exponential type and satisfies $$\left| h(x) - b |x|^{a d_F} e^{m|x|}\right| \ll e^{-\delta |x|}$$ for all real $x\neq 0$. Using Lemma \[uniqueness-growth\] on $h(z)$, we conclude that there are at most two such functions. Hence, there are at most $2 d_F$ choices for $f(x)$ and therefore for $g(s)$. This proves Theorem \[main-theorem\]. **Application to Dedekind zeta-functions** ========================================== For a number field $K/\mathbb{Q}$, let $n_K$ denote the degree $[K:\mathbb{Q}]$ and $|\Delta_K|$ denote the absolute discriminant. The Dedekind zeta-function associated to $K$ is defined as $$\zeta_K(s):= \prod_{\mathfrak{P}\subset \mathcal{O}_K} \left( 1- N\mathfrak{P}^{-s}\right)^{-1},$$ for $\Re(s)>1$, where $\mathfrak{P}$ runs over all non-zero prime ideals in the ring of integers of $K$. The function $\zeta_K(s)$ has an analytic continuation to the whole complex plane except for a simple pole at $s=1$. Let the residue of $\zeta_K(s)$ at $s=1$ be $\rho_K$. Additionally, $\zeta_K(s)$ satisfies a functional equation and hence the growth condition $(\dagger)$, with $d_{\zeta_K} = n_K$ and $\alpha_{\zeta_K} = (2\pi)^{n_K} / |\Delta_K|$. Applying Theorem \[main-theorem\], we have \[Dedekind-zeta-corollary\] For the same values of $n_K$, $|\Delta_K|$ and $\rho_K$, there are at most $2n_K$ Dedekind zeta-functions $\zeta_K(s)$ satisfying $$\label{dedekind-zeta-condition} |\Delta_K|^{1/n_K} < \frac{2 \sin (\pi/2n_K)}{\rho_K}.$$ For a fixed degree $n_K$, as we vary the number fields $K$, the root discriminant $|\Delta_K|^{1/n_K} \to \infty$. Thus, Corollary \[Dedekind-zeta-corollary\] yields a uniqueness theorem for only finitely many number fields.
A more interesting application is in the context of asymptotically exact families introduced by Tsfasman-Vlăduţ [@Tsfasman] in 2002.\ For a number field $K$ and any prime power $q$, let $N_q(K)$ denote the number of non-archimedean places $v$ of $K$ such that $\mathrm{Norm}(v)=q$. A sequence $\mathcal{K} = \{K_i\}_{i\in\mathbb{N}}$ of number fields is said to be a family if $K_i\neq K_j$ for $i\neq j$. We say that a family $\mathcal{K}$ is *asymptotically exact* if the limits $$\phi_{\mathbb{R}} := \lim_{i\to \infty} \frac{r_1(K_i)}{g_{K_i}}, \hspace{5mm} \phi_{\mathbb{C}} := \lim_{i\to \infty} \frac{r_2(K_i)}{g_{K_i}},\hspace{5mm} \phi_q := \lim_{i\to\infty} \frac{N_q(K_i)}{g_{K_i}}$$ exist for all prime powers $q$, where $r_1(K_i)$ and $r_2(K_i)$ are the numbers of real and complex embeddings of $K_i$, respectively. We say that an asymptotically exact family $\mathcal{K} = \{K_i\}$ is *asymptotically bad* if $\phi_q = \phi_{\mathbb{R}} = \phi_{\mathbb{C}} =0$ for all prime powers $q$. This is equivalent to saying that the root discriminant $ |\Delta_{K_i}|^{1/n_{K_i}}$ tends to infinity as $i\to \infty$. If an asymptotically exact family $\mathcal{K}$ is not asymptotically bad, we say that it is *asymptotically good*.\ Most naturally occurring families of number fields are asymptotically bad. On the other hand, asymptotically good families are rather mysterious and very little is known about them. In most cases, one must assume the asymptotically good family to be a tower of number fields to prove anything interesting (see for instance [@Tsfasman Theorem 7.3]). It is important to note that the root discriminant $|\Delta_K|^{1/n_K}$ converges to a finite limit over an asymptotically good family. Thus, Corollary \[Dedekind-zeta-corollary\] becomes interesting in this case and sheds light on how many $\zeta_K$ can have the same degree $n_K$, discriminant $\Delta_K$ and residue $\rho_K$ in an asymptotically good family of number fields.
**Acknowledgements** ==================== I am grateful to Prof. M. R. Murty and Prof. V. K. Murty for their suggestions and comments on an earlier version of this paper. This work was supported by a Coleman postdoctoral fellowship at Queen’s University, Kingston.
--- abstract: 'We have studied the composition-induced metal-to-insulator transitions (MITs) of cation substituted Lithium Titanate, in the forms [Li$_{1+x}$Ti$_{2-x}$O$_4$]{} and [LiAl$_y$Ti$_{2-y}$O$_4$]{}, utilising a quantum site percolation model, and we argue that such a model provides a very reliable representation of the noninteracting electrons in this material [*if*]{} strong correlations are ignored. We then determine whether or not such a model of $3d^1$ electrons moving on the Ti (corner-sharing tetrahedral) sublattice describes the observed MITs, with the critical concentration defined by the matching of the mobility edge and the chemical potential. Our analysis leads to quantitative predictions that are in disagreement with the experimentally measured values. For example, experimentally for the [LiAl$_y$Ti$_{2-y}$O$_4$]{} compound an Al concentration of $y_c \approx 0.33$ produces a metal-to-insulator transition, whereas our analysis of a quantum site percolation model predicts $y_c \approx 0.83$. One hypothesis that is consistent with these results is that since strong correlations are ignored in our quantum site percolation model, which includes the effects of configurational disorder only, such strong electronic correlations are both present and important.' author: - 'F. Fazileh' - 'R. J. Gooding' - 'D. C. Johnston' bibliography: - './LiTiO\_MIT\_2lanl.bib' title: | Examining the metal-to-insulator transitions\ in [Li$_{1+x}$Ti$_{2-x}$O$_4$]{} and [LiAl$_y$Ti$_{2-y}$O$_4$]{}\ with a Quantum Site Percolation model --- The oxide spinel [LiTi$_2$O$_4$]{} has been the subject of considerable experimental and theoretical study.
It was first synthesised and structurally characterised in 1971 by Deschanvres [*et al.*]{}[@Deschanvres71]. Superconductivity, at 11 K, was identified in 1973 by one of the present authors and his collaborators.[@Johnston73] A comprehensive study of the normal state and superconducting properties of [Li$_{1+x}$Ti$_{2-x}$O$_4$]{}  (for $0 \leq x \leq 1/3$) was reported in 1976,[@Johnston76I; @Johnston76II] and a superconducting transition temperature of 13 K was observed. A recent review[@Moshopoulou99] highlights many of the advances made since then. There are several reasons to study this system. Firstly, it is interesting to note that superconductivity among spinel systems is very rare; *e.g.*, of the 300 or so known spinels,[@Moshopoulou99] only four of them are superconductors - CuRh$_2$Se$_4$ (T$_c = 3.49$ K), CuV$_2$S$_4$ (T$_c = 4.45$ K), CuRh$_2$S$_4$ (T$_c = 4.8$ K), and [LiTi$_2$O$_4$]{} (T$_c = 11.3$ K) - and only one of these four is an oxide. So, that oxide, [LiTi$_2$O$_4$]{}, has the highest transition temperature of any spinel. Secondly, conduction in this system is believed to take place on the Ti sublattice via the $t_{2g}$ orbitals, as suggested, *e.g.*, by electronic structure calculations,[@Satpathy87; @Massidda88] and these sites form a corner-sharing tetrahedral lattice (CSTL). Thus, this system represents an example of conduction on a *fully frustrated* three-dimensional lattice.
Also, in this paper we will argue, supported by considerable experimental evidence, that the conduction paths of [Li$_{1+x}$Ti$_{2-x}$O$_4$]{} and [LiAl$_y$Ti$_{2-y}$O$_4$]{}  are excellent physical realizations of *quantum site percolation*.[@Kirkpatrick72; @Shapir82] Furthermore, and central to the motivation for our work, these same electronic structure calculations[@Satpathy87; @Massidda88] point out that this is a narrow-band electronic system, with the bandwidth of the $t_{2g}$ bands of the order of 2-3 eV, thus suggesting that perhaps strong electronic correlations are present and important. Indeed, others have reached similar conclusions; notably, the phase diagram of Alex Müller,[@Muller96] summarising a view of how the increased strength of electronic correlations in transition metal oxides leads to higher and higher superconducting temperatures, includes the Lithium Titanate system. Although the original experiments and analysis suggested a weak-coupling BCS-like s-wave superconductor,[@Johnston76II] it was later suggested[@Heintz89] that, off stoichiometry, this material is in fact an “anomalous" superconductor (although this claim is not without criticism[@Annett99]). We also mention that photoemission studies of Edwards *et al.*[@Edwards84] are interpreted in terms of strong correlations, and magnetic susceptibility [@Harrison84] and specific heat data[@Heintz89] are interpreted in terms of a density of states that is moderately to strongly enhanced (see Ref.[@Dunstall94] for a discussion of these and other experiments). Taken together, these experimental results form a reasonably strong case for the presence of electronic correlations that are important to the physics of these materials. Lastly, we mention the recent discovery of the first $d$-electron heavy fermion compound, [LiV$_2$O$_4$]{}.[@Kondo97] This system also assumes the spinel structure, but so far no superconductivity has been observed.
The active transition metal ion in [LiV$_2$O$_4$]{} has a formal valence corresponding to $d^{1.5}$, whereas for [LiTi$_2$O$_4$]{} one considers $d^{0.5}$ ions. Thus, [LiTi$_2$O$_4$]{} is a lower electronic density system than [LiV$_2$O$_4$]{}, and an understanding of its behaviour would seem to be a prerequisite to a full understanding of the Vanadate material. For example, why does [LiTi$_2$O$_4$]{} superconduct, with a relatively high T$_c$, while [LiV$_2$O$_4$]{} does not superconduct at all? In an attempt to gain more understanding of the [LiTi$_2$O$_4$]{} system, and, in particular, to try and understand whether or not strong electronic correlations are present, we have examined the density-driven metal-to-insulator transition (MIT) of the related [Li$_{1+x}$Ti$_{2-x}$O$_4$]{} and [LiAl$_y$Ti$_{2-y}$O$_4$]{} compounds; for $x_{MIT}\sim 0.12$ and $y_{MIT} \sim 1/3$, transitions[@Johnston76I; @Harrison85; @Heintz89; @Lambert87; @Lambert90] to a non-metallic state (which we refer to as insulating) are encountered. To be specific, we use a one-electron approach to study this transition employing a quantum site percolation model. Our work may be viewed as addressing the question of whether or not the MIT undergone by this system is driven by disorder only, similar to an Anderson-like MIT. We find that the answer is no, and thus this work provides indirect theoretical support for the proposal that strong electronic correlations are important in a description of the complicated transitions undergone in the [LiTi$_2$O$_4$]{} class of materials. To fully explain our model we note the following: (i) Electronic structure calculations[@Satpathy87; @Massidda88] show that the bands arising from the Ti $3d$ orbitals are separated from the O $2p$ band by about 2.4 eV; thus, the electronic valence state may be represented as Li$^{+1}$(Ti$^{+3.5}$)$_2$(O$^{-2}$)$_4$, and we ignore the oxygen sites and focus on only the Ti sites.
The octahedral crystal field around the Ti cations splits the Ti $3d$ bands into two separate and nonoverlapping $t_{2g}$ and $e_g$ bands, with the $e_g$ bands split off above the $t_{2g}$ bands. Thus, formally this is a very low filling system — 1/12th filling of each of the (approximately) degenerate $t_{2g}$ bands. Although we have generalized our work to include all three $t_{2g}$ bands, here we will present results for a one-band model of the Ti sublattice, and thus the stoichiometric compound is represented by a 1/4-filled band. (ii) Crystallographic refinements of the excess Li system [Li$_{1+x}$Ti$_{2-x}$O$_4$]{} and the doped Al system [LiAl$_y$Ti$_{2-y}$O$_4$]{} have consistently demonstrated that both the excess Li and doped Al ions enter substitutionally onto the Ti sublattice (octahedral sites of the spinel structure).[@Deschanvres71; @Johnston76I; @HarrisonPhD; @LambertPhD; @Lambert90] Assuming that the Li/Al ions that are substituted into the corner-sharing tetrahedral lattice are fully ionised, these sites would block any conduction electrons from hopping onto such sites; *e.g.*, a simple argument supporting this follows from noting that the Li$^+$ ions will be at least doubly negatively charged relative to the occupied Ti$^{3+}$ and unoccupied Ti$^{4+}$ sites that would exist in the absence of substituting Li, and thus electrons will avoid these sites in favour of the Ti sites. We shall assume that these Li-substituting sites are removed from the sites available to the conduction electrons, which implies that this system represents an excellent physical realization of site percolation. Using such a model, the simplest approach to characterising the MIT would then be to associate the transition with the critical concentration at which the classical percolation threshold is reached.
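As an aside, the classical site-percolation construction just described can be sketched in a few lines of Python. This is an illustrative toy only: it uses a simple cubic lattice rather than the corner-sharing tetrahedral lattice analysed in this work, a union-find stands in for the Hoshen-Kopelman algorithm, and the function name is ours.

```python
import numpy as np

def largest_cluster_fraction(L, p, seed=0):
    """Fraction of all sites belonging to the largest cluster of occupied,
    nearest-neighbour-connected sites on an L x L x L simple cubic lattice
    (open boundaries), found with a union-find in place of Hoshen-Kopelman."""
    rng = np.random.default_rng(seed)
    occ = rng.random((L, L, L)) < p          # each site occupied with probability p
    parent = np.arange(L ** 3)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path halving
            i = parent[i]
        return i

    idx = lambda x, y, z: (x * L + y) * L + z
    for x in range(L):
        for y in range(L):
            for z in range(L):
                if not occ[x, y, z]:
                    continue
                for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
                    nx, ny, nz = x + dx, y + dy, z + dz
                    if nx < L and ny < L and nz < L and occ[nx, ny, nz]:
                        ra, rb = find(idx(x, y, z)), find(idx(nx, ny, nz))
                        if ra != rb:
                            parent[ra] = rb   # merge the two clusters
    roots = [find(idx(x, y, z))
             for x in range(L) for y in range(L) for z in range(L) if occ[x, y, z]]
    if not roots:
        return 0.0
    return np.unique(roots, return_counts=True)[1].max() / L ** 3

# Below the lattice's classical threshold the largest cluster is a vanishing
# fraction of the system; above it, a finite fraction of all sites.
print(largest_cluster_fraction(12, 0.2), largest_cluster_fraction(12, 0.6))
```

Scanning $p$ in this way and locating where the largest-cluster fraction vanishes with increasing system size is, in essence, how a classical threshold such as the $p_c$ quoted below for the corner-sharing tetrahedral lattice is estimated.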
For corner-sharing tetrahedral lattices, we have completed a large-scale Hoshen-Kopelman[@Hoshen76] search, and have determined that this concentration corresponds to a probability of finding an occupied site at the transition of $p_c\sim 0.39 \pm 0.01$. Noting that the probability $p$ of a site being occupied by a Ti ion (in the stoichiometric Ti sublattice) is related to the excess Li concentration $x$ by $x = 2(1-p)$, with an identical relation $y=2(1-p)$ for Al added to the Ti sublattice, we see that, in contrast to previous statements,[@Harrison85; @Lambert87] this $p_c$ corresponds to a critical excess Li, or added Al, concentration of $x_c=y_c=2(1-p_c) \sim 1.2$; that is, Li$_{2.2}$Ti$_{0.8}$O$_4$ or LiAl$_{1.2}$Ti$_{0.8}$O$_4$. These very high levels of doping are well beyond the observed $x_{MIT} \sim 0.15$ or $y_{MIT} \sim 0.33$ concentrations at which the MITs occur. In fact, such a system would require large positive Ti valencies well beyond anything seen in nature! Thus, the physics of these MITs is more complicated than simply the loss of an infinite maximally connected cluster at $p_c$. We now consider a more accurate model for this system, a so-called quantum site percolation model, which includes the dynamics of the electrons hopping on the conducting, disordered sublattice of Ti sites. To be specific, the near-neighbour tight-binding Hamiltonian for [LiTi$_2$O$_4$]{} is $$\label{eq:TB} \hat{\mathcal{H}}= \sum_i \varepsilon_i c_i^\dagger c_i - t \sum_{<ij>} (c_i^\dagger c_j + h. c.)$$ where $i$ labels the sites of the (ordered) Ti sublattice, $c_i^\dagger$ ($c_i$) is the creation (annihilation) operator of an electron at site $i$, $\sum_{<ij>}$ represents the sum over all nearest neighbour sites of a corner-sharing tetrahedral lattice, and $t$ is the near-neighbour hopping energy.
To produce our model of quantum site percolation for the doped systems, the on-site energy $\varepsilon_i$ is determined by the probability of occupation, denoted by $p$, of a site being either a Ti ion, or a Li or Al dopant ion: $$\label{eq:SiteDistr} P(\varepsilon_i) = p\delta(\varepsilon_i-\varepsilon_{\rm Ti})+ (1-p)\delta(\varepsilon_i-\varepsilon_{\rm X})$$ where $\varepsilon_{\rm Ti}$ ($\varepsilon_{\rm X}$) is the on-site energy when an electron occupies a Ti (X = Li or Al) site. In order to enforce that itinerant electrons move only on Ti sites we set $\varepsilon_{\rm Ti}=0$ and $\varepsilon_{\rm X}\to\infty$, and this limit connects this system with a quantum site percolation Hamiltonian.[@Kirkpatrick72; @Shapir82] Such considerations lead to the introduction of the quantum percolation threshold,[@Shapir82] usually denoted by $p_q$. To be specific, $p_q$ is reached when all single-electron energy eigenstates of the above tight-binding Hamiltonian are localized. This means that $p_q$ is always larger than $p_c$, since the presence of extended states necessarily requires an infinite maximally connected cluster. Further, the evaluation of this quantity is warranted since $p_q > p_c$ implies that the theoretical predictions of $x_c$ and $y_c$ given above would be reduced by quantum percolation. Our evaluation of $p_q$ for the corner-sharing tetrahedral lattice proceeds as follows. We have considered various realizations of site-percolated lattices for several system sizes over a range of dopant concentrations; to be specific, we consider lattices of size two, four, six, and eight cubed conventional unit cells (noting that there are 16 Ti sites in the ordered lattice per conventional unit cell), and then for each $p$ we examine 100, 50, 20 and 10 realizations consistent with this $p$, for two, four, six, and eight cubed lattices, respectively. For each realization we first apply the Hoshen-Kopelman algorithm to identify the maximally connected cluster.
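To make the $\varepsilon_{\rm X}\to\infty$ limit concrete: blocked sites can simply be deleted, and the tight-binding Hamiltonian restricted to the remaining sites and diagonalized. The sketch below is again illustrative only, assuming a simple cubic lattice with periodic boundaries rather than the corner-sharing tetrahedral lattice; the function name is ours.

```python
import numpy as np

def diluted_tb_spectrum(L, p, t=1.0, seed=0):
    """Eigenvalues of H = -t sum_<ij> (c_i^dag c_j + h.c.) on the occupied
    sites of a site-diluted L x L x L simple cubic lattice (periodic BCs).
    The eps_X -> infinity limit is implemented by deleting blocked sites."""
    rng = np.random.default_rng(seed)
    occ = rng.random((L, L, L)) < p
    # map the occupied lattice sites to matrix indices
    sites = {s: n for n, s in enumerate(map(tuple, np.argwhere(occ)))}
    H = np.zeros((len(sites), len(sites)))
    for (x, y, z), n in sites.items():
        for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
            nb = ((x + dx) % L, (y + dy) % L, (z + dz) % L)
            if nb in sites:                    # hopping only between Ti sites
                m = sites[nb]
                H[n, m] = H[m, n] = -t
    return np.linalg.eigvalsh(H)               # sorted single-electron energies

E = diluted_tb_spectrum(6, 0.8)
# For the undiluted cubic lattice the band is [-6t, 6t]; with dilution the
# spectrum remains inside this window.
print(E.min(), E.max())
```

A production calculation would of course use the corner-sharing tetrahedral connectivity, restrict the diagonalization to the maximally connected cluster, and average over disorder realizations as described in the text.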
Then, we diagonalize the one-electron Hamiltonian describing the electron dynamics on this cluster. To determine the localized *vs.* delocalized nature of the single-electron wave functions of the maximally connected cluster, we have used the scaling of the relative localization length as a function of system size, as described by Sigeti *et al.*[@Sigeti91] This localization length, for a particular eigenstate, is defined by $$\lambda = \sum_{i,j} |\psi_i|^2 |\psi_j|^2 \, d(i,j)$$ where $|\psi_i|^2$ is the probability weight of this eigenstate at site $i$, and $d(i,j)$ is the Euclidean distance between lattice sites $i$ and $j$. Then, the relative localization length is just the ratio of this quantity to that for a state having a uniform probability amplitude throughout the entire maximally connected cluster (we denote the latter by $\lambda_0$) — this ratio thus provides a useful measure of the effective size, or localization, of a particular eigenstate relative to a Bloch state on the same maximally connected cluster. The utility of this quantity (and we will describe its use for another problem below) is that if the quantity decreases as the system size is increased, that eigenstate corresponds to one that is spatially localized; the opposite behaviour is expected for delocalized states. We have used this quantity as a means of identifying $p_q$. As a test of this method, we note that for three-dimensional lattices, reliable estimates exist only for the simple cubic lattice, and these were obtained with a variety of different methods — *e.g.*, see the discussion in Ref.[@Soukoulis92] A value of $p_q = 0.44 \pm 0.02$ was identified,[@Soukoulis92] and we have found that our method reproduces this number.
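The localization length defined above is straightforward to evaluate for any eigenstate. The sketch below (illustrative only, with a full cube standing in for a maximally connected cluster) exhibits the two limiting cases: $\lambda/\lambda_0 = 0$ for a state confined to a single site, and $\lambda/\lambda_0 = 1$ for the uniform reference state itself.

```python
import numpy as np

def localization_length(psi, coords):
    """lambda = sum_{i,j} |psi_i|^2 |psi_j|^2 d(i,j) for one normalized
    eigenstate psi defined on sites with Cartesian positions `coords`."""
    w = np.abs(psi) ** 2                                    # site weights
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return w @ d @ w

# Sites of a small cluster (here a full 5x5x5 cube, for simplicity)
coords = np.array([(x, y, z) for x in range(5)
                   for y in range(5) for z in range(5)], dtype=float)
N = len(coords)

# lambda_0: reference value for a uniform-amplitude (Bloch-like) state
lam0 = localization_length(np.full(N, N ** -0.5), coords)

# A state confined to a single site has lambda = 0, hence lambda/lambda_0 = 0
psi_loc = np.zeros(N)
psi_loc[0] = 1.0
print(localization_length(psi_loc, coords) / lam0)   # -> 0.0
```

In the scaling analysis of the text it is the trend of $\lambda/\lambda_0$ with increasing cluster size, not its value for a single size, that distinguishes localized from delocalized eigenstates.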
Using this method, we find a value of $p_q$ for corner-sharing tetrahedral lattices (with near-neighbour hopping only) of $0.52 \pm 0.02$, and if we then associate this quantity with the concentration at which the MIT occurs in doped [LiTi$_2$O$_4$]{}, one finds $x_c=y_c=2(1-p_q)=0.96$, which correspond to Li$_{1.96}$Ti$_{1.04}$O$_4$ and LiAl$_{0.96}$Ti$_{1.04}$O$_4$. Again, these concentrations are much higher than the experimentally measured values. The reason for both of these failures is clear and has been suggested before[@Harrison85] — as the concentration of doped cations is increased, the density of available $3d$ electrons is decreased. In fact, for $x_{BI}\equiv0.33$ and $y_{BI}\equiv1.0$ these materials become insulators simply because the bands of allowed states are empty (we refer to such a state as a Band Insulator (BI)). So, if a one-electron description is going to correctly reproduce the MIT, we must account for the changing of the electronic density with cation dopant concentration. (This notwithstanding, we needed to determine what values of $x_c$ and $y_c$ were predicted by $p_q$ in case they were less than either $x_{BI}$ or $y_{BI}$, respectively. Clearly, they are not.) ![\[fig:MIT\]A sequence of schematic diagrams that summarize how the MIT in [Li$_{1+x}$Ti$_{2-x}$O$_4$]{} would proceed [*if*]{} the transition was caused by disorder only. The densities of states [*vs.*]{} energy are shown for several $x$. The hatched regions identify the location of extended states, and the unhatched (solid) regions of the density of states represent the location of localized states. (a) In [LiTi$_2$O$_4$]{} all eigenstates are extended and the system is a $\frac14$-filled $d$ band conductor. (b) By doping Li cations randomly into the Ti sites the system becomes disordered; thus, some energy eigenstates near the band edges become localized, and, in particular, all states below the mobility edge $\mu_M$ are localized.
At the same time the density of itinerant electrons decreases, and thus the chemical potential $(\mu_F)$ is reduced. (c) When the chemical potential coincides with the mobility edge, this model would predict that the MIT occurs. (d) The other end member of the homogeneity range of the spinel phase [Li$_{1+x}$Ti$_{2-x}$O$_4$]{} ($0 \leq x \leq \frac13$) is an empty-band insulator (note that the chemical potential appears at the bottom of the allowed energy bands, and thus all electronic states are unoccupied). (e) The so-called quantum percolation threshold is reached when all electronic states have a localized character (regardless of the location of the chemical potential).](./MIT.eps){width="8cm"} Also, since the above arguments point to Ti occupation probabilities well above the quantum percolation threshold, one is guaranteed to find a maximally connected cluster, and (possibly) several isolated clusters. By definition, all electronic states associated with an isolated cluster are localized, while for the maximally connected cluster both extended and localized states will be found. Similar to Anderson localization[@PWA58] in disordered systems, we expect that the eigenstates of the maximally connected cluster are localized near the band edges and are separated from the extended states in the middle of the band by the so-called mobility edge.[@PWA58] With increasing disorder, more and more states become localized and the mobility edge moves toward the centre of the band. However, as the concentration of cations (producing the disorder) is increased, the density of itinerant $3d$ electrons is decreased.
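The band-insulator end points $x_{BI}$ and $y_{BI}$ quoted above follow from simple charge counting, assuming fully ionised Li$^{+}$, Al$^{3+}$ and O$^{2-}$. For Li$_{1+x}$Ti$_{2-x}$O$_4$, overall charge neutrality, $(1+x) + (2-x)v_{\rm Ti} = 8$, fixes the average Ti valence $v_{\rm Ti}$, and the number of $3d$ electrons per Ti site is $$n_d = 4 - v_{\rm Ti} = 4 - \frac{7-x}{2-x} = \frac{1-3x}{2-x},$$ which vanishes at $x = 1/3$; similarly, for LiAl$_y$Ti$_{2-y}$O$_4$, neutrality $1 + 3y + (2-y)v_{\rm Ti} = 8$ gives $$n_d = \frac{1-y}{2-y},$$ which vanishes at $y = 1$. At $x=y=0$ both expressions reduce to $n_d = 1/2$, the $d^{0.5}$ configuration of stoichiometric LiTi$_2$O$_4$.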
As long as the system’s chemical potential (determined by both the density of $3d$ electrons and the electronic density of states of the disordered system) is above the mobility edge, the system remains metallic, whereas if it lies below the mobility edge, the states in the immediate vicinity of the Fermi energy are localized and the system displays insulating behaviour. The critical dopant concentration at which the chemical potential and mobility edge meet thus identifies the MIT. The diagrams in Fig. \[fig:MIT\] summarize this process for the particular example of [Li$_{1+x}$Ti$_{2-x}$O$_4$]{} (for which $x_c \sim 0.15$ and $x_{BI} \sim 0.33$). ![\[fig:DOS\]Averaged numerical density of states for [Li$_{1+x}$Ti$_{2-x}$O$_4$]{}  systems with different doping concentrations, and the corresponding locations of the chemical potentials. The DOS of the undoped system corresponds to the thermodynamic limit, whereas the plots for the doped systems are averaged densities of states for 8$^3$ conventional unit cells using 10 different realizations of disorder.](./dos_8x8x8u16_Li.eps){width="8cm" height="7cm"} Example density of states curves, and the locations of the chemical potentials, are shown in Fig. \[fig:DOS\] for different doping concentrations, averaged over systems with 8-cubed conventional unit cells (the largest studied). Note that the full spectrum of eigenvalues, of both the maximally connected cluster and all isolated clusters, has been evaluated, since the latter contribute to the location of the chemical potential. The mobility edge has been estimated using the above-mentioned scaling method,[@Sigeti91] and Fig. \[fig:scaling\] depicts the application of this method to identify the location of the mobility edge for the specific doping concentration of $x = 0.7$. To be concrete, one can estimate that for this dopant concentration the mobility edge, in units of the hopping energy $t$, is $-4.0\pm0.15$.
![\[fig:scaling\]Scaling of the relative localization length ($\lambda$) of different energy “bins", relative to that of the maximally connected cluster ($\lambda_0$), for energies close to the mobility edge, for a doping concentration of $x = 0.7$. This ratio is plotted *vs.* the reciprocal of the number of conventional cells, and as the size of the lattices is increased from 2$^3$ conventional unit cells to 8$^3$ unit cells these data show that the eigenstates with energies above $-3.9$ have delocalized behaviour. For this system we estimate that the (lower) mobility edge is located at $-4.0\pm0.15$.](./scaling_0.35.eps){width="8cm" height="7cm"} We have combined all of our data in Fig. \[fig:bb\_mu\], which shows the chemical potential and mobility edge that we would estimate for both the excess Li and doped Al systems. For reference we have included the values of the band minimum for all dopings. Note that the Fermi level of the excess Li system crosses the mobility edge at roughly $x_c \approx 0.324$, whereas for the doped Al system this crossing occurs at $y_c \sim 0.83$. Clearly, these numbers are much larger than the experimental values ($x_{MIT} \approx 0.15$ and $y_{MIT} \approx 0.33$). Thus, we have studied a system that should be very well represented, in a one-electron theory, by a quantum site percolation model, have determined the concentrations at which the Fermi levels cross the mobility edge, and find these values to be more than a factor of 2 larger than the experimental results. So, we believe that this shows that a one-electron model that ignores electronic correlations cannot explain the observed MITs. ![\[fig:bb\_mu\] This plot displays the final numerical results of our study, and leads to the identification of the predicted concentrations at which the MITs occur.
These data are the chemical potentials ($\mu_F$) and the mobility edge as a function of doping for both [Li$_{1+x}$Ti$_{2-x}$O$_4$]{} and [LiAl$_y$Ti$_{2-y}$O$_4$]{}; for reference we have also plotted the minimum of the band of electronic states. The crossings of the chemical potential and mobility edge (denoted by open black circles) would indicate the positions of the disorder-only induced metal-to-insulator transitions, and these values are labelled $x_c$ and $y_c$.](./phase_diagram.eps){width="8cm" height="7cm"} Concluding, our results show that disorder-only models of the MITs undergone by these systems substantially overestimate the critical concentrations of doped cations. To be specific, for [Li$_{1+x}$Ti$_{2-x}$O$_4$]{} our numerical result for $x_c$ is a little more than double the experimental value, and for [LiAl$_y$Ti$_{2-y}$O$_4$]{} our numerical result is an even larger multiple. (We note that we have not eliminated the possibility that polaronic effects give rise to this transition, although no experimental work points to their existence.[@Moshopoulou99]) Thus, indirectly, this study supports the hypothesis that strong electronic correlations are important for the MIT, and, possibly, also for the $T_c \sim 13$ K superconducting transition undergone by [LiTi$_2$O$_4$]{}. In the past year,[@chen03] sample preparation advances have allowed for the growth of large stoichiometric [LiTi$_2$O$_4$]{}  single crystals, and a remarkably sharp superconducting transition ($\delta$T$_c$ $\sim$ 0.1 K) has been observed. We hope that these new samples stimulate further experimental research in this interesting class of materials. We wish to thank Gene Golub for several helpful comments. Part of this paper was written when one of the authors (RJG) was visiting the Fields Institute for Research in Mathematical Sciences, and he thanks them for their support and hospitality. This work was supported in part by the NSERC of Canada, OGSST, and the USDOE.
--- abstract: 'Motivated by the colossal negative thermal expansion recently found in BiNiO$_3$, the valence transition accompanied by the charge transfer between the Bi and Ni sites is theoretically studied. We introduce an effective model for the Bi-$6s$ and Ni-$3d$ orbitals, taking into account the valence skipping of Bi cations, and investigate the ground-state and finite-temperature phase diagrams within the mean-field approximation. We find that the valence transition is caused by commensurate locking of the electron filling in each orbital associated with charge and magnetic orderings, and that the critical temperature and the nature of the transitions are strongly affected by the relative energy between the Bi and Ni levels and the effective electron-electron interaction at the Bi sites. The obtained phase diagram well explains the temperature- and pressure-driven valence transitions in BiNiO$_3$ and the systematic variation of valence states for a series of Bi and Pb perovskite oxides.' author: - 'Makoto Naka$^{1,2}$, Hitoshi Seo$^{1,3}$, and Yukitoshi Motome$^4$' title: 'Theory of Valence Transition in BiNiO$_3$' --- Perovskite transition metal (TM) oxides (general formula: $AB$O$_3$) have long provided central issues of phase transitions and strong electron correlations in condensed matter physics [@Imada; @Cheong]. They exhibit a wide range of novel magnetic, dielectric, and transport properties: for example, the large negative magnetoresistance in La$_{1-x}$Sr$_{x}$MnO$_3$ [@Chahara; @Helmolt; @Tokura], the spin-state transition in La$_{1-x}$Sr$_{x}$CoO$_3$ [@Korotin; @Saitoh], the metal-to-insulator transition in $R$NiO$_3$ ($R$: rare earth element) [@Torrance], and the ferroelectric to quantum paraelectric transition in Ba$_{1-x}$Sr$_x$TiO$_3$ [@Sawaguchi; @Zhou]. In these phenomena, the central players are the electrons in $3d$ orbitals of the $B$-site TMs hybridized with oxygen $2p$ orbitals.
The $A$-site cations, on the other hand, are usually inert and have been regarded as “stagehands”: they control the electron filling and bandwidth through their valence state and ionic radius, respectively. Peculiar exceptions to the above standards have recently been found in several perovskite TM oxides, in which the $A$-site cations play an active role as “valence skippers”. In these compounds, not only the $B$-site $3d$ electrons but also the valence $s$ electrons in the $A$-site cations significantly contribute to the electronic properties. In the valence skippers, the outermost $s$ orbital prefers the closed-shell configurations $s^{0}$ or $s^{2}$, and tends to skip the intermediate valence $s^{1}$. This is attributed to an effective attractive interaction between $s$ electrons [@Varma; @Anderson; @Hase], and hence the $A$-site valence state can be actively controlled through electronic degrees of freedom. Owing to the multiple electronic instabilities in both the $A$- and $B$-site cations, TM oxides with an $A$-site valence skipper have the potential for new electronic phases and functions. The colossal negative thermal expansion (CNTE) material BiNiO$_3$ [@Ishiwata_1] is one such candidate; both the Bi-$6s$ and Ni-$3d$ electrons are expected to play a key role in the large volume change [@Azuma_nat; @Nabetani]. At ambient pressure, BiNiO$_3$ has a unique valence state, where the average valence of Bi is $4+$ but it is disproportionated into $3+$ and $5+$, while the valence of Ni is $2+$ [@Wadati]: the $A$-site Bi cation exhibits the valence skipping nature. Bi${^{3+}}$ and Bi$^{5+}$ are spatially ordered in a checkerboard-like pattern, which can be regarded as a charge ordering (CO) at the Bi sites.
The electronic and lattice structures are drastically changed by applying pressure; the system exhibits an insulator-to-metal transition with a structural change from triclinic to orthorhombic, where electrons are transferred from Ni to Bi and the valence state changes from Bi${^{3+}}_{0.5}$Bi${^{5+}}_{0.5}$Ni$^{2+}$ to Bi$^{3+}$Ni$^{3+}$. At the same time, the unit cell volume shrinks by about 5 $\%$ [@Azuma_nat; @Ishiwata_2; @Azuma_jacs]. Under pressure, this phase transition is also observed by raising temperature ($T$), which is termed the CNTE. The large volume shrinkage in the CNTE is attributable to the valence increase at the $B$ sites, which determine the lattice parameters in perovskites in general. However, the mechanism of the valence transition behind it, involving both the Bi and Ni sites and the charge transfer between them, remains to be clarified. This is because, as mentioned above, most previous studies of perovskite TM oxides have generally focused on the TM $3d$ and oxygen $2p$ electrons, which usually govern the electronic properties, and hardly incorporate the valence state of the $A$-site cations explicitly. In this Letter, we present a microscopic theory for the valence transition behind the CNTE in BiNiO$_3$, which treats the active electronic degrees of freedom in both the $A$- and $B$-site cations on an equal footing. We introduce a simple but realistic effective model for BiNiO$_3$, and clarify the ground-state and finite-$T$ phase diagrams within the mean-field (MF) approximation. We will show that the valence transition is controlled by the relative energy between the Bi and Ni levels as well as the electron correlation at the Bi sites. The charge transfer between Bi and Ni is caused by spontaneous symmetry breaking associated with a bipolaronic CO at Bi and antiferromagnetic (AFM) order at Ni, both of which favor a commensurate filling in each band.
Our result will provide a useful guide for further exploration of larger CNTE in related materials. ![(Color online) (a) Schematic electron configurations for the Bi $6s$ and Ni $3d$ $e_{g}$ orbitals in BiNiO$_3$ and (b) those in the effective model in Eq. (\[eq:hamil\]). CO, PM, AFM, and CT denote charge order, paramagnetic, antiferromagnetic states, and charge transfer, respectively. Schematic pictures of (c) the perovskite structure of BiNiO$_3$ and (d) the projected two-dimensional lattice structure for the effective model. The shaded area represents the mean-field unit cell. []{data-label="fig:vst"}](fig1){width="1.0\columnwidth"} Let us construct an effective model for BiNiO$_3$. According to first-principles band calculations, the Bi-$6s$ and Ni-$3d$ $e_{g}$ bands are located near the Fermi energy, and the O-$2p$ bands strongly hybridize with them [@Azuma_jacs]. Here, we take into account the Bi-$6s$ and Ni-$3d$ $e_{g}$ orbitals, while assuming that the role of O-$2p$ is effectively incorporated in the energy levels and the other parameters. The electronic configurations in the insulating Bi${^{3+}}_{0.5}$Bi${^{5+}}_{0.5}$Ni$^{2+}$O${^{2-}}_{3}$ and the metallic Bi$^{3+}$Ni$^{3+}$O${^{2-}}_{3}$ phases are schematically illustrated in Fig. \[fig:vst\](a). For further simplicity, we omit the twofold degeneracy of the $e_{g}$ orbitals as shown in Fig. \[fig:vst\](b); the essential physics related to the valence transition will be retained as described below. Considering the uniform electron configuration along the $z$ axis in the real compound [@Ishiwata_1], we adopt a two-dimensional lattice of Bi and Ni obtained by projecting the perovskite structure in Fig. \[fig:vst\](c) onto the $xy$ plane, as shown in Fig. \[fig:vst\](d).
The Hamiltonian is given by $$\begin{aligned} {\cal H} &= t_{\rm N} \sum_{\langle ij \rangle \sigma}^{\rm Ni-Ni} \left( a^{\dagger}_{i \sigma} a_{j \sigma} + {\rm H.c.} \right) + t_{\rm B} \sum_{\langle ij \rangle \sigma}^{\rm Bi-Bi} \left( b^{\dagger}_{i \sigma} b_{j \sigma}+ {\rm H.c.} \right) \notag \\ &+ t_{\rm BN} \sum_{\langle ij \rangle \sigma}^{\rm Bi-Ni} \left( a^{\dagger}_{i \sigma} b_{j \sigma} + {\rm H.c.} \right) \notag \\ &+ {\Delta} \sum_{i \sigma}^{\rm Ni} n^{\rm N}_{i \sigma} + U_{\rm N} \sum_{i}^{\rm Ni} n^{\rm N}_{i \uparrow} n^{\rm N}_{i \downarrow} + U_{\rm B} \sum_{i}^{\rm Bi} n^{\rm B}_{i \uparrow} n^{\rm B}_{i \downarrow} \notag \\ &+ V_{\rm B} \sum_{\langle ij \rangle}^{\rm Bi-Bi} n^{\rm B}_{i} n^{\rm B}_{j} + V_{\rm BN} \sum_{\langle ij \rangle}^{\rm Bi-Ni} n^{\rm N}_{i} n^{\rm B}_{j}, \label{eq:hamil}\end{aligned}$$ where $a_{i \sigma}$ and $b_{i \sigma}$ represent the annihilation operators of an electron with spin $\sigma(=\uparrow, \downarrow)$ at the Ni and Bi sites of the $i$-th unit cell, respectively; $n^{\rm N}_{i \sigma} = a^{\dagger}_{i \sigma} a_{i \sigma}$ and $n^{\rm B}_{i \sigma} = b^{\dagger}_{i \sigma} b_{i \sigma}$. In Eq. (\[eq:hamil\]), the first and second lines represent the electron hopping on the Ni-Ni, Bi-Bi, and Bi-Ni bonds. In the third line, the first term is the energy difference between the Bi-$6s$ and Ni-$3d$ levels, the second term is the on-site Coulomb interaction on the Ni sites, and the third term is the effective interaction that describes the valence skipping nature of Bi [@Varma; @Anderson; @Hase]; we consider not only positive but also negative values for $U_{\rm B}$. The fourth line represents the intersite Coulomb interactions on the Bi-Bi and Bi-Ni bonds. We obtain the phase diagram for the model in Eq. (\[eq:hamil\]) by the MF approximation, decoupling the two-body interaction terms in the third and fourth lines of Eq.
(\[eq:hamil\]) as $n n' \simeq n \langle n' \rangle + \langle n \rangle n' - \langle n \rangle \langle n' \rangle$. We take a unit cell that includes four Bi and four Ni sites, as shown in Fig. \[fig:vst\](d). By investigating several sets of parameters and varying them, we find that $U_{\rm B}$ and $\Delta$ are the most relevant to the valence transition; hence, below we show the results obtained by varying these two parameters while fixing the others at $t_{\rm N} = t_{\rm B} = 1$, $t_{\rm BN} = 0.5$, $U_{\rm N} = 3$, $V_{\rm B} = 0.65$, $V_{\rm BN} = 1$. We have confirmed that detailed changes of the parameters do not alter the following results qualitatively. ![(Color online) (a) Ground state phase diagram. Solid and broken lines denote the second and first order transitions, respectively. Schematic pictures of charge and spin configurations: (b) uniform-I, (c) CO+AFM, (d) uniform-II, (e) sub-I, and (f) sub-II. The circles and arrows represent the charge densities and spin moments at each site, respectively. []{data-label="fig:latt_pdgs"}](fig2){width="1.0\columnwidth"} ![(Color online) $\Delta$ dependences of the charge densities (upper panels) and spin moments (lower panels) at each Bi and Ni site for $t_{\rm B} = t_{\rm N} =1$, $t_{\rm BN} = 0.5$, $U_{\rm N} = 3$, and $V_{\rm B} = 0.65$; (a) $U_{\rm B} = -2$, (b) $U_{\rm B} = 0$, and (c) $U_{\rm B} = 2$. []{data-label="fig:op_gs"}](fig3){width="1.0\columnwidth"} First, we present the ground-state properties. Figure \[fig:latt\_pdgs\](a) shows the obtained phase diagram in the plane of $\Delta$ and $U_{\rm B}$. There are three main phases with different valence states \[Figs. \[fig:latt\_pdgs\](b)–\[fig:latt\_pdgs\](d)\], while two sub-phases appear between them \[Figs. \[fig:latt\_pdgs\](e) and \[fig:latt\_pdgs\](f)\]. In the uniform-I (II) phase, a major portion of the electrons resides at the Bi (Ni) sites with a spatially uniform distribution, and no spin polarization is seen at either site, as shown in Fig.
\[fig:latt\_pdgs\](b) \[\[fig:latt\_pdgs\](d)\]. On the other hand, in the CO+AFM phase, the average charge density becomes almost the same for Bi and Ni, i.e., charge transfer occurs between them relative to the uniform phases. In addition, a bipolaronic charge disproportionation occurs in a staggered way between the Bi sites, as shown in Fig. \[fig:latt\_pdgs\](c). The spin moments are canceled at each Bi site, whereas those at the Ni sites are antiferromagnetically ordered. To elucidate the variations of the electronic states in more detail, we show the $\Delta$ dependences of the charge densities $\langle n^{\rm B(N)}_{i} \rangle = \langle n^{\rm B(N)}_{i \uparrow} \rangle + \langle n^{\rm B(N)}_{i \downarrow} \rangle$ and spin moments $\langle s_{i}^{z {\rm N}} \rangle = (\langle n^{\rm N}_{i \uparrow} \rangle - \langle n^{\rm N}_{i \downarrow} \rangle)/2$. The spin moments at the Bi sites remain zero in the parameter range we calculated. Figure \[fig:op\_gs\](a) shows the results for $U_{\rm B} = -2$, where the valence skipping nature of Bi is relatively strong. In the large-$\Delta$ region, where the energy level of the Ni orbital is substantially higher than that of Bi, the electrons dominantly and uniformly occupy the Bi sites, and the Ni orbitals are nearly empty; no spin polarization appears. This electronic state corresponds to the uniform-I phase in Fig. \[fig:latt\_pdgs\](b). When $\Delta$ is decreased, the charge densities and the Ni spin moments simultaneously jump at $\Delta \simeq 1.25$, signaling spontaneous symmetry breaking in both the charge and spin channels: the system enters the CO+AFM phase. At the transition, the average charge density at the Bi (Ni) sites suddenly decreases (increases) by about $0.7$. Namely, the valence transition is caused by the charge transfer between Bi and Ni. As seen in the phase diagram in Fig. \[fig:latt\_pdgs\](a), when $U_{\rm B}$ is increased, the CO+AFM phase shrinks and shifts to the larger-$\Delta$ region.
This is seen in Figs. \[fig:op\_gs\](b) and \[fig:op\_gs\](c), showing the results for $U_{\rm B} = 0$ and $U_{\rm B} = 2$, respectively. When $U_{\rm B} = 0$, the uniform-II phase is realized in the small-$\Delta$ region, where the charge density at the Ni sites becomes larger than that at the Bi sites, in contrast to the uniform-I phase. The transition between the CO+AFM and uniform-II phases is also a valence transition. In addition, a narrow intermediate state termed sub-II \[Fig. \[fig:latt\_pdgs\](f)\] appears, in which the average charge densities are similar to the uniform-II phase but with a weak checkerboard-type CO at the Bi sites. For $U_{\rm B} = 2$, the region of $\Delta$ for the CO+AFM phase is further narrowed and shifted, and another intermediate phase termed sub-I \[Fig. \[fig:latt\_pdgs\](e)\] appears between the CO+AFM and uniform-I phases, where a weak AFM order occurs at the Ni sites. We note that a larger $U_{\rm B}$ results in a smaller amplitude of the valence changes, as shown in Figs. \[fig:op\_gs\](a)–\[fig:op\_gs\](c). ![(Color online) Global phase diagram in the $\Delta$-$U_{\rm B}$-$T$ space. Solid and broken lines denote the second and first order transitions, respectively. Thin dotted lines are guides for the eye denoting connections between the CO domes. []{data-label="fig:pd_global"}](fig4){width="0.8\columnwidth"} Next, we discuss the finite-$T$ properties. Figure \[fig:pd\_global\] shows the global phase diagram with the $T$ axis attached to the ground-state phase diagram in Fig. \[fig:latt\_pdgs\](a). When $T$ is raised in the CO+AFM phase, first the AFM order is destroyed while the CO remains; at higher $T$ the CO melts and the uniform phase is realized. The uniform-I and -II phases in the ground state are continuously connected in the high-$T$ region, as they possess the same symmetry. Consequently, the CO phase shows a dome-like structure.
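All of the mean-field order parameters and phase boundaries discussed here are obtained by iterating the decoupled densities to self-consistency. As a hedged, minimal sketch of such a loop (a hypothetical spinless two-sublattice chain at half filling, with only a nearest-neighbor repulsion $V$ driving a checkerboard-type CO; this is not the full eight-site model of Eq. (\[eq:hamil\]), and all parameter values are illustrative):

```python
import numpy as np

# Minimal self-consistent Hartree loop illustrating the mean-field method:
# a spinless half-filled chain with alternating A/B sublattices and
# nearest-neighbor repulsion V (a hypothetical toy model, not Eq. (1)).
t, V, Nk = 1.0, 4.0, 200                 # hopping, interaction, number of k-points
ks = 2 * np.pi * np.arange(Nk) / Nk
nA, nB = 0.6, 0.4                        # symmetry-broken initial guess

for _ in range(300):
    # Hartree potentials from n_i n_j ~ n_i<n_j> + <n_i>n_j - <n_i><n_j>,
    # with z = 2 neighbors in one dimension
    eA, eB = 2 * V * nB, 2 * V * nA
    new_nA = new_nB = 0.0
    for k in ks:
        hk = np.array([[eA, -t * (1 + np.exp(-1j * k))],
                       [-t * (1 + np.exp(1j * k)), eB]])
        w, v = np.linalg.eigh(hk)        # ascending eigenvalues
        new_nA += abs(v[0, 0]) ** 2      # occupy the lower band (half filling)
        new_nB += abs(v[1, 0]) ** 2
    new_nA, new_nB = new_nA / Nk, new_nB / Nk
    resid = abs(new_nA - nA) + abs(new_nB - nB)
    nA = 0.5 * nA + 0.5 * new_nA         # simple mixing for stability
    nB = 0.5 * nB + 0.5 * new_nB
    if resid < 1e-10:
        break

print(f"charge order parameter |nA - nB| = {abs(nA - nB):.3f}")
```

For $V$ comparable to the bandwidth, this toy loop converges to a sizable staggered density, illustrating how a CO amplitude emerges from the self-consistent densities rather than being imposed by hand.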
The width of the CO dome becomes wider for lower $T$, at a rate that slightly increases below the AFM transition temperature. The $T$ dependences of the charge densities at each site are shown in Figs. \[fig:op\_ft\](a)–\[fig:op\_ft\](c) for several values of $\Delta$ at $U_{\rm B} = - 2$. The CO transition is discontinuous for large and small values of $\Delta$, while it becomes continuous for intermediate $\Delta$. The discontinuous CO transition is accompanied by a large amount of charge transfer between the Bi and Ni sites. Below the CO transition temperature, the average charge densities at the Bi and Ni sites become nearly $1$. The transition between CO and CO+AFM is always continuous, and its critical temperature is insensitive to $\Delta$. Figures \[fig:op\_ft\](d)–\[fig:op\_ft\](f) show the results for $U_{\rm B} = 2$; all the $T$ scales become smaller compared to the case of $U_{\rm B} = -2$, and accordingly, the amplitude of the CO also becomes smaller. The average charge densities at the Bi and Ni sites change even below the AFM transition temperature, which moderately depends on $\Delta$ here, in contrast to the case of $U_{\rm B} = -2$. ![(Color online) $T$ dependences of the charge densities at each Bi and Ni site for $t_{\rm B} = t_{\rm N} =1$, $t_{\rm BN} = 0.5$, $U_{\rm N} = 3$, and $V_{\rm B} = 0.65$; (a)–(c) $U_{\rm B} = -2$ and (d)–(f) $U_{\rm B} = 2$ for different values of $\Delta$ (indicated in the figures). Note the difference in the $T$ range for the two choices of $U_{\rm B}$. []{data-label="fig:op_ft"}](fig5){width="1.0\columnwidth"} Through the above analyses, we have found that the valence transition occurs between the uniform and CO phases. Here, we discuss its origin. The Bi-site electrons have an instability toward the bipolaronic CO due to the combination of negative $U_{\rm B}$ and positive $V_{\rm B}$.
The instability is most enhanced when the Bi band is half-filled, $\sum_{i} \langle n_{i}^{\rm B} \rangle/ N = 1$ (the sum is taken over the Bi sites and $N$ is the number of Bi sites). This condition is satisfied near the center of the CO dome where the CO temperature is maximized, for example in the situation in Fig. \[fig:op\_ft\](b). On the other hand, away from the dome center, the Bi band filling becomes incommensurate in the high-$T$ uniform phase, as shown in Figs. \[fig:op\_ft\](a) and \[fig:op\_ft\](c). In this case, the CO transition is accompanied by charge transfer between the Bi and Ni sites to restore the commensurability of the Bi band. This commensurate locking is the primary cause of the valence transition. The AFM instability at the Ni sites due to the positive $U_{\rm N}$ also contributes to this locking effect, as clearly seen in Fig. \[fig:op\_ft\](d). These considerations lead us to conclude that the valence transition is attributed to the commensurate locking of the electron filling in both the Bi and Ni bands driven by the strong electron correlations. Finally, let us compare the present results with the experiments on BiNiO$_3$ and related materials. The first-order valence transition with the charge transfer from Ni to Bi is observed upon increasing $T$ and pressure in BiNiO$_3$. In the present calculation, the discontinuous valence transition driven by $T$ is indeed seen, e.g., in Fig. \[fig:op\_ft\](a). The pressure-driven transition can also be explained as the transition upon increasing the relative orbital energy $\Delta$, e.g., in Fig. \[fig:op\_gs\](a), when we assume that the pressure affects $\Delta$. This is reasonable, as the volume reduction under pressure is mainly due to the contraction of the Ni-O perovskite framework in BiNiO$_3$ [@Azuma_nat], which may increase $\Delta$ via an increase of the Ni $3d$ energy levels. On the other hand, $\Delta$ can be more directly controlled by the substitution of the TM cations.
Such effects have actually been investigated in Bi$M$O$_3$ ($M=$ V, Cr, Mn, Fe, Co, and Ni) [@BiCr; @BiMn; @BiFe; @BiCo] and Pb$M$O$_3$ ($M=$ Ti, V, Cr, Fe, and Ni) [@PbCr; @PbNi], where Pb is another valence skipper favoring the oxidation states Pb$^{2+}$($6s^{2}$) and Pb$^{4+}$($6s^{0}$). In the Bi compounds, the valence state is Bi$^{3+}M^{3+}$ for $M=$ V – Co, while it is Bi${^{3+}}_{0.5}$Bi${^{5+}}_{0.5}M^{2+}$ for $M=$ Ni (at ambient pressure), as we have been discussing. Meanwhile, in Pb$M$O$_3$, successive variations of the valence state, Pb$^{2+}M^{4+}$ $\leftrightarrow$ Pb${^{2+}}_{0.5}$Pb${^{4+}}_{0.5}M^{3+}$ $\leftrightarrow$ Pb$^{4+}M^{2+}$, are observed between V and Cr and between Fe and Ni, respectively, where the intermediate state Pb${^{2+}}_{0.5}$Pb${^{4+}}_{0.5}M^{3+}$ involves the CO of Pb$^{2+}$ and Pb$^{4+}$, resembling Bi${^{3+}}_{0.5}$Bi${^{5+}}_{0.5}M^{2+}$. These experimental results suggest that the electrons tend to accumulate at the $A$ ($B$) cations in systems with a low (high) $3d$ level, while they are uniformly distributed over the $A$ and $B$ cations in the intermediate case. This tendency and the phase transitions between the different valence states are well understood from the $\Delta$ dependence in our phase diagram in Fig. \[fig:pd\_global\]. In summary, we have presented and analyzed a microscopic model for the electronic states in perovskite oxides that include a valence skipper as the $A$-site cation, especially focusing on the valence transition in BiNiO$_3$. We have found that the valence transition is attributed to the commensurate locking of the electron filling in the Bi and Ni bands due to the electron correlations, and that it is sensitive to the relative energy of the Bi $6s$ and Ni $3d$ levels and the intra-Bi-site interaction.
Our work provides a fundamental understanding of the electronic properties of a series of new perovskite materials including valence skippers as the $A$-site cation, which have so far scarcely been dealt with. As a mechanism of negative thermal expansion, the ionic-radius change due to the valence transition is markedly distinct from those previously discussed, which are attributed to lattice vibrations and magnetovolume effects [@Takenaka]. We expect that our result will contribute to the development of the “next generation” negative thermal expansion materials. The authors would like to thank M. Azuma, S. Ishihara, T. Mizokawa, M. Mizumaki, K. Oka, and T. Watanuki for valuable discussions. [99]{} M. Imada, A. Fujimori, and Y. Tokura, Rev. Mod. Phys. [**70**]{}, 1039 (1998). S.-W. Cheong, Nat. Mater. [**6**]{}, 927 (2007). K. Chahara, T. Ohno, M. Kasai, and Y. Kozono, Appl. Phys. Lett. [**63**]{}, 1990 (1993). R. von Helmolt, J. Wecker, B. Holzapfel, L. Schultz, and K. Samwer, Phys. Rev. Lett. [**71**]{}, 2331 (1993). Y. Tokura, A. Urushibara, Y. Moritomo, T. Arima, A. Asamitsu, G. Kido, and N. Furukawa, J. Phys. Soc. Jpn. [**61**]{}, 3931 (1994). M. A. Korotin, S. Yu. Ezhov, I. V. Solovyev, V. I. Anisimov, D. I. Khomskii, and G. A. Sawatzky, Phys. Rev. B [**54**]{}, 5309 (1996). T. Saitoh, T. Mizokawa, A. Fujimori, M. Abbate, Y. Takeda, and M. Takano, Phys. Rev. B [**56**]{}, 1290 (1997). J. B. Torrance, P. Lacorre, A. I. Nazzal, E. J. Ansaldo, and Ch. Niedermayer, Phys. Rev. B [**45**]{}, 8209 (1992). E. Sawaguchi, A. Kikuchi, and Y. Kodera, J. Phys. Soc. Jpn. [**17**]{}, 1666 (1962). L. Zhou, P. M. Vilarinho, and J. L. Baptista, J. Eur. Ceram. Soc. [**19**]{}, 2015 (1999). P. W. Anderson, Phys. Rev. Lett. [**34**]{}, 953 (1975). C. M. Varma, Phys. Rev. Lett. [**61**]{}, 2713 (1988). I. Hase and T. Yanagisawa, Phys. Rev. B [**76**]{}, 174103 (2007). S. Ishiwata, M. Azuma, M. Takano, E. Nishibori, M. Takata, M. Sakata, and K. Kato, J. Mater. Chem.
[**12**]{}, 3733 (2002). M. Azuma, W. Chen, H. Seki, M. Czapski, S. Olga, K. Oka, M. Mizumaki, T. Watanuki, N. Ishimatsu, N. Kawamura, S. Ishiwata, M. G. Tucker, Y. Shimakawa, and J. P. Attfield, Nat. Commun. [**2**]{}, 347 (2011). K. Nabetani, Y. Muramatsu, K. Oka, K. Nakano, H. Hojo, M. Mizumaki, A. Agui, Y. Higo, N. Hayashi, M. Takano, and M. Azuma, Appl. Phys. Lett. [**106**]{}, 061912 (2015). H. Wadati, M. Takizawa, T. T. Tran, K. Tanaka, T. Mizokawa, A. Fujimori, A. Chikamatsu, H. Kumigashira, M. Oshima, S. Ishiwata, M. Azuma, and M. Takano, Phys. Rev. B [**72**]{}, 155103 (2005). S. Ishiwata, M. Azuma, M. Hanawa, Y. Moritomo, Y. Ohishi, K. Kato, M. Takata, E. Nishibori, M. Sakata, I. Terasaki, and M. Takano, Phys. Rev. B [**72**]{}, 045104 (2005). M. Azuma, S. Carlsson, J. Rodgers, M. G. Tucker, M. Tsujimoto, S. Ishiwata, S. Isoda, Y. Shimakawa, M. Takano, and J. P. Attfield, J. Am. Chem. Soc. [**129**]{}, 14433 (2007). S. Niitaka, M. Azuma, M. Takano, E. Nishibori, M. Takata, and M. Sakata, Solid State Ionics [**172**]{}, 557 (2004). T. Kimura, S. Kawamoto, I. Yamada, M. Azuma, M. Takano, and Y. Tokura, Phys. Rev. B [**67**]{}, 180401(R) (2003). J. Wang, J. B. Neaton, H. Zheng, V. Nagarajan, S. B. Ogale, B. Liu, D. Viehland, V. Vaithyanathan, D. G. Schlom, U. V. Waghmare, N. A. Spaldin, K. M. Rabe, M. Wuttig, and R. Ramesh, Science [**299**]{}, 1719 (2003). A. A. Belik, S. Iikubo, K. Kodama, N. Igawa, S. Shamoto, S. Niitaka, M. Azuma, Y. Shimakawa, M. Takano, F. Izumi, and E. Takayama-Muromachi, Chem. Mater. [**18**]{}, 798 (2006). R. Yu *et al.*, (unpublished). Y. Inaguma, K. Tanaka, T. Tsuchiya, D. Mori, T. Katsumata, T. Ohba, K. Hiraki, T. Takahashi, and H. Saitoh, J. Am. Chem. Soc. [**133**]{}, 16920 (2011). K. Takenaka, Sci. Technol. Adv. Mater. [**13**]{}, 013001 (2012).
--- abstract: 'We predict a quantum spin Hall effect (QSHE) in a ferromagnetic graphene under a magnetic field. Unlike the previous QSHE, this QSHE appears in the absence of any spin-orbit interaction and thus arises from a different physical origin. The previous QSHE is protected by the time-reversal (T) invariance. This new QSHE is protected by the CT invariance, where C is the charge conjugation operation. Due to this QSHE, the longitudinal resistance exhibits quantum plateaus. The plateau values are at $1/2$, $1/6$, $3/28$, ... , (in the unit of $h/e^2$), depending on the filling factors of the spin-up and spin-down carriers. The spin Hall resistance is also investigated and is found to be robust against disorder.' author: - 'Qing-feng Sun$^{1,\star}$ and X.C. Xie$^{1,2}$' title: CT invariant quantum spin Hall effect in a ferromagnetic graphene --- In the years since its discovery, the spin Hall effect (SHE) has generated great interest.[@ref1; @ref2; @ref3; @ref4] In the SHE, an applied longitudinal charge current or voltage bias induces a transverse spin current due to the spin-dependent scatterings[@ref1; @ref2] of the spin-orbit interaction (SOI)[@ref3]. Soon afterwards, the quantum SHE (QSHE) was also predicted.[@ref5; @ref6] The QSHE occurs in a topological insulator, in which the bulk material is an insulator while two helical edge states carry the current.[@ref7] The edge states, in which opposite spins counter-propagate along a given edge (equivalently, a given spin propagates in opposite directions on the two edges), lead to a quantized spin Hall conductance. The QSHE is a new quantum state of matter with a non-trivial topological property.
The existence of the QSHE was first proposed in a graphene film, in which the SOI opened a band gap and established the edge states.[@ref5; @ref6] However, subsequent work found that the SOI in graphene was quite weak and the gap-opening was small, so the QSHE was difficult to observe.[@ref8] Soon afterwards, the QSHE was also predicted to exist in some other systems.[@ref9; @ref10; @ref11; @ref12] Recently, the QSHE was successfully realized in CdTe/HgTe/CdTe quantum wells, and a quantized longitudinal resistance plateau was experimentally observed due to the QSHE.[@ref11] Another subject that has been extensively investigated in recent years is graphene, a single-layer hexagonal lattice of carbon atoms,[@ref13] following its successful fabrication.[@ref14; @ref15] Graphene has a unique band structure with a linear dispersion near the Fermi surface, giving it many peculiar properties. For example, the quasi-particles obey the Dirac-like equation and exhibit relativistic-like behaviors, and the Hall plateaus are at half-integer values. In this Letter, we predict a new kind of QSHE in a ferromagnetic graphene. Let us first imagine a two-dimensional system with the following characteristics: (i) its carriers contain electrons and holes; (ii) both electrons and holes are completely spin-polarized, with opposite spin polarizations. When a high perpendicular magnetic field is applied to the system, edge states are formed and the carriers move only along the edges. In particular, the electrons (with their spins up) and holes (with their spins down) move in opposite directions on a given edge (see the inset in Fig.1a). Therefore, the QSHE automatically exists in this system. Although ordinary metals (or doped semiconductors) cannot meet the above two characteristics, a ferromagnetic graphene does.
Recently, several approaches to realize a ferromagnetic graphene have been suggested.[@ref16; @ref17; @ref18] For example, the ferromagnetic graphene can be realized by growing graphene on a ferromagnetic insulator (e.g. EuO).[@ref17] For a ferromagnetic graphene, as soon as the Fermi energy $E_F$ is tuned to lie between the spin-up and spin-down Dirac points (see the inset in Fig.1b), the above two characteristics are met and the QSHE occurs. In the following calculations, we consider four- and six-terminal graphene Hall bars (see the insets in Fig.1a). The results reveal that the transverse spin current and spin Hall resistance indeed show quantized plateaus because of the QSHE. Comparing this new QSHE with the previously-studied QSHE, there are two essential differences: (i) The previous QSHE comes from the SOI and the proposed systems all possess time-reversal symmetry,[@ref5; @ref6; @ref7; @ref9; @ref10] while the present QSHE exists without the SOI and breaks time-reversal symmetry. However, this new QSHE is protected by CT invariance. (ii) In the previous QSHE, the edge states carry only a spin current at equilibrium; in this QSHE system, the edge states carry both spin and charge currents at equilibrium, with the two edge states being CT partners of each other (see the inset in Fig.1a). Thus, this is a new kind of QSHE and the system is a new type of topological insulator. Due to the topological invariance, the plateaus of the spin Hall resistance are robust against disorder or impurity scattering. The plateaus are therefore very stable, and their values can be used as standard values for the spin Hall resistance. ![ (Color online) The Hall conductance $I_e/V$ (a) and spin Hall conductance $I_s/V$ (b) vs. the energy $\epsilon_0$ for $N=80$ and $\phi=0.005$. The two insets in (a) are schematic diagrams of the four- and six-terminal graphene Hall bars.
The inset in (b) is a schematic diagram of the band structure of the ferromagnetic graphene when $\epsilon_0 +M > E_F>\epsilon_0 -M$. ](fig1.eps){width="8.5cm"} In the tight-binding representation, the four- or six-terminal ferromagnetic graphene device (see the insets in Fig.1a) can be described by the Hamiltonian:[@ref19] $$H = \sum_{i,\sigma} (\epsilon_0-\sigma M) a^{\dagger}_{i\sigma} a_{i\sigma} -\sum_{<ij>, \sigma} t e^{i\phi_{ij}} a_{i\sigma}^{\dagger} a_{j\sigma},$$ where $a_{i\sigma}$ and $a_{i\sigma}^{\dagger}$ are the annihilation and creation operators at the discrete site $i$. $\epsilon_0$ is the on-site energy (i.e. the Dirac-point energy), $M$ is the ferromagnetic exchange splitting[@ref17], and $t$ is the nearest-neighbor hopping element. Here, the whole device, including the center region and the four or six terminals, is made of the ferromagnetic graphene. In the presence of a perpendicular magnetic field $B$, a phase factor $\phi_{ij}$ is added to the hopping element, $\phi_{ij}=\int_i^j \vec{A} \cdot d\vec{l}/\phi_0$ with the vector potential $\vec{A}=(-By,0,0)$ and $\phi_0=\hbar/e$. The transmission coefficient $T_{pq\sigma}(\epsilon)$ from the terminal $q$ to the terminal $p$ with spin $\sigma$ can be calculated from the equation:[@ref20] $T_{pq\sigma}(\epsilon) = Tr [{\bf \Gamma}_{p\sigma} {\bf G}_{\sigma}^r {\bf \Gamma}_{q\sigma} {\bf G}^a_{\sigma} ]$, where ${\bf \Gamma}_{p\sigma}(\epsilon)= i[{\bf\Sigma}^r_{p\sigma}(\epsilon) -{\bf\Sigma}^a_{p\sigma}(\epsilon)]$, the Green functions ${\bf G}^r_{\sigma}(\epsilon)=[{\bf G}^{a}_{\sigma}(\epsilon)]^{\dagger} =1/[\epsilon-{\bf H}^{cen}_{\sigma}-\sum_{p}{\bf\Sigma}^r_{p\sigma}]$, and ${\bf H}^{cen}_{\sigma}$ is the Hamiltonian of the center region.
The retarded self-energy ${\bf\Sigma}^r_{p\sigma}(\epsilon)$ due to the coupling to the terminal $p$ can be calculated numerically.[@ref21] After obtaining the transmission coefficient, the particle current in the terminal $p$ with the spin ${\sigma}$ can be calculated from the Landauer-Büttiker formula: $I_{p\sigma}= (1/h)\int d \epsilon \sum_q T_{pq\sigma}(\epsilon)[f_{q\sigma}(\epsilon)-f_{p\sigma}(\epsilon)]$, where $f_{p\sigma}(\epsilon) =1/\{\exp[(\epsilon-\mu_{p\sigma})/k_BT]+1\}$ is the Fermi distribution function in the terminal $p$, with the spin-dependent chemical potential $\mu_{p\sigma}$ and the temperature $T$. In the following numerical calculations, we take $t=1$ as the energy unit and consider only the zero-temperature case ($T=0$), as the thermal energy $k_BT$ is normally much smaller than the other energy scales in the problem. The sample width is denoted by $N$, and the insets of Fig.1a show a system with $N=3$. In the calculations, we choose $N=80$ and $40$, and the corresponding widths are $33.9$ nm and $16.9$ nm. The magnetic field is described by $\phi$, with $\phi \equiv (3\sqrt{3}/4) a^2 B/\phi_0$, so that the magnetic flux through a hexagon of the honeycomb lattice is $2\phi$. We first consider the four-terminal device (see the inset at the top right corner of Fig.1a); a small bias $V$ is applied between the longitudinal terminals 1 and 3 to study the induced charge current $I_{ne}$ \[$I_{ne}\equiv e(I_{n\uparrow}+I_{n\downarrow})$\] and spin current $I_{ns}$ \[$I_{ns}\equiv (\hbar/2)(I_{n\uparrow}-I_{n\downarrow})$\] in the transversal terminals 2 and 4. Here the boundary conditions for the four terminals are $\mu_{1\uparrow}=\mu_{1\downarrow}=eV/2$, $\mu_{2\uparrow}=\mu_{2\downarrow}=0$, $\mu_{3\uparrow}=\mu_{3\downarrow}=-eV/2$, and $\mu_{4\uparrow}=\mu_{4\downarrow}=0$. The currents in the terminals 2 and 4 satisfy the relations: $I_{2e}=-I_{4e}\equiv I_e$ and $I_{2s}=-I_{4s}\equiv I_s$.
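As a hedged numerical sketch of the transmission formula above, the following evaluates $T = {\rm Tr}[{\bf \Gamma}_L {\bf G}^r {\bf \Gamma}_R {\bf G}^a]$ for a hypothetical single-channel tight-binding chain with energy-independent (wide-band) lead self-energies; the honeycomb lattice, spin structure, Peierls phases, and the numerically computed self-energies of the actual device are all omitted:

```python
import numpy as np

def transmission(E, L=5, t=1.0, eta=1.0):
    """T = Tr[Gamma_L G^r Gamma_R G^a] for a 1D chain with wide-band leads.

    eta = t matches the exact lead self-energy -i*t at the band center;
    all parameters are illustrative, not those of the graphene device.
    """
    H = np.zeros((L, L), dtype=complex)
    for i in range(L - 1):                      # nearest-neighbor hopping
        H[i, i + 1] = H[i + 1, i] = -t
    sigma = np.zeros((L, L), dtype=complex)
    sigma[0, 0] = sigma[-1, -1] = -1j * eta     # Sigma^r_L + Sigma^r_R
    Gr = np.linalg.inv(E * np.eye(L) - H - sigma)
    Ga = Gr.conj().T
    gamma_L = np.zeros((L, L)); gamma_L[0, 0] = 2 * eta    # i(Sigma^r - Sigma^a)
    gamma_R = np.zeros((L, L)); gamma_R[-1, -1] = 2 * eta
    return np.trace(gamma_L @ Gr @ gamma_R @ Ga).real

print(transmission(0.0))   # matched leads at the band center of a uniform chain: T = 1
```

With `eta = t` the wide-band self-energy coincides with the exact semi-infinite-chain value at $E=0$, so a uniform chain transmits perfectly there; away from the band center the wide-band approximation gives $0 \le T \le 1$ for this single channel.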
Fig.1a and 1b show the Hall conductance $I_e/V$ and the spin Hall conductance $I_s/V$ versus the Dirac-point energy $\epsilon_0$, respectively. For a non-ferromagnetic graphene ($M=0$) under the high magnetic field ($\phi=0.005$), $I_s/V$ is zero and $I_e/V$ exhibits plateaus at odd-integer values $ne^2/h$ ($n=\pm 1$, $\pm 3$, ...) due to the quantum Hall effect (QHE). These results have been observed in recent experiments.[@ref14; @ref15] For a ferromagnetic graphene with $M\not=0$, the spin current emerges (see Fig.1b) due to the QSHE. The spin Hall conductance $I_s/V$ also shows quantized plateaus. By considering the edge states under the high magnetic field, the plateau values of $I_s/V$ and $I_e/V$ can be analytically derived to be $(\nu_{\uparrow}-\nu_{\downarrow})e/8\pi$ and $(\nu_{\uparrow}+\nu_{\downarrow})e^2/2h$,[@supplement] where $\nu_{\sigma}$ is the Landau filling factor for spin $\sigma$. In particular, when $|\epsilon_0|<|M|$, so that the Fermi energy $E_F$ ($E_F=0$) lies between the spin-up Dirac point $\epsilon_0-M$ and the spin-down Dirac point $\epsilon_0+M$, $I_e$ is zero and a net quantum spin current emerges in the transversal terminals. In addition, in the open-circuit case, spin accumulation emerges at the sample boundaries instead of a spin current.[@supplement] Since the QSHE can give rise to quantum plateaus in resistances, we next study the longitudinal and Hall resistances in the six-terminal Hall device (see the inset in the lower left corner of Fig.1a). Now we consider a small bias $V$ applied to the longitudinal terminals 1 and 4. The transversal terminals 2, 3, 5, and 6 are all voltage probes, so their charge currents vanish ($I_{pe}=0$) and $\mu_{p\uparrow}=\mu_{p\downarrow}\equiv \mu_p$.
Combining these boundary conditions with the Landauer-Büttiker formula, the voltages $V_p$ ($V_p = \mu_p/e$) in the four voltage probes can be obtained; the longitudinal resistance $R_{14,23}=(V_2-V_3)/I_{14}$ and the Hall resistance $R_{14,26}=(V_2-V_6)/I_{14}$ then follow, where $I_{14}= -I_{1e}= I_{4e}$. The resistances satisfy the relations $R_{14,26}=R_{14,35}$ and $R_{14,23}=R_{14,65}$. ![ The panels (a) and (b) show the resistances $R_{14,23}$ and $R_{14,26}$ (in the unit of $h/e^2$) vs the exchange split $M$ and the energy $\epsilon_0$ for $N=80$ and $\phi=0.005$. []{data-label="fig:2"}](fig2.eps){width="7cm"} Fig.2a and 2b show the longitudinal and Hall resistances, $R_{14,23}$ and $R_{14,26}$, versus the energy $\epsilon_0$ and the exchange split $M$ at an external magnetic field $\phi=0.005$. Due to the QSHE and QHE, both $R_{14,23}$ and $R_{14,26}$ may be non-zero, and they both exhibit plateau structures. The plateau values are determined by the filling factors $\nu_{\uparrow}$ and $\nu_{\downarrow}$. For fixed filling factors $\nu_{\uparrow}$ and $\nu_{\downarrow}$, $R_{14,23}$ and $R_{14,26}$ maintain their plateau values regardless of $\epsilon_0$ and $M$. By considering carrier transport along the edge states, the plateau values can be analytically derived:[@supplement] $R_{14,23}=0$ and $R_{14,26} = [1/(\nu_{\uparrow}+ \nu_{\downarrow})] h/e^2$ for $(\nu_{\uparrow},\nu_{\downarrow})=(+,+)$ or $(-,-)$, and $R_{14,23}= [|\nu_{\uparrow}\nu_{\downarrow}|/ ( |\nu_{\uparrow}|^3 +|\nu_{\downarrow}|^3 )] h/e^2$ and $R_{14,26} = {\rm sign}(\nu_{\uparrow})[ (|\nu_{\uparrow}|^2- |\nu_{\downarrow}|^2)/ ( |\nu_{\uparrow}|^3 +|\nu_{\downarrow}|^3 )] h/e^2$ for $(\nu_{\uparrow},\nu_{\downarrow})=(+,-)$ or $(-,+)$. Some plateau values for low $\nu_{\uparrow},\nu_{\downarrow}$ have been labeled in Fig.2. The numerical results in Fig.2 are in excellent agreement with the analytic plateau values (the differences between them are less than $10^{-6}$).
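The case analysis behind these analytic plateau values is elementary arithmetic in the filling factors. A small helper (ours, not from the paper) encoding the two branches makes the checks against the plateaus labeled in Fig.2 easy to reproduce:

```python
from fractions import Fraction

def plateau_resistances(nu_up, nu_dn):
    """Analytic plateau values (R_14,23, R_14,26) in units of h/e^2 for
    odd-integer filling factors, following the two cases quoted above."""
    if nu_up * nu_dn > 0:                        # (+,+) or (-,-): pure QHE
        return Fraction(0), Fraction(1, nu_up + nu_dn)
    au, ad = abs(nu_up), abs(nu_dn)              # (+,-) or (-,+): QSHE present
    denom = au**3 + ad**3
    r_long = Fraction(au * ad, denom)
    r_hall = (1 if nu_up > 0 else -1) * Fraction(au**2 - ad**2, denom)
    return r_long, r_hall
```

For instance, $(\nu_{\uparrow},\nu_{\downarrow})=(1,-1)$ reproduces the pure-QSHE plateau $R_{14,23}=(1/2)\,h/e^2$ with $R_{14,26}=0$, while $(3,-1)$ gives the $3/28$ and $2/7$ plateaus.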
Furthermore, $R_{14,23}$ and $R_{14,26}$ have the following properties: When $|\epsilon_0|>|M|$ with $(\nu_{\uparrow}, \nu_{\downarrow})= (+,+)$ or $(-,-)$, the longitudinal resistance $R_{14,23}$ is zero and only the Hall resistance $R_{14,26}$ exists, because the spin-up and spin-down carriers are simultaneously either electron-like or hole-like and move in the same direction. On the other hand, when $|\epsilon_0|<|M|$ with $(\nu_{\uparrow}, \nu_{\downarrow})= (+,-)$ or $(-,+)$, the Fermi energy $E_F$ is located between $\epsilon_0+M$ and $\epsilon_0-M$, and the longitudinal resistance $R_{14,23}$ emerges because the spin-up and spin-down carriers now move in opposite directions along a given edge. (i) When $\nu_{\uparrow}=-\nu_{\downarrow} \equiv \nu$, the Hall resistance $R_{14,26}=0$, and only the longitudinal resistance $R_{14,23}$ exists, with the value $(1/2\nu)h/e^2$. This means that only the QSHE emerges and the QHE vanishes in this region. In this case, the system has the CT invariance. Furthermore, if $\nu_{\uparrow}=-\nu_{\downarrow}=\pm 1$, $R_{14,26}=0$ and $R_{14,23}=(1/2) (h/e^2)$. The observed phenomena are then completely the same as those of the QSHE from the SOI,[@ref5; @ref6; @ref7; @ref9; @ref10] but their physical mechanisms are different. (ii) When $\nu_{\uparrow}\not=-\nu_{\downarrow}$ but still with ($\nu_{\uparrow}$, $\nu_{\downarrow})=(+,-)$ or $(-,+)$, $R_{14,26}$ is non-zero because the numbers of the spin-up and spin-down edge states are different. In this case, both resistances $R_{14,26}$ and $R_{14,23}$ have non-zero quantized plateaus, and the QSHE and QHE coexist. Fig.3 shows the resistances $R_{14,23}$ and $R_{14,26}$ versus the energy $\epsilon_0$ for a fixed $M$ (i.e. along the horizontal lines in Fig.2), and it clearly shows that the quantum plateaus persist very well. ![ (Color online) The resistances $R_{14,23}$ (a) and $R_{14,26}$ (b) vs the energy $\epsilon_0$ for $N=80$ and $\phi=0.005$.
[]{data-label="fig:3"}](fig3.eps){width="8.5cm"} So far, we have demonstrated the existence of the QSHE in the ferromagnetic graphene from both the physical picture and detailed numerical calculations. In the following, we study the properties of the spin Hall resistance $R_s$, a measurable quantity that is robust to dephasing[@ref22] and well reflects the topological invariance of the system. We again consider the four-terminal Hall bar, but now the transversal terminals 2 and 4 are spin-biased probes with the boundary conditions $I_{p\uparrow} =I_{p\downarrow} = 0$ ($p=2,4$). Here the spin Hall resistance $R_s$ is defined as the transversal spin bias over the longitudinal charge current: $R_s \equiv (\mu_{2\uparrow}-\mu_{2\downarrow})/eI_{13} = -(\mu_{4\uparrow}-\mu_{4\downarrow})/eI_{13}$. Since the spin bias $\mu_{n\uparrow}-\mu_{n\downarrow}$ is experimentally measurable, so is $R_s$.[@ref23; @ref24] Fig.4 shows $R_s$ versus the energy $\epsilon_0$ for different values of the ferromagnetic exchange split $M$ and the magnetic field $\phi$. For $|\epsilon_0|>|M|$ with $(\nu_{\uparrow},\nu_{\downarrow})=(+,+)$ or $(-,-)$, $R_s=0$. On the other hand, when $|\epsilon_0|<|M|$ with $(\nu_{\uparrow},\nu_{\downarrow})=(+,-)$ or $(-,+)$, $R_s$ is non-zero. $R_s$ exhibits quantum plateaus, and its plateau values are at $[1/(|\nu_{\uparrow}|+|\nu_{\downarrow}|)] h/e^2$. For a small $M$ (e.g. $M=0.02t$ or $0.05t$ in Fig.4b) or a high magnetic field $\phi$ (e.g. $\phi=0.005$ in Fig.4a), $(\nu_{\uparrow},\nu_{\downarrow})$ can only equal $(1,-1)$, so only the plateau at $R_s= h/2e^2$ emerges. But for a large $M$ or a small magnetic field $\phi$, $(\nu_{\uparrow},\nu_{\downarrow})$ may be $(1,-3)$, $(3,-1)$, $(1,-5)$, $(5,-1)$, etc., so the plateaus at $R_s= h/4e^2$, $h/6e^2$, etc., are also possible. ![ (Color online) (a) shows $R_s$ vs $\epsilon_0$ for $M=0.05$ and (b) shows $R_s$ vs $\epsilon_0$ for $\phi=0.005$.
The parameter $N=80$.[]{data-label="fig:4"}](fig4.eps){width="8.5cm"} ![ (Color online) $R_s$ vs $\epsilon_0$ (a) and $R_s$ vs the disorder strength $W$ (b) with the parameters $N=40$, $\phi=0.007$, and $M=0.05$. The curves in (a) and (b) are averaged over up to 1000 and 8000 random configurations, respectively. []{data-label="fig:5"}](fig5.eps){width="8.5cm"} Finally, we examine the effect of disorder on the spin Hall resistance $R_s$. Here we assume that the disorder exists only in the central region (see the dotted box in the top right inset of Fig.1a). Due to the disorder, the on-site energy $\epsilon_0 -\sigma M$ for each site $i$ in the central region is changed to $\epsilon_0 +w_i -\sigma M$, where $w_i$ is uniformly distributed in the range $[-W/2, W/2]$ with the disorder strength $W$. Fig.5a shows $R_s$ versus the energy $\epsilon_0$ at different disorder strengths $W$, and Fig.5b shows $R_s$ versus the disorder strength $W$ at different energies $\epsilon_0$. The results show that the quantum plateaus of $R_s$ are very robust against disorder because of the topological invariance of the system. The quantum plateau maintains its quantized value very well even when $W$ reaches 2 (see Fig.5a and 5b). Since the plateau is so robust and stable, its value can be used as a standard for the spin Hall resistance. In addition, even for a very large disorder strength $W$ (e.g. $W=5$ or larger), the plateau value only slightly decreases while maintaining the plateau structure (see Fig.5b). This is because, although the disorder strongly weakens the spin bias $\mu_{2\uparrow}-\mu_{2\downarrow}$, it also weakens the longitudinal charge current $I_{13}$, so the value of $R_s$ is less affected. This means that in the large-disorder limit ($W\rightarrow \infty$), although the QSHE is broken, the SHE still holds. In summary, we predict a new QSHE in the ferromagnetic graphene film.
Unlike the QSHEs studied so far, this QSHE does not originate from the spin-orbit interaction. The results also show that the system can exhibit the QSHE, the QHE, and the coexistence of the QSHE and QHE, depending on the filling factors of the spin-up and spin-down carriers. Due to the QSHE and QHE, both the longitudinal and Hall resistances exhibit plateau structures. The plateau values (in the unit of $h/e^2$) are at $1/2$, $1/6$, $3/28$, ..., for the longitudinal resistance and at $\pm 1/2$, $\pm 1/4$, $\pm 1/6$, $\pm 2/7$, ..., for the Hall resistance. In addition, the spin Hall resistance has also been investigated and found to be robust against disorder. [**Acknowledgments:**]{} This work was financially supported by NSF-China under Grants Nos. 10525418, 10734110, and 10821403, the China-973 program, and US-DOE under Grant No. DE-FG02-04ER46124. Q.F.S. gratefully acknowledges Prof. R. B. Tao for many helpful discussions. Electronic address: sunqf@aphy.iphy.ac.cn M.I. Dyakonov and V.I. Perel, JETP Lett. [**13**]{}, 467 (1971). J.E. Hirsch, Phys. Rev. Lett. [**83**]{}, 1834 (1999). S. Murakami, N. Nagaosa, and S.C. Zhang, Science [**301**]{}, 1348 (2003); J. Sinova [*et al*]{}., Phys. Rev. Lett. [**92**]{}, 126603 (2004). Y.K. Kato [*et al*]{}., Science [**306**]{}, 1910 (2004); J. Wunderlich [*et al*]{}., Phys. Rev. Lett. [**94**]{}, 047204 (2005). C.L. Kane and E.J. Mele, Phys. Rev. Lett. [**95**]{}, 146802 (2005). C.L. Kane and E.J. Mele, Phys. Rev. Lett. [**95**]{}, 226801 (2005). C. Day, Physics Today [**61**]{}, 19 (2008); N. Nagaosa, Science [**318**]{}, 758 (2007). H. Min [*et al*]{}., Phys. Rev. B [**74**]{}, 165310 (2006); Y. Yao [*et al*]{}., Phys. Rev. B [**75**]{}, 041401(R) (2007). L. Sheng [*et al*]{}., Phys. Rev. Lett. [**95**]{}, 136602 (2005); B.A. Bernevig and S.-C. Zhang, Phys. Rev. Lett. [**96**]{}, 106802 (2006); L. Fu, C.L. Kane, and E.J. Mele, Phys. Rev. Lett. [**98**]{}, 106803 (2007); C. Liu [*et al*]{}., Phys. Rev. Lett. [**100**]{}, 236601 (2008).
B.A. Bernevig, T.L. Hughes, and S.C. Zhang, Science [**314**]{}, 1757 (2006). M. König [*et al*]{}., Science [**318**]{}, 766 (2007). D. Hsieh [*et al*]{}., Nature (London) [**452**]{}, 970 (2008). C.W.J. Beenakker, Rev. Mod. Phys. [**80**]{}, 1337 (2008); A.H. Castro Neto [*et al*]{}., Rev. Mod. Phys. [**81**]{}, 109 (2009). K.S. Novoselov [*et al*]{}., Science [**306**]{}, 666 (2004); Nature (London) [**438**]{}, 197 (2005); Nat. Phys. [**2**]{}, 177 (2006). Y. Zhang [*et al*]{}., Nature (London) [**438**]{}, 201 (2005). Y.-W. Son [*et al*]{}., Nature (London) [**444**]{}, 347 (2006); E.-J. Kan [*et al*]{}., Appl. Phys. Lett. [**91**]{}, 243116 (2007). H. Haugen [*et al*]{}., Phys. Rev. B [**77**]{}, 115406 (2008); J. Linder [*et al*]{}., Phys. Rev. Lett. [**100**]{}, 187004 (2008). Q. Zhang [*et al*]{}., Phys. Rev. Lett. [**101**]{}, 047005 (2008). W. Long, Q.F. Sun, and J. Wang, Phys. Rev. Lett. [**101**]{}, 166806 (2008). , edited by S. Datta (Cambridge University Press 1995). D.H. Lee and J.D. Joannopoulos, Phys. Rev. B [**23**]{}, 4997 (1981); M.P. Lopez Sancho [*et al*]{}., J. Phys. F: Met. Phys. [**14**]{}, 1205 (1984); [**15**]{}, 851 (1985). See EPAPS Document No. E-PRLTAO-\*\*\*-\*\*\*\*\*\* for supplementary material. For more information on EPAPS, see http://www.aip.org/pubservs/epaps.html. H. Jiang [*et al*]{}., Phys. Rev. Lett. [**103**]{}, 036803 (2009). E.J. Koop [*et al*]{}., Phys. Rev. Lett. [**101**]{}, 056602 (2008); Q.-F. Sun [*et al*]{}., Phys. Rev. B [**77**]{}, 195313 (2008); Y.X. Xing [*et al*]{}., Appl. Phys. Lett. [**93**]{}, 142107 (2008). S.M. Frolov [*et al*]{}., Phys. Rev. Lett. [**102**]{}, 116802 (2009); Nature [**458**]{}, 868 (2009).
--- abstract: 'We study integral almost square-free modular categories; i.e., integral modular categories of Frobenius-Perron dimension $p^nm$, where $p$ is a prime number, $m$ is a square-free natural number and ${\rm gcd}(p,m)=1$. We prove that if $n\leq 5$ or $m$ is prime with $m<p$ then they are group-theoretical. This generalizes several results in the literature and gives a partial answer to the question posed by the first author and H. Tucker. As an application, we prove that an integral modular category whose Frobenius-Perron dimension is odd and less than $1125$ is group-theoretical.' address: - 'College of Engineering, Nanjing Agricultural University, Nanjing, Jiangsu 210031, China' - 'Department of Mathematics, Yangzhou University, Yangzhou, Jiangsu 225002, China' - 'College of Engineering, Nanjing Agricultural University, Nanjing, Jiangsu 210031, China' author: - Jingcheng Dong - Libin Li - Li Dai title: 'Integral almost square-free modular categories' --- Introduction {#sec1} ============ Modular categories arise from many areas of mathematics and physics, including the representation theory of quantum groups [@BaKi2001lecture], vertex operator algebras [@Huang2005vertex], von Neumann algebras [@EvKa1998quantum], topological quantum field theory [@Turaev1994quantum] and conformal field theory [@MoSe1989Classical]. The importance of modular categories also comes from their applications in condensed matter physics and quantum computing [@Wang2010Top]. A fusion category is called **almost square-free (ASF)** if its **Frobenius-Perron (FP)** dimension has the form $p^nm$, where $p$ is a prime number, $m$ is a square-free natural number and ${\rm gcd}(p,m)=1$. Several examples of this class of fusion categories have been extensively studied in the literature; see [@bruillard2013classification; @DoTu2015; @dong2013semisimple; @etingof2005fusion; @naidu2011finiteness]. One important class of fusion categories is that of group-theoretical fusion categories.
As described by Etingof, Nikshych and Ostrik [@etingof2005fusion], a group-theoretical fusion category can be explicitly constructed from finite group data (see Section \[sec25\]). Therefore this class of fusion categories can be viewed as the simplest fusion categories apart from the pointed ones. This is similar in spirit to the work of classifying finite-dimensional Hopf algebras: if a finite-dimensional Hopf algebra can be obtained from group algebras or dual group algebras then it must be well-understood. The main task of this paper is to determine when an integral ASF modular category is group-theoretical. The paper is organized as follows. In Section \[sec2\], we recall some basic definitions and results which will be used throughout. In Section \[sec3\], we study the existence of non-trivial Tannakian subcategories and their FP dimensions in an integral ASF modular category. As one of the consequences, we get that any integral ASF modular category with FP dimension $p^nm$ is group-theoretical if $n\leq 5$. This generalizes the work in a series of papers [@bruillard2013classification; @DoTu2015; @naidu2011finiteness]. In Section \[sec5\], we study the core of an integral ASF modular category, a notion introduced in [@drinfeld2010braided]. We prove that the core of an integral ASF modular category is a pointed modular category. As a consequence, we get the structure of an integral ASF modular category: it is an equivariantization of a nilpotent fusion category of nilpotency class $2$. As an application, we get that an integral modular category with FP dimension $p^nq$, where $q<p$ are prime numbers, is group-theoretical. This gives a partial answer to the question posed in [@DoTu2015]. In Section \[sec6\], we apply the results obtained so far to a low-dimensional integral modular category ${{\mathcal C}}$.
We find that if the FP dimension ${\operatorname{FPdim}}({{\mathcal C}})$ is odd and less than 1125 then ${{\mathcal C}}$ is ASF except in the case ${\operatorname{FPdim}}({{\mathcal C}})=675$. We prove that if ${\operatorname{FPdim}}({{\mathcal C}})=675$ then ${{\mathcal C}}$ is a pointed modular category. This allows us to conclude that integral modular categories whose Frobenius-Perron dimensions are odd and less than $1125$ are group-theoretical. Preliminaries {#sec2} ============= We shall work over an algebraically closed field $k$ of characteristic zero. A fusion category is an abelian $k$-linear semisimple rigid monoidal category with a simple unit object $\textbf{1}$, finite-dimensional morphism spaces, and finitely many isomorphism classes of simple objects. We refer the reader to [@drinfeld2010braided; @etingof2005fusion] for the main notions about fusion categories. For the reader’s convenience, we collect some definitions and basic results in this section. Frobenius-Perron dimension {#sec21} -------------------------- Let ${{\mathcal C}}$ be a fusion category, and let ${\operatorname{Irr}}({{\mathcal C}})$ denote the set of isomorphism classes of simple objects of ${{\mathcal C}}$. The FP dimension of a simple object $X\in{{\mathcal C}}$ is the FP eigenvalue of the matrix of left multiplication by the class of $X$ in ${\operatorname{Irr}}({{\mathcal C}})$. The FP dimension of ${{\mathcal C}}$ is the number $${\operatorname{FPdim}}({{\mathcal C}})=\sum_{X\in{\operatorname{Irr}}({{\mathcal C}})}{\operatorname{FPdim}}(X)^2.$$ A fusion category ${{\mathcal C}}$ is called integral if ${\operatorname{FPdim}}(X)$ is an integer for every simple object $X\in{{\mathcal C}}$, and it is called weakly integral if ${\operatorname{FPdim}}({{\mathcal C}})$ is an integer. Let ${\operatorname{Irr}}_\alpha({{\mathcal C}})$ be the set of isomorphism classes of simple objects of FP dimension $\alpha$.
Then the set ${\operatorname{Irr}}_1({{\mathcal C}})$ is the set of all invertible simple objects, and it generates the unique largest pointed fusion subcategory ${{\mathcal C}}_{pt}$ of ${{\mathcal C}}$. Recall that a fusion category is called pointed if every simple object has Frobenius-Perron dimension $1$. The set ${\operatorname{Irr}}_1({{\mathcal C}})$ also forms a group with multiplication given by tensor product. We denote this group by $G({{\mathcal C}})$. This group admits an action on the set ${\operatorname{Irr}}({{\mathcal C}})$ by left tensor product. For $X\in {\operatorname{Irr}}({{\mathcal C}})$, we use $G[X]$ to denote the stabilizer of $X$ under this action. \[lem21\] Let ${{\mathcal C}}$ be a fusion category. Suppose that the order of $G({{\mathcal C}})$ is a power of a prime number $p$ and ${\rm gcd}(|{\operatorname{Irr}}_{\alpha}({{\mathcal C}})|, p)=1$ for some $\alpha>1$. Then there exists $X\in{\operatorname{Irr}}_\alpha({{\mathcal C}})$ such that $G[X]=G({{\mathcal C}})$. Obviously, the action of $G({{\mathcal C}})$ on the set ${\operatorname{Irr}}({{\mathcal C}})$ preserves FP dimension. Hence, this action can be restricted to the set ${\operatorname{Irr}}_\alpha({{\mathcal C}})$. The set ${\operatorname{Irr}}_\alpha({{\mathcal C}})$ is therefore a union of disjoint orbits under this restricted action, and every orbit has length $p^i$ for some $i\geqslant0$. Since ${\rm gcd}(|{\operatorname{Irr}}_{\alpha}({{\mathcal C}})|, p)=1$, there exists at least one orbit of length 1, which implies the lemma. Group extensions {#sec22} ---------------- Let $G$ be a finite group and let $e\in G$ be the identity element. A fusion category ${{\mathcal C}}$ is said to have a $G$-grading if ${{\mathcal C}}$ decomposes into a direct sum of full abelian subcategories ${{\mathcal C}}=\oplus_{g\in G}{{\mathcal C}}_g$ such that $({{\mathcal C}}_g)^\ast={{\mathcal C}}_{g^{-1}}$ and ${{\mathcal C}}_g\otimes{{\mathcal C}}_h\subseteq{{\mathcal C}}_{gh}$ for all $g,h\in G$.
The grading ${{\mathcal C}}=\oplus_{g\in G}{{\mathcal C}}_g$ is called faithful if ${{\mathcal C}}_g\neq 0$ for all $g\in G$. A fusion category ${{\mathcal C}}$ is said to be a $G$-extension of ${{\mathcal D}}$ if ${{\mathcal C}}$ admits a faithful grading ${{\mathcal C}}=\oplus_{g\in G}{{\mathcal C}}_g$ such that ${{\mathcal C}}_e\cong{{\mathcal D}}$. By [@etingof2005fusion Proposition 8.20], if ${{\mathcal C}}$ is a $G$-extension of ${{\mathcal D}}$ then, for all $g,h\in G$, we have $$\label{eq0} \begin{split} {\operatorname{FPdim}}({{\mathcal C}}_g)={\operatorname{FPdim}}({{\mathcal C}}_h)\, \mbox{and}\, {\operatorname{FPdim}}({{\mathcal C}})=|G| {\operatorname{FPdim}}({{\mathcal D}}). \end{split}$$ It is known that every fusion category ${{\mathcal C}}$ has a canonical faithful grading ${{\mathcal C}}=\oplus_{g\in \mathcal{U}({{\mathcal C}})}{{\mathcal C}}_g$ with trivial component ${{\mathcal C}}_e={{\mathcal C}}_{ad}$, where $\mathcal{U}({{\mathcal C}})$ is the universal grading group of ${{\mathcal C}}$, and ${{\mathcal C}}_{ad}$ is the adjoint subcategory of ${{\mathcal C}}$ generated by simple objects in $X\otimes X^\ast$ for all $X\in {\operatorname{Irr}}({{\mathcal C}})$. This grading is called the universal grading of ${{\mathcal C}}$. Equivariantizations and de-equivariantizations {#sec23} ---------------------------------------------- Let ${{\mathcal C}}$ be a fusion category with an action of a finite group $G$. We can then define a new fusion category ${{\mathcal C}}^G$ of $G$-equivariant objects in ${{\mathcal C}}$. An object of this category is a pair $(X,(u_g)_{g\in G})$, where $X$ is an object of ${{\mathcal C}}$, $u_g : g(X)\to X$ is an isomorphism for all $g\in G$, such that $$u_{gh}\circ \alpha_{g,h} =u_g \circ g(u_h),$$ where $\alpha_{g,h}: g(h(X))\to gh(X)$ is the natural isomorphism associated to the action. Morphisms and tensor product of equivariant objects are defined in the obvious way.
This new category is called the $G$-equivariantization of ${{\mathcal C}}$. In the other direction, let ${{\mathcal C}}$ be a fusion category and let ${{\mathcal E}}= {\operatorname{Rep}}(G)\subseteq \mathcal{Z}({{\mathcal C}})$ be a Tannakian subcategory that embeds into ${{\mathcal C}}$ via the forgetful functor $\mathcal{Z}({{\mathcal C}})\to {{\mathcal C}}$. Here $\mathcal{Z}({{\mathcal C}})$ denotes the Drinfeld center of ${{\mathcal C}}$. Let $A={\rm Fun}(G)$ be the algebra of functions on $G$. It is a commutative algebra in $\mathcal{Z}({{\mathcal C}})$. Let ${{\mathcal C}}_G$ denote the category of left $A$-modules in ${{\mathcal C}}$. It is a fusion category, called the de-equivariantization of ${{\mathcal C}}$ by ${{\mathcal E}}$. See [@drinfeld2010braided] for details on equivariantizations and de-equivariantizations. Equivariantizations and de-equivariantizations are inverse to each other: $$({{\mathcal C}}_G)^G\cong{{\mathcal C}}\cong({{\mathcal C}}^G)_G,$$ and their FP dimensions have the following relations: $$\label{eq1} \begin{split} {\operatorname{FPdim}}({{\mathcal C}})=|G|{\operatorname{FPdim}}({{\mathcal C}}_G) \,\mbox{\,and}\, {\operatorname{FPdim}}({{\mathcal C}}^G)=|G| {\operatorname{FPdim}}({{\mathcal C}}). \end{split}$$ Solvable fusion categories {#sec24} -------------------------- A fusion category ${{\mathcal C}}$ is nilpotent if there exists a sequence of fusion categories ${\operatorname{Vec}}={{\mathcal C}}_0\subseteq{{\mathcal C}}_1\subseteq\cdots\subseteq{{\mathcal C}}_n={{\mathcal C}}$, and a sequence of finite groups $G_1,\cdots,G_n$ such that ${{\mathcal C}}_i$ is a $G_i$-extension of ${{\mathcal C}}_{i-1}$ for all $i$. If the groups $G_1,\cdots,G_n$ are cyclic groups of prime order then ${{\mathcal C}}$ is called cyclically nilpotent. A fusion category ${{\mathcal C}}$ is called solvable if it is Morita equivalent to a cyclically nilpotent fusion category.
Here the notion of Morita equivalence is a categorical analogue of the notion of Morita equivalence for rings. Precisely, two fusion categories ${{\mathcal C}}$ and ${{\mathcal D}}$ are Morita equivalent if ${{\mathcal D}}$ is equivalent to the dual ${{\mathcal C}}^\ast_{{{\mathcal M}}}$ with respect to an indecomposable module category ${{\mathcal M}}$, where ${{\mathcal C}}^\ast_{{{\mathcal M}}}$ is the category of ${{\mathcal C}}$-module endofunctors of ${{\mathcal M}}$. Solvable fusion categories have good properties: If ${{\mathcal C}}$ is a solvable fusion category then ${{\mathcal C}}$ can be obtained by recursive extensions or equivariantizations from the trivial fusion category ${\operatorname{Vec}}$, where all finite groups involved are cyclic groups of prime order. It is shown in [@etingof2011weakly Proposition 4.5] that the class of solvable fusion categories is closed under taking extensions and equivariantizations by solvable groups. Group-theoretical fusion categories {#sec25} ----------------------------------- A fusion category ${{\mathcal C}}$ is called group-theoretical if it is Morita equivalent to a pointed fusion category. A group-theoretical fusion category can be precisely reconstructed from finite group data as follows: Let $H$ be a subgroup of a finite group $G$, $\omega \in Z^3(G, \mathbb{C}^{\times})$ a normalized 3-cocycle, and $\psi \in C^2(H, \mathbb{C}^{\times})$ a normalized 2-cochain such that $\mathrm{d}\psi = \omega |_H$. Let ${\operatorname{Vec}}_G^\omega$ be the category of $G$-graded vector spaces with associativity given by the 3-cocycle $\omega$. The twisted group algebra $\mathbb{C}_\psi[H]$ is an associative algebra in ${\operatorname{Vec}}_G^\omega$. Therefore we may consider the category $${{\mathcal C}}(G, \omega, H, \psi) := \{\mathbb{C}_\psi[H]-\text{bimodules in }{\operatorname{Vec}}_G^\omega\}$$ with tensor product $\otimes_{\mathbb{C}_\psi[H]}$ and unit object $\mathbb{C}_\psi[H]$. It is a group-theoretical fusion category.
In fact every group-theoretical fusion category can be obtained in this way [@etingof2005fusion Section 8.8]. Note that a group-theoretical fusion category must be integral [@etingof2005fusion Corollary 8.43]. Braided fusion categories {#sec26} ------------------------- A fusion category ${{\mathcal C}}$ is called braided if it admits a braiding $c$, where the braiding $c$ is a family of natural isomorphisms: $c_{X,Y}$:$X\otimes Y\rightarrow Y\otimes X$ satisfying the hexagon axioms for all $X,Y\in{{\mathcal C}}$ [@kassel1995quantum]. A braided fusion category ${{\mathcal C}}$ is called symmetric if $c_{Y,X}c_{X,Y}=id_{X\otimes Y}$ for all objects $X,Y\in{{\mathcal C}}$. A symmetric fusion category ${{\mathcal C}}$ is said to be Tannakian if it is equivalent to ${\operatorname{Rep}}(G)$ for some finite group $G$ as symmetric categories. Let $G$ be a finite group and let $u\in G$ be a central element of order 2. Then the category ${\operatorname{Rep}}(G)$ has a braiding $c^u_{X,Y}$ as follows: for all $x\in X, y\in Y$, $$\label{eq2} \begin{split} c^u_{X,Y}(x\otimes y)=(-1)^{mn}y\otimes x\,\, \mbox{if}\,\,ux=(-1)^mx,uy=(-1)^ny. \end{split}\nonumber$$ Let ${\operatorname{Rep}}(G,u)$ be the fusion category ${\operatorname{Rep}}(G)$ equipped with the new braiding $c^u_{X,Y}$. Deligne proved that any symmetric fusion category is equivalent to some ${\operatorname{Rep}}(G,u)$ [@deligne1990categories]. The following lemma is taken from [@drinfeld2010braided Corollary 2.50]. \[lem22\] Let ${{\mathcal C}}$ be the symmetric fusion category ${\operatorname{Rep}}(G,u)$. Then one of the following holds: \(1) ${{\mathcal C}}$ is a Tannakian category; \(2) ${\operatorname{Rep}}(G/\langle u\rangle)$ is a Tannakian subcategory of ${{\mathcal C}}$ with dimension $\frac{1}{2}{\operatorname{FPdim}}({{\mathcal C}})$. In particular, if ${\operatorname{FPdim}}({{\mathcal C}})$ is odd then ${{\mathcal C}}$ is Tannakian. 
Note that the lemma above shows that if ${\operatorname{FPdim}}({{\mathcal C}})$ is bigger than 2, then ${{\mathcal C}}$ always has a non-trivial Tannakian subcategory ${\operatorname{Rep}}(G/\langle u\rangle)$. Let ${{\mathcal D}}\subset{{\mathcal C}}$ be a fusion subcategory. Then the Müger centralizer ${{\mathcal D}}'$ of ${{\mathcal D}}$ in ${{\mathcal C}}$ is the fusion subcategory $${{\mathcal D}}'=\{Y\in{{\mathcal C}}|c_{Y,X}c_{X,Y}=id_{X\otimes Y}\, \mbox{for all}\, X\in{{\mathcal D}}\}.$$ The Müger center $\mathcal{Z}_2({{\mathcal C}})$ of ${{\mathcal C}}$ is the Müger centralizer ${{\mathcal C}}'$ of ${{\mathcal C}}$. The fusion category ${{\mathcal C}}$ is called non-degenerate if its Müger center $\mathcal{Z}_2({{\mathcal C}})$ is trivial. A braided fusion category is called premodular if it admits a spherical structure. A modular category is a non-degenerate premodular category. Combining Proposition 8.23 with Proposition 8.24 of [@etingof2005fusion], we know that a weakly integral braided fusion category is modular if and only if it is non-degenerate. Thus, an ASF fusion category is modular if and only if it is non-degenerate. Suppose that ${{\mathcal C}}$ is a modular category and ${{\mathcal D}}\subseteq {{\mathcal C}}$ is a fusion subcategory. The following equalities will be frequently used in this paper: $$\begin{aligned} ({{\mathcal C}}_{pt})'& = {{\mathcal C}}_{ad}\label{cptad};\\ {\operatorname{FPdim}}({{\mathcal D}}) {\operatorname{FPdim}}({{\mathcal D}}') &= {\operatorname{FPdim}}({{\mathcal C}})\label{fpdim}.\end{aligned}$$ The first equality comes from [@gelaki2008nilpotent Corollary 6.8], and the second one comes from [@muger2003structure Theorem 3.2]. Let ${{\mathcal E}}={\operatorname{Rep}}(G)$ be a Tannakian category, and let ${{\mathcal C}}$ be a braided fusion category containing ${\operatorname{Rep}}(G)$. 
Recall that the de-equivariantization ${{\mathcal C}}_G$ of ${{\mathcal C}}$ by ${{\mathcal E}}$ is a braided $G$-crossed fusion category. Let ${{\mathcal C}}_G=\oplus_{g\in G}({{\mathcal C}}_G)_g$ be the corresponding grading. The trivial component $({{\mathcal C}}_G)_e$ of this grading is a braided fusion category. By [@drinfeld2010braided Proposition 4.56], the braided fusion category ${{\mathcal C}}$ is non-degenerate if and only if the trivial component $({{\mathcal C}}_G)_e$ is non-degenerate and the corresponding grading of ${{\mathcal C}}_G$ is faithful. This description implies the following lemma. \[newadded\] If a modular category ${{\mathcal C}}$ contains a Tannakian subcategory ${\operatorname{Rep}}(G)$, then the square of $|G|$ divides ${\operatorname{FPdim}}({{\mathcal C}})$. In particular, if ${{\mathcal C}}$ is an ASF modular category with FP dimension $p^nm$, then the FP dimension of any Tannakian subcategory of ${{\mathcal C}}$ is a power of $p$. Let ${\operatorname{SuperVec}}$ be the category of super vector spaces. We recall a helpful lemma of Müger’s regarding braided fusion categories containing ${\operatorname{SuperVec}}$. \[mugerlem\][@muger2000galois Lemma 5.4] Let ${{\mathcal C}}$ be a braided fusion category such that ${\operatorname{SuperVec}}\subseteq \mathcal{Z}_2({{\mathcal C}})$ and let $g \in {{\mathcal C}}$ be the invertible object generating ${\operatorname{SuperVec}}$. Then $g \otimes X \ncong X$ for all $X \in {\operatorname{Irr}}({{\mathcal C}})$. Group-theoretical properties of integral ASF modular categories {#sec3} =============================================================== In this section, we aim to study the group-theoretical property of an integral ASF modular category, so we may assume all fusion categories considered are integral.
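The divisibility constraint in Lemma \[newadded\] rests on elementary arithmetic: if ${\operatorname{FPdim}}({{\mathcal C}})=p^nm$ with $m$ square-free and coprime to $p$, then the only $d$ with $d^2$ dividing $p^nm$ are powers of $p$. A minimal sketch (ours, not part of the paper; the values $p=3$, $n=4$, $m=10$ are an arbitrary illustration) verifies this for one ASF dimension:

```python
def square_divisors(N):
    """All d >= 1 whose square divides N."""
    return [d for d in range(1, int(N**0.5) + 1) if N % (d * d) == 0]

def is_power_of(d, p):
    """True if d is a (possibly zeroth) power of the prime p."""
    while d % p == 0:
        d //= p
    return d == 1

# ASF dimension with p = 3, n = 4, and square-free m = 10, gcd(p, m) = 1:
p, n, m = 3, 4, 10
N = p**n * m  # 810
ds = square_divisors(N)
# Every d with d^2 | p^n m is a power of p, so any Tannakian subcategory
# Rep(G) (whose |G|^2 divides FPdim) has p-power FP dimension.
```

Here `square_divisors(810)` returns only powers of $3$, matching the "in particular" clause of the lemma.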
Throughout this section, ${{\mathcal C}}$ is an integral modular category of FP dimension $p^nm$, where $p$ is a prime number, $m$ is a square-free natural number and ${\rm gcd}(p,m)=1$. We may assume that $m>1$ since [@drinfeld2007group Corollary 6.8] shows that an integral fusion category of prime power dimension is always group-theoretical. Note that an integral modular category ${{\mathcal C}}$ of square-free FP dimension must be pointed. In fact, [@etingof1998some Lemma 1.2] shows that the square of ${\operatorname{FPdim}}(X)$ divides ${\operatorname{FPdim}}({{\mathcal C}})$ for every simple object $X$ of ${{\mathcal C}}$. Since ${\operatorname{FPdim}}({{\mathcal C}})$ is square-free, ${\operatorname{FPdim}}(X)$ must be $1$. Hence ${{\mathcal C}}$ is pointed. We will therefore always assume in what follows that $n\geq2$. Write $n=2t+1$ if $n$ is odd, and $n=2t$ if $n$ is even. Then [@etingof1998some Lemma 1.2] shows that the possible FP dimensions of simple objects of ${{\mathcal C}}$ are $1, p, \cdots, p^t$. Let $a_0, a_1, \cdots, a_t$ be the numbers of non-isomorphic simple objects of ${{\mathcal C}}$ of FP dimension $1, p, \cdots, p^t$, respectively. Then we get an equation: $$\label{eq3} \begin{split} a_0+a_1p^2+\cdots+a_tp^{2t}=p^nm. \end{split}$$ Since we have assumed that $n\geq2$, we get that $p^2$ divides $a_0={\operatorname{FPdim}}({{\mathcal C}}_{pt})$ by equation (\[eq3\]). This observation implies the following result. \[lem30\] The FP dimension of ${{\mathcal C}}_{pt}$ is divisible by $p^2$. Since ${{\mathcal C}}$ is modular, the universal grading group $\mathcal{U}({{\mathcal C}})$ is isomorphic to the group $G({{\mathcal C}})$ consisting of the isomorphism classes of invertible simple objects [@gelaki2008nilpotent Theorem 6.2]. Hence $|\mathcal{U}({{\mathcal C}})|={\operatorname{FPdim}}({{\mathcal C}}_{pt})$ is divisible by $p^2$. \[lem31\] Suppose that ${{\mathcal C}}$ is not pointed.
Then ${\operatorname{FPdim}}({{\mathcal C}}_{pt})$ is not divisible by $p^{n-1}$. Suppose first that ${\operatorname{FPdim}}({{\mathcal C}}_{pt})=p^nm'$ for some $m'\in \mathbb{N}$. By equation (\[eq0\]), we have $$|\mathcal{U}({{\mathcal C}})|{\operatorname{FPdim}}({{\mathcal C}}_{ad})={\operatorname{FPdim}}({{\mathcal C}}_{pt}){\operatorname{FPdim}}({{\mathcal C}}_{ad})={\operatorname{FPdim}}({{\mathcal C}}).$$ This implies that ${\operatorname{FPdim}}({{\mathcal C}}_{ad})=m''>1$ is square-free and is not divisible by $p$, where $m'm''=m$. Since ${{\mathcal C}}_{ad}$ is braided, the FP dimension of every simple object of ${{\mathcal C}}_{ad}$ divides ${\operatorname{FPdim}}({{\mathcal C}}_{ad})$ [@etingof2011weakly Theorem 2.11]. Hence ${{\mathcal C}}_{ad}$ does not contain simple objects of FP dimension $p^i$, for any $1\leqslant i\leqslant t$. That is, ${{\mathcal C}}_{ad}$ is pointed. The subcategory ${{\mathcal C}}_{pt}$ is the unique largest pointed fusion subcategory of ${{\mathcal C}}$. Hence ${{\mathcal C}}_{ad}$ is a fusion subcategory of ${{\mathcal C}}_{pt}$. It follows that ${\operatorname{FPdim}}({{\mathcal C}}_{ad})$ divides ${\operatorname{FPdim}}({{\mathcal C}}_{pt})$ [@etingof2005fusion Proposition 8.15]. This is a contradiction since $1\neq {\operatorname{FPdim}}({{\mathcal C}}_{ad})$ is relatively prime to ${\operatorname{FPdim}}({{\mathcal C}}_{pt})$. Suppose then that ${\operatorname{FPdim}}({{\mathcal C}}_{pt})=p^{n-1}m'$ for some $m'\in \mathbb{N}$ with ${\rm gcd}(p,m')=1$. In this case ${\operatorname{FPdim}}({{\mathcal C}}_g)=pm''$ for every component ${{\mathcal C}}_g$ of the universal grading of ${{\mathcal C}}$, for some $m''\in \mathbb{N}$. Let $a_g^0, a_g^1, \cdots, a_g^t$ be the number of non-isomorphic simple objects of ${{\mathcal C}}_g$ of dimension $1, p, \cdots, p^t$. Then we have an equation: $$\label{eq4} \begin{split} a_g^0+a_g^1p^2+\cdots+a_g^tp^{2t}=pm''. 
\end{split}$$ This equation implies that $a_g^0\neq 0$ and $p$ divides $a_g^0$. This further means that every component ${{\mathcal C}}_g$ contains at least $p$ non-isomorphic invertible simple objects. There are $p^{n-1}m'$ components in the universal grading, since $|\mathcal{U}({{\mathcal C}})|={\operatorname{FPdim}}({{\mathcal C}}_{pt})$. Therefore, there are at least $p^nm'$ non-isomorphic invertible simple objects in ${{\mathcal C}}$. This contradicts the fact that ${\operatorname{FPdim}}({{\mathcal C}}_{pt})=p^{n-1}m'$. \[cor32\] If $n\leqslant3$ then ${{\mathcal C}}$ is pointed. If $n=0$ or $1$ then ${\operatorname{FPdim}}({{\mathcal C}})$ is square-free. This case has been stated at the beginning of this section. Now let $n=2$ or $3$, and suppose that ${{\mathcal C}}$ is not pointed. By Lemma \[lem30\], ${\operatorname{FPdim}}({{\mathcal C}}_{pt})$ is divisible by $p^2$. But Lemma \[lem31\] shows that this is impossible. Hence ${{\mathcal C}}$ is pointed. In view of Corollary \[cor32\], we assume that $n\geqslant4$ in what follows. \[lem33\] The largest pointed fusion subcategory $({{\mathcal C}}_{ad})_{pt}$ of ${{\mathcal C}}_{ad}$ is a symmetric subcategory with FP dimension $p^j$ for some $2\leqslant j\leqslant n-2$. In particular, ${{\mathcal C}}$ has a non-trivial pointed Tannakian subcategory of FP dimension $p$. By Lemma \[lem30\] and Lemma \[lem31\], we may write ${\operatorname{FPdim}}({{\mathcal C}}_{pt})=p^im'$ for some $2\leq i\leq n-2$ and ${\rm gcd}(p,m')=1$. So we can write ${\operatorname{FPdim}}({{\mathcal C}}_{ad})=p^{n-i}m''$, where $m'm''=m$. Let ${{\mathcal D}}=({{\mathcal C}}_{ad})_{pt}$. Then ${{\mathcal D}}$ is a fusion subcategory of ${{\mathcal C}}_{ad}$, as well as of ${{\mathcal C}}_{pt}$. Hence the FP dimension of ${{\mathcal D}}$ divides both ${\operatorname{FPdim}}({{\mathcal C}}_{pt})$ and ${\operatorname{FPdim}}({{\mathcal C}}_{ad})$. This implies that ${\operatorname{FPdim}}({{\mathcal D}})$ is a power of $p$.
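The final step above (${\operatorname{FPdim}}({{\mathcal D}})$ divides both ${\operatorname{FPdim}}({{\mathcal C}}_{pt})=p^im'$ and ${\operatorname{FPdim}}({{\mathcal C}}_{ad})=p^{n-i}m''$, with $m'm''=m$ square-free and coprime to $p$, hence is a power of $p$) is elementary arithmetic and can be illustrated numerically. The snippet below is a sanity check only, not part of the proof; the values of $p$, $n$, $i$, $m'$, $m''$ are hypothetical sample choices:

```python
from math import gcd

def common_divisors(x, y):
    """All positive common divisors of x and y (i.e. the divisors of gcd(x, y))."""
    g = gcd(x, y)
    return [d for d in range(1, g + 1) if g % d == 0]

# Hypothetical sample values mimicking FPdim(C_pt) = p^i m' and
# FPdim(C_ad) = p^(n-i) m'', with m', m'' square-free and coprime to p.
p, n, i = 3, 6, 2
m1, m2 = 5, 7  # stand-ins for m' and m''

dims = common_divisors(p**i * m1, p**(n - i) * m2)
# Every common divisor, hence every candidate for FPdim(D), is a power of p.
assert dims == [p**k for k in range(len(dims))]
print(dims)  # → [1, 3, 9]
```

Any common divisor of $p^im'$ and $p^{n-i}m''$ divides their greatest common divisor $p^{\min(i,n-i)}$, whose divisors are exactly the powers of $p$.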
Let $a_e^0, a_e^1, \cdots, a_e^t$ be the numbers of non-isomorphic simple objects of ${{\mathcal C}}_e={{\mathcal C}}_{ad}$ of FP dimension $1, p, \cdots, p^t$. Then we have an equation: $$\label{eq44} \begin{split} a_e^0+a_e^1p^2+\cdots+a_e^tp^{2t}=p^{n-i}m''. \end{split}$$ The fact that $n-i\geq 2$ and the equation above imply that ${\operatorname{FPdim}}({{\mathcal D}})=a_e^0$ is divisible by $p^2$. So we may write ${\operatorname{FPdim}}({{\mathcal D}})=p^j$ for some $2\leq j\leq n-2$. From ${{\mathcal D}}\subseteq{{\mathcal C}}_{ad}$ we have $({{\mathcal C}}_{ad})'\subseteq{{\mathcal D}}'$. On the other hand $({{\mathcal C}}_{ad})'={{\mathcal C}}_{pt}\supseteq{{\mathcal D}}$ by [@gelaki2008nilpotent Corollary 6.8]. Hence, ${{\mathcal D}}\subseteq {{\mathcal D}}'$, and hence ${{\mathcal D}}$ is a symmetric fusion subcategory of ${{\mathcal C}}$. If $p>2$ then ${\operatorname{FPdim}}({{\mathcal D}})$ is odd. In this case ${{\mathcal D}}\cong{\operatorname{Rep}}(G)$ is a Tannakian subcategory of ${{\mathcal C}}$. If $p=2$ then there exists a group $G$ of order $2^j$ and a central element $u\in G$ of order 2 such that ${{\mathcal D}}\cong {\operatorname{Rep}}(G,u)$ as symmetric fusion categories. It follows that ${\operatorname{Rep}}(G/\langle u\rangle)$ is a Tannakian subcategory of ${{\mathcal C}}$ with ${\operatorname{FPdim}}({\operatorname{Rep}}(G/\langle u\rangle))=2^{j-1}$. In both cases, $G$ is abelian (since ${{\mathcal D}}$ is pointed), hence we get that ${{\mathcal D}}$, and hence ${{\mathcal C}}$, has a Tannakian subcategory of FP dimension $p$. \[thm35\] The modular category ${{\mathcal C}}$ is either group-theoretical or contains a Tannakian subcategory of FP dimension $p^i$, for some $i\geq2$. We keep all notations from Lemma \[lem33\] and let ${{\mathcal D}}=({{\mathcal C}}_{ad})_{pt}$. If $p$ is odd then ${\operatorname{FPdim}}({{\mathcal D}})=p^j$ is also odd, and hence ${{\mathcal D}}$ is a Tannakian subcategory by Lemma \[lem22\]. So we are done.
In the rest of our proof, we assume that $p=2$. Suppose that ${\operatorname{FPdim}}({{\mathcal D}})\geqslant2^3$. Then ${\operatorname{Rep}}(G/\langle u\rangle)$ is a Tannakian subcategory of ${{\mathcal C}}$ with ${\operatorname{FPdim}}({\operatorname{Rep}}(G/\langle u\rangle))\geqslant2^2$, also by Lemma \[lem22\]. Suppose that ${\operatorname{FPdim}}({{\mathcal D}})=2^2$. Let $a_e^1, \cdots, a_e^t$ be the numbers of non-isomorphic simple objects of ${{\mathcal C}}_{ad}$ of FP dimension $2,2^2,\cdots, 2^t$. Then we have $$\label{eq5} \begin{split} 4+\sum_{k=1}^ta_e^k2^{2k}=2^{n-i}m''. \end{split}$$ So we get $$\label{eq6} \begin{split} a_e^1=2^{n-i-2}m''-1-2\sum_{k=2}^ta_e^k2^{2k-3}. \end{split}$$ If $i<n-2$ then $a_e^1$ is odd. In this case, $|G({{\mathcal C}}_{ad})|={\operatorname{FPdim}}({{\mathcal D}})=4$ and $|{\operatorname{Irr}}_2({{\mathcal C}}_{ad})|=a_e^1$ is odd. By Lemma \[lem21\], there exists $X\in {\operatorname{Irr}}_2({{\mathcal C}}_{ad})$ such that $G[X]=G({{\mathcal C}}_{ad})$. In other words, $h\otimes X\cong X$ for all $h\in G({{\mathcal C}}_{ad})$. If ${{\mathcal D}}$ is not Tannakian then it contains the category ${{\mathcal E}}$ of super vector spaces. Let $1\neq g\in {{\mathcal E}}$ be the unique invertible object which generates ${{\mathcal E}}$ as a symmetric category. Then $g\otimes X\ncong X$ for every simple object $X$ of ${{\mathcal C}}_{ad}$ by [@muger2000galois Lemma 5.4]. This contradicts the result obtained above. So in this case ${{\mathcal D}}$ must be Tannakian. Now we consider the case $i=n-2$ and assume that ${{\mathcal D}}$ is not Tannakian. The argument in this case was pointed out to the author by Sonia Natale. In this case, ${\operatorname{FPdim}}({{\mathcal D}})=4$ and ${\operatorname{FPdim}}({{\mathcal C}}_{ad})=4m''$. By Lemma \[lem33\], ${{\mathcal D}}$ has a Tannakian subcategory equivalent to ${\operatorname{Rep}}(\mathbb{Z}_2)$. This Tannakian subcategory is also contained in ${{\mathcal C}}_{ad}$.
So we have the de-equivariantization $({{\mathcal C}}_{ad})_{\mathbb{Z}_2}$ of ${{\mathcal C}}_{ad}$ by ${\operatorname{Rep}}(\mathbb{Z}_2)$. Since $({{\mathcal C}}_{ad})'\supseteq{{\mathcal D}}\supseteq {\operatorname{Rep}}(\mathbb{Z}_2)$, ${\operatorname{Rep}}(\mathbb{Z}_2)$ is contained in the Müger center of ${{\mathcal C}}_{ad}$. Hence the de-equivariantization $({{\mathcal C}}_{ad})_{\mathbb{Z}_2}$ is braided by [@muger2000galois Lemma 3.10]. The canonical functor $F:{{\mathcal C}}_{ad}\rightarrow({{\mathcal C}}_{ad})_{\mathbb{Z}_2}$ is a dominant braided tensor functor such that $F({{\mathcal D}})\cong{{\mathcal D}}_{\mathbb{Z}_2}$ by [@drinfeld2010braided Proposition 4.22]. Consider the de-equivariantization ${{\mathcal D}}_{\mathbb{Z}_2}$. Since ${{\mathcal D}}$ is not Tannakian by our assumption, we have that ${{\mathcal D}}_{\mathbb{Z}_2}\cong {\rm SuperVec}$ as braided fusion categories [@natale2014graphs Remark 9.1]. Applying the functor $F$ to $\mathcal{Z}_2({{\mathcal C}}_{ad})\supseteq{{\mathcal D}}$, we have $$\label{eq7} \begin{split} {\rm SuperVec}\cong {{\mathcal D}}_{\mathbb{Z}_2}\subseteq \mathcal{\mathcal{Z}}_2(({{\mathcal C}}_{ad})_{\mathbb{Z}_2}). \end{split}$$ By [@natale2012fusion Lemma 7.2], the FP dimensions of simple objects of $({{\mathcal C}}_{ad})_{\mathbb{Z}_2}$ are powers of 2. On the other hand, $({{\mathcal C}}_{ad})_{\mathbb{Z}_2}$ is braided and hence the FP dimension of every simple object of it divides ${\operatorname{FPdim}}(({{\mathcal C}}_{ad})_{\mathbb{Z}_2})=2m''$. It follows that the category $({{\mathcal C}}_{ad})_{\mathbb{Z}_2}$ only has simple objects with FP dimension 1 or 2. If $({{\mathcal C}}_{ad})_{\mathbb{Z}_2}$ has a simple object $X$ of FP dimension 2 then equation (\[eq7\]) implies that $g\otimes X\cong X$, where $g$ is the generator of ${\operatorname{SuperVec}}$. This contradicts Lemma \[mugerlem\]. Therefore, $({{\mathcal C}}_{ad})_{\mathbb{Z}_2}$ is pointed. 
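As a brief numerical aside (not part of the proof), the parity bookkeeping used above in equations (\[eq5\]) and (\[eq6\]) is easy to check by machine: for $i<n-2$ the term $2^{n-i-2}m''$ is even, so $a_e^1$ is an even number minus one minus an even number, hence odd. The loop below samples a grid of hypothetical values of $n$, $i$, $m''$ and of the higher multiplicities; none of these values come from the paper:

```python
# Parity check for equation (6): a_e^1 = 2^(n-i-2) m'' - 1 - 2*s, where
# s = sum over k >= 2 of a_e^k 2^(2k-3).  When i < n-2 the first term is
# even, so a_e^1 is odd no matter what m'' and the multiplicities are.
checked = 0
for n in range(5, 10):
    for i in range(2, n - 2):              # exactly the case i < n-2
        for m2 in (1, 3, 5, 7):            # hypothetical stand-ins for m''
            for a2 in range(3):            # hypothetical a_e^2
                for a3 in range(3):        # hypothetical a_e^3
                    s = a2 * 2**(2*2 - 3) + a3 * 2**(2*3 - 3)
                    a1 = 2**(n - i - 2) * m2 - 1 - 2 * s
                    assert a1 % 2 == 1     # a_e^1 is odd
                    checked += 1
print(checked, "cases checked")
```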
We now consider the functor $\tilde{F}:{{\mathcal C}}\rightarrow{{\mathcal C}}_{\mathbb{Z}_2}$. Again by [@drinfeld2010braided Proposition 4.22], we have $$\label{eq8} \begin{split} ({{\mathcal C}}_{\mathbb{Z}_2})_{ad}\subseteq\tilde{F}({{\mathcal C}}_{ad})=({{\mathcal C}}_{ad})_{\mathbb{Z}_2}. \end{split}$$ Hence $({{\mathcal C}}_{\mathbb{Z}_2})_{ad}$ is also pointed. It follows that ${{\mathcal C}}_{\mathbb{Z}_2}$ is nilpotent, and hence ${\operatorname{FPdim}}(X)^2$ divides ${\operatorname{FPdim}}(({{\mathcal C}}_{\mathbb{Z}_2})_{ad})$ for all $X\in {\operatorname{Irr}}({{\mathcal C}}_{\mathbb{Z}_2})$, by [@gelaki2008nilpotent Corollary 5.3]. Since ${\operatorname{FPdim}}(({{\mathcal C}}_{ad})_{\mathbb{Z}_2})=2m''$ is square-free, equation (\[eq8\]) shows that ${\operatorname{FPdim}}(({{\mathcal C}}_{\mathbb{Z}_2})_{ad})$ is also square-free. Hence ${\operatorname{FPdim}}(X)=1$ for all $X\in {\operatorname{Irr}}({{\mathcal C}}_{\mathbb{Z}_2})$. That is, ${{\mathcal C}}_{\mathbb{Z}_2}$ is pointed, hence ${{\mathcal C}}$ is group-theoretical by [@naidu2009fusion Theorem 7.2]. This completes the proof. \[cor35\] If $n\leq 5$ then ${{\mathcal C}}$ is group-theoretical. By Corollary \[cor32\] and Theorem \[thm35\], it suffices to consider the case that $n=4$ or $5$, and ${{\mathcal C}}$ has a Tannakian subcategory ${\operatorname{Rep}}(G)$ of FP dimension $p^2$. Let ${{\mathcal C}}_G$ be the de-equivariantization of ${{\mathcal C}}$ by ${\operatorname{Rep}}(G)$. Let ${{\mathcal C}}_G=\oplus_{g\in G}({{\mathcal C}}_G)_g$ be the corresponding grading of ${{\mathcal C}}_G$. This grading is faithful and $({{\mathcal C}}_G)_e$ is a modular category (see Section \[sec26\]). If $n=4$ then ${\operatorname{FPdim}}(({{\mathcal C}}_G)_e)=m$; if $n=5$ then ${\operatorname{FPdim}}(({{\mathcal C}}_G)_e)=pm$. In both cases, $({{\mathcal C}}_G)_e$ is a pointed modular category by Corollary \[cor32\]. Hence ${{\mathcal C}}_G$ is a nilpotent fusion category.
By [@gelaki2008nilpotent Corollary 5.3], ${\operatorname{FPdim}}(X)^2$ divides ${\operatorname{FPdim}}(({{\mathcal C}}_G)_e)$ for every simple object $X$ of ${{\mathcal C}}_G$. Since in both cases ${\operatorname{FPdim}}(({{\mathcal C}}_G)_e)$ is square-free, ${\operatorname{FPdim}}(X)=1$ for every simple object $X$ of ${{\mathcal C}}_G$. This shows that ${{\mathcal C}}_G$ is a pointed fusion category. Therefore, the modular category ${{\mathcal C}}$, being an equivariantization of a pointed fusion category, is group-theoretical by [@naidu2009fusion Theorem 7.2]. General properties of integral ASF modular categories {#sec5} ===================================================== Let ${{\mathcal D}}$ be a braided fusion category. A Tannakian subcategory ${{\mathcal E}}\subset {{\mathcal D}}$ is maximal if it is not contained in any other Tannakian subcategory of ${{\mathcal D}}$. In this section, we still assume that ${{\mathcal C}}$ is an ASF modular category with FP dimension $p^nm$ as in Section \[sec3\]. \[thm34\] Let ${{\mathcal E}}\cong{\operatorname{Rep}}(G)$ be a proper maximal Tannakian subcategory of ${{\mathcal C}}$, and let ${{\mathcal E}}'$ be its Müger centralizer in ${{\mathcal C}}$. Then the de-equivariantization $({{\mathcal E}}')_G$ of ${{\mathcal E}}'$ by ${\operatorname{Rep}}(G)$ is pointed. In particular, ${{\mathcal E}}'$ is group-theoretical. The existence of ${{\mathcal E}}$ is guaranteed by Lemma \[lem33\] and the modularity of ${{\mathcal C}}$. Let ${{\mathcal D}}=({{\mathcal E}}')_G$. The Tannakian subcategory ${{\mathcal E}}$ is the Müger center of ${{\mathcal E}}'$. By [@etingof2011weakly Remark 2.3], the de-equivariantization ${{\mathcal D}}$ is non-degenerate. Also, the FP dimension of ${{\mathcal E}}$ is a power of $p$ by Lemma \[newadded\]. So ${{\mathcal D}}$ is an integral ASF modular category.
Hence, if ${{\mathcal D}}$ is not pointed then ${{\mathcal D}}_{pt}\cap({{\mathcal D}}_{pt})'={{\mathcal D}}_{pt}\cap{{\mathcal D}}_{ad}=({{\mathcal D}}_{ad})_{pt}$ has FP dimension $p^j$ for some $2\leq j\leq n-2$, by Lemma \[lem33\]. On the other hand, the fusion category ${{\mathcal D}}$ is called the core of ${{\mathcal C}}$ in [@drinfeld2010braided Section 5.4]. It is a weakly anisotropic braided fusion category by [@drinfeld2010braided Corollary 5.19]. Recall that a braided fusion category is called weakly anisotropic if it has no nontrivial Tannakian subcategories which are stable under all braided autoequivalences of the fusion category. By [@drinfeld2010braided Corollary 5.29], ${{\mathcal D}}_{pt}\cap({{\mathcal D}}_{pt})'$ is either trivial, or equivalent to the category ${\rm SuperVec}$ of super vector spaces. Therefore, ${{\mathcal D}}$ must be pointed, otherwise we would get a contradiction. The last statement follows from [@naidu2009fusion Theorem 7.2], which says that a braided fusion category is group-theoretical if and only if it is an equivariantization of a pointed fusion category. Now we are ready to describe the structure of an integral ASF modular category by equivariantizations. \[cor36\] The modular category ${{\mathcal C}}$ is an equivariantization of a nilpotent fusion category of nilpotency class $2$. Let ${{\mathcal E}}\cong{\operatorname{Rep}}(G)$ be a proper maximal Tannakian subcategory of ${{\mathcal C}}$. Then we can form the de-equivariantization ${{\mathcal C}}_G$ of ${{\mathcal C}}$ by ${{\mathcal E}}$. It is known that ${{\mathcal C}}_G$ has a faithful $G$-grading and the trivial component $({{\mathcal C}}_G)_e$ is non-degenerate. By [@drinfeld2010braided Proposition 4.56], $({{\mathcal C}}_G)_e=({{\mathcal E}}')_G$. Hence, $({{\mathcal C}}_G)_e$ is pointed by Theorem \[thm34\], and hence ${{\mathcal C}}$ is an equivariantization of a nilpotent fusion category of nilpotency class $2$.
Restricting to the case where $m=q$ is a prime number allows us to obtain one of the main results of this paper. \[thm37\] Let ${{\mathcal C}}$ be an integral modular category of FP dimension $p^nq$, where $q<p$ are prime numbers. Then ${{\mathcal C}}$ is group-theoretical. It suffices to prove that ${\operatorname{FPdim}}({{\mathcal C}}_{pt})$ can not be a power of $p$. Indeed, if ${\operatorname{FPdim}}({{\mathcal C}}_{pt})$ is not a power of $p$ then ${\operatorname{FPdim}}({{\mathcal C}}_{pt})=p^iq$ for some $2\leq i\leq n-2$ by Lemma \[lem30\] and Lemma \[lem31\]. By equalities (\[cptad\]) and (\[fpdim\]), ${\operatorname{FPdim}}({{\mathcal C}}_{ad})={\operatorname{FPdim}}(({{\mathcal C}}_{pt})')=p^{n-i}$. This means that ${{\mathcal C}}_{ad}$ is nilpotent [@etingof2005fusion Theorem 8.28], and so is ${{\mathcal C}}$. Hence ${{\mathcal C}}$ is a group-theoretical fusion category by [@drinfeld2007group Corollary 6.2]. Suppose on the contrary that ${\operatorname{FPdim}}({{\mathcal C}}_{pt})=p^i$ for some $2\leq i\leq n-2$. Let ${{\mathcal E}}={\operatorname{Rep}}(G)$ be a proper maximal Tannakian subcategory of ${{\mathcal C}}$. By Lemma \[newadded\], ${\operatorname{FPdim}}({{\mathcal E}})=p^j$ for some $j\leq i\leq n-2$. The Müger centralizer ${{\mathcal E}}'$ of ${{\mathcal E}}$ in ${{\mathcal C}}$ is group-theoretical by Theorem \[thm34\]. Under our assumption $q<p$, this fact implies that ${{\mathcal E}}'$ is nilpotent by [@DoTu2015 Proposition 4.11]. The main result of [@drinfeld2007group] says that a braided nilpotent fusion category has a unique decomposition as a tensor product of braided fusion categories whose FP dimensions are distinct prime powers. Again by equality (\[fpdim\]), we know ${\operatorname{FPdim}}({{\mathcal E}}')=p^{n-j}q$. So we have $${{\mathcal E}}'\cong {{\mathcal A}}_q\boxtimes{{\mathcal A}}_{p^{n-j}},$$ where ${{\mathcal A}}_t$ is a fusion category with FP dimension $t$. 
By [@etingof2005fusion Corollary 8.30], ${{\mathcal A}}_q$ is a pointed fusion category, hence ${{\mathcal E}}'$ and also ${{\mathcal C}}$ contain a pointed fusion subcategory of FP dimension $q$. This contradicts our assumption that ${\operatorname{FPdim}}({{\mathcal C}}_{pt})$ is a power of $p$. Low-dimensional integral modular categories {#sec6} =========================================== Let ${{\mathcal C}}$ be a fusion category, and let $1=a_0<a_1<\cdots<a_m$ be the distinct FP dimensions of simple objects of ${{\mathcal C}}$. If $n_i$ is the number of non-isomorphic simple objects of FP dimension $a_i$ then we say ${{\mathcal C}}$ has category type $(1, n_0; a_1, n_1; \cdots; a_m, n_m)$. We summarize some results in the literature to establish a criterion for an array $(1, n_0; a_1, n_1; \cdots; a_m, n_m)$ to be a category type of an odd-dimensional integral modular category. \[lem41\] Let $(1, n_0; a_1, n_1; \cdots; a_m, n_m)$ be a category type of an integral modular category. Suppose that ${{\mathcal C}}$ has odd FP dimension. Then \(1) $a_i$ is odd and $n_i$ is even, for all $1\leqslant i\leqslant m$; \(2) $n_0$ divides ${\operatorname{FPdim}}({{\mathcal C}})$ and $n_ia_i^2$, for all $1\leqslant i\leqslant m$; \(3) $a_i^2$ divides ${\operatorname{FPdim}}({{\mathcal C}})$, for all $1\leqslant i\leqslant m$; \(4) $n_0a_m^2\leqslant {\operatorname{FPdim}}({{\mathcal C}})$. \(1) If there exists $i$ such that $a_i$ is even, then ${\operatorname{FPdim}}({{\mathcal C}})$ is also even by [@dong2014existence Lemma 5.3]. This is a contradiction. If there exists $i$ such that $n_i$ is odd then there must exist $X\in {\operatorname{Irr}}_{a_i}({{\mathcal C}})$ such that $X\cong X^\ast$. In this case ${\operatorname{FPdim}}({{\mathcal C}})$ is even by [@dong2014existence Lemma 5.2]. This is also a contradiction. \(2) It follows from [@dong2012frobenius Lemma 2.2]. \(3) It follows from [@etingof1998some Lemma 1.2]. 
\(4) Counting the FP dimension of every component of the universal grading of ${{\mathcal C}}$, we get $n_0a_m^2\leqslant {\operatorname{FPdim}}({{\mathcal C}})$ (note that $n_0=|\mathcal{U}({{\mathcal C}})|$). Let ${{\mathcal C}}$ be a fusion category and let $G$ be a group acting on ${{\mathcal C}}$ by tensor autoequivalences. Recall from Section \[sec23\] that we have the fusion category ${{\mathcal C}}^G$ of $G$-equivariant objects of ${{\mathcal C}}$. Let $F:{{\mathcal C}}^G\to {{\mathcal C}}$ be the forgetful functor. It is a surjective tensor functor. That is, for any object $Y$ of ${{\mathcal C}}$ there is an object $X$ in ${{\mathcal C}}^G$ such that $Y$ is a subobject of $F(X)$. Lemma \[lem51\] below is [@natale2014graphs Lemma 7.2]. \[lem51\] Let ${{\mathcal C}}^G$ be the equivariantization of ${{\mathcal C}}$ under the action of $G$. Let $X$ be a simple object of ${{\mathcal C}}^G$. If ${\operatorname{FPdim}}(X)$ is relatively prime to the order of $G$ then $F(X)$ is a simple object of ${{\mathcal C}}$. \[thm52\] Let ${{\mathcal C}}$ be a modular category such that ${\operatorname{FPdim}}({{\mathcal C}})$ is odd and ${\operatorname{FPdim}}({{\mathcal C}})<1125$. Then ${{\mathcal C}}$ is group-theoretical. First, the assumption that ${\operatorname{FPdim}}({{\mathcal C}})$ is odd implies that ${{\mathcal C}}$ is integral [@gelaki2008nilpotent Corollary 3.11]. Second, the assumption that ${\operatorname{FPdim}}({{\mathcal C}})<1125$ implies that ${{\mathcal C}}$ is an integral ASF modular category except in the case when ${\operatorname{FPdim}}({{\mathcal C}})=675$. Suppose that ${{\mathcal C}}$ is an integral ASF modular category and the FP dimension $p^nm$ is odd and less than $1125$, where $p$ is a prime number, $m$ is a square-free natural number and ${\rm gcd}(p,m)=1$. If $m=1$ then ${\operatorname{FPdim}}({{\mathcal C}})=p^n$. In this case, ${{\mathcal C}}$ is group-theoretical by [@drinfeld2007group Corollary 6.8]. If $m\neq 1$ then $n\leq 5$.
In this case, ${{\mathcal C}}$ is group-theoretical by Corollary \[cor32\] and Corollary \[cor35\]. Therefore, in the rest of the proof, we only need to consider the case when ${\operatorname{FPdim}}({{\mathcal C}})=675$. By Lemma \[lem41\], the modular category ${{\mathcal C}}$ has the following possible types: $$\begin{aligned} &(1,45;3,70),(1,27;3,72),(1,9;3,74),\\ &(1,3;3,8;5,6;15,2), (1,675).\end{aligned}$$ Suppose that ${{\mathcal C}}$ has category type $(1,45;3,70)$. Then the universal grading has $45$ components and every component has FP dimension $15$. Hence, any component contains at most $1$ simple object of FP dimension $3$, and so there should be at least $270$ non-isomorphic invertible simple objects. This is impossible since ${\operatorname{FPdim}}({{\mathcal C}}_{pt})=45$. The same argument shows that ${{\mathcal C}}$ cannot have category types $(1,27;3,72)$ or $(1,9;3,74)$. Suppose that ${{\mathcal C}}$ is of type $(1,3;3,8;5,6;15,2)$. It follows that every component ${{\mathcal C}}_g$ has FP dimension $225$ for all $g\in\mathcal{U}({{\mathcal C}})$, and the only possible category type of the trivial component ${{\mathcal C}}_{ad}$ is $(1,3;3,8;5,6)$. By [@dong2014existence Theorem 4.2], ${{\mathcal C}}_{ad}$ has a non-trivial Tannakian subcategory ${{\mathcal D}}={\operatorname{Rep}}(G)$. Counting the dimensions of simple objects of ${{\mathcal C}}_{ad}$, we know that ${{\mathcal C}}_{ad}$ cannot have a fusion subcategory of FP dimension $15$ or $45$. Also, ${\operatorname{FPdim}}({{\mathcal D}})\ne 75$ by Lemma \[newadded\]. So the only possibility for ${\operatorname{FPdim}}({{\mathcal D}})$ is $3$. If ${\operatorname{FPdim}}({{\mathcal D}})=3$ then ${{\mathcal D}}={{\mathcal C}}_{pt}$. By [@gelaki2008nilpotent Corollary 6.8], $({{\mathcal C}}_{ad})'={{\mathcal D}}$. On the other hand, ${{\mathcal D}}$ is a fusion subcategory of ${{\mathcal C}}_{ad}$. Hence, ${{\mathcal D}}$ is the Müger center of ${{\mathcal C}}_{ad}$.
By [@etingof2011weakly Remark 2.3], the de-equivariantization $({{\mathcal C}}_{ad})_G$ of ${{\mathcal C}}_{ad}$ by $G$ is a modular category. Since $({{\mathcal C}}_{ad})_G$ has FP dimension $75$, Corollary \[cor32\] shows that $({{\mathcal C}}_{ad})_G$ is pointed. But Lemma \[lem51\] shows that $({{\mathcal C}}_{ad})_G$ must contain simple objects of FP dimension $5$. This is a contradiction. Therefore, ${{\mathcal C}}$ must be pointed and we are done. \[rem52\] (1) There exist non-group-theoretical integral modular categories whose FP dimensions are even. Examples are given in [@bruillard2013classification]. In that paper, the authors studied non-group-theoretical $\mathbb{Z}_3$-graded $36$-dimensional modular categories and classified them, up to equivalence of fusion categories. (2) Natale studied low-dimensional modular categories in [@natale2013weakly]. She proved that if the FP dimension of a modular category is odd and less than $33075$ then this modular category is solvable. Acknowledgements ================ The author would like to thank Sonia Natale for her help with the proof of Theorem \[thm35\]. The research of the author is supported by the Fundamental Research Funds for the Central Universities (KYZ201564), the Natural Science Foundation of China (11471282, 11201231) and the Qing Lan Project. B. Bakalov, A. Kirillov Jr., Lectures on Tensor Categories and Modular Functors, University Lecture Series, vol. 21, Amer. Math. Soc., 2001. P. Bruillard, C. Galindo, S.-M. Hong, Y. Kashina, et al., Classification of integral modular categories of [F]{}robenius-[P]{}erron dimension $pq^4$ and $p^2q^2$, Canad. Math. Bull. 57 (4) (2014) 721–734. P. Deligne, Cat[é]{}gories [T]{}annakiennes, in: The Grothendieck Festschrift, Springer, 1990, pp. 111–195. J. Dong, L. Dai, Existence of [T]{}annakian subcategories and its applications, Commun. Algebra 44 (4) (2016) 1767–1782. J. Dong, S. Natale, L.
Vendramin, [F]{}robenius property for fusion categories of small integral dimension, J. Algebra Appl. 14 (2) (2015) 1550011, 17 pages. J. Dong, H. Tucker, Integral modular categories of [F]{}robenius-[P]{}erron dimension $pq^n$, Algebr. Represent. Theory 19 (1) (2016) 33–46. J. Dong, S. Wang, On semisimple [H]{}opf algebras of dimension $2q^3$, J. Algebra 375 (2013) 97–108. V. Drinfeld, S. Gelaki, D. Nikshych, V. Ostrik, Group-theoretical properties of nilpotent modular categories, preprint arXiv:0704.0195. V. Drinfeld, S. Gelaki, D. Nikshych, V. Ostrik, On braided fusion categories [I]{}, Selecta Math., New Ser. 16 (1) (2010) 1–119. P. Etingof, S. Gelaki, Some properties of finite-dimensional semisimple [H]{}opf algebras, Math. Res. Lett. 5 (2) (1998) 191–197. P. Etingof, D. Nikshych, V. Ostrik, On fusion categories, Ann. Math. 162 (2) (2005) 581–642. P. Etingof, D. Nikshych, V. Ostrik, Weakly group-theoretical and solvable fusion categories, Adv. Math. 226 (1) (2011) 176–205. D. E. Evans, Y. Kawahigashi, Quantum symmetries on operator algebras, Oxford Mathematical Monographs, Oxford Science Publications, 1998. S. Gelaki, D. Nikshych, Nilpotent fusion categories, Adv. Math. 217 (3) (2008) 1053–1071. Y.-Z. Huang, Vertex operator algebras, the Verlinde conjecture, and modular tensor categories, Proc. Natl. Acad. Sci. USA 102 (15) (2005) 5352–5356. C. Kassel, Quantum groups, [GTM]{} 155 (1995). G. Moore, N. Seiberg, Classical and quantum conformal field theory, Comm. Math. Phys. 123 (2) (1989) 177–254. M. M[ü]{}ger, Galois theory for braided tensor categories and the modular closure, Adv. Math. 150 (2) (2000) 151–201. M. M[ü]{}ger, On the structure of modular categories, Proc. London Math. Soc. 87 (02) (2003) 291–308. D. Naidu, D. Nikshych, S. Witherspoon, Fusion subcategories of representation categories of twisted quantum doubles of finite groups, Internat. Math. Res. Notices 2009 (22) (2009) 4183–4219. D. Naidu, E. C.
Rowell, A finiteness property for braided fusion categories, Algebr. Represent. Theory 14 (5) (2011) 837–855. S. Natale, On weakly group-theoretical non-degenerate braided fusion categories, J. Noncommut. Geom. 8 (4) (2014) 1043–1060. S. Natale, J. Y. Plavnik, On fusion categories with few irreducible degrees, Algebra Number Theory 6 (6) (2012) 1171–1197. S. Natale, E. P. Rodríguez, Graphs attached to simple [F]{}robenius-[P]{}erron dimensions of an integral fusion category, Monatsh. Math., doi:10.1007/s00605-015-0734-7. V. Turaev, Quantum Invariants of Knots and 3-Manifolds, De Gruyter Studies in Mathematics, Walter de Gruyter, 1994. Z. Wang, Topological Quantum Computation, CBMS Regional Conference Series in Mathematics, vol. 112, Amer. Math. Society, 2010.
--- abstract: 'The idea that black holes (BHs) result in highly excited states representing both the “hydrogen atom” and the “quasi-thermal emission” in quantum gravity is today an intuitive but general conviction. In this paper it will be shown that such an intuitive picture is more than a picture. In fact, we will discuss a model of quantum BH somewhat similar to the historical semi-classical model of the structure of a hydrogen atom introduced by Bohr in 1913. The model is completely consistent with existing results in the literature, starting from the celebrated result of Bekenstein on the area quantization.' author: - '**Christian Corda**' title: '**Bohr-like black holes**' --- Dipartimento di Fisica e Chimica, Istituto Universitario di Ricerca Scientifica Santa Rita, Centro di Scienze Naturali, Via di Galceti, 74, 59100 Prato Institute for Theoretical Physics and Advanced Mathematics (IFM) Einstein-Galilei, Via Santa Gonda 14, 59100 Prato, Italy International Institute for Applicable Mathematics & Information Sciences (IIAMIS), B.M. Birla Science Centre, Adarsh Nagar, Hyderabad - 500 463, India *E-mail address:* Researchers in quantum gravity [20] intuitively think that, in some respects, BHs are the fundamental bricks of quantum gravity in the same way that atoms are the fundamental bricks of quantum mechanics. This analogy suggests that the BH mass should have a discrete spectrum. In this extended abstract, we show that such an intuitive picture is more than a picture. Starting from the natural correspondence between Hawking radiation [1] and BH quasi-normal modes (QNMs) [24], we show that QNMs can really be interpreted in terms of BH quantum levels by discussing a BH model somewhat similar to the semi-classical Bohr model of the structure of a hydrogen atom [5, 6].
One considers Dirac delta perturbations [24, 7] representing subsequent absorptions of particles having negative energies which are associated to emissions of Hawking quanta in the mechanism of particle pair creation. BH responses to perturbations are QNMs [24, 8-12, 21], which are frequencies of radial spin-$j$ perturbations obeying a time-independent Schrödinger-like equation [24, 12]. They are the BH modes of energy dissipation, whose frequencies are allowed to be complex [24, 12]. For large values of the principal quantum number $n$, where $n=1,2,...$, QNMs become independent of both the spin and the angular momentum quantum numbers [24, 8, 12, 13, 14], in perfect agreement with *Bohr’s Correspondence Principle* [15], which states that transition frequencies at large quantum numbers should equal classical oscillation frequencies. In other words, Bohr’s Correspondence Principle enables an accurate semi-classical analysis for large values of the principal quantum number $n$, i.e., for excited BHs. By using that principle, Hod has shown that QNMs release information about the area quantization as QNMs are associated to absorption of particles [13, 44]. Hod’s work was refined by Maggiore [8], who solved some important problems. On the other hand, as QNMs are *countable* frequencies, ideas on the *continuous* character of Hawking radiation did not agree with attempts to interpret QNMs in terms of emitted quanta, preventing one from associating QNMs to Hawking radiation [12]. Recently, we and our collaborators [24, 8-11, 21] observed that the non-thermal spectrum of Parikh and Wilczek [16] also implies the countable character of subsequent emissions of Hawking quanta. This issue enables a natural correspondence between QNMs and Hawking radiation, permitting one to interpret QNMs also in terms of emitted energies [24, 8-11].
In fact, Dirac delta perturbations due to discrete subsequent absorptions of particles with negative energies, which are associated with emissions of Hawking quanta in the mechanism of particle pair creation by quantum fluctuations, generate BH QNMs [\[]{}24, 8-11[\]]{}. On the other hand, the correspondence between emitted radiation and proper oscillation of the emitting body is a fundamental behavior of every radiation process in science. Based on such a natural correspondence between Hawking radiation and BH QNMs, one can consider QNMs in terms of quantum levels also for emitted energies [\[]{}24, 8-11[\]]{}. For large values of the principal quantum number $n,$ i.e., for excited BHs, and independently of the angular momentum quantum number, the QNM expression for the Schwarzschild BH that takes into account the non-strictly thermal behavior of the radiation spectrum is obtained as [\[]{}24[\]]{} $$\omega_{n}=a+ib+\frac{in}{4M-2|\omega_{n}|}\backsimeq\frac{in}{4M-2|\omega_{n}|},\label{eq: quasinormal modes corrected}$$ where $a$ and $b$ are real numbers with $a=\frac{\ln3}{4\pi(2M-|\omega_{n}|)},\; b=\frac{1}{4(2M-|\omega_{n}|)}$ for $j=0,2$ (scalar and gravitational perturbations), $a=0,\; b=0$ for $j=1$ (vector perturbations) and $a=0,\; b=\frac{1}{4(2M-|\omega_{n}|)}$ for half-integer values of $j$. On the other hand, as $a,b\ll|\frac{in}{4M-2|\omega_{n}|}|$, a fundamental consequence is that the quantum of area obtained from the asymptotic values of $|\omega_{n}|$ is an intrinsic property of Schwarzschild BHs, because for large $n$ the leading asymptotic behavior of $|\omega_{n}|$ is given by the leading term in the imaginary part of the complex frequencies, and it does not depend on the spin content of the perturbation [\[]{}24, 8[\]]{}. An intuitive derivation of eq. (\[eq: quasinormal modes corrected\]) can be found in [\[]{}3, 4[\]]{}. We *rigorously* derived such an equation in the Appendix of [\[]{}2[\]]{}.
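Explicitly (a short algebraic aside, not spelled out in the original references), neglecting $a$ and $b$ and equating $|\omega_{n}|$ to the modulus of the leading term, one finds $$|\omega_{n}|\simeq\frac{n}{4M-2|\omega_{n}|}\;\Longrightarrow\;2|\omega_{n}|^{2}-4M|\omega_{n}|+n=0\;\Longrightarrow\;|\omega_{n}|=M\pm\sqrt{M^{2}-\frac{n}{2}},$$ of which only the root with the minus sign is physical, since the emitted energy cannot exceed the BH mass $M$.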
If one considers the leading asymptotic behavior, the physical solution for the absolute values of the frequencies (\[eq: quasinormal modes corrected\]) is [\[]{}24[\]]{} $$E_{n}\equiv|\omega_{n}|=M-\sqrt{M^{2}-\frac{n}{2}}.\label{eq: radice fisica}$$ $E_{n}\:$ is interpreted as the total energy emitted by the BH at that time, i.e. when the BH is excited at a level $n$ [\[]{}24[\]]{}. Considering an emission from the ground state (i.e. a BH which is not excited) to a state with large $n=n_{1}$ and using eq. (\[eq: radice fisica\]), the BH mass changes from $M\:$ to [\[]{}24[\]]{} $$M_{n_{1}}\equiv M-E_{n_{1}}=\sqrt{M^{2}-\frac{n_{1}}{2}}.\label{eq: me-1}$$ In the transition from the state with $n=n_{1}$ to a state with $n=n_{2}$ where $n_{2}>n_{1}$ the BH mass changes again from $M_{n_{1}}\:$ to $$\begin{array}{c} M_{n_{2}}\equiv M_{n_{1}}-\Delta E_{n_{1}\rightarrow n_{2}}=M-E_{n_{2}}\\ =\sqrt{M^{2}-\frac{n_{2}}{2}}, \end{array}\label{eq: me}$$ where $$\Delta E_{n_{1}\rightarrow n_{2}}\equiv E_{n_{2}}-E_{n_{1}}=M_{n_{1}}-M_{n_{2}}=\sqrt{M^{2}-\frac{n_{1}}{2}}-\sqrt{M^{2}-\frac{n_{2}}{2}},\label{eq: jump}$$ is the jump between the two levels due to the emission of a particle having frequency $\Delta E_{n_{1}\rightarrow n_{2}}$. Thus, in our BH model [\[]{}2[\]]{}, during a quantum jump a discrete amount of energy is radiated and, for large values of the principal quantum number $n,$ the analysis becomes independent of the other quantum numbers. In a certain sense, QNMs represent the electron which jumps from one level to another, and the absolute values of the QNM frequencies represent the energy shells [\[]{}2[\]]{}. In the Bohr model [\[]{}5, 6[\]]{} electrons can only gain and lose energy by jumping from one allowed energy shell to another, absorbing or emitting radiation with an energy equal to the difference between the levels, according to the Planck relation (in standard units) $E=hf$, where $\: h\:$ is the Planck constant and $f\:$ the transition frequency.
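As a quick numerical illustration (a sketch with an arbitrary example mass in Planck units; not part of the original derivation), one can check that eq. (\[eq: radice fisica\]) indeed solves the leading part of the implicit relation for $|\omega_{n}|$, and that the jump of eq. (\[eq: jump\]) equals the corresponding mass loss:

```python
import math

M = 10.0  # example BH mass in Planck units (arbitrary illustrative value)

def E(n):
    # total emitted energy with the BH excited at level n, eq. (radice fisica)
    return M - math.sqrt(M**2 - n / 2)

# E(n) solves the leading part of the implicit relation |w_n| = n/(4M - 2|w_n|)
n = 40
assert abs(E(n) - n / (4 * M - 2 * E(n))) < 1e-12

def jump(n1, n2):
    # energy of the quantum emitted in the transition n1 -> n2, eq. (jump)
    return E(n2) - E(n1)

# the jump equals the mass lost between the two excited states, M_{n1} - M_{n2}
M_n1, M_n2 = math.sqrt(M**2 - 30 / 2), math.sqrt(M**2 - 40 / 2)
assert abs(jump(30, 40) - (M_n1 - M_n2)) < 1e-12
```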
In our BH model [\[]{}2[\]]{}, QNMs can only gain and lose energy by jumping from one allowed energy shell to another, absorbing or emitting radiation (emitted radiation is given by Hawking quanta) with an energy equal to the difference between the levels according to eq. (\[eq: jump\]). The similarity is completed if one notes that the interpretation of eq. (\[eq: radice fisica\]) is of a particle, the “electron”, quantized on a circle of length [\[]{}3[\]]{} $$L=\frac{1}{T_{E}(E_{n})}=4\pi\left(M+\sqrt{M^{2}-\frac{n}{2}}\right),\label{eq: lunghezza cerchio}$$ which is the analogue of the electron travelling in circular orbits around the hydrogen nucleus, similar in structure to the solar system, in the Bohr model [\[]{}5, 6[\]]{}. On the other hand, the Bohr model is an approximation of the hydrogen atom with respect to the valence-shell model of full quantum mechanics. In the same way, our BH model should be an approximation with respect to the definitive, but at present unknown, BH model arising from a full quantum gravity theory. Let us discuss *the area quantization*. Setting $n_{1}=n-1$, $n_{2}=n$ in eq. (\[eq: jump\]) one gets the energy emitted in a jump between two neighboring levels [\[]{}2, 3, 4[\]]{} $$\Delta E_{n-1\rightarrow n}=\sqrt{M^{2}-\frac{n-1}{2}}-\sqrt{M^{2}-\frac{n}{2}}.\label{eq: variazione}$$ An enlightening analysis that we rigorously developed in [\[]{}2[\]]{} shows that eq. (\[eq: variazione\]) leads to the area quantum $$|\triangle A_{n}|=|\triangle A_{n-1}|=8\pi,\label{eq: 8 pi planck}$$ which is exactly the famous result of Bekenstein on the area quantization [\[]{}17[\]]{}, and this *cannot* be a coincidence.
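The exactness of this result is easy to verify numerically (a minimal sketch in Planck units with arbitrary illustrative masses): since the horizon area is $A_{n}=16\pi M_{n}^{2}=16\pi\left(M^{2}-\frac{n}{2}\right)$, consecutive levels always differ by $8\pi$, independently of $n$ and of $M$:

```python
import math

def area(M, n):
    # horizon area of a Schwarzschild BH excited at level n: A_n = 16*pi*M_n^2
    return 16 * math.pi * (M**2 - n / 2)

# the area quantum is exactly 8*pi, for any mass and any level
for M in (5.0, 10.0, 100.0):
    for n in (1, 17, 40):
        assert abs(abs(area(M, n - 1) - area(M, n)) - 8 * math.pi) < 1e-9

# consequently the Bekenstein-Hawking entropy S = A/4 drops by exactly 2*pi
# at each emission between neighboring levels
assert abs((area(10.0, 9) - area(10.0, 10)) / 4 - 2 * math.pi) < 1e-9
```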
Other fundamental results are: i) the famous formula of the Bekenstein-Hawking entropy [\[]{}1, 18, 19[\]]{} reads [\[]{}2[\]]{} $$\left(S_{BH}\right)_{n-1}\equiv\frac{A_{n-1}}{4}=8\pi N_{n-1}M_{n-1}\cdot\Delta E_{n-1\rightarrow n}=4\pi\left(M^{2}-\frac{n-1}{2}\right)\label{eq: Bekenstein-Hawking n-1}$$ before the emission and $$\left(S_{BH}\right)_{n}\equiv\frac{A_{n}}{4}=8\pi N_{n}M_{n}\cdot\Delta E_{n-1\rightarrow n}=4\pi\left(M^{2}-\frac{n}{2}\right),\label{eq: Bekenstein-Hawking n}$$ after the emission, respectively; ii) the total BH entropy becomes [\[]{}2[\]]{} $$\begin{array}{c} \left(S_{total}\right)_{n-1}=4\pi\left(M^{2}-\frac{n-1}{2}\right)\\ \\ -\ln\left[4\pi\left(M^{2}-\frac{n-1}{2}\right)\right]+\frac{3}{32\pi\left(M^{2}-\frac{n-1}{2}\right)} \end{array}\label{eq: entropia n-1}$$ before the emission, and $$\begin{array}{c} \left(S_{total}\right)_{n}=4\pi\left(M^{2}-\frac{n}{2}\right)\\ \\ -\ln\left[4\pi\left(M^{2}-\frac{n}{2}\right)\right]+\frac{3}{32\pi\left(M^{2}-\frac{n}{2}\right)} \end{array}\label{eq: entropia n}$$ after the emission, respectively. Thus, both the Bekenstein-Hawking entropy and the total BH entropy are functions of the BH excited state $n.$ We stress that our results are in perfect agreement with existing results in the literature, see [\[]{}2-4[\]]{} for details. **Concluding remarks** We have shown that the intuitive but widespread conviction that BHs in highly excited states represent both the “hydrogen atom” and the “quasi-thermal emission” in quantum gravity is more than a picture, as we have indeed discussed a quantum BH model somewhat similar to the historical semi-classical model of the structure of a hydrogen atom introduced by Bohr in 1913. This Bohr-like model of BHs is totally consistent with existing results in the literature, starting from the famous result of Bekenstein on the area quantization.
The issue that semi-classical BHs are the analogue of the Bohr model for the hydrogen atom is also an intriguing starting point for future work on a new approach to quantum gravity based on the “electron” represented by QNMs, i.e. for the potential construction of a “QNMs quantum gravity”. #### Acknowledgements {#acknowledgements .unnumbered} It is a pleasure to thank Prof. T. Simos for inviting me to deliver a Plenary Lecture at the 12th International Conference of Numerical Analysis and Applied Mathematics. This Proceeding paper is indeed an extended abstract of such a Plenary Lecture. I thank the referees for useful comments and advice that helped improve this paper. ### References {#references .unnumbered} [\[]{}1[\]]{} S. W. Hawking, Commun. Math. Phys. 43, 199 (1975). [\[]{}2[\]]{} C. Corda, Eur. Phys. J. C 73, 2665 (2013). [\[]{}3[\]]{} C. Corda, Int. Journ. Mod. Phys. D 21, 1242023 (2012). [\[]{}4[\]]{} C. Corda, J. High En. Phys. 1108, 101 (2011). [\[]{}5[\]]{} N. Bohr, Philos. Mag. 26, 1 (1913). [\[]{}6[\]]{} N. Bohr, Philos. Mag. 26, 476 (1913). [\[]{}7[\]]{} C. Corda, Ann. Phys. 337, 49 (2013). [\[]{}8[\]]{} M. Maggiore, Phys. Rev. Lett. 100, 141301 (2008). [\[]{}9[\]]{} C. Corda, Electr. Jour. Theor. Phys. 11, 30 27 (2014). [\[]{}10[\]]{} C. Corda, S. H. Hendi, R. Katebi, N. O. Schmidt, JHEP 06, 008 (2013). [\[]{}11[\]]{} C. Corda, S. H. Hendi, R. Katebi, N. O. Schmidt, Adv. High En. Phys. 527874 (2014). [\[]{}12[\]]{} L. Motl, Adv. Theor. Math. Phys. 6, 1135 (2003). [\[]{}13[\]]{} S. Hod, Gen. Rel. Grav. 31, 1639 (1999). [\[]{}14[\]]{} S. Hod, Phys. Rev. Lett. 81, 4293 (1998). [\[]{}15[\]]{} N. Bohr, Zeits. Phys. 2, 423 (1920). [\[]{}16[\]]{} M. K. Parikh and F. Wilczek, Phys. Rev. Lett. 85, 5042 (2000). [\[]{}17[\]]{} J. D. Bekenstein, Lett. Nuovo Cim. 11, 467 (1974). [\[]{}18[\]]{} J. D. Bekenstein, Nuovo Cim. Lett. 4, 737 (1972). [\[]{}19[\]]{} J. D. Bekenstein, Phys. Rev. D 7, 2333 (1973). [\[]{}20[\]]{} C. Corda, New Adv.
Phys., 7, 1, 67 (2013). [\[]{}21[\]]{} C. Corda, S. H. Hendi, R. Katebi, N. O. Schmidt, Adv. High En. Phys. 530547 (2014).
--- author: - | T.M.O. Franzen$^{1}$, N. Seymour$^{1}$, S.V. White$^{1}$, Tara Murphy$^{2,3}$, E. M. Sadler$^{2,3}$, J. R. Callingham$^{2,3,4}$, R. W. Hunstead$^{2}$, J. Hughes$^{2}$, J. V. Wall$^{8}$, M. E. Bell$^{3,4}$, K.S. Dwarakanath$^{5}$, B-Q. For$^{6}$, B.M. Gaensler$^{2,3,7}$, P. J. Hancock$^{1,3}$, L. Hindson$^{9}$, N. Hurley-Walker$^{1}$, M. Johnston-Hollitt$^{9}$, A. D. Kapińska$^{3,6}$, E. Lenc$^{2,3}$, B. McKinley$^{10}$, J. Morgan$^{1}$, A. R. Offringa$^{11}$, P. Procopio$^{10}$, L. Staveley-Smith$^{3,6}$, R. B. Wayth$^{1,3}$, C. Wu$^{6}$, Q. Zheng$^{9}$\ $^{1}$ICRAR Curtin University, Australia; $^{2}$University of Sydney, Australia; $^{3}$ARC Centre of Excellence For All-Sky Astrophysics (CAASTRO); $^{4}$CSIRO Astronomy and Space Science (CASS), Australia; $^{5}$Raman Research Institute, India; $^{6}$ICRAR University of Western Australia, Australia; $^{7}$Dunlap Institute for Astronomy & Astrophysics, University of Toronto, Canada; $^{8}$University of British Columbia, Canada; $^{9}$Victoria University of Wellington, New Zealand; $^{10}$The University of Melbourne, Australia; $^{11}$Netherlands Institute for Radio Astronomy (ASTRON), The Netherlands\ Email: title: 'The MWA GLEAM 4 Jy sample: a new large, bright radio source sample at 151 MHz' --- Introduction ============ Complete samples of radio sources are an essential tool for unravelling the cosmic evolution of radio galaxies and quasars. Whilst deep, small area surveys can be designed to detect sources of relatively low intrinsic radio power at moderate-to-high redshifts, they fail to detect statistically meaningful samples of the brightest and rarest sources. As a result we are driven to sample the largest accessible volume to trace the high-power tail end of the radio luminosity function (RLF).
However, any bright flux density-limited sample includes a wide diversity of sources beyond the most luminous and most distant; sources also lie at local and intermediate distances and thus it is necessary to obtain reliable distance measures to untangle the effects of luminosity and redshift. This requirement has long been recognised: the effort required to completely identify relatively modest scale radio samples has been enormous, although with the advent of all-sky multi-band imaging and (photometric) spectroscopy, this situation is improving. As this conference looks towards new scientific challenges, we note that deep extragalactic radio surveys are being used to define the scientific potential of future radio telescopes. The path to defining a new telescope is to ensure its capabilities are suitably matched to its sensitivity, e.g. considering the number density of sources, dynamic range, confusion effects, etc. One approach is to extrapolate the best available sky model(s) to approximate the sensitivity of the proposed instrument. This is how the radio astronomical community is exploring the scientific potential of the Square Kilometre Array (SKA) telescope particularly in the continuum domain (Prandoni & Seymour 2015), via extrapolation of existing source counts and other complementary data. The new series of deep, low frequency radio surveys such as the GaLactic and Extragalactic All-sky MWA Survey (GLEAM, Wayth et al. 2015) are vital in providing data in the same frequency range as SKA. Challenges of evolution models and source counts ================================================ Radio source counts have been measured from deep and increasingly wide-area surveys to very sensitive flux density limits of a few tens of $\mu$Jy, at frequencies between 1 - 3 GHz - a range now referred to as part of the ‘mid’ frequency band within the SKA project. At these observing frequencies, some fraction of sources appear ‘beamed’, i.e. 
boosted in flux density via relativistic aberration from the physical alignment of core-jet structures close to our line-of-sight. Moreover, as radio sources lie at all redshifts, we observe intrinsic (rest-frame) emission from increasingly more compact and energetic regions of each radio source as both observing frequency and redshift increase. At mid frequencies the combined effect of these features is that we lose the ability to detect the intrinsic origin of the radio emission and instead only account for a decreasing fraction of the sources’ radio activity. These two effects bias the source count and are the origin of the changing form of the differential source count with frequency noted by Wall (1994): the consequence is that it is not straightforward to extrapolate source counts to a significantly different observing frequency without knowledge of the intrinsic emission properties of the source population(s) plus the evolution of the RLF. Whilst we have fair insight into the evolution of the RLF for the extreme high-power sources from the 3CRR sample, it is not determined for sources of moderate or low intrinsic radio power (e.g. $P_{151 \rm{MHz}} < 10^{25}$ W Hz$^{-1}$). There is little information on the space density of these radio source population(s) except locally (i.e. at $z < 0.1$): here some progress has been made using wide area radio surveys combined with sizable spectroscopic optical surveys (e.g. Mauch & Sadler 2007). Within the context of defining SKA science, much work has been done to model the SKA sky to deep flux density limits (e.g. S$^{3}-$SEX, Wilman et al. 2008) taking account of the canonical ‘radio-loud’ source populations and including source populations more normally described as ‘radio-quiet’. However due to the effects noted above, it is not trivial to translate these 1.4 GHz model skies to lower frequencies. 
This is particularly pertinent to the current SKA$\_$LOW specification (frequency range of 50 - 350 MHz) as has been illustrated by Franzen et al. (2016) and others. We have posited (Wall & Jackson 1997) that at low frequencies ($\nu < 200$ MHz) the effects of beaming are negligible such that we observe unbiased source emission, for which we adopt the term the ‘parent’ population(s). The evolutionary behaviour of each source population can then be transposed to higher frequencies by a simple model of jet Doppler factors and randomised sky orientation. This approach has been taken in whole or part by a number of authors, e.g. Orr & Browne 1982; Wall & Peacock 1985; Morisawa & Takahara 1987; Wall & Jackson 1997; Jackson & Wall 1999, and has been reasonably successful in reproducing radio source counts across a wide frequency range (150 MHz - 5 GHz). Differential source counts from contemporary deep low frequency surveys now reveal that our model, based on extrapolation of the RLF derived from the 3CRR sample, underestimates the observed source count as shown in Figure \[fit151f\]. This mismatch was already apparent when the first deep, small area low frequency source counts at 153 MHz were determined (Intema et al. 2011). Whilst such models were able to fit the 3CRR, 6C and 7C survey counts, it is now clear that the derived RLF is deficient in predicting the lower flux density count traced by wide field surveys such as the T-RaMiSu survey (Williams et al. 2013). This shortfall could be due to a paucity of low redshift, lower power sources, or of high redshift, high power sources, or a combination of both. A path to resolving this degeneracy is to investigate the luminosity distribution of a large complete sample of sources of high flux density to trace the distribution of moderate-to-high power sources at much higher statistical significance than can be gleaned from 3CRR.
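The transposition of the 3CRR limit from 178 MHz to 151 MHz used for the source counts below is a one-line power-law scaling, $S_{\nu}\propto\nu^{\alpha}$ with $\alpha=-0.75$ (a minimal sketch; the function name is ours):

```python
def transpose_flux(S, nu_from, nu_to, alpha=-0.75):
    # transpose a flux density between frequencies assuming S ∝ ν^alpha
    return S * (nu_to / nu_from) ** alpha

# the 3CRR limit of 10.9 Jy at 178 MHz becomes ~12.33 Jy at 151 MHz
S_151 = transpose_flux(10.9, 178.0, 151.0)
assert abs(S_151 - 12.33) < 0.01
```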
![Differential radio source counts at 151 MHz with the model fit of Jackson & Wall (1999) shown dotted; the model diverges from the observed source counts below $S_{151 \rm{MHz}} \sim$ 0.1 Jy. Source count data are compiled from 3CRR (178 MHz: Laing, Riley & Longair 1983) transposed to 151 MHz for $S>$ 12.33 Jy; 6C (151 MHz: Hales, Baldwin & Warner 1988) over the range 0.2 Jy to 10 Jy and from the T-RaMiSu survey from 0.02 Jy to 6 Jy (153 MHz: Williams, Intema & Rottgering 2013).[]{data-label="fit151f"}](fit151f.eps) Low frequency surveys - old and new =================================== A number of major wide-area radio surveys have been conducted at frequencies between $\sim$100 MHz and 200 MHz, e.g. the Cambridge surveys 3C (159 MHz: Edge et al. 1959), 4C (178 MHz: Bennett 1962), 6C (151 MHz: Pilkington & Scott 1965; Gower, Scott & Wills 1967) and 7C (151 MHz: Hales et al. 1988, 2007). There are also more recent, ongoing surveys including the MRT survey (151.5 MHz: Pandey & Shankar 2007), TGSS, a new deep survey at 153 MHz (Sirothia et al. 2010), the LOFAR surveys including MSSS (Heald et al. 2015) and the MWA GLEAM survey (Wayth et al. 2015). Whilst all of these surveys have revealed the complexity of the radio source sky, the number and sample sizes of complete, or near-completely-identified, samples have remained small as noted in Table 1: this is due to the enormous amount of effort required to locate and identify the hosts of the radio sources. At slightly higher frequencies, samples such as the Molonglo Southern 4 Jy sample (“MS4” selected at 408 MHz, Burgess & Hunstead 2006) provide comparable samples to 3CRR: whilst remaining limited to small numbers of sources, these samples also become increasingly susceptible to the observational bias effects described earlier.
  ----------   -----   ---------   ---------   -----------   --------------------------------
  3CRR         173     178 MHz     10.9 Jy     100%          Laing, Riley, Longair 1983
  7CI          37      151 MHz     0.51 Jy     90%           Grimes, Rawlings, Willott 2004
  7CII         54      151 MHz     0.48 Jy     90%           Grimes, Rawlings, Willott 2004
  7CIII        37      151 MHz     0.5 Jy      95%           Lacy et al. 1999
  TOOTS-00     47      178 MHz     0.1 Jy      $\sim$80%     Vardoulaki et al. 2010
  ----------   -----   ---------   ---------   -----------   --------------------------------

  : Complete or near-completely identified low frequency ($\nu < 200$ MHz) radio source samples. []{data-label="smallsamps"}

This situation improves with the advent of multi-frequency all-sky radio surveys where low frequency, low resolution radio data can be matched to higher frequency radio data to provide refined position estimates to confidently identify the host galaxy. These revised positions can be used across the EM spectrum, i.e. taking advantage of wide-field multi-passband optical imaging, spectroscopy, IR surveys, etc. However, we are also painfully aware that a sizable fraction of bright radio sources are found to have faint optical hosts; whilst this new era of wide spectrum coverage is upon us, it is likely that many will require individual follow-up in order to obtain a completely-identified sample. The MWA GLEAM 4 Jy sample ========================= The Murchison Widefield Array (MWA, Tingay et al. 2013, Ord et al. 2015) is sited at the Murchison Radio-astronomy Observatory (MRO) in Western Australia. The MWA has a range of science goals (Bowman et al. 2013), including the GLEAM survey (Wayth et al. 2015). This survey covers the sky in the declination range $+$25$^\circ$ to $-$80$^\circ$ with a near contiguous frequency coverage from 72 to 231 MHz. The GLEAM survey catalogue is constructed from a series of 120 s drift-scan observations, each set covering one of five 30.72 MHz-wide instantaneous frequency bands between 72 and 231 MHz.
From these observations, the GLEAM data are imaged and calibrated in 20 sub-bands of 7.68 MHz, such that we can choose relatively narrow frequency slices to select any specific sample. A catalogue of approximately 300,000 extragalactic source components detected by the GLEAM survey during its first year of observations is nearing release (Hurley-Walker et al., in prep). This will provide full details of the flux density calibration procedures, completeness and reliability of the bright sample discussed further in this paper. A large fraction of the extragalactic GLEAM survey ($\sim$ 7.4 sr) is 100$\%$ complete at $S_{151 \rm{MHz}} \ge 4$ Jy, excluding regions close to the Galactic Plane ($|b| < 10^\circ$) and the Magellanic Clouds. From these data we select 2130 components with $S_{151 \rm{MHz}} \ge 4$ Jy catalogued in the GLEAM 147 - 154 MHz image. Reconciling the GLEAM catalogue to discrete radio sources ========================================================= As with almost all other extragalactic radio source catalogues, the catalogued GLEAM components do not necessarily have a one-to-one correspondence with physically discrete radio galaxies and quasars. How the GLEAM survey catalogue represents the physical sky in this respect is one of the first issues we need to resolve before attempting to interpret the GLEAM 4 Jy ‘sources’ with any other radio data. Whilst the GLEAM survey has excellent surface brightness sensitivity, the image resolution varies as approximately 2.5 $\times$ 2.2/cos($\delta + 26.7^\circ$) arcmin at 151 MHz. Given these characteristics we determine that there are four instances to consider in translating the selected GLEAM sample to a physical source list: (1) as will be described in Hurley-Walker et al. (in prep), GLEAM imaging has excised regions around the so-called ‘Class A’ sources, e.g. Fornax A, etc., such that these sources are missing. These sources are well-determined and will be added into our 4 Jy sample at a later stage.
(2) Sources with two or more separately-catalogued GLEAM components with individual flux densities less than 4 Jy are not included in this sample: to quantify these we will use the 2-point angular correlation function $w(\theta)$ to estimate the incidence of wide doubles. (3) Sources with very extended and separated radio lobes could be catalogued as individual components in GLEAM: where one or both lobes are brighter than 4 Jy the component(s) appear in our sample. These instances are resolved as a single radio source via visual inspection with other catalogue data as described later. (4) GLEAM sources with complex regions of low surface brightness: in this case often just one component appears in the selected sample. These instances are resolved as described in the next section via analysis of GLEAM sources in close proximity coupled with visual inspection as also described below. Examples of each case are shown in Figures \[gdoublea\] and \[gdoubleb\]. ![An example of a resolved GLEAM double (left) and where multiple (in this case, two) GLEAM components are determined to be unrelated (right): sources GLEAM J002113-191038 and GLEAM J000540+195026 respectively. Dashed contours (GLEAM) and solid contours (NVSS) are shown with the lowest contour level at 3$\sigma$ (rms, survey) and increasing in factors of 2. The catalogued radio positions are shown for both surveys, GLEAM $+$ and NVSS $\times$.[]{data-label="gdoublea"}](GLEAMdouble.eps "fig:")
The catalogued radio positions are shown for both surveys, GLEAM $+$ and NVSS $\times$.[]{data-label="gdoublea"}](GLEAMunrelated.eps "fig:") ![Two GLEAM catalogued sources revealed as having double-lobed radio structure by the NVSS survey images: sources GLEAM J001636-382649 and GLEAM J001815+214143 respectively. Dashed contours (GLEAM) and solid contours (NVSS) are shown with the lowest contour level at 3$\sigma$ (rms, survey) and increasing in factors of 2. The catalogued radio positions are shown for both surveys, GLEAM $+$ and NVSS $\times$.[]{data-label="gdoubleb"}](NVSSdouble.eps "fig:") ![Two GLEAM catalogued sources revealed as having double-lobed radio structure by the NVSS survey images: sources GLEAM J001636-382649 and GLEAM J001815+214143 respectively. Dashed contours (GLEAM) and solid contours (NVSS) are shown with the lowest contour level at 3$\sigma$ (rms, survey) and increasing in factors of 2. The catalogued radio positions are shown for both surveys, GLEAM $+$ and NVSS $\times$.[]{data-label="gdoubleb"}](NVSSdouble2.eps "fig:") Having established how we interpret the GLEAM survey component catalogue to define a [*complete*]{} 4 Jy source sample, we refine the source positions using higher frequency, higher resolution, data. This step also provides a first insight to the nature of these sources. We examine positional cross-matches of the GLEAM 4 Jy sources with existing higher frequency radio surveys which assists in resolving cases (3) and (4) noted above, as well as allowing us to refine the centroid position of the radio source emission of all GLEAM 4 Jy sources. We begin by analysing all 2130 selected GLEAM components. As noted in cases (3) and (4) above, whilst GLEAM has a large beam size it is still possible that the source finding process has fragmented sources; to find sources where this has occurred, we run a process to identify potentially related source components (‘chains of friends’) using a 4 arcmin step offset, i.e. 
approximately 2 beam widths. We find that 1873/2130 sources are isolated GLEAM components having no near neighbours and the remaining 257 GLEAM components have one or more neighbours in close proximity. On visual inspection it becomes clear that the vast majority of these close-proximity components are unrelated, such that (only) 27 of these are sources where a single 4 Jy radio source has been fragmented (i.e. fit with multiple Gaussian components) in GLEAM processing. We correct our sample by accumulating the flux densities of these multiple components and also calculate a new centroid-weighted GLEAM position for the complex source. The flux density distribution of the resultant 4 Jy sample reveals that it is dominated by sources at the faint end, as is expected given the steepness of the 151 MHz source counts. If we consider the equivalent 3CRR flux density limit at this frequency as 12.33 Jy (i.e. 10.9 Jy at 178 MHz transposed to 151 MHz assuming a spectral index of $-$0.75), then there are 1888 GLEAM 4 Jy sources with flux densities below this limit (89% of the sample); thus $\sim$90% of the 4 Jy GLEAM sample will augment the information obtained from 3CRR. We cross-match the GLEAM 4 Jy source sample with the SUMSS (Bock, Large, Sadler 1999) and NVSS surveys (Condon et al. 1998). Where there are multiple NVSS or SUMSS components detected within the GLEAM search radius we visually inspect the data and find 371 (17%) are double, triple or higher order radio source structures, examples of which are shown in Figure 3. This step of cross-matching with the SUMSS and NVSS catalogues not only reveals information on the higher resolution, higher frequency characteristics of the GLEAM 4 Jy sources but also helps us derive a more robust centroid position of the origin of the radio emission, via an adaptation of the methodology developed by Magliocchetti et al. (1998).
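The ‘chains of friends’ grouping described above can be sketched as a simple union-find pass over the component positions (an illustrative flat-sky approximation with placeholder coordinates; this is not the actual GLEAM pipeline code):

```python
import math

def friends_of_friends(positions, link_arcmin=4.0):
    # group components whose pairwise separations chain below the linking
    # length; positions are (RA, Dec) in degrees, small-angle flat-sky approx.
    n = len(positions)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    link_deg = link_arcmin / 60.0
    for i in range(n):
        ra_i, dec_i = positions[i]
        for j in range(i + 1, n):
            ra_j, dec_j = positions[j]
            # scale the RA offset by cos(mean Dec) before measuring distance
            dra = (ra_i - ra_j) * math.cos(math.radians(0.5 * (dec_i + dec_j)))
            if math.hypot(dra, dec_i - dec_j) <= link_deg:
                union(i, j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

With a 4 arcmin linking length, two components 3 arcmin apart end up in one group while a distant third stays isolated; fragmented sources would then have their flux densities accumulated per group.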
This work is ongoing and we will describe this process and the cross-match of this GLEAM sample to other waveband catalogues, including the ATCA 20 GHz survey (AT20G: Murphy et al. 2010), IR, optical, etc., in a future publication (Jackson et al. in prep). Discussion and future work ========================== As discussed at this conference, there are many opportunities for new insights into radio galaxy evolution and physical models from the upcoming generation of radio surveys. Our initial work described here outlines how we are using a first look at the brightest $\sim$2000 sources at 151 MHz to check the integrity of the GLEAM catalogue data, ahead of its public release, as well as our motivation for pursuing complete identification of this sample in the future. Acknowledgements ================ This work makes use of the Murchison Radio-astronomy Observatory, operated by CSIRO. We acknowledge the Wajarri Yamatji people as the traditional owners of the Observatory site. Support for the operation of the MWA is provided by the Australian Government Department of Industry and Science and Department of Education (National Collaborative Research Infrastructure Strategy: NCRIS), under a contract to Curtin University administered by Astronomy Australia Limited. We acknowledge the iVEC Petabyte Data Store and the Initiative in Innovative Computing and the CUDA Center for Excellence sponsored by NVIDIA at Harvard University. We acknowledge the International Centre for Radio Astronomy Research (ICRAR), a Joint Venture of Curtin University and the University of Western Australia, funded by the West Australian Government. CAJ thanks the Department of Science, Office of Premier & Cabinet, WA for their support through the Western Australian Fellowship Program. The Centre for All-sky Astrophysics is an Australian Research Council Centre of Excellence, funded by grant CE110001020.
Bennett A.S., 1962, MNRAS, 125, 75
Bock D., Large M.I., Sadler E.M., 1999, AJ, 117, 1578
Burgess A.M., Hunstead R.W., 2006, AJ, 131, 100
Condon J.J., Cotton W.D., Greisen E.W., Yin Q.F., Perley R.A., Taylor G.B., Broderick J.J., 1998, AJ, 115, 1693
Edge D.O., Shakeshaft J.R., McAdam W.B., Baldwin J.E., Archer S., 1959, MNRAS, 68, 37
Franzen T.M.O. et al., 2016, MNRAS, in press
Grimes J.A., Rawlings S., Willott C.J., 2004, MNRAS, 349, 503
Hales S.E.G., Baldwin J.E., Warner P.J., 1988, MNRAS, 234, 919
Hales S. et al., 2007, MNRAS, 382, 1639
Hales C.A. et al., 2014, MNRAS, 441, 2555
Hancock P. et al., 2012, MNRAS, 422, 1812
Heald G. et al., 2015, A&A, 582, 123
Intema H.T., van Weeren R.J., R[ö]{}ttgering H.J.A., Lal D.V., 2011, A&A, 535, A38
Jackson C.A., Wall J.V., 1999, MNRAS, 304, 160
Laing R., Riley J., Longair M.S., 1983, MNRAS, 204, 151
Magliocchetti M., Maddox S.J., Lahav O., Wall J.V., 1998, MNRAS, 300, 257
Mauch T., Sadler E.M., 2007, MNRAS, 375, 931
Morisawa K., Takahara F., 1987, MNRAS, 228, 745
Murphy T. et al., 2010, MNRAS, 402, 2403
Ord S., Greenhill L., Wayth R., Mitchell D., Dale K., Pfister H., Edgar R., 2009, ASPC, 411, 127
Orr M.J.L., Browne I.W.A., 1982, MNRAS, 200, 1067
Pandey V.N., Shankar N.U., 2007, Highlights of Astronomy, CUP, Volume 14, 385
Pilkington J.D.H., Scott J.F., 1965, MmRAS, 69, 183
Prandoni I., Seymour N., 2015, Proceedings of Advancing Astrophysics with the Square Kilometre Array (AASKA14), 9-13 June 2014, Giardini Naxos, Italy, 67
Sirothia et al., The TIFR GMRT Sky Survey: http://tgss.ncra.tifr.res.in/150MHz/tgss.html
Tingay S.J. et al., 2013, PASA, 30, e007
Vardoulaki E. et al., 2010, MNRAS, 401, 1709
Wall J.V., 1994, AuJPh, 47, 625
Wall J.V., Jackson C.A., 1997, MNRAS, 290, L17
Wall J.V., Peacock J., 1985, MNRAS, 216, 173
Wayth R.B. et al., 2015, PASA, 32, e025
Wilman C. et al., S$^{3}$ - The Simulated SKA Skies Project: http://s-cubed.physics.ox.ac.uk
Williams W.L., Intema H.T., R[ö]{}ttgering H.J.A., 2013, A&A, 594, 55
--- abstract: 'These lectures give a pedagogical introduction to real and virtual Compton scattering at low energies. We will first discuss real Compton scattering off a point particle as well as a composite system in the framework of nonrelativistic quantum mechanics. The concept of electromagnetic polarizabilities is introduced. We then address a description of the Compton-scattering tensor within quantum field theory with particular emphasis on the derivation of low-energy theorems. The importance of a consistent treatment of hadron structure in the use of electromagnetic vertices is stressed. Finally, the reader is introduced to the notion of generalized polarizabilities in the rapidly expanding field of virtual Compton scattering.' address: 'Institut für Kernphysik, Johannes Gutenberg-Universität, D-55099 Mainz, Germany' author: - 'S. Scherer' date: '11.1.1999' title: 'Real and Virtual Compton Scattering at Low Energies[^1]' --- Introduction ============ The discovery of the Compton effect [@Compton_23; @Debye_23], [*i.e.*]{}, the scattering of photons off electrons, and its explanation in terms of conservation of energy and momentum in the collision between a single light quantum and an electron is regarded as one of the key developments of modern physics [@Stuewer_77]. In atomic physics, condensed matter physics, and chemistry, Compton scattering is nowadays an important tool for investigating the momentum distribution of the scattering electrons in the probe. The inclusion of the electron spin in the calculation of the Compton-scattering cross section by Klein and Nishina [@Klein_29] has become one of the textbook examples of applying quantum electrodynamics at lowest order. In the realm of strong-interaction physics, the potential of using Compton scattering as a method of studying properties of particles was realized in the early fifties.
The influence of the [*anomalous*]{} magnetic moment of the proton on the Compton-scattering cross section was first discussed by Powell [@Powell_49]. The derivation of low-energy theorems (LETs), [*i.e.*]{}, model-independent predictions based upon a few general principles, became an important starting point in understanding hadron structure. Typically, the leading terms of the low-energy amplitude for a given reaction are predicted in terms of global, model-independent properties of the particles. LETs provide an important constraint for models or theories of hadron structure: unless these general principles are violated, the predictions of a low-energy theorem must be reproduced. Furthermore, LETs also provide useful constraints for experiments, as they define a reference point for the precision which has to be achieved in experimental studies designed to distinguish between different models. Based on the requirements of gauge invariance, Lorentz invariance, crossing symmetry, and the discrete symmetries, the low-energy theorem for Compton scattering (CS) of real photons off a nucleon [@Low_54; @GellMann_54] uniquely specifies the terms in the low-energy scattering amplitude up to and including terms linear in the photon momentum. The coefficients of this expansion are expressed in terms of global properties of the nucleon: its mass, charge, and magnetic moment. Terms of second order in the frequency, which are not determined by this theorem, can be parameterized in terms of two new structure constants, the electric and magnetic polarizabilities of the nucleon. These polarizabilities have been the subject of numerous experimental and theoretical studies, as they contain the first information on the compositeness or structure of the nucleon specific to Compton scattering.
As in all studies with electromagnetic probes, the possibilities for investigating the structure of the target are much greater if virtual photons are used, since the energy and three-momentum of the virtual photon can be varied independently. Moreover, the longitudinal component of the current operators entering the amplitude can be studied. The amplitude for virtual Compton scattering (VCS) off the proton is accessible in the reactions $e^-p\to e^-p\gamma$ and $\gamma p\to p e^- e^+$. In particular, the first process has recently received considerable interest, as it allows one to investigate generalizations of the RCS polarizabilities to the spacelike region, namely, the so-called generalized polarizabilities [@Guichon_95]. The purpose of these lectures is to provide an [*introduction*]{} to the topics of real and virtual Compton scattering. The material is organized in three chapters. We start at an elementary level in the framework of nonrelativistic quantum mechanics and discuss basic features of Compton scattering. Then, a covariant treatment within quantum field theory is discussed, with particular emphasis on a consistent treatment of compositeness in the use of electromagnetic vertices. In the last chapter, the reader is introduced to the rapidly expanding field of virtual Compton scattering. In preparing these lectures, we have made use of the excellent pedagogical reviews on hadron polarizabilities of Refs. [@Friar_89; @Holstein_90; @Holstein_92]. A vast amount of more detailed information is contained in Refs. [@Petrunkin_81; @Lvov_93]. An overview of the current status of experimental and theoretical activities on hadron polarizabilities can be found in Ref. [@WGS_98]. Finally, for a first review on virtual Compton scattering the reader is referred to Ref. [@Guichon_98].
Compton scattering in nonrelativistic quantum mechanics ======================================================= Kinematics and notations ------------------------ We will first discuss real Compton scattering (RCS), for which $q^2=q'^2=q\cdot\epsilon=q'\cdot\epsilon'=0$. The kinematical variables and polarization vectors are defined in Fig. \[figurekin\]. As a result of translational invariance in space-time, the total three-momentum and energy, respectively, are conserved, $$\label{epcons} \vec{p}_i+\vec{q}=\vec{p}_f+\vec{q}\,',\quad E_i+\omega=E_f+\omega',$$ where the energy of the particle is given by $E=\frac{\vec{p}\,^2}{2M}$ or $E=\sqrt{M^2+\vec{p}\,^2}$ depending on whether one uses a nonrelativistic or relativistic framework. For the description of the RCS amplitude one requires two kinematical variables, [*e.g.*]{}, the energy of the initial photon, $\omega$, and the scattering angle between the initial photon and the scattered photon, $\cos(\Theta)=\hat{q}\cdot\hat{q}\,'$. The energy of the scattered photon in the lab frame is given by $$\label{omegap} \omega'=\frac{\omega}{1+\frac{\omega}{M}[1-\cos(\Theta)]},$$ if use of relativistic kinematics is made. From Eq. (\[omegap\]) one obtains the well-known result for the wavelength shift of the Compton effect, $\Delta \lambda=(4\pi/M) \sin^2(\Theta/2)$. Nonrelativistic Compton scattering off a point particle ------------------------------------------------------- In order to set the stage, we will first discuss, in quite some detail, Compton scattering of real photons off a free point particle of mass $M$ and charge $e>0$ within the framework of nonrelativistic quantum mechanics. First of all, this will allow us to introduce basic concepts such as gauge invariance, photon-crossing symmetry as well as discrete symmetries. Secondly, the result will define a reference point beyond which the structure of a composite object can be studied. 
Finally, this will also allow us to discuss later on, where a relativistic description departs from a nonrelativistic treatment. Consider the Hamiltonian of a single, free point particle of mass $M$ and charge $e>0$,[^2] $$\label{h0} H_0=\frac{\vec{p}\,^2}{2M}.$$ The coupling to the electromagnetic field, $A^\mu(\vec{x},t)= (\Phi(\vec{x},t),\vec{A}(\vec{x},t))$, is generated by the well-known minimal-substitution procedure[^3] $$\label{minsub} i\frac{\partial}{\partial t}\mapsto i\frac{\partial}{\partial t} - e \Phi(\vec{x},t),\quad \vec{p}\mapsto\vec{p}-e \vec{A}(\vec{x},t),$$ resulting in the Schrödinger equation $$\label{schr} i\frac{\partial \Psi(\vec{x},t)}{\partial t}= [H_0+H_I(t)]\Psi(\vec{x},t)= [H_0+H_1(t)+H_2(t)]\Psi(\vec{x},t)= H(\Phi,\vec{A})\Psi(\vec{x},t),$$ where $$H_1(t)=-e\frac{\vec{p}\cdot\vec{A}+\vec{A}\cdot{\vec{p}}}{2M}+e\Phi, \quad H_2(t)=\frac{e^2}{2M}\vec{A}\,^2.$$ Gauge invariance of Eq. (\[schr\]) means that $$\Psi'(\vec{x},t)=\exp[-ie\chi(\vec{x},t)]\Psi(\vec{x},t)$$ is a solution of $$i\frac{\partial \Psi'(\vec{x},t)}{\partial t}= H(\Phi+\dot{\chi},\vec{A}-\vec{\nabla}\chi)\Psi'(\vec{x},t),$$ provided $\Psi(\vec{x},t)$ is a solution of Eq. (\[schr\]). In other words, Eq. (\[schr\]) remains invariant under a gauge transformation $$\Psi\mapsto\exp[-ie\chi(\vec{x},t)]\Psi, \quad A^\mu\mapsto A^\mu+ \partial^\mu\chi.$$ For a discussion of gauge invariance in the context of nonrelativistic reductions the interested reader is referred to Ref. [@Scherer_94]. After introducing the interaction representation, $$\label{intrep} H_I^{int}(t)=e^{i H_0 t} H_I(t)e^{-i H_0 t},$$ the $S$-matrix element is obtained by evaluating the Dyson series $$\label{dyson} S = 1+ \sum_{k=1}^\infty \frac{(-i)^k}{k!}\int_{-\infty}^\infty dt_1 \cdots dt_k \hat{T}\left[H^{int}_I(t_1)\cdots H^{int}_I(t_k)\right]$$ between $|i\!>\equiv|\vec{p}_i; \gamma(q,\epsilon)\!>$ and $<\!f|\equiv \mbox{$<\!\vec{p}_f; \gamma(q', \epsilon')|$}$. In Eq. 
(\[dyson\]), $\hat{T}$ refers to the time-ordering operator, $$\hat{T}\left[A(t_1) B(t_2)\right]=A(t_1) B(t_2)\Theta(t_1-t_2) +B(t_2) A(t_1)\Theta(t_2-t_1),$$ with a straightforward generalization to an arbitrary number of operators. We use second-quantized photon fields $$\label{sqpf} <\! 0|A^\mu(\vec{x},t)|\gamma[q,\epsilon(\lambda)]\!>= \epsilon^\mu(q,\lambda)N(\omega)e^{-iq\cdot x},$$ where $N(\omega)=[(2\pi)^3 2\omega]^{-1/2}$, and normalize the states as $$\label{normstat} <\!\vec{x}|\vec{p}\!>=\frac{e^{i\vec{p}\cdot\vec{x}}}{\sqrt{(2\pi)^3}}.$$ The part relevant to Compton scattering \[${\cal O}(e^2)$\] reads $$\label{scomp} S=-i\int_{-\infty}^\infty dt H^{int}_2(t) -\int_{-\infty}^\infty dt_1 dt_2 H_1^{int}(t_1) H_1^{int}(t_2)\Theta(t_1-t_2),$$ where the first term generates the contact-interaction contribution or so-called seagull term: $$\begin{aligned} \label{scont} S_{fi}^{cont}&=& -i\frac{e^2}{2M}\int_{-\infty}^\infty dt <\!f| e^{iH_0 t}\vec{A}\,^2(\hat{\vec{r}},t) e^{-iH_0 t} |i\!>\nonumber\\\ &=&-i(2\pi)^4\delta^4(p_f+q'-p_i-q)\frac{1}{\sqrt{4\omega\omega'}(2\pi)^6} \frac{e^2 \vec{\epsilon}\,'^\ast\cdot\vec{\epsilon}}{M}.\end{aligned}$$ In order to obtain Eq. (\[scont\]), one first contracts the photon field operators with the photons in the initial and final states, respectively,[^4] then evaluates the time integral, and, finally, makes use of $$<\!\vec{p}\,'|f(\hat{\vec{r}})|\vec{p}\!>=\frac{1}{(2\pi)^3}\int d^3r e^{i(\vec{p}-\vec{p}\,')\cdot\vec{r}}f(\vec{r})$$ with $f(\vec{r})=\exp[i(\vec{q}-\vec{q}\,')\cdot\vec{r}]$, to obtain the three-momentum conservation. The second contribution of Eq. (\[scomp\]) is evaluated by inserting a complete set of states between $H_1^{int}(t_1)$ and $H_1^{int}(t_2)$: $$-\int_{-\infty}^\infty dt_1 dt_2 \Theta(t_1-t_2) \int d^3p <\! 
f|H^{int}_1(t_1)|\vec{p}\!><\!\vec{p}| H^{int}_1(t_2)|i\!>.$$ There are two distinct possibilities to contract the photon fields, namely, $A_\nu(t_2)$ with $|\gamma(q,\epsilon)\!>$ and $A_\mu(t_1)$ with $<\!\gamma(q',\epsilon')|$ and vice versa, giving rise to the so-called direct and crossed channels, respectively. Evaluating the time dependence and making use of $$\int_{-\infty}^\infty dt_1 dt_2 \Theta(t_1 - t_2) e^{iat_1} e^{ibt_2} =\frac{2\pi i\delta(a+b)}{a+i0^+}$$ one obtains $$\begin{aligned} \label{sppdccc1} S_{fi}^{dc+cc}&=&-2\pi i\delta(E_f+\omega'-E_i-\omega)\int d^3p \left(\frac{<\vec{p}_f|H^{em}_1|\vec{p}><\vec{p}|H^{abs}_1|\vec{p}_i>}{ E_f+\omega'-E(\vec{p})+i0^+}\right.\nonumber\\ &&\quad\left. +\frac{<\vec{p}_f|H^{abs}_1|\vec{p}><\vec{p}|H^{em}_1|\vec{p}_i>}{ E_f-\omega-E(\vec{p})+i0^+}\right),\end{aligned}$$ where the superscripts $abs$ and $em$ refer to absorption and emission of photons, respectively, and where the matrix elements are given by $$\begin{aligned} <\!\vec{p}|H^{abs}_1|\vec{p}_i\!>&=&-eN(\omega) \delta^3(\vec{p}-\vec{q}-\vec{p}_i) \left[\frac{(\vec{p}+\vec{p}_i)\cdot\vec{\epsilon}}{2M}- \epsilon_0\right],\\ <\!\vec{p}_f|H^{em}_1|\vec{p}\!>&=&-eN(\omega') \delta^3(\vec{p}_f+\vec{q}\,'-\vec{p}) \left[\frac{(\vec{p}_f+\vec{p})\cdot\vec{\epsilon}\,'^\ast}{2M}- \epsilon_0'^\ast\right].\end{aligned}$$ Using the following convention $${\cal T}_{fi}=\frac{1}{\sqrt{4\omega\omega'}(2\pi)^6}t_{fi},$$ with $S=I+iT$ at the operator level and $<\! 
f|T|i\!>=(2\pi)^4\delta^4(P_f-P_i){\cal T}_{fi}$, the final result for the $T$-matrix element reads $$\begin{aligned} t_{fi}&=&e^2\left\{-\frac{\vec{\epsilon}\,'^\ast\cdot\vec{\epsilon}}{M} -\left[\frac{(2\vec{p}_f+\vec{q}\,')\cdot \vec{\epsilon}\,'^\ast}{2M} -\epsilon_0'^\ast\right] \frac{1}{E_f+\omega'-E(\vec{p}_f+\vec{q}\,')} \left[\frac{(2\vec{p}_i+\vec{q})\cdot \vec{\epsilon}}{2M}-\epsilon_0\right]\right.\nonumber\\ &&\left.\hspace{4em} -\left[\frac{(2\vec{p}_f-\vec{q})\cdot\vec{\epsilon}}{2M} -\epsilon_0\right] \frac{1}{E_f-\omega-E(\vec{p}_f-\vec{q})} \left[\frac{(2\vec{p}_i-\vec{q}\,')\cdot \vec{\epsilon}\,'^\ast}{2M}-\epsilon_0'^\ast\right]\right\}.\end{aligned}$$ Let us discuss a few properties of $t_{fi}$. - Gauge invariance: As a result of the gauge-invariance property of the equation of motion, the result for true observables should not depend on the gauge chosen. In the present context, this means that the transition matrix element is invariant under the replacement $\epsilon^\mu\to \epsilon^\mu +\zeta q^\mu$ (analogously for $\epsilon'$): $$\begin{aligned} t_{fi}&\stackrel{\epsilon^\mu\to q^\mu}{\mapsto}& e^2\left\{-\frac{\vec{\epsilon}\,'^\ast\cdot\vec{q}}{M} -\left[\frac{(2\vec{p}_f+\vec{q}\,')\cdot \vec{\epsilon}\,'^\ast}{2M} -\epsilon_0'^\ast\right] \frac{1}{E_f+\omega'-E(\vec{p}_f+\vec{q}\,')} \left[\frac{(2\vec{p}_i+\vec{q})\cdot\vec{q}}{2M}-\omega\right]\right.\\ &&\left.-\left[\frac{(2\vec{p}_f-\vec{q})\cdot \vec{q}}{2M} -\omega\right] \frac{1}{E_f-\omega-E(\vec{p}_f-\vec{q})} \left[\frac{(2\vec{p}_i-\vec{q}\,')\cdot \vec{\epsilon}\,'^\ast}{2M}-\epsilon_0'^\ast\right]\right\}\\ &\stackrel{(\ast)}{=}& e^2\left\{-\frac{\vec{\epsilon}\,'^\ast\cdot\vec{q}}{M} +\left[\frac{(2\vec{p}_f+\vec{q}\,')\cdot \vec{\epsilon}\,'^\ast}{2M} -\epsilon_0'^\ast\right] -\left[\frac{(2\vec{p}_i-\vec{q}\,')\cdot \vec{\epsilon}\,'^\ast}{2M}-\epsilon_0'^\ast\right]\right\}\\ &=&0\quad\mbox{since}\quad2\vec{p}_f+\vec{q}\,'-2\vec{p}_i+\vec{q}\,' 
=2\vec{q},\end{aligned}$$ where, using energy conservation, in $(\ast)$ we inserted $$\begin{aligned} E_f+\omega'-E(\vec{p}_f+\vec{q}\,')&=& -\left[\frac{(2\vec{p}_i+\vec{q})\cdot\vec{q}}{2M}-\omega\right],\\ E_f-\omega-E(\vec{p}_f-\vec{q})&=& \left[\frac{(2\vec{p}_f-\vec{q})\cdot\vec{q}}{2M}-\omega\right].\end{aligned}$$ - Photon-crossing symmetry: $t_{fi}$ is invariant under the simultaneous replacements $\epsilon^\mu\leftrightarrow \epsilon'^{\mu\ast}$ and $q^\mu\leftrightarrow -q'^\mu$, [*i.e.*]{}, $$\begin{aligned} t_{fi}&\mapsto& e^2\left\{-\frac{\vec{\epsilon}\cdot\vec{\epsilon}\,'^\ast}{M} -\left[\frac{(2\vec{p}_f-\vec{q})\cdot \vec{\epsilon}}{2M} -\epsilon_0\right] \frac{1}{E_f-\omega-E(\vec{p}_f-\vec{q})} \left[\frac{(2\vec{p}_i-\vec{q}\,')\cdot \vec{\epsilon}\,'^\ast}{2M}-\epsilon_0'^\ast\right]\right.\\ &&\left.\hspace{4em}-\left[\frac{(2\vec{p}_f+\vec{q}\,')\cdot \vec{\epsilon}\,'^\ast}{2M} -\epsilon_0'^\ast\right] \frac{1}{E_f+\omega'-E(\vec{p}_f+\vec{q}\,')} \left[\frac{(2\vec{p}_i+\vec{q})\cdot \vec{\epsilon}}{2M}-\epsilon_0\right]\right\}\\ &=&t_{fi}.\end{aligned}$$ - Invariance under $e\mapsto -e$, [*i.e.*]{}, the Compton-scattering amplitudes for particles of charges $e$ and $-e$ are identical. - Under parity, $t_{fi}$ behaves as a scalar, [*i.e.*]{}, there are no terms of, [*e.g.*]{}, the type $\epsilon_{ijk} \epsilon_i \epsilon_j'^\ast q_k$. - Particle crossing, $(E_i,\vec{p}_i)\leftrightarrow (-E_f,-\vec{p}_f)$, is [*not*]{} a symmetry of a nonrelativistic treatment. 
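The chain of cancellations behind the gauge-invariance property can be verified numerically. The sketch below (Python; the mass, photon energy, angle, and gauge parameter are illustrative choices of ours, with $e^2$ set to 1) implements the point-particle $T$-matrix element in the lab frame, determines $\omega'$ from exact nonrelativistic energy-momentum conservation, and checks that $t_{fi}$ is unchanged under $\epsilon^\mu\to\epsilon^\mu+\zeta q^\mu$:

```python
import numpy as np

M = 938.3          # particle mass (MeV), illustrative
w = 100.0          # incident photon energy (MeV)
theta = 0.7        # lab scattering angle (rad)
c, s = np.cos(theta), np.sin(theta)

# Nonrelativistic energy conservation with p_i = 0:
#   w = w' + (q - q')^2/(2M), a quadratic equation for w'
wp = -(M - w * c) + np.sqrt((M - w * c) ** 2 + 2.0 * M * w - w ** 2)

q  = w  * np.array([0.0, 0.0, 1.0])      # incident photon momentum
qp = wp * np.array([s, 0.0, c])          # scattered photon momentum
p_i = np.zeros(3)
p_f = p_i + q - qp                       # momentum conservation
E = lambda p: p @ p / (2.0 * M)

def vertex(P, e0, ev):
    """Single-photon vertex factor (P . ev)/(2M) - e0."""
    return P @ ev / (2.0 * M) - e0

def t_fi(e0, ev, e0p, evp):
    """Point-particle Compton T-matrix element (e^2 = 1)."""
    seagull = -np.conj(evp) @ ev / M
    direct = (vertex(2 * p_f + qp, np.conj(e0p), np.conj(evp))
              * vertex(2 * p_i + q, e0, ev)
              / (E(p_f) + wp - E(p_f + qp)))
    crossed = (vertex(2 * p_f - q, e0, ev)
               * vertex(2 * p_i - qp, np.conj(e0p), np.conj(evp))
               / (E(p_f) - w - E(p_f - q)))
    return seagull - direct - crossed

ev,  e0  = np.array([1.0, 0.0, 0.0]), 0.0   # initial polarization
evp, e0p = np.array([c, 0.0, -s]),    0.0   # final polarization

zeta = 0.37                                  # arbitrary gauge parameter
t0 = t_fi(e0, ev, e0p, evp)
t1 = t_fi(e0 + zeta * w, ev + zeta * q, e0p, evp)
print(t0, abs(t1 - t0))   # the difference vanishes up to rounding
```

Because the cancellation relies on the energy-conservation identities inserted at $(\ast)$, the check fails if $\omega'$ is not determined from the same nonrelativistic dispersion relation that appears in the propagator denominators.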
For the purpose of calculating the differential cross section, we make use of the Coulomb gauge, $\epsilon_0=0$, $\vec{q}\cdot\vec{\epsilon}=0$, $\epsilon_0'=0$, and $\vec{q}\,'\cdot\vec{\epsilon}\,'=0$, and evaluate the $S$-matrix element in the laboratory frame, where $\vec{p}_i=0$, $$S_{fi}=-i (2\pi)^4 \delta(E_f+\omega'-E_i-\omega) \delta^3(\vec{p}_f+\vec{q}\,'-\vec{p}_i-\vec{q}) \frac{1}{\sqrt{4\omega\omega'}(2\pi)^6} \frac{e^2\vec{\epsilon}\cdot\vec{\epsilon}\,'^\ast}{M}.$$ Using standard techniques,[^5] the differential cross section reads $$d\sigma = \delta(E_f+\omega'-E_i-\omega) \delta^3(\vec{p}_f+\vec{q}\,'-\vec{p}_i-\vec{q}) \frac{1}{\omega\omega'} \left|\frac{e^2\vec{\epsilon}\cdot \vec{\epsilon}\,'^\ast}{4\pi M}\right|^2 d^3q' d^3p_f.$$ After integration with respect to the momentum of the particle, we make use of $d^3q'=\omega'^2d\omega' d\Omega$ and obtain $$\label{dsdo} \frac{d\sigma}{d\Omega}= \left\{1-4\frac{\omega}{M}\sin^2\left(\frac{\Theta}{2}\right) +{\cal O}\left[\left(\frac{\omega}{M}\right)^2\right]\right\} \left|\frac{e^2\vec{\epsilon}\cdot \vec{\epsilon}\,'^\ast}{4\pi M}\right|^2.$$ Averaging and summing over initial and final photon polarizations, respectively, is easily performed by treating $\{\hat{q}=\hat{e}_z,\vec{\epsilon}(1)=\hat{e}_x, \vec{\epsilon}(2)=\hat{e}_y\}$ as well as $\{\hat{q}',\vec{\epsilon}\,'(1),\vec{\epsilon}\,'(2)\}$ as orthonormal bases, $$\label{sumpol} \sum_{\lambda'=1}^2\left(\frac{1}{2} \sum_{\lambda=1}^2|\vec{\epsilon}(\lambda)\cdot \vec{\epsilon}\,'^\ast(\lambda')|^2\right) =\frac{1}{2}[1+\cos^2(\Theta)].$$ Let us consider the so-called Thomson limit, [*i.e.*]{}, $\omega\to 0$, for which Eq. (\[dsdo\]) in combination with Eq. 
(\[sumpol\]) reduces to $$\left.\frac{d\sigma}{d\Omega}\right|_{\omega=0} =\frac{\alpha^2}{M^2}\frac{1+\cos^2(\Theta)}{2}, \quad \alpha=\frac{e^2}{4\pi} \approx \frac{1}{137}.$$ The total cross section, obtained by integrating over the entire solid angle, reproduces the classical Thomson scattering cross section denoted by $\sigma_T$, $$\label{sigmat} \sigma_T=\frac{8\pi}{3}\frac{\alpha^2}{M^2}.$$ Numerical values of the Thomson cross section for the electron, charged pion, and the proton are shown in Table \[tcs\]. Nonrelativistic Compton scattering off a composite system --------------------------------------------------------- Next we discuss Compton scattering off a composite system within the framework of nonrelativistic quantum mechanics. For the sake of simplicity, we consider a system of two particles interacting via a central potential $V(r)$, $$\label{tpp} H_0=\frac{{\vec{p}_1}\,^2}{2m_1}+\frac{\vec{p}_2\,^2}{2m_2} +V(|\vec{r}_1-\vec{r}_2|) =\frac{\vec{P}\,^2}{2M}+\frac{\vec{p}\,^2}{2\mu}+V(r),$$ where we introduced $$\begin{aligned} &&M=m_1+m_2,\quad \vec{R}=\frac{m_1\vec{r}_1+m_2\vec{r}_2}{M},\quad \vec{P}=\vec{p}_1+\vec{p}_2, \\ &&\mu=\frac{m_1m_2}{M},\quad \vec{r}=\vec{r}_1-\vec{r}_2,\quad \vec{p}=\frac{m_2\vec{p}_1-m_1\vec{p}_2}{M}.\end{aligned}$$ As in the single-particle case, the electromagnetic interaction is introduced via minimal coupling, $i\partial/\partial t\to i\partial/\partial t-q_1\phi_1-q_2\phi_2$, $\vec{p}_i\to\vec{p}_i-q_i \vec{A}_i$, resulting in the interaction Hamiltonians $$\begin{aligned} H_1(t)&=&\sum_{i=1}^2\left[-\frac{q_i}{2m_i}(\vec{p}_i\cdot\vec{A_i} +\vec{A_i}\cdot\vec{p}_i)+q_i\phi_i\right],\\ H_2(t)&=&\sum_{i=1}^2\frac{q_i^2}{2m_i}\vec{A_i}^2,\end{aligned}$$ where $(\phi_i,\vec{A_i})=(\phi(\vec{r}_i,t),\vec{A}(\vec{r}_i,t))$. In order to keep the expressions as simple as possible, we will make some simplifying assumptions and quote the general result at the end. 
First of all, we do not consider the spin of the constituents, [*i.e.*]{}, we omit an interaction term $$-\sum_i\vec{\mu}_i\cdot\vec{B}_i,\quad \vec{B}_i=\vec{\nabla}_i\times \vec{A}_i,$$ where $\vec{\mu}_i$ is an intrinsic magnetic dipole moment of the $i$th constituent. Secondly, we take equal masses for the constituents, $m_1=m_2=m=\frac{1}{2}M$, and assume that one has charge $q_1=e>0$ while the second one is neutral, $q_2=0$. Finally, as a result of the gauge-invariance property we perform the calculation within the Coulomb gauge, $\phi_i=0$.[^6] With these preliminaries, the Hamiltonian reads $$H=H_0-\frac{e}{M}(\vec{p}_1\cdot\vec{A}_1+\vec{A}_1\cdot\vec{p}_1) +\frac{e^2}{M}\vec{A_1}^2.$$ The $S$-matrix element is obtained in complete analogy to the previous section within the framework of ordinary time-dependent perturbation theory: $$\label{scp} S_{fi}=S_{fi}^{cont}+S_{fi}^{dc}+S_{fi}^{cc},$$ where the seagull contribution results from the sum of the individual contact terms, while the direct-channel and crossed-channel contributions are more complicated than in the single-particle case, since they now also involve excitations of the composite object. Using $\vec{r}_1=\vec{R}+\frac{1}{2}\vec{r}$, one obtains for the contact contribution $$\label{tficoncoms} t_{fi}^{cont}=-\vec{\epsilon}\cdot\vec{\epsilon}\,'^* \frac{2e^2}{M} \int d^3 r |\phi_0(\vec{r})|^2 \exp\left[i(\vec{q}-\vec{q}\,')\cdot\frac{\vec{r}}{2}\right].$$ Since $q_2=0$, the integral is just the charge form factor $F[(\vec{q}-\vec{q}\,')^2]$ of the ground state, $$F(\vec{q}\,^2)=1-\frac{1}{6}r_E^2\vec{q}\,^2+\cdots.$$ We note that for a composite object, in general, the contact interactions of the constituents do not yet generate the complete Thomson limit. However, it is possible to make a unitary transformation such that the total Thomson amplitude is generated by a contact term, making the composite object look very similar to the point object [@Jennings_87].
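The statement that the contact term probes the charge form factor can be made concrete with a toy ground state. In the sketch below (Python; the Gaussian density with size parameter $a$ is our assumption, not part of the text's general derivation), the charged constituent sits at $\vec{r}_1=\vec{R}+\frac{1}{2}\vec{r}$, so its mean-square charge radius is $r_E^2=\langle r^2\rangle/4=3a^2/8$, and a radial quadrature reproduces the expansion $F=1-\frac{1}{6}r_E^2\vec{q}\,^2+\cdots$:

```python
import numpy as np

a = 1.0                               # Gaussian size parameter (toy model)
r = np.linspace(0.0, 12.0 * a, 4001)  # radial grid; density is ~0 at the edge
h = r[1] - r[0]

def density(r):
    """Normalized ground-state density |phi_0(r)|^2 ~ exp(-r^2/a^2)."""
    return np.exp(-(r / a) ** 2) / (np.pi ** 1.5 * a ** 3)

def form_factor(k):
    """F = int d^3r |phi_0|^2 exp(i k.r/2), reduced to a radial integral.

    The angular average of exp(i k.r/2) is sin(kr/2)/(kr/2); note numpy's
    sinc is normalized, sinc(x) = sin(pi x)/(pi x).
    """
    f = 4.0 * np.pi * r ** 2 * density(r) * np.sinc(k * r / (2.0 * np.pi))
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoidal rule

rE2 = 3.0 * a ** 2 / 8.0              # <(r/2)^2> for this density
for k in (0.1, 0.5):
    print(form_factor(k), 1.0 - rE2 * k ** 2 / 6.0)   # agree to O(k^4)
```

For this density the integral is known in closed form, $F=\exp(-k^2a^2/16)$, which the quadrature reproduces and whose leading expansion is exactly $1-\frac{1}{6}r_E^2 k^2$.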
The second contribution is evaluated by inserting a complete set of states, $$\begin{aligned} \label{sdccc} S_{fi}^{dc+cc} &=&-2\pi i\delta(E_f+\omega'-E_i-\omega)\int d^3 P\sum_n\nonumber \\ &&\times\left( \frac{<\!\!\vec{p}_f,0|H^{em}_1|\vec{P},n\!\!><\!\!\vec{P},n| H^{abs}_1|\vec{p}_i,0\!\!>}{E_f+\omega'- E_n(\vec{P})}+ \frac{<\!\!\vec{p}_f,0|H^{abs}_1|\vec{P},n\!\!><\!\!\vec{P},n| H^{em}_1|\vec{p}_i,0\!\!>}{E_f-\omega-E_n(\vec{P})}\right),\nonumber\\\end{aligned}$$ where, in the framework of Eq. (\[tpp\]), the energy of an excited state with intrinsic energy $\omega_n$ moving with c.m. momentum $\vec{P}$ is given by $$E_n(\vec{P})=\frac{\vec{P}\,^2}{2M}+\omega_n.$$ In Coulomb gauge, the corresponding Hamiltonians for absorption and emission of photons, respectively, read $$H_1^{abs}=-\frac{2e}{M}N(\omega) \hat{\vec{p}}_1\cdot\vec{\epsilon} \exp(i\vec{q}\cdot\vec{r}_1),\quad H_1^{em}=-\frac{2e}{M}N(\omega') \hat{\vec{p}}_1\cdot\vec{\epsilon}\,'^\ast \exp(-i\vec{q}\,'\cdot\vec{r}_1).$$ As in the point-object case, $S_{fi}$ is symmetric under photon crossing $(\omega,\vec{q})\leftrightarrow (-\omega',-\vec{q}\,')$ and $\vec{\epsilon}\leftrightarrow\vec{\epsilon}\,'^\ast$. The low-energy expansion of Eq. (\[sdccc\]) is obtained by expanding the vector potentials and the denominators in $\omega$ and $\omega'$. The explicit calculation is beyond the scope of the present treatment, and we will only quote the general result at the end. However, we find it instructive to consider the limit $\omega\to 0$: $$\begin{aligned} \left.t_{fi}^{dc+cc}\right|_{\omega=0}&=& \frac{4e^2}{M^2}\sum_n\frac{1}{\Delta\omega_n} \left( <\!0|\vec{p}\cdot\vec{\epsilon}\,'^\ast |n\!><\!n|\vec{p}\cdot\vec{\epsilon}|0\!> +<\!0|\vec{p}\cdot\vec{\epsilon} |n\!><\!n|\vec{p}\cdot\vec{\epsilon}\,'^\ast|0\!>\right),\end{aligned}$$ where $\Delta\omega_n=\omega_n-\omega_0$. The matrix elements involve internal degrees of freedom only.
Making use of $\vec{p}=i\mu[H_0,\vec{r}\,]$ and applying $H_0$ appropriately to the right or left, the expression simplifies to $$\begin{aligned} \label{tdccccpt} \left.t_{fi}^{dc+cc}\right|_{\omega=0}&=& -i\frac{4e^2\mu}{M^2}\sum_n \left( <\!0|\vec{r}\cdot\vec{\epsilon}\,'^\ast |n\!><\!n|\vec{p}\cdot\vec{\epsilon}|0\!> -<\!0|\vec{p}\cdot\vec{\epsilon} |n\!><\!n|\vec{r}\cdot\vec{\epsilon}\,'^\ast|0\!>\right)\nonumber\\ &=&-i\frac{4e^2\mu}{M^2}<\!0|[\vec{r}\cdot\vec{\epsilon}\,'^\ast, \vec{p}\cdot\vec{\epsilon}]|0\!> = \frac{e^2}{M}\vec{\epsilon}\,'^\ast\cdot\vec{\epsilon},\end{aligned}$$ where, again, we used the completeness relation, $[\vec{a}\cdot\hat{\vec{r}},\vec{b}\cdot\hat{\vec{p}}]=i\vec{a}\cdot\vec{b}$, and $\mu=M/4$. Combining this result with the contact contribution of Eq. (\[tficoncoms\]) yields the correct Thomson limit also for a composite system. Indeed, it has been shown a long time ago in the more general framework of quantum field theory that the scattering of photons in the limit of zero frequency is correctly described by the classical Thomson amplitude [@Thirring_50; @Kroll_54; @Low_54; @GellMann_54]. We will come back to this point in the next section. 
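The completeness argument of Eq. (\[tdccccpt\]) can be checked numerically in a toy model. The sketch below (Python; a one-dimensional harmonic oscillator with $\mu=\omega_0=1$ and $\hbar=1$ is our choice, not the text's) builds $\hat{x}$ and $\hat{p}$ in a truncated number basis, verifies $<\!0|[\hat{x},\hat{p}]|0\!>=i$, and checks that the zero-frequency sum $2\sum_n |<\!n|\hat{p}|0\!>|^2/\Delta\omega_n$ equals $\mu$; with $\mu=M/4$ this turns the prefactor $4e^2/M^2$ into the Thomson amplitude $e^2/M$:

```python
import numpy as np

N = 40                                   # basis truncation
mu, w0 = 1.0, 1.0                        # reduced mass and frequency (hbar = 1)

a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # annihilation operator
x = (a + a.T) / np.sqrt(2.0 * mu * w0)          # position operator
p = 1j * np.sqrt(mu * w0 / 2.0) * (a.T - a)     # momentum operator

# completeness step: <0|[x, p]|0> = i (exact here, since only n = 1 contributes)
comm00 = (x @ p - p @ x)[0, 0]

# zero-frequency sum over excitations; Delta w_n = n * w0 for the oscillator
s = 2.0 * sum(abs(p[n, 0]) ** 2 / (n * w0) for n in range(1, N))

print(comm00, s)   # -> i and mu: the dc+cc terms restore the Thomson limit
```

The sum is saturated entirely by the first excited state, which is the oscillator version of the statement that the energy denominators cancel after using $\vec{p}=i\mu[H_0,\vec{r}\,]$.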
Beyond the Thomson limit, we only quote the nonrelativistic $T$-matrix element for Compton scattering off a spin-zero particle of mass $M$ and total charge $Ze$, expanded to second order in the photon energy: $$\label{tfize} t_{fi}=\vec{\epsilon}\,'^\ast\cdot\vec{\epsilon} \left(-\frac{(Ze)^2}{M}+4\pi\bar{\alpha}_E\omega\omega'\right) +4\pi\bar{\beta}_M\vec{q}\,'\times\vec{\epsilon}\,'^\ast \cdot\vec{q}\times\vec{\epsilon},$$ where $$\begin{aligned} \label{alphab} \bar{\alpha}_E&=&\frac{\alpha Z^2 r^2_E}{3M}+2\alpha\sum_{n\neq 0} \frac{|\!\!<\!\!n|D_z|0\!\!>\!\!|^2}{E_n-E_0},\\ \label{betab} \bar{\beta}_M&=&-\frac{\alpha <\!\!\vec{D}\,^2\!\!>}{2M}-\frac{\alpha}{6} <\!\!\sum_{i=1}^N \frac{q^2_i \vec{r}_i\,^2}{m_i}\!\!>+2\alpha\sum_{n\neq 0} \frac{|\!\!<\!\!n|M_z|0\!\!>\!\!|^2}{E_n-E_0}\end{aligned}$$ denote the electric ($\bar{\alpha}_E$) and magnetic ($\bar{\beta}_M$) polarizabilities of the system. In these equations $$\vec{D}=\sum_{i=1}^N q_i(\vec{r}_i-\vec{R})$$ refers to the intrinsic electric dipole operator and $$\vec{M}= \sum_{i=1}^N \left[ \frac{\hat{q}_i}{2m_i}(\vec{r}_i-\vec{R})\times (\vec{p}_i-\frac{m_i}{M}\vec{P}) +\vec{\mu}_i\right]$$ to the magnetic dipole operator, where the possibility of magnetic moments of the constituents has now been included. The electromagnetic polarizabilities describe the response of the internal degrees of freedom of a system to a small external electromagnetic perturbation. For example, in atomic physics the second term of Eq. (\[alphab\]) is related to the quadratic Stark effect describing the energy shift of an atom placed in an external electric field. We will come back to an interpretation of the electric polarizability in terms of a classical analogue in the next subsection. Finally, let us discuss the influence of the electromagnetic polarizabilities on the differential Compton-scattering cross section. 
We restrict ourselves to the leading term due to the interference of the Thomson amplitude with the polarizability contribution. The evaluation of that term requires, in addition to Eq. (\[sumpol\]), the sum $$\sum_{\lambda,\lambda'}Re\{\vec{\epsilon}\,'^\ast\cdot\vec{\epsilon} \hat{q}\,'\times\vec{\epsilon}\,'\cdot\hat{q}\times\vec{\epsilon}\,^\ast\} =2\cos(\Theta),$$ and one obtains $$\begin{aligned} \label{dsdopol} \frac{d\sigma}{d\Omega}&=& \left[1-4\frac{\omega}{M}\sin^2\left(\frac{\Theta}{2}\right) +{\cal O}\left(\frac{\omega^2}{M^2}\right)\right] \left\{\frac{1}{2}[1+\cos^2(\Theta)] \frac{\alpha^2 Z^4}{M^2}\right.\nonumber\\ &&\left. -[1+\cos^2(\Theta)]\frac{\alpha Z^2}{M}\bar{\alpha}_E \omega\omega' -2\cos(\Theta)\frac{\alpha Z^2}{M}\bar{\beta}_M\omega\omega' +{\cal O}(\omega^2\omega'^2)\right\}.\end{aligned}$$ The differential cross sections at $\Theta=0^\circ,90^\circ$, and $180^\circ$ are sensitive to $\bar{\alpha}_E+\bar{\beta}_M$, $\bar{\alpha}_E$, and $\bar{\alpha}_E-\bar{\beta}_M$, respectively. Classical interpretation ------------------------ The prototype example for illustrating the concept of an electric polarizability is a system of two harmonically bound point particles of mass $m$ with opposite charges $+q$ and $-q$ [@Friar_89; @Holstein_90]: $$H_0=\frac{\vec{p}_1\,^2}{2m}+\frac{\vec{p}_2\,^2}{2m}+\frac{\mu\omega^2_0}{2} \vec{r}\,^2,\quad\mu=\frac{m}{2},\quad \vec{r}=\vec{r}_1-\vec{r}_2,$$ where we neglect the Coulomb interaction between the charges. 
If a static, uniform, external electric field $$\vec{E}=E_0\hat{e}_z$$ is applied to this system, the equilibrium position is determined by $$\mu\ddot{z}=-\mu\omega_0^2z+qE_0 \stackrel{!}{=}0,$$ leading to $$z_0=\frac{qE_0}{\mu\omega_0^2}.$$ The electric polarizability $\alpha_E$ is defined via the relation between the induced electric dipole moment and the external field[^7] $$\vec{p}=qz_0\hat{e}_z\equiv 4\pi\alpha_E \vec{E}.$$ For the harmonically bound system, $\alpha_E$ is proportional to the inverse of the spring constant, $$\alpha_E=\frac{q^2}{4\pi}\frac{1}{\mu\omega^2_0},$$ [*i.e.*]{}, it is a measure of the stiffness or rigidity of the system [@Holstein_90]. The potential energy associated with the induced electric dipole moment reads $$\label{valpha} V=-2\pi\alpha_E\vec{E}^2=-\frac{1}{2}\vec{p}\cdot\vec{E},$$ where the factor $\frac{1}{2}$ results from the interaction of an induced rather than a permanent electric dipole moment with the external field. Similarly, the potential of an induced magnetic dipole, $\vec{m}=4\pi\beta_M \vec{H}$, is given by $$\label{vbeta} V=-2\pi\beta_M \vec{H}^2.$$ Compton scattering in quantum field theory ========================================== Now that we have discussed Compton scattering in the framework of [*nonrelativistic*]{} quantum mechanics, we will turn to a description in the context of quantum field theory. Generally, we will consider the case of the nucleon but will restrict ourselves to the pion whenever this allows for a (substantial) simplification without loss of generality. We will direct our attention to the influence of hadron structure on the description of electromagnetic processes. In particular, we will emphasize the power of Ward-Takahashi identities [@Ward_50; @Takahashi_57]. First of all, we will describe the simplest electromagnetic vertex, namely, the interaction of a single photon with a charged pion. 
Using the method of Gell-Mann and Goldberger [@GellMann_54], we will derive the low-energy and low-momentum behavior of the (virtual) Compton-scattering tensor based upon Lorentz invariance, gauge invariance, crossing symmetry, and the discrete symmetries. Finally, we will consider Compton scattering off a pion to illustrate why off-shell effects are not directly observable. Electromagnetic vertex of a charged pion ---------------------------------------- For the purpose of illustrating the power of symmetry considerations, we explicitly discuss the most general electromagnetic vertex of an off-shell pion. We will formally introduce the concept of [*form functions*]{} by parameterizing the electromagnetic three-point Green’s function of a pion. In this context, we distinguish between [*form factors*]{} and [*form functions*]{}, the former representing observables, which is, in general, not true for form functions. Let us define the three-point Green’s function of two unrenormalized pion field operators $\pi^+(x)$ and $\pi^-(y)$ and the electromagnetic current operator $J^\mu(z)$ as[^8] $$\label{tpgf} G^\mu(x,y,z)=<\!\! 0|T\left[\pi^+(x) \pi^-(y)J^{\mu}(z)\right]|0\!\!>,$$ and consider the corresponding momentum-space Green’s function $$\label{gmu} (2\pi)^4 \delta^4(p_f-p_i-q) G^{\mu}(p_f,p_i)= \int d^4x\, d^4y\, d^4z\, e^{i(p_f \cdot x - p_i \cdot y-q\cdot z )} G^\mu(x,y,z),$$ where $p_i$ and $p_f$ are the four-momenta corresponding to the pion lines entering and leaving the vertex, respectively, and $q=p_f-p_i$ is the momentum transfer at the vertex. 
Defining the renormalized three-point Green’s function $G^\mu_R$ as $$\label{gmur} G^{\mu}_R(p_f,p_i) = Z^{-1}_{\phi} Z^{-1}_J G^{\mu}(p_f,p_i),$$ where $Z_{\phi}$ and $Z_J$ are renormalization constants,[^9] we obtain the one-particle irreducible, renormalized three-point Green’s function by removing the propagators at the external lines, $$\label{gammamuirr} \Gamma^{\mu,irr}_R(p_f,p_i) = [i \Delta_R(p_f)]^{-1} G^{\mu}_R(p_f,p_i)[i\Delta_R(p_i)]^{-1},$$ where $\Delta_R(p)$ is the full, renormalized propagator. From a perturbative point of view, $\Gamma^{\mu,irr}_R$ is made up of those Feynman diagrams which cannot be disconnected by cutting any one single internal line. In the following we will discuss a few model-independent properties of $\Gamma^{\mu,irr}_R(p_f,p_i)$. 1. Imposing Lorentz covariance, the most general parameterization of $\Gamma^{\mu,irr}_R$ can be written in terms of two independent four-momenta, $P^\mu=p_f^\mu+p_i^\mu$ and $q^\mu=p_f^\mu-p_i^\mu$, respectively, multiplied by Lorentz-scalar form functions $F$ and $G$ depending on three scalars, [*e.g.*]{}, $q^2$, $p_i^2$, and $p_f^2$, $$\label{par} \Gamma^{\mu,irr}_R(p_f,p_i) = (p_f+p_i)^{\mu} F(q^2,p_f^2,p_i^2) + (p_f-p_i)^{\mu} G(q^2,p_f^2,p_i^2).$$ 2. Time-reversal symmetry results in $$\label{trs} F(q^2,p_f^2,p_i^2)=F(q^2,p_i^2,p_f^2), \quad G(q^2,p_f^2,p_i^2)=-G(q^2,p_i^2,p_f^2).$$ In particular, from Eq. (\[trs\]) we conclude that $G(q^2,M^2_\pi,M^2_\pi)=0$. This, of course, corresponds to the well-known fact that a spin-0 particle has only one electromagnetic form factor, $F(q^2)$. 3. Using the charge-conjugation properties $J^\mu\mapsto-J^\mu$ and $\pi^+\leftrightarrow\pi^-$, it is straightforward to see that form functions of particles are just the negative of form functions of antiparticles. In particular, the $\pi^0$ does not have any electromagnetic form functions even off shell, since it is its own antiparticle. 4. 
Due to the hermiticity of the electromagnetic current operator, $F(q^2)$ is real in the spacelike region $q^2\leq 0$: $$\begin{aligned} (p_f+p_i)^\mu F^\ast(q^2)&=&<p_f|J^\mu(0)|p_i>^\ast =<p_i|{J^\mu}^\dagger(0)|p_f> =<p_i|J^\mu(0)|p_f>\\ &=&(p_i+p_f)^\mu F(q^2)\,\,\mbox{for}\,\, q^2\leq 0.\end{aligned}$$ 5. After writing out the various time orderings in Eq. (\[tpgf\]), let us consider the divergence $$\begin{aligned} \label{wtstart} \partial_\mu^z G^\mu(x,y,z)&=& <\!\!0|T[\pi^+(x)\pi^-(y)\partial_\mu J^\mu(z)]|0\!\!>\nonumber\\ &&+\delta(z^0-x^0)<\!\!0|T\{[J^0(z),\pi^+(x)]\pi^-(y)\}|0\!\!>\nonumber\\ &&+\delta(z^0-y^0)<\!\!0|T\{\pi^+(x)[J^0(z),\pi^-(y)]\}|0\!\!>.\end{aligned}$$ Current conservation at the operator level, $\partial_\mu J^\mu(z)=0$, together with the equal-time commutation relations of the electromagnetic charge-density operator with the pion field operators,[^10] $$\begin{aligned} \label{comrel} [J^0(x),\pi^-(y)] \delta (x^0-y^0) & = & \delta^4(x-y) \pi^-(y), \nonumber \\ {[}J^0(x),\pi^+(y)] \delta (x^0-y^0) & = & -\delta^4(x-y) \pi^+(y),\end{aligned}$$ are the basic ingredients for obtaining Ward-Takahashi identities [@Ward_50; @Takahashi_57] for electromagnetic processes. For example, we obtain from Eq. (\[wtstart\]) $$\label{partialg} \partial_\mu^z G^\mu(x,y,z)=\left[\delta^4(z-y)-\delta^4(z-x)\right] <\!\!0|T[\pi^+(x)\pi^-(y)]|0\!\!>.$$ Taking the Fourier transformation of Eq. (\[partialg\]), performing a partial integration, and repeating the same steps which lead from Eq. (\[gmu\]) to (\[gammamuirr\]), one obtains the celebrated Ward-Takahashi identity for the electromagnetic vertex $$\label{wti} q_{\mu} \Gamma^{\mu,irr}_R(p_f,p_i) = \Delta_R^{-1}(p_f)-\Delta_R^{-1}(p_i).$$ In general, this technique can be applied to obtain Ward-Takahashi identities relating Green’s functions which differ by insertions of the electromagnetic current operator. Inserting the parameterization of the irreducible vertex, Eq. 
(\[par\]), into the Ward-Takahashi identity, Eq. (\[wti\]), the form functions $F$ and $G$ are constrained to satisfy $$\label{constraint} (p_f^2-p_i^2) F(q^2,p_f^2,p_i^2)+q^2 G(q^2,p_f^2,p_i^2) = \Delta^{-1}_R(p_f)-\Delta^{-1}_R(p_i).$$ From Eq. (\[constraint\]) it can be shown that, given a consistent calculation of $F$, the propagator of the particle, $\Delta_R$, as well as the form function $G$ are completely determined (see Appendix A of Ref. [@Rudy_94] for details). The Ward-Takahashi identity thus provides an important consistency check for microscopic calculations. 6. As the simplest example, one may consider a structureless “point pion”: $$\Gamma^\mu(p_f,p_i)=(p_f+p_i)^\mu,\quad q_\mu \Gamma^\mu=p_f^2-p_i^2=(p_f^2-M_\pi^2)-(p^2_i-M_\pi^2),$$ [*i.e.*]{}, $F(q^2,p_f^2,p_i^2)=1$, $G(q^2,p_f^2,p_i^2)=0$. 7. As was already pointed out in Ref. [@Barton_65], use of $$\Gamma^\mu(p_f,p_i)=(p_f+p_i)^\mu F(q^2)$$ leads to an inconsistency, since the left-hand side of the corresponding Ward-Takahashi identity depends on $q^2$, whereas the right-hand side only depends on $p_f^2$ and $p^2_i$. The nucleon case is more complicated due to spin, and the most general form of the irreducible electromagnetic vertex can be expressed in terms of 12 operators and associated form functions. The interested reader is referred to Refs. [@Bincer_60; @Naus_87]. Finally, it is important to emphasize that the off-shell behavior of form functions is representation dependent, [*i.e.*]{}, form functions are, in general, not observable. In the context of a Lagrangian formulation, this can be understood as a result of field transformations. This does not render the previous discussion useless; rather, the Ward-Takahashi identities provide important consistency relations between the building blocks of a quantum-field-theoretical description. 
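The interplay between the constraint of Eq. (\[constraint\]) and the time-reversal property of Eq. (\[trs\]) can be made explicit in a short symbolic computation. The sketch below (using sympy) assumes a purely illustrative, hypothetical choice for $F$ and for the dressed inverse propagator; it solves the Ward-Takahashi constraint for $G$ and checks that the resulting form function is odd under $p_f^2\leftrightarrow p_i^2$ and vanishes on shell:

```python
import sympy as sp

# Invariants: q2 = q^2, pf2 = p_f^2, pi2 = p_i^2, M2 = M_pi^2; c is a
# free coupling of the toy model (hypothetical, for illustration only).
q2, pf2, pi2, M2, c = sp.symbols('q2 pf2 pi2 M2 c')

# Toy inputs: a form function F symmetric under p_f^2 <-> p_i^2, and a
# dressed inverse propagator with the usual on-shell normalization.
F = 1 + c * q2
Dinv = lambda p2: (p2 - M2) + c * (p2 - M2)**2

# Ward-Takahashi constraint, Eq. (constraint), solved for G:
#   (pf2 - pi2) F + q2 G = Dinv(pf2) - Dinv(pi2)
G = sp.simplify((Dinv(pf2) - Dinv(pi2) - (pf2 - pi2) * F) / q2)
print('G =', G)

# Time reversal, Eq. (trs): G is odd under p_f^2 <-> p_i^2 and
# vanishes at the on-shell point.
G_swapped = G.subs({pf2: pi2, pi2: pf2}, simultaneous=True)
assert sp.simplify(G + G_swapped) == 0
assert G.subs({pf2: M2, pi2: M2}) == 0
```

Any other symmetric $F$ and any $\Delta_R$ with the correct on-shell normalization would serve equally well; the point is merely that $G$ is completely fixed once $F$ and $\Delta_R$ are specified.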
Low-energy theorem for the Compton-scattering tensor ---------------------------------------------------- The Compton-scattering tensor $V^{\mu\nu}_{s_is_f}$ is defined through a Fourier transformation of the time-ordered product of two electromagnetic current operators evaluated between on-shell nucleon states:[^11] $$\begin{aligned} \label{cst} \lefteqn{(2\pi)^4\delta^4(p_f+q'-p_i-q) V^{\mu\nu}_{s_i s_f}(p_f,q';p_i,q)=} \nonumber\\ &&\int d^4x d^4y e^{-iq\cdot x}e^{iq'\cdot y} <\!N(p_f,s_f)|T[J^\mu(x)J^\nu(y)]|N(p_i,s_i)\!>.\end{aligned}$$ The relation to the invariant amplitude of real Compton scattering (RCS)[^12] is given by $$\label{mrcs} {\cal M}=-e^2\epsilon_\mu\epsilon'^\ast_\nu \left. V^{\mu\nu}_{s_i s_f}(p_f,q';p_i,q)\right|_{q^2=q'^2=0}.$$ In RCS, $V^{\mu\nu}_{s_is_f}$ can only be tested for a rather restricted range of variables $q^\mu$ and $q'^\mu$ and, furthermore, only the transverse components of $V^{\mu\nu}_{s_is_f}$ are accessible. The expression “virtual Compton scattering” (VCS) refers to the situation where one or both photons are virtual. We will primarily be concerned with the case $q^2<0$ and $q'^2=0$, which, [*e.g.*]{}, for the proton can be tested in the reaction $e^-p\to e^-p\gamma$. In the following, we will discuss the low-energy and small-momentum behavior of the Compton-scattering tensor. Our discussion will closely follow the method of Gell-Mann and Goldberger [@GellMann_54; @Kazes_59]. Let us first recall the distinction between $V^{\mu\nu}_{s_is_f}$ and $\Gamma^{\mu\nu}$,[^13] $$V^{\mu\nu}_{s_is_f}(p_f,q';p_i,q)= \bar{u}(p_f,s_f)\Gamma^{\mu\nu}(P,q',q)u(p_i,s_i),$$ where $P=p_f+p_i$. In $V^{\mu\nu}_{s_is_f}$ the nucleon is assumed to be on shell, $p_i^2=p_f^2=M^2$, whereas $\Gamma^{\mu\nu}$ is defined for arbitrary $p_i^2$ and $p_f^2$, [*i.e.*]{}, $\Gamma^{\mu\nu}$ is the analogue of $\Gamma^{\mu,irr}_R$ of Eq. (\[gammamuirr\]). 
We divide the contributions to $\Gamma^{\mu\nu}$ into two classes, $A$ and $B$, $$\Gamma^{\mu\nu}=\Gamma^{\mu\nu}_A+\Gamma^{\mu\nu}_B,$$ where class $A$ consists of the s- and u-channel pole terms and class $B$ contains all the other contributions. With this separation all terms which are irregular for $q^\mu\rightarrow 0$ (or $q'^\mu\rightarrow 0$) are contained in class $A$, whereas class $B$ is regular in this limit. Strictly speaking, one also assumes that there are no massless particles in the theory which could make a low-energy expansion in terms of kinematical variables impossible [@Low_54]. The contribution due to t-channel exchanges, such as a $\pi^0$, is taken to be part of class $B$. We express class $A$ in terms of the full renormalized propagator and the irreducible electromagnetic vertices, $$\label{gammaa} \Gamma^{\mu\nu}_A=\Gamma^\nu(p_f,p_f+q')iS(p_i+q) \Gamma^\mu(p_i+q,p_i)+\Gamma^\mu(p_f,p_f-q)iS(p_i-q') \Gamma^\nu(p_i-q',p_i).$$ Note that $\Gamma^{\mu\nu}_A$ is symmetric under photon crossing, $q\leftrightarrow -q'$ and $\mu\leftrightarrow\nu$, [*i.e.*]{}, $\Gamma_A^{\mu\nu}(P,q,q')=\Gamma_A^{\nu\mu}(P,-q',-q)$. Since this is also the case for the total $\Gamma^{\mu\nu}$, class $B$ must be separately crossing symmetric [@GellMann_54]. In analogy to the previous section, Ward-Takahashi identities can be obtained for $\Gamma^\mu$ and $\Gamma^{\mu\nu}$, $$\begin{aligned} \label{wtn1} q_\mu \Gamma^\mu(p_f,p_i)&=&S^{-1}(p_f)-S^{-1}(p_i),\\ \label{wtn2} q_\mu \Gamma^{\mu\nu}(P,q',q)&=& i\left[S^{-1}(p_f)S(p_f-q)\Gamma^\nu(p_f-q,p_i) -\Gamma^\nu(p_f,p_i+q)S(p_i+q)S^{-1}(p_i)\right].\end{aligned}$$ Using Eq. (\[wtn1\]), one obtains the following constraint for class $A$ as imposed by gauge invariance: $$\begin{aligned} \label{cgigammaa} q_\mu \Gamma^{\mu\nu}_A(P,q,q')&=&i\left[\Gamma^\nu (p_f,p_f+q') -\Gamma^\nu(p_i-q',p_i) +S^{-1}(p_f)S(p_i-q')\Gamma^\nu(p_i-q',p_i)\right.\nonumber\\ &&\left. 
-\Gamma^\nu(p_f,p_f+q')S(p_i+q)S^{-1}(p_i)\right].\end{aligned}$$ Eqs. (\[wtn2\]) and (\[cgigammaa\]) can now be combined to obtain a constraint for class $B$ $$\label{constraintb} q_\mu \Gamma^{\mu\nu}_B=q_\mu(\Gamma^{\mu\nu}-\Gamma^{\mu\nu}_A) =i[\Gamma^\nu(p_i-q',p_i)-\Gamma^\nu(p_f,p_f+q')].$$ At this point, we make use of the freedom of choosing a convenient representation for $\Gamma^\mu$ below the pion production threshold, $$\label{gammaeff} \Gamma^\mu_{\mbox{\footnotesize eff}}(p_f,p_i) =\gamma^\mu F_1(q^2)+i\frac{\sigma^{\mu\nu}q_\nu}{2M} F_2(q^2) +q^\mu q\hspace{-.5em}/\hspace{.5em} \frac{1-F_1(q^2)}{q^2},$$ where $F_1$ and $F_2$ are the Dirac and Pauli form factors of the proton, respectively. The fundamental reason for this assumption is the fact that one can perform transformations of the fields in an effective Lagrangian which do not change the physical observables but which allow one, to a certain extent, to transform away the off-shell dependence at the vertices. We will come back to this point in the next section. Eq. (\[gammaeff\]) contains on-shell quantities only, and satisfies the Ward-Takahashi identity in combination with the free Feynman propagator, $$q_\mu\Gamma^\mu_{\mbox{\footnotesize eff}}=q\hspace{-.5em}/\hspace{.5em} =S^{-1}_F(p_f)-S^{-1}_F(p_i).$$ In this representation the constraint for class $B$ is particularly simple: $$\label{constraintbs} q_\mu \Gamma^{\mu\nu}_B=0.$$ In order to solve this equation, one first makes an ansatz for class $B$, $$\label{gammabansatz} \Gamma^{\mu\nu}_B(P,q',q)=a^{\mu,\nu}(P,q')+b^{\mu\rho,\nu}(P,q')q_\rho +\cdots$$ which is inserted into Eq. (\[constraintbs\]), $$\label{consb3} 0=a^{\mu,\nu}(P,q')q_\mu +b^{\mu\rho,\nu}(P,q')q_\mu q_\rho+ \cdots.$$ The constraints due to crossing symmetry and the discrete symmetries are imposed, and Eq. (\[consb3\]) is solved as a power series in $q$ and $q'$. 
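That $\Gamma^\mu_{\mbox{\footnotesize eff}}$ of Eq. (\[gammaeff\]) satisfies the free Ward-Takahashi identity can also be checked numerically with explicit Dirac matrices. The sketch below is illustrative only; the numerical values chosen for $M$, $F_1$, $F_2$, and the spacelike momentum transfer $q^\mu$ are arbitrary:

```python
import numpy as np

# Dirac matrices in the standard representation; metric (+,-,-,-).
I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
metric = np.diag([1.0, -1.0, -1.0, -1.0])

def slash(q):
    # q-slash = q_mu gamma^mu, the lower index supplied by the metric
    ql = metric @ q
    return sum(ql[m] * gam[m] for m in range(4))

def sigma(mu, nu):
    return 0.5j * (gam[mu] @ gam[nu] - gam[nu] @ gam[mu])

# Arbitrary illustrative inputs (not fitted values).
M, F1, F2 = 0.938, 0.74, 1.25
q = np.array([0.1, 0.3, -0.2, 0.4])    # spacelike momentum transfer
q2 = q @ metric @ q
assert q2 < 0

# Effective vertex of Eq. (gammaeff), one 4x4 matrix per index mu.
Gamma = [F1 * gam[mu]
         + 1j * F2 / (2 * M) * sum(sigma(mu, nu) * (metric @ q)[nu]
                                   for nu in range(4))
         + q[mu] * slash(q) * (1 - F1) / q2
         for mu in range(4)]

# q_mu Gamma^mu must reduce to q-slash = S_F^{-1}(p_f) - S_F^{-1}(p_i).
lhs = sum((metric @ q)[mu] * Gamma[mu] for mu in range(4))
assert np.allclose(lhs, slash(q))
print('q_mu Gamma_eff^mu = q-slash holds numerically')
```

The $\sigma^{\mu\nu}q_\nu$ term drops out of the contraction by antisymmetry, and the remaining pieces combine to $F_1\, q\hspace{-.5em}/\hspace{.5em} + (1-F_1)\,q\hspace{-.5em}/\hspace{.5em}$, independently of $F_1$ and $F_2$.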
At this point, off-shell kinematics is required in order to be able to treat $q$, $q'$, and $P$ as completely independent. For example, using off-shell kinematics, six invariant scalars can be formed from $q$, $q'$, and $P$, whereas for the on-shell case, $p_i^2=p_f^2=M^2$, only four of these combinations are independent. A detailed description of this procedure can be found in Ref. [@Scherer_96], and we will only summarize the key results. Class $B$ contains no constant terms and no terms at ${\cal O}(q)$ or ${\cal O}(q')$. Combining this observation with an evaluation of class $A$ generates, for RCS, the LET of Refs. [@Low_54; @GellMann_54]. At ${\cal O}(2)$ one finds two terms which can be related to the electromagnetic polarizabilities $\bar{\alpha}_E$ and $\bar{\beta}_M$. The framework is general enough to also account for virtual photons. In particular, no new polarizability appears in the longitudinal part. In fact, this result has also been obtained in the framework of effective Lagrangians in Ref. [@Lvov_93]. The result for the RCS amplitude can be summarized as $$\begin{aligned} \label{mresrcs} {\cal M}&=&-ie^2\bar{u}(p_f,s_f) \left[\epsilon'^\ast\cdot\Gamma(-q') S_F(p_i+q)\epsilon\cdot\Gamma(q) +\epsilon\cdot\Gamma(q) S_F(p_i-q')\epsilon'^\ast\cdot\Gamma(-q')\right. \nonumber\\ &&-\frac{4\pi}{e^2}\bar{\beta}_M (\epsilon\cdot\epsilon'^\ast q\cdot q' -\epsilon\cdot q'\epsilon'^\ast\cdot q) +\frac{\pi}{e^2 M^2}(\bar{\alpha}_E+\bar{\beta}_M) (\epsilon\cdot\epsilon'^\ast P\cdot q P\cdot q' +\epsilon\cdot P\epsilon'^\ast\cdot Pq\cdot q'\nonumber\\ &&\left. 
-\epsilon\cdot P\epsilon'^\ast\cdot q P\cdot q' -\epsilon\cdot q'\epsilon'^\ast\cdot P P\cdot q) +{\cal O}(3)\right]u(p_i,s_i),\end{aligned}$$ where we introduced the abbreviation $$\label{gmureal} \Gamma^\mu(q)=\gamma^\mu +i\frac{\sigma^{\mu\nu}q_\nu}{2M}\kappa, \quad \kappa=1.79.$$ Here, the electromagnetic polarizabilities are defined with respect to “Born terms” which have been calculated with the vertices of Eqs.  (\[gammaeff\]) or (\[gmureal\]) for RCS. In particular, with such a choice the Born terms are separately gauge invariant. As a matter of fact, this is not always the case, since, in principle, one could have started to calculate the Born terms with on-shell equivalent electromagnetic vertices containing the Sachs form factors $G_E$ and $G_M$ or the Barnes form factors $H_1$ and $H_2$. Then class $B$ would have taken a different form even though the final result for the total amplitude, of course, has to be the same. For a more detailed discussion of the ambiguity of defining “Born terms,” see Sec. IV of Ref. [@Scherer_96] as well as Ref. [@Fearing_98]. Table \[rcspoln\] contains a selection of results of various models for the electromagnetic polarizabilities which have to be compared with the empirical numbers of Tables \[rcspolpemp\] and \[rcspolnemp\]. Within the framework of an effective Lagrangian it was shown in Ref. [@Lvov_93] that, in a covariant approach, the Compton polarizabilities $\bar{\alpha}_E$ and $\bar{\beta}_M$ coincide with the parameters determining the energy shifts in Eqs. (\[valpha\]) and (\[vbeta\]). This should be compared with a nonrelativistic treatment, where, say, in the quadratic Stark effect only the second term of Eq. (\[alphab\]) appears in the energy shift. Whenever comparing different results, the original references should be consulted in order to see whether the predictions have been obtained in a nonrelativistic or a covariant framework. 
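Separate gauge invariance can also be checked directly for the two polarizability structures appearing in Eq. (\[mresrcs\]): replacing $\epsilon\to q$ or $\epsilon'^\ast\to q'$ must annihilate each of them individually. A numerical spot check with random four-vectors (purely illustrative kinematics; no mass-shell conditions are needed for this property):

```python
import numpy as np

metric = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ metric @ b

rng = np.random.default_rng(0)
q, qp, P = (rng.normal(size=4) for _ in range(3))
eps, epsp = (rng.normal(size=4) for _ in range(2))

def T_beta(e, ep):
    # structure multiplying beta_M in Eq. (mresrcs)
    return dot(e, ep) * dot(q, qp) - dot(e, qp) * dot(ep, q)

def T_alpha_beta(e, ep):
    # structure multiplying (alpha_E + beta_M) in Eq. (mresrcs)
    return (dot(e, ep) * dot(P, q) * dot(P, qp)
            + dot(e, P) * dot(ep, P) * dot(q, qp)
            - dot(e, P) * dot(ep, q) * dot(P, qp)
            - dot(e, qp) * dot(ep, P) * dot(P, q))

# Gauge invariance: each structure vanishes when a polarization vector
# is replaced by the corresponding photon momentum.
for f in (T_beta, T_alpha_beta):
    assert abs(f(q, epsp)) < 1e-9     # eps      -> q
    assert abs(f(eps, qp)) < 1e-9     # eps'^ast -> q'
print('both polarizability structures are separately gauge invariant')
```

This is precisely the sense in which the separation into Born terms plus polarizability terms is gauge invariant piece by piece for the choice of vertices made above.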
The sum of the electric and magnetic polarizabilities is constrained by the Baldin sum rule [@Baldin_60], $$\label{baldinsumrule} (\bar{\alpha}_E+\bar{\beta}_M)_N=\frac{1}{2\pi^2}\int_{\omega_{thr}}^\infty \frac{\sigma^{\mbox{\footnotesize tot}}_N(\omega)}{ \omega^2}d\omega,$$ where $\sigma^{\mbox{\footnotesize tot}}_N(\omega)$ is the total photoabsorption cross section. Eq. (\[baldinsumrule\]) is obtained via a once-subtracted dispersion relation for the spin-averaged forward Compton amplitude using the optical theorem together with the LET. An evaluation of the integral requires an extrapolation of available data to infinity (the results are given in units of $10^{-4}$ fm$^3$), $$\begin{aligned} \label{baldin} (\bar{\alpha}_E+\bar{\beta}_M)_p&=&14.2 \pm 0.3,\quad\cite{Damashek_70} \nonumber\\ (\bar{\alpha}_E+\bar{\beta}_M)_n&=& 15.8 \pm 0.5,\quad\cite{Lvov_79} \nonumber\\ (\bar{\alpha}_E+\bar{\beta}_M)_p&=&13.69\pm 0.14,\quad\cite{Babusci_98a} \nonumber\\ (\bar{\alpha}_E+\bar{\beta}_M)_n&=&14.40\pm 0.66,\quad\cite{Babusci_98a}\end{aligned}$$ where the last two results correspond to the most recent analysis. Finally, we mention that four spin polarizabilities $\gamma_i$ parameterize the amplitude at ${\cal O}(3)$ [@Ragusa_93]. These spin-dependent terms have recently received considerable attention but a discussion of these structure constants is beyond the scope of the present treatment and we refer the interested reader to Refs. [@Hemmert_98; @Drechsel_98; @Tonnison_98; @Babusci_98b]. Compton scattering and off-shell effects ---------------------------------------- The issue of how to treat particles with “internal” structure as soon as they do not satisfy on-mass-shell kinematics has a long history. As an example, we have seen in Sec. III.A that the electromagnetic vertex of a pion involving off-mass-shell momenta is more complicated than for asymptotically free states. 
It is therefore natural to ask how such off-shell effects show up in observables and, in particular, whether they can be extracted from empirical information. Several attempts have been made to calculate off-shell effects within microscopic models and to estimate their importance in physical observables (see, [*e.g.*]{}, Refs. [@Bincer_60; @Naus_87; @Nyman_70; @Tiemeijer_90; @Kondratyuk_98]). We will argue in this section that off-shell effects are not only model dependent but also representation dependent and thus not directly measurable. In studying off-shell effects, we find that nucleon spin is an inessential complication. We use Compton scattering off a pion in the framework of chiral perturbation theory (ChPT) only as a [*vehicle*]{} to illustrate the point we want to make. Our conclusions are more general, [*i.e.*]{}, they apply to other processes as well, and do not rely on chiral symmetry. We choose ChPT, since it provides a complete and consistent field-theoretical framework. ### The chiral Lagrangian and field redefinitions In this section we shall give a brief introduction to those aspects of chiral perturbation theory [@Weinberg_79; @Gasser_84; @Gasser_85] which are relevant for a discussion of off-shell Green’s functions. We will introduce the concept of field transformations since it turns out to be important for interpreting the meaning of form functions. In the limit of massless $u$, $d$, and $s$ quarks, the QCD Lagrangian exhibits a chiral $\mbox{SU(3)}_L\times\mbox{SU(3)}_R$ symmetry which is assumed to be spontaneously broken to a subgroup isomorphic to $\mbox{SU(3)}_V$, giving rise to eight massless pseudoscalar Goldstone bosons with vanishing interactions in the limit of zero energies. In ChPT the chiral symmetry is mapped onto the most general effective Lagrangian for the interaction of these Goldstone bosons. 
The corresponding Lagrangian is organized in a momentum expansion [@Gasser_84; @Gasser_85; @Fearing_96], $$\label{leff} {\cal L}_{\mbox{\footnotesize eff}} ={\cal L}_2+{\cal L}_4+ {\cal L}_6 +\cdots,$$ where the index $2n$ denotes $2n$ derivatives. Couplings to external fields, such as the electromagnetic field, as well as explicit symmetry breaking due to the finite quark masses, can be systematically incorporated into the effective Lagrangian. Weinberg’s power counting scheme allows for a classification of Feynman diagrams by establishing a relation between the momentum expansion and the loop expansion. Thus, a perturbative scheme is set up in terms of external momenta which are small compared to some scale. Covariant derivatives and quark-mass terms are counted as ${\cal O}(p)$ and ${\cal O}(p^2)$, respectively, in the power counting scheme. The most general chiral Lagrangian at ${\cal O}(p^2)$ is given by $$\label{l2} {\cal L}_2 = \frac{F_0^2}{4} \mbox{Tr} \left[ D_{\mu} U (D^{\mu}U)^{\dagger} +\chi U^{\dagger}+ U \chi^{\dagger} \right],\quad U(x)=\exp\left( i\frac{\phi(x)}{F_0} \right ),$$ where $$\label{phi} \phi(x)=\left ( \begin{array}{ccc} \pi^0+\frac{1}{\sqrt{3}}\eta & \sqrt{2} \pi^+ & \sqrt{2} K^+ \\ \sqrt{2} \pi^- & -\pi^0+\frac{1}{\sqrt{3}}\eta & \sqrt{2} K^0 \\ \sqrt{2} K^- & \sqrt{2} \bar{K}^0 & -\frac{2}{\sqrt{3}}\eta \end{array} \right ).$$ The quark-mass matrix is contained in $\chi=2 B_0\, \mbox{diag} (m_u,m_d,m_s)$. $B_0$ is related to the quark condensate $<\!\!\bar{q}q\!\!>$, and $F_0\approx 93$ MeV denotes the pion-decay constant in the chiral limit. The covariant derivative $D_\mu U = \partial_\mu U +ie A_\mu [Q,U]$, where $Q=\mbox{diag}(2/3,-1/3,-1/3)$ is the quark-charge matrix and $e>0$, generates a coupling to the electromagnetic field $A_\mu$. 
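As a small consistency check on Eq. (\[phi\]), one can verify numerically that $\phi(x)$ is hermitian and traceless, as required for $U=\exp(i\phi/F_0)$ to be a unitary matrix of determinant one, i.e., an SU(3) matrix. A sketch with random, purely illustrative field values (the conjugate fields are fixed by $\pi^-=(\pi^+)^\ast$, etc., while $\pi^0$ and $\eta$ are real):

```python
import numpy as np

rng = np.random.default_rng(1)
# Sample complex values for the charged fields; pi^- = (pi^+)^*,
# K^- = (K^+)^*, Kbar^0 = (K^0)^*; pi^0 and eta are real fields.
pip, Kp, K0 = rng.normal(size=3) + 1j * rng.normal(size=3)
pi0, eta = rng.normal(size=2)
s2, s3 = np.sqrt(2.0), np.sqrt(3.0)

phi = np.array([
    [pi0 + eta / s3,      s2 * pip,            s2 * Kp],
    [s2 * np.conj(pip),  -pi0 + eta / s3,      s2 * K0],
    [s2 * np.conj(Kp),    s2 * np.conj(K0),   -2 * eta / s3],
])

# Hermiticity guarantees U unitary; tracelessness guarantees det U = 1.
assert np.allclose(phi, phi.conj().T)
assert abs(np.trace(phi)) < 1e-12
print('phi is hermitian and traceless')
```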
Finally, the equation of motion (EOM) obtained from ${\cal L}_2$ reads $$\label{eom2} {\cal O}^{(2)}_{EOM}(U)=D^2 U U^\dagger - U (D^2 U)^\dagger -\chi U^\dagger +U \chi^\dagger +\frac{1}{3}\mbox{Tr}\left(\chi U^\dagger-U\chi^\dagger\right)=0.$$ The most general structure of ${\cal L}_4$ was first written down by Gasser and Leutwyler (see Eq. (6.16) of Ref. [@Gasser_85]), $$\label{l4} {\cal L}_4=L_1\left\{\mbox{Tr}[D_\mu U (D^\mu U)^\dagger]\right\}^2 + \cdots,$$ and introduces 10 physically relevant low-energy coupling constants $L_i$. We now discuss the concept of field transformations [@Chisholm_61; @Kamefuchi_61; @Coleman_69; @Scherer_95a] by introducing a field redefinition, $$\label{up} U'=\mbox{exp}(iS)U=U+iSU+\cdots,$$ where $S=S^\dagger$ and $\mbox{Tr}(S)=0$, and then look for generators $S$ which i) are of ${\cal O}(p^2)$, ii) guarantee that $U'$ has the correct $\mbox{SU(3)}_L\times\mbox{SU(3)}_R$ transformation properties, iii) produce the correct parity and charge-conjugation behavior, $P:U'(\vec{x},t)\mapsto U'^\dagger(-\vec{x},t)$, $C:U'\mapsto U'^T$. After some algebra (see Ref. [@Scherer_95a] for details) one finds two such generators at ${\cal O}(p^2)$, $$\label{sgen} S=i\alpha_1[D^2 U U^\dagger - U (D^2U)^\dagger] +i\alpha_2[\chi U^\dagger- U\chi^\dagger -\frac{1}{3}\mbox{Tr}(\chi U^\dagger - U\chi^\dagger)],$$ where $\alpha_1$ and $\alpha_2$ are arbitrary real parameters with dimension $\mbox{energy}^{-2}$. If we insert $U'$ into ${\cal L}_{\mbox{\footnotesize eff}}$ of Eq. (\[leff\]), we obtain $$\label{lagup} {\cal L}_{\mbox{\footnotesize eff}} (U)\mapsto{\cal L}_{\mbox{\footnotesize eff}}(U') ={\cal L}_2(U)+\Delta {\cal L}_2(U) +{\cal L}_4(U) + {\cal O}(p^6),$$ where, to leading order in $S$, $\Delta{\cal L}_2(U)$ is given by $$\label{dlag2} \Delta {\cal L}_2(U)= \mbox{total divergence} +\frac{F^2_0}{4} \mbox{Tr}(iS {\cal O}_{EOM}^{(2)}) + {\cal O}(p^6).$$ As usual, the total divergence is irrelevant. The second term of Eq. 
(\[dlag2\]) is of ${\cal O}(p^4)$ and leads to a “modification” of ${\cal L}_4$ [@Rudy_94; @Scherer_95b], $$\label{loffshell} {\cal L}^{\mbox{\footnotesize off shell}}_4= \beta_1 \mbox{Tr}({\cal O}_{EOM}^{(2)} {\cal O}^{(2)\dagger}_{EOM}) +\beta_2\mbox{Tr}[(\chi U^\dagger-U\chi^\dagger){\cal O}_{EOM}^{(2)}],$$ where $\alpha_1=4\beta_1/F^2_0$ and $\alpha_2=-4(\beta_1+\beta_2)/F^2_0$ and $\beta_1$ and $\beta_2$ are now dimensionless. By a simple redefinition of the field variables one generates an infinite set of “new” Lagrangians depending on two parameters $\beta_1$ and $\beta_2$. That all these Lagrangians describe the same physics will be illustrated in the next section. In this sense we would argue that Eqs. (\[leff\]) and (\[lagup\]) represent the [*same*]{} theory in different representations. The concept of field transformations is very similar to choosing appropriate coordinates in the description of a dynamical system. The value of physical observables should, of course, not depend on the choice of coordinates. ### The Compton-scattering amplitude The most general, irreducible, renormalized three-point Green’s function \[see Eq. (\[par\])\] at ${\cal O}(p^4)$ was derived in Ref. [@Rudy_94]. For positively charged pions and for real photons ($q^2=0, q=p_f-p_i$) it has the simple form $$\label{emv} \Gamma^{\mu,irr}_R(p_f,p_i)=(p_f+p_i)^{\mu} \left(1+16\beta_1\frac{p_f^2+p_i^2-2M^2_{\pi}}{F^2_{\pi}}\right),$$ and the corresponding renormalized propagator satisfying the Ward-Takahashi identity, Eq. (\[wti\]), is given by $$\label{propagatoremv} i\Delta_R(p)=\frac{i}{p^2-M^2_{\pi} +\frac{16 \beta_1}{F^2_{\pi}}(p^2-M^2_{\pi})^2+i\epsilon}.$$ Clearly, the parameter $\beta_1$ is related to the deviation from a “pointlike” vertex, once one of the pion legs is off shell. Eqs. (\[emv\]) and (\[propagatoremv\]) have to be compared with the result of the usual representation of ChPT at ${\cal O}(p^4)$. 
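Before making that comparison, one can verify symbolically that the vertex of Eq. (\[emv\]) and the propagator of Eq. (\[propagatoremv\]) satisfy the Ward-Takahashi constraint of Eq. (\[constraint\]) at $q^2=0$, where the $q^2 G$ term drops out. A short sympy sketch:

```python
import sympy as sp

pf2, pi2, M2, F2, b1 = sp.symbols('pf2 pi2 M2 Fpi2 beta1')

# O(p^4) form function at q^2 = 0, Eq. (emv), and the inverse of the
# renormalized propagator of Eq. (propagatoremv).
F = 1 + 16 * b1 * (pf2 + pi2 - 2 * M2) / F2
Dinv = lambda p2: (p2 - M2) + 16 * b1 / F2 * (p2 - M2)**2

# Ward-Takahashi constraint at q^2 = 0:
#   (pf2 - pi2) F = Dinv(pf2) - Dinv(pi2)
wti = sp.simplify((pf2 - pi2) * F - (Dinv(pf2) - Dinv(pi2)))
print('Ward-Takahashi residual at q^2 = 0:', wti)
assert wti == 0
```

The identity holds exactly, not just to ${\cal O}(\beta_1)$, because the difference of the two inverse propagators factorizes as $(p_f^2-p_i^2)$ times precisely the bracket appearing in Eq. (\[emv\]).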
In this case the vertex at $q^2=0$ is independent of $p_f^2$ and $p_i^2$, $\Gamma^{\mu,irr}_R(p_f,p_i)=(p_f+p_i)^\mu$. Furthermore, the renormalized propagator is simply given by the free propagator. Let us now consider the process $\gamma(\epsilon,q)+\pi^+(p_i) \to \gamma(\epsilon',q')+\pi^+(p_f)$. For $\beta_1\neq 0$ one expects a different contribution of the pole terms, since the intermediate pion is not on its mass shell. We subtract the ordinary calculation of the pole terms using free vertices from the corresponding calculation with off-shell vertices and interpret the result as being due to off-shell effects. Similar methods have been the basis of investigating the influence of off-shell form functions in various reactions involving the nucleon, such as proton-proton bremsstrahlung [@Nyman_70; @Kondratyuk_98] or electron-nucleus scattering [@Naus_87; @Tiemeijer_90]. With the help of Eqs. (\[emv\]) and (\[propagatoremv\]) the change in the pole amplitude can easily be calculated,[^14] $$\label{dmp} \Delta M_P=M_P(\beta_1\neq 0)-M_P(\beta_1=0) =-ie^2 \frac{64 \beta_1}{F^2_{\pi}} \left( p_f\cdot\epsilon'\,p_i\cdot\epsilon+ p_f\cdot\epsilon\, p_i\cdot\epsilon' \right).$$ However, Eq. (\[dmp\]) cannot be used for a unique extraction of the form functions from experimental data since the very same term in the Lagrangian which contributes to the off-shell electromagnetic vertex also generates a two-photon contact interaction. This can be seen by inserting the appropriate covariant derivative into Eq. (\[loffshell\]) and by selecting those terms which contain two powers of the pion field as well as two powers of the electromagnetic field. From the first term of Eq. 
(\[loffshell\]) one obtains the following $\gamma\gamma\pi\pi$ interaction term $$\begin{aligned} \label{loff1} \Delta {\cal L}_{\gamma\gamma\pi\pi}&=&\frac{16\beta_1 e^2}{F^2_{\pi}} \left\{-A^2[\pi^-(\Box+M^2_{\pi})\pi^+ +\pi^+(\Box+M^2_{\pi})\pi^-]\right.\nonumber\\ &&\left.+(\partial\cdot A+2A\cdot\partial)\pi^+ (\partial\cdot A+2A\cdot\partial)\pi^-\right\}.\end{aligned}$$ For real photons Eq. (\[loff1\]) translates into a contact contribution of the form $$\label{f1} \Delta {\cal M}_{\gamma\gamma\pi\pi}=ie^2\frac{64\beta_1}{F^2_\pi} (p_f\cdot\epsilon'\,p_i\cdot \epsilon+p_f\cdot\epsilon\, p_i\cdot\epsilon'),$$ which precisely cancels the contribution of Eq. (\[dmp\]). At first sight the second term of Eq. (\[loffshell\]) also seems to generate a contribution to the Compton-scattering amplitude. However, after wave function renormalization this term drops out (see Ref. [@Scherer_95b] for details). We emphasize that all the cancellations happen only when one consistently calculates off-shell form functions, propagators and contact terms, and properly takes renormalization into account. Thus the Lagrangian of Eq. (\[lagup\]), which represents an equivalent form of the standard Lagrangian of ChPT, yields, as a consequence of the equivalence theorem, the same Compton-scattering amplitude while, at the same time, it generates different off-shell form functions. Clearly, this illustrates why there is no unambiguous way of extracting the off-shell behavior of form functions from on-shell matrix elements. The ultimate reason is that the form functions of Eq. (\[par\]) are not only model dependent but also representation dependent, [*i.e.*]{}, two representations of the same theory result in the same observables but different form functions. ### Off-shell effects versus contact interactions In the present case there was a complete cancellation between off-shell effects and a contact contribution. 
Even though this will not always necessarily be true, we would still argue that as a matter of principle it is impossible to [*uniquely*]{} extract off-shell effects from observables as there is a certain amount of freedom to trade such effects for contact interactions. Let us discuss this claim within a somewhat different approach which does not make use of a calculation within a specific model or theory. Such a discussion also serves to demonstrate that our interpretation is independent of the fact that we made use of ChPT at ${\cal O}(p^4)$. For that purpose we come back to the method of Gell-Mann and Goldberger in their derivation of the low-energy theorem for Compton scattering [@GellMann_54], and split the most general invariant amplitude into two classes $A$ and $B$ (see Sec. III.B), where class $A$ consists of the most general pole terms and class $B$ contains the rest. The original motivation in Ref. [@GellMann_54] for such a separation was to isolate those terms of ${\cal M}=-ie^2\epsilon_\mu\epsilon'^\ast_\nu M^{\mu\nu}$ which have a singular behavior in the limit $q,q'\to 0$. As in Eq. (\[gammaa\]) we write class $A$ in terms of the most general expressions for the irreducible, renormalized vertices and the renormalized propagator, $$\label{ma} M^{\mu\nu}_A=\Gamma^\nu(p_f,p_f+q')\Delta_R (p_i+q) \Gamma^\mu (p_i+q,p_i) + (q\leftrightarrow -q',\mu\leftrightarrow\nu),$$ where we made use of crossing symmetry. For sufficiently low energies class $B$ can be expanded in terms of the relevant kinematical variables \[see Eq. (\[gammabansatz\])\]. Furthermore, in class $A$ we expand the vertices and the renormalized pion propagator around their respective on-shell points, $p^2=M^2_\pi$. We obtain for the propagator $$\label{exprop} \Delta^{-1}_R(p)=p^2-M^2_\pi-\Sigma(p^2)=(p^2-M^2_\pi) [1-\frac{p^2-M^2_\pi}{2}\Sigma''(M^2_\pi)+\cdots],$$ where we made use of the standard normalization conditions $\Sigma(M^2_\pi)=\Sigma'(M^2_\pi)=0$. 
The expansion of, [*e.g.*]{}, the vertex describing the absorption of the initial real photon in the s channel reads $$\begin{aligned} \label{expvert} \Gamma^\mu(p_i+q,p_i)&=&(P^\mu+q'^\mu)F[0,M^2_\pi+(s-M^2_\pi), M^2_\pi]\nonumber\\ &=&(P^\mu+q'^\mu)[1+(s-M^2_\pi)\partial_2F(0,M^2_\pi,M^2_\pi)+\cdots],\end{aligned}$$ where $P=p_i+p_f$, and where $\partial_2$ refers to partial differentiation with respect to the second argument. Inserting the result of Eqs. (\[exprop\]) and (\[expvert\]) into Eq. (\[ma\]) we obtain for the s-channel contribution to $M^{\mu\nu}_A$ $$\begin{aligned} \label{masexp} M^{\mu\nu}_s&=&-ie^2 (P^\nu+q^\nu)[1+(s-M^2_\pi) \partial_3 F(0,M^2_\pi,M^2_\pi) +\cdots] \nonumber\\ &\times&\frac{1}{s-M^2_\pi}[1+\frac{s-M^2_\pi}{2} \Sigma''(M^2_\pi)+\cdots]\nonumber\\ &\times&(P^\mu+q'^\mu)[1+(s-M^2_\pi) \partial_2 F(0,M^2_\pi,M^2_\pi) +\cdots]\nonumber\\ &=&-ie^2\frac{(P^\nu+q^\nu)(P^\mu+q'^\mu)}{s-M^2_\pi}+ {\cal O}[(s-M^2_\pi)^0]\nonumber\\ &=&\mbox{``free'' s channel + analytical terms},\end{aligned}$$ and an analogous term for the u channel. In Eq. (\[masexp\]) “free” s channel refers to a calculation with on-shell vertices. From Eq. (\[masexp\]) we immediately see that off-shell effects resulting from either the form functions or the renormalized propagator are of the same order as analytical contributions from class $B$. In the total amplitude off-shell contributions from class $A$ cannot uniquely be separated from class $B$ contributions. In the language of field transformations this means that contributions to $\cal M$ can be shifted between different diagrams, leaving the total result invariant. In the language of Gell-Mann and Goldberger, by a change of representation, contributions can be shifted from class $A$ to class $B$ within the [*same*]{} theory. We can also express this differently: what appears to be an off-shell effect in one representation results, for example, from a contact interaction in another representation. 
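The content of Eq. (\[masexp\]) can be made concrete with the ${\cal O}(p^4)$ expressions of Eqs. (\[emv\]) and (\[propagatoremv\]): the dressed s-channel pole factor differs from the "free" pole only by a term analytic at $s=M^2_\pi$, here even a constant, which is exactly the type of term canceled by the contact interaction of Eq. (\[f1\]). A sympy sketch:

```python
import sympy as sp

s, M2, F2, b1 = sp.symbols('s M2 Fpi2 beta1')

# Off-shell form function at each vertex (one leg at p^2 = s, the other
# legs on shell; q^2 = 0), Eq. (emv), and the dressed propagator of
# Eq. (propagatoremv).
F = 1 + 16 * b1 * (s - M2) / F2
Delta = 1 / ((s - M2) + 16 * b1 / F2 * (s - M2)**2)

# Dressed pole factor minus the "free" pole: the residue at s = M2 is
# unchanged and the remainder is analytic there.
remainder = sp.simplify(F * Delta * F - 1 / (s - M2))
print('dressed - free pole =', remainder)
assert sp.simplify(remainder - 16 * b1 / F2) == 0
```

The dressed factor $F^2\Delta_R$ thus reproduces the free pole exactly, shifted by the constant $16\beta_1/F^2_\pi$; no trace of $\beta_1$ survives at the pole itself.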
In this sense, off-shell effects are not only model dependent, [*i.e.*]{}, different models generate different off-shell form functions, but they are also representation dependent, which means that even different representations of the [*same*]{} theory generate different off-shell form functions. This has to be contrasted with on-shell S-matrix elements which, in general, will be different for different models (model dependent), but always the same for different representations of the same model (representation independent). For a further discussion in the context of bremsstrahlung the reader is referred to Ref. [@Fearing_98b]. Virtual Compton scattering ========================== In this section we will, finally, address an old topic [@Berg_61] which has recently received considerable attention, namely, low-energy virtual Compton scattering (VCS) as tested in, [*e.g.*]{}, the reactions $e^-p\to e^-p\gamma$ [@Brand_95; @Audit_93; @Brand_94; @Audit_95; @Shaw_97] and $\pi^- e^-\to\pi^-e^-\gamma$ [@Moinester_98a; @Moinester_98b]. From a theoretical point of view, the objective of investigating VCS is to map out all independent components of the Compton-scattering tensor, Eq. (\[cst\]), for arbitrary four-momenta of the photons. As we have seen, real Compton scattering is sensitive only to the transverse components and is restricted to the kinematics $\omega=|\vec{q}|$ and $\omega'=|\vec{q}\,'|$. As in all studies with electromagnetic probes, the possibilities to investigate the structure of the target increase substantially if virtual photons are used, since (a) photon energy and momentum can be varied independently, and (b) longitudinal components of the transition current are accessible. Kinematics and LET ------------------ When investigating $e^-p\to e^-p\gamma$ or $\pi^- e^-\to\pi^-e^-\gamma$, the VCS tensor is only a building block of the invariant amplitude describing the process. 
The total amplitude consists of a Bethe-Heitler (BH) piece, where the real photon is emitted by the initial or final electrons, and the VCS contribution (see Fig. \[fig:diagrams\]), $${\cal M}={\cal M}_{\mbox{\footnotesize BH}}+{\cal M}_{\mbox{\footnotesize VCS}}.$$ It is straightforward to calculate the Bethe-Heitler contribution, since it involves on-shell information of the target only, namely, its electromagnetic form factors. In the following we will be concerned with the invariant amplitude for VCS. For the final photon the Lorentz condition $q'\cdot\epsilon'=0$ is automatically satisfied, and we choose, in addition, the Coulomb gauge $\epsilon'^\mu=(0,\vec{\epsilon}\,')$ implying $\vec{q}\,'\cdot\vec{\epsilon}\,'=0$. Writing the invariant amplitude as $${\cal M}_{\mbox{\footnotesize VCS}}=-ie^2 \epsilon_\mu M^\mu,$$ where $\epsilon_\mu=e\bar{u}_e\gamma_\mu u_e/q^2$ is the polarization vector of the virtual photon, we can make use of current conservation, $q_\mu\epsilon^\mu=0$, $q_\mu M^\mu=0$, to express $\epsilon_0$ and $M^0$ in terms of $\epsilon_z$ and $M_z$, respectively, $$\label{mvcs} {\cal M}_{\mbox{\footnotesize VCS}} =ie^2\left(\vec{\epsilon}_T\cdot\vec{M}_T +\frac{q^2}{\omega^2}\epsilon_z M_z\right).$$ Note that as $\omega\to 0$, both $\epsilon_z$ and $M_z$ tend to zero such that ${\cal M}_{\mbox{\footnotesize VCS}}$ in Eq. (\[mvcs\]) remains finite. After a reduction from Dirac spinors to Pauli spinors the transverse and longitudinal parts of Eq. 
(\[mvcs\]) may be expressed in terms of 8 and 4 independent structures, respectively [@Guichon_95; @Lvov_93; @Scherer_96] (see Table \[numamp\]): $$\begin{aligned} \label{mt} \vec{\epsilon}_T\cdot\vec{M}_T&=& \vec{\epsilon}\,'^\ast\cdot \vec{\epsilon}_T A_1 +i \vec{\sigma}\cdot(\vec{\epsilon}\,'^\ast\times\vec{\epsilon}_T) A_2 +(\hat{q}'\times\vec{\epsilon}\,'^\ast)\cdot(\hat{q}\times\vec{\epsilon}_T) A_3 +i \vec{\sigma}\cdot (\hat{q}'\times\vec{\epsilon}\,'^\ast) \times(\hat{q}\times\vec{\epsilon}_T)A_4\nonumber\\ &&+i\hat{q}\cdot\vec{\epsilon}\,'^\ast \vec{\sigma}\cdot(\hat{q}\times\vec{\epsilon}_T) A_5 +i\hat{q}'\cdot\vec{\epsilon}_T \vec{\sigma}\cdot(\hat{q}'\times\vec{\epsilon}\,'^\ast) A_6 +i\hat{q}\cdot\vec{\epsilon}\,'^\ast \vec{\sigma}\cdot(\hat{q}'\times\vec{\epsilon}_T) A_7\nonumber\\ &&+i\hat{q}'\cdot\vec{\epsilon}_T \vec{\sigma}\cdot(\hat{q}\times\vec{\epsilon}\,'^\ast) A_8,\\ \label{ml} \epsilon_z M_z&=&\epsilon_z \vec{\epsilon}\,'^\ast\cdot\hat{q} A_9 +i \epsilon_z\vec{\epsilon}\,'^\ast\cdot\hat{q} \vec{\sigma}\cdot(\hat{q}'\times\hat{q})A_{10} +i\epsilon_z\vec{\sigma}\cdot(\hat{q}\times\vec{\epsilon}\,'^\ast)A_{11} +i\epsilon_z\vec{\sigma}\cdot(\hat{q}'\times\vec{\epsilon}\,'^\ast)A_{12}, \nonumber\\\end{aligned}$$ where the functions $A_i$ depend on three kinematical variables, [*e.g.*]{}, $|\vec{q}|$, $\omega'=|\vec{q}\,'|$, and $z=\hat{q}\cdot\hat{q}\,'$. For the spin-zero case only one longitudinal and two transverse structures are required. In Ref. [@Scherer_96] the method of Gell-Mann and Goldberger was extended to VCS (see Sec. III.B) and model-independent predictions for the functions $A_i$ were obtained to second order in $q$ or $q'$. The results for the functions $A_i$ in the CM system expanded up to ${\cal O}(2)$, [*i.e.*]{} $|\vec{q}\,'|^2$, $|\vec{q}\,'||\vec{q}|$ and $|\vec{q}|^2$, are shown in Tables \[tablet\] and \[tablel\]. 
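Stepping back to Eq. (\[mvcs\]): the elimination of $\epsilon_0$ and $M^0$ via current conservation is easily verified numerically. The sketch below takes the virtual photon along the $z$ axis and checks that $-\epsilon_\mu M^\mu$ collapses to the transverse-plus-longitudinal form with the factor $q^2/\omega^2$; the kinematics and the stand-in "amplitudes" are random numbers, not a physical model.

```python
import random

# Check of Eq. (mvcs): with q = (w, 0, 0, |q|) and metric (+,-,-,-),
# q.eps = 0 and q.M = 0 give eps_0 = (|q|/w) eps_z and M^0 = (|q|/w) M_z,
# so that  -eps_mu M^mu = eps_T.M_T + (q^2/w^2) eps_z M_z,  q^2 = w^2 - |q|^2.
random.seed(1)
w, qabs = 0.3, 0.8                                # virtual photon: w != |q|
eps_T = [random.random(), random.random()]
M_T = [random.random(), random.random()]
eps_z, M_z = random.random(), random.random()
eps0, M0 = (qabs / w) * eps_z, (qabs / w) * M_z   # from current conservation

lhs = -(eps0 * M0 - (eps_T[0] * M_T[0] + eps_T[1] * M_T[1] + eps_z * M_z))
q2 = w * w - qabs * qabs
rhs = eps_T[0] * M_T[0] + eps_T[1] * M_T[1] + (q2 / (w * w)) * eps_z * M_z
print(abs(lhs - rhs))   # -> 0 up to rounding
```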
To this order, all $A_i$ can be expressed in terms of the proton mass $M$, the anomalous magnetic moment $\kappa$, the electromagnetic Sachs form factors $G_E$ and $G_M$, the mean square electric radius $r^2_E$, and the real-Compton-scattering electromagnetic polarizabilities $\bar{\alpha}_E$ and $\bar{\beta}_M$. For $|\vec{q}|=\omega'$, the predictions of Table \[tablet\] reduce to the well-known RCS result. In Ref. [@Fearing_98] the low-energy behavior of the VCS amplitude of $\pi^-(p_i)+\gamma^\ast(\epsilon,q)\to \pi^-(p_f)+\gamma(\epsilon',q')$ was found to be of the form $$\label{mvcspion} {\cal M}_{\mbox{\footnotesize VCS}}=-2ie^2 F(q^2)\left[ \frac{p_f\cdot\epsilon'^\ast (2 p_i+q)\cdot\epsilon}{s-m_\pi^2} +\frac{p_i\cdot\epsilon'^\ast (2 p_f-q)\cdot\epsilon}{u-m_\pi^2} -\epsilon\cdot\epsilon'^\ast\right]+{\cal M}_R,$$ where $F(q^2)$ is the electromagnetic form factor of the pion, $s=(p_i+q)^2$, and $u=(p_i-q')^2$. The residual term ${\cal M}_R$ is separately gauge invariant and at least of second order, [*i.e.*]{}, ${\cal O}(qq,qq',q'q')$. Beyond the LET: Generalized Polarizabilities -------------------------------------------- The framework for analyzing the model-dependent terms specific to VCS at low energies has been developed by Guichon, Liu, and Thomas [@Guichon_95]. The invariant amplitude ${\cal M}_{\mbox{\footnotesize VCS}}$ is split into a pole piece ${\cal M}_P$ and a residual part ${\cal M}_R$. For the nucleon, the s- and u-channel pole diagrams have been calculated using the free Feynman propagator together with electromagnetic vertices of the form $$\label{f1f2vertex} \Gamma^\mu_{F_1F_2}(p_f,p_i) =\gamma^\mu F_1(q^2)+i\frac{\sigma^{\mu\nu}q_\nu}{2M} F_2(q^2), \,q=p_f-p_i,$$ where $F_1$ and $F_2$ are the Dirac and Pauli form factors, respectively. The corresponding amplitude ${\cal M}^{\gamma^\ast\gamma}_P$ contains all irregular terms as $q\to 0$ or $q'\to 0$ and is separately gauge invariant [@Guichon_95; @Scherer_96]. 
The second property is a special feature when working with these particular vertex operators and has been essential for the derivation in [@Guichon_95]. Gauge invariance with this choice of operators is not as trivial as it might appear, since the electromagnetic vertex, Eq. (\[f1f2vertex\]), and the nucleon propagator do not satisfy the Ward-Takahashi identity (except at the real photon point, $q^2 = 0$): $$\label{wtf1f2} q_\mu \Gamma^\mu_{F_1,F_2}(p_f,p_i)=(p_f\hspace{-.9em}/\hspace{.3em} -p_i\hspace{-.7em}/\hspace{.3em})F_1(q^2)\neq S_F^{-1}(p_f) -S^{-1}_F(p_i).$$ However, explicit calculation, including the use of the Dirac equation, shows that ${\cal M}_P$ of Ref. [@Guichon_95] is, in fact, identical with evaluating the pole terms using the vertex of Eq. (\[gammaeff\]). We should mention that in Ref. [@Guichon_95] the phrase LET is used for the Born terms as opposed to the more restrictive sense of Sec. IV.A. For the pion, the situation is somewhat more complicated due to the fact that even for real photons the s- and u-channel pole diagrams are not separately gauge invariant. A natural starting point is given by Eq. (\[mvcspion\]). The generalized polarizabilities in VCS [@Guichon_95] result from an analysis of ${\cal M}_{R}^{\gamma^{\ast} \gamma}$ in terms of electromagnetic multipoles $H^{(\rho' L', \rho L)S}(\omega' , |\vec{q}|)$, where $\rho \, (\rho')$ denotes the type of the initial (final) photon ($\rho = 0$: charge, C; $\rho = 1$: magnetic, M; $\rho = 2$: electric, E). The initial (final) orbital angular momentum is denoted by $L \, (L')$, and $S$ distinguishes between non-spin-flip $(S = 0)$ and spin-flip $(S = 1)$ transitions. For the pion, only the case $S=0$ applies. ${\cal M}^{\gamma^\ast\gamma}_R$ is at least linear in the energy of the real photon. A restriction to the lowest-order, [*i.e.*]{} linear terms in $\omega'$ leads to only electric and magnetic dipole radiation in the final state.
Parity and angular-momentum selection rules (see Table \[tab:mulan\]) then allow for 3 scalar multipoles $(S = 0)$ and 7 vector multipoles $(S = 1)$. The corresponding ten GPs, $P^{(01,01)0}$, ..., ${\hat{P}}^{(11,2)1}\,$, are functions of $|\vec{q}|^2$, where mixed-type polarizabilities, ${\hat{P}}^{(\rho' L' , L)S} (|\vec{q}|^2)$, have been introduced which are neither purely electric nor purely Coulomb type (see Ref.  [@Guichon_95] for details). However, the treatment of Ref. [@Guichon_95] does not fully exploit all the symmetries of the VCS tensor, resulting in redundant structures. This observation was first made in Ref. [@Metz_96] in the covariant framework of the linear sigma model. In fact, only six of the above ten GPs are independent, if charge-conjugation symmetry is imposed in combination with particle crossing [@Drechsel_97; @Drechsel_98b]. For example, for a charged pion, the constraint for the Compton tensor reads [@Fearing_98; @Drechsel_97] $$\label{ccc} M_{\pi^+}^{\mu\nu}(p_f,q';p_i,q) \stackrel{C}{=}M_{\pi^-}^{\mu\nu}(p_f,q';p_i,q) \stackrel{part.cross.}{=}M_{\pi^+}^{\mu\nu}(-p_i,q';-p_f,q),$$ generating one relation between originally three GPs [@Metz_96; @Drechsel_97]: $$\label{relpol} 0 = \sqrt{\frac{3}{2}} P^{(01,01)0}(|\vec{q}|^2) + \sqrt{\frac{3}{8}} P^{(11,11)0}(|\vec{q}|^2) + \frac{3 |\vec{q}|^2}{2 \omega_0} \hat{P}^{(01,1)0}(|\vec{q}|^2),$$ allowing one to eliminate the mixed-type polarizability $\hat{P}^{(01,1)0}$. In Eq. (\[relpol\]), $\omega_0=\left.\omega\right|_{\omega'=0}= M-\sqrt{M^2+\vec{q}\,^2}$.
The remaining two spin-independent polarizabilities have been defined such as to be proportional to the RCS polarizabilities at $|\vec{q}|=0$: $$\bar{\alpha}_E=-\frac{e^2}{4\pi}\sqrt{\frac{3}{2}} P^{(01,01)0}(0),\quad \bar{\beta}_M=-\frac{e^2}{4\pi}\sqrt{\frac{3}{8}} P^{(11,11)0}(0).$$ Note that a calculation in the framework of nonrelativistic quantum mechanics is not particle-crossing symmetric and fails to satisfy the second equality in Eq. (\[ccc\]) [@Pasquini_99] (see Sec. II.B). In general, we do not expect a nonrelativistic calculation to reproduce the constraints of [@Drechsel_97; @Drechsel_98b]. Relations between the spin-dependent GPs at $|\vec{q}|=0$ and the four spin-dependent RCS polarizabilities $\gamma_i$ of Ref.  [@Ragusa_93] were discussed in Ref. [@Drechsel_98c]: $$\label{rpgp} \gamma_3 = - \frac{e^2}{4 \pi} \frac{3}{\sqrt{2}} P^{(01,12)1} (0),\quad \gamma_2 + \gamma_4 = - \frac{e^2}{4 \pi} \frac{3 \sqrt{3}}{2 \sqrt{2}} P^{(11,02)1}(0),$$ [*i.e.*]{}, only two of the four $\gamma_i$ can be related to GPs at $|\vec{q}|=0$. Generalized Polarizabilities of the Nucleon ------------------------------------------- Predictions for the generalized polarizabilities of the nucleon have been obtained in various approaches . Here, we will concentrate on a discussion of the GPs obtained within the heavy-baryon formulation of chiral perturbation theory (HBChPT) and the linear sigma model. Both calculations are based upon covariant approaches and thus satisfy the constraints found in [@Drechsel_97; @Drechsel_98b]. 
The extension of meson chiral perturbation theory to the nucleon sector starts from the most general effective chiral Lagrangian involving nucleons, pions, and external fields [@Gasser_88], $$\label{leffnucl} {\cal L}_{\mbox{\footnotesize eff}} ={\cal L}^{(1)}_{\pi N}+{\cal L}^{(2)}_{\pi N}+\cdots.$$ The lowest-order Lagrangian, $$\label{ln1} {\cal L}_{\pi N}^{(1)} = \bar{\Psi}\left(i D\hspace{-.7em}/-m_0 +\frac{\stackrel{\circ}{g}_A}{2}u\hspace{-.6em}/\gamma_5\right)\Psi,$$ contains two parameters in the chiral limit, namely, the axial-vector coupling constant $\stackrel{\circ}{g}_A$ and the nucleon mass $m_0$. The covariant derivative $D_{\mu} \Psi$ includes, among other terms, the coupling to the electromagnetic field, and $u_\mu$ contains in addition the derivative coupling of a pion. Since the nucleon mass does not vanish in the chiral limit, one has an additional large scale such that even in the chiral limit external four-momenta cannot be made arbitrarily small. In the framework of HBChPT [@Bernard_92; @Jenkins_91] four-momenta are written as $p = m_0 v +k$, $v^2=1$, $v^0\ge 1$, where often $v^\mu = (1,0,0,0)$ is used. By introducing so-called velocity-dependent fields $${\cal N}_v=e^{+im_0v\cdot x}\frac{1}{2}(1+v\hspace{-.5em}/)\Psi,\quad {\cal H}_v=e^{+im_0v\cdot x}\frac{1}{2}(1-v\hspace{-.5em}/)\Psi,$$ such that $\Psi(x)=e^{-im_0 v \cdot x} ({\cal N}_v +{\cal H}_v)$, and by using the equation of motion for ${\cal H}_v$, one can eliminate ${\cal H}_v$ to obtain a Lagrangian for ${\cal N}_v$ which, to lowest order in $1/m_0$, is given by $$\label{lh1n} \widehat{\cal L}^{(1)}_{\pi N}=\bar{\cal N}_v(iv\cdot D + g_A S\cdot u) {\cal N}_v.$$ In Eq. (\[lh1n\]) $S^{\mu} = i \gamma_5 \sigma^{\mu\nu} v_{\nu}$ refers to the spin matrix. This procedure can be generalized to higher orders in the chiral expansion and is very similar to the Foldy-Wouthuysen procedure [@Foldy_50].
At each order in the momentum expansion one will have $1/m_0$ terms coming from the expansion of the leading Lagrangian in combination with counter terms resulting from the most general form allowed at that order. In Refs. [@Hemmert_97a; @Hemmert_97b] the VCS amplitude was calculated using HBChPT to third order in the external momenta. At ${\cal O}(p^3)$, contributions to the GPs are generated by nine one-loop diagrams and the $\pi^0$-exchange $t$-channel pole graph (see Fig. 2 of Ref. [@Hemmert_97a]). For the loop diagrams only the leading-order Lagrangians of Eqs.(\[l2\]) and (\[lh1n\]) are needed. For the $\pi^0$-exchange diagram one requires, in addition, the $\pi^0\gamma\gamma^\ast$ vertex provided by the Wess-Zumino-Witten Lagrangian [@Wess_71; @Witten_79], $$\label{wzwpi0} {\cal{L}}_{\gamma\gamma\pi^0}^{(WZW)} = -\frac{e^2}{32\pi^2 F_\pi} \; \epsilon^{\mu\nu\alpha\beta} F_{\mu\nu} F_{\alpha\beta} \pi^0 \,,$$ where $\epsilon_{0123}=1$ and $F_{\mu\nu}$ is the electromagnetic field strength tensor. At ${\cal O}(p^3)$, the LET of VCS (see Tables \[tablet\] and \[tablel\]) is reproduced by the tree-level diagrams obtained from Eq. (\[lh1n\]) and the relevant part of the second- and third-order Lagrangian [@Ecker_96], $$\begin{aligned} \label{lpin2} \widehat{\cal L}^{(2)}_{\pi N}&=& - \frac{1}{2M} \bar N_v \left[ D \cdot D +\frac{e}{2} (\mu_S+\tau_3\mu_V) \varepsilon_{\mu \nu \rho \sigma} F^{\mu\nu} v^{\rho} S^{\sigma}\right] N_v,\\ \label{lpin3} \widehat{\cal L}^{(3)}_{\pi N}&=& \frac{ie\varepsilon_{\mu\nu\rho\sigma} F^{\mu\nu}}{8 M^2} \bar N_v \left[\mu_S-\frac{1}{2}+\tau_3(\mu_V-\frac{1}{2})\right] S^{\rho} D^{\sigma} N_v +h.c.\,.\end{aligned}$$ The numerical results for the ten GPs are shown in Fig.  \[gpschpt\] (recall that only six combinations are independent). 
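As a plausibility check of the normalization of the Wess-Zumino-Witten vertex, Eq. (\[wzwpi0\]), one can evaluate the $\pi^0\to\gamma\gamma$ width it implies at tree level, $\Gamma=\alpha^2 m_{\pi^0}^3/(64\pi^3 F_\pi^2)$. The input numbers below are assumed standard values, not taken from the text.

```python
import math

# Tree-level pi0 -> gamma gamma width implied by the WZW vertex, Eq. (wzwpi0):
#   Gamma = alpha^2 * m_pi0^3 / (64 * pi^3 * F_pi^2)
alpha = 1.0 / 137.036       # fine-structure constant (assumed)
m_pi0 = 134.98              # neutral pion mass in MeV (assumed)
F_pi = 92.4                 # pion-decay constant in MeV

Gamma_eV = alpha**2 * m_pi0**3 / (64.0 * math.pi**3 * F_pi**2) * 1.0e6
print(Gamma_eV)             # ~ 7.7 eV, close to the measured width
```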
As an example of the spin-independent sector, let us discuss the generalized electric polarizability $\bar{\alpha}_E(|\vec{q}|^2) =-\frac{e^2}{4\pi}\sqrt{\frac{3}{2}} P^{(01,01)0}(|\vec{q}|^2)$, $$\label{alphaq2} \frac{\bar{\alpha}_E(|\vec{q}|^2)}{\bar{\alpha}_E}= 1-\frac{7}{50}\frac{|\vec{q}|^2}{m^2_\pi} +\frac{81}{2800}\frac{|\vec{q}|^4}{m^4_\pi} +O\left(\frac{|\vec{q}|^6}{m^6_\pi}\right),\,\, \bar{\alpha}_E=\frac{5 e^2 g_A^2}{384\pi^2m_\pi F_\pi^2} =12.8\times 10^{-4}\,\mbox{fm}^3.$$ As a function of $|\vec{q}|^2$, the prediction of ChPT decreases considerably faster than within the constituent quark model [@Guichon_95]. At ${\cal O}(p^3)$, all GPs can be expressed in terms of the pion mass $m_\pi$, the axial-vector coupling constant $g_A$, and the pion-decay constant $F_\pi$. The $\pi^0$-exchange diagram contributes only to the spin-dependent polarizabilities. Let us consider as an example $P^{(11,11)1}$, $$\label{p11111} P^{(11,11)1}(|\vec{q}|^2)=-\frac{1}{288}\frac{g_A^2}{F_\pi^2} \frac{1}{\pi^2 M}\left[\frac{|\vec{q}|^2}{m^2_\pi}-\frac{1}{10} \frac{|\vec{q}|^4}{m^4_\pi}\right] +\frac{1}{3M}\frac{g_A}{8\pi^2F^2_\pi}\tau_3 \left[\frac{|\vec{q}|^2}{m^2_\pi}-\frac{|\vec{q}|^4}{m^4_\pi}\right] +O\left(\frac{|\vec{q}|^6}{m^6_\pi}\right).$$ In general, the spin-dependent polarizabilities consist of an isoscalar contribution due to pion loops \[first term of Eq. (\[p11111\])\] and an isovector contribution from the $\pi^0$-exchange graph \[second term of Eq. (\[p11111\])\]. The linear sigma model (LSM) [@GellMann_60] represents a field-theoretical realization of chiral $\mbox{SU(2)}_L\times\mbox{SU(2)}_R$ symmetry.
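Before turning to the linear sigma model, the ${\cal O}(p^3)$ numbers quoted in Eq. (\[alphaq2\]) can be checked numerically. The inputs below ($g_A=1.26$, $m_\pi=139.57$ MeV, $F_\pi=92.4$ MeV, $\hbar c=197.327$ MeV fm) are assumed values, so agreement with the quoted $12.8\times 10^{-4}\,\mbox{fm}^3$ is only expected at the level of a few percent.

```python
import math

# Closed-form prefactor of Eq. (alphaq2):
#   alpha_E = 5 e^2 g_A^2 / (384 pi^2 m_pi F_pi^2),  e^2 = 4 pi alpha_em
alpha_em = 1.0 / 137.036
e2 = 4.0 * math.pi * alpha_em
gA, m_pi, F_pi, hbarc = 1.26, 139.57, 92.4, 197.327   # assumed inputs

alpha_E = 5.0 * e2 * gA**2 / (384.0 * math.pi**2 * m_pi * F_pi**2)
alpha_E_fm3 = alpha_E * hbarc**3        # convert MeV^-3 to fm^3
print(alpha_E_fm3 * 1e4)                # ~ 12.4, vs 12.8 quoted in the text

# Falloff of Eq. (alphaq2) evaluated at |q| = m_pi:
ratio = 1.0 - 7.0 / 50.0 + 81.0 / 2800.0
print(ratio)                            # ~ 0.889
```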
The dynamical degrees of freedom are given by a nucleon doublet $\Psi$, a pion triplet $\vec{\pi}$, and a singlet $\sigma$: $$\begin{aligned} \label{t:ss:lsm} {\cal L}_S&=&i\bar{\Psi}\partial\hspace{-.5em}/\Psi +\frac{1}{2}\partial_\mu\sigma\partial^\mu\sigma+\frac{1}{2} \partial_\mu\vec{\pi}\cdot\partial^\mu\vec{\pi}\nonumber\\ &&-g_{\pi N}\bar{\Psi}(\sigma+i\gamma_5\vec{\tau}\cdot\vec{\pi})\Psi -\frac{\mu^2}{2}(\sigma^2+\vec{\pi}^2) -\frac{\lambda}{4}(\sigma^2+\vec{\pi}^2)^2,\\ \label{t:ss:lsb} {\cal L}_{s.b.}&=&-c\sigma,\end{aligned}$$ where ${\cal L}_{s.b.}$ is a perturbation which explicitly breaks chiral symmetry. With an appropriate choice of parameters ($\mu^2 <0$, $\lambda >0$) the model reveals spontaneous symmetry breaking, $\langle 0|\sigma|0\rangle=F_\pi=92.4$ MeV. The spectrum consists of massless pions, a massive sigma, and nucleons with masses satisfying the Goldberger-Treiman relation $m_N=g_{\pi N} F_\pi$ with $g_A=1$. The symmetry breaking of Eq. (\[t:ss:lsb\]) generates the PCAC relation $$\partial^\mu A_\mu^a=F_\pi m^2_\pi \pi^a.$$ The interaction with the electromagnetic field is introduced via minimal substitution in Eq. (\[t:ss:lsm\]). The generalized polarizabilities have been calculated in the framework of a one-loop calculation [@Metz_96; @Metz_97]. In Fig. \[genpolcomp\], some predictions of the LSM are shown in comparison with other models. In Ref. [@Metz_97] for each generalized polarizability a chiral expansion has been performed. As an example, consider $P^{(11,11)1}$ for the proton, $$\begin{aligned} \label{t:ss:p11111} P^{(11,11)1}_p(Q^2_0) &=&Q^2_0\frac{C}{2m_\pi}\left[\frac{6}{\mu}-\frac{1}{2\mu}+\frac{7\pi}{8} +{\cal O}(\mu)\right]\nonumber\\ &&+(Q_0^2)^2 \frac{C}{5m_\pi^3}\left[-\frac{15}{\mu}+\frac{1}{8\mu} -\frac{9\pi}{64} +{\cal O}(\mu)\right]+{\cal O}[(Q_0^2)^3],\end{aligned}$$ where $C=g_{\pi N}^2/(72\pi^2 m_N^4)$, $\mu=m_\pi/m_N$ and $Q_0^2=2 m_N (\sqrt{|\vec{q}|^2+m_N^2}-m_N)$.
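A quick numerical orientation on the LSM parameters introduced above (input values assumed, not taken from the text): the tree-level Goldberger-Treiman relation fixes the pion-nucleon coupling.

```python
# Goldberger-Treiman relation of the linear sigma model, m_N = g_piN * F_pi
# (tree level, g_A = 1).  Inputs are assumed standard values.
m_N, F_pi = 938.9, 92.4    # MeV
g_piN = m_N / F_pi
print(g_piN)               # ~ 10.2; the empirical coupling (~13.1) is larger
                           # because in nature g_A ~ 1.27 rather than 1
```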
The leading-order term of the chiral expansion agrees with the corresponding result of HBChPT at ${\cal O}(p^3)$. Generalized Polarizabilities of Pions ------------------------------------- Of course, the concept of (generalized) polarizabilities can also be applied to the pion. From the experimental point of view, an extraction of polarizabilities is more complicated since there is no pion target. For that purpose, one has to consider reactions which contain the Compton-scattering amplitude as a building block, such as the Primakoff effect in high-energy pion-nucleon bremsstrahlung, $\pi^-A\to A\pi^-\gamma$ [@Antipov_83; @Antipov_85], radiative pion photoproduction off the nucleon, $\gamma p\to n\gamma \pi^+$ [@Aibergenov_86], and pion pair production in $e^+e^-$ scattering, $e^+e^-\to e^+e^-\pi^+\pi^-$ [@MARKII_90; @CELLO_92]. The current empirical information on the RCS polarizabilities is summarized in Table \[pionpolemp\]. New experiments are presently being performed [@Ahrens_95] or in preparation [@Gorringe_98]. From a theoretical point of view, a precise determination of the electromagnetic polarizabilities is of great importance as a test of ChPT. At the one-loop level, ${\cal O}(p^4)$, of chiral perturbation theory the electromagnetic polarizabilities of the charged pion are entirely determined by an ${\cal O}(p^4)$ counter term [@Holstein_90], $$\label{t:ss:alpha} \bar{\alpha}_E=-\bar{\beta}_M =\frac{e^2}{4\pi} \frac{2}{m_\pi F^2_\pi}(2l^r_5-l^r_6) =(2.68\pm0.42)\times 10^{-4}\,\mbox{fm}^3,$$ where the linear combination $2l^r_5-l^r_6$ is predicted through the decay $\pi^+\to e^+\nu_e\gamma$. Corrections to this result at ${\cal O}(p^6)$ were shown to be reasonably small, namely 12% and 24% of the ${\cal O}(p^4)$ values for $\bar{\alpha}_E$ and $\bar{\beta}_M$, respectively [@Buergi_96]. In particular, the degeneracy $|\bar{\alpha}_E|=|\bar{\beta}_M|$ of Eq. (\[t:ss:alpha\]) is lifted at ${\cal O}(p^6)$.
Presently, the pion VCS reaction is under investigation as part of the Fermilab E781 SELEX experiment, where a 600 GeV pion beam interacts with atomic electrons in nuclear targets [@Moinester_98b]. In principle, the different behavior under the substitution $\pi^-\to\pi^+$ of ${\cal M}_{\mbox{\footnotesize BH}}$ and ${\cal M}_{\mbox{\footnotesize VCS}}$ (see Fig. \[fig:diagrams\]), $${\cal M}_{\mbox{\footnotesize BH}}(\pi^-)= -{\cal M}_{\mbox{\footnotesize BH}}(\pi^+),\quad {\cal M}_{\mbox{\footnotesize VCS}}(\pi^-) = {\cal M}_{\mbox{\footnotesize VCS}}(\pi^+),$$ may be of use in identifying the contributions due to internal structure by comparing the reactions involving a $\pi^-$ and a $\pi^+$ beam for the same kinematics:[^15] $$d\sigma(\pi^+)-d\sigma(\pi^-)\sim 4 Re \left( {\cal M}_{\mbox{\footnotesize BH}}(\pi^+) {\cal M}^\ast_{\mbox{\footnotesize VCS}}(\pi^+)\right).$$ The invariant amplitude for VCS at ${\cal O}(p^4)$ in the framework of chiral perturbation theory has been calculated in Ref. [@Unkmeir_99]. The result was found to be in complete agreement with Eq.  (\[mvcspion\]). Using the procedure developed in Ref. [@Drechsel_97], the generalized polarizabilities of the charged pion in the convention of Ref. [@Guichon_95] were extracted, $$\begin{aligned} \label{t:ss:genalpha} \bar{\alpha}_E(|\vec{q}|^2)&=&-\bar{\beta}_M(|\vec{q}|^2)\nonumber\\ &=& \frac{e^2}{8\pi m_\pi}\sqrt{\frac{m_\pi}{E_\pi}} \left[\frac{4(2l^r_5-l^r_6)}{F^2_\pi}-2 \frac{m_\pi-E_\pi}{m_\pi} \frac{1}{(4\pi F)^2} {J^{(0)}}'\left(2\frac{m_\pi-E_\pi}{m_\pi}\right)\right], \nonumber\\\end{aligned}$$ where $E_\pi=\sqrt{m_\pi^2+|\vec{q}|^2}$ and $${J^{(0)}}'(x)=\frac{1}{x}\left[1-\frac{2}{x\sigma(x)}\ln\left( \frac{\sigma(x)-1}{\sigma(x)+1}\right)\right], \quad \sigma(x)=\sqrt{1-\frac{4}{x}}, \quad x\le 0.$$ The $|\vec{q}|^2$ dependence does not contain any ${\cal O}(p^4)$ parameter, [*i.e.*]{}, it is entirely given in terms of the pion mass and the pion decay constant $F_\pi=92.4$ MeV. 
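The vanishing of the loop term of Eq. (\[t:ss:genalpha\]) at the real-photon point can be checked numerically: writing $x=2(m_\pi-E_\pi)/m_\pi\le 0$, the combination $(x/2)\,{J^{(0)}}'(x)$ tends to zero as $|\vec{q}|\to 0$, so that $\bar{\alpha}_E(0)$ reduces to the counter-term piece, [*i.e.*]{}, the RCS value of Eq. (\[t:ss:alpha\]). A minimal sketch:

```python
import math

# Loop function J^(0)'(x) of Eq. (t:ss:genalpha); real-valued for x < 0.
def sigma(x):
    return math.sqrt(1.0 - 4.0 / x)

def J0prime(x):
    s = sigma(x)
    return (1.0 / x) * (1.0 - (2.0 / (x * s)) * math.log((s - 1.0) / (s + 1.0)))

# The loop correction enters alpha_E(|q|^2) through (x/2) * J0prime(x):
for x in (-1.0, -1e-2, -1e-4):
    print(x, 0.5 * x * J0prime(x))   # -> 0 as x -> 0^-
```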
The numerical prediction is shown in Fig. \[fig:alphapion\]. I would like to thank the organizers, in particular, Petr Bydzovsky for a very efficient organization and a pleasant atmosphere at the summer school. The author would like to thank D. Drechsel, H.W. Fearing, T.R. Hemmert, B.R. Holstein, G. Knöchlein, J.H. Koch, A.Yu. Korchin, A.I. L’vov, A. Metz, B. Pasquini, and C. Unkmeir for a pleasant and fruitful collaboration on various topics related to virtual Compton scattering. It is a pleasure to thank J.M. Friedrich, N. d’Hose, M.A. Moinester, and A. Ocherashvili for useful discussions on experimental issues in VCS. Finally, many thanks are due to B. Pasquini for carefully reading the manuscript. [References]{} A. H. Compton, Phys. Rev. [**21**]{}, 483 (1923). P. Debye, Z. Phys. [**24**]{}, 165 (1923). R. H. Stuewer and M. J. Cooper, in [*Compton Scattering*]{}, edited by B. Williams (McGraw-Hill, New York, 1977). O. Klein and Y. Nishina, Z. Phys. [**52**]{}, 853 (1929). J. L. Powell, Phys. Rev. [**75**]{}, 32 (1949). W. Thirring, Phil. Mag. [**41**]{}, 1193 (1950). N. M. Kroll and M. A. Ruderman, Phys. Rev. [**93**]{}, 233 (1954). F. E. Low, Phys. Rev. [**96**]{}, 1428 (1954). M. Gell-Mann and M. L. Goldberger, Phys. Rev. [**96**]{}, 1433 (1954). P. A. M. Guichon, G. Q. Liu, and A. W. Thomas, Nucl. Phys. [**A591**]{}, 606 (1995). J. L. Friar, [*Electromagnetic Polarizabilities of Hadrons*]{}, in Proceedings of the Workshop on [*Electron-Nucleus Scattering*]{}, EIPC, Marciana Marina, Italy, 7-15 June 1988, edited by A. Fabrocini, S. Fantoni, S. Rosati, and M. Viviani (World Scientific, Singapore, 1989). B. R. Holstein, Comments Nucl. Part. Phys. [**19**]{}, 239 (1990). B. R. Holstein, Comments Nucl. Part. Phys. [**20**]{}, 301 (1992). V. A. Petrunkin, Sov. J. Part. Nucl. [**12**]{}, 271 (1981). A. I. L’vov, Int. J. Mod. Phys. A [**8**]{}, 5267 (1993). D.
Drechsel (Convener) [*et al.*]{}, [*Hadron Polarizabilities and Form Factors*]{}, in Proceedings of the Workshop [*Chiral Dynamics: Theory and Experiment*]{}, Mainz, Germany, September 1997, edited by A. M. Bernstein, D. Drechsel, and Th. Walcher (Springer, Berlin, 1998). P. A. M. Guichon and M. Vanderhaeghen, Prog. Part. Nucl. Phys. [**41**]{}, 125 (1998). S. Scherer, G.I. Poulis, and H. W. Fearing, Nucl. Phys. [**A570**]{}, 686 (1994). J. J. Sakurai, [*Modern Quantum Mechanics*]{} (Addison-Wesley, Redwood City, 1985). V. A. Petrunkin, Nucl. Phys. [**55**]{}, 197 (1964). T. E. O. Ericson and J. Hüfner, Nucl. Phys. [**B57**]{}, 604 (1973). J. L. Friar, Ann. Phys. (N.Y.) [**95**]{}, 170 (1975). B. K. Jennings, Phys. Lett. B [**196**]{}, 307 (1987). J. C. Ward, Phys. Rev. [**78**]{}, 182 (1950). Y. Takahashi, Nuovo Cim. [**6**]{}, 371 (1957). T. E. Rudy, H. W. Fearing, and S. Scherer, Phys. Rev. C [**50**]{}, 447 (1994). G. Barton, [*Introduction to Dispersion Techniques in Field Theory*]{} (Benjamin, New York, 1965), Chap. 8-3. A. M. Bincer, Phys. Rev. [**118**]{}, 855 (1960). H. W. L. Naus and J. H. Koch, Phys. Rev. C [**36**]{}, 2459 (1987). J. S. R. Chisholm, Nucl. Phys. [**26**]{}, 469 (1961). S. Kamefuchi, L. O’Raifeartaigh, and A. Salam, Nucl. Phys. [**28**]{}, 529 (1961). S. Coleman, J. Wess, and B. Zumino, Phys. Rev. [**177**]{}, 2239 (1969). S. Scherer and H. W. Fearing, Phys. Rev. D [**52**]{}, 6445 (1995). J. D. Bjorken and S. D. Drell, [*Relativistic Quantum Mechanics*]{} (McGraw-Hill, New York, 1964). E. Kazes, Nuovo Cim. [**13**]{}, 1226 (1959). S. Scherer, A. Yu. Korchin, and J. H. Koch, Phys. Rev. C [**54**]{}, 904 (1996). H. W. Fearing and S. Scherer, Few-Body Syst. [**23**]{}, 111 (1998). P. C. Hecking and G. F. Bertsch, Phys. Lett. [**99B**]{}, 237 (1981). E. M. Nyman, Phys. Lett. [**142B**]{}, 388 (1984). A. Schäfer, B. Müller, D. Vasak, and W. Greiner, Phys. Lett. [**143B**]{}, 323 (1984). R. Weiner and W. Weise, Phys. Lett. 
[**159B**]{}, 85 (1985). M. Chemtob, Nucl. Phys. [**A473**]{}, 613 (1987). N. N. Scoccola and W. Weise, Phys. Lett. B [**232**]{}, 287 (1989). V. Bernard, N. Kaiser, J. Kambor, and U.-G. Mei[ß]{}ner, Nucl. Phys. [**B388**]{}, 314 (1992). Z. Li, Phys. Rev. D [**48**]{}, 3070 (1993). T. R. Hemmert, B. R. Holstein, J. Kambor, Phys. Rev. D [**55**]{}, 5598 (1997). F. J. Federspiel [*et. al.*]{}, Phys. Rev. Lett. [**67**]{}, 1511 (1991). A. Zieger [*et al.*]{}, Phys. Lett. B [**278**]{}, 34 (1992). E. L. Hallin [*et al.*]{}, Phys. Rev. C [**48**]{}, 1497 (1993). B. E. MacGibbon [*et al.*]{}, Phys. Rev. C [**52**]{}, 2097 (1995). J. Schmiedmayer, H. Rauch, and P. Riehs, Phys. Rev. Lett. [**61**]{}, 1065 (1988). L. Koester, W. Waschkowski, and J. Meier, Z. Phys. A [**329**]{}, 229 (1988). K. W. Rose [*et al.*]{}, Phys. Lett. B [**234**]{}, 460 (1990). K. W. Rose [*et al.*]{}, Nucl. Phys. [**A514**]{}, 621 (1990). J. Schmiedmayer, P. Riehs, J. A. Harvey, and N. W. Hill, Phys. Rev. Lett. [**66**]{}, 1015 (1991). L. Koester [*et al.*]{}, Phys. Rev. C [**51**]{}, 3363 (1995). Particle Data Group, Eur. Phys. J. C [**3**]{}, 1 (1998). A. M. Baldin, Nucl. Phys. [**18**]{}, 318 (1960). M. Damashek and F. J. Gilman, Phys. Rev. D [**1**]{}, 1319 (1970). A. I. L’vov, V. A. Petrunkin, and S. A. Startsev, Sov. J. Phys. [**29**]{}, 651 (1979). D. Babusci, G. Giordano, and G. Matone, Phys. Rev. C [**57**]{}, 291 (1998). S. Ragusa, Phys. Rev. D [**47**]{}, 3757 (1993); [**49**]{}, 3157 (1994). T. R. Hemmert, B. R. Holstein, J. Kambor, and G. Knöchlein, Phys. Rev. D [**57**]{}, 5756 (1998). D. Drechsel, G. Krein, and O. Hanstein, Phys. Lett. B [**420**]{}, 248 (1998). J. Tonnison, A. M. Sandorfi, S. Hoblit, and A. M. Nathan, Phys. Rev. Lett. [**80**]{}, 4382 (1998). D. Babusci, G. Giordano, A. I. L’vov, G. Matone, and A. M. Nathan, Phys. Rev. C [**58**]{}, 1013 (1998). E. M. Nyman, Nucl. Phys. [**A154**]{}, 97 (1970). P. C. Tiemeijer and J. A. Tjon, Phys. Rev. C [**42**]{}, 599 (1990). S. 
Kondratyuk, G. Martinus, O. Scholten, Phys. Lett. B [**418**]{}, 20 (1998). S. Weinberg, Physica [**96A**]{}, 327 (1979). J. Gasser and H. Leutwyler, Ann. Phys. (N.Y.) [**158**]{}, 142 (1984). J. Gasser and H. Leutwyler, Nucl. Phys. [**B250**]{}, 465 (1985). H. W. Fearing and S. Scherer, Phys. Rev. D [**53**]{}, 315 (1996). S. Scherer and H. W. Fearing, Phys. Rev. C [**51**]{}, 359 (1995). H. W. Fearing, Phys. Rev. Lett. [**81**]{}, 758 (1998). R. A. Berg and C. N. Lindner, Nucl. Phys. [**26**]{}, 259 (1961). J. F. J. van den Brand [*et al.*]{}, Phys. Rev. D [**52**]{}, 4868 (1995). G. Audit [*[et al.]{}*]{}, CEBAF Report No. PR 93-050, 1993. J. F. J. van den Brand [*[et al.]{}*]{}, CEBAF Report No. PR 94-011, 1994. G. Audit [*[et al.]{}*]{}, MAMI proposal “Nucleon structure study by Virtual Compton Scattering.” 1995. J. Shaw [*[et al.]{}*]{}, MIT–Bates proposal No. 97-03, 1997. M. A. Moinester and V. Steiner, hep-ex/9801008 (1998). M. A. Moinester, A. Ocherashvili, private communication. G. G. Simon, Ch. Schmitt, F. Borkowski, and V. H. Walther, Nucl. Phys. [**A333**]{}, 381 (1980). A. Metz and D. Drechsel, Z. Phys. A [**356**]{}, 351 (1996). D. Drechsel, G. Knöchlein, A. Metz, and S. Scherer, Phys. Rev. C [**55**]{}, 424 (1997). D. Drechsel, G. Knöchlein, A. Yu. Korchin, A. Metz, and S. Scherer, Phys. Rev. C [**57**]{}, 941 (1998). B. Pasquini, D. Drechsel, and S. Scherer, in preparation. D. Drechsel, G. Knöchlein, A. Yu. Korchin, A. Metz, and S. Scherer, Phys. Rev. C [**58**]{}, 1751 (1998). G. Q. Liu, A. W. Thomas, and P. A. M. Guichon, Aust. J. Phys. [**[49]{}**]{}, 905 (1996). M. Vanderhaeghen, Phys. Lett. B [**[368]{}**]{}, 13 (1996). T. R. Hemmert, B. R. Holstein, G. Knöchlein, and S. Scherer, Phys. Rev. D [**55**]{}, 2630 (1997). T. R. Hemmert, B. R. Holstein, G. Knöchlein, and S. Scherer, Phys. Rev. Lett. [**79**]{}, 22 (1997). A. Metz and D. Drechsel, Z. Phys. A [**359**]{}, 165 (1997). M. Kim and D.-P. Min, Seoul National University Report No. 
SNUTP-97-046, hep-ph/9704381, 1997. B. Pasquini and G. Salmè, Phys. Rev. C [**57**]{}, 2589 (1998). A. Yu. Korchin and O. Scholten, Phys. Rev. C [**58**]{}, 1098 (1998). J. Gasser, M. E. Sainio, and A. Švarc, Nucl. Phys. [**B307**]{}, 779 (1988). E. Jenkins and A. V. Manohar, Phys. Lett. B [**255**]{}, 558 (1991). L. L. Foldy and S. A. Wouthuysen, Phys. Rev. [**78**]{}, 29 (1950). C. Unkmeir, S. Scherer, A.I. L’vov, D. Drechsel, in preparation. J. Wess and B. Zumino, Phys. Lett. [**37B**]{}, 95 (1971). E. Witten, Nucl. Phys. [**B160**]{}, 57 (1979). G. Ecker and M. Mojžiš, Phys. Lett. B [**365**]{}, 312 (1996). M. Gell-Mann and M. Lévy, Nuovo Cim. [**16**]{}, 705 (1960). Yu. M. Antipov [*et al.*]{}, Phys. Lett. [**121B**]{}, 445 (1983). Yu. M. Antipov [*et al.*]{}, Z. Phys. C [**26**]{}, 495 (1985). T. A. Aibergenov [*et al.*]{}, Czech. J. Phys. [**B36**]{}, 948 (1986). MARK II Collaboration, J. Boyer [*et al.*]{}, Phys. Rev. D [**42**]{}, 1350 (1990). CELLO Collaboration, H.-J. Behrend [*et al.*]{}, Z. Phys. C [**56**]{}, 381 (1992). J. Ahrens [*et al.*]{}, Few-Body Syst. Suppl. [**9**]{}, 449 (1995). T. Gorringe (spokesman), TRIUMF Expt. E838 (1998). U. Bürgi, Phys. Lett. B [**377**]{}, 147 (1996); Nucl. Phys. [**B479**]{}, 392 (1996). particle $\sigma_T$ ---------- ---------------- electron 0.665 barn pion 8.84 $\mu$barn proton 197 nbarn : \[tcs\] Thomson cross section $\sigma_T$ for the electron, charged pion, and the proton. ---------------- ------------------------ ------------------ ----------------- ------------------ ----------------- Ref. 
description $\bar{\alpha}_E$ $\bar{\beta}_M$ $\bar{\alpha}_E$ $\bar{\beta}_M$ proton proton neutron neutron [@Hecking_81] MIT Bag 7.1 2.6 4.7 3.4 [@Nyman_84] Skyrme model 2 2 [@Schaefer_84] MIT Bag 10.8 2.3 10.8 1.5 [@Weiner_85] Chiral Quark Model 7-9 $\leq 2$ 7-9 $\leq 2$ [@Chemtob_87] Skyrme 8.3 8.5 8.3 8.5 25.2 1.7 25.2 1.7 [@Scoccola_89] Chiral Soliton 13.4 -1.1 13.4 -1.1 [@Bernard_92] HBChPT ${\cal O}(p^3)$ 12.8 1.3 12.8 1.3 [@Li_93] NRQM 7.25 12 7.25 12 [@Hemmert_97] $\epsilon^3$ ChPT 17.1 9.2 17.1 9.2 ---------------- ------------------------ ------------------ ----------------- ------------------ ----------------- : \[rcspoln\] Some theoretical predictions for the electromagnetic polarizabilities of the nucleon in units of $10^{-4}\,\mbox{fm}^3$. Ref. description $\bar{\alpha}_E^p$ $\bar{\beta}_M^p$ ------------------ ------------------------ ----------------------------------- ---------------------------------- [@Federspiel_91] $\gamma p\to \gamma p$ $10.9\pm 2.2\pm 1.3$ $3.3\pm 2.2\pm 1.3$ [@Zieger_92] $\gamma p\to \gamma p$ $10.62^{+1.25+1.07}_{-1.19-1.03}$ $3.58^{+1.19+1.03}_{-1.25-1.07}$ [@Hallin_93] $\gamma p\to \gamma p$ $9.8\pm 0.4\pm 1.1$ $4.4\pm 0.4\pm 1.1$ [@MacGibbon_95] $\gamma p\to\gamma p$ $12.5\pm 0.6\pm 0.9$ $1.7\pm 0.6\pm 0.9$ [@MacGibbon_95] global average $12.1\pm 0.8\pm 0.5$ $2.1\pm 0.8\pm 0.5$ : \[rcspolpemp\] Empirical numbers for the electromagnetic polarizabilities of the proton in units of $10^{-4}\,\mbox{fm}^3$. The polarizabilities have been determined using the Baldin sum rule, Eq. (\[baldinsumrule\]), with $\bar{\alpha}_p+\bar{\beta}_p=14.2\pm 0.5$. Due to this constraint the errors of $\bar{\alpha}_p$ and $\bar{\beta}_p$ are anticorrelated. Ref. 
description                                              $\bar{\alpha}_E^n$
-------------------- -------------------------------------------------------- -----------------------
[@Schmiedmayer_88]   low-energy $n$Pb and $n$C scattering                     $12\pm 10$
[@Koester_88]        low-energy $n$Pb and $n$Bi scattering                    $8\pm 10$
[@Rose_90a]          quasi-free Compton scattering: $\gamma d\to \gamma'np$   $11.7^{+4.3}_{-11.7}$
[@Rose_90b]          quasi-free Compton scattering: $\gamma d\to \gamma'np$   $10.7^{+3.3}_{-10.7}$
[@Schmiedmayer_91]   low-energy $n$Pb scattering                              $12.0\pm 1.5\pm 2.0$
[@Koester_95]        low-energy $n$Pb scattering                              $0 \pm 5$
[@PDG_98]            PDG average                                              $9.8^{+1.9}_{-2.3}$

: \[rcspolnemp\] Empirical numbers for the electric polarizability of the neutron in units of $10^{-4}\,\mbox{fm}^3$.

--------------------------------------------------------------------
$A_1$   $-\frac{1}{M}+\frac{z}{M^2}|\vec{q}| -\left(\frac{1}{8M^3}+\frac{r^2_E}{6M}-\frac{\kappa}{4M^3} -\frac{4\pi\bar{\alpha}_E}{e^2}\right) |\vec{q}\,'|^2 +\left(\frac{1}{8M^3}+\frac{r^2_E}{6M}-\frac{z^2}{M^3}+ \frac{(1+\kappa)\kappa}{4M^3}\right)|\vec{q}|^2$
------- ------------------------------------------------------------
$A_2$   $\frac{1+2\kappa}{2M^2}|\vec{q}\,'| -\frac{\kappa^2}{4M^3}|\vec{q}\,'|^2 +\frac{z\kappa}{2M^3}|\vec{q}\,'||\vec{q}| -\frac{(1+\kappa)^2}{4M^3}|\vec{q}|^2$
$A_3$   $-\frac{1}{M^2}|\vec{q}| +\left(\frac{1}{4M^3}+\frac{4\pi \bar{\beta}_M}{e^2} \right)|\vec{q}\,'||\vec{q}| +\frac{(3-2\kappa-\kappa^2)z}{4M^3}|\vec{q}|^2$
$A_4$   $-\frac{(1+\kappa)^2}{2M^2}|\vec{q}| -\frac{(2+\kappa)\kappa}{4M^3}|\vec{q}\,'||\vec{q}| +\frac{(1+\kappa)^2 z}{4M^3}|\vec{q}|^2$
$A_5$   $-\frac{N_i G_M(Q^2)}{(E_i+z|\vec{q}|)(E_i+M)} \frac{|\vec{q}|^2}{|\vec{q}\,'|} +\frac{(1+\kappa)\kappa}{4M^3}|\vec{q}|^2$
$A_6$   $\frac{1+\kappa}{2M^2}|\vec{q}\,'| -\frac{(1+\kappa)\kappa}{4M^3}|\vec{q}\,'|^2 -\frac{(1+\kappa)z}{2M^3}|\vec{q}\,'||\vec{q}|$
$A_7$   $-\frac{1+3\kappa}{4M^3}|\vec{q}\,'||\vec{q}|$
$A_8$   $\frac{1+3\kappa}{4M^3}|\vec{q}\,'||\vec{q}|$
--------------------------------------------------------------------

: \[tablet\] Transverse functions $A_i$ of Eq. (\[mt\]) in the CM frame. The functions are expanded in terms of $|\vec{q}\,'|$ and $|\vec{q}|$ of the final real and initial virtual photon, respectively. $N_i=\sqrt{\frac{E_i+M}{2M}}$ is the normalization factor of the initial spinor, where $E_i=\sqrt{M^2+|\vec{q}|^2}$. $G_E(q^2)=F_1(q^2)+\frac{q^2}{4M^2}F_2(q^2)$ and $G_M(q^2)=F_1(q^2) +F_2(q^2)$ are the electric and magnetic Sachs form factors, respectively. $r^2_E=6 G_E'(0)=(0.74\pm 0.02)\, \mbox{fm}^2$ is the electric mean square radius [@Simon_80] and $\kappa=1.79$ the anomalous magnetic moment of the proton. $Q^\mu$ is defined as $q^\mu|_{|\vec{q}\,'|=0}=(M-E_i,\vec{q})$, $Q^2=-2M(E_i-M)$, and $z=\hat{q}\,'\cdot\hat{q}$. $\bar{\alpha}_E$ and $\bar{\beta}_M$ are the electric and magnetic Compton polarizabilities of the proton, respectively.

------------------------------------------------------------------------------------------
$A_9$      $\frac{N_i G_E(Q^2)}{(E_i+z|\vec{q}|)(E_i+M)}\frac{|\vec{q}|^2}{|\vec{q}\,'|} -\frac{1}{M} +\frac{z}{M^2}|\vec{q}| -\left(\frac{1}{8M^3}+\frac{r^2_E}{6M}-\frac{\kappa}{4M^3} -\frac{4\pi\bar{\alpha}_E}{e^2}\right) |\vec{q}\,'|^2 +\left(\frac{1}{8M^3}+\frac{r^2_E}{6M}-\frac{z^2}{M^3}\right) |\vec{q}|^2$
---------- -------------------------------------------------------------------------------
$A_{10}$   $-\frac{1+3\kappa}{4M^3}|\vec{q}\,'||\vec{q}|$
$A_{11}$   $-\frac{1+2\kappa}{2M^2}|\vec{q}\,'| +\frac{\kappa^2}{4M^3}|\vec{q}\,'|^2 +\frac{(1+\kappa)z}{4M^3}|\vec{q}\,'||\vec{q}| +\frac{1+2\kappa}{4M^3}|\vec{q}|^2$
$A_{12}$   $\frac{(1+\kappa)z}{2M^2}|\vec{q}\,'| -\frac{(1+\kappa)\kappa z}{4M^3}|\vec{q}\,'|^2 -\frac{(1+\kappa)(2z^2-1)}{4M^3}|\vec{q}\,'||\vec{q}| -\frac{(1+\kappa)z}{4M^3}|\vec{q}|^2$
------------------------------------------------------------------------------------------

: \[tablel\] Longitudinal functions $A_i$ of Eq. (\[ml\]) in the CM frame.
See caption of Table \[tablet\].

nucleon                       RCS ($\gamma\gamma$)   VCS ($\gamma^\ast\gamma$)   VCS ($\gamma^\ast\gamma^\ast$)
----------------------------- ---------------------- --------------------------- --------------------------------
total number of amplitudes    6                      12                          18
spin independent ($=$ pion)   2                      3                           5
spin dependent                4                      9                           13

: \[numamp\] Number of independent amplitudes for the description of the general VCS tensor.

[|c|c|c|]{}
$J^P$&final real photon&initial virtual photon\
$\frac{1}{2}^-$&E1&C1,E1\
$\frac{3}{2}^-$&E1&C1,E1,M2\
$\frac{1}{2}^+$&M1&C0,M1\
$\frac{3}{2}^+$&M1&C2,E2,M1\

reaction                                $\bar{\alpha}_E$             $\bar{\beta}_M$                                                                                   $\bar{\alpha}_E+\bar{\beta}_M$
--------------------------------------- ---------------------------- ------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------
$\pi^-A\rightarrow A\pi^-\gamma$        $6.8\pm 1.4$ [@Antipov_83]   $-7.1 \pm 2.8_{\mbox{\footnotesize stat.}} \pm 1.8_{\mbox{\footnotesize syst.}}$ [@Antipov_85]   $1.4\pm 3.1_{\mbox{\footnotesize stat.}}\pm 2.5_{\mbox{\footnotesize syst.}}$ [@Antipov_85]
$\gamma p \rightarrow \gamma \pi^+ n$   $20\pm 12$ [@Aibergenov_86]   -                                                                                                 -

: Empirical numbers for the electromagnetic polarizabilities of the pion in units of $10^{-4}\,\mbox{fm}^3$. \[pionpolemp\]

[^1]: Lectures at the 11th Indian-Summer School on Intermediate Energy Physics [*Mesons and Light Nuclei*]{}, Prague, September 7 - 11, 1998, Czech Republic.
[^2]: Except for a few cases, we will use the same symbols for quantum-mechanical operators such as $\hat{\vec{p}}$ and corresponding eigenvalues $\vec{p}$.

[^3]: We use Heaviside-Lorentz units, $e>0$, $\alpha=e^2/4\pi\approx 1/137$.

[^4]: Note the factor of 2 for two contractions.

[^5]: It is advantageous to discuss these steps using box normalization instead of $\delta$-function normalization. See Chap. 7.11 of Ref. [@Sakurai_85].

[^6]: In actual calculations, it is highly recommended not to specify the gauge and to use gauge invariance as a check of the final result.

[^7]: The factor $4\pi$ results from our use of the Heaviside-Lorentz units instead of the Gaussian system.

[^8]: $\pi^{+/-}(x)$ destroys a $\pi^{+/-}$ or creates a $\pi^{-/+}$.

[^9]: In fact, $Z_J=1$ due to gauge invariance.

[^10]: Note that both equations are related by taking the adjoint.

[^11]: In the following, we will consider the proton case.

[^12]: We use the convention of Bjorken and Drell [@Bjorken_64].

[^13]: We omit the superscript [*irr*]{} and the subscript $R$, respectively.

[^14]: Of course, using Coulomb gauge $\epsilon^{\mu}=(0,\vec{\epsilon})$, $\epsilon'^{\mu}=(0,\vec{\epsilon}\,')$, and performing the calculation in the lab frame ($p_i^{\mu}=(M_{\pi},0)$), the additional contribution vanishes, since $p_i\cdot\epsilon= p_i\cdot\epsilon'=0$. However, this is a gauge-dependent statement and thus not true for a general gauge.

[^15]: This argument works for any particle which is not its own antiparticle, such as the $K^+$ or $K^0$. Of course, one could also employ the substitution $e^-\to e^+$.
---
abstract: |
  We study the foundation of space-time theory in the framework of first-order logic (FOL). Since the foundation of mathematics has been successfully carried through (via set theory) in FOL, it is not entirely impossible to do the same for space-time theory (or relativity). First we recall a simple and streamlined FOL-axiomatization ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$ of special relativity from the literature. ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$ is complete with respect to questions about inertial motion. Then we ask ourselves whether we can prove the usual relativistic properties of accelerated motion (e.g., clocks in acceleration) in ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$. As it turns out, this is practically equivalent to asking whether ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$ is strong enough to “handle” (or treat) accelerated observers. We show that there is a mathematical principle called induction (${\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}}$) coming from real analysis which needs to be added to ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$ in order to handle situations involving relativistic acceleration. We present an extended version ${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}}$ of ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$ which is strong enough to handle accelerated motion, in particular, accelerated observers. Among other results, we show that the Twin Paradox becomes provable in ${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}}$, but it is not provable without ${\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}}$.

  Key words: twin paradox, relativity theory, accelerated observers, first-order logic, axiomatization, foundation of relativity theory
author:
- 'Judit X. Madarász'
- István Németi
- Gergely Székely
title: TWIN PARADOX AND THE LOGICAL FOUNDATION OF RELATIVITY THEORY
---

[^1]

INTRODUCTION
============

The idea of elaborating the foundation of space-time (or foundation of relativity) in a spirit analogous with the rather successful foundation of mathematics (FOM) was initiated by several authors including, e.g., David Hilbert [@Hi02] or leading contemporary logician Harvey Friedman [@FriFOM1; @FriFOM2]. Foundation of mathematics has been carried through strictly within the framework of first-order logic (FOL), for certain reasons. The same reasons motivate the effort of keeping the foundation of space-time also inside FOL.
One of the reasons is that staying inside FOL helps us to avoid tacit assumptions; another reason is that FOL has a complete inference system while higher-order logic cannot have one by Gödel’s incompleteness theorem, see e.g., Väänänen [@V01 p.505] or [@pezsgo Appendix]. For more motivation for staying inside FOL (as opposed to higher-order logic), cf. e.g., Ax [@Ax], Pambuccian [@Pam], [@pezsgo Appendix 1: “Why exactly FOL”], [@AMNsamp], but the reasons in Väänänen [@V01], Ferreirós [@FeBSL], or Woleński [@Wol] also apply. Following the above motivation, we begin at the beginning, namely first we recall a streamlined FOL axiomatization ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$ of special relativity theory from the literature. ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$ is complete with respect to (w.r.t.) questions about inertial motion. Then we ask ourselves whether we can prove the usual relativistic properties of accelerated motion (e.g., clocks in acceleration) in ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$. As it turns out, this is practically equivalent to asking whether ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$ is strong enough to “handle” (or treat) accelerated observers. We show that there is a mathematical principle called induction (${\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}}$) coming from real analysis which needs to be added to ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$ in order to handle situations involving relativistic acceleration. We present an extended version ${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}}$ of ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$ which is strong enough to handle accelerated clocks, in particular, accelerated observers. We show that the so-called Twin Paradox becomes provable in ${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}}$. It also becomes possible to introduce Einstein’s equivalence principle for treating gravity as acceleration and to prove gravitational time dilation, i.e. that gravity “causes time to run slow”. What we are doing here is not unrelated to Field’s “Science without numbers” programme and to “reverse mathematics” in the sense of Harvey Friedman and Steven Simpson. Namely, we systematically ask ourselves which mathematical principles or assumptions (like, e.g., ${\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}}$) are really needed for proving certain observational predictions of relativity.
(It was this striving for parsimony in axioms or assumptions which we alluded to when we mentioned, way above, that ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$ was “streamlined”.) The interplay between logic and relativity theory goes back to around 1920 and has been playing a non-negligible role in the works of researchers like Reichenbach, Carnap, Suppes, Ax, Szekeres, Malament, Walker, and of many other contemporaries.[^2] In Section \[ax-s\] we recall the FOL axiomatization ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$, complete w.r.t. questions concerning inertial motion. There we also introduce an extension ${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}}$ of ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$ (still inside FOL) capable of handling accelerated clocks and also accelerated observers. In Section \[main-s\] we formalize the Twin Paradox in the language of FOL. We formulate Theorems \[thmTwp\], \[thmEq\] stating that the Twin Paradox is provable from ${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}}$, and that the same holds for related questions about accelerated clocks. Theorems \[thmNoIND\], \[thmMO\] state that ${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel_0}}}}$ is not sufficient for this; more concretely, the induction axiom schema ${\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}}$ in ${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}}$ is needed. In Sections \[an-s\], \[proofss\] we prove these theorems. Motivation for the research direction reported here is nicely summarized in Ax [@Ax], Suppes [@Sup68]; cf. also the introduction of [@pezsgo]. The works of Harvey Friedman [@FriFOM1; @FriFOM2] present a rather convincing general perspective (and motivation) for the kind of work reported here.

AXIOMATIZING SPECIAL RELATIVITY IN FOL {#ax-s}
======================================

In this paper we deal with the kinematics of relativity only, i.e. we deal with motion of [*bodies*]{} (or [*test-particles*]{}). The motivation for our choice of vocabulary (for special relativity) is summarized as follows. We will represent motion as changing spatial location in time. To do so, we will have reference-frames for coordinatizing events and, for simplicity, we will associate reference-frames with special bodies which we will call [*observers*]{}.
We visualize an observer-as-a-body as “sitting” in the origin of the space part of its reference-frame, or equivalently, “living” on the time-axis of the reference-frame. We will distinguish [*inertial*]{} observers from non-inertial (i.e. accelerated) ones. There will be another special kind of bodies which we will call [*photons*]{}. For coordinatizing events we will use an arbitrary [*ordered field*]{} in place of the field of the real numbers. Thus the elements of this field will be the “[*quantities*]{}” which we will use for marking time and space. Allowing arbitrary ordered fields in place of the reals increases the flexibility of our theory and minimizes the amount of our mathematical presuppositions. Cf. e.g., Ax [@Ax] for further motivation in this direction. Similar remarks apply to our flexibility-oriented decisions below, e.g., keeping the number $d$ of space-time dimensions a variable. Using coordinate systems (or reference-frames) instead of a single observer-independent space-time structure is only a matter of didactical convenience and visualization; furthermore, it also helps us in weeding out unnecessary axioms from our theories. Motivated by the above, we now turn to fixing the FOL language of our axiom systems. The first occurrences of concepts used in this work are set in boldface letters to make them easier to find. Throughout this work, if-and-only-if is abbreviated to [**iff**]{}. Let us fix a natural number $d\ge 2$ for the dimension of the space-time that we are going to axiomatize.
Our first-order language contains the following non-logical symbols:

- unary relation symbols ${{\mathrm}{B}}$ (for [**Bodies**]{}), ${{\mathrm}{Ob}}$ (for [**Observers**]{}), ${{\mathrm}{IOb}}$ (for [**Inertial Observers**]{}), ${{\mathrm}{Ph}}$ (for [**Photons**]{}) and ${{\mathrm}{F}}$ (for [**quantities**]{} which are going to be elements of a Field),

- binary function symbols $+$, $\cdot$ and a binary relation symbol $\le $ (for the field operations and the ordering on ${{\mathrm}{F}}$), and

- a $(2+d)$-ary relation symbol ${{\mathrm}{W}}$ (for the [**World-view relation**]{}).

The bodies will play the role of the “main characters” of our space-time models and they will be “observed” (coordinatized using the quantities) by the observers. This observation will be coded by the world-view relation ${{\mathrm}{W}}$. Our bodies and observers are basically the same as the “test particles” and the “reference-frames”, respectively, in some of the literature. We read ${{\mathrm}{B}}(x), {{\mathrm}{Ob}}(x), {{\mathrm}{IOb}}(x), {{\mathrm}{Ph}}(x), {{\mathrm}{F}}(x)$ as “$x$ is a body”, “$x$ is an observer”, “$x$ is an inertial observer”, “$x$ is a photon”, “$x$ is a field-element”. We use the world-view relation ${{\mathrm}{W}}$ to talk about coordinatization, by reading ${{\mathrm}{W}}(x,y,z_1,\ldots, z_d)$ as “observer $x$ observes (or sees) body $y$ at coordinate point $\langle z_1,\ldots,z_d\rangle$”. This kind of observation has no connection with seeing via photons; it simply means coordinatization. ${{\mathrm}{B}}(x), {{\mathrm}{Ob}}(x), {{\mathrm}{IOb}}(x), {{\mathrm}{Ph}}(x), {{\mathrm}{F}}(x), {{\mathrm}{W}}(x,y,z_1,\ldots, z_d), x=y, x\leq y$ are the so-called [atomic formulas]{} of our first-order language, where $x,y,z_1,\dots,z_d$ can be arbitrary variables or terms built up from variables by using the field-operations “$+$” and “$\cdot$”.
The [**formulas**]{} of our first-order language are built up from these atomic formulas by using the logical connectives [*not*]{} ($\lnot$), [*and*]{} ($\land$), [*or*]{} ($\lor$), [*implies*]{} ($\Longrightarrow$), [*if-and-only-if*]{} ($\Longleftrightarrow$) and the quantifiers [*exists*]{} $x$ ($\exists x$) and [*for all $x$*]{} ($\forall x$) for every variable $x$. Usually we use the variables $m,k,h$ to denote observers, $b$ to denote bodies, $ph$ to denote photons and $p_1,\dots,q_1,\dots$ to denote quantities (i.e. field-elements). We write $p$ and $q$ in place of $p_1,\dots,p_d$ and $q_1,\dots,q_d$, e.g., we write ${{\mathrm}{W}}(m,b,p)$ in place of ${{\mathrm}{W}}(m,b,p_1,\dots,p_d)$, and we write $\forall p$ in place of $\forall p_1,\dots,p_d$ etc. The [**models**]{} of this language are of the form $$\mathfrak{M} = \langle U; {{\mathrm}{B}}, {{\mathrm}{Ob}}, {{\mathrm}{IOb}}, {{\mathrm}{Ph}}, {{\mathrm}{F}},+,\cdot,\leq,{{\mathrm}{W}}\rangle,$$ where $U$ is a nonempty set, ${{\mathrm}{B}},{{\mathrm}{Ob}},{{\mathrm}{IOb}},{{\mathrm}{Ph}},{{\mathrm}{F}}$ are unary relations on $U$, etc. A unary relation on $U$ is just a subset of $U$. Thus we use ${{\mathrm}{B}},{{\mathrm}{Ob}}$ etc. as sets as well, e.g., we write $m\in {{\mathrm}{Ob}}$ in place of ${{\mathrm}{Ob}}(m)$. Having fixed our language, we now turn to formulating an axiom system for special relativity in this language. We will make special efforts to keep all our axioms inside the above-specified first-order logic language of $\mathfrak{M}$. Throughout this work, $i$, $j$ and $n$ denote positive integers. ${{\mathrm}{F}}^n:={{\mathrm}{F}}\times\ldots\times {{\mathrm}{F}}$ ($n$-times) is the set of all $n$-tuples of elements of ${{\mathrm}{F}}$. If $a\in {{\mathrm}{F}}^n$, then we assume that $a=\langle a_1,\ldots,a_n\rangle$, i.e. $a_i\in{{\mathrm}{F}}$ denotes the $i$-th component of the $n$-tuple $a$. The following axiom is always assumed and is part of every axiom system we propose.
${\textcolor{axcolor}{\ensuremath{\mathsf{AxFrame}}}}$: ${{\mathrm}{Ob}}\cup {{\mathrm}{Ph}}\subseteq {{\mathrm}{B}}$, ${{\mathrm}{IOb}}\subseteq {{\mathrm}{Ob}}$, $U={{\mathrm}{B}}\cup {{\mathrm}{F}}$, ${{\mathrm}{W}}\subseteq {{\mathrm}{Ob}}\times {{\mathrm}{B}}\times {{\mathrm}{F}}^d$, $+$ and $\cdot$ are binary operations on ${{\mathrm}{F}}$, $\le$ is a binary relation on ${{\mathrm}{F}}$ and $\left< {{\mathrm}{F}}; +,\cdot, \le \right>$ is a [**Euclidean ordered field**]{}, i.e. a linearly ordered field in which positive elements have square roots.[^3] In pure first-order logic, the above axiom would look like $\forall x\enskip\big[\big({{\mathrm}{Ob}}(x)\vee {{\mathrm}{Ph}}(x)\big)\Longrightarrow {{\mathrm}{B}}(x)\big]$ etc. In the present section we will not write out the purely first-order logic translations of our axioms since they will be straightforward to obtain. The first-order logic translations of our next three axioms ${\textcolor{axcolor}{\ensuremath{\mathsf{AxSelf^-}}}}$, ${\textcolor{axcolor}{\ensuremath{\mathsf{AxPh}}}}$, ${\textcolor{axcolor}{\ensuremath{\mathsf{AxEv}}}}$ can be found in the Appendix. Let $\mathfrak{M}$ be a model in which ${\textcolor{axcolor}{\ensuremath{\mathsf{AxFrame}}}}$ is true. Let $\mathfrak{F}:=\left< {{\mathrm}{F}}; +, \cdot, \le\right>$ denote the [**ordered field reduct**]{} of $\mathfrak{M}$. Here we list the definitions and notation that we are going to use in formulating our axioms. Let $0,1,-,/,\sqrt{\phantom{i}}$ be the usual field operations which are definable from “$+$” and “$\cdot$”. We use the vector-space structure of ${{\mathrm}{F}}^n$, i.e. if $p,q\in {{\mathrm}{F}}^n$ and $\lambda\in {{\mathrm}{F}}$, then $p+q,-p, \lambda p\in {{\mathrm}{F}}^n$; and $o:=\langle 0,\ldots,0\rangle$ denotes the [**origin**]{}. The [**Euclidean-length**]{} of $a\in {{\mathrm}{F}}^n$ is defined as [ $|a| :=\sqrt{\resizebox{!}{8pt}{$a_1^2+\ldots+a_n^2$}}$]{}. The set of positive elements of ${{\mathrm}{F}}$ is denoted by ${{\mathrm}{F}}^+:=\{x\in {{\mathrm}{F}}:x>0\}$. Let $p,q\in {{\mathrm}{F}}^d$. We use the notation $p_s:=\langle p_2,\ldots, p_d\rangle$ for the [**space component**]{} of $p$ and $p_t:=p_1$ for the [**time component**]{} of $p$.
We define the [**line**]{} through $p$ and $q$ as $pq:=\{ q+{\lambda}(p-q): {\lambda}\in {{\mathrm}{F}}\}$. The set of [**lines**]{} is defined as $\text{\it Lines}:=\{pq : p\neq q \;\land\; p,q \in {{\mathrm}{F}}^d\}$. The [**slope**]{} of $p$ is defined as $\text{\it slope}(p):= {\left| p_s \right|}/{\left| p_t \right|}$ if $p_t\neq 0$ and is undefined otherwise; furthermore $\text{\it slope}(pq):=\text{\it slope}(p-q)$ if $p_t\neq q_t$ and is undefined otherwise. ${{\mathrm}{F}}^d$ is called the [**coordinate system**]{} and its elements are referred to as [**coordinate points**]{}. The [**event**]{} (the set of bodies) observed by observer $m$ at coordinate point $p$ is: $$ev_m(p):=\{b\in {{\mathrm}{B}}: {{\mathrm}{W}}(m,b,p)\}.$$ The mapping $p\mapsto ev_m(p)$ is called the [**world-view**]{} (function) of $m$. The [**coordinate domain**]{} of observer $m$ is the set of coordinate points where $m$ observes something: $$Cd(m):=\{p \in {{\mathrm}{F}}^d: ev_m(p)\neq \emptyset \}.$$ The [**life-line**]{} (or [**trace**]{}) of body $b$ as seen by observer $m$ is defined as the set of coordinate points where $b$ was observed by $m$: $$tr_m(b):=\{p\in {{\mathrm}{F}}^d : {{\mathrm}{W}}(m,b,p)\}=\{p\in {{\mathrm}{F}}^d : b \in ev_m(p)\}.$$ The life-line $tr_m(m)$ of observer $m$ as seen by himself is called the [**self-line**]{} of $m$. The [**time-axis**]{} is defined as $\bar{t}:=\{\langle x,0,\dots,0\rangle:x\in {{\mathrm}{F}}\}$.

![Illustration of the basic definitions, mainly of the world-view transformation $f^k_m$: the world-views of $k$ and $m$ with the traces $tr_k(k)$, $tr_k(m)$, $tr_k(b)$, $tr_k(ph)$ and $tr_m(m)$, $tr_m(k)$, $tr_m(b)$, $tr_m(ph)$, and a pair of coordinate points $p$, $q$ with $ev_k(p)=ev_m(q)$, i.e. $\langle p,q\rangle \in f^k_m$.](fig1 "fig:"){width="80.00000%"}

Now we are ready to build our space-time theories by formulating our axioms. We formulate each axiom on two levels. First we give an intuitive formulation, then we give a precise formalization using our notation. The following natural axiom goes back to Galileo Galilei and even to the Norman-French Oresme of around 1350, cf. e.g., [@AMNsamp p.23, §5]. It simply states that each observer thinks that he rests in the origin of the space part of his coordinate system.

${\textcolor{axcolor}{\ensuremath{\mathsf{AxSelf^-}}}}$: The self-line of any observer is the time-axis restricted to his coordinate domain: $$\forall m \in {{\mathrm}{Ob}}\quad tr_m(m)=\bar{t}\cap Cd(m).$$

A FOL-formula expressing ${\textcolor{axcolor}{\ensuremath{\mathsf{AxSelf^-}}}}$ can be found in the Appendix. The next axiom is about the constancy of the speed of the photons, cf. e.g., [@d'Inverno §2.6]. For convenience, we choose $1$ for their speed.

${\textcolor{axcolor}{\ensuremath{\mathsf{AxPh}}}}$: For every inertial observer, the lines of slope 1 are exactly the traces of the photons: $$\forall m\in {{\mathrm}{IOb}}\quad\{tr_{m}(ph):ph\in {{\mathrm}{Ph}}\}=\{l\in \text{\it Lines}:\text{\it slope}(l)=1\}.$$

A FOL-formula expressing ${\textcolor{axcolor}{\ensuremath{\mathsf{AxPh}}}}$ can be found in the Appendix. We will also assume the following axiom:

${\textcolor{axcolor}{\ensuremath{\mathsf{AxEv}}}}$: All inertial observers observe the same events: $$\forall m,k\in {{\mathrm}{IOb}}\enskip \forall p\in {{\mathrm}{F}}^d\;\exists q\in {{\mathrm}{F}}^d\quad ev_{m}(p)=ev_{k}(q).$$

A FOL-formula expressing ${\textcolor{axcolor}{\ensuremath{\mathsf{AxEv}}}}$ can be found in the Appendix. $${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel_0}}}}:= \{ {\textcolor{axcolor}{\ensuremath{\mathsf{AxSelf^-}}}}, {\textcolor{axcolor}{\ensuremath{\mathsf{AxPh}}}}, {\textcolor{axcolor}{\ensuremath{\mathsf{AxEv}}}}, {\textcolor{axcolor}{\ensuremath{\mathsf{AxFrame}}}}\}.$$ Since, in some sense, ${\textcolor{axcolor}{\ensuremath{\mathsf{AxFrame}}}}$ is only an “auxiliary” (or book-keeping) axiom about the “mathematical frame” of our reasoning, the heart of ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel_0}}}}$ consists of three very natural axioms, ${\textcolor{axcolor}{\ensuremath{\mathsf{AxSelf^-}}}}$, ${\textcolor{axcolor}{\ensuremath{\mathsf{AxPh}}}}$, and ${\textcolor{axcolor}{\ensuremath{\mathsf{AxEv}}}}$. These are really intuitively convincing, natural and simple assumptions.
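The definitions of $ev_m(p)$, $Cd(m)$ and $tr_m(b)$ above are purely set-theoretic, so they can be experimented with directly. The following minimal Python sketch (our own illustration, not part of the axiom systems; the finite data and all names are invented for the example) encodes a tiny world-view relation $W$ as a set of triples and computes events, coordinate domains and life-lines from it, then checks the content of the self-line axiom on this toy data.

```python
# Toy finite fragment of a world-view relation W ⊆ Ob × B × F^d with d = 2.
# Illustrative only: real models use an (infinite) Euclidean ordered field F;
# here we take a handful of rational coordinate points.
from fractions import Fraction as Q

d = 2
W = {
    ("m", "m", (Q(0), Q(0))),   # observer m sees itself on the time-axis ...
    ("m", "m", (Q(1), Q(0))),
    ("m", "m", (Q(2), Q(0))),
    ("m", "b", (Q(0), Q(1))),   # ... and a body b at two other points
    ("m", "b", (Q(2), Q(2))),
}
assert all(len(p) == d for (_, _, p) in W)

def ev(m, p):
    """Event observed by m at coordinate point p: the set of bodies there."""
    return {b for (o, b, q) in W if o == m and q == p}

def tr(m, b):
    """Life-line (trace) of b as seen by m: points where m observes b."""
    return {q for (o, bb, q) in W if o == m and bb == b}

def Cd(m):
    """Coordinate domain of m: points where m observes something."""
    return {q for (o, _, q) in W if o == m}

# Content of the self-line axiom: tr_m(m) is the time-axis restricted to Cd(m).
time_axis_part = {q for q in Cd("m") if all(x == 0 for x in q[1:])}
assert tr("m", "m") == time_axis_part

print(ev("m", (Q(0), Q(0))))   # {'m'}
```

Here "observation" is literally just membership in $W$, which matches the reading of ${{\mathrm}{W}}(x,y,z_1,\ldots,z_d)$ given earlier: no photons are involved, only coordinatization.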
From these three axioms one can already prove the most characteristic predictions of special relativity theory. What the average layperson usually knows about relativity is that “moving clocks slow down”, “moving spaceships shrink”, and “moving pairs of clocks get out of synchronism”. We call these the [**paradigmatic effects**]{} of special relativity. All these can be proven from the above three axioms, in some form, cf. Theorem \[spec-thm\]. E.g., one can prove that “if $m,k$ are any two observers not at rest relative to each other, then one of $m,k$ will “see” or “think” that the clock of the other runs slow”. However, ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel_0}}}}$ does not yet imply the inertial approximation of the so-called Twin Paradox.[^4] In order to prove the inertial approximation of the Twin Paradox also, and to prove all the paradigmatic effects in their strongest form, it is enough to add one more axiom to ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel_0}}}}$. This is what we are going to do now. We will find that studying the relationships between the world-views is more illuminating than studying the world-views in themselves. Therefore the following definition is fundamental. The [**world-view transformation**]{} between the world-views of observers $k$ and $m$ is the set of pairs of coordinate points $\langle p,q\rangle$ such that $m$ and $k$ observe the same nonempty event in $p$ and $q$, respectively: $$f^k_m:=\{\langle p,q\rangle \in {{\mathrm}{F}}^d \times {{\mathrm}{F}}^d:ev_k(p)=ev_m(q)\neq\emptyset\},$$ cf. Figure 1. We note that although the world-view transformations are only binary relations, our axioms turn them into functions, cf. (iii) of Proposition \[prop0\] way below. \[fmkconv\] Whenever we write “$f^k_m(p) $”, we mean that there is a unique $q\in {{\mathrm}{F}}^d$ such that $\langle p,q\rangle \in f^k_m$, and $f^k_m(p)$ denotes this unique $q$. I.e. if we talk about the value $f^k_m(p)$ of $f^k_m$ at $p$, we tacitly postulate that it exists and is unique (by the present convention).
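The world-view transformation $f^k_m$ is defined purely in terms of the world-view functions, so in a toy model it can be computed by brute force. The sketch below is our own illustration (the event maps are invented data, not from the paper): it builds $f^k_m$ as the set of point pairs at which the two observers see the same nonempty event, and then checks that on this data the relation happens to be a function, which is what the convention above tacitly presupposes.

```python
# f^k_m := {(p, q) : ev_k(p) = ev_m(q) ≠ ∅}, computed in a toy two-observer
# model. The event maps below are invented so that k's coordinates are m's
# shifted by 1 in time; hence f^k_m should be the shift (t, x) ↦ (t + 1, x).

def world_view_transform(ev_k, ev_m):
    """All pairs (p, q) with ev_k(p) = ev_m(q) nonempty."""
    return {(p, q)
            for p, e_k in ev_k.items()
            for q, e_m in ev_m.items()
            if e_k and e_k == e_m}

ev_k = {(0, 0): {"k", "m"}, (1, 0): {"b"}, (1, 1): set()}
ev_m = {(1, 0): {"k", "m"}, (2, 0): {"b"}, (0, 1): set()}

f_k_m = world_view_transform(ev_k, ev_m)
assert f_k_m == {((0, 0), (1, 0)), ((1, 0), (2, 0))}

# On this data f^k_m is a function: each p is paired with a unique q,
# so the notation f^k_m(p) of the convention above makes sense here.
firsts = [p for (p, _) in f_k_m]
assert len(firsts) == len(set(firsts))
```

Note that the empty events are deliberately excluded, exactly as the $\neq\emptyset$ clause in the definition requires; without it, every pair of "empty" coordinate points would be related.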
The following axiom is an instance (or special case) of the Principle of Special Relativity, according to which the “laws of nature” are the same for all inertial observers, in particular, there is no experiment which would decide whether you are in absolute motion, cf. e.g., Einstein [@Einstein] or [@d'Inverno §2.5] or [@Mphd §2.8]. To explain the following formula, let $p,q\in{{\mathrm}{F}}^d$. Then $p_t-q_t$ is the time passed between the events $ev_m(p)$ and $ev_m(q)$ as seen by $m$ and $f^m_k(p)_t-f^m_k(q)_t$ is the time passed between the same two events as seen by $k$. Hence $|( f^m_k(p)_t- f^m_k(q)_t)\slash(p_t-q_t)|$ is the rate with which $k$’s clock runs slow as seen by $m$. The same explanation applies when $m$ and $k$ are interchanged.

${\textcolor{axcolor}{\ensuremath{\mathsf{AxSym}}}}$: Any two inertial observers see each other’s clocks go wrong in the same way: $$\forall m,k\in {{\mathrm}{IOb}}\enskip \forall p,q\in \bar{t}\quad\big|f^k_m(p)_t - f^k_m(q)_t\big|=\big| f^m_k(p)_t- f^m_k(q)_t\big|.$$

All the axioms so far talked about inertial observers, and they in fact form an axiom system complete w.r.t. the inertial observers, cf. Theorem \[spec-thm\] below. $${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}:= \{ {\textcolor{axcolor}{\ensuremath{\mathsf{AxSelf^-}}}}, {\textcolor{axcolor}{\ensuremath{\mathsf{AxPh}}}}, {\textcolor{axcolor}{\ensuremath{\mathsf{AxEv}}}}, {\textcolor{axcolor}{\ensuremath{\mathsf{AxSym}}}}, {\textcolor{axcolor}{\ensuremath{\mathsf{AxFrame}}}}\}.$$ Let $p,q\in {{\mathrm}{F}}^d$. Then $$\mu(p):=\left\{ \begin{array}{rl} \sqrt{\big|p_t^2-|p_s|^2 \big|} & \text{ if } p_t^2-|p_s|^2\ge0 , \\ -\sqrt{\big|p_t^2-|p_s|^2 \big|} & \text{ otherwise } \end{array} \right.$$ is the (signed) [**Minkowski-length**]{} of $p$ and the [**Minkowski-distance**]{} between $p$ and $q$ is defined as follows: $$\mu(p,q):=\mu(p-q).$$ A motivation for the “otherwise” part of the definition of $\mu(p)$ is the following.
$\mu(p)$ codes two kinds of information, (i) the length of $p$ and (ii) whether $p$ is time-like (i.e. $|p_t|>|p_s|$) or space-like. Since the length is always non-negative, we can use the sign of $\mu(p)$ to code (ii). Let $f:{{\mathrm}{F}}^d\rightarrow {{\mathrm}{F}}^d$ be a function. $f$ is said to be a [**Poincaré-transformation**]{} if $f$ is a bijection and it preserves the Minkowski-distance, i.e. $\mu\big(f(p),f(q)\big)=\mu(p,q)$ for all $p,q\in {{\mathrm}{F}}^d$. $f$ is called a [**dilation**]{} if there is a positive $\delta\in{{\mathrm}{F}}$ such that $f(p)=\delta p$ for all $p\in{{\mathrm}{F}}^d$ and $f$ is called a [**field-automorphism-induced**]{} mapping if there is an automorphism $\pi$ of the field $\langle{{\mathrm}{F}},+,\cdot\rangle$ such that $f(p)=\langle \pi p_1,\dots, \pi p_d\rangle$ for all $p\in{{\mathrm}{F}}^d$. The following is proved in [@pezsgo 2.9.4, 2.9.5] and in [@Mphd 2.9.4–2.9.7]. Let $\Sigma$ be a set of formulas and $\mathfrak{M}$ be a model. $\mathfrak{M} \models \Sigma $ denotes that all formulas in $\Sigma$ are true in model $\mathfrak{M}$. In this case we say that $\mathfrak{M}$ is a [**model of**]{} $\Sigma$. \[spec-thm\]\[thmPoi\] Let $d>2$, let $\mathfrak{M}$ be a model of our language and let $m,k$ be inertial observers in $\mathfrak{M}$. Then $f^m_k$ is a Poincaré-transformation whenever $\mathfrak{M}\models{\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$. More generally, $f^m_k$ is a Poincaré-transformation composed with a dilation and a field-automorphism-induced mapping whenever $\mathfrak{M}\models{\textcolor{axcolor}{\ensuremath{\mathsf{Specrel_0}}}}$. \[rem-specthm\] Assume $d>2$. 
Theorem \[spec-thm\] is best possible in the sense that, e.g., for every Poincaré-transformation $f$ over an arbitrary Euclidean ordered field there are a model $\mathfrak{M}\models{\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$ and inertial observers $m,k$ in $\mathfrak{M}$ such that the world-view transformation $f^m_k$ between $m$’s and $k$’s world-views in $\mathfrak{M}$ is $f$, see [@pezsgo 2.9.4(iii), 2.9.5(iii)]. Similarly for the second statement in Theorem \[spec-thm\]. Hence, Theorem \[spec-thm\] can be refined to a pair of completeness theorems, cf. [@pezsgo 3.6.13, p.271]. Roughly, for every Euclidean ordered field, its Poincaré-transformations (can be expanded to) form a model of ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$. Similarly for the other case. [$\lhd$]{} It follows from Theorem \[spec-thm\] that the paradigmatic effects of relativity hold in ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$ in their strongest form, e.g., if $m$ and $k$ are observers not at rest w.r.t. each other, then both will “think” that the clock of the other runs slow. ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel}}}}$ also implies the “inertial approximation” of the Twin Paradox, see e.g., [@pezsgo 2.8.18], and [@mythes]. It is necessary to add ${\textcolor{axcolor}{\ensuremath{\mathsf{AxSym}}}}$ to ${\textcolor{axcolor}{\ensuremath{\mathsf{Specrel_0}}}}$ in order to be able to prove the inertial approximation of the Twin Paradox, by Theorem \[spec-thm\], cf. e.g., [@mythes]. We now begin to formulate axioms about non-inertial observers. The non-inertial observers are called [**accelerated**]{} observers. To connect the world-views of the accelerated and the inertial observers, we are going to formulate the statement that at each moment of his life, each accelerated observer sees the nearby world for a short while as an inertial observer does. To formalize this, first we introduce the relation of being a co-moving observer.
The (open) [**ball**]{} with center $c\in {{\mathrm}{F}}^n$ and radius $\varepsilon \in {{\mathrm}{F}}^+$ is: $$B_\varepsilon(c):=\{a\in {{\mathrm}{F}}^n : \left| a-c \right| < \varepsilon\}.$$ $m$ is a [**co-moving observer**]{} of $k$ at $q\in {{\mathrm}{F}}^d$, in symbols $m \succ_q k$, if $q\in Cd(k)$ and the following holds: $$\forall \varepsilon \in {{\mathrm}{F}}^+ \; \exists \delta \in {{\mathrm}{F}}^+ \enskip \forall p \in B_{\delta}(q)\cap Cd(k)\enskip \big|p-f^k_m(p)\big| \leq\varepsilon|p-q|.$$ Behind the definition of the co-moving observers is the following intuitive image: as we zoom into smaller and smaller neighborhoods of the given coordinate point, the world-views of the two observers are more and more similar. Notice that $f^k_m(q)=q$ if $m \succ_q k$. The following axiom gives the promised connection between the world-views of the inertial and the accelerated observers:

${\textcolor{axcolor}{\ensuremath{\mathsf{AxAcc}}}}$: At any point on the self-line of any observer, there is a co-moving inertial observer: $$\forall k \in {{\mathrm}{Ob}}\enskip \forall q \in tr_k(k) \; \exists m \in {{\mathrm}{IOb}}\quad m\succ_q k.$$

Let ${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel_0}}}}$ be the collection of the axioms introduced so far: $${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel_0}}}}:=\{ {\textcolor{axcolor}{\ensuremath{\mathsf{AxSelf^-}}}}, {\textcolor{axcolor}{\ensuremath{\mathsf{AxPh}}}}, {\textcolor{axcolor}{\ensuremath{\mathsf{AxEv}}}}, {\textcolor{axcolor}{\ensuremath{\mathsf{AxSym}}}}, {\textcolor{axcolor}{\ensuremath{\mathsf{AxAcc}}}}, {\textcolor{axcolor}{\ensuremath{\mathsf{AxFrame}}}}\}.$$ Let $\mathfrak{R}$ denote the ordered field of the real numbers. Accelerated clocks behave as expected in models of ${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel_0}}}}$ when the ordered field reduct of the model is $\mathfrak{R}$ (cf. Theorems \[thmTwp\], \[thmEq\], and in more detail Prop. \[propTr\], Rem. \[Trrem\]); but not otherwise (cf. Theorems \[thmNoIND\], \[thmMO\]).
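The $\varepsilon$–$\delta$ clause in the definition of a co-moving observer above can be probed numerically. In the toy sketch below (our own illustration, with ${{\mathrm}{F}}$ approximated by floating-point reals and $d=2$), the world-view transformation near $q=o$ is an invented map that agrees with the identity up to a second-order error term; for such an $f$ the choice $\delta = \varepsilon$ witnesses the condition $|p-f(p)| \leq \varepsilon|p-q|$ on $B_\delta(q)$.

```python
# A toy world-view transformation f^k_m that agrees with the identity to
# first order at q = (0, 0): f(p) = p + (second-order term). Then m is
# "co-moving" with k at q in the sense of the ε-δ definition, with δ(ε) = ε:
# |p - f(p)| = x² ≤ |p|² ≤ ε·|p| whenever |p| < ε.
import math

q = (0.0, 0.0)

def f(p):
    t, x = p
    return (t + x * x, x)          # second-order deviation from the identity

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def comoving_witness(eps, samples=50):
    """Check |p - f(p)| <= eps*|p - q| on sample points of B_delta(q)."""
    delta = eps                    # the witnessing δ for this particular f
    pts = [(delta * r * math.cos(a), delta * r * math.sin(a))
           for r in (0.25, 0.5, 0.99)
           for a in (2 * math.pi * i / samples for i in range(samples))]
    return all(dist(p, f(p)) <= eps * dist(p, q) + 1e-15 for p in pts)

for eps in (1.0, 0.1, 0.01):
    assert comoving_witness(eps)
```

The intuition matches the text: zooming into ever smaller neighborhoods of $q$, the deviation shrinks faster than the neighborhood itself, so the two world-views become indistinguishable to first order at $q$.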
Thus to prove properties of accelerated clocks (and observers), we need more properties of the field reducts than their being Euclidean ordered fields. As it turns out, adding all FOL-formulas valid in the field of the reals does not suffice (cf. Corollary \[corNoIND\]). The additional property of $\mathfrak{R}$ we need is that in $\mathfrak{R}$ every bounded non-empty set has a [**supremum**]{}, i.e. a least upper bound. This is a second-order logic property (because it concerns all subsets of $\mathfrak{R}$) which we cannot use in a FOL axiom system. Instead, we will use a kind of “induction” axiom schema. It will state that every non-empty, bounded subset of the ordered field reduct which can be defined by a FOL-formula using possibly the extra part of the model, e.g., using the world-view relation, has a supremum. We now begin to formulate our FOL induction axiom schema.[^5] To abbreviate formulas of FOL we often omit parentheses according to the following convention. Quantifiers bind as long as they can, and $\land$ binds stronger than $\Longrightarrow$. E.g., $\forall x\ \varphi\land\psi\Longrightarrow\exists y\ \delta\land\eta$ means $\forall x\big((\varphi\land\psi)\Longrightarrow\exists y(\delta\land\eta)\big)$. Instead of curly brackets we sometimes write square brackets in formulas, e.g., we may write $\forall x\ (\varphi\land\psi \Longrightarrow[\exists y\delta]\land\eta)$. If $\varphi$ is a formula and $x$ is a variable, then we say that $x$ is a [**free variable**]{} \[free variable\] of $\varphi$ iff $x$ does not occur under the scope of either $\exists x$ or $\forall x$. Let $\varphi$ be a formula; and let $x,y_1,\ldots,y_n$ be all the free variables of $\varphi$. Let $\mathfrak{M}=\langle U;{{\mathrm}{B}},\ldots\rangle$ be a model. Whether $\varphi$ is true or false in $\mathfrak{M}$ depends on how we associate elements of $U$ to these free variables.
When we associate $d,a_1,\ldots,a_n\in U$ to $x,y_1,\ldots,y_n$, respectively, then $\varphi(d,a_1,\ldots,a_n)$ denotes this truth-value, thus $\varphi(d,a_1,\ldots,a_n)$ is either true or false in $\mathfrak{M}$. For example, if $\varphi$ is $x\leq y_1+\ldots+y_n$, then $\varphi(0,1,\ldots,1)$ is true in $\mathfrak{R}$ while $\varphi(1,0,\ldots,0)$ is false in $\mathfrak{R}$. $\varphi$ is said to be [**true**]{} in $\mathfrak{M}$ if $\varphi$ is true in $\mathfrak{M}$ no matter how we associate elements to the free variables. We say that a subset $H$ of ${{\mathrm}{F}}$ is [**definable by**]{} $\varphi$ iff there are $a_1,\ldots,a_n\in U$ such that $H=\{d\in {{\mathrm}{F}}\: :\: \varphi(d,a_1,\ldots,a_n)\text{ is true in }\mathfrak{M}\}$. {\textcolor{axcolor}{\ensuremath{\mathsf{AxSup_\varphi}}}}: Every subset of ${{\mathrm}{F}}$ definable by $\varphi$ has a supremum if it is non-empty and [**bounded**]{}: $$\begin{split} \forall y_1,\ldots,y_n \quad[\exists x \in {{\mathrm}{F}}\quad \varphi] \;\land\; [\exists b\in {{\mathrm}{F}}\quad (\forall x\in {{\mathrm}{F}}\quad \varphi \Longrightarrow x \le b)]\;\\ \Longrightarrow\; \big(\exists s \in {{\mathrm}{F}}\enskip \forall b \in {{\mathrm}{F}}\quad (\forall x\in {{\mathrm}{F}}\quad \varphi \Longrightarrow x \le b)\iff s\le b\big). \end{split}$$ We say that a subset of ${{\mathrm}{F}}$ is [**definable**]{} iff it is definable by a FOL-formula. Our axiom scheme below says that every non-empty bounded and definable subset of ${{\mathrm}{F}}$ has a supremum. $${\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}}:=\{{\textcolor{axcolor}{\ensuremath{\mathsf{AxSup_\varphi}}}}: \varphi \mbox{ is a FOL-formula of our language } \}.$$ Notice that {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}} is true in any model whose ordered field reduct is $\mathfrak{R}$. Let us add {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}} to {\textcolor{axcolor}{\ensuremath{\mathsf{AccRel_0}}}} and call the result {\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}}: $${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}}:={\textcolor{axcolor}{\ensuremath{\mathsf{AccRel_0}}}}\cup{\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}}.$$ {\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}} is a countable set of FOL-formulas. We note that there are non-trivial models of {\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}}, cf.
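As an illustrative aside (not part of the axiom system), the need for a supremum schema can be seen already over the ordered field of the rationals: the set $\{x : x^2<2\}$ is definable, non-empty and bounded, yet has no supremum among the rationals. The sketch below, in exact rational arithmetic (the helper name `improve_upper_bound` is ours, purely for illustration), shows that every rational upper bound of this set can be strictly improved, so no least upper bound exists:

```python
from fractions import Fraction

def improve_upper_bound(b: Fraction) -> Fraction:
    """Given a rational upper bound b of {x in Q : x^2 < 2} (so b^2 > 2),
    return a strictly smaller rational upper bound via one Newton step.
    Since this always succeeds, no rational is a least upper bound."""
    assert b > 0 and b * b > 2
    better = (b + 2 / b) / 2   # Newton step toward sqrt(2), stays above it
    assert better < b and better * better > 2
    return better

b = Fraction(2)                # 2 is an upper bound, since 2^2 > 2
for _ in range(5):             # ... and every upper bound can be improved
    b = improve_upper_bound(b)
```

Over $\mathfrak{R}$ the improvement process is bounded below by $\sqrt{2}$, which is exactly the supremum the schema postulates.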
e.g., Remark \[Trrem\] below. Furthermore, we note that the construction in Misner-Thorne-Wheeler [@MTW Chapter 6 entitled “The local coordinate system of an accelerated observer”, especially pp. 172-173 and Chapter 13.6 entitled “The proper reference frame of an accelerated observer” on pp. 327-332] can be used for constructing models of {\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}}. Models of {\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}} are discussed in some detail in [@mythes]. Theorems \[thmTwp\] and \[thmEq\] (and also Prop.\[propTr\], Rem.\[Trrem\]) below show that {\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}} already implies properties of accelerated clocks, e.g., it implies the Twin Paradox. {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}} implies all the FOL-formulas true in $\mathfrak{R}$, but it is stronger. Let {\textcolor{axcolor}{\ensuremath{\mathsf{IND^-}}}} denote the set of elements of {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}} that talk only about the ordered field reduct, i.e. let $$\begin{split} {\textcolor{axcolor}{\ensuremath{\mathsf{IND^-}}}} := \{ & {\textcolor{axcolor}{\ensuremath{\mathsf{AxSup_\varphi}}}} : \varphi \mbox{ is a FOL-formula in the language } \\ &\mbox{of the ordered field reduct of our models}\}. \end{split}$$ Now, {\textcolor{axcolor}{\ensuremath{\mathsf{IND^-}}}} together with the axioms of linearly ordered fields is a complete axiomatization of the FOL-theory of $\mathfrak{R}$, i.e. all FOL-formulas valid in $\mathfrak{R}$ can be derived from them.[^6] However, {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}} is stronger than {\textcolor{axcolor}{\ensuremath{\mathsf{IND^-}}}}, since ${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel_0}}}}\cup{\textcolor{axcolor}{\ensuremath{\mathsf{IND^-}}}} \not\models {\textcolor{axcolor}{\ensuremath{\mathsf{Tp}}}}$ by Corollary \[corNoIND\] below, while ${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel_0}}}}\cup{\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}}\models {\textcolor{axcolor}{\ensuremath{\mathsf{Tp}}}}$ by Theorem \[thmTwp\]. The strength of {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}} comes from the fact that the formulas in {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}} can “talk” about more “things” than just those in the language of $\mathfrak{R}$ (namely, they can talk about the world-view relation, too).
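The supremum that the schema postulates can also be located effectively in concrete cases. The following hypothetical sketch (ours, not part of the formal development; it works over the reals, approximated here by floats, precisely because every bounded non-empty set has a supremum there) uses bisection to locate the supremum of the set of time-points up to which a predicate stays true:

```python
def point_of_change(phi, lo=0.0, hi=1.0, iters=60):
    """Approximate sup{t in [lo, hi] : phi holds on [lo, t]} by bisection,
    for a predicate phi that is true at lo, false at hi, and true exactly
    on an initial segment.  Over the reals this supremum always exists;
    over other ordered fields it may not."""
    assert phi(lo) and not phi(hi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if phi(mid):
            lo = mid
        else:
            hi = mid
    return lo

# phi is true exactly on [0, 0.5): the supremum of the truth segment is
# 0.5, and here phi is already false at the supremum itself
s = point_of_change(lambda t: t * t < 0.25)
```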
For understanding how {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}} works, it is important to notice that {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}} does not speak about the field $\mathfrak{F}$ itself, but instead, it speaks about connections between $\mathfrak{F}$ and the rest of the model $\mathfrak{M}=\langle\ldots, {{\mathrm}{F}},\ldots,{{\mathrm}{W}}\rangle$. Why do we call {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}} a kind of induction schema? The reason is the following. {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}} implies that if a formula $\varphi$ becomes false sometime after $0$ while being true at $0$, then there is a “first” time-point where, so to speak, $\varphi$ becomes false. This time-point is the supremum of the time-points until which $\varphi$ remained true after $0$. Now, $\varphi$ may or may not be false at this supremum, but it is false arbitrarily “close” to it afterwards. If such a “point of change” for the truth of $\varphi$ cannot exist, {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}} implies that $\varphi$ has to be true always after $0$ if it is true at $0$. (Without {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}}, this may not be true.) MAIN RESULTS: ACCELERATED CLOCKS AND THE TWIN PARADOX IN OUR FOL AXIOMATIC SETTING {#main-s} ================================================================================== The Twin Paradox (TP) concerns two twin siblings whom we shall call Ann and Ian. (“A” and “I” stand for accelerated and for inertial, respectively). Ann travels in a spaceship to some distant star while Ian remains at home. TP states that when Ann returns home, she will be *younger* than her *twin brother* Ian. We now formulate TP in our FOL language.
The [**segment**]{} between $p\in {{\mathrm}{F}}^d$ and $q\in {{\mathrm}{F}}^d$ is defined as: $$[p q]:=\{{\lambda}p+(1-{\lambda})q:\lambda\in {{\mathrm}{F}}\;\land\; 0\le{\lambda}\le1\}.$$ We say that observer $k$ is in [**twin-paradox relation**]{} with observer $m$ iff whenever $k$ leaves $m$ between two meetings, $k$ measures less time between the two meetings than $m$: $$\begin{split} \forall p,q \in tr_k(k) \enskip \forall p',q' \in tr_m(m) & \\ \langle p,p'\rangle ,\langle q,q'\rangle \in f^k_m \;\land\;[p q] \subseteq & tr_k(k)\;\land\; [p' q']\not\subseteq tr_m(k)\\ & \Longrightarrow\; \big|q_t-p_t\big|<\big|q'_t-p'_t\big|, \end{split}$$ cf. Figure 2. In this case we write $\mathsf{Tp}(k<m)$. We note that, if two observers do not leave each other or they meet less than twice, then they are in twin-paradox relation by this definition. Thus two inertial observers are always in this relation. ![Illustration for the twin-paradox relation (left) and for ${\textcolor{axcolor}{\ensuremath{\mathsf{Ddpe}}}}$ (right).](fig2 "fig:"){width="80.00000%"} {\textcolor{axcolor}{\ensuremath{\mathsf{Tp}}}}: Every observer is in twin-paradox relation with every inertial observer: $$\forall k\in {{\mathrm}{Ob}}\enskip\forall m\in {{\mathrm}{IOb}}\quad \mathsf{Tp}(k<m).$$ Let $\varphi$ be a formula and $\Sigma$ be a set of formulas. $\Sigma \models \varphi$ denotes that $\varphi$ is true in all models of $\Sigma$. Gödel’s completeness theorem for FOL implies that whenever $\Sigma\models\varphi$, there is a (syntactic) derivation of $\varphi$ from $\Sigma$ via the commonly used derivation rules of FOL. Hence the next theorem states that the formula formulating the Twin Paradox is provable from the axiom system {\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}}.
\[thmTwp\] ${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}} \models {\textcolor{axcolor}{\ensuremath{\mathsf{Tp}}}}$ if $d>2$. The proof of the theorem is in Section \[proofss\]. Now we turn to formulating a phenomenon which we call the Duration Determining Property of Events. {\textcolor{axcolor}{\ensuremath{\mathsf{Ddpe}}}}: If each of two observers observes the very same (non-empty) events in a segment of their self-lines, they measure the same time between the end points of these two segments: $$\begin{split} \forall k,m\in {{\mathrm}{Ob}}\enskip\forall p,q \in tr_k(k)\enskip \forall p',q'\in t&r_m(m)\\ \emptyset\not\in\{ev_k(r):r\in[p q]\}=\{ev_m(r'&): r'\in [p' q']\}\; \Longrightarrow\;\\ \big|&q_t-p_t\big|=\big|q'_t-p'_t\big|, \end{split}$$ see the right hand side of Figure 2. The next theorem states that {\textcolor{axcolor}{\ensuremath{\mathsf{Ddpe}}}} can also be proved from our FOL axiom system {\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}}. \[thmEq\] ${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}} \models {\textcolor{axcolor}{\ensuremath{\mathsf{Ddpe}}}}$ if $d>2$. The proof of the theorem is in Section \[proofss\]. The assumption $d>2$ cannot be omitted from Theorem \[thmTwp\]. However, Theorems \[thmTwp\] and \[thmEq\] remain true if we omit the assumption $d>2$ and assume the auxiliary axioms {\textcolor{axcolor}{\ensuremath{\mathsf{AxIOb}}}} and {\textcolor{axcolor}{\ensuremath{\mathsf{AxLine}}}} below, i.e. $${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}}\cup\{{\textcolor{axcolor}{\ensuremath{\mathsf{AxIOb}}}},{\textcolor{axcolor}{\ensuremath{\mathsf{AxLine}}}}\}\models {\textcolor{axcolor}{\ensuremath{\mathsf{Tp}}}}\;\land\;{\textcolor{axcolor}{\ensuremath{\mathsf{Ddpe}}}}$$ holds for $d=2$, too. A proof for the latter statement can be obtained from the proofs of Theorems \[thmTwp\] and \[thmEq\] by [@mythes items 4.3.1, 4.2.4, 4.2.5] and [@AMNsamp Theorem 1.4(ii)].
{\textcolor{axcolor}{\ensuremath{\mathsf{AxIOb}}}}: In every inertial observer’s coordinate system, every line of slope less than 1 is the life-line of an inertial observer: $$\forall m\in {{\mathrm}{IOb}}\quad \{tr_m(k): k\in {{\mathrm}{IOb}}\} \supseteq \{l \in \text{\it Lines}: \text{\it slope}(l)< 1 \}.$$ {\textcolor{axcolor}{\ensuremath{\mathsf{AxLine}}}}: Traces of inertial observers are lines as observed by inertial observers: $$\forall m,k \in {{\mathrm}{IOb}}\quad tr_m(k)\in \text{\it Lines}.$$ [$\lhd$]{} Can the assumption $d>2$ be omitted from Theorem \[thmEq\], i.e. does ${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}}\models{\textcolor{axcolor}{\ensuremath{\mathsf{Ddpe}}}}$ hold for $d=2$? The following theorem says that Theorems \[thmTwp\] and \[thmEq\] do not remain true if we omit the axiom scheme {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}} from {\textcolor{axcolor}{\ensuremath{\mathsf{AccRel}}}}. If a formula $\varphi$ is not true in a model $\mathfrak{M}$, we write $\mathfrak{M} \not\models \varphi$. \[thmNoIND\] For every Euclidean ordered field $\mathfrak{F}$ not isomorphic to $\mathfrak{R}$, there is a model $\mathfrak{M}$ of ${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel_0}}}}$ such that $\mathfrak{M}\not\models{\textcolor{axcolor}{\ensuremath{\mathsf{Tp}}}}$, $\mathfrak{M}\not\models{\textcolor{axcolor}{\ensuremath{\mathsf{Ddpe}}}}$ and the ordered field reduct of $\mathfrak{M}$ is $\mathfrak{F}$. The proof of the theorem is in Section \[proofss\]. By Theorems \[thmTwp\] and \[thmEq\], {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}} is not true in the model $\mathfrak{M}$ mentioned in Theorem \[thmNoIND\]. This theorem has strong consequences: it implies that to prove the Twin Paradox, it does not suffice to add all the FOL-formulas valid in $\mathfrak{R}$ (to {\textcolor{axcolor}{\ensuremath{\mathsf{AccRel_0}}}}). Let $Th(\mathfrak{R})$ denote the set of all FOL-formulas valid in $\mathfrak{R}$. \[corNoIND\] $Th(\mathfrak{R})\cup{\textcolor{axcolor}{\ensuremath{\mathsf{AccRel_0}}}}\not\models {\textcolor{axcolor}{\ensuremath{\mathsf{Tp}}}}$ and $Th(\mathfrak{R})\cup{\textcolor{axcolor}{\ensuremath{\mathsf{AccRel_0}}}}\not\models {\textcolor{axcolor}{\ensuremath{\mathsf{Ddpe}}}}$.
The proof of the corollary is in Section \[proofss\]. An ordered field is called [**non-Archimedean**]{} if it has an element $a$ such that, for every positive integer $n$, $-1<\underbrace{a+\ldots+a}_n<1$. We call these elements [**infinitesimally small**]{}. The following theorem says that, for countable or non-Archimedean Euclidean ordered fields, there are quite sophisticated models of {\textcolor{axcolor}{\ensuremath{\mathsf{AccRel_0}}}} in which {\textcolor{axcolor}{\ensuremath{\mathsf{Tp}}}} and {\textcolor{axcolor}{\ensuremath{\mathsf{Ddpe}}}} are false. \[thmMO\] For every Euclidean ordered field $\mathfrak{F}$ which is countable or non-Archimedean, there is a model $\mathfrak{M}$ of ${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel_0}}}}$ such that $\mathfrak{M}\not\models{\textcolor{axcolor}{\ensuremath{\mathsf{Tp}}}}$, $\mathfrak{M}\not\models{\textcolor{axcolor}{\ensuremath{\mathsf{Ddpe}}}}$, the ordered field reduct of $\mathfrak{M}$ is $\mathfrak{F}$ and (i)–(iv) below also hold in $\mathfrak{M}$. - Every observer uses the whole coordinate system for coordinate-domain: $$\forall m \in {{\mathrm}{Ob}}\quad Cd(m)={{\mathrm}{F}}^d.$$ - At any point in ${{\mathrm}{F}}^d$, there is a co-moving inertial observer of any observer: $$\forall k \in {{\mathrm}{Ob}}\enskip \forall q \in {{\mathrm}{F}}^d\; \exists m \in {{\mathrm}{IOb}}\quad m\succ_q k.$$ - All observers observe the same set of events: $$\forall m,k\in {{\mathrm}{Ob}}\enskip \forall p\in {{\mathrm}{F}}^d\;\exists q\in {{\mathrm}{F}}^d\quad ev_{m}(p)=ev_{k}(q).$$ - Every observer observes every event only once: $$\forall m\in {{\mathrm}{Ob}}\enskip \forall p,q\in {{\mathrm}{F}}^d\quad ev_m(p)=ev_m(q)\; \Longrightarrow\; p=q.$$ [$\lhd$]{} The proof of the theorem is in Section \[proofss\].
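A toy illustration of infinitesimally small elements (our own sketch, not a construction used in the proofs): pairs $a+b\varepsilon$, ordered lexicographically, form a non-Archimedean ordered structure in which $\varepsilon$ is positive yet every finite sum $\varepsilon+\ldots+\varepsilon$ stays strictly between $-1$ and $1$. The class name `Hyper` is ours, and only addition and comparison are modeled:

```python
from fractions import Fraction
from functools import total_ordering

@total_ordering
class Hyper:
    """Toy elements a + b*eps of a non-Archimedean ordered structure,
    ordered lexicographically: eps = 0 + 1*eps is positive, yet any
    finite sum eps + ... + eps stays below every positive rational."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)
    def __add__(self, other):
        return Hyper(self.a + other.a, self.b + other.b)
    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)
    def __lt__(self, other):
        return (self.a, self.b) < (other.a, other.b)

eps = Hyper(0, 1)
n_fold = Hyper(0)
for _ in range(1000):                  # eps added to itself a thousand times
    n_fold = n_fold + eps
assert Hyper(-1) < n_fold < Hyper(1)   # still strictly between -1 and 1
```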
![Illustration for the inertial twin-paradox relation $\mathsf{Tp}({k_1k_2}<m)$.](fig3 "fig:"){width="50.00000%"} Finally we formulate a question. To this end we introduce the inertial version of the twin paradox and some auxiliary axioms. In the inertial version of the twin paradox, we use the common trick of the literature to talk about the twin paradox without talking about accelerated observers. We replace the accelerated twin with two inertial ones, a leaving and an approaching one. We say that observers $k_1$ and $k_2$ are in [**inertial twin-paradox relation**]{} with observer $m$ if the following holds: $$\begin{split} \forall p,q,r \in tr_{k_1}(k_1)\cap tr_{k_2}(k_2) \enskip \forall p',q' \in tr&_m(m)\enskip \forall r'\in {{\mathrm}{F}}^d \quad \\ \langle p,p'\rangle ,\langle r,r'\rangle \in f^{k_1}_m\;\land\; \langle r,r'\rangle ,\langle q,q'\rangle \in f&^{k_2}_m\;\land\; p'_t<r'_t<q'_t\;\land\; r'\not\in [p' q'] \\ \Longrightarrow& \; |q'_t-p'_t| > |q_t-r_t|+|r_t-p_t\big|, \end{split}$$ cf. Figure 3. In this case we write $\mathsf{Tp}({k_1k_2}<m)$.
: Every three inertial observers are in inertial twin-paradox relation: $$\forall m,k_1,k_2\in {{\mathrm}{IOb}}\quad \mathsf{Tp}({k_1k_2}<m).$$ : To every inertial observer $m$ and coordinate point $p$ there is an inertial observer $k$ such that the world-view transformation between $m$ and $k$ is the translation by vector $p$: $$\forall m\in {{\mathrm}{IOb}}\enskip \forall p\in {{\mathrm}{F}}^d\; \exists k\in {{\mathrm}{IOb}}\enskip \forall q\in {{\mathrm}{F}}^d \quad f^m_k(q)=q+p.$$ : The world-view transformation between inertial observers $m$ and $k$ is a linear transformation if $f^m_k(o)=o$ : $$\begin{split} \forall m,k\in {{\mathrm}{IOb}}\enskip\forall p,q\in {{\mathrm}{F}}^d\enskip\forall\lambda\in {{\mathrm}{F}}\quad f^m_k(o)=o\; &\Longrightarrow\;\\ f^m_k(p+q)=f^m_k(p)+f^m_k(q)\;\land\; f^m_k(\lambda p)=&\lambda f^m_k(p). \end{split}$$ \[qTwp\] Does Theorem \[thmTwp\] remain true if we replace in with the inertial version of the twin paradox and we assume the auxiliary axioms , and ? Cf. Question \[qConv\]. We note that and are true in the models of in case $d>2$, cf. [@AMNsamp Theorem 1.2], [@Mphd Theorem 2.8.28] and [@mythes §3]. [$\lhd$]{} PIECES FROM NON-STANDARD ANALYSIS: SOME TOOLS FROM REAL ANALYSIS CAPTURED IN FOL {#an-s} ================================================================================ In this section we gather the statements (and proofs from ) of the facts we will need from analysis. The point is that we formulate these statements in FOL, and for an arbitrary ordered field, in place of using the second-order language of the ordered field $\mathfrak{R}$ of reals. In the present section is assumed without any further mentioning. Let $a,b,c\in {{\mathrm}{F}}$. We say that $b$ is [**between**]{} $a$ and $c$ iff $a<b<c$ or $a>b>c$. We use the following notation: $[a,b]:=\{x\in {{\mathrm}{F}}: a\le x \le b\}$ and $(a,b):=\{x\in {{\mathrm}{F}}: a<x<b\}$.
Whenever we write $[a,b]$, we assume that $a,b\in {{\mathrm}{F}}$ and $a\leq b$. We also use this convention for $(a,b)$. Let $H\subseteq {{\mathrm}{F}}^n$. Then $p\in {{\mathrm}{F}}^n$ is said to be an [**accumulation point**]{} of $H$ if for all $\varepsilon\in {{\mathrm}{F}}^+$, $B_\varepsilon(p)\cap H$ has an element different from $p$. $H$ is called [**open**]{} if for all $p\in H$, there is an $\varepsilon\in {{\mathrm}{F}}^+$ such that $B_\varepsilon(p)\subseteq H$. Let $R\subseteq A\times B$ and $S\subseteq B\times C$ be binary relations. The [**composition**]{} of $R$ and $S$ is defined as: $ R \circ S :=\{ \langle a,c\rangle \in A\times C: \exists b\in B \enskip \langle a,b\rangle \in R \;\land\; \langle b,c\rangle \in S \}$. The [**domain**]{} and the [**range**]{} of $R$ are denoted by $Dom(R):=\{a\in A : \exists b\in B\enskip \langle a,b\rangle \in R \}$ and $Rng(R):= \{b\in B : \exists a\in A \enskip \langle a,b\rangle\in R \}$, respectively. $R^{-1}$ denotes the [**inverse**]{} of $R$, i.e. $R^{-1}:=\{\langle b,a\rangle \in B\times A: \langle a,b\rangle \in R\}$. We think of a [**function**]{} as a special binary relation. Notice that if $f,g$ are functions, then $f \circ g (x)=g\big(f(x)\big)$ for all $x\in Dom(f\circ g)$. $f:A\rightarrow B$ denotes that $f$ is a function from $A$ to $B$, i.e. $Dom(f)=A$ and $Rng(f)\subseteq B$. Notation $f:A{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}B$ denotes that $f$ is a [**partial function**]{} from $A$ to $B$; this means that $f$ is a function, $Dom(f)\subseteq A$ and $Rng(f)\subseteq B$. Let $f:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^n$. We call $f$ [**continuous**]{} at $x$ if $x\in Dom(f)$, $x$ is an accumulation point of $Dom(f)$ and the usual formula of continuity holds for $f$ and $x$, i.e. 
$$\forall \varepsilon \in {{\mathrm}{F}}^+ \; \exists \delta \in {{\mathrm}{F}}^+ \enskip \forall y \in Dom(f) \quad \left|y - x \right| < \delta \; \Longrightarrow\; \left|f(y)-f(x) \right| <\varepsilon.$$ We call $f$ [**differentiable**]{} at $x$ if $x\in Dom(f)$, $x$ is an accumulation point of $Dom(f)$ and there is an $a\in {{\mathrm}{F}}^n$ such that $$\begin{split} \forall \varepsilon\in {{\mathrm}{F}}^+\; \exists \delta \in {{\mathrm}{F}}^+\enskip\forall y\in Dom(f)&\\ |y-x|<\delta\; \Longrightarrow\; |f(y)-&f(x)-(y-x)a|\le \varepsilon|y-x|. \end{split}$$ This $a$ is unique. We call this $a$ the [**derivate**]{} of $f$ at $x$ and we denote it by $f'(x)$. $f$ is said to be continuous (differentiable) on $H\subseteq {{\mathrm}{F}}$ iff $H\subseteq Dom(f)$ and $f$ is continuous (differentiable) at every $x\in H$. We note that the basic properties of the differentiability remain true since their proofs use only the ordered field properties of $\mathfrak{R}$, cf.Propositions \[propDiff\], \[propAff\] and \[propMax\] below. Let $f,g:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^n$ and $\lambda \in {{\mathrm}{F}}$. Then $\lambda f:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^n$ and $f+g:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^n$ are defined as $\lambda f:=\big\{\langle x,\lambda f(x)\rangle: x \in Dom(f)\big\}$ and $f+g:=\big\{\langle x,f(x)+g(x)\rangle: x\in Dom(f)\cap Dom(g)\big\}$. Let $h:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}$. $h$ is said to be [**increasing**]{} on $H$ iff $H\subseteq Dom(h)$ and for all $x,y\in H$, $h(x)<h(y)$ if $x<y$, and $h$ is said to be [**decreasing**]{} on $H$ iff $H\subseteq Dom(h)$ and for all $x,y\in H$, $h(x)>h(y)$ if $x<y$. \[propDiff\] Let $f,g:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^n$ and $h:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}$. 
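Since the definition of the derivate uses only the ordered field operations, it can in simple cases be computed purely algebraically, with no limits involved. As a hedged aside (the class `Dual` and the helper `derivate` are ours, for illustration only, and handle just polynomial expressions), dual-number arithmetic over the rationals yields exact derivates:

```python
from fractions import Fraction

class Dual:
    """Pairs a + b*d with d*d = 0.  The d-coefficient of f(x + d) is the
    derivate f'(x); only field operations (+, *) are used, so the same
    computation makes sense over any ordered field, not just over R."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)
    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

def derivate(f, x):
    """Exact derivate of a polynomial expression f at the point x."""
    return f(Dual(x, 1)).b

assert derivate(lambda v: v * v * v, 2) == 12   # (x^3)' = 3x^2 at x = 2
```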
Then (i)–(v) below hold. - If $f$ is differentiable at $x$ then it is also continuous at $x$. - Let $\lambda\in {{\mathrm}{F}}$. If $f$ is differentiable at $x$, then $\lambda f$ is also differentiable at $x$ and $(\lambda f)'(x)=\lambda f'(x)$. - If $f$ and $g$ are differentiable at $x$ and $x$ is an accumulation point of $Dom(f)\cap Dom(g)$, then $f+g$ is differentiable at $x$ and $(f+g)'(x)=f'(x)+g'(x)$. - If $h$ is differentiable at $x$, $g$ is differentiable at $h(x)$ and $x$ is an accumulation point of $Dom(h\circ g)$, then $h\circ g$ is differentiable at $x$ and $(h\circ g)'(x)=h'(x)g'\big(h(x)\big)$. - If $h$ is increasing (or decreasing) on $(a,b)$, differentiable at $x\in (a,b)$ and $h'(x)\neq0$, then $h^{-1}$ is differentiable at $h(x)$. Since the proofs of the statements are based on the same calculations and ideas as in real analysis, we omit the proof, cf.  [@Ross Theorems 28.2, 28.3, 28.4 and 29.9]. Let $i\leq n$. $\pi_i:{{\mathrm}{F}}^n\rightarrow {{\mathrm}{F}}$ denotes the $i$-th projection function, i.e. $\pi_i:p\mapsto p_i$. Let $f:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^n$. We denote the $i$-th coordinate function of $f$ by $f_i$, i.e.  $f_i:=f\circ\pi_i$. We also denote $f_1$ by $f_t$. A function $A:{{\mathrm}{F}}^n\rightarrow {{\mathrm}{F}}^j$ is said to be an [**affine map**]{} if it is a linear map composed by a translation.[^7] The following proposition says that the derivate of a function $f$ composed by an affine map $A$ at a point $x$ is the image of the derivate $f'(x)$ taken by the linear part of $A$. \[propAff\] Let $f:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^n$ be differentiable at $x$ and let $A:{{\mathrm}{F}}^n\rightarrow {{\mathrm}{F}}^j$ be an affine map. Then $f\circ A$ is differentiable at $x$ and $(f\circ A)'(x)=A\big(f'(x)\big)-A(o)$. In particular, $f'(x)=\langle f'_1(x),\dots,f'_n(x)\rangle$, i.e.  $f'_i(x)=f'(x)_i$. 
The statement is straightforward from the definitions. $f:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}$ is said to be [**locally maximal**]{} at $x$ iff $x\in Dom(f)$ and there is a $\delta \in {{\mathrm}{F}}^+$ such that $f(y)\le f(x)$ for all $y\in B_\delta(x)\cap Dom(f)$. The [**local minimality**]{} is defined analogously. \[propMax\] If $f:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}$ is differentiable on $(a,b)$ and locally maximal or minimal at $x\in (a,b)$, then its derivate is $0$ at $x$, i.e. $f'(x)=0$. The proof is the same as in real analysis, cf. e.g., [@Rudin Theorem 5.8]. Let $\mathfrak{M}=\langle U;\ldots\rangle$ be a model. An $n$-ary relation $R\subseteq {{\mathrm}{F}}^n$ is said to be [**definable**]{} iff there is a formula $\varphi$ with only free variables $x_1,\ldots,x_n, y_1,\ldots,y_i$ and there are $a_1,\ldots,a_i\in U$ such that $$R=\{\langle p_1,\ldots, p_n\rangle \in {{\mathrm}{F}}^n : \varphi(p_1,\ldots,p_n,a_1,\ldots,a_i) \text{ is true in } \mathfrak{M}\}.$$ Recall that {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}} says that every non-empty, bounded and definable subset of ${{\mathrm}{F}}$ has a supremum. \[thmBoltzano\] Assume {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}}. Let $f: {{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}$ be definable and continuous on $[a,b]$. If $c$ is between $f(a)$ and $f(b)$, then there is an $s\in [a,b]$ such that $f(s)=c$. Let $c$ be between $f(a)$ and $f(b)$. We can assume that $f(a)<f(b)$. Let $H:=\{x\in [a,b] : f(x) <c\}$. Then $H$ is definable, bounded and non-empty. Thus, by {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}}, it has a supremum, say $s$. Both $\{x\in (a,b) : f(x)<c \}$ and $\{x\in (a,b): f(x)>c \}$ are non-empty open sets since $f$ is continuous on $[a,b]$. Thus $f(s)$ cannot be less than $c$ since $s$ is an upper bound of $H$, and cannot be greater than $c$ since $s$ is the smallest upper bound. Hence $f(s)=c$, as desired. \[thmsup\] Assume {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}}.
Let $f: {{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}$ be definable and continuous on $[a,b]$. Then the supremum $s$ of $\{f(x): x\in [a,b]\}$ exists and there is a $y\in [a,b]$ such that $f(y)=s$. The set $H:=\{y\in [a,b]: \exists c\in {{\mathrm}{F}}\enskip\forall x\in [a,y]\quad f(x)<c\}$ has a supremum by {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}} since $H$ is definable, non-empty and bounded. This supremum has to be $b$ and $b\in H$ since $f$ is continuous on $[a,b]$. Thus $Ran(f):=\{f(x): x\in [a,b]\}$ is bounded. Thus, by {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}}, it has a supremum, say $s$, since it is definable and non-empty. We can assume that $f(a)\neq s$. Let $A:=\{y\in [a,b]: \exists c \in {{\mathrm}{F}}\enskip \forall x\in [a,y]\quad f(x)<c<s\}$. By {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}}, $A$ has a supremum. At this supremum, $f$ cannot be less than $s$ since $f$ is continuous on $[a,b]$ and $s$ is the supremum of $Ran(f)$. Throughout this work $Id:{{\mathrm}{F}}\rightarrow {{\mathrm}{F}}$ denotes the identity function, i.e. $Id:x\mapsto x$. \[thmLagrange\] Assume {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}}. Let $f:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}$ be definable and differentiable on $[a,b]$. If $a\neq b$, then there is an $s\in (a,b)$ such that $f'(s)=\frac{f(b)-f(a)}{b-a}$. Assume $a\neq b$. Let $h:=\big(f(b)-f(a)\big)Id - (b-a)f$. Then $h$ is differentiable on $[a,b]$ and $h'(x)=f(b)-f(a)-(b-a)f'(x)$ for all $x\in [a,b]$ by (ii) and (iii) of Proposition \[propDiff\] since $Id$ is differentiable on $[a,b]$ and its derivate is $1$ for all $x\in [a,b]$. If $h$ is constant on $[a,b]$,[^8] then $h'(s)=0$ for all $s\in(a,b)$. Otherwise, by Theorem \[thmsup\], there is a maximum or minimum of $h$ different from $h(a)=f(b)a-bf(a)=h(b)$ at an $s\in (a,b)$. Hence $h'(s)=0$ by Proposition \[propMax\]. This completes the proof since $a\neq b$ and $h'(s)=f(b)-f(a)-(b-a)f'(s)$. \[thmRoll\] Assume {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}}. Let $f:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}$ be definable and differentiable on $[a,b]$.
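A concrete check of the auxiliary function used in the proof of Theorem \[thmLagrange\] (the helper `check_mean_value` is ours, with $f(x)=x^2$ chosen purely for illustration): $h(a)=h(b)$ holds, and $s=(a+b)/2$ realizes the mean value, all in exact rational arithmetic:

```python
from fractions import Fraction

def check_mean_value(a, b):
    """For f(x) = x^2, verify that the auxiliary function of the proof,
    h := (f(b)-f(a))*Id - (b-a)*f, satisfies h(a) = h(b), and that
    s = (a+b)/2 realises f'(s) = (f(b)-f(a))/(b-a) exactly."""
    a, b = Fraction(a), Fraction(b)
    f = lambda x: x * x
    h = lambda x: (f(b) - f(a)) * x - (b - a) * f(x)
    assert h(a) == h(b)                      # the hypothesis of Rolle's theorem
    s = (a + b) / 2                          # f'(x) = 2x, so f'(s) = a + b
    assert 2 * s == (f(b) - f(a)) / (b - a)  # the Mean Value identity
    return s

check_mean_value(1, 4)
```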
If $f(a)=f(b)$ and $a\neq b$, then there is an $s\in (a,b)$ such that $f'(s)=0$. \[propInt\] Assume {\textcolor{axcolor}{\ensuremath{\mathsf{IND}}}}. Let $f,g:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}$ be definable and differentiable on $(a,b)$. If $f'(x)=g'(x)$ for all $x\in(a,b)$, then there is a $c\in {{\mathrm}{F}}$ such that $f(x)=g(x)+c$ for all $x\in(a,b)$. Assume that $f'(x)=g'(x)$ for all $x\in(a,b)$. Let $h:=f-g$. Then $h'(x)=f'(x)-g'(x)=0$ for all $x\in(a,b)$ by (ii) and (iii) of Proposition \[propDiff\]. If there are $y,z\in (a,b)$ such that $h(y)\neq h(z)$ and $y\neq z$, then, by the Mean Value Theorem, there is an $x$ between $y$ and $z$ such that $h'(x)=\frac{h(z)-h(y)}{z-y}\neq 0$ and this contradicts $h'(x)=0$. Thus $h(y)=h(z)$ for all $y,z\in (a,b)$. Hence there is a $c\in {{\mathrm}{F}}$ such that $h(x)=c$ for all $x\in(a,b)$. PROOFS OF THE MAIN RESULTS {#proofss} ========================== In the present section is assumed without any further mentioning. Let $\widehat{\enskip}: {{\mathrm}{F}}\rightarrow {{\mathrm}{F}}^d$ be the natural embedding defined as  $\widehat{\enskip}: x \mapsto \langle x,0,\dots,0\rangle$. We define the [**life-curve**]{} of observer $k$ as seen by observer $m$ as $Tr^k_m:=\widehat{\enskip} \circ f^k_m$. Throughout this work we denote $\widehat{\enskip}(x)$ by $\widehat{x}$, for $x\in {{\mathrm}{F}}$. Thus $Tr^k_m(t)$ is the coordinate point where $m$ observes the event “$k$’s wristwatch shows $t$”, i.e. $Tr^k_m(t)=p$ iff $ev_m(p)=ev_k(\langle t,0,\ldots,0\rangle)=ev_k(\,\widehat{t}\;)$. In the following proposition, we list several easy but useful consequences of some of our axioms. \[prop0\] Let $m\in {{\mathrm}{IOb}}$ and $k\in {{\mathrm}{Ob}}$. Then (i)–(viii) below hold. - Assume {\textcolor{axcolor}{\ensuremath{\mathsf{AxPh}}}}. Then $Cd(m)={{\mathrm}{F}}^d$ and for all distinct $p,q\in {{\mathrm}{F}}^d$, $ev_m(p)\neq ev_m(q)$. - Assume {\textcolor{axcolor}{\ensuremath{\mathsf{AxPh}}}} and {\textcolor{axcolor}{\ensuremath{\mathsf{AxSelf^-}}}}. Then $tr_m(m)=\bar{t}$. - Assume {\textcolor{axcolor}{\ensuremath{\mathsf{AxPh}}}}.
Then $f^k_m:{{\mathrm}{F}}^d{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^d$ and $Tr^k_m:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^d$. - Assume {\textcolor{axcolor}{\ensuremath{\mathsf{AxPh}}}} and {\textcolor{axcolor}{\ensuremath{\mathsf{AxEv}}}}. If $k\in {{\mathrm}{IOb}}$, then $f^k_m:{{\mathrm}{F}}^d\rightarrow {{\mathrm}{F}}^d$ is a bijection and $Tr^k_m:{{\mathrm}{F}}\rightarrow {{\mathrm}{F}}^d$ is an injection. - Assume {\textcolor{axcolor}{\ensuremath{\mathsf{AxEv}}}}. Let $h\in {{\mathrm}{IOb}}$. Then $f^k_h=f^k_m\circ f^m_h$ and $Tr^k_h=Tr^k_m\circ f^m_h$. - Assume {\textcolor{axcolor}{\ensuremath{\mathsf{AxAcc}}}} and {\textcolor{axcolor}{\ensuremath{\mathsf{AxEv}}}}. Then $tr_k(k)\subseteq Dom(f^k_m)$. - Assume {\textcolor{axcolor}{\ensuremath{\mathsf{AxSelf^-}}}}. Then $\{\widehat{x}: x\in Dom(Tr^k_m)\}\subseteq tr_k(k)$ and $Rng(Tr^k_m)\subseteq tr_m(k)$. - Assume {\textcolor{axcolor}{\ensuremath{\mathsf{AxSelf^-}}}}, {\textcolor{axcolor}{\ensuremath{\mathsf{AxAcc}}}} and {\textcolor{axcolor}{\ensuremath{\mathsf{AxEv}}}}. Then $\{\widehat{x}: x\in Dom(Tr^k_m)\}=tr_k(k)$. To prove (i), let $p,q\in {{\mathrm}{F}}^d$ be distinct points. Then there is a line of slope $1$ that contains $p$ but does not contain $q$. By {\textcolor{axcolor}{\ensuremath{\mathsf{AxPh}}}}, this line is the trace of a photon. For such a photon $ph$, we have $ph\in ev_m(p)$ and $ph\not\in ev_m(q)$. Hence $ev_m(p)\neq ev_m(q)$ and $ev_m(p)\neq\emptyset$. Thus (i) holds. \(ii) follows from (i) since $tr_m(m)=Cd(m)\cap \bar{t}$ by {\textcolor{axcolor}{\ensuremath{\mathsf{AxSelf^-}}}}. \(iii) and (iv) follow from (i) by the definitions of the world-view transformation and the life-curve. To prove (v), let $\langle p,q\rangle \in f^k_h$. Then $ev_k(p)=ev_h(q)\neq\emptyset$. Since, by {\textcolor{axcolor}{\ensuremath{\mathsf{AxEv}}}}, $h$ and $m$ observe the same set of events, there is an $r\in {{\mathrm}{F}}^d$ such that $ev_m(r)= ev_h(q)$. But then $\langle p,r\rangle \in f^k_m$ and $\langle r,q\rangle \in f^m_h$. Hence $\langle p,q\rangle \in f^k_m\circ f^m_h$. Thus $f^k_h\subseteq f^k_m\circ f^m_h$. The other inclusion follows from the definition of the world-view transformation. Thus $f^k_h=f^k_m\circ f^m_h$ and $Tr^k_h=\widehat{\enskip}\circ f^k_h=\widehat{\enskip}\circ f^k_m\circ f^m_h=Tr^k_m\circ f^m_h$. To prove (vi), let $q\in tr_k(k)$. By {\textcolor{axcolor}{\ensuremath{\mathsf{AxAcc}}}}, there is an $h\in {{\mathrm}{IOb}}$ such that $h$ is a co-moving observer of $k$ at $q$. For such an $h$, we have $f^k_h(q)=q$ and, by (v), $Dom (f^k_h)\subseteq Dom (f^k_m)$.
Thus $q\in Dom (f^k_m)$. To prove (vii), let $\langle x,q\rangle \in Tr^k_m$. Then $\langle\widehat{x},q\rangle\in f^k_m$. But then $ev_k(\widehat{x})=ev_m(q)\neq\emptyset$. Thus $\widehat{x}\in Cd(k)\cap\bar{t}$. By {\textcolor{axcolor}{\ensuremath{\mathsf{AxSelf^-}}}}, $\widehat{x}\in tr_k(k)$; and this proves the first part of (vii). By $\widehat{x}\in tr_k(k)$, we have $k\in ev_k(\widehat{x})=ev_m(q)$. Thus $q\in tr_m(k)$ and this proves the second part of (vii). The “$\subseteq$ part” of (viii) follows from (vii). To prove the other inclusion, let $p\in tr_k(k)$. Then, by {\textcolor{axcolor}{\ensuremath{\mathsf{AxSelf^-}}}} and (vi), $p\in\bar t\cap Dom (f^k_m)$. Thus there are $x\in {{\mathrm}{F}}$ and $q\in {{\mathrm}{F}}^d$ such that $\widehat{x}=p$ and $\langle p,q\rangle\in f^k_m$. But then $\langle x,q\rangle \in\widehat{\enskip}\circ f^k_m= Tr^k_m$. Hence $x\in Dom(Tr^k_m)$. We say that $f$ is [**well-parametrized**]{} iff $f:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^d$ and the following holds: if $x\in Dom(f)$ is an accumulation point of $Dom(f)$, then $f$ is differentiable at $x$ and its derivate at $x$ is of Minkowski-length $1$, i.e. $\mu\big(f'(x)\big)=1$. Assume $\mathfrak{F}=\mathfrak{R}$. Then the curve $f$ is well-parametrized iff $f$ is parametrized according to Minkowski-length, i.e. for all $x,y\in {{\mathrm}{F}}$, if $[x,y]\subseteq Dom(f)$, the Minkowski-length of $f$ restricted to $[x,y]$ is $y-x$. (By Minkowski-length of a curve we mean length according to the Minkowski-metric, e.g., in the sense of Wald [@Wal p.43, (3.3.7)].) [**Proper time**]{} or [**wristwatch time**]{} is defined as the Minkowski-length of a time-like curve, cf. e.g., Wald [@Wal p.44, (3.3.8)], Taylor-Wheeler [@TW00 1-1-2] or d’Inverno [@d'Inverno p.112, (8.14)]. Thus a curve defined on a subset of $\mathfrak{R}$ is well-parametrized iff it is parametrized according to proper time, or wristwatch time. (Cf. e.g., [@d'Inverno p.112, (8.16)].)
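For a concrete instance (our own numerical sketch, with $\mathfrak{F}=\mathfrak{R}$ and only two of the $d$ coordinates shown): the uniformly accelerated worldline $\tau\mapsto(\sinh\tau,\cosh\tau)$ is well-parametrized, since its derivative $(\cosh\tau,\sinh\tau)$ has squared Minkowski-length $\cosh^2\tau-\sinh^2\tau=1$ at every $\tau$:

```python
import math

def mu_sq(p):
    """Squared Minkowski-length of p = (t, x), signature (+, -)."""
    return p[0] ** 2 - p[1] ** 2

def worldline(tau):
    """Uniformly accelerated observer, intended to be parametrized by
    proper (wristwatch) time tau."""
    return (math.sinh(tau), math.cosh(tau))

def derivative(f, tau, h=1e-6):
    """Central-difference approximation of f'(tau), component-wise."""
    p, q = f(tau - h), f(tau + h)
    return tuple((y - x) / (2 * h) for x, y in zip(p, q))

# the tangent vector has Minkowski-length 1 at every parameter value,
# so the curve is well-parametrized (parametrized by wristwatch time)
for tau in (-1.0, 0.0, 0.5, 2.0):
    assert abs(mu_sq(derivative(worldline, tau)) - 1.0) < 1e-6
```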
The next proposition states that life-curves of accelerated observers in models of are well-parametrized. This implies that accelerated clocks behave as expected in models of . Remark \[Trrem\] after the proposition will state a kind of “completeness theorem” for life-curves of accelerated observers, much in the spirit of Remark \[rem-specthm\]. \[propTr\] Assume and $d>2$. Let $m\in {{\mathrm}{IOb}}$ and $k\in {{\mathrm}{Ob}}$. Then $Tr^k_m$ is well-parametrized and definable. Let $m\in {{\mathrm}{IOb}}$, $k\in {{\mathrm}{Ob}}$. Then $Tr^k_m$ is definable by its definition. Furthermore, $f^k_m:{{\mathrm}{F}}^d{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^d$ and $Tr^k_m: {{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^d$ by (iii) of Proposition \[prop0\]. Let $x\in Dom(Tr^k_m)$ be an accumulation point of $Dom(Tr^k_m)$. We would like to prove that $Tr^k_m$ is differentiable at $x$ and that its derivative at $x$ is of Minkowski-length $1$. $\widehat{x}\in tr_k(k)$ by (vii) of Proposition \[prop0\]. Thus, by , there is a co-moving inertial observer of $k$ at $\widehat{x}$. By Proposition \[propAff\], we can assume that $m$ is a co-moving inertial observer of $k$ at $\widehat{x}$, i.e. $m\succ_{\widehat{x}}k$, because of the following three statements. By (v) of Proposition \[prop0\], for every $h\in {{\mathrm}{IOb}}$, either of $Tr^k_m$ and $Tr^k_h$ can be obtained from the other by composing it with a world-view transformation between inertial observers. By Theorem \[thmPoi\], world-view transformations between inertial observers are Poincaré-transformations. Poincaré-transformations are affine and preserve the Minkowski-distance. Now, assume that $m$ is a co-moving inertial observer of $k$ at $\widehat{x}$. Then $f^k_m(\widehat{x})=\widehat{x}$, $z\widehat{1}=\widehat{z}$ and $Tr^k_m(z)=f^k_m(\widehat{z})$ for every $z\in Dom(Tr^k_m)$.
Therefore $$\forall y\in Dom(Tr^k_m)\quad \big|Tr^k_m(y)-Tr^k_m(x)-(y-x)\widehat{1}\big|=\big|f^k_m(\widehat{y})-\widehat{y}\big|. \label{tp-e1}$$ Since $Dom(f^k_m)\subseteq Cd(k)$ and $\widehat{y}\in Dom(f^k_m)$ if $y\in Dom (Tr^k_m)$, we have that for all $\delta\in {{\mathrm}{F}}^+$, $$\forall y\in Dom(Tr^k_m)\quad |y-x|<\delta\; \Longrightarrow\; \widehat{y}\in B_\delta(\widehat{x})\cap Cd(k). \label{tp-e2}$$ Let $\varepsilon \in {{\mathrm}{F}}^+$ be fixed. Since $m\succ_{\widehat{x}} k$ and $f^k_m: {{\mathrm}{F}}^d{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^d$, there is a $\delta\in {{\mathrm}{F}}^+$ such that $$\forall p\in B_\delta(\widehat{x})\cap Cd(k)\quad \big|p-f^k_m(p)\big|\leq \varepsilon |p-\widehat{x}|. \label{tp-e3}$$ Let such a $\delta$ be fixed. By (\[tp-e2\]), (\[tp-e3\]) and the fact that $|\widehat{y}-\widehat{x}|=|y-x|$, we have that $$\forall y\in Dom(Tr^k_m)\quad |y-x|<\delta\; \Longrightarrow\; \big|\widehat{y}-f^k_m(\widehat{y})\big|\leq \varepsilon |y-x|.$$ By this and (\[tp-e1\]), we have $$\begin{split} \forall y\in Dom(Tr^k_m) \quad & |y-x|<\delta \; \Longrightarrow\\ &\big|Tr^k_m(y)-Tr^k_m(x)-(y-x)\widehat{1}\big|\leq\varepsilon |y-x|. \end{split}$$ Thus $(Tr^k_m)'(x)=\widehat{1}$. This completes the proof since $\mu(\,\widehat{1}\,)=1$. \[Trrem\] Well parametrized curves are exactly the life-curves of accelerated observers, in models of , as follows. Let $\mathfrak{F}$ be an Euclidean ordered field and let $f:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^d$ be well-parametrized. Then there are a model $\mathfrak{M}$ of , $m\in {{\mathrm}{IOb}}$ and $k\in {{\mathrm}{Ob}}$ such that $Tr^k_m=f$ and the ordered field reduct of $\mathfrak{M}$ is $\mathfrak{F}$. Recall that if $\mathfrak{F}=\mathfrak{R}$, then this $\mathfrak{M}$ is a model of . This is not difficult to prove by using the methods of the present paper. 
[$\lhd$]{} We say that $p\in {{\mathrm}{F}}^d$ is [**vertical**]{} iff $p\in \bar{t}$. \[lemWp\] Let $f:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^d$ be well-parametrized. Then (i) and (ii) below hold. - Let $x\in Dom(f)$ be an accumulation point of $Dom(f)$. Then $f_t$ is differentiable at $x$ and $|f'_t(x)|\ge 1$. Furthermore, $|f'_t(x)|=1$ iff $f'(x)$ is vertical. - Assume and that $f$ is definable. Let $[a,b]\subseteq Dom(f)$. Then $f_t$ is increasing or decreasing on $[a,b]$. If $f_t$ is increasing on $[a,b]$ and $a\neq b$, then $f'_t(x)\ge 1$ for all $x\in [a,b]$. Let $f$ be well-parametrized. To prove (i), let $x\in Dom(f)$ be an accumulation point of $Dom(f)$. Then $f'(x)$ is of Minkowski-length $1$. By Proposition \[propAff\], $f_t$ is differentiable at $x$ and $f'_t(x)=f'(x)_t$. Now, (i) follows from the fact that the absolute value of the time component of a vector of Minkowski-length $1$ is always at least $1$, and it is $1$ iff the vector is vertical. To prove (ii), assume and that $f$ is definable. Let $[a,b]\subseteq Dom(f)$. From (i), we have $f'_t(x)\neq 0$ for all $x\in[a,b]$. Thus, by Rolle’s theorem, $f_t$ is injective on $[a,b]$. Hence, by Bolzano’s theorem, $f_t$ is increasing or decreasing on $[a,b]$, since $f_t$ is continuous and injective on $[a,b]$. Assume that $f_t$ is increasing on $[a,b]$ and $a\neq b$. Then $f'_t(x)\geq 0$ for all $x\in [a,b]$ by the definition of the derivative. Hence, by (i), $f'_t(x)\geq 1$ for all $x\in [a,b]$. \[thmJtwp\] Assume . Let $f:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^d$ be definable, well-parametrized and $[a,b]\subseteq Dom(f)$. Then (i) and (ii) below hold. - $b-a\le \left|f_t(b)-f_t(a)\right|$. - If $f(x)_s\neq f(a)_s$ for an $x\in[a,b]$, then $b-a<\big|f_t(b)-f_t(a)\big|$. Let $f:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^d$ be definable, well-parametrized and $[a,b]\subseteq Dom(f)$.
We can assume that $a\neq b$. For every $i\leq d$, $f_i$ is definable and differentiable on $[a,b]$ by Proposition \[propAff\]. Then, by the Mean Value Theorem, there is an $s\in (a,b)$ such that $f'_t(s)=\frac{f_t(b)-f_t(a)}{b-a}$. By (i) of Lemma \[lemWp\], we have $1\le |f'_t(s)|$. But then, $b-a\le \big|f_t(b)-f_t(a)\big|$. This completes the proof of (i). To prove (ii), let $x\in [a,b]$ be such that $f(x)_s\neq f(a)_s$. Let $1<i\le d$ be such that $f_i(x)\neq f_i(a)$. Then, by the Mean Value Theorem, there is a $y\in (a,b)$ such that $f'_i(y)=\frac{f_i(x)-f_i(a)}{x-a}\neq 0$. Thus $f'(y)$ is not vertical. Therefore, by (i) of Lemma \[lemWp\], we have $1<|f'_t(y)|$. Thus, by the definition of the derivative, there is a $z\in(y,b)$ such that $1<\frac{|f_t(z)-f_t(y)|}{z-y}$. Hence we have $$z-y<|f_t(z)-f_t(y)|.$$ Let us note that $a<y<z<b$. By applying (i) to $[a,y]$ and $[z,b]$, respectively, we get $$y-a\le \big| f_t(y)-f_t(a) \big|\quad\text{and}\quad b-z\le \big|f_t(b)-f_t(z)\big|.$$ $f_t$ is increasing or decreasing on $[a,b]$ by (ii) of Lemma \[lemWp\]. Thus $f_t(a)<f_t(y)<f_t(z)<f_t(b)$ or $f_t(a)>f_t(y)>f_t(z)>f_t(b)$. Now, by adding up the last three inequalities, we get $b-a<\big|f_t(b)-f_t(a)\big|$. Let $a\in {{\mathrm}{F}}^d$. For convenience, we introduce the following notation: $a^+:=a$ if $a_t \ge 0$ and $a^+:=-a$ if $a_t < 0$. A set $H\subseteq {{\mathrm}{F}}^d$ is called [**twin-paradoxical**]{} iff (1) $\widehat{1}\in H$ and $o\not\in H$; (2) $\text{\it slope}(p)<1$ for all $p\in H$; (3) for all $p\in {{\mathrm}{F}}^d$ with $\text{\it slope}(p)<1$, there is a $\lambda\in {{\mathrm}{F}}$ such that $\lambda p \in H$; and (4) for all distinct $p,q,r\in H$ and all $\lambda,\mu \in {{\mathrm}{F}}^+$, $r^+=\lambda p^+ + \mu q^+$ implies that $\lambda+\mu<1$. A positive answer to the following question would also provide a positive answer to Question \[qTwp\], cf.  [@mythes §3]. \[qConv\] Assume .
Let $f:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^d$ be definable such that $f$ is differentiable on $[a,b]$ and $f(a),f(b)\in\bar{t}$. Furthermore, let the set $\{f'(x):x\in [a,b] \}$ be a subset of a twin-paradoxical set. Are then (i) and (ii) below true? - $b-a\le \big|f_t(b)-f_t(a)\big|$. - If $f(x)_s\neq f(a)_s$ for an $x\in[a,b]$, then $b-a<\big|f_t(b)-f_t(a)\big|$. [$\lhd$]{} \[thmJeq\] Assume . Let $f,g:{{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^d$ be definable and well-parametrized. Let $[a,b]\subseteq Dom(f)$ and $[a',b']\subseteq Dom(g)$ be such that $\{f(r):r\in[a,b]\}=\{g(r'):r'\in [a',b']\}$. Then $b-a=b'-a'$. By (ii) of Lemma \[lemWp\], $f_t$ is increasing or decreasing on $[a,b]$ and so is $g_t$ on $[a',b']$. We can assume that $Dom(f)=[a,b]$, $Dom(g)=[a',b']$ and that $f_t$ and $g_t$ are increasing on $[a,b]$ and $[a',b']$, respectively.[^9] Then $Rng(f)=Rng(g)$. Furthermore, $f$ and $g$ are injective since $f_t$ and $g_t$ are such. Since $Rng(f)=Rng(g)$ and $g_t$ is injective, $f\circ g^{-1}=f_t\circ g_t^{-1}$. Let $h:=f\circ g^{-1}=f_t\circ g_t^{-1}$. Since $Rng(f_t)=Rng(g_t)$ and $f_t$ and $g_t$ are increasing, $h$ is an increasing bijection between $[a,b]$ and $[a',b']$. Hence $h(a)=a'$ and $h(b)=b'$. We are going to prove that $b-a=b'-a'$ by proving that there is a $c\in {{\mathrm}{F}}$ such that $h(x)=x+c$ for all $x\in [a,b]$. We can assume that $a\neq b$ and $a'\neq b'$. By Lemma  \[lemWp\], $f_t$ and $g_t$ are differentiable on $[a,b]$ and $[a',b']$, respectively, and $f'_t(x)>0$ for all $x\in[a,b]$ and $g'_t(x')>0$ for all $x'\in[a',b']$. By (iv) and (v) of Proposition \[propDiff\], $h=f_t\circ g_t^{-1}$ is also differentiable on $(a,b)$. By $h=f\circ g^{-1}$, we have $f=h\circ g$. Thus $f'(x)=h'(x)g'\big(h(x)\big)$ for all $x\in(a,b)$ by (iv) of Proposition \[propDiff\]. 
Since both $f'(x)$ and $g'\big(h(x)\big)$ are of Minkowski-length $1$ and their time-components are positive[^10] for all $x\in(a,b)$, we conclude that $h'(x)=1$ for all $x\in(a,b)$. By Proposition \[propInt\], we get that there is a $c\in {{\mathrm}{F}}$ such that $h(x)=x+c$ for all $x\in(a,b)$ and thus for all $x\in[a,b]$ since $h$ is an increasing bijection between $[a,b]$ and $[a',b']$. \[thmTwp-proof\] Assume and $d>2$. Let $m\in {{\mathrm}{IOb}}$ and $k\in {{\mathrm}{Ob}}$. Let $p,q\in tr_k(k)$, $p',q'\in tr_m(m)$ be such that $\langle p,p'\rangle,\langle q,q'\rangle \in f^k_m$, $[p q]\subseteq tr_k(k)$ and $[p' q']\not\subseteq tr_m(k)$, cf. Figure 2. Let us abbreviate $Tr^k_m$ by $Tr$. We are going to prove that $|q_t-p_t|<|q'_t-p'_t|$ by applying Theorem \[thmJtwp\] to $Tr$ and $[p_t, q_t]$. By Proposition \[propTr\], $$\label{twp-e1} Tr: {{\mathrm}{F}}{\xrightarrow{\resizebox{!}{3.5pt}{$\circ$}}}{{\mathrm}{F}}^d \text{ is well-parametrized and definable.}$$ By , $p,q,p',q'\in\bar{t}$. By $\widehat{p_t}=p$, by $\widehat{q_t}=q$, by $\langle p,p'\rangle ,\langle q,q'\rangle \in f^k_m$ and by $Tr=\widehat{\enskip}\circ f^k_m$, $$\label{twp-e2} Tr(p_t)=p'\quad\text{ and }\quad Tr(q_t)=q'.$$ By $p,q\in tr_k(k)$ and $\langle p,p'\rangle, \langle q,q'\rangle\in f^k_m$, we have that $p',q'\in tr_m(k)$. Thus, by $[p' q']\not\subseteq tr_m(k)$, we have that $p'\neq q'$. Hence, by (\[twp-e2\]), $p_t\neq q_t$. We can assume that $p_t<q_t$. By (viii) of Proposition \[prop0\], $\{\widehat{x}: x\in Dom(Tr)\}=tr_k(k)$. Since $[p q]\subseteq tr_k(k)$, $$\label{twp-e3} [p_t, q_t]\subseteq Dom(Tr).$$ By (i) of Lemma \[lemWp\], (\[twp-e1\]) and (\[twp-e3\]), we have that $Tr_t$ is differentiable on $[p_t,q_t]$, thus it is continuous on $[p_t,q_t]$. Let $x'\in [p' q']\subseteq \bar{t}$ be such that $x'\not\in tr_m(k)$. By Bolzano’s theorem and (\[twp-e2\]), there is an $x\in [p_t,q_t]$ such that $Tr_t(x)=x'_t$. Let such an $x$ be fixed. 
$Tr(x)\in tr_m(k)$ since $Rng(Tr)\subseteq tr_m(k)$ by (vii) of Proposition \[prop0\]. But then $Tr(x)\neq x'$. Hence $Tr(x)\not\in\bar{t}$. Thus $$\label{twp-e4} x\in [p_t,q_t]\quad\mbox{and}\quad Tr(x)_s\neq Tr(p_t)_s$$ since $Tr(p_t)=p'\in\bar t$. Now, by (\[twp-e1\])–(\[twp-e4\]) above, we can apply (ii) of Theorem \[thmJtwp\] to $Tr$ and $[p_t,q_t]$, and we get that $|q_t-p_t|<|Tr_t(q_t)-Tr_t(p_t)|=|q'_t-p'_t|$. \[thmEq-proof\] Assume and $d>2$. Let $k$ and $m$ be observers. Let $p,q \in tr_k(k)$, $p',q'\in tr_m(m)$ be such that $\emptyset\not\in\{ev_k(r):r\in[p q]\}=\{ev_m(r'):r'\in [p' q']\}$, cf. the right hand side of Figure 2. Thus $[p q]\subseteq Cd(k)$ and $[p' q']\subseteq Cd(m)$. By , $ tr_k(k)= Cd(k)\cap \bar t$ and $tr_m(m)= Cd(m)\cap \bar t$. Therefore $[p q]\subseteq tr_k(k)\subseteq\bar t$ and $[p' q']\subseteq tr_m(m)\subseteq\bar t$. We can assume that $p_t\leq q_t$ and $p'_t\leq q'_t$. Let $h\in {{\mathrm}{IOb}}$. We are going to prove that $\big|q_t-p_t\big|=\big|q'_t-p'_t\big|$, by applying Theorem \[thmJeq\] as follows: let $[a,b]:=[p_t,q_t]$, $[a',b']:=[p'_t,q'_t]$, $f:=Tr^k_h$ and $g:=Tr^m_h$. By (viii) of Proposition \[prop0\], by $[p q]\subseteq tr_k(k)$ and by $[p' q']\subseteq tr_m(m)$, we conclude that $[a,b]\subseteq Dom(f)$ and $[a',b']\subseteq Dom(g)$. By Proposition \[propTr\], $f$ and $g$ are well-parametrized and definable. We have $\{f(r):r\in[a,b]\}=\{g(r'):r'\in [a',b']\}$ since $\{ev_k(r):r\in[p q]\}=\{ev_m(r'):r'\in [p' q']\}$. Thus, by Theorem \[thmJeq\], we conclude that $b-a=b'-a'$. Thus $\big|q_t-p_t\big|=\big|q'_t-p'_t\big|$ and this is what we wanted to prove. \[thmNoIND-proof\] \[thmMO-proof\] We will construct three models. Let $\mathfrak{F}=\left<{{\mathrm}{F}};+,\cdot,\le \right>$ be an Euclidean ordered field different from $\mathfrak{R}$. For every $p\in {{\mathrm}{F}}^d$, let $m_p:{{\mathrm}{F}}^d\rightarrow {{\mathrm}{F}}^d$ denote the translation by vector $p$, i.e. $m_p: q\mapsto q+p$. 
$f:{{\mathrm}{F}}^d\rightarrow {{\mathrm}{F}}^d$ is called [**translation-like**]{} iff (a) for all $q \in{{\mathrm}{F}}^d$, there is a $\delta \in {{\mathrm}{F}}^+$ such that $f(p)=m_{f(q)-q}(p)$ for all $p\in B_\delta(q)$, and (b) for all $p,q\in {{\mathrm}{F}}^d$, $f(p)=f(q)$ and $p\in \bar t$ imply that $q\in\bar t$. Let $k:{{\mathrm}{F}}^d\rightarrow {{\mathrm}{F}}^d$ be translation-like. First we construct a model $\mathfrak{M}_{(\mathfrak{F},k)}$ of and (i) and (ii) of Theorem \[thmMO\] for $\mathfrak{F}$ and $k$, which will be a model of (iii) and (iv) of Theorem \[thmMO\] if $k$ is a bijection. We will show that is false in $\mathfrak{M}_{(\mathfrak{F},k)}$. Then we will choose $\mathfrak{F}$ and $k$ appropriately to get the desired models in which is false, too. Let the ordered field reduct of $\mathfrak{M}_{(\mathfrak{F},k)}$ be $\mathfrak{F}$. Let $\{I_1, I_2, I_3, I_4, I_5\}$ be a partition[^11] of ${{\mathrm}{F}}$ such that every $I_i$ is open, $x\in I_2 \iff x+1\in I_3 \iff x+2 \in I_4$ and for all $y\in I_i$ and $z\in I_j$, $y\leq z \iff i\leq j$. Such a partition can easily be constructed.[^12] Let $$k'(p):=\left\{ \begin{array}{cl} p & \text{ if } p_t\in I_1\cup I_5 , \\ p-\widehat{1} & \text{ if } p_t\in I_4 , \\ p+\widehat{1} & \text{ if } p_t\in I_3 , \\ p+\langle 0,1,0,\ldots,0\rangle & \text{ if } p_t \in I_2 \end{array} \right.$$ for every $p\in {{\mathrm}{F}}^d$, cf. Figure 4. ![\[notwp\] The world-views of $m$ and of $k'$, with the traces $tr_{k'}(k')$, $tr_m(k')$ and the bands $I_1,\dots,I_5$, for the proofs of Theorems \[thmNoIND\] and \[thmMO\].](fig4 "fig:"){width="90.00000%"} It is easy to see that $k'$ is a translation-like bijection.
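Over $\mathfrak{F}=\mathbb{Q}$, the required five-band partition can be realized concretely by cutting at the irrational $\sqrt{2}$: take $I_1=(-\infty,a)\cap\mathbb{Q}$, $I_2=(a,a+1)\cap\mathbb{Q}$, …, $I_5=(a+3,\infty)\cap\mathbb{Q}$ with $a=\sqrt{2}$, so every band is open in $\mathbb{Q}$. The sketch below (the choice $a=\sqrt{2}$, the value $d=3$, and all names are ours, one instance of the construction in footnote 12) implements $k'$ in exact rational arithmetic, using the fact that for rational $u$, $u<\sqrt{2}$ iff $u<0$ or $u^2<2$.

```python
from fractions import Fraction

def below_cut(t, k):
    # Exact test "t < sqrt(2) + k" for rational t: equivalent to u = t - k
    # satisfying u < 0 or u^2 < 2 (u is rational, so u^2 = 2 is impossible).
    u = t - k
    return u < 0 or u * u < 2

def region(t):
    # Which band I_1..I_5 the rational time coordinate t falls into.
    for k in range(4):
        if below_cut(t, k):
            return k + 1   # I_1 .. I_4
    return 5               # I_5

def k_prime(p):
    # The map k' from the proof, written out for d = 3; p = (p_t, p_x, p_y).
    r = region(p[0])
    if r in (1, 5):
        return p
    if r == 4:
        return (p[0] - 1, p[1], p[2])
    if r == 3:
        return (p[0] + 1, p[1], p[2])
    return (p[0], p[1] + 1, p[2])   # r == 2: shift one spatial unit

F = Fraction
samples = [(F(1), F(0), F(0)), (F(2), F(0), F(0)), (F(5, 2), F(0), F(0)),
           (F(7, 2), F(0), F(0)), (F(5), F(0), F(0))]
assert [region(p[0]) for p in samples] == [1, 2, 3, 4, 5]
# k' swaps the I_3 and I_4 bands and shifts I_2 spatially, so it is
# injective on these samples, and applying it twice on I_3 returns home:
images = [k_prime(p) for p in samples]
assert len(set(images)) == len(samples)
assert k_prime(k_prime((F(5, 2), F(0), F(0)))) == (F(5, 2), F(0), F(0))
```

Because each band is open and the map is a translation on each band, condition (a) of translation-likeness holds at every rational point; no band boundary ever lands on a rational.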
Let ${{\mathrm}{IOb}}:=\{m_p:p\in {{\mathrm}{F}}^d\}$, ${{\mathrm}{Ob}}:={{\mathrm}{IOb}}\cup\{k, k'\}$, ${{\mathrm}{Ph}}:=\{l\in \text{\it Lines}: \text{\it slope}(l)=1\}$ and ${{\mathrm}{B}}:={{\mathrm}{Ob}}\cup {{\mathrm}{Ph}}$. Recall that $o:=\langle 0,\ldots,0\rangle$ is the origin. First we give the world-view of $m_o$ then we give the world-view of an arbitrary observer $h$ by giving the world-view transformation between $h$ and $m_o$. Let $ tr_{m_o}(ph):=ph$ and $tr_{m_o}(h):=\{h(x): x\in\bar{t}\,\}$ for all $ph\in {{\mathrm}{Ph}}$ and $h\in {{\mathrm}{Ob}}$. And let $ev_{m_o}(p):=\{b\in {{\mathrm}{B}}: p\in tr_{m_o}(b)\}$ for all $p\in {{\mathrm}{F}}^d$. Let $f^h_{m_o}:=h$ for all $h\in {{\mathrm}{Ob}}$. From these world-view transformations, we can obtain the world-view of each observer $h$ in the following way: $ev_h(p):=ev_{m_o}\big(h(p)\big)$ for all $p\in {{\mathrm}{F}}^d$. And from the world-views, we can obtain the ${{\mathrm}{W}}$ relation as follows: for all $h\in {{\mathrm}{Ob}}$, $b\in {{\mathrm}{B}}$ and $p\in {{\mathrm}{F}}^d$, let ${{\mathrm}{W}}(h,b,p)$ iff $b\in ev_h(p)$. Thus we are given the model $\mathfrak{M}_{(\mathfrak{F},k)}$. We note that $f^m_h=m\circ h^{-1}$ and $m_{h(q)-q}\succ_q h$ for all $m,h\in {{\mathrm}{Ob}}$ and $q\in {{\mathrm}{F}}^d$. It is easy to check that the axioms of and (i) and (ii) of Theorem \[thmMO\] are true in $\mathfrak{M}_{(\mathfrak{F},k)}$ and that if $k$ is a bijection, then (iii) and (iv) of Theorem \[thmMO\] are also true in $\mathfrak{M}_{(\mathfrak{F},k)}$. Let $p,q\in\bar t$ be such that $p_t\in I_1$, $q_t\in I_4$; and let $p':=k'(p)=p$, $q':=k'(q)=q-\widehat{1}$ and $m:=m_{o}$. It is easy to check that is false in $\mathfrak{M}_{(\mathfrak{F},k)}$ for $k'$, $m$, $p$, $q$, $p'$ and $q'$, i.e. 
$p,q \in tr_{k'}(k')$, $p',q' \in tr_m(m)$, $\langle p,p'\rangle ,\langle q,q'\rangle \in f^{k'}_m$, $[p q] \subseteq tr_{k'}(k')$, $[p' q']\not\subseteq tr_m(k')$ and $\big|q_t-p_t\big|\not<\big|q'_t-p'_t\big|$, cf. Figure 4. ![\[noDDPE\] The world-views of $m$ and of $k$, with the traces $tr_k(k)$, $tr_m(k)$ and $tr_m(m)$, for the proofs of Theorems \[thmNoIND\] and \[thmMO\].](fig5 "fig:"){width="85.00000%"} To construct the first model, let $\mathfrak{F}$ be an arbitrary Euclidean ordered field different from $\mathfrak{R}$ and let $\{I_1, I_2\}$ be a partition of ${{\mathrm}{F}}$ such that for all $x\in I_1$ and $y\in I_2$, $x<y$. Let $$k(p):=\left\{ \begin{array}{cl} p & \text{ if } p_t\in I_1 , \\ p-\widehat{1} & \text{ if } p_t\in I_2 \end{array} \right.$$ for every $p\in {{\mathrm}{F}}^d$, cf. Figure 5. It is easy to see that $k$ is translation-like. Let $p,q\in\bar t$ be such that $p_t,p_t+1\in I_1$ and $q_t,q_t-1\in I_2$; and let $p':=k(p)=p$, $q':=k(q)=q-\widehat{1}$ and $m:=m_{o}$. It is also easy to check that is false in $\mathfrak{M}_{(\mathfrak{F},k)}$ for $k$, $m$, $p$, $q$, $p'$ and $q'$, i.e. $p,q \in tr_k(k)$, $p',q'\in tr_m(m)$, $\emptyset\not\in\{ev_k(r):r\in[p q]\}=\{ev_m(r'):r'\in [p' q']\}$ and $\big|q_t-p_t\big|\neq\big|q'_t-p'_t\big|$, cf. Figure 5. This completes the proof of Theorem \[thmNoIND\]. To construct the second model, let $\mathfrak{F}$ be an arbitrary non-Archimedean Euclidean ordered field. Let $a\sim b$ if $a,b\in {{\mathrm}{F}}$ and $a-b$ is infinitesimally small.
It is easy to see that $\sim$ is an equivalence relation. Let us choose an element from every equivalence class of $\sim$ and let $\tilde{a}$ denote the chosen element equivalent with $a\in {{\mathrm}{F}}$. Let $k(p):=\langle p_t+\tilde{p}_t,p_s\rangle$ for every $p\in {{\mathrm}{F}}^d$, cf. Figure 5. It is easy to see that $k$ is a translation-like bijection. Let $p:=o$, $q:=\widehat{1}$, $p':=k(p)=\langle \tilde{0},0,\ldots,0\rangle$, $q':=k(q)=\langle 1+\tilde{1},0,\dots,0\rangle$ and $m:=m_{o}$. It is also easy to check that is false in $\mathfrak{M}_{(\mathfrak{F},k)}$ for $k$, $m$, $p$, $q$, $p'$ and $q'$, cf. Figure 5. ![\[arch\] The covering of $[a,a+2]$ by intervals of lengths $\frac{1}{2},\frac{1}{4},\frac{1}{8},\dots$ covering $r_1, r_2, r_3,\dots$, for the proofs of Theorems \[thmNoIND\] and \[thmMO\].](fig6 "fig:"){width="90.00000%"} To construct the third model, let $\mathfrak{F}$ be an arbitrary countable Archimedean Euclidean ordered field and let $k(p)=\langle f(p_t),p_s\rangle$ for every $p\in {{\mathrm}{F}}^d$, where $f:{{\mathrm}{F}}\rightarrow {{\mathrm}{F}}$ is constructed as follows, cf. Figures 5 and 6. We can assume that $\mathfrak{F}$ is a subfield of $\mathfrak{R}$ by [@Fuchs Theorem 1 in §VIII]. Let $a$ be a real number that is not an element of ${{\mathrm}{F}}$. Let us enumerate the elements of $[a,a+2]\cap {{\mathrm}{F}}$ and denote the $i$-th element by $r_i$. First we cover $[a,a+2]\cap {{\mathrm}{F}}$ with infinitely many disjoint subintervals of $[a,a+2]$ such that the sum of their lengths is $1$, the length of each interval is in ${{\mathrm}{F}}$, and the distance of the left endpoint of each interval from $a$ is also in ${{\mathrm}{F}}$. We are going to construct this covering by recursion.
In the $i$-th step, we will use only finitely many new intervals, such that the sum of their lengths is $1/{2^i}$. In the first step, we cover $r_1$ with an interval of length $1/2$. Suppose that we have covered $r_i$ for each $i<n$. Since we have used only finitely many intervals so far, we can cover $r_n$ with an interval that is not longer than $1/{2^n}$. Since $\sum_{i=1}^{n}1/2^i<1$, it is easy to see that we can choose finitely many other subintervals of $[a,a+2]$ to be added to this interval such that the sum of their lengths is $1/{2^n}$. Thus we obtain the covering of $[a,a+2]\cap {{\mathrm}{F}}$. Let us enumerate these intervals. Let $I_i$ be the $i$-th interval, $d_i$ the length of $I_i$, $d_0:=0$, and $a_i\geq 0$ the distance between $a$ and the left endpoint of $I_i$. $\sum_{i=1}^{\infty}d_i=1$ since $\sum_{i=1}^{\infty}{1}/{2^i}=1$. Let $$f(x):=\left\{ \begin{array}{ll} x & \text{ if } x<a , \\ x-1 & \text{ if } a+2\le x,\\ x-a_n+\sum\limits_{i=0}^{n-1}d_i & \text{ if } x\in I_n \end{array} \right.$$ for all $x\in {{\mathrm}{F}}$, cf. Figure 6. It is easy to see that $k$ is a translation-like bijection. Let $p,q\in {{\mathrm}{F}}^d$ be such that $p_t<a$ and $a+2<q_t$; and let $p':=k(p)=p$, $q':=k(q)=q-\widehat{1}$ and $m:=m_{o}$. It is also easy to check that is false in $\mathfrak{M}_{(\mathfrak{F},k)}$ for $k$, $m$, $p$, $q$, $p'$ and $q'$, cf. Figure 5. Let $\mathfrak{F}$ be a field elementarily equivalent to $\mathfrak{R}$, i.e. such that all FOL-formulas valid in $\mathfrak{R}$ are valid in $\mathfrak{F}$, too. Assume that $\mathfrak{F}$ is not isomorphic to $\mathfrak{R}$. E.g. the field of the real algebraic numbers is such. Let $\mathfrak{M}$ be a model of ${\textcolor{axcolor}{\ensuremath{\mathsf{AccRel_0}}}}$ with field-reduct $\mathfrak{F}$ in which neither ${\textcolor{axcolor}{\ensuremath{\mathsf{Tp}}}}$ nor ${\textcolor{axcolor}{\ensuremath{\mathsf{Ddpe}}}}$ is true. Such an $\mathfrak{M}$ exists by Theorem \[thmNoIND\].
Since $\mathfrak{M}\models Th(\mathfrak{R})$ by assumption, this shows $Th(\mathfrak{R})\cup {\textcolor{axcolor}{\ensuremath{\mathsf{AccRel_0}}}}\not\models {\textcolor{axcolor}{\ensuremath{\mathsf{Tp}}}}\lor {\textcolor{axcolor}{\ensuremath{\mathsf{Ddpe}}}}$. In a subsequent paper, we will discuss how the present methods and in particular and can be used for introducing gravity via Einstein’s equivalence principle and for proving that gravity “causes time run slow” (known as gravitational time dilation). In this connection we would like to point out that it is explained in Misner et al. [@MTW pp. 172-173, 327-332] that the theory of accelerated observers (in flat space-time!) is a rather useful first step in building up general relativity by using the methods of that book. APPENDIX {#appendix .unnumbered} ======== A FOL-formula expressing is: $$\begin{split} \forall m\;\forall p\quad{{\mathrm}{Ob}}(m)\wedge {{\mathrm}{F}}(p_1)\wedge\ldots\wedge {{\mathrm}{F}}(p_d)\; \Longrightarrow&\\ \Big[{{\mathrm}{W}}(m,m,p)\iff \big(\exists b\ {{\mathrm}{B}}(b)\wedge{{\mathrm}{W}}(m,b,p)\wedge& p_2=0\wedge\ldots\wedge p_d=0\big)\Big]. \end{split}$$ A FOL-formula expressing is: $$\begin{split} \forall m\; \forall p\; \forall q \quad {{\mathrm}{IOb}}(m)\wedge{{\mathrm}{F}}(p_1)\wedge {{\mathrm}{F}}(q_1)\wedge\ldots&\wedge {{\mathrm}{F}}(p_d)\wedge {{\mathrm}{F}}(q_d) \; \Longrightarrow\\ \Big[ (p_1-q_1)^2=(p_2-q_2)^2+\ldots+(p_d-q_d)&^2 \iff\\ \exists ph\ {{\mathrm}{Ph}}(ph)\wedge {{\mathrm}{W}}(m,p&h,p)\wedge{{\mathrm}{W}}(m,ph,q) \Big] \wedge\\ \Big[\forall ph\;\forall\lambda\quad {{\mathrm}{Ph}}(ph)\wedge{{\mathrm}{F}}(\lambda)\wedge{{\mathrm}{W}}(m,ph,p)\wedge&{{\mathrm}{W}}(m,ph,q)\\ \Longrightarrow\; {{\mathrm}{W}}\big(&m,ph,q+\lambda(p-q)\big)\Big]. 
\end{split}$$ A FOL-formula expressing is: $$\begin{split} \forall m\; \forall k\;\forall p\quad {{\mathrm}{IOb}}(m)\wedge{{\mathrm}{IOb}}(k)\wedge{{\mathrm}{F}}(p_1)\wedge\ldots\wedge{{\mathrm}{F}}(p_d)\; &\Longrightarrow \exists q\\ {{\mathrm}{F}}(q_1)\wedge\ldots\wedge {{\mathrm}{F}}(q_d)\wedge \big(\forall b\quad {{\mathrm}{B}}(b)\; \Longrightarrow\; [{{\mathrm}{W}}(m,b,p)\iff&{{\mathrm}{W}}(k,b,q)]\big). \end{split}$$ ACKNOWLEDGEMENTS {#acknowledgements .unnumbered} ================ We are grateful to Victor Pambuccian for careful reading the paper and for various useful suggestions. We are also grateful to Hajnal Andréka, Ramón Horváth and Bertalan Pécsi for enjoyable discussions. Special thanks are due to Hajnal Andréka for extensive help and support in writing the paper, encouragement and suggestions. Research supported by the Hungarian National Foundation for scientific research grants T43242, T35192, as well as by COST grant No. 274. [10]{} H. Andr[é]{}ka, J. X. Madar[á]{}sz and I. N[é]{}meti, “Logical axiomatizations of space-time,” In [*Non-Euclidean Geometries*]{}, E. Moln[á]{}r, ed. (Kluwer, Dordrecht, 2005) http://www.math-inst.hu/pub/algebraic-logic/lstsamples.ps. H. Andr[é]{}ka, J. X. Madar[á]{}sz and I. N[é]{}meti, with contributions from A. Andai, G. S[á]{}gi, I. Sain and Cs. T[ o]{}ke, “On the logical structure of relativity theories,” Research report, Alfr[é]{}d R[é]{}nyi Institute of Mathematics, Budapest (2002) http://www.math-inst.hu/pub/algebraic-logic/Contents.html. J. Ax, “The elementary foundations of spacetime,” , 507 (1978). C. C. Chang and H. J. Keisler, (North–Holland, Amsterdam, 1973, 1990). R. d’Inverno, (Clarendon, Oxford, 1992). A. Einstein, (von F. Vieweg, Braunschweig, 1921). G. Etesi and I. N[é]{}meti, “Non-turing computations via Malament-Hogarth space-times,” , 341 (2002) arXiv:gr-qc/0104023. J. Ferreir[ó]{}s, “The road to modern logic – an interpretation,” , 441 (2001). H. 
Friedman, On foundational thinking 1, Posting in FOM (Foundations of Mathematics) Archives www.cs.nyu.edu (January 20, 2004). H. Friedman, On foundations of special relativistic kinematics 1, Posting No 206 in FOM (Foundations of Mathematics) Archives www.cs.nyu.edu (January 21, 2004). L. Fuchs, [*Partially Ordered Algebraic Systems*]{} (Pergamon, Oxford, 1963). D. Hilbert, “Über den Satz von der Gleichheit der Basiswinkel im gleichschenkligen Dreieck,” , 50 (1902/1903). W. Hodges, [*A Shorter Model Theory*]{} (Cambridge Univ., Cambridge, 1997). M. Hogarth, “Deciding arithmetic using SAD computers,” , 681 (2004). J. X. Madarász, Logic and relativity (in the light of definability theory), PhD thesis, E[ö]{}tv[ö]{}s Lor[á]{}nd Univ., Budapest (2002) http://www.math-inst.hu/pub/algebraic-logic/Contents.html. C. W. Misner, K. S. Thorne and J. A. Wheeler, [*Gravitation*]{} (W.H. Freeman, San Francisco, 1973). V. Pambuccian, “Axiomatizations of hyperbolic and absolute geometries,” In [*Non-Euclidean Geometries*]{}, E. Molnár, ed. (Kluwer, Dordrecht, 2005). K. A. Ross, [*Elementary Analysis: The Theory of Calculus*]{} (Springer, New York, 1980). W. Rudin, [*Principles of Mathematical Analysis*]{} (McGraw-Hill, New York, 1953). P. Suppes, “The desirability of formalization in science,” , 651 (1968). G. Sz[é]{}kely, A first order logic investigation of the twin paradox and related subjects, Master’s thesis, E[ö]{}tv[ö]{}s Lor[á]{}nd Univ., Budapest (2004). A. Tarski, [*A Decision Method for Elementary Algebra and Geometry*]{} (Univ. of California, Berkeley, 1951). E. F. Taylor and J. A. Wheeler, [*Exploring Black Holes: Introduction to General Relativity*]{} (Addison Wesley, San Francisco, 2000). J. V[ä]{}[ä]{}n[ä]{}nen, “Second-order logic and foundations of mathematics,” , 504 (2001). R. M. Wald, [*General Relativity*]{} (Univ. of Chicago, Chicago, 1984). J. Wole[ń]{}ski, “First-order logic: (philosophical) pro and contra,” In [*First-Order Logic Revisited*]{} (Logos, Berlin, 2004). [^1]: Alfréd Rényi Institute of Mathematics of the Hungarian Academy of Sciences, POB 127 H-1364 Budapest, Hungary. E-mail addresses: madarasz@renyi.hu, nemeti@renyi.hu, turms@renyi.hu.
[^2]: In passing we mention that Etesi-Németi [@EN], Hogarth [@Hdecid] represent further kinds of *connection* between *logic and relativity* not discussed here. [^3]: For example, the ordered fields of the real numbers, the real algebraic numbers, and the hyper-real numbers are Euclidean, but the ordered field of the rational numbers is not Euclidean, and the field of the complex numbers cannot be ordered. For the definition of (linearly) ordered field, cf. e.g., Rudin [@Rudin] or Chang-Keisler [@Chang-Keisler]. [^4]: This inertial approximation of the twin paradox is formulated as at the end of Section \[main-s\], below Theorem \[thmMO\]. [^5]: This way of imitating a second-order formula by a FOL-formula schema comes from the methodology of approximating second-order theories by FOL ones; examples are Tarski’s replacement of Hilbert’s second-order geometry axiom by a FOL schema, and Peano’s FOL induction schema replacing second-order induction. [^6]: This follows from a theorem of Tarski, cf. Hodges [@Ho97 p.68 (b)] or Tarski [@Ta51], by Theorem \[thmBoltzano\] herein or [@mythes Proposition A.0.1]. [^7]: I.e. $A$ is an affine map if there are $L:{{\mathrm}{F}}^n\rightarrow {{\mathrm}{F}}^j$ and $a\in {{\mathrm}{F}}^j$ such that $A(p)=L(p)+a$, $L(p+q)=L(p)+L(q)$ and $L(\lambda p)=\lambda L(p)$ for all $p,q\in {{\mathrm}{F}}^n$ and $\lambda \in {{\mathrm}{F}}$. [^8]: I.e. there is a $c\in {{\mathrm}{F}}$ such that $h(x)=c$ for all $x\in [a,b]$. [^9]: It can be assumed that $f_t$ is increasing on $[a,b]$ because the assumptions of the theorem remain true when $f$ and $[a,b]$ are replaced by $-Id\circ f$ and $[-b,-a]$, respectively, and $f_t$ is decreasing on $[a,b]$ iff $(-Id\circ f)_t$ is increasing on $[-b,-a]$. [^10]: I.e. $f'_t(x)>0$ and $g'_t\big(h(x)\big)>0$. [^11]: I.e. the $I_i$’s are disjoint and ${{\mathrm}{F}}=I_1\cup I_2 \cup I_3 \cup I_4 \cup I_5$. [^12]: Let $H\subset {{\mathrm}{F}}$ be a non-empty bounded set that does not have a supremum.
Let $I_1:=\{x\in {{\mathrm}{F}}: \exists h \in H \quad x<h\}$, $I_2:=\{x+1\in {{\mathrm}{F}}: x\in I_1\}\setminus I_1$, $I_3:=\{x+1\in {{\mathrm}{F}}: x\in I_2\}$, $I_4:=\{x+1\in {{\mathrm}{F}}: x\in I_3\}$ and $I_5:={{\mathrm}{F}}\setminus(I_1\cup I_2 \cup I_3 \cup I_4)$.
--- abstract: 'The topological data analysis method “concurrence topology” is applied to mutation frequencies in 69 genes in glioblastoma data. In dimension 1 some apparent “mutual exclusivity” is found. By simulation of data having approximately the same second order dependence structure as that found in the data, it appears that one triple of mutations, PTEN, RB1, TP53, exhibits mutual exclusivity that depends on special features of the third order dependence and may reflect global dependence among a larger group of genes. A bootstrap analysis suggests that this form of mutual exclusivity is not uncommon in the population from which the data were drawn.' address: | Unit 42, NYSPI\ 1051 Riverside Dr.\ New York, NY 10032\ U.S.A. author: - 'Steven P. Ellis' date: '9/6/2017' title: Concurrence Topology of Some Cancer Genomics Data --- Introduction ============ This is a report of some work I have done under the auspices of the Rabadan Lab in the department of Systems Biology at Columbia University\ (https://rabadan.c2b2.columbia.edu/). Drs. Rabadan and Camara kindly provided me with some genomic data on glioblastoma (GBM). (See section \[S:data\].) I dichotomized those data and applied the “concurrence topology (CT)” (Ellis and Klein [@spEaK14.ConcurTopolfMRI]) method to them. Concurrence topology (CT) is a method of “topological data analysis” that uses persistent homology to describe aspects of the statistical dependence among binary variables. I detected some one-dimensional homology with apparently long lifespan. The first question I investigated was: is that lifespan statistically significantly long? “Long” compared to what? Statistical significance is always based on a “null” hypothesis. I took the null hypothesis to be that the observed persistent homology among mutations can be explained simply by the first and second order dependence among the mutations. Second order dependence is that which can be described fully by looking at genes just two at a time.
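The paper's actual simulation scheme is described in section \[S:simulation\]; as a generic illustration of the idea of generating binary data with prescribed first and second order statistics, one standard device is a "dichotomized Gaussian": threshold correlated normal variables so the marginal 1-probabilities match, accepting that the pairwise binary dependence is only approximately controlled by the latent correlation. The two-gene sketch below is our construction, not the author's procedure; the marginal probabilities roughly match PTEN (90/290) and TP53 (84/290), and the latent correlation of 0.4 is arbitrary.

```python
import math
import random
from statistics import NormalDist

random.seed(0)

def simulate_binary(n, p1, p2, rho):
    # Dichotomized-Gaussian sketch for two genes: draw (z1, z2) from a
    # bivariate normal with latent correlation rho, then threshold each z_i
    # at the quantile giving marginal mutation probability p_i.
    t1 = NormalDist().inv_cdf(1 - p1)
    t2 = NormalDist().inv_cdf(1 - p2)
    rows = []
    for _ in range(n):
        z1 = random.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * random.gauss(0.0, 1.0)
        rows.append((int(z1 > t1), int(z2 > t2)))
    return rows

rows = simulate_binary(20000, 90 / 290, 84 / 290, 0.4)
f1 = sum(r[0] for r in rows) / len(rows)
f2 = sum(r[1] for r in rows) / len(rows)
co = sum(r[0] * r[1] for r in rows) / len(rows)
assert abs(f1 - 90 / 290) < 0.02 and abs(f2 - 84 / 290) < 0.02
assert co > f1 * f2  # positive latent correlation raises co-occurrence
```

For more than two genes the same idea applies with a full latent correlation matrix and a Cholesky factor, with the usual caveat that the induced binary correlations must be calibrated rather than read off directly.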
A $p$-value can be computed by simulating data whose distribution is completely determined by the first and second order statistics of the GBM data. Specifically, I endeavored to simulate binary data sets of the size of the GBM data in such a way that all such data sets whose first and second order statistics approximate those of the GBM data are equally likely. What I mean by “approximate” is specified in section \[S:simulation\]. Simulating such data is itself rather challenging (section \[S:simulation\]) and I am not sure that my efforts were completely successful. Data and CT analysis {#S:data} ==================== The GBM data set consists of data on 290 tumors. Dr. Camara recommended 75 genes of which I was able to locate 69 in the data set. Each entry in the $290 \times 69$ matrix is a numerical score ranging from 0 to 4, inclusive. I dichotomized the data by converting every positive value to 1. So “1” indicates the presence of a mutation. The following table lists for every gene the number of tumors in which it was mutated. 
$$\begin{matrix} PTEN & TP53 & EGFR & PIK3R1 & NF1 \\ 90 & 84 & 77 & 33 & 32 \\ PIK3CA & RB1 & MUC17 & HMCN1 & ATRX\\ 32 & 25 & 23 & 19 & 17 \\ IDH1 & KEL & COL6A3 & STAG2 & GABRA6 \\ 15 & 15 & 14 & 12 & 11 \\ LZTR1 & PIK3C2G & SEMG1 & F5 & RPL5 \\ 10 & 9 & 9 & 9 & 8 \\ TPTE2 & NUP210L & IL4R & BCOR & BRAF\\ 8 & 8 & 8 & 7 & 6 \\ TP63 & TRPA1 & TLR6 & QKI & PTPN11 \\ 6 & 5 & 5 & 5 & 5 \\ PLCG1 & SETD2 & FAM126B & ZDHHC4 & TCF12 \\ 5 & 5 & 4 & 4 & 4 \\ DDX5 & SLC6A3 & CLCN7 & RNF168 & GLT8D2 \\ 4 & 4 & 4 & 4 & 4 \\ TGFA & EEF1A1 & AOX1 & ACAN & NIPBL \\ 4 & 4 & 4 & 4 & 3 \\ ZNF292 & KRT13 & RBBP6 & EPHA3 & CLIP1 \\ 3 & 3 & 3 & 3 & 3 \\ KRT15 & CREBZF & MAX & ST3GAL6 & ARID1A \\ 2 & 2 & 2 & 2 & 2 \\ KRAS & C15orf48 & TYRP1 & ARID2 & PPM1J\\ 2 & 2 & 2 & 2 & 2 \\ ZBTB20 & NRAS & IL1RL1 & C10orf76 & EIF1AX \\ 1 & 1 & 1 & 1 & 1 \\ CIC & SMARCA4 & ABCD1 & EDAR \end{matrix}$$ I ran my CT code on this binary data set for dimension 1 and with $\mathbb{Z}/2 \mathbb{Z}$ coefficients. Figure \[F:GBM\_persistence\_plot\] shows the persistence diagram. The two persistence classes corresponding to the dot in the figure lying furthest below the diagonal line are the ones with the longest lifespan. They are born in “frequency level” 15 and have lifespan 13. Since at least three subjects (tumors) are needed to form a 1-cycle, each of these persistent classes involves at least $3 \times 15 = 45$ tumors, or about 15.5% of the sample. In CT it is often possible to “localize” classes in the sense of looking for representative “short cycles”. A “short cycle” in dimension 1 means a cycle consisting of three 1-simplices, i.e., line segments. A short cycle is uniquely determined by its vertices. In this context, vertices correspond to genes. We find that each of the two classes with lifespan 13 has a short representative in frequency levels 15 and lower. They are: > EGFR, TP53, PTEN and\ > PTEN, RB1, TP53.
Each of these triplets exhibits “mutual exclusivity” (Ciriello et al [@gCeCcSnS2012.MutualExclusOncogenes], Szczurek and Beerenwinkel [@eSnB.MutualExclusCancerGenes], and Melamed et al [@rdMjWaIrR.GeneAlterationsGlioblastoma]). Note that the mutations involved in these short cycles are the three most common in the sample, EGFR, TP53, PTEN, and the seventh most common, RB1. Later we will find more evidence that there is something special about RB1. These triples of genes reflect more than just mutual exclusivity. The corresponding 1-cycles represent homology, and homology is a *global* property involving all 69 genes. So we have found an apparent global pattern of mutation in GBM. However, I do not have a biological interpretation of this kind of structure. Simulation {#S:simulation} ========== My goal was to generate random data sets that share the same first and second order statistics as the real data, at least approximately. For the cancer data this seems a little tricky. Here we discuss the algorithm I used. Let $D$ be the $N \times d$ data matrix. So $N = 290$ is the number of samples and $d = 69$ is the number of genes. We have $N > d$. $D$ is a binary (0 – 1) matrix and its $(i,j)$ entry is 1 if and only if the $j^{th}$ gene for the $i^{th}$ tumor is mutated. The first and second order statistics are captured by the $d \times d$ matrix $C := D^{T} D$, where “${}^{T}$” indicates matrix transposition. For $i \neq j$, the $(i,j)$ entry of $C$ is the number of times mutations of both genes $i$ and $j$ are present in the same sample, a second order statistic. The $i^{th}$ diagonal element of $C$ is the number of samples in which the $i^{th}$ gene is mutated, a first order statistic. For $n = 1, 2, \ldots$, let $1_{n}$ be the column vector all of whose entries are 1. Then $s := 1_{N}^{T} D$ is the $d$-dimensional row vector of column sums of $D$. Thus, $s$ is the same as the diagonal of $C$.
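As a quick concrete check of these definitions (on a small hypothetical binary matrix, not the GBM data):

```python
import numpy as np

# Toy stand-in for the tumor-by-gene matrix D: 5 "tumors" x 3 "genes"
# (hypothetical values, not the GBM data).
D = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 0],
              [1, 0, 0],
              [0, 0, 1]])

C = D.T @ D              # first and second order statistics
s = np.ones(len(D)) @ D  # s = 1_N^T D, the row vector of column sums

# The diagonal of C holds the per-gene mutation counts, and s equals it.
assert np.array_equal(np.diag(C), s)
# The off-diagonal (i, j) entry counts samples mutated in both genes.
assert C[0, 1] == np.sum(D[:, 0] * D[:, 1]) == 1
```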
“Center” $D$ by subtracting out the column means: $D_{0} := D - M$ where $M^{N \times d} := N^{-1} 1_{N} \, s$. (I use superscripts to indicate matrix dimensions.) Thus, the column sums of $D_{0}$ are all 0: $1_{N}^{T} D_{0} = 0$. Let $D_{0} = U \Lambda V^{T}$ be the singular value decomposition of $D_{0}$ (Wikipedia). Thus, $U$ is an $N \times d$ matrix with orthonormal columns, $\Lambda$ is $d \times d$ diagonal, and $V$ is orthogonal. For our data sets no entry of $\Lambda$ is 0 and all the entries are distinct. Since $0 = 1_{N}^{T} D_{0} = 1_{N}^{T} U \Lambda V^{T}$ it follows that $$\label{E:1.N.U.=.0} 1_{N}^{T} U = 0.$$ Moreover, $D_{0}^{T} D_{0} = V \Lambda^{2} V^{T}$. Hence, the diagonal of $\Lambda^{2}$ is the vector of eigenvalues of $D_{0}^{T} D_{0}$ and the columns of $V$ are the corresponding unit eigenvectors. Therefore, since the eigenvalues are distinct, the columns of $V$ are unique up to sign. Observe that $N^{-1} D_{0}^{T} D_{0}$ is just the (variance-)covariance matrix of $D$. Hence, the columns of $V$ are the unit eigenvectors of the covariance matrix and the diagonal elements of $N^{-1} \Lambda^{2}$ are the eigenvalues. We have $$\label{E:D=U.Lamb.VT.+.M} D = D_{0} + M = U \Lambda V^{T} + M$$ where, you recall, $D$ is the original data matrix. Let $W^{N \times d}$ be any matrix with orthonormal columns such that $1_{N}^{T} W = 0$. For example, by \[E:1.N.U.=.0\], we can take $W = U$. Let $Y := W \Lambda V^{T} + M$. Since $1_{N}^{T} W = 0$ we have $W^{T} M = N^{-1} W^{T} 1_{N} s = 0$. Similarly, $M^{T} M = N^{-2} s^{T} 1_{N}^{T} 1_{N} s = N^{-1} s^{T} s$, since $1_{N}^{T} 1_{N} = N$. Thus, since the columns of $W$ are orthonormal, $$Y^{T} Y = ( V \Lambda W^{T} + M^{T} ) (W \Lambda V^{T} + M) = V \Lambda^{2} V^{T} + N^{-1} s^{T} s.$$ So $Y^{T} Y$ *does not depend on* $W$. In particular, $Y^{T} Y = D^{T} D = C$. We can use this fact to sample uniformly from the set of all $N \times d$ matrices, $Y$, such that $Y^{T} Y = C$, i.e.
to sample uniformly from the set of all $N \times d$ matrices having the same first and second order statistics that $D$ has. One merely has to sample uniformly from the space of all matrices $W^{N \times d}$ with orthonormal columns such that $1_{N}^{T} W = 0$. Such sampling can be done easily as follows. Let $w^{N \times 1}$ be a random Gaussian column vector with statistically independent, *population* mean 0 components. Let $\bar{w} = N^{-1} 1_{N}^{T} w$ be the *sample* mean of the components of $w$. ($\bar{w}$ is just a random number.) Center $w$, i.e., replace it by $w_{1} := w - \bar{w} 1_{N}$. Thus, $w_{1}$ is a random $N$-column vector and $1_{N}^{T} w_{1} = 0$. Repeat this operation independently $d$ times, producing column vectors $w_{1}, \ldots, w_{d}$. Apply the Gram-Schmidt orthogonalization process to these vectors to produce orthonormal column $N$-vectors $w_{1}', \ldots, w_{d}'$. Since $w_{1}, \ldots, w_{d}$ all have sample mean 0, so do $w_{1}', \ldots, w_{d}'$. Let $W^{N \times d}$ be the matrix whose columns are $w_{1}', \ldots, w_{d}'$. Finally, take $Y := W \Lambda V^{T} + M$. Then we know $Y^{T} Y = C$. I mentioned above that the columns of $V$ are unique up to sign. One might think that one can make the distribution of $Y$ more uniform by randomly changing the signs of the columns of $V$. Let $E^{d \times d}$ be a diagonal matrix with diagonal entries $\epsilon_{i} = \pm 1$, ($i=1, \ldots, d$). Since diagonal matrices commute, we have $W \Lambda (V E)^{T} = W \Lambda E V^{T} = (W E) \Lambda V^{T}$. Thus, multiplying the columns of $V$ by $\epsilon_{i}$, ($i=1, \ldots, d$) amounts to replacing $w_{1}', \ldots, w_{d}'$ by $\epsilon_{1} w_{1}', \ldots, \epsilon_{d} w_{d}'$. Now, examination of the Gram-Schmidt process shows that $\epsilon_{1} w_{1}', \ldots, \epsilon_{d} w_{d}'$ is exactly what one gets when one applies Gram-Schmidt to the original $\epsilon_{1} w_{1}, \ldots, \epsilon_{d} w_{d}$, i.e.
changing the signs of the independent Gaussian random vectors we started with. But these random vectors are independent Gaussian with mean 0; therefore changing their signs does not change their distribution. This shows that changing the signs of the columns of $V$ does not change the distribution of $Y$ and, thus, is unnecessary. $Y$ constructed as above is uniformly distributed over the space of matrices $X^{N \times d}$ with $X^{T} X = C$. There is only one problem. We want *binary* matrices and the probability that $Y$ generated as above is binary is 0. The obvious remedy is to threshold: Write $Y = ( y_{ij} )$. For some number $t$ replace each entry $y_{ij}$ by 0 if $y_{ij} < t$ and by 1 otherwise. Call the resulting binary matrix $B_{t}^{N \times d}$. But what threshold $t$ should we use? Alas, we have to accept the fact that no matter what $t$ we use we will *not* have $B_{t}^{T} B_{t} = C$. So we have to settle for an approximation $B_{t}^{T} B_{t} \approx C$. We pick $t$ to get a “best” approximation. In order to define what “best” means we need a definition of distance between $B_{t}^{T} B_{t}$ and $C$. One possibility is to use the (squared) Euclidean or Frobenius matrix distance: $trace \, (B_{t}^{T} B_{t} - C)^{T} (B_{t}^{T} B_{t} - C)$. This is just the sum of the squared entries of $B_{t}^{T} B_{t} - C$. But remember that $D$, and hence, $C$ are random, and a more stable distance would use weights approximately equal to the variances of the entries of $C$. There are numerous ways one might estimate these variances. I employed a simple one. Remember that the entries of $C$ are counts, i.e., non-negative integer values. Perhaps the simplest distribution of a non-negative integer-valued random variable is the Poisson. So a crude estimate of the variance of an entry is just the entry itself. However, there are hundreds of unique values in $C$ so the values themselves will be very noisy estimates.
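The floating point construction described earlier (centering, SVD, a random $W$ with orthonormal zero-mean columns, $Y := W \Lambda V^{T} + M$) is compact enough to sketch in code. The matrix sizes below are hypothetical stand-ins for the $290 \times 69$ data, and `np.linalg.qr` plays the role of the Gram-Schmidt step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary stand-in for the 290 x 69 data matrix.
N, d = 40, 6
D = (rng.random((N, d)) < 0.3).astype(float)
C = D.T @ D

# Center: M = N^{-1} 1_N s, D_0 = D - M, then the SVD of D_0.
M = np.ones((N, 1)) @ D.mean(axis=0, keepdims=True)
U, lam, Vt = np.linalg.svd(D - M, full_matrices=False)  # W = U would also do

# Random W with orthonormal, zero-mean columns: center i.i.d. Gaussian
# columns, then orthonormalize (QR plays the role of Gram-Schmidt, and
# its columns stay orthogonal to 1_N).
w = rng.standard_normal((N, d))
w -= w.mean(axis=0)             # 1_N^T w = 0
W, _ = np.linalg.qr(w)

Y = W @ np.diag(lam) @ Vt + M   # floating point surrogate data
assert np.allclose(Y.T @ Y, C)  # same first and second order statistics as D
```

The assertion at the end is exactly the identity $Y^{T} Y = C$ derived above; it holds for every draw of $W$.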
Now, in statistics it is well known that when simultaneously estimating a large number of quantities one improves estimates by shrinking toward a constant. I informally employed that technique. Let $c$, a number, be the sample mean of all the entries in $C$ and let $\bar{C}$ be the $d \times d$ matrix all of whose entries are $c$. Then for the purpose of weighting we replace $C$ by $\hat{C} := (1/2) C + (1/2) \bar{C}$. Now define the “distance”, $\delta(B_{t}^{T} B_{t}, C)$, between $B_{t}^{T} B_{t}$ and $C$ as follows. Form the matrix $\Delta_{2}^{d \times d} = (\delta_{ij})$ whose $ij^{th}$ entry is the squared difference between the $ij^{th}$ entry of $B_{t}^{T} B_{t}$ and the $ij^{th}$ entry of $C$. Now divide $\delta_{ij}$ by the $ij^{th}$ entry of $\hat{C}$. Add up all those quotients. That’s the “distance”, $\delta(B_{t}^{T} B_{t}, C)$. However, $B_{t}^{T} B_{t}$ is symmetric, so in this procedure the unique off-diagonal elements are counted twice. To make up for this, we modify $\hat{C}$ by dividing its diagonal by 2. This doubles the contribution of the diagonals of $B_{t}^{T} B_{t}$ and $C$. The binary matrix we want will not be all 0 or 1. Therefore, the only thresholds we need try are the distinct numeric values in $Y$. We try all those values as thresholds and pick the one that minimizes $\delta(B_{t}^{T} B_{t}, C)$. How do we know that the minimum distance we achieve is small enough? Again, the data matrix $D$ itself is random. Even if we gathered another data set $D_{2}$ using the same method used to gather $D$ we would not have $D_{2}^{T} D_{2} = C$. Therefore, it is unreasonable to insist that $\delta(B_{t}^{T} B_{t}, C)$ be tiny. But how small a value for $\delta(B_{t}^{T} B_{t}, C)$ is acceptable? Looking at the distribution of $\delta(D_{2}^{T} D_{2}, C)$, where $D_{2}$ is drawn at random from the same population that $D$ came from gives us a yardstick to use for judging sizes of $\delta(B_{t}^{T} B_{t}, C)$. 
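The weighted distance and the threshold search might be sketched as follows (the helper names `delta` and `best_threshold` are mine; the weights use the half-and-half shrinkage toward the mean, with the diagonal halved as just described):

```python
import numpy as np

def delta(B, C, C_hat):
    """Weighted squared distance between B^T B and C; halving the
    diagonal of the weight matrix doubles the diagonal's contribution,
    balancing the double-counted off-diagonal entries."""
    W = C_hat.astype(float).copy()
    W[np.diag_indices_from(W)] /= 2.0
    return np.sum((B.T @ B - C) ** 2 / W)

def best_threshold(Y, C):
    """Binarize Y at every distinct value it contains; return the
    (distance, threshold, binary matrix) triple minimizing delta."""
    C_hat = 0.5 * C + 0.5 * C.mean()  # shrink weights toward the mean
    best = None
    for t in np.unique(Y):
        B = (Y >= t).astype(int)
        d = delta(B, C, C_hat)
        if best is None or d < best[0]:
            best = (d, t, B)
    return best
```

On an easy test case (a binary matrix plus small Gaussian perturbations) the search recovers the original matrix exactly; on a genuine $Y$ from the SVD construction the minimum is positive, which is why a yardstick for "small enough" is needed.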
Now, we cannot draw new samples $D_{2}$, but we can approximate that process by drawing samples, with replacement, from the rows of $D$. This is the non-parametric bootstrap (Efron and Tibshirani [@bErjT93.bootstrap]). One draws many samples, with replacement, from the rows of $D$. (These are called “resamples”. I drew 2,000.) Each time one obtains a matrix $D_{2}$. One then records the value of $\delta(D_{2}^{T} D_{2}, C)$ for each resample. The distribution of all these numbers approximates the distribution one would get by taking many samples $D_{2}$ from the population. I chose the median $m_{2}$ of these distances as the cutoff for distinguishing matrices that are close to $D$ from those that are not. Unfortunately, even the closest $B_{t}$, call it $B_{t_{opt}}$, generated by thresholding $Y$ practically always fails this test. So more work needs to be done to $B_{t_{opt}}$ to make it acceptable. To do this I used an informal “Markov Chain Monte Carlo (MCMC)” algorithm (Wikipedia). Initialize $Z_{1} := B_{t_{opt}}$ and $b_{1} := \delta(Z_{1}^{T} Z_{1}, C)$. At each step pick a random entry in $Z_{1}$ and flip it, so 0 gets replaced by 1 or vice versa. (Call that a “flip attempt”.) Call the resulting matrix $Z$. Then compute $b := \delta(Z^{T} Z, C)$. If $b < b_{1}$ (a “successful flip”) then set $Z_{1} := Z$ and $b_{1} := b$. Otherwise, $Z_{1}$ and $b_{1}$ are not changed. That is the iteration. What is the stopping rule? I stopped the iteration as soon as $b_{1} < m_{2}$. One might be concerned that when the algorithm halts the distance $b_{1} := \delta(Z_{1}^{T} Z_{1}, C)$ would be only slightly smaller than $m_{2}$. Values much smaller than $m_{2}$ would never be achieved. However, it is well known that the volume of a high dimensional ball is almost entirely found near the boundary.
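The flip iteration is, in effect, a greedy random search; a minimal sketch (the function name and signature are mine, with `target` playing the role of $m_{2}$):

```python
import numpy as np

def greedy_flips(B, C, C_hat, target, rng, max_attempts=100_000):
    """Flip random entries of the binary matrix B, keeping only flips
    that lower the weighted distance to C; stop once the distance
    drops below `target` (the bootstrap median m_2 in the text)."""
    W = C_hat.astype(float).copy()
    W[np.diag_indices_from(W)] /= 2.0
    Z = B.copy()
    b = np.sum((Z.T @ Z - C) ** 2 / W)
    for _ in range(max_attempts):
        if b < target:
            break
        i = rng.integers(Z.shape[0])
        j = rng.integers(Z.shape[1])
        Z[i, j] = 1 - Z[i, j]                  # a flip attempt
        b_new = np.sum((Z.T @ Z - C) ** 2 / W)
        if b_new < b:                          # a successful flip
            b = b_new
        else:
            Z[i, j] = 1 - Z[i, j]              # undo the flip
    return Z, b
```

Since rejected flips are undone, the distance is non-increasing; the only question is how many attempts are needed, which is what the flip counts reported in the next section measure.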
So if one did sample matrices from the ball centered at $D$ and having $\delta$-radius $m_{2}$, one would rarely get a matrix whose $\delta$-distance from $D$ is much smaller than $m_{2}$. So that is not a legitimate objection to the “MCMC” algorithm. A more justified concern is the following. Above we argued that $Y$ constructed as above is uniformly distributed over the space of matrices $X^{N \times d}$ with $X^{T} X = C$. Even the matrix $B_{t_{opt}}$, though not close to $D$, is unbiased in terms of the *direction* $B_{t_{opt}} - D$. I.e., I conjecture that it would not be hard to prove that the expected value of $B_{t_{opt}} - D$ is 0. However, I fear that the informal MCMC step in the construction might introduce some bias. Still, why not skip the SVD step and just perform the “MCMC” step? As an experiment I generated a 0 matrix of the same dimensions as the data and randomly replaced 712 of the entries by 1’s, where 712 is the number of 1’s in the original data matrix. It took more than 21,000 flip attempts, with over 1,000 successful flips, to bring this matrix close to the data matrix. A second attempt produced similar results. As a benchmark, note that there are 20010 positions in the data matrix. But it is not just to save flips that the SVD-based starting matrix is helpful. That approach also serves, I believe, to help generate binary matrices (nearly?) uniformly distributed among matrices with first and second order statistics similar to the real population. Simulation results and bootstrap {#S:results} ================================ I generated 500 synthetic data sets using the algorithm described in section \[S:simulation\]. In contrast to the experiments described in the last paragraph of the last section, in these 500 MCMC calculations, the range in the number of flip attempts needed to bring the second order statistics acceptably close to those of the data was 625 to 2076 with a median of 1061.
The number of successful flips ranged from 193 to 329 with a median of 255. For each synthetic data set I found the persistent 1-D homology classes with the longest lifespan. (There was practically always just one such class.) Here are summary statistics for those lifespans: $$\begin{matrix} Min. & 1st Qu. & Median & Mean & 3rd Qu. & Max. \\ 5.00 & 12.00 & 14.00 & 14.36 & 17.00 & 25.00 \end{matrix}$$ We observe that the maximum lifespan in the real data, viz., 13, is not remarkable in the simulated data. In fact, 56.4% of the time the maximum lifespan obtained in simulation was larger than in the data. For those classes with maximum lifespan I also recorded their frequency level of birth. Here are summaries. $$\begin{matrix} Min. & 1st Qu. & Median & Mean & 3rd Qu. & Max. \\ 6.00 & 19.00 & 22.00 & 20.63 & 24.00 & 32.00 \end{matrix}$$ We observe that the birth level in the real data, viz., 15, is actually rather small by comparison. In fact, 84.4% of the time the birth of the class with maximum lifespan obtained in simulation was larger than in the data. The vertices that appear in one or both of the short cycles I found in the longest lived classes in the GBM data (section \[S:data\]) are EGFR, PTEN, RB1, and TP53. For each of them I recorded if it appeared in a short cycle in the simulated data. (The persistent class with the longest lifespan was represented by one or more short cycles in all but ten of the simulations.) Here are the proportions of the simulations in which that happened. $$\begin{matrix} EGFR & PTEN & RB1 & TP53 \\ 0.914 & 0.968 & 0.080 & 0.970 \end{matrix}$$ We see, then, that the homology class represented by the short cycle with vertices PTEN, RB1, TP53 appears rather uncommonly in the simulated data (because RB1 appears uncommonly). But perhaps the cycle with vertices PTEN, RB1, TP53 also appears uncommonly in the population from which the GBM data is derived.
To answer this question, in the spirit of Chazal et al [@fCbtFfLaRaSlW14.BootPersDiags], I applied the bootstrap method mentioned in section \[S:simulation\] with 500 resamples. Thus, I resampled the tumors (rows) of the GBM matrix and computed the same summaries I just described for the SVD simulations. I found that in over 25% of the resamples the longest lived classes had short cycles including RB1 as a vertex. This is (circumstantial) evidence that in the population of tumors from which the data are drawn the homology class represented by the short cycle with vertices PTEN, RB1, TP53 is not uncommon. Discussion {#S:discussion} ========== Applying the Concurrence Topology method to the GBM data we found two cycles with apparently long lifespans. But “long” compared to what? Compared to results one would get just “by luck”. But what kind of luck? Complete independence among mutations is biologically unrealistic. It is already known that mutations are not independent. (See, for example, the afore-cited articles on mutual exclusivity.) Instead, I focused on the luck one would observe if the distribution of mutations reflected, approximately, the first and second order statistics in the data. I used an algorithm that is intended to generate samples from such a distribution. It is not hard to generate *floating point*-valued matrices whose first and second order statistics exactly match those of the data. The difficulty is in satisfying the requirement that the algorithm produce *binary* matrices having the desired distribution. Specifically, one would like all binary matrices having approximately the right statistics (where “approximately” is defined in section \[S:simulation\]) to be equally likely. The method I used takes the floating point algorithm as a starting point, then discretizes and randomly flips entries to achieve an approximate match. Perhaps better algorithms already exist somewhere; otherwise, more work in this area is needed.
We found that neither the frequency level of birth nor the lifespan of the classes in the real data is remarkable in that second order context. However, one of the classes with the longest lifespan in the data appears rather infrequently in the second order distribution but not infrequently in bootstrapped resamples. This suggests the involvement of third and higher order dependence among the mutations. I find it surprising that first and second order dependence can give rise to persistent homology in dimension 1 with long lifespans. I think this is partly due to the fact that some of the mutations, *viz.*, EGFR, TP53, PTEN, are so common. In section \[S:data\] I pointed out that persistent homology reflects “mutual exclusivity”. But mutual exclusivity is a “local phenomenon”: mutual exclusivity among a group of mutations, e.g. EGFR, TP53, PTEN, is a property of the joint distribution of just those mutations. But persistent homology is a *global* property. In general, it reflects the joint distribution of all, in this case 69, genes. In the case of the short cycle EGFR, TP53, PTEN the fact that it represents persistent homology with a long lifespan may just be local because those mutations are far more numerous than the others. However, the short cycle PTEN, RB1, TP53 seems to reflect something more global because RB1 is not such a common mutation. The simulations seem to confirm this. [MWIR15]{} Giovanni Ciriello, Ethan Cerami, Chris Sander, and Nikolaus Schultz, *[Mutual exclusivity analysis identifies oncogenic network modules]{}*, Genome Research **22** (2012), 398–406. Frédéric Chazal, Brittany Terese Fasy, Fabrizio Lecci, Alessandro Rinaldo, Aarti Singh, and Larry Wasserman, *On the bootstrap for persistence diagrams and landscapes*, arXiv:1311.0376 \[math.AT\], 2014. Steven P.
Ellis and Arno Klein, *[Describing high-order statistical dependence using “concurrence topology”, with application to functional MRI brain data]{}*, Homology, Homotopy, and Applications **16** (2014), 245–264. Bradley Efron and Robert J. Tibshirani, *[An Introduction to the Bootstrap]{}*, Chapman & Hall, New York, 1993. Rachel D. Melamed, Jiguang Wang, Antonio Iavarone, and Raul Rabadan, *[An information theoretic method to identify combinations of genomic alterations that promote glioblastoma]{}*, [Journal of Molecular Cell Biology]{} **7** (2015), 203–213. Ewa Szczurek and Niko Beerenwinkel, *[Modeling Mutual Exclusivity of Cancer Mutations]{}*, [PLOS Computational Biology]{} **10** (2014), e1003503.
--- abstract: 'We report on our analysis of a large sample of energy dependent pulse profiles of the X-ray binary pulsar Hercules X-1. We find that all data are compatible with the assumption of a slightly distorted magnetic dipole field as sole cause of the asymmetry of the observed pulse profiles. Further the analysis provides evidence that the emission from both poles is equal. We determine an angle $\Theta_{\rm m} < 20^{\circ}$ between the rotation axis and the local magnetic axis. One pole has an offset $\delta<5^{\circ}$ from the antipodal position of the other pole. The beam pattern shows structures that can be interpreted as pencil- and fan-beam configurations. Since no assumptions on the polar emission are made, the results can be compared with various emission models. A comparison of results obtained from pulse profiles of different phases of the 35-day cycle indicates different attenuation of the radiation from the poles being responsible for the change of the pulse shape during the main-on state. These results also suggest the resolution of an ambiguity within a previous analysis of pulse profiles of Cen X-3, leading to a unique result for the beam pattern of this pulsar as well. The analysis of pulse profiles of the short-on state indicates that a large fraction of the radiation cannot be attributed to the direct emission from the poles. We give a consistent explanation of both the evolution of the pulse profile and the spectral changes with the 35-day cycle in terms of a warped precessing accretion disk.' author: - 'S. Blum and U. Kraus' title: | Analyzing X-Ray Pulsar Profiles:\ Geometry and Beam Pattern of Her X-1 --- Introduction {#intro} ============ Since its discovery in 1972 by the UHURU satellite (Tananbaum et al. 1972), the X-ray binary system Hercules X-1/HZ Herculis has become the best studied of its class of about 44 known today (Bildsten et al. 1997). 
They are understood to be fast spinning neutron stars that are accreting matter from a massive companion star, either via Roche lobe overflow or from the stellar wind of the companion. Since the neutron stars have strong magnetic fields, the accreted matter is funnelled along the field lines onto the magnetic poles, where most of the energy is released in the form of X-radiation. Generally the magnetic axis and the rotation axis are not aligned. Therefore a large fraction of the detected flux from these sources is pulsed, as during the course of each revolution of the neutron star the beams from the poles sweep through our line of sight. Her X-1/HZ Her combines most of the properties that can be found in X-ray binaries, which has made it one of the favourite sources of X-ray astronomers. From the observation of eclipses and from pulse timing analyses the orbital parameters are well determined. The masses of the neutron star and its optical companion are $1.3 \ M_{\odot}$ and $2.2 \ M_{\odot}$ respectively, the orbital period of Her X-1 is $1.7 \ {\rm d}$, and the inclination of the orbital plane is $i>80^{\circ}$ (Deeter, Boynton, & Pravdo 1981). In addition to the pulse period of 1.24 s, i.e. the rotation period of the neutron star, Her X-1 also displays X-ray intensity variations on a period of about 35 days. Such a long-term variability is only known for two other pulsars: LMC X-4 and SMC X-1. The 35-day cycle of Her X-1 is nowadays ascribed to the precession of a warped accretion disk which periodically obscures the neutron star from our view (Petterson, Rothschild, & Gruber 1991, Schandl & Meyer 1994). During its high intensity or main-on state, Her X-1 has a luminosity $L_{\rm x} \approx 2.5 \cdot 10^{37} \ {\rm ergs \ s}^{-1}$ (2-60 keV) (McCray et al. 1982). The maximum flux of the short-on state is typically only 30% of that of the main-on.
Balloon observations in 1977 allowed for the first time the indirect measurement of the magnetic field strength of some $10^{12}$ G through the detection of a spectral feature in the hard X-ray spectrum (Trümper et al. 1978), interpreted as a cyclotron absorption line at about 40 keV. The pulse shapes of Her X-1 are highly asymmetric and depend on energy and on the phase of the 35-day cycle. In several studies phenomenological emission patterns have been used to reproduce the asymmetric pulse profiles of Her X-1. Wang & Welter (1981) fitted the geometry of two antipodal polar caps with asymmetric fan-beam patterns. In this approach the asymmetry of the emission pattern was attributed to asymmetric accretion due to the plasma becoming attached to the magnetic field lines away from the corotation radius. However, it is not clear whether an asymmetric accretion stream must produce an asymmetric beam pattern (Basko & Sunyaev 1975). Another way of introducing asymmetry into the pulse shapes is via non-antipodal emission regions. Leahy (1991) used two offset rings on the surface of the neutron star with symmetric pencil-beams, and Panchenko & Postnov (1994) modelled two antipodal polar caps and one ringlike area which was attributed to a non-coaxial quadrupole configuration of the magnetic field. Further studies have shown that relativistic light deflection near the neutron star plays an important role when emission models are used to explain the observed pulse shapes (e.g. Riffert et al. 1993, Leahy & Li 1995). In this analysis we take up the idea of a non-antipodal location of the emission regions caused by a slightly distorted magnetic dipole field. We assume that the emission originating from the regions near the magnetic poles only depends on the viewing angle between the magnetic axis and the direction of observation, which means that the emission is symmetric with respect to the local magnetic axis.
In contrast to previous studies where specific emission models have been used to fit the pulse profiles, the method used here does not involve any assumptions on the polar emission. Instead it tests in a general way whether the pulse profiles are compatible with the assumption that they are the sum of two independent symmetric components. The method we use to analyze pulse profiles is briefly summarized in the following §\[method\]. In §\[data\] we list the analyzed data. The results of the analysis are presented in §\[results\]. We show that the data of Her X-1 are indeed compatible with the idea of a slightly distorted magnetic dipole field. Further we find indications in the contributions to the pulse profiles that the emission from both poles is identical. We determine the location of the magnetic poles and reconstruct the beam pattern, which is discussed in §\[char\]. In the following §\[35d\] we examine the dependence of the pulse shape on the phase of the 35-day cycle. We argue that the contributions to the pulse profile undergo different attenuation resulting in the observed evolution of the pulse shapes during the main-on state of the 35-day cycle. Analysis ======== The Method {#method} ---------- This section is a short summary of the method we use to analyze the energy dependent pulse profiles of Her X-1. We will focus on the main ideas and assumptions omitting both formal derivations and technical details. A comprehensive presentation of the material including a test case has been given in Kraus et al. (1995). Consider the emission region near one of the magnetic poles of the neutron star. Radiation escapes from the accretion stream and from the star’s surface and, while close to the star, is deflected in the gravitational field of the neutron star. A distant observer who cannot spatially resolve the emission region measures the integrated flux coming from the entire visible part of the emission region. 
The observed integrated flux depends on the direction of observation because the direction of observation determines which part of the emission region is visible and also because the radiation emitted by the accretion stream and the neutron star is presumably beamed. This function, namely the flux of a single emission region measured by a distant observer as a function of the direction of observation, is the link between the properties of the emission region and the contribution of that emission region to the pulse profile. In the following we will call this function the beam pattern of the emission region. The contribution of the emission region to the pulse profile, which we will refer to as a single-pole pulse profile, depends both on the beam pattern and on the pulsar geometry, i.e., on the orientation of the rotation axis with respect to the direction of observation and on the location of the magnetic pole on the neutron star. In short: the local emission pattern plus relativistic light deflection determine the beam pattern, the beam pattern plus the geometry result in a certain single-pole pulse profile, and the superposition of the single-pole pulse profiles of both emission regions is the total pulse profile. ### a. decomposition into single-pole pulse profiles {#a.-decomposition-into-single-pole-pulse-profiles .unnumbered} In the following we are going to assume that the beam pattern is axisymmetric with respect to the magnetic axis (i.e., to the axis that passes through the center of the neutron star and through the magnetic pole). The axisymmetric beam pattern is a function of only one variable, the angle $\theta$ between the direction of observation and the magnetic axis. Consider now the single-pole pulse profile $f(\phi)$, where $\phi$ is the angle of rotation of the neutron star.
It can easily be shown that the single-pole pulse profile produced by an axisymmetric beam pattern is symmetric in the following sense: there is a rotation angle $\Phi$, so that $f(\Phi-\phi) = f(\Phi+\phi)$ for all values of $\phi$. The fact that $f$ is periodic in $\phi$ implies that the same symmetry must hold with respect to the rotation angle $\Phi+\pi$. Now turn to the total pulse profile produced as the sum of the two symmetric single-pole pulse profiles. If the emission regions are antipodal, i.e., the two magnetic axes are aligned, it turns out that the symmetry points $\Phi_1$ and $\Phi_1 + \pi$ of the first single-pole pulse profile fall on the same rotation angles as the symmetry points $\Phi_2$ and $\Phi_2 + \pi$ of the second single-pole pulse profile. Their sum, the total pulse profile, is therefore symmetric with respect to the same symmetry points. If the emission regions are not antipodal, however, the symmetry points of the two single-pole pulse profiles do not coincide (except for certain special displacements from the antipodal positions) and the total pulse profile is asymmetric. Given an observed asymmetric pulse profile, we can ask if it could possibly have been built up out of two symmetric contributions with symmetry points that do not coincide. If so, it must be possible to find two symmetric (and periodic) functions $f_1$ and $f_2$ with the pulse profile $f$ as their sum. By writing the observed pulse profile, defined by a certain number $N$ of discrete data points $f(\phi_k)$, as a Fourier sum and with an ansatz for $f_1$ and $f_2$ in the form of Fourier sums also, the following can easily be shown: For an arbitrary choice of symmetry points $\Phi_1$ and $\Phi_2$, there are two periodic functions $f_1$ and $f_2$, $f_1$ symmetric with respect to $\Phi_1$ and $f_2$ symmetric with respect to $\Phi_2$, such that $f=f_1 + f_2$, and the two symmetric functions are uniquely determined. 
Exceptions to this rule occur only if $(\Phi_1-\Phi_2)/\pi$ is a rational number. In this case the symmetric functions may not exist or, if they exist, may not be uniquely determined. It must also be noted that the symmetric functions obviously can only be determined up to a constant $C$, since $f_1 + C$ and $f_2 - C$ are also a solution if $f_1$ and $f_2$ are. Thus, in principle, every choice of a pair of symmetry points corresponds to a unique decomposition of any pulse profile into two symmetric contributions. For such a decomposition to be an acceptable solution, however, $f_1$ and $f_2$ also have to meet the following physical criteria in order to be interpreted as single-pole pulse profiles: 1. They must not have negative values, since they represent photon fluxes. 2. They must be reasonably simple and smooth. We do not expect the polar contributions to have a shape that is more complex than the pulse profile. In particular, modulations of the single-pole pulse profiles that cancel out in the sum are not compatible with the assumption of two independent and therefore uncorrelated emission regions. 3. They must conform to the energy dependence of the pulse profile. The decomposition can be done independently for pulse profiles in different energy ranges. Since the symmetry points are determined by the pulsar geometry, the same symmetry points must give acceptable decompositions according to criteria 1 and 2 in all energy ranges. Finally, the single-pole pulse profiles should show the same gradual energy dependence as the pulse profile. Given the existence of formal decompositions for all pairs of symmetry points and the criteria mentioned above, we are left with the two-dimensional parameter space of all possible values of $\Phi_1$ and $\Phi_2$, which we search for points with acceptable decompositions. For practical purposes, the parameters we use are the quantities $\Phi_1$ and $\Delta := \pi - (\Phi_1-\Phi_2)$.
The parameter space that contains every possible unique decomposition then is $0 \leq \Phi_1 \leq \pi$ and $0 \leq \Delta \leq \pi/2$. In the analysis of just one pulse profile there will in general be a number of different acceptable decompositions. This number may be significantly reduced by the energy dependence of the pulse profile. In general, the existence of symmetry points with acceptable decompositions in all energy channels is by no means guaranteed. If such a pair of symmetry points is found, then it is indeed possible to build up the observed pulse profile out of two symmetric contributions, and we can conclude that the analyzed data are compatible with the assumption that the asymmetry of the observed pulse profile is caused by the non-antipodal locations of the magnetic poles. The symmetric functions can be interpreted as the single-pole pulse profiles due to the two emission regions. A successful decomposition provides information both on the geometry and on the beam pattern. As to the geometry, we obtain a value for the parameter $\Delta$. This parameter is related to the locations of the emission regions on the neutron star (see Figure \[fig1\]). The beam pattern is related to the single-pole pulse profile via the geometric parameters, i.e., the location of the emission region on the neutron star and the direction of observation. Since these parameters are not known, one cannot directly deduce the beam pattern from the single-pole pulse profile. It can be shown, however, that an appropriate transformation of the single-pole pulse profile and the beam pattern turns the transformed single-pole pulse profile into a scaled, but undistorted copy of a section of the transformed beam pattern. Although the scaling factor is a geometric quantity and therefore not known, this still provides an intuitive understanding of what a section of the beam pattern must look like. 
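To make the decomposition step concrete, the linear solve behind it can be sketched in a few lines of Python (an illustration with NumPy; the function names, harmonic cutoff, and tolerance are ours, not taken from the original analysis code):

```python
import numpy as np

def decompose(f, Phi1, Phi2, kmax):
    """Split a sampled periodic profile f into f1 symmetric about Phi1
    and f2 symmetric about Phi2 (unique up to a constant C)."""
    N = len(f)
    c = np.fft.rfft(f) / N                    # complex Fourier coefficients c_k
    k = np.arange(1, kmax + 1)
    ck = c[1:kmax + 1]
    # Ansatz: c_k = A_k e^{-ik Phi1} + B_k e^{-ik Phi2} with real A_k, B_k.
    # The 2x2 system per harmonic is singular when sin(k (Phi1 - Phi2)) = 0,
    # i.e. exactly at the exceptional rational values of (Phi1 - Phi2)/pi.
    det = np.sin(k * (Phi1 - Phi2))
    A = -(ck.real * np.sin(k * Phi2) + ck.imag * np.cos(k * Phi2)) / det
    B = (ck.real * np.sin(k * Phi1) + ck.imag * np.cos(k * Phi1)) / det
    phi = 2 * np.pi * np.arange(N) / N
    f1 = c[0].real / 2 + 2 * (A * np.cos(np.outer(phi - Phi1, k))).sum(axis=1)
    f2 = c[0].real / 2 + 2 * (B * np.cos(np.outer(phi - Phi2, k))).sum(axis=1)
    return f1, f2                             # constant split: half of c_0 each

def acceptable(f, Phi1, Delta, kmax=8):
    """Criterion 1: both components must be non-negative photon fluxes."""
    Phi2 = Phi1 - (np.pi - Delta)             # from Delta = pi - (Phi1 - Phi2)
    f1, f2 = decompose(f, Phi1, Phi2, kmax)
    return f1.min() > -1e-9 and f2.min() > -1e-9
```

Scanning `acceptable` over the parameter space $0 \leq \Phi_1 \leq \pi$, $0 \leq \Delta \leq \pi/2$ reproduces the kind of exclusion described above; criteria 2 and 3 (simplicity and a consistent energy dependence) still have to be judged on the surviving decompositions.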
Since in the case of Her X-1 it is possible to eventually reconstruct the beam pattern, the information obtained at this stage mainly serves as a starting point for the next step of the analysis and we will not go into details about the transformation mentioned above. ### b. search for an overlap region and determination of the geometry {#b.-search-for-an-overlap-region-and-determination-of-the-geometry .unnumbered} In general, the two emission regions on the neutron star may or may not be equal (i.e., have the same beam pattern). If they are equal, this fact may be apparent in the single-pole pulse profiles in the following way. Since in general the rotation axis and the magnetic axis of the neutron star are not aligned, the viewing angle $\theta$ between the magnetic axis and the direction of observation of each emission region changes with rotation angle $\phi$. The range that $\theta$ can cover for each magnetic pole depends on the location of that pole on the neutron star and on the direction of observation, where $0^\circ \leq \theta_{\rm min} \leq \theta_{\rm max} \leq 180^\circ$. Only in the special case where both the magnetic axis and the direction of observation are perpendicular to the rotation axis does $\theta$ take all values between $0^\circ$ and $180^\circ$. Since the emission regions have different locations on the neutron star, their ranges of values of $\theta$ are different. Depending on the geometry, these two ranges for $\theta$ may overlap. For an ideal dipole configuration, for example, the condition for an overlap in the ranges of $\theta$ of the two poles is $\Theta_{\rm O}+\Theta_{\rm m}>\pi/2$, where $\Theta_{\rm O}$ is the angle between the rotation axis and the line of sight, and $\Theta_{\rm m}$ is the angle between the rotation axis and the magnetic axis. Consider an angle $\tilde{\theta}$ in the overlap region.
At some instant during the course of one revolution of the neutron star, at rotation angle $\phi$, one emission region is seen under the angle $\tilde{\theta}$. At a different instant, at rotation angle $\phi'$, the other emission region is seen under the same angle $\tilde{\theta}$. If the beam patterns of the two emission regions are identical, then the flux detected from one emission region at $\phi$ is equal to the flux detected from the other emission region at $\phi'$. Thus, if an overlap region exists, the corresponding part of the beam pattern shows up in both single-pole pulse profiles, though at different values of rotation angle. Since the single-pole pulse profiles can be transformed into undistorted (though scaled) copies of sections of the beam patterns, such a part of the beam pattern that shows up in both single-pole pulse profiles should be readily recognizable. Note that the occurrence and size of the overlap region depend on the geometric parameters and must therefore be the same for pulse profiles in different energy channels. If an overlap region is found in the single-pole pulse profiles obtained in the decomposition, this is an indication that there are two emission regions with identical beam patterns. Since each single-pole pulse profile provides a section of the beam pattern and the two sections overlap, we can then combine the two sections by superposing the overlapping parts. As a result we obtain the total visible section of the beam pattern. Superposing the overlapping parts of the two sections of the beam pattern amounts to determining the relation between the corresponding values $\phi$ and $\phi'$ of the rotation angle. On the other hand, the relation between $\phi$ and $\phi'$ can be expressed in terms of the unknown geometric parameters of the system. Thus, the superposition provides a constraint on the geometry.
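The viewing-angle ranges discussed here follow from the spherical-triangle relation $\cos\theta(\phi)=\cos\Theta_{\rm O}\cos\Theta_{\rm m}+\sin\Theta_{\rm O}\sin\Theta_{\rm m}\cos\phi$. A small sketch (Python/NumPy; the exactly antipodal two-pole case is our simplification of the nearly antipodal geometry considered in the analysis):

```python
import numpy as np

def theta_range(Theta_O, Theta_m):
    """Viewing-angle range [theta_min, theta_max] of a pole at polar angle
    Theta_m, seen from direction Theta_O (both measured from the rotation
    axis), using cos(theta) = cosTO*cosTm + sinTO*sinTm*cos(phi)."""
    c_max = np.cos(Theta_O - Theta_m)        # rotation phase of closest approach
    c_min = np.cos(Theta_O + Theta_m)        # rotation phase of farthest approach
    return np.arccos(c_max), np.arccos(c_min)

def overlap(Theta_O, Theta_m):
    """Overlap of the theta ranges of two exactly antipodal poles
    (polar angles Theta_m and pi - Theta_m); None if the ranges are disjoint."""
    lo1, hi1 = theta_range(Theta_O, Theta_m)
    lo2, hi2 = theta_range(Theta_O, np.pi - Theta_m)
    lo, hi = max(lo1, lo2), min(hi1, hi2)
    return (lo, hi) if lo < hi else None
```

For an aligned dipole the overlap is non-empty exactly when $\Theta_{\rm O}+\Theta_{\rm m}>\pi/2$; with, for instance, $\Theta_{\rm O}=83^\circ$ and a pole at $18^\circ$, both poles are seen under $\theta$ between roughly $79^\circ$ and $101^\circ$.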
Again omitting all details, we simply note the procedure for superposing the overlapping parts of the two sections of the beam pattern. The single-pole pulse profiles $f_1(\phi)$ with symmetry point $\Phi_1$ and $f_2(\phi)$ with symmetry point $\Phi_2$ are transformed into functions of a common variable $q$ through $\cos(\phi-\Phi_1) = q$ for $f_1$ and $\cos(\phi-\Phi_2)=(q-a)/b$ for $f_2$. The real numbers $a$ and $b>0$ are determined by means of a fit which minimizes the quadratic deviation between $f_1(q)$ and $f_2(q)$ in the overlap region. At this point the constant $C$, which determines how the unpulsed flux has to be distributed to the single-pole pulse profiles, can also be computed. Since $a$ and $b$ can be expressed in terms of the unknown geometric parameters of the pulsar, their best-fit values constitute constraints on the pulsar geometry. The results of this second step of the analysis are the total visible beam pattern as a function of $q$ and two constraints on the geometric parameters. The geometric information obtained so far (i.e., the values of $\Delta$, $a$, and $b$) is not quite sufficient in itself to completely determine the pulsar geometry. It needs to be supplemented by an independent determination of any one additional geometric parameter or by an additional constraint. We suggest that this supplement may be obtained by means of the assumption that the rotation axis of the neutron star is perpendicular to the orbital plane. In this case, the angle $\Theta_{\rm O}$ between the direction of observation and the rotation axis of the neutron star is given by the inclination of the orbital plane. The assumption of $\Theta_{\rm O}=i$ seems to be quite plausible since accreted mass also carries angular momentum from the massive companion, and this transfer is expected to align the rotation axes of the binary stars on a timescale short compared to the lifetime of the system. However, this assumption need not hold true for all binary systems.
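A brute-force version of this fit can be sketched as follows (illustrative Python; the original authors do not specify their minimizer, and the grid search, overlap threshold, and names here are ours):

```python
import numpy as np

def fit_ab(f1, f2, Phi1, Phi2, a_grid, b_grid):
    """Superpose the two beam-pattern sections: find a, b > 0 such that
    f1 at q = cos(phi - Phi1) and f2 at q = a + b*cos(phi - Phi2)
    agree (up to a constant 2C) where their q ranges overlap."""
    N = len(f1)
    phi = 2 * np.pi * np.arange(N) / N
    q1 = np.cos(phi - Phi1)
    order = np.argsort(q1)                    # tabulate f1 as a function of q
    q1s, f1s = q1[order], f1[order]
    cos2 = np.cos(phi - Phi2)
    best = (np.inf, None, None, None)
    for a in a_grid:
        for b in b_grid:
            q2 = a + b * cos2
            inside = (q2 >= q1s[0]) & (q2 <= q1s[-1])
            if inside.sum() < 8:              # require a usable overlap
                continue
            d = f2[inside] - np.interp(q2[inside], q1s, f1s)
            C = d.mean() / 2                  # constant flux redistribution
            res = ((d - 2 * C) ** 2).mean()   # quadratic deviation
            if res < best[0]:
                best = (res, a, b, C)
    return best                               # (residual, a, b, C)
```

On synthetic single-pole profiles generated from one common beam pattern, the minimum of the residual recovers the $(a, b)$ used to generate them, and the mean offset in the overlap fixes the flux split $C$.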
With the inclination substituted for $\Theta_{\rm O}$, the analysis of the pulse profiles determines the positions of the emission regions on the neutron star. Once the pulsar geometry is known, we also obtain the equation relating the auxiliary variable $q$ and the viewing angle $\theta$, so that the reconstructed beam pattern can be transformed into a function of $\theta$. However, it turns out that the relation between $q$ and $\theta$ involves an ambiguity which cannot be resolved within this analysis. It is due to the fact that we are not able to relate a single-pole pulse profile to one of the two emission regions. Therefore, we obtain two different possible solutions for the beam pattern and a choice between them must be based on either theoretical considerations and model calculations, or on additional information on the source. The Data {#data} -------- The analysis presented in this paper is based on pulse profiles of the main-on and short-on states of Her X-1. The analyzed sample contains a total of 148 pulse profiles from 20 different observations. References, the platform of the detectors, year of observation, the total energy range, the state of the 35-day cycle, the number of separate observations and the total number of pulse profiles of the respective observations are listed in Table \[tab1\]. The data reduction including background subtraction has been done by the respective authors. In order to compare the pulse profiles from different observations, the pulse profiles of the main-on have been aligned in phase so that their common features match best. Since the pulse profiles of the short-on are markedly different, their features have been aligned with respect to the main-on as suggested by Deeter et al. (1998). At energies below 1 keV, the pulses of Her X-1 have a sinusoidal shape which is interpreted as reprocessed hard X-radiation at the inner edge of the accretion disk (McCray et al. 1982). 
Since the origin of these soft X-rays is not the region near the magnetic poles, the analysis is restricted to higher energies. Above 1 keV the pulse profiles of Her X-1 are highly asymmetric and their typical energy dependence has been examined in a variety of studies (see Deeter et al. 1998, and references therein). In the analysis, the pulse profiles are written as Fourier series. Since the higher Fourier coefficients are presumably affected by aliasing and also may have fairly large statistical errors, the highest coefficients are set to zero. This has a smoothing effect that depends on the number of coefficients set to zero. An example of the typical energy dependence of the pulse profiles and their representation in the analysis is given in the top row of Figure \[fig2\]. It shows pulse profiles in three different energy ranges of an EXOSAT observation (Kahabka 1987) during the main-on state. The observed pulse profiles are plotted with crosses. The profiles plotted as solid lines are inverse Fourier-transformed using 32 out of originally 60 Fourier coefficients. Results ------- ### a. decomposition into single-pole pulse profiles {#a.-decomposition-into-single-pole-pulse-profiles-1 .unnumbered} In a first run, the decomposition method has been simultaneously applied to the 103 pulse profiles of the 15 observations of the main-on state. Due to the large number of distinct pulse shapes and to their relatively low level of unpulsed flux, the positive flux criterion has led to an exclusion of about 90% of the whole parameter space of possible symmetry points $\Phi_1$ and $\Phi_1+\Delta$. Further sorting out the decompositions (i.e., the single-pole pulse profiles) that are qualitatively too complicated to meet the criterion of two independent emission regions left only one type of decomposition. The energy dependence of this type of decomposition is as smooth as that of the pulse profiles.
Thus, we have found acceptable decompositions in a small range of $\Phi_1$ and $\Phi_1+\Delta$ which are all of the same type. This type of decomposition is unique in the sense that a small deviation from the 'best values' of the symmetry points results in decompositions that look similar but become more and more complicated the larger the deviation becomes, until they no longer meet the physical criteria. A systematic variation of the best values of the symmetry points, which could be caused by free precession of the neutron star, is not observed. The lower panels in Figure \[fig2\] show the decompositions of the typical pulse profiles of the respective top panels. The unpulsed flux has been distributed to the single-pole pulse profiles according to the constant $C$ as derived in the second step of the analysis (see § \[method\]). The single-pole pulse profiles show that the energy dependence of the pulse profiles is mainly due to the change of one polar contribution (dashed curve) where an additional peak appears above 10 keV, whereas the pulse shape of the other pole (solid curve) does not change much. Interestingly, the contributions of the emission regions we obtain look very similar to those of Panchenko & Postnov (1994) obtained from a model calculation mentioned in § \[intro\]. Similar components were also obtained by Kahabka (1987) in an attempt to model the observed pulse shapes by means of 3 to 5 Gaussians, a sinusoidal component and a constant flux. Extending the analysis to the short-on and the turn-on of the main-on, we also find acceptable decompositions in the same range of the symmetry points as in the main-on. Since the pulses of the short-on state have quite a different shape compared to the main-on, their decompositions look different as well. An example of a typical short-on pulse profile and its decomposition is given in Figure \[fig3\]. ### b.
search for an overlap region and determination of the geometry {#b.-search-for-an-overlap-region-and-determination-of-the-geometry-1 .unnumbered} In the next step of the analysis, a two-parameter fit has been applied to the decompositions in order to find out whether there is a range where the shapes of the polar contributions match. For the 62 pulse profiles of 8 of the main-on observations, we have found a set of fit parameters which correspond to an overlap range where the two curves of each decomposition match astonishingly well if the statistical errors of the data are taken into account. This is shown in Figure \[fig4\] where the solid (dashed) curve corresponds to the respective single-pole pulse profile in Figure \[fig2\]. The typical errors ($\pm\sigma$) as derived from error propagation of the statistical errors of the data are indicated in the upper right corner of each panel. The range where the curves overlap corresponds to values of the viewing angle $\theta$ under which both emission regions are seen during the course of one revolution of the neutron star. Introducing a scaling factor as an additional fit parameter, we have achieved acceptable fits for the 15 pulse profiles of another three main-on observations. These profiles are further discussed in §\[35d\]. No acceptable fits were achieved only for four observations at late phases of the main-on, when the flux had already dropped to less than 60% of the maximum flux of the respective 35-day cycle. The dependence of the results for the location of the magnetic poles on the direction of observation $\Theta_{\rm O}$ is shown in Figure \[fig5\]. Assuming that $\Theta_{\rm O}$ is equal to the inclination $i$ of the system and adopting $i=83^{\circ} (\pm4^{\circ})$ (Kunz 1996, private communication), we obtain the polar angles of the magnetic poles $\Theta_1 \approx 18^{\circ}$ and $\Theta_2 \approx 159^{\circ}$ with an offset from antipodal positions of $\delta < 5^{\circ}$ (see Figure \[fig1\]).
The small value obtained for $\delta$ confirms the assumption that a fairly small distortion of the magnetic dipole field is enough to explain the considerable asymmetry of the pulse profiles of Her X-1. The error bars at $\Theta_{\rm O}=83^{\circ}\pm4^{\circ}$ demonstrate how little the best-fit parameters determined for different pulse profiles vary. The remaining ambiguity in the determination of the beam pattern is indicated by the different units of the lower ($\theta_+$) and the upper ($\theta_-$) x-axis in Figure \[fig4\]. However, as discussed in § \[35d\], the study of the evolution of the pulse profile with the 35-day cycle indicates that the $\theta_+$-solution is presumably the correct one. The beam pattern of the emission regions has been reconstructed in the range $66^{\circ}<\theta_+<116^{\circ}$ or $64^{\circ}<\theta_-<114^{\circ}$ (for $\Theta_{\rm O}=83^{\circ}$). The emission regions are unobservable under values of $\theta_+$ ($\theta_-$) outside this range. Concerning the decompositions of the short-on pulse profiles, fits of a quality similar to those found for the main-on are not found. Additionally, the values of the best-fit parameters are different from each other and different from those of the main-on. The same holds true for the pulse profiles of the observation of the turn-on, which unfortunately ended when the flux had reached only about 2/3 of the maximum flux of this 35-day cycle, as the lightcurve of the All-Sky-Monitor (ASM) onboard the Rossi X-ray Timing Explorer (RXTE) shows (Wilms 1999, private communication). Interpretation {#discussion} ============== The results show that the pulse profiles of Her X-1 are compatible with the idea that the beam pattern is symmetric and that a distorted dipole field is responsible for the asymmetry of the pulse profiles. The analysis does not permit us to discriminate between exact symmetry and a small asymmetry of the beam pattern.
In the case of a small asymmetry, test calculations suggest that the beam pattern derived above can be regarded as a fair approximation to the azimuthally averaged beam pattern. Considering the above results, a large asymmetry of the beam pattern seems unlikely. A prominent asymmetry of the pulse profiles that is primarily due to an asymmetric beam pattern cannot in general be mimicked by displaced symmetric emission regions, because one choice of displacement will hardly produce simple and smooth 'false-symmetric constituents' for many different energies and luminosities with their respective distinct asymmetric pulse shapes. However, the possibility that the asymmetry of the pulse profiles of Her X-1 is primarily due to an asymmetric beam pattern cannot be rigorously excluded. With this caveat in mind, we will in this section discuss the consequences of the reconstructed symmetric beam pattern. Beam Pattern {#char} ------------ The beam pattern can also be plotted as a polar diagram with the magnetic axis ($\theta = 0^{\circ}$) as symmetry axis. This is done in Figure \[fig6\]. It shows the $\theta_+$-solution for the beam pattern in the energy ranges 6.0 - 8.3 keV (solid line) and 20.0 - 23.0 keV (dashed line). In the overlap range the mean values of the single-pole contributions are plotted. Each beam pattern is normalized so that the total power emitted into the observable solid angle is unity. The $\theta_-$-solution can be obtained by turning the diagram upside down. No information on the beam pattern is available in the shaded regions. The visibility of the emission region up to an angle of at least $116^{\circ}$ is due to a lateral extension of the emission region along the neutron star surface, emission of radiation from the plasma at a certain height above the pole, and relativistic light deflection near the neutron star surface.
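The contribution of light deflection alone can be estimated with a hypothetical point source at the pole and the simple linearized approximation to Schwarzschild light bending, $1-\cos\vartheta \approx (1-\cos\theta)\,(1-r_{\rm s}/r_{\rm n})$ (our shortcut for illustration; the transformation described below follows from the exact ray integration):

```python
import numpy as np

def intrinsic_angle(theta, rn_over_rs=2.8):
    """Approximate emission angle (rad, w.r.t. the magnetic axis) for a
    point source at the pole whose ray reaches the observer at asymptotic
    angle theta.  Uses the linearized Schwarzschild bending relation
    1 - cos(vartheta) ~ (1 - cos(theta)) * (1 - rs/rn); the exact
    transformation requires integrating the photon trajectory."""
    u = 1.0 / rn_over_rs                      # rs / rn
    cosv = 1.0 - (1.0 - np.cos(theta)) * (1.0 - u)
    return np.arccos(np.clip(cosv, -1.0, 1.0))
```

For $r_{\rm n}/r_{\rm s}=2.8$, a ray seen at the asymptotic angle $\theta = 116^{\circ}$ leaves the surface at only $\vartheta \approx 86^{\circ}$ in this approximation, which illustrates why the emission regions remain visible well beyond $\theta = 90^{\circ}$.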
We can get an idea of the effect of light deflection if we imagine the emission to be originating from a hypothetical point source located at the pole of the neutron star. With an assumption about the ratio of the radius of the neutron star $r_{\rm n}$ to its Schwarzschild radius $r_{\rm s}$, the asymptotic angle $\theta$ under which the magnetic axis is seen by the distant observer can be transformed into the intrinsic angle $\vartheta$ under which the radiation is emitted from the point source (see Figure \[fig7\]). Figure \[fig8\] shows how this transformation changes the asymptotic beam pattern of Figure \[fig6\] for $r_{\rm n}/r_{\rm s}=2.8$. Again the emission pattern is normalized to have an integrated power of unity. It also illustrates the necessity of taking the effects of relativistic light deflection into account when modelling the emission regions, as has been previously pointed out by other authors (e.g. Nollert et al. 1989). All beam patterns obtained from the various observations exhibit the same basic structure and energy dependence. Only the relative sizes of their substructures differ. The overall structure is quite complex, as can be seen from the representative beam patterns in Figure \[fig6\]. It has an increasing component towards the direction of the magnetic axis. Near the highest angles of the visible range the flux has a maximum at $\theta_+ \approx 108^\circ$. These major components can be interpreted as a pencil-beam and a fan-beam, respectively. The relative size of the fan-beam component decreases with increasing energy. Another relatively small feature occurs at $\theta_+ \approx 80^\circ$. Above 15 keV the beam pattern has an additional increasing component at $\theta_+ > 114^\circ$. This feature seems to become dominant above 28 keV and might therefore be responsible for the observed widening of the main peak of the pulse profile in this energy range (Soong 1990a, Kuster 1998, private communication).
The occurrence of such a feature in this energy regime indicates a possible relation with electron cyclotron absorption at about 40 keV, favoured by many authors (e.g. Gruber et al. 1999). Unfortunately, the energy resolution and the statistics of the data available to us that cover the range above 30 keV and are suitable for the analysis were not good enough to give insight into this property of the beam pattern. The beam patterns describe the flux as a function of viewing angle $\theta$ in the various energy ranges. The results can also be plotted as energy-dependent spectra showing the flux as a function of energy for various viewing angles. The left panel in Figure \[fig9\] shows 12 beam patterns in the energy range between 0.92 and 26.0 keV obtained from pulse profiles of an EXOSAT observation (Kahabka 1987). Since in the pulse profiles the response of the detectors is not considered, the flux of the beam patterns derived from the pulse profiles is normalized at the arbitrarily chosen angle $\theta_+ \approx 90^\circ$ (indicated by an arrow). The angular range is divided into four sections in which the main features of the beam patterns are located. The other panels of Figure \[fig9\] show spectra at various viewing angles $\theta_+$. Due to the normalization, the spectrum at the angle of normalization is just a horizontal line and all other spectra are relative to that particular one. Each spectrum contains two curves corresponding to the ME-Argon and the ME-Xenon proportional counters of EXOSAT. It can be easily seen that the spectra in section III, interpreted as fan-beam, are very soft compared to the spectra in section I, interpreted as pencil-beam. The spectra in section IV, interpreted as the high-energy feature mentioned above, are even harder than those in section I. It should be pointed out that there is a great difference between the kind of spectra presented here and spectra obtained from pulse phase spectroscopy.
At a particular pulse phase, the poles are generally seen under different viewing angles and the spectra from both poles are always superposed. Nevertheless, we can identify the sections in the beam pattern that are responsible for features in the pulse profile. Then we compare the spectra in these sections with spectra at the phases where the corresponding features in the pulse profile occur. For example, section III of the beam pattern corresponds to the maxima of the second single-pole contribution around phases 0.15 and 0.45 (see Figure \[fig2\]), which are responsible for the shoulder in the leading edge and the secondary maximum in the trailing edge of the peak of the pulse profile. The hardness ratio at these parts of the pulse profile is relatively low (see, e.g., Deeter et al. 1998), which is consistent with the soft spectra in section III. On the other hand, the hardness ratio at the peak of the pulse profile is very high, corresponding to sections I and IV where the spectra are relatively hard as well. Due to the anisotropy of the beam pattern, the flux depends on the viewing angle and therefore on the location of the poles and the inclination of the system. The observed luminosity of the pulsar therefore also depends on the geometry and the inclination. Since we expect other pulsars to have similar anisotropic beam patterns but different geometries and inclinations, the fact that none has a luminosity $L_{\rm x} \gg 10^{38}$ erg/s indicates that the trend of the flux to increase towards the direction of the magnetic axis can be expected to reverse at small viewing angles. This would be consistent with the picture of the radiation escaping into the direction of the magnetic axis being blocked due to electron cyclotron absorption.
Since the components identified in the energy-dependent beam pattern and the corresponding parts of the pulse profile directly reflect the properties of the processes of the emission regions, the beam pattern should be further compared with emission models. Evolution of the Pulse Profile with the 35-day Cycle {#35d} ---------------------------------------------------- The evolution of the pulse profile with the 35-day cycle has been studied intensively by many authors (e.g. Kahabka 1987, Ögelman & Trümper 1988, Soong et al. 1990b, Scott 1993). Deeter et al. (1998) summarize the observations establishing that the changes in pulse profile throughout the course of a 35-day cycle are systematic. Several attempts have been made to explain the change of the pulse shape with the 35-day phase (e.g. Bai 1981, Trümper et al. 1986, Petterson et al. 1991). In this section, we discuss a scenario in which the column densities along the lines of sight onto the poles are different due to a partial obscuration of the neutron star by the inner edge of the accretion disk. This results in a different attenuation of the polar contributions. As observed in a two-day-long continuous monitoring by RXTE, the pulse shape of Her X-1 does not change significantly during turn-on, whereas the spectra show strong photoelectric absorption (Kuster et al. 1998). This is in contrast to the behaviour during the decline of the main-on, when the pulse shape undergoes systematic changes while no spectral changes are prominent (Deeter et al. 1998, and references therein). The observations concerning the spectral behaviour can be explained in terms of a twisted and tilted accretion disk (Schandl & Meyer 1994). At turn-on, the outer edge of the warped disk recedes from the line of sight to the neutron star, whereas at the end of the main-on the inner edge sweeps into the line of sight.
Since the obscuring material at the outer edge of the disk is relatively cool compared to the very dense material at the inner edge, photoelectric absorption is only present during turn-on. By taking into account the scale heights of the corresponding parts of the disk, a warped disk profile also provides a mechanism to explain the different behaviour of the pulse shape. The density gradient in the obscuring material at the outer edge of the disk is relatively small. Thus, the radiation emerging from both polar regions experiences the same absorption, and the pulse profile does not change appreciably during the early stages of the main-on. Since, on the other hand, the scale height of the inner edge of the disk is comparable to the size of the neutron star, the poles become obscured successively towards the end of the main-on. Therefore, the radiation from one pole becomes more attenuated with respect to the other, leading to changes in the pulse profile. This situation is schematically illustrated in Figure \[fig10\]. The different attenuation of the radiation from the poles is apparent in the decompositions. Figure \[fig11\] shows two pulse profiles of an EXOSAT observation (Kahabka 1987) during one 35-day cycle at $\Psi_{35}=0.136$ near maximum intensity (solid) and at $\Psi_{35}=0.234$ during the decay phase of the main-on state (dotted). The shoulder in the leading edge and the secondary maximum in the trailing edge of the peak are less prominent in the decay phase. We find that we can model the pulse profile at the end of the main-on state with the decompositions found for the pulse profile at maximum intensity by scaling one component with respect to the other. The pulse shape plotted with crosses in Figure \[fig11\] is reproduced from the decompositions of the pulse shape at maximum intensity by scaling the second component by a factor of 0.7 and adding the unscaled first component.
It indeed closely resembles the features of the pulse profile in the decay phase. We conclude that during the decay phase the neutron star was partly obscured. The fact that the second component has to be scaled simply means that the radiation from the second pole is attenuated more than the radiation from the first pole. Therefore the second pole must be located on that side of the neutron star which is on the opposite side of the accretion disk with respect to the observer. This enables us to decide between the $\theta_-$- and the $\theta_+$-solution discussed in section \[results\]. It follows that the second component must correspond to the higher values of the viewing angle $\theta$ and therefore the $\theta_+$-solution must be the correct one. In a previous analysis of pulse profiles of the X-ray binary Cen X-3 (Kraus et al. 1996), we have also found unique decompositions, and the beam patterns and their energy dependence are quite similar to those of Her X-1. But we were not able to decide between the $\theta_+$- and the $\theta_-$-solution. However, the similarity of the beam patterns suggests that the $\theta_+$-solution is the correct one for this pulsar, too. Many authors have noted a narrowing of the main peak during the decay phase of the 35-day cycle (see Kunz 1996). Different attenuation of the components obtained in the analysis provides a natural explanation of this behaviour of the pulse profile. As the amount of matter along the lines of sight increases, not only attenuation but also scattering of radiation from other directions into the line of sight increases. This leads to an increasing fraction of scattered flux in the pulse profile and the pulsed fraction [^1] decreases. Other processes that lead to an increase of unpulsed flux are reprocessing of the direct beams by the interposed material or reflection from the disk (McCray et al. 1982).
In other words the pulsed fraction is an indicator of the fraction of radiation that is coming directly from the polar regions. A pulse profile which contains a large fraction of scattered flux will have a small pulsed fraction. From such a pulse profile we cannot expect to be able to reconstruct the beam pattern. Figure \[fig12\] shows that indeed the pulse profiles for which an acceptable fit has been found (denoted by filled symbols) are just those with a high pulsed fraction. The fact that the analysis of the pulse profiles of the short-on state has not led to acceptable fits can then be understood in terms of the low value of their pulsed fraction. Accounting for possible attenuation, the components can be scaled in the fit procedure. For the 15 pulse profiles of three main-on observations with a flux of about 70% of the typical maximum flux of the main-on state, this leads to a significant decrease of the deviation $\lambda^2_{\rm red}$ [^2] between the two curves. These pulse profiles typically have a medium pulsed fraction. Observations show that the spectral behaviour at X-ray turn-on is similar for the short-on and main-on states and that the pulse shape also changes during short-on (Deeter et al. 1998). This suggests that the configuration of the disk causing the spectral behaviour and the evolution of the pulse profile described above in the case of the main-on state is similar during short-on. Thus the outer part of the disk is responsible for the turn-on of the short-on state, whereas it ends when the inner edge of the disk passes into the line of sight.

This work has been supported by the Deutsche Forschungsgemeinschaft (DFG).

Bai, T. 1981, , 243, 244
Basko, M. M., & Sunyaev, R. A. 1975, , 42, 311
Bildsten, L., Chakrabarty, D., Chiu, J., Finger, M. H., Koh, D. T., Nelson, R. W., Prince, T. A., Rubin, B. C., Scott, D. M., Stollberg, M., Vaughan, B. A., Wilson, C. A., and Wilson, R. B. 1997, , 113, 367
Deeter, J. E., Boynton, P. E., and Pravdo, S. H. 1981, , 247, 1003
Deeter, J. E., Scott, D. M., Boynton, P. E., Miyamoto, S., Kitamoto, S., Takahama, S., and Nagase, F. 1998, , 502, 802
Gruber, D. E., Heindl, W. A., Rothschild, R. E., Staubert, R., Wilms, J., and Scott, D. M. 1999, in: “Highlights in X-Ray Astronomy in Honour of Joachim Trümper’s 65th birthday”, eds. B. Aschenbach & M. J. Freyberg, MPE Report 272, 33
Kahabka, P. 1987, PhD thesis, TU München, MPE Report 204
Kraus, U., Blum, S., Schulte, J., Ruder, H., and Mészáros, P. 1996, , 467, 794
Kraus, U., Nollert, H.-P., Ruder, H., and Riffert, H. 1995, , 450, 763
Kunz, M. 1996, , S120, 231
Kuster, M., Wilms, J., Blum, S., Staubert, R., Gruber, D., Rothschild, R., and Heindl, W. 1998, Astrophys. Lett. Comm., 38, 161
Leahy, D. A. 1991, , 251, 203
Leahy, D. A., & Li, L. 1995, , 277, 1177
McCray, R. A., Shull, J. M., Boynton, P. E., Deeter, J. E., Holt, S. S., and White, N. E. 1982, , 262, 301
Nollert, H.-P., Ruder, H., Herold, H., and Kraus, U. 1989, , 208, 153
Ögelman, H., & Trümper, J. 1988, Mem. Soc. Astron. Italiana, 59, 169
Panchenko, I. E., & Postnov, K. A. 1994, , 286, 497
Petterson, J. A., Rothschild, R. E., and Gruber, D. E. 1991, , 378, 696
Riffert, H., Nollert, H.-P., Kraus, U., and Ruder, H. 1993, , 406, 185
Schandl, S., & Meyer, F. 1994, , 289, 149
Scott, D. M. 1993, PhD thesis, University of Washington
Soong, Y., Gruber, D. E., Peterson, L. E., and Rothschild, R. E. 1990a, , 348, 634
Soong, Y., Gruber, D. E., Peterson, L. E., and Rothschild, R. E. 1990b, , 348, 641
Tananbaum, H., Gursky, H., Kellog, E. M., Levinson, R., Schreier, E., and Giacconi, R. 1972, , 174, L143
Trümper, J., Kahabka, P., Ögelman, H., Pietsch, W., and Voges, W. 1986, , 300, L63
Trümper, J., Pietsch, W., Reppin, C., Voges, W., Staubert, R., and Kendziorra, E. 1978, , 219, L105
Wang, Y.-M., & Welter, G. L. 1981, , 102, 97

[^1]: $\mbox{pulsed fraction}=1-\frac{\mbox{minimum flux of pulse profile}} {\mbox{mean flux of pulse profile}}$

[^2]: $\lambda^2_{\rm red}=\frac{1}{N-\nu} \sum_{i=1}^{N}\left(f_1(i)-f_2(i)\right)^2$, where $\nu$ is the number of fit parameters
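The two footnote definitions can be written out directly as code; a minimal sketch with made-up profile values (the arrays are illustrative, not observed fluxes):

```python
def pulsed_fraction(profile):
    # pulsed fraction = 1 - (minimum flux) / (mean flux) of the pulse profile
    return 1.0 - min(profile) / (sum(profile) / len(profile))

def lambda2_red(f1, f2, nu):
    # reduced deviation between two profiles; nu = number of fit parameters
    n = len(f1)
    return sum((a - b) ** 2 for a, b in zip(f1, f2)) / (n - nu)

pf = pulsed_fraction([1.0, 2.0, 4.0, 3.0, 2.0])          # toy profile, mean 2.4
dev = lambda2_red([1.0, 2.0, 3.0], [1.0, 2.0, 4.0], nu=1)
```

A profile dominated by unpulsed (scattered) flux has its minimum close to its mean, so `pulsed_fraction` tends toward zero, which is why such profiles carry little information about the beam pattern.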
--- abstract: 'We present a new polynomial-free prolongation scheme for Adaptive Mesh Refinement (AMR) simulations of compressible and incompressible computational fluid dynamics. The new method is constructed using a multi-dimensional kernel-based Gaussian Process (GP) prolongation model. The formulation for this scheme was inspired by the GP methods introduced by A. Reyes *et al.* \[A New Class of High-Order Methods for Fluid Dynamics Simulation using Gaussian Process Modeling, Journal of Scientific Computing, 76 (2017), 443-480; A variable high-order shock-capturing finite difference method with GP-WENO, Journal of Computational Physics, 381 (2019), 189–217\]. In this paper, we extend the previous GP interpolations/reconstructions to a new GP-based AMR prolongation method that delivers a high-order accurate prolongation of data from coarse to fine grids on AMR grid hierarchies. In compressible flow simulations, special care is necessary to handle shocks and discontinuities in a stable manner. To this end, we utilize the shock-handling strategy based on the GP-based smoothness indicators developed in the previous GP work by A. Reyes *et al.* We demonstrate the efficacy of the GP-AMR method in a series of test problems using the AMReX library, in which the GP-AMR method has been implemented.' address: - 'Department of Applied Mathematics, The University of California, Santa Cruz, CA, United States' - 'Flash Center for Computational Science, Department of Astronomy & Astrophysics, The University of Chicago, Chicago, IL, United States' - 'Mathematics and Computer Science, Argonne National Laboratory, Argonne, IL, United States' - 'Department of Physics and Astronomy, University of Rochester, Rochester, NY, United States' - 'Laboratory for Laser Energetics, University of Rochester, Rochester, NY, United States' author: - 'Steven I.
Reeves' - Dongwook Lee - Adam Reyes - Carlo Graziani - Petros Tzeferacos bibliography: - 'mybibfile\_merged.bib' title: 'An Application of Gaussian Process Modeling for High-order Accurate Adaptive Mesh Refinement Prolongation' --- Adaptive Mesh Refinement; Prolongations; High-order methods; Gaussian processes; Computational fluid dynamics; Introduction {#sec:introduction} ============ Since the dawn of the computer era for science and engineering, the primary role of computational fluid dynamics (CFD) has been to advance our theoretical understanding. Through experimentation, a wide range of parameter spaces is designed and modeled in computer simulations. As such, computer-aided simulations target complex physical conditions in various degrees of disparity adequate to users’ specific theoretical models. As increasingly complex systems are considered for better computer modeling, modern simulation codes face increasingly versatile challenges to meet expected metrics in a possibly vast parameter space. These complex simulations need to be interpretable as *physically valid* models, at least in an approximate sense. In the fields of geophysics, astrophysics, and laboratory plasma astrophysics, simulations have become essential to characterizing and understanding complex processes (e.g.,  [@glatzmaiers1995three; @jordan2008three; @tzeferacos2015flash; @meinecke2014turbulent]). CFD has been (and will continue to be) an indispensable tool to improve our capabilities to investigate conditions where simplified theoretical models inadequately capture the correct physical behavior and experiments can be prohibitively expensive or too observationally difficult to be the sole pathways for discovery.
In these simulations flow conditions can develop in which the physics becomes extremely challenging to simulate due to significant imbalances in length and temporal scales. To alleviate such conditions in computer simulations, practitioners have explored approaches by which a computer simulation can focus on localized flow regions when the dynamics exhibit confined features that evolve on a much shorter length scale relative to the flow dynamics on the rest of the computational domain. Adaptive mesh refinement (AMR) is one such approach that allows a local and dynamic change in the grid resolution of a simulation in space and time. Since the 1980s, AMR has been an exceptional tool and has become a powerful strategy in computational fluid dynamics (CFD) simulations for computational science across many disciplines such as astrophysics, geophysics, atmospheric sciences, oceanography, biophysics, engineering, and many others [@plewa2005adaptive]. There have been many advancements in AMR since the seminal paper by Berger and Oliger [@amr_orig]. In their paper, the primary concern was to focus on a strategy for generating subgrids and managing the grid hierarchy for scalar hyperbolic PDEs in one and two spatial dimensions (1D and 2D). In the subsequent work by Berger and Colella [@Berger], further improvements were made possible for numerical solutions of the 2D Euler equations to provide a robust shock-capturing AMR algorithm that satisfies the underlying conservation property on large-scale computer architectures. The novel innovations in their work have now become the AMR standards, namely refluxing (or flux correction) between fine-coarse interface boundaries, conservative (linear) prolongation and restriction on AMR hierarchies, and timestep subcycling. Bell *et al.* extended the preceding 2D AMR algorithms of [@amr_orig; @Berger] to a 3D AMR algorithm and applied it to solve 3D hyperbolic systems of conservation laws [@amr3d].
They demonstrated that the AMR algorithm reduced the computational cost by more than a factor of 20 relative to the equivalent uniform grid simulation of a 3D dense cloud problem interacting with a Mach 1.25 flow on a Cray-2. This is, by far, the main benefit of using AMR, particularly in large 3D simulations, in that one could gain such a computational speed-up by focusing computational resources on the dynamically interesting regions of the simulation. On the other hand, AMR can be expected to become more computationally expensive relative to a uniform grid solution using high-order (4th or higher) PDE solvers, as a significant fraction of the computational domain becomes dominated by small-scale structures. Simulations using the traditional second-order AMR schemes (i.e., second-order PDE solutions solved on AMR grids) could then become computationally more expensive than the compared uniform grid (non-AMR) simulations using high-order PDE solvers, particularly when a large fraction of the computational domain contains fine-scale structures such as vortices, eddies, rotating flows, turbulence, etc. In this case, one should rely on high-order PDE solvers on uniform grids to get the best computational results, by which small-scale flow features are better resolved on a given “static” grid than in the second-order AMR calculation. Indeed, Jameson [@jameson2003amr] estimated that small-scale features, such as shocks or vortices, should not exceed more than a third of the computational domain in order for low-order AMR schemes to be computationally competitive.
Also see the study on the effectiveness of AMR in atmospheric simulations [@ferguson2016analyzing]. Closely related is the consistency between calculations on AMR and uniform grids. Mathematically speaking, AMR calculations should converge to the corresponding uniform grid solutions in the limit of grid convergence; otherwise, exercising AMR for the purpose of gaining computational efficiency is to no avail. In a numerical comparison study, Schmidt *et al.* [@schmidt2015influence] considered conditions upon which statistical agreement can be achieved between AMR and uniform grid calculations at the same effective grid resolutions. Modern AMR implementations may be categorized into two main types: structured and unstructured. Unstructured AMR, and unstructured meshes in general, are very useful for problems with irregular geometry (e.g., many structural engineering problems), but are often computationally complex and difficult to handle when regridding. On the other hand, structured AMR (SAMR, or block-structured AMR) offers practical benefits (over unstructured) such as ease of discretization, a global index space, accuracy gain through cancellation terms, and ease of parallelization. In block-structured AMR, the solution to a PDE is constructed on a hierarchy of levels with different resolution. Each level is composed of a union of logically rectangular grids or patches. These patches can change dynamically throughout a simulation. In general, patches need not be of fixed size, and may not have one unique parent grid. Figure \[fig:amr\] illustrates the use of AMR in a block-structured environment. ![Multiple levels in a block-structured AMR grid hierarchy \[fig:amr\]](./amrex_multilevel-eps-converted-to.pdf){width="11cm"} The approach presented by Berger and Oliger [@amr_orig] and Berger and Colella [@Berger] has set the foundation on the patch-based SAMR.
An alternative to the patch-based formulation is the octree-based approach which has evolved into the fully-threaded tree (FTT) formalism (or cell-based) of Khokhlov [@khokhlov1998fully] and the block-based octree of MacNeice *et al.* [@macneice2000paramesh] & van der Holst *et al.* [@van2007hybrid]. Such AMR methods have gained popularity over the past 30 years and have been adopted by various codes in astrophysics. Some of the well-known examples implementing the patch-based AMR include AstroBEAR [@cunningham2009simulating], ENZO [@bryan2014enzo], ORION [@klein1999star], PLUTO [@mignone2011pluto], CHARM [@miniati2011constrained], CASTRO [@almgren2010castro], and MAESTRO [@nonaka2010maestro]; the octree-based AMR has been implemented in FLASH [@fryxell2000flash; @dubey2009extensible], NIRVANA [@ziegler2008nirvana], and BATS-R-US [@powell1999solution; @glocer2009multifluid]; the FTT AMR in RAMSES [@teyssier2002cosmological] and ART [@kravtsov1997adaptive]. The AMRVAC code [@keppens2003adaptive] features both the patch-based and octree-based AMR schemes. In contrast to these codes that incorporate AMR with the purpose of delivering specific applications in astrophysics, other frameworks have pursued a more general functionality. Examples include PARAMESH [@macneice2000paramesh], which supplies solely the octree-based block-structured mesh capability independent of any governing equations; AMReX [@amrex] is another standalone grid software library that provides the patch-based SAMR support; Chombo [@colella2009chombo; @adams2015chombo] and SAMRAI [@hornung2002managing], on the other hand, supply both AMR capabilities and broader support for solving general systems of hyperbolic, parabolic, and elliptic partial differential equations (PDEs). A more comprehensive survey on the block-structured AMR frameworks can be found in [@dubey2014survey].
Recently, there have been many notable efforts aimed at designing high-order accurate solvers for governing systems of equations (e.g., [@ray2007using; @mccorquodale2015adaptive; @zhang2011order; @buchmuller2014improved; @mccorquodale2011high; @dumbser2013ader; @balsara_higher-order_2017; @shu2016high; @reyes_new_2016; @reyes2019variable]) in accordance with a trend of decreasing memory per compute core in newer high-performance computing (HPC) architectures [@Attig2011; @Dongarra2012future; @Subcommittee2014top]. Such high-order (4th or higher) PDE solvers are then combined with the AMR strategies described above. Traditionally, a second-order linear interpolation scheme has been commonly adopted for data prolongation from coarse to finer AMR levels, and a mass-conserving averaging scheme for data restriction from finer to coarser levels. This “low-order” AMR interpolation model has been the default choice in the vast majority of the aforementioned AMR paradigms and algorithms in practice. The accuracy gap between the underlying high-order PDE solvers and the second-order AMR interpolation could potentially degrade the quality of solutions from the high-order PDE solvers when the solutions are projected to AMR grids that are progressively undergoing refinements and de-refinements. In addition, another accuracy loss inevitably happens at fine-coarse boundaries. It is therefore natural to close the accuracy gap by providing high-order models in AMR interpolations, so as to better maintain the overall solution accuracy on AMR grid configurations. The high-order AMR prolongations of Shen *et al.* [@shen2011adaptive] and Chen *et al.* [@chen_5th-order_2016] are in this vein.
These authors coupled high-order finite difference method (FDM) PDE solvers with fourth- or fifth-order accurate prolongations based on the well-known high-order polynomial interpolation schemes of WENO [@sebastian2003multidomain] and MP5 [@suresh1997accurate], respectively. These studies have shown that the AMR simulations with a higher-order coupling can produce better results in terms of increasing solution accuracy and lowering numerical diffusion, thereby resolving fine-scale flow features. The present work focuses on developing a new high-order polynomial-free interpolation scheme for AMR data prolongation on the block-structured AMR implementation using the AMReX library. Our high-order prolongation scheme stems from the previous studies on applying Gaussian Process Modeling [@rasmussen2005] in designing high-order reconstruction/interpolation in the finite volume method (FVM) [@reyes_new_2016] and in the finite difference method (FDM) [@reyes2019variable]. This paper is organized as follows. In Section \[sec:AMReX\] we overview the relevant AMR framework, AMReX, as our computational toolkit in which we integrate our new GP-based prolongation algorithm. In Section \[sec:GP\] we provide a mathematical overview of the GP modeling specific to high-order AMR prolongation. We give step-by-step execution details of our algorithm in Section \[sec:method\]. Also, we give a description of extending our work to a GPU-friendly implementation by following AMReX programming directives. Section \[sec:results\] shows the code performance of the new GP prolongation on selected multidimensional test problems, and finally, in Section \[sec:conclusion\] we summarize the main results of our work.
Overview of AMReX {#sec:AMReX} ================= Developed and managed by the Center for Computational Science and Engineering at Lawrence Berkeley National Laboratory, AMReX is funded through the Exascale Computing Project (ECP) as a software framework to support the development of block-structured AMR applications focusing on current and next-generation architectures [@amrex]. AMReX provides support for many operations involving adaptive meshes including multilevel synchronization operations, particle and particle/mesh algorithms, solution of parabolic and elliptic systems using geometric and algebraic multigrid solvers, and explicit/implicit mesh operations. As part of an ECP funded project, AMReX employs hybrid MPI/OpenMP parallelization on CPUs along with GPU implementations (CUDA). AMReX is mostly comprised of source files written in C++ and Fortran. Fortran is used solely for mathematics drivers, while C++ is used for I/O, flow control, and memory management, in addition to mathematics drivers. The novelty of the current study is the new GP-based prolongation method implemented within the AMReX framework. The GP implementation furnishes an optional high-order prolongation method from coarse to fine AMR levels, alternative to the default second-order linear prolongation method in AMReX. In this way, the GP results in Section \[sec:results\] naturally inherit all the generic AMReX operations such as load balancing, guardcell exchanges, refluxing, and AMR data and grid management, except for the new GP prolongation method. We display a suite of test comparisons between the two prolongation methods. AMR restriction is another important operation in the AMR data management in the opposite direction, from fine to coarse levels. We use the default restriction method of averaging that maintains conservation on AMR grid hierarchies.
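The conservative average-down restriction just mentioned can be sketched in a few lines. This is a simplified 2D illustration in Python, not AMReX code, and assumes a refinement ratio of two in each direction:

```python
import numpy as np

def average_down(fine, rx=2, ry=2):
    """Conservative restriction of 2D fine-level data onto the coarse level:
    each coarse value is the average of the rx*ry fine values it covers."""
    nx, ny = fine.shape
    # reshape into (coarse_i, fine_i, coarse_j, fine_j) blocks, then average
    return fine.reshape(nx // rx, rx, ny // ry, ry).mean(axis=(1, 3))

fine = np.arange(16.0).reshape(4, 4)   # toy fine-level patch
coarse = average_down(fine)            # 2x2 coarse patch
```

Because each coarse value is the plain mean of the fine values it covers, the sum of coarse values times the refinement factor equals the sum of fine values, which is the conservation property the default restriction maintains.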
This approach populates data on coarse levels by averaging down the corresponding fine level data according to $$\mathbf{U}^C = \frac{1}{R}\sum\limits_i^R{\mathbf{U}^f_i}, \label{eq:avg}$$ where $\mathbf{U}^C$ and $\mathbf{U}^f$ are conservative quantities on the coarse and fine grids respectively, and $R=\prod\limits_d{r_d}$ is the normalization factor with $r_d$ being the refinement ratio in each direction $d= x, y, z$. Lastly, maintaining conservation across fine-coarse interface levels is done by the operation called refluxing. This process corrects the coarse grid fluxes by averaging down the fluxes computed on the fine grids abutting the coarse grid. In practice, the conservation is managed as an a posteriori correction step after all fluid variables ${\bf{U}}^C$ on a coarse cell are updated. For other AMR operations related to AMReX, interested readers are encouraged to refer to [@zhang2016boxlib; @zingale2018meeting; @amrex]. Gaussian Process Modeling for CFD {#sec:GP} ================================= The new prolongation method we are presenting in this paper is based on Gaussian Process (GP) Modeling. In order for this paper to be self-contained, we give a brief overview of constructing a GP model in this section. More detailed introductions to GP modeling are found in [@rasmussen2005; @bishop2007pattern]. Gaussian Processes are a family of stochastic processes in which any finite collection of random variables sampled from the process is jointly normally distributed. In a more general sense, GPs take samples of functions from an infinite-dimensional function space. In this way, the AMR prolongation routine described in detail in Section \[sec:method\] will be drawn from a data-informed distribution space trained on the coarse grid data.
A statistical introduction to Gaussian Processes {#sec:GP_intro} ------------------------------------------------ The construction of the posterior probability distribution over the function space is the heart of GP modeling. To construct a GP, one needs to specify a *prior probability distribution* for the function space. This can be done by specifying two functions, a prior mean function and a prior covariance kernel function (see more details below), by which a GP is fully defined. Samples, namely function values evaluated at known locations, drawn from the GP prior are then used to further update this prior probability distribution. As a consequence, a *posterior probability distribution* is generated as a combination of the newly updated prior along with these samples, by means of Bayes’ Theorem. Once constructed, one can draw functions from this data-adjusted GP posterior space to generate a model for prolongation in AMR or interpolation/reconstruction. Specifically, the GP posterior could be used to probabilistically predict the value of a function at points where the function has not been previously sampled. In [@reyes2019variable; @reyes_new_2016] Reyes *et al.* have utilized this *posterior mean function* as a high-order predictor to introduce a new class of high-order reconstruction/interpolation algorithms for solving systems of hyperbolic equations. Within a single algorithmic framework, the new GP algorithms have shown a novel algorithmic flexibility in which a variable order of spatial accuracy is achieved and is given by $2R+1$, corresponding to the size of the one dimensional stencil. Here, $R$ is the radius of a GP stencil, called the GP radius, given as a positive integer value which represents the radial distance between the central cell $\mathbf{x}_i$ and $\mathbf{x}_{i+R}$. 
Similarly, from the perspective of designing a probabilistically driven prediction of function values, the posterior mean function becomes an AMR prolongator that delivers a high-order accurate approximation at the desired location in a computational domain. As briefly mentioned, GPs can be fully defined by two functions: a mean function $\bar{f}(\mathbf{x}) = \mathbb{E}[f(\mathbf{x})]$ and a covariance function which is a symmetric, positive-definite kernel $K(\mathbf{x}, \mathbf{y}): \mathbb{R}^N\times\mathbb{R}^N \to \mathbb{R}$. Notationally, we write $f\sim \mathcal{GP}(\bar{f}, K)$ to denote that functions $f$ have been distributed in accordance with the mean function $\bar{f}(\mathbf{x})$ and the covariance $K(\mathbf{x}, \mathbf{y})$ of the GP prior. Analogous to finite-dimensional distributions we write the covariance as $$K(\mathbf{x}, \mathbf{y}) = \mathbb{E}\left[\left(f(\mathbf{x}) - \bar{f}(\mathbf{x})\right) \left(f(\mathbf{y}) - \bar{f}(\mathbf{y})\right)\right]$$ where $\mathbb{E}$ is with respect to the GP distribution. One controls the GP by specifying both $\bar{f}(\mathbf{x})$ and $K(\mathbf{x}, \mathbf{y})$, typically as some functions parametrized by the so-called hyperparameters. These hyperparameters allow us to give the “character” of functions generated by the posterior (i.e., length scales, differentiability or regularity) which will define the underlying pattern of predictions using the posterior GP model. Suppose we have a given GP and $N$ locations, $\mathbf{x}_n\in \mathbb{R}^d$, where $d = 1, 2, 3$ and $ n = 1, \dots, N$. For samples $f(\mathbf{x}_n)$ collected at those points, we can calculate the likelihood $\mathcal{L}$, viz., the probability of the data $f(\mathbf{x}_n)$ given the GP model. Let us denote the data array in a compact form, $\mathbf{f} = \left[f(\mathbf{x}_1), \dots, f(\mathbf{x}_N) \right]^T $.
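The role of the length-scale hyperparameter can be made concrete with the squared-exponential kernel adopted later in the paper: a larger $\ell$ makes samples at a fixed separation more strongly correlated, so the posterior prefers smoother functions. A minimal sketch (the particular $\ell$ values are illustrative, in units of the grid spacing):

```python
import math

def se_kernel(x, y, ell, sigma=1.0):
    # K(x, y) = Sigma^2 * exp(-(x - y)^2 / (2 ell^2))
    return sigma ** 2 * math.exp(-((x - y) ** 2) / (2.0 * ell ** 2))

# Correlation between two samples one grid spacing apart:
corr_short = se_kernel(0.0, 1.0, ell=0.5)   # short length scale
corr_long = se_kernel(0.0, 1.0, ell=3.0)    # long length scale
```

With `ell=0.5` the two samples are nearly uncorrelated, while with `ell=3.0` they are strongly correlated; this is precisely the "character" of the prior that the hyperparameters encode.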
The likelihood $\mathcal{L}$ of $\mathbf{f}$ is given by $$\mathcal{L} \equiv P\big(\mathbf{f} | \mathcal{GP}(\bar{f}, K)\big) = (2\pi)^{-N/2} \left(\det \mathbf{K}\right)^{-1/2} \exp\left[-\frac{1}{2}\left(\mathbf{f} - \bar{\mathbf{f}}\right)^T\mathbf{K}^{-1} \left(\mathbf{f} - \bar{\mathbf{f}}\right)\right], \label{eq:likely}$$ where $\mathbf{K}$ is a matrix generated by $K_{n,m} = K(\mathbf{x}_n, \mathbf{x}_m)$, $n, m = 1,\dots, N$, and the mean $\bar{\mathbf{f}} = [\bar{f}(\mathbf{x}_1), \cdots, \bar{f}(\mathbf{x}_N)]^T$. Since these samples (or functions) are probabilistically distributed according to the GP prior, i.e., $f \sim \mathcal{GP}(\bar{f}, K)$, we now can make a probabilistic statement about the value of any function $f$ drawn from the GP at a new point $\mathbf{x}_*$, at which we do not know the exact function value, $f(\mathbf{x}_*)$. In other words, the GP model enables us to predict the value of $f(\mathbf{x}_*)$ probabilistically based on the character of likely functions given in the GP model prior. For AMR, this is especially important as we need to construct data at a finer resolution where we do not know the data values at newly generated grid locations refined from a parent coarse level. An application of Bayes’ Theorem, along with the conditioning property, directly onto the joint Gaussian prior gives the updated (or data-informed) posterior distribution of the predicted value $f_*$ conditioned on the observations $\mathbf{f}$, $$P(f_* | \mathbf{f}) = (2\pi U^2)^{-1/2} \exp\left[- \frac{(f_* - \tilde{f}_*)^2}{2U^2}\right], \label{eq:cond_like}$$ where $\tilde{f}_*$ is the posterior mean, given as $$\tilde{f}_* \equiv \bar{f}(\mathbf{x}_*) + \mathbf{k}_*^T\mathbf{K}^{-1} (\mathbf{f} - \bar{\mathbf{f}}), \label{eq:mean}$$ and the *posterior covariance function* as $$U^2 \equiv k_{**} - \mathbf{k}_*^T\mathbf{K}^{-1} \mathbf{k}_*, \label{eq:cov}$$ where $\mathbf{k}_* = [K(\mathbf{x}_1, \mathbf{x}_*), \dots, K(\mathbf{x}_N, \mathbf{x}_*)]^T$ and $k_{**} = K(\mathbf{x}_*, \mathbf{x}_*)$. The posterior probability given in Eq. (\[eq:cond\_like\]) is maximized by the choice $f_*=\tilde{f}_*$, leading to Eq.
(\[eq:mean\]) being taken as the GP prediction for the unknown $f(\mathbf{x}_*)$. Meanwhile the *posterior* covariance in Eq. (\[eq:cov\]) reflects the GP model’s confidence in the prediction for the function at $\mathbf{x}_*$. In the application of AMR prolongation, we assume that the coarse grid data is perfectly sampled, that is, there is no *uncertainty*. This assumption, together with the fact that we are not adaptively finding hyperparameters in the present work, justifies our choice not to utilize the posterior covariance in our GP-based AMR prolongation. Instead, we focus on the posterior mean, which will become the basis for our interpolation in the GP-based AMR prolongation. In the next subsections we describe two modeling schemes for the GP-based AMR prolongation. In Section \[sec:gp\_ptwise\_prol\] we describe the first method that prolongates pointwise state data from coarse to fine levels. For AMR simulations in which the state data is represented as volume-averaged, conserving such quantities becomes crucial to satisfy the underlying conservation laws. To meet this end, we introduce the second method in Section \[sec:gp\_vol\_prol\], which preserves volume-averaged quantities in prolongation. We will refer to our GP-based AMR prolongation as GP-AMR for the rest of this paper. GP for pointwise AMR prolongation {#sec:gp_ptwise_prol} --------------------------------- In this section we introduce the first GP-AMR prolongation method that is suitable for AMR applications where the state data is comprised of pointwise values. In this case the GP-AMR model samples are given as pointwise evaluations of the underlying function. Let $\Delta x_d$ denote the distance between points in a **coarse** level in each $d=x,y,z$ direction. Using the posterior mean function in Eq.
, we first devise a pointwise prolongation scheme for AMR, i.e., AMR prolongation of pointwise data from coarse to fine levels. The choice of $\mathbf{x}_*$ will depend on the *refinement ratio* $\mathbf{r} = [r_x, r_y, r_z]$, and in general $\prod_d r_d$ new points are generated for the new level. For example, if we wished to refine a single coarse grid cell by two in all three directions in 3D, we would generate eight new grid points as well as the eight associated data values at those grid points in a newly refined level. To illustrate the process, we consider a simple example of a two-level refinement in 1D, in which two refined data values are generated for each and every coarse value. Assume here that we utilize a stencil with a GP radius of one (i.e., $R=1$), in which case the local 3-point GP stencil $\mathbf{f}_i$ centered at each $i$-th cell for interpolation is laid out as $$\mathbf{f}_i = [q_{i-1}, q_i, q_{i+1}]^T.$$ From this given stencil data at the coarse level, we wish to generate two finer data values $q_{s\pm1/2}$ for each $s=i-1, i, i+1$. To do this, we use the posterior mean function in Eq. (\[eq:mean\]) on three 3-point GP stencils, $\mathbf{f}_s$, $s=i-1, i, i+1$, to populate a total of six new data values, $$q_{s\pm\frac{1}{2}} = \mathbf{k}_{s\pm\frac{1}{2}}^T\mathbf{K}^{-1} \mathbf{f}_s, \;\; s=i-1, i, i+1, \label{eq:pinterp}$$ where we used a zero mean prior, $\bar{\mathbf{f}}=0$. In 2D or 3D, data values on a standard $(2N+1)$-point stencil are to be reshaped into a 1D local array $\mathbf{f}_s$ in an orderly fashion, where each $\mathbf{f}_s$ includes corresponding multidimensional data reordered in 1D between $s-N$ and $s+N$. This strategy will be fully described in Section \[sec:method\]. A common practice with GP modeling is to assume a zero prior mean, as we did in Eq. (\[eq:pinterp\]), and we use this assumption in our implementations.
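As a minimal sketch of this pointwise scheme, the following Python fragment builds the $3\times 3$ kernel matrix for an $R=1$ stencil with the squared-exponential kernel of Eq. (\[eq:sqrexpcov\]) and evaluates the posterior mean at the two refined locations; the sample data and the value of $\ell$ are illustrative only, not the production C++ implementation of Section \[sec:method\]:

```python
import numpy as np

def se_kernel(x, y, ell):
    # Squared-exponential kernel of Eq. (sqrexpcov) with Sigma = 1
    return np.exp(-((x - y) ** 2) / (2.0 * ell ** 2))

# 3-point coarse stencil (GP radius R = 1) with unit spacing, centered at x = 0
xs = np.array([-1.0, 0.0, 1.0])
ell = 12.0  # illustrative; chosen on the order of the stencil size

K = se_kernel(xs[:, None], xs[None, :], ell)

def prolong(f, x_star):
    # Posterior-mean prediction q_* = k_*^T K^{-1} f with a zero prior mean
    k_star = se_kernel(xs, x_star, ell)
    w = np.linalg.solve(K, k_star)  # GP weights w_* = K^{-1} k_*
    return w @ f

# Two refined values per coarse cell, at x = -1/2 and x = +1/2
f = 2.0 + 3.0 * xs  # samples of a linear profile on the coarse stencil
q_left, q_right = prolong(f, -0.5), prolong(f, 0.5)
```

For smooth (here, linear) data the prediction closely tracks the underlying profile, and the weights nearly sum to one, reflecting the near-reproduction of constant data.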
Something to note is that the GP weights, $\mathbf{k}_{*}^T\mathbf{K}^{-1}$, are independent of the samples $\bf{f}$, and are constructed based on the choice of kernel function and the locations of the samples, $\bf{x}_n$, and the prediction point, $\bf{x}_*$, alone. This is particularly useful in block-structured AMR applications, as we can compute the weights for each level a priori, based on the min and max levels prescribed for each run. Otherwise, we can generate the model weights the first time a level is used and save them for later use. Since the matrix $\mathbf{K}$ is symmetric and positive-definite, we can use the Cholesky decomposition to compute the GP weights. In practice, we compute and save $\bf{w}^T_* = \bf{k}^T_*\bf{K}^{-1}$ using a Cholesky factorization followed by back-substitution only once per simulation, either at an initial grid configuration step or at the first time an AMR level is newly used. In this way, the computational cost of the prolongation is reduced to a dot product between $\bf{w}$ and $\bf{f}$. As a consequence we arrive at a compact form, $$q_{s\pm\frac{1}{2}} = \mathbf{w}^T_{s\pm\frac{1}{2}} \mathbf{f}_s, \;\; s=i-1, i, i+1. \label{eq:pinterp2}$$ There are many choices of covariance kernel functions available for GP modeling [@rasmussen2005; @bishop2007pattern]. One of the most widely used kernels in Gaussian process modeling is the squared-exponential (SE) covariance kernel function, $$K(\mathbf{x}, \mathbf{y}) \equiv \Sigma^2\exp\left[-\frac{(\mathbf{x} - \mathbf{y})^2}{2\ell^2}\right]. \label{eq:sqrexpcov}$$ The SE kernel is infinitely differentiable and as a consequence samples functions that are equally smooth. The kernel contains two model hyperparameters, $\Sigma$ and $\ell$. $\Sigma$ acts as an overall constant factor that has no impact on the posterior mean, which is the basis of our GP-AMR prolongation (this can be seen as a cancellation between the $\bfk_*^T$ and $\bfK^{-1}$ terms in Eq. (\[eq:mean\])); we therefore take $\Sigma=1$.
$\ell$ controls the length scale on which *likely* functions will vary according to the GP model. All that remains to complete the GP model is to specify the prior mean function. The prior mean function is often taken as a constant for simplicity, i.e., $\bar{f}(\mathbf{x}) = f_0$. $f_0$ controls the behavior of the GP prediction at spatial locations that, according to the kernel function, are not highly correlated with any of the observed values $\bff$. In the context of the SE kernel for prolongation, this happens when the choice of $\ell$ is much smaller than the grid spacing between coarse cells. For that reason it is advisable to choose $\ell$ on the order of the size of the prolongation stencil. The GP-SE model features three hyperparameters, namely $f_0, \Sigma^2,$ and $\ell$. As previously stated, we choose $f_0 = 0$ in the present work. In GP modeling the hyperparameter $\Sigma$ enters the posterior covariance function, which is used to assess the “quality” of the GP model constructed from the sampled data. Since we are not considering uncertainty in this application, we set $\Sigma^2 = 1$, as it does not affect the calculation of the posterior mean function. The model using the SE kernel in Eq. (\[eq:sqrexpcov\]) and Eq. (\[eq:pinterp\]) with the prescribed hyperparameter choices is our first formula for pointwise AMR prolongation. A GP Prolongation for Cell-Averaged Quantities {#sec:gp_vol_prol} ---------------------------------------------- For the majority of AMReX and fluid dynamics application codes, the state data is cell-averaged (or volume-averaged), as per the formulation of FVMs. The above GP-AMR prolongation for pointwise data has to be modified in order to preserve the integral relations between fine and coarse data that are implicit in the integral formulation of the governing equations used in FVM.
The key observation from [@reyes_new_2016] is that averaging over cells constitutes a “linear” operation on a function $f(\bfx)$. As with finite-dimensional Gaussian random variables, linear operations on a Gaussian process yield a new Gaussian process with linearly transformed mean and covariance functions. This is attained by realizing that the sample points are cell-averaged and a different measure must be used. Let $\mathbf{G}$ be the vector of cell-averaged samples, whose elements are $G_i = \left<f(\mathbf{x}_i)\right> = \frac{1}{\mathcal{V}}\int_{I_i} f(\mathbf{x}) d\mathcal{V}$, where $I_i\subset \mathbb{R}^D$ is the $D$-dimensional cell of which $\mathbf{x}_i$ is the center, $\Delta x_d$ is the cell length in each $d$-direction, and $\mathcal{V}$ is the cell volume of $I_i$. In order to calculate the covariance between cell-averaged quantities we need an integrated covariance kernel as described in  [@reyes_new_2016]. That is, $$\begin{aligned} C_{kh} & = \mathbb{E}[(G_k - \bar{G}_k)(G_h - \bar{G}_h)] \\ & = \int \mathbb{E}[(f(\mathbf{x}) - \bar{f}(\mathbf{x}))(f(\mathbf{y}) - \bar{f}(\mathbf{y}))]dg_k(\mathbf{x}) dg_h(\mathbf{y}) \\ & = \iint K(\mathbf{x}, \mathbf{y}) dg_k(\mathbf{x}) dg_h(\mathbf{y}), \end{aligned} \label{eq:int}$$ where $$dg_s(\mathbf{x}) = \begin{cases} \displaystyle d\mathbf{x}\prod_{d=x,y,z}^{D} \frac{1}{\Delta x_d} & \textrm{if } \mathbf{x}\in I_s, \\ 0 & \textrm{else,}\end{cases}$$ is the cell-average measure. With the use of the squared-exponential kernel, Eq.
(\[eq:int\]) becomes $$\begin{aligned} C_{kh} = \prod_{d=x,y,z}^{D} \sqrt{\pi}\left(\frac{\ell}{\Delta x_d}\right)^2\Bigg\{\left( \frac{\Delta_{kh}+1}{\sqrt{2}\ell/\Delta x_d}\textrm{erf}\left[\frac{\Delta_{kh}+1}{\sqrt{2}\ell/\Delta x_d}\right] + \frac{\Delta_{kh}-1}{\sqrt{2}\ell/\Delta x_d}\textrm{erf}\left[\frac{\Delta_{kh}-1}{\sqrt{2}\ell/\Delta x_d}\right] \right) \\ + \frac{1}{\sqrt{\pi}}\left(\exp{\left[-\left(\frac{\Delta_{kh}+1}{\sqrt{2}\ell/\Delta x_d}\right)^2\right]} +\exp{\left[-\left(\frac{\Delta_{kh}-1}{\sqrt{2}\ell/\Delta x_d}\right)^2\right]}\right) \\ - 2\left(\frac{\Delta_{kh}}{\sqrt{2}\ell/\Delta x_d}\textrm{erf}\left[\frac{\Delta_{kh}}{\sqrt{2}\ell/\Delta x_d} \right] + \frac{1}{\sqrt{\pi}}\exp\left[-\left(\frac{\Delta_{kh}}{\sqrt{2}\ell/\Delta x_d}\right)^2\right]\right) \Bigg\}, \end{aligned} \label{eq:intsq}$$ where we used $\Delta_{kh} = (x_{d,h} - x_{d,k})/\Delta x_d$. Following similar arguments as for the covariance kernel function, the prediction vector $\bfk_*$ must also be linearly transformed to reflect the relationship between the input data averaged over coarse cells and the output prolonged data averaged over the fine cells. Notice that in the pointwise prolongation the values sampled and generated were of the same pointwise type. For cell-averaged quantities, the prolongation again returns cell averages from cell-averaged values, but now over finer cells that are smaller than the original ones. So we need to build a new covariance to obtain a new GP weight vector that relates cell averages between the coarse and refined levels. This is derived in a similar manner to $C_{kh}$, but the limits of integration will reflect the refinement.
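As a sanity check, the 1D instance of the closed form in Eq. (\[eq:intsq\]) can be compared against a brute-force double cell average of the SE kernel; in this sketch the ratio $\ell/\Delta x = 2$ and the midpoint-rule resolution are illustrative:

```python
import math

def h(r):
    # Helper grouping the erf and Gaussian terms that appear in Eq. (intsq)
    return r * math.erf(r) + math.exp(-r * r) / math.sqrt(math.pi)

def C_closed(delta, ell):
    # 1D closed form of Eq. (intsq): delta = cell separation, ell = l, both in units of dx
    s = math.sqrt(2.0) * ell
    return math.sqrt(math.pi) * ell ** 2 * (
        h((delta + 1.0) / s) + h((delta - 1.0) / s) - 2.0 * h(delta / s))

def C_brute(delta, ell, n=400):
    # Midpoint-rule double average of the SE kernel over unit cells [0,1] and [delta, delta+1]
    tot = 0.0
    for i in range(n):
        x = (i + 0.5) / n
        for j in range(n):
            y = delta + (j + 0.5) / n
            tot += math.exp(-((x - y) ** 2) / (2.0 * ell ** 2))
    return tot / (n * n)
```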
Starting with the SE kernel, we integrate over the two cells, one being the sampled stencil cell $I_k$ and the other the target cell $I_*$, to get the new GP weight vector, $$T_{k*} \equiv T(\mathbf{x}, \mathbf{x}_*) = \int_{I_k} \int_{I_*} K(\mathbf{x}, \mathbf{x}_*) dg_k(\mathbf{x}) dg_*(\mathbf{x_*}),$$ where $$I_* = \bigtimes\limits_{d=x,y,z}^{D} \left[x_{*,d} - \frac{\Delta x_d}{2r_d}, \;\; x_{*,d} + \frac{\Delta x_d}{2r_d}\right],$$ in which $\bigtimes$ denotes the Cartesian product of sets. Using the SE kernel, we have a closed form for $T_{k*}$, $$T_{k*} = \pi^{D/2}\prod_{d = x,y,z}^D r_d \left(\frac{\ell}{\Delta x_d}\right)^2 \sum_{\alpha = 1}^4 (-1)^\alpha \left[\phi_{\alpha, d} \textrm{erf}(\phi_{\alpha, d}) + \frac{1}{\sqrt{\pi}}\exp(-\phi_{\alpha,d}^2)\right], \label{eq:trans}$$ where $\phi_{\alpha,d}$, for $\alpha=1,\dots,4$ respectively, takes the values $$\phi_{\alpha,d}= \frac{1}{\sqrt{2}\ell/\Delta x_d}\left(\Delta_{k,*} + \frac{r_d -1}{2r_d}, \hspace{1mm} \Delta_{k,*} + \frac{r_d+1}{2r_d}, \hspace{1mm} \Delta_{k,*} - \frac{r_d -1}{2r_d}, \hspace{1mm} \Delta_{k,*} - \frac{r_d+1}{2r_d}\right).$$ Therefore, with the combination of the cell-averaged kernel in Eq. (\[eq:intsq\]) and the weight vector in Eq. (\[eq:trans\]), we obtain our second GP-AMR formula, given as the integral analog of Eq. (\[eq:mean\]) for cell-averaged data prolongation from coarse to fine levels, $$\left<f(\mathbf{x}_*)\right> = \mathbf{T}_{*}^T\mathbf{C}^{-1} \mathbf{G}, \label{eq:vinterp}$$ where we used the zero mean as before. The vector $\mathbf{G}$ of cell-averaged samples within the GP radius $R$ is given as $\mathbf{G} = [G_{i-R}, \dots, G_{i+R}]^T$. Analogous to the pointwise method, we cast $\mathbf{T}_{*}^T\mathbf{C}^{-1}$ into a new GP weight vector $\mathbf{z}_*$ to rewrite Eq. (\[eq:vinterp\]) as $$\left<f(\mathbf{x}_*)\right> = \mathbf{z}_{*}^T\mathbf{G}. \label{eq:vinterp2}$$ Many methods perform interpolation in a dimension-by-dimension manner.
In contrast, the above two GP-AMR methods are inherently multidimensional. Moreover, the use of the SE kernel as a base in each $d$-direction facilitates the analytic multidimensional closed forms obtained above. Our GP-AMR methods, therefore, provide a unique framework in which all interpolation procedures in AMR grid hierarchies naturally support multidimensionality, as the evaluation of the covariance matrices only depends on the distance between data points. Furthermore, it is worth pointing out that the two prolongation schemes in Eqs. (\[eq:pinterp2\]) and (\[eq:vinterp2\]) are merely a straightforward calculation of dot products between the GP weight vectors and the grid data. This is the novelty of the use of GP modeling in AMR prolongation, which yields two new compact prolongation methods that are computed in the same way for any stencil configuration in any number of spatial dimensions without any added complexity. This is in stark contrast to polynomial-based methods, which require the use of explicit basis functions and have strict requirements on stencil sizes and configurations in order to form a well-posed interpolation. Altogether, polynomial-based methods become increasingly cumbersome as the number of dimensions or the order of accuracy grows, highlighting the simplicity afforded by the present GP-AMR methods. Nonlinear Multi-substencil Method of GP-WENO for Non-Smooth Data {#sec:GP-WENO} ---------------------------------------------------------------- Both of the above GP modeling techniques can suffer from non-physical oscillations near discontinuities. The SE and integrated SE kernels work very well for continuous data, but we need to implement some type of “limiting” process in order to suppress nonphysical oscillations in flow regions with sharp gradients. The linear interpolation used by default in AMReX makes use of the monotonized central (MC) slope limiter in order to produce slopes that do not introduce any new extrema into the solution.
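For reference, the MC limiter just mentioned can be sketched as follows; this is the textbook monotonized central slope, not a copy of AMReX’s source:

```python
def mc_slope(qm, q0, qp):
    # Monotonized central (MC) limited slope for cell data (q_{i-1}, q_i, q_{i+1})
    dc = 0.5 * (qp - qm)   # central difference
    dl = 2.0 * (q0 - qm)   # twice the left one-sided difference
    dr = 2.0 * (qp - q0)   # twice the right one-sided difference
    if dl * dr <= 0.0:     # local extremum: flatten so no new extrema appear
        return 0.0
    s = 1.0 if dc > 0.0 else -1.0
    return s * min(abs(dc), abs(dl), abs(dr))
```

Away from extrema the limiter returns the central difference; at a local extremum it returns zero, which is exactly the non-oscillatory behavior the nonlinear GP-WENO weights below are designed to emulate.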
In this study we utilize the GP-based smoothness indicator approach studied in GP-WENO [@reyes_new_2016; @reyes2019variable]. Following the Weighted Essentially Non-Oscillatory (WENO) [@jiang1996efficient] approach, GP-WENO adaptively chooses a non-oscillatory stencil by nonlinearly weighting GP predictions trained on a set of substencils according to a GP-based local indicator of smoothness $\beta_m$. The smoothness is determined using the GP likelihood to measure the compatibility of the substencil data $\mathbf{f}_m$ with the smooth SE kernel. Effectively, $\beta_m$ is an indication of how well the data in $\mathbf{f}_m$ match the GP model assumptions that are encoded in $\mathbf{K}_{m,\sigma}$ (or $\mathbf{C}_{m,\sigma}$). There are two differences between $\mathbf{K}$ and $\mathbf{K}_{m,\sigma}$: (i) $\mathbf{K} \in \mathbb{R}^{M \times M}$ while $\mathbf{K}_{m,\sigma} \in \mathbb{R}^{(2D+1)\times(2D+1)}$, where $M=2D^2+2D+1$ for each spatial dimension $D=1,2,3$, and (ii) the scale-length hyperparameter $\sigma$ for $\mathbf{K}_{m,\sigma}$ is a much smaller length scale, in accordance with the narrow shock width spread over a couple of grid spacings. The same differences hold between $\mathbf{C}$ and $\mathbf{C}_{m,\sigma}$ as well. The first step in this multi-substencil method of GP-WENO is to build the substencil data on each of the $2D+1$ substencils $S_m$, $m=1, \dots, 2D+1$. The data are combined using linear weights $\gamma_m$ derived from an over-determined linear system relating the weights generated by building a GP model on all substencils $S_m$ to the weights generated from a GP model on the total stencil $S$. The last step is to take the linear weights $\gamma_m$ and define nonlinear weights $\omega_m$ using the GP-based smoothness indicators $\beta_m$ [@reyes_new_2016; @reyes2019variable]. We now describe in detail the GP-AMR method for the two-dimensional case.
Extensions to other dimensions are readily obtained due to the choice of the isotropic SE kernel, with the only difference being in the number of stencil points used. We begin with a total stencil $S$, taken as all cells whose index centers are within a radius of $2$ of the central cell $I_{i,j}$. The total stencil $S$ is then subdivided into $2D+1$ candidate stencils, $S_m, m = 1, \dots, 2D+1$, such that $\displaystyle\bigcap_{m=1}^{2D+1} S_m = \{\mathbf{x}_{i,j} \}$ and $\displaystyle\bigcup_{m=1}^{2D+1} S_m = S$. A schematic of these stencil configurations is given in Fig. \[fig:2dS\]. That is, the prolongation will have the form: $$f_* = \sum_{m=1}^{2D+1} \omega_m \mathbf{w}_m^T\mathbf{f}_m \label{eq:MSGP}$$ where $\mathbf{w}_m^T = \mathbf{T}_{*,m}\mathbf{C}^{-1}_{m}$ for the cell-averaged prolongation, or $\mathbf{w}_m^T = \mathbf{k}_{*,m}^T\mathbf{K}^{-1}_{m}$ for the pointwise prolongation. The coefficients $\omega_m$ are defined as in the WENO-JS method [@WENO], $$\omega_m = \frac{\tilde{\omega}_m}{\sum_{s=1}^{2D+1}\tilde{\omega}_s}, \qquad \tilde{\omega}_m = \frac{\gamma_m}{\left(\beta_m + \epsilon\right)^p}.$$ For our algorithm we choose $\epsilon = 10^{-36}$ and $p = 2$. The terms $\beta_m$ are taken as the data-dependent term in $\log(\mathcal{L})$ from Eq. (\[eq:likely\]) for the stencil data $\mathbf{f}_m$, that is, $$\beta_m = \mathbf{f}_m^T \mathbf{K}_{m,\sigma}^{-1} \mathbf{f}_m$$ for the pointwise prolongation, and $$\beta_m = \mathbf{G}_m^T \mathbf{C}_{m,\sigma}^{-1} \mathbf{G}_m$$ for the cell-averaged prolongation. Notice that, due to the properties of the kernel matrices [@reyes_new_2016], we can cast $$\beta_m = \sum_{i=1}^{2D+1} \frac{1}{\lambda_i}\left(\mathbf{v}_i^T\mathbf{f}_m\right)^2, \label{eq:eig}$$ where $\mathbf{v}_i$ and $\lambda_i$ are the eigenvectors and eigenvalues of the covariance kernel matrix, ${\bf{K}}_{m,\sigma}$ or ${\bf{C}}_{m,\sigma}$. As described in [@reyes_new_2016; @reyes2019variable], the GP-based smoothness indicators $\beta_m$ defined in this way are derived by taking the negative log of the GP likelihood of Eq. (\[eq:likely\]).
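The equivalence between Eq. (\[eq:eig\]) and the direct quadratic form $\mathbf{f}_m^T \mathbf{K}_{m,\sigma}^{-1} \mathbf{f}_m$ is easy to verify numerically; in this sketch the substencil data and $\sigma$ are illustrative:

```python
import numpy as np

# 3-point substencil (2D+1 with D = 1); sigma/dx = 2 is an illustrative choice
sigma = 2.0
xs = np.array([-1.0, 0.0, 1.0])
Ksig = np.exp(-((xs[:, None] - xs[None, :]) ** 2) / (2.0 * sigma ** 2))

lam, V = np.linalg.eigh(Ksig)       # eigenvalues lam[i], eigenvectors V[:, i]
f = np.array([0.3, 1.7, 0.9])       # arbitrary substencil data

beta_eig = sum((V[:, i] @ f) ** 2 / lam[i] for i in range(3))  # Eq. (eig)
beta_direct = f @ np.linalg.solve(Ksig, f)                     # f^T K^{-1} f
```

Because $\mathbf{K}_{m,\sigma}$ is symmetric positive-definite, its orthonormal eigendecomposition makes the two expressions identical up to round-off, which is why the scaled eigenvectors can be precomputed and stored.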
This derivation gives rise to the statistical interpretation of $\beta_m$: if there is a shock or discontinuity in one of the substencils, say $S_k$, such a short length-scale (or rapid) change on $S_k$ makes the data $\mathbf{f}_k$ unlikely. Here the likelihood is measured relative to a GP model that assumes that the underlying function is smooth on the length scale set by $\sigma$. In other words, the GP model, whose smoothness is represented by the smoothness property of its covariance kernel, $\mathbf{K}_{m,\sigma}$ or $\mathbf{C}_{m,\sigma}$, gives a low probability to $\mathbf{f}_k$, in which case $\beta_k$ – given as the negative log likelihood of $\mathbf{f}_k$ – becomes relatively larger than the other $\beta_m$, $m \ne k$. In our method we use GP modeling for both regression (prolongation) and classification. The regression aspect enables us to prolongate GP samples (i.e., function values, or fluid values) over the longer length-scale variability specified by $\ell$. On the other hand, the classification aspect allows us to detect and handle discontinuities. This is achieved by employing a much shorter length-scale variability tuned by $\sigma$, which is integrated into the eigensystem in Eq. (\[eq:eig\]) generated with $\mathbf{K}_{m,\sigma}$ or $\mathbf{C}_{m,\sigma}$. Smaller than $\ell$, the parameter $\sigma$ is chosen to reflect the short width of shocks and discontinuities in numerical simulations, which is typically a couple of grid spacings. In this manner, we use two length-scale parameters: $\ell$ for the interpolation model, and $\sigma$ for shock-capturing. Another key ingredient is the set of linear weights $\gamma_m$, $m=1, \dots, 2D+1$. Let $\boldsymbol{\gamma}=[\gamma_1, \dots, \gamma_{2D+1}]^T$ be a vector containing the $2D+1$ linear weights, each corresponding to one of the substencils.
These weights are retrieved by solving an over-determined linear system $$\mathbf{M}\boldsymbol{\gamma} = \mathbf{w}_*, \label{eq:gamsys}$$ where the $n$-th column of $\mathbf{M}$ is given by $\mathbf{w}_n$, and $\bf{w}_*$ is the vector of model weights for the interpolation point $\bf{x}_*$ relative to the total stencil $S$. As mentioned previously, these weights are generated using the length-scale parameter $\ell$. We should note that $\bf{M}$ is a potentially sparse matrix, constructed from the substencil model weights. For our GP modeling procedure in multiple spatial dimensions, we cast the multidimensional stencil $S$ as a flattened array. To illustrate this concept we explore a 2$D$ example where the coarse level cells are refined with a refinement ratio of 4 in both $x$ and $y$ directions, i.e., $r_x=r_y=4$. Suppose $D = 2$, in which case the total stencil $S$ lies in the $5\times5$ patch of cells centered at $(i,j)$ and contains 13 data points. The total stencil is subdivided into five 5-point substencils $S_m$, $m=1, \dots, 5$. We take the natural cross-shape substencil for each $S_m$, on each of which GP will approximate function values (i.e., state values of density, pressure, etc.) at 16 new refined locations, i.e., $(i\pm 1/4, j \pm 1/4)$, $(i\pm 1/4, j \pm 3/4)$, $(i\pm 3/4, j \pm 1/4)$, and $(i\pm 3/4, j \pm 3/4)$. For instance, let us choose $\mathbf{x}_{i+1/4, j+1/4}$ as the location at which we wish GP to compute function values for prolongation. Explicitly, the five 5-point cross-shaped substencils, centered at $\mathbf{x}_{i,j}$, $\mathbf{x}_{i-1,j}$, $\mathbf{x}_{i+1,j}$, $\mathbf{x}_{i,j-1}$, and $\mathbf{x}_{i,j+1}$ respectively, are chosen as $$\begin{aligned} S_1 &= \left\{\mathbf{x}_{i,j-1}, \mathbf{x}_{i-1,j}, \mathbf{x}_{i,j}, \mathbf{x}_{i+1,j}, \mathbf{x}_{i,j+1}\right\},\\ S_2 &= \left\{\mathbf{x}_{i-1,j-1}, \mathbf{x}_{i-2,j}, \mathbf{x}_{i-1,j}, \mathbf{x}_{i,j}, \mathbf{x}_{i-1,j+1}\right\},\\ S_3 &= \left\{\mathbf{x}_{i+1,j-1}, \mathbf{x}_{i,j}, \mathbf{x}_{i+1,j}, \mathbf{x}_{i+2,j}, \mathbf{x}_{i+1,j+1}\right\},\\ S_4 &= \left\{\mathbf{x}_{i,j-2}, \mathbf{x}_{i-1,j-1}, \mathbf{x}_{i,j-1}, \mathbf{x}_{i+1,j-1}, \mathbf{x}_{i,j}\right\},\\ S_5 &= \left\{\mathbf{x}_{i,j}, \mathbf{x}_{i-1,j+1}, \mathbf{x}_{i,j+1}, \mathbf{x}_{i+1,j+1}, \mathbf{x}_{i,j+2}\right\}. \end{aligned}$$ In this example, the total stencil $S$ is constructed to satisfy $\displaystyle\bigcap_{m=1}^{5} S_m = \{\mathbf{x}_{i,j} \}$ and $\displaystyle\bigcup_{m=1}^{5} S_m = S$, containing 13 data points whose local indices range from $i-2,j-2$ to $i+2,j+2$, excluding the 12 cells in the corner regions. See Fig. \[fig:2dS\] for a detailed schematic of the multi-substencil approach.
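Solving Eq. (\[eq:gamsys\]) is an ordinary over-determined least-squares problem; here is a sketch with a synthetic matrix standing in for the actual GP weights (NumPy’s SVD-based `lstsq` stands in for the QR solve used in the implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for M gamma = w_* of Eq. (gamsys): 13 total-stencil
# weights expressed through 5 substencil weight columns (entries illustrative).
M = rng.normal(size=(13, 5))
gamma_true = np.array([0.4, 0.15, 0.15, 0.15, 0.15])
w_star = M @ gamma_true  # consistent right-hand side

gamma, *_ = np.linalg.lstsq(M, w_star, rcond=None)
```

When the right-hand side lies in the column space of $\mathbf{M}$, as constructed here, the least-squares solve recovers the linear weights exactly.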
[Fig. \[fig:2dS\] (schematic): the 13-point total stencil $S$ and its five cross-shaped substencils $S_m$ on a coarse level.] Using these data points, we build the 13$\times$5 over-determined system $$\begin{pmatrix} w_{1,1} & 0 & 0 & 0 & 0 \\ w_{1,2} & w_{2,1} & 0 & 0 & 0 \\ w_{1,3} & 0 & w_{3,1} & 0 & 0 \\ w_{1,4} & 0 & 0 & w_{4,1} & 0 \\ 0 & w_{2,2} & 0 & 0 & 0 \\ 0 & w_{2,3} & w_{3,2} & 0 & 0 \\ w_{1,5} & w_{2,4} & w_{3,3} & w_{4,2} & w_{5,1} \\ 0 & 0 & w_{3,4} & w_{4,3} & 0 \\ 0 & 0 & 0 & w_{4,4} & 0 \\ 0 & w_{2,5} & 0 & 0 & w_{5,2} \\ 0 & 0 & w_{3,5} & 0 & w_{5,3} \\ 0 & 0 & 0 & w_{4,5} & w_{5,4} \\ 0 & 0 & 0 & 0 & w_{5,5} \end{pmatrix} \begin{pmatrix}\gamma_1 \\ \gamma_2 \\ \gamma_3 \\ \gamma_4 \\ \gamma_5 \end{pmatrix} = \begin{pmatrix}w_{1} \\ w_{2} \\ w_{3} \\ w_{4} \\ w_{5} \\ w_{6} \\ w_{7} \\ w_{8} \\ w_{9} \\ w_{10} \\ w_{11} \\ w_{12} \\ w_{13} \end{pmatrix}, \label{eq:2Dgam}$$ which is solved using the QR factorization method for least squares. Notice that the pointwise SE kernel of Section \[sec:gp\_ptwise\_prol\] and the integrated SE kernel of Section \[sec:gp\_vol\_prol\] are both isotropic kernels.
Hence, every $\mathbf{K}_{m,\sigma}$ and $\mathbf{C}_{m,\sigma}$ is identical over each substencil, meaning that the WENO combination weights (i.e., $\mathbf{w}_m^T$) and GP model weights (i.e., $\mathbf{w}_*^T$ and $\mathbf{T}_*^T$) only need to be computed and saved once per level, and reused later. The nonlinear weighting approach of GP-WENO designed in this probabilistic way has proven to be robust and accurate in treating discontinuities [@reyes_new_2016; @reyes2019variable]. However, its nonlinear nature requires the nonlinear weights to be computed over the entire computational domain, consuming extra computing time. In this regard, one can save computation if the GP-WENO weighting is performed only when needed, i.e., near sharp gradients identified by a shock detector. In our GP formulation, we already have a good candidate for a shock detector, namely the GP-based $\beta_m$. To this end, we slightly modify Eq. (\[eq:eig\]) to introduce an optional switching parameter $\alpha$, defined by $$\alpha = \frac{\sum\limits_{i=1}^{2D+1} \displaystyle\frac{1}{\lambda_i} (\mathbf{v}_i^T\mathbf{f})^2} {\mathbb{E}_{arith}^2[\mathbf{f}] + \epsilon_2 }. \label{eq:alph}$$ Here, the data array $\mathbf{f}$ includes the $2D+1$ data values chosen solely from the centermost substencil, e.g., $S_1$ in Fig. \[fig:2dS\]; $\mathbb{E}_{arith}^2[\mathbf{f}]$ is the squared arithmetic mean of the sampled coarse-grid data over the $(2D+1)$-point substencil centered at the cell $(i,j,k)$, i.e., $S_1$, $$\mathbb{E}^2_{arith}[\mathbf{f}] = \left(\frac{1}{2D+1}\displaystyle\sum_{\mathbf{x}\in S_1}f(\mathbf{x})\right)^2;$$ and finally, $\epsilon_2$ is a safety parameter in case the substencil data values are all zeros. Notice that $\alpha$ is just a scaled version of the $\beta_m$ in Eq. (\[eq:eig\]) for the central substencil $S_1$.
Since the $\sigma$ GP model is built with smooth data in mind, as prescribed by the smooth SE kernel, this parameter detects “unlikeliness” of the data $\bf{f}$ with respect to the GP model. Note that the critical value of $\alpha$, called $\alpha_c$, will depend on the kernel chosen. Without the normalization by the squared arithmetic mean, this factor would vary with the mean value of the data. In this regard, dividing by the average value of the data $\bf{f}$ helps to normalize the factor without changing the variability detection. From the statistical interpretation of the GP model, $\mathbb{E}_{arith}^2[\mathbf{f}]$ may be viewed as a likelihood measure for a GP that assumes uncorrelated data (i.e., $\bfK_{ij}=\delta_{ij}$), so that $\alpha$ becomes normalized relative to the likelihood of another model. We choose a critical value $\alpha_c$ so that shocks and high variability in $\bf{f}$ are detected when $\alpha > \alpha_c$, and smooth, low-variability data when $\alpha \leq \alpha_c$. We heuristically set $\alpha_c=100$ in this strategy. Using this $\alpha$ parameter we have a switching mechanism between the more expensive nonlinear multi-substencil GP-WENO method in this section and the linear single-stencil GP model in Sections \[sec:gp\_ptwise\_prol\] and \[sec:gp\_vol\_prol\]. Using the multi-substencil GP-WENO method, there are generally $2D+1$ dot products of the stencil size for each of the $\prod_d r_d$ prolonged points. In patch-based AMR, even though refined grids are localized around the regions containing shocks and turbulence, there are often areas of smooth flow in every patch. The switch $\alpha$ allows us to reduce the cost to a single dot product of the stencil size for each prolonged point whenever the coarse stencil data is smooth. This is extremely useful in 3D and when the refinement ratio is greater than 2.
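To illustrate the switching behavior, the following sketch evaluates $\alpha$ on a smooth and on a discontinuous 3-point substencil ($D=1$); the data and the choice $\sigma/\Delta = 2$ are illustrative:

```python
import numpy as np

sigma, eps2 = 2.0, 1.0e-36   # illustrative sigma; eps2 guards against all-zero data
xs = np.array([-1.0, 0.0, 1.0])
Ksig = np.exp(-((xs[:, None] - xs[None, :]) ** 2) / (2.0 * sigma ** 2))
lam, V = np.linalg.eigh(Ksig)

def alpha(f):
    # Switch of Eq. (alph): beta of the central substencil over the squared mean
    beta = sum((V[:, i] @ f) ** 2 / lam[i] for i in range(len(f)))
    mean_sq = (f.sum() / len(f)) ** 2
    return beta / (mean_sq + eps2)

a_smooth = alpha(np.array([1.00, 1.01, 1.02]))  # slowly varying data
a_shock = alpha(np.array([1.0, 1.0, 5.0]))      # jump inside the stencil
```

The jump drives $\alpha$ up by more than an order of magnitude relative to the smooth data, which is the separation the threshold $\alpha_c$ exploits.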
We conclude this section by remarking on one significant feature of GP which we do not explore in our current study. The multi-substencil GP-WENO methods outlined in [@reyes_new_2016; @reyes2019variable] can, in smooth flows, variably increase or decrease the order of accuracy. However, in the application to AMR prolongation there may be large grids to be refined, so the increased computational cost can become undesirable. Note that the linear single-model GP interpolation is still $\mathcal{O}(\Delta x^3)$, and serves as a high-order accurate prolongation that often matches the order of accuracy of the simulation. Reyes et al. [@reyes_new_2016; @reyes2019variable] discuss how to vary accuracy as a tunable parameter within the GP methodology. The studies show that the GP radius $R$ of the stencil dictates the order of accuracy. The method illustrated in this paper utilizes a GP radius $R=1$ and is $\mathcal{O}(\Delta x^3)$; with $R=2$ one retrieves a method that is $\mathcal{O}(\Delta x^5)$. Implementation {#sec:method} ============== The multi-substencil GP-WENO prolongation method is implemented within the AMReX framework. AMReX is a hybrid C++/Fortran library with many routines to support the complex algorithmic nature of patch-based AMR and state-of-the-art high performance computing. As an example, the object-oriented nature of C++ is fully utilized to furnish simple data structures and workflow. In AMReX there is a virtual base class called *Interpolator*. This class has many derived classes, including *CellConservativeLinear*, an object for the functions related to cell-based conservative linear interpolation. The methods presented in the current work reside in the *CellGaussian* class within the AMReX framework. This class constructs a GP object, which contains the model weights for each of the $\prod_{d = 1}^{D} r_d$ new points per cell as member data.
When a simulation is executed in parallel, each MPI rank holds its own Interpolator object, which helps avoid unnecessary communication. Computationally, the order of execution is as follows: 1. The refinement ratio and $\Delta \mathbf{x}$ are passed to the constructor of the GP object. 2. Build GP covariance matrices for both interpolation, $\bK, \bK_m$, and shock detection, $\bK_{m,\sigma}$ ($\mathbf{C},\mathbf{C}_m$ and $\mathbf{C}_{m,\sigma}$ for cell-averaged data), using the SE kernel, Eq. (\[eq:sqrexpcov\]) (Eq. (\[eq:intsq\]) for the integrated kernel). - $\bK$ and $\bK_m$ are used for prolongation and should be used with the $\ell$ hyperparameter. This should be on the order of the size of the stencil to match our model assumption that the data varies smoothly over the stencil. We adopt $\ell = 12\cdot \Delta$, where $\Delta = \min(\Delta x_d)$. - $\bK_{m,\sigma}$ is used in the shock detection through the GP smoothness indicators and takes $\sigma$ as the length hyperparameter, with $\sigma/\Delta \sim 1.5$–$3.5$ corresponding to the typical shock width in high-order Godunov method simulations. 3. Calculate the GP weights $\mathbf{w}_*$ ($\mathbf{z}_*$) for all $\prod_{d=1}^{D} r_d$ prolonged points using Eq. (\[eq:pinterp2\]) (Eq. (\[eq:vinterp2\])). These weights are calculated only once for every possible refinement ratio and stored for use throughout the simulation. 4. Compute the eigensystem of $\bK_{m,\sigma}$ as part of building the shock-capturing model. The eigenvectors $\mathbf{v}_i/\sqrt{\lambda_i}$ are stored for use in calculating $\beta_m$ and $\alpha$. 5. Solve for $\boldsymbol{\gamma}$ for each prolonged point using the weights from $S_m$ and $S$. Just as for the GP weights, $\boldsymbol{\gamma}$ is calculated only once and stored for each possible refinement ratio. 6. For each coarse cell, the switch parameter $\alpha$ is calculated and compared to $\alpha_c$ (we choose $\alpha_c=100$).
- When $\alpha < \alpha_c$ the data is determined to be smooth enough not to need the full nonlinear GP-WENO prolongation. Instead, the GP weights over the total stencil $\mathbf{w}_*$ ($\mathbf{z}_*$) may be used without any nonlinear weighting. - When $\alpha \geq \alpha_c$ the points are prolonged using the nonlinear multi-substencil GP-WENO model (e.g., one of the methods in Sections \[sec:gp\_ptwise\_prol\] and \[sec:gp\_vol\_prol\], plus the nonlinear controls in Section \[sec:GP-WENO\]). If needed, the parameter $\alpha_c$ can be tuned to adjust the sensitivity of the GP shock detection. By lowering $\alpha_c$, shocks will be detected more frequently, increasing the overall computation since GP-WENO will be activated on a larger number of cells. In most practical applications such tuning is unnecessary, considering that strong shocks are fairly localized, and in such regions $\alpha$ would retain a value much larger than $\alpha_c$ anyway. The condition $\alpha > \alpha_c$ for nonlinear GP-WENO would therefore be met over a wide span of values of $\alpha_c$ users might set. The localized nature of shocks thus allows the computationally efficient linear GP model to be used wherever the shock handling mechanism is not required. In what follows, we set $\alpha_c = 100$ in the numerical test cases presented in Section \[sec:testing\]. To illustrate, we show the $\alpha$ values associated with a Gaussian profile elevated above a circular cylinder of height 0.25, defined by the function $$f(x, y) = \begin{cases} 1 + \exp\left(-(x^2 + y^2)\right) & \textrm{if}\hspace{1mm} (x^2 + y^2) < 0.5, \\ 0.25 & \textrm{else}. \end{cases}$$ In Fig. \[fig:alpha\], we demonstrate how $\alpha$ varies over this profile, which combines a smooth continuous region with an abrupt discontinuity.
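The role of the switch can be sketched as follows. This is a schematic with our own hypothetical names, and the WENO-style normalization shown here stands in for the actual nonlinear weighting of Section \[sec:GP-WENO\]: when the data is smooth the single linear model is used, and otherwise the substencil predictions are combined with smoothness-dependent weights.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Schematic hybrid prolongation: pick the cheap linear GP model when the
// shock detector alpha is below alpha_c, otherwise blend the substencil
// predictions q_m with WENO-style nonlinear weights built from the
// smoothness indicators beta_m and the linear weights gamma_m.
double hybrid_prolong(double alpha, double alpha_c,
                      double linear_prediction,
                      const std::vector<double>& q,      // substencil predictions
                      const std::vector<double>& beta,   // smoothness indicators
                      const std::vector<double>& gamma)  // linear combination weights
{
    if (alpha < alpha_c) return linear_prediction;  // smooth: single-model GP

    const double eps = 1e-36;
    std::vector<double> omega(q.size());
    double norm = 0.0;
    for (size_t m = 0; m < q.size(); ++m) {
        omega[m] = gamma[m] / ((beta[m] + eps) * (beta[m] + eps));
        norm += omega[m];
    }
    double value = 0.0;
    for (size_t m = 0; m < q.size(); ++m) value += (omega[m] / norm) * q[m];
    return value;
}
```

When every $\beta_m$ is equal (uniformly smooth data), the nonlinear weights reduce to the linear $\gamma_m$, so the two branches agree on smooth data; the $\alpha$ test simply skips the extra work.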
It is observed that the $\alpha$ value is close to 2 over the continuous region. However, at the points corresponding to the sharp discontinuity, $(x^2 + y^2) = 0.5$, $\alpha$ soars to over 300, resulting in the full engagement of the multi-substencil GP-WENO model near the discontinuity. In the rest of the smooth region, $\alpha$ remains much smaller than $\alpha_c$, and therefore only the linear GP model is used. This also suggests that the linear GP model would be a sufficient AMR prolongation algorithm in an incompressible setting. AMReX Programming Directives ---------------------------- We make our implementation of the present work publicly available at <https://github.com/stevenireeves/amrex> in the GP-AMR branch. Written in C++, it utilizes AMReX’s hardware-agnostic parallelization macros and lambda functions. The code is designed to utilize pragmas that declare the interpolation function as callable from either a CPU or GPU. The AMReX parallelization strategy is similar for both CPU-based supercomputers (e.g., Cori at NERSC) and GPU-based machines (e.g., Summit at the Oak Ridge Leadership Computing Facility (OLCF), as well as Perlmutter at NERSC and the forthcoming Frontier at OLCF). The strategy is to use MPI for domain decomposition, OpenMP for CPU-based multi-threading, and CUDA (and HIP in the future) for GPU accelerators. Data allocation, CPU-GPU data transfers, and handling are natively embedded in most AMReX data types and objects. For a more in-depth look into how the AMReX software framework is implemented, we invite interested readers to refer to [@amrex]. To provide a simple example of the AMReX style of accelerator programming, let us suppose that we wish to add the integer value of 1 to every component of the AMReX datatype *Array4*. The datatype is a plain-old-data object which is a four-dimensional array indexed as $(i, j, k, n)$. The first three indices are spatial indices and the last is for each individual component (e.g., fluid density, $\rho$).
We can use the AMReX lambda function **AMREX\_PARALLEL\_FOR\_4D** to expand a 4D loop in a parallel fashion. For instance, **AMREX\_PARALLEL\_FOR\_4D** distributes the code segment in Listing 1 to an equivalent format in Listing 2: AMREX_PARALLEL_FOR_4D(bx, ncomp, i, j, k, n, { my_array(i,j,k,n) += 1; }); for(int i = lo.x; i < hi.x; ++i){ for(int j = lo.y; j < hi.y; ++j){ for(int k = lo.z; k < hi.z; ++k){ AMREX_PRAGMA_SIMD for(int n = 0; n < ncomp; ++n) my_array(i,j,k,n) += 1; } } } This formulation allows one code to be compiled for either CPU execution or GPU launching. The AMReX lambda functions are expanded by the compiler, and the box dimensions (lo.x - hi.x, etc.) differ based on the target device. For GPUs, the lo and hi variables are set based on how much data each GPU thread will handle. In the CPU version, lo and hi are the bounds of the target tile boxes. Essentially, the lambda handles the GPU kernel launch or CPU for-loop expansion for the developer/user. There are other approaches available to launch a parallel region in AMReX for GPU execution. We recommend that interested readers view the GPU tutorials in the AMReX source code for more information on the various types of launch macros [@amrex]. While a detailed description of GPU computing is beyond the scope of this paper, we provide the general principle of our strategy to implement the GP-AMR algorithm for GPUs: 1. Construct the model weights $\mathbf{T}^T_{*}\mathbf{C}^{-1}$ for each stencil $S_m$, $\boldsymbol{\gamma}$, and the eigensystem of $\mathbf{C}_\sigma$ on the CPU at the beginning of program execution or at the initialization of each AMR level. 2. Create a GPU copy of these variables and transfer them to the GPU global memory space. Every core on the GPU will need to access them, but does not need its own copy. 3. Create a function for the prolongation. This function requires both the coarse grid data as an input and the fine grid data as an output.
Both arrays need to be in the global GPU memory space. This function is launched on the GPU, and the fine level is filled accordingly. In general, with GPU computing, it is best to do as few memory transfers between the CPU and GPU as possible, because a memory transfer can cost hundreds or thousands of compute cycles and can drastically slow down an application. To further explain these steps, Figure \[fig:diagram\] shows an example call graph along with CPU-GPU memory transfers. In this diagram, it is assumed that the coarse and fine state variables have already been constructed and allocated on the GPU, as is the case in AMReX. ![\[fig:diagram\] Diagram illustrating a call graph for GP-AMR utilizing GPUs as accelerators. ](./diagram-eps-converted-to){width="65.00000%"} Results {#sec:results} ======= In this section, we present the performance of the new GP-based prolongation model compared with the default conservative linear polynomial scheme in AMReX. To illustrate the utility of the new GP-based prolongation scheme in fluid dynamics simulations, we integrated the prolongation method into two AMReX application codes, Castro [@castro], a massively parallel, AMR, compressible astrophysics simulation code, and PeleC [@pelec], a compressible combustion code, as well as a simple advection tutorial code built in AMReX. Accuracy -------- To test the order of accuracy of the proposed method, a simple Gaussian profile is refined with the GP prolongation method. This profile follows the formula $$f(\mathbf{x}) = \exp(-||\bx||^2) \label{eq:acc}$$ where $\mathbf{x} \in [-2, 2] \times [-2, 2]$. We compare the prolonged solution, denoted as $f_p(\bx)$, against the analytical value, $f(\bx)$, associated with the Gaussian profile function. We find that the accuracy of the cell-averaged GP prolongation routine matches the analysis in [@reyes_new_2016; @reyes2019variable].
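The convergence measurement itself is generic: compute the 1-norm error on two grids whose spacing differs by the refinement jump of 2, then take the base-2 logarithm of the error ratio. The sketch below illustrates the procedure using a simple second-order midpoint interpolant of the same Gaussian in place of the GP prolongation; the helper names are our own.

```cpp
#include <cassert>
#include <cmath>

// L1 error of midpoint linear interpolation of f(x) = exp(-x^2) on [-2, 2]
// with n cells; halving h should cut the error by ~4 (second order).
double midpoint_l1_error(int n) {
    double h = 4.0 / n, err = 0.0;
    for (int i = 0; i < n; ++i) {
        double xl = -2.0 + i * h, xr = xl + h, xm = 0.5 * (xl + xr);
        double interp = 0.5 * (std::exp(-xl * xl) + std::exp(-xr * xr));
        err += std::fabs(interp - std::exp(-xm * xm));
    }
    return err / n;
}

// Observed order of accuracy from errors on two grids whose spacing differs
// by a factor of 2: p = log2(E_coarse / E_fine). A third-order method such
// as the R = 1 GP prolongation should give p close to 3.
double observed_order(double e_coarse, double e_fine) {
    return std::log2(e_coarse / e_fine);
}
```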
The convergence rate of the error in the 1-norm, $E=||f-f_p||_1$, computed using the GP prolongation model with $R=1$, exhibits the expected third-order accuracy, following the theoretical slope of third-order convergence on the grid scales, $\mathcal{O}(\Delta x^3)$. ![\[fig:err\] Convergence for the GP method. The quantities are measured in $\log$ of base 2 to better cope with the refinement jump ratio of 2.](./err_convergence_a){width="65.00000%"} GP-AMR Tests {#sec:testing} ------------ There are several test problems we will examine. The first is a single vortex advection, provided as the *Advection\_AmrLevel* tutorial in AMReX. Next we present a modified version of the slotted cylinder problem from [@mapped]. The subsequent problems, using Castro, are classic hydrodynamic test problems including the Sedov blast wave test [@sedov] and the double Mach reflection problem [@dmr]. Lastly, we discuss a premixed flame simulation from PeleC. In all tests we use $\ell = 12\cdot\min(\Delta x_d)$. For the 2D simulations, $\sigma = 3\cdot\min(\Delta x_d)$ is used, and for the 3D simulations we use $\sigma = 1.5\cdot\min(\Delta x_d)$. ### Single Vortex Advection using the AMReX tutorial The first test is a simple reversible vortex advection run. A radial profile is morphed into a vortex and then reversed back into its original shape. This stresses the AMR prolongation’s ability to recover the profile after it has been advected into the coarse cells, so that at the final time the solution can return to its original shape. The radial profile is initially defined by $$f(x, y) = 1 + \exp\Bigl[-100\left(\left(x-0.5\right)^2 + \left(y-0.75\right)^2\right)\Bigr].$$ The profile is advected with the velocity field $$\mathbf{v}(x,y,t) = \nabla \times \psi, \label{eq:veloc}$$ the curl of the stream function $$\psi(x,y,t) = \frac{1}{\pi}\sin\left(\pi x\right)^2 \sin\left(\pi y\right)^2 \cos\left(\pi \frac{t}{2}\right).$$ Here $(x,y)\in[0,1]\times[0,1]$.
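Taking the curl analytically gives the velocity components $u = \partial\psi/\partial y = \sin^2(\pi x)\sin(2\pi y)\cos(\pi t/2)$ and $v = -\partial\psi/\partial x = -\sin(2\pi x)\sin^2(\pi y)\cos(\pi t/2)$, which can be sketched as follows (our own helper, not the tutorial code):

```cpp
#include <cassert>
#include <cmath>

// Velocity field v = curl(psi) for the single-vortex stream function
// psi = (1/pi) sin^2(pi x) sin^2(pi y) cos(pi t / 2):
//   u =  d(psi)/dy =  sin^2(pi x) sin(2 pi y) cos(pi t / 2)
//   v = -d(psi)/dx = -sin(2 pi x) sin^2(pi y) cos(pi t / 2)
// The cos(pi t / 2) factor reverses the flow, returning the profile to its
// initial shape at t = 2.
const double PI = 3.14159265358979323846;

double vortex_u(double x, double y, double t) {
    double sx = std::sin(PI * x);
    return sx * sx * std::sin(2.0 * PI * y) * std::cos(PI * t / 2.0);
}

double vortex_v(double x, double y, double t) {
    double sy = std::sin(PI * y);
    return -std::sin(2.0 * PI * x) * sy * sy * std::cos(PI * t / 2.0);
}
```

The field is divergence-free by construction ($\partial u/\partial x = \pi\sin(2\pi x)\sin(2\pi y)\cos(\pi t/2) = -\partial v/\partial y$), and the velocity vanishes on the domain boundary.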
In this demonstration, the level 0 grid size is $64 \times 64$, with two additional levels of refinement surrounding the radial profile. The simulation is an incompressible advection problem using the MAC projection to enforce the divergence-free condition on the velocity field numerically, $\nabla \cdot \mathbf{v}=0$ [@mac]. The flux is calculated by a simple second-order accurate upwind linear reconstruction method. Although the overall solution is second-order accurate, lower than the third-order accuracy of the GP prolongation method, this example still illustrates the computational performance of the GP-based method over the default conservative second-order prolongator. [0.33]{} ![\[fig:amrlev\] The progression of the 2D radial profile with 4 levels of refinement using the multi-substencil GP prolongation algorithm. ](Phi0-eps-converted-to "fig:"){width="\textwidth"} [0.33]{} ![\[fig:amrlev\] The progression of the 2D radial profile with 4 levels of refinement using the multi-substencil GP prolongation algorithm. ](Phi25-eps-converted-to "fig:"){width="\textwidth"} [0.33]{} ![\[fig:amrlev\] The progression of the 2D radial profile with 4 levels of refinement using the multi-substencil GP prolongation algorithm. ](Phi50-eps-converted-to "fig:"){width="\textwidth"} [0.33]{} ![\[fig:amrlev\] The progression of the 2D radial profile with 4 levels of refinement using the multi-substencil GP prolongation algorithm. ](Phi75-eps-converted-to "fig:"){width="\textwidth"} [0.33]{} ![\[fig:amrlev\] The progression of the 2D radial profile with 4 levels of refinement using the multi-substencil GP prolongation algorithm. ](Phi100-eps-converted-to "fig:"){width="\textwidth"} [0.33]{} ![\[fig:amrlev\] The progression of the 2D radial profile with 4 levels of refinement using the multi-substencil GP prolongation algorithm. ](Phi120-eps-converted-to "fig:"){width="\textwidth"} The simulation finishes at $t = 2$.
We used sub-cycling of time-steps to improve the overall performance, in which a smaller time-step $\Delta t_f$ is used on a finer level to advance the regional solutions for stability. The coarser level solutions, which advance with a larger time-step $\Delta t_c$, wait until the solutions on the finer levels catch up with the global simulation time $t_g=t^{n}+\Delta t_c$ over the number of sub-cycling steps $N_{\mbox{subcycle}}=\Delta t_c/ \Delta t_f$. We present the performance and accuracy results for this problem in Table \[tab:SingleVort\]. Execution Time Prolongation Time \# of calls $L_1$ error ------------- ---------------- ------------------- ------------- ------------- $2D$ GP-AMR 0.2323s 0.004168s 9115 0.00033 $2D$ Linear 0.2335s 0.008436s 9113 0.00071 $3D$ GP-AMR 1.6523s 0.086361s 21929 0.00151 $3D$ Linear 1.6640s 0.157623s 21893 0.00160 : Accuracy and Performance of GP-AMR against the default linear AMR for the single vortex test on a workstation with an Intel i7-8700K processor, with 6 MPI ranks.[]{data-label="tab:SingleVort"} Since the two methods are of different order, they can yield different AMR level patterns, which can lead to the slight difference in the number of function calls. In the prolongation functions we find that the default linear prolongation took approximately twice as much time as the GP prolongation. This is due to the smoothness of the solution not requiring the multi-modeled treatment, allowing the simplified GP algorithm to be used. However, the overall simulation times were comparable, since in the GP case there were larger areas (or more cells) that followed the profile and were computed on the finest AMR level than in the linear case. We note that the cost of computing the GP model weights, 0.0002306 seconds on average, was negligible in comparison to the program’s execution time; the weights were computed twice (once per refined level) per MPI rank.
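The sub-cycling relation $N_{\mbox{subcycle}}=\Delta t_c/\Delta t_f$ used in this test can be sketched as follows. This is a schematic two-level version with hypothetical advance callbacks, not the AMReX sub-cycling driver.

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Schematic two-level subcycled advance: the coarse level takes one step of
// size dt_c while the fine level takes N = dt_c / dt_f smaller steps, after
// which both levels meet at the synchronization time t + dt_c.
int subcycled_advance(double t, double dt_c, int refine_ratio,
                      const std::function<void(double, double)>& advance_coarse,
                      const std::function<void(double, double)>& advance_fine) {
    advance_coarse(t, dt_c);
    const int n_subcycle = refine_ratio;   // dt_f = dt_c / r keeps the fine CFL
    const double dt_f = dt_c / n_subcycle;
    for (int s = 0; s < n_subcycle; ++s)
        advance_fine(t + s * dt_f, dt_f);  // fine level catches up to t + dt_c
    return n_subcycle;
}
```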
We also check the level-averaged $L_1$ error between the solution at $t=2$ and the solution at $t=0$ for both AMR prolongation methods. In the 2D case we find that the error of the GP-AMR solution is approximately half of that produced by the default method. This highlights the utility of a high-order prolongation method, as smooth features are better recovered after they have been advected into coarser cells. Another useful examination is the analogous problem in 3D, in which the computational stencils for both methods grow. For the 3D version we use a $32 \times 32 \times 32$ base grid with 2 levels of refinement. The details of this simulation can also be found in Table \[tab:SingleVort\]. We note that the parallel copy operation becomes slightly more expensive with GP because the GP multi-substencil is needed on non-smooth regions to handle discontinuities in a stable manner, as managed by the $\alpha_c$ parameter. This becomes more apparent in 3D, as the computational stencil effectively grows from 7 to 25 cells when using the multi-substencil approach. In this 3D benchmark, the difference in error between these methods is less than in the 2D case. The GP-AMR simulation still outperforms the linearly prolonged simulation, but to a smaller degree. The 3D simulation has a base grid coarser by a factor of 2 than the 2D simulation, leading to a coarser finest grid in the simulation. This is why the $L_1$ errors are greater in the 3D case. In Figure \[fig:amrlev\] we show the same 2D single vortex advection with GP-AMR on a base grid of $64 \times 64$ with 4 levels of refinement, to illustrate the GP method with a more production-level grid configuration. ### Slotted Cylinder as another AMReX Test Another useful test is the slotted cylinder advection presented in [@mapped]. In this paper we do not use an exact replica of this problem; instead we utilize the initial profile and perform a similar transformation as in the previous problem.
That is, the slotted cylinder is morphed using the same velocity used in the Single Vortex test. Because the advected profile is now piecewise constant, compared to the smooth profile in the previous problem, it will require the use of the nonlinear GP-WENO prolongation and test the capabilities of the GP-based smoothness indicators for prolongation. The slotted cylinder is defined as a circle (in 2D) of radius $R=0.15$ centered at $\bx_c = (x_c, y_c) = (0.5, 0.75)$ with a slot of width $W=0.05$ and height $H=0.25$ removed from the center of the cylinder. The initial condition is given by $$\phi_0(\mathbf{x}) = \begin{cases} 0, \quad \mbox{if } R < \sqrt{(x-x_c)^2 + (y-y_c)^2}, \\ 0, \quad \mbox{if } 2|x - x_c| < W \;\; \mbox{and } 0 < y - y_c + R < H, \\ 1, \quad \textrm{else}, \end{cases}$$ where $(x,y)\in[0,1]\times[0,1]$. The initial profile is shown in Figure \[fig:slot0\]. In this case we wish to find the simulation that best retains the profile of the initial condition when it is completed at $t = 2$, as in the previous test. ![\[fig:slot0\] The slotted cylinder at $t=0$ over the entire domain with 3 AMR levels. ](./Phi_ic-eps-converted-to){width="8cm"} We have two levels of refinement on a base grid of size $64 \times 64$. Figure \[fig:slot1\] contains snapshots of the simulations at times $t = 0.28, 1.44$ and $t = 2$. The goal is to retain as much of the initial condition as possible, in a similar fashion to the previous 2D vortex advection test. ![\[fig:slot1\] The morphed slotted cylinder problem at times $t=0.28, 1.44, 2$, from left to right in time. Top: (a) Default AMReX with linear prolongation. Bottom: (b) AMReX with adaptive multi-modeled GP prolongation.](./clin25-eps-converted-to "fig:"){width="32.50000%"} ![\[fig:slot1\] The morphed slotted cylinder problem at times $t=0.28, 1.44, 2$, from left to right in time. Top: (a) Default AMReX with linear prolongation.
Bottom: (b) AMReX with adaptive multi-modeled GP prolongation.](./clin75-eps-converted-to "fig:"){width="32.50000%"} ![\[fig:slot1\] The morphed slotted cylinder problem at times $t=0.28, 1.44, 2$, from left to right in time. Top: (a) Default AMReX with linear prolongation. Bottom: (b) AMReX with adaptive multi-modeled GP prolongation.](./clin120-eps-converted-to "fig:"){width="32.50000%"} \ ![\[fig:slot1\] The morphed slotted cylinder problem at times $t=0.28, 1.44, 2$, from left to right in time. Top: (a) Default AMReX with linear prolongation. Bottom: (b) AMReX with adaptive multi-modeled GP prolongation.](./gp25-eps-converted-to "fig:"){width="32.50000%"} ![\[fig:slot1\] The morphed slotted cylinder problem at times $t=0.28, 1.44, 2$, from left to right in time. Top: (a) Default AMReX with linear prolongation. Bottom: (b) AMReX with adaptive multi-modeled GP prolongation.](./gp75-eps-converted-to "fig:"){width="32.50000%"} ![\[fig:slot1\] The morphed slotted cylinder problem at times $t=0.28, 1.44, 2$, from left to right in time. Top: (a) Default AMReX with linear prolongation. Bottom: (b) AMReX with adaptive multi-modeled GP prolongation.](./gp120-eps-converted-to "fig:"){width="32.50000%"} The result shows that the multi-substencil GP-AMR prolongation preserves the initial condition better than the conservative linear scheme native to AMReX. Notably, there is far less smearing and the circular nature of the cylinder is better retained with GP-AMR. Furthermore, it should be noted that a larger area of the slotted cylinder is covered by the finest grid structure with the GP-AMR prolongation. The refinement criteria in this test and the previous are set for critical values of the profile. This is analogous to refining on regions of high density or pressure. We wish to trace the slotted cylinder’s evolution with the finest grid. In this way, we can directly compare the ‘diffusivity’ of each method in how it retains this grid. 
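The slotted-cylinder initial profile used in this test can be written out as a small helper. This is our own reading of the geometric description (circle of radius $R=0.15$ at $(0.5, 0.75)$, slot of width $W=0.05$ and height $H=0.25$ cut upward from the bottom of the cylinder), not the tutorial code.

```cpp
#include <cassert>
#include <cmath>

// Slotted-cylinder initial condition: phi = 1 inside a circle of radius
// R = 0.15 centered at (0.5, 0.75), except inside a slot of width W = 0.05
// and height H = 0.25 cut upward from the bottom of the cylinder, and
// phi = 0 everywhere else.
double slotted_cylinder(double x, double y) {
    const double xc = 0.5, yc = 0.75, R = 0.15, W = 0.05, H = 0.25;
    double r = std::sqrt((x - xc) * (x - xc) + (y - yc) * (y - yc));
    if (r > R) return 0.0;                        // outside the cylinder
    bool in_slot = std::fabs(x - xc) < 0.5 * W    // within the slot width
                && (y - (yc - R)) > 0.0           // above the cylinder bottom
                && (y - (yc - R)) < H;            // below the top of the slot
    return in_slot ? 0.0 : 1.0;
}
```

Because the profile is piecewise constant, every cell straddling one of these boundaries raises the smoothness indicators, so the nonlinear GP-WENO branch is exercised throughout the run.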
With this test, we see that the default linear prolongation is much more numerically diffusive and smears the profile almost immediately, being unable to reconstruct the underlying profile on the coarser grid to the same fidelity that GP-AMR achieves. This results in a far more blurred cylinder at $t=2$. While there is some loss with GP-AMR, the profile at $t=2$ far better resembles the cylinder at the onset of the simulation. ### Sedov Blast Wave using Castro A perhaps more useful test of the algorithm is in a compressible setting, where shock-handling becomes necessary. To illustrate the compressible performance of this method, we utilize the Sedov blast wave [@sedov], a radially expanding pressure wave. This simulation is solved using Castro with the choice of the piecewise parabolic method (PPM) [@ppm] for reconstruction along with the Colella and Glaz Riemann solver [@ColGlazFerg]. For the 2D test we have a base grid of $64 \times 64$ and two additional AMR levels, using a refinement ratio of $r_x=r_y=2$. Figure \[fig:sedov\] illustrates the propagation of the Sedov blast wave at $t = 0.1$, and allows us to compare the linear prolongation method with GP-AMR. [0.49]{} ![\[fig:sedov\] A Sedov Blast Wave solution at $t = 0.1$ with two levels of refinement.](./Sedov_GP-eps-converted-to "fig:"){width="\textwidth"} [0.49]{} ![\[fig:sedov\] A Sedov Blast Wave solution at $t = 0.1$ with two levels of refinement.](./Sedov_clin-eps-converted-to "fig:"){width="\textwidth"} Although simple, the Sedov blast wave is a good test illustrating the shock-handling capabilities of the GP multi-substencil model. Notice that, visually, the radial shockwaves in both simulations are identical. However, the vacuum in the center of the blast wave is closer to 0 with GP-AMR, as in the self-similar analytic solution [@sedov]. In Figure \[fig:sedov\] the AMR levels track the shock as it propagates radially and the shock front is contained at the most refined level.
At the most refined level, the shock is handled by the multi-modeled GP-WENO treatment. This increases the computational complexity in this region. However, the GP algorithm is less expensive in this example, because the shock is very well localized and the majority of the domain is handled by the regular GP model. The standard GP model is a simple dot-product using the precomputed weights and the stencil. Table \[tab:Sedov\] contains the performance statistics of the GP-AMR algorithm compared to the default linear using the same workstation as the previous test. Execution Time Prolongation Time \# of calls -------------------------- ---------------- ------------------- ------------- $2D$ GP-AMR 5.719s 0.07691s 19743 $2D$ Linear 5.698s 0.12610s 19439 $3D$ GP-AMR 64.27s 1.28912s 35202 $3D$ Non-Adaptive GP-AMR 72.81s 5.09467s 35202 $3D$ Linear 67.76s 1.96204s 35202 : Performance of GP-AMR against the default linear AMR on the Sedov Blast Wave with 6 MPI ranks on an Intel i7-8700K processor.[]{data-label="tab:Sedov"} A 3D Sedov blast was also tested, giving us a better look at the multi-substencil cost in the shock regions. For this benchmark, the simulation utilized a base grid of $32\times 32\times 32$ with two additional levels of AMR, utilizing a refinement factor of 2 for both levels. The wave was advected until $t = 0.01$ with both simulations (GP-AMR and default). Table \[tab:Sedov\] also contains the performance metrics for the 3D test. By setting $\alpha_c=0$ we effectively use the multi-substencil GP-WENO method over all cells. The metrics for this example are labeled as “Non-Adaptive GP-AMR” in Table \[tab:Sedov\]. Using the multi-substencil GP-WENO method for every grid is roughly 5$\times$ more expensive as a prolongation method. This is expected, as the multi-modeled GP-WENO method combines 5 GP models.
### Double Mach Reflection using Castro The double Mach reflection [@dmr] consists of a Mach 10 shock incident on a reflecting wedge, resulting in a complex set of interacting features. For this test, we utilize Castro with PPM [@ppm] and the HLLC [@toro] Riemann solver. The initial condition describes a planar shock front at an angle of $\theta = \pi/3$ from the $x$-axis, which itself is a reflecting wall, $$\begin{pmatrix} \rho, u, v, p \end{pmatrix} = \begin{cases} (1.4, \; 0, \; 0, \; 1) & \textrm{for} \quad x > x_{shock}, \\ (8,\; 8.25, \;-8.25, \;116.5) & \textrm{else}, \end{cases}$$ where $$x_{shock} = \frac{y + \frac{1}{6}}{\tan{\frac{\pi}{3}}}$$ with $y\in[0,1]$. The full domain of the problem is $[0,4]\times[0,1]$. Figure \[fig:dmrfull\] shows the solution to this problem with 4 levels of refinement, starting from a base level with resolution 512$\times$128, using the GP-based prolongation. ![\[fig:dmrfull\] The double Mach reflection simulation at $t = 0.2$ with 4 levels of AMR refinement.](./GP_DMR_full-eps-converted-to){width="\textwidth"} With sufficient resolution and accuracy, we observe vortices along the primary slip line, as seen in the copious number of vortices in Figure \[fig:dmrfull\]. The number of vortices serves as a general indication of the numerical diffusivity of the method and of the quality of the Riemann solver. In this context, we are interested in the amount of numerical dissipation of the two different AMR prolongation methods. For reference, we present a labeled schematic of the double Mach reflection containing the regions of interest for this comparison (Figure \[fig:dmr\_schm\]). ![\[fig:dmr\_schm\] A schematic of the main features in the double mach reflection problem.](./dmr_fig_a-eps-converted-to){width="\textwidth"} We will refer to features contained in this diagram in the following analysis, mostly in the central region encompassing the secondary reflected shock, triple point, and primary slip line.
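For concreteness, the piecewise initial state given above can be evaluated as follows (our own helper, not the Castro setup code):

```cpp
#include <cassert>
#include <cmath>

// Double Mach reflection initial condition: a Mach 10 shock inclined at
// pi/3 to the x-axis. State vector (rho, u, v, p): the quiescent pre-shock
// gas sits ahead of x_shock(y), the post-shock state behind it.
struct State { double rho, u, v, p; };

State dmr_initial(double x, double y) {
    const double PI = 3.14159265358979323846;
    double x_shock = (y + 1.0 / 6.0) / std::tan(PI / 3.0);
    if (x > x_shock) return {1.4, 0.0, 0.0, 1.0};  // pre-shock gas
    return {8.0, 8.25, -8.25, 116.5};              // post-shock state
}
```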
The default and GP-AMR implementations are compared by zooming into the aforementioned region. In Figure \[fig:dmrComp\] we observe the effect of each prolongation method on the number of vortices along the primary slip line. As can be seen in Figures \[fig:GPDMR\] and \[fig:LINDMR\], there is an earlier onset of Kelvin-Helmholtz instability along the primary slip line in the GP-based AMR simulation, resulting in additional vortices above the secondary reflected shock wave, along with the onset of instability on the primary slip line close to the primary triple point. As a rudimentary measure, the GP-AMR simulation contains 20 vortices in this region whereas the default AMR contains 17. With this simulation we see that the default linear prolongation is faster, as the adaptive GP algorithm has more cells to prolongate in the high-$\alpha$ regime. Table \[tab:DMR\] contains the details about the execution times. Execution Time Prolongation Time \# of calls -------- ---------------- ------------------- ------------- GP-AMR 760.43s 1.1121s 39724 Linear 705.10s 0.7395s 39837 : Performance insights for the Double Mach Reflection problem utilizing 8 nodes and 320 cores on Lux.[]{data-label="tab:DMR"} These results were generated using the University of California, Santa Cruz’s Lux supercomputer, utilizing 8 nodes. Each node contains two 20-core Intel Xeon Gold 6248 (Cascade Lake) CPUs. This simulation generates more high-$\alpha$ regions, and thus requires the multi-modeled GP-WENO algorithm more often. This results in the GP-AMR simulation being slower than the default prolongation method by roughly 55 seconds. In addition to the increase in computational complexity, there is an increase in time spent in the parallel copy algorithm. The multi-modeling GP-WENO algorithm requires 2 growth cells at the borders of each patch, therefore increasing the amount of data to be copied.
### Premixed Flame using PeleC For the final test problem in this paper, we produce a steady flame using the AMR compressible combustion simulation code, PeleC [@pelec]. With PeleC, chemical species are tracked as mass fractions that are passively moved during the advection phase, diffused subject to transport coefficients, and evolved in the reaction phase by solving ordinary differential equations to compute reaction rates. In this problem, GP-AMR is paired with PPM and the Colella and Glaz Riemann solver [@ColGlazFerg]. We show $\rho w$ (momentum in the $z$ direction) of the premixed flame solution in Figure \[fig:PMF\]. For this illustration the base grid had a $32\times32\times256$ configuration with two additional AMR levels. Furthermore, Figure \[fig:PMF\] uses a color map such that lighter colors indicate regions of high momentum and darker colors regions of low momentum. The premixed flame is a 3D flame tube problem in a domain that encompasses $[0,0.625]\times[0,0.625]\times[1,6]$. The flame spans the $x$ and $y$ dimensions and is centered at $z = 3.0$. The gases are premixed and follow the Li-Dryer hydrogen combustion chemical kinetics model [@lidryer]. [0.49]{} [0.49]{} [0.6]{} To illustrate some performance metrics, we execute the simulation on 32 nodes, with 196 NVIDIA V100 GPUs, a base level of 256$\times$128$\times$2048, and two levels of refinement. We present the performance of GP-AMR against the default in Table \[tab:perf\]. We see that the GP-AMR prolongation is on average about twice as fast as the default linear prolongation. Execution Time Prolongation Time \# of calls -------- ---------------- ------------------- ------------- GP-AMR 50.23s 0.03281s 940 Linear 52.04s 0.06261s 940 : Execution timings of PeleC and AMReX on the Premixed Flame test problem on 32 nodes of the Summit supercomputer.[]{data-label="tab:perf"} Additionally, we perform weak scaling with this problem for up to 3072 NVIDIA V100 GPUs on Summit, with results illustrated in Figure \[fig:wkscl\].
The $y$-axis of the figure illustrates the average GP prolongation times, and the $x$-axis is the number of GPUs, from 96 to 3072, on Summit in a base-2 logarithmic scale. Each node on Summit contains 6 NVIDIA V100 GPUs, therefore the scaling ranges from 16 to 512 nodes. GP-AMR as implemented in AMReX scales very well on Summit, one of the top-class supercomputers among the modern leadership computing facilities in the world. ![\[fig:wkscl\] Weak scaling of GP-AMR utilized in PeleC up to 3072 Nvidia Volta GPUs (512 Nodes on the OLCF Summit Super Computer).](./GP_scal-eps-converted-to){width="75.00000%"} Conclusion {#sec:conclusion} ========== In this paper we developed an efficient, third-order accurate AMR prolongation method based on Gaussian process modeling. This method is general with respect to the type of data being interpolated, as illustrated by the substitution of covariance kernels in Eqns. \[eq:sqrexpcov\] and \[eq:intsq\]. In order to handle shock waves, a multi-substencil GP-WENO algorithm inspired by WENO [@WENO] was studied. We recognize that GP-WENO becomes computationally expensive when shocks are present in simulations. The $\alpha$-based tagging approach was proposed as a method to mitigate this situation. This approach uses a grid-scale sized length-scale parameter, furnished from GP, to detect regions that may contain shocks or non-smooth flows. In three of the five test cases, the GP-AMR method was faster than the linear interpolation. The other two cases had situations where the patches to be interpolated contained mostly cells where the linear GP model did not suffice and the nonlinear multi-substencil treatment was required. Overall, the GP-AMR method is a balance between speed, stability, and accuracy. In the scope of this paper, the tunable parameters $\ell$ and $\sigma$ are either fixed, or fixed in relation to the grid scale. To further adapt the algorithm, one could try to maximize the log of Eqn.
\[eq:likely\] with respect to the hyperparameter $\ell$, as is done in many applications utilizing Gaussian process regression. However, in our application a fixed prescription for $\ell$ appears to provide the properties we desire. The stability of the algorithm is inherently tied to the $\sigma$ parameter, which we recommend never be larger than three times the grid scale. In many of the test cases we chose $\sigma = 1.5\Delta x$. If additional stability is required, we recommend either tuning $\alpha_c$ to be smaller or setting it to zero, requiring the algorithm to use only the multi-substencil GP model. Utilizing the framework provided, an even higher-order prolongation method can be generated simply by increasing the size of the stencil within the same framework. This will be inherently useful as more simulation codes move to increasingly accurate solutions with WENO [@WENO; @WENO3D] or GP [@reyes_new_2016; @reyes2019variable] based reconstruction methods paired with Spectral Deferred Corrections (SDC) [@sdcode; @sdc], which can yield a fourth- or higher-order accurate total simulation. In this case, a second-order AMR interpolation may degrade the overall quality of the solution or incur additional SDC iterations, increasing the execution time of the simulation. Acknowledgements ================ This work was supported in part by the National Science Foundation under grant AST-1908834. We acknowledge that the current work has come to fruition with help from the Center for Computational Science and Engineering at Lawrence Berkeley National Laboratory, the home of AMReX. We thank Dr. Ann S. Almgren, Dr. Weiqun Zhang, and Dr. Marcus Day for their insight and advice when completing this research. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
The first and second authors also acknowledge use of the Lux supercomputer at UC Santa Cruz, funded by NSF MRI grant AST 1828315. \[sec:references\]
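The regression step underlying the GP prolongation discussed above can be illustrated with a short self-contained sketch. This is not the paper's AMReX/PeleC implementation: the function names are invented for this example, and the single length scale `ell` (set to $1.5\Delta x$ in the spirit of the $\sigma = 1.5\Delta x$ recommendation) stands in for the paper's $(\ell,\sigma)$ pair. It performs zero-mean GP interpolation of coarse 1D data with a squared-exponential kernel.

```python
import math

def se_kernel(x, y, ell):
    # squared-exponential covariance, in the spirit of Eqn. [eq:sqrexpcov]
    return math.exp(-0.5 * (x - y) ** 2 / ell ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting; adequate for small stencils
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_prolong(xs, fs, x_new, ell):
    # zero-mean GP posterior mean: k(x_new, xs) @ K^{-1} @ fs
    K = [[se_kernel(a, b, ell) for b in xs] for a in xs]
    w = solve(K, fs)
    kstar = [se_kernel(x_new, b, ell) for b in xs]
    return sum(k * wi for k, wi in zip(kstar, w))
```

The GP mean exactly reproduces the coarse data at stencil points and interpolates smoothly in between, which is the property the prolongation operator exploits.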
HD-THEP-99-32\ [**Analytic approach to confinement\ and monopoles in lattice [$\bf SU(2)$]{}**]{}\ [**Dieter Gromes**]{}\ Institut für Theoretische Physik der Universität Heidelberg\ Philosophenweg 16, D-69120 Heidelberg\ E - mail: d.gromes@thphys.uni-heidelberg.de\ [**Abstract:** ]{} We extend the approach of Banks, Myerson, and Kogut for the calculation of the Wilson loop in lattice $U(1)$ to the non-abelian $SU(2)$ group. The original degrees of freedom of the theory are integrated out, and new degrees of freedom are introduced in several steps. The centre group $Z_2$ enters automatically through the appearance of a field strength tensor $f_{\mu \nu }$, which takes on the values 0 or 1 only. It obeys a linear field equation with the loop current as source. This equation implies that $f_{\mu \nu }$ is non-vanishing on a two-dimensional surface bounded by the loop, and possibly on closed surfaces. The two-dimensional surfaces have a natural interpretation as strings moving in euclidean time. In four dimensions we recover the dual Abrikosov string of a type II superconductor, i.e. an electric string encircled by a magnetic current. In contrast to other types of monopoles found in the literature, the monopoles and the associated magnetic currents are present in every configuration. With some plausible, though not generally conclusive, arguments we are directly led to the area law for large loops. August 1999, revised October 1999 Introduction ============ It is now widely accepted that confinement is due to the formation of a color electric string, and that magnetic monopoles play an essential role in this context. There is still lively activity in this field, illuminating the phenomenon from various sides. A particularly appealing approach is the one by Banks, Myerson, and Kogut [@bmk], which is now more than 20 years old.
They considered the partition function for an electric current loop and derived step by step the appearance of monopoles by integrating out the original degrees of freedom and introducing new ones. The possibility to do this was restricted to the lattice U(1) model with the Villain action [@vil] and some other simple models. The authors also remark that the techniques do not generalize simply to non-abelian theories. We will start by applying and generalizing the methods of ref. [@bmk]. In a first step the $SU(2)$ matrices on the links are explicitly parametrized by three angles $\psi ,\vartheta ,\varphi $. An appropriate decomposition of the $SU(2)$ matrices allows the calculation of the trace in the plaquette action. After an expansion of exponentials into modified Bessel functions, the integrations over the link angles can be performed. They lead to constraints for the new variables which were introduced in the expansions. Most of these variables are irrelevant, and the summations can be performed after a suitable transformation. After this has been done, we are left with several integer variables which are restricted by constraint equations. The most important one is a $Z_2$ field strength tensor $f_{\mu \nu }$. It lives on plaquettes and is either 0 or 1. This tensor obeys an inhomogeneous linear field equation with the loop current as source. The solutions of the equation have a simple geometrical interpretation. The tensor $f_{\mu \nu }$ is non-vanishing on a two-dimensional surface bounded by the loop, and possibly on closed two-dimensional surfaces. These surfaces have a natural interpretation as strings moving in euclidean time. There is a string which connects the two charges associated with the loop, and possibly a number of additional closed strings. The situation is particularly transparent for planar loops, where the layer on the minimal surface, corresponding to the straight string, plays a special role.
For large loops we can use some general arguments and reasonings from statistical mechanics, like additivity of the free energy, and obtain the area law. The subtle question of whether a finite string tension survives in the thermodynamic limit and in the continuum limit needs additional investigation. Our approach is purely analytical and non-perturbative, no gauge fixing was performed, and $\beta $ was kept arbitrary. Lengthy calculations were done with the help of Mathematica. At no point was any physical picture of what we expect put in by hand. It is the formalism itself which automatically leads to the appearance of a plaquette variable $f_{\mu \nu }$ which is naturally associated with the world sheet of a string. If one could perform the integrations and summations over the remaining parameters, one would obtain the explicit string action. Even without doing this, the formalism clearly shows the appearance of the string picture and the origin of the confinement mechanism. The partition function ======================= We are interested in the expectation value of a Wilson loop $W$, characterized by a closed current loop $J$: $$Z[J]=\int Tr[W(J)]\exp[\frac{\beta }{2}\sum_{p_{\rho \nu }}TrU_{\rho \nu }(p)] {\cal D}[U].$$ The sum runs over all plaquettes $p_{\rho \nu }$ (with $\rho <\nu $), while $U_{\rho \nu } (p)$ is the product of the four $SU(2)$-matrices on the links of the plaquette.
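A basic consistency property of the action in (2.1), namely that the plaquette trace is real and bounded by 2 for any $SU(2)$ link configuration, can be checked numerically. The sketch below is an illustration only, not part of the derivation; it builds random $SU(2)$ elements in the standard $\left(\begin{smallmatrix}a & b\\ -\bar b & \bar a\end{smallmatrix}\right)$ form rather than the Euler parametrization used in the text.

```python
import math, random

def mul(A, B):
    # product of 2x2 complex matrices stored as tuples of rows
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def dagger(A):
    # Hermitian conjugate of a 2x2 matrix
    return tuple(tuple(A[j][i].conjugate() for j in range(2)) for i in range(2))

def random_su2(rng):
    # generic SU(2) element ((a, b), (-conj(b), conj(a))) with |a|^2 + |b|^2 = 1
    v = [rng.gauss(0.0, 1.0) for _ in range(4)]
    n = math.sqrt(sum(x * x for x in v))
    a = complex(v[0], v[1]) / n
    b = complex(v[2], v[3]) / n
    return ((a, b), (-b.conjugate(), a.conjugate()))

def plaquette_trace(U1, U2, U3, U4):
    # Tr[U_rho(p) U_nu(p+rho) U_rho^+(p+nu) U_nu^+(p)]
    P = mul(mul(U1, U2), mul(dagger(U3), dagger(U4)))
    return P[0][0] + P[1][1]
```

Since a product of $SU(2)$ matrices is again in $SU(2)$, the trace is $2\cos$ of a rotation angle: real and in $[-2,2]$, so the exponent in (2.1) is real.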
We will perform rather extensive manipulations in the following; we therefore fix our notation here: $$\begin{aligned} & & p,q,r \mbox{\quad denote lattice points}, \nonumber\\ & & \mu ,\nu ,\rho ,\lambda =1,\cdots ,d \mbox{\quad denote space directions,} \nonumber\\ & & p\pm\mu \mbox{\quad is the lattice point next to }p \mbox{\quad in positive or negative }\mu\mbox{-direction,} \nonumber\\ & & p_\mu \mbox{\quad denotes the link connecting $p$ with $p+\mu $}, \\ & & p_{\mu \nu } \mbox{\quad with \quad $\mu <\nu$ \quad is the plaquette determined by the links $p_\mu $ and $p_\nu $,}\nonumber\\ & & \Theta ^a_\mu (p) =(\psi ,\vartheta ,\varphi )_\mu (p) \mbox{\quad denotes three angles which parametrize the SU(2) matrices} \nonumber\\ & & \mbox{on the link $p_\mu $, indices $a,b$ generally run over $\psi,\vartheta,\varphi $},\nonumber\\ & & U_\mu (p) \equiv U(\Theta_\mu ^a(p)) \mbox{ is the $SU(2)$ matrix on the link $p_\mu $},\nonumber\\ & & U_{\rho \nu }(p) =U_\rho (p)U_\nu (p+\rho ) U_\rho ^+(p+\nu )U_\nu ^+(p) \mbox{ is the plaquette variable}.\nonumber\end{aligned}$$ As a first step we have to choose a parametrization for the link variables $U_\mu (p)$ in order to be able to do the group integrations. We proceed similarly to previous work which applied the optimized $\delta $-expansion on the lattice [@bj], [@bg]. In our case the Euler parametrization is the most appropriate, i.e. we use $$U = e^{i\sigma _3\psi } e^{i\sigma _2\vartheta } e^{i\sigma _3\varphi }.$$ The Haar measure is proportional to $\sin(2\vartheta)$. A possible choice of the parameter space is $-\pi <\psi <\pi ,\quad 0 <\varphi <\pi ,\quad 0<\vartheta<\pi /2$. For the following it is convenient to extend this region. All integrals which appear contain functions of the $U_{\rho \nu }(p)$ which are periodic in the Euler angles. So we may use some symmetry relations which are easily seen from the decomposition of $U$ into 1 and the $\sigma _m$.
The shift $\varphi \rightarrow \varphi -\pi ,\psi \rightarrow \psi -\pi $ leaves $U$ invariant. Therefore one can extend the $\varphi $-integration into the interval from $-\pi $ to $\pi $, thus covering the group manifold twice. The further symmetry $\vartheta \rightarrow -\vartheta , \psi \rightarrow \psi -\pi /2, \varphi \rightarrow \varphi +\pi /2$ allows us to extend the $\vartheta $-integration to the interval $-\pi /2<\vartheta <\pi /2$ if we continue the Haar measure as an even function. Finally the symmetry $\vartheta \rightarrow \vartheta -\pi ,\psi \rightarrow \psi -\pi $ allows the extension of the $\vartheta $-integration to the full interval. Therefore we take $$H(\vartheta) = \frac{\pi }{2}|\sin(2\vartheta)|$$ as Haar measure in the following, and use the common boundaries $-\pi <\psi , \vartheta ,\varphi <\pi $. In fig. 1 we show the notation for the link variables of the plaquette $p_{\rho \nu }$. [**Fig.1.**]{} The plaquette $p_{\rho \nu }$ and the link variables $U_\rho (p)$, $U_\nu (p+\rho )$, $U^+_\rho (p+\nu )$, and $U^+_\nu (p)$. The parametrization according to (2.3) becomes $$\begin{aligned} U_\rho (p) & = & e^{i\sigma _3\psi _\rho (p)}\; e^{i\sigma _2\vartheta _\rho (p)} \;e^{i\sigma _3\varphi _\rho (p)}, \nonumber\\ U_\nu (p+\rho ) & = & e^{i\sigma _3\psi _\nu (p+\rho )}\; e^{i\sigma _2\vartheta _\nu (p+\rho )} \;e^{i\sigma _3\varphi _\nu (p+\rho )},\nonumber\\ U^+_\rho (p+\nu ) & = & e^{-i\sigma _3\varphi _\rho (p+\nu )} \;e^{-i\sigma _2\vartheta _\rho (p+\nu )} \;e^{-i\sigma _3\psi _\rho (p+\nu )},\\ U^+_\nu (p) & = & e^{-i\sigma _3\varphi _\nu (p)} \;e^{-i\sigma _2\vartheta _\nu (p)} \;e^{-i\sigma _3\psi _\nu (p)}.
\nonumber\end{aligned}$$ An appropriate technique for the further procedure, which was also extensively used in [@bj] and [@bg], is the splitting of the matrix exponentials into sums of ordinary exponentials times projection operators, in general $$e^{\pm i\sigma _m\alpha } = \sum_{s=\pm 1} e^{\pm i s \alpha } P_s(m),\mbox{\quad with\quad }P_s(m) = \frac{1}{2 }(1+s\sigma _m).$$ The link variables $U$ are parametrized as in (2.5); the plaquette variable $U_{\rho \nu }(p)$ therefore contains 12 factors. To every factor we apply the decomposition (2.6). This means that we need 12 summation indices $s[\Theta ] =\pm 1$, associated with the twelve angles in (2.5). At the end $TrU_{\rho \nu }(p)$ becomes a product of 12 factors, each being a sum of two terms of the type (2.6). So in total we have a sum of $2^{12}$ terms. Each term of the sum consists of a trace $T$ of a product of 12 projectors $P_s(m)$, multiplied by an exponential. At the four corners of the plaquette one has a product of two projectors $P_s(3)P_{s'}(3)$. They both project on the same subspace, therefore we get a non-vanishing result only if the neighboring parameters $s$ and $s'$ agree. So we must have $s[\varphi _\rho (p)] =s[\psi _\nu (p+\rho )],$ $s[\varphi _\nu (p+\rho )] =s[\varphi _\rho (p+\nu )], s[\psi _\rho (p+\nu )] =s[\varphi _\nu (p)], s[\psi _\nu (p)] =s[\psi _\rho (p)]$; there are in fact not 12, but only 8 independent parameters. We enumerate the remaining 8 independent parameters $s[\Theta]$ in a consecutive way, starting with $s[\vartheta _\rho (p)] $. In (2.7) we give the parameters $s_i$, together with the angles to which they refer. According to the remarks above, the $s_i$ with even $i$ belong to two angles.
$$\begin{aligned} s_1 & \Leftrightarrow & \vartheta _\rho (p), \quad \quad \; \; \: s_2 \Leftrightarrow \varphi _\rho (p),\psi _\nu (p+\rho ), \nonumber\\ s_3 & \Leftrightarrow & \vartheta _\nu (p+\rho ), \quad s_4 \Leftrightarrow \varphi _\nu (p+\rho ),\varphi _\rho (p+\nu ), \nonumber\\ s_5 & \Leftrightarrow & \vartheta _\rho (p+\nu ), \quad s_6 \Leftrightarrow \psi _\rho (p+\nu ),\varphi _\nu (p), \\ s_7 & \Leftrightarrow & \vartheta _\nu (p), \quad \quad \; \; \: s_8 \Leftrightarrow \psi _\nu (p),\psi _\rho (p). \nonumber\end{aligned}$$ The number of terms in the expansion of $TrU_{\rho \nu }(p)$ has now been reduced from $2^{12}$ to $2^8$. We enumerate these terms by an index $n$; the order in which this is done need not be specified yet. Thus each $s_i$ becomes a function of $n$; we denote it by $s_{in}$. The traces also depend upon $n$; we denote them by $T_n$. Finally one can write $TrU_{\rho \nu }(p)$ in the following form: $$TrU_{\rho \nu }(p) =Tr[U_\rho (p)U_\nu (p+\rho ) U_\rho ^+(p+\nu )U_\nu ^+(p)] = \sum_{n=1}^{2^8} T_n\exp[iA_{\rho \nu }^n(p;\Theta ,s)].$$ Here $$\begin{aligned} A_{\rho \nu }^n(p;\Theta ,s) & = & s_{8n} \psi _\rho (p) + s_{1n} \vartheta _\rho (p) + s_{2n} \varphi _ \rho (p) \nonumber\\ & & + s_{2n} \psi _\nu (p+\rho ) + s_{3n} \vartheta _\nu (p+\rho ) + s_{4n} \varphi _\nu (p+\rho ) \\ & & - s_{4n} \varphi _\rho (p+\nu ) - s_{5n} \vartheta _\rho (p+\nu ) - s_{6n} \psi _\rho (p+\nu ) \nonumber\\ & & - s_{6n} \varphi _\nu (p) - s_{7n} \vartheta _\nu (p) - s_{8n} \psi _\nu(p). \nonumber\end{aligned}$$ The computation of the non-vanishing $2^{8}$ traces $T_n$ shows that they all have the values $\pm 1/16$. This structure is easily understood. Applying the projectors $P_s(3)$ and $P_s(2)$ to a given vector successively leads to alternating projections on different subspaces. Each projection gives a factor $\pm 1/\sqrt{2}$, with the sign depending upon the $s_{in}$. There are 144 values of $n$ with $T_n=1/16$ and 112 with $T_n=-1/16$.
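The census of the traces $T_n$ can be verified directly by enumerating all $2^8$ sign choices and multiplying the corresponding projectors of (2.6). The following pure-Python sketch (an illustration, not part of the derivation) does exactly this for the alternating product $P_{s_1}(2)P_{s_2}(3)\cdots P_{s_7}(2)P_{s_8}(3)$.

```python
import itertools

I2 = ((1, 0), (0, 1))
SIGMA = {2: ((0, -1j), (1j, 0)), 3: ((1, 0), (0, -1))}

def proj(s, m):
    # projector P_s(m) = (1 + s*sigma_m)/2 from the decomposition (2.6)
    return tuple(tuple((I2[i][j] + s * SIGMA[m][i][j]) / 2 for j in range(2))
                 for i in range(2))

def mul(A, B):
    # product of 2x2 complex matrices
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def trace_T(signs):
    # T_n = Tr[P_{s1}(2) P_{s2}(3) P_{s3}(2) P_{s4}(3) ... P_{s7}(2) P_{s8}(3)]
    M = I2
    for i, s in enumerate(signs):
        M = mul(M, proj(s, 2 if i % 2 == 0 else 3))
    return M[0][0] + M[1][1]

traces = [trace_T(s) for s in itertools.product((1, -1), repeat=8)]
```

The enumeration reproduces the counts quoted in the text: all 256 traces equal $\pm 1/16$, with 144 positive and 112 negative values, so that $\sum_n T_n = (144-112)/16 = 2$.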
Obviously $\sum_nT_n=2$ as it should. A further simplification is obtained from the symmetry $T_n(s_{in}) = T_n(-s_{in})$, which is easily seen by inserting $\sigma _1\sigma _1$ between all projectors and using $\sigma _1P_s(m)\sigma _1 = P_{-s}(m)$ for $m=2,3$. Together with the symmetry $A_{\rho \nu }^n(p;\Theta ,s_{in}) = - A_{\rho \nu }^n(p;\Theta ,-s_{in})$ and the reality of the trace in (2.8) we can therefore fix, e.g. $s_{8n} =1$, multiply (2.8) by 2 on the rhs, and restrict the sum over $n$ to the $2^7$ values $s_{1n} ,\cdots ,s_{7n}$. It is convenient to introduce sign factors $\epsilon _n=\pm 1$ and write $$T_n = \frac{\epsilon _n}{16} = Tr[(P_{s_{1n}} (2) P_{s_{2n}} (3)) \; (P_{s_{3n}}(2)P_{s_{4n}}(3)) \; (P_{s_{5n}}(2)P_{s_{6n}}(3)) \; (P_{s_{7n}}(2) P_{s_{8n}=1}(3))].$$ According to the previous remarks we dropped redundant projectors and fixed $s_{8n}=1$. We may now write (2.8) as sum over the restricted set $n$ in the form $$Tr U_{\rho \nu }(p) = \frac{1}{8}\sum_{n=1}^{2^7} \epsilon _n \cos[A_{\rho \nu }^n(p;\Theta ,s)].$$ In order to perform the integrations over all the angles $\psi ,\vartheta,\varphi $ we proceed essentially as in [@bmk]. The various exponentials are expanded into a series of modified Bessel functions according to the formula $$\exp[\epsilon z \cos A] = \sum_{l=-\infty}^{\infty} \epsilon ^l I_l(z) e^{ilA} \mbox{\quad for }\epsilon =\pm 1.$$ The replacement $\epsilon \rightarrow -\epsilon $ is obviously equivalent to the shift $A\rightarrow A+\pi $. 
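The expansion (2.12) and the equivalence of $\epsilon \rightarrow -\epsilon$ with the shift $A\rightarrow A+\pi$ can be checked numerically. In the sketch below (illustrative; the parameter values are arbitrary), $I_l(z)$ is computed from its standard integral representation, and the sum over $l$ is collapsed into cosines using $I_{-l}=I_l$ and $\epsilon^{-l}=\epsilon^{l}$ for $\epsilon=\pm 1$.

```python
import math

def bessel_i(l, z, n=4000):
    # I_l(z) = (1/pi) * integral_0^pi e^{z cos g} cos(l g) dg, trapezoidal rule
    h = math.pi / n
    s = 0.5 * (math.exp(z) + math.exp(-z) * math.cos(l * math.pi))
    for k in range(1, n):
        g = k * h
        s += math.exp(z * math.cos(g)) * math.cos(l * g)
    return s * h / math.pi

def bessel_sum(z, A, eps, lmax=25):
    # right-hand side of (2.12): sum_l eps^l I_l(z) e^{ilA},
    # with the l and -l terms combined into 2 eps^l I_l(z) cos(lA)
    s = bessel_i(0, z)
    for l in range(1, lmax + 1):
        s += 2 * (eps ** l) * bessel_i(l, z) * math.cos(l * A)
    return s
```

The truncated series reproduces $\exp[\epsilon z\cos A]$ to high accuracy, since $I_l(z)$ decays factorially in $l$ for fixed $z$.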
Using (2.11) and (2.12) we thus find $$\begin{aligned} \exp[\frac{\beta }{2}\sum_{p_{\rho \nu }} TrU_{\rho \nu }(p)] & = & \exp\left[ \frac{\beta }{16} \sum_{p_{\rho \nu} ,n}\epsilon _n \cos [A_{\rho \nu }^n(p;\Theta ,s)]\right] \nonumber\\ & = & \prod_{p_{\rho \nu },n}\exp\left[ \frac{\beta }{16} \epsilon _n \cos [A_{\rho \nu }^n(p;\Theta ,s)]\right] \nonumber\\ & = & \prod_{p_{\rho \nu },n} \sum_{l=-\infty}^\infty (\epsilon _n)^l I_l(\frac{\beta }{16}) \exp [il A_{\rho \nu }^n(p;\Theta ,s)]\\ & = & \sum_{l_{\rho \nu }^n(p)} \prod_{p_{\rho \nu },n} (\epsilon _n) ^{l_{\rho \nu }^n(p)} I_{l_{\rho \nu }^n(p)}(\frac{\beta }{16})\exp[il_{\rho\nu }^n(p) A_{\rho \nu} ^n(p;\Theta ,s)]\nonumber. \end{aligned}$$ In the last step we exchanged the order of the product and the sum. The summation parameters $l$ must then be distinguished by indices referring to the corresponding plaquette $p_{\rho \nu }$ and the index $n$; the $l_{\rho \nu }^n(p)$ run independently from $-\infty$ to $\infty$. We can now write down the expectation value for a Wilson loop. We characterize it by a closed current loop $J_\lambda (q)$ which is $\pm 1$ if the current runs in or against the direction of the link $q_\lambda $, and 0 otherwise. For simplicity we exclude loops which run multiply through some links. For the calculation of the trace $Tr[W(J)]$ we use again the decomposition (2.6) for the link variables on the loop, parametrized by (2.3). For every link $q_\lambda $ on the loop we have two parameters $\hat{s}^b_\lambda (q) = \pm 1$ (not three, because again neighboring projectors have to coincide); the whole set of these will be denoted by $\hat{s}$. For a loop of length $L$ we have $2L$ parameters $\hat{s}^b_\lambda (q) = \pm 1$ and in total a sum of $2^{2L}$ terms. We count these by an index $\hat{n}$, and denote the parameters by $\hat{s}_{\lambda \hat{n}}^b(q)$. The traces are called $W_{\hat{n}}$.
The $W_{\hat{n}}$ have a similar structure as the $T_n$; their values are $\pm1/2^L$, and $\sum _{\hat{n}}W_{\hat{n}} =2$. The partition function now reads $$\begin{aligned} Z[J] & = & \int \sum_{\hat{n}} W_{\hat{n}} \exp[i\sum_{q\lambda b}J_\lambda (q) \hat{s}_{\lambda \hat{n}}^b(q) \Theta _\lambda ^b(q)]\nonumber\\ & & \sum_{l_ {\rho \nu }^n(p)} \left(\prod_{p_{\rho \nu },n} (\epsilon _n)^{l_{\rho \nu }^n(p)} I_{l_{\rho \nu }^n(p)} (\frac{\beta }{16})\exp[i l_{\rho \nu }^n(p)A_{\rho \nu }^n(p;\Theta ,s)] \right)\nonumber\\ & & \prod_{r\mu }\left(H(\vartheta _\mu (r)) \prod_a \frac{d \Theta _\mu ^a(r)}{2\pi }\right).\end{aligned}$$ Obviously the first sum over $\hat{n}$ is the expansion of the loop. The sum over $l_{\rho \nu }^n(p)$ in the second line contains the action transformed as in (2.13), while the product over $r,\mu $ in the third line contains the integrations together with the Haar measure. We may factorize the product over $p_{\rho \nu },n$ and apply the addition theorem for the exponentials. This results in $$\begin{aligned} Z[J] & = & \int \sum_{\hat{n}} W_{\hat{n}} \exp[i\sum_{q\lambda b}J_\lambda (q)\hat{s}_{\lambda \hat{n}} ^b(q)\Theta_\lambda ^b(q)]\nonumber\\ & & \sum_{l_{\rho \nu }^n(p)} \left(\prod_{p_{\rho \nu },n} (\epsilon _n)^{l_{\rho \nu }^n(p)} I_{l_{\rho \nu }^n(p)} (\frac{\beta }{16}) \right) \exp [i \sum_{p,\rho<\nu ,n} l_{\rho \nu }^n(p)A_{\rho \nu}^n(p;\Theta ,s)]\nonumber\\ & & \prod_{r\mu }\left( H(\vartheta _\mu (r))\prod_a \frac{d\Theta_\mu ^a(r)}{2\pi }\right). \end{aligned}$$ We are now ready to perform the angular integrations over $\Theta_\mu ^a(r)$, remembering the definition of $A_{\rho \nu }^n(p;\Theta ,s)$ in (2.9). The integrations over $\psi _\mu (r)$ and $\varphi _\mu (r)$ lead to a Kronecker-$\delta $ which gives a constraint, while the $\vartheta _\mu (r)$ integration involves the Haar measure (2.4) and leads to a more complicated function. We shall also call it a constraint for simplicity. 
It is convenient to define a symbol $\delta ^a(C)$ for integer $C$ by $$\delta ^{a}(C) = \left\{ \begin{array}{l} \int _{-\pi }^\pi e^{iC\psi } \frac{d\psi }{2\pi } = \delta _{C,0} \mbox{ for } a=\psi ,\varphi ,\\ \int _{-\pi }^\pi H(\vartheta )e^{iC\vartheta } \frac{d\vartheta }{2\pi } = \left\{ \begin{array}{l}1/(1-C^2/4) \mbox{\quad if $C$ is a multiple of 4},\\ 0 \mbox{\quad otherwise} \end{array} \right\} \mbox{ for } a=\vartheta .\end{array}\right.$$ The argument of the constraint which arises from the $\Theta _\mu ^a(r)$-integration becomes $$\begin{aligned} C_\mu ^a(r) & \equiv & \sum_{\nu >\mu ,n} [s_{(8,1,2)n} l_{\mu \nu }^n(r)- s_{(6,5,4)n} l_{\mu \nu }^n(r-\nu )]\\ & - & \sum_{\nu <\mu ,n}[s_{(8,7,6)n} l_{\nu \mu }^n(r) - s_{(2,3,4)n} l_{\nu \mu }^n(r-\nu )] + J_\mu (r)\hat{s}_{\mu \hat{n}}^a(r), \nonumber \end{aligned}$$ where one has to use the first, second, or third subscript on $s$ for $a=\psi ,\vartheta ,\varphi $ respectively. So we end up with the following expression for the expectation value of the loop: $$Z[J] = \sum_{\hat{n}} W_{\hat{n}} \sum_{l_{\rho \nu }^n(p)} \left(\prod_{p_{\rho \nu },n} (\epsilon _n)^{l_{\rho \nu }^n(p)} I_{l_{\rho \nu }^n(p)}(\frac{\beta }{16}) \right) \prod_{r\mu a} \delta ^a [C_\mu ^a(r)].$$ Integrating out unnecessary variables ====================================== The constraint equations (2.17), as they stand, have a quite different character than the corresponding ones in the abelian case in [@bmk]. There one had one parameter $l_{\mu \nu }(r)$ for every plaquette $r_{\mu \nu }$, in our case we have $2^7$ parameters $l^n_{\mu \nu }(r)$ which are characterized by the additional index $n$. Any attempt of a physical interpretation of the $l_{\mu \nu }^n(r)$ would be premature at this stage. Let us consider a fixed plaquette $r_{\mu \nu }$ for the moment, and suppress the indices $r_{\mu \nu }$. We first look for a suitable linear transformation from the parameters $l^n$ to new parameters $m^i$. 
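The $\vartheta$ case of (2.16) can be verified by direct numerical integration of the Haar measure (2.4) against $e^{iC\vartheta}$. The sketch below (an illustration only) uses a simple midpoint rule; by symmetry the imaginary part vanishes, so only $\cos(C\vartheta)$ is kept.

```python
import math

def haar_moment(C, n=100000):
    # integral_{-pi}^{pi} H(t) e^{iCt} dt / (2 pi) with H(t) = (pi/2) |sin 2t|;
    # the imaginary part vanishes by symmetry, so only cos(Ct) is integrated
    h = 2 * math.pi / n
    s = 0.0
    for k in range(n):
        t = -math.pi + (k + 0.5) * h   # midpoint rule
        s += (math.pi / 2) * abs(math.sin(2 * t)) * math.cos(C * t)
    return s * h / (2 * math.pi)
```

The numerical values reproduce the stated rule: $1/(1-C^2/4)$ when $C$ is a multiple of 4 (e.g. $1$, $-1/3$, $-1/15$ for $C=0,4,8$), and zero otherwise. This is consistent with the Fourier series of $|\sin 2\vartheta|$, which contains only harmonics $\cos(4k\vartheta)$.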
There are 8 combinations of the $l^n$ which play a special role, namely the 8 sums $\sum_{n=1}^{128}s_{in}l^n$ which appear in the constraints (2.17) (remember that $s_{8n}$ was fixed to 1). We will choose these 8 combinations as new variables $m^i,\:i=1,\cdots ,8$, eliminate the first eight $l^n$, and keep the rest of the parameters as they are. The $8 \times 8$ matrix made up of the $s_{ij}$ with $i,j = 1,\cdots ,8$ will be denoted by $S$. One may consider the $s_{in}$ as a set of $2^7$ vectors ${\bf S}_n=(s_{1n},\cdots ,s_{7n},s_{8n}=1)$ with components $\pm 1$. Up to now we did not specify the order in which these vectors ${\bf S}_n$ should be enumerated. It is now convenient to choose the ${\bf S}_n$ associated with the first eight values of $n$ in such a way that $\epsilon _n=1$ for $n=1,\cdots ,8$, with $\epsilon _n$ the signs of the trace (2.10). This can be done in many ways. To be definite we give our choice in the appendix. Our criteria were a small but non-vanishing determinant of $S$ (it is 128 for the $S$ in A.1), and a structure as transparent as possible in some of the equations below. The transformation finally becomes $$\begin{aligned} m^i & = & \sum_{n=1}^{128}s_{in}l^n = \sum_{j=1}^8s_{ij}l^j + \sum_{\alpha =9}^{128}s_{i\alpha} l^\alpha \mbox{\quad for \quad } i=1,\cdots,8,\nonumber\\ m^\alpha & = & l^\alpha \mbox{\quad for \quad } \alpha = 9,\cdots , 128.\end{aligned}$$ In matrix form the transformation reads $$\left(\begin{array}{cc}m_c\\m_f\end{array}\right) =\left(\begin{array}{cc} S & T\\0 & 1 \end{array}\right) \left(\begin{array}{cc}l_c\\l_f\end{array}\right) ,\quad \left(\begin{array}{cc}l_c\\l_f\end{array}\right) =\left(\begin{array}{cc} S^{-1} & -S^{-1}T\\0 & 1 \end{array}\right) \left(\begin{array}{cc}m_c\\m_f\end{array}\right).$$ We have split $m$ into an eight dimensional “constrained” vector $m_c$, and a 120-dimensional “free” vector $m_f$. Here $S$ is the $8\times 8$ matrix defined before, with $S_{ij} = s_{ij}$. 
$T$ is an $8\times120$ matrix, if we enumerate the columns from 9 to 128 we have $T_{i\alpha } = s_{i\alpha }$. There is an important restriction which has to be imposed on the transformation. It is this restriction which will later on lead to the area law. If the $l^n$ run over all integers, the same will be true for the $m^\alpha $ with $\alpha =9,\cdots ,128$, as seen from (3.1). But it is not true for the $m^i$ with $i=1,\cdots ,8$. From the inversion of the transformation in (3.2) one finds that only those $m^i$ appear, which fulfill the condition $$l^j = \sum_{i=1}^{8} (S^{-1})_{ji} m^i - \sum_{i=1}^{8} (S^{-1})_{ji} \sum_{\alpha =9}^{128} s_{i\alpha } m^\alpha \stackrel{!}{=} \mbox{ integer for }j=1,\cdots,8.$$ A computation of the matrix elements $(S^{-1})_{ji}$ shows that they are integer or half integer, and that the sums over the elements of any row are integer. Because the $s_{i\alpha }$ are $\pm 1$, this implies that the second sum in (3.3) is automatically integer. Therefore the condition simplifies to $\sum_{i=1}^{8}(S^{-1})_{ji} m^i \stackrel{!}{=}$ integer for $j=1,\cdots ,8$. This restriction finally becomes equivalent to $$m^i = \mbox{ even for all }i, \mbox{ or } m^i = \mbox{ odd for all }i = 1,\cdots ,8.$$ Introducing the transformation (3.2) into the product in (2.18) one obtains $$\sum_{l^n} \prod_n (\epsilon _n)^{l^n} I_{l^n}(\frac{\beta }{16}) = \sum_{m^i} \sum_{m^\alpha } \left( \prod_{j=1}^8 I_{l^j(m^i,m^\alpha )} (\frac{\beta }{16}) \right) \prod_{\alpha =9}^{128} (\epsilon _\alpha )^{m^\alpha }I_{m^\alpha }(\frac{\beta }{16}),$$ with $l^j(m^i,m^\alpha ) = \sum_i(S^{-1})_{ji}m^i - \sum_{\alpha } (S^{-1}T)_{j\alpha }m^{\alpha }$. The sum over $m^i$ only runs over the subset which fulfills (3.4). The $m^\alpha $ for $\alpha =9,\cdots,128$ do not show up in the constraints, therefore the summations can be performed. 
To do this we introduce the integral representation for all the modified Bessel functions, thereby partially reversing the previous step of expanding the exponents. $$\epsilon ^l I_l(\frac{\beta }{16}) = \int _{-\pi }^\pi \frac{d\gamma }{2\pi } \exp[\epsilon \frac{\beta }{16} \cos \gamma + il\gamma ] \mbox{\quad for } \epsilon =\pm 1.$$ The $m^\alpha $ appear in the exponent now, and the summations can be performed with the help of the Poisson sum formula $$\sum _{m^\alpha }\exp \left[ i[\gamma _\alpha -\sum_{j=1}^8\gamma _j(S^{-1}T)_{j\alpha }]m^\alpha \right] = 2\pi \sum_{k_\alpha } \delta [\gamma _\alpha -\sum_{j=1}^8\gamma _j(S^{-1}T)_{j\alpha } - 2\pi k_\alpha ].$$ All $\gamma _\alpha $ are integrated over the interval $-\pi <\gamma _\alpha \le\pi $, thus exactly one $k_\alpha $ contributes in the sum on the rhs. Furthermore the $\gamma _\alpha $ appear only as arguments of periodic cosines in the remaining part of the integrand. Therefore we may simply use the $\delta $-functions to eliminate the $\gamma _\alpha $ by putting $\gamma _\alpha =\sum_{j=1}^8 \gamma _j(S^{-1}T)_{j\alpha }$ for $\alpha =9,\cdots ,128$. We thus are left with $$\begin{aligned} \sum_{l^n} \prod_n (\epsilon _n)^{l^n} I_{l^n}(\frac{\beta }{16}) = \sum_{m^i} \int \frac{d\gamma _1}{2\pi }\cdots \frac{d\gamma _8}{2\pi } \exp[\frac{\beta }{16}A(\gamma ) + i\sum_{i,j=1}^8\gamma _j (S^{-1})_{ji}m^i], \end{aligned}$$ where we have introduced the function $$A(\gamma ) = \sum _{j=1}^8\cos \gamma _j + \sum _ {\alpha =9}^{128}\epsilon _\alpha \cos [\sum _{j=1}^8\gamma _j(S^{-1}T)_{j\alpha }].$$ In order to fulfill the conditions (3.4) we put $$m^i = 2 \tilde{m}^i + f \mbox{, with $\tilde{m}^i$ integer, and $f=0$ or 1}.$$ For our choice of the transformation, the $f$-dependence of (3.8) becomes particularly simple and only involves $\gamma _8$. We now reintroduce the suppressed indices $p_{\rho \nu }$ into (3.8), (3.10), and insert the result into the partition function (2.18). 
We find $$\begin{aligned} Z[J] = & & \sum_{\hat{n}}W_{\hat{n}} \sum_{\tilde{m}_{\rho \nu }^i(p)} \sum_{f_{\rho \nu }(p)} \prod_{p_{\rho \nu }} \left( \int \frac{d\gamma _1}{2\pi }\cdots \frac{d\gamma _8}{2\pi }\right.\\ & & \left.\exp\left[ \frac{\beta }{16}A(\gamma ) + 2i\sum_{i,j=1}^8 \gamma _j (S^{-1})_{ji} \tilde{m}_{\rho \nu }^i(p) + i \gamma _8f_{\rho \nu }(p)\right] \right) \prod_{r\mu a}\delta ^a[C_\mu ^a(r;\tilde{m},f,J,\hat{s} )].\nonumber \end{aligned}$$ The constraints (2.17) now depend upon $\tilde{m}^i$ and $f$. For the formulation it is convenient to extend $f_{\mu \nu }(r)$ to an antisymmetric matrix. Thus $f_{\mu \nu }(r) =0,1$ for $\mu <\nu $, and $f_{\mu \nu }(r) =0,-1$ for $\mu >\nu $. The constraints then become $$\begin{aligned} C_\mu ^a(r) & = & 2\sum_{\nu >\mu } [\tilde{m}_{\mu \nu }^{(8,1,2)}(r) - \tilde{m}_{\mu \nu }^{(6,5,4)}(r-\nu )] - 2\sum_{\nu <\mu }[\tilde{m}_{\nu \mu }^{(8,7,6)}(r) - \tilde{m}_{\nu \mu }^{(2,3,4)}(r-\nu )] \nonumber\\ & + & \sum_{\nu \neq \mu } \Delta_\nu f_{\mu \nu }(r) + J_\mu (r)\hat{s}_{\mu \hat{n}}^a(r), \end{aligned}$$ where $\Delta _\nu $ denotes the left lattice derivative. The tensor $f_{\mu \nu }(r)$ will be recognized as the $Z_2$ field strength tensor. As in (2.17) one has to use the first, second, or third upper index of $\tilde{m}_{\mu \nu }$ for $a=\psi ,\vartheta ,\varphi $. There is a symmetry relation in (3.11) which underlines the importance of the $Z_2$ tensor $f_{\rho \nu }(p)$. Replace $\beta \rightarrow -\beta $ and substitute $\gamma _j \rightarrow \gamma _j + \pi $ for all $j$ (the integrand is periodic). Using the definition of $A(\gamma )$ in (3.9) and the fact that $\sum_{j=1}^8(S^{-1}T)_{j\alpha }=1$ for all $\alpha $, one finds that $A(\gamma )$ reverses sign, i.e. $\beta A(\gamma )$ stays invariant. The second term in the exponent of (3.11) changes by a multiple of $2\pi i$, because $\sum_{j=1}^8(S^{-1})_{ji} = \delta _{8i}$, and $\tilde{m}_{\rho \nu }^8(p)$ is integer. 
Finally the third term changes by $i\pi f_{\rho \nu }(p)$. In this way one finds that the bracket $(\cdots )$ in (3.11) is even in $\beta $ for $f_{\rho \nu }(p)=0$, and odd in $\beta $ for $f_{\rho \nu }(p)= 1$. Up to this point all formulae were exact. It appears tempting now to proceed as follows in the continuum limit of large $\beta $. If the function $A(\gamma )$ has an isolated maximum, the integrals over $\gamma _j$ are dominated by the region where $A(\gamma )$ becomes maximal, and the integrations over the $\gamma _j$ can be extended to the interval from $-\infty$ to $\infty$. A quadratic expansion around the maximum would then lead to gaussian integrals. If all the $\epsilon _\alpha $ were equal to 1, we would indeed have a simple maximum at $\gamma _j=0$ for all $\gamma _j$. This would correspond to the situation in the abelian theory and to the replacement $I_l(z) \rightarrow e^z e^{-l^2/2z}/\sqrt{2\pi z}$ which was used in [@bmk] (these authors mistakenly used $e^{-l^2/4z}$ instead of the correct $e^{-l^2/2z}$, which has, however, no consequences there). In our case the different signs of the $\epsilon _\alpha $ change the situation drastically. One finds that the function $A(\gamma )$ assumes its maximal value of 16 if $\gamma _1,\gamma _2,\gamma _3$ are arbitrary, and $\gamma _j = 0$ for $j=4,\cdots ,8$. Even a quadratic approximation in $\gamma _j$ for $j=4,\cdots,8,$ and fixed $\gamma _1,\gamma _2,\gamma _3$ is not possible because the matrix of the second derivatives has two zero eigenvalues. For $\gamma _1$=$\gamma _2$=$\gamma _3$=$0$ even three eigenvalues vanish. So it would be necessary to go to a higher order in the expansion; but then the integrations are no longer gaussian and cannot be performed. This is the way in which non-abelian gauge theory protects itself from being solved analytically! Nevertheless the present formalism will clearly show how the area law for large loops arises.
We keep $\beta $ arbitrary, not necessarily large, and first solve the constraints. Solution of the constraints =========================== It is convenient to rewrite the $\vartheta $-constraints in the form $$\delta ^\vartheta [C_\mu ^\vartheta (r)] = \sum_{k_\mu (r)=-\infty}^\infty \delta ^\vartheta [4k_\mu (r)] \delta _{C_\mu ^\vartheta (r)-4k_\mu (r),0}.$$ This introduces additional sums over the $k_\mu (r)$, and factors $\delta ^\vartheta [4k_\mu (r)]$. The $\vartheta $-constraints now also appear in the form of a Kronecker $\delta $. Let us first consider the constraints modulo 2, which obviously only concerns the second line of (3.12). The factors $\hat{s}_{\mu \hat{n}}^a(r) = \pm 1$ may be dropped, and for all three cases $a=\psi ,\vartheta ,\varphi $ we find the equations $$\sum_\nu \Delta_\nu f_{\mu \nu }(r) + J_\mu (r) = 0 \mbox{\quad (mod 2) for all\quad }r,\mu .$$ These are identical to the equations for $l_{\mu \nu }(r)$ in the abelian case, except that they are equations modulo 2. They already show the appearance of a $Z_2$ structure. We concentrate on a geometrical formulation of the solution. Recall that $f_{\mu \nu }(r)=0,1$ for $\mu <\nu $ and that $f_{\mu \nu }(r)$ is antisymmetric. Therefore it is convenient to use the symbol $\epsilon _{\mu \nu } = (1,-1,0)$ for $(\mu <\nu ,\mu >\nu ,\mu =\nu )$. Let $S$ be a two-dimensional surface, and $$f_{\mu \nu }^{(S)}(r) = \left\{ \begin{array}{l} \epsilon _{\mu \nu }\mbox{\quad if the plaquette }r_{\mu \nu }\mbox{ is part of the surface $S$,}\\ 0\mbox{\quad otherwise.} \end{array} \right.$$ The following statements hold: - If $S$ has the Wilson loop as boundary, then $f_{\mu \nu }^{(S)}(r)$ is a solution of (4.2). - If $S$ is a closed surface, then $f_{\mu \nu }^{(S)}(r)$ is a solution of the homogeneous equation $\sum_\nu \Delta_\nu f_{\mu \nu }^{(H)}(r) = 0$ (mod 2).
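The first statement can be checked explicitly on a two-dimensional slice: for a rectangular loop in the 1-2 plane, the minimal-surface layer satisfies (4.2) modulo 2 on every link, since signs are irrelevant modulo 2. In the sketch below (an illustration; the lattice extent and rectangle size are arbitrary choices), $\Delta_\nu$ is the left lattice derivative.

```python
# check (4.2) for the minimal-surface layer of a rectangular A-by-B Wilson loop
# in the 1-2 plane; lattice extent N and rectangle size are illustrative choices
A, B, N = 3, 2, 8

def f12(x, y):
    # f_{12}^{(min)}(r) = 1 on the plaquettes of the minimal surface, 0 otherwise
    return 1 if 0 <= x < A and 0 <= y < B else 0

def J(mu, x, y):
    # loop current J_mu(r) on the links of the rectangle boundary
    # (its signs are irrelevant modulo 2)
    if mu == 1:   # link from (x, y) to (x+1, y)
        return 1 if 0 <= x < A and y in (0, B) else 0
    else:         # link from (x, y) to (x, y+1)
        return 1 if 0 <= y < B and x in (0, A) else 0

def constraint(mu, x, y):
    # sum_nu Delta_nu f_{mu nu}(r) + J_mu(r) mod 2, Delta = left lattice derivative
    if mu == 1:
        d = f12(x, y) - f12(x, y - 1)      # Delta_2 f_{12}
    else:
        d = -(f12(x, y) - f12(x - 1, y))   # Delta_1 f_{21} = -Delta_1 f_{12}
    return (d + J(mu, x, y)) % 2

violations = sum(constraint(mu, x, y)
                 for mu in (1, 2) for x in range(-N, N) for y in range(-N, N))
```

Only links on the loop pick up a nonzero $\sum_\nu \Delta_\nu f_{\mu\nu}$, and there it cancels $J_\mu$ modulo 2, so the constraint holds on every link.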
The most general solution can be obtained as a superposition of a special solution $f_{\mu \nu }^{(S)}(r)$ with $S$ bounded by the loop, and a sum over solutions of the homogeneous equation. The proof is obvious. Equation (4.2) involves exactly all the plaquettes which contain the link $r_\mu $. Links $r_\mu $ on the loop appear in an odd number of plaquettes of the associated surface $S$, while links $r_\mu $ which are not part of the loop appear in an even number (including 0) of plaquettes of $S$. In the following we will restrict the discussion to planar loops for simplicity. A solution of special importance is the layer belonging to the minimal surface of the Wilson loop $W$, $$f_{\mu \nu }^{(min)}(r) = \left\{ \begin{array}{l} \epsilon _{\mu \nu }\mbox{\quad if the plaquette }r_{\mu \nu }\mbox{ is part of the minimal surface,}\\ 0\mbox{\quad otherwise.} \end{array} \right.$$ The $f_{\mu \nu }^{(min)}(r)$ associated with the minimal surface fulfills (4.2) exactly, not only modulo 2, if the loop is oriented appropriately. For different surfaces, on the other hand, this is not true. The general solution of the homogeneous equation $\sum_\nu \Delta_\nu f_{\mu \nu }^{(H)}(r) = 0$ (mod 2) can be written down explicitly. It depends upon the dimension $d$. In order to guarantee the antisymmetry of $f_{\mu \nu }(r)$ we introduce the symbol (mod$_{\mu \nu }$ 2); it is identical with (mod 2) for $\mu <\nu $, but reverses sign for $\mu >\nu $. One then has $$\begin{aligned} f_{\mu \nu }^{(H)}(r) = \left\{\begin{array}{r} \sum_\lambda \epsilon _{\mu \nu \lambda }\Delta _\lambda f(r) \mbox{\quad (mod$_{\mu \nu }$ 2) for $d=3$,}\\ \sum_{\lambda \kappa }\epsilon _{\mu \nu \lambda \kappa }\Delta _\lambda f_\kappa (r) \mbox{\quad (mod$_{\mu \nu }$ 2) for $d=4$.}\end{array} \right. \end{aligned}$$ The function $f(r)$ in three dimensions is unique up to a constant. For $d=4$ one has a gauge freedom, i.e. 
adding a gradient $\Delta_\kappa \Lambda (r)$ to $f_\kappa (r)$ will not change $f_{\mu \nu }^{(H)}(r)$. The simplest way to remove this ambiguity is to choose an axial gauge by imposing $\sum_\kappa n_\kappa f_\kappa (r) =0$ (mod 2). In both cases the values of $f(r)$ and $f_\kappa (r)$, respectively, are restricted to 0 and 1. Switching from one surface $S$ to another $S'$ for the special solution can also be rephrased in terms of the solution of the homogeneous equation. In $d=3$ dimensions it corresponds to changing $f(r)$ by 1 inside the volume between the two surfaces. (The use of the left derivative in (4.2), (4.5) specifies which points of the surface have to be considered as inside or outside). For $d=4$ one has to choose a three-dimensional volume spanned by the surfaces with, roughly speaking, normal vector in $\kappa $-direction at the point $r$. One then has to change $f_\kappa (r)$ by 1 inside the volume, and subsequently transform to the axial gauge. The essential part of the constraints has now been solved. We put $$f_{\mu \nu }(r) = f_{\mu \nu}^ {(min)}(r) + \sum_{\lambda \kappa }\epsilon _{\mu \nu \lambda \kappa } \Delta _\lambda f_\kappa (r) \mbox{\quad (mod$_{\mu \nu }$ 2).}$$ The minimal surface layer $f_{\mu \nu}^ {(min)}(r)$, defined in (4.4), is no longer a variable, but uniquely fixed by the loop. The $f_\kappa (r)=0,1$ are unconstrained. The index $\kappa $ on $\epsilon _{\mu \nu \lambda \kappa },f_\kappa (r)$ and in the sum appears for $d=4$ only. For $d=3$ it has to be dropped, here and wherever it appears in subsequent formulae. We next introduce the solution (4.6) into (3.12). The second line is now definitely even, therefore we denote it by $$2R_{\mu \hat{n}}^a(r) \equiv \sum_\nu \Delta_\nu f_{\mu \nu }(r) + J_\mu (r)\hat{s}_{\mu \hat{n}}^a(r).$$ For later use we specify the variables upon which $R_{\mu \hat{n}}^a(r)$ can depend. The term $J_\mu (r)\hat{s}_{\mu \hat{n}}^a(r)$ is strictly local, i.e. only depends on the argument $r$. 
The $f_\kappa (r)$, on the other hand, appear as $\Delta _\nu \Delta _\lambda f_\kappa (r)$ with $\nu \ne \lambda $. Therefore they enter also with shifted arguments $r'$. The points $r'$ and $r$ are neighbors in the sense that all components of $r-r'$ are either 0 or 1. Finally, one has to note that (4.6) is only an equation modulo 2. Therefore $\sum_\nu \Delta _\nu f_{\mu \nu }(r)$ as well as $R_{\mu \hat{n}}^a(r)$ can also depend on the minimal layer $f_{\mu \nu }^{(min)}(r)$. The constraints (3.12) now become $$\begin{aligned} C_\mu ^a(r) = & & 2\sum_{\nu >\mu } [\tilde{m}_{\mu \nu }^{(8,1,2)}(r) - \tilde{m}_{\mu \nu }^{(6,5,4)}(r-\nu )] - 2\sum_{\nu <\mu }[\tilde{m}_{\nu \mu }^{(8,7,6)}(r) - \tilde{m}_{\nu \mu }^{(2,3,4)}(r-\nu )] \nonumber\\ & & + 2R_{\mu \hat{n}}^a(r) \stackrel{!}{=}4\delta ^{a\vartheta } k_\mu (r). \end{aligned}$$ For $a=\psi $ (first upper index $i$ on $\tilde{m}^i$) and $a=\varphi $ (third upper index $i$ on $\tilde{m}^i$), i.e. for the even indices $i$, the index $i$ appears in both sums of (4.8). For a fixed $r$ one has $2d$ linear equations (corresponding to $a=\psi ,\varphi $ and $\mu =1,\cdots , d$) for $4 d(d-1)/2$ quantities $\tilde{m}_{\mu \nu }^{(2,4,6,8)}$. These equations are not independent due to the identity $$\sum_\mu [C_\mu ^\psi (r)- C_\mu ^\varphi (r-\mu )] = 2 \sum_\mu [R_{\mu \hat{n}}^\psi (r) - R_{\mu \hat{n}}^\varphi (r-\mu )] .$$ The rhs of (4.9) vanishes because $f_{\mu \nu }$ is antisymmetric, and because the neighboring projectors in the loop have to coincide as mentioned in sect. 2. We checked explicitly for $d=3,4$ that the equations may be simply used to eliminate some of the $\tilde{m}_{\mu \nu }^i(r)$. Any eliminated $\tilde{m}_{\mu \nu }^i(r)$ depends linearly on other unconstrained $\tilde{m}_{\mu '\nu '}^{i'}(r')$ and on $R_{\mu '\hat{n}}^a(r')$, where the components of $r'$-$r$ are either 0 or $\pm 1$. For $a=\vartheta $ (second upper index $i$ on $\tilde{m}^i$), i.e. 
for the odd indices $i$, the situation is even simpler. Each index $i$ enters only in one of the sums in (4.8), so it is convenient to eliminate some of the $\tilde{m}_{\mu \nu }^1(r)$ and $\tilde{m}_{\nu \mu }^7(r)$. Finally we can write the solutions of the constraints in the following form, which eliminates some of the $\tilde{m}_{\mu \nu }^i(r)$, leaving the rest unconstrained. $$\tilde{m}_{\mu \nu }^i(r) = L_{\mu \nu }^i [\tilde{m}_{\mu '\nu '}^{i'}(r'),R_{\mu '\hat{n}}^a(r')] + 2k_{\mu \nu }^i(r),$$ with $k_{\mu \nu }^i(r) = (k_\mu(r),k_\nu(r),0)$ for $(i=1,i=7,$otherwise). The $L_{\mu \nu }^i$ are linear combinations of their arguments in which only the coefficients $0,\pm 1$ appear. The arguments $r,r'$ are neighbors in the sense explained before. The loop current $J_\mu (r)$ enters only in $R_{\mu '\hat{n}}^a(r')$. Note the drastic difference in the type of the constraints (4.2) (or the corresponding constraints in eq. (6) of ref [@bmk] for the $U(1)$ case) on the one hand, and the constraints (4.8) just considered on the other. The former involve a difference operator applied to one plaquette variable $f_{\mu \nu }(r)$, the corresponding Green function being non-local and coupling the solution to the current over a long range. In contrast, the latter constraints involve several plaquette variables $\tilde{m}_{\mu \nu }^i(r)$ and can simply be used to eliminate some of these. This elimination leads to an almost local coupling to the current, involving neighbors only. Confinement =========== The essential feature, which finally arose in our formulation, is the presence of the $Z_2$ field strength tensor $f_{\mu \nu }(r)$, which obeys the field equation (4.2). The solutions of this equation can be characterized by two-dimensional surfaces: a layer with the Wilson loop as boundary, possibly together with closed surfaces. One may expect that the presence of such a layer will lead to an area law.
For a qualitative understanding of the confinement mechanism we use the explicit form (4.6) for the solution of the field equation (4.2). It contains the fixed layer $f_{\mu \nu }^{(min)}(r)$, together with the unconstrained $Z_2$ variable $f_\kappa (r)$. The solutions of the remaining constraint equations (4.8) for the $\tilde{m}_{\mu \nu }^i(r)$ have the form (4.10). Consider now the expression (3.11) for the partition function $Z[J]$, and introduce the solutions (4.6), (4.10) of the constraints into the exponential on the rhs. This gives $$\begin{aligned} \lefteqn{2\sum_{i,j=1}^8\gamma _j(S^{-1})_{ji}\tilde{m}_{\rho \nu }^i(p) + \gamma _8f_{\rho \nu }(p) }\nonumber\\ & = & 2\sum_{i,j=1}^8\gamma _j(S^{-1})_{ji}\{L_{\rho \nu }^i[\tilde{m}_{\rho '\nu '}^{i'}(p'),R_{\rho '\hat{n}}^a(p')] + 2k_{\rho \nu }^i(p)\}\\ & + & \gamma _8\{f_{\rho \nu }^{(min)}(p) + \sum \epsilon _{\rho \nu \lambda \kappa }\Delta _\lambda f_\kappa (p) \mbox{\quad (mod$_{\rho \nu }$ 2)}\}.\nonumber \end{aligned}$$ The Wilson loop enters into this expression in two different ways. First, there is a dependence on the current $J_{\rho '} (p') \hat{s}_{\rho '\hat{n}}^a(p')$ which arises from the second term of $R_{\rho '\hat{n}}^a(p')$ in (4.7). Secondly, there is a dependence on the minimal layer $f_{\rho \nu }^{(min)}(p)$ which enters into the first term of $R_{\rho '\hat{n}}^a(p')$, as well as explicitly in the factor of $\gamma _8$. The current $J$ is present on a one-dimensional set, the minimal layer $f^{(min)}$ on a two-dimensional set. Besides this, both quantities enter in a quite similar way into (5.1). One may therefore expect that for large loops the dependence on the one-dimensional current $J$ can be neglected compared to the dependence on the two-dimensional layer $f^{(min)}$. If we neglect the dependence on $J_{\rho '} (p')$ the partition function becomes independent of the $\hat{s}_{\rho '\hat{n}}^a(p')$ and the sum $\sum_{\hat{n}}W_{\hat{n}} = 2$ can be performed.
Assuming that the $\gamma $-integrations have been done, the degrees of freedom are now in the remaining unconstrained $\tilde{m}_{\rho \nu }^i(p) = -\infty,\cdots,\infty$, the $f_\kappa (p)=0,1$, and the $k_\mu (r) =-\infty,\cdots,\infty$ introduced at the beginning of sect. 4. The discussion of (4.8) showed that the solutions couple neighbors only. This means that (5.1), which appears in the exponential in (3.11), only depends on these variables with arguments $p,p',p''$; here $p',p''$ are neighbors in the sense that all components of $p'$-$p$ are $0,\pm 1$, all components of $p''$-$p$ are $0,\pm 1,\pm 2$. We digress for a technical point. Neither the exponential in (3.11) with its complex argument, nor the factors $\delta ^\vartheta [4k]$ are positive definite. Actually, according to the definition (2.16), one has $\sum_k\delta ^\vartheta [4k]=0$, because the Haar measure fulfills $H(0) = 0$. If desired, one could bring the expression into the usual form of a partition function with positive summands by performing a twofold partial summation with respect to the $k_\mu (r)$. The whole loop dependence is now in the $f_{\rho \nu }^{(min)}(p) $ belonging to the minimal surface. It acts like a space-time dependent external field, comparable, say, to a constant magnetic field switched on in a finite volume of an Ising model. $Z[J]$ is a partition function where the variables couple to neighbors only, a well-known standard situation in statistical mechanics. For large subsystems it therefore factorizes into products referring to the subsystems and, correspondingly, has an exponential dependence on the volume. This is, of course, nothing else but the fact that the free energy is an extensive quantity. Rigorous proofs, which apply in any dimension, can be found in [@fac]. Consider now a loop $0< x_1\le R,0<x_4\le T$ in the $x_1$-$x_4$-plane for definiteness, with $R$ and $T$ large.
We divide the $x_1$-$x_4$-plane inside, as well as outside of the loop, into rectangles; these rectangles are then extended to $d$-dimensional boxes into the orthogonal directions. This means that we define regions $V^{(n)}$ by the inequalities $r_1^{(n)} <x_1\le r_2^{(n)},t_1^{(n)} <x_4\le t_2^{(n)},x_2,x_3$ arbitrary. The rectangle $r_1^{(n)} <x_1\le r_2^{(n)},t_1^{(n)} <x_4\le t_2^{(n)}$ has to lie either completely inside the loop, or completely outside the loop. If not only $R$ and $T$, but also all the differences $r_2^{(n)} - r_1^{(n)}$ and $t_2^{(n)} - t_1^{(n)}$ are large, the partition function will factorize, $$Z[J] = \prod_n Z^{(n)}.$$ Consider now the ratio $Z[J]/Z[0]$, with $Z[0]$ the expression without loop. Obviously all the outer factors cancel. For the inner ones, on the other hand, one has $f_{\rho \nu }^{(min)}(p) =1$ in the numerators, but 0 in the denominators. Thus the ratios are different from 1. Because of the factorization property, the volumes of the regions $V^{(n)}$ have to enter in the exponent. This finally implies that the area $A = R\times T$ of the loop enters in the exponent, so the result may be written as $$Z[J]/Z[0] = \exp[-\sigma A].$$ We have obtained the area law for large loops. Several comments are appropriate here. First, one may wonder what would happen to our argument if we replaced $f_{\rho \nu }^{(min)} (p)$, associated with the minimal surface, by a solution $f^{(S)}_{\rho \nu }(p)$ belonging to a different surface $S$. Obviously the simplicity of the situation for the regions $V^{(n)}$ inside and outside would break down, and factorization would not lead to a simple relation. The minimal surface is thus essential for the argument. Neglecting the dependence on $J$ would certainly have been wrong if performed in the original expression described by the Euler angles $\psi _\mu (r),\vartheta _\mu (r),$ $\varphi _\mu (r)$. There only the current $J$ was present, and no layer $f^{(min)}$ showed up.
Therefore confinement has to arise from the dependence on $J$ in a complicated way. In our formulation the formalism led to another quantity, the minimal layer $f^{(min)}$. This appears as the natural quantity which describes the long distance physics and dominates the residual direct dependence on $J$. For illustration one can have a look at the strong coupling limit. According to the discussion at the end of sect. 3, the bracket $(\cdots )$ in (3.11) is even in $\beta $ for $f_{\rho \nu }(p)=0$, and odd in $\beta $ for $f_{\rho \nu }(p)=\epsilon _{\rho \nu }$. Therefore the order $\beta ^0$ only contributes outside the surface layer, while the order $\beta $ terms come from plaquettes on the surface layer. In this way we recover the well-known lowest-order strong coupling result $Z[J]/Z[0]\sim \beta ^A$. More importantly, we have seen that indeed $f^{(min)}$ is the crucial quantity. For a Wilson loop in the adjoint representation one does not expect an area law, because the charges can be screened by pair creation. This can be easily checked in our approach. The traces in the adjoint and in the fundamental representation are related by $TrW_{(1)}=[TrW_{(1/2)}]^2-1$. With our parametrization we obtain $$\begin{aligned} [TrW_{(1/2)}]^2 & = & \left[ \sum_{\hat{n}} W_{\hat{n}} \exp [i\sum_{q\lambda b}J_\lambda (q)\hat{s}_{\lambda \hat{n}}^b(q)\Theta _\lambda ^b(q)]\right] ^2 \nonumber\\ & = & \sum_{\hat{n}\hat{n}'} W_{\hat{n}}W_{\hat{n}'} \exp \left[ i\sum_{q\lambda b}J_\lambda (q)[\hat{s}_{\lambda \hat{n}}^b(q) +\hat{s}_{\lambda \hat{n}'}^b(q)] \Theta _\lambda ^b(q)\right].\end{aligned}$$ The last term on the rhs of the modified equation (3.12) becomes $J_\mu (r)[\hat{s}_{\mu \hat{n}}^a(q)+\hat{s}_{\mu \hat{n}'}^a(q)]$ and is always even. Therefore (4.2) becomes a homogeneous equation; no minimal layer and no area law will appear. Similarly one can see that we do not get confinement if we replace $SU(2)$ by $SO(3)$.
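The screening can be made explicit with a small numerical sketch (our own illustration, using the conventions of (4.2) in the plane of the loop with the left derivative): since the adjoint current enters doubled, it drops out modulo 2, and the trivial configuration $f=0$ already satisfies the constraint, so no layer is forced:

```python
import numpy as np

N, R, T = 10, 4, 6
J1 = np.zeros((N, N), dtype=int)   # current on the x1-links of a rectangular loop
J1[1:1+R, 1] = J1[1:1+R, 1+T] = 1

f = np.zeros((N, N), dtype=int)    # trivial configuration: no surface at all
c = lambda J: (f - np.roll(f, 1, axis=1) + J) % 2   # mu = 1 part of the constraint

print(c(J1).max())      # 1: fundamental representation, f = 0 violates (4.2)
print(c(2 * J1).max())  # 0: adjoint representation, the doubled current is even
```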
With some technical effort or a more streamlined approach it should be possible to carry through a similar analysis for $SU(3)$. It would be nice to see how the formalism would create the expected $Z_3$ structure. Our conclusions leading to the area law would break down if the result, for whatever reason, were independent of $f_{\rho \nu }(p)$, thereby giving a vanishing string tension. This appears hardly possible on a finite lattice. We have seen before that there is indeed an essential dependence on $f_{\rho \nu }(p)$ in the strong coupling limit $\beta \rightarrow 0$. Such a dependence must survive for all finite $\beta $ because the original expression $Z[J]$ in (2.1) clearly fulfills the strict inequalities $0<Z[J]<Z[0]$. The string tension might, however, vanish in a certain region of $\beta $ after performing the thermodynamic limit. In particular, such an effect could be expected in higher dimensions, where the presence of the two-dimensional layer becomes relatively less important than in lower dimensions. Indeed it is known [@fuenf] that lattice $SU(2)$ has a first order phase transition for $d=5$ at $\beta _c=1.642 \pm 0.015$. We come back to $d=4$. In the end one is interested in the continuum limit $\beta \rightarrow \infty $, which requires a particular investigation. If the string tension is a physical quantity and $\beta $ goes to infinity as prescribed by the renormalization group, a non-vanishing string tension for the lattice theory will persist in the continuum limit. Interpretation and conclusions ============================== There is an extensive literature on the various pictures of confinement which cannot be discussed here. For a recent review we refer to [@sim]. We come directly to the physical interpretation of our results.
The key is equation (4.2) for the $Z_2$ field strength tensor, $$\sum _\nu \Delta _\nu f_{\mu \nu }(r) + J_\mu (r) = 0 \mbox{\quad (mod 2)}.$$ The solutions in form of layers on two-dimensional surfaces were discussed in detail in sect. 4. In $d$=3 dimensions put $f_{\mu \nu }(r) = \sum _\lambda \epsilon _{\mu \nu \lambda }B_\lambda (r) \mbox{\quad (mod$_{\mu \nu }$ 2)}$. Then (6.1) becomes $\nabla \times {\bf B} = {\bf J}$ (mod 2). The magnetic field ${\bf B}$ has sources corresponding to magnetic monopoles. It is reasonable to use the right derivative in the divergence, and to associate the monopole density $\tilde{\rho }$ with cubes as usual. We therefore define $$\tilde{\rho }(r_{123}) = \sum_\nu \Delta _\nu ^{(right)} B_\nu (r) \mbox{\quad (mod 2)}.$$ The solution $f_{\mu \nu }^{(min)}(r)$ then immediately leads to a double layer of monopoles in the cubes on both sides of the minimal surface. For $d$=4 we define the dual tensor $\tilde{f} _{\mu \nu }(r) = (1/2)\sum _{\lambda \kappa } \epsilon _{\mu \nu \lambda \kappa }f _{\lambda \kappa }(r) \mbox{ (mod$_{\mu \nu }$ 2)}$. The conserved (mod 2) magnetic current $\tilde{J}_\mu$ lives on 3-dimensional cubes $r_{\rho \lambda \kappa }$, where $\rho ,\lambda ,\kappa $ denote the three directions orthogonal to $\mu $. $$\tilde{J}_\mu (r_{\rho \lambda \kappa }) = \sum _\nu \Delta_\nu ^{(right)}\tilde{f}_{\mu \nu }(r) \mbox{\quad (mod 2)}.$$ Consider a loop in the $x_1$-$x_4$-plane, with $x_4$ interpreted as euclidean time. Let $x_1,x_4$ be within the loop and suppress the $x_4$-extension of the cubes. For the solution $f ^{(min)}_{\mu \nu }(r)$ we then get a non-vanishing $\tilde{J}_\mu $ on all plaquettes in the $x_1$-$x_2$-plane and in the $x_1$-$x_3$-plane which contact the line $x_2$=$x_3$=0. We thus have a string of electric field $E_1(r) = f_{14}(r)$ in $x_1$-direction, concentrated on $x_2$=$x_3$=0. 
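The double layer can be exhibited numerically. The sketch below is our own illustration: for a rectangular loop in the $x_1$-$x_2$-plane the minimal surface gives $B_3 = f_{12} = 1$ on its plaquettes, and the monopole density $\tilde{\rho}$, computed with the right derivative $\Delta^{(right)}_\nu g(r) = g(r+\hat{\nu}) - g(r)$, is non-zero exactly in the cubes directly above and below the surface:

```python
import numpy as np

N, R, T, h = 10, 4, 5, 4
# B_lambda(r): only B_3 = f_12 is non-zero, on the plaquettes of the
# minimal surface of a rectangular loop at height x3 = h
B = np.zeros((3, N, N, N), dtype=int)
B[2, 2:2+R, 2:2+T, h] = 1

def right_div(B):
    """rho(r) = sum_nu Delta_nu^{right} B_nu(r), Delta^{right} g = g(r+nu) - g(r)."""
    return sum(np.roll(B[nu], -1, axis=nu) - B[nu] for nu in range(3))

rho = right_div(B) % 2             # Z_2 monopole density on cubes
layers = sorted(set(np.transpose(np.nonzero(rho))[:, 2].tolist()))
print(layers, rho.sum())           # [3, 4] 40: R*T monopole cubes on each side
```

This reproduces the double layer of monopoles in the cubes on both sides of the minimal surface.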
The electric string is surrounded by magnetic current loops parallel to the $x_2$-$x_3$-plane which circle around it. The configuration is therefore just dual to an Abrikosov vortex in a type II superconductor, where the magnetic field is encircled by the electric current. Flux quantization is evident: the $Z_2$-structure only allows for one unit of flux. Configurations $f_{\mu \nu }(r)$ in (4.6) with $f_\kappa (r)\ne 0$ belong to other surfaces which are bounded by the loop as discussed in sect. 4. In addition closed surfaces can appear. The interpretation is similar to the above. For illustration, connect e.g. two points in the $x_1$-$x_2$-plane by a path in the form of a stair. Then $\tilde{J}_\mu $ lies on the plaquettes which point from the stair into the positive and negative $x_3$-direction. All surfaces are summed with the appropriate weight in the partition function. A careful investigation of the various weights should give information about the extension of the electric flux tube. We finally check the dual London equation, $\nabla \times \tilde{{\bf J}} =\frac{1}{\Lambda }{\bf E}$ (mod 2). It is easily seen that $(\nabla \times \tilde{{\bf J}})_\mu (r)$ (mod 2) is equal to 0 (1) if the link $r_\mu $ has contact to an even (odd) number of plaquettes with non-vanishing magnetic current. For the examples discussed above this means that $\nabla \times \tilde{{\bf J}}$ runs along the boundary of the set of plaquettes which carry the magnetic current. For the minimal layer, $\nabla \times \tilde{{\bf J}}$ is parallel to ${\bf E}$ as it should be. It is, however, not concentrated on $x_2$=$x_3$=0 as the electric field, but on the four lines $x_2$=0, $x_3=\pm 1$ and $x_3$=0, $x_2=\pm 1$. For the stair, $\nabla \times \tilde{{\bf J}}$ is shifted by $x_3=\pm 1$ with respect to ${\bf E}$. In general the dual London equation is essentially fulfilled; the two sides of the equation are just slightly shifted against each other.
This might be attributed to a non-vanishing Ginzburg-Landau coherence length $\tilde{\xi }$ which leads to a “normal” region near the string, where the London equation is not valid. Let us compare with some familiar types of monopoles in the literature. The charges of the $U(1)$ monopoles in [@bmk] can take all integers, in obvious contrast to our $Z_2$ structure which only allows 0 and 1. A popular definition of monopoles in $SU(2)$ is discussed e.g. in [@tom]. Let $\eta _{(p)} \equiv sign\:Tr\:U_{(p)}$ denote the sign of the plaquette action, and $\eta _c=\prod_{p \in \partial c}\eta _{(p)}$ the product of the $\eta _{(p)}$ around the boundary of the cube $c$. Then $\eta _c=-1$ represents a monopole in the (space-like) cube $c$. There is a $Z_2$ structure as in our case. Another frequently applied definition, reviewed e.g. in [@hay], uses the maximal abelian gauge. In a first step one maximizes the quantity $R=\sum_{r\mu }Tr[\sigma _3U_\mu (r)\sigma _3U_\mu ^+(r)]$. The link matrices are then decomposed into a non-abelian and a $U(1)$ part, e.g. one can take the abelian link angle as the phase of $[U_\mu (r)]_{11}$. The $U(1)$ monopoles are then defined according to the DeGrand-Toussaint construction [@dgt], which allows monopole charges $0,\pm 1,\pm 2$. The monopoles which naturally arose in the present work have no direct relation to any of these. An unconventional feature of our approach is the presence of monopoles in every configuration. In the approaches mentioned above there are plenty of configurations without any monopoles, namely those near the perturbative vacuum. In our case there is always a surface $S$ bounded by the loop. This is associated with an electric string and accompanied by monopole vortices. From a physical point of view this appears quite attractive. [**Acknowledgement:**]{} I thank I. Bender and H. J. Rothe for reading the manuscript and for valuable criticism.
Appendix ======== For definiteness we give here the matrix $S$ used by us in sect. 3 when selecting a convenient subset of the ${\bf S}_n$. It reads $$S = \left|\begin{array}{rrrrrrrr} -1&-1&-1&1&-1&1&1&1\\ -1&-1&1&-1&1&-1&1&1\\ -1&1&1&1&-1&1&1&1\\ -1&-1&1&1&1&1&1&1\\ -1&1&1&-1&-1&1&-1&1\\ -1&-1&-1&1&-1&1&-1&1\\ -1&-1&1&-1&-1&1&-1&1\\ 1&1&1&1&1&1&1&1\\ \end{array}\right|$$ Recall that the columns of $S$ consist of 8 of the vectors ${\bf S}_n$, with the property that $\epsilon _n=+1$. The last component, corresponding to $s_8$, was fixed to 1. [99]{} T. Banks, R. Myerson, J. Kogut, Nucl. Phys. B 129, 493 (1977). J. Villain, Journal de Physique, 36, 581 (1975). I. R. C. Buckley, H. F. Jones, Phys. Rev. D 45, 654 (1992). I. Bender, D. Gromes, Z. f. Physik C 73, 721 (1997), hep-lat/9604022. D. Ruelle, Statistical Mechanics: Rigorous Results, World Scientific, Singapore 1999; R. B. Griffiths, in Phase Transitions and Critical Phenomena, Vol. 1, ed. C. Domb, M. S. Green, Academic Press, London, New York 1972; D. Ruelle, Thermodynamic Formalism, Encyclopedia of Mathematics and its Applications, Vol. 5, ed. G. Gallavotti, Addison-Wesley Publ. Comp., Reading Mass. 1978. M. Creutz, Phys. Rev. Lett. 43, 553 (1979); M. Baig, A. Cuervo, Nucl. Phys. B (Proc. Suppl.) 4, 21 (1988). Yu. A. Simonov, Physics-Uspekhi 39(4), 313 (1996), hep-ph/9709344. E. T. Tomboulis, Phys. Lett. B 303, 103 (1993). R. W. Haymaker, Proc. of the Int. School of Physics “Enrico Fermi”, Course 80, Selected topics in non perturbative QCD, Varenna, 27 June - 7 July 1995, ed. A. Di Giacomo, D. Diakonov, IOS Press 1996, hep-lat/9510035. T. A. DeGrand, D. Toussaint, Phys. Rev. D 22, 2478 (1980).
--- abstract: 'We study geodetic lines on a surface generated by a small deformation of the standard 2D-sphere. We construct an auxiliary hamiltonian system with the view of describing geodetic coils and almost closed geodesics, by using the fact that loops of the coil can be well approximated by great circles of the sphere. The phase space of the auxiliary system is determined by the graph generated by separatrixes of its solutions, the vertices of the graph corresponding to almost closed geodesics and the edges to the geodetic coils joining them. Topological types of the graph depend on the parameters determining the deformation. Using the method of averaging in conjunction with the computer modelling of the auxiliary system, we obtain a fairly detailed visualization of geodesics on the deformed sphere.' author: - 'V. L. Golo$^1$' - 'D. O. Sinitsyn$^1$' date: 'July 8, 2005' title: Geodetic Coils on Deformed Sphere --- Geodetic coils {#introduction} ============== Geodesic lines on a surface can be considered either as straight lines with respect to a Riemann metric, or as trajectories of a particle of mass $m$ moving freely on the surface. The second approach allows for using the methods of analytical mechanics, and has drawn considerable attention, [@arnold]. It should be noted that the general solution of the problem of geodetic lines is known only for certain special cases, e.g. the ellipsoid, [@arnold], [@ndf], and the topology, or analysis situs, of geodesics on a surface needs specific study. In this paper we use asymptotic methods and in particular focus our attention on geodetic lines that are closed to within the first order of perturbation theory.
The central idea relies on the circumstance that in the case of an ellipsoid not differing substantially from a sphere, the great circles of the latter may serve as a good approximation to the ellipsoid’s geodesics, if they are short enough, and one can visualize the geodesics as winding up in coils; loops or rings of the coil corresponding to great circles of the sphere, see FIG.\[fig1\]. Hence approximating the successive rings by great circles, we may describe the change in the position of the rings by the motion of a great circle, see FIG.\[fig1\], which in its turn is determined by the normal vector $\vec L$ of the plane cutting the sphere along the great circle. To cast this picture in a more quantitative form we may use the fact that the normal vector $\vec L$ is the angular momentum of the particle moving along the geodetic line. Thus, we obtain the advantage of allowing for the use of asymptotic methods, in particular the averaging. Our main instrument is an auxiliary hamiltonian system describing the asymptotic motion of $\vec L$. It can be easily solved, so that its stationary solutions correspond to closed planar geodesics within the limit of precision provided by the averaging method, see FIG.\[fig2\]. We can get some insight into the behaviour of geodesics by considering solutions joining the stationary solutions, that is separatrixes, which, from a purely geometrical point of view, are a kind of bridges between closed geodesics. In the case of the two-dimensional torus in three-dimensional space this phenomenon was observed by Yu. S. Volkov, [@volkov]. Generally, a separatrix geodetic coil does not coincide anywhere with a closed geodesic but comes infinitely close to it, and thus resembles a limit cycle, see FIG.\[fig2\]. The set of stationary points and separatrixes of the auxiliary system forms a net, or graph, which, as is shown in Section II, can be realized on the projective plane.
Thus, we obtain a topological classification of geodetic coils, and find that the number of their topological types is finite. These arguments are valid even for surfaces having a more general form than that of the ellipsoid. Studying the specific case of the deformation determined by fourth-order terms involves the surface being substantially different from the ellipsoid as regards its differential geometry, as well as the algebraic structure of its equations, and hence we come across considerable difficulties in attempting a direct investigation of the geodesics. The asymptotic approach outlined above has the advantage of getting round this problem. Averaged equations of geodesics {#main_eq} =============================== The equations determining geodesics on a surface can be cast in the form of the equation, [@jacobi], $$\ddot{\vec x} = \lambda \, \frac{\partial \varphi} {\partial \vec x} \mbox{,} \label{2Newton}$$ where $ \varphi(\vec x) $ is the left-hand side of the constraint $ \varphi(\vec x) = 0 $ determining the surface. The Lagrange multiplier can be found explicitly, so that the equation of motion, in the form that does not involve $\lambda$, reads $$\ddot{\vec x} = - \frac{\dot{\vec x} \cdot \displaystyle \frac{\partial^2 \varphi}{\partial \vec x^2} \cdot \, \dot{\vec x} } {\left( \displaystyle \frac{\partial \varphi}{\partial \vec x} \right)^2} \, \frac{\partial \varphi}{\partial \vec x} \mbox{.} \label{2Newt_last}$$ In this paper we consider surfaces that do not differ substantially from the sphere, and assume that their equations are of the form $$\varphi(\vec x) = \sum_i ( x_i^2 + \varepsilon_i x_i^4 ) - 1 \mbox{,}$$ where $\varepsilon_i$ are small.
Then equations (\[2Newt\_last\]) read $$\ddot{x_i} = - \frac{\displaystyle \sum_j (2 + 12 \varepsilon_j x_j^2) \dot{x_j}^2} {\displaystyle \sum_j (2 x_j + 4 \varepsilon_j x_j^3) ^2} (2 x_i + 4 \varepsilon_i x_i^3) \mbox{.} \label{ConcrNewt}$$ The equations given above are more tractable than the usual ones employing the Christoffel symbols and an explicit parametrization of the sphere, so that one may prefer them for the needs of numerical simulation, as is done in this paper. With the view of obtaining a qualitative description of geodesics, we shall consider the angular momentum $$\vec L = \vec r \times \vec p \mbox{.}$$ Its components satisfy the equations, which follow from (\[ConcrNewt\]), $$\begin{array}{lcl} \dot{L_1} = - 4 \, \displaystyle \frac{\displaystyle \sum_j (2 + 12 \varepsilon_j x_j^2) \dot{x_j}^2} {\displaystyle \sum_j (2 x_j + 4 \varepsilon_j x_j^3) ^2} \, x_2 x_3 (\varepsilon_3 x_3^2 - \varepsilon_2 x_2^2) \vspace{2mm} \\ \dot{L_2} = - 4 \, \displaystyle \frac{\displaystyle \sum_j (2 + 12 \varepsilon_j x_j^2) \dot{x_j}^2} {\displaystyle \sum_j (2 x_j + 4 \varepsilon_j x_j^3) ^2} \, x_3 x_1 (\varepsilon_1 x_1^2 - \varepsilon_3 x_3^2) \vspace{2mm} \\ \dot{L_3} = - 4 \, \displaystyle \frac{\displaystyle \sum_j (2 + 12 \varepsilon_j x_j^2) \dot{x_j}^2} {\displaystyle \sum_j (2 x_j + 4 \varepsilon_j x_j^3) ^2} \, x_1 x_2 (\varepsilon_2 x_2^2 - \varepsilon_1 x_1^2) \vspace{2mm} \end{array} \label{ConcrMom}$$ Even though the equations given above are exact, in the sense that they do not involve any approximation and do not rely on the $\varepsilon_i$ being small, their treatment still needs further refining, and this will be done with the help of the method of averaging. Generally, the approach relies on studying the evolution equations for integrals of motion of the unperturbed system, i.e. in our case the normals to the planes of the great circles, with respect to the basic periodic solution of the latter.
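As a sanity check, equations (\[ConcrNewt\]) can be integrated directly; the geodesic flow preserves both the constraint $\varphi(\vec x) = 0$ and the speed $|\dot{\vec x}|$, and a numerical integration should preserve them to high accuracy. The sketch below is our own illustration, with arbitrary example values of the deformation parameters:

```python
import numpy as np

eps = np.array([0.05, -0.03, 0.02])     # example deformation parameters

def accel(x, v):
    """x_i'' from the geodesic equations on the deformed sphere."""
    grad = 2*x + 4*eps*x**3
    num  = np.sum((2 + 12*eps*x**2) * v**2)
    return -(num / np.sum(grad**2)) * grad

def phi(x):
    return np.sum(x**2 + eps*x**4) - 1.0

def rk4_step(x, v, dt):
    k1x, k1v = v, accel(x, v)
    k2x, k2v = v + 0.5*dt*k1v, accel(x + 0.5*dt*k1x, v + 0.5*dt*k1v)
    k3x, k3v = v + 0.5*dt*k2v, accel(x + 0.5*dt*k2x, v + 0.5*dt*k2v)
    k4x, k4v = v + dt*k3v,     accel(x + dt*k3x, v + dt*k3v)
    x = x + dt*(k1x + 2*k2x + 2*k3x + k4x)/6
    v = v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6
    return x, v

# start on the surface at (a, 0, 0), with a^2 + eps_1 a^4 = 1, and a tangent velocity
a = np.sqrt((np.sqrt(1 + 4*eps[0]) - 1) / (2*eps[0]))
x, v = np.array([a, 0.0, 0.0]), np.array([0.0, 0.8, 0.6])

for _ in range(4000):
    x, v = rk4_step(x, v, 0.005)

print(abs(phi(x)), abs(v @ v - 1.0))    # both stay small: constraint and speed conserved
```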
The averaging serves as a filter separating the main regular part of the solution from the oscillating one caused by small terms considered as a perturbation, see [@hamming]. We shall write the basic solution for the particle’s motion on the sphere of unit radius in the form $$\vec x = \cos(\omega t + \theta) \, \vec e_1 + \sin(\omega t + \theta) \, \vec e_2$$ with the vectors $\vec e_1, \vec e_2, \vec e_3$ determined by the equations: $$\begin{array}{lcl} \vec e_1 = \displaystyle \frac{1}{\sqrt{L_2^2 + L_3^2}}(0, L_3, -L_2) \vspace{2mm} \\ \vec e_2 = \displaystyle \frac{1}{L \sqrt{L_2^2 + L_3^2}} (-L_2^2 - L_3^2, L_1 L_2, L_1 L_3) \vspace{2mm} \\ \vec e_3 = \displaystyle \frac{1}{L}(L_1, L_2, L_3) \mbox{.} \end{array}$$ The angular velocity $\omega$ is given by the equation $\omega^2 = \dot{\vec x}^2 = L^2$, valid to within the first order of perturbation. Here $L_1, L_2, L_3$ are the coordinates of the normal to the plane of the great circle determining the solution, i.e. the angular momentum. Let us turn to the exact equations for the angular momentum (\[ConcrMom\]).
With the help of the equations given above and neglecting terms of the second, and higher, order in the $ \varepsilon_i $, we can transform equations (\[ConcrMom\]) in the form $$%1 \begin{array}{rcl} \dot{L_1} = \displaystyle \frac{2 L^2 \varepsilon_2}{(L_2^2 + L_3^2)^2} \left[ \cos(\omega t + \theta) L_3 + \sin(\omega t + \theta) \displaystyle \frac{L_1 L_2}{L} \right]^3 \left[ \cos(\omega t + \theta) (-L_2) + \sin(\omega t + \theta) \displaystyle \frac{L_1 L_3}{L} \right] \vspace{2mm} \\ - \displaystyle \frac{2 L^2 \varepsilon_3}{(L_2^2 + L_3^2)^2} \left[ \cos(\omega t + \theta) (-L_2) + \sin(\omega t + \theta) \displaystyle \frac{L_1 L_3}{L} \right]^3 \left[ \cos(\omega t + \theta) L_3 + \sin(\omega t + \theta) \displaystyle \frac{L_1 L_2}{L} \right] \\ \end{array} \label{explicit}$$ $$%2 \begin{array}{rcl} \dot{L_2} = \displaystyle \frac{2 L^2 \varepsilon_3}{(L_2^2 + L_3^2)^2} \left[ \cos(\omega t + \theta) (-L_2) + \sin(\omega t + \theta) \displaystyle \frac{L_1 L_3}{L} \right]^3 \left[ \cos(\omega t + \theta) \cdot 0 + \sin(\omega t + \theta) \displaystyle \frac{- L_2^2 - L_3^2}{L} \right] \vspace{2mm} \\ - \displaystyle \frac{2 L^2 \varepsilon_1}{(L_2^2 + L_3^2)^2} \left[ \cos(\omega t + \theta) \cdot 0 + \sin(\omega t + \theta) \displaystyle \frac{- L_2^2 - L_3^2}{L} \right]^3 \left[ \cos(\omega t + \theta) (-L_2) + \sin(\omega t + \theta) \displaystyle \frac{L_1 L_3}{L} \right] \\ \end{array}$$ $$%3 \begin{array}{rcl} \dot{L_3} = \displaystyle \frac{2 L^2 \varepsilon_1}{(L_2^2 + L_3^2)^2} \left[ \cos(\omega t + \theta) \cdot 0 + \sin(\omega t + \theta) \displaystyle \frac{- L_2^2 - L_3^2}{L} \right]^3 \left[ \cos(\omega t + \theta) L_3 + \sin(\omega t + \theta) \displaystyle \frac{L_1 L_2}{L} \right] \vspace{2mm} \\ - \displaystyle \frac{2 L^2 \varepsilon_2}{(L_2^2 + L_3^2)^2} \left[ \cos(\omega t + \theta) L_3 + \sin(\omega t + \theta) \displaystyle \frac{L_1 L_2}{L} \right]^3 \left[ \cos(\omega t + \theta) \cdot 0 + \sin(\omega t + \theta) \displaystyle 
\frac{- L_2^2 - L_3^2}{L} \right] \\ \end{array}$$ It should be noted that the right-hand sides of Eqs.(\[explicit\]) comprise terms oscillating in time and terms that vary slowly. This situation can be treated within the framework of the averaging method [@hamming]: neglecting the oscillatory terms, we obtain the averaged equations for the angular momentum $$\begin{array}{rcl} \dot{L_1} &=& \displaystyle \frac34 \, \frac{L_2 L_3}{L^2} \left[ (\varepsilon_3 - \varepsilon_2) L_1^2 + \varepsilon_3 L_2^2 - \varepsilon_2 L_3^2 \right] \mbox{,} \vspace{2mm} \\ \dot{L_2} &=& \displaystyle \frac34 \, \frac{L_3 L_1}{L^2} \left[ - \varepsilon_3 L_1^2 + (\varepsilon_1 - \varepsilon_3) L_2^2 + \varepsilon_1 L_3^2 \right] \mbox{,} \vspace{2mm} \\ \dot{L_3} &=& \displaystyle \frac34 \, \frac{L_1 L_2}{L^2} \left[\varepsilon_2 L_1^2 - \varepsilon_1 L_2^2 + (\varepsilon_2 - \varepsilon_1) L_3^2 \right] \mbox{.} \end{array} \label{avmom}$$ It is worth noting that Eqs.(\[avmom\]) have the Hamiltonian form determined by the usual Poisson brackets for the angular momentum, [@routh], [@arnold], $$\{L_i, L_j\} = \sum_k \varepsilon_{ijk} L_k \mbox{,}$$ and the Hamiltonian $$H = \frac{3}{16} L^2 \sum_i \varepsilon_i \left[ \left( \frac{L_i}{L} \right)^2 - 1 \right]^2 \mbox{.} \label{hamiltonian}$$ This circumstance is particularly interesting because, usually, the averaging procedure is not compatible with a Hamiltonian structure.
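As a cross-check, the averaged system (\[avmom\]) is easily integrated numerically; both integrals of motion, $L^2$ and the Hamiltonian $H$, should then be conserved to discretization accuracy. A minimal stdlib-only Python sketch (the values of the $\varepsilon_i$ and the initial vector are arbitrary illustrative choices, not taken from the text):

```python
import math

def avmom_rhs(L, eps):
    # Right-hand sides of the averaged equations (avmom) for L = (L1, L2, L3).
    L1, L2, L3 = L
    e1, e2, e3 = eps
    f = 0.75 / (L1*L1 + L2*L2 + L3*L3)
    return (f * L2*L3 * ((e3 - e2)*L1*L1 + e3*L2*L2 - e2*L3*L3),
            f * L3*L1 * (-e3*L1*L1 + (e1 - e3)*L2*L2 + e1*L3*L3),
            f * L1*L2 * (e2*L1*L1 - e1*L2*L2 + (e2 - e1)*L3*L3))

def hamiltonian(L, eps):
    # H = (3/16) L^2 sum_i eps_i ((L_i/L)^2 - 1)^2
    L2tot = sum(x*x for x in L)
    return 0.1875 * L2tot * sum(e * (x*x/L2tot - 1.0)**2
                                for x, e in zip(L, eps))

def rk4_step(L, eps, h):
    # One classical fourth-order Runge-Kutta step.
    k1 = avmom_rhs(L, eps)
    k2 = avmom_rhs(tuple(x + 0.5*h*k for x, k in zip(L, k1)), eps)
    k3 = avmom_rhs(tuple(x + 0.5*h*k for x, k in zip(L, k2)), eps)
    k4 = avmom_rhs(tuple(x + h*k for x, k in zip(L, k3)), eps)
    return tuple(x + h*(a + 2*b + 2*c + d)/6.0
                 for x, a, b, c, d in zip(L, k1, k2, k3, k4))

eps = (0.01, 0.02, 0.03)                      # illustrative deformations
L = (0.3, 0.5, math.sqrt(1.0 - 0.09 - 0.25))  # |L| = 1 initially
L2_0, H_0 = sum(x*x for x in L), hamiltonian(L, eps)
for _ in range(5000):
    L = rk4_step(L, eps, 0.05)
```

With these small $\varepsilon_i$, the drift of $L^2$ and $H$ over the integration remains far below the tolerances one would use to detect a modelling error.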
We may infer from the existence of the two integrals of motion, $L^2$ and $H$, that Eqs.(\[avmom\]) admit an explicit exact solution, which can be cast in the form $$t = \mp \frac43 \, (\varepsilon_2 + \varepsilon_3) \, L^2 \int \frac{dL_1} {\sqrt{D (\varepsilon_2 L^2 - \varepsilon_3 L_1^2 \mp \sqrt{D}) (\varepsilon_3 L^2 - \varepsilon_2 L_1^2 \pm \sqrt{D}) }} \mbox{,} \label{time}$$ in which $$D = - \varepsilon_1 (\varepsilon_2 + \varepsilon_3)(L^2 - L_1^2)^2 - \varepsilon_2 \varepsilon_3 (L^2 + L_1^2)^2 + \frac{16}{3} (\varepsilon_2 + \varepsilon_3) L^2 H \mbox{.}$$ The exact solution is not particularly practical. In this paper we shall not use it, but shall instead employ topological considerations and numerical simulation to understand the dynamics of $\vec L$ and, consequently, the behaviour of geodetic coils. The important point is to consider the stationary solutions of Eqs.(\[avmom\]), for which the right-hand sides vanish; they split into three families, \[S1\], \[S2\] and \[S3\], determined by conditions on the $\varepsilon_i$, as follows.
- \[S1\] No algebraic constraints imposed on the $\varepsilon_i$:
    - \[T1a\] $ L_{10} = 0, \quad L_{20} = 0, \quad L_{30} \ne 0; $
    - \[T1b\] $ L_{10} = 0, \quad L_{20} \ne 0, \quad L_{30} = 0; $
    - \[T1c\] $ L_{10} \ne 0, \quad L_{20} = 0, \quad L_{30} = 0; $
- \[S2\] The constraints on $\vec L$ relaxed and linear constraints imposed on the $\varepsilon_i$:
    - \[T2a\] $ L_{10} = 0, \quad L_{20} \ne 0, \quad L_{30} \ne 0, \quad \varepsilon_3 L_{20}^2 - \varepsilon_2 L_{30}^2 = 0; $
    - \[T2b\] $ L_{20} = 0, \quad L_{30} \ne 0, \quad L_{10} \ne 0, \quad \varepsilon_1 L_{30}^2 - \varepsilon_3 L_{10}^2 = 0; $
    - \[T2c\] $ L_{30} = 0, \quad L_{10} \ne 0, \quad L_{20} \ne 0, \quad \varepsilon_2 L_{10}^2 - \varepsilon_1 L_{20}^2 = 0; $
- \[S3\] Vector $\vec L$ subject to $L_{10} \ne 0, L_{20} \ne 0, L_{30} \ne 0$ and quadratic constraints imposed on the $\varepsilon_i$: $$\frac{L_{10}^2} {\varepsilon_1 \,\varepsilon_2 - \varepsilon_2 \, \varepsilon_3 + \varepsilon_3 \,\varepsilon_1} = \frac{L_{20}^2} {\varepsilon_1 \,\varepsilon_2 + \varepsilon_2 \,\varepsilon_3 - \varepsilon_3 \,\varepsilon_1} = \frac{L_{30}^2} { - \varepsilon_1 \,\varepsilon_2 + \varepsilon_2 \,\varepsilon_3 + \varepsilon_3 \,\varepsilon_1} \mbox{.}$$

It is worth noting that the equations of \[S2\] involve the fulfilment of the inequalities $\varepsilon_2 \varepsilon_3 > 0$, $\varepsilon_3 \varepsilon_1 > 0$, and $\varepsilon_1 \varepsilon_2 > 0$ for cases \[T2a\], \[T2b\], \[T2c\], respectively, whereas the equations of \[S3\] involve $$\begin{aligned} \varepsilon_1 \,\varepsilon_2 - \varepsilon_2 \,\varepsilon_3 + \varepsilon_3 \,\varepsilon_1 \, > 0 \nonumber \\ \varepsilon_1 \,\varepsilon_2 + \varepsilon_2 \,\varepsilon_3 - \varepsilon_3 \,\varepsilon_1 \, > 0 \nonumber \\ - \varepsilon_1 \,\varepsilon_2 + \varepsilon_2 \,\varepsilon_3 + \varepsilon_3 \,\varepsilon_1 \, > 0 \nonumber \end{aligned}$$ Linearizing Eqs.(\[avmom\]) at the stationary solutions and considering small fluctuations of $\vec L$ around them, we may study their stability,
which turns out to be determined by the requirements

- for \[S1\]:
    - $\varepsilon_1 \varepsilon_2 > 0$;
    - $\varepsilon_2 \varepsilon_3 > 0$;
    - $\varepsilon_3 \varepsilon_1 > 0$;
- for \[S2\]:
    - $ \varepsilon_1 \varepsilon_2 -\varepsilon_2 \varepsilon_3 +\varepsilon_3 \varepsilon_1 < 0 $;
    - $ \varepsilon_1 \varepsilon_2 +\varepsilon_2 \varepsilon_3 -\varepsilon_3 \varepsilon_1 < 0 $;
    - $-\varepsilon_1 \varepsilon_2 +\varepsilon_2 \varepsilon_3 +\varepsilon_3 \varepsilon_1 < 0 $;
- for \[S3\]: any $ \varepsilon_i $.

We may put these equations in a more graphic form using the integral $ L^2 = const $, and consider the motion of $\vec L$ on a sphere of fixed radius, the energy integral $H$ taking appropriate values. The stationary solutions are then fixed points of Eqs.(\[avmom\]); the stable and the unstable points are foci and saddle points, respectively, the separatrices being the lines joining the fixed points. Together, they generate a graph on the sphere, having the fixed points as vertices and the separatrices as edges. It is important that the separatrices, i.e. the edges of the graph, are oriented according to time, $t$, so that the graph is an oriented one, invariant with respect to the symmetry $ \vec R \rightarrow - \vec R $, $ t \rightarrow - t $. Pictures of the separatrix net on the sphere are rather difficult to visualize, and therefore a representation that is easier to work with is needed. We shall employ the familiar construction of the projective plane, which runs as follows. Given a pair of twin points $\vec R$ and $ - \vec R$ belonging to the sphere, we choose the representative of the pair determined by the condition $R_3 \ge 0$; this enables us to consider only the upper part of the sphere. Next, we take the projection of the upper hemisphere onto the x-y plane along the z-axis. Thus, a pair of antipodal twins obtains a representative in the disk, i.e. a point inside the disk or a pair of antipodal points at the boundary.
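The construction just described translates directly into code; the tie-breaking for points with $R_3 = 0$ (where both antipodal twins land on the boundary of the disk) is a convention of this sketch:

```python
def pp_representative(R):
    """Map a point of the unit sphere to its projective-plane representative.

    Of the antipodal pair {R, -R} we keep the twin with R3 >= 0 and project
    it onto the x-y plane along the z-axis, landing inside the unit disk.
    Boundary points (R3 == 0) are disambiguated lexicographically, which
    collapses each antipodal boundary pair to a single representative.
    """
    x, y, z = R
    if z < 0 or (z == 0 and (y < 0 or (y == 0 and x < 0))):
        x, y, z = -x, -y, -z   # flip to the R3 >= 0 hemisphere
    return (x, y)
```

By construction, both members of an antipodal pair map to the same point of the disk, which is what makes the separatrix net drawable on the projective plane.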
The separatrix nets constructed in this way on the projective plane are shown in FIGs.(\[fig4\]–\[fig7\]). It is important that the equations of motion are invariant under the transformations $P_i, \quad i=1,2,3$, $$P_i : \quad t \rightarrow - t, \qquad L_i \rightarrow - L_i \label{symmetry}$$ which generate automorphisms of the graph. From the fact that the transformation $$P: \quad t \rightarrow - t, \qquad L_i \rightarrow - L_i, \qquad i=1,2,3, \qquad P = P_1 P_2 P_3 \label{sym2}$$ leaves Eqs.(\[avmom\]) invariant, we infer that a point belonging to a solution of Eqs.(\[avmom\]) has its counterpart, or twin, at the antipodal point, and therefore one may visualize the dynamics of $\vec L$ on a sphere with identified antipodal points, that is, the projective plane. Now we are in a position to determine the topological types of the graphs by employing the numerical simulation of Eqs.(\[avmom\]) in conjunction with the knowledge of the topological types of the fixed points obtained above. It should be noted that we must check whether the solutions provided by Eqs.(\[avmom\]) agree with those given by the original Eqs.(\[2Newton\]), see FIG.(\[fig10\]). The phase picture can be obtained by constructing a mesh generated by solutions to Eqs.(\[avmom\]), taking into account the types of the fixed points. The results are illustrated in FIGs.(\[fig4\]–\[fig7\]).
We obtain the following topological types of the graphs:

- \[T1\] FIG.\[fig4\], 7 foci and 6 saddle points; the $ \varepsilon_i $ being subject to the constraints $$\begin{array}{rcl} \varepsilon_1 \varepsilon_2 -\varepsilon_2 \varepsilon_3 +\varepsilon_3 \varepsilon_1 > 0 \\ \varepsilon_1 \varepsilon_2 +\varepsilon_2 \varepsilon_3 -\varepsilon_3 \varepsilon_1 > 0 \\ -\varepsilon_1 \varepsilon_2 +\varepsilon_2 \varepsilon_3 +\varepsilon_3 \varepsilon_1 > 0 \end{array} \mbox{;} \label{t1constraints}$$
- \[T2\] FIG.\[fig5\], 5 foci and 4 saddle points; the $ \varepsilon_i $ are non-zero, have the same sign, and at least one of Eqs.(\[t1constraints\]) does not hold;
- \[T3\] FIG.\[fig6\], 3 foci and 2 saddle points; the $ \varepsilon_i $ being subject to one of the following constraints: $ \varepsilon_2 \varepsilon_3 > 0 $ and $ \varepsilon_1 \varepsilon_2 \leq 0 $; $ \varepsilon_3 \varepsilon_1 > 0 $ and $ \varepsilon_2 \varepsilon_3 \leq 0 $; $ \varepsilon_1 \varepsilon_2 > 0 $ and $ \varepsilon_3 \varepsilon_1 \leq 0 $;
- \[T4\] FIG.\[fig7\], 2 foci and 1 saddle point; the $ \varepsilon_i $ being subject to one of the following constraints: $ \varepsilon_1 = 0 $ and $ \varepsilon_2 \varepsilon_3 \leq 0 $; $ \varepsilon_2 = 0 $ and $ \varepsilon_3 \varepsilon_1 \leq 0 $; $ \varepsilon_3 = 0 $ and $ \varepsilon_1 \varepsilon_2 \leq 0 $.

The topological types of the separatrix nets depend on the values of the deformation coefficients $\varepsilon_i$, and generate regions I, II, III, IV in $\varepsilon_i$ space. Taking into account the homogeneous form of the constraints imposed on the $\varepsilon_i$, we may visualize them on the projective plane corresponding to the $\varepsilon_i$, see FIG.\[fig9\].
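The sign constraints above can be bundled into a small classifier, applied in order; the test triples are arbitrary illustrative values:

```python
def separatrix_type(e1, e2, e3):
    # Classify deformation coefficients (eps_1, eps_2, eps_3) into the
    # topological types I--IV of the separatrix graph, applying the sign
    # constraints listed above in order.
    c1 = e1*e2 - e2*e3 + e3*e1
    c2 = e1*e2 + e2*e3 - e3*e1
    c3 = -e1*e2 + e2*e3 + e3*e1
    if c1 > 0 and c2 > 0 and c3 > 0:
        return "I"                      # 7 foci, 6 saddle points
    if e1*e2 > 0 and e2*e3 > 0:         # all eps_i non-zero, same sign
        return "II"                     # 5 foci, 4 saddle points
    if ((e2*e3 > 0 and e1*e2 <= 0) or (e3*e1 > 0 and e2*e3 <= 0)
            or (e1*e2 > 0 and e3*e1 <= 0)):
        return "III"                    # 3 foci, 2 saddle points
    if ((e1 == 0 and e2*e3 <= 0) or (e2 == 0 and e3*e1 <= 0)
            or (e3 == 0 and e1*e2 <= 0)):
        return "IV"                     # 2 foci, 1 saddle point
    return None                         # degenerate configuration
```

For instance, $(1, 10, 100)$ violates the first constraint of type I while keeping all $\varepsilon_i$ of the same sign, so it falls in region II.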
It is important that the lines dividing the domains corresponding to types I and II, FIG.\[fig11\], are given by the homogeneous equations $$\begin{aligned} &1.&\quad \varepsilon_1 \varepsilon_2 -\varepsilon_2 \varepsilon_3 +\varepsilon_3 \varepsilon_1 = 0 \label{boundary} \\ &2.&\quad \varepsilon_1 \varepsilon_2 +\varepsilon_2 \varepsilon_3 -\varepsilon_3 \varepsilon_1 = 0 \nonumber \\ &3.&\quad -\varepsilon_1 \varepsilon_2 +\varepsilon_2 \varepsilon_3 +\varepsilon_3 \varepsilon_1 = 0 \nonumber\end{aligned}$$ Each equation determines a projective circle on the projective plane of the homogeneous variables $ \varepsilon_1, \varepsilon_2, \varepsilon_3 $, see FIGs.\[fig14\]. It is important that the solutions to Eqs.(\[avmom\]) have the specific feature that the topological type of the graph is completely determined by the numbers of foci and saddle points. The dependence of the configurations of the foci and the saddles on the values of the $\varepsilon_i$ is illustrated in FIG.\[fig9\]. We see that the topological types of the graphs are rather non-trivial, and that the graphs realized on the sphere differ from those on the projective plane. This circumstance is due to the fact that the graphs on the projective plane are obtained from those on the sphere by factoring by transformation (\[sym2\]), see FIG.\[fig13\].

Conclusion
==========

The key point of the present investigation is the concept of the geodetic coil, which enables us to cast intuitive geometric ideas in analytical form, and relies on constructing an auxiliary Hamiltonian system, so that the initial problem is reduced to that of constructing a graph on the projective plane. In analytical terms, one may consider it as an asymptotic reduction of the system of equations for geodesics on a deformed sphere to a system similar to that of the top, with a Hamiltonian of the fourth order. The simplification we obtain in this way is substantial.
Indeed, the Hamiltonian system for geodesics could be non-integrable, whereas the auxiliary system is completely integrable, and its phase space can be described by a graph that comprises vertices, which correspond to stationary solutions, or almost closed geodesics, and edges, which can be visualized as geodetic coils joining them. The important thing is that the arguments, of a purely analytical and topological nature, which this analysis involves turn out to be helpful for the numerical simulation of the equations for geodetic lines, and allow us to give a tangible realization of the visualization problem for geodesics on a surface. In fact, this paper is profoundly motivated by the technical means provided by numerical modelling, which has allowed us to obtain the final picture of the problem’s phase space in terms of the separatrix graphs. Thus, we feel that the approach used in this paper is a symbiosis of the methods of analytical mechanics, computer analysis, and topology. The latter is particularly important, as it provides the conceptual structure for the problem of geodetic lines on the deformed sphere. At this point it should be noted that a detailed reconstruction of the phase space should provide a description of the embedding of the invariant tori, which could exist in some regimes, and, perhaps, of the domains of chaotic dynamics characteristic of non-integrable problems, [@ab]. The solution obtained in this paper for the geodesics on the deformed sphere does not preclude chaotic regimes of geodetic lines, even though the auxiliary Hamiltonian system turns out to be completely integrable. It is worth noting that the dynamics of geodesics gives a graphic example of chaos in Riemannian geometry, [@ab]. This set of important problems, lying at the border between nonlinear mechanics, Riemannian geometry and topology, deserves further study.
[**Acknowledgment**]{}\ The authors are thankful to A.T. Fomenko, A.V. Bolsinov, and Yu.S. Volkov for useful discussions. This work was supported by the grants NS-1988.2003.1, and RFFI 01-01-00583, 03-02-16173, 04-04-49645.

[99]{}

V.I. Arnold, Mathematical Methods of Classical Mechanics, Ch. 9, Springer-Verlag, New York (1992).

S.P. Novikov, B.A. Dubrovin, and A.T. Fomenko, Modern Geometry, Springer-Verlag, New York (1984).

C.G.J. Jacobi, Vorlesungen über Dynamik, Ch. 28, URSS, Moscow (2004).

Yu.S. Volkov, Geodesics on tori, unpublished.

R.W. Hamming, Numerical Methods for Scientists and Engineers, Ch. 24, McGraw-Hill, New York (1962).

E.J. Routh, Dynamics of a System of Rigid Bodies, Ch. 10, Macmillan, London and New York (1891-92).

A.V. Bolsinov and A.T. Fomenko, Integrable Hamiltonian Systems, Ch. 14, Chapman and Hall, Boca Raton (2004).
--- abstract: 'Young and rapidly rotating stars are known for intense, dynamo-generated magnetic fields. Spectropolarimetric observations of such stars in precisely aged clusters are a key input for gyrochronology and magnetochronology. We use ZDI maps of several young K-type stars of similar mass and radius but various ages and rotational periods to perform 3D numerical MHD simulations of their coronae and follow the evolution of their magnetic properties with age. Those simulations yield the coronal structure as well as the instantaneous torque exerted by the magnetized, rotating wind on the star. As stars get older, we find that the angular momentum loss scales as $\Omega_{\star}^3$, which is the reason for the convergence onto the Skumanich law. For the youngest stars of our sample, the angular momentum loss shows signs of saturation around $8 \Omega_{\odot}$, which is a common value used in spin evolution models for K-type stars. We compare these results to semi-analytical models and existing braking laws. We observe a complex wind speed distribution for the youngest stars, with slow, intermediate and fast wind components, which are the result of the interaction with intense and non-axisymmetric magnetic fields. Consequently, in our simulations, the stellar wind structure in the equatorial plane of young stars differs significantly from the solar configuration, delivering insight into the past interplanetary medium of the solar system.' author: - 'Victor Réville$^1$, Colin P. Folsom$^{2,3}$, Antoine Strugarek$^{4,1}$, Allan Sacha Brun$^1$' title: 'Age dependence of wind properties for solar type stars: a 3D study' ---

Introduction {#intro}
============

Among all stellar properties, the characteristics of solar-like stars’ winds are probably the most difficult to probe. Emission throughout the electromagnetic spectrum unveils some of the properties of the photospheres and coronae of stars, and internal structures can be probed with asteroseismology.
Winds, on the contrary, produce very few detectable signals, although they are likely to exist in all stars possessing a hot corona, as supersonic outflows are the only stable way to balance the coronal pressure against the near-zero pressure of the interstellar medium [@Parker1958; @Velli1994]. @LinskyWood1996 have shown that absorption by neutral hydrogen at the astropause can be detected in Ly[$\alpha$]{} spectra around the astrospheres of nearby solar-like stars, unraveling properties of the stellar wind shocking against the interstellar medium. A growing sample of solar-type stars with positive wind detections has led to a relationship between the X-ray fluxes originating from coronal loops and the mass loss rates [@Wood2002]. The “strength" of stellar winds, the mass loss rate $\dot{M}$, has consequently been related to the magnetic activity of the star. @Wood2005a have obtained the relation $\dot{M} \propto F_X^{1.34 \pm 0.18}$, for $F_X \leq 10^6$ ergs cm$^{-2}$ s$^{-1}$, where $F_X$ is the X-ray flux. Beyond this value, weaker mass loss rates are observed, suggesting a saturation effect at a threshold below the usual $F_X$ saturation value [@Randich2000; @Pizzolato2003; @Gudel2004]. In parallel, the development of Zeeman Doppler Imaging (ZDI) [@Semel1989; @DonatiBrown1997; @PiskunovKochukhov2002] has opened the study of surface magnetic fields for cool stars. Large-scale magnetic fields have been detected in the whole mass range that is thought to correspond to the existence of a convective envelope ($0.1 M_{\odot} - 1.4 M_{\odot}$). The study of the geometrical and topological properties of the field in the context of stellar evolution is still in progress [@DonatiLandstreet2009; @See2015] and raises theoretical questions about field generation through dynamo processes in convective envelopes [see @Brun2015 and references therein].
Nonetheless, the magnetic field amplitude of these stars has been shown to be a growing function of the rotation rate [@Noyes1984; @BrandenburgSaar2000; @Vidotto2014b]. This is necessary to explain the rotational braking of cool main sequence dwarfs, as evolutionary models need the wind to carry angular momentum at a rate proportional to $\Omega_{\star}^3$ [@Kawaler1988; @Bouvier1997; @Matt2015] all along the main sequence. However, recent studies suggest that the wind braking could stop or strongly decay for evolved stars, around a solar Rossby number $Ro \approx 2.5$ [@vanSaders2016], perhaps because of a change in magnetic topology [@Reville2015a; @Garraffo2015a]. Hence wind, magnetism, and rotation are likely to evolve coherently through the whole life of solar-like stars. After @Schatzman1962 understood that a magnetized outflow would carry away stellar angular momentum, @WeberDavis1967 demonstrated that this loss is proportional to the Alfvén radius squared. Several studies have followed to try to estimate the Alfvén radius from stellar parameters and thus give scaling laws for the angular momentum loss [@Mestel1968; @Kawaler1988]. The latest braking laws have been successfully introduced in stellar evolution models [@Matt2012; @GalletBouvier2013], and we recently demonstrated that the magnetic topology could be included in those formulations through a simple scalar parameter, the open magnetic flux [@Reville2015a; @Reville2015b]. Most studies [see, e.g., @Matt2012; @Reville2015a] have been made in two dimensions with axisymmetric configurations [see @Cohen2014; @Garraffo2015a for a 3D study of angular momentum loss with idealized magnetic field topologies], and were not able to capture the structure of complex magnetic fields observed by ZDI. 3D MHD simulations are now taking into account this complexity [@Cohen2011; @Vidotto2014a; @doNascimento2016; @AlvaradoGomez2016a; @AlvaradoGomez2016b] to derive a self-consistent coronal structure. 
The complex structure of the corona is needed to study the interaction between stars and close-in planets, which has been shown to be very sensitive to 3D effects [@Strugarek2015]. Yet, to our knowledge, the influence of realistic magnetic fields on the long-term variation of the wind properties has not been studied. This work proposes to include observed, realistic magnetic fields in the context of stellar evolution. We used spectropolarimetric observations of the surface fields of solar-like stars to constrain 3D MHD simulations of stellar winds. The stars of our sample share similar properties except for their rotational periods and their ages, which range from 25 Myr to 4.5 Gyr. We developed a coherent framework to take into account the evolution of the coronal properties with time, inspired by X-ray flux observations, spin evolution models and theoretical, ab initio models [see @HolzwarthJardine2007; @CranmerSaar2011; @Suzuki2013b]. We confirm that the evolution with age of global properties of the wind, such as the mass and angular momentum loss, follows simple prescriptions in agreement with the spin evolution models. These prescriptions can be recovered by the semi-analytical model we developed in @Reville2015b, except for the saturation of the angular momentum loss, which appears only in our simulations. Also, the three-dimensional structure of the young stars’ winds shows interesting features, such as a trimodal speed distribution, which we explain through various interactions with the intense magnetic field. We show that superradial expansion is a key factor in explaining the fastest components of young stars’ winds. We also observe regions of fast wind encountering slower streams in the equatorial plane, the so-called Corotating Interaction Regions (CIRs), which could be more common in the winds of young stars. This paper is organized as follows: the ingredients of our model are described throughout Section \[sec:model\].
In Subsection \[subsec:zdi\], we describe the observations used to constrain the surface magnetic fields of our simulations. Subsection \[subsec:evol\] describes our prescriptions for the coronal properties and Subsection \[subsec:num\] our numerical setup. The results are presented in two parts: Section \[sec:glob\], where we look at global properties such as the angular momentum and mass loss over time, and Section \[sec:vel\], where the three-dimensional structure of the wind is detailed, with a special focus on the velocity distribution. We summarize and reflect upon our findings in Section \[sec:ccl\].

Model Ingredients and Description {#sec:model}
=================================

Observational data: ZDI maps {#subsec:zdi}
----------------------------

[l|c|c|c|c|c|c|c|c]{}
BD- 16351 & $42\pm 6$ & $3.3$ & $0.9$ & $0.9$ & $5243$ & $33.0$ & $60$ & $5$\
TYC 5164-567-1 & $120\pm 10$ & $4.7$ & $0.9$ & $0.9$ & $5130$ & $48.8$ & $78$ & $78$\
HII 296 & $125\pm 8$ & $2.6$ & $0.9$ & $0.9$ & $5322$ & $52.0$ & $57$ & $50$\
DX Leo & $257\pm 47$ & $5.4$ & $0.9$ & $0.9$ & $5354$ & $21.3$ & $69$ & $1$\
AV 2177 & $584 \pm 10$ & $8.4$ & $0.9$ & $0.9$ & $5316$ & $5.4$ & $57$ & $4$\
Sun 1996 & $4570$ & $28$ & $1.0$ & $1.0$ & $5778$ & $1.1$ & $35$ & $75$\

\[table1\]

[Figure \[zdi\]: panels fig1a–fig1f show the maps for BD- 16351, TYC 5164-567-1, HII 296, DX Leo, AV 2177, and the Sun (Wilcox 1996).]

We consider in this paper six different stars whose ages are precisely determined. Five of those stars belong to the study made by @Folsom2016 and share a mass and radius close to $(0.9 M_{\odot},0.9 R_{\odot})$.
Their rotational periods and surface magnetic fields have been determined using observations from the spectropolarimeters NARVAL (operating at the Télescope Bernard Lyot, France) and ESPaDOnS (operating at the Canada-France-Hawaii Telescope, Hawaii). Their ages are determined by studies of the open clusters and associations they belong to. They were chosen to span a range of ages and rotation rates while having similar masses, and bright stars were selected to ensure a sufficient signal-to-noise ratio. The sixth star we include in our study is the Sun, for which we used a magnetogram obtained at the Wilcox observatory [@DeRosa2012], and which serves as the reference and oldest star in this paper. It is well known that the magnetic topology of the Sun and the solar wind structure vary over the $11$-yr cycle [see, e.g., @Sokol2015; @PintoBrun2011]. We considered the Sun at its minimum of activity in late 1996, during cycle 22. The ZDI maps of the five young stars are able to describe the surface magnetic fields of the stars up to a degree $\ell_{\mathrm{max}}=15$ [^1] in a spherical harmonics decomposition, which represents a sum of $135$ different modes. The solar magnetograms made at the Wilcox observatory reach a much higher resolution ($\ell_{\mathrm{max}}=50$). We chose to cut the solar map at $\ell_{\mathrm{max}}=15$ to keep the same resolution for all stars in our sample. This partly justifies the choice of the solar minimum, whose energy spectrum is more concentrated in large-scale structures than during the maximum of activity. The first $15$ spherical harmonic degrees represent $95\%$ of the magnetic energy during the minimum, and $80\%$ during the maximum, for cycle 22. Although the ages are sampled logarithmically, the sampling of the rotation rates is fine enough to closely follow the changes in coronal parameters and in magnetic field amplitude (see Section \[subsec:evol\]). The stellar parameters of all the targets are listed in Table \[table1\].
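The $\ell_{\mathrm{max}}=15$ truncation amounts to comparing partial sums of the harmonic energy spectrum (the quoted $95\%$ and $80\%$ figures are such ratios). A sketch, assuming an orthonormal harmonic basis in which the surface magnetic energy of a mode is proportional to the squared modulus of its coefficient; the power-law spectrum and the $m \ge 0$ mode convention (which matches the 135-mode count for $\ell \le 15$) are illustrative assumptions, not the actual ZDI data:

```python
def energy_fraction(coeffs, lmax):
    # coeffs: dict mapping (l, m) -> harmonic coefficient alpha_lm.
    # With an orthonormal basis, the magnetic energy of a mode is
    # proportional to |alpha_lm|^2, so the retained fraction is a
    # ratio of partial sums of the spectrum.
    total = sum(abs(a)**2 for a in coeffs.values())
    kept = sum(abs(a)**2 for (l, m), a in coeffs.items() if l <= lmax)
    return kept / total

# Synthetic power-law spectrum up to l = 50 (purely illustrative)
synthetic = {(l, m): (l + 1)**-2.0 for l in range(1, 51)
             for m in range(0, l + 1)}
```

With a spectrum concentrated at low $\ell$, as during solar minimum, the retained fraction at $\ell_{\mathrm{max}}=15$ is close to unity; a flatter spectrum, as at maximum, lowers it.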
Figure \[zdi\] shows the surface radial magnetic field reconstructed from the spherical harmonics decomposition of the Zeeman Doppler analysis [@Folsom2016]. The field is presented as orthographic projections at three different angles, with views facing the equator and the two poles. The color scales are chosen according to the amplitude of the magnetic field of each star. The average radial magnetic field of each case is given in Table \[table1\] and mostly increases with the rotational frequency, as expected from dynamo theory [@DurneyLatour1978; @Weiss1994; @Brun2015]. We can see that, in each case, a dominant dipolar component is present, alongside smaller-scale modes. This also motivated our choice of the solar minimum of activity, which exhibits a mostly dipolar field. Nonetheless, the dipolar components can show large inclinations with respect to the rotation axis, thus being far from an axisymmetric configuration and making a 3D approach necessary. These magnetic maps are used as boundary conditions and specify the surface magnetic field of our simulations. However, they do not by themselves fully specify the stellar parameters. Like the magnetic field amplitude and topology, the thermodynamical properties of the base of the corona are likely to change with age and rotation [@Gudel2004; @Giampapa2005]. The next section is dedicated to describing the model we used to take those variations into account.

Evolution of coronal properties with age {#subsec:evol}
----------------------------------------

Our numerical model needs, in addition to the surface magnetic field, assumptions for the coronal base temperature and density. Several studies have addressed the evolution of those parameters with age and other stellar properties [@HolzwarthJardine2007; @CranmerSaar2011; @Suzuki2013b]. Among other goals, their objective was to explain the mass loss rate signatures in the astrospheres’ Ly$\alpha$ absorption spectra [see @WoodRev2004; @Wood2005a].
Those studies are also informed by a long history of X-ray observations of coronal emission [@Pallavicini1981; @Pallavicini1989; @Pizzolato2003; @Gudel2004; @Wright2011], which show that coronal densities and temperatures tend to increase with the rotation rate in solar-like stars. For instance, @HolzwarthJardine2007 gave scaling laws for the evolution of the temperature $T$ and number density $n$ as a function of the rotational frequency $\Omega_{\star}$ alone, assuming a power-law dependence: $$T = T_{\odot} \left( \frac{\Omega_{\star}}{\Omega_{\odot}} \right)^{0.1},\quad n = n_{\odot} \left( \frac{\Omega_{\star}}{\Omega_{\odot}} \right)^{0.6}.$$ The values of the exponents correspond to their reference case, which aims to match the lowest branch of mass loss measurements. In their model, the mass loss is obtained by computing 1D polytropic and magneto-centrifugal winds from the coronal parameters [see @WeberDavis1967; @Sakurai1985; @Reville2015b]. The study of @Wood2005a shows, however, mass loss rates that can reach $100$ times the solar value for rather slow rotators [see the case of $70$ Oph, with a period of $\approx 20$ d, @Wood2005a]. Those extreme values require a stronger increase of the density with the rotation rate. @Suzuki2013b, using simulations of flux tubes heated by Alfvén wave dissipation, showed that such values could be reached and that a saturation could be obtained through the increase of the coronal density, which enhances the radiative losses. The dependence of the coronal density on $\Omega_{\star}$ is, however, also constrained by the observed X-ray fluxes and by spin evolution models, which suggest that $0.6$ is a good estimate for the exponent [@IvanovaTaam2003].
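In code, the prescription reads as follows; the solar reference values are the recalibrated ones used in this work ($T_{\odot}=1.5\times10^{6}$ K, $n_{\odot}=10^{8}$ cm$^{-3}$), and the rotation ratio of the fastest rotator is estimated from the rotational periods in Table \[table1\]:

```python
def coronal_base(omega_ratio, T_sun=1.5e6, n_sun=1.0e8):
    # Coronal base temperature (K) and number density (cm^-3) as power
    # laws of the rotation rate Omega_star / Omega_sun.
    return T_sun * omega_ratio**0.1, n_sun * omega_ratio**0.6

# Slowest (Sun, ~28 d) and fastest (HII 296, ~2.6 d) rotators of the sample
T_slow, n_slow = coronal_base(1.0)
T_fast, n_fast = coronal_base(28.0 / 2.6)
```

The weak $0.1$ exponent keeps the temperature within a narrow range over a decade in rotation rate, while the density grows by a factor of about four, consistent with the ranges quoted for the sample.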
Moreover, the reference case fits the supposedly weak mass loss of more active stars (corresponding to most of the ages and rotation rates selected in our study) without invoking an additional transition below the $F_X$ saturation threshold, which still requires further theoretical understanding [@Vidotto2016a]. We thus chose to keep the same prescriptions as @HolzwarthJardine2007, changing the solar reference temperature and density, as we use a different value for the polytropic index $\gamma$. In our model, $T_{\odot}=1.5\times10^{6}$ K and $n_{\odot}=10^{8}$ cm$^{-3}$ are calibrated such that a wind with $\gamma=1.05$ recovers a wind velocity of $444$ km s$^{-1}$ at 1 AU and a mass loss rate of $3.2 \times 10^{-14} M_{\odot}/$yr. This value of $\gamma$ has been widely used in the literature, including in our previous works [@WashShib1993; @Matt2012; @Reville2015a], to describe the quasi-isothermal expansion of the wind with a polytropic model. The temperature thus varies from $1.5 \times 10^{6}$ K to $1.9 \times 10^{6}$ K and the density from $1 \times 10^8$ cm$^{-3}$ to $4.2 \times 10^{8}$ cm$^{-3}$ throughout our sample (see Table \[table2\]).

Computational methods and boundary conditions {#subsec:num}
---------------------------------------------

In this study, we numerically solve the time-dependent ideal magnetohydrodynamics equations until a steady state is reached in our wind simulations. We use the PLUTO code [@Mignone2007], with a finite-volume Godunov-type scheme and a Harten, Lax, van Leer, and Einfeldt (HLLE) solver [@Einfeldt1988] in three dimensions. Finite-volume methods provide fully compressible, shock-capturing numerical schemes that evolve fluxes of conserved quantities through cell volumes.
Hence, they formulate the MHD equations as a set of eight conservation equations defined as follows: $$\label{MHD_1} \frac{\partial}{\partial t} \rho + \nabla \cdot \rho \mathbf{v} = 0,$$ $$\label{MHD_2} \frac{\partial}{\partial t} \mathbf{m} + \nabla \cdot (\mathbf{mv}-\mathbf{BB}+\mathbf{I}p) = - \rho \nabla \Phi + \rho \mathbf{a},$$ $$\label{MHD_3} \frac{\partial}{\partial t} (E + \rho \Phi) + \nabla \cdot ((E+p+\rho \Phi)\mathbf{v}-\mathbf{B}(\mathbf{v} \cdot \mathbf{B})) = \mathbf{m} \cdot \mathbf{a},$$ $$\label{MHD_4} \frac{\partial}{\partial t} \mathbf{B} + \nabla \cdot (\mathbf{vB}-\mathbf{Bv})=0,$$ where the energy $E \equiv \rho \epsilon + \rho v^2/2 + B^2/2$, the magnetic field $\mathbf{B}$, the mass density $\rho$, and the momentum $\mathbf{m} \equiv \rho \mathbf{v}$ are the conservative variables. Here, $\mathbf{v}$ is the velocity field, $p = p_{th} +B^2/2$ is the total (thermal plus magnetic) pressure and $\mathbf{I}$ is the identity matrix. The potential $\Phi$ accounts for the gravitational attraction of the star and $\mathbf{a}$ is a source term that contains the Coriolis and centrifugal forces, as we solve the equations in a rotating frame. The magnetic field is split into a background and a variable component for computational purposes [see @Powell1994]. An ideal equation of state is used to close the set of MHD equations, and the internal energy is written $$\epsilon = \frac{p}{\rho(\gamma-1)},$$ with $\gamma=1.05$, the ratio of specific heats, which differs from the usual value of $5/3$ for a hydrogen gas in order to mimic the extended coronal heating. We solve the equations in a Cartesian geometry with a grid centered on the star that extends from $-30$ to $30$ stellar radii in each direction. The grid is uniform in a cube of $[-1.5 R_{\star}, 1.5 R_{\star}]^3$ with $192$ grid points in each direction and is then stretched up to $30 R_{\star}$ with an additional $256$ points in each direction.
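The stretched part of such a grid can be characterized by a single growth factor if one assumes geometric stretching (PLUTO's actual stretched-grid definition may differ in detail; this is an illustrative estimate). Starting from the uniform spacing $3 R_{\star}/192$ and requiring $256$ geometrically growing cells to cover the remaining $28.5 R_{\star}$:

```python
def stretch_ratio(dx0, span, n, tol=1e-12):
    # Growth factor r such that n geometrically stretched cells, starting
    # from spacing dx0, cover the given span:
    #   dx0 * (r + r^2 + ... + r^n) = span.   (solved by bisection on r)
    def covered(r):
        return dx0 * sum(r**k for k in range(1, n + 1))
    lo, hi = 1.0, 2.0
    while covered(hi) < span:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if covered(mid) < span else (lo, mid)
    return 0.5 * (lo + hi)

dx0 = 3.0 / 192                     # uniform spacing in the inner cube
r = stretch_ratio(dx0, 30.0 - 1.5, 256)
```

The resulting growth factor is of order one percent per cell, i.e. the resolution degrades gently from the stellar surface toward the outer boundary.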
The resolution at the surface of the star is $50\%$ larger than the one used in @Reville2015a, and several tests have demonstrated that this resolution is sufficient for numerical convergence. The initialization is done by setting a spherically symmetric profile of a $\gamma=1.05$ polytropic wind for the density, pressure and poloidal velocity. This initial solution is obtained by a Newton-Raphson method on the normalized velocity and the critical radius of the polytropic wind solution. The MHD equations are then solved in a frame rotating with the star. We initialize solid-body rotation only inside the star, so that it is the magnetic field that transmits the rotating motion to the surrounding plasma. The magnetic field is initialized with a potential field extrapolation [@Schatten1969] using the radial component of the ZDI map and a source surface radius $r_{\mathrm{ss}} = 15 R_{\star}$. This particular initial choice of the source surface radius has no impact on the final state, since the extrapolated potential field then dynamically evolves with the stellar wind toward a relaxed state. Boundary conditions at the surface of the star -which model the base of the corona in our case [^2]- are a sensitive point of our study. As in @Reville2015a, we set three layers with different properties. For all layers, density and pressure are maintained at the values of the initial transonic polytropic wind solution. In the top layer, the poloidal velocity is set to be parallel to the magnetic field, while the toroidal velocity and the magnetic field are free to evolve. In the middle layer, the magnetic field is still free, the poloidal velocity is zero and the toroidal velocity is set for solid-body rotation. Finally, in the deepest layer, we enforce the reconstructed magnetic field, considering a perturbation in the toroidal field that self-adapts to minimize the overall current.
More explicitly, the magnetic field solution interacting with the rotating wind in open regions will in general differ from the potential extrapolation we set at the initialization. Hence, to ensure a current-free magnetic field inside the star, which is assumed to be a perfect conductor, we dynamically modify $B_{\varphi}$ in the deepest layer to get as close as possible to a curl-free magnetic field in the rotating frame [see @MattBalick2004; @ZanniFerreira2009]. This boundary condition has a strong effect on the conservation of MHD invariants. For instance, the quantity that corresponds to the derivative of the electric field potential in axisymmetric configurations [see @Reville2015a; @Matt2012; @KG2000; @Ustyugova1999] remains close to the stellar rotation rate in our 3D simulations when this boundary condition is applied. We will see in subsection \[subsec:semianal\] that this condition is key to an improved treatment of the angular momentum loss computation. The outer edges of our domain are treated with *outflow* boundary conditions that set the derivative of each field normal to the boundary to zero.

Results: Global Properties {#sec:glob}
==========================

Mass and angular momentum loss
------------------------------

--------------------- -- ---------------------
![image](fig2a.pdf)      ![image](fig2b.pdf)
**BD- 16351**            **TYC 5164-567-1**
![image](fig2c.pdf)      ![image](fig2d.pdf)
**HII 296**              **DX Leo**
![image](fig2e.pdf)      ![image](fig2f.pdf)
**AV 2177**              **Sun 1996**
--------------------- -- ---------------------

Figure \[3dim\] shows the resulting steady-state wind solutions achieved in the six simulations. The convergence time is typically of the order of a few Alfvén time scales, *i.e.* the time for Alfvén waves to cross the simulation domain.
Given the high resolution, and a time step that can vary by one order of magnitude depending on the magnetic field amplitude, each of these simulations reaches a steady state after $10^5$ to $2 \times 10^6$ time steps (between $10^5$ and $5 \times 10^5$ core hours on supercomputers). As usual, the topology of the coronal magnetic field in steady state can be divided into open field regions, or coronal holes, and closed field regions, or dead zones, where the plasma corotates with the star. The surface magnetic field of all the stars in our sample includes a significant dipole component, which can be strongly inclined. This mode gives the coronal magnetic field its large-scale structure. However, the magnetic field inclination, amplitude and smaller-scale components lead to irregular shapes of the Alfvén surface, which is shown in Figure \[3dim\]. For some cases, the Alfvén surface extends beyond the simulation domain. Indeed, for fast rotators, field collimation increases the poloidal magnetic field amplitude near the rotation axis, and the Alfvén surface is pushed farther away. A precise description of this phenomenon can be found in @WashShib1993 [@Ferreira2013; @Reville2015a] and we will address some of its consequences in section \[sec:vel\]. However, the global properties we are interested in, such as the mass and angular momentum loss rates, are constant to within a small numerical variation once integrated over a surface that encloses the largest closed coronal loops. In the case of TYC 5164-567-1, which has a significant part of its Alfvén surface outside the computational domain (more than any other case), this numerical variation of the mass and angular momentum loss is below 3%. It drops below 1% for cases where the Alfvén surface lies fully inside the computational domain.
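The constancy of these integrated loss rates is just a statement of conservation in a steady flow. As an illustration on synthetic profiles (not simulation data): for a steady, spherically symmetric wind, mass conservation imposes $\rho v r^2 = \mathrm{const}$, so the flux through any enclosing sphere is identical.

```python
import numpy as np

# Synthetic steady wind: rho * v * r^2 = 1 by construction, so the
# integrated mass flux must be independent of the integration sphere.
def v(r):
    return 1.0 - np.exp(-r / 3.0)        # arbitrary accelerating wind profile

def rho(r):
    return 1.0 / (v(r) * r**2)           # enforces rho * v * r^2 = 1

def mdot(r, n_theta=180):
    """Midpoint-rule integral of rho v_r over a sphere of radius r."""
    th = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    return 2.0 * np.pi * r**2 * rho(r) * v(r) * np.sum(np.sin(th)) * (np.pi / n_theta)
```

In the 3D simulations the equivalent statement only holds for surfaces that enclose all the closed loops, where the flow is genuinely steady, which is why the residual variation quoted above is at the percent level rather than exactly zero.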
The mass loss rate $\dot{M}$ and the angular momentum loss rate $\dot{J}$ associated with the wind are computed as $$\dot{J} = \int_S \rho \Lambda \mathbf{v} \cdot d\mathbf{S}, \label{Jdot}$$ where $$\Lambda = R \left( v_{\varphi} - B_{\varphi}\frac{B_p}{\rho v_p} \right), \label{Lambda}$$ and $$\dot{M} = \int_S \rho \mathbf{v} \cdot d\mathbf{S}. \label{Mdot}$$ The subscripts $p$ and $\varphi$ stand for the poloidal and azimuthal components of each vector field, and $R$ is the cylindrical radius. These integrals can be computed over any surface $S$ that contains all the closed coronal loops. From these outputs we define an effective Alfvén radius, which conveniently matches the relation given by the simplified model of @WeberDavis1967: $$\langle R_A \rangle = \sqrt{\frac{\dot{J}}{\Omega_{\star} \dot{M}}}. \label{AvAlf}$$ This effective value quantitatively matches the average cylindrical radius of the irregularly shaped Alfvén surfaces of our simulations. All the global quantities computed from our simulations are given in Table \[table2\]. We see that the angular momentum loss (AML) varies by a factor of $470$ from the Sun to the fastest rotator, HII 296. The variation of the mass loss is smaller, with values that reach $6$ times the solar value, here defined as $3.0 \times 10^{-14} M_{\odot}$/yr. The position of the Alfvén radius globally increases with rotation rate but is also very sensitive to temperature. For instance, the largest value we obtain is $16.6 R_{\star}$ for TYC 5164-567-1, which has a magnetic field similar to that of HII 296, but a slightly cooler corona, which makes the wind slower in our model. One can recover the convergence of spin rates on the Skumanich law for solar-like stars by assuming a loss of angular momentum proportional to $\Omega_{\star}^3$ in the non-saturated regime. In the saturated regime, although there is no consensus[^3], the dependence of the angular momentum loss is usually assumed to be linear in $\Omega_{\star}$.
Hence, a canonical expression for the stellar wind torque can be written as [@Kawaler1988; @Bouvier1997][^4]: $$\dot{J} = \dot{J}_{\odot} \left( \frac{\mathrm{min}(\Omega_{\star},\Omega_{\mathrm{sat}})}{\Omega_{\odot}} \right)^{2} \left(\frac{\Omega_{\star}}{\Omega_{\odot}} \right) \left( \frac{M_{\odot}}{M_{\star}}\frac{R_{\star}}{R_{\odot}} \right)^{0.5} \label{EmpiricalModel}$$ where $\Omega_{\mathrm{sat}}$ is a saturation value of the angular momentum loss, which occurs at $\Omega_{\mathrm{sat}} = 8 \Omega_{\odot}$ for K-type stars. This saturation value corresponds to a Rossby number $Ro \approx 0.1$ [@Wright2011], and higher-mass stars have higher $\Omega_{\mathrm{sat}}$ [for G stars the value is around $15 \Omega_{\odot}$, see @GalletBouvier2015]. For clarity, we define $\tilde{\Omega}_{\star}= \mathrm{min} (\Omega_{\star},\Omega_{\mathrm{sat}})$, with $\Omega_{\mathrm{sat}} = 8 \Omega_{\odot}$. Moreover, for every star in our sample $f(M_{\star},R_{\star}) \equiv \left(M_{\star}/M_{\odot}\right)^{-0.5} \left(R_{\star}/R_{\odot}\right)^{0.5} \approx 1$, so that formulation (\[EmpiricalModel\]) reduces to $\dot{J}/\dot{J}_{\odot} = \tilde{\Omega}_{\star}^2 \Omega_{\star}/\Omega_{\odot}^3$. In Figure \[OmegaDep\], we compare the torque resulting from our simulations (in blue) to this empirical formulation (\[EmpiricalModel\]) (in red). All quantities are normalized by the solar values. We look only at the $\Omega_{\star}$-dependence of the angular momentum loss from the simulations, and the agreement is good. This agreement results from the combination of the observed magnetic fields, the hypotheses on the coronal temperature and density, and the simulation method. Taking into account complex, three-dimensional magnetic fields is necessary, as some of our targets have a significant part of their magnetic energy in non-axisymmetric modes.
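The empirical prescription (\[EmpiricalModel\]), in the $f(M_{\star},R_{\star}) \approx 1$ limit relevant to this sample, together with the effective Alfvén radius of equation (\[AvAlf\]), can be transcribed as a short sketch (solar units throughout; the function names are ours, for illustration only):

```python
import numpy as np

OMEGA_SAT = 8.0                      # saturation threshold for K stars, in Omega_sun

def torque(omega):
    """J_dot / J_dot_sun for a rotation rate omega in units of Omega_sun,
    i.e. min(omega, OMEGA_SAT)^2 * omega with f(M, R) ~ 1."""
    return min(omega, OMEGA_SAT)**2 * omega

def effective_alfven_radius(jdot, mdot, omega):
    """<R_A> = sqrt(J_dot / (Omega * M_dot)), equation (AvAlf)."""
    return np.sqrt(jdot / (omega * mdot))
```

Below saturation the torque scales as $\Omega_{\star}^3$ (Skumanich-like spin-down); above it, only linearly with $\Omega_{\star}$.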
Moreover, we can see an inflection of the slope for the two fastest rotators, BD- 16351 and HII 296, which corresponds to the saturation value (a value that is not used as an input in our simulations). The saturation thus seems to appear self-consistently in our simulations, thanks to the plateau reached by the mass loss rates of our fast rotators (we will come back to this point in the next section).

![Evolution of the mass and angular momentum loss with age, compared with the rotational frequency and equation (\[EmpiricalModel\]). The angular momentum loss, in blue, follows the empirical law $\dot{J} \propto \tilde{\Omega}_{\star}^2 \Omega_{\star}$. The variation of the mass loss, shown in magenta, is close to linear in $\Omega_{\star}$ and is thus not enough to explain the total AML variation, which requires an increase of the Alfvén radius. The results of the semi-analytical model are added with dashed lines.[]{data-label="OmegaDep"}](fig3.pdf)

| Star | $\Omega_{\star}/\Omega_{\odot}$ | $n_0/n_{\odot}$ | $T$ ($10^6$ K) | $\langle R_A \rangle/R_{\star}$ | $\dot{J}/\dot{J}_{\odot}$ | $\dot{M}/\dot{M}_{\odot}$ | $r_{\mathrm{ss,opt}}/R_{\star}$ | $r_{\mathrm{ss,est}}/R_{\star}$ |
|:---------------|:---:|:----:|:----:|:----:|:---:|:----:|:----:|:----:|
| BD- 16351      | 8.5  | 3.6  | 1.85 | 13.9 | 380 | 5.5  | 8.1  | 7.7  |
| TYC 5164-567-1 | 5.9  | 2.9  | 1.8  | 16.7 | 190 | 2.75 | 10.7 | 10.5 |
| HII 296        | 10.7 | 4.15 | 1.9  | 13.8 | 470 | 5.65 | 9.3  | 8.7  |
| DX Leo         | 5.2  | 2.7  | 1.76 | 13.3 | 120 | 3.1  | 7.6  | 7.4  |
| AV 2177        | 3.3  | 2.06 | 1.7  | 7.0  | 20  | 3.0  | 4.6  | 4.3  |
| Sun            | 1.0  | 1.0  | 1.5  | 4.4  | 1.0 | 1.0  | 2.7  | 2.7  |

\[table2\]

The mass loss in our simulations (magenta line) varies approximately linearly with $\Omega_{\star}$ (green line) and thus covers one order of magnitude over the sample. The variation of the angular momentum loss, which covers three orders of magnitude, is the result of three ingredients (see equation \[AvAlf\]): the mass loss rate, the rotation rate and the average Alfvén radius squared; each of these ingredients accounts for roughly one order of magnitude.
In fact, the Alfvén radius squared seems to account for a bit more than the mass loss rate does, but these are not independent parameters, and this statement must be handled with caution.

Semi-analytical model {#subsec:semianal}
---------------------

Let us now compare these results with the semi-analytical model we developed in @Reville2015b from the parameter study of @Reville2015a. The Alfvén radii computed from the 3D simulations can be compared with the braking law we derived with a 2.5D axisymmetric setup in @Reville2015a: $$\langle \frac{R_A}{R_{\star}} \rangle = K_3 \left( \frac{\Upsilon_{\mathrm{open}}}{(1+(f/K_4)^2)^{0.5}} \right)^m,$$ where $K_3=0.65$, $K_4=0.06$ and $m=0.3$ are fitted constants, $f=\Omega_{\star} R_{\star} / \sqrt{GM_{\star}/R_{\star}}$ is the breakup ratio, and $\Upsilon_{\mathrm{open}} = \Phi_{\mathrm{open}}^2/(R_{\star}^2 \dot{M} v_{\mathrm{esc}})$ is a modified magnetization parameter [see @UdDoula2002]. The open magnetic flux $\Phi_{\mathrm{open}}$ is computed as the unsigned magnetic flux through a spherical surface beyond the largest closed magnetic loop. The value of the open flux is the same regardless of the integration surface chosen, as long as this latter condition is respected [see @Reville2015a; @Reville2015b]. Figure \[brakinglaw\] shows how our simulations (green stars) fit this braking law. The blue line represents the braking law derived in @Reville2015a. Our set of simulations needs a small recalibration to be described by this law: reducing the constant $K_3$ by $15\%$, to a value of $0.55$, gives an excellent agreement for all our cases. The importance of the boundary conditions is also illustrated in Figure \[brakinglaw\]. The two red stars are simulations of the Sun and TYC 5164-567-1 made keeping the initial extrapolated toroidal magnetic field fixed inside the deepest layer of our boundary conditions.
For the others (green stars), we use our self-adapting condition ensuring a curl-free magnetic field inside the star. The deviation is around $25\%$ on the Alfvén radii of TYC 5164-567-1 and the Sun, which induces a large underestimation of the angular momentum loss, since the latter scales as $R_A^2$. In @Reville2015b, we proposed a method to compute the open flux based on an appropriate value of the source surface radius in a potential field extrapolation. We defined the optimal source surface radius as the one for which the potential field source surface model [@Schatten1969; @AltschulerNewkirk1969] gives the same open flux as the simulations. To estimate this optimal source surface radius, we considered a pressure balance between the thermal and ram pressures of a spherically symmetric magneto-centrifugal wind model and the magnetic pressure of the multipolar expansion of the surface magnetic field. We demonstrated the accuracy of this estimation using 2.5D simulations, and we show in Table \[table2\] that it still holds for the 3D simulations performed here. The optimal ($r_{\mathrm{ss,opt}}$) and estimated ($r_{\mathrm{ss,est}}$) source surface radii are close, even though $r_{\mathrm{ss,est}}$ is systematically slightly smaller.

![Comparison of the 3D cases with the braking law of @Reville2015a. A new fit reducing the constant $K_3$ by $15\%$ is needed for a good agreement with the data (green stars), due to the higher coronal temperatures of our sample. Red stars stand for simulations of the Sun and TYC 5164-567-1 with simpler boundary conditions. Those Alfvén radii are off the braking law by more than $25\%$, which is equivalent to an angular momentum loss divided by two.[]{data-label="brakinglaw"}](fig4.pdf)

In terms of open flux, the results from the simulations and the potential extrapolation differ by less than $10\%$ for all cases. This good agreement is due to the right choice of source surface radius.
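The recalibrated braking law discussed above can be written down directly; the following sketch simply transcribes it with $K_3 = 0.55$ ($K_4$ and $m$ unchanged), taking $\Upsilon_{\mathrm{open}}$ and $f$ as defined earlier:

```python
import numpy as np

# Braking law of Reville et al. (2015a) with the recalibrated K3 = 0.55
# found for this sample; K4 and m keep their original fitted values.
K3, K4, M_EXP = 0.55, 0.06, 0.3

def alfven_radius(upsilon_open, f):
    """<R_A>/R_star as a function of the open-flux magnetization
    parameter Upsilon_open and the breakup ratio f."""
    return K3 * (upsilon_open / np.sqrt(1.0 + (f / K4)**2))**M_EXP
```

For slow rotators ($f \ll K_4$) this reduces to $K_3 \Upsilon_{\mathrm{open}}^{0.3}$, while fast rotation ($f \gtrsim K_4$) pulls the effective Alfvén radius inward at fixed magnetization.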
A potential extrapolation made with a constant $r_{\mathrm{ss}}$ would have created a large discrepancy for part of the sample, given the large variation of the optimal source surface radius. The large values of $r_{\mathrm{ss,opt}}$ are consistent with the size of the large coronal loops observed for the youngest stars of the sample, whose strong magnetic fields support loops extending up to $10 R_{\star}$ (see Figure \[3dim\]). Moreover, the $r_{\mathrm{ss,opt}}$ values match the Alfvén points at the extremities of the largest streamers and are always smaller than the effective Alfvén radius. This indicates that less angular momentum is carried away at the cusps of the streamers than in coronal holes [see @KG2000; @Garraffo2015b]. Hence, the semi-analytical method described in @Reville2015b is likely to be successful in estimating the angular momentum loss from the mass loss and the open flux, if we adapt our formulation with the updated value of the $K_3$ constant. Going back to Figure \[OmegaDep\], we superimposed the mass and angular momentum loss rates given by our semi-analytical model (dashed magenta and blue lines, respectively). The semi-analytical model overestimates the torque compared to the simulations. This can be understood simply by comparing the mass loss of the spherically symmetric wind solution used in the semi-analytical model with the mass loss of the simulations (dashed and solid magenta lines). In the simulations, the mass loss is always smaller than in the spherically symmetric solution, since part of the plasma is contained in closed magnetic loops that cover a large fraction of the stellar surface. For strong magnetic fields, it can be two to three times smaller than in the spherically symmetric ideal case. The AML is consequently smaller. It is interesting to note that the saturation does not appear in the semi-analytical model for the fastest rotators BD- 16351 and HII 296.
The saturation regime we observe in the simulations complies with a linear dependence of the AML on $\Omega_{\star}$ and is thus different from the one that arises with a purely radial field in the Weber and Davis model [@Keppens1995], which is used in the semi-analytical model. Hence, in addition to the intrinsic saturation of the dynamo-generated magnetic field observed in X-rays, which should be contained in our magnetic maps, mass loss saturation due to confinement in large coronal loops could be involved in the saturation of the angular momentum loss.

Results: 3D structure of the winds {#sec:vel}
==================================

Speed distribution
------------------

--------------------- ---------------------
![image](fig5a.pdf)   ![image](fig5b.pdf)
![image](fig5c.pdf)   ![image](fig5d.pdf)
--------------------- ---------------------

The previous section showed that the global properties of young stars' winds are consistent with simpler models parametrized on MHD simulations. Integrated quantities such as the mass and angular momentum loss average out the complex 3D structure of the wind. This structure is, however, relevant when it comes to studying the interaction of these winds with other objects, such as companion stars or planets [see @Vidotto2014a; @Cohen2014; @Strugarek2015; @doNascimento2016]. The speed distributions of all the simulations are structured with slow and fast wind components, although no heating mechanism specific to the fast wind is included. Thermal heating is only provided by the coronal temperature and the equation of state through the choice of $\gamma=1.05$, which is scaled on the slow solar wind component. Nevertheless, while the speed distribution is narrow in our solar case, the interaction with the strong magnetic fields of our fast rotators yields broader distributions and can lead to interesting dynamical properties.
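The reference speeds in this section come from the 1D polytropic wind solution obtained by Newton-Raphson iteration (see the computational setup above). As an illustration, the isothermal limit ($\gamma \to 1$) of that solve reduces to Parker's transcendental equation; the sketch below (our simplification, not the actual $\gamma=1.05$ solver) applies Newton-Raphson on the supersonic branch:

```python
import numpy as np

# Isothermal (gamma -> 1) limit of the polytropic wind solve used for the
# initialization: Parker's equation  u^2 - ln(u^2) = 4 ln(x) + 4/x - 3,
# with u = v/c_s and x = r/r_c, solved on the supersonic branch (u > 1,
# valid well beyond the sonic radius r_c).
K_B, M_P = 1.380649e-23, 1.67262e-27     # SI
G, M_SUN = 6.674e-11, 1.989e30
AU = 1.496e11

def parker_speed(r, T=1.5e6, mu=0.5):
    """Wind speed (m/s) at radius r for an isothermal corona of
    temperature T and mean molecular weight mu (ionized hydrogen)."""
    cs = np.sqrt(K_B * T / (mu * M_P))   # isothermal sound speed
    rc = G * M_SUN / (2.0 * cs**2)       # sonic (critical) radius
    x = r / rc
    rhs = 4.0 * np.log(x) + 4.0 / x - 3.0
    u = max(np.sqrt(rhs), 1.5)           # supersonic starting guess
    for _ in range(50):                  # Newton-Raphson iterations
        u -= (u * u - 2.0 * np.log(u) - rhs) / (2.0 * u - 2.0 / u)
    return u * cs
```

Note that the isothermal limit overestimates the $444$ km s$^{-1}$ of the $\gamma=1.05$ calibration at 1 AU, since a polytropic wind cools as it expands; the sketch is only meant to show the structure of the critical-point solve.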
Figure \[SpeedDis\] shows histograms of the wind speed projected on a sphere of radius $25 R_{\star}$, for four stars of our sample. First, we observe a broadening of the speed distribution as the rotation rate of the star increases. While the speed distribution of the solar case is bracketed between $280$ km/s and $330$ km/s, for the fast rotators TYC 5164-567-1 and HII 296 we observe a flattened distribution between $200$ km/s and $650$ km/s. Since the temperature increases with rotation in our model, higher speeds are expected from the thermal driving. The theoretical speed of the polytropic solution, indicated by the black dashed line, indeed increases with rotation. We observe, however, both faster and slower components in the stellar winds of those fast rotators. The distribution for fast rotators is organized in three peaks. In Figure \[MachVol\], we show a volume rendering of the Mach number for the four simulations of Figure \[SpeedDis\]. We can see that, for the young stars TYC 5164-567-1 and HII 296, the trimodal distribution appears with the blue (slow component), green (intermediate component) and red (fast component) colors. The first peak is due to the slow wind component emanating from the streamers. The strong dipolar magnetic fields of those stars create rings of slow wind located at the edge of the dead zones. Interestingly, those slow winds are positively correlated with the creation of currents in the simulation. The strong velocity gradient perpendicular to the dead zone boundary could be responsible for the creation of a current density even before the polarity reversal that occurs beyond it. This correlation needs, however, a more detailed analysis that is beyond the scope of this work. The second peak corresponds to the theoretical speed at this distance from the star, given by the polytropic solution.
Typically, this component emanates from quiet polar regions, where the interaction with the magnetic field is weakest. The third and fastest component appears between the streamers and the slower winds blown at the poles. It seems confined in flux tubes at mid-latitudes.

--------------------- ---------------------
**Sun 1996**          **AV 2177**
![image](fig6a.pdf)   ![image](fig6b.pdf)
**TYC 5164**          **HII 296**
![image](fig6c.pdf)   ![image](fig6d.pdf)
--------------------- ---------------------

The additional acceleration given to the wind is likely to come from the magneto-centrifugal effect [@Sakurai1985; @WashShib1993; @Reville2015a; @Reville2015b]. Magneto-centrifugal acceleration is the consequence of the centrifugal force acting on plasma tied, through the magnetic stress, to field lines anchored in the rotating star. Hence, the higher the rotation rate and the magnetic field, the stronger the magneto-centrifugal effect, which can be the dominant process in the wind acceleration [see @Michel1969; @WashShib1993]. For young stars, with periods of a few days, thermal driving and the magneto-centrifugal effect are comparable and must both be taken into account [@Reville2015b]. The magenta dashed line gives the wind speed obtained with the magneto-centrifugal wind solution [see @Sakurai1985; @Reville2015b for a detailed description of the solution calculation]. This value is consistent with the third peak of the speed distribution of HII 296. Open field regions at mid-latitudes are efficiently accelerated by this process (see Figure \[MachVol\]). In the case of TYC 5164-567-1, the theoretical Sakurai speed is, however, lower than the observed fast peak. This can be partly explained by the fact that the magneto-centrifugal wind solution is computed with the average surface field of the star. Hot magnetic spots conveniently located at the surface have a magnetic flux large enough to explain the lowest part of the peak.
However, another effect due to fast rotation and a strong magnetic field is also able to accelerate a stellar wind.

![image](fig7.pdf){width="5in"}

For a supersonic outflow, a superradial expansion of a given magnetic flux tube necessarily accelerates the plasma (see Appendix \[AppA\]). Here, two effects, both due to strong magnetic fields and fast rotation, can account for the expansion of flux tubes. First, a flux tube located near the pole and yet close to the streamer boundary -a typical situation for an inclined dipole topology, where the flux concentration is localized in longitude- will have on one side its field lines dragged by the streamers and thus bent downward, while on the other side the collimation of the field lines toward the rotation axis will bend them upward. This latitudinal expansion process is illustrated in the left panel of Figure \[ExpSketch\]. A longitudinal expansion process can also occur when the fast flux tube originates at the boundary of the strong concentration of flux near the pole. The magnetic stress on each side of the flux tube is different and leads to a different efficiency of the magneto-centrifugal effect on each side. Typically, a frozen-in magnetic field line that originates inside the “hot spot” will rotate faster than field lines originating outside. This effect is consequently responsible for a longitudinal expansion of the flux tube and is illustrated in the right panel of Figure \[ExpSketch\]. For the fast rotators, the acceleration of fast streams clearly starts just beyond the sonic surface. We have estimated the superradial expansion factor, defined by: $$f_{\mathrm{exp}} = \left( \frac{r_c}{r} \right)^2 \frac{A}{A_c},$$ where $r$ is the spherical radius and $A$ is the cross-sectional area of the magnetic flux tube, computed at $r=18 R_{\star}$ and at the sonic critical point (subscript $c$).
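Since the magnetic flux through a tube is conserved ($B A = \mathrm{const}$, a standard assumption, not stated explicitly above), the area ratio in $f_{\mathrm{exp}}$ can equivalently be evaluated from the field strength. A minimal sketch with illustrative numbers (not simulation output):

```python
# Superradial expansion factor f_exp = (r_c/r)^2 * A/A_c of a flux tube,
# evaluated through flux conservation: A/A_c = B_c/B.
def expansion_factor(r, B, r_c, B_c):
    """f_exp > 1 means the tube opens up faster than radially."""
    return (r_c / r)**2 * B_c / B

# A purely radial field (B ~ 1/r^2) gives f_exp = 1 (no superradial
# expansion); a dipole-like decay (B ~ 1/r^3) gives f_exp = r/r_c.
```

In this picture a supersonic flow threading a tube with $f_{\mathrm{exp}} > 1$ is accelerated, in the same way as in the diverging section of a de Laval nozzle.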
In the case of TYC 5164-567-1, we find $f_{\mathrm{exp}}$ to increase from $6.5$ to $10$ between the outer boundary and the core of the fastest flux tube. The maximum speed observed in the core of the flux tube is around $1300$ km/s, i.e., around three times the polytropic wind speed at this distance from the star. The broadening of the speed distribution, through the interaction of a thermally heated corona with a strong magnetic field and fast rotation, thus seems to be a robust feature of our simulations. The trimodal distribution of speeds, described with polytropic and magneto-centrifugal models, could provide a simple input for other models, for instance those used to compute mass loss rates [see @WoodRev2004 where constant stellar wind velocities are used in multispecies simulations of the termination shock of astrospheres]. It is hard, though, to predict the effect, for young stars, of the additional accelerating mechanisms in coronal holes that are necessary to explain the structure of the solar wind. We can imagine that the trimodal distribution would remain, but that the separation between the slowest mode -which comes from the streamers- and the two others -which originate from coronal holes- would be widened. Also, the two fastest peaks would show higher speeds than the ones obtained in our simulations.

Slow and fast wind in the equatorial plane
------------------------------------------

The magnetic topology of the Sun goes from a strong equatorial dipole at minimum to a more quadrupolar configuration during maximum [@DeRosa2012]. The heliospheric current sheet (HCS) is corrugated due to the rotationally modulated direction of the dipolar moment, and north and south sector polarities are observed at 1 AU depending on whether the Earth is beneath or above it. Hence, streams of fast wind encounter the slower and denser wind that wraps the HCS.
Indeed, the angle of the stream, $\psi = \arctan(r\Omega_{\star}/v)$ [@RichardsonRev2004], is a decreasing function of the velocity of the flow. Streamlines of the fast component are thus less bent and compress against slower and denser wind streams. These so-called “Corotating Interaction Regions” (CIRs) [@BelcherDavis1971] were detected in the solar system thanks to *Mariner* 5. The *Pioneer* $10$ and $11$ spacecraft have shown that $75\%$ of the CIRs have formed shocks between 3 and 5 AU [@SmithWolfe1976]. Those shocks dissipate the energy of the solar wind and are able, for instance, to accelerate ions [@RichardsonRev2004]. In the case of rapidly rotating stars whose dipoles are significantly inclined (BD- 16351 and DX Leo in our study), we should find similar features, enhanced by rapid rotation and strong magnetic fields. Figure \[EqPlane\] shows the Mach number in the equatorial planes of BD- 16351 and the Sun in 1996. The flow is structured as a Parker spiral in both cases [@Parker1958]. Streamers can be seen in the equatorial plane of BD- 16351 because of the inclination of its dipolar magnetic field. The slow wind can be clearly identified following the streamers in that case. The flow is drastically more uniform in the case of the (quiet) Sun, and the Parker spiral shows a weaker inclination of the field lines.

--------------------- ---------------------
![image](fig8a.pdf)   ![image](fig8b.pdf)
**BD- 16351**         **Sun 1996**
--------------------- ---------------------

In the case of BD- 16351, we observe adjacent fast and slow streams that encounter each other at the edge of the domain. The direction of the field (and thus of the flow) is different in the fast stream and in the slow stream, and a compression region occurs. Figure \[CIR\] shows the profiles of the density, the Mach number, the magnetic field and the current density amplitude along a circle of radius $25R_{\star}$ in the equatorial plane of BD- 16351.
Two density peaks mark the slow wind components. These peaks correspond to troughs in the profiles of the Mach number and the magnetic field amplitude. The thin throat in the magnetic field profile matches the crossing of the current sheet, which also corresponds to the slow wind components. This structure is similar to the heliospheric current sheet, except for its inclination, which follows the strongly inclined dipole of BD- 16351.

![Profiles of the density, Mach number, magnetic field and current amplitude interpolated along a $25R_{\star}$ equatorial circle in the BD- 16351 case. Peaks of density associated with troughs of velocity mark the slow wind component. Compression and acceleration occur when the fast wind encounters the slow wind component. []{data-label="CIR"}](fig9.pdf)

It is interesting to look at what happens to the flow just ahead of a density peak. The magnetic field increases, which means that a compression of the flow occurs. The density first increases, but then decreases just before the peak, compensated by an acceleration of the flow. In these regions, the wind reaches its maximum Mach number, just below 5, as shown in red in Figure \[EqPlane\]. The velocity then drops rapidly inside the streamer, where higher densities (by a factor of $2$ to $5$) are reached. This structure is repeated with the second streamer, beyond $\varphi = 5.5$. Nonetheless, the shock conditions are not yet fulfilled [see, for example, @PlasmaPhys], and the solution would likely need to be extended to a larger domain to produce shocks. In our simulations, the discontinuities are separated by a contact layer with no matter exchanged between them. From a numerical point of view, the HLLE solver that we use is one of the most diffusive approximate Riemann solvers and may not be able to correctly capture such an oblique shock.
It is likely, though, that shocks will form much earlier in the case of fast rotators, simply because of the enhanced winding of the Parker spiral, thus changing the energetic budget in the astrospheres of young suns. CIRs in the solar system are thought to be the most frequent cause of geomagnetic storms [@Yermolaev2012]. Our solar case does not show such features because we lack an additional heating mechanism for the acceleration of the fast wind. Yet, we can extrapolate our results and state that younger stars are likely to create more CIRs that shock in the interplanetary medium, adding to other dynamical events that are thought to be enhanced in the environments of active stars, such as flares and coronal mass ejections [@Schrijver2012].

Discussions and Perspectives {#sec:ccl}
============================

Our study presents 3D simulations constrained by spectropolarimetric observations of magnetic fields in the context of stellar evolution along the main sequence. Our findings can be divided into two parts. First, considering the global and integrated properties, the mass and angular momentum loss follow simple prescriptions, thanks to an appropriate modeling of the evolution of the temperature and the coronal density with the rotation rate. An angular momentum loss proportional to $\Omega_{\star}^3$ can be obtained using our prescriptions with observed magnetic field amplitudes, which is required to reproduce the convergence of spin rates on the Skumanich law. Our 3D simulations follow the braking law we derived in @Reville2015a if the $K_3$ constant is reduced by $15\%$. This can be understood because our simulations are made with a higher (and different for each case) coronal temperature compared to this previous work. Hence, the wind described here is faster and, for a given magnetic field strength, the Alfvén radius is closer to the star.
However, the fit shows little deviation, and a single constant $K_3$ in the scaling law is able to describe the whole temperature range of our sample. A more general braking law should quantify the influence of the temperature on $K_3$, since the variation remains limited in this study. The use of a fully 3D geometry could also play a role in the variation of $K_3$. Nonetheless, because this formulation expresses the dependence of the Alfvén radius on a magnetization parameter that includes the thermodynamics of the wind, it is likely to be valid for a wide range of magnetic fields, rotation rates, coronal temperatures and densities, if one allows a small dependence of $K_3$ on the temperature. With this adapted formulation, the semi-analytical model we developed in @Reville2015b can be applied. The method of estimating the open flux of the simulations with a potential-field extrapolation, which was tested on 2.5D configurations, is fully operational with 3D non-axisymmetric fields. Our semi-analytical model is consequently able to closely estimate the evolution of angular momentum with the rotation period, as long as the mass loss rate of the simulations does not deviate too much from the spherically symmetric value used in the semi-analytical model. This deviation grows as the stars rotate faster and possess more intense magnetic fields. Large coronal loops are able to confine more plasma, and the mass loss seems to plateau for $\Omega_{\star} > 8 \Omega_{\odot}$. This behavior has consequences on the angular momentum loss, which shows signs of saturation beyond this rotation rate, a value consistent with the saturation value used in rotational evolution models for K-type stars [@GalletBouvier2015]. Although the saturation of angular momentum loss is often associated with the saturation observed in X-ray fluxes, the precise process behind this saturation remains unknown.
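The saturated braking behavior discussed above is commonly written, in rotational-evolution models, as a torque scaling as $\Omega_{\star}^3$ below a saturation rate and linearly in $\Omega_{\star}$ above it. A minimal sketch of this piecewise parametrization follows; the normalization `K` and the exact threshold are placeholders, not values fitted to our simulations:

```python
def spin_down_torque(omega, omega_sat=8.0, K=1.0):
    """Magnitude of the braking torque |dJ/dt| in arbitrary units.

    omega is the stellar rotation rate in units of the solar one.
    Below omega_sat the torque scales as omega**3 (Skumanich-like
    regime); above it, the scaling is linear in omega (saturated
    regime). K is an arbitrary, un-fitted normalization.
    """
    return K * omega * min(omega, omega_sat) ** 2

# Unsaturated regime: doubling the rotation rate multiplies the torque by 8.
assert spin_down_torque(2.0) == 8.0 * spin_down_torque(1.0)
# Saturated regime: the torque only grows linearly with omega.
assert spin_down_torque(16.0) == 2.0 * spin_down_torque(8.0)
```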
Some works have suggested that a stochastic change of the dynamo process generating the magnetic field could be involved in a topology switch from small scales to large scales that turns on the $\Omega_{\star}^3$ braking law [@Barnes2003; @Brown2014; @Garraffo2015b]. However, no such transition is observed in our sample, as all our stars possess a strong dipolar field. In our simulations, the AML saturation seems to be due to the confinement of the outflow in large coronal loops that reduces the mass loss, which can be associated with the dependence of the wind braking on the filling factor [see @CranmerSaar2011; @GalletBouvier2013; @GalletBouvier2015]. These results need, however, to be confirmed by more simulations of fast rotators and are likely to be highly dependent on the prescriptions we used for the coronal base densities and temperature. The mass loss rate of young stars in our study, although up to $6$ to $9$ times the solar one [^5], does not reach the highest values derived in @Wood2005a. A much stronger dependence on rotation, for either the temperature or the coronal density, would have been necessary to obtain $100 \dot{M}_{\odot}$ values in our simulations. Our semi-analytical model could be used to study the influence of different prescriptions on the variation of $\dot{J}$. Changes in the exponents of the evolution laws, or in the solar initial values, could be tested. A more physical description of the stellar wind acceleration, driven for example by Alfvén wave turbulence [see @Suzuki2006; @MatsumotoSuzuki2012; @Suzuki2013a; @Sokolov2013; @Oran2013; @Lionello2014], could help us understand how coronal parameters evolve with age. Future work will be dedicated to including such processes in our simulations. To solve this issue, more observations are also critical.
In the work of @Wood2005a [see also @WoodRev2004], the analysis of the Ly$\alpha$ absorption spectra is coupled with numerical simulations of the terminal shock, where the wind speed is kept constant at the slow solar wind value around $400$ km/s. Our study, and this is the second part of our findings, brings more accurate constraints on the wind velocity amplitudes and distribution for solar-like stars that could be used to improve those calculations. Indeed, several studies have now shown that the wind speeds are likely to increase for young stars, mostly because of higher coronal temperature and magneto-centrifugal acceleration [@WashShib1993; @HolzwarthJardine2007; @Matt2012; @Suzuki2013b; @Reville2015a; @Reville2015b]. We show that the speed distribution of young and active stars follows a trimodal structure due to the interaction with the magnetic field. Moreover, the one-dimensional magneto-centrifugal wind solution [@WeberDavis1967; @Sakurai1985] is at best a low estimate of the fastest component of the wind, as other magnetic processes are able to accelerate the wind. For instance, we observe in our simulations fast streams in the vicinity of the star caused by latitudinal or longitudinal superradial expansion of flux tubes due to the fast rotation and the non-axisymmetry of the magnetic field. As slow and fast winds exist around solar-like stars, they can interact in the equatorial plane, creating Corotating Interaction Regions, which could be more common for younger stars and could be forming shocks closer to the star (the usual distance observed in the solar system is between $1$ and $5$ AU). This could have consequences on the energetics of expanding stellar winds of young stars and on exoplanetary space weather. 
The current sheet of young stars is also strongly corrugated when the dipole is inclined and polarity variations occur closer to one another, which has important consequences on the interaction with close-in planets, especially when they are within the Alfvén surface [@Strugarek2015]. This is, however, a very preliminary study and a more focused work needs to be done in that sense, improving, for instance, the numerical scheme and shock absorbing method. acknowledgments =============== We thank Sean Matt and Jérôme Bouvier for continuous and fruitful discussions. We acknowledge funding by the ANR Blanc TOUPIES SIMI5-6 020 01, the ERC STARS2 207430, CNES via Solar Orbiter. High performance computations were performed on the French machines Turing (IDRIS) and Curie (TGCC), within the GENCI 1623 program. We are grateful to Andrea Mignone and his team at University of Torino for the development and the maintenance of the PLUTO code. AS is a National Postdoctoral Fellow at the Canadian Institute of Theoretical Astrophysics (CITA) and acknowledges support from the Natural Sciences and Engineering Research Council of Canada. [86]{} , M. D., & [Newkirk]{}, G. 1969, [, 9, 131](http://dx.doi.org/10.1007/BF00145734) , J. D., [Hussain]{}, G. A. J., [Cohen]{}, O., [et al.]{} 2016, [, 588, A28](http://dx.doi.org/10.1051/0004-6361/201527832) —. 2016, [, 594, A95](http://dx.doi.org/10.1051/0004-6361/201628988) , S. A. 2003, [, 586, 464](http://dx.doi.org/10.1086/367639) , J. W., & [Davis]{}, Jr., L. 1971, [, 76, 3534](http://dx.doi.org/10.1029/JA076i016p03534) , J., [Forestini]{}, M., & [Allain]{}, S. 1997, , 326, 1023 , A., & [Saar]{}, S. H. 2000, in Astronomical Society of the Pacific Conference Series, Vol. 198, Stellar Clusters and Associations: Convection, Rotation, and Dynamos, ed. R. [Pallavicini]{}, G. [Micela]{}, & S. [Sciortino]{}, 381 , T. M. 2014, [, 789, 101](http://dx.doi.org/10.1088/0004-637X/789/2/101) , A. S., [Garc[í]{}a]{}, R. 
A., [Houdek]{}, G., [Nandy]{}, D., & [Pinsonneault]{}, M. 2015, [, 196, 303](http://dx.doi.org/10.1007/s11214-014-0117-8) , O., & [Drake]{}, J. J. 2014, [, 783, 55](http://dx.doi.org/10.1088/0004-637X/783/1/55) , O., [Kashyap]{}, V. L., [Drake]{}, J. J., [et al.]{} 2011, [, 733, 67](http://dx.doi.org/10.1088/0004-637X/733/1/67) , S. R., & [Saar]{}, S. H. 2011, [, 741, 54](http://dx.doi.org/10.1088/0004-637X/741/1/54) , M. L., [Brun]{}, A. S., & [Hoeksema]{}, J. T. 2012, [, 757, 96](http://dx.doi.org/10.1088/0004-637X/757/1/96) , Jr., J.-D., [Vidotto]{}, A. A., [Petit]{}, P., [et al.]{} 2016, [, 820, L15](http://dx.doi.org/10.3847/2041-8205/820/1/L15) , J.-F., & [Brown]{}, S. F. 1997, , 326, 1135 , J.-F., & [Landstreet]{}, J. D. 2009, [, 47, 333](http://dx.doi.org/10.1146/annurev-astro-082708-101833) , B. R., & [Latour]{}, J. 1978, , 9, 241 , B. 1988, , 25, 294 , J. 2013, [in EAS Publications Series, Vol. 62, EAS Publications Series, ed. P. [Hennebelle]{} & C. [Charbonnel]{}](http://dx.doi.org/10.1051/eas/1362006), 169 , C. P., [Petit]{}, P., [Bouvier]{}, J., [et al.]{} 2016, [, 457, 580](http://dx.doi.org/10.1093/mnras/stv2924) , F., & [Bouvier]{}, J. 2013, [, 556, A36](http://dx.doi.org/10.1051/0004-6361/201321302) —. 2015, [, 577, A98](http://dx.doi.org/10.1051/0004-6361/201525660) , C., [Drake]{}, J. J., & [Cohen]{}, O. 2015, [, 807, L6](http://dx.doi.org/10.1088/2041-8205/807/1/L6) —. 2015, [, 813, 40](http://dx.doi.org/10.1088/0004-637X/813/1/40) , M. S. 2005, in Saas-Fee Advanced Course 34: The Sun, Solar Analogs and the Climate, ed. J. D. [Haigh]{}, M. [Lockwood]{}, M. S. [Giampapa]{}, I. [R[ü]{}edi]{}, M. [G[ü]{}del]{}, & W. [Schmutz]{}, 307 , M. 2004, [, 12, 71](http://dx.doi.org/10.1007/s00159-004-0023-2) , D., & [Bhattacharjee]{}, A. 2000, [Introduction do Plasma Physics: With Space and Laboratory Applications]{} (Cambridge University Press) , V., & [Jardine]{}, M. 2007, [, 463, 11](http://dx.doi.org/10.1051/0004-6361:20066486) , N., & [Taam]{}, R. E. 
2003, [, 599, 516](http://dx.doi.org/10.1086/379192) Kawaler, S. D. 1988, [, 333, 236](http://dx.doi.org/10.1086/166740) , R., & [Goedbloed]{}, J. P. 2000, [, 530, 1036](http://dx.doi.org/10.1086/308395) , R., [MacGregor]{}, K. B., & [Charbonneau]{}, P. 1995, , 294, 469 , R. A., & [Holzer]{}, T. E. 1976, [, 49, 43](http://dx.doi.org/10.1007/BF00221484) , J. L., & [Wood]{}, B. E. 1996, [, 463, 254](http://dx.doi.org/10.1086/177238) , R., [Velli]{}, M., [Downs]{}, C., [Linker]{}, J. A., & [Miki[ć]{}]{}, Z. 2014, [, 796, 111](http://dx.doi.org/10.1088/0004-637X/796/2/111) , T., & [Suzuki]{}, T. K. 2012, [, 749, 8](http://dx.doi.org/10.1088/0004-637X/749/1/8) , S., & [Balick]{}, B. 2004, [, 615, 921](http://dx.doi.org/10.1086/424727) , S. P., [Brun]{}, A. S., [Baraffe]{}, I., [Bouvier]{}, J., & [Chabrier]{}, G. 2015, [, 799, L23](http://dx.doi.org/10.1088/2041-8205/799/2/L23) , S. P., [Pinz[ó]{}n]{}, G., [Greene]{}, T. P., & [Pudritz]{}, R. E. 2012, [, 745, 101](http://dx.doi.org/10.1088/0004-637X/745/1/101) , L. 1968, , 138, 359 , F. C. 1969, [, 158, 727](http://dx.doi.org/10.1086/150233) , A., [Bodo]{}, G., [Massaglia]{}, S., [et al.]{} 2007, [, 170, 228](http://dx.doi.org/10.1086/513316) , R. W., [Weiss]{}, N. O., & [Vaughan]{}, A. H. 1984, [, 287, 769](http://dx.doi.org/10.1086/162735) , R., [van der Holst]{}, B., [Landi]{}, E., [et al.]{} 2013, [, 778, 176](http://dx.doi.org/10.1088/0004-637X/778/2/176) , R. 1989, [, 1, 177](http://dx.doi.org/10.1007/BF00872715) , R., [Golub]{}, L., [Rosner]{}, R., [et al.]{} 1981, [, 248, 279](http://dx.doi.org/10.1086/159152) , E. N. 1958, [, 128, 664](http://dx.doi.org/10.1086/146579) , R. F., [Brun]{}, A. S., [Jouve]{}, L., & [Grappin]{}, R. 2011, [, 737, 72](http://dx.doi.org/10.1088/0004-637X/737/2/72) , N., & [Kochukhov]{}, O. 2002, [, 381, 736](http://dx.doi.org/10.1051/0004-6361:20011517) , N., [Maggio]{}, A., [Micela]{}, G., [Sciortino]{}, S., & [Ventura]{}, P. 
2003, [, 397, 147](http://dx.doi.org/10.1051/0004-6361:20021560) , K. G. 1994, , S. 2000, in Astronomical Society of the Pacific Conference Series, Vol. 198, Stellar Clusters and Associations: Convection, Rotation, and Dynamos, ed. R. [Pallavicini]{}, G. [Micela]{}, & S. [Sciortino]{}, 401 , V., [Brun]{}, A. S., [Matt]{}, S. P., [Strugarek]{}, A., & [Pinto]{}, R. F. 2015, [, 798, 116](http://dx.doi.org/10.1088/0004-637X/798/2/116) , V., [Brun]{}, A. S., [Strugarek]{}, A., [et al.]{} 2015, [, 814, 99](http://dx.doi.org/10.1088/0004-637X/814/2/99) , I. G. 2004, [, 111, 267](http://dx.doi.org/10.1023/B:SPAC.0000032689.52830.3e) , T. 1985, , 152, 121 , K. H., [Wilcox]{}, J. M., & [Ness]{}, N. F. 1969, [, 6, 442](http://dx.doi.org/10.1007/BF00146478) , E. 1962, , 25, 18 , C. J., [Beer]{}, J., [Baltensperger]{}, U., [et al.]{} 2012, [, 117, A08103](http://dx.doi.org/10.1029/2012JA017706) , V., [Jardine]{}, M., [Vidotto]{}, A. A., [et al.]{} 2015, [, 453, 4301](http://dx.doi.org/10.1093/mnras/stv1925) , M. 1989, , 225, 456 , E. J., & [Wolfe]{}, J. H. 1976, [, 3, 137](http://dx.doi.org/10.1029/GL003i003p00137) , J. M., [Swaczyna]{}, P., [Bzowski]{}, M., & [Tokumaru]{}, M. 2015, [, 290, 2589](http://dx.doi.org/10.1007/s11207-015-0800-2) , I. V., [van der Holst]{}, B., [Oran]{}, R., [et al.]{} 2013, [, 764, 23](http://dx.doi.org/10.1088/0004-637X/764/1/23) , A., [Brun]{}, A. S., [Matt]{}, S. P., & [R[é]{}ville]{}, V. 2015, [, 815, 111](http://dx.doi.org/10.1088/0004-637X/815/2/111) , T. K. 2006, [, 640, L75](http://dx.doi.org/10.1086/503102) —. 2013, [, 334, 81](http://dx.doi.org/10.1002/asna.201211751) , T. K., [Imada]{}, S., [Kataoka]{}, R., [et al.]{} 2013, [, 65](http://dx.doi.org/10.1093/pasj/65.5.98), [[arXiv:1212.6713 \[astro-ph.SR\]]{}](http://arxiv.org/abs/1212.6713) , A., & [Owocki]{}, S. P. 2002, [, 576, 413](http://dx.doi.org/10.1086/341543) , G. V., [Koldoba]{}, A. V., [Romanova]{}, M. M., [Chechetkin]{}, V. M., & [Lovelace]{}, R. V. E. 
1999, [, 516, 221](http://dx.doi.org/10.1086/307093) , J. L., [Ceillier]{}, T., [Metcalfe]{}, T. S., [et al.]{} 2016, [, 529, 181](http://dx.doi.org/10.1038/nature16168) , M. 1994, [, 432, L55](http://dx.doi.org/10.1086/187510) —. 2010, [, 1216, 14](http://dx.doi.org/10.1063/1.3395823) , A. A., [Jardine]{}, M., [Morin]{}, J., [et al.]{} 2014, [, 438, 1162](http://dx.doi.org/10.1093/mnras/stt2265) , A. A., [Gregory]{}, S. G., [Jardine]{}, M., [et al.]{} 2014, [, 441, 2361](http://dx.doi.org/10.1093/mnras/stu728) , A. A., [Donati]{}, J.-F., [Jardine]{}, M., [et al.]{} 2016, [, 455, L52](http://dx.doi.org/10.1093/mnrasl/slv147) , Y.-M., & [Sheeley]{}, Jr., N. R. 1990, [, 355, 726](http://dx.doi.org/10.1086/168805) , H., & [Shibata]{}, S. 1993, , 262, 936 , E. J., & [Davis]{}, Jr., L. 1967, [, 148, 217](http://dx.doi.org/10.1086/149138) , N. O. 1994, in Lectures on Solar and Planetary Dynamos, ed. M. R. E. [Proctor]{} & A. D. [Gilbert]{}, 59 Wood, B. E. 2004, [, 1](http://dx.doi.org/10.12942/lrsp-2004-2) , B. E., [M[ü]{}ller]{}, H.-R., [Zank]{}, G. P., & [Linsky]{}, J. L. 2002, [, 574, 412](http://dx.doi.org/10.1086/340797) , B. E., [M[ü]{}ller]{}, H.-R., [Zank]{}, G. P., [Linsky]{}, J. L., & [Redfield]{}, S. 2005, [, 628, L143](http://dx.doi.org/10.1086/432716) , N. J., [Drake]{}, J. J., [Mamajek]{}, E. E., & [Henry]{}, G. W. 2011, [, 743, 48](http://dx.doi.org/10.1088/0004-637X/743/1/48) , Y. I., [Lodkina]{}, I. G., [Nikolaeva]{}, N. S., & [Yermolaev]{}, M. Y. 2012, [, 117, A08207](http://dx.doi.org/10.1029/2012JA017716) , C., & [Ferreira]{}, J. 2009, [, 508, 1117](http://dx.doi.org/10.1051/0004-6361/200912879) Flux tube expansion in the supersonic regime {#AppA} ============================================ The mass conservation within a flux tube reduces to the equation $$\frac{1}{v}\frac{dv}{dr}+\frac{1}{\rho}\frac{d\rho}{dr}+\frac{1}{A}\frac{dA}{dr}=0,$$ where $A$ is the section of the flux tube. 
The momentum equation, if we consider an isothermal flow for simplicity, can then be written as $$\left(v - \frac{c_s^2}{v}\right) \frac{dv}{dr} = \frac{c_s^2}{A} \frac{dA}{dr} - \frac{v_{\mathrm{kep}}^2 (r_c) r_c}{r^2},$$ where $c_s$ is the constant sound speed, $v_{\mathrm{kep}}(r) = \sqrt{GM_{\star}/r}$ is the Keplerian velocity and $r_c$ is the critical radius where the wind becomes supersonic. In terms of the Mach number $M=v/c_s$, we can write $$\left(M - \frac{1}{M}\right) \frac{dM}{dr} = \frac{1}{A} \frac{dA}{dr} - \frac{v_{\mathrm{kep}}^2 (r_c) r_c}{c_s^2 r^2}.$$ When the wind is subsonic ($M<1$), the term $(M-1/M)$ is negative, and thus what matters for the acceleration of the wind is the sign of the term on the right-hand side. The expansion of the flux tube can be locally described by $A \propto r^{\alpha}$, which yields $(1/A) dA/dr = \alpha/r$. The spherically symmetric solution, which corresponds to the case $A \propto r^2$ and $\alpha=2$, gives $$\frac{2}{r} \leq \frac{v_{\mathrm{kep}}^2 (r_c) r_c}{c_s^2 r^2},$$ since the Parker solution is always accelerating. It has been shown that superradial expansion is globally anti-correlated with the wind speed [@WangSheeley1990], because of the local inversion of the latter inequality below the sonic surface. Nonetheless, superradial expansion can accelerate the outflow as long as $(1/A)dA/dr$ remains smaller than $v_{\mathrm{kep}}^2 (r_c) r_c/(c_s^2 r^2)$, for instance if the superradial expansion is located near the surface or well below the sonic point [see @Velli2010]. However, in the supersonic regime, a superradial expansion will necessarily accelerate the outflow. Indeed, as $\alpha \geq 2$, and since the right-hand side is positive in the radial case, any superradial expansion will increase this term, and $dM/dr$ will be larger. The treatment of this problem without the isothermal approximation is more complex [see @KoppHolzer1976].
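The sign argument above can be checked numerically. In the sketch below, lengths are in units of the sonic radius; for the isothermal Parker wind $r_c = GM_{\star}/(2c_s^2)$, so the gravitational term $v_{\mathrm{kep}}^2 (r_c) r_c/(c_s^2 r^2)$ reduces to $2 r_c/r^2$ (the values chosen for $M$ and $r$ are arbitrary supersonic test values):

```python
def dM_dr(M, r, alpha, r_c=1.0):
    """Slope of the Mach number from the isothermal wind equation
        (M - 1/M) dM/dr = alpha/r - 2*r_c/r**2,
    with A proportional to r**alpha and the Parker relation
    v_kep^2(r_c) * r_c = 2 * c_s^2 * r_c, in units of the sonic radius."""
    return (alpha / r - 2.0 * r_c / r**2) / (M - 1.0 / M)

# A supersonic point beyond the sonic radius.
M, r = 2.0, 3.0
radial = dM_dr(M, r, alpha=2.0)       # spherical expansion, A ~ r^2
superradial = dM_dr(M, r, alpha=3.0)  # superradial expansion, alpha > 2

assert radial > 0.0          # the radial Parker wind keeps accelerating
assert superradial > radial  # superradial expansion accelerates it further
```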
In our study, the sound speed variation may not be negligible, but the qualitative behavior remains, and a quantitative analysis is left for future work. [^1]: The actual resolution of ZDI is somewhat lower than this ($\ell_{\mathrm{max}} =8-10$), but the fields are reconstructed using $15$ spherical harmonics. [^2]: See @MatsumotoSuzuki2012 for a 2D model of the chromosphere and the transition region. [^3]: For instance, a purely spherically symmetric radial Weber and Davis model yields a self-consistent saturation where $\dot{J} \propto \Omega_{\star}^2$ [see @Keppens1995]. [^4]: The recent work of @Matt2015 proposed a more accurate description of the mass dependence. [^5]: In table \[table2\], we have considered an upper value of the solar mass loss rate $\dot{M} = 3 \times 10^{-14} M_{\odot}/$yr.
--- abstract: 'We introduce the notion of strong embeddability for a metric space. This property lies between coarse embeddability and property A. A relative version of strong embeddability is developed in terms of a family of set maps on the metric space. When restricted to discrete groups, this yields relative coarse embeddability. We verify that groups acting on a strongly embeddable metric space have this relative strong embeddability, provided the stabilizer subgroups do. As a corollary, strong embeddability is preserved under group extensions.' address: - | Department of Mathematical Sciences\ Indiana University-Purdue University, Indianapolis\ 402 N. Blackford Street\ Indianapolis, IN 46202\ - | Department of Mathematics\ The Ohio State University\ Columbus, OH 43210\ - | Department of Mathematics\ The Ohio State University\ Columbus, OH 43210\ author: - Ronghui Ji - Crichton Ogle - 'Bobby W. Ramsey' bibliography: - 'all.bib' title: Strong embeddability and extensions of groups --- Introduction ============ If a discrete group $G$ has property A, as defined by Yu in [@yu], it may be coarsely embedded in a separable Hilbert space. This is a desirable property, as Yu has shown [@yu] that such coarse embeddability verifies the Coarse Baum-Connes Conjecture for that group. In [@DG_CPUE], it was shown that if $H \to G \to Q$ is an extension of groups where $Q$ has property A and $H$ is coarsely embeddable, then $G$ is coarsely embeddable. Unlike property A, however, coarse embeddability is not known to be closed under arbitrary extensions. Moreover, it is known for metric spaces that property A is a strictly stronger condition than coarse embeddability [@AGS]. In this paper we introduce an intermediate notion, strong embeddability, implied by property A and implying coarse embeddability. This is done via a careful analysis of the results of Dadarlat and Guentner in [@DG_CPUE].
A main property of this stronger notion is that it is preserved under arbitrary extensions. More generally, strong embeddability admits a natural relative formulation by which one can verify a relative generalization of the extension theorem. Namely: if $H < G$ is strongly embeddable and $\{ \pi : G \to G/H \}$ is a *strongly embeddable family of maps*, then $G$ is strongly embeddable. This strong embeddability of a family of maps is a natural generalization of the relative property A studied in [@JOR4]. As a consequence, if $G$ has relative property A with respect to a family $H_1, \ldots, H_n$ of subgroups, and if each $H_i$ is coarsely embeddable, then $G$ is coarsely embeddable. Finally, we show that strong embeddability is satisfied by groups acting on strongly embeddable metric spaces. It is currently unknown whether strong embeddability and coarse embeddability coincide for discrete metric spaces. We hope to address this issue in future work. Preliminaries ============= All groups are assumed to be countable and uniformly discrete, with a proper left-invariant metric. All metric spaces are assumed to be uniformly discrete with bounded geometry. A *coarse embedding* of a metric space $(X,d)$ in a Hilbert space ${\mathcal{H}}$ is a map $\phi : X \to {\mathcal{H}}$ for which there exist two nondecreasing maps $\rho_{-}, \rho_{+} : [0, \infty) \to \R$ with $\lim_{r\to\infty} \rho_{\pm}(r) = \infty$ such that the following holds for all $x, y \in X$. $$\rho_{-}\left( d(x,y) \right) \leq \| \phi(x) - \phi(y) \| \leq \rho_{+}\left( d(x,y) \right).$$ A metric space for which a coarse embedding exists is called *coarsely embeddable*. A useful characterization of coarse embeddability was given by Dadarlat and Guentner in [@DG_CPUE]. \[prop:CE\] Let $X$ be a metric space.
Then $X$ is coarsely embeddable if and only if for every $R, \epsilon > 0$ there exists a Hilbert space valued map $\beta : X \to {\mathcal{H}}$ with $\| \beta(x) \| = 1$ for each $x \in X$, and satisfying: 1. $\sup \left\{ \left| 1 - \langle \beta(x), \beta(y) \rangle \right| : d(x,y) \leq R \right\} \leq \epsilon$. 2. $\lim_{S \to \infty} \sup \left\{ \left| \langle \beta(x) , \beta(y) \rangle \right| : d(x,y) \geq S \right\} = 0$. We point out that condition (1) can be replaced by $\sup \left\{ \| \beta(x) - \beta(y) \| : d(x,y) \leq R \right\} \leq \epsilon$. We will need to make use of a family of coarsely embeddable metric spaces, with some uniform control. We take the following as our definition. \[defn:ece\] A family $(X_j)_{j \in {\mathcal{I}}}$ of metric spaces is *equi-coarsely embeddable* if for every $R, \epsilon > 0$ there is a family of Hilbert space valued maps $\xi_j : X_j \to {\mathcal{H}}$ with $\| \xi_j(x) \| = 1$ for all $x \in X_j$ and satisfying: 1. For all $j \in {\mathcal{I}}$ and all $x,y \in X_j$, if $d(x,y) \leq R$, then $\| \xi_j(x) - \xi_j(y) \| \leq \epsilon$. 2. $\lim_{S\to\infty} \sup_{j \in {\mathcal{I}}} \sup\left\{ \left| \langle \xi_j(x), \xi_j(y)\rangle \right| : d(x,y) \geq S\, , \,\, x,y \in X_j \right\} = 0$. Now to Yu’s property A. A metric space $(X,d)$ has property A if for every $R, \epsilon > 0$ there exists an $S > 0$ and a collection $\left(A_x\right)_x$ of finite nonempty subsets of $X \times {{\mathbb{N}}}$, indexed by $x \in X$, such that: 1. For each $x, y \in X$, if $d(x,y) < R$, then $$\frac{|A_x \Delta A_y|}{|A_x|} < \epsilon.$$ 2. For each $x \in X$, if $(y, n) \in A_x$ then $d(x, y) < S$. The following characterization will be useful in the sequel. \[prop:PropA\] A metric space $(X,d)$ has property A if and only if for every $R, \epsilon > 0$ there exists a Hilbert space valued map $\alpha : X \to {\mathcal{H}}$ with each $\| \alpha_x \| = 1$, and an $S > 0$ such that: 1. 
For each $x,y \in X$, if $d(x,y) \leq R$, then $\left| 1 - \langle \alpha_x , \alpha_y \rangle \right| < \epsilon$. 2. For each $x \in X$, if $d(x,y) \geq S$ then $\langle \alpha_x , \alpha_y \rangle = 0$ \[defn:ee\] A family $(X_j)_{j \in {\mathcal{I}}}$ of metric spaces is *equi-exact* if for every $R, \epsilon > 0$ there is a family of Hilbert space valued maps $\xi_j : X_j \to {\mathcal{H}}$ with $\| \xi_j(x) \| = 1$ for all $x \in X_j$ and an $S>0$ and satisfying: 1. For all $j \in {\mathcal{I}}$ and all $x,y \in X_j$, if $d(x,y) \leq R$, then $\| \xi_j(x) - \xi_j(y) \| \leq \epsilon$. 2. For all $j\in {\mathcal{I}}$ and all $x,y \in X_j$, $\langle \xi_j(x), \xi_j(y)\rangle = 0$ if $d(x,y) \geq S$. As mentioned above, for a group extension $H \to G \to Q$, Dadarlat and Guentner show in [@DG_CPUE] that if $H$ is coarsely embeddable and $Q$ has property A, then $G$ is coarsely embeddable. The assumption that $Q {{\,\cong\,}}G/H$ has property A is closely related to the notion of relative property A, developed in [@JOR4]. Let $G$ be a group and $\{ H_1, \ldots, H_n \}$ a finite family of subgroups, and let $K = \sqcup_{i = 1}^{n} G/H_i$. Then $G$ has *relative property A* with respect to $\{ H_1, \ldots, H_n \}$ if for every $R>0$ and $\epsilon > 0$ there exists an $S > 0$ and a collection $A_x$ of finite nonempty subsets of $K \times {{\mathbb{N}}}$, indexed by $x \in G$, such that: 1. For each $x, y \in G$, if $d(x,y) < R$, then $$\frac{|A_x \Delta A_y|}{|A_x|} < \epsilon.$$ 2. For each $x \in G$, if $(aH_i, n) \in A_x$ then $d(x, aH_i) < S$. The cosets of each single subgroup $H_i$ serve to partition the space $G$ into isometric pieces. We obtain a family of partitions. A single element of one of the partitions is indexed by $K$, and distances are measured between elements of $G$ and elements of $K$. In the next section we expand this framework to more general situations by considering families of set maps. 
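To keep a concrete case in mind, here is a standard example (not drawn from the references above) illustrating the definition of property A: the space $X = {{\mathbb{Z}}}$ with its usual metric has property A. Given $R, \epsilon > 0$, choose an integer $S > R$ with $2R/(2S+1) < \epsilon$ and set $A_x = \left\{ (y,1) : |y-x| \leq S \right\}$, so that $|A_x| = 2S+1$. For $d(x,y) = k \leq R$, the symmetric difference consists of the $2k$ points covered by exactly one of the two intervals, whence $$\frac{|A_x \Delta A_y|}{|A_x|} = \frac{2k}{2S+1} \leq \frac{2R}{2S+1} < \epsilon,$$ while $(y,n) \in A_x$ forces $d(x,y) \leq S$, so the support condition holds with $S+1$ in place of $S$.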
Exact families of maps ====================== Suppose $(X,d)$ is a metric space. For a finite family of sets, $\left\{ Y_i \right\}_{i=1}^{n}$, let ${\mathcal{Y}}= \sqcup_{i=1}^n Y_i$ denote the disjoint union. \[defn:exactfamily\] A family of set maps $\left\{ \phi_i : X \to Y_i \right\}_{i=1}^{n}$ is an *exact family of maps* if for every $R, \epsilon > 0$ there exists an $S > 0$ and a map $\xi : X \to \ell^2( {\mathcal{Y}})$, with $\| \xi_x \| = 1$ for all $x \in X$ (where $\xi_x$ is the element of $\ell^2({\mathcal{Y}})$ associated to $x \in X$) and satisfying the following: 1. For all $x,y \in X$, if $d(x,y) \leq R$, then $\| \xi_x - \xi_y \| \leq \epsilon$. 2. For all $x \in X$, ${\operatorname{supp}}\xi_x \subset \cup_{i} \phi_i\left( B_S(x) \right)$, where $B_S(x)$ denotes the ball of radius $S$ in $X$ centered at $x$. Let $G$ be a group, and let $\left\{ H_i \right\}_{i=1}^{n}$ be a finite family of subgroups of $G$. Let $\left\{\pi_i : G \to G/H_i\right\}$ be the corresponding quotient maps. The group $G$ has relative property A with respect to $\left\{ H_i \right\}_{i=1}^{n}$ if and only if $\left\{ \pi_i \right\}_{i=1}^{n}$ is an exact family of maps. A metric space $(X,d)$ has property A if and only if $\left\{ Id:X \to X \right\}$ is an exact family of maps. This follows from Proposition 4.3 of [@JOR4] and Proposition 3.2 of [@tu_RmkPropA]. In [@JOR4], relative property A was defined only for groups. The notion of exact families of maps gives a natural framework in which to extend this property to metric spaces. \[thm:propA\] Suppose $(X,d)$ is a metric space and $\left\{ \phi_i : X \to Y_i \right\}_{i=1}^{n}$ is an exact family of maps. If the family of preimages $\left\{ \phi_i^{-1}( w ) : w \in Y_i , i=1,\ldots, n \right\}$ is an equi-exact family of metric spaces, then $X$ has property A. This is related to Theorem 3.1 of [@DG_UERHG]. Fix $R, \epsilon > 0$. Denote by ${\mathcal{Y}}$ the disjoint union $\sqcup_{i=1}^n Y_i$.
For $x \in X$ and $w \in Y_i$, let $\eta(x,w)$ denote a point in $\phi_i^{-1}(w)$ closest to $x$. By the triangle inequality we have $d(x,y) \leq d(\eta(x,w), \eta(y,w) ) + d(x, \phi_i^{-1}(w)) + d(y, \phi_i^{-1}(w))$ and $d( \eta(x,w), \eta(y, w) ) \leq d(x,y) + d(x, \phi_i^{-1}(w)) + d(y, \phi_i^{-1}(w))$ for $w \in Y_i$, and all $x,y \in X$. By Definition \[defn:exactfamily\] there is a map $\alpha : X \to \ell^2({\mathcal{Y}})$ with each $\| \alpha_x \| = 1$, and an $S_X > 0$ such that: 1. For each $x,y \in X$, if $d(x,y) \leq R$, then $\left| 1 - \langle \alpha_x , \alpha_y \rangle \right| < \frac{\epsilon}{2}$. 2. For each $x \in X$, if $\alpha_x( w ) \neq 0$ then $w \in \cup_i \phi_i \left( B_{S_X}(x) \right)$. Let ${\mathcal{I}}= \left\{ (i, w) : i=1,\ldots, n, \,\, w \in Y_i \right\}$. Note that there is a natural identification of ${\mathcal{I}}$ with ${\mathcal{Y}}$. For convenience, we will refer to $j = (i,w)$ as $w$. It will be understood that $i$ is then determined by a choice of $w \in Y_i$. For $j = (i,w) \in {\mathcal{I}}$, set $X_j = \phi_i^{-1}(w)$. By Definition \[defn:ee\], there is an $S_Y > 0$, a Hilbert space ${\mathcal{H}}$, and for each $j = (i,w) \in {\mathcal{I}}$ a $\beta_j : \phi_i^{-1}(w) \to {\mathcal{H}}$ with $\| \beta_j(s) \| = 1$ for all $s \in X_j$, and such that 1. For all $s,t \in X_j$, $\left| 1 - \langle \beta_j(s), \beta_j(t) \rangle \right| \leq \frac{\epsilon}{2}$ whenever $d(s,t) \leq 2S_X + R$. 2. For all $s,t \in X_j$ if $d(s,t) \geq S_Y$, then $\langle \beta_j(s) , \beta_j(t) \rangle = 0$. 
Define $\xi : X \to \ell^2( {\mathcal{Y}}, {\mathcal{H}})$ by $$\xi_x( w ) = \alpha_x( w ) \beta_j( \eta( x, w ) ), \,\,\,\, \textrm{ for all } x \in X, w \in Y_i.$$ For any $x \in X$, $$\begin{aligned} \| \xi_x \|^2 &= \sum_{w \in {\mathcal{Y}}} \| \xi_x(w) \|^2\\ &= \sum_{w \in {\mathcal{Y}}} | \alpha_x(w) |^2 \| \beta_j( \eta(x,w) ) \|^2 \\ &= 1.\end{aligned}$$ It remains to verify the two remaining properties of Proposition \[prop:PropA\] for $\xi$. If $x,y \in X$ with $d(x,y) \leq R$, then $$\begin{aligned} |1 - \langle \xi_x , \xi_y \rangle | \leq & \left| \sum_{w \in {\mathcal{Y}}} \left( 1 - \langle \beta_j(\eta(x,w)), \beta_j(\eta(y,w)) \rangle \right) \alpha_x(w) \alpha_y(w) \right| \\ & + | 1 - \langle \alpha_x , \alpha_y \rangle |. \end{aligned}$$ The second term is bounded by $\frac{\epsilon}{2}$. Moreover, the first sum is over $w \in \phi_i ( B_{S_X}(x) ) \cap \phi_i ( B_{S_X}(y))$. Since $\| \alpha_x \| = \| \alpha_y \| = 1$ this sum is bounded by $$\sup\left\{ \left| 1 - \langle \beta_j(\eta(x,w)), \beta_j(\eta(y, w))\rangle \right| : w \in \phi_i( B_{S_X}(x) ) \cap \phi_i( B_{S_X}(y)), i = 1,\ldots,n \right\}.$$ As each such $w$ satisfies $d( \eta(x,w), \eta(y,w) ) \leq R + 2 S_X$, this sum is bounded by $\frac{\epsilon}{2}$. Let $x,y \in X$ be such that $d(x,y) \geq 2S_X + S_Y$. Then $$\langle \xi_x , \xi_y \rangle = \sum_{w \in {\mathcal{Y}}} \alpha_x(w) \alpha_y(w) \langle \beta_j( \eta(x, w) ), \beta_j( \eta(y, w)) \rangle.$$ The support condition on $\alpha_x$ and $\alpha_y$ ensure the sum is over $w \in \cup_i \left(\phi_i ( B_{S_X}(x) ) \cap \phi_i ( B_{S_X}(y))\right)$. For each $i$ and each $w \in \phi_i( B_{S_X}(x) ) \cap \phi_i( B_{S_X}(y) )$, we have $d( \eta(x, w), \eta(y, w) ) \geq d(x,y) - d(x, \phi_i^{-1}(w)) - d(y, \phi_i^{-1}(w)) \geq S_Y$. As such, this sum is zero. Strong embeddability ==================== \[defn:SE\] Let $(X,d)$ be a metric space. 
Then $X$ is strongly embeddable if and only if for every $R, \epsilon > 0$ there exists a Hilbert space valued map $\beta : X \to \ell^2(X)$ with $\| \beta_x \| = 1$ for each $x \in X$, and satisfying: 1. If $d(x,y) \leq R$, then $\| \beta_x - \beta_y \| \leq \epsilon$. 2. $\lim_{S \to \infty} \sup_{x \in X} \sum_{w \notin B_S(x)} |\beta_x(w)|^2 = 0$. Condition (2) in Definition \[defn:SE\] can be replaced by the following: 1. $\lim_{S \to \infty} \sup_{x,y \in X} \sum_{w \notin \left(B_S(x) \cap B_S(y) \right)} |\beta_x(w)\beta_y(w) | = 0$. Condition (2’) with $x = y$ yields condition (2). For the other direction, assume condition (2) is satisfied, fix any $x,y \in X$, and let $S > 0$ be given. $$\begin{aligned} \sum_{w \notin \left(B_S(x) \cap B_S(y) \right)} |\beta_x(w)\beta_y(w) | &= \sum_{w \notin B_S(x)} |\beta_x(w)\beta_y(w) | + \sum_{w \in \left( B_S(x) \setminus B_S(y) \right)} |\beta_x(w)\beta_y(w) | \\ & \leq \sum_{w \notin B_S(x)} |\beta_x(w)\beta_y(w) | + \sum_{w \notin B_S(y)} |\beta_x(w)\beta_y(w) | \\ & \leq \sum_{w \notin B_S(x)} |\beta_x(w)|^2 + \sum_{w \notin B_S(y)} |\beta_y(w) |^2 \end{aligned}$$ Each of these terms tends to zero as $S \to \infty$, uniformly in $x,y \in X$. Strongly embeddable metric spaces are coarsely embeddable. Suppose $(X,d)$ is strongly embeddable, and fix $R, \epsilon > 0$. Take $\beta : X \to \ell^2(X)$ from the definition. Fix an $S>0$ and take $d(x,y) \geq S$.
$$\begin{aligned} \left| \langle \beta_x , \beta_y \rangle \right| & \leq \sum_{w \in X } \left| \beta_x(w) \beta_y(w) \right| \\ & = \sum_{w \notin B_{S/2}(x) } \left| \beta_x(w) \beta_y(w) \right| + \sum_{w \in B_{S/2}(x) } \left| \beta_x(w) \beta_y(w) \right| \\ & \leq \sum_{w \notin B_{S/2}(x) } \left| \beta_x(w) \right|^2 + \sum_{w \in B_{S/2}(x) } \left| \beta_y(w) \right|^2 \\ & \leq \sum_{w \notin B_{S/2}(x) } \left| \beta_x(w) \right|^2 + \sum_{w \notin B_{S/2}(y) } \left| \beta_y(w) \right|^2\end{aligned}$$ Here the last inequality holds because $d(x,y) \geq S$ forces $B_{S/2}(x) \cap B_{S/2}(y) = \emptyset$, so that $B_{S/2}(x) \subseteq X \setminus B_{S/2}(y)$. Thus, $\lim_{S \to \infty} \sup \left\{ \left| \langle \beta_x , \beta_y \rangle \right| : d(x,y) \geq S \right\} = 0$. The following is a weakened version of an exact family of maps. As an exact family of maps captures relative property A, a strongly embeddable family of maps will capture relative coarse embeddability. \[defn:weakexactfamily\] A family of set maps $\left\{ \phi_i : X \to Y_i \right\}_{i=1}^{n}$ is a *strongly embeddable family of maps* if for every $R, \epsilon > 0$ there exists a map $\xi : X \to \ell^2( {\mathcal{Y}})$, with $\| \xi_x \| = 1$ for all $x \in X$ and satisfying the following: 1. For all $x,y \in X$ if $d(x,y) \leq R$, then $\| \xi_x - \xi_y \| \leq \epsilon$. 2. $$\lim_{S\to\infty} \sup_{x \in X} \sum_{w \notin \cup_i \phi_i ( B_{S}(x) ) } \left| \xi_x(w) \right|^2 = 0.$$ In direct analogy to Theorem \[thm:propA\], we have \[thm:ce\] Suppose $(X,d)$ is a uniformly discrete bounded geometry metric space, and let $\left\{ \phi_i : X \to Y_i \right\}_{i=1}^{n}$ be a strongly embeddable family of maps. If the preimages $\left\{ \phi_i^{-1}( w ) : w \in Y_i , i=1,\ldots, n \right\}$ form an equi-coarsely embeddable family of metric spaces, then $X$ is coarsely embeddable. Pick $R, \epsilon > 0$. By Definition \[defn:weakexactfamily\] there is a map $\alpha : X \to \ell^2({\mathcal{Y}})$ with each $\| \alpha_x \| = 1$, and an $S_X > 0$ such that: 1.
For each $x,y \in X$, if $d(x,y) \leq R$, then $\left| 1 - \langle \alpha_x , \alpha_y \rangle \right| < \frac{\epsilon}{3}$. 2. For each $x,y \in X$, $\sum_{w \notin \cup_i \phi_i ( B_{S_X}(x) ) \cap \phi_i ( B_{S_X}(y))} \left| \alpha_x(w) \alpha_y(w) \right| < \frac{\epsilon}{6}$. Let ${\mathcal{I}}= \left\{ (i, w) : i=1,\ldots, n, \,\, w \in Y_i \right\}$. For $j = (i,w) \in {\mathcal{I}}$, set $X_j = \phi_i^{-1}(w)$. By Definition \[defn:ece\], there is a Hilbert space, ${\mathcal{H}}$, and for each $j = (i,w) \in {\mathcal{I}}$ a $\beta_j : \phi_i^{-1}(w) \to {\mathcal{H}}$ with $\| \beta_j(s) \| = 1$ for all $s \in X_j$, and such that for each $\delta > 0$ there is an $S_Y > 0$ satisfying: 1. For all $s,t \in X_j$, $\left| 1 - \langle \beta_j(s), \beta_j(t) \rangle \right| \leq \frac{\epsilon}{3}$ whenever $d(s,t) \leq 2S_X + R$. 2. For all $s,t \in X_j$ if $d(s,t) \geq S_Y$, then $\langle \beta_j(s) , \beta_j(t) \rangle \leq \frac{\delta}{2}$. Define $\xi : X \to \ell^2( {\mathcal{Y}}, {\mathcal{H}})$ by $$\xi_x( w ) = \alpha_x( w ) \beta_j( \eta( x, w ) ), \,\,\,\, \textrm{ for all } x \in X, w \in Y_i.$$ For any $x \in X$, $\| \xi_x \|^2 = 1$. We verify the two remaining properties of Proposition \[prop:CE\] for $\xi$. If $x,y \in X$ with $d(x,y) \leq R$, then $$\begin{aligned} |1 - \langle \xi_x , \xi_y \rangle | \leq & \left| \sum_{w \in {\mathcal{Y}}} \left( 1 - \langle \beta_j(\eta(x,w)), \beta_j(\eta(y,w)) \rangle \right) \alpha_x(w) \alpha_y(w) \right| \\ & + | 1 - \langle \alpha_x , \alpha_y \rangle |. \end{aligned}$$ The second term is bounded by $\frac{\epsilon}{3}$. Moreover, the summation can be split as the sum over $w \in \cup_i \phi_i( B_{S_X}(x) ) \cap \phi_i( B_{S_X}(y))$ plus the sum over $w \notin \cup_i \phi_i( B_{S_X}(x) ) \cap \phi_i( B_{S_X}(y))$. Consider the sum over $w \in \cup_i \phi_i ( B_{S_X}(x) ) \cap \phi_i ( B_{S_X}(y))$. 
Since $\| \alpha_x \| = \| \alpha_y \| = 1$ this sum is bounded by $$\sup\left\{ \left| 1 - \langle \beta_j(\eta(x,w)), \beta_j(\eta(y, w))\rangle \right| : w \in \phi_i( B_{S_X}(x) ) \cap \phi_i( B_{S_X}(y)), i = 1,\ldots,n \right\}.$$ As every such $w$ satisfies $d( \eta(x,w), \eta(y,w) ) \leq R + 2 S_X$, this sum is bounded by $\frac{\epsilon}{3}$. The sum over $w \notin \cup_i \phi_i ( B_{S_X}(x) ) \cap \phi_i ( B_{S_X}(y))$ is bounded by $$\sum_{w \notin \cup_i \phi_i ( B_{S_X}(x) ) \cap \phi_i ( B_{S_X}(y))} \left| 1 - \langle \beta_j(\eta(x,w)), \beta_j(\eta(y,w)) \rangle \right| \left|\alpha_x(w) \alpha_y(w) \right|.$$ As $\| \beta_j( \eta(x,w) ) \| = \| \beta_j ( \eta(y,w)) \| = 1$, $\left| 1 - \langle \beta_j(\eta(x,w)), \beta_j(\eta(y,w)) \rangle \right| \leq 2$. By the choice of $\alpha$, this sum is bounded by $\frac{\epsilon}{3}$. Thus $| 1 - \langle \xi_x, \xi_y \rangle | \leq \epsilon$. By Definition \[defn:weakexactfamily\], there is $S'_X > 0$ such that $\sum_{w \notin \cup_i \phi_i ( B_{S'_X}(x) ) \cap \phi_i ( B_{S'_X}(y))} \left| \alpha_x(w) \alpha_y(w) \right| < \frac{\delta}{2}$. Let $x,y \in X$ be such that $d(x,y) \geq 2S'_X + S_Y$. Then $$\begin{aligned} | \langle \xi_x , \xi_y \rangle | &= \left| \sum_{w \in {\mathcal{Y}}} \alpha_x(w) \alpha_y(w) \langle \beta_j( \eta(x, w) ), \beta_j( \eta(y, w)) \rangle \right| \\ &\leq \sum_{w \in {\mathcal{Y}}} |\alpha_x(w) \alpha_y(w)| \left| \langle \beta_j( \eta(x, w) ), \beta_j( \eta(y, w)) \rangle \right|.\end{aligned}$$ This sum splits into the sum over $w \in \cup_i \phi_i( B_{S'_X}(x) ) \cap \phi_i( B_{S'_X}(y) )$ plus the sum over $w \notin \cup_i \phi_i( B_{S'_X}(x) ) \cap \phi_i( B_{S'_X}(y) )$.
For $w \in \cup_i \phi_i( B_{S'_X}(x) ) \cap \phi_i( B_{S'_X}(y) )$, the fact that $\|\alpha_x \| = \| \alpha_y \| = 1$ gives that this sum is bounded by $$\sup \left\{ \left| \langle \beta_j(\eta(x, w)), \beta_j(\eta(y, w)) \rangle \right| : w \in \phi_i( B_{S'_X}(x) ) \cap \phi_i( B_{S'_X}(y) ) , i=1, \ldots, n \right\}.$$ For each $i$ and each $w \in \phi_i( B_{S'_X}(x) ) \cap \phi_i( B_{S'_X}(y) )$, we have $d( \eta(x, w), \eta(y, w) ) \geq d(x,y) - d(x, \phi_i^{-1}(w)) - d(y, \phi_i^{-1}(w)) \geq S_Y$. As such, this supremum is bounded by $\frac{\delta}{2}$. Consider the sum over $w \notin \cup_i \phi_i( B_{S'_X}(x) ) \cap \phi_i( B_{S'_X}(y) )$. As $\| \beta_j( \eta(x,w) ) \| = \| \beta_j( \eta(y,w) ) \| = 1$, $|\langle \beta_j( \eta(x,w) ) , \beta_j( \eta(y,w) ) \rangle | \leq 1$. By the choice of $S'_X$, this sum is bounded by $\frac{\delta}{2}$. Thus, $| \langle \xi_x , \xi_y \rangle | < \delta$. A family $(X_j)_{j \in {\mathcal{I}}}$ of metric spaces is *equi-strongly embeddable* if for every $R, \epsilon > 0$ there is a family of Hilbert space valued maps $\xi^j : X_j \to \ell^2( X_j )$ with $\| \xi^j_x \| = 1$ for all $x \in X_j$ and satisfying: 1. For all $j \in {\mathcal{I}}$ and all $x,y \in X_j$, if $d(x,y) \leq R$, then $\| \xi^j_x - \xi^j_y \| \leq \epsilon$. 2. $\lim_{S\to\infty} \sup_{j \in {\mathcal{I}}} \sup_{x \in X_j} \sum_{w \notin B_S(x)} | \xi^j_x(w) |^2 = 0$. \[cor:stronglyembeddable\] Suppose $(X,d)$ is a uniformly discrete bounded geometry metric space, and let $\left\{ \phi_i : X \to Y_i \right\}_{i=1}^{n}$ be a strongly embeddable family of maps. If the preimages $\left\{ \phi_i^{-1}( w ) : w \in Y_i , i=1,\ldots, n \right\}$ form an equi-strongly embeddable family of metric spaces, then $X$ is strongly embeddable. This follows directly from the argument used in Theorem \[thm:ce\] above. Since a strongly embeddable family of maps is a generalization of relative property A, we have the following.
If $G$ has relative property A with respect to a finite family of subgroups $H_1, \ldots, H_n$, and if each $H_i$ is strongly embeddable, then $G$ is strongly embeddable. In [@JOR4] it was shown that if a group $G$ has a normal subgroup $H$ with $G/H$ of property A, then $G$ has relative property A with respect to $H$. As such, the following is a generalization of the previously mentioned Dadarlat and Guentner result on the coarse embeddability of extensions. Suppose $G$ has relative property A with respect to a finite family of subgroups $\{H_i\}_{i = 1}^{n}$. If each $H_i$ is coarsely embeddable, then $G$ is coarsely embeddable. If each $H_i$ is coarsely embeddable, then $\left\{ \pi_i^{-1}( aH_i ) : aH_i \in G/H_i\, , \,\, i=1,\ldots,n \right\}$ is an equi-coarsely embeddable family. As relative property A is equivalent to $\left\{ \pi_i : G \to G/H_i \right\}_{i=1}^{n}$ being an exact family of maps, this follows from Theorem \[thm:ce\]. We next turn to group actions on metric spaces. Suppose the group $G$ acts co-finitely on a uniformly discrete bounded geometry metric space $X$. Let $x_1, x_2, \ldots, x_n \in X$ be representatives of the $G$ orbits. For each $i$, let $H_i$ be the stabilizer of $x_i$, and denote by $\pi_i : G \to G/H_i$ the quotient map. For each $x_i$, there is a natural identification of the orbit $G x_i$ with $G/H_i$, and thus of $X$ with $\sqcup_{i=1}^{n} G/H_i$. Thus if $X$ is a finite space, then each $H_i$ has finite index in $G$. The following then follows from Proposition 4.11 of [@JOR4]. \[finiteIndex\] If $X$ is finite, then $\left\{ \pi_i : G \to G/H_i \right\}$ is an exact family of maps. For groups acting on infinite metric spaces, the following lemma will be useful. \[lem:TechHyp\] There is a sequence $(T_k)$ of positive numbers, tending to infinity, such that for all $w \in X$, if $d_X( x_1, w ) < T_k$ then there is $g_w \in G$ and $i$ with $w = g_w x_i$, such that $d_G(1_G, g_w) < k$.
That is, $B_{T_k}( gx_1) \subset \cup_{i=1}^{n} B_k(g) x_i$ for all $g \in G$. For each $w \in X$, fix a factorization $w = g_w x_{i_w}$, for $x_{i_w}$ one of the orbit representatives, and $g_w \in G$. There may be many possible choices for $g_w$. Take one which minimizes $d_G( 1_G, g_w)$. For a positive integer $m$, set $N_m = \max \left\{ d_G( 1_G, g_w ) \, : \, w \in B_m(x_1) \right\}$. Clearly $N_m \leq N_{m+1}$ for all positive integers $m$. If $N_m \leq S$ for all $m$, then each $w \in X$ could be written as $g_w x_{i_w}$ with $d_G( 1_G, g_w) \leq S$. As $G$ has bounded geometry, there are at most finitely many possibilities for $g_w$, and finitely many orbits. This contradicts the assumption that $X$ has infinite cardinality. Thus $N_m$ tends to infinity. Using $(N_m)$ we now construct $(T_k)$. Given $k$, let $T_k = \max \{ m \, : \, N_m \leq k \}$. If $d_X(w, x_1 ) < T_k$, then $d_G( 1_G, g_w ) \leq N_{T_{k}} \leq k$. The following is a strongly embeddable analogue of Theorem 5.12 of [@JOR4]. \[thm:actionse\] Suppose the group $G$ acts co-finitely on a strongly embeddable uniformly discrete bounded geometry metric space $X$. Let $x_1, x_2, \ldots, x_n \in X$ be representatives of the $G$ orbits. For each $i$, let $H_i$ be the stabilizer of $x_i$, and denote by $\pi_i : G \to G/H_i$ the quotient map. Then $\left\{ \pi_i \, : \, i = 1, \ldots, n \right\}$ is a strongly embeddable family of maps. If $X$ is finite, this follows by Theorem \[finiteIndex\]. Otherwise, we assume $X$ is infinite. Fix $R, \epsilon > 0$. There is a $C > 0$ such that for all $g, g' \in G$, and all $i, j \in \{ 1, \ldots, n\}$, $d_X( g x_i, g' x_j ) \leq C( d_G( g, g' ) + 1 )$. Since $X$ is strongly embeddable, there is a Hilbert space valued map $\beta : X \to \ell^2(X)$ with $\| \beta_x \| = 1$ for each $x \in X$, and satisfying: 1. If $d_X(x,y) \leq C(R+1)$, then $\| \beta_x - \beta_{y} \| \leq \epsilon$. 2.
$\lim_{S \to \infty} \sup_{x \in X} \sum_{w \notin B_S(x)} |\beta_x(w)|^2 = 0$. Define $\xi : G \to \ell^2(X)$ by $\xi_g = \beta_{g x_1}$. It is clear that for all $g \in G$, $\| \xi_g \| = 1$. If $d_G(g,g') \leq R$, then $d_X(g x_1, g' x_1 ) \leq C(R+1)$ so $\| \xi_g - \xi_{g'} \| \leq \epsilon$. For any $g \in G$ and $S > 0$ $$\begin{aligned} \sum_{w \notin \cup_i B_{S}(g) x_i } \left| \xi_g(w) \right|^2 &= \sum_{w \notin \cup_i B_{S}(g) x_i } \left| \beta_{gx_1}(w) \right|^2 \\ &\leq \sum_{w \notin B_{T_S}(gx_1) } \left| \beta_{gx_1}(w) \right|^2,\end{aligned}$$ where the last inequality follows from Lemma \[lem:TechHyp\]. This tends to zero uniformly in $g$, as $S \to \infty$. Let $H$ be a normal subgroup of $G$. If $G/H$ is strongly embeddable in the quotient metric, then $\left\{ \pi : G \to G/H \right\}$ is a strongly embeddable family of maps. This follows from Theorem \[thm:actionse\] by considering the $G$ action on $G/H$. If $H$ is strongly embeddable as well, the collection of cosets $\{ gH \}$ forms an equi-strongly embeddable family of metric spaces. Appealing to Corollary \[cor:stronglyembeddable\] we obtain the following. The collection of strongly embeddable groups is closed under extensions of groups with proper length functions.
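The closure under extensions can be traced back through the preceding statements; the following is a sketch of the chain of implications (an illustrative outline, not taken verbatim from the argument above), assuming $H \trianglelefteq G$ with $H$ and $G/H$ strongly embeddable:

```latex
% Sketch, for a short exact sequence 1 -> H -> G -> G/H -> 1:
%
% (1) G acts transitively, hence co-finitely, on G/H with stabilizer H,
%     and G/H is strongly embeddable in the quotient metric; by Theorem
%     [thm:actionse], {\pi : G -> G/H} is a strongly embeddable family
%     consisting of a single map.
%
% (2) Assuming the metric comes from a proper length function, left
%     translation by g is an isometry, so each preimage
%     \pi^{-1}(gH) = gH is isometric to H; since H is strongly
%     embeddable, the cosets {gH} form an equi-strongly embeddable
%     family (one family of maps H -> \ell^2(H) serves all cosets).
%
% (3) Corollary [cor:stronglyembeddable] applied to this family then
%     yields that G is strongly embeddable.
```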
--- abstract: 'We consider a class of wave-Schrödinger systems in three dimensions with a Zakharov-type coupling. This class of systems is indexed by a parameter $\gamma$ which measures the strength of the null form in the nonlinearity of the wave equation. The case $\g = 1$ corresponds to the well-known Zakharov system, while the case $\g=-1$ corresponds to the Yukawa system. Here we show that sufficiently smooth and localized Cauchy data lead to pointwise decaying global solutions which scatter, for any $\g \in (0,1]$.' address: - Princeton University - Princeton University - Princeton University - Princeton University author: - Thomas Beck - Fabio Pusateri - Phil Sosoe - Percy Wong title: On global solutions of a Zakharov type system --- [^1] Introduction ============ Statement of the problem and main result ---------------------------------------- We will consider the following parametrized family of systems in three spatial dimensions: $$\label{ZSg0} \left\{ \begin{array}{l} i \partial_t u + \Delta u = u n \\ \\ \Box n = {\Lambda}^{1+\g} {|u|}^2 \end{array} \right.$$ where $\Box := - \partial_t^2 + \Delta$, $\Lambda := |\nabla| = \sqrt{-\Delta}$, and $-1 \leq \g \leq 1$. Here we have $$\begin{aligned} u: (x,t) \in \R^3 \times \R \to \C, \quad n: (x,t) \in \R^3 \times \R \to \R \, . \end{aligned}$$ The case $\g = 1$ corresponds to the well-known Zakharov system, modeling propagation of Langmuir waves in an ionized plasma [@Zakharov]. The case $\g = -1$ corresponds to the (massless) Yukawa system, which is a model for the interaction between a meson and a nucleon. The system is a special case of the models introduced in [@SZ2] (see Section 3 there) $$\label{ZS0} \left\{ \begin{array}{l} i \partial_t u + L_1 u = u n \\ \\ L_2 n = L_3 {|u|}^2 \, , \end{array} \right.$$ where $L_1,L_2$ and $L_3$ are constant coefficient differential operators. This class of systems is referred to as Davey-Stewartson (DS) systems in the work of Zakharov-Schulman [@SZ2 Section 3].
In the recent literature the name DS is associated with a specific two-dimensional system of the form \[ZS0\], modeling the evolution of weakly nonlinear water waves travelling predominantly in one direction, in which the wave amplitude is modulated slowly in two horizontal directions. See, for example, [@GhS1; @GhS2]. Here we will consider the system in the range $0<\g\leq 1$ and prove the following \[maintheo\] Let $\g \in (0,1]$ be given. Then there exist $N = N(\g) \gg 1$, and a small constant $\e_0 = \e_0(\g) > 0$, such that for any initial data $(u_0,n_0,n_1) = (u,n,\partial_t n)(t=0)$ satisfying[^2] $$\begin{aligned} \label{data} & {\| u_0 \|}_{H^{N+1}(\R^3)} + {\| {\langle x \rangle}^2 u_0 \|}_{L^2(\R^3)} \leq \e_0 \, , \\ &\label{data2} {\left\| \left( \Lambda n_0, n_1 \right) \right\|}_{H^{N-1}(\R^3)} + {\left\| \langle \Lambda \rangle \left( \Lambda n_0, n_1 \right) \right\|}_{\dot{B}^0_{1,1}(\R^3)} + {\left\| {\langle x \rangle}^2 \left( n_0,n_1 \right) \right\|}_{H^1(\R^3)} \leq \e_0 \, ,\end{aligned}$$ there exists a unique global-in-time solution $(u,n)(t)$ to the associated Cauchy problem. Moreover, there exists $0 < \a < 1/6$[^3] such that $$\begin{aligned} {\| u(t) \|}_{L^{\infty}(\R^3)} \lesssim \e_0 {(1+t)}^{-1-\a} \quad , \quad {\| n(t) \|}_{L^\infty(\R^3)} \lesssim \e_0 {(1+t)}^{-1} \, .\end{aligned}$$ As a consequence, the solution $(u,n)(t)$ approaches a linear solution as $t \rightarrow \infty$. The proof shows that the assumptions on the initial data are somewhat stronger than necessary. We have chosen to display these conditions for simplicity. In Section 2, we will give the strategy of the proof, and describe some more of the properties satisfied by the solution $(u,n)(t)$. Motivation and previous results ------------------------------- ### The Zakharov system The Zakharov system (the case $\g=1$), $$\label{Z} \tag{Z} \left\{ \begin{array}{l} i\partial_t u + \Delta u = n u \\ \\ \Box n = - \Delta |u|^2, \end{array} \right.$$ was derived by V.
Zakharov in [@Zakharov] to model Langmuir waves in plasma, and has since then been under intensive investigation by physicists and mathematicians (see [@HamPlasma; @SulemBook] for some background). As we remarked above, it is a particular example of Zakharov-Schulman systems, introduced in [@SZ2] and studied in [@HamPlasma], [@KPV1], [@KW]. From the mathematical side, there has been considerable work on the local and global well-posedness of solutions with rough data through the works of Kenig, Ponce and Vega [@KPV1], Bourgain and Colliander [@BC], Ginibre, Tsutsumi and Velo [@GTV], Bejenaru, Herr, Holmer and Tataru [@BHHT] and Bejenaru and Herr [@BH]. In particular, global well-posedness for small data in the energy space was obtained in [@BC] by combining local well-posedness and conservation laws. (See the references in the cited works for previous well-posedness results). Many works have also dealt with singular limits related to the Zakharov system and with the rigorous derivation of the system in various limiting regimes from other equations and vice versa; see for example [@Texier], [@MN] and references therein. Concerning the scattering question, most of the previous work has been carried out for the final value problem, i.e. data at $t = \infty$, as in the papers of Ozawa and Tsutsumi [@OT], Ginibre and Velo [@GV-Zak], and Shimomura [@ShimoZak]. Similar work has also been dedicated to other coupled systems of Schrödinger and wave equations, see for example the works of Ginibre and Velo [@GV-WS1; @GV-WS2; @GV-WS3], Shimomura [@ShimoWS; @ShimoMS], and references therein. The first work that deals with scattering for the Cauchy problem of the Zakharov system (or any other wave-Schrödinger system in $3$ dimensions) is by Guo and Nakanishi [@GN], where they considered small radial solutions in the energy space. In [@HPS], the second author, Hani and Shatah proved pointwise decay and scattering for sufficiently smooth and localized solutions of the Zakharov system.
Global dynamics below the ground state, under radial symmetry, have been analyzed in [@GNW2]. The results in [@GN] and [@HPS] were then strengthened in [@GLNW], where the authors used a generalized Strichartz estimate to obtain scattering for data in the energy space with additional angular regularity. ### Parameter range; The Yukawa System The restriction $0<\gamma \leq 1$ in Theorem \[maintheo\] is due to our methods, but it is conceivable that similar techniques can apply for some $\gamma \le 0$[^4]. We note that in the case $\gamma=-1$ (the Yukawa wave-Schrödinger system), the expectation is to have modified scattering, that is, the behavior of the solutions for large $t$ does not coincide with that of linear solutions. This was proven to be the case for the final value problem in [@GV-WS1; @GV-WS2; @GV-WS3], [@ShimoWS] and [@GV-WSrev]. Because of this expectation, we decide here to pursue a proof of global existence and scattering for the system based on the use of weighted spaces, rather than the approach of [@GN] based on radial Strichartz estimates[^5]. Indeed, our approach allows us to extract more precise information about the pointwise time decay of solutions, which is an essential part of the analysis when dealing with nonlinear asymptotic behavior[^6]. In the future we hope to refine our techniques, and possibly combine them with some of the recent advances in the area, see [@IP; @IP2] for example, to further push the admissible range of $\g$ towards $-1$. ### Techniques We briefly discuss the technical features of our argument. For a more detailed discussion, see Section \[secstrategy\]. The strategy follows the general scheme of much recent work on small global solutions of dispersive systems, see for example [@GMS1; @GMS2; @GNT1; @IP], and [@HPS] which is more closely related to the problem we are considering.
The vector fields method of Klainerman [@K0] cannot be applied to deal with the system, because of the lack of space-time transformations leaving the combined Schrödinger and wave equations invariant. On the other hand, we use the observation, which appeared in [@HPS], that the operator $\Delta$ on the right-hand side of the second equation plays the role of a Klainerman null form [@K1], allowing us to integrate by parts to gain decay[^7]; see Section \[secpre\] and the resonance identity there. We show how this type of argument can still be used when $\g>0$ is arbitrarily small, if one combines it with a careful exploitation of the improved low frequency behavior of solutions of the linear wave equation. An important role is also played by the use of the pseudo-scaling identity which allows us to integrate by parts and estimate weighted norms of the Schrödinger component. Another key point is the spatial profile decomposition, also used in [@HPS], to obtain decay for the wave component. Preliminary setup {#secpre} ================= Writing $w_{\pm}=\Lambda^{-1} (i\partial_t\pm \Lambda) n $, the system becomes $$\label{Z_0} \tag{Z$_0$} \left\{ \begin{array}{l} i\partial_t u +\Delta u = \frac{1}{2}(w_+ u - w_- u) \\ \\ i\partial_t w_\pm \mp \Lambda w_\pm = \Lambda^\g {|u|}^2 \, . \end{array} \right.$$ Let $f = e^{-it\Delta} u$ and $g_\pm=e^{\pm it\Lambda} w_\pm$ denote the profiles, and let $\hat f = \FF f$ and $\hat g =\FF g$ denote their Fourier transforms.
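The reduction to the first-order system (Z$_0$) rests on the standard factorization of the wave operator; spelled out for convenience (a routine check, not part of the original text):

```latex
% Since \Lambda^2 = -\Delta, the wave operator factors as
$$(i\partial_t \mp \Lambda)(i\partial_t \pm \Lambda)
   = -\partial_t^2 - \Lambda^2
   = -\partial_t^2 + \Delta = \Box \, ,$$
% so applying (i\partial_t \mp \Lambda) to
% w_\pm = \Lambda^{-1}(i\partial_t \pm \Lambda) n gives
$$(i\partial_t \mp \Lambda)\, w_\pm
   = \Lambda^{-1} \Box n
   = \Lambda^{-1} \Lambda^{1+\g} {|u|}^2
   = \Lambda^{\g} {|u|}^2 \, ,$$
% while n is recovered from w_\pm via n = \tfrac{1}{2}(w_+ - w_-),
% which turns the Schrodinger equation into the first line of (Z_0).
```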
Duhamel’s formula in Fourier space then reads \[eq:profile\] $$\begin{aligned} \label{inteqf} &\hat f(\xi,t) = \hat f(\xi,0) + \sum_{\pm} \mp i\int_0^t \int_{\R^3} e^{is \phi_{\pm}(\xi,\eta)}\hat f(\xi-\eta,s) \hat g_\pm(\eta,s) \mathrm{d}\eta \mathrm{d}s \\ \label{inteqg} &\hat g_\pm(\xi,t) = \hat g_\pm (\xi,0) - i\int_0^t \int_{\R^3} {|\xi|}^\g e^{is \psi_{\pm}(\xi,\eta)} \hat f(\xi-\eta,s) \overline{\hat{f}} (\eta,s) \mathrm{d}\eta \mathrm{d}s \, ,\end{aligned}$$ where the phases are $$\begin{aligned} \begin{split} \label{phi} & \phi_\pm(\xi, \eta) = {|\xi|}^2 - {|\xi-\eta|}^2 \pm |\eta| = 2\xi \cdot \eta -|\eta|^2 \pm |\eta| \end{split} \\[.5 em] \label{psi} & \psi_{\pm}(\xi,\eta) = \mp |\xi| - {|\xi-\eta|}^2 + {|\eta|}^2 = \mp |\xi| - {|\xi|}^2 + 2 \xi \cdot \eta \, .\end{aligned}$$ We define the functions $F_{\pm}$ and $G_{\pm}$ by, $$\begin{aligned} \label{F} F_\pm (t) & \overset{def}{=} \FF^{-1} \int_0^t \int_{\R^3} e^{is \phi_{\pm}(\xi,\eta)}\hat f(\xi-\eta,s) \hat g_\pm(\eta,s) \mathrm{d}\eta \mathrm{d}s \, , \\ \label{G} G_\pm (t) & \overset{def}{=} \FF^{-1} \int_0^t \int_{\R^3} {|\xi|}^\g e^{is \psi_{\pm}(\xi,\eta)}\hat f(\xi-\eta,s) \overline{\hat{f}} (\eta,s) \mathrm{d}\eta \mathrm{d}s \, .\end{aligned}$$ Norms and a priori bounds {#secnorms} ------------------------- We denote by $\dot{B}^s_{p, q}$ the Besov space defined by the norm $$\label{Besov} \|u\|_{\dot{B}^s_{p,q}}:=\left\| 2^{sk}\|P_k u\|_{L_x^p(\R^3)}\right\|_{l_k^q(\Z)}$$ where $P_k$ denotes the Littlewood-Paley projection onto frequencies $|\xi|\sim 2^k$.
Given $\g >0$, we choose $\delta,\alpha>0$ and $N\gg1$ such that[^8] $$\begin{aligned} \nonumber & 5N^{-1} \leq \d \quad , \quad \d \ll 1 \, , \\ \label{eqn: alpha gamma} & 3\delta < \alpha \leq \gamma/2 - 10\delta \quad , \quad \alpha \leq 1/6 - 10\delta \, .\end{aligned}$$ The proof of Theorem \[maintheo\] follows a bootstrap argument in the Banach space $X$ defined by the norm[^9] [^10]: $$\begin{aligned} \label{norm} \begin{split} {\|(u,w_\pm)\|}_{X} \overset{def}{=} \sup_t \left( t^{-\d} {\| f(t) \|}_{H^{N+1}} \right. & + t^{-\d}{\| xf (t) \|}_{L^2} + t^{-1+2\a+\d} {\| {|x|}^2 f (t) \|}_{L^2} \\ & + \, {\| g_\pm(t) \|}_{H^{N}} + t \left. {\| e^{\mp it\Lambda} g_\pm (t) \|}_{\dot{B}^0_{\infty,1} } \right) \, . \end{split}\end{aligned}$$ We choose $\e_1 = \e_0^{2/3}$ and assume a priori bounds on the quantities appearing in the $\|\cdot\|_{X}$ norm: $$\label{apriorif} \quad {\| f(t) \|}_{H^{N+1}} \leq \e_1 t^\d \, , \quad {\| x f(t) \|}_{L^2} \leq \e_1 t^\d \, , \quad{\| {| x |}^2 f(t) \|}_{L^2} \leq \e_1 t^{1 - 2\a - \d} \, ,$$ and $$\begin{aligned} \label{aprioriG1} & {\| g_\pm(t) \|}_{H^N} \leq \e_1 \, , \quad {\| e^{\mp it\Lambda} g_\pm(t) \|}_{ \dot{B}^0_{\infty,1} } \leq \e_1 t^{-1} \, .\end{aligned}$$ As an intermediate step, we include the additional a priori bounds for $G_{\pm}$: $$\begin{aligned} \label{aprioriG2} & {\| e^{\mp it\Lambda} x \Lambda G_\pm(t) \|}_{ L^{4/(1+\g)} } \leq \e_1 t^{- 1/4 + 3\g/4 -2\a -3\d} \, , \\ \label{aprioriG3} & {\| e^{\mp it\Lambda} \Lambda^{-1} G_\pm(t) \|}_{ L^3 } \leq \e_1 t^{-2\a - 3\d} \, , \quad {\| \Lambda^{1/2} x G_\pm(t) \|}_{L^2} \leq \e_1 \, . \end{aligned}$$ In contrast to [@HPS], we do not place an a priori bound on $x^2G_{\pm}(t)$. Instead, we make greater use of the linear dispersive estimates for the wave group, and place an a priori bound on $e^{\mp it\Lambda} x \Lambda G_{\pm}(t)$ and $e^{\mp it\Lambda} \Lambda^{-1} G_{\pm}(t)$ in suitable $L^p$ spaces. 
This greatly simplifies many of the estimates in [@HPS], while yielding exactly the same conclusions for $\g \geq 1/3$. To obtain our result we will then show $$\begin{aligned} {\|(F_\pm, G_\pm)\|}_{X} \lesssim \e_1^2 \, ,\end{aligned}$$ which, together with the assumptions on the initial data (see also below), will give $$\begin{aligned} {\|(u, w_\pm)\|}_{X} \lesssim \e_0 + {\|(F_\pm, G_\pm)\|}_{X} \lesssim \e_0 + \e_1^2 \lesssim \e_0 + \e_0^{4/3} \, ,\end{aligned}$$ and guarantee a global-in-time solution belonging to $X$, provided $\e_0$ is chosen small enough. From the linear estimates for the Schrödinger group $$\begin{aligned} \label{disp} & {\|e^{it\Delta} f\|}_{L^6} \lesssim \frac{1}{t} {\| x f \|}_{L^2} \quad , \quad {\| e^{it\Delta} f \|}_{L^\infty} \lesssim \frac{1}{t^{\frac 32}} {\| x f \|}^{\frac 12}_{L^2} {\| x^2 f \|}^{\frac 12}_{L^2} \, ,\end{aligned}$$ we deduce the $X$-norm bounds $$\begin{aligned} \label{SL^6} {\| e^{it\Delta} f \|}_{L^6} & \lesssim \frac{1}{ { t }^{1-\d}} {\| u \|}_X \, , \\ \label{decayu} {\| e^{it\Delta} f \|}_{L^\infty} & \lesssim \frac{1}{ { t }^{1+\a}} {\| u \|}_X \, .\end{aligned}$$ Moreover, by the linear dispersive estimate for the wave equation $$\label{linearwave0} {\| e^{it\Lambda} h \|}_{\dot{B}^0_{p,r}} \lesssim \frac{1}{t^{1-2/p}} {\| h \|}_{ {\dot{B}^{2(1-2/p)}_{{p^\prime},r}} } \quad , \quad p \geq 2 \, ,$$ (cf.
for example [@SS]), and the fact that $g_\pm(0) = \Lambda^{-1} i n_1 \pm n_0$, we see that the assumption on the initial data implies $$\label{decaywave0} {\| e^{\mp i t\Lambda} g_{\pm}(0) \|}_{ \dot{B}^0_{\infty,1} } \lesssim \frac{\e_0}{t} \, .$$ Finally, we note that taking $r = 2$ above, together with embeddings between Besov and Sobolev spaces, gives $$\label{linearwave} {\| e^{it\Lambda} h \|}_{L^p} \lesssim \frac{1}{ t^{1-2/p} } {\left\| \Lambda^{2(1-2/p)} h \right\|}_{ L^{p^\prime} } \, .$$ Strategy of the proof {#secstrategy} --------------------- From the definition of the $X$-norm, we see that in order to close our argument we need to obtain estimates on high Sobolev norms of $(u,w_\pm)$ and weighted norms of $f$, and pointwise bounds for $w_\pm$. Bounds on high Sobolev norms follow via standard energy estimates. To show decay for $w_\pm$ we use the weighted $L^2$ bounds on $f$. To eventually bound weighted norms of $f$ we use the intermediate estimates on $G_\pm$. A key aspect is that the system has null resonances, which we can use in combination with the space-time resonance method to obtain weighted and pointwise bounds. ### *Estimates for $G_\pm$* We notice that the phase $\psi_{\pm}(\xi,\eta)$ satisfies $$\label{symg0} |\xi|=\frac{1}{2}\frac{\xi}{|\xi|} \cdot \nabla_\eta \psi_\pm(\xi,\eta) \, .$$ This means that the factor of $|\xi|^{\gamma}$ in the expression for $G_\pm$ gives the equation a resonant structure, although we see that this becomes weaker as $\gamma$ decreases towards $0$. We can thus use this identity to integrate by parts in $\eta$ and gain decay in $s$. In particular, using this together with Sobolev embedding and the a priori bounds on $f$, we can obtain a uniform-in-time estimate for $x\Lambda^{1/2}G_{\pm}$ in $L^2$. Carefully exploiting the linear dispersive estimates for the wave group gives the estimates for $e^{\mp it\Lambda}x\Lambda G_{\pm}$ and $e^{\mp it\Lambda}\Lambda^{-1} G_{\pm}$. These estimates are presented in Section \[secG\]. ### $L^\infty$ bounds These are obtained similarly to [@HPS].
The pointwise decay of $e^{it\Delta} f$ is a direct consequence of the weighted estimates for $xf$ and $|x|^2f$. To obtain a $t^{-1}$ pointwise decay for $e^{\mp it\Lambda} G_\pm$, we use the improved small frequency behavior of solutions of the linear wave equation, the resonance identity above, and an argument similar to [@HPS]. Some of the details for the estimate of $e^{\mp it\Lambda}G_\pm$ are presented in Section \[secdecayw\]. ### Weighted $L^2$ estimates for $F_\pm$ To obtain the desired weighted estimates for $F_{\pm}$, we use the (pseudo-scaling) identity $$\label{dxiphi00} \nabla_\xi \phi_\pm = 2\eta = - 2 \frac{\eta}{|\eta|} \left( \frac{\eta}{|\eta|}\cdot \nabla_\eta \phi_\pm \right) + 2\frac{\phi_\pm}{|\eta|}\frac{\eta}{|\eta|} \, .$$ More specifically, calculating $xF_{\pm}$ in Fourier space involves applying a derivative in $\xi$ to $\hat{F}_{\pm}$. When this derivative is applied to the factor of $e^{is\phi_{\pm}(\xi,\eta)}$ we obtain a factor of $\nabla_\xi \phi_\pm (\xi,\eta)$. We can then use this identity to express $\nabla_\xi \phi_\pm (\xi,\eta)$ in terms of $\nabla_\eta \phi_\pm(\xi,\eta)$ and $\phi_{\pm}(\xi,\eta)$. This allows us to integrate by parts once in frequency and once in time, and the estimate for $xF_{\pm}$ in $L^2$ then follows from the a priori estimates for $f$ and $G_{\pm}$. The equality above is also used in the estimate of $x^2F_{\pm}$. Applying two derivatives in $\xi$ to $\hat{F}_{\pm}(\xi,s)$, we find that one term contains a factor of $(\nabla_\xi \phi_\pm(\xi,\eta))^2$. If, as in [@HPS], we use it to integrate by parts twice, we end up with a term containing a factor of $x^2G_{\pm}$. However, we no longer have an a priori estimate on $x^2G_{\pm}$, so we do not proceed in this way. Instead, for this term in $x^2F_{\pm}$, we only integrate by parts in $\eta$ once, and make use of the $L^p$ estimates on $e^{\mp it\Lambda}x\Lambda G_{\pm}(t)$ and $e^{\mp it\Lambda}\Lambda^{-1}G_{\pm}(t)$. We carry out these estimates in Section 4.
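Both null-structure identities invoked in this subsection follow from the explicit forms of the phases; the computations, recorded here for the reader's convenience (a routine verification, not part of the original text), are:

```latex
% From \psi_\pm(\xi,\eta) = \mp|\xi| - |\xi|^2 + 2\xi\cdot\eta :
$$\nabla_\eta \psi_\pm(\xi,\eta) = 2\xi
   \qquad\Longrightarrow\qquad
   \frac{1}{2}\,\frac{\xi}{|\xi|}\cdot\nabla_\eta \psi_\pm(\xi,\eta)
   = |\xi| \, ,$$
% which is the resonance identity used for G_\pm.
% From \phi_\pm(\xi,\eta) = 2\xi\cdot\eta - |\eta|^2 \pm |\eta| :
$$\frac{\eta}{|\eta|}\cdot\nabla_\eta\phi_\pm
   = \frac{2\,\xi\cdot\eta}{|\eta|} - 2|\eta| \pm 1 \, ,
   \qquad
   \frac{\phi_\pm}{|\eta|}
   = \frac{2\,\xi\cdot\eta}{|\eta|} - |\eta| \pm 1 \, ,$$
% so the difference of these two scalars equals -|\eta|; this is exactly
% what lets one trade \nabla_\xi\phi_\pm (a multiple of \eta) for a
% \nabla_\eta\phi_\pm term (integration by parts in \eta) plus a
% \phi_\pm term (integration by parts in s).
```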
Energy Estimates and high frequency cutoff {#secenergy} ------------------------------------------ We have the following: \[proenergy\] Let $F_\pm$ and $G_\pm$ be given by and respectively. Then, for ${\| (u, w_\pm) \|}_X \leq \e_1$, we have $${\| G_\pm (t) \|}_{H^{N}} + t^{-\d} {\| F_\pm (t) \|}_{H^{N+1}} \lesssim \e_1^2 \, .$$ We do not provide details on how to obtain the above bounds, since they are fairly easy to show and can be proved as in [@HPS]. We can use the a priori bounds on high Sobolev norms to reduce all of our estimates to frequencies smaller than $s^{\delta_N}$, where $\delta_N \ll 1$ is chosen small depending on $N$. To see this, let us assume in what follows that at least one of the frequencies $\eta$ or $\xi-\eta$ in the expressions for $F_\pm$ and $G_\pm$, see and , has size larger than $s^{2/(N-2)}$. Observe that for all $k \geq 0$ one has ${\| P_{\geq k} v (s) \|}_{L^2} \lesssim 2^{-k l } {\| v(s) \|}_{H^l}$, and therefore, for frequencies $2^k \gtrsim s^{ 2/(N-2) } \gtrsim 1$, we have from the a priori assumptions, $$\begin{aligned} \label{P_k} \begin{split} {\| P_{\geq k} u (s) \|}_{H^3} & \lesssim 2^{-k (N-2) } {\| u(s) \|}_{H^{N+1}} \lesssim \e_1 s^{-2 + \d} \\ {\| P_{\geq k} w_\pm (s) \|}_{H^2} & \lesssim 2^{-k (N-2) } {\| w_\pm (s) \|}_{H^{N}} \lesssim \e_1 s^{-2} \, . \end{split}\end{aligned}$$ These estimates are already sufficient to bound all norms that do not involve weights. To establish bounds on $L^p$ norms involving weights, we are going to apply $\nabla_\xi$ to the bilinear terms $\widehat{F}_\pm$ and $\widehat{G}_\pm$. We start by noticing that the action of weights on Littlewood-Paley projections (on high frequencies) is harmless, and only gives terms that are easier to treat than any other term that we will have to deal with.
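The numerology behind the high-frequency gain in the last display is just the dyadic tail bound; spelled out under the assumption $2^k \gtrsim s^{2/(N-2)}$ stated above:

```latex
% Dyadic tail estimate, with l derivatives to spare:
$$\| P_{\geq k} v(s) \|_{L^2} \;\lesssim\; 2^{-kl}\, \| v(s) \|_{H^l} \, ,
   \qquad
   2^{-k(N-2)} \;\lesssim\; \left( s^{2/(N-2)} \right)^{-(N-2)}
   \;=\; s^{-2} \, .$$
% Here l = N - 2 is the number of derivatives to spare between H^{N+1}
% and H^3 (or between H^N and H^2); combined with the a priori bound
% \| u(s) \|_{H^{N+1}} <= \e_1 s^\d, this gives
% \| P_{\geq k} u(s) \|_{H^3} \lesssim \e_1 s^{-2+\d}.
```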
We now briefly discuss how to estimate all the other contributions that result from applying derivatives to $\widehat{F}_\pm$ and $\widehat{G}_\pm$ in the case when $\max\{|\eta|,|\xi-\eta|\} \gtrsim s^{2/(N-2)}$. ### High frequency contributions in $F_\pm$ {#high-frequency-contributions-in-f_pm .unnumbered} Looking at , we see that applying $\nabla_\xi$ twice to $\widehat{F}_\pm$ gives three types of contributions. The first are those where $\nabla_\xi^2$ hits the phase $e^{is\phi_\pm}$: these terms will contain powers of $s$ but will not involve weights on the inputs $f$ and $g$. Since at least one frequency has size larger than $s^{2/(N-2)}$, all these terms can be estimated directly using . The second type of term is the one where $\nabla_\xi^2$ hits the input $\widehat{f}(\xi-\eta)$. If this happens, the same estimates that we are going to perform below in section \[secF\] will work, regardless of the size of frequencies. The third type of contribution is where one $\nabla_\xi$ hits the phase $e^{is\phi_\pm}$, and the other hits the input $\widehat{f}(\xi-\eta)$. In this case, if $\eta$ is the largest frequency, the term can be estimated directly using the second bound in . If instead $\xi-\eta$ is the largest frequency, one can write $\nabla_\xi \widehat{f}(\xi-\eta) = -\nabla_\eta \widehat{f}(\xi-\eta)$ and integrate by parts in $\eta$. This will generate one term with losses in powers of $s$ similar to the first type of term discussed above, plus a term where the inputs are $\widehat{f}(\xi-\eta)$ and $\nabla_\eta \widehat{g}_\pm (\eta)$. Since we are assuming that $|\xi-\eta| \gtrsim s^{ 2/(N-2)}$, we can use Sobolev’s embedding, the second a priori assumption in , and the first inequality in , to estimate this term directly and obtain the desired bound without resorting to further manipulations. 
### High frequency contributions in $G_\pm$ {#high-frequency-contributions-in-g_pm .unnumbered} We now briefly describe how to obtain the bound and the second bound in for $G_\pm$, in the case of high frequencies. Since the inputs in are symmetric, up to complex conjugation (which leaves our norms invariant), we can assume that $|\eta| \gtrsim |\xi-\eta|$ and $|\eta| \gtrsim s^{ 2/(N-2)}$. When applying derivatives to $\widehat{G}_\pm$ we then obtain two types of contributions, similar to the ones discussed in the previous paragraph. The first contribution is the one where $\nabla_\xi$ hits the phase $e^{is\psi_\pm}$. This causes a loss of a power of $s$, which can be overcome directly using the decay given by . The second type of term will contain $\nabla_\xi \widehat{f}(\xi-\eta)$ as an input. Since we have already reduced ourselves to the case when $\eta$ is the largest frequency, we can again use , and the a priori bound on $xf$, to get the desired estimates in a straightforward fashion. The above discussion shows that in estimating weighted norms of the bilinear terms $F_\pm$ and $G_\pm$ in and , we can always reduce our analysis to frequencies $|\xi-\eta| , |\eta| \lesssim s^{2/(N-2)}$, for otherwise all the desired bounds can be shown to hold true without too much effort. We therefore agree on the following: \[convention1\] In the rest of the paper, we assume that all frequencies, $\xi-\eta$ and $\eta$, appearing in the estimates of the bilinear terms and , have size bounded above by $s^{\delta_N}$, where $\delta_N := \frac{2}{N-2}$ and the integer $N \gg 1$ is determined in the course of our proof by several upper bounds on $\d_N$. In particular, expressions such as $|\xi|$ or $\nabla_\xi \psi_\pm (\xi,\eta)$ will systematically be replaced by a factor of $s^{\delta_N}$. We will also adopt the following additional notational convention: To make the notation lighter, we will often drop the $\pm$ indices, and omit the dependence on the time $t$. 
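As an immediate consequence of this convention, the output frequency is controlled as well, and nonnegative-order symbols can be traded for powers of $s^{\delta_N}$ (a remark we will use implicitly in what follows; the precise constants are irrelevant): $$|\xi| \leq |\xi-\eta| + |\eta| \lesssim s^{\delta_N} \, , \qquad |a(\xi,\eta)| \lesssim s^{k \delta_N} \quad \text{for any symbol } a \text{ homogeneous of degree } k \geq 0 \, .$$ This is the source of the harmless factors $s^{\delta_N}$, $s^{2\delta_N}$ and $s^{\gamma\delta_N}$ appearing in the estimates below.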
Moreover, in the estimates of the bilinear terms $F$ and $G$ in and , we will often only consider the contribution of the integrals from $1$ to $\infty$. All of the contributions coming from integrating between $0$ and $1$ are bounded in a straightforward fashion by Sobolev’s embedding, and our control of high Sobolev norms of the solution $(u,w)$. Estimates for $G$ {#secG} ================= We recall that we have the following a priori assumptions on $f=e^{it\Delta}u$: $$\label{eqn: f} {\| xf \|}_{L^2}\leq \e_1 t^\delta \quad , \quad {\|x^2f\|}_{L^2} \leq \e_1 t^{1-2\alpha-\delta} \quad , \quad {\|f\|}_{H^N} \leq \e_1 t^{\delta} \, .$$ As a consequence, the following dispersive bounds for $u$ hold: $$\label{eqn: u} {\|u \|}_{L^{\infty}} \lesssim \e_1 t^{-1-\alpha} \quad, \quad {\|u\|}_{L^6} \lesssim \e_1 t^{-1+\delta} \, .$$ Weighted estimates for $G$ {#secGL2} -------------------------- In this section we are going to prove the following: Let $G$ be the bilinear term defined in : $$\label{eqn: G} G = \FF^{-1} \int_1^t \int_{\R^3}|\xi|^{\gamma} e^{is \psi(\xi,\eta)}\hat f(\xi-\eta,s) \overline{\hat{f}} (\eta,s) \,\mathrm{d}\eta \mathrm{d}s \, .$$ Under the a priori assumptions and , we have $$\begin{aligned} & {\| \Lambda^{1/2}x G \|}_{L^2} \lesssim \e_1^2 \, , \\ & {\| e^{it\Lambda}x\Lambda G \|}_{L^{4/(1+\gamma)}} \lesssim t^{-1/4 + 3\gamma/4 - 2\alpha - 3\delta} \e_1^2 \, , \\ & {\| e^{it\Lambda}\Lambda^{-1} G \|}_{L^3} \lesssim t^{- 2\alpha - 3\delta} \e_1^2 \, .\end{aligned}$$ The proof of the above proposition is split into Lemmas \[prop: G1\], \[prop: G2\], and \[prop: G3\] below. We recall the following assumptions on the relative sizes of $\gamma$ and $\alpha$ from , $$\label{eqn: alpha} 3\delta < \alpha \leq \gamma/2 - 10\delta \quad , \quad \alpha \leq 1/6 - 10\delta.$$ \[prop: G1\] Let $G$ be the bilinear term defined in and let $\alpha$ satisfy . 
Then, $${\| \Lambda^{1/2}x G \|}_{L^2} \lesssim \e_1^2 \, .$$ To prove this, it is crucial to notice that low frequencies play the role of a special null resonant structure in the nonlinear term $G$, see , $$\label{symg} |\xi|=\frac{1}{2}\frac{\xi}{|\xi|} \cdot \nabla_\eta \psi \, .$$ Applying $|\xi|^{1/2}\nabla_\xi$ to $\hat{G}$ gives the terms: $$\begin{aligned} \label{xi^1/2dxig1} & \int_1^t \int_{\R^3} e^{is \psi (\xi,\eta)} |\xi|^{1/2+\gamma} \nabla_\xi \hat{f}(\xi-\eta,s) \overline{\hat{f}} (\eta,s) \mathrm{d}\eta \mathrm{d}s \\ \label{xi^1/2dxig2} & \int_1^t \int_{\R^3} s \nabla_\xi \psi e^{i s \psi (\xi,\eta)} |\xi|^{1/2+\gamma} \hat{f}(\xi-\eta,s) \overline{\hat{f}} (\eta,s) \mathrm{d}\eta \mathrm{d}s \, ,\end{aligned}$$ plus an easier term when $\nabla_\xi$ hits the symbol $|\xi|^{\gamma}$. is easily estimated by Hölder’s inequality together with the estimates on $f$ and $u$ in and , and using the first condition in : $$\begin{aligned} {\| \eqref{xi^1/2dxig1} \|}_{L^2} & \lesssim \int_1^t s^{2\delta_N} {\| x f \|}_{L^2} {\| e^{is\Delta} f \|}_{L^\infty} \, \mathrm{d}s \lesssim \int_1^t s^{2\delta_N} s^\d \frac{1}{s^{1+\a}} \, ds \lesssim 1 \, .\end{aligned}$$ Using the identity and integrating by parts in $\eta$ gives terms of the form $$\begin{aligned} \int_1^t \int_{\R^3} e^{i s\psi (\xi,\eta)} m_1(\xi,\eta)|\xi|^{-1/2+\gamma} \nabla_\eta \hat{f}(\xi-\eta,s) \overline{\hat{f}} (\eta,s) \mathrm{d}\eta ds,\end{aligned}$$ together with symmetric or easier terms. Here $m_1(\xi,\eta)$ is a symbol satisfying homogeneous bounds of order 1 for large frequencies, and is otherwise harmless. 
By Plancherel, the $L^2$-norm of this term is bounded by $$\int_1^t s^{\delta_N} \|\Lambda^{-1/2+\gamma} e^{is\Delta}(xf)e^{is\Delta}f\|_{L^2} \, \mathrm{d}s.$$ For $1/2\leq\gamma<1$, we can estimate this by $$\begin{aligned} \int_1^t s^{3\delta_N/2} \|e^{is\Delta}(xf)e^{is\Delta}f\|_{L^2} \, ds \lesssim \int_1^t s^{3\delta_N/2} \|xf\|_{L^2} \|e^{is\Delta}f\|_{L^\infty} \, \mathrm{d}s \\ \lesssim \int_1^t s^{3\delta_N/2+\delta}s^{-1-\alpha} \, \mathrm{d}s \lesssim 1,\end{aligned}$$ since $\alpha>3\delta$. For $0\leq \gamma<1/2$, applying Sobolev embedding, we can estimate this by $$\begin{aligned} \int_1^t s^{\delta_N} \| e^{is\Delta}(xf)e^{is\Delta}f\|_{L^{3/(2-\gamma)}} \, \mathrm{d}s \lesssim \int_1^t s^{\delta_N} \|xf\|_{L^2}\|e^{is\Delta}f\|_{L^{6/(1-2\gamma)}} \, \mathrm{d}s\\ \lesssim \int_1^t s^{\delta_N} s^\delta \|e^{is\Delta}f\|_{L^{6/(1-2\gamma)}} \, \mathrm{d}s. \end{aligned}$$ Interpolating between the $L^6$ and $L^\infty$ estimates on $u = e^{it\Delta}f$ from , we have ${\|e^{is\Delta}f\|}_{L^{6/(1-2\gamma)}} \lesssim \e_1 s^{-1+\delta(1-2\gamma)-2\alpha\gamma}$; choosing $\delta, \delta_N>0$ sufficiently small (with $\gamma$ and $\alpha$ fixed), we can ensure that this integral has an $O(1)$ bound. \[prop: G2\] Let $G$ be the bilinear term defined in and let $\alpha$ satisfy . Then, $${\| e^{it\Lambda}x\Lambda G \|}_{L^{4/(1+\gamma)}} \lesssim t^{-1/4 + 3\gamma/4 - 2\alpha - 3\delta} \e_1^2 \, .$$ Applying $\nabla_\xi|\xi|$ to $\hat{G}$ gives the terms: $$\begin{aligned} \label{dxixig1} & \int_1^t \int_{\R^3} e^{is \psi (\xi,\eta)} |\xi|^{1+\gamma} \nabla_\xi \hat{f}(\xi-\eta,s) \overline{\hat{f}} (\eta,s) \mathrm{d}\eta \mathrm{d}s \\ \label{dxixig2} & \int_1^t \int_{\R^3}s \nabla_\xi \psi e^{is \psi (\xi,\eta)} |\xi|^{1+\gamma} \hat{f}(\xi-\eta,s) \overline{\hat{f}} (\eta,s) \mathrm{d}\eta \mathrm{d}s \, ,\end{aligned}$$ plus an easier term when $\nabla_\xi$ hits the symbol $|\xi|^{1+\gamma}$. 
We now use and integrate by parts in $\eta$ to write as terms of the form $$\label{dxixig3} \int_1^t \int_{\R^3} e^{i s\psi (\xi,\eta)} m_1(\xi,\eta) |\xi|^\gamma\nabla_\eta \hat{f}(\xi-\eta,s) \overline{\hat{f}} (\eta,s) \, \mathrm{d}\eta \mathrm{d}s,$$ together with symmetric or easier terms. Here, as before, $m_1(\xi,\eta)$ is a symbol satisfying homogeneous bounds of order 1 for large frequencies, and is otherwise harmless. Using the linear dispersive estimate, the contribution from - can thus be bounded by $$\begin{aligned} & \int_0^t\frac{1}{(t-s)^{1/2-\gamma/2}} s^{\delta} \| \Lambda^{-\gamma}\Lambda^{\gamma}e^{is\Delta}(xf)(e^{is\Delta}f)\|_{L^{4/(3-\gamma)}} \, \mathrm{d}s \\ & \lesssim \int_0^t\frac{1}{(t-s)^{1/2-\gamma/2}} s^{\delta} \|xf\|_{L^2} \|e^{is\Delta}f\|_{L^{4/(1-\gamma)}} \, \mathrm{d}s \\ & \lesssim \int_0^t\frac{1}{(t-s)^{1/2-\gamma/2}} s^{2\delta}s^{-(3/4+3\gamma/4)(1-\delta)} \, \mathrm{d}s. \end{aligned}$$ If $0<\gamma<1/3$, then we can bound this integral by $t^{-1/4 - \gamma/4 +4\delta}$. If $1/3\leq \gamma <1$, we instead obtain a bound of $t^{-1/2+\gamma/2+ 4\delta}$. By the assumptions on $\alpha$ from , these estimates are sufficient. \[prop: G3\] Let $G$ be the bilinear term defined in and let $\alpha$ satisfy . Then, $${\| e^{it\Lambda}\Lambda^{-1} G \|}_{L^3} \lesssim \e_1^2 t^{- 2\alpha - 3\delta} \, .$$ Using , we can write $$e^{it\Lambda}\Lambda^{-1} G(t) = \int_1^t e^{i(t-s)\Lambda}\Lambda^{-1+\gamma}|u(s)|^2 \, \mathrm{d}s.$$ Thus, by the linear dispersive estimate, $$\label{e^itxi xi^-1g} {\| e^{it\Lambda}\Lambda^{-1} G \|}_{L^3} \lesssim \int_1^t \frac{1}{(t-s)^{1/3}} \|\Lambda^{-1/3+\gamma}|u(s)|^2\|_{L^{3/2}} \, \mathrm{d}s.$$ Suppose first that $\gamma\geq1/3$. 
Then, we can use the bounds from to estimate by $$\int_1^t \frac{1}{(t-s)^{1/3}}s^{\delta_N} \|u\|^2_{L^3} \, \mathrm{d}s \lesssim \int_1^t \frac{1}{(t-s)^{1/3}}s^{\delta_N}\frac{1}{s^{1-\delta}} \, \mathrm{d}s \lesssim \frac{1}{t^{1/3}}t^{2\delta}.$$ By the third assumption on $\alpha$ from , this gives us the desired bound. For $0<\gamma<1/3$, we first apply Sobolev embedding to estimate by $$\int_1^t \frac{1}{(t-s)^{1/3}} \|u\|^2_{L^{18/(7-3\gamma)}} \, \mathrm{d}s.$$ Using the bounds from , we thus obtain $$\eqref{e^itxi xi^-1g} \lesssim \int_1^t \frac{1}{(t-s)^{1/3}}s^{-(2/3+\gamma)(1-\delta)} \, \mathrm{d}s \lesssim t^{-\gamma+2\delta}.$$ By the second assumption on $\alpha$ from , this gives us the desired bound. Decay estimate for $G$ {#secGLinfty} ---------------------- \[secdecayw\] In this section we want to show the following: \[prodecayw\] Let $G_\pm$ be the bilinear term defined in . Under the a priori assumptions and we have $${\| e^{it\Lambda} G_\pm \|}_{ \dot{B}^0_{\infty,1} } \lesssim \e_1^2 {(1+t)}^{-1} \, .$$ The proof of the above proposition is analogous to the one in section 6 of [@HPS]. We provide some of the details below. Let us split $G$ into two parts, depending on the localization of the inputs. More precisely, we let $G = G_1 + G_2$ where $$\begin{aligned} G_1 & := G(f_{\leq s^{1/8} } , \bar{f}) + G(f, \bar{f}_{\leq s^{1/8} }) \\ G_2 & := G(f_{\geq s^{1/8} } , \bar{f}_{\geq s^{1/8} } ) \, .\end{aligned}$$ The component $G_1$ can be shown to be bounded in a weighted Sobolev space stronger than $\dot{B}^{2}_{1,1}$; this directly gives the desired bound on $e^{it\Lambda} G_1$. The decay of $e^{it\Lambda} G_2$ will instead be proven using the null structure in conjunction with the improved small frequency behavior of the dispersive estimate for the linear wave equation. We will crucially use the fact that the $L^2$ norm of $f_{\geq s^{1/8}}$ decays in $s$. 
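This decay is quantified as follows (a sketch, under the natural reading that $f_{\geq s^{1/8}}$ denotes the restriction of $f$ to the region $|x| \geq s^{1/8}$): $${\| f_{\geq s^{1/8}} \|}_{L^2} \leq s^{-1/8} {\| x f \|}_{L^2} \lesssim \e_1 \, s^{-1/8+\delta} \, ,$$ which is exactly the factor $s^\delta s^{-1/8}$ appearing at the end of the estimate of $e^{it\Lambda}G_2$ below.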
Since the two terms in the definition of $G_1$ are similar, we can reduce to considering $G_1$ and $G_2$ given by $$\begin{aligned} \label{G_1} \hat{G}_1 & = \int_1^t \int_{\R^3} {|\xi|}^\g e^{is \psi (\xi,\eta)} \hat{f_{\leq s^{1/8} }} (\xi-\eta,s) \overline{ \hat{f} }(\eta,s) \, \mathrm{d}\eta \mathrm{d}s \\ \label{G_2} \hat{G}_2 & = \int_1^t \int_{\R^3} {|\xi|}^\g e^{is \psi (\xi,\eta)} \hat{f_{\geq s^{1/8} }} (\xi-\eta,s) \overline{ \hat{f_{\geq s^{1/8} }} } (\eta,s)\, \mathrm{d}\eta \mathrm{d}s \, .\end{aligned}$$ To show that $G_1$ is bounded in $\dot{B}^{2}_{1,1}$ we will interpolate weighted $L^2$ norms inside the time integral. One can then exploit the “small” support of $f_{\leq s^{1/8}}$ to get improvements on these weighted norms, and on the decay of $e^{is\Delta}f_{\leq s^{1/8}}$. Recalling that we are only considering frequencies $k$ such that $2^k \leq s^{\d_N}$, we aim to prove $$\begin{aligned} \int_1^t \, \sum_{k = -\infty}^{ \log s^{\d_N} } 2^{2k} {\| P_k \Lambda^\g e^{-is\Lambda} \left( e^{is\Delta} f_{\leq s^{1/8}} e^{-is\Delta} \bar{f} \right) \|}_{L^1} \, \mathrm{d}s \lesssim 1 \, .\end{aligned}$$ Converting a factor of $2^{(2-\g)k}$ into derivatives $\Lambda^{2-\g}$, throwing away the projection $P_k$, and performing the sum, we see that it suffices to show $$\begin{aligned} \int_1^t s^{\g \d_N} {\| \Lambda^2 e^{-is\Lambda} \left( e^{is\Delta} f_{\leq s^{1/8}} e^{-is\Delta} \bar{f} \right) \|}_{L^1} \, \mathrm{d}s \lesssim 1 \, .\end{aligned}$$ Since ${\| \cdot \|}_{L^1} \lesssim {\| x \cdot \|}_{L^2}^{1/2} {\| x^2 \cdot \|}_{L^2}^{1/2}$, the above estimate will follow from the inequalities $$\begin{aligned} \label{decayg_11} & {\left\| |x| \Lambda^2 e^{-is\Lambda} \left( e^{is\Delta} f_{\leq s^{1/8}} e^{-is\Delta} \bar{f} \right) \right\|}_{L^2} \lesssim s^{-7/4} \, , \\ \label{decayg_12} & {\left\| {|x|}^2 \Lambda^2 e^{-is\Lambda} \left( e^{is\Delta} f_{\leq s^{1/8}} e^{-is\Delta} \bar{f} \right) \right\|}_{L^2} \lesssim s^{-1} \, 
.\end{aligned}$$ These two estimates have already been proven in [@HPS] under the same a priori assumptions made in . Therefore, we omit their proofs and refer the reader to section 6.1 of [@HPS] for a detailed proof. We write $$\begin{aligned} e^{it\Lambda} G_2 (t,x) & = \int_1^t e^{i(t-s)\Lambda} \Lambda^{\g-1} \FF^{-1}_\xi \left[ \int_{\R^3} |\xi| e^{is \tilde{\psi} (\xi,\eta)} \hat{f_{\geq s^{1/8} }} (\xi-\eta,s) \overline{ \hat{f_{\geq s^{1/8} }} } (\eta,s) \mathrm{d}\eta \right] \, \mathrm{d}s\end{aligned}$$ where $\tilde{\psi} (\xi,\eta) = {|\xi-\eta|}^2 - {|\eta|}^2 = {|\xi|}^2 - 2 \xi \cdot \eta$. We now want to use to integrate by parts in $\eta$. By symmetry we can reduce to considering the following term: $$\label{g_2decay} \int_1^t e^{i(t-s)\Lambda} \frac{1}{s} \, \Lambda^{\g-1} \FF^{-1}_\xi \left[ \int_{\R^3} \frac{\xi}{|\xi|} e^{is \tilde{\psi} (\xi,\eta)} \, \nabla_\eta \hat{f_{\geq s^{1/8} }} (\xi-\eta,s) \overline{ \hat{f_{\geq s^{1/8} }} } (\eta,s) \mathrm{d}\eta \right] \mathrm{d}s \, .$$ The contribution of the time integral between $t-1$ and $t$ can be easily estimated by Sobolev embedding. To estimate the contribution from $1$ to $t-1$, we use the linear dispersive estimate for the wave equation , and our large frequency cutoff convention, to bound it by $$\begin{aligned} &\int_1^{t-1} \frac{1}{t-s} \, \frac{1}{s} \, \sum_{k = - \infty}^{ \log s^{\delta_N} } 2^{(\g+1)k} {\left\| P_k \left( e^{is\Delta} x f_{\geq s^{1/8}} \, e^{is\Delta} f_{\geq s^{1/8}} \right) \right\|}_{L^1} \, ds \\ & \lesssim \int_1^{t-1} \frac{1}{t-s} \, \frac{1}{s} \, s^{2\delta_N} {\left\| e^{is\Delta} x f_{\geq s^{1/8}} \, e^{is\Delta} f_{\geq s^{1/8}} \right\|}_{L^1} \, ds \\ & \lesssim \int_1^{t-1} \frac{1}{t-s} \, \frac{1}{s} \, s^{2\delta_N} {\left\| x f \right\|}_{L^2} {\| f_{\geq s^{1/8}} \|}_{L^2} \, ds \lesssim \int_1^{t-1} \frac{1}{t-s} \, \frac{1}{s} \, s^{2\delta_N} \, s^\d \frac{1}{s^{1/8}} s^\d \, ds \lesssim \frac{1}{t} \, . 
\hskip 80pt \Box\end{aligned}$$ Estimates for $F$ {#secF} ================= Recall that we are making the following a priori assumptions on $g$ and $f$: $$\begin{aligned} \label{eqn: gL} & {\| \Lambda^{1/2} x G\|}_{L^2} \leq \e_1, \qquad {\|e^{it\Lambda} x\Lambda G\|}_{L^{4/(1+\gamma)}} \leq t^{-1/4+3\gamma/4-2\alpha-3\delta} \e_1, \\ \label{eqn: Lminus1G3} & {\|e^{it\Lambda} G\|}_{L^\infty} \leq t^{-1} \e_1, \qquad {\|e^{it\Lambda}\Lambda^{-1} G\|}_{L^3} \leq t^{- 2\alpha - 3\delta} \e_1, \\ \label{apriorif10} & {\| x f \|}_{L^2} \leq t^{\d} \e_1, \qquad {\| x^2 f \|}_{L^2} \leq t^{1-2\alpha-\delta} \e_1 .\end{aligned}$$ In this section we want to establish estimates for $F$ defined as $$\label{eqn: Fdef} \widehat{F}(\xi,t) = \int_0^t\int_{\mathbb{R}^3} {e^{is\phi (\xi,\eta)}}{\widehat{f}(\xi-\eta,s)\widehat{g} (\eta,s)}\,{\,\mathrm{d}\eta\mathrm{d}s}\, .$$ Recall that $$\label{eqn: null} \partial_\xi \phi(\xi,\eta) = -2\eta = -2\frac{\eta}{|\eta|}\left(\frac{\eta}{|\eta|}\cdot \partial_\eta \phi(\xi,\eta) \right)-2\frac{\phi}{|\eta|}\frac{\eta}{|\eta|} \, ,$$ and note that $\partial_{\xi_i}^2 \phi (\xi,\eta) = 0$, which in particular leads to $\partial_{\xi_i}^2 {e^{is\phi (\xi,\eta)}}= -4s^2\eta_i^2 {e^{is\phi (\xi,\eta)}}$. Estimate for $x F$ ------------------ In this section we aim to prove the following lemma: Let $F$ be defined by . 
Under the apriori assumptions – we have $${\| x F \|}_{L^2} \lesssim t^\delta \e_1^2 \, .$$ We have that $\nabla_\xi \widehat{F}$ is given by a linear combination of terms of the form $$\begin{aligned} \label{eqn: xfsum1} & \int_0^t\int_{\mathbb{R}^3}e^{is\phi(\xi,\eta)}\nabla_\xi\widehat{f}(\xi-\eta,s)\widehat{g}(\eta,s)\, {\,\mathrm{d}\eta\mathrm{d}s}\\ \label{eqn: xfsum2} & \int_0^t\int_{\mathbb{R}^3} s \nabla_\xi\phi \, e^{is\phi(\xi,\eta)}\widehat{f}(\xi-\eta,s) \widehat{g}(\eta,s)\, {\,\mathrm{d}\eta\mathrm{d}s}.\end{aligned}$$ Using to integrate by parts in $\eta$ and $s$ in equation , we have the following contributions: $$\begin{aligned} \label{eqn: xf21} & \int_0^t\int_{\mathbb{R}^3}e^{is\phi(\xi,\eta)}\nabla_\eta\widehat{f}(\xi-\eta,s)\widehat{g}(\eta,s)\, {\,\mathrm{d}\eta\mathrm{d}s}\\ \label{eqn: xf22} & \int_0^t\int_{\mathbb{R}^3}e^{is\phi(\xi,\eta)}\widehat{f}(\xi-\eta,s)\nabla_\eta\widehat{g}(\eta,s)\, {\,\mathrm{d}\eta\mathrm{d}s}\\ \label{eqn: xf23} & \int_{\mathbb{R}^3}te^{it\phi(\xi,\eta)}\widehat{f}(\xi-\eta,t)\frac{1}{|\eta|}\widehat{g}(\eta,t)\, \mathrm{d}\eta \\ \label{eqn: xf24} & \int_0^t\int_{\mathbb{R}^3}se^{is\phi(\xi,\eta)}\widehat{f}(\xi-\eta,s)\frac{1}{|\eta|}\partial_s\widehat{g}(\eta,s)\, {\,\mathrm{d}\eta\mathrm{d}s}\\ & \label{eqn: xf25} \int_0^t\int_{\mathbb{R}^3}se^{is\phi(\xi,\eta)}\partial_s\widehat{f}(\xi-\eta,s)\frac{1}{|\eta|}\widehat{g}(\eta,s)\, {\,\mathrm{d}\eta\mathrm{d}s}.\end{aligned}$$ The terms and can be bounded as $$\|\eqref{eqn: xfsum1} \|_{L^2} \lesssim \int_0^t\|xf\|_{L^2}\|e^{is\Lambda}(n_0+G+\Lambda^{-1}n_1)\|_{L^\infty}\,\mathrm{d}s \lesssim \varepsilon_1^2 \int_0^t s^\delta\frac{1}{s}\,ds \lesssim \varepsilon_1^2 t^\delta.$$ For the term , we first split $g$ into $n_0+G$ and $i\Lambda^{-1}n_1$. 
$$\|\eqref{eqn: xf22}\|_{L^2} \lesssim \int_0^t\|e^{is\Lambda}x(n_0+G)\|_{L^3}\|e^{is\Delta}f\|_{L^6}\,\mathrm{d}s +\int_0^t \|e^{is\Lambda}x\Lambda^{-1}n_1e^{is\Delta}f\|_{L^2} \,\mathrm{d}s.$$ For the first term, we use the Sobolev embedding and our assumption : $$\begin{aligned} \int_0^t \|x(n_0+G)\|_{L^3}\|e^{is\Delta}f\|_{L^6}\,ds &\lesssim \int_0^t \|x(n_0+G)\|_{\dot{H}^{1/2}}\|e^{is\Delta}f\|_{L^6}\, \mathrm{d}s \lesssim \varepsilon_1^2 \int_0^t s^{-1+\delta}\, \mathrm{d}s \lesssim \varepsilon_1^2 t^\delta.\end{aligned}$$ For the second term, we commute $x$ and $\Lambda^{-1}$: using $x\Lambda^{-1}n_1 = -\frac{\partial}{\Lambda^3}n_1 + \Lambda^{-1}xn_1$, we get $$\label{eqn: xf22a} \int_0^t \|e^{is\Lambda}x\Lambda^{-1}n_1e^{is\Delta}f\|_{L^2} \,\mathrm{d}s \lesssim \int_0^t\|e^{is\Lambda}\Lambda^{-1}xn_1e^{is\Delta}f\|_{L^2}\,\mathrm{d}s + \int_0^t\|e^{is\Lambda}\frac{\partial}{\Lambda^3}n_1e^{is\Delta}f\|_{L^2} \,\mathrm{d}s.$$ For the first integral on the right-hand side of we use Hölder and the dispersive estimates and , to obtain $$\begin{aligned} \int_1^t \|e^{is\Lambda} \Lambda^{-1} xn_1e^{is\Delta}f\|_{L^2} \,\mathrm{d}s & \lesssim \int_1^t \|e^{is\Lambda} \Lambda^{-1} x n_1 \|_{L^3}\|e^{is\Delta}f\|_{L^6}\,\mathrm{d}s \lesssim \varepsilon_1 \int_1^t s^{-4/3+\delta}\|\Lambda^{-1/3}xn_1\|_{L^{3/2}}\,\mathrm{d}s \\ &\lesssim \varepsilon_1 \int_1^t s^{-4/3+\delta}\|x n_1\|_{L^{9/7}}\,\mathrm{d}s \lesssim \varepsilon_1 \|x^2 n_1\|_{L^2}.\end{aligned}$$ For the second term in , we have $$\begin{aligned} \int_0^t\|e^{is\Lambda}\frac{\partial}{\Lambda^3}n_1e^{is\Delta}f\|_{L^2} \,\mathrm{d}s & \lesssim \int_0^t \|e^{is\Lambda}\frac{\partial}{\Lambda^3}n_1 \|_{L^\infty}\|e^{is\Delta}f\|_{L^2} \,\mathrm{d}s \\ & \lesssim \int_0^t s^{-1} \|\frac{\partial^3}{\Lambda^3}n_1\|_{L^1} \|f\|_{L^2} \,\mathrm{d}s \lesssim \e_1 t^\delta \|n_1\|_{\dot{B}^0_{1,1}}.\end{aligned}$$ To estimate the term we first observe that , and the assumptions on the initial data combined with 
, give us $$\begin{aligned} \label{estG+n_0} \begin{split} \|e^{is\Lambda}\Lambda^{-1}(n_0+G)\|_{L^3} & \lesssim \e_1 s^{-2\a-3\d} \, . \end{split}\end{aligned}$$ We then have $$\begin{aligned} \|\eqref{eqn: xf23}\|_{L^2} & \lesssim t \|e^{it\Delta}f\|_{L^6} \|e^{it\Lambda}\Lambda^{-1}(n_0+G)\|_{L^3} + t\|f\|_{L^2}\|e^{it\Lambda} \Lambda^{-2}m_0(i\nabla)n_1\|_{L^\infty} \\ & \lesssim t\, \e_1 t^{-1+\delta} \, \e_1 t^{-2\alpha+3\delta} + t \, \e_1 \frac{1}{t} \|n_1\|_{\dot{B}^0_{1,1}} \lesssim \varepsilon_1^2 t^\delta .\end{aligned}$$ Here, and in the remainder of the proof, we denote by $m_k$ a homogeneous multiplier of order $k$. We also implicitly use the fact that such multipliers operate on homogeneous Besov spaces $\dot{B}^{s+k}_{p,r}\rightarrow \dot{B}^s_{p,r}$. For the term , we observe that $e^{is\Lambda}\partial_s g = \Lambda^\gamma |u|^2$, so that $$\begin{aligned} \|\eqref{eqn: xf24}\|_{L^2} & \lesssim \int_1^t s \|e^{is\Delta}f\|_{L^6} \|\Lambda^{-1+\gamma}|u|^2\|_{L^3} \mathrm{d}s \lesssim \varepsilon_1 \int_1^t s s^{-1+\delta} \|u\|^2_{L^{6/(2-\gamma)}}\mathrm{d}s \lesssim \varepsilon_1^2 t^\delta \, ,\end{aligned}$$ where we use Sobolev’s embedding for the second inequality. Finally, for the term , we use $e^{is\Delta}\partial_sf = uw$ to estimate $$\begin{aligned} \|\eqref{eqn: xf25} \|_{L^2} & \lesssim \int_1^t s\|e^{is\Delta}\partial_s f\|_{L^6}\|e^{is\Lambda} \Lambda^{-1}(n_0+G)\|_{L^3} + s\|\partial_s f\|_{L^2}\|e^{is\Lambda}\Lambda^{-2}m_0(i\nabla)n_1\|_{L^\infty} \, \mathrm{d}s \\ & \lesssim \int_1^t s \|u\|_{L^6}\|w\|_{L^\infty} s^{-2\alpha+3\delta} +s\|u\|_{L^\infty}\|w\|_{L^2}\frac{1}{s}\|n_1\|_{\dot{B}^0_{1,1}} \, \mathrm{d}s \\ & \lesssim \varepsilon_1^2 \int_1^t s s^{-1+\delta} s^{-1} s^{-2\alpha+3\delta} + s s^{-1-\alpha} s^{-1} \, \mathrm{d}s \lesssim \varepsilon_1^2 t^\delta \, ,\end{aligned}$$ having used $\alpha > 3 \delta /2$. 
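For later reference, the exponent count behind the last chain of estimates is simply (a sketch of the arithmetic): $$\int_1^t s \cdot s^{-1+\delta} \cdot s^{-1} \cdot s^{-2\alpha+3\delta} \, \mathrm{d}s = \int_1^t s^{-1+4\delta-2\alpha} \, \mathrm{d}s \lesssim t^{4\delta - 2\alpha} \leq t^{\delta} \, , \qquad \int_1^t s^{-1-\alpha} \, \mathrm{d}s \lesssim 1 \, ,$$ the first inequality holding precisely when $\alpha \geq 3\delta/2$.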
Estimate for $x^2 F$ -------------------- In this last section we want to estimate ${\|x^2 F\|}_{L^2}$ and show the following: Under the a priori assumptions –, we have $$\begin{aligned} {\|x^2 F\|}_{L^2} \lesssim \e_1^2 t^{1-2\a-\d} \, .\end{aligned}$$ Fix $i$ and differentiate twice with respect to $\xi_i$ in , generating three terms: $$\begin{aligned} \widehat{F}_1&=-4\int_0^t\int {e^{is\phi (\xi,\eta)}}s^2\eta_i^2 \widehat{f}(\xi-\eta)\widehat{g}(\eta,s){\,\mathrm{d}\eta\mathrm{d}s}\label{eqn: F1}\\ \widehat{F}_2&=-4\int_0^t\int {e^{is\phi (\xi,\eta)}}is \eta_i \partial_{\xi_i} {\widehat{f}(\xi-\eta,s)\widehat{g} (\eta,s)}{\,\mathrm{d}\eta\mathrm{d}s}\label{eqn: F2} \\ \widehat{F}_3&=\int_0^t\int {e^{is\phi (\xi,\eta)}}\partial_{\xi_i}^2{\widehat{f}(\xi-\eta,s)\widehat{g} (\eta,s)}{\,\mathrm{d}\eta\mathrm{d}s}. \label{eqn: F3}\end{aligned}$$ ### Estimate for $F_1$ As a consequence of the second equality in , we have $$\eta_i^2 = \frac{\eta_i^2}{|\eta|}\frac{\eta}{|\eta|}\cdot \partial_\eta \phi +\frac{\phi}{|\eta|}\frac{\eta_i^2}{|\eta|}.$$ Plugging this into we obtain two terms (we omit the constant factor): $$\begin{aligned} & \int_0^t\int s^2 {e^{is\phi (\xi,\eta)}}\frac{\eta_i^2}{|\eta|}{\frac{\eta}{|\eta|}\cdot \partial_{\eta} \phi}\, {\widehat{f}(\xi-\eta,s)\widehat{g} (\eta,s)}{\,\mathrm{d}\eta\mathrm{d}s}\label{eqn: F1-dphiterm} \, , \\ & \int_0^t \int s^2 {e^{is\phi (\xi,\eta)}}\frac{\eta_i^2}{|\eta|}{\frac{\phi}{|\eta|}}{\widehat{f}(\xi-\eta,s)\widehat{g} (\eta,s)}{\,\mathrm{d}\eta\mathrm{d}s}\label{eqn: F1-phiterm} \, .\end{aligned}$$ 
Note that $$\phi {e^{is\phi (\xi,\eta)}}=-i \partial_s {e^{is\phi (\xi,\eta)}},$$ and so may be rewritten as a sum of the terms $$\begin{aligned} \label{eqn: F11} \widehat{F}_{11}&=-\int i m_0(\eta)t^2 {e^{it\phi (\xi,\eta)}}\widehat{f}(\xi-\eta,t)\widehat{g}(\eta,t) \mathrm{d}\eta \\ & \quad + i\int_0^t\int m_0(\eta) s^2 \, {e^{is\phi (\xi,\eta)}}\partial_s \left( {\widehat{f}(\xi-\eta,s)\widehat{g} (\eta,s)}\right){\,\mathrm{d}\eta\mathrm{d}s}, \nonumber \\ \label{eqn: F13} \widehat{F}_{13}&=2i\int_0^t\int m_0(\eta) s \, {e^{is\phi (\xi,\eta)}}{\widehat{f}(\xi-\eta,s)\widehat{g} (\eta,s)}{\,\mathrm{d}\eta\mathrm{d}s}\, . \end{aligned}$$ Write out as a sum of terms $$\int_0^t\int s^2 \frac{\eta_i^2}{|\eta|^2} \eta_j \partial_{\eta_j}\phi \, {e^{is\phi (\xi,\eta)}}{\widehat{f}(\xi-\eta,s)\widehat{g} (\eta,s)}{\,\mathrm{d}\eta\mathrm{d}s}.$$ Fix one summand. Since $is\partial_{\eta_j}\phi\, e^{is\phi} =\partial_{\eta_j}e^{is\phi}$, integration by parts yields $$\begin{aligned} & -\int_0^t \int s \partial_{\eta_j}\left(\frac{\eta_i^2\eta_j}{|\eta|^2}\right){e^{is\phi (\xi,\eta)}}{\widehat{f}(\xi-\eta,s)\widehat{g} (\eta,s)}\,\mathrm{d}\eta\mathrm{d}s \label{eqn: dhitssymbol1} \\ & -\int_0^t \int s \frac{\eta_i^2\eta_j}{|\eta|^2} {e^{is\phi (\xi,\eta)}}\partial_{\eta_j} \left({\widehat{f}(\xi-\eta,s)\widehat{g} (\eta,s)}\right) \,\mathrm{d}\eta\mathrm{d}s \, .\label{eqn: dfg}\end{aligned}$$ The first term is analogous to $\hat{F}_{13}$ in . The quantity gives two contributions: $$\begin{aligned} & \int_0^t \int m_1(\eta) s {e^{is\phi (\xi,\eta)}}\partial_{\eta_j}\widehat{f}(\xi-\eta)\widehat{g}(\eta){\,\mathrm{d}\eta\mathrm{d}s}\\ & \int_0^t\int m_1(\eta) s {e^{is\phi (\xi,\eta)}}\widehat{f}(\xi-\eta)\partial_{\eta_j}\widehat{g}(\eta){\,\mathrm{d}\eta\mathrm{d}s}\, , \label{eqn: F12} \end{aligned}$$ where the symbol $m_1(\eta)$ denotes a multiplier which is homogeneous of order 1. The first term above is analogous to the term $F_2$ in and will be estimated later. 
We denote the second term by $\widehat{F}_{12}$. We now proceed to estimate the terms $F_{11}$, $F_{12}$ and $F_{13}$, defined respectively in , and . We begin with $F_{11}$, defined in . The first term $$\int m_0(\eta) t^2 {e^{it\phi (\xi,\eta)}}\widehat{f}(\xi-\eta,t)\widehat{g}(\eta,t) \mathrm{d}\eta$$ can be dealt with by an $L^6-L^3$ estimate: we pair the zeroth order multiplier with $g=n_0+G +\Lambda^{-1}n_1$ to find a bound of $$\begin{aligned} & t^2 {\|e^{it\Delta} f\|}_{L^6} {\|e^{it\Lambda}(n_0+G)\|}_{L^3} + t^2\|xf\|_{L^2}\|e^{it\Lambda}\Lambda^{-1}n_1\|_\infty \\ & \lesssim \varepsilon_1 t {\|xf\|}_{L^2} t^{-1/3} + \varepsilon_1^2 t^\delta \lesssim \varepsilon_1^2 t^{2/3+\delta} .\end{aligned}$$ This is acceptable since $2/3 + \d \leq 1 - 2\alpha -\d$, see . For the second term in , we first expand the derivative $$\begin{aligned} \partial_s \left(\widehat{f}(\xi-\eta)\widehat{g}(\eta)\right) = \partial_s\widehat{f}(\xi-\eta)\widehat{g}(\eta) + \widehat{f}(\xi-\eta)\partial_s \widehat{g}(\eta).\end{aligned}$$ Using $\partial_s f = e^{is\Delta} un$ and $\partial_s g = e^{is\Lambda} \Lambda^{\g} |u|^2$, we obtain a bound of the form $$\int_0^t s^2 {\|un\|}_{L^6} {\|n\|}_{L^3}\,\mathrm{d}s + \int_0^t s^2 s^\delta {\|u\|}_{L^6} {\||u|^2\|}_{L^3}\,\mathrm{d}s \, .$$ Using the a priori assumptions , this is bounded by $$\begin{aligned} & \varepsilon_1^2 \int_0^t s s^{-1+\delta} s^{-1/3} \,\mathrm{d}s + \varepsilon_1^2 \int_0^t s^{2+\delta} {(s^{-1+\d})}^3 \,\mathrm{d}s \lesssim \varepsilon_1^2 \int_0^t s^{\delta -1/3}\,\mathrm{d}s \lesssim \varepsilon_1^2 t^{2/3 + \delta} \, .\end{aligned}$$ The term $F_{12}$ was defined in as a term of the form $$\widehat{F}_{12} = \int_0^t \int m_1(\eta) s \, {e^{is\phi (\xi,\eta)}}\widehat{f}(\xi-\eta)\partial_{\eta}\widehat{g}(\eta){\,\mathrm{d}\eta\mathrm{d}s}.$$ Up to a commutator term resulting from $\eta \partial_\eta \widehat{g}(\eta) = \partial_{\eta}(\eta \widehat{g}(\eta)) -\widehat{g}(\eta)$, we can apply Hölder’s 
inequality with exponents $p = 4/(1+\gamma)$ and $p'= 4/(1-\gamma)$ to find a bound of the type $$\int_0^t s \| m_0(\nabla) e^{is\Lambda} x \Lambda g \|_{L^{4/(1+\gamma)}}\|e^{is\Delta}f\|_{L^{4/(1-\gamma)}} \,\mathrm{d}s.$$ By , $$\| m_0(\nabla) e^{is\Lambda} x \Lambda (n_0+G) \|_{L^{4/(1+\gamma)}} \lesssim \varepsilon_1 s^{-1/4+3\gamma/4-2\alpha-3\delta} \,$$ and, by interpolating the $L^2$ and $L^6$ bounds, we have $$\|e^{is\Delta} f\|_{L^{4/(1-\gamma)}}\lesssim \varepsilon_1 s^{-(3/4+3\gamma/4)}.$$ On the other hand, we have $$\begin{aligned} \| e^{is\Lambda} x n_1\|_{L^{4/(1+\gamma)}} &\lesssim s^{-1/2+\gamma/2}\|\Lambda^{1-\gamma}(xn_1)\|_{L^{4/(3-\gamma)}}\\ &\lesssim s^{-1/2+\gamma/2}(\|x\Lambda n_1\|_{L^{12/(9+\gamma)}} + \|n_1\|_{L^{12/(9+\gamma)}})\\ &\lesssim s^{-1/2+\gamma/2}\|\langle x\rangle^2 \Lambda n_1\|_{L^2}.\end{aligned}$$ Combining these, we obtain the result. The term $F_{13}$ was defined in . Taking an $L^6-L^3$ estimate, we have a bound of the form $$\int_0^t s \|u\|_{L^6}\|n\|_{L^3}\,\mathrm{d}s +\int_0^t s\|u\|_{L^6}\|e^{is\Lambda}\Lambda^{-1}n_1\|_{L^3}\,\mathrm{d}s.$$ For the first term in the preceding, we have the bound $$\int_0^t s^{\delta-1/3}\,\mathrm{d}s \lesssim t^{2/3+\delta}.$$ For the second term, we use the dispersive estimate and Sobolev embedding to obtain the bound $t^{2/3}\|n_1\|_{L^{9/7}}$, which is more than what is needed. 
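The interpolation step used above for ${\|e^{is\Delta}f\|}_{L^{4/(1-\gamma)}}$ can be made explicit (a sketch, written for $0<\gamma<1/3$, so that $4/(1-\gamma)<6$; for larger $\gamma$ one interpolates with the $L^\infty$ bound instead, and the small $\delta$-loss is harmless): with $\theta$ determined by $$\frac{1-\gamma}{4} = \frac{\theta}{2} + \frac{1-\theta}{6} \, , \qquad \text{i.e.} \quad \theta = \frac{1-3\gamma}{4} \, ,$$ we have $${\|e^{is\Delta} f\|}_{L^{4/(1-\gamma)}} \lesssim {\|f\|}_{L^2}^{\theta} \, {\|e^{is\Delta}f\|}_{L^6}^{1-\theta} \lesssim \e_1 \, s^{-(3/4+3\gamma/4)(1-\delta)} \, .$$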
### Estimate for $F_2$ This was defined in and is of the form $$\int_0^t\int {e^{is\phi (\xi,\eta)}}i s \eta_i \partial_{\xi_i} {\widehat{f}(\xi-\eta,s)\widehat{g} (\eta,s)}{\,\mathrm{d}\eta\mathrm{d}s}.$$ We use to obtain the two terms $$\begin{aligned} & -\int_0^t\int {e^{is\phi (\xi,\eta)}}is \frac{\eta_i}{|\eta|} {\frac{\eta}{|\eta|}\cdot \partial_{\eta} \phi}\, \partial_{\eta_i} {\widehat{f}(\xi-\eta,s)\widehat{g} (\eta,s)}{\,\mathrm{d}\eta\mathrm{d}s}\label{eqn: F2-dphiterm} \\ & -\int_0^t\int {e^{is\phi (\xi,\eta)}}is \frac{\eta_i}{|\eta|}\frac{\phi(\xi,\eta)}{|\eta|} \partial_{\eta_i} {\widehat{f}(\xi-\eta,s)\widehat{g} (\eta,s)}{\,\mathrm{d}\eta\mathrm{d}s}\label{eqn: F2-phiterm} \, .\end{aligned}$$ We integrate by parts in , obtaining terms $$\begin{aligned} & \int_0^t \int \partial_{\eta_j}\left(\frac{\eta_i\eta_j}{|\eta|^2}\right) {e^{is\phi (\xi,\eta)}}\partial_{\eta_i} {\widehat{f}(\xi-\eta,s)\widehat{g} (\eta,s)}{\,\mathrm{d}\eta\mathrm{d}s}\label{eqn: F2-term1} \\ & \int_0^t \int \frac{\eta_i\eta_j}{|\eta|^2}{e^{is\phi (\xi,\eta)}}\partial^2_{\eta_i\eta_j}{\widehat{f}(\xi-\eta,s)\widehat{g} (\eta,s)}{\,\mathrm{d}\eta\mathrm{d}s}\label{eqn: F2-term2} \\ &\int_0^t \int \frac{\eta_i\eta_j}{|\eta|^2}{e^{is\phi (\xi,\eta)}}\partial_{\eta_i}\widehat{f}(\xi-\eta)\partial_{\eta_j}\widehat{g}(\eta,s){\,\mathrm{d}\eta\mathrm{d}s}\, . \label{eqn: F2-term3}\end{aligned}$$ Notice that the multiplier in is of order $-1$ while in and it is of order zero. In particular, is similar to and can be estimated as in below, so we skip it. 
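Before estimating the remaining terms, we record why the zero-order symbols above are harmless (a standard remark): the symbol $\eta_i\eta_j/|\eta|^2$ is homogeneous of degree zero and smooth away from the origin, so that $$\left| \partial_\eta^{\beta} \, \frac{\eta_i \eta_j}{|\eta|^2} \right| \lesssim_{\beta} |\eta|^{-|\beta|} \qquad \text{for every multi-index } \beta \, ,$$ and the associated operator is therefore bounded on $L^p(\R^3)$ for all $1 < p < \infty$.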
For , we use the Mikhlin multiplier Theorem[^11] and the second assumption in and , to find the bound $$\begin{aligned} {\| \eqref{eqn: F2-term1} \|}_{L^2} & \lesssim \int_0^t {\|e^{is\Delta}xf\|}_{L^6} {\|e^{is\Lambda} m_{-1}(\nabla)(n_0+G) \|}_{L^3} \,\mathrm{d}s +\int_0^t \|xf\|_{L^2}\|e^{is\Lambda}\Lambda^{-2}n_1\|_{L^\infty}\,\mathrm{d}s\\ &\lesssim \int_0^t s^{-1} {\|x^2 f\|}_{L^2} {\|e^{is\Lambda} \Lambda^{-1} (n_0+G)\|}_{L^3} \,\mathrm{d}s + \varepsilon_1^2 t^\delta\\ & \lesssim \varepsilon_1^2 \int_0^t s^{-1} s^{1-2\a-\d} s^{-2\alpha-3\delta}\,\mathrm{d}s + \varepsilon_1^2 t^\delta \lesssim \varepsilon_1^2 t^{1-2\alpha-3\delta}\, .\end{aligned}$$ The term can be estimated similarly using Sobolev’s embedding and : $$\begin{aligned} {\| \eqref{eqn: F2-term3} \|}_{L^2} & \lesssim \int_0^t {\|e^{is\Delta}xf\|}_{L^6} {\| e^{is\Lambda} x(n_0+G) \|}_{L^3} \,\mathrm{d}s + \int_0^t \|e^{is\Delta}xf\|_{L^6} \|e^{is\Lambda}\Lambda^{-1}xn_1\|_{L^3}\,\mathrm{d}s\\ & + \int_0^t \|e^{is\Delta}xf\|_{L^2}\|e^{is\Lambda}\frac{\partial}{\Lambda^3}n_1\|_{L^\infty}\,\mathrm{d}s \\ & \lesssim \int_0^t s^{-1} {\|x^2 f\|}_{L^2} {\| x(n_0+G) \|}_{\dot{H}^\frac{1}{2}} \,\mathrm{d}s +\int_0^t s^{-1} \|x^2 f\|_{L^2}\|\Lambda^{-1/3} x n_1\|_{L^{3/2}}\,\mathrm{d}s \\ & + \int_0^t s^{-1}\|\frac{\partial^3}{\Lambda^3}n_1\|_{L^1} \|xf\|_{L^2}\,\mathrm{d}s \\ & \lesssim \varepsilon_1^2 \int_0^t s^{-1} s^{1-2\a-\d} \,\mathrm{d}s + \varepsilon_1 \int_0^t s^{-2\alpha-\delta}s^{-1/3}\|xn_1\|_{L^{9/7}}\,\mathrm{d}s + \varepsilon_1^2 t^\delta\\ &\lesssim \varepsilon_1^2 t^{1-2\alpha-\delta}.\end{aligned}$$ We now integrate by parts in $s$ in the term to find the terms $$\begin{aligned} &-\int_0^t\int {e^{is\phi (\xi,\eta)}}s \frac{\eta_i}{|\eta|^2} \partial_s \partial_{\eta_i} \widehat{f}(\xi-\eta,s)\widehat{g}(\eta,s)\,\mathrm{d}\eta \mathrm{d}s &\quad (\equiv \widehat{F}_{21}) \label{eqn: F21} \\ & -\int_0^t\int {e^{is\phi (\xi,\eta)}}s \frac{\eta_i}{|\eta|^2} \partial_{\eta_i} 
\widehat{f}(\xi-\eta,s)\partial_s \widehat{g}(\eta,s)\,\mathrm{d}\eta \mathrm{d}s &\quad (\equiv \widehat{F}_{22}) \label{eqn: F22} \\ & \int e^{it\phi(\xi,\eta)} t \frac{\eta_i}{|\eta|^2} \partial_{\eta_i} \widehat{f}(\xi-\eta,t)\widehat{g}(\eta,t)\mathrm{d}\eta & \quad (\equiv \widehat{F}_{23}) \label{eqn: F23} \\ & -\int_0^t\int {e^{is\phi (\xi,\eta)}}\frac{\eta_i}{|\eta|^2} \partial_{\eta_i} {\widehat{f}(\xi-\eta,s)\widehat{g} (\eta,s)}{\,\mathrm{d}\eta\mathrm{d}s}\, . &\quad (\equiv \widehat{F}_{24}) \label{eqn: F24}\end{aligned}$$ We estimate these four terms below. We first compute the $\partial_s$ derivative: $$\partial_s \widehat{f}(\xi-\eta,s) = i\int e^{is\phi(\xi-\eta,\tau)} \widehat{f}(\xi-\eta-\tau,s)\widehat{g}(\tau,s)\,\mathrm{d}\tau.$$ The derivative $\partial_{\eta_j}$ now generates two terms: $$\begin{gathered} 2i\int_0^t\int\int {e^{is\phi (\xi,\eta)}}s^2 \frac{\eta_i}{|\eta|^2} e^{is\phi(\xi-\eta,\tau)} \widehat{f}(\xi-\eta-\tau,s)\tau_j \widehat{g}(\tau,s)\widehat{g}(\eta,s)\,\mathrm{d}\eta\mathrm{d}\tau\mathrm{d}s \label{eqn: F21-term1} \\ \int_0^t\int\int {e^{is\phi (\xi,\eta)}}s \frac{\eta_i}{|\eta|^2} e^{is\phi(\xi-\eta,\tau)} \partial_{\eta_j}\widehat{f}(\xi-\eta-\tau,s)\widehat{g}(\tau,s) \widehat{g}(\eta,s)\,\mathrm{d}\eta\mathrm{d}\tau\mathrm{d}s. 
\label{eqn: F21-term2}\end{gathered}$$ For the first term we pair the multiplier with $\widehat{g}(\eta)$ and use $L^6-L^3$ to obtain $${\| \eqref{eqn: F21-term1} \|}_{L^2} \lesssim \int_0^t s^2 {\|u\partial n\|}_{L^6} {\|e^{is\Lambda} \Lambda^{-1} (G+n_0)\|}_{L^3} \,\mathrm{d}s+\int_0^t s^2\|u\partial n\|_{L^2}\|e^{is\Lambda} \Lambda^{-2}n_1\|_{L^\infty}\,\mathrm{d}s .\label{eqn: F21-term3}$$ Estimating $\|e^{is\Lambda} \Lambda^{-1} (G + n_0)\|_{L^3}$ by , we can bound the first of the two expressions above by $$\begin{aligned} \int_0^t s^2 {\|u\|}_{L^6} {\|\partial n\|}_{L^\infty} {\|e^{is\Lambda} \Lambda^{-1} (G+n_0)\|}_{L^3}\mathrm{d}s &\lesssim \varepsilon_1^2 \int_0^t s^{2\d} s^{-2\alpha -3\d } \mathrm{d}s \lesssim \varepsilon_1^2 t^{1-2\alpha-\delta}.\end{aligned}$$ For the second integral in we have $$\begin{aligned} \int_0^t s^2\|u\partial n\|_{L^2}\|e^{is\Lambda} \Lambda^{-2}n_1\|_{L^\infty}\,\mathrm{d}s &\lesssim \int_0^t s\|u\|_{L^6} \|\partial n\|_{L^3}\|n_1\|_{\dot{B}^0_{1,1}}\,\mathrm{d}s \\ &\lesssim \int_0^t \varepsilon_1^3 s^{-1/3+2\delta}\,\mathrm{d}s \lesssim \varepsilon_1^3 t^{2/3+2\delta}.\end{aligned}$$ Moving on to , we reproduce the same pairing as before and once again use $L^6-L^3$ and $L^2-L^\infty$ estimates to find a bound $$\label{eqn: F21-term4} \int_0^t s\|e^{is\Delta}(xf) \,n \|_{L^6} \|e^{is\Lambda}\Lambda^{-1} (n_0+G)\|_{L^3}\mathrm{d}s + \int_0^t s \| e^{is\Delta}(xf) \, n \|_{L^2}\|e^{is\Lambda}\Lambda^{-2}n_1\|_{L^\infty}\,\mathrm{d}s.$$ Observe that $$\| e^{is\Delta}xf \|_{L^6} \lesssim s^{-1} \| x^2 f \|_{L^2} \lesssim \e_1 s^{-2\alpha - \d} \, .$$ Using this, together with $\| n\|_{L^\infty} \le \varepsilon_1 s^{-1}$ and , we obtain the desired bound of $\varepsilon_1^2 t^{1-2\a-\d}$ for the first term in . 
For the second term, we can obtain a bound of $\e_1^3 t^\d$ by using $$\| e^{is\Delta}(xf) \, n \|_{L^2} \le \|xf\|_{L^2}\|n\|_{L^\infty} \lesssim \e^2_1 s^{-1 + \delta} \, ,$$ combined with the linear dispersive estimate and the assumption on $n_1$ in . This term is defined in . We pair the multiplier with $\partial_s g = e^{is\Lambda} \Lambda^{\g} |u|^2$, use an $L^6-L^3$ estimate, Sobolev’s embedding and , to find the bound $$\begin{aligned} {\| \eqref{eqn: F22} \|}_{L^2} & \lesssim \int_0^t s\|e^{is\Delta}xf\|_{L^6} {\|\Lambda^{-1+\g} {|u|}^2 \|}_{L^3} \,\mathrm{d}s \lesssim \varepsilon_1 \int_0^t s^{1-2\a-\d} {\| {|u|}^2 \|}_{L^{3/(2+\g)}} \,\mathrm{d}s \\ & \lesssim \varepsilon_1^2 \int_0^t s^{1-2\a-\d} {\| u \|}_{L^2} {\| u \|}_{L^{6/(1-2\g)}} \,\mathrm{d}s \lesssim \varepsilon_1^2 t^{1-2\a-\d} \, .\end{aligned}$$ This term is defined in . We can use $L^6-L^3$ and $L^\infty-L^2$ as before to get $$\begin{aligned} {\| \eqref{eqn: F23} \|}_{L^2} &\lesssim t\|e^{it\Delta} xf\|_{L^6}\|\Lambda^{-1}e^{it\Lambda}(n_0+G)\|_{L^3} \\ & +t\|xf\|_{L^2}\|e^{it\Lambda}\Lambda^{-2}n_1\|_{L^\infty} \lesssim \varepsilon_1^2 t^{1-4\alpha-5\delta} \, . \end{aligned}$$ This was defined in . It is dealt with by $L^6-L^3$ and $L^\infty-L^2$, in an identical fashion to the previous case. ### Estimate for $F_3$ This was defined in . It can be dealt with by using an $L^2-L^\infty$ estimate leading to $$\begin{aligned} \label{estF_3} \begin{split} {\| \eqref{eqn: F3} \|}_{L^2} &\lesssim \int_0^t {\|x^2f\|}_{L^2} {\|e^{is\Lambda}(n_0+G)+e^{is\Lambda}\Lambda^{-1}n_1\|}_{L^\infty}\,\mathrm{d}s\\ &\lesssim \int_0^t \|x^2f\|_{L^2}\|n\|_{L^\infty}\,\mathrm{d}s +\int_0^t \|x^2f\|_{L^2}\|\Lambda n_1\|_{\dot{B}^0_{1,1}}\,\mathrm{d}s \lesssim \varepsilon_1^2 t^{1-2\alpha-\delta} \, . \end{split}\end{aligned}$$ At this point all terms are accounted for. $\hfill \Box$ [10]{} Bourgain, J. and Colliander, J. On the Well-posedness of the Zakharov system. 11 (1996), 515-546. 
Bejenaru, I., Herr, S., Holmer, J., and Tataru, D. On the 2D Zakharov system with $L^2$-Schrödinger data. 22 (2009), no. 5, 1063-1089. Bejenaru, I. and Herr, S. Convolutions of singular measures and applications to the Zakharov system. 261 (2011), no. 2, 478-506. Ghidaglia, J-M. and Saut, J-C. . , 3 (1990), 475-506. Ghidaglia, J-M. and Saut, J-C. . L. Debnath Ed. World Scientific (1992) 83-97. Germain, P., Masmoudi, N. and Shatah, J. Global solutions for 3D quadratic Schrödinger equations. , 3 (2009), 414-432. Germain P., Masmoudi N. and Shatah, J. . , 175 (2012), no. 2, 691-754. Ginibre, J., Tsutsumi, Y. and Velo, G. On the Cauchy problem for the Zakharov system. 151 (1997), no. 2, 384-436. Ginibre, J. and Velo, G. Scattering theory for the Zakharov system. 35 (2006), no. 4, 865-892. Ginibre, J. and Velo, G. Long range scattering and modified wave operators for the wave-Schrödinger system. 3 (2002), 537-612. Ginibre, J. and Velo, G. Long range scattering and modified wave operators for the wave-Schrödinger system II. 4 (2003), 973-999. Ginibre, J. and Velo, G. Long range scattering and modified wave operators for the wave-Schrödinger system III. (2005), 101-125. Ginibre, J. and Velo, G. Long range scattering for the wave-Schrödinger system revisited. 252 (2012), no. 2, 1642-1667. Guo, Z. and Nakanishi, K. Small energy scattering for the Zakharov system with radial symmetry. , 2012. Guo, Z., Nakanishi, K. and Wang, S. Global dynamics below the ground state energy for the Zakharov system in the 3D radial case. 238 (2013), 412-441. Guo, Z., Lee, S., Nakanishi, K. and Wang, C. Generalized Strichartz estimates and scattering for 3D Zakharov system. to appear in [*Comm. Math. Phys.*]{}. Gustafson, S., Nakanishi, K. and Tsai, T. Scattering for the Gross-Pitaevskii equation in 3 dimensions. 11 (2009), no. 4, 657-707. Hani, Z., Pusateri, F. and Shatah, J. Scattering for the Zakharov system in three dimensions. 322 (2013), no. 3, 731-753. Hayashi, N. and Naumkin, P. 
Asymptotics for large time of solutions to the nonlinear Schrödinger and Hartree equations. , 120 (1998), 369-389. Hayashi, N. and Naumkin, P. Large time behavior of solutions for the modified Korteweg-de Vries equation. , (1999), no. 8, 395-418. Hayashi, N. and Naumkin, P. Large time asymptotics of solutions to the generalized Benjamin-Ono equation. , 351 (1999), no. 1, 109-130. Ionescu, A. and Pausader, B. The Euler-Poisson system in 2D: global stability of the constant equilibrium solution. (2013), 761-826. Ionescu, A. and Pausader, B. Global solutions of quasilinear systems of Klein-Gordon equations in 3D. , to appear. arXiv:1208.2661. Ionescu, A. and Pusateri, F. Nonlinear fractional Schrödinger equations in one dimension. , (2012). Ionescu, A. and Pusateri, F. Global existence of solutions for the gravity water waves system in 2d. , to appear. arXiv:1303.5357. Kato, J. and Pusateri, F. A new proof of long range scattering for critical nonlinear Schrödinger equations. , 24 (2011), no. 9-10, 923-940. Kenig, C. E., Ponce, G. and Vega, L. On the Zakharov and Zakharov-Schulman systems. 127 (1995), no. 1, 204-234. Kenig, C. E., Wang, W. Existence of local smooth solution for a generalized Zakharov system. 4 (1998), no. 4-5, 469-490. Klainerman, S. Uniform decay estimates and the Lorentz invariance of the classical wave equation. , 38 (1985), no. 3, 321-332. Klainerman, S. . Nonlinear systems of partial differential equations in applied mathematics, Part 1 (Santa Fe, N.M., 1984), 293-326. Lectures in Appl. Math., 23, Amer. Math. Soc., Providence, RI, 1986. Masmoudi, N. and Nakanishi, K. Energy convergence for singular limits of Zakharov type systems. 172 (2008), no. 3, 535-583. Musher, S. L., Rubenchik, A. M. and Zakharov, V. E. Hamiltonian approach to the description of nonlinear plasma phenomena. 129 (1985), no. 5, 285-366. Ozawa, T. and Tsutsumi, Y. Global existence and asymptotic behavior of solutions for the Zakharov equations in three space dimensions. 
3 (1993/94), Special Issue, 301-334. Pusateri, F. and Shatah, J. Space-time resonances and the null condition for (first order) systems of wave equations. arXiv:1109.5662v2. [*Comm. Pure and Appl. Math.*]{}, to appear. Schulman, I. and Zakharov, V.E. Integrability of nonlinear systems and perturbation theory. (1991) Springer Ser. Nonlinear Dyn., Springer, Berlin, 185-250. Shatah, J. Normal forms and quadratic nonlinear [K]{}lein-[G]{}ordon equations. , 38(5):685-696, 1985. Shatah, J. and Struwe, M. Geometric wave equations, volume 2 of [*Courant Lecture Notes in Mathematics*]{}. New York University, Courant Institute of Mathematical Sciences, New York, 1998. Shimomura, A. Scattering theory for Zakharov equations in three space dimensions with large data. , 6 (2004), 881-899. Shimomura, A. Modified wave operators for the coupled wave-Schrödinger equations in three space dimensions. , 9 (2003), no. 6, 1571-1586. Shimomura, A. Modified wave operators for Maxwell-Schrödinger equations in three space dimensions. 4 (2003) 661-683. Sulem, C. and Sulem, P.L. The nonlinear Schrödinger equation. Self-focusing and wave collapse. , 139. Springer-Verlag, New York, 1999. Texier, B. Derivation of the Zakharov equations. 184 (2007), no. 1, 121-183. Tsutsumi, Y. Global existence and asymptotic behavior of solutions for the Maxwell-Schrödinger equations in three space dimensions. , 151 (1993), no. 3, 543-576. Zakharov, V.E. Collapse of Langmuir waves. 62, 1745-1751 (1972) \[[*Sov. Phys. JETP*]{} 35, 908-914 (1972)\]. [^1]: The second author was partially supported by NSF grant DMS 1265875. [^2]: Here $\langle x \rangle$ is used to denote $\sqrt{1+|x|^2}$, and the Besov norm $\dot B^0_{1,1}$ is defined in . [^3]: One can choose $\a \sim 1/6$ for $\g \geq 1/2$. [^4]: One important issue that arises in the case $\g < 0$ is the presence of a “singularity” in the nonlinear term of the second equation of , which is apparent when is written as the first order system below. 
While some of the arguments we present are still valid in this case, some others break down and would require substantial modifications to yield the same final result. [^5]: We believe it is likely that the techniques used in [@GN] apply to in the parameter range that we consider here, to obtain scattering for small data in the energy space under the assumption of radial symmetry. [^6]: For some examples of modified scattering results in weighted spaces, see the papers [@HN; @HNKdV; @HNBO; @KP; @FNLS] which deal with semilinear equations, [@Tsutsumi; @ShimoMS] for results on the final value problem for field equations with long-range potentials, e.g. Maxwell-Schrödinger, and [@2dWW] for a recent example involving a quasilinear system, the water waves equations. [^7]: The integration by parts argument in Fourier space is related to the space-time resonance method [@GMS1; @GMS2], and was used to deal with wave equations satisfying a nonresonance condition, akin to Klainerman’s null condition [@K1], in [@PS]. [^8]: Note that the first inequality in places a greater restriction on the size of $\alpha$ precisely when $0<\gamma<1/3$, whereas for $\gamma\geq1/3$ we can choose $\alpha>0$ arbitrarily close to $1/6$. [^9]: Local existence in time of solutions belonging to weighted Sobolev spaces can be established by standard techniques. [^10]: Without loss of generality we can restrict our attention to times $t \geq 1$. [^11]: $|\eta| m_{-1}(\eta)$ is bounded on $L^p$, $1<p<\infty$.
--- abstract: 'We use the combination of the 2 Ms [*Chandra*]{} X-ray image, new $J$ and $H$ band images, and the [*Spitzer*]{} IRAC and MIPS images of the [*Chandra*]{} Deep Field-North to obtain high spectroscopic and photometric redshift completeness of high and intermediate X-ray luminosity sources in the redshift interval $z=2-3$. We measure the number densities of $z=2-3$ active galactic nuclei (AGNs) and broad-line AGNs in the rest-frame $2-8$ keV luminosity intervals $10^{44}-10^{45}$ and $10^{43}-10^{44}$ ergs s$^{-1}$ and compare with previous lower redshift results. We confirm a decline in the number densities of intermediate-luminosity sources at $z>1$. We also measure the number density of $z=2-3$ AGNs in the luminosity interval $10^{43}-10^{44.5}$ and compare with previous low and high-redshift results. Again, we find a decline in the number densities at $z>1$. In both cases, we can rule out the hypothesis that the number densities remain flat to $z=2-3$ at above the $5\sigma$ level.' author: - 'A. J. Barger,$\!$ L. L. Cowie$\!$' title: 'The Number Density of Intermediate and High Luminosity Active Galactic Nuclei at $z\sim 2-3$' --- Introduction {#secintro} ============ Low-redshift hard X-ray luminosity functions have been well determined from the combination of highly spectroscopically complete deep and wide-area [*Chandra*]{} X-ray surveys. At $z<1.2$, the hard X-ray luminosity functions for active galactic nuclei (AGNs) of all spectral types and for broad-line AGNs alone are both well described by pure luminosity evolution, with $L_\ast$ evolving as $(1+z)^{3.2}$ and $(1+z)^{3.0}$, respectively (Barger et al. 2005). AGNs decline in luminosity by almost an order of magnitude over this redshift range. Barger et al. (2005) compared directly their broad-line AGN hard X-ray luminosity functions with the optical QSO luminosity functions from Croom et al. (2004) and found that the bright end luminosity functions agree extremely well at all redshifts. 
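The pure-luminosity-evolution scaling quoted above amounts to a one-line correction of observed luminosities to a common epoch, as used for the $z=1$ comparison described below. This is an illustrative sketch only: `evolve_to_z1` is a hypothetical helper, and `k=3.2` is the all-spectral-type exponent quoted in the text (the Croom et al. law actually applied by Barger et al. is slightly steeper).

```python
def evolve_to_z1(L_x, z, k=3.2):
    """Rescale a luminosity at redshift z to the value that pure
    luminosity evolution, L* ~ (1+z)^k, would assign the same source
    at z = 1 (k = 3.2 is the all-AGN exponent quoted in the text)."""
    return L_x * (2.0 / (1.0 + z)) ** k

# Going from z ~ 0 to z = 1, L* grows by 2**3.2 ~ 9, i.e. AGNs decline
# in luminosity by almost an order of magnitude toward z = 0.
```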
However, the optical QSO luminosity functions do not probe faint enough to see the downturn in the broad-line AGN hard X-ray luminosity functions at low luminosities and may be missing some sources at the very lowest luminosities to which they probe. The Croom et al. (2004) pure luminosity evolution is slightly steeper than that of Barger et al. (2005), but within the uncertainties, the two determinations are consistent over the $z=0-1.2$ redshift interval. To investigate whether pure luminosity evolution continues to hold at higher redshifts, Barger et al. (2005) used the Croom et al. (2004) evolution law (which was fitted over the wider redshift range $z=0.3-2.1$) to correct all of their broad-line AGN hard X-ray luminosities to the values they would have at $z=1$. They then computed the broad-line AGN hard X-ray luminosity functions over the wide redshift intervals $z=0.2-0.7$, $0.7-1.5$, and $1.5-2.5$. Barger et al. (2005) found that the lower redshift luminosity functions matched each other throughout the luminosity range, while the highest redshift luminosity function matched the lower redshift functions only at the bright end, where the optical QSO determinations were made. They therefore concluded that there are fewer intermediate X-ray luminosity broad-line AGNs in the $z=1.5-2.5$ redshift interval, and hence that the pure luminosity evolution model cannot be carried reliably to the higher redshifts. This was consistent with other analyses made of less complete samples of all spectral types together, which had found evidence for peaks and subsequent declines in the number densities of both intermediate-luminosity (e.g., Cowie et al. 2003; Barger et al. 2003a; Hasinger 2003; Fiore et al. 2003; Ueda et al. 2003) and high-luminosity (Silverman et al. 2005) sources, with the intermediate-luminosity sources peaking at lower redshifts than the high-luminosity sources. Recently, Nandra et al. 
(2005; hereafter, N05) have questioned the evidence for a decline in the number densities of intermediate-luminosity AGNs at $z>1$. By combining deep X-ray data with the Lyman break galaxy (LBG) surveys of Steidel et al. (2003), they argue that the number densities of intermediate-luminosity AGNs are roughly constant with redshift above $z=1$. Only with extremely deep X-ray data and highly complete redshift identifications can we address the above controversy about a decline. In this paper, we measure the $z=2-3$ number densities of both high and intermediate X-ray luminosity sources in the 2 Ms [*Chandra*]{} Deep Field-North (CDF-N). In addition to our highly complete spectroscopic redshift identifications, we also use the recently released [*Spitzer*]{} Great Observatories Origins Deep Survey-North IRAC and MIPS data, in combination with deep $J$ and $H$ band data obtained from the ULBCAM instrument on the University of Hawaii’s 2.2 m telescope, to estimate accurate infrared (IR) photometric redshifts for the X-ray sources that could not be spectroscopically or optically photometrically identified. We assume $\Omega_M=0.3$, $\Omega_\Lambda=0.7$, and $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ throughout. Data {#secobs} ==== The deepest X-ray image is the $\approx2$ Ms CDF-N exposure centered on the Hubble Deep Field-North taken with the ACIS-I camera on [*Chandra*]{}. Alexander et al. (2003) merged samples detected in seven X-ray bands to form a catalog of 503 point sources over an area of 460 arcmin$^2$. Near the aim point, the limiting fluxes are $\approx 1.5\times 10^{-17}$ ergs cm$^{-2}$ s$^{-1}$ ($0.5-2$ keV) and $\approx 1.4\times 10^{-16}$ ergs cm$^{-2}$ s$^{-1}$ ($2-8$ keV). The ground-based, wide-field optical data of the CDF-N are summarized in Capak et al. (2004), and the [*HST*]{} ACS GOODS-N data are detailed in Giavalisco et al. (2004). 
The [*Spitzer*]{} IRAC and MIPS data are from the [*Spitzer*]{} Legacy data products release and are presented in Dickinson et al. (2005). The new ULBCAM $J$ and $H$ band data, which cover the whole CDF-N area to $1\sigma$ depths of $25.2$ in $J$ and $24.2$ in $H$ (these are measured in $3''$ apertures and corrected to total magnitudes), are described in Trouille et al. (2005). All magnitudes in this paper are in the AB system. Most of the spectroscopic redshifts for the X-ray sources are from Barger et al. (2002, 2003b, 2005), but we also obtained some additional redshifts with the DEIMOS spectrograph on the Keck II telescope during the Spring 2005 observing season. One spectroscopic redshift ($z=2.578$) is taken from Chapman et al. (2005). X-ray Incompleteness -------------------- We consider in our analysis two restricted, uniform, flux-limited X-ray samples. The most important consideration in choosing our soft X-ray flux limits was that there be no significant X-ray incompleteness. In Figure \[fig1\], we show $0.5-2$ keV flux versus off-axis radius for the full 2 Ms CDF-N X-ray sample of Alexander et al. (2003) ([*squares and diamonds*]{}). We use small squares to denote sources that are detected in the 2 Ms survey but not in the 1 Ms survey (Brandt et al. 2001). For the bright subsample, we consider sources in a $10'$ radius region with fluxes above $1.7\times 10^{-15}$ ergs cm$^{-2}$ s$^{-1}$ ($0.5-2$ keV), and for the deep subsample, we consider sources in an $8'$ radius region with fluxes above $1.7\times 10^{-16}$ ergs cm$^{-2}$ s$^{-1}$ ($0.5-2$ keV). We show these flux and radius limits with solid lines. The two subsamples comprise 160 sources, 8 of which are stars. We can see from Figure \[fig1\] that the flux limits of our subsamples are well above the flux limit of the 2 Ms survey ([*dotted curve*]{}) at all radii, suggesting that there should be very little X-ray incompleteness in our sample. 
Indeed, when we consider the 160 sources in the 2 Ms exposure that lie within our two subsamples, we find that only five sources were not already detected in the 1 Ms exposure. If we were using the 1 Ms catalog in our analysis rather than the 2 Ms catalog, then this would correspond to a 3% incompleteness. With the 2 Ms exposure, we expect X-ray incompleteness to be negligible. Redshift Interval ----------------- Having selected a highly complete X-ray sample, our next concern was that we sample the full luminosity ranges of interest without clipping out any sources. For the traditional intermediate and high X-ray luminosity cut-offs of $L_{\rm 2-8~keV}=10^{43}$ and $10^{44}$ ergs s$^{-1}$, this very naturally sets the redshift interval to $z=2-3$. We can see this from Figure \[fig2\], where we show rest-frame $2-8$ keV luminosity versus redshift for the CDF-N X-ray sources in our (a) bright and (b) faint subsamples. We determined the rest-frame $2-8$ keV luminosities from the observed-frame $0.5-2$ keV fluxes, assuming an intrinsic $\Gamma=1.8$ spectrum. At $z=3$, observed-frame $0.5-2$ keV corresponds to rest-frame $2-8$ keV, providing the best possible match to lower redshift data. Moreover, the $0.5-2$ keV [*Chandra*]{} images are deeper than the $2-8$ keV images, so using the observed-frame soft X-ray fluxes at high redshifts results in increased sensitivity. We only show the luminosities at low redshifts for illustrative purposes, since normally one would use the observed-frame $2-8$ keV fluxes to calculate the $2-8$ keV luminosities at these redshifts. The soft X-ray flux limits of our faint and bright subsamples [*(solid curves)*]{} correspond to rest-frame $2-8$ keV luminosities of $10^{43}$ ergs s$^{-1}$ and $10^{44}$ ergs s$^{-1}$, respectively, at $z=3$. 
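The flux-to-luminosity conversion used here is the standard power-law K-correction, $L = 4\pi d_L^2\, f_{\rm obs}\,(1+z)^{\Gamma-2}$. A self-contained sketch, assuming the flat $\Omega_M=0.3$, $\Omega_\Lambda=0.7$, $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ cosmology adopted in the Introduction (not code from the paper):

```python
import math

C_KM_S, H0, OM, OL = 299792.458, 70.0, 0.3, 0.7
CM_PER_MPC = 3.0857e24

def lum_distance_mpc(z, n=2000):
    """Luminosity distance in flat LCDM via a trapezoidal integral of 1/E(z)."""
    dz = z / n
    inv_e = lambda zz: 1.0 / math.sqrt(OM * (1.0 + zz) ** 3 + OL)
    d_c = dz * (0.5 * (inv_e(0.0) + inv_e(z))
                + sum(inv_e(i * dz) for i in range(1, n)))
    return (1.0 + z) * (C_KM_S / H0) * d_c

def rest_frame_lx(f_obs, z, gamma=1.8):
    """Rest-frame band luminosity (ergs/s) from an observed-frame band flux
    (ergs/cm^2/s) for an intrinsic power law with photon index gamma; at
    z = 3 the observed 0.5-2 keV band maps exactly onto rest-frame 2-8 keV."""
    d_l = lum_distance_mpc(z) * CM_PER_MPC
    return 4.0 * math.pi * d_l ** 2 * f_obs * (1.0 + z) ** (gamma - 2.0)
```

With these numbers, the deep-subsample limit of $1.7\times10^{-16}$ ergs cm$^{-2}$ s$^{-1}$ at $z=3$ comes out near $10^{43}$ ergs s$^{-1}$, consistent with the correspondence described above.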
Thus, we will not be clipping out any sources in either the $L_{\rm 2-8~keV}=10^{43}$ to $10^{44}$ ergs s$^{-1}$ interval in the faint subsample, or the $L_{\rm 2-8~keV}=10^{44}$ to $10^{45}$ ergs s$^{-1}$ interval in the bright subsample, if we choose $z=2-3$ for our analysis. Once we have chosen our soft X-ray flux and radius limits and our redshift interval, we can plot the spectroscopically identified $z=2-3$ sources in the flux-radius plane and see how many lie in our restricted subsamples. This is shown in Figure \[fig1\], where we denote with large squares the sources that have been spectroscopically identified to lie in the redshift interval $z=2-3$. The soft X-ray flux and radius limits for our two subsamples were chosen without regard to this distribution, as can be seen from the figure. There are 15 sources in our two subsamples with spectroscopic redshifts in the interval $z=2-3$. Probability of Misidentifications {#secmisid} --------------------------------- Another issue we need to address is how many of the X-ray sources with detected optical/near-infrared (NIR) counterparts we may be misidentifying. Of the 160 sources in the 2 Ms exposure that lie within our two subsamples, 138 are detected above the $3\sigma$ limit of 23 in $H$. Thus, only 22 of the sources in our subsamples are not detected at the $3\sigma$ level in the NIR. By randomizing the source positions and remeasuring the $H$ band magnitudes many times, we found an 11% probability of misidentification. Thus, of the 138 sources in our subsamples with $3\sigma$ $H$ band detections, we would expect that about 15 might be contaminated by the projection of random, superposed objects. Of the 22 sources that are not detected at the $3\sigma$ level in $H$, 11 are detected in the IRAC $3.6\mu$m band at $<23$ magnitude, with a 4% probability of misidentification based on randomized measurements. This corresponds to about one-half of a source. 
Of the remaining 11 sources that are not detected at $3.6\mu$m, 5 are detected at $R<25.5$, with a 10% probability of misidentification based on randomized measurements. Again, this corresponds to about one-half of a source. Thus, in total, we have optical/NIR identifications for 154 of the 160 sources, eight of which are stars, and we expect about 16 (or just over 10%) of these to be false. Even if a large fraction of the 16 real sources suffering from random projections were in the $z=2-3$ range, which is not very likely, the correction to the number densities we derive subsequently would be small. Photometric Redshifts {#secz} ===================== Barger et al. (2002, 2003b) computed photometric redshifts for the CDF-N X-ray sources using broadband galaxy colors and the Bayesian code of Benítez (2000). They only used sources with probabilities for the photometric redshift of greater than 90%, resulting in about an 85% success rate for photometric identifications. These redshifts are robust and surprisingly accurate (often to better than 8% when compared with the spectroscopic redshifts) for non–broad-line AGNs. We now have the advantage of the [*Spitzer*]{} IRAC $3.6\mu$m, $4.5\mu$m, $5.8\mu$m, and $8.0\mu$m data, as well as the NIR $J$ and $H$ band data, for estimating photometric redshifts. Barger et al. (2005) classified the X-ray sources into four spectral classes: absorbers (i.e., no strong emission lines \[EW(\[OII\])$<3$ Å or EW(H$\alpha+$NII)$<10$ Å\]), star formers (i.e., strong Balmer lines and no broad or high-ionization lines), high-excitation (HEX) sources (i.e., \[NeV\] or CIV lines or strong \[OIII\] \[EW(\[OIII\] 5007 Å$)>3$ EW(H$\beta)$\]), and broad-line AGNs (i.e., optical lines having FWHM line widths greater than 2000 km s$^{-1}$). The measured spectral energy distributions (SEDs) of the X-ray sources in each of these spectral classes turn out to be remarkably tight. 
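Because the class-averaged SEDs are so tight, template fitting of this kind reduces, in essence, to a $\chi^2$ scan over a redshift grid with a free normalization at each trial redshift. A toy sketch (a synthetic single-feature template, not one of the four class-averaged SEDs used in the paper; all names hypothetical):

```python
import math

def photo_z(obs_waves, obs_fluxes, template, z_grid):
    """Redshift on z_grid minimizing chi^2 between the observed SED and
    the redshifted template, with the amplitude solved analytically at
    each trial z.  template(w) returns flux at rest wavelength w."""
    best_z, best_chi2 = None, float("inf")
    for z in z_grid:
        model = [template(w / (1.0 + z)) for w in obs_waves]
        den = sum(m * m for m in model)
        a = sum(f * m for f, m in zip(obs_fluxes, model)) / den if den else 0.0
        chi2 = sum((f - a * m) ** 2 for f, m in zip(obs_fluxes, model))
        if chi2 < best_chi2:
            best_z, best_chi2 = z, chi2
    return best_z

# Toy check: a template with one spectral feature at rest-frame 5000 A,
# "observed" through seven broad bands at z = 2.
template = lambda w: math.exp(-0.5 * ((w - 5000.0) / 500.0) ** 2)
waves = [9000.0, 10500.0, 12000.0, 13500.0, 15000.0, 16500.0, 18000.0]
fluxes = [template(w / 3.0) for w in waves]
z_est = photo_z(waves, fluxes, template, [i * 0.05 for i in range(81)])
```

In broad-band fitting it is the breaks and features in the templates that pin down the redshift; a featureless power law would be degenerate under the free amplitude.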
Thus, we can construct templates from the average SEDs for each class and obtain an [*IR photometric redshift*]{} by fitting the templates to the optical through mid-infrared (MIR) data. We note that the template fitting is insensitive to the spectral class for sources without strong AGN signatures. In Figure \[fig3\], we compare both (a) the optical photometric redshifts from Barger et al. (2003), and (b) our new IR photometric redshifts with the spectroscopic redshifts. Both photometric redshift techniques tend to fail on some of the broad-line AGNs [*(large open squares)*]{}, but, fortunately, broad-line AGNs are straightforward to identify spectroscopically, even in the so-called redshift “desert” at $z\sim 1.5-2$, and nearly all of the CDF-N X-ray sources have now been spectroscopically observed. While the Bayesian optical photometric redshift technique gives slightly tighter values at low redshifts due to the presence of strong spectral features and breaks in the templates from Coleman, Wu, & Weedman (1988) and Kinney et al. (1996), it does not do so well on the $z>2$ sources. In our subsequent analysis, we adopt the more robust IR photometric redshifts when a spectroscopic redshift is not available. In Figure \[fig4\], we show histograms of the spectroscopic [*(light shading)*]{} and IR photometric [*(dark shading)*]{} redshift identifications versus $R$ magnitude for the sources in our (a) bright and (b) faint subsamples. The photometric redshifts not only tell us which of the spectroscopically unidentified sources are likely to have redshifts within the $z=2-3$ redshift interval, but they also enable us to remove contaminants that are really at lower redshifts. In Figure \[fig5\], we show four example measured X-ray source SEDs [*(solid squares)*]{} redshifted to the rest-frame using the printed spectroscopic or IR photometric redshift. 
We have superimposed on each SED the average spectral template chosen by the IR photometric redshift fitting for that source. We selected these four examples to illustrate (a, b) two spectroscopically identified sources with different spectral classes (also printed), both of which are in the redshift interval $z=2-3$, (c) one IR photometric redshift of a lower redshift source, and (d) one IR photometric redshift of a source in the redshift interval $z=2-3$. Of the 44 sources with $f_{0.5-2~{\rm keV}}>1.7\times 10^{-15}$ ergs cm$^{-2}$ s$^{-1}$ within a $10'$ radius, 38 have spectroscopic redshifts. Of the remaining 6 sources, 5 have a robust IR photometric redshift, none of which lie in the redshift interval $z=2-3$. Of the 136 sources with $f_{0.5-2~{\rm keV}}>1.7\times 10^{-16}$ ergs cm$^{-2}$ s$^{-1}$ within an $8'$ radius, 88 have spectroscopic redshifts. Of the remaining 48, 41 have robust IR photometric redshifts, 12 of which lie in the redshift interval $z=2-3$. That leaves 8 sources in total with neither a spectroscopic nor a photometric redshift; 6 were not clearly detected in the $H$, IRAC $3.6\mu$m, or $R$ bands (see §\[secmisid\]), and the photometric redshifts are not obvious from the SEDs for the remaining two. AGN number densities at $z=2-3$ {#secnum} =============================== Six of the eight sources without spectroscopic or photometric redshifts could, if they were assigned redshifts at the center of the redshift interval $z=2-3$, have luminosities that would place them into one of our two luminosity intervals (one in the high-luminosity interval, and five in the intermediate-luminosity interval). In Figure \[fig2\], we denote the X-ray sources with spectroscopic redshifts by small, solid diamonds, and the broad-line AGNs by large, solid diamonds. We use open squares to denote the sources with IR photometric redshifts, and we use open circles to denote the unidentified sources (these are plotted at $z=2.5$). 
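The number densities computed next are counts over the comoving volume of the survey cone, with Poissonian errors from the raw counts. A sketch under the adopted cosmology ($\Omega_M=0.3$, $\Omega_\Lambda=0.7$, $H_0=70$); the helper names are hypothetical, and the true survey areas are set by the flux-radius cuts and sensitivity maps rather than a simple circle:

```python
import math

C_KM_S, H0, OM, OL = 299792.458, 70.0, 0.3, 0.7

def comoving_distance_mpc(z, n=2000):
    """Line-of-sight comoving distance for flat LCDM (trapezoid rule)."""
    dz = z / n
    inv_e = lambda zz: 1.0 / math.sqrt(OM * (1.0 + zz) ** 3 + OL)
    return (C_KM_S / H0) * dz * (0.5 * (inv_e(0.0) + inv_e(z))
                                 + sum(inv_e(i * dz) for i in range(1, n)))

def number_density(n_src, area_arcmin2, z_lo, z_hi):
    """Comoving number density (Mpc^-3) and Poisson 1-sigma error for
    n_src sources over a solid angle (arcmin^2) between z_lo and z_hi."""
    sr = area_arcmin2 * (math.pi / (180.0 * 60.0)) ** 2  # arcmin^2 -> sr
    vol = (sr / 3.0) * (comoving_distance_mpc(z_hi) ** 3
                        - comoving_distance_mpc(z_lo) ** 3)
    return n_src / vol, math.sqrt(n_src) / vol

# E.g., 8 high-luminosity sources over a nominal pi*(10')^2 area at z = 2-3.
n, err = number_density(8, math.pi * 100.0, 2.0, 3.0)
```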
We computed the AGN number densities at $z=2-3$ for the spectroscopically or photometrically identified sources in the rest-frame $2-8$ keV luminosity intervals $10^{43}-10^{44}$ ergs s$^{-1}$ and $10^{44}-10^{45}$ ergs s$^{-1}$ using the appropriate areas and volumes. These number densities are shown in Figure \[fig6\]a as a solid diamond and a solid circle, respectively. The Poissonian $1\sigma$ uncertainties are based on the number of sources in each redshift interval. Our high-luminosity sample consists of 8 sources in this redshift interval, all of which are broad-line AGNs. Our intermediate-luminosity sample consists of 13 sources in this redshift interval, none of which is a broad-line AGN. We also computed upper limits on the number densities at $z=2-3$ by assigning redshifts of 2.5 to the six unidentified sources described above. We denote these upper limits by solid ($10^{43}-10^{44}$ ergs s$^{-1}$) and dashed ($10^{44}-10^{45}$ ergs s$^{-1}$) horizontal lines in Figure \[fig6\]a. For comparison, we also plot in Figure \[fig6\]a the AGN number densities at $z<1.5$ for our two luminosity intervals. These number densities were determined from the spectroscopic sample of Barger et al. (2005), which includes the CDF-N (Barger et al. 2003), CDF-S (Szokoly et al. 2004), and CLASXS (Steffen et al. 2004) data. We do not expect cosmic variance to be an issue on these scalelengths (Yang et al. 2005). We calculated the rest-frame $2-8$ keV luminosities for these sources using the observed-frame $2-8$ keV band, assuming an intrinsic $\Gamma=1.8$ spectrum. The Poissonian $1\sigma$ uncertainties are again based on the number of sources in each redshift interval. The Barger et al. (2005) spectroscopic sample is substantially complete at these redshifts and luminosities, and we expect that any incompleteness correction would be small. In Figure \[fig6\]a, we see that the AGN number densities in both luminosity intervals show a steep rise at $z<1$. 
The intermediate-luminosity number densities [*(diamonds)*]{} then show a marked decline to $z=2-3$, while the high-luminosity number densities [*(circles)*]{} remain relatively constant to $z=2-3$. If the number densities in the $L_{\rm 2-8~keV}=10^{43}-10^{44}$ ergs s$^{-1}$ interval were constant with redshift and equal to the peak value of $\sim 10^{-4}$ Mpc$^{-3}$ seen just below $z=1$ in this luminosity interval, then we would expect 66 sources in the $z=2-3$ interval at off-axis radii less than $8'$. In fact, we only have 13 sources with spectroscopic and photometric redshifts in this redshift interval in our faint subsample. Even if we were to add in all eight of the unidentified sources, this number would only rise to 21. Thus, we can reject the hypothesis that the number densities do not decline with redshift at above the $5\sigma$ level. In Figure \[fig6\]b, we show the broad-line AGN number densities for the same two luminosity intervals. The high-luminosity number densities show a dramatic rise at $z<1.5$ [*(open triangles)*]{}, and then a flattening to $z=2-3$ [*(solid triangle)*]{}, while the intermediate-luminosity number densities remain relatively constant at $z<1.5$ [*(open squares)*]{}, and then decline to $z=2-3$, where we only have an upper limit [*(solid square and arrow)*]{}. The $z<1.5$ behavior can be understood in the context of the broad-line AGN luminosity functions given in Figure 18 of Barger et al. (2005). Over the $z=0-1.5$ redshift interval, the evolution of the broad-line AGN luminosity function is well defined by pure luminosity evolution with a rapid increase of luminosity with redshift. As a consequence, the $L_{\rm 2-8~keV}=10^{44}-10^{45}$ ergs s$^{-1}$ interval, which lies on the steeply declining high-luminosity end of the luminosity function throughout this redshift interval, rises rapidly. 
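The $5\sigma$ rejection quoted above can be verified directly from Poisson statistics: with 66 sources expected under the constant-density hypothesis and at most 21 observed, the chance of so few is negligible. A minimal check (exact Poisson lower tail, summed via log-factorials for numerical stability):

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), each term evaluated in log space."""
    return sum(math.exp(-lam + i * math.log(lam) - math.lgamma(i + 1))
               for i in range(k + 1))

expected = 66   # sources expected at z=2-3 if the z~1 peak density held
observed = 21   # 13 identified sources plus all 8 unidentified ones
p = poisson_cdf(observed, expected)
```

The resulting probability is of order $10^{-10}$, far below the $\sim 3\times10^{-7}$ corresponding to a one-sided $5\sigma$ threshold, so the flat-density hypothesis is indeed excluded at above the $5\sigma$ level.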
In contrast, the $L_{\rm 2-8~keV}=10^{43}-10^{44}$ ergs s$^{-1}$ interval lies just above the peak luminosity in the luminosity function at $z=0$, and the number densities in this luminosity interval stay relatively constant over the $z=0-1.5$ redshift interval. Comparison with Nandra et al. (2005) {#secdisc} ==================================== N05 combined deep X-ray data of the CDF-N (2 Ms) and of the Groth-Westphal Strip (GWS; 200 ks) with the LBG surveys of Steidel et al. (2003) to estimate the number density of intermediate-luminosity AGNs in the interval $z=2.5-3.5$. Their method differs from the X-ray follow-up surveys in that it uses a rest-frame UV-selected sample over a narrow range of redshift with a cosmological volume determined from the optical selection function. The X-ray data are only used to determine whether an LBG hosts an AGN and to calculate an X-ray luminosity. The Groth Strip data are much shallower than the CDF-N data, and they apply an empirically determined 20% correction for X-ray incompleteness. To compare our results directly with those of N05, we used our CDF-N faint subsample to compute the AGN number densities at $z=2-3$ in the same rest-frame $2-8$ keV luminosity interval, $L_{\rm 2-8~keV}=10^{43}-10^{44.5}$ ergs s$^{-1}$, used by N05. We note that by choosing to use a broader luminosity interval that goes to higher luminosities, N05 is pushing into the QSO regime, where we know the number densities continue to rise to higher redshifts than the $z=1$ redshift at which the intermediate-luminosity number densities peak (see Fig. \[fig6\]a). Our $z=2-3$ data point [*(solid diamond in Fig. \[fig7\])*]{} contains 17 sources, while the N05 data point contains 10. We computed an upper limit on our $z=2-3$ number density by assigning redshifts of 2.5 to the five unidentified sources in our faint subsample that could lie in this luminosity interval if they were at that redshift. 
We denote this upper limit by a solid horizontal line in Figure \[fig7\]. N05 claimed that their number density estimate in the $z=1.5-3$ range was approximately an order of magnitude higher than that of Cowie et al. (2003) (and roughly equal to the Cowie et al. 2003 upper limit) and a factor of about 3 higher than that of Barger et al. (2005). N05’s comparison with Cowie et al. (2003) was based on the $L_{\rm 2-8~keV}>10^{42}$ ergs s$^{-1}$ evolution, and hence was not shown on their Figure 1. They did show their estimates of the Barger et al. (2005) space densities, which they obtained by approximately fitting and then integrating the Barger et al. (2005) luminosity functions over the $L_{\rm 2-8~keV}=10^{43}-10^{44.5}$ ergs s$^{-1}$ range and then accounting for the differences in the adopted cosmology and in the X-ray bandpass, but they did not show the upper limits from Barger et al. (2005) on their figure. For the purposes of a clear comparison between the different analyses, we include on Figure \[fig7\] the Cowie et al. (2003) data points [*(solid squares)*]{} recalculated for the $L_{\rm 2-8~keV}=10^{43}-10^{44.5}$ ergs s$^{-1}$ luminosity interval and the N05 cosmology (which is the same cosmology used throughout this paper). We also show the upper limit given by Cowie et al. (2003) for the $z=2-4$ interval [*(dashed horizontal line)*]{}. We denote the N05 result by a solid circle. We can see that there is no inconsistency between the Cowie et al. (2003) points and the present work, nor between the Cowie et al. (2003) upper limit and the N05 result. However, our present work has tightened up the measured number density and decreased the upper limit. We also computed the $z<1.5$ AGN number densities in the $L_{\rm 2-8~keV}=10^{43}-10^{44.5}$ ergs s$^{-1}$ luminosity interval [*(open diamonds in Figure \[fig7\])*]{} using the spectroscopic sample of Barger et al. (2005). The decline in the AGN number densities at $z>1$ is again highly significant. 
Since N05 reached a different conclusion, namely, that their “$z=3$ estimate is consistent with, and supportive of, the hypothesis that AGN activity remained roughly constant between $z=1$ and $z=3$”, we have also included on Figure \[fig7\] the Ueda et al. (2003) points [*(open triangles)*]{} and limits [*(dotted horizontal lines)*]{} that N05 compared with. The Ueda et al. (2003) values are generally similar to the Barger et al. (2005) values, except at $z\sim 1$, where they have a high point. This is a consequence of the fact that the Ueda et al. (2003) data at this redshift come almost entirely from the CDF-N data, which are known to contain a redshift sheet at a median redshift of $z=0.94$ (Barger et al. 2003). The Barger et al. (2005) number densities are smoothed out by the inclusion of the much wider CLASXS survey data (Steffen et al. 2004). We measure a value of $2.6^{+0.8}_{-0.6}\times 10^{-5}$ Mpc$^{-3}$ for the number density of sources in the $L_{\rm 2-8~keV}=10^{43}-10^{44.5}$ ergs s$^{-1}$ range in the $z=2-3$ interval, which may be compared with the N05 value of $4.2^{+1.8}_{-1.4}\times 10^{-5}$ Mpc$^{-3}$. In both cases, the errors only reflect Poissonian uncertainties from the small number of sources and are 68% confidence limits (Gehrels 1986). Although the N05 data point is statistically consistent with the present number density, the larger errors in N05 marginally permit a flat number density, while the present data definitively rule it out. While the present data unambiguously show that the number densities at the intermediate luminosities are declining to high redshifts, it should be noted that N05 were not correct in stating that a flat number density curve would argue against previous suggestions that the majority of black hole accretion occurs at low redshifts, around $z=1$. 
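The asymmetric Poissonian errors quoted above can be reproduced from the Gehrels (1986) approximate single-sided $1\sigma$ (84.13% confidence) limits on a count $N$, $\lambda_u \simeq N + \sqrt{N+0.75} + 1$ and $\lambda_l \simeq N\,(1 - 1/(9N) - 1/(3\sqrt{N}))^3$. A short sketch showing that our 17 sources indeed give the $2.6^{+0.8}_{-0.6}\times10^{-5}$ Mpc$^{-3}$ quoted in the text:

```python
import math

def gehrels_limits(n):
    """Approximate 1-sigma (84.13%) Poisson confidence limits on a count
    n >= 1 (Gehrels 1986 approximations)."""
    upper = n + math.sqrt(n + 0.75) + 1.0
    lower = n * (1.0 - 1.0 / (9.0 * n) - 1.0 / (3.0 * math.sqrt(n))) ** 3
    return lower, upper

n_src = 17            # sources in our z=2-3, 10^43-10^44.5 ergs/s sample
density = 2.6e-5      # Mpc^-3, measured number density
lo, hi = gehrels_limits(n_src)
err_plus = density * (hi - n_src) / n_src     # ~ +0.8e-5 Mpc^-3
err_minus = density * (n_src - lo) / n_src    # ~ -0.6e-5 Mpc^-3
```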
The cosmological time available at high redshifts is much shorter than that at low redshifts, and the integral of the energy density production rate over the cosmic time from $z=0$ to larger $z$ would still be dominated by the sources around $z=1$ for the case of a number density which rises to $z=1$ and is constant at higher redshifts. It is worth briefly discussing the three major advantages that N05 argued their LBG method had over the X-ray follow-up method for determining AGN number densities. First, they claimed that the optical LBG selection function is very well defined. However, as they noted in their paper, their volume element was calculated for galaxies, not AGNs, and the selection functions for both the broad-line AGNs and the narrow-line AGNs are likely to be quite different than that for the non-AGN LBGs, for which they referenced Steidel et al. (2002) and Hunt et al. (2004). They also noted that their method has the disadvantage of missing any AGNs that are too faint to be selected in their optical survey and/or have colors that fail the LBG selection criterion. Hunt et al. (2004) pointed out that three of the X-ray sources with spectroscopic redshifts in the range $2.5<z<3.5$ reported by Barger et al. (2003b) were not picked up in the Steidel et al. (2003) LBG survey, and with the Barger et al. (2005) observations, that number has now doubled to six. N05 argued that this incompleteness should be accounted for in their effective volume calculation (they only expect to pick up about 40% of the objects at $z=2.5-3.5$ compared to a top hat function), but given that this is such a large correction, the uncertainties on the optical LBG selection function for AGNs are a concern. Direct selection of X-ray sources avoids this problem and should be much more robust. 
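The cosmic-time argument above is easy to quantify. For an illustrative flat cosmology ($H_0=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m=0.3$; assumed values, not taken from the text), roughly twice as much time elapses between $z=1$ and $z=0$ as between $z=3$ and $z=1$, so a number density that rises to $z=1$ and stays flat beyond still concentrates the time-integrated accretion at late times. A sketch using $dt/dz = [(1+z)H(z)]^{-1}$:

```python
import math

H0_KMSMPC = 70.0                     # assumed Hubble constant, km/s/Mpc
GYR_PER_HUBBLE = 977.8 / H0_KMSMPC   # 1/H0 in Gyr (977.8 = Gyr km/s/Mpc)

def E(z, om=0.3, ol=0.7):
    return math.sqrt(om * (1.0 + z) ** 3 + ol)

def elapsed_time(z_lo, z_hi, steps=4000):
    """Cosmic time (Gyr) elapsed between redshifts z_hi and z_lo,
    by trapezoidal integration of dt/dz = 1 / [(1+z) H(z)]."""
    dz = (z_hi - z_lo) / steps
    f = lambda z: 1.0 / ((1.0 + z) * E(z))
    s = 0.5 * (f(z_lo) + f(z_hi)) + sum(f(z_lo + i * dz) for i in range(1, steps))
    return s * dz * GYR_PER_HUBBLE

t_low = elapsed_time(0.0, 1.0)    # ~7.7 Gyr between z=1 and today
t_high = elapsed_time(1.0, 3.0)   # ~3.6 Gyr between z=3 and z=1
```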
Second, N05 argued that they can apply a lower X-ray detection threshold for their subsample of LBGs than can be applied when constructing a purely X-ray based catalog, thereby making their X-ray detections more complete. They declared this to be critical, since they are dealing with a population of sources which are close to the detection limit. However, since none of their additional five X-ray detected LBGs in the CDF-N (over and above the four LBGs detected in the CDF-N 2 Ms catalog by Alexander et al. 2003 and discussed in Nandra et al. 2002) have luminosities $L_{\rm 2-8~keV}>10^{43}$ ergs s$^{-1}$, this is not relevant to the issue of determining whether there is a decline in the $L_{\rm 2-8~keV}=10^{43}-10^{44.5}$ ergs s$^{-1}$ range. Finally, N05 stated that their LBG color selection mostly avoids any concerns about spectroscopic incompleteness, since the probability that the non-spectroscopically identified LBGs are $z\sim 3$ galaxies is about 96%. The high spectroscopic and photometric completeness of our $z=2-3$ X-ray sample mitigates this issue for our current analysis. Probably the most important concern about the N05 methodology is that they must correct for two window functions (a very incomplete optical LBG selection function that is not well understood for AGNs, and an X-ray selection function) rather than one. In order to determine whether their optical LBG selection function is the same for galaxies with substantial AGN contributions as it is for those without, they would need to spectroscopically identify a complete X-ray sample. However, once they had undertaken such a spectroscopic survey to calibrate their optical LBG selection, then there would be no need for them to redo the AGN number densities using the LBG method. 
Summary {#secsum} ======= In summary, we constructed two uniform, X-ray flux-limited, highly spectroscopically complete subsamples of 160 sources (eight of which are stars) with negligible X-ray incompleteness from the CDF-N 2 Ms X-ray data to compute AGN number densities in the $z=2-3$ redshift interval for both high ($L_{\rm 2-8~keV}=10^{44}-10^{45}$) and intermediate ($L_{\rm 2-8~keV}=10^{43}-10^{44}$) rest-frame $2-8$ keV luminosity intervals. Of the 152 non-stellar sources, 102 are spectroscopically identified. In this paper, we used new UH 2.2m ULBCAM $J$ and $H$ band imaging and [*Spitzer*]{} IRAC imaging to estimate IR photometric redshifts for the sources without spectroscopic redshifts, increasing our identified non-stellar sample to 144. Our final galaxy subsamples contain no more than eight unidentified sources. We then calculated the $z=2-3$ AGN number densities (for all spectral types together and for broad-line AGNs alone) for both luminosity intervals and compared them with those at $z<1.5$, which we calculated from the spectroscopic sample of Barger et al. (2005). We find clear evidence for a decrease in the $L_{\rm 2-8~keV}=10^{43}-10^{44}$ AGN number densities at $z>1$ and can reject the hypothesis that the number densities remain flat to $z=2-3$ at above the $5\sigma$ level. We thank the referee and Paul Nandra for helpful comments that improved the manuscript. We gratefully acknowledge support from NSF grants AST 02-39425 (A.J.B.) and AST 04-07374 (L.L.C.), the University of Wisconsin Research Committee with funds granted by the Wisconsin Alumni Research Foundation, the Alfred P. Sloan Foundation, and the David and Lucile Packard Foundation (A.J.B.). Alexander, D. M., et al. 2003, , 125, 383 Barger, A. J., Cowie, L. L., Brandt, W. N., Capak, P., Garmire, G. P., Hornschemeier, A. E., Steffen, A. T., & Wehner, E. H. 2002, , 124, 1839 Barger, A. J., Cowie, L. L., Capak, P., Alexander, D. M., Bauer, F. E., Brandt, W. N., Garmire, G. 
P., & Hornschemeier, A. E. 2003a, , 584, L61 Barger, A. J., Cowie, L. L., Mushotzky, R. F., Yang, Y., Wang, W.-H., Steffen, A. T., & Capak, P. 2005, AJ, 129, 578 Barger, A. J., et al. 2003b, , 126, 632 Benítez, N. 2000, , 536, 571 Brandt, W. N., et al. 2001, , 122, 2810 Capak, P., et al. 2004, , 127, 180 Chapman, S. C., Blain, A. W., Smail, I., & Ivison, R. J. 2005, , 622, 772 Coleman, G. D., Wu, C.-C., & Weedman, D. W. 1980, , 43, 393 Cowie, L. L., Barger, A. J., Bautz, M. W., Brandt, W. N., & Garmire, G. P. 2003, , 584, L57 Croom, S. M., Smith, R. J., Boyle, B. J., Shanks, T., Miller, L., Outram, P. J., & Loaring, N. S. 2004, , 349, 1397 Dickinson, M., et al. 2005, in preparation Fiore, F., et al. 2003, A&A, 409, 79 Gehrels, N. 1986, , 303, 336 Giavalisco, M., et al. 2004, , 600, L93 Hasinger, G. 2003, in AIP Conf. Proc. 666, The Emergence of Cosmic Structure, ed. S. S. Holt & C. Reynolds (Melville: AIP), 227 Hunt, M. P., Steidel, C. C., Adelberger, K. L., & Shapley, A. E. 2004, , 605, 625 Kinney, A. L., Calzetti, D., Bohlin, R. C., McQuade, K., Storchi-Bergmann, T., & Schmitt, H. R. 1996, , 467, 38 Nandra, K., Laird, E. S., & Steidel, C. C. 2005, , 360, L39 (N05) Nandra, K., Mushotzky, R. F., Arnaud, K., Steidel, C. C., Adelberger, K. L., Gardner, J. P., Teplitz, H. I., & Windhorst, R. A. 2002, , 576, 625 Silverman, J. D., et al. 2005, , 624, 630 Steffen, A. T., Barger, A. J., Capak, P., Cowie, L. L., Mushotzky, R. F., & Yang, Y. 2004, , 128, 1483 Steidel, C. C., Adelberger, K. L., Shapley, A. E., Pettini, M., Dickinson, M., & Giavalisco, M. 2003, , 592, 728 Steidel, C. C., Hunt, M. P., Shapley, A. E., Adelberger, K. L., Pettini, M., Dickinson, M., & Giavalisco, M. 2002, , 576, 653 Szokoly, G. P., et al. 2004, , 155, 271 Trouille, L., et al. 2005, in preparation Ueda, Y., Akiyama, M., Ohta, K., & Miyaji, T. 2003, , 598, 886 Yang, Y., Mushotzky, R. F., Barger, A. J., & Cowie, L. L. 2005, , submitted
--- abstract: 'The electronic structure of fluorite-type crystals is studied by means of density functional theory within the local density approximation for the exchange-correlation energy. The ground-state electronic properties, calculated for the cubic compounds $CaF_{2}$, $SrF_{2}$, $BaF_{2}$, $CdF_{2}$, $HgF_{2}$, and $\beta $-$PbF_{2}$ using a plane-wave expansion of the wave functions, compare well with existing experimental data and previous theoretical results. The electronic density of states in the gap region and the energy-band structure of all the compounds have been calculated and compared with the existing data in the literature. General trends for the ground-state parameters, the electronic energy bands, and the transition energies of all the fluorides considered are given and discussed in detail. Moreover, results for $HgF_{2}$ are presented for the first time.' author: - Emiliano Cadelano - Giancarlo Cappellini title: 'Electronic structure of fluorides: general trends for ground and excited state properties' --- Introduction {#sec:introdution} ============ Fluorides and fluorite-type crystals have attracted much interest for their intrinsic optical properties and their potential applications in optoelectronic devices. The technological importance of $CaF_{2}$ is due mainly to its optical properties; indeed $CaF_{2}$ has a direct band gap at $\Gamma$ of $12.1$ eV and an indirect band gap estimated around $11.8$ eV.[@rubloff] Calcium fluoride $CaF_{2}$, as well as all the other fluorides, is a highly ionic insulator with a large band gap, and its lattice structure is cubic *$Fm\overline{3}m$* with three ions per unit cell, i.e. 
one cation placed at the origin and two $F$ anions situated at $\pm(\frac{1}{4}a, \frac{1}{4}a,\frac{1}{4}a)$.[@wyckoff] Here we shall consider $CaF_{2}$, $SrF_{2}$, $BaF_{2}$ (with cations belonging to the II group), $CdF_{2}$, $HgF_{2}$ (with cations belonging to the IIB group), and, finally, $\beta $-$PbF_{2}$ (with its cation belonging to the IV group). We refer to them in the present text as II-compounds, IIB-compounds, and $\beta $-$PbF_{2}$, respectively. [\[fig:structure\]]{} [\[fig:bz\]]{} We therefore propose the first systematic study of the electronic properties of these fluoride compounds within the same computational approach. Until now, the theoretical studies of these compounds have been carried out separately and with different techniques, so that no general and systematic trend for this family of compounds could be obtained. The DFT-LDA studies are of particular importance, being the benchmarks on which subsequent calculations of excited-state and optical properties are built. The latter rely on more sophisticated techniques that must start from well-converged DFT-LDA calculations (e.g. perturbative $G_{o}W_{o}$, self-consistent GW, BSE, etc.).[@cappellini1; @cappellini2] In the following we review the experimental data and the theoretical results for fluorides available in the literature. #### Experimental Scenario Due to its importance in applications and basic research, experiments on $CaF_{2}$ and related fluoride compounds have been carried out for a long time. 
The optical constants of $CaF_{2}$ in the extreme ultraviolet have been studied by a discharge-tube technique.[@tousey:1936] Reflectance spectra and transition energies of $CaF_{2}$, $BaF_{2}$, $SrF_{2}$, and other ionic compounds were later measured with synchrotron radiation.[@rubloff] Studies of the dielectric properties of $\beta $-$PbF_{2}$, $CaF_{2}$, $SrF_{2}$, and $BaF_{2}$ as a function of pressure and temperature have been published, based on capacitance and dielectric-loss measurements.[@Samara:1976] Satellites in the $X$-ray spectra of $CaF_{2}$, $SrF_{2}$, and $BaF_{2}$, and the density of states for intraband transitions, have been studied by photoelectron spectrometry.[@Scrocco] The effects of Eu defects in $\beta $-$PbF_{2}$ ($PbF_{2}:Eu^{3+}$) on the fluorescence/electronic excitation spectra and on the dielectric relaxation have been analyzed by laser absorption spectroscopy.[@weesner:1986] Absorption coefficients of $\beta $-$PbF_{2}$ and $CdF_{2}$ mixed crystals have been reported from spectrophotometry measurements.[@kosacki:1986] Neutron diffraction techniques have been employed to determine the pressure-temperature phase diagram of $\beta $-$PbF_{2}$.[@hull] The absorption spectra and electronic transitions of $\alpha - PbF_{2}$, $\beta $-$PbF_{2}$, and other compounds have been studied by polarized reflectivity.[@Fujita] Polarized deep- and vacuum-ultraviolet light measurements permitted the study of the birefringence of $CaF_{2}$ and $BaF_{2}$.[@burnett] The study by different techniques of the phase diagram of $HgF_{2}$ and other Hg compounds should also be mentioned here [@hostettler:2005] (even if no results for $HgF_{2}$ appear in that work, due to hydration of the sample). Recently, schematic experimental phase diagrams for $HgF_{2}$ and $HgF$ have also been reported.[@okamoto:2008] #### Theoretical Scenario Various theoretical methods have been applied to study either the ground state or the excited states of the fluoride compounds. 
One of the first works of relevance is that in which the elastic constants, the pressure derivatives of the $2^{nd}$-order elastic constants, and the static dielectric constant and its strain dependence of $CaF_{2}$, $SrF_{2}$, and $BaF_{2}$ were calculated within a shell model.[@Srinivasan] The energy bands and reflectance spectra of $CaF_{2}$ and $CdF_{2}$ were determined afterwards within a combined tight-binding and pseudopotential method.[@albert:1977] Mixed crystals of $CaF_{2}$, $SrF_{2}$, $CdF_{2}$, and $\beta $-$PbF_{2}$ have been studied with respect to their energy bands and DOS within the LMTO method.[@kudrnovsky] A phenomenological method has then been applied to calculate the specific heat of $\beta $-$PbF_{2}$, $CaF_{2}$, and other compounds.[@bouteiller:1992] Linear and non-linear optical properties of the cubic insulators $CaF_{2}$, $SrF_{2}$, $CdF_{2}$, $BaF_{2}$, and other compounds have been determined by the first-principles OLCAO method.[@ching] Point-defect studies in $CdF_{2}$ have been performed within the plane-wave pseudopotential method.[@Mattila] With respect to electronic excitation properties and energy band gaps, the electronic band structures of $CaF_{2}$ and other compounds have been determined by DFT-GW, using a plane-wave basis set and ionic pseudopotentials (PW-PP).[@shirley] Using the hybrid B3PW functional, the electronic structures of defected fluorides, namely $CaF_{2}$ and $BaF_{2}$, have been calculated.[@ranjia; @ShiEglitis; @ShiEglitis2] The $\varepsilon_{2}(\omega)$ function of $CaF_{2}$ and GaN has been calculated by an iterative procedure using an effective Hamiltonian,[@benedict:1999] within PW-PP and considering a screened interaction for the *e-h* coupling. Native and rare-earth-doped defect complexes in $\beta $-$PbF_{2}$ have been studied by atomistic simulation.[@jang:2000] In this paper, we are interested both in the structural and electronic properties of each fluoride and in comparison/trend studies for the whole crystallographic family. 
We have therefore computed the electronic and structural properties of the different fluorides $CaF_{2}$, $SrF_{2}$, $BaF_{2}$ (with cations belonging to the II group), $CdF_{2}$, $HgF_{2}$ (with cations belonging to the IIB group), and $\beta $-$PbF_{2}$, within the same first-principles pseudopotential plane-wave method.

  $a_{o}$\[Å\]          present   LDA              Theory              Exp.
  --------------------- --------- ---------------- ------------------- -------------------
  $CaF_{2}$             5.30      5.34[@Kalugin]   5.46[@kudrnovsky]   5.46[@Weast]
  $SrF_{2}$             5.68      -                5.79[@kudrnovsky]   5.78[@Srinivasan]
  $BaF_{2}$             6.09      -                6.26[@ranjia]       6.17[@Srinivasan]
  $CdF_{2}$             5.31      5.36[@Deligoz]   5.39[@kudrnovsky]   5.46[@Kalugin]
  $HgF_{2}$             5.47      -                5.55[@kaupp]        5.54[@Ebert]
  $\beta $-$PbF_{2}$    5.77      -                5.94[@kudrnovsky]   5.94[@jang:2000]

  : Optimized lattice constants of fluorides. In columns ”LDA” and ”Theory“ we show previous theoretical results (DFT-LDA and others, respectively), while in column ”Exp.” data from different experimental references are reported.[]{data-label="lattice constant"}

Computational details {#sec:method}
=====================

All the calculations for the fluorides under study have been performed using the density functional theory (DFT)[@kohn] method as implemented in the plane-wave code VASP.[@Kresse; @Kresse2] Projector augmented wave (PAW) pseudopotentials[@paw; @paw2] have been used within the local density approximation (LDA) for the exchange-correlation energy, treated within the scheme of Ceperley and Alder as parametrized by Perdew and Zunger.[@Cerperley; @pervew] Relativistic effects have been included in the calculations[@Kresse; @Kresse2; @bachelet; @kaupp] and spin-orbit coupling has been considered for the valence electrons.[@koelling] The conventional cell of the crystals is shown in Fig. 
\[capt:structure\], in which the $F^{(-)}$ ions (drawn as small spheres) form a cubic sublattice surrounded by a face-centered cubic lattice of cations$^{(++)}$ (shown in the picture as large spheres, labeled e.g. as Calcium). Fluorides with cations belonging to the II and IIB groups show a stable phase in this crystallographic structure. $PbF_{2}$, on the other hand, shows two structural phases at low pressure, namely orthorhombic ($\alpha$) and cubic ($\beta$): the cubic phase $\beta $-$PbF_{2}$ is the most stable under ambient conditions, while the orthorhombic $\alpha $-$PbF_{2}$ becomes stable at high pressure.[@jiang2; @nizam] In Fig. \[capt:bz\], the Brillouin zone is drawn for the *$Fm\overline{3}m$* space group; the paths in $k$-space chosen in our calculations of the electronic band structures are also shown. All calculations are performed with the total energy converged to within $1.5\cdot 10^{-5}{}$ eV, with a kinetic-energy cut-off depending on the cation of the compound under study (at least $550$ eV). A Monkhorst-Pack[@monkhort] $k$-point mesh of $4\times4\times4$ has been chosen to relax the cell parameters, until the largest interatomic force is smaller than $1.5\cdot 10^{-5}{}$ eV/Å. The lattice constants of each fluoride ($a_{\circ}$ in Tab. \[lattice constant\]), as well as the bulk moduli ($B_{\circ}$ in Tab. \[bulk moduli\]), have been calculated with the Murnaghan equation of state.[@murnaghan]

  $B_{\circ}$\[GPa\]   present   Theory          Exp.
  -------------------- --------- --------------- -------------
  $CaF_{2}$            103.01    91[@sun]        90-82[@sun]
  $SrF_{2}$            83.75     -               -
  $BaF_{2}$            69.39     50[@Ayala]      59[@wong]
  $CdF_{2}$            123.96    123[@Deligoz]   -
  $HgF_{2}$            117.03    -               -
  $\beta $-$PbF_{2}$   93.22     60[@jiang2]     64[@hull]

  : Bulk moduli of fluorides. Data in column ”Theory” are previous theoretical results. 
Data in column ”Exp.” are from different experimental references.[]{data-label="bulk moduli"}

Results and discussion {#sec:result}
======================

  Solid                Transition                                                       present   LDA               GW                 Exp.
  -------------------- ---------------------------------------------------------------- --------- ----------------- ------------------ -----------------
  $CaF_{2}$            $\Gamma \rightarrow \Gamma$ (direct)                              7.71      7.11[@rohlfing]   11.80[@rohlfing]   12.10[@rubloff]
                       $X\rightarrow \Gamma$ (indirect)                                  7.43      6.85[@rohlfing]   11.50[@rohlfing]   11.80[@rubloff]
                       valence bandwidth                                                 3.17      2.82[@rohlfing]   3.26[@rohlfing]    3.20[@Scrocco]
  $SrF_{2}$            $\Gamma\rightarrow \Gamma$ (direct)                               6.99      -                 -                  11.25[@rubloff]
                       $X\rightarrow \Gamma$ (indirect)                                  6.89      6.77[@ching]      -                  10.60[@rubloff]
                       valence bandwidth                                                 2.33      -                 -                  2.80[@Scrocco]
  $BaF_{2}$            $\Gamma\rightarrow \Gamma$ (direct)                               6.67      -                 -                  11.00[@rubloff]
                       $(\tfrac{1}{4},~\tfrac{1}{4},~0) \rightarrow\Gamma$ (indirect)    6.58      7.19[@ching]      -                  10.00[@rubloff]
                       valence bandwidth                                                 1.78      -                 -                  2.50[@Scrocco]
  $CdF_{2}$            $\Gamma\rightarrow \Gamma$ (direct)                               3.37      3.30[@Mattila]    -                  -
                       $W\rightarrow\Gamma$ (indirect)                                   2.94      2.80[@Mattila]    -                  7.80[@Orlowski]
                       valence bandwidth                                                 5.79      -                 -                  -
  $HgF_{2}$            $\Gamma\rightarrow \Gamma$ (direct)                               0.35      -                 -                  -
                       $\Gamma\rightarrow L$ (indirect)                                  4.16      -                 -                  -
                       valence bandwidth                                                 6.38      -                 -                  -
  $\beta $-$PbF_{2}$   $X\rightarrow X$ (direct)                                         4.09      -                 -                  -
                       $W\rightarrow X$ (indirect)                                       3.41      -                 -                  -
                       valence bandwidth                                                 7.16      -                 -                  6.33[@Fujita]

  : Minimum direct band gaps, fundamental (indirect) band gaps, and valence bandwidths (eV) of the fluorides, compared with previous LDA and GW results and with experiment.[]{data-label="tab:gap bandwidht"}

Trends for the lattice constants and bulk moduli are shown in Fig. \[trendbulk\] together with the corresponding experimental data from the literature, where available. From Table \[lattice constant\] and Fig. 
\[trendbulk\], an overall good agreement appears between the experimental data and the present results for the lattice constants, with a maximum deviation of $3\%$. Compounds with cations belonging to the II and IIB groups show a linear behavior with respect to the period of the corresponding cation in Mendeleev's table. Moreover, as shown in Table \[bulk moduli\] and Fig. \[trendbulk\], the bulk moduli show a less satisfactory agreement with the experimental results (maximum deviation within $20\%$ for the II group, and within $45\%$ for $\beta $-$PbF_{2}$). The bulk moduli of the compounds with cations belonging to the II and IIB groups also behave almost linearly with respect to the Mendeleev period. For the IIB group, no experimental data are available to support the theoretical trend. The energy band structures and the total and projected densities of states (DOS)[@note1] are shown in the region of the gap in Figs. \[cpt:band graph\], \[cpt:band graphs\]. The direct and indirect minimum band gaps are also clearly shown. In Table \[tab:gap bandwidht\] these data are compared with previous results and experimental data; the valence bandwidths are also reported, together with theoretical and experimental results. In Table \[tab:mygap\] the main vertical transitions are reported for all the compounds. In the following, in order to extract general trends for the electronic excitation properties of the stable cubic fluorides under study, we compare the results for the different fluorides considering first the compounds with cations belonging to the II group, namely Ca, Sr, and Ba, and then the compounds with a metal belonging to the IIB group, namely Cd and Hg. Finally, the data for $\beta $-$PbF_{2}$ are analysed separately. All these compounds show a minimum direct band gap at the $\Gamma$ point, except $\beta $-$PbF_{2}$. 
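The Murnaghan fits behind Tables \[lattice constant\] and \[bulk moduli\] exploit the fact that the bulk modulus is the curvature of the total energy at equilibrium, $B_{\circ} = V_{\circ}\,(d^{2}E/dV^{2})|_{V_{\circ}}$. A minimal numerical check of this relation on the Murnaghan $E(V)$ form, using our $CaF_{2}$ values ($a_{\circ}=5.30$ Å, $B_{\circ}=103.01$ GPa; the pressure derivative $B'=4.5$ is an assumed illustrative value, not a computed one):

```python
# Numerical check that the Murnaghan equation of state encodes the bulk
# modulus as the curvature of E(V) at the equilibrium volume V0.
EV_A3_TO_GPA = 160.218           # 1 eV/Angstrom^3 in GPa

a0 = 5.30                        # CaF2 lattice constant (present work), Angstrom
V0 = a0 ** 3 / 4.0               # primitive-cell volume of the fcc lattice
B0 = 103.01 / EV_A3_TO_GPA       # CaF2 bulk modulus (present work), eV/A^3
Bp = 4.5                         # pressure derivative dB/dP (assumed value)

def murnaghan_energy(V):
    """Murnaghan E(V), with the zero of energy chosen so that E(V0) = 0."""
    return (B0 * V / Bp) * ((V0 / V) ** Bp / (Bp - 1.0) + 1.0) \
        - B0 * V0 / (Bp - 1.0)

h = 0.01                         # volume step for the finite difference, A^3
curvature = (murnaghan_energy(V0 + h) - 2.0 * murnaghan_energy(V0)
             + murnaghan_energy(V0 - h)) / h ** 2
B_recovered = V0 * curvature * EV_A3_TO_GPA   # back to GPa, ~103
```

In an actual fit the roles are reversed: $(E_{\circ}, V_{\circ}, B_{\circ}, B')$ are adjusted to the computed total-energy points, and the curvature relation is what makes $B_{\circ}$ a fit parameter rather than a separate calculation.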
Concerning the smallest, fundamental gap, all the compounds treated here are indirect-gap insulators apart from $HgF_{2}$, which shows a direct fundamental band gap (Table \[tab:gap bandwidht\]). Note that the absolute values of the gaps suffer from the well-known band-gap underestimation, which can be resolved by going beyond DFT-LDA with more accurate techniques for the excited states (e.g. GW ones, see Table \[tab:gap bandwidht\]). This issue will not be addressed in the present work and will be the subject of future research. Moreover, the trends in the electronic excitation energies shown in the following should not be affected by the above problem.[@cappellini1; @cappellini2] For the II-compounds, the calculated decrease of the direct gap at $\Gamma$, $1.04 ~eV$ from Ca to Ba, is similar to the $1.10~ eV$ decrease found in the experimental data (see also Fig. \[cpt:band graph\]). On the other hand, the $X$-$\Gamma$ transition decreases by $0.54~eV$ from calcium to strontium fluoride, to be compared with a $1.20~eV$ decrease in the experiments. Considering barium fluoride, the minimum transition is not $X$-$\Gamma$, but the one occurring at $(1/4,1/4,0)$-$\Gamma$ (see also Fig. \[fig:band ba\]). However, considering the value of this transition shown in Tab. \[tab:mygap\], we confirm a value $0.31 ~eV$ smaller than the corresponding transition of $SrF_{2}$ (to be compared with a $0.6~eV$ experimental decrease). Considering the valence bandwidth, a $1.39 ~eV$ decrease occurs going from $Ca$ to $Ba$, to be compared with a $0.70 ~eV$ experimental one. For the IIB-compounds (see Figs. \[fig:band cd\], \[fig:band hg\]), the direct gaps at the $\Gamma$ point differ by $3.02 ~eV$ going from $CdF_{2}$ to $HgF_{2}$. While $CdF_{2}$ presents an indirect $W\rightarrow\Gamma$ fundamental gap, $HgF_{2}$ shows a direct fundamental band gap at $\Gamma$ (see Fig. \[fig:band hg\]). 
Moreover, among the IIB-compounds, $HgF_{2}$ presents a valence bandwidth larger by $0.59 ~eV$. For $\beta $-$PbF_{2}$ the minimum direct gap occurs at $X$ instead of at $\Gamma$ as for the other fluorides. It shows an indirect fundamental gap $ W\rightarrow X $ of $3.41~eV$ (see Fig. \[fig:band pb\] and Tab. \[tab:gap bandwidht\]). Moreover, it has the largest valence bandwidth of all the compounds, which slightly overestimates the experimental value ($13\%$).

  Direct gap                                            $CaF_{2}$   $SrF_{2}$   $BaF_{2}$   $CdF_{2}$   $HgF_{2}$   $\beta $-$PbF_{2}$
  ----------------------------------------------------- ----------- ----------- ----------- ----------- ----------- --------------------
  $ L\rightarrow L $                                    9.35        10.59       9.29        7.62        4.31        4.62
  $\Gamma\rightarrow\Gamma$                             7.71        6.99        6.67        3.37        0.35        7.45
  $ X\rightarrow X $                                    8.09        8.03        7.23        8.18        5.34        4.09
  $ W\rightarrow W $                                    8.57        8.44        7.68        8.17        5.45        5.10
  $ K \rightarrow K $                                   8.50        8.41        7.54        8.25        5.34        5.45
  Indirect gap
  $X\rightarrow\Gamma $                                 7.43        6.89        6.79        3.04        ...         ...
  $L\rightarrow\Gamma $                                 8.00        7.33        6.73        3.29        ...         ...
  $W\rightarrow\Gamma $                                 7.54        6.97        6.90        2.94        ...         ...
  $K\rightarrow\Gamma $                                 7.55        6.98        6.83        3.14        ...         ...
  $ (\tfrac{1}{4},~\tfrac{1}{4},~0)\rightarrow\Gamma$   ...         ...         6.58        ...         ...         ...
  $ \Gamma\rightarrow X $                               ...         ...         ...         ...         ...         6.36
  $ W\rightarrow X $                                    ...         ...         ...         ...         ...         3.95
  $ K\rightarrow X $                                    ...         ...         ...         ...         ...         3.98
  $ L\rightarrow X $                                    ...         ...         ...         ...         ...         4.02

  : Energy band gaps (eV) after the present work.[]{data-label="tab:mygap"}

With respect to the direct transitions, considering the data in Tab. \[tab:mygap\], all II-compounds show a decreasing trend at all the main symmetry points, except at $L$. At that point, $SrF_{2}$ shows a band gap $1.24 ~eV$ larger than that of $CaF_{2}$ and $1.3 ~eV$ larger than that of $BaF_{2}$. For the IIB-compounds, the direct transitions show a larger difference (up to $3.31 ~eV$) going from $Cd$ to $Hg$. 
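The trend values quoted in this section follow by simple differences from the computed gaps and valence bandwidths; a quick arithmetic cross-check (values transcribed from Tables \[tab:gap bandwidht\] and \[tab:mygap\]):

```python
# Consistency check of the quoted trend numbers against the computed values.
gamma_gap = {"CaF2": 7.71, "SrF2": 6.99, "BaF2": 6.67,
             "CdF2": 3.37, "HgF2": 0.35}             # direct gaps at Gamma, eV
l_gap = {"CaF2": 9.35, "SrF2": 10.59, "BaF2": 9.29}  # direct gaps at L, eV
x_gamma = {"CaF2": 7.43, "SrF2": 6.89}               # indirect X->Gamma gaps, eV
bandwidth = {"CaF2": 3.17, "BaF2": 1.78,
             "CdF2": 5.79, "HgF2": 6.38}             # valence bandwidths, eV

gamma_drop_II = gamma_gap["CaF2"] - gamma_gap["BaF2"]    # 1.04 eV (exp: 1.10)
xg_drop = x_gamma["CaF2"] - x_gamma["SrF2"]              # 0.54 eV (exp: 1.20)
bw_drop = bandwidth["CaF2"] - bandwidth["BaF2"]          # 1.39 eV (exp: 0.70)
gamma_drop_IIB = gamma_gap["CdF2"] - gamma_gap["HgF2"]   # 3.02 eV
l_excess_sr = l_gap["SrF2"] - l_gap["CaF2"]              # 1.24 eV
bw_excess_hg = bandwidth["HgF2"] - bandwidth["CdF2"]     # 0.59 eV
```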
To complete the picture, the indirect gaps from the top of the valence band at high-symmetry points to the bottom of the conduction bands ($\Gamma$ for II- and IIB-compounds, $X$ for $\beta$-$PbF_{2}$) are shown in the second part of Tab. \[tab:mygap\].

CONCLUSIONS {#sec:conclusion}
===========

The DFT-LDA electronic structures of the ground state and excited states of some cubic fluorides have been studied in detail. The electronic density of states (DOS) in the gap region and the energy-band structure of all the compounds have been calculated and compared with existing experiments and previous theoretical results. The electronic energy bands and transition energies are given and discussed. Within the same DFT-LDA scheme, general trends for the ground-state parameters and the DOS are also given. These trends compare well with experimental data and theoretical results. Regarding electronic excitations, the conduction bands of the II-compounds are mostly dominated by the cation d-orbitals, while for the IIB-compounds the tail of the DOS at the lowest conduction bands shows a largely s-type character. The obtained DFT-LDA valence bandwidths agree with experimental values within $\sim30\%$. The present systematic DFT-LDA study is of particular interest for future research on excited-state and optical-property calculations of fluorides. We plan to carry out such calculations in the near future by using techniques specifically devoted to that issue.

The authors acknowledge computational support provided by COSMOLAB (Cagliari, Italy) and CASPUR (Rome, Italy). Discussions with V. Fiorentini are gratefully acknowledged.

[99]{} G. W. Rubloff, Phys. Rev. B [**5**]{}, 662 (1972). R. W. G. Wyckoff, *Crystal Structures*, 9th Ed. (Interscience/John Wiley, New York 1963), Vol. 1. G. Cappellini, G. Satta, M. Palummo, G. Onida, Phys. Rev.
B **64**, 035104 (2001). G. Satta, G. Cappellini, V. Olevano, L. Reining, Phys. Rev. B **70**, 195212 (2004). R. Tousey, Phys. Rev. **50**, 1057 (1936). G. A. Samara, Phys. Rev. B **13**, 4529 (1976). M. Scrocco, Phys. Rev. B **32**, 1301 (1985). F.J. Weesner, J.C. Wright, J.J. Fontanella, Phys. Rev. B **33**, 1372 (1986). I. Kosacki and J. M. Langer, Phys. Rev. B **33**, 5972 (1986). S. Hull and D. A. Keen, Phys. Rev. B **58**, 14837 (1998). M. Fujita, M. Itoh, Y. Bokumoto, H. Nakagawa, D. L. Alov and M. Kitaura, Phys. Rev. B **61**, 15731 (2000). J. H. Burnett, Z. H. Levine and E. L. Shirley, Phys. Rev. B **64**, 241102(R) (2001). M. Hostettler and D. Schwarzenbach, C. R. Chimie **8**, 147 (2005). H. Okamoto, JPEDAV **29**, 294 (2008). R. Srinivasan, Phys. Rev. **165**, 1054 (1968). J. P. Albert, C. Jouanin and C. Gout, Phys. Rev. B **16**, 4619 (1977). J. Kudrnovský, N. E. Christensen and J. Mašek, Phys. Rev. B [**43**]{}, 12597 (1991). Y. Bouteiller, Phys. Rev. B [**45**]{}, 8734 (1992). W. Y. Ching, Fanqi Gan, Ming-Zhu Huang, Phys. Rev. B [**52**]{}, 1596 (1995). T. Mattila, S. Pöykkö, R. M. Nieminen, Phys. Rev. B **56**, 15665 (1997). E. L. Shirley, Phys. Rev. B **58**, 9579 (1998). R. Jia, H. Shi and G. Borstel, J. Phys.: Condens. Matter **22**, 055501 (2010). H. Shi, R. I. Eglitis and G. Borstel, J. Phys.: Condens. Matter **18**, 8367 (2006). H. Shi, R. I. Eglitis and G. Borstel, Phys. Rev. B **72**, 045109 (2005). L. X. Benedict and E. L. Shirley, Phys. Rev. B **59**, 5441 (1999). H. Jiang, A. Costales, M. A. Blanco, M. Gu, R. Pandey and J. D. Gale, Phys. Rev. B **62**, 803 (2000). W. Kohn, L. J. Sham, Phys. Rev. **140**, A1133 (1965). G. Kresse, J. Furthmüller, Comput. Mater. Sci. **6**, 15 (1996). G. Kresse, J. Furthmüller, Phys. Rev. B **54**, 11169 (1996). P. E. Blöchl, Phys. Rev. B **50**, 17953 (1994). G. Kresse, D. Joubert, Phys. Rev. B **59**, 1758 (1999). D. M. Ceperley, B. J. Alder, Phys. Rev. Lett. **45**, 566 (1980). J. P. Perdew, A.
Zunger, Phys. Rev. B **23**, 5048 (1981). G. B. Bachelet and M. Schlüter, Phys. Rev. B **25**, 2103 (1982). M. Kaupp and H. G. von Schnering, Inorg. Chem. **33**, 4718 (1994). D. D. Koelling, B. N. Harmon, J. Phys. C **10**, 3107 (1977). H. Jiang, R. Orlando, M.A. Blanco and R. Pandey, J. Phys.: Condens. Matter **16**, 3081 (2004). M. Nizam, Y. Bouteiller, B. Silvi, C. Pisani, M. Causà and R. Dovesi, J. Phys. C: Solid State Phys. **21**, 5351 (1988). H. J. Monkhorst, J. D. Pack, Phys. Rev. B **13**, 5188 (1976). A. I. Kalugin and V. V. Sobolev, Phys. Rev. B **71**, 115112 (2005). R. C. Weast, *Handbook of Chemistry and Physics*, CRC Press, Boca Raton (1976). E. Deligoz, K. Colakoglu and Y.O. Ciftci, Journal of Alloys and Compounds **438**, 66 (2007). F. Ebert, H. Woitinek, Z. Anorg. Allg. Chem. **210**, 269 (1933). A. P. Ayala, J. Phys.: Condens. Matter **13**, 11741 (2001). X. W. Sun, Y.D. Chu, Z.J. Liu, Q.F. Chen, Q. Song and T. Song, Physica B **404**, 158 (2009). C. Wong, D. E. Schulle, J. Phys. Chem. Solids **29**, 1309 (1968). F.D. Murnaghan, Proc. Nat. Acad. Sci. **30**, 244 (1944). Y. Ma and M. Rohlfing, Phys. Rev. B [**75**]{}, 205114 (2007). B. A. Orlowski, P. Plenkiewicz, Phys. Status Solidi B **126**, 285 (1984). The electronic density of states (DOS) has been calculated using a Brillouin-zone sampling with an $8\times 8\times 8$ Monkhorst-Pack grid and Gaussian smearing of width 0.1 eV.
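For reference, the fractional coordinates of a Monkhorst-Pack grid such as the $8\times 8\times 8$ sampling mentioned above follow the standard prescription $u_r=(2r-q-1)/(2q)$, $r=1,\dots,q$, along each reciprocal axis. The sketch below is an illustration of that prescription, not the code used in this work.

```python
def monkhorst_pack(q):
    """Fractional k-point coordinates of a q x q x q Monkhorst-Pack grid:
    u_r = (2r - q - 1) / (2q) for r = 1..q along each reciprocal axis."""
    u = [(2 * r - q - 1) / (2 * q) for r in range(1, q + 1)]
    return [(a, b, c) for a in u for b in u for c in u]

grid = monkhorst_pack(8)   # the 8x8x8 sampling used for the DOS
print(len(grid))           # 512 points before symmetry reduction
```

The grid is symmetric about the zone center and never touches the zone boundary, which is what makes this sampling efficient for insulators.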
---
abstract: 'We examine a scenario of abelianized Glasma evolution that accounts for the back-reaction of the partonic medium in ultrarelativistic heavy-ion collisions. We show that such a generalization leads to instabilities and to the presence of a negative color conductivity in the system.'
author:
- 'A.V. Nazarenko'
title: GLASMA EVOLUTION IN PARTONIC MEDIUM
---

INTRODUCTION
============

Phenomenological analyses of experimental data indicate that the quark-gluon plasma (QGP) can be formed in ultrarelativistic A+A collisions [@exp]. Its local thermalization and isotropization should be mainly related to fast processes stimulated by instabilities at small times after the collision [@Mrow; @Mrow2]. In the present theoretical picture of ultrarelativistic heavy-ion collisions [@ETS], the early stage is predominantly characterized by a large number of partons with “small” momenta of the order of the so-called saturation momentum $\Lambda_s$, which are better viewed as a classical Yang-Mills field in vacuum [@GV], sometimes called “Glasma”. The initial conditions for the Glasma evolution are determined by the Color Glass Condensate (CGC) concept of McLerran-Venugopalan (MV) [@McLerran], where the field sources [*before*]{} the collision are represented by the randomly distributed valence quarks of the colliding hadrons, located (due to Lorentz contraction) on infinitesimally thin sheets running along the light-cone. These sources are also treated as hard partons with “large” momenta, which quickly escape from the system [*after*]{} the collision. Thus, the original MV model neglects the interaction between the field and the hard partonic medium. The space-time dynamics of the Yang-Mills fields in vacuum (“the melting of the CGC”) under the assumption of boost invariance was investigated numerically, and the energy and number distributions of the classically produced gluons were computed (see, for example, the review [@Leon]).
Moreover, it was shown that violations of boost invariance cause a non-Abelian Weibel instability [@Wei], leading the soft field modes to grow with proper time [@RV]. However, the effect of isotropization lies outside this model. On the other hand, the hard partons (produced after the moment of collision) with large transverse momentum $p_T$ can be studied within the framework of transport theory; if the presence of the soft classical field is neglected, the time evolution of these partons is described by a Boltzmann equation with a collision kernel [@GM; @BMS; @GPZ; @MG; @DG; @NVC] (for a comment, see [^1]). However, it has been argued that the collective processes caused by the soft gauge field should be dominant in the equilibration of the QGP, with instabilities developing due to the anisotropic distributions of the released hard partons [@DN; @XG; @BMSS; @Akk]. A third regime, where the back-reaction of the field on the hard partons (treated as particles) is still weak but the self-interaction of the field may be strongly nonlinear, is governed by a “hard-loop” effective action, which has been derived in Ref. [@MRS] for arbitrary momentum-space anisotropies. It is interesting to note that numerical studies of anisotropic hard partonic modes coupled to unstable soft modes revealed the tendency of the non-Abelian gauge fields to “abelianize” during the stage of instabilities [@AL; @DN]. This means that the field commutators become much smaller than the fields themselves. Moreover, the dynamics of the Abelian and non-Abelian fields is qualitatively the same if the fields are not too strong. In this paper, we examine a scenario of Glasma evolution with CGC-like initial conditions in which the presence of the (momentum-)anisotropic medium of hard partons is also taken into account.
Our goal is to evaluate analytically the behavior of such a system over a short time interval in the weak-interaction regime, when the abelianized version of the field dynamics is applicable. Although the latter condition requires considering the system at relatively large times after the A+A collision (as follows from numerical investigations), it simplifies the problem considerably. Moreover, it has already been pointed out in Refs. [@GKNS; @SNK; @SKN] that early equilibration of the QGP is not necessary to describe the pion and kaon spectra observed experimentally at RHIC in Brookhaven. Since the momentum-space anisotropy of the system can be estimated by means of transport coefficients, we attempt to calculate a conductivity tensor and to determine the effect of instabilities on it. It is expected that the back-reaction can lead to a negative color conductivity in the boost-invariant case.

THE MODEL FORMULATION
=====================

As mentioned in the Introduction, the classical Yang-Mills theory in space-time with the pseudo-cylindrical metric $$\begin{aligned} &&ds^2=d\tau^2-\tau^2d\eta^2-dr_T^2-r^2_Td\varphi^2,\\ &&\tau=\sqrt{t^2-z^2},\quad \eta=\frac{1}{2}\ln{\frac{t+z}{t-z}},\end{aligned}$$ ($\tau$ and $\eta$ are the proper time and space-time rapidity, respectively) is effectively abelianized for $\tau\gtrsim\tau_0\approx3/\Lambda_s$, where $\Lambda_s\approx2$ GeV [@GV]. This means that we actually arrive at the Maxwell theory with 4-potential $A_\mu$ (hereafter, we neglect the normalization constant $1/\sqrt{N_c}$, where $N_c$ is the number of colors). The free-field theory in the mid-rapidity region for central collisions, with the potentials parametrized as $A_\tau=0$ (CGC-like gauge fixing), $A_\eta\equiv\Phi(\tau,r_T)$, $A_{r_T}=0$, $A_\varphi\equiv\Psi(\tau,r_T)$, has already been examined (see Ref. [@SNK]) in order to describe the space-time evolution of the field flow (collective velocity) at the pre-thermal stage of collisions.
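The change of coordinates above is the standard Milne-type map and is easy to verify numerically; the function names below (`to_milne`, `from_milne`) are illustrative, not from the paper.

```python
import math

def to_milne(t, z):
    """Map Minkowski (t, z), with t > |z|, to proper time tau and
    space-time rapidity eta: tau = sqrt(t^2 - z^2), eta = (1/2) ln((t+z)/(t-z))."""
    tau = math.sqrt(t * t - z * z)
    eta = 0.5 * math.log((t + z) / (t - z))
    return tau, eta

def from_milne(tau, eta):
    """Inverse map: t = tau cosh(eta), z = tau sinh(eta)."""
    return tau * math.cosh(eta), tau * math.sinh(eta)

tau, eta = to_milne(2.0, 1.0)
t, z = from_milne(tau, eta)
print(abs(t - 2.0) < 1e-12, abs(z - 1.0) < 1e-12)  # True True
```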
It turns out that the results obtained are qualitatively the same as in the case of the non-Abelian model of Ref. [@KF]. Here we generalize the abelianized Glasma equations by including sources on the right-hand side: $$\begin{aligned} &&\partial^2_\tau\Phi-\frac{1}{\tau}\partial_\tau\Phi-\partial^2_{r_T}\Phi-\frac{1}{r_T} \partial_{r_T}\Phi=J_\eta(\tau,r_T),\\ &&\partial^2_\tau\Psi+\frac{1}{\tau}\partial_\tau\Psi-\partial^2_{r_T}\Psi+\frac{1}{r_T} \partial_{r_T}\Psi=J_\varphi(\tau,r_T).\end{aligned}$$ Note that $J_\tau=0$, $J_{r_T}=0$, and the current conservation is satisfied automatically. In the context of A+A collisions, the presence of the sources corresponds to the existence of the medium. Accounting for the hard partonic component (viewed as a particle subsystem), we aim to investigate the field and particle dynamics. The components of the current in Minkowskian space-time are $$J^\mu=g\int p^\mu(f_+-f_-)\frac{d^3p}{p^0},$$ where $p^0\equiv|{\bf p}|$ (the case of massless partons) and $f_\pm$ are the distribution functions of the (scalar) partons. The distribution of partons is supposed to be anisotropic in momentum space and inhomogeneous in configuration space. The space-time development of the functions $f_\pm$ is determined by the Vlasov equations, which we formulate below. Note that “$-g$” corresponds to the charge of the electron in the context of electrodynamics. A toy field model with a non-trivial right-hand side has already been investigated in Ref. [@ALMY]. It is useful to parametrize the momenta as $(p^\mu)=(p_T\cosh{y},p_T\cos{\phi},p_T\sin{\phi},p_T\sinh{y})$, where $y$ is the momentum rapidity. In terms of our variables, one has: $$\begin{aligned} &&J_\tau=g\int p^2_T\cosh{\theta}(f_+-f_-)dp_T dy d\phi,\\ &&J_\eta=-\tau g\int p^2_T\sinh{\theta}(f_+-f_-)dp_T dy d\phi,\\ &&J_{r_T}=-g\int p^2_T\cos{\xi}(f_+-f_-)dp_T dy d\phi,\\ &&J_\varphi=-r_T g\int p^2_T\sin{\xi}(f_+-f_-)dp_T dy d\phi,\end{aligned}$$ where $\theta=y-\eta$, $\xi=\phi-\varphi$.
Taking the conditions $J_\tau=0$, $J_{r_T}=0$ into account, the difference $f_+-f_-$ should be an odd function of $\theta$ and $\xi$ during the evolution. The evolution of $f_\pm$ is generated by the Vlasov equations: $$(\hat L\pm g\hat F)f_\pm=0,$$ where $\hat L\equiv p^\mu\partial_\mu$, $\hat F\equiv p^\mu F_{\mu\nu}\partial^\nu_p$, $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu.$ Since the sources (partons) are randomly distributed at the initial moment $\tau_0$, we put $f^0_+=f^0_-=f^0$, where $f^0$ is defined as $(dN_h/d^3xd^3p)|_{\tau_0}$, and $\hat Lf^0=0$ in our investigations. This means that the system is neutral and the currents are absent at $\tau_0$. Using the curved coordinates, we obtain $$\hat L= p_T\left(\cosh{\theta}\partial_\tau+\frac{\sinh{\theta}}{\tau}\partial_\eta +\cos{\xi}\partial_{r_T}+\frac{\sin{\xi}}{r_T}\partial_\varphi\right),$$ $$\begin{aligned} &&\hat F=-\frac{\partial_\tau\Phi}{\tau}\partial_y+ \frac{\partial_{r_T}\Phi}{\tau}(\sinh{\theta}\sin{\xi}\partial_\phi -\cosh{\theta}\cos{\xi}\partial_y)\nonumber\\ &&+\frac{\partial_\tau\Psi}{r_T}(\sinh{\theta}\sin{\xi}\partial_y +\cosh{\theta}\cos{\xi}\partial_\phi)+\frac{\partial_{r_T}\Psi}{r_T}\partial_\phi.\end{aligned}$$ We can see that the Lorentz-force operator $\hat F$ is simply a rotation operator in momentum space and, therefore, conserves the absolute value of the transverse momentum $p_T$. It is expected that such a structure of the Lorentz force should lead to momentum transfer between different directions and, consequently, to instabilities in this system.

SOLUTION OF EQUATIONS
=====================

In this Section, we concentrate on finding the solution of the set of coupled Maxwell–Vlasov equations. Since the Glasma field is essential at the early stage of nuclear collisions (in contrast with the hard partons or quarks), the study of the field dynamics actually dominates. For this purpose, it is necessary to express the particle currents through the fields.
In general, the Vlasov equations are complicated. For this reason, we are forced to use an approximate method for solving this set. The fluctuation of the distribution function, which arises during a fairly small time interval $\Delta\tau$, can be found in the linear approximation in $g$: $$f_\pm=f^0\mp g\delta f.$$ In this approximation, the space-time evolution of the correction $\delta f$, which determines the difference $f_+-f_-=-2g\delta f$ and the current components, results from the following equation: $$\hat L\delta f=\hat F f^0.$$ Note that this approximation does not permit us to investigate the isotropization of the particle (hard parton) kinematic part of the energy-momentum tensor, proportional to the sum $f_++f_-$. It is expected that such an isotropization effect can be observed if the correction of order $g^2$ is included. Nevertheless, the approximation under consideration allows one to study the instabilities in the system. For $\tau\to\tau_0$, there exists an approximate solution, short-lived in time and localized in space, $$\label{appr} \delta f\approx\frac{\tau-\tau_0}{p_T\cosh{\theta}}\hat F f^0.$$ It is easy to verify that the action of the evolution operator $\hat L$ on this expression gives $$\label{prop} \hat L\frac{\tau-\tau_0}{p_T\cosh{\theta}}\hat F f^0=\hat F f^0+(\tau-\tau_0)W(\tau),$$ where $$\begin{aligned} W(\tau)&=&\left[\partial_\tau+\frac{\tanh{\theta}}{\tau}(\partial_\eta+\tanh{\theta}) \right. \nonumber\\ &+&\left.\frac{\cos{\xi}}{\cosh{\theta}}\partial_{r_T} +\frac{\sin{\xi}}{r_T\cosh{\theta}}\partial_\varphi\right](\hat F f^0).\end{aligned}$$ Thus, for $\tau\to\tau_0$, the last term on the right-hand side of (\[prop\]) vanishes. Note that this is a simplified proof of Eq. (\[appr\]). In order to understand this approximation in detail, see Appendix A.
Model initial boost-invariant distributions in central heavy-ion collisions often take the form $f^0=f^0(p_T,\theta)=f^0(p_T,-\theta)$ (note that, strictly speaking, $f^0$ must also be a function of $r_T$). In this case, we obtain $$\label{Ohm} J_\eta=\sigma_\eta(\tau)\partial_\tau\Phi, \qquad J_\varphi=\sigma_\varphi(\tau)\partial_\tau\Psi,$$ where $$\sigma_\eta(\tau)=2(\tau-\tau_0)\sigma_0,\qquad \sigma_\varphi(\tau)=-(\tau-\tau_0)\sigma_0$$ are conductivities. The common multiplier, dependent on the initial distribution of partons, is $$\sigma_0\equiv-2\pi g^2\int\limits_0^\infty dp_T\int\limits_{-\infty}^\infty dy \partial_y f^0 p_T \tanh{\theta}.$$ We can immediately see that $\sigma_\eta(\tau)$ and $\sigma_\varphi(\tau)$ have different signs. This indicates the presence of a [*negative color conductivity*]{} driving an instability in the system. The mechanism of this instability is simple: the particles (partons) give energy to the field. Now let us analyze the properties of $\sigma_0$. First, we assume that the initial distribution $f^0$ is the product of a function of $(p_T,\theta)$ and the spatial distribution $(dN_h/d^3x)|_{\tau_0}(r_T)$ in the transverse plane. Taking into account that the initial distribution is an even function of $\theta$, one gets $$\sigma_0=A\left.\frac{dN_h}{d^3x}\right|_{\tau_0}>0,$$ where $A$ is a positive constant arising after integration over the momentum variables. Thus, $\sigma_\varphi<0$, while $\sigma_\eta>0$. This means that the negative color conductivity occurs in the transverse plane. At this stage a natural question arises: how does the negative conductivity look in the laboratory reference frame? Eqs. (\[Ohm\]) are actually Ohm's law, where $E_\eta\equiv F_{\tau\eta}=\partial_\tau\Phi$, $E_\varphi\equiv F_{\tau\varphi}=\partial_\tau\Psi$ are the (chromo)electric field strengths.
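The positivity of $\sigma_0$ for distributions even in $\theta$ can be checked numerically: for boost-invariant $f^0(p_T,\theta)$ at fixed $\eta$ one has $\partial_y f^0=\partial_\theta f^0$, and integration by parts turns the integral into the manifestly positive form $+2\pi g^2\int f^0\,p_T/\cosh^2\theta$. The Gaussian ansatz for $f^0$ below is a hypothetical illustrative choice, not the paper's distribution.

```python
import numpy as np

# Hypothetical even initial distribution f0(p_T, theta), chosen only to
# illustrate that sigma_0 > 0 whenever f0 is even in theta.
def f0(pT, theta):
    return np.exp(-pT) * np.exp(-theta**2)

g = 1.0
pT = np.linspace(1e-3, 20.0, 400)
th = np.linspace(-10.0, 10.0, 801)
dp, dth = pT[1] - pT[0], th[1] - th[0]
P, T = np.meshgrid(pT, th, indexing="ij")

# sigma_0 = -2 pi g^2 int dp_T dtheta (df0/dtheta) p_T tanh(theta)
df_dtheta = -2.0 * T * f0(P, T)            # analytic theta-derivative
sigma0 = -2.0 * np.pi * g**2 * np.sum(df_dtheta * P * np.tanh(T)) * dp * dth

# Same quantity after integration by parts (manifestly positive integrand)
sigma0_parts = 2.0 * np.pi * g**2 * np.sum(f0(P, T) * P / np.cosh(T)**2) * dp * dth

print(sigma0 > 0.0)                                  # True
print(abs(sigma0 - sigma0_parts) / sigma0 < 1e-2)    # True
```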
Introducing $E_i=F_{ti}$ in Minkowskian space-time, we find that $E_\eta=\tau E_z$, $E_\varphi=r_T\cosh{\eta}(-\sin{\varphi}E_x+ \cos{\varphi}E_y)+r_T\sinh{\eta}(-\sin{\varphi}F_{zx}+\cos{\varphi}F_{zy})$. In these terms the current components are $$\begin{aligned} J_t&=&-\sinh{\eta}\sigma_\eta E_z,\quad J_z=\cosh{\eta}\sigma_\eta E_z,\\ J_x&=&-\frac{\sin{\varphi}}{r_T}\sigma_\varphi E_\varphi,\quad J_y=\frac{\cos{\varphi}}{r_T}\sigma_\varphi E_\varphi.\end{aligned}$$ If $\eta=0$ and $\varphi=0$, one has $J_t=J_x=0$, $J_y=\sigma_\varphi E_y$, $J_z=\sigma_\eta E_z$. Thus the negative color conductivity occurs in the laboratory reference frame under certain conditions (related to the values of the angles). This effect is associated with filamentation in the plasma [@Mrow2]. Since it is hard to find the general solution of the field equations for an arbitrary distribution $(dN_h/d^3x)|_{\tau_0}$, we study the particular case $(dN_h/d^3x)|_{\tau_0}={\rm const}$. This assumption simplifies the problem significantly. When $\sigma_0$ is a constant, the spatial dependence of the field potentials is immediately derived by using the Bessel-Fourier transform: $$\begin{aligned} \Phi(\tau,r_T)&=&\int\limits_0^\infty\Phi_0(k_T)g_\eta(\tau,k_T)J_0(k_Tr_T)dk_T,\\ \Psi(\tau,r_T)&=&r_T\int\limits_0^\infty\Psi_0(k_T)g_\varphi(\tau,k_T)J_1(k_Tr_T)dk_T,\end{aligned}$$ where the initial conditions resulting from the CGC concept are applied: $$\begin{aligned} &&g_\eta|_{\tau_0}=0, \quad \left.\frac{\partial_\tau g_\eta}{\tau}\right|_{\tau_0}=k_T,\\ &&g_\varphi|_{\tau_0}=1, \quad \tau\partial_\tau g_\varphi|_{\tau_0}=0.\end{aligned}$$ In principle, the functions $g_\eta(\tau,k_T)$, $g_\varphi(\tau,k_T)$ can be expressed for arbitrary $\tau_0\not=0$ in terms of Heun functions. However, these expressions are too complicated for a heuristic analysis of our model and its applications.
For this reason, we write down the functions $g_{\eta,\varphi}$ at $\tau_0\to0$: $$g_\eta=-\frac{k_T}{2\sigma_0}\exp{\left(\frac{1}{2}\sigma_0 \tau^2\right)} M\left(-\frac{k^2_T}{4\sigma_0}, \frac{1}{2};-\sigma_0\tau^2\right),\qquad g_\varphi=\frac{1}{\tau}\sqrt{\frac{2}{\sigma_0}} \exp{\left(-\frac{1}{4}\sigma_0\tau^2\right)} M\left(\frac{k^2_T-\sigma_0}{2\sigma_0},0;\frac{1}{2}\sigma_0\tau^2\right),$$ where $M(a,b;z)$ is the Whittaker function. It is easy to verify that the occurrence of the negative color conductivity $\sigma_\varphi$ leads to a growth of some components of the magnetic and electric fields in comparison with the case of the theory without a partonic medium. It is important that the Abelian magnetic field exhibits a growth, draining some energy from the particle reservoir. The instabilities are related to the presence of the exponentials in the functions $g_{\eta,\varphi}$; the Whittaker functions actually change only the phase of the oscillations in comparison with the free theory. If $\sigma_0\to0$ and $\tau_0\to0$, we recover the well-known expressions (see, for example, Ref. [@Lappi] and references therein): $$g_\eta(\tau,k_T)=\tau J_1(k_T\tau),\quad g_\varphi(\tau,k_T)=J_0(k_T\tau),$$ where $J_n(z)$ is the Bessel function. These expressions correspond to the perturbative (lowest order in the source charge densities) solution. Now it is necessary to determine the functions $\Phi_0(k_T)$ and $\Psi_0(k_T)$. They originate from the initial conditions for the field equations. Note that $\Psi_0(k_T)$ and $\Phi_0(k_T)$ are fluctuating quantities within the CGC concept, and only the pair correlator of the (Yang–Mills) potentials is observable. However, the field potentials in our approach are not stochastic quantities, in contrast with the CGC ideology, because our goal is to formulate the initial conditions on the basis of the statistically averaged components of the energy-momentum tensor, accounting for the spatial inhomogeneity.
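The $\sigma_0\to0$ limit of the Whittaker-function expressions can be checked numerically; the sketch below assumes the mpmath library, whose `whitm` implements the Whittaker function $M(a,b;z)$ used above.

```python
from mpmath import mp, besselj, exp, re, sqrt, whitm

mp.dps = 30
s = mp.mpf("0.01")                 # small sigma_0 (illustrative value)
k, tau = mp.mpf(1), mp.mpf(1)      # k_T and proper time (illustrative)

# g_eta and g_phi exactly as quoted in the text (Whittaker-function form)
g_eta = -k / (2 * s) * exp(s * tau**2 / 2) * whitm(-k**2 / (4 * s), mp.mpf(1) / 2, -s * tau**2)
g_phi = (1 / tau) * sqrt(2 / s) * exp(-s * tau**2 / 4) * whitm((k**2 - s) / (2 * s), 0, s * tau**2 / 2)

# Free-theory (sigma_0 -> 0) limits: tau J_1(k tau) and J_0(k tau)
print(abs(re(g_eta) - tau * besselj(1, k * tau)) < 0.01)  # True
print(abs(re(g_phi) - besselj(0, k * tau)) < 0.01)        # True
```

The residual differences are of order $\sigma_0$, consistent with the expressions reducing smoothly to the perturbative Bessel-function solution.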
In order to derive $\Psi_0(k_T)$ and $\Phi_0(k_T)$, let us use the energy density distribution and the requirement of the absence of field flow at the initial moment. In the mid-rapidity region ($\eta=0$) and at $\varphi=0$ (note that the transverse directions are equivalent in a system with cylindrical symmetry), when $T_{tt}=T_{\tau\tau}$, $T_{tx}=T_{\tau x}$, we have $$\begin{aligned} T_{tt}|_{\tau_0}&\equiv&\varepsilon(r_T)\nonumber\\ &=&\frac{1}{2}\left(\left.\frac{\partial_{r_T}\Psi}{r_T}\right|_{\tau_0}\right)^2 +\frac{1}{2}\left(\left.\frac{\partial_\tau\Phi}{\tau}\right|_{\tau_0}\right)^2,\\ T_{tx}|_{\tau_0}&=&0,\end{aligned}$$ where $\varepsilon(r_T)$ is assumed to be a function known from numerical calculations or from physical considerations. Our trick consists in dividing the energy density between the different field components: $$\partial_{r_T}\Psi|_{\tau_0}=\sqrt{\alpha}r_Tf(r_T),\quad \left.\frac{\partial_\tau\Phi}{\tau}\right|_{\tau_0}=\sqrt{1-\alpha}f(r_T),$$ where $f(r_T)\equiv\sqrt{2\varepsilon(r_T)}$ and $\alpha$ is a separation constant (in general, $\alpha$ should be a function of $r_T$). Since the potentials $\Psi$, $\Phi$ are real, one has $0\leq\alpha\leq1$. In principle, $\alpha$ is an arbitrary constant. In practice, it turns out that $\alpha\approx1/2$ (as follows from a comparison of the electric and magnetic field strengths within the numerical approach). Note that the observables of the source-free theory are independent of $\alpha$. Thus, one finds $$\Psi_0(k_T)=\sqrt{\alpha}\tilde f(k_T),\quad \Phi_0(k_T)=\sqrt{1-\alpha}\tilde f(k_T),$$ where $$\tilde f(k_T)=\int\limits_0^\infty f(r_T)J_0(k_Tr_T)r_Tdr_T.$$ These expressions finally determine the fields in our model.

APPLICATIONS
============

In the previous Sections we have formulated the model of Glasma in a hard partonic medium.
Since the classical field modes are usually interpreted as soft partons, their spatial dependence at the early stage of an A+A collision may be modeled within the framework of the Glauber model. However, the experimental data can also be efficiently explained using other distributions. As demonstrated in Ref. [@SKN], a Gaussian distribution of soft partons leads to adequate pion spectra produced after a collision at RHIC. To formulate the field initial conditions, we choose here the same approximation for the energy density at the initial moment, $$\varepsilon(r_T)=E\exp{\left(-\frac{r^2_T}{2R^2}\right)},$$ where $E=45$ GeV/fm${}^3$, $R=3.768$ fm. Then we find that $$\tilde f(k_T)=2^{3/2}\sqrt{E}R^2\exp{(-k_T^2R^2)}.$$ The boost-invariant distribution function $f^0$ (which defines the conductivities) is completely arbitrary at this point, so in order to proceed one needs to assume a specific form for it. In what follows we require that $f^0$ is obtained from the isotropic function $$\label{dist0} N_0\exp{\left(-\frac{p^0}{p_h}\right)},$$ by the replacement $y\to\theta$ in $p^0=p_T\cosh{y}$ and by rescaling one dimension in momentum space, $$\label{dist1} f^0=N(\zeta)\exp{\left(-\frac{p_T}{p_h}\sqrt{\cosh^2{\theta}+\zeta\sinh^2{\theta}}\right)},$$ where $p_h$ plays the role of the saturation momentum, $\zeta>-1$ is a parameter reflecting the strength of the partonic-medium anisotropy, and $N(\zeta)$ is a normalization constant. Note that $\zeta>0$ corresponds to a contraction of the distribution in the $z$-direction, whereas $-1<\zeta<0$ corresponds to a stretching of the distribution in the $z$-direction.
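The closed form of $\tilde f(k_T)$ for this Gaussian energy density can be verified by direct numerical evaluation of the Bessel-Fourier (Hankel) transform defined earlier, $\tilde f(k_T)=\int_0^\infty f(r_T)J_0(k_Tr_T)r_T\,dr_T$ with $f=\sqrt{2\varepsilon}$; a sketch using SciPy:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

E, R = 45.0, 3.768               # GeV/fm^3 and fm, as quoted in the text

def f(rT):                        # f(r_T) = sqrt(2 eps(r_T))
    return np.sqrt(2.0 * E) * np.exp(-rT**2 / (4.0 * R**2))

def f_tilde_numeric(kT):          # Bessel-Fourier (Hankel) transform of f
    return quad(lambda r: f(r) * j0(kT * r) * r, 0.0, np.inf)[0]

def f_tilde_closed(kT):           # closed form quoted in the text
    return 2.0**1.5 * np.sqrt(E) * R**2 * np.exp(-kT**2 * R**2)

for kT in (0.0, 0.1, 0.3):
    print(abs(f_tilde_numeric(kT) - f_tilde_closed(kT)) < 1e-3)  # True
```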
The constant $N(\zeta)$ is simply determined by requiring the number density to be the same for both the isotropic and anisotropic systems, and can be evaluated (by integration over the momentum variables) to be $$N(\zeta)=N_0\sqrt{1+\zeta}.$$ Integrating over momentum, the multiplier defining the conductivities is $$\sigma_0=4\pi g^2 N_0 p^2_hC(\zeta),$$ where $$C(\zeta)=\frac{2}{3}(1+\zeta)^{3/2} F\left(\left[2,\frac{3}{2}\right],\left[\frac{5}{2}\right],-\zeta\right).$$ The coefficient $C(\zeta)$ in the region $\zeta\in(-1,\infty)$ is determined by the hypergeometric function $F$ and is such that $C(-1)=0$ (the case of the source-free theory), $C(0)=2/3$ (for an isotropic medium), and $C(\infty)=\pi/2$.

![\[fig\] Time evolution of the field energy density split into longitudinal and transverse electric ($E$) and magnetic ($B$) components at $r_T=0.1$ fm, $\eta=0$, $\varphi=0$. The top panel corresponds to the free theory with $\sigma_0=0$. The bottom panel demonstrates the growth of $E_L$ and $B_T$ at $\sigma_0=0.25$ fm${}^{-2}$.](fig1.eps){width="8.cm"} ![](fig2.eps){width="8.32cm"}

Fig. 1 shows how the exponentially growing energy transferred from the hard to the soft partons is distributed among the magnetic and electric fields at $\sigma_0\not=0$, in comparison with the free field theory. The dominant contribution still resides in the longitudinal electric field (in accordance with the CGC-like initial conditions). Nevertheless, we see that the transverse magnetic field also demonstrates unstable behavior, while this effect is absent in the free theory.
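The limiting values of the anisotropy coefficient $C(\zeta)$ quoted above can be reproduced with SciPy's Gauss hypergeometric function; the specific $\zeta$ values below are illustrative checkpoints.

```python
import math
from scipy.special import hyp2f1

def C(zeta):
    """Anisotropy coefficient entering sigma_0:
    C(zeta) = (2/3) (1+zeta)^(3/2) 2F1(2, 3/2; 5/2; -zeta)."""
    return (2.0 / 3.0) * (1.0 + zeta)**1.5 * hyp2f1(2.0, 1.5, 2.5, -zeta)

print(abs(C(0.0) - 2.0 / 3.0) < 1e-12)     # isotropic medium: C(0) = 2/3
print(C(10.0) < C(100.0) < math.pi / 2)     # grows with the anisotropy
print(abs(C(1e4) - math.pi / 2) < 0.05)     # approaches pi/2 as zeta -> inf
```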
Since the particle subsystem gives energy to the field, the total field energy density (the sum of the components) tends to grow with time. Note that a similar model with an expanding Abelian field coupled to hard partons, in which the strict boost invariance of the fields is relaxed, has already been developed in Ref. [@RR] in the context of the quark-gluon plasma.

CONCLUSIONS
===========

Generalizing the space-time evolution of the expanding Glasma with CGC-like initial conditions by including a small density of hard partons anisotropically distributed in momentum space, we observe instabilities (due to the transfer of energy from hard to soft partons) in the abelianized boost-invariant model. Here we propose to measure the anisotropy by means of transport coefficients, such as the conductivity tensor, in contrast with the usual approach based on the energy-momentum tensor. As a result, the instabilities in the system under consideration lead to the conclusion that a negative color conductivity is present in A+A collisions. The sign of the conductivity in the transverse plane depends on the angle, which points to the filamentation inherent to Weibel instabilities in plasma. Unfortunately, the approximate solution derived for the transport equations does not permit us to achieve isotropization here. This problem should be investigated in detail and will be published elsewhere.

ACKNOWLEDGEMENTS {#acknowledgements .unnumbered}
================

The research of the author was partially supported by the Foundation of the Department of Physics and Astronomy of the NAS of Ukraine.

APPENDIX A. PROPAGATOR OF TRANSPORT EQUATION {#appendix-a.-propagator-of-transport-equation .unnumbered}
============================================

Here we discuss in detail the approximation applied previously. To find $\delta f$, we have to determine the operator $\hat L^{-1}$, inverse to the first-order evolution operator $\hat L$.
The inversion procedure for the evolution operator of the transport equation was elaborated by Landau and, generally speaking, results in the emergence of Landau damping in plasma. Let $G$ be the solution of the following equation: $$\hat LG(\tau,\theta,{\bf r}_T|\tau^\prime,\theta^\prime,{\bf r}_T^\prime) =\frac{\delta(\tau-\tau^\prime)}{\tau}\delta(\theta-\theta^\prime) \delta^2({\bf r}_T-{\bf r}_T^\prime),$$ where the right-hand side is the 4-dimensional $\delta$-function with respect to the pseudo-cylindrical measure $\tau d\tau d\theta d^2r_T$. Without loss of generality, we restrict ourselves to the case $\tau_0\to0$ and use the transverse coordinates ${\bf r}_T$ instead of the cylindrical $(r_T,\varphi)$. Using the formulas from Appendix B, $G$ is represented as $$G(\tau,\theta,{\bf r}_T|\tau^\prime,\theta^\prime,{\bf r}_T^\prime)=\lim_{\varepsilon\to+0} \frac{i}{(2\pi)^4p_T}\int\frac{{\rm e}^{i\omega[\tau\cosh{(\xi-\theta)}-\tau^\prime\cosh{(\xi-\theta^\prime)}] -i{\bf k}_T({\bf r}_T-{\bf r}^\prime_T)}} {{\bf k}_T{\bf v}_T-\omega\cosh{\xi}-i\varepsilon} \omega d\omega d\xi d^2k_T,$$ where ${\bf v}_T\equiv{\bf p}_T/p_T$ and the Landau damping is already taken into account through the auxiliary formula $$\lim_{\varepsilon\to+0}\frac{1}{x-i\varepsilon}={\cal P}\frac{1}{x}+i\pi\delta(x).$$ Discarding the spatial dispersion, we have to assume that $k_T\ll\omega\cosh\xi$. This leads to the simplification $$\begin{aligned} &&G(\tau,\theta,{\bf r}_T|\tau^\prime,\theta^\prime,{\bf r}_T^\prime)\approx \frac{1}{p_T}\Theta(\tau\cosh{\theta}-\tau^\prime\cosh{\theta^\prime}) \nonumber\\ &&\times\delta(\tau\sinh{\theta}-\tau^\prime\sinh{\theta^\prime}) \delta^2({\bf r}_T-{\bf r}^\prime_T),\end{aligned}$$ where $\Theta$ is the Heaviside function defined as $\Theta(x<0)=0$, $\Theta(x=0)=1/2$, $\Theta(x>0)=1$. This approximation says that the system is homogeneous at sufficiently large times and that the fluctuations are localized in space.
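The auxiliary (Sokhotski-Plemelj) formula used above can be illustrated numerically with a Gaussian test function $e^{-x^2}$: the real part is a vanishing principal value (odd integrand) and the imaginary part tends to $\pi f(0)=\pi$. The cutoff $\pm40$ and $\varepsilon=10^{-3}$ below are illustrative choices.

```python
import numpy as np
from scipy.integrate import quad

def sokhotski(eps):
    """Real and imaginary parts of int exp(-x^2)/(x - i eps) dx."""
    re_part = quad(lambda x: np.exp(-x**2) * x / (x**2 + eps**2),
                   -40.0, 40.0, points=[0.0])[0]
    im_part = quad(lambda x: np.exp(-x**2) * eps / (x**2 + eps**2),
                   -40.0, 40.0, points=[0.0])[0]
    return re_part, im_part

re_part, im_part = sokhotski(1e-3)
# Principal value of the odd integrand vanishes; the delta term gives pi
print(abs(re_part) < 1e-4, abs(im_part - np.pi) < 5e-3)  # True True
```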
Furthermore, let the Bjorken scaling flow, for which $\theta\approx0$, take place. This means that $\theta$ and $\theta^\prime$ should be equal, which gives $$G(\tau,\theta,{\bf r}_T|\tau^\prime,\theta^\prime,{\bf r}_T^\prime)\approx \frac{\Theta(\tau-\tau^\prime)}{p_T\tau^\prime\cosh{\theta^\prime}} \delta(\theta-\theta^\prime)\delta^2({\bf r}_T-{\bf r}^\prime_T).$$ More precisely, this can be derived from the condition $\tau\sinh\theta={\rm const}$, resulting in $$\sinh\theta\, d\tau+\tau\cosh\theta\, d\theta=0,$$ where $d\tau=\tau-\tau^\prime$, $d\theta=\theta-\theta^\prime$. Assuming that $\tau$ is small and that the integrand does not change appreciably over this time range, we can make the following replacement: $$\int\limits_0^\infty d\tau^\prime\Theta(\tau-\tau^\prime)F(\tau^\prime) \to \tau F(\tau).$$ Then one obtains $$\begin{aligned} &&\int G(\tau,\theta,{\bf r}_T|\tau^\prime,\theta^\prime,{\bf r}_T^\prime) F(\tau^\prime,\theta^\prime,{\bf r}_T^\prime) \tau^\prime d\tau^\prime d\theta^\prime d^2r^\prime_T\approx\nonumber\\ &&\approx\frac{\tau}{p_T\cosh{\theta}}F(\tau,\theta,{\bf r}_T).\end{aligned}$$ This formula determines the solution of the inhomogeneous transport equation with a source $F$. APPENDIX B.
INTEGRAL TRANSFORMATION {#appendix-b.-integral-transformation .unnumbered}
===================================

The Fourier transformation reads $$f(t,z)=\int\limits_{-\infty}^\infty\frac{d\mu d\mu^\prime}{(2\pi)^2} \int\limits_{-\infty}^\infty f(p,s){\rm e}^{i\mu(t-p)-i\mu^\prime(z-s)}dpds.$$ Let us introduce the new variables: $$\begin{aligned} &&t=\tau\cosh{\theta},\quad z=\tau\sinh{\theta},\nonumber\\ &&p=\rho\cosh{\psi},\quad s=\rho\sinh{\psi},\nonumber\\ &&\mu=\lambda\cosh{\phi},\quad \mu^\prime=\lambda\sinh{\phi}.\nonumber\end{aligned}$$ If $f(t,z)=F(\tau,\theta)$, one obtains the following transformation rule: $$\begin{aligned} \tilde F(\lambda,\phi)=\int\limits_{-\infty}^\infty F(\rho,\psi) {\rm e}^{-i\rho\lambda\cosh{(\phi-\psi)}}\rho d\rho d\psi,&&\nonumber\\ F(\tau,\theta)=\frac{1}{(2\pi)^2}\int\limits_{-\infty}^\infty\tilde F(\lambda,\phi) {\rm e}^{i\tau\lambda\cosh{(\phi-\theta)}}\lambda d\lambda d\phi.&&\nonumber\end{aligned}$$ For example, we find that $$\frac{1}{\tau}\delta(\Delta\tau)\delta(\Delta\theta)=\frac{1}{(2\pi)^2}\int\limits_{-\infty}^\infty {\rm e}^{i\lambda[\tau\cosh{(\phi-\Delta\theta)}-\tau_0\cosh{\phi}]}\lambda d\lambda d\phi,$$ where $\Delta\tau=\tau-\tau_0$ and $\Delta\theta=\theta-\theta_0$.
--- abstract: | We undertook a long term project, DIRECT, to obtain the direct distances to two important galaxies in the cosmological distance ladder – M31 and M33 – using detached eclipsing binaries (DEBs) and Cepheids. While rare and difficult to detect, DEBs provide us with the potential to determine these distances with an accuracy better than 5%. The extensive photometry obtained in order to detect DEBs provides us with good light curves for the Cepheid variables. These are essential to the parallel project to derive direct Baade-Wesselink distances to Cepheids in M31 and M33. For both Cepheids and eclipsing binaries, the distance estimates will be free of any intermediate steps. As a first step in the DIRECT project, between September 1996 and October 1997 we obtained 95 full/partial nights on the F. L. Whipple Observatory 1.2 m telescope and 36 full nights on the Michigan-Dartmouth-MIT 1.3 m telescope to search for DEBs and new Cepheids in the M31 and M33 galaxies. In this paper, fifth in the series, we present the catalog of variable stars found in the field M31F $[(\alpha,\delta)= (10.\!\!\arcdeg10, 40.\!\!\arcdeg72), {\rm J2000.0}]$. We have found 64 variable stars: 4 eclipsing binaries, 52 Cepheids and 8 other periodic, possible long period or non-periodic variables. The catalog of variables, as well as their photometry and finding charts, is available via [anonymous ftp]{} and the [World Wide Web]{}. The complete set of the CCD frames is available upon request. author: - 'B. J. Mochejska and J. Kaluzny' - 'K. Z. Stanek, M. Krockenberger and D. D. Sasselov' title: 'DIRECT Distances to Nearby Galaxies Using Detached Eclipsing Binaries and Cepheids. V. Variables in the Field M31F[^1]' --- Introduction ============ Starting in 1996 we undertook a long term project, DIRECT (as in “direct distances”), to obtain the distances to two important galaxies in the cosmological distance ladder – M31 and M33 – using detached eclipsing binaries (DEBs) and Cepheids. 
These two nearby galaxies are stepping stones to most of our current effort to understand the evolving universe at large scales. First, they are essential to the calibration of the extragalactic distance scale (Jacoby et al. 1992; Tonry et al. 1997). Second, they constrain population synthesis models for early galaxy formation and evolution and provide the stellar luminosity calibration. There is one simple requirement for all this—accurate distances. DEBs have the potential to establish distances to M31 and M33 with an unprecedented accuracy of better than 5% and possibly to better than 1%. These distances are now known to no better than 10-15%, as there are discrepancies of $0.2-0.3\;{\rm mag}$ between various distance indicators (e.g. Huterer, Sasselov & Schechter 1995; Holland 1998; Stanek & Garnavich 1998). Detached eclipsing binaries (for reviews see Andersen 1991; Paczyński 1997) offer a single step distance determination to nearby galaxies and may therefore provide an accurate zero point calibration—a major step towards very accurate determination of the Hubble constant, presently an important but daunting problem for astrophysicists. A DEB system was recently used by Guinan et al. (1998) and Udalski et al. (1998) to obtain an accurate distance estimate to the Large Magellanic Cloud. The detached eclipsing binaries have yet to be used (Huterer et al. 1995; Hilditch 1996) as distance indicators to M31 and M33. According to Hilditch (1996), there were about 60 eclipsing binaries of all kinds known in M31 (Gaposchkin 1962; Baade & Swope 1963, 1965) and only [*one*]{} in M33 (Hubble 1929), none of them observed with CCDs. Only now does the availability of large-format CCD detectors and inexpensive CPUs make it possible to organize a massive search for periodic variables, which will produce a handful of good DEB candidates. These can then be spectroscopically followed-up with the powerful new 6.5-10 meter telescopes. 
The study of Cepheids in M31 and M33 has a venerable history (Hubble 1926, 1929; Gaposchkin 1962; Baade & Swope 1963, 1965). Freedman & Madore (1990) and Freedman, Wilson & Madore (1991) obtained multi-band CCD photometry of some of the already known Cepheids, to build period-luminosity relations in M31 and M33, respectively. However, both the sparse photometry and the small samples (11 Cepheids in M33 and 38 Cepheids in M31) do not provide a good basis for obtaining direct Baade-Wesselink distances (see, e.g., Krockenberger, Sasselov & Noyes 1997) to Cepheids—the need for new digital photometry has been long overdue. Recently, Magnier et al. (1997) surveyed large portions of M31, which have previously been ignored, and found some 130 new Cepheid variable candidates. Their light curves are, however, rather sparsely sampled and in the $V$-band only. In Kaluzny et al. (1998, 1999, hereafter: Papers I and IV) and Stanek et al. (1998, 1999, hereafter: Papers II and III), the first four papers of the series, we presented the catalogs of variable stars found in four fields in M31, called M31B, M31A, M31C and M31D. Here we present the catalog of variables from the field M31F. In Sec. 2 we discuss the selection of the fields in M31 and the observations. In Sec. 3 we describe the data reduction and calibration. In Sec. 4 we discuss briefly the automatic selection we used for finding the variable stars. In Sec. 5 we discuss the classification of the variables. In Sec. 6 we present the catalog of variable stars, followed by a brief discussion of the results in Sec. 7.

Field selection and observations
================================

M31 was primarily observed in 1996 with the 1.3 m McGraw-Hill Telescope at the Michigan-Dartmouth-MIT (MDM) Observatory.
We used the front-illuminated, Loral $2048^2$ CCD “Wilbur” (Metzger, Tonry & Luppino 1993), which at the $f/7.5$ station of the 1.3 m telescope has a pixel scale of $0.32\; arcsec\; pixel^{-1}$ and field of view of roughly $11\;arcmin$. We used Kitt Peak Johnson-Cousins $BVI$ filters. Data for M31 were also obtained, mostly in 1997, with the 1.2 m telescope at the F. L. Whipple Observatory (FLWO), where we used “AndyCam” (Szentgyorgyi et al. 1999), with a thinned, back-illuminated, AR coated Loral $2048^2$ pixel CCD. The pixel scale happens to be essentially the same as at the MDM 1.3 m telescope. We used standard Johnson-Cousins $BVI$ filters. Fields in M31 were selected using the MIT photometric survey of M31 by Magnier et al. (1992) and Haiman et al. (1994) (see Paper I, Fig.1). We selected six $11'\times11'$ fields, M31A–F, four of them (A–D) concentrated on the rich spiral arm in the northeast part of M31, one (E) coinciding with the region of M31 searched for microlensing by Crotts & Tomaney (1996), and one (F) containing the giant star formation region known as NGC206 (observed by Baade & Swope 1965). Fields A–C were observed during September and October 1996 five to eight times per night in the $V$ band, resulting in total of 110–160 $V$ exposures per field. Fields D–F were observed once a night in the $V$-band. Some exposures in $B$ and $I$ were also taken. M31 was also observed, in 1996 and 1997, at the FLWO 1.2 m telescope, whose main target was M33. In this paper we present the results for the M31F field. We obtained for this field useful data during 29 nights at the MDM, collecting a total of $28\times 900\;sec$ exposures in $V$ and $2\times 600\;sec$ exposures in $I$. 
We also obtained for this field useful data during 22 nights at the FLWO, in 1996 and 1997, collecting a total of $80\times 900\;sec$ exposures in $V$, $67\times 600\;sec$ exposures in $I$ and $7\times 1200\;sec$ exposures in $B$.[^2]

Data reduction, calibration and astrometry
==========================================

The details of the reduction procedure were given in Paper I. Preliminary processing of the CCD frames was done with the standard routines in the IRAF-CCDPROC package.[^3] Stellar profile photometry was extracted using the [*Daophot/Allstar*]{} package (Stetson 1987, 1992). We selected a “template” frame for each filter using a single frame of particularly good quality. These template images were reduced in a standard way (Paper I). Other images were reduced using [*Allstar*]{} in the fixed-position mode, using as input the transformed object list from the template frames. The instrumental photometry derived for each frame was then transformed to the common instrumental system of the appropriate “template” image. Photometry obtained for the $B,V$ and $I$ filters was combined into separate data bases. M31F $I$-band images obtained at the MDM were reduced using FLWO “templates”. Two templates were used in the case of $V$-band images. An MDM template was used to fix the positions of the stars. The $V$ photometry was transformed to the instrumental system of an FLWO template. The photometric $VI$ calibration of the MDM data was discussed in Paper I. In addition, for the field M31F on the night of 1997 October 9/10 we have obtained independent $BVI$ calibration with the FLWO 1.2 m telescope. There was an offset of $-0.020\;{\rm mag}$ in $V$ and $0.057\;{\rm mag}$ in $V-I$ between the FLWO and the MDM calibration. The $V$ offset is well within our estimate of the total $0.05\;mag$ systematic error discussed in Paper I, and the $V-I$ offset falls slightly above.
We also derived equatorial coordinates for all objects included in the data bases for the $V$ filter. The transformation from rectangular coordinates to equatorial coordinates was derived using 79 stars identified in the USNO-A2.0 catalog. We have also compared these coordinates with those given by Magnier et al. (1992) and find a good agreement, with an average difference of $0.5\; arcsec$ for 66 of the transformation stars found in both catalogs.

Selection of variables
======================

The procedure for selecting the variables was described in detail in Paper I, so here we only give a short description, noting changes when necessary. The reduction procedure described in the previous section produces databases of calibrated $BVI$ magnitudes and their standard errors. The $V$ database for the M31F field contains 7997 stars, with up to 108 measurements, the $I$ database contains 26540 stars with up to 69 measurements and the $B$ database contains 3382 stars with up to 7 measurements. Figure \[fig:dist\] shows the distributions of stars as a function of mean $\bar{B}$, $\bar{V}$ or $\bar{I}$ magnitude. As can be seen from the shape of the histograms, our completeness starts to drop rapidly at about $\bar{B}\sim22$, $\bar{V}\sim22$ and $\bar{I}\sim20.5$. The primary reason for this difference in the depth of the photometry between $BV$ and $I$ is the level of the combined sky and background light, which is about three times higher in the $I$ filter than in the $BV$ filters. The measurements flagged as “bad” (with unusually large [*Daophot*]{} errors, compared to other stars) and measurements with errors exceeding the average error, for a given star, by more than $4\sigma$ are removed. Usually zero to 10 points are removed, leaving the majority of stars with roughly $N_{good}\sim95-105$ $V$ measurements. For further analysis we use only those stars that have at least $N_{good}>N_{max}/2\;(=54)$ measurements.
There are 5838 such stars in the $V$ database of the M31F field. Our next goal is to select a sample of variable stars from the total sample defined above. There are many ways to proceed, and we largely follow the approach of Stetson (1996), also described in Paper I. In short, for each star we compute the Stetson’s variability index $J_S$ (Paper I, Eq.7), and stars with values exceeding some minimum value $J_{S,min}$ are considered candidate variables. The definition of $J_S$ is rooted in the assumption that on each visit to the program field at least one pair of observations is obtained, and only when both observations have the residual from the mean of the same sign does the pair contribute positively to the variability index. The definition of Stetson’s variability index includes the standard errors of individual observations. If, for some reason, these errors were over- or underestimated, we would either miss real variables, or select spurious variables as real ones. Using the procedure described in Paper I, we scale the [*Daophot*]{} errors to better represent the “true” photometric errors. We then select the candidate variable stars by computing the value of $J_S$ for the stars in our $V$ database. We used a cutoff of $J_{S,min}=0.75$ and additional cuts described in Paper I to select 122 candidate variable stars (about 2% of the total number of 5838). In Figure \[fig:stetj\] we plot the variability index $J_S$ vs. apparent visual magnitude $\bar{V}$ for 5838 stars with $N_{good}>54$. Period determination, classification of variables ================================================= We based our candidate variables selection on the $V$ band data collected at the MDM and the FLWO telescopes. We also have the $BI$-bands data for the field, up to 69 $I$-band epochs and up to 7 $B$-band epochs, although for a variety of reasons some of the candidate variable stars do not have a $B$ or $I$-band counterpart. 
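The pair-based variability index described above can be sketched in a few lines. This is a simplified illustration, not the DIRECT pipeline: the published index of Stetson (1996) additionally weights pairs and uses a robust mean, which we omit here.

```python
# Simplified sketch of a Stetson-like J variability index (Stetson 1996).
# Assumes the observations are time-ordered so that same-visit pairs are
# adjacent in the arrays; per-pair weights are omitted for brevity.
import numpy as np

def stetson_j(mag, err):
    n = len(mag)
    # error-normalised residuals from the mean magnitude
    delta = np.sqrt(n / (n - 1.0)) * (mag - np.mean(mag)) / err
    pk = [delta[i] * delta[i + 1] for i in range(0, n - 1, 2)]
    if n % 2:                          # unpaired leftover observation
        pk.append(delta[-1] ** 2 - 1.0)
    pk = np.asarray(pk)
    # a pair contributes positively only when both residuals share a sign
    return float(np.mean(np.sign(pk) * np.sqrt(np.abs(pk))))

# toy check (hypothetical data): two back-to-back exposures per visit
rng = np.random.default_rng(1)
visits = np.sort(rng.uniform(0.0, 100.0, 50))
t = np.repeat(visits, 2)
signal = 0.3 * np.sin(2.0 * np.pi * t / 5.7)      # a 5.7 d "Cepheid"
err = np.full(t.size, 0.05)
variable = 20.0 + signal + rng.normal(0.0, 0.05, t.size)
constant = 20.0 + rng.normal(0.0, 0.05, t.size)
```

With such same-visit pairs, a genuine variable yields an index well above the $J_{S,min}=0.75$ cut used in this paper, while a constant star scatters around zero.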
We will therefore not use the $BI$ data for the period determination and broad classification of the variables. We will however use the $BI$ data for the “final” classification of some variables. Next, we searched for periodicities for all 122 candidate variables, using a variant of the Lafler-Kinman (1965) string-length technique proposed by Stetson (1996). Starting with the minimum period of $0.25\;days$, successive trial periods are chosen so that $$P_{j+1}^{-1}=P_{j}^{-1}-\frac{0.02}{\Delta t},$$ where $\Delta t=t_{N}-t_{1}=398\;days$ is the time span of the series. The maximum period considered is $150\;days$. For each candidate variable the 10 best trial periods are selected (Paper I) and then used in our classification scheme. The variables we are most interested in are Cepheids and eclipsing binaries (EBs). We therefore searched our sample of variable stars for these two classes of variables. As mentioned before, for the broad classification of variables we restricted ourselves to the $V$ band data. We will, however, present and use the $BI$-bands data, when available, when discussing some of the individual variable stars. For EBs, we used the search strategy described in Paper II. Within our assumption the light curve of an EB is determined by nine parameters: the period, the zero point of the phase, the eccentricity, the longitude of periastron, the radii of the two stars relative to the binary separation, the inclination angle, the fraction of light coming from the bigger star and the uneclipsed magnitude. A total of six variables passed all of the criteria. We then went back to the CCD frames and tried to see by eye if the inferred variability is indeed there, especially in cases when the light curve is very noisy/chaotic. We decided to remove two dubious eclipsing binaries, classifying one as a periodic variable with half the determined period. Its light curve is presented in Section 6.3.
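The trial-period grid above, combined with a minimal string-length statistic, can be sketched as follows. This is our illustrative code, not the authors' implementation; Stetson's (1996) variant of the Lafler-Kinman statistic is more refined.

```python
# Hedged sketch of a string-length period search on the trial-period grid
# P_{j+1}^{-1} = P_j^{-1} - 0.02/Delta_t, from P_min = 0.25 d to P_max = 150 d.
import numpy as np

def string_length(phase, mag):
    """Sum of |Delta mag| between phase-ordered neighbours (smaller = smoother)."""
    m = mag[np.argsort(phase)]
    # close the loop: the last point connects back to the first
    return np.abs(np.diff(m)).sum() + abs(m[-1] - m[0])

def best_periods(t, mag, p_min=0.25, p_max=150.0, n_best=10):
    dt = t.max() - t.min()                                 # time span of the series
    freqs = np.arange(1.0 / p_min, 1.0 / p_max, -0.02 / dt)  # descending frequencies
    lengths = np.array([string_length((t * f) % 1.0, mag) for f in freqs])
    best = np.argsort(lengths)[:n_best]
    return 1.0 / freqs[best]          # n_best trial periods, shortest string first

# toy usage (hypothetical data): a noisy 6.5 d sinusoid over a 398 d baseline
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 398.0, 100))
mag = 21.0 + 0.3 * np.sin(2.0 * np.pi * t / 6.5) + rng.normal(0.0, 0.03, 100)
periods = best_periods(t, mag)
print(periods[0])    # close to the input 6.5 d period
```

Folding the data at a wrong trial period scrambles the phases and lengthens the string, so the shortest-string periods cluster around the true value; the 10 retained candidates then feed the classification step.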
The remaining four EBs with their parameters and light curves are presented in Section 6.1. In the search for Cepheids we followed the approach by Stetson (1996) of fitting template light curves to the data. We used the parameterization of Cepheid light curves in the $V$-band as given by Stetson (1996). There was a total of 64 variables passing all of the criteria (Papers I and II), but after investigating the CCD frames we removed 12 dubious “Cepheids”, which leaves us with 52 probable Cepheids. Their parameters and light curves are presented in Section 6.2. After the selection of four eclipsing binaries, 52 Cepheids and one periodic variable, we were left with 65 “other” variable stars. After raising the threshold of the variability index to $J_{S,min}=1.2$ (Paper I) we are left with 17 variables. After investigating the CCD frames we removed 10 dubious variables from the sample, which leaves seven variables which we classify as miscellaneous. Their parameters and light curves are presented in Section 6.4.

Catalog of variables
====================

In this section we present light curves and some discussion of the 64 variable stars discovered by our survey in the field M31F. [^4] The variable stars are named according to the following convention: letter V for “variable”, the number of the star in the $V$ database, then the letter “D” for our project, DIRECT, followed by the name of the field, in this case (M)31F, e.g. V244 D31F. Tables \[table:ecl\], \[table:ceph\], \[table:per\] and \[table:misc\] list the variable stars sorted broadly by four categories: eclipsing binaries, Cepheids, other periodic variables and “miscellaneous” variables, in our case meaning “variables with no clear periodicity”.

Eclipsing binaries
------------------

In Table \[table:ecl\] we present the parameters of the four eclipsing binaries in the M31F field.
The light curves of these variables are shown in Figure \[fig:ecl\], along with the simple eclipsing binary models discussed in the Papers I and II. The variables are sorted in the Table \[table:ecl\] by the increasing value of the period $P$. For each eclipsing binary we present its name, J2000.0 coordinates (in degrees), period $P$, magnitudes $V_{max}, I_{max}$ and $B_{max}$ of the system outside of the eclipse, and the radii of the binary components $R_1,\;R_2$ in the units of the orbital separation. We also give the inclination angle of the binary orbit to the line of sight $i$ and the eccentricity of the orbit $e$. The reader should bear in mind that the values of $V_{max},\;I_{max},\;B_{max},\; R_1,\;R_2,\;i$ and $e$ are derived with a straightforward model of the eclipsing system, so they should be treated only as reasonable estimates of the “true” value. One of the eclipsing binaries found, V1835 D31F, is a good DEB candidate. However, a much better light curve is necessary to accurately establish the properties of the system.

[lrrrrrrrrcrl]{}
V207…& 10.2157 & 40.6399 & 2.5202 & 20.53 & 20.34 & 20.40 & 0.48 & 0.39 & 81 & 0.05 &
V4320& 10.1076 & 40.7456 & 4.6694 & 19.46 & & 19.17 & 0.58 & 0.42 & 70 & 0.00 &
V1835& 10.1433 & 40.7184 & 6.3452 & 20.06 & 19.62 & 19.93 & 0.37 & 0.30 & 86 & 0.03 & DEB
V763& 10.1807 & 40.7263 & 6.8930 & 19.60 & 19.42 & 19.55 & 0.64 & 0.31 & 63 & 0.02 & W UMa

\[table:ecl\]

Cepheids
--------

In Table \[table:ceph\] we present the parameters of 52 Cepheids in the M31F field, sorted by the period $P$. For each Cepheid we present its name, J2000.0 coordinates, period $P$, flux-weighted mean magnitudes $\langle V\rangle$ and (when available) $\langle I\rangle$ and $\langle B\rangle$, and the $V$-band amplitude of the variation $A$. In Figure \[fig:ceph\] we show the phased $B,V,I$ lightcurves of our Cepheids.
Also shown is the best fit template lightcurve (Stetson 1996), which was fitted to the $V$ data; for the $I$ data only a zero-point offset was allowed. For the $B$-band data, lacking the template lightcurve parameterization (Stetson 1996), we used the $V$-band template, allowing for different zero-points and amplitudes. With our limited amounts of $B$-band data this approach produces mostly satisfactory results, but extending the template-fitting approach of Stetson (1996) to the $B$-band (and possibly other popular bands) would be most useful. Some Cepheids seem to be brighter in the $B$ band than in $V$. This effect is most likely caused by blending, since these variables are located in regions densely populated by stars, but could partially be due to blue binary companions of Cepheids (Evans 1994; Evans & Udalski 1994).

Other periodic variables
------------------------

For one of the variables preliminarily classified as an eclipsing binary we decided upon closer examination to classify it as an “other periodic variable”. In Table \[table:per\] we present the parameters of this possible periodic variable. In Figure \[fig:per\] we show its phased $BVI$ lightcurves. We present its name, J2000.0 coordinates, period $P$, error-weighted mean magnitudes $\bar{V}$, $\bar{I}$ and $\bar{B}$. To quantify the amplitude of the variability, we also give the standard deviations of the measurements in the $BVI$ bands, $\sigma_{V},\sigma_{I}$ and $\sigma_{B}$. The period of V7438 D31F was taken to be half of the period determined by fitting a simple eclipsing binary lightcurve, so it should only be treated as a first approximation of its true value. Inspection of the $V,V-I$ and $V,B-V$ color-magnitude diagrams (Figure \[fig:cmd\]) reveals that the variable lands in the regions occupied by Cepheids. V7438 D31F has been previously identified by Baade & Swope (1965) as a Cepheid with a period of 5.12 days, very close to our value.
In the P-L diagram (Figure \[fig:pl\]), however, it is located above the region occupied by Cepheid variables, indicating that it is possibly a blend. Another fact which may favor this explanation is the small amplitude of its variability.

[lrrrrrrr]{}
V3441…& 10.1201 & 40.7667 & 4.678 & 21.28 & 20.52 & 21.18 & 0.35
V4254& 10.1111 & 40.6766 & 5.718 & 21.48 & 20.50 & 21.99 & 0.36
V7832& 9.9963 & 40.7972 & 5.814 & 21.39 & 20.47 & 22.02 & 0.41
V3732& 10.1189 & 40.6764 & 6.070 & 20.98 & 19.96 & 21.05 & 0.37
V3054& 10.1243 & 40.7911 & 6.105 & 21.09 & 20.13 && 0.31
V893& 10.1716 & 40.7771 & 6.505 & 21.64 & 20.71 && 0.40
V7441& 10.0247 & 40.6533 & 6.514 & 21.43 & 19.88 && 0.37
V3860& 10.1154 & 40.7284 & 6.529 & 21.24 & 19.65 & 22.08 & 0.36
V1599& 10.1470 & 40.7687 & 6.640 & 20.94 & 19.86 & 21.78 & 0.28
V5856& 10.0799 & 40.6725 & 6.660 & 21.24 & 20.37 & 21.99 & 0.32
V5711& 10.0828 & 40.6802 & 6.707 & 21.07 & 20.10 & 21.87 & 0.43
V6623& 10.0591 & 40.6814 & 6.999 & 20.86 & 20.04 && 0.35
V5886& 10.0788 & 40.6903 & 7.458 & 21.12 & 20.54 & 21.91 & 0.36
V6406& 10.0662 & 40.6569 & 7.563 & 20.95 & 20.24 && 0.35
V3289& 10.1232 & 40.7453 & 7.599 & 20.89 & 19.76 & 20.77 & 0.27
V5893& 10.0797 & 40.6513 & 7.655 & 21.00 & 20.02 & 21.70 & 0.34
V6962& 10.0462 & 40.6923 & 7.782 & 21.31 & 20.47 && 0.24
V7741& 10.0076 & 40.6885 & 8.099 & 21.03 & 20.17 & 20.81 & 0.24
V6098& 10.0748 & 40.6395 & 8.471 & 20.52 & 19.52 && 0.28
V4855& 10.0966 & 40.8035 & 9.064 & 20.70 & 19.70 && 0.30
V5498& 10.0883 & 40.6611 & 9.387 & 20.72 & 20.27 & 20.98 & 0.24
V7074& 10.0388 & 40.7778 & 9.478 & 20.78 & 19.70 & 21.51 & 0.30
V5097& 10.0967 & 40.6804 & 9.662 & 20.66 & 19.92 & 21.16 & 0.25
V6483& 10.0637 & 40.6739 & 9.736 & 20.83 & 20.13 & 21.20 & 0.23
V5994& 10.0774 & 40.6532 & 9.886 & 20.67 & 19.53 & 21.45 & 0.22
V4556& 10.1054 & 40.7022 & 9.894 & 20.54 & 19.41 & 21.20 & 0.25
V6195& 10.0722 & 40.6463 & 9.924 & 21.21 & 20.05 & 22.54 & 0.35
V5178& 10.0949 & 40.6807 & 9.932 & 20.30 & 19.30 & 20.18 & 0.22
V7393& 10.0278 & 40.6364 & 9.937 & 20.50 & 19.51 & 21.15 & 0.31
V3550& 10.1188 & 40.7607 & 10.468 & 20.55 & 19.85 & 21.03 & 0.32
V2320& 10.1341 & 40.7656 & 10.868 & 20.27 & 19.54 & 20.60 & 0.34
V4125& 10.1109 & 40.7332 & 11.139 & 20.46 & 19.63 & 21.11 & 0.31
V1549& 10.1488 & 40.7569 & 11.764 & 20.72 & 19.68 & 21.49 & 0.35
V5442& 10.0892 & 40.6766 & 11.902 & 19.65 & 18.58 & 19.45 & 0.16
V5696& 10.0806 & 40.7520 & 12.287 & 20.74 & 19.54 & 21.62 & 0.43
V5598& 10.0855 & 40.6741 & 12.311 & 20.34 & 19.45 & 20.96 & 0.37
V6267& 10.0684 & 40.6995 & 12.324 & 20.38 & 19.30 & 19.27 & 0.17
V2156& 10.1353 & 40.7925 & 12.831 & 20.37 & 19.40 & 20.76 & 0.27
V3373& 10.1199 & 40.7971 & 12.872 & 20.94 & 19.55 & 21.81 & 0.41
V4955& 10.0959 & 40.7808 & 13.034 & 20.70 & 19.67 & 21.45 & 0.41
V1633& 10.1461 & 40.7760 & 13.043 & 21.30 & 19.85 && 0.45
V1619…& 10.1503 & 40.6545 & 13.141 & 20.61 & 19.45 & 21.47 & 0.34
V4682& 10.1046 & 40.6552 & 13.297 & 20.73 & 19.30 & 21.72 & 0.34
V6640& 10.0593 & 40.6558 & 13.761 & 19.67 &&& 0.18
V4861& 10.0972 & 40.7849 & 13.991 & 20.47 & 19.31 & 21.22 & 0.41
V6503& 10.0615 & 40.7215 & 15.210 & 20.77 & 19.55 & 21.44 & 0.38
V4708& 10.1039 & 40.6631 & 15.690 & 20.50 & 19.33 & 21.38 & 0.35
V1893& 10.1415 & 40.7417 & 16.756 & 18.79 & 17.47 & 19.54 & 0.18
V6208& 10.0714 & 40.6561 & 17.572 & 19.74 & 18.78 & 20.28 & 0.48
V821& 10.1760 & 40.7640 & 20.547 & 21.35 & 19.61 && 0.55
V602& 10.1880 & 40.7392 & 31.416 & 21.05 & 19.12 && 0.31
V2203& 10.1369 & 40.7306 & 55.373 & 17.94 & 17.17 & 18.32 & 0.17

\[table:ceph\]

Miscellaneous variables
-----------------------

In Table \[table:misc\] we present the parameters of seven miscellaneous variables in the M31F field, sorted by increasing value of the mean magnitude $\bar{V}$. In Figure \[fig:misc\] we show the unphased $VI$ lightcurves of the miscellaneous variables. For each variable we present its name, J2000.0 coordinates and mean magnitudes $\bar{V}, \bar{I}$ and $\bar{B}$.
To quantify the amplitude of the variability, we also give the standard deviations of the measurements in $BVI$ bands, $\sigma_{V}, \sigma_{I}$ and $\sigma_{B}$. In the “Comments” column we give a rather broad sub-classification of the variability. All of the variables seem to represent the LP type of variability. A closer inspection of the color-magnitude diagrams (Figure \[fig:cmd\]) reveals that three variables (V667, V1229 and V2285 D31F) land in the same area as Cepheids. Based on their lightcurves it was possible to roughly estimate the periods of the first two to be around 90 and 100 days, respectively. Using these periods to place the stars on the P-L diagram (Figure \[fig:pl\]) suggests they may be RV Tauri type variables. V1665 and V1724 D31F are most likely Mira-type variables, based on their location in the color-magnitude diagrams.

[cccrccccccl]{}
V7438…& 10.0252 & 40.6442 & 5.1 & 20.41 & 19.41 & 21.02 & 0.15 & 0.11 & 0.13 & Cepheid?

\[table:per\]

[llllllllll]{}
V244…& 10.2103 & 40.7380 & 18.39 & 16.68 & 19.87 & 0.12 & 0.07 & 0.05 & LP
V1665& 10.1460 & 40.7562 & 19.27 & 16.01 & 21.40 & 0.20 & 0.16 & 0.10 & LP
V1724& 10.1446 & 40.7499 & 19.63 & 16.34 & 0.00 & 0.24 & 0.13 & 0.00 & LP
V2285& 10.1373 & 40.6841 & 19.67 & 18.81 & 19.94 & 0.08 & 0.07 & 0.08 & RV Tau?
V1229& 10.1590 & 40.7389 & 20.47 & 19.57 & 21.07 & 0.23 & 0.10 & 0.18 & RV Tau?
V764& 10.1775 & 40.8110 & 20.51 & 0.00 & 0.00 & 0.35 & 0.00 & 0.00 & LP
V667& 10.1852 & 40.7265 & 21.18 & 19.33 & 0.00 & 0.35 & 0.22 & 0.00 & RV Tau?

\[table:misc\]

Comparison with other catalogs
------------------------------

The area of the M31F field coincides with two overlapping fields observed by Baade. The catalogs of variable stars discovered in those fields are given by Gaposchkin (1962, field II) and Baade & Swope (1965, field III). We succeeded in the cross-identification of all but one of the 55 Cepheid variables found in field III with stars on our template.
We have discovered independently 24 of those Cepheids and found a very good agreement between the period determinations. We have also confirmed the periods of 23 other Cepheids, which eluded our detection, in large part due to their faintness and the strict criteria we have imposed in our process of Cepheid selection (see Table \[table:crossid\] for cross-ids). Out of the 38 unique Cepheid variables listed in the field II catalog, located within our M31F field, we have found 14 Cepheids and confirmed the periods of two. The remaining field II Cepheids have evaded positive cross-identification with our template stars. Another overlapping variable star catalog is given by Magnier et al. (1997, hereafter Ma97). Out of the three variable stars in Ma97 which are in our M31F field, we cross-identified one, also classifying it as a Cepheid. The other two did not qualify as variable star candidates because of low $J_S$ values.

[lrrrrrrr]{}
V3441…& 4.678 & & & 108 & 5.000 & &
V4254& 5.718 & 351 & 5.719 & 153 & 5.721 & &
V3732& 6.070 & 352 & 6.071 & 175 & 6.074 & &
V3054& 6.105 & & & 92 & 6.101 & &
V7441& 6.514 & 230 & 6.508 & & & &
V3860& 6.529 & & & 24 & 6.526 & &
V1599& 6.640 & & & 145 & 6.637 & &
V5856& 6.660 & 328 & 6.660 & 30 & 6.700 & &
V5711& 6.707 & 330 & 6.709 & 29 & 6.708 & &
V6623& 6.999 & 326 & 7.000 & & & &
V5886& 7.458 & 332 & 7.457 & 70 & 7.463 & &
V6406& 7.563 & 319 & 7.553 & & & &
V5893& 7.655 & 320 & 7.823 & & & &
V6962& 7.782 & 225 & 7.780 & & & &
V7741& 8.099 & 222 & 8.094 & & & &
V6098& 8.471 & 315 & 8.464 & & & &
V4855& 9.064 & & & 51 & 9.085 & &
V5097& 9.662 & 348 & 9.662 & 68 & 9.678 & &
V6483& 9.736 & 325 & 9.483 & & & &
V4556& 9.894 & 341 & 9.881 & 60 & 9.870 & &
V6195& 9.924 & 318 & 9.921 & & & &
V7393& 9.937 & 234 & 9.933 & & & Ma97 4& 9.0
V3550& 10.468 & & & 72 & 10.461 & &
V2320& 10.868 & & & 109 & 10.858 & &
V4125& 11.139 & & & 20 & 11.147 & &
V1549& 11.764 & & & 54 & 11.766 & &
V5696& 12.287 & & & 17 & 12.286 & &
V5598& 12.311 & 329 & 12.312 & 135 & 12.358 & &
V6267& 12.324 & 334 & 12.294 & 133 & 12.284 & &
V2156& 12.831 & & & 128 & 12.821 & &
V4955& 13.034 & & & 14 & 13.051 & &
V1633& 13.043 & & & 107 & 13.021 & &
V1619& 13.141 & 415 & 13.125 & 31 & 0.000 & &
V4682& 13.297 & 357 & 13.293 & & & &
V4861& 13.991 & & & 74 & 13.966 & &
V6503& 15.210 & 339 & 15.232 & 150 & 15.216 & &
V4708& 15.690 & 355 & 15.699 & 114 & 15.625 & &
V6208& 17.572 & & 17.569 & & & H22 & 17.60

\[table:crossid\]

Discussion
==========

In Figure \[fig:cmd\] we show $V,\;V-I$ and $V,\;B-V$ color-magnitude diagrams for the variable stars found in the field M31F. The eclipsing binaries and Cepheids are plotted in the left panels and the other periodic variables and miscellaneous variables are plotted in the right panels. As expected, the eclipsing binaries occupy the blue upper main sequence of M31 stars. The Cepheid variables group near $B-V\sim1.0$, with considerable scatter probably due to the differential reddening across the field. The other periodic variable is located on the CMD in the part occupied by Cepheids. The miscellaneous variables are scattered throughout the CMDs and represent several classes of variability. Two of them are very red with $V-I>2.0$, and are probably Mira variables. In Figure \[fig:xy\] we plot the location of eclipsing binaries and Cepheids in the field M31F, along with the blue stars ($B-V<0.4$) selected from the photometric survey of M31 by Magnier et al. (1992) and Haiman et al. (1994). The sizes of the circles representing the Cepheid variables are proportional to the logarithm of their period. As could have been expected, both types of variables group along the spiral arms, as they represent relatively young populations of stars. Many Cepheid variables are located in the star-forming region NGC206. We will explore various properties of our sample of Cepheids in a future paper (Sasselov et al. 1999, in preparation).

Andersen, J. 1991, A&AR, 3, 91
Baade, W., & Swope, H. H.
1963, AJ, 68, 435
Baade, W., & Swope, H. H. 1965, AJ, 70, 212
Crotts, A. P. S., & Tomaney, A. B. 1996, ApJ, 473, L87
Evans, N. R., & Udalski, A. 1994, AJ, 108, 653
Evans, N. R. 1994, ApJ, 436, 273
Freedman, W. L., & Madore, B. F. 1990, ApJ, 365, 186
Freedman, W. L., Wilson, C. D., & Madore, B. F. 1991, ApJ, 372, 455
Gaposchkin, S. 1962, AJ, 67, 334
Guinan, E. F., et al. 1998, ApJ, 509, L21
Haiman, Z., et al. 1994, A&A, 286, 725
Hilditch, R. W. 1996, in ASP Conf. Ser. 90, The Origins, Evolution and Destinies of Binary Stars in Clusters, ed. E. F. Milone & J.-C. Mermilliod (San Francisco: ASP), 207
Holland, S. 1998, AJ, 115, 1916
Hubble, E. 1926, ApJ, 63, 236
Hubble, E. 1929, ApJ, 69, 103
Huterer, D., Sasselov, D. D., & Schechter, P. L. 1995, AJ, 110, 2705
Jacoby, G. H., et al. 1992, PASP, 104, 599
Kaluzny, J., Stanek, K. Z., Krockenberger, M., Sasselov, D. D., Tonry, J. L., & Mateo, M. 1998, AJ, 115, 1016 (Paper I)
Kaluzny, J., Mochejska, B. J., Stanek, K. Z., Krockenberger, M., Sasselov, D. D., Tonry, J. L., & Mateo, M. 1999, AJ, in press (Paper IV) (astro-ph/9902382)
Krockenberger, M., Sasselov, D. D., & Noyes, R. 1997, ApJ, 479, 875
Lafler, J., & Kinman, T. D. 1965, ApJS, 11, 216
Landolt, A. 1992, AJ, 104, 340
Magnier, E. A., Augusteijn, T., Prins, S., van Paradijs, J., & Lewin, W. H. G. 1997, A&AS, 126, 401 (Ma97)
Magnier, E. A., Lewin, W. H. G., van Paradijs, J., Hasinger, G., Jain, A., Pietsch, W., & Truemper, J. 1992, A&AS, 96, 37
Metzger, M. R., Tonry, J. L., & Luppino, G. A. 1993, in ASP Conf. Ser. 52, Astronomical Data Analysis Software and Systems II, ed. R. J. Hanisch, R. J. V. Brissenden, & J. Barnes (San Francisco: ASP), 300
Monet, D., et al. 1996, USNO-SA2.0 (U.S. Naval Observatory, Washington, DC)
Paczyński, B. 1997, in The Extragalactic Distance Scale, ed. M. Livio, M. Donahue & N. Panagia (Cambridge: Cambridge Univ. Press), 273
Sasselov, D. D., et al. 1999, in preparation
Stanek, K. Z., Kaluzny, J., Krockenberger, M., Sasselov, D. D., Tonry, J. L., & Mateo, M. 1998, AJ, 115, 1894 (Paper II)
Stanek, K. Z., Kaluzny, J., Krockenberger, M., Sasselov, D. D., Tonry, J. L., & Mateo, M. 1999, AJ, in press (Paper III) (astro-ph/9901331)
Stanek, K. Z., & Garnavich, P. M. 1998, ApJ, 503, L131
Stetson, P. B. 1987, PASP, 99, 191
Stetson, P. B. 1992, in ASP Conf. Ser. 25, Astrophysical Data Analysis Software and Systems I, ed. D. M. Worrall, C. Bimesderfer, & J. Barnes (San Francisco: ASP), 297
Stetson, P. B. 1996, PASP, 108, 851
Szentgyorgyi, A., et al. 1999, in preparation
Tonry, J. L., Blakeslee, J. P., Ajhar, E. A., & Dressler, A. 1997, ApJ, 475, 399
Udalski, A., Pietrzyński, G., Woźniak, P. R., Szymański, M., Kubiak, M., & Żebruń, K. 1998, ApJ, 509, L25

[^1]: Based on observations collected at the F. L. Whipple Observatory (FLWO) 1.2 m telescope and at the Michigan-Dartmouth-MIT (MDM) 1.3 m telescope.

[^2]: The complete list of exposures for this field and related data files are available through [anonymous ftp]{} on [cfa-ftp.harvard.edu]{}, in the [pub/kstanek/DIRECT]{} directory. Please retrieve the [README]{} file for instructions. Additional information on the DIRECT project is available through the [WWW]{} at [http://cfa-www.harvard.edu/\~kstanek/DIRECT/]{}.

[^3]: IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the NSF.

[^4]: Complete $V$ and (when available) $BI$ photometry and $128\times128$ pixel ($\sim 40''\times40''$) $V$ finding charts for all variables are available from the authors via [anonymous ftp]{} from the Harvard-Smithsonian Center for Astrophysics and can also be accessed through the [World Wide Web]{}.
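The cross-identifications collected in Table \[table:crossid\] pair our Cepheids with entries from overlapping catalogs whose independently determined periods agree closely. As an illustration only, here is a minimal sketch of period-based matching; the function, the 1% tolerance, and the catalog variable names are assumptions and not the DIRECT pipeline, which also relies on positions on the sky.

```python
# Illustrative sketch of period-based cross-identification between two
# variable-star catalogs. The function, tolerance, and catalog labels are
# hypothetical; real cross-identification also uses sky positions.

def cross_identify(cat_a, cat_b, tol=0.01):
    """Pair stars whose periods (in days) agree to a fractional tolerance."""
    matches = []
    for name_a, p_a in cat_a:
        for name_b, p_b in cat_b:
            if abs(p_a - p_b) / p_a < tol:
                matches.append((name_a, name_b, p_a, p_b))
    return matches

# A few periods taken from the cross-identification table:
ours = [("V4254", 5.718), ("V3732", 6.070), ("V7441", 6.514)]
other = [("351", 5.719), ("352", 6.071)]
print(cross_identify(ours, other))
```

With these sample entries, V4254 and V3732 pair with their counterparts, while V7441 finds no match in the shortened second list.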
---
abstract: 'A few of the algebraic and topological properties of intuitionistic fuzzy continuity and uniformly intuitionistic fuzzy continuity are investigated. Also, the concept of uniformly intuitionistic fuzzy convergence is introduced, and thereafter a few results on uniformly intuitionistic fuzzy convergence are studied.'
author:
- 'Bivas Dinda, T. K. Samanta'
title: Intuitionistic Fuzzy Continuity and Uniform Convergence
---

**Key Words:** Intuitionistic fuzzy normed linear space, Intuitionistic fuzzy continuity, Cauchy sequence, Uniformly intuitionistic fuzzy continuity, Uniformly intuitionistic fuzzy convergence.\
**2000 Mathematics Subject Classification:** 03F55, 46S40.\
\
**Introduction**
=================

The concept of an intuitionistic fuzzy set, as a generalisation of fuzzy sets [@zadeh], was introduced by Atanassov [@Atanassov]. Intuitionistic fuzzy sets are used in the process of decision making. Cheng and Mordeson [@Shih-chuan] introduced the idea of a fuzzy norm on a linear space. Bag and Samanta [@Bag1] deduced a definition of fuzzy norm whose associated metric is the same as the associated metric of Cheng and Mordeson [@Shih-chuan].\
\
In this paper, after an introduction of the intuitionistic fuzzy norm [@Samanta] and intuitionistic fuzzy continuity [@Samanta] deduced from Bag and Samanta [@Bag1] and [@Bag2], it is shown that the class of intuitionistic fuzzy continuous functions is closed with respect to addition, multiplication, scalar multiplication and the inverse operation of multiplication. Also, intuitionistic fuzzy continuity is characterized in terms of open sets, and a few properties of open sets are proved in intuitionistic fuzzy normed linear spaces.
Thereafter the concept of uniformly intuitionistic fuzzy continuity is introduced, and it is proved that uniformly intuitionistic fuzzy continuity implies intuitionistic fuzzy continuity, but not conversely.\
In the last section, the concepts of intuitionistic fuzzy convergence and uniformly intuitionistic fuzzy convergence of a sequence of functions are introduced in an intuitionistic fuzzy normed linear space, and it is then proved that the intuitionistic fuzzy continuity of each term of a sequence of functions is transmitted to the limit function under uniformly intuitionistic fuzzy convergence of the sequence of functions.

**Preliminaries**
==================

We quote some definitions and statements of a few theorems which will be needed in the sequel.

[@Schweizer]. A binary operation $\ast : [0,1] \times [0,1] \longrightarrow [0,1]$ is a continuous $t$-norm if $\ast$ satisfies the following conditions:\
$(i)$ $\ast$ is commutative and associative,\
$(ii)$ $\ast$ is continuous,\
$(iii)$ $a \ast 1 = a \quad \forall\, a \in [0,1]$,\
$(iv)$ $a \ast b \leq c \ast d$ whenever $a \leq c$, $b \leq d$ and $a, b, c, d \in [0,1]$.

[@Schweizer]. A binary operation $\diamond : [0,1] \times [0,1] \longrightarrow [0,1]$ is a continuous $t$-conorm if $\diamond$ satisfies the following conditions:\
$(i)$ $\diamond$ is commutative and associative,\
$(ii)$ $\diamond$ is continuous,\
$(iii)$ $a \diamond 0 = a \quad \forall\, a \in [0,1]$,\
$(iv)$ $a \diamond b \leq c \diamond d$ whenever $a \leq c$, $b \leq d$ and $a, b, c, d \in [0,1]$.

[@Vijayabalaji].
$(a)$ For any $r_{1}, r_{2} \in (0,1)$ with $r_{1} > r_{2}$, there exist $r_{3}, r_{4} \in (0,1)$ such that $r_{1} \ast r_{3} > r_{2}$ and $r_{1} > r_{4} \diamond r_{2}$.\
\
$(b)$ For any $r_{5} \in (0,1)$, there exist $r_{6}, r_{7} \in (0,1)$ such that $r_{6} \ast r_{6} \geq r_{5}$ and $r_{7} \diamond r_{7} \leq r_{5}$.

[@Samanta]. Let $\ast$ be a continuous $t$-norm, $\diamond$ be a continuous $t$-conorm and $V$ be a linear space over the field $F \;(= \mathbb{R} \text{ or } \mathbb{C})$. An **intuitionistic fuzzy norm** on $V$ is an object of the form $A = \{\, ((x,t), \mu(x,t), \nu(x,t)) \,:\, (x,t) \in V \times \mathbb{R}^{+} \,\}$, where $\mu, \nu$ are fuzzy sets on $V \times \mathbb{R}^{+}$, $\mu$ denotes the degree of membership and $\nu$ denotes the degree of non-membership of $(x,t) \in V \times \mathbb{R}^{+}$, satisfying the following conditions:\
\
$(i)$ $\mu(x,t) + \nu(x,t) \leq 1 \quad \forall\, (x,t) \in V \times \mathbb{R}^{+}$;\
$(ii)$ $\mu(x,t) > 0$;\
$(iii)$ $\mu(x,t) = 1$ if and only if $x = \theta$;\
$(iv)$ $\mu(cx,t) = \mu(x, \frac{t}{|c|})$ $\;\forall\, c \in F$ and $c \neq 0$;\
$(v)$ $\mu(x,s) \ast \mu(y,t) \leq \mu(x+y, s+t)$;\
$(vi)$ $\mu(x,\cdot)$ is a non-decreasing function on $\mathbb{R}^{+}$ and $\lim_{t \to \infty} \mu(x,t) = 1$;\
$(vii)$ $\nu(x,t) < 1$;\
$(viii)$ $\nu(x,t) = 0$ if and only if $x = \theta$;\
$(ix)$ $\nu(cx,t) = \nu(x, \frac{t}{|c|})$ $\;\forall\, c \in F$ and $c \neq 0$;\
$(x)$ $\nu(x,s) \diamond \nu(y,t) \geq \nu(x+y, s+t)$;\
$(xi)$ $\nu(x,\cdot)$ is a non-increasing function on $\mathbb{R}^{+}$ and $\lim_{t \to \infty} \nu(x,t) = 0$.

[@Samanta]. If $A$ is an intuitionistic fuzzy norm on a linear space $V$ then $(V, A)$ is called an intuitionistic fuzzy normed linear space. For the intuitionistic fuzzy normed linear space $(V, A)$, we further assume that $\mu, \nu, \ast, \diamond$ satisfy the following axioms:\
\
$(xii)$ $a \ast a = a$ and $a \diamond a = a$, for all $a \in [0,1]$.\
$(xiii)$ $\mu(x,t) > 0$ for all $t > 0 \;\Rightarrow\; x = \theta$.\
$(xiv)$ $\nu(x,t) < 1$ for all $t > 0 \;\Rightarrow\; x = \theta$.\

[@Samanta]. A sequence $\{x_n\}_n$ in an intuitionistic fuzzy normed linear space $(V, A)$ is said to **converge** to $x \in V$ if for given $t > 0$ and $r$ with $0 < r < 1$, there exists an integer $n_0 \in \mathbb{N}$ such that\
$\mu(x_n - x, t) > 1 - r$ and $\nu(x_n - x, t) < r$ for all $n \geq n_0$.

[@Samanta]. A sequence $\{x_n\}_n$ in an intuitionistic fuzzy normed linear space $(V, A)$ is said to be a **Cauchy sequence** if $\lim_{n \to \infty} \mu(x_{n+p} - x_n, t) = 1$ and $\lim_{n \to \infty} \nu(x_{n+p} - x_n, t) = 0$ for all $t > 0$ and $p = 1, 2, 3, \ldots$

[@Samanta].
Let $(U, A)$ and $(V, B)$ be two intuitionistic fuzzy normed linear spaces over the same field $F$. A mapping $f$ from $(U, A)$ to $(V, B)$ is said to be **intuitionistic fuzzy continuous** at $x_0 \in U$ if for any given $\epsilon > 0$ and $\alpha \in (0,1)$, $\exists\, \delta = \delta(\alpha, \epsilon) > 0$ and $\beta = \beta(\alpha, \epsilon) \in (0,1)$ such that for all $x \in U$, $$\mu_U(x - x_0, \delta) > 1 - \beta \;\Rightarrow\; \mu_V(f(x) - f(x_0), \epsilon) > 1 - \alpha$$ $$\nu_U(x - x_0, \delta) < \beta \;\Rightarrow\; \nu_V(f(x) - f(x_0), \epsilon) < \alpha$$

[@Samanta]. A mapping $f$ from $(U, A)$ to $(V, B)$ is said to be **sequentially intuitionistic fuzzy continuous** at $x_0 \in U$ if for any sequence $\{x_n\}_n$ with $x_n \in U\; \forall\, n \in \mathbb{N}$, $x_n \rightarrow x_0$ in $(U, A)$ implies $f(x_n) \rightarrow f(x_0)$ in $(V, B)$, that is, $$\lim_{n \to \infty} \mu_U(x_n - x_0, t) = 1 \;\text{ and }\; \lim_{n \to \infty} \nu_U(x_n - x_0, t) = 0$$ $$\Rightarrow\; \lim_{n \to \infty} \mu_V(f(x_n) - f(x_0), t) = 1 \;\text{ and }\; \lim_{n \to \infty} \nu_V(f(x_n) - f(x_0), t) = 0$$

[@Samanta]. Let $f$ be a mapping from $(U, A)$ to $(V, B)$.
Then $f$ is intuitionistic fuzzy continuous on $U$ if and only if it is sequentially intuitionistic fuzzy continuous on $U$.

**Algebra of Intuitionistic Fuzzy Continuous Functions**
=========================================================

In this section, let $(U, A)$ and $(V, B)$ be any two intuitionistic fuzzy normed linear spaces over the same field $F$.\
If $f : (U, A) \rightarrow (V, B)$ and $g : (U, A) \rightarrow (V, B)$ are two sequentially intuitionistic fuzzy continuous functions and $(U, A)$ and $(V, B)$ satisfy condition $(xii)$, then $f + g$ and $k f$, where $k \in F$, are also sequentially intuitionistic fuzzy continuous functions over the same field $F$.

Let, $\{x_n\}_n$ be a sequence in $U$ such that $x_n \rightarrow x$ in $(U, A)$. Thus $\forall\, t \in \mathbb{R}$ we have $$\lim_{n \to \infty} \mu_U(x_n - x, t) = 1 \;\text{ and }\; \lim_{n \to \infty} \nu_U(x_n - x, t) = 0 \hspace{0.2cm} \cdots \hspace{0.3cm} (1)$$ Since $f$ and $g$ are sequentially intuitionistic fuzzy continuous at $x$, from (1) we have $$\lim_{n \to \infty} \mu_V(f(x_n) - f(x), t) = 1\,,\;\; \lim_{n \to \infty} \nu_V(f(x_n) - f(x), t) = 0\,,\;\; \forall\, t \in \mathbb{R}$$ and $$\lim_{n \to \infty} \mu_V(g(x_n) - g(x), t) = 1\,,\;\; \lim_{n \to \infty} \nu_V(g(x_n) - g(x), t) = 0\,,\;\; \forall\, t \in \mathbb{R}$$\
Now, $$\mu_V((f+g)(x_n) - (f+g)(x), t) = \mu_V(f(x_n) - f(x) + g(x_n) - g(x), t)$$ $$\geq \mu_V\!\left(f(x_n) - f(x), \frac{t}{2}\right) \ast \mu_V\!\left(g(x_n) - g(x), \frac{t}{2}\right)$$ Taking the limit, we have $$\lim_{n \to \infty} \mu_V((f+g)(x_n) - (f+g)(x), t) \geq \lim_{n \to \infty} \mu_V\!\left(f(x_n) - f(x), \frac{t}{2}\right) \ast \lim_{n \to \infty} \mu_V\!\left(g(x_n) - g(x), \frac{t}{2}\right) = 1 \ast 1 = 1.$$ Again, $$\nu_V((f+g)(x_n) - (f+g)(x), t) = \nu_V(f(x_n) - f(x) + g(x_n) - g(x), t)$$ $$\leq \nu_V\!\left(f(x_n) - f(x), \frac{t}{2}\right) \diamond \nu_V\!\left(g(x_n) - g(x), \frac{t}{2}\right)$$ Taking the limit, we have $$\lim_{n \to \infty} \nu_V((f+g)(x_n) - (f+g)(x), t) \leq \lim_{n \to \infty} \nu_V\!\left(f(x_n) - f(x), \frac{t}{2}\right) \diamond \lim_{n \to \infty} \nu_V\!\left(g(x_n) - g(x), \frac{t}{2}\right) = 0 \diamond 0 = 0.$$ So, $f + g$ is sequentially intuitionistic fuzzy continuous.\
\
Obviously, $k f$ is sequentially intuitionistic fuzzy continuous for every $k \in F$.

**We further assume that,** for an intuitionistic fuzzy normed linear space $(V, A)$ and for $x \neq \theta$,\
$(xv)$ $\mu(x, \cdot)$ is a continuous function on $\mathbb{R}$ and strictly increasing on the subset $\{\, t : 0 < \mu(x,t) < 1 \,\}$ of $\mathbb{R}$.\
$(xvi)$ $\nu(x, \cdot)$ is a continuous function on $\mathbb{R}$ and strictly decreasing on the subset $\{\, t : 0 < \nu(x,t) < 1 \,\}$ of $\mathbb{R}$.
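The addition theorem above rests on splitting $t$ in half and applying axiom $(v)$: $\mu_V(a + b, t) \geq \mu_V(a, t/2) \ast \mu_V(b, t/2)$. The inequality and the limits can be checked numerically; everything concrete below (the induced norm $\mu(x,t)=t/(t+|x|)$, $\nu(x,t)=|x|/(t+|x|)$ with $\ast=\min$, and the choices $f(x)=x^2$, $g(x)=3x$, $x_n = 1 + 1/n$) is an assumption made for illustration, not taken from the paper.

```python
# Numerical sketch of the inequality used in the proof that f+g is
# sequentially intuitionistic fuzzy continuous. All concrete choices
# (mu, nu, f, g, the sequence x_n) are illustrative assumptions.

mu = lambda x, t: t / (t + abs(x))        # degree of membership
nu = lambda x, t: abs(x) / (t + abs(x))   # degree of non-membership
t_norm = min                              # a * b = min(a, b); axiom (xii) holds

f = lambda x: x**2
g = lambda x: 3 * x
x, t = 1.0, 1.0

for n in (10, 100, 1000):
    xn = x + 1.0 / n                      # x_n -> x
    diff = (f(xn) + g(xn)) - (f(x) + g(x))
    lhs = mu(diff, t)
    rhs = t_norm(mu(f(xn) - f(x), t / 2), mu(g(xn) - g(x), t / 2))
    assert lhs >= rhs                     # axiom (v) with t split in half
    print(n, round(lhs, 4), round(nu(diff, t), 4))
```

As $n$ grows, the printed membership degree approaches $1$ and the non-membership degree approaches $0$, mirroring the limits taken in the proof.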
If $f : (U, A) \rightarrow (V, B)$ and $g : (U, A) \rightarrow (V, B)$ are two sequentially intuitionistic fuzzy continuous functions and $(U, A)$ and $(V, B)$ satisfy $(xii)$, $(xv)$ and $(xvi)$, then\
$(a)$ $f g$ is a sequentially intuitionistic fuzzy continuous function over the same field $F$,\
$(b)$ if $g(x) \neq 0\;\; \forall\, x \in U$, then $\frac{f}{g}$ is a sequentially intuitionistic fuzzy continuous function over the same field $F$.

$(a)$ Let, $\{x_n\}_n$ be a sequence in $U$ such that $x_n \rightarrow x_0$ in $(U, A)$. Thus $\forall\, t \in \mathbb{R}$ we have $$\lim_{n \to \infty} \mu_U(x_n - x_0, t) = 1 \;\text{ and }\; \lim_{n \to \infty} \nu_U(x_n - x_0, t) = 0 \hspace{0.2cm} \cdots \hspace{0.3cm} (2)$$ Since $f$ and $g$ are sequentially intuitionistic fuzzy continuous at $x_0$, from (2) we have $$\lim_{n \to \infty} \mu_V(f(x_n) - f(x_0), t) = 1\,,\;\; \lim_{n \to \infty} \nu_V(f(x_n) - f(x_0), t) = 0\,,\;\; \forall\, t \in \mathbb{R}$$ and $$\lim_{n \to \infty} \mu_V(g(x_n) - g(x_0), t) = 1\,,\;\; \lim_{n \to \infty} \nu_V(g(x_n) - g(x_0), t) = 0\,,\;\; \forall\, t \in \mathbb{R}$$ Now, $$\mu_V((fg)(x_n) - (fg)(x_0), t) = \mu_V(f(x_n)(g(x_n) - g(x_0)) + g(x_0)(f(x_n) - f(x_0)), t)$$ $$= \mu_V((f(x_n) - f(x_0))(g(x_n) - g(x_0)) + f(x_0)(g(x_n) - g(x_0)) + g(x_0)(f(x_n) - f(x_0)), t)$$ $$\geq \mu_V\!\left((f(x_n) - f(x_0))(g(x_n) - g(x_0)), \frac{t}{3}\right) \ast \mu_V\!\left(f(x_0)(g(x_n) - g(x_0)), \frac{t}{3}\right) \ast \mu_V\!\left(g(x_0)(f(x_n) - f(x_0)), \frac{t}{3}\right)$$ $$= \mu_V\!\left(f(x_n) - f(x_0), \frac{t}{3\mid g(x_n) - g(x_0)\mid}\right) \ast \mu_V\!\left(g(x_n) - g(x_0), \frac{t}{3\mid f(x_0)\mid}\right) \ast \mu_V\!\left(f(x_n) - f(x_0), \frac{t}{3\mid g(x_0)\mid}\right)$$ Taking the limit as $n \rightarrow \infty$, we have, by $(xv)$, $$\lim_{n \to \infty} \mu_V((fg)(x_n) - (fg)(x_0), t) \geq \mu_V\!\left(f(x_n) - f(x_0), \lim_{n \to \infty} \frac{t}{3\mid g(x_n) - g(x_0)\mid}\right) \ast 1 \ast 1$$ $$= \mu_V(f(x_n) - f(x_0), \infty) \ast 1 \ast 1 = 1 \ast 1 \ast 1 = 1$$ and $$\nu_V((fg)(x_n) - (fg)(x_0), t) = \nu_V(f(x_n)(g(x_n) - g(x_0)) + g(x_0)(f(x_n) - f(x_0)), t)$$ $$= \nu_V((f(x_n) - f(x_0))(g(x_n) - g(x_0)) + f(x_0)(g(x_n) - g(x_0)) + g(x_0)(f(x_n) - f(x_0)), t)$$ $$\leq \nu_V\!\left((f(x_n) - f(x_0))(g(x_n) - g(x_0)), \frac{t}{3}\right) \diamond \nu_V\!\left(f(x_0)(g(x_n) - g(x_0)), \frac{t}{3}\right) \diamond \nu_V\!\left(g(x_0)(f(x_n) - f(x_0)), \frac{t}{3}\right)$$ $$= \nu_V\!\left(f(x_n) - f(x_0), \frac{t}{3\mid g(x_n) - g(x_0)\mid}\right) \diamond \nu_V\!\left(g(x_n) - g(x_0), \frac{t}{3\mid f(x_0)\mid}\right) \diamond \nu_V\!\left(f(x_n) - f(x_0), \frac{t}{3\mid g(x_0)\mid}\right)$$ Taking the limit as $n \rightarrow \infty$, we have, by $(xvi)$, $$\lim_{n \to \infty} \nu_V((fg)(x_n) - (fg)(x_0), t) \leq \nu_V\!\left(f(x_n) - f(x_0), \lim_{n \to \infty} \frac{t}{3\mid g(x_n) - g(x_0)\mid}\right) \diamond 0 \diamond 0$$ $$= \nu_V(f(x_n) - f(x_0), \infty) \diamond 0 \diamond 0 = 0 \diamond 0 \diamond 0 = 0.$$ Hence the proof.\
\
$(b)$ We now show that $\frac{1}{g}$ is sequentially intuitionistic fuzzy continuous at $x_0$ if $g(x) \neq 0$ for all $x \in U$. $$\mu_V\!\left(\frac{1}{g}(x_n) - \frac{1}{g}(x_0), t\right) = \mu_V\!\left(\frac{g(x_n) - g(x_0)}{g(x_n)\, g(x_0)}, t\right) = \mu_V\!\left(\frac{1}{g(x_n)\, g(x_0)}, \frac{t}{\mid g(x_n) - g(x_0)\mid}\right)$$ Taking the limit as $n \rightarrow \infty$, we have, by $(xv)$, $$\lim_{n \to \infty} \mu_V\!\left(\frac{1}{g}(x_n) - \frac{1}{g}(x_0), t\right) = \mu_V\!\left(\frac{1}{g(x_n)\, g(x_0)}, \lim_{n \to \infty} \frac{t}{\mid g(x_n) - g(x_0)\mid}\right) = \mu_V\!\left(\frac{1}{g(x_n)\, g(x_0)}, \infty\right) = 1.$$ Again, $$\nu_V\!\left(\frac{1}{g}(x_n) - \frac{1}{g}(x_0), t\right) = \nu_V\!\left(\frac{g(x_n) - g(x_0)}{g(x_n)\, g(x_0)}, t\right) = \nu_V\!\left(\frac{1}{g(x_n)\, g(x_0)}, \frac{t}{\mid g(x_n) - g(x_0)\mid}\right)$$ Taking the limit as $n \rightarrow \infty$, we have, by $(xvi)$, $$\lim_{n \to \infty} \nu_V\!\left(\frac{1}{g}(x_n) - \frac{1}{g}(x_0), t\right) = \nu_V\!\left(\frac{1}{g(x_n)\, g(x_0)}, \lim_{n \to \infty} \frac{t}{\mid g(x_n) - g(x_0)\mid}\right) = \nu_V\!\left(\frac{1}{g(x_n)\, g(x_0)}, \infty\right) = 0.$$ Hence $\frac{1}{g}$ is sequentially intuitionistic fuzzy continuous.\
The proof is completed by considering the product of $f$ and $\frac{1}{g}$.

Let, $(V = \mathbb{R}, \parallel\cdot\parallel)$ be a normed linear space, and define $a \ast b = \min\{a, b\}$ and $a \diamond b = \max\{a, b\}$ for all $a, b \in [0,1]$. For all $t > 0$, define $\mu(x,t) = \frac{t}{t + k\,\|x\|}$ and $\nu(x,t) = \frac{k\,\|x\|}{t + k\,\|x\|}$, where $k > 0$. It is easy to see that $A = \{\, ((x,t), \mu(x,t), \nu(x,t)) \,:\, (x,t) \in V \times \mathbb{R}^{+} \,\}$ is an intuitionistic fuzzy norm on $V$.
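A quick numerical spot check of this example is easy to run: for $\mu(x,t)=t/(t+k\|x\|)$ and $\nu(x,t)=k\|x\|/(t+k\|x\|)$ with $\ast=\min$ and $\diamond=\max$, the axioms can be verified at sample points. The sample values and the choice $k=2$ below are arbitrary assumptions; this is a sketch, not a proof.

```python
# Spot check (a sketch, not a proof) of the norm in this example:
# mu(x,t) = t/(t + k|x|), nu(x,t) = k|x|/(t + k|x|) on V = R, with
# a * b = min(a, b) and a <> b = max(a, b). Sample points are arbitrary.

k = 2.0
mu = lambda x, t: t / (t + k * abs(x))
nu = lambda x, t: k * abs(x) / (t + k * abs(x))

x, y, s, t, c = 1.5, -0.7, 0.8, 1.3, -3.0
assert abs(mu(x, t) + nu(x, t) - 1.0) < 1e-12         # (i): here mu + nu = 1
assert abs(mu(c * x, t) - mu(x, t / abs(c))) < 1e-12  # (iv): scaling axiom
assert min(mu(x, s), mu(y, t)) <= mu(x + y, s + t)    # (v): triangle-type axiom
assert max(nu(x, s), nu(y, t)) >= nu(x + y, s + t)    # (x): dual axiom
print("axioms (i), (iv), (v), (x) hold on the sample")
```

The scaling axiom holds exactly here, since $\frac{t/|c|}{t/|c| + k\|x\|} = \frac{t}{t + k\,|c|\,\|x\|}$.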
Let $f : \mathbb{R} \rightarrow \mathbb{R}$. Then $f$ is continuous on $(V, \parallel\cdot\parallel)$ if and only if it is intuitionistic fuzzy continuous on $(V, A)$.

By example (2) of [@Samanta], $\{x_n\}_n$ is convergent in $(V, \parallel\cdot\parallel)$ if and only if $\{x_n\}_n$ is convergent in $(V, A)$. So, $f$ is continuous on $(V, \parallel\cdot\parallel)$\
\
$\Leftrightarrow$ for any sequence $\{x_n\}_n$ converging to $x$ in $(V, \parallel\cdot\parallel)$, $\{f(x_n)\}_n$ converges to $f(x)$ in $(V, \parallel\cdot\parallel)$\
\
$\Leftrightarrow$ for any sequence $\{x_n\}_n$ converging to $x$ in $(V, A)$, $\{f(x_n)\}_n$ converges to $f(x)$ in $(V, A)$\
$\Leftrightarrow$ $f$ is continuous on $(V, A)$.

Let, $0 < r < 1$, $t \in \mathbb{R}^+$ and $x \in V$. Then the set\
$B(x, r, t) = \{\, y \in V : \mu(x - y, t) > 1 - r,\; \nu(x - y, t) < r \,\}$ is called an **open ball** in $(V, A)$ with $x$ as its center and $r$ as its radius with respect to $t$.

A subset $G$ of $V$ is said to be an **open set** in $(V, A)$ if for each $x \in G$ there exist $r_x \in (0,1)$ and $t \in \mathbb{R}^+$ such that $B(x, r_x, t) \subseteq G$.

Every open ball $B(x, r, t)$ in $(V, A)$ is an open set in $(V, A)$.

Let, $B(x, r, t)$ be an open ball with center at $x$ and radius $r$ with respect to $t$, and let $y \in B(x, r, t)$. Then $$\mu(x - y, t) > 1 - r \;\text{ and }\; \nu(x - y, t) < r \hspace{2.5cm} (3)$$ Since $\mu(x - y, \cdot)$ and $\nu(x - y, \cdot)$ are continuous by $(xv)$ and $(xvi)$, there exists $t_0 \in (0, t)$ for which relation (3) remains true, i.e., $$\mu(x - y, t_0) > 1 - r \;\text{ and }\; \nu(x - y, t_0) < r$$ Let, $r_0 = \mu(x - y, t_0)$.
Since $r_0 > 1 - r$, $\exists\, s \in (0,1)$ such that $r_0 > 1 - s > 1 - r$.\
Now for given $r_0$ and $s$ with $r_0 > 1 - s$, $\exists\, r_1, r_2 \in (0,1)$ such that $r_0 \ast r_1 > 1 - s$ and $(1 - r_0) \diamond (1 - r_2) < s$.\
Let, $r_3 = \max\{r_1, r_2\}$.\
Then $r_0 \ast r_1 \leq r_0 \ast r_3$, and $r_2 \leq r_3\\ \Rightarrow 1 - r_3 \leq 1 - r_2\\ \Rightarrow (1 - r_0) \diamond (1 - r_3) \leq (1 - r_0) \diamond (1 - r_2)$.\
These imply\
$1 - s < r_0 \ast r_1 \leq r_0 \ast r_3$ and $(1 - r_0) \diamond (1 - r_3) \leq (1 - r_0) \diamond (1 - r_2) < s$,\
i.e., $r_0 \ast r_3 > 1 - s$ and $(1 - r_0) \diamond (1 - r_3) < s$.\
Consider the open ball $B(y, 1 - r_3, t - t_0)$.\
It is sufficient to show that $B(y, 1 - r_3, t - t_0) \subset B(x, r, t)$.\
Let, $z \in B(y, 1 - r_3, t - t_0)$.\
Then $\mu(y - z, t - t_0) > r_3$ and $\nu(y - z, t - t_0) < 1 - r_3$.
Therefore, $$\mu(x - z, t) = \mu(x - y + y - z, t_0 + (t - t_0))$$ $$\geq \mu(x - y, t_0) \ast \mu(y - z, t - t_0)$$ $$\geq r_0 \ast r_3 > 1 - s > 1 - r$$ and $$\nu(x - z, t) = \nu(x - y + y - z, t_0 + (t - t_0))$$ $$\leq \nu(x - y, t_0) \diamond \nu(y - z, t - t_0)$$ $$\leq (1 - r_0) \diamond (1 - r_3) < s < r.$$ Thus $z \in B(x, r, t)$ and hence $B(y, 1 - r_3, t - t_0) \subset B(x, r, t)$.

A subset $N$ of $V$ is said to be a **neighbourhood** of $x\;(\in V)$ in $(V, A)$ if there exist $r \in (0,1)$ and $t \in \mathbb{R}^+$ such that $B(x, r, t) \subset N$.

The following statements are equivalent:\
$(i)$ $f$ is intuitionistic fuzzy continuous on $U$.\
$(ii)$ $P$ is open in $(V, B) \;\Rightarrow\; f^{-1}(P)$ is open in $(U, A)$.\
$(iii)$ For each $x \in U$, $N$ is a neighbourhood of $f(x)$ in $(V, B) \;\Rightarrow\; f^{-1}(N)$ is a neighbourhood of $x$ in $(U, A)$.

$(i) \Rightarrow (ii)$: Suppose $f$ is intuitionistic fuzzy continuous on $U$ and $P$ is open in $(V, B)$. If $f^{-1}(P) = \phi$, then there is nothing to prove.\
Let, $f^{-1}(P) \neq \phi$ and $x_0 \in f^{-1}(P)$. Then $f(x_0) \in P$. So, there exist $\epsilon\,(> 0)$ and $\alpha \in (0,1)$ such that $B(f(x_0), \alpha, \epsilon) \subset P$.
Since $f$ is intuitionistic fuzzy continuous on $U$, there exist $\delta\,(> 0)$ and $\beta \in (0,1)$ such that for all $x \in U$,\
$$\mu_U(x - x_0, \delta) > 1 - \beta \;\Rightarrow\; \mu_V(f(x) - f(x_0), \epsilon) > 1 - \alpha$$ $$\nu_U(x - x_0, \delta) < \beta \;\Rightarrow\; \nu_V(f(x) - f(x_0), \epsilon) < \alpha$$\
i.e., $x \in B(x_0, \beta, \delta) \;\Rightarrow\; f(x) \in B(f(x_0), \alpha, \epsilon) \subset P$\
$\Rightarrow\; B(x_0, \beta, \delta) \subset f^{-1}(P)$\
$\Rightarrow\; f^{-1}(P)$ is open in $(U, A)$.\
\
$(ii) \Rightarrow (i)$: Let, $\epsilon\,(> 0)$, $\alpha \in (0,1)$ and $x_0 \in U$. Then $B(f(x_0), \alpha, \epsilon)$ is open in $(V, B)$.\
$\Rightarrow\; f^{-1}(B(f(x_0), \alpha, \epsilon))$ is open in $(U, A)$ and contains $x_0$.\
$\Rightarrow\; \exists\, \delta > 0$ and $\beta \in (0,1)$ such that $B(x_0, \beta, \delta) \subset f^{-1}(B(f(x_0), \alpha, \epsilon))$.\
$\Rightarrow\; f(B(x_0, \beta, \delta)) \subset B(f(x_0), \alpha, \epsilon)$.\
$\Rightarrow\; f$ is intuitionistic fuzzy continuous on $U$.\
\
$(ii) \Rightarrow (iii)$: Let, $x \in U$ and $N$ be a neighbourhood of $f(x)$ in $(V, B)$. Therefore, there exist $r \in (0,1)$ and $t > 0$ such that $B(f(x), r, t) \subset N$ $\;\Rightarrow\; x \in f^{-1}(B(f(x), r, t)) \subset f^{-1}(N)$.\
Again, $x \in f^{-1}(B(f(x), r, t))$ and $f^{-1}(B(f(x), r, t))$ is open in $(U, A)$.
So, there exist $r_1\;\in\;(\,0 \,,\, 1\,)$ and $t_1\;>\;0$ such that\ $B\,(\,x \,,\, r_1 \,,\, t_1 \,)\;\subset\;f^{-1}\,(\,B\,(\,f(x)\,,\,r\,,\,t\,)\,) \;\subset\;f^{-1}\,(N)$\ This shows that $\;f^{-1}\,(N)$ is a neighbourhood of $x$ in $(\,U \,,\, A\,).$\ \ $(iii)\;\Rightarrow\;(ii)\;:\;$ Let, $P$ be open in $(\,V \,,\, B\,)$ and $x\;\in\;f^{-1}\,(P)$. Then $\,f(x)\;\in\;P$ and therefore there exist $\epsilon\,(\;>\;0\,)$ and $\alpha\;\in\;(\,0 \,,\, 1\,)$ such that\ $B\,(\,f(x)\,,\,\alpha\,,\,\epsilon\,)\;\subset\;P$\ $\Rightarrow\;\;P\,$ is a neighbourhood of $\,f(x)\,$ in $\,(\,V \,,\, B\,)$\ $\Rightarrow\;\;f^{-1}\,(P)\,$ is a neighbourhood of $\,x\,$ in $\,(\,U \,,\, A\,)$\ $\Rightarrow\;\;\exists\;\delta\;(\;>\;0\;)$ and $\beta\;\in\;(\,0\,,\,1\,)$ such that $\,B\,(\,x\,,\,\beta\,,\,\delta\,)\;\subset\;f^{-1}\,(P).$\ $\Rightarrow\;f^{-1}\,(P)\;$ is open in $(\,U\,,\,A\,)$. $\;f\;:\;U\;\rightarrow\;V$ is said to be **uniformly intuitionistic fuzzy continuous** on $U$ if for any given $\epsilon\;>\;0\;,\;\alpha\;\in\;(\,0 \,,\, 1\,) \;\exists\;\delta\;=\;\delta\,(\,\alpha \,,\, \epsilon\,)\;>\;0\;,\;\beta\;=\;\beta\,(\,\alpha \,,\, \epsilon\,)\;>\;0$ such that for any two points $x_1\,,\,x_2\;\in\;U$, $$\hspace{1.0cm} \mu_{\,U}(\,x_1\,-\,x_2 \,,\, \delta \,) \;>\;1\,-\,\beta\;\;\; and \;\;\; \nu_{\,U}(\,x_1\,-\,x_2 \,,\, \delta\,)\;<\;\beta\;\;$$ $$\Rightarrow\;\mu_{\,V}(\,f(x_1)\,-\,f(x_2), \epsilon)\;>\;1\,-\,\alpha \;\;\;and \;\;\; \nu_{\,V}(\,f(x_1)\,-\,f(x_2) \,,\,\epsilon\,)\;<\;\alpha$$ Let, $f$ be uniformly intuitionistic fuzzy continuous on $U$. If $\{x_n\}_n$ is a Cauchy sequence in $(\,U\;,\;A\,)$, then $\{f(x_n)\}_n$ is a Cauchy sequence in $(\,V\;,\;B\,)$. $\;f\;$ is uniformly intuitionistic fuzzy continuous on $U$.
$\\\;\Rightarrow\;$ For any given $\epsilon\;>\;0\;,\;\alpha\;\in\;(\,0\,,\,1\,)\;\exists\;\delta\;=\; \delta\,(\,\alpha\,,\,\epsilon\,)\;>\;0\;,\;\beta \,=\,\beta\,(\,\alpha\,,\,\epsilon\,)\;>\;0\;$ such that for any two points $x^\prime\;,\;x^{\prime\prime}\;\in\;U$, $$\mu_{\,U}(\,x^\prime\,-\,x^{\prime\prime} \,,\, \delta\,)\;>\;1\,-\,\beta\;\; and \;\;\nu_{\,U} (\,x^\prime\,-\,x^{\prime\prime} \,,\, \delta\,)\;<\;\beta\;\;$$ $$\Rightarrow\;\mu_{\,V}(\,f(x^\prime)\,-\,f(x^{\prime\prime}\,) \,,\, \epsilon\,)\;>\;1\,-\,\alpha \;\;and \;\; \nu_{\,V}(\,f(x^\prime)\,-\,f(x^{\prime\prime})\,,\,\epsilon\,)\;<\; \alpha\;\;\;\;\cdots\;\;\;\;(4)$$\ Since $\{x_n\}_n$ is a Cauchy sequence, for $\delta\;>\;0$ and $\beta\;\in\;(\,0 \,,\, 1)$ there exists a natural number $k$ such that\ $$\hspace{2.0cm}\mu_{\,U}(\,x_n\,-\,x_m \,,\, \delta\,)\;>\;1\,-\,\beta \;\;\;and\;\;\; \nu_{\,U}(\,x_n\,-\,x_m \,,\,\delta\,)\;<\;\beta\;\;\;\;\forall\;m,n\;\geq\;k$$ $$\Rightarrow\;\;\mu_{\,V}(\,f(x_n)\,-\,f(x_m) \,,\,\epsilon\,) \;>\;1\,-\,\alpha \;\;and\;\; \nu_{\,V}(\,f(x_n)\,-\,f(x_m) \,,\,\epsilon\,) \;<\;\alpha\;\;\;\forall\;m,n\;\geq \;k\;\;\;\;\;(by\;(4))$$\ $\Rightarrow\;\{f(x_n)\}_n$ is a Cauchy sequence in $(\,V\;,\;B\,)$ If $f\;:\;U\;\rightarrow\;V\;$ is uniformly intuitionistic fuzzy continuous on $U$ then $f$ is intuitionistic fuzzy continuous on $U$ but not the converse. Obvious. To show the converse result does not hold, consider the following example. Let, $(\,X\,=\,\mathbb{R}\,,\,\parallel \cdot \parallel\,)$ be a normed linear space, where $\parallel\,x\,\parallel\;=\;\mid\,x\,\mid\;,\; \forall\;x\;\in\;\mathbb{R}$. Define $a\;\ast\;b\;=\;min\;\{\,a\,,\,b\,\}$ and $a\;\diamond\;b\;=\;max\,\{\,a\,,\,b\,\}\;\;\forall\;a,b\;\in\; [\,0 \,,\, 1\,]$.
Also, define $$\mu_1\;,\;\nu_1\;,\; \mu_2\;,\;\nu_2\;:\;X\;\times\; \mathbb{R}\;\rightarrow\;[\,0 \,,\, 1\,] \;\;\;\;by$$ $$\mu_1\;=\;\frac{t}{t\;+\;|\,x\,|}\;, \;\nu_1\;=\; \frac{|\,x\,|}{t\;+\;|\,x\,|}\;,\; \mu_2\;=\;\frac{t}{t\;+\;k|\,x\,|}\;, \;\nu_2\;=\;\frac{k|\,x\,|}{t\;+\;k|\,x\,|}\;,$$ where $k\;>\;0$.\ Let, $A\;=\;\{\,(\,(\,x\,,\,t\,)\;,\;\mu_1\;,\; \nu_1\,)\;:\;(\,x\,,\,t\,)\;\in\;X\;\times\;\mathbb{R}\;\}$ and $B\;=\;\{\,(\,(\,x\,,\,t\,)\;,\;\mu_2\;,\;\nu_2\,) \;:\;(\,x\,,\,t\,)\;\in\;X\;\times\;\mathbb{R}\;\}$ be two intuitionistic fuzzy norms on $X$.\ Let us define $f(x)\;=\;\frac{1}{x}\;\,\forall\;x\;\in\; (\,0 \,,\, 1\,)$. First we show that $f$ is intuitionistic fuzzy continuous on $(\,0 \,,\, 1\,)$. Let, $x_0\;\in\;(\,0\,,\,1\,)$ and $\{x_n\}_n$ be a sequence in $(\,0\,,\,1\,)$ such that $x_n\;\rightarrow\;x_0$ in $(\,X\,,\,A\,)$. i.e., for all $t>0$, $$\mathop {\lim }\limits_{n\; \to \;\infty } \;\mu_1 \,(\,x_n\,-\,x_0 \,,\, t\,)\;\; =\;\;1\;\;\; and \;\;\;\mathop {\lim }\limits_{n\; \to \;\infty } \;\nu_1 \,(\,x_n\,-\,x_0 \,,\, t\,)\;\;=\;\;0\;$$ $$\Rightarrow\;\mathop {\lim }\limits_{n\; \to \;\infty } \;\frac{t}{t\;+\;|\,x_n\,-\,x_0|}\;\;=\;\; 1\;\;\; and \;\;\;\mathop {\lim }\limits_{n\; \to \;\infty } \;\frac{|\,x_n\,-\,x_0\,|}{t\;+\;|\,x_n\, -\,x_0|\,}\;\;=\;\;0\;$$\ $$\Rightarrow\;\mathop {\lim }\limits_{n\; \to \;\infty } \;|\,x_n\,-\,x_0\,|\;\;=\;\;0\hspace{7.5cm}$$\ Again, for all $t>0$, $$\mu_2\,(\,f(x_n)\,-\,f(x_0)\,,\,t\,)\;\;=\;\; \frac{t}{t\;+\;k\,|\,f(x_n)\,-\,f(x_0)\,|}\;\;=\;\;\frac{t \;x_n\; x_0}{t\; x_n\; x_0\;+\;k\,|\,x_n\,-\,x_0\,|}$$ $$\Rightarrow\;\mathop {\lim }\limits_{n\; \to\;\infty } \;\mu_2(\,f(x_n)\,-\,f(x_0)\,,\,t\,) \;\,=\,\;1 \hspace{1.5cm}$$ and $$\;\;\nu_{\,2}(\,f(x_n)\,-\, f(x_0) \,,\, t\,)\;=\;\frac{k\,|\,f(x_n)\,-\,f(x_0)\,|}{t\;+ \;k\,|\,f(x_n)\,-\,f(x_0)\,|}\;\;=\;\; \frac{k\,|\,x_n\,-\,x_0\,|}{t\, x_n\, x_0\;+\;k\,|\,x_n\,-\,x_0|\,}$$ $$\Rightarrow\;\mathop {\lim }\limits_{n\; \to \;\infty }
\;\nu_{\,2}(\,f(x_n)\,-\,f(x_0) \,,\,t\,) \;\;=\;\;0 \hspace{1.5cm}$$\ Thus $f$ is sequentially intuitionistic fuzzy continuous on $(\,0\,,\,1\,)$ and hence intuitionistic fuzzy continuous on $(\,0\,,\,1\,)$. We now show that $f$ is not uniformly intuitionistic fuzzy continuous on $(\,0\,,\,1\,)$. By example $2$ of [@Samanta], we see that $\{x_n\}_n$ is a Cauchy sequence in $(\,X\;,\;\|\,\cdot\,\|\,)$ if and only if $\{x_n\}_n$ is a Cauchy sequence in $(\,X\;,\;A\,)$ or $(\;X\;,\;B\;)$.\ Let, $x_n\;=\;\frac{1}{n\,+\,1}\;\;\forall\;n\;\in\;\mathbb{N}$. So, $\{f(x_n)\}_n$ is not a Cauchy sequence in $(\,X\;,\;\|\,\cdot\,\|\,)$ and hence not a Cauchy sequence in $(\,X\,,\,B\,)$.\ Consequently, $f$ is not uniformly intuitionistic fuzzy continuous on $(\,0\,,\,1\,)$. **Uniformly Intuitionistic Fuzzy Convergence** ============================================== In this section we assume that $(\,U\;,\;A\,)$ and $(\,V\,,\,B\,)$ are two intuitionistic fuzzy normed linear spaces over the same field $F$. Let, $f_n\;:\;(\,U\;,\;A\,)\;\rightarrow\;(\,V\;,\;B\,)\;$ be a sequence of functions. The sequence $\;\{f_n\}_n\;$ is said to be **pointwise intuitionistic fuzzy convergent** on $U$ with respect to $A$ if for each $\;x\;\in\;U\;$ , the sequence $\;\{\,f_n\,(\,x\,)\,\}_n\;$ is convergent with respect to $B$. Let, the sequence $\{f_n\}_n$ be pointwise intuitionistic fuzzy convergent on $U$ and let, $c\;\in \;U$. Then the sequence $\{\,f_n\,(\,c\,)\,\}_n$ is intuitionistic fuzzy convergent on $(\,V\;,\;B\,)$. Let, $f_n\,(\,c\,) \;\rightarrow\;y_c$ in $(\,V\;,\;B\,)$. Then $y_c$ is unique. Let us now define $f\;:\;(\,U\,,\,A\,)\;\rightarrow\; (\,V\;,\;B\;\,)$ by $f\,(x)\;=\;y_{\,x}\;\;\forall\;x\;\in\;U$, where $f_n\,(\,x\,)\;\rightarrow\;y_{\,x}$ in $(\,V\;,\;B\,)$. Then $f$ is said to be the intuitionistic fuzzy limit function of the sequence $\{f_n\}_n$ on $U$ and it is written as $f_n\;\rightarrow\;f$ on $(\;U\;,\;A\;)$.
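As a numerical aside (our illustration, not part of the paper), the norm pair $\mu\,(\,x\,,\,t\,)\;=\;\frac{t}{t\;+\;|\,x\,|}$, $\nu\,(\,x\,,\,t\,)\;=\;\frac{|\,x\,|}{t\;+\;|\,x\,|}$ that recurs in these examples can be sanity-checked in a few lines: $\mu\,+\,\nu\;=\;1$ pointwise, and convergence in this pair tracks ordinary convergence $|\,x_n\,-\,x_0\,|\;\rightarrow\;0$.

```python
# Sanity check (illustrative only) of the intuitionistic fuzzy norm pair
#   mu(x, t) = t / (t + |x|),   nu(x, t) = |x| / (t + |x|)
def mu(x, t):
    return t / (t + abs(x))

def nu(x, t):
    return abs(x) / (t + abs(x))

# mu and nu are complementary for every x and every t > 0
for x in (0.0, 0.5, -2.0, 10.0):
    for t in (0.1, 1.0, 7.0):
        assert abs(mu(x, t) + nu(x, t) - 1.0) < 1e-12

# convergence in the fuzzy norm matches ordinary convergence:
# for x_n = 1/(n+1) -> 0, mu(x_n - 0, t) -> 1 and nu(x_n - 0, t) -> 0
t = 1.0
vals_mu = [mu(1.0 / (n + 1), t) for n in (10, 100, 1000)]
vals_nu = [nu(1.0 / (n + 1), t) for n in (10, 100, 1000)]
print(vals_mu, vals_nu)
```

The monotone approach of `vals_mu` to $1$ (and of `vals_nu` to $0$) mirrors the limit computations carried out analytically above.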
Let, $a\;\ast\;b\;=\;min\;\{\,a\;,\;b\,\}\;,\;a\;\diamond\;b\; =\;max\;\{\,a\;,\;b\,\}$ for all $a\,,\,b\;\in\;[\,0\,,\,1\,].$ Define $\mu\,(\,x\,,\,t\,)\;=\;\frac{t}{t\;+\;|\,x\,|}$ and $\nu\,(\,x\,,\,t\,)\;=\;\frac{|\,x\,|}{t\;+\;|\,x\,|}.$\ Let, $U\;=\;(\,-1 \,,\, 1\,)$ , $V\;=\;\mathbb{R}$,$\mu \;=\; \mu_{\,U} \;=\; \mu_{\,V}$ , $\nu \;=\; \nu_{\,U} \;=\; \nu_{\,V}$ and $\;f _n\;:\;(\,U \,,\, A\,)\;\rightarrow\;(\,V \,,\, B\,)$ be defined by $f_n\,(x)\;=\;x^{\,n}\;\;\forall\;x\;\in\;U$. Also, let $O\,(x)\;=\;0\;\;\forall\;x\;\in\;U$. Therefore, $$\mu\,(\,f_n\,(x) \,-\, 0 \;,\; t\,)\;\;=\;\;\frac{t} {t\;+\;|\,x\,|^{\,n}}\;\longrightarrow\;1\;\;\; \;as \;\;\;n\;\rightarrow\;\infty\;$$ and $$\nu\,(\,f_n\,(x) \,-\,0\,,\,t\,)\;=\;\frac{|\,x\,|^{\,n}}{t\;+\;|\,x\,|^{\,n}} \;=\;1\,-\,\frac{t}{t\; +\;|\,x\,|^{\,n}}\;\rightarrow\;0\;\;\; as\;\; \;n\;\rightarrow\;\infty$$\ $\Rightarrow\;\;\{f_n\}_n$ is pointwise intuitionistic fuzzy convergent to $O$ on $(\,U \,,\, A\,).$ Let, $a\;\ast\;b\;=\;min\;\{\,a\;,\;b\,\}\;,\;a\;\diamond\;b\; =\;max\;\{\,a\;,\;b\,\}$ for all $a\,,\,b\;\in\;[\,0\,,\,1\,].$ Let, $U\;=\;\{\,x\;\in\;\mathbb{R}\;:\;x\;\geq\;0 \,\}$ , $V=\mathbb{R}$,$\mu \;=\; \mu_{\,U} \;=\; \mu_{\,V}$ , $\nu \;=\; \nu_{\,U} \;=\; \nu_{\,V}$ where $$\mu\,(\,x\,,\,t\,)\;= \;\frac{t}{t\;+\;|\,x\,|}\, \;\;\;and\;\;\; \,\nu\,(\,x\,,\,t\,)\;=\;\frac{|\,x\,|}{t\;+\;|\,x\,|}.$$ Consider, $$g_n\,(x)\;=\;\frac{n}{x\;+\;n}\;\;\forall\;x\;\in\;U \;\;\;and\;\;\; g\,(x)\;\;=\;\;1\;\;\forall\;x\;\in\;U.$$ $$Therefore, \;\;\;g_n\,(x)\,-\,g\,(x)\;\;=\;\; \frac{n}{x\;+\;n}\;-\;1\;\;=\;\;-\;\frac{x}{x\;+\;n}$$ $$\mu\,(\,g_n\,(x)\,-\,g\,(x)\;,\;t\,)\;\,=\;\,\mu\,(\,-\;\frac{x}{x \;+\;n}\;,\;t\,) \hspace{4.5cm}$$ $$\hspace{3.5cm}=\;\;\frac{t}{t\; +\;|\;-\;\frac{x}{x\;+\;n}\;|}\;\; =\;\;\frac{t}{t\;+\;\frac{x}{x\;+\;n}\;}\;\rightarrow \;1\;\; as\; \;n\;\rightarrow\;\infty$$ and $$\nu\,(\,g_n\,(x)\,-\,g\,(x)\,,\,t\,)\;=\;\frac{\frac{x}{x\; +\;n}}{t\;+\;\frac{x}{x\;+\;n}}\;=\;\frac{x}{x\;+ 
\;t\,(\,x\,+\,n\,)}\;\rightarrow\;0\;\, as \;\;n\; \rightarrow\;\infty$$\ Thus, we see that $g_n(x)\;\rightarrow \;g(x)\;\;\;\forall\;x\;\in\;U$ with respect to $B.$ Let, $f_n\;:\;(\,U\;,\;A\,)\;\rightarrow\;(\,V\;,\;B\,)$ be a sequence of functions. The sequence $\{f_n\}_n$ is said to be **uniformly intuitionistic fuzzy convergent** on $U$ to a function $f$ with respect to $A$, if given $0\;<\;r\;<\;1\;,\;t\;>\;0$ there exists a positive integer $n_0\;=\;n_0\;(\,r\,,\,t\,)$ such that $\forall\;x\;\in\;U$ and $\forall\;n\geq n_0\;,$ $$\mu\,(\,f_n(x)\;-\;f(x) \,,\, t\,)\;>\;1\,-\,r \;\;,\; \;\nu\,(\,f_n(x)\;-\;f(x) \,,\, t\,)\;<\;r$$ In the example $(4.1)$, we have seen that $\;f_n\;\rightarrow\;O\;$ with respect to $A$. Let us show that this convergence is not uniform on $(\,0\;,\;1\,)$ but converges uniformly on $[\,0 \,,\, a\,]$ where $0\;<\;a\;<\;1$, with respect to $A$.\ Let, $c\;\in\;(\,0\,,\,1\,)\;, \;r\;\in\;(\,0\,,\,1\,)$ and $\;t\;>\;0\;.$ Then, $$\mu\,(\,f_n(c)\,-\,O\,(c)\;,\;t\,)\;>\;1\,-\,r\;\;\;and \;\;\;\nu\,(\,f_n(c)\,-\,O\,(c)\;,\;t\,)\;<\;r$$ $$\Rightarrow\;\;\frac{t}{t\;+\;c^{\,n}}\;>\;1\,-\,r \;\;\;and\;\;\; \;\frac{c^{\,n}}{t\;+\;c^{\,n}}\;<\;r$$ $$\Rightarrow\;\;c^{\,n}\;<\;\frac{r\,t}{(\,1\,-\,r\,)} \;\;\;\Rightarrow\;\;\frac{1}{c^{\,n}}\;>\;\frac{(\,1\,-\,r\,)}{r\,t}\;$$ $$\;\;\;\;\Rightarrow\;\;n\;>\;\frac{\log\;\left(\,\frac{(\,1\,-\,r\,)}{r\,t}\right)} {\log\,\left(\,\frac{1}{c}\,\right)} \hspace{4.5cm}$$\ Let, $k\;\; = \;\;\left[ {\;\frac{{\log \;\left( {\,\frac{{1\; - \;r}}{{r\;t}}\,} \right)}}{{\log \, \left( {\,\frac{1}{c}\,} \right)}}\;} \right]\;\; + \;\;1$\ \ Then, for each $x\;\in(\,0\,,\,1\,)$ and given $r\;\in(\,0\,,\,1\,)$ and $t\;>\;0\,,$ $$\mu\,(\,f_n(x)\,-\,O(x)\;,\;t\,)\;>\;1\,-\,r\;\;and \;\;\nu\,(\,f_n(x)\,-\,O(x)\;,\;t\,)\;<\;r\;\;\;\forall\;n\;\geq\;k$$ where, $k\;\; = \;\;\left[ {\;\frac{{\log \;\left( {\,\frac{{1\; - \;r}}{{r\;t}}\,} \right)}}{{\log \, \left( {\,\frac{1}{x}\,} \right)}}\;} \right]\; + \;1$, which shows
that $k$ depends on $r\,,\,t$ as well as on $x$. Also, we see that as $x\;\rightarrow\;1$, $\;k\;\rightarrow\;\infty.$\ $\Rightarrow\;\;\{\,f_n\,\}_n$ is not uniformly intuitionistic fuzzy convergent on $(\,0\,,\,1\,)$ with respect to $A.$\ Let, $a\;\in\;(\,0\,,\,1\,)$. In $[\,0\,,\,a\,]$, the greatest value of $\left[ {\;\frac{{\log \;\left( {\,\frac{{1\; - \;r}}{{r\;t}}\,} \right)}}{{\log \, \left( {\,\frac{1}{x}\,} \right)}}\;} \right]$ is $\left[ {\;\frac{{\log \;\left( {\,\frac{{1\; - \;r}}{{r\;t}}\,} \right)}}{{\log \, \left( {\,\frac{1}{a}\,} \right)}}\;} \right]$. So, let $\;n_0\;=\;\left[ {\;\frac{{\log \;\left( {\,\frac{{1\; - \;r}}{{r\;t}}\,} \right)}}{{\log \, \left( {\,\frac{1}{a}\,} \right)}}\;} \right]\;+\;1.$\ \ Therefore, for all $x\;\in\;[\,0\,,\,a\,]$, given $r\;\in\;(\,0\,,\,1\,)$ and $t\;>\;0$, there exists a natural number $n_0\;=\;n_0(\,r\,,\,t\,)$ such that $$\mu\,(\,f_n(x)\,-\,O(x)\;,\;t\,)\;>\;1\,-\,r\;\; and \;\;\nu\,(\,f_n(x)\,-\,O(x)\;,\;t\,)\;<\;r\;\;\;\forall\;n\;\geq\;n_0$$ $\Rightarrow\;\;\;\{\,f_n\,\}_n$ is uniformly intuitionistic fuzzy convergent on $[\,0\,,\,a\,]$ with respect to $A$ , where $a\;\in\;(\,0\,,\,1\,)$. Let $ \left( {\,U\;,\;\left\| {\, \cdot \,} \right\|_{\,1} \,} \right) $ and $ \left( {\,V\;,\;\left\| {\, \cdot \,} \right\|_{\,2} \,} \right) $ be two normed linear spaces over the field $ K \;=\; \mathbb{R} \;or\; \mathbb{C}$ , $ f_{\,n} \;:\; U \;\rightarrow\; V \;\, \forall \;\,n \;\in\; \mathbb{N}$ , $ a \;\ast\; b \;=\; \min\,\{\,a \,,\, b\,\} $, $ a \;\diamond\; b \;=\; \max\,\{\,a \,,\, b\,\} \;\; \forall \;\, a \,,\, b \;\in\; [\,0 \,,\, 1\,]$.
For all $t \;>\; 0$ , define $$\mu_{\,U}\,(\,x \,,\, t\,) \;=\; \frac{t} {t \,+\, k\,\left\| {\, x \,} \right\|_{\,1}} \;\;\;,\;\;\; \nu_{\,U}\,(\,x \,,\, t\,) \;=\; \frac{k\,\left\| {\, x \,} \right\|_{\,1}} {t \,+\, k\,\left\| {\, x \,} \right\|_{\,1}} \;\,,$$ $$\mu_{\,V}\,(\,x \,,\, t\,) \;=\; \frac{t} {t \,+\, k\,\left\| {\, x \,} \right\|_{\,2}} \;\;\;,\;\;\; \nu_{\,V}\,(\,x \,,\, t\,) \;=\; \frac{k\,\left\| {\, x \,} \right\|_{\,2}} {t \,+\, k\,\left\| {\, x \,} \right\|_{\,2}} \;\,,$$ where $k \;>\; 0$ . Let $$A \;\,=\;\, \left\{\,\left(\,(\,x \,,\, t\,) \,,\, \mu_{\,U}\,(\,x \,,\, t\,) \,,\, \nu_{\,U}\,(\,x \,,\, t\,)\,\right) \;\,:\;\, (\,x \,,\, t\,) \; \in\; U\;\times\;\mathbb{R^{\,+}}\,\right\} \;,$$ $$B \;\,=\;\, \left\{\,\left(\,(\,x \,,\, t\,) \,,\, \mu_{\,V}\,(\,x \,,\, t\,) \,,\, \nu_{\,V}\,(\,x \,,\, t\,)\,\right) \;\,:\;\, (\,x \,,\, t\,) \; \in\; V\;\times\;\mathbb{R^{\,+}}\,\right\} \;\,\,\,$$ Then $(\,U \,,\, A\,)$ and $(\,V \,,\, B\,)$ are intuitionistic fuzzy normed linear spaces. Following the example $(2)$ of [@Samanta] , it can be shown that $\{\,f_{\,n}\,\}$ is uniformly intuitionistic fuzzy convergent on $U$ with respect to $A$ if and only if $\{\,f_{\,n}\,\}$ is uniformly convergent with respect to $\left\| {\, \cdot \,} \right\|_{\,1}$ . Let, $f_n\;:\;(\,U\,,\,A\,)\;\rightarrow\;(\,V\,,\,B\,) \;,\;\forall\;n\;\in\;\mathbb{N}$ be a sequence of functions. Then the sequence $\{\,f_n\,\}_n$ is uniformly intuitionistic fuzzy convergent on $(\,U\,,\,A\,)$ if and only if for any given $r\;\in\;(\,0\,,\,1\,)$ and $t\;>\;0$ there exists a natural number $k\;=\;k\,(\,r\;,\;t\,)$ such that $\forall\;x\;\in \;U$,$$\mu\,(\,f_{n + p}(x)\,-\,f_n(x) \;,\;t\,)\;>\;1\,-\,r \; \;,\;\;\nu\,(\,f_{n + p}(x)\,-\,f_n(x)\;,\;t\,)\;<\;r\;\;,$$ $$\hspace{5.5cm}\forall\;n\; \geq\;k\;and\;p\;=\;1\,,\,2\,,\,3\,,\,\cdots$$ **$\Rightarrow\;$ part:** Let, $\{\,f_n\,\}_n$ be uniformly intuitionistic fuzzy convergent on $(\,U\,,\,A\,)$ and $f$ be its limit function.
Then for any given $r\;\in\;(\,0\,,\,1\,)$ and $t\;>\;0$ there exists a natural number $n_0\;=\;n_0\,(\,r\;,\;t\,)$ such that for all $x\;\in\;U$ and $\forall\;n\;\geq\;n_0$ , $$\hspace{1.5cm}\mu\,\left(\,f_n(x)\;-\;f(x)\;,\;\frac{t}{2} \,\right)\;>\;1\,-\,r\;\;,\;\;\nu \,\left(\,f_n(x)\;-\;f(x)\;,\;\frac{t}{2}\,\right)\;<\;r$$\ $\Rightarrow\;$ For all $\;n\;\geq\;n_0\;$ and $\;p\;=\;1\,,\,2\,,\,3\,,\;\cdots\;$ and $\;x\;\in\;U\;$, $$\hspace{1.5cm}\mu\,\left(\,f_{n\,+\,p}(x)\;-\;f(x)\;,\;\frac{t}{2}\,\right) \;>\;1\,-\,r\;\;,\;\;\nu\,\left(\,f_{n\,+\,p}(x)\;-\;f(x)\;,\; \frac{t}{2}\,\right)\;<\;r$$\ Now, for all $\;x\;\in\;U\;$ and $\;p\;=\;1\,,\,2\,,\,3\,, \;\cdots\;$, we see that\ $$\mu\,\left(\,f_{n\,+\,p}\,(x)\;-\;f_n\,(x)\;,\;t\,\right)\hspace{6.2cm}$$ $$\hspace{1.5cm}=\;\mu\,\left(\,f_{n\,+\,p} \,(x)\;-\;f\,(x)\;+\;f\,(x)\;-\;f_n\,(x)\;\;,\;\;\frac{t}{2}\; +\;\frac{t}{2}\,\right)$$ $$\hspace{1.0cm}\geq\;\mu\,\left(\,f_{n\,+\,p}\,(x)\;-\;f\,(x)\; ,\;\frac{t}{2}\,\right)\;\ast\; \mu\,\left(\,f\,(x)\;-\;f_n\,(x)\;,\;\frac{t}{2}\,\right)$$ $$\hspace{1.0cm}=\;\mu\,\left(\,f_{n\,+\,p}\,(x)\;-\;f\;(x) \;,\;\frac{t}{2}\,\right)\;\ast\;\mu \,\left(\,f_n\,(x)\,-\,f\,(x)\;,\;\frac{t}{2}\,\right)$$ $$>\;(\,1\;-\;r\,)\;\ast\;(\,1\;-\;r\,)\;\; =\;\;(\,1\;-\;r\,) \;\;\;\;\;\;\;\forall\;n\;\geq\;n_0$$ and $$\nu\,\left(\,f_{n\,+\,p}\,(x)\;-\;f_n\,(x)\;,\;t\,\right)\hspace{6.2cm}$$ $$\hspace{1.5cm}=\;\nu\,\left(\,f_{n\,+\,p} \,(x)\;-\;f\,(x)\;+\;f\,(x)\;-\;f_n\,(x)\;\;,\;\;\frac{t}{2}\; +\;\frac{t}{2}\,\right)$$ $$\hspace{1.7cm}\leq\;\nu\,\left(\,f_{n\,+\,p}\,(x)\;-\;f\,(x)\; ,\;\frac{t}{2}\,\right)\;\diamond\; \nu\,\left(\,f\,(x)\;-\;f_n\,(x)\;,\;\frac{t}{2}\,\right)$$ $$\hspace{1.6cm}=\;\nu\,\left(\,f_{n\,+\,p}\,(x)\;-\;f\;(x) \;,\;\frac{t}{2}\,\right)\;\diamond\;\nu \,\left(\,f_n\,(x)\,-\,f\,(x)\;,\;\frac{t}{2}\,\right)$$ $$<\;r\;\diamond\;r\;\; =\;\;r \;\;\;\;\;\;\;\forall\;n\;\geq\;n_0 \hspace{3.5cm}$$\ Hence the $\;\Rightarrow\;$ part.\ \ **$\Leftarrow\;$ part:** In
this part, we suppose that for any given $r\;\in\;(\,0\,,\,1\,)$ and $t\;>\;0$ there exists a natural number $n_0\;=\;n_0\,(\,r\;,\;t\,)$ such that for all $x\;\in\;U$ and $\forall\;n\;\geq\;n_0$ $$\mu\,(\,f_{n\,+\,p}\,(x)\;-\;f_n\,(x)\;,\;t\,)\;>\;1\,-\,r \;\;,\;\;\nu\, (\,f_{n\,+\,p}\,(x)\;-\;f_n\,(x)\;,\;t\,)\;<\;r.$$ Let $x_0\;\in\;U.$ Then for $\forall\;n\;\geq\;n_0$ we see that, $$\mu\,(\,f_{n\,+\,p}\,(x_0)\;-\;f_n\,(x_0)\;,\;t\,)\;>\;1\,-\,r\;\;,\;\; \nu\,(\,f_{n\,+\,p}\,(x_0)\;-\;f_n\,(x_0)\;,\;t\,)\;<\;r.$$ $\Rightarrow\;\;\{\,f_n(x_0)\,\}_n$ is an intuitionistic fuzzy Cauchy sequence in $\left(\,V\,,\,B\,\right).$\ $\Rightarrow\;\;\{\,f_n(x_0)\,\}_n$ is intuitionistic fuzzy convergent in $\left(\,V\,,\,B\,\right).$\ $\Rightarrow\;\;\{\,f_n\,\}_n$ is pointwise intuitionistic fuzzy convergent on $\left(\,U\,,\,A\,\right).$\ \ Let, $f$ be the intuitionistic fuzzy limit function of $\{\,f_n\,\}_n$ on $\left(\,U\,,\,A\,\right).$ Let, $r\;\in\;(\,0\,,\,1\,)$ and $t\;>\;0$. Then by the given condition, there exists a natural number $n_0\;=\;n_0\,(\,r\,,\,t\,)$ such that for all $x\;\in\;U$ and $p\;=\;1\,,\,2\,,\,3\,,\;\cdots$ and $\forall\;n\;\geq\;n_0$ $$\mu\,\left(\,f_{n\,+\,p}\,(x)\;-\;f_n\,(x)\;,\;\frac{t}{2}\,\right) \;>\;1\,-\,r\;\;, \;\;\nu\,\left(\,f_{n\,+\,p}\,(x)\;-\;f_n\,(x)\;,\;\frac{t}{2}\,\right)\;<\;r.$$\ Again since $f_n\;\rightarrow\;f$ as $n\;\rightarrow\;\infty$ on $\left(\,U\,,\,A\,\right)$, we see that $f_{n\,+\,p}\;\rightarrow\;f$ as $n\;\rightarrow\;\infty$ on $\left(\,U\,,\,A\,\right)$, which implies that for all $n\;\geq\;n_0$ and for all $x\;\in\;U$, $$\mu\,\left(\,f_{n\,+\,p}\,(x)\;-\;f\,(x)\;,\;\frac{t}{2}\,\right) \;>\;1\,-\,r\;\;,\; \;\nu\;\left(\,f_{n\,+\,p}\,(x)\;-\;f\,(x)\;,\;\frac{t}{2}\,\right)\;<\;r$$\ Now, for all $\;x\;\in\;U\;$ we see that\ $$\mu\,\left(\,f_n\,(x)\;-\;f\,(x)\;,\;t\,\right)\hspace{7.5cm}$$ $$\hspace{2.5cm}=\;\;\mu\,\left(\,f_n\,(x)\;-\;f_{n\,+\,p}\,(x)
\;+\;f_{n\,+\,p}\,(x)\;-\;f\,(x)\;,\;\frac{t}{2}\;+\;\frac{t}{2}\,\right)$$ $$\hspace{2.9cm}\geq\;\mu\,\left(\,f_n\,(x)\;-\;f_{n\,+\,p}\,(x)\;,\; \frac{t}{2}\,\right)\;\ast\;\mu \,\left(\,f_{n\,+\,p}\,(x)\;-\;f\,(x)\;,\;\frac{t}{2}\,\right)$$ $$\hspace{1.2cm}>\;(\,1\,-\,r\,)\;\ast\;(\,1\,-\,r\,)\;=\;(\,1\,-\,r\,)\; \;\;,\;\;\;\;\;\forall\;n\;\geq\;n_0$$ and $$\nu\,\left(\,f_n\,(x)\;-\;f\,(x)\;,\;t\,\right)\hspace{7.5cm}$$ $$\hspace{2.5cm}=\;\;\nu\,\left(\,f_n\,(x)\;-\;f_{n\,+\,p}\,(x) \;+\;f_{n\,+\,p}\,(x)\;-\;f\,(x)\;,\;\frac{t}{2}\;+\;\frac{t}{2}\,\right)$$ $$\hspace{2.9cm}\leq\;\nu\,\left(\,f_n\,(x)\;-\;f_{n\,+\,p}\,(x)\;,\; \frac{t}{2}\,\right)\;\diamond\;\nu \,\left(\,f_{n\,+\,p}\,(x)\;-\;f\,(x)\;,\;\frac{t}{2}\,\right)$$ $$<\;r\;\diamond\;r\;=\;r\; \;\;,\;\;\;\;\;\forall\;n\;\geq\;n_0 \hspace{2.8cm}$$\ $\Rightarrow\;\;\{\,f_n\}_n$ is uniformly intuitionistic fuzzy convergent on $\left(\,U\,,\,A\,\right)$. **Equivalent Statement:** Let, $f_n\;:\;(\,U\,,\,A\,)\;\rightarrow\;(\,V\,,\,B\,) \;,\;\forall\;n\;\in\;\mathbb{N}$ be a sequence of functions. Then the sequence $\{\,f_n\,\}_n$ is uniformly intuitionistic fuzzy convergent on $\left(\,U\,,\,A\,\right)$ if and only if for any given $r\;\in\;(\,0\,,\,1\,)$ and $t\;>\;0$ there exists a natural number $n_0\;=\;n_0\,(\,r\;,\;t\,)$ such that $\forall\;x\;\in\;U$,$$\mu\,(\,f_{n}\,(x)\;-\;f_m(x)\;,\;t\,)\;>\;1\,-\,r\;\;, \;\;\nu\,(\,f_{n}\,(x)\,-\,f_m\,(x)\;,\;t\,)\;<\;r\;\;, \;\;\forall\;\;n\,,\,m\;\geq\;n_0.$$ In the example $(\,4.3\,)$, we have seen that $\{\,f_n\,\}_n$ is uniformly intuitionistic fuzzy convergent on $[\,0\,,\,a\,]$, where $a\;\in\;(\,0\,,\,1\,)$ and $f_n\,(x)\;=\;x^n$; we now verify this again by using the above theorem. Let, $r\;\in\;(\,0\,,\,1\,)$ and $t\;>\;0$. Again let, $m\,,\,n\;\in\;\mathbb{N}$ such that $m\;<\;n$.
Now, $$\mu\,(\,f_n\,(x)\;-\;f_m\,(x)\;,\;t\,)\;>\;1\,-\,r\;\;,\;\;\;\nu\, (\,f_n\,(x)\;-\;f_m\,(x)\;,\;t\,)\;<\;r$$ $$\hspace{2.0cm}\Rightarrow\;\;\;\mu\,(\,x^n\;-\;x^m\;,\;t\,)\;>\; 1\,-\,r\;\;,\;\;\;\nu\,(\,x^n\;-\;x^m\;, \;t\,)\;<\;r$$ $$\Rightarrow\;\;\mid\;x^n\;-\;x^m\;\mid\;< \;\frac{r\,t}{(1\,-\,r)} \;.\hspace{3.2cm}$$\ Since, $\mathop {\sup }\limits_{x\; \in \;[\,0\;,\;a\,]} \,\left| {\,x^{\,n} \; - \;x^{\,m} \,} \right|\;\, \leq \;\,2\,a^{\,m} \;\,,\;\,m\;\, < \;\,n$ it suffices to have $2\;a^m\;<\;\frac{r\;t}{(\,1 \,-\,r\,)}$ , which implies that $m\;>\; \left[ {\;\frac{{\log \;\left( {\,\frac{{2\,(\,1\; - \;r\,)}}{{r\;t}}\,} \right)}}{{\log \,\left( {\,\frac{1}{a}\,} \right)}}\;} \right]$ Let , $\;\;k\;=\;\left[ {\;\frac{{\log \;\left( {\,\frac{{2\,(\,1\; - \;r\,)}}{{r\;t}}\,} \right)}}{{\log \,\left( {\,\frac{1}{a}\,} \right)}}\;} \right]\;+\;1\;.$ Thus, we see that for given $r\;\in\;(\,0\,,\,1\,)$ and $t\;>\;0$, there exists a natural number $k\;=\;k\,(\,r\,,\,t\,)$ such that $\forall\;x\;\in\;[\,0\,,\,a\,]\;,\;a\;\in\;(\,0\,,\,1\,)$ and $\forall\;n\;>\;m\;\geq\;k$ $$\mu\,(\,f_n\,(x)\;-\;f_m\,(x)\;,\;t\,) \;>\;1\,-\,r\;\;,\;\;\nu\,(\,f_n\,(x)\;-\;f_m\,(x)\;,\;t\;)\;<\;r\;.$$ This completes the verification. **(Uniform Limit Theorem)**: Let, $(\,U\,,\,A\,)$ and $(\,V\,,\,B\,)$ be two intuitionistic fuzzy normed linear spaces satisfying the condition $(xii)$. Also, let $f_n\;:\;(\,U\,,\,A\,)\;\rightarrow\;(\,V\,,\,B\,)\;\;, \;\;\forall\;n\;\in\;\mathbb{N}$ and $f_n$ be intuitionistic fuzzy continuous on $(\,U\,,\,A\,)$. If $\{\,f_n\,\}_n$ is uniformly intuitionistic fuzzy convergent on $(\,U\,,\,A\,)$ to a function $f$ then $f$ is intuitionistic fuzzy continuous on $(\,U\,,\,A\,)$.
Let $\{\,f_{\,n}\,\}_{\,n}$ be uniformly intuitionistic fuzzy convergent to the function $f$ on $(\,U\,,\,A\,)$. Then for any given $r\;\in\;(\,0\,,\,1\,)$ and $t\;>\;0$, there exists a natural number $k \;=\; k\,(\,r \,,\, t\,)$ such that for all $x \;\in\;U$ and for all $n\;\geq\;k$, $$\mu_{\,V}\,\left(\,f_{\,n}\,( x ) \;-\; f( x ) \,,\,\frac{t}{3}\right) \;\;>\;\; 1 \;-\; r \;\;,\;\; \nu_{\,V}\,\left(\,f_{\,n}\,( x ) \;-\; f( x ) \,,\,\frac{t}{3}\right) \;\;<\;\; r$$ Thus, for all $x \;\in\;U$ , $$\mu_{\,V}\,\left(\,f_{\,k}\,( x ) \;-\; f( x ) \,,\,\frac{t}{3}\right) \;\;>\;\; 1 \;-\; r \;\;,\;\; \nu_{\,V}\,\left(\,f_{\,k}\,( x ) \;-\; f( x ) \,,\,\frac{t}{3}\right) \;\;<\;\; r$$ Let $x_{\,0}$ be an arbitrary but fixed point of $U$. Then we have $$\mu_{\,V}\,\left(\,f_{\,k}\,( x_{\,0} ) \;-\; f( x_{\,0} ) \,, \,\frac{t}{3}\right) \;\;>\;\; 1 \;-\; r \;\;,\;\; \nu_{\,V}\,\left(\,f_{\,k}\,( x_{\,0} ) \;-\; f( x_{\,0} ) \,,\,\frac{t}{3}\right) \;\;<\;\; r$$ Since each $f_{\,n}$ is intuitionistic fuzzy continuous on $U$, $f_{\,k}$ is intuitionistic fuzzy continuous at $x_{\,0}$.
So, for any given $r\;\in\;(\,0\,,\,1\,)$ and $t\;>\;0$, there exist $\delta \;=\; \delta\,\left(\,r \,,\, \frac{t}{3}\,\right) \;>\; 0$ , $\beta \;=\; \beta\,\left(\,r \,,\, \frac{t}{3}\,\right) \;\in\;(\,0\,,\,1\,)$ such that $$\mu_{\,U}\,(\,x \,-\,x_{\,0} \;,\; \delta\,) \;\,>\;\, 1 \,-\, \beta \;\,\Rightarrow \;\,\mu_{\,V}\,\left(\,f_{\,k}(x) \,-\,f_{\,k}(\,x_{\,0}\,) \;,\; \frac{t}{3} \,\right) \;\,>\;\,1 \,-\, r \; ,$$ $$\nu_{\,U}\,(\,x \,-\,x_{\,0} \;,\; \delta\,) \;\,<\;\,\beta \;\,\Rightarrow \;\,\nu_{\,V}\,\left(\,f_{\,k}(x) \,-\,f_{\,k}(\,x_{\,0}\,) \;,\; \frac{t}{3} \,\right) \;\,<\;\, r \hspace{0.5cm}$$ Thus, we see that for $\mu_{\,U}\,(\,x \,-\,x_{\,0} \;,\; \delta\,) \;\,>\;\, 1 \,-\, \beta $ , $$\mu_{\,V}\,\left(\,f(x) \,-\, f(\,x_{\,0}\,) \;,\; t\,\right) \;=\;\, \mu_{\,V}\,\left(\,f(x) \,-\, f_{\,k}(x) \,+\, f_{\,k}(x) \,-\, f_{\,k}(\,x_{\,0}\,) \,+\, f_{\,k}(\,x_{\,0}\,) \,-\, f(\,x_{\,0}\,) \;,\; t\,\right)$$ $$\hspace{2.5cm}\geq\;\; \mu_{\,V}\, \left(\,f(x) \,-\, f_{\,k}(x) \;,\; \frac{t}{3}\,\right) \;\ast\; \mu_{\,V}\,\left(\,f_{\,k}(x) \,-\, f_{\,k}(\,x_{\,0}\,) \;,\; \frac{t}{3}\,\right)$$ $$\hspace{7.5cm} \;\ast\; \mu_{\,V}\, \left(\,f_{\,k}(\,x_{\,0}\,) \,-\, f(\,x_{\,0}\,) \;,\; \frac{t}{3}\,\right)$$ $$>\;\; (\,1 \,-\, r\,) \;\ast\; (\,1 \,-\, r\,) \;\ast\; (\,1 \,-\, r\,) \;\,=\;\, 1 \,-\, r$$ Thus, we have $$\mu_{\,U}\,(\,x \,-\,x_{\,0} \;,\; \delta\,) \;\,>\;\,1 \,-\, \beta \;\,\Rightarrow\;\, \mu_{\,V}\,\left(\,f(x) \,-\, f(\,x_{\,0}\,) \;,\; t\,\right) \;\,>\;\, 1 \,-\, r\;\;\;\cdots\;\;\;(5)$$ Again , for $ \nu_{\,U}\,(\,x \,-\,x_{\,0} \;,\; \delta\,) \;\,<\;\,\beta $ , $$\nu_{\,V}\,\left(\,f(x) \,-\, f(\,x_{\,0}\,) \;,\; t\,\right) \;=\;\, \nu_{\,V}\,\left(\,f(x) \,-\, f_{\,k}(x) \,+\, f_{\,k}(x) \,-\, f_{\,k}(\,x_{\,0}\,) \,+\, f_{\,k}(\,x_{\,0}\,) \,-\, f(\,x_{\,0}\,) \;,\; t\,\right)$$ $$\hspace{4.5cm}\leq\;\; \nu_{\,V}\, \left(\,f(x) \,-\, f_{\,k}(x) \;,\; \frac{t}{3}\,\right) \;\diamond\; \nu_{\,V}\,\left(\,f_{\,k}(x) \,-\, 
f_{\,k}(\,x_{\,0}\,) \;,\; \frac{t}{3}\,\right)$$ $$\hspace{7.5cm} \;\diamond\; \nu_{\,V}\, \left(\,f_{\,k}(\,x_{\,0}\,) \,-\, f(\,x_{\,0}\,) \;,\; \frac{t}{3}\,\right)$$ $$<\;\; r \;\diamond\; r \;\diamond\; r \;\,=\;\, r \hspace{1.5cm}$$ Hence, we have $$\nu_{\,U}\,(\,x \,-\,x_{\,0} \;,\; \delta\,) \;\,<\;\,\beta \;\,\Rightarrow\;\, \nu_{\,V}\,\left(\,f(x) \,-\, f(\,x_{\,0}\,) \;,\; t\,\right) \;\,<\;\, r \;\;\;\cdots\;\;\;(6)$$ Thus, from $(5)$ and $(6)$ it follows that $f$ is intuitionistic fuzzy continuous on $(\,U\,,\,A\,)$. The converse of the above theorem is not necessarily true. For example, we consider the sequence of functions of example 4.3. It is obvious that each $\;f_n\;$ is sequentially intuitionistic fuzzy continuous on $\;(0,1)\;$ and hence is intuitionistic fuzzy continuous on $\;(0,1)\;$. Also, the limit function $\;f\;$ is intuitionistic fuzzy continuous on $\;(0,1)\;$, but the sequence $\{\,f_n\,\}_n$ is not uniformly intuitionistic fuzzy convergent on $\;(0,1)\;.$ **Open Problem:** ================== One can develop the concept of differentiation and Riemann integration in an intuitionistic fuzzy normed linear space and then verify whether the term by term differentiation and integration are valid or not for a sequence of functions in an intuitionistic fuzzy normed linear space. [0]{} Atanassov, K. *Intuitionistic fuzzy sets* , Fuzzy Sets and Systems 20 $(\,1986\,)$ 87 - 96.\ Bag, T. and Samanta, S.K. , *Finite Dimensional Fuzzy Normed Linear Spaces*, The Journal of Fuzzy Mathematics Vol. 11 $(\,2003\,)$ 687 - 705.\ Bag, T. and Samanta, S.K. , *Fuzzy bounded linear operators* , Fuzzy Sets and Systems 151 $(\,2005\,)$ 513 - 547. Cheng S.C. and Mordeson J.N. , *Fuzzy Linear Operators and Fuzzy Normed Linear Spaces* , Bull. Cal. Math. Soc. 86 $(\,1994\,)$ 429 - 436.\ Felbin, C. , *The completion of a fuzzy normed linear space*, Journal of Mathematical Analysis and Applications 174(2) $(\,1993\,)$ 428 - 440.\ Felbin, C.
, *Finite dimensional fuzzy normed linear space*, Journal of Analysis 7 $(\,1999\,)$ 117 - 131.\ T. K. Samanta and Iqbal H. Jebril , *Finite dimensional intuitionistic fuzzy normed linear space*, Int. J. Open Problems Compt. Math. ( accepted ) .\ Iqbal H. Jebril and Ra’ed Hatamleh , *Random n - Normed Linear Space* , Int. J. Open Problems Compt. Math. Vol. 2 , No. 3 , September 2009 pp 489 - 495.\ Iqbal H. Jebril and T. K. Samanta, *Anti fuzzy normed linear space*, Int. J. Open Problems Compt. Math. ( accepted )\ Schweizer B. , Sklar A. , *Statistical metric spaces*, Pacific Journal of Mathematics 10 $(\,1960\,)$ 314 - 334.\ Vijayabalaji S. , Thillaigovindan N. , Jun Y.B. *Intuitionistic Fuzzy n-normed linear space* , Bull. Korean Math. Soc. 44 $(\,2007\,)$ 291 - 308.\ Zadeh L.A. *Fuzzy sets*, Information and Control 8 $(\,1965\,)$ 338 - 353.\
--- abstract: 'We use biquaternions to construct SL(2,C) ADHM Yang-Mills instantons. The solutions contain 16k-6 moduli parameters for the kth homotopy class, and include as a subset the SL(2,C) (M,N) instanton solutions constructed previously. In contrast to the SU(2) instantons, the SL(2,C) instantons inherit jumping lines or singularities which are not gauge artifacts and cannot be gauged away.' author: - 'Jen-Chi Lee' title: 'Biquaternion Construction of SL(2,C) Yang-Mills Instantons' --- Introduction ============ The classical exact solutions of the Euclidean $SU(2)$ (anti)self-dual Yang-Mills (SDYM) equation were intensively studied by pure mathematicians and theoretical physicists in the 1970s. The first BPST $1$-instanton solution [@BPST] with $5$ moduli parameters was found in 1975. The CFTW k-instanton solutions [@CFTW] with $5k$ moduli parameters were soon constructed, and then the number of moduli parameters of the solutions for each homotopy class $k$ was extended to $5k+4$ ($5$,$13$ for $k=1$,$2$) [@JR] based on the conformal symmetry of the massless pure YM equation. The complete solutions with $8k-3$ moduli parameters for each $k$-th homotopy class were finally worked out in 1978 by the mathematicians ADHM [@ADHM] using theory in algebraic geometry. Through a one-to-one correspondence between anti-self-dual SU(2)-connections on $S^{4}$ and holomorphic vector bundles on $CP^{3}$, ADHM converted the highly nontrivial anti-SDYM equations into a much simpler system of quadratic algebraic equations in quaternions. The explicit closed form of the complete solutions for $k=2,3$ was worked out in [@CSW]. There are many important applications of instantons to algebraic geometry and quantum field theory. One important application of instantons in algebraic geometry was the classification of four-manifolds [@5]. On the physics side, the non-perturbative instanton effect in QCD resolved the $U(1)_{A}$ problem [@U(1)].
Another important application of YM instantons in quantum field theory was the introduction of $\theta$-vacua [@the] in nonperturbative QCD, which created the strong $CP$ problem. In addition to $SU(2)$, the ADHM construction has been generalized to the cases of $SU(N)$ SDYM and many other SDYM theories with compact Lie groups [@CSW; @JR2]. In this talk we are going to consider the classical solutions of the non-compact $SL(2,C)$ SDYM system. YM theory based on $SL(2,C)$ was first discussed in the 1970s [@WY; @Hsu]. It was found that the complex $SU(2)$ YM field configurations can be interpreted as the real field configurations in $SL(2,C)$ YM theory. However, due to the non-compactness of $SL(2,C)$, the Cartan-Killing form or group metric of $SL(2,C)$ is not positive definite. Thus the action integral and the Hamiltonian of non-compact $SL(2,C)$ YM theory may not be positive. Nevertheless, there are still important motivations to study $SL(2,C)$ SDYM theory. For example, it was shown that the $4D$ $SL(2,C)$ SDYM equation can be dimensionally reduced to many important $1+1$ dimensional integrable systems [@Mason], such as the KdV equation and the nonlinear Schrödinger equation. SL(2,C) SDYM Equation ===================== We first briefly review the $SL(2,C)$ YM theory. It was shown [@WY] that there are two linearly independent choices of $SL(2,C)$ group metric $$g^{a}=\begin{pmatrix} I & 0\\ 0 & -I \end{pmatrix} ,g^{b}=\begin{pmatrix} 0 & I\\ I & 0 \end{pmatrix}$$ where $I$ is the $3\times3$ unit matrix. In general, we can choose $$g=\cos\theta g^{a}+\sin\theta g^{b}$$ where $\theta$ is a real constant. Note that the metric is not positive definite due to the non-compactness of $SL(2,C).$ On the other hand, it was shown that the $SL(2,C)$ group can be decomposed such that [@Lee]$$SL(2,C)=SU(2)\cdot P,\;P\in H$$ where $SU(2)$ is the maximal compact subgroup of $SL(2,C)$, $P\in H$ (not a group) and $H=\{P|P$ is Hermitian, positive definite, and $detP=1\}$.
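The decomposition $SL(2,C)=SU(2)\cdot P$ quoted above is the matrix polar decomposition $g=u\,p$ with $u$ unitary and $p=(g^{\dagger}g)^{1/2}$ Hermitian positive definite; $\det g=1$ then forces $\det p=\det u=1$. A minimal numerical sketch (our illustration, not from the talk; the function name `polar_sl2c` and the sample matrix are our choices):

```python
import numpy as np

def polar_sl2c(g):
    # p = (g^dagger g)^{1/2} via eigendecomposition of the Hermitian
    # positive definite matrix g^dagger g; then u = g p^{-1} is unitary
    w, v = np.linalg.eigh(g.conj().T @ g)
    p = v @ np.diag(np.sqrt(w)) @ v.conj().T
    u = g @ np.linalg.inv(p)
    return u, p

# a sample complex 2x2 matrix, normalised to unit determinant
g = np.array([[2.0 + 1.0j, 1.0j], [1.0, 0.4 - 0.2j]])
g = g / np.sqrt(np.linalg.det(g))

u, p = polar_sl2c(g)
print(np.allclose(u @ p, g))                    # g = u . p
print(np.allclose(u.conj().T @ u, np.eye(2)))   # u is unitary
print(np.isclose(np.linalg.det(p), 1))          # det p = 1, hence det u = 1
```

Since $\det p=|\det g|=1$, the factor $p$ indeed lies in the set $H$ of Hermitian, positive definite, unit-determinant matrices described above.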
The parameter space of $H$ is a noncompact space $R^{3}$. The third homotopy group is thus [@Lee]$$\pi_{3}[SL(2,C)]=\pi_{3}[S^{3}\times R^{3}]=\pi_{3}(S^{3})\cdot\pi_{3}(R^{3})=Z\cdot I=Z$$ where $I$ is the identity group, and $Z$ is the integer group. On the other hand, Wu and Yang [@WY] have shown that a complex $SU(2)$ gauge field is related to a real $SL(2,C)$ gauge field. Starting from the $SU(2)$ complex gauge field formalism, we can write down all the $SL(2,C)$ field equations. Let $$G_{\mu}^{a}=A_{\mu}^{a}+iB_{\mu}^{a}$$ and, for convenience, we set the coupling constant $g=1$. The complex field strength is defined as $$F_{\mu\nu}^{a}\equiv H_{\mu\nu}^{a}+iM_{\mu\nu}^{a},\text{ \ }a,b,c=1,2,3$$ where $$\begin{aligned} H_{\mu\nu}^{a} & =\partial_{\mu}A_{\nu}^{a}-\partial_{\nu}A_{\mu}^{a}+\epsilon^{abc}(A_{\mu}^{b}A_{\nu}^{c}-B_{\mu}^{b}B_{\nu}^{c}),\nonumber\\ M_{\mu\nu}^{a} & =\partial_{\mu}B_{\nu}^{a}-\partial_{\nu}B_{\mu}^{a}+\epsilon^{abc}(A_{\mu}^{b}B_{\nu}^{c}-A_{\nu}^{b}B_{\mu}^{c}),\end{aligned}$$ and the $SL(2,C)$ Yang-Mills equations can then be written as $$\begin{aligned} \partial_{\mu}H_{\mu\nu}^{a}+\epsilon^{abc}(A_{\mu}^{b}H_{\mu\nu}^{c}-B_{\mu }^{b}M_{\mu\nu}^{c}) & =0,\nonumber\\ \partial_{\mu}M_{\mu\nu}^{a}+\epsilon^{abc}(A_{\mu}^{b}M_{\mu\nu}^{c}+B_{\mu }^{b}H_{\mu\nu}^{c}) & =0.\end{aligned}$$ The $SL(2,C)$ SDYM equations are$$\begin{aligned} H_{\mu\nu}^{a} & =\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}H_{\alpha\beta }^{a},\nonumber\\ M_{\mu\nu}^{a} & =\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}M_{\alpha\beta}^{a}. \label{self}\end{aligned}$$ The Yang-Mills equations above can be derived from the following Lagrangian$$L_{\theta}=\frac{1}{4}[F_{\mu\nu}^{i}]^{T}g_{ij}[F_{\mu\nu}^{j}]=\cos{\theta }(\frac{1}{4}H_{\mu\nu}^{a}H_{\mu\nu}^{a}-\frac{1}{4}M_{\mu\nu}^{a}M_{\mu\nu }^{a})+\sin{\theta}(\frac{1}{2}H_{\mu\nu}^{a}M_{\mu\nu}^{a}) \label{action}$$ where $F_{\mu\nu}^{k}=H_{\mu\nu}^{k}$ and $F_{\mu\nu}^{3+k}=M_{\mu\nu}^{k}$ for $k=1,2,3$.
Note that $L_{\theta}$ is indefinite for any real value of $\theta$. In this talk we shall only consider the particular case $\theta=0$, i.e. $$L=\frac{1}{4}(H_{\mu\nu}^{a}H_{\mu\nu}^{a}-M_{\mu\nu}^{a}M_{\mu\nu}^{a}),$$ for the action density in discussing the homotopic classifications of our solutions. Biquaternion construction of $SL(2,C)$ YM Instantons ==================================================== Instead of the quaternions of the $Sp(1)$ ($=SU(2)$) ADHM construction, we will use *biquaternions* to construct $SL(2,C)$ SDYM instantons. A quaternion $x$ can be written as$$x=x_{\mu}e_{\mu}\text{, \ }x_{\mu}\in R\text{, \ }e_{0}=1,e_{1}=i,e_{2}=j,e_{3}=k \label{x}$$ where $e_{1},e_{2}$ and $e_{3}$ anticommute and obey$$\begin{aligned} e_{i}\cdot e_{j} & =-e_{j}\cdot e_{i}=\epsilon_{ijk}e_{k};\text{ \ }i,j,k=1,2,3,\\ e_{1}^{2} & =-1,e_{2}^{2}=-1,e_{3}^{2}=-1.\end{aligned}$$ An (ordinary) biquaternion (or complex-quaternion) $z$ can be written as$$z=z_{\mu}e_{\mu}\text{, \ }z_{\mu}\in C,$$ which will be used in this talk. Alternatively, $z$ can be written as$$z=x+yi$$ where $x$ and $y$ are quaternions and $i=\sqrt{-1},$ not to be confused with $e_{1}$ in Eq.(\[x\]). For a biquaternion, the biconjugation [@Ham]$$z^{\circledast}=z_{\mu}e_{\mu}^{\dagger}=z_{0}e_{0}-z_{1}e_{1}-z_{2}e_{2}-z_{3}e_{3}=x^{\dagger}+y^{\dagger}i,$$ will be heavily used in this talk. In contrast to the real-number norm square of a quaternion, the norm square of a biquaternion used in this talk is defined to be$$|z|_{c}^{2}=z^{\circledast}z=(z_{0})^{2}+(z_{1})^{2}+(z_{2})^{2}+(z_{3})^{2},$$ which is a *complex* number in general, as indicated by the subscript $c$ in the norm. We are now ready to proceed with the construction of $SL(2,C)$ instantons.
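Before doing so, the algebra just defined is easy to realize numerically. The following minimal sketch (component conventions as in Eq.(\[x\]); the helper names are ours) implements biquaternion multiplication, biconjugation and the complex norm square:

```python
# A biquaternion is stored as a 4-tuple (z0, z1, z2, z3) of complex numbers,
# z = z0*e0 + z1*e1 + z2*e2 + z3*e3, with e1, e2, e3 anticommuting.

def bq_mul(z, w):
    z0, z1, z2, z3 = z
    w0, w1, w2, w3 = w
    return (
        z0 * w0 - z1 * w1 - z2 * w2 - z3 * w3,
        z0 * w1 + z1 * w0 + z2 * w3 - z3 * w2,
        z0 * w2 + z2 * w0 + z3 * w1 - z1 * w3,
        z0 * w3 + z3 * w0 + z1 * w2 - z2 * w1,
    )

def bq_biconj(z):
    # z^circledast = z0 e0 - z1 e1 - z2 e2 - z3 e3 (components are NOT conjugated)
    return (z[0], -z[1], -z[2], -z[3])

def bq_norm_sq(z):
    # |z|_c^2 = z^circledast z = z0^2 + z1^2 + z2^2 + z3^2, complex in general
    return bq_mul(bq_biconj(z), z)[0]
```

Note that, unlike the quaternion norm, $|z|_{c}^{2}$ can vanish for a nonzero biquaternion (e.g. $z=e_{0}+ie_{1}$); such zero divisors are the algebraic origin of the singular locus $J$ encountered below.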
We begin by introducing the $(k+1)\times k$ biquaternion matrix $\Delta(x)=a+bx$ $$\Delta(x)_{ab}=a_{ab}+b_{ab}x,\text{ }a_{ab}=a_{ab}^{\mu}e_{\mu},b_{ab}=b_{ab}^{\mu}e_{\mu} \label{ab}$$ where $a_{ab}^{\mu}$ and $b_{ab}^{\mu}$ are complex numbers, and $a_{ab}$ and $b_{ab}$ are biquaternions. The biconjugation of the $\Delta(x)$ matrix is defined to be$$\Delta(x)_{ab}^{\circledast}=\Delta(x)_{ba}^{\mu}e_{\mu}^{\dagger}=\Delta(x)_{ba}^{0}e_{0}-\Delta(x)_{ba}^{1}e_{1}-\Delta(x)_{ba}^{2}e_{2}-\Delta(x)_{ba}^{3}e_{3}.$$ In contrast to the case of $SU(2)$ instantons, the quadratic condition of $SL(2,C)$ instantons reads $$\Delta(x)^{\circledast}\Delta(x)=f^{-1}=\text{symmetric, non-singular }k\times k\text{ matrix for }x\notin J\text{,} \label{ff}$$ from which we can deduce that $a^{\circledast}a,b^{\circledast}a,a^{\circledast}b$ and $b^{\circledast}b$ are all symmetric matrices. We stress here that it will turn out that the choice of the *biconjugation* operation is crucial for the follow-up discussion in this work. On the other hand, for $x\in J,$ $\det\Delta(x)^{\circledast}\Delta(x)=0$. The set $J$ is called the singular locus, or the set of “jumping lines”, in the mathematical literature and was discussed in [@LLT]. In contrast to the $SL(2,C)$ instantons, there are no jumping lines for the case of $SU(2)$ instantons. In the $Sp(1)$ quaternion case, the symmetric condition on $f^{-1}$ means that $f^{-1}$ is real. For the $SL(2,C)$ biquaternion case, however, it can be shown that the symmetric condition on $f^{-1}$ implies that $f^{-1}$ is *complex*. To construct the self-dual gauge field, we introduce a $(k+1)\times1$ dimensional biquaternion vector $v(x)$ satisfying the following two conditions$$\begin{aligned} v^{\circledast}(x)\Delta(x) & =0,\label{null}\\ v^{\circledast}(x)v(x) & =1.
\label{norm2}\end{aligned}$$ Note that $v(x)$ is fixed up to an $SL(2,C)$ gauge transformation$$v(x)\longrightarrow v(x)g(x),$$ where $g(x)$ is a $1\times1$ biquaternion. Note also that in general an $SL(2,C)$ matrix can be written in terms of a $1\times1$ biquaternion as$$g=\frac{q_{\mu}e_{\mu}}{\sqrt{q^{\circledast}q}}=\frac{q_{\mu}e_{\mu}}{|q|_{c}}.$$ The next step is to define the gauge field $$G_{\mu}(x)=v^{\circledast}(x)\partial_{\mu}v(x), \label{A}$$ which is a $1\times1$ biquaternion. Note that, unlike the case for $Sp(1)$, $G_{\mu}(x)$ need not be anti-Hermitian. We can now define the $SL(2,C)$ field strength $$F_{\mu\nu}=\partial_{\mu}G_{\nu}(x)+G_{\mu}(x)G_{\nu}(x)-[\mu \longleftrightarrow\nu].$$ To show that $F_{\mu\nu}$ is self-dual, one first shows that the operator $$P=1-v(x)v^{\circledast}(x)$$ is a projection operator, $P^{2}=P$, and can be written in terms of $\Delta$ as $$P=\Delta(x)f\Delta^{\circledast}(x). \label{P}$$ The self-duality of $F_{\mu\nu}$ can now be proved as follows$$\begin{aligned} F_{\mu\nu} & =\partial_{\mu}(v^{\circledast}(x)\partial_{\nu}v(x))+v^{\circledast}(x)\partial_{\mu}v(x)v^{\circledast}(x)\partial_{\nu }v(x)-[\mu\longleftrightarrow\nu]\nonumber\\ & =v^{\circledast}(x)b(e_{\mu}e_{\nu}^{\dagger}-e_{\nu}e_{\mu}^{\dagger })fb^{\circledast}v(x) \label{F}\end{aligned}$$ where we have used Eqs.(\[ab\]),(\[null\]) and (\[P\]). Finally, the factor $(e_{\mu}e_{\nu}^{\dagger}-e_{\nu}e_{\mu}^{\dagger})$ above can be shown to be self-dual$$\begin{aligned} \sigma_{\mu\nu} & \equiv\frac{1}{4i}(e_{\mu}e_{\nu}^{\dagger}-e_{\nu}e_{\mu }^{\dagger})=\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}\sigma_{\alpha\beta },\label{duall}\\ \overset{\_}{\sigma}_{\mu\nu} & =\frac{1}{4i}(e_{\mu}^{\dagger}e_{\nu }-e_{\nu}^{\dagger}e_{\mu})=-\frac{1}{2}\epsilon_{\mu\nu\alpha\beta }\overset{\_}{\sigma}_{\alpha\beta}.\end{aligned}$$ This proves the self-duality of $F_{\mu\nu}.$ We thus have constructed many $SL(2,C)$ SDYM field configurations.
To count the number of moduli parameters for the $SL(2,C)$ $k$-instantons we have constructed, one uses transformations which preserve the conditions in Eq.(\[ff\]), Eq.(\[null\]) and Eq.(\[norm2\]), and the definition of $G_{\mu}$ in Eq.(\[A\]), to bring $b$ and $a$ in Eq.(\[ab\]) into the simple canonical form $$b=\begin{bmatrix} 0_{1\times k}\\ I_{k\times k}\end{bmatrix} , \label{b}$$ $$a=\begin{bmatrix} \lambda_{1\times k}\\ -y_{k\times k}\end{bmatrix} \label{a}$$ where $\lambda$ and $y$ are biquaternion matrices of orders $1\times k$ and $k\times k$ respectively, and $y$ is symmetric $$y=y^{T}.$$ The constraints for the moduli parameters are$$a_{ci}^{\circledast}a_{cj}=0,i\neq j,\text{ and \ }y_{ij}=y_{ji}. \label{dof}$$ The total number of moduli parameters for the $k$-instanton can be calculated through Eq.(\[dof\]) to be$$\text{\# of moduli for }SL(2,C)\text{ }k\text{-instantons}=16k-6,$$ which is twice that of the $Sp(1)$ case. Roughly speaking, there are $8k$ parameters for the instanton “biquaternion positions” and $8k$ parameters for the instanton “sizes”. Finally, one has to subtract the $6$ degrees of freedom of an overall $SL(2,C)$ gauge transformation. This picture will become clearer when we give examples of explicit constructions of $SL(2,C)$ instantons in the next section. Examples of $SL(2,C)$ instantons and Jumping lines ================================================== In this section, we will explicitly construct examples of $SL(2,C)$ YM instantons to illustrate our prescription given in the last section. An example of $SL(2,C)$ instantons with jumping lines will also be given. The $SL(2,C)$ $(M,N)$ Instantons -------------------------------- In this first example, we will reproduce from the ADHM construction the $SL(2,C)$ $(M,N)$ instanton solutions constructed in [@Lee].
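The counting above can be tallied in one line; the sketch below is mere bookkeeping, following the rough split into "positions" and "sizes" just given:

```python
# 8k real parameters for the k biquaternion "positions", 8k for the "sizes",
# minus the 6 real parameters of a global SL(2,C) gauge transformation.

def sl2c_moduli(k):
    return 8 * k + 8 * k - 6          # = 16k - 6

def sp1_moduli(k):
    return 8 * k - 3                  # the classic Sp(1) = SU(2) ADHM count
```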
We choose the biquaternion $\lambda_{j}$ in Eq.(\[a\]) to be $\lambda_{j}e_{0}$ with $\lambda_{j}$ a *complex* number, and choose $y_{ij}=y_{j}\delta_{ij}$ to be a diagonal matrix with $y_{j}=y_{j\mu}e_{\mu}$ a quaternion. That is $$\Delta(x)=\begin{bmatrix} \lambda_{1} & \lambda_{2} & ... & \lambda_{k}\\ x-y_{1} & 0 & ... & 0\\ 0 & x-y_{2} & ... & 0\\ . & ... & ... & ...\\ 0 & 0 & ... & x-y_{k}\end{bmatrix} , \label{delta}$$ which satisfies the constraint in Eq.(\[dof\]). One can calculate the gauge potential as$$\begin{aligned} G_{\mu} & =v^{\circledast}\partial_{\mu}v=\frac{1}{4}[e_{\mu}^{\dagger }e_{\nu}-e_{\nu}^{\dagger}e_{\mu}]\partial_{\nu}\ln(1+\frac{\lambda_{1}^{2}}{|x-y_{1}|^{2}}+...+\frac{\lambda_{k}^{2}}{|x-y_{k}|^{2}})\nonumber\\ & =\frac{1}{4}[e_{\mu}^{\dagger}e_{\nu}-e_{\nu}^{\dagger}e_{\mu}]\partial_{\nu}\ln(\phi)\end{aligned}$$ where $$\phi=1+\frac{\lambda_{1}^{2}}{|x-y_{1}|^{2}}+...+\frac{\lambda_{k}^{2}}{|x-y_{k}|^{2}}. \label{fai}$$ For the case of $Sp(1),$ $\lambda_{j}$ is a real number and $\lambda _{j}\lambda_{j}^{\dagger}=\lambda_{j}^{2}$ is real; here, in contrast, $\lambda_{j}$ is complex, so $\phi$ in Eq.(\[fai\]) is a complex-valued function in general. If we choose $k=1$ and define $\lambda_{1}^{2}=\frac{\alpha_{1}^{2}}{1+i},$ then$$\phi=1+\frac{\frac{\alpha_{1}^{2}}{1+i}}{|x-y_{1}|^{2}}.$$ The gauge potential is$$\begin{aligned} G_{\mu} & =\frac{1}{4}[e_{\mu}^{\dagger}e_{\nu}-e_{\nu}^{\dagger}e_{\mu }]\partial_{\nu}\ln(1+\frac{\frac{\alpha_{1}^{2}}{1+i}}{|x-y_{1}|^{2}})=\frac{1}{4}[e_{\mu}^{\dagger}e_{\nu}-e_{\nu}^{\dagger}e_{\mu}]\partial_{\nu }\ln(1+\frac{\alpha_{1}^{2}}{|x-y_{1}|^{2}}+i)\nonumber\\ & =\frac{1}{2}[e_{\mu}^{\dagger}e_{\nu}-e_{\nu}^{\dagger}e_{\mu}]\frac{-\alpha_{1}^{2}(x-y_{1})_{\nu}}{|x-y_{1}|^{4}+(|x-y_{1}|^{2}+\alpha _{1}^{2})^{2}}[\frac{|x-y_{1}|^{2}+\alpha_{1}^{2}}{|x-y_{1}|^{2}}-i]\end{aligned}$$ which reproduces the $SL(2,C)$ $(M,N)=(1,0)$ solution calculated in [@Lee]. (In the second equality we used the fact that multiplying the argument of the logarithm by the constant $1+i$ does not change its derivative.)
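The last equality can be cross-checked numerically. The sketch below (arbitrary sample point and parameters; the helper names are ours) compares a central finite difference of $\partial_{\nu}\ln\phi$ for $k=1$, $\lambda_{1}^{2}=\alpha_{1}^{2}/(1+i)$, with the closed-form scalar factor appearing in the final expression for $G_{\mu}$:

```python
import cmath

alpha1_sq = 2.0                       # alpha_1^2 (arbitrary test value)
y1 = (0.3, -0.2, 0.1, 0.5)            # instanton position (arbitrary)
x = (1.0, 0.7, -0.4, 0.2)             # sample evaluation point

def r_sq(pt):
    return sum((pt[m] - y1[m]) ** 2 for m in range(4))

def ln_phi(pt):
    # phi = 1 + (alpha_1^2/(1+i)) / |x - y_1|^2
    return cmath.log(1 + (alpha1_sq / (1 + 1j)) / r_sq(pt))

def closed_form(pt, nu):
    # scalar factor of the final line: -2 alpha_1^2 (x-y_1)_nu / (r^4 + (r^2+alpha_1^2)^2)
    # times [(r^2 + alpha_1^2)/r^2 - i]
    r2 = r_sq(pt)
    return (-2 * alpha1_sq * (pt[nu] - y1[nu])
            / (r2 ** 2 + (r2 + alpha1_sq) ** 2)
            * ((r2 + alpha1_sq) / r2 - 1j))

h = 1e-6
fd = []
for nu in range(4):
    xp = tuple(x[m] + (h if m == nu else 0.0) for m in range(4))
    xm = tuple(x[m] - (h if m == nu else 0.0) for m in range(4))
    fd.append((ln_phi(xp) - ln_phi(xm)) / (2 * h))
```

The agreement (to finite-difference accuracy) confirms that multiplying the argument of the logarithm by the constant $1+i$ leaves $G_{\mu}$ unchanged.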
It is easy to generalize the above calculations to the general $(M,N)$ cases, and it can be shown that the topological charge of these field configurations is $k=M+N$ [@Lee]. $SL(2,C)$ CFTW $k$-instantons and jumping lines ----------------------------------------------- For another subset of $k$-instanton field configurations, one chooses $\lambda_{i}=\lambda_{i}e_{0}$ (with $\lambda_{i}$ a *complex* number) and $y_{i}$ to be a *biquaternion* in Eq.(\[delta\]). It is important to note that for these choices, the constraints in Eq.(\[dof\]) are still satisfied *without* turning on the off-diagonal elements $y_{ij}$ in Eq.(\[a\]). It can be shown that, for these field configurations, there are non-removable singularities which are zeros ($x\in J$) of$$\phi=1+\frac{\lambda_{1}\lambda_{1}^{\circledast}}{|x-y_{1}|_{c}^{2}}+...+\frac{\lambda_{k}\lambda_{k}^{\circledast}}{|x-y_{k}|_{c}^{2}}, \label{k}$$ or$$\det\Delta(x)^{\circledast}\Delta(x)=|x-y_{1}|_{c}^{2}|x-y_{2}|_{c}^{2}\cdot\cdot\cdot|x-y_{k}|_{c}^{2}\phi=P_{2k}(x)+iP_{2k-1}(x)=0. \label{jump}$$ For the $k$-instanton case, one encounters intersections of the zeros of the polynomials $P_{2k}(x)$ and $P_{2k-1}(x)$ with degrees $2k$ and $2k-1$ respectively$$P_{2k}(x)=0,\text{ \ }P_{2k-1}(x)=0. \label{pp}$$ These new singularities cannot be gauged away and do not show up in the field configurations of $SU(2)$ $k$-instantons. Mathematically, the existence of singular structures of the non-compact $SL(2,C)$ SDYM field configurations is consistent with the recent inclusion of “sheaves” by Frenkel and Jardim [@math2], rather than just the restricted notion of “vector bundles”, in the one-to-one correspondence between ASDYM and certain algebraic geometric objects. Acknowledgments =============== This talk is based on a collaboration paper with S.H. Lai and I.H. Tsai. I thank MoST, Taiwan, for financial support. [99]{} A. Belavin, A. Polyakov, A. Schwartz, Y. Tyupkin, “Pseudo-particle solutions of the Yang-Mills equations”, Phys.
Lett. B 59 (1975) 85. R. Jackiw, C. Rebbi, “Conformal properties of a Yang-Mills pseudoparticle”, Phys. Rev. D 14 (1976) 517; R. Jackiw, C. Nohl and C. Rebbi, “Conformal properties of pseudoparticle configurations”, Phys. Rev. D 15 (1977) 1642. M. Atiyah, V. Drinfeld, N. Hitchin, Yu. Manin, “Construction of instantons”, Phys. Lett. A 65 (1978) 185. N. H. Christ, E. J. Weinberg and N. K. Stanton, “General Self-Dual Yang-Mills Solutions”, Phys. Rev. D 18 (1978) 2013. V. Korepin and S. Shatashvili, “Rational parametrization of the three instanton solutions of the Yang-Mills equations”, Math. USSR Izvestiya 24 (1985) 307. S.K. Donaldson and P.B. Kronheimer, The Geometry of Four-Manifolds, Oxford University Press (1990). G. ’t Hooft, “Computation of the quantum effects due to a four-dimensional pseudoparticle”, Phys. Rev. D 14 (1976) 3432. G. ’t Hooft, “Symmetry breaking through Bell-Jackiw anomalies”, Phys. Rev. Lett. 37 (1976) 8. C. Callan Jr., R. Dashen, D. Gross, “The structure of the gauge theory vacuum”, Phys. Lett. B 63 (1976) 334; “Toward a theory of the strong interactions”, Phys. Rev. D 17 (1978) 2717. R. Jackiw, C. Rebbi, “Vacuum periodicity in a Yang-Mills quantum theory”, Phys. Rev. Lett. 37 (1976) 172. R. Jackiw and C. Rebbi, Phys. Lett. 67B (1977) 189. C. W. Bernard, N. H. Christ, A. H. Guth and E. J. Weinberg, Phys. Rev. D16 (1977) 2967. J. P. Hsu and E. Mac, J. Math. Phys. 18 (1977) 1377. L. J. Mason and G. A. J. Sparling, “Nonlinear Schrödinger and Korteweg-de Vries are reductions of self-dual Yang-Mills”, Phys. Lett. A 137, 29–33 (1989). K. L. Chang and J. C. Lee, “On solutions of self-dual SL(2,C) gauge theory”, Chinese Journal of Phys. Vol. 44, No.4 (1984) 59. J.C. Lee and K. L. Chang, “SL(2,C) Yang-Mills Instantons”, Proc. Natl. Sci. Counc. ROC (A), Vol 9, No 4 (1985) 296. W. R. Hamilton, Lectures on Quaternions, Macmillan & Co, Cornell University Library (1853).
Sheng-Hong Lai, Jen-Chi Lee and I-Hsun Tsai, “Biquaternions and ADHM Construction of Non-Compact SL(2,C) Yang-Mills Instantons”, Ann. Phys. 361 (2015) 14-32. I. Frenkel, M. Jardim, “Complex ADHM equations and sheaves on $P^{3}$”, Journal of Algebra 319 (2008) 2913-2937. J. Madore, J.L. Richard and R. Stora, “An Introduction to the Twistor Programme”, Phys. Rept. 49, No. 2 (1979) 113-130.
--- abstract: 'We theoretically study chaos synchronization of two lasers which are delay-coupled via an active or a passive relay. While the lasers are synchronized, their dynamics is identical to a single laser with delayed feedback for a passive relay and identical to two delay-coupled lasers for an active relay. Depending on the coupling parameters the system exhibits bubbling, [i.e.]{}, noise-induced desynchronization, or on-off intermittency. We associate the desynchronization dynamics in the coherence collapse and low frequency fluctuation regimes with the transverse instability of some of the compound cavity’s antimodes. Finally, we demonstrate how, by using an active relay, bubbling can be suppressed.' author: - 'V. Flunkert^1^' - 'O. D’Huys^2^' - 'J. Danckaert^2,3^' - 'I. Fischer^4^' - 'E. Schöll^1^' title: 'Bubbling in delay-coupled lasers' --- Synchronization phenomena of coupled nonlinear oscillators are omnipresent and play an important role in physical, chemical and biological systems [@BOC02; @PIK01]. Understanding the synchronization mechanisms is crucial for many practical applications. One of the most interesting and challenging phenomena when coupling nonlinear systems is the synchronization of chaotic dynamics [@PEC90]. In order to characterize the synchronization effects, stability properties are a key issue. Noise can, for instance, cause intermittent desynchronization. This behavior is called bubbling [@ASH94] and has been observed for example in optical [@TER99; @SAU98] and electrical [@GAU96] systems. Semiconductor lasers are of particular interest in the study of chaos synchronization. The synchronization properties may facilitate new secure communication schemes. However, if two identical semiconductor lasers are optically coupled over a finite distance, it has been observed that the coupling delay leads to spontaneous symmetry breaking, and only generalized synchronization of leader-laggard type occurs [@MUL04]. 
A passive relay in the form of a semitransparent mirror or an active relay in the form of a third laser between the two lasers has been shown to stabilize the isochronous synchronization solution [@SHA06; @KLE06; @LAN07; @FIS06], rendering such configurations attractive for chaos-based applications, such as bidirectional encrypted communication or chaos-based key exchange, as detailed in Ref. [@VIC04]. In this work we show theoretically that bubbling and on-off intermittency occur in both relay setups. In the coherence collapse (CC) and in the low frequency fluctuation (LFF) regime, we find that bubbling is caused by transversally unstable external cavity modes (ECMs). In the LFF regime the localization of the transversally unstable modes in the synchronization manifold (SM) results in desynchronization during power dropouts, which has also been observed in unidirectionally coupled lasers [@AHL98]. For the active relay we find that bubbling can be suppressed by stronger pumping of the relay laser. We consider two identical systems which are delay-coupled via a relay ([Fig.]{}\[fig:setup\]). ![](figures/with_relay){width="100%"} The relay may be an active element or a passive element which merely distributes the arriving signals between the systems. Each system receives a delayed signal from the relay $$\begin{aligned} \dot{{{\bf X}}}_{j} & = {{\bf f}}({{\bf X}}_{j})+K\,{{\bf Y}}(t-\tau/2)\qquad ({\textstyle j = 1, 2}). \label{eq:general}\end{aligned}$$ Here ${{\bf X}}_j, {{\bf Y}}\in \mathbb{R}^n$ are the state vectors of the system $j$ and the relay, respectively, ${{\bf f}}$ is a nonlinear function, $K$ is the relay-to-system coupling matrix, and $\tau$ is the propagation delay between system 1 and system 2. The overdot denotes the derivative with respect to time $t$.
For the active relay we consider the equation $$\begin{aligned} \dot{{{\bf Y}}} & = {{\bf g}}({{\bf Y}}) + {{\textstyle \frac{1}{2}}}L\, {{\bf X}}_{1}(t-\tau/2) + {{\textstyle \frac{1}{2}}}L\, {{\bf X}}_{2}(t-\tau/2), \label{eq:active}\end{aligned}$$ where $L$ is the system-to-relay coupling matrix and the function ${{\bf g}}$ describes the internal dynamics of the relay. For the passive relay we consider the algebraic equation $$\begin{aligned} {{\bf Y}}(t) & = {{\textstyle \frac{1}{2}}}[ {{\bf X}}_{1}(t-\tau/2) + {{\bf X}}_{2}(t-\tau/2) ]. \label{eq:passive}\end{aligned}$$ Equation (\[eq:general\]) together with the relay equation (\[eq:active\]) or (\[eq:passive\]) allows for an isochronous (or *zero-lag*) solution ${{\bf X}}_1(t)={{\bf X}}_2(t)$. The SM is thus invariant. To analyse the stability of this solution we introduce a symmetric variable ${{\bf S}}= {{\textstyle \frac{1}{2}}}({{\bf X}}_{1}+{{\bf X}}_{2})$ and an antisymmetric variable ${{\bf A}}= {{\textstyle \frac{1}{2}}}({{\bf X}}_{1}-{{\bf X}}_{2})$. Equation (\[eq:general\]) can then be rewritten in the new variables $$\begin{aligned} \dot{{{\bf S}}}&= {{\textstyle \frac{1}{2}}}\left[{{\bf f}}({{\bf S}}+{{\bf A}})+{{\bf f}}({{\bf S}}-{{\bf A}})\right]+ K\, {{\bf Y}}(t-\tau/2), \label{eq:fullS}\\ \dot{{{\bf A}}}&= {{\textstyle \frac{1}{2}}}\left[{{\bf f}}({{\bf S}}+{{\bf A}})-{{\bf f}}({{\bf S}}-{{\bf A}})\right]. \label{eq:fullA} \end{aligned}$$ Note that due to the symmetric coupling the delay terms and all the coupling parameters in [Eq.]{}(\[eq:fullA\]) vanish. Equation (\[eq:fullA\]) has the fixed-point solution ${{\bf A}}={\bf 0}$, which represents the isochronously synchronized state.
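This invariance can be checked with a toy integration. The sketch below (an arbitrary scalar nonlinearity, not the laser model used later; parameter values are illustrative) integrates Eq.(\[eq:general\]) with the passive relay (\[eq:passive\]) using an Euler scheme with a delay buffer; identical initial histories stay identical, so the SM is indeed invariant:

```python
def f(x):
    return x - x ** 3                        # arbitrary scalar nonlinearity (toy choice)

dt, tau, K, steps = 0.01, 1.0, 0.3, 5000     # illustrative values
d = int(round(tau / dt))                     # round-trip delay tau in steps
x1 = [0.5] * (d + 1)                         # identical constant initial histories
x2 = [0.5] * (d + 1)
for n in range(d, d + steps):
    relay = 0.5 * (x1[n - d] + x2[n - d])    # passive relay: average of delayed signals
    x1.append(x1[n] + dt * (f(x1[n]) + K * relay))
    x2.append(x2[n] + dt * (f(x2[n]) + K * relay))
```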
Its stability is determined by linearizing [Eqs.]{}(\[eq:fullS\]) and (\[eq:fullA\]) in the variable ${{\bf A}}$ around ${{\bf A}}={\bf 0}$, [i.e.]{}, we linearize orthogonally to the SM: $$\begin{aligned} \dot{{{\bf S}}}&= {{\bf f}}({{\bf S}})+ K\, {{\bf Y}}(t-\tau/2), \label{eq:linearS}\\ \dot{{{\bf A}}}&= D{{\bf f}}({{\bf S}}){{\bf A}}. \label{eq:linearA} \end{aligned}$$ Here, $D{{\bf f}}({{\bf S}})$ denotes the Jacobian of ${{\bf f}}$ evaluated at the position ${{\bf S}}$. Since ${{\bf S}}$ depends on time, [Eq.]{}(\[eq:linearA\]) constitutes a time-dependent variational equation. For both relay types the dynamics within the SM resembles the dynamics of a single system with either self-feedback (passive relay) $$\begin{aligned} \dot {{\bf S}}&= {{\bf f}}({{\bf S}}) + K {{\bf S}}(t-\tau)\label{eq:DynPassiveRelay} \end{aligned}$$ or coupling to the active relay $$\begin{aligned} \dot {{\bf S}}&= {{\bf f}}({{\bf S}}) + K\, {{\bf Y}}(t-\tau/2),\label{eq:DynActiveRelay}\\ \dot {{\bf Y}}&= {{\bf g}}({{\bf Y}}) + L\, {{\bf S}}(t-\tau/2).\end{aligned}$$ In both cases the stability of the synchronized solution is governed by [Eq.]{}(\[eq:linearA\]). However, the trajectory ${{\bf S}}(t)$ will be different and the synchronized state may thus have different stability properties. Bubbling occurs [@ASH94; @VEN96a] when an invariant set $I$, for example a periodic orbit, in the SM is transversally unstable, while the chaotic attractor in the SM is still transversally stable, [i.e.]{}, the largest transversal Lyapunov exponent of the attractor is negative, $\lambda_{\perp}<0$. In this situation the trajectory can be pushed towards the unstable set by noise and leave the SM. If there is no other attractor present, the trajectory will eventually come back to the SM and the systems will synchronize again.
The point where the invariant set $I$ loses its transverse stability is called a bubbling bifurcation, while the point where the attractor itself becomes unstable is called a blow-out bifurcation. For semiconductor lasers the dynamics of each system is governed by the dimensionless Lang-Kobayashi rate equations [@LAN80b; @ALS96] $$\begin{aligned} \dot{E_j} & = {{\textstyle \frac{1}{2}}}(1+i\alpha)n_j\, E_j + K e^{i\varphi} E_{{{\bf Y}}}(t-\tau/2) + F_j(t)\nonumber \\ T \dot{n_j} & = p-n_j-(1+n_j)\,|E_j|^{2}. \label{eq:laser}\end{aligned}$$ Here, $E_j$ and $E_{{\bf Y}}$ are the complex electric field amplitudes of the $j$th system and the relay, respectively, $n_j$ is the excess carrier density, $\alpha$ is the linewidth enhancement factor, $p$ is the pump current, and the timescale parameter $T=\tau_c/\tau_p$ is the ratio of the carrier ($\tau_c$) and photon $(\tau_p)$ lifetimes. For simplicity we choose the feedback phase $\varphi=0$. Note that in general one could also include coupling phases in [Eq.]{}(\[eq:passive\]). This leads to interference conditions on all phases, which have to be satisfied for isochronous synchronization. In our simulations we account for spontaneous emission noise via a complex Gaussian white random variable $F_j(t)$ with the covariance $\langle F_j(t)\, F_i(t')^{*}\rangle=\beta (n+n_0) \delta_{ij} \delta(t-t')$, where $n_0=10$ is the carrier density at threshold and $\beta=10^{-5}$ is the spontaneous emission factor. Carrier noise has not been taken into account at this level. If the relay is realized through a semitransparent mirror (passive relay), the dynamics within the SM is given by [Eqs.]{}(\[eq:laser\]) with $E_{{\bf Y}}(t-\tau/2)=E_j(t-\tau)$, [i.e.]{}, an effectively decoupled laser.
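The effectively decoupled dynamics inside the SM can be explored with a minimal Euler integrator for Eqs.(\[eq:laser\]) with $E_{{\bf Y}}(t-\tau/2)$ replaced by $E_j(t-\tau)$; the sketch below uses illustrative parameter values and omits the noise term (it is not the solver used for the figures):

```python
alpha, K, p, T, tau = 4.0, 0.1, 1.0, 200.0, 100.0   # illustrative parameters
dt, steps = 0.01, 50000
d = int(round(tau / dt))
E = [0.1 + 0j] * (d + 1)     # field history over one delay interval
n = 0.0                      # excess carrier density
for k in range(d, d + steps):
    dE = 0.5 * (1 + 1j * alpha) * n * E[k] + K * E[k - d]   # noise F_j(t) omitted
    dn = (p - n - (1 + n) * abs(E[k]) ** 2) / T
    E.append(E[k] + dt * dE)
    n += dt * dn
```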
For this configuration we calculate the maximum parallel Lyapunov exponent $\lambda_{||}$ (within the SM) as well as the maximum transversal Lyapunov exponent $\lambda_\perp$ by simulating the dynamics in the SM without noise and applying the method developed in [@FAR82]. Figure \[fig:lyap\]a displays the Lyapunov exponents as a function of the feedback strength $K$. There are two blow-out bifurcations [@OTT94] at $K\approx 0.008$ ($B1$) and at $K \approx 0.09$ ($B2$), where $\lambda_\perp$ changes sign and the chaotic attractor loses its transversal stability. Similar behavior is found for an active relay ([Fig.]{}\[fig:lyap\]b). ![](figures/lyap_and_lyap3){width="90%"} Over a wide range of $K$ ([Fig.]{}\[fig:lyap\]a) in which the attractor is stable and the dynamics is chaotic, we observe bubbling induced by spontaneous emission noise. In these regimes, when the noise is switched off in the simulations, the two lasers stay perfectly synchronized. In the regime with $\lambda_\perp>0$ we observe desynchronization bursts even without noise, [i.e.]{}, the system exhibits on-off intermittency. Figure \[fig:bubble\]a depicts the bubbling behavior for values of $K$ above $B2$, where the laser operates in the CC regime. Figure \[fig:bubble\]b corresponds to a lower pump current, where the synchronized lasers operate in the LFF regime. In this regime bubbling only takes place during the power dropouts. In both cases, when the noise amplitude is decreased, the desynchronization peaks occur less frequently; their maximum height, however, does not decrease. We now relate the desynchronization dynamics to the transverse stability of the ECMs in the SM. These modes organize the dynamics in the SM in the CC and the LFF regime. The ECMs are rotating wave solutions of the form $E(t)=A \exp(i\omega t)$ and $n(t)=n$ with constant values $A$, $\omega$ and $n$.
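Inserting this ansatz into Eqs.(\[eq:laser\]) with feedback $E_{j}(t-\tau)$ and $\varphi=0$ gives $n=-2K\cos(\omega\tau)$ together with the transcendental condition $\omega+K[\sin(\omega\tau)+\alpha\cos(\omega\tau)]=0$. A small root-finding sketch (illustrative parameter values, our helper names) locates the ECM frequencies by bisection:

```python
import math

alpha, K, tau = 4.0, 0.1, 100.0       # illustrative parameters

def g(w):
    # ECM condition: g(w) = 0
    return w + K * (math.sin(w * tau) + alpha * math.cos(w * tau))

def bisect(lo, hi, iters=60):
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

ws = []
grid = [-0.5 + 1e-3 * i for i in range(1001)]   # |omega| <= K*sqrt(1+alpha^2) < 0.5
for lo, hi in zip(grid, grid[1:]):
    if g(lo) * g(hi) < 0:                       # sign change brackets one ECM
        ws.append(bisect(lo, hi))
ecms = [(w, -2 * K * math.cos(w * tau)) for w in ws]
```

Each pair $(\omega,n)$ found this way satisfies $(n/2)^{2}+(\alpha n/2-\omega)^{2}=K^{2}$, i.e., the ECMs lie on an ellipse in the $(\omega,n)$-plane.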
The ECMs are well-studied solutions [@MOR92] of the Lang-Kobayashi equations and are located on an ellipse in the $(\omega, n)$-plane (see inset of [Fig.]{}\[fig:ecmdynamics\]a). The solutions on the top and bottom halves of the ellipse are called modes and antimodes, respectively. ![](figures/cc_bubble) ![](figures/lffbubble) The transverse stability of an ECM is governed by the variational equation (\[eq:linearA\]), where ${{\bf S}}(t)$ is the ECM solution. To determine the stability, we transform the laser equations into a rotating frame [@YAN05] $E_0=E \exp(-i\omega t)$. In these coordinates, an ECM $E=A \exp (i\omega t + i\psi)$ is transformed into a family of fixed points $E_0=A \exp (i\psi)$. Splitting the complex electric field $E_{0j}=x_j+i\,y_j$ and using the vector ${{\bf X}}_j=\left(x_j, y_j, n_j\right)$, [Eqs.]{}(\[eq:laser\]) can be written in the form of [Eq.]{}(\[eq:general\]) and the above analysis applies. The eigenvalues of the Jacobian in the rotating frame then determine the ECM’s transverse stability. Figure \[fig:ecmdynamics\]a displays the position of the ECMs in the $(\omega, n)$-plane and their stability for a choice of parameters. The black trajectory displays the projection of the symmetric variable $n_{{\bf S}}$. ![](figures/cc_and_clm_portrait){width="100%"} The bubbling behavior in the CC regime and the correlation of the desynchronization with the power dropouts in the LFF regime can be understood as follows. In the CC regime the dynamics comprises chaotic itinerancy among the modes and global antimode dynamics [@MUL98] (see [Fig.]{}\[fig:ecmdynamics\]a). The modes involved in the chaotic itinerancy are transversally stable (blue circles). The antimodes, on the other hand, are transversally unstable (red squares). Thus, when the trajectory approaches an antimode, noise can lead to desynchronization and bubbling occurs.
The yellow diamonds in [Fig.]{}\[fig:ecmdynamics\]a mark the onset of desynchronization, showing that bubbling always occurs in the vicinity of the antimodes (independent of the power). Please note that, due to the role of noise, not every approach to an antimode results in a bubbling excursion. In the LFF regime [@SAN94] the dynamics is similar. The intensity buildup process in between power dropouts is characterized by chaotic switching between different attractor ruins (ghosts) of unstable ECMs with a drift towards the ECM with minimal $n$. All ECMs involved in the buildup process are transversally stable and we observe no desynchronization. After a transient time, a power dropout takes place. During the dropout the trajectory collides with an antimode in a crisis. Again, the vicinity of transversally unstable antimodes, rather than the drop in power, leads to bubbling behavior. The transverse stability of the ECMs depends on the laser and coupling parameters as well as on the parameters of the particular ECM. Note that modes and antimodes are not necessarily transversally stable or unstable, respectively. The modes on the lower right-hand side in [Fig.]{}\[fig:ecmdynamics\]a, for instance, are transversally unstable. With decreasing coupling strength $K$, more modes become transversally unstable until the whole chaotic attractor loses its transversal stability. This leads to the blow-out bifurcation $B2$ in [Fig.]{}\[fig:lyap\]. With increasing feedback strength the bubbling occurs less frequently and the average synchronization interval $\Delta$ increases; however, we did not find a transition to a bubbling-free state in a physically reasonable range of $K$. Note that neither $K$ nor the other parameters of our model are *normal* parameters in the sense of Ref. [@TER99]. Thus we do not observe power-law scaling of $\Delta$ as in [@SAU98; @VEN96a].
The parallel Lyapunov exponent $\lambda_{||}$ approaches zero with increasing $K$ and the chaoticity decreases, making this situation less interesting for chaos-based applications. If the elements are coupled via an active relay, the synchronized lasers behave like two delay-coupled lasers (see [Eq.]{}(\[eq:DynActiveRelay\])). If we choose ${{\bf f}}= {{\bf g}}$ and $K=L$, we obtain a system of two identical mutually coupled semiconductor lasers, which has been studied before [@MUL04; @ERZ05; @ERZ06a]. Such a system has rotating wave solutions of the form $E_{{{\bf S}}} (t) = A_{{{\bf S}}} \exp(i\omega t)$, $E_{{{\bf Y}}} (t) = A_{{{\bf Y}}} \exp(i\omega t +i \psi)$, $n_{{{\bf S}}} (t) = n_{{{\bf S}}}$, $n_{{{\bf Y}}}(t)=n_{{{\bf Y}}}$, called compound laser modes (CLMs). Their spectrum is more complex than that of the ECMs: besides the synchronized solutions (which correspond to ECMs), there exist antisymmetric modes, for which the relay and the synchronized solution are in anti-phase ($\psi=\pi$), as well as asymmetric modes where the relay has a different intensity than the outer lasers. The positions of the transversally unstable modes are close to those of the ECMs of a single laser in the $(\omega,n)$ parameter space. Also the dynamics of three identical coupled lasers is similar to the behavior in the presence of a passive relay. Indeed, we find bubbling in both the LFF and CC regimes. In the experiments reported in [@FIS06] all the coupling parameters in the setup are chosen identical, [i.e.]{}, $L=2K$ in [Eqs.]{}(\[eq:general\]) and (\[eq:active\]). Also in this case we observe qualitatively similar laser dynamics, with a trajectory in parameter space coming close to the transversally unstable CLMs. To suppress the bubbling while maintaining strong chaos, we apply a sufficiently larger pump current to the relay laser ($p_{\mbox{\scriptsize relay}}=4.0$) than to the outer lasers ($p=1.0$).
For this configuration we have calculated $\lambda_{||}\approx0.026$, $\lambda_\perp\approx-0.032$, confirming that the system is in the chaotic regime (cf. [Fig.]{}\[fig:lyap\]b). The system still itinerates among the compound laser modes, but there is no global antimode dynamics. Moreover, in contrast to the behavior for the symmetric case $p_{\mbox{\scriptsize relay}}=1.0$, the active relay now suppresses the bubbling and there is no desynchronization (see [Fig.]{}\[fig:ecmdynamics\]b). Inspecting [Fig.]{}\[fig:ecmdynamics\]b, we can conclude that the CLMs involved in the dynamics are indeed transversally stable. If the middle laser is pumped less strongly than the outer ones, the opposite effect is observed. In conclusion, we have demonstrated a mechanism for desynchronization by bubbling in a very general setting of two delay-coupled lasers with either passive or active relay. We have shown that in the CC and LFF regimes the occurrence of bubbling is related to the transverse instability of some of the compound cavity’s antimodes, and that, by tuning the active relay, it is possible to suppress the bubbling. These synchronization properties are decisive for the setup of chaos-synchronization-based applications and provide a strategy for achieving stable synchronization. We thank P. Ashwin, T. Gavrielides, and C. Mirasso for fruitful discussions. O.D. acknowledges the Research Foundation Flanders (FWO-Vlaanderen) for her fellowship and for project support. This work was partially supported by the Belgian Science Policy Office under grant IAP-VI10 “photonics@be”, by the EC Project GABA FP6-NEST contract 043309, and by DFG in the framework of Sfb 555. [10]{} S. Boccaletti, J. Kurths, G. Osipov, D. L. Valladares, and C. S. Zhou, Phys. Rep. [**366**]{}, 1 (2002). A. Pikovsky, M. G. Rosenblum, and J. Kurths, [*Synchronization, A Universal Concept in Nonlinear Sciences*]{} (Cambridge University Press, Cambridge, 2001). L. M. Pecora and T. L. Carroll, Phys. Rev.
Lett. [**64**]{}, 821 (1990). P. Ashwin, J. Buescu, and I. Stewart, Phys. Lett. A [**193**]{}, 126 (1994). J. R. Terry, K. S. Thornburg, D. J. DeShazer, G. D. VanWiggeren, S. Zhu, P. Ashwin, and R. Roy, Phys. Rev. E [**59**]{}, 4036 (1999). M. Sauer and F. Kaiser, Phys. Lett. A [**243**]{}, 38 (1998). D. J. Gauthier and J. C. Bienfang, Phys. Rev. Lett. [**77**]{}, 1751 (1996). J. Mulet, C. R. Mirasso, T. Heil, and I. Fischer, J. Opt. B [**6**]{}, 97 (2004). L. B. Shaw, I. B. Schwartz, E. A. Rogers, and R. Roy, Chaos [**16**]{}, 015111 (2006). E. Klein, N. Gross, M. G. Rosenblum, W. Kinzel, L. Khaykovich, and I. Kanter, Phys. Rev. E [**73**]{}, 066214 (2006). A. S. Landsman and I. B. Schwartz, Phys. Rev. E [**75**]{}, 026201 (2007). I. Fischer, R. Vicente, J. M. Buld[ú]{}, M. Peil, C. R. Mirasso, M. C. Torrent, and J. Garc[í]{}a-Ojalvo, Phys. Rev. Lett. [**97**]{}, 123902 (2006). R. Vicente, C. R. Mirasso, and I. Fischer, Opt. Lett. [**32**]{}, 403 (2004). V. Ahlers, U. Parlitz, and W. Lauterborn, Phys. Rev. E [**58**]{}, 7208 (1998). S. C. Venkataramani, B. R. Hunt, E. Ott, D. J. Gauthier, and J. C. Bienfang, Phys. Rev. Lett. [**77**]{}, 5361 (1996). R. Lang and K. Kobayashi, IEEE J. Quantum Electron. [**16**]{}, 347 (1980). P. M. Alsing, V. Kovanis, A. Gavrielides, and T. Erneux, Phys. Rev. A [**53**]{}, 4429 (1996). J. D. Farmer, Physica D [**4**]{}, 366 (1982). E. Ott and J. C. Sommerer, Phys. Lett. A [**188**]{}, 39 (1994). J. M[ø]{}rk, B. Tromborg, and J. Mark, IEEE J. Quantum Electron. [**28**]{}, 93 (1992). S. Yanchuk, Math. Meth. Appl. Sci. [**28**]{}, 363 (2005). J. Mulet and C. R. Mirasso, Phys. Rev. E [**59**]{}, 5400 (1999). T. Sano, Phys. Rev. A [**50**]{}, 2719 (1994). H. Erzgr[ä]{}ber, D. Lenstra, B. Krauskopf, E. Wille, M. Peil, I. Fischer, and W. Els[ä]{}[ß]{}er, Opt. Commun. [**255**]{}, 286 (2005). H. Erzgr[ä]{}ber, B. Krauskopf, and D. Lenstra, SIAM J. Appl. Dyn. Syst. [ **5**]{}, 30 (2006).
--- abstract: 'It is now practically the norm for data to be very high dimensional in areas such as genetics, machine vision, image analysis and many others. When analyzing such data, parametric models are often too inflexible while nonparametric procedures tend to be non-robust because of insufficient data on these high dimensional spaces. It is often the case with high-dimensional data that most of the variability tends to be along a few directions, or more generally along a much lower dimensional submanifold of the data space. In this article, we propose a class of models that flexibly learns about this submanifold and its dimension while simultaneously performing dimension reduction. As a result, density estimation is carried out efficiently. When performing classification with a large predictor space, our approach allows the category probabilities to vary nonparametrically with a few features expressed as linear combinations of the predictors. As opposed to many black-box methods for dimensionality reduction, the proposed model is appealing in having clearly interpretable and identifiable parameters. Gibbs sampling methods are developed for posterior computation, and the methods are illustrated in simulated and real data applications.' author: - | Abhishek Bhattacharya\ Indian Statistical Institute\ Kolkata India\ abhishek@isical.ac.in - | Garritt Page\ Department of Statistical Science\ Duke University\ page@stat.duke.edu - | David Dunson\ Department of Statistical Science\ Duke University\ dunson@stat.duke.edu bibliography: - 'reference.bib' title: Density Estimation and Classification via Bayesian Nonparametric Learning of Affine Subspaces --- [[**keywords**]{}: Dimension reduction; Classifier; Variable selection; Nonparametric Bayes]{} Introduction {#s0} ============ Data that are generated from experiments or studies carried out in areas such as genetics, machine vision, and image analysis (to name a few) are routinely high dimensional.
Because such data sets have become so commonplace, designing data-efficient inference techniques that scale to massive dimensional Euclidean and even non-Euclidean spaces has attracted considerable attention in the statistical and machine learning literature. When dealing with high dimensional data, it is typically the case that parametric models are too rigid to explain all the variability present in the data. Conversely, flexible nonparametric approaches suffer from the well known curse of dimensionality. With this in mind, a common approach is to make procedures more scalable to high dimensions by learning a lower dimensional subspace near which the data are concentrated. This approach is supported by the success of mixture models with a few components in fitting high-dimensional data. In particular, consider a mixture of $N$ Gaussian kernels, $\sum_{j=1}^N \pi_jN_m(\cdot;\mu_j,{\sigma}^2I_m)$, $\mu_j\in \Re^m$. The $k=N-1$ largest eigenvalues of the covariance matrix of this type of density will typically be very large, while the remaining $m-k$ eigenvalues will all be equal and relatively much smaller. We may visualize such data as lying close to some affine $k$-dimensional subspace of $\Re^m$ containing the mean and having the $k$ corresponding eigenvectors as its directions. If we knew that subspace, we could model the data projected onto that subspace with a nonparametric density model, while using some simple parametric distribution on the orthogonal residual vector. Robustness would be attained by fitting a flexible model on only a selected few coordinates. There is a large literature on the estimation of Euclidean subspaces, affine subspaces, and manifold subsets. Many procedures are algorithmic. Elhamifar and Vidal [@elhamifar] propose an algorithmic method of clustering data that lie close to multiple affine subspaces. See the references therein for a nice overview of algorithmic approaches.
Because such methods are deterministic, no measures of uncertainty are available. A probabilistic modeling approach is proposed by Chen *et al.* [@chen]. They employ a fully Bayesian model for density estimation of high dimensional data that reside close to a lower dimensional subregion (possibly a manifold) of unknown dimension. This subregion is approximated using a nonparametric Bayes mixture of factor analyzers in which Dirichlet and beta processes are employed to simultaneously allow uncertainty in the number of mixture components, the number of factors in each component and the locations of zeros in the loadings matrix. Although their methodology is flexible, it is very much a complex and over-parametrized “black box” leading to challenging computation. We propose a fully Bayesian procedure that very flexibly and uniquely identifies a lower dimensional affine subspace in a coherent modeling framework. After having identified the subspace and its dimension we model the coordinates of the orthogonal projection of the data onto that subspace using an infinite mixture of Gaussians while independently using a zero mean Gaussian to model the data component orthogonal to that subspace. Among all possible coordinate choices, we prefer isometric coordinates (those which preserve the geometry of the space). To obtain such coordinates, an orthogonal basis for the subspace must be employed which will require working on the Stiefel manifold (the space of all such basis matrices). In addition to interpretability and identifiability, advantages to using an orthogonal basis include equivalence of matrix inversion and transpose and faster MCMC convergence. We do not limit the cluster contours to be homogeneous, but use a singular value decomposition type sparse representation for the kernel covariance. By doing so, we avert the problem of dealing with massive matrices and yet make the model highly flexible. 
An appealing feature of our methodology is that it is not a “black box"; rather, nice interpretations accompany the model parameters. For example, when estimating the affine subspace, which is proved to be unique, concern lies in estimating the orthogonal projection matrix associated with that space, and its orthogonal shift from the origin. Indeed, under our setting, the subspace turns out to be the $k$-principal subspace for the distribution, $k$ being the subspace dimension. In this regard, the methodology developed here provides a coherent extension of the Principal Component Analysis (PCA) of Hoff [@hoff1] to a nonparametric setting. The estimation of the projection matrix and orthogonal shift are carried out explicitly under appropriate loss functions. We also consider building efficient classifiers that entertain a high dimensional feature space. The idea is to seek the minimal subspace of the feature space such that the response depends on the predictors only through their projection onto that subspace. There have been recent developments in the machine learning and statistical communities with regard to building classifiers in the presence of a high dimensional feature space. Sun *et al.* [@YijunSun] propose a classifier that essentially breaks a complex nonlinear problem into a set of local linear problems and scales nicely to a very high dimensional space. They also provide a nice review of algorithmic procedures for building classifiers, most of which are black boxes in which estimation of a principal subspace is not entertained. Recently, Cucala *et al.* [@cucala] proposed a probabilistic perspective on $k$-nearest neighbor classifiers. However, apart from not scaling well to a high dimensional feature space, the minimal subspace of the feature space is not estimated. Estimating a minimal subspace of a high dimensional feature space has been addressed in a regression setting.
Tokdar *et al.* [@tokdar] model the conditional distribution of a response given the minimal subspace directly with a Gaussian process. Recently, Reich *et al.* [@reich] propose a method of sufficient dimension reduction by modeling a conditional distribution directly after placing a prior distribution on the minimal subspace (which they call a central subspace). See the references therein for frequentist approaches to estimating this subspace. Hannah *et al.* [@hannah] use Dirichlet process mixtures to flexibly model the relationship between a set of features and a response in a generalized linear model framework. Shahbaba and Neal [@shahbaba] focus on Dirichlet process mixture models in a nonlinear modeling framework. We focus on modeling the joint distribution so that, given the subspace, the response and the projection of the features onto that subspace follow a nonparametric infinite mixture model, while the feature component orthogonal to the subspace follows a parametric model independent of the response and the projection. Dependence between the response and features is induced through the mixture distribution. The remainder of this article is organized as follows. Section 2 provides some preliminaries, and Section 3 details the class of models to be used for density estimation along with theoretical results dealing with large prior support and strong posterior consistency. In Section 4 we investigate the identifiability of model parameters and give details of their estimation. Section 5 details computational strategies, while Section 6 outlines a small simulation study and examples. In Section 7 we develop an efficient classifier and provide some examples and a small simulation study, in addition to briefly introducing ideas with regard to regression. We finish with some concluding remarks in Section 8.
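The eigenvalue structure invoked above is easy to verify numerically: for an equal-weight mixture of $N$ spherical Gaussians, the covariance is ${\sigma}^2 I_m$ plus a between-means matrix of rank at most $N-1$, so at most $N-1$ eigenvalues exceed ${\sigma}^2$ while the remaining $m-N+1$ equal ${\sigma}^2$ exactly. A minimal sketch in Python/NumPy (the dimension, means, and ${\sigma}$ are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, sig = 20, 4, 0.1

# N well-separated component means in R^m (equal mixture weights)
mus = 5.0 * rng.standard_normal((N, m))

# Covariance of the mixture sum_j (1/N) N_m(mu_j, sig^2 I_m):
# sig^2 I_m plus the between-means covariance, whose rank is at most N-1
mu_bar = mus.mean(axis=0)
between = (mus - mu_bar).T @ (mus - mu_bar) / N
cov = sig**2 * np.eye(m) + between

lam = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending eigenvalues
print(lam[:N - 1])    # k = N - 1 large eigenvalues
print(lam[N - 1:])    # remaining m - k eigenvalues, all equal to sig^2
```

The top $k = N-1$ eigenvectors of `cov` span the directions of the affine subspace around which such data concentrate.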
Preliminaries ============= A $k$-dimensional affine subspace of $\Re^m$ (which is a $k$-dimensional Euclidean manifold) can be expressed as $$S = \{ Ry + {\theta}\colon y \in \Re^m \}$$ with $R$ being an $m \times m$ rank-$k$ *projection matrix* (it satisfies $R = R' = R^2$, rank($R) = k$) and ${\theta}\in \Re^m$ satisfying $R{\theta}= 0$. Notice that there is a one-to-one correspondence between the subspace $S$ and the pair $(R,{\theta})$, with ${\theta}$ being the *projection* of the origin into $S$ and $R$ the projection matrix of the shifted linear subspace $$L = S - {\theta}= \{ Ry \colon y \in \Re^m \}.$$ The projection of any $x\in \Re^m$ into $S$ is defined as the $x_0 \in S$ satisfying $\|x - x_0\| = \min\{ \|x - y \|: y \in S \}$, where $\|\cdot\|$ denotes the Euclidean norm. For any affine subspace $S$ as defined above, the solution turns out to be $x_0 = Rx + {\theta}$. Similarly, the projection of $x \in \Re^m$ into $L$ is $x_0^* = Rx$, hence the name projection matrix for $R$. We denote the projection of $x\in \Re^m$ into $S$ as $Pr_S(x)$. Each $x \in \Re^m$ can be given coordinates ${\tilde}x \in \Re^k$ such that $x = U{\tilde}x + {\theta}$, where $U$ is a matrix whose columns $\{U_1,\ldots, U_k\}$ form a basis of the column space of $R$. If $U$ is chosen to be orthonormal (i.e., $U'U = I_k$ and $R=UU'$), then the coordinates ($\tilde{x}$) are *isometric*. That is, they preserve the inner product on $S$ (and hence volume and distances). With such a basis, the projection $Pr_S(x)$ of an arbitrary $x \in \Re^m$ into $S$ has isometric coordinates $U'x$. Thus, $U$ gives $k$ mutually perpendicular ‘directions’ to $S$, while ${\theta}$ may be viewed as the ‘origin’ of $S$. We will call ${\theta}$ the *origin* and $U$ an *orientation* for $S$. The *residual* of $x\in \Re^m$ (which we denote as $R_S(x) = x - Pr_S(x) = x - Rx - {\theta}$) lies on a linear subspace that is perpendicular to $L$.
That is, $R_S(x) \in S^\perp$ where $$S^\perp = \{ (I-R)y \colon y \in \Re^m \}.$$ Notice that the projection matrix of $S^\perp$ is $I-R$. Now if we let $V$ denote an orthonormal basis for the column space of $I-R$ (i.e., $V'V=I_{m-k}$, $VV' = I-R$), then isometric residual coordinates are given by $V'x \in \Re^{m-k}$. For a sample lying close to such a subspace $S$, it is natural to assume that the data residuals are centered around $0$ with low variability, while the data projected into $S$ come from a possibly multi-modal distribution supported on $S$. Figure \[2Dpic\] illustrates such a sample cloud. The observations are drawn from a two-component mixture of bivariate normals with cluster centers $(1,0)$ and $(0,1)$ and bandwidth 0.5. As a result they are clustered around the subspace (line) $x+y =1$. For a specific sample point $x$, $Pr_S(x)$, $R_S(x)$, and ${\theta}$ are highlighted. \[fig1\] ![Graphical representation of the affine subspace ($S$), the orthogonal shift ($\theta$), and the projection of a point into $S$ (these are the solid dots with particular emphasis given to $Rx + \theta$). []{data-label="2Dpic"}](2Dpicture3.pdf) If we let $Q$ be a distribution on $\Re^m$ with finite second order moments, then for $d \le m$ the $d$-*principal affine subspace* of $Q$ is the minimizer of the following risk function $$\begin{aligned} \label{e1} R(S) = \int_{\Re^m} \| x - Pr_S(x) \|^2 Q(dx),\end{aligned}$$ with the minimization carried out over all $d$-dimensional affine subspaces $S$. The minimum value of this risk turns out to be $\sum_{j=d+1}^m {\lambda}_j$, where ${\lambda}_1 \ge \ldots \ge {\lambda}_m$ are the ordered eigenvalues of the covariance of $Q$. In addition, a unique minimizer exists if and only if ${\lambda}_d > {\lambda}_{d+1}$.
If this is indeed the case, then the $d$-principal affine subspace ($S_o$) has projection matrix $R = UU'$ (here $U$ is any orthonormal basis for the subspace spanned by a set of $d$ independent eigenvectors corresponding to the first $d$ eigenvalues) and origin ${\theta}= (I-R)\mu$ (with $\mu$ being the mean of $Q$). Notice that when $d=0$, $S_o$ is the point set $\{\mu\}$. In the case that $d$ is unknown, we can find an optimal value of $d$ by considering $$\begin{aligned} \label{e3} R(d,S) = f(d) + \int_{\Re^m} \| x - Pr_S(x) \|^2 Q(dx), \ 0 \le d \le m\end{aligned}$$ as a risk function for some fixed increasing convex function $f$. For $f$ linear, say, $f(d) = ad$, $a> 0$, the risk has a unique minimizer if and only if ${\lambda}_{d+1} < a < {\lambda}_d$ for some $d$, with ${\lambda}_0 = \infty$ and ${\lambda}_{m+1} = 0$. The minimizing dimension $d_o$ is then that value of $d$, while the optimal space $S_o$ is the $d_o$-principal affine subspace. We will call $d_o$ the *principal dimension* of $Q$. For the observations in Figure \[2Dpic\], the principal dimension is $d_o = 1$ with principal subspace $$\begin{aligned} S_o = \left\{ \left(\begin{array}{cc} 1/2 & -1/2 \\ -1/2 & 1/2 \\ \end{array} \right)x + \left(\begin{array}{c} 1/2 \\ 1/2 \\ \end{array} \right) :x \in \Re^2 \right\}.\end{aligned}$$ Before detailing general modeling strategies, we introduce notation that will be used throughout. By ${\mathcal}{M}(S)$ we denote the space of all probabilities on the space $S$. $M(m,k)$ will denote real matrices of order $m\times k$ (with $M(m)$ denoting the special case $m=k$), and $M^+(m)$ will denote the space of all $m\times m$ positive definite matrices. For $U\in M(m,k)$, ${\mathcal}{C}(U)$ and ${\mathcal}{N}(U)$ will represent the column and null space of $U$, respectively. We will represent the space of all $m\times m$ rank-$k$ projection matrices by $P_{k,m}$.
That is, $$P_{k,m} = \{ R \in M(m)\colon R=R'=R^2, {\text}{rank}(R)=k \}.$$ One important manifold referred to in this paper is the Stiefel manifold (denoted by $V_{k,m}$), which is the space whose points are $k$-frames in $\Re^m$ (here a $k$-frame refers to a set of $k$ orthonormal vectors in $\Re^m$). That is, $$V_{k,m} = \{A \in M(m,k): A'A = I_k\}.$$ We denote the orthogonal group $\{A\in M(m)\colon A'A = I_m\}$ by $O(m)$, which is $V_{m,m}$. The space $V_{k,m}$ is a compact non-Euclidean Riemannian manifold. Because $M(m,k)$ is embedded in Euclidean space, it inherits the Riemannian metric tensor, which can be used to define the volume form, which in turn can be used as the base measure to construct a parametric family of densities. Several parametric densities have been studied on this space, and exact or MCMC sampling procedures exist. For details, see Chikuse [@chikuse2]. One important density which we will be using as a prior is the Bingham-von Mises-Fisher density, which has the expression $$\begin{aligned} BMF(x;A,B,C) \propto {\mathrm}{etr}(A' x + C x' B x).\end{aligned}$$ The parameters are $A \in M(m,k)$, $B \in M(m)$ symmetric and $C \in M(k)$, while etr denotes exponential trace. As a special case, we obtain the uniform distribution, which has the constant density $1/{\text}{Vol}(V_{k,m})$. Density model {#s2} ============== Consider a random variable $X$ in $\Re^m$. Let there be a $k$ dimensional affine subspace $S$, $0\le k \le m$, with projection matrix $R$ and origin ${\theta}$ such that the projection of $X$ into this subspace follows a location mixture density on the subspace (with respect to its volume form) given by $$Y = Pr_S (X) \sim \int_S (2\pi)^{-k/2} |U'AU|^{1/2} \exp \{-\frac{1}{2} (y - w)'A(y-w)\} Q(dw) \\$$ where $y \in S$ is the projection of $x$, with parameters $Q \in {\mathcal}{M}(S)$, $U \in V_{k,m}$, and $A$ an $m\times m$ positive semi-definite (p.s.d.) matrix such that $U'AU \in M^+(k)$.
When $k=0$, $S$ denotes the point set $\{{\theta}\}$ and $Y={\theta}$. Note that the density expression depends on $U$ only through $UU'$. A general choice for $A$, besides being positive definite (p.d.), could be $A = U_0 \Sigma^{-1}_0 U_0'$ for some specific orientation $U_0$ and p.d. $\Sigma_0 \in M^+(k)$. As a result, the isometric coordinates $U_0'X$ of $Pr_S (X)$ follow a non-parametric Gaussian mixture model on $\Re^k$ given by $$\begin{aligned} \label{e4} U_0'X \sim \int_{\Re^k} N_k(\cdot;\mu,\Sigma_0)P(d\mu),\ P \in {\mathcal}{M}(\Re^k).\end{aligned}$$ Here $\mu = U'_0w$ for $w \in S$. Independently, let the residual $R_S(X)$ follow a mean zero homogeneous density on $S^{\perp}$ given by $$R_S(X) \sim (2\pi)^{-(m-k)/2}{\sigma}^{-(m-k)}\exp\{-\frac{\|x\|^2}{2{\sigma}^2}\},$$ $x\in S^{\perp}$, with parameter ${\sigma}> 0$. If $k=m$, then $S^{\perp} = \{0\}$ and $R_S(X) = 0$. As a result, with any orientation $V \in V_{m-k,m}$ for $S^{\perp}$, the isometric coordinates $V'X$ of $R_S(X)$ follow the Gaussian density $$\label{e5} V'X \sim N_{m-k} (\cdot;V'{\theta},{\sigma}^2I_{m-k}).$$ Combining the last two displayed models gives the full density of $X$ as $$\begin{aligned} X \sim f(x;{\Theta}) & = \int_{\Re^k} N_m(x; \phi(\mu), {\Sigma})P(d\mu), \label{e7}\\ \phi(\mu) = U_0\mu + {\theta}, \ {\Sigma}& = U_0(\Sigma_0 - {\sigma}^2 I_k)U_0' + {\sigma}^2 I_m, \label{e90}\end{aligned}$$ with parameters ${\Theta}= (k,U_0,{\theta}, \Sigma_0, {\sigma}, P)$. Here $U_0 \in V_{k,m}$ and ${\theta}\in\Re^m$ satisfies $U_0'{\theta}= 0$. The affine subspace $S$ has projection matrix $R = U_0 U_0'$ and origin ${\theta}$. For $k=0$, $f(x;{\Theta}) = N_m(x; {\theta}, {\sigma}^2I_m)$. Using a flexible multimodal density model for a few data coordinates (which are chosen using a suitable basis) and an independent centered Gaussian structure on the remaining coordinates allows efficient density estimation on very high dimensional spaces.
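The model just described has a simple generative form: draw $\mu \sim P$, draw the coordinates along $S$ from $N_k(\mu,\Sigma_0)$, draw the residual coordinates from $N_{m-k}(V'{\theta},{\sigma}^2 I_{m-k})$, and map back through the orthonormal bases. A sketch of this sampler, using a finite discrete $P$ as a stand-in for a draw from a nonparametric prior (all sizes and parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m, k = 5, 2

# A random orientation for S: the QR factorization of a Gaussian matrix
# gives a (Haar-uniform) orthogonal matrix; its first k columns lie on V_{k,m}.
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
U0, V = Q[:, :k], Q[:, k:]             # V spans S-perp, V'V = I_{m-k}

theta = V @ np.array([1.0, 2.0, 0.0])  # origin of S; U0' theta = 0 by construction
Sigma0 = np.diag([4.0, 1.0])           # within-cluster covariance on S
sigma = 0.1                            # residual scale

# A finite discrete P with three atoms, standing in for a draw from a
# nonparametric prior such as a Dirichlet process
atoms = np.array([[-3.0, 0.0], [0.0, 3.0], [3.0, -3.0]])
weights = np.array([0.5, 0.3, 0.2])

def sample(n):
    mu = atoms[rng.choice(len(atoms), size=n, p=weights)]          # mu ~ P
    y = mu + rng.multivariate_normal(np.zeros(k), Sigma0, size=n)  # coords on S
    z = V.T @ theta + sigma * rng.standard_normal((n, m - k))      # residual coords
    return y @ U0.T + z @ V.T          # X = U0 y + V z

X = sample(2000)
# The residual coordinates V'X concentrate around V'theta with spread sigma
print(np.abs(X @ V - theta @ V).std(axis=0))
```

Reading off $V'X$ and $U_0'X$ from the simulated `X` recovers, respectively, the tight Gaussian residual and the three-cluster mixture on the subspace.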
A common choice of nonparametric prior on $P$ is a full-support discrete model, such as a Dirichlet process, which allows clustering of the data around $S$. An alternative way to identify the intercept ${\theta}$ would be to set it equal to ${\mathrm}{E}(X)$. However, this would require the prior on $P$ to be such that $\bar\mu \equiv \int \mu P(d\mu)=0$, making the Dirichlet process prior inappropriate. For this reason, we set $\theta$ to be the origin of $S$ instead. With $\Sigma_0$ p.d. and ${\sigma}^2 > 0$, the within-cluster covariance ${\Sigma}$ lies in $M^+(m)$ and has a sparse representation without being homogeneous. The residual variance ${\sigma}^2$ dictates how “close" $X$ lies to $S$, with $\sigma^2=0$ implying that $X \in S$. In the mixture model above, one may mix across $\Sigma_0$ by replacing $P(d\mu)$ by $P(d\mu \ d\Sigma_0)$ and achieve more generality. To make the model even more sparse, without loss of generality, we can take $\Sigma_0$ to be a p.d. diagonal matrix. To prove that we do not lose any generality, consider a singular value decomposition (s.v.d.) of a general $\Sigma_0$, say $\Sigma_0 = ODO'$, $O \in O(k)$, and replace $\Sigma_0$ by the diagonal $D$ and $U_0$ by $U_0O$. If $P$ is appropriately transformed, then the model is unaffected. With a diagonal $\Sigma_0$, the within-cluster covariance has $k$ eigenvalues from $\Sigma_0$ and the rest all equal to ${\sigma}^2$. The columns of $U_0$ are the orthonormal eigenvectors corresponding to $\Sigma_0$. It is easy to check that $S$ is the $k$-principal subspace for the model if and only if $\Sigma_0 + \int_{\Re^k} (\mu - \bar\mu)(\mu - \bar\mu)'P(d\mu) > {\sigma}^2 I_k$. Here $A_1 > A_2$ refers to $A_1-A_2$ being p.d. This holds, for example, when $\Sigma_0 \ge {\sigma}^2 I_k$ and $P$ is non-degenerate. Further, under the model, $k$ is the principal dimension of $X$ for a range of risk functions of the earlier form with linear $f$.
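These claims can be checked empirically with the eigenvalue recipe from the preliminaries: eigendecompose the sample covariance, keep the top $d$ eigenvectors to form $R$, set ${\theta}= (I-R)\bar{x}$, and minimize $ad + \sum_{j>d}{\lambda}_j$ over $d$. A sketch on data clustered around the line $x+y=1$, as in Figure \[2Dpic\] (the noise level and penalty $a$ are illustrative choices):

```python
import numpy as np

def principal_affine_subspace(X, d):
    """d-principal affine subspace of the empirical distribution of the rows
    of X: projection matrix R = UU' and origin theta = (I - R) mu."""
    mu = X.mean(axis=0)
    lam, vecs = np.linalg.eigh(np.cov(X, rowvar=False))  # ascending order
    U = vecs[:, ::-1][:, :d]                             # top-d eigenvectors
    R = U @ U.T
    return R, (np.eye(X.shape[1]) - R) @ mu

def principal_dimension(X, a):
    """Minimize the risk a*d + sum_{j>d} lambda_j over d."""
    lam = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]
    return int(np.argmin([a * d + lam[d:].sum() for d in range(len(lam) + 1)]))

# Two clusters around the line x + y = 1, as in the figure
rng = np.random.default_rng(0)
centers = np.array([[1.0, 0.0], [0.0, 1.0]])
X = centers[rng.integers(0, 2, size=500)] + 0.05 * rng.standard_normal((500, 2))

d_o = principal_dimension(X, a=0.01)
R, theta = principal_affine_subspace(X, d_o)
# d_o = 1, R close to [[1/2, -1/2], [-1/2, 1/2]], theta close to (1/2, 1/2)
```

The penalty must satisfy ${\lambda}_{d+1} < a < {\lambda}_d$ for the chosen dimension to be unique, which holds comfortably here since the residual eigenvalue is tiny compared with the spread along the line.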
Weak Posterior Consistency {#s2.1} -------------------------- Consider a mixture density model $f$ as in . Let ${\mathcal}{D}(\Re^m)$ denote the space of all densities on $\Re^m$. Let $\Pi_f$ denote the prior induced on ${\mathcal}{D}(\Re^m)$ through the model and suitable priors on the parameters. Theorem \[t1\] shows that $\Pi_f$ satisfies the Kullback-Leibler (KL) condition at the true density $f_t$ on $\Re^m$. That is, for any ${\epsilon}> 0$, $\Pi_f(K_{\epsilon}(f_t)) > 0$, where $K_{\epsilon}(f_t) = \{ f \colon KL(f_t;f) < {\epsilon}\}$ denotes a ${\epsilon}$-sized KL neighborhood of $f_t$ and $KL(f_t;f) = \int\log\frac{f_t}{f}f_t dx$ is the KL divergence. As a result, using the Schwartz theorem [@schwartz], weak posterior consistency follows. That is, given a random sample ${\mathbf}{X}_n=$ $X_1,\ldots,X_n$ i.i.d. $f_t$, the posterior probability of any weak open neighborhood of $f_t$ converges to 1 a.s. $f_t$. Let $p(k)$ denote the prior distribution of $k$. We consider discrete priors that are supported on the set $\{0,\ldots,m\}$. Let $\pi_1(U_0,{\theta}|k)$ denote some joint prior distribution of $U_0$ and ${\theta}$ that has support on $\{(U_0,{\theta})\in V_{k,m}\times\Re^m: U_0'{\theta}= 0 \}$. As previously recommended, we consider a diagonal ${\Sigma}_0 = diag({\sigma}_1^2, \ldots, {\sigma}_k^2)$ and set a joint prior on the vector $\bm{{\sigma}} = ({\sigma},{\sigma}_1,\ldots,{\sigma}_k) \in (\Re^+)^{k+1}$ that we denote with $\pi_2(\bm{{\sigma}}|k)$. Further, we assume that parameters ($U_0$, ${\theta}$), $\bm{{\sigma}}$, and $P$ are jointly independent given $k$. That said, Theorem \[t1\] can be easily adapted to other prior choices. We also consider the following reasonable conditions on the true density $f_t$. - $0 < f_t(x) < A$ for some constant $A$ for all $x\in \Re^m$. - $|\int \log\{f_t(x)\} f_t(x)dx| < \infty$. 
- For some ${\delta}> 0$, $\int \log\frac{f_t(x)}{f_{{\delta}}(x)}f_t(x)dx < \infty$, where $f_{{\delta}}(x) = \mathop{\inf}_{y:\|y-x\|<{\delta}} f_t(y)$. - For some ${\alpha}> 0$, $\int \|x\|^{2(1+{\alpha})m}f_t(x)dx < \infty$. \[t1\] Set the prior distributions for $k$, ($U_0$, ${\theta}$), $\bm{{\sigma}}$, and $P$ to those described previously such that $p(m)>0$, $\pi_2(\Re^+\times (0,{\epsilon})^m | k=m)>0$ for any ${\epsilon}>0$, and the conditional prior on $P$ given $k=m$ contains $P_{f_t}$ in its weak support. Then under assumptions [**A1**]{}-[**A4**]{} on $f_t$, the KL condition is satisfied by $\Pi_f$ at $f_t$. The result follows if it can be proved that $\Pi_f(K_{\epsilon}(f_t)| k=m,U_0)>0$ for all ${\epsilon}>0$ and $U_0 \in O(m)$, because then $$\begin{aligned} \Pi_f(K_{\epsilon}(f_t)) \ge p(m)\int_{O(m)} \Pi_f(K_{\epsilon}(f_t)| k=m,U_0)d\pi_1(U_0|k=m) > 0.\end{aligned}$$ Now, given $k=m$ and $U_0$, the density can be expressed as $$\begin{aligned} \label{e2} f(x; Q, {\Sigma}) = \int_{\Re^m} N_m(x; \nu , {\Sigma}) Q(d\nu),\end{aligned}$$ with $Q = P\circ \phi^{-1}$. Here $\phi(x) = U_0x$, and ${\Sigma}= U_0 {\Sigma}_0 U_0'$. The isomorphism $\phi: \Re^m {\rightarrow}\Re^m$ being continuous and surjective ensures the same for the mapping $P \mapsto Q$. This in turn ensures that, under the Theorem assumptions on the prior, the priors on $P$ and $\bm{{\sigma}}$ induce a prior on $Q$ that contains $P_{f_t}$ in its weak support and an independent prior on ${\Sigma}$ which induces a prior on its maximum eigenvalue that contains $0$ in its support. Then, with a slight modification to the proof of Theorem 2 in Wu and Ghosal [@WuGhosal2010], under assumptions [**A1-A4**]{} on $f_t$, we can show that $f_t$ is in the KL support of $\Pi_f$.
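The mechanism behind the KL condition — a location mixture whose atoms track draws from $f_t$ and whose common bandwidth shrinks approximates $f_t$ arbitrarily well in KL divergence — can be illustrated with a one-dimensional Monte Carlo estimate of $KL(f_t;f)$. The target density, sample sizes, and bandwidths below are illustrative choices, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def normal_pdf(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# True density: an equal two-component normal mixture on the real line
def f_t(x):
    return 0.5 * normal_pdf(x, -2.0, 1.0) + 0.5 * normal_pdf(x, 2.0, 1.0)

def sample_ft(n):
    return np.where(rng.random(n) < 0.5, -2.0, 2.0) + rng.standard_normal(n)

def kl_to_mixture(n_atoms, s, n_mc=4000):
    """Monte Carlo estimate of KL(f_t; f), where f is the location mixture
    with kernel N(.; nu, s^2) and atoms nu drawn from f_t."""
    atoms = sample_ft(n_atoms)
    x = sample_ft(n_mc)                  # x ~ f_t for the Monte Carlo average
    f_hat = normal_pdf(x[:, None], atoms[None, :], s).mean(axis=1)
    return np.mean(np.log(f_t(x)) - np.log(f_hat))

# More atoms and a smaller bandwidth shrink the KL divergence
print(kl_to_mixture(50, 1.0), kl_to_mixture(2000, 0.3))
```

The first estimate is dominated by the oversmoothing of the bandwidth-1 kernel; the second is close to zero, in line with mixtures of this form sitting in every KL neighborhood of $f_t$.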
Strong Posterior Consistency {#s2.2} ---------------------------- Using the density model for $f_t$, Theorem \[t4\] establishes strong posterior consistency, that is, the posterior probability of any total variation (or $L_1$ or strong) neighborhood of $f_t$ converges to 1 almost surely or in probability, as the sample size tends to infinity. The priors on the parameters are chosen as in Section \[s2.1\]. To be more specific, the conditional prior on $P$ given $k$ ($k\ge 1$) is chosen to be a Dirichlet process $DP(w_k P_k)$ ($w_k>0$, $P_k \in {\mathcal}{M}(\Re^k)$). The proof requires the following three lemmas. The proof of Lemma \[l3\] can be found in [@barron], while the proofs of Lemmas \[l1\] and \[l2\] are provided in the appendix. In what follows $B_{r,m}$ refers to the set $\{ x \in \Re^m \colon \|x\| \le r \}$. For a subset ${\mathcal}{D}$ of densities and ${\epsilon}>0$, the $L_1$-metric entropy $N({\epsilon},{\mathcal}{D})$ is defined as the logarithm of the minimum number of $\epsilon$-sized (or smaller) $L_1$ subsets needed to cover ${\mathcal}{D}$. \[l3\] Suppose that $f_t$ is in the KL support of the prior $\Pi_f$ on the density space ${\mathcal}{D}(\Re^m)$. For every ${\epsilon}>0$, if we can partition ${\mathcal}{D}(\Re^m)$ as ${\mathcal}{D}_n^{{\epsilon}} \cup {\mathcal}{D}_n^{{\epsilon}c}$ such that $N({\epsilon},{\mathcal}{D}_n^{{\epsilon}})/n {\longrightarrow}0$ and $Pr(D_n^{{\epsilon}c}| {\mathbf}{X}_n) {\longrightarrow}0$ a.s. or in probability $P_{f_t}$, then the posterior probability of any $L_1$ neighborhood of $f_t$ converges to 1 a.s. or in probability $P_{f_t}$. \[l1\] For positive sequences $h_n {\rightarrow}0$ and $r_n{\rightarrow}\infty$ and ${\epsilon}>0$, define a sequence of subsets of ${\mathcal}{D}(\Re^m)$ as $${\mathcal}{D}_n^{\epsilon}= \{ f(\cdot;{\Theta}): {\Theta}\in H_n^{\epsilon}\}, \ H_n^{\epsilon}= \{ {\Theta}\colon \min(\bm{{\sigma}}) \ge h_n, \|{\theta}\| \le r_n, P(B_{r_n,k}^c) < {\epsilon}\}$$ with $f(\cdot;{\Theta})$ the mixture density defined above.
Set a prior on the density parameters as in Section \[s2.1\]. Assume that $supp(\pi_2(\cdot|k)) \subseteq [0,A]^{k+1}$ for some $A>0$ for all $0\le k \le m$. Then $N({\epsilon},{\mathcal}{D}_n^{\epsilon}) \le C (r_n/h_n)^m$ where $C$ is a constant independent of $n$. \[l2\] Set a prior as in Lemma \[l1\] with a $DP(w_kP_k)$ prior on $P$ given $k$, $k\ge 1$. Assume that the base probability $P_k$ has a density $p_k$ which is positive and continuous on $\Re^k$. Assume that there exist positive sequences $h_n {\rightarrow}0$ and $r_n{\rightarrow}\infty$ such that $${\bf B1}: \lim_{n{\rightarrow}\infty}n{\delta}_{kn}^{-1}h_n^{-k}\exp(-r_n^2/8A^2) = 0$$ holds where $${\delta}_{kn} = \inf\{ p_k(\mu) : \mu\in \Re^k, \ \|\mu\| \le A+ r_n/2 \}, \ k=1,\ldots,m.$$ Also assume that under the prior $\pi_2(\cdot|k)$ on $\bm{{\sigma}}$, $Pr( \min(\bm{{\sigma}}) < h_n|k)$ decays exponentially. Then under the Assumptions of Theorem \[t1\], for any ${\epsilon}>0$, $k\ge 1$, $$E_{f_t}\big\{Pr\big( P(B_{r_n,k}^c) \ge {\epsilon}\big| k, {\mathbf}{X}_n \big)\big\} {\longrightarrow}0.$$ If [**B1**]{} is strengthened to $${\bf B1'}: \sum_{n=1}^\infty n{\delta}_{kn}^{-1}h_n^{-k}\exp(-r_n^2/8A^2) < \infty,$$ and the sequence $r_n$ satisfies $\sum_{n=1}^\infty r_n^{-2(1+{\alpha})m} < \infty$ with ${\alpha}$ as in Assumption [**A4**]{}, then the conclusion can be strengthened to $$\sum_{n=1}^\infty E_{f_t}\big\{Pr\big( P(B_{r_n,k}^c) \ge {\epsilon}\big| k, {\mathbf}{X}_n \big)\big\} < \infty.$$ With these three lemmas we are now able to state and prove the theorem that ensures strong posterior consistency is attained. \[t4\] Consider a prior and sequences $h_n$ and $r_n$ for which the Assumptions of Lemma \[l2\] are satisfied. Further suppose that $n^{-1}(r_n/h_n)^m {\longrightarrow}0$. Also assume that the sequence $r_n$ and the prior $\pi_1(\cdot|k)$ on $(U,{\theta})$ are such that $Pr(\|{\theta}\|>r_n|k)$ decays exponentially for $k \le m-1$.
Assume that the true density satisfies the conditions of Theorem \[t1\]. Then the posterior probability of any $L_1$ neighborhood of $f_t$ converges to 1 in probability or almost surely depending on Assumption [**B1**]{} or ${\bf B1'}$. Theorem \[t1\] implies that the KL condition is satisfied. Consider the partition ${\mathcal}{D}(\Re^m) = {\mathcal}{D}_n^{{\epsilon}} \cup {\mathcal}{D}_n^{{\epsilon}c}$. Then $N({\epsilon},{\mathcal}{D}_n^{{\epsilon}})/n {\longrightarrow}0$ by Lemma \[l1\] and the assumption that $n^{-1}(r_n/h_n)^m {\longrightarrow}0$. Write $$Pr({\mathcal}{D}_n^{{\epsilon}c}| {\mathbf}{X}_n) = Pr \big( \{ f(.;{\Theta}): {\Theta}\in H_{n}^{{\epsilon}c} \} \big| {\mathbf}{X}_n \big),$$ where $$H_n^{{\epsilon}c} = \{ {\Theta}: \min(\bm{{\sigma}}) < h_n \} \cup \{ {\Theta}: \|{\theta}\| > r_n \} \cup \{ {\Theta}: P(B_{r_n,k}^c)>{\epsilon}\}.$$ The posterior probability of each of the first two sets above converges to 0 a.s. because the prior probability decays exponentially and the prior satisfies the KL condition. Note that $$Pr\big(\{ {\Theta}: P(B_{r_n,k}^c)>{\epsilon}\} \big| {\mathbf}{X}_n \big) \le \sum_{j=1}^m Pr\big(\{ {\Theta}: P(B_{r_n,k}^c)>{\epsilon}\} \big| {\mathbf}{X}_n, k=j \big)$$ and Lemma \[l2\] implies that this probability converges to 0 in probability/a.s. based on Assumption [**B1**]{}/${\bf B1'}$. Using Lemma \[l3\], the result follows. Now we give an example of a prior that satisfies the conditions of Theorem \[t4\]. Any discrete distribution on $\{0,\ldots,m\}$ having $m$ in its support can be used as the prior $p$ for $k$. Given $k$ ($k\ge 1$), we draw $U_0$ from a density on $V_{k,m}$. Given $k$ and $U_0$, under $\pi_1$, ${\theta}$ is drawn from a density on the vector space ${\mathcal}{N}(U_0)$ if $k<m$. If $k=m$, then ${\theta}=0$. When $k<m$, we set ${\theta}= r{\tilde}{\theta}$ with $r$ and ${\tilde}{\theta}$ drawn independently from $\Re^+$ and the set $\{ {\tilde}{\theta}\in \Re^m: \|{\tilde}{\theta}\|=1, {\tilde}{\theta}'U_0=0\}$ respectively.
The scalar $r^a$ is drawn from a Gamma density for appropriate $a>0$. As a special case, a truncated normal density can be used for ${\theta}$ when ${\tilde}{\theta}$ is drawn uniformly, $a=2$ and $r^2 \sim Gam(1,{\sigma}_0)$, ${\sigma}_0>0$. Then ${\theta}$ has the density $${\sigma}_0^{-(m-k)}\exp\Big(-\frac{1}{2{\sigma}_0^2}\|{\theta}\|^2\Big) I({\theta}'U_0 = 0)$$ with respect to the volume form of ${\mathcal}{N}(U_0)$. Given $k$, $\bm{{\sigma}}$ follows $\pi_2$ supported on $[0,A]^{k+1}$. Under $\pi_2$, the coordinates of $\bm{{\sigma}}$ may be drawn independently with, say, ${\sigma}_j^{-2}$ following a Gamma density truncated to $[0,A]$. If reasonable, assuming ${\sigma}_1 = \ldots = {\sigma}_k = {\sigma}$ with ${\sigma}^{-2}$ following a Gamma density will simplify computations. That said, a Gamma distribution only satisfies the conditions of Theorem \[t1\] when $m\ge 2$. To satisfy the conditions of Theorem \[t4\] a *truncated transformed* Gamma density may be used. That is, for appropriate $b>0$, we draw ${\sigma}^{-b}$ from a Gamma density truncated to $[0,A]$. Given $k$, $k\ge 1$, $P$ follows a $DP(w_kP_k)$ prior. To get conjugacy, we may select $P_k$ to be a Gaussian distribution on $\Re^k$ with covariance $\tau^2I_k$. With such a prior the conditions of Theorem \[t4\] are satisfied if we choose $a,b,\tau$ and $A$ such that $\tau^2 > 4A^2$, $a < 2(1+{\alpha})m$ and $a^{-1}+b^{-1} < m^{-1}$. This result is available from Corollary \[t5\], the proof of which is provided in the Appendix. \[t5\] Assume that $f_t$ satisfies Assumptions [**A1-A4**]{}. Let $\Pi_f$ be a prior on the density space as in Theorem \[t4\]. Pick positive constants $a,b,\{\tau_k\}_{k=1}^m$ and $A$ and set the prior as follows. Choose $\pi_1(.|k)$ such that for $k \le m-1$, $\|{\theta}\|^a$ follows a Gamma density.
Pick $\pi_2(.|k)$ such that ${\sigma}, {\sigma}_1, \ldots, {\sigma}_k$ are independently and identically distributed with ${\sigma}^{-b}$ following a Gamma density truncated to $[0,A]$. Alternatively let ${\sigma}={\sigma}_1=\ldots={\sigma}_k$ with ${\sigma}$ distributed as above. For the $DP(w_kP_k)$ prior on $P$, $k\ge 1$, choose $P_k$ to be a normal density on $\Re^k$ with covariance $\tau_k^2I_k$. Then almost sure strong posterior consistency results if the constants satisfy $\tau_k^2 > 4A^2$, $a < 2(1+{\alpha})m$ and $1/a + 1/b < 1/m$. A multivariate gamma prior on $\bm{{\sigma}}$ satisfies the requirements for weak but not strong posterior consistency (unless $m=1$). However, that does not rule it out, because Corollary \[t5\] provides only sufficient conditions. Truncating the support of $\bm{{\sigma}}$ is not undesirable, because for a more precise fit we are interested in low within-cluster covariance, which will result in a sufficient number of clusters. However, the transformation power $b$ increases with $m$, resulting in lower probability near zero, which is undesirable when sample sizes are not high. In [@abhishek2], a gamma prior is proved to be eligible for a Gaussian mixture model (that is, $k=m$) as long as the hyperparameters are allowed to depend on the sample size in a suitable way. However, there it is assumed that $f_t$ has a compact support. We expect the result to hold true in this context too. Identifiability of Parameters ============================= In many applications, the goal may not be density estimation but estimating the low dimensional set $S$ and its dimension. To do so $S$ must be identifiable. That is, there must be a unique $S$ corresponding to the model . Denoting by $P_f$ the distribution corresponding to $f$, it follows that $$\label{e8} P_f = N_m(0,{\Sigma})* (P\circ\phi^{-1}),$$ with \* denoting convolution.
Now let $\Phi_P(t)$ be the characteristic function of a distribution $P$; then the characteristic function of $f$ (or $P_f$) is $$\label{e12} \Phi_f(t) = \exp(-1/2 t'{\Sigma}t) \Phi_{P\circ \phi^{-1}}(t), \ t\in \Re^m.$$ Once we let $P$ be discrete, this suggests that ${\Sigma}$ and $P\circ\phi^{-1}$ can be uniquely determined from $f$. Now $\phi: \Re^k {\longrightarrow}\Re^m$, $\phi(\Re^k) = S$ and $P\circ \phi^{-1}$ is the distribution of $\phi(Y)$ with $Y \sim P$. It is a distribution on $\Re^m$ supported on the $k$ dimensional affine plane $S$. To identify $S$ and $k$, we further assume that the *affine support* asupp$(P)$ of $P$ is $\Re^k$. We define asupp$(P)$ as the intersection of all affine subspaces of $\Re^k$ having probability 1. It is an affine subspace containing supp$(P)$ (but may be larger). In other words, we use a prior for which $P$ is discrete and asupp$(P) = \Re^k$ w.p. 1. The Dirichlet process prior on $P$ given $k$ with a full support base is an appropriate choice. Then, from the nature of $\phi$, asupp$(P\circ\phi^{-1})$ is an affine subspace of $\Re^m$ of dimension equal to that of asupp$(P)$. Since asupp($P\circ\phi^{-1}$) is identifiable, this implies that $k$ is also identifiable as its dimension. Since $S$ contains asupp($P\circ\phi^{-1}$) and has dimension equal to that of asupp($P\circ\phi^{-1}$), it follows that $S = {\mathrm}{asupp}(P\circ\phi^{-1})$. Hence we have shown that the (sub) parameters $({\Sigma},k,S, P\circ\phi^{-1})$ are identifiable once we set a full support discrete prior on $P$ given $k$. Then $U_0 U_0'$ and ${\theta}$ are identifiable as the projection matrix and origin of $S$. However $P$ and the coordinate choice $\phi$ (hence $U_0$) are still non-identifiable.
However, if we consider the structure ${\Sigma}= U_0{\Sigma}_0U_0' + {\sigma}^2 (I_m - U_0 U_0')$ with a diagonal ${\Sigma}_0$ and impose some ordering on the diagonal entries of ${\Sigma}_0$, then the columns of $U_0$ become identifiable up to a change of signs as the eigen-rays. Point estimation for subspace $S$ --------------------------------- To obtain a Bayes estimate for the subspace $S$, one may choose an appropriate loss function and minimize the Bayes risk defined as the expectation of the loss over the posterior distribution. Any subspace is characterized by its projection matrix and origin. That is, the pair $(R,{\theta})$ where $R\in M(m)$ and ${\theta}\in \Re^m$ satisfy $R=R'=R^2$ and $R{\theta}=0$. We use ${\mathcal}{S}_m$ to denote the space of all such pairs. One particular loss function on ${\mathcal}{S}_m$ is $$L_1((R_1,{\theta}_1), (R_2, {\theta}_2)) = \|R_1 - R_2 \|^2 + \|{\theta}_1 - {\theta}_2\|^2, \ (R_i,{\theta}_i) \in {\mathcal}{S}_m.$$ For a matrix $A = ((a_{ij}))$, its norm-squared is defined as $\|A\|^2 = \sum_{ij} a_{ij}^2 = {\mathrm}{Tr}(AA')$. We find the average of $L_1$ over repeated draws of $(R_2, {\theta}_2)$ from their posterior and choose the value of $(R_1,{\theta}_1)$ for which the average is minimized (if a unique minimizer exists). Then the subspace $S$ is estimated as $\{ R_1x + {\theta}_1: x \in \Re^m \}$. It has dimension equal to the rank of $R_1$. If the goal is to estimate the directions of the subspace, we may instead use the loss function $$L_2((U_1, w_1), (U_2, w_2)) = \| U_1 - U_2 \|^2 + (w_1 - w_2)^2, \ (U_i,w_i) \in {\mathcal}{S}_{m2}.$$ Here the $m\times m$ matrix $U_i$ has the first few columns as the directions of the corresponding subspace $S_i$, the next column gives the direction of the subspace origin ${\theta}_i$ and the rest are set to the zero vector, while $w_i = \|{\theta}_i\|$.
Therefore $${\mathcal}{S}_{m2} = \left\{ (U, w)\in M(m)\times\Re^+: \ U'U = \left( \begin{array}{cc} I & 0 \\ 0 & 0 \end{array} \right) \right\}.$$ We find the minimizer (if unique) $(U_1, w_1)$ of the expected value of $L_2$ under the posterior distribution of $(U_2,w_2)$ and set the estimated subspace dimension $k$ as the rank of $U_1$ minus 1, the principal directions consisting of the first $k$ columns of $U_1$ and the origin as $w_1$ times the last column. Since the $k$ orthonormal directions of the subspace are only identifiable as rays, one may even look at the loss $$\begin{aligned} L_3((U,{\theta}_1), (V,{\theta}_2)) = \sum_{j=1}^m \| U_j U_j' - V_j V_j'\|^2 + \|{\theta}_1 - {\theta}_2\|^2,\end{aligned}$$ where $$\begin{aligned} &(U,{\theta}_1),(V,{\theta}_2) \in {\mathcal}{S}_{m3} = \left\{ (U, {\theta})\in M(m)\times\Re^m: \ U'U = \left( \begin{array}{cc} I & 0 \\ 0 & 0 \end{array} \right), \ U'{\theta}= 0 \right\}.\end{aligned}$$ Theorems \[t2\] and \[t3\] (proofs of which can be found in the appendix) derive the expressions for the minimizers of the risk functions corresponding to $L_1$ and $L_2$ and present conditions for their uniqueness. Throughout, we denote by $P_n$ the posterior distribution of the parameters given the sample. It is assumed to have finite second-order moments. For a matrix $A$, by $A_{(k)}$ we shall denote the submatrix of $A$ consisting of its first $k$ columns. \[t2\] Let $f_1(R,{\theta}) = \int_{(R_2,{\theta}_2)}L_1((R,{\theta}), (R_2, {\theta}_2)) dP_n(R_2,{\theta}_2)$, $(R,{\theta}) \in {\mathcal}{S}_m$. This function is minimized by $R = \sum_{j=1}^k U_j U_j'$ and ${\theta}= (I - R)\bar{\theta}_2$ where $\bar R_2 = \int_{M(m)} R_2 dP_n(R_2)$ and $\bar{\theta}_2 = \int_{\Re^m} {\theta}_2 dP_n({\theta}_2)$ are the posterior means of $R_2$ and ${\theta}_2$ respectively, $2\bar R_2 - \bar{\theta}_2\bar{\theta}_2' = \sum_{j=1}^m {\lambda}_j U_j U_j'$, ${\lambda}_1 \ge \ldots \ge {\lambda}_m$ is a s.v.d.
of $2\bar R_2 - \bar{\theta}_2\bar{\theta}_2'$, and $k$ minimizes $k - \sum_{j=1}^k {\lambda}_j$ on $\{0,\ldots,m\}$. The minimizer is unique if and only if there is a unique $k$ minimizing $k - \sum_{j=1}^k {\lambda}_j$ and ${\lambda}_k > {\lambda}_{k+1}$ for that $k$. \[t3\] Let $f_2(U,w) = \int_{(U_2,w_2)}L_2((U,w), (U_2, w_2))dP_n(U_2, w_2)$, $(U,w) \in {\mathcal}{S}_{m2}$. Let $\bar w$ and $\bar U$ denote the posterior means of $w_2$ and $U_2$ respectively. Then $f_2$ is minimized by $w = \bar w$ and any $U = [U_1, 0]$, where $U_1 \in V_{k+1,m}$ satisfies $\bar U_{(k+1)} = U_1 (\bar U_{(k+1)}'\bar U_{(k+1)})^{1/2}$, and $k$ minimizes $g(k) = k - 2{\mathrm}{Tr}(\bar U_{(k+1)}'\bar U_{(k+1)})^{1/2}$ over $\{0,\ldots,m-1\}$. The minimizer is unique if and only if there is a unique $k$ minimizing $g$ and $\bar U_{(k+1)}$ has full rank for that $k$. Posterior Computation {#s4} ===================== We now present an algorithm to sample from the joint posterior distribution of ${\Theta}= (k,U_0,{\theta}, \Sigma_0, {\sigma}, P)$ and as a result the density of $X$, given iid realizations $X_1, \ldots, X_n$. Since exact sampling is not possible, we resort to MCMC draws from the posterior. We first present an algorithm with $k$ being treated as a fixed known quantity. We then generalize the algorithm to allow unknown $k$. In both cases, a straightforward Gibbs sampler can be used. MCMC algorithm for the fixed $k$ {#s5.1} -------------------------------- We use a Dirichlet process (DP) prior for $P$ (i.e., $P \sim DP(w_0P_0)$). For simplicity and to preserve conjugacy we set $P_0 = N_k(m_{\mu}, S_{\mu})$ with $w_0 = 1$. We employ the stick breaking representation of the Dirichlet process (Sethuraman [@sethuraman]) so that $P = \sum_{j=1}^\infty w_j {\delta}_{\mu_j}$ where $\mu_j$ is drawn $iid$ from $P_0$ and $w_j = v_j\prod_{\ell < j}(1-v_{\ell})$ with $v_j \sim Beta(1, w_0)$.
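The truncated stick-breaking construction used by the block Gibbs sampler below is easy to sketch numerically. The following Python fragment is our own illustration (function names such as `stick_breaking` are not from the paper); it truncates the infinite sum at $N$ atoms and sets $v_N = 1$ so the weights sum to one:

```python
import numpy as np

def stick_breaking(w0, N, rng):
    """Truncated stick-breaking weights w_j = v_j * prod_{l<j}(1 - v_l),
    with v_j ~ Beta(1, w0) and v_N = 1 so that the weights sum to one."""
    v = rng.beta(1.0, w0, size=N)
    v[-1] = 1.0                       # truncation: last break takes the remainder
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return w

def draw_dp_atoms(N, k, m_mu, S_mu, rng):
    """Atoms mu_j drawn iid from the base measure P0 = N_k(m_mu, S_mu)."""
    return rng.multivariate_normal(m_mu, S_mu, size=N)

rng = np.random.default_rng(0)
w = stick_breaking(w0=1.0, N=50, rng=rng)
mu = draw_dp_atoms(N=50, k=2, m_mu=np.zeros(2), S_mu=np.eye(2), rng=rng)
# P = sum_j w_j delta_{mu_j}; by construction the weights are a probability vector.
```

Because $v_N = 1$, the telescoping identity $\sum_{j\le N} v_j\prod_{l<j}(1-v_l) = 1-\prod_{l\le N}(1-v_l)$ guarantees the weights sum exactly to one.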
After introducing cluster labels $S_1,\ldots,S_n$, the likelihood becomes $$\begin{aligned} \label{e18} f(\bm{x}; U_0, \theta, \Sigma_0, \sigma, P, \mu, S) = & \prod_{i=1}^nw_{S_i}N_m(x_i; U_0\mu_{S_i} + \theta, \Sigma) \\ = & \prod_{i=1}^nw_{S_i}N_k(U'_0x_i; \mu_{S_i}, \Sigma_0)N_{m-k}(V'x_i; V'\theta, \sigma^2I_{m-k}) \end{aligned}$$ where once again ${\Sigma}= U_0\Sigma_0U_0' + {\sigma}^2(I_m - U_0U_0')$. After prior distributions for $(U_0, \theta, \Sigma_0, \sigma, \mu)$ are appropriately selected (details of which are given within the description of the algorithm), it is possible to describe an algorithm that constructs an MCMC chain providing draws from the joint posterior distribution of interest by cycling through the following steps. 1. Let $\pi(U_0)$ denote a prior distribution for $U_0 \in V_{k,m}$. Using straightforward matrix algebra it can be shown that the full conditional of $U_0$ is $$\begin{aligned} \label{e101} [U_0 | -] & \propto \exp\{tr\big[1/2({\sigma}^{-2}I_k - \Sigma_0^{-1})U_0' (\sum_{i=1}^n x_i x_i')U_0 + \Sigma_0^{-1}(\sum_{i=1}^n \mu_{S_i} x_i')U_0\big] \}\pi(U_0) \nonumber \\ & \propto {\mathrm}{etr}\{F_1' U_0 + F_2 U_0' F_3 U_0 \}\pi(U_0), \end{aligned}$$ where $F_1 = (\sum_{i=1}^n x_i \mu_{S_i}')\Sigma_0^{-1}$, $F_2 = \frac{1}{2}({\sigma}^{-2}I_k - \Sigma_0^{-1})$, and $F_3 = \sum_{i=1}^n (x_i x_i')$. Here ${\rm etr}(A)$ denotes $\exp({\rm tr}(A))$. Thus, if one selects a matrix Bingham-von Mises-Fisher prior distribution for $U_0$ (the Uniform distribution on the Stiefel manifold being a special case), then the full conditional of $U_0$ is a matrix Bingham-von Mises-Fisher distribution on the space $U'_0\theta = 0$. Strategies for sampling from the matrix Bingham-von Mises-Fisher distribution are developed in Hoff [@hoff2]. A straightforward extension of that work can be implemented to sample from a matrix Bingham-von Mises-Fisher distribution that has $U'_0\theta = 0$ as a constraint. 1.
As discussed in Section 3.2, a good prior choice for $\theta$ is a truncated normal $\theta \sim N_m(m_{\theta}, S_{\theta})I[U'_0\theta = 0]$. The full conditional under this prior is the following truncated multivariate normal $$\begin{aligned} \label{5.4} [{\theta}| -]\sim N_m (m_{\theta}^*, S_{\theta}^*)I[U_0'{\theta}= 0],\end{aligned}$$ where $S_{\theta}^* = (n\Sigma^{-1} + S^{-1}_{\theta})^{-1}$ and $m_{\theta}^* = S_{\theta}^*(\Sigma^{-1} \sum_{i=1}^n x_i + S^{-1}_{\theta}m_{\theta}).$ Notice that if $W$ is an orthonormal basis of ${\mathcal}{N}(U'_0)$, then there exists a $\tilde{\theta} \in \Re^{m-k}$ such that $\theta = W\tilde{\theta}$ and $\tilde{\theta} \sim N_{m-k}(W'm_{\theta}^*, W'S_{\theta}^*W)$. This fact can be exploited to sample from . 1. Update $S_i$ for $i = 1, 2, \ldots, n$ by sampling from the multinomial conditional posterior distribution $$Pr(S_i = j|-) \propto w_j \exp\{-1/2 (U'_0 x_i - \mu_j)' {\Sigma}_0^{-1} (U'_0 x_i - \mu_j) \}, \ j=1,\ldots,\infty.$$ To make the total number of states finite, the block Gibbs sampler of Ishwaran and James [@ish] may be implemented. Alternatively, the slice sampling ideas described in Yau, Papaspiliopoulos, Roberts, and Holmes [@yua2011], Walker [@walker2007], or Kalli, Griffin, and Walker [@griffin2011] could be used. The remainder of the algorithm is described from the perspective of using a block Gibbs sampler, which requires truncating the number of atoms to $N$. 1. Update the DP atom weights by setting $w_j = v_j \prod_{l=1}^{j-1} (1- v_l)$, $j=1,\ldots, N$, after drawing $$[v_j | -] \sim Beta(1 + n_j, w_0 + \sum_i I(S_i > j))$$ with $n_j = \sum_i I(S_i = j)$ and setting $v_N = 1$. <!-- --> 1. Update the DP atoms $\{\mu_j: j =1, \ldots, N\}$ independently by sampling from $$[\mu_j | - ] \sim N_k(m_{\mu}^*, S_{\mu}^*),$$ where $S_{\mu}^* = (n_j\Sigma_0^{-1} + S_{\mu}^{-1})^{-1}$ and $m_{\mu}^* = S_{\mu}^*(\Sigma_0^{-1} U'_0 \displaystyle \sum_{i:S_i=j} x_{i} + S^{-1}_{\mu} m_{\mu})$. <!-- --> 1.
Using a $\sigma^{-2} \sim {\rm Ga}(a,b)$ prior, $\sigma^{-2}$ can be updated using $$[\sigma^{-2}|-] \sim {\rm Ga}(\frac{1}{2}n(m-k) + a, b + \frac{1}{2}\sum_{i=1}^nx'_ix_i + \frac{n}{2}\theta'\theta - \frac{1}{2}\sum_{i=1}^nx'_iU_0U'_0x_i - \theta'\sum_{i=1}^nx_i).$$ Under the simplifying assumption that $\Sigma_0 = \sigma^{2}I_k$ the full conditional of $\sigma^{-2}$ becomes $$[\sigma^{-2}|-] \sim {\rm Ga}(\frac{1}{2}nm + a, b + \frac{1}{2}\sum_{i=1}^n(x_i - U_0\mu_{S_i} - \theta)'(x_i - U_0\mu_{S_i} - \theta)).$$ <!-- --> 1. Using a truncated Gamma distribution for $\sigma^{-2}_j$ (i.e., $\sigma^{-2}_j \sim {\rm Ga}(a, b)I[\sigma^{-2}_j \in [0,A]]$) allows one to update $\sigma^{-2}_j$ using the following truncated Gamma distribution: $$[\sigma^{-2}_{j}|-] \sim {\rm Ga}(\frac{n}{2} + a, b + \frac{1}{2}\sum_{i=1}^n(U_0'x_i - \mu_{S_i})^2_{j})I[\sigma^{-2}_j \in [0,A]].$$ Reasonable starting values can decrease the number of MCMC iterates discarded as burn-in and therefore may be desirable. For $U_0$, the first $k$ eigenvectors of the sample covariance matrix can be used. For $\theta$, one may use $(I_m - U_sU'_s)\bar{x}$ where $U_s$ denotes the starting value for $U_0$. The initial labels ($S_i$) and coordinate cluster means ($\mu_j$) can be obtained by applying a k-means algorithm to $U'_s x_i$. MCMC algorithm for $k$ unknown ------------------------------ In the case that $k$ is unknown, a prior distribution needs to be assigned to $k$ and $U_0 \in O(m)$. In what follows, to denote the $k$th coordinate and the first $k$ coordinates of $\mu_j$ we use $\mu_{jk}$ and $\mu_{j(k)}$ respectively. Similarly, let $U_{0(k)}$ represent the first $k$ columns of $U_0$ while $U_{0(-k)}$ will represent the remaining $m-k$ columns.
After introducing cluster labels, the full posterior is proportional to $$\pi(w,\mu,{\sigma},{\Sigma}_0, U_0,\theta, k, S) \propto \prod_{i=1}^n w_{S_i} N_m \big( x_i; U_{0(k)}\mu_{S_i(k)} + {\theta}, {\Sigma}\big).$$ Here $\pi$ is a general expression for the prior. The first $k$ columns of the $m\times m$ matrix $U_0$ explain the subspace directions and the first $k$ coordinates of $\mu_j$ the cluster locations. Allowing $k$ to be unknown requires altering steps 1 and 5 of the MCMC algorithm described in the previous section and adding an additional step. We first describe the additional step and then the adjustments to steps 1 and 5. Continuing from step 7 of the previous section, we add 1. Update $k$ by drawing a value for $k$ from the following complete conditional $$\begin{aligned} \label{s8} Pr(k = \ell |-) & \propto p(\ell)\prod_{i=1}^n N_{m}(x_i; U_{0(\ell)}\mu_{S_i(\ell)} + {\theta}, {\Sigma}) \ \mbox{for $\ell = 1, \ldots, m-1$}.\end{aligned}$$ When the data dimension $m$ is very high, computing all $m-1$ probabilities can become computationally expensive. An approach to reduce the number of states would be to introduce a slice sampling variable $u$ drawn from $Unif(0,1)$. In this setting we replace $p(k)$ by $I(u< p(k))$. This means that $k$ will be drawn from the set $\{k\colon p(k)>u\}$ and $u \sim Unif(0,p(k))$. Updating the upper bound for the subspace dimension ($K$) can be done by drawing $u \sim Unif(0,p(k))$ and setting $K = \max\{k \le m: p(k) > u\}$. 1. Use the complete conditional derived in step 1 from Section 6.1 to update $U_{0(k)}$, then draw $U_{0(-k)} = [U_{0,k+1}, \ldots, U_{0,K}]$ from $\pi(U_{0(-k)} | U_{0(k)})$ such that $U'_{0(-k)}\theta = 0$. When a uniform prior is being considered, step 1b requires one to sample uniformly from $V_{K-k,m}$ perpendicular to the column space of $[U_{0(k)}, \theta] \equiv U_\theta$.
As discussed in Chikuse [@chikuse], $U^*$ is a uniform sample from $V_{K-k,m}$ if $U^* = T(T'T)^{-1/2}$ for $T$ an $m\times(K-k)$ matrix of independent standard normal random variables. To ensure that $U^* \in {\mathcal}{N}(U_\theta')$, first project $T$ into ${\mathcal}{N}(U_\theta')$ by setting $T^* = (I - U_\theta U_\theta')T$. Then $U^* = T^*(T^{*'}T^*)^{-1/2}$ is a uniform draw from $V_{K-k,m}$ perpendicular to the column space of $U_\theta$. If $\pi(U_0)$ is not a uniform distribution on $O(m)$, see Hoff [@hoff2] for sampling strategies. 1. Use the full conditional found in step 5 from Section 6.1 to update $\mu_{j(k)}$. Then draw $\mu_{j,k+1}, \ldots, \mu_{jK}$ from their respective prior distributions. With $k$ unknown, the MCMC chain tends to get stuck on certain values of $k$ for many iterations. The stickiness occurs because the probabilities in step 8 are computed for all $\ell = 1, \ldots, K$ using a $U_0$ that was updated for a particular value of $k$. To make the chain less sticky, we employ adaptive MCMC methods as outlined in Roberts and Rosenthal [@roberts]. We applied the adaptation to step 8 and step 5 of the algorithm. Specifically, we raised each of the un-normalized probabilities to the $1 - \exp(-0.0001t)$ power (where $t = 1, \ldots, M$ denotes the $t^{\rm th}$ MCMC iterate) and replaced $S_{\mu}^*$ found in step 5 of Section \[s5.1\] with $(1+100\exp(-0.001t))S_{\mu}^*$. In this way, the space of cluster locations is initially more thoroughly explored. Notice that the adaptation vanishes at an exponential rate, which guarantees that the proper regularity conditions hold. Simulation Study ================ To assess the proposed methodology’s density estimation ability, we conducted a small simulation in which a density is estimated using observations in $\Re^m$ originating from the following finite mixture $$\begin{aligned} \label{e61} \bm{x} \sim \sum_{h=1}^{c+1} \pi_hN_{m}(\bm{\eta}_h, \sigma^2I).
\end{aligned}$$ Here $\eta_h$ is a vector of zeros save for the $h$th entry, which is 1. We considered the influence of the following three factors on the density estimate. 1. Bandwidth (setting $\sigma^2=0.01$, $\sigma^2=0.05$, and $\sigma^2=0.1$) 2. Sample size (setting $n=50$, $n=100$, $n=200$) 3. Dimension of the affine subspace (considering $k=2$ and $k=5$). To show that this mixture falls into the current class of models, consider the case of $k=2$ and $m=100$. For this case we have the $100$-dimensional vector $\theta = (1/3, 1/3, 1/3, 0, \dots, 0)'$. Further, one possible representation of the $100\times2$-dimensional $U_0$ is $$\begin{aligned} \label{e62} U_0 = \left( \begin{array}{ccccc} 1/\sqrt{2} & -1/\sqrt{2} & 0 & \ldots & 0 \\ 1/\sqrt{6} & 1/\sqrt{6} & -2/\sqrt{6} & \ldots & 0 \end{array} \right)'.\end{aligned}$$ As competitors, we considered a finite mixture with $f(x) = \sum_{h=1}^c\pi_hN_m(\mu_h, \sigma^2\bm{I}_m)$ and an infinite mixture $f(x) = \sum_{h=1}^{\infty}\pi_hN_m(\mu_h, \sigma^2\bm{I}_m)$. The numbers of components employed in the finite mixture were 3 and 6 for the two respective affine subspace dimensions considered. For each synthetic data set created, 100 observations were generated to assess out-of-sample density estimation. To compare the density estimates between the procedures employed, we used the following Kullback-Leibler type distance $$\begin{aligned} \label{e20} \frac{1}{D} \sum_{d=1}^D\frac{1}{T} \sum_{t =1}^T \left ( \sum_{\ell = 1}^{100} \log{f_0(\bm{x}^*_{\ell d})} - \sum_{\ell = 1}^{100}\log{\hat{f}_t(\bm{x}^*_{\ell d} )}\right). \end{aligned}$$ Here $f_0$ denotes the true density function, $d$ is an index for the $D=25$ datasets that were generated, $\bm{x}^*_{\ell d}$ is the $\ell$th out-of-sample observation generated from the $d$th data set, and $\hat{f}_t$ is the estimated density from the $t$th of the $T$ MCMC draws. For each of the 25 generated data sets, a density estimate was obtained using the proposed method with $k$ unknown and for $k=1$, $k=2$, and $k=5$.
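As a quick numerical check of this representation (a sketch under the stated setting $m=100$, $k=2$, with a bandwidth from the study; all code and names are ours), the following verifies that the stated $\theta$ and $U_0$ satisfy the model constraints $U_0'U_0 = I_k$ and $U_0'\theta = 0$, confirms that the three cluster means $\eta_h$ lie on the affine plane $\{U_0 y + \theta\}$, and then generates draws from the mixture:

```python
import numpy as np

m, k, sigma2 = 100, 2, 0.05
theta = np.zeros(m)
theta[:3] = 1.0 / 3.0

U0 = np.zeros((m, k))
U0[:2, 0] = [1 / np.sqrt(2), -1 / np.sqrt(2)]
U0[:3, 1] = [1 / np.sqrt(6), 1 / np.sqrt(6), -2 / np.sqrt(6)]

# Model constraints: orthonormal columns and U0' theta = 0.
assert np.allclose(U0.T @ U0, np.eye(k))
assert np.allclose(U0.T @ theta, 0.0)

# The c+1 = 3 cluster means eta_h (standard basis vectors) lie on the
# affine plane {U0 y + theta}: their residuals off the plane vanish.
eta = np.eye(m)[:3]
resid = (np.eye(m) - U0 @ U0.T) @ (eta - theta).T
assert np.allclose(resid, 0.0)

# Draw n observations from the equal-weight mixture sum_h (1/3) N_m(eta_h, sigma2 I).
rng = np.random.default_rng(1)
n = 200
labels = rng.integers(0, 3, size=n)
x = eta[labels] + np.sqrt(sigma2) * rng.standard_normal((n, m))
```

The assertions pass because each $\eta_h - \theta$ is a linear combination of the two columns of $U_0$, so the simulated clusters sit exactly on a two-dimensional affine subspace of $\Re^{100}$.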
We entertained a discrete uniform and a stick-breaking type prior for $k$ with no appreciable difference in parameter estimation. We set $\sigma_1 = \ldots = \sigma_k = \sigma$. For each scenario 1000 MCMC iterates were used to approximate the density. A burn-in of 1000 was used when $k$ was fixed. When $k$ was treated as unknown, a burn-in of 10,000 was used with a thinning of 100. Convergence was monitored using trace plots of the collected MCMC iterates. The value of the distance for each scenario considered, averaged across the 25 datasets, can be found in Table \[SimStudyResults\]. Under the column “Unknown $k$" can be found the results when $k$ was treated as an unknown. The results from the method when $k$ is fixed at a specified value can be found under one of the three “$k=$" columns. Results from the finite mixture and infinite mixture are under the columns “Fin Mix" and “Inf Mix".

  True $k$   $\sigma^2$   $n$   Unknown $k$   $k=1$     $k=2$     $k=5$     Fin Mix   Inf Mix
  ---------- ------------ ----- ------------- --------- --------- --------- --------- ---------
  2          0.01         50    582.98        1557.39   392.84    412.77    2580.81   2612.92
  2          0.01         100   274.76        1494.65   205.49    214.32    1539.74   1619.44
  2          0.01         200   139.21        1474.90   106.06    111.85    165.92    1429.98
  2          0.05         50    590.24        421.93    314.44    394.53    710.46    714.26
  2          0.05         100   271.79        371.65    172.61    192.39    465.87    499.58
  2          0.05         200   128.30        315.85    96.37     105.34    153.54    160.66
  2          0.1          50    589.01        232.33    250.50    365.38    426.69    426.29
  2          0.1          100   280.99        189.05    154.91    201.62    320.02    324.92
  2          0.1          200   134.07        162.34    87.55     104.65    160.54    176.29
  5          0.01         50    2292.44       2645.34   2268.70   1015.80   3003.87   3029.25
  5          0.01         100   2075.99       2564.26   2164.32   500.65    2341.99   2838.46
  5          0.01         200   2138.87       2503.26   2065.54   256.78    1646.43   2046.68
  5          0.05         50    872.18        646.12    654.20    714.96    798.29    801.22
  5          0.05         100   604.07        604.73    556.36    421.40    676.65    690.04
  5          0.05         200   506.53        550.92    489.39    231.47    460.85    512.93
  5          0.1          50    773.15        315.85    357.02    484.87    447.79    456.62
  5          0.1          100   431.56        294.42    309.34    358.66    351.17    353.89
  5          0.1          200   283.02        246.20    237.94    206.01    286.96    288.10

: Results of the Kullback-Leibler type
distance comparing estimated densities from each of the procedures considered in the simulation study to the density used to generate data. \[SimStudyResults\] Generally speaking, the procedure outlined in Section 3 does a much better job at recovering the true density relative to the mixtures. This is the case even if $k$ is fixed at the wrong value. That said, as expected, fixing $k$ at the true value provides the best results. The only instances in which the finite mixture estimated the density more accurately than our density estimator are when the dimension of the affine subspace is set to 5 and the sample size is small. However, even in small samples, if $k$ is fixed at the correct value, then the density is recovered more accurately using our procedure compared to the mixtures. Also, it appears that as $\sigma^2$ increases, cluster separation diminishes and estimating $k$ becomes more difficult. Hence the varying-$k$ procedure does not perform as well in estimating the density (which is to be expected) but still outperforms the mixtures. In addition, as expected, larger sample sizes are conducive to better density estimation, as the Kullback-Leibler type distance generally gets smaller as $n$ increases. Nonparametric Classification with Feature Coordinate Selection {#s5} ============================================================== We consider a categorical $Y$ that takes on values from the set $\{1,\ldots,c\}$. The goal of classification is to identify the class to which $Y$ belongs using $m$ characteristics of $Y$. These characteristics are typically denoted by $X \in \Re^m$. Because the association between $X$ and $Y$ may not be causal, our approach is to model $X$ and $Y$ jointly and from the joint derive the conditional.
Letting $M_c(y; \bm{\nu}) = \prod_{\ell = 1}^c \nu_{\ell}^{I[y=\ell]}$, we consider the following joint model $$\label{e13} (X,Y) \sim f(x,y) = \int_{\Re^k\times S_{c}} N_m( x;\phi(\mu), {\Sigma}) M_c(y; \bm{\nu}) P(d\mu \, d\bm{\nu}),$$ with $S_{c} = \{ \bm{\nu} \in [0,1]^c \colon \sum \nu_{\ell} =1 \}$ denoting the $(c-1)$-dimensional simplex. Note that this is a generalization of the earlier density model, along the lines of the joint model proposed in Bhattacharya and Dunson [@abhishek1], though they focus on kernels for predictors lying on non-Euclidean manifolds and do not perform dimensionality reduction. When $m$ is large, it is often the case that most of the information present in the data is used to model the marginal of $X$ while the association between $X$ and $Y$ is disregarded. In order to avoid this, we instead pick a few coordinates of $X$, say $k$ many, and model the joint density of the $k$ coordinates of $X$ and $Y$. The remaining coordinates of $X$ are modeled independently as equal variance Gaussians, though in preliminary simulation studies, we find that our performance in estimating the subspace and predicting $Y$ is robust to the true joint distribution of the ‘non-signal’ predictors that are not predictive of $Y$. By setting a prior on the coordinate selection method, we can very flexibly pick out those few ‘important’ coordinates that completely explain the conditional distribution of $Y$. Without loss of generality, an isotropic transformation of $X$ can be used, which would provide some benefit with regard to coordinate inversion.
That is, we can locate a $k\le m$ and $U_0 \in V_{k,m}$ such that $$\begin{aligned} (U'_0X, Y) \sim f_1(x_1,y) = \int_{\Re^k\times S_{c}} N_{k}(x_1;\mu,{\Sigma}_0) M_c(y; \bm{\nu}) P(d\mu \, d\bm{\nu}), \ x_1 \in \Re^k,\end{aligned}$$ along with a ${\theta}\in \Re^m$ and $V \in V_{m-k,m}$ satisfying $V'U_0 = 0$ and ${\theta}'U_0 = 0$, such that $$\label{e14} V'X \sim N_{m-k}(V'\theta, {\sigma}^2 I_{m-k})$$ independently of $(U'_0X, Y)$. With such a structure, the joint distribution of $(X,Y)$ takes the form above, where $$\begin{aligned} \phi:\Re^k {\rightarrow}\Re^m, \ \phi(y) = U_0y + {\theta}, \ U_0 \in V_{k,m}, \ {\theta}\in \Re^m, \ U'_0{\theta}= 0,\\ {\Sigma}= U_0({\Sigma}_0 - {\sigma}^2I_k) U'_0 + {\sigma}^2 I_m, \ {\Sigma}_0 \in M^+(k), {\sigma}^2 \in \Re^+.\end{aligned}$$ The conditional density of $Y=y$ given $X=x$ can be expressed as $$\label{e16} p(y|x;{\Theta}) = \frac{\int_{\Re^k\times S_{c}} N_k(U'_0x;\mu,{\Sigma}_0) M_c(y; \bm{\nu}) P(d\mu \, d\bm{\nu})} {\int_{\Re^k\times S_{c}} N_k(U'_0x;\mu,{\Sigma}_0)P(d\mu \, d\bm{\nu})}$$ with parameters ${\Theta}= (k,U_0,{\Sigma}_0, P, \theta, \sigma^2)$. A draw from the posterior of ${\Theta}$ under this model will give us a draw from the posterior of the conditional. When $P$ is discrete (which is a standard choice), the conditional distribution of $Y$ given $X$ and ${\Theta}$ can be thought of as a weighted $c$-dimensional multinomial probability vector with the weights depending on $X$ only through the selected $k$-dimensional coordinates $U'_0X$. For example, if $P = \sum_{j=1}^\infty w_j {\delta}_{(\mu_j,\bm{\nu}_j)}$, then $$\begin{aligned} \label{e50} p(y|x;{\Theta}) = \sum_{j=1}^\infty {\tilde}w_j(U'_0x) M_c(y; \bm{\nu}_j)\end{aligned}$$ where ${\tilde}w_j(x) = \frac{w_j N_k(x;\mu_j,{\Sigma}_0)}{\sum_{i=1}^\infty w_i N_k(x;\mu_i,{\Sigma}_0)}$ and $x \in \Re^k$ for $j=1,\ldots,\infty$. We refer to this model as the principal subspace classifier (PSC).
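For a truncated $P = \sum_{j=1}^N w_j {\delta}_{(\mu_j,\bm{\nu}_j)}$ the sum above is finite, and the PSC class probabilities can be evaluated directly. The following is a minimal sketch (our own function and variable names, not the paper's implementation), computing the weights on the log scale for numerical stability:

```python
import numpy as np

def psc_class_probs(x, U0, Sigma0, w, mu, nu):
    """PSC conditional p(y = l | x) = sum_j wtilde_j(U0' x) * nu_{j l},
    where wtilde_j is proportional to w_j * N_k(U0' x; mu_j, Sigma0)."""
    z = U0.T @ x                       # selected k-dimensional coordinates U0' x
    diff = mu - z                      # (N, k) deviations from each atom
    Sinv = np.linalg.inv(Sigma0)
    # log Gaussian kernel up to an additive constant common to all atoms j
    logkern = -0.5 * np.einsum("ji,il,jl->j", diff, Sinv, diff)
    logw = np.log(w) + logkern
    wt = np.exp(logw - logw.max())     # stable normalization of wtilde_j
    wt /= wt.sum()
    return wt @ nu                     # (c,) vector of class probabilities

# Illustrative inputs: random orthonormal U0 and random truncated atoms.
rng = np.random.default_rng(2)
m, k, N, c = 10, 2, 5, 3
U0, _ = np.linalg.qr(rng.standard_normal((m, k)))
w = rng.dirichlet(np.ones(N))
mu = rng.standard_normal((N, k))
nu = rng.dirichlet(np.ones(c), size=N)
p = psc_class_probs(rng.standard_normal(m), U0, 0.5 * np.eye(k), w, mu, nu)
```

Since the normalized weights sum to one and each $\bm{\nu}_j$ is a probability vector, the output is itself a valid $c$-dimensional probability vector, as the display above requires.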
The above is easily adapted to a regression setting by considering a low-dimensional response $Y \in \Re^l$ and replacing the multinomial kernel used for $Y$ with a Gaussian kernel. In this setting the joint model becomes $$\begin{aligned} \label{e17} (X,Y) \sim \int_{\Re^k\times\Re^l} N_m(x;\phi(\mu),{\Sigma}_x)N_l(y;\psi,{\Sigma}_y)P(d\mu \, d\psi),\end{aligned}$$ which produces the following conditional model $$\label{e18} p(y|x;{\Theta}) = \frac{\int_{\Re^k\times\Re^l} N_k(U'_0x;\mu,{\Sigma}_0) N_l(y;\psi,{\Sigma}_y)P(d\mu \ d\psi)} {\int_{\Re^k\times\Re^l} N_k(U_0'x;\mu,{\Sigma}_0)P(d\mu \, d\psi)}.$$ For a discrete $P$ this conditional distribution becomes the following mixture whose weights depend on $X$ only through its $k$-dimensional coordinates $U'_0X$ $$\begin{aligned} p(y|x;{\Theta}) = \sum_{j=1}^\infty {\tilde}w_j(U'_0x) N_l(y;\psi_j,{\Sigma}_y).\end{aligned}$$ As the regression model is a straightforward modification of the classifier, we focus on the classification case for the sake of brevity. MCMC algorithm -------------- Sampling from the posterior of ${\Theta}= (k,U_0,{\Sigma}_0, P, \theta, \sigma^2)$ requires adjusting step 3 of Section 6’s algorithm and adding a step to update $\bm{\nu}$. We continue to assume $P \sim DP(\alpha, P_0)$. However, in the present setting $P_0 = N(m, S)\otimes Dir(\bm{a}_{\nu})$. Now the data likelihood, after introducing cluster labels $S_1,\ldots,S_n$, becomes $\prod_{i=1}^n w_{S_i} N_m(x_i; U_0\mu_{S_i}+{\theta}, {\Sigma})M_c(y_i; \bm{\nu}_{S_i})$. An MCMC chain that provides draws from the joint posterior of $\Theta$ can be obtained by adding the following two steps to the algorithm in Section 6. 1. Update $S_i$ for $i = 1, 2, \ldots, n$ by sampling from the following conditional posterior distribution $$Pr(S_i = j|-) \propto w_j \exp \left\{-1/2 (\mu_j'{\Sigma}_0^{-1}\mu_j -2\mu_j'{\Sigma}_0^{-1}U'_0x_i) \right\} \prod_{\ell = 1}^c \nu_{j\ell}^{I[y_i=\ell]}$$ for $ j = 1, \ldots, \infty$.
Once again, one may introduce slice sampling latent variables and implement the exact block Gibbs sampler, or use the block Gibbs sampler directly to make the total number of states finite. 2. Update the $\bm{\nu}_j$’s by sampling from $[\bm{\nu}_j | - ] \sim Dir(a^*_1, \ldots, a^*_c)$, where $a^*_{\ell} = \sum_{i=1}^n I[y_i = \ell, S_i=j] + a_{\ell}$ for $\ell = 1, \ldots, c$. Simulation Study ---------------- To demonstrate the performance of the classifier, we conduct a small simulation study. Synthetic data sets are generated using two methods. The first method treats the PSC as a data generating mechanism; the second is similar to the data generating scheme found on page 16 of Hastie, Tibshirani and Friedman [@HTF:2008] (hereafter referred to as HTF). We briefly describe both. When the PSC is being used as a data generating mechanism, the $X$ matrix is generated from the model itself. We set $m=100$, $\sigma^2=0.1$, and $k=2$. As this produces a feature space with three clusters, $Y$ takes on values in $\{1, 2, 3\}$ with probabilities $[{\tilde}w_1(U'_0X), {\tilde}w_2(U'_0X), {\tilde}w_3(U'_0X)]$, where $U_0$ is the basis matrix used in generating $X$. The second data generating scenario consists of two classes with 100 observations each. The observations are drawn from the Gaussian mixture $\sum_{j=1}^{10} 1/10 N_{100}(m_j,1/5I)$. The 10 means, $m_j$, for the two classes are generated independently from $N_{100}(\eta_1, I)$ and $N_{100}(\eta_2, I)$ respectively ($\eta_1$ and $\eta_2$ are fixed mean vectors). For each scenario 100 data sets were generated: 100 training and 100 testing observations for the first, and 200 training and 200 testing observations for the second. The PSC, $k$-nearest neighbor (KNN), and mixture discriminant analysis (MDA) were employed to classify the response from the testing data sets.
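The HTF-style generator just described can be sketched as follows (our illustrative Python, not the authors' R code); the class label determines which $\eta$ is passed in.

```python
import random

def htf_class_sample(eta, n_obs, n_means=10, seed=None):
    """Draw n_obs observations for one class: first draw n_means centers
    m_j ~ N(eta, I); each observation then picks a center uniformly at
    random and adds N(0, (1/5) I) noise, as in the mixture
    sum_j (1/10) N(m_j, (1/5) I)."""
    rng = random.Random(seed)
    means = [[rng.gauss(e, 1.0) for e in eta] for _ in range(n_means)]
    sd = (1.0 / 5.0) ** 0.5  # standard deviation giving variance 1/5
    return [[rng.gauss(mj, sd) for mj in rng.choice(means)]
            for _ in range(n_obs)]
```

Calling `htf_class_sample(eta1, 100)` and `htf_class_sample(eta2, 100)` with two fixed mean vectors reproduces the two-class, 100-dimensional setup of the second scenario.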
KNN and MDA procedures were selected as competitors because KNN is an algorithm-based procedure that is known to perform well in a variety of settings (see HTF) and MDA is a flexible model-based Gaussian mixture classifier (see Hastie and Tibshirani [@hastie96]). We employ the [knn]{} [@ClassPack] and [mda]{} [@mdaPack] functions, both of which are available freely from the [R]{} software [@Rsoft], to implement the KNN and MDA methods. For KNN, the number of neighbors was set to $k=6$ for data generated from the PSC and $k=25$ for HTF data. These values were deemed to produce the smallest misclassification rate for a few synthetic data sets from both data generating scenarios. For the same reason, with regards to the MDA, the number of components for each class's Gaussian mixture was set at 5. Choosing $k$ in this manner provides an advantage to KNN and MDA when comparing misclassification rates to the PSC. For the PSC, 1000 MCMC iterates were collected after a burn-in of 10,000 and thinning of 100. Convergence was assessed using history plots of the MCMC draws for a few data sets. The out-of-sample misclassification rates averaged over the 100 data sets can be found under each procedure's respective heading in Table \[SimStudyResults2\].

  Data Generating Mechanism   PSC     KNN     MDA
  --------------------------- ------- ------- -------
  PSC                         0.060   0.158   0.639
  HTF                         0.047   0.269   0.369

  : Misclassification rates from the simulation study. Data were generated using the PSC and the method detailed on page 16 of Hastie, Tibshirani and Friedman (HTF) [@HTF:2008]. \[SimStudyResults2\]

The PSC appears to classify the categorical response in the testing data more accurately than KNN and MDA. This holds regardless of the value at which $k$ is fixed. Preliminary studies indicated that the PSC classifier still outperformed KNN and MDA (though not as drastically) even with correlated and non-Gaussian non-signal predictors.
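For reference, the KNN baseline amounts to majority voting among the $k$ nearest training points; a minimal pure-Python analogue (ours, not the R [knn]{} implementation) is sketched below.

```python
from collections import Counter

def knn_predict(train_x, train_y, x, k):
    """Classify x by majority vote among its k nearest training points
    under squared Euclidean distance."""
    order = sorted(range(len(train_x)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(train_x[i], x)))
    votes = Counter(train_y[i] for i in order[:k])
    return votes.most_common(1)[0][0]

def misclassification_rate(train_x, train_y, test_x, test_y, k):
    """Fraction of test points whose KNN prediction disagrees with the label."""
    errs = sum(knn_predict(train_x, train_y, x, k) != y
               for x, y in zip(test_x, test_y))
    return errs / len(test_y)
```

Sweeping `k` over a grid and keeping the value with the smallest rate mirrors how the competitor settings were tuned above.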
Illustration on Real Datasets ----------------------------- We now apply the PSC to two real data sets, both of which are readily available in [R]{}. The first consists of two classes and 7 quantitative predictors. The predictors are physiological measurements taken on Pima Indian women with the goal of predicting the presence or absence of diabetes. To these 7 predictors we add another 93 consisting of independent standard Gaussian draws. The dataset is split randomly into training and testing sections. The training section consists of 200 women, 68 of whom are diagnosed with diabetes, while the testing section consists of 332 women, 109 of whom are diagnosed with diabetes. The second data set we consider is the so-called iris data set. Here the response consists of three classes, each representing a specific flower species. The four predictors are length and width measurements corresponding to the sepal and petal of a flower. The goal is to use these four measurements to predict the flower species. To the four predictors we add 96 consisting of independent standard Gaussian draws. The data set consists of 150 observations, with each flower species having 50. Fifty observations were randomly selected to comprise the testing data while the remaining 100 were used for the training data set. To both data sets we applied the PSC in addition to the KNN and MDA classifiers. For the KNN classifier, we chose the value of $k$ that minimized the misclassification rate, which turned out to be $k=5$ for the iris data and $k=24$ for the diabetes data. Similarly, the number of components comprising the Gaussian mixtures of the MDA classifier was selected on the basis of minimizing the misclassification rate. The number of components turned out to be 5 for the iris data and 7 for the diabetes data. Note that choosing $k$ in this manner gives an unfair advantage to KNN and MDA relative to PSC, which does not use the test data at all in training.
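The data preparation described above, padding the real predictors with standard-Gaussian noise columns and splitting at random, can be sketched as follows (illustrative Python, function names ours).

```python
import random

def add_noise_predictors(X, target_dim, seed=None):
    """Pad each row of X with iid standard-Gaussian noise columns
    until it has target_dim predictors."""
    rng = random.Random(seed)
    return [row + [rng.gauss(0.0, 1.0) for _ in range(target_dim - len(row))]
            for row in X]

def train_test_split(X, y, n_train, seed=None):
    """Randomly split (X, y) into training and testing portions."""
    rng = random.Random(seed)
    idx = list(range(len(X)))
    rng.shuffle(idx)
    tr, te = idx[:n_train], idx[n_train:]
    return ([X[i] for i in tr], [y[i] for i in tr],
            [X[i] for i in te], [y[i] for i in te])
```

For the diabetes data this corresponds to `target_dim=100` with 7 real predictors; for iris, `target_dim=100` with 4.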
We fit the PSC to both data sets by collecting 1000 MCMC iterates after a burn-in of 10,000 and thinning of 100. Convergence was monitored using trace plots from two chains that were started at different values. Prior to analysis, the variables were standardized. The misclassification rates can be found in Table \[RealDataResults\].

  Data set    PSC    KNN    MDA
  ---------- ------ ------ ------
  Iris        0.22   0.55   0.51
  Diabetes    0.26   0.29   0.37

  : Misclassification rates for the iris and diabetes data sets. \[RealDataResults\]

It appears that the PSC was able to classify the testing data response in the presence of a high-dimensional feature space much more accurately than either KNN or MDA. Conclusions =========== This article has proposed a novel methodology for nonparametric Bayesian learning of an affine subspace underlying high-dimensional data. Clearly, massive-dimensional data are now commonplace and there is a need for flexible methods for dimensionality reduction that avoid parametric assumptions. In this context, the Bayesian paradigm has substantial advantages over commonly used machine learning, computer science and frequentist statistical methods that obtain a point estimate of the subspace or manifold which the data are concentrated near. As there is unavoidably substantial uncertainty in subspace or manifold learning, it is important to account fully for this uncertainty to avoid misleading inferences and to obtain appropriate measures of uncertainty when estimating densities, performing predictions and identifying important predictors. We accomplish this in a Bayesian manner by placing a probability model over the space of affine subspaces, while developing a simple and efficient computational algorithm relying on Gibbs sampling to estimate the subspace and its dimension, or to model-average over subspaces of different dimension. The model is proved to be highly flexible, and posterior consistency is achieved under appropriate prior choices.
The proposed model and computational algorithm should be broadly useful beyond the density estimation and classification settings we have considered. A potential alternative to our approach mentioned in Section 1 is to use a mixture of sparse factor models to build a tangent space approximation to the manifold the data are concentrated near. Sparse Bayesian normal linear factor models are a successful approach for dimensionality reduction (Carvalho *et al*. [@carvalho]; Bhattacharya and Dunson [@abdd]), but make restrictive normality assumptions and are limited in their ability to reduce dimensionality by linearity assumptions. By mixing factor models, one can certainly obtain a more flexible characterization, but challenging computational issues arise in accommodating uncertainty in the number of factors and the locations of zeros in the factor loadings matrix for each of the multivariate Gaussian components in the mixture. Indeed, even in modest dimensions for normal linear factor models, Lopes and West [@lopes] encountered difficulties in efficiently inferring the number of factors, and recommended using a reversible jump MCMC algorithm that required a preliminary MCMC run for each choice of the number of factors. For mixtures of factor models, one obtains an extremely rich but over-parametrized black box. We propose a fundamentally new alternative that directly specifies an identifiable model based on geometry, while also developing an efficient Gibbs sampler that can infer the dimension of the subspace automatically without RJMCMC. Although our initial focus was on data in a Euclidean space, related models can be developed for non-Euclidean manifold data, as we will explore in ongoing work. [**Acknowledgements**]{}: This work was partially supported by Award Number R01ES017436 from the National Institute of Environmental Health Sciences.
The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Environmental Health Sciences or the National Institutes of Health. Proofs {#a1} ====== As a reminder, in what follows $B_{r,m}$ refers to the set $\{ x \in \Re^m \colon \|x\| \le r \}$. For a subset ${\mathcal}{D}$ of densities and ${\epsilon}>0$, the $L_1$-metric entropy $N({\epsilon},{\mathcal}{D})$ is defined as the logarithm of the minimum number of $\epsilon$-sized (or smaller) $L_1$ subsets needed to cover ${\mathcal}{D}$. Proof of Lemma --------------- Any density $f$ in ${\mathcal}{D}_n^{\epsilon}$ can be expressed as $\int_{\Re^m} N_m(\nu,{\Sigma}) Q(d\nu)$ with ${\Sigma}= U_0 {\Sigma}_0 U_0' + {\sigma}_0^2(I_m - U_0 U_0')$, $Q = P \circ \phi^{-1}$, $\phi(x) = U_0x + {\theta}$, and $(k,U_0,{\theta},{\Sigma}_0,{\sigma},P)\in H_n^{\epsilon}$. The assumptions on $\pi_2$ and $H_n^{\epsilon}$ imply that ${\Sigma}$ has all its eigenvalues in $[h_n^2,A^2]$. We also claim that $Q(B_{{\sqrt}{2}r_n,m}^c) < {\epsilon}$. To see this, note that $\|\phi(\mu)\|^2 = \|\mu\|^2 + \|{\theta}\|^2 \le 2r_n^2$ whenever $\|\mu\|\le r_n$ and $\|{\theta}\|\le r_n$. Hence $B_{r_n,k} \subseteq \phi^{-1}(B_{{\sqrt}{2}r_n,m})$ if $\|{\theta}\| \le r_n$. Therefore ${\epsilon}> P(B_{r_n,k}^c) \ge P\big( (\phi^{-1}(B_{{\sqrt}{2}r_n,m}))^c \big) = P\circ\phi^{-1}\big(B_{{\sqrt}{2}r_n,m}^c \big)$ for all $(P,{\theta}) \in H_n^{\epsilon}$. Hence the claim follows. Therefore $${\mathcal}{D}_n^{\epsilon}\subseteq {\tilde}{{\mathcal}{D}}_n^{\epsilon}= \{ f = \int N_m(\nu,{\Sigma}) Q(d\nu): Q(B_{{\sqrt}{2}r_n,m}^c) < {\epsilon}, \ {\lambda}({\Sigma}) \in [h_n^2,A^2] \},$$ ${\lambda}({\Sigma})$ denoting the eigenvalues of ${\Sigma}$. From Lemma 1 of Wu and Ghosal [@WuGhosal2010], it follows that $N({\epsilon},{\tilde}{{\mathcal}{D}}_n^{\epsilon}) \le C (r_n/h_n)^m$ and this completes the proof.
Proof of Lemma --------------- The proof is similar in spirit to the proof of Lemma 2 in Wu and Ghosal [@WuGhosal2010]. Throughout the proof, $C$ will denote a constant independent of $n$. Given $k, U, {\theta}, \bm{{\sigma}}$ and ${\underline}{\mu}_n = (\mu_1, \ldots, \mu_n)$ iid from $P$, the $X_i \sim N_m\big( \phi(\mu_i), {\Sigma}\big)$, $i=1,\ldots, n$, are independent, and independent of $P$. Hence $$Pr\big( P(B_{r_n,k}^c) \ge {\epsilon}\big| k,{\mathbf}{X}_n \big) = E\big( Pr\big( P(B_{r_n,k}^c) \ge {\epsilon}\big| k,{\underline}{\mu}_n \big) \big| k,{\mathbf}{X}_n \big).$$ From [@ferguson], given ${\underline}{\mu}_n$ and $k$, for $A\subseteq \Re^k$, $P(A) \sim Beta\big(w_kP_k(A) + N(A), w_k(1-P_k(A)) + n - N(A) \big)$ where $N(A) = \sum_{i=1}^n I_{\{\mu_i \in A\}}$. Hence using the Markov inequality, $$Pr\big( P(B_{r_n,k}^c) \ge {\epsilon}\big| k,{\underline}{\mu}_n \big) \le \frac{w_k P_k(B_{r_n,k}^c) + N(B_{r_n,k}^c)}{{\epsilon}(n+w_k)}.$$ Therefore $$\begin{aligned} E\big( Pr\big( P(B_{r_n,k}^c) \ge {\epsilon}\big| k,{\underline}{\mu}_n \big) \big| k, {\mathbf}{X}_n \big) \le \frac{w_k P_k(B_{r_n,k}^c)}{{\epsilon}(n+w_k)} + \frac{1}{{\epsilon}(n+w_k)} \sum_{i=1}^n Pr\big( \mu_i \in B_{r_n,k}^c \big| k, {\mathbf}{X}_n \big).\end{aligned}$$ Denote the above two terms as $T_1$ and $T_2$. Then $E_{f_t}T_1 = T_1 {\longrightarrow}0$ as $r_n{\rightarrow}\infty$. Under the marginal prior given $k$, ${\underline}{\mu}_n$ has an exchangeable distribution $\pi_n({\underline}{\mu}_n|k)$ on $(\Re^k)^n$ (see [@ferguson]). Also since ${\mathbf}{X}_n$ are iid given $f_t$, it follows that $$E_{f_t}(T_2) = \frac{n}{{\epsilon}(n+w_k)} E_{f_t} \big\{ Pr\big( \mu_1 \in B_{r_n,k}^c \big| k, {\mathbf}{X}_n \big) \big\}.$$ Now $$\begin{aligned} Pr\big( \mu_1 \in B_{r_n,k}^c \big| k, {\mathbf}{X}_n \big) \le Pr\big( \mu_1 \in B_{r_n,k}^c, \min(\bm{{\sigma}}) > h_n \big| k, {\mathbf}{X}_n \big) + \\ Pr\big( \min(\bm{{\sigma}}) \le h_n \big| k, {\mathbf}{X}_n \big).\end{aligned}$$ The last term above converges to $0$ a.s.
by the assumption on $\pi_2$. Hence to complete the proof, it remains to show that $$E_{f_t} \big\{ Pr\big( \mu_1 \in B_{r_n,k}^c, \min(\bm{{\sigma}}) > h_n \big| k, {\mathbf}{X}_n \big) \big\} {\longrightarrow}0 {\text}{ as } n{\rightarrow}\infty.$$ To compute the probability above, we denote by $\pi_{1n}(\mu_1|\mu_{-1},k)$ the conditional distribution of $\mu_1$ given $\mu_{-1} = (\mu_2,\ldots,\mu_n)$, and by $\pi_{-1n}(\mu_{-1}|k)$ the marginal distribution of $\mu_{-1}$ under the joint $\pi_n$. Then $$Pr\big( \mu_1 \in B_{r_n,k}^c, \min(\bm{{\sigma}}) > h_n \big| k, {\mathbf}{X}_n \big) = A({\mathbf}{X}_n)/B({\mathbf}{X}_n)$$ where $A({\mathbf}{X}_n) =$ $$\begin{aligned} \mathop{\int}_{\min(\bm{{\sigma}})>h_n,\|\mu_1\|>r_n} \prod_{i=1}^n N_m(X_i; \phi(\mu_i), {\Sigma}) d\pi_{1n}(\mu_1|\mu_{-1},k) d\pi_{-1n}(\mu_{-1}|k)d\pi_1(U_0,{\theta}|k)d\pi_2(\bm{{\sigma}}|k)\end{aligned}$$ and $B({\mathbf}{X}_n) =$ $$\int \prod_{i=1}^n N_m(X_i; \phi(\mu_i), {\Sigma}) d\pi_{1n}(\mu_1|\mu_{-1},k) d\pi_{-1n}(\mu_{-1}|k)d\pi_1(U_0,{\theta}|k)d\pi_2(\bm{{\sigma}}|k).$$ We use $E_{f_t}\{A({\mathbf}{X}_n)/B({\mathbf}{X}_n)\} \le$ $$\begin{aligned} \label{e11} \mathop{\sup}_{X_1 \in B_{r_n/2,m}}\frac{A({\mathbf}{X}_n)}{B({\mathbf}{X}_n)} \int_{B_{r_n/2,m}}f_t(x)dx + \int_{ B_{r_n/2,m}^c} f_t(x)dx\end{aligned}$$ and upper bound the terms above. First we upper bound $A({\mathbf}{X}_n)$ when $\|X_1\| \le r_n/2$.
We express $N_m(X_1; \phi(\mu_1), {\Sigma})$ as $$N_k(U_0'X_1;\mu_1, {\Sigma}_0)\, (2\pi{\sigma}^2)^{-\frac{m-k}{2}} \exp\frac{-1}{2{\sigma}^2}(X_1 - {\theta})'(I_m - U_0U_0')(X_1 - {\theta})$$ and note that $\|X_1\|\le r_n/2$, $\|\mu_1\| > r_n$ and $h_n < {\sigma}_j \le A $ $\forall j \le k$ implies $$N_k(U_0'X_1;\mu_1, {\Sigma}_0) \le C h_n^{-k} \exp \frac{-r_n^2}{8A^2}.$$ Therefore $A({\mathbf}{X}_n) \le$ $$\begin{aligned} \label{e9} \begin{split} C h_n^{-k} \exp\frac{-r_n^2}{8A^2} \int ({\sigma}^{-2})^{\frac{m-k}{2}} \exp\frac{-1}{2{\sigma}^2}(X_1 - {\theta})'(I_m - U_0U_0')(X_1 - {\theta})\\ \prod_{i=2}^n N_m(X_i;\phi(\mu_i),{\Sigma}) d\pi_{-1n}(\mu_{-1}|k)d\pi_1(U_0,{\theta}|k)d\pi_2(\bm{{\sigma}}|k). \end{split}\end{aligned}$$ Next we lower bound $B({\mathbf}{X}_n)$ when $X_1 \in B_{r_n/2,m}$. The conditional distribution $\pi_{1n}$ can be expressed as $ \frac{1}{w_k + n-1}\sum_{i=2}^n {\delta}_{\mu_i} + \frac{w_k}{w_k + n-1} P_k $ (see [@ferguson]). Hence $B({\mathbf}{X}_n) \ge$ $$\frac{w_k}{w_k + n-1} \int \prod_{i=1}^n N_m(X_i; \phi(\mu_i), {\Sigma}) p_k(\mu_1)d\mu_1 d\pi_{-1n}(\mu_{-1}|k)d\pi_1(U_0,{\theta}|k)d\pi_2(\bm{{\sigma}}|k).$$ Now $$\int N_k( U_0'X_1;\mu_1, {\Sigma}_0) p_k(\mu_1)d\mu_1 \ge \int_S N_k( U_0'X_1;\mu_1, {\Sigma}_0) p_k(\mu_1)d\mu_1$$ where $$S = \{\mu_1: \sum_{l=1}^k {\sigma}_l^{-2} (U_0'X_1 - \mu_1)^2_l \le 1 \}.$$ For $\mu_1 \in S$, $N_k\big( U_0'X_1;\mu_1, {\Sigma}_0\big) \ge \prod_1^k {\sigma}_j^{-1} e^{-1/2}$ and $p_k(\mu_1) \ge {\delta}_{kn}$ with ${\delta}_{kn}$ defined in the Lemma. Therefore $$\int_S N_k( U_0'X_1;\mu_1, {\Sigma}_0) p_k(\mu_1)d\mu_1 \ge C{\delta}_{kn}\prod_1^k {\sigma}_j^{-1}\int_S d\mu_1 = C{\delta}_{kn}$$ and hence when $\|X_1\| \le r_n/2$, $B({\mathbf}{X}_n) \ge$ $$\begin{aligned} \label{e10} \begin{split} C n^{-1} {\delta}_{kn} \int ({\sigma}^{-2})^{\frac{m-k}{2}} \exp\frac{-1}{2{\sigma}^2}(X_1 - {\theta})'(I_m - U_0U_0')(X_1 - {\theta}) \prod_{i=2}^n N_m(X_i; \phi(\mu_i), {\Sigma}) \\ d\pi_{-1n}(\mu_{-1}|k) d\pi_1(U_0,{\theta}|k) d\pi_2(\bm{{\sigma}}|k).
\end{split}\end{aligned}$$ Combining \[e9\] and \[e10\], we get $$\sup_{\|X_1\| \le r_n/2} \frac{A({\mathbf}{X}_n)}{B({\mathbf}{X}_n)} \le C n {\delta}_{kn}^{-1} h_n^{-k} \exp(-r_n^2/8A^2).$$ Plugging this into \[e11\], we conclude that $E_{f_t}\{A({\mathbf}{X}_n)/B({\mathbf}{X}_n)\} \le$ $$\begin{aligned} \label{e15} C n {\delta}_{kn}^{-1} h_n^{-k} \exp(-r_n^2/8A^2) + Pr_{f_t}(\|X\| > r_n/2),\end{aligned}$$ which converges to zero by assumption. Under assumption [**B1’**]{} and $\sum r_n^{-2(1+{\alpha})m} < \infty$, the sequence in \[e15\] has a finite sum, which results in the stronger conclusion. This completes the proof. Proof of Corollary ------------------- By Theorem \[t4\], to show a.s. strong posterior consistency, we need positive sequences $r_n$ and $h_n$ which satisfy $$\begin{aligned} n^{-1}(r_n/h_n)^m {\longrightarrow}0, \ \sum r_n^{-2(1+{\alpha})m} < \infty, {\text}{ and} \label{e21}\\ \sum_{n=1}^\infty n{\delta}_{kn}^{-1}h_n^{-k}\exp(-r_n^2/8A^2) < \infty, \label{e22}\end{aligned}$$ and such that the prior probabilities $Pr(\|{\theta}\|> r_n |k)$ and $Pr(\min(\bm{{\sigma}}) < h_n |k)$ decay exponentially. Set $r_n = n^{1/a}$ and $h_n = n^{-1/b}$. Then \[e21\] is clearly satisfied. By the choice of $p_k$, $k\ge 1$, it is easy to check that ${\delta}_{kn} \ge C \exp\frac{-r_n^2}{2\tau_k^2}$, with $C$ denoting positive constants independent of $n$ all throughout. Then \[e22\] is clearly satisfied because of the assumption $\tau_k^2 > 4A^2$. Because $\|{\theta}\|^a$ follows a Gamma distribution given $k$, $k \le m-1$, the probability $Pr(\|{\theta}\|> r_n |k)$ can be upper bounded by $C\exp(-{\lambda}r_n^a)$ for some ${\lambda}>0$. This decays exponentially with $r_n = n^{1/a}$. Lastly, it remains to check that $Pr(\min(\bm{{\sigma}}) < h_n |k)$ decays exponentially. When the coordinates of $\bm{{\sigma}}$ are all equal, the probability can be upper bounded by $C\exp(-{\lambda}h_n^{-b})$ for some ${\lambda}>0$. This decays exponentially with $h_n = n^{-1/b}$.
In case the coordinates are iid, the probability can be upper bounded by $Cn\exp(-{\lambda}h_n^{-b})$, which also decays exponentially by the choice of $h_n$. Proof of Theorem ----------------- Simplify $f_1$ as $$\begin{aligned} \label{e19} &f_1(R,{\theta}) = f_1(\bar R, \bar{\theta}) + \|R - \bar R\|^2 + \|{\theta}- \bar{\theta}\|^2 \notag \\ &= f_1(\bar R, \bar{\theta}) + \|R - \bar R\|^2 + \|R\bar{\theta}\|^2 + \|(I-R)({\theta}- \bar{\theta})\|^2 \notag \\ &\ge f_1(\bar R, \bar{\theta}) + \|R - \bar R\|^2 + \|R\bar{\theta}\|^2.\end{aligned}$$ Equality holds in \[e19\] iff ${\theta}= (I-R)\bar{\theta}$. Then $$f_1(R,{\theta}) = k - {\mathrm}{Tr}\{(2\bar R - \bar{\theta}\bar{\theta}')R\} + C$$ where $k = $ Rank($R$) and $C$ denotes something not depending on $R,{\theta}$. From the proof of Proposition 11.1 of [@bhatta], given $k$ one can show that the value of $R$ minimizing $f_1$ above is $\sum_{j=1}^k U_j U_j'$, and the minimizer is unique iff ${\lambda}_k > {\lambda}_{k+1}$. Then $$f_1(R,{\theta}) = k - \sum_{j=1}^k{\lambda}_j + C.$$ It remains to find the $k$ minimizing the above risk, which is done as stated in the theorem. This completes the proof. Proof of Theorem ----------------- The minimizer $w=\bar w$ is obvious. Then $$f_2(U,\bar w) = \|U - \bar U\|^2 + C = k_1 - 2{\mathrm}{Tr}\bar U_{(k_1)}'U_{(k_1)} + C,$$ $k_1$ being the rank of $U$ and $C$ symbolizing any constant not depending on $U$. For $k_1$ fixed, it is proved in Theorem 10.2 of [@bhatta] that the minimizer $U$ is as in the theorem. It is unique iff $\bar U_{(k_1)}'\bar U_{(k_1)}$ is invertible. Plugging in that $U$, the risk function becomes, as a function of $k_1$, $$f_3(k_1) = k_1 - 2{\mathrm}{Tr}(\bar U_{(k_1)}'\bar U_{(k_1)})^{1/2}.$$ We find the value of $k_1$ between $1$ and $m$ minimizing $f_3$ and set $k=k_1-1$.
--- abstract: | We study the low regularity well-posedness of the 1-dimensional cubic nonlinear fractional Schrödinger equations with Lévy indices $1 < {\alpha}< 2$. We consider both non-periodic and periodic cases, and prove that the Cauchy problems are locally well-posed in $H^s$ for $s \geq \frac {2-{\alpha}}4$. This is shown via a trilinear estimate in Bourgain’s $X^{s,b}$ space. We also show that non-periodic equations are ill-posed in $H^s$ for $\frac {2 - 3{\alpha}}{4({\alpha}+ 1)} < s < \frac {2-{\alpha}}4$ in the sense that the flow map is not locally uniformly continuous. address: - 'Department of Mathematics, and Institute of Pure and Applied Mathematics, Chonbuk National University, Jeonju 561-756, Republic of Korea' - 'Department of Mathematical Sciences, Ulsan National Institute of Science and Technology, Ulsan, 689-798, Republic of Korea' - 'Department of Mathematical Sciences, Korea Advanced Institute of Science and Technology, Daejeon 305-701, Republic of Korea' - 'Department of Mathematical Sciences, Seoul National University, Seoul 151-747, Republic of Korea' author: - Yonggeun Cho - Gyeongha Hwang - Soonsik Kwon - Sanghyuk Lee title: | Well-posedness and Ill-posedness\ for the cubic fractional Schrödinger equations --- [^1] Introduction ============ We consider the Cauchy problem for the one-dimensional fractional Schrödinger equations with cubic nonlinearity in periodic and non-periodic settings: $$\begin{aligned} \label{eq} \left\{\begin{array}{l} i\partial_tu + (-\Delta)^{{\alpha}/2}u = \gamma|u|^2u,\\ u(0,\cdot) = \phi \in H^s(\widehat Z), \end{array} \right.\end{aligned}$$ where $\widehat Z = \mathbb R$ or $\mathbb T$, $\alpha\in (1,2)$ is the Lévy index, $\gamma \in \mathbb R \setminus \{0\}$ and $s \in \mathbb R$. In this paper we are concerned with well-posedness of the Cauchy problem in low regularity Sobolev spaces.
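Before proceeding, a one-line computation (ours, for orientation) quantifies the dispersion: the curvature of the dispersion relation $h(\xi) = |\xi|^{{\alpha}}$ decays at high frequencies for $1 < {\alpha}< 2$, whereas it is constant for the classical case ${\alpha}= 2$.

```latex
% Curvature of the dispersion relation h(xi) = |xi|^alpha (illustration):
h(\xi) = |\xi|^{\alpha}
\quad\Longrightarrow\quad
h''(\xi) = \alpha(\alpha - 1)\,|\xi|^{\alpha - 2}
\;\xrightarrow[\;|\xi|\to\infty\;]{}\; 0
\qquad (1 < \alpha < 2).
```

This flattening of the characteristic curve at high frequencies is the source of the weaker dispersion discussed below.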
As the linear part generalizes the usual second-order Schrödinger equation, our interest is to investigate how the weaker dispersion affects dynamics and well-posedness. The fractional Schrödinger equation was introduced in the theory of fractional quantum mechanics, where the Feynman path integral approach is generalized to $\alpha$-stable Lévy processes [@la1]. It also appears in water wave models (for example, see [@iopu] and references therein). In what follows $Z$ denotes $ \mathbb R$ (non-periodic) or $\mathbb Z$ (periodic). Accordingly, the Sobolev space $H^s(\widehat Z)$ is defined by $$H^s(\widehat Z) = \big\{f \in \mathcal S' : \|f\|_{H^s(\widehat Z)} := \|(1 + |\xi|^2)^\frac s2 \mathcal{F} f \,\|_{L^2(Z)} < \infty\big\},$$ where $L^2( Z)$ denotes $L^2(\mathbb R)$ or $\ell^2(\mathbb Z)$ and $\mathcal{F} f$ is the Fourier transform or Fourier coefficient of $f$ given by $\mathcal{F} f(\xi) = \int_{\widehat Z} e^{- i x \xi} f dx$ for $\xi \in Z$. We define the linear propagator $U(t)$ by setting $$U(t)\phi = e^{i(-\Delta)^{{\alpha}/2}t}\phi = \mathcal{F}^{-1} e^{i|\xi|^{{\alpha}}t} \mathcal{F}\phi,$$ where $\mathcal F^{-1}$ denotes the inverse Fourier transform. Then, by Duhamel’s formula the equation is written as an integral equation $$\label{integral} u = U(t)\phi - i\gamma \int_0^t U(t-t')(|u|^2u(t'))dt'.$$ ### Well-posedness {#well-posedness .unnumbered} If $s > 1/2$, by the Sobolev embedding and the energy method one can easily show the local well-posedness in $H^s$ for $0 < {\alpha}< 2$ in both periodic and non-periodic cases. The equation also has the mass and energy conservation laws: $$M(u)= \int |u|^2, \qquad E(u) = \frac 12 \int | |\nabla|^{{\alpha}/2}u|^2 + \gamma\frac 14 \int |u|^4.$$ Thus, for $s \ge {{\alpha}}/2$ and $s > 1/2$, the global well-posedness in $H^s$ follows from the conservation laws. (For instance see [@chho; @chho2].) For the less regular initial data, i.e.
$s\le 1/2$, particularly in the non-periodic case, a plausible approach may be to use the Strichartz estimate for $U(t)$. In fact, it is known that the estimate $$\begin{aligned} \label{str} \||\nabla|^{-\frac{2-\alpha}{q}}U(t)\phi\|_{L^q_tL^r_x(\mathbb R \times \mathbb R)} \lesssim \|\phi\|_{L^2_x}\end{aligned}$$ holds for $2/q + 1/r = 1/2,\; 2\le q, r \le \infty$ (see [@cox]). However, due to weak dispersion the estimate carries a derivative loss of order $2-{\alpha}$ unless one imposes additional assumptions on $\phi$ ([@chkl1; @cholee]). This makes it difficult, for general data, to use the usual iteration argument which relies on \[str\]. To get around the shortcoming of Strichartz estimates we use Bourgain’s $X^{s,b}$ space, which has been widely used in the studies of dispersive equations in both non-periodic and periodic settings. For the fractional Schrödinger equation, $X_{\widehat Z}^{s,b}$ is defined by $$X_{\widehat Z}^{s,b} = \big\{\varphi \in \mathcal S' : \|\varphi\|_{X_{\widehat Z}^{s,b}} := \|\langle \xi \rangle^s \langle \tau - |\xi|^{\alpha}\rangle^b \widehat \varphi (\tau, \xi) \|_{L^2(\mathbb R \times Z)} < \infty\big\},$$ where $\widehat \varphi (\tau, \xi)$ is the Fourier transform of $\varphi$ with respect to the time and space variables. Here $\langle \cdot \rangle$ denotes $1 + |\cdot|$. For the standard iteration argument, the main step is to show the trilinear estimate in terms of $X^{s,b}$ spaces: $$\label{trili} \| uvw\|_{X^{s,b-1}_{\widehat Z}} \lesssim \|u\|_{X^{s,b}_{\widehat Z}}\|v\|_{X^{s,b}_{\widehat Z}} \|w\|_{X^{s,b}_{\widehat Z}}.$$ We obtain this estimate by adapting the dyadic method of Tao [@tao], in which multilinear estimates in weighted $L^2$ spaces are systematically studied. The argument applies similarly to both non-periodic and periodic cases. The following is our local well-posedness result. \[main1\] For $1 < {\alpha}<2$, the Cauchy problem is locally well-posed in $H^s(\widehat Z)$, if $s \geq \frac {2 - {\alpha}}4$.
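As a concrete illustration of the linear propagator $U(t) = \mathcal F^{-1} e^{i|\xi|^{{\alpha}} t} \mathcal F$ in the periodic case, the following pure-Python sketch (ours, using a naive DFT on a small grid, not the authors' code) applies the Fourier multiplier directly; all names are illustrative.

```python
import cmath

def dft(f):
    """Naive discrete Fourier transform (O(n^2), fine for small grids)."""
    n = len(f)
    return [sum(f[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(F):
    """Inverse DFT with 1/n normalization."""
    n = len(F)
    return [sum(F[k] * cmath.exp(2j * cmath.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def propagate(phi, t, alpha):
    """Apply U(t)phi = F^{-1} e^{i|xi|^alpha t} F phi on a periodic grid,
    with integer frequencies taken in a symmetric range."""
    n = len(phi)
    F = dft(phi)
    xis = [k if k <= n // 2 else k - n for k in range(n)]
    F = [Fk * cmath.exp(1j * abs(xi) ** alpha * t) for Fk, xi in zip(F, xis)]
    return idft(F)
```

Since the multiplier $e^{i|\xi|^{{\alpha}}t}$ has modulus one, $U(t)$ is unitary, so the sketch conserves the discrete mass $\sum |u|^2$, mirroring the conservation of $M(u)$ noted above.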
Recently, for the periodic case, Demirbas, Erdoğan and Tzirakis [@det] showed that the equation is locally well-posed for $s > \frac {2-{\alpha}}{4}$ and globally well-posed for $s > \frac {5{\alpha}+ 1}{12}$. Our result gives local well-posedness at the missing endpoint $s=\frac {2-{\alpha}}{4}$. The regularity threshold $ s= \frac{2-{\alpha}}{4}$ is optimal in the sense that below it we do not expect to solve the equation via the contraction mapping principle. Firstly, the estimate fails for $s<\frac{2-{\alpha}}{4} $ due to the resonant high–high–high to high frequency interaction. Compared to the usual Schrödinger equation, the curvature of the characteristic curve is smaller (of order $(\text{frequency})^{{\alpha}-2}$), so such resonant interactions are stronger, which raises the threshold regularity. See the counter-example in Section 4. In [@gh], the authors claimed that the equation is globally well-posed for $\phi\in L^2$. But Theorem \[ill\] below shows that their result is incorrect. Their proof is based on a trilinear estimate, namely \[trili\] with $s=0$ ([@gh Theorem 3.2]), which is not true. ### Ill-posedness {#ill-posedness .unnumbered} Now we consider ill-posedness in the non-periodic setting. Following Christ, Colliander, and Tao [@cct], we approximate the fractional equation by the cubic NLS near $(N,N^{{\alpha}})$ in Fourier space via Taylor expansion of the phase function. This allows us to transfer an ill-posedness result for NLS to the fractional equation. A similar trick was also used for the fifth-order modified KdV equation [@kwon]. The following is our second result. \[ill\] Let $\frac {2 - 3{\alpha}}{4({\alpha}+ 1)} < s < \frac {2 - {\alpha}}4$. Then the solution map of the initial value problem fails to be locally uniformly continuous on $C_TH^s(\mathbb R)$ for any $T>0$.
More precisely, for $0 < \delta \ll {\varepsilon}\ll 1$ and $T > 0$ arbitrary, there are two solutions $u_1, u_2$ to with initial data $\phi_1, \phi_2$ such that $$\begin{aligned} \label{ill-1} &\|\phi_1\|_{H^s},\|\phi_2\|_{H^s} \lesssim {\varepsilon}\,, \\ \label{ill-2} &\|\phi_1 - \phi_2\|_{H^s} \lesssim \delta\,, \\ \label{ill-3} \sup_{0 \leq t \leq T}&\|u_1(t) - u_2(t)\|_{H^s} \gtrsim {\varepsilon}.\end{aligned}$$ In view of the counterexample to the trilinear estimate, it seems natural to expect a similar ill-posedness result for the periodic equation. However, it is not so simple to construct a counterexample because the frequency supports are distributed in a wide region of length $N^{\frac{2-{\alpha}}{2}}$. Currently we are not able to prove ill-posedness[^2]. ### Organization of the paper {#organization-of-the-paper .unnumbered} The paper is organized as follows. In section 2, we introduce notations and recall previously known estimates which we need in the subsequent sections. In section 3, bilinear estimates in $X_{\widehat Z}^{s,b}$ space are established. Finally, we prove Theorem \[main1\] in section 4 and Theorem \[ill\] in section 5. Notations and Preliminaries =========================== We will use the same notations as in [@tao]. Recall that $Z$ denotes $\mathbb R$ for the non-periodic case and $\mathbb Z$ for the periodic case. For any integer $k \ge 2$, let ${\Gamma}_k(\mathbb R \times Z)$ denote the hyperplane $${\Gamma}_k(\mathbb R \times Z) := \{\zeta = (\zeta_1, \cdots, \zeta_k) \in (\mathbb R \times Z)^{k} : \zeta_1 + \cdots + \zeta_k = 0\}$$ with $$\int_{{\Gamma}_k(\mathbb R \times Z)} f:= \int_{(\mathbb R \times Z)^{k-1}} f(\zeta_1, \cdots, \zeta_{k-1}, -\zeta_1 - \cdots - \zeta_{k-1}) d\zeta_1 \cdots d\zeta_{k-1},$$ where $d\zeta_j$ is the product of Lebesgue and the counting measure for the periodic case, and the Lebesgue measure on $\mathbb R^2$ for the non-periodic case.
Note that the integral is symmetric under permutations of $\zeta_j$. Let us define a $[k; \mathbb R \times Z]$-multiplier to be any function $m : {\Gamma}_k(\mathbb R \times Z) \rightarrow \mathbb{C}$. When $m$ is a $[k; \mathbb R \times Z]$-multiplier, the norm $\|m\|_{[k;\mathbb R \times Z]}$ is defined to be the best constant so that the inequality $$|\int_{{\Gamma}_k(\mathbb R \times Z)} m(\zeta) \prod_{j=1}^k f_j(\zeta_j)| \le \|m\|_{[k;\mathbb R \times Z]} \prod_{j=1}^k \|f_j\|_{L^2(\mathbb R \times Z)}$$ holds for all test functions $f_j$ on $\mathbb R \times Z$. Here we recall some of the results about $[k; \mathbb R \times Z]$-multipliers from [@tao], which will be used later. \[compa\] If $m$ and $M$ are $[k; \mathbb R \times Z]$-multipliers, and $|m(\zeta)| \leq M(\zeta)$ for all $\zeta \in {\Gamma}_k(\mathbb R \times Z)$, then $\|m\|_{[k; \mathbb R \times Z]} \leq \|M\|_{[k; \mathbb R \times Z]}$. Also, if $m$ is a $[k; \mathbb R \times Z]$-multiplier, and $g_1, \cdots, g_k$ are functions from $\mathbb R \times Z$ to $\mathbb{R}$, then $$\big\|m(\zeta)\prod_{j=1}^k g_j(\zeta_j)\big\|_{[k; \mathbb R \times Z]} \le \|m\|_{[k; \mathbb R \times Z]} \prod_{j=1}^k \|g_j\|_{\infty}.$$ \[trans\] For $\zeta_0 \in {\Gamma}_k(\mathbb R \times Z)$ and a $[k; \mathbb R \times Z]$-multiplier $m$, we have $$\|m(\zeta)\|_{[k; \mathbb R \times Z]} = \|m(\zeta + \zeta_0)\|_{[k; \mathbb R \times Z]}.$$ From this and Minkowski’s inequality, we thus have the averaging estimate, for any finite measure $\mu$ on ${\Gamma}_k(\mathbb R \times Z)$, $$\|m*\mu\|_{[k; \mathbb R \times Z]} \leq \|m\|_{[k; \mathbb R \times Z]}\|\mu\|_{L^1({\Gamma}_k(\mathbb R \times Z))}.$$ \[compo\] Let $k_1, k_2 \ge 1$, and $m_1, m_2$ be functions defined on $(\mathbb R \times Z)^{k_1}$, $(\mathbb R \times Z)^{k_2}$, respectively.
Then $$\begin{aligned} \|m_1(\zeta_1, \cdots,\zeta_{k_1})m_2(\zeta_{k_1+1},\cdots,\zeta_{k_1+k_2})\|_{[k_1 + k_2;\mathbb R \times Z]} \leq \|m_1\|_{[k_1+1;\mathbb R \times Z]}\|m_2\|_{[k_2+1;\mathbb R \times Z]}.\end{aligned}$$ As a special case, we have the $TT^*$ identity, for all functions $m:(\mathbb R \times Z)^{k} \rightarrow \mathbb{R}$, $$\|m(\zeta_1, \cdots,\zeta_{k})\overline{m(-\zeta_{k+1},\cdots,-\zeta_{2k})}\|_{[2k;\mathbb R \times Z]} \leq \|m(\zeta_1, \cdots,\zeta_{k})\|_{[k+1;\mathbb R \times Z]}^2.$$ Let $m$ be a $[k; \mathbb R \times Z]$-multiplier. For $1 \le j \le k$ we define the $j$-$support$ ${\rm supp}_j(m) \subset \mathbb{R}$ of $m$ to be the set $${\rm supp}_j(m) := \{\eta_j \in \mathbb{R} : {\Gamma}_k(\mathbb{R} \times Z; \zeta_j = \eta_j) \cap {\rm supp}(m) \neq \emptyset\},$$ where ${\Gamma}_k(\mathbb{R} \times Z; \zeta_j = \eta_j) := \{(\zeta_1, \cdots, \zeta_k) \in {\Gamma}_k(\mathbb R \times Z) : \zeta_j = \eta_j\}$. If $J$ is a non-empty subset of $\{1,\cdots,k\}$, we define the set ${\rm supp}_J(m) \subset \mathbb{R}^J$ by $${\rm supp}_J(m) := \prod_{j \in J}{\rm supp}_j(m).$$ \[schur\] Let $J_1, J_2$ be disjoint non-empty subsets of $\{1, \cdots, k\}$ and $A_1, A_2 > 0$. Suppose that $(m_a)_{a \in I}$ is a collection of $[k; \mathbb R \times Z]$-multipliers such that $$\#\{a \in I : \zeta \in {\rm supp}_{J_i}(m_a)\} \le A_i$$ for all $\zeta \in \mathbb{R}^{J_i}$ and $i = 1,2$.
Then $$\big\| \sum_{a \in I} m_a \big\|_{[k;\mathbb R \times Z]} \le (A_1A_2)^{\frac 12} \sup_{a \in I} \|m_a\|_{[k;\mathbb R \times Z]}.$$ In particular, if $m_a$ is non-negative and $A_1, A_2 \sim 1$, then we have $$\big\|\sum_{a \in I}m_a\big\|_{[k;\mathbb R \times Z]} \sim \sup_{a \in I}\|m_a\|_{[k;\mathbb R \times Z]}.$$ We set, for $j=1,2,3,$ $$h_j = \pm |\xi_j|^{\alpha}, \,\,\, \zeta_j = (\tau_j, \xi_j), \,\,\, \lambda_j = \tau_j - h_j(\xi_j).$$ For the $X_Z^{s,b}$ space estimates, we need to consider the $[3;\mathbb R \times Z]$-multiplier $$m(\zeta_1,\zeta_2, \zeta_3) = \frac{\widetilde m(\xi_1,\xi_2,\xi_3)}{\prod_{j=1}^3 \langle \lambda_j \rangle^{b_j}}$$ for a function $\widetilde m$ on $\mathbb R^3$ which will be specified later. By averaging over unit time scale (Lemmas \[compa\] and \[trans\]), one may restrict the multiplier to the region $|\lambda_j| \ge 1$. And we define the function $h : {\Gamma}_3(\mathbb R \times Z) \rightarrow \mathbb{R}$ by setting $$h(\xi_1,\xi_2,\xi_3) := h_1(\xi_1) + h_2(\xi_2) + h_3(\xi_3) = -\lambda_1 - \lambda_2 - \lambda_3,$$ which plays an important role in what follows. Let $N_j, L_j, H$ $(j = 1,2,3)$ be dyadic numbers. 
By dyadic decomposition along the variables $\xi_j, \lambda_j$, as well as the function $h(\xi_1, \xi_2, \xi_3)$, we have $$\begin{aligned} \label{dysum} \|m\|_{[3;\mathbb R \times Z]} \lesssim \Big\|\sum_{N_{max} \gtrsim 1} \sum_H \sum_{L_1,L_2,L_3 \gtrsim 1} \frac {{\mathfrak{m}}(N_1,N_2,N_3)}{L_1^{b_1}L_2^{b_2}L_3^{b_3}}X_{N_1,N_2,N_3;H;L_1,L_2,L_3}\Big\|_{[3;\mathbb R \times Z]},\end{aligned}$$ where $X_{N_1,N_2,N_3;H;L_1,L_2,L_3}$ is the multiplier given by $$X_{N_1,N_2,N_3;H;L_1,L_2,L_3}(\tau,\xi_1, \xi_2, \xi_3) := \chi_{\{|h(\xi_1, \xi_2, \xi_3)| \sim H\}} \prod_{j=1}^3 \chi_{\{|\xi_j|\sim N_j\}}\chi_{\{|\lambda_j| \sim L_j\}}$$ and $${\mathfrak{m}}(N_1,N_2,N_3) := \sup_{|\xi_j| \sim N_j, \forall j = 1,2,3}|\widetilde m(\xi_1,\xi_2,\xi_3)|.$$ From the identities $\xi_1 + \xi_2 + \xi_3 = 0$ and $\lambda_1 + \lambda_2 + \lambda_3 + h(\xi_1, \xi_2, \xi_3) = 0$ on the support of the multiplier, we see that $X_{N_1,N_2,N_3;H;L_1,L_2,L_3}$ vanishes unless $$N_{max} \sim N_{med}\;\;\mbox{and}\;\; L_{max} \sim \max(H, L_{med}).$$ Suppose for the moment that $N_1 \ge N_2 \ge N_3$. Then we have $N_1 \sim N_2 \gtrsim 1$. As $N_1$ ranges over the dyadic numbers, the symbols in the summation in are supported on essentially disjoint regions of the $\xi_1$ and $\xi_2$ spaces. This is true for any permutation of $\{1,2,3\}$. Thus, by Lemma \[schur\] we have $$\begin{aligned} \|m\|_{[3;\mathbb R \times Z]} \lesssim \sup_{N \gtrsim 1}\Big\| &\sum_{N_{max} \sim N_{med} \sim N} \sum_H \sum_{L_{max} \sim \max(H,L_{med})}\\ &\frac {{\mathfrak{m}}(N_1,N_2,N_3)}{L_1^{b_1}L_2^{b_2}L_3^{b_3}}X_{N_1,N_2,N_3;H;L_1,L_2,L_3}\Big\|_{[3;\mathbb R \times Z]}.\end{aligned}$$ Hence, one is led to consider $$\begin{aligned} \label{dyest} \|X_{N_1,N_2,N_3;H;L_1,L_2,L_3}\|_{[3;\mathbb R \times Z]}\end{aligned}$$ in the low modulation case $H \sim L_{max}$ and the high modulation case $L_{max} \sim L_{med} \gg H.$ The following two lemmas give estimates for in each case.
\[hmod\] If $L_{max} \sim L_{med} \gg H$, then $$\begin{aligned} \eqref{dyest} \lesssim L_{min}^\frac 12 \Big\|\mathcal{\chi}_{h(\xi) \sim H} \prod^3_{j=1} \mathcal{\chi}_{|\xi_j| \sim N_j}\Big\|_{[3;\mathbb{R}^{1+d}]} \lesssim L_{min}^\frac 12 |\{ \xi_2 \in Z : |\xi_2| \sim N_{min}\}|^\frac 12.\end{aligned}$$ Let $|E|$ denote the Lebesgue measure or counting measure of any measurable subset $E$ of $Z$. \[lmod\] Let $N_1, N_2, N_3 > 0, L_1 \ge L_2 \ge L_3$. Suppose that $H \sim L_{max}$ and $\xi_1^0, \xi_2^0, \xi_3^0$ satisfy $$|\xi_j^0| \sim N_j \text { for } j = 1,2,3\;\;\text{and}\;\; |\xi_1^0 + \xi_2^0 + \xi_3^0| \ll N_{min}.$$ Then we have $$\eqref{dyest} \lesssim L_3^\frac 12 \big|\{\xi_2 \in Z : |\xi_2 - \xi_2^0| \ll N_{min}; h_2(\xi_2) + h_3(\xi - \xi_2) = \tau + \mathcal{O}(L_2)\}\big|^\frac 12$$ for some $\tau \in \mathbb{R}$ and $\xi \in Z$ with $|\xi + \xi_1^0| \ll N_{min}$. The same statement holds with the roles of the indices 1,2,3 permuted. Bilinear Estimates ================== In order to prove well-posedness for , we show the trilinear estimate (Proposition \[trilinear\] below). For this purpose, we first prove a bilinear estimate for $\|u\overline v\|_{L^2(\mathbb R \times \widehat Z)}$, which automatically gives the estimate for $\|u v\|_{L^2(\mathbb R \times \widehat Z)}$. Since the resonance function is $h(\xi_1, \xi_2, \xi_3) = |\xi_1|^{\alpha}- |\xi_2|^{\alpha}+ |\xi_3|^{\alpha},$ we have\ $|\xi_{max}|^{\alpha-1}|\xi_{min}| \lesssim |h(\xi)| \lesssim |\xi_{max}|^{\alpha}.$ To begin with, we establish estimates for . Here $\langle \cdot \rangle_{Z}$ denotes $|\cdot|$ in the non-periodic case and $1 + |\cdot|$ in the periodic case. So, $|\{\xi \in Z : a \le \xi \le b\}| = O(\langle b-a \rangle_Z)$. \[dyest2\] Let $H, N_1, N_2, N_3, L_1, L_2, L_3$ be dyadic and $h(\xi) = |\xi_1|^{\alpha}- |\xi_2|^{\alpha}+ |\xi_3|^{\alpha}$. Then we have the following.
- If $H \sim L_{max} \sim L_1$ and $N_{1} \sim N_{max}$, - $\eqref{dyest} \lesssim L_{min}^\frac 12 \langle\min(N_{min}^\frac 12, N_{max}^{\frac {1 - {\alpha}}2} L_{med}^\frac 12)\rangle_Z$.\ - If $H \sim L_{max} \sim L_1$ and $N_{2} \sim N_{3} \gg N_{1}$, - $\eqref{dyest} \lesssim L_{min}^\frac 12 \langle \min(N_{min}^\frac 12, N_{max}^{\frac {2 - {\alpha}}2} N_{min}^{-\frac 12} L_{med}^\frac 12)\rangle_Z$.\ - If $H \sim L_{max} \sim L_2$ and $N_{max} \sim N_{min}$, - $\eqref{dyest} \lesssim L_{min}^\frac 12 \langle\min(N_{min}^\frac 12, N_{max}^{\frac {2 - {\alpha}}4} L_{med}^\frac 14)\rangle_Z$.\ - If $H \sim L_{max} \sim L_2$ and $N_{max} \sim N_{med} \gg N_{min}$, - $\eqref{dyest} \lesssim L_{min}^\frac 12 \langle \min(N_{min}^\frac 12, N_{max}^{\frac {1 - {\alpha}}2} L_{med}^\frac 12)\rangle_Z$.\ - If $H \ll L_{max} \sim L_{med}$, - $\eqref{dyest} \lesssim L_{min}^\frac 12 \langle N_{min}^\frac 12 \rangle_Z$. By symmetry, the same estimates also hold for the case $H \sim L_{max} \sim L_3$. Lemma \[hmod\] gives the high modulation case $H \ll L_{max} \sim L_{med}$. So we need only to show the estimates in the first four cases. First we consider the case $L_1 \sim L_{max}$ (the case $L_3 \sim L_{max}$ follows by symmetry). Then by Lemma \[lmod\], we have $$\begin{aligned} \label{length2} \eqref{dyest} \lesssim L_3^\frac 12 \big|\{\xi_2 \in Z : |\xi_2 - \xi_2^0| \ll N_{min}; |\xi_2|^{{\alpha}} - |\xi - \xi_2|^{{\alpha}} = \tau + \mathcal{O}(L_2)\}\big|^\frac 12\end{aligned}$$ for some $\tau \in \mathbb{R}$ and $\xi \in Z$ with $|\xi + \xi_1^0| \ll N_{min}$. We observe that the derivative of $|\xi_2|^{{\alpha}} - |\xi - \xi_2|^{{\alpha}}$ is equal to ${\alpha}(|\xi_2|^{{\alpha}-2}\xi_2 - |\xi_2 - \xi|^{{\alpha}-2}(\xi_2 - \xi))$. If $N_1 \sim N_{\max}$, then $0 < |\xi_2| < C|\xi|$ for some constant $C > 1$. 
This means $\xi_2$ is equal to $c\xi$ for some $0 < |c| < C$ and thus $\big||\xi_2|^{{\alpha}-2}\xi_2 - |\xi_2 - \xi|^{{\alpha}-2}(\xi_2 - \xi)\big| = \big|(|c|^{{\alpha}- 2}c - |c-1|^{{\alpha}- 2}(c-1))|\xi|^{{\alpha}-2}\xi\big|$, which is greater than or equal to $((C+1)^{{\alpha}-1} - C^{{\alpha}-1})|\xi|^{{\alpha}- 1}$. So, $\xi_2$ is contained in an interval of length $\mathcal{O}({N_{max}^{1-{\alpha}}L_{med}})$. Hence, by we get the desired estimate for the first case. If $N_2 \sim N_3 \gg N_1$, then $$\begin{aligned} &\qquad (\alpha-1)^{-1}\big||\xi_2|^{{\alpha}- 2}\xi_2 - |\xi_2 - \xi|^{{\alpha}- 2}(\xi_2 - \xi)\big| \\ &= \int_{\xi_2 - \xi}^{\xi_2} |\widetilde{\xi}|^{{\alpha}- 2} d\widetilde{\xi} \ge \min(|\xi_2|^{{\alpha}- 2}|\xi|, |\xi_2 - \xi|^{{\alpha}- 2}|\xi|).\end{aligned}$$ So, the variable $\xi_2$ is contained in an interval of length $\mathcal{O}({N_{max}^{2-{\alpha}}N_{min}^{-1}L_{med}})$. This and give the estimate for the second case. We now consider the case $L_2 \sim L_{max}$. If $N_1 \sim N_2 \sim N_3$, we see that $$\frac {|\xi_2|^{{\alpha}-2}\xi_2 + |\xi_2 - \xi|^{{\alpha}-2}(\xi_2 - \xi)}{|\frac \xi 2|^{{\alpha}-2}(\xi_2 - \frac \xi2)} \gtrsim 1$$ by the Taylor expansion. This means that $\xi_2$ is contained in an interval of length $\mathcal{O}(N_{max}^{\frac {2 - {\alpha}}2}L_{med}^\frac 12)$ by the mean value theorem, and the estimate for the third case follows from . If $N_{max} \sim N_{med} \gg N_{min}$, then we have $||\xi_2|^{{\alpha}-2}\xi_2 +|\xi_2 - \xi|^{{\alpha}-2}(\xi_2 - \xi)| \sim |\xi_2 - \frac \xi2|^{{\alpha}- 1} \sim N_{max}^{{\alpha}- 1}$ and thus $\eqref{length2}$ and the mean value theorem show that $\xi_2$ is contained in an interval of length $\mathcal{O}(N_{max}^{1-{\alpha}}L_{med})$. Since $\xi_2$ is also contained in an interval of length $\ll N_{min}$, Proposition \[dyest2\] follows from . We now show some bilinear estimates for the periodic and non-periodic cases.
\[bilinear2\] Let $s \ge \frac{2-{\alpha}}4$ and $0 < {\varepsilon}\ll 1$. Then, for $u \in X_{\widehat Z}^{0, \frac 12 - {\varepsilon}}$ and $v \in X_{\widehat Z}^{s, \frac 12 + {\varepsilon}}$, we have $$\|uv\|_{L^2(\mathbb{R} \times \widehat Z)} = \|u\overline{v}\|_{L^2(\mathbb{R} \times \widehat Z)} \lesssim \|u\|_{X_{\widehat Z}^{0, \frac 12 - {\varepsilon}}}\|v\|_{X_{\widehat Z}^{s, \frac 12 + {\varepsilon}}}.$$ For the periodic case the following lemma will be useful. \[aux\] $$\|u\overline{v}\|_{L^2(\mathbb{R} \times \mathbb T)} \lesssim \|(u - \widehat u(0))(\overline{v}-\widehat{\overline{v}}(0))\|_{L^2(\mathbb{R} \times \mathbb T)} + \|u\|_{X_{\mathbb T}^{0, \frac 12 - {\varepsilon}}}\|\overline{v}\|_{X_{\mathbb T}^{0, \frac 12 + {\varepsilon}}}.$$ We observe $$\begin{aligned} \|u\overline{v}\|_{L^2(\mathbb{R} \times \mathbb T)} &\le \|(u - \widehat u(0))(\overline{v}-\widehat{\overline{v}}(0))\|_{L^2(\mathbb{R} \times \mathbb T)} + \|u\widehat{\overline{v}}(0)\|_{L^2(\mathbb{R} \times \mathbb T)}\\ &\quad + \|\widehat u(0) \overline{v}\|_{L^2(\mathbb{R} \times \mathbb T)} + \|\widehat u(0)\widehat{\overline{v}}(0)\|_{L^2(\mathbb{R} \times \mathbb T)}\\ &\le \|(u - \widehat u(0))(\overline{v}-\widehat{\overline{v}}(0))\|_{L^2(\mathbb{R} \times \mathbb T)} + \|u\|_{L^2(\mathbb{R} \times \mathbb T)}\|\widehat{\overline{v}}(0)\|_{L_t^\infty L_x^\infty}\\ &\quad + \|\widehat u(0)\|_{L_t^2L_x^\infty}\|\overline{v} \|_{L_t^\infty L_x^2} + \|\widehat u(0)\|_{L_t^2L_x^\infty}\|\widehat{\overline{v}}(0) \|_{L_t^\infty L_x^2}.\end{aligned}$$ By the Sobolev embedding $X_{\mathbb T}^{0, \frac12+{\varepsilon}} \hookrightarrow C(\mathbb R; L^2(\mathbb T))$ we have $$\|\widehat{\overline{v}}(0)\|_{L_t^\infty L_x^\infty} \le \sqrt{2\pi}\|\overline{v}\|_{L_t^\infty L_x^2} \lesssim \|\overline{v}\|_{X_{\mathbb T}^{0, \frac12+{\varepsilon}}}$$ and $\|\widehat u(0)\|_{L_t^2L_x^\infty} \le \sqrt{2\pi}\|u\|_{L_{t,x}^2} \lesssim \|u\|_{X_{\mathbb T}^{0, \frac12-{\varepsilon}}}$.
This gives the desired estimate. For the proof of Proposition \[bilinear2\] it suffices to show that $$\|\frac {1}{\langle \xi_1 \rangle^s \langle \tau_1 - |\xi_1|^{\alpha}\rangle^{\frac 12 + {\varepsilon}}\langle \tau_2 + |\xi_2|^{\alpha}\rangle^{\frac 12 - {\varepsilon}}}\|_{[3;\mathbb{R}\times Z]} \lesssim 1.$$ The left hand side is bounded by the sum of $$\begin{aligned} \label{hmod2} \sum_{N_{max} \sim N_{med} \sim N} \sum_{L_1,L_2,L_3 \gtrsim 1}\sum_{H\sim L_{max}} \frac {1}{\langle N_1 \rangle^sL_1^{\frac 12 + {\varepsilon}}L_2^{\frac 12 - {\varepsilon}}}\|X_{N_1,N_2,N_3;H;L_1,L_2,L_3}\|_{[3;\mathbb{R}\times Z]},\end{aligned}$$ and $$\begin{aligned} \label{lmod2} \sum_{N_{max} \sim N_{med} \sim N} \sum_{L_{max} \sim L_{med}\gtrsim 1} \sum_{H \ll L_{max}} \frac {1}{\langle N_1 \rangle^s L_1^{\frac 12 + {\varepsilon}}L_2^{\frac 12 - {\varepsilon}}}\|X_{N_1,N_2,N_3;H;L_1,L_2,L_3}\|_{[3;\mathbb{R}\times Z]}.\end{aligned}$$ From Lemma \[aux\] we may assume that $\widehat u(0) = \widehat v(0) = 0$, and thus we may also assume that $N_{\min} \ge 1$ when $Z = \mathbb Z$. Using Proposition \[dyest2\], we have $$\eqref{lmod2} \lesssim \sum_{N_{max} \sim N_{med} \sim N} \sum_{L_{max} \sim L_{med} \gtrsim N_{max}^{\alpha}} \frac {1}{\langle N_1 \rangle^s L_1^{\frac 12 + {\varepsilon}}L_2^{\frac 12 - {\varepsilon}}} L_{min}^\frac 12\langle N_{min}^\frac 12\rangle_Z.$$ Since $L_1^{\frac 12 + {\varepsilon}}L_2^{\frac 12 - {\varepsilon}} \gtrsim L_{min}^{\frac 12 + {\varepsilon}}L_{med}^{\frac 12 - {\varepsilon}}$, we get $$\begin{aligned} &\qquad \eqref{lmod2} \lesssim \sum_{N_{max} \sim N_{med} \sim N} \frac {1}{\langle N_1 \rangle^s N_{max}^{\frac {\alpha}2 - {\varepsilon}{\alpha}}} N_{min}^\frac 12\\ &\lesssim \sum_{N_{min} \lesssim N} \frac {1}{\langle N_{min} \rangle^s N^{\frac {\alpha}2 - {\varepsilon}{\alpha}}} N_{min}^\frac 12 \lesssim 1 + N^{\frac 12 - \frac {\alpha}2 + {\varepsilon}{\alpha}} \lesssim 1.\end{aligned}$$ Now we turn to .
Firstly we consider the case $L_1 = L_{max}$ and $N_{\max} = N_1$ (the estimate for the case $L_3 = L_{max}$ and $N_{\max} = N_3$ follows by symmetry). Proposition \[dyest2\] gives $$\begin{aligned} \eqref{hmod2} &\lesssim \sum_{N_{max} \sim N_{med} \sim N} \sum_{L_1,L_2,L_3 \gtrsim 1}\sum_{H\sim L_1} \frac {L_{min}^\frac 12 \langle\min(N_{min}^\frac 12, N_{max}^{\frac {1 - {\alpha}}2} L_{med}^\frac 12)\rangle_Z}{\langle N_1 \rangle^s L_1^{\frac 12 + {\varepsilon}}L_2^{\frac 12 - {\varepsilon}}}\\ &\lesssim \sum_{N_{max} \sim N_{med} \sim N}\sum_{L_1,L_2,L_3 \gtrsim 1} \frac {L_{min}^\frac 12\langle \min(N_{min}^\frac 12, N_{max}^{\frac {1 - {\alpha}}2} L_{med}^\frac 12)\rangle_Z}{\langle N_{\min} \rangle^s L_1^{\frac 12 + {\varepsilon}}L_{med}^{\frac 12 - {\varepsilon}}}.\end{aligned}$$ Here the $H$-sum is bounded by an absolute constant. By summing in $L_{min}$ and then $L_1$, we get $$\begin{aligned} \eqref{hmod2} &\lesssim \sum_{N_{max} \sim N_{med} \sim N} \sum_{L_{max}\ge L_{med} \ge 1} \frac {\langle\min(N_{min}^\frac 12, N_{max}^{\frac {1 - {\alpha}}2} L_{med}^\frac 12)\rangle_Z L_{med}^{\varepsilon}}{\langle N_{\min} \rangle^s L_{max}^{\frac 12 + {\varepsilon}}}.\end{aligned}
$$ If $Z = \mathbb R$, then we separate $N_{min}$ sum as follows: $$\begin{aligned} &\qquad \eqref{hmod2}\\ &\lesssim \left(\sum_{0 < N_{min} < N^{1-{\alpha}} } + \sum_{N^{1-{\alpha}} \le N_{min} \lesssim N}\right) \sum_{L_{med} \ge 1}\frac {\min(N_{min}^\frac 12 L_{med}^{-\frac 12}, N^{\frac {1 - {\alpha}}2})}{\langle N_{\min} \rangle^s }\\ &\lesssim \sum_{N_{min} < N^{1-{\alpha}} }\sum_{L_{med} \ge 1}\frac {N_{min}^\frac 12 L_{med}^{-\frac 12}}{\langle N_{\min} \rangle^s } + \sum_{N_{min} = N^{1-{\alpha}}}^{N}\sum_{L_{med} \ge 1}\frac {\min(N_{min}^\frac 12 L_{med}^{-\frac 12}, N^{\frac {1 - {\alpha}}2})}{\langle N_{\min} \rangle^s }\\ &\lesssim N^{\frac{1-{\alpha}}2} + \sum_{N_{min}=N^{1-{\alpha}}}^{N}\left(\sum_{1 \le L_{med} < N_{min}N^{{\alpha}-1}}N^{\frac {1 - {\alpha}}2} + \sum_{L_{med} \ge N_{min}N^{{\alpha}-1}}N_{min}^\frac 12 L_{med}^{-\frac 12}\right)\\ &\lesssim N^{\frac{1-{\alpha}}2} + N^{({\alpha}-1)(-\frac 12 + {\varepsilon}) + {\varepsilon}} \lesssim 1.\end{aligned}$$ If $Z = \mathbb Z$, then we have $$\begin{aligned} &\qquad \eqref{hmod2}\\ &\lesssim \sum_{N_{min}=1}^{N} \left( \sum_{N^{{\alpha}- 1}N_{min} \le L_{med} \le L_{max}} + \sum_{L_{med} \le N^{{\alpha}- 1}N_{min}} \right) \frac {(1 + \min(N_{min}^\frac 12, N^{\frac {1 - {\alpha}}2}L_{med}^{\frac 12})) L_{med}^{\varepsilon}}{\langle N_{\min} \rangle^s L_1^{\frac 12 + {\varepsilon}}}\\ &\lesssim \sum_{N_{min}=1}^{N} \left( \sum_{N^{{\alpha}- 1}N_{min} \le L_{med} \le L_{max}} \frac {N_{min}^\frac 12 L_{med}^{{\varepsilon}}}{N_{\min}^s L_{max}^{\frac 12 + {\varepsilon}}} + \sum_{L_{med} \le N^{{\alpha}- 1}N_{min}} \frac {(1 + N^{\frac {1 - {\alpha}}2} L_{med}^{\frac 12}) L_{med}^{{\varepsilon}}}{N_{\min}^s L_{max}^{\frac 12 + {\varepsilon}}}\right)\\ &\lesssim \sum_{N_{min}=1}^{N} \sum_{N^{{\alpha}- 1}N_{min} \le L_{max}}\frac {N_{min}^{\frac 12}}{N_{\min}^s L_{max}^{\frac 12}} + \sum_{N_{min}=1}^{N} \sum_{L_{med} \le N^{{\alpha}- 1}N_{min}} \frac {(1 + N^{\frac {1 - {\alpha}}2} 
L_{med}^{\frac 12}) L_{med}^{{\varepsilon}}}{N_{\min}^s L_{max}^{\frac 12 + {\varepsilon}}}\\ &\lesssim \sum_{N_{min}=1}^{N} \frac {N_{min}^{\frac 12}}{N_{\min}^s (N^{{\alpha}- 1}N_{min})^{\frac 12}} + \sum_{N_{min}=1}^{N} \sum_{L_{med} \le N^{{\alpha}- 1}N_{min}} \frac {(1 + N^{\frac {1 - {\alpha}}2} L_{med}^{\frac 12}) L_{med}^{{\varepsilon}}}{N_{\min}^s L_{max}^{\frac 12 + {\varepsilon}}}\\ &\lesssim N^{\frac {1-{\alpha}}{2}} + \sum_{N_{min}=1}^{N} \sum_{L_{med} \le N^{{\alpha}- 1}N_{min}} \frac {(1 + N^{\frac {1 - {\alpha}}2} L_{med}^{\frac 12}) L_{med}^{{\varepsilon}}}{N_{\min}^s L_{max}^{\frac 12 + {\varepsilon}}}\\ &\lesssim N^{\frac {1-{\alpha}}{2}} + \sum_{N_{min}=1}^{N}\left(\sum_{1 \le L_{med} < N^{{\alpha}-1}} \frac {L_{med}^{{\varepsilon}}}{N_{\min}^s L_{max}^{\frac 12 + {\varepsilon}}} + \sum_{N^{{\alpha}-1} \le L_{med} \le N^{{\alpha}-1}N_{min}}\frac {N^{\frac {1 - {\alpha}}2} L_{med}^{\frac 12} L_{med}^{{\varepsilon}}}{N_{\min}^s L_{max}^{\frac 12 + {\varepsilon}}}\right)\\ &\lesssim N^{\frac {1-{\alpha}}{2}} +1 + \sum_{N_{min}=1}^{N}\sum_{N^{{\alpha}-1} \le L_{med} \le N^{{\alpha}-1}N_{min}} N_{min}^{-s}N^{\frac {1 - {\alpha}}2}\\ &\lesssim 1+ N^{\frac{1-{\alpha}}2}\log{N} \lesssim 1.\end{aligned}$$ Secondly, we deal with the case $L_2 = L_{max}$ and $N_{max} \sim N_{min}$. 
Using Proposition \[dyest2\], we have $$\eqref{hmod2} \lesssim \sum_{N_{max} \sim N_{min} \sim N} \sum_{L_{max} \geq L_{med} \geq L_{min} \gtrsim 1} \frac {1}{\langle N_1 \rangle^s L_1^{\frac 12 + {\varepsilon}}L_2^{\frac 12 - {\varepsilon}}} L_{min}^\frac 12 \langle\min(N_{min}^\frac 12, N_{max}^{\frac {2 - {\alpha}}4} L_{med}^\frac 14) \rangle_Z.$$ Since $s \ge \frac{2-{\alpha}}4$, we have $$\eqref{hmod2} \lesssim \sum_{N_{max} \sim N_{min} \sim N} \sum_{L_{med} \ge 1}\frac {1}{\langle N \rangle^s L_{med}^{\frac12-{\varepsilon}}} \langle N^{\frac {2 - {\alpha}}4}L_{med}^\frac14 \rangle_Z \lesssim 1.$$ We now handle the remaining three cases: $L_1 = L_{max}$ and $N_2 \sim N_3 \gg N_1$; $L_2 = L_{max}$ and $N_3 \sim N_1 \gg N_2$; $L_3 = L_{max}$ and $N_1 \sim N_2 \gg N_3$. Case $L_1 = L_{max}$ and $N_2 \sim N_3 \gg N_1$ {#case-l_1-l_max-and-n_2-sim-n_3-gg-n_1 .unnumbered} ----------------------------------------------- Since $N_2 \sim N_3 \gg N_1$ and $\xi_1 + \xi_2 + \xi_3 = 0$, one can observe that $H \sim |h(\xi_1, \xi_2, \xi_3)| \sim |\xi_1||\xi_{\max}|^{{\alpha}-1} \sim N_{min}N^{{\alpha}-1}$. Thus we have $1 \lesssim L_1 \sim H \sim N^{{\alpha}-1}N_{min}$, which means that $N_{min} \gtrsim N^{1-{\alpha}}$.
Using Proposition \[dyest2\] and performing $L_{min}$ and $L_1$ summation, we have $$\begin{aligned} &\qquad \eqref{hmod2}\\ &\lesssim \sum_{\substack{N_{max} \sim N_{med} \sim N\\N_{min} \ge N^{1-{\alpha}} }} \sum_{L_{max} \sim N^{{\alpha}-1}N_{min}} \sum_{L_{med} \ge L_{min} \gtrsim 1} \frac {L_{min}^\frac 12 \langle\min(N_{min}^\frac 12, N_{max}^{\frac {2 - {\alpha}}2} N_{min}^{-\frac 12} L_{med}^\frac 12)\rangle_Z}{\langle N_1 \rangle^s L_1^{\frac 12 + {\varepsilon}}L_2^{\frac 12 - {\varepsilon}}}\\ &\lesssim \sum_{\substack{N_{max} \sim N_{med} \sim N\\N_{min} \ge N^{1-{\alpha}} }}\sum_{L_{max} \sim N^{{\alpha}-1}N_{min}} \sum_{L_{med} \ge L_{min} \gtrsim 1} \frac {L_{min}^{\varepsilon}\langle\min(N_{min}^\frac 12, N_{max}^{\frac {2 - {\alpha}}2} N_{min}^{-\frac 12} L_{med}^\frac 12)\rangle_Z}{\langle N_{min} \rangle^s L_{max}^{\frac 12 + {\varepsilon}}}\\ &\lesssim \sum_{N^{1-{\alpha}} \le N_{min} \lesssim N} \sum_{N^{{\alpha}-1}N_{min} \ge L_{med} \ge 1} \frac {L_{med}^{\varepsilon}\langle\min(N_{min}^\frac 12, N^{\frac {2 - {\alpha}}2} N_{min}^{-\frac 12}L_{med}^{\frac12})\rangle_Z}{\langle N_{min} \rangle^s (N^{{\alpha}-1}N_{min})^{\frac12+{\varepsilon}}}.\end{aligned}$$ When $Z = \mathbb R$, by separating $N_{min}$ sum into the cases $N_{min} < N^{\frac{1-{\alpha}}2}$ and $N_{min} \ge N^{\frac{1-{\alpha}}2}$, we have $$\begin{aligned} \eqref{hmod2} &\lesssim \sum_{N^{1-{\alpha}} \le N_{min} < N^{\frac{1-{\alpha}}2}} \sum_{N^{{\alpha}-1}N_{min} \ge L_{med} \ge 1}\frac {N_{min}^\frac 12 L_{med}^{{\varepsilon}}}{\langle N_{min} \rangle^s(N^{{\alpha}-1}N_{min})^{\frac12+{\varepsilon}}}\\ &\qquad + \sum_{N^{\frac{1-{\alpha}}2} \le N_{min} \lesssim N} \sum_{N^{{\alpha}-1}N_{min} \ge L_{med} \ge 1} \frac {\min(N_{min}^\frac 12 L_{med}^{{\varepsilon}}, N^{\frac {2 - {\alpha}}2} N_{min}^{-\frac 12}L_{med}^{\frac12+{\varepsilon}})}{\langle N_{min} \rangle^s (N^{{\alpha}-1}N_{min})^{\frac12+{\varepsilon}}}\\ &\lesssim N^{\frac{1-{\alpha}}2+{\varepsilon}} + 
\sum_{N^{\frac{1-{\alpha}}2} \le N_{min} \lesssim N} \sum_{N^{{\alpha}-1}N_{min} \ge L_{med} \ge 1}\frac {N_{min}^\frac 12 L_{med}^{{\varepsilon}}}{\langle N_{min} \rangle^s (N^{{\alpha}-1}N_{min})^{\frac12+{\varepsilon}}}\\ &\lesssim N^{\frac{1-{\alpha}}2+{\varepsilon}} \lesssim 1.\end{aligned}$$ Otherwise ($Z = \mathbb Z$), since $N_{min} \ge 1$, we have $$\begin{aligned} \eqref{hmod2} &\lesssim \sum_{1 \le N_{min} \lesssim N} \sum_{N^{{\alpha}-1}N_{min} \ge L_{med} \ge 1} \frac {L_{med}^{\varepsilon}(1 + \min(N_{min}^\frac 12, N^{\frac {2 - {\alpha}}2} N_{min}^{-\frac 12}L_{med}^{\frac12}))}{\langle N_{min} \rangle^s (N^{{\alpha}-1}N_{min})^{\frac12+{\varepsilon}}}\\ &\lesssim 1 + \sum_{1 \le N_{min} \lesssim N} \sum_{N^{{\alpha}-1}N_{min} \ge L_{med} \ge 1} \frac {\min(N_{min}^\frac 12 L_{med}^{{\varepsilon}}, N^{\frac {2 - {\alpha}}2} N_{min}^{-\frac 12}L_{med}^{\frac12+{\varepsilon}})}{\langle N_{min} \rangle^s (N^{{\alpha}-1}N_{min})^{\frac12+{\varepsilon}}}\\ &\lesssim 1 + \sum_{1 \le N_{min} \lesssim N} \sum_{N^{{\alpha}-1}N_{min} \ge L_{med} \ge 1}\frac {N_{min}^\frac 12 L_{med}^{{\varepsilon}}}{\langle N_{min} \rangle^s (N^{{\alpha}-1}N_{min})^{\frac12+{\varepsilon}}}\\ &\lesssim 1 + N^{\frac{1-{\alpha}}2}\log N \lesssim 1.\end{aligned}$$ Case $L_2 = L_{max}$ and $N_3 \sim N_1 \gg N_2$ {#case-l_2-l_max-and-n_3-sim-n_1-gg-n_2 .unnumbered} ----------------------------------------------- In this case we have $L_2 \sim H \sim N^{{\alpha}}$. 
From Proposition \[dyest2\], summation in $L_{min}$ and the assumption $N_{min} \ge 1$ for $Z = \mathbb Z$, we have $$\begin{aligned} \eqref{hmod2} &\lesssim \sum_{\substack{N_{max} \sim N_{med} \sim N\\N_{min} \ge N^{1-{\alpha}} }} \sum_{L_{max} \sim N^{\alpha}} \sum_{ L_{med} \ge L_{min} \gtrsim 1} \frac {L_{min}^\frac 12 \langle\min(N_{min}^\frac 12, N_{max}^{\frac {1 - {\alpha}}2} L_{med}^\frac 12)\rangle_Z}{\langle N_1 \rangle^s L_1^{\frac 12 + {\varepsilon}}L_{max}^{\frac 12 - {\varepsilon}}}\\ &\lesssim \sum_{\substack{N_{max} \sim N_{med} \sim N\\N_{min} \ge N^{1-{\alpha}} }} \sum_{1 \lesssim L_{med} \le N^{\alpha}} \frac {\langle\min(N_{min}^\frac 12, N_{max}^{\frac {1 - {\alpha}}2} L_{med}^{\frac 12})\rangle_Z}{N^{s+{\alpha}(\frac 12 - {\varepsilon})}}\\ &\lesssim \sum_{N^{1-{\alpha}} \le N_{min} \lesssim N} \frac {\langle N_{min}^\frac 12\rangle_Z\log N}{N^{s+{\alpha}(\frac 12 - {\varepsilon})}} \lesssim 1.\end{aligned}$$ Case $L_3 = L_{max}$ and $N_1 \sim N_2 \gg N_3$ {#case-l_3-l_max-and-n_1-sim-n_2-gg-n_3 .unnumbered} ----------------------------------------------- In this case $L_3 \sim H \sim N^{{\alpha}-1}N_{min}$. 
By Proposition \[dyest2\] and summation in $L_{min}$, we have $$\begin{aligned} &\qquad \eqref{hmod2}\\ &\lesssim \sum_{\substack{N_{max} \sim N_{med} \sim N\\N_{min} \ge N^{1-{\alpha}} }} \sum_{L_{max} \sim N^{{\alpha}-1}N_{min}} \sum_{L_{med} \ge L_{min} \gtrsim 1} \frac {L_{min}^\frac 12\langle \min(N_{min}^\frac 12, N_{max}^{\frac {2 - {\alpha}}2} N_{min}^{-\frac 12} L_{med}^\frac 12)\rangle_Z}{\langle N_1 \rangle^s L_1^{\frac 12 + {\varepsilon}}L_2^{\frac 12 - {\varepsilon}}}\\ &\lesssim \sum_{\substack{N_{max} \sim N_{med} \sim N\\N_{min} \ge N^{1-{\alpha}} }} \sum_{L_{max} \sim N^{{\alpha}-1}N_{min}} \sum_{L_{med} \ge L_{min} \gtrsim 1} \frac {L_{min}^\frac 12 \langle \min(N_{min}^\frac 12, N_{max}^{\frac {2 - {\alpha}}2} N_{min}^{-\frac 12} L_{med}^\frac 12)\rangle_Z}{ N^{s} L_{min}^{\frac 12 + {\varepsilon}}L_{med}^{\frac 12 - {\varepsilon}}}\\ &\lesssim \sum_{\substack{N_{max} \sim N_{med} \sim N\\N_{min} \ge N^{1-{\alpha}} }} \sum_{L_{max} \sim N^{{\alpha}-1}N_{min}} \sum_{L_{med} \gtrsim 1} \frac {\langle\min(N_{min}^\frac 12 , N^{\frac {2 - {\alpha}}2} N_{min}^{-\frac 12} L_{med}^{\frac12})\rangle_Z}{N^{s}L_{med}^{\frac 12 - {\varepsilon}}}.\end{aligned}$$ Since $N^{{\alpha}-1}N_{min} \sim L_{max} \gtrsim 1$ implies $N_{min} \gtrsim N^{1 - {\alpha}}$, by breaking $N_{min}$-sum into two parts, we have: $$\begin{aligned} &\qquad \eqref{hmod2}\\ &\lesssim \sum_{N^{1 - {\alpha}} \le N_{min} \le N^{\frac {2 - {\alpha}}2}}\frac {\langle N_{min}^\frac 12\rangle_Z}{N^{s}}\\ &\qquad + \sum_{N^{\frac {2 - {\alpha}}2} < N_{min} \lesssim N} \sum_{L_{max} \sim N^{{\alpha}-1}N_{min}} \sum_{L_{med} \gtrsim 1} \frac {\langle\min(N_{min}^\frac 12 , N^{\frac {2 - {\alpha}}2} N_{min}^{-\frac 12} L_{med}^{\frac12})\rangle_Z}{N^{s}L_{med}^{\frac 12 - {\varepsilon}}} \\ &\lesssim N^{\frac{2-{\alpha}}4 - s} + \sum_{N^{\frac {2 - {\alpha}}2} < N_{min} \lesssim N} \sum_{L_{max} \sim N^{{\alpha}-1}N_{min}} \sum_{L_{med} \gtrsim 1} \frac {\langle\min(N_{min}^\frac 12 , N^{\frac 
{2 - {\alpha}}2} N_{min}^{-\frac 12} L_{med}^{\frac12})\rangle_Z}{N^{s}L_{med}^{\frac 12 - {\varepsilon}}}.\end{aligned}$$ For the second inequality we use $\sum_{N^{1 - {\alpha}} \le N_{min} \le N^{\frac {2 - {\alpha}}2}} ({1+ N_{min}^\frac 12}){N^{-s}} \lesssim N^{\frac{2-{\alpha}}{4}-s} + N^{-s}\log N \lesssim N^{\frac{2-{\alpha}}{4}-s}.$ Now by dividing the $L_{med}$-sum into\ $\sum_{1 \leq L_{med} \leq N^{{\alpha}- 2}N_{min}^2} + \sum_{N^{{\alpha}- 2}N_{min}^2 < L_{med}}$, we get $$\begin{aligned} \eqref{hmod2} &\lesssim N^{\frac{2-{\alpha}}4 - s} + \sum_{N^{\frac {2 - {\alpha}}2} < N_{min} \lesssim N} \sum_{L_{max} \sim N^{{\alpha}-1}N_{min}} \sum_{L_{med} \gtrsim 1} \frac {\min(N_{min}^\frac 12 , N^{\frac {2 - {\alpha}}2} N_{min}^{-\frac 12} L_{med}^{\frac12})}{N^{s}L_{med}^{\frac 12 - {\varepsilon}}}\\ &\lesssim N^{\frac{2-{\alpha}}4 - s} + \sum_{N^{\frac {2 - {\alpha}}2} < N_{min} \lesssim N} \frac {N_{min}^\frac 12}{N^s(N^{{\alpha}-2}N_{min}^2)^{\frac12-{\varepsilon}}}\lesssim N^{\frac{2-{\alpha}}4 - s}.\end{aligned}$$ Since $s \ge \frac{2-{\alpha}}{4}$, we get the desired result. Proof of Theorem \[main1\] ========================== For the proof of Theorem \[main1\], we need the trilinear estimate $$\label{tri} \|u_1\overline{u_2}u_3\|_{X_{\mathbb R}^{s, -\frac 12 + {\varepsilon}}} \lesssim \prod_{j=1}^3 \|u_j\|_{X_{\mathbb R}^{s, \frac 12 + {\varepsilon}}}.$$ ### Failure of for $s< \frac{2-\alpha}{4}$ {#failure-of-for-s-frac2-alpha4 .unnumbered} It is easy to see that the trilinear estimate fails when $s < \frac{2-\alpha}{4} $. The counter-example is a resonant high-high-high to high interaction.
For $N \gg 1$, let $$\begin{aligned} \widetilde{u_1},\widetilde{u_3} &= \chi_{A_N}, \qquad A_N =\{(\xi,\tau): N\le \xi \le N+N^{\frac{2-\alpha}{2}} , \quad |\tau -|\xi|^\alpha | \le 1 \}, \\ \widetilde{\overline{u_2}} &= \chi_{B_N}, \qquad B_N =\{(\xi,\tau): -N\le \xi \le -N+N^{\frac{2-\alpha}{2}} , \quad |\tau +|\xi|^\alpha | \le 1 \}.\end{aligned}$$ Here, the number $ N^{\frac{2-\alpha}{2}} $ is chosen so that the parallelogram $A_N$ fits in a strip of width $1$ around $\tau = |\xi|^\alpha $. Then, it follows that $$\begin{aligned} \|{\widetilde}{u_1} * {\widetilde}{\overline u_2} * {\widetilde}{u_3} \|_{X^{s,b-1}} &\sim N^{\frac{2-\alpha}{2}} N^{\frac{2-\alpha}{2}} N^s N^{\frac{2-\alpha}{4}}, \,\, \mbox{and}\;\;\|u_j\|_{X^{s,b}} \sim N^sN^{\frac{2-\alpha}{4}}.\end{aligned}$$ This and letting $N\to \infty$ give the necessary condition $ s \ge \frac{2-\alpha}{4} $ for . \[trilinear\] Let $s \ge \frac{2-{\alpha}}{4}$ and $0 < {\varepsilon}\ll 1$. For any $u_1,u_2,$ and $u_3\in X_{\widehat Z}^{s, \frac12+{\varepsilon}}$, we have $$\|u_1\overline{u_2}u_3\|_{X_{\widehat Z}^{s, -\frac 12 + {\varepsilon}}} \lesssim \prod_{j=1}^3 \|u_j\|_{X_{\widehat Z}^{s, \frac 12 + {\varepsilon}}}.$$ By duality and Plancherel’s theorem it suffices to show that $$\Big\|\frac{\langle \xi_4 \rangle^s}{\langle \xi_1 \rangle^s \langle \xi_2 \rangle^s \langle \xi_3 \rangle^s \langle \tau_1 - |\xi_1|^{\alpha}\rangle^{\frac12 + {\varepsilon}} \langle \tau_2 + |\xi_2|^{\alpha}\rangle^{\frac12 + {\varepsilon}}\langle \tau_3 - |\xi_3|^{\alpha}\rangle^{\frac12 + {\varepsilon}} \langle \tau_4 + |\xi_4|^{\alpha}\rangle^{\frac12 - {\varepsilon}}}\Big\|_{[4;\mathbb{R}\times Z]} \lesssim 1.$$ Since $\langle \tau_2 + |\xi_2|^{\alpha}\rangle^{\frac 12 + {\varepsilon}} \gtrsim \langle \tau_2 + |\xi_2|^{\alpha}\rangle^{\frac 12 - {\varepsilon}}$, the desired estimate follows from Lemma \[compo\] and the bilinear estimate of Proposition \[bilinear2\].
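For the reader's convenience we record the power counting behind the necessary condition $s \ge \frac{2-\alpha}{4}$ obtained from the counter-example above; this bookkeeping is implicit in the displayed norms and is spelled out here by us:

```latex
% Power counting for the counter-example (our bookkeeping of the displayed norms):
\[
  \frac{\|\widetilde u_1 * \widetilde{\overline{u_2}} * \widetilde u_3\|_{X^{s,b-1}}}
       {\prod_{j=1}^3 \|u_j\|_{X^{s,b}}}
  \sim \frac{N^{\frac{2-\alpha}{2}} \, N^{\frac{2-\alpha}{2}} \, N^{s} \, N^{\frac{2-\alpha}{4}}}
            {N^{3s} \, N^{\frac{3(2-\alpha)}{4}}}
  = N^{\frac{2-\alpha}{2} - 2s},
\]
% which remains bounded as N -> infinity precisely when s >= (2 - alpha)/4.
```

Thus the estimate survives the limit $N \to \infty$ exactly on the range of $s$ asserted in Proposition \[trilinear\].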
We define a nonlinear functional $\mathcal{N}$ by $$\mathcal{N}(u) = \psi(t)U(t)\phi - i\gamma\psi(t/T)\int^t_0 U(t-t')|u|^2u(t') dt',$$ where $\psi$ is a fixed smooth cut-off function such that $\psi(t) = 1$ if $|t| < 1$ and $\psi(t) = 0$ if $|t|>2$, and $0 < T \le 1$ is fixed. For $s,b \in \mathbb{R}$ we define the restriction norm of $X_{\widehat Z}^{s,b}$ on the time interval $J_T = [0,T]$ by $$\|u\|_{X_{\widehat Z}^{s,b}(J_T)} := \inf\big\{\|v\|_{X_{\widehat Z}^{s,b}}: v|_{J_T} = u\big\}.$$ Then we recall the well-known properties of $X_{\widehat Z}^{s,b}$: $$\begin{aligned} \label{li-est} \|\psi(t)U(t)\phi\|_{X_{\widehat Z}^{s,b}} \lesssim \|\phi\|_{H^s}, \,\, b\in \mathbb R,\end{aligned}$$ and, for $-\frac 12 < b' \le 0, 0 \le b \le b' + 1,$ $$\begin{aligned} \label{nonli-est} \|\int^t_0 U(t-t')F(t',x)dt'\|_{X_{\widehat Z}^{s,b}(J_T)} \lesssim T^{1+b'-b}\|F\|_{X_{\widehat Z}^{s,b'}(J_T)}.\end{aligned}$$ Define a complete metric space $B_{T,\rho}$ by $$B_{T,\,\rho} = \big\{u \in X_{\widehat Z}^{s, \frac 12 + {\varepsilon}}(J_T) : \|u\|_{X_{\widehat Z}^{s, \frac 12 + {\varepsilon}}(J_T)} \le \rho\big\}$$ with the metric $d(u,v) = \|u-v\|_{X_{\widehat Z}^{s, \frac 12 + {\varepsilon}}(J_T)}$.
From and with $b=\frac 12 + {\varepsilon}, b' = - \frac 12 + {\varepsilon}', {\varepsilon}< {\varepsilon}'$ it follows that, for any $u \in B_{T,\rho}$, $$\|\mathcal{N}(u)\|_{X_{\widehat Z}^{s, \frac 12 + {\varepsilon}}(J_T)} \lesssim \|\phi\|_{H^s} + T^{{\varepsilon}' - {\varepsilon}}\||u|^2u\|_{X_{\widehat Z}^{s, -\frac 12 + {\varepsilon}'}(J_T)}.$$ If ${\varepsilon}'$ is sufficiently small, from Proposition \[trilinear\] we see $$\|\mathcal{N}(u)\|_{X_{\widehat Z}^{s, \frac 12 + {\varepsilon}}(J_T)} \lesssim \|\phi\|_{H^s} + T^{{\varepsilon}' - {\varepsilon}}\|u\|^3_{X_{\widehat Z}^{s, \frac 12 + {\varepsilon}}(J_T)} \lesssim \|\phi\|_{H^s} + T^{{\varepsilon}' - {\varepsilon}}\rho^3.$$ Choosing $\rho \ge 2C\|\phi\|_{H^s}$ and $T$ small enough so that $CT^{{\varepsilon}' - {\varepsilon}}\rho^3 \le \rho/2$, where $C$ is the implicit constant, we see that the functional $\mathcal{N}$ is a map from $B_{T,\rho}$ to itself. Similarly one can show that $\mathcal{N}$ is a contraction. Therefore there is a unique $u \in X_{\widehat Z}^{s, \frac 12 + {\varepsilon}}(J_T)$ satisfying . Ill-posedness ============= In this section, we prove that the equation in the non-periodic case is ill-posed for $\frac {2 - 3{\alpha}}{4({\alpha}+ 1)} < s < \frac {2 - {\alpha}}4$. For convenience we assume that $\gamma = 1$. Our strategy is to approximate the solution by the solutions of , which is ill-posed in $H^s, s < 0$ (see [@cct] for the non-periodic case and [@cct2; @mol] for the periodic one). For this purpose we recall the ill-posedness result for the Schrödinger equation $$\begin{aligned} \label{eq2} \left\{\begin{array}{l} i\partial_tv -\Delta v = |v|^2v,\\ v(0,\cdot) = \phi \in H^s. \end{array} \right.\end{aligned}$$ \[ill-NLS\] Let $s<0$. The solution map of the initial value problem of the cubic NLS fails to be uniformly continuous.
More precisely, for $0 < \delta \ll {\varepsilon}\ll 1$ and $T > 0$ arbitrary, there are two solutions $v_1, v_2$ to (\[eq2\]) with initial data $\phi_1, \phi_2$, respectively, satisfying $\|\phi_1\|_{H^s}, \|\phi_2\|_{H^s} \lesssim {\varepsilon}$, $\|\phi_1 - \phi_2\|_{H^s} \lesssim \delta$, and $\sup_{0 \leq t \leq T}\|v_1(t) - v_2(t)\|_{H^s} \gtrsim {\varepsilon}$. Moreover the solutions can be chosen to satisfy $$\begin{aligned} \label{ill-sm} \sup_{0 \leq t < \infty}\|v_j(t)\|_{H^5} \lesssim {\varepsilon},\end{aligned}$$ for $j=1,2$. Let $N \gg 1$ be a large parameter to be chosen later. Let $v(s,y)$ be a solution of the cubic NLS equation and set $$\label{changevar}(s,y) := \Big(t, \frac {x + {\alpha}N^{{\alpha}- 1}t}{(\frac {{\alpha}({\alpha}- 1)}2 N^{{\alpha}- 2})^{\frac 12}}\Big).$$ We shall construct approximate solutions which are given by $$\begin{aligned} \label{appsol} V(t,x) := e^{iNx} e^{iN^{\alpha}t} v(s,y).\end{aligned}$$ It is easy to see that $$\begin{aligned} &\qquad (i\partial_t + (-\Delta)^{\frac {\alpha}2})V\\ &= e^{iNx}e^{iN^{\alpha}t} \Big(- N^{\alpha}v(s,y) + i \partial_s v(s,y) + i\frac {{\alpha}N^{{\alpha}- 1}}{(\frac {{\alpha}({\alpha}- 1)}2 N^{{\alpha}- 2})^{\frac 12}} \partial_y v(s,y)\Big) \\ & + e^{iNx}e^{iN^{\alpha}t}\Big(N^{\alpha}v(s,y) - i \frac {{\alpha}N^{{\alpha}- 1}}{(\frac {{\alpha}({\alpha}- 1)}2 N^{{\alpha}- 2})^{\frac 12}} \partial_y v(s,y) - \partial_{yy} v(s,y) + R(-i\partial_y) v(s,y)\Big),\end{aligned}$$ where $$\label{remain} R(\xi) = \Big|\frac {\xi}{\big(\frac {{\alpha}({\alpha}- 1)}2 N^{{\alpha}- 2}\big)^{\frac 12}} + N\Big|^{\alpha}- N^{\alpha}- \frac {{\alpha}N^{{\alpha}- 1}}{\big(\frac {{\alpha}({\alpha}- 1)}2 N^{{\alpha}- 2}\big)^{\frac 12}}\xi - \xi^2.$$ Since $v(s,y)$ is a solution of (\[eq2\]), we have $$iV_t + (-\Delta)^\frac {{\alpha}}2V - |V|^2V = E,$$ where $E=e^{iNx} e^{iN^{\alpha}t} R(-i\partial_y) v(s,y).$ We need to bound the error. First we show the following perturbation result relying on the local well-posedness.
\[pertub\] Let $u$ be a smooth solution to the fractional Schrödinger equation and $V$ be a smooth solution to the equation $$iV_t + (-\Delta)^\frac {{\alpha}}2V - |V|^2V = \mathcal E$$ for some error function $\mathcal E$. Let $e$ be the solution to the inhomogeneous problem $ie_t + (-\Delta)^\frac {{\alpha}}2e = \mathcal E,\; e(0) = 0$ and let $\eta(t)$ be a compactly supported smooth time cut-off function such that $\eta=1$ on $J = [0,1]$. Suppose that\ $\|u(0)\|_{H^{\frac {2-{\alpha}}4}}, \;\;\|V(0)\|_{H^{\frac {2-{\alpha}}4}},\;\; \|\eta(t)e\|_{X_{\mathbb R}^{\frac {2-{\alpha}}4,\frac12+}} \lesssim {\varepsilon}.$ Then, if ${\varepsilon}$ is sufficiently small, we have $$\|u-V\|_{X_{\mathbb R}^{\frac {2-{\alpha}}4,\frac12+}(J)} \lesssim \|u(0) - V(0)\|_{H^{\frac {2-{\alpha}}4}} + \|\eta(t)e\|_{X_{\mathbb R}^{\frac {2-{\alpha}}4,\frac12+}}.$$ In particular, we have $$\sup_{0 \leq t \leq 1}\|u(t)-V(t)\|_{H^{\frac {2-{\alpha}}4}} \lesssim \|u(0) - V(0)\|_{H^{\frac {2-{\alpha}}4}} + \|\eta(t)e\|_{X_{\mathbb R}^{\frac {2-{\alpha}}4,\frac12+}}.$$ Writing the equation for $V$ in integral form, we have $$V(t) = U(t)V(0) + e(t) - i\int_0^t U(t - t')(|V|^2V)(t')dt'.$$ By taking the $X_{\mathbb R}^{\frac {2-{\alpha}}4, \frac12+}(J)$ norm on both sides and applying the analogues of (\[li-est\]) and (\[nonli-est\]) for $X_{\mathbb R}$, we get $$\begin{aligned} \|V\|_{X_{\mathbb R}^{\frac {2-{\alpha}}4,\frac12+}(J)} &\lesssim \|V(0)\|_{H^{\frac {2 - {\alpha}}4}} + \|\eta(t)e\|_{X_{\mathbb R}^{\frac{2-{\alpha}}4,\frac12+}} + \||V|^2V\|_{X_{\mathbb R}^{\frac {2-{\alpha}}4,-\frac12+}(J)}\\ &\lesssim \|V(0)\|_{H^{\frac {2 - {\alpha}}4}} + \|\eta(t)e\|_{X_{\mathbb R}^{\frac {2-{\alpha}}4,\frac12+}} + \|V\|_{X_{\mathbb R}^{\frac{2-{\alpha}}4,\frac12+}(J)}^3.\end{aligned}$$ By a continuity argument with sufficiently small ${\varepsilon}$, we obtain $\|V\|_{X_{\mathbb R}^{\frac {2-{\alpha}}4,\frac12+}(J)} \lesssim {\varepsilon}.$ Let $w := u - V$.
Then $w$ satisfies the equation $$iw_t + (-\Delta)^{{\alpha}/2}w = |w|^2w + 2|w|^2V + 2w|V|^2 + w^2\bar{V} + \bar{w}V^2 - \mathcal E,\;\; w(0) = u(0) - V(0),$$ which is written in integral form as $$w(t) = U(t)w(0) - e(t) - i\int_0^t U(t - t')(|w|^2w + 2|w|^2V + 2w|V|^2 + w^2\bar{V} + \bar{w}V^2)(t')dt'.$$ Again taking $X_{\mathbb R}^{\frac {2-{\alpha}}4,\frac12+}(J)$ norms on both sides of the above equation and applying the same estimates, we have $$\begin{aligned} \|w\|_{X_{\mathbb R}^{\frac {2-{\alpha}}4, \frac12+}(J)} &\lesssim \|u(0) - V(0)\|_{H^{\frac {2 - {\alpha}}4}} + \|\eta(t)e\|_{X_{\mathbb R}^{\frac {2-{\alpha}}4,\frac12+}}\\ &\qquad\quad+ \||w|^2w + 2|w|^2V + 2w|V|^2 + w^2\bar{V} + \bar{w}V^2\|_{X_{\mathbb R}^{\frac {2-{\alpha}}4,-\frac12+}(J)}\\ &\lesssim \|u(0) - V(0)\|_{H^{\frac {2 - {\alpha}}4}} + \|\eta(t)e\|_{X_{\mathbb R}^{\frac {2-{\alpha}}4,\frac12+}}\\ &\qquad\quad+ \|w\|_{X_{\mathbb R}^{\frac {2-{\alpha}}4,\frac12+}(J)}(\|w\|_{X_{\mathbb R}^{\frac {2-{\alpha}}4,\frac12+}(J)} + \|V\|_{X_{\mathbb R}^{\frac {2-{\alpha}}4,\frac12+}(J)})^2.\end{aligned}$$ If ${\varepsilon}$ is sufficiently small, a continuity argument with respect to time gives the desired bound. \[error\] Let $e$ be a solution to the initial value problem $ie_t + (-\Delta)^\frac {{\alpha}}2e = E,\;\; e(0) = 0$, and let $\eta$ be the smooth time cut-off function given in Lemma \[pertub\]. Then $$\|\eta(t)e\|_{X_{\mathbb R}^{\frac {2-{\alpha}}4,\frac12+}} \lesssim {\varepsilon}N^{-{\alpha}/2}.$$ For the proof of this lemma, we make use of the following lemma from [@cct]. \[inho-sob\] Let $s > -\frac 12$, $\sigma > 0$, and $w \in H^\sigma(\mathbb{R})$. For $M > 1, \tau > 0, x_0 \in \mathbb{R}$ and $A > 0$ let $$\widetilde w (x) = Ae^{iMx}w(\frac {x - x_0}{\tau}).$$ 1. Suppose that $s \geq 0$. Then there exists a constant $C_1 < \infty$, depending only on $s$, such that $$\|\widetilde w\|_{H^s} \leq C_1|A|\tau^{1/2} M^s \|w\|_{H^s}$$ for all $w, A, x_0$ whenever $M\cdot\tau \geq 1$. 2.
Suppose that $s < 0$ and that $\sigma \geq |s|$. Then there exists a constant $C_1 < \infty$, depending only on $s$ and $\sigma$, such that $$\|\widetilde w\|_{H^s} \leq C_1|A|\tau^{1/2}M^s\|w\|_{H^\sigma}$$ for all $w, A, x_0$ whenever $1 \leq \tau \cdot M^{1 + (s/\sigma)}$. 3. There exists $c_1 > 0$ such that for each $w$ there exists $C_w < \infty$ such that $$\|\widetilde w\|_{H^s} \geq c_1|A|\tau^{1/2}M^s\|w\|_{L^2}$$ whenever $\tau \cdot M \geq C_w.$ Using the inhomogeneous linear estimate and Plancherel’s theorem, we have $$\begin{aligned} \|\eta(t)e\|_{X_{\mathbb R}^{\frac {2 - {\alpha}}4, \frac12+}} &\lesssim \|\eta(t)E\|_{X_{\mathbb R}^{\frac {2 - {\alpha}}4, -\frac12+}} = \|\langle \xi \rangle^{\frac {2 - {\alpha}}4} \langle \tau - |\xi|^{\alpha}\rangle^{-\frac12+} \widehat{\eta(t)E}\|_{L^2_{\tau,\xi}}\\ &\leq \|\langle \xi \rangle^{\frac {2 - {\alpha}}4} \widehat{\eta(t)E}\|_{L^2_{\tau,\xi}} = \|\eta(t) \langle \xi \rangle^{\frac {2 - {\alpha}}4} \mathcal{F} E\|_{L^2_{t,\xi}}\\ &\leq \|\langle \xi \rangle^{\frac {2 - {\alpha}}4} \mathcal{F}E\|_{L^\infty_tL^2_\xi([0,2] \times \mathbb{R})}.\end{aligned}$$ It suffices to show $$\sup_{0 \leq t \leq 2} \|E\|_{H^{\frac {2-{\alpha}}4}} \lesssim {\varepsilon}N^{-{\alpha}/2}.$$ Since $E=e^{iNx} e^{iN^{\alpha}t} R(-i\partial_y) v(s,y)$, by Lemma \[inho-sob\] with $M = N, \tau = N^{\frac {\alpha}2 - 1}$ we see that $\|E\|_{H^{\frac {2-{\alpha}}4}} \lesssim \|R(-i\partial_y)v\|_{H^{\frac {2-{\alpha}}4}}$. Recalling that $R(\xi)$ is given by (\[remain\]), it suffices to show $\|R(-i\partial_y)v\|_{H^{\frac {2-{\alpha}}4}} \lesssim {\varepsilon}N^{-{\alpha}/2}$.
Since $\|v\|_{H^{5}} \lesssim {\varepsilon}$ by Theorem \[ill-NLS\], we need only to show that $$\begin{aligned} \label{error est} \big|R(\xi)\big| \leq cN^{-{\alpha}/2}|\xi|^3\;\;\mbox{for all}\;\;N \gg 1.\end{aligned}$$ Let $c_1 = \max \big(8{\alpha}(\frac {{\alpha}({\alpha}- 1)}{2})^{- \frac 32}, \frac {2^{4 - {\alpha}}}{6} (2 - {\alpha})(\frac {{\alpha}({\alpha}- 1)}{2})^{-\frac 12}\big)$,\ $c_2 = \max \big(8{\alpha}(\frac {{\alpha}({\alpha}- 1)}{2})^{- \frac 32}, \frac {\big(\frac {{\alpha}({\alpha}- 1)}{2}\big)^\frac 12}{{\alpha}} \big)$ and let $$f(\xi) = \big|(\frac {{\alpha}({\alpha}- 1)}{2} N^{{\alpha}- 2})^{-\frac 12}\xi + N\big|^{\alpha},\;\; P(\xi) = N^{\alpha}+ \frac {{\alpha}N^{{\alpha}- 1}}{(\frac {{\alpha}({\alpha}- 1)}{2}N^{{\alpha}- 2})^\frac 12}\xi + \xi^2,$$ so that $R(\xi)=f(\xi) - P(\xi)$. We also denote $g(\xi)=-c_1N^{-{\alpha}/2}\xi^3 + P(\xi)$, $h(\xi) = c_2N^{-\frac {\alpha}2}\xi^3 + P(\xi)$. Then it suffices to show $\big|R(\xi)\big| \leq c_1N^{-{\alpha}/2}|\xi|^3$ on $\xi > \xi_1$ and $f \leq g, h \leq f$ on $\xi \leq \xi_1$ for some $\xi_1 < 0$. 
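Before carrying out the case analysis, the exponent $N^{-\alpha/2}$ in (\[error est\]) can be anticipated by a formal Taylor expansion (a heuristic only, valid when $|a\xi| \ll N$; it is not part of the proof). Writing $a := \big(\frac {{\alpha}({\alpha}- 1)}2 N^{{\alpha}- 2}\big)^{-\frac 12}$,

$$|N + a\xi|^{{\alpha}} = N^{{\alpha}}\Big(1 + \frac {a\xi}N\Big)^{{\alpha}} = N^{{\alpha}} + {\alpha}N^{{\alpha}- 1}a\xi + \frac {{\alpha}({\alpha}- 1)}2 N^{{\alpha}- 2}a^2\xi^2 + O\big(N^{{\alpha}- 3}a^3|\xi|^3\big).$$

The normalization of $a$ makes the coefficient of $\xi^2$ exactly $1$, so the first three terms are precisely $P(\xi)$, while $$N^{{\alpha}- 3}a^3 = \Big(\frac {{\alpha}({\alpha}- 1)}2\Big)^{-\frac 32} N^{{\alpha}- 3 - \frac {3({\alpha}- 2)}2} = \Big(\frac {{\alpha}({\alpha}- 1)}2\Big)^{-\frac 32} N^{-\frac {\alpha}2},$$ which is the source of the bound $|R(\xi)| \lesssim N^{-{\alpha}/2}|\xi|^3$.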
The following are easy to check: $$\begin{aligned} f'(\xi) &= {\alpha}\big|\big(\frac {{\alpha}({\alpha}- 1)}{2} N^{{\alpha}- 2}\big)^{-\frac 12}\xi + N\big|^{{\alpha}- 2}\Big(\big(\frac {{\alpha}({\alpha}- 1)}{2} N^{{\alpha}- 2}\big)^{-\frac 12}\xi + N\Big)\\ &\qquad \times \big(\frac {{\alpha}({\alpha}- 1)}2 N^{{\alpha}- 2}\big)^{-\frac 12},\\ f''(\xi) &= 2\big|(\frac {{\alpha}({\alpha}- 1)}2 N^{{\alpha}- 2})^{-\frac 12}\xi + N\big|^{{\alpha}- 2}N^{2 - {\alpha}},\\ f'''(\xi) &= 2({\alpha}- 2)\big|(\frac {{\alpha}({\alpha}- 1)}{2} N^{{\alpha}- 2})^{-\frac 12}\xi + N\big|^{{\alpha}- 4}\big((\frac {{\alpha}({\alpha}- 1)}{2} N^{{\alpha}- 2})^{-\frac 12}\xi + N\big)\\ &\qquad \times N^{2 - {\alpha}} \big(\frac {{\alpha}({\alpha}- 1)}{2} N^{{\alpha}- 2}\big)^{-\frac 12},\\ g'(\xi) &= -3c_1N^{-\frac {\alpha}2} \xi^2 + 2\xi + \frac {{\alpha}N^{{\alpha}- 1}}{(\frac {{\alpha}({\alpha}- 1)}2 N^{{\alpha}- 2})^\frac 12},\quad g''(\xi) = -6c_1N^{-\frac {\alpha}2}\xi + 2,\\ h'(\xi) &= 3c_2N^{-\frac {\alpha}2}\xi^2 + 2\xi + \frac {{\alpha}N^{{\alpha}- 1}}{(\frac {{\alpha}({\alpha}- 1)}2 N^{{\alpha}- 2})^\frac 12} > \frac 23 \times \frac {{\alpha}N^{{\alpha}- 1}}{(\frac {{\alpha}({\alpha}- 1)}2 N^{{\alpha}- 2})^\frac 12},\end{aligned}$$ provided that the derivatives exist. Let us set $\xi_1 = -\frac 12 (\frac {{\alpha}({\alpha}- 1)}{2} N^{{\alpha}- 2})^\frac 12 N$ and $\xi_2 = -2 (\frac {{\alpha}({\alpha}- 1)}{2} N^{{\alpha}- 2})^\frac 12 N$. Then we consider separately three cases $\xi \ge \xi_1;\;\; \xi_2 \leq \xi < \xi_1;\;\; \xi < \xi_2$. If $\xi \ge \xi_1$, $f$ is three times differentiable and $$|f'''(\xi)| \le | f'''(\xi_1)| = (2-{\alpha})\Big(\frac {{\alpha}({\alpha}- 1)}2\Big)^{- \frac 12} 2^{4 - {\alpha}} N^{-{\alpha}/2}.$$ Hence by Taylor’s theorem, we get (\[error est\]). We need only to handle the remaining two cases.
For both cases it is easy to show $h(\xi) \leq f(\xi).$ In fact, observe that $h(\xi_1) \leq \big(\frac {{\alpha}({\alpha}-1)}{8} + 1 - \frac {3{\alpha}}{2} \big)N^{\alpha}\leq (\frac 12)^{\alpha}N^{\alpha}= f(\xi_1)$. Since $f'$ is increasing, $f'(\xi) \leq f'(\xi_1) = (\frac 12)^{{\alpha}-1 }\frac {{\alpha}N^{{\alpha}- 1}}{(\frac {{\alpha}({\alpha}- 1)}2 N^{{\alpha}- 2})^\frac 12} \leq h'(\xi)$ for $\xi \leq \xi_1$. Hence, $h(\xi) \leq f(\xi)$ if $\xi \leq \xi_1$. To show that $f(\xi) \leq g(\xi)$ for $\xi_2 \leq \xi < \xi_1$, observe that $ f(\xi_1) = \big(\frac N2\big)^{\alpha}\le \big(\frac {\alpha}2 + \frac {{\alpha}({\alpha}- 1)}8 + 1\big)N^{\alpha}\leq g(\xi_1).$ Hence, it suffices to show $f'(\xi) \geq g'(\xi)$ for $\xi_2 \leq \xi < \xi_1$. Since $f'$ is increasing, $ f'(\xi) \geq f'(\xi_2) = -{\alpha}\big(\frac {{\alpha}({\alpha}- 1)}2\big)^{- \frac 12}N^{\frac {\alpha}2}. $ Since $g'$ is increasing, $ g'(\xi) \leq g'(\xi_1) \leq -5{\alpha}\big(\frac {{\alpha}({\alpha}- 1)}2\big)^{-\frac 12}N^{\frac {\alpha}2}. $ Hence, $f'(\xi) \geq g'(\xi)$ for $\xi_2 \leq \xi < \xi_1$. Finally, we show $f(\xi) \le g(\xi)$ for $\xi < \xi_2$. We note that $f''(\xi) \le g''(\xi)$ and $$f(\xi_2) = N^{\alpha}\le (64{\alpha}+ 2{\alpha}({\alpha}- 1) - 2{\alpha}+ 1)N^{\alpha}\le g(\xi_2).$$ Since $f'(\xi_2) = -{\alpha}\big(\frac {{\alpha}({\alpha}- 1)}{2}\big)^{-\frac 12}N^{\frac{{\alpha}}2} \geq (-96 + 2{\alpha})N^{\frac {\alpha}2} \ge g'(\xi_2)$, $f'(\xi) \geq g'(\xi)$. This together with $f(\xi_2) \le g(\xi_2)$ gives $f(\xi) \le g(\xi)$. Now we are ready to prove Theorem \[ill\]. Let $0 < \delta \ll {\varepsilon}\ll 1$ and $T > 0$ be given.
From Theorem \[ill-NLS\] we have two global solutions $v_1, v_2$ with initial data $\phi_1, \phi_2$, respectively, such that $$\begin{aligned} \label{ill-1-NLS} \|\phi_1\|_{H^s},\|\phi_2\|_{H^s} \lesssim {\varepsilon},\end{aligned}$$ $$\begin{aligned} \label{ill-2-NLS} \|\phi_1 - \phi_2\|_{H^s} \lesssim \delta,\end{aligned}$$ $$\begin{aligned} \label{ill-3-NLS} \sup_{0 \leq t \leq T}\|v_1(t) - v_2(t)\|_{H^s} \gtrsim {\varepsilon},\end{aligned}$$ $$\begin{aligned} \label{ill-4-NLS} \sup_{0 \leq t < \infty}\|v_1(t)\|_{H^5}, \|v_2(t)\|_{H^5} \lesssim {\varepsilon}.\end{aligned}$$ Define $V_1, V_2$ by $$\begin{aligned} V_j(t,x) := e^{iNx} e^{iN^{\alpha}t} v_j(s,y),\;\; j= 1, 2,\end{aligned}$$ where $(s,y)$ is given by (\[changevar\]). Let $u_1, u_2$ be smooth global solutions of the fractional Schrödinger equation with initial data $V_1(0,x), V_2(0,x)$, respectively. Now we rescale these solutions so that the required smallness, closeness, and separation conditions are satisfied. Let $\lambda \gg 1$ be a large parameter to be chosen later. For $j=1,2$, set $$u_j^\lambda := \lambda u_j(\lambda^{\alpha}t, \lambda x),\,\,\, V_j^\lambda := \lambda V_j(\lambda^{\alpha}t, \lambda x).$$ Thus we have $$u_j^\lambda(0,x) = V_j^\lambda(0,x) = \lambda e^{iN\lambda x} v_j\Big(0, \frac {\lambda x}{(\frac {{\alpha}({\alpha}- 1)}2 N^{{\alpha}- 2})^\frac 12}\Big).$$ Lemma \[inho-sob\] with $M = N\lambda, \tau = N^{\frac {{\alpha}- 2}2} \lambda^{-1}$ implies that if $s \ge 0$, $$\|u_j^\lambda(0)\|_{H^s} \lesssim \lambda^{s + 1/2} N^{s - (2 - {\alpha})/4} \|v_j(0)\|_{H^1};$$ if $\frac {2 - 3{\alpha}}{4({\alpha}+ 1)} < s <0$, $$\|u_j^\lambda(0)\|_{H^s} \lesssim \lambda^{s + 1/2} N^{s - (2 - {\alpha})/4} \|v_j(0)\|_{H^1}.$$ We choose $\lambda = N^{((2 - {\alpha})/4 - s)/(s + 1/2)}$. By (\[ill-1-NLS\]) and (\[ill-2-NLS\]) we have $$\|u_j^\lambda(0)\|_{H^s} \lesssim {\varepsilon}, \,\,\,\|u_1^\lambda(0) - u_2^\lambda(0) \|_{H^s} \lesssim \delta.$$ It remains to establish the separation of $u_1^\lambda$ and $u_2^\lambda$.
Rescaling gives $$\begin{aligned} \|u_j^\lambda(t) - V_j^\lambda(t)\|_{H^s} &\lesssim \lambda^{\max(s,0) + 1/2} \|u_j(\lambda^{\alpha}t) - V_j(\lambda^{\alpha}t)\|_{H^s}\\ &\leq \lambda^{\max(s,0) + 1/2}\|u_j(\lambda^{\alpha}t) - V_j(\lambda^{\alpha}t)\|_{H^{(2-{\alpha})/4}}. \end{aligned}$$ Lemma \[pertub\] and an induction argument on time intervals up to $\log N/\lambda^{\alpha}$ yield $$\|u_j(\lambda^{\alpha}t) - V_j(\lambda^{\alpha}t)\|_{H^{(2-{\alpha})/4}} \lesssim {\varepsilon}N^{-{\alpha}/2 + \eta},$$ whenever $0 < t \ll \log N/\lambda^{\alpha}$. Hence we have $$\|u_j^\lambda(t) - V_j^\lambda(t)\|_{H^s} \lesssim \lambda^{\max(s,0) + 1/2}{\varepsilon}N^{-{\alpha}/2 + \eta}.$$ From the hypothesis $\frac {2 - 3{\alpha}}{4({\alpha}+ 1)} < s$ it follows that, for a sufficiently small $\eta>0$, $$\|u_j^\lambda(t) - V_j^\lambda(t)\|_{H^s} \ll {\varepsilon}.$$ Applying Lemma \[inho-sob\] with $M = N\lambda, \tau = N^{\frac {{\alpha}- 2}2} \lambda^{-1}$, we have $$\|u_j^\lambda(t)\|_{H^s} \leq \|u_j^\lambda(t) - V_j^\lambda(t)\|_{H^s} + \|V_j^\lambda(t)\|_{H^s} \lesssim {\varepsilon}+ \|v_j(\lambda^{\alpha}t)\|_{H^s} \lesssim {\varepsilon}.$$ From (\[ill-3-NLS\]), we can find a time $t_0 > 0$ such that $\|v_1(t_0) - v_2(t_0)\|_{L^2} \gtrsim {\varepsilon}.$ Fixing $t_0$, we may choose $N$ so large that $t_0 \ll \log N$. From this and Lemma \[inho-sob\], we get $$\|V_1(t_0/\lambda^{\alpha}) - V_2(t_0/\lambda^{\alpha})\|_{H^s} \sim {\varepsilon}.$$ Choosing $N$ large enough, we can make $t_0/\lambda^{\alpha}< T$. Therefore the separation $\sup_{0 \leq t \leq T}\|u_1^\lambda(t) - u_2^\lambda(t)\|_{H^s} \gtrsim {\varepsilon}$ follows. Acknowledgments {#acknowledgments .unnumbered} --------------- Y. Cho was supported by the Research Funds of Chonbuk National University 2014, G. Hwang was supported by NRF grants 2012R1A1A1015116 and 2012R1A1B3001167 (Republic of Korea), S. Kwon was partially supported by NRF grant 2010-0024017 (Republic of Korea), and S. Lee was supported by NRF grant 2009-0083521 (Republic of Korea). [00]{} N. Burq, P. Gerard, and N.
Tzvetkov, [*An instability property of the nonlinear Schrödinger equation on $S^d$*]{}, Math. Res. Lett. **9** (2002), no. 2-3, 323-335. M. Christ, J. Colliander, T. Tao, [*Asymptotics, frequency modulation, and low regularity ill-posedness for canonical defocusing equations*]{}, Amer. J. Math. **125** (2003), no. 6, 1235-1293. , [*Instability of the periodic nonlinear Schrödinger equation*]{}, preprint arXiv:math/0311227 (2003). Y. Cho, H. Hajaiej, G. Hwang, and T. Ozawa, [*On the Cauchy problem of fractional Schrödinger equation with Hartree type nonlinearity*]{}, Funkcialaj Ekvacioj [**56**]{} (2013), 193-224. , [*On the orbital stability of fractional Schrödinger equations*]{}, Comm. Pure Appl. Anal. **13** (2014), 1267-1282. , [*Profile decompositions and blowup phenomena of mass critical fractional Schrödinger equations*]{}, Nonlinear Analysis **86** (2013), 12-29. Y. Cho and S. Lee, [*Strichartz estimates in spherical coordinates*]{}, Indiana Univ. Math. J. **62** (2013), no. 3, 991-1020. Y. Cho, T. Ozawa, S. Xia, [*Remarks on some dispersive estimates*]{}, Commun. Pure Appl. Anal. [**10**]{} (2011), no. 4, 1121-1128. S. Demirbas, M. B. Erdoğan and N. Tzirakis, [*Existence and uniqueness theory for the fractional Schrödinger equation on the torus*]{}, preprint arXiv:1312.5249. B. Guo and Z. Huo, [*Global Well-Posedness for the Fractional Nonlinear Schrödinger Equation*]{}, Comm. Partial Differential Equations **36** (2010), 247-255. A. D. Ionescu and F. Pusateri, [*Nonlinear fractional Schrödinger equations in one dimension*]{}, J. Func. Anal. **266** (2014), 139-176. S. Kwon, [*Well-posedness and ill-posedness of the fifth-order modified KdV equations*]{}, Elec. J. Diff. Eqns. **2008** (2008), no. 1, 1-15. N. Laskin, [*Fractional quantum mechanics and Lévy path integrals*]{}, Phys. Lett. A **268** (2000), 298-305. L. Molinet, [*On ill-posedness for the one-dimensional periodic cubic Schrödinger equation*]{}, Math. Res. Lett. **16** (2009), 111-120. T.
Tao, [*Multilinear weighted convolution of $L^2$ functions, and applications to nonlinear dispersive equations*]{}, Amer. J. Math. 123 (2001), no. 5, 839-908. [^1]: [^2]: The same counter-example as the cubic NLS in [@bst] gives the ill-posedness for $ s<0$.
--- abstract: | We consider secret key generation for a “pairwise independent network” model in which every pair of terminals observes correlated sources that are independent of sources observed by all other pairs of terminals. The terminals are then allowed to communicate publicly with all such communication being observed by all the terminals. The objective is to generate a secret key shared by a given subset of terminals at the largest rate possible, with the cooperation of any remaining terminals. Secrecy is required from an eavesdropper that has access to the public interterminal communication. A (single-letter) formula for secret key capacity brings out a natural connection between the problem of secret key generation and a combinatorial problem of maximal packing of Steiner trees in an associated multigraph. An explicit algorithm is proposed for secret key generation based on a maximal packing of Steiner trees in a multigraph; the corresponding maximum rate of Steiner tree packing is thus a lower bound for the secret key capacity. When only two of the terminals or when all the terminals seek to share a secret key, the mentioned algorithm achieves secret key capacity in which case the bound is tight. [*Index Terms*]{} – PIN model, private key, public communication, secret key capacity, security index, spanning tree packing, Steiner tree packing, wiretap secret key. author: - | Sirin Nitinawarat, [*Student Member, IEEE*]{}, Chunxuan Ye, [*Member, IEEE*]{},\ Alexander Barg, Prakash Narayan, [*Fellow, IEEE*]{}, and Alex Reznik, [*Member, IEEE*]{} [^1] [^2] [^3] [^4] [^5] title: '**Secret Key Generation for a Pairwise Independent Network Model**' --- Introduction ============ Suppose that terminals $1, \ldots, m$ observe distinct but correlated signals with the feature that every pair of terminals observes a corresponding pair of correlated signals that are independent of all other pairs of signals. 
Following these observations, all the terminals can communicate interactively over a public noiseless channel of unlimited capacity, with all such communication being observed by all the terminals. The goal is to generate a secret key (SK), i.e., secret common randomness, for a given subset $A$ of the terminals in $\mathcal{M} = \{1, \ldots, m\}$ at the largest rate possible, with secrecy being required from an eavesdropper that observes the public interterminal communication. All the terminals in $\mathcal{M}$ cooperate in generating the SK for the secrecy-seeking set $A$. This model for SK generation, called a “pairwise independent network” model, was introduced in [@YeRez07] (see also [@YeRezSha06]). Abbreviated hereafter as the PIN model, it is motivated by practical aspects of a wireless communication network in which terminals communicate on the same frequency. In a typical multipath environment, the wireless channel between each pair of terminals produces a random mapping between the transmitted and received signals which is time-varying and location-specific. For a fixed time and location, this mapping is reciprocal, i.e., effectively the same in both directions. Also, the mapping decorrelates over different time-coherence intervals as well as over distances of the order of a few wavelengths. The PIN model is, in fact, a special case of a general multiterminal “source model” for secrecy generation studied by Csiszár and Narayan [@CsiNar04]. The latter followed leading investigations by Maurer [@Mau90; @Mau93] and Ahlswede and Csiszár [@AhlCsi93] of SK generation by two terminals from their correlated observations complemented by public communication. A single-letter characterization of secret key capacity – the largest rate at which secrecy can be generated – for the terminals in an arbitrary subset $A$ of $\mathcal{M}$ was provided in [@CsiNar04]. 
A particularization of this (general) SK capacity formula to our PIN model displays the special feature that it can be expressed in terms of a linear combination of mutual information terms that involve only mutually independent pairs of “reciprocal” random variables (rvs). Each such mutual information term represents the maximum rate of an SK that can be generated solely by a corresponding pair of terminals from only their own observed signals using public communication [@Mau90; @Mau93; @AhlCsi93]. This observation leads to the following question that is our main motivation: [*Can an SK of optimum rate for the terminals in*]{} $A$ [*be generated by propagating mutually independent and rate-optimal SKs for pairs of terminals in*]{} $\mathcal{M}$? An examination of this question brings out points of contact between SK generation for a PIN model and a combinatorial problem of tree packing in a multigraph. We propose an explicit algorithm for propagating pairwise SKs for pairs of terminals in $\mathcal{M}$ to form a groupwide SK for the terminals in $A$. This algorithm is based on a maximal packing of Steiner trees (for $A$) in a multigraph associated with the PIN model. Thus, the maximum rate of Steiner tree packing in this multigraph is always a lower bound for SK capacity. This bound is shown to be tight when the secrecy-seeking set $A$ contains only two terminals or when it consists of all the terminals. In these situations, our algorithm is capacity-achieving. It is of independent interest to note that given a combinatorial problem of determining the maximum rate of Steiner tree packing for $A$ in a multigraph, the SK capacity of an associated PIN model provides, in reciprocity, an upper bound for the mentioned rate, which is tight for the case $|A|=2$ as well as for the spanning tree case $A = \mathcal{M}$. 
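To make the propagation idea concrete for the spanning tree case $A = \mathcal{M}$, the following is a toy sketch (our own simplification, not the paper's formal protocol: each pairwise SK is modeled as a single uniform bit on a multigraph edge, the function names are ours, and greedy packing only lower-bounds the optimal packing number). It packs edge-disjoint spanning trees and then, along one tree, propagates a single key bit by public XOR broadcasts, so that every terminal recovers the same bit while the eavesdropper observes only the key masked by fresh independent secret bits.

```python
def greedy_spanning_tree_packing(n, edges):
    """Greedily extract edge-disjoint spanning trees from a multigraph.

    `edges` is a list of (u, v) pairs; parallel edges are allowed.
    Each pass grows a spanning tree by union-find; leftover edges feed
    the next pass. Stops when the remaining edges no longer connect
    all n terminals.
    """
    pool, trees = list(edges), []
    while True:
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path compression
                x = parent[x]
            return x

        tree, rest = [], []
        for (u, v) in pool:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                tree.append((u, v))
            else:
                rest.append((u, v))
        if len(tree) < n - 1:   # remaining pool is disconnected
            return trees
        trees.append(tree)
        pool = rest


def propagate_group_key(n, tree, edge_key, root=0):
    """Turn independent pairwise key bits on tree edges into one group bit.

    `edge_key[frozenset((u, v))]` is a uniform bit known only to u and v.
    The key of the first tree edge at the root becomes the group key K;
    for every other tree edge (p, c) the parent publicly broadcasts
    K xor k_{pc}, from which c recovers K. Each broadcast is K masked by
    a fresh independent secret bit, so the transcript reveals nothing
    about K to the eavesdropper.
    """
    adj = {i: [] for i in range(n)}
    for (u, v) in tree:
        adj[u].append(v)
        adj[v].append(u)
    first = adj[root][0]
    K = edge_key[frozenset((root, first))]
    known, transcript = {root: K, first: K}, []
    stack = [root, first]
    while stack:
        p = stack.pop()
        for c in adj[p]:
            if c not in known:
                b = known[p] ^ edge_key[frozenset((p, c))]
                transcript.append(b)                  # public communication
                known[c] = b ^ edge_key[frozenset((p, c))]
                stack.append(c)
    return known, transcript
```

On the doubled triangle (two parallel edges between each pair of three terminals), the greedy pass extracts three edge-disjoint spanning trees, matching the packing number, so three independent group key bits (one per tree) can be generated.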
In the study of secrecy generation for a multiterminal source model, the notions of wiretap SK [@Mau90; @Mau93; @AhlCsi93; @CsiNar04] and private key [@CsiNar04] also have been proposed. The former notion corresponds to the eavesdropper having additional access to a terminal not in the secrecy-seeking set $A$ and from which too the key must be concealed; this “wiretapped” terminal does not cooperate in secrecy generation. A single-letter characterization of the corresponding capacity remains unresolved in general but for partial results and bounds (cf. e.g., [@AhlCsi93; @Mau93; @RenWo03; @CsiNar04; @GohAnan07; @GohAnan08; @CsiNar08]). The notion of a private key is less restrictive, with the wiretapped terminal being allowed to cooperate; the corresponding capacity is known [@CsiNar04]. We argue in Section IV below that for a PIN model these two notions correspond to SK generation for a reduced PIN model, thereby justifying our sole focus on SK capacity. Basic concepts and definitions are presented in Section II. Section III contains statements of our results and proofs; specifically, the SK capacity for the PIN model is given in Section III.A, the connection of SK capacity with Steiner tree packing is treated in Section III.B, and with spanning tree packing in Section III.C. Concluding remarks and pointers to a sequel paper are contained in Section IV. Preliminaries ============= We shall be concerned throughout with a PIN model, which is a special case of a general multiterminal “source model” for secrecy generation with public communication (see [@Mau93; @AhlCsi93; @CsiNar04; @CsiNar08]). Suppose that terminals $1, \ldots, m,\ m \geq 2,$ observe $n$ independent and identically distributed (i.i.d.) repetitions of the rvs $\tilde{X}_1, \ldots, \tilde{X}_m,$ denoted by $\tilde{X}_1^n, \ldots, \tilde{X}_m^n,$ where $\tilde{X}_i^n = \left (\tilde{X}_{i,1}, \ldots, \tilde{X}_{i,n} \right),\ i \in \mathcal{M} = \{1, \ldots, m\}$. 
Each rv $\tilde{X}_i,\ i \in \mathcal{M},$ is of the form $\tilde{X}_i = \left( X_{ij},\ j \in \mathcal{M} \backslash \{i\} \right)$ with $m-1$ components, and the “reciprocal pairs” of rvs $\{\left ( X_{ij}, X_{ji} \right ),\ 1 \leq i < j \leq m \}$ are mutually independent. See Figure 1. Thus, every pair of terminals in $\mathcal{M}$ is associated with a corresponding pair of rvs that are independent of pairs of rvs associated with all the other pairs of terminals. [*All the rvs are assumed to take their values in finite sets*]{}. Following their observation of the random sequences as above, the terminals in $\mathcal{M}$ are allowed to communicate among themselves over a public noiseless channel of unlimited capacity; all such communication, which may be interactive and conducted in multiple rounds, is observed by all the terminals. A communication from a terminal, in general, can be any function of its observed sequence as well as all previous public communication. The public communication of all the terminals will be denoted collectively by $\mathbf{F} = \mathbf{F}^{(n)}$. The overall goal is to generate shared secret common randomness for a given set $A \subseteq \mathcal{M}$ of terminals at the largest rate possible, with the remaining terminals (if any) cooperating in secrecy generation. The resulting secret key must be shared by every terminal in $A$; but it need not be accessible to the terminals not in $A$ and nor does it need to be concealed from them. It must, of course, be kept secret from the eavesdropper that has access to the public interterminal communication $\mathbf{F}$, but is otherwise passive, i.e., unable to tamper with this communication. 
[Figure 1: The PIN Model — terminals $1, \ldots, m$, with terminal $i$ observing $\tilde{X}_i = \left( X_{ij},\ j \in \mathcal{M} \backslash \{i\} \right)$; the secrecy-seeking set $A$ is marked.] The following basic concepts and definitions are from [@CsiNar04; @CsiNar08]. Given $\epsilon >0$, for rvs $U, V$, we say that $U$ is [*$\epsilon$-recoverable*]{} from $V$ if $Pr\{ U \neq f(V)\} \leq \epsilon$ for some function $f(V)$ of $V$. With the rvs $K$ and $\mathbf{F}$ representing a secret key and the eavesdropper’s knowledge, respectively, information theoretic secrecy entails that the security index[^6] $$s(K;\mathbf{F}) = \log{|\mathcal{K}|} - H(K|\mathbf{F})$$ be required to be small, where $\mathcal{K}$ is the range of $K$ and $|\centerdot|$ denotes cardinality. This requirement simultaneously renders $K$ to be nearly uniformly distributed and nearly independent of $\mathbf{F}$. **Definition 1:** Given any set $A \subseteq \mathcal{M}$ of size $|A| \geq 2,$ a rv $K$ constitutes an $\epsilon$-secret key ($\epsilon$-SK) for the set of terminals $A$, achievable with communication $\mathbf{F}$, if $K$ is $\epsilon$-recoverable from $\left (\tilde{X}_i^n, \mathbf{F} \right )$ for each $i \in A$ and, in addition, it satisfies the secrecy condition $$s(K; \mathbf{F}) \leq \epsilon. \label{eqn:1}$$ The condition (\[eqn:1\]) corresponds to the concept of “strong” secrecy in which $\epsilon=\epsilon_n=o_n(1)$ [@Mau94; @CsiNar04; @CsiNar08], as distinct from the earlier “weak” secrecy concept which requires only that $\epsilon_n = o(n)$ [@Mau93; @AhlCsi93].
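As a quick numerical check of this definition (the function name and the toy joint distributions below are ours, purely for illustration), the security index can be computed directly from a joint pmf of $(K, \mathbf{F})$: it vanishes for a uniform key independent of the communication, and equals $\log|\mathcal{K}|$ when the communication fully reveals the key.

```python
import math
from collections import defaultdict

def security_index(joint):
    """s(K;F) = log|K| - H(K|F) for a joint pmf given as {(k, f): prob}.

    Natural logarithms are used; |K| is the size of the key alphabet
    appearing in the support. s = 0 iff K is uniform and independent of F.
    """
    key_alphabet = {k for (k, f) in joint}
    p_f = defaultdict(float)
    for (k, f), p in joint.items():
        p_f[f] += p
    # H(K|F) = -sum p(k,f) log p(k|f)
    h_k_given_f = -sum(p * math.log(p / p_f[f])
                       for (k, f), p in joint.items() if p > 0)
    return math.log(len(key_alphabet)) - h_k_given_f
```

For a one-bit key independent of a one-bit transcript the index is $0$; if the transcript equals the key, the index is $\log 2$.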
**Definition 2:** A number $R$ is an achievable SK rate for a set of terminals $A \subseteq \mathcal{M}$ if there exist $\epsilon_n$-SKs $K^{(n)}$ for $A$, achievable with communication $\mathbf{F}$, such that $$\epsilon_n \rightarrow 0\ \ \mbox{and}\ \ \frac{1}{n}\log{|\mathcal{K}^{(n)}|} \rightarrow R\ \ \mbox{as}\ \ n \rightarrow \infty. \nonumber$$ The largest achievable SK rate for $A$ is the SK capacity $C(A)$. Thus, by definition, the SK capacity for $A$ is the largest rate of a rv that is recoverable at each terminal in $A$ from the information available to it, and is nearly uniformly distributed and effectively concealed from an eavesdropper with access to the public interterminal communication; it need not be concealed from the terminals in $A^c = \mathcal{M} \backslash A$ that cooperate in secrecy generation. A single-letter characterization of the SK capacity $C(A)$, $A \subseteq \mathcal{M}$, for a general multiterminal source model, of which the PIN model is a special case, is provided in [@CsiNar04]. An upper bound for $C(A)$ in terms of (Kullback-Leibler) divergence is also given therein and shown to be tight in special cases. These results play material roles below. Results ======= Our main results are the following. First, we obtain, upon particularizing the results of [@CsiNar04], a (single-letter) expression for $C(A)$ for a PIN model, in terms of a linear combination of mutual information terms that involve only pairs of “reciprocal” rvs $\{ \left (X_{ij}, X_{ji}\right),\ 1 \leq i \neq j \leq m \}$. Second, stemming from this observation, a connection is drawn between SK generation for the PIN model and the combinatorial problem of maximal packing of Steiner trees in an associated multigraph. Specifically, we show that the maximum rate of Steiner tree packing in the multigraph is always a lower bound for SK capacity. 
Third, for the case $|A| = 2$ (when the Steiner tree becomes a path connecting the two vertices in $A$) and for the case $A=\mathcal{M}$ (when the Steiner tree becomes a spanning tree), the previous lower bound is shown to be tight. This is done by means of an explicit algorithm, based on maximal path packing and maximal spanning tree packing, respectively, that forms an SK out of independent SKs for pairs of terminals. In fact, the maximum rate of the SK thereby generated equals the previously known upper bound for SK capacity [@CsiNar04] mentioned above. SK Capacity ----------- We first give the SK capacity $C(A)$ for the PIN model. For $A \subseteq \mathcal{M}$, let $$\mathcal{B}(A) = \{ B \subset \mathcal{M}:\ B \neq \emptyset,\ B \nsupseteq A\}$$ and $\mathcal{B}_i(A)$ be its subset consisting of those $B \in \mathcal{B}(A)$ that contain $i$, $i \in \mathcal{M}$. Let $\Lambda(A)$ be the set of all collections\ $\lambda = \{\lambda_B:\ B \in \mathcal{B}(A)\}$ of weights $0 \leq \lambda_B \leq 1$, satisfying $$\sum_{B \in \mathcal{B}_i(A)} \lambda_B = 1\ \ \ \mathrm{for~all}\ \ \ i \in \mathcal{M}. \label{eqn:2}$$ ***Proposition 3.1:** For a PIN model, the SK capacity for a set of terminals $A \subseteq \mathcal{M}$, with $|A| \geq 2$, is* $$\hspace{-2.8in} C(A) = \nonumber$$ $$\hspace{0.2in} \min_{\lambda \in \Lambda(A)} \left [ \sum_{1 \leq i < j \leq m} \left ( \mathop{\sum_{B \in \mathcal{B}(A):}}_{i \in B,\,j \in B^c} \lambda_B \right ) I(X_{ij} \wedge X_{ji}) \right ]. \label{eqn:3}$$ [*Remark:*]{} (i) It is of interest in (\[eqn:3\]) that the SK capacity for a PIN model depends on the joint probability distribution of the underlying rvs only through a linear combination of the pairwise reciprocal mutual information terms. 
\(ii) We note from [@CsiNar04 Theorem 3] that additional independent randomization at the terminals in $\mathcal{M}$, enabled by giving them access to the mutually independent rvs $M_1, \ldots, M_m$, respectively, that are independent also of $(\tilde{X}_1^n, \ldots, \tilde{X}_m^n)$, does not serve to enhance SK capacity. Heuristically speaking, the mentioned independence of the randomization forces any additional “common randomness” among the terminals in $A$ to be acquired only through public communication, which is observed fully by the eavesdropper. On the other hand, randomization can serve to enhance secrecy generation for certain models (cf. e.g., [@Wyner75]). [**Proof:**]{} The proof entails an application of the formula for SK capacity in [@CsiNar04; @CsiNar08] to the PIN model. For $B \in \mathcal{B}(A)$, denote $\tilde{X}_B = \left ( \tilde{X}_i,\ i \in B \right )$. From [@CsiNar08 Theorem 3.1], $$\hspace{-3.0in} C(A) = \nonumber$$ $$H\left ( \tilde{X}_{1}, \ldots, \tilde{X}_{m}\right ) - \max_{\lambda\,\in\,\Lambda(A)} \sum_{B\,\in\,\mathcal{B}(A)} \lambda_B H \left ( \tilde{X}_B | \tilde{X}_{B^c} \right). \label{eqn:4}$$ For the PIN model, since $\tilde{X}_i = \left ( X_{ij},\ j~\in~\mathcal{M} \backslash \{i\} \right),$ we observe in (\[eqn:4\]) that $$\begin{aligned} H(\tilde{X}_1, \ldots, \tilde{X}_m) &=& H \left ( \{(X_{ij}, X_{ji})\}_{1 \leq i < j \leq m} \right ) \nonumber \\ &=& \sum_{1 \leq i < j \leq m} H(X_{ij}, X_{ji}) \label{eqn:5}\end{aligned}$$ and $$\hspace{-1.0in} H(\tilde{X}_B | \tilde{X}_{B^c}) = H(\tilde{X}_{\mathcal{M}}) - H(\tilde{X}_{B^c})$$ $$\begin{aligned} &=& \sum_{1 \leq i < j \leq m} H(X_{ij}, X_{ji}) - \mathop{\sum_{1 \leq i < j \leq m,}}_{i \in B^c,\,j \in B^c} H (X_{ij}, X_{ji}) \nonumber \\ & & - \sum_{i \in B^c,\,j \in B} H(X_{ij}) \nonumber \\ &=& \mathop{\sum_{1 \leq i < j \leq m,}}_{i \in B,\,j \in B} H (X_{ij}, X_{ji}) + \sum_{i \in B,\,j \in B^c} H(X_{ij} | X_{ji}).
\label{eqn:6}\end{aligned}$$ A straightforward manipulation of (\[eqn:4\]), using (\[eqn:5\]), (\[eqn:6\]), gives $$\begin{aligned} C(A) = \hspace{-0.2in} & & \min_{\lambda\,\in\,\Lambda(A)}~ \sum_{1 \leq i < j \leq m} \Bigg [ H \left ( X_{ij}, X_{ji} \right ) \nonumber \\ & & \ \ \ \ \ \ \ \ \ \ \ \ - \left ( \mathop{\sum_{B \in \mathcal{B}(A):}}_{i \in B,\,j \in B} \lambda_B \right ) H \left ( X_{ij}, X_{ji} \right ) \nonumber \\ & & \ \ \ \ \ \ \ \ \ \ \ \ - \left ( \mathop{\sum_{B \in \mathcal{B}(A):}}_{i \in B,\,j \in B^c} \lambda_B \right ) H \left ( X_{ij} | X_{ji} \right ) \nonumber \\ & & \ \ \ \ \ \ \ \ \ \ \ \ - \left ( \mathop{\sum_{B \in \mathcal{B}(A):}}_{i \in B^c,\,j \in B} \lambda_B \right ) H \left ( X_{ji} | X_{ij} \right ) \Bigg ]. \nonumber\end{aligned}$$ Since, by (2), $$\mathop{\sum_{B \in \mathcal{B}(A):}}_{i \in B,\,j \in B} \lambda_B = 1 - \mathop{\sum_{B \in \mathcal{B}(A):}}_{i \in B,\,j \in B^c} \lambda_B = 1 - \mathop{\sum_{B \in \mathcal{B}(A):}}_{i \in B^c,\,j \in B} \lambda_B,$$ we get $$\hspace{-2.8in} C(A) = \nonumber$$ $$\hspace{0.2in} \min_{\lambda \in \Lambda(A)} \left [ \sum_{1 \leq i < j \leq m} \left ( \mathop{\sum_{B \in \mathcal{B}(A):}}_{i \in B,\,j \in B^c} \lambda_B \right ) \left (\begin{array}{ll} H(X_{ij} , X_{ji}) \\ - H(X_{ij} | X_{ji}) \\ - H(X_{ji} | X_{ij}) \end{array} \right ) \right ], \nonumber$$ thereby completing the proof. $\qed$ An upper bound had been established for SK capacity for a general multiterminal source model [@CsiNar04 Example 4]. This bound was expressed in terms of the (Kullback-Leibler) divergence between the joint distribution of the rvs defining the underlying correlated sources and the product of the (marginal) distributions associated with appropriate partitions of these rvs, thereby measuring the minimum mutual dependence among the latter. The bound was particularized to the PIN model in [@YeRez07], and is restated below in a slightly different form that will be used subsequently. 
Let $\mathcal{P}$ be a partition of $\mathcal{M} = \{1, \ldots, m\}$, and denote the number of atoms of $\mathcal{P}$ by $| \mathcal{P} |$.\ ***Lemma 3.2 [@YeRez07]:** The SK capacity $C(A),\ A \subseteq \mathcal{M},$ for the PIN model is bounded above according to* $$\hspace{-2.8in} C(A) \leq$$ $$\hspace{0.1in} C^{ub}(A) \triangleq \min_{\mathcal{P}} \left ( \frac{1}{|\mathcal{P}|-1} \right ) \left [ \mathop{\sum_{1 \leq i < j \leq m}}_{(i,j)~\mbox{crosses}~\mathcal{P}} I(X_{ij} \wedge X_{ji}) \right ], \label{eqn:11}$$ where for a fixed $\mathcal{P}$, a pair of indices $(i, j)$ crosses $\mathcal{P}$ if $i$ and $j$ are in different atoms of $\mathcal{P}$. The minimization in the right side of (\[eqn:11\]) is over all partitions $\mathcal{P}$ of $\mathcal{M}$ for which every atom of $\mathcal{P}$ intersects $A$. \ SK Capacity and Steiner Tree Packing ------------------------------------ There exists a natural connection between SK generation for the PIN model and the combinatorial problem of tree packing in an associated multigraph. Let $G=\left (V, E\right )$ be a multigraph, i.e., a connected undirected graph with no self-loops and with multiple edges possible between any vertex pair, whose vertex set $V = \mathcal{M} = \{1, \ldots, m\}$ and edge set $E = \{e_{ij} \geq 0,~1 \leq i < j \leq m\}$, where $e_{ij}$ is the number of edges connecting the pair of vertices $i, j,~1 \leq i < j \leq m$. **Definition 3:** For $A \subseteq \mathcal{M}$, a [*Steiner tree*]{} of $G$ (for $A$) is a subgraph of $G$ that is a tree and whose vertex set contains $A$. A [*Steiner packing*]{} of $G$ is any collection of edge disjoint Steiner trees of $G$. Let $\mu(A, G)$ denote the maximum size of such a packing (cf. [@Die05]). We note that when $|A| = 2$, a Steiner tree for $A$ always contains a path connecting the two vertices in $A$. Clearly, it suffices to take $\mu(A, G)$ to be the maximum number of edge disjoint paths connecting the two terminals in $A$.
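For $|A| = 2$, the quantity $\mu(A, G)$ is simply a unit-capacity maximum flow, which by Menger's theorem equals a minimum cut. The sketch below is a hypothetical illustration (the five-edge multigraph on four vertices and the choice of terminals are invented for the example, not taken from the paper):

```python
from collections import deque
from itertools import combinations

def max_edge_disjoint_paths(n, mult, s, t):
    """Maximum number of edge-disjoint s-t paths in an undirected
    multigraph, via unit-increment Edmonds-Karp max flow.
    mult[(u, v)] = number of parallel edges between u and v."""
    cap = [[0] * n for _ in range(n)]
    for (u, v), e in mult.items():
        cap[u][v] += e
        cap[v][u] += e
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue:                       # BFS for an augmenting path
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:
            return flow
        v = t
        while v != s:                      # push one unit along the path
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1

def min_cut(n, mult, s, t):
    """Minimum number of edges crossing a bipartition {B, B^c}
    with s in B and t in B^c (brute force over subsets)."""
    others = [v for v in range(n) if v not in (s, t)]
    best = float("inf")
    for r in range(len(others) + 1):
        for extra in combinations(others, r):
            B = {s} | set(extra)
            cross = sum(e for (u, v), e in mult.items()
                        if (u in B) != (v in B))
            best = min(best, cross)
    return best

# Hypothetical multigraph: the multiplicities play the role of
# n * I(X_ij ; X_ji) in an associated multigraph G^(n).
mult = {(0, 1): 2, (0, 2): 1, (1, 3): 1, (2, 3): 2, (0, 3): 1}
assert max_edge_disjoint_paths(4, mult, 0, 3) == min_cut(4, mult, 0, 3) == 3
```

Here $\mu(A, G) = 3$ for terminals $0$ and $3$: the direct edge plus one path through each intermediate vertex.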
Next, assume without any loss of generality in the PIN model that all pairwise reciprocal mutual information values $I(X_{ij} \wedge X_{ji}),\ 1 \leq i \neq j \leq m,$ are rational numbers. Let $\mathcal{N}$ denote the collection of positive integers $n$ such that $n I(X_{ij} \wedge X_{ji})$ is integer-valued for all $1 \leq i \neq j \leq m$; clearly, the elements of $\mathcal{N}$ form an arithmetic progression. For a PIN model, consider a sequence of associated multigraphs $\{G^{(n)} = \left (\mathcal{M}, E^{(n)}\right ),\ n \in \mathcal{N}\}$, where $E^{(n)},\ n \in \mathcal{N},$ is such that $e_{ij} = nI(X_{ij} \wedge X_{ji})$. We term $\sup_{n \in \mathcal{N}} \frac{1}{n} \mu(A, G^{(n)})$ the [*maximum rate of Steiner tree packing*]{} in the multigraph $G = (\mathcal{M}, E)$. The connection between SK generation for the PIN model and Steiner tree packing is formalized below.\ ***Theorem 3.3:** For a PIN model,* \(i) the SK capacity satisfies $$C(A) \geq \sup_{n\,\in\,\mathcal{N}} \frac{1}{n}~\mu(A, G^{(n)}) \label{eqn:7}$$ for every $A \subseteq \mathcal{M}$; \(ii) when $|A|=2$, the SK capacity is $$\begin{aligned} C(A) &=& \sup_{n\,\in\,\mathcal{N}} \frac{1}{n}\ \mu(A, G^{(n)}) \nonumber \\ &=& C^{ub}(A). \label{eqn:14}\end{aligned}$$ [*Remarks:*]{} (i) The inequality in (\[eqn:7\]) can be strict, as shown by a specific example in a sequel paper [@Nitin_Nar]. See also the remark following Theorem 3.4 for a heuristic explanation. \(ii) An exact determination of $\mu(A, G)$ is known to be NP-hard [@Cheriyan_Salavatipour06]. A nontrivial upper bound for $\mu(A, G)$, similar in form to (\[eqn:11\]), is known [@Jian-etal03 paragraph 5 of Section 1]. This bound can be extended to yield an upper bound for $\sup_{n \in \mathcal{N}} \frac{1}{n} \mu(A, G^{(n)})$ which, in general, is inferior to that provided by $C(A)$ in (\[eqn:7\]). \(i) The proof consists of two main steps.
In the first step, fix an $\epsilon >0$ that is smaller than every positive $I(X_{ij} \wedge X_{ji}),\ 1 \leq i < j \leq m$. Each pair of terminals $i, j$ with $I(X_{ij} \wedge X_{ji}) > 0$, generates a (pairwise) SK $K_{ij} = K_{ij}^{(n)}$ of size $\lfloor n (I(X_{ij} \wedge X_{ji}) - \epsilon) \rfloor$ bits, using public communication $F_{ij} = F_{ij}^{(n)}$, and satisfying $$s(K_{ij} ; F_{ij}) = o_n(1); \label{eqn:10-a}$$ the existence of such an SK follows from [@Mau94]. The SK achievability scheme in [@Mau94] consists of a “weak” SK generated by Slepian-Wolf data compression, followed by “privacy amplification” to extract a “strong” SK. Note by the definition of the PIN model that $\{(K_{ij}, F_{ij})\}_{1 \leq i < j \leq m}$ are mutually independent. In the second step, consider the sequence of multigraphs\ $\left \{ G^{(n)}_{\epsilon} = (\mathcal{M}, \widetilde{E^{(n)}}) \right \}_{n=1}^{\infty}$, where $\widetilde{E^{(n)}}$ is such that the number of edges between any pair of vertices $i, j$ equals\ $\lfloor n (I(X_{ij} \wedge X_{ji}) - \epsilon) \rfloor$. We next show that every Steiner tree in a Steiner tree packing of $G^{(n)}_{\epsilon}$ yields one shared bit for the terminals in $A$ that is independent of the communication in that Steiner tree. Specifically, for edges $(i, j)$ and $(i, j'),\ j \neq j',$ with common vertex $i$ in the Steiner tree, vertex $i$ broadcasts to vertices $j, j'$ the binary sum of two independent SK bits – one with $j$ and the other with $j'$ – obtained from the first step. This enables $i, j, j'$ to share any one of these two bits, with the attribute that the shared bit is independent of the binary sum. This method of propagation ([@CsiNar04 proof of Theorem 5]) enables all the vertices in $A$, which are connected in the Steiner tree, to share one bit that is independent of all the broadcast binary sums from this tree. 
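For concreteness, the binary-sum propagation just described can be simulated in a few lines. The following sketch is a hypothetical illustration (the star-shaped tree with center $0$ and terminals $A = \{1, 2, 3\}$, and the key bits, are invented for the example): the center broadcasts XORs of keys on adjacent edges, each terminal recovers the key bit of a reference edge, and the recovered bit remains unbiased given the broadcasts.

```python
import itertools
from collections import Counter

def propagate(k):
    """One Steiner tree: edge (0, i) carries an independent pairwise SK
    bit k[i].  The center broadcasts XORs of adjacent edge keys; every
    terminal then recovers the bit of the reference edge (0, 1)."""
    broadcasts = (k[1] ^ k[2], k[1] ^ k[3])   # public communication
    at_1 = k[1]                               # terminal 1 holds it already
    at_2 = broadcasts[0] ^ k[2]               # (k1 ^ k2) ^ k2 = k1
    at_3 = broadcasts[1] ^ k[3]               # (k1 ^ k3) ^ k3 = k1
    return (at_1, at_2, at_3), broadcasts

# All terminals always agree on the shared bit.
for k1, k2, k3 in itertools.product((0, 1), repeat=3):
    bits, _ = propagate({1: k1, 2: k2, 3: k3})
    assert bits == (k1, k1, k1)

# Secrecy: conditioned on any value of the broadcasts, the shared bit
# is unbiased, so the public communication reveals nothing about it.
counts = Counter()
for k1, k2, k3 in itertools.product((0, 1), repeat=3):
    bits, f = propagate({1: k1, 2: k2, 3: k3})
    counts[(f, bits[0])] += 1
assert all(counts[(f, 0)] == counts[(f, 1)] for (f, _) in counts)
```

The exhaustive checks above mirror the two properties needed of each tree: recoverability at every vertex of the tree, and independence of the shared bit from the broadcast binary sums.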
Therefore, the maximum number of such shared bits for the terminals in $A$ that can be generated by this procedure equals $\mu(A, G^{(n)}_{\epsilon})$. Denote these shared bits (of size $\mu(A, G^{(n)}_{\epsilon})$) and the communication messages generated by the mechanism in this second step by $K = K^{(n)}(\{K_{ij}\}_{1 \leq i < j \leq m})$ and $F = F^{(n)}(\{K_{ij}\}_{1 \leq i < j \leq m})$, respectively. We claim that $K$ constitutes an SK for $A$. Specifically, it remains to show that $K$ satisfies the secrecy condition (\[eqn:1\]) with respect to the overall communication in steps 1 and 2. To this end, we denote by $K_R^{(n)}(\{K_{ij}\}_{1 \leq i < j \leq m})$ all the pairwise SK bits generated in the first step, that are residual from the maximal Steiner tree packing of $G^{(n)}_{\epsilon}$ used to generate $K$ by means of $F$. Clearly, $$\{K_{ij}\}_{1 \leq i < j \leq m} = (K, F, K_R). \label{eqn:9}$$ Moreover, since the total number of edges in any Steiner tree equals the sum of unity (i.e., the shared bit of $K$) and the number of bits of public communication for that shared bit, we have $$|\widetilde{E^{(n)}}| = \log{|\mathcal{K}|} + \log{|\mathcal{F}|} + \log{|\mathcal{K}_R|}, \label{eqn:10}$$ where $\mathcal{K}$, $\mathcal{F}$ and $\mathcal{K}_R$ denote the respective ranges of $K$, $F$ and $K_R$. Note that $\log{|\mathcal{K}|} = \mu(A, G^{(n)}_{\epsilon})$. 
Then, $$\hspace{-1.7in} s(K;\{F_{ij}\}_{1 \leq i < j \leq m}, F)$$ $$\begin{aligned} &=& \log{|\mathcal{K}|} - H(K| \{F_{ij}\}_{1 \leq i < j \leq m}, F) \nonumber \\ &\leq& \log{|\mathcal{K}|} - H(K| \{F_{ij}\}_{1 \leq i < j \leq m}, F, K_R) \nonumber \\ &=& \log{|\mathcal{K}|} - H(K, F, K_R | \{F_{ij}\}_{1 \leq i < j \leq m}) \nonumber \\ & & + H(F, K_R | \{F_{ij}\}_{1 \leq i < j \leq m}) \nonumber \\ &=& \log{|\mathcal{K}|} - H(\{K_{ij}\}_{1 \leq i < j \leq m} | \{F_{ij}\}_{1 \leq i < j \leq m}) \nonumber \\ & & + H(F, K_R | \{F_{ij}\}_{1 \leq i < j \leq m}),\ \ \mbox{by~(\ref{eqn:9})} \nonumber\end{aligned}$$ $$\begin{aligned} &\leq& \log{|\mathcal{K}|} + s(\{K_{ij}\}_{1 \leq i < j \leq m}; \{F_{ij}\}_{1 \leq i < j \leq m}) \nonumber \\ & & - |\widetilde{E^{(n)}}| + H(F, K_R) \nonumber \\ &\leq& s(\{K_{ij}\}_{1 \leq i < j \leq m}; \{F_{ij}\}_{1 \leq i < j \leq m}), \ \ \mbox{by~(\ref{eqn:10})} \nonumber \\ &=& \sum_{1 \leq i < j \leq m} s(K_{ij}; F_{ij}), \nonumber \\ &=& \frac{m(m-1)}{2} o_n(1), \nonumber\end{aligned}$$ where the second-to-last equality is by the fact that $\{(K_{ij}, F_{ij})\}_{1 \leq i < j \leq m}$ are mutually independent, and the last equality is by (\[eqn:10-a\]). The maximum rate of the SK thus generated is equal to $\lim_{n \rightarrow \infty} \frac{1}{n} \mu(A, G^{(n)}_{\epsilon})$ which, since $\epsilon >0$ was arbitrary, equals $\sup_{n\,\in\,\mathcal{N}} \frac{1}{n}~\mu(A, G^{(n)}).$\ (ii) Suppose that $A=\{1, 2\}$, and note from the paragraph after Definition 3 that $\mu(A, G)$ is the maximum number of edge disjoint paths in $G$ connecting terminals $1$ and $2$. It is clear that $\frac{1}{n}\mu(A, G^{(n)})$ is nondecreasing in $n \in \mathcal{N}$, by the definition of $G^{(n)}$. 
According to Menger’s theorem [@Menger; @Bondy], given a multigraph $G=\left ( \mathcal{M}, E \right )$, the maximum number of edge disjoint paths in $G$ connecting terminals $1$ and $2$ is equal to $$\mathop{\min_{\emptyset \neq B \subset \mathcal{M}}}_{1 \in B,\ 2 \in B^c} \left (\mbox{number of edges that cross}~\{B, B^c\} \right ). \nonumber$$ Applying this to $G^{(n)}$ as above, we have that for $n~\in~\mathcal{N}$, $$\hspace{-2.5in} \frac{1}{n}\mu(A, G^{(n)}) =\nonumber$$ $$\hspace{-0.0in} \frac{1}{n} \left [ \mathop{\min_{\emptyset \neq B \subset \mathcal{M}}}_{\ 1 \in B,\ 2 \in B^c} \left ( \mathop{\sum_{1 \leq i < j \leq m:}}_{(i,j)~\mbox{crosses}~\{B, B^c\}} n I(X_{ij} \wedge X_{ji}) \right ) \right ]. \nonumber$$ It then follows that $$\begin{aligned} C(A) &\geq& \sup_{n\,\in\,\mathcal{N}}~ \frac{1}{n}\mu(A, G^{(n)}),\ \ \ \ \mbox{by}~(\ref{eqn:7}) \nonumber \\ &=& \mathop{\min_{\emptyset \neq B \subset \mathcal{M}}}_{1 \in B,\ 2 \in B^c} \left ( \mathop{\sum_{1 \leq i < j \leq m:}}_{(i,j)~\mbox{crosses}~\{B, B^c\}} I(X_{ij} \wedge X_{ji}) \right ) \nonumber \\ &=& C^{ub}(A), \ \ \ \ \mbox{by}~(\ref{eqn:11}). \nonumber\end{aligned}$$ The last equality follows upon noting that when $|A|=2$, the minimization in (\[eqn:11\]) is over only those partitions that contain two atoms, each of which includes terminal 1 and terminal 2, respectively. This proves (ii). $\qed$ SK Capacity and Spanning Tree Packing for $A = \mathcal{M}$ ----------------------------------------------------------- When all the terminals in $\mathcal{M}$ seek a shared SK, i.e., when $A = \mathcal{M}$, a Steiner tree for $A$ is a spanning tree for $\mathcal{M}$. In this case, we show that the lower bound for SK capacity in Theorem 3.3 (i) is, in fact, tight. Specifically, we show that the algorithm in the proof of Theorem 3.3 yields an SK of maximum rate that coincides with the upper bound for $C(\mathcal{M})$ in Lemma 3.2.
[***Theorem 3.4:** For a PIN model, the SK capacity $C(\mathcal{M})$ is $$\begin{aligned} C(\mathcal{M}) &=& \sup_{n\,\in\,\mathcal{N}} \frac{1}{n}\ \mu(\mathcal{M}, G^{(n)}) \nonumber \\ &=& C^{ub}(\mathcal{M}). \label{eqn:12}\end{aligned}$$*]{} [*Remark:*]{} When $A \subset \mathcal{M}$, Steiner tree packing may not attain SK capacity. In SK generation, a helper terminal in $A^c$ helps link the user terminals in $A$ in complex ways through various combinations of subsets of $A$. In general, an optimal such linkage need not be attained by Steiner tree packing. However, when $|A| = 2$, the two user terminals are either directly connected or are connected by a path through helpers in $A^c$; both can be accomplished by Steiner tree packing. When $A = \mathcal{M}$, the mentioned complexity of a helper is nonexistent. The proof relies on a graph-theoretic result of Nash-Williams [@Nas61] and Tutte [@Tut61], that gives a min max formula for the maximum size of spanning tree packing in a multigraph. It is clear that $\frac{1}{n}\mu(\mathcal{M}, G^{(n)})$ is nondecreasing in $n \in \mathcal{N}$, by the definition of $G^{(n)}$. By [@Nas61; @Tut61], given a multigraph $G=\left ( \mathcal{M}, E \right )$, the maximum number of edge disjoint spanning trees that can be packed in $G$ is equal to $$\min_{\mathcal{P}} \Big \lfloor \frac{1}{|\mathcal{P}| - 1} \left (\mbox{number of edges that cross }\mathcal{P} \right ) \Big \rfloor, \nonumber$$ with the minimization being over all partitions $\mathcal{P}$ of $\mathcal{M}$. Applying this to $G^{(n)}$ as above, we have that for $n~\in~\mathcal{N}$, $$\hspace{-2.5in} \frac{1}{n}\mu(\mathcal{M}, G^{(n)}) =\nonumber$$ $$\hspace{-0.10in} \frac{1}{n} \left [ \min_{\mathcal{P}} \ \Big \lfloor \frac{1}{|\mathcal{P}| - 1} \left ( \mathop{\sum_{1 \leq i < j \leq m:}}_{(i,j)~\mbox{crosses}~\mathcal{P}} n I(X_{ij} \wedge X_{ji}) \right ) \Big \rfloor \right ]. 
\nonumber $$ Denoting by $D$ the quantity in $\Big [ \ \Big ]$ above, it follows that $$\begin{aligned} C(\mathcal{M}) &\geq& \sup_{n\,\in\,\mathcal{N}}~ \frac{1}{n}\mu(\mathcal{M}, G^{(n)}),\ \ \ \ \mbox{by Theorem 3.3} \nonumber \\ &\geq& \sup_{n\,\in\,\mathcal{N}}~ \{D - \frac{1}{n} \} \nonumber \\ &\geq& \min_{\mathcal{P}} \ \frac{1}{|\mathcal{P}| - 1 } \left ( \mathop{\sum_{1 \leq i < j \leq m:}}_{(i,j)~\mbox{crosses}~\mathcal{P}} I(X_{ij} \wedge X_{ji}) \right ) \nonumber \\ &=& C^{ub}(\mathcal{M}), \ \ \mbox{by}~(\ref{eqn:11}). \nonumber\end{aligned}$$ The assertion in (\[eqn:12\]) is now immediate. $\qed$ Lastly, the following observation is of independent interest. Given a combinatorial problem of finding the maximal packing of Steiner trees in a multigraph, we can always associate with it a problem of SK generation for an associated PIN model. By Theorem 3.3 (i), the SK capacity for the PIN model yields an upper bound for the maximum rate of edge disjoint Steiner trees that can be packed in the multigraph; the upper bound is tight both in the case of path packing by Theorem 3.3 (ii) and in the case of spanning tree packing by Theorem 3.4. Discussion ========== Our proofs of Theorems 3.3 and 3.4 give rise to explicit polynomial-time schemes for forming a group-wide SK for the terminals in $A$ from the collection of optimum and mutually independent SKs for pairs of terminals in $\mathcal{M}$ (namely the $K_{ij}$s in the proof of Theorem 3.3). When $|A| = 2$ or $A = \mathcal{M}$, our schemes achieve SK capacity. Specifically, the schemes combine known polynomial-time algorithms for finding a maximal collection of edge-disjoint paths (resp. spanning trees) connecting the vertices in $A$ when $|A| = 2$ (resp. $A = \mathcal{M}$) [@Dinic; @Edmonds_Karp; @GarWes92] with the technique for SK propagation in each tree as in the proof of Theorem 3.3.
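For small instances, the Nash-Williams–Tutte expression used in the proof of Theorem 3.4 can also be evaluated by exhaustive search over vertex partitions. The sketch below is illustrative only (the test multigraph, the complete graph $K_4$ with unit multiplicities, is a hypothetical example, not from the paper):

```python
def partitions(elems):
    """Enumerate all set partitions of a list (each partition is a
    list of atoms)."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in partitions(rest):
        for i in range(len(part)):            # join an existing atom...
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield part + [[first]]                # ...or start a new one

def spanning_tree_packing(n, mult):
    """Nash-Williams / Tutte: the maximum number of edge-disjoint
    spanning trees equals the minimum, over partitions P of the vertex
    set with |P| >= 2, of floor(#crossing edges / (|P| - 1))."""
    best = None
    for P in partitions(list(range(n))):
        if len(P) < 2:
            continue
        atom = {v: i for i, a in enumerate(P) for v in a}
        cross = sum(e for (u, v), e in mult.items() if atom[u] != atom[v])
        val = cross // (len(P) - 1)
        best = val if best is None else min(best, val)
    return best

# K4 with unit multiplicities packs exactly two edge-disjoint spanning
# trees: the partition into singletons gives 6 // 3 = 2.
k4 = {(u, v): 1 for u in range(4) for v in range(u + 1, 4)}
assert spanning_tree_packing(4, k4) == 2
```

Doubling every multiplicity of $K_4$ gives $4$, mirroring how the packing number of $G^{(n)}$ scales with $n$ in the rate $\sup_{n} \frac{1}{n}\mu(\mathcal{M}, G^{(n)})$.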
For a general multiterminal source model, the notions of wiretap secret key (WSK) [@Mau90; @AhlCsi93; @CsiNar04] and private key (PK) [@CsiNar04] have also been proposed. Specifically, these notions involve an extra “wiretapped” terminal, say $m+1$, that observes $n$ i.i.d. repetitions of a rv $\tilde{X}_{m+1}$ with a given joint pmf with $(\tilde{X}_1, \ldots, \tilde{X}_m)$, and to which the eavesdropper has access. The key must now be concealed from the eavesdropper’s observations of $\tilde{X}_{m+1}^n = (\tilde{X}_{m+1, 1}, \ldots, \tilde{X}_{m+1, n})$ and the public communication. The notion of a WSK requires that terminal $m+1$ not cooperate in key generation. The less restrictive notion of a PK allows cooperation by terminal $m+1$ by way of public communication. The corresponding capacities for the terminals in $A \subseteq \mathcal{M}$ are defined in the usual manner, and denoted by $C_W(A)$ and $C_P(A)$. We remark that in the context of a PIN model, terminal $m+1$ represents a compromised entity. One model for the wiretapped rv $\tilde{X}_{m+1}$ entails its consisting of $\left ( \begin{array}{cc} m \\ 2 \end{array} \right )$ mutually independent components, one corresponding to each pair $(X_{ij}, X_{ji}),\ 1 \leq i < j \leq m,$ of legitimate correlated signals. This model is unresolved even in the simplest case of $m=2$ terminals [@Mau93; @AhlCsi93; @CsiNar04; @GohAnan07; @GohAnan08]. Instead, we consider a different model which depicts the situation in which an erstwhile legitimate terminal $m+1$ becomes compromised. Specifically, the model now involves every legitimate terminal $i$ in $\mathcal{M}$ observing $n$ i.i.d. repetitions of the rv $(\tilde{X}_i, X_{i, m+1})$, while terminal $m+1$ observes $n$ i.i.d. repetitions of $\tilde{X}_{m+1} = ( X_{m+1, j},\ j \in \mathcal{M} )$. 
We argue in the following proposition that the WSK and PK capacities for this PIN model are the same as the SK capacity of a reduced PIN model obtained by disregarding terminal $m+1$ and with each legitimate terminal $i$ in $\mathcal{M}$ observing just $\tilde{X}_i^n$.\ ***Proposition:** For the PIN model above with compromised terminal $m+1$, for every $A \subseteq \mathcal{M}$ with $|A| \geq 2$, $$C_W(A) = C_P(A) = C(A), \nonumber$$ where $C(A)$ is the SK capacity for $A$ in the reduced PIN model.*\ **Proof:** We shall prove that $$\nonumber C(A) \stackrel{(a)}{\leq} C_W(A) \stackrel{(b)}{\leq} C_P(A) \stackrel{(c)}{\leq} C(A).$$ The inequality $(b)$ is by definition. Next, let $K = K(\tilde{X}_1^n, \ldots, \tilde{X}_m^n)$ be an SK for $A$ achieved with communication $\mathbf{F} = \mathbf{F}(\tilde{X}_1^n, \ldots, \tilde{X}_m^n)$ for the reduced PIN model. Then $K$ is also a WSK since $$\hspace{-1.8in} s \left (K; \mathbf{F}, ( X_{m+1, j}^n,\ j \in \mathcal{M} ) \right )$$ $$\begin{aligned} &=& \log{|\mathcal{K}|} - H \left ( K | \mathbf{F}, ( X_{m+1, j}^n,\ j \in \mathcal{M} ) \right ) \nonumber \\ &=& s(K; \mathbf{F}) + I(K \wedge ( X_{m+1, j}^n,\ j \in \mathcal{M} )| \mathbf{F}) \nonumber \\ &=& o_n(1) \nonumber\end{aligned}$$ since $I \left ( K, \mathbf{F} \wedge ( X_{m+1, j}^n,\ j \in \mathcal{M} ) \right ) = 0$, thereby establishing (a). In order to establish (c), we claim that every achievable PK rate is an achievable SK rate for the reduced PIN model upon using randomization at the terminals in $\mathcal{M}$; by remark (ii) after Proposition 3.1, (c) then follows. Since $( X^n_{m+1, j},\ j \in \mathcal{M} )$ is independent of $(\tilde{X}_1^n, \ldots, \tilde{X}_m^n)$, any terminal in $\mathcal{M}$, say terminal 1, can simulate $( X_{m+1, j}^n,\ j \in \mathcal{M} )$ and broadcast it to all the terminals. Next, each terminal $i$ in $\mathcal{M}$ can simulate $X_{i, m+1}^n$ conditioned on $( X_{m+1, j}^n,\ j \in \mathcal{M} ) = ( x_{m+1, j}^n,\ j \in \mathcal{M} )$.
This second step of randomization is feasible since $(\tilde{X}_1^n, \ldots, \tilde{X}_m^n), X_{1, m+1}^n, \ldots, X_{m, m+1}^n$ are conditionally mutually independent conditioned on $( X_{m+1, j}^n,\ j \in \mathcal{M} ) = ( x_{m+1, j}^n,\ j \in \mathcal{M} )$. Thus, each terminal $i$ in $\mathcal{M}$ now has access to $(\tilde{X}_i^n, X_{i, m+1}^n)$ while the eavesdropper observes $( X_{m+1, j}^n,\ j \in \mathcal{M} )$, so that the reduced PIN model for SK generation can be used to simulate a PIN model for PK generation with the given underlying joint pmf. Thus, any achievable rate of a PK for $A$ in the [*given*]{} PIN model for PK generation is an achievable rate of a PK for $A$ in the [*simulated*]{} model. Further, the latter PK is a fortiori an SK for $A$ in the reduced PIN model with randomization permitted at the terminals in $\mathcal{M}$. This establishes (c). $\qed$ In the proof of achievability of SK capacity for the general multiterminal source model in [@CsiNar04], an SK of optimum rate was extracted from “omniscience,” i.e., from a reconstruction by the terminals in $A$ of [*all*]{} the signals $( \tilde{X}_i^n,\ i \in \mathcal{M} )$ observed by the terminals in $\mathcal{M}$. In contrast, the scheme in Theorem 3.3 (ii) (resp. Theorem 3.4) for achieving SK capacity for a PIN model with $|A|=2$ (resp. $A=\mathcal{M}$) neither seeks nor attains omniscience; however, we note that omniscience can be attained by letting the terminals in $\mathcal{M}$ simply broadcast all the residual bits left over from a maximal path packing (resp. maximal spanning tree packing). We close with the observation that in the proof of Theorem 3.3, the SK bit generated by each Steiner tree in Step 2 is exactly independent of the public communication in that tree. Thus, if the pairwise SKs in step 1 are “perfect” with zero security index, then so is the overall SK for $A$. 
It transpires that for the PIN model, there is a tight connection between “perfect secrecy generation” and “communication for perfect omniscience,” redolent of the asymptotic connection in [@CsiNar04]. This new connection and the role of Steiner tree packing in attaining perfect omniscience and generating perfect secrecy are the subjects of a sequel paper [@Nitin_Nar]. Acknowledgement =============== The authors thank the anonymous referees for their helpful comments. P. Narayan thanks Samir Khuller for the helpful pointer to [@GarWes92]. [99]{} R. Ahlswede and I. Csiszár, “Common randomness in information theory and cryptography, Part I: Secret sharing,” [*IEEE Trans. Inf. Theory*]{}, vol. 39, pp. 1121-1132, July 1993. A. Bondy and U. S. R. Murty, [*Graph Theory,*]{} Series, Graduate Texts in Mathematics, Vol. 244: Springer, 2008. J. Cheriyan and M. Salavatipour, “Hardness and approximation results for packing Steiner trees.” [*Algorithmica*]{}, vol. 45, pp. 21-43, 2006. I. Csiszár and P. Narayan, “Secrecy capacities for multiple terminals," [*IEEE Trans. Inf. Theory*]{}, vol. 50, pp. 3047-3061, Dec. 2004. I. Csiszár and P. Narayan, “Secrecy capacities for multiterminal channel models," [*Special Issue of the IEEE Trans. Inf. Theory on Information Theoretic Security,*]{} vol. 54, pp. 2437-2452, June 2008. E. A. Dinic, “An algorithm for the solution of the problem of maximal flow in a network with power estimation,” [*Dokl. Akad. Nauk SSSR.*]{} vol. 194, pp. 754-757, 1970. J. Edmonds and R.M. Karp, “Theoretical improvements in algorithmic efficiency for network flow problems,” in [*Combinatorial Structures and their Applications*]{}, New York: Gordon and Breach, 1970, pp. 93-96. H. N. Gabow and H. H. Westermann: “Forests, frames, and games: algorithms for matroid sums and applications,” [*Algorithmica*]{}, 7, pp. 465-497, 1992. A. Gohari and V. 
Anantharam, “Communication for omniscience by a neutral observer and information-theoretic key agreement of multiple terminals," in [*Proc. 2007 IEEE Int. Symp. Inf. Theory*]{}, Nice, France, pp. 2056-2060. A. Gohari and V. Anantharam, “New bounds on the information-theoretic key agreement of multiple terminals,” in [*Proc. 2008 IEEE Int. Symp. Inf. Theory*]{}, Toronto, Ontario, Canada, pp. 742-746. M. Grötschel, A. Martin and R. Weismantel, “Packing Steiner trees: A cutting plane algorithm and computational results," [*Math. Programming,*]{} vol. 72, pp. 125-145, Feb. 1996. K. Jain, M. Mahdian, and M.R. Salavatipour, “Packing Steiner trees,” in [*Proc. 14th ACM-SIAM Symp. on Discrete Algorithms (SODA)*]{}, Baltimore, Maryland, 2003, pp. 266-274. U. M. Maurer, “Provably secure key distribution based on independent channels,” presented at the [*IEEE Workshop Inf. Theory*]{}, Eindhoven, The Netherlands, 1990. U. M. Maurer, “Secret key agreement by public discussion from common information,” [*IEEE Trans. Inf. Theory*]{}, vol. 39, pp. 733-742, May 1993. U. M. Maurer, “The strong secret key rate of discrete random triples,” in [*Communications and Cryptography: Two Sides of One Tapestry*]{}, R. E. Blahut et al., Eds., Norwell, MA: Kluwer, Ch. 26, pp. 271-285, 1994. K. Menger, “Zur allgemeinen kurventheorie,” [*Fund. Math.*]{}, vol. 10, pp. 96-115, 1927. S. Nitinawarat and P. Narayan, “Perfect secrecy, perfect omniscience and Steiner tree packing,” [*IEEE Trans. Inf. Theory*]{}, to appear. C. St. J. A. Nash-Williams, “Edge disjoint spanning trees of finite graphs.” [*J. London Math. Soc.*]{}, 36, pp. 445-450, 1961. R. Renner and S. Wolf, “New bounds in secret-key agreement: The gap between formation and secrecy extraction,” in [*Proc. EUROCRYPT 2003, Lecture notes in Computer Science, vol. 2656*]{}: Springer-Verlag, 2003, pp. 562-577. W. T. Tutte, “On the problem of decomposing a graph into $n$ connected factors,” [*J. London Math. Soc.*]{}, vol. 36, pp. 221-230, 1961. 
A. D. Wyner, “The wire-tap channel,” [*Bell Sys. Tech. J.*]{}, vol. 54, pp. 1355-1387, 1975. C. Ye, A. Reznik and Y. Shah, “Extracting secrecy from jointly Gaussian random variables,” in [*Proc. 2006 IEEE Int. Symp. Inf. Theory*]{}, Seattle, pp. 2593-2597. C. Ye and A. Reznik, “Group secret key generation algorithms,” in [*Proc. 2007 IEEE Int. Symp. Inf. Theory*]{}, Nice, France, pp. 2896-2900. [^1]: The work of S. Nitinawarat and P. Narayan was supported by the National Science Foundation under Grants CCF0515124, CCF0635271, CCF0830697, and InterDigital. The work of A. Barg was supported by the National Science Foundation under Grants CCF0515124, CCF0830699, CCF0916919, DMS0807411, and InterDigital. The material in this paper was presented in parts at the IEEE International Symposia on Information Theory, Nice, France, June 2007, and Toronto, Ontario, Canada, July 2008. [^2]: S. Nitinawarat, A. Barg and P. Narayan are with the Department of Electrical and Computer Engineering and the Institute for Systems Research, University of Maryland, College Park, MD 20742, USA [^3]: Email: {nitinawa, abarg, prakash}@umd.edu. [^4]: C. Ye and A. Reznik are with InterDigital, King of Prussia, PA 19406 [^5]: Email:{Chunxuan.Ye, Alex.Reznik}@interdigital.com. [^6]: All logarithms are to the base 2.
--- abstract: 'We consider the problem of direction-of-arrival (DOA) estimation in unknown partially correlated noise environments where the noise covariance matrix is sparse. A sparse noise covariance matrix is a common model for a sparse array of sensors consisting of several widely separated subarrays. Since interelement spacing among sensors in a subarray is small, the noise in the subarray is in general spatially correlated, while, due to large distances between subarrays, the noise between them is uncorrelated. Consequently, the noise covariance matrix of such an array has a block diagonal structure which is indeed sparse. Moreover, in an ordinary nonsparse array, because of small distance between adjacent sensors, there is noise coupling between neighboring sensors, whereas one can assume that nonadjacent sensors have spatially uncorrelated noise, which again makes the array noise covariance matrix sparse. Utilizing some recently available tools in low-rank/sparse matrix decomposition, matrix completion, and sparse representation, we propose a novel method which can resolve possibly correlated or even coherent sources in the aforementioned partly correlated noise. In particular, when the sources are uncorrelated, our approach involves solving a second-order cone program (SOCP), and if they are correlated or coherent, one needs to solve a computationally harder convex program. We demonstrate the effectiveness of the proposed algorithm by numerical simulations and comparison to the Cramer-Rao bound (CRB).' author: - '[^1]' bibliography: - 'refs.bib' title: 'DOA Estimation in Partially Correlated Noise Using Low-Rank/Sparse Matrix Decomposition' --- Introduction ============ The assumption of spatially white noise in an array of sensors (antennas) is violated in many practical scenarios. For example, when the antennas are closely spaced, the small interelement spacing leads to strong mutual coupling between array elements [@Bala12].
A consequence of this coupling would be correlation between the noise of array elements. It is known that the performance of conventional direction-of-arrival (DOA) estimation methods degrades significantly when the noise is spatially correlated (colored) [@LoV92; @PesaG01; @StoiS92]. Colored noise in an antenna array can also be present due to environmental conditions [@Wenz62]. Nevertheless, the problem of DOA estimation in an unknown spatially colored noise is not solvable without some restrictions on the impinging sources or on the noise field [@StoiS92]. A popular solution is to exploit some largely spaced subarrays in which due to large distance between these subarrays, the inter-subarray noise is uncorrelated. These configurations for sensor arrays are also known as sparse arrays. Different algorithms have been proposed to use this type of arrays to estimate the DOA that are mainly based on the maximum likelihood (ML) criterion; see e.g., [@LiN11; @VoroGW05]. However, ML approaches lead to solving some nonconvex optimization problems which are generally very hard to solve and there is no guarantee for convergence to the global optimum solution. Moreover, the ML approaches are only derived under the assumption of Gaussian data. In this paper, we propose a new algorithm based on matrix rank minimization and sparse representation techniques which can effectively estimate the directions of possibly correlated emitters in environments where the noise covariance matrix is unknown but sparse by solving a convex optimization program. Particularly, this algorithm can be used when a sparse array is exploited, the noise field is nonuniform (the noise covariance matrix is diagonal but every diagonal entry is arbitrary) [@PesaG01], or only there is noise coupling between adjacent sensors. Also, it is worth mentioning that we will not impose any assumption on the distribution of the noise and sources; we only assume that they are zero-mean and stationary random processes. 
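As a toy numerical illustration of such a noise field (all subarray sizes and correlation values below are hypothetical choices, not taken from any cited system), consider two widely separated subarrays with correlated noise inside each subarray and none across them; the resulting noise covariance is block diagonal, hence sparse:

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2, N = 3, 4, 100_000            # subarray sizes and snapshot count

# Hypothetical intra-subarray noise covariances: positive-definite
# blocks modelling coupling between closely spaced sensors.
Q1 = np.array([[1.0, 0.5, 0.2],
               [0.5, 1.0, 0.5],
               [0.2, 0.5, 1.0]])
Q2 = 0.8 * np.eye(m2) + 0.2          # uniform coupling in subarray 2

# Large separation => the two subarrays' noises are independent, so the
# full covariance is the block-diagonal matrix diag(Q1, Q2).
w1 = rng.multivariate_normal(np.zeros(m1), Q1, size=N)
w2 = rng.multivariate_normal(np.zeros(m2), Q2, size=N)
w = np.hstack([w1, w2])              # N snapshots of the full array noise

Q_hat = w.T @ w / N                  # sample noise covariance
off_block = Q_hat[:m1, m1:]          # cross terms between subarrays
print(np.max(np.abs(off_block)))     # vanishes as N grows
```

The diagonal blocks of `Q_hat` approach `Q1` and `Q2`, while the off-diagonal block tends to zero, which is exactly the sparsity pattern the algorithm exploits.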
The rest of this paper is organized as follows. After formulating the problem in Section \[sec:Form\], we introduce our method in Section \[sec:Algo\] and present some numerical examples in Section \[sec:Sim\]. Section \[sec:Con\] concludes the paper.

Problem Formulation {#sec:Form}
===================

Consider an array of $m$ antennas and assume that $q$ sources are impinging on this array. Further, assume that the propagation time of the received signals across the array is much less than the inverse of the signal bandwidth (the narrow-band assumption). The samples at the antenna outputs can be formulated according to the model $$\label{maineq} {\mathbf{x}}(n) = {\mathbf{A}}({\boldsymbol{\theta}}) {\mathbf{s}}(n) + {\mathbf{w}}(n), ~~ n = 1,\cdots,N,$$ where ${\mathbf{x}}(n) = \big(x_1(n),\cdots,x_m(n)\big)^T$ denotes the vector of samples at time instant $n$ from antenna 1 to $m$, $N$ is the total number of collected samples, ${\mathbf{A}}({\boldsymbol{\theta}})=\big[{\mathbf{a}}(\theta_1),\cdots,{\mathbf{a}}(\theta_q)]$ is the array manifold at the unknown directions ${\boldsymbol{\theta}}= (\theta_1,\cdots,\theta_q)^T$, ${\mathbf{s}}(n) = \big(s_1(n),\cdots,s_q(n)\big)^T$ designates the vector of source signals at time instant $n$, and ${\mathbf{w}}(n) = \big(w_1(n),\cdots,w_m(n)\big)^T$ is the vector of noise at the different antennas.

The proposed approach {#sec:Algo}
=====================

First, we briefly review the concepts of matrix completion (MC) and low-rank/sparse matrix decomposition which are used in the derivation of our algorithm.

Introduction
------------

In the matrix completion problem, we observe some entries of a matrix and want to recover the remaining unobserved entries [@CandR09]. Generally, it is not possible to reconstruct a matrix from a subset of its entries.
However, if the matrix is low-rank and the positions of the revealed entries follow a certain random law, then using $$\label{MCRank} \min_{{\mathbf{X}}} \operatorname{rank}({\mathbf{X}})~\text{s.t.}~ [{\mathbf{X}}]_{ij} = [{\mathbf{M}}]_{ij},~ (i,j) \in \Omega,$$ in which ${\mathbf{M}}\in {{\mathbb{R}}^{n_1 \times n_2}}$ is the low-rank matrix to be reconstructed and $\Omega \subset \{1,\cdots,n_1\} \times \{1,\cdots,n_2\}$ is the index set of observed entries, one can recover ${\mathbf{M}}$ with high probability [@CandR09]. The convex relaxation of this rank-minimization problem leads to $$\label{MCNuc} \min_{{\mathbf{X}}} \|{\mathbf{X}}\|_*~\text{s.t.}~ [{\mathbf{X}}]_{ij} = [{\mathbf{M}}]_{ij},~ (i,j) \in \Omega,$$ where $\| {\mathbf{X}}\|_* = \sum_{i=1}^{r} \sigma_i({\mathbf{X}})$ denotes the nuclear norm of matrix ${\mathbf{X}}$ in which $\sigma_i({\mathbf{X}})$ is the $i$th largest singular value of ${\mathbf{X}}$ and $r = \operatorname{rank}({\mathbf{X}})$. Under more restrictive conditions, solving the relaxed program yields the unique solution of the original rank-minimization problem [@CandR09]. When the observations are contaminated by additive noise, i.e., ${\mathbf{X}}= {\mathbf{M}}+ {\mathbf{W}}$, where ${\mathbf{W}}$ is a matrix modelling the additive noise, the program can be updated to $$\label{MCNucNoisy} \min_{{\mathbf{X}}} \|{\mathbf{X}}\|_* + \lambda_{MC} \sum_{i,j \in \Omega} \big([{\mathbf{X}}]_{ij} - [{\mathbf{M}}]_{ij}\big)^2,$$ where $\lambda_{MC} > 0$ is a constant that regularizes between being low-rank and consistency with the noisy observations. Now, suppose that we have a matrix ${\mathbf{X}}\in {{\mathbb{R}}^{n_1 \times n_2}}$ which is equal to the sum of a low-rank and a sparse matrix. More precisely, $${\mathbf{X}}= {\mathbf{L}}+ {\mathbf{S}},$$ where ${\mathbf{L}}$ is a low-rank matrix and ${\mathbf{S}}$ is a sparse matrix in which only a few entries are nonzero.
The problem of decomposing ${\mathbf{X}}$ into ${\mathbf{L}}$ and ${\mathbf{S}}$ is underdetermined in general since the number of unknowns is larger than the number of equations. This task can be formulated as $$\label{RPCARank} \min_{{\mathbf{L}},{\mathbf{S}}} \operatorname{rank}({\mathbf{L}}) + \gamma_1 \| {\mathbf{S}}\|_0 ~\text{s.t.}~ {\mathbf{X}}= {\mathbf{L}}+ {\mathbf{S}},$$ in which $\gamma_1 > 0$ is a regularization parameter and $\|\cdot\|_0$ denotes the number of nonzero entries of a matrix. It has been shown that, under some mild assumptions, solving this program recovers the matrices ${\mathbf{L}}$ and ${\mathbf{S}}$ [@CandLMW11]. Nonetheless, this problem is NP-hard. Its tightest convex relaxation equals [@CandLMW11] $$\label{RPCAConv} \min_{{\mathbf{L}},{\mathbf{S}}} \| {\mathbf{L}}\|_* + \gamma_2 \| {\mathbf{S}}\|_1 ~\text{s.t.}~ {\mathbf{X}}= {\mathbf{L}}+ {\mathbf{S}},$$ where [$\| {\mathbf{S}}\|_1 = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} | [{\mathbf{S}}]_{ij} |$]{}. Under some mild deterministic or probabilistic conditions, the two formulations share the same unique solution [@ChanSPW11; @CandLMW11]. When ${\mathbf{X}}= {\mathbf{L}}+ {\mathbf{S}}+ {\mathbf{W}}$, where ${\mathbf{W}}$ is additive noise, the program is updated to $$\label{RPCAConv_Noisy} \min_{{\mathbf{L}},{\mathbf{S}}} \| {\mathbf{L}}\|_* + \gamma_D \| {\mathbf{S}}\|_1 +\lambda_D \| {\mathbf{X}}- {\mathbf{L}}- {\mathbf{S}}\|_F^2,$$ where, as before, $\lambda_D$ is some regularization parameter and $\|\cdot\|_F$ designates the Frobenius norm.

The main idea
-------------

The main idea of our approach to estimate the vector of unknown directions ${\boldsymbol{\theta}}$ relies on the decomposition of the sample covariance matrix.
To be precise, assuming the sources and noise are uncorrelated, the signal model implies $$\label{Rxeq} {{\mathbf{R}}_{{\mathbf{x}}}}= {\mathbf{A}}{{\mathbf{R}}_{{\mathbf{s}}}}{\mathbf{A}}^H + {{\mathbf{R}}_{{\mathbf{w}}}},$$ where ${{\mathbf{R}}_{{\mathbf{x}}}}= E\{{\mathbf{x}}(n) {\mathbf{x}}(n)^H\}$, ${{\mathbf{R}}_{{\mathbf{s}}}}= E\{{\mathbf{s}}(n) {\mathbf{s}}(n)^H\}$, and ${{\mathbf{R}}_{{\mathbf{w}}}}= E\{{\mathbf{w}}(n) {\mathbf{w}}(n)^H\}$ are covariance matrices. It can be verified that $\operatorname{rank}({\mathbf{A}}{{\mathbf{R}}_{{\mathbf{s}}}}{\mathbf{A}}^H) \leq q$; thus, if the number of sources is much smaller than the number of antennas, then ${\mathbf{A}}{{\mathbf{R}}_{{\mathbf{s}}}}{\mathbf{A}}^H$ will be a low-rank matrix. Furthermore, we assume that ${{\mathbf{R}}_{{\mathbf{w}}}}$ is an unknown but sparse matrix. As discussed in Section I, this assumption can be satisfied in a sparse array of antennas or when there is noise coupling only between adjacent sensors.[^2] For instance, when a uniform linear array (ULA) is exploited and the noise of neighboring sensors is correlated, ${{\mathbf{R}}_{{\mathbf{w}}}}$ may have the following structure $$\label{RwStruct} {{\mathbf{R}}_{{\mathbf{w}}}}= \begin{bmatrix} \sigma_1^2 & \sigma_{1,2} & 0 & 0 & \cdots & 0\\ \sigma_{2,1} & \sigma_{2}^2 & \ddots & 0 & \cdots & 0\\ 0 & \ddots & \ddots & \ddots & 0 & \vdots\\ \vdots & 0 & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & \sigma_{m-1,m-2} & \sigma_{m-1}^2 & \sigma_{m-1,m}\\ 0 & \cdots & 0 & 0 & \sigma_{m,m-1} & \sigma_{m}^2 \end{bmatrix}.$$ In summary, to estimate the DOAs, we make the following assumptions.

- A1: The noise and sources are zero-mean wide-sense stationary random processes and are uncorrelated.

- A2: The radiated sources can be correlated or even coherent.

- A3: The noise covariance matrix is arbitrary but sparse. The support of this matrix, i.e., the locations of its nonzero entries, is known from, for example, the geometry of the array.
- A4: The number of sources is unknown and much smaller than the number of antennas.

As a first solution, we can exploit the low-rank plus sparse decomposition program of the previous subsection to recover ${\mathbf{A}}{{\mathbf{R}}_{{\mathbf{s}}}}{\mathbf{A}}^H$ and ${{\mathbf{R}}_{{\mathbf{w}}}}$ from the matrix ${{\mathbf{R}}_{{\mathbf{x}}}}$. However, to use the above assumptions more efficiently, we can exploit the fact that the support of ${{\mathbf{R}}_{{\mathbf{w}}}}$ is known to obtain better results. Let $\Omega$ denote the support set of ${{\mathbf{R}}_{{\mathbf{w}}}}$ and ${\mathcal{P}_{\Omega^c}}$ be the projection onto the set $\Omega^c = \{1,\cdots,m\} \times \{1,\cdots,m\} \setminus \Omega$ such that $${\mathcal{P}_{\Omega^c}}({\mathbf{X}}) = \left \{ \begin{array}{ll} 0 & (i,j) \in \Omega,\\ \protect[{\mathbf{X}}\protect]_{ij} & \text{otherwise}. \end{array} \right.$$ Applying ${\mathcal{P}_{\Omega^c}}$ to both sides, we get $${\mathcal{P}_{\Omega^c}}({{\mathbf{R}}_{{\mathbf{x}}}}) = {\mathcal{P}_{\Omega^c}}({\mathbf{A}}{{\mathbf{R}}_{{\mathbf{s}}}}{\mathbf{A}}^H).$$ Consequently, the task of estimating ${\mathbf{A}}{{\mathbf{R}}_{{\mathbf{s}}}}{\mathbf{A}}^H$ simplifies to an MC problem, $$\min_{{\mathbf{X}}} \| {\mathbf{X}}\|_*~\text{s.t.}~ {\mathcal{P}_{\Omega^c}}({\mathbf{X}}) = {\mathcal{P}_{\Omega^c}}({{\mathbf{R}}_{{\mathbf{x}}}}).$$ However, in practice, only an estimate of ${{\mathbf{R}}_{{\mathbf{x}}}}$ is available. Let $${\widehat{{\mathbf{R}}}_{{\mathbf{x}}}}= \frac{1}{N} \sum_{n=1}^{N} {\mathbf{x}}(n) {\mathbf{x}}(n)^H$$ designate the sample covariance matrix; then we have ${\widehat{{\mathbf{R}}}_{{\mathbf{x}}}}= {\mathbf{A}}{{\mathbf{R}}_{{\mathbf{s}}}}{\mathbf{A}}^H + {{\mathbf{R}}_{{\mathbf{w}}}}+ {\mathbf{Q}}$, where ${\mathbf{Q}}$ is the disturbance term due to the finite number of samples. In particular, when the sources and noise have normal distributions, ${\mathbf{Q}}$ has a recentered Wishart distribution [@EatoE83].
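The projection ${\mathcal{P}_{\Omega^c}}$ above simply discards the entries indexed by $\Omega$ and keeps the rest. A minimal pure-Python sketch (dense matrices as lists of lists; the helper name is illustrative, not from the paper):

```python
def project_off_support(X, omega):
    """P_{Omega^c}: zero the entries of X indexed by omega, keep the rest."""
    return [[0 if (i, j) in omega else X[i][j]
             for j in range(len(X[0]))]
            for i in range(len(X))]

# For a ULA whose noise couples only adjacent sensors, Omega is the
# tridiagonal band, matching the structure of R_w shown earlier.
m = 4
omega = {(i, j) for i in range(m) for j in range(m) if abs(i - j) <= 1}
X = [[i * m + j for j in range(m)] for i in range(m)]
Y = project_off_support(X, omega)
```

Applied to the observed covariance, this removes exactly the entries that the sparse noise term can corrupt, which is what reduces the estimation of ${\mathbf{A}}{{\mathbf{R}}_{{\mathbf{s}}}}{\mathbf{A}}^H$ to a matrix completion problem.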
To mitigate the effect of finite samples, we use the following program to recover ${\mathbf{A}}{{\mathbf{R}}_{{\mathbf{s}}}}{\mathbf{A}}^H$ $$\label{LEst} {\widehat{{\mathbf{L}}}}= \operatorname*{argmin}_{{\mathbf{X}}} \{\| {\mathbf{X}}\|_* + \lambda_1 \| {\mathcal{P}_{\Omega^c}}({\mathbf{X}}) - {\mathcal{P}_{\Omega^c}}({\widehat{{\mathbf{R}}}_{{\mathbf{x}}}}) \|_F ~ | ~{\mathbf{X}}\succeq {\mathbf{0}}\},$$ where ${\mathbf{X}}\succeq {\mathbf{0}}$ means that ${\mathbf{X}}$ is a positive semidefinite matrix. [In this and the other optimization programs we use in what follows, the data fidelity terms (e.g., the term $\| {\mathcal{P}_{\Omega^c}}({\mathbf{X}}) - {\mathcal{P}_{\Omega^c}}({\widehat{{\mathbf{R}}}_{{\mathbf{x}}}}) \|_F$ above) are not squared. This lets us select the regularization parameter, similarly to [@BellCW11], independently of the scaling of the covariance of ${\mathbf{Q}}$.]{} If the support of ${{\mathbf{R}}_{{\mathbf{w}}}}$ is not known, one can use $$\begin{gathered} ({\widehat{{\mathbf{L}}}},{\widehat{{\mathbf{R}}}_{{\mathbf{w}}}}) = \operatorname*{argmin}_{({\mathbf{L}},{\mathbf{S}})} \{ \| {\mathbf{L}}\|_* + \gamma_D \| {\mathbf{S}}\|_1 \\ + \lambda_D \| {\widehat{{\mathbf{R}}}_{{\mathbf{x}}}}- {\mathbf{L}}- {\mathbf{S}}\|_F ~ | ~{\mathbf{L}}\succeq {\mathbf{0}}, {\mathbf{S}}\succeq {\mathbf{0}}\}\end{gathered}$$ to estimate ${\mathbf{A}}{{\mathbf{R}}_{{\mathbf{s}}}}{\mathbf{A}}^H$. In the next step, we need to estimate ${\boldsymbol{\theta}}$ from ${\widehat{{\mathbf{L}}}}$, an estimate of ${\mathbf{A}}{{\mathbf{R}}_{{\mathbf{s}}}}{\mathbf{A}}^H$. As ${\mathbf{A}}$ is unknown, we use a gridding technique to find the DOAs. Let ${\widetilde{{\mathbf{A}}}}= [{\mathbf{a}}(\phi_1), \cdots, {\mathbf{a}}(\phi_M)]$ denote the sampled array manifold in which $\phi_1, \cdots, \phi_M$ are the grid directions and $M$ is the number of grid points.
If the gridding is fine enough, then ${\mathbf{A}}{{\mathbf{R}}_{{\mathbf{s}}}}{\mathbf{A}}^H \approx {\widetilde{{\mathbf{A}}}}{\widetilde{{\mathbf{R}}}_{{\mathbf{s}}}}{\widetilde{{\mathbf{A}}}}^H$, where ${\widetilde{{\mathbf{R}}}_{{\mathbf{s}}}}$ equals ${{\mathbf{R}}_{{\mathbf{s}}}}$ in the rows and columns associated with $\phi_k \approx \theta_i,~ 1 \leq i \leq q$, and is zero in the other locations. As a result of this gridding, we use the following optimization problem to estimate ${\widetilde{{\mathbf{R}}}_{{\mathbf{s}}}}$ $$\label{REst} {\widehat{{\mathbf{R}}}_{{\mathbf{s}}}}= \operatorname*{argmin}_{{\mathbf{P}}} \{\| {\mathbf{P}}\|_1 + \lambda_2 \|{\widehat{{\mathbf{L}}}}- {\widetilde{{\mathbf{A}}}}{\mathbf{P}}{\widetilde{{\mathbf{A}}}}^H \|_F ~ | ~{\mathbf{P}}\succeq {\mathbf{0}}\}.$$ After obtaining ${\widehat{{\mathbf{R}}}_{{\mathbf{s}}}}$ from the above program, $\operatorname{diag}({\widehat{{\mathbf{R}}}_{{\mathbf{s}}}})$ designates the estimated spatial spectrum at the grid points. Also, it is possible to combine the two steps and solve directly for ${\widehat{{\mathbf{R}}}_{{\mathbf{s}}}}$, i.e., $$\begin{gathered} \label{REst2} {\widehat{{\mathbf{R}}}_{{\mathbf{s}}}}= \operatorname*{argmin}_{{\mathbf{P}}} \{\| {\widetilde{{\mathbf{A}}}}{\mathbf{P}}{\widetilde{{\mathbf{A}}}}^H \|_* + \alpha \| {\mathbf{P}}\|_1 \\ + \beta \| {\mathcal{P}_{\Omega^c}}({\widetilde{{\mathbf{A}}}}{\mathbf{P}}{\widetilde{{\mathbf{A}}}}^H) - {\mathcal{P}_{\Omega^c}}({\widehat{{\mathbf{R}}}_{{\mathbf{x}}}}) \|_F ~ | ~{\mathbf{P}}\succeq {\mathbf{0}}\}.\end{gathered}$$ However, because we have to choose two regularization parameters at the same time, solving this combined program may be harder than estimating ${\widetilde{{\mathbf{R}}}_{{\mathbf{s}}}}$ in two steps.
In contrast, when the sources are uncorrelated, ${\mathbf{P}}$ is a diagonal matrix and, [letting]{} ${\mathbf{p}}= \operatorname{diag}({\mathbf{P}})$, the program simplifies to $$\label{UncorrEst} \min_{{\mathbf{p}}} \| {\mathbf{p}}\|_1 + \lambda_u \| {\mathcal{P}_{\Omega^c}}(({\widetilde{{\mathbf{A}}}}^* \odot {\widetilde{{\mathbf{A}}}}) {\mathbf{p}}) - {\mathcal{P}_{\Omega^c}}( \operatorname{vec}({\widehat{{\mathbf{R}}}_{{\mathbf{x}}}}) ) \|_2 ~ \text{s.t.} ~{\mathbf{p}}\succeq {\mathbf{0}},$$ where ${\widetilde{{\mathbf{A}}}}^{*}$ denotes the conjugate of ${\widetilde{{\mathbf{A}}}}$, $\odot$ is the Khatri-Rao product (column-wise Kronecker product), $\operatorname{vec}({{\mathbf{R}}_{{\mathbf{x}}}})$ denotes the vector with the columns of ${{\mathbf{R}}_{{\mathbf{x}}}}$ stacked on top of one another, $\|\cdot\|_2$ is the $\ell_2$-norm, and ${\mathbf{p}}\succeq {\mathbf{0}}$ means that all entries of ${\mathbf{p}}$ are non-negative.[^3]

Numerical simulations {#sec:Sim}
=====================

In this section, the performance of the proposed algorithm is numerically analyzed and compared to the [stochastic CRB, which can be obtained by extending the stochastic CRB for nonuniform white noise in [@PesaG01]]{}. In the simulations, we use a 10-element ULA with half-wavelength antenna spacing. The sources and sensors are in the same plane. The signals and noise are iid realizations of zero-mean Gaussian distributions with covariance matrices ${{\mathbf{R}}_{{\mathbf{s}}}}$ and ${{\mathbf{R}}_{{\mathbf{w}}}}$, respectively. Further, the noise covariance matrix in all experiments has the structure given earlier, with $\sigma_1^2,\cdots,\sigma_m^2$ equal to 1, $\sigma_{1,2}, \cdots, \sigma_{m-1,m} = 0.5j$, and $\sigma_{2,1}, \cdots, \sigma_{m,m-1} = -0.5j$. In the first experiment, two uncorrelated sources at directions $\theta_1=88.05^{\circ}$ and $\theta_2=91.95^{\circ}$ impinge on the array. [$[0^{\circ},180^{\circ}]$ is uniformly divided into 1800 points, resulting in a $0.1^{\circ}$ gridding.
To estimate $\theta_1$ and $\theta_2$, the program for uncorrelated sources, which is indeed an SOCP problem [@BellCW11], is solved by CVX [@cvx].]{} Since the program is a square-root LASSO [@BellCW11], we use a fixed, though not optimal, regularization parameter based on the criterion introduced in [@BellCW11]: [ $\lambda_u = \frac{1}{1.1 \| \widetilde{\mathbf{s}} \|_{\infty}\sqrt{M^2 - |\Omega|}} = 0.54$, where $\widetilde{\mathbf{s}}$ denotes a fixed vector defined in [@BellCW11] and obtained by a simple numerical simulation [@BellCW11].]{} The root mean square errors (RMSEs) in estimating the unknown directions are reported as a function of $N$ and SNR over 500 Monte-Carlo simulations. Fig. \[fig:FigN\] shows the RMSEs of our approach as well as the CRBs when $N$ changes from $50$ to $10^5$ and the ${\text{SNR}}$ [is fixed to 0 dB.]{} As can be seen in this figure, the proposed approach closely follows the CRB for small and medium numbers of samples, yet the errors remain unchanged after reaching half of the grid size. To obtain smaller errors at larger numbers of measurements, one can use finer grids at the cost of an increase in computational complexity. In Fig. \[fig:FigSNR\], the RMSEs and CRBs are plotted versus SNR when $N=500$. Here, we again observe a saturation in the RMSEs at high SNRs, which is due to the limited accuracy of the gridding. In the second experiment, the effectiveness of the two-step approach in estimating the DOAs of highly correlated sources is verified. The regularization parameters $\lambda_1$ and $\lambda_2$ are numerically tuned to be 10 and 5, respectively. Two sources are at directions $\theta_1=84.75^{\circ}$ and $\theta_2=95.25^{\circ}$ with cross-correlation equal to 0.99 and ${\text{SNR}}=-2.5 \text{ dB}$. Since this approach is computationally demanding, we first use a coarse grid of $2.5^{\circ}$ and, after finding two peaks in the estimated spatial spectrum, re-solve with a finer grid.
To be precise, let $\hat{\theta}_1^{(1)}$ and $\hat{\theta}_2^{(1)}$ denote the directions estimated with the coarse grid; in the second step, we grid the interval $[\hat{\theta}_1^{(1)} - 3^{\circ},\hat{\theta}_2^{(1)} + 3^{\circ}]$ with a fine grid of $0.5^{\circ}$. Furthermore, we also use the program for uncorrelated sources with a grid resolution of $0.5^{\circ}$ to estimate the DOAs and show the effect of source correlation on its performance. We run 100 Monte-Carlo simulations, and the histograms of the estimated DOAs for the two approaches are plotted in Fig. \[fig:FigHist\]. As can be seen from this plot, ignoring the correlation may cause large biases.

![RMSEs for the estimation of $\theta_1$ and $\theta_2$ using the proposed program as well as the corresponding CRBs, plotted as a function of the number of samples. The true $\theta_1$ and $\theta_2$ are $88.05^{\circ}$ and $91.95^{\circ}$, respectively. 500 Monte-Carlo simulations are run and ${\text{SNR}}= 0 \text{ dB}$.[]{data-label="fig:FigN"}](FigN.eps){width="49.00000%"}

![RMSEs for the estimation of $\theta_1$ and $\theta_2$ using the proposed program as well as the corresponding CRBs, plotted as a function of SNR. The true $\theta_1$ and $\theta_2$ are $88.05^{\circ}$ and $91.95^{\circ}$, respectively. 500 Monte-Carlo simulations are run and $N = 500$.[]{data-label="fig:FigSNR"}](FigSNR.eps){width="49.00000%"}

![Histogram of the estimated directions of two nearly coherent sources at directions $84.75^{\circ}$ and $95.25^{\circ}$. Blue and red bars denote the results of the two-step approach for correlated sources, and black and magenta bars show the results of the program for uncorrelated sources. In this plot, ${\text{SNR}}= -2.5\text{ dB}$ and $N=1000$.[]{data-label="fig:FigHist"}](FigHist.eps){width="49.00000%"}

Conclusion {#sec:Con}
==========

Based on some recent results in the compressive sensing and matrix rank minimization frameworks, we proposed a DOA estimation algorithm which works well under the condition that the noise covariance matrix of the exploited array is sparse.
If the emitters are uncorrelated, our approach involves solving a rather simple convex program, and we suggested an appropriate choice for the regularization parameter of this program which works effectively for any SNR and number of samples. However, when the emitters are correlated or coherent, the proposed approach leads to a computationally demanding convex optimization problem.

[^1]: [This work was supported by the Swedish Research Council under contract 621-2011-5847 and the Iran National Science Foundation under contract 91004600.]{}

[^2]: ${{\mathbf{R}}_{{\mathbf{w}}}}= \sigma^2 {\mathbf{I}}$ and ${{\mathbf{R}}_{{\mathbf{w}}}}= \operatorname{diag}(\sigma_1^2,\cdots,\sigma_m^2)$ are also sparse covariance matrices and can be handled by the proposed algorithm.

[^3]: [After submitting this paper, we became aware that a special case of our program, where ${{\mathbf{R}}_{{\mathbf{w}}}}$ is diagonal, has been proposed in [@HeSH14]. However, our formulation applies to a more general setting and includes an appropriate choice for $\lambda_u$.]{}
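The simplification used for uncorrelated sources rests on the Khatri-Rao identity $\operatorname{vec}({\widetilde{{\mathbf{A}}}}{\mathbf{P}}{\widetilde{{\mathbf{A}}}}^H) = ({\widetilde{{\mathbf{A}}}}^* \odot {\widetilde{{\mathbf{A}}}}) {\mathbf{p}}$ for diagonal ${\mathbf{P}}$. A small self-contained check of this identity on a real-valued example (pure Python; the helper names and sample numbers are ours, for illustration only):

```python
def khatri_rao(A, B):
    """Column-wise Kronecker product of matrices given as lists of rows.

    Column k of the result is kron(a_k, b_k), with the first argument
    varying in the outer (slow) index, as in the vec identity above.
    """
    n_cols = len(A[0])
    return [[A[i][k] * B[j][k] for k in range(n_cols)]
            for i in range(len(A)) for j in range(len(B))]

def vec(R):
    """Stack the columns of R on top of one another."""
    return [R[i][j] for j in range(len(R[0])) for i in range(len(R))]

# Check vec(A diag(p) A^T) == khatri_rao(A, A) @ p on a real example.
A = [[1.0, 2.0], [3.0, 4.0]]
p = [2.0, 3.0]
R = [[sum(A[i][k] * p[k] * A[j][k] for k in range(2)) for j in range(2)]
     for i in range(2)]
lhs = vec(R)
rhs = [sum(row[k] * p[k] for k in range(2)) for row in khatri_rao(A, A)]
```

For complex steering matrices the first factor would be the entrywise conjugate of the second, exactly as in the program for uncorrelated sources.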
--- abstract: 'We express the real period of a family of elliptic curves in terms of classical hypergeometric series. This expression is analogous to a result of Ono which relates the trace of Frobenius of the same family of elliptic curves to a Gaussian hypergeometric series. This analogy provides further evidence of the interplay between classical and Gaussian hypergeometric series.' address: 'School of Mathematical Sciences, University College Dublin, Belfield, Dublin 4, Ireland' author: - Dermot McCarthy date: 'September 3, 2008' title: '$_3F_2$ hypergeometric series and periods of elliptic curves' ---

Introduction
============

In [@G], Greene introduced the notion of general hypergeometric series over finite fields, or *Gaussian hypergeometric series*, which are analogous to classical hypergeometric series. The motivation for his work was to develop the area of character sums and their evaluations through parallels with the theory of hypergeometric functions. The basis for this parallel was the analogy between Gauss sums and the gamma function as discussed in [@Ev; @IR; @Ko3; @Y]. Since then, the interplay between ordinary hypergeometric series and Gaussian hypergeometric series has played an important role in character sum evaluations [@GS], supercongruences [@M], finite field versions of the Lagrange inversion formula [@G2] and the representation theory of SL(2, $\mathbb{R}$) [@G3]. Recently, the author of [@Ro] has further developed this interplay by providing an expression for the real period of an elliptic curve in Legendre normal form in terms of an ordinary hypergeometric series. This formula is analogous to an expression for the trace of Frobenius of the curve in terms of a Gaussian hypergeometric series. He then displays a striking analogy between binomial coefficients involving rational numbers and those involving multiplicative characters. This paper examines this analogy further using a different family of elliptic curves and is organized as follows.
In Section 2 we outline this analogy and state our results. Section 3 recalls some properties of ordinary hypergeometric series, elliptic curves and the arithmetic-geometric mean. In Section 4 we prove our results. Statement of Results ==================== We recall that the ordinary hypergeometric series $_pF_q$ is defined by $${_pF_q} \left( \begin{array}{ccccc} a_1, & a_2, & a_3, & \dotsc, & a_p \vspace{.05in}\\ \phantom{a_1} & b_1, & b_2, & \dotsc, & b_q \end{array} \Big| \; z \right) :=\sum^{\infty}_{n=0} \frac{{{\left({a_1}\right)}_{n}} {{\left({a_2}\right)}_{n}} {{\left({a_3}\right)}_{n}} \dotsm {{\left({a_p}\right)}_{n}}} {{{\left({b_1}\right)}_{n}} {{\left({b_2}\right)}_{n}} \dotsm {{\left({b_q}\right)}_{n}}} \; \frac{z^n}{{n!}}$$ where $a_i$, $b_i$ and $z$ are complex numbers, with none of the $b_i$ being negative integers or zero, $p$ and $q$ are positive integers, ${{\left({a}\right)}_{0}}:=1$ and ${{\left({a}\right)}_{n}} := a(a+1)(a+2)\dotsm(a+n-1)$ for positive integers $n$. Let $\mathbb{F}_{p}$ denote the finite field with $p$ elements. We extend the domain of all characters $\chi$ of $\mathbb{F}^{*}_{p}$ to $\mathbb{F}_{p}$, by defining $\chi(0):=0$. We now introduce two definitions from [@G]. The first definition is the finite field analogue of the binomial coefficient. For characters $A$ and $B$ of $\mathbb{F}_{p}$, define ${\left({\genfrac{}{}{0pt}{}{A}{B}}\right)}$ by $$\label{FF_Binomial} \binom{A}{B} := \frac{B(-1)}{p} J(A, {\overline}{B})$$ where $J(\chi, \lambda)$ denotes the Jacobi sum for $\chi$ and $\lambda$ characters of $\mathbb{F}_{p}$. The second definition is the finite field analogue of ordinary hypergeometric series. 
For characters $A_0,A_1,\dotsc, A_n$ and $B_1, \dotsc, B_n$ of $\mathbb{F}_{p}$ and $x \in \mathbb{F}_{p}$, define the *Gaussian hypergeometric series* by $${_{n+1}F_n} {\left( \begin{array}{cccc} A_0, & A_1, & \dotsc, & A_n \\ \phantom{A_0} & B_1, & \dotsc, & B_n \end{array} \Big| \; x \right)}_{p} := \frac{p}{p-1} \sum_{\chi} \binom{A_0 \chi}{\chi} \binom{A_1 \chi}{B_1 \chi} \dotsm \binom{A_n \chi}{B_n \chi} \chi(x)$$ where the summation is over all characters $\chi$ on $\mathbb{F}_{p}$. In the case where $A_i = \phi_p$, the quadratic character mod $p$, for all $i$ and $B_j= \epsilon_p$, the trivial character mod $p$, for all $j$ we denote this by ${_{n+1}F_n}(x)_{p}$ for brevity. We now briefly recall some facts about elliptic curves. For further details see [@Kn], [@Ko2] and [@S]. Recall that every elliptic curve $E / \mathbb{C}$ can be written in the form $$\label{Egg} y^2 = 4x^3 - g_2x - g_3 \: ,$$ with $g_2$, $g_3 \in \mathbb{C}.$ We can associate a period lattice $\Lambda$ to $E$ via the biholomorphic mapping $\varphi$ : $\mathbb{C} / \Lambda \rightarrow E(\mathbb{C}) $ given by $$\varphi(z) = \left\{ \begin{array}{ll} (\wp(z), \wp'(z), 1) & \quad \textup{for } z \notin \Lambda, \vspace{.05in} \\ (0,1,0) & \quad \textup{for } z \in \Lambda, \end{array} \right.$$ where $\wp$ is the Weierstrass $\wp$-function. If $g_2$, $g_3 \in \mathbb{R}$ then $\Lambda$ can be chosen to be of the form $\Lambda = \Omega(E) \mathbb{Z} + \Omega'(E) \mathbb{Z}$ where $\Omega(E) \in \mathbb{R}$ and $\Omega'(E) \in \mathbb{C}$. We call $\Omega(E)$ the *real period* of $E$. Furthermore, if the right-hand side of (\[Egg\]) has three real roots then $\Omega'(E)$ will be strictly imaginary. Consider an elliptic curve $E / \mathbb{Q}$ in Weierstrass form $$E: y^2 + a_1 xy + a_3y = x^3 + a_2 x^2 +a_4 x + a_6$$ where $a_i \in \mathbb{Q}$. 
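The Jacobi sums that underlie the finite-field binomial coefficient (\[FF\_Binomial\]) can be computed by brute force for small $p$. A sketch for the quadratic character $\phi_p$, evaluated via Euler's criterion; the final loop checks the classical evaluation $J(\phi_p, \phi_p) = -\phi_p(-1)$, a standard fact not proved here:

```python
def quadratic_character(a, p):
    """phi_p(a) via Euler's criterion, with the convention phi_p(0) = 0."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def jacobi_sum(chi, lam, p):
    """J(chi, lambda) = sum over a in F_p of chi(a) * lambda(1 - a)."""
    return sum(chi(a) * lam((1 - a) % p) for a in range(p))

# Classical evaluation J(phi_p, phi_p) = -phi_p(-1), for odd primes p.
for p in (5, 7, 11, 13):
    phi = lambda a, p=p: quadratic_character(a, p)
    assert jacobi_sum(phi, phi, p) == -phi(-1)
```

Since $\overline{\phi_p} = \phi_p$, this also gives the binomial coefficient $\binom{\phi_p}{\phi_p} = \phi_p(-1) J(\phi_p, \phi_p)/p = -1/p$.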
Defining the quantities $$b_2 := {a_1}^2 +4 a_2, \quad b_4 := 2 a_4 + a_1a_3, \quad b_6 := {a_3}^2 + 4 a_6,$$ and $$b_8 := {a_1}^2 a_6 + 4 a_2 a_6 - a_1 a_3 a_4 + a_2 {a_3}^2 - {a_4}^2,$$ the discriminant of $E$, $\Delta(E)$, is given by $$\Delta(E) = -{b_2}^2 b_8 - 8{b_4}^3 - 27{b_6}^2 + 9 b_2 b_4 b_6.$$ Let $\tilde{E}$ denote the reduction of $E$ mod $p$. Recall that if $p \nmid \Delta(E)$ then $E$ has good reduction ($\tilde{E} / \mathbb{F}_p$ is an elliptic curve) and we say $p$ is a prime of good reduction. We define the integer $a_p(E)$ by $$a_p(E) := 1 + p - N_p \: ,$$ where $N_p$ is the number of rational points on $\tilde{E}$ over $\mathbb{F}_p$ (including the point at infinity). When $p$ is a prime of good reduction, we refer to $a_p(E)$ as the *trace of Frobenius* as it can be interpreted as the trace of the Frobenius endomorphism on $E$. Furthermore, if $E$ is given by $y^2 = f(x)$ then $$\label{TraceFormula} a_p(E) = - \sum_{x \in \mathbb{F}_p} \phi_p(f(x)) \: .$$ Consider the family of elliptic curves $E_{\lambda} / \mathbb{Q}$ defined by $$E_{\lambda}: y^2=(x-1)(x^2+\lambda), \qquad \lambda \in \mathbb{Q} \setminus \lbrace 0, -1 \rbrace \: .$$ Ono [@O Thm. 5] (see also [@O2 Chapter 11]) proved that if $\lambda \in \mathbb{Q} \setminus \lbrace 0,-1\rbrace$ and $p$ is an odd prime for which ord$_p(\lambda(\lambda+1)) = 0$ then $$\label{ThmOno} _3F_2 \left( \frac{1+\lambda}{\lambda} \right)_p = \frac{\phi_p(-\lambda)\left(a_p(E_\lambda)^2 - p\right)}{p^2} \: .$$ Note that a change of variables in Theorem 5 of [@O] is required to arrive at (\[ThmOno\]) (see also [@FOP Thm. 4.4]). The condition ord$_p(\lambda(\lambda+1)) = 0$ ensures that $p$ is a prime of good reduction and so $a_p(E_\lambda)$ is the trace of Frobenius. Using the following property of Gaussian hypergeometric series (see [@G Thm.
4.2]) $$_3F_2\left(\frac{1}{t}\right)_p = \phi_p(-t) _3F_2(t)_p \: ,$$ we transform (\[ThmOno\]) to get $$_3F_2 \left( \frac{\lambda}{1+\lambda} \right)_p = \frac{\phi_p(1+\lambda)\left(a_p(E_\lambda)^2 - p \right)}{p^2} \: .$$ We would like to prove an analogous formula which replaces the Gaussian hypergeometric series with the classical hypergeometric series and trace of Frobenius with the real period $\Omega(E_\lambda)$. In [@Ro] this analogy is based on replacing characters of order $n$ in the Gaussian hypergeometric series with $1/n$ as the arguments in the classical hypergeometric series, $\phi_p(s)$ with $\sqrt{s}$ and $\phi_p(-1) p$ with $\pi$. This would suggest $$\label{Guess} {_3F_2} \left( \begin{array}{ccc} \frac{1}{2}, & \frac{1}{2}, & \frac{1}{2} \\ \phantom{\frac{1}{2},} & 1, & 1 \end{array} \Big| \; \frac{\lambda}{1+\lambda} \right) = \frac{\sqrt{1+\lambda} \: \: \Omega(E_\lambda)^2}{\pi^2} - i \frac{\sqrt{1+\lambda}}{\pi}$$ as an appropriate analogy. The main result of this paper extends the analogy in [@Ro] by taking the real part of the right-hand side of (\[Guess\]). This extension is consistent with the results in [@Ro]. \[TheoremPeriod\] Let $E_\lambda$ be the elliptic curve defined by $$E_{\lambda}: y^2=(x-1)(x^2+\lambda), \qquad \lambda \in \mathbb{R} \setminus \lbrace 0, -1 \rbrace \: .$$ Then for $\lambda>0$, $${_3F_2} \left( \begin{array}{ccc} \frac{1}{2}, & \frac{1}{2}, & \frac{1}{2} \\ \phantom{\frac{1}{2},} & 1, & 1 \end{array} \Big| \; \frac{\lambda}{1+\lambda} \right) = \frac{\sqrt{1+\lambda} \: \: \Omega(E_\lambda)^2}{\pi^2} \: ,$$ where $\Omega(E_\lambda)$ is the real period of $E_\lambda$. Note that the analogy in [@Ro] also contained a factor of ${-1}$ which the author explains is inherent in the definition of Gaussian hypergeometric series. This may be better explained by the $-1$ preceding the character sum expression for $a_p(E_\lambda)$ in (\[TraceFormula\]), which disappears upon squaring. 
(The real period can be expressed as an elliptic integral which is somewhat analogous to (\[TraceFormula\]) but without the minus sign). Therefore we suggest a further refinement which would see $a_p(E_\lambda)$ being replaced with $-\Omega(E_\lambda)$. We now specialize the curve by choosing $\lambda =$ 1/3. We then use a known transformation of the hypergeometric series in terms of the gamma function to simplify the expression as a binomial coefficient. We extend the interpretation of the binomial coefficient to include rational arguments via $${\left({\genfrac{}{}{0pt}{}{n}{k}}\right)} = \frac{{\Gamma{\left({n+1}\right)}}}{{\Gamma{\left({k+1}\right)}}{\Gamma{\left({n-k+1}\right)}}} \: .$$ Our result is as follows. \[CorBinomial\] Let $E_{\frac{1}{3}}$ be the elliptic curve defined by $$E_{\frac{1}{3}}: y^2=(x-1)(x^2+\tfrac{1}{3}) \: .$$ Then $$\frac{2 \sqrt{2}}{3\pi} \cdot \Omega(E_\frac{1}{3}) = {\left({\genfrac{}{}{0pt}{}{\frac{1}{3}}{\frac{1}{2}}}\right)}$$ and $$\sqrt{2} \cdot \Omega(E_\frac{1}{3}) = \frac{{\Gamma{\left({\frac{1}{3}}\right)}} {\Gamma{\left({\frac{1}{2}}\right)}}}{{\Gamma{\left({\frac{5}{6}}\right)}}} \: ,$$ where $\Omega(E_\frac{1}{3})$ is the real period of $E_{\frac{1}{3}}$. We now find an analogous result in terms of the trace of Frobenius and the binomial coefficient of characters, as defined in (\[FF\_Binomial\]), which we can also express in terms of Gauss sums. 
\[TheoremBinomial\] Let $E_{\frac{1}{3}}$ be the elliptic curve defined by $$E_{\frac{1}{3}}: y^2=(x-1)(x^2+\tfrac{1}{3}) \: .$$ Then for $p$ a prime with $p>3$, $$\label{ThmBinPart1} - \frac{\phi_p(-2)}{p} \cdot a_p(E_\frac{1}{3}) = 2 \: \textup{Re} {\left({\genfrac{}{}{0pt}{}{\chi_3}{\phi_p}}\right)}$$ and $$\label{ThmBinPart2} - \phi_p(2) \cdot a_p(E_\frac{1}{3}) = 2 \: \textup{Re} \left[\frac{G(\chi_3) G(\phi_p)}{G(\chi_3 \: \phi_p)}\right] \: ,$$ where $a_p(E_\frac{1}{3})$ is the trace of Frobenius of $E_{\frac{1}{3}}$, $\chi_3$ is a character of order three of $\mathbb{F}_p$ and $G(\chi)$ is a Gauss sum. Again the analogy is achieved by replacing $\Omega(E_\frac{1}{3})$ with $-a_p(E_\frac{1}{3})$, $\sqrt{s}$ with $\phi_p(s)$, $\pi$ with $\phi_p(-1) p$, rational numbers with characters and taking the real part of the terms involving characters. We now also replace the gamma function with the Gauss sum in the second result, which is what we would expect. The analogy holds up to a factor of 2 (which also appears in [@Ro]). This might be explained as follows. It should be possible to express the trace of Frobenius in terms of a $_2F_1$ Gaussian hypergeometric series which in turn could be expressed as a Jacobi sum plus its conjugate, using Theorem 4.16 in [@G], which would evaluate as two times its real part. However, we do not investigate this here. Preliminaries ============= We first recall some properties of ordinary hypergeometric series. In particular we recall that a $_2F_1$ has the following integral representation [@E page 115 (7)]: $$\label{IntegralRep} {_2F_1} \left( \begin{array}{cc} a, & b \\ \phantom{a,} & c \end{array} \Big| \; z \right) = \frac{2 \: {\Gamma{\left({c}\right)}}}{{\Gamma{\left({b}\right)}} {\Gamma{\left({c-b}\right)}}} \int_{0}^{\frac{\pi}{2}} \frac{(\sin{t})^{2b-1} (\cos{t})^{2c-2b-1}}{(1-z \sin^2{t})^a} dt$$ where Re $c >$ Re $b > 0$. We also note three transformation properties which we will use in Section 4. 
From [@E page 111 (10)] we have that $$\label{TransformQuadratic} {_2F_1} \left( \begin{array}{cc} a, & b \\ \phantom{a,} & a+b+\tfrac{1}{2} \end{array} \Big| \; z \right) = {_2F_1} \left( \begin{array}{cc} 2a, & 2b \\ \phantom{2a,} & a+b+\tfrac{1}{2} \end{array} \Big| \; \tfrac{1}{2} - \tfrac{1}{2}(1-z)^{\frac{1}{2}} \right) \: ,$$ from [@B Entry 33(iii)] that $$\label{TransformSquare} {_3F_2} \left( \begin{array}{ccc} \frac{1}{2}, & \frac{1}{2}, & \frac{1}{2} \\ \phantom{\frac{1}{2},} & 1, & 1 \end{array} \Big| \; z \right) = {\left[{_2F_1} \left( \begin{array}{cc} \frac{1}{4}, & \frac{1}{4} \\ \phantom{\frac{1}{4},} & 1 \end{array} \Big| \; z \right) \right]}^2,$$ and from [@E page 105 (3)] that $$\label{Transform3} {_2F_1} \left( \begin{array}{cc} a, & b \\ \phantom{a,} & c \end{array} \Big| \; z \right) = (1-z)^{-a} {_2F_1} \left( \begin{array}{cc} a, & c-b \\ \phantom{a,} & c \end{array} \Big| \; \frac{z}{z-1} \right).$$ These transformations are valid for all values of $z$ for which the series involved converge. We now recall the definition of the Arithmetic-Geometric Mean. Given two positive real numbers $\alpha$ and $\beta$, the Arithmetic-Geometric Mean of $\alpha$ and $\beta$, denoted AGM($\alpha$,$\beta$), is defined as the common limit of the two sequences $\alpha_n$ and $\beta_n$ where $\alpha_0:=\alpha$, $\beta_0:=\beta$, $\alpha_{n+1}:=(\alpha_n+\beta_n)/2$ and $\beta_{n+1}:=\sqrt{\alpha_n \beta_n}$. The AGM can be expressed as an integral ([@Co page 390]) which can be transformed into a hypergeometric series as follows. 
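The equality between the AGM and the hypergeometric series can be illustrated directly from the two definitions; a short standard-library sketch (the sample values $\alpha = 2$, $\beta = 1$ are arbitrary):

```python
from math import pi, sqrt

def agm(alpha, beta, n=60):
    """AGM(alpha, beta): common limit of the two sequences defined above."""
    for _ in range(n):
        alpha, beta = (alpha + beta) / 2, sqrt(alpha * beta)
    return alpha

def hyp2f1(a, b, c, z, terms=400):
    """Truncated 2F1 series; adequate for |z| well inside the unit disc."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
    return total

alpha, beta = 2.0, 1.0
# pi / AGM(alpha, beta) = alpha^{-1} * pi * 2F1(1/2, 1/2; 1; 1 - (beta/alpha)^2)
lhs = pi / agm(alpha, beta)
rhs = (pi / alpha) * hyp2f1(0.5, 0.5, 1.0, 1 - (beta / alpha) ** 2)
```

The AGM converges quadratically, so a handful of iterations already saturates double precision.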
$$\begin{aligned} \label{AGM_to_Hyp} \frac{\pi}{AGM(\alpha, \beta)} &= 2 \; \int_{0}^{\frac{\pi}{2}} \frac{dt} {\sqrt{\alpha^2 \cos^2{t} + \beta^2 \sin^2{t}}}\\ &\notag= 2 \alpha^{-1} \int_{0}^{\frac{\pi}{2}} \left[{ \cos^2{t} + {\left(\tfrac{\beta}{\alpha}\right)}^2 \sin^2{t} }\right]^{-\frac{1}{2}} dt\\ &\notag= 2 \alpha^{-1} \int_{0}^{\frac{\pi}{2}} \left[{{1- \left(1-{\left(\tfrac{\beta}{\alpha}\right)}^2\right) \sin^2{t}}}\right] ^{-\frac{1}{2}} dt \\ &\notag= \alpha^{-1} \: \pi \; {_2F_1} \left( \begin{array}{cc} \frac{1}{2}, & \frac{1}{2} \\ \phantom{\frac{1}{2},} & 1 \end{array} \Big| \; 1- \left(\tfrac{\beta}{\alpha}\right)^2 \right) \: .\end{aligned}$$ The last step in (\[AGM\_to\_Hyp\]) follows from (\[IntegralRep\]) with $a=b=\frac{1}{2}$, $c=1$ and $z=1-{\left(\tfrac{\beta}{\alpha}\right)}^2$. Next we introduce the notion of a *quadratic twist* of an elliptic curve. Let $E /\mathbb{Q}$ be an elliptic curve defined by $$E: y^2 = x^3 +ax^2 + bx +c,$$ with $a,b,c \in \mathbb{Q}$. If $t$ is a square-free integer, then the $t$-quadratic twist of $E$, which we denote $E_t$, is defined by $$E_t : y^2 = x^3 +atx^2 + bt^2x + ct^3.$$ If $p$ is a prime of good reduction for both $E$ and $E_t$ and gcd($p$, 6)=1, then $$\label{Twist} a_p(E) = \phi_p(t) \: a_p(E_t).$$ We now mention a result which we will use in the proof of Theorem \[TheoremBinomial\]. As it follows from well-known properties of Jacobsthal sums (see Sections 6.1 and 6.2 in [@BEW]) we omit the proof. 
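The twisting relation (\[Twist\]) is easy to test numerically by counting points on $y^2 = x^3 + 1$ and its quadratic twists; an illustrative sketch (the specific twists and primes are arbitrary choices, filtered so that both curves have good reduction):

```python
def legendre(a, p):
    """Quadratic character phi_p via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def trace(a, b, c, p):
    """a_p of y^2 = x^3 + a x^2 + b x + c over F_p by point counting."""
    return -sum(legendre(x**3 + a * x * x + b * x + c, p) for x in range(p))

# E: y^2 = x^3 + 1, and its t-quadratic twists E_t: y^2 = x^3 + t^3
for t in (-1, 2, -6, 5):
    for p in (5, 7, 11, 13, 17, 19, 23):
        if t % p == 0:
            continue  # need good reduction for E_t as well
        assert trace(0, 0, 1, p) == legendre(t, p) * trace(0, 0, t**3, p)
```

For instance $a_7(y^2 = x^3 + 1) = -4$ while $a_7(y^2 = x^3 - 1) = 4$, matching $\phi_7(-1) = -1$.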
\[PropJacobsthal\] $$\sum_{x \in \mathbb{F}_p} \phi_p(x^3+1) = \left\{ \begin{array}{ll} 2a & \quad \textup{if} \quad p\equiv1 \pmod 3, \textup{ where } p=a^2+3b^2 \textup{ and } a\equiv-1 \pmod 3, \vspace{.05in} \\ \phantom{2}0 & \quad \textup{if} \quad p\equiv2 \pmod 3 \; .\end{array} \right.$$ Proofs of Theorem \[TheoremPeriod\], Corollary \[CorBinomial\] and Theorem \[TheoremBinomial\] =============================================================================================== Making the change of variable $y \mapsto \frac{y}{2}$ in $E_\lambda$ yields $$E'_{\lambda}: y^2=4(x-1)(x^2+\lambda)\: .$$ For $\lambda>0$, we note that $4(x-1)(x^2+\lambda)=0$ has one real root. The real period $\Omega(E_\lambda)$ is then given by [@Co page 391] $$\Omega(E_\lambda) = \frac{2\pi}{AGM(2\sqrt{b},\sqrt{2b+a})} \: ,$$ where $a=2$ and $b=\sqrt{1+\lambda}$. Using (\[AGM\_to\_Hyp\]) we get $$\Omega(E_\lambda) = {\left(\sqrt{1+\lambda}\right)}^{-\frac{1}{2}} \; \pi \; {_2F_1} \left( \begin{array}{cc} \frac{1}{2}, & \frac{1}{2} \\ \phantom{\frac{1}{2},} & 1 \end{array} \Big| \; \tfrac{1}{2} \left(1- \tfrac{1}{\sqrt{1+\lambda}}\right) \right) \: .$$ Now applying equation (\[TransformQuadratic\]) with $a=b=\frac{1}{4}$ and $z=\frac{\lambda}{1+\lambda}$ we get $$\label{Period2F1} \Omega(E_\lambda) = {\left(\sqrt{1+\lambda}\right)}^{-\frac{1}{2}} \; \pi \; {_2F_1} \left( \begin{array}{cc} \frac{1}{4}, & \frac{1}{4} \\ \phantom{\frac{1}{4},} & 1 \end{array} \Big| \; \frac{\lambda}{1+\lambda} \right) \: .$$ We note that the condition $\lambda>0$ implies that these hypergeometric series converge. Squaring both sides and applying (\[TransformSquare\]) yields the result. 
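Proposition \[PropJacobsthal\] can be verified for small primes by direct summation; a standard-library sketch (the congruence $a \equiv -1 \pmod 3$ fixes the sign of $a$, and we only check that $p - a^2$ is three times a perfect square):

```python
from math import isqrt

def legendre(a, p):
    """Quadratic character phi_p via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def jacobsthal(p):
    """The character sum of the proposition: sum over x of phi_p(x^3 + 1)."""
    return sum(legendre(x**3 + 1, p) for x in range(p))

for p in (5, 7, 11, 13, 17, 19, 23, 31, 37, 43):
    s = jacobsthal(p)
    if p % 3 == 2:
        assert s == 0
    else:
        a = s // 2
        assert a % 3 == 2                       # i.e. a = -1 (mod 3)
        assert (p - a * a) % 3 == 0             # p = a^2 + 3 b^2 ...
        b2 = (p - a * a) // 3
        assert isqrt(b2) ** 2 == b2             # ... with b an integer
```

For example $p = 7$ gives $s = 4$, so $a = 2$ and $7 = 2^2 + 3 \cdot 1^2$; for $p = 13$, $s = -2$ and $a = -1$.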
Transforming the hypergeometric series on the right-hand side of (\[Period2F1\]) using (\[Transform3\]), with $a=b=\frac{1}{4}$ and $z=\frac{\lambda}{1+\lambda}$, we see that for $0<\lambda<1$, $$\Omega(E_\lambda) = \pi \; {_2F_1} \left( \begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ \phantom{\frac{1}{4},} & 1 \end{array} \Big| \; -\lambda \right) \:.$$ Now letting $\lambda=\frac{1}{3}$ and noting that $${_2F_1} \left( \begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ \phantom{\frac{1}{4},} & 1 \end{array} \Big| \; -\tfrac{1}{3} \right) = \frac{3}{2\sqrt{2}} \cdot \frac{{\Gamma{\left({\frac{4}{3}}\right)}}}{{\Gamma{\left({\frac{3}{2}}\right)}} {\Gamma{\left({\frac{5}{6}}\right)}}}\: ,$$ (see [@E page 104 (53)] with $a=-\frac{1}{4}$), the first result follows. The second result then follows upon recalling that ${\Gamma{\left({1+x}\right)}} = x \: {\Gamma{\left({x}\right)}}$ and ${{\Gamma{\left({\frac{1}{2}}\right)}}}^2 = \pi$. We first prove (\[ThmBinPart1\]). By definition (\[FF\_Binomial\]) it suffices to prove $$\label{NewTheorem} -\tfrac{1}{2} \cdot {\phi_p(2)} \cdot a_p(E_\frac{1}{3}) = \textup{Re} [J(\chi_3, \phi_p)] \: .$$ We now evaluate $a_p(E_\frac{1}{3}$). A similar calculation appears in [@O] although we present our result slightly differently. Making the change of variables $(x,y) \mapsto (\frac{x}{9}+\frac{1}{3},\frac{y}{27})$ in $E_\frac{1}{3}$ yields $${E'}_\frac{1}{3}: y^2 = x^3 - 6^3 \: .$$ This is the $-6$-quadratic twist of $y^2=x^3+1$. 
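The chain of identities in this proof can be checked numerically: the AGM value of $\Omega(E_\frac{1}{3})$, the series $\pi \, {_2F_1}(\tfrac{1}{4}, \tfrac{3}{4}; 1; -\tfrac{1}{3})$, and the gamma-function evaluation must all agree. A standard-library sketch:

```python
from math import gamma, pi, sqrt

def agm(a, b, n=60):
    for _ in range(n):
        a, b = (a + b) / 2, sqrt(a * b)
    return a

def hyp2f1(a, b, c, z, terms=300):
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
    return total

lam = 1 / 3
b = sqrt(1 + lam)
# AGM value of the real period, from the first step of the proof
omega = 2 * pi / agm(2 * sqrt(b), sqrt(2 * b + 2))

# Omega(E_lambda) = pi * 2F1(1/4, 3/4; 1; -lambda) for 0 < lambda < 1
series = pi * hyp2f1(0.25, 0.75, 1.0, -lam)

# gamma-function evaluation of 2F1(1/4, 3/4; 1; -1/3)
closed = pi * (3 / (2 * sqrt(2))) * gamma(4 / 3) / (gamma(3 / 2) * gamma(5 / 6))
```

All three values come out near $2.9745$, as they should.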
Therefore, applying (\[TraceFormula\]) and (\[Twist\]), noting that for $p>3$, gcd($p$, 6)=1 and $p$ is a prime of good reduction for both $y^2 = x^3 + 1$ and ${E'}_\frac{1}{3}$, we get $$a_p(E_\frac{1}{3}) = a_p({E'}_\frac{1}{3}) = - \phi_p(-6) \sum_{x \in \mathbb{F}_p} \phi_p(x^3+1) \: .$$ Using Proposition \[PropJacobsthal\] and the fact that $\phi_p(-3) = 1$ if and only if $p \equiv 1\pmod3$ we get $$-\tfrac{1}{2} \cdot \phi_p(2) \cdot a_p(E_\frac{1}{3}) = \left\{ \begin{array}{ll} a & \quad \textup{if} \quad p\equiv1 \pmod 3, \textup{ where } p=a^2+3b^2 \textup{ and } a\equiv-1 \pmod 3, \vspace{.05in} \\ 0 & \quad \textup{if} \quad p\equiv2 \pmod 3 \; .\end{array} \right.$$ Next we examine the right-hand side of (\[NewTheorem\]). We first note that $$J(\chi_3, \phi_p) = \sum_{x \in \mathbb{F}_p} \chi_3(x) \phi_p(1-x) \: .$$ If $p \equiv 2 \pmod 3$ then $\chi_3(x) = 1$ for all $x \in \mathbb{F}_p$. Therefore, $$J(\chi_3, \phi_p) = \sum_{x \in \mathbb{F}_p} \phi_p(1-x) = 0 \;.$$ If $p \equiv 1 \pmod 3$, then $$\begin{aligned} \textup{Re} [J(\chi_3, \phi_p)] &= \textup{Re} \left[\sum_{x \in \mathbb{F}_p} \chi_3(x) \phi_p(1-x)\right]\\ \notag &= \sum_{x \in {\mathbb{F}_p^*}^3} \phi_p(1-x) + \textup{Re} \left[\sum_{x \in \mathbb{F}_p^* \setminus {\mathbb{F}_p^*}^3} \chi_3(x) \phi_p(1-x)\right]\\ \notag &= \frac{1}{3} \sum_{x \in \mathbb{F}_p^*} \phi_p(1-x^3) + \sum_{x \in \mathbb{F}_p^* \setminus {\mathbb{F}_p^*}^3} \textup{Re}\left[\chi_3(x)\right] \phi_p(1-x)\\ \notag &= \frac{1}{3} \sum_{x \in \mathbb{F}_p^*} \phi_p(1-x^3) - \frac{1}{2} \sum_{x \in \mathbb{F}_p^* \setminus {\mathbb{F}_p^*}^3} \phi_p(1-x)\\ \notag &= \frac{1}{3} \sum_{x \in \mathbb{F}_p^*} \phi_p(1-x^3) - \frac{1}{2} \left[-1- \frac{1}{3} \sum_{x \in \mathbb{F}_p^*} \phi_p(1-x^3) \right]\\ \notag &= \frac{1}{2} \sum_{x \in \mathbb{F}_p}\phi_p(1-x^3)\\ \notag &= \frac{1}{2} \sum_{x \in \mathbb{F}_p}\phi_p(x^3+1)\\ \notag &= a \qquad (\textup{by Proposition (\ref{PropJacobsthal})})\end{aligned}$$ 
where $p=a^2+3b^2$ and $a\equiv-1 \pmod 3$, which completes the proof of (\[ThmBinPart1\]). Expanding the right-hand side of (\[ThmBinPart1\]) using (\[FF\_Binomial\]), and then using the fact that $$J(\chi, \psi) = \frac{G(\chi) G(\psi)}{G(\chi \: \psi)},$$ where $J(\chi, \psi)$ and $G(\chi)$ are Jacobi and Gauss sums respectively, (\[ThmBinPart2\]) follows. Remark ====== It is worth noting that if $E / \mathbb{R}$ is an elliptic curve and $E(\mathbb{R})$ has a single connected component then $E$ is isomorphic to either $E_\lambda$, as defined in Theorem \[TheoremPeriod\], or its $-1$ quadratic twist. Hence, Theorem \[TheoremPeriod\] provides a formula for the real period of $E$ in this case. If $E(\mathbb{R})$ has two connected components then $E$ is isomorphic to an elliptic curve in Legendre normal form, ${E'}_{\lambda}: y^2=x(x-1)(x-\lambda)$, for some $\lambda \in \mathbb{R} \setminus \lbrace 0, 1 \rbrace$. This is the case covered in [@Ro]. Acknowledgements ================ The author would like to thank Robert Osburn for his advice during the preparation of this paper and the UCD Ad Astra Research Scholarship program for its financial support. [B ]{} B. Berndt, *Ramanujan’s notebooks, part II*, Springer-Verlag, New York, 1989. B. Berndt, R. Evans, K. Williams, *Gauss and Jacobi Sums*, Canadian Mathematical Society Series of Monographs and Advanced Texts, A Wiley-Interscience Publication, John Wiley & Sons, Inc., New York, 1998. H. Cohen, *A course in computational algebraic number theory*, Graduate Texts in Mathematics, 138, Springer-Verlag, Berlin, 1993. A. Erd[é]{}lyi et al., *Higher transcendental functions, Vol 1*, McGraw-Hill, New York, 1953. R. Evans, *Identities for products of Gauss sums over finite fields*, Enseign. Math. (2) **27** (1981), no. 3-4, 197–209 (1982). S. Frechette, K. Ono, and M. Papanikolas, *Gaussian hypergeometric functions and traces of Hecke operators*, Int. Math. Res. Not. **2004**, no. 60, 3233–3262. J. 
Greene, *Hypergeometric functions over finite fields*, Trans. Amer. Math. Soc. [**301**]{} (1987), no. 1, 77–101. J. Greene, *Lagrange inversion over finite fields*, Pacific J. Math. **130** (1987), no. 2, 313–325. J. Greene, *Hypergeometric functions over finite fields and representations of ${\rm SL}(2,q)$*, Rocky Mountain J. Math. **23** (1993), no. 2, 547–568. J. Greene, D. Stanton, *A character sum evaluation and Gaussian hypergeometric series*, J. Number Theory **23** (1986), no. 1, 136–148. K. Ireland, M. Rosen, *A classical introduction to modern number theory*, 2nd ed., Graduate Texts in Mathematics, 84, Springer-Verlag, New York, 1990. A.W. Knapp, *Elliptic Curves*, Mathematical Notes, 40, Princeton University Press, Princeton, New Jersey, 1992. N. Koblitz, *Introduction to elliptic curves and modular forms*, 2nd ed., Graduate Texts in Mathematics, 97, Springer-Verlag, New York, 1993. N. Koblitz, *The number of points on certain families of hypersurfaces over finite fields*, Compositio Math. **48** (1983), no. 1, 3–23. E. Mortenson, *Supercongruences for truncated ${}_{n+1}F_{n}$ hypergeometric series with applications to certain weight three newforms*, Proc. Amer. Math. Soc. **133** (2005), no. 2, 321–330. K. Ono, *Values of Gaussian hypergeometric series*, Trans. Amer. Math. Soc. **350** (1998), 1205–1223. K. Ono, *The web of modularity: arithmetic of the coefficients of modular forms and $q$-series*, CBMS Regional Conference Series in Mathematics, 102, Amer. Math. Soc., Providence, RI, 2004. J. Rouse, *Hypergeometric functions and elliptic curves*, Ramanujan J. **12** (2006), no. 2, 197–205. J. Silverman, *The arithmetic of elliptic curves*, Graduate Texts in Mathematics, 106, Springer-Verlag, New York, 1986. K. Yamamoto, *On a conjecture of Hasse concerning multiplicative relations of Gaussian sums*, J. Combinatorial Theory **1** (1966), 476–489.
0.3cm [**THE YANG-MILLS STRING AS THE $A$-MODEL ON THE TWISTOR SPACE OF THE COMPLEX TWO-DIMENSIONAL PROJECTIVE SPACE WITH FLUXES AND WILSON LOOPS: THE BETA FUNCTION**]{}\ \ Jefferson Physical Laboratory, Harvard University, Cambridge MA 02138 USA\ and\ INFN Sezione di Roma [^1]\ Dipartimento di Fisica, Università di Roma ‘La Sapienza’\ Piazzale Aldo Moro 2, 00185 Roma\ e-mail: marco.bochicchio@roma1.infn.it\ \ We argue that the string theory dual to a certain sector of the four-dimensional Yang-Mills theory at large-$N$ is the $A$-model wrapping $N$ Lagrangian $D$-branes on the twistor space of the complex two-dimensional projective space, with a certain flux and Wilson loop background, by finding that the target-space beta function of this $A$-model coincides with the large-$N$ beta function of the Yang-Mills theory. The beta function is obtained by the quantization of the $A$-model Chern-Simons effective action in target space, provided a certain $B$-field and Wilson loop background is coupled to the $A$-model world-sheet sigma-model. The $B$-field provides the embedding of the four-dimensional space-time into the two-dimensional base of the Lagrangian-twistorial Chern-Simons by the non-commutative large-$N$ Eguchi-Kawai reduction. In fact, in the presence of the Wilson loop and $B$-field background, the large-$N$ loop equation of the twistorial Chern-Simons theory implies the non-commutative vortex equation of (anti-)self-dual type of the pure Yang-Mills theory restricted to a Lagrangian submanifold in space-time, which in turn has been shown to occur, on the Yang-Mills side, in the localization of the loop equation of the full Yang-Mills theory in the (anti-)self-dual variables for a certain diagonal embedding of quasi $BPS$ Wilson loops. Remarkably, the Wilsonean large-$N$ beta function of full Yang-Mills and the large-$N$ $A$-model target-space beta function coincide. 
Introduction ============ This paper arose from combining the conjecture [@V1], about the link between a twistorial topological $A$-model and (supersymmetric) Yang-Mills ($YM$) theory, with the computation of the beta function of the pure $YM$ theory via the homological localization of the loop equation [@MB1; @MB2]. The main result of this paper is that a certain version of the $A$-model defined on the twistor space of $CP^2$, $TW(CP^2)$,[^2] is described in target space by an effective action that has the same (Wilsonean) beta function as the pure $YM$ theory at large $N$. The basic idea is that there is a version of the $A$-model on twistor space whose target-space equation of motion coincides with the vortex equation of self-dual ($SD$) type obtained by the localization of the loop equation of large-$N$ $YM$ for certain quasi $BPS$ Wilson loops [@MB1; @MB2]. We recall here in the introduction the framework in which the conjecture in [@V1] and the present work originated. In [@W2] it was discovered that the scattering amplitudes of $ \cal{N}$ $=4$ $SUSY$ $YM$ at weak coupling can be computed in terms of the vertex operators of the topological $B$-model on a supersymmetric version, $CP^{3|4}$, of the twistor space $CP^3$ of the four-sphere $S^4$ [@Hit; @X], which is the conformal compactification of the space-time $R^4$. The supersymmetric version, $CP^{3|4}$, of the twistor space $CP^3$ is crucial in order to satisfy the Calabi-Yau condition, necessary for the quantum consistency of the $B$-model [@W2]. As a consequence the $B$-model construction is strongly rigid and essentially applies uniquely to the $ \cal{N}$ $=4$ $SUSY$ $YM$ theory. In the $B$-model description of [@W2] scattering amplitudes arise from the contributions of certain $D1$-branes of the $B$-model. 
In [@V1] it was conjectured that there is a $S$-dual description of the $B$-model theory, given by a twistorial $A$-model on the same Calabi-Yau super-manifold, in which the scattering amplitudes may arise by means of the world-sheet instantons of the $A$-model instead of the $D1$-branes of the $B$-model. The occurrence of the $A$-model was advocated, among other reasons, because it is less rigid under deformations than the $B$-model [@V1]. Indeed the $A$-model does not need, for its quantum consistency, to be defined on a Calabi-Yau manifold. In fact it was later conjectured in [@V2] that a version of the $A$-model on the non-supersymmetric twistor space $CP^3$, obtained by coupling the $A$-model to certain charged operators, would be equivalent to the full sector of non-supersymmetric $YM$ theory in the large-$N$ limit. As far as the scattering amplitudes of the $ \cal{N}$ $=4$ $SUSY$ $YM$ theory are concerned, the conjecture in [@V1] encounters the difficulty that the cohomology ring of the $A$-model on twistor space, contrary to the one of the $B$-model, does not seem to support continuous fields, in particular the twistor image of plane waves. Therefore the $A$-model does not seem to have enough room to define scattering amplitudes. This difficulty was already pointed out in [@V1] and in [@W2], and in fact it occurs also in the non-supersymmetric version proposed in [@V2]. In this paper we suggest a partial solution of this difficulty, in the case of the pure $YM$ theory, by means of the Eguchi-Kawai ($EK$) large-$N$ reduction [@EK; @Neu; @Twc; @DN]. Indeed the $EK$ reduction allows us to reabsorb some continuous translational degrees of freedom into the internal color space at $ N=\infty $ (for a quick review see [@Mak]). Thus we argue that the cohomology of the $A$-model at $ N=\infty $ may support the translations and therefore some continuous field. 
Technically the $EK$ reduction is realized in the $A$-model by a background $B$-field, which implies a non-commutative gauge theory in target space [@SW], to which the large-$N$ $EK$ reduction applies. However, in this paper we do not attempt to define any $A$-model $S$-matrix for the large-$N$ $YM$ theory, but, as a first step, we limit ourselves to the computation of the beta function of the $A$-model effective action. We outline now some physical motivations for our construction. The open strings of the $A$-model end on a Lagrangian submanifold [@W1]. Thus the Lagrangian submanifold supports a gauge theory in target space. If $N$ $D$-branes wrap around the Lagrangian submanifold, the gauge theory has gauge group $U(N)$. This is taken into account, in the world-sheet language, by Chan-Paton factors in some representation of $U(N)$. We choose the fundamental representation. As is well known [@W1], the $A$-model effective action is a $U(N)$ Chern-Simons ($CS$) theory defined on a Lagrangian submanifold, modified by the insertion of Wilson loops that arise from the world-sheet instanton corrections of the $A$-model. A Lagrangian submanifold in twistor space is three-dimensional, so that its twistor projection to the space-time base is two-dimensional. Thus the effective $CS$ gauge theory contains only two-dimensional information about the physical space-time. The only way to recover the four-dimensional information is by means of a large-$N$ partial Eguchi-Kawai reduction from four to two dimensions. This reduction is obtained in the large-$N$ limit by making two of the four coordinates non-commutative in target space. Indeed it is known that the limit of infinite non-commutativity coincides with the large-$N$ limit [@Mak]. On the world-sheet this is equivalent to coupling the sigma-model to a certain $B$-field [@SW]. 
Notice that the target space of the sigma-model, the twistor space, has real dimension six, so that the twistor projection has real dimension four. Since the base of the Lagrangian submanifold has dimension two, only a partial $EK$ reduction is necessary. Hence, on the sigma-model world-sheet, a $B$-field is needed, whose components are non-vanishing only along some real two-cycle of the four-dimensional base of the twistor fibration. In our construction the two-dimensional base of the Lagrangian submanifold, contrary to the two-cycle over which the $B$-field lives, has to be embedded, as much as possible, “diagonally” in the four-dimensional space-time. This has to do with the special sector of observables to which our $A$-model is supposed to be related on the $YM$ side in the large-$N$ limit. These observables are quasi $BPS$ Wilson loops introduced in the $YM$ theory in [@MB1; @MB2]. These quasi $BPS$ Wilson loops are the analogs of supersymmetric Wilson loops in theories with extended supersymmetry and are described in sect.2. One of their features is that they are supported on a topological sphere “diagonally” embedded in four-dimensional space-time. In fact this “diagonally” embedded sphere turns out to be the double cover of the base of the Lagrangian submanifold in twistor space. For these quasi $BPS$ Wilson loops of the pure $YM$ theory some localization properties hold [@MB1] that are analogues of those of their supersymmetric cousins [@P]. In particular they are localized on certain vortex equations of the non-commutative $EK$ reduction. It is precisely at this stage that the conjecture in [@V1] plays a role in this paper. The world-sheet instantons of the $A$-model advocated in [@V1] are related, via their contribution to the $CS$ effective action, to the vortices that arise in the localization of the loop equation for the quasi $BPS$ Wilson loops in [@MB1]. In turn these vortices are essential to get the $YM$ beta function, as explained in sect.2. 
The localization in [@MB1] on the $YM$ side arises by homological methods, as opposed to the usual cohomological localization that occurs in twisted supersymmetric gauge theories [@P]. The need of homological methods, to get localization on the $YM$ side, is due to the lack of supersymmetry of the pure $YM$ theory and to the employment of a new form [@MB1] of the loop equation [@MM], which has of course a non-local nature. In this new form of the loop equation a key role is played by the conformal anomaly, somehow in analogy [@MB2] with the role that the chiral anomaly plays in the loop equation for the chiral ring of $ \cal{N}$ $=1 $ $SUSY$ gauge theories [@DV; @SW2]. Yet, on the $A$-model side, we get of course a cohomological theory. Therefore, loosely speaking, we may consider the cohomological localization [@W1] of our proposed twistorial topological $A$-model as the counterpart, obtained by means of gauge fields/string duality [@W1], of the homological localization that occurs in the quasi $BPS$ sector of the large-$N$ pure $YM$ theory. The plan of the paper is as follows. In sect.2 we describe the $YM$ side of our correspondence and we recall the localization of quasi $BPS$ Wilson loops and the computation of the large-$N$ $YM$ beta function, to make this paper as self-contained as possible. In sect.3 we describe the geometry of our $A$-model from a world-sheet perspective. In particular we define explicitly $TW(CP^2)$ and we recall that it admits an almost complex structure with vanishing first Chern class. We describe also our version, on the Lagrangian submanifold of twistor space, of the Penrose-Ward twistor construction (see [@T]) of solutions of the $SD$ $YM$ equations, which is essential to our approach. In sect.4 we recall the target-space effective action of the $A$-model and we compute, in our twistorial case, its beta function, finding agreement with the $YM$ computation. In sect.5 we present our conclusions. 
Quasi $BPS$ Wilson loops and beta function on the Yang-Mills side ================================================================= In this section we summarize the localization argument in [@MB1], to which, for reasons of space, we refer for more details and references. In [@MB1] it was found that in the large-$N$ limit there exists a sector of Wilson loops of the pure $YM$ theory, referred to as the quasi $BPS$ sector, that possesses special non-renormalization properties, i.e. no perimeter divergence and no cusp anomaly for backtracking cusps, somewhat analogous to the sector of $BPS$ (i.e. supersymmetric) Wilson loops of theories with extended supersymmetry. The quasi $BPS$ Wilson loops of the pure $YM$ theory are defined on the basis of the analogy with the following supersymmetric Wilson loops of theories with extended supersymmetry: $$Tr \, W(BPS)=Tr \, P \exp i \int_{C} \left( A_{a}\, dx_{a}(s)+i\, \Phi_{b}\, dy_{b}(s) \right)$$ satisfying the local $BPS$ constraint: $$\sum_{a} \dot x^2_{a}(s)-\sum_{b} \dot y^2_{b}(s)=0 \: .$$ Since scalar fields are absent in the pure $YM$ theory, it has been proposed in [@MB1; @MB2] that the correct analog of Eq.(1) is: $$Tr \, W(\textup{quasi-}BPS)=Tr \, P \exp i \int_{C}\left( A_z \, dz + A_{\bar z}\, d{\bar z}+ D_{u}\, du + D_{\bar u}\, d{\bar u} \right)$$ ($D_u=\partial_{u}+iA_{u}$ is the covariant derivative along the $u$ direction) with the quasi $BPS$ condition: $$dz(s)=du(s) \ , \qquad d\bar z(s)=d\bar u(s) \: .$$ Because of the quasi $BPS$ condition this Wilson loop can also be interpreted as the holonomy of the non-hermitean connection: $$A+D=(A_z+D_u)\, dz+(A_{\bar z}+ D_{\bar u})\, d \bar z$$ defined on a Riemann surface diagonally embedded in four-dimensional non-commutative space-time, $ R^2 \times R^2_{\theta} $, in the limit of infinite non-commutativity, which is indeed equivalent to the commutative large-$N$ limit [@Mak]. In [@MB1; @MB2] $A+D$ was denoted by $B$. 
Here, to agree with a universal convention, we reserve the symbol $B$ for the field of the $EK$ reduction that in the thermodynamic limit coincides with the background $B$-field of the $A$-model sigma-model. Notice that the definition in Eq.(5) makes sense also in the case that the two $R^2$ factors do not have the same non-commutativity, as in our case. However, in this case, the “diagonal” embedding needs to be interpreted as follows. There is a partial Eguchi-Kawai reduction [@Mak] from four to two dimensions. In such a reduction the $u$ dependence of the fields is in fact reabsorbed into the internal color space, while the fields remain $z$ dependent. Technically this can be obtained by representing the non-commutative coordinates and derivatives as creation and annihilation operators acting on the internal color (Hilbert) space [@Mak; @DN]. The curvature of the two-dimensional non-hermitean connection $A+D$ is a linear combination of only the $ASD$ part of the four-dimensional curvature. All the considerations in [@MB1] apply in fact also to Wilson loops defined by the conjugate diagonal embedding: $$dz(s)=d\bar u(s) \ , \qquad d\bar z(s)=du(s)$$ that corresponds to quasi $BPS$ Wilson loops whose connection is given by: $$A+ \bar D=(A_z+D_{\bar u})\, dz+(A_{\bar z}+ D_{u})\, d \bar z$$ and whose curvature is of $SD$ type. The conjugate diagonal embedding is important in this paper because it is Lagrangian in the four-dimensional space-time and can be lifted to a Lagrangian embedding in twistor space. This is not the case, instead, for the diagonal embedding, since it is not Lagrangian in space-time. It was argued in [@MB1] that the quasi $BPS$ Wilson loops have no perimeter divergence and no cusp anomaly (for backtracking cusps) in the limit $B \rightarrow 0$, which coincides with the large-$N$ limit [@Mak]. 
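The operator representation of the non-commutative coordinates can be made concrete in a truncated Fock basis. The following numerical sketch (an illustration added here, not taken from the paper; the truncation size and the value of $\theta$ are arbitrary) checks that coordinates built from $a$, $a^{\dagger}$ reproduce the Heisenberg commutation relation $[x_1, x_2] = i\theta$ away from the cutoff corner:

```python
import numpy as np

N = 40          # truncation of the Fock basis modelling the internal color space
theta = 2.0     # non-commutativity parameter (illustrative value)

# annihilation operator in the truncated Fock basis: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
adag = a.T

# two non-commutative coordinates built from a, a^dag
x1 = np.sqrt(theta / 2) * (a + adag)
x2 = np.sqrt(theta / 2) * 1j * (adag - a)

# equals i*theta*Id except for the unavoidable truncation artifact
# in the last diagonal entry
comm = x1 @ x2 - x2 @ x1
```

In the infinite-dimensional limit the artifact disappears, which is the sense in which translations along the non-commutative directions are absorbed into the internal space.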
In addition it is easy to see that they are in fact trivial to the lowest order in perturbation theory in this limit, because of the cancellation between the propagator of $A_z$ and of $A_u$, due to the factor of $i$ that occurs in front of $A_u$ in the covariant derivative. This cancellation is the pure-$YM$ analogue of the cancellation that occurs for $BPS$ Wilson loops in $SUSY$ gauge theories, between the gauge-field and the scalar propagators, due to the factor of $i$ in front of the scalar field in Eq.(1) and to the $BPS$ condition in Eq.(2). It is not known whether the quasi $BPS$ Wilson loops of large-$N$ $YM$ are trivial to all orders in perturbation theory or whether they are non-trivial non-perturbatively. The homological localization of quasi $BPS$ Wilson loops involves the use of the zig-zag symmetry of the Wilson loops and a new form of the loop equation, which is obtained by changing variables from the connection to the $ASD$ part of the curvature. The zig-zag symmetry is exploited by drawing backtracking strings that start at a marked point of the loop and end in cusps at infinity. Adding a backtracking string does not change the holonomy class of a Wilson loop, in the same way that adding a co-boundary does not change the cohomology class of a closed differential form. This, together with the fact that in the large-$N$ limit quasi $BPS$ Wilson loops have no cusp anomaly for backtracking cusps and no perimeter divergence, is the starting point of the homological localization pursued in [@MB1]. Essentially the zig-zag symmetry is used to show that the right-hand side of the loop equation vanishes at a backtracking cusp. Then the left-hand side reduces to the critical equation for an effective action. To complete the argument every marked point of the loop must be mapped into a cusp at infinity. This is achieved by a local conformal transformation on the surface over which the Wilson loop lives, after introducing a lattice regularization. 
This two-dimensional conformal change lifts to a four-dimensional conformal transformation, because of the diagonal (or conjugate diagonal) embedding of the quasi $BPS$ Wilson loop in space-time. Thus the effective action changes by the conformal anomaly, which is a local counter-term and thus does not change the renormalization group flow, but only the position of the subtraction point [@MB1]. To pursue the analogy with the usual cohomological localization, the invariance of the renormalization group flow under a transformation that adds to the loop a backtracking cusp is the analog of the property that the action is a closed form. Indeed it can also be described as the property that the action is invariant under the transformation that generates a co-boundary, since the action is annihilated by a $BRST$ differential in the cohomological case. Its homological counterpart is the invariance of the flow of the renormalized effective action under the addition of a boundary, in the sense that adding to a quasi $BPS$ Wilson loop a backtracking string can be interpreted as a local change of the conformal structure of the surface over which the Wilson loop lives. The main technical point, needed to get the new loop equation, is a change of variables in the $YM$ functional integral from the connection to the $ASD$ part of the curvature. We start with the large-$N$ $YM$ theory defined on $R^2 \times R_{\theta}^2$ in the limit of infinite non-commutativity $\theta$, which is known to reproduce the ordinary commutative large-$N$ limit: $$\begin{aligned} Z&=\int \exp\left(- \frac{N}{2g^2} \sum_{\alpha \beta} \int Tr_f (F_{\alpha \beta}^2) \, d^4x\right) DA\\ &=\int \exp\left(- \frac{8\pi^2 N}{g^2}\, Q- \frac{N}{g^2} \sum_{\alpha \beta} \int Tr_f(F^{-2}_{\alpha \beta}) \, d^4x\right) DA \: .\end{aligned}$$ In the second line the classical action is conveniently rewritten as the sum of a topological and a purely $ASD$ term. 
The topological term $Q$ is the second Chern class, given by: $$Q= \frac{1}{16\pi^2} \sum_{\alpha \beta} \int Tr_f (F_{\alpha \beta}\, \tilde F_{\alpha \beta}) \, d^4x$$ while the $ASD$ curvature $F^-_{\alpha \beta}$ is defined by: $$F^-_{\alpha \beta}=\frac{1}{2}\left(F_{\alpha \beta}- \tilde F_{\alpha \beta}\right) \ , \qquad \tilde F_{\alpha \beta} = \frac{1}{2}\, \epsilon_{\alpha \beta \gamma \delta}\, F_{\gamma \delta} \: .$$ Introducing the projectors, $P^-$ and $P^+$, the curvature can be decomposed into its $ASD$ and $SD$ components: $$F_{\alpha \beta}=P^-F_{\alpha \beta}+P^+ F_{\alpha \beta} = F^-_{\alpha \beta}+F^+_{\alpha \beta} \: .$$ We change variables from the connection to the $ASD$ curvature, introducing in the functional integral the appropriate resolution of identity: $$1= \int \delta\left(F^{-}_{\alpha \beta}-\mu^{-}_{\alpha \beta}\right) D\mu^{-}_{\alpha \beta} \: .$$ The partition function thus becomes: $$Z=\int \exp\left(- \frac{8\pi^2 N}{g^2}\, Q- \frac{N}{g^2} \sum_{\alpha \beta} \int Tr(\mu^{-2}_{\alpha \beta}) \, d^4x\right) \delta\left(F^{-}_{\alpha \beta}-\mu^{-}_{\alpha \beta}\right) D\mu^{-}_{\alpha \beta}\, DA \: .$$ We can write the partition function in the new form: $$Z=\int \exp\left(- \frac{8\pi^2 N}{g^2}\, Q- \frac{N}{g^2} \sum_{\alpha \beta} \int Tr(\mu^{-2}_{\alpha \beta}) \, d^4x\right) Det'^{-\frac{1}{2}}\left(-\Delta_A \delta_{\alpha \beta} + D_{\alpha} D_{\beta} +i\, ad_{\mu^-_{\alpha \beta}} \right) D\mu^{-}_{\alpha \beta}$$ where the integral over the gauge connection of the delta-function has now been explicitly performed: $$\begin{aligned} \int DA_{\alpha}\, \delta\left(F^-_{\alpha \beta}- \mu^-_{\alpha \beta}\right)&= |Det'^{-1}(P^- d_A )|\\ &=Det'^{-\frac{1}{2}} \left((P^- d_A )^*(P^- d_A )\right)\\ &=Det'^{-\frac{1}{2}}\left(-\Delta_A \delta_{\alpha \beta} + D_{\alpha} D_{\beta} +i\, ad_{ F^-_{\alpha \beta}} \right)\end{aligned}$$ and, by an abuse of notation, the connection $A$ in the determinants denotes the solution of the equation $F^-_{\alpha \beta}- \mu ^-_{\alpha \beta}=0$. The $ ' $ superscript requires projecting away from the determinants the zero modes due to gauge invariance, since gauge fixing is not yet implied, though it may be understood if we like. We refer to the determinant in the preceding equation as the localization determinant, because it arises from localizing the gauge connection on a given level, $\mu ^-_{\alpha \beta}$, of the $ASD$ curvature. 
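The decomposition of the curvature into $SD$ and $ASD$ parts can be illustrated numerically. The sketch below (an illustration, assuming the Euclidean convention $F^{\mp}=\frac{1}{2}(F \mp \tilde F)$ for the projected components, applied to a generic antisymmetric matrix standing in for the curvature) checks that the two projections are eigenstates of the duality operation and that their norms add up without a cross term:

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    """Parity of a permutation, computed by sorting with transpositions."""
    p, s = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

# Levi-Civita tensor in four Euclidean dimensions
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = perm_sign(p)

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
F = M - M.T                                   # generic antisymmetric "curvature"

Fdual = 0.5 * np.einsum('abcd,cd->ab', eps, F)  # the dual F~
Fminus = 0.5 * (F - Fdual)                      # ASD component P^- F
Fplus = 0.5 * (F + Fdual)                       # SD component  P^+ F
```

Since the duality operation squares to the identity in Euclidean signature, the dual of `Fminus` is `-Fminus`, the dual of `Fplus` is `+Fplus`, and the quadratic form splits as in the rewriting of the action above.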
We can interpret the $ASD$ relations: $$F^-_{\alpha \beta}- \mu ^-_{\alpha \beta}=0$$ as an equation for the curvature of the non-Hermitian connection $A+D=(A_z+D_u) dz+(A_{\bar z}+ D_{\bar u}) d \bar z$ and a harmonic condition for the Higgs field $\Psi=-iD=-i( D_u dz+D_{\bar u} d \bar z )= \psi+\bar \psi$: $$\begin{aligned}
F_{A+D} - \mu&=0 \\
\bar F_{A+D} - \bar \mu&=0 \\
d^*_A \Psi - \nu&=0\end{aligned}$$ that can also be written as: $$\begin{aligned}
F_{A+D} - \mu&=0 \\
\bar \partial_A \psi - n&=0 \\
\partial_{A} \bar \psi- \bar n&=0\end{aligned}$$ where the fields $\mu, \nu, n$ are suitable linear combinations of the $ASD$ components $\mu ^-_{\alpha \beta}$. The resolution of identity in the functional integral then reads: $$1=\int \delta(F_{A+D} - \mu)\, \delta(\bar \partial_A \psi - n)\, \delta(\partial_{A} \bar \psi- \bar n)\, D\mu\, Dn\, D \bar n$$ where the measure $D \mu$ is interpreted in the sense of holomorphic matrix models employed in the study of the chiral ring of $ \cal {N}$ $=1$ $SUSY$ gauge theories [@DV; @SW2]. The holomorphic gauge is defined by the change of variables for the connection $A+D$ in which the curvature of $A+D$ is given by the field $\mu'$, obtained from the equation: $$F_{A+D} - \mu=0$$ by means of a complexified gauge transformation $G(x;A+D)$ that puts $A+D=b+ \bar b$ in the gauge $\bar b=0$: $$\bar \partial b_z=-i\mu'$$ where $\mu'=G \mu G^{-1}$. Employing Eq.(19) as a resolution of identity in the functional integral, the partition function becomes: $$Z= \int \delta(F_{A+D} - \mu)\, \delta(\bar \partial_A \psi - n)\, \delta(\partial_{A} \bar \psi- \bar n)\, \exp(-S_{YM})\, Db\, D \bar b\, D\mu'\, Dn\, D \bar n$$ The integral over $(b, \bar b)$ is the same as the integral over the four $A_{\alpha}$. The resulting functional determinants, together with the Jacobian of the change of variables to the holomorphic gauge, are absorbed into the definition of $\Gamma$. $\Gamma$ plays here the role of a classical action, since it must still be integrated over the fields $\mu', n, \bar n$. 
$\Gamma$ is given by: $$\Gamma= \frac{8\pi^2 N}{g^2}\, Q + \frac{N}{g^2} \int Tr_f(F^{-2}_{01}+F^{-2}_{02}+F^{-2}_{03} )\, d^4x + \log Det'^{-\frac{1}{2}}(-\Delta_A \delta_{\alpha \beta} + D_\alpha D_\beta +i\, ad_{\mu^-_{\alpha \beta}}) -\log \Big|\frac{\delta \mu'}{\delta \mu}\Big|$$ with: $$\begin{aligned}
\mu^0&=F^-_{01} \\
n+\bar n&=F^-_{02} \\
i(n-\bar n)&=F^-_{03}\end{aligned}$$ Although $\Gamma$ is the classical action in the $ASD$ variables, it already contains quantum corrections because of the Jacobian of the change of variables. It turns out that its divergent part coincides with the divergent part of the Wilsonean localized quantum effective action, after the inclusion of zero modes. Until now the theory is still four-dimensional. Taking functional derivatives with respect to the $ASD$ field we get, for a planar quasi $BPS$ loop, the loop equation: $$\begin{aligned}
0&=\int D\mu'\, \frac{\delta}{\delta \mu'(x)}\Big(\exp(-\Gamma)\, Tr\,\Phi(x,x;b)\Big)\\
&= \int D\mu'\, \exp(-\Gamma) \Big(-Tr\Big( \frac{\delta \Gamma}{\delta \mu'(x)}\, \Phi(x,x;b)\Big)\\
&\quad+\oint_{C(x,x)} dy_z\, \delta^{(2)}(0)\, \bar \partial^{-1}(w-y)\, Tr\big(\lambda^a\, \Phi(x,y;b)\, \lambda^a\, \Phi(y,x;b)\big) \Big)\\
&=\int D\mu'\, \exp(-\Gamma) \Big(-Tr\Big( \frac{\delta \Gamma}{\delta \mu'(x)}\, \Phi(x,x;b)\Big)\\
&\quad+ \oint_{C(x,x)} dy_z\, \delta^{(2)}(0)\, \bar \partial^{-1}(w-y)\Big(Tr\,\Phi(x,y;b)\; Tr\,\Phi(y,x;b)\\
&\quad- \frac{1}{N}\, Tr\big( \Phi(x,y;b)\, \Phi(y,x;b)\big)\Big)\Big)\end{aligned}$$ where in our notation we have omitted the integrations $D n\, D \bar n$, since they are irrelevant in the loop equation, because the curvature of $A+D$ depends only on $\mu$. In the large-$N$ limit it reduces to: $$\Big\langle Tr\Big( \frac{\delta \Gamma}{\delta \mu'(x)}\, \Phi(x,x;b)\Big)\Big\rangle = \oint_{C(x,x)} dy_z\, \delta^{(2)}(0)\, \bar \partial^{-1}(w-y)\, \big\langle Tr\,\Phi(x,y;b)\big\rangle\, \big\langle Tr\,\Phi(y,x;b)\big\rangle$$ Now it is natural to perform a partial $EK$ reduction from four to two dimensions. Let us describe what in fact the partial $EK$ reduction means in this context. We already observed that we can absorb the translations into a gauge transformation in the four-dimensional non-commutative theory along the two non-commutative directions [@DN]. As a result the classical action looks two-dimensional, in the sense that the space-time dependence of the fields is two-dimensional, despite the fact that the theory is truly four-dimensional. The four-dimensional information is hidden in the central extension $B=\frac{2\pi}{\theta}$ that shows up in the curvature due to non-commutativity [@DN]. 
Now we use the fact that in the non-commutative theory the integral over the non-commutative directions can be represented as a (color) trace [@DN]: $$\int d^2u = 2 \pi \theta\; Tr$$ Hence the non-commutative classical action of the reduced theory gets a volume factor of $V_2=2\pi \theta=\frac{2\pi}{B}$ because of the gauge choice. The equation of motion of the $EK$ reduced theory is therefore multiplied by this volume factor. We can divide both sides of the loop equation by this volume factor in the reduced theory, in such a way that the equation of motion is normalized as in the four-dimensional theory. Then the inverse volume appears in the right hand side instead of the factor $\delta^{(2)}(0)$. We can compensate this fact by rescaling the classical action by a factor of $N_2^{-1}$ [@Mak; @Kaw], with $N_2=V_2 \delta^{(2)}(0)$, in such a way that the factor of $\frac{V_2}{N_2}$ in the reduced classical action produces the factor of $\delta^{(2)}(0)= \frac{N_2}{V_2}$ once carried to the right hand side of the loop equation. Of course in all this discussion we are implicitly assuming that the trace of the reduced theory now includes the non-commutative degrees of freedom. Thus the classical action of the $EK$ reduced theory in the $ASD$ variables is given by: $$\Gamma= \frac{8\pi^2 N}{g^2}\, Q + \frac{N}{g^2} \int Tr_f(F^{-2}_{01}+F^{-2}_{02}+F^{-2}_{03} )\, d^2x + \log Det'^{-\frac{1}{2}}(-\Delta_A \delta_{\alpha \beta} + D_\alpha D_\beta +i\, ad_{\mu^-_{\alpha \beta}}) -\log \Big|\frac{\delta \mu'}{\delta \mu}\Big|$$ where the trace in the functional determinants has to be interpreted coherently with the partial $EK$ reduction. The new loop equation is then: $$\Big\langle Tr\Big( \frac{\delta \Gamma}{\delta \mu'(x)}\, \Phi(x,x;b)\Big)\Big\rangle = \oint_{C(x,x)} dy_z\, \bar \partial^{-1}(w-y)\, \big\langle Tr\,\Phi(x,y;b)\big\rangle\, \big\langle Tr\,\Phi(y,x;b)\big\rangle$$ This new loop equation, a version of the usual Makeenko-Migdal loop equation [@MM; @MM1], holds for the $YM$ theory after the $EK$ reduction from four to two dimensions in the holomorphic gauge, in which the connection $A+D$ is gauge equivalent to $b$ and its curvature is gauge equivalent to $\mu'$. 
After this reduction it is convenient to perform a conformal compactification, in such a way that the theory is now defined over a two-sphere $S^2$, in order to get a nice moduli problem for the gauge fields. Before the $EK$ reduction, in the four-dimensional Euclidean theory, this would amount to a compactification from $R^4$ to $S^2 \times S^2$. In the four-dimensional theory in ultra-hyperbolic $(2,2)$ signature the conformal compactification would amount instead to compactifying to $\frac{S^2 \times S^2}{Z_2}$, where the $Z_2$ acts by the antipodal map on both $S^2$. After the $EK$ reduction the theory is thus defined on $S^2/Z_2$ in Minkowskian $(2,2)$ signature. The conformal compactification adds to the effective action at most a finite conformal anomaly, that can be ignored. It is clear that the contour integration in the right hand side of the loop equation includes the pole of the Cauchy kernel. We need therefore a gauge invariant regularization. The natural choice consists in analytically continuing the loop equation from Euclidean to Minkowskian space-time (with ultra-hyperbolic signature, if we must keep the gauge group to be $U(N)$ in the $ASD$ equations). Thus $z \rightarrow i(x_+ + i \epsilon)$. This regularization has the great virtue of being manifestly gauge invariant. In addition this regularization is not loop dependent. The result of the $i \epsilon$ regularization of the Cauchy kernel is the sum of two distributions, the principal part plus a one-dimensional delta-function: $$\bar \partial^{-1}(w_x -y_x +i\epsilon)= (2 \pi)^{-1} \big(P(w_x -y_x)^{-1} - i \pi \delta(w_x -y_x)\big)$$ The loop equation thus regularized looks like: $$\begin{aligned}
\Big\langle Tr\Big( \frac{\delta \Gamma}{\delta \mu'(x)}\, \Phi(x,x;b)\Big)\Big\rangle&=\oint_{C(x,x)} dy_x\, (2 \pi)^{-1} \big(P(w_x -y_x)^{-1} - i \pi \delta(w_x -y_x)\big)\\
&\quad \big\langle Tr\,\Phi(x,y;b)\big\rangle\, \big\langle Tr\,\Phi(y,x;b)\big\rangle\end{aligned}$$ The right hand side of the loop equation now contains two contributions: a delta-like one-dimensional contact term, supported on closed loops, and a principal part distribution, supported on open loops. 
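The $i\epsilon$ split of the Cauchy kernel is the standard Sokhotski-Plemelj decomposition; a minimal numerical check (test function, grid and $\epsilon$ are illustrative choices, not taken from the text) that $1/(x+i\epsilon)$ tends to $P(1/x)-i\pi\delta(x)$:

```python
import numpy as np

# Sokhotski-Plemelj check: 1/(x + i*eps) -> P(1/x) - i*pi*delta(x) as eps -> 0.
# For the even test function f(x) = exp(-x^2) the principal-value part vanishes
# by parity, so the integral should approach -i*pi*f(0) = -i*pi.
eps = 1e-2
x = np.linspace(-50.0, 50.0, 1_000_001)
dx = x[1] - x[0]
f = np.exp(-x**2)
integral = np.sum(f / (x + 1j * eps)) * dx

assert abs(integral.real) < 1e-3           # principal value cancels by parity
assert abs(integral.imag + np.pi) < 0.05   # delta term gives -i*pi*f(0)
```

The real part isolates the principal-value distribution and the imaginary part the one-dimensional delta, mirroring the split used to regularize the loop equation.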
Since by gauge invariance it is consistent to assume that the expectation value of open loops vanishes, the principal part does not contribute and the loop equation reduces to: $$\Big\langle Tr\Big( \frac{\delta \Gamma}{\delta \mu'(x)}\, \Phi(x,x;b)\Big)\Big\rangle= -\frac{i}{2}\oint_{C(x,x)} dy_{x}\, \delta(w_x -y_x)\, \big\langle Tr\,\Phi(x,y;b)\big\rangle\, \big\langle Tr\,\Phi(y,x;b)\big\rangle$$ Taking $w=x$ and using the transformation properties of the holonomy of $b$ and of $\mu'(x)$, the preceding equation can be rewritten in terms of the connection, $A+D$, and the curvature, $\mu$: $$\Big\langle Tr\Big( \frac{\delta \Gamma}{\delta \mu(x)}\, \Phi(x,x;A+D)\Big)\Big\rangle= -\frac{i}{2}\oint_{C(x,x)} dy_{x}\, \delta(x_x -y_x)\, \big\langle Tr\,\Phi(x,y;A+D)\big\rangle\, \big\langle Tr\,\Phi(y,x;A+D)\big\rangle$$ where we have used the condition that the trace of open loops vanishes to substitute the $b$ holonomy with the $A+D$ holonomy. We need a lattice version of the continuum loop equation to implement our localization argument. Thus we write the loop equation in the $ASD$ variables on a lattice in the partially $EK$ reduced theory. If we introduce a lattice, the delta-functional constraint in Eq.(16-19) becomes, after the partial $EK$ reduction: $$\begin{aligned}
&[ D_{z}, D_{\bar z}]- [D_u , D_{\bar u}]- i\sum_{p} \mu^0_p\, \delta^{(2)}(x-x_p)+i B 1=0 \\
&[ D_{\bar z }, D_u ]- i\sum_{p} n_p\, \delta^{(2)}(x-x_p)=0 \\
&[D_{z} , D_{\bar u}]-i\sum_{p} \bar n_p\, \delta^{(2)}(x-x_p)=0\end{aligned}$$ This system of equations defines a central extension of parabolic Higgs bundles on a sphere, in which the role of the Higgs field $\Psi$ is played by $-iD$. Parabolic Higgs bundles have been introduced also in [@W4], in their study of the ramified Langlands conjecture. Since the Higgs field, $-iD$, acts on the infinite dimensional Hilbert space of a non-commutative $R^2$, the curvature equation involves a central term, $B=\frac{2 \pi}{\theta}$, that we have displayed explicitly. In the case $n_p=\bar n_p=0$, which is the most relevant for us, we may interpret the preceding equations as vortex equations. In the partially $EK$ reduced theory, the vortices live at the lattice points, where the $ASD$ curvature is singular. 
This means that in the original four-dimensional theory they form two-dimensional vortex sheets. Codimension-two singularities of this kind occur also in [@W4], but without the non-splitting central extension in the curvature. The loop equation on our lattice now reads: $$\Big\langle Tr\Big( \frac{\delta \Gamma}{\delta \mu(x_p)}\, \Phi(x_p,x_p;A+D)\Big)\Big\rangle= \oint_{C(x_p,x_p)} dy_z\, \bar \partial^{-1}(x_p-y)\, \big\langle Tr\,\Phi(x_p,y;A+D)\big\rangle\, \big\langle Tr\,\Phi(y,x_p;A+D)\big\rangle$$ and correspondingly for the analytic continuation to Minkowskian space-time: $$\begin{aligned}
\Big\langle Tr\Big( \frac{\delta \Gamma}{\delta \mu(x_q)}\, \Phi(x_q,x_q;A+D)\Big)\Big\rangle&=\oint_{C(x_q,x_q)} dy_x\, (2 \pi)^{-1} \big(P(w_{x_q} -y_x)^{-1} - i \pi \delta(w_{x_q} -y_x)\big)\\
&\quad \big\langle Tr\,\Phi(x_q,y;A+D)\big\rangle\, \big\langle Tr\,\Phi(y,x_q;A+D)\big\rangle\end{aligned}$$ Notice that a smooth marked point gives a non-trivial contribution to the right hand side, both in the continuum and on the lattice. However around a backtracking cusp the contributions of the two sides of the asymptotes to the cusp cancel each other for the contact term: $$\oint_{C(x_q,x_q)} dy_x(s)\, \delta\big(w_{x_q}(s_{cusp}) -y_x(s)\big)\propto \big(\dot w_{x_q}(s^+_{cusp})+ \dot w_{x_q}(s^-_{cusp})\big)=0$$ because of the opposite sign of $ \dot w_{x_q}(s^+_{cusp})$ and $ \dot w_{x_q}(s^-_{cusp})$ on the two sides of the backtracking cusp. For the principal part the same argument applies, because of the opposite orientations of the asymptotes and because both cusp asymptotes are approached either from below or from above: $$\Big|\oint_{C(x_q,x_q)} dy_x(s)\, P\big(w_{x_q}(s_{cusp}) -y_x(s)\big)^{-1}\Big|\propto \big|\dot w_{x_q}(s^+_{cusp})+ \dot w_{x_q}(s^-_{cusp})\big|=0$$ Thus if every marked point can be transformed into a backtracking cusp we can complete our argument about localization, since then the loop equation reduces to the equation of motion for the effective action in the left hand side. But this is precisely the effect of our lattice, since marked points contribute to the loop equation in the lattice theory only if they coincide with the lattice points. Thus we can simply draw backtracking strings from the loop to the lattice points and then map the lattice points conformally to infinity, in order to transform all the marked points into cusps, for which the right hand side of the loop equation vanishes. 
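The cancellation mechanism at a backtracking cusp can be illustrated numerically: along a path that retraces itself, the two passages through any point carry opposite tangents, so the contact term integrates to zero, while a single passage contributes one unit. The parametrization and the Gaussian smearing of the delta below are illustrative choices, not taken from the text:

```python
import numpy as np

def gaussian_delta(x, sigma=1e-3):
    # Smeared one-dimensional delta function.
    return np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def contact_term(y, ydot, s, w):
    # Discretized version of the contact integral: int ds ydot(s) delta(w - y(s)).
    return np.sum(ydot * gaussian_delta(w - y)) * (s[1] - s[0])

s = np.linspace(0.0, 2.0, 400001)
w = 0.5

# Backtracking string: goes out to 1 and retraces itself, tangents flip sign.
y_back = np.where(s <= 1.0, s, 2.0 - s)
ydot_back = np.where(s <= 1.0, 1.0, -1.0)

# Single smooth passage through w, for comparison.
y_once = s
ydot_once = np.ones_like(s)

assert abs(contact_term(y_back, ydot_back, s, w)) < 1e-6        # two sides cancel
assert abs(contact_term(y_once, ydot_once, s, w) - 1.0) < 1e-3  # one crossing survives
```

This is exactly the sign argument quoted above: the two contributions come with $\dot w(s^+_{cusp})$ and $\dot w(s^-_{cusp})$ of opposite sign.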
Doing so we change $\Gamma$ by the conformal anomaly, into $\Gamma_q$, the quantum effective action. Thus we may say that open strings solve the $YM$ loop equation for the quasi $BPS$ Wilson loops, in the sense that they localize the loop equation on a saddle-point of an effective action. As a consequence, the large-$N$ loop equation of the full four-dimensional Yang-Mills theory in the anti-self-dual variables ($ASD$) for the diagonally embedded quasi $BPS$ Wilson loops, whose connection is $A+D$, localizes on the moduli space of a central extension of parabolic Higgs bundles. In addition it was found in [@MB1] that the critical point in the loop equation actually corresponds to non-commutative vortices equations, reduced to two dimensions a la Eguchi-Kawai: $$\begin{aligned}
&[D_z, D_{\bar z}]- [D_u , D_{\bar u}]= i \sum_{p} g \lambda_p g^{-1}\, \delta^{(2)}(w-w_p)-i B 1 \\
&[D_{\bar z }, D_u]=0 \\
&[D_{z} , D_{\bar u}]=0\end{aligned}$$ It was found in [@MB1] that the vortices equations imply the correct beta function of the Yang-Mills theory in the large-$N$ limit [^3]. The beta function is extracted from the $YM$ effective action $\Gamma_q$ in the $ASD$ variables computed on the vortices. More precisely, the following result was found for the Wilsonean and canonical beta functions. There exists a renormalization scheme in which the large-$N$ canonical beta function of the pure $YM$ theory is given by: $$\frac{\partial g_c}{\partial \log \Lambda}= \frac{-\beta_0 g_c^3 + g_c^3\, \frac{\partial \log Z}{\partial \log \Lambda}}{1- \beta_J g_c^2}$$ with: $$\beta_0=\frac{1}{(4 \pi)^2}\frac{11}{3},\qquad \beta_J=\frac{4}{(4 \pi)^2}$$ where $g_c$ is the ’t Hooft canonical coupling constant and $ \frac{\partial \log Z}{\partial \log \Lambda} $ is computed to all orders in the ’t Hooft Wilsonean coupling constant, $g_W$, by a formula, given in [@MB1], that contains a scheme dependent arbitrary constant $c$. At the same time, the beta function for the ’t Hooft Wilsonean coupling is exactly one loop: $$\frac{\partial g_W}{\partial \log \Lambda}=-\beta_0 g_{W}^3$$ Once the result for $ \frac{\partial \log Z}{\partial \log \Lambda} $ to the lowest order in the canonical coupling $$\frac{\partial \log Z}{\partial \log \Lambda}= \frac{1}{(4 \pi)^2}\frac{10}{3}\, g_c^2 + \dots$$ 
is inserted in Eq.(40), it implies the correct values of the first and second perturbative coefficients of the beta function: $$\begin{aligned}
\frac{\partial g_c}{\partial \log \Lambda}&= -\beta_0 g_c^3+ \Big( \frac{1}{(4 \pi)^2}\frac{10}{3} -\beta_0 \beta_J\Big) g_c^5 +\dots\\
&=- \frac{1}{(4 \pi)^2}\frac{11}{3}\, g_c^3 + \frac{1}{(4 \pi)^4}\Big( \frac{10}{3} -\frac{44}{3}\Big) g_c^5 +\dots\\
&=- \frac{1}{(4 \pi)^2}\frac{11}{3}\, g_c^3 - \frac{1}{(4 \pi)^4}\frac{34}{3}\, g_c^5+\dots\end{aligned}$$ which are known to be universal, i.e. scheme independent. In addition it was argued in [@MB1] that there is a scheme in which the canonical coupling coincides with a certain definition of the physical effective charge in the inter-quark potential. In this scheme the beta function is determined by $\beta_0$ and by $\Lambda_W$, the $RG$ invariant scale in the Wilsonean scheme. The preceding formula compares favorably with numerical lattice computations for $SU(3)$. For completeness we outline here briefly the computation of the Wilsonean beta function, since it is related to the computation made in sect.4 of this paper. The effective action of the vortices is given by: $$\exp({-\Gamma_q})=\int \exp(-\Gamma)\; d(\text{zero-modes})$$ with $\Gamma$, in the partially reduced $EK$ theory, given by Eq.(28). The divergent part of $\Gamma$ is given by: $$\begin{aligned}
&\frac{N}{8g^2}\Big(1-\Big(2-\frac{1}{3}\Big) \frac{g^2}{(4 \pi)^2} \log\Big(\frac{\Lambda^2}{\mu^2}\Big)\Big) \int d^4x\, Tr_f \big(2(F^-_{\alpha \beta})^2\big)\\
&=\frac{N}{8g^2}\Big(1- \frac{5}{3}\, \frac{g^2}{(4 \pi)^2} \log\Big(\frac{\Lambda^2}{\mu^2}\Big)\Big) \int d^4x\, Tr_f \big(2(F^-_{\alpha \beta})^2\big)\\
&= \frac{N}{8g^2}\, Z^{-1} \int d^4x\, Tr_f \big(2(F^-_{\alpha \beta})^2\big)\end{aligned}$$ where $Z^{-1}$ is given by: $$Z^{-1}=1- \frac{5}{3}\, \frac{g_W^2}{(4 \pi)^2} \log\Big(\frac{\Lambda^2}{\mu^2}\Big)$$ and we have added to $g$ the subscript $_W$ to stress that our computation here refers to the Wilsonean coupling constant. We must add to this divergence the one due to the vortices zero modes, that are the moduli of the adjoint orbit. The zero modes divergence is due to the powers of the Pauli-Villars regulator that have to be inserted in the integral over the zero modes. For a $Z_N$ vortex of charge $k$ we get $N-k$ eigenvalues of the curvature $\lambda_p$ equal to $\frac{2 \pi k}{N}$ and $k$ eigenvalues equal to $\frac{2 \pi (k-N)}{N}$. The trace of the squared eigenvalues of the curvature in the fundamental representation is thus: $$(N-k) \Big(\frac{2 \pi k}{N}\Big)^2 + k \Big(\frac{2 \pi (k-N)}{N}\Big)^2 =(2 \pi)^2\, \frac{k(N-k)}{N}$$ Each $Z_N$ vortex carries a number of zero modes equal to the real dimension of the adjoint orbit $ g \lambda_p g^{-1}$. 
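A quick arithmetic check of the vortex counting above, together with the adjoint-orbit dimension formula used below ($\frac{1}{2}(N^2-\sum_i m_i^2)$ with multiplicities $(k,N-k)$) and the sum $2+\frac{5}{3}=\frac{11}{3}$ of the two divergences; the coefficients are as quoted in the text:

```python
from fractions import Fraction
from math import pi, isclose

# The 5/3 from the localization determinants and the 2 from the vortex
# zero modes add up to the one-loop coefficient 11/3.
assert Fraction(2) + Fraction(5, 3) == Fraction(11, 3)

def eigen_trace(N, k):
    # Z_N vortex of charge k: N-k eigenvalues 2*pi*k/N, k eigenvalues 2*pi*(k-N)/N.
    return (N - k) * (2 * pi * k / N) ** 2 + k * (2 * pi * (k - N) / N) ** 2

def orbit_dim(N, k):
    # Complex dimension of the adjoint orbit with eigenvalue multiplicities (k, N-k).
    return (N**2 - k**2 - (N - k) ** 2) // 2

for N in range(2, 12):
    for k in range(1, N):
        assert isclose(eigen_trace(N, k), (2 * pi) ** 2 * k * (N - k) / N)
        assert orbit_dim(N, k) == k * (N - k)
```

Both closed forms, $(2\pi)^2 k(N-k)/N$ for the trace of the squared eigenvalues and $k(N-k)$ for the orbit dimension, hold identically in $N$ and $k$.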
However it must be taken into account that the loop equation localizes only after analytic continuation to $(2,2)$ signature, since this analytic continuation is needed to regularize the loop equation in a gauge invariant way. This imposes some global constraint on the vortex solution. In $(2,2)$ signature the conformal compactification of space-time is $S^2 \times S^2/Z_2$, where the $Z_2$ acts by the antipodal involution on both $S^2$. Its double cover is $S^2 \times S^2$. Thus a vortex solution on Euclidean $S^2 \times S^2$ extends, after analytic continuation to Minkowski, to a vortex solution on $S^2 \times S^2/Z_2$ only if vortices on $S^2 \times S^2$ come in pairs identified by the antipodal involution. On the double cover the number of vortices is doubled, so that the action is doubled, but the number of zero modes is not, because the vortex adjoint orbits must be pairwise identified. Thus the number of real zero modes per vortex is halved, i.e. it equals the number of complex zero modes. Thus it is equal to the complex dimension of the orbit. We do not include zero modes associated to translations of the vortices, since their contribution is subleading in $\frac{1}{N}$. The complex dimension of an adjoint orbit of $U(N)$ is given by: $$dim= \frac{1}{2}\Big(N^2-\sum_i m_i^2\Big)$$ where $m_i$ are the multiplicities of the eigenvalues. For vortices this reduces to: $$dim= \frac{1}{2}\big(N^2-k^2-(N-k)^2\big)=k(N-k)$$ In the $YM$ theory, in the thermodynamic limit, due to the contributions of the vortices zero modes, the effective action reads: $$\Gamma_q = \sum_p k_p(N-k_p)\, 8 \pi^2 \Big(\frac{1}{g_W^2(\Lambda)} - \Big(2+\frac{5}{3}\Big)\frac{1}{(4 \pi)^2} \log\Big(\frac{\Lambda^2}{\mu^2}\Big)\Big)$$ The renormalization of the Wilsonean coupling constant in the $YM$ theory now follows immediately from the local part of the vortices effective action. It contains two terms. 
The one proportional to $\frac{5}{3}$ comes from the functional determinants; the one proportional to $2$ comes from the vortices zero modes: $$\Gamma_p = 8 \pi^2\, k(N-k)\Big( \frac{1}{g_W^2(\Lambda)} - \Big(2+\frac{5}{3}\Big) \frac{1}{(4 \pi)^2} \log \Big(\frac{\Lambda^2}{\mu^2}\Big)\Big)$$ and thus: $$\begin{aligned}
\frac{1}{g_W^2(\mu)} &= \frac{1}{g_W^2(\Lambda)} - \Big(2+\frac{5}{3}\Big)\frac{1}{(4 \pi)^2} \log \Big(\frac{\Lambda^2}{\mu^2}\Big)\\
&= \frac{1}{g_W^2(\Lambda)} - \beta_{0} \log \Big(\frac{\Lambda^2}{\mu^2}\Big)\\
\beta_0&=\frac{1}{(4 \pi)^2}\frac{11}{3}\end{aligned}$$ We can reabsorb the coupling constant, $g_W$, into a redefinition of the subtraction point: $$\exp(-\Gamma_q)= \prod_p \exp\Big(-k_p(N-k_p)\, 8 \pi^2 \beta_0 \log \Big(\frac{\mu^2}{\Lambda_W^2}\Big)\Big)$$ There is another redefinition of the subtraction point when the conformal anomaly, due to the homological localization, is added. This leads to the effective action: $$\exp(-\Gamma_q)= \prod_p \exp\Big(-k_p(N-k_p)\, 8 \pi^2 \beta_0 \log \Big(\frac{\tilde \mu^2}{\Lambda_W^2}\Big)\Big)$$ It is easy to see that the critical point of the effective action corresponds to $Z_2$ vortices. Thus in the large-$N$ limit the quasi $BPS$ Wilson loops of the pure $YM$ theory are localized on the $Z_2$ vortices of $ASD$ type of the partial $EK$ reduction. The A-model world-sheet geometry ================================ Following the analogy with the construction of the twistorial $B$-model for the $ \cal{N}$ $=4$ $SUSY$ $YM$ theory at weak coupling in [@W2] and the conjectured $S$-duality to the $A$-model in [@V1] and [@V2], we would like to identify an $A$-model on twistor space dual to the large-$N$ pure $YM$ theory, more precisely to the restricted sector of quasi $BPS$ Wilson loops described in the previous section. We use as a guide the localization result in [@MB1] for quasi $BPS$ Wilson loops. We look for an $A$-model whose classical equations of motion reproduce the vortex equation of the large-$N$ $YM$ theory, on which the quasi $BPS$ Wilson loops are localized. In the next section we show, using the loop equation of the effective $CS$ theory, that the equivalence extends to the quantum level and to the Wilsonean beta function in the large-$N$ limit. Since we cannot show in general that all the $YM$ observables are localized on vortices, we can reasonably hope at most that the $A$-model which we are looking for describes only the quasi $BPS$ sector of the $YM$ theory. 
The natural candidates are the $A$-models defined on twistor space of a compactification of space-time. On the $A$-model side the compactification is needed to have a well defined moduli problem for the $A$-model world-sheet instantons. The thermodynamic limit is then recovered in the decompactification limit. Though the compactification appears at this stage as a technical device, it becomes apparent later that it has in fact deeper reasons. Twistor space can be defined for any orientable Riemannian four-manifold, as the bundle over the given manifold of all its almost complex structures compatible with the given Riemannian metric. This is a generalization of the original construction for a hyper-Kahler manifold, and for details we refer to the mathematical and physical literature [@Hit; @X; @D; @To]. Natural compactifications of four-dimensional Euclidean space-time are the four-sphere $S^4$, the complex projective surface $CP^2$, and the four-torus $T^4$. Thus we have as candidates the corresponding twistor spaces. These candidates are a priori on an equal footing, but in fact we argue that only twistor space of $CP^2$ meets all the following physical requirements. It is a deep result [@W3] that the $A$-model does not need to be defined on a Calabi-Yau for its quantum consistency, but only on an almost complex manifold [^4]. Indeed the Calabi-Yau condition for a complex threefold ensures the vanishing of the chiral anomaly in the $B$-model [@W1]. For an $A$-model on a threefold there is no such chiral anomaly, but rather a ghost number anomaly that affects only the selection rule for the observables [@W1; @Mar]. In particular, in order to couple the $A$-model to gravity on the world-sheet, i.e. to define an $A$-model string theory, we must require that the selection rule for $n$-point observables is genus independent, for the observables to have corrections to all orders in the string coupling constant as for a critical string theory [@W1; @Mar]. 
This implies that the first Chern class of the complexified tangent bundle vanishes. Yet this is considerably weaker than the Calabi-Yau condition, since the almost complex structure that is involved in the definition of the tangent bundle need not be actually complex, i.e. integrable. Thus our first requirement is that the $A$-model twistor space has vanishing first Chern class. The second requirement is that the $A$-model must support a $B$-field on twistor space that projects to a $B$-field on the space-time base. This is necessary to identify the $B$-field with the one that implies the non-commutative partial large-$N$ $EK$ reduction in physical space-time. Finally, the third requirement is that the twistor space admits a Lagrangian submanifold over which the equation of motion of the $CS$ effective theory implies the vortex equation associated to the localization of quasi $BPS$ Wilson loops of the $YM$ theory. We examine now whether these constraints can be satisfied for our candidate twistor spaces. We start by describing the twistor space of our compactified space-times [@Hit; @X; @D; @To]: $TW(S^4)=CP^3$, $TW(CP^2)=SU(3)/U(1) \times U(1)$, $TW(T^4)=T^4\times CP^1$. Since none of these spaces is Ricci-flat, the corresponding topological models are not Calabi-Yau's. In particular there is no way to define the $YM$ string as a $B$-model extending the construction in [@W2] to a non-supersymmetric version. However if an $A$-model on super-twistor space $S$-dual to the $B$-model exists, as conjectured in [@V1], then its non-supersymmetric version should be naturally related to non-supersymmetric $YM$ theory [@V2]. In fact in this paper we would like to make this conjecture more precise, identifying a version of the $A$-model that reproduces the $YM$ beta function in the large-$N$ limit. Since to define the $A$-model we need an almost complex structure, we must discuss almost complex or complex structures on twistor space. 
In this case we have an embarrassment of riches, since twistor spaces admit several different (almost) complex structures. The (almost) complex structures that can be defined on the same twistor space may have different Chern classes and thus may define nonequivalent $A$-models, because of the different selection rules for the observables. (Almost) complex structures on twistor spaces have been classified recently [@D]. However it has been known for a long time that twistor space of self-dual manifolds always admits the two following (almost) complex structures [@D]. The standard integrable complex structure, that is covariantly constant with respect to the Levi-Civita connection: its Chern class is non-vanishing in all the three cases of our study. The non-integrable almost complex structure obtained by reversing the orientation of the complexified tangent bundle on the fibre of the twistor fibration [@D; @To]: its Chern class vanishes for the twistor space of self-dual Einstein manifolds [@X; @To], thus in all our three cases. Because the almost complex structure is not integrable, a $H$-flux appears as the torsion of the almost complex manifold [@W3]. This torsion is associated to a non-trivial $B$-field. In addition we have the freedom to add a closed $B$-field to the $A$-model action without changing the flux and the topological nature of the $A$-model [@W3]. While any component of the $B$-field along the fibre of the twistor fibration is allowed, the component along the base must be the $B$-field needed for the $EK$ reduction. In particular it must be vanishingly small in the large-$N$ limit and in the infinite tension limit [@SW] of the topological theory [^5]. We must describe in more detail the action of the $A$-model in the case of an almost complex manifold and the almost complex structure on twistor space. 
It is one of the deep results in [@W3] that for an $A$-model on an almost complex manifold the topological action contains a linear combination of the metric $g$ and of the two-form $J$ associated to the almost complex structure. Thus $J$ plays the role of a $B$-field. More explicitly, for the twistor space of a self-dual Einstein manifold the metric $g_6$ and the almost complex structure $J$ are given in terms of the holomorphic vierbein as follows (we use the notation of [@To]): $$g_6= \sum_i e^i \otimes \bar e^i,\qquad J= i \sum_i e^i \wedge \bar e^i$$ The flux $H=dJ$ can be computed using the following formulae [@X; @To]: $$d \begin{pmatrix} e^1\\ e^2\\ e^3 \end{pmatrix} = \begin{pmatrix} -\alpha & 0\\ 0 & Tr(\alpha) \end{pmatrix} \wedge \begin{pmatrix} e^1\\ e^2\\ e^3 \end{pmatrix} + \frac{1}{R} \begin{pmatrix} \bar e^2 \wedge \bar e^3\\ \bar e^3 \wedge \bar e^1\\ \sigma\, \bar e^1 \wedge \bar e^2 \end{pmatrix}$$ where $\alpha$ is an anti-hermitean matrix of one-forms that acts on $(e^1, e^2)$ and $R$ is an overall length scale. $\sigma$ parametrizes the curvature of the base manifold relative to the fibre. These formulae show that the topological action contains a $B$-field, $J$, whose components are non-vanishing along all the three orthogonal complex directions in twistor space. This is not what we want. We need one complex line of the base without $B$-field and one with a vanishingly small $B$-field, to achieve the partial $EK$ reduction. For $\sigma=1$ twistor space is a Kahler manifold [@Hit; @X; @To]. This corresponds to the case of $S^4$ and $CP^2$ with the integrable complex structure. For $\sigma=2$ twistor space of $S^4$ and $CP^2$ is a nearly Kahler manifold with the non-integrable almost complex structure [@X; @To] [^6]. In the case of $S^4$ and $CP^2$ with $\sigma=2$ we obtain the required $EK$ field by adding to the topological action the following $B$-field with vanishing torsion: $$-i\, e^1 \wedge \bar e^1 -i(1-\epsilon)\, e^2 \wedge \bar e^2 +i(1-\epsilon)\, e^3 \wedge \bar e^3$$ that cancels the $B$-field along the complex direction $1$ and creates a small $B$-field along the complex direction $2$, at the price of creating a large compensating $B$-field along the complex direction $3$. 
This is not possible for $T^4$, because it has zero $\sigma$, since $T^4$ is a flat manifold, and thus the torsion of the would-be compensating $B$-field on the fibre vanishes in this case. Thus the only two suitable spaces that may lead to a stringy $A$-model and large-$N$ non-commutative $EK$ reduction are the four-sphere and the projective surface. The problem with the four-sphere is that it is not a complex manifold. Thus on the four-sphere there is no notion of the conjugate diagonal embedding $z=\bar u$, that is needed to define our Lagrangian submanifold in a way compatible with the embedding in space-time of quasi $BPS$ Wilson loops [^7]. Only twistor space of the projective surface remains. On the projective surface the conjugate embedding defines a Lagrangian submanifold that is isomorphic topologically to $RP^2$, by a slight modification of the standard embedding of $RP^2$ into $CP^2$. It lifts to a Lagrangian submanifold in twistor space that locally is $RP^2 \times RP^1$, where $RP^1$ can be taken to be a great circle in $RP^2$. This can be explained as follows. There is a realization of $TW(CP^2)$ as the subset of $CP^2 \times \tilde CP^2$ for which $ \sum_i z_i \tilde z_i=0 $ in projective coordinates (see [@To] and references therein). This means that $TW(CP^2)$ can be thought of as a fibration of a complex projective surface by its orthogonal complex line, that in turn is labeled by another complex line that belongs to the surface. Thus $TW(CP^2)$ is a flag manifold that is a fibration of a complex surface by a complex line that belongs to the surface [@To]. When restricting to the Lagrangian submanifold the previous statements continue to hold in their real version. Now we must show that on this Lagrangian submanifold the $CS$ theory at classical level leads to the vortex equation. 
The target-space action of the $A$-model in the open string sector is the effective action: $$S = \frac{1}{2 g_s}\int Tr\Big( A \wedge F - \frac{1}{3}\, A^3\Big)+ \sum_{i} \eta_i\, e^{-a(\gamma_i)}\, Tr\Big( P \exp i \oint_{\gamma_i} A \Big)$$ defined on a Lagrangian submanifold of $TW(CP^2)$. It is not restrictive to assume that $\eta _i=1$ [@W1]. $e^{-a({\gamma _i})}$ is the weight (not necessarily real in presence of a $B$-field) by which an instanton of (complex) area $a$, whose boundary is $\gamma _i$, is weighted in the world-sheet expansion of the $A$-model. The extra term with respect to the usual $CS$ action is due to world-sheet instanton corrections and to the insertion of Wilson-loop operators on the world-sheet. Notice that the effective action is obtained by re-summing the world-sheet genus expansion in a non-trivial background of Wilson loops. Thus for its existence we must assume that the ghost number anomaly selection rule is satisfied in a genus independent way. This in particular requires that the first Chern class of the almost complex structure vanishes, as we anticipated. Since the connection that can be coupled to the $A$-model is flat, the conjugacy class of the Wilson loop holonomy depends only on the homology class of $\gamma _i$. We must show that a vortex equation arises as a solution of this $CS$ theory on the Lagrangian submanifold. Motivated by the embedding of the space-time ($CP^2$) with coordinates $(w , \bar w , u ,\bar u)$ into the twistorial fibration with coordinates $( \lambda, u_1,u_2)$: $$\begin{aligned}
w-\bar u&=u_1 \\
u+\bar w&=u_2\end{aligned}$$ we make the following ansatz for the $CS$ connection on the twistorial fibration in terms of the gauge fields $(A_w, A_{\bar w}, A_u, A_{\bar u})$ on the four-dimensional space-time $CP^2$. This ansatz is very well known as the Penrose-Ward construction, and in fact it is at the heart of the link between self-dual $YM$ equations and flat equations along the tangent direction to twistor surfaces [@T]. 
Our main point is that we need a certain version of it, restricted to the Lagrangian submanifold, in such a way that the $CS$ equations get deformed into the vortices equations. Our ansatz for the covariant derivatives of the $CS$ connection along the tangent vector fields of the double cover of the Lagrangian fibration is: $$\begin{aligned}
&D_{u_1}=D_{\bar w}- \frac{1}{\lambda}\, D_u \\
&D_{u_2}= D_{w}+ \lambda\, D_{\bar u}\end{aligned}$$ where $ D_u= \partial _u+ i A_u $. We employ the double cover because we want to use complex coordinates. Physically this ansatz corresponds to imposing that the $CS$ gauge fields that live in a neighborhood of the tangent directions of the Lagrangian submanifold contain the $YM$ gauge fields in a way that respects the geometry of the twistor fibration. In particular on our Lagrangian submanifold both $\lambda$ and $-\frac{1}{\lambda}$ occur, because of the antipodal identification on the fibre $RP^1$. From the point of view of the underlying string theory defined by the $A$-model this occurs because the gauge fields are associated to the open strings that live indeed on the Lagrangian submanifold. However, we must check that our ansatz can in fact be satisfied by the equations of motion of the string theory, that are the $CS$ equations of motion. The curvature equations without the condensate are now: $$[D_{u_1} , D_{u_2}] = \lambda\, [D_{\bar w} , D_{\bar u}]- \big( [D_w ,D_{\bar w}]+ [D_u , D_{\bar u}]\big)+\frac{1}{\lambda}\, [D_w , D_u]=0$$ Hence they are satisfied provided the self-dual curvature of the underlying $YM$ connection vanishes. In this case the $CS$ theory would be defined on the moduli space of $SD$ connections. We now require that the $(u, \bar u)$ coordinates become non-commutative. This corresponds to turning on a $B$-field in the $A$-model sigma-model. This $B$-field implies a non-vanishing flux in general, and it was already implicit in our choice of an almost complex structure. 
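The $\lambda$ factors in the ansatz did not survive in the extracted text; the assignment $D_{u_1}=D_{\bar w}-\frac{1}{\lambda}D_u$, $D_{u_2}=D_w+\lambda D_{\bar u}$ assumed here is the one that reproduces the three $SD$ components in the expansion above, as a quick check with random matrices standing in for the covariant derivatives shows:

```python
import numpy as np

# Verify the lambda-expansion of [D_{u1}, D_{u2}] for the assumed ansatz
# D_{u1} = D_wbar - (1/lam) D_u,  D_{u2} = D_w + lam D_ubar, with random
# matrices in place of the covariant derivatives (the identity is algebraic).
rng = np.random.default_rng(0)
comm = lambda a, b: a @ b - b @ a
Dw, Dwb, Du, Dub = [rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
                    for _ in range(4)]

lam = 0.73
lhs = comm(Dwb - Du / lam, Dw + lam * Dub)
rhs = lam * comm(Dwb, Dub) - (comm(Dw, Dwb) + comm(Du, Dub)) + comm(Dw, Du) / lam
assert np.allclose(lhs, rhs)
```

Since the identity holds for every $\lambda$, each coefficient must vanish separately, which is exactly the statement that the self-dual curvature components $[D_{\bar w},D_{\bar u}]$, $[D_w,D_u]$ and $[D_w,D_{\bar w}]+[D_u,D_{\bar u}]$ are zero.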
The physical reason for which we require this $B$-field is that we must convey into the $CS$ functional integral, which contains only a two-dimensional Lagrangian submanifold of space-time, the physical information about the four-dimensional nature of space-time. This is not possible at finite $N$, but it is possible at $N =\infty $, by the $EK$ reduction. This is realized by making space-time non-commutative in the $(u,\bar u)$ directions in the limit of infinite non-commutativity, and by reabsorbing the non-commutative degrees of freedom into the infinite-dimensional (at $N= \infty $) Hilbert color space of the $CS$ theory. Because of the limit of infinite non-commutativity, the $B$-field is vanishingly small in the large-$N$ limit. As a result Eq.(62) gets deformed to:
$$[D_{u_1}, D_{u_2}] + i B\, \mathbf{1} = 0,$$
since now some partial derivatives are non-commutative. Finally we include the condensate of Wilson loops. We suppose that the Wilson loop extends only along $\lambda$ and is based over the point $w_p$ of the double cover of $RP^2$. The double covering is needed to allow for the existence of complex coordinates. The curvature equation now reads:
$$[D_{u_1}, D_{u_2}]^a = -\,i B\, \mathbf{1}^a + i\, g_s\, \delta^{(2)}(w - w_p)\, e^{-a(\gamma_p)}\, \mathrm{Tr}\Big( P\, T^a \exp i \oint_{\gamma_p} A_\lambda\, d\lambda \Big).$$
Thus we see that the effect of instantons is to introduce a vortex singularity in the (non-commutative) $SD$ equations. For a certain choice of the background Wilson loop and of the string coupling, these become identical to the $Z_2$ non-commutative vortex equations of $SD$ type of the $YM$ theory:
$$\begin{aligned} &[D_w , D_{\bar w}] + [D_u , D_{\bar u}] = i\, g \lambda_p g^{-1}\, \delta^{(2)}(w - w_p) + B\, \mathbf{1}, \\ &[D_w, D_u] = 0, \\ &[D_{\bar w}, D_{\bar u}] = 0, \end{aligned}$$
repeated with an infinite degeneracy, $N_1$, along the fibre. This occurs for a certain condition on the string coupling (see next section) and for the natural choice:
$$\Big( P \exp i \oint_{\gamma_p} A_\lambda\, d\lambda \Big)^2 = \mathbf{1},$$
in such a way that $A_{\lambda}$ is flat on $RP^2$.
We explain now why this choice is natural. Turning on a non-trivial Wilson-loop background on the fibre implies a compatibility condition involving the $CS$ flatness condition on the fibre and on the base (see Eq.(70)). We will not examine this compatibility condition in detail, but we observe that flatness conditions of this type have been studied in the mathematical literature under the name of twistor structures [@Moc]. In particular, solutions exist for holomorphically flat connections on the fibre, which correspond to our ansatz, since the holonomy on the fibre is in a representation of the fundamental group of $RP^2$, because its square is one, and thus it is flat. Even if the fibre becomes non-commutative because of the background $B$-field along the fibre, the vortex equation is satisfied on the base. Indeed $\lambda$ then becomes a hermitean operator. The flatness condition still implies the three $SD$ equations provided $(1, \lambda, \lambda^{-1})$ are linearly independent as operators. The equation for the hermitean part of the $CS$ curvature is unchanged. In the next section we study these equations at the classical as well as the quantum level. The $A$-model beta function via its Chern-Simons effective action ================================================================= The beta function of our twistorial $A$-model arises as follows. The quantization of the standard $CS$ theory, without the condensate of Wilson loops, is described by the functional integral [@M]:
$$Z = \int \exp\Big( \int \mathrm{Tr}\big( \tilde A\, \partial_\lambda \tilde A \big)\, d\lambda\, dw\, d\bar w \Big)\ \delta\big( [D_{u_1}, D_{u_2}] \big)\ D\tilde A,$$
where the connection one-form $\tilde{A}$, on the double cover of the base of the Lagrangian submanifold, is given by:
$$\tilde A = \big( A_w + \lambda\, D_{\bar u} \big)\, dw + \big( A_{\bar w} - \tfrac{1}{\lambda}\, D_{u} \big)\, d\bar w.$$
We call the exponent in Eq.(67) the kinetic term, because it is the only term that contains derivatives along the fibre. The delta-functional constraint imposes the flatness condition on the gauge connection tangent to the base.
We will see that, after integrating the constraint in the delta-functional and after the inclusion of the condensate of Wilson loops, the kinetic term defines a quantum-mechanical theory on the moduli space of the vortex equation. We expect such a quantum-mechanical system to be ultraviolet finite, being a one-dimensional quantum field theory. Thus the only possible source of divergences comes from the functional determinant that arises by integrating the constraint in the delta-functional. This is the situation in which the $YM$ beta function occurs, since, although the theory looks two-dimensional on the base, it is in fact four-dimensional because of the $EK$ reduction. The asymmetry between the base and the fibre is created by the presence of the Wilson-loop condensate, which we choose along a non-contractible loop on the fibre, in order to reproduce the correct vortex equation. The presence of the condensate modifies the flatness constraint in a way that we can understand as follows. The functional integral in the presence of a condensate is given by:
$$Z = \int \exp\Big( \int \mathrm{Tr}\big( \tilde A\, \partial_\lambda \tilde A \big)\, d\lambda\, dw\, d\bar w \Big) \exp\Big( i \int \mathrm{Tr}\big( A_\lambda\, [D_{u_1}, D_{u_2}] \big)\, dw\, d\bar w\, d\lambda\ -\ g_s\, e^{-a(\gamma_p)}\, \mathrm{Tr}\Big( P \exp i \oint_{\gamma_p} A_\lambda \Big) \Big)\, DA.$$
If the chosen background loop, $\gamma_p$, extends in the $\lambda$ direction only and is based at the point $w_p$ of the base, the classical equations of motion of the $CS$ effective action, as shown in the previous section, are:
$$\begin{aligned} &F^a_{\lambda w} = 0, \\ &F^a_{\lambda \bar w} = 0, \\ &F^a_{w \bar w}(\lambda)\ -\ i\, g_s\, \delta^{(2)}(w - w_p)\, e^{-a(\gamma_p)}\, \mathrm{Tr}\Big( P\, T^a \exp i \oint_{\gamma_p} A_\lambda\, d\lambda \Big) = 0, \end{aligned}$$
where $T^a$ are the generators of $U(N)$ in the fundamental representation. Once our choice of the background Wilson loop is made, the world-sheet instantons that dominate the world-sheet theory are constant on the base and wrap once along the fibre. Notice that the presence of the condensate introduces a $\delta ^{(2)}(w-w_p)$ singularity in the curvature of the $CS$ connection on the base.
This $\delta ^{(2)}(w-w_p)$ singularity arises by integrating, along the fibre on which the background Wilson loop lies, the $\delta ^{(3)}$ singularity due to the functional differentiation in three dimensions. In addition, the presence of a condensate of Wilson lines with non-trivial holonomy along the fibre implies a twist in the dependence of the fields on the base on the fibre coordinate. The dependence on the fibre in the coefficient of $\delta ^{(2)}(w-w_p)$ is through the holonomy of the connection $A_{\lambda}$. This holonomy dependence determines the vortex moduli space and leads to the aforementioned one-dimensional quantum mechanics on the fibre, due to the kinetic term. We now show that in the large-$N$ limit the condensate equation does not get quantum corrections to the leading order, under certain assumptions. The standard Makeenko-Migdal loop equation [@MM; @MM1] applied to the $CS$ theory with the condensate reads schematically:
$$\tau\Big( \big( [D_{u_1}, D_{u_2}](w_p) - i\, \delta^{(2)}(w - w_p)\, g_s\, e^{-a(\gamma_p)}\, \Psi(\gamma_p; A) \big)\, \Psi(\gamma_p; A) \Big) = i\, \delta^{(2)}(w - w_p)\ \tau\big( \Psi(\gamma_{p}; A) \big)\, \tau\big( \Psi(\gamma_{p}; A) \big),$$
where $\tau$ is a generalized trace, that is, the combination of the v.e.v. with the normalized color trace, and $\Psi(\gamma_{p}; A)=P \exp ( i \int_{\gamma_p} A_{\lambda} )$ is the holonomy of the Wilson loop. The term that contains the double loop on the left-hand side is due to the condensate. The term on the right-hand side is the usual interacting $MM$ term. If the background of Wilson loops is non-trivial, i.e. $\tau(\Psi(\gamma_{p}; A))=0$, the interaction term vanishes and thus the classical equation is not renormalized in the large-$N$ limit. This is precisely the situation that is needed to reproduce the $YM$ beta function. The $YM$ beta function is reproduced by a background of Wilson loops on the fibre that implies a $Z_2$ vortex in space-time and thus satisfies the non-trivial condition $\tau(\Psi(\gamma_{p}; A))=0$.
Indeed the equation for the hermitean part of the $CS$ curvature on the double covering of the base of the Lagrangian submanifold reads:
$$-\big( [D_w , D_{\bar w}] + [D_u, D_{\bar u}] \big)(w_p) = -\,B + i\, g_s\, e^{-a(\gamma_p)}\, \Psi(\gamma_p; A)\, \delta^{(2)}(w - w_p).$$
The holonomy along the fibre, $\Psi(\gamma_p;A)$, leads to the $Z_2$ vortex provided $g_s e^{- a(\gamma_p)}\Psi(\gamma_p;A)= g \lambda_p g^{-1}$, for some unitary $g$, is the curvature of the gauge connection of a $Z_2$ vortex. The only obvious solution is $g_s e^{- a(\gamma_p)}=\pi$ with the eigenvalues of $\Psi(\gamma_p;A)$ equal to $(1,-1)$ in equal number. Thus $\tau(\Psi(\gamma_{p}; A))=0$. Now we compute the beta function by saturating the $MM$ loop equation. By saturating the $MM$ equation we mean writing a functional integral that satisfies the same quantum equation of motion, i.e. the same $MM$ loop equation, to the leading $\frac{1}{N}$ order. Hence this saturating path integral contains a delta-functional involving the equation of motion of one vortex and its antipodal image, since we are on the double covering:
$$Z = \int \prod_\lambda \delta\Big( F_{w \bar w}(\lambda) - i\, \delta^{(2)}(w - w_p)\, g \lambda_p g^{-1} - i\, g \lambda_p g^{-1}\, \delta^{(2)}(w - \bar w_p) \Big)\ DA_{w}(\lambda)\, DA_{\bar w}(\lambda),$$
with $\lambda_p$ the curvature of the $YM$ connection of a $Z_2$ vortex at $p$, which means that $\lambda_p$ has eigenvalues equal to $(\pi, -\pi)$ in equal number. This functional integral is apparently two-dimensional, but since it is repeated on the fibre, it reproduces the integral over the four-dimensional gauge connections with the same degeneracy by which the $SD$ constraint is repeated. Hence it reproduces the four-dimensional information. To see this, we recast it in the following form:
$$\begin{aligned} \int & DA_w\, DA_{\bar w}\, DA_u\, DA_{\bar u}\ \delta\Big( [D_w , D_{\bar w}] + [D_u , D_{\bar u}] - i\, g \lambda_p g^{-1}\, \delta^{(2)}(w - w_p) - i\, g \lambda_p g^{-1}\, \delta^{(2)}(w - \bar w_p) - i B\, \mathbf{1} \Big) \\ & \times\ \delta\big( [D_w , D_u] \big)\ \delta\big( [D_{\bar w}, D_{\bar u}] \big), \end{aligned}$$
repeated with an infinite multiplicity, $N_1$, along the fibre.
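The condition $\tau(\Psi(\gamma_p; A)) = 0$ for the $Z_2$ background can be made concrete with a toy holonomy matrix. The code below is an illustration, not part of the original derivation: it checks that a unitary with eigenvalues $(1,-1)$ in equal number has vanishing normalized trace and squares to one, as the $Z_2$ vortex requires.

```python
import numpy as np

# Toy holonomy with eigenvalues (1, -1) in equal number, k_p = N/2:
# tau(Psi) is the normalized trace, which must vanish for the Z_2 vortex.
N = 6
eigs = np.diag([1.0] * (N // 2) + [-1.0] * (N // 2))

# Conjugate by a random unitary g, as in Psi = g diag(1,...,-1) g^{-1}.
rng = np.random.default_rng(1)
g, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
Psi = g @ eigs @ g.conj().T

tau = np.trace(Psi).real / N
assert abs(tau) < 1e-12                   # tau(Psi(gamma_p; A)) = 0
assert np.allclose(Psi @ Psi, np.eye(N))  # holonomy squares to one (Z_2)
```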
In the decompactification limit we can identify the base of the Lagrangian twistorial fibration with the space-time of the $YM$ theory in the thermodynamic limit. Thus Eq.(74) becomes:
$$\int \delta\big( F^{+}_{\alpha \beta} - \mu^{+}_{\alpha \beta} \big)\, DA = \int d(\text{zero-modes})\ \mathrm{Det}'^{-\frac{1}{2}}\big( -\Delta_A\, \delta_{\alpha \beta} + D_\alpha D_\beta + i\, \mathrm{ad}_{F^{+}_{\alpha \beta}} \big)\ \Delta_{FP}$$
for
$$\begin{aligned} \mu^{+}_{01} &= g \lambda_p g^{-1}\, \delta^{(2)}(w - w_p) + g \lambda_p g^{-1}\, \delta^{(2)}(w - \bar w_p) + B\, \mathbf{1}, \\ \mu^{+}_{23} &= g \lambda_p g^{-1}\, \delta^{(2)}(w - w_p) + g \lambda_p g^{-1}\, \delta^{(2)}(w - \bar w_p) + B\, \mathbf{1}, \end{aligned}$$
and all other components of $\mu^{+}_{\alpha \beta}$ vanishing. This functional integral, in the $EK$-reduced version, was already computed in sect.(2) and in [@MB1], but for the $ASD$ variables instead of the $SD$ ones. Of course this does not change the divergences. We can compare the result for the effective action of vortices of the $YM$ theory computed in sect.3 with the $CS$ functional integral of Eq.(73,74,75). The exponential of minus the $CS$ effective action is:
$$\exp\Big( N_1\, k_p (N - k_p)\, 8 \pi^2\, \beta_0\, \log\big( \tfrac{\Lambda^{2}_{CS}}{B_{CS}} \big) \Big)$$
with $k_p=\frac{N}{2}$, since in the $CS$ theory we get only a $Z_2$ vortex with multiplicity $N_1$. The factor of $N_1$ is due to the multiplicity of the fibre delta-functional constraint in the $CS$ functional integral. In [@MB1] it was shown that the $YM$ Wilsonean beta function is one-loop exact at large $N$, using the homological localization of the loop equation. The $CS$ beta function shares the same feature. Thus we have the identification $\frac{ \Lambda ^{2}_{CS} }{B_{CS}}= \frac{1}{N_D\, B\, e^{\frac{1}{ \beta _0 g_W ^2}} a^2 }$, which is our result. Conclusions =========== We have shown that there exists an $A$-model on the twistor space of $CP^2$ with a non-integrable complex structure that has vanishing first Chern class. Thus it defines a topological string theory. In addition, its Chern-Simons effective action at large $N$, on a certain Lagrangian submanifold and for a certain background Wilson loop and $B$-field, has the same Wilsonean beta function as the large-$N$ Yang-Mills theory.
This is due to the identification of the twistor Chern-Simons loop equation in the given background with the non-commutative vortex equation of self-dual type of the Yang-Mills theory reduced à la Eguchi-Kawai. Conjecturally, following the analogy with the topological $B$-model of the $ \cal{N} $ $=4$ $YM$ theory, this topological $A$-model holds promise to describe the glueball dynamics of a certain quasi-$BPS$ sector of the pure $YM$ theory. Acknowledgments =============== We thank Andrew Neitzke and Cumrun Vafa for several discussions about the $A$-model and the $YM$ loop equation, and Cumrun Vafa for inviting us to complete this work at Harvard University. We thank Arthur Jaffe for inviting us to talk at the seminar of his group about the localization of the $YM$ loop equation and the $A$-model. We thank Denis Auroux and Tomasz Mrowka at $MIT$ for explaining to us some features of the twistor fibration. We thank Roberto Martinez for several stimulating conversations about the $A$-model and the $YM$ loop equation. [99]{} A. Neitzke, C. Vafa, [*"N=2 strings and the twistorial Calabi-Yau"*]{}, \[hep-th/0402128\]. M. Bochicchio, [*"Quasi $BPS$ Wilson loops, localization of loop equation by homology and exact beta function in the large-$N$ limit of $SU(N)$ Yang-Mills theory"*]{}, \[hep-th/0809.4662\], to appear in JHEP. M. Bochicchio, JHEP 0709 (2007) 033, \[hep-th/0705.0082\]. N. Hitchin, [*Proc. Lond. Math. Soc.*]{} [**(3) 43**]{} (1981) 133. F. Xu, [*"$SU(3)$-structures and special lagrangian geometries"*]{}, \[math.DG/0610532\]. G. Deschamps, [*"Compatible Complex Structures on Twistor Spaces"*]{}, \[math.DG/0810.1135\]. A. Tomasiello, [*Phys. Rev.*]{} [**D 78**]{} (2008) 046007, \[hep-th/0712.1396\]. E. Witten, [*Comm. Math. Phys.*]{} [**252**]{} (2004) 189, \[hep-th/0312171\]. R. Dijkgraaf, S. Gukov, A. Neitzke, C. Vafa, [*Adv. Theor. Math. Phys.*]{} [**9**]{} (2005) 603, \[hep-th/0411073\]. T. Eguchi, H. Kawai, [*Phys. Rev. Lett.*]{} [**48**]{} (1982) 1063. G. Bhanot, U. Heller, H. Neuberger, [*Phys. Lett.*]{} [**B 113**]{} (1982) 47. A. Gonzalez-Arroyo, C. P. Korthals-Altes, [*Phys. Lett.*]{} [**B 131**]{} (1983) 396. M. R. Douglas, N. A. Nekrasov, [*Rev. Mod. Phys.*]{} [**73**]{} (2001) 977, \[hep-th/0106048\]. Y. Makeenko, [*"The First Thirty Years of Large-$N$ Gauge Theory"*]{}, \[hep-th/0407028\]. N. Seiberg, E. Witten, JHEP 9909 (1999) 032, \[hep-th/9908142\]. E. Witten, [*Progr. Math.*]{} [**133**]{} (1995) 637, \[hep-th/9207094\]. V. Pestun, [*"Localization of gauge theory on a four-sphere and supersymmetric Wilson loops"*]{}, \[hep-th/0712.2824\]. Yu. M. Makeenko, A. A. Migdal, [*Phys. Lett.*]{} [**B 88**]{} (1979) 135. Yu. M. Makeenko, A. A. Migdal, [*Nucl. Phys.*]{} [**B 188**]{} (1981) 269. R. Dijkgraaf, C. Vafa, [*"A Perturbative Window into Non-Perturbative Physics"*]{}, \[hep-th/0208048\]. F. Cachazo, M. R. Douglas, N. Seiberg, E. Witten, JHEP 0212 (2002) 071, \[hep-th/0211170\]. K. Takasaki, [*J. Geom. Phys.*]{} [**37**]{} (2001) 291. H. Kawai, T. Kuroki, T. Morita, [*Nucl. Phys.*]{} [**B 664**]{} (2003) 185, \[hep-th/0303210\]. S. Gukov, E. Witten, [*"Gauge Theory, Ramification, And The Geometric Langlands Program"*]{}, \[hep-th/0612073\]. E. Witten, [*Commun. Math. Phys.*]{} [**118**]{} (1988) 411. V. Stojevic, [*"Topological A-Type Models with Flux"*]{}, \[hep-th/0801.1160\]. M. Marino, [*Rev. Mod. Phys.*]{} [**77**]{} (2005) 675, \[hep-th/0406005\]. S. J. Gates, C. M. Hull, M. Rocek, [*Nucl. Phys.*]{} [**B 248**]{} (1984). R. Zucchini, JHEP 0612 (2006) 039, \[hep-th/0608145\]. T. Mochizuki, [*"Asymptotic behaviour of variation of pure polarized TERP structures"*]{}, \[math.DG/0811.1384\]. S. Elitzur, G. Moore, A. Schwimmer, N. Seiberg, [*Nucl. Phys.*]{} [**B 326**]{} (1989) 108. [^1]: permanent address [^2]: Twistor space can be defined for every orientable four-manifold, essentially as the bundle of all the almost complex structures over the given manifold [@Hit].
This more general definition extends the original construction of twistor space for hyper-Kähler manifolds. See also [@X; @D] for a convenient mathematical description and [@To] for physical applications. [^3]: Since the integral over $\mu=\mu^0+n-\bar n$ has to be interpreted in a holomorphic sense, a choice of the holomorphic path is needed. The hermitean path corresponding to $n=\bar n=0 $ leads to the correct beta function. [^4]: Topological $A$-models defined on almost complex manifolds are also considered in [@S]. Bi-hermitean models are studied in [@R; @Z]. We would like to thank Alessandro Tomasiello for an enlightening discussion about this point. [^5]: The cohomological localization of the topological theory implies that the infinite-tension limit is in fact exact. [^6]: Indeed the same topological twistor space admits two different metrics and (almost) complex structures. [^7]: Since the twistor space of the four-sphere is $CP^3$, there is a $CP^2$ inside $CP^3$ on which to define the conjugate diagonal Lagrangian embedding. However, the complex structure of this $CP^2$ does not project to a complex structure on $S^4$ [@To].
--- abstract: 'We discuss two concepts of metric and linear connections in noncommutative geometry, applying them to the case of the product of continuous and discrete (two-point) geometry.' --- [On Some Aspects of Linear Connections\ in Noncommutative Geometry]{}\  \  \ Andrzej Sitarz [^1]\  \ [*Department of Theoretical Physics\ Institute of Physics, Jagiellonian University\ Reymonta 4, 30-059 Kraków, Poland\ e-mail: sitarz@if.uj.edu.pl*]{}\ [TPJU 5/95]{}\ [February 1995]{} Introduction ============ Noncommutative geometry \[1-5\] is one of the most attractive mathematical concepts in physics that could be applied in fundamental field theory. So far, the investigations of gravity in this framework have concentrated on the case of the product of Minkowski space by a two-point space, which has been motivated by the Standard Model (see \[6-8\]). Their methods, however, did not use the whole structure of noncommutative geometry; in particular, the definitions of metric and linear connections did not use the bimodule structure of differential forms. Only recently have some general ideas concerning linear connection and metric been proposed and discussed for other examples. They \[9-11\] are based on the idea that a key role in the introduction of these structures is played by a generalised permutation operation. A different model of the generalisation of the metric, as well as a simple model of gravity on the product of Minkowski space and two-point space, has already been discussed by us earlier, with some encouraging results [@JA].
In this paper, we shall discuss two methods of construction of the metric and linear connections, based on two different concepts: the first, as proposed in \[9-11\], based on a symmetric metric and the bimodule property of the linear connection; the other, which uses a hermitian metric and left-linearity of the linear connection, and follows the idea of our previous paper (though it differs in a few significant points). We shall try to derive the consequences of these models for the considered example. Our main aim is to determine which conditions are necessary, which could be abandoned, and which are too strict for noncommutative geometry. Of course, the basic test is the agreement with standard differential geometry. Notation ======== Our basic data is a (graded) differential algebra $\O{}$ with the external derivative $d$ obeying the graded Leibniz rule:
$$d(u\, v) = du\, v + (-1)^{\deg u}\, u\, dv.$$
We shall denote by $\O{n}$ the bimodule of $n$-forms, $n \geq 1$, and we shall write ${\CA}$ for $\O{0}$. Let $\pi_n$ be the canonical projection $\pi_n: \O{\ts n} \to \O{n}$, $n \geq 2$; for simplicity we shall often write $\pi$ unless it is necessary to specify the index $n$. We assume also that our external algebra is a graded $\star$-algebra and that we have
$$d(\omega^\star) = (d\omega)^\star.$$
To end this section, let us recall the basic notation of an example of a noncommutative differential calculus on the product of $\R^n$ and a two-point space. The algebra ${\CA}$ consists of functions on $\R^n\times \Z_2$, with pointwise addition and multiplication (also with respect to the discrete coordinate). The bimodule of one-forms is generated by $n+1$ elements, $\{ dx^i\}_{i=1,\ldots,n}$ and $\chi$, with the following set of multiplication properties:
$$f(x,p)\, dx^i = dx^i\, f(x,p), \qquad f(x,p)\, \chi = \chi\, f(x,-p),$$
where $p$ denotes the discrete coordinate taking values $+$ and $-$.
The external derivative is defined as follows:
$$d f(x,p) = \sum_{i=1}^n dx^i\, \pt_i f(x,p) + \chi\, \pt f(x,p),$$
where $\pt_i$ is the usual partial derivative and $\pt f = (1 - \CR)f$, $\CR$ being the morphism which flips the discrete coordinate: $\CR f(x,p) = f(x, -p)$. The external algebra is built with the following multiplication rules:
$$\begin{aligned} dx^i \bt dx^j &= -\, dx^j \bt dx^i, \\ dx^i \bt \chi &= -\, \chi \bt dx^i, \\ d\chi &= 2\, \chi \bt \chi, \end{aligned}$$
and is infinite-dimensional, as $\chi \bt \chi$ does not vanish. One can introduce a $\star$-algebra structure on this algebra, assuming that:
$$(dx^i)^\star = dx^i, \qquad \chi^\star = -\chi.$$ ł[conj1]{} The differential calculus constructed in the above-described way is just the tensor product of the external algebras on the continuous space (which is the standard one) and the discrete two-point space (which is the universal differential calculus). Symmetrization and antisymmetrization ===================================== In classical differential geometry the external algebra is defined as an antisymmetrization of the tensor algebra of one-forms; these operations therefore precede the construction of the differential calculus. In noncommutative geometry the situation could be different and we may choose between several possibilities, all of them coinciding in the case of commutative differential structures. Antisymmetrization ------------------ We may proceed in a similar way as in standard differential geometry and, having constructed the first-order differential calculus (i.e. the bimodule $\O{1}$ and $d : \CA \to \O{1}$, which obeys the Leibniz rule), we may look for a bimodule isomorphism
$$\s: \O{1} \tsa \O{1} \to \O{1} \tsa \O{1},$$
which would correspond to the permutation $dx^a \tsa dx^b \to dx^b \tsa dx^a$. Then we define the noncommutative analogue of the antisymmetrizing morphism on $\OD$ as $1 - \s$ and, consequently, the bimodule of two-forms as the quotient bimodule $\OD / S$, where $S = \kr (1-\s)$.
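The discrete part of this calculus can be checked directly. Writing a function on the two-point space as the pair of its values, the derivative $\pt = (1-\CR)$ obeys a twisted Leibniz rule, $\pt(fg) = \pt(f)\, g + \CR(f)\, \pt(g)$, which is precisely what the bimodule relation $f\chi = \chi\, \CR(f)$ requires for $d(fg) = d(f)\, g + f\, dg$ to hold. A minimal check (illustrative code, not part of the paper):

```python
# Functions on the two-point space Z_2 modeled as pairs (f(+), f(-)).

def R(f):
    return (f[1], f[0])          # flip the discrete coordinate

def pt(f):
    rf = R(f)
    return (f[0] - rf[0], f[1] - rf[1])   # (1 - R) f

def mul(f, g):
    return (f[0] * g[0], f[1] * g[1])     # pointwise product

def add(f, g):
    return (f[0] + g[0], f[1] + g[1])

f, g = (2.0, -3.0), (5.0, 7.0)

assert R(R(f)) == f                        # R is an involution
lhs = pt(mul(f, g))
rhs = add(mul(pt(f), g), mul(R(f), pt(g)))
assert lhs == rhs                          # twisted Leibniz rule
```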
However, we must ensure that the following consistency condition holds: for any elements $a_i,b_i \in \CA$ we must have
$$\sum_i a_i\, db_i = 0 \quad \Longrightarrow \quad \sum_i da_i \tsa db_i \in S.$$
If $\sigma$ satisfies a braid group relation on $\Omega^{\tsa 3}$, then the construction of the whole differential algebra follows directly; let us stress that it is not necessary to require $\sigma^2=1$. Symmetrization -------------- Having defined $\s$ and the external algebra, we might define a symmetrization morphism on $\OD$ as $1+\s$; however, we cannot guarantee without some additional assumptions that
$$\pi\,(1+\s) = 0.$$ ł[sym]{} Indeed, since $\kr \pi = \kr (1-\s)$, if $(1+\s) \xi \in \kr \pi$ we would have that either $\s \xi = -\xi$ or $(1-\s)(1+\s) \xi = 0$, so that $\s^2 \xi = \xi$ in both cases. Therefore, $\s^2=1$ is a necessary requirement (it is obvious that it is also sufficient) for (\[sym\]) to hold. Another option (which has been discussed by [@MAD1]) is to assume the existence of $\s$ and the relation (\[symm\]) without deriving the external calculus from $\s$; in that case, however, we can lose the strict relation between the calculus and $\s$, and the choice of $\s$ could be rather ambiguous. Symmetrization and Antisymmetrization - All In One ------------------------------------------------- In what follows we shall discuss a possibility of deriving the symmetrization and antisymmetrization operations from the external algebra itself. Of course, without some additional assumptions this is not possible; however, as these assumptions are rather natural, we shall present the idea here. Let $S$ denote $\kr \pi$ and $j$ be the inclusion of $S$ in $\OD$. Then the following is a short exact sequence of bimodules over $\CA$:
$$0 \to S \stackrel{j}{\longrightarrow} \OD \stackrel{\pi}{\longrightarrow} \O{2} \to 0.$$
If $\O{2}$ is a projective module, the above exact sequence is a split sequence, i.e. there exist maps $r$ and $\rho$:
$$r: \OD \to S, \quad r \circ j = \id_S, \qquad \rho: \O{2} \to \OD, \quad \pi \circ \rho = \id_{\O{2}},$$
and, moreover:
$$\OD \simeq S \oplus \O{2}.$$
ł[split]{} The latter allows us to introduce natural symmetrization and antisymmetrization operations on $\OD$. Every $\xi \in \OD$ can be represented as a sum $\xi_s+ \xi_a$, where $\xi_s \in j(S)$ and $\xi_a \in \rho(\O{2})$. Then the following map:
$$\s: \OD \to \OD, \qquad \s(\xi) = \xi_s - \xi_a,$$
is a bimodule homomorphism such that $\kr \s = 0$ and $\s^2=1$. One can easily verify that $\frac{1}{2}(1-\s)$ is then a projection on $\rho(\O{2})$ and $\frac{1}{2}(1+\s)$ is a projection on $j(S)$. Example ------- In the example discussed in this paper the situation is rather simple, as the only nontrivial noncommutative part comes from the discrete geometry. As all bimodules $\O{n}$ are free, we may use the results of the last section. We shall write here only the resulting homomorphism $\s$:
$$\begin{aligned} \s(dx^i \tsa dx^j) &= dx^j \tsa dx^i, \\ \s(dx^i \tsa \chi) &= \chi \tsa dx^i, \\ \s(\chi \tsa \chi) &= -\,\chi \tsa \chi. \end{aligned}$$
Metric ====== The construction of the metric is one of the most important issues in noncommutative geometry. First, it is required in the studies of field theories (in particular gauge theories) in this framework; secondly, it is a crucial step towards the analysis of gravity. We shall outline here the commonly used definition and discuss several points which are still not well established. Definition ---------- It has been almost generally agreed that the proper generalisation of the metric tensor is a bimodule map
$$g: \O{1} \tsa \O{1} \to \CA,$$ ł[defi]{} as it is a natural extension of the standard bilinear map to the noncommutative situation. If our differential algebra is a $\star$-algebra one should also postulate
$$g(u^\star, v^\star) = g(v,u)^\star,$$ ł[herm]{} which guarantees that $g(u,u^\star)$ is self-adjoint. The above-mentioned properties of the metric tensor translate easily from standard differential geometry into noncommutative geometry; however, the problems start when we begin to analyse other features of the metric tensor. Symmetry -------- In standard differential geometry one postulates that the metric is symmetric, i.e. $g(u,v)=g(v,u)$ for any one-forms $u,v$.
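Before examining how this symmetry carries over, note that the $\s$ written out above can be checked on the generators. The sketch below (illustrative; one continuous dimension only, basis $dx \tsa dx$, $dx \tsa \chi$, $\chi \tsa dx$, $\chi \tsa \chi$) verifies $\s^2 = 1$ and that $\frac{1}{2}(1 \pm \s)$ are complementary projections, with the two-form part spanned by $dx \bt \chi$ and $\chi \bt \chi$ since $dx \bt dx = 0$ while $\chi \bt \chi$ survives:

```python
import numpy as np

# sigma on the basis (dx⊗dx, dx⊗chi, chi⊗dx, chi⊗chi),
# following the generator rules in the text.
S = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, -1]], dtype=float)

I = np.eye(4)
assert np.allclose(S @ S, I)          # sigma^2 = 1

P_sym = (I + S) / 2                   # projects on j(S) = ker(pi)
P_asym = (I - S) / 2                  # projects on rho(Omega^2)

assert np.allclose(P_sym @ P_sym, P_sym)
assert np.allclose(P_asym @ P_asym, P_asym)
assert np.allclose(P_sym + P_asym, I)
assert np.allclose(P_sym @ P_asym, np.zeros((4, 4)))

# The two-form part is two-dimensional here: dx∧chi and chi∧chi.
assert np.linalg.matrix_rank(P_asym) == 2
```

Strictly, the idempotent operators are $\frac{1}{2}(1 \pm \s)$, since $(1-\s)^2 = 2(1-\s)$ when $\s^2 = 1$; the factor $\frac{1}{2}$ is what the code checks.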
Of course, this requirement cannot hold in noncommutative geometry; however, one could think of replacing it by a different one, which recovers this property in the commutative limit. The ambiguity comes from the fact that even in classical geometry one may look at this property of the metric from two different points of view. First, one may view the symmetry as related to the hermitian metric condition; then the appropriate generalisation should be just (\[herm\]). Another point of view relates the symmetry of the metric to the symmetrization operation on the bimodule $\OD$; then the corresponding generalisation should take the form [@MAD1]:
$$g \circ \s = g,$$ ł[symm]{} where $\s$ is the bimodule isomorphism discussed earlier. We shall now investigate the consequences of each of these definitions in our example. ### Symmetric metric - example From the definition (\[defi\]) we immediately get that the metric evaluated on the generating one-forms must be:
$$\begin{aligned} g(dx^i \tsa dx^j) &= g^{ij}, ł[met1]{} \\ g(dx^i \tsa \chi) &= 0, \\ g(\chi \tsa dx^i) &= 0, \\ g(\chi \tsa \chi) &= g, ł[met4]{} \end{aligned}$$
so that the 'mixed' components must vanish, and $g^{ij}$, $g$ denote the nonzero elements of the algebra $\CA$. The hermitian metric condition (\[herm\]) together with the (\[conj1\]) relations gives us:
$$( g^{ij} )^\star = g^{ji}, \qquad g^\star = g.$$
If we require the additional symmetry (\[symm\]), we obtain:
$$g^{ij} = g^{ji}, \qquad g = -g,$$ ł[disy]{} so that $g^{ij}$ is a real and symmetric tensor and $g$ vanishes. The latter property is rather inconvenient, and we shall now generalise it and discuss it in detail. Metric on Universal Differential Calculus ----------------------------------------- So far, we have encountered a problem with the existence of a (nontrivial) metric on the discrete space of two points if we assume its symmetry (\[symm\]). This feature appears every time we have a universal differential calculus:\ [**Observation:**]{} [*If the differential calculus is universal, then every metric satisfying (\[symm\]) vanishes identically.*]{}\ [**Proof:**]{} If the calculus is universal, then $\pi_n=\id_{\O{n}}$ and therefore $\s = - \id$.
From (\[symm\]) it follows that $g = -g$, hence $g \equiv 0$. Such a consequence is rather undesirable, as one of its aftermaths would be the elimination of the Higgs-field components of the Standard Model Lagrangian, as we shall see later. Therefore, we should rather stick to the basic interpretation of the symmetry property (\[herm\]) of the metric. Metric on higher order forms ---------------------------- Another standard property of the metric is the possibility of extending its definition to modules of higher-order forms. We shall propose here a scheme for its generalisation in noncommutative geometry. First, we shall extend $g$ to the tensor products of $\O{1}$:
$$g( u_1 \tsa \cdots \tsa u_n \tsa v_1 \tsa \cdots \tsa v_n) = g\Big(u_1 \tsa \big( \cdots g\big(u_{n-1} \tsa g(u_n \tsa v_1)\, v_2\big) \cdots v_n \big)\Big),$$
which satisfies the basic requirements (\[defi\]-\[herm\]). Now, using the result (\[split\]) we may extend the metric to higher-order forms using the embedding $\rho$. For instance, in the case of two-forms this would be:
$$g(\omega,\eta) = g\big( \rho(\omega), \rho(\eta) \big)$$ ł[mehi]{} for any two-forms $\omega,\eta$. ### Example - metric on two-forms Here we shall demonstrate how the metric acts on an arbitrary two-form $\CF$ of our product geometry:
$$\CF = dx^i \bt dx^j\, F_{ij} + dx^i \bt \chi\, \Phi_i + \chi \bt \chi\, \phi.$$
Using the form of the metric (\[met1\]-\[met4\]) and the definition (\[mehi\]) we find:
$$\begin{aligned} g(\CF^\star, \CF) = & - F^\star_{ij} F_{kl}\, g^{ik} g^{jl} \\ & + \Phi^\star_i \Phi_j\, g\, \big( \CR(g^{ij}) + g^{ij} \big) \\ & - \phi^\star \phi\, g\, \CR(g). \end{aligned}$$
The first term is a standard one, coming only from the continuous geometry, whereas the last comes only from the discrete geometry, and the middle one is mixed. Had we assumed the symmetry condition (\[symm\]) to hold, we would consequently have had $g=0$, and both additional terms that have their origins in the discrete geometry would not appear.
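The mechanism by which (\[symm\]) kills the discrete component can be confirmed symbolically. A minimal sympy sketch (the component names $g^{01}$, $g^{10}$ stand for an off-diagonal pair of $g^{ij}$ and are illustrative only):

```python
import sympy as sp

# Components of the metric on the generators: g^{ij} on dx^i ⊗ dx^j,
# g on chi ⊗ chi (the mixed components vanish).
g01 = sp.Symbol("g01", complex=True)
g10 = sp.Symbol("g10", complex=True)
g = sp.Symbol("g", complex=True)

# sigma-symmetry for the discrete component reads g = -g,
# which alone already forces it to vanish:
assert sp.solve(sp.Eq(g, -g), g) == [0]

# Hermiticity (g^{ij})* = g^{ji} combined with symmetry g^{ij} = g^{ji}
# forces g^{ij} to be real: conjugate(g01) = g10 = g01.
diff = sp.conjugate(g01) - g01
assert diff.subs(sp.conjugate(g01), g10).subs(g10, g01) == 0
```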
This would have profound consequences for physics, as any field theory, and gauge theory in particular, would not feel the presence of the discrete geometry (apart from the simple fact that we would have two separate copies of each field). In such a situation no Higgs-type model could be obtained from the noncommutative geometry based on the product $\R^n \times \Z_2$, and one should look for models which involve products of differential calculi that are not universal in order to obtain nontrivial results. Linear Connections ================== As the standard methods of differential geometry use the language of vector fields rather than differential forms, the translation of the concept of linear connection is a delicate problem. We might also look at the formulation of gauge theory in noncommutative geometry to guess the best definition. Let us recall that for any left-module $\CM$ over $\CA$ the covariant derivative $D$ is defined as a map $\CM \to \O{1} \tsa \CM$ such that:
$$D(a\, m) = da \tsa m + a\, D(m),$$ ł[codi]{} which could then be extended to the degree-1 operator $D: \Omega \tsa \CM \to \Omega \tsa \CM$. One could easily apply this definition to the case of linear connections (and the related covariant derivative) by replacing $\CM$ with the appropriate object, in this case the bimodule $\O{1}$. The problem starts when we begin to look at the bimodule structure of $\O{1}$ and ask how $D$ acts on $u a$, $u \in \O{1}$ and $a \in \CA$. Of course, this action is determined by the bimodule structure on $\O{1}$ and the definition (\[codi\]); however, it remains to be decided whether some extra conditions should be assumed. The only limitation is that the introduced additional restrictions should reduce to (\[codi\]) in the case of a commutative differential calculus.
Bimodule linear connection -------------------------- The proposition that one should use the bimodule isomorphism $\s$ to define such a property has been put forward by Dubois-Violette, Madore and others \[9-11\]:
$$D(\omega\, a) = \s( \omega \tsa da ) + D(\omega)\, a.$$ ł[lcsy]{} Throughout this paper we shall call connections that use (in any form) the bimodule property of $\O{1}$ [*bimodule connections*]{}. Indeed, this reduces to the standard expression in the classical case, where $\om a = a \om$ and, since $\s (\om \tsa \rho) = \rho \tsa \om$, (\[lcsy\]) is equivalent to (\[codi\]). We shall see, however, that this condition is very restrictive in the noncommutative case; in fact, as shown recently [@MAD1], in many cases of noncommutative Kaluza-Klein theories the only existing bimodule linear connections have no mixed terms. We shall discuss this later while applying the theory to our example of $\R^n \times \Z_2$. Torsion and curvature --------------------- Let us observe that (\[codi\]) is, as mentioned earlier, easily extendible to $\Omega \tsa \O{1}$ according to the rule:
$$D( u \tsa \rho ) = du \tsa \rho + (-1)^{\deg u}\, u\, D(\rho),$$
where $u \in \Omega$ and $\rho \in \O{1}$. We can then calculate the curvature $D^2$ and show that it is left-linear:
$$D^2 (u \tsa \rho) = u\, D^2( \rho ).$$
A similar extension is not possible for the right-multiplication property of the covariant derivative and, what is more important, one cannot ensure that the curvature $D^2$ is right-linear. The torsion could be defined as the following map $T: \Omega \tsa \O{1} \to \Omega$:
$$T = \pi \circ D - d \circ \pi,$$ ł[tors]{} where $\pi$ is the standard projection. From the construction it is clear that $T$ is a left-module morphism (in the case of symmetric connections it is a bimodule homomorphism). Finally, let us make some general observations on linear connections in noncommutative geometry, which will be useful later.
If $D$ and $D'$ are two linear connections, then $D-D'$ is a left-linear morphism $\Omega \tsa \O{1} \to \Omega \tsa \O{1}$ of grade 1; moreover, if they are bimodule connections (i.e. obeying (\[lcsy\])), then $D-D'$ is a bimodule morphism. If $\O{1}$ is a free bimodule and $\om_1,\ldots,\om_n$ form its basis, then a connection $D$ such that $D(\om_i)=0$ is called [*trivial*]{} in this basis. As a consequence of the previous observation, in that case every connection is the sum of this trivial connection and a left-module (or bimodule, in the case of bimodule connections) morphism of grade 1. To end this section we observe that, having a $\star$-structure on our exterior algebra, we cannot easily relate $D(u)$ to $D(u^\star)$. Let us note, however, that already in classical differential geometry it is not true that $D( \om^\star ) = D (\omega)^\star$. $\O{1}$ as a bimodule over $\Omega$ ----------------------------------- As we have shown in the previous paragraph, the use of the bimodule properties of a linear connection is rather complicated. In what follows we shall attempt to propose a solution which makes both the notation and the results simpler. The price we have to pay is the introduction of an additional structure on our differential algebra, as we shall assume that there exists a bimodule structure over $\Omega$ (treated as an algebra) on $\O{1}$. We shall call this bimodule $\CM$, assuming that the following conditions hold:\ 1. $\CM$ is generated by elements of the form $u \tsa \om$, where $u \in \Omega$ and $\om \in \O{1}$. Of course, $\O{1} \subset \CM$.\ 2. The left- and right-multiplications by the elements of $\Omega$ coincide with $\tsa$ if the element of the module is in $\O{1}$ and with $\bt$ otherwise.\ 3. $\pi: \CM \to \Omega$, defined on the generators by $\pi (u \tsa \om) = u \bt \om$, is a bimodule morphism.\ 4. There exists a $\star$-operation on $\CM$.
We shall demonstrate that such a structure exists in standard differential geometry as well as in a few examples of noncommutative geometry. First, let us notice that having defined this structure we can immediately write both rules for $D$, now seen as a map $D: \CM \to \CM$ of degree 1: $$\begin{aligned} D( u\, m) & = & du \tsa m + (-1)^{|u|}\, u\, D(m), \label{cobi1}\\ D( m\, u) & = & D(m)\, u + (-1)^{|m|}\, m\, du, \label{cobi2}\end{aligned}$$ for $m \in \CM$ and $u \in \Omega$. Now, $D^2$ is automatically a bimodule morphism! ### Examples First, we shall demonstrate that this structure exists in the standard commutative differential calculus. Define the right action of $\Omega$ on the generators of $\CM$ as follows: $$\om\, u = (-1)^{|u|}\, u\, \om;$$ then this gives a proper bimodule structure on $\CM$ and $\pi$ is a bimodule morphism. We can see that in this case (\[cobi1\]) is equivalent to (\[cobi2\]), as one would expect. Now let us turn to noncommutative geometry. For the universal calculus one can always introduce the bimodule $\CM$: as $\wedge$ is just $\tsa$, the bimodule $\CM$ can be identified with the tensor algebra of differential forms itself. For the simplest possible case of two-point geometry we have: $$\begin{aligned} D(a\, \chi ) & = & da \tsa \chi + a\, D(\chi),\\ D(\chi\, a) & = & D(\chi)\, a - \chi \tsa da, \end{aligned}$$ and we can verify that they agree with each other provided that $D(\chi) = 2 \chi \tsa \chi$, so that $D$ coincides with $d$. This result is what we could have expected: since $\pi$ is just the identity map, every torsion-free connection on the universal calculus must coincide with $d$. Next we shall discuss the product of continuous and discrete geometries with the following construction of $\CM$. The bimodule structure on $\CM$ is, for products of the forms $dx^i$, just as in the case of continuous geometry, as discussed above. Similarly, for products of $\chi$ alone, we take it as in the example of the universal calculus right above.
What we have to add is the rule of right multiplication between $dx^i$ and $\chi$: $$\begin{aligned} dx^i\, \chi & \sim & -\, \chi \tsa dx^i,\\ \chi\, dx^i & \sim & -\, dx^i \tsa \chi. \end{aligned}$$ We can now verify what (\[cobi1\]-\[cobi2\]) imply for the covariant derivative. First let us compare $D( a\, dx^i)$ with $D(dx^i\, a)$: $$\begin{aligned} D( a\, dx^i ) & = & dx^j \tsa dx^i\, (\partial_j a) + \chi \tsa dx^i\, \partial(a) + a\, D(dx^i),\\ D( dx^i\, a ) & = & D(dx^i)\, a + dx^j \tsa dx^i\, (\partial_j a) + \chi \tsa dx^i\, \partial(a), \end{aligned}$$ where in the second relation we have used the rules of right multiplication on $\CM$. By comparing the right-hand sides of these relations we immediately get that: $$D(dx^i) = \GH{i}{j}{k}\, dx^j \tsa dx^k + \alpha^i\, \chi \tsa \chi. \label{slc-1}$$ The other pair of relations is: $$\begin{aligned} D( a\, \chi ) & = & dx^i \tsa \chi\, (\partial_i a) - \chi \tsa \chi\, \partial(a) + a\, D(\chi),\\ D( \chi\, a ) & = & D(\chi)\, a + dx^i \tsa \chi\, (\partial_i a) + \chi \tsa \chi\, \partial(a), \end{aligned}$$ and by comparing the right-hand sides we get: $$D(\chi) = 2\, \chi \tsa \chi. \label{slc-2}$$ Suppose now that we demand that this connection has a vanishing torsion (\[tors\]). We have already observed that (\[slc-2\]), which equals the connection on $\Z_2$ alone, is torsion-free. For (\[slc-1\]) the vanishing of the torsion is equivalent to $\alpha^i=0$ and $\GH{i}{j}{k} = \GH{i}{k}{j}$, so in the end we obtain that the linear connection on $\R^n \times \Z_2$ splits into separate components, each operating on one factor of the product. Therefore the curvature also has this property; additionally, since in our case $D$ on the discrete space is flat, $D^2=0$, the total curvature operator has only the standard contribution coming from the continuous factor of the product. This suggests that already at this level, without even introducing the concept of a metric connection, we can be certain that for bimodule linear connections there would be no modifications to gravity coming from the effects of noncommutative geometry on $\R^n \times \Z_2$. We shall see that if we drop the requirement of the bimodule property (in either form), we can proceed with the construction, which leads to some interesting and unexpected features.
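As a quick consistency check of the statement that (\[slc-2\]) is torsion-free (our own verification, using the two-point relation $d\chi = 2\,\chi \bt \chi$ implicit in the identification of $D$ with $d$ above): $$T(\chi) = \pi\big(D(\chi)\big) - d\chi = 2\,\chi \bt \chi - 2\,\chi \bt \chi = 0.$$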
Metric linear connections ========================= In this section we shall discuss the generalisation of the idea of metric connections. The form of the definition depends on our assumptions concerning the bimodule properties of $D$. As our main task is to apply the theory to the considered example, and we have already shown that bimodule connections give no new features in the theory, we shall concentrate on connections which satisfy (\[codi\]) alone. We say that $D$ is [*metric*]{} if the following holds for all one-forms $u,v$: $$d\, g(u, v^\star) = g( D(u), v^\star) - g(u, D(v)^\star), \label{meco}$$ where we use the shorthand notation $g(u_1 \tsa u_2, v) = u_1\, g(u_2, v)$. This definition is well-defined for any $D$ and it gives a precise prescription for a metric connection in the commutative limit. The most general form of a torsion-free $D$ is: $$\begin{aligned} D(dx^\m) & = & \GH{\m}{\n}{\r}\, dx^\n \tsa dx^\r + \GT{\m}{\n}\, ( \chi \tsa dx^\n + dx^\n \tsa \chi),\\ D(\chi) & = & 2\, \chi \tsa \chi + 2\, \B2{\m}{\n}\, dx^\m \tsa dx^\n + W_\m\, ( \chi \tsa dx^\m + dx^\m \tsa \chi), \end{aligned}$$ where $\GH{\m}{\n}{\r} = \GH{\m}{\r}{\n}$ and $\B2{\m}{\n} = \B2{\n}{\m}$. Using the metric (\[met1\]-\[met4\]) and the definition (\[meco\]) we end up with the following set of relations: \_g\^ & = & g\^ + g\^,\ g\^ & = & () g\^ - (g\^), \[G-g\]\ g & = & g\^ 2 \[G-b\],\ 0 & = & g\^ W\_, \[im4\]\ \_g & = & 2 W\_g,\ g & = & 0 and, as we assume that $g^{\m\n}$ is non-degenerate, we immediately get that $W_\m=0$ and $g=const$. This simplifies the curvature $R=D^2$ and we have: R(dx\^) & = & dx\^dx\^ ( \_ - \_ + - - 2 + 2 ) dx\^\ & & + dx\^ ( - - ( ) + \_() + () ) dx\^\ & & + ( 2 - - () ) dx\^\ & & + dx\^dx\^ ( \_ - \_ + - )\ & & + dx\^ ( - - 2 () + () )\ & &\ R() & = & dx\^dx\^ ( \_2 - \_2 + 2 - 2 ) dx\^\ & & + dx\^( 2 2 - 2 - ( 2 ) ) dx\^\ & & + dx\^dx\^ ( 2 - 2 )\ Now, if we use (\[G-b\]), we may eliminate $\B2{\m}{\n}$ from the expressions for $R$. Furthermore, it will be convenient to use $\t{\m}{\n}$: $\GT{\m}{\n} = \delta^\m_\n + \t{\m}{\n}$. First, we may rewrite (\[G-g\]): () g\^ = (g\^) , or, using the inverse $g_{\m\n}$: ( g\_ ) = g\_.
ł[rel4]{} The curvature tensor, rewritten using only the $\GH{\m}{\n}{\r}$ and $\t{\m}{\n}$ variables (only in the first line we still use $\GT{\m}{\n}$), is: R(dx\^) & = & dx\^dx\^ ( \_ - \_ + - - g g\_ ( - ) ) dx\^\ & & + dx\^ ( () - ( ) + \_( ) dx\^\ & & + ( \^\_- () ) dx\^\ & & + dx\^dx\^ ( \_ - \_ + - )\ & & + dx\^ ( - \^\_+ () )\ & &\ R() & = & dx\^dx\^ ( g g\_ ) ( \_ - \_ + - ) dx\^\ & & + dx\^ ( g g\_ ) ( \^\_- () ) dx\^\ & & + dx\^dx\^ ( g g\_ ) ( - ) and we see that some expressions repeat themselves in the structure of the curvature tensor. The Ricci tensor $R_c$ is the trace of the curvature tensor: R\_c & = & dx\^dx\^ ( \_ - \_ + - - g g\_ ( - ) )\ & & - dx\^dx\^ ( g g\_ ) ( \^\_- () )\ & & + dx\^ ( () - ( ) + \_( )\ & & + dx\^ ( \_ - \_ + - )\ & & + ( - \^\_+ () ) and finally the curvature scalar, which is simply the value of the metric on the Ricci tensor: R = g\^ ( \_ - \_ + - ) - g ( - ), ł[scalar]{} This result is an interesting one: we obtain an action which is a sum of two standard Hilbert-Einstein actions for gravity (one for $g^{\mu\nu}(x,+)$ and the other for $g^{\mu\nu}(x,-)$) as well as additional terms, which depend only on the two metric tensors (no derivatives!) and the field $\t{\k}{\b}$ (satisfying (\[rel4\])). This suggests that such a term plays the role of a constraint, which enforces relations between $g^{\mu\nu}(x,+)$ and $g^{\mu\nu}(x,-)$. In the simplest possible case, when they are equal to each other, it reduces to a cosmological constant term. This would recover the results obtained in [@KAL; @KAS] using a different approach, based on the Dirac operator and the Wodzicki residue. A further and more detailed discussion of the properties of the obtained model of gravity on $\R^n \times \Z_2$ and example solutions shall be presented elsewhere.
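Schematically (this is our paraphrase of the conclusion above, not a formula appearing in the derivation), the resulting action has the structure $$S[g_+,g_-,t] \;=\; \int_{{\bf R}^n} \sqrt{g_+}\; R[g_+] \;+\; \int_{{\bf R}^n} \sqrt{g_-}\; R[g_-] \;+\; \int_{{\bf R}^n} V\!\big(g_+, g_-, t\big),$$ where the potential-like term $V$ contains no derivatives of the metrics and reduces to a cosmological constant term when $g_+ = g_-$.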
Conclusions =========== In this paper we have presented a few schemes which have been considered as generalisations of linear connections (and related objects) in noncommutative geometry. Our main aim was to apply these methods to a simple example of a noncommutative Kaluza-Klein type model, being the product of continuous ($\R^n$) and discrete ($\Z_2$) geometries. Our choice has been motivated by the interpretation of the electroweak part of the Standard Model, in which such a geometry plays an important role, providing an explanation of the origin of the Higgs field. We have found that most concepts are easily translated from standard differential geometry to the noncommutative case and give reasonable results in our example. However, some others, especially the postulate of symmetry imposed on the metric and the bimodule properties of linear connections, can cause rather significant problems. In particular, in our example each of these requirements has profound consequences. In the first case, it eliminates all discrete degrees of freedom in the field theory, whereas in the second case, it gives no new features of gravity in this setup. Though the latter may be considered an acceptable result, we cannot agree with the former, as we already know how the theory should look [@NCG1; @JA2]. Therefore we definitely cannot accept the generalisation of the symmetric metric as discussed here (we still require that it is hermitian), being aware that the other result might also suggest that the second postulate (bimodule linear connections) goes too far. In our considerations we have also proposed another version of this postulate, which makes it more natural. One of its main advantages is that $R$ becomes a bimodule morphism.
On the other hand, we have provided a derivation of a gravity-type theory for our example of a product geometry, based on the assumption that only left-linearity is important for linear connections, obtaining quite a feasible result.\  \ Of course, it still remains open whether the accepted methods are proper for noncommutative geometry, as they are based on what we have learned from standard differential geometry. The main problem is that a few features which coincide in the commutative case are different if we turn on noncommutativity. One has to choose which property is appropriate in such a situation, and different choices may give completely different results. It is also not clear why the standard methods must be followed in the noncommutative case; for instance, we might ask why we have to set the torsion to zero. We have demonstrated in this paper some good points and problems of two methods as applied to a simple and realistic model. The results that we have found are important for the determination of some fundamental concepts of noncommutative geometry; however, they have to be verified using other methods, so that they can be accepted or properly generalised for noncommutative geometry, which remains a big task for future research.  \  \ [**Acknowledgements:**]{} The author would like to thank J.Madore for helpful discussions. [10]{} A.Connes, J.Lott, [*Particle Models and Noncommutative Geometry*]{}, Nucl.Phys. (Proc. Suppl.) B18 (1990) 29-47. A.Connes, [*Noncommutative Geometry*]{}, Academic Press, to be published. A.Connes, [*Noncommutative Geometry in Physics*]{}, preprint IHES/M/93/32 (1993). J.Madore, [*An Introduction to Noncommutative Differential Geometry and its Physical Applications*]{}, to appear. J.C.Várilly, J.M.Gracia-Bondía, [*Connes' Noncommutative Geometry and the Standard Model*]{}, J.Geom.Phys. 12 (1993) 223-301. A.H.Chamseddine, G.Felder, J.Fröhlich, [*Gravity in Non-Commutative Geometry*]{}, Comm.Math.Phys.
155 (1993) 205-207. A.H.Chamseddine, J.Fröhlich, preprint ZU-TH-18/1993. G.Landi et al, [*Gravity and Electromagnetism in Noncommutative Geometry*]{}, Phys.Lett. B326 (1994) 45. A.Kehagias, J.Madore, J.Mourad, G.Zoupanos, [*Linear Connections on Extended Space-Time*]{}, preprint LMPM 95-01. J.Madore, T.Masson, J.Mourad, [*Linear connections on matrix geometries*]{}, preprint LPTHE-ORSAY 94/96. M.Dubois-Violette, J.Madore, T.Masson, J.Mourad, [*Linear connections on the quantum plane*]{}, hep-th/9410199. A.Sitarz, [*Gravity from Noncommutative Geometry*]{}, Class.Quant.Grav. 11 (1994) 2127. W.Kalau, M.Walze, [*Gravity, Non-Commutative Geometry and the Wodzicki Residue*]{}, preprint MZ-TH/93-38. D.Kastler, [*The Dirac Operator and Gravitation*]{}, preprint CPT-93/P.2970. A.Sitarz, [*Higgs Mass and Noncommutative Geometry*]{}, Phys.Lett. B308 (1993) 311. [^1]: Partially supported by KBN grant 2P 302 103 06
--- abstract: 'We show that the virial theorem provides a useful simple tool for approximating nonlinear problems. In particular we consider conservative nonlinear oscillators and a bifurcation problem. In the former case we obtain the same main result derived earlier from the expansion in Chebyshev polynomials.' address: - ' Facultad de Ciencias, Universidad de Colima, Bernal Díaz del Castillo 340, Colima, Colima, Mexico' - ' INIFTA (UNLP, CCT La Plata-CONICET), División Química Teórica, Blvd. 113 S/N, Sucursal 4, Casilla de Correo 16, 1900 La Plata, Argentina' author: - Paolo Amore and Francisco M Fernández title: The virial theorem for nonlinear problems --- Introduction \[sec:Intro\] ========================== In a recent paper Beléndez et al[@BAFP09] showed that the widely used small–amplitude approximation cannot always be successfully applied to nonlinear oscillators. To overcome this difficulty the authors proposed the expansion of the nonlinear force in terms of Chebyshev polynomials. This alternative linearization of nonlinear problems proved to be remarkably more accurate and efficient than the straightforward small–amplitude approach. Besides, the Chebyshev series applies even to such difficult cases where the Taylor series fails[@BAFP09]. The purpose of the present article is to discuss an alternative approach to nonlinear problems based on the well–known virial theorem[@G80]. In Sec. \[sec:oscillators\] we outline the main results of Beléndez et al[@BAFP09] for conservative nonlinear oscillators. In Sec. \[sec:VT\] we develop the virial theorem, apply it to conservative nonlinear oscillators, and compare its results with those obtained by Beléndez et al[@BAFP09]. In Sec. \[sec:bifurcation\] we apply the virial theorem to a nonlinear problem that exhibits bifurcation and compare its results with the exact ones and with those produced by the small–amplitude approximation. In Sec.
\[sec:conclusions\] we discuss the main results of the paper and draw conclusions. Conservative nonlinear oscillators \[sec:oscillators\] ====================================================== Beléndez et al[@BAFP09] considered nonlinear conservative autonomous systems given by the second–order differential equation $$\ddot{x}+f(x)=0 \label{eq:dif_eq}$$ with the boundary conditions $x(0)=A$, $\dot{x}(0)=0$. Here a dot indicates differentiation with respect to $t$. In particular, Beléndez et al[@BAFP09] restricted themselves to odd functions $f(-x)=-f(x)$ that satisfy $xf>0$. The approach proposed by Beléndez et al[@BAFP09] consists in the expansion of the force in a series of Chebyshev polynomials of the first kind $T_{n}(z)$: $$f(x)=\sum_{n=0}^{\infty }b_{2n+1}(A)T_{2n+1}(y) \label{eq:f_Cheby}$$ where $y=x/A$. These polynomials are given by the recurrence relation $$\begin{aligned} T_{0}(z) &=&1 \nonumber \\ T_{1}(z) &=&z \nonumber \\ T_{n+1}(z) &=&2zT_{n}(z)-T_{n-1}(z) \label{eq:Cheby_rec}\end{aligned}$$ and are orthogonal in $-1\leq z\leq 1$ with the weight function $w(z)=(1-z^{2})^{-1/2}$: $$\int_{-1}^{1}(1-z^{2})^{-1/2}T_{m}(z)T_{n}(z)\,dz=\frac{\pi }{2}(1+\delta _{m0})\delta _{mn} \label{eq:orthogonal}$$ Therefore, the coefficients of the expansion (\[eq:f\_Cheby\]) are given by $$b_{2n+1}(A)=\frac{2}{\pi }\int_{-1}^{1}(1-y^{2})^{-1/2}T_{2n+1}(y)f(Ay)\,dy \label{eq:bn}$$ Notice that there is a misprint in the weight function shown by Beléndez et al[@BAFP09]. If we keep only the first term in the expansion (\[eq:f\_Cheby\]) the differential equation (\[eq:dif\_eq\]) becomes that for a harmonic oscillator $$\ddot{x}+\frac{b_{1}(A)}{A}x=0 \label{eq:dif_eq_lin}$$ with a frequency $$\omega =\sqrt{\frac{b_{1}(A)}{A}} \label{eq:omega_app}$$ that depends on the amplitude $A$. This expression proves to be remarkably accurate for many problems[@BAFP09] in spite of its simplicity.
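As a concrete check (our own illustration, not taken from the paper), consider the pure cubic oscillator $f(x)=x^{3}$, for which (\[eq:bn\]) gives $b_{1}(A)=3A^{3}/4$ and hence $\omega=\sqrt{3}\,A/2\approx 0.866\,A$, while the exact frequency is $\approx 0.847\,A$, an error of about $2\%$. A short script, assuming NumPy/SciPy, reproduces both numbers:

```python
import numpy as np
from scipy.integrate import quad

def b1(f, A):
    # b_1(A) = (2/pi) int_{-1}^{1} (1-y^2)^{-1/2} T_1(y) f(A y) dy, T_1(y)=y,
    # computed with the substitution y = cos(t) to avoid endpoint singularities.
    val, _ = quad(lambda t: np.cos(t) * f(A * np.cos(t)), 0.0, np.pi)
    return 2.0 * val / np.pi

A = 1.0
f = lambda x: x**3                    # pure cubic oscillator (our test case)
omega_cheb = np.sqrt(b1(f, A) / A)    # amplitude-dependent frequency, eq. (omega_app)

# Exact frequency of x'' + x^3 = 0: the period is T = (4*sqrt(2)/A) int_0^1 dy/sqrt(1-y^4)
I, _ = quad(lambda y: 1.0 / np.sqrt(1.0 - y**4), 0.0, 1.0)
omega_exact = np.pi * A / (2.0 * np.sqrt(2.0) * I)

print(omega_cheb, omega_exact)   # ~0.8660 vs ~0.8472
```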
The virial theorem \[sec:VT\] ============================= Here we consider the same differential equation (\[eq:dif\_eq\]) with the more general boundary conditions $$x(b)\dot{x}(b)-x(a)\dot{x}(a)=0 \label{eq:BC_gen}$$ If we integrate the equation $$\frac{d}{dt}\left(x^{n}\dot{x}\right)=nx^{n-1}\dot{x}^{2}+x^{n}\ddot{x}=nx^{n-1}\dot{x}^{2}-x^{n}f$$ we obtain $$n\int_{a}^{b}x^{n-1}\dot{x}^{2}\,dt=\int_{a}^{b}x^{n}f\,dt+x(b)^{n}\dot{x}(b)-x(a)^{n}\dot{x}(a) \label{eq:HT}$$ In particular, when $n=1$ we have $$\int_{a}^{b}\dot{x}^{2}\,dt=\int_{a}^{b}xf\,dt \label{eq:VT}$$ because of the boundary conditions (\[eq:BC\_gen\]). We now apply this general expression to the oscillators studied by Beléndez et al[@BAFP09] that are periodic of period $\tau $. In this case the kinetic energy is $$K=\frac{\dot{x}^{2}}{2} \label{eq:K}$$ and if we choose $a=0$ and $b=\tau $ equation (\[eq:VT\]) becomes the well–known virial theorem[@G80] $$2\bar{K}=\overline{xf} \label{eq:VT_part}$$ where the expectation values are defined as $$\bar{F}=\frac{1}{\tau }\int_{0}^{\tau }F\,dt \label{eq:exp_val}$$ The virial theorem has been known for a long time[@G80]; its name comes from the fact that $xf$ is known as the virial of the forces in the mechanical system. This theorem reveals the balance between the kinetic and potential energies[@G80]. The exact trajectory $x(t)$ satisfies equation (\[eq:VT\_part\]). If we propose an approximate trajectory of the form $$x_{app}(t)=A\cos (\omega t) \label{eq:x_app}$$ where $\omega =2\pi /\tau $ is the frequency of the oscillator, then it is reasonable to fix the approximate frequency so that $x_{app}(t)$ satisfies the virial theorem (\[eq:VT\_part\]). If we substitute equation (\[eq:x\_app\]) into equation (\[eq:VT\_part\]) we obtain $$\pi \omega A^{2}=2\int_{0}^{\tau /2}xf\,dt=\frac{2A}{\omega }\int_{-1}^{1}\frac{yf(Ay)}{\sqrt{1-y^{2}}}\,dy$$ by means of the change of variables $y=\cos (\omega t)$.
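The identity (\[eq:VT\]) can also be verified directly on a numerically computed orbit; the following sketch (our own check, assuming SciPy) integrates the cubic oscillator $\ddot{x}+x^{3}=0$ over one exact period and compares both sides:

```python
import numpy as np
from scipy.integrate import solve_ivp, quad, trapezoid

# Check int_0^tau xdot^2 dt = int_0^tau x f(x) dt on the orbit of
# x'' + x^3 = 0 with x(0) = A, xdot(0) = 0 (here f(x) = x^3).
A = 1.0
I, _ = quad(lambda y: 1.0 / np.sqrt(1.0 - y**4), 0.0, 1.0)
tau = 4.0 * np.sqrt(2.0) * I / A      # exact period of the cubic oscillator

sol = solve_ivp(lambda t, s: [s[1], -s[0]**3], (0.0, tau), [A, 0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)
ts = np.linspace(0.0, tau, 20001)
x, v = sol.sol(ts)

lhs = trapezoid(v**2, ts)   # int xdot^2 dt
rhs = trapezoid(x**4, ts)   # int x f(x) dt
print(lhs, rhs)             # equal up to discretization error
```

For this potential the virial theorem gives $\bar{K}=2\bar{V}$, so both integrals equal $\tau/3$ when $A=1$.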
This is exactly the equation for the frequency (\[eq:omega\_app\]) derived by Beléndez et al[@BAFP09]. We appreciate that both the virial theorem and the first term of the Chebyshev expansion lead to the same approximate frequency. Bifurcation \[sec:bifurcation\] =============================== Equation (\[eq:VT\]) is sufficiently general for the treatment of a wide variety of interesting nonlinear problems of the form (\[eq:dif\_eq\]). In this section we consider the Bratu equation $$u^{\prime \prime }(x)+\lambda e^{u(x)}=0,\,u(0)=u(1)=0 \label{eq:Bratu}$$ that appears in simple models for the study of spontaneous explosion due to internal heating in combustible materials[@VA93; @MO05]. It is also interesting for another reason: it is a simple strongly nonlinear problem that can be solved exactly. Therefore, it is not surprising that it has become a useful benchmark for testing approximate methods[@MO05; @W89; @M98; @B03; @B04]. It is well–known that the solution to the Bratu equation is[@B03] $$u(x)=-2\ln \left\{ \frac{\cosh \left[ \theta (x-1/2)\right] }{\cosh (\theta /2)}\right\} \label{eq:Bratu_exact}$$ where $\theta $ is a root of $$\lambda =\frac{2\theta ^{2}}{\cosh (\theta /2)^{2}} \label{eq:lambda(theta)}$$ This equation exhibits two solutions when $\lambda <\lambda _{c}$, only one when $\lambda =\lambda _{c}$, and none when $\lambda >\lambda _{c}$, where the critical $\lambda $–value $\lambda _{c}$ is the maximum of $\lambda (\theta )$. We easily obtain it from the root of $d\lambda (\theta )/d\theta =0$ that is given by $$e^{\theta _{c}}(\theta _{c}-2)-\theta _{c}-2=0 \label{eq:theta_c}$$ The exact critical parameters are $\theta _{c}=2.399357280$ and $\lambda _{c}=3.513830719$. The slope at origin $$u^{\prime }(0)=\frac{2\theta (e^{\theta }-1)}{(e^{\theta }+1)} \label{eq:Bratu_u'(0)_exact}$$ displays a bifurcation diagram as a function of $\lambda $ as shown in Fig.
\[fig:Bratu\] (it is not difficult to obtain it by means of a parametric plot using equations (\[eq:lambda(theta)\]) and (\[eq:Bratu\_u’(0)\_exact\])). For the critical value of $\lambda $ we have $u^{\prime }(0)_{c}=4$. In what follows we show that the virial theorem is suitable for estimating the form of this bifurcation diagram. We simply have to introduce a trial function $u(x)$, which satisfies the appropriate boundary conditions, into the expression for the “virial theorem” $$\int_{0}^{1}u^{\prime\, 2}\,dx-\lambda \int_{0}^{1}ue^{u}\,dx=0 \label{eq:Bratu_VT}$$ Notice that the exact solution satisfies $u^{\prime \prime }(x)<0$ for all $0<x<1$; therefore $u(x)$ is positive and does not have zeros between the end points. This conclusion will guide us towards the choice of the trial function. One of the simplest functions that meets the criteria just indicated is $$u(x)=Ax(1-x) \label{eq:Bratu_trial1}$$ A straightforward calculation shows that $$\lambda =\frac{4A^{5/2}}{3\left[ \sqrt{\pi }(A-2)e^{A/4}\mathop{\rm erf}\left( \sqrt{A}/2\right) +2\sqrt{A}\right] } \label{eq:Bratu_lambda_var1}$$ and the slope at origin is $u^{\prime }(0)=A$, so that we can easily plot $u^{\prime }(0)$ vs $\lambda $ parametrically. Fig. \[fig:Bratu\] shows that this expression is suitable for the lower branch (small $\lambda $) but it is not so accurate for the upper one (large $\lambda $). However, it provides a reasonable description of the bifurcation diagram and the critical parameters $\lambda _{c}=3.569086042$ and $u^{\prime }(0)_{c}=4.727715383$ are remarkably close to the exact ones. Another simple variational function that meets the required criteria is $$u(x)=A\sin (\pi x) \label{eq:Bratu_trial_sin}$$ that leads to $$\lambda =\frac{A\pi ^{3}}{2\left\{ 2+\pi \left[ I_{1}(A)+L_{1}(A)\right] \right\} } \label{eq:Bratu_lambda_2}$$ where $I_{\nu }(z)$ and $L_{\nu }(z)$ stand for the modified Bessel and Struve functions[@AS72], respectively. In this case $u^{\prime }(0)=\pi A $ and Fig.
\[fig:Bratu\] shows that this expression is slightly less accurate than the preceding one for the lower branch and certainly more accurate for the upper one. Besides, this trial function yields better critical parameters: $\lambda _{c}=3.509329130$ and $u^{\prime }(0)_{c}=3.756549365$. The Bratu equation is also suitable for revealing the limitation of the linearization by means of an expansion in a Taylor series. If we neglect the nonlinear terms in the expansion: $e^{u}=1+u+\ldots $ then we can solve the resulting differential equation exactly and obtain $$u(x)=\cos \left( \sqrt{\lambda }x\right) +\tan \left( \frac{\sqrt{\lambda }}{2}\right) \sin \left( \sqrt{\lambda }x\right) -1 \label{eq:Bratu_u_expan}$$ In this case the slope at origin is $$u^{\prime }(0)=\sqrt{\lambda }\tan \left( \frac{\sqrt{\lambda }}{2}\right) \label{eq:Bratu_slope_expan}$$ Fig. \[fig:Bratu\] shows that this approach based on the Taylor expansion is unable to reproduce the upper branch of the bifurcation diagram. The explanation is quite simple: the solution for the lower branch is considerably smaller than the one for the upper branch. Therefore, an expansion based on small values of $u$ will necessarily produce the former and fail on the latter. On the other hand, an expansion in appropriate orthogonal polynomials (or the virial theorem) provides an acceptable description of both branches of the bifurcation diagram. Conclusions \[sec:conclusions\] =============================== We have shown that the approach derived by Beléndez et al[@BAFP09] from the first term of the expansion in Chebyshev polynomials can also be obtained by means of the virial theorem. It is clear that we can introduce the approximation in two different ways: as the first term of a systematic numerical method or as the requirement posed by the virial theorem with a direct physical interpretation. 
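Returning to the Bratu example of the previous section, its critical parameters are easy to reproduce numerically. The following sketch (our own check, assuming SciPy) solves (\[eq:theta\_c\]) for $\theta_c$ and also maximizes the virial estimate $\lambda(A)=\int_0^1 u^{\prime\,2}dx\big/\int_0^1 u e^u dx$ obtained from the trial function $u(x)=Ax(1-x)$:

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar
from scipy.integrate import quad

# Exact critical point: theta_c solves e^theta (theta-2) - theta - 2 = 0,
# then lambda_c = 2 theta_c^2 / cosh(theta_c/2)^2 and u'(0)_c follows.
theta_c = brentq(lambda t: np.exp(t) * (t - 2.0) - t - 2.0, 0.1, 10.0)
lam_c = 2.0 * theta_c**2 / np.cosh(theta_c / 2.0)**2
slope_c = 2.0 * theta_c * (np.exp(theta_c) - 1.0) / (np.exp(theta_c) + 1.0)

# Virial estimate with u = A x (1 - x): lambda(A) = int u'^2 / int u e^u,
# maximized over the amplitude A.
def lam(A):
    num, _ = quad(lambda x: (A * (1.0 - 2.0 * x))**2, 0.0, 1.0)
    den, _ = quad(lambda x: A * x * (1.0 - x) * np.exp(A * x * (1.0 - x)), 0.0, 1.0)
    return num / den

res = minimize_scalar(lambda A: -lam(A), bounds=(1.0, 10.0), method='bounded')
A_c = res.x
print(theta_c, lam_c, slope_c, lam(A_c), A_c)
```

The output reproduces $\theta_c\approx 2.39936$, $\lambda_c\approx 3.51383$, $u'(0)_c=4$, and the trial-function values $\lambda_c\approx 3.56909$, $u'(0)_c=A_c\approx 4.72772$ quoted above.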
One or the other point of view (or perhaps one after the other) may be useful for teaching an undergraduate course on classical mechanics. One can easily derive and discuss the virial theorem for mechanical problems and then generalize it for the treatment of arbitrary ordinary nonlinear differential equations. One advantage of the approach based on the virial theorem is that it is also suitable for the treatment of quantum–mechanical problems[@FC87]. The virial theorem provides us with a quite general expression that may be useful in the study of many nonlinear problems. As an example we have shown that the approach is suitable for the treatment of the well–known Bratu equation that appears in simple models of explosion due to internal heating in combustible materials[@VA93; @MO05; @W89; @M98; @B03; @B04]. In this case we have been able to try two different approximate solutions, which would probably be more difficult if one merely resorted to an expansion in orthogonal polynomials. [99]{} Beléndez A, Álvarez M L, Fernández E, and Pascual I 2009 *Eur. J. Phys.* **30** 259. Goldstein H 1980 *Classical Mechanics* (Addison-Wesley, Reading, MA). Vázquez-Espí C and Liñán A 1993 *SIAM J. Appl. Math.* **53** 1567. Makinde O D and Osalusi E 2005 *Rom. Journ. Phys.* **50** 621. Wanelik K 1989 *J. Math. Phys.* **30** 1707. McGough J S 1998 *Appl. Math. Comput.* **89** 225. Buckmire R, On exact and numerical solutions of the one-dimensional planar Bratu problem, http://faculty.oxy.edu/ron/research/bratu/bratu.pdf Buckmire R 2004 *Num. Meth. Partial Diff. Eq.* **20** 327. Abramowitz M and Stegun I A 1972 *Handbook of Mathematical Functions* (Dover, New York). Fernández F M and Castro E A 1987 *Hypervirial theorems* (Springer, Berlin, Heidelberg, New York, London, Paris, Tokyo).
![Bifurcation diagram for the slope at origin $u^\prime (0)$ in terms of $\lambda$ obtained by means of the exact expression (solid line), $u(x)=Ax(1-x)$ (dashed line), $u(x)=A\sin(\pi x)$ (dots) and Taylor linearization (circles)[]{data-label="fig:Bratu"}](BRATU3.eps){width="9cm"}
--- abstract: 'We establish the local existence and the uniqueness of solutions of the heat equation with a nonlinear boundary condition for the initial data in uniformly local $L^r$ spaces. Furthermore, we study the sharp lower estimates of the blow-up time of the solutions with the initial data $\lambda\psi$ as $\lambda\to 0$ or $\lambda\to\infty$ and the lower blow-up estimates of the solutions.' author: - | Kazuhiro Ishige and Ryuichi Sato\ Mathematical Institute, Tohoku University\ Aoba, Sendai 980-8578, Japan title: | Heat equation with a nonlinear boundary condition\ and uniformly local $L^r$ spaces --- Introduction ============ This paper is concerned with the heat equation with a nonlinear boundary condition, $$\left\{ \begin{array}{ll} \partial_t u=\Delta u, & x\in\Omega,\,t>0,\vspace{3pt}\\ \nabla u\cdot\nu(x)=|u|^{p-1}u,\qquad &x\in\partial\Omega,\,\,t>0,\vspace{3pt}\\ u(x,0)=\varphi(x), & x\in\Omega, \end{array} \right. \label{eq:1.1}$$ where $N\ge 1$, $p>1$, $\Omega$ is a smooth domain in ${\bf R}^N$, $\partial_t=\partial/\partial t$ and $\nu=\nu(x)$ is the outer unit normal vector to $\partial\Omega$. For any $\varphi\in BUC(\Omega)$, problem (\[eq:1.1\]) has a unique solution $$u\in C^{2,1}(\Omega\times(0,T])\,\cap\,C^{1,0}(\overline{\Omega}\times(0,T])\,\cap\, BUC(\overline{\Omega}\times[0,T])$$ for some $T>0$ and the maximal existence time $T(\varphi)$ of the solution can be defined. If $T(\varphi)<\infty$, then $$\limsup_{t\to\,T(\varphi)}\|u(t)\|_{L^\infty(\Omega)}=\infty$$ and we call $T(\varphi)$ the blow-up time of the solution $u$. Problem (\[eq:1.1\]) has been studied in many papers from various points of view (see e.g. [@CF01]–[@DB01], [@FR]–[@GL], [@GH]–[@HM], [@IK], [@K], [@QS2] and references therein), while there are few results related to the dependence of the blow-up time on the initial function, even in the case $\Omega={\bf R}^N_+$.
We remark that the blow-up time for problem (\[eq:1.1\]) cannot be chosen uniform for all initial functions lying in a bounded set of $L^r({\bf R}^N_+)$ with $1\le r\le N(p-1)$. Indeed, similarly to [@QS Remark 15.4 (i)], for any solution $u$ blowing up at $t=T<\infty$ and $\mu>0$, $$\label{eq:1.2} u_\mu(x,t):=\mu^{1/(p-1)}u(\mu x,\mu^2 t)$$ is a solution of (\[eq:1.1\]) blowing up at $t=\mu^{-2}T$ while $$\|u_\mu(0)\|_{L^r({\bf R}^N_+)} =\mu^{\frac{1}{p-1}-\frac{N}{r}}\|\varphi\|_{L^r({\bf R}^N_+)} \le\|\varphi\|_{L^r({\bf R}^N_+)}$$ for any $\mu\ge 1$. For $1\le r<\infty$ and $\rho>0$, let $L^r_{uloc,\rho}(\Omega)$ be the uniformly local $L^r$ space in $\Omega$ equipped with the norm $$||f||_{r,\rho}:=\sup_{x\in\overline{\Omega}}\,\left(\int_{\Omega\,\cap\,B(x,\rho)}|f(y)|^rdy\right)^{1/r}.$$ We denote by ${\cal L}^r_{uloc,\rho}(\Omega)$ the completion of bounded uniformly continuous functions in $\Omega$ with respect to the norm $\|\cdot\|_{r,\rho}$, that is, $${\cal L}^r_{uloc,\rho}(\Omega):=\overline{BUC(\Omega)}^{\,\|\,\cdot\,\|_{r,\rho}}.$$ We set $L^\infty_{uloc,\rho}(\Omega)=L^\infty(\Omega)$ and ${\cal L}^\infty_{uloc,\rho}(\Omega)=BUC(\Omega)$. In this paper we prove the local existence and the uniqueness of the solutions of problem (\[eq:1.1\]) with initial functions in ${\cal L}^r_{uloc,\rho}(\Omega)$, and study the dependence of the blow-up time on the initial functions. As an application of the main results of this paper, we study the asymptotic behavior of the blow-up time $T(\varphi)$ with $\varphi=\lambda\psi$ as $\lambda\to 0$ or $\lambda\to\infty$ and show the validity of our arguments. Furthermore, we obtain a lower estimate of the blow-up rate of the solutions (see Section 5). Throughout this paper, following [@QS Section 1], we assume that $\Omega\subset{\bf R}^N$ is a uniformly regular domain of class $C^1$.
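The scaling computation above is easy to check numerically; the sketch below (our own illustration, assuming SciPy, with the sample profile $\varphi(x)=e^{-x}$ on the half-line, $N=1$) confirms the exponent $\frac{1}{p-1}-\frac{N}{r}$:

```python
import numpy as np
from scipy.integrate import quad

# Verify ||u_mu(0)||_{L^r} = mu^{1/(p-1) - N/r} ||phi||_{L^r} on (0, infinity)
# for the sample profile phi(x) = e^{-x} (our choice; any profile works).
N, p, r = 1, 3, 1
phi = lambda x: np.exp(-x)

def Lr_norm(f, r):
    val, _ = quad(lambda x: abs(f(x))**r, 0.0, np.inf)
    return val**(1.0 / r)

mu = 4.0
u_mu0 = lambda x: mu**(1.0 / (p - 1)) * phi(mu * x)   # u_mu(x, 0)
ratio = Lr_norm(u_mu0, r) / Lr_norm(phi, r)
print(ratio, mu**(1.0 / (p - 1) - N / r))   # both equal 0.5 for mu = 4
```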
For any $x\in{\bf R}^N$ and $\rho>0$, define $$B(x,\rho):=\{y\in{\bf R}^N:|x-y|<\rho\},\,\,\, \Omega(x,\rho):=\Omega\,\cap\,B(x,\rho),\,\,\, \partial\Omega(x,\rho):=\partial\Omega\,\cap\,B(x,\rho).$$ By the trace inequality for $W^{1,1}(\Omega)$-functions and the Gagliardo-Nirenberg inequality we can find $\rho_*\in(0,\infty]$ with the following properties (see Lemma \[Lemma:2.2\]). - There exists a positive constant $c_1$ such that $$\label{eq:1.3} \int_{\partial\Omega(x,\rho)}|v|\,d\sigma\le c_1\int_{\Omega(x,\rho)}|\nabla v|\,dy$$ for all $v\in C_0^1(B(x,\rho))$, $x\in\overline{\Omega}$ and $0<\rho<\rho_*$. - Let $1\le \alpha$, $\beta\le\infty$ and $\sigma\in[0,1]$ be such that $$\label{eq:1.4} \frac{1}{\alpha}=\sigma\left(\frac{1}{2}-\frac{1}{N}\right)+(1-\sigma)\frac{1}{\beta}.$$ Assume, if $N\ge 2$, that $\alpha\not=\infty$ or $N\not=2$. Then there exists a constant $c_2$ such that $$\label{eq:1.5} \|v\|_{L^\alpha(\Omega(x,\rho))}\le c_2\|v\|_{L^\beta(\Omega(x,\rho))}^{1-\sigma}\|\nabla v\|_{L^2(\Omega(x,\rho))}^\sigma$$ for all $v\in C^1_0(B(x,\rho))$, $x\in\overline{\Omega}$ and $0<\rho<\rho_*$. We remark that, in the case $$\Omega=\{(x',x_N)\in{\bf R}^N\,:\,x_N>\Phi(x')\},$$ where $N\ge 2$ and $\Phi\in C^1({\bf R}^{N-1})$ with $\|\nabla\Phi\|_{L^\infty({\bf R}^{N-1})}<\infty$, inequalities (\[eq:1.3\]) and (\[eq:1.5\]) hold with $\rho_*=\infty$ (see Lemma \[Lemma:2.2\]). Inequalities (\[eq:1.3\]) and (\[eq:1.5\]) are used to treat the nonlinear boundary condition. Next we state the definition of the solution of (\[eq:1.1\]). \[Definition:1.1\] Let $0<T\le\infty$ and $1\le r<\infty$. Let $u$ be a continuous function in $\overline\Omega\times(0,T]$.
We say that $u$ is a $L^r_{uloc}(\Omega)$-solution of in $\Omega\times[0,T]$ if - $u\in L^\infty(\tau,T:L^\infty(\Omega))\cap L^2(\tau,T:W^{1,2}(\Omega\cap B(0,R)))$ for any $\tau\in(0,T)$ and $R>0$, - $u\in C([0,T):L^r_{uloc,\rho}(\Omega))$ with $\displaystyle{\lim_{t\to 0}}\,\|u(t)-\varphi\|_{r,\rho}=0$ for some $\rho>0$, - $u$ satisfies $$\label{eq:1.6} \int_0^T\int_\Omega \left\{-u\partial_t\phi+\nabla u\cdot\nabla\phi\right\}\,dyds=\int_0^T\int_{\partial\Omega}|u|^{p-1}u\phi\,d\sigma ds$$ for all $\phi\in C_0^\infty({\bf R}^N\times(0,T))$. Here $d\sigma$ is the surface measure on $\partial\Omega$. Furthermore, for any continuous function $u$ in $\overline{\Omega}\times(0,T)$, we say that $u$ is a $L^r_{uloc}(\Omega)$-solution of in $\Omega\times[0,T)$ if $u$ is a $L^r_{uloc}(\Omega)$-solution of in $\Omega\times[0,\eta]$ for any $\eta\in(0,T)$. We remark the following for any $\rho$, $\rho'\in(0,\infty)$: - $f\in L^r_{uloc,\rho}(\Omega)$ is equivalent to $f\in L^r_{uloc,\rho'}(\Omega)$; - $u\in C([0,T]:L^r_{uloc,\rho}(\Omega))$ is equivalent to $u\in C([0,T]:L^r_{uloc,\rho'}(\Omega))$. These follow from property (i) in Section 2. Now we are ready to state the main results of this paper. Let $p_*=1+1/N$. \[Theorem:1.1\] Let $N\ge 1$ and $\Omega\subset{\bf R}^N$ be a uniformly regular domain of class $C^1$. Let $\rho_*$ satisfy and . Then, for any $1\le r<\infty$ with $$\label{eq:1.7} \left\{ \begin{array}{ll} r\ge N(p-1) & \mbox{if}\quad p>p_*,\vspace{3pt}\\ r>1 & \mbox{if}\quad p=p_*,\vspace{3pt}\\ r\ge 1 & \mbox{if}\quad 1<p<p_*, \end{array} \right.$$ there exists a positive constant $\gamma_1$ such that, for any $\varphi\in {\cal L}^r_{uloc,\rho}(\Omega)$ with $$\label{eq:1.8} \rho^{\frac{1}{p-1}-\frac{N}{r}}\|\varphi\|_{r,\rho}\le\gamma_1$$ for some $\rho\in(0,\rho_*/2)$, problem  possesses a $L^r_{uloc}(\Omega)$-solution $u$ of in $\Omega\times[0,\mu\rho^2]$ satisfying $$\begin{aligned} \label{eq:1.9} & &\!\!\!\!\!\! 
\sup_{0<t<\mu\rho^2}\,\|u(t)\|_{r,\rho}\le C\|\varphi\|_{r,\rho},\\ & &\!\!\!\!\!\! \sup_{0<t<\mu\rho^2}\,t^{\frac{N}{2r}}\|u(t)\|_{L^\infty(\Omega)}\le C\|\varphi\|_{r,\rho}. \label{eq:1.10} \end{aligned}$$ Here $C$ and $\mu$ are constants depending only on $N$, $\Omega$, $p$ and $r$. \[Theorem:1.2\] Assume the same conditions as in Theorem [\[Theorem:1.1\]]{}. Let $v$ and $w$ be $L^r_{uloc}(\Omega)$-solutions in $\Omega\times[0,T)$ such that $v(x,0)\le w(x,0)$ for almost all $x\in\Omega$, where $T>0$ and $r$ is as in . Assume, if $r=1$, that $$\label{eq:1.11} \limsup_{t\to +0}\,t^{\frac{1}{2(p-1)}}\left[\|v(t)\|_{L^\infty(\Omega)}+\|w(t)\|_{L^\infty(\Omega)}\right]<\infty.$$ Then there exists a positive constant $\gamma_2$ such that, if $$\label{eq:1.12} \rho^{\frac{1}{p-1}-\frac{N}{r}} \left[\|v(0)\|_{r,\rho}+\|w(0)\|_{r,\rho}\right]\le\gamma_2$$ for some $\rho\in(0,\rho_*/2)$, then $$v(x,t)\le w(x,t)\quad\mbox{in}\quad \Omega\times(0,T).$$ We give some comments related to Theorems \[Theorem:1.1\] and \[Theorem:1.2\]. - Let $u$ be a $L^r_{uloc}(\Omega)$-solution of in $\Omega\times[0,T)$. It follows from Definition [\[Definition:1.1\]]{} that $u\in L^\infty(\tau,\sigma:L^\infty(\Omega))$ for any $0<\tau<\sigma<T$. This together with Theorem [6.2]{} of [[@DB01]]{} implies that $u(t)\in BUC(\Omega)$ for any $t\in(0,T)$. This means that $u(0)\in{\cal L}^r_{uloc,\rho}(\Omega)$ for any $\rho>0$. - Consider the case $\Omega={\bf R}^N_+$. Let $u$ be a $L^r_{uloc}(\Omega)$-solution of blowing up at $t=T<\infty$, where $r$ is as in . Then, for any $\mu>0$, $u_\mu$ defined by  satisfies $$\mu^{-\left(\frac{1}{p-1}-\frac{N}{r}\right)}\|u_\mu(0)\|_{r,\mu^{-1}}=\|u(0)\|_{r,1}$$ and it blows up at $t=\mu^{-2}T$. This means that Theorem [\[Theorem:1.1\]]{} holds with $\rho=1$ if and only if Theorem [\[Theorem:1.1\]]{} holds for any $\rho>0$. - Let $1\le r<\infty$. 
If either $${\rm(a)}\quad f\in L^r_{uloc,1}(\Omega), \quad r>N(p-1) \qquad\mbox{or}\qquad {\rm(b)}\quad f\in L^r(\Omega),\quad r\ge N(p-1),$$ then, for any $\gamma>0$, we can find a constant $\rho>0$ such that $\rho^{\frac{1}{p-1}-\frac{N}{r}}\|f\|_{r,\rho}\le\gamma$. As a corollary of Theorem \[Theorem:1.1\], we have: \[Corollary:1.1\] Assume the same conditions as in Theorem [\[Theorem:1.1\]]{} and $p>p_*$. - For any $\varphi\in L^{N(p-1)}(\Omega)$, problem  has a unique $L^{N(p-1)}_{uloc}(\Omega)$-solution in $\Omega\times[0,T]$ for some $T>0$. - Assume $\rho_*=\infty$. Then there exists a constant $\gamma$ such that, if $$\label{eq:1.13} \|\varphi\|_{L^{N(p-1)}(\Omega)}\le\gamma,$$ then problem  has a unique $L^{N(p-1)}_{uloc}(\Omega)$-solution $u$ such that $$\sup_{0<t<\infty}\|u(t)\|_{L^{N(p-1)}(\Omega)}+\sup_{0<t<\infty}\,t^{\frac{1}{2(p-1)}}\|u(t)\|_{L^\infty(\Omega)}<\infty.$$ For further applications of our theorems, see Section 5. \[Remark:1.1\] Let $\Omega={\bf R}^N_+:=\{(x',x_N)\in{\bf R}^N\,:\,x_N>0\}$. If $1<p\le p_*$, then problem  possesses no positive global-in-time solutions. See [[@DFL]]{} and [[@GL]]{}. For the case $p>p_*$, it is proved in [[@K]]{} [(]{}see also [[@IK]]{}[)]{} that, if $\varphi\ge 0$, $\varphi\not\equiv 0$ in $\Omega$ and $$\|\varphi\|_{L^1({\bf R}^N_+)}\|\varphi\|_{L^\infty({\bf R}^N_+)}^{N(p-1)-1} \quad\mbox{is sufficiently small},$$ then there exists a positive global-in-time solution of . This also immediately follows from assertion [(ii)]{} of Corollary [\[Corollary:1.1\]]{} and the comparison principle. We explain the idea of the proof of Theorem \[Theorem:1.1\]. 
Under the assumptions of Theorem \[Theorem:1.1\], there exists a sequence $\{\varphi_n\}_{n=1}^\infty\subset BUC(\Omega)$ such that $$\label{eq:1.14} \lim_{n\to\infty}\|\varphi-\varphi_n\|_{r,\rho}=0, \qquad \sup_n\,\|\varphi_n\|_{r,\rho}\le 2\|\varphi\|_{r,\rho}.$$ For any $n=1,2,\dots$, let $u_n$ satisfy in the classical sense $$\label{eq:1.15} \left\{ \begin{array}{ll} \partial_t u=\Delta u & \mbox{in}\quad\Omega\times(0,T_n),\vspace{3pt}\\ \nabla u\cdot\nu(x)=|u|^{p-1}u & \mbox{on}\quad\partial\Omega\times(0,T_n),\vspace{3pt}\\ u(x,0)=\varphi_n(x) & \mbox{in}\quad\Omega, \end{array} \right.$$ where $T_n$ is the blow-up time of the solution $u_n$. By regularity theorems for parabolic equations (see e.g. [@DB01] and [@LSU Chapters III and IV]) we see that $$\label{eq:1.16} u_n\in BUC(\overline{\Omega}\times[0,T]), \qquad \nabla u_n\in L^\infty(\Omega\times(\tau,T)),$$ for any $0<\tau<T<T_n$, which imply that $u_n$ is a $L^r_{uloc}(\Omega)$-solution in $\Omega\times[0,T_n)$ for any $1\le r<\infty$. Set $$\Psi_{r,\rho}[u_n](t):=\sup_{0\le\tau\le t}\,\sup_{x\in\overline{\Omega}}\, \int_{\Omega(x,\rho)}|u_n(y,\tau)|^r\,dy, \qquad 0\le t<T_n.$$ It follows from and that $$\label{eq:1.17} \Psi_{r,\rho}[u_n](0)^{\frac{1}{r}}=\|\varphi_n\|_{r,\rho}\le 2\|\varphi\|_{r,\rho} \le 2\gamma_1\rho^{-\frac{1}{p-1}+\frac{N}{r}}.$$ Define $$\label{eq:1.18} \begin{split} T_n^* & :=\sup\left\{\sigma\in(0,T_n)\,:\,\Psi_{r,\rho}[u_n](t)\le 6M\Psi_{r,\rho}[u_n](0)\quad\mbox{in}\quad[0,\sigma]\right\},\\ T_n^{**} & :=\sup\left\{\sigma\in(0,T_n)\,:\, \rho^{-1}+\|u_n(t)\|_{L^\infty(\Omega)}^{p-1}\le 2t^{-\frac{1}{2}}\quad\mbox{in}\quad(0,\sigma]\right\}, \end{split}$$ where $M$ is the integer given in Lemma \[Lemma:2.1\]. We adapt the arguments in [@A], [@AD] and [@I] to obtain uniform estimates of $u_n$ and $u_m-u_n$ with respect to $m$, $n=1,2,\dots$, and prove that $$\inf_nT_n^*\ge\mu\rho^2,\qquad \inf_nT_n^{**}\ge\mu\rho^2,$$ for some $\mu>0$. 
This enables us to prove Theorem \[Theorem:1.1\]. Theorem \[Theorem:1.2\] follows from an argument similar to that in the proof of Theorem \[Theorem:1.1\]. The rest of this paper is organized as follows. In Section 2 we give some preliminary lemmas related to $\rho_*$. In Sections 3 and 4 we prove Theorems \[Theorem:1.1\] and \[Theorem:1.2\]. In Section 5, as applications of Theorem \[Theorem:1.1\], we give some results on the blow-up time and the blow-up rate of the solutions. Preliminaries ============= In this section we recall some properties of uniformly local $L^r$ spaces and prove some lemmas related to $\rho_*$. Furthermore, we give some inequalities used in Sections 3 and 4. In what follows, the letter $C$ denotes a generic constant independent of $x\in\overline{\Omega}$, $n$ and $\rho$. Let $1\le r<\infty$. We first recall the following properties of $L^r_{uloc,\rho}(\Omega)$: - if $f\in L^r_{uloc,\rho}(\Omega)$ for some $\rho>0$, then, for any $\rho'>0$, $f\in L^r_{uloc,\rho'}(\Omega)$ and $$\|f\|_{r,\rho'}\le C_1\|f\|_{r,\rho}$$ for some constant $C_1$ depending only on $N$, $\rho$ and $\rho'$; - there exists a constant $C_2$ depending only on $N$ such that $$\label{eq:2.1} \|f\|_{r,\rho}\le C_2\rho^{N(\frac{1}{r}-\frac{1}{q})}\|f\|_{q,\rho}, \qquad f\in L^q_{uloc,\rho}(\Omega),$$ for any $1\le r\le q<\infty$ and $\rho>0$; - if $f\in L^r(\Omega)$, then $f\in L^r_{uloc,\rho}(\Omega)$ for any $\rho>0$ and $$\label{eq:2.2} \lim_{\rho\to +0}\,\|f\|_{r,\rho}=0.$$ Properties (ii) and (iii) are proved by the Hölder inequality and the absolute continuity of $|f|^r\,dy$ with respect to $dy$. Property (i) follows from the following lemma. \[Lemma:2.1\] Let $N\ge 1$ and $\Omega$ be a domain in ${\bf R}^N$. Then there exists $M\in\{1,2,\dots\}$ depending only on $N$ such that, for any $x\in\overline{\Omega}$ and $\rho>0$, $$\label{eq:2.3} \Omega(x,2\rho)\subset\bigcup_{k=1}^n \Omega(x_k,\rho)$$ for some $\{x_k\}_{k=1}^n\subset\overline{\Omega}$ with $n\le M$. 
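An explicit bound for the integer $M$ in Lemma \[Lemma:2.1\] can be obtained by a standard volume-packing argument. Let $\{y_k\}_{k=1}^M$ be a maximal $\frac{1}{2}$-separated subset of $B(0,2)$; by maximality, $B(0,2)\subset\bigcup_{k=1}^MB(y_k,1/2)$, while the balls $B(y_k,1/4)$ are pairwise disjoint and contained in $B(0,9/4)$. Comparing volumes, $$M\,\omega_N\left(\frac{1}{4}\right)^N\le\omega_N\left(\frac{9}{4}\right)^N, \qquad\mbox{that is,}\qquad M\le 9^N,$$ where $\omega_N$ denotes the volume of the unit ball in ${\bf R}^N$.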
[**Proof.**]{} There exist $M\in\{1,2,\dots\}$ and $\{y_k\}_{k=1}^M\subset B(0,2)$ such that $$B(0,2)\subset\bigcup_{k=1}^M B(y_k,1/2).$$ Then, for any $x\in\overline{\Omega}$ and $\rho>0$, we can find $\{y_{k_i}\}_{i=1}^n\subset\{y_k\}_{k=1}^M$ such that $$\label{eq:2.4} \Omega(x+\rho y_{k_i},\rho/2)\not=\emptyset \quad\mbox{and}\quad \Omega(x,2\rho)\subset\bigcup_{i=1}^n \Omega(x+\rho y_{k_i},\rho/2).$$ Furthermore, for any $i\in\{1,\dots,n\}$, there exists $x_{k_i}\in\overline{\Omega}$ such that $$x_{k_i}\in \Omega(x+\rho y_{k_i},\rho/2) \quad\mbox{and} \quad \Omega(x+\rho y_{k_i},\rho/2)\subset \Omega(x_{k_i},\rho).$$ This together with implies , and Lemma \[Lemma:2.1\] follows. $\Box$ We state a lemma on the existence of $\rho_*$ satisfying and . \[Lemma:2.2\] Let $N\ge 1$ and $\Omega$ be a uniformly regular domain of class $C^1$. Then there exists $\rho_*>0$ such that and hold. In particular, if $$\label{eq:2.5} \Omega=\{(x',x_N)\in{\bf R}^N\,:\,x_N>\Phi(x')\},$$ where $N\ge 2$ and $\Phi\in C^1({\bf R}^{N-1})$ with $\|\nabla\Phi\|_{L^\infty({\bf R}^{N-1})}<\infty$, then and hold with $\rho_*=\infty$. [**Proof.**]{} By the definition of uniformly regular domain, it suffices to consider the case . Let $f\in C^1_0(B(x_*,\rho))$, where $x_*\in\overline{\Omega}$ and $\rho>0$. Set $f=0$ outside $B(x_*,\rho)$. We first consider the case of $\partial\Omega(x_*,\rho)\not=\emptyset$. Then there exists $y_*\in\partial\Omega$ such that $B(x_*,\rho)\subset B(y_*,2\rho)$. Set $$g(x',x_N):= \left\{ \begin{array}{ll} f(x'-y_*',x_N+\Phi(x')) & \mbox{for}\quad x_N\ge 0,\vspace{3pt}\\ f(x'-y_*',-x_N+\Phi(x')) & \mbox{for}\quad x_N<0, \end{array} \right. \qquad \tilde{g}(z):=g(2\rho'z),$$ where $$\rho'=\rho\left(1+\|\nabla\Phi\|_{L^\infty({\bf R}^{N-1})}^2\right)^{1/2}.$$ Then $\tilde{g}\in C^1_0(B(0,1))$. Applying the Gagliardo-Nirenberg inequality (see e.g. [@GGS]) and the trace imbedding theorem (see e.g. 
[@A Theorem 5.22]), we obtain $$\begin{aligned} \|\tilde{g}\|_{L^\alpha(B(0,1))} \!\!\! & \le &\!\!\! C\|\tilde{g}\|_{L^\beta(B(0,1))}^{1-\sigma}\|\nabla \tilde{g}\|_{L^2(B(0,1))}^\sigma,\\ \int_{B(0,1)\cap\partial{\bf R}^N_+}|\tilde{g}|\,d\sigma \!\!\! & \le &\!\!\! C\|\tilde{g}\|_{W^{1,1}(B(0,1)\cap {\bf R}^N_+)} \le C\|\nabla\tilde{g}\|_{L^1(B(0,1)\cap{\bf R}^N_+)},\end{aligned}$$ where $\alpha$, $\beta$ and $\sigma$ are as in and $\alpha\not=\infty$ if $N=2$. These imply that $$\begin{aligned} \|g\|_{L^\alpha(B(0,2\rho'))} \!\!\! & \le &\!\!\! C\|g\|_{L^\beta(B(0,2\rho'))}^{1-\sigma}\|\nabla g\|_{L^2(B(0,2\rho'))}^\sigma,\\ \int_{B(0,2\rho')\cap\partial{\bf R}^N_+}|g|\,d\sigma \!\!\! & \le &\!\!\! C\|\nabla g\|_{L^1(B(0,2\rho')\cap{\bf R}^N_+)},\end{aligned}$$ for some constants $C$ independent of $\rho$. Then we have $$\begin{aligned} \|f\|_{L^\alpha(\Omega(x_*,\rho))} & = & \|f\|_{L^\alpha(\Omega(y_*,2\rho))} \le C\|g\|_{L^\alpha(B(0,2\rho'))}\\ & \le & C\|g\|_{L^\beta(B(0,2\rho'))}^{1-\sigma}\|\nabla g\|_{L^2(B(0,2\rho'))}^\sigma\notag\\ & \le & C\|f\|_{L^\beta(\Omega(x_*,\rho))}^{1-\sigma}\|\nabla f\|_{L^2(\Omega(x_*,\rho))}^\sigma,\label{eq:2.7}\\ \int_{\partial\Omega(x_*,\rho)}|f|\,d\sigma & \le & C\int_{B(0,2\rho')\cap\partial{\bf R}^N_+}|g|\,d\sigma \le C\|\nabla g\|_{L^1(B(0,2\rho')\cap{\bf R}^N_+)}\notag\\ & \le & C\|\nabla f\|_{L^1(\Omega(x_*,\rho))}. \label{eq:2.8}\end{aligned}$$ Therefore we obtain and for any $\rho>0$ in the case of $\partial\Omega(x_*,\rho)\not=\emptyset$. Similarly, we get and for all $\rho>0$ in the case of $\partial\Omega(x_*,\rho)=\emptyset$. Thus and hold with $\rho_*=\infty$ in the case , and the proof is complete. $\Box$ We obtain the following two lemmas by using and . \[Lemma:2.3\] Let $N\ge 1$ and $\Omega\subset{\bf R}^N$ be a uniformly regular domain of class $C^1$. Let $\rho_*$ satisfy and . 
Then there exists a constant $C_1$ such that $$\label{eq:2.9} \int_{\partial\Omega(x,\rho)}\phi^2\,d\sigma \le\epsilon\int_{\Omega(x,\rho)}|\nabla\phi|^2\,dy+\frac{C_1}{\epsilon}\int_{\Omega(x,\rho)}\phi^2\,dy$$ for all $\phi\in C^1_0(B(x,\rho))$, $\epsilon>0$, $x\in\overline{\Omega}$ and $\rho\in(0,\rho_*)$. Furthermore, for any $p>1$ and $r>0$, there exists a constant $C_2$ such that $$\label{eq:2.10} \int_{\Omega(x,\rho)}\,f^{2p+r-2}\,dy \le C_2\left(\int_{\Omega(x,\rho)}\,f^{N(p-1)}\,dy\right)^{\frac{2}{N}} \int_{\Omega(x,\rho)}\,|\nabla f^{\frac{r}{2}}|^2\,dy$$ for all nonnegative functions $f$ satisfying $f^{r/2}\in C^1(\Omega(x,\rho))$ with $f=0$ near $\Omega\,\cap\,\partial B(x,\rho)$, $\rho\in(0,\rho_*)$ and $x\in\overline{\Omega}$. [**Proof.**]{} It follows from that $$\begin{split} \int_{\partial\Omega(x,\rho)}\phi^2\,d\sigma & \le C\int_{\Omega(x,\rho)}|\nabla \phi^2|\,dy \le 2C\int_{\Omega(x,\rho)}|\phi|\,|\nabla \phi|\,dy\\ & \le\epsilon\int_{\Omega(x,\rho)}|\nabla\phi|^2\,dy +\frac{C^2}{\epsilon}\int_{\Omega(x,\rho)}\phi^2\,dy \end{split}$$ for all $\phi\in W^{1,2}_0(B(x,\rho))$, $\epsilon>0$, $x\in\overline{\Omega}$ and $\rho\in(0,\rho_*)$. This implies . Let $r>0$ and $0<\rho<\rho_*$. If $2N(p-1)\ge r$, then, by we have $$\label{eq:2.11} \int_{\Omega(x,\rho)}\,g^{\frac{4}{r}(p-1)+2}\,dy \le C\left(\int_{\Omega(x,\rho)}\,g^{\frac{2N(p-1)}{r}}\,dy\right)^{\frac{2}{N}} \int_{\Omega(x,\rho)}\,|\nabla g|^2\,dy$$ for all $g\in C^1_0(B(x,\rho))$ and $x\in\overline{\Omega}$. Furthermore, we obtain by the Hölder inequality and even for the case $2N(p-1)<r$ (see e.g. [@N Lemma 3]). Then, setting $g=f^{r/2}$, we obtain , and the proof is complete. $\Box$ \[Lemma:2.4\] Assume the same conditions as in Theorem [\[Theorem:1.1\]]{}. Let $r\ge 1$, $T>0$ and $f$ be a nonnegative function such that $$f\in C([0,T]:L^r_{uloc,\rho}(\Omega))\cap L^2(\tau,T:W^{1,2}(\Omega\cap B(0,R)))$$ for any $\rho\in(0,\rho_*/2)$, $\tau\in(0,T)$ and $R>0$. 
Let $x\in\overline{\Omega}$ and $\zeta$ be a smooth function in ${\bf R}^N$ such that $$\begin{aligned} & & 0\le\zeta\le 1\quad\mbox{and}\quad|\nabla\zeta|\le 2\rho^{-1}\quad\mbox{in}\quad{\bf R}^N,\vspace{5pt}\\ & & \zeta=1\quad\mbox{on}\quad B(x,\rho), \quad \zeta=0\quad\mbox{outside}\quad B(x,2\rho).\end{aligned}$$ Set $f_\epsilon=f+\epsilon$ for $\epsilon>0$. Then, for any sufficiently large $k\ge 2$, there exists a constant $C$ such that $$\label{eq:2.12} \begin{split} & \sup_{x\in\overline{\Omega}}\,\int_\tau^t\int_{\partial\Omega(x,2\rho)}f_\epsilon^{p+r-1}\zeta^k\,d\sigma ds\\ & \le C\biggr[\rho^{\frac{r}{p-1}-N}\Psi_{r,\rho}[f_\epsilon](t)\biggr]^{\frac{p-1}{r}} \left[\sup_{x\in\overline{\Omega}}\,\int_\tau^t\int_{\Omega(x,\rho)}|\nabla f_\epsilon^{\frac{r}{2}}|^2\,dyds +\rho^{-2}(t-\tau)\Psi_{r,\rho}[f_\epsilon](t)\right] \end{split}$$ for all $0<\tau<t\le T$, $\rho\in(0,\rho_*/2)$ and $\epsilon>0$. [**Proof.**]{} Let $\rho\in(0,\rho_*/2)$. It suffices to consider the case where $\partial\Omega(x,\rho)\not=\emptyset$. 
Let $k\ge 2$ be such that $$\label{eq:2.13} \frac{k}{2p+r-2}\cdot\frac{r}{2}\ge 1.$$ By and Lemma \[Lemma:2.1\], for any $\delta>0$, we have $$\label{eq:2.14} \begin{split} & \int_\tau^t\int_{\partial\Omega(x,2\rho)}f_\epsilon^{p+r-1}\zeta^k\,d\sigma ds \le C\int_\tau^t\int_{\Omega(x,2\rho)}\left|\nabla[f_\epsilon^{p+r-1}\zeta^k]\right|\,dyds\\ & \le C\int_\tau^t\int_{\Omega(x,2\rho)}f_\epsilon^{p+\frac{r}{2}-1}|\nabla f_\epsilon^{\frac{r}{2}}|\zeta^k\,dyds +C\int_\tau^t\int_{\Omega(x,2\rho)}f_\epsilon^{p+r-1}|\nabla\zeta|\zeta^{k-1}\,dyds\\ & \le C\delta\int_\tau^t\int_{\Omega(x,2\rho)}f_\epsilon^{2p+r-2}\zeta^k\,dyds\\ & \qquad\quad +C\delta^{-1}\int_\tau^t\int_{\Omega(x,2\rho)}|\nabla f_\epsilon^{\frac{r}{2}}|^2\zeta^k\,dyds +C\delta^{-1}\int_\tau^t\int_{\Omega(x,2\rho)}f_\epsilon^r\zeta^{k-2}|\nabla\zeta|^2\,dy\,ds\\ & \le C\delta\int_\tau^t\int_{\Omega(x,2\rho)}f_\epsilon^{2p+r-2}\zeta^k\,dyds\\ & \qquad\quad +C\delta^{-1}\sup_{x\in\overline{\Omega}}\,\int_\tau^t\int_{\Omega(x,\rho)}|\nabla f_\epsilon^{\frac{r}{2}}|^2\,dyds +C\delta^{-1}\rho^{-2}(t-\tau)\Psi_{r,\rho}[f_\epsilon](t) \end{split}$$ for $0<\tau<t\le T$, where $C$ is a constant independent of $\epsilon$ and $\delta$. Set $g_\epsilon:=f_\epsilon\zeta^{k/(2p+r-2)}$. It follows from that $g_\epsilon^{r/2}=0$ near $\partial B(x,2\rho)\cap\Omega$. 
Then, by Lemmas \[Lemma:2.1\] and \[Lemma:2.3\] we have $$\label{eq:2.15} \begin{split} & \int_\tau^t\int_{\Omega(x,2\rho)}f_\epsilon(y,s)^{2p+r-2}\zeta^k\,dyds =\int_\tau^t\int_{\Omega(x,2\rho)}g_\epsilon(y,s)^{2p+r-2}\,dyds\\ & \le C\sup_{0<s<t}\left(\int_{\Omega(x,2\rho)}g_\epsilon(y,s)^{N(p-1)}\,dy\right)^{\frac{2}{N}} \int_\tau^t\int_{\Omega(x,2\rho)}|\nabla g_\epsilon^{\frac{r}{2}}|^2\,dyds\\ & \le C\sup_{0<s<t}\left(\rho^{\frac{r}{p-1}-N}\int_{\Omega(x,2\rho)}f_\epsilon(y,s)^r\,dy\right)^{\frac{2(p-1)}{r}}\\ & \qquad\qquad\qquad \times\biggr[\int_\tau^t\int_{\Omega(x,2\rho)}|\nabla f_\epsilon^{\frac{r}{2}}|^2\,dyds +\rho^{-2}\int_\tau^t\int_{\Omega(x,2\rho)}f_\epsilon^r\,dyds\biggr]\\ & \le C\biggr[\rho^{\frac{r}{p-1}-N}\Psi_{r,\rho}[f_\epsilon](t)\biggr]^{\frac{2(p-1)}{r}}\\ & \qquad\qquad\qquad \times\biggr[\sup_{x\in\overline{\Omega}}\int_\tau^t\int_{\Omega(x,\rho)}|\nabla f_\epsilon^{\frac{r}{2}}|^2\,dyds +\rho^{-2}(t-\tau)\Psi_{r,\rho}[f_\epsilon](t)\biggr] \end{split}$$ for $0<\tau<t\le T$. Therefore, taking $\delta=[\rho^{\frac{r}{p-1}-N}\Psi_{r,\rho}[f_\epsilon](t)]^{-(p-1)/r}$, by and we obtain , and the proof is complete. $\Box$ Proof of Theorems \[Theorem:1.1\] and \[Theorem:1.2\] in the case $r>1$. ======================================================================== Let $v$ and $w$ be $L^r_{uloc}(\Omega)$-solutions of in $\Omega\times[0,T]$, where $0<T<\infty$ and $r$ is as in . Set $z:=v-w$ and $z_\epsilon:=\max\{z,0\}+\epsilon$ for $\epsilon\ge 0$. Then $z_\epsilon$ satisfies $$\label{eq:3.1} \partial_t z_\epsilon\le\Delta z_\epsilon\quad\mbox{in}\quad\Omega\times(0,T],\qquad \nabla z_\epsilon\cdot\nu(x)\le a(x,t)z_\epsilon\quad\mbox{on}\quad\partial\Omega\times(0,T],$$ in the weak sense (see e.g. [@DB02 Chapter II]). 
Here $$\label{eq:3.2} a(x,t):=\left\{ \begin{array}{ll} \displaystyle{\frac{|v(x,t)|^{p-1}v(x,t)-|w(x,t)|^{p-1}w(x,t)}{v(x,t)-w(x,t)}} & \mbox{if}\quad v(x,t)\not=w(x,t),\vspace{5pt}\\ p|v(x,t)|^{p-1} & \mbox{if}\quad v(x,t)=w(x,t), \end{array} \right.$$ which satisfies $$\label{eq:3.3} 0\le a(x,t)\le C(|v|^{p-1}+|w|^{p-1}) \quad\mbox{in}\quad\Omega\times(0,T].$$ In this section we give some estimates of $z$, and prove Theorems \[Theorem:1.1\] and \[Theorem:1.2\] in the case $r>1$. We first give an $L^\infty_{loc}$ estimate of $z_0$ by using the Moser iteration method with the aid of . For related results, see [@FiloK]. \[Lemma:3.1\] Assume the same conditions as in Theorem [\[Theorem:1.1\]]{}. Let $v$ and $w$ be $L^r_{uloc}(\Omega)$-solutions of in $\Omega\times[0,T]$, where $0<T<\infty$ and $r\ge 1$. Set $z_0:=\max\{v-w,0\}$ and $a=a(x,t)$ as in . Then there exists a constant $C$ such that $$\begin{aligned} \label{eq:3.4} & & \|z_0\|_{L^\infty(\Omega(x,R_1)\times(t_1,t))} \le CD^{\frac{N+2}{2r}}\left(\int_{t_2}^t\int_{\Omega(x,R_2)}\,z_0^r\,dyds\right)^{1/r},\\ \label{eq:3.5} & & \int_{t_1}^t\int_{\Omega(x,R_1)}|\nabla z_0|^2\,dyds \le CD\int_{t_2}^t\int_{\Omega(x,R_2)} z_0^2\,dyds, \end{aligned}$$ for all $x\in\overline{\Omega}$, $0<R_1<R_2<\rho_*$ and $0<t_2<t_1<t\le T$, where $$D:=\|a\|_{L^\infty(\Omega(x,R_2)\times(t_2,t))}^2+(R_2-R_1)^{-2}+(t_1-t_2)^{-1}.$$ [**Proof.**]{} Let $x\in\overline{\Omega}$, $0<R_1<R_2<\rho_*$ and $0<t_2<t_1<t\le T$. 
For $j=0,1,2,\dots$, set $$r_j:=R_1+(R_2-R_1)2^{-j},\quad \tau_j:=t_1-(t_1-t_2)2^{-j},\quad Q_j:=\Omega(x,r_j)\times(\tau_j,t).$$ Let $\zeta_j$ be a piecewise smooth function in $Q_j$ such that $$\begin{split} & 0\le\zeta_j\le 1\quad \mbox{in} \quad {\bf R}^N, \quad \zeta_j=1\quad\mbox{on}\quad Q_{j+1},\vspace{3pt}\\ & \zeta_j=0\quad\mbox{near}\quad (\Omega\cap\partial B(x,r_j))\times[\tau_j,t]\cup\Omega(x,r_j)\times\{\tau_j\},\\ & |\nabla\zeta_j|\le\frac{2^{j+1}}{R_2-R_1} \quad\mbox{and}\quad 0\le\partial_t\zeta_j\le\frac{2^{j+1}}{t_1-t_2} \quad\mbox{in}\quad Q_j. \end{split} \label{eq:3.6}$$ Let $\alpha_0>1$ and $\epsilon>0$. For any $\alpha\ge\alpha_0$, multiplying by $z_\epsilon^{\alpha-1}\zeta_j^2$ and integrating over $Q_j$, we obtain $$\begin{split} & \frac{1}{\alpha}\sup_{\tau_j<s<t}\int_{\Omega(x,r_j)}z_\epsilon^\alpha\zeta_j^2 \,dy +\frac{\alpha-1}{2}\iint_{Q_j}z_\epsilon^{\alpha-2}|\nabla z_\epsilon|^2\zeta_j^2\,dyds\\ & \le\frac{4}{\alpha}\iint_{Q_j}z_\epsilon^\alpha\zeta_j|\partial_t\zeta_j|\,dyds +\frac{4}{\alpha-1}\iint_{Q_j}z_\epsilon^\alpha|\nabla\zeta_j|^2\,dyds\\ & \qquad\qquad\qquad\qquad\qquad\qquad\quad +2\int_{\tau_j}^t\int_{\partial\Omega(x,r_j)}a(y,s)z_\epsilon^\alpha \zeta_j^2\,d\sigma ds. \end{split} \label{eq:3.7}$$ This calculation is somewhat formal; however, it is justified by the same argument as in [@LSU Chapter III] (see also [@DB02]). Then it follows that $$\begin{split} & \sup_{\tau_j<s<t}\int_{\Omega(x,r_j)}z_\epsilon^\alpha\zeta_j^2 \,dy +\iint_{Q_j}|\nabla[z_\epsilon^{\frac{\alpha}{2}}\zeta_j]|^2\,dyds \le C\iint_{Q_j}z_\epsilon^\alpha\zeta_j\partial_t\zeta_j\,dyds\\ & \qquad\qquad\qquad +C\iint_{Q_j}z_\epsilon^\alpha|\nabla \zeta_j|^2 \,dyds +C\alpha\int_{\tau_j}^t\int_{\partial\Omega(x,r_j)}a(y,s)z_\epsilon^\alpha\zeta_j^2\,d\sigma ds \end{split} \label{eq:3.8}$$ for all $j=0,1,2,\dots$ and $\alpha\ge\alpha_0$. 
On the other hand, by Lemma \[Lemma:2.3\] we have $$\begin{split} & C\alpha\int_{\tau_j}^t\int_{\partial\Omega(x,r_j)}a(y,s)z_\epsilon^\alpha\zeta_j^2\,d\sigma ds \le C\alpha\|a\|_{L^\infty(Q_0)}\int_{\tau_j}^t\int_{\partial\Omega(x,r_j)} z_\epsilon^\alpha\zeta_j^2\,d\sigma ds\\ & \qquad\quad \le\frac{1}{2}\iint_{Q_j} |\nabla[z_\epsilon^{\frac{\alpha}{2}}\zeta_j]|^2\,dyds +C\alpha^2\|a\|_{L^\infty(Q_0)}^2\iint_{Q_j}z_\epsilon^\alpha\zeta_j^2\,dyds. \end{split} \label{eq:3.9}$$ We deduce from , and that $$\begin{split} & \sup_{\tau_j<s<t}\int_{\Omega(x,r_j)}z_\epsilon^\alpha\zeta_j^2 \,dy +\iint_{Q_j}|\nabla[z_\epsilon^{\frac{\alpha}{2}}\zeta_j]|^2\,dyds\\ & \le C\left[\alpha^2\|a\|_{L^\infty(Q_0)}^2+\frac{2^{2j}}{(R_2-R_1)^2}+\frac{2^j}{t_1-t_2}\right] \iint_{Q_j}z_\epsilon^\alpha\,dyds \end{split} \label{eq:3.10}$$ for all $j=0,1,2,\dots$ and $\alpha\ge \alpha_0$. This together with implies that $$\begin{split} & \left(\iint_{Q_{j+1}}z_\epsilon^{\kappa\alpha}\,dyds\right)^{1/\kappa}\\ & \le C\left[\alpha^2\|a\|_{L^\infty(Q_0)}^2+\frac{2^{2j}}{(R_2-R_1)^2}+\frac{2^j}{t_1-t_2}\right] \iint_{Q_j}z_\epsilon^\alpha\,dyds \end{split} \label{eq:3.11}$$ for all $j=0,1,2,\dots$ and $\alpha\ge\alpha_0$, where $\kappa:=1+2/N$. Furthermore, by with $\alpha=2$ we have . We prove in the case $r\ge 2$. Setting $$I_j:=\|z_\epsilon\|_{L^{\alpha_j}(Q_j)}, \qquad \alpha_j:=r\kappa^j,$$ by we have $$\label{eq:3.12} I_{j+1}\le C^{\frac{1}{\alpha_j}} \left[\alpha_j^2\|a\|_{L^\infty(Q_0)}^2+\frac{2^{2j}}{(R_2-R_1)^2}+\frac{2^j}{t_1-t_2}\right]^{\frac{1}{\alpha_j}} I_j \le C^{\frac{j}{\alpha_j}}(CD)^{\frac{1}{\alpha_j}}I_j$$ for all $j=0,1,2,\dots$, where $D:=\|a\|_{L^\infty(Q_0)}^2+(R_2-R_1)^{-2}+(t_1-t_2)^{-1}$. 
Since $$\sum_{j=0}^\infty\frac{1}{\alpha_j}=\frac{1}{r}\sum_{j=0}^\infty\kappa^{-j} =\frac{1}{r(1-\kappa^{-1})}=\frac{N+2}{2r}, \qquad \sum_{j=0}^\infty \frac{j}{\alpha_j}<\infty,$$ we deduce from that $$\|z_\epsilon\|_{L^\infty(Q_\infty)}= \lim_{j\to\infty}I_j\le C^{\sum_{j=0}^\infty\frac{j}{\alpha_j}}(CD)^{\sum_{j=0}^\infty\frac{1}{\alpha_j}}I_0\\ \le CD^{\frac{N+2}{2r}}\|z_\epsilon\|_{L^r(Q_0)},$$ which implies $$\label{eq:3.13} \|z_\epsilon\|_{L^\infty(\Omega(x,R_1)\times(t_1,t))}\le CD^{\frac{N+2}{2r}}\left(\int_{t_2}^t\int_{\Omega(x,R_2)}z_\epsilon^r \,dyds\right)^{1/r},$$ where $r\ge 2$. Then, passing to the limit as $\epsilon\to 0$, we obtain . On the other hand, for the case $1\le r<2$, applying with $r=2$ to the cylinders $Q_j$ and $Q_{j+1}$, we have $$\begin{split} \|z_\epsilon\|_{L^\infty(Q_{j+1})} & \le C\left((2^{2j}D)^{\frac{N+2}{2}}\iint_{Q_j}z_\epsilon^2\,dyds\right)^{\frac{1}{2}}\\ & \le Cb^j\|z_\epsilon\|_{L^\infty(Q_j)}^{1-r/2}\left(D^{(N+2)/2}\iint_{Q_j}z_\epsilon^r\,dyds\right)^{\frac{1}{2}}, \end{split}$$ where $b=2^{(N+2)/2}$. Then, for any $\nu>0$, we have $$\begin{split} \|z_\epsilon\|_{L^\infty(Q_{j+1})} & \le\nu\|z_\epsilon\|_{L^\infty(Q_j)}+C\nu^{-\frac{2-r}{r}} b^{\frac{2}{r}j}D^{\frac{N+2}{2r}}\biggr(\iint_{Q_j}z_\epsilon^r\,dyds\biggr)^{1/r}\\ & \le\nu^{j+1}\|z_\epsilon\|_{L^\infty(Q_0)} +C\nu^{-\frac{2-r}{r}}\sum_{i=0}^j(\nu b^{\frac{2}{r}})^iD^{\frac{N+2}{2r}}\biggr(\iint_{Q_0}z_\epsilon^r\,dyds\biggr)^{1/r} \end{split}$$ for $j=1,2,\dots$. Taking a sufficiently small $\nu$ if necessary, we see that $$\|z_\epsilon\|_{L^\infty(Q_{j+1})}\le\nu^{j+1}\|z_\epsilon\|_{L^\infty(Q_0)}+CD^{\frac{N+2}{2r}}\biggr(\iint_{Q_0}z_\epsilon^r\,dyds\biggr)^{1/r}$$ for $j=1,2,\dots$. Passing to the limit as $j\to\infty$ and $\epsilon\to 0$, we obtain $$\|z_0\|_{L^\infty(Q_\infty)}\le CD^{\frac{N+2}{2r}}\biggr(\iint_{Q_0}z_0^r\,dyds\biggr)^{1/r},$$ which implies in the case $1\le r<2$. Thus Lemma \[Lemma:3.1\] follows. 
$\Box$ \[Lemma:3.2\] Assume the same conditions as in Theorem [\[Theorem:1.1\]]{}. Let $r$ satisfy and $r>1$. Let $v$ be a $L^r_{uloc}(\Omega)$-solution of in $\Omega\times[0,T]$, where $T>0$. Then there exists a positive constant $\Lambda$ such that, if $$\label{eq:3.14} \rho^{\frac{r}{p-1}-N}\Psi_{r,\rho}[v](T)\le\Lambda$$ for some $\rho\in(0,\rho_*/2)$, then $$\begin{aligned} \label{eq:3.15} & & \Psi_{r,\rho}[v](t)\le 5M\Psi_{r,\rho}[v](\tau),\\ \label{eq:3.16} & & \sup_{x\in\overline{\Omega}}\,\int_\tau^t\int_{\partial\Omega(x,\rho)}|v|^{p+r-1}\,d\sigma ds \le C\Lambda^{\frac{p-1}{r}}\Psi_{r,\rho}[v](\tau),\end{aligned}$$ for all $0\le \tau\le t\le T$ with $t-\tau\le\mu\rho^2$, where $C$ and $\mu$ are positive constants depending only on $N$, $\Omega$, $p$ and $r$. [**Proof.**]{} Let $x\in\overline{\Omega}$ and let $\zeta$ and $k$ be as in Lemma \[Lemma:2.4\]. By we can take a sufficiently small $\epsilon>0$ so that $$\label{eq:3.17} \rho^{\frac{r}{p-1}-N}\Psi_{r,\rho}[v_\epsilon](T)\le 2\Lambda,$$ where $v_\epsilon:=\max\{\pm v,0\}+\epsilon$. Similarly to , for any $0<\tau<t\le T$, multiplying by $v_\epsilon^{r-1}\zeta^k$ and integrating it in $\Omega\times(\tau,t)$, we obtain $$\label{eq:3.18} \begin{split} & \int_{\Omega(x,2\rho)}v_\epsilon(y,s)^r\zeta^k\,dy\biggr|_{s=\tau}^{s=t} +\int_\tau^t\int_{\Omega(x,\rho)}|\nabla v_\epsilon^{\frac{r}{2}}|^2\,dyds\\ & \le C\rho^{-2}\int_\tau^t\int_{\Omega(x,2\rho)} v_\epsilon^r\,dyds +C\int_\tau^t\int_{\partial\Omega(x,2\rho)}v_\epsilon^{p+r-1}\zeta^k\,d\sigma ds. 
\end{split}$$ This together with $v\in C(\overline{\Omega}\times[\tau,T])\cap L^\infty(\tau,T:L^\infty(\Omega))$ (see Definition \[Definition:1.1\]) implies that $$\label{eq:3.19} \sup_{x\in\overline{\Omega}}\, \int_\tau^t\int_{\Omega(x,\rho)}|\nabla v_\epsilon^{\frac{r}{2}}|^2\,dyds<\infty.$$ Furthermore, by Lemma \[Lemma:2.4\], and we have $$\label{eq:3.20} \begin{split} & \int_{\Omega(x,2\rho)}v_\epsilon(y,s)^r\zeta^k\,dy\biggr|_{s=\tau}^{s=t} +\int_\tau^t\int_{\Omega(x,\rho)}|\nabla v_\epsilon^{\frac{r}{2}}|^2\,dyds \le C\rho^{-2}\int_\tau^t\int_{\Omega(x,2\rho)} v_\epsilon^r\,dyds\\ & \qquad\qquad\qquad +C(2\Lambda)^{\frac{p-1}{r}} \left[\sup_{x\in\overline{\Omega}}\,\int_\tau^t\int_{\Omega(x,\rho)}|\nabla v_\epsilon^{\frac{r}{2}}|^2\,dyds +\rho^{-2}(t-\tau)\Psi_{r,\rho}[v_\epsilon](t)\right] \end{split}$$ for $0<\tau<t\le T$. Therefore, by Lemma \[Lemma:2.1\], and we obtain $$\begin{split} & \sup_{x\in\overline{\Omega}}\,\int_{\Omega(x,2\rho)}v_\epsilon(y,t)^r\,dy +\sup_{x\in\overline{\Omega}}\,\int_\tau^t\int_{\Omega(x,\rho)}|\nabla v_\epsilon^{\frac{r}{2}}|^2\,dyds\\ & \le M\sup_{x\in\overline{\Omega}}\,\int_{\Omega(x,\rho)}v_\epsilon(y,\tau)^r\,dy +C\rho^{-2}(t-\tau)\Psi_{r,\rho}[v_\epsilon](t)\\ &\qquad +C(2\Lambda)^{\frac{p-1}{r}} \left[\sup_{x\in\overline{\Omega}}\,\int_\tau^t\int_{\Omega(x,\rho)}|\nabla v_\epsilon^{\frac{r}{2}}|^2\,dyds +\rho^{-2}(t-\tau)\Psi_{r,\rho}[v_\epsilon](t)\right] \end{split} \label{eq:3.21}$$ for $0<\tau<t\le T$. Taking a sufficiently small $\Lambda$ if necessary, we deduce from and that $$\begin{split} & \sup_{x\in\overline{\Omega}}\,\int_{\Omega(x,\rho)}v_\epsilon(y,t)^r\,dy +\frac{1}{2}\sup_{x\in\overline{\Omega}}\,\int_\tau^t\int_{\Omega(x,\rho)}|\nabla v_\epsilon^{\frac{r}{2}}|^2\,dyds\\ & \qquad \le M\sup_{x\in\overline{\Omega}}\,\int_{\Omega(x,\rho)}v_\epsilon(y,\tau)^r\,dy +C\rho^{-2}(t-\tau)\Psi_{r,\rho}[v_\epsilon](t). 
\end{split}$$ Taking a sufficiently small $\mu\in(0,1]$, we obtain $$\label{eq:3.22} \begin{split} & \Psi_{r,\rho}[v_\epsilon](t)+\frac{1}{2}\sup_{x\in\overline{\Omega}}\,\int_\tau^t\int_{\Omega(x,\rho)}|\nabla v_\epsilon^{\frac{r}{2}}|^2\,dyds\\ & \le 2M\Psi_{r,\rho}[v_\epsilon](\tau)+C\rho^{-2}(t-\tau)\Psi_{r,\rho}[v_\epsilon](t) \le 2M\Psi_{r,\rho}[v_\epsilon](\tau)+\frac{1}{2}\Psi_{r,\rho}[v_\epsilon](t) \end{split}$$ for $0<\tau<t\le T$ with $t-\tau\le\mu\rho^2$. This implies that $$\label{eq:3.23} \Psi_{r,\rho}[\max\{\pm v,0\}](t)\le\Psi_{r,\rho}[v_\epsilon](t)\le 4M\Psi_{r,\rho}[v_\epsilon](\tau) \le 5M\Psi_{r,\rho}[v](\tau)+C\epsilon^r\rho^N$$ for $0<\tau<t\le T$ with $t-\tau\le\mu\rho^2$. Furthermore, by Lemma \[Lemma:2.4\], and we have $$\label{eq:3.24} \begin{split} & \int_\tau^t\int_{\partial\Omega(x,\rho)}\max\{\pm v,0\}^{p+r-1}\,d\sigma ds \le \int_\tau^t\int_{\partial\Omega(x,\rho)}v_\epsilon^{p+r-1}\,d\sigma ds\\ & \qquad\quad \le C\Lambda^{\frac{p-1}{r}}\Psi_{r,\rho}[v_\epsilon](\tau) \le C\Lambda^{\frac{p-1}{r}}\Psi_{r,\rho}[v](\tau)+C\epsilon^r\rho^N. \end{split}$$ Since $\tau$ and $\epsilon$ are arbitrary, by and we obtain and . Thus Lemma \[Lemma:3.2\] follows. $\Box$ \[Lemma:3.3\] Assume the same conditions as in Lemma [\[Lemma:3.1\]]{}. Let $r$ satisfy and $r>1$. Then there exists a positive constant $\Lambda$ such that, if $$\label{eq:3.25} \rho^{\frac{r}{p-1}-N}\left(\Psi_{r,\rho}[v](T)+\Psi_{r,\rho}[w](T)\right)\le\Lambda$$ for some $\rho\in(0,\rho_*/2)$, then $$\label{eq:3.26} \Psi_{r,\rho}[z_0](t)\le C\Psi_{r,\rho}[z_0](\tau)$$ for $0\le\tau<t\le T$ with $t-\tau\le\mu\rho^2$, where $C$ and $\mu$ are positive constants depending only on $N$, $\Omega$, $p$ and $r$. [**Proof.**]{} Let $x\in\overline{\Omega}$ and $\zeta$ be as in Lemma \[Lemma:2.4\]. Let $k$ be as in Lemma \[Lemma:2.4\] and $\epsilon>0$. 
Similarly to , we have $$\label{eq:3.27} \begin{split} & \int_{\Omega(x,2\rho)} z_\epsilon(y,s)^r\zeta^k\,dy\biggr|_{s=\tau}^{s=t} +\int_\tau^t\int_{\Omega(x,2\rho)}|\nabla z_\epsilon^{\frac{r}{2}}|^2\zeta^k\,dyds\\ & \le C\rho^{-2}\int_\tau^t\int_{\Omega(x,2\rho)}z_\epsilon^r\,dyds +C\int_\tau^t\int_{\partial\Omega(x,2\rho)}a(y,s)z_\epsilon^r\zeta^k\,d\sigma ds \end{split}$$ for all $0<\tau<t\le T$. This together with $z_\epsilon$, $a\in C(\overline{\Omega}\times[\tau,T])\cap L^\infty(\Omega\times(\tau,T))$ implies that $$\label{eq:3.28} \sup_{x\in\overline{\Omega}}\,\int_\tau^t\int_{\Omega(x,2\rho)}|\nabla z_\epsilon^{\frac{r}{2}}|^2\,dyds<\infty$$ for $0<\tau<t\le T$. On the other hand, by the Hölder inequality and we have $$\label{eq:3.29} \begin{split} \int_\tau^t\int_{\partial\Omega(x,2\rho)}a(y,s)z_\epsilon^r\zeta^k\,d\sigma ds & \le C\left(\int_\tau^t\int_{\partial\Omega(x,2\rho)}(|v|^{p+r-1}+|w|^{p+r-1})\,d\sigma ds\right)^{\frac{p-1}{p+r-1}}\\ & \qquad\qquad \times\left(\int_\tau^t\int_{\partial\Omega(x,2\rho)}z_\epsilon^{p+r-1}\zeta^k\,d\sigma ds\right)^{\frac{r}{p+r-1}}. \end{split}$$ Let $\Lambda$ and $\mu$ be sufficiently small positive constants. Then, by Lemma \[Lemma:2.1\], and we see that $$\label{eq:3.30} \begin{split} & \int_\tau^t\int_{\partial\Omega(x,2\rho)}(|v|^{p+r-1}+|w|^{p+r-1})\,d\sigma ds\\ & \le M\sup_{x\in\overline{\Omega}}\, \int_\tau^t\int_{\partial\Omega(x,\rho)}(|v|^{p+r-1}+|w|^{p+r-1})\,d\sigma ds\\ & \le C\Lambda^{\frac{p-1}{r}}\left\{\Psi_{r,\rho}[v](\tau)+\Psi_{r,\rho}[w](\tau)\right\} \le C\Lambda^{\frac{p+r-1}{r}}\rho^{-\frac{r}{p-1}+N} \end{split}$$ for all $0<\tau<t\le T$ with $t-\tau\le\mu\rho^2$. 
Similarly, by Lemma \[Lemma:2.4\] we obtain $$\label{eq:3.31} \begin{split} & \int_\tau^t\int_{\partial\Omega(x,2\rho)}z_\epsilon^{p+r-1}\zeta^k\,d\sigma ds \le C\left(\rho^{\frac{r}{p-1}-N}\Psi_{r,\rho}[z_\epsilon](t)\right)^{\frac{p-1}{r}}\\ & \qquad\qquad\times \biggr[\sup_{x\in\overline{\Omega}}\,\int_\tau^t\int_{\Omega(x,\rho)}|\nabla(z_\epsilon)^{\frac{r}{2}}|^2\,dyds +\rho^{-2}(t-\tau)\Psi_{r,\rho}[z_\epsilon](\tau)\biggr] \end{split}$$ for all $0<\tau<t\le T$ with $t-\tau\le\mu\rho^2$. Then we deduce from – that $$\label{eq:3.32} \begin{split} & \int_\tau^t\int_{\partial\Omega(x,2\rho)}a(y,t)z_\epsilon^r\zeta^k\,d\sigma ds\\ & \le C\Lambda^{\frac{p-1}{r}}\left(\Psi_{r,\rho}[z_\epsilon](t)\right)^{\frac{p-1}{p+r-1}}\\ & \qquad\times \biggr[\sup_{x\in\overline{\Omega}}\int_\tau^t\int_{\Omega(x,\rho)}|\nabla(z_\epsilon)^{\frac{r}{2}}|^2\,dyds +\rho^{-2}(t-\tau)\Psi_{r,\rho}[z_\epsilon](t)\biggr]^{\frac{r}{p+r-1}}\\ & \le C\Lambda^{\frac{p-1}{r}} \biggr[\sup_{x\in\overline{\Omega}}\int_\tau^t\int_{\Omega(x,\rho)}|\nabla z_\epsilon^{\frac{r}{2}}|^2\,dyds +\Psi_{r,\rho}[z_\epsilon](t)+\rho^{-2}(t-\tau)\Psi_{r,\rho}[z_\epsilon](\tau)\biggr] \end{split}$$ for all $0<\tau<t\le T$ with $t-\tau\le\mu\rho^2$. Therefore, by Lemma \[Lemma:2.1\], and we have $$\begin{split} & \sup_{x\in\overline{\Omega}}\,\int_{\Omega(x,\rho)}z_\epsilon^r\,dy +\sup_{x\in\overline{\Omega}}\,\int_\tau^t\int_{\Omega(x,\rho)}|\nabla z_\epsilon^{\frac{r}{2}}|^2\,dyds\\ & \quad \le M\Psi_{r,\rho}[z_\epsilon](\tau)+C\rho^{-2}(t-\tau)\Psi_{r,\rho}[z_\epsilon](t)\\ & \qquad +C\Lambda^{\frac{p-1}{r}} \biggr[\sup_{x\in\overline{\Omega}}\int_\tau^t\int_{\Omega(x,\rho)}|\nabla z_\epsilon^{\frac{r}{2}}|^2\,dyds +\Psi_{r,\rho}[z_\epsilon](t)+\rho^{-2}(t-\tau)\Psi_{r,\rho}[z_\epsilon](\tau)\biggr] \end{split}$$ for all $0<\tau<t\le T$ with $t-\tau\le\mu\rho^2$. 
Then, taking sufficiently small constants $\Lambda$ and $\mu$ if necessary, we obtain $$\Psi_{r,\rho}[z_\epsilon](t)\le 4M\Psi_{r,\rho}[z_\epsilon](\tau)$$ for all $0<\tau<t\le T$ with $t-\tau\le\mu\rho^2$. This implies , and the proof is complete. $\Box$ Now we are ready to complete the proof of Theorems \[Theorem:1.1\] and \[Theorem:1.2\] in the case $r>1$. Let $\gamma_1$ be a sufficiently small positive constant and assume . Let $\{\varphi_n\}$ satisfy and define $T_n^*$ and $T_n^{**}$ as in . Then it follows from that $$\label{eq:3.33} \rho^{\frac{r}{p-1}-N}\Psi_{r,\rho}[u_n](t)\le 6M\rho^{\frac{r}{p-1}-N}\Psi_{r,\rho}[u_n](0)\le 6M(2\gamma_1)^r$$ for all $0\le t\le T_n^*$. Taking a sufficiently small $\gamma_1$ if necessary, by Lemma \[Lemma:3.2\], and , we can find a constant $\mu>0$ such that $$\label{eq:3.34} \Psi_{r,\rho}[u_n](t)\le 5M\Psi_{r,\rho}[u_n](0) <6M\Psi_{r,\rho}[u_n](0)\le C\|\varphi\|_{r,\rho}^r$$ for $0\le t\le\min\{T_n^*,\mu\rho^2\}$. On the other hand, we apply Lemma \[Lemma:3.1\] with $R_1=\rho/2$, $R_2=\rho$, $t_1=t/2$ and $t_2=t/4$ to obtain $$\begin{aligned} \label{eq:3.35} & & \|u_n(t)\|_{L^\infty(\Omega(x,\rho/2))}\le CD^{\frac{N+2}{2r}} \left(\int_{t/4}^t\int_{\Omega(x,\rho)}|u_n|^r\,dyds\right)^{1/r},\\ \label{eq:3.36} & & \int_{t/2}^t\int_{\Omega(x,\rho/2)}|\nabla u_n|^2\,dyds \le CD\int_{t/4}^t\int_{\Omega(x,\rho)}|u_n|^2\,dyds,\end{aligned}$$ for all $x\in\overline{\Omega}$ and $t\in(0,T_n)$, where $D=\||u_n|^{p-1}\|_{L^\infty(\Omega(x,\rho)\times(t/4,t))}^2+\rho^{-2}+t^{-1}$.
By , and we have $$\begin{aligned} \label{eq:3.37} & & \|u_n(t)\|_{L^\infty(\Omega)} \le Ct^{-\frac{N}{2r}}\|\varphi\|_{r,\rho} \le C\gamma_1t^{-\frac{1}{2(p-1)}}(\rho^{-2}t)^{-\frac{N}{2r}+\frac{1}{2(p-1)}},\\ \label{eq:3.38} & & \sup_{x\in\overline{\Omega}}\int_{t/2}^t\int_{\Omega(x,\rho)} |\nabla u_n|^2\,dyds \le C\rho^N\|u_n\|_{L^\infty(\Omega\times(t/4,t))}^2 \le C\rho^Nt^{-\frac{N}{r}}\|\varphi\|_{r,\rho}^2,\qquad\end{aligned}$$ for all $0<t\le\min\{\mu\rho^2,T_n^*,T_n^{**}\}$. Since $r\ge N(p-1)$, taking sufficiently small $\gamma_1>0$ and $\mu>0$ if necessary, by we have $$(\rho^{-2}t)^{\frac{1}{2}}+t^{\frac{1}{2}}\|u_n(t)\|_{L^\infty(\Omega)}^{p-1} \le\mu^{\frac{1}{2}}+(C\gamma_1)^{p-1}\mu^{-\frac{N(p-1)}{2r}+\frac{1}{2}}\le 1$$ for $0<t\le\min\{\mu\rho^2,T_n^*,T_n^{**}\}$. This implies that $T_n>T_n^{**}>\min\{T_n^*,\mu\rho^2\}$ for $n=1,2,\dots$. Then, by we see that $T_n^*>\mu\rho^2$ for $n=1,2,\dots$. Therefore, by , and we obtain $$\begin{aligned} \label{eq:3.39} & & \|u_n(t)\|_{L^\infty(\Omega)}\le Ct^{-\frac{N}{2r}}\|\varphi\|_{r,\rho},\\ \label{eq:3.40} & & \sup_{x\in\overline{\Omega}}\int_{t/2}^t\int_{\Omega(x,\rho)} |\nabla u_n|^2\,dyds\le C\rho^Nt^{-\frac{N}{r}}\|\varphi\|_{r,\rho}^2,\\ \label{eq:3.41} & & \sup_{0<t<\mu\rho^2}\|u_n(t)\|_{r,\rho}\le C\|\varphi\|_{r,\rho},\end{aligned}$$ for $0<t\le\mu\rho^2$ and $n=1,2,\dots$. Applying [@DB01 Theorem 6.2] with the aid of , we see that $u_n$ $(n=1,2,\dots)$ are uniformly bounded and equicontinuous on $K\times[\tau,\mu\rho^2]$ for any compact set $K\subset\overline{\Omega}$ and $\tau\in(0,\mu\rho^2]$. Then, by the Ascoli-Arzelà theorem and the diagonal argument we can find a subsequence $\{u_{n'}\}$ and a continuous function $u$ in $\Omega\times(0,\mu\rho^2]$ such that $$\lim_{n'\to\infty}\,\|u_{n'}-u\|_{L^\infty(K\times[\tau,\mu\rho^2])}=0$$ for any compact set $K\subset\overline{\Omega}$ and $\tau\in(0,\mu\rho^2]$. This together with and implies and . 
Furthermore, by , taking a subsequence if necessary, we see that $$\lim_{n'\to\infty}u_{n'}=u\quad\mbox{weakly in}\,\,\,L^2([\tau,\mu\rho^2]:W^{1,2}(\Omega\cap B(0,R)))$$ for any $R>0$ and $0<\tau<\mu\rho^2$. This implies that $u$ satisfies . On the other hand, since $u_n$ is an $L^r_{uloc}(\Omega)$-solution of (see ), we see that $$u_n\in C([0,\mu\rho^2]:L^r_{uloc,\rho}(\Omega)).$$ Furthermore, by Lemma \[Lemma:3.3\] and , taking a sufficiently small $\gamma_1$ if necessary, we have $$\sup_{0<\tau<\mu\rho^2}\|u_m(\tau)-u_n(\tau)\|_{r,\rho}\le C\|u_m(0)-u_n(0)\|_{r,\rho}, \quad m,n=1,2,\dots.$$ This means that $\{u_n\}$ is a Cauchy sequence in $C([0,\mu\rho^2]:L^r_{uloc,\rho}(\Omega))$, which implies $$\label{eq:3.42} u\in C([0,\mu\rho^2]:L^r_{uloc,\rho}(\Omega)).$$ Therefore we see that $u$ is an $L^r_{uloc}(\Omega)$-solution of in $\Omega\times[0,\mu\rho^2]$ satisfying and , and the proof of Theorem \[Theorem:1.1\] for the case $r>1$ is complete. $\Box$ [**Proof of Theorem \[Theorem:1.2\] in the case $r>1$.**]{} Let $v$ and $w$ be $L^r_{uloc}(\Omega)$-solutions of in $\Omega\times[0,T)$, where $T>0$. Let $\gamma_2$ be a sufficiently small constant and assume . We can assume, without loss of generality, that $\rho\in(0,\rho_*/2)$. Since $v$, $w\in C([0,T]:L^r_{uloc,\rho}(\Omega))$, we can find a constant $T'\in(0,T)$ such that $$\label{eq:3.43} \rho^{\frac{1}{p-1}-\frac{N}{r}}\left[\sup_{0<\tau\le T'}\|v(\tau)\|_{r,\rho}+\sup_{0<\tau\le T'}\|w(\tau)\|_{r,\rho}\right]\le 2\gamma_2.$$ Furthermore, for any $T''\in(T',T)$, since $v$, $w\in L^\infty(\Omega\times(T',T''))$, we see that $$\label{eq:3.44} \tilde{\rho}^{\frac{1}{p-1}-\frac{N}{r}} \left[\sup_{T'<\tau\le T''}\|v(\tau)\|_{r,\tilde{\rho}}+\sup_{T'<\tau\le T''}\|w(\tau)\|_{r,\tilde{\rho}}\right]\le\gamma_2$$ for some $\tilde{\rho}\in(0,\rho)$.
Since $v(x,0)\le w(x,0)$ for almost all $x\in\Omega$, by and we apply Lemma \[Lemma:3.3\] to obtain $$\sup_{0<\tau<\min\{\mu\tilde{\rho}^2,T''\}}\|(v(\tau)-w(\tau))_+\|_{r,\tilde{\rho}}\le C\|(v(0)-w(0))_+\|_{r,\tilde{\rho}}=0$$ for some constant $\mu>0$. This implies that $v(x,t)\le w(x,t)$ in $\Omega\times(0,\min\{\mu\tilde{\rho}^2,T''\}]$. Repeating this argument, we see that $v(x,t)\le w(x,t)$ in $\Omega\times(0,T'']$. Finally, since $T''$ is arbitrary, we see that $v(x,t)\le w(x,t)$ in $\Omega\times(0,T)$, and the proof is complete. $\Box$ Proof of Theorems \[Theorem:1.1\] and \[Theorem:1.2\] in the case $r=1$ ======================================================================= In this section we consider the case $1<p<1+1/N$ and $r=1$, and complete the proof of Theorems \[Theorem:1.1\] and \[Theorem:1.2\]. Furthermore, we prove Corollary \[Corollary:1.1\]. We use the same notation as in Section 3. \[Lemma:4.1\] Assume the same conditions as in Theorem [\[Theorem:1.1\]]{}. Let $v$ and $w$ be $L^1_{uloc}(\Omega)$-solutions of in $\Omega\times[0,T]$, where $0<T<\infty$, such that $$\label{eq:4.1} \|v(t)\|_{L^\infty(\Omega)}+\|w(t)\|_{L^\infty(\Omega)} \le C_1t^{-\frac{1}{2(p-1)}},\qquad 0<t\le T,$$ for some $C_1>0$. Then there exists a constant $C_2$ such that $$\begin{aligned} \label{eq:4.2} & & \|v(t)\|_{L^\infty(\Omega)}\le C_2t^{-\frac{N}{2}}\Psi_{1,\rho}[v](t),\\ \label{eq:4.3} & & \|z_0(t)\|_{L^\infty(\Omega)}\le C_2t^{-\frac{N}{2}}\Psi_{1,\rho}[z_0](t),\end{aligned}$$ for all $0<t\le\min\{T,\rho^2\}$ and $0<\rho<\rho_*$. [**Proof.**]{} Similarly to , by Lemma \[Lemma:3.1\] and we have $$\begin{split} \|z_0(t)\|_{L^\infty(\Omega(x,\rho/2))} & \le C\left[\|v(t)\|_{L^\infty(\Omega\times(t/4,t))}^{2(p-1)}+\rho^{-2}+t^{-1}\right]^{\frac{N+2}{2}} \int_{t/4}^t\int_{\Omega(x,\rho)} |z_0(y,s)|\,dyds\\ & \le C(1+C_1^{2(p-1)})^{\frac{N+2}{2}}t^{-\frac{N}{2}}\Psi_{1,\rho}[z_0](t) \end{split}$$ for all $x\in\overline{\Omega}$ and $0<t\le\min\{T,\rho^2\}$. 
This implies . Furthermore, follows from , and the proof is complete. $\Box$ \[Lemma:4.2\] Assume the same conditions as in Theorem [\[Theorem:1.1\]]{} and $1<p<1+1/N$. Let $v$ and $w$ be $L^1_{uloc}(\Omega)$-solutions of in $\Omega\times[0,T]$, where $0<T<\infty$, and assume for some constant $C_1>0$. Let $0<\rho<\rho_*$ and $\Lambda$ be such that $$\label{eq:4.4} \rho^{\frac{1}{p-1}-N}\left[\Psi_{1,\rho}[v](T)+\Psi_{1,\rho}[w](T)\right]\le\Lambda.$$ Then, for any $\sigma\in(0,1)$ and $\delta\in(0,1)$ with $\sigma>\delta N/2$, there exists a positive constant $C_2$ such that $$\label{eq:4.5} \limsup_{\epsilon\to 0}\,\sup_{x\in\overline{\Omega}}\,\int_0^t \int_{\Omega(x,\rho)} (\rho^{-2}s)^\sigma\frac{|\nabla z_\epsilon|^2}{z_\epsilon^{1-\delta}}\,dyds \le C_2\mu^{\sigma-\frac{\delta N}{2}}\rho^{-\delta N}\Psi_{1,\rho}[z_0](t)^{1+\delta}$$ for $0<t\le\min\{T,\mu\rho^2\}$ and $0<\mu\le 1$. [**Proof.**]{} Let $\sigma\in(0,1)$ and $\delta\in(0,1)$ be such that $\sigma>\delta N/2$. Let $x\in\overline{\Omega}$ and $\zeta$ be as in Lemma [\[Lemma:2.4\]]{}. Similarly to , multiplying by $(\rho^{-2}s)^\sigma z_\epsilon(y,s)^\delta\zeta(y)^2$ and integrating it on $\Omega(x,2\rho)\times(\tau,t)$, we obtain $$\begin{split} & \frac{\delta}{2} \int_\tau^t \int_{\Omega(x,2\rho)} (\rho^{-2}s)^\sigma\frac{|\nabla z_\epsilon|^2}{z_\epsilon^{1-\delta}}\zeta^2\,dyds \le \frac{(\rho^{-2}\tau)^\sigma}{1+\delta}\int_{\Omega(x,2\rho)}z_\epsilon(y,\tau)^{1+\delta}\,dy\\ & \qquad +\frac{\sigma}{1+\delta}\rho^{-2}\int_\tau^t \int_{\Omega(x,2\rho)}(\rho^{-2}s)^{\sigma-1} z_\epsilon^{1+\delta} \zeta^2 \,dyds\\ & \qquad +C\rho^{-2}\int_\tau^t \int_{\Omega(x,2\rho)} (\rho^{-2}s)^\sigma z_\epsilon^{1+\delta} \,dyds +\int_\tau^t\int_{\partial\Omega(x,2\rho)}(\rho^{-2}s)^\sigma a(y,s)z_\epsilon^{1+\delta}\zeta^2 \,d\sigma ds \end{split} \label{eq:4.6}$$ for $0<\tau<t\le T$.
On the other hand, it follows from Lemma \[Lemma:4.1\], , and that $$\label{eq:4.7} \|a(t)\|_{L^\infty(\Omega)}\le Ct^{-\frac{N(p-1)}{2}}\left[\Psi_{1,\rho}[v](t)^{p-1}+\Psi_{1,\rho}[w](t)^{p-1}\right] \le C\Lambda^{p-1}\rho^{-1}(\rho^{-2}t)^{-\frac{N(p-1)}{2}}$$ for all $0<t\le\min\{T,\rho^2\}$. Furthermore, by Lemma \[Lemma:2.3\] we have $$\label{eq:4.8} \begin{split} & \int_{\partial\Omega(x,2\rho)}z_\epsilon^{1+\delta}\zeta^2 \,d\sigma \le\nu\int_{\Omega(x,2\rho)}|\nabla(z_\epsilon^{\frac{1+\delta}{2}}\zeta)|^2 \,dy +\frac{C}{\nu}\int_{\Omega(x,2\rho)}(z_\epsilon^{\frac{1+\delta}{2}}\zeta)^2 \,dy\\ & \qquad\qquad \le 2\nu\left(\frac{1+\delta}{2}\right)^2\int_{\Omega(x,2\rho)}\frac{|\nabla z_\epsilon|^2}{z_\epsilon^{1-\delta}}\zeta^2\,dy +2\nu\int_{\Omega(x,2\rho)}z_\epsilon^{1+\delta}|\nabla\zeta|^2\,dy\\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\,\,\, +\frac{C}{\nu}\int_{\Omega(x,2\rho)}z_\epsilon^{1+\delta}\zeta^2 \,dy \end{split}$$ for all $0<t\le T$ and $\nu>0$. By and we obtain $$\label{eq:4.9} \begin{split} & \int_\tau^t\int_{\partial\Omega(x,2\rho)}(\rho^{-2}s)^\sigma a(y,s)z_\epsilon^{1+\delta}\zeta^2 \,d\sigma ds\\ & \le\frac{\delta}{4}\int_\tau^t \int_{\Omega(x,2\rho)} (\rho^{-2}s)^\sigma\frac{|\nabla z_\epsilon|^2}{z_\epsilon^{1-\delta}}\zeta^2\,dyds +C\rho^{-2}\int_\tau^t \int_{\Omega(x,2\rho)}(\rho^{-2}s)^\sigma z_\epsilon^{1+\delta}\,dyds\\ & \qquad\qquad\qquad\qquad\qquad +C\rho^{-2}\int_\tau^t \int_{\Omega(x,2\rho)} (\rho^{-2}s)^{\sigma-N(p-1)}z_\epsilon^{1+\delta}\,dyds \end{split}$$ for all $0<\tau<t\le\min\{T,\rho^2\}$. 
We deduce from – that $$\label{eq:4.10} \begin{split} & \frac{\delta}{4}\int_\tau^t \int_{\Omega(x,2\rho)} (\rho^{-2}s)^\sigma\frac{|\nabla z_\epsilon|^2}{z_\epsilon^{1-\delta}}\zeta^2\,dyds \le \frac{(\rho^{-2}\tau)^\sigma}{1+\delta}\int_{\Omega(x,2\rho)}z_\epsilon(y,\tau)^{1+\delta}\,dy\\ & \qquad +C\rho^{-2}\int_\tau^t \int_{\Omega(x,2\rho)} [(\rho^{-2}s)^{\sigma-1} +(\rho^{-2}s)^\sigma+ (\rho^{-2}s)^{\sigma-N(p-1)}] z_\epsilon^{1+\delta}\,dyds \end{split}$$ for all $0<\tau<t\le\min\{T,\rho^2\}$. Furthermore, by Lemmas \[Lemma:2.1\] and \[Lemma:4.1\] we have $$\label{eq:4.11} \begin{split} & \sup_{x\in\overline{\Omega}}\,\int_{\Omega(x,2\rho)}z_\epsilon(y,s)^{1+\delta}\,dy \le 2M\sup_{x\in\overline{\Omega}}\,\int_{\Omega(x,\rho)}z_0(y,s)^{1+\delta}\,dy+C\epsilon^{1+\delta}\rho^N\\ & \qquad \le 2M\|z_0(s)\|_{L^\infty(\Omega)}^\delta\Psi_{1,\rho}[z_0](t)+C\epsilon^{1+\delta}\rho^N\\ & \qquad \le C(\rho^{-2}s)^{-\frac{\delta N}{2}}\rho^{-\delta N}\Psi_{1,\rho}[z_0](t)^{1+\delta}+C\epsilon^{1+\delta}\rho^N \end{split}$$ for all $0<s<t\le\min\{T,\rho^2\}$. It follows from $N(p-1)<1$ and $\sigma>\delta N/2$ that $$\sigma-N(p-1)-\frac{\delta N}{2}>\sigma-1-\frac{\delta N}{2}>-1.$$ Then, by and , passing to the limit as $\tau\to 0$ and $\epsilon\to 0$, we have $$\begin{split} & \limsup_{\epsilon\to 0}\,\sup_{x\in\overline{\Omega}}\, \int_0^t \int_{\Omega(x,\rho)} (\rho^{-2}s)^\sigma\frac{|\nabla z_\epsilon|^2}{z_\epsilon^{1-\delta}}\,dyds\\ & \le C\rho^{-2-\delta N}\Psi_{1,\rho}[z_0](t)^{1+\delta}\int_0^t (\rho^{-2}s)^{-\frac{\delta N}{2}} [(\rho^{-2}s)^{\sigma-1} +(\rho^{-2}s)^\sigma+ (\rho^{-2}s)^{\sigma-N(p-1)}]\,ds\\ & \le C\rho^{-\delta N}\mu^{\sigma-\frac{\delta N}{2}}\Psi_{1,\rho}[z_0](t)^{1+\delta} \end{split}$$ for all $0<t\le\min\{T,\mu\rho^2\}$ and $0<\mu\le 1$. This implies , and Lemma \[Lemma:4.2\] follows. $\Box$ \[Lemma:4.3\] Assume the same conditions as in Lemma [\[Lemma:4.2\]]{} with $\rho\in(0,\rho_*/2)$. 
Then there exists a constant $\mu\in(0,1)$ such that $$\label{eq:4.12} \Psi_{1,\rho}[z_0](t)\le 2M\Psi_{1,\rho}[z_0](0), \quad 0<t\le\min\{T,\mu\rho^2\}.$$ [**Proof.**]{} Let $x\in\overline{\Omega}$ and $\zeta$ be as in Lemma \[Lemma:2.4\]. Let $\sigma\in(0,1)$ and $\delta\in(0,1)$ be such that $$\label{eq:4.13} \frac{\delta N}{2}<\sigma<1-N(p-1) \qquad\mbox{and}\qquad p-1>\delta.$$ By we have $$\label{eq:4.14} \int_{\Omega(x,2\rho)}z_0\zeta^2\,dy\biggr|_{s=\tau}^{s=t} \le 2\int_\tau^t\int_{\Omega(x,2\rho)}|\nabla z_0||\nabla\zeta|\zeta\,dyds +\int_\tau^t\int_{\partial\Omega(x,2\rho)}a(y,s)z_0\zeta^2\,d\sigma ds$$ for $0<\tau<t\le T$. Furthermore, we have $$\label{eq:4.15} \begin{split} 2\int_\tau^t\int_{\Omega(x,2\rho)}|\nabla z_0||\nabla\zeta|\zeta\,dyds & \le\nu\limsup_{\epsilon\to 0}\int_\tau^t\int_{\Omega(x,2\rho)}(\rho^{-2}s)^\sigma\frac{|\nabla z_\epsilon|^2}{z_\epsilon^{1-\delta}}\zeta^2\,dyds\\ & \qquad\quad +C\nu^{-1}\rho^{-2}\int_\tau^t\int_{\Omega(x,2\rho)}(\rho^{-2}s)^{-\sigma}z_0^{1-\delta}\,dyds \end{split}$$ for $\nu>0$. 
On the other hand, by and we obtain $$\begin{split} & \int_\tau^t\int_{\partial\Omega(x,2\rho)}a(y,s)z_0\zeta^2\,d\sigma ds \le\int_\tau^t\int_{\partial\Omega(x,2\rho)}a(y,s)z_\epsilon\zeta^2\,d\sigma ds\\ & \le C\Lambda^{p-1}\rho^{-1}\int_\tau^t\int_{\partial\Omega(x,2\rho)} (\rho^{-2}s)^{-\frac{N(p-1)}{2}}z_\epsilon\zeta^2\,d\sigma ds\\ & \le C\rho^{-1}\int_\tau^t(\rho^{-2}s)^{-\frac{N(p-1)}{2}} \int_{\Omega(x,2\rho)}[|\nabla z_\epsilon|\zeta^2+2z_\epsilon\zeta|\nabla\zeta|]\,dy ds\\ & \le C\nu\int_\tau^t\int_{\Omega(x,2\rho)} (\rho^{-2}s)^\sigma\frac{|\nabla z_\epsilon|^2}{z_\epsilon^{1-\delta}}\zeta^2\,dyds\\ & \qquad\qquad +C\rho^{-2}\nu^{-1}\int_\tau^t(\rho^{-2}s)^{-\sigma-N(p-1)} \int_{\Omega(x,2\rho)} z_\epsilon^{1-\delta}\,dyds\\ & \qquad\qquad\qquad +C\rho^{-2}\int_\tau^t(\rho^{-2}s)^{-\frac{N(p-1)}{2}} \int_{\Omega(x,2\rho)} z_\epsilon\,dyds \end{split} \label{eq:4.16}$$ for $0<t\le\min\{T,\rho^2\}$, $\epsilon>0$ and $\nu>0$. Then it follows from Lemma \[Lemma:2.1\] and – that $$\label{eq:4.17} \begin{split} & \sup_{x\in\overline{\Omega}}\,\int_{\Omega(x,\rho)}z_0(y,t)\,dy\le M \sup_{x\in\overline{\Omega}}\,\int_{\Omega(x,\rho)}z_0(y,0)\,dy\\ & \qquad +C\nu\limsup_{\epsilon\to 0}\,\sup_{x\in\overline{\Omega}}\, \int_0^t\int_{\Omega(x,2\rho)}(\rho^{-2}s)^\sigma \frac{|\nabla z_\epsilon|^2}{z_\epsilon^{1-\delta}}\zeta^2\,dyds\\ & \qquad\qquad +C\nu^{-1}\rho^{-2}\sup_{x\in\overline{\Omega}}\,\int_0^t\int_{\Omega(x,\rho)}(\rho^{-2}s)^{-\sigma}z_0^{1-\delta}\,dyds\\ & \qquad\qquad\qquad +C\rho^{-2}\nu^{-1}\sup_{x\in\overline{\Omega}}\,\int_0^t(\rho^{-2}s)^{-\sigma-N(p-1)} \int_{\Omega(x,\rho)} z_0^{1-\delta}\,dyds\\ & \qquad\qquad\qquad\qquad +C\rho^{-2}\sup_{x\in\overline{\Omega}}\,\int_0^t(\rho^{-2}s)^{-\frac{N(p-1)}{2}} \int_{\Omega(x,\rho)} z_0\,dyds \end{split}$$ for $0<t\le\min\{T,\rho^2\}$ and $\nu>0$. 
Furthermore, by the Hölder inequality we have $$\label{eq:4.18} \sup_{x\in\overline{\Omega}}\, \int_{\Omega(x,\rho)} z_0(y,t)^{1-\delta}\,dy\le C\rho^{\delta N}\Psi_{1,\rho}[z_0](t)^{1-\delta}, \quad t>0.$$ Then we deduce from , and that $$\begin{split} \Psi_{1,\rho}[z_0](t) & \le M\Psi_{1,\rho}[z_0](0) +C\nu\mu^{\sigma-\frac{\delta N}{2}}\rho^{-\delta N}\Psi_{1,\rho}[z_0](t)^{1+\delta}\\ & \quad +C\nu^{-1}\rho^{\delta N}\Psi_{1,\rho}[z_0](t)^{1-\delta}\rho^{-2}\int_0^t[(\rho^{-2}s)^{-\sigma}+(\rho^{-2}s)^{-\sigma-N(p-1)}]\,ds\\ & \qquad +C\rho^{-2}\Psi_{1,\rho}[z_0](t)\int_0^t(\rho^{-2}s)^{-\frac{N(p-1)}{2}}\,ds \end{split}$$ for $0<t\le\min\{T,\mu\rho^2\}$, $0<\mu\le 1$ and $\nu>0$. Then, taking $\nu=\rho^{\delta N}\Psi_{1,\rho}[z_0](t)^{-\delta}$ if $\Psi_{1,\rho}[z_0](t)\not=0$, we can find a positive constant $\mu\in(0,1)$ such that $$\begin{split} \Psi_{1,\rho}[z_0](t) & \le M\Psi_{1,\rho}[z_0](0) +C(\mu^{\sigma-\frac{\delta N}{2}}+\mu^{1-\sigma-N(p-1)}+\mu^{1-\frac{N(p-1)}{2}})\Psi_{1,\rho}[z_0](t)\\ & \le M\Psi_{1,\rho}[z_0](0)+\frac{1}{2}\Psi_{1,\rho}[z_0](t) \end{split}$$ for $0<t\le\min\{T,\mu\rho^2\}$. This implies , and Lemma \[Lemma:4.3\] follows. $\Box$ Now we are ready to prove Theorem \[Theorem:1.1\] in the case $r=1$. [**Proof of Theorem \[Theorem:1.1\] in the case $r=1$.**]{} It suffices to consider the case $1<p<1+1/N$. Let $\gamma_1$ be a sufficiently small positive constant and assume . Let $\{\varphi_n\}$ satisfy and define $T_n^*$ and $T_n^{**}$ as in . Then it follows from that $$\label{eq:4.19} \rho^{\frac{1}{p-1}-N}\Psi_{1,\rho}[u_n](t)\le 6M\rho^{\frac{1}{p-1}-N}\Psi_{1,\rho}[u_n](0)\le 12M\gamma_1$$ for all $0\le t\le T_n^*$. By Lemma \[Lemma:4.1\] we have $$\label{eq:4.20} \|u_n(t)\|_{L^\infty(\Omega)}\le Ct^{-\frac{N}{2}}\Psi_{1,\rho}[u_n](t)$$ for $0<t\le\min\{T_n^{**},\rho^2\}<T_n$ and $n=1,2,\dots$. 
Then, taking a sufficiently small $\gamma_1$ and applying Lemma \[Lemma:4.3\] with $v=u_n$ and $w=0$, we can find a constant $\mu\in(0,1)$ such that $$\label{eq:4.21} \Psi_{1,\rho}[u_n](t)\le 2M\Psi_{1,\rho}[u_n](0)$$ for $0<t\le\min\{T_n^*,T_n^{**},\mu\rho^2\}$ and $n=1,2,\dots$. This implies that $\min\{T_n^{**},\mu\rho^2\}<T_n^*$ for $n=1,2,\dots$. Furthermore, by –, taking a sufficiently small $\mu$ if necessary, we obtain $$\begin{split} (\rho^{-2}t)^{\frac{1}{2}}+t^{\frac{1}{2}}\|u_n(t)\|_{L^\infty(\Omega)}^{p-1} & \le \mu^{\frac{1}{2}}+C(\rho^{-2}t)^{-\frac{N(p-1)}{2}+\frac{1}{2}}\gamma_1^{p-1}\\ & \le \mu^{\frac{1}{2}}+C\mu^{-\frac{N(p-1)}{2}+\frac{1}{2}}\gamma_1^{p-1}\le 1 \end{split}$$ for $0<t\le\min\{\mu\rho^2,T_n^{**}\}$. This yields $T_n^{**}>\mu\rho^2$ for $n=1,2,\dots$. Therefore, by , , and we obtain $$\label{eq:4.22} \sup_{0<\tau<t}\|u_n(\tau)\|_{1,\rho}=\Psi_{1,\rho}[u_n](t)\le C\|\varphi\|_{1,\rho}, \quad \|u_n(t)\|_{L^\infty(\Omega)}\le Ct^{-\frac{N}{2}}\|\varphi\|_{1,\rho},$$ for all $0<t\le\mu\rho^2$ and $n=1,2,\dots$. Furthermore, applying Lemma \[Lemma:4.3\] with $v=u_m$ and $w=u_n$ and taking a sufficiently small $\mu$ if necessary, we see that $$\sup_{0<\tau<\mu\rho^2}\|u_m(\tau)-u_n(\tau)\|_{1,\rho}\le 2M\|u_m(0)-u_n(0)\|_{1,\rho}.$$ Then, by the same argument as in the proof for the case $r>1$ we see that there exists an $L^1_{uloc}(\Omega)$-solution $u$ of in $\Omega\times[0,\mu\rho^2]$ satisfying and . Thus the proof of Theorem \[Theorem:1.1\] in the case $r=1$ is complete. $\Box$ [**Proof of Theorem \[Theorem:1.2\] in the case $r=1$.**]{} Let $v$ and $w$ be $L^1_{uloc}(\Omega)$-solutions of in $\Omega\times[0,T)$, where $0<T\le\infty$. Assume .
Then, for any $0<T'<T$, we have $$\|v(t)\|_{L^\infty(\Omega)}+\|w(t)\|_{L^\infty(\Omega)}\le Ct^{-\frac{1}{2(p-1)}},\qquad 0<t\le T'.$$ By Lemma \[Lemma:4.3\] we can find a positive constant $\mu\in(0,1)$ such that $$\|(v(t)-w(t))_+\|_{1,\rho}\le 2M\|(v(0)-w(0))_+\|_{1,\rho}=0$$ for all $0<t\le\min\{T',\mu\rho^2\}$. Repeating this argument, we see that $$\|(v(t)-w(t))_+\|_{1,\rho}\le 0$$ for all $0<t\le T'$. Since $T'$ is arbitrary, we deduce that $v(x,t)\le w(x,t)$ in $\Omega\times(0,T)$. Thus Theorem \[Theorem:1.2\] in the case $r=1$ follows. $\Box$ [**Proof of Corollary \[Corollary:1.1\].**]{} Let $p>1+1/N$ and $\varphi\in L^{N(p-1)}(\Omega)$. By we can find $\rho\in(0,\rho_*)$ such that $$\|\varphi\|_{N(p-1),\rho}\le\gamma_1,$$ where $\gamma_1$ is the constant given in Theorem \[Theorem:1.1\]. Then assertion (i) follows from Theorem \[Theorem:1.1\]. Furthermore, if $\rho_*=\infty$ and $\varphi$ satisfies , then assertion (i) of Theorem \[Theorem:1.1\] holds for any $\rho>0$. This implies assertion (ii), and Corollary \[Corollary:1.1\] follows. $\Box$ Applications ============ In this section, as an application of Theorem \[Theorem:1.1\], we give lower estimates of the blow-up time and the blow-up rate for problem . Blow-up time ------------ Let $T(\lambda\psi)$ be the blow-up time of the solution of with the initial function $\varphi=\lambda\psi$. In this subsection we study the behavior of $T(\lambda\psi)$ as $\lambda\to\infty$ or $\lambda\to 0$. \[Theorem:5.1\] Let $N\ge 1$ and $\Omega\subset{\bf R}^N$ be a uniformly regular domain of class $C^1$. 
Let $r$ satisfy $$N(p-1)<r\le\infty\quad\mbox{if}\quad p\ge p_* \qquad\mbox{and}\qquad 1\le r\le\infty\quad\mbox{if}\quad 1<p<p_*.$$ Then, for any $\psi\in L^r_{uloc,\rho}(\Omega)$ with $\rho>0$, there exists a positive constant $C$ such that $$T(\lambda\psi)\ge \left\{ \begin{array}{ll} C(\lambda\|\psi\|_{r,\rho})^{-\frac{2r(p-1)}{r-N(p-1)}} & \mbox{if}\quad r<\infty,\vspace{3pt}\\ C(\lambda\|\psi\|_{L^\infty(\Omega)})^{-2(p-1)} & \mbox{if}\quad r=\infty, \end{array} \right.$$ for all sufficiently large $\lambda$. [**Proof.**]{} Let $\gamma_1$ and $\mu$ be constants given in Theorem \[Theorem:1.1\]. If $r<\infty$, by Theorem \[Theorem:1.1\] we see that $$T(\lambda\psi)\ge\mu\left(\frac{\gamma_1}{\lambda\|\psi\|_{r,\rho}}\right)^{2(\frac{1}{p-1}-\frac{N}{r})^{-1}} \ge C(\lambda\|\psi\|_{r,\rho})^{-\frac{2r(p-1)}{r-N(p-1)}}$$ for all sufficiently large $\lambda$. If $r=\infty$, then $$\|\lambda\psi\|_{N(p-1),\rho}\le C\lambda\|\psi\|_{L^\infty(\Omega)}\rho^{\frac{1}{p-1}}.$$ It follows from Theorem \[Theorem:1.1\] that $$T(\lambda\psi)\ge\mu\left(\frac{\gamma_1}{C\lambda\|\psi\|_{L^\infty(\Omega)}}\right)^{2(p-1)} \ge C(\lambda\|\psi\|_{L^\infty(\Omega)})^{-2(p-1)}$$ for all sufficiently large $\lambda$. Thus Theorem \[Theorem:5.1\] follows. $\Box$ \[Theorem:5.2\] Let $N\ge 1$ and $\Omega\subset{\bf R}^N$ be a uniformly regular domain of class $C^1$. Assume $$\label{eq:5.1} \sup_{x\in\overline{\Omega}}|x|^\beta|\psi(x)|<\infty,$$ where $0\le\beta<N$ if $1<p<p_*$ and $0\le\beta<1/(p-1)$ if $p\ge p_*$. Then there exists a positive constant $C_1$ such that $$\label{eq:5.2} T(\lambda\psi)\ge C_1\lambda^{-\frac{2(p-1)}{1-\beta(p-1)}}$$ for all sufficiently large $\lambda$. Furthermore, if $\Omega={\bf R}^N_+$ and $$\label{eq:5.3} \inf_{x\in\Omega(0,\delta)}|x|^\beta\psi(x)>0$$ for some $\delta>0$, then there exists a positive constant $C_2$ such that $$\label{eq:5.4} T(\lambda\psi)\le C_2\lambda^{-\frac{2(p-1)}{1-\beta(p-1)}}$$ for all sufficiently large $\lambda$. 
[**Proof.**]{} In the case $p\ge p_*$, let $r>1$, $r>N(p-1)$ and $\beta<N/r$. In the case $1<p<p_*$, let $r=1$. It follows from that $\rho^{\frac{1}{p-1}-\frac{N}{r}}\|\psi\|_{r,\rho}\le C\rho^{\frac{1}{p-1}-\beta}$ for all sufficiently small $\rho>0$. This together with Theorem \[Theorem:1.1\] implies . Assume . Let $v$ be a solution of $$\left\{ \begin{array}{ll} \partial_t v=\Delta v & \quad\mbox{in}\quad{\bf R}^N_+\times(0,\infty),\vspace{3pt}\\ \nabla v\cdot\nu(x)=0 & \quad\mbox{in}\quad\partial{\bf R}^N_+\times(0,\infty),\vspace{3pt}\\ v(x,0)=A|x|^{-\beta}\chi_{B(0,\delta)} & \quad\mbox{in}\quad{\bf R}^N_+, \end{array} \right.$$ where $A$ is a positive constant chosen so that $\psi(x)\ge v(x,0)$ in ${\bf R}^N_+$. By [@DFL Lemma 2.1.2] we can find a constant $c_p$ depending only on $p$ such that $$\label{eq:5.5} \lambda\|v(\cdot,0,t)\|_{L^\infty({\bf R}^{N-1})} \le c_pt^{-\frac{1}{2(p-1)}},\qquad 0<t<T(\lambda v(0)).$$ On the other hand, since $T(\lambda\psi)\le T(\lambda v(0))$ and $$\|v(\cdot,0,t)\|_{L^\infty({\bf R}^{N-1})}\ge Ct^{-\frac{\beta}{2}},\quad 0<t\le1,$$ we have $$\lambda T(\lambda\psi)^{\frac{1}{2(p-1)}-\frac{\beta}{2}} \le \lambda T(\lambda v(0))^{\frac{1}{2(p-1)}-\frac{\beta}{2}}\le Cc_p,$$ which implies . Thus Theorem \[Theorem:5.2\] follows. $\Box$ For the case $\Omega=(0,\infty)$, Fernández Bonder and Rossi [[@FR]]{} proved $$\lim_{\lambda\to\infty}\lambda^{2(p-1)}T(\lambda\psi)=T(\psi(0))$$ provided that $\psi$ is bounded, continuous and positive on $[0,\infty)$. Motivated by [@LN], we consider the case $\Omega={\bf R}^N_+$ and study the behavior of the blow-up time $T(\lambda\psi)$ as $\lambda\to 0$. \[Theorem:5.3\] Let $\Omega={\bf R}^N_+$ and assume $$\label{eq:5.6} \sup_{x\in{\bf R}^N_+}\,(1+|x|)^\beta|\psi(x)|<\infty$$ for some $\beta\ge 0$. Let $\lambda>0$ and consider problem  with $\varphi=\lambda\psi$.
Then there exists a positive constant $C_1$ such that $$\label{eq:5.7} T(\lambda\psi)\ge C_1f(\lambda)$$ for all sufficiently small $\lambda>0$, where $$f(\lambda):=\left\{ \begin{array}{lll} \lambda^{-\frac{2(p-1)}{1-\beta(p-1)}} & \mbox{if}\quad p\ge p_*, & 0\le\beta<\frac{1}{p-1},\\ \lambda^{-\frac{2(p-1)}{1-\beta(p-1)}} & \mbox{if}\quad 1<p<p_*, & 0\le\beta<N,\\ (\lambda|\log\lambda|)^{-\frac{2(p-1)}{1-N(p-1)}} & \mbox{if}\quad 1<p<p_*, & \beta=N,\\ \lambda^{-\frac{2(p-1)}{1-N(p-1)}} & \mbox{if}\quad1<p<p_*, & \beta>N. \end{array} \right.$$ Furthermore, if $$\inf_{x\in{\bf R}^N_+}\,(1+|x|)^\beta\psi(x)>0,$$ then there exists a positive constant $C_2$ such that $$\label{eq:5.8} T(\lambda\psi)\le C_2f(\lambda)$$ for all sufficiently small $\lambda>0$. [**Proof.**]{} Consider the case $p\ge p_*$. Let $0\le \beta<1/(p-1)$, $r>N(p-1)$ and $\beta<N/r$. By  we have $$\rho^{\frac{1}{p-1}-\frac{N}{r}}\|\lambda\psi\|_{r,\rho} \le C\lambda\rho^{-\beta+\frac{1}{p-1}}$$ for all sufficiently large $\rho$. Similarly, in the case $p<p_*$, it follows from that $$\rho^{\frac{1}{p-1}-N}\|\lambda\psi\|_{1,\rho} \le \left\{ \begin{array}{ll} C\lambda\rho^{-\beta+\frac{1}{p-1}} & \mbox{if}\quad 0\le\beta<N,\vspace{3pt}\\ C\lambda\rho^{\frac{1}{p-1}-N}\log\rho & \mbox{if}\quad \beta=N,\vspace{3pt}\\ C\lambda\rho^{\frac{1}{p-1}-N} & \mbox{if}\quad\beta>N, \end{array} \right.$$ for all sufficiently large $\rho$. Therefore, by Theorem \[Theorem:1.1\] we obtain in the case $p\ge p_*$ $$T(\lambda\psi)\ge C\lambda^{-\frac{2}{-\beta+\frac{1}{p-1}}} =C\lambda^{-\frac{2(p-1)}{1-\beta(p-1)}}$$ and in the case $1<p<p_*$ $$T(\lambda\psi) \ge\left\{ \begin{array}{ll} C\lambda^{-\frac{2(p-1)}{1-\beta(p-1)}} & \mbox{if}\quad 0\le\beta<N,\vspace{3pt}\\ C(\lambda|\log\lambda|)^{-\frac{2(p-1)}{1-N(p-1)}} & \mbox{if}\quad\beta=N,\vspace{3pt}\\ C\lambda^{-\frac{2(p-1)}{1-N(p-1)}} & \mbox{if}\quad\beta>N, \end{array} \right.$$ for all sufficiently small $\lambda>0$. These imply . 
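To make the exponent bookkeeping in the lower estimates explicit, we sketch the case $p\ge p_*$; the remaining cases are analogous, with the logarithmic correction when $\beta=N$. The smallness condition of Theorem \[Theorem:1.1\] holds once $C\lambda\rho^{-\beta+\frac{1}{p-1}}\le\gamma_1$, that is, for $$\rho\le\left(\frac{\gamma_1}{C\lambda}\right)^{\left(\frac{1}{p-1}-\beta\right)^{-1}} =\left(\frac{\gamma_1}{C\lambda}\right)^{\frac{p-1}{1-\beta(p-1)}}.$$ Since $\beta<1/(p-1)$, the right-hand side tends to infinity as $\lambda\to 0$, so this choice of $\rho$ is compatible with the requirement that $\rho$ be sufficiently large, and Theorem \[Theorem:1.1\] yields $$T(\lambda\psi)\ge\mu\rho^2\ge C\lambda^{-\frac{2(p-1)}{1-\beta(p-1)}}.$$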
Let $v$ be a solution of $$\left\{ \begin{array}{ll} \partial_t v=\Delta v & \quad\mbox{in}\quad{\bf R}^N_+\times(0,\infty),\vspace{3pt}\\ \nabla v\cdot\nu(x)=0 & \quad\mbox{in}\quad\partial{\bf R}^N_+\times(0,\infty),\vspace{3pt}\\ v(x,0)=A(1+|x|)^{-\beta} & \quad\mbox{in}\quad{\bf R}^N_+, \end{array} \right.$$ where $A$ is a positive constant to be chosen as $\psi(x)\ge v(x,0)$ in ${\bf R}^N_+$. Since $T(\lambda\psi)\le T(\lambda v(0))$ and $$\|v(\cdot,0,t)\|_{L^\infty({\bf R}^{N-1})} \ge\left\{ \begin{array}{ll} Ct^{-\frac{\beta}{2}} & \mbox{if}\quad 0\le\beta<N,\vspace{3pt}\\ Ct^{-\frac{N}{2}}\log t & \mbox{if}\quad\beta=N,\vspace{3pt}\\ Ct^{-\frac{N}{2}} & \mbox{if}\quad\beta>N, \end{array} \right.$$ for all sufficiently large $t$, by a similar argument as in the proof of we obtain . Thus Theorem \[Theorem:5.3\] follows. $\Box$ Blow-up rate ------------ Let $u$ be a solution of in $\Omega\times[0,T)$, where $0<T<\infty$, such that $u$ blows up at $t=T$. In this subsection, as a corollary of Theorem \[Theorem:1.1\], we state a result on lower estimates of the blow-up rate of the solution $u$. Blow-up rate of positive solutions for problem  was first obtained by Fila and Quittner [@FQ], where it was shown that $$\label{eq:5.9} \limsup_{t\to T}\,(T-t)^{\frac{1}{2(p-1)}}\|u(t)\|_{L^\infty(\Omega)}<\infty$$ holds in the case where $\Omega$ is a ball, the initial function $\varphi$ is radially symmetric and satisfies some monotonicity assumptions. Subsequently, it was proved that holds for positive solutions in the following cases: - $\Omega$ is a bounded smooth domain, $(N-2)p<N$ and $\partial_t u\ge 0$ in $\Omega\times(0,T)$ (see [@GH], [@H1] and [@HM]); - $\Omega$ is a bounded smooth domain and $p\le 1+1/N$ (see [@H3]); - $\Omega={\bf R}^N_+$ and $(N-2)p<N$ (see [@CF02]). See [@QS2] for sign changing solutions. 
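As a heuristic for the exponents appearing above and in Theorem \[Theorem:5.4\] below (this remark is not used in the proofs), recall that the model problem with boundary nonlinearity $\nabla u\cdot\nu=u^p$ on a half-space is invariant under the scaling $$u_\kappa(x,t):=\kappa^{\frac{1}{p-1}}u(\kappa x,\kappa^2t),\qquad \kappa>0,$$ since $\nabla u_\kappa\cdot\nu=\kappa^{\frac{p}{p-1}}\,\nabla u\cdot\nu=u_\kappa^p$ on the boundary. The rate $(T-t)^{-\frac{1}{2(p-1)}}$ is precisely the blow-up rate compatible with this scaling, and $$\|u_\kappa(0)\|_{L^r}=\kappa^{\frac{1}{p-1}-\frac{N}{r}}\|u(0)\|_{L^r}$$ is scale invariant exactly when $r=N(p-1)$, the critical exponent of Theorem \[Theorem:5.4\].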
On the other hand, for positive solutions, it was shown in [@HM] that $$\label{eq:5.10} \liminf_{t\to T}\,(T-t)^{\frac{1}{2(p-1)}}\|u(t)\|_{L^\infty(\Omega)}>0$$ holds if $\Omega$ is a bounded smooth domain (see also [@GH] and [@H1]). We state a result on lower estimates of the blow-up rate of the solutions. Theorem \[Theorem:5.4\] is a generalization of and it holds without assuming boundedness of the domain $\Omega$ or positivity of the solutions. \[Theorem:5.4\] Let $N\ge 1$ and $\Omega\subset{\bf R}^N$ be a uniformly regular domain of class $C^1$. Let $u$ be a solution of blowing up at $t=T<\infty$. Then $$\label{eq:5.11} \liminf_{t\to T}\,(T-t)^{\frac{1}{2(p-1)}-\frac{N}{2r}}\|u(t)\|_{L^r(\Omega)}>0,$$ where $$\label{eq:5.12} \left\{ \begin{array}{ll} N(p-1)\le r\le\infty & \mbox{if}\quad p>1+1/N,\vspace{3pt}\\ 1<r\le\infty & \mbox{if}\quad p=1+1/N,\vspace{3pt}\\ 1\le r\le\infty & \mbox{if}\quad 1<p<1+1/N. \end{array} \right.$$ [**Proof.**]{} Let $1\le r<\infty$ satisfy . By Theorem \[Theorem:1.1\] we can find positive constants $\gamma_1$ and $\mu$ such that, if $$\|u(t)\|_{r,\rho}\le\gamma_1\rho^{\frac{N}{r}-\frac{1}{p-1}}$$ for some $\rho\in(0,\rho_*/2)$, then the solution $u$ exists in $\Omega\times(0,t+\mu\rho^2]$. Since the solution $u$ blows up at $t=T$, we can find a constant $\delta>0$ such that $$\label{eq:5.13} \|u(t)\|_{r,\rho(t)}>\gamma_1\rho(t)^{\frac{N}{r}-\frac{1}{p-1}}\quad\mbox{for}\quad t\in(T-\delta,T),$$ where $$\rho(t):=\left(\frac{T-t}{\mu}\right)^{\frac{1}{2}}.$$ This implies in the case $r<\infty$. Furthermore, by , for any $t\in(T-\delta, T)$, there exist $x(t)\in\overline{\Omega}$ and $y(t)\in\Omega(x(t),\rho(t))$ such that $$C\rho(t)^Nu(y(t),t)^r\ge \int_{\Omega(x(t),\rho(t))}u(y,t)^r\,dy\ge\frac{\gamma_1^r}{2}\rho(t)^{N-\frac{r}{p-1}}.$$ This yields in the case $r=\infty$, and Theorem \[Theorem:5.4\] follows.
$\Box$

[**Acknowledgements.**]{} The first author was supported in part by a Grant-in-Aid for Scientific Research (B) (No. 23340035) from the Japan Society for the Promotion of Science. The second author was supported in part by a Research Fellowship of the Japan Society for the Promotion of Science.

[10]{} R. A. Adams, [*Sobolev spaces*]{}, Pure and Applied Mathematics, [**65**]{}, Academic Press, 1975. D. Andreucci, New results on the Cauchy problem for parabolic systems and equations with strongly nonlinear sources, Manuscripta Math. [**77**]{} (1992), 127–159. D. Andreucci and E. DiBenedetto, On the Cauchy problem and initial traces for a class of evolution equations with strongly nonlinear sources, Ann. Scuola Norm. Sup. Pisa Cl. Sci. [**18**]{} (1991), 363–441. M. Chleb[í]{}k and M. Fila, From critical exponents to blow-up rates for parabolic problems, Rend. Mat. Appl. [**19**]{} (1999), 449–470. M. Chleb[í]{}k and M. Fila, On the blow-up rate for the heat equation with a nonlinear boundary condition, Math. Methods Appl. Sci. [**23**]{} (2000), 1323–1330. M. Chleb[í]{}k and M. Fila, Some recent results on blow-up on the boundary for the heat equation, in: Evolution Equations: Existence, Regularity and Singularities, Banach Center Publ., [**52**]{}, Polish Acad. Sci., Warsaw, (2000), 61–71. K. Deng, M. Fila, and H. A. Levine, On critical exponents for a system of heat equations coupled in the boundary conditions, Acta Math. Univ. Comenian [**63**]{} (1994), 169–192. E. DiBenedetto, Continuity of weak solutions to a general porous medium equation, Indiana Univ. Math. J. [**32**]{} (1983), 83–118. E. DiBenedetto, [*Degenerate parabolic equations*]{}, Universitext, Springer-Verlag, New York, 1993. J. Fernández Bonder and J. D. Rossi, Life span for solutions of the heat equation with a nonlinear boundary condition, Tsukuba J. Math. [**25**]{} (2001), 215–220. M. Fila, Boundedness of global solutions for the heat equation with nonlinear boundary conditions, Comm. Math.
Univ. Carol. [**30**]{} (1989), 479–484. M. Fila and P. Quittner, The blow-up rate for the heat equation with a nonlinear boundary condition, Math. Methods Appl. Sci. [**14**]{} (1991), 197–205. J. Filo and J. Kačur, Local existence of general nonlinear parabolic systems. Nonlinear Anal. [**24**]{} (1995), 1597–1618. V. A. Galaktionov and H. A. Levine, On critical Fujita exponents for heat equations with nonlinear flux conditions on the boundary, Israel J. Math. [**94**]{} (1996), 125–146. M.-H. Giga, Y. Giga, and J. Saal, [*Nonlinear Partial Differential Equations, Asymptotic Behavior of Solutions and Self-Similar Solutions*]{}, Progr. Nonlinear Differential Equations Appl., [**79**]{}, Birkhäuser Boston, Inc., Boston, MA, 2010. J.-S. Guo and B. Hu, Blowup rate for heat equation in Lipschitz domains with nonlinear heat source terms on the boundary, J. Math. Anal. Appl. [**269**]{} (2002), 28–49. J. Harada, Single point blow-up solutions to the heat equation with nonlinear boundary conditions, Differ. Equ. Appl. [**5**]{} (2013), 271–295. B. Hu, Nonexistence of a positive solution of the Laplace equation with a nonlinear boundary condition, Differential Integral Equations [**7**]{} (1994), 301–313. B. Hu, Nondegeneracy and single-point-blowup for solution of the heat equation with a nonlinear boundary condition, J. Math. Sci. Univ. Tokyo [**1**]{} (1994), 251–276. B. Hu, Remarks on the blowup estimate for solution of the heat equation with a nonlinear boundary condition, Differential Integral Equations [**9**]{} (1996), 891–901. B. Hu and H.-M. Yin, The profile near blowup time for solution of the heat equation with a nonlinear boundary condition, Trans. Amer. Math. Soc. [**346**]{} (1994), 117–135. K. Ishige, On the existence of solutions of the Cauchy problem for a doubly nonlinear parabolic equation, SIAM J. Math. Anal. [**27**]{} (1996), 1235–1260. K. Ishige and T. Kawakami, Global solutions of the heat equation with a nonlinear boundary condition, Calc. Var. 
Partial Differential Equations [**39**]{} (2010) 429–457. T. Kawakami, Global existence of solutions for the heat equation with a nonlinear boundary condition, J. Math. Anal. Appl. [**368**]{} (2010), 320–329. O. A. Ladyženskaja, V. A. Solonnikov, and N. N. Ural’ceva, [*Linear and Quasi-linear Equations of Parabolic Type*]{}, American Mathematical Society Translations, vol. 23, American Mathematical Society, Providence, RI, 1968. T.-Y. Lee and W.-M. Ni, Global existence, large time behavior and life span of solutions of a semilinear parabolic Cauchy problem, Trans. Amer. Math. Soc. [**333**]{} (1992), 365–378. Y. Maekawa and Y. Terasawa, The Navier-Stokes equations with initial data in uniformly local $L^p$ spaces, Differential Integral Equations [**19**]{} (2006), 369–400. M. Nakao, Global solutions for some nonlinear parabolic equations with nonmonotonic perturbations, Nonlinear Anal. [**10**]{} (1986), 299–314. P. Quittner and P. Souplet, [*Superlinear Parabolic Problems, Blow-up, Global Existence and Steady States*]{}, Birkhäuser Advanced Texts: Basler Lehrbücher Birkhäuser Verlag, Basel, 2007. P. Quittner and P. Souplet, Blow-up rate of solutions of parabolic problems with nonlinear boundary conditions, Discrete Contin. Dyn. Syst. Ser. S [**5**]{} (2012), 671–681.
---
abstract: 'We illustrate how rotation of the central star can give rise to latitudinal variations in the wind properties from the star. Interaction of these winds with the surrounding medium can produce asymmetrical planetary nebulae.'
author:
- 'Vikram V. Dwarkadas'
title: Stellar Rotation and the Formation of Asymmetric Nebulae
---

Introduction
=============

Planetary Nebulae (PNe) exhibit a dazzling variety of shapes, as emphasized so effectively in various talks at this conference. While this diversity does not allow easy classification, it is universally accepted that most, if not all, PNe do not show spherical symmetry. Balick (1987) classified them as ranging from spherical through elliptical to bipolar, the last of which show two lobes emanating from an equatorial waist. The origin of this asymmetry in shape has been the cause of much speculation. A favored model for many years was the Generalized Interacting Stellar Winds Model (GISW; see Frank 1999 and references therein), which stated that the asymmetry was due to the expansion of the nebula within a structured ambient medium, whose density was higher at the equator than at the poles. The high equatorial density inhibits the expansion at the equator, leading to a prolate or, for very high density contrasts, a bipolar nebula. While many authors have shown (e.g., Dwarkadas, Chevalier & Blondin 1996) that this model can reproduce a vast diversity of morphologies, it does not seem to reproduce many of the details. Besides, the nagging question of what produces the asymmetry in the surrounding medium has always persisted. Rotation, binary evolution, magnetic fields, pre-existing disks, stellar pulsations and many other suggestions have been put forward, none of them ubiquitous or totally convincing. Many authors have also questioned whether it is likely that such an asymmetric density distribution is really present in every aspherical planetary nebula, given the lack of visible evidence.
In this work we show that rotation of the central star can lead to aspherical mass-loss from the star. This aspherical wind expanding into a constant density medium forms an aspherical nebula, without having to resort to any external asymmetries. Our results, although derived primarily from radiatively driven wind-theory applied to high luminosity stars (see Dwarkadas & Owocki 2002 for further details), have more general applicability, and could prove relevant for the central stars of PNe.

Effect of Rotation
==================

For a rotating star, reduction by the radial component of the centrifugal acceleration yields an effective gravity that scales with co-latitude $\theta$ as $$g_{\rm eff}(\theta) = g \left( 1 - \Omega^2 \sin^2 \theta - \frac{\kappa_e F(\theta)}{g c} \right) ,$$ where $\Omega \equiv \omega/\omega_c$, with $\omega$ the star’s angular rotation frequency, and $\omega_c \equiv \sqrt{g/R}$, $F$ is the stellar radiative flux and $\kappa_e$ is the electron scattering opacity. For a broad range of stellar wind-driving mechanisms, flow terminal speed scales directly with the surface escape speed. If the star is uniformly bright, then the latitudinal variation of terminal speed goes as: $$\frac{v_\infty (\theta)}{v_\infty (0)} = \left[ \frac{g_{\rm eff} (\theta)}{g_{\rm eff} (0)} \right]^{1/2} = \left[ \frac{1 - \Omega^2 \sin^2 \theta - \Gamma}{1 - \Gamma} \right]^{1/2} ,$$ where the polar speed $v_\infty (0) \sim \sqrt{gR(1-\Gamma)}$, and $\Gamma =$ Eddington parameter. However, if $F(\theta) \propto g (1 - \Omega^2 \sin^2 \theta)$ (gravity darkening effect, von Zeipel, 1924) then this implies $$\frac{v_\infty (\theta)}{v_\infty (0)} = \left[ \frac{g_{\rm eff} (\theta)}{g_{\rm eff} (0)} \right]^{1/2} = \left( 1 - \Omega^2 \sin^2 \theta \right)^{1/2} .$$ The evolution of the wind velocity is quite general, and does not depend strongly on the details of the wind-driving mechanism. This is not true for the mass-loss rate. In order to determine the mass-loss rate, we must include more details of the specific wind-driving mechanism. We derive our results from radiatively-driven wind theory (see Owocki et al. 1998; Dwarkadas & Owocki 2002), applicable mainly to PNe with hot O and WR type central stars.
For a star with luminosity $L$, the CAK, line-driven mass loss rate (Castor, Abbott & Klein 1975) can be written in terms of the mass flux ${\dot m} \equiv {\dot M}/4 \pi R^2$ at the stellar surface radius $R$, which then depends on the surface radiative flux $F=L/4 \pi R^2$ and the effective surface gravity $g_{eff} \equiv (GM/R^2)(1-\Gamma)$ through (Owocki, Cranmer & Gayley 1998; Dwarkadas & Owocki 2002) $${\dot m} \propto F^{1/\alpha} \, g_{\rm eff}^{1-1/\alpha} ,$$ where $\alpha < 1$. If the radiative flux $F$ is constant over the stellar surface, then $\kappa_e F/gc= \Gamma$ in equation (1), and application of equation (4) yields $$\frac{ {\dot m} (\theta)}{ {\dot m} (0)} = \left[ \frac{g_{\rm eff} (\theta)}{g_{\rm eff} (0)} \right]^{1 - 1/\alpha} = \left[ \frac{1 - \Omega^2 \sin^2 \theta - \Gamma}{1 - \Gamma} \right]^{1 - 1/\alpha} .$$ Since the exponent $1-1/\alpha$ is negative, the mass flux from such a uniformly bright, rotating star increases from the pole ($\theta = 0$) to the equator ($\theta = 90^{\circ}$). However, taking gravity darkening into account yields the mass flux scaling: $$\label{eq:mfvz} \frac{ {\dot m} (\theta)}{ {\dot m} (0)} = \frac{F (\theta)}{F(0)} = 1 - \Omega^2 \sin^2 \theta ,$$ i.e. the mass flux is highest at the poles, and decreases towards the equator. Rotation thus generates a wind that is faster at the poles, but denser at the equator. The inclusion of gravity darkening leads to a wind that is both faster and denser at the poles than the equator. These effects are illustrated in Fig 1. Figure 2 shows results from simulations for values of the rotation parameter ${\tilde{\Omega}}=\Omega / (1 - \Gamma)$, without including gravity darkening. The nebula is still in the early stages of evolution, and there is no hot, high-pressure bubble driving the expansion. As the rotation parameter increases from 50% to 90% of critical, the nebular morphology changes from nearly spherical to distinctly bipolar. Inclusion of gravity darkening alters only the interior density distribution, and not the overall morphology. This indicates that it is the wind velocity distribution that is primarily responsible for shaping the nebula.
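As a numerical illustration of the two regimes (our own sketch, using the CAK-type scalings quoted above; the parameter values $\Omega = 0.6$, $\Gamma = 0.5$, $\alpha = 0.6$ are arbitrary choices):

```python
import numpy as np

def wind_ratios(theta, Omega, Gamma=0.5, alpha=0.6, gravity_darkening=False):
    """Latitudinal ratios v(theta)/v(0) and mdot(theta)/mdot(0).

    Assumes the CAK-type scalings discussed in the text; Omega is the ratio
    of the rotation frequency to the critical frequency, Gamma the
    Eddington parameter, alpha < 1 the CAK power-law index.
    """
    s2 = np.sin(theta) ** 2
    if gravity_darkening:
        # F(theta) ~ g_eff (von Zeipel): both v and mdot peak at the pole
        v = np.sqrt(1.0 - Omega**2 * s2)
        mdot = 1.0 - Omega**2 * s2
    else:
        # uniformly bright star: since 1 - 1/alpha < 0, the mass flux
        # *increases* toward the equator while the wind speed drops
        geff = (1.0 - Omega**2 * s2 - Gamma) / (1.0 - Gamma)
        v = np.sqrt(geff)
        mdot = geff ** (1.0 - 1.0 / alpha)
    return v, mdot

theta = np.linspace(0.0, np.pi / 2, 91)        # pole to equator
v_u, m_u = wind_ratios(theta, Omega=0.6)
v_g, m_g = wind_ratios(theta, Omega=0.6, gravity_darkening=True)
print(v_u[-1], m_u[-1])   # equator: slower but denser wind
print(v_g[-1], m_g[-1])   # equator: slower and weaker wind
```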
Conclusions and Discussion
==========================

We have shown that rotation can significantly modulate the winds from stars, leading to higher velocities and larger wind-momentum at the poles as compared to the equator. This can drive an aspherical, and even bipolar, wind-blown nebula, without having to resort to any external, ad-hoc density asymmetry. The nebula will start out as a momentum-driven, aspherical structure, whose morphology depends on the rate of rotation. Over time it will slowly [*lose*]{} its asphericity, becoming more and more spherical as it evolves into an energy-conserving bubble. This situation is the inverse of that in the GISW model, where the nebula slowly becomes more aspherical over time. The results derived herein for the mass-flux are obtained using radiatively-driven wind theory from hot, luminous stars. However, we emphasize that the shaping of the nebula depends mainly on the asymmetry in velocity, which is not strongly dependent on the specific wind-driving mechanism. A star rotating at a large fraction of critical velocity will be flattened, resulting in a faster wind at the poles than the equator, producing an elliptical or bipolar nebula. The question remains then whether such large rotation velocities are typical of PNe central stars. AGB stars in general are known to be slow rotators. However, we also know of exceptions such as V Hydra (Barnbaum et al. 1995), which is claimed to be rotating at close to critical velocity. Dorfi and Hoefner (1996) have shown that even small rotational velocities (of the order of 2 km/s) at the stellar photosphere can cause significant variations in the outflow velocities and mass-loss rate. The variations that they find in their models look similar to those in our models, although they are computed for dust-driven winds. The presence of a binary companion can lead to increased rotation rates.
Common envelope binary evolution, the presence of planets around stars and tidal spin-up by a binary companion all tend to increase the rotation velocities of stars. Many authors (see Bond 2000) have found that a large percentage of PNe central stars may exist in binary systems. In view of the theoretical possibility, as well as presently available observational evidence (however slim), we feel that fast rotation of PNe central stars cannot be ruled out.

Vikram Dwarkadas is supported by Award \# AST-0319261 from the National Science Foundation, and by the U.S. Dept. of Energy grant \# B341495 to the ASCI Flash Center (U Chicago). Collaboration with Dr. Stan Owocki on this research is gratefully acknowledged. A big thanks to the organisers for inviting me to an engrossing conference in a fascinating setting.

Balick, B. 1987, AJ, 94, 671

Barnbaum, C., Morris, M., & Kahane, C. 1995, ApJ, 450, 862

Bond, H. E. 2000, in APN2, ASP Conf Series 199, eds. J. H. Kastner, N. Soker, and S. Rappaport (San Francisco: ASP), 115

Castor, J. I., Abbott, D. C., & Klein, R. I. 1975, ApJ, 195, 157

Dorfi, E. A., & Hoefner, S. 1996, A&A, 313, 605

Dwarkadas, V. V., Chevalier, R. A., & Blondin, J. M. 1996, ApJ, 457, 773

Dwarkadas, V. V., & Owocki, S. 2002, ApJ, 581, 1337

Frank, A. 1999, NewAR, 43, 31

Owocki, S. P., Cranmer, S. R., & Gayley, K. G. 1998, ApSS, 260, 1490

von Zeipel, H. 1924, MNRAS, 84, 684
---
abstract: 'The covariance matrix function is characterized in this paper for a Gaussian or elliptically contoured vector random field that is stationary, isotropic, and mean square continuous on the compact two-point homogeneous space. Necessary and sufficient conditions are derived for a symmetric and continuous matrix function to be an isotropic covariance matrix function on all compact two-point homogeneous spaces. It is also shown that, for a symmetric and continuous matrix function with compact support, if it makes an isotropic covariance matrix function in the Euclidean space, then it makes an isotropic covariance matrix function on the sphere or the real projective space.'
author:
- Tianshi Lu
- Chunsheng Ma
date: 'May 5, 2019'
title: 'Isotropic Covariance Matrix Functions on Compact Two-Point Homogeneous Spaces'
---

Introduction
============

A $d$-dimensional compact two-point homogeneous space $\mathbb{M}^d$ is a compact Riemannian symmetric space of rank one, and belongs to one of the following five families ([@Helgason2011], [@Wang1952]): the unit spheres $\S^d$ ($ d =1, 2, \ldots$), the real projective spaces $\mathbb{P}^d(\R)$ ($d = 2, 3, \ldots$), the complex projective spaces $\mathbb{P}^d(\mathbb{C})$ ($d = 4, 6, \ldots$), the quaternionic projective spaces $\mathbb{P}^d(\mathbb{H})$ ($d = 8, 12, \ldots$), and the Cayley elliptic plane $\mathbb{P}^{16} (Cay)$ or $\mathbb{P}^{16} (\mathbb{O})$. There are at least two different approaches to the subject of compact two-point homogeneous spaces [@MaMalyarenko2018], including an approach based on Lie algebras and a geometric approach, which are used in the probabilistic literature [@Askey1976], [@Gangolli1967], [@Malyarenko2013], the statistical literature [@Patrangenaru2016], and the approximation theory literature [@Azevedo2017], [@BrownDai2005]. All compact two-point homogeneous spaces share the same property that all geodesics in a given one of these spaces are closed and have the same length [@Gangolli1967].
In particular, when the unit sphere $\mathbb{S}^d$ is embedded into the space $\mathbb{R}^{d+1}$, the length of any geodesic line is equal to that of the unit circle, that is, $2\pi$. In what follows, the distance $\rho (\x_1, \x_2)$ between two points $\x_1$ and $\x_2$ on $\Md$ is defined in such a way that the length of any geodesic line on all $\mathbb{M}^d$ is equal to $2\pi$, or the distance between any two points is bounded between 0 and $\pi$, [*i.e.*]{}, $0 \le \rho (\x_1, \x_2) \le \pi$. Over $\S^d$, for instance, $\rho (\x_1, \x_2) $ is defined by $\rho (\x_1, \x_2)= \arccos (\x_1' \x_2), \x_1, \x_2 \in \S^d$, where $\x_1'\x_2$ is the inner product between $\x_1$ and $\x_2$. Expressions of $\rho (\x_1, \x_2)$ on other spaces may be found in [@Bhattacharya2012]. Gaussian random fields on $\Md$ have been studied in [@Askey1976], [@Gangolli1967], [@Malyarenko2013], among others, while theoretical investigations and practical applications of scalar and vector random fields on spheres may be found in [@Askey1976], [@Bingham1973], [@Cheng2016], [@Cohen2012], [@Dovidio2014], [@Gangolli1967], [@Leonenko2012], [@Leonenko2013], [@Ma2015]-[@Ma2017], [@Malyarenko2013], [@Malyarenko1992], [@Yadrenko1983]-[@Yaglom1987]. Recently, a series representation was presented in [@MaMalyarenko2018] for a vector random field that is isotropic and mean square continuous on $\Md$ and stationary on a temporal domain, and a general form of the covariance matrix function was derived for such a vector random field, which involves Jacobi polynomials and the distance defined on $\Md$. A call is made in [@MaMalyarenko2018] for parametric and semiparametric covariance matrix structures on $\Md$; these are the topics of this paper.
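As a tiny illustration (our code, not the paper's), the normalized distance on $\S^d$ is just the arccosine of the inner product of unit vectors, and indeed stays in $[0, \pi]$:

```python
# rho(x1, x2) = arccos(x1' x2) for unit vectors on S^d, so 0 <= rho <= pi
# and every closed geodesic has length 2*pi.
import numpy as np

def sphere_distance(x1, x2):
    x1 = np.asarray(x1, float); x2 = np.asarray(x2, float)
    c = np.dot(x1, x2) / (np.linalg.norm(x1) * np.linalg.norm(x2))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))   # clip guards rounding

e1, e2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(sphere_distance(e1, e1))    # 0.0
print(sphere_distance(e1, e2))    # pi/2, orthogonal points
print(sphere_distance(e1, -e1))   # pi, antipodal points
```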
  $\mathbb{M}^d$                               $\alpha$          $\beta$
  -------------------------------------------- ----------------- -----------------
  $\mathbb{S}^d$, $d=1$, $2$, …                $\frac{d-2}{2}$   $\frac{d-2}{2}$
  $\mathbb{P}^d(\mathbb{R})$, $d=2$, $3$, …    $\frac{d-2}{2}$   $-\frac{1}{2}$
  $\mathbb{P}^d(\mathbb{C})$, $d=4$, $6$, …    $\frac{d-2}{2}$   0
  $\mathbb{P}^d(\mathbb{H})$, $d=8$, $12$, …   $\frac{d-2}{2}$   1
  $\mathbb{P}^{16}(Cay)$                       7                 3

  : Parameters $\alpha$ and $\beta$ associated with Jacobi polynomials over $\Md$[]{data-label="tab:1"}

Consider an $m$-variate second-order random field $\{ \bZ (\x), \x \in \Md \}$. It is called a stationary (homogeneous) and isotropic random field, if its mean function $\rE \bZ (\x) = ( \rE Z_1(\x), \ldots, \rE Z_m (\x) )'$ does not depend on $\x$, and its covariance matrix function, $$\cov ( \bZ (\x_1), \bZ ( \x_2) ) = \rE [ ( \bZ (\x_1) - \rE \bZ (\x_1)) ( \bZ (\x_2) - \rE \bZ (\x_2))' ], ~~~~~~ \x_1, \x_2 \in \Md,$$ depends only on the distance $\rho (\x_1, \x_2)$ between $\x_1$ and $\x_2$. We denote such a covariance matrix function by $\bC( \rho (\x_1, \x_2)), \x_1, \x_2 \in \Md, $ and call it an isotropic covariance matrix function on $\Md$. An isotropic random field $\{ \bZ (\x), \x \in \Md \}$ is said to be mean square continuous if, for $k =1, \ldots, m$, $$\rE | Z_k (\x_1) -Z_k (\x_2) |^2 \to 0, ~~ \mbox{as} ~~ \rho (\x_1, \x_2 ) \to 0, ~ \x_1, \x_2 \in \mathbb{M}^d.$$ It implies the continuity of each entry of the associated covariance matrix function in terms of $\rho (\x_1, \x_2)$.
An $m$-variate isotropic and mean square continuous random field on $\Md$ has a series representation [@MaMalyarenko2018], for $d \ge 2$, $$\bZ ( \x ) = \sum_{n=0}^\infty \mathbf{B}_n^{\frac{1}{2}} \mathbf{V}_n P_n^{ (\alpha, \beta) } ( \cos \rho (\x, \mathbf{U} )), ~~~~~~ \x \in \Md,$$ where $\{ \mathbf{V}_n, n \in \mathbb{N}_0 \}$ is a sequence of independent $m$-variate random vectors with $\rE ( \mathbf{V}_n)= \0$ and $\cov ( \mathbf{V}_n, \mathbf{V}_n ) = a_n^2 \mathbf{I}_m$, $\mathbf{U}$ is a random vector uniformly distributed on $\mathbb{M}^d$ and is independent of $\{\, \mathbf{V}_n, n \in \mathbb{N}_0\, \}$, $\{ \mathbf{B}_n, n \in \mathbb{N}_0 \}$ is a sequence of $m \times m$ positive definite matrices, $\sum\limits_{n=0}^\infty \mathbf{B}_n P_n^{(\alpha, \beta) } \left( 1 \right)$ converges, $\mathbf{I}_m$ is an $m \times m$ identity matrix, $\mathbb{N}_0$ and $\N$ denote the sets of nonnegative integers and of positive integers, respectively, $$\label{JacobiPolynomial} P_n^{(\alpha, \beta)} (x) = \frac{\Gamma (\alpha+n+1)}{n!
\Gamma (\alpha+\beta+n+1)}\sum_{k=0}^n\binom{n}{k}\frac{\Gamma (\alpha+\beta+n+k+1)}{\Gamma ( \alpha+k+1 )} \left(\frac{x-1}{2} \right)^k,$$ $ \quad x \in \R, \quad n \in \mathbb{N}_0, $ are Jacobi polynomials [@Szego1975] with specific pairs $\alpha$ and $\beta$ given in Table \[tab:1\], and $$\label{a.n.definition} a_n=\left(\frac{\Gamma(\beta+1)(2 n +\alpha+\beta+1)\Gamma(n+\alpha+\beta+1)} {\Gamma(\alpha+\beta+2)\Gamma(n+\beta+1)}\right)^{\frac{1}{2}}, \qquad n \in \mathbb{N}_0.$$ The covariance matrix function of $\{ \bZ(\x), \x \in \Md \}$ is $$\label{cov.mf1} \bC( \rho (\x_1, \x_2)) = \sum_{n=0}^\infty \mathbf{B}_n P_n^{(\alpha, \beta) } \left( \cos \rho (\x_1, \x_2) \right), ~~~~~~ \x_1, \x_2 \in \mathbb{M}^d.$$ On the other hand, there exists an $m$-variate isotropic Gaussian or elliptically contoured random field on $\Md$ with $\bC( \rho (\x_1, \x_2))$ as its covariance matrix function [@MaMalyarenko2018], if $\bC( \rho (\x_1, \x_2))$ is an $m \times m$ symmetric matrix function of the form (\[cov.mf1\]). Given a symmetric matrix function $\bC (\vartheta)$ whose entries are continuous on $[0, \pi]$, Section 2 presents the characterizations for $\bC( \rho (\x_1, \x_2))$ to be the covariance matrix function of an isotropic elliptically contoured vector random field on $\Md$, in terms of the positive definiteness of a sequence of symmetric matrices. Section 3 characterizes when $\bC( \rho (\x_1, \x_2))$ is an isotropic covariance matrix function on all possible $\Md$. If $\bC (\vartheta)$ makes $\bC (\| \x_1-\x_2 \|)$ an isotropic covariance matrix function in $\R^d$, does it make $\bC( \rho (\x_1, \x_2))$ an isotropic covariance matrix function on $\Md$? A partial answer to this question, and to the conjecture in [@NieMa2019], is given in Section 4, in the case where $\bC (\vartheta)$ is compactly supported on $[0, \pi]$ and $d$ is odd. Proofs of theorems are given in Section 5.
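As a quick numerical cross-check (ours, not part of the paper), the explicit sum (\[JacobiPolynomial\]) can be compared with SciPy's `eval_jacobi`, together with the endpoint value $P_n^{(\alpha, \beta)}(1) = \Gamma(n+\alpha+1)/(n!\,\Gamma(\alpha+1))$ used repeatedly in the sequel:

```python
import math
import numpy as np
from scipy.special import eval_jacobi

def jacobi_sum(n, a, b, x):
    # the explicit finite sum (JacobiPolynomial) transcribed literally
    pref = math.gamma(a + n + 1) / (math.factorial(n) * math.gamma(a + b + n + 1))
    return pref * sum(math.comb(n, k)
                      * math.gamma(a + b + n + k + 1) / math.gamma(a + k + 1)
                      * ((x - 1.0) / 2.0) ** k
                      for k in range(n + 1))

a, b = 1.5, -0.5     # e.g. the pair for P^5(R): alpha = (d-2)/2, beta = -1/2
err = max(abs(jacobi_sum(n, a, b, x) - eval_jacobi(n, a, b, x))
          for n in range(6) for x in np.linspace(-1.0, 1.0, 9))
p1_err = max(abs(jacobi_sum(n, a, b, 1.0)
                 - math.gamma(n + a + 1) / (math.factorial(n) * math.gamma(a + 1)))
             for n in range(6))
print(err < 1e-9, p1_err < 1e-9)   # True True
```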
Isotropic covariance matrix functions on $\Md$
===============================================

The covariance matrix function of an isotropic and mean square continuous elliptically contoured vector random field on $\Md$ is characterized in this section. Theorem \[thm1\] provides a useful tool for verifying whether a continuous matrix function is the covariance matrix function of an isotropic elliptically contoured vector random field on $\Md$, by checking that each member of a sequence of matrices is positive definite and that a relevant infinite series converges, and Theorem \[thm2\] presents the interrelationship of an isotropic covariance matrix function on different compact two-point homogeneous spaces.

\[thm1\] Let $\alpha$ and $\beta$ be the pair for $\Md$ in Table \[tab:1\]. For an $m \times m$ symmetric matrix function $\bC (\vartheta)$ whose entries are continuous on $[0, \pi]$, the following statements are equivalent:

- $\bC (\rho (\x_1, \x_2))$ is the covariance matrix function of an $m$-variate isotropic elliptically contoured random field on $\Md$;

- $\bC (\vartheta)$ is of the form $$\label{cov.mf2} \bC (\vartheta) = \sum_{n=0}^\infty \mathbf{B}_n P_n^{ (\alpha, \beta)} ( \cos \vartheta), ~~~~~ \vartheta \in [0, \pi ],$$ where $\{ \mathbf{B}_n, n \in \N_0 \}$ is a sequence of $m \times m$ positive definite matrices, and the series $\sum\limits_{n=0}^\infty n^{\alpha} \mathbf{B}_n $ converges;

- the matrices $$\label{thm1.eq1} \mathbf{H}_n^{(\alpha, \beta) } = \int_0^\pi \bC (\vartheta) P_n^{(\alpha, \beta) } \left( \cos \vartheta \right) \sin^{2 \alpha+1} \left( \frac{\vartheta}{2} \right) \cos^{2 \beta+1} \left( \frac{\vartheta}{2} \right) d \vartheta, ~~~~ n \in \mathbb{N}_0,$$ are positive definite, and the series $\sum\limits_{n=0}^\infty n^{\alpha+1} \mathbf{H}_n^{(\alpha, \beta) } $ converges.

Note that $P_n^{(\alpha, \beta)} (1) = \frac{\Gamma (n+\alpha+1)}{\Gamma (n+1) \Gamma (\alpha+1)}$, $n \in \N_0$.
By the asymptotic formula (5.11.12) of [@Olver2010], $ \frac{\Gamma (n +\alpha+1)}{\Gamma (n+1)} \sim n^\alpha$ ($n \to \infty$), so that $\sum\limits_{n=0}^\infty \mathbf{B}_n P_n^{(\alpha, \beta)} (1)$ converges if and only if $\sum\limits_{n=0}^\infty n^\alpha \, \mathbf{B}_n$ converges. The convergence of $\sum\limits_{n=0}^\infty n^\alpha \mathbf{B}_n$ in Theorem \[thm1\] (ii) is necessary to guarantee the convergence of the series in (\[cov.mf2\]) for $\vartheta=0$, and is also sufficient for the convergence for all $\vartheta \in [0, \pi]$, since $|P^{(\alpha,\beta)}_n(\cos\vartheta)|\le P^{(\alpha,\beta)}_n(1), n \in \N_0$. The condition that $\sum\limits_{n=0}^\infty n^{\alpha+1} \mathbf{H}_n^{ (\alpha, \beta)}$ converges in Theorem \[thm1\] (iii) is equivalent to the convergence of $\sum\limits_{n=0}^\infty n^\alpha \mathbf{B}_n$ in Theorem \[thm1\] (ii). There are two key parameters associated with $\Md$ in Table \[tab:1\], $\alpha$ and $\beta$, which do not depend on each other, except for $\S^d$ where $\alpha = \beta$. The parameter $\beta$ is a constant with respect to $d$ or $\Md$, except for $\S^d$. The following formula expresses a coefficient $ \mathbf{H}_n^{(\alpha, \beta) }$ on $\mathbb{M}^d$ in terms of two coefficients $ \mathbf{H}_n^{(\alpha-1, \beta) }$ and $ \mathbf{H}_{n+1}^{(\alpha-1, \beta) }$ on $\mathbb{M}^{d-2}$ ($d \ge 3$).
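Criterion (iii) of Theorem \[thm1\] is straightforward to apply numerically. The following scalar ($m=1$) sketch (an illustration of ours) computes the coefficients $\mathbf{H}_n^{(0,0)}$ for $C(\vartheta) = (1+\cos\vartheta)/2$ on $\S^2$, where $P_n^{(0,0)}$ is the Legendre polynomial; one finds $H_0 = 1/2$, $H_1 = 1/6$ and $H_n = 0$ for $n \ge 2$, all nonnegative:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_jacobi

def H_n(C, n, a, b):
    # coefficient (thm1.eq1) for a scalar C, computed by quadrature
    f = lambda t: (C(t) * eval_jacobi(n, a, b, np.cos(t))
                   * np.sin(t / 2.0) ** (2 * a + 1)
                   * np.cos(t / 2.0) ** (2 * b + 1))
    return quad(f, 0.0, np.pi)[0]

# C(theta) = (1 + cos theta)/2 on S^2 (alpha = beta = 0, the Legendre case)
C = lambda t: 0.5 * (1.0 + np.cos(t))
coeffs = [H_n(C, n, 0.0, 0.0) for n in range(5)]
print(coeffs)   # H_0 = 1/2, H_1 = 1/6, H_n ~ 0 for n >= 2: all nonnegative
```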
For $d \ge 3$, $$\label{H.n.idenity} \mathbf{H}_n^{(\alpha, \beta) } = \frac{ (n+\alpha) \mathbf{H}_n^{(\alpha-1, \beta) } - (n+1) \mathbf{H}_{n+1}^{(\alpha-1, \beta) } }{ 2n +\alpha+\beta+1}, ~~~~ n \in \N_0.$$ Identity (\[H.n.idenity\]) follows directly from (\[thm1.eq1\]) and (4.5.4) of [@Szego1975], $$\label{Szego4.5.4} \frac{ 2 n+\alpha +\beta +1 }{2} (1-x) P_n^{(\alpha, \beta)} (x) = (n+\alpha) P_n^{(\alpha-1, \beta)} (x) - (n+1) P_{n+1}^{(\alpha-1, \beta)} (x), ~~ x \in \R.$$ A dual identity of (\[Szego4.5.4\]) is $$\label{Szego4.5.4.dual} \frac{ 2 n+\alpha +\beta +1 }{2} (1+x) P_n^{(\alpha, \beta)} (x) = (n+\beta) P_n^{(\alpha, \beta-1)} (x) + (n+1) P_{n+1}^{(\alpha, \beta-1)} (x), ~~ x \in \R,$$ from which and from (\[thm1.eq1\]) we obtain the following corollary, which is useful on $\S^d$, since $\beta$ stays fixed on the other spaces. For $d \ge 3$, $$\label{H.n.idenity2} \mathbf{H}_n^{(\alpha, \beta) } = \frac{ (n+\beta) \mathbf{H}_n^{(\alpha, \beta-1) } + (n+1) \mathbf{H}_{n+1}^{(\alpha, \beta-1) } }{ 2n +\alpha+\beta+1}, ~~~~ n \in \N_0.$$ Notice that the parameter $\alpha$ in Table \[tab:1\] is either a nonnegative integer or a half-integer, according to whether the dimension $d$ is even or odd. For these two cases, in the next two corollaries we are going to write $ \mathbf{H}_n^{(\alpha, \beta) }$ as a linear combination of $ \mathbf{H}_j^{(0, \beta) }$ or $ \mathbf{H}_j^{ \left( -\frac{1}{2}, \beta \right) }$, $j \ge n$, respectively, which are coefficients in low dimensions. For an even $d \ge 4$ or a positive integer $\alpha =\frac{d-2}{2}$, successively using identity (\[Szego4.5.4\]), the degree $n+\alpha$ polynomial $(1-x)^\alpha P_n^{(\alpha, \beta)} (x)$ can be expressed as a linear combination of polynomials $ P_{n+j}^{(0, \beta)} (x)$, $ j = 0, 1, \ldots, \alpha$.
More precisely, it can be established by induction on $\alpha$ that $$\label{Jacobi.identity1} \left(\frac{1-x}{2}\right)^\alpha P_n^{(\alpha, \beta)} (x) = \sum_{j=0}^\alpha (-1)^j a_j^{(0)} (n) P_{n+j}^{(0, \beta)} (x), ~~~ x \in \R, ~ n \in \N_0,$$ where $$\label{Jacobi.identity1.coeff} \begin{array}{llr} a_j^{(0)} (n) & = & \frac{\alpha! \Gamma(n+\alpha+1) (2n+2j+\beta+1) \Gamma(2n+j+\beta+1)}{ j! (\alpha-j)! n! \Gamma(2n+j+\alpha+\beta+2) }, \\ & & ~~~ j =0, 1, \ldots, \alpha. \end{array}$$ The following corollary is derived from (\[thm1.eq1\]) and (\[Jacobi.identity1\]). For $\alpha \in \N$, $$\label{H.n.idenity3} \mathbf{H}_n^{(\alpha, \beta) } = \sum_{j=0}^\alpha (-1)^j a_j^{(0)} (n) \mathbf{H}_{n+j}^{(0, \beta) }, ~~~~ n \in \N_0.$$ For an odd $d \ge 3$, $\alpha +\frac{1}{2} =\frac{d-1}{2}$ is a positive integer. Successively using identity (\[Szego4.5.4\]), the degree $n+\alpha +\frac{1}{2}$ polynomial $(1-x)^{\alpha+\frac{1}{2}} P_n^{(\alpha, \beta)} (x)$ can be expressed as a linear combination of polynomials $ P_{n+j}^{ \left( -\frac{1}{2}, \beta \right)} (x)$, $ j = 0, 1, \ldots, \alpha+\frac{1}{2}$, and, by induction on $\alpha+\frac{1}{2}$, $$\label{Jacobi.identity2} \left(\frac{1-x}{2}\right)^{ \alpha+\frac{1}{2} } P_n^{(\alpha, \beta)} (x) = \sum_{j=0}^{\alpha+\frac{1}{2}} (-1)^j a_j^{ \left( - \frac{1}{2} \right)} (n) P_{n+j}^{ \left( -\frac{1}{2}, \beta \right)} (x), ~~~ x \in \R, ~ n \in \N_0,$$ where $$\label{Jacobi.identity2.coeff} \begin{array}{llr} a_j^{ \left( - \frac{1}{2} \right)} (n) & = & \frac{\Gamma\left(\alpha+\frac{3}{2}\right) (n+j)! \Gamma(n+\alpha+1) \left(2n+2j+\beta+\frac{1}{2}\right) \Gamma\left(2n+j+\beta+\frac{1}{2}\right) }{ j! \Gamma\left(\alpha-j+\frac{3}{2}\right) n! \Gamma\left(n+j+\frac{1}{2}\right) \Gamma(2n+j+\alpha+\beta+2) }, \\ & & ~~~ j =0, 1, \ldots, \alpha+\frac{1}{2}. \end{array}$$ The following corollary follows directly from (\[thm1.eq1\]) and (\[Jacobi.identity2\]).
For $\alpha +\frac{1}{2} \in \N$, $$\label{H.n.idenity4} \mathbf{H}_n^{(\alpha, \beta) } = \sum_{j=0}^{\alpha +\frac{1}{2}} (-1)^j a_j^{ \left( - \frac{1}{2} \right)} (n) \mathbf{H}_{n+j}^{ \left( -\frac{1}{2}, \beta \right) }, ~~~~ n \in \N_0.$$ Second-order elliptically contoured random fields form one of the largest sets, if not the largest set, of random fields that allow any possible correlation structure [@Ma2011]. Examples of elliptically contoured random fields include Gaussian, Student’s t, Cauchy, Laplace, logistic, hyperbolic, hyperbolic secant, variance Gamma, normal inverse Gaussian, K-differenced, stable, Linnik, and Mittag-Leffler random fields. The characterizations in Theorem \[thm1\] are available for a second-order elliptically contoured vector random field. However, they may not be available for other non-Gaussian random fields, such as a log-Gaussian, $\chi^2$, binomial-$\chi^2$, K-distributed, or skew-Gaussian one, for which admissible correlation structures must be investigated on a case-by-case basis. In what follows, every covariance matrix function is understood within the elliptically contoured framework. To distinguish the distances of the five families listed in Table 1, whenever necessary, we adopt the symbol $\rho_{\tiny{\S^d}}( \x_1, \x_2)$ for the distance over $\S^d$, $\rho_{\tiny{\mathbb{P}^d( \R)}}( \x_1, \x_2)$ for the distance on $\mathbb{P}^d( \R)$, and so on. The next theorem shows the interrelationship of an isotropic covariance matrix function on different compact two-point homogeneous spaces. It appears that isotropic covariance matrix structures on $\S^d$ are richer than those on the other compact two-point homogeneous spaces. \[thm2\] Suppose that $\bC (\vartheta)$ is an $m \times m$ symmetric matrix function and each of its entries is continuous on $[0, \pi]$.
- For an odd $d \ge 3$, if $\bC (\vartheta)$ makes $\bC \left( \rho_{\tiny{\mathbb{P}^d( \R)}}( \x_1, \x_2) \right)$ an isotropic covariance matrix function on $\mathbb{P}^d (\R)$, then it makes $\bC \left( \rho_{\tiny{\S^d}}( \x_1, \x_2) \right)$ an isotropic covariance matrix function on $\S^d$. For an even $d$, if $\bC \left( \rho_{\tiny{\mathbb{P}^d( \R)}}( \x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\mathbb{P}^d (\R)$, then $\bC \left( \rho_{\tiny{\S^{d-1}}}( \x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\S^{d-1}$.

- For an even $d \ge 4$, if $\bC \left( \rho_{\tiny{\mathbb{P}^d( \C)}}( \x_1, \x_2) \right) $ is an isotropic covariance matrix function on $\mathbb{P}^d (\C)$, then $\bC \left( \rho_{\tiny{\S^d}}( \x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\S^d$ for $ d \ge 4$, and $\bC \left( \rho_{\tiny{\mathbb{P}^d( \H)}}( \x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\mathbb{P}^d (\H)$ if $d =8, 12, \ldots.$

- For $d = 8, 12, \ldots$, if $\bC \left( \rho_{\tiny{\mathbb{P}^d( \H)}}( \x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\mathbb{P}^d (\H)$, then $\bC \left( \rho_{\tiny{\S^d}}( \x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\S^d$.

- If $\bC \left( \rho_{\tiny{\S^d}}( \x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\S^d$, then both $\bC \left( \frac{ \rho_{\tiny{\mathbb{P}^d( \R)}}( \x_1, \x_2) }{2} \right)$ $+\bC \left( \pi - \frac{ \rho_{\tiny{\mathbb{P}^d( \R)}}( \x_1, \x_2)}{2} \right)$ and $ \left\{ \bC \left( \frac{\rho_{\tiny{\mathbb{P}^d( \R)}}( \x_1, \x_2)}{2} \right)-\bC \left( \pi - \frac{\rho_{\tiny{\mathbb{P}^d( \R)}}( \x_1, \x_2)}{2} \right) \right\} \cos \left( \frac{\rho_{\tiny{\mathbb{P}^d( \R)}}( \x_1, \x_2) }{2} \right)$ are isotropic covariance matrix functions on $\mathbb{P}^d (\R)$.
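The contiguous relations (\[Szego4.5.4\]) and (\[Szego4.5.4.dual\]) and the expansion (\[Jacobi.identity1\]) underlying the corollaries of this section can be spot-checked numerically; the following sketch (ours, not part of the paper) does so with SciPy:

```python
import math
import numpy as np
from scipy.special import eval_jacobi

def a_j0(j, n, alpha, beta):
    # coefficient a_j^{(0)}(n) as printed in (Jacobi.identity1.coeff)
    return (math.factorial(alpha) * math.gamma(n + alpha + 1)
            * (2 * n + 2 * j + beta + 1) * math.gamma(2 * n + j + beta + 1)
            / (math.factorial(j) * math.factorial(alpha - j) * math.factorial(n)
               * math.gamma(2 * n + j + alpha + beta + 2)))

ok = True
for n in range(5):
    for x in np.linspace(-0.9, 0.9, 7):
        for a, b in [(1.0, 0.5), (2.0, 1.0), (0.5, 1.5)]:
            # (Szego4.5.4)
            lhs = 0.5 * (2 * n + a + b + 1) * (1 - x) * eval_jacobi(n, a, b, x)
            rhs = ((n + a) * eval_jacobi(n, a - 1, b, x)
                   - (n + 1) * eval_jacobi(n + 1, a - 1, b, x))
            ok &= abs(lhs - rhs) < 1e-8
            # its dual (Szego4.5.4.dual)
            lhs = 0.5 * (2 * n + a + b + 1) * (1 + x) * eval_jacobi(n, a, b, x)
            rhs = ((n + b) * eval_jacobi(n, a, b - 1, x)
                   + (n + 1) * eval_jacobi(n + 1, a, b - 1, x))
            ok &= abs(lhs - rhs) < 1e-8
        # expansion (Jacobi.identity1) for integer alpha = 3, beta = -1/2,
        # i.e. the pair for M^d = P^8(R)
        alpha, beta = 3, -0.5
        lhs = ((1 - x) / 2) ** alpha * eval_jacobi(n, alpha, beta, x)
        rhs = sum((-1) ** j * a_j0(j, n, alpha, beta)
                  * eval_jacobi(n + j, 0, beta, x) for j in range(alpha + 1))
        ok &= abs(lhs - rhs) < 1e-8
print(bool(ok))   # True
```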
Isotropic covariance matrix functions on all dimensions ======================================================= Except for $\mathbb{P}^{16} (Cay)$, the dimension $d$ of $\Md$ can take infinitely many values, as shown in Table \[tab:1\]. If an $m \times m$ continuous matrix function $\bC (\vartheta)$ on $[0, \pi]$ makes $\bC (\rho (\x_1, \x_2))$ an isotropic covariance matrix function on all possible $\Md$ (all five families described in Section 1), then it is called an isotropic covariance matrix function on $\mathbb{M}^\infty$. Such a matrix function is characterized in the following theorem. \[thm3\] For an $m \times m$ symmetric matrix function $\bC(\vartheta)$, all of whose entries are continuous on $[0, \pi]$, the following statements are equivalent: - $\bC (\rho (\x_1, \x_2))$ is an isotropic covariance matrix function on $\mathbb{M}^\infty$; - $\bC(\vartheta)$ is of the form $$\label{thm3.eq1} \bC (\vartheta) = \sum_{n=0}^\infty \mathbf{B}_n (1+ \cos \vartheta )^n, ~~~~~ \vartheta \in [0, \pi],$$ where $\{ \mathbf{B}_n, n \in \N_0 \}$ is a sequence of $m \times m$ positive definite matrices and $\sum\limits_{n=0}^\infty 2^n \mathbf{B}_n$ converges; - $\bC \left( \frac{\pi}{2} - \arcsin x \right)$ is of the form $$\label{thm3.eq2} \bC \left( \frac{\pi}{2} - \arcsin x \right) = \sum_{n=0}^\infty \mathbf{B}_n (1+x)^n, ~~~~~ x \in [-1, 1],$$ where $\{ \mathbf{B}_n, n \in \N_0 \}$ is a sequence of $m \times m$ positive definite matrices and $\sum\limits_{n=0}^\infty 2^n \mathbf{B}_n$ converges; - $\bC \left( \pi -2 \arcsin x \right)$ is of the form $$\label{thm3.eq3} \bC \left( \pi -2 \arcsin x \right) = \sum_{n=0}^\infty 2^n \mathbf{B}_n x^{2n}, ~~~~~ x \in [0, 1],$$ where $\{ \mathbf{B}_n, n \in \N_0 \}$ is a sequence of $m \times m$ positive definite matrices and $\sum\limits_{n=0}^\infty 2^n \mathbf{B}_n$ converges. It is understandable that the characterizations in Theorem \[thm3\] differ from those on all spheres $\S^\infty$ presented in [@Ma2015]. 
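As an illustrative numerical check (not part of the formal development), the passage between the three forms in Theorem \[thm3\] rests on the elementary identities $\cos \left( \frac{\pi}{2} - \arcsin x \right) = x$ and $1 + \cos (\pi - 2 \arcsin x) = 2x^2$. A short Python sketch, with a hypothetical nonnegative coefficient sequence in the scalar case $m=1$, confirms that the three representations agree:

```python
import math

# Hypothetical nonnegative coefficients b_n (n = 0..4); any summable choice works.
b = [1.0, 0.5, 0.25, 0.125, 0.0625]

def C(theta):
    # the form (thm3.eq1): C(theta) = sum_n b_n (1 + cos theta)^n
    return sum(bn * (1.0 + math.cos(theta)) ** n for n, bn in enumerate(b))

def form_ii(x):
    # right-hand side of (thm3.eq2)
    return sum(bn * (1.0 + x) ** n for n, bn in enumerate(b))

def form_iii(x):
    # right-hand side of (thm3.eq3)
    return sum(2.0 ** n * bn * x ** (2 * n) for n, bn in enumerate(b))

for x in [-1.0, -0.3, 0.0, 0.7, 1.0]:
    assert abs(C(math.pi / 2 - math.asin(x)) - form_ii(x)) < 1e-12
for x in [0.0, 0.3, 0.7, 1.0]:
    assert abs(C(math.pi - 2 * math.asin(x)) - form_iii(x)) < 1e-12
```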
Actually, the set of isotropic covariance matrix functions on $\mathbb{M}^\infty$ is a proper subset of that on $\S^\infty$. \[thm3.cor\] If $\bC (\rho (\x_1, \x_2))$ is an isotropic covariance matrix function on $\mathbb{M}^\infty$, then - $\bC (\vartheta)$ is a positive definite matrix for each fixed $\vartheta \in [ 0, \pi]$; - $\bC (\vartheta_1) - \bC (\vartheta_2) $ is a positive definite matrix for $0 \le \vartheta_1 \le \vartheta_2 \le \pi$; - for $\vartheta \in (0, \pi),$ $\bC' (\vartheta) $ is a negative definite matrix, whenever the derivative exists. Since each $\mathbf{B}_n$ is positive definite, Corollary \[thm3.cor\] follows from (\[thm3.eq1\]): Part (i) from the fact that $1+ \cos \vartheta$ is nonnegative, Part (ii) from the fact that $1+\cos \vartheta$ is decreasing on $[0, \pi]$, and Part (iii) from Part (ii) and $\bC' (\vartheta) = - \lim\limits_{\delta \to 0+} \frac{\bC (\vartheta) - \bC (\vartheta+\delta)}{\delta}$. For an $m \times m$ symmetric matrix function whose entries are second-order polynomials, $$\bC(\vartheta) = {\bf B}_0 + {\bf B}_1 \vartheta +{\bf B}_2 \vartheta^2, ~~~~~~~~~~ \vartheta \in [0, \pi],$$ it makes $\bC ( \rho (\x_1, \x_2)) $ an isotropic covariance matrix function on $\mathbb{M}^\infty$ if and only if ${\bf B}_0- \pi^2 {\bf B}_2$ and $ {\bf B}_2$ are positive definite matrices, and ${\bf B}_1= -2 {\bf B}_2 \pi$. 
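In the scalar case $m = 1$, the condition in the quadratic example can be observed numerically: with $b_1 = -2\pi b_2$, the function $C(\pi - 2\arcsin x)$ reduces to $(b_0 - \pi^2 b_2) + 4 b_2 (\arcsin x)^2$, an even series with nonnegative coefficients exactly when $b_0 - \pi^2 b_2 \ge 0$ and $b_2 \ge 0$. A minimal sketch with hypothetical values of $b_0$ and $b_2$:

```python
import math

# Scalar instance of the quadratic example: C(t) = b0 + b1*t + b2*t^2,
# with b1 = -2*pi*b2; hypothetical values with b0 - pi^2*b2 > 0 and b2 > 0.
b2 = 0.05
b0 = 1.0
b1 = -2.0 * math.pi * b2

def C(t):
    return b0 + b1 * t + b2 * t * t

# The odd arcsin powers cancel under b1 = -2*pi*b2:
for x in [0.0, 0.25, 0.5, 0.9, 1.0]:
    lhs = C(math.pi - 2.0 * math.asin(x))
    rhs = (b0 - math.pi ** 2 * b2) + 4.0 * b2 * math.asin(x) ** 2
    assert abs(lhs - rhs) < 1e-12
```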
To derive a form of (\[thm3.eq3\]) for $\bC \left( \pi- 2\arcsin x \right) $, we employ the Taylor expansions of $\arcsin x$ and $(\arcsin x)^2$, $$\label{Ex.arcsin} \arcsin x = \sum\limits_{n=0}^\infty \frac{(2n)!}{2^{2n} (n!)^2 (2n+1)} x^{2n+1}, ~~~~~~~~~ x \in [-1, 1],$$ $$\label{Ex.arcsin2} (\arcsin x)^2= \sum_{n=1}^\infty \frac{2^{2n-1} ( (n-1)!)^2}{(2n)!} x^{2n}, ~~~~~~~~~~ x \in [-1, 1],$$ and obtain $$\begin{aligned} & & \bC \left( \pi -2 \arcsin x \right) \\ & = & {\bf B}_0 + \pi {\bf B}_1 + \pi^2 {\bf B}_2 - 2 \left( {\bf B}_1+ 2\pi {\bf B}_2 \right) \arcsin x + 4 {\bf B}_2 (\arcsin x )^2 \\ & = & {\bf B}_0 + \pi {\bf B}_1 + \pi^2 {\bf B}_2 - 2 \left( {\bf B}_1+2 \pi {\bf B}_2 \right) \sum\limits_{n=0}^\infty \frac{(2n)!}{2^{2n} (n!)^2 (2n+1)} x^{2n+1} \\ & & + 4 {\bf B}_2 \sum_{n=1}^\infty \frac{2^{2n-1} ( (n-1)!)^2}{(2n)!} x^{2n}, ~~ x \in [0, 1]. \end{aligned}$$ By Theorem \[thm3\] (iv), $ {\bf B}_0 + \pi {\bf B}_1 + \pi^2 {\bf B}_2 $ and ${\bf B}_2$ must be positive definite, and ${\bf B}_1+2 \pi {\bf B}_2 = \mathbf{0}$. Moreover, if $ {\bf B}_0-\pi^2 {\bf B}_2$ and ${\bf B}_2$ are $m \times m$ positive definite matrices, then $$\bC( \rho (\x_1, \x_2)) = \left( {\bf B}_0 - 2 \pi {\bf B}_2 \rho (\x_1, \x_2) + {\bf B}_2 (\rho (\x_1, \x_2) )^2 \right)^{\circ \ell}$$ is an isotropic covariance matrix function on $\mathbb{M}^\infty$, where $\ell$ is a natural number, and ${\bf B}^{\circ \ell}$ denotes the Hadamard $\ell$ power of ${\bf B} =(b_{ij})$, whose entries are $b_{ij}^\ell$, the $\ell$ power of $b_{ij}, i, j = 1, \ldots, m$. 
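The truncated versions of the two expansions (\[Ex.arcsin\]) and (\[Ex.arcsin2\]) can be compared against `math.asin` numerically; a minimal Python sketch:

```python
import math

def arcsin_series(x, terms=200):
    # Taylor series (Ex.arcsin): sum_n (2n)!/(2^{2n} (n!)^2 (2n+1)) x^{2n+1},
    # written with (2n)!/(2^{2n} (n!)^2) = comb(2n, n)/4^n
    return sum(math.comb(2 * n, n) / (4 ** n * (2 * n + 1)) * x ** (2 * n + 1)
               for n in range(terms))

def arcsin_sq_series(x, terms=200):
    # Taylor series (Ex.arcsin2): sum_{n>=1} 2^{2n-1} ((n-1)!)^2 / (2n)! x^{2n}
    return sum(2 ** (2 * n - 1) * math.factorial(n - 1) ** 2
               / math.factorial(2 * n) * x ** (2 * n)
               for n in range(1, terms))

for x in [0.0, 0.3, -0.5, 0.8]:
    assert abs(arcsin_series(x) - math.asin(x)) < 1e-9
    assert abs(arcsin_sq_series(x) - math.asin(x) ** 2) < 1e-9
```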
Given two $m \times m$ symmetric matrices $\mathbf{B}_1$ and $\mathbf{B}_2$, consider an $m \times m$ matrix function $$\bC (\vartheta) = \mathbf{B}_1 \exp \left( \frac{\vartheta}{2} \right)+ \mathbf{B}_2 \exp \left( -\frac{\vartheta}{2} \right), ~~~~~~~ \vartheta \in [0, \pi].$$ By Theorem \[thm3\], $\bC (\rho (\x_1, \x_2))$ is an isotropic covariance matrix function on $\mathbb{M}^\infty$ if and only if $\mathbf{B}_2= \mathbf{B}_1 e^\pi$ is a positive definite matrix. To see this, notice that $\exp ( \arcsin x)$ possesses a Taylor series with positive coefficients (see, for instance, formula 1.216 of [@Gradshteyn2007]) $$\label{Taylor.series. exp.arcsin} \exp ( \arcsin x) = \sum_{n=0}^\infty a_n x^n = 1+x+\frac{x^2}{2!}+\frac{2x^3}{3!}+\frac{5x^4}{4!}+\cdots, ~~~~~~ x \in [-1, 1],$$ from which we obtain $$\begin{aligned} \bC \left( \pi- 2 \arcsin x \right) & = & \sum_{n=0}^\infty \left( (-1)^n \mathbf{B}_1 e^{\frac{\pi}{2}} +\mathbf{B}_2 e^{-\frac{\pi}{2}} \right) a_n x^n, ~~~~~~~~~~ x \in [0, 1]. \end{aligned}$$ A comparison of the last equation with (\[thm3.eq3\]) results in the positive definiteness of $ \mathbf{B}_1 e^{\frac{\pi}{2}} +\mathbf{B}_2 e^{-\frac{\pi}{2}}$ and $$(-1)^{2 n+1} \mathbf{B}_1 e^{\frac{\pi}{2}} +\mathbf{B}_2 e^{-\frac{\pi}{2}} = \mathbf{0}, ~~~~~ n \in \N_0,$$ or, equivalently, the positive definiteness of $\mathbf{B}_2= \mathbf{B}_1 e^\pi$. 
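The positivity of the coefficients $a_n$ in (\[Taylor.series. exp.arcsin\]) can also be seen from the differential equation $(1-x^2) y'' - x y' - y = 0$ satisfied by $y = \exp(\arcsin x)$, which yields the recursion $a_{n+2} = a_n (n^2+1)/((n+1)(n+2))$, $a_0 = a_1 = 1$. An illustrative numerical sketch (the recursion-based computation is our own rephrasing, not taken from [@Gradshteyn2007]):

```python
import math

def exp_arcsin_coeffs(N):
    # Coefficients a_n of exp(arcsin x) = sum a_n x^n, computed from the ODE
    # (1 - x^2) y'' - x y' - y = 0 satisfied by y = exp(arcsin x), i.e.
    # a_{n+2} = a_n (n^2 + 1) / ((n + 1)(n + 2)), a_0 = a_1 = 1.
    a = [0.0] * N
    a[0], a[1] = 1.0, 1.0
    for n in range(N - 2):
        a[n + 2] = a[n] * (n * n + 1) / ((n + 1) * (n + 2))
    return a

a = exp_arcsin_coeffs(80)
# First coefficients match formula 1.216: 1, 1, 1/2, 2/6, 5/24, ...
assert a[2] == 0.5 and abs(a[3] - 2 / 6) < 1e-15 and abs(a[4] - 5 / 24) < 1e-15
# All coefficients are positive, and the series reproduces exp(arcsin x).
assert all(c > 0 for c in a)
for x in [-0.5, 0.0, 0.4, 0.7]:
    assert abs(sum(c * x ** n for n, c in enumerate(a)) - math.exp(math.asin(x))) < 1e-8
```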
Given an $m \times m$ symmetric matrix $\mathbf{B}$ with entries $b_{ij}$, the entries of an $m \times m$ matrix function $\bC(\vartheta)$ are defined by $$C_{ij} (\vartheta) = \exp \left( b_{ij} \cos \frac{ \vartheta}{2} \right) + \exp \left( - b_{ij} \cos \frac{ \vartheta}{2} \right), ~~~~~~~ \vartheta \in [0, \pi], ~~~ i, j =1, \ldots, m.$$ It makes $\bC (\rho (\x_1, \x_2))$ an isotropic covariance matrix function on $\mathbb{M}^\infty$ if and only if ${\bf B}^{\circ 2}$ is a positive definite matrix, by Theorem \[thm3\], since $$\bC \left( \pi- 2 \arcsin x \right) = 2 \sum_{n=0}^\infty \frac{ \mathbf{B}^{\circ 2n} }{(2n)!} x^{2n}, ~~~~~ x \in [0, 1],$$ and $\sum\limits_{n=0}^\infty \frac{ \mathbf{B}^{\circ 2n} }{(2n)!}$ converges. In the scalar case $m=1$, the following corollary is a consequence of Theorem \[thm3\], and, by Theorem 2 of [@Askey1976], (\[thm3.eq4\]) below is an isotropic covariance function on $\mathbb{M}^\infty$. \[thm3.cor2\] For a continuous function $C(\vartheta)$ on $[0, \pi]$, the following statements are equivalent: - $C (\rho (\x_1, \x_2))$ is an isotropic covariance function on $\mathbb{M}^\infty$; - $C(\vartheta)$ is of the form $$\label{thm3.eq4} C (\vartheta) = \sum_{n=0}^\infty b_n (1+ \cos \vartheta )^n, ~~~~~ \vartheta \in [0, \pi],$$ where $\{ b_n, n \in \N_0 \}$ is a sequence of nonnegative numbers and $\sum\limits_{n=0}^\infty 2^n b_n$ converges; - $C \left( \frac{\pi}{2} - \arcsin x \right)$ is of the form $$\label{thm3.eq5} C \left( \frac{\pi}{2} - \arcsin x \right) = \sum_{n=0}^\infty b_n (1+x)^n, ~~~~~ x \in [-1, 1],$$ where $\{ b_n, n \in \N_0 \}$ is a sequence of nonnegative numbers and $\sum\limits_{n=0}^\infty 2^n b_n$ converges; - $C \left( \pi -2 \arcsin x \right)$ is of the form $$\label{thm3.eq6} C \left( \pi -2 \arcsin x \right) = \sum_{n=0}^\infty 2^n b_n x^{2n}, ~~~~~ x \in [0, 1],$$ where $\{ b_n, n \in \N_0 \}$ is a sequence of nonnegative numbers and $\sum\limits_{n=0}^\infty 2^n b_n$ converges. 
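A scalar ($m = 1$) sketch of the hyperbolic-cosine example above, with a hypothetical value of $b$: since $\cos \frac{\pi - 2\arcsin x}{2} = x$, the substitution turns $\bC(\pi - 2\arcsin x)$ into the even series $2\cosh(bx) = 2\sum_n (bx)^{2n}/(2n)!$, which has nonnegative coefficients as form (\[thm3.eq6\]) requires:

```python
import math

# Scalar instance with a hypothetical b:
# C(theta) = exp(b cos(theta/2)) + exp(-b cos(theta/2)) = 2 cosh(b cos(theta/2)).
b = 1.3

def C(theta):
    return math.exp(b * math.cos(theta / 2)) + math.exp(-b * math.cos(theta / 2))

# Under theta = pi - 2 arcsin x one has cos(theta/2) = x.
for x in [0.0, 0.2, 0.6, 1.0]:
    assert abs(C(math.pi - 2 * math.asin(x)) - 2 * math.cosh(b * x)) < 1e-12
    series = 2 * sum((b * x) ** (2 * n) / math.factorial(2 * n) for n in range(40))
    assert abs(series - 2 * math.cosh(b * x)) < 1e-12
```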
The exponential function $\exp \left(- \frac{\vartheta}{2} \right)$ is an important generator on all spheres $\S^\infty$. But, interestingly, it does not make $\exp \left( - \frac{\rho (\x_1, \x_2)}{2} \right)$ an isotropic covariance function on $\mathbb{M}^\infty$, since it follows from (\[Taylor.series. exp.arcsin\]) that $$\exp \left(- \frac{\pi - 2 \arcsin x}{2} \right) = \exp \left(- \frac{\pi}{2} \right) \sum_{n=0}^\infty a_n x^n, ~~~~~~ 0 \le x \le 1,$$ so that (\[thm3.eq6\]) fails. Nevertheless, $\cosh \left( \frac{\pi- \rho (\x_1, \x_2)}{2} \right)$ is an isotropic covariance function on $\mathbb{M}^\infty$, as is seen from Example 2. For a constant $\nu \in (0, 2]$, $$C( \vartheta ) = 1- \left( \sin \frac{\vartheta}{2} \right)^\nu, ~~~~ \vartheta \in [0, \pi],$$ makes $C(\rho (\x_1, \x_2))$ an isotropic covariance function on $\mathbb{M}^\infty$. Corollary \[thm3.cor2\] is applicable, with a form (\[thm3.eq6\]) of $C( \pi- 2 \arcsin x )$ given by $$\begin{aligned} C \left( \pi -2 \arcsin x \right) & = & 1- \left( \sin \frac{\pi-2 \arcsin x}{2} \right)^\nu \\ & = & 1- (1-x^2)^{\frac{\nu}{2} } \\ & = & \left\{ \begin{array}{lll} x^2, ~ & ~ \nu =2, \\ \sum\limits_{n=1}^\infty \frac{ \frac{\nu}{2} \prod\limits_{k=1}^{n-1} \left( k- \frac{\nu}{2} \right) }{n!} x^{2n}, ~ & ~ \nu \in (0, 2), ~ x \in [0, 1). \end{array} \right. \end{aligned}$$ Moreover, for constants $\nu_i \in (0, 2]$, an $m \times m $ matrix function $\bC (\vartheta)$ with entries $$C_{ij} (\vartheta) = 1- \left( \sin \frac{\vartheta}{2} \right)^{\max (\nu_i, \nu_j)}, ~~~~ \vartheta \in [0, \pi], ~ i, j = 1, \ldots, m,$$ makes $\bC(\rho (\x_1, \x_2))$ an isotropic covariance matrix function on $\mathbb{M}^\infty$, by Theorem \[thm3\], since an $m \times m $ matrix with entries $k- \frac{\max ( \nu_i, \nu_j)}{2} = \min \left( k- \frac{\nu_i}{2}, k - \frac{\nu_j}{2} \right)$ is positive definite, for $k \ge 1$. 
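In the scalar case, the binomial expansion of $1 - (1-x^2)^{\nu/2}$, with coefficients $\frac{\nu}{2} \prod_{k=1}^{n-1} \left( k - \frac{\nu}{2} \right) / n!$ for $\nu \in (0,2)$, can be verified numerically; a small Python sketch (values of $\nu$ and $x$ are illustrative):

```python
import math

# Numerical check of the expansion
# 1 - (1 - x^2)^{nu/2} = sum_{n>=1} (nu/2) prod_{k=1}^{n-1} (k - nu/2) / n! * x^{2n}
# for nu in (0, 2) and x in [0, 1).
def series(nu, x, terms=400):
    s, coeff = 0.0, nu / 2.0               # coefficient for n = 1
    for n in range(1, terms):
        s += coeff * x ** (2 * n)
        coeff *= (n - nu / 2.0) / (n + 1)  # ratio of consecutive coefficients
    return s

for nu in [0.5, 1.0, 1.7]:
    for x in [0.0, 0.3, 0.6, 0.9]:
        assert abs(series(nu, x) - (1.0 - (1.0 - x * x) ** (nu / 2.0))) < 1e-8
```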
- For $n \in \N_0$, $$\label{Askey1974} \left( \frac{1+x}{2} \right)^n = \sum_{k=0}^n \varphi_k^{(\alpha,\beta)} P_k^{(\alpha, \beta)} (x), ~~~~~~ x \in \R,$$ where $$\varphi_k^{(\alpha,\beta)} =\frac{\Gamma (n+\beta+1)n! (2k +\alpha+\beta+1) \Gamma (k+\alpha+\beta+1) \Gamma (k+\alpha+1)}{ \Gamma (k+n+\alpha+\beta+2) \Gamma (k+\beta+1) k! (n-k)! \Gamma (\alpha+1) P_n^{(\alpha, \beta)} (1)}.$$ - With $a_k$ given by (\[a.n.definition\]), if $\mathbf{U}$ is a random vector uniformly distributed on $\Md$, then $$Z ( \x) = \sum_{k=0}^n a_k \left( \varphi_k^{(\alpha,\beta)} \right)^{\frac{1}{2}} P_k^{(\alpha, \beta)} ( \cos \rho (\x, \mathbf{U} ) ), ~~~~~~ \x \in \Md,$$ is a scalar isotropic random field on $\Md$ with mean 0 and covariance function $\left( \frac{1+\cos \rho (\x_1, \x_2) }{2} \right)^n$. <!-- --> - For a fixed $\beta > -1$ and $n \in \N$, $$\label{Jacobi.lim} \lim_{\alpha \to \infty} \frac{ P_n^{(\alpha, \beta)} (\cos \vartheta)}{ P_n^{(\alpha, \beta)} (1)} = \left( \frac{1+\cos \vartheta }{2} \right)^n, ~~~ \vartheta \in [0, \pi].$$ - For a fixed $\beta \ge -\frac{1}{2}$ and $\vartheta \in (0, \pi]$, as $\alpha \to \infty$, the limit in (\[Jacobi.lim\]) holds uniformly in $n \in \N$; that is, for any $\epsilon > 0$, there exists $A(\epsilon, \vartheta, \beta)$ such that, for any $\alpha > A(\epsilon, \vartheta, \beta)$ and $n \in \N$, $$\label{Jacobi.lim.epsion} \left| \frac{ P_n^{(\alpha, \beta)} (\cos \vartheta )}{ P_n^{(\alpha, \beta)} (1)} - \left( \frac{1+ \cos \vartheta }{2} \right)^n \right| < \epsilon.$$ To prove Theorem \[thm3\], we need Lemmas 1 and 2. With identity (\[Askey1974\]) taken from [@Askey1974], Lemma 1 (ii) is derived from (\[Askey1974\]) and Lemma 3 of [@MaMalyarenko2018]. The proof of Lemma 2 (ii) is given in Subsection \[lemma2.proof\], while limit (\[Jacobi.lim\]) in Lemma 2 (i) is from (18.6.2) of [@Olver2010]. 
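Limit (\[Jacobi.lim\]) can be illustrated numerically via the explicit finite-sum representation of the Jacobi polynomials (Szegő (4.3.2)); a hedged Python sketch, where the large value of $\alpha$ and the tolerance are illustrative only:

```python
import math

def gbinom(a, k):
    # generalized binomial coefficient C(a, k) for real a, integer k >= 0
    out = 1.0
    for j in range(1, k + 1):
        out *= (a - k + j) / j
    return out

def jacobi(n, alpha, beta, x):
    # Szego (4.3.2): P_n^{(alpha,beta)}(x)
    # = sum_s C(n+alpha, n-s) C(n+beta, s) ((x-1)/2)^s ((x+1)/2)^{n-s}
    return sum(gbinom(n + alpha, n - s) * gbinom(n + beta, s)
               * ((x - 1) / 2) ** s * ((x + 1) / 2) ** (n - s)
               for s in range(n + 1))

# As alpha -> infinity with beta fixed,
# P_n^{(alpha,beta)}(cos t) / P_n^{(alpha,beta)}(1) -> ((1 + cos t)/2)^n.
alpha, beta = 1.0e5, 0.5
for n in [1, 2, 3]:
    for t in [0.5, 1.5, 2.5]:
        ratio = jacobi(n, alpha, beta, math.cos(t)) / jacobi(n, alpha, beta, 1.0)
        assert abs(ratio - ((1 + math.cos(t)) / 2) ** n) < 1e-3
```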
Isotropic covariance matrix functions on $\Md$ generated from those in the Euclidean space ============================================================================================ For an $m \times m$ symmetric matrix function $\bC (\vartheta)$ with all entries continuous on $[0, \infty)$, in this section we show that it makes $\bC \left( \rho_{\tiny{\S^{d}}} (\x_1, \x_2) \right) $ and $\bC \left( \rho_{\tiny{\mathbb{P}^{d}( \R)}} (\x_1, \x_2) \right)$ isotropic covariance matrix functions on $\S^d$ and $\mathbb{P}^d (\R)$, respectively, if it is compactly supported and it makes $\bC ( \| \x_1 -\x_2 \|)$ an isotropic covariance matrix function in $\R^d$, whenever $d$ is odd. An $m$-variate stationary random field $\{ \bZ (\x), \x \in \R^d \}$ is said to be isotropic, if its covariance matrix function $\cov (\bZ (\x_1), \bZ (\x_2))$ depends only on the Euclidean distance $\| \x_1 - \x_2 \|$ between two points $\x_1$ and $\x_2$ in $\R^d$. When $\{ \bZ (\x), \x \in \R^d \}$ is mean square continuous, $\cov (\bZ (\x_1), \bZ (\x_2))$ is continuous in $\R^d$ and possesses an integral representation [@WangDuMa2014], $$\cov (\bZ (\x_1), \bZ (\x_2)) = \int_0^\infty \Omega_d ( \| \x_1-\x_2 \| \omega) d \mathbf{F} (\omega), ~~~~~~ \x_1, \x_2 \in \R^d,$$ where $\mathbf{F}( \omega), \omega \in [0, \infty ),$ is an $m \times m$ right-continuous, bounded matrix function with $ \mathbf{F}(0-) = \mathbf{0}$, $\mathbf{F} ( \omega_2) - \mathbf{F} ( \omega_1)$ is positive definite for every pair of $\omega_1 $ and $\omega_2$ with $0 \le \omega_1 \le \omega_2$, $$\Omega_d ( \omega ) = 2^{\frac{d}{2}-1} \Gamma \left( \frac{d}{2} \right) \omega^{-\frac{d}{2}+1} J_{\frac{d}{2}-1} (\omega), ~ \omega \ge 0,$$ and $J_\nu (x)$ is the Bessel function of the first kind [@Szego1975]. For an integer or positive order $\nu$, $J_\nu(x)$ possesses a series representation $$J_{\nu} (x) = \left( \frac{x}{2} \right)^\nu \sum_{k=0}^\infty \frac{(-1)^k}{k!
\Gamma (\nu+k+1)} \left( \frac{x}{2} \right)^{2 k}, ~~~~ x > 0.$$ \[thm4\] Suppose that $\bC (\vartheta)$ is an $m \times m$ symmetric matrix function on $[0, \infty)$ and all its entries are continuous on $[0, \infty)$. For an odd $d$, if $\bC ( \| \x_1-\x_2 \|)$ is an isotropic covariance matrix function in $\R^d$, and, if all entries of $\bC (\vartheta)$ are compactly supported with $$C_{ij} (\vartheta) = 0, ~~~~ \vartheta \ge \pi, ~~~~ i, j = 1, \ldots, m,$$ then $\bC \left( \rho_{\tiny{\S^d}} (\x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\S^d$, and $\bC \left( \rho_{\tiny{\mathbb{P}^d( \R)}} (\x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\mathbb{P}^d( \R)$. It is not clear whether a similar result holds for an even integer $d$. Nevertheless, the following corollary is a consequence of Theorem \[thm4\], since an isotropic covariance matrix function in $\R^d$ is also an isotropic covariance matrix function in $\R^{d-1}$ ($d \ge 2$), and $d-1$ is odd for an even $d$. Let $\bC( \vartheta)$ be as in Theorem \[thm4\]. For an even integer $d$, if $\bC(\| \x_1-\x_2 \|)$ is an isotropic covariance matrix function in $\R^d$, then $\bC \left( \rho_{\tiny{\S^{d-1}}} (\x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\S^{d-1}$, and $\bC \left( \rho_{\tiny{\mathbb{P}^{d-1}( \R)}} (\x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\mathbb{P}^{d-1}( \R)$. The requirement that $\bC( \vartheta)$ vanishes over $[ \pi, \infty)$ is not crucial in Theorem \[thm4\], since it is always possible to change the scale for a compactly supported function. This results in the following corollary. Suppose that all entries of $\bC( \vartheta)$ are continuous on $[0, \infty)$, and $$C_{ij}( \vartheta)=0, ~~~ \vartheta \ge l, ~ i, j = 1, \ldots, m,$$ where $l$ is a positive constant. 
For an odd integer $d$, if $\bC(\| \x_1-\x_2 \|)$ is an isotropic covariance matrix function in $\R^d$, then $\bC \left( \frac{l}{\pi} \rho_{\tiny{\S^{d}}} (\x_1, \x_2) \right) $ is an isotropic covariance matrix function on $\S^{d}$, and $\bC \left( \frac{l}{\pi} \rho_{\tiny{\mathbb{P}^{d}( \R)}} (\x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\mathbb{P}^d (\R)$. Theorem \[thm4\], which contains Theorems 3 and 4 of [@Ma2016a] as special cases where $d=1, 3$, is conjectured in [@Ma2016a] with the comment that “A difficulty arises when one deals with the connection between the two bases, the Bessel functions for $\R^d$ and ultraspherical polynomials for $\S^d$”. Such a difficulty is overcome in Theorem \[thm5\], where identity (\[thm5.eq\]) builds a useful connection between an integral with respect to Jacobi polynomials and an integral with respect to the Bessel function, observing that the right-hand side of (\[thm5.eq\]) is related to the Fourier transform of the isotropic function $g( \| \x \|), \x \in \R^d$. In the scalar case $m=1$, Theorem \[thm4\] is proved on $\S^d$ via another approach and is conjectured on $\Md$ in [@NieMa2019], with an interesting example in [@Xu2017]. \[thm5\] Suppose that $g(x)$ is a continuous function on $[0, \pi]$, and that $\alpha+\frac{1}{2}$ is a nonnegative integer. - For a nonnegative integer $\beta+\frac{1}{2}$, there is a number $\xi_n \in [ n, n+\alpha+\beta+1]$ such that $$\label{thm5.eq} \begin{array}{lll} & & \int_0^\pi g(\vartheta) P_n^{(\alpha, \beta) } \left( \cos \vartheta \right) \sin^{2 \alpha+1} \left( \frac{\vartheta}{2} \right) \cos^{2 \beta+1} \left( \frac{\vartheta}{2} \right) d \vartheta \\ & = & \frac{ \Gamma (n+\alpha+1)} { 2^{\alpha+1} n!} \int_0^\pi \frac{J_\alpha (\xi_n x)}{\xi_n^\alpha} g(x) x^{\alpha+1} dx, ~~~~~ n \in \N_0. 
\end{array}$$ - If the cosine series of $g(x)$ converges at $x=0$, and $$\label{thm5.ineq} \int_0^\pi J_\alpha (\omega x) g(x) x^{\alpha+1} dx \ge 0, ~~~~~ \omega \ge 0,$$ then, for each $\beta$ with $\beta +\frac{1}{2} \in \N_0$, $$\label{thm5.ineq2} h_n^{(\alpha, \beta)} = \int_0^\pi g(\vartheta) P_n^{(\alpha, \beta) } \left( \cos \vartheta \right) \sin^{2 \alpha+1} \left( \frac{\vartheta}{2} \right) \cos^{2 \beta+1} \left( \frac{\vartheta}{2} \right) d \vartheta \ge 0, ~~ n \in \N_0,$$ the infinite series $\sum\limits_{n=0}^\infty n^{\alpha+1} h_n^{(\alpha, \beta)}$ converges, and $g(x)$ can be written as the Jacobi series $$g (x) = \sum_{n=0}^\infty \frac{n! (2n+\alpha+\beta+1) \Gamma (n+\alpha+\beta+1)}{ \Gamma (n+\alpha+1) \Gamma (n+\beta+1)} h_n^{(\alpha, \beta)} P_n^{(\alpha, \beta)} (\cos x), ~~~~~ 0 \le x \le \pi.$$ In the particular case where $\alpha =\beta =-\frac{1}{2}$, (\[thm5.eq\]) holds with $\xi_n =n$. As a likely explanation for why identity (\[thm5.eq\]) works well for a positive integer $\alpha+\frac{1}{2}$, $\left( \frac{\pi}{2 x} \right)^{\frac{1}{2}} J_\alpha (x)$ is the spherical Bessel function of the first kind [@Olver2010] and is a linear combination of $\sin x$, $\cos x$, and rational functions, according to (10.49.2) of [@Olver2010]. This may lead to its connection to $P_n^{(\alpha, \beta) } \left( \cos x \right)$, which is simply a polynomial of $\cos x$. Proofs ====== Proof of Theorem \[thm1\] ------------------------- In the particular case $d=1$, $\Md = \S^1$, and Theorem \[thm1\] is known [@Ma2016a], [@Ma2017]. For $d \ge 2$, it suffices to verify the equivalence between (ii) and (iii), while the equivalence between (i) and (ii) is shown in [@MaMalyarenko2018]. \(ii) $\Longrightarrow$ (iii). Suppose that $\bC (\vartheta)$ is of the form (\[cov.mf1\]). 
Making the transform $u =\cos \vartheta$, we obtain $$\begin{aligned} \mathbf{H}_n^{(\alpha, \beta) } & = & \int_0^\pi \bC (\vartheta) P_n^{(\alpha, \beta) } \left( \cos \vartheta \right) \sin^{2 \alpha+1} \left( \frac{\vartheta}{2} \right) \cos^{2 \beta+1} \left( \frac{\vartheta}{2} \right) d \vartheta \\ & = & \int_0^\pi \left( \sum_{k=0}^\infty \mathbf{B}_k P_k^{(\alpha, \beta) } \left( \cos \vartheta \right) \right) P_n^{(\alpha, \beta) } \left( \cos \vartheta \right) \sin^{2 \alpha+1} \left( \frac{\vartheta}{2} \right) \cos^{2 \beta+1} \left( \frac{\vartheta}{2} \right) d \vartheta \\ & = & \sum_{k=0}^\infty \mathbf{B}_k \int_0^\pi P_k^{(\alpha, \beta) } \left( \cos \vartheta \right) P_n^{(\alpha, \beta) } \left( \cos \vartheta \right) \sin^{2 \alpha+1} \left( \frac{\vartheta}{2} \right) \cos^{2 \beta+1} \left( \frac{\vartheta}{2} \right) d \vartheta \\ & = & 2^{- (\alpha+\beta+1)} \sum_{k=0}^\infty \mathbf{B}_k \int_{-1}^1 P_k^{(\alpha, \beta) } \left( u \right) P_n^{(\alpha, \beta) } \left( u \right) (1-u)^\alpha (1+u)^\beta du \\ & = & \frac{ \Gamma (n+\alpha+1) \Gamma (n+\beta+1)}{n! (2n+\alpha+\beta+1) \Gamma (n+\alpha+\beta+1)} \mathbf{B}_n, ~~~~~ n \in \N_0, \end{aligned}$$ where the exchange between the integral and the infinite summation is ensured by the convergence of $\sum\limits_{k=0}^\infty \mathbf{B}_k P_k^{(\alpha, \beta)} (1)$, and the last equality is due to the following orthogonal property of the Jacobi polynomials [@Szego1975], $$\label{Jacobi.Orthogonal} \int_{-1}^1 P^{(\alpha, \beta)}_i (x) P^{(\alpha, \beta)}_j (x) (1-x)^\alpha (1+x)^\beta d x = \left\{ \begin{array}{ll} \frac{2^{\alpha+\beta+1} }{2 j +\alpha+\beta+1} \frac{\Gamma (j+\alpha+1) \Gamma (j+\beta+1)}{ j! \Gamma ( j +\alpha+\beta+1) }, ~ & ~ i =j, \\ 0, ~ & ~ i \neq j, \end{array} \right.$$ for each pair of $\alpha>-1$ and $\beta>-1$. The matrix $\mathbf{H}_n^{(\alpha, \beta) } $ is positive definite, since $\mathbf{B}_n$ is so. 
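As a numerical sanity check of the orthogonality relation (\[Jacobi.Orthogonal\]) used in the last step, a simple quadrature sketch (the parameters $\alpha = \beta = \frac{1}{2}$ and the grid size are illustrative; the Jacobi polynomials are evaluated through their explicit finite-sum representation):

```python
import math

def gbinom(a, k):
    # generalized binomial coefficient C(a, k), real a, integer k >= 0
    out = 1.0
    for j in range(1, k + 1):
        out *= (a - k + j) / j
    return out

def jacobi(n, alpha, beta, x):
    # explicit finite-sum representation of P_n^{(alpha,beta)}(x)
    return sum(gbinom(n + alpha, n - s) * gbinom(n + beta, s)
               * ((x - 1) / 2) ** s * ((x + 1) / 2) ** (n - s)
               for s in range(n + 1))

def inner(i, j, alpha, beta, N=100001):
    # trapezoidal quadrature of P_i P_j (1-x)^alpha (1+x)^beta over [-1, 1]
    h = 2.0 / (N - 1)
    total = 0.0
    for k in range(N):
        x = -1.0 + k * h
        w = max(1.0 - x, 0.0) ** alpha * max(1.0 + x, 0.0) ** beta
        f = jacobi(i, alpha, beta, x) * jacobi(j, alpha, beta, x) * w
        total += f * (0.5 if k in (0, N - 1) else 1.0)
    return total * h

al = be = 0.5
# distinct degrees are orthogonal; equal degrees give the stated constant
assert abs(inner(1, 2, al, be)) < 1e-6
norm1 = 2 ** (al + be + 1) / (2 + al + be + 1) * math.gamma(2 + al) \
    * math.gamma(2 + be) / (math.factorial(1) * math.gamma(2 + al + be))
assert abs(inner(1, 1, al, be) - norm1) < 1e-6
```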
The convergence of $\sum\limits_{k=0}^\infty k^\alpha \mathbf{B}_k$ implies that of $\sum\limits_{k=0}^\infty k^{\alpha+1} \mathbf{H}_k^{(\alpha, \beta) } $, since $ \lim\limits_{k \to \infty} \frac{ \Gamma (k+\alpha+1) \Gamma (k+\beta+1)}{k! \Gamma (k+\alpha+\beta+1)} =1. $ \(iii) $\Longrightarrow$ (ii). If $\mathbf{H}_n^{(\alpha, \beta) } $ ($n \in \N_0$) are positive definite, then so are $$\mathbf{B}_n = \frac{n! (2n+\alpha+\beta+1) \Gamma (n+\alpha+\beta+1)}{ \Gamma (n+\alpha+1) \Gamma (n+\beta+1)} \mathbf{H}_n^{(\alpha, \beta) } , ~~~ n \in \N_0.$$ The convergence of $\sum\limits_{n=0}^\infty n^{\alpha+1} \mathbf{H}_n^{(\alpha, \beta) } $ implies those of $\sum\limits_{n=0}^\infty n^\alpha \mathbf{B}_n$, $\sum\limits_{n=0}^\infty \mathbf{B}_n P_n^{(\alpha, \beta)} (1)$, and the infinite series at the right-hand side of (\[cov.mf2\]), which converges to $\bC (\vartheta)$ uniformly over $[0, \pi]$. Proof of Theorem \[thm2\] ------------------------- \(i) For an odd $d \ge 3$, $\alpha+\frac{1}{2} = \frac{d-1}{2}$ is a positive integer. 
In (\[Jacobi.identity2\]) taking $\beta =\alpha$ and substituting $x$ by $-x$, from identity $P_n^{(\alpha, \beta)} (-x) = (-1)^n P_n^{(\beta, \alpha)} (x)$ we obtain $$(1+x)^{ \frac{d-1}{2} } P_n^{ \left( \frac{d-2}{2}, \frac{d-2}{2} \right)} (x) = \sum_{j=0}^{\frac{d-1}{2} } a_j^{ \left( - \frac{1}{2} \right)} (n) P_{n+j}^{ \left( \frac{d-2}{2}, -\frac{1}{2} \right)} (x), ~~~ x \in \R, ~ n \in \N_0,$$ and, from (\[thm1.eq1\]), $$\begin{aligned} & & \mathbf{H}_n^{ \left( \frac{d-2}{2}, \frac{d-2}{2} \right)} = \int_0^\pi \bC (\vartheta) P_n^{ \left( \frac{d-2}{2}, \frac{d-2}{2} \right) } \left( \cos \vartheta \right) \sin^{d-1} \left( \frac{\vartheta}{2} \right) \cos^{d-1} \left( \frac{\vartheta}{2} \right) d \vartheta \\ & = & 2^{\frac{1-d}{2}} \int_0^\pi \bC (\vartheta) (1+\cos \vartheta)^{\frac{d-1}{2}} P_n^{ \left( \frac{d-2}{2}, \frac{d-2}{2} \right) } \left( \cos \vartheta \right) \sin^{d-1} \left( \frac{\vartheta}{2} \right) d \vartheta \\ & = & 2^{\frac{1-d}{2}} \sum_{j=0}^{\frac{d-1}{2} } a_j^{ \left( - \frac{1}{2} \right)} (n) \int_0^\pi \bC (\vartheta) P_{n+j}^{ \left( \frac{d-2}{2}, -\frac{1}{2} \right)} ( \cos \vartheta ) \sin^{d-1} \left( \frac{\vartheta}{2} \right) d \vartheta \\ & = & 2^{\frac{1-d}{2}} \sum_{j=0}^{\frac{d-1}{2} } a_j^{ \left( - \frac{1}{2} \right)} (n) \mathbf{H}_{n+j}^{ \left( \frac{d-2}{2}, -\frac{1}{2} \right)}, ~~~~~~ n \in \N_0, \end{aligned}$$ where the positive constant $a_j^{ \left( - \frac{1}{2} \right)} (n)$ is given by (\[Jacobi.identity2.coeff\]) with $\alpha=\beta = \frac{d-2}{2}$. If $\bC \left( \rho_{\tiny{\mathbb{P}^{d}( \R)}} (\x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\mathbb{P}^d (\R)$, then, by Theorem \[thm1\], $\mathbf{H}_n^{ \left( \frac{d-2}{2}, -\frac{1}{2} \right)}$ is positive definite. So is $\mathbf{H}_n^{ \left( \frac{d-2}{2}, \frac{d-2}{2} \right)}$, $n \in \N_0$. 
The convergence of $\sum\limits_{n=0}^\infty n^d \mathbf{H}_n^{ \left( \frac{d-2}{2}, -\frac{1}{2} \right) } $ implies that of $\sum\limits_{n=0}^\infty n^d \mathbf{H}_n^{ \left( \frac{d-2}{2}, \frac{d-2}{2} \right) } .$ Thus, $\bC \left( \rho_{\tiny{\S^d}} (\x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\S^d$, by Theorem \[thm1\]. For an even $d$, if $\bC \left( \rho_{\tiny{\mathbb{P}^{d}( \R)}} (\x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\mathbb{P}^d (\R)$, then it is an isotropic covariance matrix function on $\mathbb{P}^{d-1} (\R)$, with $d-1$ being odd, and, consequently, $\bC \left( \rho_{\tiny{\S^{d-1}}} (\x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\S^{d-1}$. \(ii) For an even $d \ge 4$, $\alpha = \frac{d-2}{2}$ is a positive integer. Substituting $x$ by $-x$, (\[Jacobi.identity1\]) becomes $$\label{thm2.proof} (1+x)^\alpha P_n^{(\beta, \alpha)} (x) = \sum_{j=0}^\alpha a_j^{(0)} (n) P_{n+j}^{( \beta, 0)} (x), ~~~ x \in \R, ~ n \in \N_0.$$ For $\beta =\alpha = \frac{d-2}{2}$, it follows from (\[thm1.eq1\]) and (\[thm2.proof\]) that $$\begin{aligned} & & \mathbf{H}_n^{ \left( \frac{d-2}{2}, \frac{d-2}{2} \right)} = 2^{1-\frac{d}{2}} \int_0^\pi \bC (\vartheta) (1+\cos \vartheta)^{\frac{d-2}{2}} P_n^{ \left( \frac{d-2}{2}, \frac{d-2}{2} \right) } \left( \cos \vartheta \right) \sin^{d-1} \left( \frac{\vartheta}{2} \right) \cos \left( \frac{\vartheta}{2} \right) d \vartheta \\ & = & 2^{1-\frac{d}{2}} \sum_{j=0}^{\frac{d-2}{2} } a_j^{ \left( 0 \right)} (n) \int_0^\pi \bC (\vartheta) P_{n+j}^{ \left( \frac{d-2}{2}, 0 \right)} ( \cos \vartheta ) \sin^{d-1} \left( \frac{\vartheta}{2} \right) \cos \left( \frac{\vartheta}{2} \right) d \vartheta \\ & = & 2^{1-\frac{d}{2}} \sum_{j=0}^{\frac{d-2}{2} } a_j^{ \left( 0 \right)} (n) \mathbf{H}_{n+j}^{ \left( \frac{d-2}{2}, 0 \right)}, ~~~~~~ n \in \N_0. 
\end{aligned}$$ If $\bC \left( \rho_{\tiny{\mathbb{P}^{d}( \C)}} (\x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\mathbb{P}^d (\C)$, then, by Theorem \[thm1\], $\mathbf{H}_n^{ \left( \frac{d-2}{2}, 0 \right)}$ is positive definite. So is $\mathbf{H}_n^{ \left( \frac{d-2}{2}, \frac{d-2}{2} \right)}$, $n \in \N_0$. By Theorem \[thm1\], $\bC \left( \rho_{\tiny{\S^d}} (\x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\S^d$. For $d = 8, 12, \ldots,$ if $\bC \left( \rho_{\tiny{\mathbb{P}^{d}( \C)}} (\x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\mathbb{P}^d (\C)$, then $\mathbf{H}_n^{ \left( \frac{d-2}{2}, 0 \right)}$ is positive definite, by Theorem \[thm1\]. So is $\mathbf{H}_n^{ \left( \frac{d-2}{2}, 1 \right)}$, $n \in \N_0$, by identity (\[H.n.idenity2\]). As a result, $\bC \left( \rho_{\tiny{\mathbb{P}^{d}( \H)}} (\x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\mathbb{P}^d (\H)$. \(iii) It can be derived in a way similar to the proof of Part (ii). 
\(iv) Since $\bC \left( \rho_{\tiny{\S^d}} (\x_1, \x_2) \right)$ is an isotropic covariance matrix function on $\S^d$, $\bC (\vartheta)$ is of the form (\[cov.mf2\]) with $\alpha =\beta =\frac{d-2}{2}$, and, thus, $$\begin{aligned} & & \bC \left( \frac{\vartheta}{2} \right)+ \bC \left( \pi- \frac{\vartheta}{2} \right) \\ & = & \sum_{n=0}^\infty \mathbf{B}_n \left\{ P_n^{ \left( \frac{d-2}{2}, \frac{d-2}{2} \right)} \left( \cos \frac{\vartheta}{2} \right) + P_n^{ \left( \frac{d-2}{2}, \frac{d-2}{2} \right)} \left( -\cos \frac{\vartheta}{2} \right) \right\} \\ & = & \sum_{n=0}^\infty \mathbf{B}_n \left\{ P_n^{ \left( \frac{d-2}{2}, \frac{d-2}{2} \right)} \left( \cos \frac{\vartheta}{2} \right) +(-1)^n P_n^{ \left( \frac{d-2}{2}, \frac{d-2}{2} \right)} \left( \cos \frac{\vartheta}{2} \right) \right\} \\ & = & 2 \sum_{n=0}^\infty \mathbf{B}_{2n} P_{2n}^{ \left( \frac{d-2}{2}, \frac{d-2}{2} \right)} \left( \cos \frac{\vartheta}{2} \right) \\ & = & 2 \sum_{n=0}^\infty \frac{ \Gamma \left( 2n+\frac{d}{2} \right) \Gamma (n+1) \mathbf{B}_{2n}}{ \Gamma \left( n+\frac{d}{2} \right) \Gamma (2n+1)} P_{n}^{ \left( \frac{d-2}{2}, -\frac{1}{2} \right)} \left( \cos \vartheta \right), \end{aligned}$$ where the second and the last equalities follow from identities (4.1.3) and (4.1.5) of [@Szego1975], respectively. It follows from $\frac{\Gamma (n+\kappa+1)}{\Gamma (n+1)} \sim n^\kappa$ that $$\lim_{n \to \infty} \frac{ \Gamma \left( 2n+\frac{d}{2} \right) \Gamma (n+1) }{ \Gamma \left( n+\frac{d}{2} \right) \Gamma (2n+1)} = 2^{\frac{d}{2}-1},$$ and the convergence of $\sum\limits_{n=0}^\infty n^{\frac{d-2}{2}} \mathbf{B}_n$ implies that of $\sum\limits_{n=0}^\infty \frac{ \Gamma \left( 2n+\frac{d}{2} \right) \Gamma (n+1) \mathbf{B}_{2n}}{ \Gamma \left( n+\frac{d}{2} \right) \Gamma (2n+1)} P_{n}^{ \left( \frac{d-2}{2}, -\frac{1}{2} \right)} (1)$. 
By Theorem \[thm1\], $ \bC \left( \frac{\rho_{\tiny{\mathbb{P}^{d}( \R)}} (\x_1, \x_2) }{2} \right)+ \bC \left( \pi- \frac{\rho_{\tiny{\mathbb{P}^{d}( \R)}} (\x_1, \x_2)}{2} \right) $ is an isotropic covariance matrix function on $\mathbb{P}^d (\R)$. Similarly, it follows from identities (4.1.3) and (4.1.5) of [@Szego1975] that $$\begin{aligned} & & \bC \left( \frac{\vartheta}{2} \right)- \bC \left( \pi- \frac{\vartheta}{2} \right) = 2 \sum_{n=0}^\infty \mathbf{B}_{2n+1} P_{2n+1}^{ \left( \frac{d-2}{2}, \frac{d-2}{2} \right)} \left( \cos \frac{\vartheta}{2} \right) \\ & = & 2 \cos \left( \frac{\vartheta}{2} \right) \sum_{n=0}^\infty \frac{ \Gamma \left( 2n+\frac{d}{2} +1\right) \Gamma (n+1) \mathbf{B}_{2n+1}}{ \Gamma \left( n+\frac{d}{2} \right) \Gamma (2n+2)} P_{n}^{ \left( \frac{d-2}{2}, \frac{1}{2} \right)} \left( \cos \vartheta \right), \end{aligned}$$ and, from (\[Szego4.5.4.dual\]), $$\begin{aligned} & & \left( \bC \left( \frac{\vartheta}{2} \right)- \bC \left( \pi- \frac{\vartheta}{2} \right) \right) \cos \left( \frac{\vartheta}{2} \right) \\ & = & \frac{1}{2} \sum_{n=0}^\infty \frac{ \Gamma \left( 2n+\frac{d}{2} +1\right) \Gamma (n+1) \mathbf{B}_{2n+1}}{ \Gamma \left( n+\frac{d}{2} \right) \Gamma (2n+2)} (1+\cos \vartheta) P_{n}^{ \left( \frac{d-2}{2}, \frac{1}{2} \right)} \left( \cos \vartheta \right) \\ & = & \sum_{n=0}^\infty \frac{ \Gamma \left( 2n+\frac{d}{2} +1\right) \Gamma (n+1) \mathbf{B}_{2n+1}}{ \Gamma \left( n+\frac{d}{2} \right) \Gamma (2n+2) (4n +d+1)} \left( (2n+1) P_{n}^{ \left( \frac{d-2}{2}, -\frac{1}{2} \right)} \left( \cos \vartheta \right) \right. \\ & & \left. 
+ 2( n+1) P_{n+1}^{ \left( \frac{d-2}{2}, -\frac{1}{2} \right)} \left( \cos \vartheta \right) \right), \end{aligned}$$ which implies that $ \left( \bC \left( \frac{\rho_{\tiny{\mathbb{P}^{d}( \R)}} (\x_1, \x_2)}{2} \right)- \bC \left( \pi- \frac{\rho_{\tiny{\mathbb{P}^{d}( \R)}} (\x_1, \x_2)}{2} \right) \right) \cos \left( \frac{\rho_{\tiny{\mathbb{P}^{d}( \R)}} (\x_1, \x_2)}{2} \right)$ is an isotropic covariance matrix function on $\mathbb{P}^d (\R)$ by Theorem \[thm1\]. Proof of Theorem \[thm3\] ------------------------- It suffices to establish the equivalence between statements (i) and (ii), while the equivalence between statements (ii) and (iii) is due to the identity $ \vartheta = \frac{\pi}{2} -\arcsin ( \cos \vartheta ),$ $ \vartheta \in [0, \pi], $ and that between statements (ii) and (iv) follows from the transform $x = \cos \frac{\vartheta}{2}, 0 \le x \le 1.$ \(ii) $\Longrightarrow$ (i): Let $\bC (\vartheta)$ take the form (\[thm3.eq1\]). For each $n \in \N_0$, $(1+\cos \rho (\x_1, \x_2) )^n$ is an isotropic covariance function on each $\Md$, by Lemma 1 (i). So is $\bC ( \rho (\x_1, \x_2))$, by Theorem 2 of [@MaMalyarenko2018]. \(i) $\Longrightarrow$ (ii): Suppose that $\bC (\rho (\x_1, \x_2))$ is an isotropic covariance matrix function on $\mathbb{M}^\infty$. Then it is an isotropic covariance matrix function on each $\Md$, and, for each possible pair of $\alpha$ and $\beta$ in Table \[tab:1\], by Theorem 1 (ii), $\bC (\vartheta)$ must be of the form $$\label{thm3.proof.eq} \bC (\vartheta) = \sum_{n=0}^\infty \mathbf{B}_n^{(\alpha, \beta)} \frac{P_n^{(\alpha, \beta)} (\cos \vartheta)}{ P_n^{(\alpha, \beta)} (1)}, ~~~~ \vartheta \in [ 0, \pi],$$ where $\{ \mathbf{B}_n^{(\alpha, \beta)}, n \in \N_0 \}$ is a sequence of $m \times m $ positive definite matrices and the series $\sum\limits_{n=0}^\infty \mathbf{B}_n^{(\alpha, \beta)}$ converges. When $\beta =\alpha = \frac{d-2}{2}$, limit (18. 
6.4) of [@Olver2010] reads $ \lim\limits_{\alpha \to \infty} \frac{P_n^{(\alpha, \beta)} (\cos \vartheta)}{ P_n^{(\alpha, \beta)} (1)} =\cos^n \vartheta$. In (\[thm3.proof.eq\]) taking $\alpha \to \infty$ and applying Lemma 1 of [@Schoenberg1942] yields (see [@Ma2015]) $$\bC (\vartheta) = \sum_{n=0}^\infty \mathbf{B}_n \cos^n \vartheta, ~~~~ \vartheta \in [ -\pi, \pi],$$ which contains (\[thm3.eq1\]) as a special case. When $\beta$ is fixed as listed in Table \[tab:1\], we consider the scalar case $m=1$ first, under which (\[thm3.proof.eq\]) reduces to $$C (\vartheta) = \sum_{n=0}^\infty b_n^{(\alpha, \beta)} \frac{P_n^{(\alpha, \beta)} (\cos \vartheta)}{ P_n^{(\alpha, \beta)} (1)}, ~~~~ \vartheta \in [ 0, \pi],$$ where the nonnegative series $\sum\limits_{n=0}^\infty b_n^{(\alpha, \beta)}$ converges; its terms are therefore bounded by $$0 \le b_n^{(\alpha, \beta)} \le \sum\limits_{k=0}^\infty b_k^{(\alpha, \beta)} = C(0), ~~~~ n \in \N_0.$$ By Cantor’s diagonal argument, there exists a subsequence $\{ \alpha_k , k \in \N \}$ and a nonnegative sequence $\{b_n, n \in \N_0 \}$ such that for any $n \in \N_0$, $$\label{diag} \lim_{k \to\infty} b^{(\alpha_k,\beta)}_n = b_n.$$ For $\vartheta \in (0, \pi]$, we have $$\begin{aligned} & & C (\vartheta) - \sum_{n=0}^\infty b_n \left( \frac{1+\cos \vartheta}{2} \right)^n \\ & = & \sum_{n=0}^\infty b_n^{(\alpha_k, \beta)} \frac{P_n^{(\alpha_k, \beta)} (\cos \vartheta)}{ P_n^{(\alpha_k, \beta)} (1)} - \sum_{n=0}^\infty b_n \left( \frac{1+\cos \vartheta}{2} \right)^n \\ & = & \sum_{n=0}^\infty b_n^{(\alpha_k, \beta)} \left( \frac{P_n^{(\alpha_k, \beta)} (\cos \vartheta)}{ P_n^{(\alpha_k, \beta)} (1)} - \left( \frac{1+\cos \vartheta}{2} \right)^n \right) + \sum_{n=0}^\infty ( b_n^{(\alpha_k, \beta)}- b_n) \left( \frac{1+\cos \vartheta}{2} \right)^n, \end{aligned}$$ where the first sum converges to 0 as $\alpha_k \to\infty$ by Lemma 2 (ii), and the
second sum converges to 0 by dominated convergence, since $$\sum_{n=0}^\infty b^{ (\alpha_k,\beta)}_n \left( \frac{1+\cos \vartheta}{2} \right)^n \le \sum_{n=0}^\infty C(0)\left(\frac{1+\cos \vartheta}{2} \right)^n =\frac{2C(0)}{1-\cos \vartheta},$$ and (\[diag\]) implies $$\lim_{k \to \infty} \sum_{n=0}^\infty b_n^{(\alpha_k, \beta)} \left( \frac{1+\cos \vartheta}{2} \right)^n = \sum_{n=0}^\infty b_n \left( \frac{1+\cos \vartheta}{2} \right)^n.$$ For $\vartheta=0$, (\[thm3.eq1\]) is also valid, since both of its sides are continuous. In the vector case $m \ge 2$, if $\bC (\rho (\x_1, \x_2))$ is an isotropic covariance matrix function on $\mathbb{M}^\infty$, then $\mathbf{a}' \bC (\rho (\x_1, \x_2)) \mathbf{a}$ is an isotropic covariance function on $\mathbb{M}^\infty$ for an arbitrary $\mathbf{a} \in \R^m$. Thus, $$\mathbf{a}' \bC (\vartheta) \mathbf{a} = \sum_{n=0}^\infty b_n (\mathbf{a}) (1+\cos \vartheta)^n, ~~~~ \vartheta \in [ 0, \pi],$$ where $\{ b_n (\mathbf{a}), n \in \N_0 \}$ is a sequence of nonnegative numbers, and $\sum\limits_{n=0}^\infty 2^n b_n (\mathbf{a})$ converges. Similarly, for an arbitrary $\mathbf{b} \in \R^m$, $$\label{thm3.proof.eq2} ( \mathbf{a}+\mathbf{b})' \bC (\vartheta) (\mathbf{a}+\mathbf{b}) = \sum_{n=0}^\infty b_n (\mathbf{a}+\mathbf{b}) (1+\cos \vartheta)^n, ~~~~ \vartheta \in [ 0, \pi],$$ and $$\label{thm3.proof.eq3} ( \mathbf{a}-\mathbf{b})' \bC (\vartheta) (\mathbf{a}-\mathbf{b}) = \sum_{n=0}^\infty b_n (\mathbf{a}-\mathbf{b}) (1+\cos \vartheta)^n, ~~~~ \vartheta \in [ 0, \pi].$$ Taking the difference between (\[thm3.proof.eq2\]) and (\[thm3.proof.eq3\]) yields $$\label{thm3.proof.eq4} \mathbf{a}' \bC (\vartheta) \mathbf{b} = \sum_{n=0}^\infty \frac{b_n (\mathbf{a}+\mathbf{b})- b_n (\mathbf{a}-\mathbf{b})}{4} (1+\cos \vartheta)^n, ~~~~ \vartheta \in [ 0, \pi],$$ noticing that $\bC(\vartheta)$ is symmetric, so that the difference of the two quadratic forms equals $4 \mathbf{a}' \bC (\vartheta) \mathbf{b}$.
The form (\[thm3.eq1\]) of $\bC(\vartheta)$ and the convergence of $\sum\limits_{n=0}^\infty 2^n \mathbf{B}_n$ are obtained from (\[thm3.proof.eq4\]) by taking the $i$th entry of $\mathbf{a}$ and the $j$th entry of $\mathbf{b}$ equal to 1 and the rest being 0, for $i, j \in \{ 1, \ldots, m \}$. Multiplying (\[thm3.eq1\]) on the left by $\mathbf{a}'$ and on the right by an arbitrary $\mathbf{a} \in \R^m$ yields $$\mathbf{a}' \bC (\vartheta) \mathbf{a} = \sum_{n=0}^\infty \mathbf{a}' \mathbf{B}_n \mathbf{a} (1+\cos \vartheta )^n, ~~~~ \vartheta \in [ 0, \pi],$$ where the left-hand side is an isotropic covariance function on $\mathbb{M}^\infty$, so that the coefficients on the right-hand side, $ \mathbf{a}' \mathbf{B}_n \mathbf{a}$, have to be nonnegative; in other words, $\{ \mathbf{B}_n, n \in \N_0 \}$ must be a sequence of positive definite matrices. Proof of Lemma 2 {#lemma2.proof} ---------------- For $\alpha>\beta>-{\frac{1}{2}}$, $\frac{ P_n^{(\alpha, \beta)} (\cos \vartheta )}{ P_n^{(\alpha, \beta)} (1)}$ admits an integral representation (see formula (18.10.3) of [@Olver2010]) $$\label{intrep} \frac{ P_n^{(\alpha, \beta)} (\cos \vartheta )}{ P_n^{(\alpha, \beta)} (1)} = \int_0^1\int_0^\pi \left( \frac{1+\cos \vartheta}{2}-r^2 \frac{1-\cos \vartheta}{2}+ \imath r \sin \vartheta \cos \phi \right)^n h^{(\alpha,\beta)} (r,\phi)d\phi dr,$$ where $\imath$ is the imaginary unit, and $$h^{(\alpha,\beta)}(r,\phi)= \frac{(1-r^2)^{\alpha-\beta-1}r^{2\beta+1}\sin^{2\beta}\phi}{ \int_0^1\int_0^\pi (1-r^2)^{\alpha-\beta-1}r^{2\beta+1}\sin^{2\beta}\phi d \phi d r}, ~~ 0 \le r \le 1, ~ 0 \le \phi \le \pi,$$ is a probability density on $[0, 1] \times [0, \pi]$. Notice that $$\begin{aligned} & & \left|\left(\frac{1+\cos \vartheta}{2}-r^2 \frac{1-\cos \vartheta}{2}+ \imath r \sin \vartheta \cos\phi \right)^n -\left(\frac{1+\cos \vartheta}{2} \right)^n \right| \\ & \le & \left(\frac{1+\cos \vartheta}{2}+r^2 \frac{1-\cos \vartheta}{2} \right)^n+\left(\frac{1+\cos \vartheta}{2} \right)^n.
\end{aligned}$$ For a given $0< \vartheta \le \pi$ and $\epsilon>0$, there exists $N(\epsilon, \vartheta)$ such that for any $n>N(\epsilon, \vartheta)$ and $r<{\frac{1}{2}}$, $$\label{halfeps} \left|\left(\frac{1+\cos \vartheta}{2}-r^2 \frac{1-\cos \vartheta}{2}+ \imath r \sin \vartheta \cos\phi \right)^n -\left(\frac{1+\cos \vartheta}{2} \right)^n \right| < \frac{\epsilon}{2}.$$ On the other hand, there exists $0<\delta(\epsilon, \vartheta)<{\frac{1}{2}}$ such that (\[halfeps\]) holds for any $0\le n \le N(\epsilon, \vartheta)$ and $0 \le r<\delta(\epsilon, \vartheta)$. Therefore, (\[halfeps\]) holds for any $0\le r<\delta(\epsilon, \vartheta)$ and $n \in \N_0$. For $\alpha>\beta>-{\frac{1}{2}}$, it follows from (\[intrep\]) that $$\begin{aligned} & & \left| \frac{ P_n^{(\alpha, \beta)} (\cos \vartheta )}{ P_n^{(\alpha, \beta)} (1)} - \left( \frac{1+ \cos \vartheta }{2} \right)^n \right| \\ & = & \left| \int_0^1\int_0^\pi \left( \left( \frac{1+\cos \vartheta}{2}-r^2 \frac{1-\cos \vartheta}{2}+ \imath r \sin \vartheta \cos \phi \right)^n - \left( \frac{1+ \cos \vartheta }{2} \right)^n \right) h^{(\alpha,\beta)} (r,\phi)d\phi dr \right| \\ & \le & \int_0^1\int_0^\pi \left| \left( \frac{1+\cos \vartheta}{2}-r^2 \frac{1-\cos \vartheta}{2}+ \imath r \sin \vartheta \cos \phi \right)^n - \left( \frac{1+ \cos \vartheta }{2} \right)^n \right| h^{(\alpha,\beta)} (r,\phi)d\phi dr \\ & = & \int_0^{\delta(\epsilon, \vartheta)} \int_0^\pi \left| \left( \frac{1+\cos \vartheta}{2}-r^2 \frac{1-\cos \vartheta}{2}+ \imath r \sin \vartheta \cos \phi \right)^n - \left( \frac{1+ \cos \vartheta }{2} \right)^n \right| h^{(\alpha,\beta)} (r,\phi)d\phi dr \\ & & + \int_{\delta(\epsilon, \vartheta)}^1\int_0^\pi \left| \left( \frac{1+\cos \vartheta}{2}-r^2 \frac{1-\cos \vartheta}{2}+ \imath r \sin \vartheta \cos \phi \right)^n - \left( \frac{1+ \cos \vartheta }{2} \right)^n \right| h^{(\alpha,\beta)} (r,\phi)d\phi dr \\ & \le & \frac{\epsilon}{2}+\frac{4\Gamma(\alpha+1)(1-\delta(\epsilon,
\vartheta)^2)^{\alpha-\beta-1}}{\Gamma(\alpha-\beta)\Gamma(\beta+1)} \\ & < & \epsilon, \end{aligned}$$ where the last inequality holds since it is possible to find $A(\epsilon, \vartheta, \beta)$ such that, for $\alpha > A(\epsilon, \vartheta, \beta)$, $$\frac{4\Gamma(\alpha+1)(1-\delta(\epsilon, \vartheta)^2)^{\alpha-\beta-1}}{\Gamma(\alpha-\beta)\Gamma(\beta+1)} < \frac{\epsilon}{2}.$$ Noticing that $A(\epsilon, \vartheta, \beta)$ is finite when $\beta \to -\frac{1}{2}$, inequality (\[Jacobi.lim.epsion\]) also holds for $\alpha > A(\epsilon, \vartheta, \beta)$ and $\beta = -\frac{1}{2}$. Proof of Theorem \[thm4\] ------------------------- In the case $m=1$, Theorem \[thm4\] follows directly from Theorem \[thm5\]. For $m \ge 2$, define $g(x) = \mathbf{a}' \bC( x) \mathbf{a}, x \ge 0$, for an arbitrary $\mathbf{a} \in \R^m$. Since $\bC (\| \x_1-\x_2 \|)$ is an isotropic covariance matrix function in $\R^d$, $g(x)$ satisfies inequality (\[thm5.ineq\]) by Theorems 3.1 and 3.2 of [@WangDuMa2014], so that inequality (\[thm5.ineq2\]) holds for each $n \in \N_0$, [*i.e.,*]{} Theorem \[thm1\] (iii) is satisfied. Consequently, $\bC (\rho (\x_1, \x_2))$ is an isotropic covariance matrix function on $\S^d$ or $\mathbb{P}^d (\R)$.
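Although not part of the original argument, the Jacobi limit underlying Lemma 2, $\lim_{\alpha \to \infty} P_n^{(\alpha, \beta)} (\cos \vartheta) / P_n^{(\alpha, \beta)} (1) = \left( \frac{1+\cos \vartheta}{2} \right)^n$ for fixed $\beta$, is easy to check numerically. The sketch below (plain Python; the helper name is ours) evaluates the ratio from the explicit binomial-sum representation of the Jacobi polynomials, working with `lgamma` so that large $\alpha$ does not overflow:

```python
import math

def jacobi_ratio(n, a, b, x):
    """P_n^{(a,b)}(x) / P_n^{(a,b)}(1), from the explicit sum
    P_n^{(a,b)}(x) = sum_s C(n+a, n-s) C(n+b, s) ((x-1)/2)^s ((x+1)/2)^(n-s),
    with P_n^{(a,b)}(1) = C(n+a, n).  Assumes -1 < x < 1; lgamma keeps the
    generalized binomial coefficients stable for large a."""
    log_p1 = math.lgamma(n + a + 1) - math.lgamma(a + 1) - math.lgamma(n + 1)
    u, v = (1 - x) / 2, (1 + x) / 2    # (x-1)/2 = -u, so term s carries sign (-1)^s
    total = 0.0
    for s in range(n + 1):
        log_c = (math.lgamma(n + a + 1) - math.lgamma(a + s + 1) - math.lgamma(n - s + 1)
                 + math.lgamma(n + b + 1) - math.lgamma(n + b - s + 1) - math.lgamma(s + 1))
        total += (-1) ** s * math.exp(log_c + s * math.log(u) + (n - s) * math.log(v) - log_p1)
    return total

n, beta, theta = 3, 0.0, 1.0
target = ((1 + math.cos(theta)) / 2) ** n
errors = [abs(jacobi_ratio(n, alpha, beta, math.cos(theta)) - target)
          for alpha in (10.0, 100.0, 1000.0)]
print(errors)   # decreasing in alpha
```

The errors shrink roughly like $1/\alpha$, consistent with the concentration of the density $h^{(\alpha,\beta)}$ near $r=0$ used in the proof.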
Proof of Theorem \[thm5\] ------------------------- \(i) For $n \in \N_0$, write $$g_n^{(\alpha, \beta)} = \frac{n!\sqrt{\pi}}{\Gamma(n+\alpha+1)} \int_0^\pi g ( \vartheta )P_n^{(\alpha,\beta)}(\cos \vartheta) \sin^{2\alpha+1} \left( \frac{\vartheta}{2} \right) \cos^{2 \beta+1} \left( \frac{\vartheta}{2} \right) d \vartheta$$ and $$g_{ (\alpha) } (\omega) = \frac{\sqrt{\pi}}{2^{\alpha+1}}\int_0^\pi g (x)\frac{J_\alpha( \omega x)}{\omega^\alpha}x^{\alpha+1} dx, ~~~~ \omega \ge 0.$$ Then (\[H.n.idenity\]) reads $$\label{thm5.proof1} (2n+\alpha+\beta+2) g_n^{(\alpha+1, \beta)} = g_n^{(\alpha, \beta)} - g_{n+1}^{(\alpha, \beta)}, ~~~~ n \in \N_0,$$ and it follows from the identity ${\frac{\textrm{d}{}}{\textrm{d}{x}}}\left(\frac{J_\alpha(x)}{x^\alpha}\right) =-\frac{J_{\alpha+1}(x)}{x^\alpha}$ that $$\label{Aalpha} g_{ (\alpha+1)}(\omega)=-{\frac{1}{2 \omega}}{\frac{\textrm{d}{g_{(\alpha)} (\omega)}}{\textrm{d}{\omega}}}.$$ What now remains to prove is the following equivalent form of identity (\[thm5.eq\]), $$\label{thm5.proof2} g_n^{(\alpha, \beta)} = g_{ (\alpha) } (\xi_n), ~~~~~~ n \in \N_0.$$ In the particular case where $\alpha=\beta = -\frac{1}{2}$, (\[thm5.proof2\]) holds with $\xi_n =n$, since $P_n^{ \left( -\frac{1}{2}, -\frac{1}{2} \right) }(\cos \vartheta) $ $= \frac{(2n)!}{2^{2n} (n!)^2} \cos (n \vartheta)$, $J_{-\frac{1}{2}} (x) = \sqrt{ \frac{ 2}{\pi x} } \cos x$, and $$g_n^{ \left( -\frac{1}{2}, -\frac{1}{2} \right)} = \int_0^\pi g(\vartheta) \cos (n \vartheta) d \vartheta = g_{ \left( -{\frac{1}{2}}\right) } (n), ~~~~~~ n \in \N_0.$$ Next we verify (\[thm5.proof2\]) for $\alpha>-\frac{1}{2}$ and $\beta=-\frac{1}{2}$, where $\alpha+\frac{1}{2}$ is a positive integer. Define $h(\omega) = g_{\left( -\frac{1}{2} \right)} (\sqrt{\omega})$, $\omega \ge 0$.
Then $$\label{thm5.proof3} g_{ (\alpha)}(\omega) = (-1)^{\alpha+\frac{1}{2}} \frac{d^{ \alpha+{\frac{1}{2}}}}{ d \omega^{\alpha+{\frac{1}{2}}} } h (\omega^2).$$ By induction on $\alpha+\frac{1}{2}$ or simply on $\alpha$, we can show that $$\label{thm5.proof4} g_n^{ \left( \alpha, -{\frac{1}{2}}\right)} = (-1)^{\alpha+{\frac{1}{2}}} \left( \alpha+{\frac{1}{2}}\right)! \, D \left[ n^2, (n+1)^2, \ldots, \left( n +\alpha+{\frac{1}{2}}\right)^2 \right] h,$$ where $$D[y_1, \ldots, y_k] h = \sum_{j=1}^k \frac{h (y_j)}{\prod\limits_{i=1, i \neq j}^k (y_j -y_i)}$$ is the $(k-1)$th divided difference of $h (x)$. Indeed, this is true for $\alpha= -{\frac{1}{2}}$. Assuming that (\[thm5.proof4\]) is valid for an $\alpha$, then, by identity (\[thm5.proof1\]), $$\begin{aligned} g_n^{ \left( \alpha+1, -{\frac{1}{2}}\right)} & = & \frac{ g_n^{ \left( \alpha, -{\frac{1}{2}}\right)}- g_{n+1}^{ \left( \alpha, -{\frac{1}{2}}\right)} }{ 2n+ \alpha +\frac{3}{2} } \\ & = & \frac{ (-1)^{\alpha+{\frac{1}{2}}} \left( \alpha+{\frac{1}{2}}\right)! }{ 2n+ \alpha +\frac{3}{2} } \left\{ D \left[ n^2, (n+1)^2, \ldots, \left( n +\alpha+{\frac{1}{2}}\right)^2 \right] h \right. \\ & & \left. - D \left[ (n+1)^2, (n+2)^2, \ldots, \left( n +\alpha+\frac{3}{2} \right)^2 \right] h \right\} \\ & = & \frac{ (-1)^{\alpha+\frac{3}{2} } \left( \alpha+\frac{3}{2} \right)! }{ n^2- \left( n+ \alpha +\frac{3}{2} \right)^2 } \left\{ D \left[ n^2, (n+1)^2, \ldots, \left( n +\alpha+{\frac{1}{2}}\right)^2 \right] h \right. \\ & & \left. - D \left[ (n+1)^2, (n+2)^2, \ldots, \left( n +\alpha+\frac{3}{2} \right)^2 \right] h \right\} \\ & = & (-1)^{\alpha+\frac{3}{2} } \left( \alpha+\frac{3}{2} \right)! D \left[ n^2, (n+1)^2, \ldots, \left( n +\alpha+\frac{3}{2} \right)^2 \right] h, \end{aligned}$$ [*i.e.,*]{} (\[thm5.proof4\]) is valid for $\alpha+1$. 
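As an aside, the divided-difference identity driving this induction can be checked mechanically. The sketch below (plain Python; the function name is ours) implements $D[y_1, \ldots, y_k]h$ by the standard recursion and illustrates the two facts used next: the $(k-1)$th divided difference of a degree-$(k-1)$ polynomial is its leading coefficient, and the mean value form $D[y_1, \ldots, y_k]h = h^{(k-1)}(\varsigma)/(k-1)!$ picks out a point $\varsigma$ inside $[y_1, y_k]$:

```python
def divided_difference(ys, h):
    """D[y_1,...,y_k]h, the (k-1)th divided difference of h at the nodes ys,
    via the recursion D[y_1..y_k] = (D[y_2..y_k] - D[y_1..y_{k-1}]) / (y_k - y_1)."""
    if len(ys) == 1:
        return h(ys[0])
    return (divided_difference(ys[1:], h) - divided_difference(ys[:-1], h)) / (ys[-1] - ys[0])

# Leading-coefficient property: the second divided difference of a quadratic
# equals its leading coefficient, regardless of the nodes chosen.
print(divided_difference([0.0, 2.0, 5.0], lambda y: 3 * y * y + 1))   # 3.0

# Mean value form: D[1,2,3]h = h''(s)/2! for some s in [1,3]; with h = y^3 we
# get D = 6 and h''(y)/2 = 3y, so s = 2, inside the interval as claimed.
print(divided_difference([1.0, 2.0, 3.0], lambda y: y ** 3))          # 6.0
```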
Applying the mean value theorem [@Boor2005] to the divided difference on the right-hand side of (\[thm5.proof4\]), $g_n^{ \left( \alpha, -{\frac{1}{2}}\right)}$ can be written as $$g_n^{ \left( \alpha, -{\frac{1}{2}}\right)} = (-1)^{ \alpha+{\frac{1}{2}}} h^{ \left( \alpha+{\frac{1}{2}}\right)} (\varsigma_n),$$ for some $\varsigma_n \in \left[ n^2, \left( n+\alpha+{\frac{1}{2}}\right)^2 \right]$. Comparing it with (\[thm5.proof3\]) yields $ g_n^{ \left( \alpha, -{\frac{1}{2}}\right)} = g_{(\alpha)} (\xi_n), $ where $\xi_n = \sqrt{\varsigma_n} \in \left[ n, n+\alpha+{\frac{1}{2}}\right]$. Lastly, we verify (\[thm5.proof2\]) by induction on $\beta+{\frac{1}{2}}$ or $\beta$. The case of $\beta =-{\frac{1}{2}}$ has been proved. Suppose that (\[thm5.proof2\]) is valid for some $\beta$. By identity (\[H.n.idenity2\]), we obtain $$\begin{aligned} g_n^{(\alpha, \beta+1)} & = & \frac{n+\beta+1}{2n+\alpha+\beta+2} g_n^{ (\alpha,\beta)} +\frac{n+\alpha+1}{2n+\alpha+\beta+2} g_{n+1}^{ (\alpha,\beta)} \\ &=& \frac{n+\beta+1}{2n+\alpha+\beta+2} g_{ (\alpha)} (\xi_{n_1}) +\frac{n+\alpha+1}{2n+\alpha+\beta+2} g_{(\alpha)} (\xi_{n_2}), \end{aligned}$$ where $\xi_{n_1} \in [n,n+\alpha+\beta+1]$ and $\xi_{n_2} \in [n,n+\alpha+\beta+2]$. In other words, $g_n^{(\alpha,\beta+1)}$ is an interpolation between $ g_{(\alpha)} (\xi_{n_1})$ and $ g_{(\alpha)} (\xi_{n_2})$. Since $g_{(\alpha)} (\omega)$ is a continuous function for integrable $g(x)$, we have $$g_n^{(\alpha,\beta+1)} = g_{(\alpha)} (\xi_n),$$ for some $\xi_n$ between $\xi_{n_1}$ and $\xi_{n_2}$, which resides in the interval $[n, n+\alpha+\beta+2]$. \(ii) Under assumption (\[thm5.ineq\]), it follows from (\[thm5.eq\]) that $g_n^{(\alpha,\beta)} \ge 0$. It remains to prove that $\sum\limits_{n=0}^\infty g_n^{(\alpha,\beta)}P_n^{ (\alpha,\beta)}(1)$ is bounded. 
For the continuous function $g(x)$, we can define the formal Jacobi series, $$\hat{g}^{(\alpha,\beta)}(x)=\sum_{n=0}^\infty g_n^{(\alpha,\beta)}[g] P_n^{(\alpha,\beta)}(\cos x),$$ where $$\begin{aligned} g_n^{(\alpha,\beta)}[g] & = & \frac{n! (2n+\alpha+\beta+1)\Gamma(n+\alpha+\beta+1)}{\Gamma(n+\alpha+1)\Gamma(n+\beta+1)} \\ & & \times \int_0^\pi g(x)P_n^{(\alpha,\beta)}(\cos x)\sin^{2\alpha+1} \left( \frac{x}{2} \right) \cos^{2\beta+1} \left( \frac{x}{2} \right) dx,\end{aligned}$$ and $[g]$ indicates the dependency of $g_n^{(\alpha,\beta)}$ on $g$. For $n\in \N$, define $$S_n^{(\alpha,\beta)}[g](x)=\sum_{k=0}^{n-1} g_k^{(\alpha,\beta)}[g]P_k^{(\alpha,\beta)}(\cos x).$$ If $g(x)\in \mathbb{P}_n(\cos x)$, the space of polynomials with degree less than $n$, then $S_n^{(\alpha,\beta)}[g](x)$ $=g(x)$. Denote by $V^{(\alpha,\beta)}$ the set of functions $h(x)$ for which $h_n^{(\alpha,\beta)}[h] \ge 0$ for all $n \in \N_0$. As is shown in Part (i), $ g \in V^{ \left( \alpha,-\frac{1}{2} \right)}$. What we are going to show is that $$S_n^{(\alpha,\beta)}[g](0)=\sum_{k=0}^{n-1} g_k^{(\alpha,\beta)}[g] P_k^{(\alpha,\beta)}(1)$$ is bounded for any $\beta+{\frac{1}{2}}\in\N_0$ and $n \in \N$. First we prove the case of $\beta=-{\frac{1}{2}}$. If $\alpha=-{\frac{1}{2}}$, it is obvious since the Jacobi series for $\alpha=\beta=-{\frac{1}{2}}$ is the cosine series. 
For $\alpha \ge {\frac{1}{2}}$, noticing that the Jacobi functions converge to cosine functions as $n\to\infty$, which implies by the Riemann–Lebesgue lemma that $\lim\limits_{n\to\infty} g_n^{(\alpha,\beta)} [g]=0$, we apply (\[thm5.proof1\]) to obtain $V^{(\alpha,\beta)} \subseteq V^{(\alpha-1,\beta)}.$ Setting $$\phi(x)=S_n^{(\alpha,\beta)}[g](x) \in \mathbb{P}_n(\cos x),$$ we have $ g-\phi \in V^{(\alpha,\beta)} \subseteq V^{(\alpha-1,\beta)}$, and $$\begin{aligned} S_n^{(\alpha-1,\beta)}[g](0) & = & S_n^{(\alpha-1,\beta)}[\phi](0)+S_n^{(\alpha-1,\beta)}[g-\phi](0) \\ & \ge & S_n^{(\alpha-1,\beta)}[\phi](0) = \phi(0) = S_n^{(\alpha,\beta)}[\phi](0) \\ & = & S_n^{(\alpha,\beta)}[g](0). \end{aligned}$$ As a result, $$S_n^{ \left( \alpha,-{\frac{1}{2}}\right) }[g](0) \le S_n^{ \left(-{\frac{1}{2}},-{\frac{1}{2}}\right)}[g](0).$$ Thus, the convergence of the cosine series for $g$ at $x=0$, or equivalently, the uniform boundedness of $S_n^{ \left( -{\frac{1}{2}},-{\frac{1}{2}}\right)}[g](0)$, implies that $S_n^{ \left( \alpha,-{\frac{1}{2}}\right)}[g](0)$ is uniformly bounded for all $n \in \N_0$. To see that $S_n^{(\alpha,\beta)}[g](0)$ is uniformly bounded for all $\beta \ge -{\frac{1}{2}}$ and $n \in \N_0$, notice that (\[thm5.proof1\]) implies that for the function $\phi(x)$ defined above, $g_{n-1}^{(\alpha,\beta+1)}[\phi]\ge0$, and $$g_k^{(\alpha,\beta+1)}[g-\phi]=0$$ for all $k<n-1$. Therefore $$S_n^{(\alpha,\beta)}[g](0) = S_n^{(\alpha,\beta)}[\phi](0)=S_n^{(\alpha,\beta+1)}[\phi](0) \ge S_{n-1}^{(\alpha,\beta+1)}[\phi](0)=S_{n-1}^{(\alpha,\beta+1)}[g](0).$$ The uniform boundedness of $S_n^{ \left( \alpha,-{\frac{1}{2}}\right)}[g](0)$ results in the uniform boundedness of $S_n^{(\alpha,\beta)}[g](0)$ over all $\beta\ge -{\frac{1}{2}}$ and $n \in \N_0$. [10]{} Askey, R.: Jacobi polynomials. I. New proofs of Koornwinder’s Laplace type integral representation and Bateman’s bilinear sum. SIAM J. Math. Anal. [**5**]{}, 119–124 (1974) Askey, R., Bingham, N.
H.: Gaussian processes on compact symmetric spaces. Z. Wahrscheinlichkeitstheorie verw. Gebiete [**37**]{}, 127–143 (1976) Azevedo, D., Barbosa, V. S.: Covering numbers of isotropic reproducing kernels on compact two-point homogeneous spaces. Math. Nachr. [**290**]{}, 2444–2458 (2017) Bhattacharya, A., Bhattacharya, R.: Nonparametric Inference on Manifolds. Cambridge Univ. Press, Cambridge (2012) Bingham, N. H.: Positive definite functions on spheres. Proc. Cambridge Phil. Soc. [**73**]{}, 145–156 (1973) Bochner, S.: Hilbert distance and positive definite functions. Ann. Math. [**42**]{}, 647–656 (1941) de Boor, C.: Divided differences. Surv. Approx. Theory [**1**]{}, 46–69 (2005) Brown, G., Dai, F.: Approximation of smooth functions on compact two-point homogeneous spaces. J. Funct. Anal. [**220**]{}, 401–423 (2005) Cheng, D., Xiao, Y.: Excursion probability of Gaussian random fields on sphere. Bernoulli [**22**]{}, 1113–1130 (2016) Cohen, S., Lifshits, M. A.: Stationary Gaussian random fields on hyperbolic spaces and on Euclidean spheres. ESAIM Probab. Stat. [**16**]{}, 165–221 (2012) D’Ovidio, M.: Coordinates changed random fields on the sphere. J. Stat. Phys. [**154**]{}, 1153–1176 (2014) Gangolli, R.: Positive definite kernels on homogeneous spaces and certain stochastic processes related to L[' e]{}vy’s Brownian motion of several parameters. Ann. Inst. H. Poincar[' e]{} B [**3**]{}, 121–226 (1967) Gradshteyn, I. S., Ryzhik, I. M.: Tables of Integrals, Series, and Products, 7th edition. Academic Press, Amsterdam (2007) Helgason, S.: Integral Geometry and Radon Transforms. Springer, New York (2011) Leonenko, N., Sakhno, L.: On spectral representation of tensor random fields on the sphere. Stoch. Anal. Appl. [**31**]{}, 167–182 (2012) Leonenko, N., Shieh, N.: R[' e]{}nyi function for multifractal random fields. Fractals [**21**]{}, 1350009, 13 pp (2013) Ma, C.: Vector random fields with second-order moments or second-order increments. Stoch. Anal. Appl.
[**29**]{}, 197–215 (2011) Ma, C.: Isotropic covariance matrix functions on all spheres. Math. Geosci. [**47**]{}, 699–717 (2015) Ma, C.: Stochastic representations of isotropic vector random fields on spheres. Stoch. Anal. Appl. [**34**]{}, 389–403 (2016) Ma, C.: Time varying isotropic vector random fields on spheres. J. Theor. Prob. [**30**]{}, 1763–1785 (2017) Ma, C., Malyarenko, A.: Time-varying isotropic vector random fields on compact two-point homogeneous spaces. Accepted by J. Theor. Prob. Malyarenko, A.: Invariant Random Fields on Spaces with a Group Action. Springer, New York (2013) Malyarenko, A., Olenko, A.: Multidimensional covariant random fields on commutative locally compact groups. Ukrainian Math. J. [**44**]{}, 1384–1389 (1992) Nie, Z., Ma, C.: Isotropic positive definite functions on spheres generated from those in Euclidean spaces. Proc. Amer. Math. Soc. [**147**]{}, 3047–3056 (2019) Olver, F. W. J., Lozier, D. W., Boisvert, R. F., Clark, C. W.: NIST Handbook of Mathematical Functions. Cambridge University Press, Cambridge (2010) Patrangenaru, V., Ellingson, L.: Nonparametric Statistics on Manifolds and Their Applications to Object Data Analysis. Taylor & Francis Group, LLC, New York (2016) Schoenberg, I.: Positive definite functions on spheres. Duke Math. J. [**9**]{}, 96–108 (1942) Szegö, G.: Orthogonal Polynomials, 4th edition. Amer. Math. Soc. Colloq. Publ., vol. 23. Amer. Math. Soc., Providence (1975) Wang, H.-C.: Two-point homogeneous spaces. Ann. Math. [**55**]{}, 177–191 (1952) Wang, R., Du, J., Ma, C.: Covariance matrix functions of isotropic vector random fields. Comm. Statist. Theory Methods [**43**]{}, 2081–2093 (2014) Xu, Y.: Positive definite functions on the unit sphere and integrals of Jacobi polynomials. Proc. Amer. Math. Soc. [**146**]{}, 2039–2048 (2018) Yadrenko, A. M.: Spectral Theory of Random Fields. Optimization Software, New York (1983) Yaglom, A. M.: Second-order homogeneous random fields. Proc. 4th Berkeley Symp. Math.
Stat. Prob. [**2**]{}, 593–622 (1961) Yaglom, A. M.: Correlation Theory of Stationary and Related Random Functions. vol. I. Springer, New York (1987)
--- abstract: 'We study an analogue of the classical moment problem in the framework where moments are indexed by graphs instead of natural numbers. We study limit objects of graph sequences where edges are labeled by elements of a topological space. Among other things we obtain strengthenings and generalizations of the main results of previous papers characterizing reflection positive graph parameters, graph homomorphism numbers, and limits of simple graph sequences. We study a new class of reflection positive partition functions which generalize the node-coloring models (homomorphisms into weighted graphs).' author: - | [László Lovász]{}[^1], Eötvös Loránd University, Budapest\ and\ [Balázs Szegedy]{}, University of Toronto, Toronto date: Oct 2010 title: 'The graph theoretic moment problem[^2]' --- Introduction ============ A natural way to obtain information about very large graphs is sampling. In the case of dense simple graphs, a natural way to sample is to pick $k$ random nodes and look at the subgraph induced by them. A sequence $G_1,G_2,\dots$ of simple graphs with $|V(G_n)|\to\infty$ is called [*convergent*]{} if the distribution of this random induced subgraph is convergent for every $k$. To every convergent sequence of simple graphs one can assign a limit object in the form of a 2-variable real function [@LSz1]. Instead of the induced subgraph samples, one can consider homomorphism densities of various “small” graphs. While for simple graphs they trivially carry the same information as the samples described above (connected by a simple inclusion-exclusion), their algebraic properties are quite different and often more useful. These densities are very good 2-variable analogues of moments of 1-variable functions (see Section \[MOMSIMPLE\]). It turns out that in a more general setting, moment sequences can be indexed by multigraphs rather than simple graphs. Let $X$ be a random variable.
A moment of $X$ (in a slightly generalized sense) is the expected value of $p(X)$ where $p$ is a polynomial in $\R[x]$. The classical moment problem can be phrased as follows: which functions $\alpha:~\R[x]\to\R$ can be represented by a real-valued random variable $X$ so that $\alpha(p)=E(p(X))$ for all $p\in\R[x]$? The necessary and sufficient condition is that $\alpha$ is linear, normalized ($\alpha(1)=1$) and positive definite ($\alpha(p^2)\geq 0$ for every polynomial $p$). Consider a symmetric measurable 2-variable function $W:~[0,1]^2\to [0,1]$. Let $X_1,X_2,X_3,...$ be independent random elements from $[0,1]$. The random variables $Z_{i,j}=W(X_i,X_j)~~(i\neq j)$ all have the same distribution but they are not all independent (for example, $Z_{1,2}$ and $Z_{2,3}$ are correlated in general). Note that by the symmetry of $W$, we have $Z_{i,j}=Z_{j,i}$ for every $i$ and $j$. It is natural to define the moments of $W$ as expected values of multivariate polynomials in the variables $Z_{i,j}$. As in the one-variable case, $W$ induces a linear map from the polynomial ring $\R[\{z_{i,j}|1\leq i<j\}]$ to the real numbers by $$\label{TDEF-P} t(p,W)=E(p(\{Z_{i,j}|1\leq i<j\})),$$ and this moment function is determined by its values on monomials. Every monomial in this ring corresponds to a multigraph, and if two such monomials correspond to isomorphic graphs, then the moment function has the same value on them. So, just like in the one-variable case, $W$ has a countable number of “moments”, but instead of forming a single sequence, they are indexed by (finite) multigraphs. Moments indexed by simple graphs {#MOMSIMPLE} -------------------------------- Somewhat surprisingly, if we want to define moments of a $2$-variable function $W$, it is often enough to restrict ourselves to simple graphs (in other words, to multilinear polynomials $p\in \ize$). In this section we recall various results that can be viewed as supporting this claim.
(We’ll return to why moments indexed by multigraphs are needed, and how to treat them.) Recall that a [*graph parameter*]{} is a map from the set of finite graphs to the real numbers, invariant under isomorphism. A [*simple graph parameter*]{} is only defined on simple graphs. Let $\WW$ be the space of bounded symmetric measurable functions $W:~[0,1]^2\to\R$, and let $F$ be a simple graph with $k$ nodes. We define $$\label{EQ:T-DEF} t(F,W)=\int_{[0,1]^{V(F)}} \prod_{ij\in E(F)} W(x_i,x_j)\,dx.$$ We call $t(F,W)$ the [*$F$-moment*]{} of the function $W$. While this definition is meaningful for every (multi)-graph $F$, we’ll restrict our attention for the time being to simple graphs. There is an obvious relation between these moments: if $F_1$ and $F_2$ are two graphs and $F_1F_2$ denotes their disjoint union, then $$t(F_1F_2,W)=t(F_1,W)t(F_2,W).$$ We call this relation the [*multiplicativity*]{} of the moments. Using this relation, we can restrict our attention to moments defined by connected graphs. Let us compare some basic properties of these moments with the analogous properties of moments of one-variable functions. \[P1\] [*Moment sequences are interesting.*]{} For example, the Fibonacci sequence is a moment sequence. Moment parameters are also interesting. The number of $q$-colorings of a graph $F$, divided by $q^{|V(F)|}$, is a moment parameter; more generally, the number of homomorphisms $\hom(F,G)$ of a graph $F$ into a fixed (for simplicity, simple) graph $G$ (appropriately normalized) is a moment parameter. To be precise, if $$t(F,G)=\frac{\hom(F,G)}{|V(G)|^{|V(F)|}},$$ then $t(F,G)=t(F,W_G)$ for an appropriate function $W_G$. The number of nowhere-zero $k$-flows is an important graph parameter representable this way. To show a moment sequence of a non-stepfunction with combinatorial significance, let us quote the following example from [@LSz1]: the quantity $$2^{|E(F)|} t(F, \cos(2\pi(x-y)))$$ is the number of eulerian orientations of the graph $F$.
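The normalization $t(F,G)=\hom(F,G)/|V(G)|^{|V(F)|}$ is easy to compute by brute force for small graphs. The sketch below (plain Python; the graph encodings are ours) counts homomorphisms by enumerating all maps $V(F)\to V(G)$:

```python
from itertools import product

def hom(F_edges, F_nodes, G_adj):
    """Number of homomorphisms of F into G: maps V(F) -> V(G) sending edges to edges."""
    n = len(G_adj)
    return sum(all(G_adj[f[u]][f[v]] for u, v in F_edges)
               for f in product(range(n), repeat=F_nodes))

def t(F_edges, F_nodes, G_adj):
    """Homomorphism density t(F, G) = hom(F, G) / |V(G)|^|V(F)|."""
    return hom(F_edges, F_nodes, G_adj) / len(G_adj) ** F_nodes

triangle = [(0, 1), (1, 2), (0, 2)]
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]                         # adjacency matrix of K_3
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]  # 4-cycle
print(t(triangle, 3, K3))   # hom(K_3, K_3) = 6, so 6/27
print(t(triangle, 3, C4))   # C_4 is triangle-free: 0.0
```

For large $|V(F)|$ this enumeration is of course exponential; it only serves to make the normalization concrete.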
\[P2\] [*Any finite number of moments are independent: no finite number of moments determine any other.*]{} This is also true in the 2-variable case: [*For any finite set $F_1,\dots,F_k$ of connected graphs, the set of vectors $(t(F_1,W),\dots,t(F_k,W))$ has a nonempty interior in $\R^k$*]{} (Erdős, Lovász and Spencer [@ELS]). This shows that each member of this countable but “large” set of moments carries information that is not implied by a finite number of others. So in a sense this large set of moments is indeed needed (instead of, say, a two-parameter family). \[P3\] [*The moments determine the function up to a measure preserving transformation of the variable.*]{} (For one-variable functions, this is equivalent to saying that they determine the distribution of the function values, but this would be too weak for two-variable functions.) To be more precise, it is well known that if $f,g:~[0,1]\to\R$ are two (for simplicity, bounded) measurable functions such that $\int_0^1 f^k=\int_0^1 g^k$ for all $k$, then there is a third bounded measurable function $h:~[0,1]\to\R$ and measure-preserving maps $\varphi,\psi:~[0,1]\to[0,1]$ such that $f(x)=h(\varphi(x))$ and $g(x)=h(\psi(x))$ for almost all $x$. This fact generalizes to two-variable functions (Borgs, Chayes and Lovász [@BCL]): [*If $U,W\in \WW$ are such that for every simple graph $F$, $t(F,U)=t(F,W)$, then there exists a function $V\in\WW$ and two measure preserving maps $\varphi,\psi:~[0,1]\to[0,1]$ such that $U(x,y)=V(\varphi(x),\varphi(y))$ and $W(x,y)=V(\psi(x),\psi(y))$ almost everywhere.*]{} \[P4\] [*Moment sequences can be characterized by inclusion-exclusion.*]{} Hausdorff [@Haus] proved that a sequence $(a_0,a_1,\dots)$ is the moment sequence of a function $f$ with $0\le f\le 1$ if and only if $a_0=1$, and the following inequality holds for all $n, k\ge 0$: $$\sum_{j=0}^k (-1)^{j} {k\choose j} a_{n+j}\ge 0.$$ (cf. Diaconis and Freedman [@DF]).
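As a concrete one-variable instance (our illustration): the function $f(x)=x$ has moments $a_n=\int_0^1 x^n\,dx=1/(n+1)$, and the iterated differences $\sum_{j=0}^k(-1)^j\binom{k}{j}a_{n+j}$, written in the completely monotone form, equal the Beta integrals $\int_0^1 x^n(1-x)^k\,dx = n!\,k!/(n+k+1)! > 0$. A short exact-arithmetic check in plain Python:

```python
from fractions import Fraction
from math import comb, factorial

def hausdorff_diff(a, n, k):
    """The iterated difference sum_{j=0}^k (-1)^j C(k,j) a(n+j); nonnegativity
    for all n, k characterizes moment sequences of functions f with 0 <= f <= 1."""
    return sum((-1) ** j * comb(k, j) * a(n + j) for j in range(k + 1))

a = lambda n: Fraction(1, n + 1)          # moments of f(x) = x on [0, 1]
for n in range(6):
    for k in range(6):
        d = hausdorff_diff(a, n, k)
        # matches the Beta integral: int_0^1 x^n (1-x)^k dx = n! k! / (n+k+1)!
        assert d == Fraction(factorial(n) * factorial(k), factorial(n + k + 1))
print("all differences nonnegative, as Hausdorff's condition requires")
```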
The following analogue of this for graph parameters was proved by the authors in [@LSz1]: [*A simple graph parameter $f$ can be represented as $f=t(.,W)$ with some $W\in\WW_0$ if and only if $f(K_1)=1$, $f$ is multiplicative, and the following inequality holds for all simple graphs $F$:*]{} $$\sum_{F'\supseteq F\atop V(F')=V(F)} (-1)^{|E(F')\setminus E(F)|} f(F') \ge 0.$$ \[P5\] [*Moment sequences can be characterized by a semidefiniteness condition.*]{} Hausdorff gave another characterization as well: a sequence $(a_0,a_1,\dots)$ is the moment sequence of a function $f$ with $0\le f\le 1$ if and only if $a_0=1$, and the (infinite) matrix $A$ defined by $A_{ij}=a_{i+j-2}$ $(i,j=1\dots \infty)$ is positive semidefinite. An analogue for graph parameters was proved by the authors in [@LSz1]. We need to define what replaces adding up indices $i$ and $j$. To this end, we define a [*$k$-labeled simple graph*]{} ($k\ge 0$) to be a finite graph in which $k$ nodes are labeled by $1,2,\dots,k$ (it can have any number of unlabeled nodes). The [*simple product*]{} $F_1F_2$ of two $k$-labeled graphs $F_1$ and $F_2$ is defined by taking their disjoint union, and then identifying nodes with the same label; if we get parallel edges, then their multiplicity is suppressed. (For 0-labeled graphs product means disjoint union.) Let $f$ be any simple graph parameter and $k\ge 0$. We define the following (infinite) matrix $M(f,k)$. The rows and columns are indexed by isomorphism types of $k$-labeled simple graphs. The entry in the intersection of the row corresponding to $F_1$ and the column corresponding to $F_2$ is $f(F_1F_2)$.
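Truncations of the one-variable matrix $A$ can be tested in exact arithmetic. For $a_n=1/(n+1)$, the moments of $f(x)=x$, the matrix $A_{ij}=a_{i+j-2}$ is the Hilbert matrix, and by Sylvester's criterion positive definiteness of a truncation follows once all leading principal minors are positive. A sketch in plain Python (function names are ours):

```python
from fractions import Fraction

def det(M):
    """Determinant by fraction-exact Gaussian elimination; no pivoting is needed
    here, since every pivot of a positive definite matrix is positive."""
    M = [row[:] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

N = 6
A = [[Fraction(1, i + j + 1) for j in range(N)] for i in range(N)]  # A_ij = a_{i+j}
minors = [det([row[:k] for row in A[:k]]) for k in range(1, N + 1)]
print(all(m > 0 for m in minors))   # True: Sylvester's criterion is satisfied
```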
With this notation, we can state the following characterization of moment parameters [@LSz1]: [*A simple graph parameter $f$ can be represented as $f=t(.,W)$ with some $W\in\WW_0$ if and only if $f(K_1)=1$, $f$ is multiplicative, and the (infinite) matrix $M(f,k)$ is positive semidefinite for each $k$.*]{} \[P6\] [*A sequence is the moment sequence of a stepfunction if and only if the matrix $A$ defined above is semidefinite and has finite rank.*]{} To state an analogous assertion for two-variable functions, we call a symmetric measurable function $W:~[0,1]^2\to[0,1]$ a [*stepfunction*]{} if there is a finite partition $[0,1]=\cup_{i=1}^r S_i$ into measurable sets such that $W$ is constant on every $S_i\times S_j$. The following was proved for simple graph parameters by Lovász and Schrijver [@LSch] (paralleling an earlier result by Freedman, Lovász and Schrijver [@FLS] for multigraph parameters, see Theorem \[THM:FLS\] below): [*A simple graph parameter is the moment parameter of a stepfunction with $q$ steps if and only if the matrix $M(f,k)$ is semidefinite and has rank at most $q^k$ for every $k\ge 0$.*]{} Considering stepfunctions points to other interesting analogies with the one-variable case. It is not hard to see that [*a one-variable function is a stepfunction if and only if it is determined by a finite set of its moments.*]{} The “only if” part of the analogous statement was proved (in graph-theoretic terms) for two-variable functions by Lovász and Sós [@LS]: [*For every stepfunction $U\in\WW$ there is a finite set $F_1,\dots,F_m$ of simple graphs such that if $t(F_j,U)=t(F_j,W)$ for some $W\in\WW$ for $j=1,\dots,m$, then $t(F,U)=t(F,W)$ for every simple graph $F$.*]{} However, the converse fails to hold [@LSz5].
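For a stepfunction the $F$-moment collapses to a finite sum: if the steps have measures $a_1,\dots,a_r$ and $W$ takes the value $B_{ij}$ on $S_i\times S_j$, then $t(F,W)=\sum_{\phi:V(F)\to\{1,\dots,r\}}\prod_{v}a_{\phi(v)}\prod_{uv\in E(F)}B_{\phi(u)\phi(v)}$, a homomorphism density into a weighted graph on $r$ nodes. A sketch (plain Python; the particular two-step $W$ is our toy example):

```python
from itertools import product

def t_step(F_edges, F_nodes, a, B):
    """t(F, W) for a stepfunction W with step measures a[i] and values B[i][j],
    computed as a weighted homomorphism density into the r-node quotient graph."""
    r = len(a)
    total = 0.0
    for phi in product(range(r), repeat=F_nodes):
        w = 1.0
        for v in phi:                 # node weights: measures of the steps
            w *= a[v]
        for u, v in F_edges:          # edge weights: step values of W
            w *= B[phi[u]][phi[v]]
        total += w
    return total

triangle = [(0, 1), (1, 2), (0, 2)]
a = [0.5, 0.5]                        # two steps of equal measure
B = [[0.2, 0.7], [0.7, 0.4]]          # symmetric step values
print(t_step(triangle, 3, a, B))      # ≈ 0.11925, the exact triple integral
```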
\[P7\] [*Convergence in moments implies convergence.*]{} More exactly, if $X_1,X_2,\dots$ are uniformly bounded random variables such that $\E(X_n^k)$ is convergent for every $k$, then $X_n$ tends to a limit in distribution. Analogously, if $(W_n)$ is a uniformly bounded sequence of functions in $\WW$, then $t(F,W_n)$ is convergent for every simple graph $F$ if and only if there are measure preserving maps $\varphi_n:~[0,1]\to[0,1]$ such that the functions $W_n'(x,y)=W_n(\varphi_n(x),\varphi_n(y))$ are convergent in an appropriate norm (the cut norm $\|.\|_\square$). This fact is closely related to limits of graph sequences. In fact, if $(G_n)$ is a sequence of simple graphs for which $t(F,G_n)$ is convergent for every simple graph $F$, then there is a function $W\in\WW$ such that $t(F,G_n)\to t(F,W)$ for every $F$ [@LSz1]. This result can be extended to the case when $(G_n)$ is a sequence of weighted graphs with uniformly bounded edgeweights [@BCLSV]. Moments indexed by multigraphs {#MOMMULTI} ------------------------------ We have seen that the densities of simple graphs in symmetric measurable functions $W:~[0,1]^2\to[0,1]$ can be considered as an analogue of moments. The formula (\[EQ:T-DEF\]) defining $F$-moments makes sense for all (multi)graphs $F$, and there are many reasons why we don’t want to restrict ourselves to just simple graph moments. For example, we may be interested in the “ordinary” moments of a function $W\in\WW$ (considered as a function in a single variable defined on the probability space $[0,1]^2$, rather than a 2-variable function). These moments can be expressed as $$\int_{[0,1]^2} W(x,y)^n\,dx\,dy = t(K_2^n,W),$$ where $K_2^n$ consists of two nodes connected by $n$ parallel edges. By Property \[P3\], this is determined by the simple graph moments, but it can be seen (using a slight extension of the results mentioned in Property \[P2\]) that no finite number of them determines $t(K_2^n,W)$.
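For a concrete kernel the identity between ordinary moments and $t(K_2^n,W)$ can be checked numerically. The sketch below is an illustration under the assumption $W(x,y)=xy$ (a kernel chosen for its closed-form moments, not one appearing in the text): it approximates $t(K_2^n,W)=\int_{[0,1]^2}W^n$ by a midpoint Riemann sum and compares with the exact value $\int_{[0,1]^2}(xy)^n\,dx\,dy=1/(n+1)^2$.

```python
def t_multiedge(W, n_mult, m=200):
    """Midpoint Riemann sum for t(K_2^n, W) = integral of W(x,y)^n over [0,1]^2."""
    h = 1.0 / m
    total = 0.0
    for i in range(m):
        x = (i + 0.5) * h
        for j in range(m):
            y = (j + 0.5) * h
            total += W(x, y) ** n_mult
    return total * h * h

W = lambda x, y: x * y           # a simple symmetric kernel with values in [0,1]
for n in range(1, 5):
    exact = 1.0 / (n + 1) ** 2   # closed form of the n-th moment of xy
    assert abs(t_multiedge(W, n) - exact) < 1e-3
```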
Another reason for considering multigraphs is that we want to think of a polynomial $p$ in variables $z_{i,j}$ ($1\le i<j\le n$) as a formal linear combination of multigraphs. Then every multigraph parameter $f$ can be extended linearly to these polynomials. ### Limits of weighted graphs Suppose that the sequence $t(F,G_n)$ is convergent for every multigraph $F$ (rather than for every simple graph $F$). Does this imply that there exists a limit function $W$ that encodes the limiting values? To illustrate the difficulty, let $G_n$ be a random graph on $n$ nodes, with edge probability $1/2$. It is easy to see that with probability $1$, $$t(F,G_n)\to 2^{-|E(F)|}=t(F,1/2) \qquad (n\to\infty)$$ for every simple graph $F$ (here $1/2$ denotes the identically $1/2$ function). It can be shown that this is the only limit function (e.g. by Property 3 above). Suppose that $F$ has multiple edges, and let $F'$ denote the simple graph obtained from $F$ by suppressing the edge multiplicities. Then $t(F,G_n)=t(F',G_n)$, so $t(F,G_n)\to2^{-|E(F')|}$; but $t(F,1/2)=2^{-|E(F)|}$, so while the sequence $(t(F,G_n))$ is convergent for every multigraph $F$, its limit is not $t(F,1/2)$ if multiple edges are present. By the uniqueness of the limit function, this means that the limit cannot be described by a single function in $\WW$. In [@LSz6], limit objects for moments indexed by multigraphs with bounded edge multiplicities are described. Let $\WW(d)$ denote the set of symmetric measurable functions $W:~[0,1]\times [0,1]\to [-d,d]$. A [*moment function sequence*]{} is a sequence $(W_0,W_1,\dots)$ of functions such that $W_i\in\WW(d^i)$ and $(W_0(x,y),W_1(x,y),\dots)$ is a moment sequence for almost all pairs $(x,y)\in[0,1]^2$. Then the limit object of a graph sequence with edge weights uniformly bounded by $d$ can be described by a moment function sequence.
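The collapse of edge multiplicities on a $0$–$1$ graph can be seen directly by brute force. The sketch below is illustrative only (a small sample of $G(n,1/2)$ with a fixed seed for reproducibility): it computes the densities of $K_2$ and of the double edge $K_2^2$ in the same graph and confirms they coincide, whereas for the constant function we have $t(K_2^2,1/2)=1/4\ne 1/2=t(K_2,1/2)$.

```python
import itertools
import random

def t_density(edges, k, A):
    """t(F, G): average over all maps [k] -> V(G) of products of
    adjacency entries raised to the edge multiplicities of F."""
    n = len(A)
    total = 0
    for phi in itertools.product(range(n), repeat=k):
        p = 1
        for (i, j, mult) in edges:
            p *= A[phi[i]][phi[j]] ** mult
        total += p
    return total / n ** k

random.seed(1)
n = 14
A = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        A[i][j] = A[j][i] = int(random.random() < 0.5)

single = t_density([(0, 1, 1)], 2, A)  # t(K_2, G_n)
double = t_density([(0, 1, 2)], 2, A)  # t(K_2^2, G_n)
assert single == double  # 0-1 adjacency entries make multiplicities invisible
```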
(As in the one-variable and also in the simple-graph case, these objects are not uniquely determined by their moments since any measure preserving transformation of $[0,1]$ yields another object which has the same moments.) It is also shown in [@LSz6] that moment function sequences can be represented essentially uniquely by functions $W:~[0,1]^2\to\PP[-d,d]$, where $\PP[-d,d]$ is the set of probability distributions on the Borel sets of $[-d,d]$ (we endow $\PP[-d,d]$ with the weak topology, and require $W$ to be measurable with respect to the Borel sets of $\PP[-d,d]$). Such a function is called a [*$[-d,d]$-graphon*]{}. There is a third representation which is unique and is analogous to the distribution of a random variable: This is a probability distribution on infinite edge-weighted graphs on the node set $\N$ that is symmetric under the permutations of the node set and has the property that disjoint subsets of $\N$ span independent (in the probability sense) labeled weighted graphs. (Again, the edge weights are between $-d$ and $d$.) The above can be viewed as a natural characterization of these homogeneous infinite random graph models. ### Characterizing moment parameters One of the goals of this paper is to characterize moment parameters indexed by multigraphs. Here are some basic properties of graph parameters of the form $t(.,W)$, where $W\in\WW(d)$ (see Proposition \[PROP:NEC\]): - $t(K_1,W)=1$ where $K_1$ is the one-node graph ($t(.,W)$ is normalized). - $t(F_1\cup F_2,W)=t(F_1,W)t(F_2,W)$ for all $F_1$ and $F_2$, where $F_1\cup F_2$ is the disjoint union of $F_1$ and $F_2$ (multiplicativity). - $t(p^2,W)\ge 0$ for all polynomials $p\in \ize$ (weak reflection positivity). Note that this makes sense since, as remarked above, every multigraph parameter extends to polynomials in $\ize$. - $|t(K_2^n,W)|\le d^n$ (exponentially bounded growth on the $n$-fold edge).
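The normalization, multiplicativity, and growth properties in this list can be checked mechanically for a concrete kernel. A minimal sketch, assuming a stepfunction $W_B$ given by a small symmetric matrix $B$ with equal-measure steps (an arbitrary choice for illustration, not a kernel from the text):

```python
import itertools

def t_density(edges, k, B):
    """t(F, W_B) for the stepfunction W_B given by symmetric matrix B with
    equal-length steps: average over all maps of products of entries."""
    n = len(B)
    total = 0.0
    for phi in itertools.product(range(n), repeat=k):
        p = 1.0
        for (i, j, mult) in edges:
            p *= B[phi[i]][phi[j]] ** mult
        total += p
    return total / n ** k

B = [[0.5, -0.3], [-0.3, 0.8]]     # a symmetric 2-step kernel with d = 0.8
d = 0.8

assert t_density([], 1, B) == 1.0                      # t(K_1, W) = 1
edge = t_density([(0, 1, 1)], 2, B)                    # t(K_2, W)
two_edges = t_density([(0, 1, 1), (2, 3, 1)], 4, B)    # two disjoint edges
assert abs(two_edges - edge ** 2) < 1e-12              # multiplicativity
for k in range(1, 6):
    assert abs(t_density([(0, 1, k)], 2, B)) <= d ** k # growth on K_2^k
```

Weak reflection positivity could be checked similarly by evaluating $t(p^2,W_B)$ for a few quantum graphs $p$, but that requires bookkeeping for labeled products and is omitted here.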
Let $\TT_3(d)$ denote the set of multigraph parameters with these four properties, and let $f\in\TT_3(d)$. Can $f$ be represented as $t(.,W)$ with some function $W:[0,1]^2\to [-d,d]$? The parameter $2^{-|E(F')|}$ discussed above shows that these conditions are not sufficient; however they are not very far from being sufficient. We will show (Theorem \[closure\]) that the set of graph parameters of the form $t(.,W)$ $(W\in\WW(d))$ is dense with respect to the pointwise convergence in $\TT_3(d)$. We will also show that graph parameters in $\TT_3(d)$ can be represented by $[-d,d]$-graphons. \[PDSEMI\] There is another generalization of the classical moment problem, the theory of positive definite functions on semigroups [@BCR; @LM]. Although our context does not entirely fit into the framework of that theory, we will make use of a theorem about exponentially bounded positive definite functions [@BM] (see [@LSch] for results on semigroups that are related to both that theory and our framework). ### Homomorphisms and stepfunctions One can define the homomorphism number $\hom(F,H)$ from a multigraph into a weighted graph, as well as connection matrices $M(f,k)$ for multigraph parameters $f$, analogously to the simple case. The analogue of Property 5 above holds (Freedman, Lovász and Schrijver [@FLS]): \[THM:FLS\] A multigraph parameter is of the form $\hom(.,H)$ for some weighted graph with $q$ nodes if and only if the multigraph connection matrix $M(f,k)$ is semidefinite and has rank at most $q^k$ for every $k\ge 0$. In this paper we prove extensions of this theorem. To state our results, we need the notion of a [*randomly weighted graph*]{}: a graph whose nodes are weighted with nonnegative real numbers, and edges are weighted by random variables with values from a finite set of real numbers. A weighted graph is a special case when all these distributions are concentrated on a single value.
We say that a randomly weighted graph is [*proper*]{} if it is not an ordinary weighted graph. Multigraph moments $t(F,H)$ of a randomly weighted graph $H$ can be defined; they will be multiplicative, normalized, reflection positive graph parameters. Our main result (Theorem \[genfls2\]) describes multigraph parameters $f$ that are multiplicative, normalized, reflection positive, and whose second connection matrix $M(f,2)$ has finite rank (it is enough to require that certain very simple submatrices have finite rank). The theorem gives two alternatives: such a graph parameter is either — of the form $t(.,H)$ for some weighted graph $H$, in which case $\rk(M(f,k))^{1/k}\to c\ge 1$ as $k\to\infty$, or — of the form $t(.,H)$ for some proper randomly weighted graph $H$, in which case $\rk(M(f,k))^{1/k^2}\to c>1$ as $k\to\infty$. In particular, the finiteness of the rank of $M(f,2)$ implies the finiteness of the ranks of all higher connection matrices $M(f,k)$. Preliminaries ============= Graphs and homomorphisms {#homom} ------------------------ We consider four types of graphs. A [*simple graph*]{} is a finite undirected graph without loops or multiple edges. In a [*multigraph*]{} multiple edges are allowed but loop edges are excluded. The edge set $E(G)$ of a multigraph $G$ is a multiset of unordered pairs $ij$ where $i,j$ are distinct elements of the node set. A [*weighted graph*]{} $H$ on node set $V=V(H)$ is given by an assignment of positive nodeweights $(\alpha_i:~i\in V)$ and an assignment of real edgeweights $(\beta_{ij}:~i,j\in V)$. We consider $i,j\in V$ as adjacent if $\beta_{ij}\not=0$. Note that we allow loop edges in weighted graphs, but if $\beta_{i,i}=0$ for all $i\in V$, then we say that $H$ is loopless. Every multigraph $F$ can be considered as a weighted graph with nodeweights $1$ and nonnegative integral edgeweights (multiplicities) $F_{i,j}$.
We say that $H$ is a [*randomly weighted graph*]{} if its nodes are weighted by nonnegative real numbers $\alpha_i$, and its edges are weighted by independent random variables $B_{i,j}$ with finite distribution. We can also think of randomly weighted graphs as graphs whose edges are labeled by moment sequences of random variables with a finite range, showing that these are discrete versions of $[-d,d]$-graphons. Also note that ordinary weighted graphs can be regarded as randomly weighted graphs in which the edgeweights are single-valued random variables. An important parameter of a randomly weighted graph $H$ will be $p_{i,j}$, the number of values $B_{i,j}$ takes with positive probability, and $p(H)$, the maximum of the $p_{i,j}$. Ordinary weighted graphs are just the randomly weighted graphs with $p(H)=1$. Throughout this paper, if we say just [*graph*]{}, we mean a multigraph. For an arbitrary multigraph $F$ and weighted graph $H$, the homomorphism number from $F$ to $H$ is defined by $$\label{homform} \hom (F,H)=\sum_{\varphi:V(F)\to V(H)}~\prod_{i\in V(F)}\alpha_{\varphi(i)}\prod_{(i,j)\in E(F) }\beta_{\varphi(i),\varphi(j)}.$$ Sometimes it is convenient to normalize the graph parameter $\hom(F,H)$ and to introduce the [*homomorphism density*]{} $$t(F,H)=\frac{\hom (F,H)}{\bigl(\sum_i\alpha_i\bigr)^{|V(F)|}}.$$ Note that $t(F,H)=\hom (F,H')$ where $H'$ is obtained from $H$ by dividing the nodeweights by $\sum_i\alpha_i$. A weighted graph is called [*normalized*]{} if the sum of its node weights is $1$. For an arbitrary graph $F$ with $m$ nodes we define an injective version of these numbers by the formula $$\inj(F,H)=\sum_{\varphi:\,V(F)\hookrightarrow V(H)}~\prod_{(i,j)\in E(F)}\beta_{\varphi(i),\varphi(j)},$$ where $\varphi$ ranges over all injective functions from $V(F)$ to $V(H)$. Again, we can normalize to get $$t_\inj(F,H)=\frac{\inj(F,H)}{\sigma_{|V(F)|}(\alpha)},$$ where $\sigma_k(\alpha)$ denotes the $k$-th elementary symmetric polynomial of the $\alpha_i$.
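The definition of $\hom(F,H)$ is directly computable for small instances by enumerating all maps. A minimal sketch (the graphs here are chosen for illustration and do not come from the text): for the triangle $F=C_3$ and $H=K_3$ with unit weights, $\hom(F,H)$ counts the proper $3$-colourings of a triangle.

```python
import itertools

def hom(F_edges, F_k, alpha, beta):
    """hom(F, H): sum over all maps phi: V(F) -> V(H) of the product of
    node weights alpha and edge weights beta along the edges of F
    (each multiple edge is listed once with its multiplicity)."""
    n = len(alpha)
    total = 0.0
    for phi in itertools.product(range(n), repeat=F_k):
        w = 1.0
        for v in range(F_k):
            w *= alpha[phi[v]]
        for (i, j, mult) in F_edges:
            w *= beta[phi[i]][phi[j]] ** mult
        total += w
    return total

alpha = [1.0, 1.0, 1.0]                    # unit node weights: H = K_3
beta = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
triangle = [(0, 1, 1), (1, 2, 1), (2, 0, 1)]

h = hom(triangle, 3, alpha, beta)
t = h / sum(alpha) ** 3                    # homomorphism density t(C_3, K_3)
assert h == 6.0                            # the 3! proper 3-colourings
assert abs(t - 6 / 27) < 1e-12
```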
For a randomly weighted graph we define the homomorphism number $\hom(F,H)$ as $$\label{homtorwg} \hom(F,H)=\sum_{\varphi:\,V(F)\to V(H)}~\prod_{i\in V(F)}\alpha_{\varphi(i)}\prod_{ij\in E(F)} \E(B_{\varphi(i),\varphi(j)}^{F_{i,j}}).$$ Setting $\beta_{i,j,k}=\E(B_{i,j}^k)$, we have $$\hom(F,H)=\sum_{\varphi:\,V(F)\to V(H)}~\prod_{i\in V(F)}\alpha_{\varphi(i)}\prod_{ij\in E(F)} \beta_{\varphi(i),\varphi(j),F_{i,j}}.$$ Similarly as before, we introduce the scaled version $$t(F,H)=\frac{\hom(F,H)}{\bigl(\sum_i\alpha_i\bigr)^{|V(F)|}}.$$ \[REM:RW\] It is not quite evident where to put the expectation in (\[homtorwg\]). Moving it further in like $$\sum_{\varphi:\,V(F)\to V(H)}~\prod_{i\in V(F)}\alpha_{\varphi(i)}\prod_{ij\in E(F)} \E(B_{\varphi(i),\varphi(j)})^{F_{i,j}}$$ would of course just reduce the issue to an ordinary weighted graph, where each random variable $B_{i,j}$ is replaced by its expectation. Moving it further out like $$\E\Bigl(\sum_{\varphi:\,V(F)\to V(H)}~\prod_{i\in V(F)}\alpha_{\varphi(i)}\prod_{ij\in E(F)} B_{\varphi(i),\varphi(j)}^{F_{i,j}}\Bigr)$$ would destroy multiplicativity. With every normalized randomly weighted graph $H$ we can associate a $[-d,d]$-graphon $W_H$, by splitting the unit interval into $|V(H)|$ intervals $S_i$ of length $\alpha_i$, and assigning the random variable $B_{i,j}$ to each point in $S_i\times S_j$. This graphon $W_H$ has two finiteness properties: $W_H(x,y)$ has a finite range for all $x$ and $y$, and there are only a finite number of different distributions $W_H(x,y)$. In the special case when $H$ is a weighted graph, we get a function $W\in\WW$. It is easy to see that this representation has the property that for every multigraph $F$, $t(F,W_H)=\hom (F,H)=t(F,H)$.
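The placement of the expectation matters, and a tiny example makes this concrete. The sketch below is illustrative (the graph $H$ is an ad hoc choice, not from the text): two nodes of weight $1/2$ joined by an edge whose weight is a fair $\pm1$ coin, so $\E(B^m)$ is $0$ for odd $m$ and $1$ for even $m$. The density of $K_2$ vanishes, while that of the double edge $K_2^2$ does not; replacing $B$ by its expectation $0$ would kill the latter as well.

```python
import itertools

def t_random(F_edges, F_k, alpha, moment):
    """t(F, H) for a randomly weighted H: edge weights enter only through
    the moments E(B_{ab}^m), supplied by moment(a, b, m)."""
    s = sum(alpha)
    total = 0.0
    for phi in itertools.product(range(len(alpha)), repeat=F_k):
        w = 1.0
        for v in range(F_k):
            w *= alpha[phi[v]] / s
        for (i, j, m) in F_edges:
            w *= moment(phi[i], phi[j], m)
        total += w
    return total

# Two nodes of weight 1/2; B_{12} = +/-1 with probability 1/2 each, no loops.
def moment(a, b, m):
    if a == b:
        return 0.0                 # loop weight 0
    return 0.0 if m % 2 else 1.0   # E(B^m) for a fair +/-1 coin

alpha = [0.5, 0.5]
t1 = t_random([(0, 1, 1)], 2, alpha, moment)  # t(K_2, H) = 0
t2 = t_random([(0, 1, 2)], 2, alpha, moment)  # t(K_2^2, H) = 1/2
assert t1 == 0.0
assert t2 == 0.5
```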
Quantum graphs and reflection positivity ---------------------------------------- Let $\GG_n~~(n=0,1,2,\dots)$ denote the set of multigraphs in which $n$ different nodes are labeled by the natural numbers $\{1,2,\dots,n\}$ (the graphs may have an arbitrary number of unlabeled nodes). Note that $\GG_0$ is the set of (isomorphism classes of) graphs without labeled nodes. Let $\FF_n\subset\GG_n$ denote the set of graphs whose node set is $\{1,2,\dots,n\}$. For two graphs $F_1,F_2\in\GG_n$ we define their product $F_1F_2$ as follows: we take their disjoint union and then we identify nodes with identical labels. The set $\GG_n$ endowed with this multiplication forms a commutative semigroup with a unit element in which $\FF_n$ is a sub-semigroup. We denote by $\QQ_n$ the semigroup algebra $\R[\GG_n]$ and by $\PP_n$ the semigroup algebra $\R[\FF_n]$. The elements of these algebras are formal linear combinations of (partially) labeled graphs, and for this reason we call them [*quantum graphs*]{}. Let us fix a number $n$ and let $z_{i,j}~(1\leq i<j\leq n)$ denote the graph with $V(z_{i,j})=[n]$ with a single edge connecting $i$ and $j$. It is clear that $\PP_n$ is generated freely by $\{z_{i,j}|1\leq i<j\leq n\}$ as a commutative algebra and thus it is isomorphic to the polynomial ring $\R[\{z_{i,j}|1\leq i<j\leq n\}]$. Note that the monomials of this polynomial ring are in a one-to-one correspondence with graphs in $\FF_n$. A [*graph parameter*]{} is a map from the set of multigraphs to the real numbers. Any graph parameter $f$ can be extended linearly to the vector spaces $\QQ_n$ and $\PP_n$ for all $n\ge 0$. We say that $f$ is [*reflection positive*]{} (resp. [*weakly reflection positive*]{}) if $f(p^2)\geq 0$ holds for all natural numbers $n$ and quantum graphs $p\in\QQ_n$ (resp. $p\in\PP_n$). Any graph parameter $f$, as we have seen, extends linearly to $\QQ_n$. In addition, $f$ induces a bilinear form $\langle.,.\rangle_f$ on $\QQ_n$ by $\langle p,q \rangle_f=f(pq)$. 
This form has the property that $\langle pq,r \rangle_f=\langle p,qr \rangle_f$. Note that the reflection positivity (resp. weak reflection positivity) of $f$ is equivalent to the positive semidefiniteness of the bilinear forms $\langle.,.\rangle_f$ on the algebras $\QQ_n$ (resp. $\PP_n$). Let $$\II(\QQ_n,f)=\{x\in \QQ_n~:~\langle x,\QQ_n\rangle_f=0\}.$$ It is clear that $\II(\QQ_n,f)$ is an ideal of the algebra $\QQ_n$, and we can consider the factor $\QQ_n/f=\QQ_n/\II(\QQ_n,f)$. Clearly $\dim(\QQ_n/f)$ is the rank of the bilinear form $\langle.,.\rangle_f$ on $\QQ_n$. We can carry out these constructions with $\PP_n$ instead of $\QQ_n$. In the case when $f=\hom(.,H)$ for some randomly weighted graph $H$, we also denote $\QQ_n/f$ by $\QQ_n/H$. The algebras $\QQ_n/f$ and the numbers $\dim(\QQ_n/f)$ were introduced in [@FLS]. Basic properties of these algebras can also be expressed in terms of certain matrices. The $n$-th [*connection matrix*]{} of a graph parameter $f$ is an infinite matrix $M(f,n)$ whose rows and columns are indexed by the elements of $\GG_n$ and the entry in the intersection of the row corresponding to $F_1$ and column corresponding to $F_2$ is $f(F_1F_2)$. The rank of this matrix is equal to $\dim(\QQ_n/f)$, and this matrix is positive semidefinite if and only if so is the bilinear form $\langle.,.\rangle_f$ on $\QQ_n$. A graph parameter $f$ is [*multiplicative*]{} if $f(F_1\cup F_2)=f(F_1)f(F_2)$ where $F_1\cup F_2$ is the disjoint union of $F_1$ and $F_2$. We call $f$ [*normalized*]{} if it takes the value $1$ on the one-node graph. It is clear that $f$ is multiplicative and normalized if and only if the induced map $f:~\QQ_0\to\R$ is an algebra homomorphism. It is also easy to see that $f$ is multiplicative if and only if $\dim(\QQ_0/f)\leq 1$. Semidefinite functions on polynomial rings ------------------------------------------ Let $n$ be a fixed natural number and let $x_1,x_2,\dots x_n$ be variables.
A [*polynomial expression*]{} of $W$ in $n$ variables is a polynomial of the functions $\{W(x_i,x_j)~|~1\leq i<j\leq n\}$. Note that these $n$-variable functions form a commutative algebra with the pointwise multiplication and addition. We define the [*moment*]{} of $W$ corresponding to a polynomial expression $p(x_1,x_2,\dots,x_n)$ as $$\int_{[0,1]^n}p~dx_1\,\dots\,dx_n.$$ The ring of the polynomial expressions of $W$ in $n$ variables is a homomorphic image of $\PP_n$ where the homomorphism is given by $z_{i,j}\to W(x_i,x_j)$. Composing the moment map with this homomorphism we obtain a linear map $t_W:~ \PP_n\to \R$. Since the moments of a polynomial expression are invariant under any permutation of the variables, we obtain that for an element $F\in\FF_n$ the value $t_W(F)=t(F,W)$ does not depend on the labeling of $F$, only on its isomorphism class. For this reason we can also regard $t_W$ as a graph parameter which carries all the information about moments. Furthermore, it is easy to see that $t_W$ is normalized and multiplicative. Let $n$ be an arbitrary natural number and $p\in \PP_n$. Since the polynomial expression corresponding to $p^2$ is the square of the polynomial expression corresponding to $p$ we get that $t_W(p^2)\geq 0,$ which shows that the graph parameter $t_W$ is weakly reflection positive. If $W\in\WW(d)$ then $$\bigl|t(K_2^k,W)\bigr| = \Bigl|\int_{(x,y)\in [0,1]^2}W(x,y)^k\,dx\,dy\Bigr|\leq d^k,$$ which is equivalent to $|t_W(z_{1,2}^k)|\leq d^k$. Let $\alpha$ be an element of the dual space of $\R[x_1,x_2,\dots,x_n]$. The map $\alpha$ is said to be [*positive semidefinite*]{} if $\alpha(f^2)\geq 0$ for all $f\in \R[x_1,x_2,\dots,x_n]$. We say that $\alpha$ is normalized if $\alpha(1)=1$. The following theorem follows quickly from the theory of semidefinite functions on Abelian semigroups.
\[semfuncpol\] Let $\alpha$ be a normalized, positive semidefinite element of the dual space of $\R[x_1,x_2,\dots,x_n]$ such that $|\alpha(x_i^r)|\leq d^r$ for all $1\leq i\leq n$ and all natural numbers $r\geq 0$. Then there is a unique probability measure $\mu$ on $[-d,d]^n$ with $$\label{mertek} \alpha(f)=\int_{x\in [-d,d]^n}f(x)~d\mu.$$ If the rank of the bilinear form given by $\langle f,g \rangle=\alpha(fg)$ is finite then the measure $\mu$ is concentrated on finitely many points. Let us introduce the linear function $\beta: \R[x_1,x_2,\dots,x_n]\to\R$ by $$\beta(x_1^{r_1}x_2^{r_2}\dots x_n^{r_n})=d^{-(r_1+r_2+\dots +r_n)}\alpha(x_1^{r_1}x_2^{r_2}\dots x_n^{r_n}).$$ It is easy to see that $\beta$ is a positive semidefinite function and $|\beta(x_i^r)|\leq 1$ for all $i$ and $r$. We show that $|\beta(x_1^{r_1}x_2^{r_2}\dots x_n^{r_n})|\leq 1$. We do it by induction on the index of the last nonzero $r_i$. Assume that $|\beta(x_1^{r_1}x_2^{r_2}\dots x_i^{r_i})|\leq 1$ for all possible sequences $r_1,r_2,\dots,r_i$. Let $p=x_1^{r_1}x_2^{r_2}\dots x_i^{r_i}$. It follows by the positive semidefiniteness of $\beta$ that $$\beta((p\pm x_{i+1}^{r_{i+1}})^2)\geq 0,$$ and so $$2\geq\beta(p^2)+\beta(x_{i+1}^{2r_{i+1}})\geq \mp 2\beta(px_{i+1}^{r_{i+1}}),$$ which implies $|\beta(px_{i+1}^{r_{i+1}})|\leq 1$. As a consequence we get that $$|\alpha(x_1^{r_1}x_2^{r_2}\dots x_n^{r_n})|\leq d^{r_1+r_2+\dots +r_n}.$$ This means that $\alpha$ is an exponentially bounded positive semidefinite function on the semigroup of the monomials, which is isomorphic to $\N_0^n$. Now Theorem 2.5 of (...) completes the proof. Note that any probability measure $\mu$ on $[-d,d]^n$ defines a semidefinite function $\alpha$ on $\R[x_1,x_2,\dots,x_n]$ by (\[mertek\]). We will need two further well-known facts.
\[rankofbf\] Assume that the map $\alpha:\R[x_1,x_2,\dots,x_n] \mapsto\R$ is defined by $$\alpha(p)=\sum_{i=1}^k h_ip(a_i),$$ where $a_1,\dots,a_k\in\R^n$ are different real vectors, and the weights $h_i$ are positive real numbers. Then $\alpha$ is a positive semidefinite function and the corresponding bilinear form $$\langle p_1,p_2 \rangle =\alpha(p_1p_2)~~(p_1,p_2\in\R[x_1,x_2,\dots,x_n])$$ has rank $k$. \[konvmet\] Let $d>0$ be a fixed real number. A sequence of measures $\mu_1,\mu_2,\dots$ on $[-d,d]^n$ is weakly convergent if and only if $\lim_{k\to\infty}\int f d\mu_k$ exists for every monomial $f$. Two-variable functions as operators ----------------------------------- Any bounded symmetric function $W$ on $[0,1]^2$ gives rise to a symmetric integral kernel operator $T_W$ on the Hilbert space $L_2([0,1])$, by $$T_W(f)(x)=\int_0^1 W(y,x)f(y)~dy.$$ It follows from the Hilbert–Schmidt condition that such an operator is always compact, and so it has a countable set of nonzero eigenvalues $\{\lambda_1,\lambda_2,\lambda_3\dots\}$, where we may assume that $|\lambda_1|\ge|\lambda_2|\ge\dots$. It is known that $\lambda_k\to 0$, and so every nonzero eigenvalue has finite multiplicity. We will need the well-known fact that for $n\ge 2$, $$\label{EQ:CNLAMBDA} t(C_n,W)=\sum_{k=1}^{\infty}\lambda_k^n.$$ The operator rank of $T_W$ and the matrix rank of $C(t_W)$ are either both infinite or both finite. More exactly, \[LEM:TW-RANK\] The rank of $C(t_W)$ is between the number of different nonzero eigenvalues and the number of all nonzero eigenvalues of $T_W$. By (\[EQ:CNLAMBDA\]), $$C(t_W)_{i,j}= \sum_k \lambda_k^{i+j}=\sum_k m_k\bar\lambda_k^{i+j},$$ where the $\bar\lambda_k$ are the distinct nonzero eigenvalues and $m_k$ is the multiplicity of $\bar\lambda_k$. If the number of nonzero eigenvalues $\rk(T_W)=m$ is finite, then $C(t_W)$ is the sum of $m$ matrices of rank $1$, and so it has rank at most $m$. Conversely, suppose that $C(t_W)$ has finite rank $n$.
Then there is a linear dependence between its first $n+1$ columns, which means that we have a relation $$\sum_{j=1}^{n+1} a_j \sum_k\lambda_k^{i+j} =0$$ valid for all $i\ge 1$. We can rewrite this as $$\sum_k p(\lambda_k) \lambda_k^{i+1} = 0,$$ where $p$ is a polynomial of degree at most $n$. We claim that $p(\lambda_k)=0$ for all $k$. Suppose not, and let $r$ be the first index for which $p(\lambda_r)\not=0$, and let $a$ and $b$ be the multiplicities of the eigenvalues $\lambda_r$ and $-\lambda_r$ ($a\ge 1, b\ge 0$). Then we have $$a p(\lambda_r) + (-1)^{i+1} b p(-\lambda_r) = -\sum_{k:|\lambda_k|<|\lambda_r|} p(\lambda_k) \Bigl(\frac{\lambda_k}{\lambda_r}\Bigr)^{i+1}.$$ Here the right hand side tends to $0$ as $i\to\infty$, implying that $a p(\lambda_r)+b p(-\lambda_r)=0$ and also $a p(\lambda_r)-b p(-\lambda_r)=0$, whence $a p(\lambda_r)=0$, which is a contradiction. So every nonzero eigenvalue of $T_W$ is a root of $p$, which means that their number is at most $\deg(p)\le n$. This implies \[finiterank\] Let $W\in\WW(d)$ and assume that $C(t_W)$ has finite rank. Then the kernel operator $T_W$ is of finite rank and there is a finite sequence of pairwise orthogonal functions $g_1,g_2,\dots,g_k\in L_2([0,1])$ and numbers $\nu_i\in\{d,-d\}$ such that $$W(x,y)=\sum_{i=1}^k \nu_ig_i(x)g_i(y)$$ almost everywhere on $[0,1]^2$. The product of two operators $T_{W_1}$ and $T_{W_2}$ is $T_{W_1\circ W_2}$ where $W_1\circ W_2$ is given by $$(W_1\circ W_2)(x,y)=\int_0^1 W_1(x,z)W_2(z,y)\,dz\,.$$ Let $F'$ denote the graph which is obtained from $F$ by subdividing each edge in $E(F)$. It will be useful to note that \[subdiv\] If $W\in\WW$ and $F$ is any graph then $t(F',W)=t(F,W\circ W)$. Results and proofs ================== Moments and moment-like graph parameters ---------------------------------------- Let $\WW(d)$ denote the set of 2-variable measurable functions $W:~[0,1]^2\to [-d,d]$ that are symmetric in the sense that $W(x,y)=W(y,x)$ for all $x,y\in [0,1]$. We denote by $\WW$ the union of the sets $\WW(d)$ over all real numbers $d$.
Let us define four sets of graph parameters: $$\begin{aligned} \TT_0(d)&=\{t(.,H):~H\text{ is a $[-d,d]$-weighted graph}\},\\ \TT_1(d)&=\{t(.,H):~H\text{ is a randomly $[-d,d]$-weighted graph}\},\\ \TT_2(d)&=\{t(.,W):~W\in\WW(d)\},\\ \TT_3(d)&=\{t(.,W):~W\text{ is a $[-d,d]$-graphon}\}.\end{aligned}$$ Clearly $\TT_0(d)\subseteq \TT_1(d),\TT_2(d)\subseteq \TT_3(d)$. We prove that equality almost holds here. Let us quote Theorem 2.6 from [@LSz6], applied to our case: \[convgen-m\] Let $W_1,W_2,\dots$ be a sequence of $[-d,d]$-graphons such that $(t(F,W_1),t(F,W_2),\dots)$ is a convergent sequence for every multigraph $F$. Then there is a $[-d,d]$-graphon $W$ such that $t(F,W_n)\to t(F,W)$ for every $F$. This theorem implies that $\TT_3(d)$ is a closed subset of the space of graph parameters under pointwise convergence. We are going to prove: \[closure\] The set $\TT_3(d)$ is the closure of $\TT_0(d)$. We are also going to prove \[chr-t2\] A graph parameter $f$ belongs to $\TT_3(d)$ if and only if it is normalized, multiplicative, weakly reflection positive and satisfies $|f(K_2^k)|\leq d^k$. By Tychonoff’s Compactness Theorem, $\TT_3(d)$ is compact as a closed subspace of the compact space $\prod_F [-d^{|E(F)|},d^{|E(F)|}]$. Both theorems will follow if we prove two facts: \[PROP:NEC\] Every graph parameter $f\in\TT_3(d)$ is normalized, multiplicative, reflection positive and satisfies $|f(K_2^k)|\leq d^k$. \[PROP:SUFF\] Every normalized, multiplicative, weakly reflection positive graph parameter satisfying $|f(K_2^k)|\leq d^k$ is the limit of graph parameters in $\TT_0(d)$. Before proving these propositions, we state an easy lemma about homomorphism densities and injective homomorphism densities, which follows from Lemma 2.1 in [@LSz1] by scaling the edgeweights. \[tavolsag\] Let $H$ be a weighted graph with $n$ nodes such that all the nodeweights are $1$ and the edgeweights are in $[-d,d]$.
Then for an arbitrary multigraph $F$ with $m$ nodes, $$|t(F,H)-t_\inj(F,H)|\leq 2{{m}\choose{2}}\frac{1}{n}d^{|E(F)|}.$$ [Proposition \[PROP:NEC\]]{} It is clear that for every $[-d,d]$-graphon $W$, the parameter $f=t(.,W)$ is multiplicative and normalized, and satisfies $|f(K_2^k)|\leq d^k$. We prove that $M(f,n)$ is positive semidefinite. Let $F$ be a graph in $\GG_n$ such that $V(F)=[m]$ for some natural number $m\geq n$. For every choice of the variables $x_1,\dots,x_n$ we define $$t_{x_1,\dots,x_n}(F,W)=\int\limits_{[0,1]^{m-n}} \prod_{{1\leq i\leq m,n+1\leq j\leq m}\atop{i < j}}W_{F_{i,j}}(x_i,x_j)\,dx_{n+1}\dots dx_m.$$ We have $$\begin{aligned} t(F F',W)=&\int_{[0,1]^n}t_{x_1,\dots,x_n}(F,W) t_{x_1,\dots,x_n}(F',W)\\ &\times\prod_{1\leq i<j \leq n}W_{F_{i,j}+F'_{i,j}}(x_i,x_j)\,dx_1\dots dx_n.\end{aligned}$$ where $F$ and $F'$ are graphs in $\GG_n$. For every $x\in [0,1]^n$, let $M(x)$ denote the $\GG_n\times\GG_n$ matrix in which $$M(x)_{F,F'}=t_x(F,W)t_x(F',W)\prod_{1\leq i<j\leq n}W_{F_{i,j}+F'_{i,j}}(x_i,x_j).$$ From the above formulas one obtains that $$\label{EQ:MNF} M(f,n)=\int_{[0,1]^n} M(x)\,dx,$$ so it suffices to prove that $M(x)$ is positive semidefinite for every $x$. For $1\le i<j\le n$, let $M(x,i,j)$ denote the $\GG_n\times\GG_n$ matrix in which $$M(x,i,j)_{F,F'}=W_{F_{i,j}+F'_{i,j}}(x_i,x_j).$$ Since $M(x,i,j)$ is essentially (up to repetition of rows and columns) the moment matrix of the random variable $W(x_i,x_j)$, it is positive semidefinite. We get $M(x)$ from the Schur product of the matrices $M(x,i,j)$ over all possible pairs $1\leq i<j\leq n$ by scaling the rows and columns. This shows that $M(x)$ is indeed positive semidefinite. [Proposition \[PROP:SUFF\]]{} It suffices to consider the case $d=1$, since we can scale the edgeweights by $1/d$. Let $f$ be a weakly reflection positive, normalized multiplicative graph parameter with $|f(K_2^k)|\le 1$ for all $k\ge 1$.
We prove that there is a sequence of stepfunctions $U_1,U_2,\dots$ in $\WW(1)$ such that $\lim_{n\to \infty}t(F,U_n)=f(F)$ for every graph $F$. The weak reflection positivity of $f$ means that $f$ is a semidefinite function on the polynomial ring $\PP_n$ for every natural number $n$. Using that $|f(K_2^k)|\leq 1$ and Theorem \[semfuncpol\] for $\PP_n$ we obtain that there is a unique probability measure $\mu_n$ on $[-1,1]^{{n}\choose{2}}$ such that $$f(F)=\E\Bigl(\prod_{1\leq i<j\leq n}z_{i,j}^{F_{i,j}}\Bigr)$$ for every graph $F\in \FF_n$, where the $z_{i,j}$ are regarded as random variables whose joint distribution is given by $\mu_n$. Let $Z_n$ be a random weighted graph (not a randomly weighted graph!) on $[n]$ with nodeweights $1/n$ and edgeweights $z_{i,j}$. Since $f$ is invariant under relabeling the nodes of $F$ we get that $$f(F)=\E\Bigl(\frac{1}{n!}\sum_{\sigma\in S_n}\prod_{1\leq i<j\leq n}z_{\sigma(i),\sigma(j)}^{F_{i,j}}\Bigr)=\E(t_\inj(F,Z_n)).$$ Fix a graph $F\in \FF_m$ and for every $n\geq m$, define the graph $F_n\in\FF_n$ by adding $n-m$ isolated labeled nodes to $F$. It is clear that $$t_{\inj}(F_n,Z_n)=t_\inj(F,Z_n)$$ and (using the properties of $f$) that $$\label{injeq} f(F)=f(F_n)=\E(t_\inj(F_n,Z_n))=\E(t_\inj(F,Z_n))$$ for all $n\geq m$. Let $F^2$ denote the disjoint union of $F$ with itself.
Using that $f(F^2)=f(F)^2$ and (\[injeq\]) we get that $$\begin{aligned} \Var(t_\inj(F,Z_n))&=\E(t_\inj(F,Z_n)^2)-\E(t_\inj(F,Z_n))^2\\ &=\E(t_\inj(F,Z_n)^2)-f(F^2)=\E(t_\inj(F,Z_n)^2-t_\inj(F^2,Z_n)) .\end{aligned}$$ From Lemma \[tavolsag\] it follows that $$|t(F,Z_n)^2-t_\inj(F,Z_n)^2|\leq \Bigl|2\binom{m}{2}\frac{1}{n}(t(F,Z_n)+t_\inj(F,Z_n))\Bigr|\leq \frac{4}{n}\binom{m}{2},$$ and similarly $$|t(F^2,Z_n)-t_\inj(F^2,Z_n)|\leq \frac{2}{n}\binom{2m}{2}.$$ Using that $t(F^2,Z_n)=t(F,Z_n)^2$, we get $$\begin{aligned} |t_\inj(F,Z_n)^2-t_\inj(F^2,Z_n)| \le\frac{6m^2}{n}.\end{aligned}$$ Thus $$\Var(t_\inj(F,Z_n))=\E(t_\inj(F,Z_n)^2-t_\inj(F^2,Z_n))\le \frac{6m^2}{n}.$$ By Chebyshev’s inequality, we have for every $\eps>0$, $$\Pr(|t_\inj(F,Z_n)-f(F)|>\eps)\le \frac{6m^2}{\eps^2n}.$$ It follows by the Borel–Cantelli lemma that $$\lim_{n\to\infty}t_\inj(F,Z_{n^2})=f(F)$$ with probability $1$. Since there are only countably many different graphs $F$, we obtain that the above convergence holds simultaneously for all graphs with probability $1$. By Lemma \[tavolsag\] we get that the graph parameter $t(.,Z_{n^2})$ converges to $f$ with probability one in the space of graph parameters. Thus $f$ is in the closure of $\TT_0(d)$. Finiteness conditions --------------------- In a sense, the classes $\TT_0(d)$ and $\TT_1(d)$ are finite versions of the classes $\TT_2(d)$ and $\TT_3(d)$. Theorem \[THM:FLS\] tells us that a graph parameter $f\in\TT_2(d)$ belongs to $\TT_0(d)$ if and only if there is a positive integer $q$ such that $\rk(M(f,k))\le q^k$ for all $k$. We prove that a much weaker condition is sufficient. For a graph parameter $f$, we define three infinite matrices $E(f)$, $C(f)$ and $B(f)$, in each of which the rows and columns are indexed by the natural numbers $0,1,2,3,\dots$, and the entries are defined by three one-parameter families of graphs.
Let $K_2^n$ consist of 2 nodes joined by $n$ edges, let $C_n$ be the $n$-cycle, and let $K_{a,b}$ be the complete bipartite graph with color classes of sizes $a$ and $b$. We define $$\begin{aligned} E(f)_{ij}&=f(K_2^{i+j}),\\ C(f)_{ij}&=f(C_{i+j-1}),\\ B(f)_{ij}&=f(K_{i+j,2}).\end{aligned}$$ Note that all three matrices are submatrices of the multigraph connection matrix $M(f,2)$. \[step\] For a graph parameter $f\in\TT_2(d)$, the following are equivalent: [(a)]{} $f\in\TT_0(d)$; [(b)]{} both $C(f)$ and $E(f)$ have finite rank; [(c)]{} both $B(f)$ and $E(f)$ have finite rank; [(d)]{} $M(f,2)$ has finite rank. Even though the graphs used in condition (b) in Theorem \[step\] are smaller, condition (c) may be more useful because it doesn’t use graphs with multiple edges. It is trivial that (a) implies (d), which in turn implies both (b) and (c). (b)$\Rightarrow$(a). Let $f=t(.,W)$ and let $X_W$ denote the random variable $W(X_1,X_2)$ where $X_1$ and $X_2$ are chosen uniformly at random from $[0,1]$. The $n$-th moment of $X_W$ is $$\int_{[0,1]^2} W(x_1,x_2)^n\,dx_1\,dx_2=t(K_2^n,W).$$ Let a linear map $\alpha:~\R[x]\to\R$ be defined by $$\alpha(x^n)=t(K_2^n,W).$$ Since the matrix $E(t_W)$ is the matrix of the bilinear form $\langle f,g \rangle =\alpha(fg)$ in the basis $1,x,x^2,\dots$ it follows from Theorem \[semfuncpol\] that the distribution of $X_W$ is concentrated on some finite set $S$. Since $C(t_W)$ has finite rank, Corollary \[finiterank\] implies that there is a finite system of one-variable functions $g_1,g_2,\dots,g_k$ and signs $\nu_i\in\{1,-1\}$ such that $$W(x,y)=\sum_{i=1}^k \nu_ig_i(x)g_i(y)$$ almost everywhere. By changing $W$ on a zero measure set, we can assume that the previous equality holds everywhere.
Since the set $\{(x,y):~W(x,y)\notin S\}$ has measure $0$, there is a set $Z\subseteq [0,1]$ with measure $0$ such that for all $x\in [0,1]\setminus Z$, the function $W(x,.)$ is measurable and the set $\{y\in[0,1]:~W(x,y)\notin S\}$ has measure $0$. For every fixed $x$, the function $W(x,.)$ is an element of the finite dimensional subspace generated by $g_1,g_2,\dots, g_k$, and so there are points $x_1,x_2,\dots,x_k\in [0,1]\setminus Z$ such that every function $W(x,.)$, $x\in [0,1]\setminus Z$, is a linear combination of the functions $W(x_i,.)$. For each $i$, there is a partition $\{U^i_0,U^i_1,\dots,U^i_s\}$ of $[0,1]\setminus Z$ into measurable sets, where $s=|S|$, such that $\lambda(U^i_0)=0$ and $W(x_i,.)$ is constant on each $U^i_j$, $1\le j\le s$. Combining the sets $U^i_0$ into a single $0$-measure set, and taking a common refinement of the partitions on the rest, we get a finite partition $\{U_0,U_1,\dots,U_N\}$ such that $\lambda(U_0)=0$ and each $W(x_i,.)$ is constant on each $U_j$, $1\le j\le N$. Hence every function $W(x,.)$, $x\in [0,1]\setminus Z$, is constant on every set $U_j$, $1\le j\le N$. From the symmetry of $W$ it follows that $W$ is constant on every set $U_i\times U_j$, and so $W$ is equal to a stepfunction almost everywhere. (c)$\Rightarrow$(a). It follows from Lemma \[subdiv\] that the matrix $B(t_W)$ is the same as $E(t_{W\circ W})$ and that $C(t_{W\circ W})$ is a submatrix of $C(t_W)$. So $W\circ W$ satisfies (b), and so we already know that $W\circ W$ is a stepfunction. Thus $T_{W\circ W}$ has finite rank and every eigenvector of $T_{W\circ W}$ corresponding to a nonzero eigenvalue is a one-variable stepfunction. Since $T_{W\circ W}$ is the square of $T_W$, it follows that the same statement holds for $T_W$. This implies that $W$ is a stepfunction. Homomorphisms into randomly weighted graphs ------------------------------------------- We prove the following generalization of Theorem \[THM:FLS\].
\[genfls\] Let $f$ be a graph parameter. Then the following are equivalent. [(1)]{} There is a randomly weighted graph $H$ such that $f(F)=\hom (F,H)$ for all graphs $F$. [(2)]{} $f$ is reflection positive, multiplicative and $\rk(M(f,2))<\infty$. [(3)]{} $f$ is weakly reflection positive, multiplicative and $\rk(M(f,2))<\infty$. As a corollary, we obtain the following characterization of simple graph parameters representable as homomorphism functions: If $f$ is weakly reflection positive, multiplicative and $\rk(M(f,2))<\infty$, then there is a weighted graph $H$ such that $\hom (F,H)=f(F)$ for all simple graphs $F$. [Theorem \[genfls\]]{} (1)$\Rightarrow$(2) Let $H$ be a randomly weighted graph. We may scale the nodeweights so that they sum to $1$. Then $f=t(.,W_H)$. By Proposition \[PROP:NEC\] we know that $f$ is multiplicative, normalized and reflection positive. We need to prove that $M(f,2)$ has finite rank. This follows easily by looking at the proof of Proposition \[PROP:NEC\] carefully. In , the integral can be replaced by a finite sum with $|V(H)|^n$ terms, since the integrand depends only on the nodes of $H$ represented by the intervals containing each $x_i$. Furthermore, each matrix $M(x)$ is the Schur product of a finite number of matrices $M(x,i,j)$. Each $M(x,i,j)$ is a moment matrix of a random variable with finite range, and hence it has finite rank. Hence every matrix $M(x)$ has finite rank, and so $M(f,n)$ has finite rank. (2)$\Rightarrow$(3) is trivial. (3)$\Rightarrow$(1) If $f$ is identically zero, then we regard it as the homomorphism function into the empty graph. Assume that $f$ is not identically zero. First we prove that $f(K_1)>0$ where $K_1$ is the one-node graph. Regarding $K_1$ as an element of the algebra $\QQ_1$, we get from the weak reflection positivity of $f$ that $f(K_1)=f(K_1^2)\geq 0$. Now assume that $f(K_1)=0$.
Let $F\in\FF_n$ be a graph with $f(F)\neq 0$ for some natural number $n$ and let $e_n\in\FF_n$ denote the graph with $n$ labeled nodes and no edge (the unit element in $\FF_n$). By multiplicativity, $f(e_n)=0$. By weak reflection positivity we get for every real $\lambda$ that $$0\leq f((F-\lambda e_n)^2)=f(F^2)-2\lambda f(F),$$ which is a contradiction, since the right-hand side becomes negative for a suitable choice of $\lambda$. Replacing $f$ by $f/f(K_1)^{|V(F)|}$, we may assume that $f$ is normalized. The matrix $E(f)$ is positive semidefinite with finite rank. This implies that the sequence $f(K_2^n),~~n=0,1,2,\dots$ is the moment sequence of some random variable $X$ whose values are from a finite set. It follows that there is a number $d>0$ such that $|f(K_2^n)|\leq d^n$ for every $n$. By Theorem \[chr-t2\], this implies that there is a $[-d,d]$-graphon $W$ such that $f(F)=t(F,W)$ for all graphs $F$. Let $(W_0,W_1,\dots)\in\MM(d)$ be the moment function sequence representing $W$. We show that each function $W_i$ is a stepfunction. By Theorem \[step\], it is enough to show that $C(t_{W_i})$ and $B(t_{W_i})$ have finite rank. This will follow if we show that both are submatrices of $M(f,2)$. Let $P_{a;i}\in\GG_2$ denote the path of length $a$ in which each edge is $i$-fold and the two endpoints are labeled by $1$ and $2$. Let $K_{a;i}\in\GG_2$ denote the complete bipartite graph $K_{2,a}$ in which each edge is $i$-fold and the nodes from the color class with two nodes are labeled by $1$ and $2$. It is clear from the definitions that the $\{P_{a;i}|a\geq 2\}\times \{P_{a;i}|a\geq 2\}$ sub-matrix of $M(f,2)$ is identical with $C(t_{W_i})$ and the $\{K_{a;i}|a\geq 0\}\times \{K_{a;i}|a\geq 0\}$ sub-matrix of $M(f,2)$ is identical with $B(t_{W_i})$ for all $i$. This proves that each $W_i$ is a stepfunction. Next we argue that the $W_i$ can be considered stepfunctions with the same steps. For every pair $x,y\in [0,1]$, $W(x,y)$ is a random variable with values in $[-d,d]$.
Let $Y$ be the random variable which is obtained by selecting two random points $x,y$ uniformly from $[0,1]$ and then evaluating the random variable $W(x,y)$. It is clear that $$\E(Y^i)=\int_{[0,1]^2}W_i(x,y)\,dx\,dy=f(K_2^i)=\E(X^i),$$ and thus the distribution of $Y$ is the same as the distribution of $X$, which is concentrated on the finite set $S$. It follows that for almost all pairs $x,y\in [0,1]$ the distribution of $W(x,y)$ is concentrated on $S$ and at such places the first $|S|$ moments $W_1(x,y),\dots,W_{|S|}(x,y)$ of $W(x,y)$ determine all other moments. This means that all of the functions $W_i$ are stepfunctions with the same steps, namely the intersections of the steps of $W_1,W_2,\dots,W_{|S|}$. Thus there is a partition $[0,1]=P_1\cup P_2\cup\dots\cup P_t$ such that the variables $W(x,y)$ are constant on $P_i\times P_j$ for all $1\leq i,j\leq t$. This defines the structure of a randomly weighted graph $H$ on $\{1,2,\dots,t\}$, in which the nodeweights are the sizes of the sets $P_i$, and the edgeweight of $ij$ is the random variable $W(x,y)$ for any $(x,y)\in P_i\times P_j$. It is clear that $f(F)=t(F,W)=\hom(F,H)$ for every graph $F$. The growth rate of connection ranks ----------------------------------- We have seen (Theorem \[THM:FLS\]) that for a graph parameter $f\in\TT_0(d)$, the connection ranks $\rk(M(f,k))$ are bounded by $q^k$ for an appropriate $q$. What can we say about graph parameters in the larger class $\TT_1(d)$? The following theorem gives the answer. \[genfls2\] If $f$ is a weakly reflection positive and multiplicative graph parameter, then $f$ belongs to one of the following three types. [(1)]{} $\rk(M(f,n))=\infty$ for all $n\geq 2$. [(2)]{} $\rk(M(f,n))^{1/n}\to c$ $(n\to\infty)$ with some $c\ge 1$, and there is a weighted graph $H$ such that $f=\hom(.,H)$. [(3)]{} $\rk(M(f,n))^{1/n^2}\to c$ $(n\to\infty)$ with some $c>1$, and there is a proper randomly weighted graph $H$ such that $f=\hom(.,H)$.
\[REM:M1-FIN\] Finiteness of the rank of the first connection matrix $M(f,1)$ is not enough here. In fact, let $W\in\WW$ be a function such that its measure preserving automorphism group (the group of invertible measure preserving maps $\varphi:~[0,1]\to[0,1]$ such that $W(\varphi(x),\varphi(y)) =W(x,y)$) is transitive. (For example, $W(x,y)=|x-y|$ is such a function.) Then $M(t_W,1)$ has rank $1$. However, such functions may be far from being stepfunctions. Before proving Theorem \[genfls2\], we remark that the limiting constants $c$ in (2) and (3) can be described easily, once we know that $f$ is given by a randomly weighted graph. In the case when $f=t(.,H)$ for an ordinary weighted graph $H$, the rank of $M(f,n)$ was described in [@L]. We may assume that $H$ has no twin nodes, since we can identify twin nodes in $H$ without changing $f$. Let us also state a description of the dimension of $\PP_n/f$. \[LEM:OLD-RK\] Let $f=t(.,H)$, where $H$ is a weighted graph without twin nodes. Then [(a)]{} The dimension of $\QQ_n/f$ is equal to the number of non-equivalent maps $[n]\to V(H)$, where two maps $\varphi,\psi$ are equivalent if there is an automorphism $\alpha$ of $H$ such that $\varphi\alpha=\psi$. [(b)]{} The dimension of $\PP_n/f$ is equal to the number of non-equivalent maps $[n]\to V(H)$, where two maps $\varphi,\psi$ are equivalent if there is an isomorphism $\alpha$ between the subgraphs of $H$ induced by ${\rm Rng}(\varphi)$ and ${\rm Rng}(\psi)$ such that $\varphi\alpha=\psi$. Part (a) was proved in [@L]. We prove part (b) in a more general form, for randomly weighted graphs: \[exactrank\] Let $H$ be a randomly weighted graph with edge weights $B_{i,j}$ and node weights $\alpha_i$, let $f=\hom(.,H)$, and let $n$ be a natural number.
Then $\dim(\PP_n/f)$ is equal to the number of different weighted graphs $L$ on the node set $[n]$ with nodeweights $1$ for which there is a function $\varphi:~[n]\to V(H)$ such that each edge weight $\lambda_{i,j}$ of $L$ is an element of the range of $B_{\varphi(i),\varphi(j)}$. [We will not need to extend part (a) of Lemma \[LEM:OLD-RK\] to randomly weighted graphs, but we believe this is possible.]{} Let $f=\hom(.,H)$. Let $\Z^{(2)}_n$ denote the polynomial ring $\R[\{z_{i,j}|1\leq i<j\leq n\}]$, which is isomorphic to the algebra $\PP_n$. Let $A$ denote the set of all possible pairs $(L,\varphi)$ where $L$ is a weighted graph on $\{1,2,\dots,n\}$, $\varphi:\{1,2,\dots,n\}\to V(H)$ is a function such that every edge weight $\lambda_{i,j}$ of $L$ is in the range of $B_{\varphi(i),\varphi(j)}$. To each element $(L,\varphi)\in A$ we introduce the weight $$h(L,\varphi)=\prod_{i=1}^n\alpha_{\varphi(i)}\prod_{1\leq i<j\leq n} P(B_{\varphi(i),\varphi(j)}=\lambda_{i,j}),$$ which is always a positive number. By substituting the definition of moments into the formula (\[homtorwg\]) one obtains that if $p$ is an arbitrary element of $\Z^{(2)}_n$ then $$f(p)=\sum_{(L,\varphi)\in A}h(L,\varphi)p(\lambda)$$ where $p(\lambda)$ denotes the substitution of $z_{i,j}=\lambda_{i,j}$ in the polynomial $p$. Note that these substitutions are not always different for two different elements of $A$, but after sorting the sum according to different substitutions they cannot cancel each other because the weights $h(L,\varphi)$ are all positive. Using Lemma \[rankofbf\] we get that $\dim(\PP_n/f)$ is equal to the number of different labeled weighted graphs $L$ occurring in the first coordinate of the elements of $A$. This is exactly the statement of the lemma. Using this lemma, we can derive bounds on the rank of $M(f,n)=\dim(\QQ_n/f)$, where $f=\hom(.,H)$ for a randomly weighted graph $H$.
Define $$\label{EQ:A-DEF} A(H) = \max\Bigl\{\frac12\sum_{u,v\in V(H)} x_u x_v\log p_{u,v}:~x\ge 0, \sum_{u\in V(H)} x_u=1\Bigr\}.$$ \[LEM:RANK\] Let $H$ be a randomly weighted graph, $f=\hom(.,H)$, and $n\in\N$. Then $$\frac{2^{n^2A(H)}}{p(H)^{2n}} \le \dim(\PP_n/f) \le \dim(\QQ_n/f) \le |V(H)|^n2^{n^2A(H)}.$$ We may assume for convenience that $H$ is normalized. The upper bound follows from an even more careful look at the proof of Theorem \[genfls\], part (1)$\Rightarrow$(2). Each point $x\in[0,1]$ defines a map $\varphi:~[n]\to V(H)$, and $M(x)$ depends on this $\varphi$ only. Each matrix $M(x,i,j)$ is a moment matrix of a random variable with finite range of size $p_{\varphi(i),\varphi(j)}$, and hence it has rank $p_{\varphi(i),\varphi(j)}$. Hence the rank of $M(x)$ is at most $$\rk(M(x)) \le \prod_{1\le i<j\le n} p_{\varphi(i),\varphi(j)}.$$ Let $n_u=|\varphi^{-1}(u)|$ ($u\in V(H)$); then we get $$\rk(M(x)) \le \prod_{u,v\in V(H)\atop u\not=v} p_{u,v}^{\frac12 n_un_v} \prod_{u\in V(H)} p_{u,u}^{\binom{n_u}{2}} \le \prod_{u,v\in V(H)} p_{u,v}^{\frac12 n_un_v}.$$ Here $$\begin{aligned} \log \prod_{u,v\in V(H)} &p_{u,v}^{\frac12 n_un_v} =\frac12\sum_{u,v\in V(H)} n_un_v \log p_{u,v}\\ &= \frac{n^2}{2} \sum_{u,v\in V(H)} \frac{n_u}{n}\frac{n_v}{n} \log p_{u,v} \le n^2 A(H),\end{aligned}$$ and so $\rk(M(x)) \le 2^{n^2 A(H)}$. Since there are at most $|V(H)|^n$ different matrices $M(x)$, the upper bound follows. To prove the lower bound, let $x\in\R^{V(H)}$ be the vector that attains the maximum in (\[EQ:A-DEF\]). Let $n_u$ ($u\in V(H)$) be integers such that $|nx_u-n_u|<1$. Fix a map $\varphi:~[n]\to V(H)$ such that $|\varphi^{-1}(u)|=n_u$ for all $u$.
It is clear that we can create at least $$N=\prod_{u,v\in V(H)\atop u\not=v} p_{u,v}^{\frac12 n_un_v} \prod_{u\in V(H)} p_{u,u}^{\binom{n_u}{2}}$$ different weighted graphs on $[n]$ satisfying the condition of Lemma \[exactrank\] by choosing the edge weights between $\varphi^{-1}(u)$ and $\varphi^{-1}(v)$ independently from the range of $B_{u,v}$. We have $$\begin{aligned} \log N &= \frac12 \sum_{u,v\in V(H)\atop u\not=v} n_un_v\log p_{u,v} + \sum_{u\in V(H)} \binom{n_u}{2} \log p_{u,u}\\ &\ge \frac12 \sum_{u,v\in V(H)} n_un_v\log p_{u,v} - \sum_{u\in V(H)} n_u \log p_{u,u}.\end{aligned}$$ Here $$\begin{aligned} \frac12 &\sum_{u,v\in V(H)} n_un_v\log p_{u,v} - n^2 A(H)\\ &=\frac12\sum_{u,v\in V(H)} n_un_v\log p_{u,v} - n^2 \frac12\sum_{u,v\in V(H)} x_ux_v\log p_{u,v}\\ &= \frac12\sum_{u,v\in V(H)} n_u(n_v -nx_v)\log p_{u,v} + \frac12 \sum_{u,v\in V(H)} (n_u-nx_u)nx_v\log p_{u,v}\\ &\le \sum_{u,v\in V(H)} n_u\log p_{u,v} \le n\log p(H),\end{aligned}$$ and $$\sum_{u\in V(H)} n_u \log p_{u,u} \le n\log p(H),$$ showing that $$\log N \ge n^2 A(H) - 2n\log p(H).$$ By Lemma \[exactrank\], this proves that $$\dim(\PP_n/f)\ge N \ge \frac{2^{n^2A(H)}}{p(H)^{2n}}.$$ [of Theorem \[genfls2\]]{} Let $f$ be a weakly reflection positive and multiplicative graph invariant. Assume that $\rk(M(f,n))<\infty$ for some integer $n\geq 2$. Since $M(f,2)$ is a submatrix of $M(f,n)$, we have that $\rk(M(f,2))<\infty$. By Theorem \[genfls\] we obtain that there is a randomly weighted graph $H$ such that $f=\hom(.,H)$. If $H$ is a weighted graph, then by Lemma \[LEM:OLD-RK\] it follows that $$\frac{|V(H)|^n}{|V(H)|!} \le \dim(\PP_n/f) \le \dim(\QQ_n/f) \le |V(H)|^n,$$ and hence both $\dim(\PP_n/f)^{1/n}$ and $\dim(\QQ_n/f)^{1/n}$ tend to $|V(H)|$.
On the other hand, if $H$ is a proper randomly weighted graph, then by Lemma \[LEM:RANK\] we have $$\frac{2^{A(H)}}{p(H)^{2/n}} \le \dim(\PP_n/f)^{1/n^2} \le \dim(\QQ_n/f)^{1/n^2} \le |V(H)|^{1/n}2^{A(H)},$$ and so both $\dim(\PP_n/f)^{1/n^2}$ and $\dim(\QQ_n/f)^{1/n^2}$ tend to $2^{A(H)}$. [99]{} C. Berg, J.P.R. Christensen, P. Ressel, Positive definite functions on abelian semigroups, [*Mathematische Annalen*]{} 223 (1976) 253–272. C. Berg, P.H. Maserick, Exponentially bounded positive definite functions, [*Illinois Journal of Mathematics*]{} 28 (1984) 162–179. C. Borgs, J. Chayes, L. Lovász: Moments of Two-Variable Functions and the Uniqueness of Graph Limits, [*Geom. Funct. Anal.*]{} [**19**]{} (2010), 1597–1619. C. Borgs, J.T. Chayes, L. Lovász, V.T. Sós, and K. Vesztergombi: Convergent Graph Sequences I: Subgraph frequencies, metric properties, and testing, [*Advances in Math.*]{} [**219**]{} (2008), 1801–1851. P. Diaconis and D. Freedman: The Markov Moment Problem and de Finetti’s Theorem: Part I. P. Erdös, L. Lovász, J. Spencer: Strong independence of graphcopy functions, in: [*Graph Theory and Related Topics*]{}, Academic Press (1979), 165–172. M. Freedman, L. Lovász, A. Schrijver: Reflection positivity, rank connectivity, and homomorphism of graphs, [*Journal of The American Mathematical Society*]{} (to appear). A. Frieze and R. Kannan: Quick approximation to matrices and applications, [*Combinatorica*]{} [**19**]{}, 175–220. F. Hausdorff: Summationsmethoden und Momentfolgen I–II, [*Math. Z.*]{} [**9**]{} (1921), 74–109 and 280–299. R.J. Lindahl, P.H. Maserick, Positive-definite functions on involution semigroups, [*Duke Mathematical Journal*]{} 38 (1971) 771–782. L. Lovász: The rank of connection matrices and the dimension of graph algebras, [*Eur. J. Comb.*]{} [**27**]{} (2006), 962–970. L. Lovász, A. Schrijver: Graph parameters and semigroup functions, [*Europ. J. Comb.*]{} [**29**]{} (2008), 987–1002. L. Lovász, V.T.
Sós: Generalized quasirandom graphs, [*J. Comb. Th. B*]{} [**98**]{} (2008), 146–163. L. Lovász, B. Szegedy: Limits of dense graph sequences, [*J. Comb. Theory B*]{} [**96**]{} (2006), 933–957. L. Lovász, B. Szegedy: Szemerédi’s Lemma for the analyst, [*Geom. Func. Anal.*]{} [**17**]{} (2007), 252–270. L. Lovász, B. Szegedy: Finitely forcible graphons,\ <http://arxiv.org/abs/0901.0929> L. Lovász, B. Szegedy: Limits of compact decorated graphs (manuscript) [^1]: Research supported by OTKA grant No. 77780 and ERC Advanced research grant No. 227701 [^2]: AMS Subject Classification: Primary 05C99, Secondary 82B99
--- abstract: 'This talk reviews some recent results on the NLL resummed small-$x$ gluon splitting function, as determined including renormalisation-group improvements. It also discusses the observation that the LO, NLO, NNLO, etc. hierarchy for the gluon splitting function breaks down not when ${\alpha_s}\ln 1/x \sim 1$ but rather for ${\alpha_s}\ln^2 1/x \sim 1$.' author: - | Gavin P. Salam\ LPTHE, Universities of Paris VI & VII and CNRS,\ 75252 Paris 75005, France. title: 'Fall and rise of the gluon splitting function[^1]' --- LPTHE–P04–04\ hep-ph/0407368 [^1]: Talk presented at DIS 2004, Štrbské Pleso, Slovakia, April 2004, and at the Eighth Workshop on Non-Perturbative Quantum Chromodynamics, Paris, France, June 2004.
--- abstract: 'We study the superconducting phase transition, both in a graphene bilayer and in graphite. For that purpose we derive the mean-field effective potential for a stack of graphene layers presenting hopping between adjacent sheets. For describing superconductivity, we assume there is an on-site attractive interaction between electrons and determine the superconducting critical temperature as a function of the chemical potential. This displays a dome-shaped curve, in agreement with previous results for two-dimensional Dirac fermions [@Smith2009; @Nunes2005]. We show that the hopping between adjacent layers increases the critical temperature for small values of the chemical potential. Finally, we consider a minimal model for graphite [@Pershoguba2010] and show that the transition temperature is higher than that for the graphene bilayer for small values of chemical potential. This might explain why intrinsic superconductivity is observed in graphite.' author: - 'Lizardo H. C. M. Nunes' - 'A. L. Mota' - 'E. C. Marino' bibliography: - 'apssamp.bib' title: 'Superconductivity in graphene stacks: from the bilayer to graphite' --- Introduction {#Introduction} ============ Graphene is a one-atom-thick layer of graphite [@Neto_RMP09]. The carbon atoms in each layer are arranged in a honeycomb lattice and the tight-binding energy presents a band structure such that the valence and conduction bands touch precisely in the vertices of two inequivalent Dirac cones in the Brillouin zone. The electronic excitations appearing in the conduction band have the dispersion relation of a relativistic massless particle and their properties, accordingly, will be determined by the Dirac equation. Graphene is believed to be the parent compound of most of the carbon-based systems and their electric, magnetic and elastic properties all originate from the properties of graphene. Interestingly, several carbon-based compounds present superconductivity. 
For instance, the graphite intercalation compounds (GICs) [@Csanyi2005], which consist of graphene sheets alternating with alkali layers, mainly acting as charge reservoirs, become superconducting with transition temperatures ranging from below 1 K for KC$_{ 8 }$ to 11.5 K for CaC$_{ 6 }$ [@Weller2005; @Emery2005; @Belash2002; @Hannay1965]; some fullerides display critical temperatures as high as 33 K when applied pressure or the chemical composition increases the lattice parameter [@Gunnarson1997]; and there are reports of room temperature local superconductivity within isolated “grains” in highly oriented pyrolytic graphite (HOPG) [@Kopolevich2007], as well as of superconductivity with critical temperature $ T_{ c } \sim $ 25 K in thin samples [@Esquinazi2008]. Moreover, a fully saturated hydrocarbon derived from a single graphene sheet, called graphane, is predicted to be a high-temperature electron-phonon superconductor exhibiting a critical temperature of above 90 K [@Savini2010]. Although several theoretical mechanisms have been proposed as possible candidates to produce superconductivity [@Meng2010; @Pathak2010; @Kopnin2008; @Baskaran2002; @Jiang2008; @Black-Schaffer2007; @Roy2010], intrinsic superconductivity has never been observed in graphene; it could only be induced by proximity effects, where a supercurrent was propagated through a superconductor-normal-superconductor (SNS) Josephson junction, with graphene as the N region [@Heersche2007]. Nevertheless, the stability of the superconducting phase has been investigated in graphene [@Pellegrino2010; @Khveshchenko2009; @Honerkamp2008] and the symmetry of the order parameter in the honeycomb lattice was identified; if there is an on-site net attractive interaction between electrons in the honeycomb lattice, the usual $ s $-wave singlet pairing is favoured [@Zhao2007].
When nearest-neighbour attractions are taken into account, an exotic combination of $ s $-wave and $ p $-wave superconducting order parameters is possible [@Uchoa2007]. In the context of the $ t $-$ J $-$ U $ model, $ f $-wave triplet-pairing and $ d + i d $ singlet-pairing instabilities are found to emerge away from half-filling [@Honerkamp2008]. Previously, some of us have investigated the phase diagram of a quasi-two-dimensional interacting Dirac electrons system forming Cooper pairs in the singlet state, which is a suitable model to describe a stack of uncoupled superconducting graphene sheets, and we have found a quantum critical point connecting the normal and superconducting phases at a certain critical coupling [@Marino2006]. For low applied magnetic fields, we have found a critical field as a function of the superconducting interaction [@Marino2007]. In those previous investigations, the variation of the chemical potential was not taken into account; however, the carrier density of graphene can be controlled by the electric field effect upon applying a bias voltage. Therefore, in the present paper we investigate the effect of the chemical potential as a free parameter of our model and we also consider the effect of the out-of-plane hopping between adjacent graphene sheets. In order to describe graphite, we consider the minimal model with the electron tunneling between the nearest sites in the plane and out of the plane. We have found that the superconducting critical temperature is enhanced at small values of the chemical potential for graphite when compared to the values predicted by us for the graphene bilayer, which might explain why intrinsic superconductivity has been observed in HOPG. The paper is organized as follows: In Sec. \[TheModel\] we present the model Hamiltonian for the graphene bilayer, the dispersion relation is calculated and the effective potential (free energy) is derived. In Sec.
\[T=0\] the superconducting phase diagram at $ T = 0 $ is obtained by analyzing the minima conditions for the effective potential for several values of the interaction and the hopping between adjacent graphene sheets. In Sec. \[Tneq0\] we calculate the superconducting critical temperature as a function of the chemical potential for several values of the hopping parameter between layers. The results represent an upper bound for the Kosterlitz-Thouless transition. In Sec. \[graphite\] our results for the superconducting phase diagram are extended to an infinite number of coupled graphene layers considering the electron tunneling amplitudes between the nearest sites in the plane and out of the plane. Sec. \[Conclusions\] is devoted to the conclusion. Graphene bilayer {#TheModel} ================ Consider a stack of $ N $ graphene layers with a hopping term between adjacent planes, where the upper layer has its $ B $ sublattice on top of sublattice $ A $ of the underlying layer (Bernal stacking), as can be seen in Fig.\[FigBilayer\]. The Hamiltonian of each coupled layer is described by the following [@Nuno_Book_2007], $$\begin{aligned} H_{ t, l } & = & - \mu \sum_{ {\bf k }, \sigma } \left[ a^{ \dagger }_{ {\bf k }, \sigma, l } a_{ {\bf k }, \sigma, l } + b^{ \dagger }_{ {\bf k }, \sigma, l } b _{ {\bf k }, \sigma, l } \right] \nonumber \\ & & -t \sum_{ {\bf k }, \sigma } s_{ k } \left[ a^{ \dagger }_{ {\bf k }, \sigma, l } b_{ {\bf k }, \sigma, l } + a^{ \dagger }_{ {\bf k }, \sigma, l+1 } b_{ {\bf k }, \sigma, l+1 } \right] + \mbox{h.c. } \nonumber \\ & & -t_{ \bot } \sum_{ {\bf k }, \sigma } a^{ \dagger }_{ {\bf k }, \sigma, l } b_{ {\bf k }, \sigma, l+1 } + \mbox{h.c.} \, , \label{EqUnbiasedBilayer}\end{aligned}$$ where the index $ l = 1, \cdots, N $ characterizes the different planes and $ \mu $ is the chemical potential. The second line in the RHS of the above equation describes the hopping between electrons of different sublattices within a graphene sheet, while the third line describes the hopping between layers. The hopping parameter is about $ t \approx 2.8 $ eV and $ t_{ \bot } \approx t / 10 $. The operators $ a^{ \dagger }_{ i, \sigma, l } = \sum_{ k } e^{ i { \bf k } \cdot { \bf r}_{ i } } \, a^{ \dagger }_{ {\bf k }, \sigma, l } $ and $ b^{ \dagger }_{ i, \sigma, l } = \sum_{ {\bf k } } e^{ i { \bf k } \cdot { \bf r}_{ i } } \, b^{ \dagger }_{ {\bf k }, \sigma, l } $ create, respectively, an electron on site $ i $ with spin $ \sigma $ on sublattice $ A $ and an electron on site $ i $ with spin $ \sigma $ on sublattice $ B $ of plane $ l $. In the honeycomb lattice we have $ s_{ k } = 1 + e^{ i { \bf k } \cdot { \bf a }_{ 1 } } + e^{ i { \bf k } \cdot { \bf a }_{ 2 } } $, where $ {\bf a }_{ 1 } = a \hat{ e }_{ x } $ and $ 2 {\bf a }_{ 2 } = a \left( \hat{ e }_{ x } - \sqrt{ 3 } \hat{ e }_{ y } \right) $, as shown in Fig.\[FigBilayer\]. The lattice parameter is $ a = $ 2.46 Å for graphene. ![Lattice structure of two adjacent graphene layers (after [@Nuno_Book_2007]).[]{data-label="FigBilayer"}](Bilayer.jpg){width="49.00000%"} We add an on-site attractive interaction between the electrons within each graphene layer forming Cooper pairs in the $ s $-wave state. The interaction term is given by $$\begin{aligned} H_{ \mbox{\scriptsize{SC}}, l } & = & - g \sum_{ {\bf k}, {\bf k}', \sigma} \left( a^{\dagger}_{ {\bf k }, \sigma, l }a^{\dagger}_{ - {\bf k }, -\sigma, l} a_{ - {\bf k }', -\sigma, l }a_{ {\bf k }', \sigma, l } \right. \nonumber \\ & & + \left.
b^{\dagger}_{ {\bf k }, \sigma, l }b^{\dagger}_{ - {\bf k }, -\sigma, l } b_{ - {\bf k }', -\sigma, l }b_{ {\bf k }', \sigma, l } \right) \, , \label{EqHPairing}\end{aligned}$$ with $ g > 0 $. The origin of the interaction is to be determined by some underlying microscopic theory, which is not considered here. However, the symmetry of the gap originated from this interaction is consistent with the isotropic $ s $-wave symmetry gap observed in some GICs [@Kremer2007]. Introducing the following Nambu fermion field, $$\Psi^{ \dagger }_{ { \bf k }, l } = \left( \psi^{ \dagger }_{ { \bf k }, l }, \psi^{ \dagger }_{ { \bf k }, l + 1 } \right) \, , \label{EqNambu}$$ where $$\psi^{ \dagger }_{ { \bf k }, l } = \left( a^{ \dagger }_{ { \bf k }, \uparrow , l} \, b^{ \dagger }_{ { \bf k }, \uparrow , l} \, a_{ -{ \bf k }, \downarrow , l} \, b_{ -{ \bf k }, \downarrow , l} \right) \, , \label{EqNambu2}$$ one can rewrite the combined Hamiltonian $ H_{ t, l } + H_{ \mbox{\scriptsize{SC}}, l } $ at the mean-field level, $$H_{ \mbox{\scriptsize{MF}} } = \sum_{ \bf k } \Psi^{ \dagger }_{ { \bf k }, l } \, \mathcal{A} \, \Psi_{ { \bf k }, l } - \frac{ \Delta \Delta^{ * } }{ g } \, \label{EqHMF}$$ where, by definition, the superconducting order parameter is $$- \frac{ \Delta }{ g } = \sum_{ \bf k } \langle a^{\dagger}_{ {\bf k }, \uparrow, l }a^{\dagger}_{ - {\bf k }, -\downarrow, l} \rangle = \sum_{ \bf k } \langle b^{\dagger}_{ {\bf k }, \uparrow, l }b^{\dagger}_{ - {\bf k }, -\downarrow, l} \rangle \label{EqDefGap}$$ and the 8 $ \times $ 8 matrix $ \mathcal{A} $ in Eq. 
(\[EqHMF\]) is given by $$\begin{aligned} \mathcal{A} = \begin{pmatrix} \mathcal{ A }_{ 1 } & \mathcal{ A }_{ 1 2 } \\ \mathcal{ A }_{ 2 1 } & \mathcal{ A }_{ 2 } \end{pmatrix} \, , \label{EqMatrixA}\end{aligned}$$ with $$\mathcal{A}_{ 1 } = \mathcal{A}_{ 2 } = \begin{pmatrix} - \mu & - t s_{ k } & 0 & \Delta \\ - t s^{ * }_{ k } & -\mu & \Delta & 0 \\ 0 & \Delta^{ * } & \mu & t s^{ * }_{ k } \\ \Delta^{ * } & 0 & t s_{ k } & \mu \\ \end{pmatrix} \label{EqMatrixA1}$$ and $$\mathcal{A}_{ 1 2 } = \mathcal{A}^{ T }_{ 2 1 } = \begin{pmatrix} 0 & -t_{ \bot } & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & t_{ \bot } \\ 0 & 0 & 0 & 0 \\ \end{pmatrix} \, . \label{EqA12}$$ From $ H_{ \mbox{\scriptsize{MF}} } $ in Eq. (\[EqHMF\]), follows the dispersion relation, $$E_{ k } = \pm \sqrt{ | \Delta |^{2 } + E^{ 2 }_{ \mbox{\scriptsize{BL}} } } \, , \label{EqDispersion}$$ where $$E_{ \mbox{\scriptsize{BL}} } = \pm \sqrt{ t^{2} | s_{ k } |^{ 2 } + \left( \frac{ t_{ \bot } }{ 2 } \right)^{ 2 } } \pm \frac{ t_{ \bot } }{ 2 } - \mu \, . \label{EqDispersionBL}$$ ![Band structure for (a) $ \Delta = \mu = 0 $, (b) $\Delta = 0 $ and $ \mu = 0.2 t $, (c) $ \Delta = 0.1 t $ and $ \mu = 0 $ (d) $ \Delta = 0.1 t $ and $ \mu = 0.2 t $. Energy is given in units of $ t $ and $ t_{ \bot } = 0.2 t $.[]{data-label="FigBands"}](Bands.pdf){width="47.00000%"} Let us neglect the hopping term between planes for a moment and consider only the normal state of the system at the Fermi level, which means $ t_{ \bot } = \Delta = \mu = 0 $. In that case, $ \mathcal{A} $ has eight eigenvalues, but only two are undistinguished: $ \pm t \sqrt{ | s_{ \bf k } |^{ 2 } }$, which is exactly the dispersion relation of a single layer for a given spin state [@Nuno_Book_2007; @Neto_RMP09]. For $ \Delta \neq 0 $, $\mu \neq 0 $, we have $ \pm \sqrt{ | \Delta| ^{2 } + \left( \mu \mp t |s_{ k } | \right)^{2} } $ for each layer, which is the spectrum for the $ s $-wave pairing [@Uchoa2007]. 
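As a numerical sanity check of the dispersion relation (\[EqDispersion\]) with (\[EqDispersionBL\]), the sketch below diagonalizes the bilayer Bogoliubov-de Gennes matrix. It is written in the equivalent convention in which the on-site singlet pairing multiplies the identity in the combined sublattice/layer space, with the hole block written as $\mu - H_{N}(k)$ (using $s_{-k}=s_{k}^{*}$); all parameter values and the sample momentum are invented for illustration:

```python
import numpy as np

# Illustrative parameters in units of t; a is the lattice parameter
t, tp, mu, Delta, a = 1.0, 0.1, 0.2, 0.1, 1.0

def s_k(k):
    a1 = np.array([a, 0.0])
    a2 = np.array([a / 2, -np.sqrt(3) * a / 2])
    return 1 + np.exp(1j * k @ a1) + np.exp(1j * k @ a2)

def bdg_bands(k):
    s = s_k(k)
    # 4x4 bilayer hopping in the basis (a_l, b_l, a_{l+1}, b_{l+1})
    HN = np.zeros((4, 4), dtype=complex)
    HN[0, 1] = HN[2, 3] = -t * s
    HN[0, 3] = -tp                      # Bernal coupling a_l -- b_{l+1}
    HN = HN + HN.conj().T
    # BdG matrix: on-site singlet pairing enters as Delta times the identity;
    # the hole block is mu - H_N(k) because s_{-k} = s_k^*
    A = np.block([[HN - mu * np.eye(4), Delta * np.eye(4)],
                  [Delta * np.eye(4), mu * np.eye(4) - HN]])
    return np.sort(np.linalg.eigvalsh(A))

def closed_form(k):
    # Eight branches of E_k = +-sqrt(|Delta|^2 + E_BL^2), Eq. (EqDispersion)
    s = abs(s_k(k))
    ebl = [s1 * np.sqrt(t**2 * s**2 + (tp / 2) ** 2) + s2 * tp / 2 - mu
           for s1 in (1, -1) for s2 in (1, -1)]
    return np.sort([s3 * np.sqrt(Delta**2 + e**2) for e in ebl for s3 in (1, -1)])

k = np.array([0.7, -1.3])
match = np.allclose(bdg_bands(k), closed_form(k))
```

The eight numerically obtained eigenvalues coincide with the closed-form branches, reproducing the band structure shown in Fig. \[FigBands\].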
If we take into account the hopping term between planes, in the absence of superconductivity, we obtain $ \pm | E_{ \mbox{\scriptsize{BL}} } |$. In particular, at $ \mu = 0 $, the four energy bands along three directions in the first Brillouin zone for $ \Delta = \mu = 0 $ can be seen in Fig. \[FigBands\].a, which is the same plot shown for the unbiased graphene bilayer in [@Nuno_Book_2007]. The eight distinct energy bands in the normal state are shown in Fig. \[FigBands\].b. (It should be noticed that, for this particular choice of parameters, the chemical potential is sitting right at the bottom of the upper band.) As expected, the system is gapped in the superconducting state and the four energy bands at $ \mu = 0 $ are shown in Fig \[FigBands\].c. Finally, the eight energy bands for nonzero gap and chemical potential are shown in Fig. \[FigBands\].d. The graphene dispersion relation has six Dirac points at the corners of the first Brillouin zone; however, only two of them are non-equivalent. The continuum limit of our model Hamiltonian is obtained expanding Eq. (\[EqHMF\]) in the vicinity of the Dirac points $ {\bf K } = - 4 \pi / 3 a \, \hat{ e }_{ x } $ and $ {\bf K }' = 4 \pi / 3 a \, \hat{ e }_{ x } $, $$H^{ \mbox{\scriptsize{CL}} }_{ \mbox{\scriptsize{MF}}, l } = \sum_{ \alpha } \int \frac{ d^{ 2 } k }{ \left( 2 \pi \right)^{ 2 } } \, \Psi^{ \dagger }_{ \alpha, l } ( k ) \, \mathcal{A}_{ \alpha } \, \Psi_{ \alpha, l } ( k ) - \frac{ \Delta \Delta^{ * } }{ g } \, , \label{EqHCL}$$ where $ \alpha = K , K' $ and $ \mathcal{ A }_{ \alpha } $ is obtained replacing $ t s_{ k } $ by $ - v_{ \rm{ F } } \left( k_{ x } - i k_{ y } \right) $ and $ - v_{ \rm{ F } } \left( k_{ x } + i k_{ y } \right) $ in Eq.(\[EqMatrixA1\]) for $ K $ and $ K' $ respectively, with $ \hbar = 1 $ and $ v_{ \rm{ F } } = \sqrt{ 3 } t a / 2 $. 
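A quick numerical check (with $t = a = 1$; purely illustrative values) that $s_k$ vanishes at the Dirac point ${\bf K}$ and that the tight-binding factor linearizes as $t\,|s_{K+q}| \approx v_{\rm F}|q|$ with $v_{\rm F}=\sqrt{3}\,t a/2$, which is the substitution used in the continuum limit above:

```python
import numpy as np

t, a = 1.0, 1.0                         # illustrative units
vF = np.sqrt(3) * t * a / 2
K = np.array([-4 * np.pi / (3 * a), 0.0])   # Dirac point K

def s_k(k):
    a1 = np.array([a, 0.0])
    a2 = np.array([a / 2, -np.sqrt(3) * a / 2])
    return 1 + np.exp(1j * k @ a1) + np.exp(1j * k @ a2)

gap_at_K = abs(s_k(K))                  # should vanish: s_K = 0
q = 1e-3 * np.array([np.cos(0.4), np.sin(0.4)])   # small deviation from K
ratio = t * abs(s_k(K + q)) / (vF * np.linalg.norm(q))
```

For small $|q|$ the ratio approaches 1 independently of the direction of $q$, confirming the isotropic linear (Dirac) dispersion near ${\bf K}$.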
The partition function in the complex time representation is written as $$\begin{aligned} \mathcal{ Z } & = & \frac{ 1 }{ \mathcal{ Z }_{ 0 } } \int \mathcal{ D } \Psi^{ * } \mathcal{ D } \Psi \exp \left\{ \sum_{ l = 1 }^{ N } \int_{ 0 }^{ \beta } d \tau \, L^{ \mbox{\scriptsize{CL}} }_{ \mbox{\scriptsize{MF}}, l } \right\} \, , \label{EqZ}\end{aligned}$$ where $\mathcal{ Z }_0 $ is the vacuum functional, $ \beta = 1 / k_{ B } T $ ($ k_{B }$ is the Boltzmann constant) and $$L^{ \mbox{\scriptsize{CL}} }_{ \mbox{\scriptsize{MF}}, l } = \sum_{ \alpha, \sigma } \, \int \frac{ d^{ 2 } k }{ \left( 2 \pi \right)^{ 2 } } \, \left( \psi^{ \dagger }_{ \alpha, \sigma, l} i \partial_{ \tau } \psi_{ \alpha, \sigma, l} - H^{ \mbox{\scriptsize{CL}} }_{ \mbox{\scriptsize{MF}}, l } \right) \, , \label{EqLCL}$$ with $ \psi^{ \dagger }_{ \alpha, \sigma, l} = \left[ a^{ \dagger }_{ \alpha, \sigma, l} ( k ), b^{ \dagger }_{\alpha, \sigma, l } ( k ) \right] $ representing the spinorial fields appearing in the continuum limit of the tight-binding graphene Hamiltonian density. Integrating over the fermion fields, we find that the partition function is proportional to $$\left( \frac{ \mbox{det} \, \mathcal{ A }'_{ \alpha, n } }{ \mbox{det} \, \mathcal{ A }'_{ \alpha, n } [ \Delta = 0 ] } \right)^{ 2 N } \, , \label{EqFermionIntegral}$$ where $ \mathcal{ A }'_{ \alpha, n } = - i \omega_{ n } { \bf 1 } + \mathcal{ A }_{ \alpha } $ is a function of the Matsubara frequencies for fermions, $\omega_n = (2 n+1)\pi T$, and $ { \bf 1} $ is the 8 $ \times $ 8 identity matrix; we set $ k_{ B } = 1 $ hereafter for the sake of simplicity.
Finally, redefining the coupling $ g = \lambda / N $, the “effective potential” per bilayer for each Dirac point will be $$\begin{aligned} V_{ \rm eff} & = & 2 \frac{ \Delta \Delta^{*} }{ \lambda } - \frac{ 1 }{ \beta } \sum_{ n } \left[ \int \frac{ d^{ 2 } k }{ \left( 2 \pi \right)^{ 2 } } \right. \nonumber \\ & & \left. \ln \left( \frac{ \mbox{det} \, \mathcal{ A }'_{ K, n } }{ \mbox{det} \, \mathcal{ A }'_{ K, n } [ \Delta = 0 ] } \right) \right] . \label{EqVeff}\end{aligned}$$ This is the leading order in a $ 1 / N $ expansion and becomes the exact result for $ N \rightarrow \infty $. In the next section, we analyze the conditions for the appearance of superconductivity at zero temperature provided by our mean-field effective potential. The superconducting instabilities at $ T = 0 $ {#T=0} ----------------------------------------------- We shall study the minima of the effective potential. The occurrence of superconductivity corresponds to the existence of nonzero solutions of the order parameter which minimize the effective potential. Taking the derivative of $ V_{\rm eff} $ with respect to the order parameter and summing over the Matsubara frequencies, we obtain $$\begin{aligned} V'_{\rm eff}( T ) & = & \Delta^{ * } \left[ \frac{2}{\lambda } - \frac{ 1 }{ 2 } \sum_{ j = 1 }^{ 4 } \int \frac{ d^{ 2 } k }{ \left( 2 \pi \right)^{ 2 } } \right. \nonumber \\ & & \left. \frac{ 1 } { \sqrt{ | \Delta |^{2 } + \xi_{ j }^{ 2 } } } \tanh \left( \frac{ \beta }{ 2 } \sqrt{ | \Delta |^{2 } + \xi_{ j }^{ 2 } } \right) \right] \, , \label{EqV'eff}\end{aligned}$$ where $$\xi_{ j } = \pm \sqrt{ v_{ \rm F }^{2} k^{ 2 } + \left( \frac{ t_{ \bot } }{ 2 } \right)^{ 2 } } \pm \frac{ t_{ \bot } }{ 2 } - \mu \, .
\label{EqXi}$$ The nonzero solutions for $ | \Delta | $ are obtained by setting to zero the expression within brackets in Eq. (\[EqV'eff\]) above, which provides a self-consistent gap equation. In particular, at zero temperature, we get $$V'_{\rm eff}( 0 ) = \Delta^{ * } \left[ \frac{2}{\lambda } - \frac{ 1 }{ 2 } \sum_{ j = 1}^{ 4 } \int \frac{ d^{ 2 } k }{ \left( 2 \pi \right)^{ 2 } } \, \frac{ 1 } { \sqrt{ | \Delta |^{2 } + \xi_{ j }^{ 2 } } } \right] \, . \label{EqV'effT0a}$$ Introducing a large momentum cutoff $\Lambda/v_{ \rm{F}}$, we can integrate Eq. (\[EqV'effT0a\]) over $ k $, $$\begin{aligned} V'_{\rm eff} & = & \Delta^{ * } \left\{ \frac{2}{\lambda } - \frac{ 1 }{ 2 \alpha } \sum_{ a, b } \left[ \sqrt{ | \Delta |^{2 } + \xi^{ 2 }_{ a b } } - \sqrt{ | \Delta |^{2 } + \epsilon^{ 2 }_{ a b } } \right. \right. \nonumber \\ & + & \left. \left. \left( \mu - b \frac{t_{ \bot } }{ 2 } \right) \ln \left( \frac{ \sqrt{ | \Delta |^{2 } + \xi^{ 2 }_{ a b } } + \xi_{ a b } } { \sqrt{ | \Delta |^{2 } + \epsilon^{ 2 }_{ a b } } + \epsilon_{ a b } } \right) \right] \right\} \, , \label{EqV'effT0}\end{aligned}$$ where $ \alpha = 2 \pi v^{ 2 }_{ \rm F } $, $ a, b = \pm 1$, $$\xi_{ ab } = a \sqrt{ \Lambda^{ 2 } + \left( \frac{ t_{ \bot }}{ 2 } \right)^{ 2 } } + b \frac{ t_{ \bot }}{ 2 } - \mu \, , \label{EqXiab}$$ and $ \epsilon_{ a b} = \xi_{ a b }( \Lambda = 0 )$. In particular, for $ \mu = t_{ \bot } = 0 $, the nonzero solution for the superconducting gap is [@Marino2006] $$\Delta = \frac{ \alpha \lambda }{ 2 } \left( \frac{ \Lambda^{ 2 } }{ \alpha^{ 2 } } - \frac{ 1 }{ \lambda^{ 2 } } \right) \, , \label{EqDelta00}$$ for $ \lambda > \alpha / \Lambda $, which establishes a quantum critical point for the onset of superconductivity in the system at the critical coupling $ \lambda_c = \alpha / \Lambda $.
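The closed-form result (\[EqDelta00\]) offers a convenient check of the zero-temperature gap equation. The sketch below (illustrative, not part of the original derivation) solves Eq. (\[EqV'effT0a\]) by bisection in units $ v_{\rm F} = \Lambda = 1 $, so that $ \alpha = 2\pi $ and $ \lambda_c = 2\pi $; the chosen supercritical coupling $ \lambda = 2\lambda_c $ is arbitrary:

```python
import numpy as np

def trap(y, x):
    """Composite trapezoid rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def gap_rhs(delta, mu=0.0, tperp=0.0, vF=1.0, Lam=1.0, nk=4000):
    """RHS of Eq. (EqV'effT0a): (1/2) sum_j int d^2k/(2 pi)^2 [|D|^2 + xi_j^2]^(-1/2)."""
    k = np.linspace(0.0, Lam / vF, nk)
    total = 0.0
    for a in (1.0, -1.0):                      # the four branches xi_j of Eq. (EqXi)
        for b in (1.0, -1.0):
            xi = a * np.sqrt((vF * k)**2 + (tperp / 2)**2) + b * tperp / 2 - mu
            total += trap(k / np.sqrt(delta**2 + xi**2), k) / (2 * np.pi)
    return 0.5 * total

def solve_gap(lam, mu=0.0, tperp=0.0, lo=1e-6, hi=5.0):
    """Bisection for the nonzero root of 2/lambda = gap_rhs(Delta)."""
    f = lambda d: 2.0 / lam - gap_rhs(d, mu, tperp)
    for _ in range(60):                        # the bracket [lo, hi] keeps the root
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return 0.5 * (lo + hi)

alpha = 2 * np.pi            # alpha = 2 pi v_F^2 with v_F = 1
lam = 2 * alpha              # lambda = 2 lambda_c, since lambda_c = alpha / Lambda
delta_closed = (alpha * lam / 2) * (1 / alpha**2 - 1 / lam**2)   # Eq. (EqDelta00)
print(abs(solve_gap(lam) - delta_closed) < 1e-3)    # True (Delta = 0.75 here)
```

For this coupling the closed form gives $ \Delta = 0.75\,\Lambda $, and the numerical root of the gap equation reproduces it.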
Factoring out $ \Lambda $ in Eq. (\[EqV'effT0\]), it can be re-expressed as $$\begin{aligned} V'_{\rm eff} & = & \frac{ \Delta^{ * } }{ \tilde{ \alpha } } \left\{ \frac{2}{\lambda'} - \frac{ 1 }{ 2 } \sum_{ a, b } \left[ \sqrt{ | \tilde{ \Delta } |^{2 } + \tilde{ \xi }^{ 2 }_{ a b } } - \sqrt{ | \tilde{ \Delta } |^{2 } + \tilde{ \epsilon }^{ 2 }_{ a b } } \right. \right. \nonumber \\ & + & \left. \left. \left( \tilde{ \mu } - b \frac{ \tilde{t}_{ \bot } }{ 2 } \right) \ln \left( \frac{ \sqrt{ | \tilde{ \Delta } |^{2 } + \tilde{ \xi }^{ 2 }_{ a b } } + \tilde{ \xi }_{ a b } } { \sqrt{ | \tilde{ \Delta } |^{2 } + \tilde{ \epsilon }^{ 2 }_{ a b } } + \tilde{ \epsilon }_{ a b } } \right) \right] \right\} \, , \label{EqV'effT02}\end{aligned}$$ where $ \lambda' = \lambda / \lambda_{ c }$ and the tilde indicates that the quantity is divided by $ \Lambda $. Since all the nonzero solutions for the gap are given by the expression between curly brackets above, which does not depend on any material-specific input, our results are suitable to describe any planar Dirac-fermion system with a hopping between adjacent sheets, with $ \Lambda $ as the single free parameter of our model. Therefore, in the following we present our numerical results for $ \Delta $ in terms of the parameter $ \Lambda $. Notice that the cutoff is always provided by the lattice in condensed matter systems. Indeed, we have $ \Lambda = 2 \pi \hbar v_{ \rm F }/ a $ as an upper bound for the energy cutoff, and since $ a $ is the smallest distance scale, $ \Lambda $ becomes a natural high-energy cutoff. In fact, this frequently happens in condensed matter. A familiar example, in the case of conventional phonon-mediated superconductivity, is the Debye frequency (energy), the natural cutoff that emerges in BCS theory.
Moreover, we also constrain ourselves to positive values of the chemical potential up to $ \mu / \Lambda = 0.9 $, given the half-bandwidth of $ \Lambda $. ![The superconducting gap as a function of the chemical potential for $ t_{ \bot } = 0, 0.3$ and $ 0.5 $. The inset shows the same plot for a smaller range of the chemical potential. $ \lambda / \lambda_{ c } = 0.8 $ and all the other quantities are given in units of $ \Lambda $.[]{data-label="FigGap0_X_mu"}](Gap0xMu.pdf){width="47.00000%"} The case of $ t_{ \bot } = 0 $ and finite $ \mu $ with different values of the interaction coupling has been exhaustively investigated by some of us [@Nunes2010]. The plots of the superconducting gap as a function of $ \mu $ for $ \lambda / \lambda_{ c } = 0.8 $ are shown in Fig. \[FigGap0\_X\_mu\]. Starting at $ \mu / \Lambda = 0 $, the system is in the normal state, since $ \lambda < \lambda_{ c } $. As $ \mu / \Lambda $ increases, $ \Delta_0 / \Lambda $ displays a dome-shaped curve: the gap grows up to a maximum value at an optimal chemical potential and then decreases as $ \mu / \Lambda $ increases even further. Notice that the system is not quantum critical: the curve vanishes exponentially as $ \mu \rightarrow 0 $, hence superconductivity persists for any $ \mu > 0 $. Our results are consistent with [@Fukushima2007], which also obtains a dome-shaped plot of $ \Delta $ for relativistic interacting particles, as can be seen in Fig. 1 of their paper (choice of parameters I, referred to as the [*weak-coupling case*]{}). An interesting result is obtained as we increase the value of $ t_{ \bot } $, as can be seen in the inset of Fig. \[FigGap0\_X\_mu\]. For small values of the chemical potential, as the out-of-plane hopping between layers increases, we see that the superconducting gap also increases for the same value of the chemical potential.
Indeed, even for $ \mu = 0 $, for which there is no superconducting gap when $ t_{ \bot } = 0 $, given that $ \lambda < \lambda_{ c } $, there is a nonzero value of $ \Delta $ for $ t_{ \bot } / \Lambda = 0.3 $ or $ 0.5 $, indicating that the system is in the superconducting state. Therefore, the hopping between layers favors the appearance of superconductivity. As shall be seen in the next section, since the energy gap and the superconducting critical temperature tend to be proportional quantities, this result also shows that the superconducting critical temperature increases with the hopping between layers for small values of the chemical potential. Superconducting phase at finite temperatures {#Tneq0} -------------------------------------------- In this section we calculate the superconducting phase diagram at finite temperatures. [*A priori*]{}, the nonzero solutions for $\Delta$ are supposed to hold only in the $ N \rightarrow \infty $ limit at a finite temperature because, otherwise, they are ruled out by the Coleman-Mermin-Wagner-Hohenberg theorem [@mw]. This limit corresponds to a physical situation where the three-dimensionality of the system is explicitly taken into account. For finite values of $ N $ and $ T \neq 0 $, there is an underlying Berezinskii-Kosterlitz-Thouless (BKT) transition [@kt], below which phase coherence is found for a nonzero $ \Delta $. The actual superconducting transition occurs at $T_{BKT} \leq T_c $. However, it can be shown that $T_{BKT} \stackrel{N\rightarrow\infty}{\longrightarrow} T_c$ [@babaev]. This clearly indicates that, in spite of the fact that we may have a nonzero superconducting gap at $ T = T_{ c } $, only in a truly three-dimensional system will phase coherence develop at the same temperature at which the modulus of the order parameter becomes nonzero, as determined by the gap equation.
Therefore, the $ T_{ c } $ calculated in this section may be regarded as a mean-field upper bound on the critical temperature for the BKT transition, which sets the actual temperature for the appearance of superconductivity in the $ N \rightarrow \infty $ limit. We start by considering the gap equation provided by Eq. (\[EqV'eff\]). Making the change of variables $ x = ( v_{ F } k )^{ 2 } $, we get $$\frac{ 1 }{\lambda } - \frac{ 1 }{ 8 \alpha } \sum_{ a, b } \int_{ 0 }^{ \Lambda^{ 2 } } dx \; \frac{ 1 } { E_{ a b}( x ) } \tanh \left[ \frac{ E_{ a b}( x ) }{ 2 T } \right] = 0 \, , \label{EqGapEquationTneq0}$$ where $$E_{ a b}( x ) \equiv \sqrt{ | \Delta |^{2 } + \xi_{ a b }^{ 2 }( x ) } \, , \label{EqEab}$$ with $ \xi_{ a b }( x ) $ given by Eq. (\[EqXiab\]) with $ \Lambda^{ 2 } $ replaced by $ x $. From Eq. (\[EqGapEquationTneq0\]), we calculate the superconducting critical temperature $ T_{ c } $ by setting $ \Delta = 0 $ at $ T = T_{ c } $ in the above expression. ![The superconducting critical temperature as a function of the chemical potential for $ t_{ \bot } = 0, 0.3 $ and $ 0.5 $. The inset shows the same plot for a smaller range of the chemical potential. $ \lambda / \lambda_{ c } = 0.8 $ and all the other quantities are given in units of $ \Lambda $.[]{data-label="FigTc_X_mu"}](TcxMu_bilayer.pdf){width="47.00000%"} In Section \[T=0\] we have found a dome-shaped superconducting gap as a function of the chemical potential for $ \lambda < \lambda_{ c } $ and several values of $ t_{ \bot } $. Since the energy gap and the superconducting critical temperature $ T_{ c }$ are proportional, we expect to find a dome-shaped plot for $ T_{ c } $ as a function of $ \mu $ as well. In fact, as can be seen in Fig. \[FigTc\_X\_mu\], our numerical results for the superconducting critical temperature present the characteristic dome experimentally observed in several compounds, like the 1111 pnictides and the cuprate superconductors.
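Numerically, $ T_{ c } $ follows from Eq. (\[EqGapEquationTneq0\]) at $ \Delta = 0 $ by a one-dimensional root search in $ T $. For $ \mu = t_{ \bot } = 0 $ the $ x $-integral can be done in closed form, giving $ \alpha / \lambda = 2 T_{ c } \ln \cosh ( \Lambda / 2 T_{ c } ) $, which provides an independent consistency check. A minimal sketch, in units $ v_{\rm F} = \Lambda = k_B = 1 $ and with an illustrative supercritical coupling (these parameter choices are ours, not the paper's):

```python
import numpy as np

def trap(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def gap_kernel(T, lam, mu=0.0, tperp=0.0, alpha=2 * np.pi, Lam=1.0, nx=4000):
    """1/lambda minus the integral of Eq. (EqGapEquationTneq0) at Delta = 0."""
    x = np.linspace(0.0, Lam**2, nx)
    total = 0.0
    for a in (1.0, -1.0):
        for b in (1.0, -1.0):
            xi = a * np.sqrt(x + (tperp / 2)**2) + b * tperp / 2 - mu
            E = np.maximum(np.abs(xi), 1e-12)   # tanh(E/2T)/E -> 1/(2T) as E -> 0
            total += trap(np.tanh(E / (2 * T)) / E, x)
    return 1.0 / lam - total / (8 * alpha)

def critical_temperature(lam, mu=0.0, tperp=0.0, lo=0.01, hi=2.0):
    """Bisection on T; the integral decreases monotonically with T."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if gap_kernel(lo, lam, mu, tperp) * gap_kernel(mid, lam, mu, tperp) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

alpha = 2 * np.pi
lam = 2 * alpha                      # illustrative: lambda = 2 lambda_c
Tc = critical_temperature(lam)
# mu = t_perp = 0 reduction: alpha/lambda = 2 T_c ln cosh(Lambda / 2 T_c)
print(abs(alpha / lam - 2 * Tc * np.log(np.cosh(1 / (2 * Tc)))) < 1e-3)   # True
```

The same `critical_temperature` routine, evaluated on a grid of $ \mu $ and $ t_{ \bot } $ values, is the kind of computation behind dome-shaped plots such as Fig. \[FigTc\_X\_mu\].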
A dome-like structure of the superconducting phase for two-dimensional Dirac fermions has been previously obtained in [@Smith2009], where the superconducting critical temperature also presents a dome at intermediate filling fractions, surrounded by the normal phase for fillings close to unity or zero, which is consistent with our results. Also, a dome for $ T_{ c } $ as a function of hole concentration has been previously obtained in a brief letter by some of us [@Nunes2005] for a relativistic version of the spin-fermion Hamiltonian [@Kampf1994], used to describe the Cu-O planes in the cuprates. Those results and the phase diagram presently calculated suggest that Dirac fermions may play a relevant role in the description of cuprates and iron pnictides. Indeed, it has been shown that Dirac points appear at the intersection of the nodes of the d-wave superconducting gap and the 2D Fermi surface in the high-$T_c$ cuprate superconductors, and that the low-energy excitations correspond exclusively to these points [@cuprates]. Also, it has been experimentally found that the iron pnictides [@pnic1; @pnic2] present electronic excitations whose properties are governed by the Dirac equation. Theoretical results also support the existence of Dirac electrons in the pnictides [@Direlpnic; @Direlpnic1]. As also suggested in the former section, the inset of Fig. \[FigTc\_X\_mu\] shows that, for small values of $ \mu $, $ T_{ c } $ increases as the $ t_{ \bot } $ parameter is increased, indicating that the hopping between layers favors the appearance of superconductivity in the system. As shall be seen in the next section, the same feature is observed when we take into account the nearest-neighbor out-of-plane hopping between adjacent layers, which is the case for graphite. Graphite ======== In this section, we calculate the superconducting phase diagram of many coupled graphene layers at a finite chemical potential.
To simplify the problem, we consider a minimal model in which only the electron tunneling amplitudes between the nearest sites in the plane, $ t $, and out of the plane, $ t_{ \bot } $, are retained. The same approach was employed in [@Pershoguba2010], as briefly explained below. Consider a Bernal-stacked graphene bilayer described by the following Hamiltonian in the vicinity of each non-equivalent Dirac point [@Nuno_Book_2007], $$H_{ \mbox{\scriptsize{BL}} } = \sum_{ {\bf k}, \sigma } \Phi^{ \dagger }_{ {\bf k}, \sigma } \mathcal{ B }_{\bf k} \Phi_{ {\bf k}, \sigma } \, , \label{EqHBilayer}$$ where the above 4 $ \times $ 4 matrix $ \mathcal{ B }_{\bf k} $ is given by $$\begin{aligned} \mathcal{ B }_{\bf k} = \begin{pmatrix} v_{ F } \, {\bf k } \cdot \vec{\sigma } & \mathcal{ B }_{ 1 2 } \\ \mathcal{ B }_{ 2 1 } & v_{ F } \, {\bf k } \cdot \vec{\sigma } \end{pmatrix} \, , \label{EqMatrixB}\end{aligned}$$ the vector $ \vec{ \sigma } = \left( \sigma_{ x }, \sigma_{ y } \right) $ is written in terms of the well-known Pauli matrices, the matrix $ \mathcal{ B }_{ 1 2 } $ is $$\mathcal{B}_{ 1 2 } = \mathcal{B}^{ T }_{ 2 1 } = \begin{pmatrix} 0 & t_{ \bot } \\ 0 & 0 \end{pmatrix} \, , \label{EqB12}$$ and $ \Phi^{ \dagger }_{ {\bf k }, \sigma } = \left( \phi^{\dagger}_{ {\bf k }, \sigma, 1} , \phi^{\dagger}_{ {\bf k }, \sigma, 2} \right) $, with $ \phi^{\dagger}_{ {\bf k }, \sigma, j} = \left( a^{ \dagger }_{ { \bf k }, \sigma , j} \, b^{ \dagger }_{ { \bf k }, \sigma , j} \right) $, where $ j = 1, 2 $ denotes the layer index. The model Hamiltonian for graphite is taken to describe an infinite number of graphene layers coupled by the hopping between adjacent sheets.
Therefore, introducing the operator $$\tilde{ \Phi }^{ \dagger }_{ {\bf k }, \sigma } = \left( \cdots \, \phi^{\dagger}_{ {\bf k }, \sigma, l -1} \, \phi^{\dagger}_{ {\bf k }, \sigma, l } \, \phi^{\dagger}_{ {\bf k }, \sigma, l + 1} \, \cdots \right) \, , \label{EqPhiGraphite}$$ the Hamiltonian becomes $$H_{ \mbox{\scriptsize{Gr}} } = \sum_{ {\bf k}, \sigma } \tilde{ \Phi }^{ \dagger }_{ {\bf k}, \sigma } \mathcal{ C }_{\bf k} \tilde{ \Phi }_{ {\bf k}, \sigma } \, , \label{EqHGrafite}$$ where $$\begin{aligned} \mathcal{ C }_{\bf k} = \begin{pmatrix} \ddots & & & & \\ & v_{ F } \, {\bf k } \cdot \vec{\sigma } & \mathcal{ B }_{ 1 2 } & & \\ & \mathcal{ B }_{ 2 1 } & v_{ F } \, {\bf k } \cdot \vec{\sigma } & \mathcal{ B }_{ 1 2 } & \\ & & \mathcal{ B }_{ 2 1 } & v_{ F } \, {\bf k } \cdot \vec{\sigma } & \\ & & & & \ddots \\ \end{pmatrix} \, . \label{EqMatrixC}\end{aligned}$$ Introducing the momentum $ k_{ z } $ in the $ z $ direction, it is possible to re-express the Hamiltonian for graphite, taking the nearest-neighbor hopping between adjacent layers in the momentum representation, in terms of a 4 $ \times $ 4 matrix similar to $ \mathcal{ B }_{\bf k } $ in Eq. (\[EqMatrixB\]) [@Pershoguba2010], $$H_{ \mbox{\scriptsize{Gr}} } = \sum_{ {\bf k}, k_{ z }, \sigma } \Phi^{ \dagger }_{ {\bf k}, k_{ z }, \sigma } \mathcal{ D }_{ {\bf k}, k_{ z } } \Phi_{ {\bf k}, k_{ z }, \sigma } \, , \label{EqHGraphite2}$$ where $$\begin{aligned} \mathcal{ D }_{ {\bf k}, k_{ z } } = \begin{pmatrix} v_{ F } \, {\bf k } \cdot \vec{\sigma } & 2 \, \mathcal{ B }_{ 1 2 } \cos k_{ z } d \\ 2 \, \mathcal{ B }_{ 2 1 } \cos k_{ z } d & v_{ F } \, {\bf k } \cdot \vec{\sigma } \end{pmatrix} \, , \label{EqMatrixD}\end{aligned}$$ and $ d $ is the distance between layers.
For this minimal model, the dispersion relation is given by $$E_{ \mbox{\scriptsize{Gr}} } = \pm \sqrt{ | v_{ \mbox{\scriptsize{F}} } \, {\bf k } |^{ 2 } + \left( { t_{ \bot } \cos k_{ z } d } \right)^{ 2 } } \pm { t_{ \bot } \cos k_{ z } d } \, \label{EqDispersionGraphite}$$ and, for $ k_{ z } d = \pi / 2 $, we recover the Dirac-type dispersion found in graphene. ![The superconducting critical temperature as a function of the chemical potential for (a) $ t_{ \bot } = 0.3 $ and (b) $ t_{ \bot } = 0.5 $ for both the graphene bilayer (solid line) and graphite (dotted line). The inset in each panel shows the same plot for a smaller range of the chemical potential. $ \lambda / \lambda_{ c } = 0.8 $ and all the other quantities are given in units of $ \Lambda $.[]{data-label="FigTc_X_mu_Graphite"}](TcxMu_BilayerGraphite.pdf){width="40.00000%"} Taking into account the attractive interaction forming Cooper pairs within each graphene layer, as seen in Eq. (\[EqHPairing\]), and introducing the operator $$\tilde{ \Psi}^{ \dagger }_{ {\bf k }, \sigma } = \left( \cdots \, \psi^{\dagger}_{ {\bf k }, \sigma, l -1} \, \psi^{\dagger}_{ {\bf k }, \sigma, l } \, \psi^{\dagger}_{ {\bf k }, \sigma, l + 1} \, \cdots \right) \, , \label{EqPsiGraphiteSC}$$ where $ \psi^{\dagger}_{ {\bf k }, \sigma, l } $ is given by Eq.
(\[EqNambu2\]), the model Hamiltonian which describes superconducting graphite in a mean-field approximation becomes $$H_{ \mbox{\scriptsize{Gr}}, \mbox{\scriptsize{SC}} } = \sum_{ {\bf k}, \sigma } \tilde{ \Psi }^{ \dagger }_{ {\bf k}, \sigma } \mathcal{ E }_{\bf k} \tilde{ \Psi }_{ {\bf k}, \sigma } \, , \label{EqHSCGrafite}$$ where $$\begin{aligned} \mathcal{ E }_{\bf k} = \begin{pmatrix} \ddots & & & & \\ & \mathcal{ A }_{ 1 } & \mathcal{ A }_{ 1 2 } & & \\ & \mathcal{ A }_{ 2 1 } & \mathcal{ A }_{ 2 } & \mathcal{ A }_{ 1 2 } & \\ & & \mathcal{ A }_{ 2 1 } & \mathcal{ A }_{ 1 } & \\ & & & & \ddots \\ \end{pmatrix} \, , \label{EqMatrixE}\end{aligned}$$ with $ \mathcal{A}_{ 1 } = \mathcal{A}_{ 2 } $ and $ \mathcal{A}_{ 1 2 } = \mathcal{A}^{ T }_{ 2 1 } $ given by Eqs. (\[EqMatrixA1\]) and (\[EqA12\]), respectively. Accordingly, it is possible to re-express the Hamiltonian for superconducting graphite in terms of an 8 $ \times $ 8 matrix, similar to $ \mathcal{ A } $ in Eq. (\[EqMatrixA\]), $$H_{ \mbox{\scriptsize{Gr}} , \mbox{\scriptsize{SC}} } = \sum_{ {\bf k}, k_{ z }, \sigma } \Psi^{ \dagger }_{ {\bf k}, k_{ z }, \sigma } \mathcal{ F }_{ {\bf k}, k_{ z } } \Psi_{ {\bf k}, k_{ z }, \sigma } \, , \label{EqHSCGraphite2}$$ where $$\begin{aligned} \mathcal{ F }_{ {\bf k}, k_{ z } } = \begin{pmatrix} \mathcal{ A }_{ 1 } & 2 \mathcal{ A }_{ 1 2 } \cos k_{ z } d \\ 2 \mathcal{ A }_{ 2 1 } \cos k_{ z } d & \mathcal{ A }_{ 2 } \\ \end{pmatrix} \label{EqMatrixF}\end{aligned}$$ and the dispersion is given by the eight eigenvalues $$E_{ \pm }( {\bf k } , k_{ z } ) = \pm \sqrt{ | \Delta |^{ 2 } + \left( E_{ \mbox{\scriptsize{Gr}} } - \mu \right)^{ 2 } } \, , \label{EqDispersionGraphiteSC}$$ with $ E_{ \mbox{\scriptsize{Gr}} } $ given by Eq. (\[EqDispersionGraphite\]).
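As a consistency check of Eq. (\[EqDispersionGraphite\]), the normal-state $ 4 \times 4 $ matrix $ \mathcal{ D }_{ {\bf k}, k_{ z } } $ of Eq. (\[EqMatrixD\]) can be diagonalized numerically; at $ k_{ z } d = \pi / 2 $ the interlayer term drops out and the graphene Dirac cones are recovered. A minimal sketch with illustrative parameter values and $ v_{ F } = 1 $:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def graphite_matrix(kx, ky, kzd, vF=1.0, tperp=0.2):
    """The 4x4 matrix D of Eq. (EqMatrixD); kzd stands for k_z * d."""
    h0 = vF * (kx * sx + ky * sy)
    B12 = np.array([[0, tperp], [0, 0]], dtype=complex)
    c = 2 * np.cos(kzd)
    return np.block([[h0, c * B12], [c * B12.T, h0]])   # B_21 = B_12^T

def E_gr(kx, ky, kzd, vF=1.0, tperp=0.2):
    """The four branches of Eq. (EqDispersionGraphite)."""
    kk = vF * np.hypot(kx, ky)
    tc = tperp * np.cos(kzd)
    return sorted(s1 * np.sqrt(kk**2 + tc**2) + s2 * tc
                  for s1 in (+1, -1) for s2 in (+1, -1))

kx, ky = 0.3, 0.1
ev = np.sort(np.linalg.eigvalsh(graphite_matrix(kx, ky, 0.7)))
print(np.allclose(ev, E_gr(kx, ky, 0.7)))                 # True
# At k_z d = pi/2 the interlayer term vanishes: graphene Dirac cones
ev = np.sort(np.linalg.eigvalsh(graphite_matrix(kx, ky, np.pi / 2)))
vk = np.hypot(kx, ky)
print(np.allclose(ev, [-vk, -vk, vk, vk]))                # True
```

The superconducting spectrum of Eq. (\[EqDispersionGraphiteSC\]) then follows from these branches as $ \pm \sqrt{ | \Delta |^2 + ( E_{ \mbox{\scriptsize{Gr}} } - \mu )^2 } $.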
Therefore, the self-consistent equation for the superconducting gap becomes $$\begin{aligned} \frac{2}{\lambda } & = & \frac{ 1 }{ 2 } \sum_{ j = 1}^{ 4 } \int_{ - \frac{ \pi }{ d } }^{ \frac{ \pi }{ d } } \frac{ d \, d k_{ z } }{ 2 \pi } \, \int \frac{ d^{ 2 } k }{ \left( 2 \pi \right)^{ 2 } } \nonumber \\ & & \hspace{1.25cm} \frac{ 1 } { E_{ + }( {\bf k } , k_{ z } ) } \tanh \left[ \frac{ \beta }{ 2 } E_{ + }( {\bf k } , k_{ z } ) \right] \, , \label{EqDVeff}\end{aligned}$$ where the four values of $ E_{ + }( {\bf k } , k_{ z } ) $, labeled by the index $ j $ in the above expression, are given by $ E_{ \mbox{\scriptsize{Gr}} } $ in Eq. (\[EqDispersionGraphite\]), in analogy to the discussions in the previous sections. We calculate the critical temperature from Eq. (\[EqDVeff\]) and compare our results with the $ T_{ c } $ obtained for the graphene bilayer in the former section. Our results are shown in Fig. \[FigTc\_X\_mu\_Graphite\], and we see that, for a given value of $ t_{ \bot } $, $ T_{ c } $ is not enhanced over the entire range of chemical potential. However, for a given $ t_{ \bot }$, we always find that the critical temperature for graphite is larger than the $ T_{ c } $ obtained for the graphene bilayer at small values of the chemical potential, which demonstrates that the nearest-neighbor hopping between adjacent sheets favors superconductivity in the system. Conclusions {#Conclusions} =========== In conclusion, in the present paper we have derived the effective potential for a stack of graphene layers with a hopping between adjacent sheets and an on-site attractive interaction between electrons in a mean-field approximation.
For a single layer or two adjacent coupled layers of graphene, a remarkable result was obtained for the superconducting critical temperature as a function of the chemical potential: it displays a dome-shaped curve, as experimentally observed in several compounds, like the 1111 pnictides and the cuprate superconductors. This result suggests that Dirac fermions may play a relevant role in the description of cuprates and iron pnictides, which shall be the object of further investigation. Indeed, a dome-like structure of the superconducting phase is in agreement with previous results for strongly interacting two-dimensional Dirac fermions [@Smith2009; @Nunes2005]. As pointed out in [@Smith2009], our results can also be experimentally realized with ultracold atoms in a two-dimensional optical square lattice. Finally, considering a minimal model for graphite, taking into account only the tunneling amplitudes between the nearest sites in the plane and out of the plane [@Pershoguba2010], we have compared the superconducting critical temperatures of graphite and the graphene bilayer. We have seen that the $ T_{ c } $ calculated for graphite is larger than the one for the graphene bilayer at small values of $ \mu $, which might explain why intrinsic superconductivity is observed in HOPG. This work has been supported in part by CNPq, FAPEMIG and FAPERJ. We would like to thank N. M. R. Peres, H. Caldas, and A. H. Castro Neto for discussions on related matters. [400]{} A. H. Castro Neto et al., Rev. Mod. Phys. [**81**]{}, 109 (2009). G. Csányi et al., Nat. Phys. [**1**]{}, 42 (2005). T. E. Weller et al., Nat. Phys. [**1**]{}, 39 (2005). N. Emery et al., Phys. Rev. Lett. [**95**]{}, 087003 (2005). I. T. Belash et al., Synth. Metals [**36**]{}, 283 (2002). N. B. Hannay et al., Phys. Rev. Lett. [**14**]{}, 225 (1965). O. Gunnarsson, Rev. Mod. Phys. [**69**]{}, 575 (1997). Y. Kopelevich, J. Low Temp. Phys. [**119**]{}, 691 (2000); Y.
Kopelevich et al., Physics of the Solid State [**41**]{}, 1959 (1999) \[Fizika Tverd. Tela (St. Petersburg) 41 (1999) 2135\]. P. Esquinazi et al., Phys. Rev. B [**78**]{}, 134516 (2008). G. Savini, A. C. Ferrari and F. Giustino, Phys. Rev. Lett. [**105**]{}, 037002 (2010). Z. Y. Meng et al., Nature [**464**]{}, 847 (2010). S. Pathak, V. B. Shenoy and G. Baskaran, Phys. Rev. B [**81**]{}, 085431 (2010). N. B. Kopnin and E. B. Sonin, Phys. Rev. Lett. [**100**]{}, 246808 (2008). G. Baskaran, Phys. Rev. B [**65**]{}, 212505 (2002). Y. Jiang et al., Phys. Rev. B [**77**]{}, 235420 (2008). A. M. Black-Schaffer and S. Doniach, Phys. Rev. B [**75**]{}, 134512 (2007). B. Roy and I. F. Herbut, Phys. Rev. B [**82**]{}, 035429 (2010). H. B. Heersche, P. Jarillo-Herrero, J. B. Oostinga, L. M. K. Vandersypen, and A. F. Morpurgo, Nature [**446**]{}, 56 (2007). F. M. D. Pellegrino, G. G. N. Angilella and R. Pucci, Eur. Phys. J. B [**76**]{}, 469 (2010). D. V. Khveshchenko, J. Phys.: Condens. Matter [**21**]{}, 075303 (2009). C. Honerkamp, Phys. Rev. Lett. [**100**]{}, 146404 (2008). E. Zhao and A. Paramekanti, Phys. Rev. Lett. [**97**]{}, 230404 (2007). B. Uchoa and A. H. Castro Neto, Phys. Rev. Lett. [**98**]{}, 146801 (2007). E. C. Marino and L. H. C. M. Nunes, Nucl. Phys. B [**741**]{} \[FS\], 404 (2006). E. C. Marino and L. H. C. M. Nunes, Nucl. Phys. B [**769**]{} \[FS\], 275 (2007). Eduardo V. Castro et al., An Introduction to the Physics of Graphene Layers, in [*Strongly Correlated Systems, Coherence and Entanglement*]{} (World Scientific, 2007). R. K. Kremer, J. S. Kim and A. Simon, Carbon Based Superconductors, in [*High Tc Superconductors and Related Transition Metal Oxides*]{} (Springer-Verlag, Berlin Heidelberg, 2007). L. H. C. M. Nunes, R. L. S. Farias and E. C. Marino, [*Superconducting and excitonic quantum phase transitions in doped systems with Dirac electrons*]{} [**ref do cond-mat**]{}. K. Fukushima and K. Iida, Phys. Rev. D [**76**]{}, 054004 (2007). N. D. Mermin and H.
Wagner, Phys. Rev. Lett. [**17**]{}, 1133 (1966); P. C. Hohenberg, Phys. Rev. [**158**]{}, 383 (1967); S. Coleman, Commun. Math. Phys. [**31**]{}, 259 (1973). V. L. Berezinskii, Zh. Eksp. Teor. Fiz. [**59**]{}, 907 (1970); J. Kosterlitz and D. Thouless, J. Phys. C [**6**]{}, 1181 (1973). E. Babaev, Phys. Lett. B [**497**]{}, 323 (2001). L.-K. Lim et al., Eur. Phys. Lett. [**88**]{}, 36001 (2009). L. H. C. M. Nunes and E. C. Marino, Physica B [**378-380**]{}, 704 (2006). A. P. Kampf, Phys. Rep. [**249**]{}, 219 (1994). I. Affleck and J. B. Marston, Phys. Rev. B [**37**]{}, 3774 (1988); Phys. Rev. B [**39**]{}, 11538 (1989); X.-G. Wen and P. A. Lee, Phys. Rev. Lett. [**76**]{}, 503 (1996). Y. Kamihara et al., J. Am. Chem. Soc. [**130**]{}, 3296 (2008). M. Rotter, M. Tegel and D. Johrendt, Phys. Rev. Lett. [**101**]{}, 107006 (2008). P. Richard et al., Phys. Rev. Lett. [**104**]{}, 137001 (2010). C. M. S. da Conceição, M. B. Silva Neto and E. C. Marino, Phys. Rev. Lett. [**106**]{}, 117002 (2011). S. S. Pershoguba and V. M. Yakovenko, Phys. Rev. B [**82**]{}, 205408 (2010).
--- abstract: 'We present a theoretical analysis of the first data on high energy and momentum transfer (hard) quasielastic $C(p,2p)X$ reactions. The cross section of the hard $A(p,2p)X$ reaction is calculated within the light-cone impulse approximation based on the two-nucleon correlation model for the high-momentum component of the nuclear wave function. The nuclear effects due to the modification of the bound nucleon structure and the soft nucleon-nucleon reinteraction in the initial and final states of the reaction, with and without color coherence, have been considered. The calculations including these nuclear effects show that the distribution of the bound proton light-cone momentum fraction $(\alpha)$ shifts towards small values ($\alpha < 1$), an effect which was previously derived only within the plane wave impulse approximation. This shift is very sensitive to the strength of the short range correlations in nuclei. Also calculated is an excess of the total longitudinal momentum of the outgoing protons. The calculations are compared with data on the $C(p,2p)X$ reaction obtained from the EVA/AGS experiment at Brookhaven National Laboratory. These data show an $\alpha$-shift in agreement with the calculations. The comparison also allows us to single out the contribution from short-range nucleon correlations. The obtained strength of the correlations is in agreement with the values previously obtained from electroproduction reactions on nuclei.' address: - 'School of Physics and Astronomy, Sackler Faculty of Exact Sciences, Tel Aviv University, Ramat Aviv 69978, Israel' - 'Department of Physics, Florida International University, Miami, FL 33199, U.S.A' - 'Department of Physics, Pennsylvania State University, University Park, PA 16802, U.S.A' author: - 'I. Yaron, J. Alster, L. Frankfurt, E. Piasetzky' - 'M. Sargsian' - 'M. Strikman' title: 'Investigation of the high momentum component of nuclear wave function using hard quasielastic A(p,2p)X reactions.'
--- Introduction {#intro} ============ One of the important signatures of quark-gluon structure in the nucleon-nucleon interaction at short distances is the observed strong energy dependence ($\sim s^{-10}$) of the wide angle pp elastic differential cross section at $s\geq 12 ~GeV^2$, where $s$ is the square of the pp c.m. energy. Despite the ongoing debate on the validity of perturbative QCD in this energy region [@hex; @Isgur_Smith; @Rady] or the debate on the relevance of a particular mechanism of subnucleon interaction (i.e., quark interchange [@BCL79; @FGST79; @RS95], three-gluon exchange [@LLP; @BoSt], reggeon-type contributions [@BoSo]), it is commonly accepted that the power-law $s$-dependence of the elastic cross section signals the onset of the hard dynamics of the quark-gluon interaction. In this paper we address the question of what happens when wide angle pp scattering takes place inside the nucleus, i.e., the incident proton is scattered off a bound proton. If this reaction had the same $\sim s^{-10}$ energy dependence as the cross section of free $pp$ scattering, one may expect the incoming proton to scatter preferentially off a bound proton with a larger initial momentum aligned with the direction of the incoming proton [@FS88; @FLFS]. This kinematic condition corresponds to $pp$ scattering with smaller $s$ and therefore a larger scattering cross section. Thus, if nuclear effects do not alter the genuine $s$-dependence of the $pp$ cross section, the high momentum transfer $p+A\to p + p + X$ reaction would preferentially select the high momentum components of the nuclear wave function. Due to the short-range nature of the strong interaction, the high internal momenta in the nucleus are generated mainly by short-range NN correlations. Therefore, at sufficiently high energies and high momentum transfers one expects to probe the short-range properties of the nucleus.
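The size of this kinematic preference is easy to estimate numerically. The sketch below computes the invariant $s$ for a beam proton striking a bound proton with a given longitudinal momentum and weights the configurations with the $\sim s^{-10}$ behavior of the elastic cross section; the beam momentum of 6 GeV/c and the internal momentum of 0.3 GeV/c are illustrative values, not the exact experimental kinematics:

```python
import math

M = 0.938  # proton mass, GeV

def s_invariant(p_beam, p_target_z):
    """s = (p1 + p2)^2 for a beam proton incident on a target proton
    carrying longitudinal momentum p_target_z (positive = along the beam)."""
    E1 = math.hypot(p_beam, M)
    E2 = math.hypot(p_target_z, M)
    return (E1 + E2)**2 - (p_beam + p_target_z)**2

p_beam = 6.0                       # illustrative beam momentum, GeV/c
s0 = s_invariant(p_beam, 0.0)      # stationary target proton
s_fwd = s_invariant(p_beam, +0.3)  # bound proton moving along the beam
s_bwd = s_invariant(p_beam, -0.3)  # bound proton moving against the beam

print(round(s0, 2))                # 13.15  (GeV^2)
print(s_fwd < s0 < s_bwd)          # True: forward-moving target lowers s
# Relative weight implied by the ~s^-10 elastic cross section:
print((s0 / s_fwd)**10 > 10)       # True: order-of-magnitude enhancement
```

Even a modest internal momentum aligned with the beam thus enhances the elementary cross section by an order of magnitude, which is the mechanism behind the selectivity discussed above.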
In Refs.[@FS88; @FLFS], within the plane wave impulse approximation (PWIA), the authors calculated the cross section of high momentum transfer $A(p,2p)X$ reactions and observed a strong sensitivity to the high momentum component of the nuclear wave function. Motivated by the recent measurements of high-momentum transfer pA reactions at Brookhaven National Laboratory (BNL) [@kn:I101], we carried out a detailed analysis of the high-momentum transfer $A(p,2p)X$ reaction, investigating specifically the competing nuclear effects not discussed previously. These effects may obscure the sensitivity found within the PWIA [@FLFS]. Our main goal is to see whether these reactions probe short-range correlations (SRC) and to assess their sensitivity to the dynamical structure of these correlations. The structure of the paper is as follows. In Chapter 2 we outline the basic theoretical framework for the calculation of the high-energy wide angle quasielastic $A(p,2p)X$ reaction. We also discuss the nuclear effects which can compete with the expected signatures of the scattering from SRC. In Chapter 3 we present the predictions of the model described in Chapter 2. Chapter 4 describes briefly the EVA experiment at BNL. The calculations are compared with the data obtained in this experiment in Chapter 5. In Chapter 6 we summarize the results of our study. The Basic Theoretical Framework {#II} =============================== In quasi-elastic (QE) scattering a projectile is elastically scattered from a single bound “target” nucleon in the nucleus while the rest of the nucleus acts as a spectator. A schematic presentation of (p,2p) QE scattering is given in Fig. 1.
\[Fig.1\] **Kinematics** {#IIa} --------------- $p_{A}=(E_{A},\vec{p}_{A})$, $p_{1}=(E_{1},\vec{p}_{1})$, $p_{3}=(E_{3},\vec{p}_{3})$, $p_{4}=(E_{4},\vec{p}_{4})$ and $p_{R}=(E_{R},\vec{p}_{R})$ are the four-momenta of the target nucleus, the incoming proton, the scattered proton, the ejected proton and the recoil nucleus, respectively. For simplicity, $p_{A}$ and $p_{R}$ are not shown in Figure 1. Using the variables defined in the figure, the Mandelstam variables are: $$\begin{aligned} s & = & (p_3+p_4)^{2} \ ; \ t = (p_1-p_3)^{2}. \label{kinst}\end{aligned}$$ The high-momentum transfer primary process in the $A(p,2p)X$ quasi-elastic reaction is hard $pp$ elastic scattering. Since the general predictions are based on the implication of the strong $s$-dependence ($\sim 1/s^{10}$) of the hard elastic $pp$ cross section, we limit our calculations to high energy and high momentum transfer kinematics where the $1/s^{10}$-dependence is observed experimentally for $pp$ scattering off a hydrogen target. Thus, our calculations are limited to $s\gtrsim 12 \ GeV^2$ and $\theta_{cm}\sim 90^0$. The missing energy ($E_m$) for the $A(p,2p)X$ reaction is given by $E_m$ = $E_{1}$ + $E_A$ $-$ $E_3$ $-$ $E_4$ $-$ $E_{A-1}$. The available high energy $A(p,2p)X$ data have a missing energy resolution of about 240 MeV [@kn:I101]. Therefore, the calculations which we compare with the data are integrated over a wide range of missing energy. This integration simplifies the calculations, as will be discussed below. Plane Wave Impulse Approximation {#IIb} -------------------------------- A clear interpretation of the quasi-elastic measurements is possible in the Plane Wave Impulse Approximation (PWIA). Within this approximation it is possible to separate nuclear properties from the reaction mechanism.
In high energy scattering the reaction evolves near the light cone $\tau = t-z\sim 1/(E+p_z)\ll t+z$, where z is the direction of the incident proton and $E$, $p_z$ are the energy and leading longitudinal momentum of the high energy particles involved in the scattering. Thus it is natural to describe the reaction in the light cone reference frame (similar to high energy deep-inelastic scattering from a hydrogen target; see e.g. [@Feynman]). Within the light cone plane wave impulse approximation the cross section of the quasielastic $A(p,2p)X$ reaction can be represented as a convolution of the elementary elastic $pp$ scattering cross section off a bound nucleon and the four-dimensional light cone spectral function [@FS88]: $$\begin{aligned} \frac{d^{6}\sigma}{(d^{3}p_{3}/2E_{3})(d^{3}p_{4}/2E_{4}) } & & = \sum\limits_{Z}{1\over 4j_{pA}}{{\vert M_{pp} \vert}^{2} \over (2\pi)^2}\cdot {P_{A}(\alpha,p_{t}^{2},p_{R+}) \over \alpha^{2}} = \nonumber \\ & & \nonumber \\ & & = \sum\limits_{Z}{2\over \pi}\sqrt{s^{2}-4m^{2}s} \frac{d\sigma}{dt}^{pp}(s,t) \cdot {P_{A}(\alpha,p_{t}^{2},p_{R+})\over A\cdot\alpha} \label{pwia}\end{aligned}$$ where $$\begin{aligned} p_2 & = & p_3 + p_4 - p_1 \ ; \ p_{t} = p_{3}^t + p_{4}^t \nonumber \\ \alpha & = & \alpha_{4} + \alpha_{3} - \alpha_{1} \ ; \ \alpha_{i} = A{p_{i-}\over P_{A-}} \equiv A{E_{i}-p_{i}^z \over E_{A}-P_{A}^z}. \label{kinvar}\end{aligned}$$ The superscripts $t$ and $z$ denote the transverse ($x,y$) and longitudinal directions with respect to the incoming proton momentum $\vec p_{1}$. The $+$ and $-$ indices denote the energy and longitudinal components of four-momenta in the light cone reference frame [^1]. The variable $\alpha$ defined in Eq.(\[kinvar\]) describes the light cone momentum fraction of the nucleus carried by the target nucleon, normalized in such a way that a nucleon at rest has $\alpha=1$.
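The definition of $\alpha$ in Eq.(\[kinvar\]) can be checked numerically. The sketch below is illustrative only: it uses GeV units, neglects nuclear binding (so that $E_A = A\,m$ and a nucleon at rest has exactly $\alpha = 1$), and the chosen momenta are not the EVA kinematics.

```python
import math

M_P = 0.938        # proton mass, GeV
A = 12             # carbon target
E_A = A * M_P      # target at rest; binding neglected so a nucleon at rest has alpha = 1

def alpha(E, pz):
    """Light cone momentum fraction of Eq. (kinvar):
    alpha_i = A*(E_i - p_i^z)/(E_A - P_A^z), with the nucleus at rest (P_A^z = 0)."""
    return A * (E - pz) / E_A

def on_shell_energy(pz, pt=0.0, m=M_P):
    return math.sqrt(m * m + pz * pz + pt * pt)

# A proton at rest carries alpha = 1 by construction.
print(alpha(M_P, 0.0))
# A bound proton moving toward the beam (pz = +0.3 GeV/c) has alpha < 1,
# while one moving against the beam has alpha > 1.
print(alpha(on_shell_energy(0.3), 0.3))
print(alpha(on_shell_energy(-0.3), -0.3))
```

This makes explicit why the $(s\alpha)^{-10}$ weighting discussed later favors $\alpha<1$: protons moving along the beam direction carry a smaller light cone fraction.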
Here $j_{pA}$ is the invariant flux with respect to the nucleus, while $M_{pp}$ and $\frac{d\sigma}{dt}^{pp}$ are the invariant amplitude and cross section for elastic $pp$ scattering. The light cone spectral function represents the probability of finding the target nucleon with light cone momenta ($\alpha$, $p_{t}$) times the probability that the residual nuclear system has a momentum component $p_{R+} = E_{R} + p_{R}^{z}$. The spectral function is normalized as follows [@FS88]: $$\int {p_{A-}\over 2A}P_{A}(\alpha,p_{t}^{2},p_{R+}){d\alpha\over \alpha} d^{2}p_{t}dp_{R+} = A.$$ **The Light Cone Spectral Function** {#IIc} ------------------------------------ The integration over a wide range of the missing energy allows us to use the following approximations for the spectral function: For target proton momenta below the Fermi sea level ($p_{2}< p_{Ferm}\sim 250~MeV/c$) we use the nonrelativistic limit of the light cone spectral function [@FS81; @FS88]: $$P_{A}(\alpha,p_{t}^{2},p_{R+}) \approx {1\over 2}n(p_{2})\cdot \delta(p_{R+}-(\sqrt{M_{A-1}^{2}+p_{2}^{2}}-p_{2}^{z})), \label{mf}$$ where $\alpha \approx 1 - p_{2}^z/m$ and $\vec p_{2} = \vec p_3 + \vec p_4-\vec p_1$ are the missing momentum components of the reaction. $n(p)$ is the momentum distribution of nucleons calculated within the mean field approximation.
For the momentum range $p_{Ferm} < p_{2} < 0.7~GeV/c$ we assume the dominance of two-nucleon short-range correlations, which allows us to model the spectral function as follows [@FS88; @DFSS]: $$\begin{aligned} P_{A}(\alpha,p_{t}^{2},p_{R+}) & \approx & \int {A^{2}\over 2p_{A-}}a_{2}(A)\cdot \rho_{2}^n\left ({2\alpha\over (A-\beta)},(\vec p_{t}+{\alpha\over (A-\beta)}\vec p_{(A-2)t})^{2}\right ) \cdot \rho_{A-2}(\beta,p_{(A-2)t}^{2})\cdot \nonumber \\ & & \nonumber \\ & & \delta \left (p_{R+}-{m^{2}+(\vec p_{(A-2)t}+\vec p_{t})^{2}\over m (A -\alpha-\beta)} - {M_{A-2}^{2}+p_{(A-2)t}^{2}\over m \beta}\right ){d\beta\over \beta} d^2p^{t}_{(A-2)}, \label{lcsf2n}\end{aligned}$$ where ($\beta,p_{(A-2)t}^{2}$) and $\rho_{A-2}$ are the light cone momentum and the density matrix of the recoiling $(A-2)$ system. The parameter $a_{2}(A)$ is the probability of finding two-nucleon correlations in the nucleus A, and $\rho_{2}^n$ is the density matrix of the correlated pair, which we set equal to the light cone density matrix of the deuteron [@FS81]: $$\rho_{2}^n(\alpha,p_{t}^{2}) = {\Psi_{D}^{2}(k)\over 2-\alpha} \sqrt{m^{2}+k^{2}} \ ; \ k = \sqrt{{m^{2}+p_{t}^{2}\over \alpha (2-\alpha)}-m^{2}} \ ; \ (0 < \alpha < 2). \label{hmlc}$$ Note that the factorization of the nuclear density matrix into the correlation and $(A-2)$ density matrices is specific to the short-range two-nucleon correlation approximation. In this approximation it is assumed that the singular character of the $NN$ potential at short distances (the existence of a repulsive core) defines the main structure of the nucleon momentum distribution in the SRC, and that it is less affected by the collective interaction with the $A-2$ nuclear system.
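The internal pair momentum $k$ entering Eq.(\[hmlc\]) is a simple function of $(\alpha, p_t)$ and can be sketched directly (GeV units; a minimal check, not the full density matrix, which would also require the deuteron wave function $\Psi_D$):

```python
import math

M_P = 0.938  # proton mass, GeV

def k_internal(alpha, pt):
    """Relative momentum in the light cone pair, Eq. (hmlc):
    k = sqrt((m^2 + pt^2)/(alpha*(2 - alpha)) - m^2), defined for 0 < alpha < 2."""
    if not 0.0 < alpha < 2.0:
        raise ValueError("alpha must lie in (0, 2)")
    return math.sqrt((M_P ** 2 + pt ** 2) / (alpha * (2.0 - alpha)) - M_P ** 2)

# k vanishes when the pair shares the light cone momentum equally (alpha = 1, pt = 0)
# and grows symmetrically as alpha moves away from 1, since alpha*(2-alpha) is
# symmetric about alpha = 1.
print(k_internal(1.0, 0.0))
print(k_internal(0.8, 0.0), k_internal(1.2, 0.0))
```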
Notice that the expression in Eq.(\[hmlc\]) is the light cone analog of the approximated spectral function used in Ref.[@CSFS], where the validity of the two-nucleon correlation approximation is demonstrated by comparing the prediction of the nonrelativistic analogue of Eq.(\[lcsf2n\]) with exact calculations of the spectral function of the $^3He$ nucleus and of infinite nuclear matter. To obtain the density matrix of the recoiling $(A-2)$ system, additional physical assumptions are required. However, the fact that we are interested in the cross section integrated over a wide range of the missing energy allows us to simplify Eq.(\[lcsf2n\]) by neglecting the momentum of the recoiling $(A-2)$ system (SRC at rest approximation): $$\rho_{A-2}(\beta,p_{(A-2)t}^{2})= (A-2)\cdot\delta(A-2-\beta)\cdot \delta(p_{(A-2)t}^{2}). \label{srcrest}$$ Inserting Eq.(\[srcrest\]) into Eq.(\[lcsf2n\]) one obtains the following expression for the light cone spectral function in the high missing momentum range: $$P_{A}(\alpha,p_{t}^{2},p_{R+}) \approx {A^{2}\over 2p_{A-}}a_{2}(A)\cdot \rho_{2}^n(\alpha,p_{t}^{2})\cdot \delta(p_{R+}-{m^{2}+p_{t}^{2}\over m(2-\alpha)}-M_{A-2}). \label{sfunrest}$$ It is worth noting that the above approximation is justified by the observation of Ref.[@CSFS] that it correctly predicts the position of the maximum in the missing energy distribution at fixed values of missing momenta. Therefore, in the regime in which integration over a wide range of missing energies is allowed, Eq.(\[sfunrest\]) represents a valid approximation of the nuclear spectral function in the SRC domain. The same model was also used to describe inclusive nucleon and pion production in kinematics forbidden for scattering off a free nucleon [@FS88; @FS81] and electroproduction [@FS81; @DFSS] reactions from nuclei at $x>1$ and $Q^2\geq 1~GeV^2$.
**Proton-Proton Elastic Scattering Cross Section** {#IId} -------------------------------------------------- The next quantity needed to calculate the quasielastic $A(p,2p)X$ cross section in Eq.(\[pwia\]) is the differential cross section of $pp$ elastic scattering. For $s \geq 12 \ GeV^{2}$ we use a phenomenological parameterization of the free pp elastic cross section. We assume a combination of the $s$-parameterization at $90^{0}$ presented in Ref.[@RP] and the $\theta_{c.m.}$-parameterization in the form suggested in Ref.[@SBB]: $$\begin{aligned} \frac{d\sigma}{dt}^{pp} & = & 45.0 {\mu b\over sr GeV^{2}}\cdot \left ({10\over s} \right )^{10}\cdot (1-\cos{\theta_{c.m.}})^{-4\gamma}\cdot \nonumber \\ & & \nonumber \\ & & \left [ 1 + \rho_{1}\sqrt{s\over GeV^{2}}\cdot\cos{\phi(s)} + {\rho_{1}^{2}\over 4}{s\over GeV^{2}} \right ]\cdot F(s,\theta_{c.m.}) \label{pp}\end{aligned}$$ where $\rho_{1} = 0.08$, $\gamma = 1.6$ and $\phi(s) = {\pi\over 0.06} \ln(\ln[s/(0.01 GeV^{2})])^{-2}$. The function $F(s,\theta_{c.m.})$ is used for further adjustment of the phenomenologically motivated parameterization to the experimental data at $60^{0} \leq \theta_{c.m.} \leq 90 ^{0}$ [@FPSS95]. **Calculation of the $\alpha$-dependence of the Cross Section in PWIA** {#IIe} ----------------------------------------------------------------------- The main quantity in which we are interested is the $\alpha$-dependence of the $A(p,2p)X$ quasielastic cross section at fixed, large c.m. angles and high momentum transfer. The reason for this choice is twofold: first, the $\alpha$-dependence naturally expresses the sensitivity of the $A(p,2p)X$ cross section to the high momentum component of the nuclear wave function, which will be discussed below; second, as will be demonstrated in Section \[IIf2\], the $\alpha$ variable is not sensitive to the soft initial and final state reinteractions of energetic protons with target nucleons.
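The steep $s$-dependence of Eq.(\[pp\]) can be sketched as follows. This is only the leading-power envelope: the oscillatory bracket and the adjustment factor $F(s,\theta_{c.m.})$ are set to 1 here, a simplifying assumption rather than the full fit used in the calculations.

```python
def dsigma_dt_pp(s, cos_theta_cm, gamma=1.6):
    """Leading-power envelope of the pp parameterization, Eq. (pp):
    45 mub/(sr GeV^2) * (10/s)^10 * (1 - cos(theta_cm))^(-4*gamma).
    The oscillatory bracket and F(s, theta_cm) are omitted (set to 1)."""
    return 45.0 * (10.0 / s) ** 10 * (1.0 - cos_theta_cm) ** (-4.0 * gamma)

# Doubling s at theta_cm = 90 deg suppresses the cross section by a factor 2^10,
# which is the origin of the preference for scattering off forward-moving
# (smaller effective s, i.e. alpha < 1) bound protons.
r = dsigma_dt_pp(26.0, 0.0) / dsigma_dt_pp(13.0, 0.0)
print(r)
```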
Thus its distribution will largely reflect the distribution of the nucleon within the SRC, without substantial modification due to initial and final state interactions. In Figure 2 we present the $\alpha$-dependence of the $^{12}C(p,2p)X$ cross section calculated for different values of the incoming proton momentum. The calculations are within the PWIA framework described above. Here the c.m. angle of the $pp\to pp$ scattering is restricted to $90\pm 5^0$. The calculation is done for a $^{12}C$ target using the Harmonic Oscillator momentum distribution $n(k)$ in Eq.(\[mf\]) and the high momentum tail of the deuteron wave function in Eq.(\[hmlc\]), calculated using the Paris NN potential, with $a_2(^{12}C)=5$. Elastic pp scattering off a proton at rest corresponds to $\alpha=1$. As can be seen from Figure 2, most of the strength is at $\alpha<1$, which corresponds to scattering off a proton with momentum in the direction of $\vec p_1$. This is a quantitative illustration of the discussion in the introduction: the $pp$ cross section on a bound proton scales with the total pp c.m. energy as $\sim (s\alpha)^{-10}$, therefore the $A(p,2p)X$ cross section is dominated by smaller $\alpha$. One can clearly observe a double peak structure of the $\alpha$-distributions. The first peak, closer to $\alpha=1$, is due to scattering off a proton in the Fermi sea (Eq.(\[mf\])). The other peak, at even lower $\alpha$ values, is due to scattering off the SRC (Eq.(\[lcsf2n\])). As the incoming energy increases, one can see the shift of the strength to the lower $\alpha$-range, which means more and more scattering off target protons with high Fermi momenta aligned in the direction of the incoming proton momentum $p_1$. This shift shows the onset of the regime where one expects to probe short-range nucleon correlations in the nucleus.
This picture demonstrates the selectivity of hard $A(p,2p)X$ reactions to large values of the bound nucleon momenta in the nucleus, predicted originally in Refs.[@FS88; @FLFS]. **Competing Nuclear Effects** {#IIf} ----------------------------- The calculations above were done within the PWIA, using the $pp$ parameterization (Eq.(\[pp\])) for scattering off a free proton. Two basic nuclear effects that can obscure the expected $\alpha$-dependence are the modification of the bound protons in the nuclei and the initial and final state interactions of the incoming and scattered protons, respectively. ### **Nuclear Medium Modification of Bound Protons** {#IIf1} We consider possible binding modifications of the bound nucleon structure which are consistent with the in-medium deep-inelastic (DIS) nucleon structure functions measured using lepton-nucleus scattering - a phenomenon known as the “EMC effect” [@EMC]. One of the mechanisms that can account for the observed modification of the DIS structure function is the suppression of point-like configurations (PLC) in a bound nucleon as compared to a free nucleon [@FS85; @FS88; @Frank]. The PLC are small-sized partonic configurations in the nucleons which, due to color screening, are weakly interacting objects. In the color screening model of the EMC effect [@FS85; @FS88], the binding of the nucleonic system results in a suppression of the nucleon’s PLC component. This suppression does not lead to a noticeable change in the average characteristics of a nucleon in the nucleus. However, it is sufficient to account for the observed EMC effect in DIS scattering from nuclei. Since high momentum transfer $pp$ elastic scattering is mainly due to the scattering off a PLC in the protons, the expected suppression of the PLC will reduce the cross section of $pp$ scattering off a bound proton.
This suppression can be estimated by multiplying the free $pp$ cross section of Eq.(\[pp\]) by the factor [@FS85] $$\delta (k,t) = \left( 1 + \Theta (t_{0}-t)\cdot (1- {t_{0}\over t})\cdot {{k^{2}\over m_{p}} +2\epsilon_{A} \over \Delta E} \right )^{-2}, \label{delta}$$ where $\epsilon_{A} \approx 8 \ MeV$ is the average nuclear binding energy and $\Delta E\approx 0.6-1 ~GeV$ is a parameter that characterizes a typical excitation of the bound nucleon. The $t$-dependence in Eq.(\[delta\]) is due to the fact that in the wave function of a nucleon the PLC dominate at sufficiently high values of the momentum transfer [@FSZ93] ($-t_0\approx 2 GeV^2$). As follows from Eq.(\[delta\]), the $\delta (k,t)$ correction tends to reduce the expected $\alpha$-shift shown in Figure 2, since it introduces an additional $\alpha^l$ ($l\sim 2-3$) dependence, which softens the $(s\alpha)^{-10}$ dependence of the $pp$ cross section in Eq.(\[pwia\]). Note that a similar suppression is expected within the rescaling model of the EMC effect [@rescaling; @MSS]. On the other hand, in a number of models of the EMC effect, such as the pion and binding models (for a review see [@FS88]), the shift to $\alpha<1$ is amplified as compared to the multinucleon calculation [@FLFS; @FMS92]. Thus, our estimation within the color screening model can be considered as the upper limit of the possible suppression due to bound nucleon modification. Using Eqs.(\[pwia\],\[sfunrest\],\[pp\],\[delta\]), the calculated cross section is shown in Figure 3 as a function of $\alpha$. As Figure 3 shows, the considered medium modification effect suppresses the high momentum strength of the cross section, since this strength corresponds to larger virtualities of the bound nucleon, which are more sensitive to the PLC structure of the nucleon. However, the suppression does not diminish the expected downward shift of the $\alpha$-distribution.
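The behavior of the suppression factor $\delta(k,t)$ of Eq.(\[delta\]) can be sketched numerically (GeV units; $t$ and $t_0$ are negative, and the parameter values are the representative ones quoted in the text, not a fit):

```python
def plc_suppression(k, t, t0=-2.0, eps_A=0.008, delta_E=0.8, m_p=0.938):
    """Medium-modification factor delta(k, t) of Eq. (delta), GeV units.
    Theta(t0 - t) switches the suppression on for t < t0 (both negative);
    eps_A ~ 8 MeV is the average binding energy, delta_E ~ 0.6-1 GeV a
    typical excitation energy of the bound nucleon."""
    theta = 1.0 if t < t0 else 0.0
    return (1.0 + theta * (1.0 - t0 / t) * (k * k / m_p + 2.0 * eps_A) / delta_E) ** -2

# No suppression below |t0|; suppression grows with the bound nucleon momentum k,
# i.e. with the virtuality of the struck proton.
print(plc_suppression(0.3, -1.0))                       # t above threshold: factor 1
print(plc_suppression(0.0, -5.0), plc_suppression(0.4, -5.0))
```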
It would require very unreasonable modifications of the bound nucleon structure (contradicting the EMC effect in DIS) to make the $\alpha$-shift (to the $\alpha <1$ region) completely vanish. ### **The Effect of the Initial and Final State Interactions** {#IIf2} The major nuclear effects which can obscure the information on SRC are the initial and final state interactions (ISI, FSI) of the incident and outgoing protons in the nuclear medium. Since the momenta of the incoming and two outgoing protons are above a few GeV/c, one can calculate these rescatterings in the eikonal approximation. For bound nucleons with small momenta, $0.8 <\alpha <1.2 $ and $p_t\leq p_{Ferm}$, where the scattered and the knocked-out protons reinteract with uncorrelated nucleons, we apply the conventional Glauber approximation to calculate the small angle rescatterings. This is justified since in these cases the spectator nucleons can be considered as stationary scatterers. Integrating over a wide range of the missing energy of the $A(p,2p)X$ reaction allows us to further simplify the calculation of ISI/FSI using the probabilistic approximation of Ref.[@Yael], which accounts for all orders (single, double, etc.) of the soft $pN$ rescatterings. However, the above approximation cannot be used for the bound protons in SRC (which have a large value of Fermi momentum). There, the spectator nucleon cannot be treated as a stationary scatterer and therefore the Glauber approximation is not valid (see e.g. [@FSS]). To calculate the initial and final state rescatterings in this case we assume that for the incoming and outgoing protons the first rescattering most probably happens with the partner nucleon in the SRC. Indeed, as was demonstrated in Ref.[@DFSS], because of the large virtuality of the interacting nucleon in the SRC, the distance of the first soft reinteraction after the point of hard interaction is less than $1$ fm, and it decreases with the increase of $t$ and $p_2$.
Within the framework of the two-nucleon correlation model one can account for the soft rescatterings in the SRC using the calculation of the $d(p,2p)n$ reaction in the Generalized Eikonal Approximation (GEA), Ref.[@FSS; @FPSS97]. Using the GEA we calculate only the single rescatterings of the incoming and knocked-out protons with the correlated nucleon (Figure 4 b,c,d). The main feature of the GEA is that it accounts for the nonzero values of the spectator nucleon momentum (it does not treat the spectator as a stationary scatterer, as is done in the conventional eikonal approximation). This feature is especially important in the SRC region since in this case the correlated nucleon momenta are large and cannot be neglected. The effect of the rescatterings in the SRC (in the range of $\alpha<0.8$ or $\alpha >1.2$) can be accounted for by introducing a correction factor $\kappa$ which multiplies the SRC spectral function of Eq.(\[sfunrest\]). We define $\kappa$ as follows: $$\kappa = {|F_a + F_b + F_c + F_d|^2\over |F_a|^2}, \label{kappa}$$ where $F_a$ is the PWIA amplitude, and $F_b$, $F_c$ and $F_d$ are the single rescattering amplitudes corresponding to the $p+ (NN)_{SRC} \to p + N + N$ scattering shown in Figure 4. To obtain the $F$’s we use the rescattering amplitudes for the $d(p,pp)n$ reaction calculated in Ref.[@FPSS97]: $$F_{(j)} = -{(2\pi)^{3\over 2}\over 4 i}A^{hard}_{pp}(s,t)\int {d^2k_t\over (2\pi)^2}f^{pN}(k_t)(\psi^{\mu}_d(\tilde p^{(j)}_s) - n\cdot i \psi^{\prime \mu}(\tilde p^{(j)}_s)), \label{F_j}$$ where $j(n)=b(1), c(1), d(-1)$. $A^{hard}_{pp}$ is the amplitude of the hard $pp$ scattering which, within the factorization approximation, cancels in $\kappa$. $f^{pN}$ is the amplitude of small angle (soft) $pN$ scattering, where $N$ can be either a proton or a neutron. $\psi_d$ is the deuteron wave function and $\psi^\prime$ accounts for the distortion due to FSI (see Ref.[@FPSS97]).
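The structure of the correction factor of Eq.(\[kappa\]) is a coherent sum of complex amplitudes, which is trivial to evaluate once the $F$'s are known. The numbers below are purely illustrative toy amplitudes, not the GEA results of Ref.[@FPSS97]; they only show how rescattering terms out of phase with the PWIA term suppress the yield, while in-phase terms enhance it:

```python
def kappa(F_a, F_b, F_c, F_d):
    """Rescattering correction factor, Eq. (kappa):
    kappa = |F_a + F_b + F_c + F_d|^2 / |F_a|^2, for complex amplitudes."""
    return abs(F_a + F_b + F_c + F_d) ** 2 / abs(F_a) ** 2

# PWIA only: no rescattering, kappa = 1.
print(kappa(1.0 + 0.0j, 0.0j, 0.0j, 0.0j))
# Toy rescattering amplitudes partially cancelling the PWIA term (kappa < 1).
print(kappa(1.0 + 0.0j, -0.15 + 0.05j, -0.10 + 0.05j, 0.02 + 0.0j))
```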
For higher order rescatterings we applied the probabilistic approximation of Ref.[@Yael], which we already used in the case of small Fermi momenta. This is justified since in the kinematics of two-nucleon SRC the second and higher order rescatterings happen outside of the SRC. It is worth noting that the error originating from the last approximation is rather small, since for intermediate size nuclei ($A\sim 12-16$) the overall contribution of higher order rescatterings in the considered kinematics of the $A(p,2p)X$ reaction is small (a few percent as compared with the single rescattering contribution [@Yael]). It is important to emphasize that the major qualitative feature of reinteractions with uncorrelated nucleons at high energies is the existence of an approximate conservation law for the light cone momenta of the interacting particles [@FSS; @MS]. Namely, for energetic particles small angle soft reinteractions do not change the $\alpha$-distribution. To demonstrate this, let us consider the propagation of a fast nucleon with momentum $p_1= (E_1,p^z_{1}, 0)$ through the nuclear medium. After the small angle reinteraction of this nucleon with a nucleon in the nucleus with momentum $p_2=(E_2,p^z_{2},p^t_{2})$, the energetic nucleon still maintains its high momentum and leading $z$-direction, now having a momentum $p^{\prime}_1=(E^{\prime}_1,p^{z\prime}_{1},p^{t\prime}_{1})$ with ${<(p^{t\prime}_{1})^2>\over (p^{z\prime})^2}\ll 1$. The other nucleon momentum after the collision is $p^{\prime}_2=(E^{\prime}_2,p^{z\prime}_{2},p^{t\prime}_{2})$. The energy-momentum conservation for this scattering allows us to write for the “$\alpha$” component: $$\alpha_1 + \alpha_2 = \alpha^\prime_1 + \alpha^\prime_2 \equiv {p_{1-}\over m} + {p_{2-}\over m} = {p^\prime_{1-}\over m} + {p^\prime _{2-}\over m}.
\label{claw}$$ The change of the $\alpha_2$ (“$-$”) component due to the rescattering can be obtained from Eq.(\[claw\]): $$\Delta \alpha_2\equiv {\Delta p_{2-}\over m} = {p_{2-}-p^\prime_{2-}\over m} = {p^\prime_{1-}-p_{1-}\over m}\ll 1. \label{dal}$$ which means: $${\alpha^\prime_2 \approx \alpha_2.} \label{dal1}$$ In Eq.(\[dal\]) we use the conditions ${p^\prime_{1-}\over m}, {p_{1-}\over m} \ll 1$, which are well satisfied for small angle reinteractions since ${<(p^{t\prime}_{1})^2>\over (p^{z\prime})^2}\ll 1$. Thus, with the increase of the incident energy a new approximate conservation law emerges: $\alpha_2$ is conserved by ISI/FSI. The unique feature of high energy rescattering is that although both the energy and the momentum of the nucleons are distorted by the rescattering, the combination $E_2-p^z_2$ is almost not affected. In the same way the rescattering of the incoming and two outgoing protons in the (p,2p) reaction conserves the reconstructed $\alpha$-component of the target proton. Therefore the $\alpha$-distribution measured in the $A(p,2p)X$ reaction reflects well the original $\alpha$-distribution of the target proton in the nucleus. A numerical estimate of this conservation will be presented in the next section. To complete the discussion of ISI/FSI we should mention that for incident proton momenta exceeding $6-9~GeV/c$ the Glauber approximation overestimates the absorption of protons as compared with the data of Refs.[@Carroll; @EVA]. The overestimate of the absorption in these experiments is attributed to the Color Transparency (CT) phenomenon, in which it is assumed that the hard $pp\to pp$ primary process in the $A(p,2p)X$ reaction is dominated by the interaction of protons in point-like $qqq$ configurations. As a result, immediately before and after the hard interaction the color neutral PLC has a diminished strength for ISI/FSI reinteraction.
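The approximate conservation of $\alpha$ derived in Eqs.(\[claw\])-(\[dal1\]) can be illustrated with exact two-body elastic kinematics. The sketch below (GeV units, toy numbers) scatters a 6 GeV/c nucleon off a nucleon at rest through a small c.m. angle: the struck nucleon acquires a sizable transverse kick, yet its $\alpha = (E - p_z)/m$ barely moves from 1.

```python
import math

M = 0.938  # nucleon mass, GeV

def boost_z(E, pz, beta):
    """Longitudinal Lorentz boost with velocity beta (c.m. -> lab)."""
    g = 1.0 / math.sqrt(1.0 - beta * beta)
    return g * (E + beta * pz), g * (pz + beta * E)

def elastic_recoil(p1z, theta_cm_deg):
    """Exact elastic NN kinematics: beam nucleon of momentum p1z (GeV/c) on a
    nucleon at rest, c.m. scattering angle theta_cm; returns the struck
    nucleon's lab (E, pz, pt)."""
    E1 = math.sqrt(M * M + p1z * p1z)
    beta = p1z / (E1 + M)                    # c.m. velocity in the lab
    s = (E1 + M) ** 2 - p1z ** 2
    p_star = p1z * M / math.sqrt(s)          # c.m. momentum
    E_star = math.sqrt(M * M + p_star ** 2)
    th = math.radians(theta_cm_deg)
    # the struck nucleon recoils opposite to the scattered one in the c.m.
    pz_star, pt = -p_star * math.cos(th), p_star * math.sin(th)
    E, pz = boost_z(E_star, pz_star, beta)
    return E, pz, pt

E2, p2z, p2t = elastic_recoil(6.0, 5.0)      # a 5-degree soft rescattering
alpha2 = (E2 - p2z) / M                       # was exactly 1 before the collision
print(alpha2, p2t)  # alpha stays close to 1; the transverse kick is ~0.1 GeV/c
```

This is the numerical counterpart of the statement that soft reinteractions are "mainly transverse": $p_t$ is strongly affected while $E - p_z$ is almost frozen.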
Since the PLC is not an eigenstate of the QCD Hamiltonian (free nucleons have a finite size), the interaction strength will evolve to the normal hadronic interaction strength in parallel with the evolution of the PLC to the normal hadronic size during the propagation of the fast proton in the nuclear medium. We estimate the CT phenomenon within the quantum diffusion model of Ref.[@FFLS]. This model, which describes the data [@Carroll] reasonably well [@FSZ93], assumes the following amplitude for the $PLC-N$ soft interaction: $$f^{PLC,N}(z,k_t,Q^2) = i\sigma_{tot}(z,Q^{2}) \cdot e^{{b\over 2 }t}\cdot {G_{N}(t\cdot\sigma_{tot}(z,Q^{2})/\sigma_{tot}) \over G_{N}(t)}, \label{F_NNCT}$$ where $b/2$ is the slope of the elastic $NN$ amplitude, $G_{N}(t)$ ($\approx (1-t/0.71)^{2}$) is the Sachs form factor and $t= -k_t^2$. The last factor in Eq.(\[F\_NNCT\]) accounts for the difference between elastic scattering of the PLC and of average configurations, using the observation that the $t$-dependence of $d\sigma^{h+N\to h+N}/dt $ is roughly that of $\sim~G_{h}^{2}(t)\cdot G_{N}^{2}(t)$ and that $G_{h}^{2}(t)\approx exp(R_h^2t/3)$, where $R_h$ is the rms radius of the hadron. In Eq. (\[F\_NNCT\]) $\sigma_{tot}(z,Q^{2})$ is the effective total cross section for the PLC to interact at a distance $z$ from the hard interaction point and $\sigma_{tot}$ is the pN total cross section. The quantum diffusion model [@FFLS] predicts: $$\sigma_{tot}(z,Q^{2}) = \sigma_{tot} \left \{ \left ({z \over l_{h}} + {\langle r_{t}(Q^2)^{2} \rangle \over \langle r_t^{2} \rangle } (1-{z \over l_{h}}) \right )\Theta (l_{h}-z) + \Theta (z-l_{h})\right\}, \label{SIGMA_CT}$$ where $l_h = 2p_{f}/\Delta M^{2}$, with $\Delta M^{2}=0.7-1.1~GeV^{2}$. Here $\langle r_{t}(Q^2)^{2} \rangle$ is the average squared transverse size of the configuration produced at the interaction point.
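Eq.(\[SIGMA\_CT\]) is a linear interpolation between a reduced cross section at the hard vertex and the full $pN$ cross section at the coherence length $l_h$. A minimal sketch, with the size ratio taken as the $1~GeV^2/Q^2$ estimate quoted below (an assumption of this sketch, not a unique choice), and $z$ measured in GeV$^{-1}$ ($1$ fm $\approx 5.07~$GeV$^{-1}$):

```python
def sigma_eff(z, Q2, sigma_tot=40.0, p_fast=6.0, delta_M2=0.7):
    """Effective PLC-nucleon cross section of Eq. (SIGMA_CT), in mb.
    z: distance from the hard vertex in GeV^-1; l_h = 2*p_fast/delta_M2 is the
    coherence length; the size ratio <r_t(Q2)^2>/<r_t^2> is taken as
    1 GeV^2/Q2 (capped at 1), the estimate quoted in the text."""
    l_h = 2.0 * p_fast / delta_M2
    ratio = min(1.0 / Q2, 1.0)
    if z >= l_h:
        return sigma_tot                     # fully expanded: normal strength
    return sigma_tot * (z / l_h + ratio * (1.0 - z / l_h))

# Strongly reduced interaction strength right at the hard vertex, linear
# recovery of sigma_tot at z = l_h and beyond.
print(sigma_eff(0.0, 8.0), sigma_eff(8.0, 8.0), sigma_eff(100.0, 8.0))
```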
In several realistic models considered in Ref.[@FMS92] it can be approximated as ${ {\langle r_{t}(Q^2)^2\rangle\over\langle r_t^2\rangle} \sim{1\,GeV^2\over Q^2}}$ for $Q^2~\geq~1.5~GeV^2$. Note that, due to the expansion, the results of the calculations are rather insensitive to the value of this ratio whenever it is much less than unity. For the numerical calculations we assumed $\Delta M^2\approx 0.7~GeV^2$, as was chosen to describe the nuclear transparencies from the $A(p,2p)X$ [@Carroll] and $A(e,e'p)X$ [@NE18] experiments (see comparisons in Ref.[@FSZ93]). In Figure 5 we compare the prediction of the quantum diffusion model for the nuclear transparency $T$ with the data of the EVA experiment [@kn:I101; @EVA]. The transparency $T$ is defined as the ratio of the $A(p,2p)X$ cross section calculated using PWIA, color screening and rescattering effects to the cross section calculated within PWIA only. The comparison shows a fair agreement with the data up to 9 GeV/c incoming proton momenta (note that the probabilistic model of rescattering is expected to work within $20$% accuracy). The decrease of the experimental values of the transparency can be understood in terms of the interplay of the hard and soft components in the amplitude of high momentum transfer pp scattering [@RP; @BT], which is not incorporated in the current calculations. Since in the further analysis we will concentrate only on the region of incoming proton momenta $5.9\le p_1\le 7.5~GeV/c$, where this interplay does not play a role, we will use the simple formulae of Eqs.(\[F\_NNCT\],\[SIGMA\_CT\]) for the numerical estimates. A detailed analysis of the energy dependence of the nuclear transparency $T$ will be presented elsewhere. Results of the Model {#III} ==================== In this chapter we discuss the results of the model presented in Chapter II for several nuclear observables that can be measured in the $A(p,2p)X$ reaction.
We are particularly interested in two kinds of information: how the substructure of high-momentum transfer $pp$ scattering reveals itself in the nuclear reaction, and what kind of information one can infer about short-range nuclear structure from these reactions. For the numerical calculations in this chapter we apply the kinematics of the EVA experiment [@kn:I101]. Because of the multidimensional character of the kinematical restrictions, the numerical calculations are implemented as a Monte Carlo calculation. Furthermore, we present the cross sections in arbitrary units since we are interested mainly in the shapes of the $\alpha$ and $p_t$ dependence of the $A(p,2p)X$ cross section. How the Quark Substructure of Hard pp Scattering is Reflected in the Nuclear Observables {#IIIa} ---------------------------------------------------------------------------------------------- The power-law energy dependence of the hard elastic $pp$ scattering cross section is the signature of the dominance of quark-gluon degrees of freedom in high-momentum transfer scattering (see e.g. [@hex]). As was predicted in Ref.[@FLFS], if this strong energy dependence ($\sim s^{-10}$) persists in the nuclear medium it will amplify the contribution to the cross section coming from the scattering off deeply bound protons. These protons have a large momentum in the direction of the incoming proton. Since the cross section for the high-momentum transfer scattering of the incoming proton off the bound proton at fixed and large $\theta_{cm}\sim 90^0$ is roughly proportional to $(\alpha s)^{-10}$ (see Eqs.(\[pwia\]-\[pp\])), an observable that reflects the sensitivity of the $A(p,2p)X$ reaction to the high momentum component of the nuclear wave function is the shift of the $\alpha$-spectra to lower $\alpha$ values.
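The mechanism behind the $\alpha$-shift can be sketched with a toy Monte Carlo in the spirit of the calculations described above, though with purely illustrative inputs (a Gaussian mean-field momentum distribution truncated at the Fermi momentum, the nonrelativistic $\alpha \approx 1 - p_z/m$ of Eq.(\[mf\]), and an arbitrary $s_0$), not the EVA kinematics:

```python
import random

M = 0.938          # nucleon mass, GeV
P_FERMI = 0.25     # keep only mean-field momenta, GeV/c (illustrative cut)

def mean_alpha(n_power, s0=13.0, n_events=20000, sigma_p=0.1, seed=7):
    """Toy MC: draw the bound proton's longitudinal momentum pz from a
    truncated Gaussian, set alpha = 1 - pz/m, and weight each event by the
    elementary cross section ~ (alpha*s0)^(-n_power).
    Returns the weighted mean of alpha."""
    rng = random.Random(seed)
    w_sum = wa_sum = 0.0
    for _ in range(n_events):
        pz = rng.gauss(0.0, sigma_p)
        if abs(pz) > P_FERMI:
            continue
        a = 1.0 - pz / M
        w = (a * s0) ** (-n_power)
        w_sum += w
        wa_sum += w * a
    return wa_sum / w_sum

print(mean_alpha(0))    # flat elementary cross section: mean alpha ~ 1
print(mean_alpha(10))   # s^-10 weighting pulls the mean well below 1
```

The stronger the negative power of $s$, the stronger the preference for scattering off protons moving toward the beam, which is exactly the trend shown in Figure 6.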
To demonstrate this sensitivity, in Figure 6 we present the $\alpha$-dependence of the $A(p,2p)X$ reaction cross section assuming different $s$-dependences of the cross section for hard $p+p\to p+p$ scattering. These calculations merely illustrate the connection between the $s$-dependence and the $\alpha$-shift. Figure 6 confirms that the larger the negative power of the $s$-dependence of the hard $pp$ scattering, the larger the average longitudinal momentum of the interacting bound nucleon ($\alpha <1$). The $\alpha$-shift also produces an excess of the total longitudinal momentum of the final outgoing protons as compared to the initial longitudinal momentum $p_1$. One can characterize this excess through the variable: $$x = {p^z_3 + p^z_4\over p_1}, \label{x}$$ which will increase as the power of the hard $pp$ scattering cross section increases. In Figure 7 we show the calculated $x$-dependence of the cross section for different assumed $s$-dependences. The expected shift to higher $x$ (lower $\alpha$) is clearly seen in Figure 7. The $x$-distribution for quasielastic $C(p,2p)X$ reactions peaks at $x<1$ if one assumes no $s$-dependence of the elementary $p+p\to p+p$ reaction. As the dependence on $s$ increases, the peak is shifted to $x>1$, which represents the nuclear “boosting” effect: the outgoing protons have more longitudinal momentum than the incoming momentum. It is worth noting that this effect is reminiscent of subthreshold production in nuclei, in which a very low available energy in the nuclear medium can cause dramatic changes in the cross section of the reaction. Sensitivity to Short Range Correlations in Nuclei {#IIIb} ------------------------------------------------- The next question we would like to address is the sensitivity of the $\alpha$-shift to the existence of high momentum components in the nuclear ground state wave function.
To assess this sensitivity, we compare the cross sections of the $A(p,2p)X$ reaction using two models for the nuclear wave function: a harmonic oscillator (HO) model, and the HO model supplemented by the two-nucleon SRC model of the high momentum component (HMC) of the nuclear wave function described in Section \[IIc\] (HO+HMC). In Figure 8 we present the $\alpha$-dependence of the $A(p,2p)X$ cross section calculated within the PWIA at $p_1=6~GeV/c$ and $\theta_{cm}=90^0$ using these two models. As Figure 8 shows, even at moderate energies such as $p_1 = 6~GeV/c$, the $\alpha$-dependence exhibits substantial sensitivity to the high momentum structure of the nuclear wave function. Thus, the measured cross section at small $\alpha$ will allow us to extract the characteristics of the high momentum tail of the wave function. In Figure 9 we show the results of the PWIA calculations for the transverse momentum distribution of the $A(p,2p)X$ cross section. It also exhibits sensitivity to the high momentum part of the nuclear wave function. However, as will be shown below, unlike the $\alpha$-distribution, the transverse momentum distribution is strongly distorted by the initial and final state interactions. Note that hereafter, for the transverse missing momentum distribution, we consider only the $p_y$ component of $p_t$. This restriction is related to the fact that the experimental data have better resolution for the $p_y$ component of the missing momentum.

The Effect of Initial and Final State Interactions {#IIIc}
--------------------------------------------------

As was discussed in Chapter 2 (see Eq.(\[dal\])), one expects that soft rescatterings off uncorrelated nucleons at high energies conserve the $\alpha$ parameter of the interacting nucleons. Thus the measured $\alpha_2$-distribution of the $A(p,2p)X$ cross section will not be strongly affected by the ISI/FSI and will reflect the original $\alpha$-distribution of the target proton in the nucleus.
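The near-conservation of $\alpha$ under soft rescattering can be made quantitative with a small numerical check. For elastic small-angle scattering of a proton of momentum $p$ along the beam axis, the transverse kick $\Delta p_t \approx p\theta$ is first order in the scattering angle, while the change of the light-cone “$-$” component, $\Delta(E-p_z) = p(1-\cos\theta) \approx p\theta^2/2$, is second order. The angles below are arbitrary illustrative values:

```python
import math

m = 0.938                      # proton mass [GeV]
p = 6.0                        # momentum along z before rescattering [GeV/c]
E = math.sqrt(p * p + m * m)   # elastic rescattering: |p| and E unchanged

for theta in (0.05, 0.10):     # small rescattering angles [rad]
    dpt = p * math.sin(theta)              # transverse kick, O(theta)
    dminus = p * (1.0 - math.cos(theta))   # change of E - p_z, O(theta^2)
    print(f"theta={theta:.2f}: dp_t = {dpt:.3f} GeV/c, "
          f"d(E-p_z) = {dminus:.4f} GeV")
```

A 300 MeV/c transverse kick thus changes $E-p_z$ by only about 7.5 MeV, i.e. $\alpha$ by well under 1%, which is why the rescattering strongly distorts the transverse distributions while leaving $\alpha$ almost intact.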
In Figure 10 we compare the $p_2$ and $\alpha$ distributions of the $\theta_{cm}=90^0$ $A(p,2p)X$ differential cross section at $p_1=6~GeV/c$. The dashed lines correspond to the PWIA prediction, thus representing the “true” momentum distribution of the bound nucleon. The solid lines represent the calculation including the ISI/FSI. In the latter case $p_2$ and $\alpha$ are reconstructed from the momenta of the incoming ($p_1$) and outgoing protons ($p_3$, $p_4$), thus representing the “measured” quantities. Notice the strong effect of the ISI/FSI on the $p_2$-distribution as compared to the effect of the same ISI/FSI on the $\alpha$-distribution. As we mentioned before, both the reconstructed energy and momentum of the target proton are modified by the rescattering, but their linear combination, $\alpha$, is almost unchanged. Finally, in Figure 11 we show the transverse momentum distribution ($p_{y}$) calculated for the same kinematics as in Figure 10. Figure 11 shows substantial ISI/FSI effects on the $p_{y}$-distribution for calculations both with and without Color Transparency. The large contribution of the ISI/FSI to the transverse momentum distribution is attributed to the structure of small angle hadronic interactions at high energies: the rescattering is mainly transverse, thus maximally affecting the transverse momenta of the interacting nucleons. The above discussion allows us to conclude that the experimental study of the $\alpha$-distribution provides direct information on the high momentum components of the nuclear wave function. On the other hand, large values of the missing transverse momentum are mainly sensitive to the dynamics of the initial and final state interactions. In the subsequent sections we discuss the analysis of the first experimental data on the $A(p,2p)X$ reaction.

Measurements and Data
{#IV}
=======================

We compare the calculations with the data that were collected in EXP 850 using the EVA spectrometer at the AGS accelerator of Brookhaven National Laboratory. At the time of the preparation of this work, these were the only data on high momentum transfer quasi-elastic reactions [@kn:I101]. In this chapter we briefly describe the experiment and the experimental procedures relevant for comparing the data with the calculations. In the following chapter we present the calculations and compare them with the data. The EVA collaboration performed a second measurement over a wider kinematical range with incident momenta above 7.5 GeV/c. These data have not yet been analyzed. Some of the calculations in this work are predictions for these new data, which may become available later.

The Experimental Setup {#IVa}
----------------------

The EVA spectrometer, located on the secondary line C1, consisted of a 2 meter diameter, 3 meter long superconducting solenoidal magnet operated at 0.8 Tesla (see Fig. 12). The beam entered along the $z$ axis and hit a series of targets located at various $z$ positions. The scattered particles were tracked by four cylindrical chambers (C1-C4, Fig. 12). Each had 4 layers of long straw drift tubes with a high-resistance central wire. For any of the 5632 tubes that fired, the drift time to its central wire was read out. In three of the four cylindrical chambers, signals were read out at both ends, providing position information along the $z$ direction as well. The straw tube information allowed target identification, the measurement of the particles’ transverse momenta as they were bent in the axial magnetic field, and the measurement of their scattering angles. The overall resolution, caused by the beam, the target and the detector, was determined from the two-body elastic pp scattering measurement.
The transverse momentum resolution (one standard deviation, $\sigma$) was $\Delta p_t/p_t=7\%$, and the missing-energy resolution was 0.27 GeV. The polar angles ($\theta_3$, $\theta_4$) of the two outgoing protons were measured with a resolution of 7 mrad. The beam intensity ranged from $1$ to $2\cdot 10^7$ particles per one-second spill every 3 seconds. Two counter hodoscopes in the beam (only one shown in Fig. 12) provided beam alignment and a timing reference, and two differential Cerenkov counters (not shown in Fig. 12) identified the incident particles. Three levels of triggering were used to select events with a predetermined minimum transverse momentum. The first two hardware triggers selected events with transverse momenta p$_t>$ 0.8 and p$_t>$ 0.9 GeV/c for the 6 and 7.5 GeV/c measurements, respectively. The third-level software trigger required two almost coplanar tracks, each satisfying the second-level trigger requirement, and low multiplicity of hits in the straw tubes. See Ref. [@kn:ref7] for a detailed description of the trigger system. Details on the EVA spectrometer are given in Refs. [@kn:ref7; @kn:ref3; @kn:ref4; @kn:ref5]. Three solid targets, CH$_2$, C and CD$_2$ (enriched to $95\%$), were placed on the $z$ axis inside the C1 cylinder, separated by about 20 cm. They were $5.1\times 5.1$ $cm^2$ squares and 6.6 $cm$ long in the $z$ direction, except for the CD$_2$ target, which was 4.9 cm long. Their positions were interchanged at several intervals in order to reduce systematic uncertainties and to maximize the acceptance range for each target. Only the C target was used to extract the QE events, while the other targets served for normalization and reference.

Event Selection and Kinematical Constraints
-------------------------------------------

Quasi-elastic scattering events, with only two charged particles in the spectrometer, were selected.
An excitation energy of the residual nucleus of $\mid E_{miss}\mid$$<$ 500 MeV was imposed in order to suppress events in which additional particles could be produced without being detected in EVA. Since this cut is above $m_\pi$, some inelastic background, such as that coming from $pA\to pp \pi^0 (A-1)$ events, could penetrate the cuts and had to be subtracted. The shape of this background was determined from a fit to the $E_{miss}$ distribution of events with extra tracks in the spectrometer, and an inelastic background with this shape was subtracted. The measured distributions therefore represent background-subtracted quantities. See Refs. [@kn:ref5; @kn:I101] for more details. The coordinate system was chosen with the $z$ coordinate in the beam direction and the $y$ direction normal to the scattering plane ($x,z$). The latter is defined by the incident beam and one of the emerging protons; the selection between the two was random. This arbitrariness in the selection does not affect the extracted quantities of interest. The data were analyzed in terms of the momentum in the $y$ direction, $p_{y}$, and the light cone variable $\alpha$. $\alpha$ was determined with a precision of $\sigma\simeq 3\%$. The $p_{y}$ (perpendicular to the scattering plane) had a resolution of $\sigma= 40$ MeV/c, while the resolution in $p_{x}$ (in the scattering plane) was $\sigma=170$ MeV/c. Because of its better resolution, $p_{y}$ was used to represent the transverse component. The laboratory polar angles of both detected protons were limited by a software cut to a region of $\pm(3-5)^o$ around the center of the angular acceptance for each target position. The angular range enforced by the software cut is smaller than the geometrical limits of the spectrometer (see Fig. 12), but it ensures a uniform acceptance. Since the experiment focused on shapes and not absolute values, an acceptance correction in the $(\theta_3, \theta_4)$ plane is not needed.
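The reconstruction of $\alpha$ from the measured momenta can be sketched as follows. Using conservation of the light-cone “$-$” component at the hard vertex, the struck proton satisfies $\alpha\, m \approx (E_3 - p^z_3) + (E_4 - p^z_4) - (E_1 - p^z_1)$, up to per-nucleon normalization conventions. The event kinematics below are hypothetical values chosen only to illustrate the procedure, not actual EVA data:

```python
import math

M = 0.938  # proton mass [GeV]

def minus(E, pz):
    """Light-cone '-' component E - p_z (conserved in soft rescattering)."""
    return E - pz

def alpha_target(p1, p3, p4):
    """Light-cone fraction of the struck proton, reconstructed from the
    beam (p1) and the two detected protons (p3, p4); arguments are (E, pz)."""
    return (minus(*p3) + minus(*p4) - minus(*p1)) / M

def e_pz(p, theta):
    """(E, p_z) for a proton of momentum p at lab polar angle theta."""
    return math.sqrt(p * p + M * M), p * math.cos(theta)

# Hypothetical 6 GeV/c event with both protons near 26-27 degrees.
p1 = e_pz(6.0, 0.0)
p3 = e_pz(3.3, math.radians(27.0))
p4 = e_pz(3.1, math.radians(26.0))
print(f"alpha = {alpha_target(p1, p3, p4):.2f}")
```

For this illustrative event the reconstructed value comes out below one, i.e. a target proton with momentum along the beam direction.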
An explicit cut on the center of mass scattering angle $\theta_{cm}$ was not applied to the data; however, the cuts on the laboratory polar angles limit $\theta_{cm}$ to the range of 83$^o$ to 90$^o$ for proton-at-rest kinematics.

The Longitudinal ($\alpha$) Distributions {#IVc}
-----------------------------------------

Each target position corresponds to a limited polar angular range $(\theta_3, \theta_4)$, and $\alpha$ is a strong function of $\theta_3 + \theta_4$. To cover the largest possible acceptance in $\alpha$, one has to merge the measured $\alpha$-distributions from the different targets. The distributions from the individual target positions were normalized to each other using the overlapping regions. The experimental error in each bin also includes the relative normalization error. The value of $\mid \theta_3 - \theta_4 \mid$ was limited by the largest common acceptance of all target positions. To summarize, the following angular acceptance cuts were applied to the data:

- $\mid \theta_3 - \theta_4 \mid<0.06$ radians (for all target positions and both beam energies);
- downstream target: $ 23.5^0 <\theta_3<32.0^0$ and $23.5^0 < \theta_4 < 29.5^0$, or $\theta_3$ and $\theta_4$ inverted;
- middle target: $ 20.0^0 <\theta_3<30.0^0$ and $22.0^0 < \theta_4 < 28.0^0$, or $\theta_3$ and $\theta_4$ inverted;
- upstream target: $ 19.0^0 <\theta_3<28.0^0$ and $21.0^0 < \theta_4 < 27.5^0$, or $\theta_3$ and $\theta_4$ inverted.

These cuts yield for 5.9 GeV/c the following $\alpha$ acceptance ranges:

- downstream target: $0.9<\alpha<1.05$;
- middle target: $0.767<\alpha<0.967$;
- upstream target: $0.7<\alpha<0.867$.

For the 7.5 GeV/c data the angular ranges were:

- downstream target: $ 22.0^0 <\theta_3<32.0^0$ and $22.0^0 < \theta_4 < 31.5^0$, or $\theta_3$ and $\theta_4$ inverted;
- middle target: $ 21.0^0 <\theta_3<27.0^0$ and $21.0^0 < \theta_4 < 27.0^0$;
- upstream target: $ 20.0^0 <\theta_3<26.0^0$ and $20.0^0 < \theta_4 < 26.0^0$.
These cuts yield for 7.5 GeV/c the following $\alpha$ acceptance ranges:

- downstream target: $0.967<\alpha<1.05$;
- middle target: $0.834<\alpha<1.0$;
- upstream target: $0.767<\alpha<0.934$.

The Transverse ($p_{y}$) Distributions {#IVd}
--------------------------------------

The $p_{y}$-distributions were studied for narrow regions of $\alpha$. The regions of $\alpha$ were chosen to yield a large overlap between the 5.9 GeV/c and the 7.5 GeV/c data sets for each target position:

- $0.74<\alpha<0.84$ for the upstream target position;
- $0.82<\alpha<0.92$ for the middle target position;
- $0.95<\alpha<1.05$ for the downstream target position.

The shapes of the $p_{y}$-distributions of the two data sets at 5.9 and 7.5 GeV/c are consistent in each of the three $\alpha$-regions. Since the data sets of the two energies were found to be consistent, they were added in order to reduce the statistical errors. Even after this procedure, the limited statistics for the $0.95<\alpha<1.05$ range do not allow us to draw conclusions for this range. All the data presented consist of events that passed all the quasi-elastic cuts; the residual inelastic background was subtracted in a way similar to that described for the $\alpha$-distributions (see Refs. [@kn:I101; @kn:ref5] for details). All measured $p_{y}$-distributions are normalized to 10000 at $p_{y}=0$ and shown on a logarithmic scale to emphasize their shapes. The data are compared to the calculations in Chapter 5.

Comparison of the calculations with the data {#V}
============================================

The Longitudinal ($\alpha$) Distributions {#Va}
-----------------------------------------

As was mentioned in Chapter 3, the calculations are implemented through a Monte Carlo code, which allows us to combine the theoretical calculations with the multidimensional kinematic cuts applied in the experiment.
The following cuts have been included in the calculations:

- the angular and $\alpha$ acceptances are constrained to the same ranges as presented in Chapter \[IV\] for the data;
- $60^{0}<\theta_{cm}<120^{0}$ (for all target positions).

The calculations include all considered nuclear effects (EMC, ISI/FSI and CT). Figure 13 shows the measured longitudinal $\alpha$-distributions at 5.9 GeV/c and 7.5 GeV/c together with the calculations. In the calculations we used the two-nucleon correlation model for the high momentum component of the nuclear wave function, discussed in Chapter \[II\]. For the parameter $a_2(^{12}C)$, which defines the strength of the SRC in the nuclear spectral function (Eq.(\[sfunrest\])), we used the estimate obtained from the analysis of high-$Q^2$ and large Bjorken-$x$ $A(e,e')X$ data in Ref. [@DFSS]. This analysis yields $a_2\approx 5$ for $^{12}C$. The $\chi^2$ per degree of freedom obtained by comparing the measured and calculated distributions at 5.9 GeV/c and 7.5 GeV/c ($\chi^2=0.8$ and $\chi^2=2.0$, respectively) confirms that the calculation and the data have the same shape. The next question we ask is whether the data allow us to identify the ingredients contributing to the strength of the $\alpha$-distribution at lower $\alpha$ values. First we check whether high momentum transfer elastic $pp$ scattering off a bound nucleon still retains the $s^{-10}$ energy dependence. In Figure 14 we compare the calculations done using an $s$-independent $pp$ cross section (triangle points) and the $pp$ cross section parameterized according to Eq.(\[pp\]), in which $\frac{d\sigma^{pp}}{dt} \propto s^{-10}$ (solid points). If there were no scaling for hard $pp$ scattering in nuclei, the $\alpha$-distribution would peak around $\alpha=1$, as shown by the calculation with no “s-weighting” (triangles). The data clearly show a shift to lower $\alpha$, which confirms the strong $s$-dependence of the quasi-elastic process.
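The shape comparison quoted above is a standard reduced chi-square with one fitted parameter (the relative normalization). A generic sketch, with made-up bin contents rather than the EVA data, is:

```python
import numpy as np

def chi2_per_dof(data, model, sigma, n_fit=1):
    """Reduced chi-square between binned data and a model prediction;
    n_fit fitted parameters are removed from the degrees of freedom."""
    resid = (np.asarray(data) - np.asarray(model)) / np.asarray(sigma)
    return float(np.sum(resid**2) / (resid.size - n_fit))

# Hypothetical binned alpha-distribution (illustration only).
data  = [120.0, 240.0, 410.0, 520.0, 380.0]
model = [110.0, 250.0, 400.0, 540.0, 360.0]
sigma = [ 15.0,  20.0,  25.0,  28.0,  22.0]
print(f"chi2/dof = {chi2_per_dof(data, model, sigma):.2f}")
```

A value of order unity, as found for the 5.9 GeV/c data set, indicates that the measured and calculated shapes are statistically compatible.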
Next we address the question of whether the strength seen at $\alpha<1$ comes from the SRC in the nucleus. Figure 15 shows two calculated $\alpha$-distributions for an incoming proton momentum of 5.9 GeV/c. One distribution is calculated with the harmonic oscillator wave function only (i.e. $a_2=0$ in Eq.(\[sfunrest\])) (triangle points). The second distribution is calculated with the SRC contribution to the high momentum tail of the nuclear wave function, described by $a_2=5$ (solid points). The open circles are the data. It is clearly seen in the figure that the $\alpha$-distribution calculated with $a_2=0$ does not provide sufficient strength at low $\alpha$ to describe the data, and SRC contributions are necessary. It is important to note that both the strong $s$-dependence of hard $pp$ scattering and the contribution of SRC are needed for agreement with the data. A mean field wave function for the nucleus would require a very unreasonable (exponentially falling with $s$) energy dependence of the $pp$ scattering cross section in order to explain the observed strength of the cross section at $\alpha<1$. Moreover, the agreement with the data using the same value of the $a_2$ parameter as obtained from electronuclear reactions indicates that we are dealing with a genuine property of the nucleus that does not depend on a specific probe.

The Transverse ($p_{y}$) Distributions {#Vb}
--------------------------------------

As discussed in Sections \[II\] and \[III\], we expect the transverse missing momentum of the quasielastic $A(p,2p)X$ cross section to be sensitive mainly to the dynamics of the ISI/FSI. Studies of electronuclear $A(e,e'p)X$ reactions, in which the FSI occurs through the rescattering of only one knocked-out proton, demonstrated that the eikonal approximation can describe the FSI with better than 10% accuracy (see e.g. [@Garrow]).
This indicates that the expected level of accuracy in calculations of the ISI/FSI in $A(p,2p)X$ reactions, in which one incoming and two outgoing protons undergo soft rescatterings, will be on the order of 15-20%. Keeping these accuracies in mind, we compare the theoretical calculations with the data, checking how well the probabilistic approximation of the ISI/FSI can reproduce the shape of the transverse missing momentum distribution. The following kinematical constraints are imposed in the Monte Carlo calculations:

- middle target: $0.82<\alpha<0.92$;
- upstream target: $0.74<\alpha<0.84$;
- $\mid\theta_3-\theta_4\mid<0.06$ rad (for all target positions);
- $60^{0}<\theta_{cm}<120^{0}$ (for all target positions).

The calculations include all the effects discussed in Chapters 2 and 3 (i.e. ISI/FSI, EMC, CT) and the strength of the SRC defined by $a_2=5$. Figure 16 shows the comparison between the measured and calculated transverse $p_{y}$ distributions. The theoretical and experimental distributions are normalized to 1000 in the first bin, so only the difference in shape between them is relevant. They are for the combined 5.9 GeV/c and 7.5 GeV/c data and the upstream target ($\alpha = 0.79 \pm 0.05$). See Chapter 4 for the detailed procedure of combining the 5.9 GeV/c and 7.5 GeV/c data sets; we followed the same procedure in the calculations. Figure 17 shows a comparison similar to Figure 16 for the kinematics of the middle target ($\alpha=0.87 \pm 0.05$). The calculations presented in Figures 16 and 17 overestimate the data at transverse missing momenta above $0.2 ~GeV/c$. There are several reasons for such a discrepancy. First, one should notice that the tail of the distribution above $p_{y}=200~MeV/c$ is only $\sim 10\%$ of the peak value at $p_{y}=0$.
Since the calculation and the data are normalized at the maximum, even a small discrepancy between the calculation and the data at $p_{y}=0$ will produce a large discrepancy at large values of $p_y$. Next, this discrepancy may indicate the limit of applicability of the probabilistic approximation of the ISI/FSI. In this approximation we neglected the interference terms, which may contribute at large values of transverse momenta. Indeed, as the complete calculation of the $d(p,2p)n$ reaction demonstrated [@FPSS97], the interference terms are not negligible at $p_t\ge 150-200~MeV/c$, and their contribution tends to diminish the overall cross section. Another reason for the discrepancy may be the fact that, within the eikonal approximation, starting at transverse missing momenta $\geq 150-200~MeV/c$, the ISI and FSI are dominated by incoherent elastic rescatterings, which enhance the cross section of the nuclear reaction (see Ref. [@Yennie] for details). It was observed in Refs. [@EFGMSS; @FGMSS] that incoherent elastic rescatterings are much more sensitive to the CT phenomenon than the nuclear absorption is. The qualitative reason is that the absorption is proportional to the total cross section of the PLC-N interaction, $\sigma^{tot}_{PLC,N}$, while the incoherent elastic rescattering is proportional to $(\sigma^{tot}_{PLC,N})^2$. Thus the overestimate of the calculation may indicate that the onset of CT is stronger than modeled in the calculations (see Section \[II\]). Note that a noticeable ($\sim 20\%$) change in the strength of the incoherent elastic rescattering will result in only a $\sim 5\%$ change of the absorption; thus such a modification of the size of the CT effect will still maintain the agreement of the calculation with the transparency data of Ref. [@Carroll]. From the above discussion we can only conclude that the strength of the high transverse momentum distributions is generated by the ISI/FSI.
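The different CT sensitivities of absorption ($\propto \sigma^{tot}_{PLC,N}$) and incoherent elastic rescattering ($\propto (\sigma^{tot}_{PLC,N})^2$) can be checked with a one-line estimate. The optical thickness $\sigma\rho L$ below is a hypothetical value chosen for illustration:

```python
import math

tau = 0.5          # hypothetical optical thickness sigma*rho*L of the medium

# Suppose CT reduces the incoherent elastic rescattering strength
# (which scales as sigma^2) by 20%; sigma itself then scales as sqrt(0.8).
f_sigma = math.sqrt(0.80)

T0 = math.exp(-tau)            # absorption factor with the full sigma
T1 = math.exp(-tau * f_sigma)  # absorption factor with the reduced sigma
change = (T1 - T0) / T0
print(f"20% change in elastic rescattering -> {change:+.1%} in absorption")
```

For this choice of $\tau$, a sizable change in the rescattering strength alters the absorption factor by only a few percent, consistent with the estimate quoted above.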
However, both an improved theoretical calculation of the ISI/FSI and better experimental resolution are needed to understand the details of the dynamics behind the strength of the high transverse momentum distributions.

Summary
=======

We present a theoretical analysis of the first published data on the high momentum transfer quasielastic $C(p,2p)X$ reaction. First, we outline the light cone plane wave impulse approximation, in which the high momentum component of the nuclear wave function is treated within a two-nucleon short range correlation model. Within the same model it was predicted in Ref. [@FLFS] that the $\alpha$-distribution of the $A(p,2p)X$ cross section is shifted to smaller values of $\alpha$, thereby enhancing the contribution from SRC. We further develop the SRC model, taking into account the medium modification of the bound nucleon as well as the initial and final state reinteractions of the incoming and two outgoing protons in the nuclear medium, combined with color transparency effects. For the nuclear medium modification we demonstrated that, within the color screening model, which describes the available electroproduction data reasonably well, the strength of the SRC is not obscured. Furthermore, we demonstrated that in the high energy regime the $\alpha$-distribution of the bound proton is practically unaltered by the ISI/FSI. As a result, the $\alpha$-distribution of the $C(p,2p)X$ cross section reflects the genuine distribution of the bound proton in the nucleus. We also showed that the transverse missing momentum distribution is strongly sensitive to the dynamics of the initial and final state reinteractions, and we discussed its potential use for studying effects related to the color transparency phenomenon. In addition to the $\alpha$ and $p_t$ distributions, we discussed the dependence of the cross section on the total longitudinal momentum of the two outgoing protons.
It indicates the existence of a nuclear “boosting” effect, in which the longitudinal momentum of the two outgoing protons is larger than the momentum of the incoming proton. This result is in qualitative agreement with the new data recently obtained at EVA [@EVA]. After briefly describing the experiment, we proceed with a comparison of the theoretical calculations with the data. The comparison demonstrates that the theoretical expectation of the $\alpha$ shift, based on scaling in hard elastic scattering off a bound nucleon in the nucleus, is correct. The physical meaning of these shifts is that hard quasi-elastic $pp$ scattering is sensitive to the high momentum components of the nuclear wave function. One observes that the momentum tail of the nuclear wave function needed to explain the data is significantly larger than what is expected from the mean field approximation. The value of the two-nucleon SRC strength needed to describe the data is in agreement with the SRC strength obtained from electronuclear reactions. The analysis of the transverse missing momentum distribution shows that it is very sensitive to the mechanism of the ISI/FSI, and both improved calculations and better data are needed to understand the details of the dynamics that generates the high transverse momentum strength. Thus the studies of the transverse momentum distribution may emerge as an additional tool for studying the color transparency phenomenon.

Acknowledgements {#acknowledgements .unnumbered}
================

Part of the data related to the $p_y$ distribution have not been published before. We would like to thank the EVA collaboration for allowing us to present them in this paper. The authors are thankful to the EVA collaboration, especially to the spokespersons S. Heppelmann and A. Carroll, for very useful discussions. Special thanks to Y. Mardor for providing the details of her analysis of the experimental data. M.
Sargsian gratefully acknowledges a contract from Jefferson Lab under which this work was done. The Thomas Jefferson National Accelerator Facility (Jefferson Lab) is operated by the Southeastern Universities Research Association (SURA) under DOE contract DE-AC05-84ER40150. This work is also supported by DOE grants under contracts DE-FG02-01ER-41172 and DE-FG02-93ER-40771, as well as by the U.S.-Israel Binational Science Foundation and the Israel Science Foundation founded by the Israel Academy of Sciences and Humanities.

S.J. Brodsky and G.R. Farrar, Phys. Rev. Lett. [**31**]{}, 1153 (1973); Phys. Rev. [**D11**]{}, 1309 (1975); V. Matveev, R.M. Muradyan and A.N. Tavkhelidze, Lett. Nuovo Cimento [**7**]{}, 719 (1973). N. Isgur and C.H. Llewellyn Smith, Phys. Rev. Lett. [**52**]{}, 1080 (1984); Phys. Lett. [**B217**]{}, 535 (1989). A. Radyushkin, Acta Phys. Pol. [**B15**]{}, 403 (1984). S. J. Brodsky, C. E. Carlson and H. J. Lipkin, Phys. Rev. [**D20**]{}, 2278 (1979). G. R. Farrar, S. Gottlieb, D. Sivers and G. H. Thomas, Phys. Rev. [**D20**]{}, 202 (1979). G. P. Ramsey and D. Sivers, Phys. Rev. [**D52**]{}, 116 (1995). P. Landshoff, Phys. Rev. [**D10**]{}, 1024 (1974); P. Landshoff and D. Pritchard, Z. Phys. [**C6**]{}, 69 (1980). J. Botts and G. Sterman, Nucl. Phys. [**B325**]{}, 62 (1989). C. Bourrely and J. Soffer, Phys. Rev. [**D35**]{}, 145 (1987). L.L. Frankfurt and M.I. Strikman, Phys. Rep. [**160**]{}, 235 (1988). G. R. Farrar, H. Liu, L. L. Frankfurt and M. I. Strikman, Phys. Rev. Lett. [**62**]{}, 1095 (1989). Y. Mardor [*et al.*]{}, Phys. Lett. [**B437**]{}, 257 (1998). R. Feynman, [*Photon - Hadron Interactions*]{}, W.A. Benjamin Inc., 1972. L.L. Frankfurt and M.I. Strikman, Phys. Rep. [**76**]{}, 214 (1981). L. L. Frankfurt, M. I. Strikman, D. B. Day and M. Sargsian, Phys. Rev. [**C48**]{}, 2451 (1993). C. Ciofi degli Atti, S. Simula, L. L. Frankfurt and M. I. Strikman, Phys. Rev. [**C44**]{}, 7 (1991). S. J. Brodsky and G. F. de Teramond, Phys. Rev. Lett.
[**60**]{}, 1924 (1988). D. Sivers, S.J. Brodsky and R. Blankenbecler, Phys. Rep. [**23**]{}, 1 (1976). L. Frankfurt, E. Piasetzky, M. Sargsian and M. Strikman, Phys. Rev. [**C51**]{}, 890 (1995). J.J. Aubert [*et al.*]{} (EM Collaboration), Phys. Lett. B [**123**]{}, 275 (1983). L. L. Frankfurt and M. I. Strikman, Nucl. Phys. [**B250**]{}, 143 (1985). M. R. Frank, B. K. Jennings and G. A. Miller, Phys. Rev. C [**54**]{}, 920 (1996). L. L. Frankfurt, M. I. Strikman and M. B. Zhalov, Phys. Rev. [**C50**]{}, 2189 (1994). R.L. Jaffe, F.E. Close, R.G. Roberts and G.G. Ross, Phys. Lett. [**B134**]{}, 449 (1984). W. Melnitchouk, M. Sargsian and M. I. Strikman, Z. Phys. A [**359**]{}, 99 (1997). L. Frankfurt, G. A. Miller and M. Strikman, Phys. Rev. Lett. [**68**]{}, 17 (1992). I. Mardor, Y. Mardor, E. Piasetzky, J. Alster and M. M. Sargsian, Phys. Rev. C [**46**]{}, 761 (1992). L. L. Frankfurt, M. M. Sargsian and M. I. Strikman, Phys. Rev. C [**56**]{}, 1124 (1997). L. L. Frankfurt, E. Piasetzky, M. M. Sargsian and M. I. Strikman, Phys. Rev. C [**56**]{}, 2752 (1997). M. M. Sargsian, Int. J. Mod. Phys. E [**10**]{}, 405 (2001). A. S. Carroll [*et al.*]{}, Phys. Rev. Lett. [**61**]{}, 1698 (1988). A. Leksanov [*et al.*]{}, Phys. Rev. Lett. [**87**]{}, 212301 (2001). G. R. Farrar, H. Liu, L. L. Frankfurt and M. I. Strikman, Phys. Rev. Lett. [**61**]{}, 686 (1988). L. Frankfurt, G. A. Miller and M. Strikman, Comments Nucl. Part. Phys. [**21**]{}, 1 (1992). N. Makins [*et al.*]{} (NE18 Collaboration), Phys. Rev. Lett. [**72**]{}, 1986 (1994). Y. Mardor [*et al.*]{}, Phys. Rev. Lett. [**81**]{}, 5085 (1998). J. P. Ralston and B. Pire, Phys. Rev. Lett. [**61**]{}, 1823 (1988). B. K. Jennings and G. A. Miller, Phys. Lett. B [**318**]{}, 7 (1993). M.A. Shupe [*et al.*]{},
[*EVA, a solenoidal detector for large angle exclusive reactions: Phase I - determining color transparency to 22 GeV/c*]{}, Experiment E850 Proposal to Brookhaven National Laboratory, 1988 (unpublished). S. Durrant, PhD thesis, Pennsylvania State University, 1994 (unpublished). Y. Mardor, PhD thesis, Tel Aviv University, 1997 (unpublished). K. Garrow [*et al.*]{}, arXiv:hep-ex/0109027 (2001). D.R. Yennie, [*in Hadronic Interactions of Electrons and Photons*]{}, edited by J. Cummings and D. Osborn (Academic, New York, 1971), p. 321. K.Sh. Egiyan, L.L. Frankfurt, W. Greenberg, G.A. Miller, M.M. Sargsian and M.I. Strikman, Nucl. Phys. A [**580**]{}, 365 (1994). L.L. Frankfurt, G.A. Miller, W. Greenberg, M.M. Sargsian and M.I. Strikman, Z. Phys. A [**352**]{}, 97 (1995).

[^1]: Since the $z$ direction is chosen as the direction of the incoming proton momentum, the “-” component corresponds to the light cone longitudinal momentum, which is conserved at the scattering vertices.
---
abstract: 'Water-worlds are water-rich ($>$ 1 wt% H$_2$O) exoplanets. The classical models of water-worlds considered layered structures determined by the phase boundaries of pure water. However, water-worlds are likely to possess comet-like compositions, with between $\sim$ 3 mol% and 30 mol% CO$_2$ relative to water. In this study, we build an interior structure model of habitable (i.e. surface-liquid-ocean-bearing) water-worlds using the latest results from experimental data on the CO$_2$-H$_2$O system, to explore the CO$_2$ budget and to localize the main CO$_2$ reservoirs inside these planets. We show that CO$_2$ dissolved in the ocean and trapped inside a clathrate layer cannot accommodate a cometary amount of CO$_2$ if the planet accretes more than 11 wt% of volatiles (CO$_2$ + H$_2$O) during its formation. We propose a new, potentially dominant, CO$_2$ reservoir for water-worlds: CO$_2$ buried inside the high-pressure water ice mantle as CO$_2$ ices or carbonic acid monohydrate (H$_2$CO$_3$ $\cdot$ H$_2$O). If insufficient amounts of CO$_2$ are sequestered either in this reservoir or in the planet’s iron core, habitable zone water-worlds could generically be stalled in their cooling before liquid oceans have a chance to condense.'
author:
- 'Nadejda Marounina and Leslie A. Rogers'
title: 'Internal Structure and CO$_2$ Reservoirs of Habitable Water-Worlds'
---

Introduction
============

Water-rich exoplanets ($>$1% water by mass) will be the next accessible targets on the path toward the observation of Earth-like planets [e.g. @Beichman:2014]. Water-rich planets possess lower densities and larger radii than terrestrial Earth-like planets of similar masses, making them more amenable to observation and characterization by future surveys. [@Kuchner:2003] and [@Leger:2004fh] were the first to propose the existence of water-rich planets, suggesting that they would form beyond the snow line and accrete comet-like proportions of rocky material and ices.
Then, interactions with the protoplanetary disk or with other bodies in the system would bring them into the habitable zone (HZ) of their star, forming a global water ocean at their surfaces. Since then, @Luger:2015de showed that photo-evaporation of a H$_2$/He envelope from a mini-Neptune could be another formation pathway for water-worlds, especially relevant for planets in the HZs of M dwarfs. Additionally, several theoretical studies predict an efficient accretion of volatiles during planet assembly (especially in scenarios with low-mass host stars and long-lived protoplanetary disks), forming planets with up to $\sim$50% water by mass [@Raymond:2004de; @Alibert:2017; @Kite:2018]. Thus, water-rich planets are possibly numerous around M dwarfs and will be prime targets for atmospheric characterization in the near future. Indeed, the recent discovery and characterization of the TRAPPIST-1 system has shown that it may contain HZ planets with several tens of percent of water by mass [@Gillon:2017fw; @Unterborn:2018ep; @Grimm:2018; @Unterborn:2018gr]. At the present time, a widely accepted terminology for water-rich exoplanets does not exist, and terms such as “water-world” or “ocean planet” may have different meanings from paper to paper. Here, we choose to call a “water-world” any planet with sufficient quantities of volatiles that it could form a high-pressure water ice layer given a favorable interior temperature profile (regardless of whether the planet actually cools sufficiently for the high-pressure ice to form). By this definition, a water-world could have a subsurface ocean, a global or partial surface ocean, or, in the most extreme cases, no ocean at all, with the volatile-rich envelope entirely in a supercritical or vapor state. The common definition of the HZ does not apply to water-worlds, and the habitability of water-worlds is still poorly constrained.
Computations of the HZs mostly focus on Earth-like planets, where continental and seafloor weathering stabilizes the concentration of CO$_2$ in the atmosphere, providing a negative feedback [e.g. @Walker:1981uq; @Kasting:1993; @Kasting:2003bp; @Kopparapu:2013]. Such a feedback is probably not active for habitable water-worlds, i.e. water-worlds with global surface water oceans, because silicates are likely to be isolated from liquid water by a high-pressure ice mantle [@Leger:2004fh; @Sotin:2007fh; @Fu:2010ch; @Levi:2013eq; @Levi:2017gv]. Assuming that the high-pressure ice layer precludes chemical exchanges between the silicate layers and liquid water, the partial pressure of CO$_2$ in the atmosphere would be controlled by the total amount of CO$_2$ in the hydrosphere (i.e. the volatile-dominated layers of the planet) and the temperature profile inside of these volatile-rich layers. Water-worlds are likely to possess CO$_2$-rich bulk compositions. @Kuchner:2003 [@Leger:2004fh; @Selsis:2007jx] proposed that water-worlds would initially form with comet-like compositions. In comets, CO$_2$ is the second most abundant volatile after H$_2$O, ranging from $\sim$3 mol% to 30 mol% relative to water [e.g. @Bockelee:2004; @Mumma:2011jn; @Ootsubo:2012cg]. For these planets, the pressure at the ice-silicate interface exceeds the limiting pressure for volcanic degassing [$\sim$0.6 GPa, @Kite:2009kv]. Therefore, CO$_2$ is not supplied to the hydrosphere by volcanism from the silicate layers of the planet, and the total CO$_2$ mass in the hydrosphere does not vary substantially after the planet forms and differentiates. In the scenario where water-worlds form by photo-evaporation [@Luger:2015de], the amount of CO$_2$ left in the planet’s atmosphere would depend on the overall CO$_2$ accretion and escape history. @Luger:2015de propose that planets that have lost their hydrogen envelopes may still possess high-density atmospheres with considerable amounts of CO$_2$.
Only a small fraction of the CO$_2$ accreted by a water-world can reside in the planet’s atmosphere if the water-world is to maintain a temperate surface temperature and a surface liquid water ocean. Indeed, even the highest CO$_2$ partial pressures allowed inside of the HZ (P$_{CO_2}\sim$100 bar, Fig. \[fig:HZ\]) correspond to less than 1 wt% CO$_2$ relative to the total volatile content of the planet. Depending on the orbital separation of the planet, the host star, and the planet’s history, higher amounts of CO$_2$ in the atmosphere could lead either (i) to the evaporation of the liquid oceans or (ii) to preventing liquid oceans from condensing in the first place, as the planet is impeded from cooling from its post-accretion hot state. To avoid this situation, much of the CO$_2$ accreted by the planet would need to be sequestered in the planetary interior, separate from the atmosphere, because CO$_2$ reservoirs in contact with the atmosphere could be easily destabilized by temperature changes [@Kitzmann:2015; @Levi:2017gv]. The water-dominated layers of the hydrosphere can store only a limited amount of CO$_2$. If these layers are *saturated*, the *excess* carbon dioxide would form a new, separate phase. To set an upper limit on the amount of CO$_2$ that can be stored in these water-rich layers, here we consider the extreme scenario in which they are fully saturated. Therefore, in the rest of this work, we distinguish between CO$_2$ *saturated* reservoirs (which designate the storage of CO$_2$ by saturating water-dominated layers) and *excess* CO$_2$ reservoirs (i.e. CO$_2$ reservoirs that would form if the water-dominated layers are saturated). One example of a saturated reservoir is the water ocean, while an example of an excess reservoir is the atmosphere.
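To make the $\sim$100 bar limit concrete, one can compare the mass of a 100 bar CO$_2$ atmosphere with a water-world's total volatile inventory. The back-of-the-envelope sketch below assumes illustrative Earth-like values for the planet mass and radius and a 10 wt% bulk volatile content; all numbers are assumptions for illustration, not values from this work:

```python
import math

# Hedged estimate: mass of a 100 bar CO2 atmosphere versus the planet's
# total volatile inventory. All planetary parameters are illustrative
# (Earth-like) assumptions.

G = 6.674e-11           # gravitational constant [m^3 kg^-1 s^-2]
M_p = 5.97e24           # planet mass [kg] (assumed Earth-like)
R_p = 6.37e6            # planet radius [m] (assumed Earth-like)
X_v = 0.10              # assumed bulk volatile mass fraction (10 wt%)
P_atm = 100e5           # CO2 surface partial pressure [Pa] (100 bar)

g = G * M_p / R_p**2                            # surface gravity [m s^-2]
M_atm = 4.0 * math.pi * R_p**2 * P_atm / g      # atmosphere mass [kg]
M_v = X_v * M_p                                 # total volatile mass [kg]

co2_fraction = M_atm / M_v    # atmospheric CO2 relative to all volatiles
print(f"{co2_fraction:.2%}")  # well below 1 wt%
```

With these assumed numbers the 100 bar atmosphere holds only of order 0.1 wt% of the volatile inventory, consistent with the statement above.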
Previous work on water-rich exoplanets explored the potential interactions between the ocean and the atmosphere in great detail, but has not examined whether the overall CO$_2$ budget and the distribution of CO$_2$ throughout the hydrosphere would even allow the presence of liquid water at the surface of water-worlds. @Kitzmann:2015 was the first to compute the HZ of water-worlds possessing liquid water oceans at their surfaces. That study constrains the size of the HZ for a range of partial pressures of CO$_2$. It also points out that a solubility-controlled CO$_2$ abundance in the atmosphere constitutes an unstable CO$_2$ feedback cycle: any perturbation in temperature or atmospheric CO$_2$ content could lead to a runaway greenhouse or to the freezing of the surface. The extensive study of @Levi:2017gv proposed a mechanism to stabilize the CO$_2$ partial pressure in the atmospheres of water-worlds by accounting for a wind-driven surface circulation and sea-ice formation at higher latitudes [see also @Ramirez:2018wn]. To date, the hydrosphere structures, CO$_2$ contents and CO$_2$ reservoirs in the interiors of these CO$_2$-rich water-worlds are poorly constrained, and this is what we aim to study here. Here, we build a planet interior structure model using the latest results from experimental data on the CO$_2$-H$_2$O system to explore the CO$_2$ budget and to localize the main CO$_2$ reservoirs inside of water-worlds. The classical models of water-worlds [@Kuchner:2003; @Leger:2004fh; @Selsis:2007jx] considered layered structures determined by the phase boundaries of pure water. As a first application of our new water-world interior structure model, we perform the thought experiment of considering complete saturation in CO$_2$ of these water-dominated layers. By quantifying the maximum amount of CO$_2$ that can be stored inside each of these water-rich layers, we assess whether cometary amounts of CO$_2$ can be accommodated in the classical structure models for water-worlds.
Section \[sec:Model\] describes the thermodynamic and planetary model that we developed. We apply the model to quantify the potential saturated and excess CO$_2$ reservoirs in the hydrospheres of water-worlds in Sections \[sec:ResultsSaturated\] and \[sec:ResultsExcess\], respectively. We discuss the limitations of our model and the uncertainties of the current equations of state in Section \[sec:Discussion\] and summarize our conclusions in Section \[sec:Conclusion\]. Model {#sec:Model} ===== We consider H$_2$O-and-CO$_2$-rich, fully differentiated planets. The internal structure of these planets is differentiated into a rocky core with a roughly Earth-like Fe/silicate ratio surrounded by upper volatile-rich layers, which we call the “hydrosphere” (cf. Fig. \[fig:struct\]). Our study focuses on the structure of the hydrosphere. ![\[fig:struct\] An example of the interior structure of a water-world, for 33% water by mass and an isotherm of 300 K in the ocean. Layer thicknesses are true to scale.](f1.pdf){width="1\linewidth"} Depending on the temperature and pressure profile in the hydrosphere, phases such as high-pressure ice, clathrate hydrates of CO$_2$, solid or liquid CO$_2$, liquid water or gas could form (Fig. \[fig:phasediagram\]). We compute the partitioning of water and CO$_2$ between these phases for a range of planetary masses ($M_p$), volatile (H$_2$O+CO$_2$) mass fractions relative to the total planet mass ($X_v=M_v/M_p$, where $M_v=M_{H_2O}+M_{CO_2}$ is the total volatile mass), mass fractions of CO$_2$ relative to the total volatile mass ($X_{CO_2}=M_{CO_2}/M_v$), and an assumed temperature-pressure profile. As a result, we obtain the internal structure of the hydrosphere, i.e., the separation of the hydrosphere into several layers assuming thermodynamic equilibrium. For this study, the silicate and iron parts of the planet contribute to the model only in setting the mass-radius boundary conditions at the base of the hydrosphere.
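The bookkeeping implied by these definitions can be sketched in a few lines; the numerical values below are illustrative placeholders (echoing the 33% volatile example of Fig. \[fig:struct\]), not model results:

```python
# Bookkeeping of the model inputs, as defined in the text:
#   X_v   = M_v / M_p      (volatile mass fraction of the planet)
#   X_CO2 = M_CO2 / M_v    (CO2 fraction of the volatiles)
# Numerical values are illustrative assumptions, not results.

M_earth = 5.97e24            # [kg]

M_p = 2.0 * M_earth          # assumed planet mass
X_v = 0.33                   # assumed volatile mass fraction (33 wt%)
X_CO2 = 0.10                 # assumed CO2 fraction of the volatiles

M_v = X_v * M_p              # total volatile mass (H2O + CO2)
M_CO2 = X_CO2 * M_v          # CO2 mass in the hydrosphere
M_H2O = M_v - M_CO2          # water mass
M_rock = M_p - M_v           # rocky (silicate + iron) core mass
```

The three masses on the last lines necessarily sum back to $M_p$, which is the closure condition the interior model must satisfy.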
As we explore planets with surface water oceans, the surface temperature (i.e., the temperature at the ocean-atmosphere interface) must be between 273.15 K and 400 K. For surface temperatures lower than 273.15 K, the ocean has an icy crust at its surface, which would make the liquid water ocean challenging to detect by current and planned surveys. At the other extreme, $\sim$400 K is the highest temperature tolerated by life as we know it [@Holden:2010ep; @Corkrey:2014cp]. ![\[fig:phasediagram\] P-T projection of the CO$_2$-H$_2$O phase diagram, showing the two- and three-phase coexistence curves and the CO$_2$ critical point. Annotations specify the various phases, including the vapor phase V, water-rich liquid L$_w$, CO$_2$-rich liquid L$_c$, fluid CO$_2$ phase (above the critical temperature of CO$_2$) F$_c$, hydrate phase H, CO$_2$ ice I$_{CO_2}$ and ices VI and VII. The filled red circle denotes the critical point of CO$_2$, while the open red circles Q$_1$ and Q$_2$ are quadruple points where there is a coexistence of liquid water, vapor, hydrate and water ice Ih (Q$_1$) and of liquid water, vapor, hydrate, and liquid CO$_2$ (Q$_2$). The ice curves are from @Abramson:2017ie, the clathrate curve is from TREND (see the comparison with experimental data in Fig. \[fig:comp\_trend\_exp\]), and the values of the quadruple points and of the triple lines that cross them are from @Wendland:1999jy. ](f2.pdf){width="1\linewidth"} We focus on planets with sufficient water to form high-pressure ice mantles, wherein reactions between the silicate rocks and liquid water are suppressed and assumed to be negligible. This assumption is commonly made in the study of water-worlds [e.g. @Fu:2010ch; @Kitzmann:2015; @Levi:2017gv]. The absence of significant liquid water-rock chemical interactions has two important consequences for the internal structure and CO$_2$ reservoirs of water-worlds.
First, though the formation of carbonates — the entrapment of CO$_2$ as (Ca, Mg, Fe)CO$_3$ — is an important carbon reservoir on the Earth, it is negligible for water-worlds. Second, in the absence of additional chemical agents that could drive the pH (i.e. salts) and increase the speciation of CO$_2$, we can use an equation of state for pure H$_2$O-CO$_2$ mixtures to model water-worlds. We further discuss and quantify the effect of water-rock interactions in § \[sec:CO2vsRocks\]. Fig. \[fig:phasediagram\] shows the possible phases of CO$_2$+H$_2$O mixtures over the temperature and pressure ranges relevant to habitable water-worlds. Areas between lines represent ranges of temperatures and pressures where one phase can exist, or where two phases are in equilibrium. Lines trace three-phase or two-phase equilibria. Lastly, points mark both specific combinations of temperature and pressure where four phases are in equilibrium (also called quadruple points) and the critical point of CO$_2$. Vapor (V) and water-rich liquid phases (L$_w$) coexist in equilibrium at the low-pressure end of this diagram. At higher pressures (P$\gtrsim$1.2 MPa) and low temperatures (T$<$294 K), CO$_2$+H$_2$O mixtures form a phase called clathrate hydrate. Clathrate hydrate is a crystalline structure consisting of a lattice of water molecules, organized as cages entrapping guest molecules (in this case CO$_2$). For the structure to be stable, guest molecules have to occupy a minimum fraction of the water cages. The total occupancy of the cages varies with temperature and pressure [@Sloan:2007cl]. Clathrate hydrates naturally occur on Earth [@Sloan:2007cl] and their presence has been hypothesized on Mars [e.g. @Kite:2017bz] and on icy satellites [@Tobie:2006fj; @Choukroun:2010ca]. At even higher pressures (P$\gtrsim$4.5 MPa), CO$_2$ may condense into a liquid (noted L$_c$ in Fig. \[fig:phasediagram\]).
Under these conditions, instead of a vapor-liquid equilibrium, we have a liquid-liquid equilibrium between a water-rich liquid and a CO$_2$-rich liquid. For the highest range of pressures explored in this study (480 MPa and above), CO$_2$ ice (I) and polymorphs of high-pressure water ice form. Atmosphere ---------- We use the 1-D radiative-transfer/climate model CLIMA, originally developed by @Kasting:1986ix, and most recently updated in calculations of the HZs of Earth-like exoplanets [@Kopparapu:2013; @Kopparapu:2014]. CLIMA uses a correlated-k method to calculate the absorption coefficients of spectrally active gases both for the incoming shortwave stellar radiation (in 38 solar spectral intervals ranging from 0.2 to 4.5 $\mu$m) and for the outgoing longwave IR radiation (in 55 spectral intervals spanning wavenumbers from 0 to 15,000 cm$^{-1}$). The 2-stream multiple scattering method of @Toon:1989 is used to calculate the radiative heating rate in each of the 101 atmospheric layers. For the inner edge of the HZ, the assumed atmospheric pressure-temperature profile consists of a moist pseudoadiabat extending from the surface up to an isothermal (200 K) stratosphere, as described in @Kasting:1988gd. For the outer edge of the HZ, a moist H$_2$O adiabat is assumed in the lower troposphere, and when condensation is encountered in the upper troposphere, a moist CO$_2$ adiabat is used, as described in Appendix B of @Kasting:1991. In this study, we focus on “habitable” water-worlds, i.e. water-worlds with liquid water at their surfaces. In these cases, gases in the atmosphere dissolve into the ocean. To place an upper limit on the volatile content of *saturated* reservoirs, we assume the atmospheric species to be in thermodynamic phase equilibrium with the liquid water layer. Consequently, at the interface between the ocean and the atmosphere, the temperature, pressure and chemical potentials of all of the chemical species constituting these layers are equal.
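At the low pressures relevant near the ocean-atmosphere interface, this equality of chemical potentials reduces, to first order, to Henry's law. The sketch below is not part of the CLIMA/TREND machinery; it only illustrates the order of magnitude of the dissolved CO$_2$ at the ocean surface, with an assumed room-temperature Henry constant for CO$_2$ in water:

```python
# First-order estimate of ocean-surface CO2 content via Henry's law,
# x_CO2 = P_CO2 / K_H. The Henry constant (~167 MPa near 298 K, in the
# mole-fraction convention) is an illustrative literature value; the
# actual model enforces full chemical-potential equality via TREND 3.0.

K_H = 167e6        # assumed Henry constant for CO2 in water [Pa]
P_CO2 = 5e5        # CO2 partial pressure [Pa] (5 bar, for illustration)

x_CO2 = P_CO2 / K_H             # dissolved CO2 mole fraction at the surface
print(f"x_CO2 ~ {x_CO2:.4f}")   # of order a few tenths of a mol%
```

This surface value grows with pressure at depth, which is why the deep ocean and clathrate layers, rather than the uppermost ocean, dominate the saturated reservoirs discussed later.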
While CLIMA handles the calculation of the atmospheric radiative transfer and pressure-temperature profile in our model of water-worlds, phase changes and phase equilibria occurring in the fluid and solid water-rich layers of the hydrosphere are computed using the TREND 3.0 software [@TREND:2016wr]. Equation of State for CO$_2$-H$_2$O Mixtures -------------------------------------------- To estimate the potential CO$_2$ reservoirs of HZ water-rich exoplanets, we need to model the behavior of CO$_2$-H$_2$O mixtures at temperatures corresponding to “habitable” planet surfaces (273-400 K) and at pressures up to the formation of high-pressure ice ($\sim$3.2 GPa). For this temperature range and at low to moderate pressures (up to a few MPa), there are numerous equations of state for the H$_2$O-CO$_2$ system that compute both the behavior of individual phases and phase equilibria [e.g. @Carroll:1991; @Spycher:2003fb; @Hu:2007ca]. However, computing the behavior of CO$_2$+H$_2$O mixtures up to several GPa is currently still a challenge due to the lack of experimental data at pressures above $\sim$1 GPa. The recent study of @Abramson:2017ie provides the first experimental data in this high-pressure region. We use the state-of-the-art equation of state (EOS) TREND 3.0 to model H$_2$O-CO$_2$ mixtures. TREND 3.0 is the reference EOS in the carbon capture, storage and transport industry, and is (to our knowledge) the best option to reproduce the behavior of the CO$_2$-H$_2$O system in the temperature and pressure ranges of relevance to ocean-bearing water-worlds [@Gernert:2016bc]. TREND 3.0 reproduces all of the available experimental data on the CO$_2$-H$_2$O system for pressures up to 100 MPa with low errors (typically lower than 2%, see e.g. Fig. \[fig:comp\_trend\_exp\] or the detailed comparisons in @Gernert:2016bc). TREND 3.0 is the result of decades of development by specialists in fluid behavior and in the construction of equations of state.
We describe TREND 3.0 in detail below. The model for CO$_2$-H$_2$O mixtures implemented in TREND 3.0 uses two empirical equations of state for pure fluids, explicit in the Helmholtz free energy: one for water [IAPWS, @Wagner:2002in] and one for CO$_2$ [@Span:1996cg]. Provided density $\rho$ and temperature $T$ as independent variables, these equations of state compute the ideal and residual parts of the Helmholtz energy and their first, second and third derivatives. By combining these derivatives, all thermodynamic properties can be calculated, including the thermodynamic potentials $u$ (specific internal energy), $h$ (specific enthalpy), $g$ (specific Gibbs free energy) or $p$ (pressure), $s$ (specific entropy), and $C_p$ (heat capacity at constant pressure), to cite a few of them. See the complete list of thermodynamic properties in Tab. 6.3 of @Wagner:2002in and Tab. 3 of @Span:1996cg. The range of temperatures, pressures, and compositions where reliable experimental data exist defines the initial range of validity of the pure water and pure CO$_2$ equations of state. For pure water, IAPWS is valid up to 1273 K and 1 GPa, and for pure CO$_2$ the primary range of validity of the equation of state goes up to 1100 K and 800 MPa. However, special care has been taken to ensure that these equations of state yield reasonable results when extrapolated beyond this initial range of validity. The density of water predicted by extrapolating IAPWS up to 3.5 GPa deviates from the experimental data of @Wiryana:1998kd by no more than $3.5\%$. For CO$_2$, the equation of state of @Span:1996cg reasonably describes the behavior of the pure substance along the Hugoniot curve up to the limits of the chemical stability of carbon dioxide [see Fig. 36 of @Span:1996cg]. Moreover, both of these equations of state perform well when compared to ideal curves up to very high pressures and temperatures.
Ideal curves [as defined by @Span:1997] are curves along which one property of a real fluid is equal to the corresponding property of the hypothetical ideal gas at the same temperature and density. Comparison to ideal curves has been shown to reliably predict the quality of extrapolations of empirical equations of state for pure substances [@Span:1997; @Deiters:1997fi; @Span:2013]. To combine the equations of state of the pure compounds and compute the properties of the various phases of CO$_2$-H$_2$O mixtures, TREND 3.0 blends multiple approaches that we detail in the following paragraphs. The overall methodology of TREND 3.0 is to compute the Gibbs energy of the CO$_2$-H$_2$O mixture at a given $p$ and $T$ and overall molar composition $z$, and then to minimize it. TREND 3.0 then determines: which phases are stable at the provided $T$, $p$ and $z$; the density of all stable phases; and the molar composition of each of these phases. Once the above parameters are evaluated, TREND 3.0 computes the thermodynamic and calorific properties of each phase. For details of the algorithms used in TREND 3.0, refer to @Kunz:2007fq. To compute the behavior of the CO$_2$-H$_2$O mixture from the pure fluid equations of state, TREND 3.0 uses the mixing rules from @Kunz:2007fq as adapted in @Gernert:2016bc. The mixing parameters are derived by fitting the experimental data of the CO$_2$-H$_2$O system, which include the small amount of CO$_2$ dissociated into HCO$_3^-$ and CO$_3^{2-}$. Consequently, the resulting equation of state that is fitted to these data implicitly accounts for this dissociation, in the absence of other solutes. TREND 3.0 is widely used in the carbon capture and storage (CCS) community and is regularly updated upon the arrival of new experimental data [e.g. @Kunz:2007fq; @Kunz:2012fd; @Lovseth:2018gn].
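The phase-selection step of this methodology can be caricatured as follows: at fixed $T$, $p$ and $z$, evaluate a Gibbs energy for each candidate phase and retain the most stable one. The toy sketch below uses placeholder Gibbs energies purely to illustrate the control flow; TREND 3.0 performs a full multiphase minimization, and nothing here reproduces its actual output:

```python
# Toy version of the phase-selection step: at a fixed (T, p, z) state,
# compare candidate Gibbs energies and keep the lowest (most stable).
# Candidate values are placeholders, not TREND 3.0 results.

def stable_phase(g_candidates):
    """Return the label of the candidate phase with the lowest Gibbs energy."""
    return min(g_candidates, key=g_candidates.get)

# Placeholder Gibbs energies [arbitrary units] for one hypothetical state:
candidates = {
    "vapor": -180.0,
    "liquid (water-rich)": -240.0,
    "clathrate hydrate": -255.0,
}
print(stable_phase(candidates))
```

In the real calculation the "candidates" are not single phases but phase assemblages, each with its own densities and molar compositions, which is why the minimization also yields the phase split itself.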
The model described in @Gernert:2016bc, EOS-CG, is based on the mathematical structure introduced in the GERG-2004 and GERG-2008 approaches [@Kunz:2007fq; @Kunz:2012fd] but is explicitly developed to provide an accurate description of the CO$_2$-H$_2$O mixture. The initial range of validity of EOS-CG extends up to 500 K and 0.1 GPa; within these ranges, the carefully vetted experimental data sets on CO$_2$-H$_2$O mixtures to which EOS-CG is fit agree to within $\pm$ 2% on the density of the mixture and to within $\pm$ 0.3 mol% on the solubility of CO$_2$ in water. @Gernert:2016bc could not extend the EOS-CG model fit to higher pressures and temperatures due to significant disagreements in the density and/or solubility measurements between experimental data sets; no data set could be identified as significantly more accurate than the others. Tests have shown that this mixing rule can reasonably be used outside this range of validity if larger uncertainties are acceptable [@Kunz:2012fd]. An estimate of the error introduced by extrapolating EOS-CG above 0.1 GPa is shown in Fig. \[fig:profil\] D. EOS-CG successfully describes the fluid phase that is in equilibrium with dry ice or CO$_2$-hydrate [@Gernert:2016bc]. TREND 3.0 detects the formation of CO$_2$ ice I using the model detailed in @Jager:2012jo, which proposes a thermal equation of state for solid carbon dioxide that is explicit in the Gibbs energy. The initial range of validity of this equation of state is 80 K $<$ T $<$ 300 K and 0 MPa $<$ P $<$ 500 MPa. However, even when compared to experimental data at pressures up to 10 GPa [for a 296 K isotherm, @Liu:1984jg; @Olinger:1982kx], this equation of state reproduces the density of solid CO$_2$ to within 3%. The formation of CO$_2$ clathrate hydrate is computed using the model described in a series of three papers: @Vins:2016kx [@Vins:2017hv] and @Jager:2016dr.
The model is inspired by @Sloan:2007cl and based on the statistical van der Waals and Platteeuw approach [@Vdwaals:1959cs]. It computes the chemical potential of water in the hydrate lattice, $\mu^H_w$. If this chemical potential is lower than the chemical potential of liquid water, computed with EOS-CG, then the clathrate hydrate phase is stable. ![Comparison between the prediction of TREND 3.0 and the experimental data for the triple line CO$_2$ clathrates/liquid water-rich phase/CO$_2$-rich fluid phase. Experimental data are summarized in @Sloan:2007cl.[]{data-label="fig:comp_trend_exp"}](f3.pdf){width="1\linewidth"} @Vins:2016kx have shown that the lattice parameter of the hydrates and the Langmuir constants, which quantify the molecular interactions between the water lattice and the CO$_2$ guest molecules, strongly influence the computed value of $\mu^H_w$. The water lattice parameter of clathrate hydrates is especially important at high pressures, while the Langmuir constants affect the computed filling fraction of the clathrate cages, and therefore the amount of CO$_2$ trapped in the clathrate layer. @Vins:2017hv describe the fitting procedure used to obtain all the necessary model parameters. @Jager:2016dr provide the results of the fitting, assess the performance of this model and detail its implementation in TREND 3.0. For the CO$_2$-H$_2$O mixture, this new model performs better than previous formulations [e.g. @Ballard:2002a; @Ballard:2002b]. The model is valid over the whole range of temperatures, pressures, and compositions of interest in this study (T=$\left[273K,400K\right]$, P up to $\sim$700 MPa, see Fig. \[fig:comp\_trend\_exp\]). TREND 3.0 does not account for the formation of high-pressure water ice. To detect the phase boundary of high-pressure water ice in contact with CO$_2$ ice, we use the scaling laws of @Abramson:2017hl for ices VI and VII.
In that study, the authors obtained and fitted experimental data for the triple line CO$_2$ ice/high-pressure water ice (either VI or VII, depending on the temperature)/liquid water. This triple line shows considerable deviations from the pure water system (e.g., a difference of 0.22 GPa at T = 363 K). Thus, water-world interior structure models that rely on the phase boundaries of pure water could incur significant errors. For the equation of state of ices VI, VII, and X (for the density and adiabatic temperature profiles of these phases), we use the formulation and the parameters summarized in @Noack:2016bh. Water-World Interior Structure ------------------------------ We use a planet interior structure model to self-consistently compute the radii and hydrosphere structure of water-worlds (Fig. \[fig:struct\]). As inputs to the model, we specify the mass of the planet’s rocky core M$_{rock}$ (assumed to have an Earth-like silicate-to-iron mass ratio), the planet’s volatile mass $M_{v}=M_p-M_{rock}=X_v M_p$, and the surface temperature. We assume an isothermal temperature profile in the liquid and clathrate layers, and an adiabatic temperature profile in the high-pressure ice layers [@Fu:2010ch; @Noack:2016bh]. To model a planet, we guess an initial planet radius (defined at the ocean-atmosphere boundary). Then, we integrate the equations of hydrostatic equilibrium and of the mass in a spherical shell inwards through the hydrosphere toward the center of the planet (taking care to use the equation of state of the appropriate phase). We continue integrating inward until the total mass balance of volatiles is satisfied (i.e., until the integrated mass of the hydrosphere equals M$_{v}$). We then compare the radius at the water-silicate boundary obtained from the integration, R$_{wsb}$, to the radius R$_{rock}$ of an Earth-composition core of mass M$_{rock}$ under the pressure overburden of the volatile envelope.
R$_{rock}$ is derived by interpolating the models for rocky cores from @Rogers:2011gz. If R$_{wsb}$ and R$_{rock}$ are too disparate, we adjust the radius of the planet at the ocean/atmosphere interface and repeat the inward integration through the hydrosphere. We iterate until R$_{wsb}$ and R$_{rock}$ agree to within 0.1%. By modeling isothermal oceans saturated with CO$_2$, we set a strict upper limit on the CO$_2$ mass dissolved in water and trapped in clathrates. Adiabatic temperature profiles are associated with convection. Convection would lead to mixing and to an evolution towards a near-constant concentration of CO$_2$ through the ocean, mediated by the CO$_2$ partial pressure in the atmosphere. Because the solubility of CO$_2$ increases with pressure (Section \[sec:profil\]), the solubility of CO$_2$ in the liquid ocean is at its lowest at the ocean-atmosphere interface. Consequently, convecting adiabatic oceans would lead to lower CO$_2$ contents for the planet than those estimated with isothermal saturated ocean profiles. Further, an adiabatic temperature profile within the clathrate layer would lead to thinner clathrate layers, because of the increase in temperature with depth. This would in turn diminish the clathrate layer’s capacity as a CO$_2$ reservoir. There is also physical motivation to consider non-convecting oceans on water-worlds. @Levi:2017gv showed that oceans on water-worlds might not possess a deep overturning circulation, due to the lack of an energy source. The circulation in Earth’s ocean relies on winds, lunar and solar tidal forcing, and the subsequent turbulent dissipation of internal waves on the seafloor topography. Water-worlds do not have strong topography at the ocean floor, and the oceans on these planets are considerably deeper than Earth’s ocean, so a steady-state global circulation would require significant energy sources.
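The radius-matching iteration described earlier in this section is essentially a shooting method: guess the outer radius, integrate inward through the hydrosphere, and adjust the guess until the water-silicate boundary lands on the rocky core. The sketch below replaces the phase-dependent equations of state and the interpolated rocky-core models with a constant hydrosphere density and a fixed core radius, purely for illustration; none of the numbers are model results:

```python
import math

# Schematic shooting method for the hydrosphere structure. Assumptions:
# constant hydrosphere density and a fixed rocky-core radius stand in
# for the phase-dependent EOS and interpolated core models of the text.

G = 6.674e-11
M_earth, R_earth = 5.97e24, 6.37e6

M_p = 2.0 * M_earth          # assumed planet mass
M_v = 0.33 * M_p             # assumed hydrosphere (volatile) mass
R_rock = 1.05 * R_earth      # assumed rocky-core radius (placeholder)
rho = 1300.0                 # assumed mean hydrosphere density [kg/m^3]

def r_wsb(R_guess, dr=1.0e3):
    """Integrate shell mass (and hydrostatic pressure) inward from R_guess
    until the accumulated mass reaches M_v; return the boundary radius."""
    r, m_hydro, m_enclosed, p = R_guess, 0.0, M_p, 5.0e5
    while m_hydro < M_v and r > dr:
        shell = 4.0 * math.pi * r**2 * rho * dr   # dm = 4 pi r^2 rho dr
        p += G * m_enclosed * rho / r**2 * dr     # dP/dr = -G m rho / r^2
        m_hydro += shell                          # (in the full model, p
        m_enclosed -= shell                       #  selects the local phase)
        r -= dr
    return r

# Bisect on the planet radius until R_wsb matches R_rock to 0.1%.
lo, hi = R_rock, 5.0 * R_earth
for _ in range(100):
    R_guess = 0.5 * (lo + hi)
    R_wsb = r_wsb(R_guess)
    if abs(R_wsb - R_rock) / R_rock < 1.0e-3:
        break
    if R_wsb > R_rock:   # hydrosphere filled up too early: shrink the guess
        hi = R_guess
    else:
        lo = R_guess
print(f"planet radius ~ {R_guess / R_earth:.2f} R_earth")
```

Bisection works here because the boundary radius returned by the inward integration increases monotonically with the guessed outer radius.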
@Levi:2017gv show that winds on water-worlds could provide enough energy for convection inside of a $\sim$1-km surface layer. If the internal heat flux on these planets is low [@Levi:2014bc], then the temperature profile inside the ocean will be conductive. CO$_2$ content of saturated reservoirs {#sec:ResultsSaturated} ====================================== Description of saturated reservoirs of water-worlds {#sec:profil} --------------------------------------------------- The hydrospheres of water-worlds present a variety of structures, depending on temperature. Examples of these possible ocean structures are plotted in Fig. \[fig:profil\]. In this figure, all of the profiles are plotted for an atmospheric pressure of 5 bar. We show in Section \[sec:atm\] that the amount of CO$_2$ in the atmosphere does not influence the results displayed in Fig. \[fig:profil\] or Fig. \[fig:bilan\]. For temperatures between 273 K and 279 K, the CO$_2$ solubility increases with pressure in the liquid ocean until 1.2 MPa, where a thick CO$_2$ clathrate hydrate layer forms. Fig. \[fig:profil\] A and E show an example of this profile for a T = 273 K ocean isotherm. The phase transition from liquid water to clathrate is marked by a sudden jump in the CO$_2$ molar fraction and in the density. This clathrate layer is immediately in contact with the high-pressure water ice at P = 710 MPa. For ocean isotherms between 279 K and 294 K, water-world hydrosphere structures display two oceans: one on top of the clathrate layer and another one under the clathrate layer, immediately in contact with the high-pressure ice. Fig. \[fig:profil\] displays an example of this second type of hydrosphere structure for a 285 K ocean isotherm (panels B and F). Both the density and composition profiles exhibit two phase transitions, one at 28 MPa and another at $\sim$600 MPa. The second phase transition occurs when the clathrate layer becomes unstable at higher pressures.
This is due to the inversion of the slope of the L$_w$L$_c$H triple line (liquid water/liquid CO$_2$/clathrate hydrates) at pressures higher than 350 MPa (see Fig. \[fig:phasediagram\]). Clathrate hydrates contain up to 15 mol% of CO$_2$, while the solubility of CO$_2$ in the global liquid ocean does not exceed $\sim$4 mol%, making clathrate hydrates the main CO$_2$ reservoir for ocean isotherms T $<$ 294 K. The temperature of 294 K marks the stability limit of the clathrate phase: above it, clathrates are not stable at any pressure in the ocean. For T = 300 K (Fig. \[fig:profil\] C and G), the ocean is limited at depth by the formation of high-pressure water ice. For the highest ocean temperature explored here, T = 400 K (Fig. \[fig:profil\] D and H), the liquid water region extends up to 3.2 GPa. ![image](f4a.pdf){width="0.24\linewidth"} ![image](f4b.pdf){width="0.24\linewidth"} ![image](f4c.pdf){width="0.24\linewidth"} ![image](f4d.pdf){width="0.24\linewidth"}\ ![image](f4e.pdf){width="0.24\linewidth"} ![image](f4f.pdf){width="0.24\linewidth"} ![image](f4g.pdf){width="0.24\linewidth"} ![image](f4h.pdf){width="0.24\linewidth"} Mass budgets of saturated reservoirs ------------------------------------ We now evaluate the total CO$_2$ mass contained in the saturated reservoirs at a given temperature, using the compositional profiles from § \[sec:profil\]. This mass is then divided by the total mass of the hydrosphere ($M_v$, including both CO$_2$ and water) to obtain the CO$_2$ fraction in the volatiles of the hydrosphere ($X_{CO_2}$, the vertical axis of Fig. \[fig:bilan\]). We vary the total amount of volatiles accreted by our model planets from $X_v=$ 1 wt% to $X_v=$ 50 wt%. The main difference here between a planet that accreted 1 wt% of volatiles and another that accreted 50 wt% is that the latter has a thicker high-pressure water ice mantle, considered here to be made of pure water ice.
Consequently, the calculated CO$_2$ fraction in the volatiles of the planet’s saturated reservoirs decreases as $X_v$ increases. The cases that do not allow the formation of high-pressure water ice at the oceanic floor ($X_v\lesssim$ 3 wt% at 373 K and 400 K) are not displayed in Fig. \[fig:bilan\]. Fig. \[fig:bilan\] shows that, for cold oceans (i.e., oceans sufficiently cold to form clathrates, T $<$ 294 K), only planets that accreted a low bulk fraction of volatiles (X$_v$ $<$ 2 wt%) could store cometary abundances of CO$_2$ in their saturated reservoirs. In this temperature regime, decreasing temperatures lead to the formation of thicker CO$_2$ clathrate layers (because clathrates of CO$_2$ are stable over a larger pressure range). Since clathrates can store more CO$_2$ per unit mass than the liquid ocean (see Fig. \[fig:profil\]), the ability of the hydrosphere to store CO$_2$ in the saturated reservoirs increases as the ocean temperature decreases. For T $>$ 294 K, the clathrate layer is not stable and CO$_2$ dissolved in liquid water becomes the dominant saturated reservoir. This leads to low CO$_2$ fractions in the hydrosphere, not exceeding 0.01 wt% for our highest-temperature isotherms (300 K and 400 K). ![Hydrosphere saturated reservoir CO$_2$ storage capacity. The total mass fraction of CO$_2$ in the hydrosphere is plotted as a function of the bulk mass fraction of volatiles of the planet, for several oceanic isotherms. The low-$X_v$ extreme of each curve is defined by the minimum volatile mass fraction at which the water-worlds develop a high-pressure water ice mantle. The red line adopts the CO$_2$ solubility interpolated from the experimental data of @Abramson:2017ie. The grey line includes, in addition to the CO$_2$ dissolved in the ocean and trapped in clathrates, the possible contribution of the CO$_2$-filled ice (see \[sec\_CO2filled\]).
For temperatures of 300 K and 400 K the CO$_2$ fractions are low ($<$0.01%, see text) and the lines are nearly coincident with the horizontal axis of the plot. The range of CO$_2$ fractions shaded in blue shows the range of mass fraction detected in comets [@Mumma:2011jn; @Ootsubo:2012cg]. The darker blue region encompasses the upper and lower quartiles of the 17-comet sample analyzed in @Ootsubo:2012cg, while the blue line shows the median value of this sample. The striped region shows the range of CO$_2$ fractions in low-mass protostellar envelopes [@Oberg:2011ev]. []{data-label="fig:bilan"}](f5.pdf){width="1\linewidth"}

Contribution of CO$_2$-filled ice {#sec_CO2filled}
---------------------------------

Between pressures of $\sim$ 0.6 GPa and 1 GPa, the CO$_2$-H$_2$O system can form a phase called CO$_2$-filled ice [@Hirai:2010hpa; @Bollengier:2013jl; @Tulk:2014du; @Massani:2017fr]. CO$_2$-filled ice corresponds to a compressed hydrate phase, where CO$_2$ molecules fill the water channels. Above 1 GPa, this phase dissociates to CO$_2$ ice and ice VI. The presence of CO$_2$-filled ice has been investigated between 80 K and 277 K. @Amos:2017ki estimated the mass ratio between CO$_2$ and water molecules in this phase to be 41 wt%. Using this result, we estimate the size of the potential CO$_2$ reservoir in CO$_2$-filled ice and its influence on the hydrosphere CO$_2$ fraction, plotted as a grey line in Fig. \[fig:bilan\]. We find that CO$_2$-filled ice can make a modest contribution to the total CO$_2$ budget of the water-world. The presence of this phase adds between $\lesssim$ 1 wt% and $\sim$ 10 wt% to the total fraction of CO$_2$ in the hydrosphere, allowing habitable water-worlds with $X_v$ up to 3.5 wt% to reach comet-like CO$_2$ abundances (compared to the limit of $X_v<$ 2 wt% in the absence of CO$_2$-filled ice).
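The bookkeeping behind these budgets (CO$_2$ mass summed over the saturated layers, divided by the hydrosphere mass $M_v$) can be sketched in a few lines. The layer masses and CO$_2$ fractions below are purely illustrative placeholders (the actual profiles come from TREND 3.0 and the clathrate model), chosen only to show how $X_{CO_2}$ is assembled:

```python
# Illustrative CO2 mass budget for the saturated reservoirs of a water-world.
# All layer masses and CO2 mass fractions are placeholder values, NOT model output.
M_EARTH = 5.97e24  # kg

def hydrosphere_co2_fraction(layers, M_v):
    """X_CO2 = (sum of CO2 masses in the saturated layers) / (hydrosphere mass M_v)."""
    m_co2 = sum(m_layer * x_co2 for m_layer, x_co2 in layers)
    return m_co2 / M_v

X_v = 0.05                # assumed bulk volatile mass fraction (5 wt%)
M_v = X_v * M_EARTH       # hydrosphere mass for a 1 M_Earth planet
layers = [
    (1.0e22, 0.09),       # liquid ocean: mass (kg), CO2 mass fraction (~4 mol%)
    (5.0e21, 0.30),       # clathrate layer: up to ~15 mol% CO2, i.e. ~30 wt%
    (M_v - 1.5e22, 0.0),  # high-pressure ice mantle: assumed CO2-free
]
X_CO2 = hydrosphere_co2_fraction(layers, M_v)
print(f"X_CO2 = {100 * X_CO2:.2f} wt%")
```

Because the high-pressure ice mantle dominates $M_v$ at large $X_v$, the resulting fraction falls well below cometary values, mirroring the trend of Fig. \[fig:bilan\].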
Effect of CO$_2$ dissociation at high pressures and temperatures
----------------------------------------------------------------

As warned by @Gernert:2016bc, the computation of the solubility of CO$_2$ in water by TREND 3.0 becomes increasingly uncertain above $\sim$100 MPa. Recent experiments on the CO$_2$-H$_2$O system by @Wang:2016if, @Abramson:2017ie [@Abramson:2018] and the *ab initio* simulations of @Pan:2016 show that CO$_2$ strongly dissociates and forms carbonic acid (H$_2$CO$_3$) at high pressures and temperatures (P $\gtrsim$ 1 GPa, T $\gtrsim$ 373 K). This leads to a much higher solubility of CO$_2$ at high pressures than predicted by TREND 3.0 (or by any of the currently available equations of state of the CO$_2$-H$_2$O system, see @Abramson:2017ie for a detailed discussion). In Fig. \[fig:profil\] D we plot in red the solubility of CO$_2$ in the liquid water ocean for T = 373 K, obtained by interpolating the experimental data of @Abramson:2017ie. The solubility of CO$_2$ at 400 K is expected to be very similar to that plotted for 373 K, because at pressures above 1 GPa the isotherms display very similar compositions [see Fig. 7 and 8 in @Abramson:2017ie]. The solubility of CO$_2$ reaches up to $\sim$ 20 mol% at 3 GPa and 373 K (see the red line in Fig. \[fig:profil\] D). The high-pressure dissociation of CO$_2$ investigated by @Abramson:2017ie increases the CO$_2$ storage capacity of deep liquid oceans by more than 3 orders of magnitude in some cases (red line in Fig. \[fig:bilan\]). For example, the CO$_2$ storage capacity goes from 0.01 wt% CO$_2$ at 400 K without the high-pressure dissociation to more than 10 wt% with it. This highlights the effect of uncertainties in the solubility of CO$_2$ at high pressures on the structure and maximal global saturated CO$_2$ reservoirs of water-worlds.
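Since solubilities are quoted in mol% while the mass budgets use wt%, it is worth making the conversion explicit. A small helper of ours (not part of TREND 3.0) shows how the jump from $\sim$4 mol% to $\sim$20 mol% dissolved CO$_2$ translates into mass fractions:

```python
# Convert a CO2 mole fraction in a binary CO2-H2O mixture to a mass fraction.
M_CO2 = 44.01   # g/mol, molar mass of CO2
M_H2O = 18.015  # g/mol, molar mass of H2O

def mol_to_mass_fraction(x_mol):
    """Mass fraction of CO2 given its mole fraction x_mol in a CO2-H2O mixture."""
    return x_mol * M_CO2 / (x_mol * M_CO2 + (1.0 - x_mol) * M_H2O)

for x in (0.04, 0.20):  # low-pressure solubility vs. ~3 GPa, 373 K values
    print(f"{100*x:.0f} mol% CO2 -> {100*mol_to_mass_fraction(x):.1f} wt%")
```

A fivefold increase in mole fraction thus roughly quadruples the per-unit-mass storage of the dissolved reservoir; the three-orders-of-magnitude effect on the global budget in Fig. \[fig:bilan\] comes from applying this higher solubility over the full depth of a deep ocean.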
With the high-pressure dissociation of @Abramson:2017ie, the CO$_2$ storage capacity of the ocean at 373 K eclipses that of clathrates at temperatures below 294 K. We find that, when the high-pressure dissociation of CO$_2$ is taken into account, hot habitable water-worlds may store cometary abundances of CO$_2$ in their water-dominated layers only if they initially accreted less than 11 wt% of volatiles by mass. We expect that the impact of high-pressure dissociation on the CO$_2$ mass budgets calculated in Figure \[fig:bilan\] will diminish with decreasing ocean temperature. Oceans at lower temperatures are shallower, reaching the interface between the liquid water and high-pressure ice or clathrates at lower pressures. CO$_2$ dissociation decreases with decreasing pressures, leading to lower CO$_2$ solubilities for pressures $\lesssim$ 1 GPa. Consequently, the red line in Fig. \[fig:bilan\] is an upper limit on the contribution of saturated reservoirs to the total CO$_2$ budget of water-worlds.

CO$_2$ content of excess reservoirs {#sec:ResultsExcess}
===================================

For planets with high volatile mass fractions (more than 3.5 wt% or 11 wt%, depending on the temperature), the liquid water ocean, clathrates and CO$_2$-filled ice (if present) are insufficient to store comet-like amounts of CO$_2$. Here we propose possible [*excess*]{} reservoirs that CO$_2$ may form upon saturation of the water-dominated phases: the atmosphere, liquid CO$_2$, CO$_2$ ice, and monohydrate of carbonic acid (H$_2$CO$_3$ $\cdot$ H$_2$O).

Atmospheres of habitable water-worlds {#sec:atm}
-------------------------------------

![Boundaries of the habitable zone (HZ), as a function of the partial pressure of CO$_2$ in the atmosphere. For this figure the partial pressure of N$_2$ has been fixed to 1 bar and the surface albedo of the planet to 0.06, corresponding to the albedo of the Earth’s ocean.
Q$_1$ and Q$_2$ are quadruple points of CO$_2$ (same labels as in the Fig. \[fig:phasediagram\]) and CP shows the critical point of CO$_2$.[]{data-label="fig:HZ"}](f6.pdf){width="1\linewidth"} Using the CLIMA 1D radiative-convective model we reproduce the boundaries of the water-world HZ, as a function of the partial pressure of CO$_2$ (first computed in @Kitzmann:2015, Fig. \[fig:HZ\]). For visual reference, we have indicated locations of the critical point and the quadruple points Q$_1$ and Q$_2$ of the CO$_2$-H$_2$O phase diagram (Fig. \[fig:phasediagram\]). Limits of the HZ plotted here are uncertain due to the host star type, atmospheric 3D circulation, planet rotation rate, oceanic circulation, planet mass and possibly other factors [@Marshall:2007fj; @Yang:2013gl; @Kopparapu:2014; @Kopparapu:2017; @Turbet:2017cx; @Ramirez:2018wn; @Kite:2018 e.g.]. We use these 1D simulations only to estimate the possible atmospheric masses of CO$_2$ of habitable water-worlds. For a given distance from a Sun-like star, if the CO$_2$ content of the atmosphere exceeds the amount indicated by the black line, the surface temperature of this planet would exceed 400 K and the water-world would then be considered as uninhabitable. We stopped our simulations for the inner edge of the HZ at a partial pressure of CO$_2$ of 40 bar, because for higher pressures, the non-ideal behaviour of CO$_2$ must be taken into account and CLIMA treats CO$_2$ as an ideal gas at temperatures above 303 K. For pure CO$_2$, the deviation from ideal pressure at 20 bar and $\sim$400 K is of the order of 5%, while at 50 bar it reaches $\sim$20% [e.g. @Hu:2007ca; @Duan:1992]. The addition of any other compounds (i.e. N$_2$ and H$_2$O) would only accentuate this error. Fig. \[fig:HZ\] indicates that above a partial pressure of $\sim$ 10 bar, the distance from the star to the inner edge of the HZ sharply increases [see also @Kitzmann:2015]. 
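The magnitude of the ideal-gas error can be checked with a first-order van der Waals estimate (a back-of-envelope check of ours, not the CLIMA treatment); the first-order truncation increasingly underestimates the deviation as pressure grows, so it is only indicative near 20 bar:

```python
# First-order (second-virial-like) van der Waals estimate of the compressibility
# factor Z = PV/RT for pure CO2; Z = 1 corresponds to an ideal gas.
R = 8.314          # J/(mol K)
A_CO2 = 0.3640     # Pa m^6 / mol^2, van der Waals 'a' for CO2
B_CO2 = 4.267e-5   # m^3 / mol, van der Waals 'b' for CO2

def z_vdw_first_order(P, T):
    """Z ~ 1 + (b - a/RT) * P / (RT), valid at moderate gas densities."""
    return 1.0 + (B_CO2 - A_CO2 / (R * T)) * P / (R * T)

Z = z_vdw_first_order(20e5, 400.0)  # 20 bar, ~400 K
print(f"Z = {Z:.3f}, deviation from ideality ~{100 * abs(Z - 1):.0f}%")
```

The result is a few percent at 20 bar, consistent with the $\sim$5% figure quoted above; at 50 bar higher-order terms matter, and the full equation of state yields the larger $\sim$20% deviation.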
An order-of-magnitude estimation — $M_{CO_2}~=~P_{CO_2}4\pi R_p^2/g$, for Earth values of M$_p$, R$_p$ and g — indicates that a CO$_2$ pressure of 100 bar represents a mass of the order of $5\times10^{20}$ kg, corresponding to $\sim$ 0.01 wt% of the total mass of the planet. Thus, CO$_2$ in the atmosphere would contribute at most $X_{CO_2}\lesssim$ 1 wt% to the total volatile budget of the water-world for the minimum $X_v\sim$1 wt% that we consider, and even less for larger $X_v$. We explored the influence of surface pressure on the CO$_2$ storage capacity of habitable water-worlds. We reproduced the results of Fig. \[fig:bilan\] with an atmospheric pressure of 100 bar instead of 5 bar, finding no significant change ($\leq~10^{-3}$) in the final total CO$_2$ mass fractions of saturated reservoirs. Since we are considering a fully saturated case, the profiles of $X_{CO_2}$ with pressure throughout the ocean and clathrate layers (displayed in Fig. \[fig:profil\]) are unchanged; effectively, choosing a higher surface pressure of CO$_2$ only changes the low-pressure limit of the profiles. Since the atmosphere and ocean surface layer do not contribute significantly to the CO$_2$ storage capacity of habitable water-worlds, our results are insensitive to the choice of CO$_2$ atmospheric partial pressure.

Condensation of liquid CO$_2$
-----------------------------

Liquid CO$_2$ is a potential excess reservoir for water-worlds near the outer edge of the HZ, with surface temperatures below the upper critical end point (UCEP) of the CO$_2$-H$_2$O system (304.5 K). At temperatures above the UCEP, liquid CO$_2$ will not condense at any pressure. Climate models of terrestrial planets have long recognized the possibility of condensation of CO$_2$ in Earth-like planet atmospheres and surfaces [@Kasting:1991; @vonParis:2013hc].
These studies focused, however, on the limits imposed by the saturation vapor pressure on the amount of CO$_2$ in planetary atmospheres, and not on the liquid CO$_2$ condensate. The recent studies of @Turbet:2017cx and @Levi:2017gv investigated the possibility of condensation of liquid CO$_2$ and CO$_2$ clathrates at the poles of water-covered exoplanets. For habitable water-worlds, condensation of liquid CO$_2$ at the surface is possible only if the partial pressure of CO$_2$ in the atmosphere exceeds $P_{CO_2}=45$ bar and if the planet temperature lies between the second quadruple point (283 K) and the UCEP (304.5 K) of the H$_2$O-CO$_2$ system. At temperatures below 283 K, CO$_2$ clathrates would form at the planet surface before liquid CO$_2$, assuming thermodynamic equilibrium. At temperatures above 304.5 K, CO$_2$ does not condense. Thus, surface liquid CO$_2$ oceans are stable in a rather narrow range of pressure and temperature conditions. 3D simulations [e.g., @Turbet:2017cx; @Ramirez:2018wn] are thus needed to assess the long-term stability of liquid CO$_2$ as a CO$_2$ reservoir at the surface of water-worlds. ![Comparison between densities of liquid water and liquid (T $<$ 304 K) or fluid CO$_2$ (T $>$ 304 K). Panel (A) compares the densities of liquid water, ice Ih, CO$_2$ clathrates and liquid CO$_2$ at low/surface pressures (P $<$ 1 bar, at the saturation vapor pressures of each compound). The data plotted are from TREND 3.0 for liquid water, @Choukroun:2007cx for Ih water ice and @Duschek:1990fc for liquid CO$_2$. Panel (B) shows pressure as a function of temperature on the dividing line where the density of the water-rich liquid and the CO$_2$-rich fluid are equal (from TREND 3.0).[]{data-label="fig:densityCO2water"}](f7a.pdf "fig:"){width="1\linewidth"} ![Comparison between densities of liquid water and liquid (T $<$ 304 K) or fluid CO$_2$ (T $>$ 304 K).
Panel (A) compares the densities of liquid water, ice Ih, CO$_2$ clathrates and liquid CO$_2$ at low/surface pressures (P $<$ 1 bar, at the saturation vapor pressures of each compound). The data plotted are from TREND 3.0 for liquid water, @Choukroun:2007cx for Ih water ice and @Duschek:1990fc for liquid CO$_2$. Panel (B) shows pressure as a function of temperature on the dividing line where the density of the water-rich liquid and the CO$_2$-rich fluid are equal (from TREND 3.0).[]{data-label="fig:densityCO2water"}](f7b.pdf "fig:"){width="1\linewidth"}

The density of liquid CO$_2$ varies considerably with temperature. Fig. \[fig:densityCO2water\] shows that along its saturation line and for temperatures T $>$ 283 K, the density of liquid CO$_2$ will be lower than the density of water, allowing it to float on top of the liquid water ocean. For temperatures T $<$ 273 K, if CO$_2$ condenses before the formation of clathrates and in the presence of surface water ice, then it would sink under the Ih ice layer, as described in @Turbet:2017cx. To determine the possible presence of large areas of liquid CO$_2$ on top of the liquid water ocean, one would need to account for temperature variations across the planet surface, heat redistribution and CO$_2$ transport in the ocean and the atmosphere. If a surface liquid CO$_2$ ocean is present, the atmospheric partial pressure of CO$_2$ would be at (or near) saturation. At higher pressures (between 40 and 130 MPa, depending on temperature, Fig. \[fig:densityCO2water\] (B)), the density of fluid CO$_2$ can be higher than the density of liquid water. If CO$_2$ liquid-liquid phase separation occurs deep in the water ocean, i.e. at pressures higher than the line displayed in Fig. \[fig:densityCO2water\] (B), then the liquid CO$_2$ would be denser than the ambient water and would sink toward the oceanic floor. There, the liquid CO$_2$ would cross the stability domain of CO$_2$ ice (see Fig.
\[fig:phasediagram\]), freeze and sink in the high-pressure water ice mantle, as elaborated below (Section \[sec:CO2ice\]).

CO$_2$ ice as a main reservoir of CO$_2$ for habitable water-worlds {#sec:CO2ice}
-------------------------------------------------------------------

Water-worlds may encounter the conditions for CO$_2$ ice formation within their hydrospheres, as the planets evolve and cool. Whether the CO$_2$ ice forms before, during, or after the water-world’s high-pressure water ice mantle freezes determines the initial formation location of the CO$_2$ ice layer (be it under, within or on top of the high-pressure water ice mantle, see also Section \[sec:Discussion\]). Because the density of CO$_2$ ice is always higher than the density of high-pressure water ices VI and VII (Fig. \[fig:density\]), CO$_2$ ice is stably stratified when buried under those layers. If the CO$_2$ ice is instead deposited on top of the high-pressure water ice later in the evolution of the planet, an unstable density stratification results, which could lead to gravitational Rayleigh-Taylor instabilities (and the eventual burial of CO$_2$ ice in the high-pressure water ice mantle).

![Comparison between the density of CO$_2$ ice and high-pressure water ices. We use TREND 3.0 to obtain the density of CO$_2$ ice as a function of pressure for a 380 K isotherm. Points are experimental data of @Bezacier:2014ii for ice VI and ice VII, for temperatures ranging from 300 K to 340 K, and 300 K to 380 K, respectively.[]{data-label="fig:density"}](./f8.pdf){width="1\linewidth"}

We estimate the timescales for the development of a Rayleigh-Taylor instability in CO$_2$ ice deposited at the surface of the high-pressure ice mantle. The timescale for the development of a Rayleigh-Taylor instability ($\tau_{RT}$) is the nominal time required to produce unit strain under a deviatoric stress of magnitude $g \times \Delta \rho \times H $ [e.g.
@Turcotte:2014]: $$\tau_{RT}=\frac{13.04 \eta}{g \Delta \rho H }, \label{eq:RT}$$ where $\eta$ is the viscosity of the most viscous layer, $g$ is the local gravitational acceleration, $\Delta \rho$ is the difference in density between the two layers and $H$ is the length scale, set here to the thickness of the CO$_2$ layer. The most viscous layer controls the development of a Rayleigh-Taylor instability. For temperatures below 343 K, CO$_2$ ice is in contact with ice VI. Laboratory measurements of the viscosity of ice VI at high differential stresses ($\gtrsim10^6$ Pa) give viscosities of the order of 10$^{13}$-10$^{14}$ Pa s [@Poirier:1981db; @Sotin:1985; @Durham:1996kq]. The viscosity of CO$_2$ ice (I) for similar differential stresses is of the order of 10$^{12}$-10$^{13}$ Pa s [@Durham:1999]. The extrapolation of the behavior of each material to planetary shear stresses must be handled with care. The creep behaviour at the low shear stresses ($\lesssim10^5$ Pa) relevant to planet interiors might be controlled by different mechanisms than those probed in experimental studies [@Durham:2001]. However, no experimental work to date has confirmed a change in creep behaviour for ice VI and CO$_2$ ice at decreasing shear stresses. Therefore, to estimate the viscosity of ice VI and CO$_2$ ice in planetary conditions, we adopt the shear stress dependence provided in @Poirier:1981db and @Durham:1999, respectively. We obtain upper estimates of 10$^{18}$ Pa s for ice VI and 10$^{15}$ Pa s for CO$_2$ ice [@Durham:2010bi]. Consequently, ice VI would control the formation of the Rayleigh-Taylor instability. For temperatures above 343 K, CO$_2$ ice would be in contact with ice VII (Fig. \[fig:phasediagram\]). To our knowledge, no experimental measurements of the viscosity of ice VII currently exist.
The only theoretical study of the ice VII viscosity is found in @Poirier:1982, where the author examined the crystalline lattice of ice VII and concluded that ice VII should have a high viscosity because its crystalline structure does not favor the propagation of dislocations. Thus, ice VII is likely more viscous than the CO$_2$ ice and would control the formation of a Rayleigh-Taylor instability at T $>$ 343 K. Figure \[fig:RTinstability\] shows timescales for the development of a Rayleigh-Taylor instability for several choices of $\eta$. The viscosity values span from the lowest viscosities experimentally measured for ice VI, to $\eta=10^{25}$ Pa s, where the timescales to form Rayleigh-Taylor instabilities start to be comparable to planetary ages.

![Timescales for the development of a Rayleigh-Taylor instability for a layer of CO$_2$ ice on top of the pure ice VI or VII. Due to the uncertainty of the viscosity of ice VI and VII at planetary conditions, this figure displays a wide range of viscosities: $\eta=10^{13}$ Pa s is the highest order of magnitude viscosity of ice VI experimentally measured by [@Poirier:1981db] and [@Durham:1996kq]; $\eta=10^{18}$ Pa s is the extrapolation of these experimental measurements up to planetary scales [@Durham:2010bi]. For a visual reference, we also show the same timescales for $\eta=10^{21}$ Pa s, the viscosity of the Earth’s mantle, and for $\eta=10^{25}$ Pa s, for which the timescales of the development of a Rayleigh-Taylor instability start to be comparable to planetary ages.[]{data-label="fig:RTinstability"}](f9.pdf){width="1\linewidth"}

For ice VI, the timescale of the development of a Rayleigh-Taylor instability is always short, less than 10,000 years. Unless the viscosity of ice VII is greater than $\eta=10^{25}$ Pa s, CO$_2$ ice would also sink into the mantle on short timescales for temperatures higher than 343 K.
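Eq. \[eq:RT\] is straightforward to evaluate. The layer thickness and density contrast below are assumed values for illustration only (a $\sim$30 km CO$_2$ ice layer and $\Delta\rho \sim 300$ kg m$^{-3}$, within the range of Fig. \[fig:density\]); the two viscosities are the measured and extrapolated ice VI values quoted above:

```python
# Rayleigh-Taylor timescale, Eq. [eq:RT]: tau_RT = 13.04 * eta / (g * drho * H).
# Default g, drho and H are ASSUMED illustrative values, not fitted model output.
SECONDS_PER_YEAR = 3.156e7

def tau_rt_years(eta, g=9.8, drho=300.0, H=3.0e4):
    """eta in Pa s, g in m/s^2, drho in kg/m^3, H (CO2 ice layer thickness) in m."""
    return 13.04 * eta / (g * drho * H) / SECONDS_PER_YEAR

for eta in (1e14, 1e18):  # lab-measured vs. extrapolated ice VI viscosity
    print(f"eta = {eta:.0e} Pa s -> tau_RT ~ {tau_rt_years(eta):.1e} yr")
```

Even with the conservative extrapolated viscosity, the overturn time is of order thousands of years for these parameters, far shorter than planetary ages, which is why the CO$_2$ ice layer is expected to founder.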
We thus conclude that the formation of a CO$_2$ ice layer at any time during the thermal evolution of a water-world would lead to its burial inside the high-pressure ice mantle. If CO$_2$ is buried as CO$_2$ ice inside the high-pressure water ice mantle, the atmospheric CO$_2$ content would be decoupled from the total CO$_2$ content of the water-world. Consequently, future observations of habitable water-worlds’ atmospheres would not provide the total CO$_2$ content of the planet but only the CO$_2$ content of the atmosphere. With models like the one presented in this study, it would be possible to estimate the amount of CO$_2$ dissolved in the global water ocean and trapped in clathrates, for an assumed interior temperature profile. However, the masses of CO$_2$ ice that might be trapped in the high-pressure water mantle are independent of these measurements and would be more challenging to constrain with future observations.

Monohydrate of Carbonic Acid
----------------------------

The H$_2$O-CO$_2$ phase diagram is poorly characterized at pressures above $\sim$1 GPa. A recent discovery is the formation of a carbon-bearing solid at high temperatures and pressures: the monohydrate of carbonic acid (H$_2$CO$_3$ $\cdot$ H$_2$O) [@Abramson:2017ie; @Abramson:2018]. @Abramson:2017ie identified the presence of a stable solid phase in the H$_2$O-CO$_2$ system for $P>4.4$ GPa. @Abramson:2018 then determined that this phase consists of a monohydrate of carbonic acid, measured its crystalline structure and derived its density at $P=6.5$ GPa and $T=413.15$ K. The temperatures and pressures at which the monohydrate of carbonic acid has been observed correspond to the conditions in the high-pressure ice layers of habitable water-worlds. The density of this solid is higher than the density of ices VI and VII (2194 kg m$^{-3}$, @Abramson:2018; compare to Fig. \[fig:density\]).
Consequently, the monohydrate of carbonic acid will remain buried in the high-pressure ice layers and coexist with CO$_2$ ice. Both of these solids would constitute an excess reservoir of CO$_2$ that would sequester the carbon dioxide away from the atmosphere. The stability field of the monohydrate of carbonic acid has not yet been mapped. It is still unclear whether the monohydrate of carbonic acid (or other solids) can form at lower pressures and temperatures (P $<$ 4.4 GPa and T $<$ 438.15 K, see @Saleh:2016ew). Any future detection of solid compounds in the H$_2$O-CO$_2$ system at these conditions would impact the interior models of water-worlds and icy satellites.

Discussion {#sec:Discussion}
==========

Planet Mass Dependence
----------------------

Our constraints on the CO$_2$ content of saturated reservoirs are more severe for planets with more massive rocky (iron and silicate) cores. For water-worlds with $M_{\rm rock}=2~M_{\rm Earth}$ (with Earth-like Fe/Si values), cometary compositions would be compatible with habitable surfaces if planets initially have less than 6 wt% of volatiles by mass (see Fig. \[fig:2Mearth\]), compared to the limit of 11 wt% volatiles for planets with $M_{\rm rock}=1~M_{\rm Earth}$. More massive planets reach higher pressures in their hydrospheres due to their higher surface gravity. Consequently, they reach the stability field of high-pressure ice at shallower depths. The ocean and the clathrate layers are thinner, and store less CO$_2$, when compared to lower mass planets with the same ocean temperature. ![Dependence of hydrosphere saturated reservoir CO$_2$ storage capacity on $M_{rock}$. The total mass fraction of CO$_2$ in the hydrosphere is plotted as a function of the bulk mass fraction of volatiles of the planet, for oceanic isotherms of 273 K and 373 K. The $M_{rock}=1~M_{Earth}$ isotherms replicate the results displayed in Fig. \[fig:bilan\].
ABB (2017) references the solubility obtained from the interpolation of the experimental data of @Abramson:2017ie. See the caption of Fig. \[fig:bilan\] for the description of the shaded/striped regions on the figure.[]{data-label="fig:2Mearth"}](f10.pdf){width="1\linewidth"}

Metal Core as another potential reservoir of CO$_2$
---------------------------------------------------

In this work, we have focused on CO$_2$ reservoirs within the hydrospheres (water-rich outer layers) of water-worlds. Depending on the formation history of the planet, the rocky interiors of water-worlds offer another potential CO$_2$ reservoir. Experimental studies show that CO$_2$, and more generally carbon, behaves as a siderophile element and is more likely to dissolve in liquid iron than in silicate melts [e.g. @Dasgupta:2013a]. Carbon solubility in iron-rich liquids increases with pressure and decreases with increasing temperature, extent of hydration and oxygen fugacity [@Dasgupta:2012; @Dasgupta:2013a]. Previous studies have shown that light elements constitute 5% to 10% of Earth’s core [@Birch:1964], with C as a plausible candidate along with H, S, O and Si [@Poirier:1994]. Indeed, the iron core could be Earth’s dominant carbon reservoir [@Bergin:2015in; @Hirschmann:2016jn]. The exact composition in light elements of Earth’s core is currently unknown, and the conditions for the C supply and the Earth’s core formation continue to be subject to debate [see the summary in @Dasgupta:2013_chap]. Likewise, the efficiency of the CO$_2$ entrapment in the iron-rich liquid, in the context of the formation and differentiation of water-worlds, is currently unexplored.

Entrapment of CO$_2$ as CaCO$_3$ {#sec:CO2vsRocks}
----------------------------------

On the Earth, carbonates – (Ca, Mg, Fe)CO$_3$ – are an important reservoir of CO$_2$.
They form when dissolved cations (such as Ca, Mg, Fe) produced by silicate weathering react with carbonate ions (CO$_3^{2-}$) in the ocean. In this work, since we focus on planets with sufficient water to form high-pressure ice mantles, we have so far neglected reactions between the silicate rocks and liquid water [following e.g. @Leger:2004fh; @Selsis:2007jx; @Fu:2010ch; @Kitzmann:2015; @Levi:2017gv] and thus have neglected carbonate minerals as a CO$_2$ reservoir on water-worlds. The formation of carbonates is impeded on a water-world because 1) the high-pressure ice mantle creates a physical barrier between the ocean and silicates, 2) high pressures ($\gtrsim$ 0.6 GPa) at the silicate surface lead to a stagnant-lid tectonic regime with little renewal of fresh silicates for weathering [@Kite:2009kv] and 3) incorporation of salts into high-pressure water ice can efficiently remove salts from the liquid ocean [@Levi:2018hp]. In this section we elaborate further on the justification for this baseline assumption, and set an upper limit on the capacity of carbonates as a CO$_2$ storage reservoir on water-worlds. Thick high-pressure ice mantles may impede chemical exchanges between silicates and the liquid ocean: on water-worlds, liquid water oceans may be separated from silicate rock by high-pressure ice mantles that are up to $\sim$ 4000 km thick. In the case of Ganymede, @Kalousova:2018 showed that the melting of water at the base of the high-pressure water ice mantle and the transport of this water by convection to the upper liquid water ocean is favored only for thin ice mantles ($<$ 200 km) and high viscosities in the high-pressure ices. @Kalousova:2018 found that the transport of molten water from the base of the high-pressure water mantle to the liquid water ocean shuts down for mantle thicknesses greater than 400 km.
While such detailed studies have not yet been carried out for water-worlds, this indicates a trend of weaker convection, and therefore less chemical exchange, for thicker high-pressure ice mantles. On Earth, plate tectonics and volcanism continuously provide fresh silicates at the Earth’s surface, which replenish the source of cations for carbonate formation. @Kite:2009kv show that, on water-worlds, once the post-accretional magma ocean has solidified, high pressures ($\gtrsim$ 0.6 GPa) at the water-silicate interface curtail the volcanism. The resulting tectonic regime for the planet is a stagnant lid. Water-worlds thus have a limited reservoir of silicates that would be available to supply cations to the ocean. If salts are initially present in the water-world ocean, they will be removed on timescales of tens of millions to hundreds of millions of years: @Levi:2018hp described a mechanism that can pump salts out of water-world oceans, sequestering them inside the high-pressure ice. Despite the arguments above, it is still to be determined whether the presence of a thick high-pressure water ice layer would *fully* impede chemical exchanges between the liquid water ocean and silicates. Water-rock interactions could occur during the early stages of planet formation (before the high-pressure ice mantle forms). They may also potentially occur during the later stages of the planet’s evolution, if the heat flux at the top of the silicate layer allows for the melting of high-pressure ices [@Noack:2016bh; @Kalousova:2018]. We set an upper limit on the CO$_2$ storage capacity of carbonates in ocean-bearing water-worlds, following a similar approach to @Kite:2018, who estimate that in the most optimistic case (assuming the liquid water reacts efficiently with the silicates) liquid water would react with a layer of silicates at most $\sim 50$ km thick.
These reactions would primarily form CaCO$_3$, as Mg and Fe cations are more likely to form silicates (see @Kite:2018 for a detailed discussion). For the purpose of this calculation, we assume that the silicates are basalts (which are ubiquitous in the Solar System). Basalts contain 11.39 wt% of CaO, so if all of the calcium reacts with CO$_2$ to form CaCO$_3$, it would result in an entrapment of $\sim7\times10^{21}\left(R_{rock}/R_{\oplus}\right)^2~\mathrm{kg}$ of CO$_2$. This mass of CO$_2$ stored in the carbonates, when scaled to the total volatile mass of the planet, $M_v$, corresponds to: $$X_{\rm CO_2}=11~wt\% \left( \frac{R_{rock}}{R_{\oplus}}\right) ^2 \left( \frac{X_v^{-1} -1}{0.01^{-1}-1}\right) \left(\frac{M_{\oplus}}{M_{rock}} \right).$$ For a water-world with $M_{rock}=1~M_{\oplus}$ and $R_{rock}=1~R_{\oplus}$, carbonates can store an additional $ X_{CO_2}=11~wt\%$ when $X_v =1~wt\%$, and only $X_{CO_2}=2~wt\%$ when $X_v = 5~wt\%$. Thus, carbonates may extend the storage capacity of CO$_2$ for water-worlds with total volatile contents $<$5 wt%, but represent a negligible CO$_2$ reservoir for planets with more massive volatile envelopes. Consequently, while carbonates could increase the CO$_2$ storage capacity in the hydrosphere at the low $X_v<$5 wt% (left-most) edge of Figs. \[fig:bilan\] and \[fig:2Mearth\], the effect of carbonates is negligible over the rest of these figures. Our main conclusions still hold: water-worlds with hydrospheres accounting for more than 11 wt% of the total planet mass require additional CO$_2$ reservoirs (beyond the liquid ocean, clathrates if present, atmosphere, and carbonates) to both accommodate cometary abundances of CO$_2$ and host a surface liquid water ocean.

Implications for Water-World Habitability
-----------------------------------------

Our models indicate that, in a majority of cases, water-worlds with comet-like compositions will be too CO$_2$-rich to host a liquid water ocean on their surfaces.
Indeed, for water-worlds with more than 11 wt% of volatiles, the saturated (water-dominated) layers of the hydrosphere cannot store comet-like amounts of CO$_2$. Moreover, the 11 wt% limit is already generous. Our habitable water-world models almost never reach the median comet composition of $X_{CO_2}\sim29$ wt%. Our initial assumptions of an isothermal oceanic temperature profile and full saturation of CO$_2$ maximize the amount of CO$_2$ stored in the water-dominated saturated reservoirs. If water-worlds are to be habitable, the excess CO$_2$ (i.e. the CO$_2$ that could not be incorporated in the saturated reservoirs) must be stored away from the ocean and the atmosphere. Whether, how much, and where liquid and/or solid CO$_2$ layers form depends on the accretion and evolution history and the global CO$_2$ fraction of the planet. To determine whether the excess CO$_2$ is more likely to degas into the atmosphere, form a liquid layer on top of the water ocean, form CO$_2$ ice or monohydrate of carbonic acid, one would need to model the post-accretional cooling of a water-world’s steam envelope from its initial hot state and determine at which pressures the CO$_2$-H$_2$O mixture saturates in CO$_2$. If this saturation is reached at low pressures (i.e. below the line in Fig. \[fig:densityCO2water\] (B)), condensed CO$_2$ is likely to float on the surface of a water ocean and/or evaporate into the atmosphere. Consequently, a potential outcome for the evolution of water-worlds could be a hot hydrosphere consisting of an extended steam envelope that transitions from vapor to supercritical fluid, to plasma at greater and greater depths [as has been considered by, e.g., @Kuchner:2003; @Rogers:2010fv; @Nettelmann:2011dh; @Lopez:2012hi]. At high pressures, where CO$_2$ fluid is denser than water, the excess CO$_2$ would condense, sink and precipitate as ice or as monohydrate of carbonic acid.
Our results therefore indicate that further modeling of water-world cooling and formation will be crucial for determining how much CO$_2$ each of these excess reservoirs could store and whether cometary-composition water-worlds are likely to be habitable. Previous studies [e.g. @Kitzmann:2015; @Levi:2017gv; @Turbet:2017cx; @Ramirez:2018wn] have implicitly assumed that water-worlds will form liquid water oceans if they are at an appropriate distance from the star, and then derived an amenable CO$_2$ flux from the interior or CO$_2$ partial pressure in the atmosphere. Our work shows that the condensation of the high-pressure water ice layer, as well as of a liquid water ocean, is not a given. Detailed 3D GCM modeling of water-worlds with liquid water oceans is inapplicable if water-worlds generically never manage to cool sufficiently to form liquid water oceans in the first place. Indeed, there is a tension between the likely comet-like compositions of volatiles accreted by water-worlds and the amount of CO$_2$ that can be accommodated in the water-world structures modeled in previous works. It is possible that the sequestration of solid CO$_2$ in the high-pressure ice mantle, as we have proposed in § \[sec:CO2ice\], could save the habitability of water-world exoplanets, but detailed models of the post-accretional cooling of water-worlds are needed to demonstrate this. Effect of equations of state uncertainties ------------------------------------------ TREND 3.0 (and other cubic equations of state, when coupled to an appropriate clathrate model, e.g. @Gasem:2001 [@Soave:1972; @Sloan:2007cl]) can accurately set an upper limit on the mass of CO$_2$ that could be stored in the water-dominated layers of ocean-bearing water-worlds at the outer edge of the HZ. For isotherms T $<$ 283 K, the depth of the ocean is limited by the formation of clathrates to isostatic pressures not exceeding 100 MPa.
Experimental data are widely available for pressure-dependent CO$_2$ solubilities in these relatively shallow surface oceans. For ocean temperatures T $>$ 294 K, water-worlds have deep oceans and do not form clathrates. Current thermodynamic models [including cubic equations of state and TREND 3.0; for a review see @Abramson:2017ie] strongly underestimate the amount of CO$_2$ dissolved in the deep liquid water ocean. This is likely due to the dissociation of CO$_2$ at high temperatures and pressures, which has been investigated only recently [e.g. @Pan:2016; @Abramson:2017ie] and is not included in the current equations of state for the CO$_2$-H$_2$O system. Accounting for the dissociation of CO$_2$ at high pressure and high temperature greatly influences the total CO$_2$ budget of saturated reservoirs. In Fig. \[fig:bilan\], accounting for this effect adds more than three orders of magnitude to the total CO$_2$ storage capacity. Models that correctly reproduce the high solubility of CO$_2$ at high pressure and high temperature demonstrated by the recent experimental data are needed to assess the habitability of water-worlds. Moreover, the phase boundaries of the monohydrate of carbonic acid are currently poorly constrained [@Wang:2016if; @Abramson:2018]. The experimental work of @Wang:2016if shows that this solid could form at high pressures ($>$ 3.5 GPa) and high temperatures (1773 K), where neither high-pressure water ice nor CO$_2$ ice yet condenses. Therefore, the accumulation of the monohydrate of carbonic acid could start at the early stages of water-world post-accretional evolution, removing CO$_2$ from a supercritical envelope. Detailed models of the cooling of water-worlds and the formation of such a reservoir are needed to understand whether the removal of CO$_2$ by the precipitation of the monohydrate of carbonic acid would be sufficient to lead to an efficient cooling of water-worlds and a subsequent condensation of liquid water and high-pressure phases of water ice.
These models would necessitate further constraints on the location of the monohydrate of carbonic acid phase limit in pressure-temperature-composition space. We hope that this paper will motivate new experimental and computational studies to further explore the phase diagram of the CO$_2$-H$_2$O system, specifically the dissociation of CO$_2$ in water at pressures above 1 GPa and the phase limit of the newly discovered solid monohydrate of carbonic acid. Conclusions {#sec:Conclusion} =========== We model the hydrosphere structures of water-worlds. We use TREND 3.0, a state-of-the-art equation of state widely used by the carbon capture and storage community, to determine the maximum amount of CO$_2$ dissolved in water and trapped in clathrate hydrates, as a function of temperature and pressure. We assume an isothermal profile in the liquid water and clathrates and an adiabatic profile in the high-pressure water ice mantle. We determine that the atmosphere, ocean and clathrate layer cannot be the main CO$_2$ reservoir on habitable (i.e., surface ocean-bearing) water-worlds that accreted more than 11 wt% volatiles during their formation. Water-worlds that accreted a smaller mass fraction of volatiles could potentially store comet-like amounts of CO$_2$ in their saturated reservoirs, depending on their temperature profile. Even then, in our models, the saturated hydrospheres of habitable water-worlds almost never reach the median comet composition of $X_{CO_2}\sim29$ wt%. If the excess CO$_2$ is not sequestered away from the atmosphere, habitable zone water-worlds may be unable to cool sufficiently from their post-accretional hot state to condense liquid water oceans. The current paradigm of habitable-zone water-worlds as condensed super-Ganymedes should be expanded. Depending on their post-accretional cooling history, we may be more likely to observe habitable zone water-worlds in hot uncondensed states with supercritical steam envelopes.
We stress that extrapolations of current equations of state to high pressures and high temperatures (P $>$ 100 MPa, T $>$ 400 K) are unable to correctly predict the solubility of CO$_2$ in water. This is due to the dissociation of CO$_2$ at high pressures and high temperatures and is an ongoing research frontier in materials science. Our work demonstrates that the dissociation of CO$_2$ has crucial implications for the habitability of water-worlds at the inner edge of the habitable zone. Unless the C entrapment in the iron core was efficient during the accretion of water-worlds, the largest potential reservoir of CO$_2$ in the hydrospheres of habitable water-worlds is likely to be CO$_2$ ice and monohydrate of carbonic acid, trapped in the high-pressure water ice mantle. Consequently, the atmospheric composition of an ocean-bearing water-world does not necessarily reflect the total mass of volatiles accreted during the formation of the planet, nor the relative proportions of CO$_2$ and H$_2$O in the hydrosphere. More detailed modeling of the post-accretional cooling of water-worlds is needed to determine whether CO$_2$ ice burial could allow water-worlds to have liquid water oceans or whether the evolution of the planet would generically lead to too much atmospheric CO$_2$ for the planets to be habitable. Acknowledgements {#aknowledgements .unnumbered} =============== We would like to thank R. Span for providing us with the TREND 3.0 software, S. Domagal-Goldman, R. Ramirez, and R. Kopparapu for their help and for providing us with ATMOS, and E. Kite for insightful discussions about water-worlds. Abramson, E. H. 2017, J. Phys.: Conf. Ser., 950, 042019 Abramson, E. H., Bollengier, O., & Brown, J. M. 2017, Am. J. Sci., 317, 967 Abramson, E. H., Bollengier, O., Brown, J. M., Journaux, B., Kaminsky, W., & Pakhomova, A. 2018, Am. Mineral., 103, 1468 Alibert, Y., & Benz, W. 2017, A[&]{}A, 598, 4 Amos, D. M., Donnelly, M.-E., Teeratchanan, P., Bull, C.
L., Falenty, A., Kuhs, W. F., Hermann, A., & Loveday, J. S. 2017, J. Phys. Chem. Lett., 8, 4295 Ballard, A. L., & Sloan, E. D. 2002, Fluid Ph. Equilibria, 194, 371 Ballard, A. L., & Sloan Jr., E. D. 2002, J. Supramol. Chem., 2, 385 Beichman, C. [et al.]{} 2014, Publ. Astron. Soc. Pac., 126, 1134 Bergin, E. A., Blake, G. A., Ciesla, F., Hirschmann, M. M., & Li, J. 2015, PNAS, 112, 8965 Bezacier, L., Journaux, B., Perrillat, J.-P., Cardon, H., Hanfland, M., & Daniel, I. 2014, J. Chem. Phys., 141, 104505 Birch, F. 1964, J. Geophys. Res. Planets, 69, 4377 Bockelee-Morvan, D., Crovisier, J., Mumma, M. J., & Weaver, H. A. 2004, in Comets II, 391 Bollengier, O. [et al.]{} 2013, Geochim. Cosmochim. Acta, 119, 322 Carroll, J. J., Slupsky, J. D., & Mather, A. E. 1991, J. Phys. Chem. Ref. Data, 20, 1201 Choukroun, M., & Grasset, O. 2007, J. Chem. Phys., 127, 124506 Choukroun, M., Grasset, O., Tobie, G., & Sotin, C. 2010, Icarus, 205, 581 Corkrey, R., McMeekin, T. A., Bowman, J. P., Ratkowsky, D. A., Olley, J., & Ross, T. 2014, PLoS ONE, 9, e96100 Dasgupta, R. 2013, Rev. Mineral. Geochem., 75, 183 Dasgupta, R., Chi, H., Shimizu, N., Buono, A., & Walker, D. 2012, in LPSC Dasgupta, R., Chi, H., Shimizu, N., Buono, A. S., & Walker, D. 2013, Geochim. Cosmochim. Acta, 102, 191 Deiters, U. K., & De Reuck, K. M. 1997, Pure Appl. Chem., 69, 1237 Ding, F., & Pierrehumbert, R. T. 2016, ApJ, 822, 24 Duan, Z., M[ø]{}ller, N., & Weare, J. H. 1992, Geochim. Cosmochim. Acta, 56, 2605 Durham, W. B., Kirby, S. H., & Stern, L. A. 1999, Geophys. Res. Lett., 26, 3493 Durham, W. B., Prieto-Ballesteros, O., Goldsby, D. L., & Kargel, J. S. 2010, Space Sci Rev, 153, 273 Durham, W. B., Stern, L. A., & Kirby, S. H. 1996, J. Geophys. Res. B: Solid Earth, 101, 2989 Durham, W., & Stern, L. 2001, Annual Review of Earth and Planetary Sciences, 29, 295 Duschek, W., Kleinrahm, R., & Wagner, W. 1990, J. Chem. Thermodyn., 22, 841 Edwards, T. J., Newman, J., & Prausnitz, J. M.
1978, Industrial [&]{} Engineering Chemistry Fundamentals, 17, 264 Fu, R., O’Connell, R. J., & Sasselov, D. D. 2010, ApJ, 708, 1326 Gasem, K., Gao, W., Pan, Z., & Robinson, R. L. 2001, Fluid Ph. Equilibria, 181, 113 Gattuso, J.-P., & Hansson, L. 2011, Ocean acidification (Oxford University Press) Gernert, J., & Span, R. 2016, The Journal of Chemical Thermodynamics, 93, 274 Gillon, M. [et al.]{} 2017, Nature, 542, 456 Grimm, S. L. [et al.]{} 2018, A[&]{}A, 613, A68 Hirai, H., Komatsu, K., Honda, M., Kawamura, T., Yamamoto, Y., & Yagi, T. 2010, J. Chem. Phys., 133, 124511 Hirschmann, M. M. 2016, Am. Mineral., 101, 540 Holden, J. F., & Daniel, R. M. 2010, in The Subseafloor Biosphere at Mid-Ocean Ridges (Washington, D. C.: American Geophysical Union), 13–24 Hu, J., Duan, Z., Zhu, C., & Chou, I.-M. 2007, Chem. Geol., 238, 249 J[ä]{}ger, A., & Span, R. 2012, J. Chem. Eng. Data, 57, 590 J[ä]{}ger, A., Vin[š]{}, V., Span, R., & Hrub[ý]{}, J. 2016, Fluid Ph. Equilibria, 429, 55 Kalousov[á]{}, K., Sotin, C., Choblet, G., Tobie, G., & Grasset, O. 2018, Icarus, 299, 133 Kasting, J. F. 1988, Icarus, 74, 472 Kasting, J. F. 1991, Icarus, 94, 1 Kasting, J. F., & Ackerman, T. P. 1986, Science, 234, 1383 Kasting, J. F., Whitmire, D. P., & Reynolds, R. T. 1993, Icarus, 101, 108 Kasting, J. F., & Catling, D. 2003, Annu. Rev. Astron. Astrophys., 41, 429 Kite, E. S., & Ford, E. B. 2018, arXiv, arXiv:1801.00748 Kite, E. S., Gao, P., Goldblatt, C., Mischna, M. A., Mayer, D. P., & Yung, Y. L. 2017, Nat. Geosci., 10, 737 Kite, E. S., Manga, M., & Gaidos, E. 2009, ApJ, 700, 1732 Kitzmann, D. [et al.]{} 2015, MNRAS, 3, 500.04 Kopparapu, R. K. [et al.]{} 2013, ApJ, 765, 131 Kopparapu, R. K., Ramirez, R. M., SchottelKotte, J., Kasting, J. F., Domagal-Goldman, S., & Eymet, V. 2014, ApJL, 787, L29 Kopparapu, R. K., Wolf, E. T., Arney, G., Batalha, N. E., Haqq-Misra, J., Grimm, S. L., & Heng, K. 2017, ApJ, 845, 5 Kuchner, M. J.
2003, ApJ, 596, L105 Kunz, O., Klimeck, R., Wagner, W., & Jaeschke, M. 2007, [The GERG-2004 wide-Range Equation of State for Natural gases and Other Mixtures]{}, Tech. rep., GERG Technical Monograph Kunz, O., & Wagner, W. 2012, J. Chem. Eng. Data, 57, 3032 L[é]{}ger, A. [et al.]{} 2004, Icarus, 169, 499 Levi, A., Sasselov, D., & Podolak, M. 2013, ApJ, 769, 29 Levi, A., Sasselov, D., & Podolak, M. 2014, ApJ, 792, 125 Levi, A., Sasselov, D., & Podolak, M. 2017, ApJ, 838, 0 Levi, A., & Sasselov, D. 2018, ApJ, 857, 65 Liu, L. G. 1984, Earth Planet. Sci. Lett., 71, 104 Lopez, E. D., Fortney, J. J., & Miller, N. 2012, ApJ, 761, 59 L[ø]{}vseth, S. W., Austegard, A., Westman, S. F., Stang, H. G. J., Herrig, S., Neumann, T., & Span, R. 2018, Fluid Ph. Equilibria, 466, 48 Luger, R., Barnes, R., Lopez, E., Fortney, J., Jackson, B., & Meadows, V. 2015, Astrobiology, 15, 57 Marshall, J., Ferreira, D., Campin, J.-M., & Enderton, D. 2007, J. Atmos. Sci., 64, 4270 Massani, B., Mitterdorfer, C., & Loerting, T. 2017, J. Chem. Phys., 147, 134503 Mumma, M. J., & Charnley, S. B. 2011, Annu. Rev. Astron. Astrophys., 49, 471 Nettelmann, N., Fortney, J. J., Kramm, U., & Redmer, R. 2011, ApJ, 733, 2 Noack, L. [et al.]{} 2016, Icarus, 277, 215 Öberg, K. I., Boogert, A. C. A., Pontoppidan, K. M., van den Broek, S., van Dishoeck, E. F., Bottinelli, S., Blake, G. A., & Evans, N. J. 2011, ApJ, 740, 109 Olinger, B. 1982, J. Chem. Phys., 77, 6255 Ootsubo, T. [et al.]{} 2012, ApJ, 752, 15 Pan, D., & Galli, G. 2016, Sci. Adv., 2, e1601278 Poirier, J. P. 1982, Nature, 299, 683 Poirier, J. P. 1994, Physics of The Earth and Planetary Interiors, 85, 319 Poirier, J. P., Sotin, C., & Peyronneau, J. 1981, Nature, 292, 225 Ramirez, R. M., & Levi, A. 2018, MNRAS, 477, 4627 Raymond, S. N., Quinn, T., & Lunine, J. I. 2004, Icarus, 168, 1 Rogers, L. A., Bodenheimer, P., Lissauer, J. J., & Seager, S. 2011, ApJ, 738, 59 Rogers, L. A., & Seager, S. 2010, ApJ, 716, 1208 Saleh, G. and A. R.
Oganov 2016, Sci. Rep., 6, 32486 Selsis, F. [et al.]{} 2007, Icarus, 191, 453 Sloan, E. D., & Koh, C. 2007, [Clathrate Hydrates of Natural Gases, Third Edition]{}, Chemical Industries (CRC Press) Soave, G. 1972, Chemical Engineering Science, 27, 1197 Sotin, C., Gillet, P., & Poirier, J. 1985, in Ices in the solar system (Springer), 109–118 Sotin, C., Grasset, O., & Mocquet, A. 2007, Icarus, 191, 337 Span, R. 2013, Multiparameter equations of state: an accurate source of thermodynamic property data (Springer Science & Business Media) Span, R., Eckermann, T., Herrig, S., Hielscher, S., J[ä]{}ger, A., & Thol, M. 2016 Span, R., & Wagner, W. 1996, J. Phys. Chem. Ref. Data, 25, 1509 Span, R., & Wagner, W. 1997, Int J Thermophys, 18, 1415 Spycher, N., Pruess, K., & Ennis-King, J. 2003, Geochim. Cosmochim. Acta, 67, 3015 Tobie, G., Lunine, J. I., & Sotin, C. 2006, Nature, 440, 61 Toon, O. B., McKay, C. P., Ackerman, T. P., & Santhanam, K. 1989, J. Geophys. Res.: Atmospheres, 94, 16287 Tulk, C. A., Machida, S., Klug, D. D., Lu, H., Guthrie, M., & Molaison, J. J. 2014, J. Chem. Phys., 141, 174503 Turbet, M., Forget, F., Leconte, J., Charnay, B., & Tobie, G. 2017, Earth Planet. Sci. Lett., 476, 11 Turcotte, D., & Schubert, G. 2014, Geodynamics (Cambridge University Press) Unterborn, C. T., Desch, S. J., Hinkel, N. R., & Lorenzo, A. 2018, Nature Astronomy, 2, 297 Unterborn, C. T., Hinkel, N. R., & Desch, S. J. 2018, Res. Notes AAS, 2, 116 Van der Waals, J. H., & Platteeuw, J. C. 1959, Adv. Chem. Phys., 2, 1 Vin[š]{}, V., J[ä]{}ger, A., Hrub[ý]{}, J., & Span, R. 2017, Fluid Ph. Equilibria, 435, 104 Vin[š]{}, V., J[ä]{}ger, A., Span, R., & Hrub[ý]{}, J. 2016, Fluid Ph. Equilibria, 427, 268 von Paris, P., Grenfell, J. L., Hedelt, P., Rauer, H., Selsis, F., & Stracke, B. 2013, A[&]{}A, 549, A94 Wagner, W., & Pru[ß]{}, A. 2002, J. Phys. Chem. Ref. Data, 31, 387 Walker, J., Hays, P. B., & Kasting, J. F. 1981, J. Geophys.
Res.: Oceans, 86, 9776 Wang, H., Zeuschner, J., Eremets, M., Troyan, I., & Williams, J. 2016, Sci. Rep., 6, 19902 Wendland, M., Hasse, H., & Maurer, G. 1999, J. Chem. Eng. Data, 44, 901 Wiryana, S., Slutsky, L. J., & Brown, J. M. 1998, Earth Planet. Sci. Lett., 163, 123 Yang, J., Cowan, N. B., & Abbot, D. S. 2013, ApJL, 771, L45
--- author: - Tomohiro Fujita - 'Ryo Namba' title: ' Pre-reheating Magnetogenesis in the Kinetic Coupling Model ' --- IPMU16-0017 Introduction ============ Observations have revealed that our universe is magnetized on a wide range of scales. One of the most intriguing scales is the largest one. It is known that galaxies and their clusters have their own magnetic fields with the typical strength $\mathcal{O}(10^{-6}) \, \G$. However, their origin is a long-standing open question. Furthermore, multi-frequency blazar observations imply that magnetic fields which are not associated with galaxies or clusters do exist [@Neronov:1900zz; @Tavecchio:2010mk; @Dolag:2010ni; @Essey:2010nd; @Taylor:2011bn; @Takahashi:2013uoa; @Chen:2014rsa]. They are called extragalactic magnetic fields (EGMF) or void magnetic fields, and their strength is estimated to be no less than $\mathcal{O}(10^{-15}) \, \G$ [@Taylor:2011bn].[^1] EGMF may also indicate that the galaxy and cluster magnetic fields have a cosmological origin. Nevertheless, it is difficult even to find a hypothetical scenario which explains the origin of EGMF in a consistent and quantitative way without fine-tuning, and thus EGMF has attracted attention as a unique arena of theoretical cosmology and astrophysics (for recent reviews, see [@Kandus:2010nw; @Durrer:2013pga; @Subramanian:2015lua]).
The blazar observations place a lower bound not on the strength of EGMF itself but on the following [*effective*]{} strength of EGMF [@Fujita:2012rb; @Caprini:2015gga], i.e., $$\begin{aligned} B_{\rm eff} \gtrsim 10^{-15}\G \label{Target}\end{aligned}$$ with $$\begin{aligned} B_{\rm eff}^2(\eta_{\rm now}) \equiv \int^{k_{\rm diff}}_0 \frac{{\rm d} k}{k} F(kL)\, \mathcal{P}_{B}(\eta_{\rm now},k), \label{Beff} \\ F(z) \equiv \frac{3}{2}z^{-2} \left[ \cos (z) - \frac{\sin (z)}{z} + z {\rm Si}(z) \right] \label{Fz-def} .\end{aligned}$$ Here $\mathcal{P}_{B}(\eta_{\rm now},k)$ is the power spectrum of EGMF at present, Si$(z)$ denotes the sine integral function, $k_{\rm diff}^{-1}\sim 100\,$AU is the present cosmic diffusion length, and $L \simeq$ 1 Mpc stands for the characteristic length scale for energy losses of charged particles due to inverse Compton scattering. Since $F(kL) \propto k^{-1}$ for $k\gtrsim L^{-1}$, which suppresses the contribution from scales smaller than $L$, producing large-scale magnetic fields is favorable for explaining the blazar observations. The magnetic fields present in the line of sight of the blazar photons are in extragalactic regions, and hence astrophysical processes can hardly be responsible for their generation. A compelling possibility is to attribute them to a cosmological origin. There have been dedicated studies on several different mechanisms to produce large-scale magnetic fields in those regions. As a small subset of examples, the collision of bubbles created at a phase transition in the early universe, such as the QCD and electroweak transitions, can produce magnetic fields [@Vachaspati:1991nm; @Sigl:1996dm]. A concrete model that quantitatively accounts for the blazar observations is, however, not yet well established.
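The behavior of the window function $F(z)$ quoted above ($F \to 1$ for $z \ll 1$ and $F \propto z^{-1}$ for $z \gg 1$) can be checked numerically. The following sketch uses SciPy's sine integral; the large-$z$ coefficient $3\pi/4$ follows from ${\rm Si}(z) \to \pi/2$:

```python
# Numerical check of the window function F(z) entering B_eff.
# F -> 1 on large scales (z << 1), and F ~ 3*pi/(4z) for z >> 1,
# which is the k^-1 suppression of small-scale contributions noted
# in the text.
import numpy as np
from scipy.special import sici  # returns (Si(z), Ci(z))

def F(z):
    si, _ = sici(z)
    return 1.5 / z**2 * (np.cos(z) - np.sin(z) / z + z * si)

print(F(1e-3))                    # ~1: large-scale modes contribute fully
print(F(100.0), 3 * np.pi / 400)  # strong small-scale suppression, ~3*pi/(4z)
```

The monotonic decrease of $F$ with $z = kL$ makes explicit why modes with $k \gtrsim L^{-1}$ are disfavored in $B_{\rm eff}$.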
At second order in cosmological perturbation theory, the differential motion of charged particles in the plasma can also induce magnetic fields on cosmological scales [@Matarrese:2004kq; @Saga:2015bna]. Quantitative studies have shown, however, that the effective magnetic strength from the second-order effects does not reach the observed value. Inflationary magnetogenesis, i.e. the generation of magnetic fields during inflation, has been intensively investigated [@Turner:1987bw; @Ratra:1991bn; @Garretson:1992vt; @Finelli:2000sh; @Davis:2000zp; @Bamba:2003av; @Anber:2006xt; @Martin:2007ue; @Durrer:2010mq; @Ferreira:2013sqa; @Caprini:2014mja; @Kobayashi:2014sga; @Tasinato:2014fia; @Fujita:2015iga; @Domenech:2015zzi; @Campanelli:2015jfa].[^2] This is because large-scale structures are believed to be seeded in the inflationary era, and it is natural to extend this idea to explore a similar possibility for magnetic fields on large scales. The most well-studied model of inflationary magnetogenesis is the kinetic coupling model (a.k.a. the $I^2FF$ model) proposed by Ratra [@Ratra:1991bn]. In this model, a rolling scalar field is coupled to the kinetic term of the gauge field, and the energy density of the scalar field is transferred to the electromagnetic sector. Another class of models, in which a rolling pseudo-scalar field drives magnetogenesis, is also well studied [@Garretson:1992vt; @Anber:2006xt; @Durrer:2010mq; @Caprini:2014mja; @Fujita:2015iga]. Although quite a few models have been proposed and explored so far, each of them has to face all of the following problems [@Fujita:2012rb; @Demozzi:2009fu; @Barnaby:2012xt; @Fujita:2013qxa; @Fujita:2014sna; @Ferreira:2014hma; @Ferreira:2014zia; @Ferreira:2014mwa]: [*(i) The backreaction problem*]{}: the energy density of the generated electromagnetic fields must not exceed the inflaton energy density during inflation.
[*(ii) The strong coupling problem*]{}: the effective coupling constant between the gauge field and charged fermions should be small enough to justify the perturbative calculation. [*(iii) The curvature perturbation problem*]{}: the curvature perturbation induced by the electromagnetic fields must be consistent with CMB observations. It has been pointed out that imposing the conditions to resolve these issues (i)-(iii) is quite challenging and that primordial magnetogenesis solely from inflation at least requires a dedicated low-energy inflationary model, whose energy scale is close to the threshold of Big Bang nucleosynthesis (BBN)  [@Fujita:2014sna; @Ferreira:2014hma]. In this paper, we consider a magnetogenesis scenario in the framework of the kinetic coupling model. To overcome the above three problems, we extend the original model in the following way. In the original paper, the scalar field coupled to electromagnetism is the inflaton itself. Furthermore, it has often been assumed that the kinetic function $I$, which multiplies the kinetic term of the gauge field $F_{\mu\nu} F^{\mu\nu}$, is just a simple power-law function of the scale factor, namely $I(t) \propto a^{-n}$, and that it varies only during inflation.[^3] However, provided that the kinetic function $I$ is driven by a spectator scalar field which is not the inflaton, it is quite natural for $I$ to continue to vary after inflation ends. Thus we assume that $I$ varies until reheating is completed.[^4] Moreover, we also consider that $I$ starts varying after the perturbations with wave numbers corresponding to the CMB scales exit the horizon, to optimize the scenario for magnetogenesis. In our scenario, since the kinetic coupling is always no less than unity, $I\ge1$, we do not have to be concerned with the strong coupling problem.[^5] Yet, we need to properly analyze the perturbations of the fields to address the other two problems.
We obtain the exact solution of the gauge field and rigorously estimate its energy density and the curvature perturbation induced by it. Furthermore, the curvature perturbation is also produced from the scalar field perturbations which are sourced by the electromagnetic fields through both the direct coupling and the gravitational interactions. We calculate all of the leading-order contributions and find the constraints on the produced magnetic fields. Our result shows that the magnetic fields generated in our scenario can be strong enough to explain the observational value. This paper is organized as follows. In sec. \[Magnetogenesis\], we explain the setup of our scenario, calculate the evolution of the electromagnetic fields, and obtain the magnetic power spectrum at present. In sec. \[Constraints\], the constraints from the backreaction and the induced curvature perturbation are derived. We also make a comment on the interaction between the electromagnetic fields and charged particles. In sec. \[Results\], the results of this paper are shown. Section \[Summary\] is devoted to the conclusion. In the appendices, the explicit derivation and expressions of the exact solution of the electromagnetic fields are shown, and the calculation of the curvature perturbation is described. Magnetogenesis {#Magnetogenesis} ============== Model setup ----------- We consider the kinetic coupling model with the following action: $$S= \int \dd^4 x \sqrt{-g} \left[ \frac{M_{\rm Pl}^2}{2} R - \frac{1}{2}(\partial \phi)^2-V(\phi) -\frac{1}{2}(\partial\chi)^2-U(\chi) -\frac{1}{4} I^2(\chi) F_{\mu\nu}F^{\mu\nu} \right] \; , \label{Model Action}$$ where $\phi$ is the inflaton, $\chi$ is a spectator scalar field which drives the kinetic function $I(\chi)$, $F_{\mu\nu}\equiv \partial_\mu A_\nu - \partial_\nu A_\mu$, $A_\mu$ is the U(1) gauge field associated with electromagnetism, namely the photon, and $R$ and $M_{\rm Pl}$ are the Ricci scalar and the reduced Planck mass, respectively.
The energy of the background $\chi$ field is transferred to the electromagnetic fields through the kinetic coupling, $I^2 FF$, and thus the electromagnetic fields are generated in this model. In this paper, we assume that the inflaton oscillation after inflation can be well approximated by the one with a quadratic potential, while we leave the explicit forms of $V(\phi)$ during inflation and $U(\chi)$ unspecified and assume simple time evolution of the background universe and of $I(\chi)$. We consider the quasi-de Sitter expansion during inflation (i.e. $H_\inf \approx {\rm const.}$ and $\eta=-1/a H_\inf$, where $a$, $H$ and $\eta$ are the scale factor, Hubble parameter and conformal time, respectively), and the expansion of the matter-dominated universe, $\rho_\phi \propto a^{-3}$ and $\eta=2/aH\propto a^{1/2}$, during the inflaton oscillating phase between the end of inflation and the completion of reheating. We impose that the $\chi$ field is in a perturbative regime, and such time dependence is driven by the homogeneous vacuum expectation value of $\chi$, i.e. $I(\chi) \approx I(\langle \chi \rangle) = I(a)$. We assume that $I$ is constant at the beginning, starts varying at a certain time during inflation, and ceases to evolve at the completion of reheating. Without loss of generality, we set $I=1$ when it stops. While it is natural for $\chi$, and thus $I(\chi)$, to evolve after inflation since $\chi$ is not the inflaton but a spectator field, they could have different time dependences during and after inflation and could stop varying at an arbitrary time. We place additional assumptions that $I \propto a^{-n}$ both during and after inflation and that $\chi$ decays at the time of reheating, simply to reduce the number of model parameters.
In summary, the behaviors of the background expansion and the kinetic function $I$ are given by $$\eta = \left\{ \begin{array}{lc} -1/a H_\inf & (a <a_e)\\ 2/aH & (a_e <a <a_r) \end{array} \right., \qquad I(a) = \left\{ \begin{array}{lc} (a_i/a_r)^{-n}\equiv I_i & (a <a_i) \\ (a/a_r)^{-n} & (a_i <a <a_r) \\ 1 & \quad (a_r<a) \\ \end{array} \right. , \label{simple I}$$ where $a_i, a_e$ and $a_r$ denote the values of scale factor $a$ when $I(a)$ starts varying, inflation ends and reheating completes, respectively. Fig. \[simple case\] illustrates this behavior of $I$. Since we discuss only the case with $n>5/2$, for the reason mentioned in Subsection \[subsec:strength-present\], $I(a)$ is always larger than unity and hence our scenario is free from the strong coupling problem, namely the effective electromagnetic coupling strength $e / I < e$ at all times. ![The behavior of $I(a)$ given in eq. . []{data-label="simple case"}](I_behavior.eps){width="75mm"} Electromagnetic spectra ----------------------- Using the background evolution of the universe and the time dependence of $I(a)$ given in the previous subsection, we compute the power spectra of the generated electromagnetic fields. To formulate them, we take the Coulomb gauge, giving $A_0 = \partial_i A_i = 0$, and expand the transverse part of $A_i$ with the polarization vector $\epsilon_i^{(\lambda)}$ and the creation/annihilation operator $a^{\dagger(\lambda)}_{\bm{k}}/a^{(\lambda)}_{\bm{k}}$ as [^6] $$A_i(\eta, \bm{x}) = \sum_{\lambda=1}^{2} \int \frac{{\rm d}^3 k}{(2\pi)^3} \, {\rm e}^{i \bm{k \cdot x}} \, \left[ \epsilon_{i}^{(\lambda)}(\hat{\bm{k}}) \, a_{\bm{k}}^{(\lambda)} \mcA_{k}(\eta) + {\rm h.c.} \right] \,, \label{introduction of creation/annihilation operator}$$ where the hat of $\hat{\bm{k}}$ denotes the unit vector, $(\lambda)$ is the polarization label and $\mcA_k(\eta)$ is the mode function. 
Note that the mode function ${\cal A}_k (\eta)$ carries no polarization index $(\lambda)$ since the production mechanism in this model does not distinguish different polarization states. The spectra of the electric and magnetic fields are then given by $$\mcP_E (\eta,k) \equiv \frac{k^3 I^2}{\pi^2 a^4} \, |\partial_\eta \mcA_k|^2 \; ,\qquad \mcP_B (\eta,k) \equiv \frac{k^5 I^2}{\pi^2 a^4} \, |\mcA_k|^2 \; , \label{P of EB}$$ respectively. The equation of motion for the mode function is $$\left[\partial_\eta^2 +k^2-\frac{ \partial_\eta^2 I }{I}\right](I\mcA_k)=0 \,. \label{EoM of A}$$ From eq. , this reads $$\begin{aligned} &\left[\partial_\eta^2 +k^2\right](I_i \mcA_k)=0 \,, \qquad (a<a_i ) \label{BD EoM} \\ &\left[\partial_\eta^2 +k^2-\frac{n(n-1)}{\eta^2}\right](I\mcA_k^\inf)=0 \,, \qquad (a_i <a <a_e) \label{inf EoM} \\ &\left[\partial_\eta^2 +k^2-\frac{2n(2n+1)}{\eta^2}\right](I\mcA_k^\osc)=0, \qquad (a_e <a <a_r) \label{osc EoM}\end{aligned}$$ where the superscripts “inf" and “osc" denote quantities during inflation and the inflaton oscillating phase, respectively. Provided that the initial condition is given by the Bunch-Davies vacuum state, $I\mcA_k(a<a_i)={\rm e}^{-ik\eta}/\sqrt{2k}$, one can solve for $I\mcA_k(a>a_i)$ by using the general solutions of the above equations and the junction conditions between them. Then, substituting the mode function into eq. , one obtains the electromagnetic power spectra.
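As a cross-check of the mode equations above, the inflationary-era equation (the one with the $n(n-1)/\eta^2$ term) can be integrated numerically from the Bunch-Davies initial condition: on super-horizon scales, the growing solution behaves as $|I\mcA_k| \propto |\eta|^{1-n}$. The parameter choices ($n=3.5$, $k=1$) below are ours, for illustration only:

```python
# Numerical solution of the inflationary mode equation:
#   (I*A_k)'' + (k^2 - n(n-1)/eta^2)(I*A_k) = 0,
# starting from the Bunch-Davies state I*A_k = e^{-ik eta}/sqrt(2k)
# deep inside the horizon, then measuring the super-horizon growth.
import numpy as np
from scipy.integrate import solve_ivp

n, k = 3.5, 1.0

def rhs(eta, y):  # y = (f, f') with f = I*A_k
    f, df = y
    return [df, -(k**2 - n * (n - 1) / eta**2) * f]

eta0 = -50.0 / k  # deep inside the horizon
y0 = np.array([np.exp(-1j * k * eta0) / np.sqrt(2 * k),
               -1j * k * np.exp(-1j * k * eta0) / np.sqrt(2 * k)],
              dtype=complex)
etas = [-1e-2, -1e-3]  # two late, super-horizon times
sol = solve_ivp(rhs, (eta0, etas[-1]), y0, t_eval=etas,
                rtol=1e-10, atol=1e-12)
f1, f2 = np.abs(sol.y[0])
slope = np.log(f2 / f1) / np.log(abs(etas[1]) / abs(etas[0]))
print(slope)  # ≈ 1 - n = -2.5, i.e. |I*A_k| ∝ |eta|^{1-n}
```

The decaying solution $\propto |\eta|^{n}$ is negligible at these late times, so the fitted power law isolates the growing mode responsible for the super-horizon spectra quoted below.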
Here we show only the super-horizon asymptotic forms of the spectra, while their derivation and exact expressions are shown in appendix \[Derivation\]: $$\begin{aligned} \mcP_E^\inf(k,\eta) &\xrightarrow{|k\eta|\ll 1} \frac{2^{2n+1}}{\pi^4}\Gamma^2(n+1/2) |C_2( -k\eta_i )|^2 H_\inf^4 |k\eta|^{4-2n}, \label{PE inf} \\ \mcP_B^\inf(k,\eta) &\xrightarrow{|k\eta|\ll 1} \dfrac{2^{2n-1}}{\pi^4}\Gamma^2(n-1/2) |C_2( -k\eta_i )|^2 H_\inf^4 |k\eta|^{6-2n}, \\ \mcP_E^\osc(k,\eta) &\xrightarrow{|k\eta|\ll 1} \frac{2^{2n+1}}{\pi^4}\Gamma^2(n+1/2) |C_2( -k\eta_i )|^2 H_\inf^4 \left(\frac{k}{a_e H_\inf}\right)^{4-2n}\left(\frac{a}{a_e}\right)^{2n-4}, \label{PE osc} \\ \mcP_B^\osc(k,\eta) &\xrightarrow{|k\eta|\ll 1} \dfrac{2^{2n+3}}{\pi^4 }\frac{\Gamma^2(n+1/2)}{(4n+1)^2} |C_2(-k\eta_i)|^2 H_\inf^4 \left(\frac{k}{a_e H_\inf}\right)^{6-2n}\left(\frac{a}{a_e}\right)^{2n-3}, \label{PB osc}\end{aligned}$$ where $C_2( x )$ and its asymptotic expressions are given by $$\begin{aligned} &C_2( x ) = \frac{i\pi}{2\sqrt{2}}\sqrt{ x } \left[ J_{n-1/2}( x ) -i J_{n+1/2}( x )\right]. \label{fullC2} \\ &C_2( x ) \xrightarrow{x \ll 1} \frac{i\pi}{2\Gamma(n+1/2)}\left|\frac{ x }{2}\right|^n, \label{C2 large scale} \\ &C_2( x ) \xrightarrow{x \gg 1} \frac{i\sqrt{\pi}}{2} \, {\rm e}^{i \left( -x + \frac{n\pi}{2} \right)} \; . \label{C2 small scale}\end{aligned}$$ We have also introduced $\eta_i \equiv -1/a_i H_\inf$, which is the conformal time when $I$ starts varying. In fig. \[EM spectra\], we show the electromagnetic power spectra, $\mcP_E$ and $\mcP_B$, normalized by $H_\inf^4$ for $n=3.5$ during inflation (left panel) and during the inflaton oscillating phase (right panel). In fig. \[PEM\_evolve\], the time evolution of these spectra and their ratio are plotted.[^7] ![ [**(Left panel)**]{} The electromagnetic spectra during inflation. The horizontal axis denotes $K\equiv |k\eta_i|$, and we set $n=3.5$. 
$\mcP_E$ (red lines) and $\mcP_B$ (blue lines) normalized by $H_\inf^4$ are shown for $\ln (a/a_i) =1, 10, 20$ from transparent to opaque. The sub-horizon modes are shown as the dotted lines. [**(Right panel)**]{} The electromagnetic spectra during the inflaton oscillating phase. We set $n=3.5$ and $N_i\equiv \ln(a_e/a_i) \approx22$. $\mcP_E$ (red lines) and $\mcP_B$ (blue lines) are shown for $\ln (a/a_e) =1,5,10$ from transparent to opaque. One can see that the sub-horizon modes oscillate and damp, while the super-horizon ones continue to grow. The modes which did not exit the horizon during inflation and thus are unphysical are shown as the dotted line. []{data-label="EM spectra"}](EMspe_inflation.eps "fig:"){width="70mm"} ![ [**(Left panel)**]{} The electromagnetic spectra during inflation. The horizontal axis denotes $K\equiv |k\eta_i|$, and we set $n=3.5$. $\mcP_E$ (red lines) and $\mcP_B$ (blue lines) normalized by $H_\inf^4$ are shown for $\ln (a/a_i) =1, 10, 20$ from transparent to opaque. The sub-horizon modes are shown as the dotted lines. [**(Right panel)**]{} The electromagnetic spectra during the inflaton oscillating phase. We set $n=3.5$ and $N_i\equiv \ln(a_e/a_i) \approx22$. $\mcP_E$ (red lines) and $\mcP_B$ (blue lines) are shown for $\ln (a/a_e) =1,5,10$ from transparent to opaque. One can see that the sub-horizon modes oscillate and damp, while the super-horizon ones continue to grow. The modes which did not exit the horizon during inflation and thus are unphysical are shown as the dotted line. []{data-label="EM spectra"}](EMspe_oscillation.eps "fig:"){width="70mm"} ![[**(Left panel)**]{} The time evolution of the electric spectrum $\mcP_E$ (red line) and the magnetic spectrum $\mcP_B$ (blue line). We set $n=3.5$, $N_i\equiv \ln(a_e/a_i)\approx22,$ and $K\equiv|k\eta_i|=4$ which corresponds to the peak scale of $\mcP_E$ and $\mcP_B$. The horizontal axis denotes the e-folding number $N\equiv\ln(a/a_e)$ and inflation ends at $N=0$. 
[**(Right panel)**]{} The ratio between the magnetic and electric power spectrum, $\mcP_B/\mcP_E$. The parameters are the same as in the left panel. The ratio decreases during inflation, but it increases after inflation. []{data-label="PEM_evolve"}](EMspe_evolution.eps "fig:"){width="70mm"} ![[**(Left panel)**]{} The time evolution of the electric spectrum $\mcP_E$ (red line) and the magnetic spectrum $\mcP_B$ (blue line). We set $n=3.5$, $N_i\equiv \ln(a_e/a_i)\approx22,$ and $K\equiv|k\eta_i|=4$ which corresponds to the peak scale of $\mcP_E$ and $\mcP_B$. The horizontal axis denotes the e-folding number $N\equiv\ln(a/a_e)$ and inflation ends at $N=0$. [**(Right panel)**]{} The ratio between the magnetic and electric power spectrum, $\mcP_B/\mcP_E$. The parameters are the same as in the left panel. The ratio decreases during inflation, but it increases after inflation. []{data-label="PEM_evolve"}](EM_ratio.eps "fig:"){width="70mm"} The features of the generation of the electromagnetic fields in our scenario are threefold: 1. [*Post-inflationary amplification*]{}: In our scenario, it is assumed that $I$ continues to vary after inflation ends. This is quite natural if the $\chi$ field that drives $I$ is not the inflaton. As a result, the electromagnetic fields continue to grow even after inflation. Comparing eqs. -, one finds that the amplification factors are $$\frac{\mcP_E^\osc(\eta)}{\mcP_E^\inf(\eta_e)} \simeq \left(\frac{a}{a_e}\right)^{2n-4}, \quad \frac{\mcP_B^\osc(\eta)}{\mcP_B^\inf(\eta_e)} \simeq \left(\frac{a}{a_e}\right)^{2n-3}, \quad (|k\eta|\ll1, a\gg a_e). \label{MF faster}$$ This amplification can be substantial for $n>2$. Indeed, a massive increase can be seen in fig. \[EM spectra\] and fig. \[PEM\_evolve\].
Furthermore, considering that the total energy density decreases in proportion to $a^{-3}$ for $a_e<a<a_r$, and that without a varying $I$ the magnetic power spectrum would decrease as $a^{-4}$ after inflation, one recognizes that this amplification is very effective for magnetogenesis. At the same time, however, one may wonder if such a substantial amplification leads to large electric fields which may cause strong backreaction, spoiling the background evolution, or too large a curvature perturbation, inconsistent with observations. These issues are addressed in the following two points and are discussed in detail in Section \[Constraints\]. 2. [*IR suppression due to the sudden onset of the variation of $I$*]{}: As one can see in fig. \[EM spectra\], the spectra are suppressed on scales larger than $k\sim k_i\equiv |\eta_i|^{-1}$. By substituting eq.  into eqs. -, we obtain $$\mcP_E\propto k^{4},\quad \mcP_B\propto k^6,\quad (|k\eta_i|\ll1) \; ,$$ during both inflation and the inflaton oscillating phase. In fact, it is advantageous to have the electric power spectrum suppressed on large scales in order to evade the backreaction and curvature perturbation problems. It is known that in the kinetic coupling model, if one tries to obtain sufficiently strong magnetic fields on large scales, the electric spectrum becomes red-tilted and acquires a huge amplitude at the IR cutoff, which corresponds to the mode that crosses the horizon at the onset of inflation [@Demozzi:2009fu]. The electric fields then cause these problems [@Barnaby:2012xt; @Fujita:2013qxa].[^8] In our scenario, however, even if the electric fields have a red-tilted spectrum, the IR cutoff around $k\sim k_i$ prevents $\mcP_E$ from becoming huge on larger scales [@Ferreira:2013sqa]. In particular, one can expect that our scenario avoids constraints from the CMB observations if the peak scale $k_i^{-1}$ is much smaller than the CMB scale. 3.
[*Reduction of the hierarchy between the electric and magnetic fields*]{}: As one sees in figs. \[EM spectra\] and \[PEM\_evolve\], on super-horizon scales the electric fields are always stronger than the magnetic fields. This is generically true in cases where the gauge mode function is proportional to a power law of the conformal time, $\mcA_k\propto \eta^s$ on super-horizon scales (in our case, $s=1-2n$ during inflation and $s=1+4n$ during the inflaton oscillating phase). In that case, from the definition of the power spectra eq. , one finds $$\frac{\mcP_B(k,\eta)}{\mcP_E(k,\eta)}=\frac{k^2|\mcA_k|^2}{|\partial_\eta \mcA_k|^2} \propto |k \eta|^2 \propto \left\{ \begin{array}{lc} a^{-2}, & \quad(\rm inflation)\\ a, & \quad(\rm oscillation) \end{array}\right.. \label{EB hierarchy}$$ Since $|k\eta|\ll 1$ on super-horizon scales, we always have $\mcP_B\ll \mcP_E$. It should be noted that the hierarchy between $\mcP_E$ and $\mcP_B$ simply depends on how much larger the scale of the mode is than the horizon scale. Therefore during inflation the hierarchy widens, while it is reduced during the inflaton oscillating phase. This behavior is clearly seen in fig. \[PEM\_evolve\]. In other words, the magnetic fields grow faster than the electric fields during the inflaton oscillating phase (see eq. ), but the opposite is true during inflation. Since the energy density of the electromagnetic fields is dominated by the electric fields, the constraints coming from the backreaction and the curvature perturbation problem are put on $\mcP_E$. Consequently, stronger constraints are put on $\mcP_B$ because the magnetic fields should be smaller than the electric fields on super-horizon scales by the hierarchical factor, eq. . This is the reason why the generated magnetic fields are severely constrained in conventional inflationary magnetogenesis. Nevertheless, in our scenario, the hierarchy is reduced by many orders of magnitude during the inflaton oscillating phase (see fig.
\[PEM\_evolve\]). Therefore the constraints on the magnetic fields are substantially relaxed.[^9] The strength of the magnetic field at present {#subsec:strength-present} --------------------------------------------- Let us compute the strength of the produced magnetic field at present and its effective amplitude $B_{\rm eff}$ to compare the prediction of the model with the blazar observations. Since the produced magnetic fields evolve adiabatically after $I$ becomes constant at the time of reheating,[^10] we can obtain the magnetic power spectrum at present by multiplying $\mcP_B^\osc(\eta_r)$ in eq.  by $a_r^4$, with the scale factor normalized by its present value: $$\begin{aligned} \mcP_B(\eta_{\rm now}) &=\frac{2^{2n+3}\Gamma^2(n+\frac{1}{2})}{9\pi^4 (4n+1)^2} \frac{a_r^4 \rho_\inf^2}{\Mpl^4} \bigg| C_2 \left( \frac{k}{k_i} \right) \bigg|^2 \, {\rm e}^{2(n-3)N_k + (2n-3)N_r}, \quad (|k\eta_r|\ll 1). \label{PBnow1}\end{aligned}$$ Here $N_k$ and $N_r$ represent the e-folding number between the horizon crossing of the $k$ mode and the end of inflation, and that of the inflaton oscillating phase, respectively; $$\begin{aligned} N_r &= \ln \left(\frac{a_r}{a_e}\right) = \frac{1}{3}\ln \left(\frac{H_\inf^2}{H_r^2}\right) =\frac{1}{3}\ln \left(\frac{\rho_\inf}{\frac{\pi^2}{30}g_* T_r^4}\right) \notag\\ &\approx 29.5+\frac{4}{3} \ln \left(\frac{\rho_\inf^{1/4}}{10^{10}\GeV}\right) -\frac{4}{3}\ln \left(\frac{T_r}{1\GeV}\right)-\frac{1}{3}\ln \left(\frac{g_*}{100}\right), \label{Nr} \\ N_k &= \ln \left(\frac{a_e H_\inf}{k}\right)=\ln \left(\frac{a_r e^{-N_r}\rho_\inf^{1/2}}{\sqrt{3}\Mpl k}\right) \notag\\ &\approx 31.4 +\frac{2}{3} \ln \left(\frac{\rho_\inf^{1/4}}{10^{10}\GeV}\right) +\frac{1}{3}\ln \left(\frac{T_r}{1\GeV}\right) -\ln \left(\frac{k}{1\Mpc^{-1}}\right) \; , \label{Nk}\end{aligned}$$ where $T_r$ and $g_*$ are the temperature and the number of relativistic degrees of freedom, respectively, at the time of reheating.
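The reference values $N_r \approx 29.5$ and $N_k \approx 31.4$ can be reproduced directly from the exact expressions above, using entropy conservation for $a_r$. The short sketch below is our own; the reduced Planck mass, the Mpc-to-GeV conversion, $T_{\rm CMB}$ and today's $g_{\ast s}\approx 3.91$ are standard assumed values, not taken from the text:

```python
import math

# assumed standard constants (natural units, GeV)
Mpl     = 2.435e18      # reduced Planck mass
T_cmb   = 2.35e-13      # CMB temperature today
Mpc_inv = 6.39e-39      # 1 Mpc^-1 expressed in GeV
g_s_now = 3.91          # entropy degrees of freedom today

# reference parameter point of eqs. (Nr)-(Nk)
rho_inf = (1e10) ** 4   # GeV^4
T_r, g_star = 1.0, 100.0
k = 1.0 * Mpc_inv       # k = 1 Mpc^-1

# N_r = (1/3) ln(rho_inf / rho_r) with rho_r = (pi^2/30) g_* T_r^4
rho_r = math.pi**2 / 30.0 * g_star * T_r**4
N_r = math.log(rho_inf / rho_r) / 3.0

# a_r from entropy conservation, then N_k = ln(a_e H_inf / k)
a_r = (g_s_now / g_star) ** (1.0 / 3.0) * T_cmb / T_r
H_inf = math.sqrt(rho_inf / 3.0) / Mpl
N_k = math.log(a_r * math.exp(-N_r) * H_inf / k)

print(N_r, N_k)  # approximately 29.5 and 31.4
```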
We have also used entropy conservation, $$a_r = \left(\frac{g_{\ast s}(T_{\rm CMB})}{g_{\ast s}(T_r)}\right)^{\frac{1}{3}}\frac{T_{\rm CMB}}{T_r} \approx 8.0 \times10^{-14} \left(\frac{T_r}{1\GeV}\right)^{-1}\left(\frac{g_\ast}{100}\right)^{-1/3}, \label{ar}$$ where $T_{\rm CMB}$ is the CMB temperature and $g_{\ast s}$ is the effective number of degrees of freedom in entropy, which is assumed to be equal to $g_*$ at reheating. Substituting eqs. - into eq. , we obtain the magnetic power spectrum at present. One may be tempted to make an immediate comparison with the result of the blazar observations, as has actually been done in parts of the literature. However, it should be stressed that what is measured in the blazar observations is not $\mcP_B$ but $B^2_{\rm eff}$, defined in eq. . Therefore we should further substitute the obtained $\mcP_B(\eta_{\rm now})$ into eq.  and compute the effective strength of the magnetic fields. We then obtain $$\begin{aligned} B_{\rm eff}^2(\eta_{\rm now}) = & 2 \times 10^2 \, {\rm G}^2 \, \frac{2^{2n} \, {\rm e}^{121.8 (n-2.5)} \, \Gamma^2 \left( n+\frac{1}{2} \right)}{(4n+1)^2} \, H_n (k_i) \left(\frac{g_*}{100}\right)^{-\frac{2n+1}{3}} \nonumber\\ & \times \left(\frac{\rho_\inf^{1/4}}{10^{10} \, \GeV}\right)^{4n} \left(\frac{T_r}{1 \, \GeV}\right)^{-2(n+1)} \left( \frac{L}{1 \, {\rm Mpc}} \right)^{2n-6} \; , \label{Beff2}\end{aligned}$$ where $L \simeq 1 \, {\rm Mpc}$ corresponds to the characteristic length scale for energy losses of charged particles due to inverse Compton scattering, and $$\begin{aligned} H_n( k_i ) & \equiv & \int^{k_{\rm diff}}_0 \frac{{\rm d} k}{k} F(kL) \bigg| C_2 \left( \frac{k}{k_i} \right) \bigg|^2 (kL)^{2(3-n)} \; , \label{Hs def}\\ & \simeq & (k_i L)^{5-2n} \exp \left( 8.404 - 2.226 n - 0.1947 n^2 \right), \label{Hn_fitted}\end{aligned}$$ where $C_2$ is defined in eq. .
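For orientation, eq.  together with the fitted form of $H_n$ above can be evaluated numerically. This is our own sketch with an illustrative parameter point ($n=3$, $\rho_\inf^{1/4}=1.1\times10^5\,\GeV$, $T_r=1\,\GeV$, $k_i=1\,\Mpc^{-1}$, $g_*=100$, $L=1\,\Mpc$); it yields $B_{\rm eff}$ of order $10^{-15}\,{\rm G}$:

```python
import math

def H_n_fit(n, kiL):
    # fitted H_n (valid for k_i L >~ 1 and 3 <= n <= 8)
    return kiL ** (5 - 2 * n) * math.exp(8.404 - 2.226 * n - 0.1947 * n**2)

def Beff2(n, rho14, Tr, ki_Mpc, g=100.0, L=1.0):
    # eq. (Beff2) in Gauss^2; rho14 and Tr in GeV, ki in Mpc^-1, L in Mpc
    pref = (2e2 * 2 ** (2 * n) * math.exp(121.8 * (n - 2.5))
            * math.gamma(n + 0.5) ** 2 / (4 * n + 1) ** 2)
    return (pref * H_n_fit(n, ki_Mpc * L) * (g / 100.0) ** (-(2 * n + 1) / 3.0)
            * (rho14 / 1e10) ** (4 * n) * Tr ** (-2 * (n + 1)) * L ** (2 * n - 6))

B = math.sqrt(Beff2(n=3, rho14=1.1e5, Tr=1.0, ki_Mpc=1.0))
print(B)  # of order 1e-15 G
```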
The last approximate expression, obtained by fitting the numerical result of eq. , is valid in the case with $k_i L\gtrsim1$ and $3 \le n \le 8$. The fit is quite good in this regime, with error $\sim 1 \, \%$, but not particularly so for $k_i L \ll 1$ or $2.5 \le n \le 3$. Therefore the exponential factor, which depends only on $n$, is mainly for illustrative purposes, and we use the exact calculation for later analyses; however, it is worth noting that the $k_i$ dependence, $H_n \propto k_i^{5-2n}$, is quite accurate for $k_i L \gtrsim 1$ even for $2.5 \le n \le 3$. ![[**(Left panel)**]{} The integrand of $H_n$ defined in eq. . Solid lines are for $k_i^{-1}=L = 1\Mpc$ and $n=2.1, 2.5, 3$ and $4$ from top to bottom. The dashed and dot-dashed lines are for $k_i L = 10, 100$, respectively, with $n=3$. One can see that the main contribution to $B_{\rm eff}^2$ comes from $k\sim k_i$ for $n>2.5$. However, for $n<2.5$, the contribution from smaller scales is dominant and $B_{\rm eff}$ depends on the cutoff scale, $k_{\rm diff}$. [**(Right panel)**]{} The numerically evaluated $H_n$ for $n>2.5$. The lines correspond to $k_i L = 1,10,100$ and $1000$ from top to bottom. $H_n$ diverges logarithmically for $n=2.5$. []{data-label="Hs plot"}](h_integrand.eps "fig:"){width="70mm"} ![[**(Left panel)**]{} The integrand of $H_n$ defined in eq. . Solid lines are for $k_i^{-1}=L = 1\Mpc$ and $n=2.1, 2.5, 3$ and $4$ from top to bottom. The dashed and dot-dashed lines are for $k_i L = 10, 100$, respectively, with $n=3$. One can see that the main contribution to $B_{\rm eff}^2$ comes from $k\sim k_i$ for $n>2.5$. However, for $n<2.5$, the contribution from smaller scales is dominant and $B_{\rm eff}$ depends on the cutoff scale, $k_{\rm diff}$. [**(Right panel)**]{} The numerically evaluated $H_n$ for $n>2.5$. The lines correspond to $k_i L = 1,10,100$ and $1000$ from top to bottom. $H_n$ diverges logarithmically for $n=2.5$.
[]{data-label="Hs plot"}](Hs_plot.eps "fig:"){width="70mm"} In fig. \[Hs plot\], we show the numerically evaluated $H_n$ and its integrand. It can be seen that for $n<5/2$ the contribution from small scales is dominant and hence $H_n$ depends on the small-scale cutoff $k_{\rm diff}$. This is simply because the magnetic power spectrum is blue-tilted for $n<3$ (see eq. ) and $F(kL) \propto (kL)^{-1}$ for $kL \gtrsim 1$ (see eq. ). Therefore we concentrate on cases with $n>2.5$ henceforth. Constraints {#Constraints} =========== We have obtained eq.  as the magnetic field strength effective for blazar observations in our scenario of the model . This result must be checked against computational and observational consistency conditions. In our calculations in the previous sections, we have assumed that the energy density of the produced electromagnetic field does not alter the background evolution to a significant level. We thus have to impose this condition on the obtained result. Moreover, the produced field inevitably contributes to the fluctuations of the total energy density and therefore to the curvature perturbation $\zeta$. Since the electromagnetic spectra are strongly scale-dependent in almost all cases (see fig. \[EM spectra\]), the electromagnetically induced $\zeta$ must be subdominant to the standard quasi-scale-invariant curvature perturbation originating from vacuum fluctuations, to be consistent with the CMB observations. One last thing to be taken care of is the effect of charged particles during the reheating process. Some charged particles are produced even before the completion of reheating, and once they are present, they may potentially wash away the electric fields and consequently prevent the evolution of magnetic fields. This effect must be negligible for successful magnetogenesis. We carefully evaluate these three issues one by one in the following subsections.
The final results for present effective magnetic fields with all these constraints imposed are given in Section \[Results\]. Backreaction problem -------------------- In this subsection, we evaluate the energy density of the produced electromagnetic fields and derive the constraint from the backreaction on the total energy density. As discussed in the previous section, the magnetic fields are negligible compared to their electric counterpart, and thus it suffices to focus on the electric fields here. During the inflaton oscillating phase, since the total energy density behaves as $\rho_\tot \simeq \rho_\phi\propto a^{-3}$, eq.  implies $$\Omega_\em \equiv \frac{\rho_\em}{\rho_\tot} \propto a^{2n-1} \; , \qquad (a_e< a< a_r) \; .$$ For $n>1/2$, the energy fraction of the electric fields $\Omega_\em$ increases. In this case, $\Omega_\em$ reaches its maximum value at $a=a_r$, and we should evaluate $\Omega_\em(\eta_r)$. Since the contribution from sub-horizon modes is negligible, we can compute the electric energy density $\rho_E$ for $a_e\le a\le a_r$ from eq.  as $$\begin{aligned} \rho_\em^\osc(\eta_r) &\simeq \rho_E^\osc(\eta_r) \simeq \frac{1}{2}\int\frac{\dd k}{k} \mcP^\osc_E(\eta_r), \notag\\ &\simeq \dfrac{2^{2n}}{9\pi^4}\Gamma^2(n+1/2) \frac{\rho_\inf^2}{\Mpl^4} \exp\left[(2n-4)(N_i + N_r)\right] F_n, \label{rhoemosc}\end{aligned}$$ where we define $$N_i\equiv \ln \left(\frac{a_e}{a_i}\right), \quad F_n \equiv \int_0^{|\eta_i/\eta_r|} \dd K \, |C_2 (K) |^2 K^{3-2n}, \quad (K\equiv k/k_i). \label{Fs def}$$ For $n>2$, $F_n$ depends only on $n$, because $\mcP_E^\osc$ has its peak at $k\sim k_i$. We can numerically evaluate the integral in $F_n$ by sending the upper bound to infinity and find a good fitting function with error $< 1 \, \%$ within the domain $2<n<10$ as $$F_n \simeq \exp\left( 4.944 -1.461 n - 0.3430 n^2 + 0.0085 n^3 \right) \; . \label{Fn approx}$$ Dividing eq.
by $\rho_r \equiv \rho_\tot(\eta_r)$ and using $\rho_\inf/\rho_r =e^{3N_r}$, we obtain $$\begin{aligned} \Omega_\em(\eta_r) &\simeq \dfrac{2^{2n}}{9\pi^4}\Gamma^2 \! \left( n+ \frac{1}{2} \right) \, \frac{\rho_\inf}{\Mpl^4} \exp\left[(2n-4)N_i + (2n-1) N_r\right]F_n, \notag\\ &\approx 2.5 \times 10^{28} \, 2^{2n} \, {\rm e}^{121.8(n-2.5)} \, \Gamma^2 \! \left( n+ \frac{1}{2} \right) \, F_n \notag\\ &\quad\times \left(\frac{\rho_\inf^{1/4}}{10^{10}\GeV}\right)^{4n} \left(\frac{T_r}{1\GeV}\right)^{-2n} \left(\frac{g_*}{100}\right)^{\frac{1-2n}{3}} \left(\frac{k_i}{1\Mpc^{-1}}\right)^{2(2-n)}. \label{Omega em 1}\end{aligned}$$ To avoid the backreaction problem, $\Omega_\em(\eta_r)< 1$ is required. Comparing eqs.  and , one can observe that, to evade strong backreaction, lowering the inflationary energy scale and raising the reheating temperature are favored; however, this would also result in a smaller $B_{\rm eff}$. In particular, a higher $T_r$ decreases $B_{\rm eff}$ more than it relaxes the backreaction constraint, and therefore lowering $\rho_{\rm inf}$ provides a larger parameter window for successful magnetogenesis while avoiding the backreaction problem. Curvature perturbation problem {#Curvature perturbation problem} ------------------------------ In this subsection, we explore the curvature perturbation induced by the production process of the electromagnetic fields, which we call $\zeta_\em$. Denoting the curvature perturbation observed in CMB experiments by $\zeta_\obs$, the additional contribution to the curvature power spectrum from $\zeta_\em$ must satisfy $\mcP_\zeta^\em(k_{\rm CMB}) < \mcP_\zeta^\obs(k_{\rm CMB})$, where $k_{\rm CMB}$ denotes the CMB scales, since $\zeta_{\rm em}$ has a strongly scale-dependent spectrum. This inequality gives a constraint on our magnetogenesis scenario. Note that we do not specify the origin of $\zeta_\obs$ and use the observational result $\mcP_\zeta^\obs\approx 2.2 \times 10^{-9}$ in this paper.
On a flat slice (uniform-curvature hypersurface), the curvature perturbation is given by $$\zeta = -H\frac{\delta\rho}{\dot{\rho}}, \label{zeta eq1}$$ where $\delta\rho$ is the density perturbation on the flat slice. Here, it is important to notice that the perturbation of the energy density induced by the generation/amplification process of the electromagnetic field includes not only that of the electromagnetic fields themselves, $\delta\rho_\em\equiv \rho_\em -\langle\rho_\em\rangle$, but also the perturbations of the scalar field energy densities which are sourced by the generated electromagnetic fields. In addition to the direct coupling between the $\chi$ field and the electromagnetic fields, the gravitational interaction couples all fields in our scenario, namely $\phi$, $\chi$ and $A_\mu$. To properly evaluate the curvature perturbation induced by the produced electromagnetic field, therefore, one must take into account these couplings, solve the equations of motion for the scalar fields, and obtain their energy density perturbations, as well as the direct contribution $\delta\rho_{\rm em}$. The leading contributions to the scalar perturbations $\delta\phi$ and $\delta\chi$ from the produced electromagnetic field are threefold: (i) the inverse-decay of $A_\mu$ to $\delta\chi$ through the direct coupling $I^2(\chi) F^2$, (ii) $A_\mu$ gravitationally sourcing $\delta\phi$ through the trace of the energy-momentum tensor, and (iii) the gravitational mass mixing of $\delta\phi$ with the sourced $\delta\chi$.
These processes can be depicted schematically as (i) $A_\mu + A_\mu \xrightarrow{\rm direct} \delta\chi$, (ii) $A_\mu + A_\mu \xrightarrow{\rm grav.} \delta\phi$, and (iii) $A_\mu + A_\mu \xrightarrow{\rm direct} \delta\chi \xrightarrow{\rm grav.} \delta\phi$, and they and $\delta\rho_{\rm em}$ give contributions of the same order.[^11] We refer interested readers to Appendix \[app:zeta-em\] for the detailed derivation of ${\cal P}_\zeta^{\rm em}$ taking all these effects into account, and only report the final result here. The total energy density perturbation is the sum of all the energy contents, $$\delta\rho_{\rm tot} = \delta\rho_\phi + \delta\rho_\chi + \delta\rho_{\rm em} \; ,$$ and we define the power spectrum of the curvature perturbation in the standard way: $$\mcP_\zeta (k) \, \frac{2\pi^2}{k^3} \, (2\pi)^3 \delta^{(3)} (\bm{k}+\bm{k}') = \left\langle \hat\zeta(\bm{k}) \, \hat\zeta(\bm{k}')\right\rangle = \frac{H^2}{\dot\rho^2} \left\langle \delta\hat\rho_{\rm tot} (\bm{k}) \, \delta\hat\rho_{\rm tot} (\bm{k}')\right\rangle \; , \label{Pzetaem}$$ where a hat denotes an operator in Fourier space. As derived in Appendix \[app:zeta-em\], the part of $\mcP_\zeta$ sourced directly and indirectly by the produced electromagnetic field, evaluated at the time of reheating, is given in eq.
, $$\begin{aligned} {\cal P}_\zeta^\em \big\vert_{t=t_r} &\simeq& \frac{\rho_{\rm inf}^2}{M_{\rm Pl}^8} \, \frac{2^{4n} \, \Gamma^4\left( n + \frac{1}{2} \right)}{3^6 \pi^8} \, \mathcal{G}_n \left( \frac{a_e}{a_i} \right)^{4n-5} \left( \frac{a_r}{a_e} \right)^{4n-2} \left( \frac{k}{a_e H_{\rm inf}} \right)^3 \; , \label{induced Pzeta}\end{aligned}$$ where the background time evolution $H_r^2 / H_{\rm inf}^2 = \rho_r / \rho_e \propto (a_e / a_r)^3$ is used, and $$\begin{aligned} && \mathcal{G}_n \equiv \gamma_n^2 G_n^{(1)} + \frac{\pi^4 \lambda_n^2 G_n^{(2)}}{60 \left( 4n+1 \right)^2} + \frac{\pi^4 \gamma_n \lambda_n G_n^{(3)}}{12 \left( 4n+1 \right)} \; , \\ && \gamma_n \equiv \frac{8n^2+61n-100}{4(n-2)(2n-1)(4n-5)} \; , \quad \lambda_n \equiv \frac{3(8n - 7)}{8(2n-1)} \; , \nonumber\\ && G_n^{(1)} \simeq \exp\left( 5.27 - 2.34 n - 0.821 n^2 + 0.0240 n^3 \right) \; , \nonumber\\ && G_n^{(2)} \simeq \exp\left( 5.86 - 2.34 n - 0.820 n^2 + 0.0240 n^3 \right) \; , \nonumber\\ && G_n^{(3)} \simeq \exp\left( 3.46 - 2.33 n - 0.821 n^2 + 0.0241 n^3 \right) \; .\end{aligned}$$ Note that this expression is valid for $k \ll k_i$, which is relevant for $k \sim k_{\rm CMB}$ and $k_i^{-1} \lesssim 1 \, {\rm Mpc}$. With eqs. -, eq.  reads $$\begin{aligned} \mcP_\zeta^\em \vert_{t=t_r} &\approx 8.7 \times 10^{51} \, 2^{4n} \, \Gamma^4\!\left( n + \frac{1}{2} \right) {\rm e}^{243.6(n-2.5)} \, {\cal G}_n \notag\\ &\times \left(\frac{\rho_\inf^{1/4}}{10^{10}\GeV}\right)^{8n} \left(\frac{T_r}{1\GeV}\right)^{-4n} \left(\frac{g_*}{100}\right)^{\frac{2(1-2n)}{3}} \left(\frac{k_i}{1\Mpc^{-1}}\right)^{5-4n} \left(\frac{k}{0.05\Mpc^{-1}}\right)^{3} \label{Pzetaem-result}\end{aligned}$$ After reheating completes, the electric fields quickly vanish due to the high electric conductivity, and $\zeta_\em$ freezes out [@Martin:2007ue]. 
Thus the requirement from the CMB observation is $$\mcP_\zeta^\em (k_{\rm CMB},\eta_r) < \mcP_\zeta^\obs \approx 2.2\times 10^{-9},$$ and this puts a constraint on the strength of the produced magnetic fields. It should be noted that one can compute higher-order correlation functions of $\zeta_\em$, and they might potentially provide further constraints on the strength of generated electromagnetic fields. Nevertheless, the curvature perturbation that is sourced by the electromagnetic field is strongly scale-dependent, and the shape of the bispectrum is very different from the local type or other shapes analyzed in observational papers, such as the ones by the Planck collaboration. Therefore, the existing bounds on non-Gaussianity are not directly applicable to the present case, and a dedicated analysis is necessary to obtain a constraint. Thus we concentrate on the power spectrum in this paper for a concrete analysis, and we would like to come back to this issue in future studies. The interaction with charged particles -------------------------------------- In the previous section, we have solved the equation of motion for the gauge field by ignoring interactions with charged particles. During the inflaton oscillating phase, however, charged particles can be produced by the decay of the inflaton. If such an interaction is non-negligible, the dynamics of the electromagnetic fields may be significantly altered [@Martin:2007ue; @Bassett:2000aw]. Therefore, in this subsection we investigate the condition under which the interaction can be safely neglected, which must be satisfied for the consistency of our calculation. We basically follow the argument in ref. [@Fujita:2015iga], and assume that the inflaton decay rate $\Gamma_\phi$ is constant. Then the energy density of the decay products is given by $\rho_{\rm rad} =2\Gamma_\phi \rho_\phi/5H$ during the inflaton oscillating phase [@Kolb:1990vq].
The interaction between photons and charged particles can be ignored if their interaction rate $\Gamma_{\rm int}$ is smaller than the Hubble expansion rate $H$, i.e. $\Gamma_{\rm int}<H$. One can estimate the interaction rate as $\Gamma_{\rm int}=n_{\rm c} \sigma_{\rm int} v,$ where $n_{\rm c} \simeq \rho_{\rm rad}/m_\phi$ is the number density of the charged particles, $\sigma_{\rm int}\simeq \alpha_{\rm eff}^2/m_\phi^2$ is their interaction cross section, and $v\approx 1$ is their velocity. Here we have introduced the inflaton mass $m_\phi$ and the effective fine structure constant, $\alpha_{\rm eff}\equiv \alpha/I^2$, which is rescaled by $I^2$ because the kinetic term of the photon is modified [@Demozzi:2009fu]. Then one can recast the condition $\Gamma_{\rm int}<H$ as a lower bound on $m_\phi$: $$m_\phi \gtrsim 10^5\GeV\times I^{-\frac{4}{3}} \left(\frac{\alpha}{0.01}\right)^{2/3} \left(\frac{T_r}{1\GeV}\right)^{2/3}, \label{mass bound}$$ where we have used $T_r \simeq \sqrt{\Gamma_\phi \Mpl}$. Here we assume that the charged particles constitute an $\mathcal{O}(1)$ fraction of $\rho_{\rm rad}$; the condition eq.  can be further relaxed if this is not the case. For example, it is possible that the inflaton does not decay into any charged particles and they are only secondarily produced from other decay products [@Fujita:2015iga]. It is also interesting to note that if the inflaton decays only through dimension-five operators, the decay rate is naturally expected to be suppressed by the Planck mass, $\Gamma_\phi \simeq m_\phi^3/\Mpl^2$, and the reheating temperature is given by $$T_r \simeq \frac{m_\phi^{3/2}}{\Mpl^{1/2}} \approx 1\GeV \left(\frac{m_\phi}{10^6\GeV}\right)^{3/2}.$$ In this case, the condition eq.  is always satisfied. Results {#Results} ======= In this section, we obtain the range of the magnetic field strength in our scenario that evades the backreaction and the curvature perturbation problems. First, we solve eqs.
and in terms of $\rho_\inf$ by fixing the parameters $T_r$, $g_*$ and $k_i$. In fig. \[rho plot\], we plot the obtained $\rho_\inf$ for the parameters that we use later. ![[**(Left panel)**]{} The inflationary energy scale $\rho_\inf^{1/4}$ is shown for $\Omega_\em(\eta_r)=1$ (solid lines) and for $\mcP_\zeta^\em=\mcP_\zeta^\obs$ (dashed lines). We fix $k_i= 1$ (blue), $10$ (orange), $10^{2}$ (green) and $ 10^{3} \, \Mpc^{-1}$ (red). These lines are the upper bounds on the inflationary energy scale, but the value of $\rho_{\rm inf}^{1/4}$ depends on the given values of $\Omega_{\rm em}$ and ${\cal P}_\zeta^{\rm em}$ only weakly, $\rho_\inf^{1/4}\propto \Omega_\em^{1/4n} \propto (\mcP_\zeta^\em)^{1/8n}$. Since we set $T_r = 1 \, \GeV$, $\rho_\inf^{1/4}$ should be larger than $1 \, \GeV$, shown as the black dashed line. [**(Right panel)**]{} The peak scale of the electromagnetic field is pushed up to $k_i = 1{\rm kpc}^{-1}$. Again, the solid lines denote $\Omega_\em(\eta_r)=1$ and the dashed lines denote $\mcP_\zeta^\em=\mcP_\zeta^\obs$. The colors represent different reheating temperatures, $T_r=10^3$ (green), $10^2$ (orange), and $10 \, \GeV$ (blue). The dotted lines show these $T_r$, setting the lower bounds on $\rho_\inf^{1/4}$. []{data-label="rho plot"}](rho_single.eps "fig:"){width="70mm"} ![[**(Left panel)**]{} The inflationary energy scale $\rho_\inf^{1/4}$ is shown for $\Omega_\em(\eta_r)=1$ (solid lines) and for $\mcP_\zeta^\em=\mcP_\zeta^\obs$ (dashed lines). We fix $k_i= 1$ (blue), $10$ (orange), $10^{2}$ (green) and $ 10^{3} \, \Mpc^{-1}$ (red). These lines are the upper bounds on the inflationary energy scale, but the value of $\rho_{\rm inf}^{1/4}$ depends on the given values of $\Omega_{\rm em}$ and ${\cal P}_\zeta^{\rm em}$ only weakly, $\rho_\inf^{1/4}\propto \Omega_\em^{1/4n} \propto (\mcP_\zeta^\em)^{1/8n}$. Since we set $T_r = 1 \, \GeV$, $\rho_\inf^{1/4}$ should be larger than $1 \, \GeV$, shown as the black dashed line.
[**(Right panel)**]{} The peak scale of the electromagnetic field is pushed up to $k_i = 1{\rm kpc}^{-1}$. Again, the solid lines denote $\Omega_\em(\eta_r)=1$ and the dashed lines denote $\mcP_\zeta^\em=\mcP_\zeta^\obs$. The colors represent different reheating temperatures, $T_r=10^3$ (green), $10^2$ (orange), and $10 \, \GeV$ (blue). The dotted lines show these $T_r$, setting the lower bounds on $\rho_\inf^{1/4}$. []{data-label="rho plot"}](rho_triple.eps "fig:"){width="70mm"} Next, substituting the obtained $\rho_\inf$ into eq. , we find $B_{\rm eff}$ written in terms of $\Omega_\em(\eta_r)$ and $\mcP_\zeta^\em$ as $$\begin{aligned} B_{\rm eff}(\eta_{\rm now}) \approx& 10^{-13}\G\, \frac{\sqrt{H_n/F_n}}{4n+1} \Omega_\em^{1/2}(\eta_r) \left(\frac{T_r}{1\GeV}\right)^{-1} \left(\frac{g_*}{100}\right)^{-\frac{1}{3}} \left(\frac{k_i}{1\Mpc^{-1}}\right)^{n-2}, \label{max B BR} \\ B_{\rm eff}(\eta_{\rm now}) \approx& 10^{-14}\G\, \frac{1}{4n+1} \frac{H_n^{1/2}}{\mathcal{G}_n^{1/4}}\left(\frac{\mcP_\zeta^\em}{2.2\times10^{-9}}\right)^{\frac{1}{4}} \notag\\ &\times \left(\frac{T_r}{1\GeV}\right)^{-1} \left(\frac{g_*}{100}\right)^{-\frac{1}{3}} \left(\frac{k_i}{1\Mpc^{-1}}\right)^{n-\frac{5}{4}} \left(\frac{k_{\rm CMB}}{0.05\Mpc^{-1}}\right)^{-\frac{3}{4}}, \label{max B CP}\end{aligned}$$ where we have set $L=1 \, \Mpc$. Note that $H_n$ depends on $k_i$, as discussed below eq. . To avoid the backreaction and the curvature perturbation problems, it is required that $\Omega_\em(\eta_r)<1$ and $\mcP_\zeta^\em<\mcP_\zeta^\obs$, and these conditions lead to upper bounds on $B_{\rm eff}$. In fig. \[T=1,k=1\], we show the bounds on $B_{\rm eff}$ for $T_r=1 \, \GeV, g_*=100$ and $k_i=1 \, \Mpc^{-1}$. ![ The effective strength of the magnetic field predicted by our model. In this figure, we fix the parameters as $T_r=1\GeV, k_i=1\Mpc^{-1}$ and $g_*=100$.
The solid lines represent the cases with $\Omega_\em(\eta_r)=1, 10^{-1}, 10^{-2}, 10^{-3}$ and $10^{-4}$ from top to bottom. The orange shaded region is excluded by the curvature perturbation problem. On the orange dashed line, the curvature perturbation induced by the electric fields is as large as the observed one, $\mcP_\zeta^\em = 2.2\times 10^{-9}$. The blue dotted line shows the lower bound inferred from blazar observations, $B_{\rm eff} \gtrsim 10^{-15}\G$. The viable region in which sufficient magnetic fields can be generated without the backreaction and curvature perturbation problems exists for $B_{\rm eff}\lesssim10^{-14}\G$. []{data-label="T=1,k=1"}](Main_result_new.eps){width="90mm"} There exists a viable region where sufficiently strong magnetic fields, $B_{\rm eff} (\eta_{\rm now})>10^{-15}\G$, are generated without the backreaction or curvature perturbation problems. The inflationary energy scale corresponding to a given $n$ and $\Omega_\em(\eta_r)$ can be derived from eq.  (or found in fig. \[rho plot\]). As an illustrative example, the following set of parameters and predictions is obtained in our model: $$\begin{aligned} n&=3, &T_r&=1\GeV,& k_i&=1\Mpc^{-1} \notag\\ \Omega_\em(\eta_r)&= 1.8 \times 10^{-3},& \rho_\inf^{1/4}&\approx 1.1\times 10^5 \, \GeV, & B_{\rm eff}(\eta_{\rm now})&\approx 10^{-15} \, \G. \label{fiducial values}\end{aligned}$$ Note that the generated magnetic fields have a scale-invariant spectrum for $k\gtrsim k_i$ in this case of $n=3$. Now we explore cases with different $T_r$ and $k_i$. If the reheating temperature $T_r$ increases, the hierarchy between the electric and magnetic fields widens, since $\mcP_B/\mcP_E(\eta_r)\propto |k\eta_r|^2\propto T_r^{-2}$, and thus the constraints get tighter. In fact, eqs.  and indicate that the maximum $B_{\rm eff}$ decreases in proportion to $T_r^{-1}$. On the other hand, if one makes $k_i$ larger (i.e.
pushes the peak scale of the electromagnetic fields to smaller scales), the two constraints become weaker, since the IR cutoff scale goes higher, and thus stronger magnetic fields can be obtained. In fig. \[Tk change\], we show the cases with larger $k_i$. Eqs.  and imply that the curvature perturbation constraint becomes irrelevant in comparison with the backreaction constraint for sufficiently large $k_i$. This is because the characteristic scale of the electromagnetic fields goes away from the CMB scale. Furthermore, since the hierarchy between electric and magnetic fields is reduced, the backreaction problem is also relaxed. This time, however, a simple scaling argument is difficult because a varying $k_i$ (or $\eta_i$) changes the numerical integration of $H_s$ in $B_{\rm eff}$ (see eq.  and fig. \[Hs plot\]). Comparing fig. \[T=1,k=1\] and the left panel of fig. \[Tk change\], one can see how the viable region broadens by pushing $k_i$ from $1 \, \Mpc^{-1}$ to $10 \, \Mpc^{-1}$. In the right panel of fig. \[Tk change\], we show the present values of the effective field strength $B_{\rm eff}(\eta_{\rm now})$ with the backreaction constraint saturated ($\Omega_\em(\eta_r)=1$), with $k_i=1 \, {\rm kpc}^{-1}$ for $T_r=1, 10, 10^2$ and $10^3 \, \GeV$. ![[**(Left panel)**]{} The case with $k_i = 10\Mpc^{-1}$. The other parameters and the plot scheme are the same as fig. \[T=1,k=1\]. Since the peak scale of the electromagnetic fields becomes smaller, the hierarchy is further reduced and the constraints are weaker than in fig. \[T=1,k=1\]. In particular, the constraint from the curvature perturbation becomes irrelevant. [**(Right panel)**]{} The case with $k_i= 1{\rm kpc}^{-1}$. The solid lines show $\Omega_\em(\eta_r)=1$ and the reheating temperature is fixed as $T_r = 1, 10, 10^2$ and $10^3\GeV$ from top to bottom. One can see that the reheating temperature cannot exceed $1$TeV for $k_i \le 1 \, {\rm kpc}^{-1}$.
[]{data-label="Tk change"}](Beff_contour_k10_new.eps "fig:"){width="70mm"} ![[**(Left panel)**]{} The case with $k_i = 10\Mpc^{-1}$. The other parameters and the plot scheme are the same as fig. \[T=1,k=1\]. Since the peak scale of the electromagnetic fields becomes smaller, the hierarchy is further reduced and the constraints are weaker than in fig. \[T=1,k=1\]. In particular, the constraint from the curvature perturbation becomes irrelevant. [**(Right panel)**]{} The case with $k_i= 1{\rm kpc}^{-1}$. The solid lines show $\Omega_\em(\eta_r)=1$ and the reheating temperature is fixed as $T_r = 1, 10, 10^2$ and $10^3\GeV$ from top to bottom. One can see that the reheating temperature cannot exceed $1$TeV for $k_i \le 1 \, {\rm kpc}^{-1}$. []{data-label="Tk change"}](Beff_contour_T.eps "fig:"){width="70mm"} ![The reheating temperature as a function of $k_i$. Here we fix $B_{\rm eff} (\eta_{\rm now}) = 10^{-15} \, {\rm G}$, choose $n=3$, which gives a scale-invariant magnetic spectrum, and show a few cases for different values of $\Omega_{\rm em} (\eta_r)$. As can be seen from , a higher $T_r$ can be achieved by increasing $k_i$ for a given $B_{\rm eff}$ and $\Omega_{\rm em}$. []{data-label="fig:Tk_ki"}](Treh-vs-ki){width="80mm"} Finally, let us comment on the allowed maximum reheating temperature in this scenario. Combining eqs. , and , one can derive an approximate expression for the reheating temperature as $$T_r \simeq 5.6\times 10^2 \, \GeV \, \frac{e^{-0.3825n+0.07415n^2-0.00425n^3}}{4n+1} \, \Omega_\em^{1/2} \left(\frac{k_i}{1\Mpc^{-1}}\right)^{1/2} \left(\frac{B_{\rm eff}}{10^{-15}\G}\right)^{-1} \left(\frac{g_*}{100}\right)^{-\frac{1}{3}}. \label{Tr approx}$$ It should be noted that this expression is valid for $k_i L\gtrsim1$ and $3\le n \le 8$. As seen from this expression and fig. \[fig:Tk\_ki\], a higher reheating temperature can be achieved for larger values of $k_i$.
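As a numerical illustration, a minimal sketch evaluating the approximate expression for $T_r$ above (the function name and the parameter choices are ours, not from the text; $B_{\rm eff}$ is normalized to its fiducial value $10^{-15}\,$G):

```python
import math

def T_r(n, Omega_em, k_i_Mpc, B_eff_G, g_star=100.0):
    """Approximate reheating temperature in GeV, eq. (Tr approx).

    Valid for k_i*L >~ 1 and 3 <= n <= 8.  B_eff_G is the present
    effective field strength in Gauss, normalized here to 1e-15 G.
    """
    expo = math.exp(-0.3825 * n + 0.07415 * n**2 - 0.00425 * n**3)
    return (5.6e2 * expo / (4 * n + 1) * math.sqrt(Omega_em)
            * math.sqrt(k_i_Mpc) / (B_eff_G / 1e-15)
            * (g_star / 100.0) ** (-1.0 / 3.0))

# n = 3, saturated backreaction, k_i = 1 Mpc^-1, B_eff = 1e-15 G
print(T_r(3, 1.0, 1.0, 1e-15))    # ≈ 24 GeV

# T_r ∝ k_i^{1/2} at fixed n, Omega_em and B_eff
print(T_r(3, 1.0, 100.0, 1e-15) / T_r(3, 1.0, 1.0, 1e-15))   # ≈ 10
```

Raising $k_i$ by two orders of magnitude thus raises the allowed $T_r$ by one order of magnitude, consistent with the scaling stated above.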
In this figure, we fix $B_{\rm eff} (\eta_{\rm now}) = 10^{-15} \, {\rm G}$ and $n=3$, which leads to a scale-invariant magnetic spectrum with the observed amplitude, and take a few different values of $\Omega_{\rm em}(\eta_r)$. Interestingly, given the $k_i$ dependence of $H_n$ as in , one can see $T_r \propto k_i^{1/2}$, independent of the value of $n$, in the range $k_i L \gtrsim 1$. The constraint from the curvature perturbation quickly becomes irrelevant as $k_i$ goes further away from the CMB scales. Hence the possible range of reheating temperature is not severely limited in this model.[^12] Conclusion {#Summary} ========== We investigate the viability of successful magnetogenesis in the model of the electromagnetic field coupled to a non-inflaton scalar field $\chi$ through its kinetic term in the primordial universe.[^13] The time variation of the kinetic function $I(\chi)$ transfers the energy of $\chi$ into the electromagnetic field and thus leads to the production of photons. We assume that $I(\chi)$ evolves in time only for a fixed period, beginning during inflation and continuing until the completion of reheating; after reheating, the electromagnetic field evolves adiabatically. The produced magnetic field originates from the vacuum fluctuations during inflation, and its scale dependence differs among different modes, which can be classified into four cases: (i) the modes that are always outside the horizon from the onset of $I(\chi)$ until reheating, (ii) those that cross the horizon after the onset of $I(\chi)$ and stay super-horizon until reheating, (iii) those that both cross and re-enter the horizon between the onset of $I(\chi)$ and the time of reheating, and (iv) those that have not exited the horizon by the end of inflation and thus never do.
Our aim is to search for a successful scenario for the generation of large-scale magnetic fields to account for the blazar observations, preserving all the computational and observational consistencies, namely respecting the backreaction and the CMB constraints within a weak-coupling regime. Although a red-tilted magnetic spectrum is preferred for large-scale fields, this, combined with the requirement to avoid the strong-coupling problem, results in a much larger production of electric fields. Hence imposing the constraints on them largely suppresses the amplitude of the corresponding magnetic fields. It has been known to be particularly difficult, if not impossible, to generate $\sim 1 \, {\rm Mpc}$ scale magnetic fields solely from inflation, unless the inflationary energy scale is extremely low, around the BBN bound. In view of the blazar observations, we choose parameters such that the scale of the peak amplitude is of ${\cal O}(1 \, {\rm Mpc})$ or smaller, and then the scales relevant for the CMB observations are much larger, corresponding to the modes (i) in the previous paragraph. In this case, the constraints from the CMB curvature perturbation are relatively loose, while the produced magnetic fields keep evolving after inflation, preventing their dilution relative to the background energy density during the period of inflaton oscillation. We compute the effective amplitude of the present magnetic field, imposing the constraints from the CMB fluctuations and from the backreaction on the background dynamics, in this rather optimal scenario. To compute the curvature perturbation induced by the produced electromagnetic field, we include all the relevant contributions, namely those coming from the direct coupling $I^2 F^2$ and from the gravitational interactions, up to the leading order.
We restrict our attention to the two-point correlator of the curvature perturbation and require the sourced part of its power spectrum to be smaller than the observed quasi scale-invariant value ${\cal P}_\zeta^{\rm obs} \cong 2.2 \times 10^{-9}$, since the sourced mode is strongly scale-dependent. The shape of the induced non-Gaussianity is different from that of the templates used in the CMB analysis, and the existing bounds on non-Gaussianity in the Planck papers are not directly applicable. Therefore a dedicated data analysis would be necessary to provide a constraint on our mechanism of magnetogenesis from higher-order correlation functions, which is beyond the scope of this paper. We find a viable parameter space for the generation of magnetic fields with amplitudes sufficient to account for the spectrum of the $\gamma$ rays from distant blazars. This is, to our knowledge, the first example of successful large-scale magnetic fields of primordial origin in the $I^2 F^2$ model with inflationary energy scales away from the BBN bound, respecting all the observationally relevant constraints consistently in the weak-coupling regime. The constraint from the curvature perturbation places the strongest bounds if the peak scale of the produced magnetic field is ${\cal O}(1 \, {\rm Mpc})$. The smaller the peak scale is, however, the looser both the backreaction and the CMB constraints are, as the power on larger scales is more suppressed. We also verify that the conductivity induced by the charged particles that may be present even before the completion of reheating does not prevent the evolution of the magnetic fields, since the effective electromagnetic coupling $e / I$ is much smaller than unity before this time. Our results also imply that the reheating temperature for successful magnetogenesis has a strong relationship with the peak scale of the magnetic field.
If one allows a small correlation length of the magnetic field, still compatible with the observed amplitude, then a rather large range of reheating temperature can be realized. While our scenario succeeds in generating magnetic fields large enough for blazar observations, it still lacks a concrete model. We have assumed a simple time dependence of the background $I(a)$, but we have not specified the functional form of $I(\chi)$ or of the potential $U(\chi)$ to realize the desired time dependence. This model building would require further investigation and is beyond the scope of our current goal, which is to provide a successful scenario for primordial magnetogenesis. In a realistic scenario, it would not be surprising if the time dependence of $I$ changed at the end of inflation, and then the magnetic-field spectrum would have a more non-trivial shape. Also the decay time of $\chi$ would not necessarily coincide with that of the inflaton; under some circumstances, $\chi$ might behave as a curvaton, which would be an interesting possibility. The construction and analysis of a realistic model, as well as potential constraints from higher-order correlations, are important issues that we would like to come back to in future studies. The authors are grateful to Takeshi Kobayashi, Sabino Matarrese and Marco Peloso, Jun’ichi Yokoyama and Shuichiro Yokoyama for useful discussions. This work is supported in part by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan. The work of TF has been supported in part by the JSPS Postdoctoral Fellowships for Research Abroad (Grant No. 27-154). Derivation of the Electromagnetic Power Spectra {#Derivation} =============================================== In this appendix, we solve the equations of motion for the mode function $\mcA_k$, namely eqs. -, and obtain the electromagnetic power spectra, $\mcP_E$ and $\mcP_B$.
First we assume the gauge field is in the Bunch-Davies vacuum state for $a<a_i$, $$I_i\mcA_k^\BD (a<a_i) =\frac{e^{-ik \left( \eta - \eta_i \right)} }{\sqrt{2k}} \; , \label{BD vac}$$ where the constant phase is added for convenience. Solving eqs.  and one finds the general solutions are given by $$\begin{aligned} &I\mcA_k^\inf(\eta) =\sqrt{-\eta}\left[ C_1 J_{n-1/2} (-k\eta)+ C_2 Y_{n-1/2} (-k\eta)\right], \label{inf general solutions} \\ &I\mcA_k^\osc (\eta) = \sqrt{\eta} \left[ D_1 J_{2n+1/2} (k\eta) + D_2 Y_{2n+1/2}(k\eta) \right], \label{osc general solutions}\end{aligned}$$ where $J_\nu(x)$ and $Y_\nu(x)$ are the Bessel functions of the first and second kind, respectively. Here $C_1, C_2, D_1$ and $D_2$ are constants which will be determined by the junction conditions. By using the junction condition between eqs.  and at $\eta=\eta_i$, [^14] $$\mcA_k^\BD(\eta_i) = \mcA_k^\inf(\eta_i), \qquad \partial_\eta\mcA_k^\BD(\eta_i) = \partial_\eta\mcA_k^\inf(\eta_i),$$ one finds $C_1$ and $C_2$ are given by $$\begin{aligned} C_1 &= - \frac{i\pi}{2\sqrt{2}}\sqrt{-k\eta_i} \left[ Y_{n-1/2}(-k\eta_i) -i Y_{n+1/2}(-k\eta_i)\right], \label{fullC1} \\ C_2 &= \frac{i\pi}{2\sqrt{2}}\sqrt{-k\eta_i} \left[ J_{n-1/2}(-k\eta_i) -i J_{n+1/2}(-k\eta_i)\right].\end{aligned}$$ Next, one can connect eq.  to eq.  by using the junction condition at the end of inflation, $a=a_e$. It should be noted that the conformal time $\eta$ is not continuous there. Requiring that the scale factor $a$ and the Hubble parameter $H$ are continuous, one finds $\eta$ jumps as $$\eta_e = -\frac{1}{a_e H_\inf} \quad{\rm (end\ of\ inflation)} \quad\Longrightarrow\quad \tilde{\eta}_e= \frac{2}{a_e H_\inf}\quad {\rm (onset\ of\ oscillation)}.$$ Thus the junction condition is $$\mcA_k^{\inf}(\eta_e) = \mcA_k^{\rm MD}(\tilde{\eta}_e), \qquad \partial_\eta\mcA_k^{\inf}(\eta_e) = \partial_\eta\mcA_k^{\rm MD}(\tilde{\eta}_e),$$ and one can obtain $D_1$ and $D_2$.
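As a consistency check of the matching at $\eta=\eta_i$: by the Bessel Wronskian $J_{\nu+1}(z)Y_\nu(z)-J_\nu(z)Y_{\nu+1}(z)=2/(\pi z)$, the coefficients $C_1$ and $C_2$ above reproduce the Bunch-Davies amplitude $1/\sqrt{2k}$ of $I\mcA_k$ at the onset time. A short scipy sketch (the parameter values are illustrative, not from the text):

```python
import numpy as np
from scipy.special import jv, yv

n, k, eta_i = 3.0, 1.0, -5.0   # illustrative parameters, not from the text
x = -k * eta_i
nu = n - 0.5                   # Bessel index of the inflationary solution

pref = 1j * np.pi / (2.0 * np.sqrt(2.0)) * np.sqrt(x)
C1 = -pref * (yv(nu, x) - 1j * yv(nu + 1, x))
C2 =  pref * (jv(nu, x) - 1j * jv(nu + 1, x))

def IA_inf(eta):
    """I * A_k during inflation, eq. (inf general solutions)."""
    z = -k * eta
    return np.sqrt(-eta) * (C1 * jv(nu, z) + C2 * yv(nu, z))

# the matched solution equals the Bunch-Davies value 1/sqrt(2k) at eta_i
print(abs(IA_inf(eta_i) - 1.0 / np.sqrt(2.0 * k)))   # ≈ 0 (machine precision)
```

The same check passes for any $n$ and $\eta_i$, since the Wronskian identity holds for all orders.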
The calculation is straightforward, while the full expressions of $D_1$ and $D_2$ are complicated. Since we are interested only in the modes which exit the horizon during inflation, we show their asymptotic form in the limit $|k \eta_e|\ll 1$ (we use the full expression to plot Fig. \[EM spectra\]); $$\begin{aligned} D_1 &\simeq - \frac{2^n}{\pi} \Gamma(2n+1/2)\Gamma(n+1/2) |k\eta_e|^{-3n} C_2, \label{D1} \\ D_2 &\simeq 2^{n-2} \frac{3\Gamma(n-1/2)}{\Gamma(2n+3/2)}|k\eta_e|^{1+n}C_2 -\frac{2^{-n}\pi |k\eta_e|^{3n}C_1}{\Gamma(2n+1/2)\Gamma(n+1/2)}. \label{D2}\end{aligned}$$ The second term in eq.  is important only for very large scale modes, $|k\eta_e|(\eta_i/\eta_e)^{2n}\lesssim 1$, and thus we ignore it. Now we can obtain the electromagnetic power spectra by substituting eqs.  and into eq. . Let us see the results in order. During inflation, before $I$ starts varying: $a<a_i$ ---------------------------------------------------- Substituting the Bunch-Davies vacuum eq. , one finds $$\mcP_E^{\rm BD} =\mcP_B^{\rm BD}= \frac{H_\inf^4}{2\pi^2} |k\eta|^4.$$ In the vacuum, the magnetic and the electric spectra are identical, and they are blue-tilted in proportion to $k^4$. Even after $I$ starts varying, the electromagnetic spectra on sub-horizon scales do not change. This is because the $k^2$ term dominates the $\partial_\eta^2 I/I$ term in eqs.  and , and thus the sub-horizon modes do not feel the time-variation of $I$. Hence, hereafter we focus on the super-horizon modes. During inflation, after $I$ starts varying: $a_i<a<a_e$ ------------------------------------------------------- Substituting eq.  into eq.
, one finds $$\begin{aligned} \mcP_E^\inf &= \frac{H_\inf^4}{\pi^2} |k\eta|^5 \left| C_1 J_{n+1/2}(-k\eta)+C_2 Y_{n+1/2}(-k\eta)\right|^2, \quad ({\rm exact})\notag\\ &\xrightarrow{|k\eta|\ll 1} \frac{2^{2n+1}}{\pi^4}\Gamma^2(n+1/2) |C_2|^2 H_\inf^4 |k\eta|^{4-2n}, \quad ({\rm super\ horizon}) \notag\\ &\simeq \left\{ \begin{array}{lc} \dfrac{2^{2n-1}}{\pi^3}\Gamma^2(n+1/2)H_\inf^4 |k\eta|^{4-2n}, & \quad(|k\eta_i|\gg 1)\\ \dfrac{H_\inf^4}{2\pi^2}\left(\dfrac{\eta_i}{\eta}\right)^{2n}|k\eta|^4, & \quad (|k\eta_i|\ll1) \end{array} \right. \quad (C_2\ {\rm approx.}). \label{PEinf Appendix}\end{aligned}$$ $$\begin{aligned} &\mcP_B^\inf = \frac{H_\inf^4}{\pi^2} |k\eta|^5 \left| C_1 J_{n-1/2}(-k\eta)+C_2 Y_{n-1/2}(-k\eta)\right|^2, \quad ({\rm exact})\notag\\ &\xrightarrow{|k\eta|\ll 1} \dfrac{2^{2n-1}}{\pi^4}\Gamma^2(n-1/2) |C_2|^2 H_\inf^4 |k\eta|^{6-2n}, \quad ({\rm super\ horizon}) \notag\\ &\simeq \left\{ \begin{array}{lc} \dfrac{2^{2n-3}}{\pi^3}\Gamma^2(n-1/2)H_\inf^4|k\eta|^{6-2n}, & \quad (|k\eta_i|\gg1) \\ \dfrac{H_\inf^4}{2\pi^2(2n-1)^2}\left(\dfrac{\eta_i}{\eta}\right)^{2n}|k\eta|^{6}, & \quad(|k\eta_i|\ll 1) \end{array} \right. \quad (C_2\ {\rm approx.}).\end{aligned}$$ In both equations, the first line shows the exact expression, the second line shows the super-horizon asymptotic formula, and in the third line the asymptotic forms of $C_2$, eqs.  and , are used. Strictly speaking, right after $a=a_i$, $\mcA_k$ is dominated by a constant term for a short interval, $\eta_i \le \eta < \eta_c$ with $|k\eta_c| \equiv [(2 n -1)\pi^2]^{\frac{-1}{2 n -1}}|k\eta_i|^{\frac{2 n}{2 n -1}}$. That term gives an additional contribution to $\mcP_B^\inf$. However, we suppress it in the approximated expressions because it quickly becomes subdominant for $k\sim k_i$ and is not significant for estimating the final amplitude of the magnetic fields. Inflaton oscillating phase with varying $I$: $a_e<a<a_r$ -------------------------------------------------------- Substituting eq. 
into eq. , one finds $$\begin{aligned} \mcP_E^\osc &= \frac{H_\inf^4}{2^4\pi^2} |k\eta|^5 \left(\frac{\tilde{\eta}_e}{\eta}\right)^{12} \left|D_1 J_{2n-1/2}(k\eta)+D_2 Y_{2n-1/2}(k\eta)\right|^2, \quad ({\rm exact})\notag\\ &\xrightarrow{|k\eta|\ll 1} \frac{2|D_1|^2}{\pi^2\Gamma^2(2n+1/2)} H_\inf^4 |k\eta_e|^{4(n+1)}\left(\frac{\eta}{\tilde{\eta}_e}\right)^{4(n-2)}, \quad ({\rm super\ horizon}) \notag\\ &\simeq \left\{ \begin{array}{lc} \dfrac{2^{2n-1}}{\pi^3}\Gamma^2(n+1/2)H_\inf^4 |k\eta_e|^{4-2n}\left(\dfrac{\eta}{\tilde{\eta}_e}\right)^{4(n-2)}, & \quad(|k\eta_i|\gg 1)\\ \dfrac{H_\inf^4}{2\pi^2}|k\eta_e|^4\left(\dfrac{\eta_i}{\eta_e}\right)^{2n}\left(\dfrac{\eta}{\tilde{\eta}_e}\right)^{4(n-2)}, & \quad (|k\eta_i|\ll1) \label{IPEoscC2} \end{array} \right. \quad (C_2\ {\rm approx.}).\end{aligned}$$ $$\begin{aligned} & \mcP_B^\osc = \frac{H_\inf^4}{2^4\pi^2} |k\eta|^5 \left(\frac{\tilde{\eta}_e}{\eta}\right)^{12} \left| D_1 J_{2n+1/2}(k\eta)+D_2 Y_{2n+1/2}(k\eta)\right|^2, \quad ({\rm exact})\notag\\ &\xrightarrow{|k\eta|\ll 1} \dfrac{2|D_1|^2}{\pi^2\Gamma^2(2n+3/2)}H_\inf^4 |k\eta_e|^{4n+6} \left(\frac{\eta}{\tilde{\eta}_e}\right)^{4n-6} , \quad({\rm super\ horizon}) \notag\\ &\simeq \left\{ \begin{array}{lc} \dfrac{2^{2n+1}\Gamma^2(n+1/2)}{\pi^3(4n+1)^2}H_\inf^4|k\eta_e|^{6-2n}\left(\dfrac{\eta}{\tilde{\eta}_e}\right)^{4n-6}, & \quad (|k\eta_i|\gg1) \\ \dfrac{2H_\inf^4}{\pi^2(4n+1)^2}|k\eta_e|^{6}\left(\dfrac{\eta_i}{\eta_e}\right)^{2n} \left(\dfrac{\eta}{\tilde{\eta}_e}\right)^{4n-6}, & \quad(|k\eta_i|\ll 1) \end{array} \right. \quad (C_2\ {\rm approx.}). \label{IPBosc}\end{aligned}$$ Again we ignore the constant contribution to $\mcA_k^\osc$ on super-horizon scales, which becomes dominant for a short interval but is not relevant to the final result. Comparing these spectra, one can explicitly confirm that the hierarchical relation eq. , $\mcP_B \sim |k\eta|^2 \mcP_E$, holds during both inflation and the inflaton oscillating phase.
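This hierarchy can also be checked directly from the exact Bessel expressions during inflation: on super-horizon scales $\mcP_B^\inf/\mcP_E^\inf \to |k\eta|^2/(2n-1)^2$, using $\Gamma(n+1/2)=(n-1/2)\Gamma(n-1/2)$. A numerical sketch (illustrative parameter values, not from the text):

```python
import numpy as np
from scipy.special import jv, yv

n, k, eta_i = 3.0, 1.0, -5.0   # illustrative, as in the matching above
x = -k * eta_i
nu = n - 0.5

pref = 1j * np.pi / (2.0 * np.sqrt(2.0)) * np.sqrt(x)
C1 = -pref * (yv(nu, x) - 1j * yv(nu + 1, x))
C2 =  pref * (jv(nu, x) - 1j * jv(nu + 1, x))

z = 1e-3                        # |k eta| deep in the super-horizon regime
# the common H^4 |k eta|^5 / pi^2 prefactor cancels in the ratio
PE = abs(C1 * jv(nu + 1, z) + C2 * yv(nu + 1, z)) ** 2
PB = abs(C1 * jv(nu, z)     + C2 * yv(nu, z)) ** 2
print((PB / PE) / (z**2 / (2 * n - 1) ** 2))   # → 1 in the limit |k eta| → 0
```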
The calculation of $\zeta_\em$ {#app:zeta-em} ============================== We have discussed the constraints due to the CMB observations in Subsection \[Curvature perturbation problem\] of the main text, where only the final results are reported. In this appendix, we present the detailed calculation of the curvature perturbations induced by the produced photon fields. To this end, we expand the inflaton ($\phi$) and spectator ($\chi$) fields as $$\phi = \phi_0 + \delta\phi \; , \quad \chi = \chi_0 + \delta\chi \; ,$$ and decompose the scalar part of the metric in the spatially flat gauge, $$g_{00} = - a^2 \left( 1 + 2 \, \Phi \right) \; , \quad g_{0i} = a^2 \, \partial_i B \; , \quad g_{ij} = a^2 \, \delta_{ij} \; .$$ Due to the assumption that the photon field $A_\mu$ has no intrinsic background value and to the fact that it enters the action only quadratically, the effects of the produced fields on the curvature perturbations are formally second-order in the perturbative expansion. To focus on their effects, we therefore treat all the scalar modes as second order and the gauge field as first order, namely, $$\delta\phi \rightarrow \delta_2 \phi \; , \quad \delta\chi \rightarrow \delta_2 \chi \; , \quad \Phi \rightarrow \Phi_2 \; , \quad B \rightarrow B_2 \; , \quad A_\mu \rightarrow \delta_1 A_\mu \; ,$$ where the subscripts $(1,2)$ denote the perturbative orders. Having noted this, we suppress them in the following calculations without ambiguity. Expanding the action up to second order, we find that the variations with respect to $\Phi$ and $B$ only provide constraint equations. Thus the true equations of motion are those of $\delta\phi$, $\delta\chi$ and $A_\mu$.
The one for $A_\mu$ is given in , and those for $\delta\phi$ and $\delta\chi$ are found as $$\begin{aligned} && \left( \partial_{\eta}^2 - \nabla^2 \right) Q_i + {\cal M}_{ij}^2 Q_j = {\cal S}_i^{d} + {\cal S}_i^{g} \; , \label{eom-dphidchi}\end{aligned}$$ where $Q_i = \{ a \, \delta\phi ,\, a \, \delta\chi \}$, ${\eta}$ is the conformal time, and $$\begin{aligned} {\cal M}_{ij}^2 &=& - \frac{a''}{a} \delta_{ij} + a^2 V_{,ij} + \left( 3 - \frac{\phi_k' \phi_k'}{2 M_p^2 {\cal H}^2} \right) \frac{\phi_i' \phi_j'}{M_p^2} + \frac{a^2}{M_p^2 {\cal H}} \left( \phi_i' V_{,j} + \phi_j' V_{,i} \right) \; , \nonumber \\ {\cal S}_i^{d} & = & a^3 \, \frac{\partial_\chi I}{I} \, \left( \bm{E}^2 - \bm{B}^2 \right) \left( \begin{array}{c} 0 \\ 1 \end{array} \right)_i \; , \nonumber\\ {\cal S}_i^{g} & = & - \frac{a^3 \phi_i'}{2 M_{\rm Pl}^2 {\cal H}} \, \left\{ \frac{\bm{E}^2 + \bm{B}^2}{2} - \frac{{\cal H}^2}{a^4 \phi_i'{}^2} \, \nabla^{-2} \partial_{\eta} \left[ \frac{a^4 \phi_i'{}^2}{{\cal H}^2} \, \bm{\nabla} \cdot \left( \bm{E} \times \bm{B} \right) \right] \right\} \; , \label{source-app}\end{aligned}$$ with $\phi_i = \{ \phi_0 , \, \chi_0 \}$, prime denoting derivatives with respect to ${\eta}$, and ${\cal H} \equiv a'/a$. The first, second and third equations of correspond, respectively, to the mass mixing, the direct coupling between the photon and $\chi$, and the gravitational interaction. Here we have defined the electric and magnetic fields as $$\bm{E} \equiv - \frac{\langle I \rangle}{a^2} \, \partial_{\eta} \bm{A} \; , \quad \bm{B} \equiv \frac{\langle I \rangle}{a^2} \, \bm{\nabla} \times \bm{A}$$ where the bracket denotes background values. The equations of motion, , together with , are exact at this order. Since we are interested in the regime of parameters where the electric contribution is dominant over the magnetic one, we neglect all the terms coming from the latter, except for the one in the term of Poynting vector $\bm{E} \times \bm{B}$. 
Also in this regime the electric field is always monotonically increasing both during and after inflation until reheating, and thus we solve during the period of inflaton oscillation. To utilize the Green function method, we first find the homogeneous solution of . Fourier-transforming $\delta\phi$ and $\delta\chi$ as $$\delta\phi({\eta}, \bm{x}) = \int \frac{d^3k}{(2\pi)^3} \, {\rm e}^{i \, \bm{k} \cdot \bm{x}} \, \frac{\varphi_{\bm k} ({\eta})}{a({\eta})} \; , \quad \delta\chi({\eta}, \bm{x}) = \int \frac{d^3k}{(2\pi)^3} \, {\rm e}^{i \, \bm{k} \cdot \bm{x}} \, \frac{{\cal X}_{\bm k} ({\eta})}{a({\eta})} \; ,$$ and $A_i$ as in , the homogeneous equations of can be approximated as,[^15] $$\varphi_{\bm k}'' + \left( k^2 + a^2 V_{\phi\phi} \right) \varphi_{\bm k } \simeq 0 \; , \quad {\cal X}_{\bm k}'' + \left( k^2 - \frac{a''}{a} \right) {\cal X}_{\bm k} \simeq 0 \; . \label{eom-free}$$ Since it suffices to consider the period of inflaton oscillation for our current purpose, we solve these equations in this era. For later use, we remind readers that for a given equation of state of the universe the scale factor is related to conformal time as $a= \frac{2}{1+3w} \frac{1}{{\eta} H} \propto t^{2/3(1+w)} \propto {\eta}^{2/(1+3w)}$. 
Defining the Fourier transform of the source terms in as $\hat{\cal S}_i^{d/g} \equiv \int d^3x \, {\rm e}^{-i \bm{k} \cdot \bm{x}} \, {\cal S}_i^{d/g}$, they can be approximated as, $$\begin{aligned} \hat{\cal S}_i^d \left( {\eta} , \bm k \right) &\simeq& 2 \, a^3 \, \frac{\partial_\chi I}{I} \, \delta\hat{\rho}_E \, \left( \begin{array}{c} 0 \\ 1 \end{array} \right)_i \; , \nonumber\\ \hat{\cal S}_i^g \left( {\eta} , \bm k \right) & \simeq & -\frac{a^3 \phi_i'}{2 M_{\rm Pl}^2 {\cal H}} \, \left[ \delta\hat{\rho}_E - \frac{i \, k_j}{k^2} \, \frac{{\cal H}^2}{a^4 \phi_i'{}^2} \, \partial_{\eta} \left( \frac{a^4 \phi_i'{}^2}{{\cal H}^2} \, \delta\hat{P}_j \right) \right] \; , \label{source-FT}\end{aligned}$$ where the electric energy density and the Poynting vector are $$\begin{aligned} \delta \hat{\rho}_E \left( {\eta} , \bm{k} \right) &\equiv& \frac{1}{2} \int \frac{d^3p}{(2\pi)^3} \, \hat{E}_i \left( {\eta} , \bm p \right) \, \hat{E}_i \left( {\eta} , \bm{k} - \bm{p} \right) \; , \nonumber\\ \delta\hat{P}_i \left( {\eta} , \bm{k} \right) &\equiv& \int \frac{d^3p}{(2\pi)^3} \, \epsilon_{ijk} \, \hat{E}_j \left( {\eta} , \bm p \right) \, \hat{B}_k \left( {\eta} , \bm{k} - \bm{p} \right) \; ,\end{aligned}$$ with $\hat{E}_i$ and $\hat{B}_i$ being the Fourier-transformed $\bm{E}$ and $\bm{B}$ fields, respectively. Note that although the term of the Poynting vector in the source $\hat{\cal S}_i^g$ appears divergent in the limit $k \rightarrow 0$, physical spectra do not suffer IR divergence, which we shall show in Subsection \[subapp:total\]. To compute the curvature perturbation resulting from the produced photon field, we have the relation $$\zeta = - \frac{H}{\dot\rho} \, \delta\rho \; , \label{rel-zetarho}$$ in the spatially flat gauge. 
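For later convenience, this relation can be made explicit for the matter-dominated era relevant below (a one-line intermediate step, using only the continuity equation with $w=0$): $$\dot\rho = -3H\left(1+w\right)\rho \;\;\xrightarrow{\;w=0\;}\;\; \dot\rho = -3H\rho \quad\Longrightarrow\quad \zeta = -\frac{H}{\dot\rho}\,\delta\rho = \frac{\delta\rho}{3\rho}\;.$$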
The density perturbation $\delta\rho$ consists of three contributions: the bare electric energy density $\delta\rho_E$ (with negligible magnetic component), and the density perturbations of the spectator and inflaton fields, $\delta\rho_\chi$ and $\delta\rho_\phi$, respectively, sourced by the electromagnetic fields. Their formal expressions are written as, up to the relevant orders, $$\delta\rho_E = \frac{\bm{E}^2}{2} \; , \quad \delta\rho_\chi = \dot\chi_0 \, \delta\dot\chi + U_{,\chi} \, \delta\chi \; , \quad \delta\rho_\phi = \dot\phi_0 \, \delta\dot\phi + V_{,\phi} \, \delta\phi \; . \label{density-pert-all}$$ In the following subsections, we compute the contributions from the photon field to $\delta\chi$ and $\delta\phi$ separately, and then the total one to the curvature perturbation. For concreteness, we hereafter set $w=0$ during the phase of inflaton oscillation, that is, the inflaton oscillates in its quadratic potential $V(\phi) = m_\phi^2 \phi^2 / 2$. Contributions to spectator field perturbation --------------------------------------------- The equation of motion for the spectator field perturbation during inflaton oscillation can be obtained from the homogeneous part together with the source from the electromagnetic field, , and reads $${\cal X}_{\bm k}'' + \left[ k^2 - \frac{2}{{\eta}^2}\right] {\cal X}_{\bm k} \simeq -2 \, n \, \frac{a^3 H}{\dot\chi_0} \, \delta\hat{\rho}_E \; , \label{eom-X-source}$$ where we have set $w=0$ and used the relation $\partial_\chi I / I = \dot I / (\dot\chi_0 I) = - n H / \dot\chi_0$. Here we neglect the Planck-suppressed operators, as they make only sub-leading contributions. Note that when the spectator field is light, i.e. $U_{,\chi\chi} \ll H^2$, the background $\chi_0$ follows the approximate time dependence $\frac{9}{2} H \dot\chi_0 \simeq - U_{,\chi}$ in the matter-dominated universe.
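The slow-roll relation quoted above can be confirmed with a minimal numerical integration (a sketch with an illustrative constant slope $U_{,\chi}$; none of the numbers are from the text): in a matter-dominated background $H=2/(3t)$, the equation $\ddot\chi_0 + 3H\dot\chi_0 = -U_{,\chi}$ has the attractor $\dot\chi_0 = -U_{,\chi}\,t/3 \propto H^{-1}$, for which $\frac{9}{2}H\dot\chi_0 = -U_{,\chi}$ exactly.

```python
U_chi = 1.0                       # constant potential slope (illustrative)
t, dchi, dt = 1.0, 0.0, 1e-4      # start far from the attractor

# Euler-integrate  ddot(chi) + 3 H dot(chi) = -U_chi  with  H = 2/(3t)  (w = 0)
while t < 50.0:
    H = 2.0 / (3.0 * t)
    dchi += (-U_chi - 3.0 * H * dchi) * dt
    t += dt

H = 2.0 / (3.0 * t)
attractor_check = 4.5 * H * dchi / (-U_chi)
print(attractor_check)            # → 1, i.e. (9/2) H dchi ≈ -U_chi
```

The deviation from the attractor decays as $t^{-2}$, so the relation is reached quickly regardless of the initial $\dot\chi_0$.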
We assume this “slow roll” of $\chi_0$, and under this assumption, the time evolution of $\dot\chi_0$ is approximately $\dot\chi_0 \propto H^{-1}$. The Green function associated with ${\cal X}$ can be found by equating the homogeneous part of to $\delta \left( {\eta} - {\eta}' \right)$, giving $$G_k^\chi \left( {\eta} , {\eta}' \right) = \Theta\left( {\eta} - {\eta}' \right) \, \frac{\pi}{2} \sqrt{{\eta} {\eta}'} \left[ Y_{3/2} \left( \vert k {\eta}\vert \right) \, J_{3/2} \left( \vert k {\eta}'\vert \right) - J_{3/2} \left( \vert k {\eta}\vert \right) \, Y_{3/2} \left( \vert k {\eta}'\vert \right) \right] \; ,$$ where $\Theta$ is the Heaviside step function. The part of the solution due to the electromagnetic source can then be solved as $${\cal X}_{\bm k}^{\em} \left( {\eta} \right) = -2n \int_{-\infty}^\infty d{\eta}' \, G_k^\chi \left( {\eta} , {\eta}' \right) \, \left[ \frac{a^3H}{\dot\chi_0} \right]_{{\eta}'} \, \delta\hat\rho_E \left( {\eta}' , \bm{k} \right) \; , \label{X-sol-formal}$$ where superscript $\em$ denotes a quantity sourced by the electromagnetic field, and subscript ${\eta}'$ indicates a quantity evaluated at time $\eta'$. To evaluate the time integral, we are only interested in super-horizon modes during the period of inflaton oscillation, as the modes inside the horizon damp away quickly and leave negligible contributions. In this limit, we have $$\begin{aligned} G_k^\chi \left( {\eta} , {\eta}' \right) & \simeq & \frac{\Theta\left( {\eta} - {\eta}' \right)}{3} \sqrt{ {\eta} {\eta}'} \left[ \left( \frac{{\eta}}{{\eta}'} \right)^{3/2} - \left( \frac{{\eta}'}{{\eta}} \right)^{3/2} \right] \; ,\end{aligned}$$ $a^3H / \dot\chi_0 = {\rm const.}$, and $\delta\hat\rho_E \propto a^{2n-4} \propto {\eta}^{4n-8}$. Then is evaluated to be $${\cal X}_{\bm k}^{\em} \left( {\eta} \right) \simeq a({\eta}) \, \frac{-2n}{\left( n-2 \right) \left( 4n-5 \right)} \, \frac{\delta\hat\rho_E ( {\eta} , \bm k )}{\dot\chi_0 H} \; . 
\label{X-source-sol}$$ Notice that $\dot\chi_0 H$ is constant, and therefore the time dependence of the physical mode follows that of the electric energy density, i.e. ${\cal X}^{\em} / a \propto \delta\hat\rho_E$. Using and recalling the relation $U_{,\chi} \simeq - 9 H \dot\chi_0/2$ (see below ), we obtain, for the sourced $\delta\rho_\chi$, $$\delta\hat\rho_\chi^{\em} \simeq \frac{4n-17}{2} \, \dot\chi_0 H \delta\hat\chi^{\em} \simeq \frac{-n \left( 4n-17 \right)}{\left( n-2 \right) \left( 4n - 5 \right)} \, \delta\hat\rho_E \; , \label{drho-chi}$$ where hat denotes operators in the Fourier space. Contributions to inflaton perturbation -------------------------------------- We now turn to the contributions to inflaton perturbations from the produced electromagnetic fields. In this computation, we switch to using the physical time $t$ instead of the conformal one ${\eta}$. As will become clear below, we need to take into account the contributions from sub-leading orders in ${\cal O}(H/m_\phi)$. For this reason, we include up to the first order in $H/m_\phi$ and find $$\begin{aligned} \phi_0 &\cong& \phi_e \left( \frac{a_e}{a} \right)^{3/2} \cos\theta(t) \; , \nonumber\\ \dot\phi_0 &\cong& - m_\phi \, \phi_e \left( \frac{a_e}{a} \right)^{3/2} \left[ \sin\theta(t) + \frac{3H_{\rm inf}}{2m_\phi} \left( \frac{a_e}{a} \right)^{3/2} \cos\theta(t) \right] \; ,\nonumber\\ H &\cong& H_{\rm inf} \left( \frac{a_e}{a} \right)^{3/2} \left[ 1+ \frac{3H_{\rm inf}}{4m_\phi} \left( \frac{a_e}{a} \right)^{3/2} \sin 2\theta(t) \right] \; ,\end{aligned}$$ where $\theta(t) = m_\phi (t-t_e)$, $\phi_e = \sqrt{6} \, M_{\rm Pl} H_{\rm inf} / m_\phi$, and subscripts $e$ and ${\rm inf}$ denote values at the end of and during inflation, respectively. Note that $a/a_e = (3H_{\rm inf} \, t/2)^{2/3}$ up to this order.
The source term for $\delta\phi$ consists of two contributions, one from the gravitational interaction with the gauge field, $\hat S_\phi^g$, and the other from the mass mixing with $\delta\chi$, ${\cal M}_{\phi\chi}$. Writing up to the terms suppressed by one order in $H/m_\phi$, we find[^16] $$\begin{aligned} \hat S_\phi^g &\cong& \sqrt{\frac{3}{2}} \, \frac{a^3}{M_{\rm Pl}} \bigg\{ - 2 i m_\phi \frac{a k_i}{k^2} \, \delta\hat{P}_i \, \cos\theta + \delta\hat\rho_E \, \sin\theta \nonumber\\ && \qquad - i H_{\rm inf} \left( \frac{a_e}{a} \right)^{3/2} \frac{a k_i}{k^2} \, \delta\hat{P}_i \left[ \left( 2n - \frac{7}{4} \right) \sin\theta - \frac{9}{4} \sin3\theta \right] \bigg\} \; , \nonumber\\ {\cal M}_{\phi\chi}^2 &\cong& \sqrt{6} \, a^2 \frac{\dot\chi_0}{M_{\rm Pl}} \, m_\phi \left[ \cos\theta + \frac{9H_{\rm inf}}{8m_\phi} \left( \frac{a_e}{a} \right)^{3/2} \left( 3 \sin\theta - \sin 3\theta \right) \right] \; . \label{phi-sources}\end{aligned}$$ The first terms in both expressions appear dominant; however, we will see that they contribute to the curvature perturbation only in the same order as the other terms (and that is why we need to include the apparently sub-leading terms). In the contribution from $\delta\chi$, we concentrate on the effect from the gauge field, which is given by . Therefore we can express the total source for the inflaton perturbation from the gauge field as $$\hat{S}_\phi \equiv \hat{S}_\phi^g - {\cal M}_{\phi\chi}^2 {\cal X}_{\bm k}^{\em} \; , \label{phi-source-tot}$$ where $\hat{S}_\phi^g$ and ${\cal M}_{\phi\chi}^2$ are given in , and ${\cal X}_{\bm k}^{\em}$ in . The equation of motion for the inflaton perturbation during inflaton oscillation can be obtained from the homogeneous part in together with the source .
In physical time, we have[^17] $$\left( \partial_t^2 + \frac{k^2}{a^2} + m_\phi^2 \right) \left( a^{1/2} \varphi_{\bm k} \right) \simeq a^{-3/2} \hat{S}_\phi \; .$$ This equation can be formally solved for the sourced part of the solution as $$\begin{aligned} a^{1/2} \varphi_{\bm k}^{\em} (t) &=& \int dt' \, G_k^\phi \left( t, t' \right) \, a^{-3/2} (t') \, \hat{S}_\phi \left( t' , \bm{k} \right) \; , \nonumber\\ \partial_t \left[ a^{1/2} \varphi_{\bm k}^{\em} (t) \right] &=& \int dt' \, \partial_t G_k^\phi \left( t, t' \right) \, a^{-3/2} (t') \, \hat{S}_\phi \left( t' , \bm{k} \right) \; \label{phi-formal}\end{aligned}$$ Focusing on the super-horizon modes, i.e., $k/a \ll m_\phi$, the Green function $G_k^\phi(t,t')$ is found as $$G_k^\phi(t,t') = \Theta(t-t') \, \frac{\sin\left[ m_\phi(t-t') \right]}{m_\phi} \; .$$ Evaluating the time integrals in for the period of inflaton oscillation, during which the electromagnetic field evolves as $\delta\hat\rho_E \propto a^{2n-4}$ and $\delta\hat P_i \propto a^{2n - 7/2}$, we find $$\begin{aligned} a^{1/2} \varphi_{\bm k}^{\em} &\simeq& A_1 \left( \frac{t}{t_e} \right)^{\frac{4n+1}{3}} \left( \frac{m_\phi}{H_{\rm inf}} \sin\theta + \frac{4n+1}{4} \, \frac{t_e}{t} \cos\theta \right) - A_2 \left( \frac{t}{t_e} \right)^{\frac{4n-2}{3}} \cos\theta \; , \nonumber\\ \frac{\partial_t \left( a^{1/2} \varphi_{\bm k}^{\em} \right)}{m_\phi} &\simeq& A_1 \left( \frac{t}{t_e} \right)^{\frac{4n+1}{3}} \left( \frac{m_\phi}{H_{\rm inf}} \cos\theta + \frac{4n+1}{4} \, \frac{t_e}{t} \sin\theta \right) + A_2 \left( \frac{t}{t_e} \right)^{\frac{4n-2}{3}} \sin\theta \; , \nonumber\\ \label{phi-dphi}\end{aligned}$$ where $$\begin{aligned} A_1 &\equiv& \frac{\sqrt{6} \, a_e^{3/2}}{M_{\rm Pl} m_\phi H_{\rm inf} (4n+1)} \left[ \frac{2n}{(n-2)(4n-5)} \, \delta\hat\rho_E(t_e) - i \, \frac{a_e H_{\rm inf} k_i}{k^2} \, \delta\hat P_i (t_e) \right] \; , \nonumber\\ A_2 &\equiv& \frac{\sqrt{6} \, a_e^{3/2}}{M_{\rm Pl} m_\phi H_{\rm inf} (4n-2)} \left[ 
\frac{8n^2+n+20}{4(n-2)(4n-5)} \, \delta\hat\rho_E (t_e) - i \left( n - \frac{7}{8} \right) \frac{a_e H_{\rm inf} k_i}{k^2} \, \delta\hat P_i (t_e) \right] \; . \nonumber\\\end{aligned}$$ Using the definition of $\delta\rho_\phi$ in , one can easily see that the dominant terms of the equations in cancel out. This is because the would-be leading (dangerous) contributions in $\delta\phi$ interfere destructively with the oscillating background $\phi_0$. We thus obtain the time-averaged perturbation of the inflaton energy density sourced by the EM field, $$\overline{\delta\hat\rho_\phi^{\em}} \simeq \frac{3}{2n-1} \left[- \frac{8n^2+n+20}{4(n-2)(4n-5)} \, \delta\hat\rho_E (t) + i \left( n - \frac{7}{8} \right) \frac{a H k_i}{k^2} \, \delta\hat P_i (t) \right] \; , \label{drho-phi}$$ up to the actual leading order, where the bar denotes the time average. Total energy density perturbation {#subapp:total} --------------------------------- The total energy density perturbation is the sum of the density perturbations of the electric field, the spectator field, and the inflaton, i.e. $\delta\rho_{\rm tot} = \delta\rho_E + \delta\rho_\chi + \delta\rho_\phi$. Using and , we obtain the total perturbation originating from the gauge field, $$\delta\hat\rho_{\rm tot}^{\em} \simeq \frac{8n^2+61n-100}{4(n-2)(2n-1)(4n-5)} \, \delta\hat\rho_E + i \, \frac{3(8n - 7)}{8(2n-1)} \, \frac{a H k_i}{k^2} \, \delta\hat P_i \; , \label{drhotot}$$ where hat denotes an operator in the Fourier space. Before proceeding to compute correlation functions, let us comment on the convergence of the $\delta\hat P_i$ term.
At the operator level, it is straightforward to see $$\begin{aligned} i \, \frac{k_i}{k^2} \, \delta\hat P_i (\bm k) &=& \frac{I^2}{a^4} \int \frac{d^3p}{(2\pi)^3} \bigg\{ \frac{p}{2k} \, \hat{\bm k} \cdot \hat{\bm p} \left[ \hat A_i'(\bm k - \bm p) \, \hat A_i(\bm p) - \hat A_i'(\bm p) \, \hat A_i(\bm k - \bm p) \right] \nonumber\\ && \qquad\qquad\qquad\quad - \left( \hat k_i \hat k_j - \frac{\delta_{ij}}{2} \right) \, \hat A_i' (\bm p) \, \hat A_j (\bm k - \bm p) \bigg\} \; , \label{dPi-operator}\end{aligned}$$ where we have sent the integration variable $\bm p \rightarrow \bm k - \bm p$. A glance at the first term in the curly brackets of seems to hint that this quantity diverges in the IR limit, $k \rightarrow 0$. We now show that this is not the case. By decomposing $\hat A_i (t, \bm p) = \epsilon_i^\lambda \left( \hat {\bm p} \right) A_\lambda (t, \bm p)$, we see, in the limit $k \rightarrow 0$, $$\hat A_i'(\bm k - \bm p) \, \hat A_i(\bm p) - \hat A_i'(\bm p) \, \hat A_i(\bm k - \bm p) \rightarrow A_{\lambda}'(- \bm p) \, A_\lambda(\bm p) - A_\lambda'(\bm p) \, A_{\lambda}(- \bm p) \; .$$ From this, one can easily show that as long as the mode function in $A_\lambda$ is real up to a constant phase (which is generally true once the modes become classical), the right-hand side vanishes. Therefore, as long as we consider semi-classical (statistical) quantities, we can quite generally conclude that $k_i \delta\hat P_i / k^2$ stays finite in the limit $k \rightarrow 0$.
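The rational coefficients in the expression for $\delta\hat\rho_{\rm tot}^{\em}$ above follow from adding $\delta\hat\rho_E$ (coefficient $1$), the $\delta\hat\rho_\chi$ relation, and the time-averaged $\overline{\delta\hat\rho_\phi^{\em}}$. This bookkeeping can be verified with exact rational arithmetic (a cross-check of ours, not taken from the paper):

```python
from fractions import Fraction

def rho_E_coeff_total(n):
    # Coefficient of delta rho_E in delta rho_tot = delta rho_E + delta rho_chi
    # + time-averaged delta rho_phi, using the relations quoted above
    chi = Fraction(-n * (4 * n - 17), (n - 2) * (4 * n - 5))
    phi = Fraction(3, 2 * n - 1) * Fraction(-(8 * n**2 + n + 20),
                                            4 * (n - 2) * (4 * n - 5))
    return 1 + chi + phi

def gamma_n(n):
    # Closed form appearing in the total density perturbation
    return Fraction(8 * n**2 + 61 * n - 100,
                    4 * (n - 2) * (2 * n - 1) * (4 * n - 5))

def lambda_n(n):
    # delta P_i coefficient: (3/(2n-1)) (n - 7/8)
    return Fraction(3, 2 * n - 1) * (Fraction(n) - Fraction(7, 8))

for n in range(3, 11):
    assert rho_E_coeff_total(n) == gamma_n(n)
    assert lambda_n(n) == Fraction(3 * (8 * n - 7), 8 * (2 * n - 1))
```

The check is restricted to $n \ge 3$ only to avoid the $n=2$ pole of the prefactors.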
The two-point correlation function of $\delta\hat\rho_{\rm tot}^\em$ consists of the following three contributions: $$\begin{aligned} && \left\langle \delta\hat\rho_E \left( \bm k \right) \delta\hat\rho_E \left( \bm k' \right) \right\rangle = \frac{\delta^{(3)} \left( \bm{k} + \bm{k}' \right)}{2} \int d^3p \left[ 1 + \left( \hat{\bm p} \cdot \widehat{\bm k - \bm p} \right)^2 \right] \big\vert E(p) \big\vert^2 \, \big\vert E( \vert \bm k - \bm p \vert ) \big\vert^2 \; , \nonumber\\ && \left\langle i \frac{k_i}{k^2} \delta\hat P_i \left( \bm k \right) \, i \frac{k_j'}{k'{}^2} \delta\hat P_j \left( \bm k' \right) \right\rangle = \frac{\delta^{(3)} ( \bm k + \bm k')}{k^2} \int d^3p \nonumber\\ && \qquad\qquad \times \Big\{ \left[ \left( \hat{\bm k} \cdot \hat{\bm p} \right)^2 + \left( \hat{\bm k} \cdot \widehat{\bm{k} - \bm{p}} \right)^2 \right] \big\vert E (p) \big\vert^2 \, \big\vert B ( \vert \bm k - \bm p \vert ) \big\vert^2 \nonumber\\ && \qquad\qquad\quad + 2 \left( \hat{\bm k} \cdot \hat{\bm p} \right) \left( \hat{\bm k} \cdot \widehat{ \bm k - \bm p} \right) \, E(p) \, B^*(p) \, B(\vert \bm k - \bm p \vert) \, E^*( \vert \bm k - \bm p \vert ) \Big\} \; , \nonumber\\ && \left\langle i \frac{k_i}{k^2} \delta\hat P_i \left( \bm k \right) \, \delta\hat\rho_E \left( \bm k' \right) \right\rangle = - \frac{\delta^{(3)} \left( \bm{k} + \bm{k}' \right)}{k} \int d^3p \nonumber\\ && \qquad\qquad \times \left[ \hat{\bm k} \cdot \hat{\bm p} + \left( \hat{\bm k} \cdot \widehat{ \bm k - \bm p} \right) \left( \hat{\bm p} \cdot \widehat{\bm k - \bm p} \right) \right] B (p) \, E^*(p) \, \big\vert E (\vert \bm k - \bm p \vert) \big\vert^2 \; , \label{twopoints-EM}\end{aligned}$$ where $$E(p) \equiv - \frac{I}{a^2} {\cal A}'(p) \; , \quad B(p) \equiv \frac{I}{a^2} \, p \, {\cal A}(p) \; , \quad {\cal A}(p) \equiv {\cal A}_+(p) = {\cal A}_-(p) \; .$$ We are interested in the scales relevant to CMB observations, i.e. 
the external momentum $k \sim k_{\rm CMB}$, while the phase space momentum $\vec p$ is peaked around $p \sim a_i H_{\rm inf} \gg k_{\rm CMB}$, for our current interest in magnetogenesis with coherent length $L \lesssim 1 \; {\rm Mpc}$. In this limit, the first quantity in simplifies to $$\left\langle \delta\hat\rho_E \left( \bm k \right) \delta\hat\rho_E \left( \bm k' \right) \right\rangle \simeq \delta^{(3)} \left( \bm{k} + \bm{k}' \right) \int d^3p \, \big\vert E(p) \big\vert^4 \; . \label{drhoEdrhoE-1}$$ The expressions for $E(p)$ and $B(p)$ outside horizon during the period of inflaton oscillation can be approximated as $$E\left( {\eta} , p \right) \simeq C_2 \!\left( \frac{p}{k_i} \right) \frac{2^{n + \frac{1}{2}} \, \Gamma \left( n+\frac{1}{2} \right)}{\pi \, p^{3/2}} \left( \frac{a_e H_{\rm inf}}{p} \right)^n \left( \frac{a_e H_{\rm inf}}{aH} \right)^{2n} \left( \frac{p}{a} \right)^2 \, , \; B\left( {\eta} , p \right) \simeq - \frac{p{\eta}}{4n+1} \, E \left( {\eta} , p \right) \; , \label{EB-approx}$$ where subscripts $e$ and ${\rm inf}$ denote values at the end of and during inflation, respectively, and we recall the definition of $C_2$ in eq.  
$$C_2 (x) = \frac{\pi}{2 \sqrt{2}} \sqrt{x} \left[ J_{n + \frac{1}{2}} \left( x \right) + i \, J_{n - \frac{1}{2}} \left( x \right) \right] \; .$$ Plugging into , we find $$\left\langle \delta\hat\rho_E \left( \bm k \right) \delta\hat\rho_E \left( \bm k' \right) \right\rangle \big\vert_{t=t_r} \simeq \delta^{(3)} \left( \bm{k} + \bm{k}' \right) \frac{2^{4(n+1)} \, \Gamma^4\left( n + \frac{1}{2} \right)}{\pi^3 \, a_r^8} \left( a_i H_{\rm inf} \right)^5 \left( \frac{a_e}{a_i} \right)^{4n} \left( \frac{a_e H_{\rm inf}}{a_r H_r} \right)^{8n} G_n^{(1)} \; , \label{drhoEdrhoE}$$ where the quantity is evaluated at the time of reheating, denoted by subscripts $r$, and $$G_n^{(1)} \equiv \int_0^\infty dz \, \vert C_2 \left( z \right) \vert^4 \, z^{-4n+4} \; , \label{def-Gn}$$ The lower and upper bounds of the integral in should in principle be numbers of ${\cal O} \left( \frac{a_r H_r}{a_i H_{\rm inf}} \right)$ and ${\cal O} \left( \frac{k}{a_i H_{\rm inf}} \right)$, respectively. However, the integrand is peaked around $z \sim {\cal O}(1)$, which is well within the domain of integration, and sending the lower and upper bounds respectively to $0$ and $\infty$ is a good approximation. We compute the integral numerically, and the result can be fitted as, within the domain $2<n<10$, $$G_n^{(1)} \simeq \exp\left( 5.27 - 2.34 n - 0.821 n^2 + 0.0240 n^3 \right) \; ,$$ with an error of $\lesssim 1 \, \%$. For the second and third quantities in , the would-be leading-order terms vanish in the expansion for small $k/p$. The next would-be leading order also vanishes, since they are proportional to odd orders in $\hat k \cdot \hat p$ and thus give zero after the angular integration. This explicitly demonstrates that the $\delta\hat P_i$ terms do not suffer IR divergence, as discussed at the beginning of this subsection. 
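Since $C_2$ and $G_n^{(1)}$ are defined explicitly above, the quoted fit can be reproduced directly. Noting $\vert C_2(z)\vert^4 = (\pi^4/64)\, z^2 \left[ J_{n+1/2}^2(z) + J_{n-1/2}^2(z) \right]^2$, a pure-Python sketch for $n=3$ follows (our own check; the half-integer Bessel functions are written in closed form, and the tolerance is kept deliberately loose):

```python
import math

def j52(z):
    # J_{5/2}(z) in closed form (half-integer order)
    c = math.sqrt(2.0 / (math.pi * z))
    return c * ((3.0 / z**2 - 1.0) * math.sin(z) - 3.0 * math.cos(z) / z)

def j72(z):
    # J_{7/2}(z) in closed form
    c = math.sqrt(2.0 / (math.pi * z))
    return c * ((15.0 / z**3 - 6.0 / z) * math.sin(z)
                - (15.0 / z**2 - 1.0) * math.cos(z))

def integrand(z, n=3):
    # |C_2(z)|^4 z^{-4n+4} = (pi^4/64) z^{6-4n} [J_{n+1/2}^2 + J_{n-1/2}^2]^2
    return (math.pi**4 / 64.0) * z ** (6 - 4 * n) * (j72(z)**2 + j52(z)**2) ** 2

def G1_n3(a=1e-3, b=60.0, steps=60000):
    # Composite Simpson rule; the integrand vanishes like z^4 as z -> 0 and
    # falls off like z^{-8} at large z, so [a, b] captures the full integral
    h = (b - a) / steps
    tot = integrand(a) + integrand(b)
    for i in range(1, steps):
        tot += (4 if i % 2 else 2) * integrand(a + i * h)
    return tot * h / 3.0

fit_n3 = math.exp(5.27 - 2.34 * 3 - 0.821 * 3**2 + 0.0240 * 3**3)
assert abs(G1_n3() / fit_n3 - 1.0) < 0.2  # direct integral agrees with the fit
```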
Then the actual leading-order contributions are $$\begin{aligned} \left\langle i \frac{k_i}{k^2} \delta\hat P_i \left( \bm k \right) \, i \frac{k_j'}{k'{}^2} \delta\hat P_j \left( \bm k' \right) \right\rangle &\simeq& \delta^{(3)} ( \bm k + \bm k') \, \frac{2^{4n+2} \pi \, \Gamma^4\left( n + \frac{1}{2} \right)}{15 \left( 4n+1 \right)^2 a^8} \, \frac{\left( a_i H_{\rm inf} \right)^5}{\left( a H \right)^2} \left( \frac{a_e}{a_i} \right)^{4n} \left( \frac{a_e H_{\rm inf}}{aH} \right)^{8n} G_n^{(2)} \; , \nonumber \\ \left\langle i \frac{k_i}{k^2} \delta\hat P_i \left( \bm k \right) \, \delta\hat\rho_E \left( \bm k' \right) \right\rangle &\simeq& \delta^{(3)} ( \bm k + \bm k') \, \frac{2^{4n+1} \pi \, \Gamma^4\left( n + \frac{1}{2} \right)}{3 \left( 4n+1 \right) a^8} \, \frac{\left( a_i H_{\rm inf} \right)^{5}}{a H} \left( \frac{a_e}{a_i} \right)^{4n} \left( \frac{a_e H_{\rm inf}}{aH} \right)^{8n} G_n^{(3)} \; , \nonumber\\ \label{PP-Prho}\end{aligned}$$ where $$\begin{aligned} G_n^{(2)} & \equiv & \int_0^\infty dz \, z^{6-4n} \left[ J_{n+1/2}^2 (z) + J_{n-1/2}^2 (z) \right] \bigg[ \left( 6 n - 1 \right) J_{n+1/2}^2 (z) - J_{n-1/2}^2 (z) \bigg] \; , \nonumber \\ G_n^{(3)} & \equiv & \int_0^\infty dz \, z^{6-4n} \left[ J_{n+1/2}^2 ( z ) + J_{n-1/2}^2 (z) \right] \left[ \left( 2 n -1 \right) J_{n+1/2}^2 (z) - J_{n-1/2}^2 (z) \right] \; . \nonumber\\\end{aligned}$$ These functions can be fitted as, in the regime $2<n<10$, $$\begin{aligned} G_n^{(2)} &\simeq& \exp\left( 5.86 - 2.34 n - 0.820 n^2 + 0.0240 n^3 \right) \; , \nonumber\\ G_n^{(3)} &\simeq& \exp\left( 3.46 - 2.33 n - 0.821 n^2 + 0.0241 n^3 \right) \; ,\end{aligned}$$ and we hence see that $G_n^{(3)} \simeq G_n^{(1)} / 6 \simeq G_n^{(2)} / 11$ to good accuracy.
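The approximate proportionality $G_n^{(3)} \simeq G_n^{(1)}/6 \simeq G_n^{(2)}/11$ can be read off directly from the three fitted expressions; a minimal check of ours (it holds at the $\sim\!10\%$ level over $2 \le n \le 6$):

```python
import math

def fit_G1(n): return math.exp(5.27 - 2.34 * n - 0.821 * n**2 + 0.0240 * n**3)
def fit_G2(n): return math.exp(5.86 - 2.34 * n - 0.820 * n**2 + 0.0240 * n**3)
def fit_G3(n): return math.exp(3.46 - 2.33 * n - 0.821 * n**2 + 0.0241 * n**3)

# G3 ~ G1/6 and G3 ~ G2/11 to within ~10% over the lower part of the fitted range
for n in (2.0, 3.0, 4.0, 5.0, 6.0):
    assert abs(6.0 * fit_G3(n) / fit_G1(n) - 1.0) < 0.1
    assert abs(11.0 * fit_G3(n) / fit_G2(n) - 1.0) < 0.1
```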
We are now ready to collect the power spectrum of curvature perturbation $\zeta$, defined as $${\cal P}_\zeta \left( k \right) \, \left( 2 \pi \right)^3 \delta^{(3)} \left( \bm k + \bm k' \right) \equiv \frac{k^3}{2\pi^2} \left\langle \hat\zeta \left( \bm k \right) \hat\zeta \left( \bm k' \right) \right\rangle \; ,$$ where hat denotes an operator in the Fourier space. Using the relation , together with the expression for the total density perturbation, and recalling the background equation $\dot\rho = -3H \rho = - 9 M_{\rm Pl}^2 H^3$, we obtain the sourced power spectrum evaluated at the time of reheating, $t=t_r$, $$\begin{aligned} {\cal P}_\zeta^{\em} \big\vert_{t=t_r} &\simeq& \frac{2^{4n} \, \Gamma^4\left( n + \frac{1}{2} \right)}{81 \pi^8} \left[ \gamma_n^2 G_n^{(1)} + \frac{\pi^4 \lambda_n^2 G_n^{(2)}}{60 \left( 4n+1 \right)^2} + \frac{\pi^4 \gamma_n \lambda_n G_n^{(3)}}{12 \left( 4n+1 \right)} \right] \nonumber\\ && \times \left( \frac{H_r}{M_{\rm Pl}} \right)^4 \left( \frac{a_i H_{\rm inf}}{a_r H_r} \right)^{5} \left( \frac{a_e}{a_i} \right)^{4n} \left( \frac{a_e H_{\rm inf}}{a_r H_r} \right)^{8n} \left( \frac{k}{a_r H_r} \right)^3 \; , \label{powerzeta-result}\end{aligned}$$ where $$\gamma_n \equiv \frac{8n^2+61n-100}{4(n-2)(2n-1)(4n-5)} \; , \quad \lambda_n \equiv \frac{3(8n - 7)}{8(2n-1)} \; . \label{def-gamlam}$$ Eq.  is the main result of this appendix. The total power spectrum is the simple sum of the vacuum and sourced modes, denoted respectively by superscripts ${\rm vac}$ and ${\rm em}$, i.e. $${\cal P}_\zeta^{\rm tot} = {\cal P}_\zeta^{\rm vac} + {\cal P}_\zeta^{\em} \; .$$ Since the sourced spectrum is strongly scale-dependent, we, at the very least, enforce ${\cal P}_\zeta^{\em} < {\cal P}_\zeta^{\rm vac} \simeq {\cal P}_\zeta^{\rm tot}$. This puts a constraint on the production of magnetic fields in the model of our current consideration. 
In the main text, we discuss in detail the final magnetic-field strength with this bound on curvature perturbation imposed. [99]{} A. Neronov and I. Vovk, [*Evidence for strong extragalactic magnetic fields from Fermi observations of TeV blazars*]{}, Science [**328**]{}, 73 (2010) \[ [[arXiv:1006.3504]{}]{}\]. F. Tavecchio, G. Ghisellini, L. Foschini, G. Bonnoli, G. Ghirlanda and P. Coppi, [*The intergalactic magnetic field constrained by Fermi/LAT observations of the TeV blazar 1ES 0229+200*]{}, Mon. Not. Roy. Astron. Soc.  [**406**]{}, L70 (2010) \[ [[arXiv:1004.1329]{}]{}\]. K. Dolag, M. Kachelriess, S. Ostapchenko and R. Tomas, [*Lower limit on the strength and filling factor of extragalactic magnetic fields*]{}, Astrophys. J.  [**727**]{}, L4 (2011) \[[[arXiv:1009.1782]{}](http://arxiv.org/abs/1009.1782)\]. W. Essey, S. ‘i. Ando and A. Kusenko, “Determination of intergalactic magnetic fields from gamma ray data,” Astropart. Phys.  [**35**]{}, 135 (2011) \[arXiv:1012.5313 \[astro-ph.HE\]\]. A. M. Taylor, I. Vovk and A. Neronov, [*Extragalactic magnetic fields constraints from simultaneous GeV-TeV observations of blazars*]{}, Astron. Astrophys.  [**529**]{}, A144 (2011) K. Takahashi, M. Mori, K. Ichiki, S. Inoue and H. Takami, “Lower Bounds on Magnetic Fields in Intergalactic Voids from Long-term GeV-TeV Light Curves of the Blazar Mrk 421,” Astrophys. J.  [**771**]{}, L42 (2013). W. Chen, J. H. Buckley and F. Ferrer, “Evidence for GeV Pair Halos around Low Redshift Blazars,” arXiv:1410.7717 \[astro-ph.HE\]. A. Kandus, K. E. Kunze and C. G. Tsagas, “Primordial magnetogenesis,” Phys. Rept.  [**505**]{} (2011) 1 \[arXiv:1007.3891 \[astro-ph.CO\]\]. R. Durrer and A. Neronov, “Cosmological Magnetic Fields: Their Generation, Evolution and Observation,” Astron. Astrophys. Rev.  [**21**]{} (2013) 62 \[arXiv:1303.7121 \[astro-ph.CO\]\]. K. Subramanian, “The origin, evolution and signatures of primordial magnetic fields,” arXiv:1504.02311 \[astro-ph.CO\]. T. Fujita and S. 
Mukohyama, “Universal upper limit on inflation energy scale from cosmic magnetic field,” JCAP [**1210**]{} (2012) 034 \[arXiv:1205.5031 \[astro-ph.CO\]\]. C. Caprini and S. Gabici, “Gamma-ray observations of blazars and the intergalactic magnetic field spectrum,” Phys. Rev. D [**91**]{}, no. 12, 123514 (2015) \[arXiv:1504.00383 \[astro-ph.CO\]\]. T. Vachaspati, Phys. Lett. B [**265**]{} (1991) 258. doi:10.1016/0370-2693(91)90051-Q G. Sigl, A. V. Olinto and K. Jedamzik, Phys. Rev. D [**55**]{} (1997) 4582 doi:10.1103/PhysRevD.55.4582 \[astro-ph/9610201\]. S. Matarrese, S. Mollerach, A. Notari and A. Riotto, Phys. Rev. D [**71**]{} (2005) 043502 doi:10.1103/PhysRevD.71.043502 \[astro-ph/0410687\]. S. Saga, K. Ichiki, K. Takahashi and N. Sugiyama, Phys. Rev. D [**91**]{} (2015) 12, 123510 doi:10.1103/PhysRevD.91.123510 \[arXiv:1504.03790 \[astro-ph.CO\]\]. M. S. Turner and L. M. Widrow, [*Inflation Produced, Large Scale Magnetic Fields*]{}, Phys. Rev. D [**37**]{}, 2743 (1988). B. Ratra, [*Cosmological ’seed’ magnetic field from inflation*]{}, Astrophys. J.  [**391**]{}, L1 (1992). W. D. Garretson, G. B. Field and S. M. Carroll, “Primordial magnetic fields from pseudo-Goldstone bosons,” Phys. Rev. D [**46**]{}, 5346 (1992) \[hep-ph/9209238\]. F. Finelli and A. Gruppuso, [*Resonant amplification of gauge fields in expanding universe*]{}, Phys. Lett. B [**502**]{}, 216 (2001) \[[[hep-th/0001231]{}](http://arxiv.org/abs/hep-th/0001231)\]. A. -C. Davis, K. Dimopoulos, T. Prokopec and O. Tornkvist, [*Primordial spectrum of gauge fields from inflation*]{}, Phys. Lett. B [**501**]{}, 165 (2001) \[Phys. Rev. Focus [**10**]{}, STORY9 (2002)\] \[[[astro-ph/0007214]{}](http://arxiv.org/abs/astro-ph/0007214)\]. K. Bamba and J. Yokoyama, [*Large scale magnetic fields from inflation in dilaton electromagnetism*]{}, Phys. Rev. D [**69**]{}, 043507 (2004) M. M. Anber and L. Sorbo, “N-flationary magnetic fields,” JCAP [**0610**]{}, 018 (2006) \[astro-ph/0606534\]. J. Martin and J. ’i.
Yokoyama, [*Generation of Large-Scale Magnetic Fields in Single-Field Inflation*]{}, JCAP [**0801**]{}, 025 (2008) \[arXiv:0711.4307 \[astro-ph\]\]. R. Durrer, L. Hollenstein and R. K. Jain, [*Can slow roll inflation induce relevant helical magnetic fields?*]{}, JCAP [**1103**]{}, 037 (2011) \[[[arXiv:1005.5322]{}](http://arxiv.org/abs/1005.5322)\]. R. J. Z. Ferreira, R. K. Jain and M. S. Sloth, “Inflationary magnetogenesis without the strong coupling problem,” JCAP [**1310**]{}, 004 (2013) \[arXiv:1305.7151 \[astro-ph.CO\]\]. C. Caprini and L. Sorbo, “Adding helicity to inflationary magnetogenesis,” JCAP [**1410**]{}, no. 10, 056 (2014) \[arXiv:1407.2809 \[astro-ph.CO\]\]. T. Kobayashi, “Primordial Magnetic Fields from the Post-Inflationary Universe,” JCAP [**1405**]{} (2014) 040 \[arXiv:1403.5168 \[astro-ph.CO\]\]. G. Tasinato, JCAP [**1503**]{} (2015) 040 doi:10.1088/1475-7516/2015/03/040 \[arXiv:1411.2803 \[hep-th\]\]. G. Domènech, C. Lin and M. Sasaki, “Inflationary Magnetogenesis with Broken Local U(1) Symmetry,” arXiv:1512.01108 \[astro-ph.CO\]. T. Fujita, R. Namba, Y. Tada, N. Takeda and H. Tashiro, “Consistent generation of magnetic fields in axion inflation models,” JCAP [**1505**]{}, no. 05, 054 (2015) \[arXiv:1503.05802 \[astro-ph.CO\]\]. L. Campanelli, “Lorentz-violating inflationary magnetogenesis,” Eur. Phys. J. C [**75**]{}, no. 6, 278 (2015) doi:10.1140/epjc/s10052-015-3510-x \[arXiv:1503.07415 \[gr-qc\]\]; L. Campanelli, “Superhorizon magnetic fields,” arXiv:1512.08600 \[astro-ph.CO\]. J. M. Salim, N. Souza, S. E. Perez Bergliaffa and T. Prokopec, “Creation of cosmological magnetic fields in a bouncing cosmology,” JCAP [**0704**]{}, 011 (2007) doi:10.1088/1475-7516/2007/04/011 \[astro-ph/0612281\]. F. A. Membiela, “Primordial magnetic fields from a non-singular bouncing cosmology,” Nucl. Phys. B [**885**]{}, 196 (2014) doi:10.1016/j.nuclphysb.2014.05.018 \[arXiv:1312.2162 \[astro-ph.CO\]\]. L. Sriramkumar, K. Atmjeet and R. K. 
Jain, “Generation of scale invariant magnetic fields in bouncing universes,” JCAP [**1509**]{}, no. 09, 010 (2015) doi:10.1088/1475-7516/2015/09/010 \[arXiv:1504.06853 \[astro-ph.CO\]\]. V. Demozzi, V. Mukhanov and H. Rubinstein, “Magnetic fields from inflation?,” JCAP [**0908**]{}, 025 (2009) \[arXiv:0907.1030 \[astro-ph.CO\]\]. N. Barnaby, R. Namba, M. Peloso, “Observable non-gaussianity from gauge field production in slow roll inflation, and a challenging connection with magnetogenesis,” Phys. Rev. D [**85**]{}, 123523 (2012) \[arXiv:1202.1469 \[astro-ph.CO\]\]. T. Fujita and S. Yokoyama, “Higher order statistics of curvature perturbations in IFF model and its Planck constraints,” JCAP [**1309**]{}, 009 (2013) \[arXiv:1306.2992 \[astro-ph.CO\]\]. T. Fujita and S. Yokoyama, “Critical constraint on inflationary magnetogenesis,” JCAP [**1403**]{}, 013 (2014) \[JCAP [**1405**]{}, E02 (2014)\] \[arXiv:1402.0596 \[astro-ph.CO\]\]. R. J. Z. Ferreira, R. K. Jain and M. S. Sloth, “Inflationary Magnetogenesis without the Strong Coupling Problem II: Constraints from CMB anisotropies and B-modes,” JCAP [**1406**]{}, 053 (2014) \[arXiv:1403.5516 \[astro-ph.CO\]\]. R. Z. Ferreira and M. S. Sloth, “Universal Constraints on Axions from Inflation,” JHEP [**1412**]{} (2014) 139 \[arXiv:1409.5799 \[hep-ph\]\]. R. Z. Ferreira and J. Ganc, “Inflationary dynamics of kinetically-coupled gauge fields,” JCAP [**1504**]{}, no. 04, 029 (2015) \[arXiv:1411.5362 \[astro-ph.CO\]\]. B. A. Bassett, G. Pollifrone, S. Tsujikawa and F. Viniegra, “Preheating as cosmic magnetic dynamo,” Phys. Rev. D [**63**]{}, 103515 (2001) \[astro-ph/0010628\]. E. W. Kolb and M. S. Turner, “The Early Universe,” Front. Phys.  [**69**]{}, 1 (1990). [^1]: The value of the lower bound on the EGMF strength varies by one or two orders of magnitude (roughly ${\cal O}(10^{-17}) - {\cal O}(10^{-15})$) depending on assumptions in the analysis, such as the energy spectrum and time variation of the source.
In this paper, we aim at generating stronger magnetic fields and hence consider the upper value, ${\cal O}(10^{-15}) \, \G$. [^2]: Magnetogenesis in bouncing universe scenarios is considered in [@Salim:2006nw; @Membiela:2013cea; @Sriramkumar:2015yza]. [^3]: We refer the reader to some important exceptions [@Bamba:2003av; @Ferreira:2013sqa; @Kobayashi:2014sga]. [^4]: Clearly, the spectator field responsible for the time dependence of $I$ does not necessarily decay at the same time as the inflaton, which is the dominant energy content at the time and drives reheating. We simply impose their simultaneous decay as an additional assumption, in order to reduce the number of model parameters. [^5]: A recent study [@Domenech:2015zzi] considers the opposite regime, i.e. $I \le 1$ during inflation, while keeping weak couplings to fermions by introducing a coupling function in the fermion sector as well and by explicitly breaking the $U(1)$ gauge invariance. [^6]: The polarization vector $\epsilon_i^{(\lambda)}$ satisfies $k_i \epsilon_{i}^{(\lambda)}(\hat{\bm{k}})=0$ and $\sum_{\lambda=1}^{2} {\epsilon}_{i}^{(\lambda)}(\hat{\bm{k}}) {\epsilon}_{j}^{(\lambda)}(-\hat{\bm{k}}) =\delta_{ij} - (\hat{\bm{k}})_{i}(\hat{\bm{k}})_{j}$, and the creation/annihilation operators satisfy the commutation relation, $[a^{(\lambda)}_{\bm{p}},a^{\dagger(\sigma)}_{-\bm{q}}] = (2\pi)^3\delta(\bm{p}+\bm{q})\delta^{\lambda \sigma}$. [^7]: In fig. \[EM spectra\] and fig. \[PEM\_evolve\], we use the exact solutions in eqs. -, instead of the approximated ones. [^8]: Magnetic fields are always subdominant to electric fields in the scenario where red-tilted magnetic spectra are achieved within the regime free of the strong coupling problem. Thus the backreaction and observational constraints are imposed on the electric field energy.
[^9]: By considering very low energy inflation, it is possible to make the hierarchy between $\mcP_E$ and $\mcP_B$ small without the post-inflationary amplification. In this case, however, the curvature perturbation constraint is sensitive to the inflationary dynamics. Hence we do not explore this possibility in this paper. [^10]: If the produced magnetic field is strong enough to satisfy $B_0>10^{-14}\G(\lambda_0/1\rm pc)$, it may be processed by the turbulent plasma and its evolution may deviate from the adiabatic one [@Durrer:2013pga]. In this paper, however, we focus on the case where the plasma effect is insignificant, and this is indeed the case for the fiducial value in eq. . [^11]: The gravitational sourcing to $\delta\chi$, i.e. $A_\mu + A_\mu \xrightarrow{\rm grav.} \delta\chi$ and $A_\mu + A_\mu \xrightarrow{\rm grav.} \delta\phi \xrightarrow{\rm grav.} \delta\chi$, is parametrically smaller and therefore negligible, simply because the gravitational interaction is weak and the energy density of $\chi$ is subdominant to that of the inflaton $\phi$. [^12]: The value of $k_i$ required for the galactic seed magnetic fields is not precisely known, and $k_i \gg 1 \, {\rm kpc}^{-1}$ is not excluded. Furthermore, the necessity of a primordial seed magnetic field for the galactic magnetic fields is itself disputed. If it is not necessary, $k_i$ could be larger. [^13]: Here we say “electromagnetic field” to mean a Standard Model $U(1)$ gauge field. In principle, if the production occurs before the electroweak phase transition, one should instead consider the gauge field associated with the $U(1)$ hypercharge. The true electromagnetic field consists partially of this gauge field, and the conversion is trivial. This may modify the strength of the magnetic field only by around 10% and therefore will not change our conclusion.
[^14]: Since the electric energy density depends not on $\partial_\eta(I\mcA_k)$ but on $I\partial_\eta \mcA_k$, the variable to be matched at the junction is $\partial_\eta \mcA_k$ rather than $\partial_\eta (I\mcA_k)$, which ensures the continuity of the physical quantity. [^15]: Here we treat the mass mixing terms in perturbatively, and thus they do not enter in the free equations of motion. [^16]: One can show that the next-to-leading order contribution to ${\cal X}_{\bm k}^s$ is $\mathcal{O}(H^3/m_\phi^3)$ and thus negligible. [^17]: We neglect the next-to-leading order in ${\cal M}_{\phi\phi}$, coming from the $\phi_0' V_{,\phi}$ term, which might in principle contribute to the homogeneous solution for $\varphi_{\bm k}$ and therefore to its Green function. This would modify our result [*at most*]{} by an ${\cal O}(1)$ numerical factor, and thus our main message would not be altered.
--- abstract: 'Navigation is an essential capability for mobile robots. In this paper, we propose a generalized yet effective 3M (i.e., multi-robot, multi-scenario, and multi-stage) training framework. We optimize a mapless navigation policy with a robust policy gradient algorithm. Our method enables different types of mobile platforms to navigate safely in complex and highly dynamic environments, such as pedestrian crowds. To demonstrate the superiority of our method, we test it with four kinds of mobile platforms in four scenarios. Videos are available at <https://sites.google.com/view/crowdmove>' author: - 'Tingxiang Fan$^{1}$, Xinjing Cheng$^{1}$, Jia Pan$^{2}$, Dinesh Manocha$^{3}$ and Ruigang Yang$^{1}$[^1][^2] [^3]' bibliography: - 'references.bib' title: '**CrowdMove: Autonomous Mapless Navigation in Crowded Scenarios**' --- [^1]: $^{1}$Tingxiang Fan, Xinjing Cheng and Ruigang Yang are with the Robotics and Auto-driving Lab, Baidu Research, Baidu Inc., China [v\_fantingxiang@baidu.com, chengxinjing@baidu.com, yangruigang@baidu.com]{} [^2]: $^{2}$Jia Pan is with the Department of Mechanical and Biomedical Engineering, City University of Hong Kong, Hong Kong, China [jiapan@cityu.edu.hk]{} [^3]: $^{3}$Dinesh Manocha is with the Department of Computer Science, University of Maryland, College Park, USA [dm@cs.umd.edu]{}
--- abstract: | In this paper, we consider the robust linear infinite programming problem $({\rm RLIP}_c) $ defined by $$\begin{aligned} ({\rm RLIP}_c)\quad &&\inf\; \langle c,x\rangle \\ \textrm{subject to } &&x\in X,\; \langle x^\ast,x \rangle \le r ,\;\forall (x^\ast,r)\in\mathcal{U}_t,\; \forall t\in T, \end{aligned}$$ where $X$ is a locally convex Hausdorff topological vector space, $T$ is an arbitrary (possibly infinite) index set, $c\in X^*$, and $\mathcal{U}_t\subset X^*\times \mathbb{R}$, $t \in T$ are uncertainty sets. We propose an approach to duality for the robust linear problems with convex constraints $({\rm RP}_c)$ and establish corresponding robust strong duality and also stable robust strong duality, i.e., robust strong duality that holds “uniformly” for all $c \in X^\ast$. With the different choices/ways of setting/arranging data from $({\rm RLIP}_c) $, one gets back to the model $({\rm RP}_c)$ and the (stable) robust strong duality for $({\rm RP}_c)$ applies. In this way, nine versions of dual problems for $ ({\rm RLIP}_c)$ are proposed. Necessary and sufficient conditions for stable robust strong duality of these pairs of primal-dual problems are given, some of which cover several known results in the literature while the others, to the best knowledge of the authors, are new. Moreover, as by-products, we obtain from the robust strong duality for various pairs of primal-dual problems several robust Farkas-type results for linear infinite systems with uncertainty. Lastly, as extensions/applications, we extend/apply the results obtained to robust linear problems with sub-affine constraints, and to linear infinite problems (i.e., $({\rm RLIP}_c) $ in the absence of uncertainty).
It is worth noticing that even in these cases, we are able to derive new results on (robust/stable robust) duality for the mentioned classes of problems and new robust Farkas-type results for sub-linear systems, and also for linear infinite systems in the absence of uncertainty. author: - '[ N. Dinh]{}[^1]' - '[D.H. Long]{}[^2]' - '[J.-C. Yao]{}[^3]' title: '**Duality for Robust Linear Infinite Programming Problems Revisited[^4]**' --- *Dedicated to Professor Marco Antonio López’s 70$^{th}$ birthday* [**Key words**]{}: Linear infinite programming problems, robust linear infinite problems, stable robust strong duality for robust linear infinite problems, Farkas-type results for infinite linear systems with uncertainty, Farkas-type results for sub-affine systems with uncertainty. [**Mathematics Subject Classification:**]{} 39B62, 49J52, 46N10, 90C31, 90C25. Introduction ============= In this paper, we are concerned with the [*linear infinite programming problem with uncertain parameters*]{} of the form $$\begin{aligned} ({\rm LIP}_c)\quad &&\inf\; \langle c,x\rangle \\ \textrm{subject to} &&x\in X,\;\langle a_t,x \rangle \le b_t,\;\forall t\in T,\end{aligned}$$ where $X$ is a locally convex Hausdorff topological vector space, $T$ is an arbitrary (possibly infinite) index set, $c\in X^*$, $a_t \in X^*$ and $b_t \in \mathbb{R}$ for each $t \in T$, and the couple $(a_t , b_t )$ belongs to an uncertainty set $\mathcal{U}_t\subset X^*\times \mathbb{R}$.
For such a [*linear infinite programming*]{} problem ${\rm (LIP}_c)$ with [*input-parameter uncertainty*]{}, its robust counterpart is the robust linear infinite programming problem $({\rm RLIP}_c) $ defined as follows: $$\begin{aligned} ({\rm RLIP}_c)\quad &&\inf\; \langle c,x\rangle \\ \textrm{subject to } &&x\in X,\; \langle x^\ast,x \rangle \le r ,\;\forall (x^\ast,r)\in\mathcal{U}_t,\; \forall t\in T.\end{aligned}$$ The robust linear infinite problems of the model $({\rm RLIP}_c) $ together with their duality were considered in several works in the literature, such as [@ChJeya17], [@DGLM17-Optim], [@DMVV17-Robust-SIAM], [@GJLL], [@JL11]. There are various duality results for robust convex problems (see [@Bot], [@CLLLY], [@DGV-AMO], [@DGV19a], [@DGLV17-Robust], [@FLY15], [@DMVV17-Robust-SIAM], [@JL2], [@LJL11] and references therein), and also for robust vector optimization/multi-objective problems (see, e.g., [@CKY19], [@DGLM17-Optim], [@DL18ACTA], [@GJLb]). In the mentioned papers, results for robust strong duality are established for classes of problems from linear to convex, non-convex, and vector problems, under various constraint qualification conditions (or qualification conditions). In this paper we propose a way, which can be considered as a unifying approach to duality for the robust linear problems $({\rm RLIP}_c) $. Concretely, we propose a model for a slightly more general problem, namely, the robust linear problem with convex conical constraints $({\rm RP}_c)$, and establish the corresponding robust strong duality and also stable robust strong duality, i.e., robust strong duality that holds “uniformly” for all $c \in X^\ast$. Then, with different choices/ways of setting, we transfer $({\rm RLIP}_c) $ to the model $({\rm RP}_c)$, and the (stable) robust strong duality results for $({\rm RP}_c)$ apply. In this way, several forms of dual problems for $ ({\rm RLIP}_c)$ are proposed.
Necessary and sufficient conditions for stable robust strong duality of these pairs of primal-dual problems are given; some of them cover results known in the literature, while the others, to the best of the authors’ knowledge, are new. We point out also that, even in the case of the absence of uncertainty, i.e., in the case where $ \mathcal{U}_t $ is a singleton for each $t \in T$, the results obtained still lead to new duality results for linear infinite/semi-infinite problems (see Section 6). The paper is organized as follows: In Section 2, some preliminaries and basic tools are introduced. Concretely, we introduce or quote some robust Farkas lemmas for conical constraint systems under uncertainty and some results on duality of robust linear problems with convex conical constraints. The model of the robust linear infinite problem and its seven models of robust dual problems are given in Section 3. The main results, stable robust strong duality results for $ ({\rm RLIP}_c)$, are given in Section 4, together with two more models of robust dual problems of $ ({\rm RLIP}_c)$. Here, the stable strong duality for the seven pairs of primal-dual problems is established and that for the two new pairs is mentioned. Some of these results cover or extend some in [@DGLV17-Robust], [@GJLL]. In Section 5, from the duality results in Section 4, we derive variants of stable robust Farkas lemmas for linear infinite systems with uncertainty, which cover the ones in [@DGLM17-Optim], [@DMVV17-Robust-SIAM] while the others are new. In Section 6, as an extension/application of the approach, we get robust strong duality results for linear problems with sub-affine constraints.
We also consider the particular case of the absence of uncertainty (i.e., the case where $ \mathcal{U}_t $ is a singleton for each $t \in T$); the results obtained still lead to some new duality results for linear infinite/semi-infinite problems, and, in turn, these results also give rise to several new versions of Farkas lemmas for sub-affine systems under uncertainty, as well as some new versions of Farkas-type results for linear infinite/semi-infinite systems. Preliminaries and Basic Tools ============================= Let $X$ and $Z$ be locally convex Hausdorff topological vector spaces with topological dual spaces $X^{\ast }$ and $Z^{\ast }$, respectively. The only topology considered on dual spaces is the weak\*-topology. Let $S$ be a non-empty closed and convex cone in $Z$. The positive dual cone $S^+$ of $S$ is $S^+:= \{ z^* \in Z^\ast: \la z^*, s\ra \geq 0, \forall s \in S\}$. Let further $\Gamma (X)$ be the set of all proper, convex and lower semi-continuous (briefly, lsc) functions on $X$. Denote by $\mathcal{L}(X,Z)$ the space of all continuous linear mappings from $X$ to $Z$, and set $ \overline{\mathbb{R}}:=\mathbb{R}\cup\{\pm\infty\}$, $\mathbb{R}_{\infty}:=\mathbb{R}\cup\{+\infty\}$. Notations and preliminaries ------------------------- We now give some notations which will be used in the sequel.
For $f\colon X\to \overline{\mathbb{R}}$, the domain and the epigraph of $f$ are defined respectively by $$\begin{aligned} \operatorname{dom}f&:=\{x\in X:\ f(x)\neq +\infty \},\\ \operatorname{epi}f&:=\{(x,r)\in X\times \mathbb{R}: f(x)\leq r\},\end{aligned}$$while its conjugate function $f^\ast\colon X^{\ast}\rightarrow \overline{\mathbb{R}}$  is $$f^{\ast }\left( x^{\ast }\right) :=\sup_{x\in X}\left[ \left\langle x^{\ast },x\right\rangle -f\left( x\right) \right] ,\quad \forall x^{\ast }\in X^{\ast }.$$ Let $ \leq_{S}$ be the ordering on $Z$ induced by the cone $S$, i.e., $$\label{order} z_{1}\leq _{S}z_{2}\ \text{ if and only if }\ z_{2}-z_{1}\in S.$$We enlarge $Z$ by attaching a greatest element $+\infty _{Z}$ and a smallest element $-\infty _{Z}$ which do not belong to $Z$, with the convention $-\infty _{Z} \leq _{S} z \leq_{S} +\infty_{Z}$ for all $z \in Z$. Denote $Z^{\bullet }:=Z\cup \{-\infty _{Z},\ +\infty _{Z}\}$. Let $G\colon X\to Z^\bullet$. We define $$\begin{aligned} \operatorname{dom}G&:=\{x\in X:\ G(x)\neq +\infty_Z \},\\ \operatorname{epi}_S G&:=\{(x,z)\in X\times Z: z \in G(x) + S \}.\end{aligned}$$If $-\infty_Z\notin G(X)$ and $\operatorname{dom}G\ne\emptyset$, then we say that $G$ is a [*proper mapping*]{}. We say that $G$ is $S$-*convex* (resp., $S$-*epi closed*) if $\operatorname{epi}_{S}G $ is a convex subset (resp., a closed subset) of $X\times Z$. The mapping $G $ is called *positively $S$-upper semicontinuous*[^5] (*positively $S$-usc*, briefly) if $\lambda G$ is upper semicontinuous (in short, usc) for all $\lambda\in S^{+}$ (see [@Bol98], [@Bol01]).
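As a quick computational illustration of the conjugate just defined (not part of the original development; function names and data are invented), one can approximate $f^\ast$ by a grid search in one dimension. For $f(x)=x^2/2$ the conjugate is again $x^\ast\mapsto (x^\ast)^2/2$, which the grid recovers exactly whenever the maximizer $x=x^\ast$ lies on the grid.

```python
# Grid-search sketch of the Fenchel conjugate f*(x*) = sup_x [<x*, x> - f(x)]
# in one dimension (X = X* = R).  For f(x) = x^2/2 the supremum is attained
# at x = x*, giving f*(x*) = (x*)^2 / 2.  Toy data, for illustration only.

def conjugate(f, x_star, grid):
    """Approximate f*(x_star) by maximizing x_star*x - f(x) over a grid."""
    return max(x_star * x - f(x) for x in grid)

f = lambda x: 0.5 * x * x
grid = [i * 0.5 for i in range(-20, 21)]      # {-10.0, -9.5, ..., 10.0}
print(conjugate(f, 2.0, grid))                 # 0.5 * 2.0**2 = 2.0
```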
Let $T$ be an index (possibly infinite) set and let $\mathbb{R}^{T}$ be the product space endowed with the product topology and its dual space, $\mathbb{R}^{(T)}$, the so-called *space of generalized finite sequences* $\lambda =(\lambda _{t})_{t\in T}$ such that $\lambda _{t}\in \mathbb{R},$ for each $t\in T,$ and with only finitely many $\lambda _{t}$ different from zero. The supporting set of $\lambda \in \mathbb{R}^{(T)}$ is $\operatorname{supp}\lambda :=\{t\in T \, :\ \lambda _{t}\neq 0\}.$ For a pair $\left( \lambda ,v\right) \in \mathbb{R}^{(T)}\times \mathbb{R }^{T}$, the dual product is defined by $$\left\langle \lambda ,v\right\rangle :=\left\{ \begin{array}{ll} \sum\limits_{t\in \operatorname{supp}\lambda }\lambda _{t}v_{t} & \text{if }\lambda \neq 0_{T}, \\ \ \ 0 & \text{otherwise}. \end{array}\right. \text{ }$$ The positive cones in $\mathbb{ R}^{T}$ and in $\mathbb{R}^{(T)}$ are denoted by $\mathbb{R}_+^T$ and $\mathbb{R}_{+}^{(T)}$, respectively. [*$S^+$-Upper Semi-Continuity and Uniform $S^+$-Concavity*]{}. Let $U \ne \emptyset$ be a subset of some topological space. We recall the notions of $S^+$-upper semi-continuity, $S^+$-convexity, and uniform $S^+$-convexity introduced recently in [@DL18ACTA]. \[def\_1\] [@DL18ACTA] Let $H\colon U\to Z\cup\{+\infty_Z\}$. 
We say that: $\bullet$ $H$ is [*$S^+$-convex*]{} if for all $(u_i,\lambda_i)\in U\times S^+ \; (i=1,2)$ there is $(\bar u,\bar \lambda)\in U\times S^+$ such that $(\lambda_1 H)(u_1)+(\lambda_2 H)(u_2) \ge (\bar \lambda H)(\bar u)$, $\bullet$ $H$ is [*$S^+$-upper semi-continuous*]{} (briefly, [*$S^+$-usc*]{}) if for any net $(\lambda_\alpha, u_\alpha,r_\alpha)_{\alpha\in D}\subset S^+\times U\times \RR$ and any $(\bar \lambda,\bar u,\bar r)\in S^+\times U\times \RR$, satisfying $$\begin{cases} (\lambda_\alpha H)( {u_\alpha}) \ge r_\alpha,\; \forall \alpha \in D\\ \lambda_\alpha \overset{*}{\rightharpoonup} \bar \lambda,\; u_\alpha\to \bar u,\; r_\alpha\to\bar r \end{cases} \Longrightarrow\; (\bar \lambda H)(\bar u)\ge \bar r,$$ where the symbol “$\overset{*}{\rightharpoonup}$" means the convergence with respect to the weak$^*$-topology. $\bullet$ $H$ is [*$S^+$-concave*]{} ([*$S^+$-lsc*]{}, resp.) if $-H$ is [*$S^+$-convex*]{} ([*$S^+$-usc*]{}, respectively). \[def\_2\] [@DL18ACTA] For the collection $(H_j)_{j \in I}$ with $H_j\colon U\to Z\cup\{+\infty_Z\}$, we say that $(H_j)_{j\in I}$ is [*uniformly $S^+$-convex*]{} if for all $(u_i,\lambda_i)\in U\times S^+, \; i=1,2$, there is $(\bar u,\bar \lambda)\in U\times S^+$ such that $(\lambda_1 H_j)(u_1)+ (\lambda_2 H_j)(u_2) \ge (\bar \lambda H_j)(\bar u)$ for all $j\in I$. The collection $(H_j)_{j\in I}$ is said to be [*uniformly $S^+$-concave*]{} if $(-H_j)_{j\in I}$ is uniformly $S^+$-convex. \[rem\_2.1eeee\] It is worth observing that when $H\colon U \to Z\cup \{+\infty_Z\}$ is [$S^+$-usc]{} then $H$ is positively $S$-usc [@Bot10]. Moreover, in the case where $Z=\mathbb{R}$ and $S=\mathbb{R}_+$ (and hence, $S^+=\mathbb{R}_+$), the following assertions hold[^6]: [(i)]{} If $H\colon U\to \mathbb{R}_{\infty}$ is a convex function then $H$ is $\mathbb{R}_+$-convex, [(ii)]{} If $H_j\colon U \to \mathbb{R}_\infty$ is convex for all $j\in I$ then $(H_j)_{j\in I}$ is [uniformly $\mathbb{R}_+$-convex]{}.
[(iii)]{} $H\colon U \to \mathbb{R}_\infty$ is [$\RR_+$-usc]{} if and only if it is usc. For details, see [@DL18ACTA]. Conical Constrained Systems with Uncertainty -------------------------------------------- Let $\U$ be an uncertainty parameter set and let $(G_u)_{u\in \U}$, with $G_u\colon X\to Z\cup\{+\infty_Z\}$, be a family of proper, $S$-convex and $S$-epi closed mappings. We are concerned with the robust cone constraint system: $$\label{eq_3.1bbb} G_u(x)\in-S,\quad \forall u\in \U.$$ Denote $$\label{Fu} \mathcal{F}_u:=\{x\in X: G_u(x)\in-S\},\ u\in \U,$$ and $ \mathcal{F}$ the *solution set* of , i.e., $$\label{F} \mathcal{F}:=\{x\in X: G_u(x)\in -S,\; \forall u\in \U\}.$$ It is clear that $\mathcal{F} = \bigcap \limits_{u \in \U} \mathcal{F}_u$. Assume that $\mathcal{F}\ne\emptyset$. Corresponding to the system , let us consider the set (also called: [*robust moment cone*]{} corresponding to the system ) $$\label{M0} \mathcal{M}_0:=\bigcup_{(u,\lambda)\in \U\times S^+} \operatorname{epi}(\lambda G_u)^\ast.$$ It is easy to check (generalizing [@JL2 Proposition 2.2]) that $\mathcal{M}_0$ is a cone in $X^\ast\times \mathbb{R}$. Moreover, $\mathcal{M}_0$ (and also $\mathcal{M}_1$ in ) leads to the cone $M$ in [@GJLL]. We now introduce a version of a robust Farkas-type result involving the system , together with some of its consequences, which will be used as a key tool for the results of this section. \[Farkas-type result involving robust system \] \[lem\_FL\] For all $(x^\ast, r)\in X^\ast\times\mathbb{R}$, the next statements are equivalent: [(i)]{} $G_u(x)\in -S,\ \forall u\in \U \; \Longrightarrow\; \la x^\ast, x\ra \ge r$, [(ii)]{} $(x^\ast, r)\in -\operatorname{\overline{\operatorname{co}}}\mathcal{M}_0$.
It is easy to see that (i) is equivalent to $ -r\ge -\la x^\ast, x\ra$ for all $x\in \mathcal{F}$, which also means $(x^\ast, r)\in -\operatorname{epi}\delta_\mathcal{F}^\ast.$ So, to prove the equivalence (i)$\Longleftrightarrow$(ii), it suffices to show that $\operatorname{epi}\delta_\mathcal{F}^\ast=\operatorname{\overline{\operatorname{co}}}\mathcal{M}_0$. Now, for each $u\in \U$, $\mathcal{F}_u$ is a closed and convex subset of $X$, and hence, $\delta_{\mathcal{F}_u} \in \Gamma(X)$ and so $\delta_{\mathcal{F}}=\sup_{u\in \U}\delta_{\mathcal{F}_u} \in \Gamma (X)$. By [@GNg08 Lemma 2.2], one gets $\operatorname{epi}\delta_{\mathcal{F}}^\ast=\operatorname{\overline{\operatorname{co}}}\bigcup_{u\in \U}\operatorname{epi}\delta_{\mathcal{F}_u}^\ast.$ On the other hand, for each $u\in \U$, one has $\operatorname{epi}\delta_{\mathcal{F}_u}^\ast=\overline{\bigcup_{\lambda\in S^+} \operatorname{epi}(\lambda G_u)^\ast}$ (see [@DNV10]), and so, $\operatorname{epi}\delta_{\mathcal{F}}^\ast=\operatorname{\overline{\operatorname{co}}}\mathcal{M}_0$ and we are done. As a direct consequence of Proposition \[lem\_FL\], we get \[lem\_FL2\] Let $(A_u)_{u\in \U}\subset \L(X,Z)$ and $(\omega_u)_{u\in \U}\subset Z$. If $(x^\ast, s)\in X^\ast\times\mathbb{R}$ then the next statements are equivalent: [(i)]{} $ A_u(x) -\omega_u\in -S,\ \forall u\in \U \; \Longrightarrow\; \la x^\ast, x\ra \ge s$, [(ii)]{} $(x^\ast, s)\in -\operatorname{\overline{\operatorname{co}}}\Big[\big\{(\lambda A_u, \la \lambda, \omega_u\ra) : (u,\lambda)\in \U\times S^+\big\}+\mathbb{R}_+(0_{X^\ast},1)\Big]$. Corollary \[lem\_FL2\] covers [@JL11 Theorem 3.1], [@DGLM17-Optim Theorem 4.2(iii)], [@DMVV17-Robust-SIAM Theorem 5.5], and, in some sense, it extends the robust semi-infinite Farkas lemmas in [@GJLL]. In the case where $Z=\mathbb{R}$, $\U=T$, and $X = \mathbb{R}^n$, Corollary \[lem\_FL2\] extends [@GL98 Corollary 3.1.2]. Let $\emptyset \ne B \subset X^*$ and $\beta \in \R$.
The function $\sigma_B (\cdot) - \beta$, where $\sigma_{B}(x):=\sup\{\la x^\ast, x\ra: x^\ast\in B\}$, is known as a sub-affine function [@DGV19a]. We next give a version of a robust Farkas lemma for a system involving sub-affine functions. \[lem\_FL3\] Let $( \A_t) _{t\in T}$ be a family of nonempty, $w^{\ast }$-closed convex subsets of $X^{\ast }$ and $(b_t)_{t\in T}\subset \mathbb{R}$. Then, for each $(x^\ast, r)\in X^\ast\times\mathbb{R}$, the next statements are equivalent: [(i)]{} $\sigma_{{\A}_t}( x)\le b_t,\ \forall t\in T\; \Longrightarrow\; \la x^\ast, x\ra \ge r$, [(ii)]{} $(x^\ast, r)\in -\operatorname{\overline{\operatorname{co}}}\operatorname{cone}\Big[\bigcup_{t\in T}(\A_t\times\{ b_t\}) \cup \{(0_{X^*},1)\}\Big]$. Take $Z=\mathbb{R}$, $S=\mathbb{R}_+$ (and hence, $Z^\ast=\mathbb{R}$ and $S^+=\mathbb{R}_+$), $\U=T$, and $G_t:=\sigma_{\A_t}-b_t$ for each $t\in T$. Then, for any $(t,\lambda)\in T\times\mathbb{R}_+$, one has $$\begin{aligned} \operatorname{epi}(\lambda G_t)^\ast&=\lambda\operatorname{epi}( G_t)^\ast=\lambda\operatorname{epi}(\sigma_{\A_t}-b_t)^\ast=\lambda (\A_t\times\{b_t\}) +\mathbb{R}_+(0_{X^\ast},1), \\ \mathcal{M}_0 &=\bigcup_{t\in T}\operatorname{co}\operatorname{cone}\Big[(\A_t\times \{b_t\})\cup \{(0_{X^\ast},1)\}\Big],\end{aligned}$$ and so, $ \operatorname{\overline{\operatorname{co}}}\mathcal{M}_0=\operatorname{\overline{\operatorname{co}}}\operatorname{cone}\Big[\bigcup_{t\in T}(\A_t\times \{b_t\})\cup \{(0_{X^\ast},1)\}\Big].$ The conclusion now follows from Proposition \[lem\_FL\]. Duality of Robust Linear Problems with Convex Conical Constraints ----------------------------------------------------------------- Let $c\in X^\ast$.
We consider the pair of primal-dual robust problems: $$\begin{aligned} ({\rm RP}_c)\qquad& {\inf }\; \la c,x\ra\\ \textrm{subject to}\ \ & x\in X,\; G_u(x)\in -S, \ \forall u\in \U,\\ ({\rm RD}_c)\qquad& \sup_{(u,\lambda)\in \U\times S^+} \inf_{x\in X} (\la c,x\ra+\lambda G_u(x)).\end{aligned}$$ Let $\F_u$ and $\F$ be as in and . Let further $\bar x\in \F$ and $(\bar u,\bar \lambda)\in \U\times S^+$. As $\bar x\in \F$, $G_u(\bar x) \in-S$ for all $u\in \U$, and in particular, $G_{\bar u}(\bar x) \in-S$. Moreover, as $\bar \lambda\in S^+$, one has $\bar \lambda G_{\bar u} (\bar x)\leq 0$. Therefore, $ \la c, \bar x \ra + (\bar \lambda G_{\bar u}) (\bar x) \le \la c,\ \bar x\ra , $ and so, $$\begin{aligned} \inf\limits_{x \in X} \Big[ \la c,x\ra +(\bar \lambda G_{\bar u}) (x)\Big] &\le& \la c,\bar x\ra + (\bar \lambda G_{\bar u}) (\bar x) \leq \la c,\bar x\ra , \end{aligned}$$ leading to $$\begin{aligned} \inf\limits_{x \in X} \Big[ \la c,x\ra +(\bar \lambda G_{\bar u}) (x)\Big] &\le& \inf\limits_{\bar x \in \F} \la c,\bar x\ra. \end{aligned}$$ Consequently, $$\begin{aligned} \label{eq_2.2ff} \sup\limits_{(\bar u, \bar \lambda) \in \U \times S^+} \inf\limits_{x \in X} \Big[ \la c,x\ra +(\bar \lambda G_{\bar u}) (x)\Big] \leq \inf\limits_{\bar x \in \F} \la c,\bar x\ra, \end{aligned}$$ which means that the [*weak duality*]{} holds for the pair $({\rm RP}_c) - ({\rm RD}_c)$. \[robust-dual\] We say that - [*the robust strong duality holds for the pair $({\rm RP}_c)-({\rm RD}_c)$*]{} if\ $\inf ({\rm RP}_c)=\max ({\rm RD}_c)$, - [*the stable robust strong duality holds for the pair $({\rm RP}_c)-({\rm RD}_c)$*]{} if\ $\inf ({\rm RP}_c)=\max ({\rm RD}_c)$ for all $c\in X^*$. The next theorem, Theorem \[thm\_StrD\], can be derived from [@DMVV17-Robust-SIAM Theorem 6.3]. However, for the sake of convenience we will give here a short and direct proof. \[thm\_StrD\] Assume that $r_0:= \inf ({\rm RP}_c) > - \infty$.
Then the following statements are equivalent: $\rm(a)$ $\mathcal{M}_0 $ is a closed and convex subset of $X^*\times \mathbb{R}$, $\rm(b)$ The stable robust strong duality holds for the pair [$({\rm RP}_c)-({\rm RD}_c)$]{}, i.e., $$\inf ({\rm RP}_c)=\max ({\rm RD}_c),\quad\forall c\in X^\ast.$$ Take arbitrarily $c\in X^\ast$. Observe firstly that $$\begin{aligned} \sup ({\rm RD}_c)&=\sup_{(u,\lambda)\in \U\times S^+} \inf\limits_{x \in X} \Big\{ \la c, x\ra + (\lambda G_u)(x)\Big\} \notag\\ &=\sup_{(u,\lambda)\in \U\times S^+} \Big[ - \sup \limits_{x \in X} \Big\{ \la -c, x\ra - (\lambda G_u)(x)\Big\}\Big] =\sup_{(u,\lambda)\in \U\times S^+} [-(\lambda G_u)^\ast(-c)] \notag\\ &=\sup\Big\{r: (c,r)\in -\!\!\! \bigcup\limits_{ (u,\lambda)\in\U\times S^+} \!\!\! \operatorname{gph}(\lambda G_u)^\ast \Big\} \notag\\ &=\sup\Big\{r: (c,r)\in -\bigcup\limits_{ (u,\lambda)\in\U\times S^+}\operatorname{gph}(\lambda G_u)^\ast -\mathbb{R}_+(0_{X^\ast},1)\Big\} \notag\\ &= \sup\Big\{r: (c,r)\in -\bigcup\limits_{ (u,\lambda)\in\U\times S^+}\Big[ \operatorname{gph}(\lambda G_u)^\ast + \mathbb{R}_+(0_{X^\ast},1)\Big]\Big\} \notag\\ &= \sup \Big\{r: (c,r)\in - \!\!\! \bigcup\limits_{ (u,\lambda)\in\U\times S^+} \!\!\!\! \operatorname{epi}(\lambda G_u)^\ast \Big\} =\sup \Big\{r: (c,r)\in -\mathcal{M}_0\Big\}. \label{eq_3.3bbb}\end{aligned}$$ Observe also that $ r_0 < + \infty$ as $({\rm RP}_c)$ is feasible (i.e., its feasible set $\F$ is non-empty) and so, we can assume that $r_0 \in \RR$. $\bullet$ \[(a)$\Longrightarrow$(b)\] Assume that (a) holds.
As $r_0= \inf ({\rm RP}_c)$, one has $$\label{eq_3.4bbb} G_u(x)\in -S,\;\forall u\in \U\; \Longrightarrow\; \la c,x\ra \ge r_0.$$ As (a) holds, it follows from Proposition \[lem\_FL\] that $$(c,r_0)\in -\operatorname{\overline{\operatorname{co}}}\mathcal{M}_0= - \mathcal{M}_0 =\ - \bigcup_{(u,\lambda)\in \U\times S^+} \operatorname{epi}(\lambda G_u)^\ast,$$ and so, by and the weak duality , we get $$r_0 \leq \sup \{ r: (c, r) \in -\mathcal{M}_0\} = \sup ({\rm RD}_c) \leq r_0 = \inf (\rm{RP}_c),$$ yielding $r_0 = \sup \{ r: (c, r) \in -\mathcal{M}_0\} = \sup ({\rm RD}_c) = \inf (\rm{RP}_c)$. As $r_0 \in \{ r: (c, r) \in -\mathcal{M}_0\} $, there exists $(\bar u, \bar \lambda) \in \U \times S^+$ satisfying (by ) $$r_0 = -(\bar \lambda G_{\bar u})^\ast(-c) = \max ({\rm RD}_c) = \inf (\rm{RP}_c),$$ which means that (b) holds. $\bullet$ \[(b)$\Longrightarrow $(a)\] Assume that (b) holds. To prove (a), it suffices to show that $\operatorname{\overline{\operatorname{co}}}\mathcal{M}_0\subset \mathcal{M}_0$. Take $(c,r)\in -\operatorname{\overline{\operatorname{co}}}\mathcal{M}_0$. It follows from Proposition \[lem\_FL\] that holds with $r_0=r$, which, taking (b) and into account, entails $$r\le r_0= \inf ({\rm RP}_c)=\max ({\rm RD}_c) = \max_{(u,\lambda)\in \U\times S^+} [-(\lambda G_u)^\ast(-c)].$$ This means that there exists $(\bar u, \bar \lambda) \in \U \times S^+$ such that $ (-c, -r_0 ) \in \operatorname{epi}(\bar \lambda G_{\bar u})^\ast$. Now, as $r\le r_0 $, one has $(-c,- r )\in \operatorname{epi}(\bar \lambda G_{\bar u})^\ast $, and hence, $(c, r) \in -\mathcal{M}_0$. We have proved that $\operatorname{\overline{\operatorname{co}}}\mathcal{M}_0\subset \mathcal{M}_0$ and the proof is complete. We now provide some sufficient conditions for the convexity and closedness of the robust moment cone $\mathcal{M}_0$. Assume from now to the end of this section that $\U$ is a subset of some topological vector space. The next result is a consequence of [@DL18ACTA Propositions 5.1, 5.2].
\[prop\_conclo\] \[pro\_suffcond\] Assume that $\U$ is a subset of some topological vector space and $\rm{ int }\, S \neq \emptyset$. Then [(i)]{} If the collection $(u\mapsto G_u(x))_{x\in X}$ is uniformly $S^+$-concave, then $\mathcal{M}_0$ is convex. [(ii)]{} If $\U$ is a compact set, $Z$ is a normed space, $u\mapsto G_u(x)$ is $S^+$-usc for all $x\in X$, and the following Slater-type condition holds: $$(C_0) \ \ \ \ \ \ \ \qquad \qquad\forall u\in \U,\; \exists x_u\in X: G_u(x_u)\in-\operatorname{int}S, \ \ \ \ \ \ \ \qquad \ \ \ \ \ \ \ \qquad$$ then $\mathcal{M}_{0}$ is closed. \[rem\_2.1\] If $\U$ is a singleton, then it is easy to see that the assumption of Proposition \[pro\_suffcond\](i) automatically holds, and consequently, $\mathcal{M}_0$ is convex. Moreover, if the Slater condition $(C_0)$ holds then $\mathcal{M}_0$ is closed. \[rem24-nw\] It is worth noticing that Proposition \[prop\_conclo\] and the next Corollary \[cor\_3.1bis\] constitute generalizations of Proposition 2 and Corollary 1 in [@GJLL], respectively. Propositions \[convexity-N\]-\[closedness-N\] on the sufficient conditions for the convexity and closedness of moment cones follow the same line of generalization, which shows the role played by the Slater constraint qualification condition. \[Sufficient condition for stable robust strong duality of $({\rm RP}_c)$\]\[cor\_3.1bis\] Assume that the following conditions hold: [(i)]{} $\U$ is a compact set and $Z$ is a normed space, [(ii)]{} $(u\mapsto G_u(x))_{x\in X}$ is uniformly $S^+$-concave, [(iii)]{} $u\mapsto G_u(x)$ is $S^+$-usc for all $x\in X$, [(iv)]{} The Slater-type condition $(C_0)$ holds. Then, the stable robust strong duality holds for the pair [$({\rm RP}_c)-({\rm RD}_c)$]{}. The conclusion follows from Theorem \[thm\_StrD\] and Proposition \[pro\_suffcond\].
We now consider a special case of $(\rm{RP}_c)$ where, for each $u\in \U$, $G_u$ is an affine mapping, say $\bar G_u$, defined as $$\bar G_u(.):=A_u(.)-\omega_u,\quad \forall u\in \U,$$ where $(A_u)_{u\in\U}\subset \L(X,Z)$ and $(\omega_u)_{u\in \U}\subset Z$. Let $c\in X^\ast$. The problem $(\rm{RP}_c)$ becomes[^7] $$\begin{aligned} \label{problem_RLP} ({\rm RLP}_c)\qquad& {\inf }\; \la c,x\ra\\ \textrm{subject to } &A_u(x)\in \omega_u-S, \ \forall u\in \U, x \in X.\notag\end{aligned}$$ For each $(u,\lambda)\in \U\times S^+$, by a simple calculation, one gets $$\operatorname{epi}(\lambda \bar G_u)^\ast= (\lambda A_u, \la\lambda, \omega_u\ra)+\mathbb{R}_+(0_{X^\ast},1).$$ Here we understand that $\lambda A_u $ is an element of $X^\ast$ with $(\lambda A_u)(x) = \la \lambda , A_u(x) \ra $, for all $ x \in X$. So, the $\mathcal{M}_0$ defined in collapses to $$\label{M_1} \mathcal{M}_1:= \{(\lambda A_u, \la \lambda, \omega_u\ra),\; (u,\lambda)\in\U\times S^+\} +\mathbb{R}_+ (0_{X^\ast}, 1),$$ and one has, $$\inf_{x\in X} \Big\{\la c,x\ra +\lambda \bar G_u(x)\Big\}=\begin{cases} -\la \lambda,\omega_u\ra &\textrm{if } c=-\lambda A_u\\ -\infty &\textrm{otherwise.} \end{cases}$$ The dual problem of $(\rm{RLP}_c)$, specialized from $({\rm RD}_c)$, turns out to be $$\begin{aligned} ({\rm RLD}_c)\qquad& {\sup }\; -\la \lambda,\omega_u\ra \\ \textrm{subject to} \ \ &(u,\lambda)\in \U\times S^+,\; c=-\lambda A_u.\end{aligned}$$ \[thm\_lStrD\] The following statements are equivalent: [$\rm(a)$]{} $\mathcal{M}_1 $ is a closed and convex subset of $X^*\times \mathbb{R}$. [$\rm(b)$]{} The stable robust strong duality holds for the pair [$({\rm RLP}_c)-({\rm RLD}_c)$]{}, i.e., $$\inf ({\rm RLP}_c) = \max ({\rm RLD}_c), \ \forall c \in X^\ast.$$ The previous corollary is a direct consequence of Theorem \[thm\_StrD\]. Moreover, applying Corollary \[cor\_3.1bis\] with $\bar G_u(.)=A_u(.)-\omega_u$, one also gets a sufficient condition for stable robust strong duality for $({\rm RLP}_c)$.
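A one-dimensional toy instance of the pair $({\rm RLP}_c)-({\rm RLD}_c)$ can be checked numerically (a sketch, not from the paper; all data invented): with $X=Z=\mathbb{R}$, $S=\mathbb{R}_+$, $A_u(x)=a_u x$ and $\omega_u=b_u$, the dual constraint $c=-\lambda A_u$ forces $\lambda=-c/a_u$, which is admissible only when $\lambda\ge 0$.

```python
# Toy check of (RLP_c)-(RLD_c) with X = Z = R, S = R_+, A_u(x) = a_u * x,
# omega_u = b_u, and a finite uncertainty set.  The dual objective at an
# admissible (u, lambda) is -lambda * b_u.  In this instance the two values
# coincide, consistent with strong duality.  Invented data.

U = [(1.0, 1.0), (2.0, 1.0)]     # (a_u, b_u): constraints x <= 1 and x <= 1/2
c = -1.0

# primal: inf c*x over {x : a*x <= b for all (a, b) in U}; since c < 0 the
# infimum is attained at the largest feasible point x = min(b/a)
primal = c * min(b / a for a, b in U)

# dual: sup of -lambda*b over u, with lambda = -c/a kept only when lambda >= 0
dual = max(-(-c / a) * b for a, b in U if -c / a >= 0)

print(primal, dual)               # weak duality: dual <= primal
```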
Robust Linear Infinite Problem and Its Robust Duals ==================================================== We retain the notations in Section 2 and let $c\in X^\ast$. Statement of Robust Linear Infinite Problems and Their Robust Duals ------------------------------------------------------------------- Consider the [*linear infinite programming*]{} with [*uncertain input-parameters*]{} of the form: $$\begin{aligned} ({\rm ULIP}_c)\quad &\inf\; \langle c,x\rangle \notag\\ \textrm{subject to }\ \ & \langle a_t,x \rangle \le b_t ,\;\forall t\in T, x \in X, \end{aligned}$$ where $(a_t , b_t)$ belongs to an uncertainty set $\U_t$ with $\emptyset\ne\U_t\subset X^*\times \mathbb{R}$ for all $t\in T$. The [*robust counterpart*]{} of $({\rm ULIP}_c)$ is $$\begin{aligned} ({\rm RLIP}_c)\quad &\inf\; \langle c,x\rangle \notag\\ \textrm{subject to }\ \ & \langle x^\ast,x \rangle \le r,\;\forall (x^\ast,r)\in\U_t,\; \forall t\in T, x \in X.\end{aligned}$$ Assume that the problem $({\rm RLIP}_c)$ is feasible, i.e., $$\F:=\{x\in X: \langle x^*,x \rangle \le r ,\;\forall (x^*,r)\in\U_t,\; \forall t\in T\}\ne \emptyset,$$ and set $$\mathscr{U}:=\prod_{t\in T}\U_t\qquad \textrm{and}\quad \mathscr{V}:=\bigcup_{t\in T}\U_t.$$ By convention, we write $v = (v^1, v^2) \in X^\ast\times \mathbb{R}$ and $u = (u_t)_{t \in T} \in \mathscr{U}$, with $u_t = (u^1_t, u^2_t) \in \U_t$. For brevity, we also write $u=(u^1_t, u^2_t)_{t\in T} \in \mathscr{U}$ instead of $u=((u^1_t, u^2_t))_{t\in T} \in \mathscr{U}$.
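The distinction between $\mathscr{U}=\prod_{t\in T}\U_t$ (selections $u=(u_t)_{t\in T}$) and $\mathscr{V}=\bigcup_{t\in T}\U_t$ (individual realizations $v$) can be sketched on a finite toy instance; all names and data below are invented for illustration.

```python
# Sketch of the index objects above: each U_t is a finite set of
# realizations (x*, r); script_U = prod_t U_t collects the selections
# u = (u_t)_{t in T}, while script_V = union_t U_t collects all
# individual realizations v = (v1, v2).  Toy data with T = {0, 1}.
from itertools import product

U = {0: [((1.0,), 1.0), ((2.0,), 1.0)],   # U_0 has two realizations
     1: [((0.5,), 2.0)]}                   # U_1 has one realization

script_U = list(product(*U.values()))      # selections u = (u_0, u_1)
script_V = [v for reals in U.values() for v in reals]

print(len(script_U), len(script_V))        # 2 selections, 3 realizations
```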
The robust problem of the model (RLIP$_c$) was considered in several earlier works, such as [@DGLM17-Optim] and [@GJLL] (where $X = \R^n$, i.e., a robust semi-infinite linear problem), [@LJL11] (where $X$ is a Banach space, $T$ is finite, the objective function is convex, and, for each $t \in T$, $\U_t$ has a special form; see problem (SP), page 2335), and [@DGLV17-Robust] (with slightly more general linear inequality constraints; concretely, for all $t\in T$, $(x^\ast,r)$ is a function defined on $\U_t$ instead of $(x^\ast, r)\in \U_t$). We now propose variants of robust dual problems for $({\rm RLIP}_c)$: $$\begin{aligned} ({\rm RLID}_c^1)\quad &\sup\; [-\lambda v^2]\notag\\ \textrm{s.t.}\ \ & v\in\mathscr{V},\;\lambda\ge 0,\; c=-\lambda v^1,\notag\\ ({\rm RLID}_c^2)\quad &\sup\; \left [-\sum_{u\in \operatorname{supp}\lambda}\lambda_u u^2_t\right]\notag\\ \textrm{s.t.}\ \ & t\in T,\; \lambda \in \mathbb{R}^{(\mathscr{U})}_+,\; c=-\sum_{u\in \operatorname{supp}\lambda}\lambda_u u^1_t, \notag\\ ({\rm RLID}_c^3)\quad &\sup\; \left [-\sum_{t\in \operatorname{supp}\lambda}\lambda_t u^2_t\right]\notag\\ \textrm{s.t.}\ \ & u\in\mathscr{U},\;\lambda \in \mathbb{R}^{(T)}_+,\; c=-\sum_{t\in \operatorname{supp}\lambda}\lambda_t u^1_t,\notag\\ ({\rm RLID}_c^4)\quad &\sup_{\lambda\ge 0,\; t\in T} \inf_{x\in X}\sup_{v\in\U_t}[\la c+\lambda v^1,x\ra -\lambda v^2],\notag\\ ({\rm RLID}_c^5)\quad &\sup_{\lambda\ge 0,\; u\in \mathscr{U}} \inf_{x\in X}\sup_{t\in T}[\la c+\lambda u^1_t,x\ra -\lambda u^2_t],\notag\\ ({\rm RLID}_c^6)\quad &\sup\; \left[-\sum_{v\in \operatorname{supp}\lambda}\lambda_v v^2\right]\notag\\ \textrm{s.t. }\ \ & \lambda \in \mathbb{R}^{(\mathscr{V})}_+,\; c=-\sum_{v \in \operatorname{supp}\lambda}\lambda_{v} v^1,\notag\\ ({\rm RLID}_c^7)\quad &\sup_{\lambda\ge 0} \inf_{x\in X}\sup_{v\in \mathscr{V}}[\la c+\lambda v^1,x\ra -\lambda v^2].
\notag $$ It is worth observing firstly that $({\rm RLID}_c^3)$ and $({\rm RLID}_c^6)$ are (ODP) and (DRSP) in [@GJLL], respectively. These two classes are also special cases of (OLD) and (RLD) in [@JL2] (where the constraint functions are affine) and of (RLD$^O$) and (RLD$^C$) in [@DGLV17-Robust], respectively. The [*“robust strong duality (and also, stable robust strong duality) holds for the pair $({\rm RLIP}_c)-({\rm RLID}^i_c)$"*]{}, $i = 1, 2, \ldots, 7$, is understood as in Definition \[robust-dual\]. Note that robust strong duality for the pair $({\rm RLIP}_c)-({\rm RLID}^3_c)$ is known as the “primal worst equals dual best” property, with the attainment of the dual problem [@DGLV17-Robust], [@GJLL]. Relationship Between The Values of Dual Problems and Weak Duality ----------------------------------------------------------------- In this subsection we will establish some relations between the values of the dual problems $({\rm RLID}_c^i)$ and the weak duality for each of the primal-dual pairs $({\rm RLIP}_c) - ({\rm RLID}_c^i) $, $i = 1, 2, \cdots, 7$. \[pro\_3.1ff\] One has $$\label{eq-prop31} \sup ({\rm RLID}_c^1)\le \begin{array}{c}\sup ({\rm RLID}_c^2)\\\sup ({\rm RLID}_c^3)\end{array} \le \sup ({\rm RLID}_c^6).$$ Observe that, for $k=1,2,3,6$, it holds $\sup({\rm RLID}_c^k)=\sup E_k$ with $$\begin{aligned} E_1&:=\{\alpha: v\in \mathscr{V},\; \lambda\ge 0,\; (c,\alpha)=-\lambda v\},\\ E_2&:=\{\alpha: t\in T,\; \lambda \in \mathbb{R}^{(\mathscr{U})}_+,\; (c,\alpha)=-\sum_{u\in\operatorname{supp}\lambda}\lambda_u u_t\},\\ E_3&:=\{\alpha: u\in\mathscr{U},\;\lambda \in \mathbb{R}^{(T)}_+,\; (c,\alpha)=-\sum_{t\in \operatorname{supp}\lambda}\lambda_t u_t\},\\ E_6&:=\{\alpha: \lambda \in \mathbb{R}^{(\mathscr{V})}_+,\; (c,\alpha)=-\sum_{v \in \operatorname{supp}\lambda}\lambda_{v} v\}.\label{eq_6.4eeee}\end{aligned}$$ So, to prove , it suffices to verify that $E_i\subset E_j$ for $(i,j)\in \{(1,2), (1,3), (2,6), (3,6)\}$.
$\bullet$ \[$E_1\subset E_2$\] Take $\bar \alpha\in E_1$. Then, there are $\bar v\in \mathscr{V}$ and $\bar{\lambda}\ge 0$ such that $(c,\bar \alpha)=-\bar \lambda \bar v$. Now, take $\bar t\in T$ and $\bar u\in \mathscr{U}$ such that $\bar u_{\bar t}=\bar v$. Define $\bar\lambda \in \mathbb{R}^{(\mathscr{U})}_+$ by $\bar \lambda_{\bar u}=\bar \lambda$ and $\bar \lambda_u=0$ whenever $u\ne \bar u$. Then, it is easy to see that $$-\sum_{u\in\operatorname{supp}\bar \lambda}\bar \lambda_u u_{\bar t}=-\bar \lambda_{\bar u} \bar u_{\bar t}=-\bar\lambda \bar v=(c,\bar \alpha),$$ yielding $\bar \alpha \in E_2$. $\bullet$ \[$E_1\subset E_3$\] Can be done by using the same argument as in the proof of $E_1\subset E_2$, just replace $\bar \lambda\in \mathbb{R}^{(\mathscr{U})}_+$ by $\bar \lambda\in \mathbb{R}^{(T)}_+$ such that $\bar\lambda_{\bar t}=\bar\lambda$ and $\bar\lambda_t=0$ for all $t\ne \bar t$. $\bullet$ \[$E_2\subset E_6$\] Take $\bar \alpha\in E_2$. Then, there exists $(\bar t, \bar \lambda)\in T\times \mathbb{R}^{(\mathscr{U})}_+$ satisfying $$-\sum_{u\in \operatorname{supp}\bar\lambda}\bar \lambda_u u_{\bar t} = (c,\bar\alpha).$$ Consider the set-valued mapping $\mathcal{K}\colon \mathscr{V}\rightrightarrows\mathscr{U}$ defined by $$\mathcal{K}(v):=\{u\in \operatorname{supp}\bar\lambda: u_{\bar t}=v\}.$$ It is easy to see that the decomposition $\operatorname{supp}\bar \lambda =\bigcup_{v\in \mathscr{V}} \mathcal{K}(v)$ holds. Moreover, as $\operatorname{supp}\bar\lambda$ is finite, $\operatorname{dom}\mathcal{K}$ is also finite (where $\operatorname{dom}\mathcal{K}:=\{v\in \mathscr{V}: \mathcal{K}(v)\ne \emptyset\}$).
Take $\hat\lambda\in \mathbb{R}^{(\mathscr{V})}_+$ such that $\hat \lambda_{v}=\sum_{u\in \mathcal{K}(v)}\bar \lambda_u$ if $v\in \operatorname{dom}\mathcal{K}$ and $\hat \lambda_{v}=0$ if $v\notin \operatorname{dom}\mathcal{K}.$ Then, one has $$-\sum_{v\in \operatorname{supp}\hat\lambda}\hat\lambda_v v=-\sum_{v\in \operatorname{dom}\mathcal{K}} \sum_{u\in\mathcal{K}(v)} \bar \lambda_u u_{\bar t}=-\sum_{u\in \operatorname{supp}\bar\lambda} \bar\lambda_u u_{\bar t}= (c,\bar \alpha),$$ yielding $\bar\alpha\in E_6$. $\bullet$ \[$E_3\subset E_6$\] Similar to the proof of \[$E_2\subset E_6$\]. \[pro\_3.2ff\] One has $$\label{eq-prop32} \sup ({\rm RLID}_c^1)\le \begin{array}{c}\sup ({\rm RLID}_c^4)\\\sup ({\rm RLID}_c^5)\end{array} \le \sup ({\rm RLID}_c^7).$$ It is worth noting firstly that, for any non-empty sets $Y_1$ and $ Y_2$, any function $f\colon Y_1\times Y_2\to \mathbb{R}$, it always holds $$\label{eq_6.1eee} \sup_{y_1\in Y_1}\inf_{y_2\in Y_2} f(y_1,y_2)\le \inf_{y_2\in Y_2}\sup_{y_1\in Y_1} f(y_1,y_2).$$ By a simple calculation, one easily gets $$\begin{aligned} \sup({\rm RLID}_c^1)&=\sup_{\substack{\lambda\ge 0,\; v \in \mathscr{V}}}\inf _{x\in X}(\la c+\lambda v^1,x\ra-\lambda v^2)\\ &=\sup_{\lambda\ge 0,\; t\in T}\sup_{w\in \U_t}\inf _{x\in X}(\la c+\lambda w^1,x\ra-\lambda w^2)\\ &=\sup_{\lambda\ge 0,\; u\in \mathscr{U}}\sup_{t\in T} \inf_{x\in X}[\la c+\lambda u^1_t,x\ra -\lambda u^2_t]\end{aligned}$$ (as $\mathscr{V}=\bigcup_{t\in T}\U_t=\{u_t: u\in \mathscr{U},\; t\in T\}$). So, according to , $$\begin{gathered} \sup({\rm RLID}_c^1)\le\sup_{\lambda\ge 0,\; t\in T}\inf _{x\in X}\sup_{w\in \U_t}[\la c+\lambda w^1,x\ra-\lambda w^2]= \sup({\rm RLID}_c^4),\\ \sup({\rm RLID}_c^1)\le\sup_{\lambda\ge 0,\; u\in \mathscr{U}} \inf_{x\in X}\sup_{t\in T}[\la c+\lambda u^1_t,x\ra -\lambda u^2_t]= \sup({\rm RLID}_c^5).\end{gathered}$$ The other desired inequalities in follow from in a similar way as above. 
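The minimax inequality used in the proof above, $\sup_{y_1}\inf_{y_2} f \le \inf_{y_2}\sup_{y_1} f$, can be checked on the smallest finite example where the inequality is strict (a sketch with invented data):

```python
# Finite check of sup_{y1} inf_{y2} f(y1, y2) <= inf_{y2} sup_{y1} f(y1, y2),
# here with a 2x2 table (a "matching pennies"-style payoff) where the gap
# between the two sides is strict.
f = {(0, 0): 0.0, (0, 1): 1.0,
     (1, 0): 1.0, (1, 1): 0.0}
Y1 = Y2 = (0, 1)

sup_inf = max(min(f[y1, y2] for y2 in Y2) for y1 in Y1)   # left-hand side
inf_sup = min(max(f[y1, y2] for y1 in Y1) for y2 in Y2)   # right-hand side
print(sup_inf, inf_sup)                                    # strict gap here
```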
The weak duality for the primal-dual pairs of problems $({\rm RLIP}_c) - ({\rm RLID}_c^i)$, $i = 1, 2, \cdots, 7$, will be given in the next proposition. \[weakduality1\] \[pro\_3.3gg\] One has $$\label{eq-prop33} \begin{array}{c}\sup ({\rm RLID}_c^6)\\ \sup ({\rm RLID}_c^7) \\ \end{array} \le \inf ({\rm RLIP}_c).$$ Consequently, $\sup ({\rm RLID}_c^i) \le \inf ({\rm RLIP}_c)$ for all $i =1, 2, \cdots, 7$. $\bullet$ [*Proof of $\sup ({\rm RLID}_c^6)\le \inf ({\rm RLIP}_c)$:*]{} Take $\bar\lambda \in \mathbb{R}^{(\mathscr{V})}_+$ and $\bar x\in X$ such that $c=-\sum_{v \in \operatorname{supp}\bar\lambda}\bar\lambda_{v} v^1$ and $$\label{eq_3.12gg} \la v^1,\bar x\ra - v^2\le 0,\quad \forall v\in\mathscr{V}.$$ Then it is easy to see that $-\sum_{v\in \operatorname{supp}\bar\lambda}\bar\lambda_v v^2\le - \sum_{v\in \operatorname{supp}\bar\lambda}\bar\lambda_v\la v^1,\bar x\ra=\la c,\bar x\ra.$ So, by the definition of $ ({\rm RLID}_c^6)$ one has $\sup ({\rm RLID}_c^6) \leq \la c,\bar x\ra $ for any $\bar x \in X$ satisfying , which yields $\sup ({\rm RLID}_c^6)\le \inf ({\rm RLIP}_c)$. $\bullet$ [*Proof of   $\sup ({\rm RLID}_c^7)\le \inf ({\rm RLIP}_c)$:*]{} Take $\bar \lambda\ge 0$ and $\bar x\in X$ such that holds. For all $v\in \mathscr{V}$, as holds, one has $\la c+\bar\lambda v^1, \bar x\ra -\bar\lambda v^2\le \la c,\bar x\ra$. This yields that $\sup_{v\in \mathscr{V}}[\la c+\bar\lambda v^1, \bar x\ra -\bar\lambda v^2]\le \la c,\bar x\ra$, which, in turn, amounts to $$\inf_{x\in X}\sup_{v\in \mathscr{V}}[\la c+\bar\lambda v^1, x\ra -\bar\lambda v^2]\le \la c,\bar x\ra.$$ The conclusion follows. Robust Stable Strong Duality for (RLIP$_c$) ============================================ In this section, we will establish variants of stable robust strong duality results for (RLIP$_c$). Some of them cover the ones in [@GJLL], [@JL2] and the others are new.
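The duals of Section 3 and the moment cones introduced next are built from finite-support nonnegative combinations $\sum_{t\in\operatorname{supp}\lambda}\lambda_t u_t$ with $\lambda\in\mathbb{R}^{(T)}_+$; such a combination can be sketched computationally (names and data invented, with $X=\mathbb{R}$):

```python
# Sketch of a finite-support combination sum_{t in supp(lambda)} lambda_t * u_t,
# where lambda in R^(T)_+ is stored as a dict of its nonzero entries and each
# u_t = (u1_t, u2_t) lies in X* x R, with X = R here.

def combine(lam, u):
    """Return sum over supp(lambda) of lambda_t * u_t as a pair (x*, r)."""
    x_star = sum(l * u[t][0] for t, l in lam.items() if l != 0)
    r = sum(l * u[t][1] for t, l in lam.items() if l != 0)
    return (x_star, r)

u = {0: (1.0, 1.0), 1: (2.0, 1.0), 2: (0.5, 3.0)}   # u_t = (u1_t, u2_t)
lam = {0: 1.0, 2: 2.0}                               # supp(lambda) = {0, 2}
print(combine(lam, u))                               # (1*1 + 2*0.5, 1*1 + 2*3)
```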
Let us introduce variants of [*robust moment cones*]{} of (RLIP$_c$): $$\begin{aligned} \mathcal{N}_1&:=\operatorname{cone}\mathscr{V}+ \mathbb{R}_+ (0_{X^*},1), &&\mathcal{N}_2:=\bigcup_{ t\in T}\operatorname{co}\operatorname{cone}[\U_t\cup\{ (0_{X^*},1)\}],\\ \mathcal{N}_3&:=\bigcup_{ u\in\mathscr{U}}\operatorname{co}\operatorname{cone}[u(T)\cup\{ (0_{X^*},1)\}], \ \ \ \ &&\mathcal{N}_4:=\bigcup_{t\in T}\operatorname{cone}\operatorname{\overline{\operatorname{co}}}[\U_t+ \mathbb{R}_+( 0_{X^*},1)],\\ \mathcal{N}_5&:=\bigcup_{u\in \mathscr{U}}\operatorname{cone}\operatorname{\overline{\operatorname{co}}}[u(T)+\mathbb{R}_+( 0_{X^*},1)], && \mathcal{N}_6:=\operatorname{co}\operatorname{cone}\left[\mathscr{V}\cup \{(0_{X^*},1)\} \right], \\ \mathcal{N}_7&:=\operatorname{cone}\operatorname{\overline{\operatorname{co}}}\left[\mathscr{V}+\mathbb{R}_+ (0_{X^*},1)\right], \end{aligned}$$ where $u(T):=\{u_t: t\in T\}$ for $u\in\mathscr{U}$. Observe that $\mathcal{N}_3$ is the cone $M_{\ell f}$ in [@DGLM17-Optim], while $ \mathcal{N}_3$ and $\mathcal{N}_6 $ were introduced in [@GJLL] and are known as the “robust moment cone" and the “characteristic cone", respectively. \[$1^{st}$ characterization of stable robust strong duality for (RLIP$_c$)\] \[thm\_3.2\] For $ i \in \{ 1, 2, \ldots, 5\} $, consider the following statements: $({\rm c}_i)$ $\mathcal{N}_i $ is a closed and convex subset of $X^*\times \mathbb{R}$, $({\rm d}_i)$ The stable robust strong duality holds for the pair [$({\rm RLIP}_c)-({\rm RLID}_c^i)$]{}. Then, one has $[({\rm c}_i)\Leftrightarrow ({\rm d}_i)]$ for all $i\in\{1,2,\ldots, 5\}$. $\bullet$ $[({\rm c}_1)\Leftrightarrow ({\rm d}_1)]$ Set $Z=\mathbb{R}$, $S=\mathbb{R}_+$, $\U=\mathscr{V}$, $A_v=v^1$ and $\omega_v=v^2$ for all $v = (v^1, v^2) \in\mathscr{V}$. Then, $({\rm RLIP}_c)$ takes the form of $({\rm RLP}_c)$.
In such a setting, the robust moment cone $\mathcal{M}_1$ reduces to $$\begin{aligned} \mathcal{M}_1 &=\{(\lambda A_v,\la\lambda, \omega_v\ra): v\in \mathscr{V},\; \lambda\ge 0\}+ \mathbb{R}_+ (0_{X^*},1)\\ &=\{\lambda v: v\in \mathscr{V},\; \lambda\ge 0\}+ \mathbb{R}_+ (0_{X^*},1)\\ &=\operatorname{cone}\mathscr{V}+ \mathbb{R}_+ (0_{X^*},1)=\mathcal{N}_1. \end{aligned}$$ It is easy to see that the robust dual problem $({\rm RLD}_c)$ of the resulting robust problem $(\rm {RLP}_c)$ now turns out to be exactly $({\rm RLID}_c^1)$, and so the equivalence $[({\rm c}_1)\Leftrightarrow ({\rm d}_1)]$ follows directly from Corollary \[thm\_lStrD\]. $\bullet$ $[({\rm c}_2)\Leftrightarrow ({\rm d}_2)]$ Set $Z=\mathbb{R}^{\mathscr{U}}$, $S=\mathbb{R}^{\mathscr{U}}_+$ (and consequently, $Z^*=\mathbb{R}^{(\mathscr{U})}$ and $S^+=\mathbb{R}^{(\mathscr{U})}_+$), $\U=T$, $A_t= (u^1_t)_{u\in \mathscr{U}}$ and $\omega_t= (u^2_t)_{u\in \mathscr{U}}$ for all $t\in T$. Then the problem $({\rm RLIP}_c)$ possesses the form $({\rm RLP}_c)$.
In this setting, the set $\mathcal{M}_1$ becomes $$\begin{aligned} \mathcal{M}_1&=\{(\lambda A_t,\la\lambda, \omega_t\ra): t\in T,\; \lambda\in \mathbb{R}^{(\mathscr{U})}_+\}+ \mathbb{R}_+ (0_{X^*},1)\\ &=\left\{\left(\sum_{u\in\operatorname{supp}\lambda}\lambda_u u^1_t,\sum_{u\in\operatorname{supp}\lambda}\lambda_u u^2_t\right): t\in T,\; \lambda\in \mathbb{R}^{(\mathscr{U})}_+\right\}+ \mathbb{R}_+ (0_{X^*},1)\\ &=\left\{\sum_{u\in\operatorname{supp}\lambda}\lambda_uu_t: t\in T,\; \lambda\in \mathbb{R}^{(\mathscr{U})}_+\right\}+ \mathbb{R}_+ (0_{X^*},1)\\ &=\left[\bigcup_{t\in T}\left\{\sum_{u\in\operatorname{supp}\lambda}\lambda_uu_t: \lambda\in \mathbb{R}^{(\mathscr{U})}_+\right\}\right]+ \mathbb{R}_+ (0_{X^*},1)\\ &= \left[\bigcup_{t\in T} \operatorname{co}\operatorname{cone}\U_t\right]+ \mathbb{R}_+ (0_{X^*},1)\quad \textrm{(note that $\{u_t: u\in \mathscr{U}\}=\U_t$)} \\ &= \bigcup_{t\in T} \left[\operatorname{co}\operatorname{cone}\U_t+ \mathbb{R}_+ (0_{X^*},1)\right]= \bigcup_{t\in T} \operatorname{co}\operatorname{cone}\left[ \U_t\cup \{(0_{X^*},1)\}\right]=\mathcal{N}_2, \end{aligned}$$ and the dual problem $({\rm RLD}_c)$ (in the new format) has the form $({\rm RLID}_c^2)$. The equivalence $[({\rm c}_2)\Leftrightarrow ({\rm d}_2)]$ then follows from Corollary \[thm\_lStrD\]. $\bullet$ $[({\rm c}_3)\Leftrightarrow ({\rm d}_3)]$ We transform $({\rm RLIP}_c)$ to $({\rm RLP}_c)$ by setting: $Z=\mathbb{R}^{T}$, $S=\mathbb{R}^{T}_+$ (hence, $Z^*=\mathbb{R}^{(T)}$ and $S^+=\mathbb{R}^{(T)}_+$), $\U=\mathscr{U}$, $A_u=(u^1_t)_{t\in T}$ and $\omega_u=(u^2_t)_{t\in T}$ for all $u \in \mathscr{U}$.
Then, one has $$\begin{aligned} \mathcal{M}_1&=\{(\lambda A_u,\la\lambda, \omega_u\ra): u\in \mathscr{U},\; \lambda\in \mathbb{R}^{(T)}_+\}+ \mathbb{R}_+ (0_{X^*},1)\\ &=\left\{\sum_{t\in\operatorname{supp}\lambda}\lambda_t u_t: u\in \mathscr{U},\; \lambda\in \mathbb{R}^{(T)}_+\right\}+ \mathbb{R}_+ (0_{X^*},1)\\ &= \left[\bigcup_{u \in \mathscr{U}} \operatorname{co}\operatorname{cone}u(T)\right]+ \mathbb{R}_+ (0_{X^*},1)\quad\textrm{(note that $\left\{u_t: t\in T \right\}=u(T)$)}\\ &= \bigcup_{u\in \mathscr{U}} \operatorname{co}\operatorname{cone}\left[ u(T)\cup \{(0_{X^*},1)\}\right]=\mathcal{N}_3,\end{aligned}$$ and the dual problem $({\rm RLD}_c)$ of the resulting problem $({\rm RLP}_c)$ is exactly $({\rm RLID}_c^3)$. The desired equivalence follows from Corollary \[thm\_lStrD\]. $\bullet$ $[({\rm c}_4)\Leftrightarrow ({\rm d}_4)]$ We now consider another way of transforming $({\rm RLIP}_c)$ to the form $({\rm RP}_c)$, by letting $Z=\mathbb{R}$, $S=\mathbb{R}_+$, $\U=T$, and $G_t\colon X\to \overline{\mathbb{R}}$ such that $G_t(x)= \sup_{v\in\U_t} [\la v^1, x\ra-v^2]$ for all $t\in T$. Then, one has $$\begin{aligned} \mathcal{M}_0&=\bigcup_{t\in T,\; \lambda\ge 0} \operatorname{epi}(\lambda G_t)^\ast =\bigcup_{t\in T,\; \lambda\ge 0} \lambda\operatorname{epi}(G_t)^\ast\\ &=\bigcup_{t\in T} \operatorname{cone}\operatorname{epi}(G_t)^\ast =\bigcup_{t\in T} \operatorname{cone}\operatorname{epi}\left[\sup_{ v\in\U_t} (\la v^1,.\ra-v^2)\right]^*\\ &=\bigcup_{t\in T} \operatorname{cone}\operatorname{\overline{\operatorname{co}}}\bigcup_{ v\in\U_t}\operatorname{epi}\left( \la v^1,.\ra-v^2\right)^*\end{aligned}$$ (the last equality follows from [@GNg08 Lemma 2.2]).
On the other hand, for each $t\in T$ and $v\in \U_t$, by a simple calculation one gets $\operatorname{epi}\left( \la v^1,.\ra-v^2\right)^*=v+\mathbb{R}_+ (0_{X^*},1).$ So, $$\begin{aligned} \mathcal{M}_0 =\bigcup_{t\in T} \operatorname{cone}\operatorname{\overline{\operatorname{co}}}\bigcup_{ v\in\U_t}[v+\mathbb{R}_+ (0_{X^*},1)] =\bigcup_{t\in T} \operatorname{cone}\operatorname{\overline{\operatorname{co}}}[\U_t+\mathbb{R}_+ (0_{X^*},1)] =\mathcal{N}_4.\end{aligned}$$ It is easy to see that the dual problem $({\rm RD}_c)$ of the resulting problem $({\rm RP}_c)$ is nothing else but $({\rm RLID}_c^4)$, and the equivalence $[({\rm c}_4)\Leftrightarrow ({\rm d}_4)]$ is a consequence of Theorem \[thm\_StrD\]. $\bullet$ $[({\rm c}_5)\Leftrightarrow ({\rm d}_5)]$ Again, we transform $({\rm RLIP}_c)$ to $({\rm RP}_c)$, but with another setting: $Z=\mathbb{R}$, $S=\mathbb{R}_+$, $\U=\mathscr{U}$, and $G_u\colon X\to \overline{\mathbb{R}}$ such that $G_u(x)= \sup_{t\in T}[\la u^1_t, x\ra-u^2_t]$ for all $u\in \mathscr{U}$. Then, one has $$\begin{aligned} \mathcal{M}_0&=\bigcup_{u\in \mathscr{U},\; \lambda\ge 0} \operatorname{epi}(\lambda G_u)^\ast =\bigcup_{u\in \mathscr{U}} \operatorname{cone}\operatorname{epi}(G_u)^\ast\\ &=\bigcup_{u\in \mathscr{U}} \operatorname{cone}\operatorname{epi}\left[\sup_{ t\in T} (\la u^1_t,.\ra-u^2_t)\right]^*=\bigcup_{u\in \mathscr{U}} \operatorname{cone}\operatorname{\overline{\operatorname{co}}}\bigcup_{ t\in T}\operatorname{epi}(\la u^1_t,.\ra-u^2_t)^*\\ &=\bigcup_{u\in \mathscr{U}} \operatorname{cone}\operatorname{\overline{\operatorname{co}}}\bigcup_{ t\in T}[u_t+\mathbb{R}_+ (0_{X^*},1)] =\bigcup_{u\in \mathscr{U}} \operatorname{cone}\operatorname{\overline{\operatorname{co}}}[u(T)+\mathbb{R}_+ (0_{X^*},1)]=\mathcal{N}_5, \end{aligned}$$ and the robust dual problem $({\rm RD}_c)$ of the new problem $({\rm RP}_c)$ is exactly $({\rm RLID}_c^5)$. The desired equivalence again follows from Theorem \[thm\_StrD\].
Theorem \[thm\_3.2\] with $i=3$ is [@GJLL Theorem 2], while the case $i=6$ ($i = 3$, resp.) is similar to [@DGLV17-Robust Proposition 5.2(ii)] with $i=C$ ($i = O$, resp.). \[$2^{nd}$ characterization for stable robust strong duality for (RLIP$_c$)\] \[thm\_3.2bis\] For $ i = 6, 7$, consider the next statements: $({\rm c}_i)$ $\mathcal{N}_i $ is a closed subset of $X^*\times \mathbb{R}$, $({\rm d}_i)$ The stable robust strong duality holds for the pair [$({\rm RLIP}_c)-({\rm RLID}_c^i)$]{}. Then $[({\rm c}_i) \Leftrightarrow ({\rm d}_i)]$ for $ i = 6, 7$. $\bullet$ $[({\rm c}_6)\Leftrightarrow ({\rm d}_6)]$ The robust problem $({\rm RLIP}_c)$ turns into $({\rm RLP}_c)$ if we set $Z=\mathbb{R}^{\mathscr{V}}$, $S=\mathbb{R}^{\mathscr{V}}_+$, $\U$ to be a singleton, $A=(v^1)_{v\in \mathscr{V}}$ and $\omega=(v^2)_{v\in \mathscr{V}}$. In such a setting, one gets $$\begin{aligned} \mathcal{M}_1&=\{(\lambda A,\la\lambda, \omega\ra): \lambda\in \mathbb{R}^{(\mathscr{V})}_+\}+ \mathbb{R}_+ (0_{X^*},1)\\ &=\left\{\sum_{v\in\operatorname{supp}\lambda}\lambda_v v: \lambda\in \mathbb{R}^{(\mathscr{V})}_+\right\}+ \mathbb{R}_+ (0_{X^*},1)\\ &= \operatorname{co}\operatorname{cone}\mathscr{V}+ \mathbb{R}_+ (0_{X^*},1)=\operatorname{co}\operatorname{cone}\left[\mathscr{V}\cup \{(0_{X^*},1)\} \right]=\mathcal{N}_6, \end{aligned}$$ while the robust dual problem of the new problem $({\rm RLP}_c)$ (i.e., $({\rm RLD}_c)$) is none other than $({\rm RLID}_c^6)$. The equivalence $[({\rm c}_6)\Leftrightarrow ({\rm d}_6)]$ now follows from Corollary \[thm\_lStrD\] and the fact that the robust moment cone is always convex whenever $\U$ is a singleton (see Proposition \[prop\_conclo\] and Remark \[rem\_2.1\]). $\bullet$ $[({\rm c}_7)\Leftrightarrow ({\rm d}_7)]$ Set $Z=\mathbb{R}$, $S=\mathbb{R}_+$, $\U$ to be a singleton, and $G= \sup_{v\in \mathscr{V}}(\la v^1, .\ra-v^2)$. The problem $({\rm RLIP}_c)$ now becomes $({\rm RP}_c)$.
On the other hand, one has $$\begin{aligned} \mathcal{M}_0&=\bigcup_{\lambda\ge 0} \operatorname{epi}(\lambda G)^\ast =\operatorname{cone}\operatorname{epi}G^\ast\\ &=\operatorname{cone}\operatorname{epi}\left[\sup_{v\in \mathscr{V}}(\la v^1, .\ra-v^2)\right]^* = \operatorname{cone}\operatorname{\overline{\operatorname{co}}}\bigcup_{v\in \mathscr{V}}\operatorname{epi}(\la v^1,.\ra-v^2)^*\\ &=\operatorname{cone}\operatorname{\overline{\operatorname{co}}}\bigcup_{v\in \mathscr{V}}[v+\mathbb{R}_+ (0_{X^*},1)] =\operatorname{cone}\operatorname{\overline{\operatorname{co}}}\left[\mathscr{V}+\mathbb{R}_+ (0_{X^*},1)\right]=\mathcal{N}_7, \end{aligned}$$ while the dual problem $({\rm RD}_c)$ of the new problem $({\rm RP}_c)$ is $({\rm RLID}_c^7)$. The equivalence $[({\rm c}_7)\Leftrightarrow ({\rm d}_7)]$ is a consequence of Theorem \[thm\_StrD\] and Proposition \[prop\_conclo\] (see also Remark \[rem\_2.1\]). \[rem41a\] There may be some more ways of transforming (RLIP$_c$) to the form of (RP$_c$), which give rise to further robust dual problems for (RLIP$_c$); for instance, $(\alpha)$ Set $Z=\mathbb{R}^T$, $S=\mathbb{R}_+^T$, $\U$ to be a singleton, and $G= \left(\sup_{ v\in\U_t} [\la v^1, .\ra-v^2]\right)_{t\in T}$. Then (RLIP$_c$) reduces to the form of (RP$_c$) with no uncertainty, as now $\U$ is a singleton.
In this setting, the moment cone $\mathcal{M}_0$ becomes $$\begin{aligned} \label{eq_7.1ee} \mathcal{M}_0=\bigcup_{\lambda\in \mathbb{R}_+^{(T)}}\operatorname{epi}\left[\sum_{t\in T} \lambda_t \sup_{ v\in\U_t} (\la v^1, .\ra-v^2) \right]^*=:\mathcal{N}_8, \end{aligned}$$ and the robust dual problem now collapses to $$\begin{aligned} ({\rm RLID}_c^8)\quad &\sup_{\lambda\in\mathbb{R}_+^{(T)}} \inf_{x\in X}\left[\la c,x\ra +\sum_{t\in \operatorname{supp}\lambda}\lambda_t \sup_{v\in\U_t}\left(\la v^1,x\ra -v^2\right)\right].\end{aligned}$$ $(\beta)$ Set $Z=\mathbb{R}^\mathscr{U}$, $S=\mathbb{R}^\mathscr{U}_+$, $\U$ to be a singleton, and $G= (\sup_{t\in T}[\la u^1_t, .\ra-u^2_t])_{u\in \mathscr{U}}$. Then the problem (RLIP$_c$) turns into the model (RP$_c$), and one has $$\begin{aligned} \mathcal{M}_0 &=\operatorname{co}\operatorname{cone}\bigcup_{u \in \mathscr{U}} \operatorname{\overline{\operatorname{co}}}\left[u(T)+\mathbb{R}_+(0_{X^*},1)\right]=:\mathcal{N}_9. \end{aligned}$$ The corresponding dual problem is $$\begin{aligned} ({\rm RLID}_c^9)\quad &\sup_{\lambda \in\mathbb{R}_+^{(\mathscr{U})}} \inf_{x\in X}\left[\la c,x\ra +\sum_{u\in \operatorname{supp}\lambda}\lambda_u \sup_{t\in T}\left(\la u^1_t,x\ra -u_t^2\right)\right].\notag\end{aligned}$$ For the mentioned cases, we also get the relations between the values of these two dual problems: $$\sup({\rm RLID}_c^6)\le \sup({\rm RLID}_c^8)\ \ \textrm{ and } \ \ \sup({\rm RLID}_c^6)\le \sup({\rm RLID}_c^9),$$ and weak duality holds as well: $$\begin{array}{c}\sup ({\rm RLID}_c^8)\\ \sup ({\rm RLID}_c^9) \\ \end{array} \le \inf ({\rm RLIP}_c).$$ Moreover, under suitable conditions, robust strong duality holds, similar to the results in [@DGLV17-Robust Proposition 5.2(ii)].
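As a minimal illustration of the condition $({\rm c}_6)$ (the example is ours and not taken from the references), let $X=\mathbb{R}$ and $\mathscr{V}=\{(1,1)\}$, i.e., the single certain constraint $x\le 1$. Then $$\mathcal{N}_6=\operatorname{co}\operatorname{cone}\{(1,1),(0,1)\}=\{(x,y)\in\mathbb{R}^2: 0\le x\le y\},$$ which is closed (and convex). Accordingly, for every $c\in\mathbb{R}$, $$\inf({\rm RLIP}_c)=\inf_{x\le 1} cx= \sup_{\lambda\ge 0,\; c=-\lambda}(-\lambda)=\sup({\rm RLID}_c^6)= \begin{cases} c, & c\le 0,\\ -\infty, & c>0, \end{cases}$$ (with the convention $\sup\emptyset=-\infty$), and the dual value is attained whenever it is finite, in accordance with Theorem \[thm\_3.2bis\].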
\[rem\_4.3new\] From Propositions \[pro\_3.1ff\]-\[pro\_3.3gg\] and Remark \[rem41a\], we get an overview of the relationships between the values of the robust dual problems and the weak duality of each primal-dual pair, which can be summarized as follows: $$\sup ({\rm RLID}_c^1)\le \begin{array}{c}\sup({\rm RLID}_c^2)\\ \sup({\rm RLID}_c^3)\end{array} \le \sup({\rm RLID}_c^6)\le \begin{array}{c}\sup({\rm RLID}_c^8)\\ \sup({\rm RLID}_c^9)\end{array} \le \inf({\rm RLIP}_c),$$ $$\sup ({\rm RLID}_c^1)\le \begin{array}{c}\sup({\rm RLID}_c^4)\\ \sup({\rm RLID}_c^5)\end{array} \le \sup({\rm RLID}_c^7)\le \inf({\rm RLIP}_c).$$ As we have seen from the previous theorems and from the previous section, the closedness and convexity of the robust moment cones play crucial roles in closing the duality gaps for the primal-dual pairs of robust problems. In the rest of this section, we give some sufficient conditions for the mentioned properties of these cones; their proofs are rather long and are deferred to the Appendices. \[convexity-N\] The next assertions hold: $\phantom{x}$ - If $\mathscr{V}$ is convex then $\mathcal{N}_1$ is convex, - If $\{x^\ast\in X^*: (x^\ast,r)\in \U_t\}$ is convex for all $t\in T$ then $\mathcal{N}_3$ is convex, - Assume that $T$ is a convex subset of some vector space, and that, for all $t\in T$, $\U_t=\U^1_t\times\U^2_t$ with $\U^1_t\subset X^*$ and $\U^2_t\subset \mathbb{R}$.
Assume further that, for each $x\in X$, the function $t\mapsto \sup_{x^\ast\in \U^1_t} \la x^\ast, x\ra$ is affine and the function $t\mapsto \inf \U^2_t$ is convex. Then, $\mathcal{N}_4$ is convex, - The sets $\mathcal{N}_6$, $\mathcal{N}_7$ are convex[^8]. See Appendix A. \[closedness-N\] The following assertions are true. $\phantom{x}$ - If $ \mathscr{V}$ is compact and $$\label{eq_4.2zab} \forall v\in \mathscr{V},\; \exists \bar x\in X: \la v^1,\bar x\ra<v^2,$$ then $\mathcal{N}_1$ is closed. - If $T$ is compact, $t\mapsto \sup_{v\in \U_t} [\langle v^1, x\rangle-v^2]$ is usc for all $x\in X$, and $$\label{eq_4.3zab} \forall t\in T ,\; \exists x_t\in X: \sup_{v\in \U_t} [\langle v^1, x_t\rangle-v^2]<0,$$ then $\mathcal{N}_4$ is closed. - If $\U_t$ is compact for all $t\in T$, $u\mapsto \sup_{t\in T}[\la u_t^1, x\ra - u_t^2]$ is usc for all $x\in X$, and $$\label{eq_4.4zab} \forall u \in \mathscr{U},\; \exists x_u\in X: \sup_{t\in T}[\la u_t^1, x_u\ra - u_t^2]<0,$$ then $\mathcal{N}_5$ is closed. - If the following condition holds: $$\label{eq_4.5zab} \exists x\in X: \sup_{v\in \mathscr{V}} [\la v^1, x\ra - v^2]<0,$$ then $\mathcal{N}_7$ is closed. See Appendix B. Farkas-Type Results for Infinite Linear Systems with Uncertainty ================================================================= We retain the notations used in Sections 2, 3, and 4. Let $ c \in X^\ast$, let $T$ be a (possibly infinite) index set, and let $\U_t $ be the uncertainty set for each $t \in T$. Consider the robust linear system $$\begin{aligned} \ & \langle x^\ast,x \rangle \le r,\qquad\forall (x^\ast,r)\in\U_t,\; \forall t\in T,\end{aligned}$$ which is the constraint system of the problem $(\mathrm{RLIP}_c)$ considered in Section 4. Based on the stable robust strong duality results established in Section 4, we can derive the next robust Farkas-type results for linear systems with uncertain parameters (for a short survey on Farkas-type results, see, e.g., [@DJ-Top]).
\[cor\_6.1ff\] Let $\emptyset \ne \mathscr{V} \subset X^\ast \times \R$. The following statements are equivalent: $({\rm i})$ For all $(c,s)\in X^\ast\times \mathbb{R}$ such that $\inf({\rm RLIP}_c) > - \infty$ and all $x \in X$, the next assertions are equivalent: $(\alpha)$ $\langle x^\ast,x \rangle \le r,\;\forall (x^\ast,r)\in \mathscr{V} \; \Longrightarrow\; \la c,x\ra \ge s$, $(\beta)$ $\exists (\bar x^\ast, \bar r) \in\mathscr{V},\; \exists \bar\lambda\ge 0 : \begin{cases} \bar\lambda \bar x^\ast=-c,\\ \bar \lambda \bar r\le - s, \end{cases} $ $({\rm ii})$ $\operatorname{cone}\mathscr{V}+ \mathbb{R}_+ (0_{X^*},1)$ is convex and closed. Take $(c,s)\in X^\ast\times \mathbb{R}$. Set $\Lambda := \{ (x^*, r , \lambda) : (x^*, r ) \in \mathscr{V},\ \lambda \in \R_+,\ \lambda x^* = -c \}$ and $\Phi (x^*, r, \lambda) = - \lambda r$ for all $(x^*, r, \lambda) \in \Lambda $. So, $\sup ({\rm RLID}_c^1) = \sup\limits_{(x^*, r , \lambda) \in \Lambda} \Phi (x^*, r, \lambda)$. From the statements of the problems $({\rm RLIP}_c) $ and $({\rm RLID}_c^1) $, one has $$\begin{aligned} (\alpha) &\Longleftrightarrow& \inf ({\rm RLIP}_c)\ge s, \label{eqalpha} \\ (\beta) &\Longleftrightarrow& \Big( \exists(\bar x^*, \bar r, \bar \lambda) \in \Lambda: \sup ({\rm RLID}_c^1)\ge \Phi (\bar x^*, \bar r, \bar \lambda) = -\bar \lambda \bar r \geq s\Big) \label{eqbeta}. \end{aligned}$$ $\bullet$ $[(ii) \Longrightarrow (i)]$ Assume that $(ii) $ holds. Then it follows from Theorem \[thm\_3.2\] (with $i=1$) that $$\begin{aligned} \label{eqFarkas1} (ii) &\Longleftrightarrow& \Big(\textrm{the stable robust strong duality holds for the pair } ({\rm RLIP}_c)-({\rm RLID}_c^1) \Big) \notag\\ &\Longleftrightarrow& \Big( \forall c \in X^\ast, \ \ \inf({\rm RLIP}_c) = \max ({\rm RLID}_c^1) \Big).
\end{aligned}$$ So, for $c \in X^*$, if $(\alpha)$ holds then $\inf ({\rm RLIP}_c)\ge s$ and hence, we get from \eqref{eqFarkas1}, $$\inf ({\rm RLIP}_c) = \max ({\rm RLID}_c^1) = \Phi (\bar x^*, \bar r, \bar \lambda) = -\bar \lambda \bar r \geq s,$$ for some $ (\bar x^*, \bar r, \bar \lambda) \in \Lambda$, which means that $(\beta)$ holds, and so $[(\alpha) \Longrightarrow (\beta)]$. Conversely, if $(\beta)$ holds, then from \eqref{eqbeta} and the weak duality of the primal-dual pair $({\rm RLIP}_c)-({\rm RLID}_c^1)$, one gets the existence of $(\bar x^*, \bar r, \bar \lambda) \in \Lambda $ such that $$\inf ({\rm RLIP}_c) \geq \sup ({\rm RLID}_c^1)\ge \Phi (\bar x^*, \bar r, \bar \lambda) = -\bar \lambda \bar r \geq s,$$ yielding $(\alpha)$. So, $[(\beta) \Longrightarrow (\alpha)]$ and consequently, we have proved that $[(ii) \Longrightarrow (i)]$. $\bullet$ $[(i) \Longrightarrow (ii)]$ Assume that $(i)$ holds. Take $c \in X^*$ and $s = \inf ({\rm RLIP}_c) \in \R$. Then $(\alpha)$ holds and, as $(i)$ holds, $(\beta)$ holds as well. This, together with the weak duality, yields, for some $ (\bar x^*, \bar r, \bar \lambda) \in \Lambda$ (see \eqref{eqbeta}), $$\inf ({\rm RLIP}_c) \geq \sup ({\rm RLID}_c^1) \geq \Phi (\bar x^*, \bar r, \bar \lambda) = -\bar \lambda \bar r \geq s = \inf ({\rm RLIP}_c),$$ meaning that the robust dual problem $({\rm RLID}_c^1)$ attains its optimal value and $ \inf ({\rm RLIP}_c) = \max ({\rm RLID}_c^1) $. Since $c\in X^*$ is arbitrary, the stable robust strong duality holds for the pair $({\rm RLIP}_c)-({\rm RLID}_c^1) $. The fulfillment of $(ii)$ now follows from Theorem \[thm\_3.2\] (with $i=1$). Assume now that $\mathscr{V}$ is a convex and compact subset of $X^\ast\times \mathbb{R}$ and that the Slater-type condition \eqref{eq_4.2zab} holds. According to Propositions \[convexity-N\] and \[closedness-N\], the set $\mathcal{N}_1= \operatorname{cone}\mathscr{V}+ \mathbb{R}_+ (0_{X^*},1)$ is then closed and convex. So, it follows from Corollary \[cor\_6.1ff\] that $(\alpha)$ and $(\beta)$ in Corollary \[cor\_6.1ff\] are equivalent.
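To illustrate the certificate $(\beta)$ in Corollary \[cor\_6.1ff\] on a toy system (our own construction, added for illustration only), let $X=\mathbb{R}$ and $\mathscr{V}=\{(1,1)\}$, i.e., the single inequality $x\le 1$. Here $\operatorname{cone}\mathscr{V}+\mathbb{R}_+(0_{X^*},1)=\{(x,y)\in\mathbb{R}^2: 0\le x\le y\}$ is closed and convex, so $({\rm ii})$ holds. For $c=-1$ and $s=-1$, the implication $(\alpha)$ reads $x\le 1 \Longrightarrow -x\ge -1$, which is true, and the certificate $(\beta)$ is witnessed by $(\bar x^\ast,\bar r)=(1,1)$ and $\bar\lambda=1$: $$\bar\lambda \bar x^\ast = 1 = -c \qquad\textrm{and}\qquad \bar\lambda\bar r = 1\le -s =1.$$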
This observation may apply to some of the next corollaries. The next versions of robust Farkas lemmas follow in the same way as Corollary \[cor\_6.1ff\], using Theorem \[thm\_3.2\] with $i=2, 3$, and $i= 4$. \[cor\_6.2ff\] The following statements are equivalent: $({\rm i})$ For all $(c,s)\in X^\ast\times \mathbb{R}$ such that $\inf({\rm RLIP}_c) > - \infty$ and all $x \in X$, the next assertions are equivalent: $(\alpha)$ $\langle x^\ast,x \rangle \le r,\;\forall (x^\ast,r)\in \mathscr{V}\; \Longrightarrow\; \la c,x\ra \ge s$, $(\gamma)$ $ \exists \bar t\in T,\; \exists \bar \lambda \in \mathbb{R}^{(\mathscr{U})}_+ : \begin{cases} \sum\limits_{u\in \operatorname{supp}\bar\lambda}\bar\lambda_u u^1_{\bar t}=-c,\\ \sum\limits_{u\in \operatorname{supp}\bar\lambda}\bar\lambda_u u^2_{\bar t}\le -s, \end{cases} $ $({\rm ii})$ $\bigcup_{ t\in T}\operatorname{co}\operatorname{cone}[\U_t\cup\{ (0_{X^*},1)\}]$ is convex and closed. [[@DMVV17-Robust-SIAM Theorem 5.6], [@GJLL Corollary 3],[@DGLM17-Optim Theorem 6.1(i)]]{} \[cor\_6.3ff\] The following statements are equivalent: $({\rm i})$ For all $(c,s)\in X^\ast\times \mathbb{R}$ such that $\inf({\rm RLIP}_c) > - \infty$ and all $x \in X$, the next assertions are equivalent: $(\alpha)$ $\langle x^\ast,x \rangle \le r,\;\forall (x^\ast,r)\in \mathscr{V}\; \Longrightarrow\; \la c,x\ra \ge s$, $(\delta)$ $\exists \bar u\in\mathscr{U},\;\exists \bar \lambda \in \mathbb{R}^{(T)}_+ : \begin{cases} \sum\limits_{t\in \operatorname{supp}\bar\lambda}\bar\lambda_t \bar{u}^1_t=-c,\\ \sum\limits_{t\in \operatorname{supp}\bar\lambda}\bar\lambda_t \bar{u}^2_t\le -s, \end{cases} $ $({\rm ii})$ $\bigcup_{ u\in\mathscr{U}}\operatorname{co}\operatorname{cone}[u(T)\cup\{ (0_{X^*},1)\}]$ is convex and closed, where $u(T):=\{u_t: t\in T\}$ for all $u\in\mathscr{U}$.
\[Robust Farkas lemma for linear system IV\] \[cor\_6.4ff\] The following statements are equivalent: $({\rm i})$ For all $(c,s)\in X^\ast\times \mathbb{R}$ such that $\inf({\rm RLIP}_c) > - \infty$ and all $x \in X$, the next assertions are equivalent: $(\alpha)$ $\langle x^\ast,x \rangle \le r,\;\forall (x^\ast,r)\in \mathscr{V}\; \Longrightarrow\; \la c,x\ra \ge s$, $(\epsilon)$ $\exists \bar t\in T,\; \exists \bar\lambda\ge 0$ such that $\forall x\in X, \;\forall \varepsilon>0,\; \exists (x_0^\ast, r_0)\in \U_{\bar t}$ satisfying $$\la c+\bar\lambda x_0^\ast,x \ra -\bar\lambda r_0 \ge s - \varepsilon ,$$ $({\rm ii})$ $\bigcup_{t\in T}\operatorname{cone}\operatorname{\overline{\operatorname{co}}}[\U_t+ \mathbb{R}_+( 0_{X^*},1)]$ is convex and closed. It is worth noting that robust Farkas-type results can be established in the same way as in the previous corollaries, corresponding to the stable robust strong duality for the pairs $({\rm RLIP}_c) - ({\rm RLID}_c^j)$ with $j=5,\ldots, 9$. The results corresponding to $j=6$ can be considered as a version of [@GJLL Corollary 4] with $\mathscr{V}$ replacing $\operatorname{gph}\mathscr{U}$. Linear Infinite Problems with Sub-affine Constraints ==================================================== The results in the previous sections for the robust linear infinite problems $ ({\rm RLIP}_c)$ ($c \in X^*$) can be extended to a rather broader class of robust problems by a similar approach. Here we consider a concrete class of problems: the robust linear problems with sub-affine constraints. Denote by $\mathscr{P}_0(X^\ast)$ the set of all the nonempty, $w^{\ast }$-closed convex subsets of $X^{\ast }$. Let $T$ be a possibly infinite index set and let $(\U_t)_{t\in T}$ be a collection of nonempty uncertainty sets with $\U_t \subset \mathscr{P}_0(X^\ast)\times \mathbb{R}$ for each $t\in T$.
We introduce the sets $$\mathfrak{V}:=\bigcup_{t\in T}\U_t \quad \textrm{and}\quad \mathfrak{U}:=\prod_{t\in T}\U_t.$$ By convention, for each $V \in\mathscr{P}_0(X^\ast)\times\mathbb{R}$, we write $V = (V^1, V^2)$ with $V^1 \subset X^\ast$ and $V^2 \in \R$. In some cases, we also consider $V = (V^1, V^2) \in\mathscr{P}_0(X^\ast)\times\mathbb{R} $ as a subset of the set $X^\ast\times \mathbb{R}$, identifying $V $ with $V^1\times \{V^2\}$. In the same way, for $U\in \mathfrak{U}$, we write $U=(U_t)_{t\in T}$ with $U_t = (U_t^1, U_t^2) \in \U_t$ for each $t \in T$. For each $c\in X^\ast$, consider the robust linear problem with sub-affine constraints: $$\begin{aligned} ({\rm RSAP}_c)\qquad &\inf\; \langle c,x\rangle \notag\\ \textrm{subject to }\ \ \ \ & \sigma_{\A_t}(x)\le b_t,\; \forall (\A_t,b_t)\in\U_t,\; \forall t\in T,\ x \in X. \end{aligned}$$ Here $\sigma_{\A_t}$ denotes the support function of the set $\A_t \subset X^\ast$, i.e., $\sigma_{\A_t}(x) := \sup\limits_{x^* \in \A_t} \la x^*, x\ra$. We now introduce two robust dual problems for ${\rm (RSAP}_c)$: $$\begin{aligned} ({\rm RSAD}_c^1)\qquad &\sup\; - \lambda v^2 \notag\\ \textrm{subject to } \ \ \ \ & V\in \mathfrak{V},\; v = (v^1, v^2) \in V, \\ &\lambda\ge 0, \; c=-\lambda v^1. \notag\end{aligned}$$ $$\begin{aligned} \ \ \ \ \ \ \ \ \ ({\rm RSAD}_c^2)\qquad &\sup\; - \sum_{U\in \operatorname{supp}\lambda} \lambda_Uv^2_U \notag\\ \textrm{subject to } \ \ \ & (v_U)_{U\in \mathfrak{U}}\in (U_t)_{U\in\mathfrak{U}}, \; c=-\sum_{U\in \operatorname{supp}\lambda} \lambda_Uv^1_U \\ &\ t \in T, \ \ \lambda\in \mathbb{R}_+^{(\mathfrak{U})}.
\end{aligned}$$ We can now state stable robust strong duality results for ${\rm (RSAP}_c)$ as follows: The following statements are equivalent: $\rm(g_1)$ $\mathcal{R}_1:=\operatorname{cone}\mathfrak{U}+\mathbb{R}_+(0_{X^\ast},1) $ is a closed and convex subset of $X^*\times \mathbb{R}$, $\rm(h_1)$ The stable robust strong duality holds for the pair [$({\rm RSAP}_c)-({\rm RSAD}_c^1)$]{}, i.e., $$\inf ({\rm RSAP}_c)=\max ({\rm RSAD}_c^1),\quad\forall c\in X^\ast.$$ Set $Z=\mathbb{R}$, $S=\mathbb{R}_+$, $\U=\mathfrak{V}$ and $G_V(.)=\sigma_{V^1}(.)-V^2$ for all $V = (V^1, V^2) \in \mathfrak{V}$. Then the problem ${\rm (RSAP}_c)$ possesses the form of $({\rm RP}_c)$. The corresponding robust moment cone $\mathcal{M}_0$ now becomes $$\begin{aligned} \mathcal{M}_0&=\bigcup_{(V,\lambda)\in \mathfrak{V}\times\mathbb{R}_+}\operatorname{epi}(\lambda G_V)^\ast =\bigcup_{(V,\lambda)\in \mathfrak{V}\times\mathbb{R}_+}\lambda\operatorname{epi}( G_V)^\ast\\ &=\operatorname{cone}\bigcup_{V\in\mathfrak{V}}\operatorname{epi}( G_V)^\ast =\operatorname{cone}\bigcup_{V\in \mathfrak{V}}\operatorname{epi}(\sigma_{V^1}(.)-V^2)^\ast\\ &=\operatorname{cone}\bigcup_{V\in \mathfrak{V}}[ V^1 \times\{V^2\} +\mathbb{R}_+(0_{X^\ast},1)] =\operatorname{cone}[\mathfrak{U}+\mathbb{R}_+(0_{X^\ast},1)]\\ &=\operatorname{cone}\mathfrak{U}+\mathbb{R}_+(0_{X^\ast},1)=\mathcal{R}_1,\end{aligned}$$ and the dual problem $({\rm RD}_c)$ of the resulting problem $({\rm RP}_c)$ is exactly the problem $({\rm RSAD}_c^1)$. The conclusion now follows from Theorem \[thm\_StrD\]. Assume that for all $V = (V^1, V^2) \in\mathfrak{V}$, $V^1$ is a $w^*$-compact subset of $X^\ast$. Then the following statements are equivalent: $\rm(g_2)$ $\mathcal{R}_2:= \bigcup_{t\in T} \operatorname{co}\operatorname{cone}\left[ \U_t\cup \{(0_{X^*},1)\}\right] $ is a closed and convex subset of $X^*\times \mathbb{R}$, $\rm(h_2)$ The stable robust strong duality holds for the pair [$({\rm RSAP}_c)-({\rm RSAD}_c^2)$]{}.
Under the current assumption, $\sigma_{V^1}$ is continuous on $X$ for all $V = (V^1, V^2) \in\mathfrak{V}$. Take $Z=\mathbb{R}^{\mathfrak{U}}$, $S=\mathbb{R}^{\mathfrak{U}}_+$, $\U=T$, and $G_t=(\sigma_{U^1_t}(.)-U^2_t)_{U\in \mathfrak{U}}$ for all $t\in T$. Then the problem $({\rm RSAP}_c)$ turns into the model $({\rm RP}_c)$ and, in this setting, the moment cone $\mathcal{M}_0 $ becomes: $$\begin{aligned} \mathcal{M}_0&=\bigcup_{(t,\lambda)\in T\times \mathbb{R}_+^{(\mathfrak{U})}}\operatorname{epi}(\lambda G_t)^\ast =\bigcup_{(t,\lambda)\in T\times \mathbb{R}_+^{(\mathfrak{U})}}\operatorname{epi}\left(\sum_{U\in \operatorname{supp}\lambda}\lambda_U \left[\sigma_{U^1_t}(.)-U^2_t\right]\right)^\ast\\ &=\!\!\!\!\!\! \bigcup_{(t,\lambda)\in T\times \mathbb{R}_+^{(\mathfrak{U})}}\!\!\sum_{U\in \operatorname{supp}\lambda}\!\!\!\!\lambda_U \operatorname{epi}\left(\sigma_{U^1_t}(.)-U^2_t\right)^\ast \!\!\!\! =\!\!\!\!\bigcup_{(t,\lambda)\in T\times \mathbb{R}_+^{(\mathfrak{U})}}\!\!\sum_{U\in \operatorname{supp}\lambda}\!\!\!\!\lambda_U \left[ U_t^1\times \{U_t^2\} + \mathbb{R}_+ (0_{X^*},1) \right]\\ &=\bigcup_{(t,\lambda)\in T\times \mathbb{R}_+^{(\mathfrak{U})}}\sum_{U\in \operatorname{supp}\lambda}\lambda_U\left[ U_t + \mathbb{R}_+ (0_{X^*},1) \right] = \bigcup_{t\in T} \operatorname{co}\operatorname{cone}\left[ \U_t\cup \{(0_{X^*},1)\}\right]=\mathcal{R}_2. \end{aligned}$$ Moreover, the dual problem of the new problem $({\rm RP}_c)$ turns out to be exactly $({\rm RSAD}_c^2)$. The conclusion now follows from Theorem \[thm\_StrD\]. From the above results on (stable) robust strong duality, one can use the same argument as in Section 5 to get some versions of (stable) robust Farkas lemmas for systems involving sub-affine functions with uncertain parameters. Concretely, we state the following robust Farkas lemmas for this class of systems and omit the proofs.
\[corolRLAP1\] The following statements are equivalent: $({\rm i})$ For all $(c,s)\in X^\ast\times \mathbb{R}$, the next assertions are equivalent: $(\alpha'')$ $\sigma_{\A_t}(x)\le b_t,\; \forall (\A_t,b_t)\in\U_t,\; \forall t\in T \, \Longrightarrow \la c, x\ra \geq s$. $(\beta'')$ $\exists \bar V\in \mathfrak{V},\; \exists(\bar x^\ast, \bar r)\in \bar V,\; \exists\bar\lambda\ge 0 : \begin{cases} \bar\lambda \bar x^\ast=-c\\ \bar\lambda \bar r\le -s. \end{cases} $ $({\rm ii})$ $\operatorname{cone}\mathfrak{U}+\mathbb{R}_+(0_{X^\ast},1)$ is a closed and convex subset of $X^*\times \mathbb{R}$. The following statements are equivalent: $({\rm i})$ For all $(c,s)\in X^\ast\times \mathbb{R}$, the next assertions are equivalent: $(\alpha'')$ $\sigma_{\A_t}(x)\le b_t,\; \forall (\A_t,b_t)\in\U_t,\; \forall t\in T \, \Longrightarrow \la c, x\ra \geq s$. $(\gamma'')$ $\exists \bar t \in T,\; \exists (\bar x^\ast_U, \bar r_U)_{U\in \mathfrak{U}}\in (U_{\bar t})_{U\in\mathfrak{U}},\; \bar \lambda\in \mathbb{R}_+^{(\mathfrak{U})} :\\ \null \ \ \ \ \ \ \ \ \sum\limits_{U\in \operatorname{supp}\bar\lambda} \bar\lambda_U\bar x^\ast_U=-c, \ \ \textrm{and} \ \ \sum\limits_{U\in \operatorname{supp}\bar\lambda} \bar\lambda_U\bar r_U\le -s. $ $({\rm ii})$ $\bigcup_{t\in T} \operatorname{co}\operatorname{cone}\left[ \U_t\cup \{(0_{X^*},1)\}\right]$ is a closed and convex subset of $X^*\times \mathbb{R}$. [*Duality for Linear Infinite Programming Problems.*]{} We now consider a special case of (RLIP$_c$): the [*linear infinite programming problem*]{} $$\begin{aligned} ({\rm LIP}_c)\quad &\inf\; \langle c,x\rangle \notag\\ \textrm{s.t. }\ \ &x\in X,\;\langle a_t,x \rangle \le b_t,\;\forall t\in T,\notag\end{aligned}$$ where $T$ is an arbitrary (possibly infinite) index set, $c\in X^*$, $a_t \in X^*$, and $b_t \in \R$ for all $t \in T$.
In the case where $X =\mathbb{R}^n$, this problem is often known as a linear semi-infinite problem (see [@GL98] and also [@CKL], [@DW] for applications of this model in finance). We consider $({\rm LIP}_c)$ from a new viewpoint: as a special case of $({\rm RLIP}_c)$ where all the uncertainty sets $\U_t$, $t\in T$, are singletons, say, $\U_t = \{(a_t,b_t)\}$. Then $\mathscr{U} = \prod_{t\in T}\U_t $ is also a singleton, say $ \mathscr{U} = \left\{ \Big( (a_t, b_t)\Big) _{ t \in T} \right\}$, while $\mathscr{V}=\{(a_t,b_t): t\in T\}$. We now have: $\bullet$ All the three “robust” dual problems $({\rm RLID}_c^1)$, $({\rm RLID}_c^2)$, $({\rm RLID}_c^4)$ of the problem $({ \rm LIP}_c)$ (considered as $({ \rm RLIP}_c)$) collapse to $$\begin{aligned} \hskip-1.5cm ({\rm LID}_c^1)\quad& \sup\; [ -\lambda b_t]\notag\\ \textrm{subject to }\ \ \ & t\in T,\;\lambda\ge 0,\; c=-\lambda a_t, \notag\end{aligned}$$ and in this situation, the three corresponding moment cones $\mathcal{N}_1$, $\mathcal{N}_2$, and $\mathcal{N}_4$ reduce to the moment cone corresponding to the pair $({\rm LIP}_c) - ({\rm LID}_c^1)$: $$\mathcal{E}_1 :=\bigcup_{t\in T}\operatorname{co}\operatorname{cone}\{(a_t, b_t), (0_{X^*},1)\}.$$ $\bullet$ All the three “robust” dual problems $({\rm RLID}_c^3)$, $({\rm RLID}_c^6)$, $({\rm RLID}_c^8)$ of the new-formulated problem $({\rm RLIP}_c)$ collapse to the next problem (which was introduced in [@GL98] for the case where $X = \R^n$) $$\begin{aligned} ({\rm LID}_c^2)\quad &\sup\; \left[-\sum_{t\in \operatorname{supp}\lambda} \lambda_t b_t\right]\notag\\ \textrm{subject to } \ \ &\lambda\in \mathbb{R}^{(T)}_+ ,\; c=- \sum_{t\in \operatorname{supp}\lambda}\lambda_t a_t, \notag\end{aligned}$$ and, in the same way as above, the three corresponding moment cones $\mathcal{N}_3$, $\mathcal{N}_6$, and $\mathcal{N}_8$ reduce to the moment cone corresponding to the pair $({\rm LIP}_c) - ({\rm LID}_c^2):$ $$\mathcal{E}_2:=\operatorname{co}\operatorname{cone}\{(a_t,
b_t),t\in T; (0_{X^*},1)\}.$$ $\bullet$ All the three dual problems $({\rm RLID}_c^5)$, $({\rm RLID}_c^7)$, $({\rm RLID}_c^9)$ of the resulting problem $({\rm RLIP}_c)$ reduce to: $$\ \ \ \ \ ({\rm LID}_c^3)\qquad\sup_{\lambda\ge 0} \inf_{x\in X} \sup_{t\in T}\Big[\la c,x\ra+\la \lambda a_t,x\ra -\lambda b_t\Big],$$ while the three robust moment cones $\mathcal{N}_5$, $\mathcal{N}_7$, and $\mathcal{N}_9$ all reduce to the moment cone corresponding to the pair $({\rm LIP}_c) - ({\rm LID}_c^3):$ $$\mathcal{E}_3 :=\operatorname{cone}\operatorname{\overline{\operatorname{co}}}\{(a_t, b_t),t\in T; (0_{X^*},1)\}.$$ Moreover, for all $c\in X^\ast$, one has (see Remark \[rem\_4.3new\]), $$\label{eq_6.3bis} \sup ({\rm LID}_c^1)\le \begin{array}{c}\sup ({\rm LID}_c^2)\\\sup ({\rm LID}_c^3)\end{array} \le \inf ({\rm LIP}_c).$$ As consequences of Theorems \[thm\_3.2\] and \[thm\_3.2bis\], we have \[Principles of stable robust strong duality for $ {\rm(LIP}_c)$\]\[thm\_2.1nww\] The following assertions are true. ${\rm (i)} $ The next two statements are equivalent: $\rm(e_1)$ $\mathcal{E}_1 $ is a closed and convex subset of $X^*\times \mathbb{R}$, $\rm(f_1)$ The stable robust strong duality holds for the pair $({\rm LIP}_c)-({\rm LID}^1_c)$, i.e., $\inf({\rm LIP}_c)=\max ({\rm LID}_c^1)$ for all $c\in X^*$. ${\rm (ii)} $ For each $i=2,3$, the following statements are equivalent: $\rm(e_i)$ $\mathcal{E}_i $ is a closed subset of $X^*\times \mathbb{R}$. $\rm(f_i)$ The stable robust strong duality holds for the pair $({\rm LIP}_c)-({\rm LID}^i_c)$. It is clear that in this setting, one can specify sufficient conditions in Propositions \[convexity-N\] and \[closedness-N\] to guarantee the convexity and closedness of the moment cones $\mathcal {E}_i$, $i = 1, 2, 3$, and hence, the stable robust strong duality for the pair $({\rm LIP}_c)-({\rm LID}^i_c)$ holds for $i = 1, 2, 3$ as well.
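To make the inequality chain above concrete, consider the case where $T$ is finite, so that $({\rm LIP}_c)$ is an ordinary linear program and $({\rm LID}_c^2)$ its usual dual. The following minimal numerical sketch (the box-constraint instance in $\mathbb{R}^2$ is hypothetical and chosen only for hand-checkability) solves the primal by vertex enumeration and the dual by enumerating basic nonnegative solutions of $c=-\sum_t\lambda_t a_t$:

```python
import itertools

# Hypothetical finite instance of (LIP_c): minimize <c,x> s.t. <a_t,x> <= b_t.
A = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]   # a box in R^2
b = [1.0, 1.0, 1.0, 1.0]
c = (1.0, 1.0)

def solve2(p, q, r):
    """Solve the 2x2 system p.z = r[0], q.z = r[1]; None if singular."""
    det = p[0]*q[1] - p[1]*q[0]
    if abs(det) < 1e-12:
        return None
    return ((r[0]*q[1] - r[1]*p[1]) / det, (p[0]*r[1] - q[0]*r[0]) / det)

# Primal by vertex enumeration: an LP optimum is attained at a vertex.
verts = []
for i, j in itertools.combinations(range(len(A)), 2):
    v = solve2(A[i], A[j], (b[i], b[j]))
    if v and all(a[0]*v[0] + a[1]*v[1] <= bt + 1e-9 for a, bt in zip(A, b)):
        verts.append(v)
primal = min(c[0]*v[0] + c[1]*v[1] for v in verts)

# Dual (LID_c^2) over basic solutions: c = -(lam_i a_i + lam_j a_j), lam >= 0.
dual = -float('inf')
for i, j in itertools.combinations(range(len(A)), 2):
    lam = solve2((A[i][0], A[j][0]), (A[i][1], A[j][1]), (-c[0], -c[1]))
    if lam and lam[0] >= -1e-9 and lam[1] >= -1e-9:
        dual = max(dual, -(lam[0]*b[i] + lam[1]*b[j]))

assert dual <= primal + 1e-9        # weak duality, as in the chain above
assert abs(dual - primal) < 1e-9    # strong duality holds for this finite LP
```

Here both optimal values equal $-2$, in accordance with strong duality for finite linear programs; for genuinely infinite $T$ it is the closedness and convexity of the moment cones that guarantees the same conclusion.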
[*Farkas-Type Results for Linear Infinite Systems.*]{} Similar to what is done in Section 5, the duality results for the primal-dual pairs of problems $({\rm LIP}_c)-({\rm LID}^j_c)$, $j = 1, 2, 3$, will give rise to some new variants of generalized Farkas lemmas for linear infinite systems. In this way, it is easy to see that for the case $j=2$ we get a version of the Farkas lemma which goes back to [@GL98 Corollary 3.1.2] in the case where $X = \R^n$. In the next corollaries, we carry out the process for $j=1$ and $j=3$, and to the best of our knowledge, the resulting versions of Farkas lemmas for linear infinite systems obtained here are new. Their proofs are similar to those of Corollaries \[cor\_6.1ff\]-\[cor\_6.4ff\] and will be omitted. \[corolLIP1\] The following statements are equivalent: $({\rm i})$ For all $(c,s)\in X^\ast\times \mathbb{R}$, the next assertions are equivalent: $(\alpha')$ $\langle a_t,x \rangle \le b_t,\;\forall t\in T \; \Longrightarrow\; \la c,x\ra \ge s$, $(\beta')$ $ \exists \bar t\in T,\; \exists \bar \lambda \ge 0: \bar\lambda a_{\bar t}=-c\; \ \ \textrm{and} \; \ \bar\lambda b_{\bar t} \le -s,$ $({\rm ii})$ $\bigcup_{t\in T}\operatorname{co}\operatorname{cone}\{(a_t, b_t), (0_{X^*},1)\}$ is a closed and convex subset of $X^*\times \mathbb{R}$. \[Farkas lemma for linear infinite systems II\] \[corolLIP3\] The following statements are equivalent: $({\rm i})$ For all $(c,s)\in X^\ast\times \mathbb{R}$, the next assertions are equivalent: $(\alpha')$ $\langle a_t,x \rangle \le b_t,\;\forall t\in T \; \Longrightarrow\; \la c,x\ra \ge s$, $(\delta')$ $\exists \bar\lambda\ge 0: \big[\forall x\in X, \;\forall \varepsilon>0,\; \exists t_0\in T: \la c+\bar\lambda a_{t_0},x \ra -\bar\lambda b_{t_0}+\varepsilon\ge s\big],$ $({\rm ii})$ $\operatorname{cone}\operatorname{\overline{\operatorname{co}}}\{(a_t, b_t),t\in T; (0_{X^*},1)\}$ is a closed subset of $X^*\times \mathbb{R}$.
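The gap between the single-constraint certificate $(\beta')$ of Corollary \[corolLIP1\] and the finitely supported multipliers of $({\rm LID}_c^2)$ can be observed on a small hypothetical instance: below, no single multiplier with $\bar\lambda a_{\bar t}=-c$ exists, while a combination of two constraints does, so $\sup({\rm LID}_c^1)=-\infty$ although $\sup({\rm LID}_c^2)$ is finite. A sketch (all data hypothetical):

```python
import math

# Hypothetical system <a_t, x> <= b_t in R^2 (a box), with c = (1, 1).
A = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
b = [1.0, 1.0, 1.0, 1.0]
c = (1.0, 1.0)

def single_certificate(a, bt):
    """Return -lam*bt if some lam >= 0 satisfies lam*a == -c, else None."""
    lam = (-c[0] / a[0]) if a[0] != 0.0 else (-c[1] / a[1])
    ok = (lam >= 0.0 and abs(lam*a[0] + c[0]) < 1e-9
          and abs(lam*a[1] + c[1]) < 1e-9)
    return -lam * bt if ok else None

vals = [single_certificate(a, bt) for a, bt in zip(A, b)]
sup_lid1 = max((v for v in vals if v is not None), default=-math.inf)

# A two-point certificate in the sense of (LID_c^2): lam_3 = lam_4 = 1.
lam = [0.0, 0.0, 1.0, 1.0]
combo = (sum(l*a[0] for l, a in zip(lam, A)),
         sum(l*a[1] for l, a in zip(lam, A)))
assert combo == (-c[0], -c[1])                 # c = -sum_t lam_t a_t
lid2_value = -sum(l*bt for l, bt in zip(lam, b))

print(sup_lid1, lid2_value)   # -inf -2.0: sup(LID^1) < sup(LID^2)
```

This is consistent with the convexity requirement in $({\rm ii})$ of Corollary \[corolLIP1\]: the union of the cones generated by single constraints need not be convex, and then the single-constraint alternative fails.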
Appendices ========== [**Appendix A.**]{} [*Proof of Proposition \[convexity-N\].* ]{} (i) From the proof of Theorem \[thm\_3.2\] for the case $i=1$, we can see that the problem $({\rm RLIP}_c)$ can be transformed to $({\rm RP}_c)$ with $Z=\mathbb{R}$, $S=\mathbb{R}_+$, $\U=\mathscr{V}$, $G_v(.)= v^1(.)-v^2$ for all $v\in \mathscr{V}$, and in such a case, $\mathcal{M}_0=\mathcal{N}_1$. Observe that the functions $v\mapsto \la v^1,x\ra -v^2$, $x\in X$, are concave (actually, they are affine). Together with the fact that $\mathscr{V}$ is convex and $Z=\mathbb{R}$, the collection $(v\mapsto G_v(x))_{x\in X}$ is [uniformly $\mathbb{R}_+$-concave]{}. So, in the light of Proposition \[prop\_conclo\](i), $\mathcal{M}_0$ is convex, and so is $\mathcal{N}_1$. [ (ii)]{} By the same argument as above, to prove that $\mathcal{N}_3$ is convex, it is sufficient to show that the collection $(u\mapsto G_u(x))_{x\in X}$ is [uniformly $\mathbb{R}_+^{(T)}$-concave]{} with $\U=\mathscr{U}$, $Z=\mathbb{R}^T$ and $G_u(.)=(\la u^1_t,.\ra-u^2_t)_{t\in T}$ for all $u\in \mathscr{U}$ (the setting in the proof of Theorem \[thm\_3.2\] for the case $i=3$). Now, take arbitrary $\lambda,\mu\in \mathbb{R}^{(T)}_+$ and $u,w\in \mathscr{U}$. Let $\bar \lambda\in \mathbb{R}^{(T)}_+$ and $\bar u\in \mathscr{U}$ be such that $\bar \lambda_t=\lambda_t+\mu_t$, $\bar u^2_t=\min\{u^2_t, w^2_t\}$ and $$\bar u_t^1=\begin{cases} \frac{1}{\bar \lambda_t}(\lambda_t u^1_t+\mu_t w^1_t),&\textrm{if } \lambda_t+\mu_t\ne 0\\ u^1_t,&\textrm{otherwise} \end{cases}$$ ($\bar u \in\mathscr{U}$ as $\{x^\ast\in X^*: (x^\ast, r)\in \U_t\}$ is convex for all $t\in T$).
Then, it is easy to check that $$\lambda_t(\la u^1_t,x\ra -u^2_t)+\mu_t(\la w^1_t,x\ra -w^2_t)\le\bar \lambda_t (\la \bar u^1_t,x\ra-\bar u^2_t),\quad\forall t\in T,\; \forall x\in X,$$ and consequently, $$\sum_{t\in T}\lambda_t(\la u^1_t,x\ra -u^2_t)+\sum_{t\in T}\mu_t(\la w^1_t,x\ra -w^2_t)\le\sum_{t\in T}\bar \lambda_t (\la \bar u^1_t,x\ra-\bar u^2_t),\quad \forall x\in X,$$ which means $\lambda G_{u}(x)+\mu G_{w}(x)\le \bar\lambda G_{\bar u}(x)$ for all $x\in X$, yielding the uniform $\mathbb{R}_+^{(T)}$-concavity of the collection $(u\mapsto G_u(x))_{x\in X}$. The conclusion now follows from Proposition \[prop\_conclo\](i). [ (iii)]{} Recall that $\mathcal{N}_4$ is a specific form of $\mathcal{M}_0$ with $Z=\mathbb{R}$, $S=\mathbb{R}_+$, $\U=T$, and $G_t(.)= \sup_{v\in\U_t} [\la v^1, .\ra-v^2]$ for all $t\in T$ (the setting in the proof of Theorem \[thm\_3.2\] for the case $i=4$). Now, for each $t\in T$ and $x\in X$, as $\U_t=\U^1_t\times\U^2_t$ (with $\U^1_t\subset X^*$ and $\U^2_t\subset \mathbb{R}$), it holds $$G_t(x)=\sup_{x^\ast\in \U^1_t} \langle x^\ast,x\rangle -\inf_{r\in \U^2_t}r=\sup_{x^\ast\in \U^1_t} \langle x^\ast,x\rangle- \inf \U^2_t.$$ So, for all $x\in X$, because $T$ is convex, $t\mapsto \sup_{x^\ast\in \U^1_t} \la x^\ast, x\ra$ is affine, and $t\mapsto \inf \U^2_t$ is convex, the function $t\mapsto G_t(x)$ is concave. This accounts for the uniform $\mathbb{R}_+$-concavity of the collection $(t\mapsto G_t(x))_{x\in X}$. The conclusion again follows from Proposition \[prop\_conclo\](i). [ (iv)]{} Consider the ways of transforming $({\rm RLIP}_c)$ to $({\rm RP}_c)$ in the proofs of Theorem \[thm\_3.2bis\] for the cases $i=6,7$. Note that, in these ways, the uncertainty set $\U$ is always a singleton. So, the corresponding qualifying sets (i.e., $\mathcal{N}_6$ and $\mathcal{N}_7$) are always convex (see Remark \[rem\_2.1\]).
$\square$ [**Appendix B.**]{} [*Proof of Proposition \[closedness-N\].* ]{} Recall that $\mathcal{N}_i$, $i=1,2,\ldots,7$, are specific forms of $\mathcal{M}_0$ following the corresponding ways of transforming $({\rm RLIP}_c)$ to $({\rm RP}_c)$ considered in the proofs of Theorems \[thm\_3.2\] and \[thm\_3.2bis\]. So, to prove that $\mathcal{N}_i$ is closed, we make use of Proposition \[prop\_conclo\](ii), which provides some sufficient conditions for the closedness of the robust moment cone $\mathcal{M}_0$. [ (i)]{} For $i = 1$, let us consider the way of transforming $({\rm RLIP}_c)$ to $({\rm RP}_c)$ by setting $Z=\mathbb{R}$, $S=\mathbb{R}_+$, $\U=\mathscr{V}$, and $G_v(.)= \la v^1, .\ra-v^2$ for all $v \in\mathscr{V}$. For all $x\in X$, it is easy to see that the function $v\mapsto G_v(x)=\la v^1,x\ra -v^2$ is continuous, and hence, it is $\mathbb{R}_+$-usc (see Remark \[rem\_2.1eeee\](iii)). Moreover, $\operatorname{gph}\mathscr{U}$ is compact and $\mathbb{R}$ is a normed space, which ensures the fulfillment of condition $(C_0)$ in Proposition \[prop\_conclo\]. The closedness of $\mathcal{N}_1$ follows from Proposition \[prop\_conclo\](ii). [ (ii)]{} For $i = 4$, consider the way of transforming with the setting $Z=\mathbb{R}$, $S=\mathbb{R}_+$, $\U=T$, and $G_t(.)= \sup_{v\in\U_t} [\la v^1, .\ra-v^2]$ for all $t\in T$. One has that $\U=T$ is a compact set, that $t\mapsto G_t(x)= \sup_{v\in \U_t} [\langle v^1, x\rangle-v^2]$ is usc and hence $\mathbb{R}_+$-usc, and that the Slater-type condition $(C_0)$ holds. The conclusion now follows from Proposition \[prop\_conclo\](ii). [ (iii)]{} Consider the way of transforming which corresponds to $i=5$, i.e., we consider $Z=\mathbb{R}$, $S=\mathbb{R}_+$, $\U=\mathscr{U}$, and $G_u(.)= \sup_{t\in T}[\la u^1_t, .\ra-u^2_t]$ for all $u\in \mathscr{U}$. As $\mathscr{U}=\prod_{t\in T} \U_t$, the assumption that $\U_t$ is compact for all $t\in T$ entails the compactness of $\mathscr{U}$.
The other assumptions ensure the fulfillment of the conditions in Proposition \[prop\_conclo\](ii) and the conclusion follows from this very proposition. [ (iv)]{} For $ i = 7$, we use the same argument as above for transforming $({\rm RLIP}_c)$ to $({\rm RP}_c)$ in the proof of Theorem \[thm\_3.2bis\]. In this way the uncertainty set is a singleton, and hence $\mathcal{N}_7$ is convex (see Remark \[rem\_2.1\]). Now from Proposition \[prop\_conclo\](ii), the Slater-type condition ensures the closedness of the robust moment cone $\mathcal{N}_7$, as desired. $\square$ [99]{} Bolintinéanu, S.: Vector variational principles towards asymptotically well behaved vector convex functions. Lecture Notes in Econom. and Math. Systems, 481. Springer, Berlin (2000) Bolintinéanu, S.: Vector variational principles; $\varepsilon$-efficiency and scalar stationarity. J. Convex Anal. [**8**]{}, 71-85 (2001) Boţ, R.I.: Conjugate Duality in Convex Optimization. Springer-Verlag, Berlin (2010) Boţ, R.I., Jeyakumar, V., Li, G.: Robust duality in parametric convex optimization. Set-Valued Var. Anal. [**21**]{}, 177-189 (2013) Chen, J., Li, J., Li, X., Lv, Y., Yao, J.-C.: Radius of robust feasibility of system of convex inequalities with uncertain data. J. Optim. Theory Appl. (to appear) Chuong, T.D., Jeyakumar, V.: An exact formula for radius of robust feasibility of uncertain linear programs. J. Optim. Theory Appl. [**173**]{}, 203-226 (2017). DOI 10.1007/s10957-017-1067-6 Chen, J., Köbis, E., Yao, J.-C.: Optimality conditions and duality for robust nonsmooth multiobjective optimization problems with constraints. J. Optim. Theory Appl. (to appear). DOI: 10.1007/s10957-018-1437-8 Cho, H., Kim, K.-K., Lee, K.: Computing lower bounds on basket option prices by discretizing semi-infinite linear programming.
Optimization Letters, [**10**]{} (2015). DOI: 10.1007/s11590-015-0987-z Daum, S., Werner, R.: A novel feasible discretization method for linear semi-infinite programming applied to basket option pricing. Optimization, [**60**]{}, 1379-1398 (2011) Dinh, N., Jeyakumar, V.: Farkas’ lemma: Three decades of generalizations for mathematical optimization. TOP, [**22**]{}, 1-22 (2014) Dinh, N., Goberna, M.A., López, M.A., Volle, M.: A unifying approach to robust convex infinite optimization duality. J. Optim. Theory Appl. [**174**]{}, 650-685 (2017) Dinh, N., Goberna, M.A., López, M.A., Mo, T.H.: Robust optimization revisited via robust vector Farkas lemmas. Optimization [**66**]{}, 939-963 (2017) Dinh, N., Long, D.H.: Sectional convexity of epigraphs of conjugate mappings with applications to robust vector duality. Acta Math. Vietnam. (to appear) Dinh, N., Goberna, M.A., Volle, M.: Primal-dual optimization conditions for the robust sum of functions with applications. Applied Mathematics & Optimization (to appear). DOI: 10.1007/s00245-019-09596-9 Dinh, N., Goberna, M.A., Volle, M.: Duality for the robust sum of functions. Set Val. Var. Anal. (to appear). DOI: 10.1007/s11228-019-00515-2 Dinh, N., Mo, T.H., Vallet, G., Volle, M.: A unified approach to robust Farkas-type results with applications to robust optimization problems. SIAM J. Optim. [**27**]{}, 1075-1101 (2017) Dinh, N., Nghia, T.T.A., Vallet, G.: A closedness condition and its applications to DC programs with convex constraints. Optimization [**59**]{}, 541-560 (2010) Fang, D., Li, C., Yao, J.-C.: Stable Lagrange dualities for robust conical programming. Journal of Nonlinear and Convex Analysis [**16**]{}, 2141-2158 (2015) Goberna, M.A., López, M.A.: Linear Semi-Infinite Optimization. John Wiley & Sons, England (1998) Goberna, M.A., Jeyakumar, V., Li, G., López, M.A.: Robust linear semi-infinite programming duality under uncertainty. Math. Program. Ser.
B **139**, 185-203 (2013) Goberna, M.A., Jeyakumar, V., Li, G., Vicente-Pérez, J.: Robust solutions of multi-objective linear semi-infinite programs under constraint data uncertainty. SIAM J. Optim., [**24**]{}, 1402-1419 (2014) Jeyakumar, V., Li, G.: Strong duality in robust convex programming: complete characterizations. SIAM J. Optim., **20**, 3384-3407 (2010) Jeyakumar, V., Li, G.Y.: Robust Farkas’ lemma for uncertain linear systems with applications. Positivity [**15**]{}, 331-342 (2011) Li, G.Y., Jeyakumar, V., Lee, G.M.: Robust conjugate duality for convex optimization under uncertainty with application to data classification. Nonlinear Analysis [**74**]{}, 2327-2341 (2011) Li, G., Ng, K.F.: On extension of Fenchel duality and its application. SIAM J. Optim. [**19**]{}, 1489-1509 (2008) [^1]: International University, Vietnam National University - HCMC, Linh Trung ward, Thu Duc district, Ho Chi Minh city, Vietnam ([*ndinh@hcmiu.edu.vn*]{}). Part of the work of this author was realized when he visited the Center for General Education, China Medical University, Taiwan. He expresses his sincere thanks for the hospitality he received. [^2]: VNUHCM - University of Science, District 5, Ho Chi Minh city, Vietnam, and Tien Giang University, Tien Giang town, Vietnam ([*danghailong@tgu.edu.vn*]{}) [^3]: Center for General Education, China Medical University, Taichung 40402, Taiwan ([*yaojc@mail.cmu.edu.tw*]{}) [^4]: This work is partly supported by the project 101.01-2018.310, NAFOSTED, Vietnam [^5]: In [@Bot10] this notion is named [*Star $S$-usc*]{} [^6]: For a function, we prefer the lowercase letter $h$ to $H$. [^7]: The model of Problem $({\rm RLP}_c) $ was considered in [@DGLM17-Optim] where some characterizations of its solutions were proposed. [^8]: $\mathcal{N}_8$, $\mathcal{N}_9$ are also convex.
--- abstract: 'In the present article we investigate the influence of the contact region on the distribution of the chemical potential in integer quantum Hall samples, as well as the longitudinal and Hall resistance as a function of the magnetic field. First we use a standard quantum Hall sample geometry and analyse the influence of the length of the leads where current enters/leaves the sample and the ratio of the contact width to the width of these leads. Furthermore we investigate potential barriers in the current injecting leads and the measurement arms in order to simulate non-ideal contacts. Second we simulate nonlocal quantum Hall samples with applied gating voltage at the metallic contacts. For such samples it has been found experimentally that both the longitudinal and Hall resistance as a function of the magnetic field can change significantly. Using the nonequilibrium network model we are able to reproduce most qualitative features of the experiments.' author: - Christoph Uiberacker - Christian Stecher - Josef Oswald title: 'A systematic study of non-ideal contacts in integer quantum Hall systems' --- Introduction ============ Nowadays the steps in the transversal (Hall) resistance at the inverse of integer multiples of $e^2/h$ are well understood in terms of one-dimensional edge channels [@datta95]. The number of such channels decreases with increasing magnetic field due to the increasing energy spacing between the Landau levels. Despite this clear physical picture, it was found experimentally that the width and magnetic field values of the transition region between plateaus can change in the case of non-ideal contacts, e.g. when applying gating to the contacts. In recent years, experiments using scanning probe techniques (see Ahlswede et al. [@ahlswede]) have been performed to visualize the distribution of the nonequilibrium potential near non-ideal contacts.
On the other hand, the distribution of current in quantum Hall samples was also investigated [@dominguez89] in order to optimize the contact geometry. A significant number of experiments (see [@alphenaar90; @mueller90; @dahlem]) found a strong indication that the contacts do not behave as ideal ones, meaning that not all edge channels seem to arrive at the reservoir of the metallic contacts of the sample. In addition, imperfect equilibration among edge channels can also lead to deviations in the measured resistance values. Furthermore, deviations of the $R_{xx}$ peaks from the expected shapes were observed for non-ideal contacts by Dahlem et al. [@dahlem], who proposed that the deviation might result from the crystalline orientation of the edge of the contacts. The dependence of the contact resistance on the crystal orientation has also been the subject of other experiments [@goektas]. They embedded Au/Ge/Ni contacts in various $Al_{0.67}Ga_{0.33}As/GaAs$ heterostructures and found that the contact resistance varied with the length and orientation of the interface line of the contact. The experimental findings show nontrivial results for the potential distribution and resistances as a function of details of the sample geometry. In order to simulate such samples we use the nonequilibrium network model (NNM) (Refs. [@oswald98] and [@oswald06]) because the actual geometry of the sample can be taken into account. In addition, the NNM has already proven successful in simulating even complicated sample geometries like anti-Hall bars [@antiHallbar]. Regarding theoretical investigations of non-ideal contacts in simplified models, van Wees et al. discuss the influence of non-ideal contacts in terms of a nonzero reflection probability of electrons at the contact, which leads to waves traveling in the system [@vanWees90]. This leads to potential resonances and a fine-structure of the conductance between plateaus.
We did not consider such effects here and assumed zero reflection at all our contacts. Note that in the case of a slowly varying electrostatic potential from the system into the contact, the reflection would be suppressed [@glazman]. The article is structured as follows: In Sec. \[s\_theory\] we present the setup and underlying theory of the NNM. We investigate the influence of the geometric shape of the contact and the current-injecting leads as well as potential barriers within leads and measurement contacts in Sec. \[s\_standardgeom\] of the results. Simulations of gated samples are then presented in Sec. \[s\_gatings\]. Finally we summarize and conclude in Sec. \[s\_conclusion\]. Theory {#s_theory} ====== The exact Hamiltonian of an integer quantum Hall sample is not easily formulated because the disorder potential (long ranged), which defines the underlying electrical potential besides the atomic (microscopic) environment and the confining potential (due to interfaces), depends on details of the exact configuration within the sample. This led to a variety of more or less simplifying models. The most prominent model of this type is the Chalker-Coddington model [@ccm], which is able to predict delocalized states but cannot describe the nonequilibrium steady state. However, it seems to lead to the correct exponent of the correlation length [@hohl03; @cain01; @cain04], from which we conclude that the topology of the setup of the network is sensible even in the nonequilibrium regime. Due to the strong magnetic fields, Landau levels develop with localization within the magnetic length $l_B=\sqrt{\hbar/eB}$ given by the cyclotron motion; $e$ and $B$ denote the elementary charge and the magnetic induction. Considering the very different length scales involved, electrons are confined to move semi-classically along equipotential lines of the disorder potential [@localizedstates].
Within this approximation, transport occurs via tunneling constrictions, that is, saddle points in the underlying potential landscape. The elastic tunneling transition probabilities across such saddles have been calculated quantum mechanically in equilibrium [@fertig87; @buettiker90]. In order to be able to discuss special wiring configurations we have to define the notion of longitudinal and transversal components with the help of a curvilinear coordinate system, determined by the local orientation of the contour of the chemical potential in the plateau regime. In this way the local longitudinal direction is defined within the plateau state via the local tangent to the equipotential line. Clearly no distinction between longitudinal and transversal resistance can be made in the transition region. If two measurement contacts can then be connected within the plateau region along an equipotential line, we measure a longitudinal resistance. In case of intersections a transversal (Hall) component is measured. Note, however, that the NNM calculates within a fixed coordinate system labeled by $(x,y)$. As required, the resistances do not depend on the choice of these coordinates. The physical content of the NNM can be understood in terms of local equilibrium (see e.g. Ref. [@zubarev] for a formulation in continuous space variables) when reformulating it for the network of semiclassical wavefunctions. In this way we attribute unique thermodynamical quantities such as the chemical potential to each single wavefunction, given by a trajectory along the contour connecting two saddle points. Oh and Gerhardts [@gerhardts] use a closely related concept of local equilibrium, albeit in a continuum description, together with the Thomas-Fermi or Hartree approach to the self-consistent calculation of charges. They use a simple model geometry and assume translational invariance in the longitudinal direction, which by continuity results in a current independent of the transversal direction.
They also include the velocity of particles when averaging over states to obtain the charge density. Current injection and the important influence of contacts are, however, not discussed. Setup of the NNM ---------------- We start by replacing the underlying potential landscape, consisting of the long-ranged random potential due to impurities and doping atoms and the confining potential due to the surface of the sample, by a regular grid of nodes. This is fairly general because it is possible to model trajectories of arbitrary shape by states with appropriate energies at the saddles. Against this background we replace the random potential by an effective oscillating potential $V(x,y)=\tilde{V}[\cos(\omega y)-\cos(\omega x)]$ with period $L:=2\pi/\omega$ and amplitude $\tilde{V}$ obtained as averages of the random potential. The potential distribution is then only topologically changed (e.g. deformations of contours) but edge channels and backscattering remain. Due to the topology of the saddle points each node connects to four chemical potentials and is visualized as a circle with links labeled by $u_1,\dots,u_4$ counter-clockwise, with $u_1$ denoting the upper right corner (see Fig. \[f\_node\]). Consistency demands that only two independent differences of potentials exist, which represent the two components of the local electric field. Opposite differences are therefore equal [@oswald06]. We then define the ratio of the longitudinal to the transversal field component, given by $$P := \frac{E_x}{E_y}=\frac{u_1-u_2}{u_1-u_4}=\frac{u_4-u_3}{u_2-u_3} \quad .$$ In this way we construct a ”transfer” equation for the chemical potentials $$\left[\begin{array}{c} u_2 \\ u_3 \end{array}\right] = \left[\begin{array}{cc} 1-P & P \\ -P & 1+P \end{array}\right] \left[\begin{array}{c} u_1 \\ u_4 \end{array}\right] \quad .$$ The chemical potential distribution can be calculated as a boundary value problem once the values of $P$ at each node are given.
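As a quick consistency check, the transfer matrix must reproduce the same value of $P$ from both quotients of potential differences in the definition above. A minimal sketch (the numerical values are arbitrary test values):

```python
# Consistency check of the node transfer equation: both quotients of potential
# differences must reproduce the same P = E_x/E_y (numbers are arbitrary).
def transfer(u1, u4, P):
    u2 = (1.0 - P) * u1 + P * u4
    u3 = -P * u1 + (1.0 + P) * u4
    return u2, u3

u1, u4, P = 1.0, 0.25, 0.4
u2, u3 = transfer(u1, u4, P)
assert abs((u1 - u2) / (u1 - u4) - P) < 1e-12   # first quotient
assert abs((u4 - u3) / (u2 - u3) - P) < 1e-12   # second quotient
```

The two assertions encode the statement that only two independent potential differences exist at each node.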
External contacts supplying current are modeled by saddles with a pair of trajectories that point into/out of the sample ($E_x=0$, compare the Landauer-Büttiker picture [@datta95]) and one of these two trajectories is kept fixed at the chemical potential of the supply contact. The random potential is mapped to the NNM by setting up the grid in such a way that in every second column the nodes are rotated clockwise by an angle of 90° and $P'=1/P$ is used for $P$. This reflects the topology of saddle points of the underlying oscillating effective potential $V(x,y)$. As a result loops can be formed in isolated valleys or around peaks. Such loops have actually been observed experimentally [@CarrierLoops]. We use the Landau levels and a self-consistent Thomas-Fermi approximation in order to obtain the charge density at zero temperature. Finite temperatures increase the transition regions, as was analysed in a previous paper [@oswald98_2]. The screening of the electrical bare potential is then obtained from the charge density simply by multiplication with a given constant factor of $C=50mV/(10^{11}cm^{-2})$ (units $e=1$). Furthermore we use a constant broadening of $0.5meV$ to mimic Landau bands generated by the disorder potential. Concerning spin splitting, a $g$-factor of $g=4$ was used for all calculations [@gfactor]. We neglect effects due to feedback of the chemical potential onto the charge distribution and hence on the values of $P$. Note that this amounts to the assumption that states across saddles describe a linear interpolation between states of two chemical potentials, that is, the system traverses from equilibrium to the nonequilibrium steady state adiabatically. In order to calculate the $P$-values we adopt the high-field approximation of localized electron states in form of contour lines of the potential, also at saddle points. In this way we assume a purely off-diagonal conductance tensor within the sample with a contribution of $e^2/h$ from each Landau level.
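The constant-factor screening described above can be sketched as a damped fixed-point iteration. The linear zero-temperature filling function $n(V)$ and the value of the density of states $D$ used below are hypothetical simplifications for illustration; only the constant $C$ is taken from the text (with $e=1$, $50\,$mV corresponds to $50\,$meV of potential energy):

```python
# Damped fixed-point sketch of the constant-factor screening (e = 1).
# C is the constant quoted in the text; the linear filling n(V) = D*max(0, E_F - V)
# and the value of D are hypothetical simplifications for illustration.
C = 50.0        # meV per 10^11 cm^-2 (from the text)
D = 0.1         # hypothetical density of states, 10^11 cm^-2 per meV
E_F = 10.0      # meV

def screened(v_bare, alpha=0.2, iters=400):
    v = v_bare
    for _ in range(iters):
        n = D * max(0.0, E_F - v)                          # charge accumulates in valleys
        v = (1.0 - alpha) * v + alpha * (v_bare + C * n)   # damped update
    return v

bare = [-5.0, 5.0]                 # two sample points of the bare potential, meV
scr = [screened(v) for v in bare]
# the potential variation is reduced by roughly 1/(1 + C*D) = 1/6 here
assert scr[1] - scr[0] < (bare[1] - bare[0]) / 2.0
```

In this linearized setting the potential variation is reduced by a factor $1/(1+CD)$, illustrating how screening flattens the bare potential while leaving its topology intact.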
Then we can immediately write $P=I_y/I_x$ as the ratio of the transversal to the longitudinal component of the current. According to the edge channel picture we call $T$ the probability of transmission in the longitudinal direction; therefore we get $I_y\propto R$ and $I_x\propto T=1-R$. This can be used to calculate $P$ from details of the Landau levels. Compare the work of Streda et al. [@streda87] in this respect, who describe the whole sample as such a single node. The current of the sample is finally calculated by taking the sum of all transversal potential differences in the contact nodes and multiplying by $e^2/h$. Equality of total input and output current determines the potential distribution uniquely. The present algorithm has the advantage that only the electric field has to be calculated. The only details used from the Landau levels are the energy spacing of the levels and the number of levels below $E_F$. Similar to other approaches [@gerhardts; @nachtwei] we are able to calculate the longitudinal and transversal resistance by identifying the current with the macroscopic current direction. The local dissipative component could be found with a formulation of tunneling and electron statistics in nonequilibrium together with a principle to describe the nonequilibrium steady state, such as, e.g., minimum entropy production, which would however make calculations much more demanding. Such investigations are under way and will be published elsewhere. In this spirit we use the expression for the tunneling transmission $R_{mn}$ between edge channels through the saddle in the presence of a magnetic field derived in Refs. [@fertig87] and [@buettiker90] to obtain $P$ (Ref. [@oswald06]) $$P = \delta_{mn}\frac{R_{mn}}{1-R_{mn}}= \exp\left[-\frac{L^2B}{h\tilde{V}}\epsilon\right] \quad ,$$ with $\epsilon:=E_F - V_S$ the difference between the Fermi energy and the saddle energy $V_S$, and $B$ denoting the magnetic field strength.
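From $P=R_{mn}/(1-R_{mn})$ and $T=1-R_{mn}$ one obtains the longitudinal transmission $T=1/(1+P)$ directly from the saddle formula above. A sketch in illustrative units, lumping the prefactor $L^2B/(h\tilde{V})$ into a single arbitrary constant $k$:

```python
import math

# Sketch of the saddle formula: P = exp(-k * eps) with eps = E_F - V_S and the
# prefactor L^2*B/(h*Vtilde) lumped into one illustrative constant k.
# From P = R/(1-R) and T = 1-R one gets the longitudinal transmission T = 1/(1+P).
def P_saddle(k, eps):
    return math.exp(-k * eps)

def T_saddle(k, eps):
    return 1.0 / (1.0 + P_saddle(k, eps))

k = 5.0                            # arbitrary units
assert T_saddle(k, 2.0) > 0.99     # E_F well above the saddle: channel transmitted
assert T_saddle(k, -2.0) < 0.01    # E_F well below the saddle: channel reflected
assert P_saddle(k, 0.0) == 1.0     # exactly at the saddle: equal field components
```

For $E_F$ well above the saddle the channel is fully transmitted ($P\to 0$), well below it is reflected, and exactly at the saddle $P=1$, i.e., equal longitudinal and transversal field components.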
In the plateau region the loops in the bulk are isolated, but in the transition between plateaus a fraction of the electrons in the edge channels tunnels into the bulk, leading to a potential difference in the direction of the mainly longitudinal current and therefore to dissipative transport. Equilibration among edge channels is allowed for by simulating tunneling between edge channels with a chosen constant decay length for the involved states. Results ======= Standard sample geometry {#s_standardgeom} ------------------------ In this paper we present a systematic study of non-ideal contacts. The sample is set up as a typical Hall bar with two current-injecting metallic contacts and four measuring contacts to obtain longitudinal and transversal resistances. In the course of this study we investigate the distribution of the chemical potential across the sample when varying the ratio of the contacted cross-section of the leads as well as the length of the leads. Furthermore we placed a barrier across the width of the leads in the vicinity of the current-injecting contacts (contact nodes), formed by a potential energy ridge consisting of a plateau and quadratic tails, to simulate non-ideal matching between the states in the contact and the 2d electron gas. The curvature of the saddles was chosen as $a:=2h\tilde{V}/L^2=1$ in all calculations of this section. Regarding the metallic contacts and the leads, the quantities relevant within the nonequilibrium steady state are the ratio $r_c:=W_C/W_L$ of the width of the contact $W_C$ to the width of the lead $W_L$ and the ratio $r_l:=L/W$, $0\le r_l<\infty$, of the lead length $L$ to its width $W$. We motivate this choice by noting that the potential distribution tends to adapt to the sample geometry and therefore simply scales on multiplying $x$ and $y$ with a common scale factor. We present an overview of transversal and longitudinal resistances against magnetic field in Fig. \[f\_RLeads\] by comparing samples with different $r_l$. In Figs.
\[f\_RContactdiff\] and \[f\_Rdiff\] we show differences to illustrate small changes while changing $r_c$ and $r_l$. Introducing a barrier in the current injecting leads is discussed in Fig. \[f\_BarrierRdiff\]. The relative error of Hall resistances for various calculated setups is demonstrated in Fig. \[f\_RrelErr\]. ### Different metallic contacts We investigate two limits when applying the external potential, namely point contacts ($r_c\to 0$) on the one hand and contacting the whole cross section of the lead ($r_c=1$) on the other. Intuitively, for long enough leads we expect the potential to adapt to the shape of the lead, and the geometrical shape of the contact should be irrelevant. This has indeed been found in experiments [@LeadLengthConvergence]; the rule of thumb for ensuring that the measurement does not depend on the form of the contact is to use leads with $r_l\ge 4$. In Fig. \[f\_contacts\] we compare the potential distribution for samples with $r_c\to 0$ and $r_c=1$ in the plateau and transition region, using $r_l=1.5$ for both samples. It turns out that the potentials are very similar in the whole sample except near the contacts. However, we note that, in contrast to the rule of thumb, the length scale around the contact where the potential contours have different slopes is only a fraction of the width of the lead. In the plateau region this length is even smaller than in the transition region, because $P\approx 0$ near the boundary forces the potential contours to run parallel to the longitudinal direction. In Fig. \[f\_RContactdiff\] we show the difference between resistances for point contacts and the respective values for $r_c=1$. We obtain very different magnitudes of differences depending on $r_l$ and on the magnetic field $B$. Apparently the differences are very small for $r_l=1.5$ up to a field of about $10T$. On the other hand, for $r_l=0$ the differences are much larger and one can clearly see $\Delta R_{xy}$ tracing the structure of peaks in $R_{xx}$.
Furthermore it is interesting to note that, independent of $r_l$ and $B$, the difference of $R_{xx}$ is significantly smaller than for $R_{xy}$. We conclude that the difference between point- and line-contacts vanishes quickly with increasing $r_l$, and that $R_{xy}$ is more strongly influenced. ### Length of the leads We calculated five sample geometries, one with practically no leads (limit $r_l\to 0$), one with small leads ($r_l = 0.5$), one with medium length ($r_l=1.5$) and two with long leads ($r_l=3$, $r_l=5$). Chemical potential distributions at selected magnetic fields are shown in Fig. \[f\_lengthLead\]. To our surprise, even for $r_l\ll 4$ (at least down to $r_l=1$) the field contours adapt from the contacts to the parallel configuration in the bulk within the plateau region before leaving the leads, contrary to the rule of thumb. Furthermore it seems that the lengthscale for this adjustment depends very much on the length of the leads itself. The shorter the leads, the faster the adjustment, again pointing to a scaling behaviour of the chemical potential. We can understand this behaviour by noting that the change in geometry when leaving the lead, in our case the widening of the sample by the measurement contacts, forces the potential to adapt to the core of the sample. The calculated resistance differences from changing $r_l$ are nevertheless consistent with the rule of thumb, meaning that resistance values converge nonlinearly with increasing $r_l$, with $r_l=3$ already close to the limit. In Fig. \[f\_RLeads\] we find the typical dependence of the transversal and longitudinal resistances on the magnetic field. At low field values the peaks of the longitudinal resistance clearly show that spin levels are not completely resolved, as the two pairs around $2.5T$ and $3-4T$ are not yet split. We then show the difference of the transversal and longitudinal resistances for various values of $r_l$ relative to the corresponding resistances for long leads, $r_l=5$, in Fig. \[f\_Rdiff\].
It is apparent that $R_{xx}$ decreases slightly with increasing $r_l$ while $R_{xy}$ increases at the same time. We also see a convergence with increasing $r_l$: the differences for $r_l=1.5$ are less than half of those for $r_l=0$, and only a few ohms at $r_l=3$. One further notes that the differences of $R_{xx}$ and $R_{xy}$ have opposite sign and approximately equal magnitude. Making the plausible assumption that the current hardly changes with $r_l$, this points to the fact that the sum of the transversal and longitudinal potential differences is approximately constant for various $r_l$. This can be explained qualitatively by the fact that all chemical potentials are evaluated by current conservation, because then the sum of potentials should stay constant if the current does not change. This also hints at why $\Delta R_{xx}$ and $\Delta R_{xy}$ in Fig. \[f\_RContactdiff\] are not symmetric around zero when altering $r_c$: in that case we expect the current to change with the contact geometry. ### Contact barriers All previous calculations have been made under the assumption that the equilibrium electrical potential within the leads is fairly flat. This is somewhat unrealistic because the interface between metal and 2DEG usually develops a barrier of different potential energy due to charge transfer processes. We try to mimic realistic contacts by introducing such a barrier in front of the metallic contacts. The height of the bare barrier was chosen to be $50meV$ at its plateau, which is reduced by screening to a value in the range $0.5meV$ to $8.5meV$ depending on the Fermi energy. The distribution of the chemical potential is shown in Fig. \[f\_barrier\] for low and high magnetic field in the plateau and transition region for the case of each current-injecting contact having a barrier. Furthermore we compare the scenario of having a barrier in only one of the leads for the transition region within the high-field case.
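The barrier geometry described above (a flat plateau with quadratic tails, bare height $50meV$ screened down to the $0.5meV$ to $8.5meV$ range) can be sketched as follows. The exact functional form and the single screening factor are assumptions for illustration, not taken from the paper:

```python
# Hedged sketch of the 'potential energy ridge' barrier: plateau plus
# quadratic tails. Parameter names and the tail parametrization are
# hypothetical; only the plateau-with-quadratic-tails shape and the
# 50 meV -> 0.5-8.5 meV screening figures come from the text.

def barrier_potential(x, height, plateau_half_width, tail_length):
    """Bare barrier along the lead: value `height` on the plateau,
    falling quadratically to zero over `tail_length` on either side."""
    d = abs(x) - plateau_half_width
    if d <= 0.0:
        return height          # on the plateau
    if d >= tail_length:
        return 0.0             # beyond the tail
    return height * (1.0 - d / tail_length) ** 2

# Screening reduces the effective height; here that reduction is folded
# into a single illustrative factor.
bare_mev = 50.0
screened_mev = bare_mev * 0.05   # 2.5 meV, inside the quoted 0.5-8.5 meV range
```

The profile is continuous at the plateau edge and vanishes smoothly at the end of the tails, as a saddle-forming ridge should.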
The most striking difference between low and high field is that the equipotential contours in the bulk make a larger angle with the longitudinal direction at high field than at low field. This agrees with the increasing peak maxima of the longitudinal resistance when increasing the magnetic field. There are two aspects to understanding this behaviour. We note that $P=0$ (transport by edge channels only) holds for all network layers except the topmost one. Therefore the decreasing number of Landau levels decreases the total current and consequently increases the resistances, because each of the layers with $P=0$ contributes the same share of the total current. On the other hand, with an increasing number of layers with $P=0$ the potential distribution is forced more strongly toward a plateau-like potential distribution, even if the topmost layer has $P>0$ in the transition region. We compare the influence of various configurations of barriers on the longitudinal resistance in Fig. \[f\_RBarriers\]. Obviously, considering barriers in the measurement contacts has a larger effect than non-ideal current-injecting leads alone. We note that the transition starts at slightly lower field for barriers in all arms and the transition region is enlarged. In Fig. \[f\_BarrierRdiff\] we plot the difference between the resistances for the left lead or both leads having a barrier and the corresponding values for the sample without barriers. Interestingly, $R_{xx}$ decreases and $R_{xy}$ increases when having a barrier in the leads, similar to the behaviour of the resistances with increasing $r_l$. The difference between one and both leads with barrier is fairly small, especially for the transversal resistance. Both leads having a barrier produced a larger $|\Delta R_{xx}|$ than one barrier, as expected. In addition we compared calculations with barriers of bare height $100meV$ in the current-injecting leads with the analogous calculations for $50meV$ described above.
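The current-sharing argument above can be illustrated under the standard edge-channel assumption that each fully transmitting ($P=0$) layer contributes one conductance quantum $e^2/h$. This sketch is not taken from the paper's model; it only reproduces the stated trend that fewer conducting Landau levels mean less total current and larger resistances:

```python
# Side calculation under a textbook assumption (not the paper's equations):
# each layer with P = 0 carries the same current, so the total current grows
# with the number N of such layers and the quantized Hall resistance falls
# as h/(N e^2).

H_OVER_E2 = 25812.807  # von Klitzing constant R_K = h/e^2 in ohm

def hall_resistance(n_layers):
    """Quantized Hall resistance for n_layers conducting edge-channel layers."""
    return H_OVER_E2 / n_layers
```

With fewer conducting layers the returned resistance rises, matching the qualitative statement in the text.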
The difference of resistances is significantly smaller than for the comparison with the case of no barriers. This can be understood from the fact that the height after screening is not very different, despite the ratio of 2 for the unscreened barriers. However, we observe a much larger error for samples with barriers in the measurement contacts. This is to be expected because these barriers directly influence the potential differences measured. Apparently this has a larger effect than a change in total current due to barriers in the current-injecting leads. ### Dissipation We can learn valuable details about dissipation by inspecting the chemical potential distribution. We find in Fig. \[f\_contacts\] that the equipotential contours always form loops that start from above a current-inducing contact and end below the [*same*]{} contact. Therefore, because by the setup a finite current flows from one contact to the other, this current must cross contours, meaning that the current flowing through the sample has a dissipative component. Assuming homogeneous diagonal entries of the conductivity tensor at saddles across the sample (which should be an adequate approximation for a constant equilibrium potential energy across the sample), we find that the dissipative current should be inversely proportional to the gradient of the potential. We conclude that entropy production occurs quite locally, at the lower end of the left current-injecting contact and at the upper end of the right current-injecting contact in the geometry of our samples. This is in accord with measurements [@kent91]. Especially for contacts with barriers we expect hardly any current to cross the barrier; rather, it will follow potential contours around the barrier. This is always possible if the barrier is not too high, as in our simulated cases.
We note that flow along the equipotential contours agrees with the experience that current flows along the edges [@vKlitzing95], whereas the Landauer-Büttiker picture would give maximal current in the bulk, as there are the maximum number of channels up to the Fermi energy [@tsemekhman95]. Nonlocal configurations and gated samples {#s_gatings} ----------------------------------------- After investigating systematic changes at the contacted leads where current is injected into the sample, we proceed to simulate nonlocal geometries of recent experimental investigations of non-ideal contacts with gatings. To set up the simulation we first generated two samples with the same geometry as used in the experiment. In the following the contacts are numbered by starting with 1 at the left boundary edge and increasing the numbers in the clockwise direction. The first sample (S1) has the external potential connected to contacts 1 and 2, and the voltage drop is measured between contacts 4 and 3. In this way we measure a longitudinal voltage difference because (4,3) can be connected along equipotential lines in the plateau region. The second sample (S2) has the external potentials connected to contacts 2 and 4 and measures the voltage difference between contacts 1 and 3. This therefore amounts to a transversal voltage drop. We ran calculations of both sample geometries by sweeping the magnetic field up to $12T$. We used ideal samples without gating and ones where gating of some of the contacts has been applied in analogy to the experiment. The samples are illustrated in Figs. \[f\_sample1\] and \[f\_sample2\]. In all calculations of this subsection the saddle curvature was chosen to be $a:=2h\tilde{V}/L^2=0.1$. Before discussing the potential distributions, we want to point out a general problem in this context. In analogy to the experiment, the bulk region of the sample carries electrical potential even when it is insulating (plateau regime).
This potential distribution is left over from the previous transition between plateaus, after which the bulk became decoupled from the edges due to insufficient equilibration time. Consequently the bulk potential has physical relevance only within the transition between plateaus. ### Enhanced longitudinal resistance in sample 1 By sweeping the field we found that the sample with gating on contacts 1 and 4 shows an extra $R_{xx}$ peak centered around $B=5.7T$ and a significantly enlarged peak in the interval $[8.5T,11T]$, which is at the same time much broader than the respective peak of the sample without gates. Furthermore it forms a plateau in the middle, however with a dip around $B=10.33T$. The plot is given in Fig. \[f\_Rxx\]. It is apparent that outside the two intervals $[5.5T,6T]$ and $[8.5T,11T]$ the samples with and without gating show qualitatively the same resistance, even with slightly higher values for the ideal sample. To understand all these features we concentrate on the potential distribution across the sample, which is given in Fig. \[f\_PotDistL\] for four magnetic fields within or in the vicinity of the larger peak of the longitudinal resistance of the gated sample. At $8.00T$ we still have perfect edge channels and therefore $R_L$ is zero. At around $8.5T$ dissipation sets in and $R_L$ becomes nonzero. Concentrating on the distribution at $8.86T$, we find that the reason is an equilibration of channels occurring at contact 3. Around a magnetic field of $10.3T$ a dip of $R_L$ develops, which can be attributed to equilibration among edge channels. Note that the edge channel with the potential of contact 1 strongly equilibrates at the lower left corner with other edge channels which are at the potential of contact 2 and have not yet been equilibrated to the potential of contact 1 due to the gating. This mechanism results in less voltage drop between contacts 3 and 4 because the main drop occurs in the lower left corner.
Finally, around $11T$ the dissipation vanishes in the gated and ungated samples and we reach the next plateau region. Note that similar equilibrations among edge channels are responsible for the non-ideal dissipation producing the peak starting at $8.86~T$ and also the peak centered around $5.7T$. ### Shift of transition in transversal resistance in sample 2 For this calculation we used sample 2 with different gating configurations. Figure \[f\_Rxy\] shows the transversal resistance, obtained from the voltage difference measured between contacts 1 and 3. There are no qualitative differences between the ungated sample and gating configurations a) and b). However, gating all contacts leads to the interesting phenomenon of shifting the last plateau transition to a lower value of $B$ by the remarkable amount of about $2T$. Furthermore, for gating configuration c) a peak develops around $5.75T$ due to changes in edge channel equilibration, where all other gating configurations are within the plateau. It seems that this transition also starts shifted to lower magnetic fields but then falls back to the plateau value. This can happen when the equilibration region among edge channels moves from outside the range between the measurement contacts to inside. As an intuitive explanation for the shift of the last plateau transition, it comes to mind that due to gating at all contacts isolated edge channels can be generated, which couple to only one contact. Noting that the energy of the states qualitatively follows the underlying potential energy surface and that the Fermi energy in equilibrium is to a large extent given by bulk properties, we arrive at the conclusion that the number of occupied states is indeed decreased at the gating. The plots of the distribution of chemical potentials in Fig. \[f\_decoupledEC\] support this explanation.
We find that at $8.3~T$ with and without gating the edge channels from contact 2 to contact 4 have the same potential, due to equilibration at contact 4 where the external potential is applied. On the other hand, for $9.5~T$ the gating configuration c) clearly develops two edge channels between contacts 2 and 4 of different potentials, namely one with the potential of contact 4 and the other with that of contact 2. This results in an isolated edge channel that cannot contribute to the current and effects the shift of the transition. Summary and Conclusion {#s_conclusion} ====================== In the present paper we summarize a series of calculations of resistances and chemical potential distributions for integer quantum-Hall samples, with the aim to investigate the influence of the shape and potential energy of nonideal contacts where the current enters/leaves the sample. In the first part of the paper we investigate the influence of the ratio of the contacted part of the cross-section of the leads, the length of the leads, and potential barriers within the leads and measurement contacts. The contours of the chemical potential more or less adapt to the local geometry; for instance they “flow” around barriers. From the potential contours we were able to identify the regions where dissipation will occur. Resistances strongly depend on the details of the sample used. We find relative errors that range up to a few percent in the transition region. We conclude that the quantum Hall effect is robust against such changes only within the plateau regime, that is, whenever the Fermi energy lies between two Landau levels. In the second part we calculated gated quantum-Hall samples in nonlocal geometries that have recently been used in experiments on non-ideal contacts. Various different configurations of gating at the metallic contacts are considered in analogy to the experiment. We find that partial equilibration near a remote gated contact can lead to enhanced dissipation.
On the other hand if all involved contacts are gated it might happen at sufficiently high magnetic fields that one Landau level is connected to only one of the contacts and as a consequence does not contribute to transport and therefore the corresponding plateau transition is shifted down in magnetic field. We believe that our calculations give important information for experimentalists, especially regarding questions of metrology. Recent experiments [@baumgartner07] used a scanned tip to apply local gating and investigated the change in resistance. In this way the underlying potential landscape can be locally changed. Using a network model that combines tunneling and superconducting states the current distribution and hot spots have been calculated [@dubi06]. It would be an interesting extension of the present work to directly simulate these resistance changes, which is possible with the NNM. Future work is planned on this subject. This work was sponsored by the Austrian Science Fund under the Research Program No. P 19353-N16. [9]{} S. Datta, ”Electronic Transport in Mesoscopic Systems”, Cambridge University Press (1995) E. Ahlswede, P. Weitz, J. Weis, K. v. Klitzing and K. Eberl, Physica B [**298**]{}, 562 (2001), E. Ahlswede, J. Weis, K. v. Klitzing and K. Eberl, Physica E [**12**]{}, 165 (2002) D. Dominguez, K. v. Klitzing, and K. Ploog, Metrologia [**26**]{}, 197 (1989) B. W. Alphenaar, P. L. McEuen, R. G. Wheeler, and R. N. Sacks, Phys. Rev. Lett. [**64**]{}, 677, (1990) G. Müller, D. Weiss, S. Koch, K. von Klitzing, H. Nickel, W. Schlapp, and R. Lösch, Phys. Rev. B [**42**]{}, 7633 (1990) F. Dahlem, J. Weis, and K. von Klitzing, private communication Oktay Göktas, Jochen Weber, Jürgen Weis, Klaus von Klitzing, Physica E (Amsterdam) [**40**]{}, 1579 (2008) J. Oswald, Physica E (Amsterdam) [**3**]{}, 30 (1998) J. Oswald and M. Oswald, J. Phys.: Condens. Matter [**18**]{}, R101 (2006) M. Oswald, J. Oswald and R. G. Mani, Phys. Rev. 
B [**72**]{}, 035334 (2005); J. Oswald and M. Oswald, Phys. Rev. B [**74**]{}, 153315 (2006) B. J. van Wees, L. P. Kouwenhoven, E. M. M. Willems, C. J. P. M. Harmans, J. E. Mooij, H. van Houten, C. W. J. Beenakker, and J. G. Williamson and C. T. Foxon, Phys. Rev. B [**43**]{}, 12431 (1991) L. I. Glazman and M. Jonson, J. Phys. Condens. Matter [**1**]{}, 5547 (1989); Phys. Rev. Lett. [**64**]{}, 1186 (1990) J. T. Chalker and P. D. Coddington, J. Phys. C [**21**]{}, 2665 (1988) F. Hohls, U. Zeitler, R. J. Haug, R. Meisels, K. Dybko and F. Kuchar, Physica E, [**16**]{}, 10 (2003) P. Cain, R. A. Römer, M. Schreiber, and M. E. Raikh, Phys. Rev. B [**64**]{}, 235326 (2001) P. Cain and R. A. Römer, Europhys. Lett. [**66**]{}, 104 (2004) R. Kubo, S. K. Miyake and N. Hashitsume, Solid State Phys. [**17**]{}, 269 (1965); M. Tsukada, J. Phys. Soc. Jpn. [**41**]{}, 1466 (1976); R. E. Prange and R. Joynt, Phys. Rev. B [**25**]{}, 2943 (1982); H. A. Fertig and B. I. Halperin, Phys. Rev. B [**36**]{}, 7969 (1987) M. Büttiker, Phys. Rev. B [**41**]{}, 7906 (1990) D. N. Zubarev, ”Nonequilibrium Statistical Thermodynamics”, Plenum, New York, 1974; S. R. deGroot and P. Mazur, ”Non-equilibrium Thermodynamics”, Dover, New York, 1984 J. H. Oh and R. R. Gerhardts, Phys. Rev. B [**56**]{}, 13519 (1997); A. Siddiki and R. R. Gerhardts, Phys. Rev. B [**70**]{}, 195335 (2004) S. H. Tessmer, P. I. Glicofridis, R. C. Ashoori, L. S. Levitov and M. R. Melloch, Nature (London) [**392**]{}, 51 (1998); J. P. Eisenstein, H. L. Störmer, V. Narayanamurti and A. C. Gossard, Superlattices Microstruct. [**1**]{}, 11 (1985); T. J. Kershaw, A. Usher, A. S. Sachrajda, J. Gupta, Z. R. Wasilewski, M. Elliott, D. A. Ritchie and M. Y. Simmons, New J. Phys. [**9**]{}, 71 (2007) J. Oswald, G. Span, and F. Kuchar, Phys. Rev. B [**58**]{}, 15401 (1998) T. Y. Huang, Y. M. Cheng, C. T. Liang, G. H. Kim and J. Y. Leem, Physica E (Amsterdam) [**12**]{}, 424 (2002) P. Streda, J. Kucerà and A. H. MacDonald, Phys. Rev. 
Lett. [**59**]{}, 1973 (1987) G. Nachtwei, Physica E (Amsterdam) [**4**]{}, 79 (1999) J. Oswald, private communication A. J. Kent, Physica B (Amsterdam) [**169**]{}, 356 (1991) K. v. Klitzing, Physica B (Amsterdam) [**204**]{}, 111 (1995) K. Tsemekhman, V. Tsemekhman, C. Wexler and D. J. Thouless, Solid State Commun. [**101**]{}, 549 (1997) A. Baumgartner, T. Ihn, K. Ensslin, K. Maranowski, and A. C. Gossard, Phys. Rev. B [**76**]{}, 085316-1 (2007) Y. Dubi, Y. Meir, and Y. Avishai, Phys. Rev. B [**74**]{}, 205314-1 (2006) [**Figure captions**]{} Figure 1. Representation of tunneling at a single saddle point. The saddle is denoted by a circle and the lines crossing the saddle are trajectories of edge channels. The double-arrow denotes tunneling between the edge channels. Figure 2. (color online) Comparison of point contacts with contacts extending over the width of the current-injecting leads. Visualization of chemical potential distributions in the sample. The left column corresponds to the plateau region at $9T$ whereas the right column shows the region of transition to the last plateau at a magnetic field of $11T$. Positive/negative values of the chemical potential are shown by full/broken contours. We plotted the values $0,\pm 4,\pm 8$ with the largest/smallest value drawn in bold. The value $0$ has a separate linestyle for easy distinction. Figure 3. (color online) Difference of transversal and longitudinal resistances for point contacts ($r_c=0$) relative to values for line contacts ($r_c=1$), for $r_l=0$ and $r_l=1.5$, plotted against magnetic field. Figure 4. (color online) Visualization of chemical potential distributions across the sample with different current-injecting leads. The upper row shows results for the ratio of length versus width of $r_l\approx 0$ whereas the lower row demonstrates results for $r_l=3$. The magnetic field is set to values of the plateau ($9T$) and transition ($11T$) region.
Corresponding resistances as functions of magnetic field are shown in Fig. \[f\_RLeads\]. The linestyle and values of the contours are equivalent to those of Fig. \[f\_contacts\]. Figure 5. (color online) Longitudinal and transversal resistance versus magnetic field for various values $r_l$ of the length of the current-injecting leads. Figure 6. (color online) Difference of transversal and longitudinal resistances for leads with various $r_l$ relative to values for $r_l=5$, plotted against magnetic field. Figure 7. (color online) Potential distributions for current-injecting contacts having a barrier in the lead. The first row shows the plateau region whereas the second demonstrates results for the transition region. The last row shows the case of only one barrier and the sample with barriers in the leads and measurement arms. The linestyle of the contours is equivalent to Fig. \[f\_contacts\]; however, we used the values $0,\pm 1,\pm 5,\pm 9$. Figure 8. (color online) Transversal resistance versus magnetic field for various configurations of barriers. Figure 9. (color online) Difference of transversal and longitudinal resistances for leads with and without barrier in one or both current-injecting leads, plotted against magnetic field. The energies denote the value of the height of the unscreened barriers. Figure 10. (color online) Relative error of transversal resistances for various $r_l$, with and without barrier, and the case of leads with point contacts ($r_c=0$), no barrier and $r_l=1.5$, compared to leads with $r_c=1$, no barrier and $r_l=1.5$, plotted against magnetic field. Figure 11. (color online) This figure shows the setup of gated sample 1. The contacts are numbered by starting with 1 at the left boundary edge and increasing the numbers in the clockwise direction. The external potentials are connected via contacts 1 and 2 and the (longitudinal) voltage drop is measured between contacts 4 and 3. Gating potentials are indicated by shaded stripes along contacts. Figure 12.
(color online) This figure shows the setup of gated sample 2 for various gating configurations. The contacts are numbered as in Fig. \[f\_sample1\]. The external potentials are connected via contacts 2 and 4 and the (Hall) voltage drop is measured between contacts 1 and 3. Gating potentials are indicated by shaded stripes along contacts. Gating configurations are labelled as a) contacts 1,4, b) contacts 2,3 and c) all contacts. Figure 13. (color online) The longitudinal resistance $R_L$ as a function of the applied magnetic field, calculated from sample 1 by dividing the voltage drop by the total current. We compare the ideal sample (without gatings) with the gated configuration in Fig. \[f\_sample1\]. Figure 14. (color online) Visualization of the chemical potential distribution across sample 1 as a color chart. The color label gives the respective voltage values. The magnetic field is indicated above each plot. Figure 15. (color online) The Hall resistance $R_T$ as a function of the applied magnetic field, calculated from sample 2 by dividing the voltage drop by the total current. We compare various possible configurations of gatings, with the numbers corresponding to Fig. \[f\_sample2\]; 'ideal' corresponds to no gating at all. Figure 16. (color online) Visualization of the chemical potential distribution across the sample as a color chart. The color label gives the respective voltage values. The upper row corresponds to all contacts gated while the remaining image corresponds to no gating. Magnetic fields are indicated above each plot.
[Figure graphics (EPS files) omitted. The image includes correspond, in order, to the captions above: f_node (Fig. 1), f_contacts (Fig. 2), f_RContactdiff (Fig. 3), f_lengthLead (Fig. 4), f_RLeads (Fig. 5), f_Rdiff (Fig. 6), f_barrier (Fig. 7), f_RBarriers (Fig. 8), f_BarrierRdiff (Fig. 9), f_RrelErr (Fig. 10), f_sample1 (Fig. 11), f_sample2 (Fig. 12), f_Rxx (Fig. 13), f_PotDistL (Fig. 14), f_Rxy (Fig. 15), f_decoupledEC (Fig. 16).]
--- author: - 'M.J. Hardcastle' date: Version of title: 'An optical inverse-Compton hotspot in 3C196?' --- Introduction ============ The relativistic electron population responsible for synchrotron emission in extragalactic radio sources necessarily scatters incoming photons up to higher energies by the inverse-Compton process. Possible source photon populations include the microwave background, starlight from the host galaxy, photons from the active nucleus and the synchrotron photons themselves; different populations will dominate in different regions of the source. In the compact hotspots of powerful double (FRII) radio sources, the dominant component is expected to be the synchrotron photons, and the so-called ‘synchrotron self-Compton’ (SSC) process should accordingly dominate the inverse-Compton emissivity. Observations of SSC emission are important because they allow us to make a measurement of the magnetic field strength in the hotspot. We know the synchrotron flux density and the dimensions of the hotspot, which give us the synchrotron emissivity and photon energy density. A measurement of the inverse-Compton emissivity then tells us about the electron energy density, allowing us to infer the magnetic field strength from the observed synchrotron emission. However, SSC emission from hotspots is expected to be faint. So far there are four convincing cases of inverse-Compton emission detected with X-ray observations, in the radio galaxies 3C405 (Harris, Carilli & Perley 1994; Wilson, Young & Shopbell 2000), 3C295 (Harris [[et al]{}.]{} 2000), 3C123 (Hardcastle [[et al]{}.]{} 2000) and the quasar 3C263 (Hardcastle [[et al]{}.]{} in preparation); these detections are consistent with a hotspot magnetic field strength close to the minimum energy or equipartition value. These objects represent some of the brightest well-studied hotspots in the sky, and the faintest of them approaches the detection limit of long [*Chandra*]{} observations. 
There may be fewer than ten objects in the entire sky that have hotspots whose SSC emission is detectable at a useful level in the X-ray with the present generation of instruments. Because the spectrum of SSC emission is expected to be similar to that of synchrotron emission, SSC emission in the optical ought to be detectable at flux levels higher than those seen in the X-ray. However, there are several difficulties with observing this in practice. Firstly, in many cases, the high-frequency radio/sub-mm/IR spectrum of hotspots is poorly known, which means that it is hard to say whether an optically detected hotspot is synchrotron or inverse-Compton — a number of sources have well-known optical synchrotron hotspots. For example, it is not clear whether the optical hotspot of 3C295 detected by Harris [[et al]{}.]{} (2000) is synchrotron or SSC in nature. Secondly, because the increase in frequency in the SSC process goes as $\gamma^2$, SSC at low frequencies probes low-energy electrons (with $\gamma \la 1000$) and we typically do not know much about the radio emission from these electrons; our models are accordingly uncertain. And finally there are practical difficulties; emission from the hotspots is often too faint to be seen in the optical against the background from the host galaxy or active nucleus. However, detection of optical SSC emission is still possible in principle, and could give us valuable information about the magnetic field strengths and low-energy electron populations in hotspots. In this note I report on a possible detection of optical SSC emission from the distant quasar 3C196. Except where otherwise stated, I use a cosmology with $H_0 = 65$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm m} = 0.3$, $\Omega_\Lambda = 0.7$. Observations ============ 3C196 is a $z=0.871$ quasar with a compact, FRII-like radio structure (Laing 1982; Lonsdale & Morison 1983, hereafter LM; Brown 1990).
It came to our attention as a possible target for [*Chandra*]{} observations because of its very bright, compact hotspots, with flux densities around 2 Jy at 5 GHz (LM). However, investigation of the available high-frequency data in the VLA archive[^1] showed that the radio spectrum of the hotspots cuts off very steeply at observing frequencies of tens of GHz (Table \[fluxes\], Fig. \[flux\]). Because it is photons at around these frequencies that are scattered up to X-ray energies, 3C196 became less attractive as a [*Chandra*]{} target. But the cutoff in the synchrotron spectrum does rule out the possibility of an optical synchrotron hotspot. For this reason it is interesting to look for optical inverse-Compton emission in this object.

  ----------- ----------------- ---------------- -----------
   Frequency        North            South        Reference
     (GHz)          (Jy)             (Jy)
     0.329           6.9               –              1
     0.408      $7.1\pm 0.4$     $15.2\pm0.6$         2
     1.67       $3.420\pm 0.2$   $5.58\pm 0.3$        2
     5.0        $1.6\pm0.2$      $2.5 \pm 0.2$        2
     15.0       $0.472\pm 0.02$  $0.47\pm 0.07$       3
     22.5       $0.123\pm 0.05$       –               3
  ----------- ----------------- ---------------- -----------

  : Radio flux densities for the hotspots of 3C196[]{data-label="fluxes"}

[[Data tabulated are for the compact component of the hotspot, which in most cases was unresolved.]{}]{} References are: (1) Linfield & Simon (1984) (2) LM (3) This paper, from VLA archive. The N hotspot is partially resolved at 22 GHz, so the flux quoted may be an underestimate.

3C196 has already been well studied in the optical (e.g. Boissé & Boulade 1990; Cohen [[et al]{}.]{} 1996), because the quasar lies close to a $z=0.437$ barred spiral galaxy which gives rise to absorption in HI (Brown & Mitchell 1983; Brown [[et al]{}.]{} 1988; [[Briggs, de Bruyn & Vermeulen 2001]{}]{}) and optical lines (Foltz, Chaffee & Wolfe 1988). Because the hotspots lie only $\sim 2$ arcsec from the quasar nucleus, high resolution is needed to separate any possible optical hotspot emission from the nucleus.
The deepest [*Hubble Space Telescope*]{} ([*HST*]{}) image is presented by Ridgway & Stockton (1997; hereafter RS), and consists of 8 dithered observations of 900 s duration each on the PC chip of WFPC2 in the F622W filter. After combining the images and subtracting the PSF, they find some extended emission probably related to the quasar, as well as imaging the foreground spiral galaxy, but do not comment on any possible hotspot emission. I have obtained the data of RS from the [*HST*]{} archive and combined the individual observations, using the IRAF task [crrej]{} to remove cosmic rays in the pairs of observations with the same pointings followed by the AIPS tasks [hgeom]{} and [comb]{} to stack the four dither directions. In Fig. \[image\] I show a greyscale of the resulting image. There is a weak but clear source coincident with the northern hotspot, a component which can also be seen on the image of RS. One of the spiral arms of the foreground galaxy crosses the position of the southern hotspot, rendering any discussion of that component impossible. The flux density of the component coincident with the northern hotspot can be determined by small-aperture photometry. After subtraction of the well-determined sky background and of a locally determined (and more uncertain) correction for the background local to the source, I find the source to contain $133\pm 27$ counts in an extraction region with a radius of 3 pixels. Correcting for the effects of the PSF (Holtzman [[et al]{}.]{} 1995) and for a small amount of Galactic reddening \[$E(B-V) = 0.058$, according to the data of Schlegel, Finkbeiner & Davis (1998)\] this translates, using factors provided by the IRAF [synphot]{} package, to a flux density of $82 \pm 17$ nJy at an observing frequency of $4.85 \times 10^{14}$ Hz. It is worth briefly considering whether additional reddening might be introduced by the foreground spiral galaxy.
The northern hotspot is the component of 3C196 furthest from the spiral; the separation of 3.1 arcsec corresponds to a distance of 23 kpc at the redshift of the spiral. Observations of optical line absorption against the quasar nucleus show that absorbing material from the spiral certainly has an effect at $\sim 10$ kpc. On the other hand, we know from the VLBI work of Brown [[et al]{}.]{} (1988) that the HI column density to the hotspot is $\la 3 \times 10^{20}$ cm$^{-2}$, which would imply $E(B-V)$ in the frame of the galaxy of $\la 0.06$; this could give up to $0.14$ mag of reddening at our observing wavelength, but this would only increase the inferred flux density by 14 per cent, to 93 nJy. The effect is therefore not significant. [[The detailed models of Briggs [[et al]{}.]{} (2001) place the northern hotspot outside the absorbed region.]{}]{} Inverse-Compton? ================ In order to calculate the inverse-Compton flux density expected at this frequency we must assume a size and geometry for the hotspot. To carry out the calculation I use the code of Hardcastle, Birkinshaw & Worrall (1998) which assumes that the hotspot is a homogeneous sphere. The radius of the sphere is set to 0.3 arcsec, based on the MERLIN image of Lonsdale (1984). The synchrotron spectrum is then fit with a simple model consisting of a low-frequency power law with spectral index 0.5 and a high-energy cutoff; this constrains the upper energy of the electrons, and gives an adequate fit to the radio data (Fig. \[flux\]). The unknown parameters are then the low-energy cutoff of the electron spectrum, and the magnetic field strength in the hotspot. The value of the low-energy cutoff has a strong effect on the predicted optical SSC emissivity, because it is the low-energy electrons that scatter radio photons into the optical band. For a given energy density in photons, high cutoffs reduce the optical emissivity, while low cutoffs increase it. 
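Returning briefly to the reddening estimate of the previous section: the arithmetic can be reproduced in a few lines. The gas-to-dust conversion below is the standard Galactic ratio $N_{\rm H}/E(B-V) \approx 5.8\times 10^{21}$ cm$^{-2}$ mag$^{-1}$; the text does not state which conversion was used, so this is a consistency check only, not the paper's calculation.

```python
# Consistency check of the foreground-reddening arithmetic.
# Assumes the standard Galactic ratio N_H / E(B-V) ~ 5.8e21 cm^-2 mag^-1
# (an assumption; the text does not state its conversion).

N_H = 3.0e20                     # VLBI HI column density limit (cm^-2)
ebv = N_H / 5.8e21               # implied E(B-V) in the frame of the spiral
print(f"E(B-V) <~ {ebv:.2f}")    # ~0.05, consistent with the quoted <~0.06

A_lam = 0.14                     # mag of extinction at the observing wavelength
factor = 10.0 ** (0.4 * A_lam)   # corresponding flux correction factor
flux = 82.0 * factor             # corrected flux density (nJy)
print(f"correction factor {factor:.3f} -> {flux:.0f} nJy")
```

The 0.14 mag of extinction translates into a 14 per cent flux correction, recovering the quoted 93 nJy.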
A smaller but still significant effect is that a lower cutoff implies a greater energy density in electrons, giving rise to an increased equipartition magnetic field strength. The fact that the source is detected at 408 MHz and below constrains the low-energy cutoff; $\gamma_{\rm min} \la 800 (B_{\rm eq}/B)^{1\over2}$, where $B_{\rm eq}$ is the equipartition magnetic field strength assuming no protons and filling factor unity. The best fits to the entire radio spectrum are given by $\gamma_{\rm min} \approx 600$, which is comparable to the low-energy cutoffs inferred in Cygnus A and 3C123 (Table \[compare\]). To see what regions of $B$ and $\gamma_{\rm min}$ are consistent with the observed optical emission, I allowed them both to vary over a wide range and determined the difference between the predicted and observed optical flux density. The results are plotted in Fig. \[sigmacont\]. It will be seen that even if we treat the optical emission from the hotspot as unrelated to the inverse-Compton process, and use it to give an upper limit, this calculation constrains the magnetic field strength in the hotspot. The region in the bottom left of Fig. \[sigmacont\], [[below the solid contours]{}]{}, is excluded at better than the $3\sigma$ level by the data, [[since parameters in this region would produce more optical emission than is observed]{}]{}. This implies, for plausible $\gamma_{\rm min}$, that the magnetic field strength [[in the hotspot]{}]{} is close to or greater than the equipartition value. If we believe that inverse-Compton emission has actually been detected in this object, then the magnetic field strength implied is very close to the equipartition value if $\gamma_{\rm min} \sim 400$ – $600$, a plausible fit to the data.
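The constraint $\gamma_{\rm min} \la 800 (B_{\rm eq}/B)^{1/2}$ can be recovered to order of magnitude from the synchrotron critical frequency $\nu_c \approx \frac{3}{2}\gamma^2\nu_g$: electrons must radiate at the rest-frame frequency of the 408-MHz detection. The sketch below is my own check, assuming a $90^\circ$ pitch angle and taking $B_{\rm eq} = 28$ nT from Table \[compare\]; it is not the calculation performed in the text.

```python
import math

# Order-of-magnitude check of gamma_min <~ 800 (B_eq/B)^(1/2):
# electrons must radiate synchrotron at the rest-frame frequency of the
# 408 MHz detection, with nu_c ~ (3/2) gamma^2 nu_g and a 90 deg pitch
# angle assumed. My own sketch, not the paper's calculation.

E_CHARGE = 1.602e-19  # electron charge (C)
M_E = 9.109e-31       # electron mass (kg)

def gamma_min(nu_obs_hz, z, b_tesla):
    nu_rest = nu_obs_hz * (1.0 + z)                     # rest-frame frequency
    nu_g = E_CHARGE * b_tesla / (2.0 * math.pi * M_E)   # gyrofrequency
    return math.sqrt(2.0 * nu_rest / (3.0 * nu_g))

B_EQ = 28e-9  # equipartition field for 3C196 from Table 2 (T)
g = gamma_min(408e6, 0.871, B_EQ)
print(f"gamma_min ~ {g:.0f}")   # ~800, matching the quoted constraint

# gamma_min scales as B^(-1/2), as in the quoted formula:
ratio = gamma_min(408e6, 0.871, 4.0 * B_EQ) / g
print(f"B -> 4B multiplies gamma_min by {ratio:.2f}")
```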
If the low-energy cutoff in the electron spectrum is much lower than this, then to avoid producing an optical SSC hotspot brighter than that observed we require magnetic field strengths greater than the equipartition value, [[although the difference is never very large, and approaches $1\sigma$ for $\gamma_{\rm min} \approx 10$]{}]{}; but the radio data are less well fitted by [[models with low $\gamma_{\rm min}$. Table \[compare\] compares the derived field strengths for 3C196, on the assumption of $\gamma_{\rm min} = 600$, with those obtained for other sources. The results are broadly similar.]{}]{}

  --------- -------------------- ---------------------------- -------------- --------------- ------
   Source    $\gamma_{\rm min}$   $u_\nu$ ($\times 10^{-11}$   $B_{\rm eq}$   $B_{\rm SSC}$   Ref.
                                        J m$^{-3}$)                (nT)           (nT)
   3C405 A          420                     1.0                    31             27           1
   3C295 N          800                     8.2                    63             30           2
   3C123           1000                     0.57                   17             12           3
   3C196            600                     1.3                    28             24           4
  --------- -------------------- ---------------------------- -------------- --------------- ------

  : Inverse-Compton parameters obtained for 3C196 and other sources.[]{data-label="compare"}

I tabulate information for hotspot A of Cygnus A and the N hotspot of 3C295. $\gamma_{\rm min}$ is the value assumed in the calculation; see the text for a discussion of the values applicable to 3C196. $u_\nu$ is the mean energy density in synchrotron photons; $B_{\rm eq}$ is the equipartition magnetic field strength, using assumptions given in the text; $B_{\rm SSC}$ is the field required if the X-ray or optical emission observed is all to be modelled as SSC. References are: (1) Harris [[et al]{}.]{} (1994); (2) Harris [[et al]{}.]{} (2000); (3) Hardcastle [[et al]{}.]{} (2001); (4) this paper. For consistency, all calculations have been carried out using the Hardcastle [[et al]{}.]{} (1998) code and the cosmological parameters of this paper, so the results quoted here differ slightly from published values.
As Brunetti, Setti & Comastri (1997) have pointed out, photons from the active nucleus can also be inverse-Compton scattered to higher energies by electrons in the radio components. The N hotspot in 3C196 is a projected distance of only 20 kpc from the nucleus, and so the energy density of nuclear photons in the hotspot may be significant. However, the restricted range of electron energies inferred in the hotspot ($500 \la \gamma \la 6000$) means that only photons in the radio band can be scattered into the optical. The radio nucleus of 3C196, as we observe it, is unusually weak for a quasar — only 7 mJy at 5 GHz (Reid [[et al]{}.]{} 1995). Taking into account the effects of beaming in the manner discussed by Hardcastle [[et al]{}.]{} (2001) I find that, if the quasar is at an angle of less than 45 degrees to the line of sight, extremely high nuclear bulk Lorentz factors ($\ga 25$) are required to make the number spectral density of nuclear radio photons equal to that seen in the hotspot. A model of this sort seems unlikely to be viable for this particular source. Conclusions =========== A weak optical component coincident with the northern radio hotspot is detected in 3C196. The radio spectrum makes this component very unlikely to be due to synchrotron emission. If it is used as an upper limit on any optical SSC emission, it requires the magnetic field strength in the hotspot to be greater than or equal to the equipartition value. If it is taken to be a [*detection*]{} of SSC emission, its flux level is in good agreement with a model similar to the one found to work in X-ray detected hotspots; the low-energy cutoff is around $\gamma = 500$ and the magnetic field strength is close to the equipartition value. This work illustrates the possibility of finding optical inverse-Compton hotspots in deep observations of radio sources.
At the time of writing I am not aware of any other sources with bright compact hotspots of which suitable optical observations exist; observers are encouraged to be alert to the possibility of finding such components in their data.

I thank Dan Harris for a helpful referee’s report on the first version of this paper.

Boissé P., Boulade O., 1990, A&A, 236, 291
Briggs F.H., de Bruyn A.G., Vermeulen R.C., 2001, A&A in press, astro-ph/0104457
Brown R.L., 1990, in Zensus J.A., Pearson T.J., eds, Parsec-scale Radio Jets, Cambridge University Press, Cambridge, p. 199
Brown R.L., Mitchell K.J., 1983, ApJ, 264, 87
Brown R.L., Broderick J.J., Johnston K.J., Benson J.M., Mitchell K.J., Waltman E.B., 1988, ApJ, 329, 138
Brunetti G., Setti G., Comastri A., 1997, A&A, 325, 898
Carilli C.L., Perley R.A., Dreher J.W., Leahy J.P., 1991, ApJ, 383, 554
Cohen R.D., Beaver E.A., Diplas A., Junkkarinen V.T., Barlow T.A., Lyons R.W., 1996, ApJ, 456, 132
Foltz C.B., Chaffee F.H., Wolfe A.M., 1988, ApJ, 335, 35
Hardcastle M.J., Birkinshaw M., Worrall D.M., 1998, MNRAS, 294, 615
Hardcastle M.J., Birkinshaw M., Worrall D.M., 2001, MNRAS, 323, L17
Harris D.E., Carilli C.L., Perley R.A., 1994, Nat, 367, 713
Harris D.E., et al., 2000, ApJ, 530, L81
Holtzman J., et al., 1995, PASP, 107, 156
Laing R.A., 1982, in Heeschen D.S., Wade C.M., eds, Extragalactic Radio Sources, IAU Symposium 97, Reidel, Dordrecht, p. 161
Linfield R., Simon R.S., 1984, AJ, 89, 1799
Lonsdale C.J., 1984, MNRAS, 208, 545
Lonsdale C.J., Morison I., 1983, MNRAS, 203, 833 \[LM\]
Ridgway S.E., Stockton A., 1997, AJ, 114, 511
Reid A., Shone D.L., Akujor C.E., Browne I.W.A., Murphy D.W., Pedelty J., Rudnick L., Walsh D., 1995, A&AS, 110, 213
Schlegel D.J., Finkbeiner D.P., Davis M., 1998, ApJ, 500, 525
Wilson A.S., Young A.J., Shopbell P.L., 2000, ApJ, 544, L27

[^1]: The National Radio Astronomy Observatory Very Large Array (VLA) is operated by Associated Universities Inc., under co-operative agreement with the National Science Foundation.
--- abstract: 'We study the boundedness properties of commutators formed by $b$ and $T$, where $T$ is a bilinear bi-parameter singular integral satisfying natural $T1$ type conditions and $b$ is a little BMO function. For paraproduct free bilinear bi-parameter singular integrals $T$ we prove that $[b, T]_1 \colon L^p(\mathbb{R}^{n+m}) \times L^q(\mathbb{R}^{n+m}) \to L^r(\mathbb{R}^{n+m})$ in the full range $1 < p, q \le \infty$, $1/2 < r < \infty$ satisfying $1/p+1/q = 1/r$. A special case is when $T$ is a bilinear bi-parameter multiplier. We also prove the corresponding Banach range result for all singular integrals satisfying the $T1$ type conditions. In doing so we simplify the corresponding linear proof. Lastly, we prove analogous results for iterated commutators.' address: - 'BCAM (Basque Center for Applied Mathematics), Alameda de Mazarredo 14, 48009 Bilbao, Spain' - 'Department of Mathematics and Statistics, University of Helsinki, P.O.B. 68, FI-00014 University of Helsinki, Finland' - 'Department of Mathematics and Statistics, University of Helsinki, P.O.B. 68, FI-00014 University of Helsinki, Finland' author: - Kangwei Li - Henri Martikainen - Emil Vuorinen title: 'Commutators of bilinear bi-parameter singular integrals' --- Introduction ============ This paper concerns commutator estimates for general bilinear bi-parameter singular integrals $T$. 
Examples of such operators include the bilinear bi-parameter multiplier operators $T_m$ studied in Muscalu–Pipher–Tao–Thiele [@MPTT]: $$T_m(f_1, f_2)(x) = \iint_{{\mathbb{R}}^{n+m}} \iint_{{\mathbb{R}}^{n+m}} m(\xi, \eta) \widehat f_1(\xi) \widehat f_2(\eta) e^{2\pi i x \cdot (\xi + \eta)} {\,\mathrm{d}}\xi {\,\mathrm{d}}\eta,$$ where $$|\partial^{\alpha_1}_{\xi_1} \partial^{\alpha_2}_{\xi_2} \partial^{\beta_1}_{\eta_1} \partial^{\beta_2}_{\eta_2} m(\xi, \eta)| \lesssim (|\xi_1| + |\eta_1|)^{-|\alpha_1| - |\beta_1|} (|\xi_2| + |\eta_2|)^{-|\alpha_2| - |\beta_2|}.$$ See Coifman–Meyer [@CM] and Grafakos–Torres [@GT] for the one-parameter theory of such multipliers. A general definition of a (not necessarily of tensor product or convolution type) bilinear bi-parameter singular integral was given in our previous paper [@LMV]. There we showed a dyadic representation theorem under $T1$ type assumptions, and used it to conclude various boundedness properties, including weighted estimates $L^p(w_1) \times L^q(w_2) \to L^r(v_3)$, where $1 < p, q < \infty$, $1/2 < r < \infty$, $1/p+1/q = 1/r$, $w_1 \in A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)$, $w_2 \in A_q({\mathbb{R}}^n \times {\mathbb{R}}^m)$ and $v_3 := w_1^{r/p} w_2^{r/q}$. Here we complement these results and provide further use for our recent bilinear bi-parameter representation theorem by proving commutator estimates. A very special case of our results implies that $$\|[b,T_m]_1(f_1, f_2)\|_{L^r({\mathbb{R}}^{n+m})} \lesssim \|b\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} \|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})}$$ for all $1 < p, q \le \infty$ and $1/2 < r < \infty$ satisfying $1/p+1/q = 1/r$, where $b$ is in little BMO, $T_m$ is a bi-parameter multiplier and $[b,T_m]_1(f_1,f_2) := bT_m(f_1, f_2) - T_m(bf_1, f_2)$. 
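The multiplier formula above can be illustrated with a discrete, one-variable toy version (my own sketch, not the bi-parameter operator itself): when $m \equiv 1$ the double Fourier sum factorises and $T_m(f_1, f_2)$ reduces to the pointwise product $f_1 f_2$, which is easy to verify numerically.

```python
import numpy as np

# Discrete one-variable toy version of the bilinear multiplier formula:
# T_m(f1, f2)(x) = sum_{k,l} m[k,l] F1[k] F2[l] e^{2 pi i x (k+l)/N} / N^2,
# where F = np.fft.fft(f). With m identically 1 the double sum
# factorises and T_m(f1, f2) = f1 * f2 pointwise.

def bilinear_multiplier(m, f1, f2):
    N = len(f1)
    F1, F2 = np.fft.fft(f1), np.fft.fft(f2)
    x = np.arange(N)
    T = np.zeros(N, dtype=complex)
    for k in range(N):
        for l in range(N):
            T += m[k, l] * F1[k] * F2[l] * np.exp(2j * np.pi * x * (k + l) / N)
    return T / N ** 2

N = 32
rng = np.random.default_rng(0)
f1, f2 = rng.standard_normal(N), rng.standard_normal(N)
T = bilinear_multiplier(np.ones((N, N)), f1, f2)
print(np.allclose(T, f1 * f2))  # True: m = 1 gives the pointwise product
```

A genuinely singular symbol $m$ (satisfying only the derivative bounds above) is of course where the analytic difficulty lies; the toy computation only exhibits the algebraic structure of the operator.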
Our main theorem for the first order commutator is: Let $1 < p,q \le \infty$ and $1/2 < r < \infty$ satisfy $1/p + 1/q = 1/r$, and let $b \in {\operatorname{bmo}}({\mathbb{R}}^{n+m})$. Suppose $T$ is a bilinear bi-parameter Calderón–Zygmund operator satisfying all the structural assumptions and all the boundedness and cancellation assumptions as formulated in Section 3 of [@LMV]. Then $$\|[b, T]_1(f_1, f_2)\|_{L^r({\mathbb{R}}^{n+m})} \lesssim \|b\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} \|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})}$$ if $p,q \ne \infty$ and $r > 1$. If $T$ is free of paraproducts (so that it has a representation with shifts only), then the same bound holds in the full range. We also obtain similar results for iterated commutators like $[b_2, [b_1, T]_1]_2$. Regarding the extremely vast theory of commutators, we focus here only on the story of upper estimates (which are also relevant for the lower estimates) in the multi-parameter settings. This setting is inherently much more demanding than the one-parameter setting. For example, the lack of a satisfying theory of sparse domination, on which many modern one-parameter proofs are based, demands different proofs. The idea has been to rely on representation theorems such as the bi-parameter representation theorem [@Ma1] by one of us (or the multi-parameter generalisation of this by Y. Ou [@Ou]). Ou, Petermichl and Strouse proved in [@OPS] that $[b,T] \colon L^2({\mathbb{R}}^{n+m}) \to L^2({\mathbb{R}}^{n+m})$, when $T$ is a paraproduct free bi-parameter singular integral. This was eventually generalised to concern all bi-parameter singular integrals satisfying $T1$ conditions by Holmes–Petermichl–Wick [@HPW] – in fact, they prove a more general Bloom type two-weight bound. For a more comprehensive account of commutators in the multi-parameter setup see the introductions of [@OPS] and [@HPW]. In this paper we go after bilinear variants of these bi-parameter upper bound estimates for commutators.
We point to the introduction of the recent paper [@KO] for an account of multilinear commutator estimates in the one-parameter setting. Our proof now relies on the recent bilinear bi-parameter representation [@LMV]. Compared to the linear case one of the additional difficulties lies in obtaining quasi–Banach estimates, which are in general a challenge to obtain in the bi-parameter setting even when no commutators are present: e.g. in [@MPTT] – see also [@MPTT2] and [@MS] – the main challenge was to obtain quasi–Banach estimates for $T_m$. Moreover, bilinear model operators have more non-cancellation present, which is a complication in the commutator setting. The main challenge in going from [@OPS] to [@HPW] appeared to be that estimates for $[b,S]$, where $S$ is a bi-parameter shift, were easier to obtain than for $[b, P]$, where $P$ is some other dyadic model operator (namely a full paraproduct or a partial paraproduct) appearing in the representation [@Ma1]. We imagine that the presence of non-cancellative Haar functions $h_I^0$ (as opposed to cancellative Haar functions $h_I$) in the paraproducts was probably the main issue for the authors. In the bilinear situation, however, non-cancellative Haar functions appear already in shifts. Moreover, we need an argument that can be iterated in a reasonable way and one that can be used in restricted weak type arguments, so we needed to develop a clear general method. Our guideline is to expand $bf$ using bi-parameter martingales in $\langle bf, h_{I} \otimes h_J\rangle$, using one-parameter martingales in $\langle bf, h_{I}^0 \otimes h_J\rangle$ (or $\langle bf, h_I \otimes h_J^0\rangle$), and not to expand at all in $\langle bf, h_{I}^0 \otimes h_J^0\rangle$. When working like this it appears that in the linear situation, or in the bilinear Banach range theory, it makes little difference which model operator we have, which leads to a relevant simplification.
It appears to us that in [@HPW] everything was always reduced to a so called remainder term, which essentially entails expanding $bf$ in the bi-parameter sense in all of the above situations. However, this remainder term has a particularly nice structure only when there are no non-cancellative Haar functions. In the bilinear situation only when proving the Banach range boundedness are we able to obtain a unified proof that works for all model operators. We are currently unable to produce weighted estimates for bilinear commutators, and so our quasi-Banach estimates are now based on restricted weak type considerations. We currently only know how to do restricted weak type arguments for shifts and full paraproducts, but not for partial paraproducts. This is the case even when we are considering the operators themselves and not commutators of them. However, in [@LMV] we were able to prove weighted bounds for partial paraproducts, and these can be extrapolated, so we did not require restricted weak type arguments for partial paraproducts there. But for quasi–Banach commutator bounds we would now require them. That is why we restrict our quasi–Banach commutator estimates to shifts, and therefore to paraproduct free singular integrals. When running the restricted weak type argument for $[b,S]_1$, where $S$ is a bilinear bi-parameter shift, we need to exploit the good localisation properties of bi-parameter paraproducts as expansions of commutators essentially produce compositions of model operators and paraproducts. Furthermore, we need to be careful so that we can move the estimates from the model operators to singular integrals, as the presence of averaging in the dyadic representation makes this a somewhat delicate business in the quasi–Banach range. Iteration requires further care to maintain the localisation properties. We conclude by quickly giving some general references of multilinear multi-parameter analysis not connected to commutators. 
For the classical linear theory of multi-parameter analysis see e.g. Chang and Fefferman [@CF1; @CF2], Fefferman [@Fe], Fefferman and Stein [@FS], and Journé [@Jo1; @Jo2]. In Journé [@Jo3] some bounds for tensor products of multilinear singular integrals are obtained. Deep multilinear multi-parameter theory appears e.g. in the already mentioned paper Muscalu–Pipher–Tao–Thiele [@MPTT], where the quasi-Banach estimates for the multipliers $T_m$ were the main question. See also Benea–Muscalu [@BM1; @BM2] and Di Plinio–Ou [@DO]. Among many other things, these papers contain some generalisations of [@MPTT], including mixed-norm type bounds. See also [@LMV], where we proved the representation and used it to generalise many of the above results to concern completely general singular integrals. See also the book [@MS] by Muscalu and Schlag for a wonderful introduction to multilinear multi-parameter analysis. Acknowledgements {#acknowledgements .unnumbered} ---------------- K. Li is supported by Juan de la Cierva - Formación 2015 FJCI-2015-24547, by the Basque Government through the BERC 2018-2021 program and by Spanish Ministry of Economy and Competitiveness MINECO through BCAM Severo Ochoa excellence accreditation SEV-2013-0323 and through project MTM2017-82160-C2-1-P funded by (AEI/FEDER, UE) and acronym “HAQMEC”. H. Martikainen is supported by the Academy of Finland through the grants 294840 and 306901, the three-year research grant 75160010 of the University of Helsinki, and is a member of the Finnish Centre of Excellence in Analysis and Dynamics Research. E. Vuorinen is supported by the Academy of Finland through the grant 306901 and by the Finnish Centre of Excellence in Analysis and Dynamics Research. Basic definitions ================= Vinogradov notation ------------------- We denote $A \lesssim B$ if $A \le CB$ for some absolute constant $C$.
We allow the constant $C$ to depend on the dimension of the underlying spaces, on integration exponents, and on various other constants appearing in the assumptions. We denote $A \sim B$ if $B \lesssim A \lesssim B$. Dyadic notation --------------- If $Q$ is a cube: - $\ell(Q)$ is the side-length of $Q$; - $\text{ch}(Q)$ denotes the dyadic children of $Q$; - If $Q$ is in a dyadic grid, then $Q^{(k)}$ denotes the unique dyadic cube $S$ in the same grid so that $Q \subset S$ and $\ell(S) = 2^k\ell(Q)$; In this paper we denote a dyadic grid in ${\mathbb{R}}^n$ by ${\mathcal{D}}^n$ and a dyadic grid in ${\mathbb{R}}^m$ by ${\mathcal{D}}^m$. Using the above notation ${\mathcal{D}}^n_i$ denotes those $I \in {\mathcal{D}}^n$ for which $\ell(I) = 2^{-i}$. The measure of a cube $I$ is simply denoted by $|I|$ no matter what dimension we are in. When $I \in {\mathcal{D}}^n$ we denote by $h_I$ a cancellative $L^2$ normalised Haar function. This means the following. Writing $I = I_1 \times \cdots \times I_n$ we can define the Haar function $h_I^{\eta}$, $\eta = (\eta_1, \ldots, \eta_n) \in \{0,1\}^n$, by setting $$h_I^{\eta} = h_{I_1}^{\eta_1} \otimes \cdots \otimes h_{I_n}^{\eta_n},$$ where $h_{I_i}^0 = |I_i|^{-1/2}1_{I_i}$ and $h_{I_i}^1 = |I_i|^{-1/2}(1_{I_{i, l}} - 1_{I_{i, r}})$ for every $i = 1, \ldots, n$. Here $I_{i,l}$ and $I_{i,r}$ are the left and right halves of the interval $I_i$ respectively. If $\eta \ne 0$ the Haar function is cancellative: $\int h_I^{\eta} = 0$. We usually suppress the presence of $\eta$ and simply write $h_I$ for some $h_I^{\eta}$, $\eta \ne 0$. For $I \in {\mathcal{D}}^n$ and a locally integrable function $f\colon {\mathbb{R}}^n \to {\mathbb{C}}$, we define the martingale difference $$\Delta_I f = \sum_{I' \in \textup{ch}(I)} \big[ {\big \langle}f {\big \rangle}_{I'} - {\big \langle}f {\big \rangle}_{I} \big] 1_{I'}.$$ Here ${\big \langle}f {\big \rangle}_I = \frac{1}{|I|} \int_I f$.
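The Haar functions and martingale differences above are easy to realise on a discrete grid; the following sketch (an illustration of the standard facts, not part of the paper) builds the one-dimensional cancellative Haar functions on dyadic subintervals of $[0,1)$, checks the $L^2$ normalisation and cancellation, and verifies the expansion $f = \langle f \rangle_{[0,1)} + \sum_I \langle f, h_I \rangle h_I$.

```python
import numpy as np

# Discrete sketch of 1D Haar functions on dyadic subintervals of [0,1)
# (n = 1, so the only cancellative sign pattern is eta = 1) and of the
# expansion f = <f>_{[0,1)} + sum_I <f, h_I> h_I. Illustration only.

K = 6
N = 2 ** K          # number of samples
dx = 1.0 / N

def haar(i, j):
    """Cancellative, L^2-normalised Haar function on I = [j 2^-i, (j+1) 2^-i)."""
    h = np.zeros(N)
    length = N >> i                  # samples in I
    left = j * length
    scale = 2 ** (i / 2.0)           # |I|^{-1/2}
    h[left:left + length // 2] = scale
    h[left + length // 2:left + length] = -scale
    return h

h = haar(3, 5)
norm_sq = np.sum(h ** 2) * dx        # = 1 (L^2 normalisation)
integral = np.sum(h) * dx            # = 0 (cancellation)

rng = np.random.default_rng(0)
f = rng.standard_normal(N)
recon = np.full(N, f.mean())         # E_{[0,1)} f
for i in range(K):                   # levels with at least two samples per I
    for j in range(2 ** i):
        hij = haar(i, j)
        recon += (np.sum(f * hij) * dx) * hij
print(np.allclose(recon, f))         # True: the Haar system is complete
```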
We also sometimes write $E_I f = {\big \langle}f {\big \rangle}_I 1_I$. Now, we have $\Delta_I f = \sum_{\eta \ne 0} \langle f, h_{I}^{\eta}\rangle h_{I}^{\eta}$, or suppressing the $\eta$ summation, $\Delta_I f = \langle f, h_I \rangle h_I$, where $\langle f, h_I \rangle = \int f h_I$. A martingale block is defined by $$\Delta_{K,i} f = \mathop{\sum_{I \in {\mathcal{D}}^n}}_{I^{(i)} = K} \Delta_I f, \qquad K \in {\mathcal{D}}^n.$$ Weights ------- We have some use for weighted estimates even though they are not part of the main results. A weight $w(x_1, x_2)$ (i.e. a locally integrable a.e. positive function) belongs to $A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)$, $1 < p < \infty$, if $$[w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)} := \sup_{R} {\big \langle}w {\big \rangle}_R {\big \langle}w' {\big \rangle}_R^{p-1} < \infty,$$ where the supremum is taken over rectangles $R \subset {\mathbb{R}}^{n+m}$ and $w' := w^{1-p'}$. We have $$\label{eq:prodap} [w]_{A_p({\mathbb{R}}^n\times {\mathbb{R}}^m)} \sim \max\big( {\operatornamewithlimits{ess\,sup}}_{x_1 \in {\mathbb{R}}^n} \,[w(x_1, \cdot)]_{A_p({\mathbb{R}}^m)}, {\operatornamewithlimits{ess\,sup}}_{x_2 \in {\mathbb{R}}^m}\, [w(\cdot, x_2)]_{A_p({\mathbb{R}}^n)} \big).$$ Of course, $A_p({\mathbb{R}}^n)$ is defined similarly as $A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)$ – just take the supremum over cubes. For the basic theory of bi-parameter weights consult e.g. [@HPW].
For example, if $f \colon {\mathbb{R}}^{n+m} \to {\mathbb{C}}$, then $\langle f, h_I \rangle_1 \colon {\mathbb{R}}^{m} \to {\mathbb{C}}$ is defined by $$\langle f, h_I \rangle_1(x_2) = \int_{{\mathbb{R}}^n} f(y_1, x_2)h_I(y_1){\,\mathrm{d}}y_1.$$ Next, we define bi-parameter martingale differences. Let $f \colon {\mathbb{R}}^n \times {\mathbb{R}}^m \to {\mathbb{C}}$ be locally integrable. Let $I \in {\mathcal{D}}^n$ and $J \in {\mathcal{D}}^m$. We define the martingale difference $$\Delta_I^1 f \colon {\mathbb{R}}^{n+m} \to {\mathbb{C}}, \Delta_I^1 f(x) := \Delta_I (f(\cdot, x_2))(x_1).$$ Define $\Delta_J^2f$ analogously, and also define $E_I^1$ and $E_J^2$ similarly. We set $$\Delta_{I \times J} f \colon {\mathbb{R}}^{n+m} \to {\mathbb{C}}, \Delta_{I \times J} f(x) = \Delta_I^1(\Delta_J^2 f)(x) = \Delta_J^2 ( \Delta_I^1 f)(x).$$ Notice that $\Delta^1_I f = h_I \otimes \langle f , h_I \rangle_1$, $\Delta^2_J f = \langle f, h_J \rangle_2 \otimes h_J$ and $ \Delta_{I \times J} f = \langle f, h_I \otimes h_J\rangle h_I \otimes h_J$ (suppressing the finite $\eta$ summations). We record the following standard lemma. 
\[lem:mest\] For all $p \in (1,\infty)$ and $w \in A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)$ it holds that $$\begin{aligned} \| f \|_{L^p(w)} & \sim_{[w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)}} \Big\| \Big( \mathop{\sum_{I \in {\mathcal{D}}^n}}_{J \in {\mathcal{D}}^m} |\Delta_{I \times J} f|^2 \Big)^{1/2} \Big\|_{L^p(w)} \\ &\sim_{[w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)}} \Big\| \Big( \sum_{I \in {\mathcal{D}}^n} |\Delta_I^1 f|^2 \Big)^{1/2} \Big\|_{L^p(w)} \\ &\sim_{[w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)}} \Big\| \Big( \sum_{J \in {\mathcal{D}}^m} |\Delta_J^2 f|^2 \Big)^{1/2} \Big\|_{L^p(w)}.\end{aligned}$$ Martingale blocks are defined in the natural way $$\Delta_{K \times V}^{i, j} f = \sum_{I\colon I^{(i)} = K} \sum_{J\colon J^{(j)} = V} \Delta_{I \times J} f = \Delta_{K,i}^1( \Delta_{V,j}^2 f) = \Delta_{V,j}^2 ( \Delta_{K,i}^1 f).$$ Maximal functions ----------------- Given dyadic grids ${\mathcal{D}}^n$ and ${\mathcal{D}}^m$ we denote the dyadic maximal functions by $$M_{{\mathcal{D}}^n}f(x) := \sup_{I \in {\mathcal{D}}^n} \frac{1_I(x)}{|I|}\int_I |f(y)| {\,\mathrm{d}}y$$ and $$M_{{\mathcal{D}}^n, {\mathcal{D}}^m} f(x_1, x_2) := \sup_{R \in {\mathcal{D}}^n \times {\mathcal{D}}^m} \frac{1_R(x_1, x_2)}{|R|}\iint_R |f(y_1, y_2)|{\,\mathrm{d}}y_1 {\,\mathrm{d}}y_2.$$ The latter is also called the strong maximal function. The non-dyadic variants are simply denoted by $M$, as it is clear what is meant from the context. The following definitions are in line with our usual notational conventions. If $f \colon {\mathbb{R}}^{n+m} \to {\mathbb{C}}$ we set $M^1_{{\mathcal{D}}^n} f(x_1, x_2) = M_{{\mathcal{D}}^n}(f(\cdot, x_2))(x_1)$. The operator $M^2_{{\mathcal{D}}^m}$ is defined similarly. For various maximal functions $M$ we define $M_s$ by setting $M_s f = (M |f|^s)^{1/s}$. 
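At $p = 2$ and $w \equiv 1$ the first equivalence in Lemma \[lem:mest\] is an exact identity: the operators $E$ and the level-wise martingale differences are orthogonal projections in each parameter, so the squared $L^2$ norm of $f$ splits over all tensor combinations, including the double differences $\Delta_{I \times J} f$. The following discrete sketch (my illustration, not part of the paper) verifies this Parseval identity on a finite dyadic grid.

```python
import numpy as np

# Unweighted p = 2 Parseval check for the bi-parameter martingale
# decomposition: ||f||_2^2 splits over all tensor combinations of the
# global average and the level-wise differences in each variable.
# Discretisation of [0,1)^2 on 2^K x 2^K points; illustration only.

K = 5
N = 2 ** K
rng = np.random.default_rng(1)
f = rng.standard_normal((N, N))

def avg_level(g, i, axis):
    """Replace g by its averages over dyadic intervals of level i in one variable."""
    g = np.swapaxes(g, 0, axis)
    shape = g.shape
    blocks = g.reshape(2 ** i, N >> i, -1).mean(axis=1, keepdims=True)
    out = np.broadcast_to(blocks, (2 ** i, N >> i, blocks.shape[-1])).reshape(shape)
    return np.swapaxes(out, 0, axis)

def projections(axis):
    yield lambda g: avg_level(g, 0, axis)            # E (global average)
    for i in range(K):                               # sum of Delta_I over level i
        yield lambda g, i=i: avg_level(g, i + 1, axis) - avg_level(g, i, axis)

total = sum(np.sum(A(B(f)) ** 2) for A in projections(0) for B in projections(1))
print(np.isclose(total, np.sum(f ** 2)))  # True: the pieces are orthogonal
```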
BMO spaces {#ss:bmo} ---------- We say that $b \in L^1_{{\operatorname{loc}}}({\mathbb{R}}^n)$ belongs to the dyadic BMO space ${\operatorname{BMO}}_{{\mathcal{D}}^n}({\mathbb{R}}^n) = {\operatorname{BMO}}_{{\mathcal{D}}^n}$ if $$\|b\|_{{\operatorname{BMO}}_{{\mathcal{D}}^n}} := \sup_{I \in {\mathcal{D}}^n} \frac{1}{|I|} \int_I |b - \langle b \rangle_I| < \infty.$$ The ordinary space ${\operatorname{BMO}}({\mathbb{R}}^n)$ is defined by taking the supremum over all cubes. ### Product BMO {#product-bmo .unnumbered} Here we define the (dyadic) bi-parameter product BMO space ${\operatorname{BMO}}_{\textup{prod}}^{{\mathcal{D}}^n, {\mathcal{D}}^m}({\mathbb{R}}^n \times {\mathbb{R}}^m) = {\operatorname{BMO}}_{\textup{prod}}^{{\mathcal{D}}^n, {\mathcal{D}}^m}$. For a sequence $\lambda = (\lambda_{I,J})$ we set $$\|\lambda\|_{{\operatorname{BMO}}_{\textup{prod}}^{{\mathcal{D}}^n, {\mathcal{D}}^m}} := \sup_{\Omega} \Big( \frac{1}{|\Omega|} \mathop{\sum_{I \in {\mathcal{D}}^n, J \in {\mathcal{D}}^m}}_{I \times J \subset \Omega} |\lambda_{I,J}|^2 \Big)^{1/2},$$ where the supremum is taken over those sets $\Omega \subset {\mathbb{R}}^{n+m}$ such that $|\Omega| < \infty$ and such that for every $x \in \Omega$ there exist $I \in {\mathcal{D}}^n, J \in {\mathcal{D}}^m$ so that $x \in I \times J \subset \Omega$. We say that $b \in L^1_{{\operatorname{loc}}}({\mathbb{R}}^{n+m})$ belongs to the space ${\operatorname{BMO}}_{\textup{prod}}^{{\mathcal{D}}^n, {\mathcal{D}}^m}$ if $$\| b \|_{{\operatorname{BMO}}_{\textup{prod}}^{{\mathcal{D}}^n, {\mathcal{D}}^m}} := \| (\langle b, h_I \otimes h_J\rangle)_{I,J} \|_{{\operatorname{BMO}}_{\textup{prod}}^{{\mathcal{D}}^n, {\mathcal{D}}^m}} < \infty.$$ The (non-dyadic) product BMO space ${\operatorname{BMO}}_{\textup{prod}}({\mathbb{R}}^{n+m})$ can be defined via the norm defined by the supremum of the above dyadic norms. 
### Little BMO {#little-bmo .unnumbered} We say that $b \in {\operatorname{bmo}}_{{\mathcal{D}}^n, {\mathcal{D}}^m}({\mathbb{R}}^n \times {\mathbb{R}}^m) = {\operatorname{bmo}}_{{\mathcal{D}}^n, {\mathcal{D}}^m}$ if $$\|b\|_{{\operatorname{bmo}}_{{\mathcal{D}}^n, {\mathcal{D}}^m}} := \sup_{\substack{I \in {\mathcal{D}}^n \\ J \in {\mathcal{D}}^m}} \frac{1}{|I||J|} \iint_{I \times J} |b - \langle b \rangle_{I \times J}| < \infty.$$ The (non-dyadic) little BMO space ${\operatorname{bmo}}({\mathbb{R}}^{n+m})$ is defined by taking the supremum over all rectangles. It is important that $$\|b\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} \sim \max\big( {\operatornamewithlimits{ess\,sup}}_{x_1 \in {\mathbb{R}}^n} \, \|b(x_1, \cdot)\|_{{\operatorname{BMO}}({\mathbb{R}}^m)}, {\operatornamewithlimits{ess\,sup}}_{x_2 \in {\mathbb{R}}^m}\, \|b(\cdot, x_2)\|_{{\operatorname{BMO}}({\mathbb{R}}^n)} \big)$$ and that we have the John–Nirenberg property $$\|b\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} \sim \sup_{R \subset {\mathbb{R}}^{n+m}} \Big( \frac{1}{|R|} \int_{R} |b - \langle b \rangle_{R}|^p \Big)^{1/p}, \qquad 1 < p < \infty.$$ Moreover, we need to know that ${\operatorname{bmo}}({\mathbb{R}}^{n+m}) \subset {\operatorname{BMO}}_{\textup{prod}}({\mathbb{R}}^{n+m})$. The reader can consult e.g. [@HPW; @OPS]. ### Adapted maximal functions {#adapted-maximal-functions .unnumbered} For $b \in {\operatorname{BMO}}({\mathbb{R}}^n)$ and $f \colon {\mathbb{R}}^n \to {\mathbb{C}}$ define $$M_bf = \sup_I \frac{1_I}{|I|} \int_I |b-\langle b \rangle_I| |f|.$$ In the situation $b \in {\operatorname{bmo}}({\mathbb{R}}^n \times {\mathbb{R}}^m)$ and $f \colon {\mathbb{R}}^{n+m} \to {\mathbb{C}}$ we similarly define $$M_b f = \sup_{I,J} \frac{1_{I \times J}}{|I||J|} \iint_{I \times J} |b-\langle b \rangle_{I \times J}| |f|.$$ Here the suprema are taken over all cubes $I \subset {\mathbb{R}}^n$ and $J \subset {\mathbb{R}}^m$.
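The following sketch, recorded here for convenience, spells out how the adapted maximal function is controlled. If $\|b\|_{{\operatorname{bmo}}({\mathbb{R}}^n \times {\mathbb{R}}^m)} \le 1$ and $s \in (1, \infty)$, then for every rectangle $I \times J$ and every $x \in I \times J$, Hölder's inequality and the John–Nirenberg property give $$\frac{1}{|I||J|} \iint_{I \times J} |b - \langle b \rangle_{I \times J}||f| \le \Big( \frac{1}{|I||J|} \iint_{I \times J} |b - \langle b \rangle_{I \times J}|^{s'} \Big)^{1/s'} \Big( \frac{1}{|I||J|} \iint_{I \times J} |f|^{s} \Big)^{1/s} \le C(s) M_s f(x),$$ and taking the supremum over $I$ and $J$ yields the pointwise bound $M_b f \le C(s) M_s f$, where $M$ is the strong maximal function. The one-parameter case is identical.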
The dyadic variants could also be defined, and denoted by $M_{{\mathcal{D}}^n, b}$ and $M_{{\mathcal{D}}^n, {\mathcal{D}}^m, b}$. For a little bmo function $b \in {\operatorname{bmo}}({\mathbb{R}}^n \times {\mathbb{R}}^m)$ define $$\varphi_{{\mathcal{D}}^m, b}(f) = \sum_{J \in {\mathcal{D}}^m} M_{\langle b \rangle_{J,2}} \langle f, h_J \rangle_2 \otimes h_J,$$ and similarly define $\varphi_{{\mathcal{D}}^n, b}(f)$. For our later usage it is important not to use the dyadic variant $M_{{\mathcal{D}}^n, \langle b \rangle_{J,2}}$, as it would induce an unwanted dependence on ${\mathcal{D}}^n$ (which has relevance in some randomisation considerations). \[lem:bmaxbounds\] Suppose $\|b_i\|_{{\operatorname{BMO}}({\mathbb{R}}^n)} \le 1$, $1 < u, p < \infty$ and $w \in A_p({\mathbb{R}}^n)$. Then we have $$\label{eq:vMb} \Big\| \Big( \sum_i [M_{b_i} f_i]^u \Big)^{1/u} \Big\|_{L^p(w)} \lesssim C([w]_{A_p({\mathbb{R}}^n)}) \Big\| \Big( \sum_i |f_i|^u \Big)^{1/u} \Big\|_{L^p(w)}.$$ The same bound holds with $\|b_i\|_{{\operatorname{bmo}}({\mathbb{R}}^n \times {\mathbb{R}}^m)} \le 1$ and $w \in A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)$. For $b$ with $\|b\|_{{\operatorname{bmo}}({\mathbb{R}}^n \times {\mathbb{R}}^m)} \le 1$ we also have $$\|\varphi_{{\mathcal{D}}^m, b}(f)\|_{L^p(w)} \le C([w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)})\|f\|_{L^p(w)}, \qquad 1 < p < \infty,\, w \in A_p({\mathbb{R}}^n \times {\mathbb{R}}^m).$$ We begin by proving the bound $\|M_b f\|_{L^p(w)} \le C([w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)})\|f\|_{L^p(w)}$ – the proof is the same in the one-parameter case. Fix $w \in A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)$ and choose $s = s([w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)}) \in (1,p)$ so that $[w]_{A_{p/s}({\mathbb{R}}^n \times {\mathbb{R}}^m)} \le C([w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)})$. This can be done using the reverse Hölder inequality – the well-known bi-parameter version is stated and proved e.g. in Proposition 2.2.
of [@HPW]. Using Hölder’s inequality and the John–Nirenberg property for little bmo we get that $$M_b f \le C(s) M_s f = C([w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)})M_s f.$$ Now, using that $M \colon L^q(v) \to L^q(v)$ for all $q \in (1,\infty)$ and $v \in A_q({\mathbb{R}}^n \times {\mathbb{R}}^m)$ we have $$\|M_b f\|_{L^p(w)} \le C([w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)})\| M |f|^s \|_{L^{p/s}(w)}^{1/s} \le C([w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)}) \|f\|_{L^p(w)}.$$ The bi-parameter version of \[eq:vMb\] (and \[eq:vMb\] itself) now follow immediately by extrapolation. Next, using Lemma \[lem:mest\], the estimate \[eq:vMb\] and the fact that $\| \langle b \rangle_{J,2} \|_{{\operatorname{BMO}}({\mathbb{R}}^n)} \lesssim 1$ we get $$\begin{split} \|\varphi_{{\mathcal{D}}^m, b}(f)\|_{L^p(w)} &\le C([w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)}) \Big\| \Big(\sum_{J} \big(M_{\langle b \rangle_{J,2}} \langle f, h_{J}\rangle_2 \big)^2 \otimes \frac{1_{J}}{|J|} \Big)^{1/2} \Big\|_{L^p(w)} \\ & \le C([w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)}) \Big\| \Big(\sum_{J} \big|\langle f, h_{J}\rangle_2 \big|^2 \otimes \frac{1_{J}}{|J|} \Big)^{1/2} \Big\|_{L^p(w)} \\ &\le C([w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)}) \| f \|_{L^p(w)}. \end{split}$$ Commutators ----------- We set $$[b,T]_1(f_1,f_2) = bT(f_1, f_2) - T(bf_1, f_2) \, \textup{ and } \, [b,T]_2(f_1, f_2) = bT(f_1,f_2) - T(f_1, bf_2).$$ These are understood generally in a situation where we e.g. already know that $T \colon L^3({\mathbb{R}}^{n+m}) \times L^3({\mathbb{R}}^{n+m}) \to L^{3/2}({\mathbb{R}}^{n+m})$, and $b$ is locally in $L^3$. Then we initially study the case that $f_1$ and $f_2$ are, say, bounded and compactly supported, so that e.g. $bf_2 \in L^3({\mathbb{R}}^{n+m})$ and $bT(f_1,f_2) \in L^1_{{\operatorname{loc}}}({\mathbb{R}}^{n+m})$.
Model operators: Shifts, partial paraproducts and full paraproducts {#sec:defbilinbiparmodel} =================================================================== We define the model operators that appear in the bilinear bi-parameter representation theorem [@LMV]. In this section all the objects are defined using some fixed dyadic grids ${\mathcal{D}}^n$ and ${\mathcal{D}}^m$. Let $f_1, f_2 \colon {\mathbb{R}}^{n+m} \to {\mathbb{C}}$ be two given functions. Bilinear bi-parameter shifts {#ss:bilinbiparshiftScalar} ---------------------------- For triples of non-negative integers $k = (k_1, k_2, k_3)$, $k_1, k_2, k_3 \ge 0$, and $v = (v_1, v_2, v_3)$, $v_1, v_2, v_3 \ge 0$, and cubes $K \in {\mathcal{D}}^n$ and $V \in {\mathcal{D}}^m$, define $$\begin{aligned} A_{K, k}^{V, v}(f_1,f_2) = \sum_{\substack{I_1, I_2, I_3 \in {\mathcal{D}}^n \\ I_1^{(k_1)} = I_2^{(k_2)} = I_3^{(k_3)} = K}} &\sum_{\substack{J_1, J_2, J_3 \in {\mathcal{D}}^m \\ J_1^{(v_1)} = J_2^{(v_2)} = J_3^{(v_3)} = V}} a_{K, V, (I_i), (J_j)} \\ &\times \langle f_1, h_{I_1} \otimes h_{J_1}\rangle \langle f_2, h_{I_2} \otimes h_{J_2}\rangle h_{I_3}^0 \otimes h_{J_3}^0.\end{aligned}$$ We also demand that the scalars $a_{K, V, (I_i), (J_j)}$ satisfy the estimate $$|a_{K, V, (I_i), (J_j)}| \le \frac{|I_1|^{1/2} |I_2|^{1/2}|I_3|^{1/2}}{|K|^2} \frac{|J_1|^{1/2} |J_2|^{1/2}|J_3|^{1/2}}{|V|^2}.$$ A shift of complexity $(k,v)$ of a particular form (the non-cancellative Haar functions are in certain positions) is $$S_{k}^{v}(f_1,f_2) = \sum_{K \in {\mathcal{D}}^n} \sum_{V \in {\mathcal{D}}^m} A_{K, k}^{V, v}(f_1,f_2).$$ An operator of the above form, but having the non-cancellative Haar functions $h_I^0$ and $h_J^0$ in some of the other slots, is also a shift. So there are shifts of nine different types, and we could e.g.
also have for all $K, V$ that $$\begin{aligned} A_{K, k}^{V, v}(f_1,f_2) = \sum_{\substack{I_1, I_2, I_3 \in {\mathcal{D}}^n \\ I_1^{(k_1)} = I_2^{(k_2)} = I_3^{(k_3)} = K}} &\sum_{\substack{J_1, J_2, J_3 \in {\mathcal{D}}^m \\ J_1^{(v_1)} = J_2^{(v_2)} = J_3^{(v_3)} = V}} a_{K, V, (I_i), (J_j)} \\ &\times \langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_2, h_{I_2} \otimes h_{J_2}\rangle h_{I_3} \otimes h_{J_3}^0.\end{aligned}$$ Bilinear paraproducts --------------------- Let $b \colon {\mathbb{R}}^m \to {\mathbb{C}}$ be a function and define $$A_b(g_1, g_2) := \sum_{V} \langle b, h_V\rangle \langle g_1 \rangle_V \langle g_2 \rangle_V h_V,$$ where $g_i \colon {\mathbb{R}}^m \to {\mathbb{C}}$. An operator $\pi_b$ is called a dyadic bilinear paraproduct in ${\mathbb{R}}^m$ if it is of the form $A_b$, $A_b^{1*}$ or $A_b^{2*}$. We often write $\pi_{{\mathcal{D}}^m ,b}$ to emphasise the dyadic grid using which it is defined. Bilinear bi-parameter partial paraproducts ------------------------------------------ Let $k = (k_1, k_2, k_3)$, $k_1, k_2, k_3 \ge 0$. For each $K, I_1, I_2, I_3 \in {\mathcal{D}}^n$ we are given a function $b_{K, I_1, I_2, I_3} \colon {\mathbb{R}}^m \to {\mathbb{C}}$ such that $$\| b_{K, I_1, I_2, I_3} \|_{{\operatorname{BMO}}({\mathbb{R}}^m)} \le \frac{|I_1|^{1/2} |I_2|^{1/2}|I_3|^{1/2}}{|K|^2}.$$ A partial paraproduct of complexity $k$ of a particular form is $$P_k(f_1, f_2) = \sum_{K \in {\mathcal{D}}^n} \sum_{\substack{I_1, I_2, I_3 \in {\mathcal{D}}^n \\ I_1^{(k_1)} = I_2^{(k_2)} = I_3^{(k_3)} = K}} h_{I_3}^0 \otimes \pi_{b_{K, I_1, I_2, I_3}}(\langle f_1, h_{I_1} \rangle_1, \langle f_2, h_{I_2} \rangle_1),$$ where $\pi_{b_{K, I_1, I_2, I_3}} $denotes a bilinear paraproduct in ${\mathbb{R}}^m$, and is of the same form for all $K,I_1, I_2, I_3$. Again, an operator of the above form, but having the non-cancellative Haar function $h_I^0$ in some other slot, is also a partial paraproduct. 
Therefore, we have nine different possibilities again (the bilinear paraproducts can be of one of the three different types, and the non-cancellative Haar function in ${\mathbb{R}}^n$ can appear in one of the three slots). Of course, we also have partial paraproducts with shift structure in ${\mathbb{R}}^m$ and paraproducts in ${\mathbb{R}}^n$. Bilinear bi-parameter full paraproducts --------------------------------------- Given a function $b \colon {\mathbb{R}}^{n+m} \to {\mathbb{C}}$ with $\|b\|_{{\operatorname{BMO}}_{\textup{prod}}({\mathbb{R}}^{n+m})} = 1$ a full paraproduct $\Pi_b$ of a particular form is $$\Pi_b(f_1, f_2) = \sum_{\substack{K \in {\mathcal{D}}^n \\ V \in {\mathcal{D}}^m}} \lambda_{K,V}^b \langle f_1 \rangle_{K \times V} \langle f_2 \rangle_{K \times V} h_K \otimes h_V,$$ where the function $b$ determines the coefficients $\lambda_{K,V}^b$ via the formula $\lambda_{K,V}^b := \langle b, h_K \otimes h_V \rangle$. Again, an operator of the above form, but having the cancellative Haar functions $h_K$ or $h_V$ in some other slots, is also a full paraproduct. There are nine different cases as the Haar functions present in the coefficients $\lambda_{K,V}^b$ are not allowed to move, i.e. we always have $\lambda_{K,V}^b := \langle b, h_K \otimes h_V \rangle$. For example, $\Pi_b$ could also be of the form $$\Pi_b(f_1, f_2) = \sum_{\substack{K \in {\mathcal{D}}^n \\ V \in {\mathcal{D}}^m}} \lambda_{K,V}^b \langle f_1 \rangle_{K \times V} \Big\langle f_2, \frac{1_K}{|K|} \otimes h_V \Big\rangle h_K \otimes \frac{1_V}{|V|}.$$ We warn the reader that later we will have *linear* bi-parameter paraproducts (the operators $A_i(b, \cdot)$, $i = 5,6,7,8$, in Section \[sec:marprod\]) so that even the coefficients $\lambda_{K,V}^b$ can have $\frac{1_K}{|K|} \otimes h_V$ or $h_K \otimes \frac{1_V}{|V|}$.
The role of such paraproducts is the following: they appear in some decompositions of $bf$ related to commutators, but they *do not* appear in the linear bi-parameter representation theorem [@Ma1]. In fact, their boundedness also requires more: $b$ has to be in ${\operatorname{bmo}}({\mathbb{R}}^n \times {\mathbb{R}}^m)$. In this section we are only introducing operators that appear in the bilinear bi-parameter representation theorem [@LMV], so considerations of this nature do not concern us here. Boundedness properties of the model operators --------------------------------------------- In [@LMV] we showed that all the model operators are bounded in the full range $L^p({\mathbb{R}}^{n+m}) \times L^q({\mathbb{R}}^{n+m}) \to L^r({\mathbb{R}}^{n+m})$, $p,q \in (1,\infty]$, $r \in (1/2, \infty)$ and $1/p + 1/q = 1/r$. In fact, we even showed various weighted estimates and mixed-norm estimates. Bilinear bi-parameter singular integrals and commutators ======================================================== A bilinear bi-parameter singular integral $T$ has a relatively long definition. A model of a bilinear bi-parameter CZO in ${\mathbb{R}}^n \times {\mathbb{R}}^m$ is $$(T_1 \otimes T_2)(f_1 \otimes f_2, g_1 \otimes g_2)(x) := T_1(f_1, g_1)(x_1)T_2(f_2, g_2)(x_2),$$ where $f_1, g_1 \colon {\mathbb{R}}^n \to {\mathbb{C}}$, $f_2, g_2 \colon {\mathbb{R}}^m \to {\mathbb{C}}$, $x = (x_1, x_2) \in {\mathbb{R}}^{n+m}$, $T_1$ is a bilinear CZO in ${\mathbb{R}}^n$ and $T_2$ is a bilinear CZO in ${\mathbb{R}}^m$. A model of a bilinear CZO $T$ in ${\mathbb{R}}^n$ is $$T(f_1, f_2)(x) := \tilde T(f_1 \otimes f_2)(x,x), \qquad x \in {\mathbb{R}}^n,$$ where $\tilde T$ is a usual linear CZO in ${\mathbb{R}}^{2n}$. For the general definition of a bilinear singular integral see e.g. [@GT]. For the general definition of bilinear bi-parameter singular integrals we refer to Section 3 of [@LMV].
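To make the last model concrete, we sketch its kernel representation (a standard reformulation; the notation $\tilde K$ is ours). If $\tilde K$ denotes the kernel of the linear CZO $\tilde T$ in ${\mathbb{R}}^{2n}$, then for $x \notin \operatorname{supp} f_1 \cap \operatorname{supp} f_2$ we have, at least formally, $$T(f_1, f_2)(x) = \iint_{{\mathbb{R}}^{2n}} \tilde K\big((x,x), (y_1, y_2)\big) f_1(y_1) f_2(y_2) {\,\mathrm{d}}y_1 {\,\mathrm{d}}y_2,$$ so that $T$ is a bilinear singular integral with the bilinear kernel $K(x, y_1, y_2) := \tilde K((x,x), (y_1, y_2))$.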
In [@LMV] we proved that under certain natural T1 type conditions we can represent $\langle T(f_1, f_2), f_3 \rangle$ using the model operators from Section \[sec:defbilinbiparmodel\] (shifts, partial paraproducts and full paraproducts). For the definition of the $T1$ type conditions (their exact nature is not needed in this paper) we again refer to Section 3 of [@LMV]. We now state the bilinear bi-parameter representation theorem from Section 5 of [@LMV]. For this we need the following notation regarding random dyadic grids. Let $\mathcal{D}_0^n$ and $\mathcal{D}_0^m$ denote the standard dyadic grids on ${\mathbb{R}}^n$ and ${\mathbb{R}}^m$ respectively. For $\omega = (\omega_i) \in (\{0,1\}^n)^{{\mathbb{Z}}}$, $\omega' = ( \omega'_i) \in(\{0,1\}^m)^{{\mathbb{Z}}}$, $I \in {\mathcal{D}}^n_0$ and $J \in {\mathcal{D}}^m_0$ denote $$I + \omega := I + \sum_{i:\, 2^{-i} < \ell(I)} 2^{-i}\omega_i \qquad \textup{and} \qquad J + \omega' := J + \sum_{i:\, 2^{-i} < \ell(J)} 2^{-i}\omega'_i.$$ Then we define the random lattices $${\mathcal{D}}^n_{\omega} = \{I + \omega\colon I \in {\mathcal{D}}^n_0\} \qquad \textup{and} \qquad {\mathcal{D}}^m_{\omega'} = \{J + \omega'\colon J \in {\mathcal{D}}^m_0\}.$$ In what follows always $\omega \in (\{0,1\}^n)^{{\mathbb{Z}}}$ and $\omega' \in(\{0,1\}^m)^{{\mathbb{Z}}}$. There is a natural probability product measure $\mathbb{P}_{\omega}$ in $(\{0,1\}^n)^{{\mathbb{Z}}}$ and $\mathbb{P}_{\omega'}$ in $(\{0,1\}^m)^{{\mathbb{Z}}}$. We denote the expectation over these probability spaces by ${\mathbb{E}}_{\omega, \omega'} = {\mathbb{E}}_{\omega} {\mathbb{E}}_{\omega'} = \iint {\,\mathrm{d}}\mathbb{P}_{\omega} {\,\mathrm{d}}\mathbb{P}_{\omega'}$. We sometimes can also write ${\mathcal{D}}_0 = {\mathcal{D}}^n_0 \times {\mathcal{D}}^m_0$ and ${\mathcal{D}}_{\omega, \omega'} = {\mathcal{D}}^n_{\omega} \times {\mathcal{D}}^m_{\omega'}$. 
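A simple one-dimensional illustration of the random translations may be helpful (the interval choices here are ours). Let $n = 1$, take $I' = [0,1) \in {\mathcal{D}}^1_0$ and its right half $I = [1/2, 1)$, and write $\sigma := \sum_{i \ge 1} 2^{-i}\omega_i$. Since the shift of $I$ only involves the coordinates $\omega_i$ with $2^{-i} < \ell(I)$, i.e. $i \ge 2$, we get $$I' + \omega = [\sigma, 1 + \sigma) \qquad \textup{and} \qquad I + \omega = \Big[\frac{1}{2} + \sigma - \frac{\omega_1}{2}, 1 + \sigma - \frac{\omega_1}{2}\Big),$$ so that $I + \omega$ is the right half of $I' + \omega$ if $\omega_1 = 0$ and the left half if $\omega_1 = 1$. In particular, inclusions between dyadic intervals are preserved, and each ${\mathcal{D}}^n_{\omega}$ is again a genuine dyadic grid.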
\[thm:rep\] Suppose $T$ is a bilinear bi-parameter Calderón–Zygmund operator satisfying all the structural assumptions and all the boundedness and cancellation assumptions as formulated in Section 3 of [@LMV]. Then $$\langle T(f_1,f_2), f_3\rangle = C_T \mathbb{E}_{\omega, \omega'}\mathop{\sum_{k = (k_1, k_2, k_3) \in {\mathbb{Z}}_+^3}}_{v = (v_1, v_2, v_3) \in {\mathbb{Z}}_+^3} \alpha_{k, v} \sum_{u} {\big \langle}U^{v}_{k, u, \mathcal{D}^n_{\omega},\mathcal{D}^m_{\omega'}}(f_1, f_2), f_3 {\big \rangle},$$ where $C_T \lesssim 1$, $\alpha_{k, v} = 2^{- \alpha \max k_i/2} 2^{- \alpha \max v_j/2}$, the summation over $u$ is finite, and $U^{v}_{k, u, \mathcal{D}^n_{\omega},\mathcal{D}^m_{\omega'}}$ is always either a shift of complexity $(k,v)$, a partial paraproduct of complexity $k$ or $v$ (this requires $k= 0$ or $v=0$) or a full paraproduct (this requires $k=v=0$). We can e.g. understand that here $f_1, f_2, f_3 \in L^3({\mathbb{R}}^{n+m})$. We recall that in [@LMV] we in particular showed that every bilinear bi-parameter singular integral $T$ satisfying the assumptions of the above representation theorem maps in the full range $L^p({\mathbb{R}}^{n+m}) \times L^q({\mathbb{R}}^{n+m}) \to L^r({\mathbb{R}}^{n+m})$, $p,q \in (1,\infty]$, $r \in (1/2, \infty)$ and $1/p + 1/q = 1/r$. In fact, we showed much more general bounds – see [@LMV]. We can now formulate our theorem about the Banach range boundedness of $[b,T]_1$, where $T$ is a bilinear bi-parameter singular integral satisfying the assumptions of the above representation theorem and $\|b\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = 1$. \[thm:main1\] Suppose $T$ is a bilinear bi-parameter singular integral satisfying the assumptions of Theorem \[thm:rep\] and $\|b\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = 1$. Let $p,q,r \in (1,\infty)$ with $1/p + 1/q = 1/r$. 
Then we have $$\|[b, T]_1(f_1, f_2)\|_{L^r({\mathbb{R}}^{n+m})} \lesssim \|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})}.$$ The claim follows from Theorem \[thm:rep\] and from the Banach range commutator bounds of the model operators, Theorem \[thm:com1ofmodelBanach\]. A bilinear bi-parameter singular integral $T$ is called free of paraproducts if for all suitable functions $f_i \colon {\mathbb{R}}^n \to {\mathbb{C}}$, $g_i \colon {\mathbb{R}}^m \to {\mathbb{C}}$, $i=1,2$, all $$S \in \{T, T^{*1}, T^{*2}, T^{1*}_1, T^{2*}_1, T^{1*}_2, T^{2*}_2, T^{1*, 2*}_{1,2}, T^{1*, 2*}_{2,1}\}$$ and all cubes $I \subset {\mathbb{R}}^n$, $J \subset {\mathbb{R}}^m$ there holds $$\langle S(1 \otimes g_1, 1 \otimes g_2), h_I \otimes h_J \rangle = \langle S(f_1 \otimes 1, f_2 \otimes 1), h_I \otimes h_J \rangle =0.$$ For the definition of all the nine adjoints and partial adjoints of $T$ see Section 2.8 of [@LMV]. This definition guarantees that $T$ has a representation with shifts only. In Section 8 of [@LMV] we proved that the bi-parameter multipliers $T_m$ of [@MPTT] are bilinear bi-parameter singular integrals satisfying the assumptions of Theorem \[thm:rep\] and that they are free of paraproducts. Our second main theorem, involving quasi-Banach estimates for commutators of paraproduct-free singular integrals, is: \[thm:main2\] Suppose $T$ is a bilinear bi-parameter singular integral satisfying the assumptions of Theorem \[thm:rep\] and that $T$ is free of paraproducts. Let $\|b\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = 1$, and let $1 < p, q \le \infty$ and $1/2 < r < \infty$ satisfy $1/p+1/q = 1/r$.
Then we have $$\|[b, T]_1(f_1, f_2)\|_{L^r({\mathbb{R}}^{n+m})} \lesssim \|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})}.$$ Using Theorem \[thm:rep\] write the pointwise identity $$[b,T]_1(f_1, f_2) = C_T \mathop{\sum_{k = (k_1, k_2, k_3) \in {\mathbb{Z}}_+^3}}_{v = (v_1, v_2, v_3) \in {\mathbb{Z}}_+^3} \alpha_{k, v} \sum_{u} \mathbb{E}_{\omega, \omega'} [b,S^{v}_{k, u, \mathcal{D}^n_{\omega},\mathcal{D}^m_{\omega'}}]_1(f_1, f_2),$$ where $S^{v}_{k, u, \mathcal{D}^n_{\omega},\mathcal{D}^m_{\omega'}}$ are bilinear bi-parameter shifts of complexity $(k,v)$ defined using the dyadic grids $\mathcal{D}^n_{\omega}$ and $\mathcal{D}^m_{\omega'}$. If $r \in (1/2,1]$ we get using $\|\sum_i g_i\|_{L^r({\mathbb{R}}^{n+m})}^r \le \sum_i \| g_i \|_{L^r({\mathbb{R}}^{n+m})}^r$ that we have $$\| [b,T]_1(f_1, f_2) \|_{L^r({\mathbb{R}}^{n+m})}^r \lesssim \mathop{\sum_{k = (k_1, k_2, k_3) \in {\mathbb{Z}}_+^3}}_{v = (v_1, v_2, v_3) \in {\mathbb{Z}}_+^3} \alpha_{k, v}^r \sum_{u} \|\mathbb{E}_{\omega, \omega'}[b,S^{v}_{k, u, \mathcal{D}^n_{\omega},\mathcal{D}^m_{\omega'}}]_1(f_1, f_2)\|_{L^r({\mathbb{R}}^{n+m})}^r.$$ If $r > 1$ simply use $\|\sum_i g_i\|_{L^r({\mathbb{R}}^{n+m})} \le \sum_i \| g_i \|_{L^r({\mathbb{R}}^{n+m})}$ instead. The claim then follows using Theorem \[thm:com1ofmodelQuasiBanach\], which says that *averages* of commutators of shifts map in the full range with a bound polynomial in complexity. The corresponding results for iterated commutators are recorded and proved in Section \[sec:iterated\]. Martingale difference expansions of products {#sec:marprod} ============================================ The idea is that a product $bf$ paired with Haar functions is expanded in the bi-parameter fashion only if both of the Haar functions are cancellative. In a mixed situation we expand only in ${\mathbb{R}}^n$ or ${\mathbb{R}}^m$, and in the remaining fully non-cancellative situation we do not expand at all – and this protocol is key for us. 
Also, our protocol entails the following: when pairing with a non-cancellative Haar function we add and subtract a suitable average of $b$. Let ${\mathcal{D}}^n$ and ${\mathcal{D}}^m$ be some fixed dyadic grids in ${\mathbb{R}}^n$ and ${\mathbb{R}}^m$, respectively, and write ${\mathcal{D}}= {\mathcal{D}}^n \times {\mathcal{D}}^m$. In what follows we sum over $I \in {\mathcal{D}}^n$ and $J \in {\mathcal{D}}^m$. Paraproduct operators --------------------- Let us first define certain paraproduct operators: $$\begin{aligned} A_1(b,f) &= \sum_{I, J} \Delta_{I \times J} b \Delta_{I \times J} f, \\ A_2(b,f) &= \sum_{I, J} \Delta_{I \times J} b E_I^1\Delta_J^2 f, \\ A_3(b,f) &= \sum_{I, J} \Delta_{I \times J} b \Delta_I^1 E_J^2 f,\\ A_4(b,f) &= \sum_{I, J} \Delta_{I \times J} b {\big \langle}f {\big \rangle}_{I \times J},\end{aligned}$$ and $$\begin{aligned} A_5(b,f) &= \sum_{I, J} E_I^1 \Delta_J^2 b \Delta_{I \times J} f, \\ A_6(b,f) &= \sum_{I, J} E_I^1 \Delta_J^2 b \Delta_I^1 E_J^2 f, \\ A_7(b,f) &= \sum_{I, J} \Delta_I^1 E_J^2 b \Delta_{I \times J} f, \\ A_8(b,f) &= \sum_{I, J} \Delta_I^1 E_J^2 b E_I^1 \Delta_J^2 f.\end{aligned}$$ We grouped these into two collections, because these are handled differently. When desired, these operators can be written with Haar functions using $$\Delta_{I \times J} g = \langle g, h_I \otimes h_J\rangle h_I \otimes h_J, \,\, \Delta_I^1 g = h_I \otimes \langle g, h_I \rangle_1 \textup{ and } \Delta_J^2 g = \langle g, h_J \rangle_2 \otimes h_J,$$ where we have suppressed the signatures $h_I = h_I^{\epsilon}$ and $h_J = h_J^{\delta}$, $\epsilon \in \{0,1\}^n \setminus \{0\}$, $\delta \in \{0,1\}^m \setminus \{0\}$, of the Haar functions. This means that the finite summations over the signatures are implicitly understood. To understand things correctly, one has to be slightly careful when a term like $h_I h_I$ or $h_J h_J$ appears (as they do e.g. when expanding $A_1$ using Haar functions).
This really can be of the form $h_I^{\epsilon_1} h_I^{\epsilon_2}$ for possibly different $\epsilon_1, \epsilon_2$. However, the only property we will use is that $|h_I h_I| = 1_I / |I|$, i.e. we always treat such products as non-cancellative objects (the available cancellation when $\epsilon_1 \ne \epsilon_2$ is simply never needed or used). Suppose that $\|b\|_{{\operatorname{BMO}}_{\textup{prod}}({\mathbb{R}}^{n+m})} = 1$. Then standard theory tells us that for $i = 1, \ldots, 4$ we have $$\label{eq:wforlargeA1} \|A_i(b, f)\|_{L^p(w)} \lesssim C([w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)}) \|f\|_{L^p(w)}, \, p \in (1,\infty), \, w \in A_p({\mathbb{R}}^n \times {\mathbb{R}}^m).$$ This is because these operators are bi-parameter paraproducts, and their boundedness follows easily by using $$\sum_{I, J} |\langle b, h_I \otimes h_J \rangle| |A_{IJ}| \lesssim \Big\|\Big( \sum_{I,J} |A_{IJ}|^2 \frac{1_{I \times J}}{|I \times J|} \Big)^{1/2} \Big\|_{L^1({\mathbb{R}}^{n+m})}.$$ For a simple proof of this inequality see e.g. Proposition 4.1 of [@MO]. 
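To illustrate how this inequality is used, we sketch the unweighted bound for $A_4$ (the other operators and the weighted bounds are handled similarly). Pairing with $g$ and applying the inequality with $A_{IJ} = \langle f \rangle_{I \times J} \langle g, h_I \otimes h_J \rangle$ gives $$|\langle A_4(b, f), g \rangle| \le \sum_{I, J} |\langle b, h_I \otimes h_J \rangle| |\langle f \rangle_{I \times J}| |\langle g, h_I \otimes h_J \rangle| \lesssim \Big\| M_{{\mathcal{D}}^n, {\mathcal{D}}^m} f \Big( \sum_{I, J} |\Delta_{I \times J} g|^2 \Big)^{1/2} \Big\|_{L^1({\mathbb{R}}^{n+m})},$$ where we used $|\langle f \rangle_{I \times J}| 1_{I \times J} \le M_{{\mathcal{D}}^n, {\mathcal{D}}^m} f$. Hölder's inequality, the $L^p$ boundedness of the strong maximal function and Lemma \[lem:mest\] (with $w = 1$) now give $|\langle A_4(b, f), g \rangle| \lesssim \|f\|_{L^p({\mathbb{R}}^{n+m})} \|g\|_{L^{p'}({\mathbb{R}}^{n+m})}$, and the bound for $A_4$ follows by duality.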
If we assume more, namely that $\|b\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = 1$, then also for $i = 5, \ldots, 8$ we have $$\label{eq:wforlargeA2} \|A_i(b, f)\|_{L^p(w)} \lesssim C([w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)}) \|f\|_{L^p(w)}, \, p \in (1,\infty), \, w \in A_p({\mathbb{R}}^n \times {\mathbb{R}}^m).$$ The proofs of these bounds are similar to the above ones, and use the fact that uniformly in $I$ we have $$\sum_J \Big| \Big\langle b, \frac{1_I}{|I|} \otimes h_J\Big\rangle\Big| |A_{IJ}| \lesssim \Big\|\Big( \sum_{J} |A_{IJ}|^2 \frac{1_J}{|J|} \Big)^{1/2} \Big\|_{L^1({\mathbb{R}}^m)}.$$ We also define $$a^1_1(b,f) = \sum_I \Delta_I^1 b \Delta_I^1 f$$ and $$a^1_2(b,f) = \sum_I \Delta_I^1 b E_I^1 f.$$ Again, if $\|b\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = 1$ then for $i=1,2$ we have $$\label{eq:wforsmallA} \|a_i^1(b, f)\|_{L^p(w)} \lesssim C([w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)}) \|f\|_{L^p(w)}, \,\, p \in (1,\infty), \, w \in A_p({\mathbb{R}}^n \times {\mathbb{R}}^m).$$ The operators $a^2_1(b,f)$ and $a^2_2(b,f)$ are defined analogously. Let now $I_0 \in {\mathcal{D}}^n$ and $J_0 \in {\mathcal{D}}^m$, and suppose $b \in {\operatorname{bmo}}({\mathbb{R}}^{n+m})$ and $f\in L^{p_0}({\mathbb{R}}^{n+m})$ for some $p_0 \in (1, \infty)$. We introduce our basic expansions of $\langle bf, h_{I_0} \otimes h_{J_0}\rangle$ and $\langle bf, h_{I_0} \otimes h_{J_0}^0\rangle$ (the expansion in the case $\langle bf, h_{I_0}^0 \otimes h_{J_0}\rangle$ being symmetric, of course). Expansion of $\langle bf, h_{I_0} \times h_{J_0} \rangle$ --------------------------------------------------------- We know that $b \in L^{p_0'}_{{\operatorname{loc}}}({\mathbb{R}}^{n+m})$.
Therefore, there holds $$1_{I_0 \times J_0} b = \sum_{\substack{I_1\times J_1 \in {\mathcal{D}}\\ I_1 \times J_1 \subset I_0 \times J_0}}\Delta_{I_1 \times J_1} b +\sum_{\substack{J_1 \in {\mathcal{D}}^m \\ J_1 \subset J_0}} E^1_{I_0} \Delta^2_{J_1} b + \sum_{\substack{I_1 \in {\mathcal{D}}^n \\ I_1 \subset I_0}} \Delta^1_{I_1} E^2_{J_0} b + E_{I_0 \times J_0} b.$$ Let us denote these terms by $I_i$, $i=1,2,3,4$, in the respective order. We have the corresponding decomposition of $f$, whose terms we denote by $II_i$, $i=1,2,3,4$. Notice that $$\sum_{i=1}^4 \langle I_1 II_i, h_{I_0} \otimes h_{J_0} \rangle = \sum_{i=1}^4 \langle A_i(b, f), h_{I_0} \otimes h_{J_0} \rangle,$$ $$\sum_{i=1}^4 \langle I_2 II_i, h_{I_0} \otimes h_{J_0} \rangle = \sum_{i=5}^6 \langle A_i(b, f), h_{I_0} \otimes h_{J_0} \rangle,$$ $$\sum_{i=1}^4 \langle I_3 II_i, h_{I_0} \otimes h_{J_0} \rangle = \sum_{i=7}^8 \langle A_i(b, f), h_{I_0} \otimes h_{J_0} \rangle$$ and $$\sum_{i=1}^4 \langle I_4 II_i, h_{I_0} \otimes h_{J_0} \rangle = \langle b \rangle_{I_0 \times J_0} \langle f, h_{I_0} \otimes h_{J_0} \rangle.$$ Therefore, we have $$\label{eq:biparEX} \langle bf, h_{I_0} \otimes h_{J_0} \rangle = \sum_{i=1}^8 \langle A_i(b, f), h_{I_0} \otimes h_{J_0} \rangle + \langle b \rangle_{I_0 \times J_0} \langle f, h_{I_0} \otimes h_{J_0} \rangle.$$ Expansion of $\langle bf, h_{I_0} \times h_{J_0}^0 \rangle$ ----------------------------------------------------------- This time we write $$1_{I_0} b = \sum_{\substack{I_1 \in {\mathcal{D}}^n \\ I_1 \subset I_0}}\Delta_{I_1}^1 b + E_{I_0}^1 b,$$ and similarly for $f$, and notice that $$\langle bf, h_{I_0} \rangle_1 = \sum_{i=1}^2 \langle a_i^1(b,f), h_{I_0}\rangle_1 + \langle b \rangle_{I_0,1} \langle f, h_{I_0}\rangle_1.$$ Therefore, we have $$\label{eq:1EX} \begin{split} \langle bf, h_{I_0} \otimes h_{J_0}^0 \rangle &= \sum_{i=1}^2 \langle a_i^1(b,f), h_{I_0} \otimes h_{J_0}^0 \rangle \\ &+ \langle (\langle b \rangle_{I_0,1} - \langle b \rangle_{I_0 
\times J_0}) \langle f, h_{I_0}\rangle_1, h_{J_0}^0\rangle + \langle b \rangle_{I_0 \times J_0} \langle f, h_{I_0} \otimes h_{J_0}^0 \rangle. \end{split}$$ When we have $\langle bf, h_{I_0}^0 \otimes h_{J_0}^0 \rangle$ we do not expand at all; we simply add and subtract an average: $$\label{eq:noEX} \langle bf, h_{I_0}^0 \otimes h_{J_0}^0 \rangle = \langle (b-\langle b \rangle_{I_0 \times J_0})f, h_{I_0}^0 \otimes h_{J_0}^0 \rangle + \langle b \rangle_{I_0 \times J_0} \langle f, h_{I_0}^0 \otimes h_{J_0}^0 \rangle.$$ Key identities related to commutators {#sec:KeyIdentities} ===================================== We state some lemmas related to identities that appear when we expand using \[eq:biparEX\], \[eq:1EX\], the symmetric form of \[eq:1EX\] or \[eq:noEX\] in commutators of model operators. The proofs of these lemmas are trivial applications of these identities, and the cancellation present in the commutators is simply exploited by grouping the terms involving free averages of $b$ together (the last term appearing in these identities). In the first-order commutators of model operators there are essentially seven different symmetries depending on how many non-cancellative Haar functions we have in the model operator in question, and how they are situated – for these symmetries see the proof of Theorem \[thm:com1ofmodelBanach\]. We only explicitly state lemmas relevant for three of these symmetries, but the remaining identities are completely analogous and obtained by expanding using the described protocol. Below we have $I,Q \in {\mathcal{D}}^n$ and $J, R \in {\mathcal{D}}^m$ with some fixed dyadic grids ${\mathcal{D}}^n$ and ${\mathcal{D}}^m$.
\[lem:case1\] We have $$\begin{aligned} \langle f, h_I \otimes h_J& \rangle \langle bg, h_Q \otimes h_R \rangle - \langle bf, h_I \otimes h_J \rangle \langle g, h_Q \otimes h_R \rangle \\ &= \sum_{i=1}^8 \langle f, h_I \otimes h_J \rangle \langle A_i(b,g), h_Q \otimes h_R \rangle \\ &- \sum_{i=1}^8 \langle A_i(b,f), h_I \otimes h_J \rangle \langle g, h_Q \otimes h_R \rangle \\ &+ [\langle b \rangle_{Q \times R} - \langle b \rangle_{I \times J}] \langle f, h_I \otimes h_J \rangle \langle g, h_Q \otimes h_R \rangle.\end{aligned}$$ \[lem:case2\] We have $$\begin{aligned} \langle f, h_I^0 \otimes h_J& \rangle \langle bg, h_Q \otimes h_R \rangle - \langle bf, h_I^0 \otimes h_J \rangle \langle g, h_Q \otimes h_R \rangle \\ &= \sum_{i=1}^8 \langle f, h_I^0 \otimes h_J \rangle \langle A_i(b,g), h_Q \otimes h_R \rangle \\ &- \sum_{i=1}^2 \langle a_i^2(b,f), h_I^0 \otimes h_J \rangle \langle g, h_Q \otimes h_R \rangle \\ &+ {\big \langle}(\langle b \rangle_{I \times J} - \langle b \rangle_{J, 2})\langle f, h_J\rangle_2, h_{I}^0{\big \rangle}\langle g, h_Q \otimes h_R \rangle \\ &+ [\langle b \rangle_{Q \times R} - \langle b \rangle_{I \times J}] \langle f, h_I^0 \otimes h_J \rangle \langle g, h_Q \otimes h_R \rangle\end{aligned}$$ \[lem:case3\] We have $$\begin{aligned} \langle f, h_I^0 \otimes h_J& \rangle \langle bg, h_Q \otimes h_R^0 \rangle - \langle bf, h_I^0 \otimes h_J \rangle \langle g, h_Q \otimes h_R^0 \rangle \\ &=\sum_{i=1}^2 \langle f, h_I^0 \otimes h_J \rangle \langle a^1_i (b,g), h_Q \otimes h_R^0 \rangle \\ &-\sum_{i=1}^2 \langle a^2_i(b,f), h_I^0 \otimes h_J \rangle \langle g, h_Q \otimes h_R^0 \rangle \\ &+\langle f, h_I^0 \otimes h_J \rangle {\big \langle}(\langle b \rangle_{Q,1}-\langle b \rangle_{Q \times R}) \langle g ,h_Q \rangle_1, h^0_R {\big \rangle}\\ &-{\big \langle}(\langle b \rangle_{J,2}-\langle b \rangle_{I \times J}) \langle f,h_J \rangle_2,h^0_I {\big \rangle}\langle g, h_Q \otimes h_R^0 \rangle \\ &+[\langle b \rangle_{Q \times R} - \langle 
b \rangle_{I \times J}] \langle f, h_I^0 \otimes h_J \rangle \langle g, h_Q \otimes h_R^0 \rangle.\end{aligned}$$ We also record two additional lemmas, which are used in conjunction with such identities. \[lem:maximalbound\] For $I \in {\mathcal{D}}^n$ and $J \in {\mathcal{D}}^m$ we have $$\big|{\big \langle}(\langle b \rangle_{J, 2} -\langle b \rangle_{I \times J} )\langle f, h_J\rangle_2 {\big \rangle}_I\big| \le \Big\langle \varphi_{{\mathcal{D}}^m, b}(f), \frac{1_I}{|I|} \otimes h_J\Big\rangle$$ and $$\big|{\big \langle}(b-\langle b \rangle_{I \times J})f {\big \rangle}_{I \times J}\big| \lesssim \langle M_b f \rangle_{I \times J}.$$ There holds $$\big|{\big \langle}(\langle b \rangle_{J, 2} -\langle b \rangle_{I \times J} )\langle f, h_J\rangle_2 {\big \rangle}_I\big| \le \langle M_{\langle b \rangle_{J,2}} \langle f, h_J\rangle_2 \rangle_{I} = \Big\langle \varphi_{{\mathcal{D}}^m, b}(f), \frac{1_I}{|I|} \otimes h_J\Big\rangle,$$ where the last equality follows from orthogonality. The second claimed inequality is even more immediate. \[lem:bmobound\] Suppose $I^{(i)} = Q^{(q)} = K$ and $J^{(j)} = R^{(r)} = V$.
If $\|b\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = 1$ then we have $$|\langle b \rangle_{Q \times R} - \langle b \rangle_{I \times J}| \lesssim \max(i, j, q, r).$$ Estimate $$|\langle b \rangle_{Q \times R} - \langle b \rangle_{I \times J}| \le |\langle b \rangle_{Q \times R} - \langle b \rangle_{K \times V}| + |\langle b \rangle_{I \times J}- \langle b \rangle_{K \times V}|,$$ and use repeatedly that $$|\langle b \rangle_{Q \times R} - \langle b \rangle_{Q^{(1)} \times R}| \le \langle |b- \langle b \rangle_{Q^{(1)} \times R}| \rangle_{Q \times R} \lesssim \langle |b- \langle b \rangle_{Q^{(1)} \times R}| \rangle_{Q^{(1)} \times R} \le 1.$$ Banach range boundedness of commutators of model operators {#sec:BanachforModels} ========================================================== For the Banach range theory of commutators we only need the rather easy fact that all the model operators from Section \[sec:defbilinbiparmodel\] are of the following general type. Fix dyadic grids ${\mathcal{D}}^n$ and ${\mathcal{D}}^m$. 
Let $U = U^v_k$, $0 \le k_i \in {\mathbb{Z}}$ and $0 \le v_i \in {\mathbb{Z}}$, $i=1,2,3$, be a bilinear bi-parameter operator such that $$\begin{split} \langle U(f_1,f_2),f_3 \rangle = \sum_{\substack{K \in {\mathcal{D}}^n \\ V \in {\mathcal{D}}^m}} \sum_{\substack{I_1, I_2, I_3 \in {\mathcal{D}}^n \\ I_1^{(k_1)} = I_2^{(k_2)} = I_3^{(k_3)} = K}} &\sum_{\substack{J_1, J_2, J_3 \in {\mathcal{D}}^m \\ J_1^{(v_1)} = J_2^{(v_2)} = J_3^{(v_3)} = V}} a_{K, V, (I_i), (J_j)} \\ &\times \langle f_1, {{\widetilde{h}}}_{I_1} \otimes {{\widetilde{h}}}_{J_1}\rangle \langle f_2, {{\widetilde{h}}}_{I_2} \otimes {{\widetilde{h}}}_{J_2}\rangle \langle f_3, {{\widetilde{h}}}_{I_3} \otimes {{\widetilde{h}}}_{J_3} \rangle, \end{split}$$ where $a_{K, V, (I_i), (J_j)}$ are constants and for all $i=1,2,3$ we have $ {{\widetilde{h}}}_{I_i}= h_{I_i}$ for all $I_i \in {\mathcal{D}}^n$ or ${{\widetilde{h}}}_{I_i}= h_{I_i}^0$ for all $I_i\in {\mathcal{D}}^n$, and similarly with the functions ${{\widetilde{h}}}_{J_j}$. We assume that for all $p,q,r \in (1,\infty)$ with $1/p + 1/q = 1/r$ we have $$\label{eq:bb} \begin{split} \sum_{\substack{K \in {\mathcal{D}}^n \\ V \in {\mathcal{D}}^m}} \sum_{\substack{I_1, I_2, I_3 \in {\mathcal{D}}^n \\ I_1^{(k_1)} = I_2^{(k_2)} = I_3^{(k_3)} = K}} &\sum_{\substack{J_1, J_2, J_3 \in {\mathcal{D}}^m \\ J_1^{(v_1)} = J_2^{(v_2)} = J_3^{(v_3)} = V}} \big| a_{K, V, (I_i), (J_j)} \\ &\times \langle f_1, {{\widetilde{h}}}_{I_1} \otimes {{\widetilde{h}}}_{J_1}\rangle \langle f_2, {{\widetilde{h}}}_{I_2} \otimes {{\widetilde{h}}}_{J_2}\rangle \langle f_3, {{\widetilde{h}}}_{I_3} \otimes {{\widetilde{h}}}_{J_3} \rangle \big | \\ & \lesssim \| f_1 \|_{L^p} \| f_2 \|_{L^q}\| f_3 \|_{L^{r'}}. \end{split}$$ We do not assume anything else about the constants $a_{K, V, (I_i), (J_j)}$. In particular, $U$ can be a bilinear bi-parameter shift, a partial paraproduct or a full paraproduct. 
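To fix the bookkeeping used in the proofs below (a sketch only, with no content beyond the definitions above): since $[b,U]_1(f_1,f_2) = b\,U(f_1,f_2) - U(bf_1,f_2)$, dualising against $f_3$ expands the commutator termwise,

```latex
% Termwise expansion of the first-slot commutator:
\langle [b,U]_1(f_1,f_2), f_3 \rangle
  = \langle U(f_1,f_2), b f_3 \rangle - \langle U(b f_1, f_2), f_3 \rangle,
% so each summand of the pairing <U(f_1,f_2), f_3> contributes, up to the
% factor a_{K,V,(I_i),(J_j)} <f_2, h~_{I_2} (x) h~_{J_2}>, the difference
\langle f_1, {{\widetilde{h}}}_{I_1} \otimes {{\widetilde{h}}}_{J_1} \rangle
  \langle b f_3, {{\widetilde{h}}}_{I_3} \otimes {{\widetilde{h}}}_{J_3} \rangle
  - \langle b f_1, {{\widetilde{h}}}_{I_1} \otimes {{\widetilde{h}}}_{J_1} \rangle
  \langle f_3, {{\widetilde{h}}}_{I_3} \otimes {{\widetilde{h}}}_{J_3} \rangle.
```

These differences are exactly of the shape rewritten by Lemmas \[lem:case1\]--\[lem:case3\]; in Case 1 below, for instance, they are applied with $f = f_1$ and $g = f_3$.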
\[thm:com1ofmodelBanach\] Let $p,q,r \in (1,\infty)$, $1/p + 1/q = 1/r$, $0 \le k_i \in {\mathbb{Z}}$ and $0 \le v_i \in {\mathbb{Z}}$, $i=1,2,3$. Let $U = U^v_k$ be a general bilinear bi-parameter model operator satisfying . In particular, $U$ can be a bilinear bi-parameter shift, a partial paraproduct or a full paraproduct. Then for $b$ such that $\|b\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = 1$ we have $$\|[b, U]_1(f_1, f_2)\|_{L^r({\mathbb{R}}^{n+m})} \lesssim (1+\max(k_i, v_i)) \|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})}.$$ We separately treat the different possible combinations of cancellative and non-cancellative Haar functions. The proof depends only on what Haar functions we have paired with $f_1$ and $f_3$ in $\langle U(f_1, f_2), f_3\rangle$. All the model operators fall into one of the following cases: 1. We have $\langle f_1, h_{I_1} \otimes h_{J_1}\rangle \langle f_3, h_{I_3} \otimes h_{J_3}\rangle$. 2. We have $\langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_3, h_{I_3} \otimes h_{J_3}\rangle$, or one of the three other symmetric cases. 3. We have $\langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_3, h_{I_3} \otimes h_{J_3}^0\rangle$, or the symmetric case. 4. We have $\langle f_1, h_{I_1}^0 \otimes h_{J_1}^0\rangle \langle f_3, h_{I_3} \otimes h_{J_3}\rangle$, or the symmetric case. 5. We have $\langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_3, h_{I_3}^0 \otimes h_{J_3}\rangle$, or the symmetric case. 6. We have $\langle f_1, h_{I_1}^0 \otimes h_{J_1}^0\rangle \langle f_3, h_{I_3}^0 \otimes h_{J_3}\rangle$, or one of the other three symmetric cases. 7. We have $\langle f_1, h_{I_1}^0 \otimes h_{J_1}^0\rangle \langle f_3, h_{I_3}^0 \otimes h_{J_3}^0\rangle$. **Case 1.** We use Lemma \[lem:case1\] with $f = f_1$, $I = I_1$, $J=J_1$ and $g=f_3$, $Q=I_3$, $R=J_3$. 
Using Lemma \[lem:bmobound\], the boundedness property and the boundedness of the operators $A_i(b, \cdot)$, $i = 1,\ldots, 8$, we have that $$|\langle [b,U]_1(f_1, f_2), f_3\rangle| \lesssim (1+\max(k_i, v_i)) \|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})} \|f_3\|_{L^{r'}({\mathbb{R}}^{n+m})}.$$ **Case 2.** This time we use Lemma \[lem:case2\]. Then we use Lemma \[lem:maximalbound\], Lemma \[lem:bmobound\], the boundedness property , the boundedness of the operators $A_i(b, \cdot)$, $i = 1,\ldots, 8$, the boundedness of the operators $a_i^1(b,\cdot)$, $a_i^2(b,\cdot)$, $i = 1,2$, and Lemma \[lem:bmaxbounds\]. This gives us the desired bound. For example, one estimates as follows: $$\begin{aligned} &\sum_{\substack{K \in {\mathcal{D}}^n \\ V \in {\mathcal{D}}^m}} \sum_{\substack{I_1, I_2, I_3 \in {\mathcal{D}}^n \\ I_1^{(k_1)} = I_2^{(k_2)} = I_3^{(k_3)} = K}} \sum_{\substack{J_1, J_2, J_3 \in {\mathcal{D}}^m \\ J_1^{(v_1)} = J_2^{(v_2)} = J_3^{(v_3)} = V}} |a_{K, V, (I_i), (J_j)}| \\ &\times \big|{\big \langle}(\langle b \rangle_{J_1,2}-\langle b \rangle_{I_1 \times J_1}) \langle f_1,h_{J_1} \rangle_2,h^0_{I_1} {\big \rangle}\big| |\langle f_2, {{\widetilde{h}}}_{I_2} \otimes {{\widetilde{h}}}_{J_2}\rangle| |\langle f_3, h_{I_3} \otimes h_{J_3} \rangle| \\ &\lesssim \sum_{\substack{K \in {\mathcal{D}}^n \\ V \in {\mathcal{D}}^m}} \sum_{\substack{I_1, I_2, I_3 \in {\mathcal{D}}^n \\ I_1^{(k_1)} = I_2^{(k_2)} = I_3^{(k_3)} = K}} \sum_{\substack{J_1, J_2, J_3 \in {\mathcal{D}}^m \\ J_1^{(v_1)} = J_2^{(v_2)} = J_3^{(v_3)} = V}} |a_{K, V, (I_i), (J_j)}| \\ &\times \langle \varphi_{{\mathcal{D}}^m, b}(f_1), h_{I_1}^0 \otimes h_{J_1}\rangle |\langle f_2, {{\widetilde{h}}}_{I_2} \otimes {{\widetilde{h}}}_{J_2}\rangle| |\langle f_3, h_{I_3} \otimes h_{J_3} \rangle| \\ &\lesssim \| \varphi_{{\mathcal{D}}^m, b}(f_1) \|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})} \|f_3\|_{L^{r'}({\mathbb{R}}^{n+m})} \\ &\lesssim \|f_1\|_{L^p({\mathbb{R}}^{n+m})}
\|f_2\|_{L^q({\mathbb{R}}^{n+m})} \|f_3\|_{L^{r'}({\mathbb{R}}^{n+m})}.\end{aligned}$$ **Cases 3.-7.** We operate exactly as above but use Lemma \[lem:case3\], or other completely analogous identities (which are always obtained using the protocol stated in Section \[sec:marprod\]). Quasi–Banach estimates for ${\mathbb{E}}_{\omega, \omega'}[b,S_{\omega, \omega'}]_1$ via restricted weak type {#sec:quasiviarest} ============================================================================================================= In this section we will prove: \[thm:com1ofmodelQuasiBanach\] Let $\|b\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = 1$, and let $1 < p, q \le \infty$ and $1/2 < r < \infty$ satisfy $1/p+1/q = 1/r$. Suppose $S_{\omega, \omega'} := S^{v}_{k, \mathcal{D}^n_{\omega},\mathcal{D}^m_{\omega'}}$ is a bilinear bi-parameter shift of complexity $(k,v)$ defined using the dyadic grids $\mathcal{D}^n_{\omega}$ and $\mathcal{D}^m_{\omega'}$. Then we have $$\|\mathbb{E}_{\omega, \omega'}[b,S_{\omega, \omega'}]_1(f_1, f_2)\|_{L^r({\mathbb{R}}^{n+m})} \lesssim (1+\max(k_i, v_i)) \|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})}.$$ The case when $r>1$ in Theorem \[thm:com1ofmodelQuasiBanach\] is easy, since we already know the Banach range boundedness of commutators of shifts. Indeed, if $f_1 \in L^p({\mathbb{R}}^{n+m})$, $f_2 \in L^q({\mathbb{R}}^{n+m})$ and $f_3 \in L^{r'}({\mathbb{R}}^{n+m})$, then $$\begin{split} | \langle \mathbb{E}_{\omega, \omega'}[b,S_{\omega, \omega'}]_1(f_1, f_2),f_3 \rangle | & \le {\mathbb{E}}_{\omega,\omega'} | \langle [b,S_{\omega, \omega'}]_1(f_1, f_2),f_3 \rangle | \\ &\lesssim (1+\max(k_i, v_i)) \|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})} \|f_3\|_{L^{r'}({\mathbb{R}}^{n+m})}. \end{split}$$ The main task is to prove a restricted weak type estimate, which combined with the Banach range boundedness implies Theorem \[thm:com1ofmodelQuasiBanach\] via interpolation. 
We will show that given $p,q \in (1, \infty)$ and $r \in (1/2,1)$ satisfying $1/p+1/q=1/r$, $f_1 \in L^p({\mathbb{R}}^{n+m})$, $f_2 \in L^q({\mathbb{R}}^{n+m})$ and a set $E \subset {\mathbb{R}}^{n+m}$ with $0 < |E| < \infty$, there exists a subset $E' \subset E$ such that $|E'| \ge |E|/2$ and such that for all functions $f_3$ satisfying $|f_3| \le 1_{E'}$ there holds $$\label{eq:ResAveShift} \begin{split} | \langle \mathbb{E}_{\omega, \omega'}&[b,S_{\omega, \omega'}]_1(f_1, f_2),f_3 \rangle | \\ &\lesssim (1+\max(k_i, v_i)) \|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})}|E|^{1/r'}. \end{split}$$ To prove we consider the different types of shifts separately and split the commutators using the identities from Section \[sec:KeyIdentities\]. We choose one particular term and present its proof in full detail. The same steps, with minor modifications, handle the other terms as well; we comment on this at the end. We will denote the coefficients related to the shift $S_{\omega, \omega'}$ by $a^{\omega,\omega'}_{K,V,(I_i),(J_j)}$. The shifts we consider here are of the form $$\label{eq:ResAveShiftMixed} \begin{split} \langle S_{\omega, \omega'}(f_1,f_2),f_3 \rangle = \sum_{\substack{K \in {\mathcal{D}}^n_\omega \\ V \in {\mathcal{D}}^m_{\omega'}}} &\sum_{\substack{I_1, I_2, I_3 \in {\mathcal{D}}^n_\omega \\ I_i^{(k_i)} = K}} \sum_{\substack{J_1, J_2, J_3 \in {\mathcal{D}}^m_{\omega'} \\ J_i^{(v_i)} = V}} a^{\omega,\omega'}_{K, V, (I_i), (J_j)} \\ &\times \langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_2, h_{I_2} \otimes h_{J_2}\rangle \langle f_3, h_{I_3} \otimes h_{J_3}^0 \rangle. \end{split}$$ To the commutator of this we apply Lemma \[lem:case3\]. One of the resulting terms, the one we handle in detail, is considered in the next lemma. \[lem:ResAveShiftEx\] Let $\| b \|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})}=1$ and let $p,q \in (1, \infty)$ and $r \in (1/2,1)$ satisfy $1/p+1/q=1/r$.
Suppose $f_1 \in L^p({\mathbb{R}}^{n+m})$, $f_2 \in L^q({\mathbb{R}}^{n+m})$ and $E \subset {\mathbb{R}}^{n+m}$ with $0 < |E| < \infty$. Then there exists a subset $E' \subset E$ with $|E'| \ge \frac{99}{100} |E|$ so that for all functions $f_3$ satisfying $|f_3| \le 1_{E'}$ there holds $$\label{eq:ResAveShiftExEst} \begin{split} \Big|{\mathbb{E}}_{\omega,\omega'}\sum_{\substack{K \in {\mathcal{D}}^n_0 \\ V \in {\mathcal{D}}^m_0}} \Lambda^{\omega,\omega'}_{K \times V} (f_1,f_2,f_3 )\Big| \lesssim \|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})}|E|^{1/r'}, \end{split}$$ where $$\begin{split} \Lambda^{\omega,\omega'}_{K \times V} (f_1,f_2,f_3 ) = &\sum_{\substack{I_1, I_2, I_3 \in {\mathcal{D}}^n_\omega \\ I_i^{(k_i)} = K+\omega}} \sum_{\substack{J_1, J_2, J_3 \in {\mathcal{D}}^m_{\omega'} \\ J_i^{(v_i)} = V+\omega'}} a^{\omega,\omega'}_{K + \omega, V+\omega', (I_i), (J_j)} \\ &\times \langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_2, h_{I_2} \otimes h_{J_2}\rangle {\big \langle}(\langle b \rangle_{I_3,1}-\langle b \rangle_{I_3 \times J_3}) \langle f_3, h_{I_3} \rangle_1, h_{J_3}^0 {\big \rangle}. \end{split}$$ For the proof of Lemma \[lem:ResAveShiftEx\] we record the boundedness of certain deterministic square functions. Let $i,j \in {\mathbb{Z}}$, $i, j \ge 0$. Suppose that we have a family of operators $U=\{U_ {\omega,\omega'}\}_{\omega,\omega'}$ such that for all $\omega,\omega'$ there holds $$\| U_{\omega,\omega'} f \|_{L^2(w)} \le C([w]_{A_2({\mathbb{R}}^n \times {\mathbb{R}}^m)})\| f \|_{L^2(w)}, \quad f \in L^2(w),$$ for all $w \in A_2({\mathbb{R}}^n \times {\mathbb{R}}^m)$. Recall that ${\mathcal{D}}_0 = {\mathcal{D}}^n_0 \times {\mathcal{D}}^m_0$, where ${\mathcal{D}}^n_0$ and ${\mathcal{D}}^m_0$ are the standard dyadic grids of ${\mathbb{R}}^n$ and ${\mathbb{R}}^m$ respectively. 
Define $$S_{U}^{i,j} f =\Big(\sum_{K \times V \in {\mathcal{D}}_0} {\mathbb{E}}_{\omega,\omega'} (M \Delta^{i,j}_{(K+\omega) \times (V+\omega')} U_{\omega,\omega'}f)^2 \Big)^{1/2}.$$ Given a similar $U=\{U_ {\omega}\}_{\omega}$ we set $$S^1_{i,U}f=\Big(\sum_{K \in {\mathcal{D}}_0^n} {\mathbb{E}}_{\omega} (M \Delta^{1}_{K+\omega,i} U_{\omega} f)^2 \Big)^{1/2}$$ and given $U=\{U_ {\omega'}\}_{\omega'}$ we set $$S^2_{j,U}f=\Big(\sum_{V \in {\mathcal{D}}_0^m} {\mathbb{E}}_{\omega'} (M \Delta^{2}_{V+\omega',j} U_{\omega'} f)^2 \Big)^{1/2}.$$ We write $S^{i,j}$, $S^1_{i}$ and $S^2_j$ if there is no $U$ present. \[lem:DetSquareShift\] For all $p \in (1, \infty)$ and $w \in A_p({\mathbb{R}}^n\times{\mathbb{R}}^m)$ there holds $$\| S^{i,j}_Uf\|_{L^p(w)}+\| S^{1}_{i,U}f\|_{L^p(w)}+\| S^{2}_{j,U}f\|_{L^p(w)} \le C([w]_{A_p({\mathbb{R}}^n\times{\mathbb{R}}^m)}) \| f\|_{L^p(w)}.$$ We show the estimate for $S^{i,j}_U$. The other two are very similar. Suppose $w \in A_2({\mathbb{R}}^n \times {\mathbb{R}}^m)$ and $f \in L^2(w)$. Then $$\begin{aligned} \| S^{i,j}_Uf\|_{L^2(w)}^2 &= {\mathbb{E}}_{\omega,\omega'} \sum_{K \times V \in {\mathcal{D}}_0} \| M \Delta^{i,j}_{(K+\omega) \times (V+\omega')} U_{\omega,\omega'}f \|_{L^2(w)}^2 \\ &\le C([w]_{A_2({\mathbb{R}}^n \times {\mathbb{R}}^m)}) {\mathbb{E}}_{\omega,\omega'} \| U_{\omega,\omega'}f \|_{L^2(w)}^2 \le C([w]_{A_2({\mathbb{R}}^n \times {\mathbb{R}}^m)}) \| f \|_{L^2(w)}^2,\end{aligned}$$ where in the second step we used the weighted boundedness of the strong maximal function and the usual rectangular dyadic square function. The claim for $p \in (1,\infty)$ follows from extrapolation. The following computations record the standard estimates for shifts, and explain why the various square functions arise naturally. For $R = K \times V \in {\mathcal{D}}_0$ we denote $R_{\omega, \omega'} = (K+\omega) \times (V+\omega') \subset 3K \times 3V = 3R$.
Then using the normalisation of the constants $a^{\omega,\omega'}_{K + \omega, V+\omega', (I_i), (J_j)}$ and adding martingale differences using cancellative Haar functions we have $$\begin{aligned} &\sum_{\substack{I_1, I_2, I_3 \in {\mathcal{D}}^n_\omega \\ I_i^{(k_i)} = K+\omega}} \sum_{\substack{J_1, J_2, J_3 \in {\mathcal{D}}^m_{\omega'} \\ J_i^{(v_i)} = V+\omega'}} |a^{\omega,\omega'}_{K + \omega, V+\omega', (I_i), (J_j)}| \\ & \hspace{2cm} \times |\langle U_{1, \omega'} f_1, h_{I_1}^0 \otimes h_{J_1}\rangle| |\langle U_{2, \omega, \omega'} f_2, h_{I_2} \otimes h_{J_2}\rangle| |\langle U_{3, \omega} f_3, h_{I_3} \otimes h_{J_3}^0 \rangle| \\ &\le \langle |\Delta^{2}_{V+\omega',v_1}U_{1, \omega'} f_1 |\rangle_{R_{\omega, \omega'}} \langle |\Delta^{k_2,v_2}_{K+\omega, V + \omega'}U_{2, \omega, \omega'} f_2 |\rangle_{R_{\omega, \omega'}}\langle |\Delta^{1}_{K+\omega,k_3}U_{3, \omega} f_3 |\rangle_{R_{\omega, \omega'}} |R| \\ &\lesssim \int \langle |\Delta^{2}_{V+\omega',v_1}U_{1, \omega'} f_1 |\rangle_{3R} \langle |\Delta^{k_2,v_2}_{K+\omega, V + \omega'}U_{2, \omega, \omega'} f_2 |\rangle_{3R}\langle |\Delta^{1}_{K+\omega,k_3}U_{3, \omega} f_3 |\rangle_{3R}1_R \\ &\le \int M \Delta^{2}_{V+\omega',v_1}U_{1, \omega'} f_1 \cdot M\Delta^{k_2,v_2}_{K+\omega, V + \omega'}U_{2, \omega, \omega'} f_2 \cdot M \Delta^{1}_{K+\omega,k_3}U_{3, \omega} f_3,\end{aligned}$$ and so furthermore $$\begin{aligned} &{\mathbb{E}}_{\omega, \omega'} \sum_{\substack{K \in {\mathcal{D}}^n_0 \\ V \in {\mathcal{D}}^m_0}}\sum_{\substack{I_1, I_2, I_3 \in {\mathcal{D}}^n_\omega \\ I_i^{(k_i)} = K+\omega}} \sum_{\substack{J_1, J_2, J_3 \in {\mathcal{D}}^m_{\omega'} \\ J_i^{(v_i)} = V+\omega'}} |a^{\omega,\omega'}_{K + \omega, V+\omega', (I_i), (J_j)}| \\ & \hspace{2cm} \times |\langle U_{1, \omega'} f_1, h_{I_1}^0 \otimes h_{J_1}\rangle| |\langle U_{2, \omega, \omega'} f_2, h_{I_2} \otimes h_{J_2}\rangle| |\langle U_{3, \omega} f_3, h_{I_3} \otimes h_{J_3}^0 \rangle| \\ &\lesssim \int 
\sum_{\substack{K \in {\mathcal{D}}^n_0 \\ V \in {\mathcal{D}}^m_0}} {\mathbb{E}}_{\omega, \omega'} M \Delta^{2}_{V+\omega',v_1}U_{1, \omega'} f_1 \cdot M\Delta^{k_2,v_2}_{K+\omega, V + \omega'}U_{2, \omega, \omega'} f_2 \cdot M \Delta^{1}_{K+\omega,k_3}U_{3, \omega} f_3 \\ &\le \int \sum_{\substack{K \in {\mathcal{D}}^n_0 \\ V \in {\mathcal{D}}^m_0}} ({\mathbb{E}}_{\omega, \omega'} (M \Delta^{2}_{V+\omega',v_1}U_{1, \omega'} f_1)^2 (M \Delta^{1}_{K+\omega,k_3}U_{3, \omega} f_3)^2 )^{1/2} \\ & \hspace{6cm}\times({\mathbb{E}}_{\omega, \omega'} (M\Delta^{k_2,v_2}_{K+\omega, V + \omega'}U_{2, \omega, \omega'} f_2)^2 )^{1/2} \\ &\le \int \Big( \sum_{\substack{K \in {\mathcal{D}}^n_0 \\ V \in {\mathcal{D}}^m_0}} {\mathbb{E}}_{\omega, \omega'} (M \Delta^{2}_{V+\omega',v_1}U_{1, \omega'} f_1)^2 (M \Delta^{1}_{K+\omega,k_3}U_{3, \omega} f_3)^2 \Big)^{1/2} \\ &\hspace{6cm} \times \Big( \sum_{\substack{K \in {\mathcal{D}}^n_0 \\ V \in {\mathcal{D}}^m_0}} {\mathbb{E}}_{\omega, \omega'} (M\Delta^{k_2,v_2}_{K+\omega, V + \omega'}U_{2, \omega, \omega'} f_2)^2 \Big)^{1/2} \\ &= \int \Big( \sum_{V \in {\mathcal{D}}^m_0} {\mathbb{E}}_{\omega'} (M \Delta^{2}_{V+\omega',v_1}U_{1, \omega'} f_1)^2 \Big)^{1/2} \Big( \sum_{\substack{K \in {\mathcal{D}}^n_0 \\ V \in {\mathcal{D}}^m_0}} {\mathbb{E}}_{\omega, \omega'} (M\Delta^{k_2,v_2}_{K+\omega, V + \omega'}U_{2, \omega, \omega'} f_2)^2 \Big)^{1/2} \\ &\hspace{6cm} \times \Big( \sum_{K \in {\mathcal{D}}^n_0} {\mathbb{E}}_{\omega} (M \Delta^{1}_{K+\omega,k_3}U_{3, \omega} f_3)^2 \Big)^{1/2}.\end{aligned}$$ Estimates in the above spirit are used repeatedly below. We are now ready to prove Lemma \[lem:ResAveShiftEx\]. Because the square functions $S^2_{v_1}$ and $S^{k_2,v_2}$ are bounded, it is enough to assume that $$\| S^2_{v_1}f_1 S^{k_2,v_2}f_2 \|_{L^r}=1,$$ and show that there exists a set $E' \subset E$ with $|E'| \ge \frac{99}{100} |E|$ so that the left hand side of is dominated by $|E|^{1/r'}$. 
Define $$\Omega_u =\{S^2_{v_1}f_1 S^{k_2,v_2}f_2 > C_0 2^{-u}|E|^{-1/r}\}, \quad u \ge 0,$$ and $${{\widetilde{\Omega}}}_u = \{M1_{\Omega_u}> c_1\},$$ where $c_1>0$ is a small enough dimensional constant. Then we can choose $C_0=C_0(c_1)$ so large that the set $E':=E \setminus {{\widetilde{\Omega}}}_0$ satisfies $|E'| \ge \frac{99}{100} |E|$. Then we define the collections $$\widehat {\mathcal{R}}_u =\Big\{ R \in {\mathcal{D}}_0 \colon |R \cap \Omega_u | \ge \frac{|R|}{2}\Big\},$$ where ${\mathcal{D}}_0 = {\mathcal{D}}^n_0 \times {\mathcal{D}}^m_0$. Below we denote $(K+\omega) \times (V+ \omega') = K \times V+ (\omega, \omega')$. Let us list some properties of the collections $\widehat {\mathcal{R}}_u$. Fix now some function $f_3$ such that $|f_3| \le 1_{E'}$. Suppose $K \times V \in {\mathcal{D}}_0$ is such that $${\mathbb{E}}_{\omega,\omega'} \Lambda^{\omega,\omega'}_{K \times V} (f_1,f_2,f_3) \ne 0.$$ From this it can be concluded that $$\begin{split} 0 < {\mathbb{E}}_{\omega,\omega'} &\langle |\Delta^{2}_{V+\omega',v_1}f_1 |\rangle_{K\times V+(\omega,\omega')} \langle |\Delta^{k_2,v_2}_{K\times V+(\omega,\omega')}f_2 |\rangle_{K\times V+(\omega,\omega')} \\ & \lesssim \inf_{K \times V} S^{2}_{v_1} f_1 S^{k_2,v_2} f_2. \end{split}$$ Therefore, every relevant $K \times V \in {\mathcal{D}}_0$ that appears in the summation in is contained in $\Omega_u$ for some $u$, and therefore belongs to $\widehat {\mathcal{R}}_u$. Also, if $K \times V \in \widehat {\mathcal{R}}_u$, then for all $(\omega,\omega')$ we have $K\times V+(\omega,\omega') \subset 3K \times 3V \subset {{\widetilde{\Omega}}}_u$. Here we used the fact that $c_1$ is small enough. Thus, if $K \times V \in \widehat {\mathcal{R}}_0$, then for all $(\omega,\omega')$ we have $(K+\omega) \times (V +\omega') \cap E'=\emptyset$. 
We want to say that this implies that for all $(\omega, \omega')$ we have $\Lambda^{\omega,\omega'}_{K \times V} (f_1,f_2,f_3)=0$, and so all relevant $K \times V$ satisfy $K \times V \not \in \widehat {\mathcal{R}}_0$. The claim in the previous sentence is based on the following localisation property $$\label{eq:localisation} {\big \langle}(\langle b \rangle_{I_3,1}-\langle b \rangle_{I_3 \times J_3}) \langle f_3, h_{I_3} \rangle_1, h_{J_3}^0 {\big \rangle}={\big \langle}(\langle b \rangle_{I_3,1}-\langle b \rangle_{I_3 \times J_3}) \langle 1_{I_3 \times J_3}f_3, h_{I_3} \rangle_1, h_{J_3}^0 {\big \rangle},$$ and the fact that $|f_3| \le 1_{E'}$. The localisation will also be used below to see that we can replace $f_3$ by $f_31_F$ for any set $F \supset I_3 \times J_3$. Define the collections ${\mathcal{R}}_u= \widehat {\mathcal{R}}_u \setminus \widehat {\mathcal{R}}_{u-1}$, where $u \ge 1$. We have demonstrated that every relevant $K \times V \in {\mathcal{D}}_0$ appearing in the summation in belongs to exactly one of these collections. Therefore, we have $${\mathbb{E}}_{\omega,\omega'}\sum_{\substack{K \in {\mathcal{D}}^n_0 \\ V \in {\mathcal{D}}^m_0}} \Lambda^{\omega,\omega'}_{K \times V} (f_1,f_2,f_3 ) =\sum_{u=1}^\infty \sum_{K \times V\in {\mathcal{R}}_u} {\mathbb{E}}_{\omega,\omega'} \Lambda^{\omega,\omega'}_{K \times V} (f_1,f_2,f_3 ).$$ We now fix one $u$ and estimate the corresponding term. Using now that $I_3 \times J_3 \subset (K+\omega) \times (V+\omega') \subset {{\widetilde{\Omega}}}_u$, since $K \times V \in {\mathcal{R}}_u \subset \widehat {\mathcal{R}}_u$, we may replace $f_3$ with $1_{{{\widetilde{\Omega}}}_u}f_3$. Let us write $\varphi_{\omega, b} = \varphi_{{\mathcal{D}}^n_{\omega}, b}$. Suppose $K \times V \in {\mathcal{R}}_u$. 
Using Lemma \[lem:maximalbound\] we have $$\begin{split} | \Lambda^{\omega,\omega'}_{K \times V} (f_1,f_2,1_{{{\widetilde{\Omega}}}_u}f_3 )| &\le \langle |\Delta^{2}_{V+\omega',v_1}f_1 |\rangle_{K\times V+(\omega,\omega')} \langle |\Delta^{k_2,v_2}_{K\times V+(\omega,\omega')}f_2 |\rangle_{K\times V+(\omega,\omega')} \\ & \times \langle |\Delta^{1}_{K+\omega,k_3}\varphi_{\omega, b} (1_{{{\widetilde{\Omega}}}_u}f_3) |\rangle_{K\times V+(\omega,\omega')} |K\times V|. \end{split}$$ Since $K \times V \in {\mathcal{R}}_u$, we also have that $|K \times V| \lesssim \int_{{{\widetilde{\Omega}}}_u \setminus \Omega_{u-1}} 1_{K \times V}$. Combining these there holds that $$\begin{split} {\mathbb{E}}_{\omega,\omega'} &| \Lambda^{\omega,\omega'}_{K \times V} (f_1,f_2,f_3 )| \\ & \lesssim \int_{{{\widetilde{\Omega}}}_u \setminus \Omega_{u-1}} {\mathbb{E}}_{\omega,\omega'} M (\Delta^2_{V+\omega',v_1} f_1) M(\Delta^{k_2,v_2}_{K \times V+(\omega,\omega')} f_2) M (\Delta^1_{K+\omega,k_3} \varphi_{\omega, b} (1_{{{\widetilde{\Omega}}}_u}f_3)). 
\end{split}$$ Recalling that ${\mathbb{E}}_{\omega,\omega'}= {\mathbb{E}}_\omega {\mathbb{E}}_{\omega'}$, we notice that the last integrand is pointwise dominated by $$\big({\mathbb{E}}_{\omega'} (M \Delta^2_{V+\omega',v_1} f_1)^2\big)^{\frac{1}{2}} \big({\mathbb{E}}_{\omega,\omega'} (M\Delta^{k_2,v_2}_{K \times V+(\omega,\omega')} f_2)^2\big)^{\frac{1}{2}} \big({\mathbb{E}}_{\omega} (M \Delta^1_{K+\omega, k_3} \varphi_{\omega, b}(1_{{{\widetilde{\Omega}}}_u} f_3))^2\big)^{\frac{1}{2}}.$$ Using this, we finally have that $$\label{eq:SquaresAppear} \sum_{K \times V\in {\mathcal{R}}_u} {\mathbb{E}}_{\omega,\omega'} | \Lambda^{\omega,\omega'}_{K \times V} (f_1,f_2,f_3 )| \lesssim \int_{{{\widetilde{\Omega}}}_u \setminus \Omega_{u-1}} S^2_{v_1} f_1 S^{k_2,v_2} f_2 S^1_{k_3, \varphi_b^1} (1_{{{\widetilde{\Omega}}}_u}f_3),$$ where $S^1_{k_3, \varphi_b^1}$ is the square function formed with the family $\varphi_b^1 := \{\varphi_{\omega, b}\}_{\omega}$ as in Lemma \[lem:DetSquareShift\]. If $x \not \in \Omega_{u-1}$, then by definition $S^2_{v_1} f_1(x) S^{k_2,v_2} f_2(x) \lesssim 2^{-u}|E|^{-1/r}$. Thus, the right hand side of is dominated by $$2^{-u}|E|^{-1/r} | {{\widetilde{\Omega}}}_{u} |^{1/2} \| S^1_{k_3, \varphi_b^1} (1_{{{\widetilde{\Omega}}}_u}f_3)\|_{L^2} \lesssim 2^{-u(1-r)}|E|^{1-1/r}.$$ The boundedness of $S^1_{k_3, \varphi_b^1}$ is based on Lemma \[lem:bmaxbounds\] and Lemma \[lem:DetSquareShift\]. The last estimate can be summed over $u$, since $r < 1$. This concludes the proof. Based on the proof of Lemma \[lem:ResAveShiftEx\], we shall now comment on the other terms that arise when we apply Lemma \[lem:case3\] to the commutator of a shift of the form .
First, we consider the variant of Lemma \[lem:ResAveShiftEx\], where in the definition of $\Lambda^{\omega,\omega'}_{K \times V}(f_1,f_2,f_3)$ we have replaced the pairing ${\big \langle}(\langle b \rangle_{I_3,1}-\langle b \rangle_{I_3 \times J_3}) \langle f_3, h_{I_3} \rangle_1, h_{J_3}^0 {\big \rangle}$ with $\langle a^1_{i,\omega}(b,f_3), h_{I_3} \otimes h_{J_3}^0 \rangle$, where $i=1,2$. In this case we construct the sets $\Omega_u$, ${{\widetilde{\Omega}}}_u$ and the collections $ {\mathcal{R}}_u$ precisely as in Lemma \[lem:ResAveShiftEx\]. Again $E'=E \setminus {{\widetilde{\Omega}}}_0$, and we assume that $|f_3| \le 1_{E'}$. From the definition of the operators $a^1_{i,\omega}$ one sees that if $I_3 \times J_3 \in {\mathcal{D}}_{\omega}^n \times {\mathcal{D}}^m_{\omega'}$, we have a similar localisation property as in , namely $$\langle a^1_{i,\omega}(b,f_3), h_{I_3} \otimes h_{J_3}^0 \rangle =\langle a^1_{i,\omega}(b,1_{I_3 \times J_3}f_3), h_{I_3} \otimes h_{J_3}^0 \rangle.$$ We can again organise the sum over $K \times V \in {\mathcal{D}}_0$ as $${\mathbb{E}}_{\omega,\omega'}\sum_{K \times V \in {\mathcal{D}}_0} \Lambda^{\omega,\omega'}_{K \times V}(f_1,f_2,f_3) =\sum_{u=1}^\infty \sum_{K \times V \in {\mathcal{R}}_u} {\mathbb{E}}_{\omega,\omega'}\Lambda^{\omega,\omega'}_{K \times V}(f_1,f_2,f_3).$$ Let $a^1_{i,b}$ denote the family of operators $\{a^1_{i,\omega}(b,\cdot)\}_{\omega}$. For a fixed $u$, we have corresponding to that $$\sum_{K \times V\in {\mathcal{R}}_u} {\mathbb{E}}_{\omega,\omega'} | \Lambda^{\omega,\omega'}_{K \times V} (f_1,f_2,f_3 )| \lesssim \int_{{{\widetilde{\Omega}}}_u \setminus \Omega_{u-1}} S^2_{v_1} f_1 S^{k_2,v_2} f_2 S^1_{k_3, a^1_{i,b}} (1_{{{\widetilde{\Omega}}}_u}f_3).$$ The boundedness of $S^1_{k_3, a^1_{i,b}}$ is based on and Lemma \[lem:DetSquareShift\]. From here the proof can be concluded as in Lemma \[lem:ResAveShiftEx\]. 
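In all of these arguments the final summation over $u$ closes by the same elementary computation, which we record as a sketch (implicit constants depend on $c_1$ and $r$; the third square function varies from case to case). By Chebyshev's inequality and the normalisation $\| S^2_{v_1}f_1 \, S^{k_2,v_2}f_2 \|_{L^r} = 1$, together with the $L^2$ boundedness of the strong maximal function,

```latex
% Level set measures:
|\Omega_u| \le \big( C_0 2^{-u} |E|^{-1/r} \big)^{-r}
  \| S^2_{v_1} f_1 \, S^{k_2,v_2} f_2 \|_{L^r}^r = C_0^{-r} 2^{ur} |E|,
\qquad
|{{\widetilde{\Omega}}}_u| \le c_1^{-2} \| M 1_{\Omega_u} \|_{L^2}^2 \lesssim |\Omega_u|.
% Since |f_3| <= 1, we have ||S(1_{\Omega~_u} f_3)||_{L^2} \lesssim |\Omega~_u|^{1/2}, so
2^{-u} |E|^{-1/r} \, |{{\widetilde{\Omega}}}_u|^{1/2}
  \| S^1_{k_3, \varphi_b^1} (1_{{{\widetilde{\Omega}}}_u} f_3) \|_{L^2}
  \lesssim 2^{-u} |E|^{-1/r} \cdot 2^{ur} |E| = 2^{-u(1-r)} |E|^{1/r'},
% and \sum_{u \ge 1} 2^{-u(1-r)} < \infty precisely because r < 1.
```

Summing over $u \ge 1$ then yields the claimed bound $\lesssim \|f_1\|_{L^p}\|f_2\|_{L^q}|E|^{1/r'}$ after undoing the normalisation.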
Next, we consider the case where $\Lambda^{\omega,\omega'}_{K \times V}$ is defined by $$\sum_{\substack{I_1, I_2, I_3 \in {\mathcal{D}}^n_\omega \\ I_i^{(k_i)} = K+\omega}} \sum_{\substack{J_1, J_2, J_3 \in {\mathcal{D}}^m_{\omega'} \\ J_i^{(v_i)} = V+\omega'}} a^{\omega,\omega'}_{K+\omega, V+\omega', (I_i), (J_j)} \langle a^2_{i,\omega'}(b, f_1), h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_2, h_{I_2} \otimes h_{J_2}\rangle \langle f_3, h_{I_3} \otimes h_{J_3}^0 \rangle,$$ where $i=1,2$. This time we define $$\Omega_u = \{ S^2_{v_1,a^2_{i,b}}f_1 S^{k_2,v_2}f_2 > C_0 2^{-u}|E|^{-1/r}\}.$$ Based on these, we construct the sets ${{\widetilde{\Omega}}}_u$ and the collections ${\mathcal{R}}_u$ as before. This time the localisation property is clear, as $f_3$ is free. For a fixed $u$, we have corresponding to that $$\sum_{K \times V\in {\mathcal{R}}_u} {\mathbb{E}}_{\omega,\omega'} | \Lambda^{\omega,\omega'}_{K \times V} (f_1,f_2,f_3 )| \lesssim \int_{{{\widetilde{\Omega}}}_u \setminus \Omega_{u-1}} S^2_{v_1,a^2_{i,b}}f_1 S^{k_2,v_2}f_2 S^1_{k_3} (1_{{{\widetilde{\Omega}}}_u}f_3),$$ and the proof can be concluded analogously. The remaining cases are: we have ${\big \langle}(\langle b \rangle_{J_1,2}-\langle b \rangle_{I_1 \times J_1}) \langle f_1, h_{J_1} \rangle_2, h_{I_1}^0 {\big \rangle}$, or we have the factor $\langle b \rangle_{I_3 \times J_3}- \langle b \rangle_{I_1 \times J_1}$ at the front. These can be done similarly, the last one being easiest due to Lemma \[lem:bmobound\]. We have now proved for the shifts of the type . Shifts of other type {#shifts-of-other-type .unnumbered} -------------------- Let us briefly comment on commutators of shifts that are of different type than above. Depending on the shift, the identities from Section \[sec:KeyIdentities\] give various terms.
These are all handled similarly as above, the main difference being in the construction of the sets $\Omega_u$ and in the use of different combinations of square functions and maximal functions. We give a few indications of the required modifications. We did not encounter the $A_{i, \omega, \omega'}(b, \cdot)$ operators above, so we comment on a few cases which entail them. Suppose we are dealing with terms of the form $$a^{\omega,\omega'}_{K+\omega, V+\omega', (I_i), (J_j)} \langle A_{i, \omega, \omega'}(b, f_1), h_{I_1} \otimes h_{J_1}\rangle \langle f_2, h_{I_2}^0 \otimes h_{J_2}^0\rangle \langle f_3, h_{I_3} \otimes h_{J_3} \rangle,$$ where $i=1,\dots,8$. Let $A_{i,b}=\{A_{i,\omega,\omega'}(b, \cdot) \}_{\omega,\omega'}$ and let $S^{k_1,v_1}_{A_{i,b}}$ be the related square function. This time one defines $$\Omega_u=\{S^{k_1,v_1}_{A_{i,b}}f_1 Mf_2 > C_0 2^{-u} |E|^{-1/r}\}.$$ The boundedness of $S^{k_1,v_1}_{A_{i,b}}$ is based on , and Lemma \[lem:DetSquareShift\]. The proof proceeds as previously. Related to terms $$a^{\omega,\omega'}_{K+\omega, V+\omega', (I_i), (J_j)} \langle f_1, h_{I_1} \otimes h_{J_1}\rangle \langle f_2, h_{I_2}^0 \otimes h_{J_2}^0\rangle \langle A_{i, \omega, \omega'}(b, f_3), h_{I_3} \otimes h_{J_3} \rangle,$$ one sets $$\Omega_u=\{S^{k_1,v_1}f_1 Mf_2 > C_0 2^{-u} |E|^{-1/r}\}.$$ Corresponding to the key localisation property , the operators $A_{i,\omega,\omega'}(b, \cdot)$ satisfy that if $I_3 \times J_3 \in {\mathcal{D}}^n_\omega \times {\mathcal{D}}^m_{\omega'}$ then $$\langle A_{i, \omega, \omega'}(b, f_3), h_{I_3} \otimes h_{J_3} \rangle =\langle A_{i, \omega, \omega'}(b, 1_{I_3 \times J_3}f_3), h_{I_3} \otimes h_{J_3} \rangle.$$ In the proof one uses related to $f_3$ the square function $S^{k_3,v_3}_{A_{i,b}}f_3$. 
Finally, terms of the form $$a^{\omega,\omega'}_{K+\omega, V+\omega', (I_i), (J_j)} \langle f_1, h_{I_1} \otimes h_{J_1}\rangle \langle f_2, h_{I_2} \otimes h_{J_2}\rangle \langle (b-\langle b \rangle_{I_3 \times J_3})f_3, h_{I_3}^0 \otimes h_{J_3}^0 \rangle$$ are also easy to handle via Lemma \[lem:maximalbound\]. Concluding the proof of Theorem \[thm:com1ofmodelQuasiBanach\] -------------------------------------------------------------- Having now proved for all shift types, it only remains to interpolate to get Theorem \[thm:com1ofmodelQuasiBanach\]. Let now $1/p + 1/q = 1/r$, $1 < p, q < \infty$, and $S_{\omega, \omega'}$ be a shift of any type. At this point we know that for $r > 1$ we have $$\label{eq:BRange} \|{\mathbb{E}}_{\omega,\omega'} [b,S_{\omega, \omega'}]_1(f_1, f_2)\|_{L^r({\mathbb{R}}^{n+m})} \lesssim (1+\max(k_i, v_i))\|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})}$$ and for $r < 1$ we have $$\label{eq:Weak} \|{\mathbb{E}}_{\omega,\omega'} [b,S_{\omega, \omega'}]_1(f_1, f_2)\|_{L^{r, \infty}({\mathbb{R}}^{n+m})} \lesssim (1+\max(k_i, v_i))\|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})}.$$ Notice that for $r = 1$ we may easily get that if $0 < |E_i| < \infty$, $i = 1,2,3$, there exists $E_3' \subset E_3$ so that $|E_3'| \ge |E_3|/2$, and so that for all $|f_1| \le 1_{E_1}$, $|f_2| \le 1_{E_2}$ and $|f_3| \le 1_{E_3'}$ we have $$|\langle {\mathbb{E}}_{\omega,\omega'} [b,S_{\omega, \omega'}]_1(f_1, f_2), f_3\rangle| \lesssim (1+\max(k_i, v_i)) |E_1|^{1/p}|E_2|^{1/p'}.$$ This follows by taking convex combinations of our existing estimates , . Then use e.g. Theorem 3.8 in Thiele’s book [@Th:Book] to update all of our estimates that are either weak type (if $r < 1$) or restricted weak type (if $r=1$) into strong type bounds. Finally, notice that the cases $p= \infty$ or $q = \infty$ can now be obtained by duality. Indeed, let $p = \infty$ and $r = q \in (1,\infty)$. 
Then we have $$|\langle {\mathbb{E}}_{\omega,\omega'} [b, S_{\omega, \omega'}]_1(f_1, f_2), f_3\rangle| = |\langle {\mathbb{E}}_{\omega,\omega'} [b, S^{1*}_{\omega, \omega'}]_1(f_3, f_2), f_1\rangle|,$$ where we used that $[b, S_{\omega, \omega'}]_1^{1*} = -[b, S_{\omega, \omega'}^{1*}]_1$. It remains to use the already established bound $$\| {\mathbb{E}}_{\omega,\omega'} [b, S^{1*}_{\omega, \omega'}]_1(f_3, f_2)\|_{L^1({\mathbb{R}}^{n+m})} \lesssim (1+\max(k_i, v_i)) \|f_3\|_{L^{q'}({\mathbb{R}}^{n+m})}\|f_2\|_{L^{q}({\mathbb{R}}^{n+m})}.$$ We have proved Theorem \[thm:com1ofmodelQuasiBanach\]. Iterated commutators {#sec:iterated} ==================== For the iterated commutators $[b_2, [b_1, T]_1]_2$ (or $[b_2, [b_1, T]_1]_1$) we need weighted bounds for the linear commutators $[b_2, A_i(b_1, \cdot)]$ and $[b_2, a_i^1(b_1, \cdot)]$. The need arises similarly as previously, when we needed weighted bounds for $A_i(b_1, \cdot)$ and $a_i^1(b_1, \cdot)$ (stated in Section \[sec:marprod\]) when considering $[b_1, T]_1$. Again, the *weighted* versions are only needed for the boundedness of some square functions as in Lemma \[lem:DetSquareShift\], which are needed in the quasi–Banach estimates. As these are linear estimates, some of them were already considered in [@HPW] – namely, for $i = 1,2,3,4$ they even proved Bloom type two-weight estimates for $[b_2, A_i(b_1, \cdot)]$, when $b_2 \in {\operatorname{bmo}}({\mathbb{R}}^n \times {\mathbb{R}}^m)$ and $b_1 \in {\operatorname{BMO}}_{\textup{prod}}({\mathbb{R}}^{n+m})$. However, we need the one-weight versions also for $i = 5,6,7,8$, and the proofs are quite straightforward with our by now familiar method. As we do not have any use for Bloom type estimates, we content ourselves here with giving a quick proof of the one-weight result.
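Throughout, the commutators are understood with the usual sign conventions. We record them here explicitly, together with a verification of the duality identity used above, since the sign bookkeeping below depends on them (these are the standard conventions, stated only for the reader's convenience): for a linear operator $A$ and a bilinear operator $U$ we set $$[b, A]f = bAf - A(bf), \qquad [b, U]_1(f_1, f_2) = bU(f_1,f_2) - U(bf_1, f_2), \qquad [b, U]_2(f_1, f_2) = bU(f_1,f_2) - U(f_1, bf_2).$$ With these conventions the identity $[b, U]_1^{1*} = -[b, U^{1*}]_1$ is a direct computation: $$\langle [b, U]_1(f_1,f_2), f_3\rangle = \langle U(f_1,f_2), bf_3\rangle - \langle U^{1*}(f_3,f_2), bf_1\rangle = -\langle [b, U^{1*}]_1(f_3,f_2), f_1\rangle.$$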
\[lem:weightedoneparcommutator\] Let $\|b_1\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = \|b_2\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = 1$, $1 < p < \infty$ and $w \in A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)$. Then for $i = 1, \ldots, 8$ we have $$\| [b_2, A_i(b_1, f)] \|_{L^p(w)} \le C([w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)}) \|f\|_{L^p(w)}$$ and for $i = 1,2$ we have $$\| [b_2, a_i^1(b_1, f)] \|_{L^p(w)} \le C([w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)}) \|f\|_{L^p(w)}.$$ We only prove $$\| [b_2, A_5(b_1, f)] \|_{L^p(w)} \le C([w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)}) \|f\|_{L^p(w)},$$ the rest of the cases being similar. Denote $\lambda_{I,J}^{b_1} = \big\langle b_1, \frac{1_I}{|I|} \otimes h_J\big\rangle$. The estimate $$\label{eq:A5weighted} \sum_{I,J} |\lambda_{I,J}^{b_1}| |\langle f_1, h_I \otimes h_J\rangle| \langle |\langle f_2, h_I \rangle_1| \rangle_J \le C([w]_{A_p({\mathbb{R}}^n \times {\mathbb{R}}^m)}) \|f_1\|_{L^p(w)} \|f_2\|_{L^p(w')}$$ is the dualised version of the already known weighted estimate of $A_5(b_1, \cdot)$. Expanding as usual we get that $$\begin{aligned} \langle A_5(b_1, f_1)&, b_2f_2\rangle - \langle A_5(b_1, b_2f_1), f_2\rangle \\ &= \sum_{i=1}^2 \langle A_5(b_1, f_1), a_i^1(b_2, f_2) \rangle \\ &- \sum_{i=1}^8 \langle A_5(b_1, A_i(b_2,f_1)), f_2) \rangle \\ &+ \sum_{I,J} \lambda_{I,J}^{b_1} \langle f_1, h_I \otimes h_J\rangle \langle (\langle b_2\rangle_{I,1} - \langle b_2 \rangle_{I \times J}) \langle f_2, h_I\rangle_1, h_Jh_J\rangle.\end{aligned}$$ For the last term first estimate $$|\langle (\langle b_2\rangle_{I,1} - \langle b_2 \rangle_{I \times J}) \langle f_2, h_I\rangle_1, h_J h_J\rangle| \le \Big\langle \varphi_{{\mathcal{D}}^n, b_2}(f_2), h_I \otimes \frac{1_J}{|J|} \Big\rangle,$$ then use and the weighted boundedness of the operator $\varphi_{{\mathcal{D}}^n, b_2}$. The first two terms are even more immediate – we are done. 
Banach range boundedness ------------------------ We consider first the Banach range boundedness of $[b_2, [b_1, U]_1]_2$, when $U = U^v_k$ is a general bilinear bi-parameter model operator satisfying as in Section \[sec:BanachforModels\], and $\|b_1\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = \|b_2\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = 1$. For clarity we pick one explicit $U$: $$\begin{split} \langle U(f_1,f_2),f_3 \rangle = \sum_{\substack{K \in {\mathcal{D}}^n \\ V \in {\mathcal{D}}^m}} \sum_{\substack{I_1, I_2, I_3 \in {\mathcal{D}}^n \\ I_1^{(k_1)} = I_2^{(k_2)} = I_3^{(k_3)} = K}} &\sum_{\substack{J_1, J_2, J_3 \in {\mathcal{D}}^m \\ J_1^{(v_1)} = J_2^{(v_2)} = J_3^{(v_3)} = V}} a_{K, V, (I_i), (J_j)} \\ &\times \langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_2, {{\widetilde{h}}}_{I_2} \otimes {{\widetilde{h}}}_{J_2}\rangle \langle f_3, h_{I_3} \otimes h_{J_3}^0 \rangle. \end{split}$$ Lemma \[lem:case3\] gives that $$\begin{aligned} &\langle [b_1,U]_1(f_1,f_2),f_3 \rangle \\ &= \sum_{i=1}^2 \sum_{K, V, (I_i), (J_j)} a_{K, \ldots} \langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_2, {{\widetilde{h}}}_{I_2} \otimes {{\widetilde{h}}}_{J_2}\rangle \langle a_i^1(b_1,f_3), h_{I_3} \otimes h_{J_3}^0 \rangle \\ &+ \sum_{K, V, (I_i), (J_j)} a_{K, \ldots} \langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_2, {{\widetilde{h}}}_{I_2} \otimes {{\widetilde{h}}}_{J_2}\rangle {\big \langle}(\langle b_1 \rangle_{I_3,1}-\langle b_1 \rangle_{I_3 \times J_3}) \langle f_3 ,h_{I_3} \rangle_1, h^0_{J_3} {\big \rangle}\\ &- \sum_{i=1}^2 \sum_{K, V, (I_i), (J_j)} a_{K, \ldots} \langle a_i^2(b_1,f_1), h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_2, {{\widetilde{h}}}_{I_2} \otimes {{\widetilde{h}}}_{J_2}\rangle \langle f_3, h_{I_3} \otimes h_{J_3}^0 \rangle \\ &+ \sum_{K, V, (I_i), (J_j)} a_{K, \ldots} {\big \langle}(\langle b_1 \rangle_{I_1 \times J_1} - \langle b_1 \rangle_{J_1, 2})\langle f_1, h_{J_1}\rangle_2, h_{I_1}^0{\big \rangle}\langle f_2, 
{{\widetilde{h}}}_{I_2} \otimes {{\widetilde{h}}}_{J_2}\rangle \langle f_3, h_{I_3} \otimes h_{J_3}^0 \rangle \\ &+ \sum_{K, V, (I_i), (J_j)} a_{K, \ldots} [\langle b \rangle_{I_3 \times J_3} - \langle b \rangle_{I_1 \times J_1}] \langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_2, {{\widetilde{h}}}_{I_2} \otimes {{\widetilde{h}}}_{J_2}\rangle \langle f_3, h_{I_3} \otimes h_{J_3}^0 \rangle \\ &=: I + II + III + IV + V.\end{aligned}$$ When considering $[b_2, [b_1, U]_1]_2$ the first line $I$ from above leads to the term $$\begin{aligned} \sum_{i=1}^2 &\sum_{K, V, (I_i), (J_j)} a_{K, \ldots} \langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_2, {{\widetilde{h}}}_{I_2} \otimes {{\widetilde{h}}}_{J_2}\rangle \langle a_i^1(b_1, b_2f_3), h_{I_3} \otimes h_{J_3}^0 \rangle \\ &- \sum_{i=1}^2 \sum_{K, V, (I_i), (J_j)} a_{K, \ldots} \langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle b_2f_2, {{\widetilde{h}}}_{I_2} \otimes {{\widetilde{h}}}_{J_2}\rangle \langle a_i^1(b_1,f_3), h_{I_3} \otimes h_{J_3}^0 \rangle.\end{aligned}$$ We add and subtract $$\sum_{i=1}^2 \sum_{K, V, (I_i), (J_j)} a_{K, \ldots} \langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_2, {{\widetilde{h}}}_{I_2} \otimes {{\widetilde{h}}}_{J_2}\rangle \langle b_2a_i^1(b_1,f_3), h_{I_3} \otimes h_{J_3}^0 \rangle,$$ so that we need to consider $$\begin{aligned} I_{1} := \sum_{i=1}^2 \langle &U(f_1, f_2), a_i^1(b_1, b_2f_3) - b_2a_i^1(b_1,f_3) \rangle \\ &= -\sum_{i=1}^2 \langle U(f_1, f_2), [b_2, a_i^1(b_1, \cdot)](f_3)\rangle\end{aligned}$$ and $$\begin{aligned} I_{2} := \sum_{i=1}^2 \langle [b_2, U]_2(f_1, f_2), a_i^1(b_1,f_3)\rangle.\end{aligned}$$ Lemma \[lem:weightedoneparcommutator\] in particular gives $$\|[b_2, a_i^1(b_1, \cdot)](f_3)\|_{L^s({\mathbb{R}}^{n+m})} \lesssim \|f_3\|_{L^s({\mathbb{R}}^{n+m})}, \qquad s \in (1,\infty).$$ This, together with the boundedness of $U$, takes care of the term $I_{1}$.
The term $I_{2}$ is handled using the already known boundedness of the commutator $[b_2, U]_2$, and the boundedness of $a_i^1(b_1, \cdot)$, $i=1,2$. When considering $[b_2, [b_1, U]_1]_2$ the term $II$ from above leads to the term $$\begin{aligned} &\sum_{K, V, (I_i), (J_j)} a_{K, \ldots} \langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_2, {{\widetilde{h}}}_{I_2} \otimes {{\widetilde{h}}}_{J_2}\rangle {\big \langle}(\langle b_1 \rangle_{I_3,1}-\langle b_1 \rangle_{I_3 \times J_3}) \langle b_2f_3 ,h_{I_3} \rangle_1, h^0_{J_3} {\big \rangle}\\ &-\sum_{K, V, (I_i), (J_j)} a_{K, \ldots} \langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle b_2f_2, {{\widetilde{h}}}_{I_2} \otimes {{\widetilde{h}}}_{J_2}\rangle {\big \langle}(\langle b_1 \rangle_{I_3,1}-\langle b_1 \rangle_{I_3 \times J_3}) \langle f_3 ,h_{I_3} \rangle_1, h^0_{J_3} {\big \rangle}.\end{aligned}$$ Here we simply start following our original strategy of expanding $b_2f_3$ and $b_2f_2$. How $b_2f_2$ is expanded will, of course, depend on the Haar functions ${{\widetilde{h}}}_{I_2}$ and ${{\widetilde{h}}}_{J_2}$. So we first expand $b_2f_3$.
The first line from above can then be written as the sum of $$\begin{aligned} \sum_{i=1}^2 \sum_{K, V, (I_i), (J_j)} a_{K, \ldots} \langle f_1, h_{I_1}^0& \otimes h_{J_1}\rangle \langle f_2, {{\widetilde{h}}}_{I_2} \otimes {{\widetilde{h}}}_{J_2}\rangle \\ &\times {\big \langle}(\langle b_1 \rangle_{I_3,1}-\langle b_1 \rangle_{I_3 \times J_3}) \langle a_i^1(b_2,f_3) ,h_{I_3} \rangle_1, h^0_{J_3} {\big \rangle},\end{aligned}$$ $$\begin{aligned} \sum_{K, V, (I_i), (J_j)}& a_{K, \ldots} \langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_2, {{\widetilde{h}}}_{I_2} \otimes {{\widetilde{h}}}_{J_2}\rangle \\ &\times {\big \langle}(\langle b_1 \rangle_{I_3,1}-\langle b_1 \rangle_{I_3 \times J_3})(\langle b_2 \rangle_{I_3,1}-\langle b_2 \rangle_{I_3 \times J_3}) \langle f_3,h_{I_3} \rangle_1, h^0_{J_3} {\big \rangle}\end{aligned}$$ and $$\begin{aligned} \sum_{K, V, (I_i), (J_j)} a_{K, \ldots}\langle b_2\rangle_{I_3 \times J_3} \langle f_1, h_{I_1}^0& \otimes h_{J_1}\rangle \langle f_2, {{\widetilde{h}}}_{I_2} \otimes {{\widetilde{h}}}_{J_2}\rangle \\ &\times {\big \langle}(\langle b_1 \rangle_{I_3,1}-\langle b_1 \rangle_{I_3 \times J_3}) \langle f_3 ,h_{I_3} \rangle_1, h^0_{J_3} {\big \rangle}.\end{aligned}$$ We take care of the first two terms – the third term is of course not handled alone. The first term just uses the estimate from Lemma \[lem:maximalbound\] saying that $$\begin{aligned} \big|{\big \langle}(\langle b_1 \rangle_{I_3,1}-\langle& b_1 \rangle_{I_3 \times J_3}) \langle a_i^1(b_2,f_3) ,h_{I_3} \rangle_1, h^0_{J_3} {\big \rangle}\big| \\ &\le \langle \varphi_{{\mathcal{D}}^n, b_1}(a_i^1(b_2,f_3)), h_{I_3} \otimes h_{J_3}^0\rangle,\end{aligned}$$ then the boundedness , and finally the bound $$\| \varphi_{{\mathcal{D}}^n, b_1}(a_i^1(b_2,f_3)) \|_{L^{r'}({\mathbb{R}}^{n+m})} \lesssim \| a_i^1(b_2,f_3)\|_{L^{r'}({\mathbb{R}}^{n+m})} \lesssim \|f_3\|_{L^{r'}({\mathbb{R}}^{n+m})}.$$ The second term goes in a similar way. 
Indeed, notice that you can use natural maximal functions, like $$M_{\langle b_1 \rangle_{I, 1}, \langle b_2 \rangle_{I, 1}} f = \sup_{J \subset {\mathbb{R}}^m} \frac{1_J}{|J|} \int_J |\langle b_1 \rangle_{I, 1} - \langle b_1 \rangle_{I \times J}||\langle b_2 \rangle_{I, 1} - \langle b_2 \rangle_{I \times J}| |f|,$$ for which results as in Lemma \[lem:bmaxbounds\] hold with essentially the same proof. We are now ready to expand $b_2f_2$. To avoid a case chase we assume for simplicity that ${{\widetilde{h}}}_{I_2} = h_{I_2}^0$ and ${{\widetilde{h}}}_{J_2} = h_{J_2}$. Then we have $$\begin{aligned} \langle b_2f_2, h_{I_2}^0 \otimes h_{J_2}\rangle &= \sum_{i=1}^2 \langle a^2_i(b_2,f_2), h_{I_2}^0 \otimes h_{J_2}\rangle \\ &+ \langle (\langle b_2 \rangle_{J_2,2} - \langle b_2 \rangle_{I_2 \times J_2}) \langle f_2, h_{J_2} \rangle_2, h_{I_2}^0 \rangle + \langle b_2 \rangle_{I_2 \times J_2} \langle f_2, h_{I_2}^0 \otimes h_{J_2}\rangle.\end{aligned}$$ It is easy to handle $$\begin{aligned} -\sum_{i=1}^2 \sum_{K, V, (I_i), (J_j)} a_{K, \ldots} \langle f_1, h_{I_1}^0 \otimes & h_{J_1}\rangle \langle a^2_i(b_2,f_2), h_{I_2}^0 \otimes h_{J_2}\rangle \\ &\times {\big \langle}(\langle b_1 \rangle_{I_3,1}-\langle b_1 \rangle_{I_3 \times J_3}) \langle f_3 ,h_{I_3} \rangle_1, h^0_{J_3} {\big \rangle}\end{aligned}$$ and $$\begin{aligned} - \sum_{K, V, (I_i), (J_j)} a_{K, \ldots} \langle f_1, h_{I_1}^0 \otimes & h_{J_1}\rangle \langle (\langle b_2 \rangle_{J_2,2} - \langle b_2 \rangle_{I_2 \times J_2}) \langle f_2, h_{J_2} \rangle_2, h_{I_2}^0 \rangle \\ &\times {\big \langle}(\langle b_1 \rangle_{I_3,1}-\langle b_1 \rangle_{I_3 \times J_3}) \langle f_3 ,h_{I_3} \rangle_1, h^0_{J_3} {\big \rangle},\end{aligned}$$ and so we are only left with $$\begin{aligned} \sum_{K, V, (I_i), (J_j)} a_{K, \ldots}[\langle b_2\rangle_{I_3 \times J_3}-\langle b_2& \rangle_{I_2 \times J_2}] \langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_2, h_{I_2}^0 \otimes h_{J_2}\rangle \\ &\times {\big 
\langle}(\langle b_1 \rangle_{I_3,1}-\langle b_1 \rangle_{I_3 \times J_3}) \langle f_3 ,h_{I_3} \rangle_1, h^0_{J_3} {\big \rangle},\end{aligned}$$ which is again easy (using Lemma \[lem:maximalbound\], Lemma \[lem:bmobound\] and ). We are done with the contribution of $II$ to $[b_2, [b_1, U]_1]_2$. The contribution of $III$ to $[b_2, [b_1, U]_1]_2$ simply is $$- \sum_{i=1}^2 \langle [b_2,U]_2(a_i^2(b_1,f_1), f_2), f_3\rangle,$$ which is readily in control. The contributions of the terms $IV$ and $V$ to $[b_2, [b_1, U]_1]_2$ are similarly easy, but they require running our usual argument, instead of yielding an easy formula as in the case of $III$. We have now taken care of $[b_2, [b_1, U]_1]_2$. With some thought we can see that arguments of the above type also take care of a commutator of the form $[b_2, [b_1, U]_1]_1$. We have proved the following theorem. \[thm:com2ofmodelBanach\] Let $p,q,r \in (1,\infty)$, $1/p + 1/q = 1/r$, $0 \le k_i \in {\mathbb{Z}}$ and $0 \le v_i \in {\mathbb{Z}}$, $i=1,2,3$. Let $U = U^v_k$ be a general bilinear bi-parameter model operator satisfying . In particular, $U$ can be a bilinear bi-parameter shift, partial paraproduct or full paraproduct.
Then for $b_1, b_2$ such that $\|b_1\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = \|b_2\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = 1$ we have $$\begin{aligned} \|[b_2, [b_1, U]_1]_2(f_1, f_2)\|_{L^r({\mathbb{R}}^{n+m})} + \|[b_2,& [b_1, U]_1]_1(f_1, f_2)\|_{L^r({\mathbb{R}}^{n+m})} \\ &\lesssim (1+\max(k_i, v_i))^2 \|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})}.\end{aligned}$$ It follows that if $T$ is a bilinear bi-parameter singular integral satisfying the assumptions of Theorem \[thm:rep\] then also $$\begin{aligned} \|[b_2, [b_1, T]_1]_2(f_1, f_2)\|_{L^r({\mathbb{R}}^{n+m})} + \|[b_2, [b_1, &T]_1]_1(f_1, f_2)\|_{L^r({\mathbb{R}}^{n+m})} \\ &\lesssim \|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})}.\end{aligned}$$ We content ourselves with the formulation of the above theorem, and do not explicitly iterate further. Quasi–Banach estimates ---------------------- Our first goal is to prove: \[thm:com2ofmodelQuasiBanach\] Let $\|b_1\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = \|b_2\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = 1$, and let $1 < p, q \le \infty$ and $1/2 < r < \infty$ satisfy $1/p+1/q = 1/r$. Suppose $S_{\omega, \omega'} := S^{v}_{k, \mathcal{D}^n_{\omega},\mathcal{D}^m_{\omega'}}$ is a bilinear bi-parameter shift of complexity $(k,v)$ defined using the dyadic grids $\mathcal{D}^n_{\omega}$ and $\mathcal{D}^m_{\omega'}$. Then we have $$\|\mathbb{E}_{\omega, \omega'}[b_2,[b_1,S_{\omega, \omega'}]_1]_2(f_1, f_2)\|_{L^r({\mathbb{R}}^{n+m})} \lesssim (1+\max(k_i, v_i))^2 \|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})},$$ and similarly for $\mathbb{E}_{\omega, \omega'}[b_2,[b_1,S_{\omega, \omega'}]_1]_1$.
As in Section \[sec:quasiviarest\] we need to prove the following: Given $p,q \in (1, \infty)$ and $r \in (1/2,1)$ satisfying $1/p+1/q=1/r$, $f_1 \in L^p({\mathbb{R}}^{n+m})$, $f_2 \in L^q({\mathbb{R}}^{n+m})$ and a set $E \subset {\mathbb{R}}^{n+m}$ with $0 < |E| < \infty$, there exists a subset $E' \subset E$ such that $|E'| \ge |E|/2$ and such that for all functions $f_3$ satisfying $|f_3| \le 1_{E'}$ there holds $$\label{eq:ResAveShift2} \begin{split} | \langle \mathbb{E}_{\omega, \omega'}&[b_2,[b_1,S_{\omega, \omega'}]_1]_2(f_1, f_2),f_3 \rangle | \\ &\lesssim (1+\max(k_i, v_i))^2 \|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})}|E|^{1/r'}. \end{split}$$ For definiteness let $S_{\omega, \omega'}$ be again of the form : $$\langle S_{\omega, \omega'}(f_1, f_2), f_3 \rangle = \sum_{\substack{K \in {\mathcal{D}}^n_0 \\ V \in {\mathcal{D}}^m_0}} A^{\omega, \omega'}_{K \times V}(f_1, f_2, f_3)$$ where $$\begin{split} A^{\omega, \omega'}_{K \times V}(f_1, f_2, f_3) = \sum_{\substack{I_1, I_2, I_3 \in {\mathcal{D}}^n_\omega \\ I_i^{(k_i)} = K+\omega}} & \sum_{\substack{J_1, J_2, J_3 \in {\mathcal{D}}^m_{\omega'} \\ J_i^{(v_i)} = V+\omega'}} a^{\omega,\omega'}_{K+\omega, V+\omega', (I_i), (J_j)} \\ &\times \langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_2, h_{I_2} \otimes h_{J_2}\rangle \langle f_3, h_{I_3} \otimes h_{J_3}^0 \rangle. 
\end{split}$$ Similarly as in the Banach range case above we start with the identity given by Lemma \[lem:case3\]: $$\begin{aligned} &\langle [b_1,S_{\omega, \omega'}]_1(f_1,f_2),f_3 \rangle \\ &= \sum_{i=1}^2 \sum_{K, V, (I_i), (J_j)} a_{K+\omega, \ldots}^{\omega, \omega'} \langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_2, h_{I_2} \otimes h_{J_2}\rangle \langle a_{i,\omega}^1(b_1,f_3), h_{I_3} \otimes h_{J_3}^0 \rangle \\ &+ \sum_{K, V, (I_i), (J_j)} a_{K+\omega, \ldots}^{\omega, \omega'} \langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_2, h_{I_2} \otimes h_{J_2}\rangle {\big \langle}(\langle b_1 \rangle_{I_3,1}-\langle b_1 \rangle_{I_3 \times J_3}) \langle f_3 ,h_{I_3} \rangle_1, h^0_{J_3} {\big \rangle}\\ &- \sum_{i=1}^2 \sum_{K, V, (I_i), (J_j)} a_{K+\omega, \ldots}^{\omega, \omega'}\langle a_{i,\omega'}^2(b_1,f_1), h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_2, h_{I_2} \otimes h_{J_2}\rangle \langle f_3, h_{I_3} \otimes h_{J_3}^0 \rangle \\ &+ \sum_{K, V, (I_i), (J_j)} a_{K+\omega, \ldots}^{\omega, \omega'} {\big \langle}(\langle b_1 \rangle_{I_1 \times J_1} - \langle b_1 \rangle_{J_1, 2})\langle f_1, h_{J_1}\rangle_2, h_{I_1}^0{\big \rangle}\langle f_2, h_{I_2} \otimes h_{J_2}\rangle \langle f_3, h_{I_3} \otimes h_{J_3}^0 \rangle \\ &+ \sum_{K, V, (I_i), (J_j)} a_{K+\omega, \ldots}^{\omega, \omega'} [\langle b \rangle_{I_3 \times J_3} - \langle b \rangle_{I_1 \times J_1}] \langle f_1, h_{I_1}^0 \otimes h_{J_1}\rangle \langle f_2, h_{I_2} \otimes h_{J_2}\rangle \langle f_3, h_{I_3} \otimes h_{J_3}^0 \rangle \\ &=: I + II + III + IV + V.\end{aligned}$$ We fix $i \in \{1,2\}$ and start considering the corresponding term of $I$, namely $$\sum_{\substack{K \in {\mathcal{D}}^n_0 \\ V \in {\mathcal{D}}^m_0}} A^{\omega, \omega'}_{K \times V}(f_1, f_2, a_{i,\omega}^1(b_1, f_3)),$$ and its contribution to $[b_2, [b_1, U]_1]_2$. This leads to the need to prove the following lemma. 
\[lem:ResAveShiftEx2\] Let $\|b_1\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = \|b_2\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = 1$ and let $p,q \in (1, \infty)$ and $r \in (1/2,1)$ satisfy $1/p+1/q=1/r$. Suppose $f_1 \in L^p({\mathbb{R}}^{n+m})$, $f_2 \in L^q({\mathbb{R}}^{n+m})$ and $E \subset {\mathbb{R}}^{n+m}$ with $0 < |E| < \infty$. Then there exists a subset $E' \subset E$ with $|E'| \ge \frac{99}{100} |E|$ so that for all functions $f_3$ satisfying $|f_3| \le 1_{E'}$ there holds $$\label{eq:ResAveShiftExEst2} \begin{split} \Big|{\mathbb{E}}_{\omega,\omega'}\sum_{\substack{K \in {\mathcal{D}}^n_0 \\ V \in {\mathcal{D}}^m_0}} & [A^{\omega, \omega'}_{K \times V}(f_1, f_2, a_{i,\omega}^1(b_1, b_2f_3)) - A^{\omega, \omega'}_{K \times V}(f_1, b_2f_2, a_{i,\omega}^1(b_1, f_3))] \Big| \\ &\lesssim (1+\max(k_i,v_i))\|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})}|E|^{1/r'}. \end{split}$$ We assume $$\begin{aligned} \Big\| S^2_{v_1}f_1 S^{k_2,v_2}f_2 + \sum_{j=1}^8 S^2_{v_1}f_1 S^{k_2,v_2}_{A_{j,b_2}} f_2 \Big\|_{L^r} = 1, \end{aligned}$$ where $A_{j,b_2}$ denotes the family $\{A_{j,\omega,\omega'}(b_2, \cdot)\}_{\omega,\omega'}$, and the square functions are defined as in Lemma \[lem:DetSquareShift\]. Define $$\Omega_u =\Big\{S^2_{v_1}f_1 S^{k_2,v_2}f_2 + \sum_{j=1}^8 S^2_{v_1}f_1 S^{k_2,v_2}_{A_{j,b_2}} f_2 > C_0 2^{-u}|E|^{-1/r}\Big\}, \quad u \ge 0,$$ and $${{\widetilde{\Omega}}}_u = \{M1_{\Omega_u}> c_1\},$$ where $c_1>0$ is a small enough dimensional constant. Then we can choose $C_0=C_0(c_1)$ so large that the set $E':=E \setminus {{\widetilde{\Omega}}}_0$ satisfies $|E'| \ge \frac{99}{100} |E|$. Then we define the collections $$\widehat {\mathcal{R}}_u =\Big\{ R \in {\mathcal{D}}_0 \colon |R \cap \Omega_u | \ge \frac{|R|}{2}\Big\},$$ where ${\mathcal{D}}_0 = {\mathcal{D}}^n_0 \times {\mathcal{D}}^m_0$, and set ${\mathcal{R}}_u = \widehat{\mathcal{R}}_u \setminus \widehat {\mathcal{R}}_{u-1}$ for $u \ge 1$. 
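For the reader's convenience we indicate how the choice of $C_0$ is made, since the same computation recurs for each $u$ (a routine application of Chebyshev's inequality and the $L^2({\mathbb{R}}^{n+m})$ boundedness of the maximal function $M$): denoting by $F$ the function appearing in the definition of $\Omega_u$, so that $\|F\|_{L^r} = 1$, we have $$|\Omega_u| \le (C_0 2^{-u}|E|^{-1/r})^{-r}\, \|F\|_{L^r}^r = C_0^{-r} 2^{ur} |E|, \qquad |{{\widetilde{\Omega}}}_u| \le c_1^{-2} \|M 1_{\Omega_u}\|_{L^2({\mathbb{R}}^{n+m})}^2 \lesssim c_1^{-2} |\Omega_u|,$$ so that $|{{\widetilde{\Omega}}}_0| \lesssim c_1^{-2} C_0^{-r} |E| \le \frac{1}{100}|E|$ once $C_0 = C_0(c_1)$ is chosen large enough, and hence $|E'| = |E \setminus {{\widetilde{\Omega}}}_0| \ge \frac{99}{100}|E|$.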
Fix now some function $f_3$ such that $|f_3| \le 1_{E'}$. Let's abbreviate $$\Lambda_{K \times V}^{\omega, \omega'}(f_1,f_2,f_3) = A^{\omega, \omega'}_{K \times V}(f_1, f_2, a_{i,\omega}^1(b_1, b_2f_3)) - A^{\omega, \omega'}_{K \times V}(f_1, b_2f_2, a_{i,\omega}^1(b_1, f_3)).$$ Notice the localisation property $$\Lambda_{K \times V}^{\omega, \omega'}(f_1,f_2,f_3) =\Lambda_{K \times V}^{\omega, \omega'}(f_1,f_2,1_{(K+\omega) \times (V+\omega')} f_3).$$ Based on this, using an argument as in the proof of Lemma \[lem:DetSquareShift\], and splitting $\Lambda_{K \times V}^{\omega, \omega'}(f_1,f_2,f_3)$ as in and , we see that we may write $$\begin{aligned} {\mathbb{E}}_{\omega,\omega'}\sum_{\substack{K \in {\mathcal{D}}^n_0 \\ V \in {\mathcal{D}}^m_0}} \Lambda^{\omega,\omega'}_{K \times V} (f_1,f_2,f_3 ) =\sum_{u=1}^\infty \sum_{K \times V\in {\mathcal{R}}_u} {\mathbb{E}}_{\omega,\omega'} \Lambda^{\omega,\omega'}_{K \times V} (f_1,f_2, 1_{{{\widetilde{\Omega}}}_u} f_3 ).\end{aligned}$$ We now fix $u$, and our goal is to prove $$\sum_{K \times V\in {\mathcal{R}}_u} {\mathbb{E}}_{\omega,\omega'} |\Lambda^{\omega,\omega'}_{K \times V} (f_1,f_2, 1_{{{\widetilde{\Omega}}}_u} f_3 )| \lesssim (1+\max(k_i,v_i))2^{-u(1-r)}|E|^{1/r'}.$$ By adding and subtracting $$\begin{aligned} A^{\omega, \omega'}_{K \times V}(f_1, f_2, b_2a_{i,\omega}^1(b_1, 1_{{{\widetilde{\Omega}}}_u}f_3)),\end{aligned}$$ we see that $$\label{eq:Com2Split1} \begin{split} \Lambda^{\omega,\omega'}_{K \times V} (f_1,f_2, 1_{{{\widetilde{\Omega}}}_u} f_3 ) &= -A^{\omega, \omega'}_{K \times V}(f_1, f_2, [b_2,a_{i,\omega}^1(b_1, \cdot)](1_{{{\widetilde{\Omega}}}_u}f_3)) \\ &+ A^{\omega, \omega'}_{K \times V}(f_1, f_2, b_2a_{i,\omega}^1(b_1, 1_{{{\widetilde{\Omega}}}_u}f_3)) \\ &- A^{\omega, \omega'}_{K \times V}(f_1, b_2f_2, a_{i,\omega}^1(b_1, 1_{{{\widetilde{\Omega}}}_u}f_3)).
\end{split}$$ Using the symmetric form of Lemma \[lem:case2\] we have that the last difference $$A^{\omega, \omega'}_{K \times V}(f_1, f_2, b_2a_{i,\omega}^1(b_1, 1_{{{\widetilde{\Omega}}}_u}f_3)) - A^{\omega, \omega'}_{K \times V}(f_1, b_2f_2, a_{i,\omega}^1(b_1, 1_{{{\widetilde{\Omega}}}_u}f_3))$$ equals $$\label{eq:Com2Split2} \begin{split} \sum_{j=1}^2 &A^{\omega, \omega'}_{K \times V}(f_1, f_2, a_{j,\omega}^1(b_2,a_{i,\omega}^1(b_1, 1_{{{\widetilde{\Omega}}}_u}f_3))) \\ &- \sum_{j=1}^8 A^{\omega, \omega'}_{K \times V}(f_1, A_{j, \omega, \omega'}(b_2,f_2), a_{i,\omega}^1(b_1, 1_{{{\widetilde{\Omega}}}_u}f_3)) \\ &+\sum_{\substack{I_1, I_2, I_3 \in {\mathcal{D}}^n_\omega \\ I_i^{(k_i)} = K+\omega}} \sum_{\substack{J_1, J_2, J_3 \in {\mathcal{D}}^m_{\omega'} \\ J_i^{(v_i)} = V+\omega'}} \Big[a^{\omega,\omega'}_{K+\omega,\dots} \langle f_1,h^0_{I_1}\otimes h_{J_1} \rangle \langle f_2,h_{I_2}\otimes h_{J_2} \rangle \\ & \hspace{2cm}\times{\big \langle}(\langle b_2 \rangle_{I_3,1}-\langle b_2 \rangle_{I_3\times J_3}) \langle a_{i,\omega}^1(b_1, 1_{{{\widetilde{\Omega}}}_u}f_3),h_{I_3} \rangle_1, h_{J_3}^0 {\big \rangle}\Big]\\ &+\sum_{\substack{I_1, I_2, I_3 \in {\mathcal{D}}^n_\omega \\ I_i^{(k_i)} = K+\omega}} \sum_{\substack{J_1, J_2, J_3 \in {\mathcal{D}}^m_{\omega'} \\ J_i^{(v_i)} = V+\omega'}} \Big[a^{\omega,\omega'}_{K+\omega,\dots} (\langle b_2 \rangle_{I_3 \times J_3}-\langle b_2 \rangle_{I_2 \times J_2}) \langle f_1,h^0_{I_1}\otimes h_{J_1} \rangle \\ & \hspace{2cm} \times \langle f_2,h_{I_2}\otimes h_{J_2} \rangle \langle a_{i,\omega}^1(b_1, 1_{{{\widetilde{\Omega}}}_u}f_3),h_{I_3} \otimes h_{J_3}^0 \rangle \Big]. \end{split}$$ We consider the first term of and the terms from separately. These are all handled quite similarly, the only difference being what square functions appear. Let’s first take a look at the term from . Denote the family of operators $\{ [b_2, a^1_{i,\omega}(b_1, \cdot)]\}_\omega$ by $[b_2,a^1_{i,b_1}]$. 
Estimating similarly as in , we see that $$\begin{split} \sum_{K\times V \in {\mathcal{R}}_u} & {\mathbb{E}}_{\omega,\omega'} \big| A^{\omega, \omega'}_{K \times V}(f_1, f_2, [b_2,a_{i,\omega}^1(b_1, \cdot)](1_{{{\widetilde{\Omega}}}_u}f_3)) \big| \\ & \lesssim \int_{{{\widetilde{\Omega}}}_u \setminus \Omega_{u-1}} S^{2}_{v_1}f_1 S^{k_2,v_2} f_2 S^1_{k_1,[b_2,a^1_{i,b_1}]}(1_{{{\widetilde{\Omega}}}_u}f_3). \end{split}$$ From here the estimate can be concluded in the familiar way, using that $$S^{2}_{v_1}f_1 S^{k_2,v_2} f_2 \lesssim 2^{-u}|E|^{-1/r}$$ in the complement of $\Omega_{u-1}$ and that the square function $S^1_{k_1,[b_2,a^1_{i,b_1}]}$ is bounded. The boundedness of this square function follows from Lemma \[lem:weightedoneparcommutator\] and Lemma \[lem:DetSquareShift\]. Similarly, consider for instance the terms from the second line of . For fixed $j=1,\dots,8$ we have $$\begin{split} \sum_{K\times V \in {\mathcal{R}}_u} & {\mathbb{E}}_{\omega,\omega'} \big| A^{\omega, \omega'}_{K \times V}(f_1, A_{j, \omega, \omega'}(b_2,f_2), a_{i,\omega}^1(b_1, 1_{{{\widetilde{\Omega}}}_u}f_3))\big| \\ & \lesssim \int_{{{\widetilde{\Omega}}}_u \setminus \Omega_{u-1}} S^{2}_{v_1}f_1 S^{k_2,v_2}_{A_{j,b_2}} f_2 S^1_{k_3,a^1_{i,b_1}}(1_{{{\widetilde{\Omega}}}_u}f_3), \end{split}$$ and the rest is finished as usual. As a final example we consider the terms from the third line of . We have using Lemma \[lem:maximalbound\] that $$\begin{split} \big|{\big \langle}(\langle b_2 \rangle_{I_3,1}-\langle b_2 \rangle_{I_3\times J_3}) & \langle a_{i,\omega}^1(b_1, 1_{{{\widetilde{\Omega}}}_u}f_3),h_{I_3} \rangle_1, h_{J_3}^0 {\big \rangle}\big| \\ &\lesssim \langle \varphi_{\omega, b_2} (a_{i,\omega}^1(b_1, 1_{{{\widetilde{\Omega}}}_u}f_3)), h_{I_3}\otimes h_{J_3}^0 \rangle, \end{split}$$ where $\varphi_{\omega, b_2} := \varphi_{{\mathcal{D}}^n_{\omega}, b_2}$. 
Therefore, there holds that $$\begin{split} \sum_{K\times V \in {\mathcal{R}}_u} & {\mathbb{E}}_{\omega,\omega'} \Big|\sum_{\substack{I_1, I_2, I_3 \in {\mathcal{D}}^n_\omega \\ I_i^{(k_i)} = K+\omega}} \sum_{\substack{J_1, J_2, J_3 \in {\mathcal{D}}^m_{\omega'} \\ J_i^{(v_i)} = V+\omega'}} \Big[a^{\omega,\omega'}_{K+\omega,\dots} \langle f_1,h^0_{I_1}\otimes h_{J_1} \rangle \langle f_2,h_{I_2}\otimes h_{J_2} \rangle \\ & \hspace{2cm}\times{\big \langle}(\langle b_2 \rangle_{I_3,1}-\langle b_2 \rangle_{I_3\times J_3}) \langle a_{i,\omega}^1(b_1, 1_{{{\widetilde{\Omega}}}_u}f_3),h_{I_3} \rangle_1, h_{J_3}^0 {\big \rangle}\Big] \Big| \\ & \lesssim \int_{{{\widetilde{\Omega}}}_u \setminus \Omega_{u-1}} S^{2}_{v_1}f_1 S^{k_2,v_2} f_2 S^1_{k_3,\varphi^1_{b_2} a^1_{i,b_1}}(1_{{{\widetilde{\Omega}}}_u}f_3), \end{split}$$ where we wrote $\varphi^1_{b_2} a^1_{i,b_1}$ to mean the family $\{\varphi_{\omega, b_2} a^1_{i,\omega}(b_1,\cdot)\}_\omega$. Once again, one can finish as before. The remaining terms, that is the first and fourth ones from go in the same way. Notice that the fourth term produces the factor $(1+\max(k_i,v_i))$ in the final estimate. The contributions of the terms $II, \ldots, V$ to the commutator $[b_2, [b_1, U]_1]_2$ can also be handled, and they are easier (the localisation property is more readily available). Moreover, the fact that we assumed $S_{\omega, \omega'}$ to be of the form did not play a big role: the other forms can be handled similarly. Therefore, we get . With the Banach range boundedness and the usual interpolation this gives us $$\|\mathbb{E}_{\omega, \omega'}[b_2,[b_1,S_{\omega, \omega'}]_1]_2(f_1, f_2)\|_{L^r({\mathbb{R}}^{n+m})} \lesssim (1+\max(k_i, v_i))^2 \|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})}$$ whenever $1 < p, q < \infty$ and $1/2 < r < \infty$ satisfy $1/p+1/q = 1/r$.
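Before turning to the endpoint cases, we note that the duality formula used below for the iterated commutator is a direct computation with the pairings; we record it for completeness, using the conventions $[b,U]_1(f_1,f_2) = bU(f_1,f_2)-U(bf_1,f_2)$ and $[b,U]_2(f_1,f_2) = bU(f_1,f_2)-U(f_1,bf_2)$. Writing $V := [b_1, S^{1*}_{\omega, \omega'}]_1$, so that $([b_1, S_{\omega, \omega'}]_1)^{1*} = -V$, we have $$\begin{aligned} \langle [b_2,[b_1,S_{\omega, \omega'}]_1]_2(f_1,f_2), f_3\rangle &= \langle [b_1,S_{\omega, \omega'}]_1(f_1,f_2), b_2f_3\rangle - \langle [b_1,S_{\omega, \omega'}]_1(f_1,b_2f_2), f_3\rangle \\ &= \langle V(f_3, b_2f_2), f_1\rangle - \langle V(b_2f_3, f_2), f_1\rangle \\ &= \langle ([b_2, V]_1 - [b_2, V]_2)(f_3, f_2), f_1\rangle,\end{aligned}$$ which is exactly the identity $[b_2,[b_1,S_{\omega, \omega'}]_1]_2^{1*} = [b_2, [b_1, S_{\omega, \omega'}^{1*}]_1]_1 - [b_2, [b_1, S_{\omega, \omega'}^{1*}]_1]_2$.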
We can similarly prove $$\|\mathbb{E}_{\omega, \omega'}[b_2,[b_1,S_{\omega, \omega'}]_1]_1(f_1, f_2)\|_{L^r({\mathbb{R}}^{n+m})} \lesssim (1+\max(k_i, v_i))^2 \|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})}$$ in the same range. Then, e.g., the case $p=\infty$ for $[b_2,[b_1,S_{\omega, \omega'}]_1]_2$ follows by using the formula $$[b_2,[b_1,S_{\omega, \omega'}]_1]_2^{1*} = [b_2, [b_1, S_{\omega, \omega'}^{1*}]_1]_1 - [b_2, [b_1, S_{\omega, \omega'}^{1*}]_1]_2.$$ We have proved Theorem \[thm:com2ofmodelQuasiBanach\]. We end the paper by stating the corresponding corollary for paraproduct free singular integrals $T$. \[thm:com2ofmodelQuasiBanach\] Let $\|b_1\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = \|b_2\|_{{\operatorname{bmo}}({\mathbb{R}}^{n+m})} = 1$, and let $1 < p, q \le \infty$ and $1/2 < r < \infty$ satisfy $1/p+1/q = 1/r$. Let $T$ be a bilinear bi-parameter singular integral satisfying the assumptions of Theorem \[thm:rep\] and assume also that $T$ is free of paraproducts. Then we have $$\|[b_2,[b_1,T]_1]_2(f_1, f_2)\|_{L^r({\mathbb{R}}^{n+m})} \lesssim \|f_1\|_{L^p({\mathbb{R}}^{n+m})} \|f_2\|_{L^q({\mathbb{R}}^{n+m})},$$ and similarly for $[b_2,[b_1,T]_1]_1$.
--- abstract: '*We present a short proof of Jin’s theorem which is entirely elementary, in the sense that no use is made of nonstandard analysis, ergodic theory, measure theory, ultrafilters, or other advanced tools. The given proof provides the explicit bound $1/(\text{BD}(A)\cdot\text{BD}(B))$ on the number of shifts $A+B+m_i$ that are needed to cover a thick set.*' address: | Dipartimento di Matematica\ Università di Pisa, Italy author: - Mauro Di Nasso title: | An elementary proof of Jin’s theorem\ with a bound --- Introduction {#introduction .unnumbered} ============ Many results in combinatorial number theory are about structural properties of sets of integers that depend only on their largeness as given by the density. A beautiful result in this area was proved in 2000 by Renling Jin with the tools of nonstandard analysis. - **Jin’s theorem**: *If $A,B\subseteq\N$ have positive upper Banach density then their sumset $A+B$ is piecewise syndetic.* (The upper Banach density is a refinement of the usual upper asymptotic density; a set is piecewise syndetic if it has “bounded gaps" in suitable arbitrarily long intervals. See §1 below for precise definitions.) Many researchers showed interest in Jin’s result but were not comfortable with nonstandard analysis. In answer to that, a few years later Jin himself [@jin2] directly translated his nonstandard proof into “standard" terms, but unfortunately in this way a “certain degree of intuition and motivation are lost" \[*cit.*\]. In 2006, with the use of ergodic theory, V. Bergelson, H. Furstenberg and B. Weiss [@bfw] found a completely different proof of that result, and improved on it by showing that the sumset $A+B$ is in fact piecewise Bohr, a stronger property than piecewise syndeticity. (This result was subsequently extended by J.T. Griesmer [@gri] to cases where one of the summands has null upper Banach density.) Again by means of ergodic theory, V. Bergelson, M. Beiglböck and A.
Fish [@bbf] gave a shorter proof and extended the validity of the theorem to the general framework of countable amenable groups. In 2010, M. Beiglböck [@bei] found another proof by using ultrafilters plus a bit of measure theory. Recently, this author [@dn] applied nonstandard methods to show several properties of difference sets, and gave yet another proof of Jin’s result, in which an explicit bound is found on the number of shifts of $A+B$ that are needed to cover arbitrarily large intervals. In this paper we present a short proof of Jin’s theorem in the strengthened version mentioned above, which is entirely elementary and hence easily accessible also to non-specialists. (Here, “elementary" means that no use is made of nonstandard analysis, measure theory, ergodic theory, ultrafilters, or any other advanced tool.) The underlying intuitions are close to some of the nonstandard arguments in [@dn], but of course the formalization is different. We have taken care to keep the exposition in this paper self-contained. **Notation:** By $\N=\{1,2,3,\ldots\}$ we denote the set of *positive* integers. If not specified otherwise, lower-case letters $a, b, c, x, y, z, \ldots$ will denote integer numbers, and upper-case letters $A, B, C, \ldots$ will denote sets of integers. By writing $[a,b]$ we always denote intervals of integers, *i.e.* $[a,b]=\{x\in\Z\mid a\le x\le b\}$. Jin’s theorem with a bound ========================== Let us start by recalling three important structural notions for sets of integers. Let $A\subseteq\Z$ be a set of integers. - $A$ is *thick* if it covers intervals of arbitrary length, *i.e.* if for every $k\in\N$ there exists an interval $I=[y+1,y+k]$ of length $k$ such that $I\subseteq A$. - $A$ is *syndetic* if it has bounded gaps, *i.e.* if there exists $k$ such that $A\cap I\ne\emptyset$ for every interval $I$ of length $k$.
- $A$ is *piecewise syndetic* if it covers arbitrarily large intervals of a syndetic set, *i.e.* if $A=B\cap C$ where $B$ is thick and $C$ is syndetic. Remark that thickness and syndeticity are dual notions, in the sense that $A$ is thick if and only if its complement $A^c$ is not syndetic. Recall the *difference set* and the *sumset* of two sets of integers $A,B\subseteq\Z$: $$A-B\ =\ \{a-b\mid a\in A,\ b\in B\}\ ;\quad A+B\ =\ \{a+b\mid a\in A,\ b\in B\}.$$ With obvious notation, we shall simply write $A-z$ to indicate the shift $A-\{z\}$. It is easily shown that $A$ is syndetic if and only if $A+F=\Z$ for a suitable finite set $F$; and that $A$ is piecewise syndetic if and only if $A+F$ is thick for a suitable finite set $F$. Let us now turn to concepts of largeness for sets of integers. A familiar notion in number theory is that of *upper asymptotic density* $\overline{d}(A)$ of a set of natural numbers $A\subseteq\N$, which is defined as the limit superior of the relative densities of its initial segments: $$\overline{d}(A)\ =\ \limsup_{n\to\infty}\frac{|A\cap[1,n]|}{n}.$$ The upper Banach density refines the density $\overline{d}$ to sets of integers by considering arbitrary intervals instead of just initial intervals. The *upper Banach density* of a set $A\subseteq\Z$ is defined as: $$\text{BD}(A)\ =\ \lim_{n\to\infty}\left(\max_{x\in\Z}\frac{|A\cap[x+1,x+n]|}{n}\right).$$ One needs to check that such a limit always exists, and in fact $\text{BD}(A)=\inf_{n\in\N}a_n/n$ where $a_n=\max_{x\in\Z}|A\cap[x+1,x+n]|$. In consequence, if $\text{BD}(A)\ge\alpha$ then $a_n\ge\alpha\cdot n$ for every $n$, *i.e.* for every $n$ there exists an interval $I$ of length $n$ with $|A\cap I|\ge\alpha\cdot n$. Trivially $\text{BD}(A)\ge\overline{d}(A)$ for every $A\subseteq\N$. The following properties, which directly follow from the definitions, will be used in the sequel: - $\text{BD}(A)=1$ if and only if $A$ is thick. - The family of sets with null Banach density is closed under finite unions, *i.e.* if $\text{BD}(A_i)=0$ for $i=1,\ldots,k$, then $\text{BD}(A_1\cup\ldots\cup A_k)=0$.
- The Banach density is invariant under shifts, *i.e.* $\text{BD}(A+x)=\text{BD}(A)$. Remark that the upper Banach density is not additive, *i.e.* there exist disjoint sets $A,B$ such that $\text{BD}(A\cup B)<\text{BD}(A)+\text{BD}(B)$. However, for families of shifts of a given set, additivity holds: - If the shifts $A+x_i$ are pairwise disjoint for $i=1,\ldots,k$, then $\text{BD}\left(\bigcup_{i=1}^k A+x_i\right)=k\cdot\text{BD}(A)$. A consequence of the above property is the following: - If $\text{BD}(A)>1/2$ then every number is the difference of two elements in $A$, *i.e.* $A-A=\Z$. To see this, notice that for every $z$ one has $A\cap(A-z)\ne\emptyset$, as otherwise $\text{BD}(A\cup(A-z))=2\cdot\text{BD}(A)>1$, a contradiction. So $a=a'-z$ for suitable $a,a'\in A$, and hence $z\in A-A$. An important general property of difference sets is given by the following well-known result. If $\text{BD}(A)>0$ then $A-A$ is syndetic. If by contradiction $A-A$ were not syndetic, then its complement $(A-A)^c$ would include a thick set $T$. By the property of thickness, it is not hard to construct an infinite set $X=\{x_1<x_2<\ldots\}$ such that $X-X\subseteq T$. Since $(X-X)\cap(A-A)=\emptyset$, the sets in the family $\{A-x_i\mid i\in\N\}$ are pairwise disjoint. But this is not possible because if $k\in\N$ is such that $1/k<\text{BD}(A)$, then one would get $\text{BD}\left(\bigcup_{i=1}^k A-x_i\right)= \sum_{i=1}^k\text{BD}(A-x_i)=k\cdot\text{BD}(A)>1$, a contradiction. Remark that the above property does *not* extend to the general case $A-B$; *e.g.*, it is not hard to find thick sets $A,B,C$ such that their complements $A^c,B^c,C^c$ are thick as well, and $A-B\subset C$. However, $A-B$ is necessarily thick in case the two sets are “sufficiently dense". Precisely, the following holds: Let $A\subseteq\Z$ be such that $\sup_{n\in\N} n\cdot(a_n/n-\alpha)=+\infty$, where $a_n=\max_{x\in\Z}|A\cap[x+1,x+n]|$. If $\text{BD}(B)\ge 1-\alpha$, then $A-B$ is thick.
For every $k\in\N$, we show that an interval of length $k$ is included in $A-B$. Let $N$ be such that $N\cdot(a_N/N-\alpha)>k$, and pick an interval $I$ of length $N$ with $a_N=|A\cap I|$. For every $i=1,\ldots,k$ we have that: $$|(A-i)\cap I|\ \ge\ |A\cap I|-i\ >\ (\alpha\cdot N+k)-i\ \ge\ \alpha\cdot N.$$ Now recall that $\text{BD}(B)=\inf_{n\in\N}b_n/n$ where $b_n=\max_{x\in\Z}|B\cap[x+1,x+n]|$, so by the hypothesis we can find an interval $J$ of length $N$ such that $|B\cap J|\ge (1-\alpha)\cdot N$. Finally, pick $t$ such that $t+J=I$. We claim that the interval $[t+1,t+k]\subseteq A-B$. To show this, notice that for every $i=1,\ldots,k$ we have that $$|(A-i)\cap I|+|(B+t)\cap I|\ =\ |(A-i)\cap I|+|B\cap J|\ >\ \alpha\cdot N + (1-\alpha)\cdot N\ =\ N\ =\ |I|.$$ So, $(A-i)\cap(B+t)\cap I\ne\emptyset$, and we can find $a\in A$ and $b\in B$ such that $a-i=b+t$, and hence $t+i\in A-B$. Notice that $\text{BD}(A)>\alpha$ implies $\sup_{n\in\N} n\cdot(a_n/n-\alpha)=+\infty$, which in turn implies $\text{BD}(A)\ge\alpha$; however, neither implication can be reversed. The fact that $A-B$ is thick whenever $\text{BD}(A)+\text{BD}(B)>1$ was first proved by M. Beiglböck, V. Bergelson and A. Fish in [@bbf]; in fact, their proof actually shows the (slightly) stronger property given in the previous proposition. What we have presented so far is just a hint of the rich combinatorial structure of sumsets and sets of differences, whose investigation seems far from complete (see *e.g.* the monographs [@tv; @ru]). In this area, a relevant contribution was given in 2000 by Renling Jin. By working in the setting of hypernatural numbers of nonstandard analysis, he showed that the appropriate structural property to be considered for differences of dense sets is *piecewise* syndeticity. If $A,B\subseteq\N$ have positive upper Banach density then their sumset $A+B$ is piecewise syndetic. As mentioned in the introduction, Jin’s theorem has been recently re-proved by other means and with some improvements.
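The window-counting quantity $a_n=\max_{x\in\Z}|A\cap[x+1,x+n]|$ used throughout this section is easy to experiment with numerically. The sketch below works on finite truncations of hypothetical sets; since $\text{BD}$ is an asymptotic quantity, this only illustrates the definitions, it does not compute them.

```python
# Finite-window illustration of a_n = max_x |A ∩ [x+1, x+n]| on a finite
# sample of A, with a_n / n as a (sample-level) stand-in for BD(A).

def window_count(A, n):
    """a_n restricted to a finite sample A: max number of elements of A
    in any interval of n consecutive integers."""
    pts = sorted(A)
    best = 0
    for x in pts:  # an optimal window can be taken to start at an element of A
        best = max(best, sum(1 for a in pts if x <= a <= x + n - 1))
    return best

# A = even numbers: every window of length 100 contains at most 50 of them,
# so a_100 / 100 = 0.5, matching BD(A) = 1/2.
evens = set(range(0, 2000, 2))
assert window_count(evens, 100) == 50

# A thick set (BD = 1): longer and longer runs of consecutive integers.
thick = set()
for k in range(1, 40):
    start = 10 * k * k
    thick.update(range(start, start + k))
# a_n / n = 1 for every n up to the longest run present in the sample.
assert window_count(thick, 30) == 30
```

On a finite sample these ratios only bound the true densities from below, but they make the role of $a_n$ in the propositions above tangible.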
Here we shall present an elementary proof of the following strengthening from [@dn], where an explicit bound is given on the number of shifts that are needed to cover a thick set. (Recall that a set $C$ is piecewise syndetic if and only if $C+F$ is thick for a suitable finite set $F$.) \[Jin – with a bound\] Let $A,B\subseteq\Z$ have positive upper Banach densities $\text{BD}(A)=\alpha$ and $\text{BD}(B)=\beta$, respectively. Then there exists a finite set $F$ such that $|F|\le 1/\alpha\beta$ and $(A-B)+F$ is thick. Notice that if $A,B\subseteq\N$ are sets of natural numbers then $A+B=A-(-B)$, where $-B=\{-b\mid b\in B\}$. As trivially $\text{BD}(B)=\text{BD}(-B)$, the above theorem immediately yields Jin’s result about sumsets. The elementary proof ==================== By the definition of upper Banach density, we can pick two sequences of integers $\langle x_n\mid n\in\N\rangle$ and $\langle y_n\mid n\in\N\rangle$ such that if we put: - $A_n=A\cap[x_n+1,x_n+n^2]$ - $B_n=B\cap[y_n+1,y_n+n]$ then $\lim_{n\to\infty}|A_n|/{n^2}=\alpha$ and $\lim_{n\to\infty}|B_n|/n=\beta$. As the first step in the proof, for every $n$ we shall find a suitable shift of $A_n$ that meets $B_n$ on a set whose relative density approaches $\alpha\beta$ as $n$ goes to infinity. To this end, we need the following lemma from [@dn]. (In order to keep this paper self-contained, we re-prove it here.) **Lemma 1.** *Let $N,n\in\N$. If $C\subseteq[1,N]$ and $D\subseteq[1,n]$ then there exists $z$ such that $$\frac{|(C-z)\cap D|}{n}\ \ge\ \frac{|C|}{N}\cdot\frac{|D|}{n}-\frac{|D|}{N}.$$* Let $\vartheta:\N\to\{0,1\}$ be the characteristic function of $C$.
Then: $$\begin{aligned} \nonumber \sum_{x=1}^N\left(\sum_{d\in D}\vartheta(x+d)\right) & = & \sum_{d\in D}\left(\sum_{x=1}^N\vartheta(x+d)\right)\ =\ \sum_{d\in D}|C\cap[d+1,N]| \\ \nonumber {} & \ge & \sum_{d\in D}(|C|-d)\ \ge\ |D|\cdot(|C|-n).\end{aligned}$$ By the *pigeonhole principle*, there must be at least one $z$ such that $$\frac{1}{n}\sum_{d\in D}\vartheta(z+d)\ \ge\ \frac{1}{n}\cdot\frac{|D|\cdot(|C|-n)}{N}\ =\ \frac{|C|}{N}\cdot\frac{|D|}{n}-\frac{|D|}{N}.$$ Finally, notice that $$\frac{|(C-z)\cap D|}{n}\ =\ \frac{|(D+z)\cap C|}{n}\ =\ \frac{1}{n}\sum_{d\in D}\vartheta(z+d).$$ For every $n$, apply the above lemma where $C=A_n-x_n\subseteq[1,n^2]$ and $D=B_n-y_n\subseteq[1,n]$. (Notice that $|C|=|A_n|$ and $|D|=|B_n|$.) Then we can pick a suitable sequence $\langle z_n\mid n\in\N\rangle$ such that $$\frac{|(A_n-x_n-z_n)\cap(B_n-y_n)|}{n}\ \ge\ \frac{|A_n|}{n^2}\cdot\frac{|B_n|}{n}-\frac{|B_n|}{n^2}.$$ Now put: - $E_n=(A_n-x_n-z_n)\cap(B_n-y_n)\subseteq[1,n]$. Passing the above inequality to the limit, we obtain that $$\liminf_{n\to\infty}\frac{|E_n|}{n}\ \ge\ \alpha\beta.$$ In the second part of the proof we shall use the fact that any sequence of sets $E_n\subseteq[1,n]$ whose relative densities $|E_n|/n$ stay above a positive bound in the limit satisfies a relevant combinatorial property about the corresponding difference sets. Precisely: **Lemma 2.** *Let $\gamma>0$ and, for every $n\in\N$, let $E_n\subseteq[1,n]$ be such that $\liminf_{n\to\infty}|E_n|/n\ge\gamma$. Then there exists a finite set $F$ with $|F|\le 1/\gamma$ that satisfies the following property: $(\star)$ for every $m\in\N$, the set $\{n\in\N\mid[1,m]\subseteq(E_n-E_n)+F\}$ has positive upper Banach density* (here $(E_n-E_n)+F$ denotes $\bigcup_{t\in F}\left((E_n-E_n)+t\right)$). We inductively define a finite increasing sequence $\sigma=\langle m_i\mid i=1,\ldots,k\rangle$. Set $m_1=0$. If property $(\star)$ is satisfied by $F=\{0\}$, then put $\sigma=\langle m_1\rangle$, and stop. Otherwise, let $m_2\in\N$ be the least counterexample. So, $\Gamma_1=\{n\mid [1,m_2-1]\subseteq(E_n-E_n)+m_1\}$ has positive upper Banach density, but $\Lambda_1=\{n\in\Gamma_1\mid m_2\in (E_n-E_n)+m_1\}$ has null upper Banach density. Notice that for every $n\in\Gamma_1\setminus\Lambda_1$ one has $(E_n+m_1)\cap(E_n+m_2)=\emptyset$.
If for every $m\in\N$ the set of all $n\in\Gamma_1$ such that $[1,m]\subseteq\bigcup_{i=1}^2(E_n-E_n)+m_i$ has positive upper Banach density, then put $\sigma=\langle m_1,m_2\rangle$ and stop. Otherwise, let $m_3\in\N$ be the least counterexample. So, the set $\Gamma_2=\{n\in\Gamma_1\mid [1,m_3-1]\subseteq\bigcup_{i=1}^2(E_n-E_n)+m_i\}$ has positive Banach density, but $\Lambda_2=\{n\in\Gamma_2\mid m_3\in\bigcup_{i=1}^2(E_n-E_n)+m_i\}$ has null Banach density. Notice that for every $n\in\Gamma_2\setminus\Lambda_2$ one has $(E_n+m_i)\cap(E_n+m_3)=\emptyset$ for $i=1,2$. Iterate this process. We claim that we must stop at a step $k\le 1/\gamma$. To see this, we show that whenever $m_1<\ldots<m_k$ are defined, one necessarily has $k\le 1/\gamma$. This is trivial for $k=1$, so let us assume $k\ge 2$. Notice that $\Lambda_1\cup\ldots\cup\Lambda_k$ has null Banach density, and so $X=\Gamma_k\setminus(\Lambda_1\cup\ldots\cup\Lambda_k)$ has positive Banach density (and hence it is infinite). Since $X\subseteq\Gamma_i\setminus\Lambda_i$ for all $i$, for every $N\in X$ the sets in the family $\{E_N+m_i\mid i=1,\ldots k\}$ are pairwise disjoint. Now, every $E_N+m_i\subseteq[1,N+m_k]$, and so we obtain the following inequality: $$N+m_k\ \ge\ \left\vert\bigcup_{i=1}^k(E_N+m_i)\right\vert\ =\ \sum_{i=1}^k|E_N+m_i|\ =\ k\cdot|E_N|,$$ and hence $$\frac{|E_N|}{N}\ \le\ \frac{1}{k}+\frac{m_k}{N}.$$ By taking limits as $N\in X$ approaches infinity, one gets the desired inequality $\gamma\le 1/k$. Finally observe that, by the definition of $\sigma=\langle m_i\mid i=1,\ldots,k\rangle$, for every $m\in\N$ the set of all $n$ such that $[1,m]\subseteq\bigcup_{i=1}^k(E_n-E_n)+m_i$ has positive upper Banach density. This shows that property $(\star)$ is fulfilled by setting $F=\{m_1,\ldots,m_k\}$.
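Property $(\star)$ is easy to test for a single finite set. The following toy checker (the set $E$ and the shifts $F$ below are hypothetical examples, not objects arising in the proof) verifies whether $[1,m]\subseteq(E-E)+F$:

```python
# Toy illustration of the covering property (*) for one finite set E:
# check whether [1, m] is contained in (E - E) + F, where
# (E - E) + F = { e - e' + t : e, e' in E, t in F }.

def covers_initial_segment(E, F, m):
    diffs = {e1 - e2 for e1 in E for e2 in E}
    shifted = {d + t for d in diffs for t in F}
    return all(x in shifted for x in range(1, m + 1))

# E = multiples of 3 in [1, 30]: E - E consists of multiples of 3, so the
# three shifts F = {0, 1, 2} cover every residue class modulo 3.
E = set(range(3, 31, 3))
assert covers_initial_segment(E, {0, 1, 2}, 20)
# A single shift is not enough: 1 is not a difference of two multiples of 3.
assert not covers_initial_segment(E, {0}, 20)
```

Here $E$ has relative density $\gamma=1/3$ in its interval and three shifts suffice, in line with the bound $|F|\le 1/\gamma$ of Lemma 2.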
By the above lemma applied with $\gamma=\alpha\beta>0$, we can pick a finite set $F$ with $|F|\le 1/\alpha\beta$ and such that property $(\star)$ is satisfied by the sets $$E_n\ =\ (A_n-x_n-z_n)\cap(B_n-y_n).$$ So, for every $m$ there exists $n$ (actually “densely many" $n$) such that: $$[1,m]\ \subseteq\ (E_n-E_n)+F\ \subseteq\ (A_n-x_n-z_n)-(B_n-y_n)+F\ \subseteq\ A-B+F-t_n,$$ and hence $[t_n+1,t_n+m]\subseteq A-B+F$, where we denoted $t_n=x_n-y_n+z_n$. This shows that $A-B+F$ is thick, and the proof is complete. Open problems ============= 1. Lemma 2 states a much stronger property than needed for the proof of the main theorem. Can one derive a stronger result by a full use of that lemma? 2. We saw in §1 that $A-B$ is thick whenever $\text{BD}(A)+\text{BD}(B)>1$. Can one combine this fact with similar arguments as the ones presented in this paper, and prove interesting structural properties about $A-B$, $A-C$, $B-C$ under the assumption that $\text{BD}(A)+\text{BD}(B)+\text{BD}(C)>1$? <span style="font-variant:small-caps;">M. Beiglböck, V. Bergelson and A. Fish</span>, Sumset phenomenon in countable amenable groups, *Adv. Math.* **223**, pp. 416–432, 2010. <span style="font-variant:small-caps;">M. Beiglböck</span>, An ultrafilter approach to Jin’s theorem, *Israel J. Math.* **185**, pp. 369–374, 2011. <span style="font-variant:small-caps;">V. Bergelson, H. Furstenberg and B. Weiss</span>, Piecewise-Bohr sets of integers and combinatorial number theory, in *Topics in Discrete Mathematics*, Algorithms Combin. **26**, Springer, Berlin, pp. 13–37, 2006. <span style="font-variant:small-caps;">M. Di Nasso</span>, Embeddability properties of difference sets, arXiv:1201.5865 (2012), submitted. <span style="font-variant:small-caps;">J.T. Griesmer</span>, Sumsets of dense sets and sparse sets, *Israel J. Math.* **190**, pp. 229–252, 2012. <span style="font-variant:small-caps;">R. Jin</span>, The sumset phenomenon, *Proc. Amer. Math. Soc.* **130**, pp. 855–861, 2002.
<span style="font-variant:small-caps;">R. Jin</span>, Standardizing nonstandard methods for upper Banach density problems, in *Unusual Applications of Number Theory* (M. Nathanson ed.), DIMACS Series, vol. 64, pp. 109–124, 2004. <span style="font-variant:small-caps;">I.Z. Ruzsa</span>, Sumsets and structure, part I of *Combinatorial Number Theory and Additive Group Theory* (A. Geroldinger and I.Z. Ruzsa), Birkhäuser, 2009. <span style="font-variant:small-caps;">T. Tao and V.H. Vu</span>, *Additive Combinatorics*, Cambridge University Press, Cambridge, 2006.
--- abstract: | Given a metric space $(F \cup C, d)$, we consider star covers of $C$ with balanced loads. A star is a pair $(f, C_f)$ where $f \in F$ and $C_f \subseteq C$, and the load of a star is $\sum_{c \in C_f} d(f, c)$. In the minimum load $k$-star cover problem $({\mathrm{MLkSC}\xspace})$, one tries to cover the set of clients $C$ using $k$ stars that minimize the maximum load of a star, and in the minimum size star cover problem $({\mathrm{MSSC}\xspace})$ one aims to find the minimum number of stars of load at most $T$ needed to cover $C$, where $T$ is a given parameter. We obtain new bicriteria approximations for the two problems using novel rounding algorithms for their standard LP relaxations. For ${\mathrm{MLkSC}\xspace}$, we find a star cover with $(1+{\varepsilon})k$ stars and $O(1/{\varepsilon}^2){\mathrm{OPT}_\mathrm{MLk}}$ load, where ${\mathrm{OPT}_\mathrm{MLk}}$ is the optimum load. For ${\mathrm{MSSC}\xspace}$, we find a star cover with $O(1/{\varepsilon}^2) {\mathrm{OPT}_\mathrm{MS}}$ stars of load at most $(2 + {\varepsilon}) T$, where ${\mathrm{OPT}_\mathrm{MS}}$ is the optimal number of stars for the problem. Previously, non-trivial bicriteria approximations were known only when $F = C$. **Keywords**: Star Cover, Approximation Algorithms, LP Rounding. author: - | [Buddhima Gamlath]{}\ buddhima.gamlath@epfl.ch\ EPFL, Lausanne, Switzerland - | [Vadim Grinberg]{}\ vgm@ttic.edu\ TTIC[^1], Chicago, USA bibliography: - 'references.bib' title: '**Approximating Star Cover Problems**' --- Introduction ============ Facility location (FL) is a family of problems in computer science where the general goal is to assign a set of clients to a set of facilities under various constraints and optimization criteria. The FL family encompasses many natural clustering problems, like $k$-median and $k$-means, most of which are well studied.
In this work, we study two comparatively less studied FL problems which we call minimum load $k$-star cover (${\mathrm{MLkSC}\xspace}$) and minimum size star cover $({\mathrm{MSSC}\xspace}$). The goal of ${\mathrm{MLkSC}\xspace}$ is to assign clients to at most $k$ facilities, minimizing the maximum assignment cost of a facility, while that of ${\mathrm{MSSC}\xspace}$ is to find a client-facility assignment with the minimum number of facilities such that the total assignment cost of each facility is upper bounded by a given threshold $T$. We begin by formally defining the two problems. Let $C$ be a finite set of clients and $F$ be a finite set of facilities. Let $(F \cup C, d)$ be a finite metric space where $d : (F \cup C) \times (F \cup C) \to {\mathbb{R}}_0^+$ is a distance metric. By a *star* in $(F, C)$, we mean any tuple $(f, C_f)$, where $f \in F$ and $C_f \subseteq C$. We say two stars $(f, C_f)$ and $(g, C_g)$ are *disjoint* if $f \neq g$ and $C_f \cap C_g = \varnothing$. A *star cover* of $(F, C)$ is a finite collection $S = \{(f_1, C_{f_1}), \allowbreak \dots, (f_{|S|}, C_{f_{|S|}})\}$ of disjoint stars such that $C = C_{f_1} \cup \dots \cup C_{f_{|S|}}$. The size of a star cover $S$ is the number of stars $|S|$ in the cover. Given a star cover $S$, a star $(f, C_f) \in S$, and a client $c \in C_f$, we say that client $c$ is *assigned* to facility $f$ under $S$ and the facility $f$ is *serving* client $c$ under $S$. For a star $(f, C_f)$, the *load* of facility $f$ is the sum of pair-wise distances $\sum_{c \in C_f}d(f, c)$ between itself and its clients. The *load* $L(S)$ of a star cover $S$ is the load of its maximum load star. That is, $L(S) := \max_{(f, C_f) \in S}\sum_{c \in C_f}d(f, c)$. For notational convenience, we denote the collection of all star covers of $(F, C)$ by $\mathcal{S}$. Using the introduced notation, we now define ${\mathrm{MLkSC}\xspace}$ and ${\mathrm{MSSC}\xspace}$.
Given a finite metric space $(F \cup C, d)$ and a number $k \in {\mathbb{N}}$, the task of the minimum load $k$-star cover problem is to find a star cover of size at most $k$ that minimizes the load; that is, find $S^\ast := {\mathop{\mathrm{argmin}}}_{S \in \mathcal{S} : |S| \leq k} L(S).$ We denote the optimal load $L(S^\ast)$ by ${\mathrm{OPT}_\mathrm{MLk}}$. Given a finite metric space $(F \cup C, d)$ and a number $T \in {\mathbb{R}}_+$, the task of the minimum size star cover problem is to find a star cover of load at most $T$ that minimizes the size; that is, find a star cover $S^\star := {\mathop{\mathrm{argmin}}}_{S \in \mathcal{S} : L(S) \leq T} |S|$. We denote the optimal size $|S^\star|$ by ${\mathrm{OPT}_\mathrm{MS}}$. Even et al. [@EGK03] showed that both ${\mathrm{MLkSC}\xspace}$ and ${\mathrm{MSSC}\xspace}$ are NP-hard for general metrics even when $F = C$. Both Even et al. [@EGK03] and Arkin et al. [@AHL06] studied the problem in the $F = C$ setting and gave constant factor bicriteria approximation algorithms for ${\mathrm{MLkSC}\xspace}$. The latter work also gave a constant factor approximation algorithm for ${\mathrm{MSSC}\xspace}$ in the same setting. Arkin et al. [@AHL06] use $k$-median clustering and then split the individual clusters that are too large into several smaller clusters to obtain their approximation guarantees. However, the splitting of clusters relies on the fact that clients and facilities are indistinguishable, which allows one to conveniently choose a new facility for each new partition created in the splitting process. Meanwhile, the technique of Even et al. [@EGK03] is to formulate the problem as an integer program, round its LP relaxation using minimum make-span rounding techniques, and use a clustering approach that also relies on $F$ being equal to $C$ to obtain the final bicriteria approximation guarantees. Neither technique generalizes to the case where $F \neq C$ unless it is allowed to open the same facility multiple times.
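To make the two objectives concrete, the following brute-force sketch evaluates star covers on a tiny hypothetical line metric (exponential enumeration over all assignments, so it only illustrates the definitions, not an algorithm):

```python
# Toy brute force for MLkSC on a hypothetical instance: facilities and
# clients are points on a line, and d is the absolute difference.
from itertools import product

F = [0, 10]          # facility positions (hypothetical data)
C = [1, 2, 9]        # client positions
d = lambda a, b: abs(a - b)

def load_of(assign):
    """Max facility load for an assignment tuple: assign[j] = index of the
    facility serving client j."""
    loads = {}
    for j, i in enumerate(assign):
        loads[i] = loads.get(i, 0) + d(F[i], C[j])
    return max(loads.values())

def opt_mlk(k):
    """OPT_MLk: the best achievable max load using at most k facilities."""
    return min(load_of(a) for a in product(range(len(F)), repeat=len(C))
               if len(set(a)) <= k)

# With k = 2, facility 0 serves the clients at 1 and 2 (load 3) and
# facility 1 serves the client at 9 (load 1), so the optimal max load is 3.
assert opt_mlk(2) == 3
# With k = 1 a single facility must serve everyone (load 1 + 2 + 9 = 12).
assert opt_mlk(1) == 12
```

The same enumeration answers the ${\mathrm{MSSC}\xspace}$ question on such toy data by filtering assignments with `load_of(a) <= T` and minimizing `len(set(a))` instead.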
Recently, Ahmadian et al. [@ABB18] showed that ${\mathrm{MLkSC}\xspace}$ is NP-hard even if we restrict the metric space to be a line metric. They further gave a PTAS for ${\mathrm{MLkSC}\xspace}$ in line metrics and a quasi-PTAS for the same in tree metrics. However, their techniques are specific to line and tree metrics, and it is not known whether they can be extended to general metrics. The main goal of this work is to extend the approach of Even et al. [@EGK03] to the $F \neq C$ setting where any given facility can be opened at most once. To do so, we introduce a novel clustering technique and an accompanying new algorithm to modify the LP solution before applying the minimum make-span rounding at the end. This yields the following theorem: \[t1\] There exists a polynomial time algorithm that, given an instance $(F \cup C, d)$ of the ${\mathrm{MLkSC}\xspace}$ problem and any ${\varepsilon}\in (0,1)$, finds a star cover of $(F, C)$ of size at most $(1 + {\varepsilon})k$ and load at most $O({\mathrm{OPT}_\mathrm{MLk}}/{\varepsilon}^2)$. As a complementary result, we also show that the standard LP relaxation has some inherent limitations. That is, we construct a family of ${\mathrm{MLkSC}\xspace}$ instances where the load of any integral $(1 + {\varepsilon})k$-star cover is at least $\Omega(1/{\varepsilon})$ times the optimal value of the standard LP. With slight modifications to our clustering and rounding techniques, we further obtain the following theorem on ${\mathrm{MSSC}\xspace}$: \[t2\] There exists a polynomial time algorithm that, given an instance of the ${\mathrm{MSSC}\xspace}$ problem with load parameter $T$ and any ${\varepsilon}\in (0, 1)$, finds a star cover of load at most $(2 + {\varepsilon})T$ and size at most $O({\mathrm{OPT}_\mathrm{MS}}/{\varepsilon}^2)$.
As with ${\mathrm{MLkSC}\xspace}$, we show that the standard LP-relaxation for ${\mathrm{MSSC}\xspace}$ also suffers from inherent limitations; that is, for any ${\varepsilon}> 0$, we give an instance of ${\mathrm{MSSC}\xspace}$ for which there is a *fractional* star cover of load at most $T$ but any integral star cover of that instance has load at least $(2 - {\varepsilon})T$ even with all facilities opened. We end the introduction with a brief section on other related work. In \[sec:tech\], we introduce the LP relaxations of the two problems and provide a more elaborate description of our techniques. Later in \[sec:algo1\] and \[sec:algo2\] we describe the proofs of \[t1\] and \[t2\] in detail. We present the explicit constructions of families of ${\mathrm{MLkSC}\xspace}$ and ${\mathrm{MSSC}\xspace}$ instances that show inherent limitations of the respective standard LP relaxations in \[sec:appb\]. ### Other Related Work {#other-related-work .unnumbered} To the best of our knowledge, Even et al. [@EGK03] and Arkin et al. [@AHL06] were among the first to explicitly address close relatives of the ${\mathrm{MLkSC}\xspace}$ and ${\mathrm{MSSC}\xspace}$ problems. Both of their works considered the problem where one has to cover nodes (or edges) of a graph using a collection of *objects* (i.e., trees or stars). Even et al. considered the problem of minimizing the maximum cost of an object when the number of objects is fixed, for which they gave a $4$-approximation algorithm. Arkin et al. also studied the same problem and additionally considered paths and walks as covering objects. They further discussed the ${\mathrm{MSSC}\xspace}$ version of the problems where the goal is to minimize the number of covering objects such that the cost of each object is at most a given threshold. For min-max tree cover with $k$ trees, Khani and Salavatipour [@KS11] later improved the approximation guarantee to a factor of three.
In general, many well-known facility location problems have constant factor approximation guarantees. For example, for uncapacitated facility location, the best known algorithm (Li et al. [@Li13]) gives an approximation ratio of 1.488. For $k$-median in general metric spaces, the current best is $2.675$ due to Byrka et al. [@BTS17], and for $k$-means in general metric spaces, it is $(9 + \varepsilon)$ due to Ahmadian et al. [@ANSW18]. Remarkably, all these results follow from LP-based approaches. A common theme of all these problems is that their objectives are to minimize a summation of costs. That is, we minimize the sum of distances from clients to their respective closest opened facilities, where in the uncapacitated facility location problem, we additionally have the sum of opening costs of the opened facilities. This *min-sum* style objective is in contrast with the min-max style objective of the minimum star cover problems, which makes them resistant to algorithmic approaches that are applicable to other common facility location counterparts. As discussed, minimum star cover problems are closely related to minimum make-span scheduling and the generalized assignment problem. The two most influential works in this regard are Lenstra et al. [@LST90] and Shmoys et al. [@ST93]. Our Results and Techniques {#sec:tech} ========================== We start with the LP relaxations of the standard integer program formulations for ${\mathrm{MLkSC}\xspace}$ and ${\mathrm{MSSC}\xspace}$. To make the presentation easier, we first define a polytope ${\operatorname{SC-LP}}(T, k)$ such that the *integral* points of ${\operatorname{SC-LP}}(T, k)$ are feasible star covers of load at most $T$ and size at most $k$.
For $i \in F$, let variable $y_i \in \{0, 1\}$ denote whether the $i$’th facility is *opened* (i.e., $y_i=1$ if and only if there is a star $(i, C_i)$ in the target star cover), and for $(i, j) \in F \times C$, let variable $x_{ij} \in \{0, 1\}$ denote whether the $j$’th client is *assigned* to facility $i$ (i.e., $x_{ij} = 1$ if and only if $j \in C_i$ where $(i, C_i)$ is a star in the target star cover). Then the following set of constraints defines ${\operatorname{SC-LP}}(T, k)$: $$\begin{minipage}[c]{0.7\textwidth} \begin{align} & && \sum_{j \in C}d(i, j) \cdot x_{ij} \leq T \cdot y_i & \forall i \in F, \label{eq:scons1}\\ & && \sum_{i \in F}y_i \leq k, \label{eq:scons2}\\ & && \sum_{i \in F}x_{ij} = 1 & \forall j \in C, \label{eq:scons3}\\ & && x_{ij} \leq y_i & \forall i \in F, \forall j \in C, \label{eq:scons4}\\ & && y_i \in [0, 1] & \forall i \in F, \label{eq:scons5}\\ & && x_{ij} \in [0, 1] & \forall i \in F, \forall j \in C \label{eq:scons6}\\ & && x_{ij} = 0&\forall i \in F, \forall j \in C: d(i, j) > T. \label{eq:scons7} \end{align} \end{minipage} \tag{${\operatorname{SC-LP}}(T, k)$}$$ Here, Constraint \[eq:scons1\] ensures that the load of an opened facility $i \in F$ is at most $T$, while Constraint \[eq:scons2\] limits the maximum number of opened facilities to $k$. Constraints \[eq:scons3\] and \[eq:scons4\] ensure that each client is fully assigned and that clients are only assigned to opened facilities. Finally, Constraints \[eq:scons5\] and \[eq:scons6\] ensure that the only integral values of the $x_{ij}$’s and $y_i$’s are $0$ and $1$, while Constraint \[eq:scons7\] removes any pair $(i, j)$ from consideration whenever the distance between $i$ and $j$ is larger than $T$. Note that we can now define the LP for ${\mathrm{MLkSC}\xspace}$ as $$\text{Minimize } T \text{ such that } {\operatorname{SC-LP}}(T, k) \text{ is feasible,} \tag{${\operatorname{MLkSC-LP}}$}$$ where one can find the minimum such $T$ using the standard binary search technique.
Similarly, the LP for ${\mathrm{MSSC}\xspace}$ can be stated as $$\text{Minimize } k \text{ such that } {\operatorname{SC-LP}}(T, k) \text{ is feasible.} \tag{${\operatorname{MSSC-LP}}$}$$ Recall that $k$ is part of the ${\mathrm{MLkSC}\xspace}$ problem input and $T$ is part of the ${\mathrm{MSSC}\xspace}$ problem input. For an arbitrary (not necessarily feasible) solution $(x, y)$ to ${\operatorname{SC-LP}}(T, k)$ and $i \in F$, let $L(i, x)$ denote the *fractional load* of facility $i$ with respect to the assignment $x$, i.e., $L(i, x) := \sum_{j \in C}d(i, j)x_{ij}$. A solution $(x, y)$ to ${\operatorname{SC-LP}}(T, k)$ is called *$(\alpha, \beta)$-approximate* if, for every $i \in F$, $L(i, x) \leq \alpha T y_i$, and $\sum_{i \in F}y_i \leq \beta k$. The proofs of \[t1\] and \[t2\] immediately follow from the two theorems on rounding feasible solutions of ${\operatorname{SC-LP}}$ presented below: \[scload\] There exists a polynomial time rounding algorithm that, given a feasible solution $(x^\ast, y^\ast)$ to ${\operatorname{SC-LP}}(T, k)$ and any ${\varepsilon}\in (0, 1)$, outputs an integral $(O(1/{\varepsilon}^2), 1 + {\varepsilon})$-approximate solution to ${\operatorname{SC-LP}}(T, k)$. Let ${\varepsilon}\in (0, 1)$ be given. Using the standard binary search approach, we can guess a value $T^*$ such that ${\mathrm{OPT}_\mathrm{MLk}}\leq T^* \leq 2{\mathrm{OPT}_\mathrm{MLk}}$, by solving ${\operatorname{MLkSC-LP}}$ multiple times for different values of $T^*$ and either finding a feasible fractional solution of load at most $T^*$, or determining that no such solution exists. Let $(x^*, y^*)$ be the corresponding fractional solution to ${\operatorname{MLkSC-LP}}$. Observe that $(x^*, y^*)$ is a feasible solution to ${\operatorname{SC-LP}}(T^*, k)$.
By \[scload\], in polynomial time we can round $(x^*, y^*)$ to an integral solution $({\dot{x}}, {\dot{y}})$ which opens at most $(1 + {\varepsilon})k$ facilities and achieves maximum load at most $O(1/{\varepsilon}^2)T^*$. Therefore, $({\dot{x}}, {\dot{y}})$ is an integral solution to ${\operatorname{MLkSC-LP}}$ that opens at most $(1 + {\varepsilon})k$ facilities and has maximum load at most $O(1/{\varepsilon}^2){\mathrm{OPT}_\mathrm{MLk}}$. \[scsize\] There exists a polynomial time rounding algorithm that, given a feasible solution $(x^\ast, y^\ast)$ to ${\operatorname{SC-LP}}(T, k)$ and any ${\varepsilon}\in (0, 1)$, outputs an integral $(2 + {\varepsilon}, O(1/{\varepsilon}^2))$-approximate solution to ${\operatorname{SC-LP}}(T, k)$. The proof of \[t2\] using \[scsize\] is the same as the proof of \[t1\] using \[scload\], omitting the binary search part (as we optimize over $k$ instead of $T$). Note that ${\operatorname{MLkSC-LP}}$ closely resembles the LP used in minimum makespan rounding by Lenstra et al. [@LST90]. In fact, in the case where we have no restriction on the number of opened facilities, we can assume $y_i = 1$ for all $i \in F$, and the LP reduces to that of the minimum makespan problem, yielding a $2$-approximation algorithm. The main difficulty here is to figure out which facilities to open. Once we have an integral opening of facilities, we can still use minimum makespan rounding at a loss of only a factor two in the guarantee for minimum load. Thus, our algorithm for ${\mathrm{MLkSC}\xspace}$ essentially transforms the initial solution for ${\operatorname{MLkSC-LP}}$ via a series of steps into a solution with integral openings, i.e., $y_i \in \{0, 1\}$ for all $i \in F$, and fractional assignments, without violating Constraint \[eq:scons1\] by too much.
When we fully open (i.e., set $y_i = 1$) some facilities in the solution, inevitably, we have to close down (set $y_i = 0$) some other partially opened ones, which requires redistributing their assigned clients to the opened ones. This process is called *rerouting* and is a well-known technique in rounding facility-location-like problems. However, instead of bounding the total load of all facilities, our problem requires bounding each $L(i, x)$ separately, and consequently, many facility-location rounding algorithms that use rerouting fail to produce a good solution. Let $x^\circ$ be the solution we obtain from $x$ after rerouting facility $i$ to facility $h$. Using the triangle inequality $d(h, j) \leq d(h, i) + d(i, j)$ for $j \in C$, we can bound $L(h, x^\circ)$, the new load of $h$: $$L(h, x^\circ) \leq L(h, x) + L(i, x) + d(h, i)\sum_{j \in C}x_{ij}.$$ If both $L(h, x)$ and $L(i, x)$ were initially $O(T)$, the new load of $h$ will also be $O(T)$ provided that the term $d(h, i)\sum_{j \in C}x_{ij} \leq d(h, i)|N(i)|$ is also at most $O(T)$ (here $N(i)$ is the set of all clients partially served by $i$). However, if $d(h, i)|N(i)|$ is large for all other facilities $h$, a good alternative to rerouting is to open $i$ integrally and assign every client in $N(i)$ to $i$. We call such facilities *heavy* facilities. There is still an issue if the integral load $\sum_{j \in N(i)}d(i, j)$ is too large compared to $T$, but we show that we can prevent having too large integral loads in heavy facilities by preceding the rerouting step with additional filtering and preprocessing steps. The filtering step blows up the load constraint by a $(1 + {\varepsilon})$ factor while ensuring that no client is fractionally assigned to far away facilities. The preprocessing step uses techniques similar to those of minimum makespan rounding by Lenstra et al.
[@LST90] to ensure that any non-zero fractional assignment $x_{ij}$ to a facility $i$ is at least a constant factor times its opening $y_i$, while slightly relaxing other constraints. Once we identify the heavy facilities, we cluster the remaining, non-heavy facilities, and choose which ones should be opened based on the clustering. Then we redistribute the assignments of the remaining facilities to those that were opened. Using the properties of the preprocessed solution and the clustering, and using the fact that none of the un-opened facilities are heavy, we show that the resulting fractional assignment satisfies the constraints up to an $O(1/{\varepsilon}^2)$ factor violation of load constraints. Hence, the algorithmic result follows from the minimum makespan rounding of Lenstra et al. [@LST90], which gives us an integral assignment whose maximum load increases by at most another factor of $2$. The algorithm for the ${\mathrm{MSSC}\xspace}$ problem, on a high level, resembles that for ${\mathrm{MLkSC}\xspace}$: We first alter the solution of ${\operatorname{MSSC-LP}}$ to have integral $y_i$’s and fractional $x_{ij}$’s, allowing the total opening $\sum_{i \in F}y_i$ to be at most an $O(1/{\varepsilon}^2)$ factor larger than the value of ${\operatorname{MSSC-LP}}$, and then use minimum makespan rounding of Lenstra et al. [@LST90] to obtain the final solution. However, since makespan rounding guarantees only a factor two violation in the load constraint, we need to make sure that our modified solution with integral openings and fractional assignments introduces only a small error in the load constraints. Namely, to ensure that the final solution satisfies $(2 + {\varepsilon})T$ maximum load, before applying the minimum makespan rounding, all the loads must be at most $(1 + {\varepsilon}/2)T$. We ensure this by rearranging the steps of the algorithm for ${\mathrm{MLkSC}\xspace}$ and carefully choosing the parameters.
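The rerouting bound displayed earlier is a direct consequence of the triangle inequality; the following quick numeric sanity check on a toy line metric (entirely our own illustration, with made-up positions and a random fractional assignment) confirms it:

```python
import random

random.seed(0)
fac = [random.uniform(0, 10) for _ in range(4)]   # facility positions (made up)
cli = [random.uniform(0, 10) for _ in range(8)]   # client positions (made up)
dist = lambda p, q: abs(p - q)
# an arbitrary fractional assignment x[i][j]
x = [[random.random() for _ in cli] for _ in fac]

def load(i, a):
    """Fractional load L(i, a) = sum_j d(i, j) * a_ij."""
    return sum(dist(fac[i], cli[j]) * a[i][j] for j in range(len(cli)))

i, h = 0, 1
xo = [row[:] for row in x]            # x°: reroute facility i to facility h
for j in range(len(cli)):
    xo[h][j] += xo[i][j]
    xo[i][j] = 0.0

lhs = load(h, xo)
rhs = load(h, x) + load(i, x) + dist(fac[h], fac[i]) * sum(x[i])
assert lhs <= rhs + 1e-9              # L(h, x°) <= L(h, x) + L(i, x) + d(h, i) * sum_j x_ij
```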
$(O(1/{\varepsilon}^2), 1 + {\varepsilon})$-approximation to ${\mathrm{SC}\xspace}(T, k)$ {#sec:algo1} ========================================================================================= In this section, we show how to convert a (feasible) fractional solution $(x, y)$ of ${\operatorname{SC-LP}}$ into an $(O(1/{\varepsilon}^2), 1 + {\varepsilon})$-approximate solution with integral $y$ values. This, together with the minimum makespan rounding scheme by Lenstra et al. [@LST90], proves \[scload\]. Preprocessing and filtering --------------------------- Suppose that for each $(i, j) \in F \times C$ we either have $x_{ij} = 0$ or $x_{ij} \geq \gamma y_i$ for a constant $\gamma \in (0, 1)$. Then, if $L(i, x) = \sum_{j \in C}d(i, j)x_{ij} \leq \nu Ty_i$ for some constant $\nu \geq 1$, we have $\sum_{j \in N(i)}d(i, j) \leq \frac{\nu}{\gamma}T$. Therefore, if we open $i$ integrally and assign all of $N(i)$ to $i$, the resulting load of $i$ will be $O(T)$. Even though we cannot guarantee the property above for every solution $(x, y)$ to ${\operatorname{SC-LP}}(T, k)$, we can modify $(x, y)$ so that all non-zero assignments $x_{ij}$ satisfy $x_{ij} \geq \gamma y_i$ for some constant $\gamma \in (0, 1)$ at the expense of slightly relaxing other constraints of ${\operatorname{SC-LP}}$. This is exactly the statement of the preprocessing theorem. \[preproc1\] Let $(x, y)$ be such that, for all $i \in F$, $L(i, x) \leq \mu T y_i$ for some constant $\mu \geq 1$ and all other constraints of ${\operatorname{SC-LP}}(T, k)$ on variables $x$ are satisfied. There exists a polynomial time algorithm that, given such a solution $(x, y)$ and a constant $\gamma \in (0, 1)$, finds a solution $(x', y')$ such that 1. $y' = y$, and if $x_{ij} = 0$, then $x'_{ij} = 0$; 2. for every $(i, j) \in F\times C$, $y'_i \geq x'_{ij}$, and if $x'_{ij} > 0$, then $x'_{ij} \geq \gamma y_i'$; 3. for every $j \in C$, $1 \geq \sum_{i \in F}x'_{ij} \geq 1 - \gamma$; 4.
for every $i \in F$, $L(i, x') \leq (\mu + 2 - \gamma)Ty_i'$. That is to say, we can guarantee the property $\{x_{ij} > 0 \iff x_{ij} \geq \gamma y_i\}$ by losing at most a $\gamma$ fraction of each client’s demand and slightly increasing each facility’s load. Losing a $\gamma$ fraction of the demand is affordable for our purposes, as one can meet the demand constraint by scaling each $x_{ij}$ by a factor of at most $1/(1 - \gamma)$. Since $\gamma$ is a constant, this would blow up the load constraint only by an additional constant factor. The proof of \[preproc1\] is rather technical and is given in \[sec:appa\]. We now present our rounding algorithm step by step. Let $(x, y)$ be a feasible fractional solution to ${\operatorname{SC-LP}}(T, k)$ and let ${\varepsilon}\in (0, 1)$. Let $({\dot{x}}, {\dot{y}})$ denote the final rounded solution with integral ${\dot{y}}$ and fractional ${\dot{x}}$. For $j \in C$, let $D(j) := \sum_{i \in F}d(i, j)x_{ij}$ be the average facility distance to client $j$. Let $\rho := \frac{1 + {\varepsilon}}{{\varepsilon}}$. By applying the well-known filtering technique of Lin and Vitter [@LV92] to $(x, y)$, we construct a new solution $({\hat{x}}, {\hat{y}})$ such that $\sum_{i \in F}{\hat{y}}_i \leq (1 + {\varepsilon})k$, $L(i, {\hat{x}}) \leq (1 + {\varepsilon})T{\hat{y}}_i$ for all $i \in F$, and for every $i, j$, ${\hat{x}}_{ij} \leq {\hat{y}}_i$ and if ${\hat{x}}_{ij} > 0$, then $d(i, j) \leq \rho D(j)$. Applying \[preproc1\] to $({\hat{x}}, {\hat{y}})$, we obtain a solution $(x', y')$ such that 1. $\sum_{i \in F}y_i' \leq (1 + {\varepsilon})k$, 2. for all $(i, j)$, $y'_i \geq x'_{ij}$, and if $x'_{ij} > 0$, then $x'_{ij} \geq \gamma y_i'$ and $d(i, j) \leq \rho D(j)$, 3. for every $j \in C$, $1 \geq \sum_{i \in F}x'_{ij} \geq 1 - \gamma$, and 4. for every $i \in F$, $L(i, x') \leq (\mu + 2 - \gamma)Ty_i' = \nu Ty_i'$. Here $\nu := (\mu + 2 - \gamma)$ is a new load bound.
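A possible rendering of the Lin–Vitter filtering step in code (a sketch under our own naming; by Markov’s inequality the kept mass is at least $1/(1+\varepsilon)$ whenever $\sum_i x_{ij} = 1$):

```python
def filter_solution(d, x, y, eps):
    """Lin-Vitter-style filtering (sketch): for each client j, keep only
    facilities within rho*D(j), renormalize the kept mass, and blow up
    the openings by (1 + eps)."""
    m, n = len(x), len(x[0])
    rho = (1 + eps) / eps
    D = [sum(d[i][j] * x[i][j] for i in range(m)) for j in range(n)]
    xh = [[0.0] * n for _ in range(m)]
    for j in range(n):
        near = [i for i in range(m) if x[i][j] > 0 and d[i][j] <= rho * D[j]]
        mass = sum(x[i][j] for i in near)   # >= 1/(1+eps) by Markov's inequality
        for i in near:
            xh[i][j] = x[i][j] / mass       # renormalize so sum_i xh_ij = 1
    yh = [min(1.0, (1 + eps) * y[i]) for i in range(m)]
    return xh, yh
```

Note that renormalizing by at most $(1+\varepsilon)$ and capping $\hat{y}_i$ at $1$ preserves $\hat{x}_{ij} \leq \hat{y}_i$.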
We choose $\mu := (1 + {\varepsilon})$ and $\gamma := {\varepsilon}/(1 + {\varepsilon})$, but will keep the parameters unsubstituted for convenience. It is easy to see from the bounds above that for every $j \in C$, $\sum_{i \in F:x'_{ij} > 0}y_i' \geq 1/(1 + {\varepsilon})$. Opening heavy facilities ------------------------ We now give an algorithm to choose heavy facilities based on $(x', y')$. For $F'\subseteq F$ and $C'\subseteq C$, let $N'(i) := \{j \in C' : x'_{ij} > 0\}$ and $N'(j) := \{i \in F' : x'_{ij} > 0\}$. The algorithm internally maintains two subsets $F' \subseteq F$ and $C' \subseteq C$. Notice that $N'$ changes as the algorithm modifies $F'$ and $C'$. A facility $i \in F'$ is *$\lambda$-heavy* for $\lambda > 0$ if $\sum_{j \in N'(i)}D(j) > \lambda T$. \[heavy\] opens all $\lambda$-heavy facilities for the given value of $\lambda$. It starts with $F' = F$ and $C' = C$ and scans $F'$ for $\lambda$-heavy facilities. It fully opens every $\lambda$-heavy facility $i \in F'$ and assigns all of $N'(i)$ integrally to $i$. Then, it discards $i$ from $F'$ and $N'(i)$ from $C'$, and continues until all facilities are processed. 1. Initialize $F' \leftarrow F$, $C' \leftarrow C$. 2. For each $\lambda$-heavy facility $i \in F'$: initialize $C(i) \leftarrow N'(i)$; set $F' \leftarrow F' \setminus \{i\}$, ${\dot{y}}_i = 1$; and for each $j \in C(i)$, set $C' \leftarrow C'\setminus \{j\}$, ${\dot{x}}_{ij} = 1$, and ${\dot{x}}_{hj} = 0$ for all $h \neq i$. 3. Return $({\dot{x}}, {\dot{y}})$, $F'$, $C'$. Since for each $h \in F'$ we may discard some clients from $N'(h)$ after every step, facilities that were $\lambda$-heavy might become non-$\lambda$-heavy under the updated $F'$ and $C'$. \[heavylemma\] shows that this procedure does not open too many facilities and that the load of opened facilities does not exceed $T$ by too much. \[heavylemma\] Let $F', C'$ be the sets returned by \[heavy\]. Then $|F \setminus F'| \leq k/\lambda$, and for each facility $i \in F\setminus F'$, $L(i, {\dot{x}}) \leq \frac{\nu}{\gamma}T$.
The set $F\setminus F'$ is exactly the set of facilities integrally opened during \[heavy\]. For $i \in F\setminus F'$, set $C(i)$ in \[heavy\] is exactly the set of clients, integrally assigned to $i$ by the algorithm. Observe that for every $i, h \in F\setminus F'$, $i \neq h$, the sets $C(i)$ and $C(h)$ are *disjoint*. Hence, by feasibility of $(x, y)$, $$|F\setminus F'|\cdot \lambda T < \sum_{i \in F\setminus F'}\sum_{j \in C(i)}D(j) \leq \sum_{j \in C}D(j) = \sum_{j \in C}\sum_{i \in F}d(i, j)x_{ij} \leq \sum_{i \in F}Ty_i \leq Tk$$ and $|F\setminus F'| < \frac{Tk}{\lambda T} = \frac{k}{\lambda}$. Next, by the properties of solution $(x', y')$: $$\nu Ty_i' \geq \sum_{j \in C(i)}d(i, j)x'_{ij} \geq \gamma y_i'\sum_{j \in C(i)}d(i, j) \implies L(i, {\dot{x}}) = \sum_{j \in C(i)}d(i, j) \leq \frac{\nu}{\gamma}T.$$ We apply \[heavy\] with $\lambda := 1/{\varepsilon}$, and by \[heavylemma\] this opens at most ${\varepsilon}k$ additional facilities. The load of each opened facility is at most $\frac{\nu}{\gamma}T = \frac{(1 + {\varepsilon})(\mu + 2 - \gamma)}{{\varepsilon}}T = O(T/{\varepsilon})$. For the returned sets $F'$ and $C'$, $\sum_{j \in N'(i)}D(j) \leq T/{\varepsilon}$ for all $i \in F'$. Moreover, since $j \in C'$ if and only if $j$ was not served by any $\lambda$-heavy facility (which got opened), for all $j \in C'$ we have $\sum_{i \in N'(j)}y_i' = \sum_{i \in F: x'_{ij} > 0}y_i'\geq 1/(1 + {\varepsilon})$. Facilities in $F\setminus F'$ are all integral, and it remains to find the integral opening among facilities in $F'$. As discussed earlier, if we reroute $i \in F'$ to $h \in F'$, to guarantee a good approximation we have to bound the term $d(h, i)|N'(i)|$. Observe that $\sum_{j \in N'(i)}D(j)$ is an upper bound for $|N'(i)|\min_{j \in N'(i)}D(j)$. Therefore, to get a good bound, we need to choose $h$ for $i$ so that $d(h, i)$ is at most some constant times $\min_{j \in N'(i)}D(j)$. 
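A compact Python sketch of \[heavy\] (our own rendering; inputs follow the text’s notation, with `x` the preprocessed assignment and `D` the average distances):

```python
def open_heavy(x, D, T, lam):
    """Sketch of the heavy-facility step: scan facilities once, integrally
    open every lambda-heavy one and integrally assign its surviving
    fractional clients to it."""
    m, n = len(x), len(x[0])
    Fp, Cp = set(range(m)), set(range(n))  # F' and C'
    opened = {}                            # i -> C(i)
    for i in range(m):
        Ni = {j for j in Cp if x[i][j] > 0}    # N'(i) w.r.t. current C'
        if sum(D[j] for j in Ni) > lam * T:    # i is lambda-heavy
            opened[i] = Ni
            Fp.discard(i)
            Cp -= Ni                           # these clients are settled
    return opened, Fp, Cp
```

As in the text, discarding clients along the way means a facility scanned later may no longer be $\lambda$-heavy with respect to the updated $C'$.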
Guaranteeing such a bound requires a sophisticated clustering technique and a wise choice of the facility $h$ for every such $i$. Clustering ---------- To create an integral opening over $F'$, we partition $F'$ into disjoint clusters, open some facilities in every cluster, and reroute the closed ones into opened ones within the same cluster. Our goal is to cluster $F'$ so that, if $i$ and $h$ belong to the same cluster and we reroute $h$ to $i$, then $d(h, i) \leq O(\min_{j \in N'(i)}D(j))$. Classic clustering approaches for facility-location-like problems do not work, and to achieve this bound we are required to design a novel approach. Let ${\mathcal{C}}\subseteq C'$ be the set of cluster centers. For every $j \in {\mathcal{C}}$, let $F'(j) \subseteq F'$ be the set of facilities belonging to the cluster centered at $j$, and for $i \in F'$ let ${\mathcal{C}}(i)$ be the center of the cluster $i$ belongs to (i.e., $i \in F'(j) \iff {\mathcal{C}}(i) = j$). The clustering procedure works as follows. First, we form the cluster centers ${\mathcal{C}}$ by scanning $j \in C'$ in ascending order of $D(j)$ and adding $j$ to ${\mathcal{C}}$ only if there are no other centers in ${\mathcal{C}}$ within distance $2\rho D(j)$ from $j$. Having determined ${\mathcal{C}}$, we add facilities from $F'$ to the different clusters. Most classical clustering approaches would put $i$ into $F'(s)$ if $s$ is closest to $i$ among ${\mathcal{C}}$. Our approach is different: if $i \in F'$ is serving some $s \in {\mathcal{C}}$, we add $i$ to $F'(s)$ regardless of the distance $d(i, s)$. Otherwise, we consider the $j \in N'(i)$ with *minimum* $D(j)$ (such a $j$ is not a cluster center), take the $s \in {\mathcal{C}}$ that prevented $j$ from becoming a center, and add $i$ to $F'(s)$. The figure below visualizes the clustering procedure, and \[alg:clustering\] gives its pseudocode.
1. Initialize ${\mathcal{C}}\leftarrow \varnothing$ and sort $j \in C'$ by the values of $D(j)$ in ascending order. 2. For each $j \in C'$ in this order: if $d(s, j) > 2\rho D(j)$ for all $s \in {\mathcal{C}}$, set ${\mathcal{C}}\leftarrow {\mathcal{C}}\cup \{j\}$. 3. For all $s \in {\mathcal{C}}$, initialize $F'(s) \leftarrow \varnothing$. 4. For each $i \in F'$: if $i$ serves some $s \in {\mathcal{C}}$, set ${\mathcal{C}}(i) \leftarrow s$ and $F'(s) \leftarrow F'(s) \cup \{i\}$; otherwise, let $j := {\mathop{\mathrm{argmin}}}_{r \in N'(i)}D(r)$, take $s \in {\mathcal{C}}$ such that $D(s) \leq D(j)$ and $d(s, j) \leq 2\rho D(j)$, and set ${\mathcal{C}}(i) \leftarrow s$, $F'(s) \leftarrow F'(s) \cup \{i\}$. 5. Return ${\mathcal{C}}$ and $F'(s)$ for $s \in {\mathcal{C}}$. \[alg:clustering\] [Figure: an illustration of the clustering procedure with clients $v, j, s$ satisfying $D(v) \leq D(j) \leq D(s)$; facilities $u$ and $w$ serve $v$, facility $h$ serves $j$ and $v$, and facility $i$ serves $j$ and $s$.] One can easily check that, after \[clustering\] finishes, for any $s, v \in {\mathcal{C}}$, $s \neq v$, $d(s, v) > 2\rho\max\big(D(s), D(v)\big)$, and as a result $N'(s)$ and $N'(v)$, as well as $F'(s)$ and $F'(v)$, are disjoint. Also, for every $j \notin {\mathcal{C}}$ there exists $s \in {\mathcal{C}}$ such that $D(s) \leq D(j)$ and $d(s, j) \leq 2\rho D(j)$, simply by construction of the algorithm.
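One possible concrete rendering of \[alg:clustering\], using a line metric for simplicity (the positions, names, and instance below are our own illustration, not the paper’s code):

```python
def cluster(fpos, cpos, x, D, Fp, Cp, rho):
    """Sketch of the clustering step on a line metric: pick centers in
    ascending order of D(j), then attach each facility either to the
    (unique) center it serves, or to the center that blocked its
    min-D client from becoming a center."""
    dist = lambda a, b: abs(a - b)
    centers = []
    for j in sorted(Cp, key=lambda j: D[j]):
        if all(dist(cpos[s], cpos[j]) > 2 * rho * D[j] for s in centers):
            centers.append(j)
    Fs = {s: set() for s in centers}
    for i in Fp:
        served = [s for s in centers if x[i][s] > 0]
        if served:
            s = served[0]                      # i serves at most one center
        else:
            j = min((r for r in Cp if x[i][r] > 0), key=lambda r: D[r])
            s = next(s for s in centers        # the center that blocked j
                     if D[s] <= D[j] and dist(cpos[s], cpos[j]) <= 2 * rho * D[j])
        Fs[s].add(i)
    return centers, Fs
```

The `fpos`/`cpos` arrays stand in for an arbitrary metric; only client-client and facility-client distances are used, exactly as in the text.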
\[clustering\] allows us to obtain an upper bound on the distance between a facility and its cluster center, expressed in terms of the minimum average distance among the clients served by this facility. \[distance\] Let $i \in F'$ and let $j = {\mathop{\mathrm{argmin}}}_{r \in N'(i)}D(r)$. Then $d(i, {\mathcal{C}}(i)) \leq 3\rho D(j)$. Let ${\mathcal{C}}(i) = s$. There are two cases to distinguish. - $i \notin N'(s)$ (this case is shown by clients $j, v$ and facility $h$ in the figure). By construction of \[clustering\], client $s$ is exactly the one that prevented $j$ from becoming a cluster center; therefore $D(s) \leq D(j)$ and $d(j, s) \leq 2 \rho D(j)$. Thus, by the triangle inequality, $d(i, s) \leq d(i, j) + d(j, s) \leq \rho D(j) + 2\rho D(j)= 3\rho D(j)$. - $i \in N'(s)$. Then $D(j) = \min_{r \in N'(i)}D(r) \leq D(s)$. If $s = j$ or $D(s) = D(j)$, then $d(i, s) \leq \rho D(j)$ automatically. Suppose that $s \neq j$ and $D(j) < D(s)$; then $j \notin {\mathcal{C}}$, as $s \in {\mathcal{C}}$ and $i \in N'(j) \cap N'(s)$ (this case is shown by clients $j, s$ and facility $i$ in the figure). Hence, there exists some $s' \in {\mathcal{C}}$ that prevented $j$ from becoming a cluster center, so $D(s') \leq D(j)$ and $d(s', j) \leq 2\rho D(j)$. It is easy to see that $D(j)$ must be strictly greater than zero, and since both $s$ and $s'$ are cluster centers, $d(s', s) > 2\rho D(s)$. So, by the triangle inequality, $$2\rho D(s) < d(s', s) \leq d(s', j) + d(i, j) + d(i, s) \leq 2\rho D(j) + \rho D(j) + \rho D(s),$$ implying $$2\rho D(s) \leq 3\rho D(j) + \rho D(s) \implies D(s) \leq 3D(j).$$ Since $i \in N'(s)$, it immediately follows that $d(i, s) \leq \rho D(s) \leq 3 \rho D(j)$. By applying the triangle inequality once more, we get the desired upper bound on the distance between any two facilities within the same cluster. \[dist\] Let $i, h \in F'$ be such that ${\mathcal{C}}(i) = {\mathcal{C}}(h)$.
Let $j = {\mathop{\mathrm{argmin}}}_{r \in N'(i)}D(r)$ and $v = {\mathop{\mathrm{argmin}}}_{w \in N'(h)}D(w)$. Then $d(i, h) \leq 6\rho \max\big(D(j), D(v)\big)$. Another useful observation is that $\sum_{i \in F'(s)}y_i' \geq 1/(1 + {\varepsilon})$ for every cluster center $s \in {\mathcal{C}}$. It follows from $N'(s) \subseteq F'(s)$ and $\sum_{i \in N'(s)}y_i' \geq 1/(1 + {\varepsilon})$. Rerouting --------- The last part of our rounding algorithm is opening some facilities in every cluster and rerouting the closed ones. For $s \in {\mathcal{C}}$ we open $\lfloor (1 + {\varepsilon})\sum_{u \in F'(s)}y_u'\rfloor$ facilities in cluster $F'(s)$, prioritizing facilities $i$ with minimum values of $\min_{r \in N'(i)}D(r)$. Since $\sum_{u \in F'(s)}y_u' \geq 1/(1 + {\varepsilon})$, we will open at least one facility in every cluster $F'(s)$ for $s \in {\mathcal{C}}$. Then, the demand of each closed facility in $F'(s)$ is redistributed in equal fractions among all the opened ones in $F'(s)$, i.e., we reroute it to all opened facilities in $F'(s)$. This gives us an integral opening ${\dot{y}}$ over facilities in $F'$ and a fractional assignment ${\dot{x}}$ over clients in $C'$. \[reroutlemma\] shows that by opening $|K_s| = \lfloor (1 + {\varepsilon})\sum_{u \in F'(s)}y_u'\rfloor$ facilities in cluster $F'(s)$ and rerouting all closed facilities in $F'(s)$, we open at most $(1 + 3{\varepsilon})k$ facilities in total, and the load of every opened facility in $F'$ exceeds $T$ by at most a constant factor. 1. For each $s \in {\mathcal{C}}$: initialize $K_s \leftarrow \varnothing$ and sort $i \in F'(s)$ in ascending order of $\min_{r \in N'(i)}D(r)$. 2. For each $i \in F'(s)$, in this order: if $|K_s| < \lfloor (1 + {\varepsilon})\sum_{u \in F'(s)}y_u'\rfloor$, set $K_s \leftarrow K_s \cup \{i\}$, ${\dot{y}}_i \leftarrow 1$, and ${\dot{x}}_{ij} \leftarrow x'_{ij}$ for all $j \in N'(i)$; otherwise, set ${\dot{y}}_i \leftarrow 0$, and for each $r \in N'(i)$, set ${\dot{x}}_{ir} \leftarrow 0$ and ${\dot{x}}_{hr} \leftarrow {\dot{x}}_{hr} + x'_{ir}/|K_s|$ for all $h \in K_s$. 3. Return $({\dot{x}}, {\dot{y}})$ and $K_s$ for $s \in {\mathcal{C}}$.
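The rerouting step admits a short Python sketch (our own rendering; `clusters` maps each center to its set $F'(s)$, and the `max(1, ...)` guard mirrors the guarantee $|K_s| \geq 1$ from the text):

```python
import math

def reroute(x, y, D, clusters, eps):
    """Sketch of the rerouting step: in each cluster, open the
    floor((1+eps) * total fractional opening) facilities whose clients
    have the smallest min average distance, and split each closed
    facility's assignment equally among the opened ones."""
    n = len(x[0])
    xd = [[0.0] * n for _ in x]       # rounded assignment
    yd = [0] * len(x)                 # integral opening
    for s, Fs in clusters.items():
        key = lambda i: min((D[r] for r in range(n) if x[i][r] > 0),
                            default=float('inf'))
        order = sorted(Fs, key=key)
        ks = max(1, math.floor((1 + eps) * sum(y[i] for i in Fs)))
        Ks, closed = order[:ks], order[ks:]
        for i in Ks:                  # opened: keep their own assignment
            yd[i] = 1
            for j in range(n):
                xd[i][j] = x[i][j]
        for i in closed:              # closed: split demand among K_s
            for j in range(n):
                if x[i][j] > 0:
                    for h in Ks:
                        xd[h][j] += x[i][j] / len(Ks)
    return xd, yd
```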
\[reroutlemma\] After \[rerouting\], for every facility $h \in F'$ with ${\dot{y}}_h = 1$, $L(h, {\dot{x}}) \leq 3(\nu + 4\rho\lambda)T$. Moreover, $\sum_{h \in F'}{\dot{y}}_h \leq (1 + 3{\varepsilon})k$. Since for every $s \in {\mathcal{C}}$ we have $\sum_{u \in F'(s)}y_u' \geq 1/(1 + {\varepsilon})$, we get $\lfloor (1 + {\varepsilon})\sum_{u \in F'(s)}y_u'\rfloor \geq 1$ and $|K_s| \geq 1$. After the filtering and preprocessing steps, $\sum_{u \in F'}y_u' \leq (1 + {\varepsilon})k$, so $$\sum_{h \in F'}{\dot{y}}_h = \sum_{s \in {\mathcal{C}}}\sum_{h \in K_s}{\dot{y}}_h \leq (1 + {\varepsilon})\sum_{s\in {\mathcal{C}}}\sum_{u \in F'(s)}y'_u = (1 + {\varepsilon})\sum_{u \in F'}y_u' \leq (1 + {\varepsilon})^2k \leq (1 + 3{\varepsilon})k.$$ Next, let $h \in F'$ with ${\dot{y}}_h = 1$, and let ${\mathcal{C}}(h) = s$. Take $i \in F'(s)$ that was closed by \[rerouting\]. The demand of every $r \in N'(i)$ served by $i$ gets split in equal fractions among all opened facilities from $K_s$. So, after we reroute $i$ into $h$, the *additional* load of $h$ is $$\sum_{r \in N'(i)}d(h, r)\frac{x'_{ir}}{|K_s|} \leq \frac{1}{|K_s|}\sum_{r \in N'(i)}d(i, r)x'_{ir} + \frac{d(h, i)}{|K_s|}\sum_{r \in N'(i)}x'_{ir}.$$ Recall that $\sum_{r \in N'(i)}d(i, r)x'_{ir} = L(i, x')\leq \nu Ty_i'$. Let $v = {\mathop{\mathrm{argmin}}}_{w \in N'(h)}D(w)$ and $j = {\mathop{\mathrm{argmin}}}_{r \in N'(i)}D(r)$. Since $h$ was opened and $i$ was closed, $D(v) \leq D(j)$, and by \[dist\], $d(h, i) \leq 6\rho D(j)$.
Hence, the *additional* load of $h$ is at most $$\begin{gathered} \frac{1}{|K_s|}\sum_{r \in N'(i)}d(i, r)x'_{ir} + \frac{d(h, i)}{|K_s|}\sum_{r \in N'(i)}x'_{ir} \leq \frac{\nu Ty_i'}{|K_s|} + \frac{6\rho D(j)}{|K_s|} \sum_{r \in N'(i)}x'_{ir}\leq \\ \leq \frac{\nu Ty_i'}{|K_s|} + \frac{6\rho D(j)}{|K_s|} \sum_{r \in N'(i)}y_i' =\frac{y_i'}{|K_s|}\left(\nu T + 6\rho \cdot |N'(i)|D(j) \right) \leq \\ \leq \frac{y_i'}{|K_s|}\left(\nu T + 6\rho\cdot \lambda T \right) = \frac{y_i'}{|K_s|}(\nu + 6\rho\lambda)T. \end{gathered}$$ We used the bound $|N'(i)|\min_{r \in N'(i)}D(r) \leq \sum_{r \in N'(i)}D(r) \leq \lambda T$ for non-$\lambda$-heavy facilities. Hence, the *total additional* load of $h$, gained after rerouting all closed facilities $i \in F'(s)\setminus K_s$ in its cluster, is at most $$\begin{gathered} \sum_{i \in F'(s)\setminus K_s}\sum_{r \in N'(i)}d(h, r)\frac{x'_{ir}}{|K_s|} \leq \sum_{i \in F'(s)\setminus K_s}\frac{y_i'}{|K_s|}(\nu + 6\rho\lambda)T = \\ =(\nu + 6\rho\lambda)T\cdot \frac{\sum_{i \in F'(s)\setminus K_s}y_i'}{\lfloor (1 + {\varepsilon})\sum_{u \in F'(s)}y_u'\rfloor} \leq (\nu + 6\rho\lambda)T\cdot \frac{(1 + {\varepsilon})\sum_{i \in F'(s)}y_i'}{\lfloor (1 + {\varepsilon})\sum_{u \in F'(s)}y_u'\rfloor}\leq\\ \leq (2\nu + 12\rho\lambda)T. \end{gathered}$$ The load of $h$ before rerouting was $L(h, x') \leq \nu Ty'_h \leq \nu T$, so after \[rerouting\] the total load of facility $h$ is $L(h, {\dot{x}})\leq 3(\nu + 4\rho\lambda)T$. This holds for every $h \in K_s$ and every center $s \in {\mathcal{C}}$. Now we are ready to complete the analysis of the rounding algorithm. We claim that, having completed all the intermediate steps from filtering up to and including \[rerouting\], with parameter values $\rho = \frac{1 + {\varepsilon}}{{\varepsilon}}$, $\gamma = {\varepsilon}/(1 + {\varepsilon})$, and $\lambda = 1/{\varepsilon}$, the resulting solution $({\dot{x}}, {\dot{y}})$ satisfies: 1.
${\dot{y}}$ is integral, and $\sum_{i \in F}{\dot{y}}_i \leq (1 + 4{\varepsilon})k$; 2. for every $j \in C$, $1 \geq \sum_{i \in F}{\dot{x}}_{ij} \geq 1/(1 + {\varepsilon})$, and if $j \in C\setminus C'$, $\sum_{i \in F}{\dot{x}}_{ij} = 1$; 3. for every $i \in F$, $L(i, {\dot{x}}) \leq 12\left(1 +\frac{1 + {\varepsilon}}{{\varepsilon}^2}\right)T{\dot{y}}_i$. By \[heavylemma\], \[heavy\] could open an additional ${\varepsilon}k$ facilities, so $\sum_{i \in F\setminus F'}{\dot{y}}_i \leq {\varepsilon}k$. By \[reroutlemma\], $\sum_{h \in F'}{\dot{y}}_h \leq (1 + 3{\varepsilon})k$. This gives us a total opening of $\sum_{i \in F}{\dot{y}}_i \leq (1 + 4{\varepsilon})k$. Next, take $j \in C$. If $j$ was served by some $\lambda$-heavy facility $i$, then $j \in C\setminus C'$, and \[heavy\] sets ${\dot{x}}_{ij} = 1$ and ${\dot{x}}_{hj} = 0$ for all other facilities $h \neq i$. If $j$ was not served by any $\lambda$-heavy facility, then $j \in C'$, and we get $1 \geq \sum_{i \in F}{\dot{x}}_{ij} = \sum_{i \in F}x'_{ij} \geq 1/(1 + {\varepsilon})$ after rerouting. Finally, if $i \in F$ was $\lambda$-heavy, by \[heavylemma\], $L(i, {\dot{x}}) \leq \frac{\nu}{\gamma}T \leq \frac{4}{{\varepsilon}}T{\dot{y}}_i$. Let $i$ be non-$\lambda$-heavy, i.e., $i \in F'$. If ${\dot{y}}_i = 0$, i.e., $i$ is closed, then \[rerouting\] assures that $L(i, {\dot{x}}) = 0$. If ${\dot{y}}_i = 1$, then by \[reroutlemma\] we have $L(i, {\dot{x}}) \leq 3(\nu + 4\rho\lambda)T \leq 3\left(4 + 4\frac{1 + {\varepsilon}}{{\varepsilon}^2}\right)T{\dot{y}}_i = 12\left(1 +\frac{1 + {\varepsilon}}{{\varepsilon}^2}\right)T{\dot{y}}_i$. For every $(i, j) \in F' \times C'$, we multiply the assignment variable ${\dot{x}}_{ij}$ by $1/\left(\sum_{i \in F}{\dot{x}}_{ij}\right)$. Since $\sum_{i \in F}{\dot{x}}_{ij} \geq 1/(1 + {\varepsilon})$, the load of every opened facility in $F'$ increases at most by a factor of $1 + {\varepsilon}\leq 2$.
After this change, $\sum_{i \in F}{\dot{x}}_{ij} = 1$ and $L(i, {\dot{x}}) \leq 24\left(1 + \frac{1 + {\varepsilon}}{{\varepsilon}^2}\right)T$ for all $j \in C$, $i \in F$. The solution $({\dot{x}}, {\dot{y}})$ has integral opening ${\dot{y}}$, and every client $j \in C$ is served fully (i.e., $\sum_{i \in F}{\dot{x}}_{ij} = 1$). By applying the minimum makespan rounding algorithm [@LST90], we get an integral assignment with respect to the facilities opened in ${\dot{y}}$, sacrificing another factor of 2 in approximation. We obtain a $\left(48\left(1 + \frac{1 + {\varepsilon}}{{\varepsilon}^2}\right), 1 + 4{\varepsilon}\right)$-approximate solution to the ${\mathrm{SC}\xspace}(T, k)$ problem, and the whole algorithm clearly runs in polynomial time. $(2 + {\varepsilon}, O(1/{\varepsilon}^2))$-approximation to ${\mathrm{SC}\xspace}(T, k)$ {#sec:algo2} ========================================================================================= As with the $(O(1/{\varepsilon}^2), 1 + {\varepsilon})$-approximation to ${\mathrm{SC}\xspace}(T, k)$, our goal is, given some solution $(x, y)$ to ${\operatorname{SC-LP}}(T, k)$, to find an integral opening ${\dot{y}}$ and a fractional assignment ${\dot{x}}$, and then apply minimum makespan rounding [@LST90], which will prove \[scsize\]. However, this time we need to ensure that $L(i, {\dot{x}}) \leq (1 + {\varepsilon}/2)T$ for every $i \in F$. To achieve this, we use the same steps, applied in a different order and with different values of the parameters. Preprocessing and opening heavy facilities ------------------------------------------ Let $(x, y)$ be a feasible fractional solution to ${\operatorname{SC-LP}}(T, k)$, and let ${\varepsilon}\in (0, 1)$. Right away, we apply the preprocessing algorithm from \[preproc1\] to $(x, y)$ with parameters $\mu = 1$ and $\gamma = \frac{1}{1 + {\varepsilon}}$. This gives us a solution $(x', y')$ such that 1. $y' = y$, and $\sum_{i \in F}y_i' \leq k$, 2.
for all $(i, j) \in F \times C$, $y'_i \geq x_{ij}'$ and if $x'_{ij} > 0$ then $x'_{ij} \geq \gamma y_i' = y_i'/(1 + {\varepsilon})$, 3. for every $j \in C$, $1 \geq \sum_{i \in F}x_{ij}' \geq 1 - \gamma = {\varepsilon}/(1 + {\varepsilon})$, and 4. for every $i \in F$, $L(i, x') \leq (\mu + 2 - \gamma)Ty_i' = (2 + \frac{{\varepsilon}}{1 + {\varepsilon}})Ty_i' = \nu Ty_i'$. In this algorithm, we abuse the notation and define $D(j)$ with respect to the assignment $x'$: for $j \in C$, let $D(j) := \sum_{i \in F}d(i, j)x'_{ij}$ be the average facility distance to client $j$. Given $F'\subseteq F$ and $C'\subseteq C$, the definitions of $N'(i)$ and $N'(j)$ for $i \in F'$, $j \in C'$ are the same as before: $N'(i) := \{j \in C' : x'_{ij} > 0\}$ and $N'(j) := \{i \in F' : x'_{ij} > 0\}$. A facility $i \in F'$ is *$\lambda$-heavy* for $\lambda > 0$ if $\sum_{j \in N'(i)}D(j) > \lambda T$. We apply \[heavy\] to $(x', y')$ with $\lambda := {\varepsilon}^2/15$. Observe that $$\sum_{j \in C}D(j) = \sum_{i \in F}\sum_{j \in C}d(i, j)x'_{ij} \leq \sum_{i \in F}\nu Ty_i' \leq \nu Tk.$$ Hence, applying an analysis similar to that of \[heavylemma\], we open at most $\frac{\nu}{\lambda}k = O(k/{\varepsilon}^2)$ additional facilities, and the load of every opened facility is at most $\frac{\nu}{\gamma}T = (1 + {\varepsilon})\left(2 + \frac{{\varepsilon}}{1 + {\varepsilon}}\right)T = (2 + 3{\varepsilon})T$. For the returned sets $F'$ and $C'$, $\sum_{j \in N'(i)}D(j) \leq \lambda T = {\varepsilon}^2T/15$ for every $i \in F'$. As before, it remains to find the integral opening among facilities in $F'$. However, there may be clients $j \in C'$ for which the preprocessing step might have dropped a large portion of their demand, as the best bound we have is $\sum_{i \in F'}x'_{ij} \geq {\varepsilon}/(1 + {\varepsilon})$.
For the same reason, the opening $\sum_{i \in N'(j)}y_i'$ may be too small for some clients $j \in C'$, so we cannot apply the clustering and rerouting steps to the solution $(x', y')$ as we did in \[sec:algo1\] without losing a lot in both approximation factors; moreover, we do not even have any distance upper bounds. We handle these issues by applying a specific filtering step to $(x', y')$, bounding the distance between facilities and the clients they serve, as well as retrieving the lost demand of every client in $C'$. Filtering --------- We apply filtering to the restriction of $(x', y')$ to $F' \times C'$; however, the filtering process will be quite different from [@LV92]. We will rely heavily on the fact that we now operate with non-$\lambda$-heavy facilities only. Let $\rho := \frac{(1 + {\varepsilon})^2}{{\varepsilon}^2}$. For every $j \in C$ define $F'_j := \{i \in F' : d(i, j)\leq \rho D(j)\}$. \[filter\] For every $j \in C'$, $\sum_{i \in F'_j}x'_{ij} \geq 1/(\rho{\varepsilon}) = {\varepsilon}/(1 + {\varepsilon})^2$. Every $j \in C'$ was served by $F'$ only; therefore $D(j) = \sum_{i \in F'}d(i, j)x'_{ij}$. Observe that at most a $1/\rho$ fraction of the demand of $j$ can be served by facilities not in $F'_j$. Otherwise, $$D(j) = \sum_{i \in F'}d(i, j)x'_{ij} \geq \sum_{i \in F'\setminus F'_j}d(i, j)x'_{ij} \geq \rho D(j)\sum_{i \in F'\setminus F'_j}x'_{ij} > \rho D(j)\cdot \frac{1}{\rho} = D(j),$$ a contradiction. Hence, $\sum_{i \in F'\setminus F'_j}x'_{ij} \leq 1/\rho$.
Since $\sum_{i \in F'}x'_{ij} \geq {\varepsilon}/(1 + {\varepsilon})$ for all $j \in C'$, we have $$\sum_{i \in F'_j}x'_{ij} = \sum_{i \in F'}x'_{ij} - \sum_{i \in F'\setminus F'_j}x'_{ij} \geq \frac{{\varepsilon}}{1 + {\varepsilon}} - \frac{{\varepsilon}^2}{(1 + {\varepsilon})^2} = \frac{{\varepsilon}}{(1 + {\varepsilon})^2} = \frac{1}{\rho{\varepsilon}}.$$ We construct a new solution $({\hat{x}}, {\hat{y}})$ as follows: $$\text{for all $(i, j) \in F' \times C'$,}\qquad\qquad {\hat{x}}_{ij} = \begin{cases} 0,& i \notin F'_j;\\ \frac{x'_{ij}}{\sum_{i \in F'_j}x'_{ij}},& i \in F'_j; \end{cases}\qquad {\hat{y}}_i = \min\left(1, \rho{\varepsilon}y_i'\right).$$ Clearly, $\sum_{i \in F'}{\hat{y}}_i \leq \rho{\varepsilon}\sum_{i \in F'}y_i' \leq \rho{\varepsilon}k = O(k/{\varepsilon})$. Also, by \[filter\], ${\hat{x}}_{ij} \leq \min(1, \rho{\varepsilon}x'_{ij}) \leq {\hat{y}}_i$ for every $(i, j) \in F' \times C'$. To bound $L(i, {\hat{x}})$ for $i \in F'$, recall that $i$ is non-$\lambda$-heavy, therefore $\sum_{j \in N'(i)}D(j) \leq \lambda T = {\varepsilon}^2T/15$. Since ${\hat{x}}_{ij} > 0$ if and only if $x'_{ij} > 0$ and $d(i, j) \leq \rho D(j)$, $$\lambda T{\hat{y}}_i \geq \sum_{j \in N'(i)}D(j){\hat{y}}_i \geq \sum_{\substack{j \in N'(i)\\s.t.\,{\hat{x}}_{ij} > 0}}D(j){\hat{y}}_i \geq \frac{1}{\rho} \sum_{\substack{j \in N'(i)\\s.t.\,{\hat{x}}_{ij} > 0}}d(i, j){\hat{y}}_i \geq \frac{1}{\rho} \sum_{\substack{j \in N'(i)\\s.t.\,{\hat{x}}_{ij} > 0}}d(i, j){\hat{x}}_{ij},$$ implying $$L(i, {\hat{x}}) = \sum_{\substack{j \in N'(i)\\s.t.\,{\hat{x}}_{ij} > 0}}d(i, j){\hat{x}}_{ij} \leq \rho\lambda T{\hat{y}}_i= \frac{\rho{\varepsilon}^2}{15}T{\hat{y}}_i = \frac{(1 + {\varepsilon})^2}{15}T{\hat{y}}_i.$$ Also, for every $j \in C'$ we now have $\sum_{i \in F'}{\hat{x}}_{ij} = 1$ and $\sum_{i: {\hat{x}}_{ij} > 0}{\hat{y}}_i \geq 1$. 
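The construction of $({\hat{x}}, {\hat{y}})$ above translates directly to code; the following sketch (our own naming, not the paper’s code) implements the displayed formulas:

```python
def mssc_filter(d, xp, yp, eps):
    """Sketch of the MSSC filtering step: restrict each client j to the
    facility set F'_j = {i : d(i,j) <= rho*D(j)}, renormalize the kept
    assignment mass to 1, and scale openings by rho*eps (capped at 1)."""
    m, n = len(xp), len(xp[0])
    rho = (1 + eps) ** 2 / eps ** 2
    D = [sum(d[i][j] * xp[i][j] for i in range(m)) for j in range(n)]
    xh = [[0.0] * n for _ in range(m)]
    for j in range(n):
        Fj = [i for i in range(m) if d[i][j] <= rho * D[j]]
        mass = sum(xp[i][j] for i in Fj)   # >= 1/(rho*eps) by the claim above
        for i in Fj:
            xh[i][j] = xp[i][j] / mass     # renormalize so sum_i xh_ij = 1
    yh = [min(1.0, rho * eps * yp[i]) for i in range(m)]
    return xh, yh
```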
Since $\{i \in F' : {\hat{x}}_{ij} > 0\}\subseteq N'(j)$ and $\{j \in C' : {\hat{x}}_{ij} > 0\}\subseteq N'(i)$, we will abuse the notation and redefine $N'(i)$ and $N'(j)$ in terms of the assignment ${\hat{x}}$. Let $\hat{\nu} := \frac{(1 + {\varepsilon})^2}{15}$. The following holds for $({\hat{x}}, {\hat{y}})$: 1. $\sum_{i \in F'}{\hat{y}}_i \leq \rho{\varepsilon}k = O(k/{\varepsilon})$, 2. for all $(i, j) \in F' \times C'$, ${\hat{y}}_i \geq {\hat{x}}_{ij}$ and if ${\hat{x}}_{ij} > 0$ then $d(i, j) \leq \rho D(j)$, 3. for every $j \in C'$, $\sum_{i \in F'}{\hat{x}}_{ij} = 1$ and $\sum_{i \in N'(j)}{\hat{y}}_i \geq 1$, 4. for every $i \in F'$, $L(i, {\hat{x}}) \leq \hat{\nu} T{\hat{y}}_i$. Finishing the algorithm ----------------------- Now we can correctly use our clustering and rerouting algorithms with $({\hat{x}}, {\hat{y}})$. We subsequently apply \[clustering\] and \[rerouting\] to $({\hat{x}}, {\hat{y}})$ with the newly defined sets $N'$ for $F'$ and $C'$, with the corresponding values of the parameters $\lambda, \rho$ and $\nu \equiv \hat{\nu}$, obtaining the integral opening ${\dot{y}}$ and a possibly fractional assignment ${\dot{x}}$ over $(F', C')$. By \[reroutlemma\], for every $h \in F'$ with ${\dot{y}}_h = 1$, $$L(h, {\dot{x}}) \leq 3(\hat{\nu} + 4\rho\lambda)T = 3\left(\frac{(1 + {\varepsilon})^2}{15} + 4\frac{(1 + {\varepsilon})^2}{{\varepsilon}^2}\cdot \frac{{\varepsilon}^2}{15}\right)T = (1 + {\varepsilon})^2T \leq (1 + 3{\varepsilon})T,$$ and we open at most $(1 + {\varepsilon})\sum_{i \in F'}{\hat{y}}_i = O(k/{\varepsilon})$ facilities. Since for every $j \in C$ we have $\sum_{i \in F}{\dot{x}}_{ij} = 1$, there is no need to modify the fractional variables ${\dot{x}}$ any further. Observe that all $i \in F\setminus F'$ serve $j \in C\setminus C'$ only; these $j$ are assigned to $i \in F\setminus F'$ integrally, and for all $i \in F\setminus F'$ we have $L(i, {\dot{x}}) \leq (2 + 3{\varepsilon})T$. 
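The collapse of the load bound above, $3(\hat{\nu} + 4\rho\lambda) = (1 + {\varepsilon})^2 \leq 1 + 3{\varepsilon}$ for ${\varepsilon}\leq 1$, can be checked with exact rational arithmetic (our verification, not the paper's code):

```python
from fractions import Fraction

def load_factor(eps):
    """3 * (nu_hat + 4 * rho * lam) with nu_hat = (1+eps)^2/15,
    rho = (1+eps)^2/eps^2 and lam = eps^2/15, over exact rationals."""
    nu_hat = (1 + eps) ** 2 / 15
    rho = (1 + eps) ** 2 / eps ** 2
    lam = eps ** 2 / 15
    return 3 * (nu_hat + 4 * rho * lam)
```

The $\rho\lambda$ product cancels the ${\varepsilon}^2$ factors, so the result is $(1 + {\varepsilon})^2$ independently of how small ${\varepsilon}$ is.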
Therefore, it remains to obtain an integral assignment over $(F', C')$, where for every $i \in F'$ we have $L(i, {\dot{x}}) \leq (1 + 3{\varepsilon})T$. By applying the minimum makespan rounding algorithm [@LST90] to the restriction of $({\dot{x}}, {\dot{y}})$ on $(F', C')$, we get an integral assignment, sacrificing a factor of 2 in the load approximation for $i \in F'$, resulting in a maximum load of the final solution of at most $(2 + 6{\varepsilon})T$. \[heavy\] might have opened at most $O(k/{\varepsilon}^2)$ additional facilities, so we obtain a $\left(2 + 6{\varepsilon}, O(1/{\varepsilon}^2)\right)$-approximate solution to the ${\mathrm{SC}\xspace}(T, k)$ problem, and the whole algorithm clearly runs in polynomial time, proving \[scsize\]. Acknowledgements {#acknowledgements .unnumbered} ================ We are very grateful to Ola Svensson for influential discussions at multiple stages of this work. Preprocessing {#sec:appa} ============= \[preproc\] Let $(x, y)$ be such that, for all $i \in F$, $L(i, x) \leq \mu T y_i$ for some constant $\mu \geq 1$ and all other constraints of ${\operatorname{SC-LP}}(T, k)$ on variables $x$ are satisfied. There exists a polynomial time algorithm that, given such a solution $(x, y)$ and a constant $\gamma \in (0, 1)$, finds a solution $(x', y')$ such that 1. $y' = y$, and if $x_{ij} = 0$, then $x'_{ij} = 0$; 2. for every $(i, j) \in F\times C$, $y'_i \geq x'_{ij}$, and if $x'_{ij} > 0$, then $x'_{ij} \geq \gamma y_i'$; 3. for every $j \in C$, $1 \geq \sum_{i \in F}x'_{ij} \geq 1 - \gamma$; 4. for every $i \in F$, $L(i, x') \leq (\mu + 2 - \gamma)Ty_i'$. The algorithm we use in \[preproc1\] is heavily inspired by the minimum makespan rounding algorithm, introduced by Lenstra et al. in [@LST90]. In a sense, their algorithm achieves the desired property: in the minimum makespan problem we have $y_i = 1$ for all $i \in F$, so for $j \in C$ we wish to have either $x_{ij}' = 0$ or $x_{ij}'= 1 = y_i'$. 
The key difference is that in our case $y$ is *not* integral, which requires several modifications of the original algorithm. Let $(x, y)$ and $\gamma \in (0, 1)$ be given. Let ${\tilde{F}}\subseteq F$, ${\tilde{C}}\subseteq C$, let $E \subseteq {\tilde{F}}\times {\tilde{C}}$. Consider a bipartite graph $G = ({\tilde{F}}\cup {\tilde{C}}, E)$, and let $\delta_E(v)$ be the set of neighbors of $v \in {\tilde{F}}\cup {\tilde{C}}$ in $G$, i.e., for $i \in {\tilde{F}}$, $\delta_E(i) = \{j \in {\tilde{C}}: (i, j) \in E\}$, and for $j \in {\tilde{C}}$, $\delta_E(j) = \{i \in {\tilde{F}}: (i, j) \in E\}$. For $(i, j) \in {\tilde{F}}\times {\tilde{C}}$ we introduce a variable $w_{ij}$, and numbers $d_j \leq 1$ and $L_i \leq \mu Ty_i$, which can be thought of as the remaining demand of client $j \in {\tilde{C}}$ and the remaining load of facility $i \in {\tilde{F}}$, respectively. Given sets ${\tilde{F}}, {\tilde{C}}, E$ and numbers $d, L$, we define the polytope $P({\tilde{F}}, {\tilde{C}}, E, d, L)$ as the solution set of the following feasibility linear program: $$\begin{aligned} &&&\sum_{i \in \delta_E(j)}w_{ij} = d_j,&\forall j \in {\tilde{C}},\\ &&&\sum_{j \in \delta_E(i)}d(i, j)w_{ij} \leq L_i,&\forall i \in {\tilde{F}},\\ &&& w_{ij} \leq \min(y_i, d_j),&\forall (i, j) \in E,\\ &&& w_{ij} \geq 0,&\forall (i, j) \in E. \end{aligned}\tag{$P({\tilde{F}}, {\tilde{C}}, E, d, L)$}$$ Note that all values $y_i$ for $i \in F$ are fixed, so for every number $d_j$, $j \in C$, the constraint $w_{ij} \leq \min(y_i, d_j)$ amounts to either the linear constraint $\{w_{ij} \leq y_i\}$ or the linear constraint $\{w_{ij} \leq d_j\}$, whichever is tighter. The extreme points of $P({\tilde{F}}, {\tilde{C}}, E, d, L)$ possess some very important properties, which resemble the properties of the extreme point solutions to the auxiliary program for the minimum makespan rounding algorithm of [@LST90]. \[extpoint\] Let $w$ be an extreme point of $P({\tilde{F}}, {\tilde{C}}, E, d, L)$, where $d_j \geq \gamma$ for all $j \in {\tilde{C}}$. One of the following must hold: 1. 
there exists $(i, j) \in E$ such that $w_{ij} = 0$, 2. there exists $(i, j) \in E$ such that $w_{ij} = y_i$, 3. there exists $(i, j) \in E$ such that $w_{ij} = d_j$, 4. there exists $i \in {\tilde{F}}$ such that $|\delta_E(i)| \leq 1$, 5. there exists $i \in {\tilde{F}}$ such that $|\delta_E(i)| = 2$ and $\sum_{j \in \delta_E(i)}w_{ij} \geq \gamma y_i$. Suppose that none of (a), (b), (c), or (d) hold. We will show that (e) must hold then. For all $(i, j) \in E$ we have $0 < w_{ij} < \min(y_i, d_j)$, and for every $i \in {\tilde{F}}$ we have $|\delta_E(i)| \geq 2$. Since $\sum_{i \in \delta_E(j)}w_{ij} = d_j$ for all $j \in {\tilde{C}}$ and each $w_{ij} < d_j$, we must also have $|\delta_E(j)| \geq 2$. As $w$ is an extreme point of $P({\tilde{F}}, {\tilde{C}}, E, d, L)$, there exist ${\tilde{F}}_* \subseteq {\tilde{F}}$ and ${\tilde{C}}_* \subseteq {\tilde{C}}$ such that $\sum_{i \in \delta_E(j)}w_{ij} = d_j$ for all $j \in {\tilde{C}}_*$, $\sum_{j \in \delta_E(i)}d(i, j)w_{ij} = L_i$ for all $i \in {\tilde{F}}_*$, $|{\tilde{F}}_*| + |{\tilde{C}}_*| = |E|$, and constraints corresponding to ${\tilde{F}}_*, {\tilde{C}}_*$ are linearly independent. Since $2|E| = 2|{\tilde{F}}_*| + 2|{\tilde{C}}_*| \leq \sum_{i \in {\tilde{F}}_*}|\delta_E(i)| + \sum_{j \in {\tilde{C}}_*}|\delta_E(j)| \leq 2|E|$, for all $i \in {\tilde{F}}_*$ we must have $|\delta_E(i)| = 2$, as well as $|\delta_E(j)| = 2$ for all $j \in {\tilde{C}}_*$. Therefore, the subgraph $G[{\tilde{F}}_*\cup {\tilde{C}}_*]$ of $G$ induced on ${\tilde{F}}_* \cup {\tilde{C}}_*$ is a bipartite union of disjoint cycles. Let $H$ be a cycle of $G[{\tilde{F}}_*\cup {\tilde{C}}_*]$, let $H_{{\tilde{F}}_*} := H\cap {\tilde{F}}_*$, $H_{{\tilde{C}}_*} := H \cap {\tilde{C}}_*$. Since for all $i \in H_{{\tilde{F}}_*}$ we have $|\delta_E(i)| = 2$, $\delta_E(i)\subseteq H \cap E$, and similarly, as $|\delta_E(j)| = 2$ for all $j \in H_{{\tilde{C}}_*}$, $\delta_E(j)\subseteq H \cap E$. 
Suppose that (e) does not hold; then for all $i \in H_{{\tilde{F}}_*}$ we have $\sum_{j \in \delta_E(i)}w_{ij} < \gamma y_i$. It follows that $$\sum_{i \in H_{{\tilde{F}}_*}}y_i> \frac{1}{\gamma}\sum_{i \in H_{{\tilde{F}}_*}}\sum_{j \in \delta_E(i)}w_{ij} = \frac{1}{\gamma}\sum_{(i, j) \in H\cap E}w_{ij} = \frac{1}{\gamma}\sum_{j \in H_{{\tilde{C}}_*}}\sum_{i \in \delta_E(j)}w_{ij} = \frac{1}{\gamma}\sum_{j \in H_{{\tilde{C}}_*}}d_j.$$ The last equality follows from $\sum_{i \in \delta_E(j)}w_{ij} = d_j$ for every $j \in {\tilde{C}}_*$. Since $d_j \geq \gamma$ for all $j \in {\tilde{C}}$ and $y_i \leq 1$, we have $d_j \geq \gamma y_i$ for all $(i, j)\in E$. Since $H$ is a cycle in a bipartite graph, it has even length, its vertices alternate between ${\tilde{F}}_*$ and ${\tilde{C}}_*$, and $|H_{{\tilde{F}}_*}| = |H_{{\tilde{C}}_*}|$. Then, we can split the vertices of $H$ into disjoint consecutive pairs $(i, j)$, so that $i \in H_{{\tilde{F}}_*}$, $j \in H_{{\tilde{C}}_*}$, $(i, j) \in H\cap E$, and apply $d_j \geq \gamma y_i$ for every pair. Therefore, $\sum_{j \in H_{{\tilde{C}}_*}}d_j \geq \gamma \sum_{i \in H_{{\tilde{F}}_*}}y_i$, which combined with the inequalities above leads to a contradiction. So, there must exist $i \in H_{{\tilde{F}}_*}$ such that $\sum_{j \in \delta_E(i)}w_{ij} \geq \gamma y_i$, implying (e). We transform $(x, y)$ into $(x', y')$ using an approach similar to that of [@LST90]. On every step $t \geq 1$ of the algorithm, we provide values of the parameters ${\tilde{F}}^t, {\tilde{C}}^t, E^t, d^t, L^t$ so that the polytope $P^t := P({\tilde{F}}^t, {\tilde{C}}^t, E^t, d^t, L^t)$ is nonempty and $d^t_j \geq \gamma$ for $j \in {\tilde{C}}^t$, and find its extreme point $w^t$. By \[extpoint\], one of the cases (a), (b), (c), (d) or (e) must occur for $w^t$. If (a), we set $x'_{ij} \leftarrow 0$, $E^{t + 1} \leftarrow E^t\setminus \{(i, j)\}$. 
If (b), we set $x'_{ij} \leftarrow y_i$, $d^{t + 1}_j \leftarrow d^t_j - y_i$, $L^{t + 1}_i \leftarrow L^t_i - d(i, j)w_{ij}^t$, $E^{t + 1} \leftarrow E^t\setminus \{(i, j)\}$. If (c), we set $x'_{ij} \leftarrow d^t_j$, $d^{t + 1}_j \leftarrow 0$, $L^{t + 1}_i \leftarrow L^t_i - d(i, j)w^t_{ij}$, $E^{t + 1} \leftarrow E^t \setminus \{(i, j)\}$. If (d) or (e), we set ${\tilde{F}}^{t + 1} \leftarrow {\tilde{F}}^t \setminus \{i\}$. After processing exactly one case (a), (b), (c), (d) or (e), we scan $j \in {\tilde{C}}^{t + 1}$, and if $d_j^{t + 1} < \gamma$ for some $j$, set ${\tilde{C}}^{t + 1} \leftarrow {\tilde{C}}^{t + 1} \setminus \{j\}$, $x'_{ij} \leftarrow 0$ for all $(i, j)$ such that $i \in \delta_{E^{t + 1}}(j)$, and then $E^{t + 1}\leftarrow E^{t + 1} \setminus \{(i, j) : i \in \delta_{E^{t + 1}}(j)\}$. If the change of ${\tilde{F}}^{t + 1}, {\tilde{C}}^{t + 1}, E^{t + 1}, d^{t + 1}$ or $L^{t + 1}$ is not mentioned for the current case, the values are as in step $t$; so even though we drop facility $i$ from ${\tilde{F}}^t$ in case (d) or (e), the edges $(i, j)$ for $j \in \delta_{E^t}(i)$ are still kept in $E^{t + 1}$. Having processed ${\tilde{C}}^{t + 1}$, if $E^{t + 1} \neq \varnothing$, we move to step $t + 1$ and consider $P^{t + 1}$. \[preprocalgo\] gives the full pseudocode, summarizing all the steps. 
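The case dispatch of a single step can also be sketched in code. This is our illustrative Python with hypothetical names; computing the extreme point $w$ itself requires an LP solver and is assumed given here.

```python
def process_extreme_point(w, E, Ft, d, y, dem, L, xp, gamma):
    """One step of the preprocessing loop (a sketch, not the paper's code).

    w is assumed to be an extreme point of the current polytope; the first
    applicable case (a)-(e) of the lemma is applied and the data is updated
    in place.  Finding w itself requires an LP solver and is not shown."""
    for (i, j) in sorted(E):
        if w[i, j] == 0:                     # case (a): freeze a zero edge
            xp[i, j] = 0; E.discard((i, j)); return 'a'
        if w[i, j] == y[i]:                  # case (b): edge saturated at y_i
            xp[i, j] = y[i]; dem[j] -= y[i]
            L[i] -= d[i, j] * w[i, j]; E.discard((i, j)); return 'b'
        if w[i, j] == dem[j]:                # case (c): j's demand exhausted
            xp[i, j] = dem[j]; dem[j] = 0
            L[i] -= d[i, j] * w[i, j]; E.discard((i, j)); return 'c'
    for i in sorted(Ft):
        nbrs = [j for (h, j) in E if h == i]
        if len(nbrs) <= 1:                   # case (d): facility of degree <= 1
            Ft.discard(i); return 'd'
        if len(nbrs) == 2 and sum(w[i, j] for j in nbrs) >= gamma * y[i]:
            Ft.discard(i); return 'e'        # case (e): degree 2, enough opening
    return None
```

After each step the full algorithm additionally discards every client with $d_j < \gamma$ and zeroes its remaining edges; that scan is omitted from this sketch.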
**while** $E \neq \varnothing$ **do**: find an extreme point $w$ of $P({\tilde{F}}, {\tilde{C}}, E, d, L)$; **case (a):** $x'_{ij} \leftarrow 0$, $E \leftarrow E\setminus \{(i, j)\}$; **case (b):** $x'_{ij} \leftarrow y_i$, $d_j \leftarrow d_j - y_i$, $L_i \leftarrow L_i - d(i, j)w_{ij}$, $E \leftarrow E \setminus \{(i, j)\}$; **case (c):** $x'_{ij} \leftarrow d_j$, $d_j \leftarrow 0$, $L_i \leftarrow L_i - d(i, j)w_{ij}$, $E\leftarrow E\setminus \{(i, j)\}$; **case (d) or (e):** ${\tilde{F}}\leftarrow {\tilde{F}}\setminus \{i\}$; then **for** every $j \in {\tilde{C}}$ with $d_j < \gamma$ **do**: ${\tilde{C}}\leftarrow {\tilde{C}}\setminus \{j\}$, and **for** $i \in \delta_E(j)$ **do** $x'_{ij} \leftarrow 0$, $E\leftarrow E\setminus \{(i, j)\}$; **return** $x'$, extended to $F \times C$ by adding zero entries. It is easy to see that if $P^t$ is nonempty and $d^t_j \geq \gamma$ for $j \in {\tilde{C}}^{t}$, the very same holds for $P^{t + 1}$ in the next step, unless $E^{t + 1} = \varnothing$. Indeed, we manually ensure that for all $j$ kept in ${\tilde{C}}^{t + 1}$ the condition $d^{t + 1}_j \geq \gamma$ must hold, and the restriction of $w^t$ to the set $E^{t + 1}\subseteq E^{t}$ is a feasible solution to $P^{t + 1}$, by construction of the algorithm. Moreover, if we take ${\tilde{F}}^1 = F$, ${\tilde{C}}^1 = C$, $E^1 = \{(i, j) \in F\times C : x_{ij} > 0\}$, $d^1_j = 1$ for $j \in C$ and $L^1_i = \mu T y_i$ for $i \in F$, then $d^1_j \geq \gamma$ and $P^1$ is nonempty, since there is a feasible solution $w_{ij} := x_{ij}$ for $(i, j) \in E^1$. We run \[preprocalgo\] with these initial values of ${\tilde{F}}$, ${\tilde{C}}$, $E$, $d$ and $L$ given as input, obtaining an assignment $x'$. By setting $y' := y$, we obtain a solution $(x', y')$. We claim that \[preprocalgo\] runs in polynomial time, and that the solution $(x', y')$ satisfies all requirements of \[preproc1\]. By \[extpoint\], on every step $t \geq 1$ one of the cases (a), (b), (c), (d) or (e) must occur for $w^t$, the extreme point of $P^t$. Then, either $|E^t|$, $|{\tilde{F}}^t|$ or $|{\tilde{C}}^t|$ is reduced by at least 1 after step $t$. 
So, since $|E^1| \leq |F||C|$, after at most $2|F||C|$ steps we will have $E^{t + 1} = \varnothing$ for some $1 \leq t \leq 2|F||C|$. Each step $t$ takes only polynomial time to perform, thus the total running time is also polynomial. Since $E^1 = \{(i, j) \in F\times C : x_{ij} > 0\}$, the only positive coordinates of $x'$ can be $(i, j)$ such that $x_{ij} > 0$, since $x_{ij} = 0 \iff (i, j) \notin E^1$, and \[preprocalgo\] sets $x'_{ij} = 0$ for such pairs at the very end. The constraint $\{w_{ij} \leq \min(y_i, d_j)\}$ of $P({\tilde{F}}, {\tilde{C}}, E, d, L)$ ensures that $x'_{ij} \leq y_i'$ for all $(i, j) \in F\times C$. If $x'_{ij} > 0$, then either $x'_{ij} = y_i'$ (case (b)) or $x'_{ij} = d^t_j$ for some step $t \geq 1$ (case (c)). Since $y_i' \leq 1$ and for all steps $t \geq 1$ we maintain $d_j^t \geq \gamma$ for all $j \in {\tilde{C}}$, in both cases we have $x'_{ij} \geq \gamma y_i'$. Next, if after processing cases for $w^t$ during some step $t \geq 1$ we end up with $d^{t + 1}_j < \gamma$, client $j$ gets discarded from ${\tilde{C}}^{t + 1}$. Since $d_j^1 = 1$ initially, by the end of step $t$ we must have assigned at least a $1 - \gamma$ fraction of $j$’s demand before discarding $j$, in order to make $d^{t + 1}_j < \gamma$. Then, after \[preprocalgo\] finishes, for all $j \in C$ we have $1 \geq \sum_{i \in F}x'_{ij} \geq 1 - \gamma$. Finally, fix $i \in F$. Observe that if $i \in {\tilde{F}}^t$ in the beginning of step $t \geq 1$, then $$\sum_{j \in \delta_{E^t}(i)}d(i, j)w^t_{ij} \leq L_i^t = L^1_i - \sum_{j \in C\setminus {\tilde{C}}^t}d(i, j)x'_{ij} \implies \sum_{j \in C\setminus {\tilde{C}}^t}d(i, j)x'_{ij} + \sum_{j \in \delta_{E^t}(i)}d(i, j)w^t_{ij} \leq \mu T y_i,$$ by feasibility of $w^t$ for the polytope $P^t$. Suppose that after step $t$ facility $i$ gets removed from ${\tilde{F}}^t$, so $i\notin {\tilde{F}}^{t + 1}$. If case (d) occurred and $|\delta_{E^t}(i)| \leq 1$, let $j \in \delta_{E^t}(i)$ be the single client served by facility $i$. 
After removing $i$ from ${\tilde{F}}^t$, the constraint $\{\sum_{j \in \delta_E(i)}d(i, j)w_{ij} \leq L_i\}$ is not present in $P^{t + 1}$ and all future-step polytopes. So, for any step $r \geq t + 1$, the load we may get after obtaining $w^{r}$ and determining the value of $x'_{ij}$ is at most $d(i, j)w^{r}_{ij} \leq d(i, j)y_i \leq Ty_i$ (as $d(i, j) \leq T$ for all $x_{ij} > 0$). The total load of facility $i$ becomes $L(i, x') \leq (\mu + 1)Ty_i$. If case (e) occurred for this facility $i$, then $|\delta_{E^t}(i)| = 2$ and $\sum_{j \in \delta_{E^t}(i)}w^t_{ij} \geq \gamma y_i$. Let $j'$ and $j''$ be the two clients belonging to $\delta_{E^t}(i)$. Their contribution to facility $i$’s load on step $t$ is exactly $d(i, j')w_{ij'}^t + d(i, j'')w_{ij''}^t$, which is at most $L_i^t$. After removing $i$ from ${\tilde{F}}^t$, the constraint $\{\sum_{j \in \delta_E(i)}d(i, j)w_{ij} \leq L_i\}$ is not present in $P^{t + 1}$ and all future-step polytopes. So, for any step $r \geq t + 1$, the load we may get after obtaining $w^{r}$ and determining the values of both $x'_{ij'}$ and $x'_{ij''}$ is at most $d(i, j')w_{ij'}^r + d(i, j'')w_{ij''}^r \leq d(i, j')y_i + d(i, j'')y_i$. Hence, the *additional* load facility $i$ gained since the end of step $t$ is at most $$\begin{gathered} (d(i, j')y_i + d(i, j'')y_i) - (d(i, j')w_{ij'}^t + d(i, j'')w_{ij''}^t) =\\ = d(i, j')(y_i - w_{ij'}^t) + d(i, j'')(y_i - w_{ij''}^t) \leq T(2y_i - (w_{ij'}^t + w_{ij''}^t))\leq \\ \leq T(2y_i - \gamma y_i) = (2 - \gamma)Ty_i.\end{gathered}$$ Therefore, the total load of facility $i$ becomes $L(i, x') \leq (\mu + 2 - \gamma)Ty_i$. As a result, the solution $(x', y')$ and the preprocessing algorithm (\[preprocalgo\]) indeed satisfy all the claimed properties of \[preproc1\], thus finishing the proof. Hard instances {#sec:appb} ============== We first present a hard instance for the ${\mathrm{MLkSC}\xspace}$ problem. Let $R, M$ be integers, $R \ll M$. Let $k = 2R - 1$, $|F| = 2R$, $|C| = (M + R)R$. 
$F$ and $C$ are partitioned into $R$ disjoint groups, each having exactly $2$ facilities and exactly $M + R$ clients. For $i, h \in F$, $d(i, h) = 1$ if $i, h$ are in the same group, otherwise $d(i, h) = R$. In every group, one facility has $M$ collocated clients (call it the $M$-facility), the other has $R$ collocated clients (the $R$-facility). The instance is illustrated in \[fig:igap1\]. [Figure \[fig:igap1\]: one of the $R$ groups — the $M$-facility with its $M$ collocated clients at distance $1$ from the $R$-facility with its $R$ collocated clients.] There is a feasible fractional solution to ${\operatorname{MLkSC-LP}}$ for this instance with $T = 1$. Open every $M$-facility fully, and assign all of its collocated clients to it. Next, open every $R$-facility to $1 - 1/R$, and let it serve a $(1 - 1/R)$-fraction of its collocated clients’ demand. The remaining $1/R$ fraction of these clients’ demand will be served by the $M$-facility of the same group. It is easy to see that the load of every $R$-facility is $0$, the load of every $M$-facility is $R\cdot 1/R \cdot 1 = 1$, and the opening is exactly $R\cdot (1 + 1 - 1/R) = 2R - 1 = k$. 
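The bookkeeping of this fractional solution can be verified exactly (our check, not part of the paper):

```python
from fractions import Fraction

def mlksc_fractional(R):
    """Fractional solution on the hard instance: each M-facility fully open,
    each R-facility open to 1 - 1/R, and each of its R collocated clients
    sending a 1/R fraction of demand over distance 1 to the M-facility."""
    total_opening = R * (1 + (1 - Fraction(1, R)))
    load_M = R * Fraction(1, R) * 1   # R clients x 1/R demand x distance 1
    load_R = 0                        # collocated service travels distance 0
    return total_opening, load_M, load_R
```

The opening sums to exactly $k = 2R - 1$ and the maximal fractional load is $T = 1$, as claimed.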
Consider any integral solution to this instance of ${\mathrm{MLkSC}\xspace}$. If it assigns some client to a facility from a different group, the maximal load will be at least $R$. Suppose that all clients are assigned to facilities only from the same group. Since $k = 2R - 1$, there will be at least one group with at most one facility opened; take this group. If the $M$-facility is opened, both its own clients and the clients of the $R$-facility must be assigned to it fully, resulting in its load $R \cdot 1 \cdot 1 = R$. Similarly, if the $R$-facility is opened, the maximum load will be at least $M\gg R$. Hence, the load of any integral star cover of size $k$ is at least $R$. Furthermore, even if we allow opening $(1 + {\varepsilon})k$ facilities for ${\varepsilon}= 1/(2R)$, since $$(1 + {\varepsilon})k = \left(1 + \frac{1}{2R}\right)(2R - 1) = 2R - \frac{1}{2R} < 2R,$$ there will still be a group with at most one facility opened, resulting in maximum load at least $R = T/(2{\varepsilon})$, where $T = 1$ is the maximal fractional load. It follows that if $T^*$ is an optimal load to ${\operatorname{MLkSC-LP}}$, any integral $(1 + {\varepsilon})k$ star cover of $(F, C)$ has load at least $\Omega(1/{\varepsilon})T^*$. Now, we move to a hard instance for ${\mathrm{MSSC}\xspace}$. For an integer $N$, let $|F| = N$ and $|C| = N + 1$; the load bound $T \geq 1$ is arbitrary. Both $F = \{i_1, \ldots, i_N\}$ and $C = \{J, j_1,\ldots, j_N\}$ are vertex sets of a bipartite graph, and the metric $d$ is the shortest-path metric. For every $1 \leq r \leq N$ we have an edge $(i_r, j_r)$ of length $d(i_r, j_r) = (1 - 1/N)T$. Also, every facility $i_r$ for $1 \leq r \leq N$ is connected to a “central” client $J$ by an edge of length $d(i_r, J) = T$. The instance is illustrated in \[fig:igap2\]. 
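The counting argument for the relaxed opening budget above, $(1 + {\varepsilon})k = 2R - \tfrac{1}{2R} < 2R$ at ${\varepsilon}= 1/(2R)$, can be confirmed with exact rationals (our verification code):

```python
from fractions import Fraction

def relaxed_budget(R):
    """(1 + eps) * k for eps = 1/(2R) and k = 2R - 1, computed exactly."""
    eps = Fraction(1, 2 * R)
    return (1 + eps) * (2 * R - 1)
```

Since the budget stays strictly below $2R$, some group still has at most one open facility, no matter how large $R$ is.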
[Figure \[fig:igap2\]: the bipartite instance — each facility $i_r$ is joined to its client $j_r$ by an edge of length $(1 - 1/N)T$ and to the central client $J$ by an edge of length $T$.] It is easy to see that in any integral solution to ${\operatorname{MSSC-LP}}$ every client $j_r$ for $1 \leq r \leq N$ can be served only by facility $i_r$. Furthermore, client $J$ must also be served fully, so it must be assigned to one of $i \in F$. Therefore, even if we open all facilities in $F$ fully, the facility $i \in F$ that gets $J$ assigned to it will have load at least $(2 - 1/N)T$. This means that there is no feasible integral solution to ${\operatorname{MSSC-LP}}$, and any integral solution violates the maximum load constraint by a factor of at least $(2 - 1/N)$. 
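The unavoidable load of the facility serving $J$ can be checked directly (a small verification of ours):

```python
from fractions import Fraction

def load_serving_J(N, T):
    """Load of the facility i_r that serves J integrally: its own client j_r
    at distance (1 - 1/N) * T plus the full unit demand of J at distance T."""
    return (1 - Fraction(1, N)) * T + T
```

As $N$ grows, this load approaches $2T$, so the factor-$(2 - 1/N)$ violation of the load bound is tight for this family of instances.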
On the other hand, there exists a feasible *fractional* solution to ${\operatorname{MSSC-LP}}$ for this instance. We fully open all $i_r$, $1 \leq r \leq N$, and assign $j_r$ fully to $i_r$. Also, client $J$ gets served by all $i \in F$ at an equal fraction of $1/N$. In this solution, the load of every facility $i \in F$ is exactly $T$. [^1]: Part of the work was done while the author was a Summer@EPFL intern in the School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, and a full-time undergraduate student in the Faculty of Computer Science, Higher School of Economics, Moscow.
--- abstract: | The out-of-equilibrium dynamics of interaction quenched finite ultracold bosonic ensembles in periodically driven one-dimensional optical lattices is investigated. It is shown that periodic driving enforces the bosons in the outer wells of the finite lattice to exhibit out-of-phase dipole-like modes, while in the central well the atomic cloud experiences a local breathing mode. The dynamical behavior is investigated with varying driving frequency, revealing a resonant-like behavior of the intra-well dynamics. An interaction quench in the periodically driven lattice gives rise to admixtures of different excitations in the outer wells, an enhanced breathing in the center and an amplification of the tunneling dynamics. We then observe multiple resonances between the inter- and intra-well dynamics at different quench amplitudes, with the position of the resonances being tunable via the driving frequency. Our results pave the way for future investigations on the use of combined driving protocols in order to excite different inter- and intra-well modes and to subsequently control them.\ Keywords: non-equilibrium dynamics; periodically driven lattices; interaction quench; excited modes; tunneling dynamics; dipole mode; breathing mode. author: - 'S.I. Mistakidis' - 'P. Schmelcher' title: | Mode coupling of interaction quenched ultracold few-boson\ ensembles in periodically driven lattices --- Introduction ============ Ultracold atoms in optical lattices offer an ideal platform for simulating certain problems of condensed matter physics and constitute many-body systems exhibiting a diversity of physical phenomena. In particular, the understanding of the non-equilibrium dynamics of strongly correlated many-body systems in optical lattices is currently one of the most challenging problems for both theory and experiment. 
This dynamics is typically triggered by an external periodic driving [@Goldman; @Goldman1; @Morsch1; @Bloch] or an instantaneous change (quench) of a Hamiltonian parameter [@Polkovnikov]. Remarkable dynamical phenomena employing a periodic driving [@Goldman; @Goldman1] of the optical lattice include Bloch oscillations [@Dahan; @Morsch; @Hartmann], the realization of the superfluid to Mott insulator phase transition [@Eckardt], topological states of matter [@Zheng], artificial gauge fields [@Struck], the realization of ferromagnetic domains [@Parker; @Choudhury] and even applications to quantum computation [@Schneider1]. On the other hand, quench dynamics enables us to explore, among others, the light-cone effect in the spreading of correlations [@Cheneau; @Natu], the Kibble-Zurek mechanism [@Zurek; @Chen1] or the question of thermalization [@Rigol; @Altman]. Driving or quenches can also be used in order to generate energetically low-lying collective modes, such as the dipole [@Kohn; @Bonitz] or the breathing mode [@Abraham; @Bauch1; @Abraham1; @Schmitz; @Peotta]. In general, a sudden displacement or a periodic shaking of the external trap induces a dipole oscillation of the atomic cloud, while a quench of the frequency of the trap excites a breathing mode of the cloud. These modes constitute a main probe both for theoretical investigations, to understand and interpret the non-equilibrium dynamics, and for experiments, as they can be used in order to measure key quantities of trapped many-body systems [@Abraham]. Recently, increasing effort has been devoted to controlling the atomic motion in optical lattices by subjecting them to a time-periodic external driving [@Lignier; @Sias; @Haller; @Chen] and investigating the optimal driving protocol [@Rosi; @Brif; @Brif1]. In this direction, it is important to carefully explore and design the relevant driving protocol to transfer the energy to the desired final degrees of freedom. 
To trigger or even control a certain type of (collective) mode of the dynamics, the techniques widely used in the literature are either a periodic driving of the lattice potential, e.g. lattice shaking, or a quench of a system parameter, e.g. a lattice amplitude quench or an interaction quench. In the former case a tunable local dipole mode and a resonant intra-well dynamics were recently explored by shaking an optical lattice [@Mistakidis2]. On the other hand, in the latter case it has been shown [@Mistakidis] that a sudden increase of the inter-particle repulsion in a non-driven lattice induces a rich inter-well as well as intra-well dynamics which can be coupled and consequently mixed for certain quench amplitudes. However, for decreasing repulsive forces [@Mistakidis1] the accessible inter-well tunneling channels are much fewer compared to the excited intra-well modes, and in particular no resonant dynamics can be observed. From the above analysis it becomes evident that a crucial ingredient for the design and further control of the dynamics is the choice of the driving protocol of the system: by using different driving schemes, different types of excited modes are induced, i.e. different energetic channels can be triggered. In this direction, an intriguing question is how a combination of periodic driving and interaction quenches can be used to steer the dynamics of the system and, as a consequence, the coupling of the inter-well and intra-well modes. Such an investigation will, among others, permit us to gain a deeper understanding of the underlying microscopic mechanisms, and will allow us to activate certain energy channels by using specific driving protocols for the control of the different processes. In the spirit of the above-posed question we investigate in the present work the quantum dynamics of interaction quenched few-boson ensembles trapped in periodically driven finite optical lattices. 
Concerning the periodic driving, a vibration of the optical lattice is employed. This scheme, in contrast to shaking, induces out-of-phase dipole modes among the outer wells and a local breathing mode in the central well of the finite lattice. We cover the dynamics of the periodically driven lattice with varying driving frequency in the complete range from adiabatic to high frequency driving. In particular, for the intermediate driving frequency regime, which is intractable by current state-of-the-art analytical methods [@Goldman; @Goldman1], we observe a resonant-like behavior of the intra-well dynamics. This resonance is accompanied by a rich excitation spectrum and an enhanced inter-well tunneling as compared to adiabatic or high frequency driving, and it is mainly of single-particle character. Indeed, it survives upon increasing the interaction, acquiring faint additional features, the most remarkable being the co-tunneling of an atom pair [@Chen; @Folling]. To induce a correlated many-body dynamics we employ an interaction quench on top of the driven lattice, thus opening energetically higher inter-well and intra-well channels. As a consequence the inter-well tunneling is amplified even for adiabatic driving, and admixtures of excitations possessing breathing-like and dipole-like components are generated. Remarkably enough, as a function of the quench amplitude, the system experiences multiple resonances between the inter- and intra-well dynamics. This observation indicates the high degree of controllability of the system, especially for the excited modes, under such a combination of driving protocols, and it is arguably one of our central results. To the best of our knowledge, this multifold mode coupling behavior unraveled with a composite driving protocol has never been reported before. Moreover, the position of the above mentioned resonances is tunable via the driving frequency, allowing for further control of the mode coupling in optical lattices. 
Finally, the realization of intensified loss of coherence caused either by the resonant driving or by a quench on top of the driving is an additional indicator of the observed phenomena. To obtain a comprehensive understanding of the microscopic properties of the strongly driven and interacting system, we focus on the few-body dynamics in small lattices (specifically, four bosons in a triple well setup). However, we provide strong evidence that our findings apply equally to larger lattice systems and particle numbers. All calculations to solve the underlying many-body Schrödinger equation are performed by employing the Multi-Configuration Time-Dependent Hartree method for Bosons (MCTDHB) [@Alon; @Alon1], which is especially designed to treat the out-of-equilibrium quantum dynamics of interacting bosons under time-dependent modulations. This work is organized as follows. In Sec. II we explain our setup and introduce the multi-band expansion and the basic observables that we shall use in order to interpret the dynamics. Sec. III presents the effects resulting from an interaction quench of a driven triple well for filling factors larger than unity. Sec. IV presents the dynamics for filling factors smaller than unity. We summarize our findings and give an outlook in Sec. V. In Appendix A the non-equilibrium dynamics of a bosonic cloud in a driven harmonic trap subject to a simultaneous interaction quench is briefly outlined. Appendix B briefly comments on the resonant response of the driven lattice and finally Appendix C describes our computational method. Setup and analysis tools ======================== In the present section we shall briefly report on our theoretical framework. Firstly, we introduce the protocol of the driven optical lattice and the many-body Hamiltonian. Secondly, the wavefunction representation in terms of a multiband expansion and some basic observables for the understanding of the inter- and intra-well modes of the dynamics are introduced. 
Setup and Hamiltonian --------------------- To model a lattice vibration, with amplitude $\delta$ and angular frequency $\omega_{D}=2\pi f_{\rm{D}}$, a spatio-temporal sinusoidal modulation is used to generate a lattice potential of the form $$\label{eq:1}{V_{br}}({x};t) = {V_0}{\sin ^2}[k_x(1+\delta \sin(\omega_{D}t))x],$$ with lattice depth ${V_0}$ and wave-vector $k_x= \frac{\pi }{l}$, where $l$ denotes the distance between successive potential minima. Such a potential can be realized e.g. via acousto-optical modulators [@Parker], which induce a frequency difference among counterpropagating laser beams. The Hamiltonian of $N$ identical bosons of mass $M$ following an interaction quench protocol upon the driven one-dimensional lattice reads $$\label{eq:2}H(x;t) = \sum\limits_{i = 1}^N \left[ \frac{{{p_i ^2}}}{{2M}} + {V_{br}}({x_i};t)\right] + g_{1D}^{(f)}\sum\limits_{i < j} {\delta({x_i} - {x_j})},$$ where $g_{1D}^{(f)}=\delta{g}+g_{1D}^{(in)}$, with $g_{1D}^{(in)}$, $g_{1D}^{(f)}$ being the initial and final interaction strengths, respectively, and $\delta{g}$ denoting the corresponding perturbation. The short-range interaction potential between particles located at positions ${x_i}$ is modeled by a Dirac delta function. The interaction is well described by s-wave scattering and the effective 1D coupling strength [@Olshanii] becomes ${g_{1D}} = \frac{{2{\hbar ^2}{a_0}}}{{Ma_ \bot ^2}}{\left( {1 - \frac{{\left| {\zeta (1/2)} \right|{a_0}}}{{\sqrt 2 {a_ \bot }}}} \right)^{ - 1}}$. The transversal length scale is ${a_ \bot } = \sqrt {\frac{\hbar }{{M{\omega _ \bot }}}}$, with ${{\omega _ \bot }}$ the frequency of the confinement, while ${a_0}$ denotes the 3D s-wave scattering length. The interaction strength can be tuned either via ${a_0}$ with the aid of Feshbach resonances [@Kohler; @Chin], or via the transversal confinement frequency ${\omega _ \bot }$ [@Kim; @Giannakeas; @Giannakeas1]. 
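As a concrete illustration, the vibrated lattice potential of Eq. (1) and the Olshanii effective 1D coupling can be sketched numerically. The parameter values follow the text (recoil units with $\hbar=M=k_x=1$, $V_0=10\,E_R$, $\delta=0.03$); the function names are our own.

```python
import numpy as np

def v_br(x, t, V0=10.0, k_x=1.0, delta=0.03, omega_D=0.75):
    """Vibrated lattice potential V0*sin^2[k_x*(1 + delta*sin(omega_D*t))*x], Eq. (1)."""
    return V0 * np.sin(k_x * (1.0 + delta * np.sin(omega_D * t)) * x) ** 2

def g_1d(a0, a_perp, hbar=1.0, M=1.0):
    """Olshanii effective 1D coupling from the 3D scattering length a0 and
    the transverse oscillator length a_perp."""
    abs_zeta_half = 1.4603545088  # |zeta(1/2)|
    correction = 1.0 - abs_zeta_half * a0 / (np.sqrt(2.0) * a_perp)
    return 2.0 * hbar**2 * a0 / (M * a_perp**2) / correction
```

At $t=0$ the modulation factor is unity, so the potential reduces to the static lattice; increasing $a_0$ towards the confinement-induced resonance enhances $g_{1D}$.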
In the following, for universality, the Hamiltonian (2) is rescaled in units of the recoil energy ${E_{\rm{R}}} = \frac{{{\hbar ^2k_x^2}}}{{2M}}$. Then, the corresponding length, time and frequency scales are given in units of ${k_x^{ - 1}}$, $\omega_{\rm{R}}^{-1}=\hbar E_{\rm{R}}^{ - 1}$ and $\omega_{\rm{R}}$, respectively. For our simulations we have used a sufficiently large lattice depth of the order of $V_{0}=10.0E_{R}$, such that each well includes three localized single-particle Wannier states. The confinement of the bosons in the $m$-well system is imposed by the use of hard-wall boundary conditions at the appropriate positions $x_{\sigma} = \pm \frac{{m\pi }}{{2k_x}}$. Finally, for computational convenience we shall set $\hbar = M = k_x = 1$ and therefore all quantities below are given in dimensionless units. Wavefunction representation and basic observables ------------------------------------------------- To understand the microscopic properties and analyze the dynamics, the notion of non-interacting multiband Wannier number states is employed. The presently used lattice potential is deep enough that the Wannier states of different wells have a very small overlap for not too high excitation energies. In the case of a periodically driven potential the above description remains valid if the driving amplitude is small enough in comparison to the lattice constant $l$, i.e. $\delta\ll l$, such that each localized Wannier function is assigned to a certain well and the respective band mixing is fairly small. For $\delta \gg l$ the use of a time-dependent Wannier basis is more adequate. 
Summarizing, for a system with $N$ bosons, $m$ wells and $j$ localized single particle states [@Mistakidis; @Mistakidis1] the expansion of the many-body bosonic wavefunction reads $$\label{eq:3}\left| \Psi \right\rangle = \sum\limits_{\{N_{i}\},\{I_i\}} {{C_{\{N_i\};\{I_i\}}}{{\left| {{N_1^{(I_1)}},{N_2^{(I_2)}},...,{N_m^{(I_m)}}} \right\rangle }}},$$ where ${{\left| {{N_1^{(I_1)}},{N_2^{(I_2)}},...,{N_m}^{(I_m)}} \right\rangle}}$ is the multiband Wannier number state, the element ${{N_i}^{(I_i)}}=\ket{n_i^{(1)}}\otimes\ket{n_i^{(2)}}\otimes....\otimes\ket{n_i^{(j)}}$ encodes the band-resolved occupation of the $i$-th well, and $I_i$ indexes the corresponding energetic excitation order. In particular, $\ket{n_i^{(k)}}$ refers to the number of bosons which reside at the $i$-th well and $k$-th band, satisfying the closed subspace constraint $\sum_{i=1}^m\sum_{k=1}^j n_i^{(k)} = N$. For instance, in a setup with $N=4$ bosons confined in a triple well, i.e. $m=3$, which includes $j=3$ single particle states, the state $\ket{1^{(0)} \otimes 1^{(1)},1^{(0)},1^{(0)}}$ indicates that in every well one boson occupies the zeroth band, but in the left well there is one extra boson localized in the first excited band. For this setup it is also important to notice that one can realize four different energetic classes of number states, namely the quadruple mode $\{ {\left| {4^{(I_1)},0^{(I_2)},0^{(I_3)}} \right\rangle}+\circlearrowright\}$ (Q), the triple mode $\{ {\left| {3^{(I_1)},1^{(I_2)},0^{(I_3)}} \right\rangle}+\circlearrowright\}$ (T), the double pair mode $\{ {\left| {2^{(I_1)},2^{(I_2)},0^{(I_3)}} \right\rangle}+\circlearrowright\}$ (DP), and the single pair mode $\{ {\left| {2^{(I_1)},1^{(I_2)},1^{(I_3)}}\right\rangle}+\circlearrowright\}$ (SP), where $\circlearrowright$ stands for all corresponding permutations. 
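A minimal sketch of this bookkeeping, enumerating the lowest-band spatial configurations for $N=4$, $m=3$ and classifying them into the Q, T, DP and SP modes (the function names are ours):

```python
from itertools import product

def spatial_configs(N=4, m=3):
    """All distributions of N bosons over m wells (band indices suppressed)."""
    return [c for c in product(range(N + 1), repeat=m) if sum(c) == N]

def mode_class(config):
    """Classify a configuration by its sorted well occupations: quadruple (Q),
    triple (T), double pair (DP) or single pair (SP) mode."""
    pattern = tuple(sorted(config, reverse=True))
    return {(4, 0, 0): "Q", (3, 1, 0): "T", (2, 2, 0): "DP", (2, 1, 1): "SP"}[pattern]
```

For instance, `mode_class((1, 2, 1))` returns `"SP"`, and the 15 configurations split into 3 Q, 6 T, 3 DP and 3 SP permutations.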
It is important to note that, for later convenience, we consider only the corresponding subclass with isoenergetic states and not all members which would also include energetically unequal number states, e.g. for the single pair mode $\{ {\left| {2^{(I_1)},1^{(I_2)},1^{(I_3)}} \right\rangle}, {\left| {1^{(I_1)},2^{(I_2)},1^{(I_3)}} \right\rangle}, {\left| {1^{(I_1)},1^{(I_2)},2^{(I_3)}} \right\rangle}\}$. Also, in the present consideration, for a given set of excitation indices $\textbf{I}=(I_1,I_2,I_3)$ the number states of such a class have similar on-site energies and will contribute significantly to the same eigenstates. Indexing each such class by $\alpha$, we adopt the more compact notation ${\left|{q} \right\rangle _{\alpha;\textbf{I}}}$ for the characterization of the eigenstates in terms of number states, where the index $q$ refers to the spatial occupation. For instance, $\{{\left|{q} \right\rangle _{3;\textbf{I}}}\}$ with $\textbf{I}=(1,0,0)$ represents the eigenstates which are dominated by the set of triple mode states $\{ {\left| {3^{(1)},1^{(0)},0^{(0)}} \right\rangle}$, ${\left| {0^{(0)},3^{(1)},1^{(0)}} \right\rangle}$, ${\left| {1^{(0)},0^{(0)},3^{(1)}} \right\rangle}$, ${\left| {1^{(0)},3^{(1)},0^{(0)}} \right\rangle}$, ${\left| {0^{(0)},1^{(0)},3^{(1)}} \right\rangle}$, ${\left| {3^{(1)},0^{(0)},1^{(0)}} \right\rangle}\}$, and the index $q$ runs from 1 to 6. Below, a few basic observables which refer to the inter- and intra-well generated modes are introduced and their expansion in terms of the multiband number state basis is given. Note that from here on we shall denote by $\left| {\Psi (0)} \right\rangle = \sum\limits_{q ;\alpha ;\textbf{I}} {C_{\alpha ;\textbf{I}} ^q{{\left| q \right\rangle }_{\alpha ;\textbf{I}}}}$ the initial wavefunction in terms of the eigenstates ${{{\left| q \right\rangle }_{\alpha ;\textbf{I}}}}$ of the final Hamiltonian. 
A time resolved measure for the impact of the external driving on the system is provided via the fidelity $F_{\{\lambda_i\}}(t)={\left| {\left\langle {\Psi (0)} \right|\left. {\Psi_{\{\lambda_i\}} (t)} \right\rangle } \right|^2}$, being the overlap between the time evolved and the initial (ground) state. Note the dependence of the fidelity on the set of parameters $\{\lambda_{i}\}$, e.g. the driving frequency $\omega_{D}$, the interaction strength $g$, the particle number $N$ etc. The expansion of the fidelity reads $$\begin{split} \label{eq:9}F_{\{\lambda_i\}}(t) = \sum\limits_{q_{1};\alpha;\textbf{I}} {{\left| {{C_{\alpha;\textbf{I}}^{q_{1}}}} \right|}^4} + \sum\limits_{q_{1},q_{2};\alpha ,\beta;\textbf{I}} {{\left| {{C_{\alpha;\textbf{I}}^{q_{1}}}} \right|}^2}\\\times{{\left| {{C_{\beta;\textbf{I}}^{q_{2}}}} \right|}^2}\cos [({\epsilon _{\alpha;\textbf{I}}^{q_{1}}} - {\epsilon _{\beta;\textbf{I}}^{q_{2}}})t]. \end{split}$$ The second term on the right-hand-side of the above expression contains the energy difference between two distinct number states and therefore serves as a measure of the tunneling process. The indices $\alpha$, $\beta $ indicate a particular number state group [@Mistakidis], ${q _i}$ is the intrinsic index within each group, $\textbf{I}$ corresponds to the respective energetic level and $\epsilon$ refers to the corresponding on-site energy of a particular number state and energetic level. For the investigation of the intra-well dynamics it is appropriate to employ a local density analysis. To measure the instantaneous spreading of the cloud in the $i$-th well we define the operator of the second moment $\sigma _i^2(t) = \left\langle \Psi \right|{\left( {x - R_{CM}^{(i)}} \right)^2}\left|\Psi\right\rangle $ [@Ronzheimer]. 
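A compact numerical check of the fidelity expansion: expanding the initial state in eigenstates of the final Hamiltonian, $F(t)$ reduces to a diagonal term plus cosines of eigenenergy differences, exactly the structure of Eq. (5). The sketch below (our own notation) evaluates the overlap directly.

```python
import numpy as np

def fidelity(t, coeffs, energies):
    """F(t) = |<Psi(0)|Psi(t)>|^2 for Psi(t) = sum_q C_q exp(-i eps_q t)|q>;
    equivalent to the |C|^4 diagonal plus cosine cross terms of Eq. (5)."""
    coeffs = np.asarray(coeffs, dtype=complex)
    phases = np.exp(-1j * np.asarray(energies, dtype=float) * t)
    return np.abs(np.sum(np.abs(coeffs) ** 2 * phases)) ** 2
```

For a normalized two-eigenstate superposition with unit energy splitting, $F(t)=(1+\cos t)/2$, vanishing at $t=\pi$.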
Here $R_{CM}^{(i)} =\int_{d_i}^{d'_{i}}dx \left( {x - x_0^{(i)}} \right){\rho _i}(x)/\int_{d_i}^{d'_{i}}dx{\rho _i}(x)$ refers to the coordinate of the center-of-mass [@Klaiman1; @Klaiman2], ${x_0^{(i)}}$ denotes the central point of the $i$-th well under investigation, ${d_i}$, ${d'_i}$ correspond to the instantaneous limits of the wells, whereas ${\rho _{i}}(x)$ is the respective single particle density. Then, the expansion of the second moment for the middle well in terms of the eigenstates of the final Hamiltonian reads $$\begin{split} \label{eq:10}\begin{array}{l} \sigma _M^2(t) = \sum\limits_{\alpha ;q_{1};\textbf{I}} {{{\left| {C_{a;\textbf{I}}^{q_{1}}} \right|}^2}{}_{\alpha ;\textbf{I}}\left\langle q_{1} \right|} {\left( {x - R_{CM}^{(i)}} \right)^2}{\left| q_{1} \right\rangle _{\alpha ;\textbf{I}}} \\ ~~~~~~~~+ 2\sum\limits_{q_{1} \ne q_{2}} {{\mathop{\rm Re}\nolimits} \left( {C_{\beta ;\textbf{I}}^{*q_{1}}C_{\alpha ;\textbf{I}}^{q_{2}}} \right){}_{\beta ;\textbf{I}}\left\langle q_{1} \right|} {\left( {x - R_{CM}^{(i)}} \right)^2}{\left| q_{2} \right\rangle _{\alpha ;\textbf{I}}}\\~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\times\cos \left( {\omega _{\beta ;\textbf{I}}^{q_{1}} - \omega _{\alpha ;\textbf{I}}^{q_{2}}} \right)t. \end{array} \end{split}$$ Finally, as a measure of the dipole motion the intra-well asymmetry $\Delta {\rho _a}(t) = {\rho _{a,1}}(t) - {\rho _{a,2}}(t)$ is introduced. Here, a particular well $a$ (in a triple well $a = L,M,R$ stands for the left, middle and right well respectively) is divided from the center point into two equal sections with ${\rho _{a,1}}(t)$ and ${\rho _{a,2}}(t)$ being the respective integrated densities of the left and right parts during the evolution. 
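On a discretized density, the second moment and the intra-well asymmetry defined above can be estimated as follows. This is a sketch under our own conventions: a uniform grid, static well boundaries $d$, $d'$ passed as inputs, and the center of mass computed directly from the density in the well.

```python
import numpy as np

def well_moments(x, rho, d, dp):
    """Second moment sigma^2 and left-right asymmetry of the density in the
    well [d, dp); x is a uniform grid, rho the sampled one-body density."""
    dx = x[1] - x[0]
    mask = (x >= d) & (x < dp)
    xs, rs = x[mask], rho[mask]
    norm = rs.sum() * dx
    r_cm = (xs * rs).sum() * dx / norm                  # center of mass in the well
    sigma2 = ((xs - r_cm) ** 2 * rs).sum() * dx / norm  # spreading of the cloud
    mid = 0.5 * (d + dp)                                # split the well in two halves
    asym = (rs[xs < mid].sum() - rs[xs >= mid].sum()) * dx
    return sigma2, asym
```

A Gaussian centered in the well yields a second moment equal to its variance and a vanishing asymmetry; a displaced cloud produces a nonzero $\Delta\rho_a$.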
The expectation value of the asymmetry operator is expressed as $$\begin{split} \label{eq:13}\begin{array}{l}\left\langle \Psi \right|\Delta \rho(t) \left| \Psi \right\rangle = \sum\limits_{q_{1};\alpha;\textbf{I}} {{{\left| {{C_{\alpha;\textbf{I}}^{q_{1}}}} \right|}^2}{}_{\textbf{I};\alpha}\left\langle q_{1} \right|} \Delta\rho \left| q_{1} \right\rangle_{\alpha;\textbf{I}} \\ ~~~~~~~~+ 2\sum\limits_{q_{1} \ne q_{2} } {{\mathop{\rm Re}\nolimits} \left( {C_{\alpha;\textbf{I} }^{*q_{1}}{C_{\beta;\textbf{I}}^{q_{2}}}} \right){}_{\textbf{I};\alpha}\left\langle q_{1} \right|} \Delta \rho \left| q_{2} \right\rangle_{\beta;\textbf{I}}\\~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\times \cos \left[ {\left( {{\omega _{\alpha;\textbf{I}}^{q_{1}} } - {\omega _{\beta;\textbf{I}}^{q_{2}}}} \right)t} \right]. \end{array} \end{split}$$ First order coherence --------------------- The spectral representation of the reduced one-body density matrix [@Titulaer; @Naraschewski; @Sakmann4] reads $$\label{eq:4} \rho_1 (x,x';t) = \sum\limits_{\alpha=1}^{M} {{n_{\alpha}}(t){\varphi _{\alpha}}(x,t)} \varphi _{\alpha}^*(x',t),$$ where ${\varphi _\alpha}(x,t)$ are the so-called natural orbitals and $M$ corresponds to the considered number of orbitals. The population eigenvalues $n_{\alpha}(t) \in [0,1]$ characterize the fragmentation of the system [@Spekkens; @Klaiman; @Mueller; @Penrose]: For only one macroscopically occupied orbital the system is said to be condensed, otherwise it is fragmented. To quantify the degree of first order coherence during the dynamics, the normalized spatial first order correlation function $g^{(1)}(x,x';t)$ is defined $$\label{eq:4} g^{(1)}(x,x';t)=\frac{\rho_{1}(x,x';t)}{\sqrt{\rho_1(x;t)\rho_1(x';t)}}.$$ It is known that for $|g^{(1)}(x,x';t)|^2<1$ the corresponding visibility of interference fringes in an interference experiment is less than $100\%$ and this case is referred to as loss of coherence. 
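Given the natural orbitals and their populations, the spectral form of $\rho_1$ and the normalized correlation function translate directly into code (a sketch; the array shapes are our convention):

```python
import numpy as np

def g1_matrix(orbitals, populations):
    """Normalized first-order coherence g1(x,x') built from natural orbitals
    phi_a(x) (rows of `orbitals`) and their populations n_a."""
    n = np.asarray(populations, dtype=float)
    phi = np.asarray(orbitals, dtype=complex)
    rho1 = np.einsum("a,ax,ay->xy", n, phi, phi.conj())  # spectral form of rho_1
    diag = np.real(np.diag(rho1))
    return rho1 / np.sqrt(np.outer(diag, diag))          # normalize by the densities
```

A single macroscopically occupied orbital yields $|g^{(1)}|=1$ everywhere (full coherence), whereas equal fragmentation into two orthogonal orbitals suppresses the off-diagonal coherence.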
On the contrary, when $|g^{(1)}(x,x';t)|^2=1$ the fringe visibility of the interference pattern is maximal and is referred to as full coherence. The above quantity depends strongly on the various parameters of the Hamiltonian, and an investigation of the aforementioned dependence will be given in Sec. IV. Interaction quench dynamics on a driven lattice for filling factor $\nu>1$ ========================================================================== To analyze the dynamics of our system, it is instructive first to comment on the relation between the ground state and its dominant interaction-dependent spatial configuration employing the multiband expansion. Let us consider a setup with four bosons in a triple well (which is our workhorse). Within the weak interaction regime $0<g<0.1$ the dominant spatial configuration of the system is $\ket{1^{(0)},2^{(0)},1^{(0)}}$, while states of double-pair occupancy, e.g. $\ket{2^{(0)},2^{(0)},0^{(0)}}$, and triple occupancy, e.g. $\ket{1^{(0)},3^{(0)},0^{(0)}}$, possess a small contribution. In the intermediate interaction regime $0.1<g<1.0$ the system is described by a superposition of lowest-band states which are predominantly of single-pair occupancy, e.g. $\ket{1^{(0)},2^{(0)},1^{(0)}}$, $\ket{2^{(0)},1^{(0)},1^{(0)}}$, and double-pair occupancy, e.g. $\ket{2^{(0)},2^{(0)},0^{(0)}}$, while energetically higher states in the first excited band become occupied. For further increasing repulsion, e.g. $1.0<g<5.0$, the excited states gain more population and the corresponding ground state configuration is characterized by an admixture of ground- (predominantly of single pair occupancy) and excited-band (in the first and even the second band) states. In the following, we shall investigate the effect of an interaction quench upon a periodically driven finite lattice. Note that we consider interaction quenches imposed at $t=0$ or after a short transient time; the resulting dynamics is qualitatively the same in both cases. 
We shall refer, for brevity, to the effect of an interaction quench performed at $t=0$, i.e. at the moment the periodic driving starts. To be more specific, below we shall first explore the effect of an interaction quench for various driving frequencies and compare the induced dynamics with an unquenched system. Subsequently, the dynamics for a fixed driving frequency while varying the quench amplitude is investigated. We remark that in each case we consider quench amplitudes for which the induced above-barrier transport is suppressed. ![image](Fig1-eps-converted-to.pdf){width="80.00000%"} Case I: Interaction quench dynamics for different driving frequencies --------------------------------------------------------------------- We shall explore the effect of an interaction quench on top of a periodically driven triple well potential with four bosons in the weak interaction regime ($g= 0.05$), where the dominant spatial configuration of the ground state corresponds to states of single-pair occupancy, e.g. $\ket{1^{(0)},2^{(0)},1^{(0)}}$. To demonstrate the difference between the dynamics of the quenched and the unquenched bosonic ensemble, let us first investigate the response of an explicitly driven system, i.e. with $\delta g=0$. Figure 1(a) shows $F_{\{\omega_{\rm{D}}\}}(t)$ (see also Sec. II.B) with varying $\omega_{D}$. It is observed that for $0<\omega_{D}<1.5$ (nearly adiabatic driving) or high-frequency driving $\omega_{D}>12.0$ the system remains essentially unperturbed. In between, an interesting stripe pattern occurs. For later reference, let us define the frequency intervals $$\label{eq:4}\Delta\omega_{\rm{D}_1}\equiv[2.0,6.0]\quad \textrm{and} \quad\Delta \omega_{\rm{D}_2}\equiv[7.0,11.0],$$ where the time evolved state of the periodically driven system deviates significantly from the initial (ground) state. 
Indeed, for $\omega_{\rm{D}}\in\Delta\omega_{\rm{D}_1}\equiv[2.0,6.0]$ the minimal overlap during the dynamics drops down to 0.1, whereas for $\omega_{\rm{D}}\in\Delta \omega_{\rm{D}_2}\equiv[7.0,11.0]$ the system departs from the initial state by at most about $30\%$. To probe the effect of the interactions and of the driving frequency on the overall dynamics, the inset of Figure 1(a) illustrates $\bar{F}_{\{\omega_{\rm{D}}\}}=\int_{0}^{T}dt\,F_{\{\omega_{\rm{D}}\}}(t)/T$ ($T$ denotes the considered evolution time) at $\omega_{\rm{D}}=1.5$ and at $\omega_{\rm{D}}=2.75\in\Delta\omega_{\rm{D}_1}$ for different initial interactions and particle numbers. Focusing on the same driving frequency $\omega_{\rm{D}}$ and a large interparticle interaction, we observe that the mean fidelity decreases with increasing particle number, and therefore the system can be driven out-of-equilibrium more efficiently. The same observation holds for a fixed interaction strength and particle number when comparing a driving frequency below $\Delta\omega_{\rm{D}_1}$ with one inside this region, e.g. for $N=4$, $g=3$, $\bar{F}_{\{\omega_{\rm{D}}=1.5\}}=0.9405$, while $\bar{F}_{\{\omega_{\rm{D}}=2.75\}}=0.1202$. Let us now inspect how an interaction quench distorts the fidelity evolution. Figure 1(b) shows $F_{\{\omega_{\rm{D}},\delta g\}}(t)$ for $\delta{g}=2.0$ (performed at $t=0$, i.e. simultaneously with the driving) with varying $\omega_{\rm{D}}$. It is observed that the combination of driving and interaction quench brings the system significantly out-of-equilibrium for every driving frequency. To understand the effect of the quench on the system, let us compare Figure 1(b) with Figure 1(a) for the fidelity evolution of the driven but unquenched system. Indeed, an interaction quench introduces more energy into the system and as a consequence the final evolving state deviates significantly from the initial one even in the region of adiabatic driving, e.g. 
$\omega_{\rm{D}}=0.5$, or high frequency driving, e.g. $\omega_{\rm{D}}=14.0$, as seen in Figure 1(b). For instance, $\bar{F}_{\{\omega_{\rm{D}}=1.0,\delta g=0\}}=0.98$ and $\bar{F}_{\{\omega_{\rm{D}}=1.0,\delta g=2.0\}}=0.81$, while $\bar{F}_{\{\omega_{\rm{D}}=14.0,\delta g=0\}}=0.92$ and $\bar{F}_{\{\omega_{\rm{D}}=14.0,\delta g=2.0\}}=0.78$. Finally, as an estimate we report that according to our simulations the deviation of $\bar{F}$ between the unquenched and the quenched system ranges from $12\%$ to $70\%$. To analyze the role of dynamical fragmentation [@Mueller; @Sakmann5] (see Eq.(7)), Figure 1(c) shows the deviation from unity, $\lambda(t)=1-n_1(t)$, during the evolution of the first natural population for different driving frequencies $\omega_{\rm{D}}$ and no quench. Note here that $\lambda(0)\neq 0$, i.e. due to the finite repulsion the initial state already possesses a small degree of fragmentation. As shown, $\lambda(t)$ is always significantly above zero, confirming the fragmentation process. Focusing on different $\omega_{\rm{D}}$’s we note that the temporal average of the fragmentation, i.e. $\bar{\lambda}= \int dt\lambda(t)/T$, increases if $\omega_{\rm{D}}\in\Delta\omega_{\rm{D}_1}\cup\Delta\omega_{\rm{D}_2}$, while in the regions where $F_{\{\omega_{\rm{D}}\}}\simeq1$ it decreases but never approaches a perfectly condensed state. Note also that for $\omega_{\rm{D}} \not\in \Delta\omega_{\rm{D}_1}\cup\Delta\omega_{\rm{D}_2}$, $\lambda(t)$ possesses small amplitude oscillations, whereas for $\omega_{\rm{D}}$ $\in \Delta\omega_{\rm{D}_1}\cup\Delta\omega_{\rm{D}_2}$ the external driving introduces large amplitude variations in $\lambda(t)$. As expected, the interparticle repulsion supports the fragmentation process (see $\lambda(t)$ for $\omega_{\rm{D}}=3.0$, $g=1.0$ and $\delta g=0.0$ in Figure 1(c)). 
The effect of an interaction quench on the fragmentation process is shown in Figure 1(d) employing $\lambda(t)$, for $\delta g=2.0$ and the same driving frequencies as in Figure 1(c). A tendency towards a more strongly fragmented state for every $\omega_{\rm{D}}$, at least during certain time periods, is manifest. Comparing $\lambda(t)$ for $\omega_{\rm{D}}$ below $\Delta \omega_{\rm{D}_1}$ with the unquenched case, we observe that the interaction quench introduces large amplitude variations, while for $\omega_{\rm{D}}\in\Delta \omega_{\rm{D}_1}\cup\Delta \omega_{\rm{D}_2}$ $\lambda(t)$ shows a monotonic increase towards a fully fragmented state. Thus, in conclusion, the fragmentation process under an interaction quench is enhanced, which is attributed to the consequent rise of the interparticle repulsion. To identify the effect of an interaction quench on the one-body level, Figure 2 compares $\rho_1(x,t)$ without and with an interaction quench on top of the periodically driven triple well for $\omega_{\rm{D}}=0.75$ and amplitude $\delta=0.03$. Without quench, the one-body density (see Figure 2(a)) shows a weak response: a local dipole mode in the outer wells and a local breathing mode (hardly visible in Figure 2(a) due to the weak driving) in the central well arise from the combination of the parity of the lattice (odd number of sites) and the driving scheme. The dynamics in the central well shows a compression and decompression, while the outer wells are shaken (for a lattice with an even number of sites the generated intra-well mode would solely be a local dipole mode). As can be seen, by performing a quench (see Figure 2(b)) with $\delta g=2.0$, the breathing-like mode in the central well is enhanced, while in the outer wells the cloud exhibits admixtures of excitations consisting of a dipole and a breathing component. Focusing on the dynamics of the left well, it is obvious that the atomic cloud oscillates inside the well with a varying amplitude, i.e. 
it performs an oscillation with a simultaneous compression and decompression. Finally, the inter-well tunneling mode, which is manifested as a direct population transport from the middle to the outer wells and accompanies the whole process, is amplified. To illustrate explicitly the evolution of the atomic cloud in each well, we follow the contour $\rho_1(x,t)=0.25$ of the local density, shown as the thick white line on top of the density. It is observed that in the central well the cloud compresses and decompresses during the evolution, while in the outer wells the cloud oscillates, changing also its width (for a deeper understanding, this mode is also generated in a harmonic trap in Appendix A). ![Time evolution of the one-body density $\rho_1(x,t)$ caused by a periodically driven triple well with (a) $\omega_{\rm{D}}=0.75$ and (b) a simultaneous interaction quench with amplitude $\delta g=2.0$. White contours, at $\rho_1(x,t)=0.25$, are plotted on top in order to facilitate a comparison of the atomic motion between the unquenched (a) and the quenched system (b). The driving amplitude is fixed to the value $\delta=0.03$ and the initial state corresponds to the ground state of four weakly interacting bosons with $g=0.05$.](Fig2-eps-converted-to.pdf){width="50.00000%"} ![(a) Spectrum of the fidelity $F_{\{\omega_{\rm{D}}\}}(\omega)$ as a function of the driving frequency $\omega_{\rm{D}}$. The black dots correspond to $F_{\{\omega_{\rm{D}}, \delta g=0.0\}}(\omega)$, i.e. to the unquenched system, while the red empty circles refer to $F_{\{\omega_{\rm{D}}, \delta g=2.0\}}(\omega)$, i.e. to the case of a simultaneous interaction quench with amplitude $\delta g=2.0$ on top of the driving. Inset ($a_1$): Tunneling probabilities $A_1(t)$, $A_2(t)$ and $A_3(t)$ (see main text and legend) at $\omega_{\rm{D}}=2.75$. 
(b) Comparison of the single-particle tunneling probabilities $A_1(t)$ in a periodically driven triple well without and with a simultaneous interaction quench for various driving frequencies $\omega_{\rm{D}}$ (see legend). The driving amplitude is fixed to the value $\delta=0.03$ and the initial state corresponds to the ground state of four weakly interacting bosons with $g=0.05$.](Fig3-eps-converted-to.pdf){width="40.00000%"} ![image](Fig4-eps-converted-to.pdf){width="70.00000%"} ![Time evolution of the fidelity $F_{\{\omega_{\rm{D}},\delta g\}}(t)$ in a periodically driven triple well with (a) $\omega_{\rm{D}}=0.75$ and (b) $\omega_{\rm{D}}=2.75$ as a function of the quench amplitude. The driving amplitude is $\delta=0.03$, while the initial state corresponds to the ground state of four weakly interacting bosons with $g=0.05$.](Fig5-eps-converted-to.pdf){width="45.00000%"} ![image](Fig6-eps-converted-to.pdf){width="50.00000%"} To obtain a quantitative understanding of the inter-well tunneling dynamics, let us investigate the spectrum of the fidelity, i.e. ${F_{\{\omega_{\rm{D}}, \delta g\}}}(\omega ) = \frac{1}{\pi }\int {dt{F_{\{\omega_{\rm{D}}, \delta g\}} }(t){e^{i\omega t}}}$ (see also Eq.(4)). Figure 3(a) shows both the tunneling spectrum of the unquenched (see black line) and the quenched system (see red line) with respect to the driving frequency. Indeed, employing Eq.(4) we obtain that for the unquenched system the dominant tunneling process for every $\omega_{\rm{D}}$ corresponds to tunneling within the SP mode (e.g. between the states $\ket{2^{(0)},1^{(0)},1^{(0)}}$ and $\ket{1^{(0)},2^{(0)},1^{(0)}}$). It is important here to note that for $\omega_{\rm{D}}\in\Delta\omega_{\rm{D}_1}$ additional tunneling modes from the SP to the DP mode (e.g. from $\ket{1^{(0)},2^{(0)},1^{(0)}}$ to $\ket{2^{(0)},2^{(0)},0^{(0)}}$) and from the SP to the T mode (e.g. from $\ket{1^{(0)},2^{(0)},1^{(0)}}$ to $\ket{3^{(0)},1^{(0)},0^{(0)}}$) can be generated. 
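The spectrum $F(\omega)$ defined above can be estimated from a sampled fidelity trace with a discrete Fourier transform. This is a sketch under our own conventions; a finite observation window broadens the peaks, and for a real-valued $F(t)$ the magnitude is insensitive to the sign convention of the exponent.

```python
import numpy as np

def fidelity_spectrum(F_t, dt):
    """Estimate F(omega) = (1/pi) * int dt F(t) e^{i omega t} from samples F_t
    on a uniform time grid with spacing dt; returns (omega, |F(omega)|)."""
    spec = np.fft.rfft(F_t) * dt / np.pi                 # Riemann-sum normalization
    omega = 2.0 * np.pi * np.fft.rfftfreq(len(F_t), d=dt)
    return omega, np.abs(spec)
```

A trace $F(t)=(1+\cos\omega_0 t)/2$ produces a peak at $\omega=\omega_0$ on top of the trivial $\omega=0$ component, mirroring the tunneling branches discussed in the text.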
To illustrate this fact, we depict in the inset of Figure 3(a) the probabilities $A_1(t)=|\left\langle 2^{(0)},1^{(0)},1^{(0)}| \Psi(t) \right\rangle|^2$, $A_2(t)=|\left\langle 2^{(0)},2^{(0)},0^{(0)}| \Psi(t) \right\rangle|^2$ and $A_3(t)=|\left\langle 3^{(0)},1^{(0)},0^{(0)}| \Psi(t) \right\rangle|^2$ at $\omega_{\rm{D}}=2.75$. It is shown that $A_{2}(t)$ and $A_{3}(t)$, although suppressed in comparison to $A_1(t)$, possess significant populations. We remark here that a similar tunneling process corresponding to atom-pair tunneling has already been observed for few atoms confined in a driven double well in [@Chen]. However, for the quenched system the tunneling takes place only within the SP mode, while the remaining tunneling modes are suppressed, due to the quench, even for $\omega_{\rm{D}}\in\Delta \omega_{\rm{D}_1}$. To illustrate the effect of an interaction quench upon the driven lattice on the tunneling dynamics, Figure 3(b) shows the probability $A_1(t)$ both for the unquenched and the quenched system for various driving frequencies. As can be observed, the effect of the quench depends on the driving frequency. Indeed, for $\omega_{\rm{D}}\leq \min(\Delta\omega_{\rm{D}_1})$ the quench decreases the frequency of the tunneling branch (see the red empty circles in Figure 3(a) which correspond to the interaction quenched fidelity spectrum) and leads to a significant enhancement of the amplitude of this tunneling branch (e.g. see the blue and black line in Figure 3(b)). The latter is a consequence of the fact that the interaction quench injects energy into the system. However, for $\omega_{\rm{D}}>\max(\Delta\omega_{\rm{D}_1})$ the tunneling branch is quite insensitive to the quench, in the sense that both the frequency and the amplitude of the tunneling probability become only slightly larger (see Figures 3(a) and (b)). 
To determine the frequencies of the local dipole mode in the outer wells we calculate the spectrum $\Delta {\rho _L}(\omega ) = \frac{1}{\pi }\int {dt} \Delta {\rho _L}(t){e^{i\omega t}}$. The analysis of the corresponding breathing component will be performed in the next subsection, where we shall examine in more detail the effects of the quench dynamics. Figure 4(a) presents $\Delta\rho_{L}(\omega)$, where two emergent frequency branches (denoted as ($a_1$) and ($a_2$) in the spectrum) of the intra-well oscillations are visible. It is observed that for driving frequencies $\omega_{\rm{D}}\in [0,1.5]$ the intra-well dipole mode possesses two distinct frequencies which come into resonance in the region $\omega_{\rm{D}} \in[2,3]$ and then for $\omega_{\rm{D}}>3.0$ are again well separated. To gain insight into the impact of an interaction quench, performed on top of the driving, on the intra-well density oscillations, Figure 4(b) shows $\Delta \rho_L(t)$ at resonance ($\omega_{\rm{D}}=2.875$) for different quench amplitudes, namely at $\delta g=0.0, 1.0$ and $2.0$. As expected at resonance, $\Delta \rho_L(t)$ features a beating dynamics, but with an envelope that decays faster for increasing quench amplitude, which is a direct effect of the interactions. A similar dephasing behavior holds for the other $\omega_{\rm{D}}$’s where $\Delta \rho_L(t)$ does not exhibit a beating pattern. Concerning the width of the resonant region, increasing amplitudes of the interaction quench lead to a slight broadening. According to our calculations, for $\delta g=0$ the resonant frequency region corresponds to $\omega_{\rm{D}} \in [2,3]$, while for $\delta g=1.0$ and $\delta g=2.0$ the corresponding regions are $\omega_{\rm{D}} \in [1.8,3.2]$ and $\omega_{\rm{D}} \in [1.5,3.5]$, respectively. 
Summarizing, one can induce this resonant intra-well dynamics by adjusting the driving frequency and, by applying an interaction quench, increase the width of the resonance and manipulate the amplitude of the intra-well oscillations. From another perspective, the above-mentioned resonant behavior can be illustrated by employing the occupation of the zeroth band of the triple well during the evolution. The probability of finding all four bosons within the zeroth band (employing the multiband expansion) reads $$|B_{\{N_i\};\{I_i\}}(t)|^2=\sum_{\{I_i\}} |\left\langle N_{1}^{(I_1)},N_{2}^{(I_2)},N_{3}^{(I_3)}| \Psi(t) \right\rangle|^2,$$ where the summation is performed over the excitation indices with the imposed constraints $\sum_{i=1}^3 n_i^{(0)}=N$ and $\sum_{i=1}^3 \sum_{k=1}^2 n_i^{(k)}=0$ (see also Eq.(3)). Figure 4(c) shows the probability $|B_{\{N_i\};\{I_i\}}(t)|^2$ for all the bosons to reside in the zeroth band for various driving frequencies $\omega_{D}$ and a fixed amplitude $\delta=0.03$. At resonance a complete depopulation of the zeroth band at some specific time intervals is observed. To be more precise, this probability exhibits a revival-like behavior on short time-scales, and decays as time evolves (see in particular the black dashed curve in Figure 4(c)). The local minima of $|B_{\{N_i\};\{I_i\}}(t)|^2$ are connected to the enhancement of the amplitude of the oscillations of the single-particle density (see also Appendix B). On the other hand, for driving frequencies away from $\Delta \omega_{\rm{D}_1}$ the respective probability for all the bosons to occupy the zeroth band is rather large and is indeed dominant. However, significant contributions, e.g. at $\omega_{\rm{D}}=5.25$ or $\omega_{\rm{D}}=9.25$ (see Figure 4(c)), from excited configurations cannot be neglected, especially in the regions $\Delta\omega_{\rm{D}_1}$, $\Delta\omega_{\rm{D}_2}$ where the system departs from the initial state (see also Figure 1(a)) in a prominent way. 
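The band-resolved bookkeeping behind this probability can be sketched as follows, with a number state encoded as a tuple of per-well band occupations (our own encoding, not the paper's implementation):

```python
def zeroth_band_probability(amplitudes):
    """Sum |C|^2 over number states with every boson in the zeroth band.
    `amplitudes` maps states to complex amplitudes; a state is a tuple of
    wells, each well a tuple (n_band0, n_band1, ...) of band occupations."""
    return sum(
        abs(c) ** 2
        for state, c in amplitudes.items()
        if all(n == 0 for well in state for n in well[1:])  # no higher-band population
    )
```

For instance, a state with weight 0.7 on a purely lowest-band configuration and weight 0.3 on a configuration with one boson in the first excited band yields a zeroth-band probability of 0.7.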
Finally, in order to explore the impact of the interaction quench at resonance, Figure 4(d) shows $|B_{\{N_i\};\{I_i\}}(t)|^2$ for different quench amplitudes at $\omega_{D}=2.75$. It is observed that for larger interaction quenches this probability exhibits a more strongly decaying envelope, which is a pure effect of the interactions. As can be seen, for increasing quench amplitude the probability for the system to remain in the zeroth band decays on increasingly shorter time scales in the course of the dynamics, and the system is dominated by different types of excitations, e.g. two, three or four particles distributed in the first and second excited bands, as expected intuitively. Case II: Periodically driven dynamics for different interaction quench amplitudes --------------------------------------------------------------------------------- In the following, we shall examine the impact of the quench amplitude $\delta{g}$, focusing on two different driving frequency regions, i.e. for an almost adiabatic periodic driving and in the vicinity of the resonance (see also Figure 1(a)). To obtain an overview of the dynamical response, Figures 5(a) and (b) show the fidelity evolution with respect to $\delta{g}$, for fixed driving frequencies $\omega_{\rm{D}}=0.75$ and $\omega_{\rm{D}}=2.75$ respectively. As expected, for larger quench amplitudes the time-evolved state deviates more prominently from the initial (ground) state. For instance, $\bar{F}_{\{\omega_{\rm{D}}=0.75,\delta g=0\}}=0.95$ and $\bar{F}_{\{\omega_{\rm{D}}=0.75,\delta g=4.0\}}=0.6$, while $\bar{F}_{\{\omega_{\rm{D}}=2.75,\delta g=0\}}=0.7$ and $\bar{F}_{\{\omega_{\rm{D}}=2.75,\delta g=4.0\}}=0.4$. Next, let us proceed with a more detailed analysis in order to probe the effect of an interaction quench on the inter-well tunneling dynamics and the intra-well excited modes. 
To examine the tunneling dynamics, Figure 6(a) presents the fidelity spectrum $F_{\{\delta g\}}(\omega)=\frac{1}{\pi} \int dt\, F_{\{\delta g\}}(t) e^{i\omega t}$ as a function of the quench amplitude. Three inter-well tunneling branches ($a_1'-a_3'$) can be identified. The lowest branch ($a_1'$), which dominates for strong quench amplitudes, refers to the energy difference $\Delta\epsilon$ within the energetically lowest-band states of the SP mode, e.g. from the initial state $\ket{1^{(0)},2^{(0)},1^{(0)}}$ to a final state $\ket{2^{(0)},1^{(0)},1^{(0)}}$ etc. The second branch ($a_2'$) corresponds to tunneling between the SP and DP modes, e.g. from $\ket{1^{(0)},2^{(0)},1^{(0)}}$ to $\ket{2^{(0)},2^{(0)},0^{(0)}}$ etc. The third branch ($a_3'$) refers to a tunneling process among the SP and T modes, e.g. from $\ket{1^{(0)},2^{(0)},1^{(0)}}$ to $\ket{3^{(0)},1^{(0)},0^{(0)}}$ etc. The remaining inter-well tunneling branches, which correspond to transitions between energetically higher-lying modes, are negligible in comparison to the aforementioned ones and can therefore hardly be identified in Figure 6(a). To probe the effect of the driving frequency on the tunneling spectrum, Figure 6(b) shows $F_{\{\delta g\}}(\omega)$ at $\omega_{\rm{D}}=2.75$ (i.e. at resonance of an explicitly driven triple well) with varying quench amplitude. The three observed tunneling branches ($b_1'-b_3'$) refer to the same transitions, i.e. between the same number states as addressed above, but they are slightly shifted to higher frequencies as a consequence of the higher driving frequency. The remaining branches visible in the spectrum, e.g. $a_4'$ and $b_4'$, which show more prominent deviations between the different driving frequencies, correspond to other modes and inter-band transitions and will be explained below. To identify the frequencies of the local breathing mode we resort to the second moment $\sigma_i^2(\omega ) = \frac{1}{\pi}\int{dt\sigma_i^2(t){e^{i\omega t}}}$ for each well (see Sec.
II.B and Eq.(5)). Focusing on the left well, which possesses a breathing component (see also Figure 2(b)), we calculate the frequency spectrum $\sigma_L^2(\omega )$, which matches the branch $a_4'$ in the fidelity spectrum (see Figure 6(a)). Most importantly, this frequency branch resonates with two distinct tunneling branches at different quench amplitudes, namely at $\delta g\approx1.0$ with the branch $a_3'$ (see the ellipse in Figure 6(a)) and at $\delta g\approx2.8$ with the branch $a_2'$ (see the dashed ellipse in Figure 6(a)) of the tunneling. Turning to the middle well, Figure 6(c) presents $\sigma_M^2(\omega )$, which shows two main peaks ($a_1''$-$a_2''$) with respect to the quench amplitude. The lowest of these peaks refers to a tunneling mode (see also Figure 6(a)), identified by the energy difference within the energetically lowest states of the SP mode. The appearance of this peak in the spectrum is due to the fact that the tunneling can induce a modulation of the width of the local wavepacket. The second peak, located at $\omega_{2} \approx 4.5$, refers to an inter-band process, i.e. to a transition from $\ket{1^{(0)},2^{(0)},1^{(0)}}$ to $\ket{1^{(0)},1^{(0)}\otimes 1^{(2)},1^{(0)}}$. Inspecting now more carefully the fidelity spectrum in Figure 6(a), we observe that the latter breathing frequency branch $a_2''$ (denoted as $a_5'$ in Figure 6(a)) comes into resonance with the highest tunneling frequency branch ($a_3'$) at large quench amplitudes $\delta g \approx 5.2$. However, this tunneling branch is not visible in Figure 6(c) due to its small amplitude in comparison to the breathing ($a_2''$) branch. Concerning the dependence of the breathing peak ($a_2''$) on the interaction quench, we observe that it is more sensitive to $\delta g$ for $0.0<g_{f}<2.5$, otherwise it is approximately constant.
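The frequency branches discussed above are obtained by Fourier transforming time series such as $\sigma_M^2(t)$. A minimal numerical sketch, assuming a uniformly sampled signal and using a synthetic breathing oscillation in place of the actual data:

```python
import numpy as np

def dominant_frequency(signal, dt):
    """Angular frequency of the strongest spectral peak of a real,
    uniformly sampled time series; the 1/pi prefactor in the text is
    irrelevant for locating peak positions."""
    sig = signal - signal.mean()               # remove the static offset
    spectrum = np.abs(np.fft.rfft(sig))
    omega = 2.0 * np.pi * np.fft.rfftfreq(len(sig), d=dt)
    return omega[np.argmax(spectrum)]

# synthetic sigma_M^2(t): a breathing oscillation at omega ~ 4.5
t = np.arange(0.0, 200.0, 0.1)
sigma_sq = 1.0 + 0.05 * np.cos(4.5 * t)
```

Applied to the actual $\sigma_M^2(t)$ for each quench amplitude, and stacking the resulting spectra column by column, one obtains plots of the kind shown in Figures 6(c), (d).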
To probe the effect of the driving frequency on the breathing branch of the middle well, Figure 6(d) illustrates the spectrum of $\sigma_M^2(\omega)$ with respect to a varying $\delta g$ for $\omega_{\rm{D}}=2.75$. The respective breathing branches, denoted as $b_1''$, $b_2''$ in the figure, are slightly altered in comparison to the case with $\omega_{\rm{D}}=0.75$. Concerning the first one, we have already commented on its deviation in our discussion of Figures 6(a), (b). Focusing now on the highest frequency branch of the breathing, a significant alteration is observed: for small quench amplitudes, $0.0<\delta g<0.8$, it possesses a single frequency, while for $\delta g>0.8$ the branch splits into two with slightly different frequencies. The first lies close to the corresponding frequency for $\omega_{\rm{D}}=0.75$ but is slightly larger, while the second is larger than both. ![image](Fig7-eps-converted-to.pdf){width="60.00000%"} Finally, let us quantitatively examine the dipole component in the outer wells by employing the frequency spectrum $\Delta {\rho _L}(\omega ) = \frac{1}{\pi }\int {dt} \Delta {\rho _L}(t){e^{i\omega t}}$ for various quench amplitudes. Figure 6(e) shows $\Delta\rho_L(\omega)$, where we can identify three dominant peaks (denoted as $a_1'''-a_3'''$) located at ${\omega _1'''} \approx 1.2$ and ${\omega _3'''} \approx 2.5$, while $\omega_2'''$ is quench dependent. The steady frequency branches ($a_1'''$ and $a_3'''$) correspond to the dipole mode and refer to inter-band transitions, e.g. from $\ket{1^{(0)},2^{(0)},1^{(0)}}$ to $\ket{1^{(0)}\otimes 1^{(1)},1^{(0)},1^{(0)}}$ or to $\ket{1^{(0)}\otimes 1^{(2)},1^{(0)},1^{(0)}}$ respectively. On the other hand, the quench dependent frequency peak ($a_2'''$) is related to the third inter-well tunneling mode (denoted in Figure 6(a) as $a_3'$).
As shown in Figure 6(e), the latter branch $a_2'''$ experiences two resonances, one with each dipole branch, at different quench amplitudes, namely at $\delta g \approx 0.7$ with the lowest frequency dipole branch ($a_1'''$) and at $\delta g \approx 3.0$ with the higher frequency dipole branch ($a_3'''$). Moreover, by examining the fidelity spectrum (Figure 6(a)) once more, it is observed that the highest frequency dipole branch experiences a resonance with the second inter-well tunneling mode ($a_2'$) at $\delta g \approx5.0$. To conclude on the dependence of the dipole branches on the driving frequency, we show in Figure 6(f) the $\Delta {\rho _L}(\omega)$ at $\omega_{\rm{D}}=2.75$. As shown, the lower frequency dipole branch ($a_1'''$) is strongly dependent on the driving frequency (see branch $b_1'''$ in Figure 6(f)), while the higher frequency branch ($a_3'''$) is essentially unaffected. Most importantly, the aforementioned resonant behavior persists for $\omega_{\rm{D}}=2.75$, but in this case two more resonances appear in the spectrum (see Figure 6(b)) due to a shift of the lowest frequency dipole branch. These resonances are located at $\delta g\approx2.1$ and $\delta g\approx 4.0$ and refer to a coupling of the second ($b_2'$) and third ($b_3'$) tunneling branches with the lowest frequency dipole branch. In the next section, we proceed to the investigation of a system with filling $\nu < 1$ in order to generalize our findings. In particular, by considering a setup with eleven wells and five particles we demonstrate that the above discussed resonant behavior for the intra-well dynamics induced by an explicitly driven potential is present also here. Subsequently, we explore the impact of an interaction quench.
Quench dynamics in the driven lattice for filling factor $\nu<1$
================================================================

Here we shall concentrate on a larger lattice system characterized by a filling factor smaller than unity, namely we consider the case of five bosons trapped in an eleven-well potential. To understand and interpret the dynamics, let us first briefly comment on the ground state properties of the system. An important property of the ground state is the spatial redistribution of the atoms as the interparticle repulsion increases. The non-interacting ground state ($g=0$) is the product of the single-particle eigenstates spreading across the entire lattice, while the presence of the hard-wall boundaries renders the neighborhood of the central well of the potential slightly more populated. Increasing the repulsion within the weak interaction regime, the atoms are pushed to the outer sites, which gain and lose population in the course of increasing $g$ [@Brouzos1]. In the following, let us first focus on the driven bosonic dynamics induced at $t=0$ by applying the vibrating eleven-well potential to the ground state of five repulsively interacting bosons with $g=0.05$. Figures 7(a), (b) demonstrate the response of the system on the one-body level for different driving frequencies $\omega_{\rm{D}}$, but the same driving amplitude $\delta=0.03$. The overall out-of-equilibrium behavior shows similar characteristics as in the case of the triple well, i.e. the occurrence of out-of-phase dipole-like modes among the outer wells of the lattice, a local-breathing mode in the central well and an inter-well tunneling mode accompanying the dynamics. In addition, a transition from a non-resonant (Figure 7(a)) to a resonant intra-well dynamics (Figure 7(b)) by adjusting $\omega_{\rm{D}}$ is observed at the same frequency $\omega_{D}=2.875$ as in the triple well case.
This resonant behavior is again manifested (Figure 7(b)) in the one-body density evolution as the formation of enhanced density oscillations at each site, being further related to a gradual depopulation of the zeroth band during the evolution. In terms of the significant contributing number states we can infer that out-of-resonance the dynamics is well described by the set of lowest-band states (with a small contribution from the excited band states), while at resonance the inclusion of number states which obey the constraints $\sum_{i=1}^{11} n_i^{(1)}=N-1$, $\sum_{i=1}^{11} n_i^{(2)}=1$ and $n_i^{(3)}=0$ for $i=1,...,11$ is necessary. Contributions from excitations to the second band, i.e. $\sum_{i=1}^{11} n_i^{(1)}=N-1$, $\sum_{i=1}^{11} n_i^{(3)}=1$ and $n_i^{(2)}=0$ for $i=1,...,11$, also exist but they are negligible in comparison to the excitations to the first excited band. Another important observation here is that by tuning the driving frequency $\omega_{\rm{D}}$ close to resonance the tunneling dynamics is modified. To explicate the latter, we employ as a measure of the inter-well tunneling the spatially integrated middle-well density $P_M(t)=\int_{-\pi/2}^{\pi/2} dx \rho_1(x,t)$, shown in Figure 7(c) for different driving frequencies, namely before, exactly at and after the resonance. Approaching $\omega_{\rm{D}}=2.875$ from below, a diffusion to the outer wells is observed. In the region of $\omega_{\rm{D}}=2.875$ the tunneling dynamics is slowed down, i.e. the occupation of the middle well fluctuates around a mean value. For $\omega_{\rm{D}}>2.875$ the tunneling process is modified and a tendency of the particles to concentrate in the central well is observed. Employing a corresponding number state analysis we can infer that for $\omega_{\rm{D}}>2.875$ states with higher occupancy in the central well gain prominence. The same behavior of the tunneling dynamics (before and after the resonance) is also observed in the triple well case.
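The integrated middle-well density is straightforward to evaluate on the numerical grid. A brief sketch, where the Gaussian toy density merely stands in for $\rho_1(x,t)$ at a given time:

```python
import numpy as np

def middle_well_population(x, rho, left=-np.pi / 2, right=np.pi / 2):
    """P_M = integral of the one-body density over the central well,
    approximated by the trapezoidal rule on the grid points inside
    [left, right]."""
    mask = (x >= left) & (x <= right)
    return np.trapz(rho[mask], x[mask])

# toy normalized density concentrated around the central well
x = np.linspace(-3 * np.pi / 2, 3 * np.pi / 2, 601)
rho = np.exp(-x ** 2)
rho /= np.trapz(rho, x)
```

Evaluating this at each stored time slice of the density yields curves like those of Figure 7(c).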
Furthermore, let us inspect the influence of an interaction quench on top of the driven lattice. As expected intuitively, with increasing interaction quench the tunneling process is suppressed. Figure 7(d) shows $P_M(t)$ for different interaction quench amplitudes on top of the periodically driven lattice with $\omega_{\rm{D}}=0.75$ (i.e. away from the resonance). It is observed that $P_M(t)$ becomes steady for increasingly longer times as we increase $\delta g$, thus indicating a suppression of the corresponding inter-well tunneling dynamics. Finally, note that due to the low filling the admixed modes induced in the outer wells by an interaction quench on top of the periodically driven lattice are hardly visible and are therefore not shown here. ![One body coherence function for different time instants ($t_1=1.0$, $t_2=56.0$, $t_3=123.0$ and $t_4=193.0$) during the evolution caused by a periodically driven eleven-well potential (a,b,c,d) with $\omega_{\rm{D}}=0.75$ and (e,f,g,h) $\omega_{\rm{D}}=3.0$. (i,j,k,l) show the evolution of the one-body coherence in a periodically driven potential with $\omega_{\rm{D}}=0.75$ and a simultaneous interaction quench with amplitude $\delta g=1.0$. The driving amplitude is fixed to the value $\delta=0.03$ and the initial state corresponds to the ground state of five weakly interacting bosons with $g=0.05$.](Fig8-eps-converted-to.pdf){width="50.00000%"} Let us further investigate the signature of the resonant regions as well as the effect of the interaction quench on top of the periodically driven lattice by exploring the first order correlation function (see Eq.(8)) in coordinate space, which quantifies the degree of spatial coherence of the interacting system [@Naraschewski].
It is important to stress that, within the single-orbital Gross-Pitaevskii theory, the quantum wavepacket remains coherent at all times, in contrast to a many-body calculation where it exhibits prominent time-varying structures which in turn indicate the rise of fragmentation in the system as the correlations between the particles build up. From this point of view we expect the spatial coherence of the atoms in the lattice to be strongly affected either by the resonant driving or as a consequence of the interaction quench. Focusing on small driving frequencies ($\omega_{\rm{D}}=0.75$) within the weakly interacting regime ($g=0.05$), we observe the spread of the coherence (Figures 8(a),(b),(c),(d)) through the lattice sites as time evolves. The diagonal elements are always perfectly coherent and their first neighbors remain close to unity throughout the time evolution. The off-diagonal elements are partially coherent and oscillate around the value 0.5, while for comparatively long evolution times a site selective off-diagonal long range order appears (see Figure 8(d)). Turning our attention to the resonant driving (see Figures 8(e),(f),(g),(h)), a different behavior throughout the time evolution is observed: On short time scales only the diagonal elements remain coherent and the off-diagonal elements are partially coherent. As time evolves, a substantial loss of coherence even on the diagonal is observed, while the off-diagonal elements exhibit a much more prominent and complex structure. A direct comparison at equal times of the correlation function for non-resonant and resonant driving shows that resonant driving and loss of coherence go hand in hand. On the other hand, by performing an interaction quench on top of the driving, the coherence (see Figures 8(i),(j),(k),(l)) is unity along the diagonal, while for sufficiently long evolution times it tends to vanish away from the diagonal.
Finally, note that the off-diagonal contributions tend to fade out (but never vanish completely, even for stronger quenches, since the particles always remain delocalized) with increasing quench amplitude, and a tendency for concentration close to the diagonal is observed at equal times. This indicates that the strength of the interaction between the particles strongly affects the correlations; the stronger the inter-particle repulsion, the stronger the loss of coherence. As a concluding remark we can infer that either the resonant driving or a quench on top of the driving entails an intensified loss of coherence.

Conclusions and Outlook
=======================

In the present work, the few-body correlated non-equilibrium quantum dynamics of an interaction quenched bosonic cloud in an external periodically driven finite-size optical lattice has been investigated, with particular emphasis on the effect of an interaction quench on top of the driven lattice. We focus on large lattice depths and small driving amplitudes in order to limit the degree of excitations that could lead to the creation of the cradle motion [@Mistakidis1] or even to heating processes. Starting from the ground state of a weakly interacting small atomic ensemble, we examine in detail the time evolution of the system in the periodically driven optical lattice with a simultaneous interaction quench. It has been shown that in the case of the periodically driven lattice one can induce out-of-phase local dipole modes in the outer wells, while a local breathing mode can be generated in the central well. This is in direct contrast with a shaken lattice, where only in-phase dipole modes are excited. A wide range of driving frequencies has been considered, covering the regimes from adiabatic to high-frequency driving.
We observe that within the intermediate frequency regimes, which are intractable by current analytical methods, the system can be driven to a far out-of-equilibrium state when compared to other driving frequency regions. In particular, a resonance of the intra-well dynamics occurs with enhanced tunneling dynamics, thus opening energetically higher-lying inter-well tunneling channels. A prominent signature of the resonant regions as well as of the effect of the interaction is provided via the study of the time-dependence of the first order coherence, where an intensified loss of coherence is observed. This loss of coherence constitutes an independent signature of the resonant regions, allowing one to study them from another perspective and, potentially, to measure them in experiments. Following an interaction quench on top of the periodically driven lattice for various driving frequencies, we can trigger more effectively the inter-well as well as the intra-well dynamics and steer the system towards strongly out-of-equilibrium regimes. Here, the tunneling as well as the local breathing mode in the middle well are amplified, while in the outer wells the atomic cloud experiences an admixture of a dipole and a breathing component. This admixture leads to simultaneous oscillations around the minimum of the well as well as to a contraction and expansion in the course of the dynamics. Our analysis shows that one can use the interaction quench to manipulate the tunneling frequency, rendering the single-particle tunneling dominant even at resonance. Concerning the on-site modes, it is shown that an interaction quench can be used to manipulate their oscillation amplitudes, yielding also a strong influence on the excitation dynamics. Subsequently, the dynamics of the periodically driven lattice (i.e. for a fixed driving frequency) as a function of the quench amplitude has been studied.
In particular, the tunneling contains three modes, the breathing possesses two frequency branches and the corresponding admixture three branches: one from the breathing component and two which refer to the dipole component. Furthermore, five resonances between the inter-well tunneling dynamics and the intra-well dynamics have been revealed. The inter-well tunneling experiences a resonance with the breathing component of the central well, two resonances with the breathing component of the outer wells and two resonances with the dipole component of the outer wells. These resonances can further be manipulated via the frequency of the periodic driving. As a result, the combination of different driving protocols can excite different inter- and intra-well modes as well as reveal various energetically higher components of a mode. Most importantly, the observed resonances between different inter- and intra-well modes demonstrate the richness of the system, while their dependence on various system parameters, e.g. the driving frequency, shows the tunability of the system. The above-mentioned realization of multiple resonances constitutes arguably one of the central results of our investigation, which to the best of our knowledge has never been reported in such a setting. Finally, let us comment on possible future extensions of the present work. Our analysis reveals that a combination of different driving protocols can induce admixtures of excited modes, which in the present case correspond to admixtures of dipole-like and breathing-like modes. In this direction, it would be a natural next step to find the optimal pulse of the interaction quench protocol in order to induce a perfectly shaped squeezed state. Also the understanding and prediction of the long-time dynamics, imposing the interaction quench on the driven lattice at different transient times, is certainly of interest.
Harmonic oscillator: Admixtures of dipole-like and breathing-like modes
=======================================================================

In the present Appendix we shall briefly demonstrate the creation of admixtures of excitations consisting of a dipole and a breathing component in the dynamics of a bosonic ensemble confined in a one-dimensional harmonic oscillator. To begin with, let us first comment on the creation of each of the above excited modes separately. It is well known that a quench of the frequency of the harmonic oscillator or of the interatomic repulsive interaction induces a breathing mode oscillation of the atomic cloud. On the other hand, a sudden displacement or a periodic driving, e.g. shaking, of the harmonic oscillator can induce a dipole mode in the atomic cloud. However, a combination of the above techniques can induce more complicated modes in the dynamics [@Streltsova] and requires computational methods which can take into account higher orbitals, i.e. correlations. Here, we aim at illuminating this scenario by examining the evolution of an atomic cloud consisting of six bosons initially ($t<0$) prepared in the ground state of a harmonic oscillator potential. Subsequently ($t>0$), the cloud is subjected to a periodic driving and a simultaneous quench of the interatomic repulsive interaction. Thus, the Hamiltonian that governs the dynamics reads $$\label{eq:2}H = \sum\limits_{i = 1}^N \bigg({ \frac{{{p_i ^2}}}{{2M}}}+ {V_{D}}({x_i};t)\bigg) + g_{f}\sum\limits_{i < j} {\delta({x_i} - {x_j})},$$ where the periodic driving of the harmonic oscillator is modelled via the time-dependent potential $V_{D}(x;t)= \frac{\omega^2}{2}(x-A\sin(\omega_{D}t))^2$ and $\delta{g}=g_{f}-g_{in}$ denotes the quench amplitude. ![(a) Time evolution of the one-body density $\rho_{1}(x,t)$ caused by a periodically driven harmonic trap with $\omega_{D}=0.25$ and a simultaneous interaction quench with amplitude $\delta g=1.6$.
The driving amplitude is fixed to the value $A=0.6$, while the initial state corresponds to the ground state of six weakly interacting bosons with $g=0.05$. We also illustrate the one-body density profiles at certain time instants (see legend) during the evolution of the periodically driven oscillator with (b) $\delta g=1.6$ and (c) $\delta g=0$. ](Fig9-eps-converted-to.pdf){width="45.00000%"} Figure 9(a) illustrates the dynamics of the atomic cloud on the single-particle level by employing the one-body density. It is observed that the cloud not only oscillates inside the external trap but also changes its shape during the oscillation. This is a clear signature that the induced mode differs from a pure dipole mode or a pure breathing mode, being an admixture of the above mentioned excitations. To explicitly indicate this fact, we illustrate in Figure 9(b) the profiles of the one-body density at certain time instants during the evolution. The cloud compresses and decompresses (caused by the interaction quench) during its oscillation (caused by the driven oscillator) inside the external harmonic trap. On the contrary, a cloud which is only subjected to the above external driving (see Figure 9(c)) performs the well-known dipole oscillation and the wavepacket exhibits oscillations with constant width and amplitude.

Remarks on the resonant intra-well dynamics of the driven lattice
=================================================================

In the present Appendix we shall briefly comment on the characteristics of the resonant dynamics of the driven lattice from a one-body perspective. Indeed, Figure 10(a) presents $\rho_1(x,t)$ at $\omega_{\rm{D}}=2.75$. The overall dynamics exhibits enhanced density modulations, manifested as internal fast oscillations and large amplitude oscillations in each well of period $\sim 20$. The inter-well tunneling is also enhanced in comparison to small $\omega_{\rm{D}}$’s (see Figure 2(a)).
A similar intra-well resonant behavior has been observed in [@Mistakidis2], where enhanced and in-phase oscillating dipoles have been revealed. On the contrary, here we observe enhanced and out-of-phase oscillating dipole modes as well as an amplified breathing mode in the center. Thus, exploiting the presently used driving scheme, we have the possibility to open an additional energetic channel. To verify that the dynamical features induced by the driven lattice are independent of the interaction strength $g$ or the particle number $N$, we calculate the deviation of the local density oscillation from its mean value, i.e. $\Lambda=\int_0^T dt\,|\Delta\rho_{\alpha}(t)-\overline{\Delta\rho_{\alpha}}|/T$, where $\overline{\Delta\rho_{\alpha}}=\int_0^T dt\,\Delta\rho_{\alpha}(t)/T$ denotes the mean oscillation amplitude over the considered propagation time $T$ and $\Delta\rho_{\alpha}(t)$ refers to the intra-well wavepacket asymmetry. Figure 10(b) shows the mean amplitude of the intra-well oscillation for the left well as a function of the driving frequency $\omega_{D}$ for different interaction strengths $g$ but the same particle number. With varying $\omega_{D}$, the mean amplitude increases up to $\omega_{D}=2.875$, where it exhibits a peak (position of the resonance), and then decreases again, exhibiting several smaller peaks at frequencies where the system is driven far from equilibrium (see also Figure 1(a)). Comparing the dynamics for different interactions, it is observed that the ensemble exhibits the same overall behavior but the mean oscillation amplitude is slightly larger (for higher interactions), especially in the region of the central peak. This is a direct interaction effect, since the system possesses more energy. On the other hand, in order to investigate whether the above results are independent of the particle number, the same quantity ($\Lambda$) is shown in Figure 10(c) for varying particle number, namely $N=4,8$.
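On a uniform time grid the measure $\Lambda$ reduces to the mean absolute deviation of the recorded asymmetry signal. A compact sketch, where the sinusoidal test signal is only a stand-in for $\Delta\rho_{\alpha}(t)$:

```python
import numpy as np

def mean_oscillation_amplitude(delta_rho):
    """Lambda = (1/T) * integral |delta_rho(t) - mean(delta_rho)| dt;
    on a uniform time grid the time step cancels, leaving the mean
    absolute deviation of the sampled signal."""
    return np.abs(delta_rho - delta_rho.mean()).mean()

# for a pure sine of amplitude A over whole periods, Lambda -> 2A/pi
t = np.arange(0.0, 20.0 * np.pi, 0.001)
lam = mean_oscillation_amplitude(0.1 * np.sin(t))   # ~ 0.2/pi ~ 0.0637
```

Sweeping $\omega_{D}$ and evaluating this measure for each run gives curves of the type shown in Figures 10(b), (c).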
The mean amplitude presents the same overall behavior with respect to the driving frequency $\omega_{D}$ but is also slightly larger for increasing particle number, with a maximal deviation of the order of $30\%$. ![(a) Time evolution of the one-body density $\rho_{1}(x,t)$ in a triple well for $\omega_{D}=2.75$. The driving amplitude is fixed to the value $\delta=0.03$, while the initial state corresponds to the ground state of four weakly interacting bosons with $g=0.05$. (b) Mean oscillation amplitude $\Lambda$ of the left well for $N=4$ bosons as a function of the driving frequency $\omega_{\rm{D}}$ for different interparticle repulsion (see legend). (c) The same as (b) but for fixed interaction $g=0.2$ and different particle number (see legend).](Fig10-eps-converted-to.pdf){width="40.00000%"} ![Fidelity evolution $F_{\omega_{\rm{D}}}(t)$ of a periodically driven triple well with (a) $\omega_{\rm{D}}=2.5$ and (b) $\omega_{\rm{D}}=7.5$ with an increasing number of SPFs (see legend). (c), (d) $F_{\omega_{\rm{D}}}(t)$ for various SPFs (see legend) with a simultaneous interaction quench of amplitude (c) $\delta g=0.5$ and (d) $\delta g=2.0$ on top of the periodically driven triple well with $\omega_{\rm{D}}=0.75$. ](Fig11-eps-converted-to.pdf){width="50.00000%"} ![Fidelity evolution $F_{\omega_{\rm{D}}}(t)$ of a periodically driven triple well with (a) $\omega_{\rm{D}}=2.5$ and (b) $\omega_{\rm{D}}=7.5$ with an increasing number of grid points (see legend).
(c), (d) $F_{\omega_{\rm{D}}}(t)$ for various grid sizes (see legend) with a simultaneous interaction quench of amplitude (c) $\delta g=0.5$ and (d) $\delta g=2.0$ on top of the periodically driven triple well with $\omega_{\rm{D}}=0.75$.](Fig12-eps-converted-to.pdf){width="50.00000%"}

The Computational Approach: MCTDHB
==================================

To solve the many-body Schrödinger equation $\left( {i\hbar {\partial _t} - H} \right)\Psi (x,t) = 0$ of the interacting bosons as an initial value problem $\ket{{\Psi (0)}} = \left| {{\Psi _0}} \right\rangle$, we employ the Multi-Configuration Time-Dependent Hartree method for Bosons (MCTDHB) [@Alon; @Alon1; @Streltsov]. The latter constitutes an efficient and accurate method both for the stationary properties and the non-equilibrium quantum dynamics of systems consisting of a single bosonic species and has already been applied to a wide set of problems, see e.g. [@Streltsov; @Streltsov1; @Alon2; @Alon3]. The wavefunction is represented by a set of variationally optimized time-dependent orbitals, which implies an optimal truncation of the Hilbert space in terms of a time-dependent moving basis in which the system is instantaneously optimally represented by time-dependent permanents. Thus, the many-body wavefunction, which is expanded in terms of the bosonic number states $\left| {{n_1},{n_2},...,{n_M};t} \right\rangle$ based on time-dependent single-particle functions (SPFs) $\left| \phi_{i}(t) \right\rangle$, $i=1,2,...,M$, reads $$\label{eq:10}\left| {\Psi (t)} \right\rangle = \sum\limits_{\vec n } {{C_{\vec n }}(t)\left| {{n_1},{n_2},...,{n_M};t} \right\rangle }.$$ Here $M$ is the number of SPFs and the summation over $\vec n$ runs over all possible combinations $n_{i}$ such that the total number of bosons $N$ is conserved. Note that in the limit in which $M$ approaches the number of grid points the above expansion is equivalent to a full configuration interaction approach.
However, in the case of $M=1$ the many-body wavefunction is given by a single permanent $\ket{n_{1}=N;t}$ and the method reduces to the time-dependent Gross-Pitaevskii equation. To determine the time-dependent wave function $\left| \Psi(t) \right\rangle$ we need the equations of motion for the coefficients ${{C_{\vec n }}(t)}$ and for the SPFs $\left| \phi_{i}(t) \right\rangle$. Following e.g. the Dirac-Frenkel [@Frenkel; @Dirac] variational principle, i.e. ${\bra{\delta \Psi}}{i{\partial _t} - \hat{ H}\ket{\Psi }}=0$, we end up with the well-known MCTDHB equations of motion [@Alon; @Streltsov; @Alon1], consisting of a set of $M$ non-linear integrodifferential equations of motion for the orbitals which are coupled to the $\frac{(N+M-1)!}{N!(M-1)!}$ linear equations of motion for the coefficients. Finally, let us remark that in terms of our implementation we use an extended version of MCTDHB, referred to in the literature as the Multi-Layer Multi-Configuration Time-Dependent Hartree method for Bosons (ML-MCTDHB) [@Cao; @Kronke]. This package is particularly suitable for treating systems consisting of different bosonic species, while for the case of a single species it reduces to MCTDHB. For our numerical implementation a discrete variable representation (DVR) for the SPFs, specifically a sine-DVR which intrinsically introduces hard-wall boundaries at both edges of the potential, has been employed. The preparation of the initial state has been performed by using the so-called relaxation method, in terms of which one obtains the lowest eigenstates of the corresponding $m$-well setup. The key idea is to propagate some trial wave function ${\Psi ^{(0)}}(x)$ by the non-unitary operator ${e^{ - H\tau }}$. This is equivalent to an imaginary time propagation and, for $\tau \to \infty $, the propagation converges to the ground state, as the contributions of all excited states ($\propto e^{-E_n\tau}$) are exponentially suppressed relative to the ground state.
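The size of the coefficient vector quoted above, $(N+M-1)!/(N!\,(M-1)!)$, is easily checked for the settings used in this work; a brief sketch:

```python
from math import comb

def n_configurations(n_bosons, n_spfs):
    """Number of bosonic number states (permanents) for N bosons in M
    single-particle functions: (N+M-1)! / (N! (M-1)!) = C(N+M-1, N)."""
    return comb(n_bosons + n_spfs - 1, n_bosons)

n_configurations(4, 12)   # triple well: 4 bosons, 12 SPFs -> 1365
n_configurations(5, 11)   # eleven wells: 5 bosons, 11 SPFs -> 3003
```

The combinatorial growth of this count with $N$ and $M$ is what makes the variationally optimized, truncated SPF basis essential.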
In turn, we periodically drive the optical lattice, perform a quench on the strength of the interparticle repulsion and study the evolution of $\Psi ({x_1},{x_2},..,{x_N};t)$ in the $m$-well potential within MCTDHB. Within our simulations the following overlap criteria are fulfilled: $\left| \langle \Psi | \Psi \rangle - 1 \right| < 10^{-9}$ and $\left| \langle \varphi_i | \varphi_j \rangle - \delta_{ij} \right| < 10^{-10}$ for the total wavefunction and the SPFs, respectively. Furthermore, to ensure the convergence of our simulations we have used up to 12(11) optimized single particle functions for the triple-(eleven) well, thereby observing a systematic convergence of our results for sufficiently large spatial grids. In particular, we have used 350 spatial grid points in the case of a triple-well and 800 spatial grid points for the eleven-well potential. In the following, let us briefly demonstrate the convergence procedure concerning our simulations, either with an increasing number of SPFs (and a fixed number of 350 grid points) or for a varying number of grid points and a fixed number of SPFs, $M=12$. Figure 11 shows the fidelity evolution for different numbers of SPFs, namely $M=8,10,12$, for the driven triple well at driving frequencies $\omega_{\rm{D}}=2.5$, $\omega_{\rm{D}}=7.5$ (see Figures 11(a), (b) respectively) and $F_{\omega_{\rm{D}}}(t)$ obtained by employing simultaneous interaction quenches with amplitudes $\delta g=0.5$, $\delta g=2.0$ on top of the driving, $\omega_{\rm{D}}=0.75$ (see Figures 11(c), (d) respectively). A systematic convergence of the fidelity evolution (for $M>8$) is observed for an increasing number of SPFs. For instance, the maximum deviation (at $\omega_{\rm{D}}=2.5$) observed in the fidelity evolution (see Figure 11(a)) calculated using 8 and 12 SPFs respectively is of the order of $0.3\%$ at large evolution times ($t>200$).
Furthermore, in order to show the convergence with an increasing number of grid points, Figure 12 presents the fidelity evolution of the driven triple well at $\omega_{\rm{D}}=2.5$ and $\omega_{\rm{D}}=7.5$ (see Figures 12(a), (b) respectively) and upon performing interaction quenches with $\delta g=0.5$ and $\delta g=2.0$ on top of the driven triple well, $\omega_{\rm{D}}=0.75$ (see Figures 12(c), (d) respectively). Again, we observe convergence for an increasing number of grid points (especially for grids with more than 300 spatial grid points). For instance, the maximum deviation (at $\omega_{\rm{D}}=2.5$) observed in the fidelity evolution (see Figure 12(a)) calculated using 300 and 350 grid points (and 12 SPFs) is of the order of $0.1\%$ at large evolution times ($t>250$). The same analysis has also been performed for the eleven-well case (omitted here for brevity), showing the same behavior. Another criterion that confirms the achieved convergence is that the population of the least occupied natural orbital is kept in each case below $0.1\%$. Acknowledgments {#acknowledgments .unnumbered} =============== The authors gratefully acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG) in the framework of the SFB 925 “Light induced dynamics and control of correlated quantum systems”. The authors thank C.V. Morfonios for fruitful discussions. [60]{} N. Goldman, and J. Dalibard, Phys. Rev. X **4**, 031027 (2014). N. Goldman, J. Dalibard, M. Aidelsburger, and N. R. Cooper, Phys. Rev. A **91**, 033632 (2015). O. Morsch, and M. Oberthaler, Rev. Mod. Phys. **78**, 179 (2006). I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. **80**, 885 (2008). A. Polkovnikov, K. Sengupta, A. Silva, and M. Vengalattore, Rev. Mod. Phys. **83**, 863 (2011). M. B. Dahan, E. Peik, J. Reichel, Y. Castin, and C. Salomon, Phys. Rev. Lett. **76**, 4508 (1996). O. Morsch, J. H. M[ü]{}ller, M. Cristiani, D. Ciampini, and E. Arimondo, Phys. Rev. Lett. 
**87**, 140402 (2001). T. Hartmann, F. Keck, H. J. Korsch, and S. Mossmann, New J. Phys. **6**, 2 (2004). A. Eckardt, C. Weiss, and M. Holthaus, Phys. Rev. Lett. **95**, 260404 (2005). W. Zheng, and H. Zhai, Phys. Rev. A **89**, 061603 (2014). J. Struck, C. [Ö]{}lschl[ä]{}ger, M. Weinberg, P. Hauke, J. Simonet, A. Eckardt, M. Lewenstein, K. Sengstock and P. Windpassinger, Phys. Rev. Lett. **108**, 225304 (2012). C. V. Parker, L. C. Ha, and C. Chin, Nature Phys. **9**, 769 (2013). S. Choudhury, and E. J. Mueller, Phys. Rev. A **90**, 013621 (2014). P. I. Schneider, and A. Saenz, Phys. Rev. A **85**, 050304 (2012). M. Cheneau, P. Barmettler, D. Poletti, M. Endres, P. Schauß, T. Fukuhara, C. Gross, I. Bloch, C. Kollath, and S. Kuhr, Nature **481** 484 (2012). S.S. Natu, and E. J. Mueller, Phys. Rev. A **87**, 053607 (2013). W. H. Zurek, U. Dorner, and P. Zoller, Phys. Rev. Lett. **95**, 105701 (2005). D. Chen, M. White, C. Borries, and B. DeMarco, Phys. Rev. Lett. **106**, 235304 (2011). M. Rigol, V. Dunjko, and M. Olshanii, Nature **452**, 854 (2008). E. Altman, and A. Auerbach, Phys. Rev. Lett. **89**, 250404 (2002). W. Kohn, Phys. Rev. **123**, 1242 (1961). M. Bonitz, K. Balzer, and R. Van Leeuwen, Phys. Rev. B **76**, 045341 (2007). J. W. Abraham, and M. Bonitz, Contributions to Plasma Physics **54**, 27 (2014). S. Bauch, K. Balzer, C. Henning, and M. Bonitz, Phys. Rev. B **80**, 054515 (2009). J. W. Abraham, K. Balzer, D. Hochstuhl, and M. Bonitz, Phys. Rev. B **86**, 125112 (2012). R. Schmitz, S. Kr[ö]{}nke, L. Cao, and P. Schmelcher, Phys. Rev. A **88**, 043601 (2013). S. Peotta, D. Rossini, M. Polini, F. Minardi, and R. Fazio, Phys. Rev. Lett. **110**, 015302 (2013). H. Lignier, C. Sias, D. Ciampini, Y. Singh, A. Zenesini, O. Morsch, and E. Arimondo, Phys. Rev. Lett. **99**, 220403 (2007). C. Sias, H. Lignier, Y. P. Singh, A. Zenesini, D. Ciampini, O. Morsch, and E. Arimondo, Phys. Rev. Lett. **100**, 040404 (2008). E. Haller, R. Hart, M. J. Mark, J. G. 
Danzl, L. Reichs[ö]{}llner, and H. C. N[ä]{}gerl, Phys. Rev. Lett. **104**, 200403 (2010). Y. A. Chen, S. Nascimbène, M. Aidelsburger, M. Atala, S. Trotzky, and I. Bloch, Phys. Rev. Lett. **107**, 210405 (2011). S. Rosi, A. Bernard, N. Fabbri, L. Fallani, C. Fort, M. Inguscio, T. Calarco, and S. Montangero, Phys. Rev. A **88**, 021601 (2013). C. Brif, R. Chakrabarti, and H. Rabitz, Adv. Chem. Phys. **148**, 1 (2012). C. Brif, R. Chakrabarti, and H. Rabitz, New J. Phys. **12**, 075008 (2010). S. I. Mistakidis, T. Wulf, A. Negretti, and P. Schmelcher, J. Phys. B: At. Mol. Opt. Phys. **48**, 244004 (2015). S. I. Mistakidis, L. Cao, and P. Schmelcher, J. Phys. B: At. Mol. Opt. Phys. **47**, 225303 (2014). S. I. Mistakidis, L. Cao, and P. Schmelcher, Phys. Rev. A **91**, 033611 (2014). S. F[ö]{}lling, S. Trotzky, P. Cheinet, M. Feld, R. Saers, A. Widera, T. M[ü]{}ller, and I. Bloch, Nature **448**, 7157 (2007). O. E. Alon, A. I. Streltsov, and L. S. Cederbaum, J. Chem. Phys. **127**, 154103 (2007). O. E. Alon, A. I. Streltsov, and L. S. Cederbaum, Phys. Rev. A **77**, 033613 (2008). M. Olshanii, Phys. Rev. Lett. **81**, 938 (1998). T. K[ö]{}hler, K. Goral, and P. S. Julienne, Rev. Mod. Phys. **78**, 1311 (2006). C. Chin, R. Grimm, P. Julienne, and E. Tiesinga, Rev. Mod. Phys. **82**, 1225 (2010). J. I. Kim, V. S. Melezhik, and P. Schmelcher, Phys. Rev. Lett. **97**, 193203 (2006). P. Giannakeas, F. K. Diakonos, and P. Schmelcher, Phys. Rev. A **86**, 042703 (2012). P. Giannakeas, V. S. Melezhik, and P. Schmelcher, Phys. Rev. Lett. **111**, 183201 (2013). S. Klaiman, and O. E. Alon, Phys. Rev. A **91**, 063613 (2015). S. Klaiman, A. I. Streltsov, and O. E. Alon, Phys. Rev. A **93**, 023605 (2016). J. P. Ronzheimer, M. Schreiber, S. Braun, S. S. Hodgman, S. Langer, I. P. McCulloch, F. Heidrich-Meisner, I. Bloch, and U. Schneider, Phys. Rev. Lett. **110**, 205301 (2013). U. M. Titulaer, and R. J. Glauber, Phys. Rev. **140**, 676 (1965). M. Naraschewski, and R. J. 
Glauber, Phys. Rev. A **59**, 4595 (1999). K. Sakmann, A. I. Streltsov, O. E. Alon, and L. S. Cederbaum, Phys. Rev. A **78**, 023615 (2008). R. W. Spekkens, and J. E. Sipe, Phys. Rev. A **59**, 3868 (1999). S. Klaiman, N. Moiseyev, and L. S. Cederbaum, Phys. Rev. A **73**, 013622 (2006). E. J. Mueller, T. L. Ho, M. Ueda, and G. Baym, Phys. Rev. A **74**, 033612 (2006). K. Sakmann, A. I. Streltsov, O. E. Alon, and L. S. Cederbaum, Phys. Rev. A **89**, 023602 (2014). O. Penrose, and L. Onsager, Phys. Rev. **104**, 576 (1956). I. Brouzos, S. Z[ö]{}llner, and P. Schmelcher, Phys. Rev. A **81**, 053613 (2010). O. I. Streltsova, O. E. Alon, L. S. Cederbaum, and A. I. Streltsov, Phys. Rev. A **89**, 061602 (2014). A. I. Streltsov, O. E. Alon, and L. S. Cederbaum, Phys. Rev. Lett. **99**, 030402 (2007). A. I. Streltsov, K. Sakmann, O. E. Alon, and L. S. Cederbaum, Phys. Rev. A **83**, 043604 (2011). O. E. Alon, A. I. Streltsov, and L. S. Cederbaum, Phys. Rev. A **76**, 013611 (2007). O. E. Alon, A. I. Streltsov, and L. S. Cederbaum, Phys. Rev. A **79**, 022503 (2009). J. Frenkel, in Wave Mechanics, 1st ed. (Clarendon Press, Oxford, 1934), pp. 423-428. P. A. M. Dirac, Proc. Camb. Phil. Soc. **26**, 376 (1930). L. Cao, S. Kr[ö]{}nke, O. Vendrell, and P. Schmelcher, J. Chem. Phys. **139**, 134103 (2013). S. Kr[ö]{}nke, L. Cao, O. Vendrell, and P. Schmelcher, New J. Phys. **15**, 063018 (2013).
--- abstract: 'The first observation is made of hadronic string breaking due to dynamical fermions in zero temperature lattice QCD. The simulations are done for SU(2) color in three dimensions, with two flavors of staggered fermions. The results have clear implications for the large scale simulations that are being done to search (so far, without success) for string breaking in four-dimensional QCD. In particular, string breaking is readily observed using only Wilson loops to excite a static quark-antiquark pair. Improved actions on coarse lattices are used, providing an extremely efficient means to access the quark separations and propagation times at which string breaking occurs.' address: | Newman Laboratory of Nuclear Studies, Cornell University, Ithaca, NY 14853-5001, and\ Physics Department, Simon Fraser University, Burnaby, B.C., Canada V5A 1S6[@Permanent] author: - 'Howard D. Trottier' title: 'String breaking by dynamical fermions in three-dimensional lattice QCD' --- {#section .unnumbered} The string picture of quark confinement predates Quantum Chromodynamics, and continues to play a central role in theoretical efforts to characterize the physics of QCD [@Kuti]. Simulations of quenched lattice QCD have in fact demonstrated that color-electric field lines connecting a static quark and antiquark are squeezed into a narrow tube [@Michael], in accord with many models of the linearly rising static potential. In QCD with dynamical fermions the flux-tube joining static quarks is expected to be unstable against fission at large $R$, where there is enough energy in the fields to materialize light quarks from the vacuum, which bind to the heavy quarks to form a pair of color-neutral bound states. 
The simplest indicator of string breaking is provided by the static quark-antiquark potential $V(R)$, which should approach a constant at large $R$ $$V(R\to\infty) = 2 M_{Q\bar q} , \label{Vbreak}$$ where $M_{Q\bar q}$ is the mass of a bound state of a heavy quark and a light antiquark. There is as yet no convincing evidence of string breaking in lattice simulations at zero temperature, despite extensive simulations by several large scale collaborations [@Kuti; @CPPACS; @UKQCD; @SESAMTchiL; @DeTar]. A recent conjecture [@Gusken; @Drummond], which has received widespread attention, holds that the failure to detect the broken string state is due to a very poor overlap of the Wilson loop operator with the true ground state of the system at large $R$ (Wilson loops have been used almost exclusively to construct trial states in these simulations). The possibility has been raised that the broken string state may even be unobservable with Wilson loops for all practical purposes [@CPPACS]. This picture has recently stimulated interest in simulations using other operators to excite the system [@Wittig; @Koniuk]. Evidence for a somewhat different point of view is presented in this paper. The task of observing string breaking in full QCD is computationally very challenging, given the high cost of simulating dynamical fermions. These simulations have so far been done on lattices with a relatively fine grid. Consequently, only a fairly limited range of quark separations $R$ has been explored and, more importantly, the propagation of the trial states created by Wilson loops has only been studied for rather short Euclidean times $T$ (e.g., $T=0.4$–$0.8$ fm in Ref. [@CPPACS]). The restriction to such short propagation times precludes a definitive assessment of whether string breaking occurs in these states. This limitation may be substantially alleviated by working on coarser lattices with improved actions. 
This proposal is assessed here by doing simulations in three-dimensional lattice QCD (QCD$_3$) with two flavors of dynamical staggered fermions. QCD$_3$ provides an excellent laboratory for studying the physics of realistic QCD (for a review see Ref. [@Teper]). Quenched QCD$_3$ has been shown to exhibit linear confinement, flux-tube formation, a deconfining phase transition, and a rich glueball spectrum, and spontaneous breakdown of a “chiral” symmetry has been observed in both the quenched and unquenched theories [@Kogut]. The first observation of string breaking by dynamical fermions is reported here, using only Wilson loop operators to excite the static quark-antiquark system [@Prelim]. Although these simulations are done in QCD$_3$, the results have clear implications for work on the four-dimensional theory. In particular the simulations are done on coarse lattices, using an improved gluon action that is accurate through $O(a^2)$, and a staggered fermion action that is accurate through $O(a)$. This allows the system to be much more efficiently probed at the physical separations and propagation times relevant to string breaking. Moreover, the overlap of an operator with the lowest-lying state generally increases with the coarseness of the lattice, due to the suppression of higher momentum modes in the trial state. In contrast, recent studies of QCD$_3$ with dynamical scalar matter fields [@Wittig], and adjoint matter fields [@Poulis], using unimproved actions on lattices with about half the spacing considered here, did not resolve string breaking in Wilson loops, presumably because insufficient propagation times were attained. Using the quenched string tension to set the “physical” length scale for QCD$_3$, these results suggest that string breaking occurs in Wilson loops at modest propagation times, somewhat in excess of 1 fm (on modestly coarse lattices), which is not much beyond what has been reached in four dimensions. 
The physical quark separation $R\approx1.5$ fm at which string breaking is found in QCD$_3$ is consistent with expectations in four dimensions [@Sommer]. Lattice spacings of about 0.2–0.3 fm are used here; these lattices are about twice as coarse as those that have been used in four-dimensional studies of string breaking. Comparable lattice spacings could be reliably accessed in realistic QCD using improved actions [@Alford], resulting in an enormous improvement in computational efficiency. To begin with, consider the pure gauge theory in three dimensions. To reduce the computational cost SU(2) color is considered. Simulations were done using a tree-level $O(a^2)$-accurate gluon action, allowing for different spatial and temporal lattices spacings, $a_s$ and $a_t$ respectively [@Symanzik] $${\cal S}_{\rm imp} = - \beta \! \sum_{x,\mu>\nu} \! \xi_{\mu\nu} \left[ \case{5}{3} P_{\mu\nu} - \case{1}{12} (R_{\mu\nu} + R_{\nu\mu}) \right] . \label{Simp}$$ $P_{\mu\nu}$ is one-half the trace of the $1\times1$ plaquette and $R_{\mu\nu}$ is one-half the trace of the $2\times1$ rectangle in the $\mu\times\nu$ plane. The bare QCD coupling $g_0^2$ (of dimension $1$) enters through the dimensionless parameter $\beta = 4/g_0^2a_s$, and the bare lattice anisotropy is input through $\xi_{3i} = \xi_{i3} = a_s/a_t$ and $\xi_{ij} = a_t/a_s$ ($i,j=1,2$). QCD$_3$ is a super-renormalizable theory; in fact the bare coupling $g_0^2$ and bare quark masses $m_0$ remain finite as the ultraviolet regulator is removed. Since $g_0^2$ and $m_0$ are cutoff-independent in the continuum limit, one obtains extremely simple scaling laws for physical quantities in lattice QCD$_3$. In particular, masses in lattice units (including the input bare quark masses) should satisfy $$\beta a m = \mbox{constant} \label{scaling}$$ in the continuum limit. The improved action $S_{\rm imp}$ differs from the continuum theory at the tree-level by terms of $O(a^4)$; one-loop corrections induce errors of $O(g_0^2 a^3)$. 
Results for the quenched string tension obtained with $S_{\rm imp}$ are shown in Fig. \[fig:sigma\]; for comparison, results obtained by Teper [@Teper] for the unimproved Wilson gluon action are also shown (the dashed lines show the continuum limit estimated in [@Teper]). A comparison of the potential computed from the two actions on lattices of comparable spacing is shown in Fig. \[fig:qpot\]. Simulations of $S_{\rm imp}$ were done on lattices with $\beta=2$, 2.5, and 3, all with anisotropy $a_t/a_s=1/4$. The lattice volumes range from $16^3$ at $\beta=2$ to $24^3$ at $\beta=3$. Hybrid molecular dynamics were used to generate the configurations (the $\Phi$ algorithm [@Gottlieb] was used to generate the unquenched configurations analyzed below); 50 molecular dynamics steps were taken with step size $\Delta t = 0.02$. A standard fuzzing procedure was employed on the link variables used in Wilson loop measurements. Integrated autocorrelation times satisfied $\tau_{\rm int} \lesssim 0.5$, with typically 5–10 trajectories skipped between measurements. A superficially surprising aspect of these results is the $O(a)$ scaling violation evident in the Wilson action data for $\sqrt\sigma / g_0^2$. In fact this demonstrates the need to renormalize the bare coupling at finite $a$ [@Lepage]. In four dimensions it is known that perturbative expansions in the bare lattice coupling are spoiled by large renormalizations. A renormalized coupling defined by a physical quantity absorbs these large corrections, and greatly improves perturbation theory for many quantities [@LepMac]. In lattice QCD$_3$ one expects to find an $O(g_0^2 a)$ renormalization of the bare coupling at one-loop order. This produces the linear scaling violation in the unimproved results for $\sqrt\sigma / g_0^2$. On the other hand this effect can be removed by using a physical quantity to define a renormalized scale. 
For example, Teper has done simulations of three-dimensional glueball masses $m_G$ [@Teper], and showed that the ratios $m_G / \sqrt\sigma$ exhibit $O(a^2)$ scaling violations. The distinctive signature of the renormalization of the bare coupling in QCD$_3$ exposes an interesting feature of the action. One might expect to see a reduction in the renormalization of $g_0^2$ when $S_{\rm imp}$ is used, compared to simulations with the Wilson action. Remarkably one finds that the renormalization is in fact almost completely eliminated; as seen in Fig. \[fig:sigma\] the error in $\sqrt\sigma / g_0^2$ at $\beta=2$ is reduced from about 45% with the Wilson action, to less than about 5% with $S_{\rm imp}$. This is also apparent from the data in Fig. \[fig:qpot\], where the potential $V$ and separation $R$ from the lattices at $\beta=2$ and 3 are compared on a common scale, set by $g_0^2$. This is a genuinely surprising result, since all of the higher dimension operators present in $S_{\rm imp}$ should contribute to a leading $O(g_0^2 a)$ renormalization of the bare coupling. Apparently the operator series in this effective action converges very rapidly, even at scales near the lattice cutoff. One can therefore conveniently measure physical quantities directly in terms of the bare coupling when the improved action Eq. (\[Simp\]) is used. It is important to note however that the use of $S_{\rm imp}$ would prove advantageous even if $g_0^2$ required renormalization; one would simply use a measured quantity (such as the quenched string tension) to set the scale for other observables, and the leading $O(a^2)$ errors in the Wilson action would thus be removed. Simulations with dynamical Kogut-Susskind fermions were done to look for effects of string breaking. 
The staggered fermion action ${\cal S}_{\rm KS}$ in three dimensions [@Burden] is identical in form to the four-dimensional theory, and describes two flavors of four-component spinors $$\begin{aligned} {\cal S}_{\rm KS} = \sum_{x,\mu} \eta_\mu(x) \bar\chi(x) \bigl[ U_\mu(x) \chi(x+\hat\mu) \ \ \ \ \ \ & & \nonumber \\ - U^\dagger_\mu(x-\hat\mu) \chi(x-\hat\mu) \bigr] + 2 am_0 \sum_x \bar\chi(x) \chi(x) , & & \label{Sstagg}\end{aligned}$$ where $\eta_\mu(x) = (-1)^{x_1 + \ldots + x_{\mu-1}}$ is the usual staggered fermion phase. Unquenched simulations were done with improved glue on isotropic lattices ($a_t = a_s$) at $\beta=2$ and 3, on $12^2\times8$ and $16^2\times10$ volumes respectively [@Temp]. Isotropic lattices for the unquenched simulations were used in order to probe the longest propagation times $T$ possible, for the least computational cost (although measurements over a finer range of $T$ on anisotropic lattices would be advantageous for fitting and possible extraction of excited state energies). The input bare quark mass in lattice units was scaled according to $m_0/g_0^2 = 0.075$ \[cf. Eq. (\[scaling\])\], with $m_\pi / m_\rho$ found to be $\approx 0.75$. Approximately 6,000 measurements were done at $\beta=2$, and 2,000 at $\beta=3$. The leading discretization errors in the action Eq. (\[Sstagg\]) are of $O(a^2)$ [@ImpStagg]; at finite lattice spacing the bare quark mass must also absorb an $O(g_0^2 a)$ renormalization. However these effects appear to induce little scaling violation in the static potential at the small quark mass used here, perhaps because the energy in the color fields dominates the string breaking process (significant scaling violations in the unquenched potential were seen when the Wilson gluon action was used). Results for the unquenched potential are shown in Fig. \[fig:dpot\]. 
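The staggered phase $\eta_\mu(x)$ appearing in Eq. (\[Sstagg\]) is simple enough to spell out in code; a sketch (the 1-based index convention for $\mu$ and the tuple representation of $x$ are our own choices):

```python
def eta(mu, x):
    """Staggered fermion phase (-1)^(x_1 + ... + x_{mu-1}) at site x.

    mu is 1-based (mu = 1, 2, 3 in three dimensions), so eta(1, x) = +1
    at every site: the phase only involves coordinates *before* mu.
    """
    return (-1) ** (sum(x[: mu - 1]) % 2)

# On-site phases in three dimensions at x = (1, 1, 0).
print([eta(mu, (1, 1, 0)) for mu in (1, 2, 3)])  # [1, -1, 1]
```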
The onset of string breaking is assessed by comparing the potential energy with the mass of the two $Q\bar q$ bound states into which the system should hadronize. The staggered heavy-light meson propagator [@Fiebig] was computed in the unquenched configurations at $\beta=3$, for equal valence and sea quark masses; the result is shown as the dashed lines in Fig. \[fig:dpot\]. String breaking is clearly demonstrated, with the unquenched potential approaching the expected asymptotic value \[cf. Eq. (\[Vbreak\])\]. Excellent scaling behavior is observed; note that the lattice spacing increases by 50% from $\beta=3$ to $\beta=2$. A continuum extrapolation of the string breaking distance $R_b$ can be estimated from these results, $R_b / (\beta a) \approx 2.5$. The onset of string breaking is made particularly evident by a comparison of the quenched and unquenched potentials at the same $\beta$ (note that no adjustment of the energy zero has been made in any of these results). The meaningfulness of this comparison is supported by the fact that the bare coupling $g_0^2$ undergoes little renormalization; moreover quenching effects at a given $\beta$ were found to change the $\rho$-meson mass by less than 10%. Only (fuzzy) Wilson loop operators were used in these calculations. As discussed above, little evidence of string breaking in Wilson loops has been found in previous simulations with dynamical fermions. A distinguishing feature of the simulations done here, compared with earlier work, is the relative coarseness of the lattices that were used (of course the physics of the string breaking process could be substantially different in three dimensions; on the other hand, simulations of QCD$_3$ on much finer lattices [@Wittig; @Poulis] did not resolve string breaking in Wilson loops). Using coarse lattices made it much easier to measure Wilson loops $W(R,T)$ at propagation times $T$ large enough to accurately identify the ground state energy at large $R$. 
Plots of the time-dependent effective potential $V(R,T) = -\ln[W(R,T) / W(R,T-1)]$ are given in Fig. \[fig:time\], and show that one could easily be misled as to the shape of the potential if the calculation is not done at large enough $T$, particularly at large $R$. In order to have some intuition for the length scales probed in these simulations, it is instructive to use the physical value of the string tension in four dimensions, $\sqrt\sigma=0.44$ GeV, to in effect set the value of the dimensionful coupling constant in QCD$_3$. Using the quenched continuum extrapolation $\beta a \sqrt\sigma=1.33(1)$ [@Teper] one identifies, for example, $a(\beta=3) \approx 0.2$ fm. Comparable, if slightly smaller, estimates of the lattice spacing are obtained if simulation results for the quenched or unquenched $\rho$ meson mass in QCD$_3$ are identified with the physical mass. This scale setting implies a string breaking distance $R_b \approx 1.5$ fm, which is numerically very similar to estimates of $R_b$ in four dimensions [@Sommer]. The results of Fig. \[fig:time\] suggest that propagation times somewhat in excess of 1 fm are needed to adequately resolve string breaking in Wilson loops on modestly coarse lattices; notice that an appreciable rise in the potential remains even at $T \approx 0.6$ fm, which is comparable to the longest propagation times resolved in the four-dimensional simulations of Ref. [@CPPACS]. Although longer propagation times $T$ are necessary to fully isolate the ground state energy at the largest separations $R$ in Fig. \[fig:time\], these data nevertheless clearly support the onset of string breaking. This follows from a comparison of the unquenched and quenched potentials. Of particular importance is the fact that any slope that may exist in the unquenched potential data at large $R$ is clearly much smaller than the slope in the quenched data. 
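The effective-potential definition above is straightforward to evaluate from Wilson-loop data. The sketch below applies it to synthetic loop values $W(R,T) = c(R)\,e^{-V_0(R)T}$, a pure ground-state signal chosen purely for illustration; for such data $V(R,T)$ reproduces $V_0(R)$ at every $T$, whereas real loop data plateaus only once $T$ is large enough for excited-state contamination to die off.

```python
import math

def effective_potential(W, R, T):
    # V(R, T) = -ln[ W(R, T) / W(R, T-1) ]
    return -math.log(W(R, T) / W(R, T - 1))

# Synthetic single-state loops: an arbitrary overlap factor c(R) = 0.7
# cancels in the ratio, leaving the illustrative linear potential V0.
V0 = lambda R: 0.1 * R
W = lambda R, T: 0.7 * math.exp(-V0(R) * T)

print(round(effective_potential(W, 4, 6), 6))  # 0.4 = V0(4)
```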
Such a small slope, if truly present, would imply a very large change in other physical quantities due to unquenching. In fact, $m_\rho$ was found to change by less than 10% with unquenching. Hence the significant flattening of the unquenched potential data in Figs. \[fig:dpot\] and \[fig:time\], compared to the quenched data, is very strongly indicative of the onset of string breaking. To summarize, string breaking due to dynamical fermions was observed in three-dimensional lattice QCD. The results have clear implications for current work on the four-dimensional theory. In particular, string breaking was readily observed using only Wilson loop operators. The use of improved actions on coarse lattices provided a crucial advantage in accessing the quark separations and propagation times at which string breaking occurs. The use of comparably coarse lattices in four dimensions would result in an enormous improvement in computational efficiency, and could enable string breaking to be demonstrated in realistic QCD. I am indebted to Peter Lepage for several insightful discussions. I also thank R. Woloshyn, R. Fiebig, M. Alford, and M. Teper for fruitful conversations. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada. Permanent address. For a recent review see, e.g., J. Kuti, Report No. hep-lat/9811021. See C. Michael, Report No. hep-ph/9710249, and references therein. S. Aoki [*et al.*]{}, CP-PACS collaboration, Report No. hep-lat/9809185. M. Talevi [*et al.*]{}, UKQCD collaboration, Report No. hep-lat/9809182. G. Bali [*et al.*]{}, SESAM and T$\chi$L collaborations, Nucl. Phys. B (Proc. Suppl.) [**63**]{}, 209 (1998). String breaking due to dynamical fermions at nonzero temperature has recently been reported in C. DeTar [*et al.*]{}, Report No. hep-lat/9808028. S. Güsken, Nucl. Phys. B (Proc. Suppl.) [**63**]{}, 16 (1998). I. T. Drummond, Phys. Lett. B [**434**]{}, 92 (1998). O. Philipsen and H. Wittig, Phys. Rev. 
Lett. [**81**]{}, 4056 (1998). F. Knechtli and R. Sommer, Phys. Lett. B [**440**]{}, 345 (1998); C. Stewart and R. Koniuk, Report No. hep-lat/9811012. M. Teper, Phys. Rev. D [**59**]{}, 014512 (1999). E. Dagotto, A. Kocic and J.B. Kogut, Nucl. Phys. B [**362**]{}, 498 (1991). Some of this work was reported in preliminary form in H. D. Trottier, Report No. hep-lat/9809183. G. I. Poulis and H. D. Trottier, Phys. Lett. B [**400**]{}, 358 (1997). C. Alexandrou [*et al.*]{}, Nucl. Phys. B [**414**]{}, 815 (1994). See, e.g., M. Alford, T. R. Klassen, and G. P. Lepage, Phys. Rev. D [**58**]{}, 034503 (1998). K. Symanzik, Nucl. Phys. B [**226**]{}, 187 (1983); M. Lüscher and P. Weisz, Comm. Math. Phys. [**97**]{}, 59 (1985). S. Gottlieb [*et al.*]{}, Phys. Rev. D [**35**]{}, 2531 (1987). This argument is due to G. P. Lepage (private communication). See also G. D. Moore, Nucl. Phys. B [**523**]{}, 569 (1998). G.P. Lepage and P.B. Mackenzie, Phys. Rev. D [**48**]{}, 2250 (1993). C. Burden and A.N. Burkitt, Europhys. Lett. [**3**]{}, 545 (1987). Finite temperature effects should be negligible on these lattices, where $1/(N_\tau a) \lesssim 0.2 T_c$ \[using the quenched estimate of $T_c$ in M. Teper, Phys. Lett. B [**313**]{}, 417 (1993)\]. Simulations were also done with some improvement of the staggered fermion action; see Ref. [@Prelim]. A. Mihaly [*et al.*]{}, Phys. Rev. D [**55**]{}, 3077 (1997).
--- abstract: 'We study range-searching for colored objects, where one has to count (approximately) the number of colors present in a query range. The problems studied mostly involve orthogonal range-searching in two and three dimensions, and the dual setting of rectangle stabbing by points. We present optimal and near-optimal solutions for these problems. Most of the results are obtained via reductions to the approximate uncolored version, and improved data-structures for them. An additional contribution of this work is the introduction of nested shallow cuttings.' author: - | Saladi Rahul\ Department of Computer Science and Engineering\ University of Minnesota\ [*sala0198@umn.edu*]{} bibliography: - './ref.bib' title: 'Approximate Range Counting Revisited[^1]' --- Introduction ============ ![image](colored-problems.pdf) \[fig:colored-problems\] Let $S$ be a set of $n$ geometric objects in ${\mathbb{R}}^d$ which are segregated into disjoint groups (i.e., [*colors*]{}). Given a query $q\subseteq {\mathbb{R}}^d$, a color $c$ [*intersects (or is present in)*]{} $q$ if any object in $S$ of color $c$ intersects $q$; let $k$ denote the number of colors of $S$ present in $q$. In the [*approximate colored range-counting problem*]{}, the task is to preprocess $S$ into a data structure, so that for a query $q$, one can efficiently report the [*approximate*]{} number of colors present in $q$. Specifically, return any value in the range $[(1-{\varepsilon})k,(1+{\varepsilon})k]$, where ${\varepsilon}\in (0,1)$ is a pre-specified parameter. Colored range searching and its related problems have been studied before [@kn11; @nv13; @ptsnv14; @m02; @lps08; @ksv06; @gjs04; @bkmt95; @agm02; @gjs95; @jl93; @krsv07; @lp12; @lw13; @n14; @sj05]. They are known as GROUP-BY queries in the database literature. A popular variant is the [colored orthogonal range searching]{} problem: $S$ is a set of $n$ colored points in ${\mathbb{R}}^d$, and $q$ is an axes-parallel rectangle. 
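For reference, the exact version of the query is trivial to state as a brute-force computation; the data structures discussed in this paper exist precisely to answer it (approximately) in sublinear time. A sketch, with illustrative point and color names of our own:

```python
def colored_range_count(points, rect):
    """Exact number of distinct colors inside an axes-parallel rectangle.

    points: iterable of (x, y, color); rect: (x1, x2, y1, y2).
    Brute force, O(n) per query.
    """
    x1, x2, y1, y2 = rect
    return len({c for (x, y, c) in points
                if x1 <= x <= x2 and y1 <= y <= y2})

# Three points fall inside the query rectangle, but only two
# distinct colors are present, so the answer is k = 2.
pts = [(25, 50, "red"), (30, 70, "red"), (40, 60, "blue"), (70, 90, "red")]
print(colored_range_count(pts, (20, 45, 40, 100)))  # 2
```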
As a motivating example for this problem, consider the following query: “How many countries have employees aged between $X_1$ and $X_2$ while earning annually more than $Y$ rupees?". An employee is represented as a colored point $(age, salary)$, where the color encodes the country, and the query is the axes-parallel rectangle $[X_1,X_2] \times [Y,\infty)$. Previous work and background ---------------------------- In the [*standard*]{} approximate range counting problem there are no colors. One is interested in the approximate number of objects intersecting the query. Specifically, if $k$ is the number of objects of $S$ intersecting $q$, then return a value in the range $[(1-{\varepsilon})k,(1+{\varepsilon})k]$. #### ${\varepsilon}$-approximations. In the [*additive-error ${\varepsilon}$-approximation*]{}, a set $Z \subseteq S$ is picked such that, given a query $q$, we [*only*]{} inspect $Z$ and return a value which lies in the range $[k-{\varepsilon}n,k+{\varepsilon}n]$. Vapnik and Chervonenkis [@vc71] proved that a random sample $Z$ of size $O(\frac{\delta}{{\varepsilon}^2}\log\frac{\delta}{{\varepsilon}})$ provides an ${\varepsilon}$-approximation with good probability, where $\delta$ is the VC-dimension ($\delta$ is usually a constant). #### Relative $(p,{\varepsilon})$-approximation. Har-Peled and Sharir [@hs11] introduced the notion of [*relative $(p,{\varepsilon})$-approximation*]{} for geometric settings. The goal is to pick a [*small*]{} set $Z \subset S$ which can be used to compute a relative approximation for queries with large value of $k$. Formally, given a parameter $p\in (0,1)$, a set $Z \subset S$ is a relative $(p,{\varepsilon})$-approximation if: $$\begin{aligned} |Z\cap q|\cdot \frac{n}{|Z|} &\in \begin{cases} [(1-{\varepsilon})k,(1+{\varepsilon})k] \quad \quad \text{if } k\geq pn \\ [k-{\varepsilon}pn,k+{\varepsilon}pn] \quad \quad \text{otherwise. 
} \end{cases}\end{aligned}$$ Har-Peled and Sharir prove that a sample $Z$ from $S$ of size $O\left(\frac{1}{{\varepsilon}^2 p}\left(\delta\log\frac{1}{p} + \log\frac{1}{q} \right)\right)$ will succeed with probability at least $1-q$. Har-Peled and Sharir construct relative $(p,{\varepsilon})$-approximations for point sets and halfspaces in ${\mathbb{R}}^d$, for $d\geq 2$, and use them to answer approximate counting for any query which contains more than $pn$ points. A nice feature of these results is that they are [*sensitive*]{} to the value of $k$. Specifically, the larger the value of $k$ is, the faster the query is answered. The intuition is that the larger the value of $k$ is, the larger is the error the query is allowed to make and hence, a smaller sample suffices. Even though relative $(p,{\varepsilon})$-approximations give a relative approximation only for queries with large values of $k$, Aronov and Sharir [@as10], and Sharir and Shaul [@ss11] incorporated them into data structures which give an approximate count for all values of $k$. #### General reduction to companion problems. Aronov and Har-Peled [@ah08], and Kaplan, Ramos and Sharir [@krs11] presented general techniques to answer approximate range counting queries. In both instances, the authors reduce the task of answering an approximate counting query, into answering a few queries in data-structures solving an easier [*(companion)*]{} problem. Aronov and Har-Peled’s companion problem is the emptiness query, where the goal is to report whether $|S\cap q|=0$. Specifically, assume that there is a data structure of size $S(n)$ which answers the emptiness query in $O(Q(n))$ time. Aronov and Har-Peled show that there is a data structure of size $O(S(n)\log n)$ which answers the approximate counting query in $O(Q(n)\log n)$ time (for simplicity we ignore the dependency on ${\varepsilon}$). 
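Returning to the relative $(p,{\varepsilon})$-approximation above: the estimator $|Z\cap q|\cdot n/|Z|$ is a one-liner. In the sketch below, $Z$ is a deterministic every-fourth-point "sample" purely so that the demonstration is reproducible; in practice $Z$ is a random sample of the size quoted above.

```python
def estimate_count(Z, n, q):
    # Scale the count inside the sample up by n / |Z|.
    return sum(1 for z in Z if q(z)) * n / len(Z)

# 100 points on a line; the query interval [0, 49] contains exactly k = 50.
S = list(range(100))
q = lambda z: z <= 49

Z = S[::4]                            # deterministic stand-in for a random sample
print(estimate_count(Z, len(S), q))   # 52.0, versus the true count k = 50
```

Taking $Z = S$ recovers the exact count, illustrating the trade-off between sample size and approximation error.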
Kaplan [*et al.*]{}’s companion problem is the range-minimum query, where each object of $S$ has a weight associated with it and the goal is to report the object in $S\cap q$ with the minimum weight. Even though the reductions of [@ah08] and [@krs11] seem different, there is an interesting discussion in Section $6$ of [@ah08] about the underlying “sameness" of both techniques. #### Levels. Informally, for a set $S$ of $n$ objects, a [*$t$-level*]{} of $S$ is a surface such that if a point $q$ lies above (resp., on/below) the surface, then the number of objects of $S$ containing $q$ is $>t$ (resp., $\leq t$). Range counting can be reduced in some cases to deciding the level of a query point. Unfortunately, the complexity of a single level is not well understood. For example, for hyperplanes in the plane, the $t$-level has super-linear complexity $\Omega(n 2^{\sqrt{\log t}})$ [@t00] in the worst-case (the known upper bound is $O(n t^{1/3})$ [@d98] and closing the gap is a major open problem). In particular, the prohibitive complexity of such levels makes them inapplicable for the approximate range counting problem, where one shoots for linear (or near-linear) space data-structures. #### Shallow cuttings. A *$t$-level shallow cutting* is a set of simple cells, that lies strictly below the $2t$-level, and their union covers all the points below (and on) the $t$-level. For many geometric objects in two and three dimensions, such $t$-shallow cuttings have $O(n/t)$ cells [@aes99]. Using such cuttings leads to efficient data-structures for approximate range counting. Specifically, one uses binary search on a “ladder” of approximate levels (realized via shallow cuttings) to find the approximation. For halfspaces in ${\mathbb{R}}^3$, Afshani and Chan [@ac09] avoid doing the binary search and find the two consecutive levels in optimal $O(\log \frac{n}{k})$ expected time. 
Later, Afshani, Hamilton and Zeh [@ahz10] obtained a worst-case optimal solution for many geometric settings. Interestingly, their results hold in the pointer machine model, the I/O-model and the cache-oblivious model. However, in the word-RAM model their solution is not optimal and the query time is $\Omega(\log\log U + (\log\log n)^2)$.

#### Specific problems.

Approximate counting for orthogonal range searching in ${\mathbb{R}}^2$ was studied by Nekrich [@n14], and by Chan and Wilkinson [@cw13], in the word-RAM model. In this setting, the input set is points in ${\mathbb{R}}^2$ and the query is a rectangle in ${\mathbb{R}}^2$. A hyper-rectangle in ${\mathbb{R}}^d$ is [*$(d+k)$-sided*]{} if it is bounded on both sides in $k$ out of the $d$ dimensions and unbounded on one side in the remaining $d-k$ dimensions. Nekrich [@n14] presented a data structure for approximate colored $3$-sided range searching in ${\mathbb{R}}^2$, where the input is points and the query is a $3$-sided rectangle in ${\mathbb{R}}^2$. However, it has an approximation factor of $(4+{\varepsilon})$, whereas we are interested in obtaining a tighter approximation factor of $(1+{\varepsilon})$. To the best of our knowledge, this is the only work directly addressing an approximate colored counting query.

Motivation
----------

#### Avoiding expensive counting structures.

A search problem is decomposable if, given two disjoint sets of objects $S_1$ and $S_2$, the answer to $F(S_1\cup S_2)$ can be computed in constant time from the answers to $F(S_1)$ and $F(S_2)$ separately. This property is widely used in the literature [@ae98] for counting in standard problems (going back to the work of Bentley and Saxe [@bs80] in the late 1970s). For colored counting problems, however, $F(\cdot)$ is not decomposable: if $F(S_1)$ (resp., $F(S_2)$) reports $n_1$ (resp., $n_2$) colors, this information is insufficient to compute $F(S_1 \cup S_2)$, as the two sets might have colors in common.
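A three-line illustration of this failure, with $F(\cdot)$ counting distinct colors (the color names are of course arbitrary):

```python
# F(S1) = F(S2) = 2, yet F(S1 ∪ S2) depends on how many colors are shared:
S1 = {"red", "blue"}
S2 = {"blue", "green"}   # shares "blue" with S1
S3 = {"cyan", "green"}   # disjoint from S1
assert len(S1 | S2) == 3  # not 2 + 2
assert len(S1 | S3) == 4  # the same per-part counts now give 4
```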
As a result, for [*many exact*]{} colored counting queries the known space and query time bounds are expensive. For example, for colored orthogonal range searching problem in ${\mathbb{R}}^d$, existing structures use $O(n^d)$ space to achieve polylogarithmic query time [@krsv07]. Any substantial improvement in the preprocessing time [*and*]{} the query time would lead to a substantial improvement in the best exponent of matrix multiplication [@krsv07] (which is a major open problem). Similarly, counting structures for colored halfspace counting in ${\mathbb{R}}^2$ and ${\mathbb{R}}^3$ [@gjs04] are expensive. Instead of an exact count, if one is willing to settle for an approximate count, then this work presents a data structure with $O(n \text{ polylog } n)$ space and $O(\text{polylog } n)$ query time. #### Approximate counting in the speed of emptiness. In an emptiness query, the goal is to decide if $S\cap q$ is empty. The approximate counting query is at least as hard as the emptiness query: When $k=0$ and $k=1$, no error is tolerated. Therefore, a natural goal while answering approximate range counting queries is to match the bounds of its corresponding [*emptiness query*]{}. Our results and techniques -------------------------- ### Specific problems The focus of the paper is building data structures for approximate colored counting queries, which exactly match or [*almost*]{} match the bounds of their corresponding emptiness problem. #### $3$-sided rectangle stabbing in 2d and related problems. In the colored interval stabbing problem, the input is $n$ colored intervals with endpoints in ${\left\llbracket U \right\rrbracket} = \{1,\ldots, U\}$, and the query is a point in ${\left\llbracket U \right\rrbracket}$. We present a linear-space data structure which answers the approximate counting query in $O(\log\log U)$ time. 
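For intuition, the reduction underlying this data structure (made explicit for $3$-sided rectangles in Section $2$) replaces each color class by the union of its intervals; a query point then stabs exactly one merged piece per color present, so the colored count becomes a standard stabbing count. A minimal sketch, with our own (hypothetical) function name:

```python
from collections import defaultdict

def union_per_color(intervals):
    # intervals: (color, lo, hi) triples. For each color, merge its
    # intervals into disjoint union pieces; a point stabs exactly one
    # piece per color that covers it.
    by_color = defaultdict(list)
    for c, lo, hi in intervals:
        by_color[c].append((lo, hi))
    merged = []
    for ivs in by_color.values():
        ivs.sort()
        cur_lo, cur_hi = ivs[0]
        for lo, hi in ivs[1:]:
            if lo <= cur_hi:                 # overlapping or touching
                cur_hi = max(cur_hi, hi)
            else:
                merged.append((cur_lo, cur_hi))
                cur_lo, cur_hi = lo, hi
        merged.append((cur_lo, cur_hi))
    return merged
```

On `[("r", 1, 5), ("r", 3, 8), ("b", 2, 4), ("b", 6, 9)]` the merged set is `{(1, 8), (2, 4), (6, 9)}`, and for any query point the number of stabbed pieces equals the number of distinct colors covering it.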
The new data structure can be used to handle some geometric settings in 2d: the [*colored dominance search*]{} (the input is a set of $n$ points, and the query is a $2$-sided rectangle) and the [*colored $3$-sided rectangle stabbing*]{} (the input is a set of $n$ $3$-sided rectangles, and the query is a point). The results are summarized in Table \[table:results\].

#### Range searching in ${\mathbb{R}}^2$.

The input is a set of $n$ colored points in the plane. For $3$-sided query rectangles, an [*optimal*]{} solution (in terms of $n$) for approximate counting is obtained. For $4$-sided query rectangles, an [*almost-optimal*]{} solution for approximate counting is obtained: the size of our data structure is off by a factor of $\log\log n$ w.r.t. its corresponding emptiness structure, which occupies $O(n\frac{\log n}{\log\log n})$ space and answers the emptiness query in $O(\log n)$ time [@c86]. The results are summarized in Table \[table:results\].

#### Dominance search in ${\mathbb{R}}^3$.

The input is a set of $n$ colored points in ${\mathbb{R}}^3$ and the query is a $3$-sided rectangle in ${\mathbb{R}}^3$ (i.e., an octant). An almost-optimal solution is obtained, requiring ${O_{{\varepsilon}}}(n\log^{*}n)$ space and ${O_{{\varepsilon}}}(\log n\cdot\log\log n)$ time to answer the approximate counting query (Theorem \[thm:3d-dom\]).

| Dimension | Input, Query | New Results | Previous Approx. Counting Results | Exact Counting Results | Model |
|:---:|:----|:----|:----|:----|:---:|
| $1$ | intervals, point | S: $n$, Q: $\log\log U$ (Theorem \[thm:many-colored\]) | S: $n$, Q: $\log\log U + (\log\log n)^2$ (Remark \[rem:ac-many\]) | S: $n$, Q: $\log\log U + \log_wn$ (Remark \[rem:ec-many\]) | WR |
| $2$ | points, $2$-sided rectangle | S: $n$, Q: $\log\log U$ (Theorem \[thm:many-colored\]) | S: $n$, Q: $\log\log U + (\log\log n)^2$ (Remark \[rem:ac-many\]) | S: $n$, Q: $\log\log U + \log_wn$ (Remark \[rem:ec-many\]) | WR |
| $2$ | $3$-sided rectangles, point | S: $n$, Q: $\log\log U$ (Theorem \[thm:many-colored\]) | S: $n$, Q: $\log\log U + (\log\log n)^2$ (Remark \[rem:ac-many\]) | S: $n$, Q: $\log\log U + \log_wn$ (Remark \[rem:ec-many\]) | WR |
| $2$ | points, $3$-sided rectangle | S: $n$, Q: $\log n$ (Theorem \[thm::three-sided-color\](A)) | S: $n\log^2n$, Q: $\log^2n$ (Remark \[rem:ac-3-sided\]) | not studied | PM |
| $2$ | points, $4$-sided rectangle | S: $n\log n$, Q: $\log n$ (Theorem \[thm::three-sided-color\](B)) | S: $n\log^3n$, Q: $\log^2n$ (Remark \[rem:ac-3-sided\]) | S: $n^2\log^6n$, Q: $\log^7n$ (Kaplan [*et al.*]{} [@krsv07]) | PM |
| $3$ | points, $3$-sided rectangle | S: $n\log^{*}n$, Q: $\log n\cdot\log\log n$ (Theorem \[thm:3d-dom\]) | S: $n\log^2n$, Q: $\log^2n$ (Remark \[rem:ac-3d-dom\]) | not studied | PM |

  : A summary of the results obtained for several approximate colored counting queries. To avoid clutter, the $O(\cdot)$ symbol and the dependency on ${\varepsilon}$ are not shown in the space and the query time bounds. For the second column of the table, the first entry is the input and the second entry is the query. For each of the results columns, the first entry is the space occupied by the data structure and the second entry is the time taken to answer the query. WR denotes the word-RAM model and PM denotes the pointer machine model.[]{data-label="table:results"}

For the sake of completeness, in Section \[sec:appl-sec-red\] we present results for a couple of other colored problems which have expensive exact counting structures.

### General reductions

We present two general reductions for solving approximate colored counting queries by reducing them to “easy” companion queries (Theorem \[thm::main-1\] and Theorem \[thm:accq\]).
[**Reduction-I (Reporting + $C$-approximation).**]{} In the first reduction a colored approximate counting query is answered using two companion structures: (a) [*reporting structure*]{} (its objective is to report the $k$ colors), and (b) [*$C$-approximation structure*]{} (its objective is to report any value $z$ s.t. $k \in [z,Cz]$, where $C$ is a constant). Significantly, unlike previous reductions [@ah08; @krs11], there is [*no asymptotic loss*]{} of efficiency in space and query time bounds w.r.t. to the two companion problems. [**Reduction-II (Only Reporting).**]{} The second reduction is a modification of the Aronov and Har-Peled [@ah08] reduction. We present the reduction for the following reasons: Unlike reduction-I, this reduction is “easier" to use since it uses only the reporting structure and avoids the $C$-approximation structure. The analysis of Aronov and Har-Peled is slightly complicated because of their insistence on querying emptiness structures. We show that by using reporting structures the analysis becomes simpler. This reduction is useful when the reporting query is not significantly costlier than the emptiness query. ### Our techniques The results are obtained via a non-trivial combination of several techniques. For example, (a) new reductions from colored problems to standard problems, (b) obtaining a linear-space data structure by performing random sampling on a super-linear-size data structure, (c) refinement of path-range trees of Nekrich [@n14] to obtain an optimal data structure for $C$-approximation of colored $3$-sided range search in ${\mathbb{R}}^2$, and (d) [*random sampling on colors*]{} to obtain the two general reductions. In addition, we introduce [*nested shallow cuttings*]{} for $3$-sided rectangles in 2d. The idea of using a hierarchy of cuttings (or samples) is, of course, not new. However, for this specific setting, we get a hierarchy where there is no penalty for the different levels being compatible with each other. 
Usually, cells in the lower levels have to be clipped to the cells in the higher levels of the hierarchy, leading to a degradation in performance. In our case, however, each cell of a lower level is fully contained in a cell of the level above it.

#### Paper organization.

In Section $2$, we present a solution to the colored $3$-sided rectangle stabbing in 2d problem. In Section $3$, we present a solution to the colored dominance search in ${\mathbb{R}}^3$ problem. In Sections $4$ and $5$, the two general reductions are presented. In Section $6$, the application of the first reduction to the colored orthogonal range search in ${\mathbb{R}}^2$ problem is shown. In Section $7$, applications of the second reduction are shown. Finally, we conclude in Section $8$.

$3$-sided Rectangle Stabbing in 2d {#sec:many-colored}
==================================

The goal of this section is to prove the following theorem.

\[thm:many-colored\] Consider the following three colored geometric settings:

1.  [**Colored interval stabbing in 1d**]{}, where the input is a set $S$ of $n$ colored intervals in one dimension and the query $q$ is a point. The endpoints of the intervals and the query point lie on a grid ${\left\llbracket U \right\rrbracket}$.

2.  [**Colored dominance search in 2d**]{}, where the input is a set $S$ of $n$ colored points in 2d and the query $q$ is a quadrant of the form $[q_x,\infty) \times [q_y,\infty)$. The input points and the point $(q_x,q_y)$ lie on a grid ${\left\llbracket U \right\rrbracket} \times {\left\llbracket U \right\rrbracket}$.

3.  [**Colored $3$-sided rectangle stabbing in 2d**]{}, where the input is a set $S$ of $n$ colored $3$-sided rectangles in 2d and the query $q$ is a point. The endpoints of the rectangles and the query point lie on a grid ${\left\llbracket U \right\rrbracket} \times {\left\llbracket U \right\rrbracket}$.
Then there exists an ${O_{{\varepsilon}}}(n)$ size word-RAM data structure which can answer an approximate counting query for these three settings in ${O_{{\varepsilon}}}(\log\log U)$ time. The notation ${O_{{\varepsilon}}}(\cdot)$ hides the dependency on ${\varepsilon}$.

Our strategy for proving this theorem is the following: In Subsection \[subsec:trans\], we present a transformation of these three colored problems to the [*standard*]{} $3$-sided rectangle stabbing in 2d problem. Then in Subsection \[subsec:standard\], we construct nested shallow cuttings and use them to solve the standard $3$-sided rectangle stabbing in 2d problem.

Transformation to a standard problem {#subsec:trans}
------------------------------------

From now on the focus will be on the colored $3$-sided rectangle stabbing in 2d problem, since the geometric settings (1) and (2) in Theorem \[thm:many-colored\] are its special cases. We present a transformation of the colored $3$-sided rectangle stabbing in 2d problem to the [*standard*]{} $3$-sided rectangle stabbing in 2d problem. Let $S_c \subseteq S$ be the set of $3$-sided rectangles of a color $c$. In the preprocessing phase, we perform the following steps: (1) Construct the union of the rectangles of $S_c$. Call it ${\cal U}(S_c)$. (2) The vertices of ${\cal U}(S_c)$ include original vertices of $S_c$ and some new vertices. Perform a [*vertical decomposition*]{} of ${\cal U}(S_c)$ by shooting a vertical ray upwards from every [*new*]{} vertex of ${\cal U}(S_c)$ till it hits $+\infty$. This leads to a decomposition of ${\cal U}(S_c)$ into $\Theta(|S_c|)$ pairwise-disjoint $3$-sided rectangles. Call this new set of rectangles ${\cal N}(S_c)$.

![image](trans-I.pdf) \[fig:trans-I\]

Given a query point $q$, we can make the following two observations:

- If $S_c \cap q = \emptyset$, then ${\cal N}(S_c) \cap q = \emptyset$. See query point $q_1$ in the above figure.
- If $S_c \cap q \neq \emptyset$, then exactly one rectangle in ${\cal N}(S_c)$ is stabbed by $q$. See query point $q_2$ in the above figure. Let ${\cal N}(S)=\bigcup_{\forall c} {\cal N}(S_c)$, and clearly, $|{\cal N}(S)|=O(n)$. Therefore, the colored $3$-sided rectangle stabbing in 2d problem on $S$ has been reduced to the [*standard*]{} $3$-sided rectangle stabbing in 2d problem on ${\cal N}(S)$. Standard $3$-sided rectangle stabbing in 2d {#subsec:standard} ------------------------------------------- In this subsection we will prove the following lemma. \[lemma:rect-stab\] [**(Standard $3$-sided rectangle stabbing in 2d.)**]{} In this geometric setting, the input is a set $S$ of $n$ uncolored $3$-sided rectangles of the form $[x_1,x_2] \times [y,\infty)$, and the query $q$ is a point. The endpoints of the rectangles lie on a grid ${\left\llbracket U \right\rrbracket} \times {\left\llbracket U \right\rrbracket}$. There exists a data structure of size ${O_{{\varepsilon}}}(n)$ which can answer an approximate counting query in ${O_{{\varepsilon}}}(\log\log U)$ time. By a standard rank-space reduction, the rectangles of $S$ can be projected to a ${\left\llbracket 2n \right\rrbracket} \times {\left\llbracket n \right\rrbracket}$ grid: Let $S_x$ (resp., $S_y$) be the list of the $2n$ vertical (resp., $n$ horizontal) sides of $S$ in increasing order of their $x-$ (resp., $y-$) coordinate value. Then each rectangle $r=[x_1,x_2] \times [y,\infty) \in S$ is projected to a rectangle $[rank(x_1), rank(x_2)] \times [rank(y),\infty)$, where $rank(x_i)$ (resp., $rank(y)$) is the index of $x_i$ (resp., $y$) in the list $S_x$ (resp., $S_y$). 
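The rank-space reduction just described can be sketched as follows, with plain binary search standing in for the predecessor structure (a simplified sketch of ours; distinct coordinates are assumed):

```python
from bisect import bisect_left, bisect_right

def rank_space(rects):
    # rects: (x1, x2, y) triples for 3-sided rectangles [x1, x2] x [y, +inf);
    # distinct coordinates assumed for simplicity.
    sx = sorted(x for (x1, x2, _) in rects for x in (x1, x2))  # list S_x
    sy = sorted(y for (_, _, y) in rects)                      # list S_y
    proj = [(bisect_left(sx, x1) + 1, bisect_left(sx, x2) + 1,
             bisect_left(sy, y) + 1) for (x1, x2, y) in rects]
    return proj, sx, sy

def locate(q, sx, sy):
    # Map a query point onto the [2n] x [n] grid via predecessor search.
    return bisect_right(sx, q[0]), bisect_right(sy, q[1])
```

For example, `rank_space([(3, 9, 5), (1, 7, 2)])` projects the rectangles to `(2, 4, 2)` and `(1, 3, 1)`.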
Given a query point $q \in {\left\llbracket U \right\rrbracket} \times {\left\llbracket U \right\rrbracket}$, we can use the van Emde Boas structure [@b77] to perform a predecessor search on $S_x$ and $S_y$ in $O(\log\log U)$ time to find the position of $q$ on the ${\left\llbracket 2n \right\rrbracket} \times {\left\llbracket n \right\rrbracket}$ grid. Now we will focus on the new setting and prove the following result. \[lemma:rect-stab-n\] For the standard $3$-sided rectangle stabbing in 2d problem, consider a setting where the rectangles have endpoints lying on a grid ${\left\llbracket 2n \right\rrbracket} \times {\left\llbracket n \right\rrbracket}$. Then there exists a data structure of size ${O_{{\varepsilon}}}(n)$ which can answer the approximate counting query in ${O_{{\varepsilon}}}(1)$ time. ### Nested shallow cuttings To prove Lemma \[lemma:rect-stab-n\], we will first construct shallow cuttings for $3$-sided rectangles in 2d. Unlike the general class of shallow cuttings, the shallow cuttings we construct for $3$-sided rectangles will have a stronger property of cells in the lower level lying completely inside the cells of a higher level. Let $S$ be a set of $3$-sided rectangles (of the form $[x_1,x_2] \times [y,\infty)$) whose endpoints lie on a ${\left\llbracket 2n \right\rrbracket} \times {\left\llbracket n \right\rrbracket}$ grid. A [*$t$-level shallow cutting*]{} of $S$ produces a set ${\cal C}$ of interior-disjoint $3$-sided rectangles/cells of the form $[x_1,x_2] \times (-\infty,y]$. There exists a set ${\cal C}$ with the following three properties: 1. $|{\cal C}| = 2n/t$. 2. If $q$ does not lie inside any of the cell in ${\cal C}$, then $|S\cap q| \geq t$. 3. Each cell in ${\cal C}$ intersects at most $2t$ rectangles of $S$. Partition the plane into $\frac{2n}{t}$ vertical slabs, such that $t$ vertical lines of $S$ lie in each slab, i.e., each slab has a width of $t$. See Figure \[fig:three-sided-rectangle\](a). 
Consider a slab $s=[x_1,x_2] \times (-\infty, +\infty)$. Among all the rectangles of $S$ which completely span the slab $s$, let $y_t$ be the $y$-coordinate of the rectangle with the $t$-th smallest $y$-coordinate. If fewer than $t$ rectangles of $S$ span slab $s$, then set $y_t:= +\infty$. Let the [*upper segment*]{} of the slab $s$ be the horizontal segment $[x_1,x_2] \times [y_t]$. Each slab contributes a cell $[x_1,x_2] \times (-\infty, y_t]$ to set ${\cal C}$. See Figure \[fig:three-sided-rectangle\](a).

Property $1$ is easy to verify, since $\frac{2n}{t}$ slabs are constructed. To prove Property $2$, consider a point $q$ which lies in slab $s$ but does not lie in the cell $[x_1,x_2] \times (-\infty, y_t]$. Then every spanning rectangle with $y$-coordinate at most $y_t$ contains $q$, so there are at least $t$ rectangles of $S$ which contain $q$, and hence, $|S\cap q| \geq t$. To prove Property $3$, consider a cell $r$ and its corresponding slab $s$. The rectangles of $S$ which intersect $r$ either span the slab $s$ or partially span the slab $s$. By our construction, there can be at most $t$ rectangles of $S$ of each type.

\[lemma:containment\] [*(Nested Property)*]{} Let $t$ and $i$ be integers. Consider a $t$-level and a $2^it$-level shallow cutting. By our construction, each cell in the $2^it$-level contains exactly $2^i$ cells of the $t$-level. More importantly, each cell in the $t$-level is contained inside a single cell of the $2^it$-level (see Figure \[fig:three-sided-rectangle\](a)).

![(a) A portion of the $t$-level and $2t$-level is shown. Notice that by our construction, each cell in the $t$-level is contained inside a cell in the $2t$-level. (b) A cell in the $t$-level and the set ${\cal C}_r$ associated with it. (c) A high-level summary of our data structure.[]{data-label="fig:three-sided-rectangle"}](three-sided-rectangle.pdf)

### Data structure

Now we will use nested shallow cuttings to find a constant-factor approximation for the $3$-sided rectangle stabbing in 2d problem.
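A brute-force sketch of the slab construction above, together with the ladder search over levels $t, 2t, 4t, \ldots$ that yields a constant-factor approximation. This is our own simplified code, not the paper's structure: cells are scanned linearly instead of being indexed, and slab-boundary degeneracies are sidestepped by querying at input coordinates.

```python
def t_level_cutting(rects, t):
    # rects: (x1, x2, y) triples for 3-sided rectangles [x1, x2] x [y, +inf).
    # Each slab gets t consecutive vertical sides; its cell ceiling is the
    # t-th smallest y among the rectangles spanning the slab (+inf if fewer
    # than t rectangles span it).
    xs = sorted(x for (x1, x2, _) in rects for x in (x1, x2))
    cells = []
    for i in range(0, len(xs), t):
        lo, hi = xs[i], xs[min(i + t, len(xs)) - 1]
        spanning = sorted(y for (x1, x2, y) in rects if x1 <= lo and x2 >= hi)
        ceil = spanning[t - 1] if len(spanning) >= t else float("inf")
        cells.append((lo, hi, ceil))  # cell: [lo, hi] x (-inf, ceil]
    return cells


def approx_count(levels, q):
    # levels: {t: t_level_cutting(rects, t)} for a geometric ladder of t's.
    # The smallest t whose cutting still contains q certifies k <= 2t, and
    # lying outside the previous level certifies k is at least about t/2.
    qx, qy = q
    for t in sorted(levels):
        if any(lo <= qx <= hi and qy <= ceil for (lo, hi, ceil) in levels[t]):
            return t
    return max(levels)  # q lies above every stored level, so k is large
```

The paper replaces the linear scans with an $O(1)$-time grid lookup plus a fusion-tree predecessor search on the ladder.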
In [@ahz10], the authors show how to convert a constant-factor approximation into a $(1+{\varepsilon})$-approximation for this geometric setting. The solution is based on [*$(t,t')$-level-structure*]{} and [*$(\leq\sqrt{\log n})$-level shared table*]{}. #### $(t,t')$-level structure. Let $i, t$ and $t'$ be integers s.t. $t'=2^it$. If $q(q_x,q_y)$ lies between the $t$-level and the $t'$-level cutting of $S$, then a $(t,t')$-level-structure will answer the approximate counting query in $O(1)$ time and occupy $O\left(n+\frac{n}{t}\log t'\right)$ space. Construct a shallow cutting of $S$ for levels $2^jt, \forall j\in [0,i]$. For each cell, say $r$, in the $t$-level we do the following: Let ${\cal C}_r$ be the set of cells from the $2^1t, 2^2t,2^3t,\ldots,2^it$-level, which contain $r$ (Observation \[lemma:containment\] guarantees this property). Now project the upper segment of each cell of ${\cal C}_r$ onto the $y$-axis (each segment projects to a point). Based on the $y$-coordinates of these $|{\cal C}_r|$ projected points build a fusion-tree [@fw93]. Since there are $O(n/t)$ cells in the $t$-level and $|{\cal C}_r|=O(\log t')$, the total space occupied is $O(\frac{n}{t}\log t')$. See Figure \[fig:three-sided-rectangle\](b). Since $q_x\in{\left\llbracket 2n \right\rrbracket}$, it takes $O(1)$ time to find the cell $r$ of the $t$-level whose $x$-range contains $q_x$. If the predecessor of $q_y$ in ${\cal C}_r$ belongs to the $2^jt$-level, then $2^jt$ is a constant-factor approximation of $k$. The predecessor query also takes $O(1)$ time. #### $(\leq\sqrt{\log n})$-level shared table. Suppose $q$ lies in a cell in the $\sqrt{\log n}$-level shallow cutting of $S$. Then constructing the $(\leq \sqrt{\log n})$-level shared table will answer the exact counting query in $O(1)$ time. We will need the following lemma. For a cell $c$ in the $\sqrt{\log n}$-level shallow cutting of $S$, its [*conflict list*]{} $S_c$ is the set of rectangles of $S$ intersecting $c$. 
Although the number of cells in the $\sqrt{\log n}$-level is $O\left(\frac{n}{\sqrt{\log n}}\right)$, the number of combinatorially “different" conflict lists is merely $O(\sqrt{n})$. Consider any set $S_c$ from the shallow cutting. By a standard rank-space reduction the endpoints of $S_c$ will lie on a ${\left\llbracket 2|S_c| \right\rrbracket} \times {\left\llbracket |S_c| \right\rrbracket}$ grid. Any set $S_c$ on the ${\left\llbracket 2|S_c| \right\rrbracket} \times {\left\llbracket |S_c| \right\rrbracket}$ grid can be [*uniquely*]{} represented using $O(|S_c|\log|S_c|)=O(\sqrt{\log n}\log\log n)$ bits as follows: (a) assign a [*label*]{} to each rectangle, and (b) write down the label of each rectangle in increasing order of their $y$-coordinates. The label for a rectangle $[x_1,x_2] \times [y,\infty)$ will be $``x_1 x_2"$ which requires $O(\log\log n)$ bits. The number of combinatorially different conflict lists which can be represented using $O(\sqrt{\log n}\log\log n)$ bits is bounded by $2^{O(\sqrt{\log n}\log\log n)}=O(n^{\delta})$, for an arbitrarily small $\delta<1$. We set $\delta=1/2$. [*Shared table.*]{} Construct a $\sqrt{\log n}$-level shallow cutting of $S$. For each cell $c$, perform a rank-space reduction of its conflict list $S_c$. Collect the combinatorially different conflict lists. On each conflict list, the number of combinatorially different queries will be only $O(|S_c|^2)=O(\log n)$. In a lookup table, for each pair of $(S_c,q)$ we store the exact value of $|S_c \cap q|$. The total number of entries in the lookup table is $O(n^{1/2}\log n)$. [*Query algorithm.*]{} Given a query $q(q_x,q_y)$, the following three $O(1)$ time operations are performed: (a) Find the cell $c$ in the $\sqrt{\log n}$-level which contains $q$. If no such cell is found, then stop the query and conclude that $k\geq \sqrt{\log n}$. 
(b) Otherwise, perform a rank-space reduction on $q_x$ and $q_y$ to map it to the ${\left\llbracket 2|S_c| \right\rrbracket} \times {\left\llbracket |S_c| \right\rrbracket}$ grid. Since, $|S_c|=O(\sqrt{\log n})$, we can build fusion trees [@fw93] on $S_c$ to perform the rank-space reduction in $O(1)$ time. (c) Finally, search for $(S_c,q)$ in the lookup table and report the exact count. #### Final structure. At first thought, one might be tempted to construct a $(0,n)$-level-structure. However, that would occupy $O(n\log n)$ space. The issue is that the $(t,t')$-level structure requires super-linear space for small values of $t$. Luckily, the $(\leq\sqrt{\log n})$-level shared table will efficiently handle the small values of $t$. Therefore, the strategy is to construct the following: (a) a $(\leq\sqrt{\log n})$-level shared table, (b) a $(\sqrt{\log n},\log n)$-level-structure, and (c) a $(\log n,n)$-level-structure. Now, the space occupied by all the three structures will be $O(n)$. See Figure \[fig:three-sided-rectangle\](c) for a summary of our data structure. \[rem:ac-many\] For the standard $3$-sided rectangle stabbing in 2d problem, a simple binary search on the levels leads to a linear-space data structure with a query time of ${O_{{\varepsilon}}}(\log\log U + (\log\log n)^2)$. The technique of Afshani [*et al.*]{} [@ahz10] can be used to answer this approximate counting query. However, their analysis works well for structures with query time of the form $\log n$ or $\log_Bn$, but breaks down for structures with query time of the form $\log\log n$. \[rem:ec-many\] If we want an exact count for the standard $3$-sided rectangle stabbing in 2d problem, then the problem can be reduced to exact counting for standard dominance search in 2d [@eo82]. Jaja [*et al.*]{} [@jms04] present a linear-space structure which can answer the exact counting for dominance search in 2d in ${O_{{\varepsilon}}}(\log\log U + \log_wn )$ time. 
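The canonical encoding behind the shared table — rank-space reduce a conflict list, then write the label “$x_1 x_2$” of each rectangle in increasing order of $y$ — can be sketched as follows (a sketch of ours; distinct coordinates assumed). Combinatorially identical conflict lists collapse to the same key, which is what bounds the number of distinct lookup tables.

```python
def canonical_key(conflict_list):
    # conflict_list: (x1, x2, y) triples for 3-sided rectangles
    # [x1, x2] x [y, +inf). Rank-space reduce the x-coordinates, then list
    # the label (x1, x2) of every rectangle in increasing order of y; the
    # y-ranks are implicit in the ordering.
    xs = sorted(x for (x1, x2, _) in conflict_list for x in (x1, x2))
    xr = {x: i + 1 for i, x in enumerate(xs)}
    return tuple((xr[x1], xr[x2])
                 for (x1, x2, y) in sorted(conflict_list, key=lambda r: r[2]))
```

Two geometrically different but combinatorially identical lists, e.g. `[(1, 4, 10), (2, 8, 20)]` and `[(10, 40, 1), (25, 80, 3)]`, both map to the key `((1, 3), (2, 4))` and can share one lookup table.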
Colored Dominance Search in ${\mathbb{R}}^3$ {#sec:dominance} ============================================ \[thm:3d-dom\] In the colored dominance search in ${\mathbb{R}}^3$ problem, the input set $S$ is $n$ colored points in ${\mathbb{R}}^3$ and the query $q$ is a point. Then there is a pointer machine data structure of size ${O_{{\varepsilon}}}(n\log^{*}n)$ which can answer an approximate colored counting query in ${O_{{\varepsilon}}}(\log n\cdot \log\log n)$ time. The notation ${O_{{\varepsilon}}}(\cdot)$ hides the dependency on ${\varepsilon}$. The strategy to prove this theorem is the following. First, we reduce the colored dominance search in ${\mathbb{R}}^3$ problem to a [*standard*]{} problem of $5$-sided rectangle stabbing in ${\mathbb{R}}^3$. Then in the remaining section we solve the standard $5$-sided rectangle stabbing in ${\mathbb{R}}^3$ problem. Reduction to $5$-sided rectangle stabbing in ${\mathbb{R}}^3$ {#subsec:red-colored-2} ------------------------------------------------------------- In this subsection we present a reduction of colored dominance search in ${\mathbb{R}}^3$ problem to the standard $5$-sided rectangle stabbing in ${\mathbb{R}}^3$ problem. Let $S$ be a set of $n$ colored points lying in ${\mathbb{R}}^3$. Let $S_c \subseteq S$ be the set of points of color $c$, and $p_1,p_2,\ldots,p_t$ be the points of $S_c$ in decreasing order of their $z$-coordinate value. With each point $p_i(p_{ix},p_{iy},p_{iz})$, we associate a region $\phi_i$ in ${\mathbb{R}}^3$ which satisfies the following invariant: a point $(x,y,z)$ belongs to $\phi_i$ if and only if in the region $[x,+\infty) \times [y,+\infty) \times [z,+\infty)$ the point of $S_c$ with the largest $z$-coordinate is $p_i$. 
The following assignment of regions ensures the invariant:

- $\phi_1 = (-\infty, p_{1x}] \times (-\infty, p_{1y}] \times (-\infty, p_{1z}]$

- $\phi_i = (-\infty, p_{ix}] \times (-\infty, p_{iy}] \times (-\infty, p_{iz}] \setminus \bigcup_{j=1}^{i-1} \phi_j, \forall i\in [2,|S_c|]$.

By our construction, each region $\phi_i$ is unbounded in the negative $z$-direction. Each region $\phi_i$ is broken into disjoint $5$-sided rectangles via [*vertical decomposition*]{} in the $xy$-plane (see Figure \[fig:5-sided\]). The vertical decomposition ensures that the total number of disjoint rectangles generated is bounded by $O(|S_c|)$. Now we can observe that (i) if a color $c$ has at least one point inside $q$, then exactly one of its transformed rectangles will contain $q$, and (ii) if a color $c$ has no point inside $q$, then none of its transformed rectangles will contain $q$. Therefore, the colored dominance search in ${\mathbb{R}}^3$ has been transformed to the standard $5$-sided rectangle stabbing query.

![A dataset containing four points. The projection of $\phi_1,\phi_2,\phi_3$ and $\phi_4$ onto the $xy$-plane is shown. The dashed lines are created during the vertical decomposition. Each rectangle created during the vertical decomposition is lifted back to a $5$-sided rectangle in ${\mathbb{R}}^3$. []{data-label="fig:5-sided"}](5-sided.pdf)

Initial structure
-----------------

\[lemma:subopt\] In the standard $5$-sided rectangle stabbing in ${\mathbb{R}}^3$ problem, the input is a set $S$ of $n$ $5$-sided rectangles in ${\mathbb{R}}^3$ and the query $q$ is a point. Then there exists a pointer machine data structure of size ${O_{{\varepsilon}}}(n\log\log n)$ which can answer an approximate counting query in ${O_{{\varepsilon}}}(\log n\cdot \log\log n)$ time.

The rest of the subsection is devoted to proving this lemma.

[**Recursion tree.**]{} Define a parameter $t=\log_{1+{\varepsilon}}n$. We will assume that the $5$-sided rectangles are unbounded along the $z$-axis.
Consider the projection of the rectangles of $S$ on to the $xy$-plane and impose an orthogonal ${\left\llbracket 2\sqrt{\frac{n}{t}} \right\rrbracket} \times {\left\llbracket 2\sqrt{\frac{n}{t}} \right\rrbracket}$ grid such that each horizontal and vertical slab contains the projections of $\sqrt{nt}$ sides of $S$. Call this the root of the recursion tree. Next, for each vertical and horizontal slab, we recurse on the rectangles of $S$ which are [*sent*]{} to that slab. At each node of the recursion tree, if we have $m$ rectangles in the subproblem, then $t$ is changed to $\log_{1+{\varepsilon}}m$ and the grid size changes to ${\left\llbracket 2\sqrt{\frac{m}{t}} \right\rrbracket} \times {\left\llbracket 2\sqrt{\frac{m}{t}} \right\rrbracket}$. We stop the recursion when a node has less than $c$ rectangles, for a suitably large constant $c$. [**Assignment of rectangles.**]{} For a node in the tree, the intersection of every pair of horizontal and vertical grid line defines a [*grid point*]{}. Each rectangle of $S$ is assigned to ${O_{{\varepsilon}}}(\log\log n)$ nodes in the tree. The assignment of a rectangle to a node is decided by the following three cases: [*Case-I.*]{} The $xy$-projection of a rectangle intersects none of the grid points, i.e., it lies completely inside one of the row slab or/and the column slab. Then the rectangle is not assigned to this node, but sent to the child node corresponding to the row or column the rectangle lies in. [*Case-II.*]{} The $xy$-projection of a rectangle $r$ intersects at least one of the grid points. Let $c_l$ and $c_r$ be the leftmost and the rightmost column of the grid intersected by $r$. Similarly, let $r_b$ and $r_t$ be the bottommost and the topmost row of the grid intersected by $r$. 
Then the rectangle is broken into at most five disjoint pieces: a [*grid rectangle*]{}, which is the bounding box of all the grid points lying inside $r$ (see Figure \[fig:type-II\](b)), two [*column rectangles*]{}, which are the portions of $r$ lying in column $c_l$ and $c_r$ (see Figure \[fig:type-II\](d)), and two [*row rectangles*]{}, which are the remaining portion of the rectangle $r$ lying in row $r_b$ and $r_t$ (see Figure \[fig:type-II\](c)). The grid rectangle is [*assigned*]{} to the node. Note that each column rectangle (resp., row rectangle) is now a $4$-sided rectangle in ${\mathbb{R}}^3$ w.r.t. the column (resp., row) it lies in, and is sent to its corresponding child node. ![[]{data-label="fig:type-II"}](type-II.pdf) [*Case-III.*]{} The $xy$-projection of a [*$4$-sided rectangle*]{} $r$ intersects at least one of the grid points. Without loss of generality, assume that the $4$-sided rectangle $r$ is unbounded along the negative $x$-axis. Then the rectangle is broken into at most four disjoint pieces: a [*grid rectangle,*]{} as shown in Figure \[fig:type-III\](b), one [*column rectangle*]{}, as shown in Figure \[fig:type-III\](d), and two [*row rectangles*]{}, as shown in Figure \[fig:type-III\](c). The grid rectangle and the two row rectangles are [*assigned*]{} to the node. Note that the two row rectangles are now $3$-sided rectangles in ${\mathbb{R}}^3$ w.r.t. their corresponding rows (unbounded in one direction along $x-$, $y-$ and $z-$axis). The column rectangle is sent to its corresponding child node. Analogous partition is performed for $4$-sided rectangles which are unbounded along positive $x$-axis, positive $y$-axis and negative $y$-axis. ![[]{data-label="fig:type-III"}](type-III.pdf) \[obs:rec-tree\] A rectangle of $S$ gets assigned to at most four nodes at each level in the recursion tree. Consider a rectangle $r\in S$. If $r$ falls under Case-II, then its grid rectangle is assigned to the node. 
Note that $r$ can fall under Case-II only once, since each of its four row and column rectangles is now effectively a $4$-sided rectangle. Let $r'$ be one of these row or column rectangles. If $r'$ falls under Case-III at a node, then it gets assigned there. However, this time exactly [*one*]{} of the broken portions of $r'$ will be sent to the child node. Therefore, there can be at most four nodes at each level where rectangle $r$ (and broken portions of $r$) can get assigned. [**Data structures at each node.**]{} We build two types of structures at each node in the tree. [*Structure-I.*]{} A rectangle $r'$ is [*higher*]{} than rectangle $r''$ if $r'$ has a larger span than $r''$ along the $z$-direction. For each cell $c$ of the grid, based on the rectangles which completely cover $c$, we construct a [*sketch*]{} as follows: select the rectangles with the $(1+{\varepsilon})^0, (1+{\varepsilon})^1,(1+{\varepsilon})^2,\ldots$-th largest spans. For a given cell, the size of the sketch will be $O(\log_{1+{\varepsilon}}m)$. [*Structure-II.*]{} For a given row or column in the grid, let $\hat{S}$ be the $3$-sided rectangles in ${\mathbb{R}}^3$ assigned to it. We build the linear-size structure of [@ahz10] on $\hat{S}$, which will return a $(1+{\varepsilon})$-approximation of $|\hat{S}\cap q|$ in ${O_{{\varepsilon}}}(\log n)$ time. This structure is built for each row and column slab. [**Space analysis.**]{} Consider a node in the recursion tree with $m$ rectangles. There will be $\left(2\sqrt{\frac{m}{t}}\right) \times \left(2\sqrt{\frac{m}{t}}\right)=4\frac{m}{t}$ cells at this node. The space occupied by structure-I will be $O\left(\frac{m}{t}\cdot\log_{1+{\varepsilon}}m \right)=O(m)$. The space occupied by structure-II will be $O(m)$. Using Observation \[obs:rec-tree\], the total space occupied by all the nodes at a particular level will be $O(n)$.
Since the height of the recursion tree is ${O_{{\varepsilon}}}(\log\log n)$, the total space occupied is ${O_{{\varepsilon}}}(n\log\log n)$. [**Query algorithm.**]{} Given a query point $q$, we start at the root node. At each visited node, the following three steps are performed: 1. [*Query structure-I.*]{} Locate the cell $c$ on the grid containing $q$. Scan the sketch of cell $c$ to return a $(1+{\varepsilon})$-approximation of the number of rectangles which cover $c$ and contain $q$. This takes ${O_{{\varepsilon}}}(\log m)$ time. 2. [*Query structure-II.*]{} Next, query structure-II of the horizontal and the vertical slab containing $q$, to find a $(1+{\varepsilon})$-approximation of the $3$-sided rectangles containing $q$. This takes ${O_{{\varepsilon}}}(\log m)$ time. 3. [*Recurse.*]{} Finally, we recurse on the horizontal and the vertical slab containing $q$. The final output is the [*sum*]{} of the count returned by all the nodes queried. [**Query time analysis.**]{} Let $Q(n)$ denote the overall query time. Then $$Q(n) = 2Q(\sqrt{nt}) + {O_{{\varepsilon}}}(\log n), t=\log_{1+{\varepsilon}} n.$$ This solves to $Q(n)={O_{{\varepsilon}}}(\log n\cdot\log\log n)$. This finishes the proof of Lemma \[lemma:subopt\]. Final structure --------------- In this subsection we improve upon the data structure built in the previous subsection by reducing the size to ${O_{{\varepsilon}}}(n\log^{*}n)$. \[lemma:almost-opt\] In the standard $5$-sided rectangle stabbing in ${\mathbb{R}}^3$ problem, the input is a set $S$ of $n$ $5$-sided rectangles in ${\mathbb{R}}^3$ and the query $q$ is a point. Then there exists a pointer machine data structure of size ${O_{{\varepsilon}}}(n\log^{*}n)$ which can solve an approximate counting problem in ${O_{{\varepsilon}}}(\log n\cdot \log\log n)$ time. Let ${\varepsilon}' \leftarrow {\varepsilon}/4$ and $C>3$. The reason for choosing these parameters will become clear later. We divide the solution into two cases. 
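The two-case strategy developed below (report-and-count exactly when $k$ is small, rescale a sampled count when $k$ is large) can be sketched as follows. This is only an illustration of the dispatch logic: `report_iter` and `sampled_estimate` are hypothetical oracles standing in for the reporting structure and for the counting structure built on the sample, and `C=4` is an arbitrary constant choice.

```python
import math

def approx_count(report_iter, sampled_estimate, n, eps, C=4):
    """Dispatch between the two cases: exact count for small k,
    rescaled sampled estimate for large k.

    report_iter      -- hypothetical iterator over the rectangles of S stabbed by q
    sampled_estimate -- hypothetical oracle returning an estimate of |R cap q|,
                        to be rescaled by delta = log log n
    """
    delta = math.log(math.log(n))
    threshold = math.ceil(C * eps ** -2 * math.log(n) * delta)
    exact = 0
    for _ in report_iter:
        exact += 1
        if exact > threshold:
            # Large k: the sampled estimate concentrates, so rescale it.
            return sampled_estimate() * delta
    # Small k: every rectangle was reported, so the count is exact.
    return exact
```

The point of the threshold is that the reporting structure is probed only $O({\varepsilon}^{-2}\log n\cdot\log\log n)$ times before the sampled structure takes over.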
### Case-I: $k\in[0,C{\varepsilon}'^{-2}\log n\cdot\log\log n]$

For the reporting version of the $5$-sided rectangle stabbing in ${\mathbb{R}}^3$ problem, Rahul [@r15] presented a structure of size $O(n\log^{*}n)$ which can answer a query in $O(\log n\cdot \log\log n + k)$ time. Build this structure on all the rectangles in set $S$. Given a query point $q$, query the structure until all the rectangles in $S\cap q$ have been reported or $C{\varepsilon}'^{-2}\log n\cdot\log\log n +1$ rectangles in $S\cap q$ have been reported. If the first event happens, then the exact value of $k$ is reported. Otherwise, we conclude that $k >C{\varepsilon}'^{-2}\log n\cdot\log\log n$.

### Case-II: $k\in[C{\varepsilon}'^{-2}\log n\cdot\log\log n,n]$

We will need the following random sampling based lemma. \[lem:sampling\] Let $S$ be a set of $n$ $5$-sided rectangles in ${\mathbb{R}}^3$. Consider a query point $q$ such that $k \geq C{\varepsilon}'^{-2}\log n\cdot\log\log n$. Then there exists a set $R \subset S$ of size $O\left(\frac{n}{\log\log n}\right)$ such that $(|R\cap q|\cdot\log\log n) \in [(1-{\varepsilon}')k, (1+{\varepsilon}')k]$. Fix a parameter $\delta=\log\log n$. Choose a random sample $R$ where each object of $S$ is picked independently with probability $1/\delta$. Therefore, the expected size of $R$ is $n/\delta$ (if the size of $R$ exceeds $O(n/\delta)$, then re-sample until we get the desired size). For a given query $q$, $E[|R\cap q|]=|S\cap q|/\delta=k/\delta$. Therefore, by Chernoff bound [@mp95] we observe that $$\textbf{Pr}\left[\left|\left|R\cap q\right| - \frac{k}{\delta}\right| > {\varepsilon}' \frac{k}{\delta}\right] \leq e^{-\Omega({\varepsilon}'^2 (k/\delta))} \leq e^{-\Omega({\varepsilon}'^2(C{\varepsilon}'^{-2}\log n))} \leq e^{-\Omega(C\log n)} = n^{-\Omega(C)} \leq o(1/n^C)$$ Set $C$ to be greater than $3$. There are $O(n^{3})$ combinatorially different query points on the set $S$.
Therefore, by union bound it follows that there exists a subset $R\subset S$ of size $O(n/\delta)$ such that $|k-|R\cap q|\cdot \delta| \leq {\varepsilon}' k$, for any $q$ such that $k \geq C{\varepsilon}'^{-2}\log n\cdot\log\log n$. [**Preprocessing steps.**]{} We perform the following steps:

- Apply Lemma \[lem:sampling\] on set $S$ to obtain a set $R$ of size $O(n/\log\log n)$.

- Build the data structure of Lemma \[lemma:subopt\] based on set $R$ with error parameter ${\varepsilon}'$.

[**Query algorithm.**]{} For a given query $q$, let $\tau_R$ be the value returned by the data structure built on $R$. Then we report $\tau_R\cdot\log\log n$ as the answer. [**Analysis.**]{} Since $|R|=O(n/\log\log n)$, by Lemma \[lemma:subopt\] the space occupied by this data structure will be ${O_{{\varepsilon}}}(n)$. The query time follows from Lemma \[lemma:subopt\]. Next, we will prove that $(1-{\varepsilon})k \leq\tau_R\cdot\log\log n \leq (1+{\varepsilon})k$. If we knew the exact value of $|R\cap q|$, then from Lemma \[lem:sampling\] we could infer that: $$(1-{\varepsilon}')k \leq |R\cap q|\log\log n \leq (1+ {\varepsilon}')k$$ However, by using Lemma \[lemma:subopt\] we only get an approximate value of $|R\cap q|$: $$(1-{\varepsilon}')|R\cap q| \leq \tau_R \leq (1+ {\varepsilon}')|R\cap q|$$ Combining the above two inequalities, it is easy to verify that $(1-{\varepsilon})k \leq\tau_R\log\log n \leq (1+{\varepsilon})k$, where ${\varepsilon}=4{\varepsilon}'$. This finishes the proof of Lemma \[lemma:almost-opt\]. \[rem:ac-3d-dom\] The general technique of Aronov and Har-Peled [@ah08] can be adapted to answer the approximate counting query for the colored dominance search in ${\mathbb{R}}^3$ problem. Assume that we have a data structure of size $S(n)$ which can answer the emptiness query in $Q(n)$ time.
Ignoring the dependence on ${\varepsilon}$, the technique of [@ah08] guarantees a data structure of size $O(S(n)\log^2n)$ which can answer a colored approximate counting query in $O(Q(n)\log n)$ time ($O(\log^2n)$ emptiness structures are built, each of them storing $\Theta(n)$ objects in the worst case). For colored dominance search in ${\mathbb{R}}^3$, plugging in $S(n)=O(n)$ and $Q(n)=O(\log n)$ [@p08b], we get a data structure which requires ${O_{{\varepsilon}}}(n\log^2n)$ space and ${O_{{\varepsilon}}}(\log^2n)$ query time.

Reduction-I: Reporting + $C$-approximation {#sec:first-red}
==========================================

Our first reduction states that given a colored reporting structure and a colored $C$-approximation structure, one can obtain a colored $(1+{\varepsilon})$-approximation structure with no additional loss of efficiency. We need a few definitions before stating the theorem. A geometric setting is [*polynomially bounded*]{} if there are only $n^{O(1)}$ possible outcomes of $S\cap q$, over all possible values of $q$. For example, in $1d$ orthogonal range search on $n$ points, there are only $\Theta(n^2)$ possible outcomes of $S\cap q$. A function $f(n)$ is [*converging*]{} if, whenever $\sum_{i=0}^t n_i =n$, we have $\sum_{i=0}^t f(n_i) = O(f(n))$. For example, it is easy to verify that $f(n)=n\log n$ is converging. \[thm::main-1\] For a colored geometric setting, assume that we are given the following two structures:

- a colored reporting structure of $\S_\rep(n)$ size which can solve a query in $O(\Q_\rep(n) + \kappa)$ time, where $\kappa$ is the output-size, and

- a colored $C$-approximation structure of $\S_\capp(n)$ size which can solve a query in $O(\Q_\capp(n))$ time.

We also assume that: (a) $\S_\rep(n)$ and $\S_\capp(n)$ are converging, and (b) the geometric setting is polynomially bounded.
Then we can obtain a $(1+{\varepsilon})$-approximation using a structure that requires $\S_\eapp(n)$ space and $\Q_\eapp(n)$ query time, such that $$\begin{aligned} \hspace{-10mm} &&\S_\eapp(n) = O(\S_\rep(n)+ \S_\capp(n)) \label{eqn::intro-ours1-space} \\ \hspace{-10mm} &&\Q_\eapp(n) = O\left(\Q_\rep(n) + \Q_\capp(n) + {\varepsilon}^{-2}\cdot \log n \right). \label{eqn::intro-ours1-qry} \end{aligned}$$ Refinement Structure -------------------- The goal of a refinement structure is to convert a constant-factor approximation of $k$ into a $(1+{\varepsilon})$-approximation of $k$. \[lem:refinement\] [**(Refinement structure)**]{} Let ${\cal C}$ be the set of colors in set $S$, and ${\cal C}\cap q$ be the set of colors in ${\cal C}$ present in $q$. For a query $q$, assume we know that: - $k=|{\cal C}\cap q|=\Omega({\varepsilon}^{-2}\log n)$, and - $k\in [z,Cz]$, where $z$ is an integer. Then there is a refinement structure of size $O\left(\S_\rep\left(\frac{{\varepsilon}^{-2}n\log n}{z}\right)\right)$ which can report a value $\tau \in [(1-{\varepsilon})k,(1+{\varepsilon})k]$ in $O(\Q_\rep(n) +{\varepsilon}^{-2}\log n)$ time. The following lemma states that sampling colors (instead of input objects) is a useful approach to build the refinement structure. \[lem:color-sampling-1\] Consider a query $q$ which satisfies the two conditions stated in Lemma \[lem:refinement\]. Let $c_1$ be a sufficiently large constant and $c$ be another constant s.t. $c=\Theta(c_1\log e)$. Choose a random sample $R$ where each color in ${\cal C}$ is picked independently with probability $M= \frac{c_1{\varepsilon}^{-2}\log n}{z}$. Then with probability $1-n^{-c}$ we have $\left|k-\frac{\left|R\cap q\right|}{M}\right| \leq {\varepsilon}k$. For each of the $k$ colors which are present in $q$, define an indicator variable $X_i$. Set $X_i=1$, if the corresponding color is in the random sample $R$. Otherwise, set $X_i=0$. Then $|R \cap q|=\sum_{i=1}^k X_i$ and $E[|R \cap q|]=k\cdot M$. 
By Chernoff bound, $$\begin{aligned} \textbf{Pr}\Bigg[\Big||R \cap q| -E[|R\cap q|]\Big|>{\varepsilon}\cdot E[|R \cap q|]\Bigg] <\text{exp}\Big(-{\varepsilon}^{2}E[|R\cap q|]\Big) \\ =\text{exp}\left(-{\varepsilon}^{2}\cdot kM\right) \leq \text{exp}\left(-{\varepsilon}^{2}zM\right) =\text{exp}\left(-c_1\log n\right) \leq \frac{1}{n^{c}} \end{aligned}$$ Therefore, with high probability $\Big||R \cap q| -kM\Big|\leq{\varepsilon}\cdot kM$. \[lemma:r-exists\] [**(Finding a suitable $R$)**]{} Pick a random sample $R$ as defined in Lemma \[lem:color-sampling-1\]. Let $n_R$ be the number of objects of $S$ whose color belongs to $R$. We say $R$ is [*suitable*]{} if it satisfies the following two conditions:

- $\left|k-\frac{\left|R\cap q\right|}{M}\right| \leq {\varepsilon}k$ for all queries which have $k=\Omega({\varepsilon}^{-2}\log n)$.

- $n_R \leq 10nM$. This condition is needed to bound the size of the data structure.

A suitable $R$ always exists. Let $n^{\alpha}$ be the number of combinatorially different queries $q$ on the set $S$. From Lemma \[lem:color-sampling-1\], by setting $c=\alpha+1$, we can conclude that $\tau \longleftarrow \frac{\left|R\cap q\right|}{M}$ will lie in the range $[(1-{\varepsilon})k,(1+{\varepsilon})k]$ with probability at least $1-1/n^{\alpha+1}$. By the standard union bound, it implies that the probability of the random sample $R$ failing for any query is at most $1/n^{\alpha+1}\times n^{\alpha}=1/n$. Next, it is easy to observe that the [*expected*]{} value of $n_R$ is $nM$: let $n_c$ be the number of objects of $S$ having color $c$; then $\textbf{E}[n_R]=\sum_{\forall c}n_c\cdot M=nM$. By Markov’s inequality, the probability of $n_R$ being larger than $10nM$ is at most $1/10$. By union bound, $R$ will not be suitable with probability $\leq 1/n+1/10$. Therefore, with probability $\geq 9/10-1/n$, $R$ will be suitable and hence we are done.
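The color-sampling estimator of Lemma \[lem:color-sampling-1\] is easy to check empirically. The sketch below is only an illustration: the color universe, the query's color set, the sampling probability `M`, and the seed are arbitrary choices standing in for ${\cal C}$, ${\cal C}\cap q$, and $M=c_1{\varepsilon}^{-2}\log n/z$.

```python
import random

def refine_estimate(all_colors, colors_in_q, M, rng):
    """One run of the estimator: sample each color of C independently
    with probability M, then rescale the number of sampled hits."""
    R = {c for c in all_colors if rng.random() < M}
    # |R cap q| / M is the estimate of k = |C cap q|.
    return len(R & colors_in_q) / M

rng = random.Random(7)
all_colors = set(range(10_000))
colors_in_q = set(range(4_000))   # true k = 4000
M = 0.05                          # stands in for c1 * eps^-2 * log(n) / z
avg = sum(refine_estimate(all_colors, colors_in_q, M, rng)
          for _ in range(50)) / 50
```

Across the 50 trials the average estimate lands close to the true $k=4000$, in line with the concentration bound above.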
We do not discuss the preprocessing time here, since it is not known how to [*efficiently*]{} verify if a sample $R$ is suitable. We leave this as an interesting open problem. #### Refinement structure and query algorithm. In the preprocessing stage pick a random sample $R\subseteq {\cal C}$ as stated in Lemma \[lem:color-sampling-1\]. If the sample $R$ is [*not suitable*]{}, then discard $R$ and re-sample, till we get a suitable sample. Based on all the objects of $S$ whose color belongs to $R$, build a colored reporting structure. Given a query $q$, the colored reporting structure is queried to compute $|R\cap q|$. We report $\tau \longleftarrow \left(|R\cap q|/M\right)$ as the final answer. The query time is bounded by $O(\Q_\rep(n) + {\varepsilon}^{-2}\log n)$, since by Lemma \[lem:color-sampling-1\], $|R\cap q| \leq (1+{\varepsilon})\cdot kM =O({\varepsilon}^{-2}\log n)$. This finishes the description of the refinement structure. Overall solution ---------------- #### Data structure. The data structure consists of the following three components: 1. [*Reporting structure.*]{} Based on the set $S$ we build a colored reporting structure. This occupies $O(\S_\rep(n))$ space. 2. [*$\sqrt{C}$-approximation structure.*]{} Based on the set $S$ we build a $\sqrt{C}$-approximation structure. The choice of $\sqrt{C}$ will become clear in the analysis. This occupies $O(\S_\capp(n))$ space. 3. [*Refinement structures.*]{} Build the refinement structure of Lemma \[lem:refinement\] for the values $z=(\sqrt{C})^i\cdot {\varepsilon}^{-2}\log n, \forall i\in \left[0, \log_{\sqrt{C}}\left(\left\lceil {\varepsilon}^{2}n\right\rceil \right)\right]$. The total size of all the refinement structures will be $\sum O\left(\S_\rep(nM)\right) = O(\S_\rep(n))$, since $\S_\rep(\cdot)$ is converging and $\sum nM=O(n)$. Note that our choice of $z$ ensures that the size of the data structure is independent of ${\varepsilon}$. #### Query algorithm. 
The query algorithm performs the following steps:

1. Given a query object $q$, the colored reporting structure reports the colors present in $S\cap q$ until all the colors have been reported or ${\varepsilon}^{-2}\log n +1$ colors have been reported. If the first event happens, then the exact value of $k$ is reported. Otherwise, we conclude that $k=\Omega({\varepsilon}^{-2}\log n)$. This takes $O(\Q_\rep(n) + {\varepsilon}^{-2}\log n)$ time.

2. If $k > {\varepsilon}^{-2}\log n$, then

    1. First, query the $\sqrt{C}$-approximation structure. Let $k_a$ be the $\sqrt{C}$-approximate value returned s.t. $k\in [k_a, \sqrt{C}k_a]$. This takes $O(\Q_\capp(n))$ time.

    2. Then query the refinement structure with the largest value of $z$ s.t. $z\leq k_a \leq \sqrt{C}z$. It is trivial to verify that $k \in [z,Cz]$. This takes $O(\Q_\rep(n) +{\varepsilon}^{-2}\log n)$ time.

Reduction-II: Using Only Reporting Structure {#sec:second-red}
============================================

In this section we will present our second general reduction. The reader is assumed to be familiar with Section \[sec:first-red\]. \[thm:accq\] For a given colored geometric setting, assume that we are given a colored reporting structure of $\S_\rep(n)$ size which can answer the query in $O(\Q_\rep(n) + \kappa)$ time. We also assume that: (a) $\S_\rep(n)$ is converging, and (b) the geometric setting is [*polynomially bounded*]{}. Then we can obtain a $(1+{\varepsilon})$-approximation using a structure which requires $\S_\eapp(n)=O(\S_\rep(n))$ space and $$\begin{aligned} \label{eqn:insensitive}\hspace{-10mm} &&\Q_\eapp(n) = O\bigg(\big(\Q_\rep(n) + {\varepsilon}^{-2}\cdot \log n \big) \cdot \log(\log_{1+{\varepsilon}} |{\cal C}|)\bigg) \end{aligned}$$ query time, where ${\cal C}$ is the set of colors in $S$.
Similar to Section \[sec:first-red\], a colored reporting structure will be built on $S$ to either report the exact value of $|{\cal C}\cap q|$ or report that $|{\cal C}\cap q|$ is greater than ${\varepsilon}^{-2}\log n $. From now on we will assume that $k=|{\cal C}\cap q|=\Omega({\varepsilon}^{-2}\log n)$. Decision structure {#decision} ------------------ \[lem:decision\] [**(Decision structure)**]{} Let $z=\Omega({\varepsilon}^{-2}\log n)$ be a pre-specified parameter. Given a query $q$, the decision structure reports whether $|{\cal C}\cap q| \geq z$ or $|{\cal C}\cap q| < z$. The data structure is allowed to make a mistake when $|{\cal C} \cap q| \in [(1-{\varepsilon})z,(1+{\varepsilon})z]$. There is a decision structure of size $O\left(\S_\rep\left(\frac{{\varepsilon}^{-2}n\log n}{z}\right)\right)$ which can answer the query in $O(\Q_{rep}(n) +{\varepsilon}^{-2}\log n)$ time. In this subsection we will prove the above lemma. A few words on the intuition behind the solution. Suppose each color in ${\cal C}$ is sampled with probability $\approx (\log n)/z$. For a given query $q$, if $k < z$ (resp., $k > z$), then the expected number of colors from ${\cal C} \cap q$ sampled will be less than $\log n$ (resp., greater than $\log n$). We will start by proving the following lemma. \[lem:failure\] Let $c_1$ be a sufficiently large constant and $c$ be another constant s.t. $c=\Theta(c_1\log e)$. Consider a random sample $R$ where each color in ${\cal C}$ is picked independently with probability $M= \frac{c_1{\varepsilon}^{-2}\log n}{z}$, where ${\varepsilon}\in (0,1/2]$. Then $$\textbf{Pr}\bigg[|R\cap q| > zM \hspace{2 mm} \bigg|\hspace{2 mm} k \leq (1-{\varepsilon})z\bigg] \leq \frac{1}{n^{c}}.$$ Similarly, $$\textbf{Pr}\bigg[|R\cap q| \leq zM \hspace{2 mm}\bigg|\hspace{2 mm} k \geq (1+{\varepsilon})z\bigg] \leq \frac{1}{n^{c}}$$ For each of the $k$ colors present in $q$, define an indicator variable $X_i$. 
Set $X_i=1$ if the corresponding color is in the random sample $R$. Otherwise, set $X_i=0$. Then $|R \cap q|=\sum_{i=1}^k X_i$ and $E[|R \cap q|]=k\cdot M$. For the sake of brevity, let $Y=|R\cap q|$. We only prove the first fact here. The proof for the second fact is similar. Let $$\alpha = \textbf{Pr}\bigg[Y > zM \hspace{2 mm} \bigg|\hspace{2 mm} k \leq (1-{\varepsilon})z\bigg]$$ The value $\alpha$ is maximized when $k=(1-{\varepsilon})z$. Therefore, $$\alpha \leq \textbf{Pr}\bigg[Y > zM \hspace{2 mm} \bigg|\hspace{2 mm} k = (1-{\varepsilon})z\bigg]$$ In this case, $\textbf{E}[Y]=kM=(1-{\varepsilon})zM$. Therefore, $$\begin{aligned} \alpha&\leq \textbf{Pr}[Y>zM] = \textbf{Pr}\left[Y> \frac{1}{1-{\varepsilon}}\textbf{E}[Y]\right] \leq \textbf{Pr}\left[Y> (1+{\varepsilon})\textbf{E}\left[Y\right]\right] \\ &\leq \text{exp}\left(-\frac{{\varepsilon}^2\textbf{E}[Y]}{4}\right) \quad \quad \text{By Chernoff bound}\\ &=\text{exp}\left(-{\varepsilon}^2(1-{\varepsilon})z\left(\frac{c_1{\varepsilon}^{-2}\log n}{4z}\right)\right) =\text{exp}\left(-c_1(1-{\varepsilon})\frac{\log n}{4}\right) \leq \text{exp}\left(-\frac{c_1}{8}\log n\right) \quad \text{ since } {\varepsilon}\leq 1/2\\ &\leq \frac{1}{n^{c}} \end{aligned}$$ Let $z=\Omega({\varepsilon}^{-2}\log n)$ be a pre-specified parameter. Using notation from Section \[sec:first-red\], a sample $R \subseteq {\cal C}$ is called [*suitable*]{} if - For all queries, (a) if $k < (1-{\varepsilon})z$ then $|R\cap q| < c_1{\varepsilon}^{-2}\log n$, and (b) if $k \geq (1+{\varepsilon})z$ then $|R\cap q| \geq c_1{\varepsilon}^{-2}\log n$. - $n_R \leq 10nM$. Such an $R$ always exists. The proof is exactly the same as the proof in Lemma \[lemma:r-exists\]. The only difference is that we replace Lemma \[lem:color-sampling-1\] with Lemma \[lem:failure\]. #### Decision structure and query algorithm. In the preprocessing phase pick a random sample $R\subseteq {\cal C}$ as stated in Lemma \[lem:failure\]. 
If the sample $R$ is [*not suitable*]{}, then discard $R$ and re-sample, until we get a suitable sample. Based on all the points of $S$ whose color belongs to $R$, build a colored reporting structure. Given a query object $q$, the colored reporting structure reports $R\cap q$, until all the colors have been reported or $c_1{\varepsilon}^{-2}\log n$ colors have been reported. If the first event happens, then we report $k < z$. Otherwise, we report $k\geq z$. The query time is bounded by $O(\Q_\rep(n) + {\varepsilon}^{-2}\log n)$. In Lemma \[lem:failure\] we assumed ${\varepsilon}\in (0,1/2]$. Handling ${\varepsilon}\in (1/2,1]$ is easy: set a new variable ${\varepsilon}_{new} \longleftarrow 1/2$. The decision structure will be built with the error parameter ${\varepsilon}_{new}$ (and not ${\varepsilon}$). Since ${\varepsilon}_{new} < {\varepsilon}$, the error made by the decision structure is tolerable. Since $\frac{1}{{\varepsilon}_{new}} \leq \frac{2}{{\varepsilon}}$, the space and the query time bounds are also not affected.

Final structure
---------------

[**Data structure.**]{} Recall that we only have to handle $k=\Omega({\varepsilon}^{-2}\log n)$. For the values $z_i=c_1({\varepsilon}^{-2}\log n)(1+{\varepsilon})^i$, for $i=1, 2,3,\ldots, W=O(\log_{1+{\varepsilon}} |{\cal C}|)$, we build a decision structure ${\cal D}_i$ using Lemma \[lem:decision\]. By performing a similar analysis as in Section \[sec:first-red\], the overall size will be $O(\S_\rep(n))$. [**Query algorithm.**]{} For a moment, assume that we query all the data structures ${\cal D}_1,\ldots,{\cal D}_W$. Then we will see a sequence of structures ${\cal D}_j$, for $j\in [1,i]$, claiming $|{\cal C}\cap q| > z_j$, followed by a sequence of structures ${\cal D}_{i+1},\ldots, {\cal D}_W$ claiming $|{\cal C}\cap q| \leq z_j$. Then we report $\tau\leftarrow z_i$ as the answer to the approximate colored counting query.
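Since the answers of ${\cal D}_1,\ldots,{\cal D}_W$ are monotone in the index, the crossover index $i$ can be located without probing every structure. A minimal sketch, where `decide(i)` is a hypothetical oracle wrapping ${\cal D}_i$ (returning whether ${\cal D}_i$ claims $|{\cal C}\cap q| > z_i$):

```python
def crossover_index(decide, W):
    """Binary search for the largest i in [1, W] with decide(i) True,
    i.e., the last D_i still claiming |C cap q| > z_i.
    Returns 0 if no structure makes that claim."""
    lo, hi, ans = 1, W, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if decide(mid):          # one probe: O(Q_rep + eps^-2 log n) time
            ans, lo = mid, mid + 1
        else:
            hi = mid - 1
    return ans
```

Replacing $W$ sequential probes by $O(\log W)$ probes is what yields the $\log(\log_{1+{\varepsilon}}|{\cal C}|)$ factor in the query time.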
A simple calculation reveals that $\tau$ will lie in the range $[(1-{\varepsilon})k,(1+{\varepsilon})k]$. We perform a binary search on ${\cal D}_1,\ldots,{\cal D}_W$ to efficiently find the index $i$. The query time will be $O\bigg(\big(\Q_\rep(n) + {\varepsilon}^{-2}\cdot \log n \big) \cdot \log(\log_{1+{\varepsilon}} |{\cal C}|)\bigg)$. #### Remark. Our result is a generalization of the reduction of Aronov and Har-Peled [@ah08] to colored problems. Handling “small" values of $k$ efficiently is usually challenging, since the error tolerated is small. Using the reporting structure makes it easy to handle the “small" values of $k$ (unlike an emptiness structure which was used by [@ah08]). Random sampling and Chernoff bound are easy to apply for “large" values of $k$. As a result, the analysis of our reduction is easier than [@ah08]. Colored Orthogonal Range Search in ${\mathbb{R}}^2$ {#sec:three-sided-color} =================================================== To illustrate an application of Reduction-I, we study the approximate colored counting query for orthogonal range search in ${\mathbb{R}}^2$. \[thm::three-sided-color\] Consider the following two problems: 1. [**Colored $3$-sided range search in ${\mathbb{R}}^2$.**]{} In this setting, the input set $S$ is $n$ colored points in ${\mathbb{R}}^2$ and the query $q$ is a $3$-sided rectangle in ${\mathbb{R}}^2$. There is a data structure of $O(n)$ size which can answer the approximate colored counting query in $O({\varepsilon}^{-2}\log n)$ time. This pointer machine structure is optimal in terms of $n$. 2. [**Colored $4$-sided range search in ${\mathbb{R}}^2$.**]{} In this setting, the input set $S$ is $n$ colored points in ${\mathbb{R}}^2$ and the query $q$ is a $4$-sided rectangle in ${\mathbb{R}}^2$. There is a data structure of $O(n\log n)$ size which can answer the approximate colored counting query in $O({\varepsilon}^{-2}\log n)$ time. 
Colored $3$-sided range search in ${\mathbb{R}}^2$
--------------------------------------------------

We use the framework of Theorem \[thm::main-1\] to prove the result of Theorem \[thm::three-sided-color\](A). For this geometric setting, a colored reporting structure with $\S_\rep =n$ and $\Q_\rep=\log n$ is already known [@sj05]. The path-range tree of Nekrich [@n14] gives a $(4+{\varepsilon})$-approximation, but it requires super-linear space. The $C$-approximation structure presented in this subsection is a refinement of the path-range tree for the pointer machine model. \[lem:3-sided-c\] For the colored $3$-sided range search in ${\mathbb{R}}^2$ problem, there is a $C$-approximation structure which requires $O(n)$ space and answers a query in $O(\log n)$ time. We prove Lemma \[lem:3-sided-c\] in the rest of this subsection.

### Interval tree

Our solution is based on an interval tree, and we will need the following fact about it. \[lem:it\] Using interval trees, a query on $(3+t)$-sided rectangles in ${\mathbb{R}}^3$ can be broken down into $O(\log n)$ queries on $(2+t)$-sided rectangles in ${\mathbb{R}}^3$. Here $t\in [1,3]$. Let $R$ be a set of $n$ $(3+t)$-sided rectangles. We build an interval tree ${\cal IT}$ as follows: w.l.o.g., assume that the rectangles are bounded along the $x$-axis. Let $h$ be a plane perpendicular to the $x$-axis such that there is an equal number of endpoints of $R$ on each side of the plane. The splitting halfplane $h$ is stored at the root of ${\cal IT}$ and the two subtrees are built recursively. In general, $h(v)$ is the splitting halfplane stored at a node $v \in {\cal IT}$. A rectangle $r\in R$ is stored at the highest node $v$ s.t. $r$ intersects $h(v)$. Let $R_v$ be the set of rectangles stored at a node $v$. Each rectangle $r\in R_v$ is split by $h(v)$ into two rectangles $r^-$ and $r^+$. Define $R_v^- := \bigcup_{r\in R_v} r^-$ and $R_v^+:=\bigcup_{r\in R_v} r^+$.
Given a query point $q$, trace a path $\Pi$ of length $O(\log n)$ from the root to a leaf node corresponding to $q$. For a node $v\in \Pi$, if $q$ lies to the left (resp., right) of $h(v)$, then answering a query on $R_v \cap q$ is equivalent to answering it on $R_v^- \cap q$ (resp., $R_v^+ \cap q$), and we can treat $R_v^-$ (resp., $R_v^+$) as $(2+t)$-sided rectangles in ${\mathbb{R}}^3$, since $h(v)$ is effectively $+\infty$ (resp., $-\infty$).

### Initial structure

\[lem:is\] For the colored $3$-sided range search in ${\mathbb{R}}^2$ problem, there is a $2$-approximation structure which requires $O(n)$ space and answers a query in $O(\log^3 n)$ time. By a simple exercise, the colored $3$-sided range search in ${\mathbb{R}}^2$ can be reduced to the colored dominance search in ${\mathbb{R}}^3$. Therefore, using the reduction of Subsection \[subsec:red-colored-2\], the colored $3$-sided range search in ${\mathbb{R}}^2$ also reduces to the standard $5$-sided rectangle stabbing problem (for brevity, call it $5$-sided RSP). There is a simple linear-size data structure which reports in $O(\log^3 n)$ time a $2$-approximation for the $5$-sided RSP: by inductively applying Lemma \[lem:it\] twice, we can decompose the $5$-sided RSP into $O(\log^2n)$ $3$-sided RSPs. For the $3$-sided RSP, there is a linear-size structure which reports a $2$-approximation in $O(\log n)$ time [@ahz10]. By using this structure the $5$-sided RSP can be solved in $O(\log^3n)$ time.

### Final structure

Now we will present the optimal $C$-approximation structure of Lemma \[lem:3-sided-c\]. [*Structure.*]{} Sort the points of $S$ based on their $x$-coordinate value and divide them into buckets containing $\log^2n$ consecutive points. Based on the points in each bucket, build a $D$-structure which is an instance of Lemma \[lem:is\]. Next, build a height-balanced binary search tree ${\cal T}$, where the buckets are placed at the leaves from left to right based on their ordering along the $x$-axis.
Let $v$ be a proper ancestor of a leaf node $u$ and let $\Pi(u,v)$ be the path from $u$ to $v$ (excluding $u$ and $v$). Let $S_l(u,v)$ be the set of points in the subtrees rooted at nodes that are left children of nodes on the path $\Pi(u,v)$ but not themselves on the path. Similarly, let $S_r(u,v)$ be the set of points in the subtrees rooted at nodes that are right children of nodes on the path $\Pi(u,v)$ but not themselves on the path. See Figure \[fig:color-3-sided\], which illustrates these sets for two leaves $u=u_l$ and $u=u_r$. For each pair $(u,v)$, let $S_l'(u,v)$ (resp., $S_r'(u,v)$) be the set of points that each have the highest $y$-coordinate value among the points of the same color in $S_l(u,v)$ (resp., $S_r(u,v)$). Finally, for each pair $(u,v)$, construct a [*sketch*]{}, $S_l''(u,v)$, by selecting the $2^0, 2^1,2^2,\ldots$-th highest $y$-coordinate point in $S_l'(u,v)$. A symmetric construction is performed to obtain $S_r''(u,v)$. The number of $(u,v)$ pairs is bounded by $O((n/\log^2n)\times(\log n))=O(n/\log n)$ and hence, the space occupied by all the $S_l''(u,v)$ and $S_r''(u,v)$ sets is $O(n)$. ![[]{data-label="fig:color-3-sided"}](color-3-sided.pdf) [*Query algorithm.*]{} To answer a query $q=[x_1,x_2] \times [y,\infty)$, we first determine the leaf nodes $u_l$ and $u_r$ of ${\cal T}$ containing $x_1$ and $x_2$, respectively. If $u_l =u_r$, then we query the $D$-structure corresponding to the leaf node and we are done. If $u_l \neq u_r$, then we find the node $v$ which is the least common ancestor of $u_l$ and $u_r$. The query is now broken into four sub-queries: First, report the approximate count in the leaves $u_l$ and $u_r$ by querying the $D$-structure of $u_l$ with $[x_1,\infty) \times [y,\infty)$ and the $D$-structure of $u_r$ with $(-\infty, x_2] \times [y,\infty)$. Next, scan the list $S_r''(u_l,v)$ (resp., $S_l''(u_r,v)$) to find a $2$-approximation of the number of colors of $S_r(u_l,v)$ (resp., $S_l(u_r,v)$) present in $q$. 
The final answer is the sum of the count returned by the four sub-queries. The time taken to find $u_l$, $u_r$ and $v$ is $O(\log n)$. Querying the leaf structures takes $O((\log (\log^2n))^3)=O(\log n)$ time. The time taken for scanning the lists $S_r''(u_l,v)$ and $S_l''(u_r,v)$ is $O(\log n)$. Therefore, the overall query time is bounded by $O(\log n)$. Since each of the four sub-queries gives a $2$-approximation, overall we get an $8$-approximation.

$C$-approximation for $4$-sided range search
--------------------------------------------

Now we will prove Theorem \[thm::three-sided-color\](B). Again we will use the framework of Theorem \[thm::main-1\]. It is straightforward to obtain a data structure with $\S_{\capp}=O(n\log n)$, $\Q_{\capp}=O(\log n)$ and $C=16$. Simply build a binary range tree on the $y$-coordinates of $S$ and at each node build an instance of Lemma \[lem:3-sided-c\] based on the points in its subtree. Given a $4$-sided query rectangle $q$, it can be broken down into two $3$-sided query rectangles. Shi and JaJa [@sj05] presented a reporting structure with $\S_{\rep}=O(n\log n)$ and $\Q_{\rep}=O(\log n)$. Plugging in these values into Theorem \[thm::main-1\] proves Theorem \[thm::three-sided-color\](B). \[rem:ac-3-sided\] As discussed in Remark \[rem:ac-3d-dom\], the technique of [@ah08] can be adapted to answer a colored approximate counting query. For colored $3$-sided range search in ${\mathbb{R}}^2$, plugging in $S(n)=O(n)$ and $Q(n)=O(\log n)$ [@m85] leads to a data structure of size $O(n\log^2n)$ and query time $O(\log^2n)$. For colored $4$-sided range search in ${\mathbb{R}}^2$, plugging in $S(n)=O(n\log n)$ and $Q(n)=O(\log n)$ [@bcko08] leads to a data structure of size $O(n\log^3n)$ and query time $O(\log^2n)$ (the structure of Chazelle [@c86] can be used to obtain slightly better space).

Applications of Reduction-II {#sec:appl-sec-red}
============================

In this section we present a few applications of Reduction-II.
The exact counting structures known for the colored problems discussed in this section are expensive [@gjs04; @krsv07]. \[thm::halfspace-color\] Consider the following three colored geometric settings: 1. [**Colored halfplane range search**]{}, where the input is a set of $n$ colored points in ${\mathbb{R}}^2$ and the query is a halfplane. There is a data structure of $O(n)$ size which can answer the approximate counting query in $O\left(\frac{1}{{\varepsilon}^2}\cdot\log n\cdot\log(\log_{1+{\varepsilon}} |{\cal C}|)\right)$ time. 2. [**Colored halfspace range search in ${\mathbb{R}}^3$**]{}, where the input is a set of $n$ colored points in ${\mathbb{R}}^3$ and the query is a halfspace. There is a data structure of $O(n\log n)$ size which can answer the approximate counting query in $O\left(\frac{1}{{\varepsilon}^{2}}\log^3n\cdot \log(\log_{1+{\varepsilon}} |{\cal C}|)\right)$ time. 3. [**Colored orthogonal range search in ${\mathbb{R}}^d$**]{}, where the input is a set of $n$ colored points in ${\mathbb{R}}^d$ and the query is an axis-parallel rectangle. There is a data structure of $O(n\log^d n)$ size which can answer the approximate counting query in $O\left(\frac{1}{{\varepsilon}^{2}}\cdot\log^{d+1}n \cdot\log(\log_{1+{\varepsilon}} |{\cal C}|) \right)$ time. [**Colored orthogonal range search in ${\mathbb{R}}^d$.**]{} First consider the [*standard*]{} orthogonal range emptiness query in ${\mathbb{R}}^d$ ($d\geq 2$). Using range trees, this problem can be solved using $M(n)=O(n\log^{d-1}n)$ space and $\Q_{\rep}(n)=O(\log^{d-1}n)$ query time. Using this structure, the colored orthogonal range reporting query in ${\mathbb{R}}^d$ can be answered in $O(\Q_{\rep}(n) + \kappa \Q_{\rep}(n)\log n)$ time using a structure of size $O(M(n)\log n)$. Here $\kappa$ is the number of colors reported (see Section $1.3.4$ of [@gjs04] for the details of this transformation). By applying Theorem \[thm:accq\], the space occupied by the approximate counting structure will be $O(n\log^{d}n)$.
In Theorem \[thm:accq\], we assumed that the query time of the colored reporting structure can be expressed as $O(\Q_{\rep} + \kappa)$, whereas for this problem the query time is $O(\Q_{\rep} + \kappa \Q_{\rep}\log n)$. Therefore, equation \[eqn:insensitive\] for the query time $\Q_{\eapp}(n)$ in Theorem \[thm:accq\] can be rewritten as $$O\bigg(\big(\Q_{\rep}(n) + ({\varepsilon}^{-2}\log n) \Q_{\rep}(n)\log n \big)\cdot\log(\log_{1+{\varepsilon}} |{\cal C}|)\bigg)$$ Plugging the value of $\Q_{\rep}(n)$ into the above expression, we get $$\Q_{\eapp}(n) =O\bigg(\big(\log^{d-1}n + {\varepsilon}^{-2}\log^{d+1} n\big)\cdot\log(\log_{1+{\varepsilon}} |{\cal C}|)\bigg) = O\bigg( {\varepsilon}^{-2}\cdot\log^{d+1}n \cdot\log(\log_{1+{\varepsilon}} |{\cal C}|) \bigg)$$ [**Colored halfspace range search in ${\mathbb{R}}^3$.**]{} There exists an $O(n\log^2n)$ space reporting data structure for this problem which can answer the query in $O(n^{1/2+\delta}+ \kappa)$ time [@gjs04]. However, we will not use this structure: for our purposes $\kappa=O({\varepsilon}^{-2}\log n)$, and the transformation technique used for the colored orthogonal range search problem gives a reporting data structure with better bounds. Again, first consider the [*standard*]{} halfspace range emptiness query in ${\mathbb{R}}^3$. This problem can be solved using $M(n)=O(n)$ space and $\Q_{\rep}(n)=O(\log n)$ query time [@ac09b]. Using this structure, the colored halfspace range reporting query in ${\mathbb{R}}^3$ can be answered in $O(\Q_{\rep}(n) + \kappa \Q_{\rep}(n)\log n)=O({\varepsilon}^{-2}\log^3n)$ time using a structure of size $O(M(n)\log n)=O(n\log n)$. Applying Theorem \[thm:accq\], the space occupied by the approximate counting structure will be $O(n\log n)$.
The query time will be $$O\bigg(\big(\Q_{\rep}(n) + ({\varepsilon}^{-2}\log n) \Q_{\rep}(n)\log n \big)\cdot\log(\log_{1+{\varepsilon}} |{\cal C}|)\bigg) = O\bigg( ({\varepsilon}^{-2}\log^3n)\cdot \log(\log_{1+{\varepsilon}} |{\cal C}|)\bigg)$$ [**Colored halfplane range search.**]{} In [@gjs04] a reduction is presented from this problem to the [*segments-below-point*]{} problem: given a set of $n$ segments in the plane, report all the segments hit by a vertical query ray. Recently, the segments-below-point problem was solved by Agarwal, Cheng, Tao and Yi [@acty09] in the context of designing data structures for uncertain data. They present an $O(n)$ space data structure which answers the query in $O(\log n + \kappa)$ time. This implies a solution for colored halfplane range reporting with the same bounds. Plugging this result into Theorem \[thm:accq\], we obtain an $O(n)$ size data structure which can answer the approximate counting query in $O({\varepsilon}^{-2}\log n\cdot \log(\log_{1+{\varepsilon}} |{\cal C}|))$ time. Conclusion and Future Work ========================== In this work, we built optimal and near-optimal approximate counting data structures for several colored and uncolored (i.e., standard) geometric settings. We finish by presenting a few open problems: 1. We do not discuss preprocessing time in this paper, since most of the solutions in this work are based on verifying whether a sample is “suitable”, and it is not known how to perform this verification efficiently. 2. In the colored orthogonal range search problem in ${\mathbb{R}}^1$, the input is a set of $n$ colored points on the real line and the query is an interval. Is it possible to build a linear-space data structure which can answer the approximate counting query in ${O_{{\varepsilon}}}(1)$ time in the word-RAM model? [**Acknowledgements.**]{} I am thankful to Prof. Sariel Har-Peled, Prof.
Ravi Janardan, Yuan Li, Sivaramakrishnan Ramamoorthy, Stavros Sintos, and Jie Xue for fruitful discussions on these problems and for comments on previous drafts. I would also like to thank the anonymous referees, whose comments helped immensely in improving the content and the presentation of the paper. [^1]: This research was partly supported by a Doctoral Dissertation Fellowship (DDF) from the Graduate School of the University of Minnesota.
David Kutasov,$^1$ Marcos Mariño,$^2$ and Gregory Moore$^2$ $^1$ [*Department of Physics, University of Chicago*]{} *5640 S. Ellis Av., Chicago, IL 60637, USA* kutasov@theory.uchicago.edu $^2$ [*Department of Physics, Rutgers University*]{} *Piscataway, NJ 08855-0849, USA* marcosm, gmoore@physics.rutgers.edu The study of open string tachyon condensation in string field theory can be drastically simplified by making an appropriate choice of coordinates on the space of string fields. We show that a very natural coordinate system is suggested by the connection between the worldsheet renormalization group and spacetime physics. In this system only one field, the tachyon, condenses, while all other fields have vanishing expectation values. These coordinates are also well suited to the study of D-branes as solitons. We use them to show that the tension of the D25-brane is cancelled by tachyon condensation, and we compute exactly the profiles and tensions of lower dimensional D-branes.
The problem of open string tachyon condensation on unstable branes in bosonic and supersymmetric string theory is interesting, since it touches on important issues in string theory such as background independence, off-shell physics, the symmetry structure of the theory, and the role of closed strings. In the context of string field theory (SFT) the main approach to this problem has been through Witten’s cubic, or Chern-Simons string field theory , and in the past year notable progress has been made (see  ). On the other hand, the physics of tachyon condensation is well understood from the first quantized (worldsheet) point of view. The endpoint of condensation is a state in which the brane has completely “disappeared.” The process of condensation can also produce lower dimensional unstable branes (or BPS brane – anti-brane pairs in the superstring) as intermediate states. Reproducing these results in the SFT of  is non-trivial. The apparent simplicity of a cubic interaction vertex is deceptive – the condensation involves an infinite number of physical and unphysical scalar fields of arbitrarily high mass. Recent progress on the problem involves a level truncation , which appears to lead to very good agreement with the expected results for some quantities, such as the vacuum energy after condensation. At the same time, it is not clear why and when level truncation works, and it is difficult to study the dynamics of the non-trivial vacuum using this approach. The worldsheet analysis (see  for a recent discussion) suggests that there should exist a choice of coordinates on the space of string fields that is better suited for the study of tachyon condensation. To see this, consider, for example, the process in which the tachyon on a $D25$-brane in the bosonic string condenses to make a lower dimensional $Dp$-brane with $p<25$, which is stretched in the directions $(1,2,\cdots,p)$.
This is achieved by considering the path integral on the disk with the worldsheet action where $\SS_0$ is the free field action describing open plus closed strings on the disk, $\theta$ is an angular coordinate parametrizing the boundary of the disk, and $T(X^{p+1}, \cdots, X^{25})$ is a slowly varying tachyon profile with a quadratic minimum giving mass to the $25-p$ coordinates transverse to the $Dp$-brane $\{X^i(\theta)\}$, $i=p+1,\cdots, 25$. The action  describes a renormalization group flow from a theory where all $26$ $\{X^\mu\}$ satisfy Neumann boundary conditions (corresponding to the 25-brane) to one where the $25-p$ coordinates $\{X^i\}$ have Dirichlet boundary conditions (the $p$-brane). Any profile $T(X)$ with the above properties will do, but a particularly simple choice is for which the worldsheet theory is free throughout the RG flow. The parameters $a,u_i$ flow from zero in the UV to infinity in the IR. A crucial point for what follows is that in this flow $a,u_i$ do not mix with any other couplings. In the spacetime SFT, the $p$-brane can be constructed as a finite energy soliton . The above worldsheet considerations imply that there must exist a choice of coordinates on the space of string fields in which the tachyon profile is given by  and no other fields are excited. In the cubic SFT  this is not the case – the soliton contains excitations of an infinite number of fields . As we will see below, a more suitable candidate for describing tachyon condensation is the open SFT proposed by Witten in  and refined by Shatashvili  (see also ). We will refer to this string field theory as “boundary string field theory” (B-SFT). The plan of the paper is the following. In section 2 we briefly review the construction of B-SFT . We comment on the relation of the spacetime action to the boundary entropy of Affleck and Ludwig  and to the cubic Chern-Simons string field theory . 
In section 3 we turn to an example: bosonic open string theory on a $D25$-brane in flat spacetime. We evaluate the tachyon potential and kinetic (two derivative) term and study condensation to the vacuum and to lower branes. The description of the condensation to the vacuum is exact (since the tachyon is the only field that condenses, and its potential is known exactly), while the properties of solitons (corresponding to lower dimensional branes) receive corrections from higher derivative terms in the action, although the two derivative action is in excellent qualitative agreement with the expected exact results. In section 4 we show that the corrections to the tension of solitons in this SFT can be computed exactly, since to analyze them it is enough to compute the exact action for tachyon profiles of the form . We use this observation to compute the tensions and show that they are in exact agreement with the expected results. Some comments about the physics of excited open string states and other issues appear in sections 5, 6. Two appendices contain some of the technical details. A. Gerasimov and S. Shatashvili have independently noticed the relevance of boundary string field theory to the problem of tachyon condensation . The construction of  is aimed at making precise the notion that the configuration space of open string field theory is the space of all two dimensional worldsheet field theories on the disk, which are conformal in the interior of the disk but have arbitrary boundary interactions. Thus, as in , one studies the worldsheet action where $\SS_0$ is a free action defining an open plus closed conformal background, and $\VV$ is a general boundary perturbation. We will later discuss the twenty six dimensional bosonic string, for which $\VV$ has a derivative expansion (or level expansion) of the form The boundary conditions on $X$ (in the unperturbed theory) are $\p_r X^\mu\vert_{r=1}=0$. 
If one wishes to include Chan-Paton indices, the field $\VV$ is promoted to an $N\times N$ matrix and the path integral measure on the disk is weighted by We will mostly restrict to the case $N=1$ in what follows. In general, $\VV$ is a ghost number zero operator, which nevertheless might depend on the ghosts, and one must also introduce a ghost number one operator $\OO$ via If, as in , $\VV$ is constructed out of matter fields alone, one has It is not clear that the theory on the disk described by  makes sense. Even if one restricts attention to $\VV$’s that do not depend on ghosts, such as , in general the interaction is non-renormalizable and one might expect the theory to be ill-defined. This is an important issue, about which we will have nothing new to say here, however there are clearly interesting cases, such as tachyon condensation, in which the interaction  is renormalizable. The discussion below definitely applies to these cases and perhaps more generally. Parametrizing the space of boundary perturbations $\VV$ by couplings $\lambda^i$: (and consequently $\OO=\sum_i\lambda^i \OO_i$ ), the spacetime SFT action $S$ is defined by where $Q$ is the BRST charge and the correlator is evaluated with the worldsheet action . Note that  defines the action up to an additive constant; also, the normalization of the action is not necessarily the same as in other definitions. We will fix both ambiguities below. Specializing to $\OO$’s of the form , and using the fact that if $\VV_i$ is a conformal primary of dimension $\Delta_i$, we conclude from  that where Actually, it is clear that eq.  cannot be true in general, since it does not transform covariantly under reparametrizations of the space of theories, $\lambda^j\to f^j(\lambda^i)$. Indeed, $\partial_i S$ and $G_{ij}$ transform as tensors (the latter is the metric on the space of worldsheet theories), but $\lambda^i$ does not. The correct covariant generalization of  was given in . 
The worldsheet RG defines a natural vector field on the space of theories where $|x|$ is a distance scale (i.e., a UV cutoff), and is the $\beta$ function, which transforms as a vector under reparametrizations of $\lambda^i$. The covariant form of  is thus As is well known in the general theory of the RG, one can choose coordinates on the space of theories such that the $\beta$ functions are exactly linear. This can always be done locally in the space of couplings, so long as the linear term in the $\beta$-function is non-vanishing. In such coordinates,  reduces to . In it was further shown that the action $S$ defined by is related to the partition sum on the disk $Z(\lambda^i)$ via Note that  fixes the additive ambiguity in $S$ by requiring that at fixed points of the boundary RG (at which $\beta^i(\lambda^*)=0$) From the worldsheet point of view, the properties ,  and  mean that $S$ is a non-conformal generalization of the boundary entropy of . In fact, in any unitary theory satisfying these properties one can prove the “$g$-theorem” postulated in . Indeed, the scale variation of $S$ is given by the Callan-Symanzik equation where we used the fact that $S$ depends on the scale only via its dependence on the running couplings, and equations , . In a unitary theory, the metric $G_{ij}$  is positive definite; thus $S$ decreases along RG trajectories. Finally, the property  implies that at fixed points of the boundary RG, $S$ coincides with the boundary entropy as defined in . Thus, in any unitary theory in which the considerations of  do not suffer from UV subtleties (associated with non-renormalizability), the $g$-theorem of  is valid. As mentioned above, a natural choice of coordinates on the space of string fields is one in which the $\beta$-functions are exactly linear. This choice can always be made locally for $\Delta_i\not=1$.
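A one-coupling example makes the linearization explicit (a standard RG computation, not one of the equations elided in the text): given $\beta^{\lambda}=(\Delta-1)\lambda+c\,\lambda^{2}+O(\lambda^{3})$, the redefinition

```latex
\tilde\lambda \;=\; \lambda \;-\; \frac{c}{\Delta-1}\,\lambda^{2} \;+\; O(\lambda^{3})
\qquad\Longrightarrow\qquad
\beta^{\tilde\lambda} \;=\; \frac{d\tilde\lambda}{d\lambda}\,\beta^{\lambda}
\;=\; (\Delta-1)\,\tilde\lambda \;+\; O(\tilde\lambda^{3})
```

renders the $\beta$-function linear to this order; iterating order by order works whenever $\Delta\neq 1$ (and no resonances among dimensions occur), at the price of $1/(\Delta-1)$ factors in the coordinate change.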
These coordinates become singular as $\Delta_i\to 1$, which in string theory language is the place where the components of the string field ( $T(X)$, $A_\mu$  in ) go on-shell. On the other hand, since the RG flows are straight lines in these coordinates, they are well suited to studying processes which are far off-shell, such as tachyon condensation, since they minimize the mixing between different modes. In contrast, the cubic SFT parametrization of worldsheet RG is regular close to the mass shell; it appears to be closely related to the coordinates on coupling space implied by the $\epsilon(=1-\Delta)$ expansion. These coordinates are useful for studying processes close to the mass shell, such as reproducing perturbative on-shell amplitudes. This raises the interesting question of how the action $S$ defined above is related to the cubic action of . It seems clear that the cubic SFT must correspond to , for a particular choice of coordinates on the space of string fields (or worldsheet couplings). The two sets of coordinates are related by a complicated and highly singular transformation (see appendix A for some comments on this transformation). As we will see below, tachyon condensation is simpler in the coordinates , as one would expect from the above discussion. In this section we will study the action $S$ described in the previous section, restricting to the tachyon field. We will keep terms with up to two derivatives and study various features of tachyon condensation using the resulting action, which will turn out to have the form where the $\cdots$ stand for terms with more than two derivatives. Before deriving , we would like to make a few comments on its form. [(1)]{} The tachyon potential is This potential is exact, and indeed already appears in . The perturbative vacuum corresponds to $T=0$, near which $U(T)=1-{1\over2}T^2+\cdots$. The “stable vacuum” to which the tachyon condenses is at $T=+\infty$, where $U(T)\to 0$. 
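Since the explicit expressions are elided here, take as given the form of the exact potential quoted later in the text, $U(T)=(1+T)e^{-T}$ (from $S=(T+1)\exp(-T)$). A quick numerical sketch in Python confirms the properties used above: $U(0)=1$, $U(T)=1-\tfrac12 T^2+\cdots$ near the origin, $U\to 0$ as $T\to+\infty$, and $U\to-\infty$ as $T\to-\infty$:

```python
import math

def U(T):
    # exact tachyon potential assumed here: U(T) = (1 + T) * exp(-T)
    return (1.0 + T) * math.exp(-T)

def d2U(T, h=1e-4):
    # central second difference, to read off the quadratic term of U near T = 0
    return (U(T + h) - 2.0 * U(T) + U(T - h)) / h**2
```

The second derivative at the origin being $-1$ matches $U(T)=1-\tfrac12T^2+\cdots$; as remarked in the text, the mass naively read off from this coefficient differs from the on-shell tachyon mass because the neglected higher-derivative terms contribute.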
One can ask why the tachyon does not instead roll to $T=-\infty$ where $U(T)$ goes to $-\infty$. We will postpone this issue to section 6. [(2)]{} $T_{25}$ in  is the tension of the $D25$-brane. Indeed, in the perturbative vacuum $T=0$, $S=T_{25} V$, where $V$ is the volume of spacetime. Note that our tachyon field $T$ is dimensionless. [(3)]{} From  it seems that the mass of the tachyon in the perturbative vacuum is $\ap M^2=-1/2$. Of course, the correct result is $\ap M^2=-1$, but there is no paradox since the higher derivative terms that have been neglected in  are important in determining this mass. In Appendix A we show that the inverse propagator of the tachyon indeed exhibits a simple pole at $\ap k^2=1$. [(4)]{} The action  is related by a field redefinition to an action studied recently in  as a toy model of tachyon condensation. These authors found that this model exhibits some remarkable similarities to tachyon condensation in SFT. We now see that it is in fact a two derivative approximation to the exact tachyon action. As we will discuss later, this clarifies the origin of some of the properties found in . The action  can be determined from the definitions , as follows. One starts by evaluating the partition sum $Z$ in for the tachyon profile  (with generic $u_i> 0$ and $p=-1$). This should be possible because, as mentioned in section 1, the resulting worldsheet theory is free for all $a$, $u_i$. Plugging into  one then finds that the action $S(a, u_i)$ is given by The action  can then be reconstructed by taking the limit $u_j\to 0$. A simple scaling argument shows that the leading behavior of the action  evaluated on the profile  in the limit $u_j\to 0$ comes from the potential term; terms with $2n$ derivatives are down from the leading term by $n$ powers of $u_j$. Thus, by examining the first two terms in $S$  we can uniquely reconstruct the potential and kinetic terms in . 
At higher orders in derivatives, there are many terms one can write down (most of which vanish on the profile ) and one needs additional information to determine the full action (see appendix B). The partition sum $Z(a, u_i)$ has been computed in . The answer can be written as where and $\gamma$ is the Euler number. For small $u_i$ one finds The first line of  should be compared to the potential term in  evaluated on the profile ; the second must be due to the kinetic term. Evaluating the potential energy one finds Comparing to  we see that Of course, the fact that this is not the standard form of the tension is due to the freedom of rescaling the action , . We can use  to determine the multiplicative renormalization of $S$ needed to bring it into standard form. Computing the kinetic term in  and comparing the result to the second line of  fixes the coefficient of $(\p_\mu T)^2$ to be the one given in . After deriving the action  we are now ready to proceed to studying tachyon condensation. The first thing that one might worry about is whether it is enough to study the tachyon dynamics, or whether one must include the infinite number of excited states, as in . We will show later that only the zero momentum tachyon condenses in the coordinates on string field space that we are working in, but for now we will assume that and proceed. Consider first spacetime independent tachyon profiles. The (locally) stable vacuum is at $T=\infty$. The vacuum energy vanishes there. Since the potential  is exact, this gives a proof of Sen’s conjecture  that $U_{\rm pert} - U_{\rm closed} = T_{25}$ where $U_{\rm pert}$ is the value of the potential in the perturbative open string vacuum and $U_{\rm closed}$ is the value of the potential in the closed string vacuum where the open strings have “condensed” and “disappeared.” We have seen above that at a stationary point, $U$ is just the $g$-function, and moreover that $U_{\rm closed}=0$. 
On the other hand, it is straightforward to identify the tension of D-branes with the $g$-function . Note that in our coordinates the stable vacuum is at infinity. This is not in disagreement with other calculations in which it occurs at a finite value of $T$ , since field redefinitions change the value of $T$ at the minimum of the potential. In fact, in coordinates on the space of couplings where the $\beta$ functions in a theory are exactly linear, any infrared fixed points will [*always*]{} occur at infinite values of the couplings. A more invariant question is what is the distance in field space between the perturbative fixed point at $T=0$, and the stable minimum at $T=\infty$. For this we need to compute the metric on the space of $T$’s. This is easily done either by using  (with $S=(T+1)\exp(-T)$, $\beta^T=-T$) or by reading off the metric from the kinetic term in . Either way one finds that the metric on field space is Thus, the distance between $T=0$ and $T=\infty$ is finite. Consider next spacetime dependent tachyon profiles, which describe lower dimensional branes as solitons. The equations of motion following from the action  are We are looking for finite action solutions which asymptote to the “stable vacuum” $T=\infty$. The solutions are in fact precisely the profiles  that entered our discussion a number of times before! Substituting  into one finds that each of the $u_i$ is either $0$ or $1/4\ap$. That is, the solitons are translationally invariant along a linear subspace of $R^{26}$ and spherically symmetric transverse to that subspace. Let $n$ be the number of nonzero $u_i$’s. Then $a=-n$. We interpret such codimension $n$ solitons as $D(25-n)$-branes. Substituting into the action  gives Comparing this to the expected tension $T_{25-n} V_{26-n}$ we conclude that with $\xi = e/\sqrt{\pi} \cong 1.534 $. 
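The metric equation itself is elided above, but it is fixed by the data already quoted: with $S=(T+1)e^{-T}$ and $\beta^T=-T$, the gradient relation of section 2 gives $G_{TT}=e^{-T}$ (up to a normalization convention; treat this as an assumption of the sketch). The proper distance $\int_0^\infty\sqrt{G_{TT}}\,dT=\int_0^\infty e^{-T/2}\,dT=2$ is then finite, which the following Python snippet checks by direct quadrature:

```python
import math

def G_TT(T):
    # metric inferred from dS/dT = -T e^{-T} and beta^T = -T
    # (assumption: G_TT = e^{-T}, up to normalization)
    return math.exp(-T)

def field_space_distance(T_max, n=100000):
    # midpoint-rule quadrature of  integral_0^{T_max} sqrt(G_TT(T)) dT
    h = T_max / n
    return h * sum(math.sqrt(G_TT((i + 0.5) * h)) for i in range(n))
```

Truncating the upper limit at any large $T_{\rm max}$ already gives a value exponentially close to $2$, exhibiting the finiteness of the distance to the $T=\infty$ vacuum.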
We have written it this way to facilitate comparison with the exact answer In the next section we will see that one can improve on the result  and calculate the tensions  exactly. But before moving on to that analysis it is useful to make a few remarks about the results obtained so far. One striking feature of the foregoing discussion is the fact that the soliton solutions are given precisely by the quadratic tachyon profiles that play such a prominent role in the worldsheet analysis. This explains why studying them is so easy: the worldsheet theory in their presence remains free! It also makes it clear why we are getting descent relations of the form : as explained above, the action is nothing but the boundary entropy, and for spherically symmetric profiles of the form the boundary entropy factorizes. Finally, it is clear why we are not getting the correct descent relation  but rather an approximate version . The reason is that at this level of approximation we find a finite value of the mass parameter $u$ in . So the action  is approximately computing the boundary entropy at a finite point along the RG trajectory. Since, as discussed in section 2, the boundary entropy is a monotonically decreasing quantity, we expect to find a larger answer at finite $u$ than at the infrared fixed point ($u=\infty$). This is the reason why the parameter $\xi$ in  is larger than one. All this makes it clear that what will happen when we include higher derivative corrections in  is that the soliton profiles will still have the form , but $|a|$ and $u$ will increase to infinity. We will demonstrate that this is indeed the case in the next section. The codimension $n$ solitons were also discussed recently in . Our results are in exact agreement with those of , although the interpretation is slightly different. The authors of  analyzed the spectrum of small fluctuations around the solitons . 
They found a discrete spectrum of scalars with masses This is very natural from the worldsheet point of view as well, since once we turn on a worldsheet potential of the form (even for finite $u$), it is clear that one expects to find only fields that are bound to the lower dimensional brane but otherwise have the same properties as their higher dimensional cousins. Finally, one might wonder whether it is possible to describe multi-soliton configurations in the theory . From the worldsheet point of view this involves  studying multicritical tachyon profiles of the form For $l>1$ the worldsheet theory is no longer free and one expects complications having to do with the interactions between the solitons (fundamental strings connecting different D-branes). Plugging  into  we see that the reflection of this in spacetime is that one needs to keep higher derivative terms in the action to study such configurations. We would like to compute the corrections to the descent relations  coming from higher derivative corrections to the action . In principle, one might proceed as follows. First generalize the procedure of section 3 to compute higher derivative corrections to the action, and then use the resulting action to determine the profile of the solitons and their tensions. This looks difficult; computing the higher derivative corrections involves both technical and conceptual complications. Also it is likely that the resulting action would be rather unwieldy and difficult to study. Some of the technical complications can be seen by looking at the action for quadratic tachyon profiles , . As discussed in section 3, implicit in the action $S(a, u_i)$ is an infinite series of higher derivative corrections to  which can be computed by expanding $S$ in powers of the $u_j$. An example of such an infinite-derivative action is given in appendix B. 
Unfortunately,  and do not determine the tachyon action uniquely, since it is easy to write an infinite number of terms which annihilate the profile  and thus do not contribute to $S(a, u_i)$. Nevertheless, the discussion of the previous section makes it clear that there is an alternative way to proceed that circumvents all of the above complications and can be used to compute the tensions of the solitons exactly. The basic observation is that [*we know that the exact profile of the soliton in the exact SFT ,  is going to be of the form , with some particular values of $a$, $u_i$*]{}. The reason is that this mode does not mix with any other modes in the SFT (as will be shown in the next section). Thus, all we have to do to compute the exact tension of the D-brane solitons is to take the exact action $S(a, u_i)$ given by , , and extremize it in $a$ and $u_i$. Furthermore, we know that the extremum we are looking for is one in which $n$ of the $u_i\to \infty$ and the rest vanish (for a codimension $n$ soliton). We next describe this calculation. For simplicity let us consider first a codimension one soliton. We would like to substitute the ansatz $T= a + u X_1^2$ in  (with the other $u_i=0$) and set the action equal to $V_{24+1} T_{24}$. Of course, the action ,  diverges when $u_i\to 0$, which is a reflection of the divergent volume $V_{24+1}$. In order to do the computation in a well defined way we must regularize the volume divergence. We do this by periodic identification of We must now determine the correct normalization of the path integral $Z$. The correct normalization for the worldsheet zero-mode of an uncompactified spacetime coordinate $X$ is We know this because if we substitute $T=a + u X^2 $, we reproduce $e^{-a} {1\over \sqrt{2\ap u}}$. 
It follows that when we periodically identify $X^\mu$ as in  in directions $\mu=2,\dots, 26$ and take $T= a+ u X_1^2$ the resulting boundary string field theory action is, exactly, As discussed above, the dynamical variables in this action are $a,u$. Therefore, we should minimize $S$ with respect to them. Minimizing first with respect to $a$ we find Substituting back into the action we get: where we define We may now invoke Witten’s result . The action  is a monotonically decreasing function of $u$, and therefore the minimization pushes $u$ to $\infty$, as expected from the worldsheet renormalization group arguments (the $g$-theorem). We are particularly interested in the value of the action at the end of the RG trajectory. From Stirling’s formula we find at large $z$ We thus obtain the boundary string field theory action On the other hand, from the spacetime point of view this is clearly equal to $T_{24}\prod_\mu R^{\mu}$. We therefore conclude that which is [*precisely*]{} the expected value! Clearly this exercise can be repeated for branes of higher codimension. After minimization with respect to $a$ we find the action for the codimension $n$ soliton: and therefore each codimension leads to an extra factor of $2 \pi \sqrt{\ap}$, in agreement with . We finish this section with a few comments: [(1)]{} The solitonic solutions describing lower dimensional D-branes constructed in section 3 had a finite size, of order $l_s=\sqrt{\ap}$ (since their profiles were given by with $u=1/4\ap$). In the exact problem, the sizes of the solitons go to zero like $1/\sqrt u$. This is in nice correspondence with the usual description of D-branes as (classically) pointlike objects. In level truncated SFT, the lower D-branes were found to correspond to finite size lumps, similar to those of section 3. Here, we saw that the higher derivative terms in the action play a crucial role in reducing the size of the soliton from $l_s$ to zero. 
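The large-$z$ Stirling step invoked above can be checked numerically. The relevant equations are elided here, so the combination $F(z)=\sqrt{z}\,e^{z}\,z^{-z}\,\Gamma(z)$ used below is an assumption about the form that enters the computation; Stirling's formula gives $F(z)\to\sqrt{2\pi}$ from above, consistent with the action decreasing monotonically along the flow toward $u\to\infty$:

```python
import math

def F(z):
    # assumed combination F(z) = sqrt(z) * e^z * z^(-z) * Gamma(z),
    # evaluated in log-space to avoid overflow for large z
    return math.exp(0.5 * math.log(z) + z - z * math.log(z) + math.lgamma(z))
```

By Stirling, $F(z)=\sqrt{2\pi}\,\big(1+\tfrac{1}{12z}+\cdots\big)$, so the limit $\sqrt{2\pi}$ is approached monotonically from above.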
Since in the level truncation scheme the contributions of such terms seem to increase with level, it is possible that if the calculations of  were continued to much higher levels, the size of the solitons would slowly decrease to zero, as it does in our approach. Another possibility is that the complicated relation of our parametrization of the space of string fields to that of cubic SFT transforms the $\delta$-function tachyon profile we find to a finite size lump. [(2)]{} The fact that we have been able to reproduce exactly the tension ratios  may at first sight seem puzzling. The full spacetime classical SFT is a very complicated theory, with an infinite number of fields and a rich pattern of non-polynomial interactions. The fact that one can prove that this theory has finite action solitonic solutions with profiles and tensions that can be computed exactly looks from the spacetime point of view like a “string miracle.” Such “miracles” are very generic in string theory. The oldest example is perhaps (channel) duality of the tree level S-matrix. The fact that an infinite sum over massive s-channel poles can produce a t-channel pole is due in spacetime to an incredible conspiracy of the masses and couplings of Regge resonances. Describing this in terms of a spacetime Lagrangian seems hopeless. However, on the worldsheet, this is one of the many consequences of conformal invariance and is easily described and understood. In the tachyon condensation problem, something very similar happens. The miracle is explained by noting that the spacetime action is nothing but the boundary entropy (see section 2), and the process of condensation is trivial since it corresponds to free field theory on the worldsheet . In our discussion so far we focused on the physics of the tachyon. It is interesting, and for some purposes necessary, to generalize the discussion to include excited open strings. 
The first question that we address is one that was noted a few times in the text: why can we study condensation of the tachyon without taking into account other modes of the string? The reason is that we can divide the coordinates on field space into $a,u$, which are free field perturbations and an orthogonal set of coordinates $\lambda_i$ corresponding to the non-zero momentum modes of the tachyon and excited open string modes. The $\lambda_i$ could be  modes of one of the fields $A_\mu$, $B_{\mu\nu}$, $C_\mu$ in . It is consistent to set all the excited string modes $\lambda_i$ to zero in the presence of a tachyon profile of the form  if and only if the action ,  does not have any linear terms in any of the couplings $\lambda_i$ in the background . It should be emphasized that while the couplings $\lambda_i$ are in general non-renormalizable (since they correspond to irrelevant operators with $\Delta_i>1$), we are treating the dependence of $S$ on $\lambda_i$ perturbatively. There is no problem with calculating integrated correlation functions of irrelevant operators in a background such as , perturbed from a conformal background by relevant and marginal operators, at least after suitable regularization and renormalization procedures are specified. One such procedure is described in appendix A (by contrast, studying a worldsheet action like  with a finite perturbation by an irrelevant operator is likely to lead to inconsistencies.) Accordingly, we may write the action in the form where $S^{(0)}(a, u)$ is the action  and we would like to prove that $S^{(1)}_i(a, u)=0$. Suppose, on the contrary that $S^{(1)}\not=0$. Then $\partial_i S|_{\lambda=0}\not=0$. Looking back at equation  we see that this means that if the metric $G_{ij}$ is non-degenerate on the space of couplings orthogonal to $a,u$, then $\beta^j(a,u; \lambda^i=0)\not=0$. 
Now, after fixing string gauge invariances, the metric $G_{ij}$ is non-degenerate in the background , which corresponds to free field theory. At the same time, the statement that $\beta^i(\lambda)$ does not vanish at $\lambda^i=0$ implies that as we turn on $a$ and the $u$’s, the $\lambda^i$ start flowing according to . But we know that this is false. In free field theory no new couplings are generated by the RG flow. Therefore, $S^{(1)}_i(a,u)$ must vanish. We conclude that all other string modes appear at least quadratically in the spacetime action in the tachyon backgrounds , and they can be consistently set to zero when studying tachyon condensation. Again, it is interesting to contrast the situation with the cubic SFT. In this case a higher string mode, call it schematically $v$, can couple to the tachyon $T$ schematically as $v^2 + v T^2 + v^2 T + v^3$. The couplings of the form $v T^2$ are generically nonzero, and indeed the explicit computations show that higher string modes do obtain nontrivial expectation values during tachyon condensation. Another interesting circle of questions surrounds the fate of the excited string modes as $T\to \infty$. From the worldsheet analysis it is expected that they all “disappear” from the spectrum, but the precise mechanism by which this happens in spacetime is not well understood. It has been proposed  that the coefficients of the kinetic terms vanish at the “stable minimum” but the situation is unclear. The viewpoint of this paper sheds some light on these issues. We would like to construct the action for excited open string modes using the prescription , . We may determine the dependence on the zero mode of the tachyon as follows. Consider the theory in the background $T=a$ (corresponding to  with $u_i=0$). The partition sum has in this case a simple dependence on $a$, where we denoted all the other modes collectively by $\lambda_i$. 
The action  therefore takes the form Recalling the form of the exact tachyon potential  the action  can be rewritten as As we show in appendix A, near the mass shell (as $\Delta_i\to 1$), the quadratic term in the partition sum exhibits a first order zero $(\propto 1-\Delta_i)$; thus the usual kinetic terms for the modes $\lambda_i$ come from the first term on the r.h.s., while the second term, which goes like $(1-\Delta_i)^2$ near the mass shell, contributes higher derivative corrections. In any case, we see that all terms in the action go to zero as the tachyon relaxes to $T=\infty$, but, at least in these coordinates on the space of string fields, they do not all go like $U(T)$. A simple application of  is to the dependence of the Born-Infeld action on $T$ discussed in . A constant $F_{\mu\nu}$ on the $D25$-brane does not break conformal invariance, and therefore the second term in vanishes in this case. The partition sum in the presence of the constant electro-magnetic field is the Born-Infeld action (for a review see ), Substituting into  we conclude that the action for slowly varying gauge fields and tachyons is in agreement with  (essentially the same result already appears in .) One can also use our construction to study the spectrum of the open string theory in the background of a soliton. This involves computing the partition sum $Z$ to quadratic order in the couplings $\lambda_i$ in the soliton background  and should give rise to the standard picture of states bound to the soliton (or lower dimensional brane). As we have mentioned, this should help to explain some results of . There is a large number of open problems associated with the circle of ideas explored in this paper. In this section we list a few. It would be interesting to calculate additional terms in the SFT action. This involves both the determination of higher derivative corrections to  and the inclusion of excited string modes discussed in the previous section. 
As noted in the text, the exact action  implies an infinite number of higher derivative corrections to , but in order to calculate all terms of a given order in derivatives, more information is needed. Perhaps, additional information can be obtained by solving the worldsheet theory corresponding to the multi-soliton tachyon profiles . A related problem is understanding more clearly the relation between boundary string field theory and the cubic SFT. It is conceivable that the space of 2d field theories is a nontrivial infinite dimensional space with no good global coordinate system. It appears from the singular relation between the fields (see  appendix A) that coordinates appropriate to the cubic SFT might have a range of validity which is geodesically incomplete and does not coincide with the “patch” in which good coordinates for boundary SFT are valid. The discussion throughout this paper has focused on the bosonic string, but the construction of section 2 is more general. In particular, the worldsheet RG picture has been generalized to the superstring , where it applies to non-BPS $D$-branes, $D-\bar D$ systems and related configurations. It would be interesting to generalize the considerations of this paper to these problems, especially because the generalization of the cubic SFT to the superstring is subtle and complicated. Another interesting problem involves the role of quantum effects in the tachyon condensation process. Our discussion here was entirely classical, and yet we found that the action goes to zero as the tachyon condenses . Usually this is taken to be a sign of strong coupling, and indeed there were proposals in the literature that tachyon condensation leads to a strongly coupled string theory. For example, the form  of the gauge field action seems naively to suggest that the effective Yang-Mills coupling behaves as and therefore, as $U(T)\to 0$ the gauge theory becomes more and more strongly coupled. 
On the other hand, the worldsheet analysis of  seems to suggest that no strong coupling behavior should be encountered as the tachyon condenses, since diagrams with many holes are not becoming larger in this process. Boundary SFT seems to lead to the same conclusion. It is natural to expect that quantum corrections to the string field action $S$  come from performing the worldsheet path integral over Riemann surfaces with holes. Each hole contributes a factor of $g_sN$ as usual (for $N$ $D25$-branes), as well as a factor of $\exp(-T)$ from the path integral of . Thus, it looks like the effective coupling is in fact and the perturbative expansion looks like where $A_{-\chi}$ is obtained from the path integral on surfaces of Euler character $\chi$ and no handles. Eq.  suggests that the theory in fact remains weakly coupled as $T\to\infty$, but this seems difficult to reconcile with the Feynman diagram expansion arising from the coupling . It would be interesting to resolve this apparent contradiction. The behavior of the effective string coupling  is related to another issue raised earlier in the paper. Recall that the tachyon potential  is not bounded from below as $T\to-\infty$. Even if the tachyon condenses to the locally stable vacuum at $T=+\infty$ (the closed string vacuum), the system will tunnel through the potential barrier to the true vacuum at $T=-\infty$. The instanton responsible for this tunneling is the Euclidean bounce solution corresponding to a codimension twenty six brane in our construction. Like all the other solitons, it has one negative mode, and therefore mediates vacuum decay. It is natural to ask what is the nature of this instability. The behavior of the effective coupling  provides a hint for a possible answer. We see that as the system rolls towards the “true vacuum” at $T\to-\infty$, the string coupling grows. 
This is significant since as is well known, quantum mechanically, open strings can produce closed strings, and in particular, in this case, the closed string tachyon. Thus, one is led to interpret the instability of the “closed string vacuum” at $T\to\infty$ to decay to the “true vacuum” at $T\to-\infty$ as the closed string tachyon instability. While this is a speculation that needs to be substantiated, we note the following as (weak) evidence for it: [(1)]{} The amplitude for false vacuum decay due to the bounce goes like $\exp(-1/g_s)$. The fact that it vanishes to all orders in $g_s$ is consistent with the fact that no such instability is observed in perturbative open string theory . Understanding the precise dependence on $g_s$ probably requires a better understanding of the issues discussed around equation . [(2)]{} The fact that the string coupling grows after closed string tachyon condensation, suggested by , is consistent with the known physics of closed string tachyon condensation. In this process the central charge of the system decreases, and the dilaton becomes non-trivial (linear in one of the coordinates). This leads to strong coupling somewhere in space. [(3)]{} For unstable $D$-branes in the superstring, the corresponding tachyon potential does not have a similar instability, in accord with the fact that there is no tachyon in the closed string sector in that case. [**Acknowledgements:**]{} We would like to thank T. Banks, M. Douglas, J. Harvey, E. Martinec, N. Seiberg, S. Shenker, and A. Strominger for useful discussions, and J. Harvey for comments on the manuscript. We also thank the participants of the Rutgers group meeting for many lively questions during a presentation of these results. The work of D.K. is supported in part by DOE grant \#DE-FG02-90ER40560. The work of GM and MM is supported by DOE grant DE-FG02-96ER40949. DK thanks the Rutgers High Energy Theory group for hospitality during the course of this work. 
MM would like to thank the High Energy Theory group at Harvard for hospitality in the final stages of this work. The string field theory action  for a tachyon profile is an infinite series $S^{(2)} + S^{(3)} + S^{(4)} + \cdots $ in powers of $T$. In this appendix we give explicit formulae for the first two terms in this expansion. More generally, we will show that the quadratic term in the string field action for a primary field $\VV_i$ has a pole at $\Delta_i=1$. The structure of the quadratic term $S^{(2)}$ for a primary field $\VV_i$ has a rather simple expression. We only need the correlation function of the boundary operator in the free field theory, which is given by and $c$ is a constant. We also need the value of the following integral: Notice that in the evaluation of this integral we have regulated the short-distance singularities by analytic continuation in $z$. We will now compute $S^{(2)}$ using . The term of order $(\lambda^i)^2$ in the partition function is given by: We see that this gives a simple pole in the propagator at $\Delta_i=1$, a fact that was used in section 5. The action $S$  to quadratic order is then: Notice that the term $\beta_i \partial^i Z$ gives a second order pole in the propagator at this order, as stated in section 5. It is easy to check that the definition  gives the same answer for $S^{(2)}$. In the case of the tachyon field , the correlation function in free field theory is and the quadratic piece of the action  reads in this case Notice again that the propagator, which is a complicated function of $k^2$, exhibits the required pole at $\alpha'k^2 =1$. The cubic term for the tachyon field  can be computed by evaluating the correlation function  at next order in perturbation theory. The result is: where and the hypergeometric function ${}_pF_{q}$ is defined by We can now try to compare the action $S=S^{(2)} + S^{(3)}+ \cdots$ to the cubic action obtained in the open string field theory of . 
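The value of the regulated integral above is elided; for a boundary primary of dimension $\Delta$ the standard analytically continued result is $\int_0^{2\pi} dt\, |2\sin(t/2)|^{-2\Delta} = 2\pi\,\Gamma(1-2\Delta)/\Gamma(1-\Delta)^2$ — an assumption here, stated only to illustrate the analytic structure. A quick numerical check that the resulting quadratic term in the partition sum has the first-order zero at $\Delta = 1$ quoted in section 5, which in turn produces the simple pole in the propagator:

```python
import math

def Z2_integral(delta):
    """Assumed analytic continuation of the boundary integral
    int_0^{2pi} dt |2 sin(t/2)|^(-2 delta):
    2*pi*Gamma(1-2*delta)/Gamma(1-delta)**2."""
    return 2.0 * math.pi * math.gamma(1.0 - 2.0 * delta) \
        / math.gamma(1.0 - delta) ** 2

# Sanity check: at delta = 0 the integrand is 1, so the integral is 2*pi.
# Near delta = 1 the quadratic term vanishes linearly, ~ -pi*(1 - delta),
# matching the first-order zero at the mass shell quoted in section 5.
slopes = [Z2_integral(1.0 - eps) / eps for eps in (1e-2, 1e-3, 1e-4)]
```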
Using, for example, the approach of , one finds where If we assume that the tachyon fields $\phi(k)$ and ${\widehat \phi}(k)$ are related as follows, in such a way that where $\kappa$ is a nonzero constant, we obtain: where we have used that $f_1(k) =f_1(-k)$ (this follows from reality of the tachyon field). By comparing the cubic terms, we find: where we have defined Notice that $f_1(k)$ is regular and different from zero when the tachyon is on-shell. On the other hand, if we evaluate the relation  when the three tachyon fields are on-shell, we find that $G(k,k',k'')=0$, and therefore $f_2(k,k'')$ must have a pole with nonzero residue at $\alpha'k^2=1$. This shows that the relation between the CS and the B-SFT tachyon fields becomes singular on-shell. In this appendix we give an example of a higher derivative Lagrangian for the tachyon which reproduces the exact action $S(a,u)$. This is simply meant to indicate the nature of some of the terms. We stress at the outset that the following does not determine an infinite set of couplings, namely, anything which vanishes on the Gaussian profile. One unambiguous conclusion one can draw from this exercise is that in terms of $\phi\sim e^{-T}$ the higher derivative terms must be singular at $\phi=0$. It is useful to generalize the tachyon profile to $T= a + u_{\mu\nu} X^{\mu} X^{\nu}$ with $u_{\mu\nu}$ positive definite. The exact action may be written as (there is a regularization dependent term $\sim {\rm Tr} u$ in the exponential. With the normal ordering prescription of  this term vanishes). Expanding the exponential we obtain a series where at a given order in scaling under $u \to \alpha u$ the sum is a Schur polynomial. Now the action becomes: where it is convenient to define $L_0(n_k) = \sum_k k n_k$. One straightforward way to reproduce this from a Lagrangian proceeds by starting with the $a e^{-a}$ term. 
This is reproduced by In order to account for the second line in  we add terms with $B_{n_k} = (1-L_0(n_k))A_{n_k} $. Finally to get the last line of  we take
--- abstract: 'A small fraction of millicharged dark matter (DM) has been considered in the literature to interpret the enhanced 21-cm absorption at the cosmic dawn, while the main component of DM is still unclear. Here we focus on the case that the main component is self-interacting dark matter (SIDM), motivated by the small-scale problems. For the self-interactions of SIDM to be compatible from dwarf to cluster scales, Sommerfeld-enhanced velocity-dependent self-interactions mediated by a light scalar $\phi$ are considered. For fermionic SIDM $\Psi$, the main annihilation mode $\Psi \bar{\Psi} \to \phi \phi$ is a $p -$wave process, which could evade constraints from CMB and indirect detections. For thermal freeze-out type SIDM, the thermal equilibrium between SIDM and standard model (SM) particles in the early universe via the transition SIDM $\rightleftarrows \phi \rightleftarrows$ SM sets a lower bound on the couplings of $\phi$ to SM particles, which has been excluded by DM direct detections; here we instead consider SIDM in thermal equilibrium with millicharged DM. For $m_\phi >$ twice the millicharged DM mass, $\phi$’s lifetime could be much smaller than 1 second, avoiding excess energy injection during big bang nucleosynthesis. Thus, the $\phi -$SM particle couplings could be very tiny and evade DM direct detections. The momentum transfer in SIDM-target nucleus scattering may be comparable with $m_\phi$ in direct detections, and the picture of WIMP-nucleus scattering with contact interactions fails for SIDM-nucleus scattering with a light mediator. A method is explored in this paper with which a WIMP search result can be converted into the hunt for SIDM in direct detections.'
author: - 'Lian-Bao Jia' title: 'Velocity-dependent self-interacting dark matter from thermal freeze-out and tests in direct detections' --- Introduction ============ Modern astronomical observations [@Aghanim:2018eyx] indicate that dark matter (DM) accounts for about 84% of the matter density in our universe, while the particle properties of DM, e.g., masses, components, and interactions, remain unclear. If DM and ordinary matter were in thermal equilibrium in the very early universe, the DM particles would freeze out thermally with the expansion of the universe. One of the popular thermal freeze-out DM candidates is weakly interacting massive particles (WIMPs), with masses in the GeV$-$TeV range. For WIMP-type DM, the target nucleus could acquire a large recoil energy in WIMP-nucleus scattering in DM direct detections. Yet, confident WIMP signals are still absent from recent sensitive direct detections [@Liu:2019kzq; @Abdelhameed:2019hmk; @Agnese:2017jvy; @Agnes:2018ves; @Cui:2017nnn; @Akerib:2016vxi; @Akerib:2018hck; @Aprile:2018dbl; @Akerib:2017kat; @Xia:2018qgs; @Aprile:2019dbj; @Amole:2019fdf], and the upper limit on the WIMP-nucleon scattering cross section will reach the floor of the neutrino background in the coming decade(s). DM may have multiple components. Recently, a stronger-than-expected 21-cm absorption at the cosmic dawn was reported by EDGES [@Bowman:2018yin], and a possible explanation is that neutral hydrogen was cooled by scattering with a small fraction of MeV millicharged DM [@Munoz:2018pzp; @Fialkov:2018xre; @Barkana:2018lgd; @Barkana:2018cct; @Berlin:2018sjs; @Slatyer:2018aqg; @Liu:2018uzy; @Munoz:2018jwq; @Jia:2018csj; @Mahdawi:2018euy; @Kovetz:2018zan; @Jia:2019yhr]. If so, what is the main component of DM?
In addition, the $\Lambda$CDM model is successful in explaining the large-scale structure of the Universe, while deviations appear on small scales ($\lesssim$ 10 kpc), such as the core-cusp problem, the missing satellites problem, and the too-big-to-fail problem (see e.g., Refs. [@Weinberg:2013aya; @Bull:2015stt; @Tulin:2017ara; @Bullock:2017xww] for more). These small-scale problems may hint at properties of the main component of DM, and possible strong self-interactions between DM particles could provide a solution to the core-cusp and too-big-to-fail problems [@Spergel:1999mh; @Vogelsberger:2012ku; @Zavala:2012us; @Kaplinghat:2015aga; @Tulin:2017ara; @Kamada:2016euw; @Valli:2017ktb].[^1] In this paper, the main component of DM is considered to be self-interacting dark matter (SIDM). For collisional SIDM to resolve the small-scale problems, the required scattering cross section per unit DM mass $\sigma/m_{\mathrm{DM}}$ is $\gtrsim$ 1 cm$^2$/g, while constraints from cluster collisions indicate that $\sigma/m_{\mathrm{DM}}$ should be $\lesssim$ 0.47 cm$^2$/g [@Randall:2007ph; @Harvey:2015hha] (see Ref. [@Tulin:2017ara] for a recent review). In addition, the density profiles of galaxy clusters indicate that the corresponding self-interaction should be $\lesssim$ 0.1$-$0.39 cm$^2$/g [@Kaplinghat:2015aga; @Harvey:2018uwf; @Elbert:2016dbb]. This tension could be relaxed if the scattering cross section of SIDM is velocity dependent. Here we consider the light mediator to be a scalar $\phi$, which couples to the Standard Model (SM) sector via the Higgs portal. When the mass of the mediator $m_\phi$ is much smaller than the SIDM mass, the scattering could be enhanced by the Sommerfeld effect [@Sommerfeld:1931; @ArkaniHamed:2008qn] at low velocities. Thus, the self-interactions of SIDM could be compatible from dwarf to cluster scales.
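The bounds above are quoted in cm$^2$/g, while particle-physics calculations usually need them in natural units. A small conversion sketch (the constants are standard; the function name is ours):

```python
HBARC_GEV_CM = 1.9733e-14    # hbar*c in GeV*cm (standard value)
GEV_PER_GRAM = 5.6096e23     # 1 gram expressed in GeV

def cm2_per_g_to_gev3(x):
    """Convert sigma/m from cm^2/g to natural units (GeV^-3)."""
    cm2_in_inverse_gev2 = (1.0 / HBARC_GEV_CM) ** 2   # 1 cm^2 = (1/hbar c)^2 GeV^-2
    return x * cm2_in_inverse_gev2 / GEV_PER_GRAM

dwarf_bound = cm2_per_g_to_gev3(1.0)      # >~ 1 cm^2/g at dwarf scales
cluster_bound = cm2_per_g_to_gev3(0.47)   # <~ 0.47 cm^2/g from cluster collisions
```

So 1 cm$^2$/g corresponds to roughly $4.6\times 10^3$ GeV$^{-3}$.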
For fermionic SIDM $\Psi$, the annihilation $\Psi \bar{\Psi} \to \phi \phi$ is a $p -$wave process, which is suppressed at low velocities, and thus it could evade constraints from CMB and other indirect detections. In the early universe, if SIDM and SM particles were in thermal equilibrium for a while via the transitions SIDM $\rightleftarrows \phi \rightleftarrows$ SM particles, this thermal equilibrium sets a lower bound on the couplings of $\phi$ to SM particles [@Chu:2011be; @Dolan:2014ska; @Jia:2016pbe]. For the light $\phi$ required by the velocity-dependent scattering between SIDM particles, the lower bound of the $\phi -$SM particle couplings set by the thermal equilibrium has been excluded by present DM direct detections [@Jia:2016pbe].[^2] Thus, this type of thermal freeze-out SIDM has been excluded by direct detections, and freeze-in SIDM is considered in the literature [@Duch:2017khv; @Zakeri:2018hhe; @Hambye:2018dpi]. For the velocity-dependent SIDM required to solve the small-scale problems, if the relic abundance of SIDM was set by the thermal freeze-out mechanism in the early universe, how to evade present constraints (especially DM direct detections) becomes an issue. This is our concern in this paper. For multi-component DM, besides the thermal equilibrium via SIDM $\rightleftarrows \phi \rightleftarrows$ SM particles, SIDM could be in thermal equilibrium with millicharged DM, which was in thermal equilibrium with SM particles in the early universe and could give an explanation of the anomalous 21-cm absorption at the cosmic dawn. In addition, to avoid excess energy injection during big bang nucleosynthesis (BBN) or an overabundance of $\phi$, the lifetime of $\phi$ should be much smaller than 1 second, and this can be achieved in the case of $m_\phi >$ twice the millicharged DM mass.
Thus, SIDM could be in thermal equilibrium with SM particles via millicharged DM, and the $\phi -$SM particle couplings could be very tiny and evade DM direct detections. In addition, for SIDM-target nucleus scattering mediated by a light mediator, the momentum transfer could be comparable with the mediator mass $m_\phi$ in direct detections, and SIDM-nucleus scatterings would be different from WIMP-nucleus scatterings [@Kaplinghat:2013yxa]. The scenario above will be explored in this paper. The rest of this paper is organized as follows. First, the interactions in the new sector are presented, and then the self-interactions of SIDM are discussed. Next, the direct detection of SIDM is elaborated. The last part is the conclusion. Interactions in the new sector ============================== In this paper, two possible components of DM, the main component SIDM $\Psi$ and a small fraction of millicharged DM $\chi$, are of our concern. A small fraction of millicharged DM could give an explanation of the 21-cm absorption, and possible interactions between millicharged DM and SM particles have been studied in Refs. [@Berlin:2018sjs; @Jia:2018csj; @Jia:2019yhr]. Here we focus on SIDM, i.e., the key transitions or interactions between SIDM and millicharged DM or SM particles. The effective interactions mediated by a new scalar field $\Phi$ are $$\begin{aligned} \mathcal{L}_i &=& - \lambda \Phi \bar{\Psi} \Psi - \lambda_0 \Phi \bar{\chi} \chi - \mu_h \Phi (H^\dag H -\frac{V^2}{2}) \nonumber \\ && - \lambda_h \Phi^2 (H^\dag H -\frac{V^2}{2}) - \frac{ \mu}{3!} \Phi^3 - \frac{ \lambda_4}{4!} \Phi^4 ~,\end{aligned}$$ where $V$ is the vacuum expectation value, with $V \approx$ 246 GeV. The $\Phi$ field mixes with the Higgs field after the electroweak symmetry breaking, and a mass eigenstate $\phi$ is generated (see e.g., Ref. [@Jia:2017kjw]).
Here we suppose the mixing is very tiny, and thus $\phi$’s couplings to $\Psi$ and $\chi$ can be taken as equal to the corresponding couplings of $\Phi$. The effective couplings of $\phi$ to SM fermions can be written as $$\begin{aligned} \mathcal{L}^i_{\phi f} = - \theta_\mathrm{mix} \frac{m_f}{V} \phi \bar{f} f,\end{aligned}$$ where the mixing parameter $\theta_\mathrm{mix}$ is very small compared with 1. Here the particles playing important roles in transitions between the DM and SM sectors are of our concern. There may be more particles in the new sector, and DM particles may also be composite particles [@Bhattacharya:2013kma; @Cline:2013zca; @Kribs:2016cew; @Kopp:2016yji; @Forestell:2017wov]. To enhance the self-interactions of SIDM at low velocities via the Sommerfeld effect, the case of $2 m_\chi < m_\phi \ll m_\Psi$ is of our concern. The relation $\mu \ll \lambda m_\Psi$ holds if the Yukawa couplings are similar to that of the SM Higgs boson, and the $\phi^3$-term will be negligible in SIDM annihilations. In the period of SIDM freeze-out, the main annihilation process of SIDM is $\Psi \bar{\Psi} \to \phi \phi$, and the annihilation cross section is approximately $$\begin{aligned} \sigma_\mathrm{ann} v_\mathrm{r} \approx \frac{1}{2} \frac{\lambda^4 (s - 4 m_\Psi^2)}{48 \pi (s - 2 m_\Psi^2) s^2} (s + 32 m_\Psi^2) ~, \label{ann-phi}\end{aligned}$$ where $v_\mathrm{r}$ is the relative velocity between the two SIDM particles. The factor $\frac{1}{2}$ accounts for the $\Psi \bar{\Psi}$ pair required in SIDM annihilations. $s$ is the total invariant mass squared, with $s$ = 4$m_\Psi^2 + m_\Psi^2 v_\mathrm{r}^2 + \mathcal{O} (v_\mathrm{r}^4)$. In Eq. (\[ann-phi\]), the terms of $\mathcal{O} (v_\mathrm{r}^4)$ are neglected. For SIDM annihilations today, the $p$-wave mode $\Psi \bar{\Psi} \to \phi \phi$ is suppressed. In addition, the lifetime of $\phi$ should be much smaller than a second, given the BBN constraint.
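The $p$-wave suppression of Eq. (\[ann-phi\]) can be verified numerically: expanding $s = 4 m_\Psi^2 + m_\Psi^2 v_\mathrm{r}^2$, one finds $\sigma_\mathrm{ann} v_\mathrm{r} \to 3 \lambda^4 v_\mathrm{r}^2 / (256 \pi m_\Psi^2)$ at leading order in $v_\mathrm{r}$. A sketch (the sample values of $\lambda$ and $m_\Psi$ are illustrative only, not fitted):

```python
import math

def sigma_ann_v(lam, m_psi, v_r):
    """Eq. (ann-phi): sigma_ann * v_r for Psi Psi-bar -> phi phi (m_phi << m_Psi)."""
    s = 4.0 * m_psi**2 + m_psi**2 * v_r**2
    return 0.5 * lam**4 * (s - 4.0 * m_psi**2) * (s + 32.0 * m_psi**2) / (
        48.0 * math.pi * (s - 2.0 * m_psi**2) * s**2)

lam, m_psi = 0.5, 100.0   # illustrative parameter values
# p-wave scaling: sigma*v ~ v_r^2, so sigma_ann_v / v_r^2 tends to a constant
ratios = [sigma_ann_v(lam, m_psi, v) / v**2 for v in (0.3, 0.1, 0.01)]
coeff = 3.0 * lam**4 / (256.0 * math.pi * m_psi**2)   # leading p-wave coefficient
```

The vanishing of $\sigma_\mathrm{ann} v_\mathrm{r}$ as $v_\mathrm{r} \to 0$ is what suppresses the annihilation signal today.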
As $\phi$’s couplings to SM fermions should be very tiny to evade constraints from direct detections, the dark-sector decay of $\phi$, predominantly into $\chi \bar{\chi}$ pairs, could do the job ($ m_\phi > 2 m_\chi$, e.g., $m_\phi \gtrsim$ 22 MeV). For fermionic $\chi$, the decay width of $\phi$ is $$\begin{aligned} \Gamma_\phi \simeq \frac{\lambda_0^2 m_\phi}{8 \pi} \bigg( 1-\frac{4 m_\chi^2}{m_\phi^2} \bigg)^{3/2} ~ .\end{aligned}$$ Hence a very tiny mixing $\theta_\mathrm{mix}$ between $\phi$ and the SM Higgs boson is compatible with the BBN constraint, and SIDM could evade the present DM direct detection hunts. Self interactions of SIDM ========================= ![The effective coupling $\lambda$ as a function of SIDM mass $m_\Psi$, with $m_\Psi$ in a range of 10$-$500 GeV. Here the relic fraction of SIDM $f_{\mathrm{SIDM}} \simeq$ 99.6% is taken.[]{data-label="coupling-dm"}](coupling.pdf){width="36.00000%"} Here we first estimate the couplings set by the relic abundance of DM. The total relic abundance of DM is $\Omega_D h^2 = 0.120 \pm 0.001$ [@Aghanim:2018eyx], and there are two components of DM in this paper: the main component SIDM $\Psi$ and a small fraction of millicharged DM $\chi$. To explain the 21-cm anomaly, tens-of-MeV millicharged DM with a relic fraction of about 0.4% suffices. Thus, the relic fraction of SIDM $f_{\mathrm{SIDM}} \simeq$ 99.6% is adopted here. Taking the millicharged DM of Ref. [@Jia:2019yhr] as an example, the effective degrees of freedom from the new sector are about 7.5 (fermionic millicharged DM, the dark photon, and $\phi$) at the SIDM freeze-out temperature $T_f$. Considering the relic fraction of SIDM and the effective degrees of freedom [@Drees:2015exa] from the SM + the new sector, the effective coupling $\lambda$ can be derived for a given SIDM mass $m_\Psi$, as shown in Fig. \[coupling-dm\].
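The BBN requirement $\tau_\phi \ll 1$ s is easy to check numerically from the width formula above. The parameter values below ($\lambda_0$, $m_\phi$, $m_\chi$) are illustrative assumptions, not fitted numbers:

```python
import math

HBAR_GEV_S = 6.582e-25   # hbar in GeV*s

def gamma_phi(lam0, m_phi, m_chi):
    """Width of phi -> chi chi-bar for fermionic chi, in GeV."""
    if m_phi <= 2.0 * m_chi:
        return 0.0   # channel closed below threshold
    return lam0**2 * m_phi / (8.0 * math.pi) \
        * (1.0 - 4.0 * m_chi**2 / m_phi**2) ** 1.5

# Assumed sample point: lam0 = 0.1, m_phi = 0.1 GeV,
# m_chi = 0.011 GeV (tens-of-MeV millicharged DM).
tau_phi = HBAR_GEV_S / gamma_phi(0.1, 0.1, 0.011)   # lifetime in seconds
```

For such couplings the lifetime comes out around $10^{-20}$ s, far below the BBN bound, so the visible-sector mixing $\theta_\mathrm{mix}$ can indeed be negligibly small.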
Additionally, for perturbativity, $\alpha_\lambda = \lambda^2 / 4 \pi$ should be small compared with 1. ![image](somm-10.pdf){width="36.00000%"} ![image](somm-500.pdf){width="36.00000%"} Now we turn to the self-interaction of SIDM. In the non-relativistic limit, the self-scattering cross section of a SIDM pair $\Psi \bar{\Psi}$ is $$\begin{aligned} \sigma_1 = \frac{1}{4 \pi} \frac{\lambda^4 m_{\Psi}^2}{m_\phi^2 (m_\phi^2 + 4 |\vec{p}|^2 )},\end{aligned}$$ and the self-scattering cross section of a SIDM pair $\Psi \Psi$, $\bar{\Psi} \bar{\Psi}$ is $$\begin{aligned} \sigma_2 &=& \frac{\lambda^4 m_{\Psi}^2}{16 \pi} [ \frac{4}{m_\phi^2 (m_\phi^2 + 4 |\vec{p}|^2 )} \nonumber \\ && - \frac{1}{2 |\vec{p}|^2 (m_\phi^2 + 2 |\vec{p}|^2 )} \ln (1 + \frac{4 |\vec{p}|^2}{m_\phi^2})] ~ .\end{aligned}$$ Here $|\vec{p}|$ is the momentum of each SIDM particle in the center-of-momentum frame ($|\vec{p}| = m_{\Psi} v_\mathrm{r} /2$). For the self-scattering cross sections, we have $1/2 \leq \sigma_2/\sigma_1 \leq 1$, with $\sigma_2/\sigma_1 \rightarrow 1/2 $ in the case of $|\vec{p}|^2 / m_\phi^2 \ll 1$ and $\sigma_2/\sigma_1 \rightarrow 1 $ in the case of $|\vec{p}|^2 / m_\phi^2 \gg 1$. Here we take the cross section $\sigma_0 = (\sigma_1 + \sigma_2)/2$. For $m_\phi \ll m_{\Psi}$, the self-interaction of SIDM is enhanced by the Sommerfeld effect at low velocities, with the actual self-interaction multiplied by a factor $S$, i.e., the self-scattering cross section $\sigma \simeq \sigma_0 S$. Define the two parameters $$\begin{aligned} \varepsilon_v = \frac{v}{\alpha_\lambda} \quad \mathrm{and} \quad \varepsilon_\phi = \frac{m_\phi}{\alpha_\lambda m_{\Psi}} ~,\end{aligned}$$ where $v = v_\mathrm{r} /2$ is the velocity of each SIDM particle in the center-of-momentum frame for a SIDM pair, and $\alpha_\lambda = \lambda^2 / 4 \pi$. Now, an analytic form of $S$ can be written as [@Cassel:2009wt; @Slatyer:2009vg] $$\begin{aligned} S \!= \!
\frac{\pi}{\varepsilon_v} \frac{\mathrm{sinh} (\frac{2 \pi \varepsilon_v}{\pi^2 \varepsilon_\phi / 6})}{\mathrm{cosh} (\frac{2 \pi \varepsilon_v}{\pi^2 \varepsilon_\phi / 6}) \! - \! \mathrm{cos} (2 \pi \sqrt{\frac{1}{\pi^2 \varepsilon_\phi / 6} \! - \! \frac{\varepsilon_v^2}{(\pi^2 \varepsilon_\phi / 6)^2} } ) } ~.\end{aligned}$$ This self-interaction between SIDM particles is enhanced at low velocities, which may resolve the small-scale problems while evading constraints from clusters. The corresponding parameter spaces are derived in the following. The self-interactions of SIDM are velocity dependent, and the typical relative velocities $v_\mathrm{r}$ at the dwarf, galaxy, and cluster scales are 20 km/s, 200 km/s, and 2000 km/s, respectively. In the case of $\varepsilon_\phi \lesssim 1$, the non-perturbative Sommerfeld effect plays an important role in low-velocity self-interactions of SIDM. For given SIDM masses ($m_\Psi =$ 10, 500 GeV), the typical self-interactions at dwarf, galaxy, and cluster scales are shown in Fig. \[self-cs\]. Requiring $\sigma/m_\Psi \gtrsim$ 1 cm$^2$/g at dwarf and galaxy scales, while $\sigma/m_\Psi \lesssim$ 0.1$-$0.3 cm$^2$/g at the cluster scale, it can be seen that there are parameter regions that resolve the small-scale problems and remain compatible with observations from dwarf to cluster scales. For $m_\Psi$ in a range of 10$-$500 GeV, the required $m_\phi$ approximately varies from 22 MeV to 1.2 GeV. ![image](sigv-10.pdf){width="36.00000%"} ![image](sigv-20.pdf){width="36.00000%"} ![image](sigv-50.pdf){width="36.00000%"} ![image](sigv-100.pdf){width="36.00000%"} ![image](sigv-200.pdf){width="36.00000%"} ![image](sigv-500.pdf){width="36.00000%"} ![The values of $\varepsilon_\phi$ (or $m_\phi$) required by the upper limits and lower bounds of $\langle \sigma v_r \rangle/m_\Psi$ for given SIDM masses, for the cases of $m_\Psi =$ 10, 20, 50, 100, 200 and 500 GeV (Fig. \[sig-average\]). 
The dots and stars correspond to the ranges of $\varepsilon_\phi$ and $m_\phi$, respectively; within each set, the lower (upper) point corresponds to the upper limit (lower bound) of $\langle \sigma v_r \rangle/m_\Psi$.[]{data-label="eps-mph"}](eps-mph.pdf){width="36.00000%"} In the self-interactions of SIDM above, monochromatic typical relative velocities $v_\mathrm{r}$ were adopted at the dwarf, galaxy, and cluster scales. In practice, the distribution of SIDM velocities needs to be taken into account, which gives a mild modification. In the inner regions of dwarf galaxies, galaxies, and clusters, the inner profile is related to the velocity-averaged self-scattering cross section per unit of SIDM mass $\langle \sigma v_\mathrm{r}\rangle /m_\Psi$ [@Kaplinghat:2015aga], where $$\begin{aligned} \langle \sigma v_\mathrm{r} \rangle = \int_0^{v_\mathrm{r}^{\mathrm{max}}} f(v_\mathrm{r}, v_0) \sigma v_\mathrm{r} d v_\mathrm{r} ~,\end{aligned}$$ and a Maxwell-Boltzmann velocity distribution is assumed, with $$\begin{aligned} f(v_\mathrm{r}, v_0) = \frac{4 v_\mathrm{r}^2 e^{- v_\mathrm{r}^2/ v_0^2}}{\sqrt{\pi} v_0^3} ~.\end{aligned}$$ Here $v_\mathrm{r}^{\mathrm{max}}$ can be taken as the escape velocity, and $v_0$ is a parameter related to the typical velocities in the DM halo. In the inner regions of halos, $v_\mathrm{r}^{\mathrm{max}}$ is much larger than $v_0$, and the averaged relative velocity is $\langle v_\mathrm{r} \rangle \simeq$ 2$v_0/\sqrt{\pi}$. Here we take the averaged self-interaction cross section as $\langle \sigma v_r \rangle/ \langle v_r \rangle$, and adopt the constraints $\langle \sigma v_r \rangle/( \langle v_r \rangle m_\Psi ) \gtrsim$ 1 cm$^2$/g at dwarf and galaxy scales and $\langle \sigma v_r \rangle/( \langle v_r \rangle m_\Psi ) \lesssim$ 0.1$-$0.3 cm$^2$/g at the cluster scale. 
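A minimal numerical sketch of the Hulthén-approximation factor $S$ given above; for $\varepsilon_v^2 > \pi^2 \varepsilon_\phi/6$ the square root becomes imaginary and the cosine turns into a hyperbolic cosine, which a complex-valued implementation handles automatically (the parameter points are illustrative, away from resonances):

```python
import cmath
import math

def sommerfeld_S(eps_v, eps_phi):
    """Sommerfeld factor in the Hulthen approximation (Cassel; Slatyer).

    For eps_v**2 > pi**2*eps_phi/6 the square root is imaginary and
    cos(i*y) = cosh(y); complex arithmetic covers both branches.
    """
    a = math.pi**2 * eps_phi / 6.0                 # rescaled epsilon_phi
    x = 2.0 * math.pi * eps_v / a
    root = cmath.sqrt(1.0 / a - eps_v**2 / a**2)
    s = (cmath.sinh(x) / (cmath.cosh(x) - cmath.cos(2.0 * math.pi * root))).real
    return math.pi / eps_v * s

# Enhancement is large at low velocity and switches off at high velocity
s_low = sommerfeld_S(0.01, 0.01)    # low-velocity regime, S >> 1
s_high = sommerfeld_S(10.0, 0.1)    # high-velocity regime, S -> O(1)
```

This reproduces the qualitative behavior used in the text: a strongly enhanced cross section at dwarf-scale velocities and essentially no enhancement at cluster-scale velocities.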
Taking the velocity distributions into account, the self-interactions of SIDM at dwarf, galaxy, and cluster scales are shown in Fig. \[sig-average\] for $m_\Psi =$ 10, 20, 50, 100, 200 and 500 GeV, and the corresponding ranges of $\varepsilon_\phi$ (or, equivalently, $m_\phi$) are shown in Fig. \[eps-mph\]. It can be seen that there are parameter regions that resolve the small-scale problems while remaining compatible with constraints from clusters. Direct detection of SIDM ======================== Now we turn to the direct detection of SIDM. In WIMP-type DM direct detection, the momentum transfer $|q|$ in the WIMP-target nucleus elastic scattering is generally assumed to be much smaller than the mediator mass $m_{\mathrm{med}}$, and thus the WIMP-nucleus elastic scattering cross section can be derived in the limit of zero momentum transfer $|q^2| \rightarrow$ 0. The $q$-dependent squared matrix element for WIMP-nucleus spin-independent (SI) elastic scattering $|\mathcal{M}_{\Psi N} (q)|^2$ can be written as $$\begin{aligned} |\mathcal{M}_{\Psi N} (q)|^2 &=& |\mathcal{M}_{\Psi N} (q)|^2 |_{q^2=0} \frac{m_{\mathrm{med}}^4}{(|q^2| + m_{\mathrm{med}}^2)^2} \nonumber \\ && \times |F_N^{\mathrm{SI}} (q)|^2 ~,\end{aligned}$$ where $F_N^{\mathrm{SI}} (q)$ is the nuclear form factor. For a small momentum transfer, with $1/|q|$ larger than the nuclear radius, the WIMP interacts coherently with the nucleus, which appears point-like, and the corresponding nuclear form factor is $|F_N^{\mathrm{SI}} (q)|^2 \rightarrow 1$. Define $$\begin{aligned} F_{\mathrm{med}}(q^2) = \frac{m_\mathrm{med}^4}{(|q^2| + m_\mathrm{med}^2)^2} .\end{aligned}$$ In the limit of $|q^2|/m_\mathrm{med}^2 \rightarrow$ 0, one has $F_{\mathrm{med}}(q^2) \simeq 1$. Thus, the WIMP-nucleus scattering is a contact interaction, and a constant WIMP-nucleus scattering cross section can be extracted from the recoil rate [@Lewin:1995rx]. 
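The propagator factor $F_{\mathrm{med}}(q^2)$ and its contact-interaction limit can be checked directly (the $q^2$ and mediator-mass values below are illustrative, in GeV units):

```python
def f_med(q2, m_med):
    """Mediator propagator factor F_med = m_med^4 / (|q^2| + m_med^2)^2 (GeV units)."""
    return m_med**4 / (q2 + m_med**2) ** 2

# Heavy mediator: contact-interaction limit, F_med ~ 1
heavy = f_med(1.2e-3, 1.0)
# Light mediator with m_med^2 comparable to |q^2|: F_med is strongly suppressed
light = f_med(1.2e-3, 0.025)
```

The suppression in the light-mediator case is the reason the zero-momentum-transfer treatment breaks down for the SIDM of concern, as discussed below.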
For the scalar mediator $\phi$ of concern, $m_\phi/ m_\Psi$ is about $ 2 \times 10^{-3}$, and the velocity of the incoming SIDM $v_\mathrm{in}$ relative to the Earth detector is $v_\mathrm{in}/c \sim 10^{-3}$. Therefore, whether the zero momentum transfer limit can be adopted in direct detections needs further discussion; this is briefly analyzed in the following. In GeV SIDM-target nucleus elastic scattering, the target nucleus can be considered to be at rest initially, and the momentum transfer is $q \rightarrow (0, \vec{q})$. The nucleus recoil energy $E_R$ is $$\begin{aligned} E_R = \frac{\mu_{\Psi N}^2 v_\mathrm{in}^2}{m_N} (1-\cos \theta_{\mathrm{cm}}) = \frac{|\vec{q}|^2}{2 m_N} ~,\end{aligned}$$ where $m_N$ is the target nucleus mass, $\mu_{\Psi N}^{}$ is the reduced mass of the SIDM-nucleus system, and $\theta_{\mathrm{cm}}$ is the polar angle in the center-of-momentum frame in the SIDM-nucleus scattering. For the momentum transfer, we have $|\vec{q}|^2 = 2 \mu_{\Psi N}^2 v_\mathrm{in}^2 (1-\cos \theta_{\mathrm{cm}}) = 2 m_N E_R$. For a given recoil energy $E_R$, the minimum incoming velocity of SIDM is $v_\mathrm{in}^\mathrm{min} = \sqrt{m_N E_R / 2 \mu_{\Psi N}^2}$. The available maximum value $|\vec{q}|^2_{\mathrm{max}}$ is related to the maximum velocity squared $(v_\mathrm{in}^2)_{\mathrm{max}}$ and the maximum nuclear recoil energy $E_R^\mathrm{max}$ in DM detections. For SIDM with the escape velocity $v_\mathrm{esc}$, the SIDM incoming velocity squared is $v_\mathrm{in}^2 \approx v_\mathrm{esc}^2 + v_{\oplus}^2 - 2 v_\mathrm{esc} v_{\oplus} \cos \theta$, where $v_{\oplus}$ is the Earth’s velocity relative to the galactic center (the Earth’s annual modulation is not taken into account here), and $\theta$ is the angle between $v_\mathrm{esc}$ and $v_{\oplus}$. Taking $v_\mathrm{esc} =$ 544 km/s and $v_{\oplus} =$ 232 km/s, the maximum velocity squared is $(v_\mathrm{in}^2)_\mathrm{max} = $ (776 km/s)$^2$. 
For DM direct detection experiments, the results from XENON1T [@Aprile:2018dbl], LUX [@Akerib:2016vxi], and PandaX-II [@Cui:2017nnn] set strong limits on WIMP-type DM with masses $\gtrsim$ 10 GeV. The nuclear recoil energy region of interest in the XENON1T experiment [@Aprile:2018dbl], i.e., \[4.9, 40.9\] keV$_{\mathrm{nr}}$, is employed to set the range of $|\vec{q}|^2$ in direct detections. For SIDM in a range of 10$-$500 GeV, considering the value of $(v_\mathrm{in}^2)_{\mathrm{max}}$ and the nuclear recoil energy region of interest in detections, the range of $|\vec{q}|^2$ in SIDM-target nucleus ($^{131}$Xe) elastic scattering is shown in Fig. \[q-2\]. It can be seen that the zero momentum transfer limit can be adopted for $m_{\Psi} \gtrsim$ 100 GeV [^3], while the momentum transfer should be taken into account for $m_{\Psi}$ lighter than 100 GeV. ![The range of $|\vec{q}|^2$ in SIDM-target nucleus ($^{131}$Xe) elastic scattering, with SIDM mass in a range of 10$-$500 GeV. The upper and lower dashed curves correspond to $E_R =$ 4.9 and 40.9 keV, respectively, i.e., the limits from the nuclear recoil energy region of interest of XENON1T [@Aprile:2018dbl]. The solid curve is the $|\vec{q}|^2$ for SIDM with the maximum velocity squared. The dotted curve is the typical momentum transfer squared $|q^2_\mathrm{typ}|$ in detections. The stars mark the mediator mass squared $m_\phi^2$, with the upper (lower) star corresponding to the upper limit (lower bound) of $m_\phi^2$. The filled area is the range of $|\vec{q}|^2$ in direct detections. []{data-label="q-2"}](q-square.pdf){width="36.00000%"} In the SIDM-target nucleus SI elastic scattering, the differential cross section can be evaluated as $$\begin{aligned} \frac{d \sigma_N^{\mathrm{SI}} (q)}{d E_R} = \frac{m_N}{2 \mu_{\Psi N}^2 v_\mathrm{in}^2} \sigma_N^{\mathrm{SI}}(q)|_{q^2=0} F_{\mathrm{med}}(q^2) |F_N^{\mathrm{SI}} (q)|^2 \, , \nonumber \\ \label{diff-cs}\end{aligned}$$ with $|q| = \sqrt{2 m_N E_R}$. 
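The kinematic quantities above can be evaluated directly. The sketch below uses $^{131}$Xe and the stated velocities, with the nuclear mass approximated as $A$ times the atomic mass unit (an approximation, since nuclear binding is neglected):

```python
import math

GEV_PER_U = 0.9314941      # atomic mass unit in GeV
C_KM_S = 299792.458        # speed of light in km/s

def v_in_max(v_esc=544.0, v_earth=232.0):
    """Maximum incoming SIDM speed (km/s), reached for a head-on geometry."""
    return v_esc + v_earth

def q2_of_recoil(e_r_kev, a_nucleus=131):
    """|q|^2 = 2 m_N E_R in GeV^2, with m_N ~ A * u."""
    m_n = a_nucleus * GEV_PER_U
    return 2.0 * m_n * e_r_kev * 1e-6

def e_r_kinematic_max_kev(m_dm, a_nucleus=131):
    """Kinematic endpoint 2 mu^2 (v_in^2)_max / m_N, in keV."""
    m_n = a_nucleus * GEV_PER_U
    mu = m_dm * m_n / (m_dm + m_n)     # SIDM-nucleus reduced mass
    beta2 = (v_in_max() / C_KM_S) ** 2
    return 2.0 * mu**2 * beta2 / m_n * 1e6
```

At the 4.9 keV$_{\mathrm{nr}}$ threshold this gives $|\vec{q}|^2 \approx 1.2 \times 10^{-3}$ GeV$^2$, i.e., $|\vec{q}| \approx 35$ MeV, indeed comparable to the light mediator masses found above.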
The SIDM-nucleus scattering cross section at $q^2 \rightarrow$ 0 is $$\begin{aligned} \sigma_N^{\mathrm{SI}}(q)|_{q^2=0} &=& \sigma_p^{\mathrm{SI}}|_{q^2=0} \frac{\mu_{\Psi N}^2}{\mu_{\Psi p}^2} \nonumber \\ && \times [Z + \frac{f_n}{f_p} (A-Z)]^2 ,\end{aligned}$$ where $\sigma_p^{\mathrm{SI}}|_{q^2=0}$ is the SIDM-proton scattering cross section in the limit of $q^2=0$, and $\mu_{\Psi p}$ is the SIDM-proton reduced mass. $Z$ is the number of protons, $A$ is the mass number of the nucleus, and ${f_n}$ and ${f_p}$ describe the SIDM-neutron and SIDM-proton couplings, respectively. For $\phi$-mediated scattering, one has ${f_n} = {f_p}$, and the SIDM-nucleon elastic scattering cross section can be defined as $$\begin{aligned} \sigma_n^{\mathrm{SI}} \equiv \sigma_p^{\mathrm{SI}}|_{q^2=0} ~ .\end{aligned}$$ Now, Eq. (\[diff-cs\]) can be rewritten as $$\begin{aligned} \frac{d \sigma_N^{\mathrm{SI}} (q)}{d E_R} = \frac{m_N}{2 \mu_{\Psi p}^2 v_\mathrm{in}^2} \sigma_n^{\mathrm{SI}} F_{\mathrm{med}}(q^2) A^2 |F_N^{\mathrm{SI}} (q)|^2 \, . \label{diff-csnr}\end{aligned}$$ For SIDM-target nucleus elastic scattering, due to the factor $F_{\mathrm{med}}(q^2)$, the SIDM-nucleon scattering cross section cannot be directly extracted from the recoil rate without accounting for the mediator mass (as can be done for WIMP-nucleus elastic scattering with contact interactions). Here a reference value of $F_{\mathrm{med}}(q^2)$ is introduced for SIDM-target nucleus elastic scattering in direct detections, i.e., a reference factor $\overline{F}_{\mathrm{med}}$. 
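The coherent enhancement of the nuclear cross section over the nucleon one, $(\mu_{\Psi N}/\mu_{\Psi p})^2 [Z + (f_n/f_p)(A-Z)]^2$, is large for xenon; a quick sketch (the nuclear mass is again approximated as $A$ times the atomic mass unit):

```python
def coherent_enhancement(m_dm, a_nucleus=131, z_nucleus=54, fn_over_fp=1.0):
    """sigma_N / sigma_p at q^2 -> 0: (mu_N/mu_p)^2 * [Z + (fn/fp)(A - Z)]^2."""
    m_p = 0.938272                     # proton mass, GeV
    m_n = a_nucleus * 0.9314941        # nuclear mass ~ A * u, GeV
    mu_n = m_dm * m_n / (m_dm + m_n)   # SIDM-nucleus reduced mass
    mu_p = m_dm * m_p / (m_dm + m_p)   # SIDM-proton reduced mass
    return (mu_n / mu_p) ** 2 * (z_nucleus + fn_over_fp * (a_nucleus - z_nucleus)) ** 2

# For f_n = f_p on 131-Xe the bracket is just A^2 = 131^2
enh = coherent_enhancement(100.0)
```

For $m_\Psi =$ 100 GeV on $^{131}$Xe this is of order $10^7$, which is why xenon detectors are so sensitive to SI scattering.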
For all target nuclei in one species, the factor $\overline{F}_\mathrm{med}$ is $$\begin{aligned} \overline{F}_\mathrm{med} = \frac{ \int_{E_R^\mathrm{thr}}^{E_R^\mathrm{high}} d E_R ~ \epsilon(E_R) \frac{d R }{d E_R} } {\int_{E_R^\mathrm{thr}}^{E_R^\mathrm{high}} d E_R ~ \epsilon(E_R) \frac{d R }{d E_R}|_{F_{\mathrm{med}}(q^2) = 1}} ~ , \label{f-med}\end{aligned}$$ where $\epsilon(E_R)$ is the detection efficiency for a given recoil energy $E_R$, and $\frac{d R }{d E_R}$ is the differential recoil rate (see Appendix \[kq\] for the details). For target nuclei in the same species, substituting Eq. (\[diff-rate-II\]) into Eq. (\[f-med\]), we have $$\begin{aligned} \overline{F}_\mathrm{med} = \frac{ \int_{E_R^\mathrm{thr}}^{E_R^\mathrm{high}} d E_R ~ \epsilon(E_R) F_\mathrm{med}(q^2) |F_N^{\mathrm{SI}} (q)|^2 \eta ( v_\mathrm{in}^\mathrm{min}) } {\int_{E_R^\mathrm{thr}}^{E_R^\mathrm{high}} d E_R ~ \epsilon(E_R) |F_N^\mathrm{SI} (q)|^2 \eta ( v_\mathrm{in}^\mathrm{min})} ~ . \nonumber \\\end{aligned}$$ The WIMP search results obtained via WIMP-nucleus elastic scattering with contact interactions, i.e., the results of $\sigma_\mathrm{n}^\mathrm{SI}$ (WIMP) in direct detection experiments, can be converted into the SIDM search results $\sigma_\mathrm{n}^\mathrm{SI}$ (SIDM) via the relation $$\begin{aligned} \sigma_\mathrm{n}^\mathrm{SI} (\mathrm{WIMP}) \simeq f_\mathrm{SIDM} \sigma_\mathrm{n}^\mathrm{SI} (\mathrm{SIDM}) \times \overline{F}_\mathrm{med} ~ . \label{WIMP-SIDM}\end{aligned}$$ ![The reference factor $\overline{F}_\mathrm{med}$ as a function of SIDM mass $m_\Psi$ in SIDM-target nucleus ($^{131}$Xe) SI elastic scattering. The solid curve is the result of $\overline{F}_\mathrm{med}$, with SIDM mass in a range of 10$-$500 GeV. Here the range of the recoil energy $E_R$ and the detection efficiency $\epsilon(E_R)$ in the XENON1T (2018) experiment [@Aprile:2018dbl] are adopted as inputs. For comparison, the dashed line is for the case $\overline{F}_\mathrm{med} =$ 1. 
[]{data-label="f-mediator"}](f-med.pdf){width="36.00000%"} For the WIMP search result of XENON1T (2018) [@Aprile:2018dbl], the detection efficiency $\epsilon(E_R)$ in the nuclear recoil energy region of interest has been released (Fig. 1 in Ref. [@Aprile:2018dbl]). For the SIDM of concern, the upper and lower limits of $m_\phi$ are close to each other (see Fig. \[eps-mph\]), and here the median value of $m_\phi$ is taken as input for a given SIDM mass. After substituting the values of the corresponding parameters, the results for $\overline{F}_\mathrm{med}$ can be derived, as shown in Fig. \[f-mediator\]. It can be seen that, for the SIDM of concern, the contact-interaction approximation of WIMP-nucleus scattering fails for SIDM with $m_\Psi \lesssim$ 100 GeV. Thus, the mediator mass needs to be considered in direct detections of SIDM, and the WIMP detection results can be converted into SIDM search results via Eq. (\[WIMP-SIDM\]). Moreover, for a given SIDM mass, a typical momentum transfer squared $|q^2_\mathrm{typ}|$ for the recoil energy of interest can be obtained via $$\begin{aligned} F_{\mathrm{med}}(q^2_\mathrm{typ}) = \overline{F}_\mathrm{med} ~ .\end{aligned}$$ For the reference factor $\overline{F}_{\mathrm{med}}$ in Fig. \[f-mediator\], the corresponding $|q^2_\mathrm{typ}|$ is presented in Fig. \[q-2\] (the dotted curve). ![The SIDM-nucleon SI scattering cross section in the form of $f_\mathrm{SIDM} \sigma_\mathrm{n}^\mathrm{SI} \overline{F}_\mathrm{med}$ as a function of SIDM mass. The dashed curves from top to bottom are the scattering cross section $f_\mathrm{SIDM} \sigma_\mathrm{n}^\mathrm{SI} \overline{F}_\mathrm{med}$ for $\theta_\mathrm{mix} = 10^{-6}, 10^{-7}, 10^{-8}$, respectively. 
The upper and lower solid curves are the upper limit from XENON1T [@Aprile:2018dbl] and the detection bound of the neutrino floor [@Billard:2013qya], respectively.[]{data-label="SIDM-direct-d"}](direct-d-XENON.pdf){width="36.00000%"} Now we apply a specific WIMP detection result (XENON1T-2018 [@Aprile:2018dbl]) to the SIDM of concern. The cross section of SIDM-nucleon (proton, neutron) SI elastic scattering mediated by $\phi$ can be parameterized as $$\begin{aligned} \sigma_n^\mathrm{SI} = \frac{\lambda^2 \theta_\mathrm{mix}^2 g_{hnn}^2 }{\pi m_\phi^4} \mu_{\Psi p}^2 ~ ,\end{aligned}$$ where $g_{hnn}^{}$ is the effective Higgs-nucleon coupling, with $g_{hnn}^{} \simeq 1.1 \times 10^{-3}$ [@Cheng:2012qr] adopted here. For SIDM-nucleus scattering with a light mediator $\phi$, though the cross section $\sigma_n^\mathrm{SI}$ cannot be directly extracted from the recoil rate in direct detections, the combination $f_\mathrm{SIDM} \sigma_\mathrm{n}^\mathrm{SI} \overline{F}_\mathrm{med}$ can be constrained, as discussed above. Here we take $^{131}$Xe as the target nucleus of the liquid xenon detector for simplicity. Considering the constraint on WIMPs from XENON1T [@Aprile:2018dbl], the result for SIDM detection is shown in Fig. \[SIDM-direct-d\]. For SIDM with masses in a range of 10$-$500 GeV, the parameter $\theta_\mathrm{mix}$ should be $\lesssim 10^{-8} - 10^{-6}$. In addition, for given SIDM and light mediator masses, constraints on the WIMP-nucleon scattering cross section derived by different DM detection experiments cannot be directly compared with one another in SIDM detection, because the value of $\overline{F}_\mathrm{med}$ depends on characteristics of the detectors, i.e., the composition of the target material, the nuclear recoil energy region of interest, and the corresponding detection efficiency. 
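The parameterization of $\sigma_n^\mathrm{SI}$ can be evaluated numerically. In the sketch below the values of $\lambda$, $\theta_\mathrm{mix}$, and $m_\phi$ are purely illustrative (they are not the fitted values shown in the figures); the conversion constant is $(\hbar c)^2$ in GeV$^{-2} \to$ cm$^2$:

```python
import math

GEV2_TO_CM2 = 3.894e-28    # (hbar*c)^2: 1 GeV^-2 = 0.3894 mb = 3.894e-28 cm^2

def sigma_n_si(lam, theta_mix, m_phi, m_dm, g_hnn=1.1e-3):
    """SIDM-nucleon SI cross section lambda^2 theta^2 g_hnn^2 mu^2/(pi m_phi^4), in cm^2."""
    m_p = 0.938272
    mu = m_dm * m_p / (m_dm + m_p)     # SIDM-proton reduced mass, GeV
    sig_gev2 = lam**2 * theta_mix**2 * g_hnn**2 * mu**2 / (math.pi * m_phi**4)
    return sig_gev2 * GEV2_TO_CM2

# Illustrative point only: lambda = 0.5, theta_mix = 1e-7, m_phi = 0.2 GeV, m_dm = 100 GeV
sigma_example = sigma_n_si(0.5, 1e-7, 0.2, 100.0)
```

Note the steep $\theta_\mathrm{mix}^2$ and $m_\phi^{-4}$ dependence, which is why the mixing must be so tiny for a light mediator.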
In this case, the scattering cross section $f_\mathrm{SIDM} \sigma_\mathrm{n}^\mathrm{SI} (\mathrm{SIDM})$ is available for comparison between different detection experiments, with $$\begin{aligned} \frac{ \sigma_\mathrm{n}^\mathrm{SI} (\mathrm{WIMP})}{\overline{F}_\mathrm{med}} \simeq f_\mathrm{SIDM} \sigma_\mathrm{n}^\mathrm{SI} (\mathrm{SIDM}) ~ ,\end{aligned}$$ i.e., the WIMP-nucleon scattering cross section divided by the factor $\overline{F}_\mathrm{med}$. Conclusion and discussion ========================= We have investigated a two-component DM scenario: a small fraction is MeV-scale millicharged DM, which could cause the anomalous 21-cm absorption at the cosmic dawn, and the main component is SIDM, which could resolve the small-scale problems. We focus on the main component of DM, i.e., the SIDM, in this paper. The Sommerfeld-enhanced, velocity-dependent self-interaction of SIDM mediated by a light scalar $\phi$ has been considered, which can be compatible with observations from dwarf to cluster scales. For the SIDM mass $m_\Psi$ in a range of 10$-$500 GeV, the required mediator mass is much smaller than the SIDM mass, with $m_\phi/ m_\Psi \sim 2 \times 10^{-3}$. For fermionic SIDM $\Psi$, the main annihilation mode $\Psi \bar{\Psi} \to \phi \phi$ is a $p$-wave process, and this velocity-suppressed annihilation can evade constraints from the CMB and other indirect detections. For the relic abundance of SIDM set by the thermal freeze-out mechanism, the thermal equilibrium between SIDM and SM particles in the very early universe via the transitions SIDM $\rightleftarrows \phi \rightleftarrows$ SM particles would set a lower bound on the couplings of $\phi$ to SM particles, which has been excluded by present DM direct detections. To evade constraints from DM direct detections, we have considered the case in which SIDM was in thermal equilibrium with millicharged DM, with $\phi$ predominantly decaying into a pair of millicharged DM particles. 
Thus, SIDM could be in thermal equilibrium with SM particles via millicharged DM, and the $\phi -$SM particle couplings could be very tiny and evade present DM direct detections. Due to the small mediator mass required by the velocity-dependent self-interactions of SIDM, the momentum transfer $|q|$ could be comparable with the mediator mass $m_\phi$ in direct detections. In this case, the picture of WIMP-target nucleus scattering with contact interactions fails for SIDM-target nucleus scattering with a light mediator, and thus the detection results for WIMPs cannot be directly applied to the SIDM detection. In this paper, a method is explored with which the results of $\sigma_\mathrm{n}^\mathrm{SI}$ (WIMP) in direct detection experiments can be converted into the SIDM search results $\sigma_\mathrm{n}^\mathrm{SI}$ (SIDM): for given SIDM and mediator masses, a mediator-dependent factor $\overline{F}_\mathrm{med}$ is included. With this method, the XENON1T result is employed to constrain the SIDM-nucleon SI scattering. The value of $\overline{F}_\mathrm{med}$ depends on characteristics of the detectors, i.e., the composition of the target material, the nuclear recoil energy region of interest, and the corresponding detection efficiency. We encourage DM direct detection experiments to release the nuclear recoil energy $E_R$ region of interest and the corresponding detection efficiency $\epsilon (E_R)$, so that WIMP detection results can be employed in SIDM searches. We look forward to the search for SIDM at the GeV$-$TeV scale by future DM direct detection experiments, such as PandaX-4T [@Zhang:2018xdp], XENONnT [@Aprile:2015uzo], LZ [@Akerib:2018lyp], DarkSide-20k [@Aalseth:2017fik] and DARWIN [@Aalbers:2016jon]; these detections will reach the neutrino floor in the next decade(s). This work was supported by the National Natural Science Foundation of China under Contract No. 
11505144, and the Longshan academic talent research supporting program of SWUST under Contract No. 18LZX415. The $\overline{F}_\mathrm{med}$ {#kq} =============================== To evaluate the reference factor $\overline{F}_{\mathrm{med}}$, i.e., a typical value of $F_{\mathrm{med}}(q^2)$ in direct detections, we start from the recoil rate for the SIDM-target nucleus SI elastic scattering. The differential recoil rate per unit target mass and per unit time is $$\begin{aligned} \frac{d R }{d E_R} &=& \frac{\rho_\mathrm{DM}^{} f_{\mathrm{SIDM}} }{m_N m_\Psi} \int \int \int d^3 \vec{v}_\mathrm{in} ~ [ \frac{d \sigma_N^{\mathrm{SI}} (q)}{d E_R} v_\mathrm{in} f_\mathrm{E}(\vec{v}_\mathrm{in}) \nonumber \\ && \times \Theta (v_\mathrm{in} - v_\mathrm{in}^\mathrm{min}) ] \, , \label{diff-rate}\end{aligned}$$ where $\rho_\mathrm{DM}^{}$ is the local DM density, $f_\mathrm{E}(\vec{v}_\mathrm{in})$ is the velocity distribution of SIDM relative to the Earth, and $\Theta (v_\mathrm{in} - v_\mathrm{in}^\mathrm{min})$ is the step function corresponding to the minimum incoming velocity of SIDM for a recoil energy $E_R$. Substituting Eq. (\[diff-csnr\]) into Eq. (\[diff-rate\]), we have $$\begin{aligned} \frac{d R }{d E_R} &=& \frac{\rho_\mathrm{DM}^{} f_{\mathrm{SIDM}} }{ m_\Psi} \frac{ A^2}{2 \mu_{\Psi p}^2 } \sigma_n^{\mathrm{SI}} F_{\mathrm{med}}(q^2) |F_N^{\mathrm{SI}} (q)|^2 \nonumber \\ && \times \eta ( v_\mathrm{in}^\mathrm{min}) ~ , \label{diff-rate-II}\end{aligned}$$ where $\eta ( v_\mathrm{in}^\mathrm{min})$ is $$\begin{aligned} \eta ( v_\mathrm{in}^\mathrm{min}) = \int \int \int d^3 \vec{v}_\mathrm{in} \frac{f_\mathrm{E}(\vec{v}_\mathrm{in})}{v_\mathrm{in}} \Theta (v_\mathrm{in} - v_\mathrm{in}^\mathrm{min}) ~ .\end{aligned}$$ The incoming velocity of SIDM $\vec{v}_\mathrm{in}$ is related to the SIDM’s velocity $\vec{v}_\mathrm{halo}$ in the halo via $\vec{v}_\mathrm{in} = \vec{v}_\mathrm{halo} - \vec{v}_{\oplus}$ (here the orbital motion of the Earth is neglected). 
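As a cross-check of $\eta(v_\mathrm{in}^\mathrm{min})$: for a boosted Maxwellian with no escape-velocity cutoff (an approximation; the values of $v_c$ and $v_\oplus$ follow the text) the velocity integral has a standard closed form in error functions (cf. Lewin & Smith), which the sketch below compares against a direct Monte-Carlo sampling:

```python
import math
import random

def eta_analytic(vmin, v_e=232.0, v_c=220.0):
    """<1/v> above vmin for a boosted Maxwellian with no escape cutoff (s/km)."""
    return (math.erf((vmin + v_e) / v_c)
            - math.erf((vmin - v_e) / v_c)) / (2.0 * v_e)

def eta_mc(vmin, v_e=232.0, v_c=220.0, n=200_000, seed=1):
    """Monte-Carlo estimate of the same quantity by direct sampling."""
    rng = random.Random(seed)
    sigma = v_c / math.sqrt(2.0)           # exp(-v^2/v_c^2) <=> Gaussian with this sigma
    acc = 0.0
    for _ in range(n):
        vx = rng.gauss(0.0, sigma)
        vy = rng.gauss(0.0, sigma)
        vz = rng.gauss(0.0, sigma) - v_e   # boost halo frame -> Earth frame
        v = math.sqrt(vx * vx + vy * vy + vz * vz)
        if v > vmin:
            acc += 1.0 / v
    return acc / n
```

As expected, $\eta$ decreases monotonically with $v_\mathrm{in}^\mathrm{min}$, which suppresses the recoil rate at high $E_R$.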
For SIDM in the halo, the SIDM particles are assumed to be isotropic with a Maxwell-Boltzmann distribution, $$\begin{aligned} f_\mathrm{halo}(\vec{v}_\mathrm{halo}) = \frac{1}{N_\mathrm{F}} \mathrm{exp} \big( - \frac{\vec{v}_\mathrm{halo}^2}{ v_c^2} \big) ~ ,\end{aligned}$$ where $N_\mathrm{F}$ is the normalization factor, and the value of $v_c$ is $v_c \approx$ 220 km/s. Boosting this distribution to the Earth rest frame, one has $$\begin{aligned} f_\mathrm{E}(\vec{v}_\mathrm{in}) = \frac{1}{N_\mathrm{F}} \mathrm{exp} \big( - \frac{( \vec{v}_\mathrm{in} + \vec{v}_{\oplus})^2}{v_c^2} \big) ~ .\end{aligned}$$ A usual choice of the nuclear form factor $F_N^\mathrm{SI} (q)$ is the analytical Helm form factor [@Helm:1956zz; @Lewin:1995rx], which can be expressed as $$\begin{aligned} F_N^\mathrm{SI} (q) = \frac{3}{r_N |q|} j_1 (r_N |q|) e^{- |q^2| s_\mathrm{skin}^2 /2} ~ ,\end{aligned}$$ where $s_\mathrm{skin}$ is the nuclear skin thickness parameter, with $s_\mathrm{skin} \approx$ 0.9 fm. $j_1 (x)$ ($x = r_N |q|$) is the spherical Bessel function of the first kind, with $$\begin{aligned} j_1 (x) = \frac{\sin x}{x^2} - \frac{\cos x}{x} ~ .\end{aligned}$$ $r_N$ is the effective nuclear radius, with $$\begin{aligned} r_N = \sqrt{ c_A^2 + \frac{7}{3} \pi^2 a^2 - 5 s_\mathrm{skin}^2} ~ ,\end{aligned}$$ where $c_A^{} =$ 1.23 $A^{1/3} -$ 0.6 fm, and $a =$ 0.52 fm. Now, for target nuclei with multiple species, the factor $\overline{F}_\mathrm{med}$ is $$\begin{aligned} \overline{F}_\mathrm{med} = \frac{\sum_i f_i \int_{E_R^\mathrm{thr}}^{E_{R,i}^\mathrm{high}} d E_R ~ \epsilon_i(E_R) \frac{d R_i }{d E_R} } {\sum_i f_i \int_{E_R^\mathrm{thr}}^{E_{R,i}^\mathrm{high}} d E_R ~ \epsilon_i(E_R) \frac{d R_i }{d E_R}|_{F_{\mathrm{med}}(q^2) = 1}} ~ , \nonumber \\\end{aligned}$$ where $f_i$ is the mass fraction of nuclear species $i$ in the detector, and $E_R^\mathrm{thr}$ is the recoil energy threshold of the target nucleus in detections. 
For a nuclear species $i$: $E_{R,i}^\mathrm{high}$ is the upper boundary of the recoil energy for a given SIDM mass, i.e., $E_{R,i}^\mathrm{high} = \min \big[ 2 \mu_{\Psi N}^2 (v_\mathrm{in}^2)_\mathrm{max} /m_N, E_R^\mathrm{max} \big]$. $\epsilon_i(E_R)$ is the detection efficiency for a given recoil energy $E_R$. $\frac{d R_i }{d E_R}|_{F_{\mathrm{med}}(q^2) = 1}$ is the differential recoil rate with the factor $F_{\mathrm{med}}(q^2) =$ 1 adopted. [0]{} N. Aghanim [*et al.*]{} \[Planck Collaboration\], arXiv:1807.06209 \[astro-ph.CO\]. Z. Z. Liu [*et al.*]{} \[CDEX Collaboration\], arXiv:1905.00354 \[hep-ex\]. A. H. Abdelhameed [*et al.*]{} \[CRESST Collaboration\], arXiv:1904.00498 \[astro-ph.CO\]. R. Agnese [*et al.*]{} \[SuperCDMS Collaboration\], Phys. Rev. D [**97**]{}, no. 2, 022002 (2018) \[arXiv:1707.01632 \[astro-ph.CO\]\]. P. Agnes [*et al.*]{} \[DarkSide Collaboration\], Phys. Rev. Lett.  [**121**]{}, no. 8, 081307 (2018) \[arXiv:1802.06994 \[astro-ph.HE\]\]. X. Cui [*et al.*]{} \[PandaX-II Collaboration\], Phys. Rev. Lett.  [**119**]{}, no. 18, 181302 (2017) \[arXiv:1708.06917 \[astro-ph.CO\]\]. D. S. Akerib [*et al.*]{} \[LUX Collaboration\], Phys. Rev. Lett.  [**118**]{}, no. 2, 021303 (2017) \[arXiv:1608.07648 \[astro-ph.CO\]\]. D. S. Akerib [*et al.*]{} \[LUX Collaboration\], Phys. Rev. Lett.  [**122**]{}, no. 13, 131301 (2019) \[arXiv:1811.11241 \[astro-ph.CO\]\]. E. Aprile [*et al.*]{} \[XENON Collaboration\], Phys. Rev. Lett.  [**121**]{}, no. 11, 111302 (2018) \[arXiv:1805.12562 \[astro-ph.CO\]\]. D. S. Akerib [*et al.*]{} \[LUX Collaboration\], Phys. Rev. Lett.  [**118**]{}, no. 25, 251302 (2017) \[arXiv:1705.03380 \[astro-ph.CO\]\]. J. Xia [*et al.*]{} \[PandaX-II Collaboration\], Phys. Lett. B [**792**]{}, 193 (2019) \[arXiv:1807.01936 \[hep-ex\]\]. E. Aprile [*et al.*]{} \[XENON Collaboration\], Phys. Rev. Lett.  [**122**]{}, no. 14, 141301 (2019) \[arXiv:1902.03234 \[astro-ph.CO\]\]. C. 
Amole [*et al.*]{} \[PICO Collaboration\], Phys. Rev. D [**100**]{}, no. 2, 022001 (2019) \[arXiv:1902.04031 \[astro-ph.CO\]\]. J. D. Bowman, A. E. E. Rogers, R. A. Monsalve, T. J. Mozdzen and N. Mahesh, Nature [**555**]{}, no. 7694, 67 (2018) \[arXiv:1810.05912 \[astro-ph.CO\]\]. J. B. Mu$\tilde{n}$oz and A. Loeb, Nature [**557**]{}, no. 7707, 684 (2018) \[arXiv:1802.10094 \[astro-ph.CO\]\]. A. Fialkov, R. Barkana and A. Cohen, Phys. Rev. Lett.  [**121**]{}, 011101 (2018) \[arXiv:1802.10577 \[astro-ph.CO\]\]. R. Barkana, Nature [**555**]{}, no. 7694, 71 (2018) \[arXiv:1803.06698 \[astro-ph.CO\]\]. R. Barkana, N. J. Outmezguine, D. Redigolo and T. Volansky, Phys. Rev. D [**98**]{}, no. 10, 103005 (2018) \[arXiv:1803.03091 \[hep-ph\]\]. A. Berlin, D. Hooper, G. Krnjaic and S. D. McDermott, Phys. Rev. Lett.  [**121**]{}, no. 1, 011102 (2018) \[arXiv:1803.02804 \[hep-ph\]\]. T. R. Slatyer and C. L. Wu, Phys. Rev. D [**98**]{}, no. 2, 023013 (2018) \[arXiv:1803.09734 \[astro-ph.CO\]\]. H. Liu and T. R. Slatyer, Phys. Rev. D [**98**]{}, no. 2, 023501 (2018) \[arXiv:1803.09739 \[astro-ph.CO\]\]. J. B. Mu$\tilde{n}$oz, C. Dvorkin and A. Loeb, Phys. Rev. Lett.  [**121**]{}, no. 12, 121301 (2018) \[arXiv:1804.01092 \[astro-ph.CO\]\]. L. B. Jia, Eur. Phys. J. C [**79**]{}, no. 1, 80 (2019) \[arXiv:1804.07934 \[hep-ph\]\]. M. S. Mahdawi and G. R. Farrar, JCAP [**1810**]{}, no. 10, 007 (2018) \[arXiv:1804.03073 \[hep-ph\]\]. E. D. Kovetz, V. Poulin, V. Gluscevic, K. K. Boddy, R. Barkana and M. Kamionkowski, Phys. Rev. D [**98**]{}, no. 10, 103529 (2018) \[arXiv:1807.11482 \[astro-ph.CO\]\]. L. B. Jia and X. Liao, Phys. Rev. D [**100**]{}, no. 3, 035012 (2019) \[arXiv:1906.00559 \[hep-ph\]\]. D. H. Weinberg, J. S. Bullock, F. Governato, R. Kuzio de Naray and A. H. G. Peter, Proc. Nat. Acad. Sci.  [**112**]{}, 12249 (2015) \[arXiv:1306.0913 \[astro-ph.CO\]\]. P. Bull [*et al.*]{}, Phys. Dark Univ.  [**12**]{}, 56 (2016) \[arXiv:1512.05356 \[astro-ph.CO\]\]. J. S. Bullock and M. 
Boylan-Kolchin, Ann. Rev. Astron. Astrophys.  [**55**]{}, 343 (2017) \[arXiv:1707.04256 \[astro-ph.CO\]\]. S. Tulin and H. B. Yu, Phys. Rept.  [**730**]{}, 1 (2018) \[arXiv:1705.02358 \[hep-ph\]\]. D. N. Spergel and P. J. Steinhardt, Phys. Rev. Lett.  [**84**]{}, 3760 (2000) \[astro-ph/9909386\]. M. Vogelsberger, J. Zavala and A. Loeb, Mon. Not. Roy. Astron. Soc.  [**423**]{}, 3740 (2012) \[arXiv:1201.5892 \[astro-ph.CO\]\]. J. Zavala, M. Vogelsberger and M. G. Walker, Mon. Not. Roy. Astron. Soc.  [**431**]{}, L20 (2013) \[arXiv:1211.6426 \[astro-ph.CO\]\]. M. Kaplinghat, S. Tulin and H. B. Yu, Phys. Rev. Lett.  [**116**]{}, no. 4, 041302 (2016) \[arXiv:1508.03339 \[astro-ph.CO\]\]. A. Kamada, M. Kaplinghat, A. B. Pace and H. B. Yu, Phys. Rev. Lett.  [**119**]{}, no. 11, 111102 (2017) \[arXiv:1611.02716 \[astro-ph.GA\]\]. M. Valli and H. B. Yu, Nat. Astron.  [**2**]{}, 907 (2018) \[arXiv:1711.03502 \[astro-ph.GA\]\]. S. Dodelson and L. M. Widrow, Phys. Rev. Lett.  [**72**]{}, 17 (1994) \[hep-ph/9303287\]. S. Colombi, S. Dodelson and L. M. Widrow, Astrophys. J.  [**458**]{}, 1 (1996) \[astro-ph/9505029\]. P. Bode, J. P. Ostriker and N. Turok, Astrophys. J.  [**556**]{}, 93 (2001) \[astro-ph/0010389\]. M. R. Lovell, C. S. Frenk, V. R. Eke, A. Jenkins, L. Gao and T. Theuns, Mon. Not. Roy. Astron. Soc.  [**439**]{}, 300 (2014) \[arXiv:1308.1399 \[astro-ph.CO\]\]. S. W. Randall, M. Markevitch, D. Clowe, A. H. Gonzalez and M. Bradac, Astrophys. J.  [**679**]{}, 1173 (2008) \[arXiv:0704.0261 \[astro-ph\]\]. D. Harvey, R. Massey, T. Kitching, A. Taylor and E. Tittley, Science [**347**]{}, 1462 (2015) \[arXiv:1503.07675 \[astro-ph.CO\]\]. D. Harvey, A. Robertson, R. Massey and I. G. McCarthy, Mon. Not. Roy. Astron. Soc.  [**488**]{}, no. 2, 1572 (2019) \[arXiv:1812.06981 \[astro-ph.CO\]\]. O. D. Elbert, J. S. Bullock, M. Kaplinghat, S. Garrison-Kimmel, A. S. Graus and M. Rocha, Astrophys. J.  [**853**]{}, no. 2, 109 (2018) \[arXiv:1609.08626 \[astro-ph.GA\]\]. A. 
Sommerfeld, Annalen der Physik [**403**]{}, 257 (1931). N. Arkani-Hamed, D. P. Finkbeiner, T. R. Slatyer and N. Weiner, Phys. Rev. D [**79**]{}, 015014 (2009) \[arXiv:0810.0713 \[hep-ph\]\]. X. Chu, T. Hambye and M. H. G. Tytgat, JCAP [**1205**]{}, 034 (2012) \[arXiv:1112.0493 \[hep-ph\]\]. M. J. Dolan, F. Kahlhoefer, C. McCabe and K. Schmidt-Hoberg, JHEP [**1503**]{}, 171 (2015) Erratum: \[JHEP [**1507**]{}, 103 (2015)\] \[arXiv:1412.5174 \[hep-ph\]\]. L. B. Jia, Phys. Rev. D [**94**]{}, no. 9, 095028 (2016) \[arXiv:1607.00737 \[hep-ph\]\]. M. Duch, B. Grzadkowski and D. Huang, JHEP [**1801**]{}, 020 (2018) \[arXiv:1710.00320 \[hep-ph\]\]. S. Peyman Zakeri, S. Mohammad Moosavi Nejad, M. Zakeri and S. Yaser Ayazi, Chin. Phys. C [**42**]{}, no. 7, 073101 (2018) \[arXiv:1801.09115 \[hep-ph\]\]. T. Hambye, M. H. G. Tytgat, J. Vandecasteele and L. Vanderheyden, Phys. Rev. D [**98**]{}, no. 7, 075017 (2018) \[arXiv:1807.05022 \[hep-ph\]\]. M. Kaplinghat, S. Tulin and H. B. Yu, Phys. Rev. D [**89**]{}, no. 3, 035009 (2014) \[arXiv:1310.7945 \[hep-ph\]\]. L. B. Jia, Phys. Rev. D [**96**]{}, no. 5, 055009 (2017) \[arXiv:1703.06938 \[hep-ph\]\]. S. Bhattacharya, B. Meli$\acute{c}$ and J. Wudka, JHEP [**1402**]{}, 115 (2014) \[arXiv:1307.2647 \[hep-ph\]\]. J. M. Cline, Z. Liu, G. Moore and W. Xue, Phys. Rev. D [**90**]{}, no. 1, 015023 (2014) \[arXiv:1312.3325 \[hep-ph\]\]. G. D. Kribs and E. T. Neil, Int. J. Mod. Phys. A [**31**]{}, no. 22, 1643004 (2016) \[arXiv:1604.04627 \[hep-ph\]\]. J. Kopp, J. Liu, T. R. Slatyer, X. P. Wang and W. Xue, JHEP [**1612**]{}, 033 (2016) \[arXiv:1609.02147 \[hep-ph\]\]. L. Forestell, D. E. Morrissey and K. Sigurdson, Phys. Rev. D [**97**]{}, no. 7, 075029 (2018) \[arXiv:1710.06447 \[hep-ph\]\]. M. Drees, F. Hajkarim and E. R. Schmitz, JCAP [**1506**]{}, no. 06, 025 (2015) \[arXiv:1503.03513 \[hep-ph\]\]. S. Cassel, J. Phys. G [**37**]{}, 105009 (2010) \[arXiv:0903.5307 \[hep-ph\]\]. T. R. 
Slatyer, JCAP [**1002**]{}, 028 (2010) \[arXiv:0910.5713 \[hep-ph\]\]. J. D. Lewin and P. F. Smith, Astropart. Phys.  [**6**]{}, 87 (1996). H. Y. Cheng and C. W. Chiang, JHEP [**1207**]{}, 009 (2012) \[arXiv:1202.1292 \[hep-ph\]\]. J. Billard, L. Strigari and E. Figueroa-Feliciano, Phys. Rev. D [**89**]{}, no. 2, 023524 (2014) \[arXiv:1307.5458 \[hep-ph\]\]. H. Zhang [*et al.*]{} \[PandaX Collaboration\], Sci. China Phys. Mech. Astron.  [**62**]{}, no. 3, 31011 (2019) \[arXiv:1806.02229 \[physics.ins-det\]\]. E. Aprile [*et al.*]{} \[XENON Collaboration\], JCAP [**1604**]{}, 027 (2016) \[arXiv:1512.07501 \[physics.ins-det\]\]. D. S. Akerib [*et al.*]{} \[LUX-ZEPLIN Collaboration\], arXiv:1802.06039 \[astro-ph.IM\]. C. E. Aalseth [*et al.*]{}, Eur. Phys. J. Plus [**133**]{}, 131 (2018) \[arXiv:1707.08145 \[physics.ins-det\]\]. J. Aalbers [*et al.*]{} \[DARWIN Collaboration\], JCAP [**1611**]{}, 017 (2016) \[arXiv:1606.07001 \[astro-ph.IM\]\]. R. H. Helm, Phys. Rev.  [**104**]{}, 1466 (1956). [^1]: See Refs. [@Dodelson:1993je; @Colombi:1995ze; @Bode:2000gq; @Lovell:2013ola] for the scenario of warm DM to the small-scale problems. [^2]: For example, for the case of the SIDM mass $\sim$ 20 GeV and \[$m_\phi$/SIDM mass\] $\sim 10^{-2}$, the SIDM-nucleon scattering cross section set by the thermal equilibrium is $\gtrsim$ 10$^{-40}$ cm$^2$, which has been excluded by DM direct detection experiments. [^3]: In this case, $F_{\mathrm{med}}(q^2) \approx $1 could be adopted (the contact interaction in WIMP-nucleus scattering in direct detections), while the $q$-dependent nuclear form factor $F_N^{\mathrm{SI}} (q)$ needs to be considered for the nucleus $^{131}$Xe.
--- abstract: 'Actual causation is concerned with the question “what caused what?" Consider a transition between two states within a system of interacting elements, such as an artificial neural network, or a biological brain circuit. Which combination of synapses caused the neuron to fire? Which image features caused the classifier to misinterpret the picture? Even detailed knowledge of the system’s causal network, its elements, their states, connectivity, and dynamics does not automatically provide a straightforward answer to the “what caused what?" question. Counterfactual accounts of actual causation based on graphical models, paired with system interventions, have demonstrated initial success in addressing specific problem cases in line with intuitive causal judgments. Here, we start from a set of basic requirements for causation (realization, composition, information, integration, and exclusion) and develop a rigorous, quantitative account of actual causation that is generally applicable to discrete dynamical systems. We present a formal framework to evaluate these causal requirements that is based on system interventions and partitions, and considers all counterfactuals of a state transition. This framework is used to provide a complete causal account of the transition by identifying and quantifying the strength of all actual causes and effects linking the two consecutive system states. Finally, we examine several exemplary cases and paradoxes of causation and show that they can be illuminated by the proposed framework for quantifying actual causation.' author: - - - - bibliography: - 'AC\_bibtex.bib' title: 'What caused what? A quantitative account of actual causation using dynamical causal networks.' --- Introduction ============ The nature of cause and effect has been much debated in both philosophy and the sciences.
To date, there is no single widely accepted account of causation, and the various sciences focus on different aspects of the issue [@Illari2011]. In physics, no formal notion of causation seems even required to describe the dynamical evolution of a system by a set of mathematical equations. At most, the notion of causation is reduced to the basic requirement that causes must precede and be able to influence their effects—no further constraints are imposed as to “what caused what". However, a detailed record of “what happened" prior to a particular occurrence[^1] rarely provides a satisfactory explanation for *why* it occurred in causal, mechanistic terms. As an example, take AlphaGo, the deep neural network that repeatedly defeated human champions in the game Go [@Silver2016]. Understanding why AlphaGo chose a particular move is a non-trivial problem [@Metz2016], even though all its network parameters and its state evolution can be recorded in detail. Identifying “what caused what" becomes particularly difficult in complex systems with a distributed, recurrent architecture and wide-ranging interactions such as the brain [@Sporns2000; @Wolff2018]. Our interest here lies in the principled analysis of *actual causation* in discrete distributed dynamical systems, such as artificial neural networks, computers made of logic gates, or cellular automata, but also biological brain circuits or gene regulatory networks. By contrast to *general* (or *type*) *causation* which addresses the question whether the type of occurrence $A$ generally “brings about" the type of occurrence $B$, the underlying notion of *actual* (or *token*) *causation* addresses the question “what caused what" given a specific occurrence $A$ followed by a specific occurrence $B$. 
For example, what part of the board’s particular pattern caused AlphaGo to decide on this particular move?[^2] As highlighted by the AlphaGo example, even with detailed knowledge of all circumstances, the prior system state, and the outcome, there often is no straightforward answer to the “what caused what" question. This has also been demonstrated by a long list of controversial examples conceived, analyzed, and debated primarily by philosophers (e.g., [@Lewis1986; @Pearl2000; @Woodward2003; @Hitchcock2007; @Paul2013; @Weslake2015-WESAPT; @Halpern2016]). During the last decades, a number of attempts to operationalize the notion of causation and to give it a formal description have been developed, most notably in computer science, probability theory, statistics [@Good1961; @Suppes1970; @Spirtes1993; @Pearl1988; @Pearl2000], the law [@Wright1985], and neuroscience (*e.g.*, [@Tononi1999]). Graphical methods paired with system interventions [@Pearl2000] have proven especially valuable for developing causal explanations. Given a causal network that represents how the state of each variable depends on other system variables via a “structural equation" [@Pearl2000], it is possible to evaluate the effects of interventions imposed from outside the network by setting certain variables to a specific value. This operation has been formalized by Pearl, who introduced the “do-operator", ${\operatorname{do}}(X=x)$, which signifies that a subset of system variables $X$ has been actively set into state $x$ rather than being passively observed in this state [@Pearl2000]. Because statistical dependence does not imply causal dependence, the conditional probability of occurrence $B$ after observing occurrence $A$, $p(B \mid A)$, may differ from the probability of occurrence $B$ after enforcing $A$, $p(B \mid {\operatorname{do}}(A))$.
Causal networks are a specific subset of “Bayesian" networks that explicitly represent *causal* dependencies consistent with interventional probabilities. The causal networks approach has also been applied to the case of *actual causation* [@Pearl2000; @Hitchcock2001; @Woodward2003; @Halpern2005; @Weslake2015-WESAPT; @Halpern2015]. There, system interventions can be used to evaluate whether and to what extent an occurrence was necessary or sufficient for a subsequent occurrence by assessing counterfactuals—alternative occurrences “counter to fact”[^3] [@Lewis1973; @Pearl2000; @Woodward2004]—within a given causal model. The objective is to define “what it means for $A$ to be a cause of $B$ *in model $M$*" [@Halpern2016]. While promising results have been obtained in specific cases, no single proposal to date has characterized actual causation in a universally satisfying manner [@Paul2013; @Halpern2016]. One concern about existing measures of actual causation is the incremental manner in which they have developed: a definition is proposed that satisfies the existing examples in the literature, until a new problematic example is discovered, at which point the definition is updated to address the new example [@Weslake2015-WESAPT; @Beckers2018]. While valuable, the problem with such an approach is that one cannot be confident in applying the framework beyond the scope of examples already tested. For example, while the methods are well explored in simple binary examples, there is less evidence that they conform with intuition on the much larger space of non-binary examples (see \[S1\]). This is especially critical when moving beyond intuitive toy examples to scientific problems where intuition is lacking, such as understanding actual causation in biological or artificial neural networks. Our goal is to provide a robust framework for assessing actual causation that is based on general causal principles, and can thus be expected to extend naturally beyond simple, binary, and deterministic example cases. Below we present a formal account of actual causation that is generally applicable to discrete Markovian dynamical systems constituted of interacting elements (Fig. \[fig1\]). The proposed framework is based on five causal principles identified in the context of integrated information theory (IIT)—namely existence (here: realization), composition, information, integration, and exclusion [@Oizumi2014; @Albantakis2015]. Originally developed as a theory of consciousness [@Tononi2015; @Tononi2016], IIT provides the tools to characterize *potential causation*—the causal constraints exerted by a mechanism in a given state. In particular, our objective is to provide a complete, quantitative causal account of “what caused what" within a transition between consecutive system states. Our approach differs from previous accounts of actual causation in what constitutes a complete causal account. Unlike most accounts of actual causation (*e.g.*, [@Pearl2000; @Paul2013; @Halpern2016]; but see [@Chajewska1997]), causal links within a transition are considered from the perspective of *both* causes and effects. Additionally, we evaluate actual causes and effects not only of individual variables, but also of high-order occurrences comprising multiple variables. While some existing accounts of actual causation include the notion of being “part of a cause" [@Halpern2015; @Halpern2016], the possibility of multi-variate causes and effects is rarely addressed, or even outright excluded [@Weslake2015-WESAPT].
Despite the differences in what constitutes a complete causal account, our approach remains compatible with the traditional view of actual causation, which considers only actual causes of individual variables (no high-order causation, and no actual effects). In this context, the main difference between our proposed framework and existing “contingency”-based definitions is that we simultaneously consider *all* counterfactual states of the transition, rather than a single contingency (e.g., [@Hitchcock2001; @Yablo2002; @Woodward2003; @Halpern2005; @Hall2007; @Halpern2015; @Weslake2015-WESAPT]; see \[S1\] for a detailed comparison). This allows us to express the causal analysis in probabilistic, informational terms [@Ay2008; @Korb2011; @Janzing2013; @Oizumi2014], which has the additional benefit that our framework naturally extends from deterministic to probabilistic causal networks, and from binary to multi-valued variables. Finally, it allows us to quantify the strength of all causal links between occurrences and their causes and effects within the transition. In the following, we first formally describe the proposed causal framework of actual causation. We then demonstrate its utility on a set of examples, which illustrate the benefits of characterizing both causes and effects, the fact that causation can be compositional, and the importance of identifying irreducible causes and effects for obtaining a complete causal account. Finally, we illustrate several prominent paradoxical cases from the actual causation literature, including overdetermination and prevention, as well as a toy model of an image classifier based on an artificial neural network. Theory ====== Integrated information theory is concerned with the *intrinsic cause-effect power* of a physical system (*intrinsic existence*).
The IIT formalism [@Oizumi2014; @Tononi2015] starts from a discrete distributed dynamical system in its current state and asks how the system’s elements, alone and in combination (*composition*), constrain the *potential* past and future states of the system (*information*), and whether they do so above and beyond their parts (*integration*). The potential causes and effects of a system subset correspond to the set of elements over which the constraints are maximally informative and integrated (*exclusion*). In the following we aim to translate IIT’s account of potential causation into a principled, quantitative framework for *actual* causation that allows evaluating all actual causes and effects within a state transition of a dynamical system of interacting elements, such as a biological or artificial neural network (Fig. \[fig1\]). For maximal generality, we will formulate our account of actual causation in the context of dynamical causal networks [@Ay2008; @Janzing2013; @Biehl2016]. Dynamical Causal Networks {#DCN} ------------------------- Our starting point is a dynamical causal network—a directed acyclic graph (DAG) $G_u = (V, E)$ with edges $E$ that indicate the causal connections among a set of nodes $V$ and a given set of background conditions (state of exogenous variables) $U = u$ (Fig. \[fig1\]B). The nodes in $G_u$ represent a set of associated random variables (which we also denote $V$) with state space $\Omega$ and probability function $p(v | u), \, v \in \Omega$. 
For any node $V_i \in V$, we can define the parents of $V_i$ in $G_u$ as all nodes with an edge leading into $V_i$, $$pa(V_i) = \{V_j \mid e_{ji} \in E\}.$$ A causal network $G_u$ is dynamical in the sense that we can define a partition of its nodes $V$ into $k+1$ temporally ordered “slices” $V = \{V_0, V_1, \dots, V_{k}\}$, starting with an initial slice without parents ($pa(V_0) = \varnothing$) and such that the parents of each successive slice are fully contained within the previous slice ($pa(V_t) \subseteq V_{t-1}, \, t = 1, \dots, k$). This definition is similar to one proposed in [@Ay2008], but is stricter, requiring that there are no within-slice causal interactions. This restriction prohibits any “instantaneous causation" between variables (see also [@Pearl2000], Section 1.5) and signifies that $G_u$ fulfills the Markov property. The parts of $V = \{V_0, V_1, \dots, V_{k}\}$ can thus be interpreted as consecutive time steps of a discrete dynamical system of interacting elements (Fig. \[fig1\]); a particular state $V = v$ then corresponds to a realization of a system transient over $k+1$ time steps. In a *Bayesian* network, the edges of $G_u$ fully capture the dependency structure between nodes $V$. That is, for a given set of background conditions, each node is conditionally independent of every other node given its parents in $G_u$, and the probability function can be factored as $$p(v \mid u) = \prod_{i} p(v_i \mid pa(v_i), u), \quad v \in \Omega$$ For a *causal* network, there is the additional requirement that the edges $E$ capture causal dependencies (rather than merely correlations) between nodes. 
This means that the decomposition of $p(v\mid u)$ holds even if the parent variables are actively set into their state as opposed to passively observed in that state (“Causal Markov Condition", [@Spirtes1993; @Pearl2000]), $$p(v \mid u) = \prod_{i} p\big(v_i \mid do(pa(v_i), u) \big), \quad v \in \Omega.$$ Because we assume here that $U$ contains all relevant exogenous variables, any statistical dependencies between $V_{t-1}$ and $V_t$ are in fact causal dependencies, and cannot be explained by latent external variables (“causal sufficiency”, see [@Janzing2013]). Moreover, because time is explicit in $G_u$ and we assume that there is no instantaneous causation, there is no question of the direction of causal influences—it must be that the earlier variables ($V_{t-1}$) influence the later variables ($V_t$). By definition, $V_{t-1}$ contains all parents of $V_t$ for $t = 1, \dots, k$. Together, these assumptions imply a transition probability function for $V$ such that the nodes at time $t$ are conditionally independent given the state of the nodes at time $t-1$ (Fig. \[fig1\]C), $$\label{eqn1b} \begin{split} p_u(v_t \mid v_{t-1}) &= p(v_t \mid v_{t-1}, u) \\ &= \prod_{i} p\big(v_{i, t} \mid v_{t-1}, u\big) \\ &= \prod_{i} p\big(v_{i, t} \mid do(v_{t-1}, u)\big), \quad \forall \, (v_{t-1}, v_{t}) \in \Omega. \end{split}$$ To reiterate, a dynamical causal network $G_u$ describes the causal interactions among a set of nodes (the edges in $E$ describe the causal connections between the nodes in $V$) conditional on the state of exogenous variables $U$, and the transition probability function $p_u(v_t \mid v_{t-1})$ (Eqn. \[eqn1b\]) fully captures the nature of these causal dependencies. In sum, we assume that $G_u$ fully and accurately describes the system of interest for a given set of background conditions. In reality, a causal network reflects assumptions about a system’s elementary mechanisms. 
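To make the factorization of Eqn. \[eqn1b\] concrete, the transition probability function of a small deterministic system can be sketched in a few lines of Python. The wiring below (an OR gate and an AND gate, each reading both nodes at $t-1$) is an assumption chosen to match the two-gate example of Fig. \[fig1\]; it is an illustration, not part of the formalism:

```python
from itertools import product

# Assumed two-node system: OR_t = OR(OR_{t-1}, AND_{t-1}),
#                          AND_t = AND(OR_{t-1}, AND_{t-1}).
MECHANISMS = {
    0: lambda s: int(s[0] or s[1]),   # node 0: OR gate
    1: lambda s: int(s[0] and s[1]),  # node 1: AND gate
}

def p_node(i, v_i, v_prev):
    """p(v_{i,t} | do(v_{t-1})): deterministic here, so 0.0 or 1.0."""
    return 1.0 if MECHANISMS[i](v_prev) == v_i else 0.0

def p_transition(v_t, v_prev):
    """p_u(v_t | v_{t-1}) as the product of per-node conditionals (Eqn. [eqn1b])."""
    prob = 1.0
    for i, v_i in enumerate(v_t):
        prob *= p_node(i, v_i, v_prev)
    return prob

# Full transition table: each row is a normalized distribution over next states.
for v_prev in product((0, 1), repeat=2):
    row = {v_t: p_transition(v_t, v_prev) for v_t in product((0, 1), repeat=2)}
    print(v_prev, row)
```

Note that the transition $\{(\text{OR}, \text{AND})_{t-1} = 10\} \prec \{(\text{OR}, \text{AND})_{t} = 10\}$ used later in the text is realized with probability one under this wiring.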
Current scientific knowledge must inform which variables to include, what their relevant states are, and how they are related mechanistically [@Pearl2000; @Pearl2010]. Here, we are primarily interested in natural and artificial systems, such as neural networks, for which detailed information about the causal network structure and the mechanisms of individual system elements is often available, or can be obtained through exhaustive experiments[^4]. In such systems, counterfactuals can be evaluated by performing experiments or simulations that assess how the system reacts to interventions. Our objective here is to formulate a quantitative account of actual causation applicable to any predetermined, dynamical causal network, independent of practical considerations about model selection [@Pearl2010; @Halpern2016]. Confounding issues due to incomplete knowledge, such as estimation biases of probabilities from finite sampling, or latent variables, are thus set aside for the present purposes. To what extent and under which conditions the identified actual causes and effects generalize across possible levels of description, or under incomplete knowledge, is an interesting question that we plan to address in future work (see also [@Rubenstein2017; @Marshall2018]). Occurrences and transitions --------------------------- In general, actual causation can be evaluated over multiple time steps, e.g., considering indirect causal influences. Here, however, we specifically focus on direct causes and effects without intermediary variables or time steps.[^5] For this reason, we only consider causal networks containing nodes from two consecutive time points, $V = \{V_{t-1}, V_{t}\}$ and define a *transition*, denoted ${v_{t-1} \prec v_t}$, as a realization $V = v$ with $v = (v_{t-1}, v_t) \in \Omega$ (Fig. \[fig1\]D). 
Within a dynamical causal network $G_u = (V, E)$ with $V = \{V_{t-1}, V_t\}$, our objective is to determine the actual cause or actual effect of occurrences within a transition ${v_{t-1} \prec v_t}$. Formally, an *occurrence* is defined to be a substate $X_{t-1} = x_{t-1} \subseteq V_{t-1} = v_{t-1}$ or $Y_t = y_t\subseteq V_{t} = v_{t}$, corresponding to a subset of elements at a particular time and in a particular state. Cause and effect repertoires {#cer} ---------------------------- Before defining the actual cause or actual effect of an occurrence, we first introduce two definitions from IIT that are useful for characterizing the causal powers of occurrences in a causal network: cause/effect repertoires and partitioned cause/effect repertoires. In IIT, a cause (or effect) repertoire is a conditional probability distribution that describes how an occurrence (set of elements in a state) constrains the potential past (or future) states of other elements in a system [@Oizumi2014; @Albantakis2015], see also [@Tononi2015; @Marshall2016] for a general mathematical definition. In the present context of a transition ${v_{t-1} \prec v_t}$, an effect repertoire specifies how an occurrence $x_{t-1}\subseteq v_{t-1}$ constrains the potential future states of a set of nodes $Y_t \subseteq V_t$. Likewise, a cause repertoire specifies how an occurrence $y_t\subseteq v_t$ constrains the potential past states of a set of nodes $X_{t-1} \subset V_{t-1}$ (Fig. \[fig2\]). The effect and cause repertoire can be derived from the system’s transition probabilities (Eqn. \[eqn1b\]) by conditioning on the state of the occurrence and *causally marginalizing* the variables outside the occurrence $V_{t-1}\setminus X_{t-1}$ and $V_t\setminus Y_t$ (see Discussion \[D1\] and Fig. \[fig10B\]). Causal marginalization serves to remove any contributions to the repertoire from variables outside the occurrence by averaging over all their possible states. 
Explicitly, for a single node $Y_{i,t}$ the effect repertoire is: $$\label{cmarg} \pi(Y_{i, t} \mid x_{t-1}) = \frac{1}{|\Omega_{W}|} \sum_{w \in \Omega_{W}} p_u\left(Y_{i, t} \mid {\operatorname{do}}\left(x_{t-1}, W = w\right)\right),$$ where $W = V_{t-1}\setminus X_{t-1}$ with state space $\Omega_{W}$. Note that for causal marginalization, each possible state $W = w \in \Omega_{W}$ is given the same weight $|\Omega_{W}|^{-1}$ in the average. This ensures that the repertoire captures the constraints due to the occurrence *per se*, and not to whatever external factors might bias the variables in $W$ to one state or another (this is discussed in more detail in Section \[D1\]). The complementary cause repertoire of a singleton occurrence $y_{i, t}$, using Bayes’ rule, is: $$\pi(X_{t-1}\mid y_{i, t}) = \sum_{w \in \Omega_{W}} \frac{p_u\left(y_{i, t} \mid {\operatorname{do}}\left(X_{t-1}, W = w\right)\right)}{\sum_{z \in \Omega_{V_{t-1}}} p_u\left(y_{i, t} \mid {\operatorname{do}}\left(V_{t-1} = z\right)\right)}.$$ In the general case of a multi-variate $Y_t$ (or $y_t$), the transition probability function $p_u(Y_t \mid x_{t-1})$ not only contains dependencies of $Y_t$ on $x_{t-1}$, but also correlations between variables in $Y_t$ due to common inputs from nodes in $W_{t-1} = V_{t-1}\setminus X_{t-1}$, which should not be counted as constraints due to $x_{t-1}$. 
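A minimal sketch of this causal marginalization for a single node, again using the assumed OR/AND wiring of the Fig. \[fig1\] example (node indices and function names are illustrative):

```python
from itertools import product

# Assumed wiring: each gate reads both nodes at t-1.
MECHANISMS = {0: lambda s: int(s[0] or s[1]), 1: lambda s: int(s[0] and s[1])}
NODES = (0, 1)  # 0: OR, 1: AND

def effect_repertoire_node(i, occurrence):
    """pi(Y_{i,t} | x_{t-1}), Eqn. [cmarg]: average the node's conditional
    distribution uniformly over all states w of W = V_{t-1} \\ X_{t-1}."""
    free = [n for n in NODES if n not in occurrence]  # nodes in W
    dist = {0: 0.0, 1: 0.0}
    for w in product((0, 1), repeat=len(free)):
        state = dict(occurrence)          # clamp the occurrence via do(.)
        state.update(zip(free, w))        # causally marginalize the rest
        s = tuple(state[n] for n in NODES)
        dist[MECHANISMS[i](s)] += 1.0
    total = 2 ** len(free)                # |Omega_W|
    return {y: c / total for y, c in dist.items()}

# Occurrence {OR_{t-1} = 1}: AND_{t-1} is causally marginalized.
print(effect_repertoire_node(0, {0: 1}))  # {0: 0.0, 1: 1.0}
print(effect_repertoire_node(1, {0: 1}))  # {0: 0.5, 1: 0.5}
```

With the occurrence clamped and the remaining input averaged uniformly, $\{\text{OR}_{t-1}=1\}$ fully determines $\text{OR}_t$ but leaves $\text{AND}_t$ maximally uncertain.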
To discount such correlations, we define the effect repertoire over a set of variables $Y_t$ as the product of the effect repertoires over individual nodes[^6] (see also [@Janzing2013]): $$\label{eqn2} \pi(Y_t \mid x_{t-1}) = \prod_i \pi(Y_{i, t} \mid x_{t-1}).$$ In the same manner, we define the cause repertoire of a general occurrence $y_t$ over a set of variables $X_{t-1}$ as: $$\label{eqn3} \pi(X_{t-1} \mid y_t) = \frac{\prod_i \pi(X_{t-1} \mid y_{i, t})}{\sum_{x \in \Omega_{X_{t-1}}} \prod_i \pi(X_{t-1} = x \mid y_{i, t})}.$$ We can also define *unconstrained* cause and effect repertoires, a special case of cause or effect repertoires, where the occurrence that we condition on is the empty set. In this case, the repertoire describes the causal constraints on a set of the nodes due to the structure of the causal network, under maximum uncertainty about the states of variables within the network. With the convention that $\pi(\varnothing) = 1$, we can derive these unconstrained repertoires directly from the formulas for the cause and effect repertoires, Eqn \[eqn2\] and \[eqn3\]. The unconstrained cause repertoire simplifies to a uniform distribution, representing the fact that the causal network itself imposes no constraint on the possible states of variables in $V_{t-1}$, $$\label{eqnUCC} \pi(X_{t-1})= |\Omega_{X_{t-1}}|^{-1}.$$ The unconstrained effect repertoire is shaped by the update function of each individual node $Y_{i,t} \in Y_t$ under maximum uncertainty about the state of its parents, $$\label{eqnUCE} \pi(Y_t) = \prod_i \pi(Y_{i,t}) = \prod_i |\Omega_{W}|^{-1}\sum_{w \in \Omega_{W}} p_u(Y_{i, t} \mid {\operatorname{do}}(W = w)),$$ where $W = V_{t-1}\setminus X_{t-1} = V_{t-1}$, since $X_{t-1} = \varnothing$. 
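The cause repertoire of a singleton occurrence can be sketched in the same style: Bayes' rule applied over a uniform distribution of interventions. The OR/AND wiring is again the assumed toy system, not a general implementation:

```python
from itertools import product

# Assumed wiring: each gate reads both nodes at t-1.
MECHANISMS = {0: lambda s: int(s[0] or s[1]), 1: lambda s: int(s[0] and s[1])}
STATES = list(product((0, 1), repeat=2))  # all states of V_{t-1}

def p_node(i, y, state):                  # p(y_{i,t} | do(v_{t-1} = state))
    return 1.0 if MECHANISMS[i](state) == y else 0.0

def cause_repertoire(i, y, x_nodes):
    """pi(X_{t-1} | y_{i,t}) for a singleton occurrence Y_{i,t} = y:
    sum p(y | do(x, w)) over w, normalized over all full states z."""
    den = sum(p_node(i, y, s) for s in STATES)
    rep = {}
    for x in product((0, 1), repeat=len(x_nodes)):
        num = sum(p_node(i, y, s) for s in STATES
                  if all(s[n] == v for n, v in zip(x_nodes, x)))
        rep[x] = num / den
    return rep

# Potential causes of {OR_t = 1} over X_{t-1} = {OR_{t-1}}:
print(cause_repertoire(0, 1, (0,)))  # {(0,): 1/3, (1,): 2/3}
```

Of the three interventions that yield $\text{OR}_t = 1$, two have $\text{OR}_{t-1} = 1$, so the cause repertoire assigns it probability $2/3$.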
In summary, the effect and cause repertoires $\pi(Y_t \mid x_{t-1})$ and $\pi(X_{t-1} \mid y_t)$, respectively, are conditional probability distributions that specify the causal constraints due to an occurrence on the *potential* past and future states of variables in a causal network $G_u$. The cause and effect repertoires discount constraints that are not specific to the occurrence of interest; possible constraints due to the state of variables outside of the occurrence are causally marginalized from the distribution, and constraints due to common inputs from other nodes are avoided by treating each node in the occurrence independently. An objective of IIT is to evaluate whether the causal constraints of an occurrence on a set of nodes are “integrated", or “irreducible", that is, whether the individual variables in the occurrence work together to constrain the past or future states of the set of nodes in a way that is not accounted for by the variables taken independently [@Balduzzi2008; @Oizumi2014]. To this end, the occurrence (together with the set of nodes it constrains) is partitioned into independent parts, by rendering the connection between the parts causally ineffective [@Balduzzi2008; @Janzing2013; @Oizumi2014; @Albantakis2015]. The *partitioned* cause and effect repertoires describe the residual constraints under the partition. Comparing the partitioned cause and effect repertoires to the intact cause and effect repertoires reveals what is lost or changed by the partition. A partition $\psi$ of the occurrence $x_{t-1}$ (and the nodes it constrains, $Y_t$) into $m$ parts is defined as: $$\label{eqn:Pe} \psi(x_{t-1}, Y_t) = \{(x_{1, t-1}, Y_{1, t}), (x_{2, t-1}, Y_{2, t}), \ldots, (x_{m, t-1}, Y_{m, t})\},$$ such that $\{x_{j, t-1}\}_{j=1}^m$ is a partition of $x_{t-1}$ and $Y_{j, t} \subseteq Y_t$ with $Y_{j, t} \cap Y_{k, t} = \varnothing,\, j \neq k$. 
Note that this includes the possibility that any $Y_{j,t} = \varnothing$, which may leave a set of nodes $Y_t \setminus \bigcup_{j=1}^m Y_{j,t}$ completely unconstrained (see Fig. \[Fig2B\] for examples and details). The partitioned effect repertoire of an occurrence $x_{t-1}$ over a set of nodes $Y_t$ under a partition $\psi$ is defined as: $$\label{eqn6} \pi(Y_t \mid x_{t-1})_\psi = \prod_{j = 1}^m \pi(Y_{j, t} \mid x_{j, t-1}) \times \pi\left(Y_t \setminus \bigcup_{j=1}^m Y_{j,t}\right).$$ It is the product of the corresponding $m$ effect repertoires, multiplied by the unconstrained effect repertoire of the remaining set of nodes $Y_t \setminus \bigcup_{j=1}^m Y_{j,t}$, as these nodes are no longer constrained by any part of $x_{t-1}$ under the partition. In the same way, a partition $\psi$ of the occurrence $y_t$ (and the nodes it constrains $X_{t-1}$) into $m$ parts is defined as: $$\label{eqn:Pc} \psi(X_{t-1}, y_t) = \{(X_{1, t-1}, y_{1, t}), (X_{2, t-1}, y_{2, t}), \ldots, (X_{m, t-1}, y_{m, t})\},$$ such that $\{y_{i, t}\}_{i=1}^m$ is a partition of $y_{t}$ and $X_{j, t-1} \subseteq X_{t-1}$ with $X_{j, t-1} \cap X_{k, t-1} = \varnothing,\, j \neq k$. The partitioned cause repertoire of an occurrence $y_{t}$ over a set of nodes $X_{t-1}$ under a partition $\psi$ is defined as: $$\label{eqn7} \pi(X_{t-1} \mid y_t)_\psi = \prod_{j = 1}^m \pi(X_{j, t-1} \mid y_{j,t}) \times \pi\left(X_{t-1} \setminus \bigcup_{j = 1}^m X_{j, t-1}\right).$$ Actual causes and actual effects {#ACAE} -------------------------------- The objective of this section is to introduce the notion of a causal account for a transition of interest ${v_{t-1} \prec v_t}$ in $G_u$ as the set of all causal links between occurrences within the transition. There is a causal link between occurrences $x_{t-1}$ and $y_t$ if $y_t$ is the actual effect of $x_{t-1}$, or if $x_{t-1}$ is the actual cause of $y_t$. 
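The mechanics of Eqn. \[eqn6\] can be illustrated on the assumed OR/AND toy system: the intact effect repertoire of a second-order occurrence is compared with the product of the parts' repertoires under a partition. This is a sketch of the computation only; which partitions matter for irreducibility is addressed in the next section:

```python
from itertools import product

# Assumed wiring: each gate reads both nodes at t-1.
MECHANISMS = {0: lambda s: int(s[0] or s[1]), 1: lambda s: int(s[0] and s[1])}
STATES = list(product((0, 1), repeat=2))

def effect_prob(i, y, occurrence):
    """pi(Y_{i,t} = y | x_{t-1}): marginalize nodes outside the occurrence."""
    relevant = [s for s in STATES
                if all(s[n] == v for n, v in occurrence.items())]
    return sum(1.0 for s in relevant if MECHANISMS[i](s) == y) / len(relevant)

def effect_repertoire(y_state, occurrence):
    """pi(Y_t = y | x_{t-1}) as a product over individual nodes (Eqn. [eqn2])."""
    p = 1.0
    for i, y in y_state.items():
        p *= effect_prob(i, y, occurrence)
    return p

# Intact repertoire of x = {OR=1, AND=0} over Y_t = {OR, AND}, at y = (1, 0) ...
intact = effect_repertoire({0: 1, 1: 0}, {0: 1, 1: 0})
# ... versus the partition {(OR_{t-1} -> OR_t), (AND_{t-1} -> AND_t)}:
partitioned = effect_prob(0, 1, {0: 1}) * effect_prob(1, 0, {1: 0})
print(intact, partitioned)  # 1.0 1.0
```

For this particular state, the partition loses nothing: the constraint of the second-order occurrence on $(1,0)$ is fully accounted for by its parts taken independently.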
Below, we define *causal link*, *actual cause*, *actual effect*, and *causal account* following five causal principles: realization, composition, information, integration, and exclusion. **Realization.** A transition ${v_{t-1} \prec v_t}$ must be consistent with the transition probability function of a dynamical causal network $G_u$, $$p_u(v_t | v_{t-1}) > 0.$$ Only occurrences within a transition ${v_{t-1} \prec v_t}$ may have, or be, an actual cause or actual effect.[^7] As a first example, we consider the transition $\{(\text{OR}, \text{AND})_{t-1} = 10\} \prec \{(\text{OR}, \text{AND})_{t} = 10\}$ shown in Fig. \[fig1\]D. The transition is consistent with the conditional transition probabilities of the system shown in Fig. \[fig1\]C. **Composition.** Occurrences and their actual causes and effects can be uni- or multi-variate. For a complete causal account of the transition ${v_{t-1} \prec v_t}$, *all* causal links between occurrences $x_{t-1} \subseteq v_{t-1}$ and $y_t \subseteq v_t$ should be considered. For this reason, we evaluate every subset of $x_{t-1} \subseteq v_{t-1}$ as occurrences that may have actual effects and every subset $y_t \subseteq v_t$ as occurrences that may have actual causes (Fig. \[fig2C\]). For a particular occurrence $x_{t-1}$, all subsets $y_t \subseteq v_t$ are considered as candidate effects (Fig. \[fig3\]A). For a particular occurrence $y_t$, all subsets $x_{t-1} \subseteq v_{t-1}$ are considered as candidate causes (Fig. \[fig3\]B). In what follows we refer to occurrences consisting of a single variable as “first-order” occurrences and to multi-variate occurrences as “high-order” occurrences, and, likewise, to “first-order" and “high-order" causes and effects. In the example transition shown in Fig. 
\[fig2C\], $\{\text{OR}_{t-1} = 1\}$ and $\{\text{AND}_t = 0\}$ are first-order occurrences that could have an actual effect in $v_t$, and $\{\text{(OR, AND)}_{t-1} = 10\}$ is a high-order occurrence that could also have its own actual effect in $v_t$. On the other side, $\{\text{OR}_t = 1\}$, $\{\text{AND}_t = 0\}$ and $\{\text{(OR, AND)}_t = 10\}$ are occurrences (two first-order and one high-order) that could have an actual cause in $v_{t-1}$. To identify the respective actual cause (or effect) of any of these occurrences, we evaluate all possible sets $\{\text{OR} = 1\}$, $\{\text{AND} = 0\}$, and $\{(\text{OR, AND}) = 10\}$ at time $t-1$ (or $t$). Note that, in principle, we also consider the empty set, again using the convention that $\pi(\varnothing) = 1$ (see “exclusion" below). **Information.** An occurrence must provide information about its actual cause or effect. This means that it should increase the probability of its actual cause or effect compared to its probability if the occurrence is unspecified. To evaluate this, we compare the probability of a candidate effect $y_t$ in the effect repertoire of the occurrence $x_{t-1}$ (Eqn. \[eqn2\]) to its corresponding probability in the unconstrained repertoire (Eqn. \[eqnUCE\]). Specifically, we define an effect ratio $\rho_e$ for the occurrence $x_{t-1}$ and a subsequent occurrence $y_t$ (the candidate effect) as: $$\label{eqn4} \rho_e(x_{t-1}, y_t) = \log_2\left(\frac{\pi(y_t \mid x_{t-1})}{\pi(y_t)}\right),$$ In words, the effect ratio $\rho_e$ is the relative increase in probability of an occurrence at $t$ when constrained by an occurrence at $t-1$, compared to when it is unconstrained. A positive effect ratio $\rho_e(x_{t-1}, y_t) > 0$ means that the occurrence $x_{t-1}$ makes a positive difference in bringing about $y_t$. Similarly, we compare the probability of a candidate cause $x_{t-1}$ in the cause repertoire of the occurrence $y_{t}$ (Eqn. 
\[eqn3\]) to its corresponding probability in the unconstrained repertoire (Eqn. \[eqnUCC\]). Thus, we define the cause ratio $\rho_c$ for the occurrence $y_t$ and a prior occurrence $x_{t-1}$ (the candidate cause) as: $$\label{eqn5} \rho_c(x_{t-1}, y_t) = \log_2\left(\frac{\pi(x_{t-1} \mid y_t)}{\pi(x_{t-1})}\right).$$ In words, the cause ratio $\rho_c$ is the relative increase in probability of an occurrence at $t-1$ when constrained by an occurrence at $t$, compared to when it is unconstrained. Note that the unconstrained repertoire (Eqn. \[eqnUCC\] and \[eqnUCE\]) is an average over all possible states of the occurrence. The cause and effect ratios thus take all possible counterfactual states of the occurrence into account in determining the strength of constraints. Both $\rho_e$ and $\rho_c$ can be interpreted as the number of bits of information that one occurrence specifies about the other (see [@Fano1961], Chapter 2).[^8]^,^[^9] Note that $\rho_e > 0$ is a necessary, but not sufficient condition for $y_t$ to be an actual effect of $x_{t-1}$ and $\rho_c > 0$ is a necessary, but not sufficient condition for $x_{t-1}$ to be an actual cause of $y_t$. $\rho_{c/e} = 0$ iff conditioning on the occurrence does not change the probability of a potential cause or effect, which includes the case of the empty set. Occurrences $x_{t-1}$ that lower the probability of a subsequent occurrence $y_t$ have been termed “preventative causes” by some [@Korb2011]. Rather than counting a negative effect ratio $\rho_e(x_{t-1}, y_t) < 0$ as indicating a possible “preventative effect”, we take the stance that such an occurrence $x_{t-1}$ has no effect on $y_t$, since it actually predicts other occurrences $Y_t = \neg y_t$ that did not happen. By the same logic, a negative cause ratio $\rho_c(x_{t-1}, y_t) < 0$ means that $x_{t-1}$ is no cause of $y_t$ within the transition. 
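The two ratios just defined can be computed directly from a system's transition probabilities. The following minimal sketch (plain Python, not the PyPhi implementation; the gate functions and helper names are ours) does this for the OR-AND example, marginalizing unconstrained inputs uniformly and using a uniform prior over input states for the cause side. It recovers the value of 0.415 bits discussed in the text.

```python
# Sketch: effect ratio rho_e (Eqn. 4) and cause ratio rho_c (Eqn. 5) for the
# OR-AND example, assuming each gate reads both nodes' previous states.
from itertools import product
from math import log2

def OR(a, b):  return int(a or b)
def AND(a, b): return int(a and b)

STATES = list(product((0, 1), repeat=2))  # all joint states (OR, AND) at t-1

def p_out(gate, out, constraint):
    """P(gate output == out), with unconstrained inputs uniform.
    `constraint` maps input index -> fixed value."""
    states = [s for s in STATES
              if all(s[i] == v for i, v in constraint.items())]
    return sum(gate(*s) == out for s in states) / len(states)

# Effect ratio of {OR_{t-1} = 1} on {OR_t = 1}
rho_e = log2(p_out(OR, 1, {0: 1}) / p_out(OR, 1, {}))

# Cause ratio of {OR_t = 1} over {OR_{t-1} = 1}, via Bayes' rule with a
# uniform prior over input states
compatible = [s for s in STATES if OR(*s) == 1]
p_cause = sum(s[0] == 1 for s in compatible) / len(compatible)
rho_c = log2(p_cause / 0.5)

print(round(rho_e, 3), round(rho_c, 3))  # both 0.415
```

Here both ratios coincide, but in general $\rho_e$ and $\rho_c$ for the same pair of occurrences need not be equal.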
Nevertheless, the current framework can in principle quantify the strength of possible “preventative” causes and effects. In Fig. \[fig3\]A, for example, the occurrence $\{\text{OR}_{t-1} = 1\}$ raises the probability of $\{\text{OR}_t = 1\}$, and vice versa (Fig. \[fig3\]B), with $\rho_e(\{\text{OR}_{t-1} = 1\}, \{\text{OR}_t = 1\}) = \rho_c(\{\text{OR}_{t} = 1\}, \{\text{OR}_{t-1} = 1\}) = 0.415$ bits. By contrast, the occurrence $\{\text{OR}_{t-1} = 1\}$ lowers the probability of occurrence $\{\text{AND}_t = 0\}$ and also of the second-order occurrence $\{\text{(OR, AND)}_t = 10\}$ compared to their unconstrained probabilities. Thus, neither $\{\text{AND}_t = 0\}$ nor $\{\text{(OR, AND)}_t = 10\}$ can be actual effects of $\{\text{OR}_{t-1} = 1\}$. Likewise, the occurrence $\{\text{OR}_t = 1\}$ lowers the probability of $\{\text{AND}_{t-1} = 0\}$, which can thus not be its actual cause. **Integration.** A high-order occurrence must specify more information about its actual cause or effect than when its parts are considered independently. This means that the high-order occurrence must increase the probability of its actual cause or effect beyond the value specified by its parts. As outlined in section \[cer\], a partitioned cause or effect repertoire specifies the residual constraints of an occurrence after applying a partition $\psi$. We quantify the amount of information specified by the parts of an occurrence based on partitioned cause/effect repertoires (Eqn. \[eqn6\] and \[eqn7\]). We define the partitioned effect ratio $$\rho_e(x_{t-1}, y_t)_\psi = \log_2\left(\frac{\pi(y_t \mid x_{t-1})_\psi}{\pi(y_t)}\right),$$ and the partitioned cause ratio $$\rho_c(x_{t-1}, y_t)_\psi = \log_2\left(\frac{\pi(x_{t-1} \mid y_t)_\psi}{\pi(x_{t-1})}\right).$$ The information a high-order occurrence specifies about its actual cause or effect is irreducible to the extent that it exceeds the information specified under *any* partition $\psi$. 
Out of all permissible partitions $\Psi(x_{t-1}, Y_t)$ (Eqn. \[eqn:Pe\]), or $\Psi(X_{t-1}, y_t)$ (Eqn. \[eqn:Pc\]), the partition that least reduces an effect or cause ratio is denoted the “minimum information partition" (${\operatorname{MIP}}$) [@Oizumi2014; @Albantakis2015], respectively: $${\operatorname{MIP}}= \operatorname*{arg\,min}_{\psi \in \Psi(x_{t-1} , Y_t)} \left(\rho_e(x_{t-1}, y_t) - \rho_e(x_{t-1}, y_t)_\psi\right)$$ or $${\operatorname{MIP}}= \operatorname*{arg\,min}_{\psi \in \Psi(X_{t-1}, y_t)} \left(\rho_c(x_{t-1}, y_t) - \rho_c(x_{t-1}, y_t)_\psi\right).$$ We can then define the irreducible effect ratio $\alpha_e$ as the difference between the intact ratio and the ratio under the ${\operatorname{MIP}}$: $$\label{eqn8} \alpha_e(x_{t-1}, y_t) = \rho_e(x_{t-1}, y_t) - \rho_e(x_{t-1}, y_t)_{{\operatorname{MIP}}} = \log_2\left(\frac{\pi(y_t \mid x_{t-1})}{\pi(y_t \mid x_{t-1})_{{\operatorname{MIP}}}}\right),$$ and the irreducible cause ratio $\alpha_c$ as: $$\label{eqn9} \alpha_c(x_{t-1}, y_t) = \rho_c(x_{t-1}, y_t) - \rho_c(x_{t-1}, y_t)_{{\operatorname{MIP}}} = \log_2\left(\frac{\pi(x_{t-1} \mid y_t)}{\pi(x_{t-1} \mid y_t)_{{\operatorname{MIP}}}}\right).$$ For first-order occurrences $x_{i,t-1}$ or $y_{i,t}$ there is only one way to partition the occurrence ($\psi = \{(x_{i, t-1}, \varnothing)\}$ or $\psi = \{(y_{i, t}, \varnothing)\}$), which is necessarily the ${\operatorname{MIP}}$, leading to $\alpha_e(x_{i, t-1}, y_t) = \rho_e(x_{i, t-1}, y_t)$ or $\alpha_c(x_{t-1}, y_{i,t}) = \rho_c(x_{t-1}, y_{i,t})$, respectively. A positive irreducible effect ratio ($\alpha_e(x_{t-1}, y_t) > 0$) signifies that the occurrence $x_{t-1}$ has an irreducible effect on $y_t$, which is necessary but not sufficient for $y_t$ to be an actual effect of $x_{t-1}$.
Likewise, a positive irreducible cause ratio ($\alpha_c(x_{t-1}, y_t) > 0$) means that $y_t$ has an irreducible cause in $x_{t-1}$, which is a necessary but not sufficient condition for $x_{t-1}$ to be an actual cause of $y_t$. In our example transition, the occurrence $\{\text{(OR, AND)}_{t-1} = 10\}$ (Fig. \[fig3\]C) is reducible. This is because $\{\text{OR}_{t-1} = 1\}$ is sufficient to determine $\{\text{OR}_t = 1\}$ with probability 1.0 and $\{\text{AND}_{t-1} = 0\}$ is sufficient to determine $\{\text{AND}_t = 0\}$ with probability 1.0. Thus, there is nothing to be gained by considering the two nodes together as a second-order occurrence. By contrast, the occurrence $\{\text{(OR, AND)}_t = 10\}$ determines the particular past state $\{\text{(OR, AND)}_{t-1} = 10\}$ with higher probability than the two first-order occurrences $\{\text{OR}_t = 1\}$ and $\{\text{AND}_t = 0\}$ taken separately (Fig. \[fig3\]D, right). Thus, the second-order occurrence $\{\text{(OR, AND)}_t = 10\}$ is irreducible over the candidate cause $\{\text{(OR, AND)}_{t-1} = 10\}$ with $\alpha_c(\{\text{(OR, AND)}_{t-1} = 10\}, \{\text{(OR, AND)}_t = 10\}) = 0.17$ bits (see Discussion \[D4\]). **Exclusion.** An occurrence should have at most one actual cause and one actual effect (which, however, can be multivariate, that is, a high-order occurrence). In other words, only one occurrence $y_t \subseteq v_t$ can be the actual effect of an occurrence $x_{t-1}$, and only one occurrence $x_{t-1} \subseteq v_{t-1}$ can be the actual cause of an occurrence $y_t$. It is possible that there are multiple occurrences $y_t \subseteq v_t$ over which $x_{t-1}$ is irreducible, $\alpha_e(x_{t-1}, y_t) > 0$, as well as multiple occurrences $x_{t-1} \subseteq v_{t-1}$ over which $y_t$ is irreducible, $\alpha_c(x_{t-1}, y_t) > 0$. The irreducible effect or cause ratio of an occurrence quantifies the strength of its causal constraint on a candidate effect or cause.
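The 0.17-bit value for the second-order occurrence can be reproduced numerically. The sketch below (ours, not PyPhi; it evaluates only the factorization of the pair into its two first-order parts, which for this two-node example is the minimum information partition shown in Fig. \[fig3\]D) computes $\alpha_c$ as the log-ratio of the intact to the partitioned cause repertoire value; the unconstrained denominators cancel in the difference of Eqn. \[eqn9\].

```python
# Sketch: irreducible cause ratio alpha_c of {(OR, AND)_t = 10} over the
# candidate cause {(OR, AND)_{t-1} = 10}, partitioned into first-order parts.
from itertools import product
from math import log2

STATES = list(product((0, 1), repeat=2))  # joint states (OR, AND) at t-1
def OR(a, b):  return int(a or b)
def AND(a, b): return int(a and b)

def cause_prob(gates_outs, x):
    """pi(x_{t-1} | y_t): uniform prior over input states, conditioned on the
    (gate, output) pairs in gates_outs; x maps input index -> value."""
    compat = [s for s in STATES if all(g(*s) == o for g, o in gates_outs)]
    return sum(all(s[i] == v for i, v in x.items()) for s in compat) / len(compat)

# Intact second-order cause repertoire value for (OR, AND)_{t-1} = 10
intact = cause_prob([(OR, 1), (AND, 0)], {0: 1, 1: 0})                  # 1/2
# Partitioned: each output node constrains only "its own" input node
part = cause_prob([(OR, 1)], {0: 1}) * cause_prob([(AND, 0)], {1: 0})   # 4/9
alpha_c = log2(intact / part)
print(round(alpha_c, 2))  # 0.17
```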
When there are multiple candidate causes or effects for which $\alpha_{c/e}(x_{t-1}, y_t) > 0$, we select the strongest of those constraints as its actual cause or effect (that is, the one that maximizes $\alpha$). Note that adding unconstrained variables to a candidate cause (or effect) does not change the value of $\alpha$, as the occurrence still specifies the same irreducible constraints about the state of the extended candidate cause (or effect). For this reason, we include a “minimality” condition, such that no subset of an actual cause or effect should have the same irreducible cause or effect ratio.[^10]^,^[^11] We define the irreducibility of an occurrence as its maximum irreducible effect (or cause) ratio over all candidate effects (or causes), $$\alpha_e^{{\textrm{max}}}(x_{t-1}) = \max_{y_t \subseteq v_t} \alpha_e(x_{t-1}, y_t),$$ and $$\alpha_c^{{\textrm{max}}}(y_t) = \max_{x_{t-1} \subseteq v_{t-1}} \alpha_c(x_{t-1}, y_t).$$ Considering the empty set as a possible cause or effect guarantees that the minimal value that $\alpha^{{\textrm{max}}}$ can take is $0$. Accordingly, if $\alpha^{{\textrm{max}}}=0$, then the occurrence is said to be reducible, and it has no actual cause or effect. For the example in Fig. \[fig2\]A, $\{\text{OR}_t = 1\}$ has two candidate causes with $\alpha_c^{{\textrm{max}}}(\{\text{OR}_t = 1\}) = 0.415$ bits, the first-order occurrence $\{\text{OR}_{t-1} = 1\}$ and the second-order occurrence $\{\text{(OR, AND)}_{t-1} = 10\}$. In this case, $\{\text{OR}_{t-1} = 1\}$ is the actual cause of $\{\text{OR}_t = 1\}$ by the minimality condition across overlapping candidate causes. The exclusion principle avoids causal overdetermination, which arises from counting multiple causes or effects for a single occurrence. Note, however, that symmetries in $G_u$ can give rise to genuine indeterminism about the actual cause or effect (see Results \[R\]).
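The exclusion step for $\{\text{OR}_t = 1\}$ can be checked by brute force. This sketch (ours, not PyPhi) scores every non-empty candidate cause; since the occurrence is first-order, $\alpha_c = \rho_c$ for each candidate, and the minimality condition then discards the second-order superset that ties with $\{\text{OR}_{t-1} = 1\}$.

```python
# Sketch: candidate causes of the first-order occurrence {OR_t = 1}, scored
# by alpha_c (= rho_c here); minimality excludes uninformative supersets.
from itertools import product
from math import log2

STATES = list(product((0, 1), repeat=2))  # joint states (OR, AND) at t-1
def OR(a, b): return int(a or b)

compat = [s for s in STATES if OR(*s) == 1]  # states consistent with OR_t = 1

def alpha_c(x):  # x: dict, input index -> value
    p = sum(all(s[i] == v for i, v in x.items()) for s in compat) / len(compat)
    prior = 1 / 2 ** len(x)  # unconstrained probability of the sub-state x
    return log2(p / prior) if p > 0 else float('-inf')

candidates = {'OR=1': {0: 1}, 'AND=0': {1: 0}, '(OR,AND)=10': {0: 1, 1: 0}}
scores = {name: round(alpha_c(x), 3) for name, x in candidates.items()}
print(scores)  # {'OR=1': 0.415, 'AND=0': -0.585, '(OR,AND)=10': 0.415}
# The maximum is tied, but minimality selects the smaller set {OR_{t-1} = 1}.
```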
This is the case if multiple candidate causes (or effects) are maximally irreducible and they are not simple sub- or supersets of each other. Upholding the causal exclusion principle, such degenerate cases are resolved by stipulating that the *one* actual cause remains undetermined between all minimal candidate causes (or effects). To summarize, we formally translate the five causal principles of IIT into the following requirements for actual causation: Realization : There is a dynamical causal network $G_u$ and a transition ${v_{t-1} \prec v_t}$, such that $p_u(v_t | v_{t-1}) > 0$. Composition : All $x_{t-1} \subseteq v_{t-1}$ may have actual effects and be actual causes and all $y_{t} \subseteq v_t$ may have actual causes and be actual effects. Information : Occurrences must increase the probability of their causes or effects ($\rho(x_{t-1}, y_t) > 0$). Integration : Moreover, they must do so above and beyond their parts ($\alpha(x_{t-1}, y_t) > 0$). Exclusion : An occurrence has only one actual cause (or effect) and it is the occurrence that maximizes $\alpha_c$ (or $\alpha_e$). Having established the above causal principles, we now formally define the actual cause and the actual effect of an occurrence within a transition ${v_{t-1} \prec v_t}$ of the dynamical causal network $G_u$: \[def1\] Within a transition ${v_{t-1} \prec v_t}$ of a dynamical causal network $G_u$, the actual cause of an occurrence $y_t \subseteq v_t$ is an occurrence $x_{t-1} \subseteq v_{t-1}$ which satisfies the following conditions: 1. The irreducible cause ratio of $y_t$ over $x_{t-1}$ is maximal $$\alpha_c(x_{t-1}, y_t) = \alpha^{\max}(y_t)$$ 2. No subset of $x_{t-1}$ satisfies condition (1) $$\alpha_c(x'_{t-1}, y_t) = \alpha^{\max}(y_t) \Rightarrow x'_{t-1} \not\subset x_{t-1}$$ Define the set of all occurrences that satisfy the above conditions as $x^*(y_t)$. Since an occurrence can have at most one actual cause, there are three potential outcomes: 1. 
if $x^*(y_t) = \{x_{t-1}\}$, then $x_{t-1}$ is the actual cause of $y_t$; 2. if $|x^*(y_t)| > 1$ then the actual cause of $y_t$ is indeterminate; 3. if $x^*(y_t) = \{\varnothing\}$, then $y_t$ has no actual cause. \[def2\] Within a transition ${v_{t-1} \prec v_t}$ of a dynamical causal network $G_u$, the actual effect of an occurrence $x_{t-1} \subseteq v_{t-1}$ is an occurrence $y_t \subseteq v_t$ which satisfies the following conditions: 1. The irreducible effect ratio of $x_{t-1}$ over $y_t$ is maximal $$\alpha_e(x_{t-1}, y_t) = \alpha^{\max}(x_{t-1})$$ 2. No subset of $y_t$ satisfies condition (1) $$\alpha_e(x_{t-1}, y'_t) = \alpha^{\max}(x_{t-1}) \Rightarrow y'_t \not\subset y_t$$ Define the set of all occurrences that satisfy the above conditions as $y^*(x_{t-1})$. Since an occurrence can have at most one actual effect, there are three potential outcomes: 1. if $y^*(x_{t-1}) = \{y_t\}$, then $y_t$ is the actual effect of $x_{t-1}$; 2. if $|y^*(x_{t-1})| > 1$ then the actual effect of $x_{t-1}$ is indeterminate; 3. if $y^*(x_{t-1}) = \{\varnothing\}$, then $x_{t-1}$ has no actual effect. Based on Definitions \[def1\] and \[def2\]: \[def3\] Within a transition ${v_{t-1} \prec v_t}$ of a dynamical causal network $G_u$, a *causal link* is an occurrence $x_{t-1} \subseteq v_{t-1}$ with $\alpha_e^{{\textrm{max}}}(x_{t-1}) > 0$ and its actual effect $y^*(x_{t-1})$, $$x_{t-1} \rightarrow y^*(x_{t-1}),$$ or an occurrence $y_t \subseteq v_t$ with $\alpha_c^{{\textrm{max}}}(y_t) >0$ and its actual cause $x^*(y_t)$, $$x^*(y_t) \leftarrow y_t$$ An irreducible occurrence defines a single causal link, regardless of whether the actual cause (or effect) is unique or indeterminate. When the actual cause (or effect) is unique, we sometimes refer to the actual cause (or effect) explicitly in the causal link, $x_{t-1} \leftarrow y_t$ (or $x_{t-1} \rightarrow y_t$). 
The *strength* of a causal link is determined by its $\alpha_e^{{\textrm{max}}}$ or $\alpha_c^{{\textrm{max}}}$ value. Reducible occurrences ($\alpha^{{\textrm{max}}} = 0$) cannot form a causal link. \[def4\] For a transition ${v_{t-1} \prec v_t}$ of a dynamical causal network $G_u$, the causal account ${\mathcal{C}}({v_{t-1} \prec v_t})$ is the set of all causal links $x_{t-1} \rightarrow y^*(x_{t-1})$ and $x^*(y_t) \leftarrow y_t$ within the transition. Under this definition, all actual causes and actual effects contribute to the causal account ${\mathcal{C}}({v_{t-1} \prec v_t})$. Notably, the fact that there is a causal link $x_{t-1} \rightarrow y_t$ does not necessarily imply that the reverse causal link $x_{t-1}\leftarrow y_t$ is also present, and vice versa. In other words, just because $y_t$ is the actual effect of $x_{t-1}$, the occurrence $x_{t-1}$ does not have to be the actual cause of $y_t$. It is therefore not redundant to include both directions in ${\mathcal{C}}({v_{t-1} \prec v_t})$, as illustrated by examples of overdetermination and prevention in the Results section (see also Discussion \[D2\]). Fig. \[fig4\] shows the entire causal account of our example transition. Intuitively, in this simple example, $\{\text{OR}_{t-1} = 1\}$ has the actual effect $\{\text{OR}_t = 1\}$ and is also the actual cause of $\{\text{OR}_t = 1\}$, and the same for $\{\text{AND}_{t-1} = 0\}$ and $\{\text{AND}_t = 0\}$. Nevertheless, there is also a causal link between the second-order occurrence $\{\text{(OR, AND)}_t = 10\}$ and its actual cause $\{\text{(OR, AND)}_{t-1} = 10\}$, which is irreducible to its parts, as shown in Fig. \[fig3\]D (right). However, there is no complementary link from $\{\text{(OR, AND)}_{t-1} = 10\}$ to $\{\text{(OR, AND)}_t = 10\}$, as the occurrence at $t-1$ is reducible on the effect side (Fig. \[fig3\]C, right). The causal account shown in Fig.
\[fig4\] provides a complete causal explanation for “what happened” and “what caused what” in the transition $\{\text{(OR, AND)}_{t-1} = 10\} \prec \{\text{(OR, AND)}_t = 10\}$. Similar to the notion of system-level integration in IIT [@Oizumi2014; @Albantakis2015], the principle of integration can also be applied to the causal account as a whole, not only to individual causal links (see \[S2\]). In this way it is possible to evaluate to what extent the transition ${v_{t-1} \prec v_t}$ is irreducible to its parts, which is quantified by ${\mathcal{A}}({v_{t-1} \prec v_t})$. In summary, the measures defined in this section provide the means to exhaustively assess “what caused what” in a transition ${v_{t-1} \prec v_t}$, and to evaluate the strength of specific causal links of interest under a particular set of background conditions, $U = u$. Software to analyze transitions in dynamical causal networks with binary variables is freely available within the “PyPhi" toolbox for integrated information theory [@Mayner2018] at <https://github.com/wmayner/pyphi>, including documentation at <https://pyphi.readthedocs.io/en/stable/examples/actual_causation.html>. Results {#R} ======= In the following, we will present a series of examples to illustrate the quantities and objects defined in the theory section and address several dilemmas taken from the literature on actual causation. For simplicity, we only cover examples including binary variables in the main text. Multi-variate examples which demonstrate that our proposed framework for actual causation naturally generalizes beyond the binary case can be found in the \[S1\]. There, we also discuss in detail how our approach and the results below compare to counterfactual accounts of actual causation based on “contingency conditions" [@Hitchcock2001; @Halpern2001; @Woodward2003; @Halpern2005; @Halpern2015; @Weslake2015-WESAPT][^12]. 
Same transition, different mechanism: disjunction, conjunction, biconditional, and prevention --------------------------------------------------------------------------------------------- Fig. \[fig5\] shows four causal networks of different types of logic gates with two inputs each, all transitioning from the input state $v_{t-1} = \{AB = 11\}$ to the output state $v_t = \{C = 1\}$, $\{D = 1\}$, $\{E = 1\}$ or $\{F = 1\}$. From a dynamical point of view, without taking the causal structure of the mechanisms into account, the same occurrences happen in all four situations. However, analyzing the causal accounts of these transitions reveals differences in the number, type, and strength of causal links between occurrences and their actual causes or effects. **Disjunction:** The first example (Fig. \[fig5\]A – OR-gate) is a case of symmetric overdetermination ([@Pearl2000], Chapter 10): each input to $C$ would have been sufficient for $\{C = 1\}$, yet both $\{A = 1\}$ and $\{B = 1\}$ occurred at $t-1$. In this case, each of the inputs to $C$ has an actual effect, $\{A = 1\} \rightarrow \{C = 1\}$ and $\{B = 1\} \rightarrow \{C = 1\}$, as they raise the probability of $\{C = 1\}$ compared to its unconstrained probability. The high-order occurrence $\{AB = 11\}$, however, is reducible with $\alpha_e = 0$. While both $\{A = 1\}$ and $\{B = 1\}$ have actual effects, by the causal exclusion principle, the occurrence $\{C = 1\}$ can only have one actual cause. Since both $\{A = 1\} \leftarrow \{C = 1\}$ and $\{B = 1\} \leftarrow \{C = 1\}$ have $\alpha_c = \alpha^{{\textrm{max}}}_c = 0.415$ bits, by Definition \[def1\], the actual cause of $\{C = 1\}$ is either $\{A = 1\}$, or $\{B = 1\}$; which of the two inputs it is remains undetermined, since they are perfectly symmetric in this example. Note that $\{AB = 11\} \leftarrow \{C = 1\}$ also has $\alpha_c = 0.415$ bits, but $\{AB = 11\}$ is excluded from being a cause by the minimality condition.
**Conjunction:** In the second example (Fig. \[fig5\]B – AND-gate), both $\{A = 1\}$ and $\{B = 1\}$ are necessary for $\{D = 1\}$. In this case, each input alone has an actual effect, $\{A = 1\} \rightarrow \{D = 1\}$ and $\{B = 1\} \rightarrow \{D = 1\}$ (with higher strength than in the disjunctive case), but here also the second-order occurrence of both inputs together has an actual effect, $\{AB = 11\} \rightarrow \{D = 1\}.$ Thus, there is a composition of actual effects. Again, the occurrence $\{D = 1\}$ can only have one actual cause; here it is the second-order cause $\{AB = 11\}$, the only occurrence that satisfies the conditions in Definition \[def1\] with $\alpha_c = \alpha_c^{{\textrm{max}}} = 2.0$ bits. The two examples in Fig. \[fig5\]A and B are often referred to as the disjunctive and conjunctive versions of the “forest-fire” example [@Halpern2005; @Halpern2015; @Halpern2016], where lightning and/or a match being dropped result in a forest fire. In the case that lightning strikes and the match is dropped, $\{A = 1\}$ and $\{B = 1\}$ are typically considered two separate (first-order) causes in both the disjunctive and conjunctive version (e.g., [@Halpern2005], see \[S1\]). This result is not a valid solution within our proposed account of actual causation, as it violates the causal exclusion principle. We explicitly evaluate the high-order occurrence $\{AB = 11\}$ as a candidate cause, in addition to $\{A = 1\}$ and $\{B = 1\}$. In line with the distinct logic structure of the two examples, we identify the high-order occurrence $\{AB = 11\}$ as the actual cause of $\{D = 1\}$ in the conjunctive case, while we identify either $\{A = 1\}$ or $\{B = 1\}$ as the actual cause of $\{C = 1\}$ in the disjunctive case, but not both. By separating actual causes from actual effects, acknowledging causal composition, and respecting the causal exclusion principle, our proposed causal analysis can illuminate and distinguish all situations displayed in Fig. \[fig5\].
**Biconditional**: The significance of high-order occurrences is further emphasized by the third example (Fig. \[fig5\]C), where $E$ is a “logical biconditional” (an XNOR) of its two inputs. In this case, the individual occurrences $\{A = 1\}$ and $\{B = 1\}$ by themselves make no difference in bringing about $\{E = 1\}$; their effect ratios are zero. For this reason, they cannot have actual effects and cannot be actual causes. Only the second-order occurrence $\{AB = 11\}$ specifies $\{E = 1\}$, which is its actual effect $\{AB = 11\} \rightarrow \{E = 1\}$. Likewise, $\{E = 1\}$ only specifies the second-order occurrence $\{AB = 11\}$, which is its actual cause $\{AB = 11\} \leftarrow \{E = 1\}$, but not its parts taken separately. Note that the causal strength in this example is lower than in the case of the AND-gate, since, everything else being equal, $\{D = 1\}$ is mechanistically a less likely output than $\{E = 1\}$. **Prevention:** In the final example, Fig. \[fig5\]D, all input states but $\{AB = 10\}$ lead to $\{F = 1\}$. Here, $\{B = 1\} \rightarrow \{F = 1\}$ and $\{B = 1\} \leftarrow \{F = 1\}$, whereas $\{A = 1\}$ does not have an actual effect and is not an actual cause. For this reason, the transition ${v_{t-1} \prec v_t}$ is reducible (${\mathcal{A}}({v_{t-1} \prec v_t}) = 0$, \[S2\]), since $A$ could be partitioned away without loss. This example can be seen as a case of prevention: $\{B = 1\}$ causes $\{F = 1\}$, which prevents any effect of $\{A = 1\}$. In a popular narrative accompanying this example, $\{A = 1\}$ is an assassin putting poison in the King’s tea, while a bodyguard administers an antidote $\{B = 1\}$, and the King survives $\{F = 1\}$ [@Halpern2016]. The bodyguard thus “prevents” the King’s death[^13]. 
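The four gates of Fig. \[fig5\] can be compared in a single computation. The sketch below (ours, not PyPhi; the gate names, including the shorthand `PREV` for the prevention mechanism in which only $\{AB = 10\}$ yields '0', are our labels) reports the strength of the strongest candidate cause of the output occurrence '1' for each gate. Since the output occurrence is first-order, $\alpha_c = \rho_c$ for every candidate.

```python
# Sketch: alpha_c^max of the output occurrence '1' across the four two-input
# gates of Fig. 5, for input state {AB = 11}.
from itertools import product
from math import log2

STATES = list(product((0, 1), repeat=2))
gates = {
    'OR':   lambda a, b: int(a or b),
    'AND':  lambda a, b: int(a and b),
    'XNOR': lambda a, b: int(a == b),
    'PREV': lambda a, b: int(not (a and not b)),  # '0' only for AB = 10
}

def alpha_c(gate, x):
    compat = [s for s in STATES if gate(*s) == 1]  # output occurrence is '1'
    p = sum(all(s[i] == v for i, v in x.items()) for s in compat) / len(compat)
    prior = 1 / 2 ** len(x)
    return log2(p / prior) if p > 0 else float('-inf')

best = {}
for name, g in gates.items():
    # candidate causes: all non-empty sub-states of {AB = 11}
    cands = [{0: 1}, {1: 1}, {0: 1, 1: 1}]
    best[name] = round(max(alpha_c(g, x) for x in cands), 3)
print(best)  # {'OR': 0.415, 'AND': 2.0, 'XNOR': 1.0, 'PREV': 0.415}
```

The ordering matches the text: the conjunction has the strongest actual cause, the biconditional an intermediate one, and the disjunction and prevention cases the weakest.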
Note that the causal account is state-dependent: for a different transition, $A$ may have an actual effect or contribute to an actual cause. If the bodyguard does not administer the antidote ($\{B = 0\}$), whether the King survives depends on the assassin (the state of $A$). Taken together, the above examples demonstrate that the causal account and the causal strength of individual causal links within the account capture differences in sufficiency and necessity of the various occurrences in their respective transitions. Including both actual causes and effects moreover contributes to a mechanistic understanding of the transition, since not all occurrences at $t-1$ with actual effects end up being actual causes of occurrences at $t$. Linear threshold units ---------------------- A generalization of simple, linear logic gates, such as OR- and AND-gates, are binary linear threshold units (LTUs). Given $n$ equivalent inputs $V_{t-1} = \{V_{1, t-1}, V_{2, t-1}, \ldots, V_{n, t-1}\}$ to a single LTU $V_t$, $V_t$ will turn on (‘1’) if the number of inputs in state ‘1’ reaches a given threshold $k$, $$\label{LTU} p(V_t = 1 \mid v_{t-1}) = \begin{cases} 1 & \text{if}~ \sum_{i = 1}^n v_{i, t-1} \geq k, \\ 0 & \text{if}~ \sum_{i=1}^n v_{i, t-1} < k. \end{cases}$$ LTUs are of great interest, for example, in the field of neural networks, since they comprise one of the simplest model mechanisms for neurons, capturing the notion that a neuron fires if it receives sufficient synaptic input. One example is a Majority-gate, which outputs ‘1’ *iff* more than half of its inputs are ‘1’. Fig. \[fig6\]A displays the causal account of a Majority-gate $M$ with 4 inputs for the transition $v_{t-1} = \{ABCD = 1110\} \prec v_t = \{M = 1\}$. All of the inputs in state ‘1’, as well as their high-order occurrences, have actual effects on $\{M = 1\}$.
Occurrence $\{D = 0\}$, however, does not work towards bringing about $\{M = 1\}$: it reduces the probability for $\{M = 1\}$ and thus does not contribute to any actual effects or the actual cause. As with the AND-gate in the previous section, there is a composition of actual effects in the causal account. Yet, there is only one actual cause, $\{ABC = 111\} \leftarrow \{M = 1\}$. In this case, it happens to be that the third-order occurrence $\{ABC = 111\}$ is minimally sufficient for $\{M = 1\}$—no smaller set of inputs would suffice. Note however, that the actual cause is not determined based on sufficiency, but because $\{ABC = 111\}$ is the set of nodes maximally constrained by the occurrence $\{M = 1\}$. Nevertheless, causal analysis as illustrated here will always identify a minimally sufficient set of inputs as the actual cause of an LTU $v_t = 1$, for any number of inputs $n$ and any threshold $k$. Furthermore, any occurrence of input variables $x_{t-1} \subseteq v_{t-1}$ with at most $k$ nodes, all in state ‘1’, will be irreducible, with the LTU $v_t = 1$ as their actual effect. \[thm1\] Consider a dynamical causal network $G_u$ such that $V_t = \{Y_t\}$ is a linear threshold unit with $n$ inputs and threshold $k \leq n$, and $V_{t-1}$ is the set of $n$ inputs to $Y_t$. For a transition $v_{t-1} \prec v_{t}$, with $y_t = 1$ and $\sum v_{t-1} \geq k$, the following holds: 1. The actual cause of $\{Y_t = 1\}$ is an occurrence $\{X_{t-1} = x_{t-1}\}$ with $|x_{t-1}| = k$ and $\min(x_{t-1}) = 1$. 2. If $\min(x_{t-1}) = 1$ and $|x_{t-1}| \leq k$ then the actual effect of $\{X_{t-1} = x_{t-1}\}$ is $\{Y_t = 1\}$; otherwise $\{X_{t-1} = x_{t-1}\}$ has no actual effect, it is reducible. Proof: See \[proof\]. Note that an LTU in the off (‘0’) state, $\{Y_t = 0\}$, has equivalent results with the role of ‘0’ and ‘1’ reversed, and a threshold of $n-k$. 
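Theorem \[thm1\] can be checked numerically for the 4-input Majority-gate of Fig. \[fig6\]A. The sketch below (ours, not PyPhi; since $\{M = 1\}$ is a first-order occurrence, $\alpha_c = \rho_c$ for every candidate) scores every sub-occurrence of $v_{t-1} = \{ABCD = 1110\}$ and applies the minimality condition, recovering $\{ABC = 111\}$ as the unique actual cause.

```python
# Sketch: actual cause of {M = 1} for the 4-input majority gate (k = 3)
# in the transition {ABCD = 1110} < {M = 1}.
from itertools import combinations, product
from math import log2

n, k = 4, 3
STATES = list(product((0, 1), repeat=n))
compat = [s for s in STATES if sum(s) >= k]   # the 5 states with M_t = 1
v_prev = (1, 1, 1, 0)                         # the actual input state

def alpha_c(idx):  # candidate cause: v_prev restricted to indices idx
    p = sum(all(s[i] == v_prev[i] for i in idx) for s in compat) / len(compat)
    prior = 1 / 2 ** len(idx)
    return log2(p / prior) if p > 0 else float('-inf')

scores = {idx: alpha_c(idx)
          for r in range(1, n + 1) for idx in combinations(range(n), r)}
amax = max(scores.values())
# minimality: drop any maximizer with a proper subset that also maximizes
minimal = [idx for idx, a in scores.items() if a == amax
           and not any(set(j) < set(idx) and scores[j] == amax for j in scores)]
print(minimal, round(amax, 3))  # [(0, 1, 2)] 1.678  -> {ABC = 111}
```

Consistent with Theorem \[thm1\], the actual cause is a $k$-subset of inputs in state '1'; the full state $\{ABCD = 1110\}$ ties in strength but is excluded by minimality.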
In the case of overdetermination, *e.g.*, the transition $v_{t-1} = \{ABCD = 1111\} \prec v_t = \{M = 1\}$, where all inputs to the Majority-gate are ‘1’, the actual cause will again be a subset of 3 input nodes in state ‘1’. However, which of the possible sets it is remains undetermined due to symmetry, just as in the case of the OR-gate in Fig. \[fig5\]A. Distinct background conditions ------------------------------ The causal network in Fig. \[fig6\]A considers all inputs to $M$ as relevant variables. Under certain circumstances, however, we may want to consider a different set of background conditions. For example, in a voting scenario it may be a given that $D$ always votes “no" ($D=0$). In that case we may want to analyze the causal account of the transition $v_{t-1} = \{ABC = 111\} \prec v_t = \{M=1\}$ in the alternative causal model $G_{u'}$, where $\{D = 0\} \in \{U' = u'\}$ is treated as a background condition (Fig. \[fig6\]B). Doing so results in a causal account with the same causal links but higher causal strengths. This captures the intuition that $A$, $B$, and $C$’s “yes votes” are more important if it is already determined that $D$ will vote “no”. The difference between the causal accounts of ${v_{t-1} \prec v_t}$ in $G_u$ compared to $G_{u'}$, moreover, highlights the fact that we explicitly distinguish fixed background conditions $U = u$ from relevant variables $V$ whose counterfactual relations must be considered (see also [@McDermott2002]). While the background variables are fixed in their actual state $U = u$, all counterfactual states of the relevant variables $V$ are considered when evaluating the causal account of ${v_{t-1} \prec v_t}$ in $G_u$. Disjunction of conjunctions {#R3} --------------------------- Another case often considered in the actual causation literature is a disjunction of conjunctions, that is, an OR-operation over two or more AND-operations.
In the general case, a disjunction of conjunctions is a variable $V_t$ that is a disjunction of $k$ conditions, each of which is a conjunction of $n_j$ input nodes $V_{t-1} = \{\{V_{i, j, t-1}\}_{i = 1}^{n_j}\}_{j=1}^{k}$, $$p(V_t = 1 \mid v_{t-1}) = \begin{cases} 0 & \text{if} ~ \sum_{i=1}^{n_j} v_{i, j, t-1} < n_j, ~ \forall j \\ 1 & \text{otherwise} \end{cases}$$ Here we consider a simple example, $(A \wedge B) \vee C$ (Fig. \[fig7\]). The debate over this example is mostly concerned with the type of transition shown in Fig. \[fig7\]A: $v_{t-1} = \{ABC = 101\} \prec v_t = \{D = 1\}$, and the question whether $\{A = 1\}$ is a cause of $\{D = 1\}$ even if $B = 0$.[^14] The quantitative assessment of actual causes and actual effects can help to resolve issues of actual causation in this type of example. As shown in Fig. \[fig7\]A, with respect to actual effects, both causal links $\{A = 1\} \rightarrow \{D = 1\}$ and $\{C = 1\} \rightarrow \{D = 1\}$ are present, with $\{C = 1\}$ having a stronger actual effect. However, $\{C = 1\}$ is the one actual cause of $\{D = 1\}$, being the maximally irreducible cause with $\alpha_c^{{\textrm{max}}}(\{D = 1\}) = 0.678$. When judging the actual effect of $\{A = 1\}$ at $t-1$ within the transition $v_{t-1} = \{ABC = 101\} \prec v_t = \{D = 1\}$, $B$ is assumed to be undetermined. By itself, the occurrence $\{A = 1\}$ does raise the probability of occurrence $\{D = 1\}$, and thus $\{A = 1\} \rightarrow \{D = 1\}$. If we instead consider $\{B = 0\} \in \{U' = u'\}$ as a fixed background condition and evaluate the transition $v_{t-1} = \{AC = 11\} \prec v_t = \{D = 1\}$ in $G_{u'}$, $\{A = 1\}$ does not have an actual effect anymore (Fig. \[fig7\]B). In this case, the background condition $\{B = 0\}$ prevents $\{A = 1\}$ from having any effect. The results from this example extend to the general case of disjunctions of conjunctions. In the situation where $v_t=1$, the actual cause of $v_t$ is a minimally sufficient occurrence. 
If multiple conjunctive conditions are satisfied, the actual cause of $v_t$ remains undetermined between all minimally sufficient sets (asymmetric overdetermination). At $t-1$, any first-order occurrence in state ‘1’, as well as any high-order occurrence of such nodes that does not overdetermine $v_t$, has an actual effect. This includes any occurrence in state all ‘1’ that contains only variables from exactly one conjunction, as well as any high-order occurrence of nodes across conjunctions, which do not fully contain any specific conjunction. If instead $v_t=0$, then its actual cause is an occurrence that contains a single node in state ‘0’ from each conjunctive condition. At $t-1$, any occurrence in state all ‘0’ that does not overdetermine $v_t$ has an actual effect, which is any all ‘0’ occurrence that does not contain more than one node from any conjunction. These results are formalized by the following theorem. \[thm2\] Consider a dynamical causal network $G_u$ such that $V_t = \{Y_t\}$ is a DOC element that is a disjunction of $k$ conditions, each of which is a conjunction of $n_j$ inputs, and $V_{t-1} = \{\{V_{i, j, t-1}\}_{i = 1}^{n_j}\}_{j=1}^{k}$ is the set of its $n = \sum_j n_j$ inputs. For a transition $v_{t-1} \prec v_{t}$, the following holds: 1. If $y_t = 1$, 1. The actual cause of $\{Y_t = 1\}$ is an occurrence $\{X_{t-1} = x_{t-1}\}$ where $x_{t-1} = \{x_{i, j, t-1}\}_{i=1}^{n_j} \subseteq v_{t-1}$ such that $\min(x_{t-1}) = 1$. 2. The actual effect of $\{X_{t-1} = x_{t-1}\}$ is $\{Y_t = 1\}$ if $\min(x_{t-1}) = 1$ and $|x_{t-1}| = c_j = n_j$; otherwise $x_{t-1}$ is reducible. 2. If $y_t = 0$, 1. The actual cause of $\{Y_t = 0\}$ is an occurrence $x_{t-1} \subseteq v_{t-1}$ such that $\max(x_{t-1}) = 0$ and $c_j = 1 ~\forall~j$. 2. If $\max(x_{t-1}) = 0$ and $c_j \leq 1~\forall~j$ then the actual effect of $\{X_{t-1} = x_{t-1}\}$ is $\{Y_t = 0\}$; otherwise $x_{t-1}$ is reducible. Proof: See \[proof\].
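For the concrete example $(A \wedge B) \vee C$ of Fig. \[fig7\]A, the quantities quoted in the discussion can be verified directly. The sketch below (ours, not PyPhi) computes the effect ratios of $\{A = 1\}$ and $\{C = 1\}$ on $\{D = 1\}$ and the cause ratios of the candidate causes; since $\{D = 1\}$ is first-order, $\alpha_c = \rho_c$ for each candidate.

```python
# Sketch: effect and cause ratios in the transition {ABC = 101} < {D = 1}
# for D = (A AND B) OR C.
from itertools import product
from math import log2

STATES = list(product((0, 1), repeat=3))          # (A, B, C)
D = lambda a, b, c: int((a and b) or c)
compat = [s for s in STATES if D(*s) == 1]        # 5 of 8 states give D_t = 1

def rho_e(x):  # effect ratio of the sub-state x on {D = 1}
    cond = [s for s in STATES if all(s[i] == v for i, v in x.items())]
    p = sum(D(*s) for s in cond) / len(cond)
    return log2(p / (len(compat) / len(STATES)))

def alpha_c(x):  # cause ratio (= alpha_c, first-order occurrence)
    p = sum(all(s[i] == v for i, v in x.items()) for s in compat) / len(compat)
    prior = 1 / 2 ** len(x)
    return log2(p / prior) if p > 0 else float('-inf')

rho_e_A, rho_e_C = rho_e({0: 1}), rho_e({2: 1})
a_C, a_A, a_AC = alpha_c({2: 1}), alpha_c({0: 1}), alpha_c({0: 1, 2: 1})
print(round(rho_e_A, 3), round(rho_e_C, 3))  # 0.263 0.678
print(round(a_C, 3), round(a_AC, 3))         # 0.678 0.678
# {AC = 11} ties with {C = 1}, but minimality selects {C = 1} as the cause.
```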
Complicated voting ------------------ As already demonstrated in the examples in Fig. \[fig5\]C and D, the proposed causal analysis is not restricted to linear update functions or combinations thereof. Fig. \[fig8\] depicts an example transition featuring a complicated, nonlinear update function. This specific example is taken from [@Halpern2015; @Halpern2016]: If $A$ and $B$ agree, $F$ takes their value; if $B$, $C$, $D$, and $E$ agree, $F$ takes $A$’s value; otherwise, majority decides. The transition of interest is $v_{t-1} = \{ABCDE = 11000\} \prec v_t = \{F = 1\}$. According to [@Halpern2015], intuition suggests that $\{A = 1\}$ together with $\{B = 1\}$ causes $\{F = 1\}$. Indeed, $\{AB = 11\}$ is one minimally sufficient occurrence in the transition that determines $\{F = 1\}$. The result of the present causal analysis of the transition (Fig. \[fig8\]) is that both $\{AB = 11\}$ and $\{ACDE = 1000\}$ completely determine that $\{F = 1\}$ will occur with $\alpha_c(x_{t-1}, y_t) = \alpha_c^{{\textrm{max}}}(y_t) = 1.0$. Thus, there is indeterminism between these two causes (see \[S1\] for a comparison of our results with those of [@Halpern2015]). In addition, the effects $\{A = 1\} \rightarrow \{F = 1\}$, $\{B = 1\} \rightarrow \{F = 1\}$, $\{AB = 11\} \rightarrow \{F = 1\}$, and $\{ACDE = 1000\} \rightarrow \{F = 1\}$ all contribute to the causal account. Noise and probabilistic variables --------------------------------- The examples so far involved deterministic update functions. Probabilistic accounts of causation are closely related to counterfactual accounts [@Paul2013]. Nevertheless, certain problem cases only arise in probabilistic settings (*e.g.* Fig. \[fig9\]B). The present causal analysis can be applied equally to probabilistic and deterministic causal networks, as long as the system’s transition probabilities satisfy conditional independence (Eqn. \[eqn1b\]). No separate, probabilistic calculus for actual causation is required.
In the simplest case, where noise is added to a deterministic transition ${v_{t-1} \prec v_t}$, the noise will generally decrease the strength of the causal links in the transition. Fig. \[fig9\] shows the causal account of the transition $v_{t-1} = \{A = 1\} \prec v_t = \{N = 1\}$, where $N$ is the slightly noisy version of a COPY-gate. In this example, both $\{A = 1\} \rightarrow \{N = 1\}$ and $\{A = 1\} \leftarrow \{N = 1\}$. The only difference from the equivalent deterministic case is that the causal strength $\alpha_e^{{\textrm{max}}} = \alpha_c^{{\textrm{max}}} = 0.848$ is lower than in the deterministic case, where $\alpha_e^{{\textrm{max}}} = \alpha_c^{{\textrm{max}}} = 1$. Note that in this probabilistic setting, the actual cause $\{A = 1\}$ by itself is not sufficient to determine $\{N = 1\}$. Nevertheless, $\{A = 1\}$ makes a positive difference in bringing about $\{N = 1\}$, and this difference is irreducible, so the causal link is present within the transition. The transition $v_{t-1} = \{A = 1\} \prec v_t = \{N = 0\}$ has no counterpart in the deterministic case, where $p(\{N = 0\}|\{A = 1\}) = 0$ (considering the transition would thus violate the realization principle). The result of the causal analysis is that there are no irreducible causal links within this transition. $\{A = 1\}$ decreases the probability of $\{N = 0\}$ and vice versa, which leads to $\alpha_{c/e}<0$. Consequently, $\alpha_{c/e}^{{\textrm{max}}}=0$, as specified by the empty set. One interpretation is that the actual cause of $\{N = 0\}$ must lie outside of the system, such as a missing latent variable. Another interpretation is that the actual cause for $\{N = 0\}$ is genuine ‘physical noise’, for example, within an element or connection. In any case, the proposed account of actual causation is sufficiently general to cover both deterministic and probabilistic systems.
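The numbers above can be reproduced with a short calculation, under the illustrative assumption (ours, not stated in the text) that the noisy COPY transmits its input with probability $0.9$; taking $\log_2$ of the conditional probability against its causally marginalized value then yields the reported $0.848$:

```python
from math import log2

p_copy = 0.9  # assumed reliability of the noisy COPY-gate (illustrative;
              # chosen because log2(0.9/0.5) reproduces the reported 0.848)

# p(N=1 | A=1) versus the same probability with A causally marginalized
# (averaged uniformly over A's two states):
p_n1_given_a1 = p_copy
pi_n1 = 0.5 * p_copy + 0.5 * (1 - p_copy)  # = 0.5

alpha = log2(p_n1_given_a1 / pi_n1)
print(round(alpha, 3))  # 0.848, weaker than the deterministic COPY's 1.0

# For the transition {A=1} -> {N=0}, the ratio is negative, so there is
# no irreducible causal link (alpha_max = 0):
print(log2((1 - p_copy) / pi_n1) < 0)  # True
```

Any noise level below $1$ gives a causal strength strictly between $0$ and $1$, which is the sense in which noise "generally decreases" the strength of the causal links.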
Simple classifier ----------------- As a final example, we consider a transition with a multi-variate $v_t$: the 3 variables $A$, $B$, and $C$ provide input to 3 different “detectors”, the nodes $D$, $S$, and $L$. $D$ is a “dot-detector”: it outputs ‘1’ if exactly one of the 3 inputs is in state ‘1’. $S$ is a “segment-detector”: it outputs ‘1’ for input states $\{ABC = 110\}$ and $\{ABC = 011\}$. $L$ detects lines, that is, $\{ABC = 111\}$. Fig. \[fig10\] shows the causal account of the specific transition $v_{t-1} = \{ABC = 001\} \prec v_t = \{DSL = 100\}$. In this case, only a few occurrences $x_{t-1} \subseteq v_{t-1}$ have actual effects, but all possible occurrences $y_t \subseteq v_{t}$ are irreducible with their own actual cause. The occurrence $\{C = 1\}$ by itself, for example, has no actual effect. This may be initially surprising since $D$ is a dot detector and $\{C = 1\}$ is supposedly a dot. However, $\{C=1\}$ by itself does not raise the probability of $\{D=1\}$. The specific configuration of the entire input set is necessary to determine $\{D = 1\}$ (a dot is only a dot if the other inputs are ‘0’). Consequently, $\{ABC = 001\} \rightarrow \{D = 1\}$ and also $\{ABC = 001\} \leftarrow \{D = 1\}$. By contrast, the occurrence $\{A = 0\}$ is sufficient to determine $\{L = 0\}$ and raises the probability of $\{D = 1\}$; the occurrence $\{B = 0\}$ is sufficient to determine $\{S = 0\}$ and $\{L = 0\}$ and also raises the probability of $\{D = 1\}$. We thus get the following causal links: $\{A = 0\} \rightarrow \{DL = 10\}$, $\{\{A = 0\},\{B = 0\}\} \leftarrow \{L = 0\}$, $\{B = 0\} \rightarrow \{DSL = 100\}$ and $\{B = 0\} \leftarrow \{S = 0\}$. In addition, all high-order occurrences $y_t$ are irreducible, each having their own actual cause above those of their parts. The actual cause identified for these high-order occurrences can be interpreted as the “strongest” shared cause of nodes in the occurrence, for example $\{B = 0\} \leftarrow \{DS = 10\}$.
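The detector logic above is easy to state as code; the following small sketch (our own naming, for illustration only) also confirms the claim that $\{C = 1\}$ by itself does not raise the probability of $\{D = 1\}$:

```python
from itertools import product

def detectors(a, b, c):
    """Dot, segment, and line detectors over binary inputs A, B, C."""
    d = int(a + b + c == 1)                       # exactly one input is '1'
    s = int((a, b, c) in [(1, 1, 0), (0, 1, 1)])  # segment patterns
    l = int((a, b, c) == (1, 1, 1))               # full line
    return d, s, l

# The transition of interest: {ABC = 001} -> {DSL = 100}
print(detectors(0, 0, 1))  # (1, 0, 0)

# {C = 1} alone does not raise p(D = 1): conditioning on C = 1 gives 1/4,
# below the unconstrained (marginalized) value of 3/8.
p_d1 = sum(detectors(*s)[0] for s in product([0, 1], repeat=3)) / 8
p_d1_given_c1 = sum(detectors(a, b, 1)[0]
                    for a, b in product([0, 1], repeat=2)) / 4
print(p_d1_given_c1, p_d1)  # 0.25 0.375
```

Because $p(\{D = 1\} \mid \{C = 1\}) < \pi(\{D = 1\})$, the occurrence $\{C = 1\}$ cannot have $\{D = 1\}$ as part of its actual effect, matching the account in Fig. \[fig10\].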
While only the occurrence $\{ABC = 001\}$ is sufficient to determine $\{DS = 10\}$, this candidate causal link is reducible, because $\{DS = 10\}$ does not constrain the past state of $ABC$ any more than $\{D = 1\}$ by itself. In fact, the occurrence $\{S = 0\}$ does not constrain the past state of $AC$ at all. Thus $\{ABC = 001\}$ and all other candidate causes of $\{DS = 10\}$ that include these nodes are either reducible (because their causal link can be partitioned with $\alpha_c^{{\textrm{max}}} = 0$) or excluded (because there is a subset of nodes whose causal strength is at least as high). In this example, $\{B = 0\}$ is the only irreducible shared cause of $\{D = 1\}$ and $\{S = 0\}$, and thus also the actual cause of $\{DS = 10\}$. Discussion {#D} ========== In this article, we presented a principled, comprehensive formalism to assess actual causation within a given dynamical causal network $G_u$. For a transition ${v_{t-1} \prec v_t}$ in $G_u$, the proposed framework provides a complete causal account of all causal links between occurrences at $t-1$ and $t$ of the transition, based on five principles—realization, composition, information, integration, and exclusion. In what follows, we review specific features and limitations of our approach, discuss how the results relate to intuitive notions about actual causation and causal explanation, and highlight some of the main differences with previous proposals aimed at operationalizing the notion of actual causation. Specifically, our framework considers all counterfactual states rather than a single contingency, which makes it possible to assess the strength of causal links. Second, it distinguishes between actual causes and actual effects, which are considered separately. Third, it allows for causal composition, in the sense that first- and high-order occurrences can have their own causes and effects within the same transition, as long as they are irreducible. 
And fourth, it provides a rigorous treatment of causal overdetermination. As demonstrated in the results section and the \[S1\], the proposed formalism is generally applicable to a vast range of physical systems, whether deterministic or probabilistic, with binary or multi-valued variables, feedforward or recurrent architectures, as well as narrative examples, as long as they can be represented as a causal network with an explicit temporal order. Testing all possible counterfactuals with equal probability {#D1} ----------------------------------------------------------- In the simplest case, counterfactual approaches to actual causation are based on the “but-for” test [@Halpern2016]: $C = c$ is a cause of $E = e$ if $C = \neg c$ implies $E= \neg e$ (“but for $c$, $e$ would not have happened”). In multi-variate causal networks this condition is typically dependent on the remaining variables $W$. What differs among current counterfactual approaches are the permissible *contingencies* ($W=w$) under which the “but-for” test is applied (e.g., [@Hitchcock2001; @Yablo2002; @Woodward2003; @Halpern2005; @Hall2007; @Halpern2015; @Weslake2015-WESAPT]) (see \[S1\]). Moreover, if there is one permissible contingency (counterfactual state) $\{\neg c, w\}$ that implies $E = \neg e$, $c$ is identified as a cause of $e$ in an “all-or-nothing" manner. In sum, current approaches test for counterfactual dependence under a fixed contingency $W = w$, evaluating a particular counterfactual state $C = \neg c$. Our starting point is a realization of a dynamical causal network $G_u$, which is a transition ${v_{t-1} \prec v_t}$ that is compatible with $G_u$’s transition probabilities ($p_u(v_t | v_{t-1}) > 0$) given the fixed background conditions $U = u$ (Fig. \[fig10B\]A). However, we employ *causal marginalization* instead of fixed $W = w$ and $C = \neg c$ within the transition. This means that we replace these variables with an average over *all* their possible states (Eqn. \[cmarg\]). 
Applied to variables outside of the candidate causal link (Fig. \[fig10B\]B), causal marginalization serves to remove the influence of these variables on the causal dependency between the occurrence and its candidate cause (or effect), which is thus evaluated based on its own merits. The difference between marginalizing the variables outside the causal link of interest and treating them as fixed contingencies becomes apparent in the case of an XOR (“exclusive OR”) mechanism in Fig. \[fig10B\] (or equivalently the biconditional (XNOR), Fig. \[fig5\]C). With input $B$ fixed in a particular state (‘0’ or ‘1’), the state of the XOR will completely depend on the state of $A$. However, the state of $A$ alone does not determine the state of the XOR at all if $B$ is marginalized. The latter better captures the mechanistic nature of the XOR, which requires a difference in $A$ and $B$ to switch on (‘1’). We also marginalize across all possible states of $C$ in order to determine whether $e$ counterfactually depends on $c$. Instead of identifying one particular $C = \neg c$ for which $E = \neg e$, all of $C$’s states are equally taken into account. The notion that counterfactual dependence is an “all-or-nothing concept” [@Halpern2016] becomes problematic, for example, if non-binary variables are considered (see \[S1\]) and also in non-deterministic settings. By contrast, our proposed approach, which considers all possible states of $C$, naturally extends to the case of multi-valued variables and probabilistic causal networks. Moreover, it has the additional benefit that we can quantify the strength of the causal link between an occurrence and its actual cause (effect). In the present framework, having a positive effect ratio $\rho_e(x_{t-1}, y_t) > 0$ is necessary but not sufficient for $x_{t-1} \rightarrow y_t$, and the same for a positive cause ratio $\rho_c(x_{t-1}, y_t) > 0$.
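The contrast between a fixed contingency and causal marginalization can be made concrete for the XOR. This small sketch (our own code, not an implementation from the text) shows that $A$ fully determines the output under any fixed $B$, but leaves it entirely unconstrained once $B$ is averaged out:

```python
def xor(a, b):
    # Binary XOR mechanism: switches on ('1') iff A and B differ.
    return a ^ b

# Fixed contingency: with B held at either state, flipping A flips the XOR,
# so the output counterfactually depends completely on A.
for b_fixed in (0, 1):
    assert xor(0, b_fixed) != xor(1, b_fixed)

# Causal marginalization: averaging uniformly over both states of B, the
# state of A alone does not constrain the XOR output at all.
p_xor1_given_a = {a: sum(xor(a, b) for b in (0, 1)) / 2 for a in (0, 1)}
print(p_xor1_given_a)  # {0: 0.5, 1: 0.5}
```

The marginalized view is the one that reflects the XOR's mechanism, which responds to a *difference* between its inputs rather than to either input alone.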
Taken together, we argue that causal marginalization, that is, averaging over contingencies and all possible counterfactuals of an occurrence, reveals the mechanisms underlying the transition. By contrast, fixing relevant variables to any one specific state largely ignores them. This is because a mechanism is only fully described by all its transition probabilities, for all possible input states (Eqn. \[eqn1b\]). For example, the biconditional $E$ in Fig. \[fig5\]C only differs from the conjunction $D$ in Fig. \[fig5\]B for the input state $\{AB = 00\}$. Once the underlying mechanisms are specified based on all possible transition probabilities, causal interactions can be quantified in probabilistic terms [@Ay2008; @Oizumi2014] even within a single transition ${v_{t-1} \prec v_t}$, *i.e.* in the context of actual causation [@Glennan2011; @Korb2011]. However, this also means that all transition probabilities have to be known for the proposed causal analysis, even for states that are not typically observed (see also [@Ay2008; @Balduzzi2008; @Janzing2013; @Oizumi2014]). Finally, in our analysis all possible past states are weighted equally in the causal marginalization. Related measures of information flow in causal networks [@Ay2008] and causal information [@Korb2011] consider weights based on a distribution of $p(v_{t-1})$, for example, the stationary distribution, or observed probabilities, or also a maximum entropy distribution (equivalent to weighting all states equally). However, in the context of actual causation, the prior probabilities of occurrences at $t-1$ are extraneous to the question “what caused what?” All that matters is what actually happened, the transition ${v_{t-1} \prec v_t}$, and the underlying mechanisms. How likely $v_{t-1}$ was to occur should not influence the causes and effects within the transition, nor how strong the causal links are between actual occurrences at $t-1$ and $t$.
In other words, the same transition, involving the same mechanisms and background conditions should always result in the same causal account. Take, for instance, a set of nodes $AB$ that output to $C$, which is a deterministic OR-gate. If $C$ receives no further inputs from other nodes, then whenever $\{AB = 11\}$ and $\{C = 1\}$, the causal links, their strength, and the causal account of the transition $\{AB = 11\} \prec \{C = 1\}$ should be the same as in Fig. \[fig5\]A (“Disjunction"). Which larger system the set of nodes was embedded in, or what the probability was for the transition to happen in the first place, according to the equilibrium, observed, or any other distribution is not relevant in this context. Let us assume, for example, that $\{A = 1\}$ was much more likely to occur than $\{B = 1\}$. This bias in prior probability does not change the fact that, mechanistically, $\{A = 1\}$ and $\{B = 1\}$ have the same effect on $\{C = 1\}$ and are equivalent causes. Distinguishing actual effects and actual causes {#D2} ----------------------------------------------- An implicit assumption commonly made about (actual) causation is that the relation between cause and effect is bidirectional: if occurrence $C=c$ had an effect on occurrence $E=e$, then $c$ is assumed to be a cause of $e$ [@Hitchcock2001; @Yablo2002; @Woodward2003; @Halpern2005; @Hall2007; @Halpern2015; @Weslake2015-WESAPT; @Twardy2011; @Fenton-Glynn2017]. As demonstrated throughout the Results section, however, this conflation of causes and effects is untenable once multi-variate transitions ${v_{t-1} \prec v_t}$ are considered (see also next, \[D3\]). There, an asymmetry between causes and effects simply arises due to the fact that the set of variables that is affected by an occurrence $x_{t-1} \subseteq v_{t-1}$ typically differs from the set of variables that affects an occurrence $y_t \subseteq v_t$. Take the toy classifier example in Fig. 
\[fig10\]: while $\{B = 0\}$ is the actual cause of $\{S = 0\}$, $\{B = 0\}$’s actual effect is $\{DSL = 100\}$. Accordingly, we propose that a comprehensive causal understanding of a given transition is provided by its complete causal account ${\mathcal{C}}$ (Definition \[def4\]), including both actual effects and actual causes. Actual effects are identified from the perspective of occurrences at $t-1$, whereas actual causes are identified from the perspective of occurrences at $t$. This means that also the causal principles of composition, integration, and exclusion are applied from these two perspectives. When we evaluate causal links of the form $x_{t-1} \rightarrow y_t$, any occurrence $x_{t-1}$ may have one actual effect $y_{t} \subseteq v_{t}$ if $x_{t-1}$ is irreducible ($\alpha^{{\textrm{max}}}_e(x_{t-1}) > 0$) (Definition \[def2\]). When we evaluate causal links of the form $x_{t-1} \leftarrow y_t$, any occurrence $y_t$ may have one actual cause $x_{t-1} \subseteq v_{t-1}$ if $y_t$ is irreducible ($\alpha^{{\textrm{max}}}_c(y_t) > 0$) (Definition \[def1\]). As seen in the first example (Fig. \[fig4\]), there may be a high-order causal link in one direction, but the reverse link may be reducible. As mentioned in the Introduction and exemplified in the \[S1\], our approach has a more general scope but is still compatible with the traditional view of actual causation, concerned only with actual causes of singleton occurrences. Nevertheless, even in the limited setting of singleton $v_t$, considering both causes and effects may be illuminating. Consider, for example, the transition shown in Fig. \[fig7\]A: by itself, the occurrence $\{A = 1\}$ raises the probability of $\{D = 1\}$ ($\rho_e(x_{t-1}, y_t) = \alpha_e(x_{t-1}, y_t) > 0$), which is a common determinant of being a cause in probabilistic accounts of (actual) causation [@Good1961; @Suppes1970; @Eells1983; @Pearl2009].
However, even in deterministic systems with multi-variate dependencies, the fact that an occurrence $c$, by itself, raises the probability of an occurrence $e$, does not necessarily determine that $E=e$ will actually occur [@Paul2013]. In the example of Fig. \[fig7\], $\{A = 1\}$ is neither necessary nor sufficient for $\{D = 1\}$. Here, this issue is resolved by acknowledging that both $\{A = 1\}$ and $\{C = 1\}$ have an actual effect on $\{D = 1\}$, whereas $\{C=1\}$ is identified as the (one) actual cause of $\{D=1\}$,[^15] in line with intuition [@Halpern2015]. In sum, an actual effect $x_{t-1} \rightarrow y_t$ does not imply the corresponding actual cause $x_{t-1} \leftarrow y_t$ and vice versa. Including both directions in the causal account may thus provide a more comprehensive explanation of “what happened” in terms of “what caused what”. Composition {#D3} ----------- The proposed framework of actual causation explicitly acknowledges that there may be high-order occurrences, which have genuine actual causes or actual effects. While multi-variate dependencies play an important role in complex distributed systems [@Mitchell1998; @Sporns2000; @Wolff2018], they are largely ignored in the actual causation literature. From a strictly informational perspective focused on predicting $y_t$ from $x_{t-1}$, one might be tempted to disregard such compositional occurrences and their actual effects, since they do not add predictive power. For instance, the actual effect of $\{AB = 11\}$ in the conjunction example of Fig. \[fig5\]B is informationally redundant, since $\{D = 1\}$ can be inferred (predicted) from $\{A = 1\}$ and $\{B = 1\}$ alone. From a causal perspective, however, such compositional causal links specify mechanistic constraints that would not be captured otherwise. It is these mechanistic constraints, and not predictive powers, that provide an explanation for “what happened” in the various transitions shown in Fig. \[fig5\] by revealing “what caused what”. 
In Fig. \[fig5\]C for example, the individual nodes $A$ and $B$ do not fulfill the most basic criterion for having an effect on the XNOR node $\{E = 1\}$ as $\rho_e(x_{t-1}, y_t) = 0$, whereas the second-order occurrence $\{AB = 11\}$ has the actual effect $\{E = 1\}$. In the conjunction example (Fig. \[fig5\]B), $\{A = 1\}$ and $\{B = 1\}$ both constrain the AND-gate $D$ in the same way, but the occurrence $\{AB = 11\}$ further raises the probability of $\{D = 1\}$ compared to the effect of each individual input. The presence of causal links specified by first-order occurrences does not exclude the second-order occurrence $\{AB = 11\}$ from having an additional effect on $\{D = 1\}$. To illustrate this with respect to both actual causes and actual effects, we can extend the XNOR example to a “double-biconditional” and consider the transition $v_{t-1} = \{ABC = 111\} \prec v_t = \{DE = 11\}$ (Fig. \[fig11\]). In the figure, both $D$ and $E$ are XNOR nodes that share one of their inputs (node $B$), and $\{AB = 11\} \leftarrow \{D = 1\}$ and $\{BC = 11\} \leftarrow \{E = 1\}$. As illustrated by the cause-repertoires shown in Fig. \[fig11\]B, and in accordance with $D$’s and $E$’s logic function (mechanism), the actual cause of $\{D = 1\}$ can be described as the fact that $A$ and $B$ were in the same state, and the actual cause of $\{E = 1\}$ as the fact that $B$ and $C$ were in the same state. In addition to these first-order occurrences, also the second-order occurrence $\{DE = 11\}$ has an actual cause $\{ABC = 111\}$, which can be described as the fact that all three nodes $A$, $B$, and $C$ were in the same state. Crucially, this fact is not captured by either the actual cause of $\{D = 1\}$, or by the actual cause of $\{E = 1\}$, but only by the constraints of the second-order occurrence $\{DE = 11\}$. 
On the other hand, the causal link $\{ABC = 111\} \leftarrow \{DE = 11\}$ cannot capture the fact that $\{AB = 11\}$ was the actual cause of $\{D = 1\}$ and $\{BC = 11\}$ was the actual cause of $\{E = 1\}$. Of note, in this example the same reasoning applies to the composition of high-order occurrences at $t-1$ and their actual effects. In sum, high-order occurrences capture multi-variate mechanistic dependencies between the occurrence’s variables that are not revealed by the actual causes and effects of their parts. Moreover, a high-order occurrence does not exclude lower-order occurrences over their parts, which specify their own actual causes and effects. In this way, the composition principle makes explicit that high-order and first-order occurrences all contribute to the explanatory power of the causal account. Integration {#D4} ----------- As discussed above, high-order occurrences can have actual causes and effects, but only if they are irreducible to their parts. This is illustrated in Fig. \[fig12\], in which a transition equivalent to our initial example in Fig. \[fig4\] (Fig. \[fig12\]A) is compared against a similar, but reducible transition (Fig. \[fig12\]C) in a different causal network. The two situations differ mechanistically: the OR and AND gate in Fig. \[fig12\]A receive common inputs from the same two nodes, while the OR and AND in Fig. \[fig12\]C have independent sets of inputs. Nevertheless, the actual causes and effects of all single-variable occurrences are identical in the two cases. In both transitions, $\{\text{OR} = 1\}$ is caused by its one input in state ‘1’, and $\{\text{AND} = 0\}$ is caused by its one input in state ‘0’. What distinguishes the two causal accounts is the additional causal link in Fig. \[fig12\]A, between the second-order occurrence $\{\text{(OR,AND)} = 10\}$ and its actual cause $\{AB = 10\}$. $\{\text{(OR,AND)} = 10\}$ raises the probability of both $\{AB = 10\}$ (in Fig. \[fig12\]A) and $\{AD = 10\}$ (in Fig. 
\[fig12\]C) compared to their unconstrained probability $\pi = 0.25$, and thus $\rho_c(x_{t-1}, y_t) > 0$ in both cases. Yet, only $\{AB = 10\} \leftarrow \{\text{(OR,AND)} = 10\}$ in Fig. \[fig12\]A is irreducible to its parts. This is shown by partitioning across the ${\operatorname{MIP}}$ with $\alpha_c(x_{t-1}, y_t) = 0.17$. This second-order occurrence thus specifies that the OR and AND gate in Fig. \[fig12\]A receive common inputs—a fact that would otherwise remain undetected. As described in the \[S2\], using the measure ${\mathcal{A}}({v_{t-1} \prec v_t})$ we can also quantify the extent to which the entire causal account ${\mathcal{C}}$ of a transition ${v_{t-1} \prec v_t}$ is irreducible. ${\mathcal{A}}({v_{t-1} \prec v_t}) = 0$ indicates that ${v_{t-1} \prec v_t}$ can either be decomposed into multiple transitions without causal links between them (Fig. \[fig12\]C), or includes variables without any causal role in the transition (e.g., Fig. \[fig5\]D). Exclusion {#D4.5} --------- That an occurrence can affect several variables (high-order effect), and that the cause of an occurrence can involve several variables (high-order cause) is uncontroversial [@Woodward2010]. Nevertheless, the possibility of multi-variate causes and effects is rarely addressed in a rigorous manner. Instead of one high-order occurrence, contingency-based approaches to actual causation typically identify multiple first-order occurrences as separate causes in these cases (see also \[S1\]). This is because some approaches only allow for first-order causes by definition (e.g., [@Weslake2015-WESAPT]), while other accounts include a minimality clause that does not consider causal strength and thus excludes virtually all high-order occurrences in practice (e.g., [@Halpern2005], but see [@Halpern2015]). Take the example of a simple conjunction $\text{AND} = A \land B$ in transition $\{AB = 11\} \prec \{\text{AND}=1\}$ (Fig. \[fig5\]B, and Fig. \[fig13\]). 
To our knowledge, all contingency-based approaches regard the first-order occurrences $\{A = 1\}$ and $\{B = 1\}$ as two separate causes of $\{\text{AND}=1\}$ in this case (but see [@Datta2016]), while we identify the second-order occurrence $\{AB = 11\}$ (the conjunction) as the one actual cause with $\alpha^{{\textrm{max}}}_c$. Given a particular occurrence $x_{t-1}$ in the transition ${v_{t-1} \prec v_t}$, we explicitly consider the whole power set of $v_t$ as candidate effects of $x_{t-1}$, and the whole power set of $v_{t-1}$ as candidate causes of a particular occurrence $y_t$ (Fig. \[fig13\]). However, the possibility of genuine multi-variate actual causes and effects requires a principled treatment of causal overdetermination. While most approaches to actual causation generally allow for both $\{A = 1\}$ and $\{B = 1\}$ to be actual causes of $\{\text{AND} = 1\}$, this seemingly innocent violation of the causal exclusion principle becomes prohibitive once $\{A = 1\}$, $\{B = 1\}$, and $\{AB = 11\}$ are recognized as candidate causes. In this case, either $\{AB = 11\}$ was the actual cause, or $\{A = 1\}$, or $\{B = 1\}$. Allowing for any combination of these occurrences, however, would be illogical. Within our framework, any occurrence can thus have at most one actual cause (or effect) within a transition—the minimal occurrence with $\alpha^{{\textrm{max}}}$ (Fig. \[fig13\]). Finally, cases of true, mechanistic overdetermination due to symmetries in the causal network are resolved by leaving the actual cause (effect) indetermined between all $x^*(y_t)$ with $\alpha^{{\textrm{max}}}_c$ (see Definitions \[def1\] and \[def2\]). In this way, the causal account provides a complete picture of the actual mechanistic constraints within a given transition. Intended scope and limitations {#D5} ------------------------------ The objective of many existing approaches to actual causation is to provide an account of people’s intuitive causal judgments [@Halpern2016]. 
For this reason, the literature on actual causation is largely rooted in examples involving situational narratives, such as “Billy and Suzy throw rocks at a bottle” [@Pearl2000; @Halpern2016], which are then compressed into a causal model to be investigated. Such narratives can serve as intuition pumps, but can also lead to confusion if important aspects of the story are omitted in the causal model applied to the example [@Hitchcock2007; @Paul2013; @Weslake2015-WESAPT] (see \[S1\]). Our objective is to provide a principled, quantitative causal account of “what caused what” within a fully specified (complete) model of a physical system of interacting elements. We purposely set aside issues regarding model selection or incomplete causal knowledge in order to formulate a rigorous theoretical framework applicable to any predetermined, dynamical causal network [@Pearl2010; @Halpern2016]. This puts the explanatory burden on the formal framework of actual causation, rather than on the adequacy of the model. In this setting, causal models should always be interpreted mechanistically, and time is explicitly taken into account. Rather than capturing people’s intuitions, the emphasis is placed on explanatory power and consistency (see also [@Paul2013]). With a proper formalism in place, future work should address to what extent and under which conditions the identified actual causes and effects generalize across possible levels of description (macro vs. micro causes and effects), or under incomplete knowledge (see also [@Rubenstein2017; @Marshall2018]). In addition, the examples examined in this study have been limited to direct causes and effects within transitions ${v_{t-1} \prec v_t}$ across a single system update. The explanatory power of the proposed framework was illustrated in several examples, which included paradigmatic problem cases involving overdetermination and prevention.
Yet, some prominent examples that raise issues of “preemption” or “causation by omission” have no direct equivalent in these basic types of physical causal models (see \[S1\]). While the approach can, in principle, identify and quantify counterfactual dependencies across $k > 1$ time steps by replacing $p_u(v_t \mid v_{t-1})$ with $p_u(v_t \mid v_{t-k})$ in Eqn. \[eqn1b\], for the purpose of tracing a causal chain back in time [@Datta2016], the role of intermediary occurrences remains to be investigated. Nevertheless, the present framework is unique in providing a general, quantitative, and principled approach to actual causation that naturally extends beyond simple, binary, and deterministic example cases to all mechanistic systems that can be represented by a set of transition probabilities as specified in Eqn. \[eqn1b\]. Accountability and causal responsibility ---------------------------------------- This work presents a step towards a quantitative causal understanding of “what is happening" in systems such as natural or artificial neural networks, computers, and other discrete, distributed dynamical systems. Such causal knowledge can be invaluable, for example, to identify the reasons for an erroneous classification by a convolutional neural network [@Szegedy2013], or the source of a protocol violation in a computer network [@Datta2015]. A notion of multi-variate actual causes and effects, in particular, is crucial for addressing questions of accountability, or sources of network failures [@Halpern2016] in distributed systems. A better understanding of the actual causal links that govern a system’s transitions should also improve our ability to effectively control the dynamical evolution of such systems and to identify adverse system states that would lead to unwanted system behaviors. 
Finally, a principled approach to actual causation in neural networks may illuminate the causes of an agent’s actions or decisions (biological or artificial) [@Economist2018; @WillKnight; @damasio2012neurobiology], including the causal origin of voluntary actions [@Haggard2008]. However, addressing the question “who caused what?”, as opposed to “what caused what”, implies modeling an agent with intrinsic causal power and intention [@Tononi2013; @Datta2015]. Future work will combine the present mechanistic framework for actual causation with a mechanistic account of autonomous, causal agents, based on the same set of principles [@Marshall; @Oizumi2014]. \[S1\] \[S2\] \[proof\] [^1]: [A formal definition of the term “occurrence” is provided below in the theory section, where it denotes a system (sub)state, i.e., a set of random variables in a particular state at a particular time. This corresponds to the general usage of the term “event” in the computer science and probability literature. The term “occurrence” was chosen instead to avoid philosophical baggage associated with the term “event”.]{} [^2]: A question regarding general causation in the context of AlphaGo would be, e.g., whether an opponent’s “moyo” (framework for establishing territory) typically causes AlphaGo to perform an invasion. [^3]: Note that counterfactuals here strictly refer to possible states within the system’s state space other than the actual one and not to abstract notions such as other “possible worlds” as in [@Lewis1973] (see also [@Pearl2000], Chapter 7). [^4]: The transition probabilities can, in principle, be determined by perturbing the system into all possible states while holding the exogenous variables fixed and observing the resulting transitions. Alternatively, the causal network can be constructed by experimentally identifying the input-output function of each element (its structural equation [@Pearl2000; @Janzing2013]).
Merely observing the system without experimental manipulation is insufficient to identify causal relationships in most situations. [^5]: Note that our approach generalizes, in principle, to system transitions across multiple time steps by considering the transition probabilities $p_u(v_t \mid v_{t-k})$ instead of $p_u(v_t \mid v_{t-1})$ in Eqn. \[eqn1b\]. While this practice would correctly identify counterfactual dependencies between $v_{t-k}$ and $v_{t}$, it ignores the actual states of intermediate time steps $(v_{t-k+1}, \dots, v_{t-1})$. As a consequence, the approach cannot, at present, address certain issues regarding causal transitivity across multiple paths, incomplete causal processes in probabilistic causal networks [@Schaffer2001], or causal dependencies in non-Markovian systems. [^6]: In general, $\pi(Y_t \mid x_{t-1}) \neq p(Y_t \mid x_{t-1})$. However, $\pi(Y_t \mid x_{t-1})$ is equivalent to $p(Y_t \mid x_{t-1})$ in the special case that all variables $Y_{i,t} \in Y_{t}$ are conditionally independent given $x_{t-1}$ (see also [@Janzing2013], Remark 1). This is the case, for example, if $X_{t-1}$ already includes all inputs (all parents) of $Y_t$, or determines $Y_t$ completely. [^7]: This requirement corresponds to the first clause (“AC1") of the Halpern and Pearl account of actual causation [@Halpern2005; @Halpern2015], that for $C=c$ to be an actual cause of $E=e$ both must actually happen in the first place. [^8]: In an information theoretic context, the formula $\log_2\left({p(x \mid y)}/{p(x)}\right)$ is also known as the “pointwise mutual information". While the pointwise mutual information is symmetric, the cause and effect ratios for an occurrence pair $(x_{t-1}, y_t)$ are not always identical as they are defined based on the product probabilities in Eqn. \[eqn2\] and \[eqn3\]. 
[^9]: In addition to the mutual information, $\rho_{e/c}$ is also related to information theoretic divergences that measure differences in probability distributions, such as the Kullback-Leibler divergence, which would correspond to an average of $\log_2\left({p(x \mid y)}/{p(x)}\right)$ over all states $x \in \Omega_X$ weighted by $p(x \mid y)$. Here, we do not include any such weighting factor, since the transition specifies which states actually occurred. [^10]: The minimality condition between overlapping candidate causes or effects is related to the third clause (“AC3") in the various Halpern-Pearl accounts of actual causation [@Halpern2005; @Halpern2015], which states that no subset of an actual cause should also satisfy the conditions for being an actual cause. See \[S1\]. [^11]: Under uncertainty about the causal model, or other practical considerations, the minimality condition could, in principle, be replaced by a more elaborate criterion, similar to, e.g., the Akaike information criterion (AIC) that weighs increases in causal strength as measured here against the number of variables included in the candidate cause or effect. [^12]: While indeterminism may play a fundamental role in physical causal models, the existing literature on actual causation largely focuses on deterministic problem cases. For ease of comparison, most causal networks analyzed in the following are thus deterministic, corresponding to prominent test cases. [^13]: Note however that this causal model is equivalent to an OR-gate, as can be seen by switching the state labels of $A$ from ‘0’ to ‘1’ and vice versa. The discussed transition would correspond to the case of one input to the OR-gate being ‘1’ and the other ‘0’. Since the OR-gate switches on (‘1’) in this case, the ‘0’ input has no effect and is not a cause. 
[^14]: One story accompanying this example is that “a prisoner dies either if $A$ loads $B$’s gun and $B$ shoots, or if $C$ loads and shoots his gun, $\ldots$ $A$ loads $B$’s gun, $B$ does not shoot, but $C$ does load and shoot his gun, so that the prisoner dies” [@Hopkins2003; @Halpern2016]. [^15]: Note that Pearl initially proposed maximizing the posterior probability $p(c \mid e)$ as a means of identifying the best (“most probable”) explanation for an occurrence $e$ ([@Pearl1988]; Chapter 5). This approach has later been criticized, among others, by Pearl himself ([@Pearl2000]; Chapter 7), as it had been formalized in purely probabilistic terms, lacking the notion of system interventions. Moreover, without a notion of irreducibility, as applied in the present framework, explanations based on $p(c \mid e)$ tend to include irrelevant variables [@Shimony1991; @Chajewska1997].
--- abstract: 'Given a Lévy process $L$, we consider the so-called statistical Skorohod embedding problem of recovering the distribution of an independent random time $T$ based on i.i.d. sample from $L_{T}.$ Our approach is based on the genuine use of the Mellin and Laplace transforms. We propose a consistent estimator for the density of $T,$ derive its convergence rates and prove their optimality. It turns out that the convergence rates heavily depend on the decay of the Mellin transform of $T.$ We also consider the application of our results to the problem of statistical inference for variance-mean mixture models and for time-changed Lévy processes.' author: - 'Denis Belomestny$^{1} $ and John Schoenmakers$^{2}$' bibliography: - 'est\_subbm\_bibliography.bib' title: Statistical Skorohod embedding problem and its generalizations --- *Keywords:* Skorohod embedding problem, Lévy process, Mellin transform, Laplace transform, variance mixture models, time-changed Lévy processes. Introduction ============ The so-called Skorohod embedding (SE) problem or Skorohod stopping problem was first stated and solved by Skorohod in 1961. This problem can be formulated as follows. \[Skorohod Embedding Problem\] For a given probability measure $\mu$ on $\mathbb{R},$ such that $\int|x| d\mu(x)<\infty$ and $\int x d\mu(x)=0,$ find a stopping time $T$ such that $B_{T}\sim\mu$ and $B_{T\wedge t}$ is a uniformly integrable martingale. The SE problem has recently drawn much attention in the literature, see, e.g., Obłój [@obloj2004skorokhod], where the list of references consists of more than 100 items. In fact, there is no unique solution to the SE problem and there are currently more than $20$ different solutions available. This means that from a statistical point of view, the SE problem is not well posed. In this paper we first study what we call the *statistical Skorohod embedding* (SSE) problem. \[Statistical Skorohod Embedding Problem\]\[stat\_skorohod\] Based on i.i.d.
sample $X_{1},\ldots, X_{n}$ from the distribution of $B_{T},$ consistently estimate the distribution of the random time $T\geq0,$ where $B$ and $T$ are assumed to be independent. The independence of $B$ and $T$ is needed to ensure the identifiability of the distribution of $T$ from the distribution of $B_T$. It is shown that the SSE problem is closely related to the multiplicative deconvolution problem. Using the Mellin transform technique, we construct a consistent estimator for the density of $T$ and derive its convergence rates in different norms. Furthermore, we show that the obtained rates are optimal in the minimax sense. The asymptotic normality of the proposed estimator is addressed as well. Next, we generalize the SSE problem by replacing the standard Brownian motion with a general Lévy process. The generalized SSE problem turns out to be much more involved and its solution requires some new ideas. Using a genuine combination of the Laplace and Mellin transforms, we construct a consistent estimator, derive its minimax convergence rates and prove that these rates basically coincide with the rates in the SSE problem. Some particular cases of the generalized statistical Skorohod embedding problem have already been studied in the literature. For example, the case of the stopped Poisson process was considered in the recent paper of Comte and Genon-Catalot [@comteadaptive]. Statistical Skorohod embedding problem ====================================== Let $B$ be a Brownian motion and let a random variable $T\geq0$ be independent of $B.$ We then have $$X:=B_{T}\sim\sqrt{T}\,B_{1} \label{scaling_prop_bm}$$ and the problem of reconstructing the distribution of $T$ is related to a multiplicative deconvolution problem. While for additive deconvolution problems the Fourier transform plays an important role, here we can conveniently use the Mellin transform.
Let $\xi$ be a non-negative random variable with a probability density $p_{\xi}$; then the *Mellin transform* of $p_{\xi}$ is defined via $$\mathcal{M}[p_{\xi}](z):=\mathbb{E}[\xi^{z-1}]=\int_{0}^{\infty}p_{\xi }(x)x^{z-1}\,dx \label{def}$$ for all $z\in\mathcal{S}_{\xi}$ with $\mathcal{S}_{\xi}=\bigl\{z\in \mathbb{C}:\mathbb{E}[\xi^{{\mathsf{Re}}z-1}]<\infty\bigr\}.$ Since $p_{\xi}$ is a density, it is integrable and so at least $\left\{ z\in\mathbb{C}:{\mathsf{Re}}(z)=1\right\} \subset\mathcal{S}_{\xi}.$ Under mild assumptions on the growth of $p_{\xi}$ near the origin, one obtains $$\left\{ z\in\mathbb{C}:0\leq a_{\xi}<{\mathsf{Re}}(z)<b_{\xi}\right\} \subset \mathcal{S}_{\xi}$$ for some $0\leq a_{\xi}<1\leq b_{\xi}.$ Then the Mellin transform (\[def\]) exists and is analytic in the strip $a_{\xi}<\operatorname{Re}z<b_{\xi}.$ For example, if $p_{\xi}$ is essentially bounded in a right-hand neighborhood of zero, we may take $a_{\xi}=0.$ The role of the Mellin transform in probability theory is mainly related to products of independent random variables: in fact, it is well known that the probability density of the product of two independent random variables is given by the Mellin convolution of the two corresponding densities. Due to (\[scaling\_prop\_bm\]), the SSE problem is closely connected to the Mellin convolution.
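As a quick numerical illustration of the definition (\[def\]), the Mellin transform of a Gamma density can be computed by quadrature and compared with the closed form $\Gamma(z+\alpha-1)/\Gamma(\alpha)$. The following minimal sketch (all helper names are our own) substitutes $x=e^{t}$, which turns (\[def\]) into a two-sided integral and tames the oscillation of $x^{{\mathrm{i}}\,{\mathsf{Im}}(z)}$ near the origin:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

def mellin(pdf, z, t_max=30.0):
    """Mellin transform M[pdf](z) = int_0^inf pdf(x) x^{z-1} dx,
    computed as int pdf(e^t) e^{t z} dt after substituting x = e^t."""
    g = lambda t: pdf(np.exp(t)) * np.exp(t * z)   # complex-valued integrand
    re = quad(lambda t: g(t).real, -t_max, t_max, limit=200)[0]
    im = quad(lambda t: g(t).imag, -t_max, t_max, limit=200)[0]
    return re + 1j * im

alpha = 2.0                                        # Gamma(alpha) density for xi
p_xi = lambda x: x ** (alpha - 1) * np.exp(-x) / Gamma(alpha)
z = 1.3 + 0.7j
numeric = mellin(p_xi, z)
exact = Gamma(z + alpha - 1) / Gamma(alpha)        # closed-form Mellin transform
```

The two values agree up to quadrature error; the same pattern works for any density whose strip of analyticity contains $\mathsf{Re}(z)$.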
Suppose that the random time $T$ has a density $p_{T}$ and that we may take $0\leq a_{T}<1\leq b_{T}.$ Since $\mathcal{S}_{|B_{1}|}\supset\left\{ z\in\mathbb{C}:{\mathsf{Re}}(z)>0\right\} ,$ we derive for $\max(2a_{T}-1,0)<{\mathsf{Re}}(z)<2b_{T}-1,$ $$\begin{gathered} \mathcal{M}[p_{|X|}](z)=\mathbb{E}\bigl[|B_{1}|^{z-1}\bigr]\mathbb{E}\bigl[T^{(z-1)/2}\bigr]\\ =\mathcal{M}[p_{|B_{1}|}](z)\mathcal{M}[p_{T}]((z+1)/2)=\frac{2^{(z-1)/2}}{\sqrt{\pi}}\Gamma(z/2)\mathcal{M}[p_{T}]((z+1)/2).\end{gathered}$$ As a result $$\mathcal{M}[p_{T}](z)=\frac{\sqrt{\pi}}{2^{z-1}}\frac{\mathcal{M}[p_{|X|}](2z-1)}{\Gamma(z-1/2)},\text{ \ \ }\max(a_{T},1/2)<{\mathsf{Re}}(z)<b_{T}$$ and the Mellin inversion formula yields $$\begin{aligned} p_{T}(x) & =\frac{1}{2\pi}\int_{\gamma-{{\mathrm{i}}}\infty}^{\gamma+{{\mathrm{i}}}\infty }x^{-\gamma-{{\mathrm{i}}} v}\mathcal{M}[p_{T}](\gamma+{{\mathrm{i}}} v)\,dv\text{ }\label{inverse_mellin}\\ & =\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}x^{-\gamma-{{\mathrm{i}}} v}\frac{\mathcal{M}[p_{|X|}](2\left( \gamma+{{\mathrm{i}}} v\right) -1)}{2^{\gamma+{{\mathrm{i}}} v}\Gamma(\gamma+{{\mathrm{i}}} v-1/2)}\,dv\text{\ \ for \ }\max(a_{T},1/2)<\gamma <b_{T},\text{ \ \ }x>0.\nonumber\end{aligned}$$ Furthermore, the Mellin transform of $p_{|X|}$ can be directly estimated from the data $X_{1},\ldots,X_{n}$ via the empirical Mellin transform: $$\mathcal{M}_{n}[p_{|X|}](z):=\frac{1}{n}\sum_{k=1}^{n}|X_{k}|^{z-1},\text{ \ \ }{\mathsf{Re}}(z)>1/2, \label{melest}$$ where the condition ${\mathsf{Re}}(z)>1/2$ guarantees that the variance of the estimator (\[melest\]) is finite. Note, however, that the integral in (\[inverse\_mellin\]) may fail to exist if we replace $\mathcal{M}[p_{|X|}]$ by $\mathcal{M}_{n}[p_{|X|}].$ We therefore need to regularize the inverse Mellin operator.
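Before any inversion, the identity $\mathcal{M}[p_{T}](z)=\sqrt{\pi}\,2^{1-z}\,\mathcal{M}[p_{|X|}](2z-1)/\Gamma(z-1/2)$ derived above can be sanity-checked by Monte Carlo: simulate $T$, set $X=\sqrt{T}\,B_{1}$, and plug the empirical Mellin transform (\[melest\]) into the right-hand side. A sketch for $T\sim\mathrm{Gamma}(2)$, where $\mathcal{M}[p_{T}](z)=\Gamma(z+1)$ (the sample size and evaluation point $z$ are arbitrary choices of ours):

```python
import numpy as np
from scipy.special import gamma as Gamma

rng = np.random.default_rng(1)
n = 200_000
T = rng.gamma(shape=2.0, scale=1.0, size=n)    # mixing time, M[p_T](z) = Gamma(z+1)
X = np.sqrt(T) * rng.standard_normal(n)        # X = B_T via the scaling property

z = 1.5                                        # any z with Re(2z - 1) > 1/2 works
emp = np.mean(np.abs(X) ** (2 * z - 2))        # empirical M_n[p_{|X|}](2z - 1)
lhs = np.sqrt(np.pi) / 2 ** (z - 1) * emp / Gamma(z - 0.5)
rhs = Gamma(z + 1.0)                           # exact M[p_T](z)
```

Up to Monte Carlo error, `lhs` reproduces `rhs`; the variance of the empirical Mellin transform blows up as $\mathsf{Re}(2z-1)\downarrow1/2$, which is exactly why the inversion below must be regularized.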
To this end, let us consider a kernel $K(\cdot)\geq0$ supported on $[-1,1]$ and a sequence of bandwidths $h_{n}>0$ tending to $0$ as $n\rightarrow\infty.$ Then we define, in view of (\[melest\]), for some $\max(a_{T},3/4)<\gamma<b_{T},$ $$\label{p_Tn_bm}p_{T,n}(x):=\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty }x^{-\gamma-{{\mathrm{i}}} v}K(vh_{n})\frac{\mathcal{M}_{n}[p_{|X|}](2(\gamma+{{\mathrm{i}}} v)-1)}{2^{\gamma+{{\mathrm{i}}} v}\Gamma(\gamma-1/2+{{\mathrm{i}}} v)}\,dv.$$ For our convergence analysis, we will henceforth take the simplest kernel $$K(y)=1_{[-1,1]}(y),$$ but note that in principle other kernels may be considered as well. The next theorem states that $p_{T,n}$ converges to $p_{T}$ at a polynomial rate, provided the Mellin transform of $p_{T}$ decays exponentially fast. We shall use throughout the notation $A\lesssim B$ if $A$ is bounded by a constant multiple of $B$, independently of the parameters involved, that is, in the Landau notation $A=O(B)$. \[sep\_conv\_rates\] For any $\beta>0,$ $\gamma>0$ and $L>0$, introduce the class of functions $$\mathcal{C}(\beta,\gamma,L)=\left\{ f:\int_{-\infty}^{\infty}\left\vert \mathcal{M}[f](\gamma+{{\mathrm{i}}} v)\right\vert e^{\beta\left\vert v\right\vert }\,dv<L\right\} .$$ Assume that $p_{T}\in\mathcal{C}(\beta,\gamma,L)$ for some $\beta>0,$ $L>0$ and $$\max((a_{T}+1)/2,3/4)<\gamma<b_{T}. \label{sg}$$ Then for some constant $C_{\gamma,L}$ depending on $\gamma$ and $L$ only, it holds $$\sup_{x\geq0}\mathbb{E}\Bigl[\bigl\{x^{\gamma}|p_{T}(x)-p_{T,n}(x)|\bigr\}^{2}\Bigr]\leq C_{\gamma,L}\times\begin{cases} e^{-2\beta/h_{n}}+\frac{1}{n}h_{n}^{2\left( \gamma-1\right) }e^{\pi/h_{n}}, & \gamma<1,\\ e^{-2\beta/h_{n}}+\frac{1}{n} e^{\pi/h_{n}}, & \gamma\geq1.
\end{cases} \label{ms}$$ By next choosing $$h_{n}= \begin{cases} \frac{\pi+2\beta}{\log n-2(1-\gamma)\log\log n}, & \gamma<1,\\ (\pi+2\beta)/\log n, & \gamma\geq1, \end{cases} \label{hn}$$ we arrive at the rate $$\label{rates_bm_pol}\sup_{x\geq0}\sqrt{\mathbb{E}\Bigl[\bigl\{x^{\gamma}|p_{T}(x)-p_{T,n}(x)|\bigr\}^{2}\Bigr]}\lesssim\begin{cases} n^{-\frac{\beta}{\pi+2\beta}}\log^{\frac{2(1-\gamma)\beta}{\pi+2\beta}}n, & \gamma<1,\\ n^{-\frac{\beta}{\pi+2\beta}}, & \gamma\geq1 \end{cases}$$ as $n\rightarrow\infty.$ With a little bit more effort one can prove the strong uniform convergence of the estimate $p_{T,n}.$ \[uniform\_upper\_bound\] Under the conditions of Theorem \[sep\_conv\_rates\] and for $\gamma<1$ $$\begin{aligned} \sup_{p_{T}\in\mathcal{C}(\beta,\gamma,L)}\sup_{x\geq0}\bigl\{x^{\gamma }|p_{T,n}(x)-p_{T}(x)|\bigr\}=O_{a.s.}\left( n^{-\frac{\beta}{\pi+2\beta}}\log^{\frac{2(1-\gamma)\beta}{\pi+2\beta}}n\right) .\end{aligned}$$ Let us now turn to some examples. \[exmp\_gamma\] Consider the class of Gamma densities $$\begin{aligned} p_{T}(x;\alpha)=\frac{x^{\alpha-1}\cdot e^{-x}}{\Gamma(\alpha)}, \quad x\geq0\end{aligned}$$ for $\alpha>0.$ Since $$\begin{aligned} \mathcal{M}[p_{T}](z)=\frac{\Gamma(z+\alpha-1)}{\Gamma(\alpha)}, \quad\mathsf{Re}(z)>0,\end{aligned}$$ we derive that $p_{T}\in\mathcal{C}(\beta,\gamma,L)$ for all $0<\beta<\pi/2,$ $\gamma>0$ and some $L=L(\beta,\gamma)$ due to the asymptotic properties of the Gamma function (see Lemma \[lemma\_gamma\_asymp\] in Appendix).
As a result, Theorem \[sep\_conv\_rates\] implies $$\begin{aligned} \sup_{x\geq0}\sqrt{\mathbb{E}\Bigl[\bigl\{x^{\gamma}|p_{T}(x)-p_{T,n}(x)|\bigr\}^{2}\Bigr]}\lesssim n^{-\rho},\quad n\rightarrow\infty\end{aligned}$$ for any $\rho<1/4,$ provided $\gamma\geq1.$ Let us look at the family of densities $$\begin{aligned} p_{T}(x;q)=\frac{q\sin(\pi/q)}{\pi}\frac{1}{1+x^{q}}, \quad q\geq2,\quad x\geq0.\end{aligned}$$ We have $$\begin{aligned} \mathcal{M}[p_{T}](z)=\frac{\sin(\pi/q)}{\sin(\pi z/q)},\quad0<\mathsf{Re}(z)<q.\end{aligned}$$ Therefore, $p_{T}\in\mathcal{C}(\beta,\gamma,L)$ for all $0<\beta<\pi/q,$ $\gamma>0$ and $L=L(\beta,\gamma),$ implying $$\begin{aligned} \sup_{x\geq0}\sqrt{\mathbb{E}\Bigl[\bigl\{x^{\gamma}|p_{T}(x)-p_{T,n}(x)|\bigr\}^{2}\Bigr]}\lesssim n^{-\rho},\quad n\rightarrow\infty\end{aligned}$$ for any $\rho<1/(2+q),$ provided $\gamma\geq1.$ If $\mathcal{M}[p_{T}]$ decays polynomially fast, we get the following result. \[sep\_log\_rates\] Consider the class of functions$$\mathcal{D}(\beta,\gamma,L)=\left\{ f:\int_{-\infty}^{\infty}\left\vert \mathcal{M}[f](\gamma+{{\mathrm{i}}} v)\right\vert (1+|v|^{\beta})\,dv<L\right\} ,$$ and assume that $p_{T}\in\mathcal{D}(\beta,\gamma,L)$ for some $\beta>0$ and $L>0$ and $\gamma$ as in (\[sg\]). Then for some constant $D_{\gamma,L},$ it holds $$\sup_{x\geq0}\mathbb{E}\Bigl[\bigl\{x^{\gamma}|p_{T}(x)-p_{T,n}(x)|\bigr\}^{2}\Bigr]\leq D_{\gamma,L}\times\begin{cases} h_{n}^{2\beta}+\frac{1}{n}h_{n}^{2\left( \gamma-1\right) }e^{\pi/h_{n}}, & \gamma<1,\\ h_{n}^{2\beta}+\frac{1}{n} e^{\pi/h_{n}}, & \gamma\geq1. \end{cases} \label{ms1}$$ By choosing $$h_{n}=\frac{\pi}{\log n-2\left( \beta+1-\gamma\right) \log\log n}, \label{hn1}$$ if $\gamma<1$ and $$h_{n}=\frac{\pi}{\log n-2\beta\log\log n} \label{hn2}$$ for $\gamma\geq1,$ we arrive at $$\sup_{x\geq0}\sqrt{\mathbb{E}\Bigl[\bigl\{x^{\gamma}|p_{T}(x)-p_{T,n}(x)|\bigr\}^{2}\Bigr]}\lesssim\log^{-\beta}(n),\quad n\rightarrow\infty. 
\label{arr1}$$ Due to the relation $$\begin{aligned} \mathcal{M}[p_{T}](\gamma+{{\mathrm{i}}} v)=\mathcal{F}[e^{\gamma\cdot}p_{T}(e^{\cdot })](v),\quad a_{T}<\gamma<b_{T},\end{aligned}$$ the conditions $p_{T}\in\mathcal{C}(\beta,\gamma,L)$ and $p_{T}\in \mathcal{D}(\beta,\gamma,L)$ are closely related to the smoothness properties of the function $e^{\gamma x}p_{T}(e^{x}).$ For example, if $p_{T}\in\mathcal{C}(\beta,\gamma,L),$ then $$\begin{aligned} \int_{-\infty}^{\infty}\left\vert \mathcal{F}[e^{\gamma\cdot}p_{T}(e^{\cdot })](v)\right\vert e^{\beta\left\vert v\right\vert }\,dv<L\end{aligned}$$ and the function $e^{\gamma x}p_{T}(e^{x})$ is called supersmooth in this case; see Meister [@meister2009deconvolution] for a discussion of different smoothness classes in the context of additive deconvolution problems.

|      | $\mathcal{C}(\beta,\gamma,L)$, $\gamma<1$ | $\mathcal{C}(\beta,\gamma,L)$, $\gamma\geq1$ | $\mathcal{D}(\beta,\gamma,L)$ |
|------|:-----------------------------------------:|:--------------------------------------------:|:-----------------------------:|
| rate | $n^{-\frac{\beta}{\pi+2\beta}}\log^{\frac{2(1-\gamma)\beta}{\pi+2\beta}}(n)$ | $n^{-\frac{\beta}{\pi+2\beta}}$ | $\log^{-\beta}(n)$ |

  : Convergence rates for the classes $\mathcal{C}(\beta,\gamma,L)$ and $\mathcal{D}(\beta,\gamma,L)$.[]{data-label="RCS"}

The rates of Theorem \[sep\_conv\_rates\] and Theorem \[sep\_log\_rates\] summarized in Table \[RCS\] are in fact optimal (up to a logarithmic factor) in the minimax sense for the classes $\mathcal{C}(\beta,\gamma,L)$ and $\mathcal{D}(\beta,\gamma,L),$ respectively. \[low\_bound\] Fix some $\beta>1.$ There are $\varepsilon>0$ and $x>0$ such that $$\begin{aligned} & \liminf_{n\to\infty}\inf_{p_{n}}\sup_{p_{T}\in\mathcal{C}(\beta,\gamma ,L)}{\operatorname{P}}^{\otimes n}_{p_{T}}\Bigl(|p_{T}(x)-p_{n}(x)|\ge\varepsilon\, n^{-\frac{\beta}{\pi+2\beta}}\log^{-\rho}(n)\Bigr)>0,\\ & \liminf_{n\to\infty}\inf_{p_{n}}\sup_{p_{T}\in\mathcal{D}(\beta,\gamma ,L)}{\operatorname{P}}^{\otimes n}_{p_{T}}\Bigl(|p_{T}(x)-p_{n}(x)|\ge\varepsilon\log ^{-\beta}(n)\Bigr)>0,\end{aligned}$$ for some $\rho>0,$ where the infimum is taken over all estimators (i.e.
all measurable functions of $X_{1},\ldots,X_{n}$) of $p_{T}$ and ${\operatorname{P}}^{\otimes n}_{p_{T}}$ is the distribution of the i.i.d. sample $X_{1},\ldots, X_{n}$ with $X_{1}\sim B_{T}$ and $T\sim p_{T}.$ Asymptotic normality -------------------- In the case of $K(v)=1_{[-1,1]}(v),$ the estimate $p_{T,n}(x)$ can be written as$$\begin{aligned} p_{T,n}(x) & :=\frac{1}{\sqrt{\pi}}\int_{-1/h_{n}}^{1/h_{n}}\left[ \frac {1}{n}\sum_{k=1}^{n}|X_{k}|^{2(\gamma+{\mathrm{i}}v-1)}\right] \frac{x^{-\gamma-{\mathrm{i}}v}}{2^{\gamma+{\mathrm{i}}v}\Gamma(\gamma-1/2+{\mathrm{i}}v)}\,dv\\ & =\frac{1}{n}\sum_{k=1}^{n}Z_{n,k},\end{aligned}$$ where $$Z_{n,k}:=\frac{1}{\sqrt{\pi}}\int_{-1/h_{n}}^{1/h_{n}}|X_{k}|^{2(\gamma+{\mathrm{i}}v-1)}\frac{x^{-\gamma-{\mathrm{i}}v}}{2^{\gamma+{\mathrm{i}}v}\Gamma(\gamma-1/2+{\mathrm{i}}v)}\,dv.$$ The following theorem holds. \[asymp\_norm\] Suppose that $$\begin{aligned} \left. \frac{d}{du}\left( \Gamma(2\gamma-3/2+{\mathrm{i}}u)\mathcal{M}[p_{T}](2\gamma-1+{\mathrm{i}}u)\right) \right| _{u=0}\neq0,\end{aligned}$$ and $$\begin{aligned} \int_{-\infty}^{\infty}\left\vert \mathcal{M}[p_{T}](2\gamma-1+{\mathrm{i}}u)\right\vert du <\infty,\end{aligned}$$ then $$\begin{aligned} \rho^{-1}_{n}\bigl(p_{T,n}(x)-\mathbb{E}[p_{T,n}(x)]\bigr)\overset {\mathcal{D}}{\longrightarrow} \mathcal{N}(0,\sigma^{2})\end{aligned}$$ for some $\sigma^{2}>0,$ where $\rho_{n}=n^{-1/2} h_{n}^{2\left( \gamma-1\right) }\log^{-2}\left( 1/h_{n}\right) \exp\left[ \pi/h_{n}\right](1+o(1))$ and $h_{n}\asymp c\log^{-1}(n)$ for some $c>0,$ as $n\to \infty.$ Generalised statistical Skorohod embedding problem {#seq: gsse} ================================================== In this section we generalize the statistical Skorohod embedding problem to the case of Lévy processes. In particular, we consider the following problem. \[stat\_skorohod\_gen\] Based on i.i.d.
sample $X_{1},\ldots, X_{n}$ from the distribution $\mu,$ estimate the distribution of the random time $T\geq0$ independent of a Lévy process $L$ such that $L_{T}\sim\mu.$ Note that the situation here is much more difficult than before, since Lévy processes do not, in general, have the scaling property (\[scaling\_prop\_bm\]). Hence the approach based on the Mellin deconvolution technique can no longer be applied. Let $(L_{t},\,t\geq0)$ be a Lévy process with the triplet $(\mu,\sigma^{2},\nu).$ Define a curve in $\mathbb{C}$ $$\ell:=\Bigl\{\mathsf{Re}(\psi(u))+{{\mathrm{i}}}\,\mathsf{Im}(\psi(u)),\,u\in \mathbb{R}_{+}\Bigr\},$$ where $\psi(u)=-t^{-1}\log(\mathbb{E}(\exp({\mathrm{i}}uL_{t}))).$ Our approach to reconstructing the distribution of $T$ is based on the simple identity $$\begin{aligned} \label{cfx_main}\mathcal{F}[p_{X}](\lambda)=\mathbb{E}[\exp({{\mathrm{i}}}\lambda L_{T})]=\mathcal{L}[p_{T}](\psi(\lambda)).\end{aligned}$$ It is well known that the Laplace transform $\mathcal{L}[p_{T}](u)$ is analytic in the domain $\bigl\{\mathsf{Re}(u)>0\bigr\}.$ ![A typical shape of the contour $\ell$.[]{data-label="fig:ell"}](l_contour.pdf){width="65.00000%"} The following proposition shows that the object $\mathcal{M}[\mathcal{L}[p_{T}]](z)$ is well defined and that it can be related to the Fourier transform of $p_{X},$ which in turn can be estimated from the data. \[arc\] Let us assume that $\mathsf{Re}(\psi(u))\rightarrow\infty$ as $u\rightarrow\infty$ and that$$\label{ImReCond}\frac{\left\vert \mathsf{Im}(\psi(u))\right\vert }{\mathsf{Re}(\psi(u))}<A<\infty$$ for all $u>0$ and some $A>0.$ Moreover, let $p_{T}$ be (essentially) bounded.
Then, for $0<{\mathsf{Re}}(z)<1$ it holds that $$\mathcal{M}[\mathcal{L}[p_{T}]](z)=\int_{0}^{\infty}u^{z-1}\mathcal{L}[p_{T}](u)du=\int_{\ell}w^{z-1}\mathcal{L}[p_{T}](w)dw.$$ The condition (\[ImReCond\]) is fulfilled if, for example, the diffusion part of $L$ is nonzero or if $\psi$ is real and $\psi(u)\to\infty$ as $u\to\infty.$ Under the assumptions of Proposition \[arc\] we may write $$\mathcal{M}[\mathcal{L}[p_{T}]](z)=\int_{0}^{\infty}\left[ \psi (\lambda)\right] ^{z-1}\mathcal{L}[p_{T}](\psi(\lambda))\psi^{\prime}(\lambda)d\lambda,$$ where $\mathcal{L}[p_{T}](\psi(\lambda))=\mathcal{F}[p_{X}](\lambda)$ due to (\[cfx\_main\]). On the other hand, one may straightforwardly derive $$\mathcal{M}[\mathcal{L}[p_{T}]](z)=\mathcal{M}[p_{T}](1-z)\Gamma (z),\quad0<\mathsf{Re}(z)<1,$$ i.e., $$\mathcal{M}[p_{T}](z)=\frac{\mathcal{M}[\mathcal{L}[p_{T}]](1-z)}{\Gamma (1-z)}=\frac{\int_{0}^{\infty}\left[ \psi(\lambda)\right] ^{-z}\mathcal{F}[p_{X}](\lambda)\psi^{\prime}(\lambda)d\lambda}{\Gamma(1-z)},\quad0<\mathsf{Re}(z)<1. \label{FpX}$$ In principle, one can now replace the Fourier transform of $p_{X}$ in (\[FpX\]) by its empirical counterpart based on the data. However, in this case we need to regularize the estimate of $\mathcal{M}[p_{T}](z)$ to perform the inverse Mellin transform. To this end, consider the approximation $$\mathcal{M}[\mathcal{L}[p_{T}]](z)\approx\frac{1}{n}\sum_{k=1}^{n}\int _{0}^{A_{n}}\left[ \psi(\lambda)\right] ^{z-1}e^{{{\mathrm{i}}}X_{k}\lambda}\psi^{\prime}(\lambda)d\lambda=:\frac{1}{n}\sum_{k=1}^{n}\Phi_{n}(z,X_{k})$$ and define, in view of (\[FpX\]), $$\begin{aligned} \label{pTn_gen}p_{T,n}(x):=\frac{1}{2\pi n}\sum_{k=1}^{n}\int_{-U_{n}}^{U_{n}}\frac{\Phi_{n}(1-\gamma-{{\mathrm{i}}}v,X_{k})}{\Gamma(1-\gamma-{{\mathrm{i}}}v)}x^{-\gamma-{\mathrm{i}}v}dv,\text{ \ \ for \ }0<\gamma<1,\end{aligned}$$ where $U_{n},A_{n}\rightarrow\infty$ in a suitable way as $n\rightarrow \infty.$ Note that in many cases the function $\Phi_{n}$ can be found in closed form.
For example, consider the case of a subordinated stable Lévy process with $\psi(\lambda)=|\lambda|^{\alpha}.$ It then holds for ${\mathsf{Re}}(z)>0,$ $$\begin{aligned} \Phi_{n}(z,x) & =\int_{0}^{A_{n}}\left[ \psi(\lambda)\right] ^{z-1}e^{{{\mathrm{i}}}x\lambda}\psi^{\prime}(\lambda)d\lambda\\ & =\alpha\int_{0}^{A_{n}}\lambda^{\alpha(z-1)}e^{{{\mathrm{i}}}x\lambda}\lambda ^{\alpha-1}\,d\lambda\\ & =\alpha\int_{0}^{A_{n}}\lambda^{\alpha z-1}e^{{{\mathrm{i}}}x\lambda}d\lambda\\ & =\frac{A_{n}^{\alpha z}}{z}\,{}_{1}F_{1}(\alpha z;1+\alpha z;{{\mathrm{i}}}A_{n}x),\end{aligned}$$ where ${}_{1}F_{1}$ is Kummer’s confluent hypergeometric function. In the next two theorems we prove a remarkable result showing that the estimate $p_{T,n}(x)$ converges to $p_{T}(x)$ at the same rate (up to a logarithmic factor in the polynomial case) as in the case of the time-changed Brownian motion. \[polR\] Suppose that $\psi$ satisfies the conditions of Proposition \[arc\], and that moreover $\int_{\{|x|>1\}}|x|\nu(dx)<\infty.$ Furthermore suppose that there is a $1/2<\gamma<1$ such that $p_{T}\in\mathcal{C}(\beta,\gamma,L)$ (cf. Theorem \[sep\_conv\_rates\]) for some $\beta>0,$ and $$\int_{{1}}^{\infty}\frac{1}{\lambda^{2\gamma-1-{\varepsilon}}}\left\vert \mathcal{F}[p_{X}](\lambda)\right\vert d\lambda<\infty, \label{ae}$$ for some ${\varepsilon}>0.$ Then under the choice $$A_{n}=n^{\frac{1}{4\left( 1-\gamma\right) +2{\varepsilon}}} \label{ch1}$$ and$$U_{n}=\frac{{\varepsilon}}{\left( 2-2\gamma+{\varepsilon}\right) \left( 2\beta +\pi\right) }\log n-\frac{2\gamma-1}{2\beta+\pi}\log\log n, \label{ch2}$$ we get $$\sup_{x\geq0}\sqrt{\mathrm{E}\left[ x^{2\gamma}\left\vert p_{T,n}(x)-p_{T}(x)\right\vert ^{2}\right] }\lesssim n^{-\frac{\beta}{2\beta+\pi }\frac{{\varepsilon}}{2\left( 1-\gamma\right) +{\varepsilon}}}\log^{\beta\frac {2\gamma-1}{2\beta+\pi}}n,\quad n\rightarrow\infty. \label{arr_levy}$$ Thus for $\gamma\to1$ or ${\varepsilon}\to0,$ we recover the rates of Theorem \[sep\_conv\_rates\] up to a logarithmic factor.
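The closed form for $\Phi_{n}$ in the stable case can be verified numerically. The sketch below implements Kummer’s function ${}_{1}F_{1}$ by its Taylor series (a minimal helper of our own, adequate for the moderate arguments used here) and compares $\frac{A^{\alpha z}}{z}\,{}_{1}F_{1}(\alpha z;1+\alpha z;{\mathrm{i}}Ax)$ with a direct trapezoidal evaluation of $\alpha\int_{0}^{A}\lambda^{\alpha z-1}e^{{\mathrm{i}}x\lambda}d\lambda$; the parameter values are illustrative only:

```python
import numpy as np

def hyp1f1(a, b, w, terms=80):
    """Kummer's 1F1(a; b; w) via its Taylor series
    sum_k (a)_k / (b)_k * w^k / k!  (fine for moderate |w|)."""
    s = term = 1.0 + 0.0j
    for k in range(terms):
        term *= (a + k) / (b + k) * w / (k + 1)
        s += term
    return s

alpha, z, x, A = 1.5, 0.8, 0.7, 3.0            # illustrative values with Re(z) > 0
closed = A ** (alpha * z) / z * hyp1f1(alpha * z, 1 + alpha * z, 1j * A * x)

lam = np.linspace(0.0, A, 200_001)             # direct quadrature of the integral
f = alpha * lam ** (alpha * z - 1) * np.exp(1j * x * lam)
direct = (f.sum() - 0.5 * (f[0] + f[-1])) * (lam[1] - lam[0])   # trapezoidal rule
```

The two complex numbers agree to several digits, confirming that $\Phi_{n}$ need not be computed by quadrature at every $(z,X_{k})$ pair.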
\[rem\_four\_cond\] Since $$\begin{aligned} \int_{1}^{\infty}\frac{1}{\lambda^{2\gamma-1-{\varepsilon}}}\left\vert \mathcal{F}[p_{X}](\lambda)\right\vert d\lambda=\int_{1}^{\infty}\frac {1}{\lambda^{2\gamma-1-{\varepsilon}}}\left\vert \mathcal{L}[p_{T}](\psi (\lambda))\right\vert d\lambda,\end{aligned}$$ the condition (\[ae\]) is, for example, fulfilled for some ${\varepsilon}>0$ if ${\mathsf{Re}}[\psi(\lambda)]\gtrsim\lambda$ for $\lambda\to+\infty$ and $p_{T}$ is of bounded variation with $p_{T}(0)<\infty.$ In the case $p_{T}\in\mathcal{D}(\beta,\gamma,L),$ we get exactly the same logarithmic rates as in Theorem \[sep\_log\_rates\]. \[logR\] Suppose that $\psi$ and $\gamma$ are as in Theorem \[polR\], and that now $p_{T}\in\mathcal{D}(\beta,\gamma,L)$ (cf. Theorem \[sep\_log\_rates\]) for some $\beta>0.$ Further suppose that (\[ae\]) holds. Then under the choice $$A_{n}=n^{\frac{1}{4\left( 1-\gamma\right) +2{\varepsilon}}} \label{ch1l}$$ (hence the same as in Theorem \[polR\]) and $$U_{n}=\frac{{\varepsilon}}{\pi\left( 2-2\gamma+{\varepsilon}\right) }\log n-\frac{2\beta+2\gamma-1}{\pi}\log\log n, \label{ch2l}$$ we get $$\sup_{x\geq0}\sqrt{\mathrm{E}\left[ x^{2\gamma}\left\vert p_{T,n}(x)-p_{T}(x)\right\vert ^{2}\right] }\lesssim\log^{-\beta}(n),\quad n\rightarrow\infty.$$ #### Discussion The rates in Theorem \[polR\] and Theorem \[logR\] are optimal in the minimax sense, since they basically coincide (up to a logarithmic factor) with the rates in Theorem \[sep\_conv\_rates\] and Theorem \[sep\_log\_rates\], respectively. As can be seen from the proof of Theorem \[low\_bound\] and Remark \[rem\_four\_cond\], the lower bounds continue to hold under the additional assumption (\[ae\]). Let us also stress that the class $\mathcal{C}(\beta,\gamma,L)$ is quite large and contains well-known families of distributions such as the Gamma, Beta and Weibull families. It follows from Theorem \[polR\] that for all these families our estimator $p_{T,n}$ converges at a polynomial rate.
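The identity (\[cfx\_main\]), on which the whole generalized construction rests, can itself be probed by simulation. For a symmetric $\alpha$-stable $L$ with $\psi(\lambda)=|\lambda|^{\alpha}$ and $T\sim\mathrm{Gamma}(2)$ (so that $\mathcal{L}[p_{T}](u)=(1+u)^{-2}$), self-similarity provides the sampling shortcut $L_{T}\overset{d}{=}T^{1/\alpha}L_{1}$ — a standard fact we use only for simulation, not part of the estimator. A hedged sketch relying on `scipy.stats.levy_stable`:

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(2)
n = 50_000
alpha = 1.5                                    # stable index, psi(lambda) = |lambda|^alpha
T = rng.gamma(shape=2.0, scale=1.0, size=n)    # Gamma(2) time: L[p_T](u) = (1 + u)^(-2)
S = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)   # symmetric alpha-stable L_1
X = T ** (1 / alpha) * S                       # X ~ L_T by self-similarity of L

lam = 1.0
emp_cf = np.mean(np.exp(1j * lam * X))         # empirical F[p_X](lambda)
exact = (1.0 + abs(lam) ** alpha) ** -2        # L[p_T](psi(lambda))
```

Up to Monte Carlo error, the empirical characteristic function matches $\mathcal{L}[p_{T}](\psi(\lambda))$, which is exactly what (\[pTn\_gen\]) exploits.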
Applications ============ Estimation of the variance-mean mixture models ---------------------------------------------- The variance-mean mixture of the normal distribution is defined as $$\begin{aligned} p(x)=\int_{0}^{\infty}(2\pi\sigma^{2} u)^{-1/2}\exp(-(x-\mu u)^{2}/(2\sigma^{2} u))\, g(u) du,\end{aligned}$$ where $g(u)$ is a mixing density on $\mathbb{R}_{+}.$ The variance-mean mixture models play an important role in both the theory and the practice of statistics. In particular, such mixtures appear as limit distributions in asymptotic theory for dependent random variables and they are useful for modeling data stemming from heavy-tailed and skewed distributions, see, e.g. [@barndorff1982normal] and [@bingham2002semi]. As can be easily seen, the variance-mean mixture distribution $p$ coincides with the distribution of the random variable $\sigma W_{T}+\mu T, $ where $T$ is the random variable with density $g,$ which is independent of $W.$ The class of variance-mean mixture models is rather large. For example, the class of the normal variance mixture distributions ($\mu=0$) can be described as follows: $p$ is the density of a normal variance mixture (equivalently $p$ is the density of $W_{T}$) if and only if $\mathcal{F}[p](\sqrt{u})$ is a completely monotone function in $u.$ The problem of statistical inference for variance-mean mixture models has already been considered in the literature. For example, Korsholm [@korsholm2000semiparametric] proved the consistency of the non-parametric maximum likelihood estimator for the parameters $\sigma$ and $\mu,$ $g$ being treated as an infinite dimensional nuisance parameter. In Zhang [@zhang1990fourier] the problem of estimating the mixing density in location (mean) mixtures was studied. To the best of our knowledge, we here address, for the first time, the problem of non-parametric inference for the mixing density $g$ in full generality and derive the minimax convergence rates.
In fact, Theorem \[polR\] and Theorem \[logR\] directly apply not only to normal variance-mean mixture models, but also to stable variance-mean mixtures. Estimation of time-changed Lévy models -------------------------------------- Let $L = (L_{t})_{t\geq0} $ be a one-dimensional Lévy process and let $\mathcal{T} = (\mathcal{T}(s))_{s\geq0} $ be a non-negative, non-decreasing stochastic process independent of $L$ with $\mathcal{T}(0)=0 $. A time-changed Lévy process $Y = (Y_{s})_{s\geq0} $ is then defined as $Y_{s} = L_{\mathcal{T}(s)}. $ The process $\mathcal{T} $ is usually referred to as time change or subordinator. Consider the problem of statistical inference on the distribution of the time change $\mathcal{T} $ based on the low-frequency observations of the time-changed Lévy process $X_{t}=L_{\mathcal{T}(t)}. $ Suppose that observations of the time-changed process $X_{t} $ at times $t_{j}=j\Delta, $ $j=0,\ldots, n, $ are available. If the sequence $\mathcal{T}(t_{j})-\mathcal{T}(t_{j-1}), $ $j=1,\ldots, n, $ is strictly stationary with the invariant stationary distribution $\pi, $ then for any bounded “test function” $f,$ $$\begin{aligned} \label{CONVERG}\frac{1}{n}\sum_{j=1}^{n}f\left( L_{\mathcal{T}(t_{j})}-L_{\mathcal{T}(t_{j-1})}\right) \to\mathbb{E}_{\pi}[f(L_{\mathcal{T}(\Delta)})], \quad n\to\infty.\end{aligned}$$ The limiting expectation in (\[CONVERG\]) is then given by $$\begin{aligned} \mathbb{E}_{\pi}[f(L_{\mathcal{T}(\Delta)})]=\int_{0}^{\infty} \mathbb{E}[f(L_{s})]\, \pi(ds).\end{aligned}$$ Taking $f(z) = f_{u}(z)=\exp({\mathrm{i}}uz), $ $u\in\mathbb{R}, $ we arrive at the following representation for the c.f.
of $L_{\mathcal{T}(\Delta)} $: $$\begin{aligned} \label{EstEq}\mathbb{E}\left[ \exp\left( {\mathrm{i}}u L_{\mathcal{T}(\Delta )}\right) \right] =\int_{0}^{\infty} \exp(-t\psi(u))\, \pi(dt)=\mathcal{L}_{\pi}(\psi(u)),\end{aligned}$$ where $\psi(u):=-t^{-1}\log(\phi_{t}(u))$ with $\phi_{t}(u)=\mathbb{E}\exp({\mathrm{i}}uL_{t})$ being the characteristic function of the Lévy process $L $ (so that $\psi$ is its characteristic exponent) and $\mathcal{L}_{\pi} $ being the Laplace transform of $\pi. $ If we want to estimate the invariant measure $\pi$ (or its density) from the discrete-time observations of $L_{\mathcal{T}},$ then we are in the setting of the generalized statistical Skorohod embedding problem, with the only difference that the elements of the sample $L_{\mathcal{T}(t_{1})}-L_{\mathcal{T}(t_{0})},\ldots, L_{\mathcal{T}(t_{n})}-L_{\mathcal{T}(t_{n-1})}$ are not necessarily independent. However, under appropriate mixing properties of the sequence $\mathcal{T}(t_{j})-\mathcal{T}(t_{j-1}), $ $j=1,\ldots, n, $ one can easily generalize the results of Section \[seq: gsse\] to the case of dependent data (see, e.g. [@belomestny2011statistical] for similar results). The problem of estimating the parameters of a Lévy process observed at low frequency was considered in Neumann and Reiß [@neumann2009nonparametric] and Chen et al. [@chen2010nonparametric]. Let us note that the statistical inference for time-changed Lévy processes based on high-frequency observations of $Y$ has been the subject of many studies, see, e.g., Bull [@bull2013estimating] and Todorov and Tauchen [@todorov2012realized] and the references therein. Numerical examples ================== Barndorff-Nielsen et al. [@barndorff1982normal] consider a class of variance-mean mixtures of normal distributions which they call generalized hyperbolic distributions.
The univariate and symmetric members of this family appear as normal scale mixtures whose mixing distribution is the generalized inverse Gaussian distribution with density $$\begin{aligned} \label{invGauss}p_{T}(v)=\frac{(\varkappa/\delta)^{\lambda}}{2K_{\lambda }(\delta\varkappa)} v^{\lambda-1} \exp\left( -\frac{1}{2}\left( \varkappa^{2} v+\frac{\delta^{2}}{v}\right) \right) , \quad v>0,\end{aligned}$$ for some $\varkappa,$ $\delta\geq0$ and $\lambda>0,$ where $K_{\lambda}$ is the modified Bessel function of the third kind. The resulting normal scale mixture has probability density function $$\begin{aligned} p_{X}(x)=\frac{\varkappa^{1/2}}{(2\pi)^{1/2}\delta^{\lambda} K_{\lambda }(\delta\varkappa)} (\delta^{2}+x^{2})^{\frac{1}{2}\left( \lambda-\frac{1}{2}\right) }K_{\lambda-\frac{1}{2}}\bigl(\varkappa(\delta^{2}+x^{2})^{1/2}\bigr).\end{aligned}$$ Let us start with a simple example, the Gamma density $p_{T}(x)=x\exp(-x),$ $x\geq0,$ which is a special case of (\[invGauss\]) for $\delta=0,$ $\lambda=2$ and $\varkappa=\sqrt{2}.$ We simulate a sample of size $n$ from the distribution of $X,$ and construct the estimate with the bandwidth $h_{n}$ given (up to a constant not depending on $n$) by and $\gamma=0.8.$ In Figure \[fig: gamma\_bm\] (left), one can see $50$ estimated densities based on $50$ independent samples from $W_{T}$ of size $n=1000,$ together with $p_{T}$ in red. Next we estimate the distribution of the loss $\sup_{x\in[0,10]}\bigl\{ |p_{T,n}(x)-p_{T}(x)|\bigr\}$ based on $100$ independent repetitions of the estimation procedure. The corresponding box plots for different $n$ are shown in Figure \[fig: gamma\_bm\] (right). ![Left: the Gamma density (red) and its $50$ estimates (grey) for the sample size $n=1000$. Right: the box plots of the loss $\sup_{x\in[0,10]}\bigl\{ |p_{T,n}(x)-p_{T}(x)|\bigr\}$ for different sample sizes.[]{data-label="fig: gamma_bm"}](gamma_dens_5000.pdf "fig:"){width="0.45\linewidth"}  ![Left: the Gamma density (red) and its $50$ estimates (grey) for the sample size $n=1000$.
Right: the box plots of the loss $\sup_{x\in[0,10]}\bigl\{ |p_{T,n}(x)-p_{T}(x)|\bigr\}$ for different sample sizes.[]{data-label="fig: gamma_bm"}](gamma_bm_norm_n.pdf "fig:"){width="0.45\linewidth"} Let us now turn to a more interesting example of variance-mean mixtures. We take $X=T+W_{T}$ and choose $T$ to follow a Gamma distribution with the density $p_{T}(x)=x\exp(-x),$ $x\geq0.$ The estimate is constructed as follows. First note that $\psi(\lambda)=-{\mathrm{i}}\lambda+\lambda ^{2}/2.$ In order to numerically compute the function $\Phi_{n}(1-z,X_{k})$ for $z=\gamma+{\mathrm{i}}v$ with $\gamma<1,$ we use the decomposition $$\begin{aligned} \label{phin_approx}\frac{1}{n}\sum_{k=1}^{n}\Phi_{n}(1-z,X_{k}) & =\int _{0}^{A_{n}}\left[ \psi(\lambda)\right] ^{-z}[\phi_{n}(\lambda )-e^{-m_{n}\psi(\lambda)}]\psi^{\prime}(\lambda)\,d\lambda\\ & +m_{n}^{z-1}\Gamma(1-z)+O\bigl(m_{n}^{-(1-\gamma)}\exp(-m_{n}A_{n}^{2}/2)\bigr),\nonumber\end{aligned}$$ where $\phi_{n}(\lambda)=\frac{1}{n} \sum_{k=1}^{n} e^{{\mathrm{i}}\lambda X_{k}}$ is the empirical characteristic function and $m_{n}=\frac{1}{n}\sum_{k=1}^{n} X_{k}\to2.$ This decomposition follows from a Cauchy argument similar to the one in the proof of Proposition \[arc\] and is quite useful for reducing the cost of computing the integral in , since the integral on the r.h.s. of (\[phin\_approx\]) is much easier to compute due to the asymptotic relation $\phi_{n}(\lambda)-e^{-m_{n}\psi(\lambda)}=O(\lambda^{2}),$ $\lambda\to0.$ Next we take $\gamma=0.7,$ $A_{n}$ and $h_{n}$ as in Theorem \[polR\] with $\varepsilon=0.5$ and $\beta=\pi/2$ (see Example \[exmp\_gamma\]).
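The gain from the decomposition (\[phin\_approx\]) rests on the relation $\phi_{n}(\lambda)-e^{-m_{n}\psi(\lambda)}=O(\lambda^{2}),$ which is easy to check numerically. A minimal Python sketch follows (an illustration under the same Gamma assumptions, not the code used for the experiments; the helper name `stabilized` is ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

T = rng.gamma(shape=2.0, scale=1.0, size=n)   # p_T(t) = t e^{-t}
X = T + np.sqrt(T) * rng.standard_normal(n)   # X = T + W_T
m_n = X.mean()                                # m_n -> E[X] = 2

def psi(lam):
    # characteristic exponent of L_t = t + W_t, in the sign convention of the text
    return -1j * lam + 0.5 * lam**2

def stabilized(lam):
    phi_n = np.mean(np.exp(1j * lam * X))     # empirical characteristic function
    return phi_n - np.exp(-m_n * psi(lam))    # the stabilized integrand of the decomposition

# the difference shrinks roughly quadratically as lambda -> 0
for lam in (0.2, 0.1, 0.05):
    print(lam, abs(stabilized(lam)))
```

Near $\lambda=0$ the two terms share the same constant and linear Taylor coefficients, which is why the difference is quadratically small there.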
Figure \[fig: gamma\_bmd\] shows the performance of the estimate defined in : on the left-hand side, $20$ independent realizations of the estimate $p_{T,n}$ for $n=1000$ are shown together with the true density $p_{T}.$ The box plots of the loss $\sup _{x\in[0,10]}\bigl\{ |p_{T,n}(x)-p_{T}(x)|\bigr\}$ based on $100$ runs of the algorithm are depicted on the right-hand side of Figure \[fig: gamma\_bmd\]. By comparing the right-hand sides of Figure \[fig: gamma\_bm\] and Figure \[fig: gamma\_bmd\], we observe that the performances of the two estimates are similar, although the latter seems to have a higher variance. This supports the claim of Theorem \[polR\] about the same convergence rates in the statistical Skorohod embedding and generalized statistical Skorohod embedding problems, provided $p_{T}\in\mathcal{C}(\beta,\gamma,L).$ ![Left: the Gamma density (red) and its $20$ estimates (grey) for the sample size $n=5000$. Right: the box plots of the loss $\sup_{x\in[0,10]}\bigl\{ |p_{T,n}(x)-p_{T}(x)|\bigr\}$ for different sample sizes.[]{data-label="fig: gamma_bmd"}](gamma_bmd_dens_5000.pdf "fig:"){width="0.45\linewidth"}  ![Left: the Gamma density (red) and its $20$ estimates (grey) for the sample size $n=5000$.
Right: the box plots of the loss $\sup_{x\in[0,10]}\bigl\{ |p_{T,n}(x)-p_{T}(x)|\bigr\}$ for different sample sizes.[]{data-label="fig: gamma_bmd"}](gamma_bmd_norm_n.pdf "fig:"){width="0.45\linewidth"} Proofs ====== Proof of Theorem \[sep\_conv\_rates\] ------------------------------------- First let us estimate the bias of $p_{T,n}.$ We have $$\begin{aligned} \mathbb{E}[p_{T,n}(x)] & =\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty }x^{-\gamma-{\mathrm{i}}v}K(vh_{n})\frac{\mathcal{M}[p_{|X|}](2(\gamma+{\mathrm{i}}v)-1)}{2^{\gamma+{\mathrm{i}}v}\Gamma(\gamma-1/2+{\mathrm{i}}v)}\,dv\\ & =\frac{1}{2\pi}\int_{-1/h_{n}}^{1/h_{n}}x^{-\gamma-{\mathrm{i}}v}\mathcal{M}[p_{T}](\gamma+{\mathrm{i}}v)\,dv.\end{aligned}$$ Hence $$p_{T}(x)-\mathbb{E}[p_{T,n}(x)]=\frac{1}{2\pi}\int_{\{\left\vert v\right\vert \geq1/h_{n}\}}\mathcal{M}[p_{T}](\gamma+{\mathrm{i}}v)x^{-\gamma-{\mathrm{i}}v}dv$$ and, since $e^{-\beta\left\vert v\right\vert }\leq e^{-\beta/h_{n}}$ for $\left\vert v\right\vert \geq1/h_{n},$ we obtain the estimate $$\begin{aligned} \sup_{x\geq0}\bigl\{x^{\gamma}|\mathbb{E}[p_{T,n}(x)]-p_{T}(x)|\bigr\} & \leq\frac{1}{2\pi}\int_{\{\left\vert v\right\vert \geq1/h_{n}\}}\left\vert \mathcal{M}[p_{T}](\gamma+{\mathrm{i}}v)\right\vert dv\nonumber\\ & \leq\frac{e^{-\beta/h_{n}}}{2\pi}\int_{\{\left\vert v\right\vert \geq1/h_{n}\} }\left\vert \mathcal{M}[p_{T}](\gamma+{\mathrm{i}}v)\right\vert e^{\beta\left\vert v\right\vert } dv\nonumber\\ & \leq L\,\frac{e^{-\beta/h_{n}}}{2\pi}.
\label{bias}$$ As to the variance, by the simple inequality $\operatorname{Var}\left( \int f_{t}dt\right) \leq\left( \int\sqrt{\operatorname{Var}[f_{t}]} dt\right) ^{2},$ which holds for any random function $f_{t}$ with $\int\mathbb{E}[f^{2}_{t}] dt<\infty,$ we get $$\begin{aligned} \operatorname{Var}[x^{\gamma}p_{T,n}(x)] & =\operatorname{Var}\left[ \frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}x^{-{\mathrm{i}}v}K(vh_{n})\frac {\mathcal{M}_{n}[p_{|X|}](2(\gamma+{\mathrm{i}}v)-1)}{2^{\gamma+iv}\Gamma(\gamma-1/2+{\mathrm{i}}v)}\,dv\right] \nonumber\\ & \leq\frac{1}{\pi2^{2\gamma}}\left[ \int_{-1/h_{n}}^{1/h_{n}}\frac {\sqrt{\operatorname{Var}\left( \mathcal{M}_{n}[p_{|X|}](2(\gamma+{\mathrm{i}}v)-1)\right) }}{\left\vert \Gamma(\gamma-1/2+{\mathrm{i}}v)\right\vert }dv\right] ^{2}\,\nonumber\\ & \leq\frac{1}{2n\pi}\left[ \int_{-1/h_{n}}^{1/h_{n}}\frac{\sqrt {\operatorname{Var}\bigl(|X|^{2(\gamma+{\mathrm{i}}v-1)}\bigr)}}{\left\vert \Gamma(\gamma-1/2+{\mathrm{i}}v)\right\vert }dv\right] ^{2}\,\nonumber\\ & \leq\frac{1}{2n\pi}\left[ \int_{-1/h_{n}}^{1/h_{n}}\frac{\sqrt {\mathbb{E}\left[ |W_{T}|^{4(\gamma-1)}\right] }}{\left\vert \Gamma (\gamma-1/2+{\mathrm{i}}v)\right\vert }\, dv\right] ^{2}. \label{vr}$$ Note that $$\begin{aligned} \mathbb{E}\left[ |W_{T}|^{4(\gamma-1)}\right] & =\int_{0}^{\infty }\mathbb{E}\left[ |W_{t}|^{4(\gamma-1)}\right] p_{T}(t)\, dt\\ & =\mathbb{E}\left[ |W_{1}|^{4(\gamma-1)}\right] \int_{0}^{\infty }t^{2(\gamma-1)}p_{T}(t)\,dt\\ & =:C_{2}(\gamma)<\infty,\end{aligned}$$ due to (\[sg\]). 
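The bounds (\[bias\]) and (\[vr\]) already exhibit the bias-variance trade-off behind the bandwidth choice. A rough side calculation (ours, ignoring polynomial factors in $h_{n}$): balancing the squared bias against the variance,

```latex
% squared bias e^{-2\beta/h_n} versus variance of order n^{-1} e^{\pi/h_n}
e^{-2\beta/h_{n}} \asymp n^{-1}e^{\pi/h_{n}}
\quad\Longleftrightarrow\quad
\frac{1}{h_{n}} = \frac{\log n}{\pi+2\beta} + O(\log\log n),
```

so the resulting risk is of order $e^{-\beta/h_{n}}\asymp n^{-\beta/(\pi+2\beta)}$ up to logarithmic factors; this is the balance realized by the choice (\[hn\]), and it matches, up to logarithms, the minimax rate appearing in the proof of Theorem \[low\_bound\] below.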
We obtain from (\[vr\]), due to Corollary \[cor\_integ\_gamma\] (see Appendix) and by taking into account (\[sg\]), $$\operatorname{Var}[x^{\gamma}p_{T,n}(x)]\leq\frac{C_{2}(\gamma)}{2n\pi}C_{3}h_{n}^{2\left( \gamma-1\right) }e^{\pi/h_{n}}=\frac{C_{3}(\gamma)}{n}h_{n}^{2\left( \gamma-1\right) }e^{\pi/h_{n}},$$ and so (\[ms\]) follows with $C_{\gamma,L}=\max(C_{3}(\gamma),\frac{L^{2}}{4\pi^{2}}).$ Finally, by plugging (\[hn\]) into (\[ms\]) we get (\[rates\_bm\_pol\]), and the proof is finished. Proof of Theorem \[sep\_log\_rates\] ------------------------------------- The proof is analogous to that of Theorem \[sep\_conv\_rates\]; the only difference is the bias estimate (\[bias\]), which now becomes $$\sup_{x\geq0}\bigl\{x^{\gamma}|\mathbb{E}[p_{T,n}(x)]-p_{T}(x)|\bigr\}\leq \frac{L}{2\pi}h_{n}^{\beta}.$$ This gives (\[ms1\]) with the same constant $D_{\gamma,L}=\max(C_{3}(\gamma),\frac{L^{2}}{4\pi^{2}}).$ Next, with the choice (\[hn1\]), we obtain from (\[ms1\]) the logarithmic rate (\[arr1\]). Proof of Theorem \[low\_bound\] ------------------------------- Our construction relies on the following basic result (see [@tsybakov2009introduction] for the proof).
\[ThmLowerBound\] Suppose that for some $\varepsilon>0$ and $n\in \mathbb{N}$ there are two densities $p_{0,n},p_{1,n}\in\mathcal{G}$ such that $$d(p_{0,n},p_{1,n})> 2\varepsilon v_{n}.$$ If the observations in model $n$ follow the product law $\mathsf{P}_{p,n}=\mathsf{P}_{p}^{\otimes n}$ under the density $p\in\mathcal{G}$ and $$\chi^{2}(p_{1,n}\,|\,p_{0,n})\le n^{-1}\log(1+(2-4\delta)^{2})$$ holds for some $\delta\in(0,1/2)$, then the following lower bound holds for all density estimators $\hat p_{n}$ based on observations from model $n$: $$\inf_{\hat p_{n}}\sup_{p\in\mathcal{G}}\mathsf{P}_{p}^{\otimes n}\big(d(\hat p_{n},p)\ge\varepsilon v_{n}\big)\ge\delta.$$ If the above holds for fixed $\varepsilon,\delta>0$ and all $n\in\mathbb{N}$, then the optimal rate of convergence in a minimax sense over $\mathcal{G}$ is not faster than $v_{n}$. ### Proof of a lower bound for the class $\mathcal{C}(\beta ,\gamma,L)$ Let us start with the construction of the densities $p_{0,n}$ and $p_{1,n}.$ Define for any $\nu>1$ and $M>0$ two auxiliary functions $$\begin{aligned} q(x)=\frac{\nu\sin(\pi/\nu)}{\pi}\frac{1}{1+x^{\nu}},\quad x\geq0\end{aligned}$$ and $$\begin{aligned} \rho_{M}(x)=\frac{1}{\sqrt{2\pi}}e^{-\frac{\log^{2}(x)}{2}}\frac{\sin (M\log(x))}{x}, \quad x\geq0.\end{aligned}$$ The properties of the functions $q$ and $\rho_{M}$ are collected in the following lemma.
\[l1\_proof\_low\] The function $q$ is a probability density on $\mathbb{R}_{+}$ with the Mellin transform $$\begin{aligned} \mathcal{M}[q](z)=\frac{\sin(\pi/\nu)}{\sin(\pi z/\nu)} ,\quad\mathsf{Re}[z]>0.\end{aligned}$$ The Mellin transform of the function $\rho_{M}$ is given by $$\begin{aligned} \label{mellin_rho_pol}\mathcal{M}[\rho_{M}](u+{\mathrm{i}}v)=\frac{1}{2{\mathrm{i}}}\left[ e^{(u-1+{\mathrm{i}}(v+M))^{2}/2}-e^{(u-1+{\mathrm{i}}(v-M))^{2}/2}\right] .\end{aligned}$$ Hence $$\begin{aligned} \int_{0}^{\infty}\rho_{M}(x) dx=\mathcal{M}[\rho_{M}](1)=0.\end{aligned}$$ Set now for any $M>0$ $$\begin{aligned} q_{0,M}(x):=q(x), \quad q_{1,M}(x):=q(x)+(q\vee\rho_{M}) (x),\end{aligned}$$ where $f\vee g$ stands for the multiplicative convolution of two functions $f$ and $g$ on $\mathbb{R}_{+}$ defined as $$\begin{aligned} (f\vee g)(x):=\int_{0}^{\infty}\frac{f(t) g(x/t)}{t} dt, \quad x\geq0.\end{aligned}$$ The following lemma describes some properties of $q_{0,M}$ and $q_{1,M}.$ \[l2\_proof\_low\] For any $M>0$ the function $q_{1,M}$ is a probability density satisfying $$\begin{aligned} \|q_{0,M}-q_{1,M}\|_{\infty}=\sup_{x\in\mathbb{R}_{+}}|q_{0,M}(x)-q_{1,M}(x)|\gtrsim\exp(-M\pi/\nu) , \quad M\to\infty.\end{aligned}$$ Moreover, $q_{0,M}$ and $q_{1,M}$ are in $\mathcal{C}(\beta,\gamma,L)$ for all $0<\beta<\pi/\nu$ and $\gamma>0$ with $L$ depending on $\gamma.$ First note that $$\begin{aligned} \int_0^\infty q_{1,M}(x) dx=1+\int_0^\infty (q\vee \rho_M) (x)\, dx=1+\mathcal{M}[q](1)\mathcal{M}[\rho_M](1) = 1.\end{aligned}$$ Furthermore, substituting $x=e^{v}$ and using the Parseval identity, we obtain, up to the positive factor $\frac{\nu\sin(\pi/\nu)}{\pi}$ coming from $q,$ $$\begin{aligned} (q\vee \rho_M) (y)&=&\int_{0}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{\log^{2}(x)}{2}}\frac{\sin(M\log(x))}{x^{2}}\frac{1}{1+(y/x)^{\nu}}dx \\ &=&\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{v^{2}}{2}}\sin(Mv)\frac{e^{-v}}{1+e^{-\nu(v-\log(y))}}dv \\ &=&e^{-\log(y)}\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{v^{2}}{2}}\sin(Mv)\frac{e^{(\log(y)-v)}}{1+e^{\nu (\log(y)-v)}}dv \\ &=&
\frac{e^{-\log(y)}}{2\pi}\int_{-\infty}^{\infty}e^{-{\mathrm{i}}u\log(y)}\left[\frac{H(u+M)-H(u-M)}{2}\right]\mathcal{F}[R](u)du,\end{aligned}$$ where $R(x)=\frac{e^{x}}{1+e^{\nu x}}$ and $H(x)=e^{-x^{2}/2}$. Note that $$\begin{aligned} \mathcal{F}[R](u)=\int_{-\infty}^{\infty}\frac{e^{x+{\mathrm{i}}ux}}{1+e^{\nu x}}\,dx=\frac{1}{\nu}\int_{-\infty}^{\infty}\frac{e^{v/\nu+{\mathrm{i}}uv/\nu}}{1+e^{v}}\,dv=\frac{1}{\nu}\,\Gamma\left(\frac{1+{\mathrm{i}}u}{\nu}\right)\Gamma\left(1-\frac{1+{\mathrm{i}}u}{\nu}\right).\end{aligned}$$ Hence due to $$\sup_{y\in \mathbb{R}_+}|q_{0,M}(y)-q_{1,M}(y)|=\sup_{y\in \mathbb{R}_+}|(q\vee \rho_M) (y)|\gtrsim \exp(-M\pi/\nu), \quad M\to \infty.$$ The second statement of the lemma follows from Lemma \[l1\_proof\_low\] and the fact that $\mathcal{M}[q\vee \rho_M]=\mathcal{M}[q]\mathcal{M}[\rho_M].$ Let $T_{0,M}$ and $T_{1,M}$ be two random variables with densities $q_{0,M}$ and $q_{1,M},$ respectively. Then the density of the r.v.
$|W_{T_{i,M}}|,$ $i=0,1,$ is given by $$\begin{aligned} p_{i,M}(x) & :=\frac{2}{\sqrt{2\pi}}\int_{0}^{\infty}\lambda^{-1/2}e^{-\frac{x^{2}}{2\lambda}}q_{i,M}(\lambda)\,d\lambda\quad i=0,1.\end{aligned}$$ For the Mellin transform of $p_{i,M}$ we get $$\begin{aligned} \label{p_mellin_pol}\mathcal{M}[p_{i,M}](z) & =\mathbb{E}\bigl[|W_{1}|^{z-1}\bigr]\mathbb{E}\bigl[T_{i,M}^{(z-1)/2}\bigr]\nonumber\\ & =\mathbb{E}\bigl[|W_{1}|^{z-1}\bigr]\mathcal{M}[q_{i,M}]((z+1)/2)\nonumber\\ & =\frac{2^{z/2}}{\sqrt{2\pi}}\Gamma(z/2)\mathcal{M}[q_{i,M}]((z+1)/2), \quad i=0,1.\end{aligned}$$ \[l3\_proof\_low\] The $\chi^{2}$-distance between the densities $p_{0,M}$ and $p_{1,M}$ fulfills $$\begin{aligned} \chi^{2}(p_{1,M}|p_{0,M})=\int\frac{(p_{1,M}(x)-p_{0,M}(x))^{2}}{p_{0,M}(x)}dx\lesssim e^{-M\pi(1+2/\nu)}, \quad M\to\infty.\end{aligned}$$ First note that $p_{0,M}(x)>0$ on $[0,\infty).$ Since $$\begin{aligned} p_{0,M}(x)&=&\frac{2}{\sqrt{2\pi}}\frac{\nu\sin(\pi/\nu)}{\pi}\int_{0}^{\infty}\lambda^{-1/2}e^{-\frac{x^{2}}{2\lambda}}\frac{1}{1+\lambda^{\nu}}d\lambda\\/y=1/\lambda/&=&\frac{2}{\sqrt{2\pi}}\frac{\nu\sin(\pi/\nu)}{\pi}\int_{0}^{\infty}y^{1/2}e^{-y\frac{x^{2}}{2}}\frac{1}{y^{2}(1+y^{-\nu})}dy\\&=&\frac{2}{\sqrt{2\pi}}\frac{\nu\sin(\pi/\nu)}{\pi}\int_{0}^{\infty}e^{-y\frac{x^{2}}{2}}\frac{y^{\nu-1/2-1}}{(1+y^{\nu})}dy\\&\asymp&\frac{2}{\sqrt{2\pi}}\frac{\nu\sin(\pi/\nu)}{\pi}\Gamma(\nu-1/2)x^{-2\nu+1}, \quad x\to \infty,\end{aligned}$$ we have $p_{0,M}(x)\gtrsim x^{-2\nu+1},$ $x\to \infty.$ Furthermore, due to and the Parseval identity $$\begin{gathered} \label{parseval} \int_{0}^{\infty}x^{2\nu-1}\left|p_{0,M}(x)-p_{1,M}(x)\right|^{2}dx= \\ \frac{2^{-4+2\nu}}{\pi}\int_{\gamma-i\infty}^{\gamma+i\infty}\mathcal{M}[q\vee \rho_M]\left(\frac{z+1}{2}\right) \Gamma\left(\frac{z}{2}\right)\mathcal{M}[q\vee \rho_M]\left(\frac{2\nu-z+1}{2}\right)\Gamma\left(\frac{2\nu-z}{2}\right)dz,\end{gathered}$$ where $\mathcal{M}[q\vee \rho_M](z)=\mathcal{M}[q](z)\mathcal{M}[\rho_M](z).$ Due to 
$$\begin{aligned} \label{mellin_rho_bound} |\mathcal{M}[\rho_{M}](u+{\mathrm{i}}v)|\leq e^{\frac{(u-1)^{2}}{2}}\frac{\phi(v+M)+\phi(v-M)}{2}\end{aligned}$$ with $\phi(v)=e^{-\frac{ v^{2}}{2}}.$ Combining (\[parseval\]), (\[mellin\_rho\_bound\]) and the asymptotics of the Gamma function (Appendix), we derive $$\begin{aligned} \chi^{2}(p_{1,M}|p_{0,M})&=&\int\frac{(p_{1,M}(x)-p_{0,M}(x))^{2}}{p_{0,M}(x)}dx \\ &\lesssim&\int_{0}^{\infty}(p_{1,M}(x)-p_{0,M}(x))^{2}dx+\int_{0}^{\infty}x^{2\nu-1}(p_{1,M}(x)-p_{0,M}(x))^{2}dx\\&\lesssim&\int_{-\infty}^{\infty}|v|^{\nu-1}e^{-|v|\pi/2-|v|\pi/\nu}\left(\phi(v/2+M)+\phi(v/2-M)\right)^{2}dv \\ &\lesssim & M^{\nu-1}e^{-M\pi(1+2/\nu)}, \quad M\to \infty.\end{aligned}$$ Fix some $\kappa\in(0,1/2).$ Due to Lemma \[l3\_proof\_low\], the inequality $$n\chi^{2}(p_{1,M}|p_{0,M})\leq\kappa$$ holds for $n$ large enough, provided $$\begin{aligned} M=\frac{1+{\varepsilon}}{\pi(1+2/\nu)}(\log(n)+(\nu-1)\log\log(n))\end{aligned}$$ for arbitrarily small ${\varepsilon}>0.$ Hence Lemma \[l2\_proof\_low\] and Theorem \[ThmLowerBound\] imply $$\inf_{\hat p_{n}}\sup_{p\in\mathcal{C}(\beta,\gamma,L)}\mathsf{P}_{p,n}\big(\|\hat p_{n}-p\|_{\infty}\ge c v_{n}\big)\ge\delta$$ for any $\beta<\pi/\nu<\pi,$ any $\gamma>0,$ some constants $c>0,$ $\delta>0$ and $v_{n}=n^{-\beta/(\pi+2\beta)}\log^{-\frac{\pi-\beta}{\pi+2\beta}}(n).$ ### Proof of a lower bound for the class $\mathcal{D}(\beta ,\gamma,L)$ Define for any $\nu>1,$ $\alpha>0$ and $M>0,$ $$\begin{aligned} q(x)=\left[ 2\Gamma(\nu)\right] ^{-1}\times\begin{cases} \log^{\nu-1}(1/x), & 0\leq x\leq1,\\ x^{-2}\log^{\nu-1}(x), & x>1 \end{cases}\end{aligned}$$ and $$\begin{aligned} \rho_{M}(x)=\frac{1}{\sqrt{2\pi}}e^{-\frac{\log^{2}(x)}{2}}\frac{\sin (M\log(x))}{x\log(x)}, \quad x\geq0.\end{aligned}$$ The properties of the functions $q$ and $\rho_{M}$ can be found in the next lemma.
\[l1\_proof\_low\_log\] The function $q$ is a probability density on $\mathbb{R}_{+}$ with the Mellin transform $$\begin{aligned} \mathcal{M}[q](z)=\frac{1}{2}\left[ z^{-\nu}+(2-z)^{-\nu}\right] ,\quad0< \mathsf{Re}[z]< 2.\end{aligned}$$ The Mellin transform of the function $\rho_{M}$ is given by $$\begin{aligned} \label{mellin_rho}\mathcal{M}[\rho_{M}](u+{\mathrm{i}}v)=e^{\frac{(u-1)^{2}}{2}}\frac{G(u,v+M)-G(u,v-M)}{2},\end{aligned}$$ where $G(u,v)=\int_{-\infty}^{v}e^{-\frac{x^{2}}{2}+{\mathrm{i}}x(u-1)}dx.$ Hence $$\begin{aligned} \zeta_{M}:=\int_{0}^{\infty}\rho_{M}(x) dx=\mathcal{M}[\rho_{M}](1)=\frac{1}{2}\int _{-M}^{M}e^{-\frac{x^{2}}{2}}dx.\end{aligned}$$ Set now for any $M>0$ $$\begin{aligned} q_{0,M}(x):=q(x), \quad q_{1,M}(x):=(1-\zeta_{M})q(x)+(q\vee\rho_{M}) (x),\end{aligned}$$ where $f\vee g$ stands for the multiplicative convolution of two functions $f$ and $g$ on $\mathbb{R}_{+}$ defined via $$\begin{aligned} (f\vee g)(x):=\int_{0}^{\infty}\frac{f(t) g(x/t)}{t} dt.\end{aligned}$$ \[l2\_proof\_low\_log\] For any $M>0,$ the function $q_{1,M}$ is a probability density satisfying $$\begin{aligned} \sup_{x\in(1-\delta,1+\delta)}|q_{0,M}(x)-q_{1,M}(x)|\asymp|\cos(\pi \nu/2)|M^{-\nu+1}, \quad M\to\infty,\end{aligned}$$ where $\delta>0$ is a fixed number.
Moreover, $q_{0,M}$ and $q_{1,M}$ are in $\mathcal{D}(\beta,\gamma,L)$ for all $\beta<\nu-1$ and $\gamma\in(0,2).$ First note that $$\begin{aligned} \int_0^\infty q_{1,M}(x) dx=1+\int_0^\infty (q\vee \rho_M) (x) - \zeta_M= 1+\mathcal{M}[\rho_{M}](1)\times\mathcal{M}[q](1)-\zeta_M=1.\end{aligned}$$ Furthermore, $(q\vee \rho_M) (y)=\left[2\Gamma(\nu)\right]^{-1}[I_1(y)+I_2(y)]$ with $$\begin{aligned} I_{1}(y)&=&\int_{y}^{\infty}e^{-\frac{\log^{2}(x)}{2\alpha}}x^{-2}\frac{\sin(M\log(x))}{\log(x)}\log^{\nu-1}(x/y)dx \\ &=&\int_{\log(y)}^{\infty}e^{-\frac{z^{2}}{2\alpha}-z}\frac{\sin(Mz)}{z}(z-\log(y))^{\nu-1}dz\end{aligned}$$ and $$\begin{aligned} I_{2}(y)&=&\int_{0}^{y}e^{-\frac{\log^{2}(x)}{2\alpha}}y^{-2}\frac{\sin(M\log(x))}{\log(x)}\log^{\nu-1}(y/x)dx \\ &=&\int_{-\infty}^{\log(y)}e^{-\frac{z^{2}}{2\alpha}+z}y^{-2}\frac{\sin(Mz)}{z}(\log(y)-z)^{\nu-1}dz.\end{aligned}$$ By taking $y=\exp(A),$ we get for $I_1(y)$ $$\begin{aligned} I_1(y)&=&\int_{0}^{\infty}e^{-\frac{(z+A)^{2}}{2\alpha}-(z+A)}\frac{\sin(M(z+A))}{z+A}z^{\nu-1}dz \\ &=&\cos(AM)\int_{0}^{\infty}\frac{e^{-\frac{(z+A)^{2}}{2\alpha}-(z+A)}}{z+A}\sin(Mz)z^{\nu-1}dz \\ && + \sin(AM)\int_{0}^{\infty}\frac{e^{-\frac{(z+A)^{2}}{2\alpha}-(z+A)}}{z+A}\cos(Mz)z^{\nu-1}dz.\end{aligned}$$ The well known Erdélyi lemma implies $$\begin{aligned} \int_{0}^{\infty}\frac{e^{-\frac{(z+A)^{2}}{2\alpha}-(z+A)}}{z+A}\sin(Mz)z^{\nu-1}dz\asymp\frac{e^{-\frac{A{}^{2}}{2\alpha}-A}}{A}\Gamma(\nu)\sin(\pi\nu/2)M^{-\nu}, \quad M\to \infty\end{aligned}$$ and $$\begin{aligned} \int_{0}^{\infty}\frac{e^{-\frac{(z+A)^{2}}{2\alpha}-(z+A)}}{z+A}\cos(Mz)z^{\nu-1}dz\asymp\frac{e^{-\frac{A{}^{2}}{2\alpha}-A}}{A}\Gamma(\nu)\cos(\pi\nu/2)M^{-\nu}, \quad M\to \infty.\end{aligned}$$ Hence $$\begin{aligned} \label{I_1(e^A)} I_1(e^A)\asymp\frac{e^{-\frac{A{}^{2}}{2\alpha}-A}}{A}\Gamma(\nu)\sin(AM+\pi\nu/2)M^{-\nu}, M\to \infty.\end{aligned}$$ Analogously $$\begin{aligned} 
I_2(e^A)&=&e^{-2A}\int_{-\infty}^{A}e^{-\frac{z^{2}}{2\alpha}+z}\frac{\sin(Mz)}{z}(A-z)^{\nu-1}dz \\ &=& e^{-2A}\int_{0}^{\infty}e^{-\frac{(A-z)^{2}}{2\alpha}+A-z}\frac{\sin(M(A-z))}{A-z}z^{\nu-1}dz \\ &=& e^{-2A}\sin(AM)\int_{0}^{\infty}e^{-\frac{(A-z)^{2}}{2\alpha}+A-z}\frac{\cos(Mz)}{A-z}z^{\nu-1}dz \\ && -e^{-2A}\cos(AM)\int_{0}^{\infty}e^{-\frac{(A-z)^{2}}{2\alpha}+A-z}\frac{\sin(Mz)}{A-z}z^{\nu-1}dz \\ &\asymp& \frac{e^{-\frac{A^{2}}{2\alpha}-A}}{A}\Gamma(\nu)\sin(AM-\pi\nu/2)M^{-\nu}, \quad M\to \infty.\end{aligned}$$ Combining the previous estimates, we arrive at $$\begin{aligned} I_2(e^A)+I_1(e^A)=2\frac{e^{-\frac{A^{2}}{2\alpha}-A}}{A}\Gamma(\nu)\sin(AM)\cos(\pi\nu/2)M^{-\nu}(1+o(1)), \quad M\to \infty.\end{aligned}$$ It remains to note that the maximum of the r.h.s. of the last display is attained for $A\in \{\pi/(2M),3\pi/(2M)\}$ and $$\begin{aligned} \sup_A[I_2(e^A)+I_1(e^A)]\asymp \Gamma(\nu)|\cos(\pi\nu/2)|M^{-\nu+1}, \quad M\to \infty.\end{aligned}$$ The property $q_{1,M}\in \mathcal{D}(\beta,\gamma,L) $ for all $\beta<\nu-1$ and $\gamma\in (0,2),$ with $L$ depending on $\gamma,$ follows from the identity $\mathcal{M}[q_{1,M}](z)=\mathcal{M}[q](z)(1-\zeta_M)+\mathcal{M}[\rho_{M}](z)\mathcal{M}[q](z)$ and (\[mellin\_rho\]). Let $T_{0,M}$ and $T_{1,M}$ be two random variables with densities $q_{0,M}$ and $q_{1,M},$ respectively. Then the density of the r.v.
$|W_{T_{i,M}}|,$ $i=0,1,$ is given by $$\begin{aligned} p_{i,M}(x) & :=\frac{2}{\sqrt{2\pi}}\int_{0}^{\infty}\lambda^{-1/2}e^{-\frac{x^{2}}{2\lambda}}q_{i,M}(\lambda)d\lambda,\quad i=0,1.\end{aligned}$$ For the Mellin transform of $p_{i,M},$ we have $$\begin{aligned} \label{p_mellin}\mathcal{M}[p_{i,M}](z) & =\mathbb{E}\bigl[|W_{1}|^{z-1}\bigr]\mathbb{E}\bigl[T_{i,M}^{(z-1)/2}\bigr]\nonumber\\ & =\mathbb{E}\bigl[|W_{1}|^{z-1}\bigr]\mathcal{M}[q_{i,M}]((z+1)/2)\nonumber\\ & =\frac{2^{z/2}}{\sqrt{2\pi}}\Gamma(z/2)\mathcal{M}[q_{i,M}]((z+1)/2).\end{aligned}$$ \[l3\_proof\_low\_log\] The $\chi^{2}$-distance between the densities $p_{0,M}$ and $p_{1,M}$ satisfies $$\begin{aligned} \chi^{2}(p_{1,M}|p_{0,M}):=\int\frac{(p_{1,M}(x)-p_{0,M}(x))^{2}}{p_{0,M}(x)}dx\lesssim e^{-M\pi/2}, \quad M\to\infty.\end{aligned}$$ First note that $p_{0,M}(x)>0$ on $[0,\infty).$ Since $$\begin{aligned} \int_{0}^{1}\lambda^{-1/2}e^{-\frac{x^{2}}{2\lambda}}\log^{\nu-1}(1/\lambda)d\lambda&=&\int_{0}^{1}\lambda^{-1/2}e^{-\frac{x^{2}}{2\lambda}}\log^{\nu-1}(1/\lambda)d\lambda\\/y=1/\lambda,\lambda=1/y/&=&\int_{1}^{\infty}y^{-3/2}e^{-x^{2}y/2}\log^{\nu-1}(y)dy\\&=&\int_{x^{2}}^{\infty}x^{-2}(y/x^{2})^{-3/2}e^{-y/2}\log^{\nu-1}(y/x^{2})dy\\&=&x\int_{x^{2}}^{\infty}y{}^{-3/2}e^{-y/2}\log^{\nu-1}(y/x^{2})dy\lesssim e^{-x^{2}/2}\end{aligned}$$ and $$\begin{aligned} \int_{1}^{\infty}\lambda^{-3/2}e^{-\frac{x^{2}}{2\lambda}}\log^{\nu-1}(\lambda)d\lambda&=&\int_{0}^{1}y^{-1/2}e^{-\frac{x^{2}}{2}y}\log^{\nu-1}(1/y)dy\\&\asymp&\frac{\Gamma(1/2)}{\sqrt{2}}x^{-1}\log^{\nu-1}(x^{2}).\end{aligned}$$ we have $p_{0,M}(x)\gtrsim x^{-1},$ $x\to \infty.$ Furthermore, due to and the Parseval identity $$\begin{gathered} \label{parseval_log} \int_{0}^{\infty}x^{a-1}\left|p_{0,M}(x)-p_{1,M}(x)\right|^{2}dx= \\ \frac{2^{-4+a}}{\pi}\int_{\gamma-{\mathrm{i}}\infty}^{\gamma+{\mathrm{i}}\infty}\mathcal{M}[q\vee \rho_M]\left(\frac{z+1}{2}\right) \Gamma\left(\frac{z}{2}\right)\mathcal{M}[q\vee 
\rho_M]\left(\frac{a-z+1}{2}\right)\Gamma\left(\frac{a-z}{2}\right)dz,\end{gathered}$$ where $\mathcal{M}[q\vee \rho_M](z)=\mathcal{M}[q](z)\mathcal{M}[\rho_M](z).$ Due to (\[mellin\_rho\]) we have $$\begin{aligned} \label{mellin_rho_bound_log} |\mathcal{M}[\rho_{M}](u+{\mathrm{i}}v)|\leq e^{\frac{(u-1)^{2}}{2}}\frac{\Phi(v+M)+\Phi(v-M)}{2}\end{aligned}$$ with $\Phi(v)=\int_{-\infty}^{v}e^{-\frac{x^{2}}{2}}dx.$ Combining (\[parseval\_log\]) with properly chosen $\gamma>0$, (\[mellin\_rho\_bound\_log\]) and Lemma \[lemma\_gamma\_asymp\] (see Appendix), we derive $$\begin{aligned} \chi^{2}(p_{1,M}|p_{0,M})&=&\int\frac{(p_{1,M}(x)-p_{0,M}(x))^{2}}{p_{0,M}(x)}dx\lesssim\int_{0}^{\infty}(p_{1,M}(x)-p_{0,M}(x))^{2}dx+\int_{0}^{\infty}x(p_{1,M}(x)-p_{0,M}(x))^{2}dx\\&\lesssim&\int_{-\infty}^{\infty}e^{-|v|\pi/2}\left(\Phi(v/2+M)+\Phi(v/2-M)\right)^{2}dv\lesssim e^{-M\pi/2}, \quad M\to \infty.\end{aligned}$$ Fix some $\kappa\in(0,1/2).$ Due to Lemma \[l3\_proof\_low\_log\], the inequality $$n\chi^{2}(p_{1,M}|p_{0,M})\leq\kappa$$ holds for $n$ large enough, provided $$\begin{aligned} M=\frac{2(1+{\varepsilon})}{\pi}\log(n)\end{aligned}$$ for arbitrarily small ${\varepsilon}>0.$ Hence Lemma \[l2\_proof\_low\_log\] and Theorem \[ThmLowerBound\] imply $$\inf_{\hat p_{n}}\sup_{p\in\mathcal{D}(\beta,\gamma,L)}\mathsf{P}_{p,n}\big(\|\hat p_{n}-p\|_{\infty}\ge c v_{n}\big)\ge\delta$$ for any $\beta<\nu-1,$ any $\gamma\in(0,2),$ some constants $c>0,$ $\delta>0$ and $v_{n}=\log^{-\beta}(n).$ Proof of Proposition \[uniform\_upper\_bound\] ---------------------------------------------- It holds that $$\begin{aligned} p_{T,n}(x)-\mathbb{E}[p_{T,n}(x)] & =\frac{1}{\sqrt{\pi}}\int_{-1/h_{n}}^{1/h_{n}}x^{-\gamma-{\mathrm{i}}v}\frac{K(vh_{n})}{2^{\gamma+{\mathrm{i}}v}}\\ & \times\frac{\bigl\{\mathcal{M}_{n}[p_{|X|}](2(\gamma+{\mathrm{i}}v)-1)-\mathcal{M}[p_{|X|}](2(\gamma+{\mathrm{i}}v)-1)\bigr\}}{\Gamma((\gamma+{\mathrm{i}}v)-1/2)}\,dv.\end{aligned}$$ Due to Proposition \[ExpBounds\], $$\sup_{x\geq0}\bigl\{x^{\gamma}|p_{T,n}(x)-\mathbb{E}[p_{T,n}(x)]|\bigr\}\leq
\frac{\Delta_{n}}{\sqrt{\pi n}}\int_{-1/h_{n}}^{1/h_{n}}\frac{A_{1}}{2^{\gamma}}\frac{\log(e+|v|)}{\Gamma((\gamma+{\mathrm{i}}v)-1/2)}\,dv$$ with $\Delta_{n}=O_{a.s.}(1).$ Proof of Proposition \[asymp\_norm\] ------------------------------------ We have $$\begin{aligned} \operatorname{Var}(Z_{n,1}) & =\frac{1}{\pi}\int_{-1/h_{n}}^{1/h_{n}}\int_{-1/h_{n}}^{1/h_{n}}\frac{x^{-2\gamma-{{\mathrm{i}}}(v-u)}}{2^{2\gamma +{{\mathrm{i}}}(v-u)}}\frac{\operatorname{Cov}\left( |X_{1}|^{2(\gamma+{{\mathrm{i}}} v-1)},|X_{1}|^{2(\gamma+{{\mathrm{i}}} u-1)}\right) }{2^{2\gamma+{{\mathrm{i}}}(v-u)}\Gamma(\gamma-1/2+{{\mathrm{i}}} v)\Gamma(\gamma-1/2-{{\mathrm{i}}} u)}\,du\,dv\\ & =\frac{1}{\pi}\int_{-1/h_{n}}^{1/h_{n}}\int_{-1/h_{n}}^{1/h_{n}}\frac {1}{\left( 2x\right) ^{2(\gamma-1)+{{\mathrm{i}}}(v-u)}}\frac{\mathcal{M}[p_{\left\vert X\right\vert }](4\gamma-3+2{{\mathrm{i}}}(v-u))}{\Gamma(\gamma -1/2+{{\mathrm{i}}} v)\Gamma(\gamma-1/2-{{\mathrm{i}}} u)}\,dv\,du\\ & -\frac{1}{\pi}\left\vert \int_{-1/h_{n}}^{1/h_{n}}\frac{1}{\left( 2x\right) ^{(\gamma+{{\mathrm{i}}} v-1)}}\frac{\mathcal{M}[p_{\left\vert X\right\vert }](2\gamma-1+2{{\mathrm{i}}} v)}{\Gamma(\gamma-1/2+{{\mathrm{i}}} v)}\,dv\right\vert ^{2}=R_{1}-R_{2}.\end{aligned}$$ Note that $$\begin{aligned} R_{2} & \leq\frac{1}{(2x)^{2\left( \gamma-1\right) }}\left( \int _{-1/h_{n}}^{1/h_{n}}\left\vert \frac{\mathcal{M}[p_{\left\vert X\right\vert }](2\gamma-1+2{{{\mathrm{i}}}}v)}{\Gamma(\gamma-1/2+{{{\mathrm{i}}}}v)}\,\right\vert dv\right) ^{2}\\ & =\frac{1}{(2x)^{2\left( \gamma-1\right) }}\left\vert \int_{-1/h_{n}}^{1/h_{n}}\bigl|\mathcal{M}[p_{T}](\gamma+{{{\mathrm{i}}}}v)\bigr|\,dv\right\vert ^{2}<C<\infty\end{aligned}$$ and furthermore $$\begin{aligned} R_{1} & =\frac{1}{x^{2(\gamma-1)}\pi}\int_{-1/h_{n}}^{1/h_{n}}\int_{-1/h_{n}}^{1/h_{n}}\frac{1}{x^{{{{\mathrm{i}}}}(v-u)}}\frac{\Gamma(2\gamma-3/2+{{{\mathrm{i}}} }(v-u))\mathcal{M}[p_{T}](2\gamma-1+{{{\mathrm{i}}}}(v-u))}{\Gamma(\gamma-1/2+{{{\mathrm{i}}} 
}v)\Gamma(\gamma-1/2-{{{\mathrm{i}}}}u)}\,dv\,du\\ & =\frac{1}{x^{2(\gamma-1)}\pi}\times I_{n}.\end{aligned}$$ Without loss of generality we may take $x=1$ (for $x\neq1$ the proof is similar). Observe that for $v\in\mathbb{R},$ $$|\Gamma(\gamma-1/2+{{{\mathrm{i}}}}v)|\geq C_{1}1_{\left\vert v\right\vert \leq 2}+C_{2}1_{\left\vert v\right\vert >2}\left\vert v\right\vert ^{\gamma -1}e^{-\pi\left\vert v\right\vert /2},\label{g1}$$ for some constants $C_{1}>0,$ $C_{2}>0$ (depending on $\gamma$), and that $$|\Gamma(2\gamma-3/2+{{{\mathrm{i}}}}(v-u))|\leq D_{1}1_{\left\vert u-v\right\vert \leq2}+1_{\left\vert u-v\right\vert >2}D_{2}\left\vert u-v\right\vert ^{2(\gamma-1)}e^{-\pi\left\vert u-v\right\vert /2}\label{g2}$$ for some $D_{1}>0,$ $D_{2}>0.$ Let $\rho_{n}=h_{n}^{-\alpha}$ for $0<\alpha<1/2$. By the estimates (\[g1\]) and (\[g2\]), one can straightforwardly derive that the integral $$I_{1,n,\rho_{n}}:=\int_{-1/h_{n}}^{1/h_{n}}\int_{-1/h_{n}}^{1/h_{n}}1_{\left\vert v-u\right\vert \geq\rho_{n}}\frac{\Gamma(2\gamma-3/2+{{{\mathrm{i}}} }(v-u))\mathcal{M}[p_{T}](2\gamma-1+{{{\mathrm{i}}}}(v-u))}{\Gamma(\gamma-1/2+{{{\mathrm{i}}} }v)\Gamma(\gamma-1/2-{{{\mathrm{i}}}}u)}\,dv\,du$$ can be bounded from above as $$|I_{1,n,\rho_{n}}|\lesssim h_{n}^{-3\left\vert 1-\gamma\right\vert }e^{\pi\left( \frac{1}{2h_{n}}-\rho_{n}/2\right) }+h_{n}^{-2\left\vert 1-\gamma\right\vert -1}e^{\pi\left( \frac{1}{h_{n}}-\rho_{n}/2\right) }+h_{n}^{-4\left\vert 1-\gamma\right\vert -1}e^{\pi\left( \frac{1}{h_{n}}-\rho_{n}\right) }$$ for $n\rightarrow\infty.$ Similarly $$\begin{gathered} \int_{-1/h_{n}}^{1/h_{n}}\int_{-1/h_{n}}^{1/h_{n}}1_{\left\vert u\right\vert \leq\frac{1}{h_{n}}-\rho_{n}}1_{\left\vert v-u\right\vert \leq\rho_{n}}\frac{\Gamma(2\gamma-3/2+{{{\mathrm{i}}}}(v-u))\mathcal{M}[p_{T}](2\gamma-1+{{{\mathrm{i}}} }(v-u))}{\Gamma(\gamma-1/2+{{{\mathrm{i}}}}v)\Gamma(\gamma-1/2-{{{\mathrm{i}}}}u)}\,dv\,du\\ =O\Bigl(h_{n}^{-l}e^{\pi\left( \frac{1}{h_{n}}-\rho_{n}\right) 
}\Bigr)\end{gathered}$$ and $$\begin{gathered} \int_{-1/h_{n}}^{1/h_{n}}\int_{-1/h_{n}}^{1/h_{n}}1_{\left\vert v\right\vert \leq\frac{1}{h_{n}}-\rho_{n}}1_{\left\vert v-u\right\vert \leq\rho_{n}}\frac{\Gamma(2\gamma-3/2+{{{\mathrm{i}}}}(v-u))\mathcal{M}[p_{T}](2\gamma-1+{{{\mathrm{i}}} }(v-u))}{\Gamma(\gamma-1/2+{{{\mathrm{i}}}}v)\Gamma(\gamma-1/2-{{{\mathrm{i}}}}u)}\,dv\,du\\ =O\Bigl(h_{n}^{-l}e^{\pi\left( \frac{1}{h_{n}}-\rho_{n}\right) }\Bigr)\end{gathered}$$ for some $l>0.$ Hence $$\begin{aligned} I_{2,n,\rho_{n}}:= & \int_{-1/h_{n}}^{1/h_{n}}\int_{-1/h_{n}}^{1/h_{n}}1_{\left\vert v-u\right\vert \leq\rho_{n}}\frac{\Gamma(2\gamma-3/2+{{{\mathrm{i}}} }(v-u))\mathcal{M}[p_{T}](2\gamma-1+{{{\mathrm{i}}}}(v-u))}{\Gamma(\gamma-1/2+{{{\mathrm{i}}} }v)\Gamma(\gamma-1/2-{{{\mathrm{i}}}}u)}\,dv\,du\\ & =\int_{-1/h_{n}}^{1/h_{n}}\int_{-1/h_{n}}^{1/h_{n}}1_{\left\vert u\right\vert \geq\frac{1}{h_{n}}-\rho_{n}}1_{\left\vert v\right\vert \geq \frac{1}{h_{n}}-\rho_{n}}1_{\left\vert v-u\right\vert \leq\rho_{n}}\\ & \times\frac{\Gamma(2\gamma-3/2+{{{\mathrm{i}}}}(v-u))\mathcal{M}[p_{T}](2\gamma-1+{{{\mathrm{i}}}}(v-u))}{\Gamma(\gamma-1/2+{{{\mathrm{i}}}}v)\Gamma(\gamma -1/2-{{{\mathrm{i}}}}u)}\,dv\,du+O\Bigl(h_{n}^{-l}e^{\pi\left( \frac{1}{h_{n}}-\rho_{n}\right) }\Bigr)\\ & =:I_{3,n,\rho_{n}}+O\Bigl(h_{n}^{-l}e^{\pi\left( \frac{1}{h_{n}}-\rho _{n}\right) }\Bigr).\end{aligned}$$ Now let us study the asymptotic behaviour of the integral $I_{3,n,\rho_{n}}.$ To this end, we will use the Stirling formula $$\Gamma(\gamma-1/2+{{{\mathrm{i}}}}v)=\left( \gamma-1/2+iv\right) ^{\gamma -1+iv}e^{-\gamma+1/2-iv}\sqrt{2\pi}(1+O(\left\vert v\right\vert ^{-1})).$$ First consider the integrand of $I_{3,n,\rho_{n}}$ in the case $u,v\rightarrow +\infty,$ where $$\begin{aligned} \Gamma(\gamma-1/2+{{{\mathrm{i}}}}v)\Gamma(\gamma-1/2-{{{\mathrm{i}}}}u) & =2\pi\exp\left[ iv\log v-iu\log u-i\left( v-u\right) \right] \\ & \times\exp\left[ -\frac{\pi}{2}\left( u+v\right) +\left( \gamma -1\right) \left( 
\log v+\log u\right) \right] (1+O(1/u)+O(1/v)).\end{aligned}$$ Then on the set $$\left\{ \left\vert u\right\vert \geq\frac{1}{h_{n}}-\rho_{n}\right\} \cap\left\{ \left\vert v\right\vert \geq\frac{1}{h_{n}}-\rho_{n}\right\} \cap\left\{ \left\vert v-u\right\vert \leq\rho_{n}\right\} \cap\left\{ v\geq0,u\geq0\right\}$$ we define $u=1/h_{n}-r,$ $v=1/h_{n}-s$ with $0<r,s<\rho_{n},$ $\left\vert r-s\right\vert <\rho_{n}$ to get $$\begin{aligned} \Gamma(\gamma-1/2+{{{\mathrm{i}}}}v)\Gamma(\gamma-1/2-{{{\mathrm{i}}}}u) & =2e^{i\left( 1/h_{n}-s\right) \log\left( 1/h_{n}-s\right) -i\left( 1/h_{n}-r\right) \log\left( 1/h_{n}-r\right) -i\left( r-s\right) }\\ & \times h_{n}^{-2\left( \gamma-1\right) }\exp\left[ -\pi/h_{n}\right] \exp\left[ (r+s)\pi\right] \\ & \times(1+O(h_{n}))(1+O(\rho_{n}h_{n})).\end{aligned}$$ Note that due to the choice of $\rho_{n},$ $\rho_{n}h_{n}\downarrow0$ and $\rho_{n}^{2}h_{n}\downarrow0.$ Using the asymptotic expansion $$\left( 1/h_{n}-s\right) \log\left( 1/h_{n}-s\right) -\left( 1/h_{n}-r\right) \log\left( 1/h_{n}-r\right) -\left( r-s\right) =\left( r-s\right) \log\left( 1/h_{n}\right) +O(\rho_{n}^{2}h_{n}),$$ we derive $$\begin{aligned} \Gamma(\gamma-1/2+{{{\mathrm{i}}}}v)\Gamma(\gamma-1/2-{{{\mathrm{i}}}}u) & =2\pi h_{n}^{-2\left( \gamma-1\right) }\exp\left[ -\pi/h_{n}\right] \exp\left[ (r+s)\pi\right] \\ & \times\exp\left[ i\left( r-s\right) \log\left( 1/h_{n}\right) \right] (1+O(\rho_{n}^{2}h_{n})).\end{aligned}$$ Analogously, on the set $$\left\{ \left\vert u\right\vert \geq\frac{1}{h_{n}}-\rho_{n}\right\} \cap\left\{ \left\vert v\right\vert \geq\frac{1}{h_{n}}-\rho_{n}\right\} \cap\left\{ \left\vert v-u\right\vert \leq\rho_{n}\right\} \cap\left\{ v\leq0,u\leq0\right\}$$ we define $u=-1/h_{n}+r,$ $v=-1/h_{n}+s,$ with $0<r,s<\rho_{n},$ $\left\vert r-s\right\vert <\rho_{n},$ to get $$\begin{aligned} \Gamma(\gamma-1/2+{{{\mathrm{i}}}}v)\Gamma(\gamma-1/2-{{{\mathrm{i}}}}u) & =2\pi h_{n}^{-2\left( \gamma-1\right) }\exp\left[ -\pi/h_{n}\right] 
\exp\left[ (r+s)\pi\right] \\ & \times\exp\left[ -i\left( r-s\right) \log\left( 1/h_{n}\right) \right] (1+O(\rho_{n}^{2}h_{n})).\end{aligned}$$ Hence the integral $I_{3,n,\rho_{n}}$ can be decomposed as follows $$I_{3,n,\rho_{n}}=:\frac{h_{n}^{2\left( \gamma-1\right) }}{\pi}\exp\left[ \pi/h_{n}\right] \left\{ {\mathsf{Re}}\lbrack I_{4,n,\rho_{n}}]+O(\rho_{n}^{2}h_{n})\right\},$$ where $$\begin{aligned} I_{4,n,\rho_{n}} & =\int\int1_{0\leq r\leq\rho_{n}}1_{0\leq s\leq\rho_{n}}1_{\left\vert r-s\right\vert \leq\rho_{n}}\exp\left[ -(r+s)\pi\right] \Gamma(2\gamma-3/2+{{{\mathrm{i}}}}(r-s))\\ & \times\mathcal{M}[p_{T}](2\gamma-1+{{{\mathrm{i}}}}(r-s))\exp\left[ i\left( s-r\right) \log\left( 1/h_{n}\right) \right] drds\\ & =\int_{0}^{\rho_{n}}e^{-2v\pi}R_{n}(v)dv\end{aligned}$$ with $$R_{n}(v)=\int1_{0\leq u\leq\rho_{n}-v}e^{-u\pi}\Gamma(2\gamma-3/2+{{{\mathrm{i}}} }u)\mathcal{M}[p_{T}](2\gamma-1+{{{\mathrm{i}}}}u)e^{iu\log\left( 1/h_{n}\right) }du.$$ Using the saddle point method (see, e.g., de Bruijn, [@de1970asymptotic]), it is easy to show that $$\begin{aligned} R_{n}(v) & =e^{i\pi/2}\Gamma(2\gamma-3/2)\mathcal{M}[p_{T}](2\gamma -1)\log^{-1}\left( 1/h_{n}\right) \\ & +e^{i\pi}\left[ \left. \frac{d}{du}\left( \Gamma(2\gamma -3/2+iu)\mathcal{M}[p_{T}](2\gamma-1+iu)\right) \right\vert _{u=0}\right] \log^{-2}\left( 1/h_{n}\right) +O(\log^{-3}\left( 1/h_{n}\right) )\end{aligned}$$ uniformly in $v.$ As a result $${\mathsf{Re}}\lbrack I_{4,n,\rho_{n}}]=\left[ \left. \frac{d}{du}\left( \Gamma(2\gamma-3/2+iu)\mathcal{M}[p_{T}](2\gamma-1+iu)\right) \right\vert _{u=0}\right] \log^{-2}\left( 1/h_{n}\right) +O(\log^{-3}\left( 1/h_{n}\right) ).$$ Combining all above estimates, we finally get $$\begin{aligned} \operatorname{Var}(Z_{n,1}) & =\frac{h_{n}^{2\left( \gamma-1\right) }}{\pi^{2}}\log^{-2}\left( 1/h_{n}\right) \exp\left[ \pi/h_{n}\right] \label{var_asymp}\\ & \times\left\{ \left[ \left. 
\frac{d}{du}\left( \Gamma(2\gamma -3/2+iu)\mathcal{M}[p_{T}](2\gamma-1+iu)\right) \right\vert _{u=0}\right] \right. \nonumber\\ & \left. +O(\log^{-1}\left( 1/h_{n}\right) )+O(\rho_{n}^{2}h_{n}\log ^{2}\left( 1/h_{n}\right) )+O\Bigl(e^{-\pi\rho_{n}/2}\log^{2}\left( 1/h_{n}\right) \Bigr)\right\} .\nonumber\end{aligned}$$ Using the decomposition , the Lyapounov condition for some $\delta>0$ $$\frac{\mathbb{E}|Z_{n,1}-\mathbb{E}Z_{n,1}|^{2+\delta}}{n^{\delta /2}[\operatorname{Var}(Z_{n,1})]^{1+\delta/2}}\rightarrow0,\quad n\rightarrow\infty$$ is easy to verify, since $\mathbb{E}Z_{n,1}\rightarrow p_{T}(x).$ Proof of Proposition \[arc\] ---------------------------- Let $\theta_{\max}$ be such that $A=\tan\theta_{\max}.$ At the arc $K_{R}:w=R\,e^{i\theta},$ $-\theta_{\max}<\theta<\theta_{\max},$ it holds that$$\begin{aligned} \left\vert \int_{K_{R}}w^{z-1}\mathcal{L}[p_{T}](w)dw\right\vert & \leq R\theta_{\max}\cdot R^{\operatorname{Re}z-1}\int e^{-xR\cos\theta_{\max}}p_{T}(x)dx\\ & \leq B\theta_{\max}R^{\operatorname{Re}z}\int\,e^{-xR\cos\theta_{\max}}dx=B\theta_{\max}\frac{R^{\operatorname{Re}z-1}}{\cos\theta_{\max}}\rightarrow0,\end{aligned}$$ for $0<\operatorname{Re}z<1,$ where $\sup_{x>0}p_{T}(x)\leq B.$ Proof of Proposition \[polR\] ----------------------------- By (\[FpX\]) we derive for the bias of $p_{T,n}(x),$ $x>0,$ $$\begin{gathered} |\mathbb{E}[p_{T,n}(x)]-p_{T}(x)|=\left\vert \frac{1}{2\pi}\int_{-U_{n}}^{U_{n}}\frac{\mathrm{E}\left[ \Phi_{n}(1-\gamma-{\mathrm{i}}v,X_{1})\right] }{\Gamma(1-\gamma-{\mathrm{i}}v)}x^{-{\mathrm{i}}v}dv-\int_{-\infty}^{\infty}\mathcal{M}[p_{T}](\gamma+{\mathrm{i}}v)x^{-\gamma-{\mathrm{i}}v}dv\right\vert \\ \leq\left\vert \frac{1}{2\pi}\int_{-U_{n}}^{U_{n}}\frac{\int_{A_{n}}^{\infty }\left[ \psi(\lambda)\right] ^{-\gamma-{\mathrm{i}}v}\mathcal{F}[p_{X}](\lambda )\psi^{\prime}(\lambda)d\lambda}{\Gamma(1-\gamma-{\mathrm{i}}v)}x^{-\gamma-{\mathrm{i}}v}dv\right\vert +\frac{\left\vert x\right\vert 
^{-\gamma}}{2\pi}\int_{\{|v|>U_{n}\}}\left\vert \mathcal{M}[p_{T}](\gamma+{\mathrm{i}}v)\right\vert dv\\ =:(\ast)_{1}+(\ast)_{2}.\end{gathered}$$ Similarly to the proof of Theorem \[sep\_conv\_rates\] we have $$(\ast)_{2}\leq\frac{\left\vert x\right\vert ^{-\gamma}}{2\pi}e^{-\beta U_{n}}\int_{\{|v|>U_{n}\}}\left\vert \mathcal{M}[p_{T}](\gamma+{\mathrm{i}}v)\right\vert e^{\beta|v|}dv\leq e^{-\beta U_{n}}\frac{\left\vert x\right\vert ^{-\gamma}L}{2\pi},$$ and by Lemma \[dif\], $$\begin{aligned} (\ast)_{1} & \lesssim\frac{|x|^{-\gamma}}{2\pi}\int_{-U_{n}}^{U_{n}}\frac{\int_{A_{n}}^{\infty}\lambda^{-2\gamma+1}\left\vert \mathcal{F}[p_{X}](\lambda)\right\vert d\lambda}{\left\vert \Gamma(1-\gamma-{\mathrm{i}}v)\right\vert }dv\\ & \lesssim|x|^{-\gamma}U_{n}^{\gamma-1/2}e^{U_{n}\pi/2}\int_{A_{n}}^{\infty }\frac{\lambda^{-{\varepsilon}}}{\lambda^{2\gamma-1-{\varepsilon}}}\left\vert \mathcal{F}[p_{X}](\lambda)\right\vert d\lambda\lesssim|x|^{-\gamma}\frac{U_{n}^{\gamma-1/2}e^{U_{n}\pi/2}}{A_{n}^{{\varepsilon}}}.\end{aligned}$$ As for the variance $$\begin{aligned} \mathrm{Var}(p_{T,n}(x)) & =\frac{1}{(2\pi)^{2}n}\mathrm{Var}\left[ \int_{-U_{n}}^{U_{n}}\frac{\Phi_{n}(1-\gamma-{\mathrm{i}}v,X_{1})}{\Gamma(1-\gamma-{\mathrm{i}}v)}x^{-\gamma-{\mathrm{i}}v}dv\right] \nonumber\\ & \leq\frac{1}{(2\pi)^{2}n}|x|^{-2\gamma}\left[ \int_{-U_{n}}^{U_{n}}\frac{\sqrt{\mathrm{Var}[\Phi_{n}(1-\gamma-{\mathrm{i}}v,X_{1})]}}{\left\vert \Gamma(1-\gamma-{\mathrm{i}}v)\right\vert }dv\right] ^{2}, \label{hu}\end{aligned}$$ where $$\begin{aligned} \sqrt{\mathrm{Var}[\Phi_{n}(1-\gamma-{\mathrm{i}}v,X_{1})]} & \leq\int_{0}^{A_{n}}\sqrt{\mathrm{Var}[\left[ \psi(\lambda)\right] ^{-\gamma-{\mathrm{i}}v}e^{{\mathrm{i}}X_{1}\lambda}\psi^{\prime}(\lambda)]}d\lambda\\ & =\int_{0}^{A_{n}}\left\vert \psi(\lambda)\right\vert ^{-\gamma}\left\vert \psi^{\prime}(\lambda)\right\vert \sqrt{\mathrm{Var}[e^{{\mathrm{i}}X_{1}\lambda}]}d\lambda.\end{aligned}$$ Due to Lemma \[dif\] we have $$\int_{1}^{A_{n}}\left\vert \psi(\lambda)\right\vert 
^{-\gamma}\left\vert \psi^{\prime}(\lambda)\right\vert \sqrt{\mathrm{Var}[e^{{\mathrm{i}}X_{1}\lambda}]}d\lambda\lesssim\int_{1}^{A_{n}}\lambda^{(1-2\gamma)}d\lambda\leq C_{0}\frac{A_{n}^{2\left( 1-\gamma\right) }}{1-\gamma}$$ and in any case of Lemma \[dif\] it holds $$\int_{0}^{1}\left\vert \psi(\lambda)\right\vert ^{-\gamma}\left\vert \psi^{\prime}(\lambda)\right\vert \sqrt{\mathrm{Var}[e^{{\mathrm{i}}X_{1}\lambda}]}d\lambda\leq\int_{0}^{1}\left\vert \psi(\lambda)\right\vert ^{-\gamma}\left\vert \psi^{\prime}(\lambda)\right\vert d\lambda\leq\frac{C_{1}}{1-\gamma}$$ for some constants $C_{0},C_{1}>0.$ Hence from (\[hu\]) we get by (\[gamma\_asymp\]), $$|x|^{2\gamma}\mathrm{Var}(p_{T,n}(x))\leq\frac{1}{(2\pi)^{2}n}\left( CU_{n}^{\gamma-1/2}e^{U_{n}\pi/2}\frac{A_{n}^{2\left( 1-\gamma\right) }}{1-\gamma}\right) ^{2}=:(\ast)_{3},$$ and by gathering $(\ast)_{1},$ $(\ast)_{2},$ and $(\ast)_{3},$ $$\sqrt{\mathrm{E}\left[ x^{2\gamma}\left\vert p_{T,n}(x)-p_{T}(x)\right\vert ^{2}\right] }\lesssim\frac{C}{2\pi\left( 1-\gamma\right) \sqrt{n}}U_{n}^{\gamma-1/2}e^{U_{n}\pi/2}A_{n}^{2\left( 1-\gamma\right) }+\frac{U_{n}^{\gamma-1/2}e^{U_{n}\pi/2}}{A_{n}^{{\varepsilon}}}+e^{-\beta U_{n}}.$$ Next, the choices (\[ch1\]) and (\[ch2\]) lead to the desired result. Appendix ======== \[ExpBounds\] Let $Z_{j},$ $j=1,\ldots, n, $ be a sequence of independent, identically distributed random variables. 
Fix some $u>0$ and define $$\begin{aligned} \varphi_{n}(v):=\frac{1}{n}\sum_{j=1}^{n}\exp\left\{ \left( u +{\mathrm{i}}v \right) Z_{j}\right\} , \quad v\in\mathbb{R}.\end{aligned}$$ Furthermore let $w$ be a positive monotone decreasing Lipschitz function on $\mathbb{R}_{+}$ such that $$\label{decreasing_w}0<w(z)\leq\frac{1}{\sqrt{\log(e+|z|)}}, \quad z\in\mathbb{R}_{+}.$$ Suppose that $\mathbb{E} \bigl[ e^{pu Z}\bigr]<\infty$ and $\mathbb{E} \bigl[ |Z|^{p}\bigr]<\infty$ for some $p>2.$ Then with probability $1$ $$\begin{aligned} \label{MINEQ}\left\| \varphi_{n}- \varphi\right\| _{L_{\infty}(\mathbb{R},w)}=O\left( \sqrt{\frac{\log n}{n}}\right) ,\end{aligned}$$ where $\varphi(v):={\mathbb{E}}\exp\left\{ \left( u+{\mathrm{i}}v\right) Z_{1}\right\}.$ Fix a sequence $\Xi_{n}\to \infty$ as $n\to \infty.$ Denote $$\begin{aligned} \mathcal{W}_{n}^{1}(v) &:=& \frac{w(v) }{n} \; \sum_{j=1}^{n} \Bigl( e^{(u+{\mathrm{i}}v) Z_{j}} \mathbb{I}\left\{ e^{u Z_{j }} < \Xi_{n} \right\} - {\mathbb{E}}\left[ e^{(u+{\mathrm{i}}v) Z} \mathbb{I}\left\{ e^{u Z} < \Xi_{n} \right\} \right] \Bigr),\\ \mathcal{W}_{n}^{2}(v) &:=& \frac{w(v) }{n} \; \sum_{j=1}^{n} \Bigl( e^{(u+{\mathrm{i}}v) Z_{j}} \mathbb{I}\left\{ e^{u Z_{j }} \geq \Xi_{n} \right\} - {\mathbb{E}}\left[ e^{(u+{\mathrm{i}}v) Z} \mathbb{I}\left\{ e^{u Z} \geq \Xi_{n} \right\} \right] \Bigr),\end{aligned}$$ where $Z$ is a random variable with the same distribution as $Z_{1}$. The main idea of the proof is to show that $$\begin{aligned} \label{aim1} |\mathcal{W}_{n}^{1}(v)|&=&O_{a.s.}\left(\sqrt{\frac{\log n}{n}}\right),\\ \label{aim2} |\mathcal{W}_{n}^{2}(v) |&=&O_{a.s.}\left(\sqrt{\frac{\log n}{n}}\right)\end{aligned}$$ under a proper choice of the sequence $\Xi_{n}.$\ **Step 1.** The aim of the first step is to show . Consider the sequence $ A_{k}=e^{k},\, k\in \mathbb{N} $ and cover each interval $ [-A_{k},A_{k}] $ by $ M_{k}=\left(\lfloor 2A_{k}/\gamma \rfloor +1 \right) $ disjoint small intervals $ \Lambda_{k,1},\ldots,\Lambda_{k,M_{k}} $ of length $ \gamma. 
$ Let $ v_{k,1},\ldots, v_{k,M_{k}} $ be the centers of these intervals. We have for any natural $ K>0 $ $$\begin{gathered} \max_{k=1,\ldots,K}\sup_{A_{k-1}<| v |\leq A_{k}}|\mathcal{W}_{n}^{1}(v)|\leq \max_{k=1,\ldots,K}\max_{1\leq m \leq M_{k}} \sup_{v\in \Lambda_{k,m}}|\mathcal{W}_{n}^{1}(v)-\mathcal{W}_{n}^{1}(v_{k,m})| \\ +\max_{k=1,\ldots,K}\max_{\Bigl\{\substack{ 1\leq m \leq M_{k}:\\ | v_{k,m} |>A_{k-1}}\Bigr\}}|\mathcal{W}_{n}^{1}(v_{k,m})|.\end{gathered}$$ Hence for any positive $\lambda$, $$\begin{gathered} \label{DEC1} {\operatorname{P}}\left( \max_{k=1,\ldots,K}\sup_{A_{k-1}< | v |\leq A_{k}}|\mathcal{W}_{n}^{1}(v)|>\lambda \right) \leq {\operatorname{P}}\left(\sup_{| v_{1}-v_{2} |<\gamma}|\mathcal{W}_{n}^{1}(v_{1})-\mathcal{W}_{n}^{1}(v_{2})|>\lambda/2\right) \\ + \sum_{k=1}^{K}\sum_{\Bigl\{\substack{ 1\leq m \leq M_{k}:\\ | v_{k,m} |>A_{k-1}}\Bigr\}}{\operatorname{P}}(|\mathcal{W}_{n}^{1}(v_{k,m})|>\lambda/2).\end{gathered}$$ We proceed with the first summand in . It holds for any $ v_{1},v_{2}\in \mathbb{R} $ $$\begin{aligned} \label{WNDIFF} \nonumber |\mathcal{W}_{n}^{1}(v_{1})-\mathcal{W}_{n}^{1}(v_{2})|&\leq& 2 \: \Xi_{n} \left| w( v_{1})-w( v_{2} ) \right| +\frac{1}{n}\sum_{j=1}^{n} \Bigl[ \left| e^{(u+{\mathrm{i}}v_{1}) Z_{j}} - e^{(u+{\mathrm{i}}v_{2}) Z_{j}} \right| I\left\{ e^{u Z_{j}} < \Xi_{n} \right\} \Bigr]\\ &&\nonumber+ \Bigl| {\mathbb{E}}\left[ \left( e^{(u+{\mathrm{i}}v_{1}) Z} - e^{(u+{\mathrm{i}}v_{2}) Z} \right) I\left\{ e^{u Z} < \Xi_{n} \right\} \right] \Bigr| \\ &\leq& \left| v_{1}-v_{2} \right|\:\Xi_{n}\: \left[ 2 \: L_{w}+\frac{1}{n}\sum_{j=1}^{n}| Z_{j}|+{\mathbb{E}}| Z| \right],\end{aligned}$$ where $ L_{\omega } $ is the Lipschitz constant of $ w$ and $Z$ is a random variable distributed as $Z_{1}$. 
Next, the Markov inequality implies $$\begin{aligned} {\operatorname{P}}\left\{ \frac{1}{n}\sum_{j=1}^{n}\Bigl[| Z_{j} |-{\mathbb{E}}| Z |\Bigr]> c \right\}\leq c^{-p}n^{-p}\:{\mathbb{E}}\left| \sum_{j=1}^{n}\Bigl[| Z_{j} |-{\mathbb{E}}| Z | \Bigr] \right|^{p}\end{aligned}$$ for any $ c >0. $ Note that $$\begin{aligned} {\mathbb{E}}\left| \sum_{j=1}^{n}\Bigl[ | Z_{j} |-{\mathbb{E}}| Z | \Bigr] \right|^{p}\leq c_{p} n^{p/2},\end{aligned}$$ for some constant $ c_{p} $ depending on $ p$, and we obtain from $$\label{LINEQ} {\operatorname{P}}\Bigl\{ \sup_{| v_{1}-v_{2}|<\gamma}|\mathcal{W}_{n}^{1}(v_{1})-\mathcal{W}_{n}^{1}(v_{2})|>2\gamma \Xi_{n} (L_{\omega}+{\mathbb{E}}|Z|+c)\Bigr\} \leq C_{p} \,c^{-p} n^{-p/2}.$$ Hence if $\gamma \Xi_{n}\geq 1$ and $\lambda\geq 4(L_{\omega}+{\mathbb{E}}|Z|+c)$ we get $${\operatorname{P}}\Bigl\{ \sup_{| v_{1}-v_{2}|<\gamma}|\mathcal{W}_{n}^{1}(v_{1})-\mathcal{W}_{n}^{1}(v_{2})|>\lambda/2\Bigr\} \leq C_{p} \,c^{-p} n^{-p/2}.$$ Now we turn to the second term on the right-hand side of . Applying the Bernstein inequality, we get $$\begin{aligned} {\operatorname{P}}\left(|{\mathsf{Re}}\left[ \mathcal{W}_{n}^{1}(v_{k,m}) \right]|>\lambda/4\right) \leq \exp\left( -\frac{ \lambda^{2}n }{ 32(\Xi_{n} w(A_{k-1}) \lambda/3 +w^2(A_{k-1})\, {\mathbb{E}}[e^{2uZ}])} \right).\end{aligned}$$ Similarly, $$\begin{aligned} {\operatorname{P}}\left(|{\mathsf{Im}}\left[ \mathcal{W}_{n}^{1}(v_{k,m}) \right]|>\lambda/4\right) \leq \exp\left( -\frac{ \lambda^{2}n }{ 32(\Xi_{n} w(A_{k-1}) \lambda/3 +w^2(A_{k-1})\, {\mathbb{E}}[e^{2uZ}])} \right).\end{aligned}$$ Therefore $$\begin{aligned} \sum_{\{ | v_{k,m} |>A_{k-1} \}}{\operatorname{P}}(|\mathcal{W}_{n}^{1}(v_{k,m})|>\lambda/2)\leq \left(\lfloor 2A_{k}/\gamma \rfloor +1 \right)\exp\left( -\frac{ \lambda^{2}n }{ 32(\Xi_{n} w(A_{k-1}) \lambda/3 +w^2(A_{k-1})\, {\mathbb{E}}[e^{2uZ}])} \right).\end{aligned}$$ Set now $\gamma= \sqrt{(\log n)/n},$ $\lambda = \zeta \sqrt{(\log n)/n}$ and $\Xi_n=\sqrt{n/\log (n)},$ 
then $$\begin{aligned} \sum_{\{ | v_{k,m} |>A_{k-1} \}}{\operatorname{P}}(|\mathcal{W}_{n}^{1}(v_{k,m})|>\lambda/2) &\lesssim & A_k\,\sqrt{\frac{n}{\log(n) }} \exp\left( -\frac{ \lambda^{2}n }{ 32\bigl(\Xi_{n} w(A_{k-1}) \lambda/3 +w^2(A_{k-1})\, {\mathbb{E}}[e^{2uZ}]\bigr)} \right) \\ &\lesssim & \sqrt{\frac{n}{\log(n) }} \exp\left(-k+k\left[1- \frac{\zeta^{2}\log (n)}{32(1+{\mathbb{E}}[e^{2uZ}])}\right] \right).\end{aligned}$$ Assuming that $ \zeta^2\geq 32\theta (1+{\mathbb{E}}[e^{2uZ}])$ for some $\theta>1$, we arrive at $$\begin{aligned} \sum_{k=2}^{\infty}\sum_{\{ |v_{k,m} |>A_{k-1} \}}{\operatorname{P}}(|\mathcal{W}_{n}^{1}(v_{k,m})|>\lambda/2) & \lesssim & e^{-k}\frac{n^{1/2-\theta}}{\sqrt{\log(n)}}, \quad n\to \infty.\end{aligned}$$ **Step 2**. Now we turn to . Consider the sequence $$R_{n}(v) := \frac{1 }{n} \; \sum_{j=1}^{n} e^{(u+{\mathrm{i}}v) Z_{j}} \mathbb{I}\left\{ e^{u Z_{j }} \geq \Xi_{n} \right\}.$$ By the Markov inequality we get for any $p>1$ $$\left| {\mathbb{E}}\left[ R_{n}(v) \right] \right| \leq {\mathbb{E}}\left[ e^{u Z_{j}} \right] {\operatorname{P}}\left\{ e^{u Z_{j }} \geq \Xi_{n} \right\} \leq \Xi_{n}^{-p} \;\; {\mathbb{E}}\left[ e^{u Z_{j}} \right] \; {\mathbb{E}}\left[ e^{u p Z_{j}} \right] = o\Bigl(\sqrt{(\log n)/ n}\Bigr).$$ Set $\eta_{k}=2^{k}, k=1,2,\ldots$, then it holds for any $p>2$ $$\sum_{k=1}^{\infty}{\operatorname{P}}\Bigl\{ \max_{j=1,\ldots,\eta_{k+1}} e^{u Z_{j}} \geq \Xi_{\eta_{k}}\Bigr\} \leq \sum_{k=1}^{\infty} \eta_{k+1} {\operatorname{P}}\{ e^{u Z} \geq \Xi_{\eta_{k}}\} \leq {\mathbb{E}}[e^{p u Z}] \sum_{k=1}^{\infty} \eta_{k+1} \Xi_{\eta_{k}}^{-p}<\infty.$$ By the Borel-Cantelli lemma, $${\operatorname{P}}\Bigl\{ \max_{j=1,\ldots,\eta_{k+1}} e^{u Z_{j}} \geq \Xi_{\eta_{k}} \quad\mbox {for infinitely many } k\Bigr\} = 0.$$ From here it follows that $R_{n}(v)- {\mathbb{E}}R_{n}(v) = o\Bigl(\sqrt{(\log n)/ n}\Bigr)$ uniformly in $v$. This completes the proof. 
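As an illustration (not part of the proof), the almost-sure rate (\[MINEQ\]) can be observed in a toy Monte Carlo experiment in which the limit is explicit. The sketch below assumes $Z$ standard normal, so that $\varphi(v)=\exp((u+{\mathrm{i}}v)^{2}/2)$ in closed form, takes the weight $w(z)=1/\sqrt{\log(e+z)}$ saturating (\[decreasing\_w\]), and approximates the supremum over $\mathbb{R}$ by a finite grid; the value $u=0.5$ and the grid are arbitrary choices.

```python
import numpy as np

def weighted_sup_error(n, u=0.5, seed=0):
    """Weighted sup-norm distance between the empirical exponential
    moment phi_n(v) = mean_j exp((u+iv) Z_j) and its limit, in a toy
    setting: Z ~ N(0,1), so phi(v) = E exp((u+iv)Z) = exp((u+iv)^2/2).
    The sup over v is approximated on a finite grid (an assumption;
    the lemma takes the sup over all of R)."""
    z = np.random.default_rng(seed).standard_normal(n)
    euz = np.exp(u * z)                    # e^{u Z_j}, reused for every v
    err = 0.0
    for v in np.linspace(-20.0, 20.0, 401):
        phi_n = np.mean(euz * np.exp(1j * v * z))    # empirical moment
        phi = np.exp((u + 1j * v) ** 2 / 2.0)        # exact moment
        w = 1.0 / np.sqrt(np.log(np.e + abs(v)))     # admissible weight
        err = max(err, w * abs(phi_n - phi))
    return err

# The weighted sup error shrinks roughly like sqrt(log(n)/n).
for n in (1000, 10000, 100000):
    print(n, weighted_sup_error(n), np.sqrt(np.log(n) / n))
```

The error decreases with $n$ at roughly the predicted rate; the grid truncation at $|v|=20$ is harmless here because $|\varphi(v)|$ decays rapidly in $|v|$ for Gaussian $Z$.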
\[dif\] Let $(L_{t},\,t\geq0)$ be a Lévy process with the triplet $(\mu,\sigma^{2},\nu).$ Suppose that $\int_{\{|x|>1\}}|x|\nu(dx)<\infty,$ and that $\sigma$ and $\nu$ are not both zero. It then holds for $\psi(u)=-\log(\mathbb{E}(\exp({\mathrm{i}}uL_{1})))$ $$(i):\text{ }\left\vert \psi(u)\right\vert \lesssim u^{2}\text{ \ \ and \ \ }(ii):\text{ }\left\vert \psi^{\prime}(u)\right\vert \lesssim u,\text{ \ \ }u\rightarrow\infty. \label{uinf}$$ Further, if $$d=\mu+\int_{\{|x|>1\}}x\nu(dx)\neq0 \label{dr}$$ we have $$(i):\text{ }\left\vert \psi(u)\right\vert \gtrsim u\text{ \ \ and \ }(ii):\text{ \ }\left\vert \psi^{\prime}(u)\right\vert \lesssim1,\text{ \ \ }u\downarrow0. \label{u0}$$ If $d=0$ we have in the case $\nu(\{|x|>1\}\cap dx)\equiv0,$ $$(i):\text{ }\left\vert \psi(u)\right\vert \gtrsim u^{2},\text{ \ \ and \ \ }(ii):\text{ }\left\vert \psi^{\prime}(u)\right\vert \lesssim u,\text{ \ \ }u\downarrow0, \label{u01}$$ and in the case $\nu(\{|x|>1\}\cap dx)\neq0,$ $$(i):\text{ }\left\vert \psi(u)\right\vert \gtrsim u,\text{ \ \ and \ \ }(ii):\text{ }\left\vert \psi^{\prime}(u)\right\vert =o(1),\text{ \ \ }u\downarrow0. \label{u02}$$ In general we have $$\psi(u)=-{\mathrm{i}}u\mu+\frac{u^{2}\sigma^{2}}{2}+\int_{\mathbb{R}}(1-e^{{\mathrm{i}}ux}+{\mathrm{i}}ux1_{\left\vert x\right\vert \leq1})\nu(dx), \label{as1}$$ where $$\begin{aligned} \int_{\mathbb{R}}(1-e^{{\mathrm{i}}ux}+{\mathrm{i}}ux1_{\left\vert x\right\vert \leq1})\nu(dx) & =u^{2}\int_{\{|x|\leq1\}}\frac{1-e^{{\mathrm{i}}ux}+{\mathrm{i}}ux}{\left( ux\right) ^{2}}x^{2}\nu(dx)\label{as}\\ & +\int_{\{|x|>1\}}\left( 1-e^{{\mathrm{i}}ux}\right) \nu(dx).\nonumber\end{aligned}$$ Note that $$0<c_{1}<\frac{\left\vert 1-e^{{\mathrm{i}}y}+{\mathrm{i}}y\right\vert }{y^{2}}<c_{2}\text{ \ for \ }y\in\mathbb{R},$$ with $0<c_{1}<c_{2},$ and that $$\int_{\{|x|>1\}}\left( 1-e^{{\mathrm{i}}ux}\right) x\nu(dx)\longrightarrow \int_{\{|x|>1\}}x\nu(dx)\text{ \ \ for }u\rightarrow\infty$$ by the Riemann–Lebesgue lemma. 
This yields (\[uinf\])-$(i).$ It is not difficult to show by standard arguments that due to the integrability condition we have $$\psi^{\prime}(u)=-{\mathrm{i}}\mu+u\sigma^{2}-{\mathrm{i}}\int_{\mathbb{R}}(e^{{\mathrm{i}}ux}-1_{\left\vert x\right\vert \leq1})x\nu(dx).$$ Next, (\[uinf\])-$(ii)$ follows by observing that $$\int_{\{|x|\leq1\}}(e^{{\mathrm{i}}ux}-1)x\nu(dx)=u\int_{\{|x|\leq1\}}\frac{e^{{\mathrm{i}}ux}-1}{ux}x^{2}\nu(dx),$$ where $\left( e^{{\mathrm{i}}y}-1\right) /y$ is bounded for $y\in \mathbb{R}.$ Suppose $d\neq0.$ By (\[dr\]), $\psi^{\prime}(0)=-{\mathrm{i}}d\neq0,$ and since $\psi(0)=0$ we have (\[u0\])-$(i),$ and (\[u0\])-$(ii)$ is obvious. Next suppose $d=0,$ i.e. $\psi^{\prime}(0)=0.$ We then have $$\begin{aligned} \psi(u) & =\psi(u)-u\psi^{\prime}(0)=\psi(u)+{\mathrm{i}}ud \\ & =\frac{u^{2}\sigma^{2}}{2}+\int_{\mathbb{R}}(1-e^{{\mathrm{i}}ux}+{\mathrm{i}}ux1_{\left\vert x\right\vert \leq1})\nu(dx)+{\mathrm{i}}u\int_{\{|x|>1\}}x\nu(dx)\\ & =\frac{u^{2}\sigma^{2}}{2}+\int_{\{|x|\leq1\}}(1-e^{{\mathrm{i}}ux}+{\mathrm{i}}ux)\nu(dx)\\ & +\int_{\{|x|>1\}}(1-e^{{\mathrm{i}}ux})\nu(dx)+{\mathrm{i}}u\int_{\{|x|>1\}}x\nu(dx)\end{aligned}$$ and $$\begin{aligned} \psi^{\prime}(u) & =\psi^{\prime}(u)-\psi^{\prime}(0)\\ & =u\sigma^{2}-{\mathrm{i}}\int_{\mathbb{R}}(e^{{\mathrm{i}}ux}-1_{\left\vert x\right\vert \leq1})x\nu(dx)+{\mathrm{i}}\int_{\{|x|>1\}}x\nu(dx).\end{aligned}$$ If $\nu(\{|x|>1\}\cap dx)\equiv0$ we thus have $$\begin{aligned} \psi(u) & =\frac{u^{2}\sigma^{2}}{2}+\int_{\{|x|\leq1\}}(1-e^{{\mathrm{i}}ux}+{\mathrm{i}}ux)\nu(dx)\\ & =\frac{u^{2}\sigma^{2}}{2}+u^{2}\int_{\{|x|\leq1\}}\frac{1-e^{{\mathrm{i}}ux}+{\mathrm{i}}ux}{\left( ux\right) ^{2}}x^{2}\nu(dx)\end{aligned}$$ and we observe that $$\operatorname{Re}\left( 1-e^{{\mathrm{i}}ux}+{\mathrm{i}}ux\right) =1-\cos(ux)\geq0$$ so in particular $\operatorname{Re}\psi(u)\gtrsim u^{2}$ while $\left\vert \psi(u)\right\vert \lesssim u^{2}.$ Hence (\[u01\])-$(i)$ is shown. 
Then, $$\begin{aligned} \psi^{\prime}(u) & =u\sigma^{2}-{\mathrm{i}}\int_{\{|x|\leq1\}}(e^{{\mathrm{i}}ux}-1)x\nu(dx)\\ & =u\sigma^{2}-{\mathrm{i}}u\int_{\{|x|\leq1\}}\frac{e^{{\mathrm{i}}ux}-1}{ux}x^{2}\nu(dx)\end{aligned}$$ and note again that $\left( e^{{\mathrm{i}}y}-1\right) /y$ is bounded, hence we have (\[u01\])-$(ii).$ Finally, if $d=0$ and $\nu(\{|x|>1\}\cap dx)\neq0,$ let us write $$\begin{aligned} \psi(u) & =\frac{u^{2}\sigma^{2}}{2}+u^{2}\int_{\{|x|\leq1\}}\frac{1-e^{{\mathrm{i}}ux}+{\mathrm{i}}ux}{\left( ux\right) ^{2}}x^{2}\nu(dx)\\ & +\int_{\{|x|>1\}}(1-\cos(ux))\nu(dx)+{\mathrm{i}}\int_{\{|x|>1\}}\left( ux-\sin(ux)\right) \nu(dx)\end{aligned}$$ where $$0\leq\int_{\{|x|>1\}}\left( ux-\sin(ux)\right) \nu(dx)\leq u\int_{\{|x|>1\}}x\nu(dx)\lesssim u,$$ but due to dominated convergence also $$\int_{\{|x|>1\}}\left( ux-\sin(ux)\right) \nu(dx)=u\int_{\{|x|>1\}}x\nu(dx)+o(1).$$ Hence, $$\int_{\{|x|>1\}}\left( ux-\sin(ux)\right) \nu(dx)\asymp u,\text{ \ \ }u\downarrow0,$$ and from this (\[u02\])-$(i).$ For the derivative we have $$\begin{aligned} \psi^{\prime}(u) & =u\sigma^{2}-{\mathrm{i}}\int_{\mathbb{R}}(e^{{\mathrm{i}}ux}-1_{\left\vert x\right\vert \leq1})x\nu(dx)+{\mathrm{i}}\int_{\{|x|>1\}}x\nu(dx)\\ & =u\sigma^{2}-{\mathrm{i}}u\int_{\{|x|\leq1\}}\frac{e^{{\mathrm{i}}ux}-1}{ux}x^{2}\nu(dx)-{\mathrm{i}}\int_{\{|x|>1\}}\left( e^{{\mathrm{i}}ux}-1\right) x\nu(dx)\\ & =o(1),\text{ \ \ \ }u\downarrow0,\end{aligned}$$ by similar arguments, i.e. 
(\[u02\])-$(ii).$ \[lemma\_gamma\_asymp\] For any $\alpha\geq-2,$ there exist positive constants $C_{1}$ and $C_{2}(\alpha)$ such that uniformly for $\left\vert \beta\right\vert \geq2,$ $$\begin{aligned} \label{gamma_asymp}C_{1}|\beta|^{\alpha-1/2}e^{-|\beta|\pi/2}\leq\left\vert \Gamma(\alpha+{\mathrm{i}}\beta)\right\vert \leq C_{2}(\alpha)|\beta|^{\alpha-1/2}e^{-|\beta|\pi/2}.\end{aligned}$$ \[cor\_integ\_gamma\] For all $0<\alpha<1/2$ and all $U>2,$ it holds $$\label{integ_gamma}\int_{-U}^{U}\frac{d\beta}{\left\vert \Gamma(\alpha+{\mathrm{i}}\beta)\right\vert }\leq CU^{1/2-\alpha}e^{U\pi/2}$$ for a constant $C>0.$ For $\alpha>1/2,$ we have $$\label{integ_gamma1}\int_{-U}^{U}\frac{d\beta}{\left\vert \Gamma(\alpha +{\mathrm{i}}\beta)\right\vert }\leq C_{1}(\alpha)+C_{2}e^{U\pi/2}$$ where $C_{2}$ does not depend on $\alpha.$
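The two-sided bound (\[gamma\_asymp\]) can be checked numerically without any special-function library: for half-integer $\alpha$ the modulus $\left\vert \Gamma(\alpha+{\mathrm{i}}\beta)\right\vert$ is available in closed form from the classical identity $\left\vert \Gamma(1/2+{\mathrm{i}}\beta)\right\vert ^{2}=\pi/\cosh(\pi\beta)$ and the recurrence $\Gamma(z+1)=z\Gamma(z)$. The sketch below (illustrative values only) shows the ratio of $\left\vert \Gamma(\alpha+{\mathrm{i}}\beta)\right\vert$ to the envelope $|\beta|^{\alpha-1/2}e^{-|\beta|\pi/2}$ settling near the Stirling constant $\sqrt{2\pi}$, consistent with the lemma.

```python
import math

def gamma_abs(alpha, beta):
    """|Gamma(alpha + i*beta)| for half-integer alpha = 1/2, 3/2, 5/2, ...,
    computed exactly from |Gamma(1/2 + i*beta)|^2 = pi / cosh(pi*beta)
    and the recurrence |Gamma(z + 1)| = |z| * |Gamma(z)|."""
    val = math.sqrt(math.pi / math.cosh(math.pi * beta))
    a = 0.5
    while a < alpha - 1e-9:
        val *= math.hypot(a, beta)        # |a + i*beta|
        a += 1.0
    return val

def envelope(alpha, beta):
    """The envelope |beta|^(alpha - 1/2) * exp(-pi*|beta|/2) of the lemma."""
    return abs(beta) ** (alpha - 0.5) * math.exp(-math.pi * abs(beta) / 2.0)

# The ratio tends to sqrt(2*pi) = 2.5066... as beta grows, hence it is
# bounded above and below by positive constants, as the lemma asserts.
for alpha in (0.5, 1.5, 2.5):
    for beta in (2.0, 10.0, 40.0):
        print(alpha, beta, gamma_abs(alpha, beta) / envelope(alpha, beta))
```

Beyond half-integer $\alpha$ one would fall back on a library Gamma implementation; the restriction here is only to keep the check exact and dependency-free ($\beta$ must also stay below roughly $200$ to avoid overflow in $\cosh$).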
--- abstract: 'Transmission eigenchannels are building blocks of coherent wave transport in diffusive media, and selective excitation of individual eigenchannels can lead to diverse transport behavior. An essential yet poorly understood property is the transverse spatial profile of each eigenchannel, which is critical for coupling into and out of it. Here, we discover that the transmission eigenchannels of a disordered slab possess localized incident and outgoing profiles, even in the diffusive regime far from Anderson localization. Such transverse localization arises from a combination of reciprocity, local coupling of spatial modes, and nonlocal correlations of scattered waves. Experimentally, we observe signatures of such localization despite finite illumination area. Our results reveal the intrinsic characteristics of transmission eigenchannels in the open slab geometry, commonly used for applications in imaging and energy transfer through turbid media.' author: - Hasan Yılmaz - Chia Wei Hsu - Alexey Yamilov - Hui Cao bibliography: - 'transverse\_localization.bib' title: Transverse localization of transmission eigenchannels --- Spatial inhomogeneities in the refractive index of a disordered medium cause multiple scattering of light. In disordered media such as biological tissue, white paint, and clouds, most of the incident light reflects back, hindering the transfer of energy and information through the media. However, by utilizing the interference of scattered waves, it is possible to prepare optimized wavefronts that completely suppress reflection—a striking phenomenon first predicted in the context of mesoscopic electron transport [@Dorokhov; @1986_Imry_EPL; @Mello1; @Nazarov]. The required incident wavefronts are the eigenvectors of $t^{\dagger}t$ where $t$ is the field transmission matrix; the corresponding eigenvalues give the total transmission. 
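The eigenchannel decomposition just described is conveniently computed through a singular value decomposition. The sketch below uses a hypothetical i.i.d. complex Gaussian matrix in place of a measured field transmission matrix (an assumption: such a matrix lacks the correlations of a real diffusive slab and therefore does not reproduce the bimodal transmission-eigenvalue density, but the linear algebra is identical):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
# Stand-in for a measured field transmission matrix t (hypothetical
# i.i.d. entries; a real t would come from experiment or simulation).
t = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)

# SVD t = U diag(s) V^dagger: the columns of V are the eigenchannel
# input wavefronts (eigenvectors of t^dagger t), and tau_n = s_n^2
# are the transmission eigenvalues.
U, s, Vh = np.linalg.svd(t)
tau = s ** 2

v0 = Vh[0].conj()                         # input wavefront of the top channel
total_transmission = np.linalg.norm(t @ v0) ** 2
print(total_transmission, tau[0])         # the two numbers coincide
```

Injecting the $n$-th input singular vector thus transmits exactly the fraction $\tau_n$ of the incident power, which is what wavefront-shaping experiments with SLMs exploit.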
In a lossless diffusive medium, the transmission eigenvalues $\tau$ span from 0 to 1, leading to closed ($\tau \approx 0$) and open ($\tau \approx 1$) channels. In recent years, spatial light modulators (SLMs) have been used to excite the open channels [@MoskR; @VellekoopR; @RotterR; @Mosk2; @Choi3; @Choi4; @Popoff2; @2016_Bosch_OE; @Wade1; @Cao4] to enhance light transmission through diffusive media. Selective excitation of individual channels can dramatically change the total energy stored inside the random media as well as the spatial distribution of energy density [@Choi1; @Aubry; @Genack1; @Mosk3; @Cao4; @Cao5; @2017_Hong_arXiv]. However, some important questions regarding the transmission eigenchannels remain open. What are the transverse spatial profiles for coupling light into such channels? Once coupled in, how do the eigenchannels spread in the transverse direction? In the Anderson localization regime of transport, a high-transmission channel is formed by coupled spatially localized modes [@1987_Pendry_JPC; @2005_Bertolotti_PRL; @2006_Sebbah_PRL; @Choi2; @2014_Pena_ncomms; @Carminati1]; thus a transversely localized excitation and propagation is expected. However, Anderson localization is extremely hard to achieve in three-dimensional (3D) disordered systems [@Page], and diffusive transport is much more common. In the diffusive regime, the open channels are expected to cover the entire transverse extent of the system [@Choi1; @Choi2], utilizing all available spatial degrees of freedom. ![image](figure_01.pdf){width="\linewidth"} Here we discover that the transmission eigenchannels are transversely localized even in the diffusive regime of transport. In a disordered slab of width $W$ much larger than thickness $L$, all transmission eigenchannels have a finite transverse extent that is much smaller than $W$. In the $W \to \infty$ limit, the channel width approaches an asymptotic value $D_\infty$, which scales as $(kl_t)L$ in two dimensions. 
Here $l_t$ is the transport mean free path, $k = n_0 k_0 = n_0 2\pi/\lambda$, $\lambda$ is the vacuum wavelength, and $n_0$ is the effective refractive index of the slab. Furthermore, the eigenchannels do not spread laterally as they propagate through the slab, and the transverse extent at the output surface is equal to that at the input surface. Experimentally, we observe the transverse localization for high-transmission channels in a diffusive slab of zinc oxide (ZnO) nanoparticles. The finite illumination area modifies the transmission eigenchannels, especially when the size of the illumination region is smaller than $D_\infty$. While the lateral spreading is suppressed for high-transmission channels, it becomes enhanced for low-transmission channels. These properties can be explained in terms of optical reciprocity, bandedness of real-space transmission matrix, and non-local correlations of scattered waves as a result of multipath interference. The transverse localization of transmission eigenchannels that we discover in the diffusive regime is a distinct physical phenomenon from the previously known transverse localization of waves in Anderson-localized systems [@Lagendijk; @Segev1; @Mafi1; @Wong; @VanTiggelen; @2010_VanTiggelen_PRE]. The localization demonstrated here enables selective excitation of individual eigenchannels over input areas substantially smaller than the full extent of a diffusive slab, facilitating coherent control of light penetration and energy distribution in practical conditions. It therefore has potential impact on the advancement of deep-tissue imaging methods [@YangR; @ChoiR; @2015_Park_R] and the manipulation of light–matter interactions inside turbid media [@Vynck1; @Cao7]. 
Transverse localization of eigenchannels ======================================== For a complete characterization of the transmission eigenchannels, we start with numerical simulations where we can exert full control over the incident wavefront and systematically explore the entire parameter space of interest. We solve the two-dimensional (2D) scalar wave equation $[\nabla^2 + k_0^2 \epsilon({\bf r})] \psi({\bf r}) = 0$ on a finite-difference grid. We consider disordered slabs of width $W$ and thickness $L$ in background refractive index $n_0$. The dielectric constant of the slab is modeled as $\epsilon({\bf r}) = n_0^2 + \delta \epsilon({\bf r})$ at each grid point, and $\delta \epsilon({\bf r})$ is a random number drawn from a zero-mean uniform distribution whose width determines the transport mean free paths $l_t$; see section B of the supplement for details. After calculating the field transmission matrix $t$ for the entire slab using the recursive Green’s function method [@1991_Baranger_PRB], we obtain the incident wavefronts $\psi_n^{\rm in}$ of the eigenchannels via $t^\dagger t \psi_n^{\rm in} = \tau_n \psi_n^{\rm in}$, and calculate the spatial profile of the eigenchannels given such incident wavefronts. In this work we focus on scattering systems in the diffusive regime of transport, namely $Nl_t \gg L \gg l_t$, where $N \approx kW/\pi$ is the number of modes. Remarkably, we observe that in wide slabs, the eigenchannels are spatially localized in the transverse direction parallel to the slab; an exemplary open channel is shown in Fig. \[figure1\]a. Even though we impose no constraint on where or how wide the incident wavefront should be, the resulting open channel only occupies a relatively small transverse extent, utilizing just a fraction of the spatial degrees of freedom that are available across the width of the structure. 
Moreover, the open channel does not spread laterally as it propagates through the disordered slab; the transmitted profile is also localized, with a width similar to that of the incidence. As shown in the log-linear plot in Fig. \[figure1\]a, the transverse profile decays exponentially on both input and output surfaces, which is surprising given the wave transport is diffusive. A legitimate question is whether such transverse localization of eigenchannels persists in large systems, as experimentally the slab width $W$ is typically so large that it can be regarded infinite. To find the answer, we carry out a scaling analysis with increasing $W$, with results shown in Fig. \[figure1\]b. We quantify the width of an eigenchannel via the definition of participation number; the input diameter is found from the expression $D_{\rm in}\equiv\left[\int_{0}^{W}\mid\psi(x, z=0)\mid^2dx\right]^2 \Big/ \left[\int_{0}^{W}\mid\psi(x, z=0)\mid^4dx\right]$ where $\psi(x, z=0) $ is the field distribution of the channel at the input surface ($z=0$) of the slab, and similarly we calculate the output diameter $D_{\rm out}$ using the field distribution at the output surface ($z=L$). For each $W$, we consider all open channels (defined as having transmission eigenvalues $\tau_n \geq 1/\mathrm{e}$) in 10 different realizations of disorder. As shown in Fig. \[figure1\]b, we find $D_{\rm in}$ and $D_{\rm out}$ to be the same after ensemble average. In the $W \to \infty$ limit of interest, the open channel remains transversely localized, and its width saturates to an asymptotic value that we denote $D_{\infty}$. The extrapolation of $D_{\infty}$ in the $W\rightarrow\infty$ limit is described in the supplement section B. The absence of eigenchannel spreading, $\langle D_{\rm in} \rangle = \langle D_{\rm out} \rangle$, can be explained by reciprocity. 
Lorentz reciprocity requires the scattering matrix to be symmetric [@2013_Jalas_nphoton], so the transmission matrix coming from one side must be the transpose of the transmission matrix coming from the other side. One can express the transmission matrix through its singular value decomposition, $t = U \sqrt{\tau} V^\dagger$, where the $n$-th column of $V$ and $U$ are the normalized input and output wavefronts of the $n$-th transmission eigenchannel with eigenvalue $\tau_n$. Since $t^{\rm T} = V^* \sqrt{\tau} (U^*)^\dagger$, reciprocity demands that the phase conjugation of the $n$-th eigenchannel output must be precisely the input of the $n$-th eigenchannel coming from the other side, with the same eigenvalue. If the disordered medium is statistically equivalent for light incident from either side, the eigenchannel input width must be statistically identical for both directions. Thus the input and output channel widths should be the same after ensemble average. The above argument applies to all eigenchannels, open or closed. In fact, numerical simulations indicate that all eigenchannels are transversely localized with no lateral spreading, as shown in Fig. \[figure1\]c. In this example (same as in Fig. \[figure1\]a), the system width is $k_0W = 6000$, and random incident wavefronts have an average width of $k_0 D_{\rm in}^{\rm rand}=k_0W/2 = 3000$ from the participation number, but all eigenchannels have widths an order of magnitude smaller. For the closed channels, the transmitted intensities are much weaker than the incident ones, but the width of the transmitted profile (as defined by $D_{\rm out}$) remains the same as that of the incidence. ![**Bandedness of real-space transmission matrix.** **a**, Calculated intensity profile inside a disordered slab when the incident light is focused to a diffraction-limited spot at the front surface, showing the extent of transverse spreading as light diffuses through the slab. 
$D_\mathrm{in}^\text{point}$ and $D_\mathrm{out}^\text{point}$ are the beam widths at the input and output surfaces. The intensity profiles shown are ensemble averaged over 1000 realizations of disorder. **b**, Amplitudes of the elements of the real-space transmission matrix. While the matrix size is given by the slab width $W$, only elements within a distance $\sim L$ of the diagonal are non-vanishing, because the extent of diffusive spreading in the slab is much less than the slab width. The simulation parameters are the same as in Fig. 1a. The inset is an expanded view of a part of the banded transmission matrix. []{data-label="figure2"}](figure_02.pdf){width="\linewidth"} ![image](figure_03.pdf){width="\linewidth"} Origin of transverse localization ================================= While reciprocity explains the absence of lateral spreading, it remains to be understood why the eigenchannels are transversely localized in the first place. We can gain insight by examining the real-space transmission matrix. Although scattering ensures that light with a specific incident angle is coupled into all outgoing angles once $L$ exceeds $l_t$, this is not the case in real space. Given a point-like excitation at the input surface, light spreads laterally as it diffuses through the disordered slab, covering a region of width on the order of $L$ at the output surface; this is shown in Fig. \[figure2\]a. Such geometric local spreading is the origin of the much celebrated “memory effect” [@1988_Freund_PRL; @1989_Berkovits_PRB; @2015_Judkewitz_nphys; @Vellekoop2]. As a result, the input and output spatial modes are not fully mixed, which manifests as non-vanishing elements only within a distance $\sim L$ of the diagonal of the real-space transmission matrix ([*i.e.*]{}, the surface-to-surface Green’s function), as shown in Fig. \[figure2\][b]{}.
It is noteworthy that 2D Anderson localization is absent in our systems, since the real-space transmission matrix bandwidth is proportional to the sample thickness in all of the systems we study here (see Fig. \[figure8\] in the supplement). Similarly, the real-space matrix $t^\dagger t$ also exhibits a bandwidth proportional to $L$. Random matrices with dominant near-diagonal elements were previously studied in the context of quantum chaos; it was found that the eigenvectors of such “band random matrices” are exponentially localized [@Izrailev; @IzrailevR; @Mirlin]. It is therefore tempting to explain the transverse localization of eigenchannels through the “bandedness” of the real-space transmission matrix for a wide slab. The standard theory of band random matrices predicts that when the elements of a Hermitian random matrix are non-vanishing within a band of size $b$, the eigenvectors are localized with participation numbers proportional to $b^2$ [@Izrailev; @IzrailevR; @Mirlin]. In the present context, one would then expect the normalized eigenchannel width $kD$ to be on the order of $(kL)^2$, since the dimensionless bandwidth is $b \approx kL$. For the example in Fig. \[figure1\], this argument suggests $k D_\infty \approx 5600$, but the actual value is only $k D_\infty \approx 90$. The far smaller channel width indicates a much stronger transverse localization, which is beyond the standard band random matrix theory. To explore what determines the asymptotic open channel width $D_\infty$, we carry out a systematic study to map out its dependence on the slab thickness $L$ and the transport mean free path $l_t$. As shown in Fig. \[figure3\]a, the open channel width $D_\infty$ in fact scales [*linearly*]{} with the slab thickness $L$, which determines the real-space transmission matrix bandwidth $b$, in contrast to predictions from the standard band random matrix theory.
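The band-random-matrix prediction quoted above can be checked with a minimal sketch. The matrix below is a generic Hermitian band random matrix with uncorrelated Gaussian entries (a stand-in, not the actual real-space transmission matrix); its eigenvector participation numbers come out on the scale of $b^2$, far below the matrix size $N$, consistent with [@Izrailev; @IzrailevR; @Mirlin].

```python
import numpy as np

rng = np.random.default_rng(1)
N, b = 1000, 8  # matrix size and band half-width, with b << N

# Hermitian matrix with uncorrelated complex Gaussian entries inside the band
# |i - j| <= b and zeros outside it.
H = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (H + H.conj().T) / 2
i, j = np.indices((N, N))
H[np.abs(i - j) > b] = 0.0

_, vecs = np.linalg.eigh(H)
pn = 1.0 / np.sum(np.abs(vecs) ** 4, axis=0)  # participation number of each eigenvector

# Eigenvectors are localized on a scale set by b^2, not by the matrix size N.
print(pn.mean())
```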
Meanwhile, even though the transport mean free path $l_t$ does not affect the real-space transmission matrix bandwidth $b$, we find in Fig. \[figure3\]b that the open channel width $D_\infty$ also scales linearly with $l_t$. A dimensional analysis and the scale invariance of the electromagnetic wave equation indicate a prefactor proportional to the wave number $k = n_0 k_0$. Putting these together, we expect a scaling of $D_\infty \propto (kl_t)L$. In Fig. \[figure3\]c, we plot the compiled data of $D_\infty$ as a function of $(kl_t)L$ from $6 \times 6 = 36$ combinations of $(L, l_t)$ for $n_0 = 1.5$ and $6 \times 2 = 12$ combinations of $(L, l_t)$ for $n_0 = 1$; each $D_\infty$ is determined from 8 values of $W$ and 10 realizations of disorder (totaling $>3000$ simulations). Indeed we observe the $D_\infty \propto (kl_t)L$ scaling. A least-squares fit gives a proportionality constant of 0.68, close to $2/3$. Therefore, we find the asymptotic open channel width $D_\infty \approx (2/3)(kl_t)L$ in 2D. Note that previous studies [@Choi1; @Choi2] did not find such transverse localization in the diffusive transport regime because the system widths $W$ used in those simulations were not large enough. Note also that this eigenchannel width $D_\infty$ is generally far smaller than the 2D localization length $\xi_{\rm 2D} \approx l_t e^{\pi k l_t/2}$. The reduction of the eigenchannel width from $kL^2$ to $kl_tL$ requires explanations beyond the bandedness of the real-space transmission matrix. The key factor is the correlations among the non-zero matrix elements induced by multiple scattering of light in the slab. It is known that multipath interference in scattering media leads to non-local correlations of scattered waves [@Cwilich; @Stone2; @1988_Mello_PRL; @Shapiro; @BerkovitsPR94; @Genack0; @Scheffold1; @2002_Sebbah_PRL; @Yamilov; @Muskens1; @Carminati2; @2018_Bertolotti_PRX; @Akkermans].
When we replace the non-vanishing elements of the real-space transmission matrix with uncorrelated complex Gaussian random numbers ([*i.e.*]{}, artificially removing the correlations), we observe much wider eigenchannels, whose widths scale as $kL^2$ as predicted by standard band random matrix theory. Stronger scattering (smaller $kl_t$) enhances the non-local correlations and leads to tighter transverse localization of the eigenchannels. Extending such a scaling study to disordered slabs in 3D is a daunting computational task. Nevertheless, we expect transverse localization of transmission eigenchannels in 3D at both the input and output surfaces, since 3D systems also possess banded real-space transmission matrices, non-local correlations, and reciprocity. ![image](figure_04.pdf){width="\linewidth"} Experimental results ==================== To search for experimental evidence of transverse localization of transmission eigenchannels, we measure the spatial profiles of individual eigenchannels at the input and output surfaces of a 3D scattering slab. The sample consists of zinc oxide (ZnO) nanoparticles that are spin-coated on a cover slide. The thickness of the ZnO layer is about 10 $\mu$m, much less than the lateral extent of the layer (2 cm $\times$ 2 cm). The average transmittance of light at a wavelength of 532 nm through the sample is approximately 0.2. We start by measuring the transmission matrix of the disordered slab. A simplified schematic of the experimental setup is shown in Fig. \[figure4\]a, with a detailed one given in Fig. \[figure6\] in supplement section A. A spatially uniform monochromatic laser beam at wavelength $\lambda = 532$ nm is modulated by a phase-only SLM. The SLM surface is imaged by a pair of lenses onto the pupil of a microscope objective. Therefore, the spatial profile of the illumination is the 2D Fourier transform of the SLM phase pattern; the illumination area is finite, and its width scales inversely with the SLM macropixel size.
We use the SLM and a CCD camera to measure the field transmission matrix in $k$-space, using a common-path interferometry method akin to references [@Popoff1; @Wade1]. The number of SLM macropixels that modulate the input beam is 2048, and the number of output speckle grains recorded by the camera is about 15000. After measuring the field transmission matrix $t$, we determine the incident wavefronts of individual eigenchannels as the eigenvectors of $t^\dagger t$. Then we display the corresponding phase patterns on the SLM, and record the 2D spatial intensity profiles $I(x,y)$ at the input and output surfaces of the sample with two cameras (CCD1, CCD3). We define the effective area of such a profile through the 2D participation number $A \equiv \left[\iint I(x,y) \mathrm{d}x \, \mathrm{d}y \right]^2 / \left[\iint I^2(x,y) \mathrm{d}x \, \mathrm{d}y\right]$, and the effective width $D$ through $A = \pi \left(D/2\right)^2$. The highest-transmission eigenchannel indeed exhibits narrower spatial profiles (shown in Figs. \[figure4\]b,d): it has $D_\mathrm{in} \approx 10$ $\mu$m and $D_\mathrm{out} \approx 14$ $\mu$m, while random wavefronts have $D_{\rm in}^{\rm rand} \approx 13$ $\mu$m and $D_{\rm out}^{\rm rand} \approx 21$ $\mu$m. The lateral spreading is also less: $\Delta D = D_\mathrm{out} - D_\mathrm{in} \approx 4$ $\mu$m for the highest-transmission eigenchannel, while $\Delta D^{\rm rand} \approx 8$ $\mu$m $\sim L$ for random wavefronts. These are experimental signatures of the transverse localization phenomenon in high-transmission eigenchannels introduced in the previous section. More complex behaviors emerge when we also examine eigenchannels with lower transmission. In our experiment, the low-transmission eigenchannels have incident profiles (Fig. \[figure4\]c) comparable in size to those of random wavefronts. Furthermore, their output profiles (Fig. \[figure4\]e) are wider than the output profile of a random input in terms of the participation number $D_\mathrm{out}$. Figs.
\[figure4\]f,g show the width and spreading of all of the 2048 eigenchannels as a function of the normalized transmission eigenvalue, and compare them to random incident wavefronts. We can see that, in contrast to Fig. \[figure1\]c, the experimental eigenchannel widths reveal a systematic dependence on the transmission eigenvalue, particularly for $D_\mathrm{out}$. Specifically, the transverse spreading increases with decreasing eigenvalue, with the high-transmission eigenchannels exhibiting suppressed lateral spreading and the low-transmission eigenchannels exhibiting enhanced spreading. In the next section, we demonstrate that this discrepancy between numerical and experimental results can be understood by taking into account the finite illumination area, the phase-only modulation, and the noise in the experiment. Effect of incomplete control ============================ ![image](figure_05.pdf){width="\linewidth"} There are important differences between the experimental setup and the idealized scenario considered in Figs. 1–3. In our experiment, the illumination beam-width on the sample surface is finite and comparable to $L$, much smaller than the expected $D_{\infty}$. Also, we use phase-only modulation over a finite fraction of incident angles, and collect a finite fraction of outgoing angles in one polarization. Such experimental conditions lead to incomplete control, which is known to affect the transmission eigenvalues [@Stone1; @Wade1], and we expect them to also modify the eigenchannel profiles. Experimentally it is not possible to separate the different factors, but we can do so with simulations. Numerically we consider 2D disordered slabs with parameters comparable to the experiment (see the caption of Fig. \[figure5\]), with the asymptotic open-channel width being $D_\infty \approx 90$ $\mu$m. Naturally we do not expect a quantitative comparison with the 3D sample in the experiment, but we aim to extract physical insights that do not depend on dimensionality.
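The effective area and width used throughout this comparison follow the participation-number definition given in the experimental section. A minimal numerical sketch (the isotropic Gaussian spot is a hypothetical stand-in for a recorded camera intensity profile):

```python
import numpy as np

def effective_width(I, dx, dy):
    """Effective width D from the 2D participation number:
    A = [sum(I) dx dy]^2 / [sum(I^2) dx dy], then A = pi (D/2)^2."""
    A = (I.sum() * dx * dy) ** 2 / ((I ** 2).sum() * dx * dy)
    return 2.0 * np.sqrt(A / np.pi)

# Hypothetical camera frame: an isotropic Gaussian spot I = exp(-r^2 / (2 w^2)).
x = np.linspace(-50.0, 50.0, 512)
X, Y = np.meshgrid(x, x)
w = 10.0
I = np.exp(-(X ** 2 + Y ** 2) / (2 * w ** 2))

dx = x[1] - x[0]
print(effective_width(I, dx, dx))  # analytically 4w = 40 for this Gaussian spot
```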
We describe finite-width illumination by grouping incident modes into equally-spaced intervals of transverse momenta that model the SLM macropixels [@Wade1]. For random incident wavefronts, the beam widths as defined by the participation number are $D_{\rm in}^{\rm rand} \approx 13$ $\mu$m on the front and $D_\text{out}^\text{rand} \approx 19$ $\mu$m on the back surface. With such finite-width illumination (Fig. \[figure5\]a), we find that eigenchannels with intermediate eigenvalues have incident widths $D_\text{in} \approx D_{\rm in}^{\rm rand}$; this is to be contrasted with the full-width illumination case of Fig. \[figure1\]c. Meanwhile, it is striking that even though the illumination width $D_{\rm in}^{\rm rand}$ is much smaller than the asymptotic eigenchannel width $D_\infty$, both high-transmission and low-transmission channels have input widths even smaller than $D_{\rm in}^{\rm rand}$ (Fig. \[figure5\]a). We attribute this to the fact that these channels utilize multipath interference to enhance or suppress the total transmission. Indeed, in the scattering-paths picture of wave propagation in disordered media, path crossings inside the sample lead to non-local correlations [@BerkovitsPR94; @Akkermans] and enhance the range of transmission eigenvalues [@Wade1]. Therefore, eigenchannels with extremal eigenvalues prefer smaller input beam widths to increase the probability of path crossings. In addition, the extremal eigenchannels preferentially enhance or suppress the output intensity near the center of the beam (see Fig. \[figure11\] in the supplement). Such a non-uniform modification of the transmitted intensity profile results in an effective reduction of the participation number $D_{\rm out}$ for the high-transmission eigenchannels that we observe in Fig. \[figure5\]a, and similarly for the increased $D_{\rm out}$ of the low-transmission eigenchannels. We find that the other sources of incomplete control have relatively minor effects. In Fig.
\[figure5\]b, we include the phase-only modulation of the incident wavefront, as well as the finite numerical aperture (NA) in both illumination and detection (see supplement section B for details). The ranges of transmission eigenvalues and eigenchannel widths both decrease, but the qualitative trends remain the same. Finally, we also model the effect of experimental noise (see supplement section B for details). As shown in Fig. \[figure5\]c, the low-transmission eigenchannels are more sensitive to noise than the high-transmission channels: the input widths of the low-transmission channels become equal to those of random incident wavefronts, while the input widths of the high-transmission channels change only slightly. These results agree qualitatively with our experimental data. Conclusion ========== In conclusion, we discover transverse localization of transmission eigenchannels in diffusive slabs. In the presence of complete control, each eigenchannel has statistically identical input and output widths as a result of optical reciprocity. In a 2D slab, the asymptotic width for open channels is $D_\infty \approx (2/3)kl_tL$, due to the bandedness and non-local correlations of the real-space transmission matrix. We experimentally observe signatures of transverse localization of transmission eigenchannels in a diffusive slab with a finite illumination area. While the transverse spreading is suppressed for high-transmission channels, it is enhanced for low-transmission channels. These results are reproduced numerically and explained via multipath interference effects. Our results provide physical insights into transmission eigenchannels in the open slab geometry and illustrate the effects of local illumination on eigenchannel profiles. This work opens the possibility of controlling transmission eigenchannels in open systems, which will have a significant impact on information and energy delivery through strongly scattering systems.
Acknowledgement {#acknowledgement .unnumbered} =============== We thank Allard Mosk, Azriel Genack, Boris Shapiro, Frank Scheffold, Sergey Skipetrov, Stefan Bittner, Stefan Rotter, and Tsampikos Kottos for stimulating discussions and useful feedback. This work was supported by the Office of Naval Research (ONR) under grant no. MURI N00014-13-0649, and by the US-Israel Binational Science Foundation (BSF) under grant no. 2015509. Author contributions {#author-contributions .unnumbered} ==================== H.Y. performed the experiments and analyzed the data. C.W.H. performed the numerical simulations and fabricated the samples. H.Y. analyzed the numerical data. C.W.H. helped with experimental data acquisition and contributed to numerical data analysis. H.C. supervised the project. All authors contributed to the interpretation of the results. H.Y. and C.W.H. prepared the manuscript, H.C. edited it, and A. Y. provided feedback. Supplementary material ====================== This document provides supplementary information to “Transverse localization of transmission eigenchannels”. In the first section, we describe details of our experimental setup and measurement procedure. In the second section, we present details of our numerical simulations. Experiment ---------- ![image](figure_supp_01.pdf){width="\linewidth"} The scattering sample in our experiment is made of closely-packed zinc oxide (ZnO) nanoparticles (average diameter $\sim$ 200 nm), deposited on a cover slip of thickness 170 $\mu$m. The ZnO layer thickness is about 10 $\mu$m, and the transport mean free path is approximately 1.5 $\mu$m. The average transmission is approximately 0.2. We define the interface between the ZnO and air as the front (input) surface, and the interface between the ZnO and the cover slip as the back (output) surface of the sample. The effective refractive index of the ZnO nanoparticle layer is about 1.4, which almost matches the refractive index of the glass substrate (cover slip).
Our experimental setup is sketched in Fig. \[figure6\]. A linearly-polarized monochromatic laser beam (Coherent, Compass 215M-50 SL) with wavelength $\lambda = 532$ nm is expanded and then clipped in order to uniformly cover a large area on the spatial light modulator (SLM). Its polarization direction is rotated from vertical to 45$^{\circ}$ by a half-wave $(\lambda/2)$ plate, and it is subsequently split into vertical and horizontal polarizations by a polarizing beam splitter (PBS). The horizontally-polarized component of the beam illuminates one part of a reflective phase-only SLM (Hamamatsu, X10468-01). Since the SLM only modulates horizontal polarization, the vertically-polarized component of the beam is converted to horizontal polarization by another $\lambda/2$ plate before impinging onto the second part of the SLM; the modulated, reflected beam is converted back to vertical polarization after passing through the same $\lambda/2$ plate again. The two polarizations are recombined at the PBS, and the SLM plane is imaged onto the pupil of a microscope objective $\text{MO}_1$ (Nikon CF Plan 100$\times$ with a numerical aperture $\text{NA}_\mathrm{in} = 0.95$) by a pair of lenses $L_1$ and $L_2$ (with focal lengths $f_1 = f_2 = 200$ mm). This setup enables independent modulation of the spatial wavefront for two orthogonal polarizations. An iris diaphragm ${\rm ID}_1$ between $L_1$ and $L_2$ blocks high-order diffractions from the SLM. The objective $\text{MO}_1$ projects the Fourier transform of the SLM phase pattern onto the front (input) surface of the scattering sample. To measure the spatial profile of the illumination on the sample surface, we insert a beam splitter before the objective $\text{MO}_1$ to split the input beam, and use another lens $L_3$ ($f_3 = 100$ mm) to image the Fourier plane of the SLM onto a CCD camera CCD1 (Allied Vision, Guppy PRO F-031B). An iris ${\rm ID}_2$ between the beam splitter and $L_3$ blocks light that does not enter $\text{MO}_1$.
In transmission, the Fourier transform of the transmitted field on the back (output) surface of the sample is imaged onto a CCD camera CCD2 (Allied Vision, Manta G-031B) by an oil-immersion microscope objective $\text{MO}_2$ (Edmund DIN Achromatic 100$\times$, $\text{NA}_\mathrm{out} = 1.25$) and a pair of lenses $L_4$ ($f_4 = 200$ mm) and $L_5$ ($f_5 = 100$ mm). A linear polarizer is placed right after $\text{MO}_2$ to filter out one polarization component of the transmitted light. In between the polarizer and the lens $L_4$, a beam splitter is inserted to split the output beam, and the intensity profile on the back (output) surface of the sample is imaged onto another CCD camera CCD3 (Allied Vision, Pike F-100B) by a lens $L_6$ ($f_6 = 200$ mm). The field transmission matrix from the SLM to CCD2 is measured with common-path interferometry akin to the method in reference [@Popoff1]. We display a complete set of 2048 input field vectors in the Hadamard basis on 2048 macropixels of the SLM, each consisting of $4\times 4$ SLM pixels. The 2048 macropixels are imaged onto approximately half of the area of the $\text{MO}_1$ pupil as the signal field. To measure the phases of each Hadamard basis vector, we display a random (but fixed) phase pattern on the remaining SLM macropixels (of the same $4\times 4$ size), which are also imaged onto $\text{MO}_1$ as reference macropixels. Together, the signal macropixels and the reference macropixels fill the pupil of $\text{MO}_1$. In order to measure the intensity of each Hadamard basis vector in transmission, a high-spatial-frequency phase grating is displayed on the reference region of the SLM so that light incident upon the reference macropixels is blocked by the iris ${\rm ID}_1$ and does not enter $\text{MO}_1$.
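The field retrieval underlying such a common-path measurement can be sketched with a four-step phase-shifting recipe: stepping the signal phase through $0, \pi/2, \pi, 3\pi/2$ relative to a co-propagating reference and recording the four intensities determines the complex field. This is an illustrative reconstruction under the simplifying assumption of a known unit-amplitude reference; the actual protocol follows reference [@Popoff1].

```python
import numpy as np

def retrieve_field(E_sig, E_ref=1.0 + 0.0j):
    """Recover E_sig * conj(E_ref) from four phase-shifted intensity images,
    I_a = |E_ref + exp(i a) E_sig|^2 for a = 0, pi/2, pi, 3*pi/2."""
    I = {a: np.abs(E_ref + np.exp(1j * a) * E_sig) ** 2
         for a in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)}
    return (I[0.0] - I[np.pi]) / 4 + 1j * (I[3 * np.pi / 2] - I[np.pi / 2]) / 4

rng = np.random.default_rng(3)
E_true = rng.normal(size=100) + 1j * rng.normal(size=100)  # hypothetical speckle field
print(np.allclose(retrieve_field(E_true), E_true))  # True for a unit reference
```

Repeating this retrieval for every input basis vector assembles one column of the field transmission matrix at a time.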
After measuring the field transmission matrix, we calculate the eigenvectors, which represent the input wavefronts of the individual transmission eigenchannels: $$\label{eigen} \tilde{t}^{\dagger}\tilde{t}\ket{\tilde{\psi}_n} = \tau_n\ket{\tilde{\psi}_n} \, ,$$ where $\ket{\tilde{\psi}_n}$ is the $n$-th eigenvector, and $\tau_n$ is the corresponding eigenvalue that gives the transmittance of the $n$-th eigenchannel. After finding the eigenvectors, we display the phase pattern of $\ket{\tilde{\psi}_n}$ on the 2048 macropixels of the SLM, and record the intensity profiles on the front and back surfaces of the scattering sample. During these measurements, the high-spatial-frequency phase grating is displayed on the reference region of the SLM to block the reference light. Fig. \[figure7\] shows the normalized transmittance $\tau/\langle\tau\rangle$ of each eigenchannel in our experiment. The red filled circles denote the values of $\tau/\langle\tau\rangle$ predicted from the measured transmission matrix, and the black filled circles represent the experimentally measured $\tau/\langle\tau\rangle$. While the range of $\tau/\langle\tau\rangle$ is predicted to be between 2.2 and 0.47, the experimental values of $\tau/\langle\tau\rangle$ range from 1.95 to 0.67 due to measurement noise. ![**Normalized transmission eigenvalues $\tau/\langle\tau\rangle$ for all eigenchannels.** Red filled circles represent values of $\tau/\langle\tau\rangle$ predicted from the measured transmission matrix after removing amplitude modulation from the input eigenvectors. Black filled circles denote the experimentally measured $\tau/\langle\tau\rangle$ when the phases of the input eigenvectors are displayed on the SLM. The latter has a smaller range than the former due to measurement noise in our experiment.[]{data-label="figure7"}](figure_supp_02.pdf){width="\linewidth"} Numerical simulations --------------------- In this section, we present details of our numerical simulations.
The first subsection describes the extraction of the transport and scattering mean free paths of the simulated samples. The second subsection presents the relation between the sample thickness $L$ and the real-space transmission matrix bandwidth $b$. The third subsection explains how the asymptotic eigenchannel widths are calculated for diffusive media in the wide-slab geometry. The final subsection provides details of the numerical simulations that account for finite-width illumination, finite numerical aperture, phase-only control, and noise in the experiment. We simulate wave propagation through two-dimensional (2D) diffusive slabs numerically. The normalized width of a slab is $k_0W$ and the normalized thickness is $k_0L$, where $k_0 = 2\pi/\lambda$ and $\lambda$ is the vacuum wavelength. The slab is discretized on a 2D square grid, and the grid size is $(\lambda/2\pi)\times (\lambda/2\pi)$. The dielectric constant at each grid point is $\epsilon(\textbf{r}) = n_0^2 + \delta\epsilon(\textbf{r})$, where $n_0$ is the average refractive index of the disordered slab, and $\delta\epsilon(\textbf{r})$ is a random number drawn from the interval $[-\sigma,\sigma]$ with uniform probability. The disordered slab is sandwiched between two homogeneous materials with refractive indices $n_1$ and $n_2$. Either perfectly reflecting (Dirichlet) or periodic boundary conditions are applied to the transverse boundaries. To obtain the field transmission matrix $t$ at wavelength $\lambda$, we solve the scalar wave equation $\left[\nabla^2+k_0^2\epsilon(\textbf{r})\right]\psi(\textbf{r}) = 0$ with the recursive Green’s function method [@Stone3]. The singular value decomposition $t = U \sqrt{\tau} V^\dagger$ gives the transmittances $\tau$ and the input ($V$) and output ($U$) wavefronts of the transmission eigenchannels.
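The eigenchannel decomposition itself is a plain singular value decomposition. A minimal sketch, with a complex Gaussian matrix standing in for the $t$ computed by the recursive Green's function solver (the mode counts are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out = 60, 80  # illustrative mode counts, not the simulation values

# Stand-in for the field transmission matrix t.
t = (rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))) / np.sqrt(2 * n_in)

# t = U sqrt(tau) V^dagger: the squared singular values are the transmission eigenvalues.
U, s, Vh = np.linalg.svd(t, full_matrices=False)
tau = s ** 2

# Launching the n-th input wavefront (n-th column of V) transmits a fraction tau[n]:
v0 = Vh.conj().T[:, 0]
out = t @ v0
print(np.vdot(out, out).real)  # equals tau[0], the highest transmission
```

The columns of $V$ are equivalently the eigenvectors of $t^\dagger t$ with eigenvalues $\tau_n$, which is the form used for the measured matrix in Eq. (\[eigen\]).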
### Extraction of transport and scattering mean free paths

In this and the following two subsections, we set $n_1 = n_2 = n_0$ and apply perfectly reflecting (Dirichlet) boundary conditions to the transverse boundaries ($n_1 = n_2 = n_0 = 1.5$ in this subsection). The transport mean free path $l_t$ is obtained from the average transmittance $\langle\tau\rangle$: $$\label{tmfp} \langle\tau\rangle = \frac{\left(1+\Delta\right)l_t}{L+2\Delta l_t} \, ,$$ where $\Delta = 0.818$ for the 2D diffusive slab with index-matched homogeneous media at both surfaces [@Durian; @Yamilov]. To extract the scattering mean free path $l_s$ for a fixed disorder strength $\sigma$, we calculate the transmission matrices of waveguides with the same width $k_0W = 100$ and varying thickness between $k_0L = 1$ and $k_0L = 6k_0l_t$. After averaging over 40000 disorder realizations, the diagonal elements of the transmission matrices $\langle\psi_{nn}\rangle$ give the scattering mean free path [@Genack6] $$\label{smfp} \lvert \langle\psi_{nn}\rangle\rvert^2 = \mathrm{exp}\left[\frac{-kL}{k_z^nl_s}\right ] \ ,$$ where $k_z^n$ is the longitudinal component of the wave vector for each waveguide mode $n$. Table \[table\] presents the values of the transport mean free path and the scattering mean free path for different disorder strengths $\sigma$. Their values are approximately equal, $l_t\approx l_s$.

  $\sigma$   0.60   0.65   0.75   0.90   1.10   1.45
  ---------- ------ ------ ------ ------ ------ ------
  $kl_t$     30.9   27.1   19.3   13.5   8.7    4.6
  $kl_s$     34.3   29.3   22.2   15.5   10.4   6.0

  : **Transport mean free path $l_t$ and scattering mean free path $l_s$ for each disorder strength $\sigma$.** \[table\]

### Bandwidth $b$ of real-space transmission matrices

![**Bandwidth $b$ of real-space transmission matrices versus the slab thickness $k_0L$.** The slab width is $k_0W = 6000$. The mean free path is the shortest among all simulated slabs.
The linear scaling of the bandwidth $b$ with the slab thickness $L$ confirms diffusive transport.[]{data-label="figure8"}](figure_supp_03.pdf){width="\linewidth"} We compute the bandwidth $b$ of the real-space transmission matrices using the definition of the participation number $$b\equiv\Big\langle\left[\int_{0}^{W}\mid\psi\mid^4dx\right]\Big/\left[\int_{0}^{W}\mid\psi\mid^2dx\right]^2\Big\rangle^{-1},$$ where $\langle\cdot\rangle$ denotes an ensemble average over all columns of ten real-space transmission matrices representing different realizations of disorder. We observe that the bandwidth $b$ of the real-space transmission matrix scales linearly with the slab thickness $L$ within the range of disorder strengths in our simulations. Fig. \[figure8\] is a plot of $b$ versus $k_0L$ for the slabs with the shortest transport mean free path, $kl_t = 4.6$. The linear scaling of $b$ with $L$ is evidence that 2D Anderson localization is absent even in the slabs with the smallest $kl_t = 4.6$. ### Asymptotic width of open channels ![**Scaling of the asymptotic width $D_\infty$ of transmission eigenchannels.** $k_0D_\infty$ is obtained from $ \langle \left(k_0D\right)^{-1}\rangle^{-1}$ in the limit $W\rightarrow \infty$. $D_\infty$ of diffusive slabs with different $L$, $l_t$, and $n_0$ shows a universal scaling $D_\infty \propto (n_0k_0l_t)L$. Linear regression gives a proportionality constant of 0.54 (black solid line). Each data point represents an ensemble average over open channels (with $\tau_n\geq 1/\text{e}$) in ten realizations of disorder; the error bars are the standard deviation among the disorder realizations.[]{data-label="figure9"}](figure_supp_04.pdf){width="\linewidth"} ![image](figure_supp_05.pdf){width="\linewidth"} ![image](figure_supp_06.pdf){width="\linewidth"} We excite a single channel and calculate its field distribution across the slab.
The input and output widths are obtained from the participation number of the field intensity distributions on the front and back surfaces of the slab. We repeat this calculation for slabs of different width $k_0W =$ 100, 250, 500, 1000, 2000, 3000, 4500, 6000, while fixing the thickness $k_0L$, the average refractive index $n_0 = n_1 = n_2$, and the disorder strength $\sigma$. The channel widths are averaged over all open channels with transmission eigenvalues $\tau_n\geq 1/\text{e}$. As shown in Fig. 1b of the main text, the open channel width $k_0 D$ increases with $W$ and eventually saturates to a constant value, which can be extracted from a two-parameter fit, $$\label{fit} \langle k_0D\rangle = \left(a_0 + \frac{a_1}{k_0W}\right)^{-1}.$$ The asymptotic open channel width $k_0 D_\infty = 1/a_0$ is obtained in the limit $W\rightarrow \infty$. We apply the above procedure to slabs of different thickness $k_0L = 50, 100, 150, 200, 250, 300$ to find the scaling of $D_\infty$ with $L$, while fixing $n_0$ and $\sigma$. To find the scaling of $D_\infty$ with $l_t$, the disorder strength in the slab is varied as $\sigma =$ 0.6, 0.65, 0.75, 0.9, 1.1, 1.45. The corresponding normalized transport mean free paths $kl_t$ are given in Table \[table\]. For each set of parameters $(k_0L, \sigma, k_0W)$, we simulate 10 different realizations of disorder to obtain the ensemble-averaged values. In addition to $n_0 = n_1 = n_2 = 1.5$, we also set $n_0 = n_1 = n_2 = 1$, and then vary the thickness $k_0L$ (as listed above) and $\sigma =$ 0.6, 1.1 ($kl_t = 17.4, 6.1$). To check whether the transverse boundary conditions affect the channel widths, we repeat the simulations with periodic boundary conditions and obtain the same asymptotic open channel widths shown in Fig. 3 of the main text. Finally, we examine whether the scaling of $D_\infty$ depends on the way of averaging.
Instead of obtaining $D_\infty$ from $\langle D \rangle$ in the limit $W \rightarrow \infty$, we compute $\langle 1/D\rangle$ and extract $D_\infty$ from its inverse in the $W \rightarrow \infty$ limit. The same scaling, $D_\infty\propto kl_tL$, is obtained, as shown in Fig. \[figure9\]; only the prefactor, 0.54, is slightly different from the 0.68 in Fig. 3c of the main text. ### Incomplete control Experimentally, only a small region of a wide slab is illuminated, and a partial transmission matrix is measured. We numerically investigate the effects of finite-width illumination, finite numerical aperture (NA), phase-only modulation, and noise on the spatial profiles of transmission eigenchannels. We first calculate the complete transmission matrices of 2D slabs for 50 disorder realizations. The slab parameters, given in the caption of Fig. 5 of the main text, are chosen to be close to those of the ZnO nanoparticle layer in our experiment. The slab ($n_0 = 1.4$) is sandwiched between air ($n_1 = 1.0$) and glass ($n_2 = 1.5$). Periodic boundary conditions are applied to the transverse boundaries. The number of input modes (from the air) is $N_1 = 1999 \approx 2n_1W/\lambda$, and the number of output modes (to the glass) is $N_2 = 3239$. To model the binning of SLM pixels into macropixels and the limited field of view of the detection optics, we group the input and output modes in $k$-space. The number of input modes in one group, $m_1$, is chosen such that the corresponding illumination width on the front surface of the slab, as measured by the participation number of random inputs, is similar to that in the experiment. The number of output modes in a group, $m_2$, sets the size of the detection region, which is similar to the field of view in our experiment (180 $\mu$m in diameter). Such grouping effectively reduces the number of degrees of freedom to $M_1 = 62$ at the input and $M_2 = 1079$ at the output.
The spatial profile of each transmission eigenchannel is then calculated. The input and output widths are plotted in Fig. 5a of the main text, and the transverse spread is plotted as a function of the normalized transmission eigenvalue in Fig. \[figure10\]a. Low-transmission channels spread significantly more than random incident wavefronts, while high-transmission channels spread slightly less than random wavefronts with finite illumination width. In order to account for the finite NA in the experiment, we take only $M_1 = 32$ input degrees of freedom and $M_2 = 234$ output degrees of freedom for the $k$-space transmission matrix. The ratio $M_1/M_2 = 0.136$ is chosen to match that in the experiment, even though the experimental values of $M_1$ and $M_2$ are much larger. Additionally, we remove amplitude modulation from the input eigenvectors to simulate the phase-only modulation in our experiment. As shown in Fig. \[figure10\]b, the range of the transverse spread of transmission eigenchannels is reduced, but the trend is similar to that in Fig. \[figure10\]a. Finally, we use random Gaussian noise to model experimental errors in a transmission matrix measurement. We simulate the common-path interferometric measurement using the partial transmission matrix, adding random Gaussian numbers to the intensity values in transmission to model measurement noise. Such noise results in phase estimation errors in the “measured" transmission matrix, $\tilde{t} + \delta\tilde{t}$, which deviates from the actual matrix $\tilde{t}$ by $\delta\tilde{t}$. We calculate the transmission eigenvectors of this partial transmission matrix, remove their amplitude modulations, and then calculate their transmissions and spatial profiles using the actual matrix $\tilde{t}$. The error in the measurement of the transmission matrix causes a further reduction in the range of transmission eigenvalues and the lateral spread of the eigenchannels, as Fig. \[figure10\]c shows. 
The above numerical simulation results illustrate that the finite illumination width has the predominant effect on the spatial profiles of transmission eigenchannels, making their behavior qualitatively different from that of the eigenchannels of the complete transmission matrix for a wide slab. For comparison, we plot in Fig. \[figure11\]a the ensemble-averaged intensity distribution on the output surface of the slab for a high-transmission eigenchannel $\langle I_{\rm high}\rangle$ and a low-transmission eigenchannel $\langle I_{\rm low}\rangle$, together with that of a random incident wavefront $\langle I_{\rm rand}\rangle$. Their ratios $\langle I_{\rm high}\rangle/ \langle I_{\rm rand}\rangle$ and $\langle I_{\rm low}\rangle/ \langle I_{\rm rand}\rangle$, plotted in Fig. \[figure11\]b, reveal that the enhancement of the transmitted light intensity is higher at the center of the output beam for the high-transmission channel, leading to an effective reduction of the width (characterized by the participation number) of the output beam. For the low-transmission channel, the suppression of the transmitted light intensity is also stronger at the beam center, resulting in an increase of the output beam width. Such behavior is attributed to the fact that multipath interference effects are enhanced near the center of the illumination region, owing to the higher probability of scattering paths crossing there than near the edges.
--- abstract: 'We propose a simple scheme to construct a model whose Fermi surface is comprised of crossing-line nodes. The Hamiltonian consists of a normal hopping term and an additional term which is odd under the mirror reflection. The line nodes appear along the mirror-invariant planes, where each line node carries the quantized Berry magnetic flux. We explicitly construct a model with the $N$-fold rotational symmetry, where the $2N$ line nodes merge at the north and south poles. Photoirradiation induces a topological phase transition. When we apply photoirradiation along the $k_z$ axis, there emerge point nodes carrying the monopole charge $\pm N$ at these poles, while all the line nodes disappear. The resultant system describes anisotropic multiple-Weyl fermions.' author: - Motohiko Ezawa title: | Photoinduced topological phase transition\ from a crossing-line nodal semimetal to a multiple-Weyl semimetal --- *Introduction:* The Weyl semimetal is one of the hottest topics in condensed matter physics[@Hosur; @Jia]. It is protected by a monopole charge in the momentum space[@Murakami]. A multiple-Weyl semimetal is a generalization of a Weyl semimetal that has a monopole charge larger than the unit charge[@CFang; @BJYang; @Xli; @Huang2]. There exists another class of novel semimetals. They are line nodal semimetals, whose Fermi surfaces form one-dimensional lines[@Burkov; @CFangA; @Xie; @Yamakage16; @Hyper; @Hirayama; @Sy; @Carter; @Phillips; @ChenLu; @Chiu; @Mullen; @Weng; @Bian; @Rhim; @ChenXie; @Fang; @BianChang]. A line node is protected by a quantized Berry magnetic flux. Recently, line nodal semimetals have been generalized into two species. One is a loop node forming a nontrivial link such as the Hopf link[@WChen; @ZYan; @PYChang; @Hopf]. The other is a crossing-line node, where several line nodes cross at a point[@Zeng; @Chain; @Kim; @Yamakage; @CaTe; @Yu]. Photoirradiation is a powerful tool to modify the band structure[@Oka09L; @Kitagawa01B; @Lindner; @Dora; @EzawaPhoto; @Gold]. 
According to the Floquet theory, an additional term emerges due to the second-order process of photoirradiation. A typical example is the generation of a Weyl point from a Dirac semimetal[@PWang; @PChan; @Ebihara; @PChan2]. It has also been shown that a Weyl node can be generated by applying photoirradiation to a loop nodal semimetal[@PYan; @Multi]. In this paper, we first propose a simple scheme to construct models for crossing-line nodal semimetals. We then investigate how a crossing-line nodal semimetal is modified by photoirradiation. The model Hamiltonian consists of a normal hopping term and a mirror-odd interaction term. A line node emerges on each mirror-invariant plane. Each line node is topologically protected by the quantized Berry magnetic flux. Explicitly, we construct an $N$-fold rotational symmetric model, where the crossing of $2N$ line nodes occurs at the north pole and also at the south pole. There are no magnetic monopoles at these poles. Next, we derive a photoinduced term based on the Floquet theory. It induces a topological phase transition. Indeed, by applying photoirradiation along the $z$ direction, the Fermi surface is found to disappear through the opening of a gap, except for two nodal points carrying $N$ ($-N$) units of the monopole charge at the north (south) pole. The resultant system is the anisotropic $N$-fold multiple-Weyl semimetal. On the other hand, by applying photoirradiation perpendicular to one of the loop nodes, the resultant Fermi surface turns out to be comprised of this loop node and point nodes. Otherwise, only point nodes appear. 
*Model:* A prototype of line nodal semimetals is given by the model $$H(\mathbf{k})=f_{x}(\mathbf{k})\sigma _{x}+f_{z}(\mathbf{k})\sigma _{z}, \label{BasicHamil}$$whose energy spectrum reads$$E(\mathbf{k})=\pm \sqrt{f_{x}^{2}(\mathbf{k})+f_{z}^{2}(\mathbf{k})}.$$The Fermi surface is given by the two conditions $f_{x}(\mathbf{k})=0$ and $f_{z}(\mathbf{k})=0$, each of which produces a two-dimensional surface. The intersection of the two surfaces consists of lines and/or points in general. Namely, we obtain line nodes and/or point nodes in general. For simplicity we consider the following case: (i) The condition $f_{x}(\mathbf{k})=0$ generates an ellipsoid, which is rotationally symmetric around the $k_z$ axis and centered at the origin ($\mathbf{k}=0$). (ii) The condition $f_{z}(\mathbf{k})=0$ generates planes sharing the $k_z$ axis with each other. Furthermore, we require that $f_{z}(\mathbf{k})$ is odd under the mirror operation $M_{\alpha }$ with respect to each plane, where the index $\alpha $ denotes the direction normal to the plane. For example, if $f_{z}(\mathbf{k})$ is odd under the mirror reflection with respect to the $k_{y}k_{z}$ plane, $M_{x}f_{z}(k_{x},k_{y},k_{z})M_{x}^{-1}=-f_{z}(-k_{x},k_{y},k_{z})$, we have a zero-energy solution at $k_{x}=0$ since $M_{x}f_{z}(0,k_{y},k_{z})M_{x}^{-1}=-f_{z}(0,k_{y},k_{z})$. ![image](Berry){width="95.00000%"} A key observation is that, when there are $N$ mirror-odd planes, there emerges a crossing of $2N$ line nodes. 
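This structure is easy to verify numerically. A minimal sketch with a toy choice of our own (not from the text): $f_x = |\mathbf{k}|^2 - 1$ (a sphere) and $f_z = k_x k_z$, which is odd under $k_x \to -k_x$, so a zero-energy line node lies on the circle $k_x = 0$, $k_y^2 + k_z^2 = 1$:

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)

def H(k):
    # Toy model (assumption): f_x = |k|^2 - 1, f_z = kx*kz (mirror-odd in kx)
    kx, ky, kz = k
    fx = kx**2 + ky**2 + kz**2 - 1.0
    fz = kx * kz
    return fx * sx + fz * sz

# On the mirror plane kx = 0 AND on the sphere f_x = 0: a line-node point.
E_node = np.linalg.eigvalsh(H((0.0, 0.6, 0.8)))

# At a generic point the spectrum is +/- sqrt(fx^2 + fz^2).
E_gen = np.linalg.eigvalsh(H((0.3, 0.4, 0.5)))
```

Both eigenvalues vanish on the intersection of the two zero surfaces, while a generic point is gapped.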
For example, by assuming the $N$-fold rotation symmetry, the lattice Hamiltonian is given by (\[BasicHamil\]) together with$$\begin{aligned} f_{x}(\mathbf{k})& =t\sum_{j=1}^{N}\cos \left( \mathbf{d}_{j}^{c}\cdot \mathbf{k}\right) +t_{z}\cos k_{z}-m^{\prime }, \notag \\ f_{z}(\mathbf{k})& =\lambda ^{\prime }g\left( k_{z}\right) \prod\limits_{j=1}^{N}\sin \left( \mathbf{d}_{j}^{s}\cdot \mathbf{k}\right) , \label{EqA}\end{aligned}$$where $\mathbf{d}_{j}^{c}=\left( \cos \left[ j\pi /N\right] ,\sin \left[ j\pi /N\right] ,0\right) $ and $\mathbf{d}_{j}^{s}=\left( \sin \left[ \left( 2j+1\right) \pi /\left( 2N\right) \right] ,\cos \left[ \left( 2j+1\right) \pi /\left( 2N\right) \right] ,0\right) $. We have included a function $g\left( k_{z}\right) $ to allow the freedom of introducing additional crossing-line nodes perpendicular to the $k_{z}$ axis, such as in (\[EqB\]) for the cubic symmetric model. The summation $\sum_{j=1}^{N}$ runs over the nearest-neighbor sites. The Fermi surface is given by the cross section of the $N$ planes and the ellipsoid. They are $2N$ line nodes which cross at the north and south poles. We show Fermi surfaces for $N=2$ and $N=3$ in Figs.\[FigBerry\](a1) and (b1), respectively. (Actually, we present almost zero-energy surfaces $E=\delta $ with $0<\delta \ll t$.) We note that the lattice structure is possible in real space only for $N=2$ and $3$. The lattice with $N=2$ forms a layered square lattice, while the lattice with $N=3$ forms a layered triangular lattice. Nevertheless, we analyze the general $N$ case to make the mathematical structure clearer. The corresponding continuum theory is given by $$H=[a\left( k_{x}^{2}+k_{y}^{2}\right) +ck_{z}^{2}-m]\sigma _{x}+\lambda g\left( k_{z}\right) \text{Re}(k_{+}^{N})\sigma _{z} \label{HamilConti}$$ with $a=-Nt/4$, $c=-t_{z}/2$, $m=m^{\prime }+Nt+t_{z}$ and $\lambda =\lambda ^{\prime }/2^{N-1}$. For example, for the case of $N=2$, there are two mirror planes $M_{x+y}$ and $M_{x-y}$. 
The simplest representation is $f_{z}=k_{x}^{2}-k_{y}^{2}$, whose zero-energy solution is given by the two planes $k_{x}=k_{y}$ and $k_{x}=-k_{y}$. For the case of $N=3$, there are three mirror planes determined by $f_{z}=k_{x}^{3}-3k_{x}k_{y}^{2}$. We show the Fermi surfaces in Figs.\[FigBerry\](a2) and (b2). The Fermi surfaces obtained in the continuum theory are found to be almost the same as those obtained in the lattice model. Hence, we use the continuum theory in the following. With the use of the eigenfunction $\left\vert \psi \right\rangle $ of the Hamiltonian (\[HamilConti\]) we may calculate the Berry connection as$$A_{i}\left( \mathbf{k}\right) =-i\left\langle \psi \right\vert \partial _{i}\left\vert \psi \right\rangle =\frac{f_{x}\partial _{i}f_{z}-f_{z}\partial _{i}f_{x}}{2\left( f_{x}^{2}+f_{z}^{2}\right) }=\frac{1}{2}\partial _{i}\Theta$$with $\partial _{i}=\partial /\partial k_{i}$, where $f_{x}=f\cos \Theta $, $f_{z}=f\sin \Theta $ and $f=\sqrt{f_{x}^{2}+f_{z}^{2}}$. We show the stream plot of the Berry connection for $N=2$ and $3$, where vortex and antivortex structures are observed around the line nodes in the constant-$k_{z}$ plane, as in Figs.\[FigBerry\](b1) and (d1). A pair of vortex and antivortex annihilates at the north and south poles as in Figs.\[FigBerry\](b2) and (d2). Each line node is topologically protected since the Berry phase along a loop encircling a line node is quantized to be $\pm \pi $,$$\oint A_{j}dk_{j}=\int \nabla \times \mathbf{A}\,dS=\pm \pi .$$The Berry curvature $\mathbf{B}=\nabla \times \mathbf{A}$ is strictly localized along the line node. Indeed, we can explicitly check this by the direct calculation,$$B_{i}\left( \mathbf{k}\right) =\varepsilon _{ijk}\partial _{j}A_{k}=\pm \pi \sum \delta \left( f_{x}\right) \delta \left( f_{z}\right) .$$Namely, the Berry magnetic flux is present along each line node, while the Berry curvature is strictly zero away from the line nodes. Consequently, a line node is topologically protected. 
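The quantization can be checked numerically by tracking the winding of $\Theta = \arg(f_x + i f_z)$ around a line node. A minimal sketch for the $N=2$ continuum model with illustrative parameters $a = c = m = \lambda = 1$ (so the nodes at $k_z = 0$ sit at $k_x = \pm k_y$ on the unit circle):

```python
import numpy as np

def fx(kx, ky, kz): return kx**2 + ky**2 + kz**2 - 1.0   # a = c = m = 1
def fz(kx, ky): return kx**2 - ky**2                     # N = 2, lambda = 1

# Small loop in the kz = 0 plane around the line node at (1, 1, 0)/sqrt(2).
node = np.array([1.0, 1.0]) / np.sqrt(2.0)
phi = np.linspace(0.0, 2.0 * np.pi, 2001)
kx = node[0] + 0.05 * np.cos(phi)
ky = node[1] + 0.05 * np.sin(phi)

# Berry phase = (1/2) * winding of Theta = arg(fx + i fz) around the loop.
theta = np.unwrap(np.arctan2(fz(kx, ky), fx(kx, ky, 0.0)))
berry_phase = 0.5 * (theta[-1] - theta[0])   # quantized to +/- pi
```

Since the two gradients are linearly independent at the node, the angle winds by $\pm 2\pi$ and the Berry phase is $\pm\pi$, independent of the loop shape.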
*Photoirradiation parallel to the* $k_{z}$ *axis:* We proceed to investigate a topological phase transition due to the $\sigma _{y}$ term induced by photoirradiation. The following formulas hold for any function $g(k_{z})$ in (\[EqA\]). We summarize the Floquet theory of photoirradiation. First, we irradiate a beam of circularly polarized light along the $z$ direction. We take the electromagnetic potential as $\mathbf{A}_{\text{EM}}(t)=(A\cos \omega t,A\sin \omega t,0)$, where $\omega $ is the frequency of light, with $\omega >0$ for the right circulation and $\omega <0$ for the left circulation. The effective Hamiltonian due to the second-order process of photoirradiation is given by[@Oka09L; @Kitagawa01B; @Lindner; @Dora; @EzawaPhoto; @Gold]$$\Delta H_{\text{eff}}\left( \mathbf{k},\mathbf{A}_{\text{EM}}\right) =\frac{1}{\hbar \omega }\left[ H_{-1}\left( \mathbf{k},\mathbf{A}_{\text{EM}}\right) ,H_{+1}\left( \mathbf{k},\mathbf{A}_{\text{EM}}\right) \right] .$$It is explicitly evaluated to be$$\Delta H_{\text{eff}}\left( \mathbf{k},\mathbf{A}_{\text{EM}}\right) =f_{y}\sigma _{y},$$where$$f_{y}=-2Na\lambda \alpha g\left( k_{z}\right) \text{Im}(k_{+}^{N})$$with $\alpha =\left( eA\right) ^{2}/\left( \hbar \omega \right) $. The second-order process produces the term $f_{y}(\mathbf{k})\sigma _{y}$ due to the commutation relation $[\sigma _{z},\sigma _{x}]=2i\sigma _{y}$. By including this term into the Hamiltonian we find$$H(\mathbf{k})=f_{x}(\mathbf{k})\sigma _{x}+f_{y}(\mathbf{k})\sigma _{y}+f_{z}(\mathbf{k})\sigma _{z},$$while the energy is modified as$$E(\mathbf{k})=\pm \sqrt{f_{x}^{2}(\mathbf{k})+f_{y}^{2}(\mathbf{k})+f_{z}^{2}(\mathbf{k})}.$$Now the condition $f_{y}(\mathbf{k})=0$ should be imposed as a zero-energy condition in addition to $f_{x}(\mathbf{k})=f_{z}(\mathbf{k})=0$. In general, there is no intersection among the three surfaces, and the system becomes an insulator. 
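The generation of the $\sigma_y$ term can be verified directly: substitute $\mathbf{k} \to \mathbf{k} + e\mathbf{A}_{\text{EM}}(t)/\hbar$ into the continuum model, extract the Fourier components $H_{\pm 1}$ numerically, and form the commutator. A minimal sketch for $N = 2$ with illustrative parameters (and $e = \hbar = \omega = 1$); the check below only tests proportionality to $\sigma_y$, since the overall sign depends on conventions:

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sy = np.array([[0., -1j], [1j, 0.]])
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)

a, c, lam, m = -0.5, -0.5, 0.3, 1.0      # illustrative parameters, N = 2
A, w, hw = 0.2, 1.0, 1.0                 # drive amplitude, frequency, photon energy
kx, ky, kz = 0.4, 0.7, 0.2               # sample momentum

def H(t):
    # Peierls substitution k -> k + A_EM(t) in the N = 2 continuum model
    qx, qy = kx + A * np.cos(w * t), ky + A * np.sin(w * t)
    f_x = a * (qx**2 + qy**2) + c * kz**2 - m
    f_z = lam * (qx**2 - qy**2)
    return f_x * sx + f_z * sz

# Fourier components H_m = (1/T) \int_0^T H(t) e^{-i m w t} dt for m = -1, +1
ts = np.linspace(0.0, 2.0 * np.pi, 4001)[:-1]
Hm1 = sum(H(t) * np.exp(+1j * w * t) for t in ts) / len(ts)   # H_{-1}
Hp1 = sum(H(t) * np.exp(-1j * w * t) for t in ts) / len(ts)   # H_{+1}

dH = (Hm1 @ Hp1 - Hp1 @ Hm1) / hw   # Floquet correction: f_y * sigma_y
f_y = dH[1, 0].imag
```

Only the terms linear in $A$ enter $H_{\pm 1}$ here, so the commutator is exactly proportional to $\sigma_y$ with a real coefficient, as stated above.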
However, there are several cases where crossing-line nodes are reduced to point nodes, as shown in Figs.\[FigBerry\](a3) and (c3). The Fermi surface consists of only two point nodes at the north and south poles, $(k_{x},k_{y},k_{z})=(0,0,k_{\text{c}})$ with $k_{\text{c}}=\pm \sqrt{m/c}$. In the vicinity of these points, we have$$f_{x}\approx \pm 2\sqrt{mc}\left( k_{z}\mp \sqrt{m/c}\right) .$$The Hamiltonian with photoirradiation is given by$$\begin{aligned} H& =\pm 2\sqrt{mc}\left( k_{z}\mp \sqrt{m/c}\right) \sigma _{x}+\lambda g\left( k_{z}\right) \text{Re}(k_{+}^{N})\sigma _{z} \notag \\ & -2Na\lambda \alpha g\left( k_{z}\right) \text{Im}(k_{+}^{N})\sigma _{y}. \label{PhotoH}\end{aligned}$$In particular, when $2Na\left( eA\right) ^{2}=\hbar \omega $ and $g\left( k_{z}\right) =1$, the Hamiltonian is reduced to that of the multiple-Weyl fermion,$$H=\lambda \left( k_{+}^{N}\sigma _{+}+k_{-}^{N}\sigma _{-}\right) \pm 2\sqrt{mc}\left( k_{z}\mp \sqrt{m/c}\right) \sigma _{z},$$and otherwise it is that of the anisotropic multiple-Weyl fermion. The Berry curvature is calculated as$$\begin{aligned} F_{i}\left( \mathbf{k}\right) & =\frac{\varepsilon _{ijk}}{2}\sin \Theta \left( \partial _{j}\Theta \partial _{k}\Phi -\partial _{k}\Theta \partial _{j}\Phi \right) \notag \\ & =\frac{\varepsilon _{ijk}}{2}\,\hat{\mathbf{f}}\cdot \left( \partial _{j}\hat{\mathbf{f}}\times \partial _{k}\hat{\mathbf{f}}\right) ,\end{aligned}$$where $\hat{\mathbf{f}}=\mathbf{f}/f$ and $\mathbf{f}=\left( f_{x},f_{y},f_{z}\right) $ with $f_{x}=f\cos \Phi \sin \Theta $, $f_{y}=f\sin \Phi \sin \Theta $, $f_{z}=f\cos \Theta $. It describes monopoles with the charges $\pm N$ at the north and south poles. We illustrate the Berry curvature around the north pole for $N=2$ and $3$ in Figs.\[FigBerry\](b3) and (d3) for the case of $g\left( k_{z}\right) =1$, where the presence of the monopoles is observed as a source or a sink of the Berry magnetic flux. 
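The monopole charge can be confirmed numerically with the standard link-variable (Fukui–Hatsugai-type) discretization of the Berry flux through a small sphere around a node. A minimal sketch for the isotropic multiple-Weyl Hamiltonian $H = \lambda(k_{+}^{N}\sigma_{+} + k_{-}^{N}\sigma_{-}) + v k_{z}\sigma_{z}$ with illustrative $\lambda = v = 1$; only the magnitude $|C| = N$ is checked, as the sign depends on conventions:

```python
import numpy as np

N, lam, v = 2, 1.0, 1.0   # double-Weyl point; expect |C| = N

def lower_state(th, ph):
    # Lower-band eigenvector of h(k).sigma on the unit sphere around the node
    kx, ky, kz = np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)
    kp = (kx + 1j * ky) ** N
    hx, hy, hz = lam * kp.real, -lam * kp.imag, v * kz
    Hk = np.array([[hz, hx - 1j * hy], [hx + 1j * hy, -hz]])
    return np.linalg.eigh(Hk)[1][:, 0]

nth, nph = 60, 60
ths = np.linspace(1e-3, np.pi - 1e-3, nth + 1)   # exclude the exact poles
phs = np.linspace(0.0, 2.0 * np.pi, nph + 1)
u = [[lower_state(th, ph) for ph in phs] for th in ths]

# Chern number = (1/2pi) * sum of gauge-invariant plaquette Berry phases
C = 0.0
for i in range(nth):
    for j in range(nph):
        link = (np.vdot(u[i][j], u[i + 1][j])
                * np.vdot(u[i + 1][j], u[i + 1][j + 1])
                * np.vdot(u[i + 1][j + 1], u[i][j + 1])
                * np.vdot(u[i][j + 1], u[i][j]))
        C += np.angle(link)
C /= 2.0 * np.pi
```

Each plaquette phase is gauge invariant, so the sum converges to the enclosed monopole charge on a modest grid; setting `N = 3` gives $|C| = 3$.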
We conclude that, by applying photoirradiation along the $z$ direction, the Fermi surface changes from the crossing line nodes to the two nodal points carrying $N$ ($-N$) units of the monopole charge at the north (south) pole. *Photoirradiation perpendicular to the* $k_{z}$ *axis:* We next apply photoirradiation perpendicular to the $k_{z}$ axis. We explicitly study the tetragonal symmetric model ($N=2$) and the trigonal symmetric model ($N=3$). Here we set $g(k_{z})=1$ in (\[EqA\]). The lattice Hamiltonian of the tetragonal symmetric model with $N=2$ is given by $$\begin{aligned} f_{x}& =t\left( \cos k_{x}+\cos k_{y}\right) +t_{z}\cos k_{z}-m^{\prime }, \notag \\ f_{z}& =-\lambda \left( \cos k_{x}-\cos k_{y}\right) .\end{aligned}$$We apply photoirradiation along the $\phi $ direction with $\mathbf{A}_{\text{EM}}(t)=(-A\sin \phi \cos \omega t,A\cos \phi \cos \omega t,A\sin \omega t)$. The effective Hamiltonian induced by photoirradiation along the $\phi $ direction is given in the continuum theory by $$f_{y}=-\alpha \lambda t_{z}k_{z}\left( k_{x}\sin \phi +k_{y}\cos \phi \right) .$$Solving $f_{x}=f_{z}=f_{y}=0$, we obtain the Fermi surface. When $\phi =\pi /4$ or $-\pi /4$, there emerges a loop node in the $k_{x}=k_{y}$ plane or the $k_{x}=-k_{y}$ plane, and two zero-energy points emerge at $\left( k_{x},k_{y},k_{z}\right) =\left( \pm k_{c},\pm k_{c},0\right) $ or $\left( \pm k_{c},\mp k_{c},0\right) $ with $k_{c}=\sqrt{m/2t}$. By expanding the Hamiltonian around these points, the dispersion relation is found to be linear. For instance, at $\phi =\pi /4$ it reads $$H=\pm k_{c}\left( tk_{y}^{\prime \prime }\sigma _{x}-2\alpha k_{z}\sigma _{y}-2\lambda k_{x}^{\prime \prime }\sigma _{z}\right)$$with $k_{x}^{\prime \prime }=k_{x}^{\prime }+k_{y}^{\prime }$, $k_{y}^{\prime \prime }=k_{x}^{\prime }-k_{y}^{\prime }$, $k_{x}^{\prime }=k_{x}\mp k_{c}$, $k_{y}^{\prime }=k_{y}\pm k_{c}$. 
Hence, they are Weyl point nodes carrying the unit monopole charge: see Figs.\[FigBerry\](a4) and (b4). Unless $\phi =\pm \pi /4$, only Weyl point nodes appear, as in Figs.\[FigBerry\](a5) and (b5). The lattice Hamiltonian of the trigonal symmetric model with $N=3$ is given by$$\begin{aligned} f_{x}& =t\left( \cos k_{x}+\sum_{\eta =\pm 1}\cos \frac{-k_{x}+\eta \sqrt{3}k_{y}}{2}\right) +t_{z}\cos k_{z}-m, \notag \\ f_{z}& =\frac{\lambda }{2}\sin k_{x}\left( \cos k_{x}-\cos \sqrt{3}k_{y}\right) .\end{aligned}$$The photoinduced term reads in the continuum theory as $$f_{y}=\alpha \frac{3\lambda t_{z}}{4}k_{z}\left( \left( k_{x}^{2}-k_{y}^{2}\right) \sin \phi +2k_{x}k_{y}\cos \phi \right) .$$When $\phi =\pi /2$, $\pi /2\pm 2\pi /3$, a loop node and four point nodes emerge, as in Figs.\[FigBerry\](c4) and (d4); otherwise only point nodes emerge, as in Figs.\[FigBerry\](c5) and (d5). Namely, a loop node emerges only when the direction of photoirradiation is perpendicular to the loop node. ![Bird’s-eye view of the almost zero-energy surfaces of the Hamiltonian with the cubic symmetry for (a) the lattice model, (b) the continuum model, (c) the continuum model with photoirradiation along the $k_{z}$ axis. Each loop node carries the unit Berry magnetic flux in (a)–(c). The point nodes at the north and south poles carry $\pm 2$ units of the Berry monopole charge. []{data-label="FigCubic"}](Cube){width="50.00000%"} *Cubic symmetric model:* Finally, we present a simple realization of a lattice model with the cubic symmetry by taking $$\begin{aligned} f_{x}& =t\left( \cos k_{x}+\cos k_{y}+\cos k_{z}\right) -m^{\prime }, \notag \\ f_{z}& =\lambda ^{\prime }\sin k_{x}\sin k_{y}\sin k_{z}, \label{EqB}\end{aligned}$$where we have set $g(k_{z})=\sin k_{z}$ in (\[EqA\]). We illustrate the Fermi surface of the lattice model and the continuum model in Figs.\[FigCubic\](a) and (b). 
Photoirradiation applied along the $z$ direction induces the term $$f_{y}=\alpha \lambda tk_{z}\left( k_{x}^{2}-k_{y}^{2}\right)$$ in the continuum theory. Solving $f_{x}=f_{z}=f_{y}=0$, we obtain a loop node given by $k_{x}^{2}+k_{y}^{2}=2(3t-m)/t$ and $k_{z}=0$. Additionally, anisotropic double-Weyl points emerge at the north and south poles, carrying the monopole charge $\pm 2$. We illustrate the Fermi surface in Fig.\[FigCubic\](c). *Discussion:* Crossing-line nodal semimetals with the cubic symmetry are realizable in CaTe and Cu$_3$PdN according to recent first-principles calculations in Ref.[@CaTe] and Refs.[@Yu; @Kim], respectively. It is also shown in Ref.[@Zeng] that LaN has a crossing-line node, which is topologically identical to the cubic symmetric model. Furthermore, a hexagonal hydride, YH$_{3}$, has crossing-line nodes with $N=3$, as shown in Ref.[@Yamakage]. It is an interesting problem to search for further material realizations of crossing-line nodal semimetals. The author is very much grateful to N. Nagaosa for many helpful discussions on the subject. This work is supported by the Grants-in-Aid for Scientific Research from MEXT KAKENHI (Grant Nos. JP17K05490 and 15H05854). This work was also supported by CREST, JST (Grant No. JPMJCR16F1). [99]{} P. Hosur, X.L. Qi, C. R. Physique 14, 857 (2013). S. Jia, S.-Y. Xu, M. Z. Hasan, Nature Materials 15, 1140 (2016). S. Murakami, New J. Phys. 9, 356 (2007). C. Fang, M. J. Gilbert, X. Dai, and B. A. Bernevig, Phys. Rev. Lett. **108**, 266802 (2012). B.-J. Yang and N. Nagaosa, Nat. Commun. **5**, 4898 (2014). X. Li, B. Roy and S. Das Sarma, Phys. Rev. B **94**, 195144 (2016). S.-M. Huang, S.-Y. Xu, I. Belopolski, C.-C. Lee, G. Chang, T.-R. Chang, B. Wang, N. Alidoust, G. Bian, M. Neupane, D. Sanchez, H. Zheng, H.-T. Jeng, A. Bansil, T. Neupert, H. Lin, and M. Z. Hasan, Proc. Natl. Acad. Sci. 113, 1180 (2016). A. A. Burkov, M. D. Hook, and L. Balents, Phys. Rev. B **84**, 235126 (2011). C. Fang, Y. Chen, H.-Y. Kee and L. 
Fu, Phys. Rev. B **92**, 081201 (2015). L. S. Xie, L. M. Schoop, E. M. Seibel, Q. D. Gibson, W. Xie, and R. J. Cava, APL Materials **3**, 083602 (2015). A. Yamakage, Y. Yamakawa, Y. Tanaka, Y. Okamoto, J. Phys. Soc. Jpn. **85**, 013708 (2016). M. Ezawa, Phys. Rev. Lett. **116**, 127202 (2016). M. Hirayama, R. Okugawa, T. Miyake, and S. Murakami, Nat. Com. **8**, 14022 (2017). Y.-H. Chan, C.-K. Chiu, M. Y. Chou, and A. P. Schnyder, Phys. Rev. B **93**, 205132 (2016). J.-M. Carter, V.V. Shankar, M. A. Zeb, and H.-Y. Kee, Phys. Rev. B **85**, 115105 (2012). M. Phillips and V. Aji, Phys. Rev. B **90**, 115111 (2014). Y. Chen, Y.-M. Lu, and H.-Y. Kee, Nat. Commun. **6**, 6593 (2015). C.-K. Chiu and A. P. Schnyder, Phys. Rev. B **90**, 205136 (2014). K. Mullen, B. Uchoa, and D. T. Glatzhofer, Phys. Rev. Lett. **115**, 026403 (2015). H. Weng, Y. Liang, Q. Xu, R. Yu, Z. Fang, X. Dai, and Y. Kawazoe, Phys. Rev. B **92**, 045108 (2015). G. Bian, T.-R. Chang, R. Sankar, S.-Y. Xu, H. Zheng, T. Neupert, C.-K. Chiu, S.-M. Huang, G. Chang, I. Belopolski et al., Nat. Commun. **7**, 10556 (2016). J.-W. Rhim and Y.B. Kim, Phys. Rev. B **92**, 045126 (2015). Y. Chen, Y. Xie, S. A. Yang, H. Pan, F. Zhang, M. L. Cohen, and S. Zhang, Nano Lett. **15**, 6974 (2015). C. Fang, Y. Chen, H.-Y. Kee, and L. Fu, Phys. Rev. B **92**, 081201 (2015). G. Bian, T.-R. Chang, H. Zheng, S. Velury, S.-Y. Xu, T. Neupert, C.-K. Chiu, S.-M. Huang, D. S. Sanchez, I. Belopolski et al., Phys. Rev. B **93**, 121113 (2016). W. Chen, H.-Z. Lu, and J.-M. Hou, cond-mat/arXiv:1703.10886. Z. Yan, R. Bi, H. Shen, L. Lu, S.-C. Zhang, and Z. Wang, cond-mat/arXiv:1704.00655. P.-Y. Chang and C.-H. Yee, cond-mat/arXiv:1704.01948. M. Ezawa, cond-mat/arXiv:1704.04941. M. Zeng, C. Fang, G. Chang, Y.-A. Chen, T. Hsieh, A. Bansil, H. Lin, and L. Fu, arXiv:1504.03492. T. Bzdušek, Q.-S. Wu, A. Rüegg, M. Sigrist and A. A. Soluyanov, Nature **538**, 75 (2016). Y. Kim, B. J. Wieder, C. L. Kane, and A. M. 
Rappe, Phys. Rev. Lett. **115**, 036806 (2015). S. Kobayashi, Y. Yamakawa, A. Yamakage, T. Inohara, Y. Okamoto, and Y. Tanaka, cond-mat/arXiv:1703.03587. Y. Du, F. Tang, D. Wang, L. Sheng, E.-j. Kan, C.-G. Duan, S. Y. Savrasov, and X. Wan, cond-mat/arXiv:1605.07998. R. Yu, H. Weng, Z. Fang, X. Dai and X. Hu, Phys. Rev. Lett. **115**, 036807 (2015). T. Oka and H. Aoki, Phys. Rev. B 79, 081406(R) (2009). T. Kitagawa, T. Oka, A. Brataas, L. Fu, and E. Demler, Phys. Rev. B 84, 235108 (2011). N. Lindner, G. Refael and V. Galitski, Nat. Phys. 7, 490 (2011). B. Dóra, J. Cayssol, F. Simon and R. Moessner, Phys. Rev. Lett. **108**, 056602 (2012). M. Ezawa, Phys. Rev. Lett. 110, 026603 (2013). N. Goldman and J. Dalibard, Phys. Rev. X 4, 031027 (2014). Wang et al., EPL 105, 17004 (2014). Chan et al., Phys. Rev. Lett. 116, 026805 (2016). Ebihara et al., Phys. Rev. B 93, 155107 (2016). Chan et al., Phys. Rev. B 94, 121106 (2016). Yan and Wang, Phys. Rev. Lett. 117, 087402 (2016). M. Ezawa, cond-mat/arXiv:1612.05857.
--- abstract: | We present [*Spitzer*]{} Space Telescope observations of 11 regions southeast of the Bright Bar in the Orion Nebula, along a radial from the exciting star $\theta^1$ Ori C, extending from 2.6 to 12.1$'$. Our Cycle 5 programme obtained deep spectra with matching IRS short-high (SH) and long-high (LH) aperture grid patterns. Most previous IR missions observed only the inner few arcmin (the “Huygens" Region). The extreme sensitivity of [*Spitzer*]{} in the 10-37 $\mu$m spectral range permitted us to measure many lines of interest to much larger distances from $\theta$$^1$ Ori C. Orion is the benchmark for studies of the interstellar medium, particularly for elemental abundances. [*Spitzer*]{} observations provide a unique perspective on the neon and sulfur abundances by virtue of observing the dominant ionization states of Ne (Ne$^+$, Ne$^{++}$) and S (S$^{++}$, S$^{3+}$) in Orion and regions in general. The Ne/H abundance ratio is especially well determined, with a value of $(1.01\pm{0.08})\times$10$^{-4}$ or in terms of the conventional expression, 12 + log (Ne/H) = 8.00$\pm$0.03. We obtained corresponding new ground-based spectra at Cerro Tololo Interamerican Observatory (CTIO). These optical data are used to estimate the electron temperature, electron density, optical extinction, and the S$^+$/S$^{++}$ ionization ratio at each of our [*Spitzer*]{} positions. That permits an adjustment for the total gas-phase sulfur abundance because no S$^+$ line is observed by [*Spitzer*]{}. The gas-phase S/H abundance ratio is $(7.68\pm0.30)\times10^{-6}$ or 12 + log (S/H) = 6.89$\pm$0.02. The Ne/S abundance ratio may be determined even when the weaker hydrogen line, H(7–6) here, is not measured. The mean value, adjusted for the optical S$^+$/S$^{++}$ ratio, is Ne/S = $13.0\pm{0.6}$. We derive the electron density ($N_e$) versus distance from $\theta$$^1$ Ori C for \[S [iii]{}\] ([*Spitzer*]{}) and \[S [ii]{}\] (CTIO). 
Both distributions are for the most part decreasing with increasing distance. The values for $N_e$ \[S [iii]{}\] fall below those of $N_e$ \[S [ii]{}\] at a given distance except for the outermost position. This general trend is consistent with the commonly accepted blister model for the Orion Nebula. The natural shape of such a blister is concave, with an underlying decrease in density with increasing distance from the source of photoionization. Our spectra are the deepest ever taken in these outer regions of Orion over the 10-37 $\mu$m range. Tracking the changes in ionization structure via the line emission to larger distances provides much more leverage for understanding the far less studied outer regions. A dramatic find is the presence of high-ionization Ne$^{++}$ all the way to the outer optical boundary $\sim$12$'$ from $\theta$$^1$ Ori C. This IR result is robust, whereas the optical evidence from observations of high-ionization species (e.g. O$^{++}$) at the outer optical boundary suffers uncertainty because of scattering of emission from the much brighter inner Huygens Region. The [*Spitzer*]{} spectra are consistent with the Bright Bar being a high-density ‘localized escarpment’ in the larger Orion Nebula picture. Hard ionizing photons reach most solid angles well SE of the Bright Bar. The so-called Orion foreground ‘Veil’, seen prominently in projection at our outermost position 12$'$ from $\theta$$^1$ Ori C, is likely an H [ii]{} region – photo-dissociation region (PDR) interface. The [*Spitzer*]{} spectra show very strong enhancements of PDR lines – \[Si [ii]{}\] 34.8 $\mu$m, \[Fe [ii]{}\] 26.0 $\mu$m, and molecular hydrogen – at the outermost position. author: - 'Robert H. Rubin, Janet P. Simpson, C. R. O’Dell, Ian A. McNabb, Sean W. J. Colgan, Scott Y. Zhuge, Gary J. Ferland and Sergio A. 
Hidalgo' title: '[*Spitzer*]{} reveals what’s behind Orion’s Bar' --- Introduction ============ Most observational studies of the chemical evolution of the universe rest on emission-line objects. H [ii]{} regions help elucidate the current mix of elemental abundances in the ISM. They are laboratories for understanding physical processes in all emission-line sources and probes for stellar, galactic, and primordial nucleosynthesis. H [ii]{} regions are also among the best tracers of recent star formation. The Orion Nebula (M42) is the benchmark for studies of the interstellar medium (ISM), particularly as a gauge of elemental abundances. In many ways this is similar to the role the Sun plays with respect to stars. Because Orion is nearby and bright, it is one of the most observed nebulae. Not surprisingly, most observations of Orion have been of the inner bright region. \[Here we refer to this inner region as the classical “Huygens Region".\] Detailed photoionization models, including our own (Baldwin et al. 1991; Rubin et al. 1991a, b), as well as deep spectroscopic observations interpreted via empirical analyses (Esteban et al. 2004; Baldwin et al. 2000), have concentrated on the Huygens Region. The Bright Bar has been treated as the “poster child" H [ii]{} region – photo-dissociation region (PDR) interface. The famous 3-colour image of the PDR (Tielens et al. 1993) demonstrated the progressive separation of the 3.3 $\mu$m polycyclic aromatic hydrocarbon (PAH) feature (blue), H$_2$ 1-0 S(1) (green), and CO $J$ = 1-0 (red) with increasing distance from $\theta^1$ Ori C. This was in good agreement with their theoretical model of a plane-parallel slab for the Bright Bar. Their result showed conclusively that the incident far-UV (non-ionizing) radiation field from $\theta^1$ Ori C was responsible for this molecular structure in the Bright Bar. 
Because their interest primarily concerned the structure, properties, and observations of the Bright Bar PDR, they were not concerned with the emission that extends far beyond it in the extended Orion Nebula (obviously present in any reasonably deep photograph). With regard to the Huygens Region, one of our own papers derived a 3-dimensional model of the inner ionized region (Wen & O’Dell 1995). This work used detailed surface brightness images to delineate the 3-dimensional position of the main ionization front with increasing distance from the exciting star $\theta^1$ Ori C, and argued that the Bright Bar is almost perpendicular to the plane of the sky. With regard to the fainter extended outer nebula, there has been progress in characterizing the so-called foreground “Veil", with early [*prima facie*]{} evidence for its existence stemming from the 21-cm absorption-line work of van der Werf & Goss (1989). The Veil is seen in projection ($\sim$ edge-on) as the outer boundary of M42, the grayish colour extending from roughly north counterclockwise to the southeast in the optical image shown here as Figure 5. For a review of the structure of Orion, see O’Dell (2001) and references therein. More recent studies of the Veil include Abel et al. (2004, 2006). Using the [*Kuiper Airborne Observatory (KAO)*]{}, Simpson et al. (1986) measured the \[O [iii]{}\] 51.8 and 88.4 $\mu$m lines at several positions in Orion along a radial straight south from $\theta^1$ Ori C, extending as far as a position called P6, centred 3.75$'$ from $\theta^1$ Ori C. This did provide IR evidence of species of ionization as high as O$^{++}$ beyond the Bright Bar. Except as noted, prior to [*Spitzer*]{}, high spectral resolution space– or airborne–IR data had never extended to angular separations from $\theta^1$ Ori C that would place them in the extended outer nebula. 
To the best of our knowledge, the first such data exterior to the Bright Bar and the Huygens Region were taken under the GTO 45 programme (PI: T. Roellig) and the GO 1094 programme (PI: F. Kemper). We did not examine the GTO 45 spectra (“Orion Bar neutral"), a pair of short-high resolution (SH) and long-high resolution (LH) aperture spectra centred close to and just SE of $\theta^2$ Ori A, taken in staring mode. Instead, we chose to examine a set of the GO 1094 paired SH and LH aperture spectra centred well SE of the Bright Bar, $\sim$3.4 arcmin from $\theta^1$ Ori C. These spectra were taken in staring mode with the minimum ramp (exposure) time of 6 s and a total for each spectrum of just 12 s. As will be discussed later, the fields of view for the SH and LH were quite different. These spectra demonstrated that there were lines of high-ionization species (\[Ne III\] and \[S IV\]) measurable with excellent signal-to-noise even beyond the PDR of the Bright Bar. We also determined that none of the emission lines was saturated. Those 24 s of data were an inspiration for us to propose using [*Spitzer*]{} to probe even further from $\theta^1$ Ori C. [*Spitzer*]{} has a unique ability to address the abundances of the elements neon and sulfur. This is particularly true in the case of H [ii]{} regions, where one can simultaneously observe four emission lines that probe the dominant ionization states of Ne (Ne$^+$ and Ne$^{++}$) and S (S$^{++}$ and S$^{3+}$). The four lines, \[Ne II\] 12.81, \[Ne III\] 15.56, \[S III\] 18.71, and \[S IV\] 10.51 $\mu$m, can be observed cospatially with the Infrared Spectrograph (IRS) on [*Spitzer*]{}. Because of the sensitivity of [*Spitzer*]{}, a special niche, relative to previous (and near-term foreseeable) instruments, is for studies of fainter regions. Indeed many of the well-known Galactic H [ii]{} regions would cause saturation problems if observed at their brightest positions.
Because of this, prior to our Orion programme, we used [*Spitzer*]{} to observe a number of H [ii]{} regions in galaxies with various metallicities and other properties. These studies were of the spiral galaxies M83 (Rubin et al. 2007, hereafter R07) and M33 (Rubin et al. 2008, hereafter R08), and the dwarf irregular galaxy NGC 6822 (Rubin et al. 2010, hereafter R10). To the extent that all the major forms of Ne and S are observed, the true Ne/S abundance ratio could be inferred. For Ne, this is a safe assumption, but for S, there is the possibility of non-negligible contributions from S$^+$ as well as from sulfur tied up in dust. We have an ongoing interest in utilizing this special capability of [*Spitzer*]{} archival spectra to address the Ne/S abundance ratio. Our current assessment of how much Ne/S may vary was discussed in Rubin et al. (2008), where we also included other [*Spitzer*]{} data, reanalyzed with a homogeneous atomic database. In this paper, we make a careful assessment of the Orion Nebula value for Ne/S. This not only uses [*Spitzer*]{} measurements of the dominant ionic species, but also new ground-based spectra that permit an accounting for S$^+$, which [*Spitzer*]{} cannot do. In the customary role of the Orion Nebula as an important benchmark for the ISM, it is important to compare the Ne/S value with others, including the uncertain and controversial solar value as well as what is predicted by nucleosynthesis and galactic chemical evolution (GCE) models. The solar abundance, particularly of Ne, remains the subject of much controversy (e.g., Drake & Testa 2005; Bahcall, Serenelli, & Basu 2006; and references therein). The preponderance of evidence points to a Ne abundance substantially higher in the solar neighborhood, and even in the Sun itself, than the “canonical" solar values, Ne/S $\sim$6.5 (Asplund et al. 2009).
While we cannot directly address the solar Ne value, it is crucial for an understanding of nucleosynthesis and GCE to have reliable benchmarks. We made the case that the solar Ne/S ratio is ‘out of line’ with our [*Spitzer*]{} H [ii]{} region values (R07, R08, R10 and references therein). [*Note that abundances are often derived as ratios in order to avoid absolute calibration problems.*]{} Prior to that, Pottasch & Bernard-Salas (2006), in their study of planetary nebulae with the [*Infrared Space Observatory (ISO)*]{}, argued that the solar neon abundance was likely too low. They suggested that the planetary nebula neon abundance should be used instead. Optical studies of planetary nebulae and H [ii]{} regions have suggested an upward revision of the solar Ne/O ratio (Wang & Liu 2008; Magrini, Stanghellini, & Villaver 2009). Recent observations of nearby B stars also suggest that the solar Ne/O ratio should be higher (e.g., Morel & Butler 2008). With our new Orion data, we focus predominantly on neon, the fifth most abundant element in the Universe, and sulfur, one of the top ten, because of the specific capability that [*Spitzer*]{} provided. Naturally, deriving abundances of other elements is also important, but there was no special ability to tackle these with [*Spitzer*]{}. Suffice it to say that providing precision abundance measurements of S and Ne is a major advance in the basic data needed to understand and test nucleosynthesis/GCE models. While both S and Ne are ‘primary’ $\alpha$-elements produced in massive stars and released to the ISM in supernovae, some differences in their production and GCE may be expected. $^{20}$Ne exists primarily in the C-burned shell of massive stars, whereas $^{32}$S arises during O-burning, probably explosively (see, e.g., the useful cutaway schematic of the fusion zones in Clayton 2007).
According to figure 7 in the nucleosynthesis/GCE model of Woosley & Heger (2007), the Ne/S ratio is $\sim$8.6 when they start with the Lodders (2003) solar abundances. We discuss the [*Spitzer*]{} observations in section 2. In section 3, our new ground-based spectra are presented. In section 4, we discuss the variation of the electron density and three measures of the degree of ionization with distance from the exciting star. Section 5 continues with the derivation of elemental abundance ratios: Ne/S, Ne/H, S/H, and Fe/H. In section 6, we present additional data in order to characterize the Bright Bar and Outer Veil in the context of an overview of the entire Orion Nebula. In section 7, there is additional discussion pertaining to the major findings, including the Ne/S & Ne/H ratios and the nature of the Bright Bar and Outer Veil as an H [ii]{} region – PDR interface. Last, we provide a summary and conclusions in section 8. [*Spitzer Space Telescope*]{} Observations ========================================== We observed the outer Orion Nebula under our Cycle 5 [*Spitzer Space Telescope*]{} programme GO-50082. The observations were all southeast of the famous bar, which we shall refer to as the Bright Bar (BB). The fields chosen were centred along a radial line outward from the exciting star $\theta$$^1$ Ori C and approximately orthogonal to the BB (see Figure 1). This radial line coincides with our “Slit 4", one of the slits defined in our previous programme of [*HST*]{}/STIS long-slit spectra. The SE tip passed through HH203 (Rubin et al. 2003, colour fig. 1). Our set of positions was selected to examine the far side of the Bright Bar. There are 11 locations that start at 2.6$'$ and extend to 12.1$'$ from $\theta^1$ Ori C (see Figure 1). In order of increasing distance (D) from $\theta^1$ Ori C, the positions are called “inner" (I4, I3, I2, I1), “middle" (M1, M2, M3, M4), and “veil" (V1, V2, V3).
Table 1 lists the coordinates for the centres of the areas mapped and the projected angular distance D. We note that for the inner positions, the time-sequence order of observations was indeed I1, I2, I3, and I4. We chose that order just in case the brightest I4 region might suffer some saturation effect, which might then cause a latency problem with the subsequent observation position. Fortunately, we experienced no saturation issues. We obtained deep spectra with both the [*Spitzer*]{} Infrared Spectrograph (IRS) short wavelength, high dispersion (spectral resolution, R $\sim$ 600) configuration, called the short-high (SH) module, and the long wavelength, high dispersion (R $\sim$ 600) configuration, called the long-high (LH) module (e.g., Houck et al. 2004). These cover respectively the wavelength ranges 9.9 – 19.6 $\mu$m and $\sim$19 – $\sim$36 $\mu$m. The SH slit size is 4.7$''$ $\times$ 11.3$''$, while the LH is 11.1$''$ $\times$ 22.3$''$. The SH observations permit cospatial observations of five important emission lines: \[S IV\] 10.51, hydrogen H(7–6) (Hu$\alpha$) 12.37, \[Ne II\] 12.81, \[Ne III\] 15.56, and \[S III\] 18.71 $\mu$m. The LH observations permit cospatial observations of several more important emission lines: \[Fe III\] 22.93, \[Fe II\] 25.99, \[S III\] 33.48, and \[Si II\] 34.82 $\mu$m. In order that we could use [**all**]{} the emission lines observed with both modules, we made a concerted effort to match the field of view (FOV) for the SH and LH modules. However, a perfect match is not possible because the SH and LH rectangular apertures are not exactly orthogonal (84.8$^{\rm o}$). With the “mapping mode" for the IRS, we had the ability to overlap apertures by offsetting in either the parallel direction (along the long-axis of the rectangular aperture) or the perpendicular direction (along the short-axis of the aperture).
By selecting the following scheme, the resulting SH and LH aperture grid patterns (henceforth [*‘chex’*]{}, after the breakfast cereal) very closely match the same area in the nebula: with SH, one displacement of 5$''$ parallel and 9 displacements of 2.3$''$ perpendicular; with LH, one displacement of 4.5$''$ parallel and one displacement of 4.5$''$ perpendicular. We used the [*Spitzer*]{} software [SPOT]{} to measure our chex size. The SH is 25.4$''$$\times$16.3$''$ (area 414.0 arcsec$^2$) and the LH is 26.8$''$$\times$15.5$''$ (area 415.4 arcsec$^2$), indeed a good match (see Figure 1). Another very important purpose of overlapping the apertures is that most spatial positions will be covered in several locations on the array, minimizing the effects of bad pixels. To save overhead, we clustered our 11 positions into 5 on-source [*Spitzer*]{} Astronomical Observing Requests (AORs). Because much more integration time was necessary to observe the fainter veil positions (V1, V2, V3 – all three included in the [*same*]{} AOR), we needed to split these into 3 separate AORs, that were designated veil1, veil2, and veil3. The other two AORs clustered all of the inner positions in one and all the middle positions in the other. We did not control the scheduling of the AORs which were actually in the following time sequence (with the data set number, and total time in min.): veil3 (25381120, 261.94); middle (25381362, 276.97); inner (25381376, 198.80); veil2 (25380864, 261.96); and veil1 (25380608, 326.37). For various reasons, we [**changed the nomenclature**]{} herein for the three veil data sets – veil1, veil2, and veil3 refer to respective data sets 25380864, 25380608, and 25381120. Throughout this paper Vx-y means chex x and AOR y. For example, V3-1 means chex V3 and veil AOR 1 (data set 25380864). The entire programme was executed between 2008 November 14 and November 21 (UT), thereby causing very little sky-rotation of the FOV. 
Immediately adjacent in time to each on-source AOR, a background off-source AOR was taken. These were all done at the same position – $\alpha$, $\delta$ = $5^{\rm h}32^{\rm m}36\fs5$, $-5^{\rm o}17'47''$ (J2000) – in “staring mode", which utilizes a single aperture with a shift along the long-slit axis (parallel direction) of 1/3 the aperture dimension. More time was used to observe those background observations associated with the fainter regions. Our choice of ramp (exposure) times and number of mapping cycles was as follows: inner chex, SH 6 s, 8 cycles and LH 6 s, 8 cycles; middle chex, SH 6 s, 12 cycles and LH 14 s, 6 cycles; for all veil chex, SH 30 s and LH 14 s. For both AORs veil2 and veil3, there were 5 and 11 cycles respectively for the SH and LH, while more time was used in veil1, with 6 and 17 cycles respectively, to fill up our [*Spitzer*]{} allotment. Our data were processed and calibrated with version S18.5 of the standard IRS pipeline at the [*Spitzer*]{} Science Center. To build our post-BCD (basic calibrated data) data products, we use [cubism]{}, the CUbe Builder for IRS Spectral Mapping (version 1.6) (Smith et al. 2007a, b, and references therein). [cubism]{} was used to build maps, which account for aperture overlaps, and to deal effectively with bad pixels. From the IRS mapping observations, it can combine these data into a single 3-dimensional cube with two spatial and one spectral dimension. For each of our regions, we constructed a data cube. Global bad pixels (those occurring at the same pixel in every BCD) were removed manually. Record-level bad pixels (those occurring only within individual BCDs) – those that deviated by 5 $\sigma$ from the median pixel value and occurred within at least 10 per cent of the BCDs – were removed automatically in [cubism]{} with the “Auto Bad Pixels" function.
In reducing our data, we were careful to ensure that the “Auto Bad Pixels" function did not incorrectly flag any of the pixels on our programme spectral lines as bad. Our Orion chex are in a fairly “smooth" area, and as such, it is more appropriate to reduce our data assuming each region is uniformly extended within the SH and LH apertures. This is the default option and the one we used with [cubism]{}. The fully processed background-subtracted spectra that we use are presented in a colour montage showing all 11 chex in Figure 2 for the SH and Figure 3 for the LH. For the veil chex, we show only the longest-exposure spectra (the one formed from data set 25380608) in order not to clutter the figures. These figures provide a useful overview of the changes that occur at the varying distances from the exciting star. The changes to the continuum levels and the PAH features can also be seen. For instance, it is apparent that the continuum intensity decreases with increasing distance from $\theta^1$ Ori C from I4 through V1, but then increases from V1 to V3. All of the spectral lines that we discuss in the paper are labeled in Figures 2 and 3. There are some features that we do not measure or discuss that are also labeled. These include the PAH bands and weaker lines such as H(8-7). In addition, the very recently identified C$_{60}$ feature near 18.9 $\mu$m (Cami et al. 2010) is also marked. They found this in the young planetary nebula Tc 1 and as they discuss, a minor fraction of this emission feature is due to C$_{70}$ also. Our further analysis of these spectra used the line-fitting routines in the IRS Spectroscopy Modeling Analysis and Reduction Tool ([smart]{}, Higdon et al. 2004). The emission lines were measured with [smart]{} using a Gaussian line fit. The continuum baseline was fit with a linear or quadratic function. 
Figures 4 (a)–(d) show the data and fits for several lines at chex V3 for one of the three veil AORs, the one we call veil1 (using data set 25380864). Most of our line measurements have higher signal-to-noise (S/N) than these. We display this set to illustrate that lines from species as highly ionized as Ne$^{++}$ are clearly measurable all the way to the outer extended optical boundary. A line is deemed to be detected if the intensity is at least as large as the 3 $\sigma$ uncertainty. We measure the uncertainty by the product of the full-width-half-maximum (FWHM) and the root-mean-square variations in the adjacent, line-free continuum; it does not include systematic effects. The possible uncertainty in the absolute flux calibration of the spectroscopic products delivered by the pipeline is likely confined to between 5 and 10 per cent (see discussion on p. 1411 of R07). Any uncertainty in the flux due to pointing errors is probably small and in the worst case should not exceed 10 per cent. For the brighter lines the systematic uncertainty far exceeds the measured (statistical) uncertainty. Even for the fainter lines, we estimate that the systematic uncertainty exceeds the measured uncertainty. In addition to the line intensity, the measured FWHM and heliocentric radial velocities (V$_{helio}$) are listed in Table 2. Both the FWHM and V$_{helio}$ are useful in judging the reliability of the line measurements. The FWHM is expected to be the instrumental width for all our lines. With a resolving power for the SH and LH modules of $\sim$600, our lines should have a FWHM of roughly 500 km s$^{-1}$. The values for V$_{helio}$ should straddle the heliocentric systemic radial velocity for M42. For the Huygens Region, heliocentric velocities of the higher ionization lines are $\sim$+18 km s$^{-1}$, those for the lower-ionization species near the main ionization front are $\sim$+25 km s$^{-1}$, while those for the PDR lines are $\sim$+28 km s$^{-1}$ (O’Dell 2001). 
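The detection criterion and expected instrumental line width described above can be sketched numerically. This is a minimal illustration of the stated rules (uncertainty estimated as FWHM times the continuum rms; detection at 3 $\sigma$; instrumental FWHM $\approx c/R$ for R $\sim$ 600); the numeric inputs in the usage lines are made-up placeholders, not measurements from this paper.

```python
# Sketch of the 3-sigma detection test and the instrumental line width
# described in the text.

C_KM_S = 299792.458  # speed of light in km/s


def instrumental_fwhm_kms(resolving_power):
    """FWHM (km/s) implied by a resolving power R = lambda/dlambda."""
    return C_KM_S / resolving_power


def is_detected(line_intensity, fwhm, rms_continuum, n_sigma=3.0):
    """Detected if intensity >= n_sigma * (FWHM x adjacent continuum rms)."""
    sigma = fwhm * rms_continuum
    return line_intensity >= n_sigma * sigma


# For the IRS SH/LH modules, R ~ 600, so lines should show FWHM ~ 500 km/s:
print(round(instrumental_fwhm_kms(600)))

# Hypothetical intensity, FWHM, and rms values purely for illustration:
print(is_detected(line_intensity=4.0e-6, fwhm=0.02, rms_continuum=5.0e-5))
```

Note that, as in the text, this statistical criterion says nothing about the systematic (calibration and pointing) uncertainties, which dominate for the brighter lines.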
Subject to the coarse spectral resolution of [*Spitzer*]{}, most of our measurements are in agreement with these expectations. Ground-based Observations ========================= The ground-based spectroscopy was performed with the Boller & Chivens spectrograph mounted on the 1.5 m telescope at the Cerro Tololo Interamerican Observatory on the nights of 2008 November 18, 19, 22, 24 and 2009 December 9, 10, 13 (UT). Observations were made with a long slit crossing at or near most of the positions measured with [*Spitzer*]{}. The illuminated portion of the 2.6$''$ wide slit was 429$''$ long in the 2008 observations and 345$''$ during the 2009 observations. The slit was opened to greater than 5$''$ width during observations of the photometric reference stars Feige 15, Feige 25, and Hiltner 600, which was wide enough to include all of the wavelengths measured over the limited range of zenith distances (25$^{\rm o}$ to 51$^{\rm o}$) employed and the astronomical seeing image size of no more than 1.0$''$. Feige 15 observations were made early each night at multiple zenith distances in 2008, and multiple reference stars were observed once each night in 2009. Photometrically clear conditions applied during all observations of the reference stars and the nebula. All observations were made such that the first order of the grating was employed with a chopping filter (GG 385 in 2008 and GG 395 in 2009) that permitted measurement of the red end of the spectrum without contamination by signal from the overlapping second order. Each pixel of the Loral 1K CCD subtended 1.30$''$ along the slit. For the 400 lines/mm (blaze 8000 Å) grating 58 observations on the first three nights in 2008 (November 18, 19, 22), each pixel along the dispersion was about 2.2 Å  and the FWHM of the emission lines was about 6.7 Å. The 300 lines/mm, blaze 4000 Å  grating 09 used on the night of 2008 November 24 and for the 2009 observations gave a slightly wider wavelength coverage, had a scale of 2.9 Å  per pixel, and FWHM = 6.8 Å.
A position angle (PA) of 134.6$^{\rm o}$ was used for observations centred on the star JW 831 (Jones & Walker 1988) and PA = 59.9$^{\rm o}$ was used for JW 873. On the third night in 2008 the PA = 90$^{\rm o}$ slit was placed 11.7$''$ south of JW 887, while on the fourth night of 2008 the PA = 90$^{\rm o}$ slit was carefully displaced to the south from the brightest Trapezium star [$\rm \theta ^{1}$ Ori C]{} by distances of 120$''$, 150$''$, and 180$''$. During the 2009 observations, JW 887 was used for displacements to positions V1 and M4, and JW 975 was used for the displacement to V3. The locations of the slits are shown in Figure 5. Sky observations were made at two locations selected to be well removed from nebular emission, these being identified from wide field of view [H$\alpha$]{}+\[N II\] images of the region. The sky positions were $\alpha$, $\delta$ = $5^{\rm h}26^{\rm m}03^{\rm s}$, $-0^{\rm o}25'42''$ and $5^{\rm h}28^{\rm m}19^{\rm s}$, $-7^{\rm o}08'36''$ (J2000), and the measurements were indistinguishable from one another. In 2008 on the first night, that of the JW 831 observation of a bright portion of the nebula, sky observations totaling 3600 seconds were made. On the second night, that of the JW 873 observations, sky observations totaling 2700 seconds were made. On the third night, that of the JW 887 observations, four sky observations totaling 3600 seconds were made, and on the fourth night, that of the observations displaced from $\theta$$^1$ Ori C, frequent observation sets of 2400 seconds were interleaved with the observations of the nebula. In 2009, 3600 seconds of sky observations were made on December 9 and 7200 seconds of sky observations on each of December 10 and 13. Observations of the twilight sky were made and used to determine the illumination correction along the slit. Where necessary, a series of exposure times was used, since the strongest emission lines entered the non-linear portion of the CCD detector during the long exposures.
In all cases the exposures were made in pairs, which were then used for correction of cosmic-ray tracks. For the JW 831 observations, twin exposures of 60, 300, 600, and 1200 seconds were made. For the JW 873 observations, twin exposures of 600 seconds and two twin exposures of 1800 seconds were made. For the JW 887 observations, twin exposures of 900 seconds were made. For the fourth-night observations displaced south from $\theta$$^1$ Ori C, exposure times were 60 seconds for 120$''$, 120 seconds for 150$''$, and 150 seconds for 180$''$. The total signal per pixel along the slit in the [H$\beta$]{} reference line ranged from 2200 to 7200 analog-digital-units (ADU) at a gain of 0.7 ADU per electron event for the shortest exposures in the faintest to brightest regions sampled. In the case of the V1, V3, and M4 observations in 2009, total exposure times of 3900 seconds, 5700 seconds, and 3900 seconds were used. [iraf]{}[^1] tasks were used to process and spectro-photometrically calibrate the observations. Samples from along the slits that correspond to different [*Spitzer*]{} observations were taken. The locations of the sampled regions are also shown in Figure 5. The total intensity in each emission line was measured by fitting each line with a Lorentzian line profile using the task ‘splot’. Features that were identified as a blend of emission from two or more ions, using the high spectral resolution results of Esteban et al. (2004) as a guide, were not measured. All the measured line intensities were then normalized to [H$\beta$]{}. A representative spectrum is shown in Figure 6. Because of the wide range of intensities, this M4 position spectrum is shown as the logarithm of the intensity.
The effects of interstellar extinction were removed by comparing the observed [H$\alpha$]{}/[H$\beta$]{} flux ratio with the value of 2.89 expected from recombination theory assuming case B, electron density ($N_e$) = 1000 cm$^{-3}$, and electron temperature ($T_e$) = 8500 K (Storey & Hummer 1995), and employing the recently determined reddening curve derived by Blagrave  et al. (2007) from the nebular lines. Note that the predicted [H$\alpha$]{}/[H$\beta$]{} flux ratio changes little with $N_e$ and $T_e$ over our range of interest. The results are expressed as the commonly used logarithmic extinction at [H$\beta$]{} ([c$\rm _{H\beta}$]{}) and are given in Table 3. This table also gives the extinction corrected surface brightness of the sample in the [H$\beta$]{} line. Tables 4 – 7 present the observed (F$_{\lambda}$) and extinction corrected (I$_{\lambda}$) line intensities relative to [H$\beta$]{} for the 16 different spectral samples. In the case of the southwest-most samples, the observed [H$\alpha$]{}/[H$\beta$]{} ratios were less than theoretically expected. The theoretical [H$\alpha$]{}/[H$\beta$]{} ratios vary only slowly with $T_e$ and matching the observations would require temperatures twice as high as those derived from heavy ion line ratios. The dominance of higher temperatures in the [H$\alpha$]{} and [H$\beta$]{} emitting regions is probably not the correct interpretation of these data because hydrogen recombination emission increases with decreasing $T_e$. Thus this emission should selectively come from any lower $T_e$ regions along the line of sight. The explanation of these anomalously low [H$\alpha$]{}/[H$\beta$]{} ratios probably lies with the fact that these regions have important components of the emission illuminated from the much brighter part of the nebula that are being scattered by material along these outer lines of sight. 
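The extinction correction just described can be sketched with the standard logarithmic-extinction formalism, in which $F(\lambda)/F({\rm H}\beta) = I(\lambda)/I({\rm H}\beta)\,10^{-c_{{\rm H}\beta} f(\lambda)}$ with $f({\rm H}\beta)=0$. The sketch below uses a generic $f({\rm H}\alpha) = -0.34$ as a placeholder; it is not the Blagrave et al. (2007) reddening-curve value actually adopted in the paper, and the observed ratio in the usage line is invented for illustration.

```python
import math


def c_hbeta(ha_over_hb_obs, intrinsic_ratio=2.89, f_halpha=-0.34):
    """
    Logarithmic extinction at H-beta from the observed Halpha/Hbeta ratio,
    assuming F(lam)/F(Hbeta) = I(lam)/I(Hbeta) * 10**(-c * f(lam)) with
    f(Hbeta) = 0.  A ratio below the intrinsic 2.89 gives a negative c;
    the paper sets the extinction to zero in that (anomalous) case.
    """
    return math.log10(ha_over_hb_obs / intrinsic_ratio) / (-f_halpha)


def deredden(flux_ratio, c, f_lambda):
    """Extinction-corrected intensity ratio relative to H-beta."""
    return flux_ratio * 10.0 ** (c * f_lambda)


# Hypothetical observed ratio of 3.4, purely for illustration:
c = c_hbeta(3.4)
print(round(c, 3))
print(round(deredden(3.4, c, -0.34), 2))  # recovers the intrinsic 2.89
```

As the text notes, the predicted intrinsic ratio of 2.89 is itself a weak function of $N_e$ and $T_e$, so the adopted case-B values matter little here.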
One knows from high spectral resolution studies (O’Dell 1992, Henney 1994, Henney 1998, O’Dell 2001) that, even in the inner nebula, the dust component of the PDR beyond the main ionization front scatters several tens of per cent of the emission, and that the nebular continuum (Baldwin et al. 1991) is much stronger than expected for an atomic continuum because of scattered light from the Trapezium stars. The anomalously low line ratio would indicate that the bluer [H$\beta$]{} line is scattered more efficiently than the [H$\alpha$]{} line. Since the effects of such scattering have not been modeled and there is a pattern of decreasing extinction in the direction of the anomalous line ratios, we have assumed that there is no extinction in those four samples. This assumption and the uncertainties of the role of the scattered emission-line radiation probably introduce an uncertainty in the derived line ratios of about 10 per cent. Electron temperatures were determined from line ratios using the [iraf-stsdas]{} task [temden]{}, from the \[N II\] ratio \[I(6548) + I(6583)\]/I(5755) and the \[O III\] ratio \[I(4959) + I(5007)\]/I(4363). Electron densities were determined using the \[S II\] I(6716)/I(6731) ratio but [*updating*]{} the atomic data as discussed in the next section. These combinations give the particularly useful advantage of sampling different regions along the line of sight: \[S II\] emission will arise essentially at the main ionization front, \[N II\] emission comes from a zone where hydrogen is ionized and helium is neutral, and \[O III\] emission comes from a zone where H is ionized and He is singly ionized (O’Dell 1998). The results of the calculations are presented in Table 8.
Variations with Distance from the Exciting Star =============================================== Variations in Electron Density ------------------------------ The [*Spitzer*]{} data provide an excellent diagnostic of electron density ($N_e$) in the S$^{++}$ region from the line flux ratio \[S III\] 18.7/33.5 $\mu$m. Likewise, the ground-based observations provide an excellent diagnostic of $N_e$ in the S$^+$ region from the line flux ratio \[S II\] 6716/6731 Å. Both of these diagnostic tools are very insensitive to $T_e$ (e.g., Rubin 1989). For our analyses, we will use $T_e$ = 8000 K. The optical spectra discussed in the last section permit an assessment of $T_e$\[N II\] and $T_e$\[O III\] values (see Table 8) from classical forbidden line ratios. While these values for $T_e$ are somewhat higher than the 8000 K adopted, we point out a well-known bias. That is, both $T_e$\[N II\] and $T_e$\[O III\] derived from the ratio of fluxes of ‘auroral’ to ‘nebular’ lines are systematically higher than the so-called ‘$T_0$’, which is the ($N_e$$\times$$N_i$$\times$$T_e$)–weighted average, where $N_i$ is the ion density of interest. The amount of this bias depends on the degree of $T_e$ variations in the observed volume (see Peimbert 1967, and many forward references). In our analyses, for $N_e$ now, and in later sections using the set of IR lines, it is more appropriate to use a $T_e$ that is similar to $T_0$. Because of the insensitivity of the volume emissivities to $T_e$, particularly when working with ratios of these IR lines, our results depend very little on this $T_e$ choice. Figure 7 shows $N_e$\[S III\] and $N_e$\[S II\] versus D (the projected distance in arcmin from $\theta^1$ Ori C to the centre of the chex or optical sample). For \[S III\], we use the effective collision strengths from Tayal & Gupta (1999) and the transition probabilities (A-values) from the recent compilation “Critically Evaluated Atomic Transition Probabilities for Sulfur  – " (Podobedova, Kelleher & Wiese 2009).
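The density diagnostics above rely on a line ratio that runs between a low-density and a high-density limit around a critical density. The toy model below illustrates how such a ratio is inverted to get $N_e$; the limiting values and critical density are rough \[S II\] 6716/6731-like placeholders, not the Tayal & Gupta / Ramsbottom et al. / Podobedova et al. atomic data used for the actual five-level-atom calculation in the paper.

```python
def ratio_model(ne, r_low=1.45, r_high=0.44, n_crit=2.5e3):
    """
    Toy density-sensitive line ratio R(n_e): interpolates monotonically
    between the low-density limit r_low and the high-density limit r_high
    around a critical density n_crit (illustrative values only).
    """
    return r_high + (r_low - r_high) / (1.0 + ne / n_crit)


def invert_ratio(r_obs, lo=1.0, hi=1.0e8, tol=1e-6):
    """Solve ratio_model(ne) = r_obs by bisection in log(n_e)."""
    for _ in range(200):
        mid = (lo * hi) ** 0.5
        if ratio_model(mid) > r_obs:
            lo = mid  # ratio too high means density still too low
        else:
            hi = mid
        if hi / lo < 1.0 + tol:
            break
    return (lo * hi) ** 0.5


# Round-trip check at a hypothetical n_e of 1000 cm^-3:
print(round(invert_ratio(ratio_model(1000.0))))
```

The insensitivity to $T_e$ noted in the text is why no temperature appears in this sketch at all; in the real calculation $T_e$ enters the emissivities only weakly.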
The original source they cite is Froese Fischer, Tachiev & Irimia (2006). For \[S II\], we use the effective collision strengths from Ramsbottom, Bell & Stafford (1996) and the A-values from Podobedova et al. (2009), with the original source Irimia & Froese Fischer (2005). These two $N_e$ distributions provide a unique perspective on the extended outer Orion Nebula. Clearly the values for $N_e$\[S II\] fall below those of $N_e$\[S III\] at a given D, except for the outermost regions, including V3. For any given [*Spitzer*]{} chex or optical sample, we view a column along the line of sight with a rectangular cross section. Due to ionization stratification, S$^{++}$/S$^+$ will be selectively highest in the column near the minimal projected distance from $\theta^1$ Ori C. Along this line of sight, at distances on either side of the minimum impact parameter, S$^{++}$/S$^+$ is expected to decrease because the actual 3-D distance to $\theta^1$ Ori C is larger. In this picture, there would not be a plane-parallel density profile but one that had a degree of concavity with respect to $\theta^1$ Ori C and an approximately monotonically decreasing density with increasing D from the exciting star. There are several other considerations. A blister is not only the commonly accepted model for the Orion Nebula, it is also a natural configuration once a nebula enters the champagne phase (e.g., Tenorio-Tagle 1979). Ionizing radiation leads to the creation of a dense PDR and an ionization-stratified layer facing the dominant ionizing source ($\theta^1$ Ori C). The natural shape of such a blister is concave, thus explaining the general form of the Huygens Region (Wen & O’Dell 1995). The factors that produce the concavity in the Huygens Region will also be at play further away, as one gets beyond the perturbation of the Bright Bar. In quasi-steady state, there would be a gas density drop going away from the PDR into the ionized layer.
When viewing \[S II\] emission, we are seeing material that is for the most part very close to the H$^+$–H$^0$ ionization front. Just interior to this H-ionization front is where sulfur transitions from S$^{++}$ to S$^+$. There is then the possibility that the bulk of the \[S II\] emission arises from a region where there is only partial ionization of hydrogen. Hence $N_e$ as measured by \[S II\] would be lower than that obtained from \[S III\], even though the [*total*]{} gas density could be higher (as the PDR is approached) than the total gas density nearby but closer to $\theta^1$ Ori C. In order to explain why $N_e$\[S II\] exceeds $N_e$\[S III\] at the outermost position V3, we offer the following. As one views far enough away from $\theta^1$ Ori C, scattered light becomes more important. By comparing [H$\beta$]{} and the radio continuum, O’Dell & Goss (2009) showed that in the outer Orion regions the dust in the PDR is not only scattering Trapezium optical starlight, but also scattering nebular emission line radiation produced in the much brighter Huygens Region. While this can be important for the \[S II\] emission, the infrared \[S III\] emission will be far less affected by scattering. The optical spectrum at V3 has a strong continuum, indicating substantial scattered optical light. This is likely why $N_e$\[S II\] is larger than $N_e$\[S III\]: the \[S II\] flux is a mix of local (low $N_e$) emission and scattered light from the higher-$N_e$ Huygens Region. Variations in Degree of Ionization ---------------------------------- From the measured infrared intensities, we are able to estimate ionic abundance ratios for three elements in adjacent ionic states: Ne$^{++}$/Ne$^+$, S$^{3+}$/S$^{++}$ and Fe$^{++}$/Fe$^+$.
Important advantages compared with optical studies of various other ionic ratios are: (1) the IR lines have a weak and similar $T_e$ dependence, while the collisionally-excited optical lines vary exponentially with $T_e$ (e.g., Osterbrock & Ferland 2006), and (2) the IR lines suffer far less from interstellar extinction and scattering. Indeed, for our purposes, the differential extinction correction is negligible as the lines are relatively close in wavelength. In our analysis, we deal with ionic abundance ratios and therefore line intensity ratios. In order to derive the ionic abundance ratios, we perform the usual semiempirical analysis assuming a constant $T_e$ and $N_e$ to obtain the volume emissivities for the pertinent transitions. We use the atomic data described in Simpson et al. (2004) and Simpson et al. (2007) except for the A-values for the sulfur ionic species. Earlier we discussed \[S III\] and \[S II\]. We also use the A-values in Podobedova et al. (2009) for \[S IV\]. The original source they cite is ‘Froese Fischer 2002a, downloaded from http://atoms.vuse.vanderbilt.edu/ on 2005 December 21’. In addition, we use a different effective collision strength for the \[Ne II\] line, as detailed in the next paragraph. ### Ne$^{++}$/Ne$^+$ We present both the variation of the observed flux ratio F(15.6)/F(12.8) and Ne$^{++}$/Ne$^+$ with D in Figure 8, using the values from Table 2 and Table 10, respectively. Here and throughout, the error values represent the propagated intensity measurement uncertainties and do not include the systematic uncertainties. In this paper, we commence to use the effective collision strengths for \[Ne II\] of Griffin et al. (2001).[^2] In our previous papers (R07, R08, and R10), we had used the values from Saraph & Tully (1994). Compared to those, the Griffin et al. values are approximately 10 per cent higher at the $T_e$’s characteristic of H [ii]{} regions. The Griffin et al. (2001) values appear to be the best available now (as also judged by Witthoeft et al.
2007). We continue to use the same effective collision strengths for [Ne III] (McLaughlin & Bell 2000). In our empirical derivation of ion ratios, as already discussed, we use the derived $N_e$ [S III] and $T_e$ = 8000 K throughout. F(15.6) decreases monotonically with D by almost a factor of 700 from I4 to V3. F(12.8) also decreases monotonically, except for a rise at V2 of $\sim$30 per cent compared with V1. Even though Ne$^+$ is the dominant neon ion beyond the Bright Bar, the [Ne III] 15.6 line is clearly present all the way to the outer boundary (see Figure 4). In fact, there is a very dramatic increase in the Ne$^{++}$/Ne$^+$ ratio for all three V3 observations, by a factor of $\sim$4.8 over the three V2 observations. The main reason for this jump is likely the large drop in $N_e$ [S III] by a factor of 3 from V2 to V3. Ionization equilibrium dictates that Ne$^{++}$/Ne$^+$ $\propto$ $N_e$$^{-1}$, all other things being equal. Whether the remainder of the factor-of-$\sim$4.8 change in the neon ionization balance must be attributed to other causes is difficult to determine. We could speculate that there might be another source of hard ionizing photons besides $\theta^1$ Ori C at this outer boundary, perhaps even external to the Orion Nebula.

### S$^{3+}$/S$^{++}$

As for neon, we present both the variation with D of the observed flux ratio F(10.5)/F(18.7) as well as the derived ionic ratio S$^{3+}$/S$^{++}$ (Figure 9). Both [S IV] 10.5 and [S III] 18.7 intensities decrease monotonically with D. Clearly F(10.5) decreases more steeply than F(18.7) with increasing D. The [S IV] 10.5 line was detected in just one of the three V3 observations, V3-2. As for the Ne$^{++}$/Ne$^+$ ratio, the analysis shows a similar dramatic increase in the S$^{3+}$/S$^{++}$ ratio for V3-2, by more than a factor of 5 over the V2 observations. The reasons offered in the last subsection would have a bearing on this ionic ratio as well.
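The semiempirical derivation of these ionic ratios reduces to scaling an observed flux ratio by the ratio of volume emissivities evaluated at the adopted $T_e$ and $N_e$. A minimal sketch, where the function name and all the numbers are illustrative placeholders (not the Simpson et al. atomic data or measured Orion fluxes):

```python
def ionic_ratio(flux_hi, flux_lo, emis_hi, emis_lo):
    """Abundance ratio n(X^hi)/n(X^lo) from a line flux ratio.

    flux_hi, flux_lo : observed (extinction-corrected) line fluxes
    emis_hi, emis_lo : volume emissivities per ion at the adopted T_e, N_e
    """
    return (flux_hi / flux_lo) * (emis_lo / emis_hi)

# Illustrative numbers only: the higher ion's line is twice as bright,
# but its transition is four times as emissive per ion, so the ion
# abundance ratio comes out to 0.5.
print(ionic_ratio(flux_hi=10.0, flux_lo=5.0, emis_hi=4.0, emis_lo=1.0))  # 0.5
```

In ionization equilibrium the true ratio also scales roughly as $N_e^{-1}$, which is why the factor-of-3 drop in density from V2 to V3 can account for much of the jump in Ne$^{++}$/Ne$^+$.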
Following Table 2, we show non-detections in the plots as 3 $\sigma$ upper limits.

### Fe$^{++}$/Fe$^+$

By virtue of the simultaneous measurement of both the [Fe III] 22.9 and [Fe II] 26.0 lines with the LH module, the line flux ratio covers exactly the same sky area (as did ratios involving lines observed with the SH module). Here we present both the variation with D of the observed flux ratio F(22.9)/F(26.0) and the derived ionic ratio Fe$^{++}$/Fe$^+$ (Figure 10). Both [Fe III] 22.9 and [Fe II] 26.0 intensities decrease with increasing D, except for a dramatic increase in F(26.0) at V2, by a factor of 2.2 compared to the intensity at V1. An increase at V2 was also noted above for the [Ne II] 12.8 line intensity. The [Fe III] 22.9 line was not detected in any of the three V3 observations and is treated as a 3 $\sigma$ upper limit in the plot. In Figure 10, the observed ratio F(22.9)/F(26.0) follows a very different pattern with D than those seen in Figures 8 and 9, which also have the higher-ionization line in the numerator and the lower-ionization line in the denominator. The primary reason for this is that the [Fe II] 26.0 line has a very substantial PDR contribution (Kaufman et al. 2006), because it arises from the second energy level, just 385 cm$^{-1}$ above ground (e.g., see discussion on p. 1126 of Simpson et al. 2007). Our analysis of the Fe$^{++}$/Fe$^+$ ratio [*does not account*]{} for the PDR contribution to the [Fe II] 26.0 line intensity. We derive Fe$^+$ by assuming the 26.0 line intensity is excited by electron collisions only. Even for this excitation route, we have not accounted for the PDR contribution, which occurs at the lower $T_e$ $\sim$500 K characteristic of the upper (second) energy level. Thus the Fe$^{++}$/Fe$^+$ ratios derived using our measured [Fe II] 26.0 line intensity must be [**lower limits**]{}. There is another [Fe II] line, $^4F_{7/2}$–$^4F_{9/2}$ at 17.936 $\mu$m, that has a purer H II region origin.
This arises from a level 2430 cm$^{-1}$ above ground (characteristic temperature $\sim$3500 K). Unfortunately, this is a weak line and, at the SH spectral resolution, blended with [P III] $^2P_{3/2}$–$^2P_{1/2}$ at 17.885 $\mu$m (see Figure 2). We are able to measure this [Fe II] line only at chex V1 and V2. At V1 the [P III] line is the brighter, while at V2 the [Fe II] line becomes the brighter. The Fe$^{++}$/Fe$^+$ ratio derived using this weak line is also shown in Figure 10 as the star symbol (red in the colour version). As expected, these few Fe$^{++}$/Fe$^+$ values are much higher than those inferred using the 26.0 $\mu$m line and should be considered the truer estimate of the Fe$^{++}$/Fe$^+$ ratio. Figure 10 may hold some important clues about the behaviour of the outer Orion regions. Notable compared with the neon and sulfur plots is the increase in both F(22.9)/F(26.0) and Fe$^{++}$/Fe$^+$ beginning from I2 to I1 (between D = 3.7 – 4.4$'$). While [*both*]{} F(22.9) and F(26.0) are decreasing with D for all the inner and middle chex, between I2 and I1 the drop in F(26.0) is much larger (factor of 2.27) than that for F(22.9) (factor of 1.25). The lower F(22.9)/F(26.0) ratios at I4, I3 and I2 may be due to some residual influence of the Bright Bar contributing significantly to F(26.0), although I2 is well removed from the BB. Another factor that may contribute to the ‘inversion’ in F(22.9)/F(26.0) with D is the decrease in $N_e$. Again, ionization equilibrium would require that Fe$^{++}$/Fe$^+$ $\propto$ $N_e$$^{-1}$, all other things being equal. Finally, another possibility that might contribute to the increased F(22.9)/F(26.0) ratio between I2 and I1 is the presence of Fe$^{3+}$. In fact, Fe$^{3+}$ is believed to be the most abundant iron ion in the Orion Nebula according to detailed photoionization models (Rubin et al. 1991a, 1991b; Baldwin et al. 1991). The discovery of the [Fe IV] 2837 Å line in Orion (Rubin et al. 1997) was used to estimate the iron abundance.
A more recent discussion may be found in Rodríguez & Rubin (2005). If the transition from Fe$^{3+}$ to Fe$^{++}$ is occurring between chex I2 and I1, this would help to explain the ‘inversion’.

Determination of Elemental Abundance Ratios
===========================================

In this section we derive several ratios of elemental abundances that may be addressed with our [*Spitzer*]{} data. As stated earlier, we have been particularly interested in the Ne/S ratio and have undertaken several studies to utilize the special ability of [*Spitzer*]{} spectroscopy in this regard (R07, R08, R10). In this section, we first cover Ne/S. Then we derive and discuss three measures of metallicity: Ne/H, S/H and Fe/H.

Neon to Sulfur abundance ratio
------------------------------

For H II regions, using [*Spitzer*]{} data only, the gas-phase Ne/S ratio may be approximated as (Ne$^+$ + Ne$^{++}$)/(S$^{++}$ + S$^{3+}$). This includes the dominant ionization states of these two elements. However, this relation does not account for S$^+$, which should be present at some level. We may safely ignore the negligible contributions of neutral Ne and S in the ionized region. Figure 11 shows our approximation for Ne/S versus D. Our ground-based observations, which cover [S II] 6716, 6731 Å and [S III] 6312 Å cospatially, allow for a correction to the [*Spitzer*]{}-data-only measurements. In order to estimate the downward corrections that apply to the individual chex, we derive S$^+$/S$^{++}$ from the above optical lines. Because the positions of the spectral long-slit sample extractions are usually not the same as the chex, and always cover a much smaller area on the sky, we use the [*optical sample closest to the various chex*]{}. The volume emissivities used in conjunction with the extinction-corrected intensities for the [S II] 6716, 6731 and [S III] 6312 lines are those for $N_e$ [S II] and $N_e$ [S III] respectively; we continue to use $T_e$ = 8000 K for both.
With these S$^+$/S$^{++}$ values, we correct the [*Spitzer*]{}-data-only estimate to obtain Ne/S = (Ne$^+$ + Ne$^{++}$)/(S$^+$ + S$^{++}$ + S$^{3+}$). The derived S$^+$/S$^{++}$ ratio is always less than 0.19 for the inner and middle chex. For the three sets of observations of the veil chex, it is no higher than 0.44. Thus S$^{++}$ remains the dominant S ion even in the outermost regions. While we find a fairly constant Ne/S for the 8 chex comprising I4 – M4, Figure 11 indicates a steep increase in Ne/S with D at the veil positions. We surmise that this may be due to a significant and increasing amount of S being tied up in dust grains. It is a safe assumption that there will be negligible Ne in grains. Thus, while the gas-phase Ne/S ratio may indeed be larger at these veil positions, the values presented in Figure 11 must be considered [*upper limits for the [**total**]{}*]{} Ne/S abundance ratio. Because of the likelihood that not all forms of a significant amount of sulfur are accounted for at the veil positions, our best estimate of the true Ne/S abundance ratio for the Orion Nebula is obtained from the eight values, corrected for S$^+$, for the I4 – M4 chex. The median value is 12.8. From the internal scatter amongst these 8 values, we obtain a sample mean and variance of 13.01$\pm$0.64. The uncorrected median for these same 8 chex is 15.0.

Ne/H and S/H
------------

By virtue of measuring the H(7–6) line in the same SH spectra as the two neon and two sulfur lines, we are able to derive the Ne/H and S/H abundances. The H(7–6) line provides a measure of H$^+$ from recombination theory (Storey & Hummer 1995). There is a complication here: at the [*Spitzer*]{} spectral resolution, the H(7-6) line is blended with the H(11-8) line. Their respective $\lambda$(vac) = 12.371898 and 12.387168 $\mu$m.
In order to correct for the contribution of the H(11-8) line, we use the relative intensity of H(11-8)/H(7-6) from recombination theory (Storey & Hummer 1995), assuming case B and $N_e$ = 500 cm$^{-3}$. The ratio H(11-8)/H(7-6) = 0.122 holds over our range of interest, $N_e$ = 100 – 1000 cm$^{-3}$ and $T_e$ = 8000 K. Indeed, it is appropriate for $T_e$ = 10000 K and for case A as well. There is also possible blending of the H(7-6) line with He(7-6), which we do not account for in this paper but now discuss with regard to how it would affect our analysis of metallicity. In an [*ISO*]{} short wavelength spectrometer (SWS) IR spectrum of the inner Orion Nebula (within the Huygens Region), the spectral resolution (R $\sim$2000) permitted a separation of the H(5-4) line from the strongest He(5-4) components (Rubin et al. 1998). They were then able to derive a robust He$^+$/H$^+$ ratio of 0.085$\pm$0.003 from those H and He Br$\alpha$ transitions. In the present case, all the strongest fine-structure components of the He(7-6) transition remain blended with the H(7-6) line at the [*Spitzer*]{} spectral resolution. We have used the photoionization code [cloudy]{} to predict the intensities of the He(7-6) lines relative to the H(7-6) line. This incorporates the physics described in Porter et al. (2005). The estimate is made using $T_e$ = 8500 K and $N_e$ = 1000 cm$^{-3}$, consistent with the values used in this paper, and case B recombination theory. The strongest He(7-6) component is the combined triplet and singlet multiplet $7i~^3I$ $\rightarrow$ $6h~^3H^o$ and $7i~^1I$ $\rightarrow$ $6h~^1H^o$ at 12.366519 $\mu$m. Next strongest is the combined triplet and singlet multiplet $7h~^3H^o$ $\rightarrow$ $6g~^3G$ and $7h~^1H^o$ $\rightarrow$ $6g~^1G$ at 12.3657 $\mu$m. This is followed by the combined triplet and singlet multiplet $7g~^3G$ $\rightarrow$ $6f~^3F^o$ and $7g~^1G$ $\rightarrow$ $6f~^1F^o$ at 12.3618 $\mu$m.
Other multiplets that would also blend are weaker and not used for this estimate. If the appropriate He$^+$/H$^+$ value were 0.085 at the location of our chex, then summing the above transitions for He(7-6) would result in an expected flux ratio He(7-6)/H(7-6) = 0.065. In terms of the contribution of the He(7-6) components to the [*entire*]{} observed blend \[H(7-6) + H(11-8) + He(7-6)\], it would be 0.055. However, it is very unlikely that at our chex locations SE of the Bright Bar He$^+$/H$^+$ is that large. Because we are unable to estimate how much smaller the ratio might be, we do not apply any correction to the values for Ne/H and S/H derived herein. We may safely conclude that any upward adjustment to these metallicities would be [*no larger than a factor of 1.055*]{} and likely only a few per cent. We note that all three He(7-6) components are on the blue side of H(7-6) while H(11-8) is on the red side. At the limited [*Spitzer*]{} spectral resolution, we see no systematic velocity shift or increase in the H(7-6) FWHM with respect to the other lines measured in Table 2. Figure 12 shows the Ne/H values. These are the sum of the Ne$^+$/H$^+$ and Ne$^{++}$/H$^+$ ratios listed in Table 10, along with the propagated uncertainties. There appears to be little variation with position for all chex. The H(7-6) line was not detected at V3; thus there are only lower limits at this outermost position. Following the same method as for the Ne/S ratio, utilizing just the innermost 8 chex, the median value is Ne/H = 1.01$\times$10$^{-4}$; the sample mean and variance yield (0.99$\pm$0.07)$\times$10$^{-4}$. If we also include the 6 independent measurements at V1 and V2, the median becomes Ne/H = 1.03$\times$10$^{-4}$, while the mean and variance of the sample of 14 are (1.01$\pm$0.08)$\times$10$^{-4}$. In terms of the conventional expression, this is 12 + log (Ne/H) = 8.00$\pm$0.03. Figure 13 shows the S/H estimates from the [*Spitzer*]{} data.
These are the sum of the S$^{++}$/H$^+$ and S$^{3+}$/H$^+$ ratios in Table 10, along with the propagated uncertainties. There appears to be little variation with position until reaching the V2 position. Once again we use the mean for the innermost 8 chex as the best value, S/H = 6.58$\times$10$^{-6}$. The drop in the estimated S/H indicated by all three independent measurements at V2 is likely due to the onset of more sulfur being tied up in grains. For these 8 innermost chex, we again make a correction for S$^+$, unseen by [*Spitzer*]{}, by using the S$^+$/S$^{++}$ ratios derived from the optical data here. The best [*corrected*]{} value is S/H = $(7.68\pm0.30)\times10^{-6}$, or 12 + log (S/H) = 6.89$\pm$0.02. Esteban et al. (2004) obtained deep optical echelle spectra within the inner Huygens Region. They used empirical methods to derive gas-phase elemental abundances. According to their table 14, for collisionally-excited lines (CELs), the values range from 12 + log(Ne/H) = $7.78\pm0.07$ to $8.05\pm0.07$ (Ne/H = 6.03$\times$10$^{-5}$ to 1.12$\times$10$^{-4}$), depending on the various ionization correction factors and whether they assume no $T_e$ variations or a mean-square $T_e$ variation factor, $t^2$ (Peimbert 1967), of 0.022, respectively. Similarly for sulfur, they found 12 + log(S/H) = 7.06$\pm$0.04 to 7.22$\pm$0.04 (S/H = 1.15$\times$10$^{-5}$ to 1.66$\times$10$^{-5}$).

Fe/H
----

The discussion in section 4.2.3 is very relevant to our derivation of the Fe/H abundance. Figure 14 plots the Fe/H estimates from the [*Spitzer*]{} data. These are the sum of the Fe$^+$/H$^+$ and Fe$^{++}$/H$^+$ ratios in Table 10, along with the propagated uncertainties. There appears to be little variation with position except for the V3 position. We stress that the Fe$^+$/H$^+$ ratios are derived from the [Fe II] 26 $\mu$m line, which as discussed no doubt has an unknown but significant PDR contribution.
Because of this, the Fe$^+$/H$^+$ ratios are overestimated, and the Fe/H estimates for chex I4 – V2 in Figure 14 must be deemed [*upper limits*]{}. While the surface brightness of the [Fe II] 26 $\mu$m line is somewhat smaller at V3 compared with V2, the derived Fe$^+$/H$^+$ ratios are much higher because the H(7-6) line is not detected at V3. The three separate V3 points are plotted as lower limits because we use the 3 $\sigma$ upper limit for the H(7-6) line. Nevertheless, the same caveat applies here too; that is, we have not accounted for any PDR contribution to the 26 $\mu$m line. Hence, it is incorrect to conclude that the gas-phase Fe/H abundance at V3 is as high as these 3 points indicate. Subject to all these uncertainties, we follow the same method of using the median for the innermost 8 chex to estimate an upper limit for the gas-phase (Fe$^+$ + Fe$^{++}$)/H$^+$ = 1.39$\times$10$^{-6}$. However, Fe$^{3+}$ has not been accounted for, and that would necessitate an [*increase*]{} in the estimate above for an assessment of the [*total gas-phase*]{} Fe/H. Indeed, there is little that can be contributed in this paper to the determination of the total or even the gas-phase Fe/H abundance. As mentioned in section 4.2.3, there is the uncertainty of how much Fe$^{3+}$ there might be, which could be particularly important for the inner chex positions. Furthermore, a number of studies conclude that iron must be substantially tied up in dust grains even within the H II region (e.g., Rodríguez 2002 and references therein). From their deep optical echelle spectra within the inner Huygens Region, Esteban et al. (2004) used empirical methods to also derive the gas-phase Fe/H abundance ratio.
According to their table 14, the values range from 12 + log(Fe/H) = 5.86$\pm$0.10 to 6.23$\pm$0.08 (Fe/H = 7.24$\times$10$^{-7}$ to 1.70$\times$10$^{-6}$), depending on the various ionization correction factors and whether they assume no $T_e$ variations or a mean-square $T_e$ variation factor, $t^2$ (Peimbert 1967), of 0.022, respectively.

Characterization of the Bright Bar and Outer Veil as an H II region – PDR interface
===================================================================================

While [*Spitzer*]{} is an admirable machine for measuring both Ne and S abundances in H II regions, the neon abundances are determined more reliably. As previously mentioned, this is because with [*Spitzer*]{} observations alone we account neither for S$^+$ nor for S that may be tied up in dust. Thus it is preferable here to ratio silicon (and other heavy elements) to neon, because neon is so well determined, with both the 12.8 and 15.6 $\mu$m lines well measured all the way to the extended Orion outer boundary at V3. We list the Si$^+$/(Ne$^+$ + Ne$^{++}$) ratio in Table 10 and show it versus D in Figure 15. Our derivation of the Si$^+$ abundance assumes that [*all*]{} the [Si II] 34.8 $\mu$m line emission arises within the ionized region and does not include the very significant PDR contribution at much lower characteristic temperatures (e.g., Kaufman et al. 2006). This caveat is similar to what was discussed for the [Fe II] 26 $\mu$m line (see section 4.2.3). Thus, the Si$^+$/Ne values here must be considered [*upper limits*]{}. Figure 15 shows at first a monotonic decrease in this ratio moving outward from the Bright Bar, from I4 to M1 (D = 2.6 – 5.1$'$). The ratio then increases with distance from V1 to V3 (D = 8.8 – 12.1$'$), with excellent repeatability amongst the 3 independent observations. There is a dramatic increase at V3. It is well established that the [Si II] 34.8 $\mu$m line in Orion predominantly arises in the PDR but is also produced in the ionized region (Rubin, Dufour & Walter 1993).
It is possible that the drop in the estimated Si$^+$/Ne ratio from I4 to M1 (D = 2.6 – 5.1$'$) is due to a residual influence of the Bright Bar contributing significantly to F(34.8), although this is a stretch for I1 and M1 given that they are far from the BB. Nevertheless, there is a robust conclusion that we may draw here: the dramatic rise at V3 must be due to a very substantial PDR 34.8 $\mu$m contribution. This is a strong piece of evidence that V3 is viewing an H II region – PDR interface. This picture is consistent with many of the other figures indicating a large change at V3. In a manner similar to Figure 15, we have also plotted the (Fe$^+$ + Fe$^{++}$)/(Ne$^+$ + Ne$^{++}$) ratio versus D (not included in this paper). This shows a giant leap up at V3 even when we take Fe$^{++}$ as zero (recall it was not detected at V3). We attribute this rise to a very substantial PDR 26.0 $\mu$m contribution. The set of measured hydrogen lines may also prove particularly useful to disentangle emission arising in the ionized region from that arising in the PDR. Figure 16 displays in four panels the flux ratio versus D of the H(7–6) line, which arises in the H II region, along with the three H$_2$ lines – H$_2$ S(2) 12.28, H$_2$ S(1) 17.04, and H$_2$ S(0) 28.22 $\mu$m – which arise in the PDR. Here we discuss panel (a) only – the flux ratio of the adjacent lines H(7–6) 12.4/H$_2$ S(2) 12.3. The intensity of the H(7–6) line falls monotonically with increasing D, except for an increase at V2 compared with V1. For all three observations at V3, H(7–6) was not detected (see the upper limits in Table 2), indicating that it is by far faintest at V3. The H(7–6)/H$_2$ S(2) flux ratio shows an increase at I1, M2, and M3 compared to adjacent chex. This is somewhat reminiscent of the behaviour of the F(22.9)/F(26.0) ratio (see Fig.
10), where we raised the possibility that the lower F(22.9)/F(26.0) ratios at I4, I3 and I2 might be due to some residual influence of the Bright Bar contributing significant PDR F(26.0) emission. In the case of Figure 16 (a), the H(7–6)/H$_2$ S(2) flux ratio would be lower because the Bright Bar PDR is still enhancing the H$_2$ lines. However, the ratio at M1 does not fit the pattern. More definitively, the upper limit to the flux ratio at V3 does comport with the other evidence that there is a very substantial PDR line contribution at V3. Indeed, the H$_2$ S(2) and H$_2$ S(1) lines become brighter with increasing D from V1 to V3, and at V3 are brighter than at I1 and almost as bright as at M1 (see Table 2). This is yet another strong piece of evidence that V3 is indeed sampling an H II region – PDR interface. While it is beyond the scope of this paper, we do note that this set of [*Spitzer*]{} data should provide a means to compare with, test, and interpret a detailed photoionization modeling effort that treats both the H II region and the PDR.

Discussion
==========

After a 2009 conference talk on the subject of this paper, one of the leading experts on PDR modeling, and on the Orion Nebula Bright Bar (BB) specifically, told RR that he/she was surprised to hear that there were lines of high-ionization species beyond (SE of) the BB. This individual thought that the BB quenched all ionizing radiation. After all, there is a definite transition from the ionized region to the PDR at the BB – per the famous 3-colour image of the PDR by Tielens et al. (1993) mentioned earlier. We posit that reconciling that view with the observations, analysis, and results here supplies important information regarding the BB. As generally believed, the BB may be treated as an approximately plane-parallel slab, viewed nearly edge-on to the line of sight. This slab is at much higher density than the adjacent material within the H II region (NW of the BB, that is, the side closer to $\theta$$^1$ Ori C).
The amount of matter at these higher densities within the slab is sufficient to soak up all the ionizing ($\geq$13.6 eV) photons, causing the PDR. Our [*Spitzer*]{} results demand a scenario in which copious ionizing photons penetrate to [**much larger distances**]{} SE of the BB. A simple and reasonable explanation is that the slab representing the BB is a localized escarpment within the confines of the larger Orion Nebula picture. In this picture, the BB slab will quench the ionizing photons emanating from $\theta$$^1$ Ori C over only a very limited solid angle. There will then be foreground and background emission along sight lines to the BB that is not produced in the BB. Because of the high density within the slab, the contribution to the emission measure through the (edge-on) length of the BB will be by far the majority of the emission measure integrated over the entire line-of-sight column. Hence this foreground and background emission not generated within the BB, including spectral lines of higher-ionization species, will be dwarfed by the emission produced within the BB. Once the line of sight is clear of the dominating influence of the BB, the character of this harder spectrum can be seen SE of the bar. It would be expected that the BB will create a shadow-zone volume that is devoid of direct ionizing photons from $\theta$$^1$ Ori C, but again over a limited solid angle. There is the possibility that the BB is clumpy and/or has holes, allowing radiation to penetrate to the ‘shadowed’ side. However, this appears to be ruled out by the observations and modeling of the BB (Tauber et al. 1994; Tielens et al. 1993). There was previous IR evidence of species with ionization as high as O$^{++}$ beyond the BB from [*KAO*]{} observations (Simpson et al. 1986). Without question, there is abundant evidence from optical observations beyond the BB of line emission from O$^{++}$, as well as from other ionic species found in H II regions.
Indeed, one need look no further than the optical spectra presented here in Tables 4–7. Even at the most distant position V3, lines are measured from the following higher-ionization species (along with the ionization potential needed to create the ion): He I (24.6 eV), [Ar III] (27.6 eV), [O III] (35.1 eV), and [Ne III] (41.0 eV). The problem with interpreting these optical observations is that much of the emission may be photons scattered from the much brighter inner Huygens Region (O’Dell 2001; O’Dell & Goss 2009). Because scattering is wavelength dependent, it is unknown how much of the observed optical line emission is produced in situ and how much is the scattered component. The mid-IR [*Spitzer*]{} lines suffer far less from scattering than do the optical lines, providing another inherent advantage when interpreting them in terms of nebular properties, including abundances. As mentioned in §4.2, the other advantages compared with the optical are that they are far less sensitive to $T_e$ and to fluctuations in $T_e$ ($t^2$), and suffer far less from extinction. Because of these important advantages, together with the ability of [*Spitzer*]{} to measure all the pertinent neon species along with the H(7–6) line in the same spectra, and the fact that Ne will not be incorporated in grains and molecules, the Orion Nebula Ne/H = (1.01$\pm$0.08)$\times$10$^{-4}$ (12 + log (Ne/H) = 8.00$\pm$0.03) is one of the most robust determinations of [*total*]{} metallicity for any element in any H II region. It is somewhat ironic that while Ne/H is the poorest determined amongst the most abundant elements in the Sun, it is (arguably) the best determined heavy-element abundance ratio in Orion – a worthy benchmark standard. There have been more estimates of the gas-phase Ne/S abundance ratio using [*Spitzer*]{} data than of Ne/H, owing to the weakness of the H(7–6) line relative to the Ne and S lines used. We reviewed the situation with regard to Ne/S in R08 (see figures 11 and 12 in that paper).
The value we determine here, $13.0\pm0.6$, is in reasonable accord with those found in R08 for the higher-ionization regions. However, all of the results in R08 used a different effective collision strength for [Ne II], as discussed earlier. Our transition to the Griffin et al. (2001) rather than the Saraph & Tully (1994) values will result in a downward revision of the R08 Ne/S estimates by as much as 10 per cent for the lower-ionization regions, but a smaller change for those at higher ionization. We defer a reanalysis of the results in R08 to a later paper, in which we will also present our [*Spitzer*]{} observations of a number of H II regions in the dwarf irregular galaxy NGC 6822.

Summary and conclusions
=======================

We obtained [*Spitzer*]{} IRS observations at 11 positions in the Orion Nebula, all southeast of the Bright Bar and extending in a straight line to more than 12$'$ from the exciting star $\theta$$^1$ Ori C. These spectra were taken with both the short-high (SH) and long-high (LH) modules, using aperture grid patterns chosen to very closely match the same area in the nebula. In addition, we have made new ground-based, long-slit spectra that correspond closely with the 11 regions observed with [*Spitzer*]{}. Orion is the benchmark for studies of the interstellar medium, particularly for elemental abundances. With these data, we focus predominantly on neon, the fifth most abundant element in the Universe, and sulfur, one of the top ten, because of the specific capability that [*Spitzer*]{} provided. Our major points are enumerated below.

\(1) The Ne/H abundance ratio is especially well determined, with a value of $(1.01\pm{0.08})\times$10$^{-4}$. In terms of the conventional expression, this is 12 + log (Ne/H) = 8.00$\pm$0.03. This may well be the [*gold standard*]{} for a determination of metallicity in an H II region.
\(2) We estimate the Ne/S gas-phase abundance ratio by observing the dominant ionization states of Ne (Ne$^+$, Ne$^{++}$) and S (S$^{++}$, S$^{3+}$) with [*Spitzer*]{}. The optical data are used to correct our [*Spitzer*]{}-derived Ne/S ratio for S$^+$, which is not observed with [*Spitzer*]{}. Excluding all three outermost ‘Veil’ positions, we find that the median value adjusted for the optical S$^+$/S$^{++}$ ratio is Ne/S = 12.8. From the internal scatter amongst these 8 values, we obtain a sample mean and variance of 13.01$\pm$0.64.

\(3) A dramatic find is the presence of species with ionization as high as Ne$^{++}$ all the way to the outer optical boundary, $\sim$12$'$ from $\theta$$^1$ Ori C. At these locations beyond the Bright Bar, where the transition from ionized-region to photo-dissociation-region lines is purported to be complete, it was somewhat surprising to find the high-ionization lines of [S IV] 10.51 and [Ne III] 15.56 $\mu$m present with excellent signal-to-noise (S/N) ratios. A likely possibility is that the Bright Bar is an escarpment that is quenching the ionizing radiation from $\theta^1$ Ori C over a [*localized solid angle*]{}. As usually characterized, the Bright Bar is seen nearly edge-on. The depth along the line of sight is not known. Thus there can be copious ionizing radiation in the foreground (and the background) that does not encounter the Bar at all. Such a scenario very much modifies a common viewpoint of the Nebula in the SE quadrant. This picture of the ionized region continuing SE of the Bar is further supported by our long-slit spectra that sample all the chex. From these we infer $T_e$ values at least as high as 8300 K from the familiar diagnostic line intensity ratios, [N II] 6584/5755 Å and [O III] 5007/4363 Å – values that are typical of the ionized region, not of PDRs. Likewise, our estimate of the fractional ionic abundance of S$^+$ is significantly smaller than that of S$^{++}$.
This IR result is robust, whereas the optical evidence from observations of high-ionization species (e.g., O$^{++}$) at the outer optical boundary suffers uncertainty because of the possible scattering of emission from the much brighter inner Huygens Region. The [*Spitzer*]{} spectra are consistent with the Bright Bar being a high-density [**‘localized escarpment’**]{} in the larger Orion Nebula picture. Hard ionizing photons reach most solid angles well SE of the Bright Bar.

\(4) The [*Spitzer*]{} data provide an excellent diagnostic of electron density in the S$^{++}$ region from the line flux ratio [S III] 18.7/33.5 $\mu$m. Likewise, the ground-based observations provide an excellent diagnostic of $N_e$ in the S$^+$ region from the line flux ratio [S II] 6716/6731 Å. From these, we derive the electron density versus distance from $\theta$$^1$ Ori C (see Figure 7). These two $N_e$ distributions provide a unique perspective on the extended outer Orion Nebula, with $N_e$ [S II] $<$ $N_e$ [S III] at a given distance except in the outermost region V3. The fact that $N_e$ [S II] is lower than $N_e$ [S III] for the most part is expected, as explained in §4.1, where reasons for the behaviour in the outermost region are also offered.

\(5) The [*Spitzer*]{} data provide substantial evidence that at chex V3 the observations are sampling an H II region – PDR interface. This should not be unexpected, since visually this appears to be the outer boundary of the Orion Nebula in this direction. As mentioned in the introduction, it is also the position of the “Veil” seen in projection (essentially edge-on) along our observed radial from $\theta$$^1$ Ori C. As described in O’Dell (2001), early evidence for this foreground “Veil” stemmed from 21-cm absorption-line observations (van der Werf & Goss 1989). The Veil is seen in projection as the outer boundary of M42, the grayish colour extending from roughly north counterclockwise to the southeast (see Figure 5).
In a very recent paper (O’Dell & Harris 2010), the case is made that the more likely picture is the following. Instead of the foreground Veil curving back away from the observer to be seen edge-on near V3, it is the background H II region – PDR boundary that is curving up toward the observer. In this view, they suggest the word “Rim" to define this feature. As such, our position V3 is then sampling the “Rim wall" in this particular radial direction from $\theta^1$ Ori C. This difference in perception and nomenclature does not alter the conclusions of the present paper. The following [*Spitzer*]{} data support the inference that at V3, we are indeed sampling an H II region – PDR interface. From the plot of Si$^+$/Ne versus D (Figure 15), derived using the \[Si II\] 34.8 $\mu$m line, there is a dramatic increase in this ratio at the outermost V3 position. As detailed in §6, our estimate of Si$^+$/Ne assumes [*all*]{} of the 34.8 $\mu$m emission arises in the ionized region and does not account for an unknown PDR contribution. The large increase at the outermost V3 position is strong evidence that the bulk of the \[Si II\] 34.8 $\mu$m emission arises in a PDR at this H II region – PDR boundary. In a manner similar to Figure 15, we also plotted the (Fe$^+$ + Fe$^{++}$)/(Ne$^+$ + Ne$^{++}$) ratio versus D (not included in this paper). This shows a large jump upward at V3 even when we take Fe$^{++}$ as zero (recall it was not detected at V3). We attribute this rise to a very substantial PDR \[Fe II\] 26.0 $\mu$m contribution. For all three observations at V3, H(7–6) was not detected, an indication that it is by far faintest at V3. On the other hand, the H$_2$ S(2) and H$_2$ S(1) lines, with an origin only in the PDR, have become brighter with increasing D from V1 – V3 and at V3 are brighter than at I1 and almost as bright as at M1.
This work is based on observations made with the [*Spitzer Space Telescope*]{}, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under NASA contract 1407. Support for this work was provided by NASA for [*Spitzer*]{} programme identification 50082. In addition to the [*Spitzer*]{} support, CRO was supported in part by [*HST*]{} grant AR 10967. GJF gratefully acknowledges support by NSF (0607028 and 0908877) and NASA (07-ATFP07-0124). We thank Don Clayton and Stan Woosley for providing information on the Ne/S ratio from a nucleosynthesis, galactic chemical evolution perspective. We are grateful for the help of our students – David Ng, Tim Craven, Savannah Lodge-Scharff, Evan Gitterman, Chris Lo, and Atish Agarwala – with various stages of this work. We thank the referee for valuable comments.

Abel N.P., Brogan C.L., Ferland G.J., O’Dell C.R., Shaw G., Troland T.H., 2004, ApJ, 609, 247

Abel N.P., Ferland G.J., O’Dell C.R., Shaw G., Troland T.H., 2006, ApJ, 644, 344

Asplund M., Grevesse N., Sauval A.J., Scott P., 2009, ARA&A, 47, 481

Bahcall J.N., Serenelli A.M., Basu S., 2006, ApJS, 165, 400

Baldwin J. A., Ferland G. J., Martin P. G., Corbin M. R., Cota S. A., Peterson B. M., Slettebak A., 1991, ApJ, 374, 580

Baldwin J.A., Verner E.M., Verner D.A., Ferland G.J., Martin P.G., Korista K.T., Rubin R.H., 2000, ApJS, 129, 229

Blagrave K. P. M., Martin P. G., Rubin R. H., Dufour R. J., Baldwin J. A., Hester J. J., Walter D. K., 2007, ApJ, 655, 299

Cami J., Bernard-Salas J., Peeters E., Malek S.E., 2010, Science (in press)

Clayton D.D., 2007, Science, 318, 1876

Drake J.J., Testa P., 2005, Nature, 436, 525

Esteban C., Peimbert M., García-Rojas J., Ruiz M. T., Peimbert A., Rodríguez M., 2004, MNRAS, 355, 229

Froese Fischer C., Tachiev G., Irimia A., 2006, ADNDT, 92, 607

Griffin D. C., Mitnik D. M., Badnell N. R., 2001, J. Phys. B, 34, 4401

Henney W. J., 1994, ApJ, 427, 288

Henney W. J., 1998, ApJ, 503, 760

Henney W. J., O’Dell C. R., Zapata L. A., García-Díaz Ma. T., Rodríguez L. F., Robberto M., 2007, AJ, 133, 2192

Higdon S. J. U., et al., 2004, PASP, 116, 975

Houck J. R., et al., 2004, ApJS, 154, 18

Irimia A., Froese Fischer C., 2005, Phys. Scripta, 71, 172

Jones B. F., Walker M. F., 1988, AJ, 95, 1755

Kaufman M.J., Wolfire M.G., Hollenbach D.J., 2006, ApJ, 644, 283

Lodders K., 2003, ApJ, 591, 1220

Magrini L., Stanghellini L., Villaver E., 2009, ApJ, 696, 729

McLaughlin B.M., Bell K.L., 2000, J. Phys. B, 33, 597

Morel T., Butler K., 2008, A&A, 487, 307

O’Dell C. R., 1998, AJ, 116, 1346

O’Dell C. R., 2001, ARA&A, 39, 99

O’Dell C. R., Goss W. M., 2009, AJ, 138, 12350

O’Dell C. R., Harris J. A., 2010, AJ (submitted)

O’Dell C. R., Walter D. K., Dufour R. J., 1992, ApJ, 399, L67

Osterbrock D. E., Ferland G. J., 2006, Astrophysics of Gaseous Nebulae and Active Galactic Nuclei (second edition), University Science Books (Mill Valley)

Peimbert M., 1967, ApJ, 150, 825

Podobedova L.I., Kelleher D.E., Wiese W.L., 2009, JPCRD, 38, 171

Porter R.L., Bauman R.P., Ferland G.J., MacAdam K.B., 2005, ApJ, 622, L73

Pottasch S.R., Bernard-Salas J., 2006, A&A, 457, 189

Ramsbottom A., Bell K. L., Stafford R. P., 1996, ADNDT, 63, 57

Rodríguez M., 2002, A&A, 389, 556

Rodríguez M., Rubin R. H., 2005, ApJ, 626, 900

Rubin R. H., 1989, ApJS, 69, 897

Rubin R. H., Colgan S. W. J., Dufour R. J., Lord S. D., 1998, ApJ, 501, L209

Rubin R. H., Dufour R. J., Ferland G. J., Martin P. G., O’Dell C. R., Baldwin J. A., Hester J. J., Walter D. K., Wen Z., 1997, ApJ, 474, 131

Rubin R. H., Dufour R. J., Walter D. K., 1993, ApJ, 413, 242

Rubin R. H., Martin P. G., Dufour R. J., Ferland G. J., Blagrave K. P. M., Liu X.-W., Nguyen J. F., Baldwin J. A., 2003, MNRAS, 340, 362

Rubin R. H., McNabb I. A., Simpson J. P., Dufour R. J., Pauldrach A. W. A., Colgan S. W. J., Craven T. W., Gitterman E. D., Lo C. C., 2010, IAU Symp., 265, 249 (R10)

Rubin R. H., Simpson J. P., Colgan S. W. J., Dufour R. J., Brunner G., McNabb I. A., Pauldrach A. W. A., Erickson E. F., Haas M. R., Citron R. I., 2008, MNRAS, 387, 45 (R08)

Rubin R. H., Simpson J. P., Colgan S. W. J., Dufour R. J., Ray K. L., Erickson E. F., Haas M. R., Pauldrach A. W. A., Citron R. I., 2007, MNRAS, 377, 1407 (R07)

Rubin R. H., Simpson J. P., Haas M. R., Erickson E. F., 1991a, ApJ, 374, 564

Rubin R. H., Simpson J. P., Haas M. R., Erickson E. F., 1991b, PASP, 103, 834

Saraph H.E., Tully J.A., 1994, A&AS, 107, 29

Simpson J. P., Colgan S. W. J., Cotera A. S., Erickson E. F., Hollenbach D. J., Kaufman M. J., Rubin R. H., 2007, ApJ, 670, 1115

Simpson J. P., Rubin R. H., Colgan S. W. J., Erickson E. F., Haas M. R., 2004, ApJ, 611, 338

Simpson J.P., Rubin R.H., Erickson E.F., Haas M.R., 1986, ApJ, 311, 895

Smith J. D. T., et al., 2007a, PASP, 119, 1133

Smith J. D. T., et al., 2007b, [cubism]{} Handbook

Storey P. J., Hummer D. G., 1995, MNRAS, 272, 41

Tauber J.A., Tielens A.G.G.M., Meixner M.M., Goldsmith P.F., 1994, ApJ, 422, 136

Tayal S. S., Gupta G. P., 1999, ApJ, 526, 544

Tenorio-Tagle G., 1979, A&A, 71, 59

Tielens A.G.G.M., Meixner M.M., van der Werf P.P., Bregman J., Tauber J.A., Stutzki J., Rank D., 1993, Science, 262, 86

van der Werf P.P., Goss W.M., 1989, A&A, 224, 209

Wang W., Liu X.-W., 2008, MNRAS, 389, L33

Wen Z., O’Dell C. R., 1995, ApJ, 438, 784

Witthoeft M. C., Whiteford A. D., Badnell N. R., 2007, J. Phys. B, 40, 2969

Woosley S.E., Heger A., 2007, Phys. Rep., 442, 269

[^1]: [iraf]{} is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc. under cooperative agreement with the National Science Foundation.
[^2]: The value at 8000 K is 0.310 from the more complete set of effective collision strengths, available on the Controlled Fusion Atomic Data Center Web Site at ORNL, www-cfadc.phy.ornl.gov/data\_and\_codes.
--- abstract: 'The electronic correlations on a C$_{20}$ molecule, as described by an extended Hubbard Hamiltonian with a nearest neighbor Coulomb interaction of strength $V$, are studied using quantum Monte Carlo and exact diagonalization methods. For electron doped C$_{20}$, it is known that pair-binding arising from a purely electronic mechanism is absent within the standard Hubbard model ($V=0$). Here we show that this is also the case for hole doping for $0<U/t\leq 3$ and that, for both electron and hole doping, the effect of a non-zero $V$ is to work against pair-binding. We also study the magnetic properties of the neutral molecule, and find transitions between spin singlet and triplet ground states for either fixed $U$ or $V$ values. In addition, spin, charge and pairing correlation functions on C$_{20}$ are computed. The spin-spin and charge-charge correlations are very short-range, although a weak enhancement in the pairing correlation is observed for a distance equal to the molecular diameter.' author: - Fei Lin - 'Erik S. S[ø]{}rensen' - Catherine Kallin - 'A. John Berlinsky' bibliography: - 'exthub.bib' title: 'Extended Hubbard model on a C$_{20}$ molecule' --- Introduction ============ Shortly after the discovery of superconductivity in C$_{60}$, it was suggested by Chakravarty, Kivelson and Gelfand [@kivelson91a; @kivelson91b; @kivelson01] that an electronic mechanism, in which pairs of electrons preferentially reside on a single molecule rather than on neighboring molecules, might provide the pairing mechanism for superconductivity. Using second order perturbation theory they found evidence for pair binding, above a threshold value of $U/t \approx 3$. They also found that this attraction between doped electrons is accompanied by a violation of Hund’s rule, which requires maximal spin, for the two-electron-doped C$_{60}$, and that for $U/t > 3$, the ground state for two-electron-doped C$_{60}$ has spin zero.  
[@kivelson91a; @kivelson91b] However, recent calculations, [@lin05a] using quantum Monte Carlo (QMC) techniques, suggest that the repulsive Hubbard model does not lead to pairing on C$_{60}$. On the other hand, there [*are*]{} geometries where pair binding [*is*]{} known to occur [@kivelson92; @kivelson01]. In particular, in exact diagonalization (ED) studies of the extended Hubbard model on the much smaller C$_{12}$ (truncated tetrahedron) molecule, White *et al.* have shown that a negative pair-binding energy (effective attraction between doped electrons) exists for an intermediate value of the on-site Coulomb interaction $U$ \[see Eq. (\[exthubmd\]) and Fig. \[edc12pbe\] (a)\]. A more realistic model of the fullerenes would include longer-ranged Coulomb repulsions, and it was found that this pair-binding energy also survives in C$_{12}$ for modestly repulsive values of the nearest-neighbor (NN) interaction, $V$, but increasing $V$ eventually kills the pair binding. The same violation of Hund’s rule as in C$_{60}$ was also observed in C$_{12}$ \[see Ref.  and Fig. \[edc12pbe\] (b)\]. With a different extended Hubbard model, Sondhi *et al.* [@sondhi95] studied the effects of both the NN interaction $V$ and the off-diagonal interactions on the pair-binding energy and Hund’s rule violation in the C$_{60}$ molecule. Using perturbative calculations, they find that the NN interaction $V$ terms suppress pair binding while the off-diagonal terms enhance it. Goff and Phillips [@goff92; @goff93] considered the effects of both the NN interaction $V$ and longer-range Coulomb terms on the pair-binding energy, again by perturbation theory, and also found that the inclusion of $V$ terms strongly suppresses pair binding in C$_{60}$.
The fact that ED studies found pair-binding for the smaller C$_{12}$ molecule [@kivelson92] and the recent rapid development of experimental techniques for the synthesis of C$_{20}$ solid phases [@wang01; @iqbal03] make it interesting and timely to explore correlation effects in C$_{20}$, the smallest gas-phase fullerene molecule, which has dodecahedral geometry. [@prinzbach00] In Ref. , we briefly reported on pair-binding for electron-doped C$_{20}$ for a wide range of values of $U/t\leq 100$, but with $V=0$, using both QMC for $U/t\leq 3$ and ED for the full range of values. Using cluster perturbation theory [@senechal00; @senechal02] we also identified a metal-insulator transition near $U_c/t\sim 4.2$ for molecular solids formed of C$_{20}$. In this paper, we provide further details of our numerical techniques and consider both electron and hole doping for an extended Hubbard model with both on-site and NN repulsion. We also study density-density, spin-spin and pairing correlation functions as a function of separation on the molecule. The extended Hubbard Hamiltonian on a single C$_{20}$ molecule is defined as $$H=-t\sum_{\langle ij\rangle\sigma}(c^{\dagger}_{i\sigma}c_{j\sigma}+h.c.) +U\sum_i n_{i\uparrow}n_{i\downarrow}+V\sum_{\langle ij\rangle}n_in_j, \label{exthubmd}$$ where $c^{\dagger}_{i\sigma}$ ($c_{i\sigma}$) is an electron creation (annihilation) operator on site $i$, indices $i,j$ run over 20 sites of a dodecahedron, $U$ is the on-site Coulomb interaction, $V$ is the NN Coulomb interaction, and $n_i=n_{i\uparrow}+n_{i\downarrow}$ is the number of electrons on site $i$. Our goal here is to focus on strong correlation effects in C$_{20}$ using exact numerical techniques. The Hamiltonian Eq. (\[exthubmd\]) is a simplified model of C$_{20}$ but it still largely captures such correlation effects. We calculate ground state energies as a function of both $U$ and $V$ for neutral, one- and two-electron dopings.
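Eq. (\[exthubmd\]) can be made concrete on the smallest nontrivial cluster. The sketch below is illustrative only (a two-site toy model, not the 20-site calculation reported here): it finds the ground-state energy of the $N=2$, $S_z=0$ sector of the two-site extended Hubbard model, which has the closed form $(U+V)/2-\sqrt{[(U-V)/2]^2+4t^2}$.

```python
import math

def ground_energy_two_site(t, U, V):
    """Lowest eigenvalue of the two-site version of Eq. (exthubmd) in the
    (N=2, Sz=0) sector, written in the basis {|ud,0>, |0,ud>, |u,d>, |d,u>}.
    The off-diagonal signs follow from fermionic anticommutation; they do
    not affect the spectrum of this block."""
    M = [[U, 0.0, -t, t],
         [0.0, U, -t, t],
         [-t, -t, V, 0.0],
         [t, t, 0.0, V]]
    # Shifted power iteration: the dominant eigenvector of c*I - M
    # (with c above the spectral radius of M) is the ground state of M.
    c = 1.0 + max(sum(abs(x) for x in row) for row in M)
    v = [1.0, 0.2, 0.3, 0.4]  # arbitrary start with ground-state overlap
    n = c
    for _ in range(3000):
        w = [c * v[i] - sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]
        n = math.sqrt(sum(x * x for x in w))
        v = [x / n for x in w]
    return c - n  # ||(cI - M)v|| converges to c - E0
```

For $t=1$, $U=4$, $V=0$ this reproduces the familiar two-site Hubbard result $E_0=2-\sqrt{8}$; switching on $V$ raises the covalent configurations and shifts $E_0$ accordingly.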
Comparisons among these energies show that the electronic pair-binding energy $\Delta_{\textrm{b}}(21)=E(20)+E(22)-2E(21)$ is positive (repulsive) for the parameter ranges studied ($0<U/t\leq 3$ for $V/t=0.2$ and $0.20\leq V/t\leq 0.46$ for $U/t=1$). This implies that it is energetically favorable for two electrons to stay on different molecules as opposed to the same molecule. We also find that the existence of a NN Coulomb interaction $V$ enhances this tendency, as expected, in order to reduce the intramolecular Coulomb interaction energy. For hole doping, the corresponding hole pair-binding energy $\Delta_{\textrm{b}}(19)=E(18)+E(20)-2E(19)$ is again positive (repulsive) for the parameter range ($0<U/t\leq 3$ and $V=0$), i.e., there is an effective repulsion between two doped holes on the same C$_{20}$ molecule. ![Huckel molecular orbitals of a neutral dodecahedral C$_{20}$ molecule.  [@ellzey03][]{data-label="c20huckel"}](c20huckel.eps){width="8cm"} Unlike the case of C$_{60}$, the highest occupied molecular orbital (HOMO) of the neutral C$_{20}$ molecules, in the weakly interacting limit, is a four-fold orbitally degenerate level occupied by two electrons. (See Fig. \[c20huckel\].) Hund’s rules predict for this case that the two electrons occupy different orbitals and have total $S=1$, implying that, in the absence of a Jahn-Teller distortion, the neutral molecule has a magnetic moment. In previous work [@strongc20], for $V=0$, we have confirmed this magnetic moment for $0<U/t<3$ and shown that at the metal-insulator transition, $U_c$, the ground-state changes from a spin triplet to a singlet for neutral C$_{20}$ and from $S=2$, through $S=1$, to $S=0$ for C$_{20}^{2-}$. Here we extend this analysis to determine ground state spin configuration for neutral C$_{20}$ for a fixed value of $U/t=2$ as a function of $V$, and find a level crossing between $V/t=1$ and $V/t=1.5$ for spin triplet and singlet states. 
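These diagnostics are simple arithmetic on ground-state energies. As a check, using the $U/t=2$, $V=0$ ED values transcribed from Table \[c20edpqmc\] (lowest listed energy at each filling), both pair-binding energies indeed come out positive:

```python
# ED ground-state energies E(N) at U/t=2, V=0 for C20, transcribed from
# Table [c20edpqmc] (lowest value listed for each filling N).
E = {18: -22.4044466933, 19: -21.5223243600, 20: -20.5983834340,
     21: -19.6331786587, 22: -18.6289129089}

def pair_binding(E, n):
    """Delta_b(n) = E(n+1) + E(n-1) - 2*E(n); positive means repulsive."""
    return E[n + 1] + E[n - 1] - 2.0 * E[n]
```

Both $\Delta_{\textrm{b}}(21)\approx 0.039t$ and $\Delta_{\textrm{b}}(19)\approx 0.042t$ are positive at this point of parameter space, consistent with an effective repulsion between two dopants on one molecule.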
For $U/t=2$, we estimate the critical $V_c/t$ to be $1.1$ for the spin triplet to singlet transition of the neutral molecule. In light of our results for $V=0$, we expect that, in this case too, the magnetic transition at $V_c$ will coincide with a metal-insulator transition for molecular solids formed of C$_{20}$. We also investigate the pair-binding energy for the hole doped case for both $V=0$ and $V\neq 0$, and examine the effect of a non-zero $V$ on Hund’s rule. The occurrence of orbital degeneracy and the resulting magnetic moment are tied to the icosahedral symmetry of the molecule. Simple molecular orbital calculations strongly suggest that the molecular symmetry is lowered by a Jahn-Teller effect from $I_h$ to $D_{3d}$, with the HOMO being a non-degenerate singlet.  [@yamamoto05] However, the correlation effects that give rise to Hund’s rule compete with this tendency to form a singlet ground state, and hence they also compete with the Jahn-Teller effect. As reported previously, [@strongc20] we find that when the on-site Coulomb interaction $U/t$ is sufficiently large ($U\gtrsim 4.2t$), the ground state is gapped with $S=0$ and the $I_h$ symmetry is likely stable against a $D_{3d}$ distortion. In order to more exclusively focus on the effects of the non-zero $V$ term we shall here assume that the icosahedral symmetry is unbroken even for smaller $U$ values. In the next section, we briefly introduce the projection quantum Monte Carlo (PQMC) [@white89] and ED methods for this model. This is followed, in Section \[c12results\], by a comparison of PQMC with ED results on a C$_{12}$ and a discussion of Hund’s rule violation in C$_{12}$. In section \[c20results\] we focus on the C$_{20}$ molecule. 
Hole pair-binding in C$_{20}$ is discussed, the influence of a non-zero nearest neighbor $V$ on the pair-binding is investigated, and results for the triplet-singlet transition with $V$ are described, along with calculations of several correlation functions in the C$_{20}$ molecule. Section \[conclusions\] contains discussion and conclusions. Method ====== PQMC ---- As noted in Ref. , the idea in PQMC simulations of the extended Hubbard model is to decouple the two-body interaction terms (both $U$ and $V$ terms) in the partition function by means of discrete Hubbard-Stratonovich transformations.  [@hirsch83] The resultant one-body terms are coupled to several auxiliary Ising spin fields that live either on the lattice sites ($U$ term) or on the lattice bonds ($V$ term). One such discrete transformation in the $V$ term is given by $$e^{-\Delta\tau Vn_{i\alpha}n_{j\beta}} =\frac{1}{2}\textrm{Tr}_{\{\sigma_{ij}^{\alpha\beta}\}} e^{\lambda_2\sigma_{ij}^{\alpha\beta}(n_{i\alpha}-n_{j\beta}) -\frac{\Delta\tau V}{2}(n_{i\alpha}+n_{j\beta})}, \label{decouple}$$ where $\alpha,\beta=\uparrow,\downarrow$, $\sigma_{ij}^{\alpha\beta}=\pm 1$ is the auxiliary Ising spin on bond $(i,j)$, $\Delta\tau$ is the discrete imaginary time slice in PQMC, and the parameter $\lambda_2$ is determined by $\tanh^2(\lambda_2/2)=\tanh(\frac{\Delta\tau V}{4})$. The same decoupling equation applies for on-site Coulomb interactions, i.e., $i=j$, except that the constant $V$ is replaced by $U$ and $\lambda_2$ by $\lambda_1$, which is similarly given by $\tanh^2(\lambda_1/2)=\tanh(\frac{\Delta\tau U}{4})$. These one-body fermionic terms in the partition function can then be explicitly traced out, leaving traces over the auxiliary Ising spins, which can be evaluated by Monte Carlo (MC) [@hirsch83].
$$\begin{aligned} Z&=&\sum_{\{\sigma\}}\prod_{\alpha}\det [1+B_{L}(\alpha)B_{L-1}(\alpha)\cdots B_{1}(\alpha)]\nonumber\\ &=&\sum_{\{\sigma\}}\det O(\{\sigma\})_{\uparrow}\det O(\{\sigma\})_{\downarrow}, \label{zfinal}\end{aligned}$$ where $\{\sigma\}=\{\sigma^1,\sigma^2,\sigma^3,\sigma^4,\sigma^5\}$ is the set of five species of Ising fields, with $\sigma^1$ representing the on-site Ising spins and $\sigma^{2-5}$ the NN bond Ising spins (one for each of the 4 spin configurations). The $B_l$ matrices are defined as $$\begin{aligned} B_{l}(\alpha)&=&e^{-\Delta\tau K/2}e^{W^{\alpha}(l)}e^{-\Delta\tau K/2},\label{bdefine}\\ (K)_{ij}&=&\left\{\begin{array}{cc} -t & \mbox{for $i$,$j$ NN},\\ 0 & \mbox{otherwise}, \end{array}\right.\\ W_{ij}^{\alpha}(l)&=&\alpha[\delta_{ij}\lambda_1\sigma_{i}^1(l)+ \delta_{\langle ij\rangle}\lambda_2\sum_{m=2}^5\sigma_{ij}^m],\\ \delta_{\langle ij\rangle}&=&\left\{\begin{array}{cc} 1 & \mbox{for $i$,$j$ NN},\\ 0 & \mbox{otherwise}, \end{array}\right.\end{aligned}$$ where $l=1,\cdots,L$ is the time slice index, and $\alpha=\pm 1$ denotes the two determinants in Eq. (\[zfinal\]). A complete MC sweep through the lattice will therefore consist of trial flipping of one species of auxiliary Ising spins on all the lattice sites and trial flipping of four species of auxiliary Ising spins on all the NN bonds in the lattice system. Fast calculation of the probability ratio in flipping one bond Ising spin at one time slice is still possible using the local update technique,  [@blankenbecler81] except that one needs to apply the probability ratio formula twice for each bond Ising spin flip (which affects two sites). We remark that in this decomposition scheme it is possible to treat even longer range Coulomb interactions \[e.g., next nearest neighbor (NNN) Coulomb interactions, etc.\] by introducing more species of auxiliary Ising spins that live on these longer bonds. 
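The identity in Eq. (\[decouple\]) holds exactly for occupation numbers $n\in\{0,1\}$ and is easy to verify numerically. The short sketch below (a verification aid, not part of the production PQMC code) computes $\lambda_2$ from $\tanh^2(\lambda_2/2)=\tanh(\Delta\tau V/4)$ and tests all four occupation pairs:

```python
import math
import itertools

def check_hs(dtau, V, tol=1e-12):
    """Verify e^{-dtau*V*ni*nj} = (1/2) sum_{s=+-1}
    e^{lam*s*(ni-nj) - (dtau*V/2)*(ni+nj)} for ni, nj in {0,1},
    with tanh^2(lam/2) = tanh(dtau*V/4).  Returns lam."""
    lam = 2.0 * math.atanh(math.sqrt(math.tanh(dtau * V / 4.0)))
    for ni, nj in itertools.product((0, 1), repeat=2):
        lhs = math.exp(-dtau * V * ni * nj)
        rhs = 0.5 * sum(math.exp(lam * s * (ni - nj) - 0.5 * dtau * V * (ni + nj))
                        for s in (1, -1))
        assert abs(lhs - rhs) < tol, (ni, nj, lhs, rhs)
    return lam
```

The same check with $V\to U$ yields $\lambda_1$ for the on-site decoupling.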
The only problem is that one needs to walk through a larger and larger phase space of the auxiliary Ising spins during the MC simulations, which will, of course, increase the computation time. Practically, we find that, to collect the same amount of data, the CPU time doubles for $V\neq 0$ compared with the $V=0$ case. In a typical calculation the projection factor $\beta$ in PQMC was taken to be $\beta=10/t$, and the discrete time slice was set at $\Delta\tau=0.05/t$. $10^3$ MC warm-up sweeps through the whole space-time lattice are typically performed before collecting data. To estimate the statistical errors, we use the same method as was used in Ref. . Exact Diagonalizations (ED) --------------------------- ![(Color online) Dodecahedral C$_{20}$ geometry in 2D view. Solid and empty points denote two sets (orbits) of carbon atoms divided by the S$_{10}$ symmetry.[]{data-label="c20_real"}](c20_real.eps){width="8cm"} The exact diagonalizations on C$_{12}$ are done using standard Lanczos techniques and we therefore focus on the ED of C$_{20}$. We always use total particle number $N$ and total $S_z$ as quantum numbers since they are conserved and we perform ED in the corresponding reduced Hilbert space. In addition, ED are performed using the $S_{10}$ sub-group symmetry present in the point group $I_h$. The improper rotations generated by the elements of $S_{10}$ can be visualized as a rotation of an angle $2\pi/10$ around the center of a pentagon followed by a reflection in a plane perpendicular to the rotation axis. This is illustrated in Fig. \[c20\_real\] where the numbering of the sites is to be understood in the following way: the sites 1 through 10 are shifted up by 1 (modulo 10) under $S_{10}$ and the sites 11 through 20 are shifted in a similar manner. Hence, under the action of the $S_{10}$ group two different orbits exist, marked by the solid and open points in Fig. \[c20\_real\]. 
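The $S_{10}$ action just described can be checked programmatically. The sketch below (illustrative bookkeeping only) encodes the site map of Fig. \[c20\_real\] and confirms that the group element has order 10 and partitions the 20 sites into two orbits:

```python
def s10(site):
    """S10 site map from the text: sites 1..10 are shifted up by 1
    (modulo 10), and sites 11..20 likewise."""
    if 1 <= site <= 10:
        return site % 10 + 1
    return (site - 11 + 1) % 10 + 11

def order(perm, n=20):
    """Smallest k > 0 with perm^k = identity on {1..n}."""
    k, cur = 1, {s: perm(s) for s in range(1, n + 1)}
    while any(cur[s] != s for s in cur):
        cur = {s: perm(cur[s]) for s in cur}
        k += 1
    return k

def orbits(perm, n=20):
    """Partition of {1..n} into cycles of perm."""
    seen, orbs = set(), []
    for s in range(1, n + 1):
        if s in seen:
            continue
        orb, x = [], s
        while x not in seen:
            seen.add(x)
            orb.append(x)
            x = perm(x)
        orbs.append(orb)
    return orbs
```

The two length-10 orbits correspond to the solid and open points in Fig. \[c20\_real\].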
Many other symmetries exist, but the $S_{10}$ symmetry is large and relatively easy to implement, and we have not exploited additional symmetries since the added CPU time needed to implement them would offset the time gained from reducing the size of the Hilbert space. The $S_{10}$ quantum number can be thought of as a pseudo angular momentum, $j_{10}$, and for each value of $N$ and $S_z$ we have to find the value of $j_{10}$ that corresponds to the ground state. In many cases it is not an obvious value and it is often non-zero. In the accompanying tables we show the values of $j_{10}$ corresponding to the listed energies, and in Table \[edu\] we show the complete dispersion of the lowest magnetic modes for neutral C$_{20}$ as a function of $j_{10}$. The calculations are fully parallelized Lanczos calculations executed on SHARCNET computers. A typical calculation at half-filling ($N=20$, $S_z=0$), which after $S_{10}$ symmetry reduction requires a Hilbert space of ${\cal N}=3,418,725,024$ states, is performed with $P=64$ CPUs using about 540 seconds of CPU time (per CPU) per Lanczos iteration. The memory requirement for this example is roughly 2.1 Gb per CPU. Excellent convergence is always observed in fewer than 300 Lanczos iterations, typically fewer than 200. The heart of the Lanczos calculation is the matrix-vector multiplication, which in this case has to be implemented in parallel. As one of several choices, we have chosen to have each CPU apply the full matrix to one section of the vector, with each CPU returning the corresponding section of the resulting vector. The partial results from each CPU therefore need to be communicated among all $P$ processors, with each processor communicating to all others. Due to the size of the involved Lanczos vectors (40-60 Gb), which greatly exceeds the available per-CPU memory, it is necessary to repeat this $P\times P$ communication step many thousands of times per Lanczos step.
The communication step therefore quickly becomes the bottleneck in the calculation unless it can be done very efficiently. Fortunately, this is possible using non-blocking communications, where the individual CPUs do not wait for a communication to complete. The drawback of using non-blocking communications is that buffer space has to be allocated until it has explicitly been verified that the communication has completed. We have implemented a dual-buffer strategy yielding an extremely efficient communication step. The CPU time spent per CPU is, for all accessible numbers of processors we have been able to check, overwhelmingly dominated by actual calculations rather than communications. For a fixed ${\cal N}$ we have observed almost linear scaling for $P=64,128,256,384$ and $512$. The great advantage of this approach is that the complexity of Lanczos calculations scales with the size of the Hilbert space, ${\cal N}$, as ${\cal N}\log{\cal N}$. Neglecting the logarithm, a doubling of the size of the Hilbert space, ${\cal N}$, can then be almost compensated by doubling the number of processors $P$.

                              $S$           $S_z$   ED               PQMC        $\langle{\rm sign}\rangle$
  --------------------------- ------------- ------- ---------------- ----------- ----------------------------
  $E_{12}$                    0             0       -9.4647669965    -9.466(2)   0.97
  $E_{13}$                    1/2           1/2     -6.8287003500    -6.829(4)   0.33
  $E_{13}$                    3/2           3/2     -6.0844214907    -6.059(6)   0.20
  $E_{14}$                    0             0       -4.1568425864    -4.11(1)    0.11
  $E_{14}$                    1             1       -4.0772924523    -4.080(5)   0.34
  $\Delta_{1,0}$              (1/2, 0)              2.6360666465     2.637(4)
  $\Delta_{1,0}$              (3/2, 0)              3.3803455058     3.407(6)
  $\Delta_{\textrm{b}}(13)$   (0, 0, 1/2)           0.0357911171     0.08(1)

  : Comparison of ED and PQMC calculations on the truncated tetrahedron (12 sites) at $U=2t$ and $V=0.2t$. $E_n(S_z)$ is the energy of a system with $n$ electrons and $z$-component of total spin $S_z$. $\Delta_{n,m}$ is the energy difference $E_{12+n}(S^n_z)-E_{12+m}(S^m_z)$ with $(S^n_z,S^m_z)$ given in the second column.
For binding energies $\Delta_{\textrm{b}}(n)$ the second column shows $(S^{n+1}_z,S^{n-1}_z,S^{n}_z)$ – the $S_z$ values for the 3 states involved in its calculation [@lin05a].\[c12edpqmc\] Results for C$_{12}$ {#c12results} ==================== Before turning our attention to the C$_{20}$ molecule we investigate the simpler C$_{12}$ molecule in the truncated tetrahedron configuration. As mentioned above, previous studies [@kivelson92] have found a negative pair-binding energy on this molecule that, however, became positive (repulsive interaction) in the presence of a sufficiently large $V$. The purpose of this investigation is two-fold. First of all, we want to verify the correctness of our numerical approach while at the same time highlighting some of the subtleties of interpreting the PQMC data. Secondly, the relative ease with which calculations can be performed on this molecule allows for a rather detailed study of the correlation between the negative pair-binding energy and a violation of Hund’s rule for the two-electron-doped molecule [@kivelson91a; @kivelson91b]. Tests on the C$_{12}$ molecule ------------------------------ ![(Color online) (a) Variation of the pair-binding energy $\Delta_{\textrm{b}}(13)=E(12)+E(14)-2E(13)$ of a truncated tetrahedron molecule (C$_{12}$) with $U$ and $V$ as in Fig. 3 in Ref. .
(b) Hund’s rule violation in the two-electron-doped C$_{12}$ molecule, where $\Delta E(14)=E_{14}(\hbox{triplet})-E_{14}(\hbox{singlet})$.[]{data-label="edc12pbe"}](edc12pbehund.eps "fig:"){width="8cm"} To test our ED program, we use the same parameters as in Ref. and we are able to reproduce the same pair-binding energy as shown in Fig. \[edc12pbe\] (a). In Table \[c12edpqmc\], we see good agreement between PQMC and ED energy values within statistical error bounds. An exception is found for $E_{14}$ and $S_z=0$, where the PQMC result lies slightly higher than the ED energy value. This is due to the mixture of singlet and triplet components in the $S_z=0$ sector and the near degeneracy of these two states, which makes the projection of the singlet ground state out of the mixed state difficult. [@lin05a] We will see that this difficulty [*does not*]{} occur for C$_{20}$, where the ground state with two-electron doping is in the spin-2 sector for $U/t\leq 3$. Hence the pair-binding energy extracted for C$_{20}$ by PQMC for $U/t\leq 3$ is more reliable than the one for C$_{12}$. Hund’s rule violation for C$_{12}^{2-}$ --------------------------------------- In the perturbation theory studies of pair-binding in the larger fullerene C$_{60}$,[@kivelson91a; @kivelson91b] it was noted that a negative pair-binding energy (effective attraction) was correlated with a violation of Hund’s rule for the two-electron-doped molecule; i.e., that for C$_{60}^{2-}$, the ground state was found to be a singlet.
Although our QMC results did not support the existence of pair-binding in C$_{60}^{2-}$ and found a spin-triplet ground state, it is of interest to examine the correlation between pair-binding and the violation of Hund’s rule in C$_{12}$. The non-interacting $V=U=0$ neutral molecule has completely filled levels and hence a total spin zero. Added electrons therefore enter an unfilled level with an orbital degeneracy of 3. Hund’s rule would then predict C$_{12}^{2-}$ to have total $S=1$. What we find is that the ground state of C$_{12}^{2-}$ is a singlet both when the pairing is attractive and when it is driven repulsive by increasing the nearest neighbor repulsion $V$. This is shown in Fig. \[edc12pbe\](b) where the singlet state is found to lie below the triplet state for both positive and negative pair-binding energies, for the range of $U$ and $V$ studied. Thus, for this case, Hund’s rule is found to be violated where pair-binding occurs as well as where it does not. Results for C$_{20}$ molecule {#c20results} ============================= We now turn to the more interesting case of the C$_{20}$ molecule. Compared with the $V=0$ case, where PQMC already has a sign problem for the non-bipartite dodecahedral molecular geometry, the NN Coulomb interaction $V$ terms introduce more sources of negative probability weight, lowering the average value of the sign. Fig. \[signc20\] shows the average sign for these two cases. For the worst case ($N=20, S_z=1, U/t=3, V/t=0.2$), where the average sign is as low as $0.05$, we have collected $7.2\times 10^7$ MC lattice sweeps. This gives a relatively large but nevertheless meaningful error bar. \[See Fig. \[pqmc20\] (a).\] For other parameter values, we have collected about $2.2\times 10^7$ MC sweeps. The acceptance ratio for the on-site Ising spin trial flipping ranges from $80\%$ ($U/t=3$) to $93\%$ ($U/t=1$), while that for the bond Ising spins is about $95\%$ due to the small value of $V/t=0.2$. 
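A small average sign, such as the 0.05 quoted above, enters the analysis through the standard sign-reweighting identity $\langle O\rangle = \langle O\,s\rangle_{|w|}/\langle s\rangle_{|w|}$, where both averages are taken with respect to the absolute value of the configuration weight. The toy sketch below (synthetic numbers, not simulation data, and not the authors' code) shows the bookkeeping:

```python
def sign_reweight(samples):
    """samples: (observable, sign) pairs measured with weight |w|.
    Returns the reweighted observable and the average sign; as the
    average sign -> 0 the denominator amplifies the statistical noise."""
    m = len(samples)
    avg_os = sum(o * s for o, s in samples) / m
    avg_s = sum(s for _, s in samples) / m
    return avg_os / avg_s, avg_s
```

With many nearly cancelling signs the denominator becomes small, which is why the worst case quoted above required collecting $7.2\times 10^7$ MC sweeps to obtain a meaningful error bar.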
![Average sign behavior for both $V/t=0$ (solid symbols) and $V/t=0.2$ (hollow symbols) at different fillings $N=20, 21, 22$. The lines connecting the points are guides to the eye only.[]{data-label="signc20"}](signc20.eps){width="8cm"}

             $S$   $S_z$   ED               $j_{10}$            PQMC          $\langle\mathrm{sign}\rangle$
  ---------- ----- ------- ---------------- ------------------- ------------- -------------------------------
  $E_{18}$   0     0       -22.4044466933   0                   -22.402(1)    1.00
  $E_{18}$   1     1       -21.6778357505   $\pm 3, 5$          -21.637(2)    0.49
  $E_{19}$   1/2   1/2     -21.5223243600   $\pm 1, \pm 3$      -21.5227(6)   0.64
  $E_{19}$   3/2   3/2     -20.8990191757   0, $\pm 4$          -20.826(3)    0.35
  $E_{20}$   1     0       -20.5983834340   $0,\pm 2$           -20.533(3)    0.26
  $E_{20}$   1     1       -20.5983834340   $0,\pm 2$           -20.597(2)    0.54
  $E_{20}$   0     0       -20.5920234654   $0,\pm 2, \pm 4$
  $E_{20}$   2     2       -19.9634427212   $\pm 2, \pm 4, 5$
  $E_{21}$   3/2   1/2     -19.6331786587   $\pm 1, \pm 3$      -19.465(8)    0.19
  $E_{21}$   3/2   3/2     -19.6331786587   $\pm 1, \pm 3$      -19.634(1)    0.64
  $E_{22}$   2     0       -18.6289129089   0                   -18.282(7)    0.10
  $E_{22}$   2     1       -18.6289129089   0                   -18.448(5)    0.32
  $E_{22}$   2     2       -18.6289129089   0                   -18.628(1)    1.00

  : Comparison of ground state energies from ED and PQMC calculations on the C$_{20}$ molecule at $U/t=2$ and $V=0$. See the caption in Table \[c12edpqmc\] for the corresponding definition of various quantities.[]{data-label="c20edpqmc"}

![(Color online) Electronic pair-binding energies $\Delta_{\textrm{b}}(21)/t$ as a function of $U/t$ and $V/t$ from ED and PQMC simulations. (a) The variation of pair-binding energy with $U/t$ for fixed $V/t$ values. (b) The variation of pair-binding energy with $V/t$ for fixed $U/t=1$.
The lines connecting MC and ED points are guides to the eye only.[]{data-label="pqmc20"}](binde.eps "fig:"){width="8cm"}

![(Color online) Hole pair-binding energies $\Delta_{\textrm{b}}(19)/t$ as a function of $U/t$ for $V=0$ from ED and PQMC simulations. The lines connecting MC points are guides to the eye only.[]{data-label="pqmc20h"}](bindh.eps){width="8cm"}

             $S$   $S_z$   $U/t=3$ (PQMC)   $S$   $S_z$   $U/t=5$ (ED)    $j_{10}$
  ---------- ----- ------- ---------------- ----- ------- --------------- -------------------
  $E_{20}$   1     0       -17.04(2)        0     0       -12.111284292   5
  $E_{20}$   1     1       -17.036(6)       1     1       -11.877033283   $0,\pm 2$
  $E_{21}$   3/2   1/2     -15.29(6)        1/2   1/2     -9.1165560273   $\pm 1, \pm 3, 5$
  $E_{21}$   3/2   3/2     -15.529(5)       3/2   3/2     -8.9633623599   $\pm 1, \pm 3$
  $E_{22}$   2     0       -13.936353       1     0       -5.9715313615   $\pm 2, \pm 4$
  $E_{22}$   2     1       -13.81(2)        1     1       -5.9715313615   $\pm 2, \pm 4$
  $E_{22}$   2     2       -13.935(1)

  : Ground state energies for neutral, one- and two-electron-doped C$_{20}$ molecules at $U/t=3, 5$ and $V=0$ from PQMC and ED, which shows a transition between Hund’s and anti-Hund’s states at $3<U/t<5$ for neutral, one-, and two-electron-doped molecules, respectively.
Data without error bars are from ED.[]{data-label="antihund"}

  $U$   $S$   $j_{10}=0$       $j_{10}=\pm 1$   $j_{10}=\pm 2$   $j_{10}=\pm 3$   $j_{10}=\pm 4$   $j_{10}=5$
  ----- ----- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------
  2     0     -20.5920234655   $>$ -19.90       -20.5920234655   $>$ -19.90       -20.5920234655   -20.0527029539
  2     1     -20.5983834340   -19.9776970001   -20.5983834340   -19.9776970001   -20.5981592741   -19.9634427213
  5     0     -12.0123014488   -11.6726562451   -12.0123014488   -11.6726562451   -12.0123014488   -12.1112842959
  5     1     -11.8770332831   -11.8472120431   -11.8770332831   -11.8472120431   -11.8103044760   -11.8118179567
  8     0     -8.0452584717    -7.8068365859    -8.0452584717    -7.8068365859    -8.0452584717    -8.1803385740
  8     1     -7.9497836200    -7.9415479844    -7.9497836200    -7.9415479844    -7.8490047592    -7.9156714009

  : ED energies of the neutral C$_{20}$ molecule at $V=0$, resolved by total spin $S$ and $j_{10}$, for $U/t=2, 5, 8$.[]{data-label="edu"}

Pair-binding energy
-------------------

Table \[c20edpqmc\] shows the energies of the C$_{20}$ molecule at different fillings from PQMC and ED for $U/t=2$ and $V=0$. Both ED and PQMC predict the ground states to be in the same spin sectors for the molecule, and the calculated energies are in agreement within MC error bounds. In order to understand the comparison of PQMC and ED data in Table \[c20edpqmc\], it is important to recognize a systematic weakness of PQMC: when the ground state is a spin multiplet, the different partners appear to have different energies, increasing with decreasing values of $|S_z|$, because the states with smaller values of $|S_z|$ mix with higher-lying states that have the same value of $|S_z|$. In general, apart from statistical error, a state with $S_z=0$ will appear to lie above its partners with the same total $S$. This tendency is apparent in the results for $E_{20}$ ($S$=1), $E_{21}$ ($S$=3/2), and $E_{22}$ ($S$=2). Conversely, if a ground state with $S_z=0$ lies below a state with $S_z=1$, we expect the ground state to be a singlet.
However, in this case, the value of the ground state energy will be perturbed upward by any admixture of the next higher state with $S=1, S_z=0$, as happened for $E_{14}(S=0)$ in Table \[c12edpqmc\]. In general it is also true that accurate PQMC results are more easily obtained when the average sign is close to 1 than when the average sign is small. Pair-binding energies $\Delta_{\textrm{b}}(21)/t$ (electron) and $\Delta_{\textrm{b}}(19)/t$ (hole) as a function of both $U/t$ and $V/t$ are shown in Figs. \[pqmc20\] and \[pqmc20h\], respectively. For $V=0$, for both electron and hole doping, we see that the pair-binding energy is always positive (repulsive) for $U/t>0$, and increases with increasing $U/t$. This is the same behavior as we observed for the C$_{60}$ molecule. [@lin05a] Turning on the NN Coulomb interaction $V$ ($V/t=0.2$ in Fig. \[pqmc20\](a)) increases the pair-binding energy further. Hence, putting two extra electrons on the same neutral molecule becomes more costly when the NN Coulomb interaction is not negligible. Panel (b) in Fig. \[pqmc20\] shows the variation of the pair-binding energy as a function of $V/t$ for fixed $U/t=1$. Again the pair-binding energy is positive (repulsive), and generally increases with $V/t$. The agreement between ED and PQMC results is fairly good; although the PQMC data show some tendency toward non-monotonic behavior over this interval of $V/t$, the ED results show that this is accounted for by the natural statistical spread of the data. Hence, in the regime $V<V_c,\ U<U_c$, the pair-binding energy increases with both $U$ and $V$, and it becomes energetically more and more favorable for two electrons to stay on two different C$_{20}$ molecules. However, we note that, for $V=0,\ U>U_c$, it was previously found [@strongc20] that the pair-binding energy [*decreases*]{} with $U$, reaching a minimum at $U/t\sim 10$, before increasing and reaching a finite value in the $U\to\infty$ limit.
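The positive pair-binding energies of Fig. \[pqmc20\] can be checked directly against the ED ground-state energies of Table \[c20edpqmc\]. The short sketch below assumes the conventional definition $\Delta_{\textrm{b}}(21)=E_{20}+E_{22}-2E_{21}$ (positive meaning pairing is repulsive); this definition is an assumption, as it is not restated in this section.

```python
# Sketch: pair-binding energy from the U/t = 2, V = 0 ED ground-state energies
# of Table [c20edpqmc].  Assumes the conventional definition
#   Delta_b(21) = E_20 + E_22 - 2*E_21,
# with Delta_b > 0 meaning two added electrons prefer separate molecules.
E20 = -20.5983834340  # 20 electrons, S = 1 ground state
E21 = -19.6331786587  # 21 electrons, S = 3/2 ground state
E22 = -18.6289129089  # 22 electrons, S = 2 ground state

delta_b = E20 + E22 - 2.0 * E21
print(f"Delta_b(21)/t = {delta_b:.10f}")  # positive -> repulsive
```

The result, $\Delta_{\textrm{b}}(21)/t\approx 0.039$, is positive, consistent with the repulsive values plotted at $U/t=2$ in Fig. \[pqmc20\](a).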
Hund’s rule
-----------

It is also clear, from the data in Tables \[c20edpqmc\] and \[antihund\], that Hund’s rule is obeyed for the corresponding range of parameters, i.e., $U/t\leq3$, $V=0$. That is, the ground states for 20 through 22 electrons all have the maximum value of total spin for the electrons outside the C$_{20}^{2+}$ core, ranging from total spin 1 for 20 electrons through total spin 2 for 22 electrons. This behavior occurs in the range of parameters where PQMC converges (for maximal $|S_z|$, as discussed above). As $U/t$ is increased above 3, the sign problem prevents reliable PQMC calculations. This difficulty does not arise in ED, where accurate calculations are possible for essentially any value of $U/t$. We have used ED to explore what happens for larger values of $U/t$. [@strongc20] For example, results for $U=5t$ are shown in the right-hand columns of Table \[antihund\]. Here Hund’s rule is clearly violated. For 20 electrons, the ground state has spin zero; for 21 electrons the ground state has spin 1/2; while for 22 electrons the ground state has spin 1. Clearly there are level crossings in the range $3 < U/t < 5$. Additional results in this regime are given in Ref. . ED also allows the calculation of the spin gap, the gap between the ground state and the lowest lying excited state with different total spin. Results are shown in Table \[edu\] for a neutral C$_{20}$ molecule with $U = 2,5,8$ and $V=0$. When the metal-insulator transition occurs in the vicinity of $U_c/t\sim 4.2$, the ground-state spin changes from an orbitally degenerate $S=1$ for $U<U_c$ to a non-degenerate singlet for $U>U_c$. From the results presented in Table \[edu\] we see that it is the singlet state at $j_{10}=5$ that moves toward the bottom of the spectrum with increasing $U$ and eventually, for $U>U_c$, becomes the ground state.
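The ground-state spins and spin gaps can be read off Table \[edu\] mechanically: for each $U$ and total spin $S$, take the minimum energy over the $j_{10}$ sectors. A minimal sketch (the energies are copied from the table; the $>$ -19.90 bound entries of the $U/t=2$ singlet row are omitted):

```python
# Sketch: ground-state spin and spin gap from the j10-resolved ED energies of
# Table [edu] (V = 0).  For each total spin S we take the minimum over j10;
# the "> -19.90" bound entries of the U/t = 2 singlet row are left out.
E = {  # U/t -> {S: [E/t for the tabulated j10 sectors]}
    2: {0: [-20.5920234655, -20.5920234655, -20.5920234655, -20.0527029539],
        1: [-20.5983834340, -19.9776970001, -20.5983834340, -19.9776970001,
            -20.5981592741, -19.9634427213]},
    8: {0: [-8.0452584717, -7.8068365859, -8.0452584717, -7.8068365859,
            -8.0452584717, -8.1803385740],
        1: [-7.9497836200, -7.9415479844, -7.9497836200, -7.9415479844,
            -7.8490047592, -7.9156714009]},
}

for U, levels in E.items():
    e_singlet, e_triplet = min(levels[0]), min(levels[1])
    gs = "triplet" if e_triplet < e_singlet else "singlet"
    gap = abs(e_singlet - e_triplet)
    print(f"U/t={U}: {gs} ground state, spin gap = {gap:.10f} t")
```

This reproduces the triplet ground state with singlet gap $0.0063599685\,t$ at $U/t=2$, and the singlet ground state with triplet gap $0.2305549540\,t$ at $U/t=8$.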
Focusing on the case $V/t=0$, we see from Table \[edu\] that at $U/t=2$ the ground state is a triplet with energy $E^{1}/t=-20.5983834340$ and a gap to the lowest lying singlet of $\Delta E^{1,0}/t = 0.0063599685$. Here the superscripts denote the spins of the ground and excited states, respectively. For $U/t\geq 5$ we find that the ground state for the $I_h$ configuration is now a non-degenerate singlet, $S=0$, with energy $E^{0}/t=-12.1112842922$; the lowest lying triplet excitation lies at $\Delta E^{0,1}/t =0.2342510092$. This picture continues to hold for larger $U/t$, with the triplet gap at $U/t=8$ only slightly smaller, $\Delta E^{0,1}/t=0.2305549540$. Next we explore the ground state spin of the neutral molecule with different $V/t$ values for a fixed $U/t=2$. Using ED techniques we determine that the ground states for $V/t=1$ and $V/t=1.5$ both occur at $j_{10}=0$. However, the ground state changes from a spin triplet for $V/t=1$ to a spin singlet for $V/t=1.5$. Specifically, we find at $V/t=1$, E(singlet)=$5.702018$ and E(triplet)=$5.639496$, whereas for $V/t=1.5$ we find E(singlet)=$17.318536$ and E(triplet)=$17.499741$. By assuming a linear dependence of the energy on $V/t$ in this region, we determine that the level crossing occurs near $V_c/t\sim 1.1$ for $U/t=2$.

Correlation functions
---------------------

We have also investigated what other correlations might be induced in the C$_{20}$ molecule by calculating the following correlation functions: charge-charge, spin-spin, and pairing correlations as a function of lattice distance. Similar calculations for the C$_{60}$ molecule have been reported in Refs.  and . ![(Color online) Variation of (a) charge-charge, (b) spin-spin, and (c) pairing correlation functions for $U=2t, 3t$ and $V=0$ for a C$_{20}$ molecule with respect to the lattice site spacing. $d_{1i}$ is the distance between sites 1 and $i$.
$R$ is the diameter of the C$_{20}$ molecule.[]{data-label="corr"}](corr.eps){width="8cm"} ![(Color online) Variation of (a) charge-charge, (b) spin-spin, and (c) pairing correlation functions for $U=3t$ and $V=0, 0.1t, 0.2t$ for a C$_{20}$ molecule with respect to the lattice site spacing.[]{data-label="corrV"}](corrV.eps){width="8cm"} We define the correlation functions with respect to lattice site 1 in the neutral molecule: $\langle n_1n_i\rangle$ is the charge-charge correlation, $\langle S_1\cdot S_i\rangle$ is the spin-spin correlation, and $\langle c_{1\sigma}^{\dagger}c_{i,-\sigma}^{\dagger}c_{i,-\sigma}c_{1,\sigma}\rangle$ is the pairing correlation, where $i=1,\ldots,20$. In Fig. \[corr\], we show the variation of these correlations for $U=2t$ and $3t$ as a function of lattice spacing $d_{1i}/R$, where $d_{1i}$ is the distance between sites 1 and $i$, and $R$ is the molecular diameter. For the dodecahedral geometry, there are only 5 inequivalent neighbors, all at distinct distances. One can understand the on-site correlations in terms of the probabilities, $p_n$, $n$=0,1,2, for having $n$ electrons on each site. The on-site correlation functions are then $\langle n_1^2\rangle=p_1+4p_2$, $\langle S_1^2\rangle=3p_1/4$, and $\langle c_{1\sigma}^{\dagger}c_{1,-\sigma}^{\dagger}c_{1,-\sigma}c_{1,\sigma}\rangle=p_2.$ In Fig. \[corr\](a) we show results for the charge-charge correlation function for two different values of $U$ with $V=0$. As expected, the on-site charge-charge correlation is reduced by an increase of the on-site Coulomb interaction $U$ \[panel (a)\]. At larger distances, the charges on sites 1 and $i$ are uncorrelated; the unit value of the charge-charge correlation corresponds to a uniform distribution of charge. Fig. \[corr\](b) shows the spin-spin correlation function, again for $U/t=2,3$ with $V=0$. The NN spin-spin correlation has a finite negative value, and its magnitude is enhanced by a larger $U$ value.
For distances larger than one lattice spacing, we see that this correlation function quickly approaches 0. Similar behavior has been observed for the spin-spin correlation in the C$_{60}$ molecule in Ref. , where it was suggested that the rapid decay of the spin-spin correlation function was indicative of a resonant valence bond (RVB) or “spin dimer” state. The similarity between our results and those of Ref.  suggests that the spin correlations in the ground state of C$_{20}$ also might be described by considering valence bond states including only dimers of relatively short length. QMC results for the pair correlation are shown in Fig. \[corr\](c) with $U/t=2,3$ and $V=0$. Interestingly, there is a peak of the pairing order when sites 1 and $i$ are NN sites. This again supports the RVB or “spin dimer” model for the ground state. Beyond the nearest-neighbor distance, the pairing correlation function, along with the other correlation functions, is very close to its uncorrelated value except for $d_{1i}=R$, where the pairing order parameter is slightly enhanced. At the same time the spin-spin correlation is slightly negative, indicating an antiferromagnetic correlation. This corresponds to the “dumb bell” model proposed in Ref. , where electron pairs are formed at the maximal distance, the molecular diameter. We note that in the present case, the enhancement of the correlations at distance $R$ corresponding to this “dumb bell” pairing is relatively weak. We have also studied the influence of a non-zero $V$ on the correlations. In Fig. \[corrV\] we show results for a fixed $U/t=3$ and three different values of $V=0,0.1t, 0.2t$. Clearly, the effect of the NN Coulomb interaction $V$ on these correlation functions is relatively weak, with the curves being almost identical for the range of $V$ considered here.

Conclusions
===========

In this paper we have studied the extended Hubbard model on a C$_{20}$ molecule through ED and PQMC simulations.
The comparison clearly elucidates the relative strengths of the two methods. PQMC is possible for much larger systems than can be treated by ED. However, ED has been applied successfully to the Hubbard model on 20 sites with 18-22 electrons, by making effective use of the capabilities of a large number of coupled processors. PQMC works best when the ground state is well separated from excited states with the same value of $S_z$. As a result, ground states with larger total spin $S$ and maximal $|S_z|$ are most accurately determined, while ground states with $S=0$ are sometimes problematic. This behavior was also found in our earlier work on $C_{60}$,[@lin05a] and the comparison of ED and PQMC results for $C_{20}$ is consistent with and lends confidence to those earlier results. The pair-binding energy for $C_{20}$ shows that extra added electrons (holes) prefer to sit on different molecules, rather than to reside in pairs on molecules. This rules out the possibility that the extended Hubbard model on a single C$_{20}$ molecule can produce an effective attraction between electrons (holes) from purely electronic interactions. Our earlier work showed that this conclusion applies to the C$_{60}$ molecule as well. [@lin05a] We also find that Hund’s rule is obeyed for $U/t \le 3$ and small values of $V$ and that larger values of $U$ and $V$ lead to level crossings and ground states for which Hund’s rule is violated. For fixed $V=0$, we have determined that this transition happens between $U/t=3$ and $U/t=5$, at $U_c/t\sim4.2$. And for fixed $U/t=2$, as a function of $V$, we have determined that this transition happens between $V/t=1$ and $V/t=1.5$, at $V_c\sim 1.1$. As was the case at the transition occurring at $U_c/t\sim 4.2$ for $V=0$, we expect this transition to coincide with a metal-insulator transition for molecular solids formed of C$_{20}$. 
More generally, for $U/t\leq3$ and $V/t\leq 0.2$, we find that the spin, charge and pairing correlations fall off rapidly even in the presence of NN Coulomb repulsion. It is an interesting open question whether molecular solids formed of C$_{20}$, in particular away from half-filling, would display non-trivial order for $V>V_c$. Answering this question would be numerically demanding, and we have therefore left it for future work. This project was supported by the Natural Sciences and Engineering Research Council of Canada, the Canadian Institute for Advanced Research, and the Canadian Foundation for Innovation. FL is supported by the US Department of Energy under award number DE-FG52-06NA26170. AJB, CK and ESS gratefully acknowledge the hospitality of the Kavli Institute for Theoretical Physics in Santa Barbara, where part of this work was carried out and supported by the NSF under Grant No. PHY05-51164. All the calculations were performed using SHARCNET supercomputing facilities.
The electron-neutrino charged-current quasielastic (CCQE) cross section on nuclei is an important input parameter to appearance-type neutrino oscillation experiments. Current experiments typically work from the muon neutrino cross section and apply corrections from theoretical arguments to obtain a prediction for the electron neutrino cross section, but to date there has been no experimental verification of the estimates for this channel at an energy scale appropriate to such experiments. We present the first measurement of an exclusive reaction in few-GeV electron neutrino interactions, namely, the cross section for a CCQE-like process, made using the MINERvA detector. The result is given as differential cross-sections vs. the electron energy, electron angle, and square of the four-momentum transferred to the nucleus, $Q^2$. We also compute the ratio to a muon neutrino cross-section in $Q^{2}$ from . We find satisfactory agreement between this measurement and the predictions of the GENIE generator. DPF 2015\ The Meeting of the American Physical Society\ Division of Particles and Fields\ Ann Arbor, Michigan, August 4–8, 2015\ Introduction ============ Current terrestrial neutrino oscillation experiments searching for fundamental information in the neutrino sector, such as the neutrino mass ordering and whether CP violation occurs for leptons, usually employ experimental designs which rely on the partial oscillation of a beam of muon neutrinos into electron neutrinos.[@T2K; @NIM; @NOvA; @TDR] These experiments build large detectors of heavy materials to maximize the rate of neutrino interactions, and then examine the energy distribution of the neutrinos that do interact with the detector, comparing the observed spectrum with predictions based on hypotheses of no oscillation or oscillation with given parameters. 
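The logic of such appearance measurements can be summarized schematically: the predicted $\nu_{e}$ event spectrum is the product of the $\nu_{\mu}$ flux, the oscillation probability, and the $\nu_{e}$ cross section, which is why the latter enters the oscillation fit directly. The sketch below is purely illustrative; the two-flavour formula is standard, and every parameter value is a round made-up number, not an experimental input.

```python
import math

# Illustrative sketch (not from the paper): schematic nu_e rate prediction,
#   N_e(E) ~ Phi_mu(E) * P(nu_mu -> nu_e)(E) * sigma_e(E).
# All numbers below are round illustrative values, not experimental inputs.
def p_mu_to_e(E, L=810.0, dm2=2.5e-3, sin2_2theta=0.09):
    """Two-flavour appearance probability; E in GeV, L in km, dm2 in eV^2."""
    return sin2_2theta * math.sin(1.27 * dm2 * L / E) ** 2

def predicted_nue_rate(E, flux, sigma_e):
    """Schematic nu_e event rate at energy E (arbitrary units)."""
    return flux(E) * p_mu_to_e(E) * sigma_e(E)

# toy flux peaked at 2 GeV and a toy cross section growing linearly with E
rate = predicted_nue_rate(2.0,
                          flux=lambda E: math.exp(-(E - 2.0) ** 2),
                          sigma_e=lambda E: 0.7 * E)
print(rate)
```

Within this schematic picture, a mis-modelled $\sigma_{e}$ biases the predicted spectrum, and hence the extracted oscillation parameters, in direct proportion.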
Correct prediction of the observed energy spectrum for electron neutrino interactions—on which these oscillation results depend—requires an accurate model of the rates and outgoing particle kinematics. This, in essence, boils down to a need for precise $\nu_{e}$ cross sections on the detector materials in use. And yet, because of the difficulties associated with producing few-GeV electron neutrino beams, even when including very recent results, only two such cross section measurements exist[@Gargamelle; @nue; @T2K; @nue]. Furthermore, the small statistics and inclusive nature of both of these measurements make their use as model discriminators challenging. Instead, most simulations begin from the wealth of high-precision cross-section data available for muon neutrinos and apply corrections such as those discussed in ref. [@DayMcF] to obtain a prediction for $\nu_{e}$. We offer here a higher-statistics cross section for a quasielastic-like electron neutrino process, which is among the dominant reaction mechanisms at most energies of interest to oscillation experiments. We use the detector, which consists of a central sampling scintillator region, built from strips of fluor-doped scintillator glued into sheets, then stacked transverse to the beam axis; both barrel-style and downstream longitudinal electromagnetic and hadronic sampling calorimeters; and a collection of upstream passive targets of lead, iron, graphite, water, and liquid helium. The detector design and performance are discussed in full detail elsewhere.[@MINERvA; @NIM] The detector occupies space in the NuMI $\nu_{\mu}$ beam, where it was exposed to a flux of $\sim 99$% $\nu_{\mu}$ and $\sim 1$% $\nu_{e}$ mostly between for this data set. We also compare the result for $\nu_{e}$ to a similar, previous result for $\nu_{\mu}$ to evaluate the assumption of the model that the only relevant difference between $\nu_{\mu}$ and $\nu_{e}$ charged-current scattering is due to the mass of the final-state charged lepton.
Signal definition
=================

In traditional charged-current quasielastic neutrino scattering, CCQE, the neutrino is converted to a charged lepton via exchange of a W boson with a nucleon, resulting in the following reaction: $\nu_{l} n \rightarrow l^{-} p$. (Antineutrino scattering reverses the lepton number and isospin: $\bar{\nu}_{l} p \rightarrow l^{+} n$.) Because the detector is not magnetized, we cannot differentiate between electrons and positrons on an event-by-event basis. Moreover, hadrons exiting the nucleus after the interaction can re-interact and change identity or eject other hadrons[@GiBUU; @FSI]; furthermore, pairs of nucleons correlated within the initial state may cause multiple nucleons to be ejected by a single interaction[@Martini; @corr; @Nieves; @corr]. Therefore, we define the signal process “phenomenologically,” by its final-state particles: we search for events with either an electron or positron, no other leptons or photons, any number of nucleons, and no other hadrons. We call this type of event “CCQE-like.” We also demand that events originate from a 5.57-ton fiducial volume in the central scintillator region of the detector.

Event selection and backgrounds
===============================

Candidate events are selected from the data based on four major criteria. First, a candidate must contain a reconstructed electromagnetic shower primarily contained within a cone of opening angle $7.5^{\circ}$, originating in the fiducial volume, which is identified as a shower by a multivariate PID algorithm. The latter combines details of the energy deposition pattern both longitudinally (mean $dE/dx$, fraction of energy at downstream end of cone) and transverse to the axis of the cone (mean shower width) using a $k$-nearest-neighbors (kNN) algorithm.
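As an illustration of how such a shower classifier operates (this is not the experiment's code, and the feature values are invented), a $k$-nearest-neighbours decision over shower-shape features might look like:

```python
import math

# Illustrative kNN shower PID (not the experiment's code).  Each shower is a
# feature tuple (mean dE/dx, downstream energy fraction, mean transverse
# width); all training values below are invented for illustration.
def knn_classify(candidate, training, k=3):
    """Majority vote among the k training showers nearest in feature space."""
    dists = sorted((math.dist(candidate, feats), label)
                   for feats, label in training)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

training = [
    ((2.1, 0.05, 1.0), "em"),      # compact electron-like showers
    ((2.3, 0.04, 1.1), "em"),
    ((1.9, 0.06, 0.9), "em"),
    ((5.0, 0.30, 2.5), "hadron"),  # broad, penetrating hadronic activity
    ((4.6, 0.25, 2.8), "hadron"),
    ((5.4, 0.35, 2.2), "hadron"),
]
print(knn_classify((2.0, 0.05, 1.05), training))  # -> em
```

The real classifier is of course trained on simulated showers and uses the specific variables listed above; the sketch only shows the nearest-neighbour voting mechanism.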
Secondly, we separate electrons and positrons from photons by cutting events in which the energy deposition rate ($dE/dx$) at the upstream end of the shower is consistent with two particles rather than one (since photons typically interact in the detector by producing an electron-positron pair). At this point, showers surviving the cuts become electron candidates. Thirdly, we remove events with candidate muon decay electrons identified by their separation in time from the main event; these Michel electrons typically occur in inelastic interactions with final-state pions ($\pi^{\pm} \rightarrow \mu^{\pm} \rightarrow e^{\pm}$). Our final criterion is an attempt to select CCQE-like interactions using a classifier we call the “extra energy fraction,” $\Psi$. Denoting by $E_{\mathrm{extra}}$ the event’s visible energy not inside the electron candidate or a sphere of radius centered around the cone vertex, we define: $$\Psi = \frac{E_{\mathrm{extra}}}{E_{\mathrm{electron}}}$$ The value of the cut is a function of the total visible energy of the event. The cut at the most probable total visible energy, $E_{\mathrm{vis}} = \unit[1.25]{GeV}$, is illustrated in fig. \[fig:psi\]. Finally, we retain only events with reconstructed electron energy $E_{e} \geq \unit[0.5]{GeV}$ and reconstructed neutrino energy $E_{\nu}^{QE} \leq \unit[10]{GeV}$. Here the lower bound excludes a region where the expected flux of electron-flavor neutrinos is small and the backgrounds are large, and the upper bound restricts the sample to events where the uncertainties on the flux prediction are tolerable. The distribution of events selected by this sequence is shown in fig. \[fig:selected sample\]. [0.48]{} ![Left: example cut on $\Psi$ (defined in the text) at the most probable event visible energy, $E_{vis} = \unit[1.25]{GeV}$.
Right: event sample after all selection cuts.](psi_cut_1d.pdf "fig:"){width="\textwidth"} [0.48]{} ![Left: example cut on $\Psi$ (defined in the text) at the most probable event visible energy, $E_{vis} = \unit[1.25]{GeV}$. Right: event sample after all selection cuts.](selected_events.pdf "fig:"){width="\textwidth"} ![Left: example cut on $\Psi$ (defined in the text) at the most probable event visible energy, $E_{vis} = \unit[1.25]{GeV}$. Right: event sample after all selection cuts.](data+MC_legend_horiz.pdf){width="75.00000%"} As fig. \[fig:selected sample\] shows, even after the final selection, a significant fraction of the sample is predicted to be from background processes. To validate the background predictions from the generator, we use an *in situ* measurement based on elastic scattering of neutrinos from atomic electrons[@Jaewon; @thesis] and a recent measurement of charged-current coherent pion production[@MINERvA; @coherent] to constrain the $\nu-e$ and NC coherent backgrounds. We then attempt to constrain the remaining components of the background model by examining sidebands in two of the variables already mentioned. The first of these is composed of events that contain Michel electron candidates, which results in a nearly pure sideband of inelastic $\nu_{e}$ events. The second sideband is in the extra energy fraction $\Psi$; a sample of events at larger $\Psi$ constitutes a sideband rich in both the $\nu_{e}$ inelastic background and backgrounds where photon(s) from a $\pi^{0}$ decay comprise the electromagnetic shower. We use these sidebands together to fit the normalizations of the three major backgrounds: $\nu_{e}$ inelastic events, neutral-current incoherent $\pi^{0}$ events, and charged-current incoherent $\pi^{0}$ events. 
The normalizations of the $\nu_{e}$ background and the sum of the $\pi^{0}$ backgrounds are each fitted using distributions in both reconstructed candidate electron angle and energy, across the two sidebands, to obtain scale factors that represent the best estimate of the normalizations in the data as compared to the prediction from GENIE. We obtain scale factors of $0.89 \pm 0.08$ and $1.06 \pm 0.12$, respectively. Subsequent to the constraint, we scale the backgrounds in the signal region and subtract them from the data. We then compare the simulated prediction of the signal process to the background-subtracted data.

Cross section result
====================

We calculate three differential cross sections in electron angle, electron energy, and four-momentum transferred from neutrino to nucleus $Q^{2}$. For $Q^{2}$, we employ the commonly-used CCQE approximations (assuming a stationary target nucleon) which allow us to compute the neutrino kinematics from just the lepton variables: $$E_{\nu}^{QE} = \frac{m_{n}^{2} - (m_{p} - E_{b})^{2} - m_{e}^{2} + 2(m_{p}-E_{b})E_{e}}{2(m_{p} - E_{b} - E_{e} + p_{e} \cos{\theta_{e}})}$$ $$Q^{2}_{QE} = 2 E_{\nu}^{QE} \left(E_{e} - p_{e} \cos{\theta_{e}}\right) - m_{e}^{2} \label{eq:q2}$$ The cross sections are calculated in bins $i$ according to the following rule for sample variable $\xi$, with $\epsilon$ representing signal acceptance, $\Phi$ the flux integrated over the energy range of the measurement, $T_{n}$ the number of targets (CH molecules) in the fiducial region, $\Delta_{i}$ the width of bin $i$, and $U_{ij}$ a matrix correcting for detector smearing in the variable of interest: $$\label{eq:dsigma} \left( \frac{d\sigma}{d\xi} \right)_{i} = \frac{1}{\epsilon_{i} \Phi T_{n} \left(\Delta_{i}\right)} \times \sum_{j}{U_{ij} \left(N_{j}^{\mathrm{data}} - N_{j}^{\mathrm{bknd\ pred}}\right)}$$ We perform unfolding in these variables using a Bayesian technique[@D'Agostini; @unf] with a single iteration.
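The kinematic reconstruction in the two equations above can be sketched as follows; the nucleon and electron masses are standard values, while the binding energy $E_{b}$ below is an illustrative choice, not necessarily the value used in the analysis.

```python
import math

# Sketch of the CCQE kinematics above: neutrino energy and Q^2 from the
# lepton variables alone, assuming a stationary bound target neutron.
M_N, M_P, M_E = 0.9395654, 0.9382721, 0.000511  # masses in GeV
E_B = 0.034  # GeV; illustrative binding energy, not the analysis value

def e_nu_qe(E_e, theta_e):
    """Reconstructed neutrino energy in GeV (theta_e in radians)."""
    p_e = math.sqrt(E_e ** 2 - M_E ** 2)
    num = M_N ** 2 - (M_P - E_B) ** 2 - M_E ** 2 + 2.0 * (M_P - E_B) * E_e
    den = 2.0 * (M_P - E_B - E_e + p_e * math.cos(theta_e))
    return num / den

def q2_qe(E_e, theta_e):
    """Reconstructed four-momentum transfer squared in GeV^2."""
    p_e = math.sqrt(E_e ** 2 - M_E ** 2)
    return 2.0 * e_nu_qe(E_e, theta_e) * (E_e - p_e * math.cos(theta_e)) - M_E ** 2

# e.g. a 2 GeV electron candidate at 5 degrees to the beam
print(e_nu_qe(2.0, math.radians(5.0)), q2_qe(2.0, math.radians(5.0)))
```

Because the approximation uses only the lepton kinematics, any mis-measurement of $E_{e}$ or $\theta_{e}$ propagates directly into $E_{\nu}^{QE}$ and $Q^{2}_{QE}$.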
The unfolding matrices $U_{ij}$ needed as input are predicted by our simulation. Our prediction for the neutrino flux $\Phi$ by which we then divide is derived from a GEANT4-based simulation of the NuMI beamline (described further in ref. [@antinumu; @PRL]). In addition, the neutrino-electron elastic scattering measurement mentioned above provides an *in situ*, data-based constraint for the flux estimate. The cross sections obtained from this procedure are given in fig. \[fig:XSs\]. To help understand whether any differences between the model and our data stem from deficiencies in the underlying cross section model itself (which is tuned to $\nu_{\mu}$ scattering data, as noted in the introduction) or differences between $\nu_{e}$ and $\nu_{\mu}$ interactions, we also computed the ratio of the cross section in fig. \[fig:XS q2\] to a recent measurement of the same cross section for muon neutrinos, which is shown in fig. \[fig:ratio\]. We note that $Q^2$-dependent correlated errors, such as that in the electromagnetic energy scale, can cause trends in the data similar to the difference between the prediction and observed shape in $Q^{2}$ in fig. \[fig:XS q2\] and the apparent upward slope in fig. \[fig:ratio\]. When these correlated errors are taken into account, in all cases the data is consistent with the GENIE prediction within $1\sigma$. [-0.75in]{}[-0.5in]{} [0.41]{} ![Differential cross sections. Inner errors are statistical; outer are statistical added in quadrature with systematic.[]{data-label="fig:XSs"}](xs_theta_e.pdf "fig:"){width="\textwidth"} [0.41]{} ![Differential cross sections. Inner errors are statistical; outer are statistical added in quadrature with systematic.[]{data-label="fig:XSs"}](xs_E_e.pdf "fig:"){width="\textwidth"} [0.41]{} ![Differential cross sections. 
Inner errors are statistical; outer are statistical added in quadrature with systematic.[]{data-label="fig:XSs"}](xs_q2.pdf "fig:"){width="\textwidth"} ![Ratio of $\frac{d\sigma}{dQ^{2}_{QE}}$ for $\nu_{e}$ to that for $\nu_{\mu}$. Inner errors are statistical; outer are statistical added in quadrature with systematic.[]{data-label="fig:ratio"}](ratio.pdf){width="75.00000%"} Conclusions =========== Though $\nu_{e}$ cross section data is vitally important for neutrino oscillation searches, experimental challenges have prevented extensive measurement of this quantity until recently. In this first-ever measurement of $\nu_{e}$ CCQE scattering, we find that the electron neutrino cross section predictions of the GENIE generator, based on cross section models tuned to muon neutrino scattering data, are consistent with our measured values within our uncertainties. This implies that the generator models in their current form are suitable for use by current neutrino oscillation experiments. However, future experiments, which depend on significantly reducing the influence of cross section systematic uncertainties on their results, may require further data to resolve whether the apparent (but not significant) trends in our result correspond to real discrepancies between the models and nature. [99]{} K. Abe [*et al.*]{} \[T2K Collaboration\], Nucl. Instrum. Meth. A [**659**]{}, 106 (2011) \[arXiv:1106.1238 \[physics.ins-det\]\]. D. S. Ayres [*et al.*]{} \[NOvA Collaboration\], FERMILAB-DESIGN-2007-01. J. Blietschau [*et al.*]{} \[Gargamelle Collaboration\], Nucl. Phys. B [**133**]{}, 205 (1978). K. Abe [*et al.*]{} \[T2K Collaboration\], Phys. Rev. Lett.  [**113**]{}, no. 24, 241803 (2014) \[arXiv:1407.7389 \[hep-ex\]\]. M. Day and K. S. McFarland, Phys. Rev. D [**86**]{}, 053003 (2012) \[arXiv:1206.6745 \[hep-ph\]\]. L. Aliaga [*et al.*]{} \[MINERvA Collaboration\], Nucl. Instrum. Meth. A [**743**]{}, 130 (2014) \[arXiv:1305.5199 \[physics.ins-det\]\]. O. Lalakulich, U. 
Mosel and K. Gallmeister, Phys. Rev. C [**86**]{}, 054606 (2012) \[arXiv:1208.3678 \[nucl-th\]\]. M. Martini and M. Ericson, Phys. Rev. C [**87**]{}, no. 6, 065501 (2013) \[arXiv:1303.7199 \[nucl-th\]\]. J. Nieves, M. Valverde and M. J. Vicente Vacas, Phys. Rev. C [**73**]{}, 025504 (2006) \[hep-ph/0511204\]. G. D’Agostini, Nucl. Instrum. Meth. A [**362**]{}, 487 (1995). L. Fields [*et al.*]{} \[MINERvA Collaboration\], Phys. Rev. Lett.  [**111**]{}, no. 2, 022501 (2013) \[arXiv:1305.2234 \[hep-ex\]\]. G. A. Fiorentini [*et al.*]{} \[MINERvA Collaboration\], Phys. Rev. Lett.  [**111**]{}, 022502 (2013) \[arXiv:1305.2243 \[hep-ex\]\]. J. Park, FERMILAB-THESIS-2013-36. A. Higuera [*et al.*]{} \[MINERvA Collaboration\], Phys. Rev. Lett.  [**113**]{}, no. 26, 261802 (2014) \[arXiv:1409.3835 \[hep-ex\]\].
--- abstract: 'Both the ATLAS and CMS collaborations have recently observed an excess in the di-photon invariant mass distribution in the vicinity of $750$ GeV with a local significance of $\sim3\sigma$. In this article we investigate this excess in the context of a minimal simplified framework, assuming effective interactions of the hinted resonance with photons and gluons. We scrutinise the consistency of this observation with possible accompanying, yet hitherto unseen, signatures of this resonance. Subsequently, we probe the nature of the new particles (e.g., their spin, electric charge and number of colours) that could be instrumental in explaining this excess through loop mediation.' address: - 'Department of Physics, Indian Institute of Technology, Kanpur-208016, India' - | Consortium for Fundamental Physics, Department of Physics and Astronomy,\ University of Sheffield, Sheffield S3 7RH, United Kingdom - | Consortium for Fundamental Physics, Department of Physics and Astronomy,\ University of Manchester, Manchester, M13 9PL, United Kingdom - 'Laboratoire de Physique Théorique, CNRS$^1$, Univ. Paris-Sud, Université Paris-Saclay, 91405 Orsay, France' - 'Centre de Physique Théorique, École polytechnique, CNRS$^2$, Université Paris-Saclay, 91128 Palaiseau, France' - 'Regional Centre for Accelerator-based Particle Physics, Harish-Chandra Research Institute, Allahabad 211019, India' author: - Joydeep Chakrabortty - Arghya Choudhury - Pradipta Ghosh - Subhadeep Mondal - Tripurari Srivastava bibliography: - 'CCGMS-V1.bib' title: 'Di-photon resonance around 750 GeV: shedding light on the theory underneath' --- The recent observation by the LHC collaborations [@ATLAS-run-II-1; @CMS-run-II-2; @ATLAS-run-II-1L; @CMS-run-II-2L], concerning an excess in the di-photon invariant mass distribution $\minv$ near $750$ GeV, has gained huge attention in the particle physics community.
The ATLAS group, using $3.2$ $\fbi$ of data with $13$ TeV centre-of-mass energy $(\ecm)$, has estimated a local (global) significance of $3.9\sigma~(2.0\sigma)$ for a mass of the resonance $M_X=750$ GeV [@ATLAS-run-II-1L]. At the same time, the CMS collaboration has noticed a local (global) significance of $2.8\sigma-2.9\sigma~(<1.0\sigma)$ for $M_X=760$ GeV [@CMS-run-II-2L] using $3.3$ $\fbi$ of data at $\ecm=13$ TeV. Combining with the run-I data ($19.7\,\fbi$ at $\ecm=8$ TeV), the CMS excess appears at $M_X=750$ GeV [@CMS-run-II-2L] with a local (global) significance of $3.4\sigma~(1.6\sigma)$. The latter corresponds to a narrow width for the resonance, $\Gamma_X=105$ MeV, while the interpretation with only $13$ TeV data indicates $\Gamma_X=10.6$ GeV. The ATLAS measurement, on the contrary, hints at a large decay width $\Gamma_X=45$ GeV [@ATLAS-run-II-1L]. This is the first surprise from LHC run-II with 13 TeV centre-of-mass energy[^1] which remains unexplained within the Standard Model (SM) framework. In other words, the properties of the said resonance, as experimentally observed so far, e.g., an excess in $\gamma\gamma$ only and nothing in $ZZ,\,Z\gamma$ or in di-jet $(jj)$ channels, definitely demand physics beyond the SM (BSM). It is, thus, timely to explore the origin and associated consequences of this resonance, although the possibility of losing this excess with a larger data-set cannot be completely overlooked. A quest to accommodate this excess has already produced a handful of contemporary analyses [@Harigaya:2015ezk; @Mambrini:2015wyu; @Backovic:2015fnp; @Angelescu:2015uiz; @Nakai:2015ptz; @Knapen:2015dap; @Buttazzo:2015txu; @Pilaftsis:2015ycr; @Franceschini:2015kwy; @DiChiara:2015vdm; @Higaki:2015jag; @McDermott:2015sck; @Ellis:2015oso; @Low:2015qep; @Bellazzini:2015nxw; @Gupta:2015zzs; @Petersson:2015mkr; @Molinaro:2015cwg] along with a few simultaneous[^2] [@Chao:2015ttq; @Cao:2015pto; @Kobakhidze:2015ldh; @Curtin:2015jcv; @Ahmed:2015uqt] studies.
Most of these analyses are proposed within the context of a specific theory framework, which often requires new decay modes (invisible for example) and thus, address other issues, for example the dark matter (see Refs. [@Mambrini:2015wyu; @Backovic:2015fnp]). We, however, aim to investigate this excess with a simplified effective framework and will try to explore the nature of hitherto unseen particles which, while running in the loop, can appear instrumental to produce the observed di-photon excess. With this idea we have used a generic Lagrangian which couples this new resonance $\hx$ with photons and gluons as shown by eq. (\[form1\]). We have further assumed: (1) [*on-shell*]{} production of $H_X$ and (2) a scalar, i.e., spin-0, nature[^3] for $H_X$. The latter is one of the natural options to explain a resonance in di-photon channel, i.e., two [*identical massless spin-1*]{} particles, as dictated by Landau-Yang theorem [@Landau:1948kw; @Yang:1950rg]. The effective minimal[^4] Lagrangian is written as: $$\label{form1} {\mathcal{L}}_{ eff} = \kappa_g G^a_{\mu \nu} G^{\mu \nu}_a {H}_{\rm X} + \kappa_A B_{\mu \nu} B^{\mu \nu} {H}_{\rm X},$$ where $G^a_{\mu\nu}$, $B_{\mu\nu}$ are the associated field strengths with $``a"$ representing the relevant non-Abelian index. The effective $\hx$-$g$-$g$ and $\hx$-$\gamma$-$\gamma$ vertices are parametrised as $\kappa_g$ and $\kappa_A$ which encapsulate the effect of new physics appearing in the loops. The latter is an absolute necessity since SM-like couplings between the SM-gauge bosons and $H_X$ appear inadequate [@Carena:2012xa] to explain the observed [*[sizable]{}*]{} decay width $\Gamma_X$ [@ATLAS-run-II-1; @ATLAS-run-II-1L] and the production cross-section $\sigma(pp(gg)\to H_X \to \gamma\gamma)$ $\sim \mathcal{O}(10$ fb) [@ATLAS-run-II-1; @CMS-run-II-2; @ATLAS-run-II-1L; @CMS-run-II-2L], consistent with the results of various other LHC searches. 
The observations from different LHC searches put strong constraints on the $\kappa_g$ - $\kappa_A$ parameter space. The latter can be translated in terms of $\hx \to g g$, $\hx \to \gamma \gamma$ branching fractions $(Brs)$ since they are $\propto 8\kappa^2_g$, $\kappa^2_A \cos^4\theta_W$, respectively. Moreover, the associated squared matrix elements are similar while the phase spaces are identical. The number $8$ appears from the colour factor and $\theta_W$ is the Weinberg angle [@Agashe:2014kda]. ![Variations of the $\Gamma_X$ (in GeV) with $\kappa_g$ (left) and $\kappa_A$ (right). The details are explained in the text.[]{data-label="fig:fit1"}](kG-vs-width.pdf "fig:"){width="4.39cm" height="3.55cm"} ![Variations of the $\Gamma_X$ (in GeV) with $\kappa_g$ (left) and $\kappa_A$ (right). The details are explained in the text.[]{data-label="fig:fit1"}](kA-vs-width.pdf "fig:"){width="4.39cm" height="3.55cm"} After electroweak symmetry breaking, the second term of eq. (\[form1\]) generates effective interactions like $\hx\gamma\gamma$ and also $\hx Z \gamma$, $\hx ZZ$, even with vanishing $\kappa_W$. Their strengths are $\propto$ $\kappa_A\cos^2\theta_W$, $\kappa_A\sin\theta_W\cos\theta_W$ and $\kappa_A\sin^2\theta_W$, respectively. It is thus important to note that a non-zero $Br(\hx\to \gamma\gamma)$ would also imply non-zero $Br(\hx\to Z\gamma,\,ZZ)$ values since all of them are connected to $\kappa_A$. Their relative magnitudes, however, remain different depending on the factors of $\sin\theta_W$ and $\cos\theta_W$. Measurements from the experimental collaborations for the said processes, using 13 TeV data, are as yet inadequate[^5]. Nevertheless, the measured information for the $\hx\to ZZ,\, Z\gamma$ and $\hx\to \gamma\gamma$ [@ATLAS-diphoton] processes from the 8 TeV searches definitely constrains the range of the $\kappa_A,\,\kappa_g$ parameters.
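The proportionalities just quoted fix the relative sizes of the four visible decay modes for any choice of $(\kappa_g,\,\kappa_A)$. A minimal numerical sketch (assuming $\sin^2\theta_W\approx0.2312$, and neglecting phase-space differences and the factor of two in the $Z\gamma$ cross term, as in the proportionality statement above):

```python
# Relative (unnormalised) partial-width weights for the visible H_X decays,
# following the proportionalities quoted in the text:
#   gg : 8*kg^2,  aa : kA^2*cos^4,  Z-a : kA^2*sin^2*cos^2,  ZZ : kA^2*sin^4.
SIN2_THETA_W = 0.2312  # assumed value of the weak mixing angle

def branching_weights(kappa_g, kappa_A, sin2w=SIN2_THETA_W):
    cos2w = 1.0 - sin2w
    w = {
        "gg": 8.0 * kappa_g**2,
        "aa": kappa_A**2 * cos2w**2,
        "za": kappa_A**2 * sin2w * cos2w,
        "zz": kappa_A**2 * sin2w**2,
    }
    total = sum(w.values())
    return {mode: weight / total for mode, weight in w.items()}

br = branching_weights(kappa_g=1e-3, kappa_A=1e-3)
tan2w = SIN2_THETA_W / (1.0 - SIN2_THETA_W)
print(f"Br(Z gamma)/Br(gamma gamma) = {br['za'] / br['aa']:.4f} (tan^2 = {tan2w:.4f})")
print(f"Br(ZZ)/Br(gamma gamma)      = {br['zz'] / br['aa']:.4f} (tan^4 = {tan2w**2:.4f})")
```

With $\kappa_g=\kappa_A$ the $gg$ mode dominates through the colour factor of 8, which is why the di-jet limit is the first to bite as $\kappa_g$ grows.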
For example, one obtains $\sigma (pp\to \hx\to ZZ) < 12$ fb [@Aad:2015kna] and $\sigma (pp\to \hx\to Z\gamma) < 11$ fb [@Aad:2014fha] from similar searches performed by ATLAS with 8 TeV data. The available parameter space is also constrained by the di-jet searches, given as $\sigma(pp\to jj) < 1.9$ pb [@Aad:2014aqa][^6], such that the missing evidence of the $pp\to\hx\to jj$ process at 13 TeV appears consistent. Needless to say, the CMS collaboration has also made similar studies [@CMS-diphoton; @CMS-zz; @CMS-zgamma; @CMS-dijet]. Furthermore, if one wishes to account for a large $\Gamma_X$ by introducing new decays, e.g., invisible ones, one needs to incorporate the constraints from monojet searches accordingly [@ATLAS-monojet; @CMS-monojet]. In this article we have used the [*expected*]{} limits from 13 TeV LHC searches for the $ZZ,\,Z\gamma$, $jj$ and $\gamma\gamma$ processes, derived using the 8 TeV results. We have used [Madgraph v2.2.3]{} [@Alwall; @Alwall:2014hca] and observed that the production (via gluon fusion) cross-section at 13 TeV $\ecm$ is roughly five times that at $8$ TeV $\ecm$, i.e., $\sigma(pp\to \hx)|_{13~ \rm TeV}/\sigma(pp\to \hx)|_{8~\rm TeV}\approx 5$, as also noted in Ref. [@Franceschini:2015kwy]. Further, we have also used the constraint from Ref. [@photon-photon], assuming that this resonance can also appear through photon fusion. In our numerical study we have used [FeynRules 2.3]{} [@Alloul:2013bka] to implement eq. (\[form1\]) together with the SM Lagrangian. Subsequently, [Madgraph v2.2.3]{} has been utilised to compute the production cross-section $\sigma(pp\to \hx)$ through gluon fusion and to calculate the different partial decay widths of $\hx$. In this study we have utilised $3.2~\fbi$ of ATLAS data at 13 TeV to accommodate the observed resonance. In detail, we have used $\Delta N$, the discrepancy between the observed and expected numbers of events, $=13.6\pm3.69$.
Further, for this purpose three $40$ GeV bins are chosen for $690~{\rm GeV}\lsim \minv \lsim 810$ GeV [@ATLAS-run-II-1] with an efficiency of 0.4 [@CMS-run-II-2]. In order to study the effect of BSM physics, we first show the variation of $\Gamma_X$ with changes in the new physics parameters, $\kappa_g$ (left), $\kappa_A$ (right), in Fig. \[fig:fit1\]. Here, we have varied $\kappa_g,\,\kappa_A$ in the range $10^{-6}$ - $1$. In these two plots the cyan coloured region represents the allowed $2\sigma$ range of $\Delta N$. The orange, golden and green coloured regions represent various zones in the $\Gamma_X$ - $\kappa_g~(\kappa_A)$ planes that are excluded by the 8 TeV LHC measurements of the $\hx\to ZZ,\,Z\gamma$, $jj$ processes. The yellow coloured region remains excluded by the measurement of the $\hx\to \gamma\gamma$ [@ATLAS-diphoton] process at ATLAS with 8 TeV centre-of-mass energy. For the latter, in the absence of 13 TeV precision measurements and assuming $\sigma(pp\to \hx)|_{13~\rm TeV}/\sigma(pp\to \hx)|_{8~\rm TeV}\approx 5$, the 8 TeV data predict a $2\sigma$ upper bound [@Aad:2015mna] on $\sigma(pp\to\hx\to \gamma\gamma)|_{13~\rm TeV}$ that is inconsistent with the value observed at 13 TeV. We will discuss this later in detail. Finally, the gray coloured region remains excluded by the photon fusion process, i.e., $\gamma\gamma\to\hx\to\gamma\gamma$ [@photon-photon], which predicts a maximum for $Br(\hx\to\gamma\gamma)$, independent of $Br(\hx\to gg)$. The region excluded by the photon fusion process is estimated by assuming that $\hx$ has only two decay modes, $gg,\,\gamma\gamma$, i.e., $Br(\hx\to gg)$ + $Br(\hx\to \gamma\gamma) = 1$. The observed limits on the $Brs$ are subsequently translated in terms of $\kappa_g$ and $\kappa_A$. It is evident from Fig. \[fig:fit1\] that a $\Gamma_X$ as large as $45$ GeV or more is perfectly consistent with the observed limits from the $ZZ,\,\gamma\gamma,\,Z\gamma$ searches at the 8 TeV LHC.
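The quoted efficiency and luminosity also allow a quick back-of-the-envelope conversion of the event excess into a signal cross-section; a rough sketch (assuming $\sigma\times Br \approx \Delta N/(\epsilon\,\mathcal{L})$, ignoring any acceptance effects beyond the quoted efficiency):

```python
# Back-of-the-envelope: sigma x Br ~ DeltaN / (efficiency * luminosity),
# with the numbers quoted in the text.
LUMI_FB = 3.2     # ATLAS 13 TeV integrated luminosity, fb^-1
EFFICIENCY = 0.4  # signal efficiency quoted in the text
DN, DN_ERR = 13.6, 3.69

def sigma_br_fb(delta_n, lumi=LUMI_FB, eff=EFFICIENCY):
    return delta_n / (eff * lumi)

central = sigma_br_fb(DN)
lo2 = sigma_br_fb(DN - 2.0 * DN_ERR)
hi2 = sigma_br_fb(DN + 2.0 * DN_ERR)
print(f"sigma x Br ~ {central:.1f} fb  (2 sigma: {lo2:.1f} - {hi2:.1f} fb)")
```

The central value of roughly 10.6 fb is consistent with the $\sigma(pp(gg)\to H_X \to \gamma\gamma)\sim\mathcal{O}(10$ fb$)$ rate quoted earlier.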
However, it is the di-jet searches which rule out the region of parameter space with $\Gamma_X > 3$ GeV (right plot), corresponding to $\kappa_g \gsim \mathcal{O}(0.001)$ (left plot). The observed behaviour is well expected as $\Gamma_{\hx\to gg}$, and thus $\Gamma_X$, grows rapidly with $\kappa_g$ compared to the growth of $\Gamma_{\hx\to \gamma\gamma}$ with $\kappa_A$, since the latter is suppressed by a factor of $\cos^4\theta_W/8$. For $\kappa_A$ (estimated from $Br(\hx\to \gamma\gamma)$), the most stringent bound comes from the photon fusion process, which is represented by the gray coloured region. For the photon fusion process, $Br(\hx\to\gamma\gamma)$ $\propto$ $1/\sqrt{\Gamma_X}$ [@photon-photon], and thus a [*smaller*]{} upper bound on $Br(\hx\to\gamma\gamma)$, and hence on $\kappa_A$, is expected for larger $\Gamma_X$. This is evident from the right plot of Fig. \[fig:fit1\]. It is important to note that the photon fusion process can also provide an indirect bound on $Br(\hx \to g g)$, i.e., on $\kappa_g$, assuming $Br(\hx\to \gamma\gamma)+ Br(\hx \to g g)=1$. It is also apparent that the photon fusion process discards $\Gamma_X \gtrsim$ $0.3$ GeV, which is $10$ times smaller than the value allowed by the di-jet search limit. Hence, given the large $\Gamma_X$ observed by ATLAS, one needs almost an [*equal*]{} amount of $\Gamma_X$ from the hitherto unseen decay modes of this resonance, e.g., invisible decays. Here, we use $\Gamma_X=\Gamma_{\hx\to \gamma\gamma} + \Gamma_{\hx\to ZZ}+ \Gamma_{\hx\to Z\gamma}+ \Gamma_{\hx\to jj}$, as expected from eq. (\[form1\]), to estimate $Br(\hx\to \gamma\gamma)$ for the photon fusion process [@photon-photon]. It is now clear that in the chosen setup, no realistic values of the $\kappa_A,\,\kappa_g$ parameters can account for a total $\Gamma_X \gsim 0.3$ GeV.
Thus, the presence of a [*[huge]{}*]{} additional decay width is essential for the studied construction, and this will be tightly constrained by the dark matter and monojet searches. The discussion presented so far concerning the photon fusion process has one caveat related to the estimation of $Br(\hx\to gg)$. So far, we have used eq. (\[form1\]) to estimate $\Gamma_X$; however, while evaluating the effect of the photon fusion process on $Br(\hx\to gg)$, i.e., on $\kappa_g$ (left plot of Fig. \[fig:fit1\]), we have used $Br(\hx\to \gamma\gamma)+Br(\hx\to gg)=1$, which is [*apparently*]{} contradictory. At this point one must note that in the given construction the quantities $Brs(\hx\to Z\gamma,\,ZZ)$, as already explained, are suppressed compared to $Br(\hx\to \gamma\gamma)$. Moreover, so far we have no information available for processes like $\gamma Z,\,ZZ\to \hx$. Thus, the assumption $Br(\hx\to gg)=1-Br(\hx\to \gamma\gamma)$ remains useful for estimating the scale of $Br(\hx\to gg)$. Using all the available branching fractions instead would yield [*weaker*]{} upper bounds on $Br(\hx \to gg)$, i.e., on $\kappa_g$. It is evident from Fig. \[fig:fit1\] that $\Gamma_X \gsim 0.3$ GeV appears excluded by the relevant existing LHC limits and by the constraint from the photon fusion process. This observation demands the existence of a [*huge*]{} additional decay width to reach the target of $45$ GeV. If we call this additional width $\Gamma^{add}_X$, without specifying its origin, then one can write $\Gamma^{tot}_X \equiv \Gamma_X =$ $\Gamma_{\hx\to \gamma\gamma} + \Gamma_{\hx\to ZZ}$ $+ \Gamma_{\hx\to Z\gamma}+ \Gamma_{\hx\to jj} +\Gamma^{add}_X$. This approach will modify all the associated branching ratios, as will be explored subsequently by [*choosing*]{} three different values of the total decay width: (1) $1$ GeV (small width), (2) $10$ GeV (moderate width) and (3) $45$ GeV (large width).
![image](Brgg-vs-Brgamgam-1GeV.pdf){width="5.70cm" height="3.75cm"} ![image](Brgg-vs-Brgamgam-10GeV.pdf){width="5.70cm" height="3.75cm"} ![image](Brgg-vs-Brgamgam-45GeV.pdf){width="5.70cm" height="3.75cm"} The subsequent effects of the aforesaid construction are explored in Fig. \[fig:fit\] where we have investigated the impact of diverse LHC and photon fusion constraints in the $Br(\hx\to gg)$ - $Br(\hx\to \gamma\gamma)$ plane. These two $Brs$ are expected to show [*[some kind]{}*]{} of correlation[^7] between them since the observed excess appears through $gg\to\hx$ process followed by $\hx \to \gamma\gamma$ decay. It is also possible to observe a similar correlation in the $\kappa_g,\,\kappa_A$ plane since $Br(\hx\to gg), \,Br(\hx\to \gamma\gamma)\propto \kappa^2_g,\,\kappa^2_A$, respectively. In Fig. \[fig:fit\] the black coloured line represents the best-fit value corresponding to $\Delta N=13.6$ while the cyan and blue coloured bands represent the $1\sigma~(9.91 \lsim \Delta N \lsim 17.29)$ and $2\sigma~(6.22 \lsim \Delta N \lsim 20.98)$ allowed regions in the concerned planes, respectively. The orange, golden, green and yellow coloured regions, similar to Fig. \[fig:fit1\], represent various zones in the concerned plane that are excluded from 8 TeV LHC limits on $\hx\to ZZ,\,Z\gamma$, $jj$ and $\gamma\gamma$ processes. In the case of $\hx \to \gamma\gamma$ process, assuming $\sigma(pp\to \hx)|_{13~\rm TeV}/\sigma(pp\to \hx)|_{8~\rm TeV}\approx 5$, one would expect a $2\sigma$ upper bound [@Aad:2015mna] on $\sigma(pp\to\hx\to \gamma\gamma)|_{13~\rm TeV}$ as $10$ fb using the ATLAS data. This is in tension with the 13 TeV ATLAS observation [@ATLAS-run-II-1] and rules out higher values of the observed $\sigma(pp\to\hx\to\gamma\gamma)$, starting from the central one. A similar analysis using the CMS data [@CMS:2015cwa] excludes the higher values of the observed $\sigma(pp\to\hx\to \gamma\gamma)|_{13~\rm TeV}$ [@CMS-run-II-2] beyond $1\sigma$. 
Lastly, the photon fusion process at the LHC, which predicts a maximum for $Br(\hx\to\gamma\gamma)$ independent of $Br(\hx\to gg)$, rules out the gray coloured region in the $Br(\hx\to gg)$ - $Br(\hx\to\gamma\gamma)$ plane. It is interesting to note that the constraint for the photon fusion was derived with the assumption of $Br(\hx\to gg)+ Br(\hx\to\gamma\gamma)=1$, which discards the region where $Br(\hx\to gg)$ + $Br(\hx\to \gamma\gamma) > 1$. For the three chosen values of $\Gamma_X$, the maximum $Br(\hx\to \gamma\gamma)$ is estimated [@photon-photon] as $\sim 0.42,\,0.13,\,0.06$, respectively, and thus the regions with $Br(\hx\to gg)> 0.58$ (left plot of Fig. \[fig:fit\]), $Br(\hx\to gg)> 0.87$ (middle plot of Fig. \[fig:fit\]), $Br(\hx\to gg)> 0.94$ (right plot of Fig. \[fig:fit\]) remain ruled out. The upper limits on $Br(\hx\to gg)$ depicted in Fig. \[fig:fit\] are, however, purely illustrative: following our earlier discussion, the estimate $Br(\hx\to gg)$ $=1-Br(\hx\to\gamma\gamma)$ holds only in the regime where $\Gamma^{add}_X\approx \Gamma^{tot}_X\equiv \Gamma_X$. For the remaining processes the primary production is driven by gluon fusion. The latter yields a high value of $Br(\hx\to gg)$ with increasing $\kappa_g$ and, as a consequence, remains excluded by the di-jet search limits, especially for moderate to large $\Gamma_X$. For example, for the choice of $\Gamma_X=10$ GeV one gets $Br(\hx\to gg)_{max}\sim 0.40$ (middle plot of Fig. \[fig:fit\]) while for the choice of $\Gamma_X=45$ GeV one ends up with $Br(\hx\to gg)_{max}\sim 0.20$ (right plot of Fig. \[fig:fit\]). In the case of a small decay width (left plot of Fig. \[fig:fit\]) the constraint from the di-jet searches remains ineffectual. It is evident from eq.
(\[form1\]) that $Br(\hx\to ZZ)$, $Br(\hx\to Z\gamma)$ are suppressed compared to $Br(\hx\to \gamma\gamma)$ by factors of $\tan^4\theta_W$ and $\tan^2\theta_W$ (numerically $\sim 0.09$ and $0.3$), respectively, which is also apparent from Fig. \[fig:fit\]. Thus, unless one introduces an interaction like $\kappa_W W^a_{\mu\nu}W^{\mu\nu}_a$ ($W^a_{\mu\nu}$ being the $SU(2)$ field strength) these modes remain sub-leading. One can, nevertheless, compensate these deficits with a larger $Br(\hx\to gg)$, assuming $gg\to \hx$ to be the leading production channel. These behaviours are reflected in Fig. \[fig:fit\], where the regions excluded by the $ZZ$ and $Z\gamma$ searches appear with lateral shifts towards larger $Br(\hx\to gg)$ values compared to the $Br(\hx\to \gamma\gamma)$ values required to reproduce the observed excess. A larger $Br(\hx\to gg)$, and hence a larger $\Gamma_X$, appears naturally for higher $\kappa_g$ values, which are in tension with the di-jet searches. Increasing $\kappa_A$ receives a constraint from the photon fusion process. The $ZZ$ and $Z\gamma$ constraints, as already mentioned, require large values for both $Br(\hx\to gg)$ and $Br(\hx\to \gamma\gamma)$. The former faces tension from the di-jet search limits (moderate and large $\Gamma_X$ scenarios) while the latter, if not excluded by the photon fusion constraint, might give a larger $\Delta N$ than actually observed. Hence, the parameter space ruled out by these constraints does not affect the signal region compatible with explaining the observed excess. The key feature of Fig. \[fig:fit\] is the prediction of the value of the product[^8] $Br(\hx\to g g)\times Br(\hx \to \gamma\gamma)$ (henceforth written as $Br^2(\gamma\gamma\times gg)$). From the best-fit line we observe that this value changes from $\mathcal{O}(10^{-3})$ to $\mathcal{O}(10^{-5})$ as the chosen $\Gamma_X$ changes from $1$ GeV to $45$ GeV.
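The shrinking of the fitted product with the assumed total width follows a simple scaling: the observed rate fixes $\sigma(gg\to\hx)\times Br(\hx\to\gamma\gamma)$, and $\sigma(gg\to\hx)\propto\Gamma_{\hx\to gg}=Br(\hx\to gg)\,\Gamma_X$, so $Br^2(\gamma\gamma\times gg)\propto1/\Gamma_X$. An illustrative sketch (the $10^{-3}$ anchor at $\Gamma_X=1$ GeV is an assumed round number, not a fitted value):

```python
# Illustrative 1/Gamma_X scaling of the fitted product Br(gg) x Br(aa):
# the observed rate fixes Br(gg)*Gamma_X*Br(aa), hence the product of the
# two branching fractions alone scales like 1/Gamma_X.
PRODUCT_AT_1GEV = 1.0e-3  # assumed order-of-magnitude anchor at 1 GeV

def br_product(gamma_x_gev, anchor=PRODUCT_AT_1GEV):
    return anchor / gamma_x_gev

for gamma_x in (1.0, 10.0, 45.0):
    print(f"Gamma_X = {gamma_x:4.0f} GeV  ->  Br^2(aa x gg) ~ {br_product(gamma_x):.1e}")
```

At $\Gamma_X=45$ GeV this gives $\sim2\times10^{-5}$, i.e., the $\mathcal{O}(10^{-3})\to\mathcal{O}(10^{-5})$ trend read off the best-fit line.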
Explicitly, $0.08(1.73)\times 10^{-3(5)}\lsim Br^2(\gamma\gamma\times gg)\lsim$ $1.93(4.28)\times 10^{-3(5)}$ for $\Gamma_X=1(45)$ GeV using the $2\sigma$ limits on $\Delta N$. We will use this information subsequently. Now we are ready to discuss the presence of other BSM particles that are essential to explain this excess through higher order processes. Information about these states is encapsulated within $\kappa_g,\,\kappa_A$ (see eq. (\[form1\])). These states must not be very heavy, to avoid propagator suppression, and at the same time must possess sizable couplings with $\hx$ to reproduce the detected excess. Concerning the leading production, i.e., $gg \to \hx$, the possible candidates are either new coloured scalar(s) $\Phi$ or additional coloured fermion(s) $F$, possibly vector-like. These new particles must simultaneously couple to gluons as well as to $\hx$ and are possibly embedded in a representation of some larger symmetry group. If these new scalars/fermions are also responsible for producing an enhanced $Br(\hx\to\gamma\gamma)$, they must carry electric charge in order to couple to a photon. However, the other non-minimal possibility is to consider another set of uncoloured but electrically charged fermion(s), scalar(s) or gauge boson(s) (the latter appearing in theories with an extended non-Abelian gauge sector). Note that contributions from new chiral fermions interfere destructively with the bosonic contributions and thus are often not compatible with the observed excess. On the other hand, vector-like fermions remain a viable alternative. The presence of an extended scalar sector has additional phenomenological advantages, e.g., stability of the SM-Higgs potential up to the Planck scale [@Sher:1988mj; @EliasMiro:2012ay; @Alekhin:2012py; @Buttazzo:2013uya]. This argument also holds true for new gauge boson(s).
We, however, do not consider them in this article since they are hinted to be rather heavy $\gsim 2.5$ TeV [@ATLAS:2012ak; @Khachatryan:2014dka]. In a nutshell, we conclude that to accommodate the observed di-photon excess one needs sizable couplings between $\hx$ and the new particles, for which coloured and/or electrically charged scalars or fermions remain the realistic options. Moreover, in the presence of the said new states, an enhanced $Br(\hx\to\gamma\gamma)$ is more anticipated compared to an enlarged $Br(\hx\to gg)$ as for the latter experimental evidences are still missing. In the presence of a new BSM scalar $\Phi$, with mass $M_\Phi$, electric charge $Q_\Phi$ (in the units of $|e|$) and number of colour $N^c_\Phi$, the $Br(\hx\to \gamma\gamma)$ can be written as [@Carena:2012xa; @Jaeckel:2012yz]: [\[form2\] Br([H]{}\_[X]{} )= | N\_\^c Q\_\^2 A\_(x\_) |\^2. ]{} Here, $\alpha_{em}$ is the electromagnetic coupling constant, $g_{\Phi\Phi\hx}$ represents the coupling between $\Phi$ and $\hx$ and the detail of $A_{\Phi}(x_{\Phi})$ function, where $x_\Phi=4 M^2_\Phi/M^2_X$, is given in Ref. [@Carena:2012xa]. Keeping in mind the issue of perturbativity we choose $-\sqrt{4\pi}\lsim g_{\Phi\Phi\hx} \lsim \sqrt{4\pi}$, in our numerical analyses. The quantity $g_{\Phi\Phi\hx}$ parametrises the information about the vacuum expectation value (VEV) of $\hx$ and the amount of [*possible*]{} mixing between $\hx$ and the SM-Higgs. From eq. (\[form2\]) it appears that a larger $g_{\Phi\Phi{\rm H}_{\rm X}}$ is useful to produce a bigger $Br(\hx\to \gamma\gamma)$. In reality, however, such scenarios are unrealistic as they correspond to either experimentally challenging large mixing within $\hx$ and the SM-Higgs or a large VEV for $\hx$ inconsistent with the electroweak precision tests [@Agashe:2014kda]. It is apparent from eq. 
(\[form2\]) that depending on the values of $M_\Phi,\,Q_\Phi$ and $N^c_\Phi$, the quantity $Br(\hx\to\gamma\gamma)$ can receive a sizable enhancement. An enlargement is also possible if a future LHC observation confirms a smaller $\Gamma_X$. In our numerical analyses we choose $400~{\rm GeV}\lsim M_{\Phi} \lsim 1000~{\rm GeV}$, consistent with the existing collider bounds on such exotic particles [@Chatrchyan:2012ya; @ATLAS:2012hi; @ATLAS:2014kca]. A sample variation of $Br(\hx\to\gamma\gamma)$ in the $M_\Phi-g_{\Phi\Phi{\rm H}_{\rm X}}$ plane for a colour-singlet ($N^c_\Phi=1$), triply charged $(Q_\Phi=3)$ scalar with different $\Gamma_X$, 1 GeV (left) and 45 GeV (right), is shown in Fig. \[fig:scalar\_contrb\]. ![Plots showing the variation of $Br\gamma\gamma = Br(\hx\rightarrow\gamma\gamma)\times 10^n $ in the ${M_{\Phi}}$ - $|g_{\Phi\Phi {\rm H}_{\rm X}}|$ plane for $N_{\Phi}^c,\,Q_{\Phi}=1,\,3$ with $\Gamma_X=$ $1$ GeV (left) and 45 GeV (right). The chosen ranges for $M_\Phi$ and $g_{\Phi\Phi {\rm H}_{\rm X}}$ are explained in the text. The multiplicative factor $n=9(11)$ for $\Gamma_X=1(45)$ GeV.[]{data-label="fig:scalar_contrb"}](ScaN1Q3-1GeV.pdf "fig:"){width="4.45cm" height="3.7cm"} ![Plots showing the variation of $Br\gamma\gamma = Br(\hx\rightarrow\gamma\gamma)\times 10^n $ in the ${M_{\Phi}}$ - $|g_{\Phi\Phi {\rm H}_{\rm X}}|$ plane for $N_{\Phi}^c,\,Q_{\Phi}=1,\,3$ with $\Gamma_X=$ $1$ GeV (left) and 45 GeV (right). The chosen ranges for $M_\Phi$ and $g_{\Phi\Phi {\rm H}_{\rm X}}$ are explained in the text. The multiplicative factor $n=9(11)$ for $\Gamma_X=1(45)$ GeV.[]{data-label="fig:scalar_contrb"}](ScaN1Q3-45GeV.pdf "fig:"){width="4.45cm" height="3.7cm"} It is evident from Fig. \[fig:scalar\_contrb\] that an experimentally viable light (i.e., $M_\Phi=400$ GeV) colour-singlet $\Phi$ with $Q_\Phi=3$ can produce at most a $Br(\hx\to \gamma\gamma)$ $\sim \mathcal{O}(10^{-8})$ (left plot) when $\Gamma_X$ is small, i.e., 1 GeV.
Choosing $\Gamma_X=45$ GeV instead one faces a reduction by a factor of $45$ (right plot). From eq. (\[form2\]) we see that $Br(\hx\to \gamma\gamma)\propto Q^4_\Phi$. Thus, even for an exotic colour singlet $\Phi$ with $Q_\Phi=10$, one would expect a maximum $Br(\hx\to\gamma\gamma)\sim \mathcal{O}(10^{-6})$ keeping $\Gamma_X,\,M_\Phi=1~{\rm GeV},\,400$ GeV. Now, from our previous discussion in the context of Fig. \[fig:fit\], we have estimated $Br^2(\gamma\gamma\times gg)$ as $\sim \mathcal{O}(10^{-3})$ and $\sim \mathcal{O}(10^{-5})$ for the choice of $\Gamma_X=1$ and 45 GeV, respectively from the best-fit value of $\Delta N$. Hence, the maximum $Br(\hx\to\gamma\gamma)$, extracted from Fig. \[fig:scalar\_contrb\] using eq. (\[form2\]) for a 400 GeV $\Phi$ with $N^c_\Phi,\,Q_\Phi=1,\,10$, would give an [*unrealistic*]{} $Br(\hx\to gg)\sim $ $400(180)$ for $\Gamma_X=1(45)$ GeV scenarios. One may try to consider a [*similar*]{} but [*coloured*]{} (say $N^c_\Phi =3$) $\Phi$ which predicts a maximum $Br(\hx\to\gamma\gamma)\sim \mathcal{O}$ $(10^{-5})$ for $\Gamma_X=1$ GeV. However, one still needs an [*unrealistic*]{} $Br(\hx\to gg)\sim 50$ in this scenario. Moreover, for a $\Phi$ with non-zero colours one must carefully investigate the $\hx\to jj$ constraint, even for a realistic $Br(\hx\to gg)$, especially for moderate to large $\Gamma_X$. ![Variation of $Br\gamma\gamma=Br(\hx\rightarrow\gamma\gamma)\times 10^n$ in the $N_{\Phi}^c$ - $Q_{\Phi}$ plane for $M_{\Phi}=$ 600 GeV, $g_{\Phi\Phi {\rm H}_{\rm X}}=1$, $\Gamma_X=$ 1 GeV(left) and 45 GeV(right). Here, $n=5(7)$ for $\Gamma_X=$ 1(45) GeV.[]{data-label="fig:scalar_contrb2"}](ScaNQ-1GeV.pdf "fig:"){width="4.45cm" height="3.7cm"} ![Variation of $Br\gamma\gamma=Br(\hx\rightarrow\gamma\gamma)\times 10^n$ in the $N_{\Phi}^c$ - $Q_{\Phi}$ plane for $M_{\Phi}=$ 600 GeV, $g_{\Phi\Phi {\rm H}_{\rm X}}=1$, $\Gamma_X=$ 1 GeV(left) and 45 GeV(right). 
Here, $n=5(7)$ for $\Gamma_X=$ 1(45) GeV.[]{data-label="fig:scalar_contrb2"}](ScaNQ-45GeV.pdf "fig:"){width="4.45cm" height="3.7cm"} From the last discussion it appears that a new BSM scalar is not adequate to explain the observed excess. In order to explore this further we have plotted the variation of $Br(\hx\to\gamma\gamma)$ in the $N^c_\Phi$-$Q_\Phi$ plane in Fig. \[fig:scalar\_contrb2\] with $M_\Phi,\,g_{\Phi\Phi {\rm H}_{\rm X}}=600$ GeV, 1 for the choice of $\Gamma_X=1$ GeV (left) and 45 GeV (right). Here, we vary both $N^c_\Phi,\,Q_\Phi$ in the range $1-20$, and the chosen values of $M_\Phi,\,g_{\Phi\Phi {\rm H}_{\rm X}}$ are purely illustrative. It is apparent from both of these plots that to satisfy $Br^2(\gamma\gamma\times gg)$, consistent with the observation of Fig. \[fig:fit\], one would need an [*unrealistic*]{} $Br(\hx\to gg)\sim 10(5)$ for $\Gamma_X=1(45)$ GeV. Adopting a smaller $M_\Phi$ (say 400 GeV) together with a larger $g_{\Phi\Phi {\rm H}_{\rm X}}$ (say $\pm 3$), one can reach a maximum $Br(\hx\to \gamma\gamma)$ $\sim 0.012$ and $\sim 0.00025$ for $\Gamma_X=1$ and $45$ GeV, respectively, considering $N^c_\Phi \gsim14,\,Q_\Phi\gsim 16$. Here, we have used eq. (\[form2\]) and information from Fig. \[fig:scalar\_contrb2\]. So, apparently, these exotic scenarios can give a [*realistic*]{} $Br(\hx\to gg)\lsim \mathcal{O}(0.1)$, consistent with the di-jet searches (see Fig. \[fig:fit\]). However, this moderate $Br(\hx\to gg)$ value may get excluded by future LHC searches with higher expected sensitivity. Moreover, one must carefully re-evaluate the maximum value of $g_{\Phi\Phi {\rm H}_{\rm X}}$ in a consistent theory framework. It is now evident from the last discussion that BSM $\Phi$s, instrumental in reproducing the observed excess, require very high electric and colour charges.
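The steep gain from such exotic charge assignments is just the $(N^c_\Phi Q^2_\Phi)^2$ scaling of the single-scalar loop contribution in eq. (\[form2\]); a short sketch, with the reference point $N^c_\Phi=1$, $Q_\Phi=3$ chosen to match the example discussed earlier:

```python
# Br(H_X -> gamma gamma) from a single scalar loop scales as (N_c * Q^2)^2
# at fixed mass, coupling and total width (cf. eq. (form2)). Enhancement
# relative to the colour-singlet, Q_Phi = 3 reference case from the text:
def enhancement(nc, q, nc_ref=1, q_ref=3):
    return (nc * q**2) ** 2 / (nc_ref * q_ref**2) ** 2

factor = enhancement(nc=1, q=10)  # exotic colour singlet with Q_Phi = 10
print(f"enhancement: {factor:.0f}x")
```

A factor of roughly 123 lifts the $\mathcal{O}(10^{-8})$ branching fraction of the $Q_\Phi=3$ case to $\mathcal{O}(10^{-6})$, matching the estimate quoted for the $Q_\Phi=10$ singlet.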
Particles with such high colour charges are expected to be produced amply at the LHC unless very massive, and hence rather stringent constraints are expected on their existence. We thus leave our discussion of the BSM scalars without further detail. We note in passing that a $Q_\Phi$ value as high as 20 can be interpreted as an effective electric charge, keeping $N^c_\Phi$ fixed. For example, a data-set with $Q_\Phi=20$ for a fixed $N^c_\Phi$ can, using eq. (\[form2\]), be thought of as a coloured/uncoloured multiplet with members of [*almost the same*]{} masses and with electric charges from $\pm 1$ to $\pm10$. Let us now investigate a similar scenario in the presence of a new BSM vector-like fermion, $F$. For a fermion with mass $M_F$, electric charge $Q_F$ (in units of $|e|$) and number of colours $N^c_F$, the quantity $Br(\hx\to \gamma\gamma)$ is expressed as [@Carena:2012xa; @Jaeckel:2012yz]: $$\label{form3} Br({\rm H}_{\rm X}\to\gamma\gamma)= \frac{\alpha^2_{em}\,M^3_X}{1024\,\pi^3\,\Gamma_X}\left|\,\frac{2\,g_{FF{\rm H}_{\rm X}}}{M_F}\, N_{F}^c\, Q_{F}^2\, A_{F}(x_{F}) \,\right|^2.$$ Here, $g_{FF{\rm H}_{\rm X}}$ represents the generic coupling between $F$ and $\hx$. The function $A_{F}(x_{F})$, with $x_F=4M^2_F/M^2_X$, is given in Ref. [@Carena:2012xa]. We consider $500~{\rm GeV}\lsim M_F\lsim 1$ TeV (see Ref. [@BSM-fermion] and references therein) while $g_{FF{\rm H}_{\rm X}}$ is varied over a range similar to that of $g_{\Phi\Phi {\rm H}_{\rm X}}$, based on the same argument. ![Plots showing the variation of $Br\gamma\gamma= Br(\hx\rightarrow\gamma\gamma)\times10^n$ in the ${M_{F}}$ - $|g_{FF {\rm H}_{\rm X}}|$ plane for $N_{F}^c,\,Q_{F}=1,\,2$ with $\Gamma_X=$ $1$ GeV (left) and 45 GeV (right). The chosen ranges for $M_F$ and $g_{FF {\rm H}_{\rm X}}$ are explained in the text.
Here, $n=3(5)$ for the left(right) plot.[]{data-label="fig:fermion_contrb"}](FerN1Q2-1GeV.pdf "fig:"){width="4.45cm" height="3.7cm"} ![Plots showing the variation of $Br\gamma\gamma= Br(\hx\rightarrow\gamma\gamma)\times10^n$ in the ${M_{F}}$ - $|g_{FF {\rm H}_{\rm X}}|$ plane for $N_{F}^c,\,Q_{F}=1,\,2$ with $\Gamma_X=$ $1$ GeV (left) and 45 GeV (right). The chosen ranges for $M_F$ and $g_{FF {\rm H}_{\rm X}}$ are explained in the text. Here, $n=3(5)$ for the left(right) plot.[]{data-label="fig:fermion_contrb"}](FerN1Q2-45GeV.pdf "fig:"){width="4.45cm" height="3.7cm"} A sample variation of $Br(\hx\to \gamma\gamma)$ in the $M_F$ - $g_{FF{\rm H}_{\rm X}}$ plane for a colour-singlet, doubly charged ($Q_F=2$) fermion is shown in Fig. \[fig:fermion\_contrb\] for $\Gamma_X=1$ (left) and 45 GeV (right). These plots show that BSM fermions are more efficient than BSM scalars in raising $Br(\hx\to \gamma\gamma)$. For example, a colour-singlet, doubly charged fermion can produce $Br(\hx\to\gamma\gamma)$ as high as $0.007$ and $\sim 10^{-4}$ for $\Gamma_X=1$ and 45 GeV, respectively. These numbers are orders of magnitude larger than those from Fig. \[fig:scalar\_contrb\] and, as stated before, could only be matched by a $\Phi$ with very high $Q_\Phi$ and $N^c_\Phi$. These enhanced $Br(\hx\to\gamma\gamma)$ values are also useful to estimate [*realistic*]{} values of $Br(\hx\to gg)$, using the information from Fig. \[fig:fit\]. As an example, from Fig. \[fig:fermion\_contrb\], with the maximum of $Br(\hx\to\gamma\gamma)$, one can estimate $Br(\hx\to gg)\sim 0.14(0.063)$ for $\Gamma_X=1(45)$ GeV using the derived bound on $Br^2(\gamma\gamma\times gg)$. The observed excess can thus be reproduced easily, especially for smaller $\Gamma_X$. However, for larger $\Gamma_X$, depending on its value, some of the $Br(\hx\to gg)$ values remain excluded by the di-jet searches as already depicted in Fig.
\[fig:fit\]. In order to study the behaviour of $Br(\hx\to\gamma\gamma)$ with changes in the $N^c_F$-$Q_F$ values we have plotted it in Fig. \[fig:fermion\_contrb2\] for the choice of $\Gamma_X=1$ GeV (left) and 45 GeV (right). The chosen $M_F$, $g_{FF {\rm H}_{\rm X}}$ values are purely illustrative. We vary $N^c_F(Q_F)$ in the range $1$ to $3(5)$. ![Variation of $Br\gamma\gamma=Br( \hx\rightarrow\gamma\gamma)\times 10^n$ in the $N_{F}^c$ - $Q_{F}$ plane for $M_{F}=$ 600 GeV, $g_{FF{\rm H}_{\rm X}}=1$, $\Gamma_X=$ 1 GeV(left) and 45 GeV(right). For the left(right) plot $n=2(3)$.[]{data-label="fig:fermion_contrb2"}](FerNQ-1GeV.pdf "fig:"){width="4.45cm" height="3.7cm"} ![Variation of $Br\gamma\gamma=Br( \hx\rightarrow\gamma\gamma)\times 10^n$ in the $N_{F}^c$ - $Q_{F}$ plane for $M_{F}=$ 600 GeV, $g_{FF{\rm H}_{\rm X}}=1$, $\Gamma_X=$ 1 GeV(left) and 45 GeV(right). For the left(right) plot $n=2(3)$.[]{data-label="fig:fermion_contrb2"}](FerNQ-45GeV.pdf "fig:"){width="4.45cm" height="3.7cm"} These plots show that an electrically charged coloured fermion can generate $Br(\hx\to\gamma\gamma)$ as high as $0.12(0.0025)$ for $\Gamma_X=1(45)$ GeV. Using the information from Fig. \[fig:fermion\_contrb2\] and our knowledge of Fig. \[fig:fit\], these numbers predict $0.01(0.004)\lsim Br(\hx\to gg)\lsim 0.1(0.05)$ for $\Gamma_X=1(45)$ GeV. These numbers are fully consistent with the di-jet searches as shown in Fig. \[fig:fit\]. For exotic fermions a lower value of $M_F$ (say 400 GeV) and a higher $g_{FF{\rm H}_{\rm X}}$ value (say $\pm \sqrt{4\pi}$), especially for small $\Gamma_X$, will produce $Br(\hx\to \gamma\gamma)$ either $>1$ or inconsistent with the constraint from the photon fusion process in certain regions of the $N^c_F$-$Q_F$ plane, e.g., $1\lsim N^c_F\lsim 3,\,Q_F> 3$.
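The behaviour in these scans is controlled by the loop amplitude $A_F(x_F)$. The following minimal sketch is an illustration only, assuming the standard spin-1/2 loop function in the regime $x_F=4M^2_F/M^2_X>1$ (i.e. $M_F>M_X/2$), where the amplitude is real:

```python
import math

def A_F(x):
    """Spin-1/2 loop amplitude for x = 4*M_F^2/M_X^2 >= 1 (fermion above threshold)."""
    f = math.asin(1.0 / math.sqrt(x)) ** 2
    return 2.0 * x * (1.0 + (1.0 - x) * f)

# At threshold (M_F = M_X/2) the amplitude is maximal, A_F = 2;
# for M_F >> M_X it falls towards the asymptotic value 4/3.
print(A_F(1.0))                        # threshold
print(A_F((2 * 1000.0 / 750.0) ** 2))  # M_F = 1 TeV, M_X = 750 GeV
```

Since $A_F$ only varies between $4/3$ and $2$ in this regime, the strong enhancement with $N^c_F$ and $Q_F$ seen in the figures stems almost entirely from the $|N^c_F Q^2_F|^2$ prefactor.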
Further, unlike the BSM scalars, the BSM fermions can reproduce the observed excess with realistic values of $Br(\hx\to gg)$ and $Br(\hx\to \gamma\gamma)$ even for $M_F=1$ TeV. For the latter case, with $g_{FF{\rm H}_{\rm X}}=1$, $\Gamma_X=45$ GeV and $1\lsim N^c_F(Q_F)\lsim 3(5)$, one gets $0.0001\lsim Br(\hx\to \gamma\gamma)\lsim 0.0008$ and thus $0.0125\lsim Br(\hx\to g g)\lsim 0.1$, consistent with Fig. \[fig:fit\]. We note in passing that, similar to the scalars, one can also interpret Fig. \[fig:fermion\_contrb2\], say $N^c_F,Q_F=1,\,4$, in terms of a single uncoloured multiplet whose members have quasi-degenerate masses and electric charges ranging from $\pm 1$ to $\pm 3$. The advantage of the BSM fermions over the scalars is now established. Although one can reproduce the excess with a colour-singlet fermion with high $Q_F$ (see Fig. \[fig:fermion\_contrb2\]), it is nevertheless essential to explore the scenario with $N^c_F >1$, since otherwise the expected BSM origin of the $gg\to \hx$ process remains unexplained. This scenario may be constrained by di-jet searches, given the enhanced sensitivity expected from future LHC operation. The exotic fermions, similar to $Br(\hx\to \gamma\gamma)$ (see eq. (\[form3\])), can also contribute to $Br(\hx\to gg)$. At leading order this branching ratio [@Spira:1995rr] is given as: $$\label{h2gg_fermion} Br({\rm H}_{\rm X}\to gg)= \frac{g^2_{FF{\rm H}_{\rm X}}\,\alpha^2_s\, M^3_X}{128\,\pi^3\, M^2_F\,\Gamma_X}\,\left| A_{F}(x_{F}) \right|^2.$$ Here, $\alpha_s$ is the strong coupling constant. In our numerical analysis we have multiplied $Br(\hx\to gg)$, as given in eq. (\[h2gg\_fermion\]), by a factor of $1.5$ to account for higher-order strong-interaction effects. Using eqs. (\[form3\]) and (\[h2gg\_fermion\]) simultaneously we have studied a sample variation of $Br^2(\gamma\gamma\times gg)$ in the $M_F$-$g_{FF{\rm H}_{\rm X}}$ plane as shown in Fig. \[fig:fermion\_contrb3\] with $\Gamma_X=1$ GeV (left) and 45 GeV (right).
For this figure $M_F$, $g_{FF{\rm H}_{\rm X}}$ are varied as in Fig. \[fig:fermion\_contrb\] and we work with $N^c_F=3$ and $Q_F=3$. ![Plots showing the variation of $\gamma\otimes g \equiv Br(\hx\to\gamma\gamma)\times$ $Br(\hx\to gg)\times 10^n$ in the ${M_F}-|g_{FF {\rm H}_{\rm X}}|$ plane for $N^c_F=3,\,Q_F=3$ with $\Gamma_X=1$ GeV (left) and 45 GeV (right). Here, $n=3(5)$ for $\Gamma_X=1(45)$ GeV.[]{data-label="fig:fermion_contrb3"}](Brggphph1CGeV.pdf "fig:"){width="4.45cm" height="3.75cm"} ![Plots showing the variation of $\gamma\otimes g \equiv Br(\hx\to\gamma\gamma)\times$ $Br(\hx\to gg)\times 10^n$ in the ${M_F}-|g_{FF {\rm H}_{\rm X}}|$ plane for $N^c_F=3,\,Q_F=3$ with $\Gamma_X=1$ GeV (left) and 45 GeV (right). Here, $n=3(5)$ for $\Gamma_X=1(45)$ GeV.[]{data-label="fig:fermion_contrb3"}](Brggphph45CGeV.pdf "fig:"){width="4.45cm" height="3.75cm"} The observed behaviour of $Br^2(\gamma\gamma\times gg)$ with the different parameters, i.e., $M_F$, $g_{FF {\rm H}_{\rm X}}$ and $\Gamma_X$, follows from eqs. (\[form3\]) and (\[h2gg\_fermion\]). For example, both $Br(\hx\to \gamma\gamma)$ and $Br(\hx\to gg)$ are $\propto \Gamma^{-1}_X$, and thus the shrinking of the parameter space compatible with the observed excess for the larger $\Gamma_X=45$ GeV (right plot of Fig. \[fig:fermion\_contrb3\]) is anticipated. At the same time, these two branching ratios are $\propto g^2_{FF {\rm H}_{\rm X}}/M^2_F$ (see eqs. (\[form3\]),(\[h2gg\_fermion\])). Hence, the apparent lowering of $Br^2(\gamma\gamma\times gg)$ for larger $M_F$ values must be compensated by larger $g_{FF {\rm H}_{\rm X}}$ values in order to remain compatible with the excess. This feature is depicted in Fig. \[fig:fermion\_contrb3\], notably in the left plot. The most useful aspect of Fig. \[fig:fermion\_contrb3\] is the estimation of the future detection prospects for the process $gg \to H^*_X \to F \bar{F}$.
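The compensation between $M_F$ and $g_{FF {\rm H}_{\rm X}}$ described above can be checked with a short scaling script. This is only a sketch: all constant prefactors are dropped, on the assumption stated in the text that each branching ratio scales as $g^2_{FF{\rm H}_{\rm X}}|A_F|^2/(M^2_F\,\Gamma_X)$, so only ratios of the product are meaningful here.

```python
import math

def A_F(x):
    # spin-1/2 loop amplitude, x = 4*M_F^2/M_X^2 >= 1
    f = math.asin(1.0 / math.sqrt(x)) ** 2
    return 2.0 * x * (1.0 + (1.0 - x) * f)

def br_product(MF, g, GammaX, MX=750.0):
    # Br(gamma gamma) x Br(gg) up to constant prefactors:
    # each branching ratio scales as g^2 |A_F|^2 / (M_F^2 * Gamma_X)
    x = 4.0 * MF ** 2 / MX ** 2
    return (g ** 2 * abs(A_F(x)) ** 2 / (MF ** 2 * GammaX)) ** 2

# a larger M_F lowers the product and must be compensated by a larger coupling
print(br_product(500.0, 1.0, 1.0), br_product(1000.0, 1.0, 1.0))
# a 45 GeV width dilutes the product by a factor 45^2 relative to a 1 GeV width
print(br_product(500.0, 1.0, 1.0) / br_product(500.0, 1.0, 45.0))
```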
If future measurements indicate a narrow width for this excess, say 1 GeV, the prospects for measuring $\sigma(gg \to H^*_X \to F \bar{F})$ are less promising for two reasons: (1) the expected production enhancement in the low-$M_F$ region is tempered by a relatively small $g_{FF {\rm H}_{\rm X}}$, and (2) in the high-$g_{FF {\rm H}_{\rm X}}$ regime the same logic applies through heavier $M_F$. These two features are visible in the left plot of Fig. \[fig:fermion\_contrb3\]. By contrast, the more stringent limit, i.e., $Br^2(\gamma\gamma\times gg) \sim \mathcal{O}(10^{-5})$, for the larger $\Gamma_X=45$ GeV prefers smaller $M_F$ and larger $g_{FF {\rm H}_{\rm X}}$ (see right plot of Fig. \[fig:fermion\_contrb3\]). Both of these features help to enhance $\sigma(gg \to H^*_{X} \to F \bar{F})$. [*Conclusions:*]{} To summarise, the LHC run-II has observed an excess in the di-photon invariant mass distribution near 750 GeV. This excess, as argued in this article, requires BSM physics. In this article we have explored this excess, assuming a spin-0 nature, using a simplified effective Lagrangian sensitive to [*new*]{} physics effects. The chosen framework helped us to estimate a lower bound on $\Gamma_X$, consistent with the different LHC constraints and the photon fusion process, for changes in the new physics parameters, $\kappa_g,\,\kappa_A$. We have also explored the possible correlation between $Br(\hx\to \gamma\gamma)$ and $Br(\hx\to gg)$ in the light of the observed excess and the diverse possible constraints. This correlation provides a [*model-independent but $\Gamma_X$-dependent*]{} bound on $Br(\hx\to \gamma\gamma)\times$ $Br(\hx\to gg)$.
Subsequently, we have utilised this correlation to scrutinise the effect of other BSM scalars and fermions with various electric charges and numbers of colours, which couple simultaneously to $\hx$ and $gg,\,\gamma\gamma$ and might prove instrumental in reproducing this excess through higher-order processes. Our analyses show that, to accommodate the observed excess, the presence of additional BSM fermions is preferred over that of scalars. Moreover, the prospects for detecting these new fermions in the future are better for a large width of the observed excess. In conclusion, should this di-photon excess survive with more data, it cannot be an isolated [*surprise*]{}. Rather, it would be the pioneering evidence of a BSM mass spectrum whose other, heavier members await detection. Acknowledgments {#acknowledgments .unnumbered} =============== The work of JC is supported by the Department of Science and Technology, Government of India under the Grant Agreement number IFA12-PH-34 (INSPIRE Faculty Award). The work of AC is supported by the Lancaster-Manchester-Sheffield Consortium for Fundamental Physics under STFC Grant No. ST/L000520/1. PG acknowledges the support from P2IO Excellence Laboratory (LABEX). The work of SM is partially supported by funding available from the Department of Atomic Energy, Government of India, for the Regional Centre for Accelerator-based Particle Physics (RECAPP), Harish-Chandra Research Institute. [^1]: A [*less significant*]{} upward fluctuation was also noticed during the LHC run-I with 8 TeV center-of-mass energy [@CMS:2015cwa; @Aad:2015mna]. [^2]: Appeared on the arXiv on the same day as this article. [^3]: The observed excess is also compatible with a spin-2 nature [@CMS-run-II-2; @ATLAS-run-II-1L; @CMS-run-II-2L]. [^4]: We are working in the limit where the interaction $\kappa_W W^a_{\mu\nu}W^{\mu\nu}_a$, i.e., the coupling between the SM $SU(2)$ gauge bosons and $\hx$, vanishes.
[^5]: The ATLAS and CMS collaborations have recently reported $ZZ$ [@atlas-zz13; @CMS:2016vvl] and $Z\gamma$ [@atlas-zph13; @CMS:2016rsl] search results with early 13 TeV data. [^6]: For di-jet searches, early 13 TeV results are also available [@ATLAS:2015nsi; @Khachatryan:2015dcf]. [^7]: This correlation may disappear if this new resonance arises through the photon fusion process [@photon-photon]. [^8]: The fact that the same product is $\sim \mathcal{O} (10^{-11})$[@cernyellow] for a 750 GeV state with SM-like properties justifies BSM nature of this excess.
--- abstract: 'We perform quantum key distribution (QKD) in the presence of 4 classical channels in a C-band dense wavelength division multiplexing (DWDM) configuration using a commercial QKD system. The classical channels are used for key distillation and 1 Gbps encrypted communication, rendering the entire system independent of any communication channel other than a single dedicated fibre. We successfully distil secret keys over fibre spans of up to 50 km. The separation between the quantum channel and the nearest classical channel is only 200 GHz, while the classical channels are all separated by 100 GHz. In addition, we discuss possible improvements and alternative configurations, for instance whether it is advantageous to choose the quantum channel at 1310 nm or to opt for a pure C-band configuration.' author: - Patrick Eraerds - Nino Walenta - Matthieu Legré - Nicolas Gisin - Hugo Zbinden bibliography: - 'Bibliography.bib' title: 'Quantum key distribution and 1 Gbit/s data encryption over a single fibre' --- Introduction {#sec:Intro} ============ Since the initial proposal of quantum key distribution (QKD) in 1984 [@BB84] and its first experimental demonstration [@Bennett92], major progress in long-distance, fibre-based point-to-point QKD has been achieved (for an overview over current state-of-the-art implementations, see [@NJPFocusOnQKD]). Until recently, one of the specifics of QKD systems was the need for a dedicated dark optical fibre, exclusively reserved for the quantum channel (single photon level). Signals of classical strength, used to perform key distillation and encrypted communication between the end users, were sent through a second fibre so as not to compromise the weak quantum signal. The next logical step towards wider availability of QKD links is to look at the compatibility of QKD with existing fibre infrastructure.
Common public DWDM (dense wavelength division multiplexing) networks multiplex up to 50 different wavelength channels on a single fibre. If the quantum channel is launched into a fibre together with other classical signals, several effects like channel crosstalk, Raman scattering, four-wave mixing or amplified spontaneous emission (in case of amplification of the classical channels) can severely degrade QKD system operation or, worse, prevent it altogether. First investigations in this direction were conducted by Townsend in the late nineties [@Townsend]. The impact of a single classical C-band channel, wavelength multiplexed with a quantum channel at 1310 nm, was analyzed. Later, in 2005, Lee and Wellbrock demonstrated quantum key distribution, placing both the quantum channel and one classical channel into the C-band with a separation down to 400 GHz, equivalent to 3.2 nm [@Lee]. We note that the classical channel was neither linked to QKD system operation nor used for encrypted communication. More recent works [@Telcordia2009; @Telcordia2007] investigate different impairment sources on a more general level, including effects which occur when more than one classical channel is present, e.g. four-wave mixing. Apart from the long-term goal of QKD operation on public DWDM networks, another frequently encountered network topology could advance QKD availability on a shorter time scale. In order to accommodate future growth, telecom companies have spent the last few years installing dedicated point-to-point fibres. These fibres can also be used in the standard configuration of QKD, with a dark fibre for the quantum channel and another fibre for the encrypted communication. However, for reasons of availability and fibre leasing costs, operation on only one fibre is highly desirable. In this paper we investigate exactly this situation, where in total only one dedicated fibre is available and an encrypted link, based on QKD, is to be established between its endpoints.
This objective thus necessitates the wavelength multiplexing of all system-relevant channels, i.e. the key distillation and encrypted communication channels as well as the quantum channel, on a single fibre. Even when one is not obliged to operate on a single fibre, QKD systems which require a classical clock signal to synchronize the separate devices would in particular benefit from the higher robustness against fibre drifts that this configuration offers. Furthermore, the configuration investigated here has the advantage of perfect knowledge of the classical channels, which are difficult to assess in a public network configuration. A reliable performance characterisation of the entire system is therefore obtainable. In our experiment we use a standard 8 channel C-band DWDM with 100 GHz (corresponding to 0.8 nm) spacing. We simultaneously multiplex 4 classical channels (one bidirectional channel for distillation and encrypted 1 Gbps communication, respectively) with a quantum channel, separated from the nearest classical channel by only 200 GHz. The paper is organized as follows: In Sec. \[sec:Impairment\] we discuss the different impairment sources relevant for our realization. Section \[sec:Experiment\] describes the QKD setup and presents the experimental results, followed by a discussion and outlook in Sec. \[sec:Discussion\]. Section \[sec:Conclusions\] contains our conclusions. Impairment sources {#sec:Impairment} ================== Raman scattering {#sec:Raman} ---------------- ![\[fig:RamanSpectra\] Measured effective Raman cross-section $\rho\left(\lambda\right)$ (per km fibre length and nm bandwidth) for a pump laser wavelength centred at 1550 nm in a standard single mode fibre at room temperature.](./Figures/figure1a){width="1\linewidth"} ![\[fig:RamanSpectra\] Zoom on the anti-Stokes dip of the Raman spectrum.
In channels +2 and +3 the minimal amount of Raman scatter is found.](./Figures/figure1b){width="1\linewidth"} Due to photon-phonon interaction, photons can change their wavelength and thus compromise other channels. Depending on whether a phonon gets excited or de-excited, photons at wavelengths above (Stokes) and below (anti-Stokes) the initial wavelength are generated. Scattering off acoustic phonons (Brillouin scattering) is not critical, since the maximal frequency shift of the scattered photons is small (10 GHz, in backward direction) and therefore cannot reach adjacent channels on a 100 GHz grid. By contrast, scattering off optical phonons (Raman scattering) can lead to significant frequency shifts covering the entire C-band [^1], with an intensity maximum at a shift of about 13 THz (corresponding to a wavelength shift of 100 nm at 1550 nm). In contrast to acoustic phonons, the more or less flat dispersion relation of optical phonons causes frequency shifts independent of the scattering direction. This means that in the co- as well as in the counter-propagating direction (with respect to the exciting signal) a broad spectrum of photons is present. We measured the Raman scatter generated by a 50 km standard single mode fibre and extracted an effective Raman scattering cross-section $\rho(\lambda)$, shown in Fig. \[fig:RamanSpectra\]. It is normalized with respect to spectral bandwidth and fibre length and accounts for the fibre capture ratio of the scattered light. In turn, by means of $\rho(\lambda)$ and allowing for fibre attenuation, we can calculate the Raman scatter power emerging from the input $P_{ram,b}$ (backward Raman scattering) and output $P_{ram,f}$ (forward Raman scattering) of a fibre of arbitrary length $L$.
Assuming a certain filter pass-band $[\lambda,\lambda+\Delta\lambda]$ and approximating the spectral integration via $$\int^{\lambda+\Delta\lambda}_{\lambda}\rho(\lambda')d\lambda' \approx \rho(\lambda)\cdot \Delta\lambda \label{Ispec}$$ we obtain (see Appendix A) $$P_{ram,f} = P_{out}\cdot L\cdot \rho(\lambda)\cdot \Delta\lambda \label{eq:RamanFW}$$ $$P_{ram,b} = P_{out}\cdot \frac{\sinh(\alpha\cdot L)}{\alpha}\cdot \rho(\lambda)\cdot \Delta\lambda \label{eq:RamanBW}$$ where $P_{out}$ is the power of the exciting laser at the fibre output in \[W\], $\alpha$ the fibre attenuation coefficient \[km$^{-1}$\] and $L$ the fibre length \[km\]. $P_{out}$ can be written in terms of the input power via $P_{out}=P_{in}\cdot e^{-\alpha\cdot L}$. The impact of each of the Raman contributions, represented by the detection probability per ns detector gate, is depicted in Fig. \[fig:AllNoise\]. Note that we assume equal attenuation for the initial and scattered wavelengths, which is reasonable for our total wavelength span of 4 nm (see Sec. \[sec:Experiment\]). Channel crosstalk {#sec:CrossTalk} ----------------- The relative strength of the classical channels requires a large DWDM isolation with respect to the quantum channel. As a reasonable benchmark for sufficient isolation we propose the detector dark count probability. To calculate a typical value we need to consider the receiver sensitivity of the transceiver modules used for the classical communication (see Sec. \[sec:Experiment\]). It determines the optical power necessary to ensure error-free detection. In our particular case the sensitivity which guarantees a bit error rate $BER<10^{-12}$ is equal to -28 dBm (Finisar FWLF-1631-xx). This power corresponds to approximately $1.2\cdot10^4$ photons per ns. In order to attenuate this photon number such that the detection probability per ns gate is of the order of the dark count probability ($5\cdot10^{-6}$ ns$^{-1}$), an isolation of about 80 dB is needed.
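A numerical sketch of Eqs. (\[eq:RamanFW\]) and (\[eq:RamanBW\]) is straightforward. The script below uses the fibre loss quoted later in the text ($\alpha_{dB}=0.21$ dB/km) together with a purely illustrative value for $\rho(\lambda)\cdot\Delta\lambda$, since the measured cross-section is only given graphically in Fig. \[fig:RamanSpectra\]:

```python
import math

alpha = 0.21 * math.log(10) / 10.0   # fibre attenuation [1/km] from alpha_dB = 0.21 dB/km
P_out = 10 ** (-26.05 / 10) * 1e-3   # classical power at the fibre output [W] (-26.05 dBm)
rho_dl = 1e-9                        # illustrative rho(lambda)*Delta_lambda [1/km]

def raman_forward(L):
    """Eq. (RamanFW): co-propagating Raman power [W] for fibre length L [km]."""
    return P_out * L * rho_dl

def raman_backward(L):
    """Eq. (RamanBW): counter-propagating Raman power [W]."""
    return P_out * math.sinh(alpha * L) * rho_dl / alpha

for L in (1.0, 10.0, 50.0):
    print(L, raman_forward(L), raman_backward(L))
```

The counter-propagating contribution always exceeds the co-propagating one, since $\sinh(\alpha L)/\alpha > L$ for any $L>0$; the two coincide in the short-fibre limit.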
This estimate assumes a detector efficiency of $\eta=0.07$ and an internal component loss of 2.65 dB on Bob’s side. Our standard 8-channel DWDM provides an isolation of 82 dB between non-adjacent channels, which is just sufficient to meet the aforementioned criterion, see Fig. \[fig:AllNoise\]. In the case of insufficient isolation, additional filters can further improve the isolation, however at the expense of additional insertion loss in the quantum channel. In particular, considering Raman scattering, we find that crosstalk is not a limiting factor for long fibre lengths. Finally, we note that a sufficient isolation of the co-propagating quantum and classical channels entails that crosstalk from Rayleigh backscatter in a counter-propagating configuration can be neglected. ![\[fig:AllNoise\] Different contributions to the total noise count probability per ns detector gate assuming our system parameters (DWDM channel isolation=82 dB, fibre loss $\alpha_{dB}=0.21$ dB/km, 4 classical channels each with a power of -28 dBm at the receiver, $\eta=0.07$, internal components loss=2.65 dB).](./Figures/figure2){width="\linewidth"} Four-wave mixing {#sec:FWM} ---------------- Four-wave mixing (FWM) is mediated by the third order susceptibility $\chi^{(3)}$ and describes the generation of additional photon frequencies, different from those present in the initial fields. In contrast to Raman scattering, no energy is transferred to or taken from the fibre, i.e. no phonon excitation or de-excitation takes place. Most harmful for our setup would be the degenerate case where two exciting frequencies $f_1$, $f_2$ (assuming $f_1>f_2$) generate side band frequencies $f_+=f_1+(f_1-f_2)$ and $f_-=f_2-(f_1-f_2)$. If the channel separation is not properly chosen, $f_{+/-}$ may coincide with the quantum channel pass-band. The generation efficiency depends on the phase-matching condition, as well as on the relative polarisation and propagation direction of the involved field frequency components.
The phase-matching condition is particularly easy to fulfil around the zero dispersion wavelength, where FWM can corrupt even classical communication [@Forghieri]. In Sec. \[sec:Experiment\] we present a channel configuration which prevents efficient FWM generation, independent of the fibre type (SSMF, DSF or NZDSF). In addition to the stimulated case described before, it is also important to assess the noise contribution from spontaneous FWM. Spontaneous FWM allows the creation of signal and idler frequencies $f_s,f_i$ from each pump frequency $f_p$, satisfying energy conservation via $2f_p=f_s+f_i$. Efficient generation again depends on the phase-matching condition. Around the zero dispersion wavelength the generated spectrum can be rather broad, superposing the spectrum generated by Raman scattering [@Agrawal]. Following [@Agrawal] we calculate that even in our most demanding configuration the $\gamma P_0 L$ product is very small ($0.002$, whereas a considerable contribution requires $\gamma P_0 L$ of at least about $0.1$). This indicates that even if we operated around the zero dispersion wavelength, spontaneous FWM could be neglected with respect to Raman scattering. Experiment {#sec:Experiment} ========== Setup {#sec:Setup} ----- For the experiments we adopt a commercial QKD system (Cerberis from idQuantique [@idQuantique]). As outlined in Fig. \[fig:OverallSetup\], this solution combines a QKD server for secure point-to-point key distribution with Layer 2 encryption units that encode and decode messages using the keys provided by the quantum server, enabling completely secure bidirectional communication between two distant partners, Alice and Bob. ![image](./Figures/figure3){width="\linewidth"} The QKD layer is based on a “plug & play” phase encoding quantum key distribution system where all optical and mechanical fluctuations are automatically and passively compensated [@StuckiPlugAndPlay2002]. Bob generates a sequence of optical pulses with a frequency of $f_{rep}=5$ MHz.
It propagates through his unbalanced Mach-Zehnder interferometer such that each pulse is split into two orthogonally polarized pulses which are separated by the interferometer imbalance. The sequence length is chosen to match twice the length of the storage line of $L_{s}\approx 10$ km at Alice’s in order to avoid compromising Rayleigh backscatter. At Alice’s, the major proportion of photons per pulse is used to trigger the classical detector $D_{A}$ in order to synchronize her device with Bob’s. The remaining proportion is reflected at the Faraday mirror (FM), phase modulated by $\phi_{A}$ in accordance to Alice’s choice of bit value and encoding base, attenuated by the variable optical attenuator (VOA) to $\mu$ photons per pulse and returned to Bob through the same fibre link. Due to the Faraday rotation, each pulse propagates along the contrary interferometer arm as before and interferes at the beam splitter (BS) in accordance to the phase difference between $\phi_{A}$ and Bob’s base choice $\phi_{B}$. All internal losses of Bob’s optical components sum up to $t_{B}=2.65~dB$ (excluding DWDMs and optional filters). The signals are detected by InGaAs avalanche photo diodes (APDs) operated in Geiger mode. The APDs are temperature stabilized at 220 K, gated using 1.5 ns long gates and with a dead time of $\tau_{dead}=10~\mu$s applied after each detection to reduce the afterpulse probability to $p_{ap}\approx 0.008$. Their detection efficiencies are $\eta\approx0.07$ at a dark count probability of $p_{dc}\approx5\cdot10^{-6}$ ns$^{-1}$. After key sifting, optionally via the sifting protocols BB84 or SARG [@SARG2004], followed by fully implemented error correction using the CASCADE algorithm [@CASCADE1994] and privacy amplification using hashing functions based on Toeplitz matrices [@WegmanCarter1981], Alice and Bob remain with shared secret keys. 
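The Toeplitz-matrix hashing used in the privacy amplification step can be illustrated in a few lines of Python. This is a toy sketch with hypothetical block sizes, not the actual implementation or key sizes of the commercial system:

```python
import numpy as np

def toeplitz_hash(corrected_key, m, seed_bits):
    """Compress an n-bit error-corrected key to m secret bits using an
    m x n binary Toeplitz matrix defined by n + m - 1 public random bits."""
    n = len(corrected_key)
    assert len(seed_bits) == n + m - 1
    i = np.arange(m)[:, None]
    j = np.arange(n)[None, :]
    T = seed_bits[i - j + (n - 1)]     # entries constant along each diagonal
    return T.dot(corrected_key) % 2    # matrix-vector product over GF(2)

rng = np.random.default_rng(7)
n, m = 128, 64                         # hypothetical block sizes
seed = rng.integers(0, 2, n + m - 1)   # public randomness shared by Alice and Bob
key = rng.integers(0, 2, n)            # error-corrected (but partly leaked) key
secret = toeplitz_hash(key, m, seed)
```

Because the Toeplitz family is two-universal, the leftover hash lemma guarantees that an eavesdropper's knowledge of the $m$ output bits is negligible once $m$ is chosen sufficiently far below the conditional entropy of the corrected key.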
The integrity of the public distillation communication is ensured by a Wegman-Carter-type authentication scheme based on universal hashing functions [@Carter1979]. The pair of Ethernet encryptors is periodically updated with the secret keys to establish a permanent AES-256 encrypted 1 Gbps data link between Alice and Bob. The data to be encrypted are continuously provided by two 1 Gbps streams of random bits from a network test system (EXFO PacketBlazer FTB-8510). We note that typically the key refresh rate is once per minute which requires a secret key rate of at least 8.6 bps. In order to guarantee continuous operation the key refresh rate is temporarily reduced if the secret key rate drops below that limit. All in all, to completely operate the Cerberis system, four classical communication channels have to be set up between Alice and Bob in addition to the quantum channel. The bidirectional communication for distillation, i.e. key sifting, error correction and privacy amplification, demands two authenticated channels, one from Alice to Bob and one from Bob to Alice. Similarly, two channels are required for the bidirectional encrypted data transmission between the encryptors. All classical communication channels are implemented using standard optical 2.67 Gbps DWDM SFP transceivers (Finisar FWLF-1631-xx). We multiplex the quantum channel along with the four classical channels using off-the-shelf 100 GHz DWDM modules (OptiWorks). The modules possess an insertion loss of 1.95 dB and an isolation of 59 dB (82 dB) for adjacent (non-adjacent) channels. The implemented channel configuration is shown in Fig. \[fig:OverallSetup\]. For the quantum channel we choose a wavelength of 1551.72 nm on the ITU C-band grid. We take advantage of 10 % less Raman noise on the anti-Stokes side of the Raman spectrum at ambient temperature (see Fig. \[fig:RamanSpectra\]) by placing all classical channels at higher wavelengths. 
To benefit both from the considerably higher DWDM channel isolation for non-adjacent channels and from lower Raman noise, we omit the adjacent channel and set up the quantum channel 200 GHz (1.6 nm) apart from the nearest classical channel. We minimize the direct impairment due to FWM by choosing the frequency difference between two co-propagating channels such that no FWM frequency product is generated within the quantum channel pass-band (see Sec. \[sec:FWM\]). The discussion of impairment sources has shown that in general the amount of noise impinging on the detectors increases with the total power present in the fibre. Hence, we reduce the power of the classical channels with variable optical attenuators (VOA) to just compensate the overall transmission losses, such that the corresponding power at the receiver’s end just matches the receiver sensitivity of -28 dBm. This corresponds to $P_{out}=-26.05$ dBm in Eqs. \[eq:RamanFW\] and \[eq:RamanBW\] due to the insertion losses of our DWDM modules. To further minimize the amount of Raman noise, we optionally add phase-shifted fibre Bragg grating filters (F) from aos [@aos], centred around the quantum channel wavelength, in front of each APD. Their spectral bandwidth of 45 pm (fwhm) and extinction ratio of 14 dB entail an 85 % rejection of noise photons, outweighing the additional attenuation of 2 dB due to insertion loss. The filters are actively and independently temperature stabilized using standard temperature controllers, mainly to permit fine adjustment of their transmission bandwidth. A straightforward configuration with only one filter inserted between the PBS and the DWDM was abandoned because of back reflections of the quantum channel laser which completely saturated the APDs.
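These power levels permit a direct numerical check of the $\sim$80 dB isolation benchmark of Sec. \[sec:CrossTalk\]; a short sketch using only numbers quoted in the text:

```python
import math

h, c = 6.626e-34, 2.998e8              # Planck constant [J s], speed of light [m/s]
wavelength = 1550e-9                   # classical channel wavelength [m]
P_rx = 10 ** (-28 / 10) * 1e-3         # receiver sensitivity of -28 dBm in [W]

photons_per_ns = P_rx / (h * c / wavelength) * 1e-9
print(photons_per_ns)                  # ~1.2e4, as quoted in the text

eta = 0.07                             # detector efficiency
t_bob = 10 ** (-2.65 / 10)             # 2.65 dB internal component loss at Bob
p_dc = 5e-6                            # dark count probability per ns gate

# isolation [dB] needed to push classical crosstalk below the dark count level
isolation_db = 10 * math.log10(photons_per_ns * eta * t_bob / p_dc)
print(isolation_db)                    # ~80 dB
```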
  Fibre length                    1 km     5 km     10 km    25 km       35 km       41 km     50 km
  ------------------------------- -------- -------- -------- ----------- ----------- --------- ----------
  **Without filters**
  $R_{sec}$ \[bps\] BB84/SARG     2829/-   2047/-   1524/-   134/511     4.3/72      -/2.0     -
  $QBER$ \[%\] BB84/SARG          0.57/-   0.72/-   1.18/-   4.53/2.12   8.60/4.77   -/7.48    -
  **With filters**
  $R_{sec}$ \[bps\] BB84/SARG     -        -        -        251/347     25/128      7.5/43    0/11
  $QBER$ \[%\] BB84/SARG          -        -        -        1.6/1.7     3.6/2.5     6.7/3.7   34.5/5.4
  ------------------------------- -------- -------- -------- ----------- ----------- --------- ----------

  : Secret key rate $R_{sec}$ and $QBER$ for different fibre lengths, without and with the additional filters (F).[]{data-label="tab:ResultTable"}

Results {#sec:Results} ------- ![\[fig:OverallResults\]Performance of the QKD based encryption system in terms of $QBER$ (top) and secret key rate provided to the encryptors (bottom) in dependence of the fibre length. Symbols denote our experimental results, solid lines our calculations. Additional filtering increases the maximum fibre length to 41 km using BB84 key sifting and 50 km using SARG.](./Figures/figure4a "fig:"){width="1.0\linewidth"}\ ![\[fig:OverallResults\]Performance of the QKD based encryption system in terms of $QBER$ (top) and secret key rate provided to the encryptors (bottom) in dependence of the fibre length. Symbols denote our experimental results, solid lines our calculations. Additional filtering increases the maximum fibre length to 41 km using BB84 key sifting and 50 km using SARG.](./Figures/figure4b "fig:"){width="1.0\linewidth"} We characterize the system performance for different fibre lengths by measuring the quantum bit error rate $QBER$ and the secret key rate $R_{sec}$. The $QBER$, i.e. the ratio of erroneous detections to the total number of detections, can be approximated by $$QBER = QBER_{opt}+QBER_{det}+QBER_{wdm} \label{eq:DefQBER}$$ (for more details see \[sec:AppendixQBER\]). The optical share $QBER_{opt}$ is determined by the interference visibility, entailed by the quality of the optical components and their alignment. Its typical value was 0.3 % (0.6 %) using BB84 (SARG).
$QBER_{det}$ depends on the characteristics of Bob’s single photon detectors and includes errors due to detector dark counts of around $5\cdot10^{-6}$ per ns as well as afterpulses. $QBER_{wdm}$ summarizes all additional errors from noise due to wavelength-division multiplexing with classical channels, i.e. channel crosstalk and Raman scatter (see Fig. \[fig:AllNoise\]). The secret key rate, i.e. the net rate of secret key bits provided to the encryptors to cipher data communication between Alice and Bob, is given by [@RibordyPlugAndPlay2000] $$R_{sec} = R_{sift}\left(1-r_{ec}\right)\left(1-r_{pa}\right). \label{eq:DefSecretRate}$$ Here, $R_{sift}$ is the detection rate after sifting (Eq. \[eq:siftRate\]), and $r_{ec}$ and $r_{pa}$ are the fractions of bits used for error correction and privacy amplification. Both $r_{ec}$ and $r_{pa}$ increase non-linearly with the $QBER$. Our performance results in terms of $QBER$ (estimated by the CASCADE error correction protocol) and net rate of secret keys are plotted in Fig. \[fig:OverallResults\] and listed in Tab. \[tab:ResultTable\]. The solid lines indicate our calculations, which make use of the formulas given in \[sec:AppendixQBER\]. We note that we do not account for the time needed for key distillation and fibre length measurements, which becomes more significant at higher key rates. Hence, in general we overestimate the secret key rate, especially for short fibre lengths. The dashed lines in Fig. \[fig:OverallResults\] indicate the maximum $QBER$ of 9 % below which the system can distil a secret key, and the minimum secret key rate of 8.6 bps required for AES encryption with 256-bit keys that are updated once a minute, respectively. Without the optional spectral filters (F), we obtain a secret key rate which remains well above 1000 bps up to a fibre length of 10 km using BB84 key sifting.
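The non-linear growth of $r_{ec}$ with the $QBER$ can be made concrete with a small sketch. It uses the CASCADE overhead factor $\eta_{ec}=6/5$ quoted in the appendix; modelling $r_{ec}=\eta_{ec}\,H(QBER)$, with $H$ the binary entropy, is our own simplification, and $r_{pa}$ is left as a free input:

```python
from math import log2

def H(p):
    """Binary entropy function."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def r_ec(qber, eta_ec=6 / 5):
    """Fraction of sifted bits consumed by error correction
    (eta_ec = 6/5 models the ~20% CASCADE overhead)."""
    return eta_ec * H(qber)

def secret_rate(r_sift, qber, r_pa):
    """R_sec = R_sift (1 - r_ec)(1 - r_pa)  (Eq. DefSecretRate)."""
    return r_sift * (1.0 - r_ec(qber)) * (1.0 - r_pa)

for q in (0.01, 0.05, 0.09):
    print(f"QBER {q:.0%}: r_ec = {r_ec(q):.2f}")
```

Between 1 % and 9 % $QBER$ this estimate of the error-correction overhead grows from about 10 % to over 50 % of the sifted bits, which illustrates why the key rate collapses near the 9 % limit.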
Inserting the optional spectral filters in front of the APDs not only increases the secret key rate from 4.3 bps to 25 bps for a fibre length of 35 km, but also extends the maximal distance to 41 km, at which we obtain 7.5 bps. We achieve a further increase in the secret key rate and maximum distance if we use the SARG key sifting protocol instead. Here, the average secret key rate is 128 bps for 35 km and 11 bps for 50 km fibre length. We emphasize that the SARG protocol equally guarantees the security of the key material. While for BB84 the optimum mean photon number $\mu$ of the quantum pulses depends on the fibre transmission $t$ according to $\mu_{BB84}=t$, the SARG protocol allows one to benefit from a higher mean photon number $\mu_{SARG}=2\cdot\sqrt{t}$ [@BranciardSARG]. Concerning the stability of the setup, we verified constant detection and secret key rates over a period as long as five days in the configuration without the additional filters (F). With the filters added, we still observe a constant detection rate in one detector, which confirms that a sufficient stabilisation of the filter transmission spectra can be achieved using standard temperature control. However, after a few hours the detection rate in the second detector tends to decrease due to a drift of the transmission spectra of the corresponding filter, most likely caused by a filter fabrication flaw. Discussion and outlook {#sec:Discussion} ====================== In Fig. \[fig:OverallResults\] (top) we compare the $QBER$ values obtained experimentally with theoretical calculations which take all discussed noise sources into account. The calculation reproduces the measurement results very well, giving us confidence that we have successfully identified the dominant impairment sources present in our implementation. Based on this, we discuss some alternative configurations in the following paragraphs.
Firstly, we address the question of whether it might be advantageous to place the quantum channel in the O-band around 1310 nm while keeping the classical communication channels in the C-band around 1550 nm (for an O-band implementation see [@Nist]). The maximal reach of the 1550 nm solution is ultimately limited by the Raman noise (see Fig. \[fig:AllNoise\]). Calculating the mean phonon occupation numbers, we find that the Raman noise at 1310 nm is about 4000 times weaker than at 1550 nm. We calculate two scenarios: firstly, we take the dark count probability of the detectors used in our experiment (solid lines, $p_{dc}=5\cdot 10^{-6}$ ns$^{-1}$, $\eta = 0.07$) and secondly, we assume a very small detector dark count probability (dashed lines, $p_{dc}=5\cdot10^{-8}$ ns$^{-1}$, $\eta = 0.07$). In addition, we assume a better channel isolation of 100 dB in the 1310 nm case, compared to 82 dB in the 1550 nm case (as in our experiment). The results are shown in Fig. \[fig:1550vs1310\]. For all curves we neglected the influence of detector dead time, the system specific duty cycle and the reduced efficiency of the error correction protocol (see \[sec:AppendixQBER\]). As expected, we find that the lower dark count rate dramatically improves the 1310 nm curve, whereas it has a rather minor impact on the 1550 nm one. However, we see that if high key rates are desired, the 1310 nm solution cannot keep up with the 1550 nm one due to the higher fibre attenuation. Only in the extreme case where lower key rates are acceptable can the 1310 nm solution reach a larger distance, provided detectors with very low dark count probability are used. ![\[fig:1550vs1310\]Comparison between 1550 nm and 1310 nm quantum channel wavelength (SARG, with filters). $p_{dc}$ denotes the detector dark count probability. The assumed fibre attenuation is $\alpha_{1550}=0.21$ dB/km and $\alpha_{1310}=0.35$ dB/km.
The calculation for a dark fibre configuration (without DWDMs and filters) is also shown for comparison.](./Figures/figure5){width="\linewidth"} Secondly, we want to estimate the implications of higher transmission rates in the encrypted channels. As described before, we minimize the total power present in the fibre by adapting the laser power of the SFP modules to their receiver sensitivity of -28 dBm. Modules designated for higher transmission rates currently have lower sensitivity. For example, the 10 Gbps transceiver module Finisar FTRX-1811-3 is specified with a receiver sensitivity of -23 dBm. Using two of these modules for the encrypted link instead of the 1 Gbps modules we used would consequently increase the total classical power by 3.2 dB, and hence the detected noise. Taking this into account but keeping all other parameters unchanged, we estimate no significant degradation of the secret key rate for distances up to 40 km. However, the maximum distance at which a key rate of 8.6 bps can be achieved is reduced by 4-5 km, depending on the sifting protocol. Next, we take a look at possible measures that could improve the performance of the current setup. One possibility is the reduction of the total classical channel power. This could be achieved by amplification of the classical signals in front of the receivers, or by prospective SFP modules with better receiver sensitivity. While a solution with amplifiers is cost-intensive, an improvement of the receiver sensitivity by more than 3 dB is unlikely in the near future. One could also assume that narrower spectral or temporal filtering of the quantum channel could further reduce the impact of Raman noise. However, we observe that there is little room for improvement here. On the one hand, the transmission width of 45 pm (corresponding to 5.6 GHz) of our additional filters is already at the limit for the spectral width of our sub-nanosecond quantum signals.
On the other hand, we cannot further reduce the temporal gate width without clipping the pulses and, hence, introducing additional losses. Since the pulse duration of the quantum signals is related to the inverse of their spectral bandwidth, any narrower temporal filtering would entail broader spectral filtering and vice versa. Finally, we would like to give an outlook on prospective DWDM implementations with next-generation QKD systems based on the Differential-phase-shift protocol [@DPSYamamoto] or the Coherent-one-way protocol (COW, [@COWStucki]). These systems largely benefit from high-speed electronics and a better key generation efficiency due to their improved tolerance to photon number splitting attacks. As an illustration we take a look at the COW prototype presented in [@COW250km], which uses a QKD encoding frequency of 312.5 MHz and a mean photon number of $\mu_{COW}=0.5$ photons per pulse. Assuming the same parameters as used for the calculations with the additional filters in Fig. \[fig:OverallResults\], we find an increase in the maximum link distance to 70 km and a secret key rate of $>$ 10,000 bps for fibre lengths up to 43 km. Conclusions {#sec:Conclusions} =========== We demonstrate that a QKD based encryption system can be efficiently operated on a single dedicated fibre of up to 50 km length. All four classical channels necessary to establish the encrypted link are multiplexed along with the quantum signal in a 100 GHz DWDM configuration, rendering the system independent of any additional network connection. We also show that a combined O-band/C-band solution (quantum channel at 1310 nm) cannot improve the performance. We find that, compared to the conventional dark fibre configuration requiring two independent fibres, comparable secret key rates can be obtained; e.g., up to 25 km the decrease of the secret key rate is less than 50 %.
We conclude that with only moderate additional effort a commercial QKD system can be upgraded to network topologies where only a single dedicated fibre is available. Acknowledgements ================ We gratefully acknowledge the helpful support of Patrick Trinkler from idQuantique. This project was financially supported by the Swiss NCCR “Quantum Photonics” and the ERC-AG QORE. Appendix ======== Derivation of Raman scatter power formulas {#App1} ------------------------------------------ The Raman scatter power $dP_{ram}$ at wavelength $\lambda$ from a fibre element of length $dx$ at position $x$, when a power $P_{in}$ is launched into the fibre, is $$dP_{ram}(\lambda,x)= P_{in}\cdot e^{-\alpha\cdot x}\cdot\rho(\lambda)\cdot\Delta\lambda\cdot dx$$ where $\rho(\lambda)$ already accounts for the fibre capture ratio and we used the same approximation for the spectral integral (see Eq. \[Ispec\]). The scatter from a single fibre element is almost isotropic. We now have to account for the fibre attenuation (fibre length $L$) as the scatter propagates to the fibre output (forward scatter) or back to the fibre input (backward scatter):\ a) forward : $$dP_{ram,f} = dP_{ram}(\lambda,x)\cdot e^{-\alpha\cdot(L-x)}$$ Integrating over the whole fibre yields $$\Rightarrow P_{ram,f}=P_{in}\cdot e^{-\alpha\cdot L}\cdot\rho(\lambda)\cdot\Delta\lambda\cdot L$$ b) backward : $$dP_{ram,b} = dP_{ram}(\lambda,x)\cdot e^{-\alpha\cdot x}$$ Integrating over the whole fibre yields $$\Rightarrow P_{ram,b}= P_{in}\cdot e^{-\alpha\cdot L}\cdot\frac{\sinh(\alpha\cdot L)}{\alpha}\cdot\rho(\lambda) \cdot\Delta\lambda$$ In order to obtain the detection probabilities per gate ($p_{ram,f}$ and $p_{ram,b}$, respectively), used for the $QBER$ calculation (see Appendix B), we calculate $$p_{ram,f}=\frac{P_{ram,f}}{h\nu}\cdot\eta\cdot\Delta t_{gate} \label{ramprobfw}$$ where $\Delta t_{gate}$ is the gate duration and $\eta$ the detector efficiency. Replacing $P_{ram,f}$ by $P_{ram,b}$ yields $p_{ram,b}$ in the same manner.
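The formulas above translate directly into code. The sketch below is illustrative only: the attenuation of 0.21 dB/km, the values passed for $\rho(\lambda)$ and $\Delta\lambda$, and the wavelength and gate duration are placeholders to be replaced by measured quantities.

```python
from math import exp, log, sinh

ALPHA = 0.21 * log(10) / 10.0   # 0.21 dB/km converted to linear units [1/km]

def raman_forward(p_in, L, rho, d_lambda):
    """P_ram,f = P_in exp(-alpha L) rho dlambda L (forward scatter)."""
    return p_in * exp(-ALPHA * L) * rho * d_lambda * L

def raman_backward(p_in, L, rho, d_lambda):
    """P_ram,b = P_in exp(-alpha L) sinh(alpha L)/alpha rho dlambda."""
    return p_in * exp(-ALPHA * L) * sinh(ALPHA * L) / ALPHA * rho * d_lambda

def detection_prob(p_ram, wavelength_nm=1550.0, eta=0.07, gate_ns=0.35):
    """p_ram = P_ram/(h nu) * eta * dt_gate  (Eq. ramprobfw);
    wavelength and gate duration are assumed values."""
    h, c = 6.62607e-34, 2.9979e8
    nu = c / (wavelength_nm * 1e-9)
    return p_ram / (h * nu) * eta * gate_ns * 1e-9
```

For short fibres the two contributions coincide, while for long fibres the backward scatter dominates, since $\sinh(\alpha L)/(\alpha L) > 1$.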
Explicit $QBER$ and key rate formulas {#sec:AppendixQBER} ------------------------------------- The general definition of the $QBER$ is the ratio between the number of false detections and the total number of detections (right+false): $$QBER=\frac{false}{right+false}$$ Introducing a sifting-protocol-specific parameter $\beta$, which is $\beta_{BB84}=1$ for BB84 and $\beta_{SARG}=\frac{2-V}{2}$ for SARG, we obtain $$QBER = \frac{1}{2}\cdot\frac{p_\mu(1-V)+2\cdot p_{dc}+p_{AP}+p_{ram}+p_{ct}}{\beta\cdot p_\mu+2\cdot p_{dc}+p_{AP}+p_{ram}+p_{ct}}$$ where every quantity signifies a detection probability per detector gate. In particular: $p_\mu$ = signal detection, $p_{dc}$ = dark count, $p_{AP}$ = after pulse, $p_{ram}= p_{ram,f}+p_{ram,b}$ = Raman photon detection (see Eq. \[ramprobfw\]), $p_{ct}$ = cross talk photon detection, and $V$ is the interference visibility. The signal detection probability $p_\mu$ is the product of the average number of photons per pulse $\mu$, the fibre transmission $t$, the detector efficiency $\eta$, and the transmission $t_{B}$ of Bob’s internal components. The optimal $\mu$ also depends on the sifting protocol: $\mu_{BB84}=t$ and $\mu_{SARG}=2\cdot\sqrt{t}$. We estimate the secret key rate after error correction and privacy amplification by $$R_{sec} = R_{sift}\left(I_{AB}-I_{AE}\right). \label{eq:DefSecretRateApp}$$ Here, $R_{sift}$ is the sifted bit rate, $$\label{eq:siftRate} R_{sift}=\frac{\frac{1}{2}\left(\beta\cdot p_{\mu} +2p_{dc}+p_{AP}+p_{ram}+p_{ct}\right)\cdot f_{rep}\cdot\eta_{duty}}{1+\tau_{dead}\left(p_{\mu}+2p_{dc}+p_{AP}+p_{ram}+p_{ct}\right)\cdot f_{rep}},$$ where $f_{rep}$ is the pulse repetition frequency and $\eta_{duty}=\frac{L_{S}}{L+2L_{S}}$ accounts for the duty cycle of our system, with $L$ the fibre length and $L_{S}$ the length of Alice’s storage line. We note that Eq. \[eq:siftRate\] does not account for double detections and Poissonian photon number statistics.
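These expressions can be implemented in a few lines; a sketch (the parameter values in any call are placeholders, not our measured ones):

```python
from math import sqrt

def beta(protocol, V):
    """Sifting-protocol parameter: 1 for BB84, (2-V)/2 for SARG."""
    return 1.0 if protocol == "BB84" else (2.0 - V) / 2.0

def qber(p_mu, V, p_dc, p_ap=0.0, p_ram=0.0, p_ct=0.0, protocol="BB84"):
    """QBER from per-gate detection probabilities (appendix formula)."""
    noise = 2.0 * p_dc + p_ap + p_ram + p_ct
    return 0.5 * (p_mu * (1.0 - V) + noise) / (beta(protocol, V) * p_mu + noise)

def sift_rate(p_mu, V, p_dc, f_rep, eta_duty, tau_dead,
              p_ap=0.0, p_ram=0.0, p_ct=0.0, protocol="BB84"):
    """Sifted bit rate R_sift (Eq. siftRate), including detector dead time."""
    noise = 2.0 * p_dc + p_ap + p_ram + p_ct
    num = 0.5 * (beta(protocol, V) * p_mu + noise) * f_rep * eta_duty
    return num / (1.0 + tau_dead * (p_mu + noise) * f_rep)

def mu_optimal(t, protocol="BB84"):
    """Optimal mean photon number: mu = t (BB84) or 2 sqrt(t) (SARG)."""
    return t if protocol == "BB84" else 2.0 * sqrt(t)
```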
During error correction and privacy amplification, $R_{sift}$ is reduced by a factor $\left(I_{AB}-I_{AE}\right)$. $I_{AB}$ and $I_{AE}$ are the mutual information per bit between Alice and Bob, and between Alice and a potential eavesdropper, respectively. Due to quantum bit errors, $I_{AB}$ is smaller than 1 and amounts to $$\begin{aligned} \label{eq:IAB} I_{AB}=1-\eta_{ec}\cdot H\left(QBER\right),\end{aligned}$$ with $H\left(p\right)=-p\log_{2}p-\left(1-p\right)\log_{2}\left(1-p\right)$ the binary entropy function. In the ideal case, the amount of bits discarded during error correction is given by the Shannon limit, i.e. $\eta_{ec}=1$. In practice, however, we observe that the implemented algorithm for CASCADE error correction consumes about 20 % more bits than given by the Shannon limit. Hence, we correct Eq. \[eq:IAB\] by choosing $\eta_{ec}^{Cascade}=\frac{6}{5}$. To calculate the information per bit $I_{AE}$ between Alice and an eavesdropper, we assume that an eavesdropper has full control over the quantum channel (i.e. the visibility and fibre transmission). In contrast, he cannot modify the characteristics of Bob’s detectors. Additionally, we suppose that he performs an optimal coherent attack [@Fuchs1997] on pulses containing one photon, and a PNS attack [@PNS2000] if more than one photon is present in a pulse (without affecting the total detection rate at Bob). For BB84 with weak laser pulses one then obtains [@Niederberger2005] $$\begin{aligned} I_{AE,BB84}=\frac{\left(1-\frac{\mu}{2t}\right)\left(1-H\left(P\right)\right)+\frac{\mu}{2t}}{1+\frac{2p_{dc}}{\mu t \eta}}\end{aligned}$$ with $P=\frac{1}{2}+\sqrt{D\left(1-D\right)}$, $D=\frac{1-V}{2-\mu / t}$.
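As a sketch, the mutual-information bounds above can be evaluated numerically (again with placeholder parameters):

```python
from math import log2, sqrt

def H(p):
    """Binary entropy."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def i_ab(qber, eta_ec=6 / 5):
    """I_AB = 1 - eta_ec H(QBER), Eq. (IAB), with the CASCADE correction."""
    return 1.0 - eta_ec * H(qber)

def i_ae_bb84(mu, t, V, p_dc, eta):
    """Eavesdropper information per bit for BB84 with weak laser pulses."""
    D = (1.0 - V) / (2.0 - mu / t)
    P = 0.5 + sqrt(D * (1.0 - D))
    num = (1.0 - mu / (2.0 * t)) * (1.0 - H(P)) + mu / (2.0 * t)
    return num / (1.0 + 2.0 * p_dc / (mu * t * eta))
```

With the optimal $\mu_{BB84}=t$, perfect visibility and no dark counts, the PNS term alone gives $I_{AE}=1/2$; a non-zero $QBER$ then drives $I_{AB}-I_{AE}$ towards zero at a $QBER$ of the same order as the 9 % limit quoted in Sec. \[sec:Results\].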
Based on the same assumptions, we use the results in [@BranciardSARG] to estimate for the SARG protocol $$\begin{aligned} I_{AE,SARG}=I_{pns}\left(1\right)+\frac{1}{12}\frac{\mu^{2}}{t}e^{-\mu}\left(1-I_{pns}\left(1\right)\right),\end{aligned}$$ where $I_{pns}(k)=1-H(\frac{1}{2}+\frac{1}{2}\sqrt{1-\frac{1}{2^{k}}})$ is the potential information gain of an eavesdropper due to PNS attacks on multi-photon pulses when $k$ photons are split and stored. [^1]: Assuming the initial pump frequency to lie somewhere in the C-band.
--- author: - François Arleo - Charles-Joseph Naïm[^1] - Stephane Platchkov title: 'Initial-state energy loss in cold QCD matter and the Drell-Yan process' --- Introduction {#sec:intro} ============ The celebrated jet quenching phenomenon, observed in heavy ion collisions at RHIC and LHC, indicates that quarks and gluons experience radiative energy loss while propagating in a hot, deconfined QCD medium (see Refs. [@Majumder:2010qh; @Mehtar-Tani:2013pia; @Armesto:2015ioy; @Qin:2015srf] for recent reviews). The wealth of data collected so far has triggered detailed phenomenological studies serving a dual purpose, namely (i) to probe the radiative energy loss of partons (and multipartonic states) in a medium, and (ii) to extract the scattering properties of the expanding medium produced in these collisions, eventually leading to a better understanding of hot QCD matter. Another way to study parton energy loss is to consider the production of hard QCD processes in hadron-nucleus (hA) collisions; see Ref. [@Arleo:2016lvg] for a recent discussion. In this case the medium, cold nuclear matter, is simpler: it is static with known size and nuclear density. It is no less interesting, though, as it reveals important features of medium-induced gluon radiation expected in QCD. More explicitly, hard processes in hA collisions are sensitive to different timescales of gluon radiation: the Landau-Pomeranchuk-Migdal (LPM) regime, corresponding to gluon formation times of the order of the medium length (${\ensuremath{t_{\mathrm{f}}}\xspace}\lesssim L$), and the factorization regime (also known as [*fully coherent*]{}) for which ${\ensuremath{t_{\mathrm{f}}}\xspace}\gg L$ [@Peigne:2008wu].
The latter, fully coherent radiative energy loss, arises from the interference of gluon emission amplitudes off an ‘asymptotic’ incoming particle, produced long before entering the medium, and an asymptotic outgoing particle [@Arleo:2010rb; @Arleo:2012hn; @Arleo:2012rs; @Peigne:2014uha; @Peigne:2014rka]. It thus differs from the initial-state (final-state) energy loss of a given incoming (outgoing) particle. In this regime, the average energy loss associated to the production of a massive particle (mass $M$) scales *linearly* with the particle energy $E$ in the rest frame of the medium, [@Peigne:2014uha] $$\label{eq:coherent} {\langle{\epsilon}\rangle}_{\rm coh} \propto (C_R + C_{R^\prime} - C_t) \cdot \frac{\sqrt{{\hat{q}}L}}{M}\cdot E\,,$$ where $C_R$, $C_{R^\prime}$ and $C_t$ are respectively the color charge (Casimir) of the incoming, outgoing and exchanged[^2] particle ($C_R=4/3$ for a quark and $C_R=3$ for a gluon), and ${\hat{q}}$ is the transport coefficient of cold nuclear matter. The fully coherent energy loss could play a key role in the suppression of hard processes in hA collisions. It was shown in particular that this sole process is able to reproduce ${{\ensuremath{{J}\hspace{-.08em}/\hspace{-.14em}\psi}\xspace}}$ suppression data in hA collisions [@Arleo:2012rs; @Arleo:2013zua], from SPS to LHC energy and on a wide range in Feynman-$x$ ([$x_{\mathrm{F}}$]{}), in contradistinction to other effects such as nuclear modifications of parton distribution functions. On the contrary, initial-state (or, final-state) energy loss is only sensitive to the LPM regime [@Peigne:2014uha], for which the average energy loss is independent of the parton energy (up to a logarithmic dependence), [@Peigne:2008wu] $$\label{eq:LPM} {\langle{\epsilon}\rangle}_{\rm LPM} \propto C_R \cdot {\hat{q}}L^2$$ where now $C_R$ is the Casimir of the propagating particle. 
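The different energy dependence of the two regimes can be illustrated with a back-of-the-envelope sketch, dropping all colour factors and numerical prefactors; the values ${\hat{q}}=0.08$ GeV$^2$/fm and $L=5$ fm are assumptions for a large nucleus:

```python
from math import sqrt

HBARC = 0.1973          # GeV.fm, to convert fm into GeV^-1
Q_HAT, L = 0.08, 5.0    # transport coefficient [GeV^2/fm] and length [fm]

def eps_coherent(E, M):
    """Fully coherent regime: <eps> ~ sqrt(q_hat L)/M * E.
    (q_hat*L is directly in GeV^2, so its square root is in GeV.)"""
    return sqrt(Q_HAT * L) / M * E

def eps_lpm():
    """LPM regime: <eps> ~ q_hat L^2, independent of E (result in GeV)."""
    return Q_HAT * L**2 / HBARC

for E in (10.0, 100.0, 1000.0):    # energies in the nucleus rest frame [GeV]
    print(f"E = {E:6.0f} GeV: coh ~ {eps_coherent(E, M=3.1):7.1f} GeV, "
          f"LPM ~ {eps_lpm():.1f} GeV")
```

The linear growth of the coherent term means it dominates at high energy, while the LPM loss saturates.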
The fact that the LPM fractional energy loss, ${\langle{\epsilon}\rangle}_{\rm LPM}/E$, vanishes in the high energy limit has important consequences for the phenomenology. In particular, the effects of initial-state (or, final-state) energy loss in nuclear matter should be negligible in hA collisions at high energy, ${\langle{\epsilon}\rangle}_{\rm LPM} \ll E$, as the particle energy *in the nucleus rest frame* gets large, $E \propto {\ensuremath{\sqrt{s}}\xspace}$. Unless the particle energy becomes very small, $E \lesssim M \sqrt{{\hat{q}}L^3}$, the fully coherent energy loss exceeds that in the LPM regime, ${\langle{\epsilon}\rangle}_{\rm coh} \gg {\langle{\epsilon}\rangle}_{\rm LPM}$. However, not all processes are sensitive to fully coherent energy loss. One such case is particle production at large angle[^3] (with respect to the beam axis) in the medium rest frame: this typically corresponds to the case of jet quenching in heavy ion collisions. Consequently, an energetic parton propagating in a hot medium is expected to experience final-state energy loss according to the LPM regime, [Eq. ]{}. On the contrary, a particle produced in hA collisions is almost always produced with a large rapidity in the nucleus rest frame, thus at a ‘small’ angle, making it potentially sensitive to fully coherent energy loss [@Arleo:2010rb]. 
Another process should be insensitive to fully coherent energy loss, namely the single inclusive production of Drell-Yan (DY) lepton pairs, since at leading order the final state is color neutral and therefore does not radiate gluons.[^4] Hence, the production of DY pairs in hA collisions appears as a promising candidate in order to probe LPM initial-state energy loss in cold nuclear matter.[^5] At next-to-leading order (NLO), the production of a virtual photon in association with a hard parton in the final state would make DY production potentially sensitive to fully coherent radiation; however, the dominant NLO subprocess at large [$x_{\mathrm{F}}$]{}is Compton scattering, $qg \to q \gamma^\star$, for which the fully coherent radiation is small $(\propto 1/N_c)$ and negative.[^6] Final-state energy loss in nuclei could also be probed from the measurements of hadron production in semi-inclusive deep inelastic scattering (SIDIS) events [@Wang:2002ri; @Arleo:2003jz]. These processes are summarized in Table \[tab:energylosses\].

  Energy loss      Process                                                    Regime                                                  ${\langle{\epsilon}\rangle}$
  ---------------- ---------------------------------------------------------- ------------------------------------------------------- -------------------------------------
  Initial-state    $\text{h}{{\rm A}}\to \ell^+\ell^-+\X$ (LO Drell-Yan)      ${\ensuremath{t_{\mathrm{f}}}\xspace}\lesssim L$ (LPM)   $\propto{\hat{q}}L^2$
  Final-state      $e{{\rm A}}\to e + h + \X$ (SIDIS)                         ${\ensuremath{t_{\mathrm{f}}}\xspace}\lesssim L$ (LPM)   $\propto{\hat{q}}L^2$
  Fully coherent   $\text{h}{{\rm A}}\to [Q\bar{Q}(g)]_8 + \X$ (quarkonium)   ${\ensuremath{t_{\mathrm{f}}}\xspace}\gg L$ (fact.)      $\propto\sqrt{{\hat{q}}L}/M\cdot E$

  : Probing energy loss in cold nuclear matter in different processes.[]{data-label="tab:energylosses"}

Until now, the interpretation of DY data in hA collisions has been delicate and ambiguous.
The E772 and later the E866 experiment at FNAL performed high-statistics measurements of DY pairs in [$\text{pA}$]{}collisions at ${\ensuremath{\sqrt{s}}\xspace}=38.7$ GeV, on a wide range in [$x_{\mathrm{F}}$]{}, $0.1\lesssim {\ensuremath{x_{\mathrm{F}}}\xspace}\lesssim 0.9$  [@Alde:1990im; @Vasilev:1999fa]. The depletion observed in heavier nuclei (Fe, W) at large $x_F$ could either be attributed to nuclear parton distribution (nPDF) effects, namely sea quark shadowing at $x_2\gtrsim 10^{-2}$ [@deFlorian:2011fp; @Kovarik:2015cma; @Eskola:2016oht] or to strong energy loss effects in cold nuclear matter [@Johnson:2001xfa; @Neufeld:2010dz; @Song:2017wuh], therefore preventing a clear interpretation of these data. Older and less precise NA3 data in [$\pi\text{A}$]{}collisions at ${\ensuremath{\sqrt{s}}\xspace}=16.8$ GeV [@Badier:1981ci] proved less sensitive to nPDF effects and allowed for setting upper limits on parton energy loss in nuclei; however these measurements were also compatible with vanishing parton energy loss [@Arleo:2002ph]. Therefore, because of both the poorly known sea quark shadowing and the large experimental uncertainties in earlier data, no clear evidence for parton energy loss in the Drell-Yan process has yet been found. Two fixed-target experiments now make it possible to better understand the origin of the DY nuclear dependence. The E906 experiment [@E906] recently performed preliminary measurements of DY production in [$\text{pA}$]{}collisions at ${\ensuremath{\sqrt{s}}\xspace}=15$ GeV on a wide range of [$x_{\mathrm{F}}$]{} [@Lin:2017eoc]. In addition, the COMPASS experiment at the SPS collected data in [$\pi\text{A}$]{}collisions at ${\ensuremath{\sqrt{s}}\xspace}=18.9$ GeV that could be used to determine DY nuclear production ratios also on a large [$x_{\mathrm{F}}$]{}interval [@Aghasyan:2017jop]. 
The goal of this article is to revisit the effects of LPM initial-state energy loss in the BDMPS formalism on DY production in hA collisions at fixed target energies (${\ensuremath{\sqrt{s}}\xspace}< 40$ GeV), with a systematic comparison between model calculations and experimental results. The theoretical framework is presented in Section \[sec:model\] and results are shown in Section \[sec:phenomenology\]. The violation of QCD factorization in Drell-Yan production in [$\text{pA}$]{}collisions is discussed in Section \[sec:x2scaling\]. Conclusions are drawn in Section \[sec:summary\]. Drell-Yan production in hA collisions {#sec:model} ===================================== NLO production cross section ---------------------------- We consider the inclusive production of Drell-Yan lepton pairs of large invariant mass, $M \gg {\Lambda_{_{\rm QCD}}}$, in hadronic collisions. The analysis is carried out at next-to-leading order accuracy in the strong coupling constant, [[i.e.]{}]{}at order $\cO{\alpha^2\,{\alpha_s}}$, using the DYNNLO Monte Carlo program [@Catani:2007vq; @Catani:2009sm]. The [$x_{\mathrm{F}}$]{}-differential production cross section in a generic ${\ensuremath{\text{h}_1\text{h}_2}\xspace}$ collision reads $$\label{eq:DYxs} \frac{{{\rm d}}\sigma({\ensuremath{\text{h}_1\text{h}_2}\xspace})}{{{\rm d}}{\ensuremath{x_{\mathrm{F}}}\xspace}\,{{\rm d}}M} = \sum\limits_{i,j=q,\bar{q},g} \int_{0}^{1} \, {{\rm d}}x_1 \int_{0}^{1}\, {{\rm d}}x_2\, f_{i}^{{\ensuremath{\text{h}_1}\xspace}}(x_{1}, \mu_R^2) f_{j}^{{\ensuremath{\text{h}_2}\xspace}}(x_{2}, \mu_R^2)\, \frac{{{\rm d}}\widehat{\sigma}_{ij}}{{{\rm d}}{\ensuremath{x_{\mathrm{F}}}\xspace}\,{{\rm d}}M}(x_1 x_2 s, \mu^2, \mu_R^2)\,.$$ At NLO, the partonic cross section $\hat{\sigma}_{ij}$ includes Compton scattering and annihilation processes, $qg \to q \gamma^{\star}$ and $q\bar{q} \to g \gamma^{\star}$, in addition to virtual corrections to the Born diagram, $q\bar{q}\to \gamma^\star$. In [Eq. 
]{}, both the renormalization and factorization scales are set equal to the DY invariant mass,[^7] $\mu_{R}^{2} = \mu^{2} = M^2$. The single differential cross section ${{\rm d}}\sigma/{{\rm d}}{\ensuremath{x_{\mathrm{F}}}\xspace}$ is obtained by integrating over the dilepton mass range, here between the charmonium and bottomonium resonances. In this analysis we are interested in the production of Drell-Yan pairs using either a proton or a pion beam on nuclear targets, [$\text{pA}$]{}and [$\pi\text{A}$]{}collisions. In the absence of genuine nuclear effects, $f_j^{{\ensuremath{\text{h}_2}\xspace}}$ appearing in [Eq. ]{} should thus be replaced by the corresponding average over proton and neutron partonic densities, $$\label{eq:PDFnucleus} f_j^{{{\rm A}}}(x_{2}) = Z\,f_{j}^{p}(x_{2}) + (A-Z)\,f_{j}^{n}(x_{2})$$ where $Z$ and $A$ are respectively the atomic and the mass number of the nucleus A. Several proton NLO PDF sets have been used in this analysis (MMHT2014 [@Harland-Lang:2014zoa], nCTEQ [@Kovarik:2015cma], and CT14 [@Dulat:2015mca]) in order to evaluate part of systematic uncertainties of our calculations. The GRV NLO set [@Gluck:1991ey] has been used for the PDF in a pion (we checked that the SMRS [@Sutton:1991ay] and BSMJ [@Barry:2018ort] sets give almost identical results). The neutron parton distributions are deduced from those in a proton using isospin symmetry, $f_d^n = f_u^p$, $f_u^n = f_d^p$, $f_{\bar{d}}^n = f_{\bar{u}}^p$, $f_{\bar{u}}^n = f_{\bar{d}}^p$, and $f_{i}^{n} = f_{i}^{p}$ otherwise. 
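The isospin construction of the nuclear parton densities described above can be sketched with toy distributions (the $(1-x)^n$ shapes below are placeholders, not a fitted parametrization):

```python
# Minimal sketch of Eq. (PDFnucleus) with isospin symmetry: the neutron PDF
# is obtained from the proton one by u <-> d (and ubar <-> dbar). The toy
# proton PDFs are illustrative shapes only.

def toy_proton_pdf(flavour, x):
    shapes = {"u": 2 * (1 - x) ** 3, "d": (1 - x) ** 4,
              "ubar": 0.1 * (1 - x) ** 7, "dbar": 0.1 * (1 - x) ** 7}
    return shapes[flavour]

ISOSPIN = {"u": "d", "d": "u", "ubar": "dbar", "dbar": "ubar"}

def neutron_pdf(flavour, x):
    """Isospin symmetry: f_d^n = f_u^p, f_u^n = f_d^p, etc."""
    return toy_proton_pdf(ISOSPIN[flavour], x)

def nuclear_pdf(flavour, x, Z, A):
    """f_j^A(x) = Z f_j^p(x) + (A - Z) f_j^n(x)  (no genuine nuclear effects)."""
    return Z * toy_proton_pdf(flavour, x) + (A - Z) * neutron_pdf(flavour, x)
```

For an isoscalar target ($A=2Z$) the up and down combinations coincide, while a neutron-rich nucleus such as tungsten is down-quark enriched.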
We are interested here in the nuclear dependence of the Drell-Yan process, via the production ratio, $$\label{eq:DY_ratio} R^{\rm DY}_{\text{h}}({{\rm A}}/{\ensuremath{{\text{B}}}\xspace},{\ensuremath{x_{\mathrm{F}}}\xspace}) = \frac{B}{A}\,\left( \frac{{{\rm d}}\sigma({\ensuremath{\text{hA}}\xspace})}{{{\rm d}}{\ensuremath{x_{\mathrm{F}}}\xspace}} \right) \times \left( \frac{{{\rm d}}\sigma({\ensuremath{\text{hB}}\xspace})}{{{\rm d}}{\ensuremath{x_{\mathrm{F}}}\xspace}} \right)^{-1},$$ in a heavy nucleus (A) over a light nucleus (B). In the absence of nuclear effects, [Eq. ]{} may differ from unity because of the differences between proton and neutron parton densities. In practice, proton-induced Drell-Yan collisions probe the target [*antiquarks*]{} for which $f_{\bar{q}}^n \simeq f_{\bar{q}}^p$, resulting in rather small isospin effects.[^8] Isospin effects are more important in low energy [$\pi\text{A}$]{}collisions, mostly sensitive to the valence up and down quark distribution in the nucleus. Nuclear parton distribution functions {#subsec:NuclearPDF} ------------------------------------- Parton distribution functions in a nucleus differ from those in a free proton over the whole Bjorken-$x$ range, $f_{i}^{p/{{\rm A}}}(x) \ne f_{i}^{p}(x)$, where $f_{i}^{p/{{\rm A}}}$ is defined as the PDF of the parton of flavor $i$ inside a proton bound in a nucleus. The latest global fit extractions of nPDF at NLO have been done by DSSZ [@deFlorian:2011fp], nCTEQ15 [@Kovarik:2015cma], and EPPS16 [@Eskola:2016oht] which included for the first time data from the LHC. In order to take into account nPDF effects on DY production, $f_{j}^p$ ($f_{j}^n$) needs to be replaced by $f_{j}^{p/{{\rm A}}}$ ($f_{j}^{n/{{\rm A}}}$) in [Eq. 
]{}, $$\label{eq:nPDF} f_j^{{{\rm A}}}(x_{2}) = Z\,f_{j}^{p/{{\rm A}}}(x_{2}) + (A-Z)\,f_{j}^{n/{{\rm A}}}(x_{2})\,.$$ Depending on the parametrizations, either the absolute PDF $f_{j}^{p/{{\rm A}}}$ or the nPDF ratios, $R_{j}^{{{\rm A}}} \equiv f_{j}^{p/{{\rm A}}} / f_{j}^p$, are provided. In this article, DY production in [$\text{pA}$]{}and [$\pi\text{A}$]{}collisions is computed using the nPDF ratios provided by the latest nPDF set, EPPS16 [@Eskola:2016oht]. This set actually includes DY data in both [$\text{pA}$]{}and [$\pi\text{A}$]{}collisions, in addition to other measurements. The implicit assumption is that no other physical effect than a universal leading-twist nuclear PDF would play a role in the production of DY pairs in hadron-nucleus collisions. However, the radiative energy loss of partons may affect the nuclear dependence of DY production, thus spoiling a clean extraction of nPDFs from these data. Initial-state energy loss {#subsec:EnergyLoss} ------------------------- The high-energy partons from the hadron projectile experience multiple scattering while propagating through nuclear matter. This rescattering process induces soft gluon emission, carrying away some of the parton energy available for the hard QCD process, here the Drell-Yan mechanism. 
The effects of initial-state energy loss on DY can be modelled as [@Arleo:2002ph] $$\begin{aligned} \label{eq:DYxs_eloss} \frac{{{\rm d}}\sigma(h A)}{{{\rm d}}{\ensuremath{x_{\mathrm{F}}}\xspace}\,{{\rm d}}M} &=& \sum\limits_{i,j=q,\bar{q},g} \int_{0}^{1} \, {{\rm d}}x_1 \int_{0}^{1} \, {{\rm d}}x_2\,\int_{0}^{(1-x_1) {\ensuremath{E_\mathrm{b}}\xspace}} \, {{\rm d}}\epsilon\,{{\cal P}}_i(\epsilon) \, f_{i}^{h}\left(x_{1} + \frac{\epsilon}{{\ensuremath{E_\mathrm{b}}\xspace}}\right) f_{j}^{{{\rm A}}}(x_{2}) \nonumber \\ && \vspace{5cm} \times\, \frac{{{\rm d}}\widehat{\sigma}_{ij}}{{{\rm d}}{\ensuremath{x_{\mathrm{F}}}\xspace}\,{{\rm d}}M}(x_1 x_2 s)\,,\end{aligned}$$ where ${\ensuremath{E_\mathrm{b}}\xspace}$ is the hadron beam energy in the rest frame of the nucleus, and ${{\cal P}}_i$ is the probability distribution in the energy loss of the parton $i$ [@Baier:2001yt]. The latter has been determined numerically from a Poisson approximation [@Arleo:2002kh; @Salgado:2003gb], using the LPM medium-induced gluon spectrum derived by Baier-Dokshitzer-Mueller-Peigné-Schiff (BDMPS) [@Baier:1996sk]. The first moment of this distribution is given by $$\label{eq:mean} \langle \epsilon_i \rangle \equiv \int_{}^{} \,{{\rm d}}\epsilon\,\epsilon\, {{\cal P}}_i(\epsilon) = \frac{1}{4} \alpha_{s} C_{R}\,{\hat{q}}\,L^{2}$$ where ${\alpha_s}=1/2$ is frozen at low scales, ${\hat{q}}L \lesssim 1$ GeV$^2$. The medium length is $L=(3/4)\,R$, where $R=r_0\,A^{1/3}$ is the nuclear radius, assuming a hard-sphere nuclear density profile ($r_0=(4 \pi \rho / 3)^{-1/3}=1.12$ fm, with $\rho$ the nuclear matter density). The transport coefficient has been parametrized as (see appendix of Ref. [@Arleo:2012rs]) $$\hat{q}(x) \equiv {\hat{q}_0}\left(\frac{10^{-2}}{x}\right) ^{0.3} \;;\quad x = \min(x_0,x_2) \;;\quad x_0 \equiv \frac{1}{2m_p L}\,,$$ where the power law behavior reflects the $x$-dependence of the gluon distribution in the nucleus at small values of $x$.
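Putting the pieces together, a sketch of the numerical size of the average energy loss, Eq. \[eq:mean\], with the parameter values quoted in the text ($\alpha_s=1/2$, $C_R=4/3$ for a quark, $r_0=1.12$ fm) and ${\hat{q}_0}=0.08$ GeV$^2$/fm taken as the midpoint of the 0.07–0.09 GeV$^2$/fm range:

```python
HBARC = 0.1973   # GeV.fm, to convert a length in fm into GeV^-1
M_P = 0.938      # proton mass [GeV]

def medium_length_fm(A, r0=1.12):
    """L = (3/4) R with R = r0 A^(1/3) (hard-sphere density profile)."""
    return 0.75 * r0 * A ** (1.0 / 3.0)

def q_hat(x2, L_fm, q_hat0=0.08):
    """q_hat(x) = q_hat0 (1e-2/x)^0.3 with x = min(x0, x2), x0 = 1/(2 m_p L)."""
    x0 = HBARC / (2.0 * M_P * L_fm)     # L converted from fm to GeV^-1
    x = min(x0, x2)
    return q_hat0 * (1e-2 / x) ** 0.3

def mean_energy_loss(A, x2, alpha_s=0.5, C_R=4.0 / 3.0):
    """<eps_i> = (1/4) alpha_s C_R q_hat L^2, converted to GeV."""
    L = medium_length_fm(A)
    return 0.25 * alpha_s * C_R * q_hat(x2, L) * L**2 / HBARC
```

For a tungsten target ($A=184$, $L\simeq 4.8$ fm) and $x_2>x_0$, this gives ${\hat{q}}\simeq 0.8\,{\hat{q}_0}$ and an average loss of order 1 GeV, sizeable compared to fixed-target beam energies at large [$x_{\mathrm{F}}$]{}.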
In this article we shall consider DY production in low-energy hA collisions, probing large values of $x_2$, typically $x_2 > x_0$. Therefore, the transport coefficient is essentially frozen at $x_0$, ${\hat{q}}(x_0) = (0.02\,m_p L)^{0.3}\,{\hat{q}_0}\simeq 0.8\,{\hat{q}_0}$ in a large nucleus ($L=5$ fm). The coefficient ${\hat{q}_0}$ is the only parameter of the model. It has been determined from ${{\ensuremath{{J}\hspace{-.08em}/\hspace{-.14em}\psi}\xspace}}$ data using the energy loss in the fully coherent regime, ${\hat{q}_0}= 0.07$–$0.09$ [GeV$^2$/fm]{} [@Arleo:2012rs]. No attempt has been made here to extract an independent estimate of ${\hat{q}}$ from low-energy DY data, as the measurements from the E906 experiment are still preliminary. Eventually, this would allow one to check the universality of this parameter, for different processes and in different energy loss regimes (LPM for DY, fully coherent for ${{\ensuremath{{J}\hspace{-.08em}/\hspace{-.14em}\psi}\xspace}}$; see Introduction). The BDMPS formalism is particularly suited to describe gluon radiation induced by multiple soft scattering, thus appropriate for thick media for which the typical number of scatterings, $n=L/\lambda$, is large ($\lambda$ is the parton mean free path). This is at variance with the GLV [@Gyulassy:1999zd] and higher-twist [@Wang:2001if] approaches, which instead consider a single hard scattering in a medium. Using ${\hat{q}}=\mu^2/\lambda$, where $\mu\simeq 200$ MeV is the typical momentum transfer in a single soft scattering, the number of scatterings in a large nucleus (taking $L=5$ fm) is $n={\hat{q}}L / \mu^2 =7$–$9$, which is consistent with the initial assumption of BDMPS.[^9] As can be seen from [Eq. ]{}, the nuclear dependence of DY production depends on the shape of the hadron beam PDF: the steeper the $x$-dependence of $f_i^h$, the stronger the DY suppression.
Since at large $x$ one expects $f_q^p(x)\sim(1-x)^3$ and $f_q^\pi(x)\sim(1-x)^2$ from quark counting rules [@Brodsky:1973kr], a stronger DY suppression can be expected in [$\text{pA}$]{}collisions with respect to that in [$\pi\text{A}$]{}collisions [@Arleo:2002ph]. Another consequence follows from [Eq. ]{}. At large [$x_{\mathrm{F}}$]{}, the maximal amount of parton energy loss is restricted to be $\epsilon < (1-\xone)\,{\ensuremath{E_\mathrm{b}}\xspace}\simeq (1-{\ensuremath{x_{\mathrm{F}}}\xspace})\,{\ensuremath{E_\mathrm{b}}\xspace}$, making DY production dramatically suppressed at the edge of phase-space, ${\ensuremath{x_{\mathrm{F}}}\xspace}\lesssim 1$. Phenomenology {#sec:phenomenology} ============= On top of isospin effects (labelled CT14 in the figures), the DY nuclear production ratio is computed assuming either nPDF effects, as estimated using the EPPS16 parton densities in [Eq. ]{} and their associated error sets, or initial-state energy loss effects, [Eq. ]{}. Although each effect is here studied separately, both could in principle be taken into account in order to achieve a complete description of the Drell-Yan process in hA collisions. The calculations are compared to the preliminary results from E906[^10] at $E_{p}=120$ GeV [@Lin:2017eoc], NA10 data at $E_{\pi^-}=140$ GeV [@Bordalo:1987cs] and E866 at $E_{p}=800$ GeV [@Vasilev:1999fa]. Predictions for the COMPASS experiment, which is collecting Drell-Yan data in [$\pi\text{A}$]{}collisions at $E_{\pi^-}$=190 GeV [@Aghasyan:2017jop], are also presented. E906 preliminary data {#sec:e906} --------------------- The preliminary results from the E906 experiment [@E906; @Lin:2017eoc] on the ratios ${\ensuremath{R_{\rm pA}}\xspace}(\text{Fe/C})$ and ${\ensuremath{R_{\rm pA}}\xspace}(\text{W/C})$ are shown as a function of [$x_{\mathrm{F}}$]{}in Fig. \[E906\_data\]. The mass range is $4.5 < M < 5.5$ GeV at ${\ensuremath{\sqrt{s}}\xspace}= 15$ GeV with an additional kinematical cut, $0.1 < x_{2} < 0.3$.
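The E906 kinematic reach can be checked in a few lines, using the forward-region approximation $x_2 \simeq M^2/({\ensuremath{x_{\mathrm{F}}}\xspace}\,s)$ and the fixed-target relation between beam energy and ${\ensuremath{\sqrt{s}}\xspace}$; the ${\ensuremath{x_{\mathrm{F}}}\xspace}$ values below are chosen for illustration.

```python
# E906 kinematics sketch: beam energy from sqrt(s) on a fixed target,
# x2 ~ M^2 / (xF s) in the forward region, and the phase-space bound on
# the radiated energy, eps_max ~ (1 - xF) E_b. Illustrative xF values.

M_P = 0.938                  # GeV, proton mass
sqrt_s = 15.0                # GeV, E906
s = sqrt_s**2
E_b = (s - 2.0 * M_P**2) / (2.0 * M_P)   # ~120 GeV, the E906 beam energy
M = 5.0                      # GeV, middle of the 4.5-5.5 GeV mass bin

for xF in (0.4, 0.6, 0.9):
    x2 = M**2 / (xF * s)             # momentum fraction probed in the nucleus
    eps_max = (1.0 - xF) * E_b       # maximal radiated energy at this xF
    print(xF, round(x2, 3), round(eps_max, 1))
```

The computed $x_2$ values fall in the quoted $0.1 < x_2 < 0.3$ window, and the shrinking bound on $\epsilon$ at large ${\ensuremath{x_{\mathrm{F}}}\xspace}$ illustrates the phase-space suppression discussed above.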
The data indicate a clear DY suppression in both Fe and W nuclear targets, increasingly pronounced at large [$x_{\mathrm{F}}$]{}, in clear contrast with the nPDF calculations shown as a blue band. In particular, nPDF effects make [$R_{\rm pA}$]{}consistent with the CT14 predictions assuming no nuclear modification of parton densities, shown as a dashed line in Fig. \[E906\_data\]. This [$x_{\mathrm{F}}$]{}range indeed corresponds to typical values of $\xtwo\simeq 0.1$–$0.3$, at the boundary between the EMC effect and the antishadowing region, for which $f_i^{p/{{\rm A}}}\simeq f_i^p$. The slight rise of [$R_{\rm pA}$]{}with increasing [$x_{\mathrm{F}}$]{}, seen for both CT14 and EPPS16, originates from the asymmetry in the nucleon sea, $f_{\bar{d}}^p > f_{\bar{u}}^p$.[^11] To our knowledge, this is the first time that DY measurements in hA collisions exhibit such a clear disagreement with the nPDF expectations. The initial-state energy loss effects, assuming the default choice of ${\hat{q}_0}=0.07$–$0.09$ [GeV$^2$/fm]{}, are shown as a red band. The model uncertainty includes the choice of different proton PDF sets, on top of the variation of ${\hat{q}_0}$. The calculations predict a significant DY suppression which becomes more pronounced as [$x_{\mathrm{F}}$]{}gets larger, in qualitative agreement with the E906 results. While the magnitude of [$R_{\rm pA}$]{}is smaller in the measurements than in the model, by roughly 5% and 10% in W and Fe targets respectively, we find it nevertheless remarkable that the *shapes* of [$R_{\rm pA}$]{}predicted by the energy loss model and the E906 results prove similar.[^12] In our opinion, these preliminary data strongly hint at the existence of (LPM) initial-state quark energy loss in the DY process.
![E906 nuclear production ratio measured in pFe (left) and pW (right), normalized to pC, collisions at ${\ensuremath{\sqrt{s}}\xspace}=15$ GeV compared to EPPS16 nPDF calculation (blue band), isospin effect (dotted line) and energy loss effects (red band).[]{data-label="E906_data"}](E906_Fe_C.pdf) ![E906 nuclear production ratio measured in pFe (left) and pW (right), normalized to pC, collisions at ${\ensuremath{\sqrt{s}}\xspace}=15$ GeV compared to EPPS16 nPDF calculation (blue band), isospin effect (dotted line) and energy loss effects (red band).[]{data-label="E906_data"}](E906_W_C.pdf) The origin of the different magnitude in the data and in the model is not clear. We checked that a larger transport coefficient would perfectly reproduce the W/C ratio, at the expense, however, of a non-universal coefficient in the LPM and fully coherent energy loss regimes. Also, note that the present discrepancy is of the same order as the nPDF uncertainty; adding initial-state energy loss on top of nPDF effects would thus bring the calculation into good agreement with the data. Finally, it cannot be excluded that fully coherent energy loss, expected at NLO in the $q\bar{q}\to \gamma^\star g$ channel, could lead to an extra suppression at large [$x_{\mathrm{F}}$]{}. Any firm conclusion must nonetheless await the final E906 measurements. Earlier calculations have been performed in a different theoretical set-up, the higher twist (HT) formalism, which allows for computing within the same framework the DY production process and the energy loss effects, assuming one additional hard scattering in the target nucleus [@Xing:2011fb]. These calculations should match at leading twist (assuming no additional scattering) the leading order DY production cross section. Here, on the contrary, the DY process is computed at NLO accuracy, to which a rescaling of the projectile PDF is applied in order to model energy loss effects arising from multiple soft scattering in the medium, see [Eq. ]{}.
Despite these important differences, both calculations of the DY nuclear production ratio prove remarkably similar (and the HT calculation of Ref. [@Xing:2011fb] in agreement with E906 preliminary data). It should also be mentioned that the transport coefficients used in the two approaches coincide: the *gluon* transport coefficient used here corresponds to a *quark* transport coefficient ${\hat{q}}_{\text{quark}} = 4/9\, {\hat{q}}\simeq 0.025$–$0.032$ [GeV$^2$/fm]{}(in a W nucleus), which perfectly matches the value ${\hat{q}}_{\text{quark}}=0.024\pm0.008$ [GeV$^2$/fm]{}used in Ref. [@Xing:2011fb] and extracted from semi-inclusive DIS measurements [@Wang:2009qb]. NA10 data --------- The NA10 collaboration collected Drell-Yan data on two nuclear targets (W, D) in the mass range $4.35 < M < 15$ GeV at ${\ensuremath{\sqrt{s}}\xspace}= 16.2$ GeV, excluding the $\Upsilon$ peak region $8.5< M < 11$ GeV. ![NA10 nuclear production ratio, corrected for isospin effects, measured in $\pi$W, normalized to $\pi$D, collisions at ${\ensuremath{\sqrt{s}}\xspace}=16.2$ GeV compared to EPPS16 nPDF calculation (blue band) and energy loss effects (red band).[]{data-label="NA10_data"}](NA10_146.pdf) The original measurements were corrected for isospin effects in the W target [@Bordalo:1987cs]. Similarly, an isospin correction [@Paakkinen:2016wxk] is applied to the present calculation, defined as $$R_{\pi^-}^{\text{NLO-isospin corrected}}(\text{W/D}) = R_{\pi^-}^{\text{LO}}(\text{W}^{\text{isoscalar}}\text{/W})_{\text{no nPDF}} \times R_{\pi^-}^{\text{NLO}}(\text{W/D}),$$ where $R_{\pi^-}^{\text{LO}}(\text{W}^{\text{isoscalar}}\text{/W})_{\text{no nPDF}}$ is the ratio, calculated at leading order and without nPDF corrections, of the cross section in an ‘isospin-symmetrized’ W nucleus ($Z=A/2$) over that in a W nucleus. The isospin corrected nPDF calculation overestimates the data, as illustrated in Fig. \[NA10\_data\]. This disagreement has also been reported by Paakkinen et al.
[@Paakkinen:2016wxk] who apply a rescaling factor of $12.5\%$ ($r=1.125$) to their calculations to bring data and nPDF corrections into agreement. This rescaling is however twice as large as the systematic uncertainty of $6\%$ reported by the experiment [@Bordalo:1987cs]. The energy loss calculation shown in Fig. \[NA10\_data\] leads to a suppression of approximately $3\%$ (${\ensuremath{R_{\rm pA}}\xspace}\simeq0.97$), independent of $x_2$. As expected, energy loss effects in [$\pi\text{A}$]{}collisions turn out to be significantly smaller than in [$\text{pA}$]{}collisions due to the harder quark distribution in a pion with respect to that in a proton (see Section \[subsec:EnergyLoss\]). Taking into account energy loss in addition to nPDF effects would thus require a rescaling factor of approximately $1.125 \times 0.97 \simeq 1.09$, i.e. $9\%$ instead of the previous $12.5\%$, hence closer to the reported experimental systematic uncertainty. Perhaps more importantly, the present calculation shows that initial-state energy loss gives an effect on [$R_{\rm {\pi}A}$]{}of the same magnitude as that of nPDF corrections. This may thus call into question a reliable extraction of nuclear parton distributions from DY data in [$\pi\text{A}$]{}collisions without taking energy loss effects into account. Moreover, although [$R_{\rm {\pi}A}$]{}proved to be constant (but still different from unity) in this $\xtwo$ range, we shall see in Section \[sec:compass\] that this is not necessarily always the case. E866 data --------- The E866 collaboration measured the Fe/Be and W/Be nuclear production ratios in [$\text{pA}$]{}collisions at ${\ensuremath{\sqrt{s}}\xspace}= 38.7$ GeV. The integrated mass range is $4 < M < 8$ GeV, with an additional kinematical cut $0.02 \lesssim x_{2} \lesssim 0.10$.
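The smaller $x_2$ reach of E866 compared to E906 follows directly from $x_2 \simeq M^2/({\ensuremath{x_{\mathrm{F}}}\xspace}\,s)$: at comparable $M$ and ${\ensuremath{x_{\mathrm{F}}}\xspace}$, the higher collision energy pushes $x_2$ down. A quick comparison, with illustrative mid-range mass values:

```python
# Why E866 probes smaller x2 than E906: x2 ~ M^2 / (xF s), so at fixed M
# and xF the higher sqrt(s) lowers x2. M and xF values are illustrative.

def x2(M, xF, sqrt_s):
    """Forward-region momentum fraction in the nucleus (all in GeV units)."""
    return M**2 / (xF * sqrt_s**2)

xF = 0.5
print(x2(5.0, xF, 15.0))   # E906: M ~ 5 GeV at sqrt(s) = 15 GeV  -> x2 ~ 0.22
print(x2(6.0, xF, 38.7))   # E866: M ~ 6 GeV at sqrt(s) = 38.7 GeV -> x2 ~ 0.05
```

Both values land inside the experimental cuts quoted above ($0.1$–$0.3$ for E906, $0.02$–$0.10$ for E866), which is what makes the direct comparison of the two data sets at fixed $x_2$ possible later on.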
The nuclear depletion of DY production in E772 and E866 data has long been delicate to interpret, as it is virtually impossible to disentangle nPDF and energy loss effects from these measurements alone [@Arleo:2002ph]. Several groups attribute the measured suppression mostly to large energy loss effects [@Johnson:2001xfa; @Neufeld:2010dz; @Song:2017wuh], while the nPDF analyses instead assume energy loss effects to be negligible and hence do incorporate these experimental data in the global fits [@deFlorian:2011fp; @Kovarik:2015cma; @Eskola:2016oht]. ![E866 nuclear production ratio measured in pFe (left) and pW (right), normalized to pBe, collisions at ${\ensuremath{\sqrt{s}}\xspace}=38.7$ GeV compared to EPPS16 nPDF calculation (blue band), isospin effect (dotted line) and energy loss effects (red band).[]{data-label="E866_data"}](E866_Fe_Be.pdf) ![E866 nuclear production ratio measured in pFe (left) and pW (right), normalized to pBe, collisions at ${\ensuremath{\sqrt{s}}\xspace}=38.7$ GeV compared to EPPS16 nPDF calculation (blue band), isospin effect (dotted line) and energy loss effects (red band).[]{data-label="E866_data"}](E866_W_Be.pdf) The comparison between our results and the data is shown in Fig. \[E866\_data\]. The agreement between the nPDF calculation and the data is satisfactory for both nuclear ratios. However, this should not come as a surprise since these data have been used in the global fit of EPPS16. In Ref. [@Arleo:2002ph], NA3 DY data in [$\pi\text{A}$]{}collisions were used at lower energy to extract an upper limit on the amount of quark energy loss in nuclear matter, and hence helped to lift this ‘degeneracy’ at E866 energy. That study led to the conclusion that energy loss effects on the E866 DY measurements were presumably small.
Here, the independent estimate of ${\hat{q}}$ from [[${J}\hspace{-.08em}/\hspace{-.14em}\psi$]{}]{}suppression data – which are not included in nPDF analyses – corroborates this statement as parton energy loss shows a negligible effect, except at very large ${\ensuremath{x_{\mathrm{F}}}\xspace}\gtrsim 0.8$ (see Fig. \[E866\_data\]). At E866 energy and above, the forward DY measurements in [$\text{pA}$]{}collisions are therefore most likely due to nPDF effects and hence could be included in the global fit analyses.[^13] The forthcoming DY measurements in pPb collisions at the LHC should thus allow for a clean extraction of nPDFs at small $x$ [@Arleo:2015qiv]. Predictions for the COMPASS experiment {#sec:compass} -------------------------------------- Drell-Yan data are also being collected by the COMPASS collaboration at the CERN SPS, using a pion beam on two nuclear targets (NH$_{3}$, W) at a collision energy ${\ensuremath{\sqrt{s}}\xspace}=18.9$ GeV [@Aghasyan:2017jop]. The expected mass range is $4.3 < M < 8.5$ GeV. With such a mass range, the COMPASS measurements would explore a typical range $0.1\lesssim\xtwo\lesssim0.5$, embracing both the antishadowing and EMC regions. The predictions for the ratio $R_{\pi^{-}\text{A}}^{\text{DY}}$(W/NH$_{3}$) are shown in Fig. \[COMPASS\_data\]. A significant suppression from isospin effects (dashed curve) is expected because of the smaller up-quark density in W than in the NH$_3$ target.[^14] Because of the valence quark antishadowing in the EPPS16 set, the inclusion of nPDF corrections increases ${\ensuremath{R_{\rm {\pi}A}}\xspace}$ by up to $5\%$ with respect to the calculation assuming no nuclear effect.[^15] On the contrary, energy loss effects would lead to a suppression of the DY yield, increasingly pronounced at larger [$x_{\mathrm{F}}$]{}as the phase space available for gluon radiation shrinks.
At ${\ensuremath{x_{\mathrm{F}}}\xspace}=0.9$, the model predicts a suppression of ${\ensuremath{R_{\rm {\pi}A}}\xspace}\simeq0.8$ while isospin effects only would lead to ${\ensuremath{R_{\rm {\pi}A}}\xspace}\simeq0.94$. Despite the smaller energy loss effects in [$\pi\text{A}$]{}collisions, the future DY measurements by the COMPASS experiment will provide additional constraints for the extraction of the transport coefficient, thanks to the large [$x_{\mathrm{F}}$]{}acceptance and the expected statistics. ![Nuclear production ratio measured in $\pi$W, normalized to $\pi$NH$_3$, collisions at ${\ensuremath{\sqrt{s}}\xspace}=18.9$ GeV compared to EPPS16 nPDF calculation (blue band), isospin effect (dotted line) and energy loss effects (red band)[]{data-label="COMPASS_data"}](NA58_W_NH3.pdf) Before closing this section, let us mention that the pion PDFs are extracted from DY production measured in pion-induced collisions on *nuclear* targets, assuming no nuclear effect beyond isospin corrections. However, energy loss effects may affect the reliable extraction of the pion PDF at large $x$, $f_u^\pi(x)\sim(1-x)^n$, and possibly make $n$ overestimated in the PDF global fits. In order to estimate the error associated with the use of nuclear targets, the DY nuclear production ratio has been fitted as ${\ensuremath{R_{\rm {\pi}A}}\xspace}(\xone) = (1-\xone)^{\delta n}$, where $\delta n \simeq 0.06$ is a typical correction to the slope of the pion PDF. Violation of factorization in DY production in pA collisions {#sec:x2scaling} ============================================================ Factorization and $x_2$ scaling ---------------------------- It has been shown in the previous section that calculations using EPPS16 nPDF sets fail to describe the preliminary results by the E906 experiment (Fig. \[E906\_data\]), while exhibiting good agreement with E866 measurements (Fig. \[E866\_data\]).
It could be argued that *this* specific nPDF set is unable to account for both data sets. On the contrary, we demonstrate here that the failure to describe both data sets should be generic to all calculations based on nPDF effects only, following the reasoning of Ref. [@Hoyer:1990us]. Let us consider the factorized expression for the DY production cross section in [$\text{pA}$]{}collisions, [Eq. ]{}, assuming possible nPDF corrections, [Eq. ]{}. In the forward region, ${\ensuremath{x_{\mathrm{F}}}\xspace}= \xone - \xtwo >0$, and at large collision energy ($\hat{s}/s\simeq M^2/s\ll 1$), the momentum fractions carried by the incoming partons can be approximated as $\xone \simeq {\ensuremath{x_{\mathrm{F}}}\xspace}$ and $\xtwo = \hat{s} / ({\ensuremath{x_{\mathrm{F}}}\xspace}s)$. Since forward DY production is dominated by the scattering of a quark in the incoming proton, the [$\text{pA}$]{}differential cross section can thus be approximated as [@Hoyer:1990us] $$\begin{aligned} \label{eq:DYxs_approx} \frac{{{\rm d}}\sigma({\ensuremath{\text{pA}}\xspace})}{{{\rm d}}{\ensuremath{x_{\mathrm{F}}}\xspace}\,{{\rm d}}M} &\simeq& f_{q}^{p}({\ensuremath{x_{\mathrm{F}}}\xspace}) \times \left( \sum\limits_{j=q,\bar{q},g} \, \int\, {{\rm d}}\xtwo\, f_{j}^{{{\rm A}}}(x_{2}) \frac{{{\rm d}}\widehat{\sigma}_{qj}}{{{\rm d}}{\ensuremath{x_{\mathrm{F}}}\xspace}\,{{\rm d}}M}({\ensuremath{x_{\mathrm{F}}}\xspace}\xtwo s)\right)\, \nonumber \\ &\simeq& f_{q}^{p}({\ensuremath{x_{\mathrm{F}}}\xspace}) \times \left( \sum\limits_{j=q,\bar{q},g} f_{j}^{{{\rm A}}}(x_{2})\, \frac{{{\rm d}}\widehat{\sigma}_{qj}}{{{\rm d}}{\ensuremath{x_{\mathrm{F}}}\xspace}\,{{\rm d}}M}(M^2)\right)\,\end{aligned}$$ where the second line is obtained assuming that the partonic cross section peaks close to the threshold, $\hat{s} \gtrsim M^2$. Using [Eq. 
]{}, the nuclear production ratio thus becomes a scaling function of the momentum fraction $\xtwo$ only – independent of the center-of-mass energy of the collision – should the factorization hold in DY forward production in [$\text{pA}$]{}collisions. Conversely, a lack of $\xtwo$ scaling in data would signal the breakdown of QCD factorization and would indicate that nPDF corrections alone cannot account for the nuclear dependence of DY production. $x_2$ scaling violation in the DY process -------------------------------------- In order to check whether the DY nuclear production ratio indeed scales with $\xtwo$, the [$\text{pA}$]{}data from E772 (on W/D targets), E866 (both taken at ${\ensuremath{\sqrt{s}}\xspace}=38.7$ GeV) and E906 (${\ensuremath{\sqrt{s}}\xspace}=15$ GeV) are plotted as a function of $\xtwo$ in Fig. \[fig:x2scaling\] (left). The comparison between E772 and E906 results clearly shows a violation of $\xtwo$ scaling at large $\xtwo\sim 10^{-1}$.[^16] On the contrary, it has been checked that the NLO calculations (using EPPS16 nPDF sets) at both energies follow, as expected, $\xtwo$ scaling to a very good accuracy. ![Left: DY nuclear production ratio measured by E772 [@Alde:1990im], E866 [@Vasilev:1999fa], E906 [@Lin:2017eoc], plotted as a function of $\xtwo$. Right: Nuclear dependence of ${{\ensuremath{{J}\hspace{-.08em}/\hspace{-.14em}\psi}\xspace}}$ production measured by NA3 [@Badier:1983dg], E866 [@Leitch:1999ea], PHENIX [@Adare:2010fn], ALICE [@Abelev:2013yxa] and LHCb [@Aaij:2017cqq], plotted as a function of $\xtwo$.[]{data-label="fig:x2scaling"}](scaling_x2_dy.pdf) ![Left: DY nuclear production ratio measured by E772 [@Alde:1990im], E866 [@Vasilev:1999fa], E906 [@Lin:2017eoc], plotted as a function of $\xtwo$.
Right: Nuclear dependence of ${{\ensuremath{{J}\hspace{-.08em}/\hspace{-.14em}\psi}\xspace}}$ production measured by NA3 [@Badier:1983dg], E866 [@Leitch:1999ea], PHENIX [@Adare:2010fn], ALICE [@Abelev:2013yxa] and LHCb [@Aaij:2017cqq], plotted as a function of $\xtwo$.[]{data-label="fig:x2scaling"}](scaling_x2_jpsi.pdf) This comparison thus provides for the first time clear evidence of the violation of QCD factorization in the Drell-Yan process in [$\text{pA}$]{}collisions, implying the presence of higher-twist processes. A natural candidate responsible for the reported violation of $\xtwo$ scaling is LPM initial-state energy loss, as the comparison between the model calculation and E906 results suggests. As mentioned in the Introduction, initial-state energy loss should have a weak impact at high incoming parton energy, $E = \xone\,{\ensuremath{E_\mathrm{b}}\xspace}\simeq M^2\,{\ensuremath{E_\mathrm{b}}\xspace}/(\xtwo s)$. Therefore, no violation of $x_2$ scaling in DY production is expected at small values of $\xtwo$, as ${\langle{\epsilon}\rangle}_{\text{LPM}}/E \propto \xtwo$. Comparing with ${{\ensuremath{{J}\hspace{-.08em}/\hspace{-.14em}\psi}\xspace}}$ production ------------------------------- Such scaling breakdown has long been reported in [[${J}\hspace{-.08em}/\hspace{-.14em}\psi$]{}]{}production [@Hoyer:1990us], later confirmed by measurements at higher collision energy [@Leitch:2006ff]. Fig. \[fig:x2scaling\] (right) shows the nuclear dependence of ${{\ensuremath{{J}\hspace{-.08em}/\hspace{-.14em}\psi}\xspace}}$ production in [$\text{pA}$]{}collisions[^17] measured by NA3 [@Badier:1983dg], E866 [@Leitch:1999ea], PHENIX [@Adare:2010fn], ALICE [@Abelev:2013yxa] and LHCb [@Aaij:2017cqq], from ${\ensuremath{\sqrt{s}}\xspace}\simeq 20$ GeV to ${\ensuremath{\sqrt{s}}\xspace}\simeq 8$ TeV. As can be seen, the ${{\ensuremath{{J}\hspace{-.08em}/\hspace{-.14em}\psi}\xspace}}$ suppression data clearly rule out the $\xtwo$ scaling predicted by QCD factorization.
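The contrasting $\xtwo$ behaviour of the two energy loss regimes can be made explicit with a short derivation. Using ${\ensuremath{E_\mathrm{b}}\xspace}\simeq s/2m_p$ and the near-threshold relation $\xone \simeq M^2/(\xtwo s)$, the incoming parton energy in the nucleus rest frame is $E = \xone\,{\ensuremath{E_\mathrm{b}}\xspace}\simeq M^2/(2 m_p \xtwo)$, so that $$\frac{{\langle{\epsilon}\rangle}_{\text{LPM}}}{E} \simeq \frac{\alpha_s\, C_R\, {\hat{q}}\, L^2\, m_p}{2\,M^2}\; \xtwo \;\propto\; \xtwo\,, \qquad \frac{{\langle{\epsilon}\rangle}_{\text{coh}}}{E} = \text{const}\,,$$ since the LPM mean loss, [Eq. ]{}, is independent of the parton energy while the fully coherent loss grows linearly with it. The relative LPM loss thus vanishes at small $\xtwo$, whereas the relative coherent loss does not.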
This prevents the use of ${{\ensuremath{{J}\hspace{-.08em}/\hspace{-.14em}\psi}\xspace}}$ measurements in [$\text{pA}$]{}collisions in order to extract nuclear parton densities. The $\xtwo$ dependence of DY and ${{\ensuremath{{J}\hspace{-.08em}/\hspace{-.14em}\psi}\xspace}}$ suppression at different collision energies shows similar patterns, despite a clear difference in the magnitude of the suppression.[^18] In the case of [[${J}\hspace{-.08em}/\hspace{-.14em}\psi$]{}]{}production, the good agreement between data and the model based on fully coherent energy loss [@Arleo:2012hn; @Arleo:2012rs] makes the latter a natural process responsible for the breakdown of $\xtwo$ scaling. Unlike initial-state energy loss, fully coherent energy loss is proportional to the parton energy, [Eq. ]{}, making ${\langle{\epsilon}\rangle}_{\text{coh}}/E$ independent of $\xtwo$. As a consequence, quarkonium suppression should not follow $\xtwo$ scaling even at small values of $\xtwo$. Summary {#sec:summary} ======= The effects of parton energy loss on Drell-Yan production in [$\text{pA}$]{}and [$\pi\text{A}$]{}collisions at fixed-target energies have been investigated, based on a BDMPS energy loss framework embedded in an NLO calculation. Nuclear production ratios were compared to calculations assuming nPDF effects as well as to experimental results. Let us summarize the main conclusions of this study: - Preliminary results by the E906 experiment in [$\text{pA}$]{}collisions exhibit a significant DY suppression at large [$x_{\mathrm{F}}$]{}, in clear disagreement with calculations including nPDF corrections only. This is the first time that the nuclear dependence of the DY process unambiguously contradicts nPDF expectations, indicating that other processes should be at work. This conclusion is confirmed by the direct comparison of E906 and E772/E866 results, whose lack of $\xtwo$ scaling signals the violation of QCD factorization in Drell-Yan production in [$\text{pA}$]{}collisions.
- In contrast, the E906 results prove in good qualitative agreement with the energy loss model predictions, despite a slightly different magnitude, using the transport coefficient extracted from [[${J}\hspace{-.08em}/\hspace{-.14em}\psi$]{}]{}data. This is a clear hint that energy loss in cold QCD matter affects the Drell-Yan process in nuclear collisions, while earlier claims of energy loss effects on DY production at higher collision energy are spoiled by possible nPDF effects. Moreover, the qualitative agreement with E906 preliminary data points to a consistent value of the transport coefficient for two processes in two different dynamical regimes: LPM energy loss for Drell-Yan and fully coherent radiation for [[${J}\hspace{-.08em}/\hspace{-.14em}\psi$]{}]{}. - In [$\pi\text{A}$]{}collisions, energy loss effects are naturally smaller than in [$\text{pA}$]{}collisions due to the harder PDF in a pion with respect to that in a proton. However, it is shown that energy loss effects are of the same magnitude as nPDF corrections, casting doubt on a clean extraction of nPDFs from these data without including energy loss effects. Energy loss processes suppress the DY yield and help reduce the known tension between nPDF results and NA10 data, though not sufficiently to claim good agreement. Predictions for the future COMPASS results are made; significant energy loss effects are expected especially above ${\ensuremath{x_{\mathrm{F}}}\xspace}\gtrsim 0.7$, which should bring additional constraints on the cold nuclear matter transport coefficient. - At E866 energy ($\sqrt{s} = 38.7$ GeV) the effects of LPM energy loss on DY production, using ${\hat{q}}$ extracted from [[${J}\hspace{-.08em}/\hspace{-.14em}\psi$]{}]{}data, significantly weaken, as already pointed out in Refs. [@Arleo:2002ph; @Xing:2011fb].
This justifies a posteriori the use of these results to extract nPDF, except at very large [$x_{\mathrm{F}}$]{}where energy loss affects DY almost as much as nPDF corrections. We would like to thank Yann Bedfer, Fabienne Kunne and Stéphane Peigné for a careful reading of the manuscript and for discussions. We also thank Po-Ju Lin for discussions on the E906 results. This work was supported in part by the P2IO LabEx (ANR-10-LABX-0038). [10]{} A. Majumder and M. Van Leeuwen, *[The Theory and Phenomenology of Perturbative QCD Based Jet Quenching]{}*, [*Prog. Part. Nucl. Phys.* [**66**]{} (2011) 41](https://doi.org/10.1016/j.ppnp.2010.09.001) \[[[1002.2206]{}](https://arxiv.org/abs/1002.2206)\]. Y. Mehtar-Tani, J. G. Milhano and K. Tywoniuk, *[Jet physics in heavy-ion collisions]{}*, [*Int. J. Mod. Phys.* [**A28**]{} (2013) 1340013](https://doi.org/10.1142/S0217751X13400137) \[[[1302.2579]{}](https://arxiv.org/abs/1302.2579)\]. N. Armesto and E. Scomparin, *[Heavy-ion collisions at the Large Hadron Collider: a review of the results from Run 1]{}*, [*Eur. Phys. J. Plus* [**131**]{} (2016) 52](https://doi.org/10.1140/epjp/i2016-16052-4) \[[[1511.02151]{}](https://arxiv.org/abs/1511.02151)\]. G.-Y. Qin and X.-N. Wang, *[Jet quenching in high-energy heavy-ion collisions]{}*, [*Int. J. Mod. Phys.* [**E24**]{} (2015) 1530014](https://doi.org/10.1142/S0218301315300143, 10.1142/9789814663717_0007) \[[[1511.00790]{}](https://arxiv.org/abs/1511.00790)\]. F. Arleo, *[Aspects of hard QCD processes in proton–nucleus collisions]{}*, [*Nucl. Part. Phys. Proc.* [**289-290**]{} (2017) 71](https://doi.org/10.1016/j.nuclphysbps.2017.05.014) \[[[1612.07987]{}](https://arxiv.org/abs/1612.07987)\]. S. Peign[é]{} and A. Smilga, *[Energy losses in a hot plasma revisited]{}*, [*Phys.Usp.* [**52**]{} (2009) 659](https://doi.org/10.3367/UFNe.0179.200907a.0697) \[[[ 0810.5702]{}](https://arxiv.org/abs/0810.5702)\]. F. Arleo, S. Peign[é]{} and T. 
Sami, *[Revisiting scaling properties of medium-induced gluon radiation]{}*, [*Phys. Rev.* [**D83**]{} (2011) 114036](https://doi.org/10.1103/PhysRevD.83.114036) \[[[1006.0818]{}](https://arxiv.org/abs/1006.0818)\]. F. Arleo and S. Peign[é]{}, *[J/$\psi$ suppression in pA collisions from parton energy loss in cold QCD matter]{}*, [*Phys. Rev. Lett.* [**109**]{} (2012) 122301](https://doi.org/10.1103/PhysRevLett.109.122301) \[[[1204.4609]{}](https://arxiv.org/abs/1204.4609)\]. F. Arleo and S. Peign[é]{}, *[Heavy-quarkonium suppression in pA collisions from parton energy loss in cold QCD matter]{}*, [*JHEP* [**03**]{} (2013) 122](https://doi.org/10.1007/JHEP03(2013)122) \[[[1212.0434]{}](https://arxiv.org/abs/1212.0434)\]. F. Arleo, R. Kolevatov and S. Peign[é]{}, *[Coherent medium-induced gluon radiation in hard forward $1\to1$ partonic processes]{}*, [*Phys. Rev.* [**D93**]{} (2016) 014006](https://doi.org/10.1103/PhysRevD.93.014006) \[[[1402.1671]{}](https://arxiv.org/abs/1402.1671)\]. S. Peign[é]{} and R. Kolevatov, *[Medium-induced soft gluon radiation in forward dijet production in relativistic proton-nucleus collisions]{}*, [*JHEP* [**01**]{} (2015) 141](https://doi.org/10.1007/JHEP01(2015)141) \[[[1405.4241]{}](https://arxiv.org/abs/1405.4241)\]. F. Arleo, R. Kolevatov, S. Peign[é]{} and M. Rustamova, *[Centrality and $p_\perp$ dependence of $J/\psi$ suppression in proton-nucleus collisions from parton energy loss]{}*, [*JHEP* [**05**]{} (2013) 155](https://doi.org/10.1007/JHEP05(2013)155) \[[[1304.0901]{}](https://arxiv.org/abs/1304.0901)\]. F. Arleo and S. Peign[é]{}, *[Disentangling Shadowing from Coherent Energy Loss using the Drell-Yan Process]{}*, [*Phys. Rev.* [**D95**]{} (2017) 011502](https://doi.org/10.1103/PhysRevD.95.011502) \[[[1512.01794]{}](https://arxiv.org/abs/1512.01794)\]. R. Vogt, *[The $x_F$ dependence of $\psi$ and Drell-Yan production]{}*, [*Phys. 
Rev.* [**C61**]{} (2000) 035203]{} \[[[hep-ph/9907317]{}](https://arxiv.org/abs/hep-ph/9907317)\]. E. Wang and X.-N. Wang, *Jet tomography of dense and nuclear matter*, [*Phys. Rev. Lett.* [**89**]{} (2002) 162301]{} \[[[hep-ph/0202105]{}](https://arxiv.org/abs/hep-ph/0202105)\]. F. Arleo, *Quenching of hadron spectra in [DIS]{} on nuclear targets*, [*Eur. Phys. J.* [**C30**]{} (2003) 213]{} \[[[hep-ph/0306235]{}](https://arxiv.org/abs/hep-ph/0306235)\]. collaboration, D. M. Alde et al., *[Nuclear dependence of dimuon production at 800 GeV in the E772 experiment]{}*, [*Phys. Rev. Lett.* [**64**]{} (1990) 2479](https://doi.org/10.1103/PhysRevLett.64.2479). collaboration, M. A. Vasilev et al., *[Parton energy loss limits and shadowing in Drell-Yan dimuon production]{}*, [*Phys. Rev. Lett.* [**83**]{} (1999) 2304](https://doi.org/10.1103/PhysRevLett.83.2304) \[[[hep-ex/9906010]{}](https://arxiv.org/abs/hep-ex/9906010)\]. D. de Florian, R. Sassot, P. Zurita and M. Stratmann, *[Global Analysis of Nuclear Parton Distributions]{}*, [*Phys. Rev.* [**D85**]{} (2012) 074028](https://doi.org/10.1103/PhysRevD.85.074028) \[[[1112.6324]{}](https://arxiv.org/abs/1112.6324)\]. K. Kovarik et al., *[nCTEQ15 - Global analysis of nuclear parton distributions with uncertainties in the CTEQ framework]{}*, [*Phys. Rev.* [**D93**]{} (2016) 085037](https://doi.org/10.1103/PhysRevD.93.085037) \[[[1509.00792]{}](https://arxiv.org/abs/1509.00792)\]. K. J. Eskola, P. Paakkinen, H. Paukkunen and C. A. Salgado, *[EPPS16: Nuclear parton distributions with LHC data]{}*, [*Eur. Phys. J.* [**C77**]{} (2017) 163](https://doi.org/10.1140/epjc/s10052-017-4725-9) \[[[1612.05741]{}](https://arxiv.org/abs/1612.05741)\]. M. B. Johnson, B. Z. Kopeliovich, I. K. Potashnikova, P. L. McGaughey, J. M. Moss et al., *[Energy loss versus shadowing in the Drell-Yan reaction on nuclei]{}*, [*Phys. 
Rev.* [**C65**]{} (2002) 025203](https://doi.org/10.1103/PhysRevC.65.025203) \[[[hep-ph/0105195]{}](https://arxiv.org/abs/hep-ph/0105195)\]. R. Neufeld, I. Vitev and B.-W. Zhang, *[A possible determination of the quark radiation length in cold nuclear matter]{}*, [*Phys.Lett.* [**B704**]{} (2011) 590](https://doi.org/10.1016/j.physletb.2011.09.045) \[[[1010.3708]{}](https://arxiv.org/abs/1010.3708)\]. L.-H. Song and L.-W. Yan, *[Constraining the transport coefficient in cold nuclear matter with the Drell-Yan process]{}*, [*Phys. Rev.* [**C96**]{} (2017) 045203](https://doi.org/10.1103/PhysRevC.96.045203). collaboration, J. Badier et al., *[Test of nuclear effects in hadronic dimuon production]{}*, [*Phys. Lett.* [**B104**]{} (1981) 335](https://doi.org/10.1016/0370-2693(81)90137-4). F. Arleo, *Constraints on quark energy loss from [Drell-Yan]{} data*, [*Phys. Lett.* [**B532**]{} (2002) 231]{} \[[[hep-ph/0201066]{}](https://arxiv.org/abs/hep-ph/0201066)\]. collaboration, P. E. Reimer, *[Drell-Yan Measurements by Fermilab E-906/SeaQuest]{}*, [<http://www.phy.anl.gov/mep/SeaQuest/> (2012) ]{}. P.-J. Lin, *[Measurement of Quark Energy Loss in Cold Nuclear Matter at Fermilab E906/SeaQuest]{}*, [<http://lss.fnal.gov/archive/thesis/2000/fermilab-thesis-2017-18.pdf>]{}, Ph.D. thesis, Colorado U., 2017. 10.2172/1398791. collaboration, M. Aghasyan et al., *[First measurement of transverse-spin-dependent azimuthal asymmetries in the Drell-Yan process]{}*, [*Phys. Rev. Lett.* [**119**]{} (2017) 112002](https://doi.org/10.1103/PhysRevLett.119.112002) \[[[1704.00488]{}](https://arxiv.org/abs/1704.00488)\]. S. Catani and M. Grazzini, *[An NNLO subtraction formalism in hadron collisions and its application to Higgs boson production at the LHC]{}*, [*Phys. Rev. Lett.* [**98**]{} (2007) 222002](https://doi.org/10.1103/PhysRevLett.98.222002) \[[[hep-ph/0703012]{}](https://arxiv.org/abs/hep-ph/0703012)\]. S. Catani, L. Cieri, G. Ferrera, D. de Florian and M. 
Grazzini, *[Vector boson production at hadron colliders: a fully exclusive QCD calculation at NNLO]{}*, [*Phys. Rev. Lett.* [**103**]{} (2009) 082001](https://doi.org/10.1103/PhysRevLett.103.082001) \[[[0903.2120]{}](https://arxiv.org/abs/0903.2120)\]. L. A. Harland-Lang, A. D. Martin, P. Motylinski and R. S. Thorne, *[Parton distributions in the LHC era: MMHT 2014 PDFs]{}*, [*Eur. Phys. J.* [**C75**]{} (2015) 204](https://doi.org/10.1140/epjc/s10052-015-3397-6) \[[[1412.3989]{}](https://arxiv.org/abs/1412.3989)\]. S. Dulat, T.-J. Hou, J. Gao, M. Guzzi, J. Huston et al., *[New parton distribution functions from a global analysis of quantum chromodynamics]{}*, [*Phys. Rev.* [**D93**]{} (2016) 033006](https://doi.org/10.1103/PhysRevD.93.033006) \[[[1506.07443]{}](https://arxiv.org/abs/1506.07443)\]. M. Gl[ü]{}ck, E. Reya and A. Vogt, *[Pionic parton distributions]{}*, [*Z. Phys.* [**C53**]{} (1992) 651](https://doi.org/10.1007/BF01559743). P. J. Sutton, A. D. Martin, R. G. Roberts and W. J. Stirling, *[Parton distributions for the pion extracted from Drell-Yan and prompt photon experiments]{}*, [*Phys. Rev.* [**D45**]{} (1992) 2349](https://doi.org/10.1103/PhysRevD.45.2349). P. C. Barry, N. Sato, W. Melnitchouk and C.-R. Ji, *[First Monte Carlo global QCD analysis of pion parton distributions]{}*, [[1804.01965]{}](https://arxiv.org/abs/1804.01965). R. Baier, Y. L. Dokshitzer, A. H. Mueller and D. Schiff, *[Quenching of hadron spectra in media]{}*, [*JHEP* [ **09**]{} (2001) 033](https://doi.org/10.1088/1126-6708/2001/09/033) \[[[ hep-ph/0106347]{}](https://arxiv.org/abs/hep-ph/0106347)\]. F. Arleo, *[Tomography of cold and hot QCD matter: Tools and diagnosis]{}*, [*JHEP* [ **11**]{} (2002) 044](https://doi.org/10.1088/1126-6708/2002/11/044) \[[[ hep-ph/0210104]{}](https://arxiv.org/abs/hep-ph/0210104)\]. C. A. Salgado and U. A. Wiedemann, *[Calculating quenching weights]{}*, [*Phys. 
Rev.* [**D68**]{} (2003) 014008](https://doi.org/10.1103/PhysRevD.68.014008) \[[[hep-ph/0302184]{}](https://arxiv.org/abs/hep-ph/0302184)\]. R. Baier, Y. L. Dokshitzer, A. H. Mueller, S. Peign[é]{} and D. Schiff, *[Radiative energy loss and $p_\perp$ broadening of high energy partons in nuclei]{}*, [*Nucl. Phys.* [**B484**]{} (1997) 265](https://doi.org/10.1016/S0550-3213(96)00581-0) \[[[hep-ph/9608322]{}](https://arxiv.org/abs/hep-ph/9608322)\]. M. Gyulassy, P. Levai and I. Vitev, *[Jet quenching in thin quark gluon plasmas. 1. Formalism]{}*, [*Nucl. Phys.* [**B571**]{} (2000) 197](https://doi.org/10.1016/S0550-3213(99)00713-0) \[[[hep-ph/9907461]{}](https://arxiv.org/abs/hep-ph/9907461)\]. X.-N. Wang and X. Guo, *Multiple parton scattering in nuclei: Parton energy loss*, [*Nucl. Phys.* [**A696**]{} (2001) 788]{} \[[[hep-ph/0102230]{}](https://arxiv.org/abs/hep-ph/0102230)\]. S. J. Brodsky and G. R. Farrar, *[Scaling Laws at Large Transverse Momentum]{}*, [*Phys. Rev. Lett.* [**31**]{} (1973) 1153](https://doi.org/10.1103/PhysRevLett.31.1153). collaboration, P. Bordalo et al., *[Nuclear Effects on the Nucleon Structure Functions in Hadronic High Mass Dimuon Production]{}*, [*Phys. Lett.* [**B193**]{} (1987) 368](https://doi.org/10.1016/0370-2693(87)91253-6). H. Xing, Y. Guo, E. Wang and X.-N. Wang, *[Parton Energy Loss and Modified Beam Quark Distribution Functions in Drell-Yan Process in p+A Collisions]{}*, [*Nucl. Phys.* [**A879**]{} (2012) 77](https://doi.org/10.1016/j.nuclphysa.2012.01.012) \[[[1110.1903]{}](https://arxiv.org/abs/1110.1903)\]. W.-t. Deng and X.-N. Wang, *[Multiple Parton Scattering in Nuclei: Modified DGLAP Evolution for Fragmentation Functions]{}*, [*Phys. Rev.* [**C81**]{} (2010) 024902](https://doi.org/10.1103/PhysRevC.81.024902) \[[[0910.3403]{}](https://arxiv.org/abs/0910.3403)\]. P. Paakkinen, K. J. Eskola and H. 
Paukkunen, *[Applicability of pion nucleus Drell-Yan data in global analysis of nuclear parton distribution functions]{}*, [*Phys. Lett.* [**B768**]{} (2017) 7](https://doi.org/10.1016/j.physletb.2017.02.009) \[[[1609.07262]{}](https://arxiv.org/abs/1609.07262)\]. P. Hoyer, M. Vanttinen and U. Sukhatme, *[Violation of factorization in charm hadroproduction]{}*, [*Phys. Lett.* [**B246**]{} (1990) 217](https://doi.org/10.1016/0370-2693(90)91335-9). Y. H. Leung, *[PHENIX measurements of charm, bottom, and Drell-Yan via dimuons in pp and pAu at [$\sqrt{s_{\rm NN}}=200$ GeV]{}]{}*, [*<https://indico.cern.ch/event/634426/contributions/3090546/attachments/1727756/2791469/hp2018vf.pdf>* (2018) ]{}. collaboration, I. Bediaga et al., *[Physics case for an LHCb Upgrade II - Opportunities in flavour physics, and beyond, in the HL-LHC era]{}*, [[ 1808.08865]{}](https://arxiv.org/abs/1808.08865). collaboration, J. Badier et al., *[Experimental $J/\psi$ Hadronic Production from 150 GeV/c to 280 GeV/c]{}*, [*Z. Phys.* [**C20**]{} (1983) 101](https://doi.org/10.1007/BF01573213). collaboration, M. J. Leitch et al., *[Measurement of differences between $J/\psi$ and $\psi^\prime$ suppression in pA collisions]{}*, [*Phys. Rev. Lett.* [**84**]{} (2000) 3256](https://doi.org/10.1103/PhysRevLett.84.3256) \[[[nucl-ex/9909007]{}](https://arxiv.org/abs/nucl-ex/9909007)\]. collaboration, A. Adare et al., *[Cold Nuclear Matter Effects on J/$\psi$ Yields as a Function of Rapidity and Nuclear Geometry in Deuteron-Gold Collisions at $\sqrt{s_{\rm NN}} = 200$ GeV]{}*, [*Phys. Rev. Lett.* [**107**]{} (2011) 142301](https://doi.org/10.1103/PhysRevLett.107.142301) \[[[1010.1246]{}](https://arxiv.org/abs/1010.1246)\]. collaboration, B. B. Abelev et al., *[$J/\psi$ production and nuclear effects in pPb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV]{}*, [*JHEP* [**1402**]{} (2014) 073](https://doi.org/10.1007/JHEP02(2014)073) \[[[1308.6726]{}](https://arxiv.org/abs/1308.6726)\]. collaboration, R. 
Aaij et al., *[Prompt and nonprompt J/$\psi$ production and nuclear modification in $p$Pb collisions at $\sqrt{s_{\text{NN}}}= 8.16$ TeV]{}*, [*Phys. Lett.* [**B774**]{} (2017) 159](https://doi.org/10.1016/j.physletb.2017.09.058) \[[[1706.07122]{}](https://arxiv.org/abs/1706.07122)\]. M. Leitch, *[Overview of charm physics at RHIC]{}*, [*AIP Conf. Proc.* [**892**]{} (2007) 404](https://doi.org/10.1063/1.2714429) \[[[nucl-ex/0610031]{}](https://arxiv.org/abs/nucl-ex/0610031)\]. [^1]: corresponding author [^2]: between the projectile hadron and the target nucleus [^3]: At large angles, gluon emissions along the incoming parton and along the outgoing parton would not overlap, thus suppressing interferences. [^4]: This can be seen from [Eq. ]{}: in the DY case, $C_{R^\prime}=0$ and $C_R=C_t$, leading to a vanishing color prefactor. [^5]: At high collision energy, e.g. at RHIC or the LHC, LPM initial-state energy loss in nuclear matter is negligible, making DY production in [$\text{pA}$]{} collisions an ideal process to probe nuclear parton densities [@Arleo:2015qiv]. [^6]: In addition, these small effects could be balanced by the real NLO annihilation process, $q\bar{q}\to\gamma^\star g$, significantly suppressed compared to Compton scattering [@Vogt:1999dw] but more sensitive to fully coherent radiation ($\propto N_c$) and positive [@Peigne:2014uha]. [^7]: In the following, the explicit scale dependence is omitted for clarity. [^8]: The slight sea quark asymmetry is actually probed from the comparison of DY production in pp and pD collisions. Here, we shall always compare DY yields on nuclei with similar $Z/A$ ratios, making this effect small (see Section \[sec:e906\]). [^9]: Note that in a single hard scattering picture, the momentum transfer needs to be large enough in order to induce gluon emission. Taking $\mu_{\text{semi-hard}}\simeq 500$ MeV leads to $n\simeq1$, also consistent with the assumption of a small number of (semi-hard) scatterings.
[^10]: These are not included in the global fit of EPPS16. [^11]: Thus in a W target, for which neutrons are more abundant than protons, one expects $f_{\bar{u}}^{\rm W} > A_{\rm W}\,f_{\bar{u}}^p$, leading to ${\ensuremath{R_{\rm pA}}\xspace}$ slightly above unity. [^12]: In particular, the calculation of [$R_{\rm pA}$]{} does not depend on any free parameter. [^13]: The extraction of the sea quark nPDF at small $\xtwo\sim10^{-2}$ (corresponding to the largest [$x_{\mathrm{F}}$]{} bin in the experiment) is probably affected by a few percent. [^14]: Using $Z_{\text{W}}/A_\text{W}\approx2/5\simeq 1-Z_{\text{NH}_3}/A_{\text{NH}_3}$ leads to an expected suppression ${\ensuremath{R_{\rm {\pi}A}}\xspace}\simeq (1+3/2\,r)/(3/2+r)\approx7/8$ assuming $r\equiv d/u\approx1/2$ at large . At smaller , hence at larger [$x_{\mathrm{F}}$]{}, $r$ grows larger, and so does ${\ensuremath{R_{\rm {\pi}A}}\xspace}$. [^15]: The EMC region is probed at smaller values of [$x_{\mathrm{F}}$]{}, not shown in Fig. \[COMPASS\_data\]. [^16]: In principle the backward ($-2.2 < y < -1.2$) DY data by PHENIX in pAu collisions at ${\ensuremath{\sqrt{s}}\xspace}=200$ GeV could probe similar values of $\xtwo$, but the present uncertainties prevent any quantitative conclusion yet [@phenixdy]. Similarly, future measurements by the LHCb experiment in pPb collisions at ${\ensuremath{\sqrt{s}}\xspace}=8.16$ TeV [@Bediaga:2018lhg] may also access the large $\xtwo$ domain. [^17]: parametrized by $\alpha$, where $\alpha$ is defined as $\sigma({\ensuremath{\text{pA}}\xspace}\to {{\ensuremath{{J}\hspace{-.08em}/\hspace{-.14em}\psi}\xspace}}\,\X) \equiv \sigma(\text{pp}\to{{\ensuremath{{J}\hspace{-.08em}/\hspace{-.14em}\psi}\xspace}}\,\X)\times A^\alpha$. [^18]: The suppression is much more pronounced in the ${{\ensuremath{{J}\hspace{-.08em}/\hspace{-.14em}\psi}\xspace}}$ channel than in the DY process. Taking e.g. $\alpha=0.75$ would lead to $R^{J/\psi}_{pA} \simeq 0.27$ in an $A=200$ nucleus.
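The suppression quoted in the last footnote is quick to verify: with the parametrization of footnote 17, $\sigma({\rm pA}) = \sigma({\rm pp})\times A^\alpha$, the per-nucleon modification factor is $R_{\rm pA} = A^{\alpha-1}$. A minimal check (illustrative sketch only, not part of the original analysis):

```python
def r_pa(A, alpha):
    """Per-nucleon nuclear modification factor for sigma(pA) = sigma(pp) * A**alpha."""
    return A ** (alpha - 1.0)

print(round(r_pa(200, 0.75), 2))   # -> 0.27, the J/psi suppression quoted above
```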
--- author: - | S. Perezhogin, E. Bulyak, A. Dovbnya, V. Mitrochenko,\ A. Opanasenko, V. Kushnir date: 'NSC KIPT, Kharkov, Ukraine 2018' title: | KIPT Positron Source Project\ Conceptual Design --- Introduction ============ Monochromatic positron beams with energies below 1 MeV provide a unique tool for studying solid materials. The methods of positron annihilation spectroscopy can probe the electronic structure of crystals and detect numerous small-size defects in both solids and porous materials, such as vacancies, vacancy clusters and voids up to a cubic nanometer in size. All these methods are widely applied in modern materials science, particularly in atomic and electronic materials science, see [@siegel80; @singh16]. The development of intense positron sources based on electron linacs is therefore an important yet complicated task. Commonly, two types of positron sources are employed, radioactive and accelerator-based [@golde12]. The former uses $\beta^+$ active isotopes such as $^{22}$Na or $^{64}$Cu, which directly emit positrons. Sources of this type are compact but of low intensity, and produce positrons in the MeV range. In the latter – the electron-linac based – an intense electron beam is converted into a positron beam with a wide spectrum and an intensity of up to $10^{11}$ positrons per second, many orders of magnitude higher than that of radioactive sources. Owing to relativistic kinematics, the generated positrons are contained within a relatively small solid angle, in contrast to the isotropic emission of radioactive sources; thus the efficiency of the accelerator source is much higher. At present, positron sources of this kind, based on electron linacs with energies from 10 MeV to 70 MeV, are employed to study the properties of materials by the methods of positron annihilation spectroscopy [@wada13; @rourke13; @chemerisov09].
NSC KIPT has prior experience in the construction and operation of a positron source for high-energy physics experiments, see [@artemov84]. Theoretical background ====================== Electron-to-positron conversion ------------------------------- In the accelerator-based production method, the positrons are generated via a two-stage conversion. First, the accelerated electrons produce bremsstrahlung radiation while passing through the conversion target. Then these photons produce electron–positron pairs in the strong field of the nuclei. The positrons that traverse the conversion target are then collected for further acceleration or deceleration to meet the needs of the final users. The photons for positron production must have an energy in excess of 1 MeV: the rest energy of a pair equals 1.022 MeV. As the photon energy increases above this threshold, the positron production rate grows, mainly because the positrons suffer smaller losses while traversing the conversion target. At higher energies of the initial electrons, the secondary electrons and positrons are energetic enough to initiate electron–positron showers, in which the secondary particles produce gamma photons that are in turn converted into pairs. It should be noted that, besides conversion, both the electrons and the positrons undergo parasitic processes that reduce the yield and increase the emittance of the positron beam [@shulga93]. The main processes are: - The bremsstrahlung spectrum declines gradually with energy: a large fraction of the photons have energies below the pair-creation threshold. Emission of these photons decreases the energy of the initial electrons (so-called radiation losses) and increases the angular spread of the electron trajectories. - A definite fraction of the energy is lost to ionization. This process is especially important for the positrons, since the probability of annihilation increases as their energy decreases.
- Elastic scattering (without energy loss) increases the angular spread of the trajectories. To estimate the positron yield, we separate the whole generation process – beginning with the electrons impinging upon the front (upstream) surface of the converter and finishing with the positrons escaping from the rear (downstream) surface – into elementary physical processes. This commonly accepted approach allows the system to be optimized for maximum positron yield. The processes to be modeled are as follows. 1. Build-up of the radiation field in the target bulk by the initial electrons. 2. Creation of the electron–positron pairs in the volume. 3. Motion of the positrons to the rear surface of the target. The escaped positrons constitute the positron beam. For numerical estimates we employ tantalum and tungsten conversion targets and three energies of the initial electrons – 9 MeV, 40 MeV and 90 MeV – available at the existing KIPT accelerators. Radiation density ----------------- An electron with energy $E_e$ produces in a (thin) target photons with a wide spectrum – the so-called bremsstrahlung radiation – see [@koch59]. This spectrum can be approximated by the formula (see [@roy68; @shulga93]): $$\begin{aligned} \label{eq:brems} \frac{{\,\mathrm{d}}\sigma}{{\,\mathrm{d}}E_p} &=& \frac{4\alpha_{f} (Z r_0)^2}{E_p}\left[ 1+\frac{(E_e-E_p)^2}{E_e^2}-\frac{2}{3} \frac{(E_e-E_p)}{E_e}\right]\times \nonumber \\ &&\left[ \ln\left(\frac{2 E_e(E_e-E_p)}{m c^2 E_p}\right)-\frac{1}{2}\right]\; ,\end{aligned}$$ where $E_p$ is the photon energy, $\alpha_{f}$ is the fine structure constant, $r_0$ and $m c^2$ are the classical electron radius and its rest energy, respectively, and $Z$ is the charge number of the target material. The electron energy decreases after each photon emission; therefore the next photon is emitted, on average, with a smaller energy. Losses of this kind are referred to as radiation losses and are dominant in the considered energy range.
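The spectrum formula above is straightforward to evaluate numerically. The sketch below is an illustration only (not the code behind the figures); energies are in MeV and the result is in cm$^2$/MeV:

```python
import math

ALPHA_F = 1.0 / 137.035999    # fine structure constant
R0_CM = 2.8179403e-13         # classical electron radius [cm]
MEC2 = 0.510999               # electron rest energy [MeV]

def brems_dsigma_dEp(E_e, E_p, Z):
    """Approximate bremsstrahlung spectrum d(sigma)/dE_p of the text, for an
    electron of energy E_e emitting a photon of energy E_p < E_e in a material
    of charge number Z (energies in MeV, result in cm^2/MeV)."""
    x = (E_e - E_p) / E_e
    shape = 1.0 + x * x - (2.0 / 3.0) * x
    log_term = math.log(2.0 * E_e * (E_e - E_p) / (MEC2 * E_p)) - 0.5
    return 4.0 * ALPHA_F * (Z * R0_CM) ** 2 / E_p * shape * log_term

# 40 MeV electrons on tungsten (Z = 74): the spectrum falls steeply with E_p
for E_p in (2.0, 5.0, 10.0, 20.0):
    print(E_p, brems_dsigma_dEp(40.0, E_p, 74))
```

The roughly $1/E_p$ fall-off is why most photons land below the pair-creation threshold, as stated in the first bullet above.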
The radiation losses are proportional to the electron energy [@nist]. Owing to this linear dependence, the average electron energy decreases exponentially, with a decay length equal to the radiation length of the target material. The radiation spectra produced by electrons with an initial energy of 40 MeV at different depths in the tungsten target are presented in figure \[fig:sect\]. ![Bremsstrahlung spectra in tungsten at 0, 0.25, 0.5, 1 rad. length (from top to bottom) produced by 40 MeV electrons.\[fig:sect\]](sect.pdf){width="80.00000%"} In contrast to the electrons, which gradually lose energy while their number is conserved, the high-energy photons traversing the target decrease in number (intensity) while preserving their spectrum. The decrease in intensity occurs mainly through conversion of the photons into electron–positron pairs. Figure \[fig:photTaW\] presents the dependence of the total photon losses on energy, together with the losses due to conversion into pairs (data taken from NIST), for tantalum and tungsten (practically the same). As can be seen, at energies higher than $\sim 10$ MeV the photon losses are dominated by pair production. ![Losses of photons and yield of pairs for tantalum and tungsten. \[fig:photTaW\]](photTaW.pdf){width="\textwidth"} Positron density ---------------- For not too high energies of the gamma quanta, as in the case considered here, every quantum with energy $E_\mathrm{g} = m_ec^2 \gamma_\mathrm{g} $ produces $\kappa$ electron-positron pairs over one radiation length: $$\label{pairprod} \kappa(\gamma_\mathrm{g},Z) = \frac{7}{9\ln (183Z^{-1/3})}\left[\ln 2\gamma_\mathrm{g} -\frac{109}{42}- 1.2021(\alpha Z)^2\right]\; ,$$ where $Z$ is the charge number of the nuclei of the conversion target and $\alpha\approx 1/137$ is the fine structure constant. As can be seen from this formula, the dependence of the density of the born pairs on the quantum energy is weak (logarithmic).
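The weak logarithmic dependence is easy to see numerically. The sketch below evaluates the formula above for tungsten; it is illustrative only (near the threshold this asymptotic formula underestimates the yield):

```python
import math

ALPHA = 1.0 / 137.035999   # fine structure constant
MEC2 = 0.511               # electron rest energy [MeV]

def pairs_per_rad_length(E_gamma, Z):
    """Pairs produced by one photon per radiation length (formula of the text);
    E_gamma in MeV, Z the charge number of the converter nuclei."""
    gamma_g = E_gamma / MEC2
    coulomb = math.log(183.0 * Z ** (-1.0 / 3.0))
    return 7.0 / (9.0 * coulomb) * (
        math.log(2.0 * gamma_g) - 109.0 / 42.0 - 1.2021 * (ALPHA * Z) ** 2)

# tungsten (Z = 74): kappa grows only slowly (logarithmically) with energy
# once well above the pair-creation threshold
for E in (5.0, 10.0, 50.0):
    print(E, round(pairs_per_rad_length(E, 74), 3))
```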
The yield of electron–positron pairs in a tungsten converter as a function of the gamma energy, computed with the NIST code XCOM, is presented in figure \[fig:photTaW\]. To produce pairs, the energy of the gammas should be higher than 1.25 MeV. Convolving the bremsstrahlung spectrum with the pair-production probability (\[pairprod\]), we may deduce the spectrum of the total pair energy – electron plus positron – as a function of the initial electron energy. Figure \[fig:yeld94090\] presents the pair energy distributions for primary electron energies of 9, 40 and 90 MeV. ![Yield of the pairs born by bremsstrahlung, arb. units.\[fig:yeld94090\]](yeld94090.pdf){width="\textwidth"} As seen from the curves in this figure, the maximum number of pairs is produced at energies far below the initial electron energy: within the range from 5 MeV to 10 MeV for electron energies from 9 MeV up to 90 MeV. It is important to emphasize that the total number of pairs – the area under the corresponding curve – increases substantially with the electron energy. The positrons are born throughout the volume of the target bulk; their spectrum and density are determined by the initial energy of the impinging electrons and by the distance from the front face of the target. As follows from the theory considered above, each stage in the positron production process, $\mathrm{e}^{-}\to\gamma\to \mathrm{e}^{+}+\mathrm{e}^{-}$, increases the spectral width of the corresponding ensemble and shifts it toward lower energies. As a result, the maximum number of positrons is produced with a smaller energy than that of the primary electrons. Positron stream from the target ------------------------------- Positrons born in the target bulk move toward the rear face of the conversion target, and their energy degrades along the path. Unlike electrons, positrons also decrease in number because of annihilation with electrons.
Apart from annihilation, the interactions of positrons with matter are practically the same as those of electrons: the radiation losses dominate the energy degradation. The ionization losses, which are dominant at energies below a few MeV, are equal to those of electrons with a precision better than 10%, see [@roy68]. The annihilation cross section is inversely proportional to the energy. The positron losses due to two-photon annihilation (the dominant mechanism) read: $$\frac{\partial N}{N\partial t} = - \frac{\pi}{4\alpha}\, \frac{(\ln 2\gamma_+ -1)}{\gamma_+}\, \frac{A}{Z\Lambda(Z)}\; ,$$ where $t$ is the path length in radiation lengths, $\gamma_+$ the energy (Lorentz factor) of the positron, $\Lambda(Z)=\ln\left(183\,Z^{-1/3}\right)$ the Coulomb logarithm, $A$ the mass number of the target nuclei, and $Z$ the charge number. Optimal thickness of the target ------------------------------- As can be seen from the above, the electrons lose their energy considerably faster than the gamma quanta they generate. (The difference between the degradation of electrons moving through matter and that of photons is that the electrons lose energy while preserving their number, whereas the photons preserve their energy while decreasing in number.) In accordance with this mechanism, the density of gammas, and thus the density of the produced electron-positron pairs, increases with the target thickness up to a certain maximum. It then decays exponentially as the electron energy decreases to the limit below which the generated gammas are no longer able to produce positrons – the threshold energy of the electrons is about 2…3 MeV. As the born positrons move toward the rear face of the conversion target, their energy and number decrease. When the target is thin, a small number of gammas is produced and thus a small number of positrons. On the other hand, at large thickness most of the electrons' kinetic energy transfers into gammas and then into positrons.
But the latter annihilate in the target body. It follows from this reasoning that there exists an optimal target thickness that produces the maximum number of positrons. Figure \[fig:yield40L\] presents the yield of positrons with energies of 1.5, 5 and 10 MeV from a tantalum target as a function of its thickness (in radiation lengths) for a 40 MeV electron beam. ![Yield of 1.5, 5 and 10 MeV positrons from the tantalum target. \[fig:yield40L\]](yield40L.pdf){width="\textwidth"} As can be seen, the yield depends weakly on the positron energy and reaches its maximum at a thickness of 2…4 r.l. A study of the positron yield from the target as a function of the initial electron energy established that the number of gammas with 'convertible' energy increases only logarithmically with the electron energy. In turn, the pair-production cross section depends logarithmically on the quantum energy. Thus, the dependence of the number of produced positrons on the electron energy is rather weak. Nevertheless, the efficiency of positron production increases significantly with the electron energy, because the more energetic electrons produce a larger fraction of energetic gammas, which are converted into more energetic positrons; the loss rate of high-energy positrons is smaller than that of slow ones. Computing on the positron source ================================ Theoretical considerations yield a general picture of the process – the dependencies. To obtain specific numbers we used the CERN-originated code GEANT4 [@geant4]. For verification of the model, we used a scheme adjusted for registration of the well-known bremsstrahlung radiation. A test computation shows that the model is viable and meets the requirements: we obtain the characteristic bremsstrahlung spectrum superimposed with the spectrum of the two-photon annihilation of positrons, see figure \[fig:7\].
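Before turning to the simulation results, the thickness optimum discussed in the previous section can be reproduced with a deliberately crude one-dimensional toy model (an illustration only, not the GEANT4 model; both decay constants are invented for the sketch):

```python
import math

def positron_yield(L, source_decay=0.25, absorb=0.45, n=400):
    """Toy model: pairs are created at depth z at a rate decaying with the
    primary beam, exp(-source_decay*z); a positron born at z survives to the
    rear face with probability exp(-absorb*(L-z)).  Depths are in radiation
    lengths; midpoint-rule integration over the target thickness L."""
    dz = L / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * dz
        total += math.exp(-source_decay * z) * math.exp(-absorb * (L - z)) * dz
    return total

# yield rises with thickness, peaks at a few radiation lengths, then falls
yields = {L: positron_yield(L) for L in (0.5, 1, 2, 3, 5, 8, 12)}
print(max(yields, key=yields.get))   # -> 3
```

The peak position shifts with the ratio of the two constants, mirroring the observation that thicker targets become optimal at higher electron energies.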
![Gamma-spectrum from a tantalum target of 2 mm thickness, irradiated with 9 MeV electrons. \[fig:7\]](Fig5.pdf){width="\textwidth"} A major goal of the simulations was to obtain the energy spectra of the positrons emitted from the conversion target, together with the total yield $N_{p}$ relative to the total number of incoming electrons $N_e$, as a function of the target thickness. Simulations for the electron energies of 9 MeV, 40 MeV and 90 MeV determined the optimal thicknesses of the targets. The spectra of positrons per initial electron for the simulated energies and the optimal converters are presented in figure \[fig:8\]. The efficiency of positron production (the number of positrons per impinging electron) is presented in figure \[fig:9\]. ![Spectra of positrons at the target's exit: green line for 9 MeV electrons, red for 40 MeV and black for 90 MeV. \[fig:8\]](Fig6.pdf){width="\textwidth"} ![Efficiency of positron production versus the target thickness for electrons of 9 MeV (green), 40 MeV (red) and 90 MeV (black). \[fig:9\]](Fig7.pdf){width="80.00000%"} The results of the simulations confirmed the qualitative dependence of the positron yield on the electron energy and the target thickness, and made the quantitative results substantially more precise. The simulations also display a large number of isotropically distributed annihilation quanta. The isotropic distribution indicates that the positrons have been slowed down to nonrelativistic energies before annihilating. A detector for the annihilation photons would therefore be a good monitor of the positron source. Main results of the simulation are presented in table \[tab:1\].

  $E$ (MeV)   $N_e$      $N_p$   $\eta=N_p/N_e $                   $d$ (mm / r.l.)
  ----------- ---------- ------- --------------------------------- -----------------
  9           $10^{7}$   13466   $(1.350\pm 0.010)\times10^{-3}$   1.5 / 0.4
  40          $10^{6}$   56772   $(5.677\pm 0.024)\times10^{-2}$   5.0 / 1.3
  90          $10^{5}$   19235   $(1.924\pm 0.013)\times10^{-1}$   7.0 / 1.8

  : The results from simulation.[]{data-label="tab:1"}

As follows from the simulation results, the total positron yield from a tantalum conversion target of optimal thickness depends dramatically on the energy of the initial electrons: increasing the energy from 9 MeV to 90 MeV raises the yield by more than two orders of magnitude. This enhancement in the yield requires a thicker conversion target. Estimates of the positron yield from the existing NSC KIPT linacs – LUE-10 ($E=9$ MeV, average electron current $270\,\mu$A) and LUE-40 [@dovbnya14], whose energy can be varied in the range 40–90 MeV at an average current of $5\,\mu$A – show that the expected positron yield may be as high as $2.3\times 10^{12}$ pos/s (LUE-10) and $(1.8\dots 5.9)\times 10^{12}$ pos/s (LUE-40), see table \[tab:1\]. A classical installation for producing a monochromatic positron beam comprises the positron source – isotopes, or a linac with an electron-to-positron converter – and a moderator attached downstream. The moderator consists of a material with negative positron affinity (tungsten, or inert gases in the solid state). Upon bombardment, the positrons thermalize, and a fraction of them, with energies of a few eV, diffuses back to the surface. These ‘slow’ positrons are accelerated in a constant electric field and transported to the object under examination. Since the positron beam has a large angular spread, the moderator should be placed as close to the source as possible.
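The linac yield estimates above are simple arithmetic: the positron rate equals the electron rate (average current over the elementary charge) times the conversion efficiency $\eta$ of table \[tab:1\]. A quick check:

```python
E_CHARGE = 1.602176634e-19   # elementary charge [C]

def positron_rate(avg_current, efficiency):
    """Positrons per second from the average electron current [A] and the
    positrons-per-electron efficiency eta."""
    return avg_current / E_CHARGE * efficiency

print(positron_rate(270e-6, 1.350e-3))   # LUE-10: ~2.3e12 pos/s
print(positron_rate(5e-6, 5.677e-2))     # LUE-40 at 40 MeV: ~1.8e12 pos/s
print(positron_rate(5e-6, 1.924e-1))     # LUE-40 at 90 MeV: ~6e12 pos/s
```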
The moderator efficiency (the ratio of the ‘slow’ positron fraction to the total number of positrons) decreases substantially with the energy of the incident positrons and reaches $\sim 10^{-5}$ at energies of a few MeV [@rourke11]. One effective method to reduce the energy of the positrons entering the moderator was proposed at ANL [@long07]. The method consists in setting up an RF cavity in front of the moderator; the cavity slows down the positrons. Naturally, it requires a system to transport the positrons from the converter to the cavity. Design of the positron transport system ======================================= The positron beam produced in the conversion target possesses specific properties: a large energy spread (wide spectrum), a large angular spread, and a relatively small radius, of the order of a millimeter. Such a set of parameters, especially the wide angular spread, does not allow the beam to be directly accelerated or decelerated and transferred to the experimental site. The positron beam must first be properly shaped and cleaned of the electrons and the residual gammas. In order to match the beam emittance to the acceptance of the transport line, an Adiabatic Matching Device (AMD) is most commonly employed. While passing along the AMD, the emittance shape is transformed from a large angular spread and small radius into a relatively small angular spread and a large radius. Theoretical estimations ----------------------- For a preliminary estimate we use the so-called paraxial approximation, which is valid for relatively small angles between a positron trajectory and the axis of the system. Within this approximation, the transverse motion of a positron can be considered nonrelativistic, with the transverse mass $m = \gamma m_0$ ($\gamma $ being the Lorentz factor of the positron). In the cylindrical frame, suitable for a problem with axial symmetry, we can derive the nonrelativistic Hamiltonian.
Let us start from the Lagrangian $$\label{eq:lagr} \mathcal{L}=\frac{mv^2}{2}+e\vec{A}\cdot\vec{v}\; ,$$ where the magnetic field is described by the single azimuthal component $A_\theta(r,z)$ of the vector potential $\vec{A}$. Substituting the velocity components into (\[eq:lagr\]), $$\begin{aligned} v_r &= \dot{r}\; ; & v_z &=\dot{z}\; ; & v_\theta &= r\dot{\theta}\; ,\nonumber \intertext{we get the canonical momenta} p_r &= \frac{\partial \mathcal{L}}{\partial\dot{r}}=m\dot{r}\; ; & p_z &= \frac{\partial \mathcal{L}}{\partial\dot{z}}=m\dot{z}\; ; & p_\theta &= \frac{\partial \mathcal{L}}{\partial\dot{\theta}} = m r^2 \dot{\theta} + e A_\theta r\; . \label{eq:canonp}\end{aligned}$$ From the relation $$\mathcal{H} =\sum_i \dot{q}_ip_i - \mathcal{L}$$ accounting for (\[eq:canonp\]), we deduce an expression for the Hamiltonian in the axially symmetric field in the cylindrical coordinate frame: $$\label{eq:hamilt} \mathcal{H}=\frac{p_r^2+p_z^2}{2m}+\frac{1}{2m}\left(\frac{p_\theta}{r}-eA\right)^2\; ,$$ with $A\equiv A_\theta$. From (\[eq:hamilt\]) follow the canonical equations describing the particle trajectory. The equations for the coordinates are \[eq:canonco\] $$\begin{aligned} \dot{r} &= \frac{\partial\mathcal{H}}{\partial p_r}=\frac{p_r}{m}\; ; \\ \dot{z} &= \frac{\partial\mathcal{H}}{\partial p_z}=\frac{p_z}{m}\; ; \\ \dot{\theta} &= \frac{\partial\mathcal{H}}{\partial p_\theta} = \frac{1}{mr}\left(\frac{p_\theta}{r}-eA\right)\; .
$$ and for the momenta: \[eq:canonmo\] $$\begin{aligned} \dot{p}_r &= -\frac{\partial \mathcal{H}}{\partial r} = \frac{1}{m} \left(\frac{p_\theta}{r}-eA\right)\left(\frac{p_\theta}{r^2}+e\frac{\partial A}{\partial r}\right)\; ; \\ \dot{p}_z &= -\frac{\partial \mathcal{H}}{\partial z} = \frac{1}{m} \left(\frac{p_\theta}{r}-eA\right)\left(e\frac{\partial A}{\partial z}\right)\; ; \\ \dot{p}_\theta &= -\frac{\partial \mathcal{H}}{\partial \theta } = 0\; , \label{eq:canontheta}\end{aligned}$$ and for the temporal evolution of the Hamiltonian $$\label{eq:hamt} \frac{{\,\mathrm{d}}\mathcal{H}}{{\,\mathrm{d}}t} = \frac{\partial \mathcal{H}}{\partial t} = 0\;.$$ The two integrals of motion follow from (\[eq:canontheta\]) and (\[eq:hamt\]): the Hamiltonian itself, equal to the total energy of the particle, which does not depend explicitly on time, \[eq:const\] $$\begin{aligned} \mathcal{H}&=\mathrm{const}\; , \label{eq:consth} \intertext{ and the conservation law for the angular momentum:} \dot{p}_\theta &= 0\;\to\; p_\theta = mr^2\dot{\theta} + e A r = \mathrm{const}\; .\label{eq:constpt}\end{aligned}$$ Constant–field solenoid ------------------------ The simplest case is a constant magnetic field – the radially uniform solenoidal field $$B_r=B_\theta=0\;,\qquad B_z = B\; .$$ Assuming that only the azimuthal component of the vector potential is nonzero, from the expression $$\nabla \times \vec{A} = \vec{B}$$ we can find $$\label{eq:pota1} \frac{1}{r}\frac{\partial (r A_\theta )}{\partial r} = B\quad\to \quad A_\theta = \frac{Br}{2}\; ,$$ where the constant of integration is set to zero. As seen from (\[eq:pota1\]), the lines of constant magnetic potential (force lines) are parallel to the solenoid axis.
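As a sanity check of the equations above, the transverse motion in the uniform field can be integrated numerically: a positron launched on the axis with transverse speed $\beta_0 c$ traces a circle passing through the axis, so its maximum excursion equals twice the Larmor radius $\gamma m \beta_0 c/(eB)$. The sketch below (parameters chosen only for illustration) does exactly that:

```python
import math

C = 2.99792458e8         # speed of light [m/s]
E_OVER_M = 1.75882e11    # e/m for the electron [C/kg]

def max_excursion(gamma, beta0, B, turns=10, steps_per_turn=2000):
    """Integrate the transverse motion of a positron launched on-axis with
    transverse speed beta0*c in a uniform axial field B [T]: the velocity is
    rotated exactly at the relativistic cyclotron frequency each step, and
    the maximum distance from the axis is recorded."""
    omega_c = E_OVER_M * B / gamma
    dt = 2.0 * math.pi / omega_c / steps_per_turn
    cs, sn = math.cos(omega_c * dt), math.sin(omega_c * dt)
    vx, vy, x, y, rmax = beta0 * C, 0.0, 0.0, 0.0, 0.0
    for _ in range(turns * steps_per_turn):
        vx, vy = cs * vx + sn * vy, -sn * vx + cs * vy
        x, y = x + vx * dt, y + vy * dt
        rmax = max(rmax, math.hypot(x, y))
    return rmax

# ~5 MeV positron entering a 0.1 T solenoid at 30 degrees to the axis
gamma, phi, B = 10.8, math.radians(30.0), 0.1
beta0 = math.sqrt(1.0 - gamma ** -2) * math.sin(phi)
larmor = gamma * beta0 * C / (E_OVER_M * B)
print(max_excursion(gamma, beta0, B), 2.0 * larmor)   # agree to better than 1%
```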
Considering the initial conditions (at $t=0$) – $\theta = \dot{\theta} = 0$, $r = 0$, $\dot{r} = \beta_0 c $ (where $\beta_0\equiv \beta_\perp (t=0)$) – we get the projection of the trajectory onto the transverse plane: $$\label{eq:rtraj} r = \frac{\beta_0 c}{\omega }\left(1 - \cos \omega t\right)\; ,$$ where $\omega = \omega_\text{cycl}/2\gamma $, $\omega_\text{cycl} = e B /m = 1.7588047\times 10^{11}\times B[\text{T}]\,\text{s}^{-1}$. The positron oscillates around the axis at a frequency equal to half of the cyclotron frequency. The maximum deviation from the axis is $$\label{eq:rmax} r_\text{max} = \frac{2 \beta_0 c}{\omega }=\frac{4\gamma c m \beta_0 }{e B } \approx \frac{4\gamma c \sqrt{1-\gamma^{-2}}\sin\phi_0}{1.7588\times 10^{11} B[\text{T}] } \; ,$$ where $\phi_0 $ is the initial angle with respect to the axis. In the cylindrical coordinate frame centered at $R=r_\text{max}/2$, $\Theta = \pi/2$, the positron moves with constant speed along a circle of radius $r_*=r_\text{max}/2$, with angular frequency $\omega_*=2 \omega = \omega_\text{cycl}/\gamma $. The spatial period of this helical motion is $$\label{eq:Lz} L_z = \frac{2\pi \gamma c}{\omega_\text{cycl}}\cos\phi_0 \sqrt{1-\gamma^{-2}}=\frac{\pi}{2}r_\text{max}\cot\phi_0 \; .$$ The positron trajectory thus represents a Larmor ring of radius $r_\text{max}/2$ rotating with frequency $\omega_\text{cycl}/\gamma$. The center of rotation is displaced by $r_\text{max}/2$ from the axis. The so-called “single-particle emittance” (the envelope of the transverse projection), defined as $$\epsilon_{env} =r_\text{max}\frac{\beta_0}{\beta_\parallel} = \frac{4c \gamma \sqrt{1-\gamma^{-2}}}{\kappa B}\,\frac{\sin ^2\phi_0}{\cos\phi_0}$$ with $\kappa \equiv \omega_\text{cycl}/B = 1.7588\times 10^{11}\,\text{s}^{-1}\text{T}^{-1}$, is inversely proportional to the field amplitude $B$. So is the envisaged emittance of the positron beam. Tapered field ------------- As usual, see e.g.
[@chehab83; @chehab94], the field decreasing along the solenoid axis is formed according to the dependence $$B_z(r=0) = \frac{B_0}{1+\alpha z}\; ,$$ where $\alpha $ is the slope factor. This field may be represented by the potential $$\label{eq:potz} A_\theta(r,z) = \frac{B_0 r}{2(1+\alpha z)}\; .$$ The axial decrease gives rise to a radial component: $$B_r(r,z) =\frac{B_0 \alpha r}{2(1+\alpha z)^2}\; .$$ For the system under consideration, whose longitudinal size $Z$ is large compared to the transverse one, $r_\text{max} /Z\ll 1$, the factor $\alpha$ introduced above is small: the particle performs many turns while traversing the system. Adiabatic analysis of AMD ------------------------- A positron trajectory in the magnetic field of the AMD, described by the system of equations , with the potential , is rather complicated. A substantial simplification can be achieved under the assumption of a small transverse particle velocity together with a small longitudinal gradient of the field. These assumptions allow one to average the trajectory over the cyclotron period and then to consider the dynamics of the ‘Larmor rings’. A small gradient of the field allows one to employ the adiabatic theorem (for charged-particle dynamics in a magnetic field this theorem is referred to as Busch's theorem, see [@lawson]). Tapering of the longitudinal component of the field gives rise to a radial gradient and to a drift of the Larmor rings in the direction perpendicular both to the gradient and to the field lines – the so-called azimuthal drift: the centers of the rings remain at the initial radius, slowly turning in azimuth. The adiabatic approach is applicable if the particle turns many times while traversing the system, so that the frequency of the transverse oscillations may be considered a slowly varying function of time (or of the longitudinal coordinate).
In this case the most slowly changing parameter – the adiabatic invariant [@bakay81e] – is not the ‘transverse energy’ $$\mathcal{H}_\perp =\frac{p_r^2}{2m}+\frac{1}{2m}\left(eA\right)^2=\frac{\gamma m \beta_0^2 c^2}{2}\; ,$$ but its ratio to the frequency: $$\label{eq:adia} \mathcal{H}_\perp / \omega(z) \approx \mathrm{const}\; .$$ From , the main principle of the beam transformation by the AMD follows directly: due to a $D$-fold decrease of the field, $D = 1+\alpha Z$, the Larmor radius is increased by $\sqrt{D}$ times, and the transverse component of the velocity is decreased by $\sqrt{D}$ times. The ‘single-particle emittance’ is almost unchanged: $$\label{eq:emit1} \epsilon(Z) = r_*(1 + \sqrt{D}) \times \beta_0 c /\sqrt{D} = \epsilon(0)\, \frac{1 + \sqrt{D}}{2\sqrt{D}}\; .$$ Here we assumed that the center of the Larmor ring remains at the initial radius. The $D$-fold reduction of the ‘transverse energy’ is compensated by an increase of its longitudinal component – the positron trajectories straighten, as is schematically displayed in figure \[fig:amd\]. ![AMD scheme. The positrons at the target exit are red, the transverse projection of a trajectory is in blue.[]{data-label="fig:amd"}](amd.pdf){width="50.00000%"} Rough estimates of the positron beam transformation by the AMD are as follows. A positron with energy 2MeV, emitted at an angle of 0.1rad to the axis of an AMD with an entrance field of 1T, deviates by at most 2.6mm ($r_* = 1.3$mm). The initial longitudinal advance per period of transverse rotation is around 4cm. So, for a solenoid length of about 1m (the usual AMD length is about $40\dots 80$cm) and a field reduction factor of 10 – the field strength at the exit being 0.1T – the final beam radius will be about 5.2mm, with a threefold reduction of the angular spread, down to 0.033rad. Numerical simulations ===================== To corroborate the theoretical estimates, we performed numerical simulations.
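Before turning to the simulation results, the rough estimates of the previous section can be reproduced directly from the quoted formulas (a consistency check of our own, not code from the paper; reading the "2 MeV" as total energy and the exit radius as $r_*(1+\sqrt{D})$ are our assumptions):

```python
import math

# Sketch: the rough AMD numbers evaluated from the formulas in the text.

c = 2.99792458e8                 # m/s
kappa = 1.7588047e11             # omega_cycl / B, s^-1 T^-1
gamma = 2.0 / 0.511              # total energy over rest energy (assumption)
beta = math.sqrt(1.0 - gamma**-2)
phi0, B0, D = 0.1, 1.0, 10.0     # entrance angle, entrance field, reduction

# maximum deviation from the axis
r_max = 4.0 * gamma * c * beta * math.sin(phi0) / (kappa * B0)

# longitudinal period of the helix, both forms given in the text
L_z1 = 2.0 * math.pi * gamma * c * beta * math.cos(phi0) / (kappa * B0)
L_z2 = math.pi / 2.0 * r_max / math.tan(phi0)

# after a D-fold field decrease: the Larmor radius grows by sqrt(D) about
# a centre at the initial radius; the angular spread shrinks by sqrt(D)
r_star = r_max / 2.0
r_exit = r_star * (1.0 + math.sqrt(D))
angle_exit = phi0 / math.sqrt(D)

print(r_max, L_z1, r_exit, angle_exit)
```

The output reproduces the quoted 2.6 mm maximum deviation, ~4 cm longitudinal advance, ~5 mm exit radius, and ~0.03 rad exit angular spread, and confirms that the two expressions for $L_z$ agree.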
The numerical simulations of the positron beam generation, as well as of the beam dynamics, have been carried out for a system consisting of the 9MeV electron linac, the electron-to-positron converter, the transport line, and the RF cavity for reducing the positron energy. The packages GEANT [@geant4] and PARMELA [@parmela] were employed for these purposes. The simulations were aimed at clarifying the production and transport of low-energy positrons. In addition, we considered moderation (slowing down) of such positrons, bearing in mind that the yield of positrons from a moderator is inversely proportional to their energy [@rourke]. The PARMELA code allows one to simulate the spatial dynamics of charged particles in a magnetic field together with RF cavities. PARMELA was applied twice: first for the simulation of the electron beam dynamics in the linac, and second for the simulation of the positron beam dynamics downstream of the conversion target. The conversion process was simulated with the code GEANT. Figure \[fig:10\] presents a layout of the transport system used in the simulations. Between the conversion target and the RF cavity, the AMD is inserted (intended mainly to rotate the transverse phase ellipse by $\pi /2$), which reduces the transverse momentum of the positrons at the AMD exit. ![Low-energy positrons transport system[]{data-label="fig:10"}](Fig9.pdf){width="\textwidth"} The AMD magnetic field strength decreases from 1T to 0.08T over a length of 75cm. The field decreases in accordance with , with the parameter $\alpha =0.13$. Figure \[fig:11\] shows the on-axis field strength, beginning from the target rear plane and extending downstream to the rear end of the RF cavity. Both the AMD field index $\alpha $ and the length were optimized to obtain the minimal r.m.s. angular spread of the positron trajectories. ![AMD field strength along the axial coordinate.
\[fig:11\]](Fig10.pdf){width="\textwidth"} The RF cavity adjoining the AMD rear end is intended for slowing down the positron beam. The inner bore radius of the cavity was 5cm in the simulations; the magnetic field strength along the drift space and the cavity was set to 0.076T. The RF cavity design (see figure \[fig:10\]) was performed with the SUPERFISH code [@superfish]. The basic RF frequency is 114MHz, the 25th subharmonic of the accelerator frequency, $f_\text{linac}=2856\,\text{MHz}$. By tuning the magnetic field distribution, we obtained the required transformation of the transverse phase portrait of the positron beam, figure \[fig:12\]. ![Transverse phase portraits of the positron beam: left – at AMD entry, right – at exit. \[fig:12\]](Fig11.pdf){width="80.00000%"} An optimal reduction of the positron energy was obtained by varying the RF cavity field amplitude and phase. The positron spectra before and after the AMD are presented in figure \[fig:13\] for an RF field strength of 6MV/m. ![Spectra of the positron beam: red curve at AMD entry, blue at exit. \[fig:13\]](Fig12.pdf){width="80.00000%"} As can be seen from the figure, the cavity decreases the energy of a large fraction of the positron beam to the range $5\dots 200$keV. The maximum of the positron spectrum is at 50keV; approximately 10% of all positrons fall within the range $0\dots 200$keV. Summary and conclusion ====================== The production of positrons by electron linacs has been considered theoretically, together with the problem of transporting the positron beam from the electron-to-positron converter to a moderator that reduces the positron energy to the desired level. An analysis of the adiabatic matching device (AMD) has been performed. The dependence of the total positron yield on the energy of the incident accelerated electrons, as well as on the thickness of the conversion target, has been obtained.
As was shown, there exists an optimal target thickness that maximizes the yield. This maximum is sufficiently broad to allow variation of the target thickness for design reasons without a significant reduction of the yield. Numerical simulations of the electron-to-positron conversion process have been performed for electron energies from 9MeV to 90MeV. The conversion ratio and the spectrum of the positron beam, as well as the optimal target thickness, have been estimated; the results are in agreement with those obtained by other authors. The simulations show a significant number of isotropically distributed annihilation photons, whose detection allows tuning of the positron source. Qualitative analytical dependences of the positron beam parameters at the system exit upon the amplitude and the decrease factor of the magnetic field in the AMD solenoid have been established. These dependences have been used for optimizing the system. Numerical simulations allow optimizing the AMD parameters for the solenoid available in the laboratory. A possible application of the subharmonic RF cavity for reducing the positron energy has also been estimated and validated by the simulations. As was shown, this cavity can substantially decrease the positron energy and thus facilitate operation of the moderator. The results obtained indicate the feasibility of a slow-positron source based on the linacs of NSC KIPT. Acknowledgements {#acknowledgements .unnumbered} ---------------- The work is supported by the project X-6-1/2016 of the National Academy of Sciences of Ukraine. R.W. Siegel. Positron annihilation spectroscopy. *Annual Review of Materials Science*, 10: 393–425, 1980. Aditya Narayan Singh. Positron annihilation spectroscopy in tomorrow’s material defect studies. *Applied Spectroscopy Reviews*, 51: 1–48, 2016. B. Vlaholic, S. Golde. Review of low-energy positron beam facilities.
In *Proc. of IPAC 12*, pages 1464–1466, 2012. R. Wada, T. Hyodo, T. Kasuge, et al. New experimental station at [KEK]{}. [S]{}low positron facility. *Journal of Physics: Conference Series*, 444: 012082, 2016. B. E. O’Rourke et al. Recent developments and future plans for the accelerator based slow positron facilities at [AIST]{}. *Materials Science Forum*, 733: 285–290, 2013. S. Chemerisov and C. D. Jonah. Developments of the positron facility at the Argonne National Laboratory’s 20 [MeV]{} linac. *Materials Science Forum*, 607: 243–247, 2009. V. I. Artemov. *Methods of production and forming of the positron beams in the linear accelerators*. TsNIIatominform, Moscow, 1984. (in Russian). A.I. Akhiezer and N.F. Shulga. *Electrodynamics of High Energies in Substance*. Nauka, 1993. H. W. Koch and J. W. Motz. Bremsstrahlung cross-section formulas and related data. *Rev. Mod. Phys.*, 31: 920, 1959. R. R. Roy and Robert D. Reed. *Interactions of photons and leptons with matter*. Academic Press, New York and London, 1968. National Institute of Standards and Technology (NIST). The XCOM code. <http://physics.nist.gov/cgi-bin/Xcom/xcom3_1>, 2016. GEANT4: a toolkit for the simulation of the passage of particles through matter. <https://cern.ch/geant4>, 2007. A.N. Dovbnya, M.I. Ayzatsky, V.N. Boriskin, et al. State and prospects of the linac of the nuclear-physics complex with energy up to 100 [MeV]{}. In *Problems of Atomic Science and Technology, ser. “Nuclear physics investigation”*, volume 3 (91), pages 60–63. 2014. B. E. O’Rourke, N. Hayashizaki, A. Kinomura, R. Kuroda, E. Minehara, T. Ohdaira, N. Oshima, and R. Suzuki. Simulations of slow positron production using a low energy electron accelerator. arXiv:1102.1220v2, 2011. J. Long, S. Chemerisov, W. Gai, C. D. Jonah, W. Liu, and H. Wang. Study on high-flux accelerator positron source. In *Proc. of IPAC 07*, pages 2921–2923, 2007. R. Chehab, G. LeMeur, B. Mouton, and M. Renard.
Adiabatic matching device for the [O]{}rsay linear accelerator. *IEEE Trans NS*, NS-30(4): 2850–2852, 1983. Robert Chehab. Positron sources. In *CAS, Fifth General Accelerator Course. CERN 94-01, vol II*, pages 643–678. 1994. J.D. Lawson. *The Physics of Charged–Particle Beams*. Clarendon Press, Oxford, 1977. A.S. Bakay and Yu.P. Stepanovsky. *Adiabatic invariants*. Naukova Dumka, Kiev, 1981. (in Russian). L. M. Young and J. M. Billen. The particle tracking code [PARMELA]{}. In *PAC 2003, Portland, USA*, page 3521, 2003. J. H. Billen and L. M. Young. [POISSON/SUPERFISH]{} on [PC]{} compatibles. In *PAC 1993, Washington, USA*, pages 790–792, 1993.
--- abstract: 'Dynamical decoupling has been actively investigated since Viola first suggested using a pulse sequence to protect a qubit from decoherence. Since then, many schemes of dynamical decoupling have been proposed to achieve high-order suppression, both analytically and numerically. However, hitherto, there has not been a systematic framework to understand all existing uniform $\pi$-pulse dynamical decoupling schemes. In this report, we use the projection pulse sequences as basic building blocks and concatenation as a way to combine them. We derive concatenated-projection dynamical decoupling (CPDD), a framework in which we can systematically construct pulse sequences to achieve arbitrarily high suppression order. All previously known uniform dynamical decoupling sequences using $\pi$ pulses fit into this framework. Understanding uniform dynamical decoupling as successive projections on the Hamiltonian will also give insight into how to invent new ways to construct better pulse sequences.' author: - Haoyu Qi - 'Jonathan P. Dowling' title: 'A Method for Generating All Uniform $\pi$-Pulse Sequences Used in Deterministic Dynamical Decoupling' --- Introduction {#sec:intro} ============ One of the major difficulties in the realization of quantum computing and quantum information processing is protecting the quantum state from decoherence. Quantum error correction protocols were developed to meet this challenge [@Shor:1995; @Steane:1996; @Knill:1997]. For a review and recent progress in this area, see references [@Zoller:2005; @Steane:1998]. While quantum error correction can be regarded as a form of closed-loop control, dynamical decoupling has been proposed as a way to counteract the interaction between a quantum system and the environment by an open-loop control field. The idea of using pulse sequences to protect nuclear spins from classical decoherence dates back to 1950, when the spin-echo method was found [@Hahm:1950].
Since then, many pulse methods have been developed in nuclear magnetic resonance spectroscopy [@NMR:book]. In 1998, it was first pointed out that a similar technique, periodic dynamical decoupling (PDD), can be applied to open quantum systems [@Viola:98]. By using a control field with duration shorter than the time scale of the environment, dynamical decoupling can suppress the interaction between a qubit and the bath [@Viola:98]. Important theoretical progress was made by understanding the effect of dynamical decoupling as the result of a symmetrizing procedure over the group composed of all independent $\pi$ pulses [@Viola:99; @Zanardi:99]. However, the finite switching time in real experimental conditions makes the symmetrizing imperfect. Therefore, concatenated dynamical decoupling (CDD) was proposed to eliminate higher-order errors in the interaction Hamiltonian [@KhodjastehLidar:04; @KhodjastehLidar:07]. In realistic experiments, pulse imperfections, such as rotation-angle errors and finite pulse width, must be taken into account. CDD is preferred over PDD in this case, due to the fact that concatenation suppresses not only the interaction with the environment but also the pulse errors to higher order [@KhodjastehLidar:04; @KhodjastehLidar:07]. Meanwhile, by optimizing the pulse intervals to suppress the low-frequency region of the noise spectrum, Uhrig dynamical decoupling (UDD) can achieve the same suppression order with fewer pulses, compared with CDD [@Uhrig:07; @Uhrig:08]. Although it has superior performance, UDD is very sensitive to pulse errors, due to the fact that it only uses single-axis rotations. However, to protect unknown states, uniform DD with multi-axis rotations can compensate for errors due to its symmetric structure [@Xiao:11; @Lange:2010; @Ryan:2010; @Souza:2012].\ Another development in dynamical decoupling is to use random pulses, instead of deterministic schemes, for sufficiently long sequences [@Viola:05; @Viola:08].
Instead of trying to suppress the interaction to arbitrarily high orders, random dynamical decoupling schemes improve the time dependence of the error accumulation from quadratic to linear [@Viola:05; @Viola:08].\ In this paper we only consider deterministic uniform dynamical decoupling, for two reasons: (1) it is easy to implement in experiment, and (2) it is robust against pulse errors compared to UDD. Besides analytical calculations, recently many other DD sequences have been found using genetic algorithms to optimize the suppression order [@Quiroz:13], some of which even achieve the same suppression order with fewer pulses than CDD. However, a unified understanding of all known DD schemes has not been developed until now.\ In this work, we propose concatenated-projection dynamical decoupling (CPDD) to unify all known uniform DD schemes. Our framework gives a way to construct new pulse sequences and to calculate their suppression order. In Sec. \[sec:set\], we describe the mathematical setting of the dynamical decoupling technique. In Sec. \[sec:CPMG\], we first define our projection pulse sequence and explain its effect as an ‘atomic’ projection. Then we use concatenated projections along different directions to construct more complex pulse sequences with arbitrary suppression order in Sec. \[sec:Concat\]. These results comprise the two cornerstones of our theory of CPDD. In Sec. \[sec:CPDD\] we formally introduce CPDD by defining the CPDD equivalence classes, which are specified by three integers. In subsection A, a series of properties of CPDD is developed; in subsection B we design a deterministic scheme to construct an optimized pulse sequence for each suppression order. A table of known DD schemes is given as well to show how these known schemes fit into our CPDD framework. In Sec. \[sec:DISCUSS\], we discuss why, intuitively, some CPDD sequences are superior to CDD.
We also point out a typo in Ref. [@Quiroz:13], which is easily detected within the framework of CPDD. We present our conclusion in Sec. \[sec:conc\]. Dynamical decoupling settings {#sec:set} ============================= We consider a qubit system $S$ coupled to an arbitrary bath $B$, which together form a closed system on the Hilbert space $\mathcal{H}_S\otimes\mathcal{H}_B$. The overall Hamiltonian can be written in the form, $$H_0 = H_S\otimes{1\!\!1}_B + {1\!\!1}_S\otimes H_B +H_{SB},$$ where ${1\!\!1}$ is the identity operator, $H_S$ is the pure system Hamiltonian, $H_B$ is the pure environment Hamiltonian, and $H_{SB}$ is the interaction between the qubit and the bath. In the following, we will assume the interaction takes the general linear form, $$H_{SB} = \sigma_x\otimes B_x + \sigma_y\otimes B_y + \sigma_z\otimes B_z, \label{eq:HSB}$$ where $B_x,B_y$ and $B_z$ are arbitrary operators on $\mathcal{H}_B$. Decoherence can be suppressed by adding a control field solely to the system, i.e., $H_c(t)\otimes{1\!\!1}_B$. If the control field is a series of pulses, then the suppression effect can be understood as a symmetrizing procedure [@Viola:99]. In the toggling frame, the Hamiltonian is transformed into,$$\tilde{H}(t) = U_c^\dagger(t) H_0 U_c(t) ,$$ where $U_c(t) = \mathcal{T}\exp\lbrace-i\int_0^t H_c(t')dt'\rbrace$ is the evolution operator of the control field. Here $\mathcal{T}$ is the time-ordering operator. The evolution of the state vector is governed by the evolution operator,$$\tilde{U}(t) = \mathcal{T}\exp\Big\lbrace-i\int_0^t \tilde{H}(t')dt'\Big\rbrace.$$ If the sequence length is $\tau_c$, we can define an average Hamiltonian $\bar{H}(\tau_c)$ by $\tilde{U}(\tau_c) = \exp(-i\bar{H}\tau_c)$. By expanding $\tilde{U}(\tau_c)$, we can collect terms according to different orders of $\tau_c$,$$\mathcal{T}\exp\Big\lbrace-i\int_0^{\tau_c} \tilde{H}(t)dt\Big\rbrace = e^{-i[\bar{H}^{(0)}+\bar{H}^{(1)}+...]\tau_c}.
\label{eq:average}$$ The first two terms are $$\begin{aligned} &\bar{H}^{(0)}&= \frac{1}{\tau_c}\int_0^{\tau_c}\tilde{H}(t)dt, \notag\\ &\bar{H}^{(1)}&=\frac{-i}{2\tau_c}\int_0^{\tau_c}dt_1\int_0^{t_1}dt_2 [\tilde{H}(t_1),\tilde{H}(t_2)], \notag\\ &...& \label{eq:magnus}\end{aligned}$$ We make the separation, $$\label{eq:err1} \bar{H} = {1\!\!1}_s\otimes H_B + \bar{H}_{\rm{err}} ,$$ in which $\bar{H}_{\rm{err}}$ is the part responsible for the undesired evolution of the qubit. In the context of dynamical decoupling, the only measure of the performance of a pulse sequence is the suppression order. The suppression order $N$ is defined through the following equation, $$\begin{aligned} \label{eq:err2} \bar{H}_{\rm{err}} = \sum_{m=N+1}^{\infty}\bar{H}^{(m)},\end{aligned}$$ which means that the corresponding pulse sequence achieves $N$th-order decoupling. Projection pulse sequence {#sec:CPMG} ========================= Now consider that we use $K$ uniform ideal $\pi$ pulses as the control field. Thus the field takes the form $H_c(t) = -\sum_{j=1}^K \frac{\pi}{2}\sigma_j\delta(t-t_j)$, where $t_j = j\tau_d$, $\tau_d$ is the pulse interval, and $\sigma_j\in \lbrace {1\!\!1},\sigma_x, \sigma_y,\sigma_z\rbrace$. In the following, we use $P_K...P_1$, $P_j\in\lbrace I,X,Y,Z\rbrace$, to represent the corresponding pulse sequence. In the limit $\tau_d\rightarrow 0$, we have a continuous pulse sequence, and only the zeroth-order term in the Magnus expansion survives,$$\bar{H}^{(0)}= \frac{1}{\tau_c}\int_0^{\tau_c}U_c^\dagger(t)H_0U_c(t)dt. \label{eq:1st}$$ Also, since the pulses are ideal, $U_c(t)$ is a piece-wise constant function. The zeroth-order average Hamiltonian reduces to $$\bar{H}^{(0)}= \frac{1}{K}\sum_{j=1}^K U_{c}^\dagger (t_j) H_0U_{c}(t_j). \label{eq:symmetrize}$$ If we choose pulses such that $U_{c}(t_j)$ runs through each element of a certain group, such as $\mathcal{G}=\lbrace I,X,Y,Z\rbrace$, Eq.
(\[eq:symmetrize\]) is just a symmetrizing procedure which projects $H_0$ onto the commutant of the group algebra [@Viola:99]. A more intuitive way to view the effect of pulse sequences is to look at them as combinations of basic projections [@KhodjastehLidar:04]. The simplest pulse sequence is $P_iP_i$, which is similar to the CPMG pulse sequence [@Bloem:07; @Carr:1954]. Hereafter we call it the projection pulse sequence and use the notation $p_i = P_iP_i$. To explain its projecting effect, we consider the spin-boson model, which induces the longitudinal decay of the spin. The interaction takes the form, $$H_{SB} = \sum_k(g_k\sigma^+\otimes b_k+g_k^*\sigma^-\otimes b_k^\dagger),$$ where $b_k$ is the annihilation operator of a photon with momentum $k$, and $g_k$ is the coupling strength between the photon mode $k$ and the spin. Here $\sigma^\pm$ is the raising (lowering) operator, $\sigma^\pm = \sigma_x\pm i\sigma_y$. If we apply $p_z=ZZ$ to the system, then $U_c(t_j)\in \mathcal{G} = \lbrace{1\!\!1},Z\rbrace$. Using Eq. (\[eq:symmetrize\]), the interaction term will be completely removed in the continuous limit due to the fact that $\sigma_z\sigma_{x(y)}\sigma_z = -\sigma_{x(y)}$. Therefore, geometrically, the pulse sequence $p_z$ projects the Hamiltonian along the $z$ direction. However, in real experimental conditions, there is an upper limit on the pulse switching rate; thus $\tau_d$ is finite. Although the Magnus expansion is still valid so long as $\tau_c\omega_c \ll 1$, where $\omega_c$ is the cut-off frequency of the bath, the projection is not exact anymore due to higher-order corrections. \[theo:CPMG\] : Assume the interaction between a single qubit and the bath takes the form of Eq. (\[eq:HSB\]). After applying the projection pulse sequence $p_j$ ($j=x,y,z$) with pulse interval $\tau_d$, the zeroth-order error Hamiltonian defined in Eq. (\[eq:err1\]) and Eq.
(\[eq:err2\]) is given by,$$\bar{H}^{(0)}_{\rm{err}} \equiv \pi_j^{(0)}H_0=\sigma_j\otimes B_j .$$ Thus the full error average Hamiltonian is given by, $$\bar{H}_{\rm{err}} = \sigma_j\otimes [B_j+k_j^{(1)}(\tau_d)]+\sum_{i\perp j}\sigma_i\otimes [B_i^{(1)}+k_{i}^{(2)}o(\tau_d^2)],$$ where we use $\pi_j^{(0)}$ to represent the mapping from $H_0$ to $\bar{H}^{(0)}_{\rm{err}}$ that is induced by the projection pulse sequence $p_j$. The symbol $\perp$ represents the directions orthogonal to direction $j$, and $B_i^{(1)}\sim k_i^{(1)}o(\tau_d)$. Here $k^{(n)}_i$ is some combination of commutators of the bath operators with dimension $[H]^n$. This projection point of view gives a geometrical and intuitive way to understand the effect of the pulse sequence $P_jP_j$. The full proof is given in Appendix A. In summary, we have shown that the effect of the projection pulse sequence $p_i =P_iP_i$ is to project the Hamiltonian along the $i$ direction up to first order. Concatenation of cyclic pulse sequences as successive projections {#sec:Concat} ================================================================= The higher-order terms remaining in $\bar{H}_{\rm{err}}$ after applying the projections will coherently add up with time. To achieve higher-order suppression, we need to project the Hamiltonian along different directions successively. We will show in this section that the effect of the concatenation of cyclic pulse sequences is to apply successively the projections induced by each pulse sequence. A pulse sequence is called cyclic when the generated evolution operator is periodic with period $\tau_c$ up to a phase factor, $$U_c(n\tau_c) = e^{i\phi}U_c(\tau_c), \label{eq:cyc}$$ where $n=0,1,2~...$ and $\phi$ is an arbitrary phase. An equivalent definition of a cyclic pulse sequence is that the product of all the pulses is equal to one, up to an arbitrary phase, $$\prod_{i=1}^{K}P_i = e^{i\phi}{1\!\!1}, \label{eq:cyc1}$$ which follows directly from Eq.
(\[eq:cyc\]) when $n=0$. We will see that this property is necessary for the proof of the equivalence between concatenation and successive projections. Another useful property of cyclic pulse sequences is that the concatenation of two cyclic pulse sequences is still cyclic. The proof is straightforward and we include it in Appendix A. Having the definition of a cyclic pulse sequence, we now prove the second basic theorem of our CPDD scheme. \[lem:conca\] : Consider two pulse sequences A and B, $P_{K}^i...P_{1}^i$, where $i = A,B$, with the same pulse interval. The first pulse sequence $A = P_jP_j$ ($j\in\lbrace x,y,z\rbrace$) is a projection pulse sequence, and sequence $B$ is concatenated from multiple projection pulse sequences. A third pulse sequence C is constructed by concatenating A and B, $C = A[B] \equiv P_jBP_jB$. The following relationship holds, $$\pi_C^{0} = \pi_B^{0}\pi_A^{0},$$ where the mapping $\pi_i^{0}$ induced by applying the pulse sequence $i$ is defined in Theorem \[theo:CPMG\]. Lemma \[lem:conca\] is the theoretical cornerstone of this work. It explains why concatenation can increase the suppression order, which is not so obvious. The cyclic property and the commutability of different pulses (up to an irrelevant phase factor) are necessary for the proof. The full proof is included in Appendix A. Concatenated projections dynamical decoupling {#sec:CPDD} ============================================= What really distinguishes our work from the CDD scheme is that we choose the projection pulse sequence as the basic element of concatenation. Motivated by Theorem \[theo:CPMG\] and Lemma \[lem:conca\], we define our concatenated-projection dynamical decoupling (CPDD) as a new way to construct pulse sequences by applying projections along different directions successively.
Since each projection suppresses the interaction terms orthogonal to it by one more order, by appropriately combining different projections, our CPDD can achieve arbitrarily high suppression order. \[def:CPDD\] : A CPDD pulse sequence is specified by an ordered series $i_N,i_{N-1},...,i_1$. It is constructed by concatenating N projection pulse sequences successively, $A = p_{i_N}[p_{i_{N-1}}[...[p_{i_1}\underbrace{]...]}_N$, where $i_j\in \lbrace x,y,z \rbrace$ and $1\leq j\leq N$. The suppressing effect of the CPDD sequence on the Hamiltonian follows immediately from combining the effects of projections and concatenation. \[theo:con\] : Consider a CPDD pulse sequence A specified by $i_N,i_{N-1},...,i_1$. After applying pulse sequence A, the average interaction Hamiltonian is given by$$\bar{H}_{err} = \sum_{i=x,y,z}\sigma_i\otimes[ B_i^{(d_i)} + k_i^{(d_i+1)}o(\tau_d^{d_i+1})],$$ where $$d_i = \sum_{j\perp i}n_j,$$ and $n_j$ is the number of $p_j$ sequences. **Proof** : Repeatedly using Lemma \[lem:conca\], the leading order of the error average Hamiltonian after applying sequence $A$ is given by$$\pi_A^{(0)} H_0 = \pi^{(0)}_{i_1}\pi_{i_2}^{(0)}...\pi_{i_N}^{(0)}H_{0}~.$$ From Theorem \[theo:CPMG\], each projection removes the first-order terms in the perpendicular directions. Therefore, $$\bar{H}_{err}^{(0)} = \sum_{i=x,y,z}\sigma_i\otimes B_i^{(d_i)},$$ where $$\label{eq:sum} d_i = \sum_{j\perp i}n_j.$$ From Theorem \[theo:con\] we can see that the effect of the $n_j$ sequences $p_j$ with $j\perp i$ is to suppress the error Hamiltonian along the $i$ direction to order $d_i$. From Eq. (\[eq:sum\]) we also notice that the order in which the different projection pulse sequences are concatenated does not affect the leading order of the average Hamiltonian along each direction. Therefore we can define an equivalence relationship between different CPDD pulse sequences, \[def:eqv\] : Consider two pulse sequences $A$ and $A'$.
The leading orders of the error Hamiltonians induced by each of them are $\bar{H}^{(0)}_{err} = \sum_{i=x,y,z}\sigma_i\otimes B_i^{(d_i)}$ and $\bar{H}^{(0)'}_{err} = \sum_{i=x,y,z}\sigma_i\otimes B_i^{(d_i')}$. We define $A$ and $A'$ to be equivalent to each other$$A\sim A',$$ if the leading orders of the average error Hamiltonians are the same along each direction, namely $d_i=d_i'$. It can easily be proved that the relationship defined above satisfies the three properties of an equivalence relation. Therefore, for a CPDD sequence specified by the series $a = i_N,i_{N-1},...,i_1$, all CPDD sequences specified by $a$’s permutations $A\lbrace i_N,i_{N-1},...,i_1\rbrace$ form an equivalence class. By virtue of the equivalence class, only three numbers $n_x,n_y,n_z$ are needed to completely specify a CPDD class. : A CPDD class is defined as an equivalence class with the equivalence relationship defined in Definition \[def:eqv\], specified by three integers, $\lbrace n_x,n_y,n_z\rbrace$. The structure of the pulse sequence can be generated by concatenating all $n_i$ $p_i$ sequences ($i=x,y,z$) in arbitrary order. From the definition of CPDD and Theorem \[theo:con\], we derive a series of properties satisfied by CPDD sequences and their equivalence classes. Properties of CPDD {#subsec:propert} ------------------- Due to the way concatenation connects two pulse sequences, we can derive two properties that the structure of each CPDD sequence must satisfy.\ \ 1. For an arbitrary CPDD pulse sequence, each odd site has the same kind of $\pi$ pulse. We prove this by induction. Consider a pulse sequence $A = P_KP_{K-1}...P_{1}$, which is concatenated from $N$ projection pulse sequences. Pulse sequences $ A_n$ are concatenated from the first $n$ ($1\leq n\leq N$) of them. The following relations are satisfied, $$\begin{aligned} \label{eq:induct} A_n = p_i[A_{n-1}],\end{aligned}$$ where $p_i$ is the $n$th projection pulse sequence.
Assume that for the subsequence $A_{n-1}$ the pulses at all odd sites are the same, $$A_{n-1} = P_{2n-2}P_0...P_2P_0. \label{eq:recur}$$ Let us examine the pulse sequence $A_n$, $$\begin{aligned} A_n &=& p_i[A_{n-1}]\notag\\ &=&(P_iP_{2n-2})P_0...P_2P_0(P_iP_{2n-2})P_0...P_2P_0.\end{aligned}$$ Therefore, all the pulses at odd sites are still the same. Since $A_2 = p_i[p_j]=(P_iP_j)P_j(P_iP_j)P_j$ also has the same kind of pulse on its odd sites, by induction we have proved that for each odd site of $A_N$ the pulses are the same: $$P_{2m+1} = P_0, ~~~m=0,1,...$$\ \ 2. For an arbitrary CPDD pulse sequence, the first-half and second-half subsequences are the same.\ Again this property follows from the definition of concatenation. Using Eq. (\[eq:induct\]) for $n = N$ we have $$\begin{aligned} A = p_{i_N}[A_{N-1}].\end{aligned}$$ Writing $A_{N-1} = P_{K/2}P_{K/2-1}...P_1$, where $K$ is the length of $A$, we have $$A = (P_{i_N}P_{K/2})P_{K/2-1}...P_1(P_{i_N}P_{K/2})P_{K/2-1}...P_1.$$ The sequence $A$ is thus composed of two identical copies of the same subsequence, or more precisely $$P_m = P_{m-K/2}, ~~m=K/2+1,...,K.$$\ \ 3\. For the CPDD class $\lbrace n_x,n_y,n_z\rbrace$, the number of pulses, or sequence length, $K$ is given by $$K_{n_x,n_y,n_z} = 2^{n_x+n_y+n_z}. \label{eq:length}$$\ The proof is straightforward. First, each basic projection is induced by two identical pulses. Second, the length of the concatenation of two pulse sequences $A$ and $B$ is the product of their lengths, $K_{A[B]} = K_AK_B$. Therefore, for a pulse sequence composed of $n_i$ projection sequences $p_i$ along each direction, the total pulse number $K$ is given by Eq. (\[eq:length\]).\ \ 4\. For the CPDD class $\lbrace n_x,n_y,n_z\rbrace$, the suppression order achieved is given by $$N_{n_x,n_y,n_z} = \min{\lbrace n_y+n_z,n_x+n_z,n_x+n_y\rbrace}.
\label{eq:suppr}$$ From Theorem \[theo:con\], the leading order of the error Hamiltonian induced by any pulse sequence in the CPDD class $\lbrace n_x,n_y,n_z \rbrace$ is given by $$\bar{H}^{(0)}_{err} = \sum_{i=x,y,z}\sigma_i\otimes B_i^{(d_i)},$$ where $d_i = \sum_{j\perp i}n_j$. Since the suppression order $N$ is defined as the leading order of $\bar{H}_{err}$, $N=\min{\lbrace d_x,d_y,d_z\rbrace}$.\ \ From the expression for the suppression order, Eq. (\[eq:suppr\]), we can see that simply increasing the pulse number (the number of projections) does not necessarily increase the suppression order. In fact, within the framework of CPDD, the suppression order increases only at certain pulse numbers.\ \ 5. For a given suppression order $N$, the minimum number of pulses, $K_{\rm{min}}$, required to achieve that suppression order is given by $$\log_2(K_{\rm{min}}) = \frac{1}{2}\left[3N+1^{\oplus N}\right], \label{eq:Kinc}$$ where $1^{\oplus N} = \underbrace{1\oplus 1 \oplus ~...~\oplus 1}_{N}$ equals $1$ for odd $N$ and $0$ for even $N$.\ \ To achieve suppression order $N$, $3N$ terms in the interaction Hamiltonian need to be eliminated due to the form of $H_{SB}$, Eq. (\[eq:HSB\]). However, each basic projection $\pi_i$ eliminates a pair of terms (the two directions orthogonal to $i$), which implies that only an even number of terms can be eliminated by a given CPDD sequence. Therefore, one extra elimination must be added when $N$ is odd. Dividing the total number of eliminated terms by two we have $$\sum_{i=x,y,z}n_i = \frac{1}{2}\left[3N+\frac{1}{2}\left(1+(-1)^{N+1}\right)\right] = \frac{1}{2}\left[3N+1^{\oplus N}\right].$$ Using the results of property 3, we have Eq. (\[eq:Kinc\]). The series of minimal pulse numbers $4,8,32,64,256,...$ was first found by a genetic algorithm in Ref. [@Quiroz:13]; it is now understood within the framework of our unifying CPDD. Optimized uniform dynamical decoupling -------------------------------------- The pulse sequences corresponding to the pulse numbers in Eq. (\[eq:Kinc\]) use the minimum number of pulses at each suppression order.
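Properties 3–5 are easy to check numerically. The short script below is an illustrative sketch (the helper names are ours, not from the paper): it computes the suppression order and length of a CPDD class, and confirms by brute-force enumeration that the minimal pulse numbers of Eq. (\[eq:Kinc\]) reproduce the series $4,8,32,64,256,...$

```python
from itertools import product

def suppression_order(n):
    """Property 4: N = min_i d_i, with d_i the sum of n_j over the axes j orthogonal to i."""
    nx, ny, nz = n
    return min(ny + nz, nx + nz, nx + ny)

def length(n):
    """Property 3: K = 2**(n_x + n_y + n_z)."""
    return 2 ** sum(n)

def k_min(N):
    """Property 5: log2(K_min) = (3N + 1^(xor N)) / 2, where 1^(xor N) = N mod 2."""
    return 2 ** ((3 * N + N % 2) // 2)

def k_min_brute(N, bound=12):
    """Smallest length over all classes {n_x, n_y, n_z} reaching suppression order >= N."""
    return min(length(n) for n in product(range(bound), repeat=3)
               if suppression_order(n) >= N)

print([k_min(N) for N in range(1, 6)])   # -> [4, 8, 32, 64, 256]
```

The same two helpers also reproduce the $K$ and $N$ columns listed for each class in Table \[tab:1\].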
This optimized uniform dynamical decoupling (OUDD) scheme can be represented using the CPDD indexes as $$\label{eq:OUDD} \mbox{OUDD}_k :\frac{1}{2}\lbrace k-1^{\oplus k},k+1^{\oplus k},k+1^{\oplus k}\rbrace.$$ The suppression order of OUDD$_k$ is $N_k = k$ and the sequence length is given by $K_{\rm{min}}$ in Eq. (\[eq:Kinc\]).\ \ As we can see, some of these pulse sequences are particular levels of CDD$_l$ and GA8$_l$. To compare OUDD with other known DD schemes, we list the corresponding CPDD indexes of known DD schemes in Table \[tab:1\] below.

\[tab:1\]

  $\lbrace n_x,n_y,n_z\rbrace$   Name             Pulse sequence           K       N
  ------------------------------ ---------------- ------------------------ ------- ------
  $\lbrace 0,0,1\rbrace$         Projection       $P_iP_i$                 2       0
  $\lbrace 0,1,1\rbrace$         PDD(CDD$_{1}$)   $P_iP_jP_iP_j$           4       1
  $\lbrace 1,1,1\rbrace$         GA8$_a$          $IP_iP_jP_iIP_iP_jP_i$   8       2
  $\lbrace 0,l,l\rbrace$         CDD$_{l}$        CDD\[CDD$_{l-1}$\]       $4^l$   $l$
  $\lbrace l,l,l\rbrace$         GA8$_{l}$        GA8$_a$\[GA8$_{l-1}$\]   $8^l$   $2l$

Discussion {#sec:DISCUSS} ========== Although CDD also relies on concatenation, the fact that our CPDD uses basic projections as building blocks makes finding more efficient pulse sequences possible. To make this clear, we consider CDD from the viewpoint of our CPDD. CDD$_1$ can be considered as the concatenation of two different projections, CDD$_1 = X(YY)X(YY) = ZYZY$ [@KhodjastehLidar:07]. Therefore the effect of CDD$_1$ is to apply projections along the $y$ and $x$ directions successively, $$\begin{aligned} \pi_{CDD_1}^{(0)}H_{SB} &\equiv& \pi^{(0)}_y\pi_x^{(0)}H_0 \notag\\ &=&\pi_y^{(0)}[\sigma_x\otimes B_x+\sigma_y\otimes B_y^{(1)}+\sigma_z\otimes B_z^{(1)}]\notag\\ &=&\sigma_x\otimes B_x^{(1)}+\sigma_y\otimes B_y^{(1)}+\sigma_z\otimes B_z^{(2)} \notag\\ &\sim& k^{(1)}o(\tau_d^1).\end{aligned}$$ As we can see, CDD$_1$ completely removes the zeroth-order interaction terms, thus achieving suppression order $N_{\rm{CDD}_1} = 1$.
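The concatenation rule can also be carried out symbolically on pulse labels, multiplying Pauli operators up to an (irrelevant) global phase. The sketch below uses our own illustrative helpers, with the convention that a sequence string $P_K...P_1$ is read with the rightmost pulse acting first; it reproduces CDD$_1$ from $p_x[p_y]$ and verifies structural properties 1 and 2.

```python
def pauli_mul(a, b):
    """Product of two single-qubit Pauli labels, discarding the global phase."""
    if a == "I":
        return b
    if b == "I":
        return a
    if a == b:
        return "I"
    return ({"X", "Y", "Z"} - {a, b}).pop()

def concat(i, seq):
    """p_i[seq]: fuse P_i into the leftmost pulse of each of the two copies of seq,
    as in (P_i P_K) P_{K-1} ... P_1 (P_i P_K) P_{K-1} ... P_1."""
    merged = pauli_mul(i, seq[0]) + seq[1:]
    return merged + merged

p = {i: i + i for i in "XYZ"}            # basic projection sequences p_i = P_i P_i

cdd1 = concat("X", p["Y"])               # X(YY)X(YY)
ga8a = concat("X", concat("Y", p["Z"]))  # p_x[p_y[p_z]], class {1,1,1}
print(cdd1, ga8a)                        # -> ZYZY IZXZIZXZ

# Property 1: all odd sites (counted from the right) carry the same pulse.
assert len(set(ga8a[::-1][0::2])) == 1
# Property 2: the first and second halves of the sequence are identical.
assert ga8a[: len(ga8a) // 2] == ga8a[len(ga8a) // 2:]
```

The same `concat` helper generates any member of a CPDD class by applying the projections in an arbitrary order.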
Now consider CDD$_2$, which is the concatenation of two CDD$_1$ sequences; we write explicitly the process of the successive projections, $$\begin{aligned} \pi^{(0)}_{CDD_2} H_{SB} &=& \pi_y^{(0)}\pi_x^{(0)} \pi_y^{(0)}\pi_x^{(0)} H_0 \notag\\ &=&\pi_y^{(0)}\pi_x^{(0)}[\sigma_x\otimes B_x^{(1)}+\sigma_y\otimes B_y^{(1)}+\sigma_z\otimes B_z^{(2)}]\notag\\ &=&\sigma_x\otimes B_x^{(2)}+\sigma_y\otimes B_y^{(2)}+\sigma_z\otimes B_z^{(4)} \notag\\ &\sim& k^{(2)}o(\tau_d^2).\end{aligned}$$ As we can see from above, a total of eight eliminations (each projection pulse sequence eliminates two terms in the orthogonal directions) is used to completely remove the first two orders of the interaction $H_{SB}$. However, the two additional eliminations of $B_z$ do not contribute to further increasing the suppression order. To avoid this, we consider projecting along each direction exactly once, namely $\pi_x\pi_y\pi_z$, which belongs to the CPDD class $\lbrace 1,1,1\rbrace$. Translating the projections back to the corresponding pulse sequence according to the rule of concatenation, we have $$\begin{aligned} \pi_x^{(0)}\pi_y^{(0)}\pi_z^{(0)} &:& p_x[p_y[p_z]] \notag\\ &=& p_x[Y(ZZ)Y(ZZ)] \notag\\ &=& X(XZXZ)X(XZXZ) \notag\\ &=& IZXZIZXZ ,\end{aligned}$$ which uses only eight pulses and six eliminations. This sequence was first found by a genetic algorithm and called GA8$_a$ in Ref. [@Quiroz:13].\ \ To achieve suppression order $N = 2$, CDD$_2$ needs $16$ pulses, while the GA8$_a$ sequence requires only $8$ pulses. This pulse efficiency comes from the fact that GA8$_a$ uses basic projections $\pi_i$ as building blocks, while CDD$_2$ uses composite projections $\pi_i\pi_j$ ($i\neq j$).\ \ Ref. [@Quiroz:13] also reported an 8-pulse sequence GA8$_b$=$Z(XYXY)Z(XYXY)$, which was claimed to achieve suppression order $N = 2$ as well.
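Whether GA8$_b$ really reaches $N=2$ can be probed with a small numerical experiment, independently of any analytic argument: one system qubit coupled to a single bath qubit through random Hermitian bath operators, ideal instantaneous $\pi$ pulses, and the distance $D(U,{1\!\!1}_S)$ evaluated at two pulse spacings to estimate the scaling exponent ${\sim}\,\tau_d^{N+1}$. Everything below is our deliberately minimal toy model (model, seed, and helper names are our choices); the written-out pulse string for GA8$_b$ follows from multiplying the bracketed pulses, as for CDD$_1$ above.

```python
import numpy as np

rng = np.random.default_rng(7)
I2 = np.eye(2)
P = {"I": I2,
     "X": np.array([[0, 1], [1, 0]], complex),
     "Y": np.array([[0, -1j], [1j, 0]], complex),
     "Z": np.array([[1, 0], [0, -1]], complex)}

def rand_herm():
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    return (m + m.conj().T) / 2

# H = 1 (x) B_0 + sum_i sigma_i (x) B_i, with random bath operators B_i
B = {k: rand_herm() for k in "IXYZ"}
H = sum(np.kron(P[k], B[k]) for k in "IXYZ")
w, V = np.linalg.eigh(H)

def distance(seq, tau):
    """D(U, 1_S) after the pulse string seq (written P_K...P_1, rightmost first)."""
    Uf = V @ np.diag(np.exp(-1j * w * tau)) @ V.conj().T   # free evolution over tau_d
    U = np.eye(4, dtype=complex)
    for pulse in reversed(seq):
        U = np.kron(P[pulse], I2) @ Uf @ U
    gamma = U.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)  # Gamma = Tr_S[U]
    trace_norm = np.linalg.svd(gamma, compute_uv=False).sum()
    return np.sqrt(max(1 - trace_norm / 4, 0.0))

def exponent(seq, tau=0.02):
    """log2 of D(tau)/D(tau/2): roughly N + 1 for suppression order N."""
    return np.log2(distance(seq, tau) / distance(seq, tau / 2))

ga8a = "IZXZIZXZ"   # class {1,1,1}
ga8b = "YYXYYYXY"   # Z(XYXY)Z(XYXY) with the bracketed pulses multiplied out
print(round(exponent(ga8a), 2), round(exponent(ga8b), 2))
```

For any generic choice of bath operators, the estimated exponent should come out noticeably larger for GA8$_a$ than for GA8$_b$, i.e., the two sequences do not share the same suppression order.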
From the structure of GA8$_b$, we know that the projections induced by it are $$\pi^{(0)}_{GA8_b}=\pi_z^{(0)}\pi_z^{(0)}\pi_y^{(0)},$$ which belongs to the CPDD class $\lbrace 0,1,2\rbrace$. Using the result of Eq. (\[eq:suppr\]), the suppression order of GA8$_b$ is equal to 1. Therefore the claim of $N = 2$ in Ref. [@Quiroz:13] appears to be a typo.\ \ To double-check our results, we also use the multi-precision package [*mpmath*]{} [@mpmath] to compute the suppression order of both GA8$_a$ and GA8$_b$ for a 5-spin model with random coupling constants. Here the distance $D$ is defined as the distance between the actual evolution operator and the identity operator [@Quiroz:13], $$D(U,{1\!\!1}_S) = \sqrt{1-\frac{1}{d_\mathcal{H}}||\Gamma||_{Tr}},$$ where $d_\mathcal{H}$ is the dimension of the Hilbert space, $\Gamma = {\operatorname{Tr}}_S[U]$ and $||\cdot||_{Tr}$ represents the trace norm. An upper bound on $D$ can be calculated [@Quiroz:13], $$D\lesssim O[\tau_d^{N+1}].$$ Therefore, we can extract the suppression order by plotting $D$ versus $\tau_d$ in a log-log diagram. As we can see in Figure \[fig:GA8\], GA8$_a$ achieves a higher suppression order than GA8$_b$, in agreement with the argument from the viewpoint of our CPDD. Conclusion {#sec:conc} ========== We have developed our CPDD scheme, in which CPDD sequences are concatenated from different projection pulse sequences. We also define CPDD equivalence classes as sets of pulse sequences that have the same leading order along each direction in the Magnus expansion of the average Hamiltonian. Based upon the definition of our CPDD pulse sequences, we prove a series of properties about the structure of the sequences that must hold for CPDD. We also give a formula to calculate the suppression order for a given CPDD class specified by $\lbrace n_x,n_y,n_z\rbrace$. We propose the optimized uniform DD sequences given in Eq.
(\[eq:OUDD\]) for use in experiments, since each of the sequences in the series achieves its suppression order using the minimum number of pulses. Although some of the OUDD sequences are already known DD sequences, our CPDD framework gives a unifying and consistent way to both understand and construct all of them.\ \ The main advantage of using UDD is that the pulse number needed scales linearly with the suppression order, $N_{UDD}\sim \mathcal{O}(K)$, which is much more efficient than the exponential dependence in CPDD. However, UDD is subject to several difficulties. First, it is valid only for environmental spectra with a hard cut-off [@Pasini:2010; @Uhrig:08]; second, it is very sensitive to pulse errors [@Xiao:11; @Lange:2010; @Ryan:2010; @Souza:2012]. In contrast, for our CPDD, and especially the OUDD class, some pulse sequences have rotation symmetry, making them robust against pulse errors. Although we have not given a rigorous bound analysis for CPDD here, the results and the calculation should be similar to those in Ref. [@KhodjastehLidar:07]: the distance between the actual state and the desired state goes to zero as the concatenation level goes to infinity. More importantly, pulse errors are suppressed along the series of concatenations as long as the error is not too large.\ \ Moreover, although the suppression order of CPDD still grows only logarithmically with the pulse number, $N_{\rm{CPDD}}\sim\mathcal{O}(\log_2K)$, the reduction in the number of pulses at the same suppression order is exponentially large compared to the case of CDD, for which $N_{\rm{CDD}}\sim\mathcal{O}(\log_4K)$.\ \ Given the structure of our CPDD, one can see that two directions toward more efficient pulse sequences are worth exploring. The first is to find basic sequences that achieve a higher $N$-to-$K$ ratio than the projection pulse sequences. The concatenated UDD [@Uhrig:09; @Lidar:10] obviously belongs to this class.
However, the applicability of Lemma \[lem:conca\] in the context of non-uniform DD is questionable and requires a new proof. The second direction is to find new ways to combine two pulse sequences or two projections. Since the need for exponentially large pulse numbers to achieve high suppression orders results from the concatenation itself, new ideas in this direction may greatly improve the efficiency of uniform dynamical decoupling. We thank Robert Lanning and Zhihao Xiao for helpful discussions. This work is supported by the Air Force Office of Scientific Research, the Army Research Office, and the National Science Foundation. \[append\] In this appendix we include the proofs of Theorem \[theo:CPMG\] and Lemma \[lem:conca\], which are the foundations of our CPDD scheme. A proof of a property of cyclic pulse sequences is also included.\ \ **Proof of Theorem \[theo:CPMG\]:** Without loss of generality, we consider the pulse sequence $ZZ$, where $Z = -i\sigma_z$. The transformed Hamiltonian $\tilde{H}(t)$ is given by $$\tilde{H}(t)\equiv\begin{cases} H_1, ~0<t<\tau_d\\ H_2, ~\tau_d<t<2\tau_d~, \end{cases} \label{theo1:eq:H}$$ where $$\begin{aligned} H_1& =& {1\!\!1}\otimes B_0+\sigma_x\otimes B_x+\sigma_y\otimes B_y+\sigma_z\otimes B_z,~~~~\\ H_2& =& {1\!\!1}\otimes B_0-\sigma_x\otimes B_x-\sigma_y\otimes B_y+\sigma_z\otimes B_z.~\end{aligned}$$ After applying the pulse sequence $P_jP_j$, the zeroth- and first-order terms of the average Hamiltonian are given by Eq. (\[eq:magnus\]), $$\begin{aligned} \bar{H}^{(0)}&=&\frac{1}{2}\sum_{i=1}^2 H_i,\\ \bar{H}^{(1)}&=&\frac{-i}{2}\tau_c\sum_{i<j} [H_i,H_j].\end{aligned}$$ Using Eq.
\[theo1:eq:H\], we have $$\begin{aligned} \bar{H}^{(0)} &=& {1\!\!1}\otimes B_0 +\sigma_z\otimes B_z,\\ \bar{H}^{(1)} &=& \sigma_x\otimes\tau_d[B_0,B_x]\notag\\ & & + \sigma_y\otimes\tau_d[B_0,B_y] + \sigma_z\otimes 2\tau_d[B_y,B_x].\end{aligned}$$ Since the commutator between bath operators is not zero in general, we have $$\bar{H}_{err} = \sigma_z\otimes [B_z+k^{(1)}_zo(\tau_d)]+\sum_{i\perp z}\sigma_i\otimes [B_i^{(1)}+k^{(2)}_io(\tau_d^2)],$$ where $B_i^{(1)} = \tau_d[B_0,B_i]$. The same calculation gives similar results for $j = x, y$. Q.E.D.\ **Proof of Lemma \[lem:conca\] :** The concatenated sequence $C$ is given by $$(P_iP^B_K)P^B_{K-1}...P^B_1(P_iP^B_K)...P^B_1,$$ where the bracket means that there is no free evolution between the two pulses inside it. Since the projection pulse sequence is cyclic by definition, and sequence $B$ is also cyclic by Theorem \[theo:cyc\], $$\label{eq:prf1:cyc} \prod_{i=1}^{K}P^B_i = e^{i\phi}{1\!\!1}.$$ The $2K_B$ evolution operators of the control field, $U_m^C$, are given by $$U_m^C = \prod_{j\leq m}P^C_j.$$ To construct $\pi_A^{(0)}$ and $\pi_B^{(0)}$, we group $U_m^C$ and $U_{m+K_B}^C$ together ($m\leq K_B$). For $1\leq m < K_B$, $$\begin{aligned} U_m^C &=&\prod_{j\leq m}P^B_j=U_m^B, \end{aligned}$$ and, $$\begin{aligned} U_{m+K_B}^C &=&\Big(\prod_{j\leq m}P^B_j\Big) P_i\Big(\prod_{j>m}^{K_B}P^B_j\Big)\Big(\prod_{j\leq m}P^B_j\Big)\notag\\ &=&e^{i\phi}P_i\prod_{j\leq m}P^B_j\notag\\ &=& e^{i\phi}P_i U^B_m~,\end{aligned}$$ where we have used the commutativity of Pauli matrices up to a phase and the cyclic property Eq. (\[eq:prf1:cyc\]) of pulse sequence $B$.
Now adding the action of $U_m^C$ and $U_{m+K_B}^C$ on $H_{SB}$ together, we have $$\begin{aligned} \label{eq:prf1:1} & & U_m^{C\dagger}H_{SB}U_m^C + U_{m+K_B}^{C\dagger}H_{SB}U_{m+K_B}^C \notag\\ &=& U_m^{B\dagger}H_{SB}U_m^B +U_m^{B\dagger}P_i^\dagger H_{SB}P_iU_m^B \notag\\ &=&U_m^{B\dagger}(2\pi_A^{(0)}H_{SB})U_m^B.\end{aligned}$$ If $m = K_B$, we have $U_{K_B}^C$ and $U_{2K_B}^C$, which are $$\begin{aligned} \label{eq:prf1:2} U_{K_B}^C&=&P_iP_{K_B}^B\prod_{j<K_B}P^B_j\notag\\ &=&P_i\prod_{j\leq K_B}P^B_j\notag\\ &=&e^{i\phi}P_iU_{K_B}^B\end{aligned}$$ $$\begin{aligned} \label{eq:prf1:3} U_{2K_B}^C&=&P_i\Big(\prod_{j\leq K_B}P^B_j\Big)P_i\Big(\prod_{j\leq K_B}P^B_j\Big)\notag\\ &=&e^{i\phi}U_{K_B}^B.\end{aligned}$$ Now using Eqs. (\[eq:prf1:1\], \[eq:prf1:2\], \[eq:prf1:3\]), the leading (zeroth) order of the average Hamiltonian after applying sequence $C$ is given by $$\begin{aligned} \bar{H}^{(0)}_C&=& \frac{1}{2K_B}\sum_{m=1}^{2K_B}U_{m}^{C\dagger} H_{SB}U_{m}^C \notag\\ &=&\frac{1}{2K_B} \sum_{m=1}^{K_B}\Big(U_m^{C\dagger}H_{SB}U_m^C + U_{m+K_B}^{C\dagger}H_{SB}U_{m+K_B}^C\Big)\notag\\ &=&\frac{1}{K_B}\sum_{m=1}^{K_B}U_m^{B\dagger}\Big(\frac{1}{2}\sum_{l=1}^{K_A}U_l^{A\dagger} H_{SB}U_l^A\Big)U_m^B\notag\\ &=&\pi_B^{(0)}\pi_A^{(0)}H_{SB}.\end{aligned}$$ Q.E.D. \[theo:cyc\] The concatenation of two cyclic pulse sequences is still cyclic. **Proof of Theorem \[theo:cyc\] :** Consider two cyclic pulse sequences $A$ and $B$. From Eq.
(\[eq:cyc1\]) we have $$\begin{aligned} \prod_{i=1}^{K_A}P_i^A = e^{i\phi_A}{1\!\!1},\\ \prod_{i=1}^{K_B}P_i^B = e^{i\phi_B}{1\!\!1}.\end{aligned}$$ The pulse sequence $C$ is constructed by concatenating $A$ and $B$, thus $$\begin{aligned} C &=& A[B]\notag\\ &\equiv& P_1^A(P_1^B...P_{K_B}^B)P_2^A(P_1^B...P_{K_B}^B)...P_{K_A}^A(P_1^B...P_{K_B}^B).\notag\\\end{aligned}$$ Therefore the product of all pulses of sequence $C$ is $$\begin{aligned} \prod_{i=1}^{K_C}P_i^C &=& P_1^A (\prod_{i=1}^{K_B}P_i^B)...P_{K_A}^A(\prod_{i=1}^{K_B}P_i^B)\notag\\ &=&P_1^Ae^{i\phi_B}...P_{K_A}^Ae^{i\phi_B}\notag\\ &=&e^{iK_A\phi_B}\prod_{i=1}^{K_A}P_i^A\notag\\ &=&e^{i(K_A\phi_B+\phi_A)}{1\!\!1}\notag\\ &=&e^{i\phi_C}{1\!\!1},\end{aligned}$$ where we define $\phi_C=K_A\phi_B+\phi_A$. Therefore, the pulse sequence $C$=$A$\[$B$\] is also cyclic. Q.E.D. [99]{} , Phys. Rev. A **52**, R2493 (1995). , Phys. Rev. Lett. **77**, 793 (1996). , Phys. Rev. A **55**, 900 (1990). , Eur. Phys. J. D **36**, 203 (2005). , Rep. Prog. Phys. **61**, 117 (1998). , Phys. Rev. **80**, 580 (1950). , Phys. Rev. **94**, 630 (1954). , *Principles of Magnetic Resonance, 3rd. ed* ([Springer-Verlag, New York, 1990]{}). , Phys. Rev. A **58**, 2733 (1998). , Phys. Rev. Lett. **82**, 2417 (1999). , Phys. Lett. A **258**, 77 (1999). , Phys. Rev. Lett. **95**, 180501 (2005). , Phys. Rev. A **75**, 062310 (2007). , Phys. Rev. Lett. **94**, 060502 (2005). , New J. Phys. **10**, 083009 (2008). , Phys. Rev. Lett. **98**, 100504 (2007). , New J. Phys. **10**, 083024 (2008). , Phys. Rev. A **83**, 032322 (2011). , Phys. Rev. A **88**, 052306 (2013). , Phys. Rev. **73**, 679 (1948). , *Quantum Computation and Quantum Information* ([Cambridge University Press]{}, Cambridge, UK, 2000). , Phys. Rev. Lett. **105**, 200402 (2010). , Science, **330**, 6000 (2010). , Phil. Trans. R. Soc. A, **370**, 4748-4769 (2012). , Phys. Rev. A **81**, 012309 (2010). , Phys. Rev. Lett. **102**, 120502 (2009). , Phys. Rev. Lett.
**104**, 130501 (2010). *mpmath: a Python library for arbitrary-precision floating-point arithmetic (version 0.18), December 2013. http://mpmath.org/.*
--- abstract: 'We investigate the excitonic spectrum of MoS$_2$ monolayers and calculate its optical absorption properties over a wide range of energies. Our approach takes into account the anomalous screening in two dimensions and the presence of a substrate, both cast by a suitable effective Keldysh potential. We solve the Bethe-Salpeter equation using as a basis a Slater-Koster tight-binding model parameterized to fit the *ab initio* MoS$_2$ band structure calculations. The resulting optical conductivity is in good quantitative agreement with existing measurements up to ultraviolet energies. We establish that the electronic contributions to the C excitons arise not from states at the $\Gamma$ point, but from a set of ${{\text{$\boldsymbol{k}$}}}$-points over extended portions of the Brillouin zone. Our results reinforce the advantages of approaches based on effective models to expeditiously explore the properties and tunability of excitons in TMD systems.' author: - Emilia Ridolfi - 'Caio H. Lewenkopf' - 'Vitor M. Pereira' bibliography: - 'MoS2\_excitons-v24.bib' title: 'Excitonic structure of the optical conductivity in MoS$_2$ monolayers' --- Introduction ============ The widespread availability of bulk trigonal molybdenum disulfide (MoS$_2$) has made this material one of the most widely studied transition metal dichalcogenides (TMDs) — especially at the strict monolayer thickness —, and has propelled MoS$_2$ to one of the most prominent members of the family of semiconducting two-dimensional materials beyond graphene [@wang2012; @Geim2013; @Butler2013; @Choi2017; @WangReview2017]. In parallel with the interest and continued advances in optimizing sample production and transport characteristics, MoS$_2$ and other closely related TMDs are of great appeal for optoelectronic applications [@wang2012; @Choi2017; @WangReview2017]. This has sustained intensive research to understand the processes that govern the electronic response of these crystals to light. 
Much progress has been made theoretically [@Komsa2012; @Malic2014; @Molina2013; @Berchelbach2013; @Qiu2013; @Pedersen2014; @Zhang2014; @Wu2015; @Jose2016; @Nuno2017; @Maxim2017; @Malte2016; @wang-substrate2017; @Attaccalite2014; @Glazov2017; @Rhim2015; @Wang2015-dft] and experimentally [@Mak2010; @Splendiani2010; @LiRao2013; @Zhang_exciton2014; @Zhang2014; @Li2014; @Kim2014; @Miwa2015; @Hill2015; @Klots2014; @Rigosi2016; @Aleit2016; @Chiu2015; @Malte2016; @Chiu2015; @Kumar2013; @Clark2014; @Mishina2015; @Pedersen2015; @Rostami2016; @Woodward2017; @Xiaobo2014; @Zhang2015-TwoPhotons] in both understanding fundamental properties and exploring the potential practical uses of these materials in devices. As in nearly all strictly two-dimensional materials, MoS$_2$ monolayers have a highly tunable carrier density [@Butler2013; @Liu2017; @Saito2016; @Wang2015] and are amenable to having a number of properties tailored on-demand by different external procedures, including the customization of the optical band-gap [@Raja2017]. It was early recognized that the intrinsic two-dimensionality and semiconducting character of TMDs bring about enhanced Coulomb interactions which, not only renormalize the electronic band structure with quantitative consequences for all derived single-particle processes, but also give rise to the strongest excitonic effects seen to date in the optical response of semiconductors [@Nakajima1980; @Chirolli2017; @Crommie2014]. With binding energies as high as $0.22{\,-\,}1.1$eV [@Komsa2012; @Malic2014; @Berchelbach2013; @Qiu2013; @Pedersen2014; @Wu2015; @Jose2016; @Nuno2017; @Zhang_exciton2014; @Zhang2014; @Hill2015; @Klots2014; @Rigosi2016; @Chiu2015] (depending on the strength of the interaction due to the sensitivity to their environment) and carrying a large spectral weight [@Qiu2013; @Li2014; @Mak2010], these two-particle excitations determine and dominate the optical response of TMD materials. 
As a result, excitons are now understood as a critical ingredient in any reliable theory and model of the optical properties of TMDs, to the extent that any theory that does not account for excitonic effects fails to capture even the most basic qualitative features of the optical gap and/or spectral weight distribution. Monolayers of semiconducting TMDs are also interesting due to a number of other fundamental and unique features of their electronic structure that can broaden their range of applicability in optoelectronics. For example, the strong spin-orbit (SO) coupling splits the valence bands at the $K$ point by a large amount ($\Delta_{SOC}$) which generates two families of excitons [@Miwa2015] \[see A and B excitons in [Fig. \[fig:bands\]]{}(b)\] and allows the selective excitation of electrons with predefined spin polarization [@Xiao2012]. Moreover, the non-zero Berry curvature offers a number of opportunities to explore applications related to the non-trivial topological nature of electronic states near the band edges. These include the facile injection of valley-polarized carriers by optical pumping [@Mak2012; @Zheng2012; @Sallen2012; @Cao2012], the ability to control spin and valley populations simultaneously as a result of the spin-valley locking [@Bawden2016], or the anomalous splitting of bound excitonic levels due to a pseudo spin-orbit coupling of topological origin [@Zhou2015; @Srivastava2015]. In view of this, the development of reliable models with enough flexibility to allow the prediction of the optical response of TMDs in different experimental settings is clearly of high interest. Ideally, one wishes a scheme that augments the reach and expediency of accurate and unbiased first-principles calculations of the full excitonic spectrum. The latter are notoriously demanding from the numerical point of view and, in addition, are particularly onerous for 2D materials when reasonable convergence is required [@Qiu2013; @Louie2000]. 
It then becomes prohibitive to rely only on these approaches to scan a potentially large scope of modifications (structural, chemical, electronic) that can be of interest to tailor the material’s intrinsic response for specific purposes. In this paper our focus is on single-layer systems. Henceforth, except if explicitly emphasized otherwise, we shall refer to the MoS$_2$ monolayer as simply MoS$_2$. The approach that we describe here begins with an accurate Slater-Koster (SK) tight-binding parameterization of the target band structure [@Ridolfi2015]. The model parameters are benchmarked against information from first-principles calculations and experiments to describe the most important spectral features of MoS$_2$, such as the correct energies and orbital content of the low lying conduction and valence bands at the critical points in the Brillouin zone (BZ). We are able to reproduce optical absorption spectra obtained experimentally with quantitative accuracy in both frequency and absolute magnitude. As expected, our calculations agree with previous theoretical works [@Ramasubramanian2012; @Qiu2013; @Berchelbach2013; @Klots2014; @Komsa2012] that provide a good description of the strongly localized $A$ and $B$ excitons. By using a large sampling of ${{\text{$\boldsymbol{k}$}}}$ points in the Brillouin zone we are also able to study the so-called C-exciton [@Qiu2013] and establish its nature, a subject under debate in the literature [@Qiu2013; @Klots2014; @Kim2014; @Aleit2016]. The remainder of this paper is organized as follows: In Sec. \[sec:state-of-the-art\] we discuss the state-of-the-art experimental and theoretical work on the optical response of TMD monolayers. Section \[sec:theory\] presents our solution of the BSE using a SK Hamiltonian optimized for MoS$_2$ monolayers and its use in calculating the optical conductivity in linear response. The results, with focus on the nature of the resonant C excitons, are discussed in Sec. \[sec:results\].
Finally, a summary of our main findings is presented in Sec. \[sec:conclusions\]. The paper also includes one appendix that addresses technical issues, such as the choice of the number of bands taken in the calculation, an analysis of the spin-orbit effects in the optical response, and a comparison between the energy spectrum obtained with and without the Coulomb interaction. ![image](fig1a.png){width="0.75\columnwidth"} ![image](fig1b_levels.png){width="1.05\columnwidth"} Excitons in the optical response of MoS$_2$ {#sec:state-of-the-art} =========================================== The optical conductivity, absorption, and reflectance spectra of few and single-layer MoS$_{2}$ has been extensively studied in recent experiments [@Mak2010; @Splendiani2010; @LiRao2013; @Zhang_exciton2014; @Zhang2014; @Li2014; @Kim2014; @Miwa2015; @Hill2015; @Klots2014; @Rigosi2016; @Aleit2016]. It is well established that the onset of optical absorption in clean, undoped monolayers occurs at $1.8{{\,\pm\,}}0.1$eV, as measured spectroscopically by reflection and photoluminescence [@Mak2010; @Splendiani2010; @Zhang2014; @Li2014; @Klots2014; @Rigosi2016], absorption [@Zhang_exciton2014; @Li2014; @Kim2014; @Aleit2016], photoemission [@Miwa2015], and second-harmonic analysis [@Kumar2013; @Pedersen2015; @Rostami2016]. The absorption threshold is characterized by two peaks separated by $145{{\,\pm\,}}4$meV [@Miwa2015], associated with the two families of excitons (A and B) derived from transitions between the spin-split valence and conduction bands. 
Studies of angle-resolved photoemission spectroscopy [@Zhang2014], as well as X-ray photoemission and scanning tunneling microscopy/spectroscopy [@Chiu2015], show that the single-particle gap lies within $2.15-2.35$eV, thereby placing the binding energies of the lowest A exciton at $0.22-0.42$eV (we note that the non-negligible temperature dependence of the absorption peaks is an important factor when extracting binding energies and identifying experimental variability [@Malte2016; @Molina2016; @Tongay2012; @Kioseoglou2016]). Such large binding energy values imply unusually small exciton radii, typically on the order of ${{\,\sim\,}}5$Å[@Wu2015], but still of the Wannier-Mott type. Theoretically, the solution of the BSE from first principles on the basis of GW-corrected electronic states captures accurately the experimental behavior related to the A and B excitons [@Ramasubramanian2012; @Qiu2013; @Berchelbach2013; @Klots2014; @Komsa2012]. Yet, they also reveal the numerical challenges intrinsic to a full *ab-initio* approach to this problem in 2D systems, which is particularly demanding [^1] in terms of convergence at both the stage of the GW single-particle corrections and the subsequent solution of the BSE [@Ramasubramanian2012; @Qiu2013; @Berchelbach2013; @Klots2014; @Louie2000; @Cudazzo2011]. Effective models, on the other hand, must cope with the non-Coulomb form of the screened potential which is essential to capture the correct bound exciton series [@Chernikov2014; @Rodin2014; @Keldysh1973], but prevents closed-form analytical results for the binding energies or wave functions. 
${{\text{$\boldsymbol{k}$}}}{\cdot}\bm{p}$ models that describe the conduction and valence valleys in terms of a massive Dirac equation adapted to MoS$_2$ [@Berchelbach2013; @Zhang_exciton2014; @Jose2016; @Xiao2012] have been able to capture the bound excitonic series [@Berchelbach2013; @Zhang_exciton2014; @Jose2016], the momentum dispersion of the excitonic spectrum [@Wu2015], and the excitonic contributions to the optical conductivity [@Zhang_exciton2014; @Xiao2012; @Nuno2017]. A prevalent characteristic of studies based on these models is their focus on specific features, most notably the energy spectrum itself which is non-hydrogenic [@Chernikov2014; @Maxim2017; @Srivastava2015] and had not been correctly described until recently. Whereas such a restricted analysis is an implicit requirement of effective mass approaches, it is not a limitation for models based on TB, which can describe the entire BZ and large energy ranges of relevance for experiments and applications, provided the starting Hamiltonian gives a quantitatively accurate and qualitatively faithful description of the single-particle states. Parametrized models, both of the SK type as well as simpler, orbital non-specific TB Hamiltonians, have also been employed with different levels of accuracy and reproducibility of experiments [@Malic2014; @Pedersen2014; @Nuno2017; @Wu2015]: some report results on restricted energy ranges around the A and B peaks [@Malic2014; @Wu2015], others resort to TB models with a large number of fitting parameters (e.g., $>28$) [@Pedersen2014; @Wu2015] or include only a basis of $d$ orbitals [@Nuno2017; @Wu2015] (the orbital character becomes relevant away from the $K$ point; for example, [Ref.]{}  identifies the C excitons with states that have both Mo $d_{z^2}$ and S $p_{x,y}$ character). 
Most importantly, the single particle band structure of some of these calculations does not capture well the GW-corrected band gap [@Wu2015] or the dispersion of the upper conduction bands [@Jose2016], which are especially relevant factors in the excitonic problem. We note, finally, that, due to the zero crystal momentum involved in the underlying electronic excitations, the excitonic fingerprints in the optical properties of bulk MoS$_2$ are qualitatively and quantitatively similar to those of the monolayer. This follows from the layered structure of the former which, combined with a relatively weak inter-layer electronic coupling, makes the electronic properties of the bulk strongly two-dimensional in character. Not surprisingly, and despite the different screening environment, excitons in bulk samples tend to have binding energies and radii similar to those occurring in the monolayers [@Saigal2016], and remain mostly localized within one layer [@Molina2013]. Understanding the excitonic physics in the monolayer is therefore key for the description of the corresponding physics in the bulk as well. Theory and methods {#sec:theory} ================== Exciton states and the BSE -------------------------- Neutral excitations in crystals, both bound and extended, are well described by approximate solutions of the BSE [@BSE; @BSEsite; @DelSole1998; @Louie2000; @Olevano2017]. First principles methods have been widely used to investigate the optical properties of insulators and semiconductors in this framework, and provide the current standard to tackle the excitonic spectrum of solid-state materials [@Louie2000]. Since Coulomb interactions and screening are the essence of the exciton problem, these have to be properly handled in a consistent way to establish even the “single-particle” ground-state of the system (i.e., its one-particle band structure).
This need to self-consistently account for quasiparticle corrections in addition to solving the BSE proper constitutes a notable challenge both in terms of implementation and in computational time. Thorough and converged first-principles calculations of the excitonic spectrum and related observables in MoS$_2$ have, as a result, been typically few and far between [@Ramasubramanian2012; @Qiu2013; @Berchelbach2013; @Klots2014]. Since the excitonic physics is essential to describe the optical response of semiconducting TMDs, and in view of the current need for accurate, yet expedient, methods to tackle these properties, we solve the BSE and calculate the optical conductivity using an orthogonal SK TB Hamiltonian parameterized to describe the MoS$_2$ band structure. The atomic orbital basis comprises the three $p$ valence orbitals in each S plus the five $d$ orbitals in each Mo within the trigonal unit cell, giving a total of 11 atomic orbitals. The construction of the Hamiltonian and optimization of its SK parameters have been presented in detail elsewhere [@Ridolfi2015]. In brief, its main characteristics are: (i) only $14$ fitting parameters, (ii) the correct band gap of $2.115$eV at the $K$ point, (iii) the spin splitting of the VB at the $K$ point by $150$meV, (iv) the effective masses and positions of the conduction and valence bands at $K$, $\Gamma$ and at the so-called $Q$ point. The TB parameters for the SO coupling have been chosen to match $\Delta_{SOC}$ with the experimental energy difference between A and B peaks [@Miwa2015]. The associated band structure is reproduced in [Fig. \[fig:bands\]]{}(a), reflecting the insulating ground state of a pristine monolayer.
The Bloch states, $\psi_{n{{\text{$\boldsymbol{k}$}}}}({{\text{$\boldsymbol{r}$}}})$, derived from this Hamiltonian are taken as a good approximation to the eigenstates of the crystal Hamiltonian, $$\hat{H}\, \psi_{n{{\text{$\boldsymbol{k}$}}}}({\text{$\boldsymbol{r}$}}) = {\varepsilon}_{n{{\text{$\boldsymbol{k}$}}}}\psi_{n{{\text{$\boldsymbol{k}$}}}}({\text{$\boldsymbol{r}$}}),$$ where $$\psi_{n{{\text{$\boldsymbol{k}$}}}}({\text{$\boldsymbol{r}$}}) = \frac{1}{\sqrt{N_c}}\sum_{{\text{$\boldsymbol{R}$}}}e^{i{{\text{$\boldsymbol{k}$}}}\cdot{\text{$\boldsymbol{ R }$}} } \sum_{\alpha}C_{\alpha,{{\text{$\boldsymbol{k}$}}}}^{n}\phi_{\alpha}({\text{$\boldsymbol{r}$}}-{\text{$\boldsymbol{R}$}} -{\text{$\boldsymbol{t}$}}_\alpha) \label{eq:bloch}.$$ The lattice vector ${\text{$\boldsymbol{R}$}}$ runs over all $N_c$ unit cells of the crystal, $n$ is the band index, $\alpha$ denotes the orbitals, and ${\text{$\boldsymbol{t}$}}_\alpha$ corresponds to the position in the unit cell of the atoms at which the orbitals are centered. Both the band $n$ and orbital $\alpha$ indices run over the same interval $[1,N]$, where $N{{\,=\,}}22$ ($11{\times}2$, with spin) is the dimension of the orbital basis considered in the SK Hamiltonian. These Bloch states are used to set up the BSE in the Tamm-Dancoff approximation (TDA) by introducing a basis of two-particle excitations of the Fermi sea, $\ket{\text{FS}} {{\,\equiv\,}}\prod_{v{{\text{$\boldsymbol{k}$}}}} a_{v{\text{$\boldsymbol{k}$}}}^{\dagger}\ket{0}$, $$\ket{v{{\text{$\boldsymbol{k}$}}}\rightarrow c{{\text{$\boldsymbol{k}$}}}} \equiv a^\dagger_{c{{\text{$\boldsymbol{k}$}}}}a^{}_{v{{\text{$\boldsymbol{k}$}}}} \ket{\text{FS}}. \label{eq:basis}$$ The latter are used to express the exciton states $$\ket{M} = \sum_{c,v,{{\text{$\boldsymbol{k}$}}}}A_{cv{{\text{$\boldsymbol{k}$}}}}^M \ket{v{{\text{$\boldsymbol{k}$}}}\rightarrow c{{\text{$\boldsymbol{k}$}}}}, \label{eq:expansion}$$ with energy $E_{M}$, where $M$ labels the excitonic modes. 
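To make this first step concrete, the sketch below diagonalizes a toy two-orbital Bloch Hamiltonian on a list of $k$ points, producing band energies and expansion coefficients that play the roles of ${\varepsilon}_{n{{\text{$\boldsymbol{k}$}}}}$ and $C_{\alpha,{{\text{$\boldsymbol{k}$}}}}^{n}$ in [Eq. (\[eq:bloch\])]{}. The Hamiltonian, its parameters, and the $k$-path are hypothetical stand-ins, not the actual 22-band SK parameterization of [@Ridolfi2015].

```python
import numpy as np

def bloch_hamiltonian(k, t=1.0, onsite=(0.0, 2.0)):
    """Toy 2-orbital Bloch Hamiltonian H(k); `t` and `onsite` are
    illustrative numbers, standing in for the 22-band SK model."""
    off = t * (1.0 + np.exp(1j * k[0]) + np.exp(1j * k[1]))  # hopping phase sum
    return np.array([[onsite[0], off], [np.conj(off), onsite[1]]])

def band_structure(kpts):
    """Return eigenvalues (bands, ascending) and eigenvector matrices
    (columns are the coefficient vectors C^n_{alpha,k})."""
    energies, coeffs = [], []
    for k in kpts:
        e, C = np.linalg.eigh(bloch_hamiltonian(k))  # Hermitian diagonalization
        energies.append(e)
        coeffs.append(C)
    return np.array(energies), np.array(coeffs)

kpts = [np.array([kx, 0.0]) for kx in np.linspace(0.0, np.pi, 5)]
eps_nk, C_nk = band_structure(kpts)
```

All the two-particle quantities entering the BSE (transition energies, overlap integrals) are then built from these two arrays.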
Note that these definitions implicitly restrict the exciton momentum to zero; this is sufficient to capture all first-order optical processes, as the excitons with zero momentum are the optically bright ones. Hence, we restrict our discussion to this subspace only. Furthermore, we employ the traditional notation “$c/v$” to designate conduction and valence bands in order to emphasize that we shall be working at zero temperature. It is also instructive to note at this point that, since $c\in[1,N_{c}]$ and $v\in[1,N_{v}]$, where $N_{c}$/$N_{v}$ is the number of conduction/valence bands, the dimension of the vector space spanned by the basis states in [Eq. (\[eq:basis\])]{} is $N_\text{tot} {{\,\equiv\,}}N_k^2 \times N_{c} \times N_{v}$, where $N_k^2$ represents the total number of points sampled in the BZ. The number of bands and the size of the sampling in ${{\text{$\boldsymbol{k}$}}}$ points are among the critical limiting factors in calculations of the two-particle spectrum, even within a parameterized TB framework. Substituting [(\[eq:expansion\])]{} in the Schrödinger equation that includes the many-body Coulomb interaction yields the reduced eigenproblem [@Grosso; @Pedersen2014; @Wu2015] $$E_{cv{{\text{$\boldsymbol{k}$}}}}A_{cv{{\text{$\boldsymbol{k}$}}}}^{M} + \frac{1}{V} \sum_{c'v'{{\text{$\boldsymbol{k}$}}}'} W_{cv{{\text{$\boldsymbol{k}$}}},c'v'{{\text{$\boldsymbol{k}$}}}'}A_{c'v'{{\text{$\boldsymbol{k}$}}}'}^M = E_M A_{cv{{\text{$\boldsymbol{k}$}}}}^M .
\label{eq:BSE}$$ Here, $E_{cv{{\text{$\boldsymbol{k}$}}}} {{\,\equiv\,}}{\varepsilon}_{c{{\text{$\boldsymbol{k}$}}}}-{\varepsilon}_{v{{\text{$\boldsymbol{k}$}}}}$ is the energy difference between the $c$ and $v$ bands at ${{\text{$\boldsymbol{k}$}}}$, $V {{\,\equiv\,}}A_{c} N_k^2$ is the total area of the crystal ($A_c{{\,=\,}}\sqrt{3}a^{2}/2$ with $a {{\,\simeq\,}}3.16$Å the lattice constant) and $W_{cv{{\text{$\boldsymbol{k}$}}},c'v' {\text{$\boldsymbol{k'}$}} } {{\,\equiv\,}}\bra{v {{\text{$\boldsymbol{k}$}}}\rightarrow c{{\text{$\boldsymbol{k}$}}}} \hat{U} \ket{v' {\text{$\boldsymbol{k'}$}} \rightarrow c' {\text{$\boldsymbol{k'}$}}}$ represents the matrix element of the many-body Coulomb potential, $\hat{U}$, between two particle-hole excitations. In the TDA, there are two contributions to this matrix element: a direct and an exchange term [^2]. As pointed out earlier [@Wu2015], in an orthogonal basis and in our approximation where the Coulomb interaction is independent of the orbital character of the states involved, the exchange term does not contribute for zero-momentum excitons. As a result, only the direct Coulomb matrix element remains, which can be expressed simply as [@Pedersen2014; @Wu2015] $$\label{eq:direct-W} W_{cv{{\text{$\boldsymbol{k}$}}},c'v'{{\text{$\boldsymbol{k}$}}}'}^{(d)} = u({{\text{$\boldsymbol{k}$}}}-{{\text{$\boldsymbol{k}$}}}') \, I_{c'{{\text{$\boldsymbol{k}$}}}',c{{\text{$\boldsymbol{k}$}}}}^{*}I_{v'{{\text{$\boldsymbol{k}$}}}',v{{\text{$\boldsymbol{k}$}}}},$$ where $u(\bm{q})$ is the Fourier transform of the *screened* Coulomb potential [@Louie2000]. 
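A minimal sketch of assembling and diagonalizing the eigenproblem of [Eq. (\[eq:BSE\])]{} follows. The transition energies, the attractive potential, and the overlap matrices are toy placeholders (not the MoS$_2$ inputs), chosen only to show the structure of the matrix and that the interaction pulls the lowest eigenvalue below the free particle-hole continuum.

```python
import numpy as np

def build_bse(E_cvk, u_kk, I_c, I_v, V):
    """H[i, j] = E_i delta_ij + u(k_i - k_j) I_c*[j, i] I_v[j, i] / V,
    the direct-term BSE matrix; each index i labels one |vk -> ck> state."""
    n = len(E_cvk)
    H = np.diag(E_cvk).astype(complex)
    for i in range(n):
        for j in range(n):
            # i == j involves u(q = 0), which must be regularized
            # (see the regularization discussion in the text)
            H[i, j] += u_kk[i, j] * np.conj(I_c[j, i]) * I_v[j, i] / V
    return H

n = 6
E = np.linspace(2.0, 3.0, n)          # toy transition energies (eV)
# toy attractive, symmetric screened potential u(k_i - k_j):
u = -1.0 / (1.0 + np.abs(np.subtract.outer(np.arange(n), np.arange(n))))
I = np.ones((n, n))                   # trivial (unphysical) overlaps, for illustration
H_bse = build_bse(E, u, I, I, V=10.0)
E_M, A_M = np.linalg.eigh(H_bse)      # exciton energies and amplitudes A^M_{cvk}
```

Because the potential is attractive, the lowest eigenvalue $E_M$ lies below the smallest free transition energy: a bound exciton.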
The orthogonality assumed in defining our SK basis [@Ridolfi2015] allows one to express the overlap integrals $I_{a{{\text{$\boldsymbol{k}$}}}',b{{\text{$\boldsymbol{k}$}}}}$ in terms of the expansion coefficients of the Bloch states [(\[eq:bloch\])]{} as $$I_{a{{\text{$\boldsymbol{k}$}}}',b{\text{$\boldsymbol{ k}$}}} = \sum_{\alpha}C_{\alpha{{\text{$\boldsymbol{k}$}}}'}^{a\,*} \, C_{\alpha{{\text{$\boldsymbol{k}$}}}}^{b}.$$ Since the $C_{\alpha{{\text{$\boldsymbol{k}$}}}}^{b}$ are obtained from the numerical eigenvectors of the Bloch Hamiltonian [(\[eq:bloch\])]{}, it is important to ensure a consistent choice of phase because the $I_{a{{\text{$\boldsymbol{k}$}}}',b{\text{$\boldsymbol{ k}$}}}$ are not gauge-invariant quantities. We chose to require the sum of the basis-set coefficients of the wave function $\rho_{{{\text{$\boldsymbol{k}$}}}}^{n}{{\,=\,}}\sum_{\alpha}C_{\alpha {{\text{$\boldsymbol{k}$}}}}^{n}$ to be real, as suggested in [Ref.]{} .

Screened Coulomb interaction
----------------------------

An accurate approximation to describe the screened Coulomb interaction in [Eq. (\[eq:direct-W\])]{} is essential for a realistic description of the excitonic spectrum [@Louie2000]. Early attempts to theoretically describe the exciton series in MoS$_2$ and related 2D materials provide a good example of this stringent requirement, since, by simplistically using the bare Coulomb form of the potential, one fails to capture the non-Rydberg level structure observed experimentally [@Chernikov2014; @Maxim2017; @Srivastava2015].
The distinct series of bound exciton levels in MoS$_2$ is due to both its pseudospin degree of freedom [@Maxim2017] and the modified electrostatic interaction in strictly 2D electronic systems which, in Fourier space, acquires the form [@Chernikov2014; @Rodin2014] $$\label{eq:potential-transform} u({\text{$\boldsymbol{q}$}})=-\frac{e^{2}}{2\epsilon_{0}\epsilon_d\,q\,\kappa(q)}, \quad \kappa(q) \equiv 1 + r_0q,$$ where $r_0$ defines the 2D polarizability of the electronic system [@Cudazzo2011; @Chernikov2014; @Rodin2014] and $\epsilon_d$ captures the static, uniform screening due to the top and bottom media surrounding the MoS$_2$ monolayer [@Zhang2014]. We assume a MoS$_{2}$ monolayer of effective thickness $d$ and effective dielectric constant $\epsilon_{2}$ sandwiched between materials with dielectric constants $\epsilon_{1}$ and $\epsilon_{3}$. The environment dielectric constant is thus $\epsilon_{d}{{\,=\,}}(\epsilon_{1}+\epsilon_{3})/2$. The potential [(\[eq:potential-transform\])]{} has precisely the form derived by Keldysh for a thin metallic film, in which case the parameter $r_0$ enters as the film thickness [@Keldysh1973]. The explicit $q$-dependence in the dielectric function $\kappa(q)$ due to many-body interactions qualitatively modifies Coulomb’s law in real space which becomes $$u({{\text{$\boldsymbol{r}$}}}) = -\frac{e^2}{8\epsilon_{0}\epsilon_{d}r_{0}} \Bigl[H_{0}\Bigl(\frac{r}{r_{0}}\Bigr) -Y_{0}\Bigl(\frac{r}{r_{0}}\Bigr)\Bigr] ,\label{eq:potential}$$ where $H_{0}$ and $Y_{0}$ are Struve and Bessel functions, respectively. The parameter $r_0$ defines a crossover length scale separating the long-range decay $\propto 1/r$ from the short-range domain characterized by a singularity $\propto \log r$ as $r\to 0$ [@Cudazzo2011]. *Ab-initio* studies have confirmed that the Keldysh interaction accurately describes the screened potential in MoS$_2$ [@Berchelbach2013; @Latini2015]. Thus, we employ $u({{\text{$\boldsymbol{q}$}}})$ in [Eq. 
(\[eq:direct-W\])]{} to solve the BSE. Only the parameters $r_0$ and $\epsilon_{d}$ remain now to fully specify the content of [Eq. (\[eq:BSE\])]{}. They have been reported with a large variation among different authors in the recent literature [@Zhang2014; @Wu2015; @Jose2016; @Berchelbach2013]. *Ab-initio* calculations of the 2D polarizability find $r_0{{\,\sim\,}}31.2-41.5$Å  in vacuum [@Berchelbach2013]. However, it is known that the precise energy placement of the exciton level series is sensitive to the details of the dielectric environment surrounding the monolayer sample [@Wu2015; @Raja2017; @Nuno2017; @wang-substrate2017]. Prior estimates for the monolayer in the dielectric environment of an air/substrate interface report values spanning a relatively wide interval, namely, $r_0{{\,\sim\,}}13.55-57.6$ Å [@Zhang2014; @Wu2015; @Jose2016; @Berchelbach2013]. Since we will be referring to the measurements by Li and collaborators [@Li2014] as our reference for the experimental optical conductivity, the environment’s dielectric constant is $\epsilon_d{{\,=\,}}2.5$, as appropriate for the air/silica interface ($\epsilon_1{{\,=\,}}1$, $\epsilon_3{{\,=\,}}4$). In the absence of *ab-initio* calculations of the corrections to the polarizability due to the effect of a silica substrate, we follow the estimates put forward in Refs. , which are based on the Keldysh-type finite thickness ($d$) model: $$\label{eq:r0} r_0 = \frac{2\epsilon_{2}^2-\epsilon_{1}^2-\epsilon_{3}^2}{2\epsilon_{2} (\epsilon_{1}+\epsilon_{3})}\,d .$$ In this expression, $\epsilon_2$ stands for an effective dielectric constant of MoS$_2$. The best agreement with the measured exciton binding energies is obtained with [@Zhang2014] $d{{\,\simeq\,}}6$Å and $\epsilon_{2}{{\,\simeq\,}}12$ (the latter matches well the results from first-principles calculations of the dielectric constant of bulk MoS$_{2}$ [@Berchelbach2013]), resulting in $r_0{{\,=\,}}13.55$Å[^3].
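The Keldysh interaction and the thickness-based estimate of $r_0$ can be reproduced directly from the numbers quoted above. The sketch below works in reduced units (prefactors involving $e^2/\epsilon_0$ are absorbed into the units); `scipy.special` supplies the Struve and Bessel functions of [Eq. (\[eq:potential\])]{}.

```python
import numpy as np
from scipy.special import struve, y0

eps1, eps2, eps3 = 1.0, 12.0, 4.0   # air / effective MoS2 / silica (values from text)
d = 6.0                             # effective thickness (angstrom)
eps_d = (eps1 + eps3) / 2           # environment dielectric constant, = 2.5
# Thickness-based estimate, Eq. (r0):
r0 = (2 * eps2**2 - eps1**2 - eps3**2) / (2 * eps2 * (eps1 + eps3)) * d

def u_q(q):
    """Fourier-space Keldysh potential, Eq. (potential-transform),
    in units of e^2/(2 eps_0); q in 1/angstrom."""
    return -1.0 / (eps_d * q * (1.0 + r0 * q))

def u_r(r):
    """Real-space form, Eq. (potential), in units of e^2/(8 eps_0):
    Struve H0 minus Bessel Y0."""
    x = r / r0
    return -(struve(0, x) - y0(x)) / (eps_d * r0)
```

Evaluating `r0` with these inputs reproduces the quoted $13.55$Å, and the potential is attractive with magnitude growing as $q\to 0$, as expected from the $1/q$ singularity.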
Having thus specified all its contributions, and even though numerically more efficient methods have been proposed recently [@Pedersen2014], we solved the BSE by full diagonalization of the eigenvalue problem in [Eq. (\[eq:BSE\])]{}. In view of the ranges of energy covered in current experiments, we restricted our basis states in [Eq. (\[eq:basis\])]{} to include excitations between $N_v {{\,=\,}}2$ valence bands ($1\times 2$ since spin is explicitly included) and $N_c{{\,=\,}}8$ conduction bands. Note that, when considering effective models in the ${{\text{$\boldsymbol{k}$}}}\cdot{\text{$\boldsymbol{p}$}}$ (Dirac) approximation, it is important to additionally include the pseudospin degree of freedom in the treatment of the effective Schrödinger equation to properly describe the spectrum [@Maxim2017].

Optical response {#sec:optical_response}
----------------

Since the $D_{3h}$ point group dictates that all rank-2 tensors are in-plane isotropic, for a MoS$_2$ monolayer it is sufficient to consider the diagonal component of the optical conductivity $\sigma(\omega){{\,\equiv\,}}\sigma_{xx}(\omega)$, which is given in linear response by (dipole approximation, $T{{\,=\,}}0$) [@Pedersen2014; @Fabio2016] $${\operatorname{Re}}\sigma(\omega) = \frac{e^{2}\pi}{m^2\hbar\omega V} \sum_{M}|\bra{\text{FS}}\hat{P}_x\ket{M}|^{2}\delta(\omega-\omega_{M}) . \label{eq:sigma-def}$$ Using [Eq. (\[eq:expansion\])]{} to express $\ket{M}$ we write the total momentum operator matrix element as $$\bra{\text{FS}}\hat{P}_x\ket{M} = \sum_{cv{{\text{$\boldsymbol{k}$}}}}A_{cv{{\text{$\boldsymbol{k}$}}}}^{M} \bra{\text{FS}}\hat{P}_{x} a_{c{{\text{$\boldsymbol{k}$}}}}^{\dagger}a_{v{{\text{$\boldsymbol{k}$}}}}^{} \ket{\text{FS}} .
\label{eq:FS-P-M}$$ Expanding the many-body momentum operator in the usual way, $\hat{P}_x {{\,=\,}}\sum_{pq} \bra{p}\hat{p}_x\ket{q} a_{p}^{\dagger}a_{q}$, one has $$\begin{gathered} \bra{\text{FS}}\hat{P}_{x} a_{s}^{\dagger}a_{r} \ket{\text{FS}} = \\ \sum_{pq}\bra{p}\hat{p}_{x}\ket{q} \bigl[ \delta_{pr}\delta_{qs}f_r(1-f_s) + \delta_{pq}\delta_{sr} f_p f_s \bigr],\end{gathered}$$ where $f_j$ corresponds to the Fermi-Dirac occupation number of an electron at the state $j$. At zero temperature and noting that we are interested in the case where $c\ne v$, we write $$\begin{aligned} \bra{\text{FS}}\hat{P}_{x} a_{c{{\text{$\boldsymbol{k}$}}}}^{\dagger}a_{v{{\text{$\boldsymbol{k}$}}}}^{} \ket{\text{FS}} & = \bra{\psi_{v{{\text{$\boldsymbol{k}$}}}}}\hat{p}_x\ket{\psi_{c{{\text{$\boldsymbol{k}$}}}}} \nonumber \\ & = \frac{m}{\hbar}\bra{\psi_{v{{\text{$\boldsymbol{k}$}}}}}\nabla_{k_x} \hat{H}({{\text{$\boldsymbol{k}$}}})\ket{\psi_{c{{\text{$\boldsymbol{k}$}}}}} . \label{eq:FS-P-aa-FS}\end{aligned}$$ Hence, inserting here the expression for Bloch states given in [Eq. (\[eq:bloch\])]{}, the optical conductivity [(\[eq:sigma-def\])]{} becomes $${\operatorname{Re}}\sigma (\omega) = \frac{e^{2}}{4\hbar}\,\frac{4\pi}{\hbar\omega V} \sum_{M} \biggl| \sum_{{{\text{$\boldsymbol{k}$}}}cv}A_{cv{{\text{$\boldsymbol{k}$}}}}^M \sum_{\alpha \beta} (C_{\alpha {{\text{$\boldsymbol{k}$}}}}^v)^* C_{\beta {{\text{$\boldsymbol{k}$}}}}^c \nabla_{k_x} \! \bra{\phi_\alpha} \hat{H}({{\text{$\boldsymbol{k}$}}}) \ket{\phi_\beta} \biggr|^2 \delta(\hbar\omega-\hbar\omega_M). \label{eq:sigma-final}$$ This form makes explicit that the oscillator strength associated with each particle-hole excitation involves contributions that depend both on the solution of the BSE (through the eigenvector components $A^M_{cv{{\text{$\boldsymbol{k}$}}}}$) and on the effective Hamiltonian in the crystal momentum representation (through the components $C_{m{{\text{$\boldsymbol{k}$}}}}^c$). 
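Since the SK Hamiltonian is known numerically on the $k$ mesh, the matrix element $\bra{\psi_{v{{\text{$\boldsymbol{k}$}}}}}\nabla_{k_x}\hat{H}({{\text{$\boldsymbol{k}$}}})\ket{\psi_{c{{\text{$\boldsymbol{k}$}}}}}$ of [Eq. (\[eq:FS-P-aa-FS\])]{} can conveniently be evaluated by a central finite difference (an analytic derivative of the hopping phases is an equally valid option). The sketch below does this for a hypothetical $2\times2$ Bloch Hamiltonian; in practice the actual SK model would enter through `H_of_k`.

```python
import numpy as np

def H_of_k(k):
    """Toy 2x2 Bloch Hamiltonian (hypothetical stand-in for the SK model)."""
    off = 1.0 + np.exp(1j * k[0]) + np.exp(1j * k[1])
    return np.array([[0.0, off], [np.conj(off), 2.0]])

def grad_kx(k, dk=1.0e-6):
    """Central finite difference approximating dH/dk_x."""
    dkx = np.array([dk, 0.0])
    return (H_of_k(k + dkx) - H_of_k(k - dkx)) / (2.0 * dk)

def p_vc(k):
    """<psi_vk| dH/dk_x |psi_ck> between the lower (v) and upper (c) band,
    i.e. the momentum matrix element up to a factor m/hbar."""
    _, C = np.linalg.eigh(H_of_k(k))
    v, c = C[:, 0], C[:, 1]
    return np.vdot(v, grad_kx(k) @ c)
```

For this toy model the derivative is known in closed form ($\partial_{k_x}$ of the off-diagonal element is $i e^{i k_x}$), which provides a direct accuracy check on the finite-difference step.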
The response in the non-interacting approximation (single-particle) is readily recovered by noting that, in the absence of particle-hole interaction, the BSE, [Eq. (\[eq:BSE\])]{}, is diagonal. In this limit, $A_{cv{{\text{$\boldsymbol{k}$}}}}^M \to \delta_{cv{{\text{$\boldsymbol{k}$}}},M}$, $E_M \to E_{cv{{\text{$\boldsymbol{k}$}}}}$, and [Eq. (\[eq:sigma-final\])]{} simplifies to $${\operatorname{Re}}\sigma_{\rm sp} (\omega) = \frac{e^{2}}{4\hbar}\,\frac{4\pi}{\hbar \omega V} \sum_{cv{{\text{$\boldsymbol{k}$}}}} \biggl| \sum_{\alpha \beta} (C_{\alpha{{\text{$\boldsymbol{k}$}}}}^v)^* C_{\beta{{\text{$\boldsymbol{k}$}}}}^c \nabla_{k_x} \! \bra{\phi_\alpha} \hat{H}({{\text{$\boldsymbol{k}$}}}) \ket{\phi_\beta} \biggr|^2 \delta(\hbar\omega - E_{cv{{\text{$\boldsymbol{k}$}}}}). \label{eq:sigma-single}$$

![ The energy of the lowest eigenvalue of the BSE \[$E^A{{\,=\,}}1.775$ eV, dark, cf. [Fig. \[fig:spectrum\]]{}(a)\] as a function of the total number of ${{\text{$\boldsymbol{k}$}}}$ points used in the uniform sampling of the Brillouin zone. The horizontal dashed line indicates the value obtained from the combined extrapolation of all the curves corresponding to the different regularizations. The shaded strip identifies the energy interval within ${{\,\pm\,}}20$meV of the extrapolated result. []{data-label="fig:convergence"}](fig2_convergence){width="\columnwidth"}

Results {#sec:results}
=======

Convergence of the exciton spectrum
-----------------------------------

We solve the eigenproblem in [Eq. (\[eq:BSE\])]{} using a uniform sampling of $N_k$ points along the directions defined by the reciprocal lattice vectors ${\bf b}_{1} {{\,=\,}}(2\pi/a,\,2\pi/a\sqrt{3})$, ${\bf b}_{2} {{\,=\,}}(2\pi/a,-2\pi/a\sqrt{3})$. We place the $\Gamma$ point at the origin, leaving $K$ and $M$ at the center of our BZ sampling domain \[see [Fig. \[fig:wavefunctions\]]{}(a) below\].
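The uniform sampling just described can be sketched as follows, with $a = 3.16$Å as above; a useful consistency check is that the parallelogram spanned by ${\bf b}_1$ and ${\bf b}_2$ has the area $(2\pi)^2/A_c$ of a reciprocal unit cell.

```python
import numpy as np

a = 3.16                                      # lattice constant (angstrom)
b1 = np.array([2 * np.pi / a,  2 * np.pi / (a * np.sqrt(3.0))])
b2 = np.array([2 * np.pi / a, -2 * np.pi / (a * np.sqrt(3.0))])

def bz_grid(Nk):
    """Uniform Nk x Nk mesh k = (i/Nk) b1 + (j/Nk) b2, Gamma at the origin."""
    frac = np.arange(Nk) / Nk
    return np.array([i * b1 + j * b2 for i in frac for j in frac])

kpts = bz_grid(60)                            # N_k = 60, as used for the final results
```

The resulting rhombus covers one full reciprocal unit cell, equivalent to the hexagonal first BZ up to reciprocal-lattice translations.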
Since the Coulomb interaction is not periodic in the reciprocal space and our approach does not include local field terms ([*i.e.*]{}, Fourier components beyond the first BZ), the selection of the sampling domain can have an impact on the energies of the bound excitonic levels, especially when these arise from transitions at ${{\text{$\boldsymbol{k}$}}}$ points near BZ boundaries. More specifically, in the summation performed in [Eq. (\[eq:BSE\])]{}, ${{\text{$\boldsymbol{k}$}}}'$ formally runs over the whole reciprocal space, which can be expressed by defining ${{\text{$\boldsymbol{k}$}}}' {{\,\equiv\,}}{{{\text{$\boldsymbol{k}$}}}}'' + \bm{Q}$, with ${{{\text{$\boldsymbol{k}$}}}}''$ spanning the first BZ and $\bm{Q}$ the reciprocal lattice. When the excitonic states have wave functions strongly localized near the edges of the BZ, one must include $\bm{Q}{\,\neq\,}0$ terms, otherwise one misses important contributions from Coulomb matrix elements between closely spaced ${{\text{$\boldsymbol{k}$}}}$ states in adjacent Brillouin zones. Since the wave functions relevant to our problem are localized in regions around the $\bm{K}$ points in the BZ (to be discussed below, see [Fig. \[fig:wavefunctions\]]{}), our choice of the ${{\text{$\boldsymbol{k}$}}}$-domain gives converged results for the excitonic spectrum in agreement with the experiment reported in [Ref.]{}  by keeping only $\bm{Q} {{\,=\,}}0$ (i.e., by considering all wave vectors and matrix elements within the first BZ). Being able to work with this truncation of the Coulomb matrix elements without affecting the convergence of the spectrum provides an additional improvement in the numerical efficiency of the calculation.
Note, however, that any discrete approach requires the regularization of the ${{\text{$\boldsymbol{k}$}}}$-diagonal matrix elements $W_{cv{{\text{$\boldsymbol{k}$}}},c'v'{{\text{$\boldsymbol{k}$}}}}$ due to the $\propto 1/|{{\text{$\boldsymbol{k}$}}}-{{\text{$\boldsymbol{k}$}}}'|$ integrable singularity that arises from the long range tail of the screened potential \[cf. [Eq. (\[eq:potential-transform\])]{}\]. As their contributions to [Eq. (\[eq:BSE\])]{} become regular in the thermodynamic limit $N_k\gg1$, this regularization is well posed in the sense that the specific way it is performed has no influence on the calculated observables in that limit. However, we find that the regularization strategy *significantly* impacts the rate of convergence of the spectrum with $N_k$, to the extent that one can gain an order of magnitude reduction in the dimension of the Bethe-Salpeter matrix for a given convergence target. This is a point of high practical significance because, in an effective SK description such as ours, the number of bands is, by construction, the minimal *a priori* required set. The BZ sampling thus becomes the limiting factor determining the size and tractability of the numerical problem for a given target precision of the calculated spectrum. 
Since the singular matrix elements are integrated over the ${{\text{$\boldsymbol{k}$}}}$-space, the most straightforward regularization consists in replacing $u({{\text{$\boldsymbol{q}$}}}{{\,=\,}}{{\text{$\boldsymbol{k}$}}}-{{\text{$\boldsymbol{k}$}}}'{{\,=\,}}0)$ by its average value over a small enclosing domain $\Delta_{{\text{$\boldsymbol{q}$}}}$, namely, $$\begin{aligned} u({{\text{$\boldsymbol{q}$}}}\approx0) & \to \frac{1}{\Delta_{{\text{$\boldsymbol{q}$}}}} \int_{\Delta_{{\text{$\boldsymbol{q}$}}}} u({{\text{$\boldsymbol{q}$}}}) d{{\text{$\boldsymbol{q}$}}}\nonumber \\ & \approx \frac{ca{N_k}}{2\pi} \left[ \alpha_1 + \alpha_2\frac{2\pi r_0 }{a{N_k}} + \alpha_3 \Bigl(\frac{2\pi r_0 }{a{N_k}}\Bigr)^2 \right] . \label{eq:regularization}\end{aligned}$$ Here, $c{{\,\equiv\,}}-e^2 / 2 \epsilon_0 \epsilon_d$ and the constants $\alpha_j$ depend on the geometry of the averaging domain $\Delta_{{\text{$\boldsymbol{q}$}}}$ and the truncation level of the expansion [(\[eq:regularization\])]{}. [Fig. \[fig:convergence\]]{} illustrates the different convergence rate of the lowest exciton level ($E^A$) as a function of the sampling dimension. For demonstration purposes, these data were obtained by solving [Eq. (\[eq:BSE\])]{} with only the two bands of each spin nearest to the bandgap (i.e., $N_c{{\,=\,}}N_v{{\,=\,}}2$). The asymptotic value is clearly approached much faster for certain choices of the regularization scheme. In particular, one sees that neglecting the leading higher order terms in the expansion [(\[eq:regularization\])]{} by having $\alpha_1{{\,=\,}}1$, $\alpha_{2,3}{{\,=\,}}0$ (parameter set \#$6$, see label in [Fig. \[fig:convergence\]]{}) provides a particularly slow convergence[^4]. This is not unexpected because $2\pi r_0 / a {{\,\simeq\,}}26.9$, which is precisely of the same magnitude as those values of $N_k$ that are within practical numerical reach[^5]. On the other hand, the particular $\alpha$ parameter sets \#$2$ and \#$11$ (see labels in Fig. 
\[fig:convergence\]) provide much faster convergence to the asymptotic value $E^A {{\,=\,}}1.775$eV within a $0.020$eV precision[^6]. This translates into a binding energy $E_b^A {{\,=\,}}0.34$eV since the single-particle gap in our SK band structure parametrization is $E_g {{\,=\,}}2.12$eV [@Ridolfi2015] (this value corresponds to the first dark $A$ exciton, while the first bright one has $E^A {{\,=\,}}1.80$eV and binding energy $E_b^A {{\,=\,}}0.32$eV). Henceforth, all our calculations will be presented according to the regularization scheme \#$2$. We have verified that it works efficiently for the whole spectrum. [Fig. \[fig:spectrum\]]{}(a) shows a direct comparison between our BSE-derived eigenvalues and experimental spectra measured for MoS$_2$ on silica in the range of the single-particle gap [@Li2014; @Hill2015]. We find that, except for a global rigid offset of $0.07$eV, the spectrum obtained with our model parameters reproduces extremely well the bound exciton series. In particular, it captures with accuracy the level spacing of the lowest-lying states which are those most sensitive to the modified screened potential [(\[eq:potential\])]{} at short distances and, hence, those that most clearly deviate from a hydrogen-like spectrum [@Srivastava2015; @Chernikov2014; @Maxim2017]. This spectrum also captures the fact that the ground excitonic state is dark, as recently established experimentally [@Molas2017]. We obtain a bright-dark splitting of $\Delta E_{\rm bd} \approx 12$meV for the lowest excitonic states. This value stems from the 6meV spin-orbit separation of the conduction bands, plus differences in effective masses. Our results agree with other theoretical calculations that find $\Delta E_{\rm bd} \alt 20$meV [@Echeverry2016; @Baranowski2017; @Malic2018] depending on the kind of *ab-initio* approach.
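Returning briefly to the regularization of the ${{\text{$\boldsymbol{k}$}}}$-diagonal Coulomb element, [Eq. (\[eq:regularization\])]{} can be sketched compactly as below. The $\alpha_j$ are left as free inputs because the specific sets (\#$2$, \#$6$, \#$11$, …) depend on the geometry of the averaging domain and the truncation level, which are not spelled out here; the numerical values of $a$ and $r_0$ are those quoted in the text.

```python
import numpy as np

a, r0 = 3.16, 13.55                # lattice constant and 2D screening length (angstrom)
x = 2 * np.pi * r0 / a             # the expansion parameter 2*pi*r0/a ~ 26.9

def u_reg_diag(Nk, alphas, c=-1.0):
    """Regularized u(q ~ 0), Eq. (regularization); `c` stands for
    -e^2/(2 eps_0 eps_d) and the alphas = (alpha_1, alpha_2, alpha_3)
    encode the averaging-domain geometry and truncation (scheme-dependent)."""
    t = x / Nk                     # 2*pi*r0 / (a*Nk)
    a1, a2, a3 = alphas
    return c * a * Nk / (2.0 * np.pi) * (a1 + a2 * t + a3 * t * t)
```

Because $2\pi r_0/a$ is comparable to practically reachable values of $N_k$, the higher-order terms in $t$ matter, which is precisely why the leading-order-only choice ($\alpha_1{{\,=\,}}1$, $\alpha_{2,3}{{\,=\,}}0$) converges slowly.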
So far, the only direct experimental investigation of the bright-dark splitting in MoS$_2$ [@Molas2017] finds $\Delta E_{\rm bd} \approx 100$meV. This value is unexpectedly large in view of the theoretical literature and considering that for TMDs with a larger spin-orbit coupling, like MoSe$_2$ and WSe$_2$, one finds $ \Delta E_{\rm bd} \approx 47 - 57$meV [@Zhang2017; @Molas2017]. These elements indicate that quantitative aspects of $\Delta E_{\rm bd}$ in MoS$_2$ are still under experimental scrutiny. To allow a direct comparison of the absorption spectrum with the experiments, a rigid blue-shift in the energies by $+0.07$eV has been incorporated in the results shown in [Fig. \[fig:spectrum\]]{}. Such a “calibration” is somewhat expected because of the effective parameterizations of both the band structure and the screened Coulomb potential[^7] (yet, a $0.07$eV offset is rather small in comparison with similar approaches, where corrections of up to $0.57$eV were necessary [@Wu2015]). This calibration of the energy axis has also been applied in the presentation of all our subsequent results. The bound series is directly associated with transitions between the topmost valence and bottom conduction bands near the $K$ point of the BZ. Each level is 2-fold degenerate on account of the $K{-}K'$ valley degeneracy. When the small energy difference between the two lowest spin-polarized conduction bands at $K$/$K'$ is ignored, there is an additional 2-fold degeneracy. In our calculations, that small energy difference is explicitly finite \[cf. [Fig. \[fig:bands\]]{}(b)\] which explains the existence of two closely spaced levels labeled, for example, A$_\mathrm{1s}$, in [Fig. \[fig:spectrum\]]{}(a). Nevertheless, one should keep in mind that only half of these excitons are optically bright due to the parity selection rule [@Wu2015].
![ (a) Comparison of the lower portion of the theoretical exciton spectrum obtained in this work with the energies of the A and B peaks measured in [Refs.]{}  (note that the assignment of the peaks labeled $\mathrm{A_{2s}^*}$ and $\mathrm{A_{3s}^*}$ is not unequivocal in [Ref.]{} ). After a rigid displacement by $+0.07$eV (see text), the theoretical spectrum reproduces extremely well the position of the experimental A and B peaks. The dashed horizontal line indicates the one-particle energy gap at 2.18eV ($2.18{{\,=\,}}2.12+0.07$). (b) Joint density of exciton states as defined in [Eq. (\[eq:jdos\])]{}, already rigidly displaced by $+0.07$eV. A Lorentzian level broadening of 0.1eV (full width) has been used, except in the lower energy range (magnified in the inset) where it is 0.001eV to allow the resolution of individual bound exciton levels. []{data-label="fig:spectrum"}](fig3a_spectra_007 "fig:"){width="0.35\columnwidth"}![](fig3b_JDOS "fig:"){width="0.65\columnwidth"}

In this respect, it is worth pointing out that the difference in the excitation energies of the lowest bound A and B excitons does not follow exactly the spin-orbit splitting of the bands. This is significant because many theoretical TB parameterizations in the literature have identified $E^B {{\,-\,}}E^A$ directly with the SO coupling parameter and, equivalently, this exciton splitting is frequently used as a direct experimental measure of the spin-orbit splitting in the single-particle band structure, which is not strictly correct. For example, we obtain $E^B {{\,-\,}}E^A {{\,=\,}}130$meV while the valence band splitting due to SOC in our chosen TB is 150meV. The difference can be mainly traced back to the different effective masses of the spin-split valence bands [@Malic2014]. To identify the ${{\text{$\boldsymbol{k}$}}}$-point sampling that guarantees convergence of the whole spectrum, we have analyzed the exciton joint density of states (JDOS), $$\rho_J(E) \equiv \frac{1}{N_c N_v N_k^2} \sum_{M} \delta (E-E_M), \label{eq:jdos}$$ where $E_M$ are the eigenvalues of the BSE. Our calculations show that the JDOS is already reasonably converged for $N_k{{\,=\,}}48$ and that the differences for $N_{k}{{\,=\,}}90$ up to $120$ are negligible. We take $N_{k}{{\,=\,}}60$ for the results presented in this paper. The converged JDOS calculated with $N_c {{\,=\,}}8$ conduction and $N_v {{\,=\,}}2$ valence bands is shown in [Fig. \[fig:spectrum\]]{}(b) for $N_k{{\,=\,}}60$.

Linear optical conductivity {#sec:linear_optical_conductivity}
---------------------------

![ The room-temperature experimental traces reported in references and for the linear optical conductivity of a MoS$_{2}$ monolayer on silica (black) and its comparison with our results with (blue) and without (red) particle-hole interactions ($N_{c}{{\,=\,}}8$, $N_{v}{{\,=\,}}2$, $N_k{{\,=\,}}60$).
An energy-independent Lorentzian broadening of $0.136$eV (full width) was added to the calculated $\sigma(\omega)$ to capture the experimental broadening at the positions of the A and B excitons. The decay to zero at high energy is artificial: the restricted number of bands used in our diagonalization of the BSE makes the calculated spectra complete only up to ${{\,\sim\,}}3.5$eV. Nevertheless, for completeness, we show here the conductivity in the full range spanned by those bands. []{data-label="fig:sigma"}](fig4_mine_final_new){width="\columnwidth"} The optical conductivity in linear order is directly obtained from [Eqs. (\[eq:sigma-final\])]{} and [(\[eq:sigma-single\])]{} by a diagonalization of the full BSE matrix in the subspace spanned by the 10 bands mentioned above and a suitable sampling of the BZ. The Dirac-delta functions are broadened by replacing them with Lorentzians of full width $\gamma$. The latter qualitatively represents a total decay rate due to several microscopic mechanisms, each of them contributing to $\gamma$ with a characteristic energy dependence [@Qiu2013]. Since the experimental broadening is largely disorder- and sample-dependent, we take the simple approach of considering $\gamma$ as constant and adjusting its value to fit the experimental data (see discussion below). Figure \[fig:sigma\] shows the real part of $\sigma(\omega)$ that we obtain for the single-particle and BSE calculations. It is our most significant result. For reference, the plot includes the experimental traces reported recently by Li *et al.* [@Li2014] for exfoliated MoS$_2$ and Jayaswal *et al.* [@Jayaswal2018] for CVD-grown MoS$_2$, both measured on silica at room temperature. Note that the comparison with experimental data is only meaningful up to ${{\,\sim\,}}3.5$eV, beyond which excitations involving additional valence bands not included in our present calculation ([Fig. \[fig:bands\]]{}a) must be taken into account [@Ridolfi2015].
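The constant-broadening procedure amounts to replacing each delta function in [Eq. (\[eq:sigma-final\])]{} by a unit-area Lorentzian of full width $\gamma$. The sketch below does this for a few illustrative peak positions and weights (placeholder numbers, not the computed oscillator strengths).

```python
import numpy as np

def lorentzian(E, E0, gamma):
    """Unit-area Lorentzian of FULL width gamma centred at E0."""
    h = gamma / 2.0
    return (h / np.pi) / ((E - E0) ** 2 + h ** 2)

def broadened(E_grid, energies, weights, gamma):
    """Sum of weighted Lorentzians replacing the discrete delta functions."""
    return sum(w * lorentzian(E_grid, E0, gamma) for E0, w in zip(energies, weights))

E_grid = np.linspace(1.5, 3.5, 2001)
peaks = np.array([1.88, 2.01, 2.85])   # illustrative A-, B-, C-like positions (eV)
wts = np.array([1.0, 0.8, 2.0])        # illustrative oscillator strengths
sigma = broadened(E_grid, peaks, wts, gamma=0.136)
```

The same routine with a much smaller $\gamma$ resolves the individual bound levels, as done for the inset of the JDOS figure.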
We put $\gamma {{\,=\,}}0.068$eV in our calculations, which corresponds to the average width of the A and B exciton peaks reported in the experiment of Li *et al.* As anticipated, the single-particle results fail to describe the spectral features of the optical response. Even though the discrepancy is most obvious in the region dominated by the bound exciton levels, $E \lesssim 2.1$eV, there is also a remarkable difference in the continuum. In particular, the single-particle spectral weight is generically distributed at energies much above the interaction-corrected values, as highlighted by the horizontal arrow in Figure \[fig:sigma\]. Therefore, by neglecting the excitonic effects and interpreting the single-particle absorption spectrum literally, one incurs a severe qualitative and quantitative misrepresentation, not only in the vicinity of the optical gap, but actually over the entire range of energies. Thus, the strong Coulomb interactions in these two-dimensional materials not only lead to large exciton binding energies, but also largely renormalize the whole spectrum. Figure \[fig:sigma\] shows that our BSE calculation captures rather well the important experimental features of the optical conductivity in MoS$_2$, namely, the position of the A and B peaks, the overall maximum at $E^C {{\,\simeq\,}}2.85$eV, the energy dependence over the entire experimental range, and the absolute magnitude of the optical conductivity. We stress that we do not adjust or scale the magnitude of $\sigma(\omega)$, as is frequently done in theoretical work that discusses similar comparisons with experimental data. This overall agreement attests to the validity and accuracy of the SK parameterization of the underlying band structure, and provides strong support to the use of the Keldysh effective screened potential in the BSE calculations (an additional overview of different theoretical and experimental spectra reported in recent literature is given in [Fig.
\[fig:sigma-comparisons\]]{} of the appendix). Since $\gamma$ corresponds to microscopic processes that depend on the excitation energy, one might expect such dependence to have a significant influence on the line shape of the optical conductivity. Yet, our calculation with a constant broadening captures the measured energy dependence quite satisfactorily. Surprisingly, *ab initio* results [@Qiu2013; @Ramasubramanian2012; @Klots2014] are less accurate in describing ${\operatorname{Re}}\sigma(\omega)$ despite the inclusion of specific energy-dependent broadening processes, such as electron-phonon scattering [@Qiu2013] \[see [Fig. \[fig:sigma-comparisons\]]{}(b)\]. This indicates that the experimental broadening is likely dominated by disorder and justifies *a posteriori* our choice of an energy-independent $\gamma$ to broaden the numerically discrete spectrum in [Eq. (\[eq:sigma-final\])]{}. Finally, as pointed out in [Ref.]{} , convergence studies up to large ${{\text{$\boldsymbol{k}$}}}$-sampling meshes are essential to guarantee that one meaningfully “*reproduces features in the experimental absorption spectrum above 2eV*”. The comparison presented in [Fig. \[fig:sigma-comparisons\]]{} provides a good example of how approaches based on parameterized SK models such as ours can outperform full first-principles solutions of the BSE when it comes to expediency and the need for a large BZ sampling to ensure convergence. It is key, of course, to rely on underlying band structures that accurately describe the quasiparticle renormalization from the outset.

Nature of the C excitons
------------------------

![ Representative excitonic wave functions in momentum space including only the two conduction and valence bands closest to $E_F$ ($N_k{{\,=\,}}60$, $N_c{{\,=\,}}N_v{{\,=\,}}2$). (a) Diagram of the first BZ of MoS$_2$ (dashed line) and the equivalent reciprocal unit cell used in our ${{\text{$\boldsymbol{k}$}}}$-sampling (solid rhombus).
The $\Gamma$ point is located at the vertices of the sampling domain, $M{{\,=\,}}(\pi/a,-\pi/\sqrt{3}a)$, $K'{{\,=\,}}(4\pi/3a, 0)$ and $K{{\,=\,}}(8\pi/3a, 0)$. The left column gives wave functions obtained with the TB model of Ref. , while those on the right use the SK parameterization of Ref. . The second row \[(b) and (c)\] shows density plots of the probability distribution arising from one of the wave functions associated with the A peak in the optical conductivity. The two bottom rows \[(d) to (g)\] show representative wave functions in the region of C excitons. The axes are in units of Å$^{-1}$. Note that the color scale is logarithmic.[]{data-label="fig:wavefunctions"}](fig5a_sketch.pdf){width="0.65\columnwidth"} ![ []{data-label="fig:wavefunctions"}](fig5bc_AB_2x2x60.pdf){width="\columnwidth"} ![ []{data-label="fig:wavefunctions"}](fig5de_C1_2x2x60.pdf){width="\columnwidth"} ![ []{data-label="fig:wavefunctions"}](fig5fg_C2_2x2x60.pdf){width="\columnwidth"} ![ []{data-label="fig:wavefunctions"}](fig5h_scale.pdf){width="0.8\columnwidth"} The maximum in ${\operatorname{Re}}\sigma(\omega)$ at $E^C$ has been attributed to resonant excitons involving transitions near the center of the BZ (the $\Gamma$ point), the so-called C excitons [@Qiu2013]. As we discuss below, these excitons are actually no more related to $\Gamma$ than they are to the $K$ point. Hence, it is incorrect to refer to them as “$\Gamma$-point excitons”. Turning our attention to the spectral details of the conductivity around $E^C$, [Fig. \[fig:sigma\]]{} shows that the energy dependence obtained from the excitonic calculation in the interval $[2.5,\,3.5]$eV is nearly identical to that given at the single-particle level in the shifted interval $[3.5,\,4.5]$eV.
This seems to indicate that, in this energy range, the primary effect of the electronic interaction is to rigidly redshift the one-electron conductivity by about $0.9$eV, without notable modifications to the line shape (indicated by the horizontal arrow in [Fig. \[fig:sigma\]]{}). That being the case, one could question the attribution of the enhanced spectral weight in this broad region to interaction effects, insofar as (i) the one-electron trace seems to already carry the key aspects of the energy dependence and magnitude of $\sigma(\omega)$ and (ii) the excitonic corrections do not seem to generate any additional spectral feature beyond simply repositioning the curve *en bloc* to lower energies, as expected from the attractive electron-hole interaction. In other words, it appears as if, for excitation energies belonging to the one-particle continuum, the restructuring of the absorption spectrum caused by interactions amounts to a “scissor”-type correction of the energy spectrum, where an effective “binding energy” of ${{\,\sim\,}}0.9$eV brings the one-electron trace (red) to its correct position (blue), but with essentially no changes in oscillator strength. The problem with this, however, is that the value $0.9$eV is much larger than the binding energy of the A *bound* excitons ($E^A_b{{\,=\,}}0.324$eV). It turns out that explaining this excitonic redshift on the basis of these one-electron band structure features is questionable and, as we now explain in detail, too simplistic and misleading. The large spectral weight of the one-electron curve (red in [Fig. \[fig:sigma\]]{}) around its maximum is primarily due to the downward dispersion of the lowest conduction bands along $\Gamma{-}K$ (see [Fig. 
\[fig:bands\]]{}): the fact that conduction and valence bands separated by excitation energies ${{\,\sim\,}}4$eV disperse roughly parallel to each other entails a large enhancement of the one-particle JDOS in this region, naturally explaining the peak in ${\operatorname{Re}}\sigma(\omega)$. Equivalently, it can be inferred from [Fig. \[fig:bands\]]{} that, in our SK model, the one-electron “optical band structure” is nearly flat along the $\Gamma{-}K$ line. Hence, the one-electron “C peak” is mostly the result of a large one-electron JDOS at ${{\,\sim\,}}4$eV (see also [Fig. \[fig:sigma-jdos-small-broadening\]]{} in the appendix which explicitly confirms this). However, the same conclusion does not apply to the results of the excitonic calculation. As we have seen in [Fig. \[fig:spectrum\]]{}(b), the excitonic JDOS is peaked, broadly speaking, at ${{\,\sim\,}}4$eV while the corresponding conductivity peaks at $E^C {{\,\simeq\,}}2.85$eV. It follows that the enhanced optical response near $E^C$ is, clearly, not the result of a large number of excitonic levels with energies close to $E^C$. In reality, except for the bound excitonic levels that emerge in the gap, the one-particle and excitonic JDOS do not differ much at energies in the continuum, as we demonstrate in [Fig. \[fig:sigma-jdos-small-broadening\]]{} (appendix). For example, predicting the spectral shape of the optical conductivity solely on the basis of the excitonic spectrum (through the JDOS) would clearly fail for the energies in the continuum. This is the reason why the idea of an effective “binding energy” of ${{\,\sim\,}}0.9$eV that rigidly redshifts the one-electron spectrum, as discussed above, is misleading. We must therefore explicitly consider the oscillator strengths which, according to [Eq. 
(\[eq:sigma-final\])]{}, are given by $$\biggl| \sum_{{{\text{$\boldsymbol{k}$}}}cv}A_{cv{{\text{$\boldsymbol{k}$}}}}^M \sum_{\alpha \beta} (C_{\alpha {{\text{$\boldsymbol{k}$}}}}^v)^* C_{\beta {{\text{$\boldsymbol{k}$}}}}^c \nabla_{k_x} \! \bra{\phi_\alpha} \hat{H}({{\text{$\boldsymbol{k}$}}}) \ket{\phi_\beta} \biggr|^2. \label{eq:osc-str}$$ Recalling that $A_{cv{{\text{$\boldsymbol{k}$}}}}^M$ represents the probability amplitude of the exciton $M$ in reciprocal space, the oscillator strength depends not only on the one-electron dipole matrix elements, but also on the specific texture of each excitonic wave function in ${{\text{$\boldsymbol{k}$}}}$ space. In [Fig. \[fig:wavefunctions\]]{} we analyze representative excitonic wave functions in reciprocal space associated with the largest optical spectral weights (cf. [Fig. \[fig:sigma\]]{} and see also [Fig. \[fig:osc-str\]]{} below). As per our earlier remarks regarding the “calibration” of the energies, the values indicated in each panel are shifted by $+0.07$eV (right column) and $+0.57$eV (left column) with respect to the original eigenvalues of the BSE. For reference, [Fig. \[fig:wavefunctions\]]{}(b) and \[fig:wavefunctions\](c) show that the wave functions associated with the A and B excitons concentrate at the vicinity of the $K$ points [^8], as has been well established by previous calculations. The degree of their localization is directly related to the large binding energies and, in real space, they appear much more localized than typical excitons in semiconductors [@Wu2015; @WangReview2017]. We recall that, due to the parity selection rule, only half of the excitons related to the two valence and two conduction bands straddling the gap are optically bright. 
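Because the oscillator strength sums amplitudes coherently over ${{\text{$\boldsymbol{k}$}}}$, $c$ and $v$ *before* taking the modulus squared, it is sensitive to the phase texture of $A^M_{cv{{\text{$\boldsymbol{k}$}}}}$ and generally differs from the incoherent sum of $|A|^2|v|^2$. A toy sketch with random placeholder amplitudes (array sizes and values are illustrative, not our data):

```python
import numpy as np

rng = np.random.default_rng(0)
Nk, Nc, Nv = 16, 2, 2  # toy sizes; the actual calculation uses a 60x60 k-mesh

# Placeholder one-electron velocity matrix elements <ck| dH/dk_x |vk>
v_cvk = rng.normal(size=(Nk, Nc, Nv)) + 1j * rng.normal(size=(Nk, Nc, Nv))

# Placeholder normalized BSE eigenvector A^M_{cvk} for a single exciton M
A = rng.normal(size=(Nk, Nc, Nv)) + 1j * rng.normal(size=(Nk, Nc, Nv))
A /= np.linalg.norm(A)

# Coherent sum first, modulus squared last: interference between k-points counts
f_coherent = abs(np.sum(A * v_cvk)) ** 2
# Dropping the phases would instead give an incoherent (and incorrect) estimate
f_incoherent = float(np.sum(np.abs(A) ** 2 * np.abs(v_cvk) ** 2))
```

By the Cauchy-Schwarz inequality, the coherent value is bounded by $\sum|v|^2$ for a normalized eigenvector; how close it comes to that bound depends entirely on the ${{\text{$\boldsymbol{k}$}}}$-space texture of the exciton.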
Each of the optically bright excitons is 2-fold degenerate on account of the $K$–$K'$ valley degeneracy (when the small energy difference between the two lowest spin-polarized conduction bands at $K$/$K'$ is ignored, these are further degenerate with the other pair of doubly-degenerate dark excitons; in our calculations, that small energy difference is explicitly finite, in correspondence with the spectrum shown in [Fig. \[fig:spectrum\]]{}). The picture is rather different for the C excitons. The wave functions shown in panels (d) to (g) of [Fig. \[fig:wavefunctions\]]{} correspond to selected states with energies close to $E^C$. We caution the reader that, unlike the case of bound excitons such as A or B, here we show selected representative wave functions of a “continuum” of states. (We have checked over a window of energies near $E^C$ that the spreading of the wave functions over portions of the BZ similar to those shown here is a robust common feature.) Obviously, the two TB models we use [@Wu2015; @Ridolfi2015] render different states. We note, however, that the corresponding wave functions show rather large similarities (compare the left and right columns in [Fig. \[fig:wavefunctions\]]{}). Remarkably, there is no pronounced contribution *right at* the $\Gamma$ point itself; on the contrary, for our choice of primitive cell, the excitonic wave functions appear distributed on a ring at a finite distance from all the symmetry points, midway between $K{-}M$ and $\Gamma{-}K$. This agrees with similar observations based on full *ab initio* solutions of the BSE in the region of the C excitons which, likewise, associate C excitons with transitions within a similar annular region, but not specifically at or near $\Gamma$ [@Qiu2013; @Klots2014; @Molina-Sanchez2015] (see, for example, Fig. 21 in [Ref.]{} ). Further, the C excitons in [Fig.
\[fig:wavefunctions\]]{} appear more tightly localized in ${{\text{$\boldsymbol{k}$}}}$ space than their A counterparts, since the rings are rather thin (keep also in mind that the color scale is logarithmic). ![ (a) An overlay of the real part of the optical conductivity, the excitonic JDOS and the oscillator strength expressed in [Eq. (\[eq:osc-str\])]{} according to our excitonic calculation. (b) and (c) show representative excitonic wave functions contributing to the peaks D’ and C’ whose energies and spectral weight are highlighted in panel (a). ($N_k{{\,=\,}}60$, $N_c{{\,=\,}}8$, $N_v{{\,=\,}}2$). The axes are in units of Å$^{-1}$. Note that the color scale is logarithmic and the same as in [Fig. \[fig:wavefunctions\]]{}. []{data-label="fig:osc-str"}](fig6_weight_complete){width="\columnwidth"} This localization is established at a quantitative level by a computation of the inverse participation ratio associated with each exciton wave function, which we describe in the appendix. The data shown there, in [Fig. \[fig:sigma-jdos-small-broadening\]]{}(b), reveal that while the region of the C excitons is characterized by a comparatively small JDOS, the corresponding states are typically more localized than the average [^9]. This ultimately determines the energy dependence of the oscillator strength, which is maximized in the energy region near $E^C$, as can be directly seen in [Fig. \[fig:osc-str\]]{} where the quantity [(\[eq:osc-str\])]{} is shown for all excitons. Therefore, in contrast to the single-particle calculation, the spectral profile of ${\operatorname{Re}}\sigma(\omega)$ is almost entirely determined by the oscillator strength and not by the optical JDOS. In conclusion, the discussion above indicates that it is misleading to designate these as “$\Gamma$-point excitons” and reinforces the perspective that relates them with the properties of the “optical band structure” along the $\Gamma{-}K$ and $K{-}M$ directions [@Qiu2013; @Klots2014].
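The localization analysis above relies on an inverse participation ratio over the sampled ${{\text{$\boldsymbol{k}$}}}$ points. For a state normalized over $N$ points, a standard IPR ranges from $1/N$ (uniformly spread) to $1$ (a single point); the normalization convention of our Eq. (\[eq:ipr\]) may differ by a constant factor, so the sketch below is generic:

```python
import numpy as np

def ipr(psi):
    """Inverse participation ratio of a wave function sampled on N k-points.

    Equals 1/N for a uniformly spread state and 1 for a state living on a
    single k-point, so a larger IPR means stronger localization in k space.
    """
    p = np.abs(psi) ** 2
    p = p / p.sum()  # enforce normalization
    return float(np.sum(p ** 2))

N = 3600  # e.g. a 60x60 k-mesh
uniform = np.ones(N) / np.sqrt(N)      # fully delocalized state
peaked = np.zeros(N)                   # state concentrated on one k-point
peaked[0] = 1.0
ipr_uniform = ipr(uniform)             # -> 1/N
ipr_peaked = ipr(peaked)               # -> 1.0
```

A thin ring in the BZ, as for the C excitons, sits between these two limits: its IPR scales inversely with the number of ${{\text{$\boldsymbol{k}$}}}$ points the ring covers.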
Extending the calculation of the excitonic wavefunctions shown in [Fig. \[fig:wavefunctions\]]{} to include not only the lowest 2 but all 8 conduction bands in our TB model, we can conclusively assign the two peaks in the oscillator strength at $E {{\,\simeq\,}}2.9$eV and $E {{\,\simeq\,}}3.3$eV (labeled as D’ and C’ in [Fig. \[fig:osc-str\]]{}) to the two contributions distinguished in [Ref.]{} . Specifically, with our band structure, they arise from particle-hole excitations between approximately parallel bands ${{\,\simeq\,}}3.8$eV apart along the $\Gamma{-}K$ and $K{-}M$ symmetry lines. From this point of view, excitons belonging to the broad C region do have a large binding energy of about $0.9$eV ($0.9 {{\,=\,}}3.8 {{\,-\,}}2.9$). Notably, a comparison between [Fig. \[fig:wavefunctions\]]{}(g) and [Fig. \[fig:osc-str\]]{}(c) reveals additional weight in the latter over an inner ring close to $K$. This is contributed by transitions from the valence to the 5th and 6th conduction bands which disperse downwards and nearly parallel to each other ${{\,\simeq\,}}4$eV apart near $K$. This vividly illustrates that the attribution of fine details associated with the whole region of the C excitons is sensitive to the particulars of the underlying bandstructure, and necessitates the inclusion of higher conduction bands. That C excitons arise from particle-hole excitations midway from the $\Gamma{-}K$ and $K{-}M$ lines in reciprocal space is consistent with the C peak being more sensitive to the number of layers in thin MoS$_2$ films than the A and B features. In particular, the experimental shift of $E^C$ correlates with the changes in the separation of bands with MoS$_2$ thickness [@Malard2013; @Kim2014; @Klots2014]. These changes in electronic structure are known to be small at the $K$ point but large along the whole $\Gamma{-}K$ line, ultimately determining the transition from a direct to indirect gap as a function of thickness [@Splendiani2010]. 
The contrast between A/B and C excitons can be understood from the fact that the electronic states at $K$/$K'$ contain mostly contributions from the $d$ orbitals in Mo, while in the regions of ${{\text{$\boldsymbol{k}$}}}$ that contribute to the C excitons they have a strong $p$ character arising from the sulfur atoms [@Cappelluti2013; @Molina-Sanchez2015; @Ridolfi2015]. As $d$ orbitals are spatially more localized and, moreover, lie in the innermost of the three atomic planes that make up each MoS$_2$ monolayer, the A and B exciton states at $K$/$K'$ are not as perturbed in a stacked multilayer structure or as a result of strain, in comparison with the changes that occur to the C excitons due to their strong sulfur orbital content [@Molina-Sanchez2015].

Summary {#sec:conclusions}
=======

We revisited the problem of calculating the excitonic spectrum in the MoS$_2$ monolayer. We have shown that many-body effects strongly restructure the optical absorption spectrum over an unusually large range of energies in comparison with the single-particle picture. Our approach accounts for the anomalous screening in two dimensions and for the presence of a substrate, both modeled by a suitable effective Keldysh potential. We solve the Bethe-Salpeter equation for the interacting electron-hole excitations by using a Slater-Koster tight-binding model parametrized to fit the calculated first-principles band structure of the material. The optical conductivity that emerges captures with good accuracy both the shape and the absolute magnitude of the experimental data. Our calculation does not consider temperature-induced changes in the band structure or microscopic broadening mechanisms such as those from the unavoidable phonon excitations.
Indeed, by solving a temperature-dependent BSE based on first-principles electron and phonon spectra, Molina-Sánchez *et al.* [@Molina2016] have shown that the electron-phonon coupling is responsible for most of the 40meV red-shift observed between zero and room temperature [@Kioseoglou2016]. In addition, for quantitative and qualitative accuracy at the microscopic level, one must consider the impact of thermal expansion on the band structure and, in the case of excitons, the broadening contributed by radiative recombination. Details of such processes are, however, beyond the scope of the present work; in many experimental cases, such microscopic details are overwhelmed by disorder-induced broadening. To compare our results with experiments at room temperature, we blue-shifted the calculated optical response spectrum by 70meV and introduced a phenomenological broadening, as discussed in Sec. \[sec:results\]. Seeing that our result captures well the experimental spectrum up to ${{\,\sim\,}}3.5$eV, we relied on the predictions of this model to investigate the effects and characteristics of the so-called C excitons. Notably, we explicitly showed in [Fig. \[fig:wavefunctions\]]{} that they arise from particle-hole excitations in an annular region of the BZ centered on, but at a finite radius from, the $K$ points (maximal contributions arise from regions between $\Gamma{-}K$ and $M{-}K$). The interplay between the texture of the excitonic wave functions and the one-electron dipole matrix elements is responsible for the massive transfer of spectral weight seen in the conductivity when compared with results at the one-electron level ([Fig. \[fig:sigma\]]{}). Our results also sound a cautionary note regarding effective-mass descriptions of the MoS$_2$ band structure, especially if the aim is to describe the optical excitations in the vicinity of $E^C$.
In this case, a model that captures only the band structure at the $\Gamma$ point will certainly be insufficient. Throughout our analysis, we presented results obtained using two different tight-binding descriptions of the underlying single-particle band structure. This provides an example of the immediate transferability of this approach to studying the optical response of other members of the TMD family. In such a case, one can readily use the same orbital basis for the TB model, and the only material-dependent input is the corresponding band structure. The generic workflow is the same as the one we have used above: (i) obtain an *accurate* quasiparticle band structure from first principles, (ii) determine the parameters of the Slater-Koster TB that most faithfully describe that band structure, and (iii) solve the BSE in the eigenbasis of that TB Hamiltonian. Our analysis of two different TB parameterizations also affords a perspective on which aspects are robust in this approach and which depend on fine details of the parameterization. Overall, the accuracy of our results based on the SK model developed in [Ref.]{}  vividly supports the use of effective models to expeditiously explore the properties of excitons in 2D materials. This work shows it to be a reliable strategy, provided that the starting Hamiltonian faithfully describes the quasiparticle-corrected band structure. These approaches are orders of magnitude faster in CPU time than complete first-principles solutions of the BSE. Such an advantage facilitates properly addressing the optical response of MoS$_2$ at energies around the C excitons, where a fine sampling of $k$ space is necessary. We believe that, due to their intrinsic flexibility to model reliably a variety of conditions such as heterostructures, disorder and strain, effective models open the way for a more comprehensive investigation of the optical properties of TMDs where interaction effects play a fundamental role.
We acknowledge fruitful discussions with P. E. Trevisanutto, V. Olevano, T. G. Pedersen, L. Lima, F. Wu, F. Qu, and M. L. Trolle. E. Ridolfi was supported by the Singapore Ministry of Education under grant number MOE2015-T2-2-059. This work was further supported by the Singapore Ministry of Education Academic Research Fund Tier 1 under Grant No. R-144-000-386-114 and the Brazilian funding agencies CNPq, CAPES, and FAPERJ. Numerical computations were carried out at the HPC facilities of the NUS Centre for Advanced 2D Materials. Appendix {#appendix .unnumbered} ======== Optical conductivity: compilation of theoretical results -------------------------------------------------------- ![ Comparison between different theoretical predictions for the optical conductivity and the experimental results of Li and collaborators [@Li2014] and Jayaswal and collaborators [@Jayaswal2018]. (a) Re($\sigma_{xx}$) obtained from the TB-based solutions of the BSE reported in [Refs.]{}  contrasted with our calculations based on both the TB model of Ridolfi *et al.* [@Ridolfi2015] (blue) and that of Wu *et al.* [@Wu2015] (red) (in the latter we added the same broadening indicated in [Fig. \[fig:sigma\]]{}; this curve must be rigidly shifted by $+0.57$eV to match the experimental position of the A and B excitons [@Wu2015]). (b) Re($\sigma_{xx}$) based on full *ab initio* solutions of the BSE from [Refs.]{} . Since the latter [@Klots2014; @Molina2016] present Re($\sigma_{xx}$) in arbitrary units, we vertically scaled each curve to directly compare with the experimental trace. The curve from [Ref.]{}  is at 300K. []{data-label="fig:sigma-comparisons"}](fig7a_final_comparison_new_bis "fig:"){width="0.95\columnwidth"}\ ![ Comparison between different theoretical predictions for the optical conductivity and the experimental results of Li and collaborators [@Li2014] and Jayaswal and collaborators [@Jayaswal2018]. 
[]{data-label="fig:sigma-comparisons"}](fig7b_final_comparison_new_dft_bis "fig:"){width="0.95\columnwidth"} Figure \[fig:sigma-comparisons\] gives an overview of different theoretical results for the optical conductivity in MoS$_2$ found in the literature. Panel (a) compiles TB-based calculations [@Pedersen2014; @Nuno2017; @Wu2015], while panel (b) compares *ab initio* results [@Molina2016; @Qiu2013; @Klots2014]. In the case of the TB models, [Fig. \[fig:sigma-comparisons\]]{} shows both the ${\operatorname{Re}}\sigma(\omega)$ given in [Ref.]{}  and our BSE calculation using the TB parameterization of Wu and collaborators [@Wu2015] over a wider energy range, with a suitable broadening adapted to the experimental traces. Figure \[fig:sigma-comparisons\] indicates that the different theoretical approaches describe roughly the same behavior of ${\operatorname{Re}}\sigma(\omega)$ around the $A$ and $B$ peaks[^10]. In contrast, the energy dependence and spectral weight in the interval $[2.0,\,3.0]$eV that covers the region of the $C$ excitons are significantly approach-dependent.
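The vertical rescaling of the arbitrary-unit curves mentioned in the figure caption amounts to a one-parameter least-squares fit once theory and experiment are interpolated onto a common energy grid. A minimal sketch with synthetic curves (shapes and numbers are purely illustrative):

```python
import numpy as np

def best_scale(theory, experiment):
    """Least-squares scale factor s minimizing ||s * theory - experiment||^2."""
    return float(np.dot(theory, experiment) / np.dot(theory, theory))

# Toy curves on a common energy grid -- stand-ins, not the published data
omega = np.linspace(1.5, 3.0, 300)
exp_curve = np.exp(-((omega - 2.0) / 0.1) ** 2)  # "measured" line shape
thy_curve = 0.25 * exp_curve                     # same shape, arbitrary units
s = best_scale(thy_curve, exp_curve)             # recovers the factor 4
```

The closed form follows from setting the derivative of the squared residual with respect to $s$ to zero; it only fixes the overall magnitude and leaves the energy dependence untouched.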
In the particular case of the two TB models that we analyze in detail, these differences can be traced to the larger splitting of the conduction bands near the $\Gamma$ point in the model of [Ref.]{}  and the different orbital content of the Bloch states that dominate the dipole matrix elements [^11].

Number of conduction bands
--------------------------

Reference  reports that the C peak is contributed by 6 nearly degenerate exciton states made from transitions between the two highest valence bands and the three lowest conduction bands (including spin). In [Fig. \[fig:bandssnumber\]]{} we calculate ${\operatorname{Re}}\sigma_{xx}$ for $N_{c}{{\,=\,}}2$, $N_{c}{{\,=\,}}6$ and $N_{c}{{\,=\,}}8$ and verify the necessity of including at least $N_{c}{{\,=\,}}6$ bands. We note that, for both TB models, the optical conductivity acquires significant corrections due to the increased number of bands precisely in the energy region of the C excitons. In the model of Wu *et al.* [@Wu2015], the C peak is also slightly enhanced when passing from $N_{c}{{\,=\,}}6$ to $8$, while in our TB model the most significant changes occur when passing from $2$ to $6$. ![ Linear optical conductivity when considering different numbers of conduction bands in the models of Wu *et al.* [@Wu2015] (a) and Ridolfi *et al.* [@Ridolfi2015] (b). []{data-label="fig:bandssnumber"}](fig8a_Fanyao60_sigma "fig:"){width="0.9\columnwidth"}\ ![ []{data-label="fig:bandssnumber"}](fig8b_mine60_sigma "fig:"){width="0.9\columnwidth"}

Even and odd bands
------------------

The mirror symmetry with respect to the horizontal plane that contains the transition metal ions has important consequences for the band structure of MoS$_2$ monolayers.
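The optical consequence of this symmetry can be previewed with a toy numerical check: when both the Hamiltonian and the (spin-diagonal) dipole operator commute with the mirror-parity operator, dipole matrix elements between eigenstates of opposite parity vanish identically. The $4\times 4$ matrices below are random placeholders, not our TB Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.diag([1.0, 1.0, -1.0, -1.0])  # mirror parity: two even, two odd orbitals

def parity_symmetric(rng):
    """Random Hermitian matrix commuting with P (block-diagonal in parity)."""
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    A = (A + A.conj().T) / 2           # Hermitize
    return (A + P @ A @ P) / 2         # project out the parity-breaking part

H = parity_symmetric(rng)              # toy mirror-symmetric Hamiltonian
D = parity_symmetric(rng)              # dipole operator, even under the mirror

E, V = np.linalg.eigh(H)
parity = np.real(np.diag(V.conj().T @ P @ V))  # <n|P|n> = +/-1 per eigenstate
M = V.conj().T @ D @ V                 # dipole matrix elements between eigenstates
```

Since $\langle i|D|j\rangle = \langle i|PDP|j\rangle = p_i p_j \langle i|D|j\rangle$, the element must vanish whenever $p_i p_j = -1$, which is exactly what the check below confirms.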
The TB model that we used to describe the ground-state band structure [@Ridolfi2015] predicts one even valence band, two even conduction bands, and two odd parity conduction bands around the Fermi energy \[see [Fig. \[fig:bands\]]{}(a)\]. The importance of the odd bands has been unclear [@Qiu2013], and we examine this issue next. Figure \[fig:Fanyao\_even\_vs\_all\](a) shows that the odd bands of the TB model of [Ref.]{}  do not contribute to the optical conductivity. This is expected because this TB model does not include spin-flipping terms: since the dipole coupling is diagonal in spin, the non-zero transition matrix elements must involve initial and final states with the same parity under a reflection with respect to the plane. Our TB model gives a small difference between the all-band and even-band results, caused by the coupling of odd and even bands through the spin-flip terms in the Hamiltonian. We conclude that the increment in the optical conductivity when increasing the number of bands comes mainly from the two upper “even” bands. ![ Comparison between the linear optical conductivity obtained with an “all-band” model ($N_{c}{{\,=\,}}8$, $N_{v}{{\,=\,}}2$, $N_{ks}{{\,=\,}}60$) and an “even-band” model ($N_{c}{{\,=\,}}4$, $N_{v}{{\,=\,}}2$, $N_{ks}{{\,=\,}}60$) using the TB parameterizations of Wu *et al.* [@Wu2015] (a) and Ridolfi *et al.* [@Ridolfi2015] (b). []{data-label="fig:Fanyao_even_vs_all"}](fig9a_Fanyao_even_vs_all_60 "fig:"){width="0.9\columnwidth"}\ ![ Comparison between the linear optical conductivity obtained with an “all-band” model ($N_{c}{{\,=\,}}8$, $N_{v}{{\,=\,}}2$, $N_{ks}{{\,=\,}}60$) and an “even-band” model ($N_{c}{{\,=\,}}4$, $N_{v}{{\,=\,}}2$, $N_{ks}{{\,=\,}}60$) using the TB parameterizations of Wu *et al.* [@Wu2015] (a) and Ridolfi *et al.* [@Ridolfi2015] (b).
[]{data-label="fig:Fanyao_even_vs_all"}](fig9b_mine_even_vs_all_60 "fig:"){width="0.9\columnwidth"} Exciton and one-electron JDOS ----------------------------- When interpreting the origin of the spectral weight shift in the optical conductivity associated with the C excitons, it is useful to analyze how the excitation spectrum itself changes in the presence of the Coulomb interaction. [Fig. \[fig:sigma-jdos-small-broadening\]]{}(a) shows the same data for ${\operatorname{Re}}\sigma(\omega)$ that was presented in [Fig. \[fig:sigma\]]{}, but using a considerably smaller broadening to reveal more clearly the fine spectral structure. In [Fig. \[fig:sigma-jdos-small-broadening\]]{}(b) we compare the joint density of states (JDOS) with and without interaction. Apart from the emergence of the bound excitonic states in the gap, we can see that the excitation spectrum largely maintains the JDOS computed at the one-electron level. The interaction causes a global redshift of about $0.1$eV, which is much smaller than the spectral weight transfer seen in the conductivity. This figure additionally includes the inverse participation ratio (IPR) of all the excitonic levels [(\[eq:ipr\])]{}, which quantifies the degree of localization of the respective wave functions in reciprocal space. ![ (a) The same as [Fig. \[fig:sigma\]]{}, except that the calculated curves have been broadened by the smaller value 0.03eV (full width). (b) Joint density of exciton states (JDOS) as defined in [Eq. (\[eq:jdos\])]{}. A Lorentzian level broadening of 0.01eV (full width) has been used, except in the lower energy range, where we used 0.001eV to allow the resolution of individual bound exciton levels. The inverse participation ratio (IPR) has been calculated from [Eq. (\[eq:ipr\])]{} and is presented without any broadening. []{data-label="fig:sigma-jdos-small-broadening"}](fig10a_sigma-g0-015 "fig:"){width="0.8\columnwidth"}\ ![ (a) The same as [Fig. 
\[fig:sigma\]]{}, except that the calculated curves have been broadened by the smaller value 0.03eV (full width). (b) Joint density of exciton states (JDOS) as defined in [Eq. (\[eq:jdos\])]{}. A Lorentzian level broadening of 0.01eV (full width) has been used, except in the lower energy range, where we used 0.001eV to allow the resolution of individual bound exciton levels. The inverse participation ratio (IPR) has been calculated from [Eq. (\[eq:ipr\])]{} and is presented without any broadening. []{data-label="fig:sigma-jdos-small-broadening"}](fig10b_jdos-g0-005-de0-001-pr "fig:"){width="0.8\columnwidth"} Exciton inverse participation ratio ----------------------------------- To gain an overall perspective on the degree of localization of each exciton’s wave function, we computed the inverse participation ratio (IPR), $$\mathcal{P}(E_M) \equiv \sum_{cv{{\text{$\boldsymbol{k}$}}}} |A^M_{cv{{\text{$\boldsymbol{k}$}}}}|^4 / \sum_{cv{{\text{$\boldsymbol{k}$}}}} |A^M_{cv{{\text{$\boldsymbol{k}$}}}}|^2, \label{eq:ipr}$$ for all the eigenfunctions of the BSE [(\[eq:BSE\])]{}. This quantity provides a rough measure of the spread (in *reciprocal* space) of the wave function belonging to the exciton with energy $E_M$, being largest for the most localized states and scaling ${\propto\,}1/N_{\text{tot}}$ for states uniformly extended over the whole BZ. The result is included in [Fig. \[fig:sigma-jdos-small-broadening\]]{}(b). It reveals that while, on the one hand, the region of the C excitons is characterized by a comparatively small JDOS, on the other, states there are typically more localized than the average, as revealed by a number of peaks of the IPR in the interval $[2.75,\,3.25]$eV. This simply reflects what has been inferred from the selected wave functions shown in [Fig.
\[fig:wavefunctions\]]{} and, moreover, confirms our earlier statement that C excitons are considerably more localized than bound ones in reciprocal space: $\mathcal{P}(E{{\,\approx\,}}E^{A,B}) {\,\ll\,} \mathcal{P}(E{{\,\approx\,}}E^C)$. This, of course, is as expected because the latter are true bound states in *real* space (in relation to this, note that a pure Bloch state has $\mathcal{P}({\varepsilon}_{{{\text{$\boldsymbol{k}$}}}}) {{\,=\,}}1$ since its wave function is entirely localized at the point ${{\text{$\boldsymbol{k}$}}}$ in the BZ). [^1]: For example, [Ref.]{}  attributes the peak C to transitions near, but not directly at, the $\Gamma$ point, which require a fine sampling with $300^2$ ${{\text{$\boldsymbol{k}$}}}$ points, and at least $56$ bands in the underlying GW calculation. The authors also use local field effects to include the interaction over different BZs. [^2]: For the derivation of the BSE and details on the computation of $W$ see, for instance, Chapters VII.1. and IV.4-5 of [Ref.]{} . [^3]: The expression $r_{0} {{\,=\,}}\frac{\epsilon_{2}d}{\epsilon_{1}+\epsilon_{3}}$ is sometimes reported as the limit of [Eq. (\[eq:r0\])]{} when $\epsilon_{2} \gg \epsilon_{1,3}$. [^4]: Having $\alpha_1{{\,=\,}}1$, $\alpha_{2,3}{{\,=\,}}0$ in [Eq. (\[eq:regularization\])]{} corresponds to leaving the factor $\propto 1/\kappa(q)$ in the potential [(\[eq:potential-transform\])]{} outside of the average integral. [^5]: Recall that if $N_k{{\,=\,}}100$, the full diagonalization of the BSE Hamiltonian requires handling a matrix of dimension $N_\text{tot}\times N_\text{tot} {{\,=\,}}(10^4 N_c N_v)^2$. For us, with $N_c {{\,=\,}}8$ and $N_v {{\,=\,}}2$, that amounts to ${{\,\simeq\,}}2.56 \times 10^{10}$. [^6]: In particular, \#$1$ consists of choosing $q{{\,=\,}}0$ at the center of a square integration domain of side ${2\pi }/{a{N_k}}$ and \#$2$ in placing $q{{\,=\,}}0$ at the corner of the same integration square.
[^7]: Alternatively, the experimental positions of the A and B peaks can be matched by tuning the environment dielectric constants that determine $r_0$ \[cf. [Eq. (\[eq:r0\])]{}\] [@Pedersen2014]. [^8]: The results for the B excitons are not shown explicitly, but are similar to those reported in [Fig. \[fig:wavefunctions\]]{} for the A counterparts. [^9]: At a technical level, it is important to realize that the sharp localization associated with the C excitons requires a fine ${{\text{$\boldsymbol{k}$}}}$-point mesh to ensure proper convergence of the absorption spectrum over this large range of energies . [^10]: We stress, however, that some models require rather large rigid shifts in energy and/or vertical scaling in order to make the calculated results agree with the experimental traces as shown in [Fig. \[fig:sigma-comparisons\]]{}. [^11]: For completeness, it is worth noting that the spectral shape in the range of the $C$ excitons measured in MoS$_2$ multilayers changes appreciably with layer number, as shown by the experiments reported in [Ref.]{} .
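As a concrete numerical check of the limiting behaviors quoted above, the IPR of Eq. (\[eq:ipr\]) can be evaluated with a short NumPy sketch; the amplitudes below are illustrative placeholders, not actual BSE eigenvectors:

```python
import numpy as np

def ipr(A):
    """Inverse participation ratio of exciton amplitudes A_{cvk}:
    sum |A|^4 / sum |A|^2, cf. Eq. (eq:ipr)."""
    w = np.abs(np.asarray(A).ravel()) ** 2
    return np.sum(w ** 2) / np.sum(w)

N = 1000  # number of (c, v, k) transition basis states (illustrative)

localized = np.zeros(N)
localized[0] = 1.0                  # all amplitude on a single transition
extended = np.ones(N) / np.sqrt(N)  # uniformly spread over the whole BZ

assert np.isclose(ipr(localized), 1.0)      # most localized: IPR = 1
assert np.isclose(ipr(extended), 1.0 / N)   # fully extended: IPR = 1/N_tot
```

The two assertions reproduce the limits stated in the text: the IPR is largest (equal to one) for a state concentrated on a single transition and scales as $1/N_{\text{tot}}$ for a uniformly extended state.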
--- abstract: 'This paper proposes a new architecture — [Attentive Tensor Product Learning]{} ([ATPL]{}) — to represent grammatical structures in deep learning models. [ATPL]{} exploits Tensor Product Representations (TPR), a structured neural-symbolic model developed in cognitive science, to integrate deep learning with explicit language structures and rules. The key ideas of [ATPL]{} are: 1) unsupervised learning of role-unbinding vectors of words via a TPR-based deep neural network; 2) employing attention modules to compute the TPR; and 3) integration of TPR with typical deep learning architectures, including Long Short-Term Memory (LSTM) and Feedforward Neural Networks (FFNN). The novelty of our approach lies in its ability to extract the grammatical structure of a sentence by using role-unbinding vectors, which are obtained in an unsupervised manner. This [ATPL]{} approach is applied to 1) image captioning, 2) part-of-speech (POS) tagging, and 3) constituency parsing of a sentence. Experimental results demonstrate the effectiveness of the proposed approach.' author: - 'Qiuyuan Huang, Li Deng, Dapeng Wu, Chang Liu, Xiaodong He [^1]' title: Attentive Tensor Product Learning --- Introduction ============ Deep learning (DL) is an important tool in many natural language processing (NLP) applications. Since natural languages are rich in grammatical structures, there is an increasing interest in learning vector representations that capture the grammatical structures of natural language descriptions using deep learning models [@tai2015improved; @kumar2016ask; @kong2017dragnn]. In this work, we propose a new architecture, called [*[Attentive Tensor Product Learning]{} ([ATPL]{})*]{}, to address this representation problem by exploiting Tensor Product Representations (TPR) [@smolensky1990tensor; @smolensky2006harmonic]. TPR is a structured neural-symbolic model developed in cognitive science over 20 years ago.
In the TPR theory, a sentence can be considered as a sequence of [*roles*]{} (i.e., grammatical components), each filled with a [*filler*]{} (i.e., a token). With each role associated with a [*role vector*]{} $r_t$ and each filler with a [*filler vector*]{} $f_t$, the TPR of a sentence can be computed as $S=\sum_t f_tr_t^\top$. Compared with the popular RNN-based representations of a sentence, a good property of TPR is that the token at timestep $t$ can be decoded directly by providing an [*unbinding vector*]{} $u_t$. That is, $f_t=S\cdot u_t$. Under the TPR theory, encoding and decoding a sentence is equivalent to learning the role vectors $r_t$ or unbinding vectors $u_t$ at each position $t$. We employ the TPR theory to develop a novel attention-based neural network architecture for learning the unbinding vectors $u_t$, which serve as the core of [ATPL]{}. That is, [ATPL]{} employs a form of recurrent neural network to produce the $u_t$ one at a time. At each timestep, the TPR of the partial prefix of the sentence up to time $t-1$ is leveraged to compute the attention maps, which are then used to compute the TPR $S_t$ as well as the unbinding vector $u_t$ at time $t$. In doing so, our [ATPL]{} can be used not only to generate a sequence of tokens, but also to generate a sequence of [*roles*]{}, which can interpret the syntactic/semantic structures of the sentence. To demonstrate the effectiveness of our [ATPL]{} architecture, we apply it to three important NLP tasks: 1) image captioning; 2) POS tagging; and 3) constituency parsing of a sentence. The first showcases our [ATPL]{}-based generator, while the latter two demonstrate the power of role vectors in interpreting sentences’ syntactic structures. Our evaluation shows that on both image captioning and POS tagging, our approach can outperform previous state-of-the-art approaches.
In particular, on the constituency parsing task, when the structural segmentation is given as ground truth, our [ATPL]{} approach can beat the state-of-the-art by $3.5$ to $4.4$ points on the Penn TreeBank dataset. These results demonstrate that our ATPL is more effective at capturing the syntactic structures of natural language sentences. The paper is organized as follows. Section \[sec:RelatedWork\] discusses related work. In Section \[sec:GenArch\], we present the design of [ATPL]{}. Section \[sec:ImageCaptioning\] through Section \[sec:ConstituencyParsing\] describe three applications of [ATPL]{}, i.e., an image captioner, a POS tagger, and a constituency parser, respectively. Section \[sec:Conclusion\] concludes the paper. Related work {#sec:RelatedWork} ============ Our proposed image captioning system follows a great deal of recent caption-generation literature in exploiting end-to-end deep learning with a CNN image-analysis front end producing a distributed representation that is then used to drive a natural-language generation process, typically using RNNs [@mao2015deep; @vinyals2015show; @karpathy2015deep]. Our grammatical interpretation of the structural roles of words in sentences makes contact with other work that incorporates deep learning into grammatically-structured networks [@tai2015improved; @andreas2015deep; @yogatama2016learning; @maillard2017jointly]. Here, the network is not itself structured to match the grammatical structure of the sentences being processed; the structure is fixed, but is designed to support the learning of distributed representations that incorporate structure internal to the representations themselves — filler/role structure. The second task we consider is POS tagging.
Methods for automatic POS tagging include unigram tagging, bigram tagging, tagging using Hidden Markov Models (which are generative sequence models), maximum entropy Markov models (which are discriminative sequence models), rule-based tagging, and tagging using bidirectional maximum entropy Markov models [@jurafsky2017Speech]. The celebrated Stanford POS tagger of [@Stanford_Parser_weblink] uses a bidirectional version of the maximum entropy Markov model called a cyclic dependency network in [@toutanova2003feature]. Methods for automatic constituency parsing of a sentence, our third task, include methods based on probabilistic context-free grammars (CFGs) [@jurafsky2017Speech], the shift-reduce method [@zhu2013fast], and sequence-to-sequence LSTMs [@vinyals2015grammar]. Our constituency parser is similar to the sequence-to-sequence approach of [@vinyals2015grammar], since both use LSTM neural networks to build a constituency parser. Different from [@vinyals2015grammar], our constituency parser uses TPR and unbinding role vectors to extract features that contain grammatical information. ![image](ATPL.png){width="0.7\linewidth"} [Attentive Tensor Product Learning]{} {#sec:GenArch} ===================================== In this section, we present the [ATPL]{} architecture. We will first briefly revisit the Tensor Product Representation (TPR) theory, and then introduce several building blocks. In the end, we explain the [ATPL]{} architecture, which is illustrated in Figure \[fig:Architecture1\]. Background: Tensor Product Representation {#sec:naive-tpr} ----------------------------------------- The TPR theory allows computing a vector representation of a sentence as the summation over its individual tokens while taking the order of the tokens into consideration. For a sentence of $T$ words, denoted by $x_1,\cdots,x_T$, the TPR theory considers the sentence as a sequence of [*grammatical role slots*]{}, with each slot filled with a concrete token $x_t$.
The role slot is thus referred to as a [*role*]{}, while the token $x_t$ is referred to as a [*filler*]{}. The TPR of the sentence can thus be computed by [*binding*]{} each role with a filler. Mathematically, each role is associated with a [*role vector*]{} $r_t \in \mathbb{R}^{d}$, and each filler with a [*filler vector*]{} $f_t \in \mathbb{R}^{d}$. Then the TPR of the sentence is $$S=\sum_{t=1}^T f_t\cdot r_t^\top\label{eq:binding}$$ where $S \in \mathbb{R}^{d\times d}$. Each role is also associated with a dual [*unbinding vector*]{} $u_t$ such that $r_t^\top u_t=1$ and $r_t^\top u_{t'} = 0, t' \neq t$; then $$f_t=Su_t\label{eq:unbinding}$$ Intuitively, Eq.  requires that $R^\top U=\mathbf{I}$, where $R=[r_1;\cdots;r_T]$, $U=[u_1;\cdots;u_T]$, and $\mathbf{I}$ is an identity matrix. In the simplified case where the $r_t$ are mutually orthogonal and $r_t^\top r_t=1$, we can easily derive $u_t=r_t$. Eqs. (\[eq:binding\]) and (\[eq:unbinding\]) provide the means to [*bind*]{} or [*unbind*]{} a TPR. Through these mechanisms, it is also easy to construct an encoder and a decoder to convert between a sentence and its TPR. All we need to compute is the role vector $r_t$ (or its dual unbinding vector $u_t$) at each timestep $t$. One simple approach is to compute it as the hidden state of a recurrent neural network (e.g., an LSTM). However, this simple strategy may not yield the best performance. Building blocks --------------- Before introducing [ATPL]{}, we first present several building blocks repeatedly used in our construction. An [*attention module*]{} over an input vector $v$ is defined as $${\ensuremath{\mathrm{Attn}(v)}} = \sigma(Wv+b)$$ where $\sigma$ is the sigmoid function, $W\in\mathbb{R}^{d_1\times d_2}$, $b\in\mathbb{R}^{d_1}$, $d_2$ is the dimension of $v$, and $d_1$ is the dimension of the output.
Intuitively, ${\ensuremath{\mathrm{Attn}(\cdot)}}$ outputs a vector serving as an attention heatmap, and $d_1$ equals the dimension to which the heatmap will be applied. $W$ and $b$ are two sets of parameters. Unless otherwise noted, different attention modules have mutually disjoint sets of parameters. We refer to a [*Feed-Forward Neural Network*]{} (FFNN) module as a single fully-connected layer: $${\ensuremath{\mathrm{FFNN}(v)}}=\mathbf{tanh}(Wv + b)$$ where $W$ and $b$ are the parameter matrix and the parameter vector with appropriate dimensions, respectively, and $\mathbf{tanh}$ is the hyperbolic tangent function. [ATPL]{} architecture -------------------- In this paper, we mainly focus on an [ATPL]{} decoder architecture that can decode a vector representation $\mathbf{v}$ into a sequence $x_1,\cdots,x_T$. The architecture is illustrated in Fig. \[fig:Architecture1\]. We notice that, if we require the role vectors to be orthogonal to each other, then decoding the filler $f_t$ only requires unbinding the TPR of the undecoded words, $S_t$: $$f_t = S_tu_t=\big(\sum_{i=t}^T (W_ex_i)r_i^\top\big) u_t = W_ex_t \label{eq:decoder}$$ where $x_t\in \mathbb{R}^V$ is a one-hot encoding vector of dimension $V$ and $V$ is the size of the vocabulary; $W_e \in \mathbb{R}^{d\times V}$ is a word embedding matrix, the $i$-th column of which is the embedding vector of the $i$-th word in the vocabulary; the embedding vectors are obtained by the Stanford GloVe algorithm with zero mean [@Stanford_Glove_weblink].
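The unbinding identity in Eq. (\[eq:decoder\]) relies only on the orthonormality of the role vectors. A minimal NumPy sketch (dimensions and random fillers are illustrative, not trained quantities) verifies Eqs. (\[eq:binding\]) and (\[eq:unbinding\]):

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 32, 5                       # role/filler dimension, sentence length

# Orthonormal role vectors: the first T columns of a random orthogonal
# matrix, so that the unbinding vectors satisfy u_t = r_t (the simplified
# case discussed in the text).
R = np.linalg.qr(rng.standard_normal((d, d)))[0][:, :T]
F = rng.standard_normal((d, T))    # filler vectors f_1 .. f_T (embeddings)

# Binding, Eq. (eq:binding): S = sum_t f_t r_t^T
S = sum(np.outer(F[:, t], R[:, t]) for t in range(T))

# Unbinding, Eq. (eq:unbinding), recovers each filler: f_t = S u_t, u_t = r_t.
for t in range(T):
    assert np.allclose(S @ R[:, t], F[:, t])
```

Because the roles are orthonormal, unbinding is exact: $S r_t = \sum_s f_s (r_s^\top r_t) = f_t$.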
To compute $S_t$ and $u_t$, [ATPL]{} employs two attention modules controlled by $\tilde{S}_{t-1}$, which is the TPR of the words generated so far, $x_1,\cdots,x_{t-1}$: $$\tilde{S}_{t-1} = \sum_{i=1}^{t-1} W_ex_i{r}_i^\top$$ On one hand, $S_t$ is computed as follows: $$\begin{aligned} S_t = {\ensuremath{\mathrm{FFNN}(\mathbf{q}_t)}}\\ \mathbf{q}_t=\mathbf{v}\odot{\ensuremath{\mathrm{Attn}(h_{t-1}\oplus\mathrm{vec}(\tilde{S}_{t-1}))}}\end{aligned}$$ where $\odot$ is point-wise multiplication, $\oplus$ concatenates two vectors, and $\mathrm{vec}$ vectorizes a matrix. In this construction, $h_{t-1}$ is the hidden state of an external LSTM, which we will explain later. The key idea here is that we employ an attention model to put weights on each dimension of the image feature vector $\mathbf{v}$, so that it can be used to compute $S_t$. Note that it has been demonstrated that attention structures can be used to effectively learn any function [@vaswani2017attention]. Our work adopts a similar idea to compute $S_t$ from $\mathbf{v}$ and $\tilde{S}_{t-1}$. On the other hand, $u_t$ is computed similarly as follows: $$u_t = \mathbf{U}{\ensuremath{\mathrm{Attn}(h_{t-1}\oplus\mathrm{vec}(\tilde{S}_{t-1}))}}$$ where $\mathbf{U}$ is a constant normalized Hadamard matrix. In doing so, [ATPL]{} can decode an image feature vector ${\mathbf v}$ by recursively 1) computing $S_t$ and $u_t$ from $\tilde{S}_{t-1}$, 2) computing $f_t$ as $S_tu_t$, and 3) setting ${r}_t=u_t$ and updating $\tilde{S}_t$. This procedure continues until the full sentence is generated.
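The recursive procedure above can be sketched in NumPy. The learned attention module is replaced here by a fixed selection of Hadamard columns, an illustrative simplification rather than the trained model; the sketch shows why a normalized Hadamard matrix is a valid source of unbinding vectors (its columns are mutually orthonormal):

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of an n x n Hadamard matrix (n a power of 2),
    # scaled so that its columns are orthonormal.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

d, T = 32, 4
U = hadamard(d)                         # constant normalized Hadamard matrix
rng = np.random.default_rng(1)
fillers = rng.standard_normal((d, T))   # stand-ins for the embeddings W_e x_t

# TPR of the full sentence, taking r_t = u_t as Hadamard columns.
S = sum(np.outer(fillers[:, t], U[:, t]) for t in range(T))

# Greedy decoding: unbind f_t from the TPR of the undecoded words
# (S - S_tilde plays the role of S_t), then grow S_tilde with r_t = u_t.
S_tilde = np.zeros((d, d))
for t in range(T):
    u_t = U[:, t]   # in ATPL, u_t = U @ Attn(h_{t-1} concat vec(S_tilde))
    f_t = (S - S_tilde) @ u_t
    assert np.allclose(f_t, fillers[:, t])
    S_tilde += np.outer(f_t, u_t)
```

After the last step, $\tilde{S}_T$ coincides with the TPR of the whole sentence, mirroring the fact that decoding consumes $S_t$ exactly.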
![image](Option1.png){width="80.00000%"}

  Methods                    METEOR      BLEU-1      BLEU-2      BLEU-3      BLEU-4      CIDEr
  -------------------------- ----------- ----------- ----------- ----------- ----------- -----------
  NIC [@vinyals2015show]     0.237       0.666       0.461       0.329       0.246       0.855
  CNN-LSTM [@SCN_CVPR2017]   0.238       0.698       0.525       0.390       0.292       0.889
  SCN-LSTM [@SCN_CVPR2017]   0.257       0.728       0.566       0.433       0.330       1.012
  [ATPL]{}                   **0.258**   **0.733**   **0.572**   **0.437**   **0.335**   **1.013**

  : Performance on the COCO dataset. []{data-label="table:BLEU"}

Next, we will present three applications of [ATPL]{}, i.e., the image captioner, POS tagger, and constituency parser, in Section \[sec:ImageCaptioning\] through Section \[sec:ConstituencyParsing\], respectively. Image Captioning {#sec:ImageCaptioning} ================ To showcase our [ATPL]{} architecture, we first study its application to the image captioning task. Given an input image $\mathbf{I}$, a standard encoder-decoder can be employed to convert the image into an image feature vector $\mathbf{v}$, and the [ATPL]{} decoder is then used to convert $\mathbf{v}$ into a sentence. The overall architecture is depicted in Fig. \[fig:Architecture3\]. We evaluate our approach with several baselines on the COCO dataset [@COCO_weblink]. The COCO dataset contains 123,287 images, each of which is annotated with at least 5 captions. We use the same pre-defined splits as [@karpathy2015deep; @SCN_CVPR2017]: 113,287 images for training, 5,000 images for validation, and 5,000 images for testing. We use the same vocabulary as that employed in [@SCN_CVPR2017], which consists of 8,791 words. For the CNN of Fig. \[fig:Architecture1\], we used ResNet-152 [@he2016deep], pretrained on the ImageNet dataset. The image feature vector ${\mathbf v}$ has 2048 dimensions. The model is implemented in TensorFlow [@tensorflow2015-whitepaper] with the default settings for random initialization and optimization by backpropagation. In our [ATPL]{} architecture, we choose $d=32$, and the size of the LSTM hidden state to be $512$. The vocabulary size $V=8,791$.
[ATPL]{} uses tags as in [@SCN_CVPR2017]. We compare with [@vinyals2015show] and the state-of-the-art CNN-LSTM and SCN-LSTM [@SCN_CVPR2017]. The main evaluation results on the MS COCO dataset are reported in Table \[table:BLEU\]. The widely-used BLEU [@papineni2002bleu], METEOR [@banerjee2005meteor], and CIDEr [@vedantam2015cider] metrics are reported in our quantitative evaluation of the performance of the proposed scheme. We can observe that our [ATPL]{} architecture significantly outperforms all other baseline approaches across all metrics being considered. The results clearly attest to the effectiveness of the [ATPL]{} architecture. We attribute the performance gain of [ATPL]{} to the use of TPR in place of a pure LSTM decoder, which allows the decoder to learn not only how to generate the [*filler*]{} sequence but also how to generate the [*role*]{} sequence, so that the decoder can better understand the grammar of the considered language. Indeed, manual inspection of the captions generated by [ATPL]{} reveals no grammatical mistakes. We attribute this to the fact that our TPR structure enables training to be more effective and more efficient in learning the structure through the role vectors. Note that the focus of this paper is on developing a Tensor Product Representation (TPR) inspired network to replace the core layers in an LSTM; therefore, it is directly comparable to an LSTM baseline. So in the experiments, we focus on comparison to a strong CNN-LSTM baseline. We acknowledge that more recent papers reported better performance on the task of image captioning. Performance improvements in these more recent models are mainly due to using better image features such as those obtained by Region-based Convolutional Neural Networks (R-CNN), or using reinforcement learning (RL) to directly optimize metrics such as CIDEr to provide a better context vector for caption generation, or using an ensemble of multiple LSTMs, among others.
However, the LSTM is still playing a core role in these works, and we believe that improvement over the core LSTM, in both performance and interpretability, is still very valuable. Deploying these new features and architectures (R-CNN, RL, and ensembles) with ATPL is our future work. ![image](seq2seq.png){width="80.00000%"} POS Tagging {#sec:POS_Tagging} =========== In this section, we study the application of [ATPL]{} to the POS tagging task. Intuitively, given a sentence $x_1,...,x_T$, POS tagging assigns a POS tag, denoted $z_t$, to each token $x_t$. In the following, we first present our model using [ATPL]{} for POS tagging, and then evaluate its performance. [ATPL]{} POS tagging architecture -------------------------------- Based on the TPR theory, the role vector (as well as its dual unbinding vector) contains the POS tag information of each word. Hence, we first use [ATPL]{} to compute a sequence of unbinding vectors $u_t$, which is of the same length as the input sentence. Then we take $u_t$ and $x_t$ as input to a bidirectional LSTM model to produce a sequence of POS tags. Our training procedure consists of two steps. In the first step, we employ an unsupervised learning approach to learn how to compute $u_t$. Fig. \[fig:Architecture2\] shows a sequence-to-sequence structure, which uses an LSTM as the encoder and [ATPL]{} as the decoder; during the training phase of Fig. \[fig:Architecture2\], the input is a sentence and the expected output is the same sentence as the input. Then we use the trained system in Fig. \[fig:Architecture2\] to produce the unbinding vectors $u_t$ for a given input sentence $x_1,...,x_T$. In the second step, we employ a bidirectional LSTM (B-LSTM) module to convert the sequence of $u_t$ into a sequence of hidden states $\mathbf{h}$. Then we compute a vector $z_{1,t}$ from each $(x_t, \mathbf{h}_t)$ pair, which gives the POS tag at position $t$. This procedure is illustrated in Figure \[fig:POStagger\].
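The second step can be sketched schematically in NumPy. The BLSTM hidden states below are random stand-ins for the trained ones, and the word-conditioned projections use the factorized form $W(x) = W_a\,\mathrm{diag}(W_b x)\,W_c$ given in the equations that follow (cf. Eq. (\[eqn:Wright\])); all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
d, V, P, T = 16, 50, 45, 6    # hidden dim, vocab size, #POS tags, length

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def make_projector():
    # Word-conditioned projection W(x) = W_a . diag(W_b x) . W_c; the
    # forward and backward directions use separate parameter sets.
    Wa, Wb, Wc = (rng.standard_normal(s) for s in [(P, d), (d, V), (d, d)])
    return lambda x: Wa @ np.diag(Wb @ x) @ Wc

W_fwd, W_bwd = make_projector(), make_projector()

# Random stand-ins for the BLSTM hidden states computed over the
# unbinding vectors u_t (the real states come from a trained BLSTM).
h_fwd = rng.standard_normal((T, d))
h_bwd = rng.standard_normal((T, d))

tags = []
for t in range(T):
    x = np.eye(V)[t % V]                 # toy one-hot token x_t
    z = softmax(W_fwd(x) @ h_fwd[t] + W_bwd(x) @ h_bwd[t])
    tags.append(int(z.argmax()))         # predicted POS tag index
```

The output $z$ is a distribution over the $P$ tags, from which the tag at each position is read off by an argmax.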
![Structure of POS tagger.[]{data-label="fig:POStagger"}](POStag.jpg){width="30.00000%"} The first step follows [ATPL]{} and is straightforward. Below, we focus on explaining the second step. In particular, given the input sequence $u_t$, we can compute the hidden states as $$\begin{aligned} \overrightarrow{{\mathbf h}}_{t},\overleftarrow{{\mathbf h}}_{t} = BLSTM(u_{t},\overrightarrow{{\mathbf h}}_{t-1},\overleftarrow{{\mathbf h}}_{t+1}) \label{eqn:BLSTM}\end{aligned}$$ Then, the POS tag embedding is computed as $$\begin{aligned} {\mathbf z}_{1,t}= \mathbf{softmax}\big(\overrightarrow{{\mathbf W}}(x_t) \overrightarrow{{\mathbf h}}_{t} + \overleftarrow{{\mathbf W}}(x_t) \overleftarrow{{\mathbf h}}_{t}\big) \label{eq:TPR-LSTM3_5}\end{aligned}$$ Here $\overrightarrow{{\mathbf W}}(x_t)$ is computed as follows $$\begin{aligned} \overrightarrow{{\mathbf W}}({\mathbf x})= \overrightarrow{{\mathbf W}}_{a} \cdot \textrm{diag}(\overrightarrow{{\mathbf W}}_{b}\cdot {\mathbf x}) \cdot \overrightarrow{{\mathbf W}}_{c} \label{eqn:Wright}\end{aligned}$$ where $\textrm{diag}(\cdot)$ constructs a diagonal matrix from the input vector; $\overrightarrow{{\mathbf W}}_{a}, \overrightarrow{{\mathbf W}}_{b}, \overrightarrow{{\mathbf W}}_{c}$ are matrices of appropriate dimensions. $\overleftarrow{{\mathbf W}}({\mathbf x}_t)$ is defined in the same manner as $\overrightarrow{{\mathbf W}}({\mathbf x}_t)$, though a different set of parameters is used. Note that ${\mathbf z}_{1,t}$ is of dimension $P$, which is the total number of POS tags. Clearly, this model can be trained end-to-end by minimizing a cross-entropy loss. Evaluation ---------- To evaluate the effectiveness of our model, we test it using the Penn TreeBank dataset [@Penn_treebank_weblink]. In particular, we first train the sequence-to-sequence model in Fig. \[fig:Architecture2\] using the sentences of Wall Street Journal (WSJ) Sections 0 through 21 and Section 24 of the Penn TreeBank dataset [@Penn_treebank_weblink].
Afterwards, we use the same dataset to train the B-LSTM module in Figure \[fig:POStagger\].

               Stanford tagger [@Stanford_Parser_weblink]   [ATPL]{}
  ---------- -------- -------- ----------- -----------
               WSJ 22   WSJ 23   WSJ 22      WSJ 23
  Accuracy     0.972    0.973    **0.973**   **0.974**

  : Performance of POS Tagger. []{data-label="table:POStagger"}

Once the model is trained, we test it on WSJ Sections 22 and 23, respectively. We compare the accuracy of our approach against the state-of-the-art Stanford parser [@Stanford_Parser_weblink]. The results are presented in Table \[table:POStagger\]. From the table, we can observe that our approach outperforms the baseline. This confirms our hypothesis that the unbinding vectors $u_t$, trained in an unsupervised manner, indeed capture grammatical information and can be used to effectively predict grammatical structures such as POS tags. ![The parse tree of a sentence and its layers.[]{data-label="fig:layers"}](layers.png){width="40.00000%"} Constituency Parsing {#sec:ConstituencyParsing} ==================== In this section, we briefly review the constituency parsing task, and then present our approach, which contains three components: a segmenter, a classifier, and a parse-tree builder. In the end, we compare our approach against the state-of-the-art approach of [@vinyals2015grammar]. A brief review of constituency parsing -------------------------------------- Constituency parsing converts a natural language sentence into its parse tree. Fig. \[fig:layers\] provides an example of the parse tree on top of its corresponding sentence. From the tree, we can assign each node to a layer, with the first layer (Layer 0) consisting of all tokens of the original sentence. Layer $k$ contains all internal nodes whose depth with respect to the closest leaf they can reach is $k$. In particular, at Layer 1 are the POS tags associated with each token. In higher layers, each node corresponds to a [*substring*]{}, a consecutive subsequence, of the sentence.
Each node corresponds to a grammar structure, such as a single word, a phrase, or a clause, and is associated with a category. For example, in Penn TreeBank, there are over 70 types of categories, including (1) clause-level tags such as S (simple declarative clause), (2) phrase-level tags such as NP (noun phrase), VP (verb phrase), (3) word-level tags such as NNP (Proper noun, singular), VBD (Verb, past tense), DT (Determiner), NN (Noun, singular or mass), (4) punctuation marks, and (5) special symbols such as \$. The task of constituency parsing recovers both the tree structure and the category associated with each node. In our [ATPL]{}-based approach to constructing the parse tree, we use an encoding $z$ to encode the tree structure. Our approach first generates this encoding from the raw sentence, layer by layer, and then predicts a category for each internal node. In the end, an algorithm is used to convert the encoding $z$ with the categories into the full parse tree. In the following, we present the three sub-routines. Segmenting a sentence into a tree-encoding {#subsec:segmenter} ------------------------------------------ We first introduce the concept of the encoding $z$. For each layer $k$, we assign a value ${\mathbf{z}}_{k, t}$ to each location $t$ of the input sentence. In the first layer, ${\mathbf{z}}_{1, t}$ simply encodes the POS tag of input token $x_t$. At a higher level, ${\mathbf{z}}_{k, t}$ is either $0$ or $1$. Thus ${\mathbf{z}}_{k, t}$ forms a sequence of alternating runs of consecutive 0s and consecutive 1s. Each maximal run of consecutive 0s or 1s indicates one internal node at layer $k$, and the positions in the run form the substring of the node. For example, the second layer of Fig. \[fig:layers\] is encoded as $\{0,1,0,0\}$, and the third layer is encoded as $\{0, 1, 1, 1\}$. The first component of our [ATPL]{}-based parser predicts ${\mathbf{z}}_{k, t}$ layer by layer.
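The decoding of a layer's 0/1 encoding into its internal nodes (maximal runs of equal values) can be sketched as follows, using the example encodings just given:

```python
from itertools import groupby

def runs(z):
    """Split a layer encoding into maximal runs of equal values; each run
    is one internal node, returned as the (start, end) span of its
    substring in the sentence."""
    spans, i = [], 0
    for _, group in groupby(z):
        n = len(list(group))
        spans.append((i, i + n - 1))
        i += n
    return spans

# Layers 2 and 3 of the example in Fig. [fig:layers]:
assert runs([0, 1, 0, 0]) == [(0, 0), (1, 1), (2, 3)]   # three nodes
assert runs([0, 1, 1, 1]) == [(0, 0), (1, 3)]           # two nodes
```

Note that only the boundaries between runs matter, so complementing an entire layer encoding would describe the same segmentation.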
Note that the first layer simply consists of the POS tags, so we do not repeat its construction. In the following, we first explain how to construct the second layer’s encoding ${\mathbf{z}}_{2, t}$, and then show how the construction extends to higher layers’ encodings ${\mathbf{z}}_{k, t}$ for $k\geq 3$.

![Structure of the segmenter on Layer 2.[]{data-label="fig:substringLayer2"}](substringLayer2.jpg){width="40.00000%"}

#### Constructing the second layer ${\mathbf{z}}_{2, t}$.

We can view ${\mathbf{z}}_{2, t}$ as a special tag over the POS tag sequence, and thus the same approach used to compute the POS tags can be adapted to compute ${\mathbf{z}}_{2, t}$. This model is illustrated in Fig. \[fig:substringLayer2\]. In particular, we compute the hidden states from the unbinding vectors of the raw sentence as before: $$\begin{aligned} \overrightarrow{{\mathbf h}}_{2,t},\overleftarrow{{\mathbf h}}_{2,t} = BLSTM(u_{t},\overrightarrow{{\mathbf h}}_{2,t-1},\overleftarrow{{\mathbf h}}_{2,t+1}) \label{eqn:BLSTM2}\end{aligned}$$ and the output of the attention-based B-LSTM is given by $$\begin{aligned} {\mathbf z}_{2,t}= \sigma_s(\overrightarrow{{\mathbf W}}_{2}({\mathbf z}_{1,t}) \overrightarrow{{\mathbf h}}_{2,t} + \overleftarrow{{\mathbf W}}_{2}({\mathbf z}_{1,t}) \overleftarrow{{\mathbf h}}_{2,t}) \label{eq:TPR-LSTM4_5}\end{aligned}$$ where $\overrightarrow{{\mathbf W}}_{2}({\mathbf z}_{1,t})$ and $\overleftarrow{{\mathbf W}}_{2}({\mathbf z}_{1,t})$ are defined in the same manner as before.
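The output step of Eq. (\[eq:TPR-LSTM4\_5\]) can be sketched as follows, assuming $\sigma_s$ denotes the logistic sigmoid (the helper names and the toy vectors below are our own, not from the paper):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def segment_bit(h_fwd, h_bwd, w_fwd, w_bwd):
    """Sketch of Eq. (TPR-LSTM4_5): combine the forward and backward
    BLSTM hidden states through weight rows selected by the POS tag
    z_{1,t}, then squash with a sigmoid to obtain z_{2,t}.

    h_fwd, h_bwd: hidden-state vectors at position t.
    w_fwd, w_bwd: weight rows chosen according to the POS tag.
    """
    s = sum(w * h for w, h in zip(w_fwd, h_fwd)) \
      + sum(w * h for w, h in zip(w_bwd, h_bwd))
    return sigmoid(s)

z = segment_bit([0.2, -0.1], [0.4, 0.3], [1.0, 0.5], [-0.5, 1.0])
print(round(z, 3))  # a value in (0, 1); threshold at 0.5 for the 0/1 bit
```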
![Structure of the segmenter on Layer $k\geq 3$.[]{data-label="fig:substringLayer_k"}](substringLayer_k.jpg){width="50.00000%"}

![Segmenting Layer $k\geq 3$.[]{data-label="fig:substringLayer_k_RNN"}](substringLayer_k_RNN.png){width="40.00000%"}

  ------------- ----------------------- ----------------- -------------------------------------------
                [@vinyals2015grammar]   Ours              Ours (ground-truth ${\mathbf{z}}_{k, t}$)
                WSJ 22      WSJ 23      WSJ 22   WSJ 23   WSJ 22   WSJ 23
  Precision     N/A         N/A         0.898    0.910    0.952    0.952
  Recall        N/A         N/A         0.901    0.907    0.973    0.978
  F-1 measure   0.928       0.921       0.900    0.908    0.963    0.965
  ------------- ----------------------- ----------------- -------------------------------------------

  : Performance of constituency parsing.[]{data-label="table:parseTree"}

#### Constructing higher layers’ encodings ${\mathbf{z}}_{k, t}$ $(k\geq 3)$.

Now we move to the higher layers. For a layer $k\geq 3$, to predict ${\mathbf{z}}_{k, t}$, our model takes both the POS tag encoding ${\mathbf{z}}_{1, t}$ and the $(k-1)$-th layer’s encoding ${\mathbf{z}}_{k-1, t}$ as input. The high-level architecture is illustrated in Fig. \[fig:substringLayer\_k\]. Let us write $${\mathbf{z}}_{k, t} = \mathbf{softmax}(J_{k, t}).$$ The key difference is how to compute $J_{k, t}$. Intuitively, $J_{k, t}$ is an embedding vector corresponding to the node whose substring contains token $x_t$. Assume word ${x}_t$ is in the $m$-th substring of Layer $k-1$, which is denoted by $s_{k-1,m}$. Then the embedding $J_{k, t}$ can be computed as follows: $$J_{k, t}=\sum_{i\in s_{k-1, m}} \frac{\overrightarrow{{\mathbf W}}_k({\mathbf z}_{1,i}) \overrightarrow{{\mathbf h}}_{k,i} + \overleftarrow{{\mathbf W}}_{k}({\mathbf z}_{1,i}) \overleftarrow{{\mathbf h}}_{k,i} }{|s_{k-1, m}|} \label{eq:bigJ}$$ Here, $\overrightarrow{{\mathbf h}}_{k,i}$ and $\overleftarrow{{\mathbf h}}_{k,i}$ are the hidden states of the BLSTM running over the unbinding vectors as before, and $\overrightarrow{{\mathbf W}}_k(\cdot)$ and $\overleftarrow{{\mathbf W}}_k(\cdot)$ are defined in a similar fashion as (\[eqn:Wright\]). We use $|\cdot|$ to denote the cardinality of a set.
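Equation (\[eq:bigJ\]) is simply an average of per-position embeddings over the substring containing token $x_t$. A minimal sketch in pure Python (with placeholder vectors standing in for the $\overrightarrow{{\mathbf W}}\overrightarrow{{\mathbf h}}+\overleftarrow{{\mathbf W}}\overleftarrow{{\mathbf h}}$ terms; names are ours):

```python
def aggregate_embeddings(embeddings, spans):
    """For each token position t, average the per-position embeddings
    over the previous-layer substring (span) containing t, as in
    Eq. (bigJ) with the average as the aggregation function.

    embeddings: list of equal-length vectors, one per token position.
    spans:      list of (start, end) pairs partitioning the positions.
    Returns one aggregated vector J_t per position.
    """
    dim = len(embeddings[0])
    J = [None] * len(embeddings)
    for start, end in spans:
        size = end - start
        avg = [sum(e[d] for e in embeddings[start:end]) / size
               for d in range(dim)]
        for t in range(start, end):   # every token in the span shares J
            J[t] = avg
    return J

emb = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0], [4.0, 0.0]]
print(aggregate_embeddings(emb, [(0, 1), (1, 2), (2, 4)]))
# positions 2 and 3 share the averaged vector [3.0, 1.0]
```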
The most interesting part is that $J_{k, t}$ aggregates all embeddings computed from the substring $s_{k-1, m}$ of the previous layer. Note that the set $s_{k-1, m}$ of indexes can be computed easily from ${\mathbf{z}}_{k-1, t}$. Many different aggregation functions could be used; in (\[eq:bigJ\]) we choose the average. The process of this calculation is illustrated in Fig. \[fig:substringLayer\_k\_RNN\].

![Structure of the classifier on Layer $k$.[]{data-label="fig:classificationLayer_k"}](classificationLayer_k.jpg){width="50.00000%"}

Classification of Substrings {#subsec:ClassificationSubstrings}
----------------------------

Once the tree structure is computed, we attach a category to each internal node. We employ an approach similar to the one used to predict ${\mathbf{z}}_{k, t}$ for $k\geq 3$ in order to predict this category ${\mathbf z}^{(k)}_t$. Note that this time the encoding ${\mathbf{z}}_{k, t}$ of the internal node is already computed. Thus, instead of using the encoding ${\mathbf{z}}_{k-1, t}$ from the previous layer, we use the encoding of the current layer ${\mathbf{z}}_{k, t}$ to predict ${\mathbf z}^{(k)}_t$ directly. This procedure is illustrated in Fig. \[fig:classificationLayer\_k\]. Similar to (\[eq:bigJ\]), we have ${\mathbf z}^{(k)}_t = \mathbf{softmax}(E_{k, t})$, where $E_{k, t}$ is computed by ($\forall t \in \{t: {\mathbf x}_t \in s_{k,m}\}$) $$E_{k, t}=\sum_{i\in s_{k, m}} \frac{\overrightarrow{{\mathbf W}}_k({\mathbf z}_{1,i}) \overrightarrow{{\mathbf h}}_{k,i} + \overleftarrow{{\mathbf W}}_{k}({\mathbf z}_{1,i}) \overleftarrow{{\mathbf h}}_{k,i} }{|s_{k, m}|} \label{eq:bigE}$$ Here, we slightly overload the variable names. We emphasize that the parameters $\overrightarrow{{\mathbf W}}$ and $\overleftarrow{{\mathbf W}}$ and the hidden states $\overrightarrow{{\mathbf h}}_{k,i}$ and $\overleftarrow{{\mathbf h}}_{k,i}$ are independent of the ones used in (\[eq:bigJ\]).
Note that the main difference between (\[eq:bigE\]) and (\[eq:bigJ\]) is that the aggregation operates over the set $s_{k, m}$, i.e., the substring at layer $k$, rather than $s_{k-1, m}$, i.e., the substring at layer $k-1$. Also, $E_{k, t}$’s dimension equals the total number of categories, while $J_{k, t}$’s dimension is 2.

Creation of a Parse Tree {#subsec:creator}
------------------------

Once both ${\mathbf{z}}_{k, t}$ and ${\mathbf z}^{(k)}_t$ are constructed, we can create the parse tree out of them using a linear-time sub-routine, Algorithm \[alg:parseTree\]. For the example in Fig. \[fig:layers\], the output is (S(NNP John)(VP(VBD hit)(NP(DT the)(NN ball)))).

    Input: ${\mathbf x}_{t},{\mathbf z}^{(k)}_t,{\mathbf z}_{k,t}$ ($t=1,\cdots,T$; $k=1,\cdots,h_p$)
    i = 0
    output “(” and ${\mathbf z}^{(h_p)}_1$; push ${\mathbf z}^{(h_p)}_1$ into the stack
    output ${\mathbf x}_1$ and “)”; pop ${\mathbf z}^{(h_p)}_1$ out of the stack
    output “(” and ${\mathbf z}^{(h_p-j+1)}_1$; push ${\mathbf z}^{(h_p-j+1)}_1$ into the stack
    output ${\mathbf x}_1$ and “)”; pop ${\mathbf z}^{(h_p-j+1)}_1$ out of the stack
    output “(” and ${\mathbf z}^{(h_p-j+1)}_t$; push ${\mathbf z}^{(h_p-j+1)}_t$ into the stack
    output ${\mathbf x}_t$ and “)”; pop ${\mathbf z}^{(h_p-j+1)}_t$ out of the stack
    pop an element out of the stack; output “)”; push the element back into the stack

Evaluation {#subsec:EvaluationParser}
----------

We now evaluate our constituency parsing approach against the state-of-the-art approach [@vinyals2015grammar] using the WSJ data set of Penn TreeBank. Similar to our setup for POS tagging, we train our model on WSJ Sections 0 through 21 and Section 24, and evaluate it on Sections 22 and 23. Table \[table:parseTree\] shows the performance of both [@vinyals2015grammar] and our proposed approach. In addition, we also evaluate our approach assuming the tree-structure encoding ${\mathbf{z}}_{k, t}$ is known.
In doing so, we can evaluate the performance of the classification module of our parser. Note that the POS tags are not provided. We observe that the F-1 measure of our approach is 2 points worse than [@vinyals2015grammar]; however, when the ground truth of ${\mathbf{z}}_{k, t}$ is provided, the F-1 measure is 4 points higher than [@vinyals2015grammar], which is significant. Therefore, we attribute our approach’s underperformance to the fact that our model may not be effective enough at learning to predict the tree-encoding ${\mathbf{z}}_{k, t}$.

#### Remarks.

We view the use of unbinding vectors as the main novelty of our work. In contrast, all other parsers take the words directly as input. Our [ATPL]{} separates the grammar components ${\mathbf{u}}_t$ of a sentence from its lexical units ${\mathbf{f}}_t$, so that an author’s grammar style can be characterized by the unbinding vectors ${\mathbf{u}}_t$ while his word-usage pattern can be characterized by the lexical units ${\mathbf{f}}_t$. Hence, our parser can aid in learning the writing style of an author, since the regularities embedded in the unbinding vectors ${\mathbf{u}}_t$ and the obtained parse trees characterize that style.

Conclusion {#sec:Conclusion}
==========

In this paper, we proposed a new ATPL approach for natural language generation and related tasks. The model has a novel architecture based on a rationale derived from the use of Tensor Product Representations for encoding and processing symbolic structure through neural network computation. In evaluation, we tested the proposed model on image captioning. Compared to widely adopted LSTM-based models, the proposed ATPL gives significant improvements on all major metrics including METEOR, BLEU, and CIDEr. Moreover, we observe that the unbinding vectors contain important grammatical information, which allows us to design an effective POS tagger and constituency parser with unbinding vectors as input.
Our findings in this paper show great promise of TPRs. In the future, we will explore extending TPR to a variety of other NLP tasks. [99]{} K. S. Tai, R. Socher, and C. D. Manning, “Improved semantic representations from tree-structured long short-term memory networks,” *arXiv preprint arXiv:1503.00075*, 2015. A. Kumar, O. Irsoy, P. Ondruska, M. Iyyer, J. Bradbury, I. Gulrajani, V. Zhong, R. Paulus, and R. Socher, “Ask me anything: Dynamic memory networks for natural language processing,” in *International Conference on Machine Learning*, 2016, pp. 1378–1387. L. Kong, C. Alberti, D. Andor, I. Bogatyy, and D. Weiss, “Dragnn: A transition-based framework for dynamically connected neural networks,” *arXiv preprint arXiv:1703.04474*, 2017. P. Smolensky, “Tensor product variable binding and the representation of symbolic structures in connectionist systems,” *Artificial intelligence*, vol. 46, no. 1-2, pp. 159–216, 1990. P. Smolensky and G. Legendre, *The harmonic mind: From neural computation to optimality-theoretic grammar. Volume 1: Cognitive architecture*. MIT Press, 2006. J. Mao, W. Xu, Y. Yang, J. Wang, Z. Huang, and A. Yuille, “Deep captioning with multimodal recurrent neural networks (m-rnn),” in *Proceedings of International Conference on Learning Representations*, 2015. O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, “Show and tell: A neural image caption generator,” in *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, 2015, pp. 3156–3164. A. Karpathy and L. Fei-Fei, “Deep visual-semantic alignments for generating image descriptions,” in *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, 2015, pp. 3128–3137. J. Andreas, M. Rohrbach, T. Darrell, and D. Klein, “Deep compositional question answering with neural module networks,” *arXiv preprint arXiv:1511.02799*, vol. 2, 2015. D. Yogatama, P. Blunsom, C. Dyer, E. Grefenstette, and W.
Ling, “Learning to compose words into sentences with reinforcement learning,” *arXiv preprint arXiv:1611.09100*, 2016. J. Maillard, S. Clark, and D. Yogatama, “Jointly learning sentence embeddings and syntax with unsupervised tree-lstms,” *arXiv preprint arXiv:1705.09189*, 2017. D. Jurafsky and J. H. Martin, *Speech and Language Processing*, 3rd ed., 2017. C. Manning, “Stanford parser,” <https://nlp.stanford.edu/software/lex-parser.shtml>, 2017. K. Toutanova, D. Klein, C. D. Manning, and Y. Singer, “Feature-rich part-of-speech tagging with a cyclic dependency network,” in *Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1*. Association for Computational Linguistics, 2003, pp. 173–180. M. Zhu, Y. Zhang, W. Chen, M. Zhang, and J. Zhu, “Fast and accurate shift-reduce constituent parsing.” in *Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL)*, 2013, pp. 434–443. O. Vinyals, [Ł]{}. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton, “Grammar as a foreign language,” in *Advances in Neural Information Processing Systems*, 2015, pp. 2773–2781. J. Pennington, R. Socher, and C. Manning, “Stanford glove: Global vectors for word representation,” <https://nlp.stanford.edu/projects/glove/>, 2017. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, [Ł]{}. Kaiser, and I. Polosukhin, “Attention is all you need,” in *Advances in Neural Information Processing Systems*, 2017, pp. 6000–6010. Z. Gan, C. Gan, X. He, Y. Pu, K. Tran, J. Gao, L. Carin, and L. Deng, “Semantic compositional networks for visual captioning,” in *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, 2017. COCO, “Coco dataset for image captioning,” <http://mscoco.org/dataset/#download>, 2017. K. He, X. Zhang, S. Ren, and J.
Sun, “Deep residual learning for image recognition,” in *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, 2016, pp. 770–778. M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “[TensorFlow]{}: Large-scale machine learning on heterogeneous systems,” 2015, software available from tensorflow.org. \[Online\]. Available: <https://www.tensorflow.org/> K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, “Bleu: a method for automatic evaluation of machine translation,” in *Proceedings of the 40th annual meeting on association for computational linguistics*. Association for Computational Linguistics, 2002, pp. 311–318. S. Banerjee and A. Lavie, “Meteor: An automatic metric for mt evaluation with improved correlation with human judgments,” in *Proceedings of the ACL workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization*. Association for Computational Linguistics, 2005, pp. 65–72. R. Vedantam, C. Lawrence Zitnick, and D. Parikh, “Cider: Consensus-based image description evaluation,” in *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, 2015, pp. 4566–4575. M. P. Marcus, B. Santorini, M. A. Marcinkiewicz, and A. Taylor, “Penn treebank,” <https://catalog.ldc.upenn.edu/ldc99t42>, 2017. [^1]: QH is with Microsoft Research AI, Redmond, WA; email: qihua@microsoft.com. LD is with Citadel; email: l.deng@ieee.org. DW is with University of Florida, Gainesville, FL 32611; email: dpwu@ufl.edu.
CL is with University of California, Berkeley; email: liuchang@eecs.berkeley.edu. XH is with JD AI Research, Beijing, China; email: xiaohe.ai@outlook.com.
--- abstract: 'Production of pions in proton-nucleus (p+A) reactions outside of the kinematical boundary of proton-nucleon collisions, the so-called cumulative effect, is studied. The kinematical restrictions on pions emitted in the backward direction in the target rest frame are analyzed. It is shown that cumulative pion production requires the presence of massive baryonic resonances that are produced during successive collisions of the projectile with nuclear nucleons. After each successive collision the mass of the created resonance may increase and, simultaneously, its longitudinal velocity decreases. Simulations within the Ultra-relativistic Quantum Molecular Dynamics model reveal that successive collisions of baryonic resonances with nuclear nucleons play the dominant role in cumulative pion production in p+A reactions.' author: - 'A. Motornenko' - 'M. I. Gorenstein' title: ' [^1]' ---

Introduction
============

The cumulative effect in proton-nucleus (p+A) reactions is the production of secondary particles in a kinematic region forbidden in proton-nucleon (p+N) collisions at the same energy of the projectile protons. The first experiments detecting cumulative particles were performed at the Synchrophasotron accelerator of the Joint Institute for Nuclear Research in Dubna [@baldin-74; @baldin-77; @leksin-77]. In this work inclusive reactions p+A$\rightarrow \pi(180^\circ)+X$ are considered, with pions emitted in the backward direction, i.e. at 180$^\circ$, in the target rest frame. Let $E_{\pi}^{*}$ denote the maximal possible energy of a pion emitted at angle $180^{\circ}$ in the laboratory frame in a p+N interaction at fixed projectile proton momentum $p_0$. In p+A collisions at the same projectile proton momentum $p_0$, pions emitted at $180^{\circ}$ in the nucleus rest frame with energies $E_{\pi}> E_{\pi}^*$, even above 2$E_{\pi}^*$, were experimentally observed [@baldin-74; @baldin-77; @leksin-77; @frankel-79; @bayukov-79].
The main physical quantities in our analysis are the masses and longitudinal (i.e. along the collision axis) velocities of the baryonic resonances created in p+A reactions. A resonance $R$ is first produced in a p+N$\rightarrow R$+N reaction, and it then participates in successive $R$+N$\rightarrow R'$+N collisions. Due to subsequent collisions of the resonance with nuclear nucleons, the resonance mass may increase while its longitudinal velocity decreases. We argue that the cumulative pions in p+A reactions are created by baryonic resonances with very high masses that are formed through successive collisions with nuclear nucleons. We also use the UrQMD model to analyze some microscopic aspects of cumulative pion production in p+A reactions.

Successive collisions with nuclear nucleons {#sec-multi}
===========================================

Different theoretical models have been proposed to describe cumulative pion production; however, the origin of this effect is still not settled. In the present study we advocate the approach suggested in Ref. [@gor-77]. More details are presented in our recent paper [@MG], where references to other theoretical models can also be found. We assume that cumulative particle production takes place due to the large mass of the projectile baryonic resonance created in the [*first*]{} p+N collision and its further propagation through the nucleus. This baryonic resonance has a chance to interact with other nuclear nucleons before it decays into free final hadrons. It can be shown [@MG] that during [*successive collisions*]{} of the baryonic resonance with nuclear nucleons it is possible both to enlarge the resonance mass $M_R$ and, simultaneously, to reduce its longitudinal velocity $v_R$. Production of any additional hadron(s) and/or a presence of non-zero transverse momenta in the final state would require additional energy and lead to a reduction of the $E^{\rm max}_\pi$ value at fixed projectile momentum $p_0$.
Thus, to find the maximum pion energy one should restrict the kinematical analysis to the one-dimensional (longitudinal) direction, i.e., all particle momenta should be directed along the collision axis. If one considers the baryonic resonance decay $R\rightarrow {\rm N}+\pi(180^\circ)$, the value of $E_\pi$ depends on the resonance mass $M_R$ and its longitudinal velocity $v_R$. In the resonance rest frame the pion energy and momentum are easily found: [$$\label{E0-R} E^{0}_\pi= ~\frac{M_R^2-m_N^2+m_\pi^2}{2M_R}~,~~~~~p_\pi^{0}=\sqrt{(E_\pi^{0})^2-m_\pi^2}~.$$]{} The pion energy $E_\pi$ in the laboratory frame is then obtained as [$$\label{Epi-lab} E_\pi= ~\frac{E^{0}_\pi - v_R p_\pi^{0}}{\sqrt{1-v_R^2}}~.$$]{} Therefore, both an increase of $M_R$ and a decrease of $v_R$ extend the available kinematic region of $E_\pi$ for pions emitted at $180^{\circ}$. The suppression of $E_\pi$ compared to $E^{0}_\pi$ can be interpreted as a Doppler (“red shift”) effect. As seen from Eq. (\[Epi-lab\]), both effects, the resonance mass increase and the velocity decrease, lead to larger values of $E_\pi$ and thus extend the kinematic region for cumulative pion production. The object responsible for the cumulative production of $\pi(180^\circ)$, i.e., the heavy and slowly moving resonance, does not exist inside a nucleus but is formed during the whole evolution of the p+A reaction. Let us consider successive collisions with nuclear nucleons: ${\rm p}+{\rm N}\rightarrow R_1+ {\rm N}$, $R_1+{\rm N}\rightarrow R_2+ {\rm N}$, ... , $R_{n}+{\rm N}\rightarrow {\rm N}+{\rm N}+\pi(180^\circ)$. The nuclear nucleons are treated as free particles. This approximation is justified by the fact that the projectile proton energy is typically 3 orders of magnitude larger than the binding energy of nucleons in a nucleus. It is assumed that after the $n$-th collision the baryonic resonance decays, $R_n\rightarrow \pi(180^\circ)$ + N.
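The effect of Eqs. (\[E0-R\]) and (\[Epi-lab\]) can be checked numerically: increasing $M_R$ and decreasing $v_R$ both raise the lab-frame energy of the backward pion. The following sketch (ours, not the authors' code; masses in GeV) implements the two formulas directly:

```python
import math

M_N, M_PI = 0.938, 0.140  # nucleon and pion masses in GeV

def backward_pion_energy(m_r, v_r):
    """Lab-frame energy of a pion emitted at 180 degrees in the decay
    R -> N + pi, for resonance mass m_r and longitudinal velocity v_r.
    Implements Eqs. (E0-R) and (Epi-lab)."""
    e0 = (m_r**2 - M_N**2 + M_PI**2) / (2.0 * m_r)    # pion energy in R frame
    p0 = math.sqrt(e0**2 - M_PI**2)                   # pion momentum in R frame
    return (e0 - v_r * p0) / math.sqrt(1.0 - v_r**2)  # boost, backward emission

# A heavier and slower resonance yields a more energetic backward pion:
print(backward_pion_energy(1.5, 0.5))  # ~0.28 GeV
print(backward_pion_energy(2.0, 0.1))  # ~0.71 GeV
```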
The energy and momentum conservation between initial and final state reads [$$\label{cons-n} \sqrt{m_N^2+p_0^2}+n\cdot m_N =\sum_{i=1}^{n+1}\sqrt{m_N^2+p_i^2}+E_\pi~,~~~~~~ p_{0}=\sum_{i=1}^{n+1}p_i-p_\pi~.$$]{} The maximal pion energy after $n$ successive collisions, now denoted $E_{\pi,n}^*$, can be found from Eq. (\[cons-n\]) using the extremum conditions $\partial E_\pi /\partial p_{i}=0$. This leads to [$$\label{p*n} p_{N,n}^*\equiv p_1=p_2=...=p_{n+1}=\frac{p_0+p^*_{\pi,n}}{n+1}~,$$]{} and gives an implicit equation for $E_{\pi,n}^*$. The maximal energies $E_{\pi,n}^*$ of pions emitted at $180^{\circ}$ are presented in Fig. \[fig-En\]. ![Maximal energies $E_{\pi,n}^*$ (a) and velocities $v_n^*$ (b) of the baryonic resonances after $n$ successive collisions with nuclear nucleons. The values of $v_n^*$ are required to provide the maximal energy $E_{\pi ,n}^*$ of $\pi(180^\circ)$. They are calculated from Eq. \[cons-n\] under the extremum conditions $\partial E_\pi /\partial p_{i}=0$, as functions of projectile proton momentum $p_0$.[]{data-label="fig-En"}](n_coll_E.pdf "fig:"){width="49.00000%"} ![Maximal energies $E_{\pi,n}^*$ (a) and velocities $v_n^*$ (b) of the baryonic resonances after $n$ successive collisions with nuclear nucleons. The values of $v_n^*$ are required to provide the maximal energy $E_{\pi ,n}^*$ of $\pi(180^\circ)$. They are calculated from Eq. \[cons-n\] under the extremum conditions $\partial E_\pi /\partial p_{i}=0$, as functions of projectile proton momentum $p_0$.[]{data-label="fig-En"}](n_coll_v.pdf "fig:"){width="49.00000%"} A surprising behavior, $v_n^*<0$ for $n\ge 4$, is observed in some finite regions of projectile momentum $p_0$: a heavy resonance may start to move backward after a large number of successive collisions at not too large $p_0$. In p+N$\rightarrow R$+N reactions only values with $v_R>0$ are permitted.
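Equation (\[cons-n\]), together with the extremum condition (\[p\*n\]), can be solved numerically for the maximal backward-pion momentum, e.g. by bisection. The sketch below (our own, with masses in GeV; not the authors' code) illustrates that the kinematic boundary extends with the number of successive collisions $n$:

```python
import math

M_N, M_PI = 0.938, 0.140  # nucleon and pion masses in GeV

def max_pion_energy(p0, n):
    """Maximal energy of a pion emitted at 180 degrees after n successive
    collisions, from Eq. (cons-n) with all final nucleons sharing the
    momentum (p0 + p_pi)/(n + 1), Eq. (p*n). Solved by bisection."""
    lhs = math.sqrt(M_N**2 + p0**2) + n * M_N
    def residual(p_pi):
        p_nuc = (p0 + p_pi) / (n + 1)
        rhs = (n + 1) * math.sqrt(M_N**2 + p_nuc**2) \
              + math.sqrt(M_PI**2 + p_pi**2)
        return lhs - rhs
    lo, hi = 0.0, p0          # the residual changes sign on this interval
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(M_PI**2 + lo**2)

# More successive collisions extend the kinematic boundary:
print(max_pion_energy(6.0, 1))  # E*_{pi,1}
print(max_pion_energy(6.0, 2))  # E*_{pi,2}, larger than E*_{pi,1}
```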
It should be noted that the values of $E^*_n$ and $v^*_n$ found in this section are by no means typical (or average) ones. In fact, the probability to reach these values in a p+A reaction is very small; in other words, cumulative pion production is a very rare process.

UrQMD simulations {#sec-UrQMD}
=================

In this section the analysis of the cumulative production of $\pi (180^\circ)$ within the UrQMD model [@urqmd] is presented. UrQMD gives a unique opportunity to study the history of each individual reaction.

![[]{data-label="fig-pPb"}](pPb6_number.pdf "fig:"){width="48.00000%"} ![[]{data-label="fig-pPb"}](pPb158_number.pdf "fig:"){width="48.00000%"}

In Fig. \[fig-pPb\] we compare the spectra of $\pi(180^{\circ})$ emitted from resonance decay after $n=1$ (solid line), $n=2$ (dashed line), and $n\geq 3$ (dotted line) successive collisions of the projectile with nuclear nucleons. From Fig. \[fig-pPb\] one observes that $E_\pi$ may exceed $E^*_{\pi,1}$ even for the $n=1$ contribution. This happens because of nucleon motion inside nuclei (Fermi motion), which is present in the UrQMD model. This effect is, however, not large. The main contribution to the kinematical region forbidden for p+N collisions (i.e., to $E_\pi > E^*_{\pi,1}$) comes from the decays of resonances created in $n=2$ and $n\ge 3$ successive collisions with nuclear nucleons. Therefore, the proposed mechanism of cumulative pion production, the successive interactions of heavy resonances with nuclear nucleons, is supported by the UrQMD analysis.

Summary {#sec-sum}
=======

Pions emitted in p+A reactions at 180$^\circ$ in the target rest frame are considered. The extension of the kinematical boundary of p+N reactions due to the existence of massive baryonic resonances is studied. These resonances are produced after several successive collisions of the projectile with nuclear nucleons: resonances $R$ created in p+N reactions may undergo further inelastic collisions in the nuclear medium.
Due to successive collisions with nuclear nucleons the masses of these resonances may increase and simultaneously their longitudinal velocities decrease. These two effects give an explanation of the cumulative pion production. The simulations of p+A reactions within the UrQMD model support this physical picture. This work is supported by the Goal-Oriented Program of the National Academy of Sciences of Ukraine and the European Organization of Nuclear Research (CERN), Grant CO-1-3-2016. [99]{} A. M. Baldin [*et al.*]{} Yad. Fiz. [**20**]{}, 1201 (1974). A. M. Baldin, Part. Nucl. [**8**]{} 429 (1977). G. A. Leksin, [*Proceedings of the 18th International Conference on High Energy Physics, Dubna, 1977, vol.1, AG-3*]{}; V. S. Stavinsky, [*ibid.,*]{} AG-1. S. Frankel [*et al.*]{}, Phys.Rev. C [**20**]{}, 2257 (1979). Yu.D. Bayukov [*et al.*]{}, Phys.Lett. B [**85**]{}, 315 (1979). M. I. Gorenstein and G. M. Zinovjev, Phys. Lett. B [**67**]{}, 100 (1977). A. Motornenko and M. I. Gorenstein, arXiv:1604.04308. S.A. Bass [*et al.*]{}, Prog. Part. Nucl. Phys. [**41**]{}, 255 (1998); M. Bleicher [*et al.*]{}, J. Phys. G [**25**]{}, 1859 (1999); H. Petersen, M. Bleicher, S.A. Bass, and H. Stöcker, arXiv:0805.0567 \[hep-ph\]. [^1]: Presented at “Critical Point and Onset of Deconfinement”, Wrocław, Poland, May 30th – June 4th, 2016
--- abstract: 'We are interested in the generic behaviour of nonlinear sound waves as they approach the surface of a star, here assumed to have the polytropic equation of state $P=K\rho^\Gamma$. Restricting to spherical symmetry, and considering only the region near the surface, we generalise the methods of Carrier and Greenspan (1958) for the shallow water equations on a sloping beach to this problem. We give a semi-quantitative criterion for a shock to form near the surface during the evolution of generic initial data with support away from the surface. We show that in smooth solutions the velocity and the square of the sound speed remain regular functions of Eulerian radius at the surface.' author: - Carsten Gundlach and Colin Please date: '18 January 2009, revised version 10 April 2009' title: 'Generic behaviour of nonlinear sound waves near the surface of a star: smooth solutions' --- Introduction ============ In numerical simulations of neutron stars in general relativity, the matter is often modelled as a perfect fluid. The simplest equation of state usually considered is the ideal gas equation of state $$P=(\Gamma-1)e\rho,$$ where $P$ is the pressure, $\rho$ the rest mass density and $e$ the internal energy per rest mass. The polytropic index $\Gamma\equiv 1+1/n>1$ is a constant. If the entropy per rest mass is everywhere the same, the ideal gas equation of state reduces to the polytropic equation of state $$\label{polytrope} P=K\rho^\Gamma,$$ where $K$ is another constant depending on the entropy per rest mass. If the initial data are isentropic, the solution remains isentropic until a shock forms. For the polytropic equation of state with $n>0$, spherically symmetric self-gravitating solutions with a regular centre (stars) have a surface, characterised by $P=\rho=0$ at finite radius $r=r_*$, where $\rho\sim (r_*-r)^n$ near the surface. 
Standard numerical methods for evolving stars fail at the surface because division by zero density occurs and the speed of sound goes to zero. For smooth solutions in spherical symmetry, this can be avoided by using Lagrangian coordinates, but in 3-dimensional (3D) simulations with high-resolution shock capturing (HRSC) methods, the standard practice is to match the star to a thin “atmosphere”, which is then artificially kept from accreting onto it. This method is likely to give qualitatively wrong results, as the wave structure of the Riemann problem that underlies HRSC methods is different if the right state is vacuum. The failure of the numerical methods is related to the physical fact that the perfect fluid approximation must break down at the surface. This approximation includes the assumption that small fluid elements are in thermal equilibrium on dynamical timescales, but as the density goes to zero, the thermal timescale diverges while the fluid dynamical timescales are still determined by waves in the interior and remain finite. In reality, some kind of plasma physics approximation applies. The premise of this paper is that a mathematically correct numerical implementation of the perfect fluid assumption is more correct than the use of an unphysical atmosphere, which at best introduces physically unmotivated approximations and at worst does not even have a continuum limit. In this paper we provide two mathematical results that should be useful in achieving this goal. We begin here with smooth solutions and leave shocks for later work. Our preliminary question is whether smooth initial data representing an outgoing wave with compact support form a shock as the wave approaches the surface. That a shock forms is suggested by the fact that the sound speed goes to zero at the surface with $c_s \sim \sqrt{r_*-r}$ (independently of the polytropic index $n$), so that any outgoing wave steepens. 
Sperhake [@Sperhake] has investigated this numerically in general relativity in spherical symmetry and concludes that small amplitude waves do not shock but large amplitude waves do. In the Newtonian case in spherical symmetry this had already been proved by Pelinovsky and Petrukhin [@PP]. We improve on this result by deriving a semiquantitative criterion for a sound wave to remain regular as it approaches the surface. Our main question is what kinematic boundary conditions can be used in a numerical simulation to represent the free boundary at the surface of the star. This has been addressed in general relativity by Sperhake [@Sperhake] for nonlinear spherical perturbations, using Lagrangian coordinates, and by Passamonti [@Passamonti] for linear non-spherical perturbations. Here we consider the nonlinear case in [*Eulerian*]{} coordinates. To answer both questions we use the mathematical methods of a classic paper by Carrier and Greenspan [@CarrierGreenspan] concerning the shallow water equations on a sloping beach. We begin by reviewing their results and extending them from the shallow water case $n=1$ to the general polytropic case $n>0$. Mathematical setup ================== For simplicity we assume spherical symmetry. Near the surface of the star, gravity is typically weak. Furthermore, the formation of shocks does not require large fluid velocities. This suggests that Newtonian physics should be a good approximation for what we want to investigate. On a sufficiently small scale the spherical symmetry of the star reduces to planar symmetry, and the Newtonian gravitational acceleration $g$ is dominated by the interior of the star, and can be approximated as constant in space and time. Finally, for smooth solutions, sufficiently close to the surface, the entropy gradient can be neglected compared to the density gradient in determining the pressure gradient. We can therefore approximate the ideal gas as isentropic, with equation of state (\[polytrope\]). 
(This last approximation would not hold if a shock reached the surface.) In the “radial” spatial coordinate $x$ and time $t$, with $v$ the Eulerian fluid velocity in the $x$ direction, the Euler and conservation equations are $$\begin{aligned} v_t+vv_x+\Gamma K\rho^{1/n-1}\rho_x&=&-g, \\ \rho_t+v\rho_x+\rho v_x &=&0.\end{aligned}$$ Here $\rho=0$ defines a free boundary $x=x_*(t)$. Within the approximation of planar symmetry, $x$ has an infinite range, with $x<x_*(t)$ representing the interior of the star. It is useful to replace the dependent variable $\rho$ with the sound speed $c$ given by $c^2=dP/d\rho=\Gamma K\rho^{1/n}$ to obtain $$\begin{aligned} \label{vdot} v_t+vv_x+2ncc_x&=&-g, \\ \label{cdot} c_t+vc_x+{1\over 2n}cv_x&=&0. \end{aligned}$$ For $n=1$, these equations are identical with the shallow water equations restricted to planar symmetry on a uniformly sloping beach, with $x$ and $v$ the horizontal position and velocity, $\rho\sim c^2$ the height of the water, $g$ the effective horizontal gravitational acceleration, and $x=x_*(t)$ the instantaneous shoreline [@Acheson]. The unique static solution of (\[vdot\],\[cdot\]) is $$\label{static} v=0, \quad c=\sqrt{-{gx\over n}},$$ and hence $\rho\sim(-x)^n$, where we have fixed a translation invariance by locating the surface at $x=0$. Hodograph transform =================== The problem can be written as $$\label{Riemann} \left[{\partial}_t+(v\pm c){\partial}_x\right](v+gt\pm 2nc)=0,$$ and so admits the Riemann invariants $(v+gt)\pm 2nc$ with characteristic speeds $v\pm c$. Carrier and Greenspan [@CarrierGreenspan] (considering the shallow water case $n=1$) suggested a hodograph transform from independent variables $t$ and $x$ to independent variables $\lambda$ and $\sigma$ given by $$\begin{aligned} \label{lambdadef} {\lambda}&\equiv& v+gt, \\ \label{sigmadef} {\sigma}&\equiv& 2nc.\end{aligned}$$ (These definitions differ from [@CarrierGreenspan] by a factor of $2$.) 
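The characteristic form above can be verified directly from (\[vdot\],\[cdot\]); the following short symbolic check (our own illustration, not part of the original derivation) substitutes the time derivatives dictated by the fluid equations into the characteristic derivative of each Riemann invariant and confirms that the result vanishes identically.

```python
import sympy as sp

# Independent variables and fields
x, t, n, g = sp.symbols('x t n g')
v = sp.Function('v')(x, t)
c = sp.Function('c')(x, t)

# Time derivatives as dictated by the fluid equations (vdot), (cdot):
#   v_t = -v v_x - 2 n c c_x - g,   c_t = -v c_x - c v_x/(2n)
fluid = [(v.diff(t), -v*v.diff(x) - 2*n*c*c.diff(x) - g),
         (c.diff(t), -v*c.diff(x) - c*v.diff(x)/(2*n))]

for s in (+1, -1):
    R = v + g*t + s*2*n*c                    # Riemann invariant (v + g t) ± 2 n c
    expr = (R.diff(t) + (v + s*c)*R.diff(x)).subs(fluid)
    assert sp.simplify(expr) == 0            # characteristic form holds
print("Riemann-invariant form verified for both characteristics")
```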
The resulting transformation of partial derivatives is $$\left(\begin{array}{c} {\partial}_t \\ {\partial}_x \end{array}\right) =\Delta^{-1}\left(\begin{array}{cc} x_{\sigma}& - x_{\lambda}\\ - t_{\sigma}& t_{\lambda}\end{array}\right) \left(\begin{array}{c} {\partial}_{\lambda}\\ {\partial}_{\sigma}\end{array}\right),$$ where $$\label{Deltadef} \Delta\equiv t_{\lambda}x_{\sigma}-x_{\lambda}t_{\sigma}.$$ In particular, we have $$\label{vartrans} \left(\begin{array}{cc} {\lambda}_t & {\sigma}_t \\ {\lambda}_x & {\sigma}_x \end{array}\right) =\Delta^{-1}\left(\begin{array}{cc} x_{\sigma}& - x_{\lambda}\\ - t_{\sigma}& t_{\lambda}\end{array}\right).$$ Clearly, the transformation is regular if and only if $\Delta\ne 0,\pm\infty$. Substituting (\[lambdadef\]-\[sigmadef\]) and (\[vartrans\]) into (\[Riemann\]), we obtain $$\begin{aligned} \label{xtpde1} x_{\sigma}-(\lambda-gt)\, t_{\sigma}+\left({\sigma\over 2n}\right) t_{\lambda}&=& 0, \\ \label{xtpde2} x_{\lambda}+\left({\sigma\over 2n}\right) t_{\sigma}-(\lambda-gt)\,t_{\lambda}&=& 0.\end{aligned}$$ This PDE system is not yet linear because of the appearance of $gt$ in the coefficients of $t_{\sigma}$ and $t_{\lambda}$. However, from the two nonlinear first-order PDEs (\[xtpde1\]-\[xtpde2\]) one can derive a linear second-order PDE for $t(\lambda,\sigma)$, namely $$t_{{\lambda}{\lambda}}=t_{{\sigma}{\sigma}}+{2n+1\over {\sigma}} t_{\sigma}.$$ Trivially, $\lambda$ taken as a function of $\lambda$ and $\sigma$ obeys the same PDE as $t$, and since $v={\lambda}-gt$ is a linear combination of the two, we obtain the autonomous linear wave equation $$\label{vwave} v_{{\lambda}{\lambda}}=v_{{\sigma}{\sigma}}+{2n+1\over {\sigma}} v_{\sigma}$$ for $v(\lambda,\sigma)$. This is the key equation of this paper. The problem has now been cast into linear form, and the free boundary $x=x_*(t)$ has been mapped to the coordinate line $\sigma=0$, with $\sigma>0$ representing the interior of the star. 
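As a quick sanity check of this wave equation (our own illustration), consider the special value $n=1/2$, for which closed-form d'Alembert-type solutions $v=f({\lambda}\pm{\sigma})/{\sigma}$ exist, as for the spherical wave equation in three dimensions. A symbolic verification:

```python
import sympy as sp

l, s = sp.symbols('lambda sigma', positive=True)
f = sp.Function('f')

n = sp.Rational(1, 2)                        # special case n = 1/2
for sign in (+1, -1):
    v = f(l + sign*s)/s                      # spherical d'Alembert form
    residual = v.diff(l, 2) - v.diff(s, 2) - (2*n + 1)/s*v.diff(s)
    assert sp.simplify(residual) == 0
print("v = f(lambda +/- sigma)/sigma solves the wave equation for n = 1/2")
```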
A criterion for shock formation =============================== From (\[vartrans\]) with (\[lambdadef\],\[sigmadef\]) we find $$\begin{aligned} \label{vxDelta} v_x&=&{1\over g\Delta} v_{\sigma}, \\ \label{cxDelta} c_x&=&{1\over 2ng\Delta} \left(1-v_{\lambda}\right),\end{aligned}$$ and so a shock forms from regular initial data if and only if $\Delta\to 0$. Using (\[lambdadef\]-\[sigmadef\],\[xtpde1\]-\[xtpde2\]), the Jacobian $\Delta$ defined by (\[Deltadef\]) can be expressed in terms of $v$ alone as $$\label{Deltaval} \Delta = -{{\sigma}\over 2ng^2}\left[\left(1-v_{\lambda}\right)^2-v_{\sigma}^2\right].$$ We see that the wave does not form a shock if the first derivatives of $v$ in a solution of (\[vwave\]) remain sufficiently small, so that $\Delta$ remains negative. Such solutions are easily obtained by rescaling the amplitude of any given solution. We shall now consider small smooth initial data for (\[vdot\],\[cdot\]) on the curve $t=0$, $x<0$. These correspond to Cauchy data for (\[vwave\]) on the curve given by $\lambda=\lambda_0(\sigma)$, $\sigma>0$. We require these data to obey $$\label{criterion} \left(1-v_{\lambda}\right)^2-v_{\sigma}^2>0$$ for all $\sigma>0$ on $\lambda=\lambda_0(\sigma)$. This criterion is necessary for the equivalence between (\[vdot\],\[cdot\]) and (\[vwave\]) to hold, and implies that there is no shock present in the initial data. We then formally evolve the data to $\lambda>\lambda_0(\sigma)$ using (\[vwave\]). Setting aside the boundary at $\sigma=0$, which we consider later, this solution exists because (\[vwave\]) is linear. However, if at any point in $\lambda>\lambda_0(\sigma)$ the condition (\[criterion\]) is violated, the wave has developed a shock at some $t>0$, and the solution of (\[vwave\]) does not have physical meaning for larger values of $t$. 
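The amplitude dependence of the criterion can be illustrated numerically (our own sketch, not from the paper): we take the exact solution $v=f({\lambda}+{\sigma})/{\sigma}$ of (\[vwave\]) for $n=1/2$ with a Gaussian profile $f$ (an assumed shape, chosen for convenience) and scan the sign of $(1-v_{\lambda})^2-v_{\sigma}^2$ on a grid. Small amplitudes keep the criterion satisfied everywhere, while large amplitudes violate it near the surface.

```python
import numpy as np

def criterion_min(amplitude, u0=10.0, w=1.0):
    """Minimum of (1 - v_l)^2 - v_s^2 over an (l, s) grid for the exact
    n = 1/2 solution v = f(l + s)/s, with f a Gaussian of width w."""
    l = np.linspace(-20.0, 20.0, 400)[:, None]
    s = np.linspace(0.5, 20.0, 400)[None, :]   # stay away from s = 0
    u = l + s
    f  = amplitude*np.exp(-((u - u0)/w)**2)
    fp = -2.0*(u - u0)/w**2*f                  # f'(u)
    v_l = fp/s
    v_s = fp/s - f/s**2
    return np.min((1.0 - v_l)**2 - v_s**2)

# small amplitude: criterion stays positive everywhere (no shock) ...
assert criterion_min(0.01) > 0.0
# ... large amplitude: criterion is violated somewhere (shock forms)
assert criterion_min(10.0) < 0.0
```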
In order to translate initial data in coordinates $(x,t)$ to $(\sigma,\lambda)$, we consider smooth data with compact support away from the boundary and which are sufficiently weak (in the sense of close to the static star solution) that initially the solution can be approximated by a solution of the linearisation of (\[vdot\],\[cdot\]) around the static star solution. We then evolve these data using (\[vwave\]), and so do not require them to remain small. We use (\[criterion\]) in this solution as the necessary and sufficient criterion for the absence of shocks. Linearising (\[vdot\],\[cdot\]) about the static solution (\[static\]), we obtain $$\delta v_{tt}=\left(-{gx\over n}\right)\left(\delta v_{xx}+{n+1\over x} \delta v_x\right).$$ (We have written $\delta v$ instead of $v$ to stress that this is only an approximation valid for small $v$.) The same equation can be obtained from (\[vwave\]) by the substitutions $$\label{lstx} {\lambda}=gt, \quad {\sigma}=2\sqrt{-gnx}.$$ This gives us a simple approximate relation between initial data for the [*linearisation*]{} of (\[vdot\],\[cdot\]), and initial data for (\[vwave\]) (which is linear but contains the nonlinear dynamics). A formal d’Alembert solution of (\[vwave\]) is $$v({\lambda},{\sigma})=\sum_\pm \sum_{k=0}^\infty{\sigma}^{-n-{1\over 2}-k}f_k^\pm({\lambda}\pm{\sigma})$$ where $f_0^\pm$ is free data and $$f_{k+1}^\pm={\left(k+{1\over2}\right)^2-n^2\over 2(k+1)} \int f_k^\pm.$$ A few remarks will put this result into context: This series converges at most in the sense of an asymptotic series as ${\sigma}\to\infty$, and clearly diverges for sufficiently small ${\sigma}$. Another formal d’Alembert solution exists which has [*ascending*]{} powers of ${\sigma}$, but it does not interest us here. 
In the special case $n=1/2$, either series reduces to the well-known d’Alembert solution of the spherical wave equation in 3 dimensions, while for $n=-1/2$ we obtain the d’Alembert solution of the 1-dimensional wave equation. Consider now an isolated wave packet approaching the surface with initial position ${\sigma}_0$, width ${\sigma}_1\ll {\sigma}_0$ and amplitude $v_0$, so that $|v_{\lambda}|\sim |v_{\sigma}| \sim v_0/{\sigma}_1$ initially. In this regime, we can approximate $$\label{leadingorder} v({\lambda},{\sigma})\simeq {\sigma}^{-n-{1\over 2}}f_0^+({\lambda}+{\sigma}).$$ The derivatives of $v$ take their largest values when the wave packet turns around close to the surface. From the scaling properties of solutions of (\[vwave\]), this must happen at ${\sigma}\sim{\sigma}_1$, at which point its amplitude will be $v_0 ({\sigma}_1/{\sigma}_0)^{-n-1/2}$ in the approximation (\[leadingorder\]). Evaluating (\[criterion\]) at that point, we obtain a criterion for the wave never to form a shock, which is $${v_0\over{\sigma}_1}\lesssim \left({\sigma}_1\over{\sigma}_0\right)^{n+{1\over 2}}.$$ Finally, expressing ${\sigma}_0$ and ${\sigma}_1$ in terms of the initial Eulerian position $x_0$ and length scale $x_1$ of the wave packet by using (\[lstx\]), we obtain the regularity criterion $$\label{final} {v_0\over\sqrt{g |x_0|}}\lesssim \left(x_1\over |x_0|\right)^{n+(3/2)}.$$ In these estimates we neglect an unknown $O(1)$ factor depending on the precise shape of the wave packet. Although we have worked in the approximation of planar symmetry and constant $g$, it is useful to express the parameter $g$ in terms of $v_*=\sqrt {2gr_*}$, which is the escape velocity at the surface of a spherical star, where $r_*$ is its radius and $g$ is the gravitational acceleration at its surface. 
We can then rewrite the estimate (\[final\]) as $$v_0\lesssim \left(x_1\over |x_0|\right)^{n+(3/2)} \left(|x_0|\over r_*\right)^{1\over 2}v_*$$ A numerical example will illustrate this: in a neutron star modelled as a polytrope with $r_*\sim 10^4m$, $v_*\sim 10^8m/s$ and $n=1$, a sound wave of wavelength $x_1\sim 1m$ deep in the interior ($x_0\sim -r_*$) must have an amplitude of $v_0\lesssim 10^{-2} m/s$ to remain regular. Generic behaviour at the free boundary ====================================== The surface of the star is a free boundary characterised by the kinematic boundary conditions $$\begin{aligned} P(x_*(t),t)&=&0, \\ {dx_*\over dt}&=&v(x_*(t),t).\end{aligned}$$ These conditions are straightforward to implement in Lagrangian coordinates, but in 3D HRSC simulations we need their equivalent in Eulerian coordinates. For solutions which remain smooth, we obtain these by going through the hodograph transformation. The general solution of Eq. (\[vwave\]) can be written as a linear superposition of solutions of the form $$\label{Besselsolution} v({\lambda},{\sigma})=e^{i\omega{\lambda}}\,{\sigma}^{-n} J_{\pm n} (\omega{\sigma}).$$ As $J_n(\sigma)$ is ${\sigma}^n$ times a power series in positive even powers of $\sigma$, the solution using $J_n$ is an even regular function of ${\sigma}$, while the solution using $J_{-n}$ diverges as ${\sigma}^{-2n}$ as ${\sigma}\to 0$. The regular solution can be selected by imposing the boundary condition $$\label{BC} v_{\sigma}=0 \quad \hbox{at} \quad {\sigma}=0,$$ which together with (\[vwave\]) makes a well-posed linear initial-boundary value problem. Clearly this condition is the required kinematic boundary condition for smooth solutions. We now translate this back into the Eulerian variables $c(x,t)$ and $v(x,t)$. Assuming the wave does not form a shock, the square bracket in (\[Deltaval\]) is strictly positive, and so $\Delta\sim{\sigma}$ at the boundary. 
Substituting $\Delta\sim{\sigma}$ into (\[cxDelta\]) gives $${\sigma}c_x\sim \left(1-v_{\lambda}\right)$$ at the boundary. The right-hand side is even in ${\sigma}$ because $v$ is even in ${\sigma}$ by the assumption of regularity. It follows, using (\[sigmadef\]), that $(c^2)_x$ is a regular function of $c^2$, and hence $c^2$ is a regular function of $x$. Substituting $\Delta\sim{\sigma}$ into (\[vxDelta\]) gives $$v_x\sim {\sigma}^{-1}v_{\sigma}$$ at the boundary. The right-hand side is again even in ${\sigma}$. Hence $v_x$ is an even function of $c^2$ and so, using our previous result, it is a regular function of $x$. It follows that $v$ is a regular function of $x$. It is clear that the ${\lambda}$ or $t$ dependence does not affect these results in the limit $x\to x_*(t)$ or $\sigma\to 0$. We have therefore shown that as long as the solution remains regular, $c^2$ and $v$ are regular functions of $x$ and $t$ at the surface. This is the desired kinematic free boundary condition. In particular, $c^2\sim x_*(t)-x$ at the moving surface of regular solutions, as in the static case. Note that $\rho\sim (c^2)^n$, so $\rho$ is a regular function of $x$ only if $n$ is an integer. Note also that in general $v$ and $c^2$ are neither even nor odd in $x-x_*$. Discussion ========== Building on the earlier work [@CarrierGreenspan; @PP], we have given various forms of an upper limit on the amplitude of nonlinear sound waves if they are to avoid forming a shock. This tells us in which physical regime a simple (non-shock capturing) numerical method will be valid because shocks do not occur. It may also be of direct astrophysical interest. For solutions which remain regular as they are reflected at the free boundary, we have shown that the usual free boundary condition is equivalent to $v$ and $c^2$ being regular functions of $x$ and $t$. This suggests an alternative numerical treatment of the stellar surface which does not require an unphysical atmosphere. 
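The upper limit on the wave amplitude discussed above can be evaluated directly for the neutron-star example given earlier (our own numerical evaluation, neglecting the unknown $O(1)$ shape factor):

```python
# Order-of-magnitude evaluation of the regularity bound
#   v0 <~ (x1/|x0|)^(n+3/2) * (|x0|/r_*)^(1/2) * v_*
# with the neutron-star numbers from the text.
r_star = 1.0e4      # stellar radius r_* [m]
v_esc  = 1.0e8      # escape velocity v_* [m/s]
n      = 1.0        # polytropic index
x1     = 1.0        # length scale of the sound wave [m]
x0     = -r_star    # wave launched deep in the interior

v0_max = (x1/abs(x0))**(n + 1.5) * (abs(x0)/r_star)**0.5 * v_esc
print(f"v0 <~ {v0_max:.0e} m/s")   # ~ 1e-2 m/s, as quoted in the text
```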
Our results were derived within the approximations of Newtonian physics, a constant gravitational field, a polytropic equation of state and planar symmetry (as the limit of spherical symmetry near the surface). As discussed in the introduction, these are all natural approximations to make, except for spherical symmetry. However, applying geometric optics to the linearised sound wave equation for the pressure perturbation $\delta P$, $$\delta P_{tt}=\left(-{gx\over n}\right) \left(\delta P_{xx}+{n+1\over x} \delta P_x+\delta P_{yy}+\delta P_{zz}\right),$$ we find that its sound rays, without loss of generality restricted to the $xy$ plane, are given by $y(x)=a+\ln(1+bx^2)$ for constants $a$ and $b$, and so in the geometric optics approximation sound waves approaching the surface $x=0$ at any angle are refracted towards lower sound speed until they reach the surface at right angles. This provides some justification for the assumption that our results will also be qualitatively correct beyond the restriction to spherical (planar) symmetry. We would like to thank Marvin Jones for discussions, and Michael Gabler for pointing out an error in the original version. U. Sperhake, Non-linear numerical schemes in general relativity, PhD thesis, University of Southampton, 2001, arXiv:gr-qc/0201086. A. Passamonti, Non-linear oscillations of compact stars and gravitational waves, PhD thesis, University of Portsmouth, 2005, arXiv:gr-qc/0607143. E. N. Pelinovsky and N. S. Petrukhin, Emergence of a nonlinear wave at a stellar surface, Soviet Astronomy [**32**]{}, 457-459 (1988). G. F. Carrier and H. P. Greenspan, Water waves of finite amplitude on a sloping beach, J. Fluid Mech. [**4**]{}, 97-109 (1958). D. J. Acheson, [*Elementary Fluid Dynamics*]{}, Oxford University Press, 1990. J. Ockendon, S. Howison, A. Lacey and A. Movchan, [*Applied Partial Differential Equations*]{}, Oxford University Press, 1999.
--- abstract: | This paper presents the results of an analysis of the low-energy $\pi^\pm p$ differential cross sections, acquired by the CHAOS Collaboration at TRIUMF [@chaos; @denz]. We first analyse separately the $\pi^+ p$ and the $\pi^- p$ elastic-scattering measurements on the basis of standard low-energy parameterisations of the $s$- and $p$-wave $K$-matrix elements. After the removal of the outliers, we subject the truncated $\pi^\pm p$ elastic-scattering databases to a common optimisation scheme using the ETH model [@glmbg]; the optimisation failed to produce reasonable values for the model parameters. We conclude that the problems we have encountered in the analysis of these data are due to the shape of the angular distributions of the $\pi^+ p$ differential cross sections.\ [*PACS:*]{} 13.75.Gx; 25.80.Dj address: - 'Centre for Applied Mathematics and Physics, Zurich University of Applied Sciences, Technikumstrasse 9, P.O. Box, CH-8401 Winterthur, Switzerland' - 'Institut für Theoretische Physik der Universität, Winterthurerstrasse 190, CH-8057 Zürich, Switzerland' author: - 'E. Matsinos[$^*$]{}' - 'G. Rasche' title: 'Analysis of the low-energy $\pi^\pm p$ differential cross sections of the CHAOS Collaboration' --- $\pi N$ elastic scattering [$^*$]{}[Corresponding author. E-mail: evangelos.matsinos@zhaw.ch, evangelos.matsinos@sunrise.ch; Tel.: +41 58 9347882; Fax: +41 58 9357306]{} \[sec:Introduction\]Introduction ================================ This is the second of three papers addressing issues of the pion-nucleon ($\pi N$) interaction at low energies (pion laboratory kinetic energy $T \leq 100$ MeV). The goal in this study is to investigate the self-consistency of the $\pi^\pm p$ elastic-scattering differential cross sections (DCSs) of Refs. 
[@chaos; @denz] (in accordance with our naming convention, we hereafter refer to these data as DENZ04); these measurements, which had been acquired at TRIUMF in 1999 and 2000, have not been included in our last two phase-shift analyses (PSAs) [@mworg; @mrw1] (to be referred to as UZH06 and ZUAS12, respectively). In the case of UZH06, we did not notice that the measurements had already been available for some time [^1]. In the case of ZUAS12, we made a conscious decision to avoid modifying our UZH06 database prior to the assessment of the self-consistency of any candidate additions, considering in particular the amount of the DENZ04 data, which almost matches the size of our UZH06 database. In the present work, we will give arguments supporting our position not to use the DENZ04 data and to retain our initial UZH06 $\pi^\pm p$ elastic-scattering databases for future use. We will analyse the DENZ04 data as if they comprised the entire $\pi^\pm p$ elastic-scattering database at low energies; this way, a possible failure when testing the degree of applicability of our PSA to these data cannot be blamed on other experimental data. We will follow the method introduced in Section 4 of Ref. [@mworg] and developed to its current form in Ref. [@mrw1] (see Section 2 therein). We will first investigate the self-consistency of the $\pi^+ p$ measurements on the basis of suitable low-energy parameterisations of the $s$- and $p$-wave $K$-matrix elements; the most recent values of any constants, which are used in the parameterisation of these quantities, may be found in Ref. [@mrw1]. Any outliers will be removed from the data, one data point at a time, until the data sets are consistent and ready for further analysis. At the next step, the $\pi^- p$ elastic-scattering measurements will be analysed. 
After removing any outliers also from these data, we will investigate the possibility of analysing both reactions in a common optimisation scheme; in that part of the study, we will use both the low-energy parameterisations of the $s$- and $p$-wave $K$-matrix elements and, finally, the ETH model [@glmbg]. The last part of the study will be dedicated to the reproduction of the absolute normalisation of the DENZ04 data on the basis of our ZUAS12 solution. We will show that the ZUAS12 solution, based on the bulk of our established $\pi^+ p$ database at low energies, is incompatible with the shape of the angular distribution of the DENZ04 $\pi^+ p$ differential cross sections. \[sec:Method\]Method ==================== The determination of the observables from the hadronic phase shifts has been given in detail in Section 2 of Ref. [@mworg]. For $\pi^+ p$ scattering, one obtains the partial-wave amplitudes from Eq. (1) of that paper and determines the no-spin-flip and spin-flip amplitudes via Eqs. (2) and (3). Finally, the observables are evaluated from these amplitudes via Eqs. (13) and (14). For $\pi^- p$ elastic scattering, the observables are determined on the basis of Eqs. (15-20). All the details on the analysis method (i.e., on the minimisation function, on the definitions of the scale factors, etc.) may be found in Section 2.2 of Ref. [@mrw1]. The contribution $\chi_j^2$ of the $j^{th}$ data set to the overall $\chi^2$ is given therein by Eq. (1). The scale factors $z_j$, which minimise each $\chi_j^2$, are evaluated using Eq. (2); the minimal $\chi_j^2$ value for each data set (denoted by $(\chi_j^2)_{min}$) is given in Eq. (3) and the scaling contribution (of the $j^{th}$ data set) to $(\chi_j^2)_{min}$ in Eq. (4). Finally, the scale factors for free floating $\hat{z}_j$ (which we will use in Section \[sec:Reproduction\], when investigating the absolute normalisation of the DENZ04 data sets using the ZUAS12 solution as reference) are obtained via Eq. 
(5); their total uncertainty $\Delta \hat{z}_j$ has been defined at the end of Section 2.2 of Ref. [@mrw1]. One statistical test will be performed for each data set, the one involving its contribution $(\chi_j^2)_{min}$ to the overall $\chi^2$. The corresponding p-value will be evaluated on the basis of $(\chi_j^2)_{min}$ and the number of degrees of freedom of the data set (hereafter, the acronym DOF will stand for ‘degree(s) of freedom’, whereas NDF will denote the ‘number of DOF’); for a data set with $N_j$ data points (none of which is an outlier), NDF is equal to $N_j$. The p-value for each data set will be compared to the confidence level $\mathrm{p}_{min}$ for the acceptance of the null hypothesis (implying no statistically-significant effects). The value of $\mathrm{p}_{min}$ is fixed to the equivalent of a $2.5 \sigma$ effect in the normal distribution, corresponding to about $1.24 \cdot 10^{-2}$. To avoid the repetitive use of the full description of the databases, we adhere to the following notation: DB$_+$ for the $\pi^+ p$ database; DB$_-$ for the $\pi^- p$ elastic-scattering database; DB$_{+/-}$ for the combined $\pi^\pm p$ elastic-scattering databases. \[sec:Database\]The DENZ04 data =============================== \[sec:General\]General comments on the data ------------------------------------------- The DENZ04 DB$_+$ consists of $275$ data points, acquired at five energies between $19.90$ and $43.30$ MeV. Two sets of values are available at $43.30$ MeV: one was taken under the same conditions as the data at the lower energies, whereas another was obtained with the target rotated by $64^\circ$; similarly to the notation in Ref. [@denz], we will identify the data set corresponding to the rotated target via the label ‘(rot.)’. Technically, the DENZ04 $\pi^+ p$ data must be assigned to only $6$ data sets [@chaos]. 
However, the measurements have been assigned to a total of $17$ data sets in the SAID database [@abws], after splitting each data set (except at $19.90$ MeV) into three parts: forward-angle ($\theta < 35^\circ$), medium-angle ($35^\circ < \theta \lesssim 150^\circ$), and backward-angle ($\theta \gtrsim 150^\circ$); $\theta$ denotes the centre-of-mass (CM) scattering angle. The $19.90$ MeV data do not cover scattering angles above $98.45^\circ$; therefore, the original data set has been split into two segments. Some justification for this splitting of the original data sets may be found in Ref. [@abws], and rests on the variation of the event-selection algorithm with the scattering-angle interval. The essential point is that a coincidence measurement (i.e., the simultaneous detection of both the scattered pion and proton), enabling the vertex reconstruction at ‘medium’ scattering-angle values, cannot be performed in the cases of very-forward and very-backward scattering at low energies; as a result, only the scattered pion had been detected in very-forward scattering, and only the scattered proton in very-backward scattering. Although we have also analysed the DENZ04 data in the way the CHAOS Collaboration appears to suggest (i.e., by assigning the measurements for each reaction to only $6$ data sets), the results of our analysis clearly favour the splitting of the data sets into segments in the way this is done in the SAID database. We will give the results of the optimisation only in the case of the split $\pi^+ p$ data sets. The assignment of the measurements to $17$ data sets enables the determination of the scale factors $z_j$ from data which are more ‘localised’ (in terms of the scattering angle) and, as such, it implies a more favourable treatment of the data. Evidently, the appearance of any problems in the case of the assignment of the data to $17$ data sets can only be exacerbated if only $6$ data sets are used. 
The DENZ04 DB$_-$ comprises $271$ data points, taken at the same five beam energies as the DENZ04 DB$_+$; similarly to $\pi^+ p$, two sets of DCS values have been acquired at $43.30$ MeV. In SAID, the measurements have been assigned to $12$ data sets; we will do the same in the optimisation phase. The final normalisation uncertainties reported by the CHAOS Collaboration [@chaos] are as follows: $5 \%$ at the three lowest energies and $7 \%$ for the $43.30$ MeV data sets. Asymmetric uncertainties have been given for the $37.10$ MeV data sets ($+5, -9 \%$); unable to treat asymmetric uncertainties in our software, we have decided to use the normalisation uncertainty of $7 \%$ (average of the two absolute values) for the $37.10$ MeV data [^2]. The SAID output for the DENZ04 $\pi^+ p$ data also contains the contribution of these data sets to their overall $\chi^2$ value. According to their results, the contribution of the $274$ data points [^3] of the DENZ04 $\pi^+ p$ data to the overall $\chi^2$ is very large ($703.22$ units). Furthermore, the $\chi^2$ value for the DENZ04 $\pi^- p$ elastic-scattering data is only slightly better ($503.30$ for $271$ data points). On the basis of these numbers, it is evident that the SAID WI08 phase-shift solution yields a statistically-poor reproduction of the DENZ04 data. \[sec:K-Matrix\_pi+p\]Fits to the DENZ04 DB$_+$ using the $K$-matrix parameterisations -------------------------------------------------------------------------------------- The parameterisation of the $s$- and $p$-wave $K$-matrix elements for the low-energy $\pi^+ p$ scattering may be found in Section 3.1 of Ref. [@mrw1]. The optimal values of the corresponding seven parameters ($\tilde{a}_{0+}^{3/2}$, $b_3$, $c_3$, $d_{33}$, $e_{33}$, $d_{31}$, and $e_{31}$) are obtained via the minimisation of the $\chi^2$ function (see Section 2.2 of Ref. [@mrw1]). We will apply the same acceptance criteria to the DENZ04 measurements as were applied to the data in the ZUAS12 PSA. 
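Schematically, an optimisation of this type (theory parameters fitted jointly with per-data-set scale factors constrained by the quoted normalisation uncertainties) can be sketched as follows. This is our own toy construction: the model, the data and the closed-form scale factor below are illustrative stand-ins, not the actual $K$-matrix parameterisation or the exact minimisation function of Ref. [@mrw1].

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def chi2_set(theory, data, err, dz):
    """Minimal chi^2 of one data set after optimising its scale factor z:
    chi2(z) = sum_i (z*t_i - d_i)^2/e_i^2 + (z - 1)^2/dz^2."""
    a = np.sum(theory*data/err**2) + 1.0/dz**2
    b = np.sum(theory**2/err**2) + 1.0/dz**2
    z = a/b                                   # closed-form optimal scale factor
    return np.sum((z*theory - data)**2/err**2) + (z - 1.0)**2/dz**2, z

# toy "theory": a one-parameter angular distribution (purely illustrative)
angles = np.linspace(20.0, 150.0, 15)         # degrees
def model(p):
    return 1.0 + p*np.cos(np.radians(angles))

# two toy data sets with 5% and 7% normalisation uncertainties
truth = model(0.4)
sets = [(0.97*truth + rng.normal(0, 0.02, truth.size),
         0.02*np.ones_like(truth), 0.05),
        (1.05*truth + rng.normal(0, 0.02, truth.size),
         0.02*np.ones_like(truth), 0.07)]

def total_chi2(params):
    return sum(chi2_set(model(params[0]), d, e, dz)[0] for d, e, dz in sets)

fit = minimize(total_chi2, x0=[0.0], method='Nelder-Mead')
print("fitted shape parameter:", fit.x[0])
```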
The results of the optimisation procedure are shown in Table \[tab:Pi+p\]. Since seven parameters are used to generate the fitted values, the NDF in the first fit to the DENZ04 DB$_+$ was $268$; the minimum value of $\chi^2$ was $486.9$. For the truncated DENZ04 DB$_+$, the minimum value of $\chi^2$ was $401.2$ for $260$ DOF in the fit. The values of the seven parameters of the fit came out far from those obtained in the fits to the truncated DB$_+$ of Ref. [@mrw1]. The details on the truncated DENZ04 DB$_+$, as obtained from the final fit, are given in Table \[tab:DBpi+p\]. \[sec:K-Matrix\_pi-p\]Fits to the DENZ04 DB$_-$ using the $K$-matrix parameterisations -------------------------------------------------------------------------------------- The $I=3/2$ amplitudes were fixed from the final fit to the truncated DENZ04 DB$_+$ and were imported into the analysis of the DENZ04 DB$_-$. The parameterisation of the $s$- and $p$-wave $I=1/2$ $K$-matrix elements, suitable for the low-energy $\pi^- p$ elastic scattering, may be found in Section 3.2 of Ref. [@mrw1]. Seven new parameters ($\tilde{a}_{0+}^{1/2}$, $b_1$, $c_1$, $d_{13}$, $e_{13}$, $d_{11}$, and $e_{11}$) are introduced at this stage. We present the steps in the process of removing outliers from the DENZ04 DB$_-$ in Table \[tab:Pi-p\]; only three data points had to be removed. The final result for $\tilde{a}^{cc}$, obtained from the data, was around $0.041\:\mu_c^{-1}$, a value which differs by a factor of $2$ from the result $\tilde{a}^{cc}=0.0803(11)\: \mu_c^{-1}$ of Ref. [@mrw1] ($\mu_c$ denotes the mass of the charged pion); the $\tilde{a}^{cc}$ value of Ref. [@mrw1] roughly agrees with the result obtained from the pionic-hydrogen data. We traced this discrepancy to the very unusual final values of the parameters entering the modelling of the $s$- and $p$-wave $K$-matrix elements for the $\pi^+ p$ reaction (i.e., the values which fixed the $I=3/2$ amplitudes in the case of the fits to the DENZ04 DB$_-$). 
The details on each data set of the truncated DENZ04 DB$_-$, as obtained from the final fit, are given in Table \[tab:DBpi-p\]. \[sec:K-Matrix\]Common fit to the DENZ04 DB$_{+/-}$ using the $K$-matrix parameterisations ------------------------------------------------------------------------------------------ In order to give the two elastic-scattering reactions equal weight, we multiplied $(\chi^2_j)_{min}$ for each $\pi^+ p$ data set by $$w_+=\frac{N\!_+ + N\!_-}{2N\!_+}$$ and for each $\pi^- p$ elastic-scattering data set by $$w_-=\frac{N\!_+ + N\!_-}{2N\!_-} \, ,$$ where $N\!_+$ and $N\!_-$ represent the NDF in the two databases; we then added these quantities for all the data sets to obtain the overall $\chi^2$ value. The application of these ‘global’ weights for the two reactions was made as a matter of principle; given the proximity of the $N\!_+$ and $N\!_-$ values in the case of the DENZ04 data, the effect of this weighting on our results is very small. The common fit to the truncated DENZ04 DB$_{+/-}$ was subsequently performed, using $14$ parameters. This step was taken in order to examine whether any additional points (or data sets) had to be removed; none were identified. The common fit to the data yielded a $\chi^2$ value of $751.5$ for $521$ DOF in the fit. \[sec:Model\]Common fit to the truncated DENZ04 DB$_{+/-}$ using the ETH model ------------------------------------------------------------------------------ So far in this paper, we have used standard low-energy parameterisations of the $\pi N$ amplitudes in terms of the pion CM kinetic energy. We will now use the ETH model which is based on Feynman diagrams. Details on the model, as well as on its seven parameters ($G_\sigma$, $K_\sigma$, $G_\rho$, $K_\rho$, $g_{\pi NN}$, $g_{\pi N \Delta}$, and $Z$) may be obtained from Refs. [@mworg; @mrw1]. This model was introduced in Ref. [@glmbg] and was developed to its final form by the mid 1990s. 
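The equal-weight prescription introduced earlier in this section amounts to the following (our own sketch; the NDF values used below are illustrative, of the order of the truncated DENZ04 counts):

```python
def reaction_weights(n_plus, n_minus):
    """Global weights w+ and w- giving the pi+p and pi-p data sets
    equal total weight: n_plus*w_plus == n_minus*w_minus."""
    w_plus = (n_plus + n_minus)/(2.0*n_plus)
    w_minus = (n_plus + n_minus)/(2.0*n_minus)
    return w_plus, w_minus

w_plus, w_minus = reaction_weights(267, 268)   # illustrative NDF values
# weighted contributions sum to the same total for each reaction:
assert abs(267*w_plus - 268*w_minus) < 1e-9
print(w_plus, w_minus)
```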
### \[sec:ModelResults\]Results The common fit of the ETH model to the truncated DENZ04 DB$_{+/-}$ yielded a $\chi^2$ value of $783.3$ for $528$ DOF in the fit. All results for the parameters of the ETH model turned out to be far from their ‘established’ values. At the same time, the numerical evaluation of the correlation (Hessian) matrix failed and the positivity had to be enforced by the MINUIT software library [@jms] (which we used exclusively in the optimisation); hence, the uncertainties of the fit parameters could not be obtained. Our results for the seven model parameters have shown good stability over the years, from the period when the fits were made to old, outdated phase shifts to the present times when the fits are made directly to the contents of the low-energy $\pi N$ database. The database itself has also changed significantly over the last two decades. In any case, it is fair to say that, no matter which data the model was fitted to, the results for the model parameters always came out within a reasonable interval of values; this had been the case even when obvious outliers (e.g., the measurements of Ref. [@brt]) were included in our database (e.g., see Ref. [@m]). The results for the model parameters, obtained from the common fit to the truncated DENZ04 DB$_{+/-}$, are very odd. Evidently, for whatever reasons, the parameters drift away from their ‘established’ values, to unreasonable (or even unphysical) ones. It is therefore meaningless to give ‘optimal’ values for the model parameters. Despite the drift of the model parameters in the fit, we decided to determine the $s$- and $p$-wave phase shifts with the model-parameter values obtained in the fit to the truncated DENZ04 DB$_{+/-}$. The values of the DENZ04-based hadronic phase shifts (and their overall tendency with increasing energy) were found hard to accept. The final results for these phase shifts were far from the values established in Refs. [@mworg; @mrw1; @abws]. 
### \[sec:Reproduction\]Reproduction of the DENZ04 data on the basis of the ZUAS12 solution Given all these problems, we decided to investigate the reproduction of the DENZ04 data on the basis of the ZUAS12 prediction. Our goal in this part of the study is to identify the kinematical region(s) in which the DENZ04 data are poorly reproduced; if successful, we could pinpoint the origin of the problems we have encountered in the analysis of these measurements. The DENZ04 measurements, normalised to the corresponding ZUAS12 predictions, are shown in Figs. \[fig:PIPPE\] and \[fig:PIMPE\]; the outliers detailed in Tables \[tab:Pi+p\] and \[tab:Pi-p\] are also contained in these figures. Evidently, the angular distribution of the DB$_+$ disagrees with the shape obtained from the rest of the $\pi^+ p$ measurements in Ref. [@mrw1]; on the other hand, the angular distribution (and the absolute normalisation) of the DB$_-$ is in reasonable agreement with the results of Ref. [@mrw1]. We will start with the reproduction of the measurements when the data sets are characterised only by the target configuration and the energy of the incident beam (i.e., following the suggestion of the CHAOS Collaboration [@chaos]). The results are shown in Table \[tab:Reproduction1\]. We notice that the overall $\chi^2$ values of the reproduction (i.e., the sums of the corresponding $(\chi_j^2)_{min}$ values given in the table for the two elastic-scattering reactions) are: $547.7$ for $\pi^- p$ elastic scattering ($271$ data points) and $2446.0$ for $\pi^+ p$ ($275$ data points). The reproduction of the DENZ04 measurements after the data sets have been split into $29$ segments in total (see Section \[sec:General\]) is given in Table \[tab:Reproduction2\]. We observe that the overall $\chi^2$ values of the reproduction drop for both reactions: to $506.5$ for $\pi^- p$ elastic scattering, and to $747.3$ for $\pi^+ p$. 
The dramatic decrease in the latter case indicates that the problems we have encountered in the analysis of the DENZ04 data are mainly due to the shape of the $\pi^+ p$ angular distributions. The decrease in the case of the $\pi^- p$ elastic-scattering data (i.e., ‘unsplit’ versus split data sets) is very moderate, indicating considerably fewer problems with the DENZ04 DB$_-$. The results after removing the outliers, detailed in Tables \[tab:Pi+p\] and \[tab:Pi-p\], are shown in Table \[tab:Reproduction3\]; the $\chi^2$ values drop further to $469.1$ and $665.7$ for $\pi^- p$ and $\pi^+ p$ elastic scattering, respectively. The scale factors for free floating $\hat{z}_j$, corresponding to the optimal reproduction of the absolute normalisation of the DENZ04 data on the basis of the ZUAS12 solution, are given in Figs. \[fig:sfpip\] and \[fig:sfpim\], separately for the two elastic-scattering reactions. For $\pi^+ p$ scattering, three $\hat{z}_j$ values per energy are obtained (corresponding to the three angular intervals into which the measurements have been split, i.e., forward, medium, and backward angles); as mentioned earlier, the $19.90$ MeV data set does not cover backward angles. For $\pi^- p$ elastic scattering, two $\hat{z}_j$ values per energy are obtained (corresponding to the two angular intervals of the measurements, i.e., forward and medium/backward angles). We recollect that the $\chi^2$ results of Ref. [@mrw1] (for $\mathrm{p}_{min} \approx 1.24 \cdot 10^{-2}$) for the two reactions were: $371.0$ and $427.2$ for $321$ and $333$ DOF in the fit, for $\pi^- p$ and $\pi^+ p$ elastic scattering, respectively. The $F$-test performed on the two $\pi^- p$ elastic-scattering databases (i.e., on the truncated DENZ04 DB$_-$ and on the truncated ZUAS12 DB$_-$) results in the score value of $1.515$ for $268$ and $321$ DOF, corresponding to the p-value of about $1.9 \cdot 10^{-4}$. 
On the other hand, the $F$-test performed on the two corresponding $\pi^+ p$ databases yields a score value of $1.944$ for $267$ and $333$ DOF, corresponding to a p-value of about $4.8 \cdot 10^{-9}$. These two results are sufficient to substantiate our position that the DENZ04 measurements are not compatible with the rest of the low-energy $\pi N$ database, as it emerged in our PSA of Ref. [@mrw1]. In view of these striking differences, it makes no sense to include even part of the DENZ04 data, as they currently stand, in our PSAs. The inspection of Fig. \[fig:sfpip\] shows that the extracted scale factors $\hat{z}_j$ for the DENZ04 DB$_+$ scatter to the extent that no coherent picture may be obtained from these results. A closer look at the five entries for backward scattering, however, demonstrates that these data can be reproduced well by our ZUAS12 solution [^4]. By contrast, all scale factors obtained at forward and medium angles (except at $37.10$ MeV) show that the experimental data systematically exceed the ‘theoretical’ values obtained on the basis of the optimal parameters of Ref. [@mrw1]. The discrepancies reach the $35 \%$ level, with an average around $15 \%$. On the other hand, the scale factors for free floating $\hat{z}_j$ obtained in the case of the truncated DENZ04 DB$_-$ (Fig. \[fig:sfpim\]) cluster well around the expectation value of $1$. Therefore, the absolute normalisation of the DENZ04 DB$_-$ appears to be compatible with the results obtained in Ref. [@mrw1]. To summarise, the absolute normalisation of the DENZ04 DB$_-$ appears to be in good agreement with our ZUAS12 solution, as is the normalisation of the $\pi^+ p$ backward-angle data sets. Large effects in the normalisation of the $\pi^+ p$ data sets have been seen at forward and medium scattering angles [^5]. The hadronic part of the $\pi N$ interaction is dominant in backward scattering. 
The importance of the electromagnetic (em) contributions increases with decreasing scattering angle, finally culminating in the Coulomb peak, which governs the very-forward scattering. The conclusion we drew from the analysis of the DENZ04 DB$_-$ is that the ETH model, along with the optimal values of the model parameters (as obtained in the ZUAS12 solution) and the known em contributions, accounts for the normalisation of the data successfully. The conclusion we drew from the analysis of the DENZ04 DB$_+$ is that the hadronic part of the interaction, as deduced on the basis of the ZUAS12 solution, is successful in reproducing the normalisation of the experimental data in the backward direction. Because the problems lie with the scale factors obtained at forward and medium scattering-angle values, the truncated DENZ04 DB$_+$ seems to indicate modifications in the em part of the $\pi N$ interaction. However, the em parts of the two elastic-scattering reactions are intimately connected; one cannot modify one and leave the other intact. (This comment also applies to the hadronic part of the amplitude when involving the ETH model in the fits. The two elastic-scattering reactions are linked via the crossing symmetry which the model obeys.) Additionally, the physics of the Coulomb peak has been established for a very long time. In view of these results, it is now understood why the fit of the ETH model to the DENZ04 data drifts. As there are no adjustable parameters in the em part of the $\pi N$ interaction, the adjustable hadronic part attempts to compensate for unexpected features in the data in a kinematical region in which the sensitivity of the DCS to the hadronic part of the interaction is expected to be low. Upon inspection of the results of the single-energy phase-shift solution obtained from the DENZ04 data (see Table $6.1$ of Ref. [@denz] and Fig. 
4 of the main publication of the CHAOS Collaboration [@chaos]), one cannot but feel uneasy about these values. Of course, it *is* true that the single-energy phase-shift solutions are not expected to show the smoothness of the results obtained when the energy dependence of the phase shifts is modelled via appropriate functions and experimental data, taken at more than one energy, are fitted to. Nevertheless, the value of the P31 phase shift (i.e., $+0.65^\circ$, no uncertainty has been quoted) in Table $6.1$ of Denz’s dissertation, at $19.90$ MeV, is wrong by about $0.9^\circ$; the *largest* of the p-wave phase shifts (P33) is itself about $1^\circ$ at that energy! At $20$ MeV, the SAID result [@abws] for P31 ($-0.22^\circ$) is almost identical to the value we had obtained in Ref. [@mrw1]. This discrepancy alone provides good reason for the thorough re-examination of the results (at least at $19.90$ MeV) of Refs. [@chaos; @denz]. The DENZ04 data cover very low $T$ values, a ‘corner’ of the phase space in which the parameterisations of the $K$-matrix elements, which we have been using in our analyses for almost two decades, should have worked best. It seems puzzling to be able to successfully analyse all the rest of the $\pi N$ data (taken by different groups, with different detectors, at different meson-factory facilities and times) up to $100$ MeV, but be unable to obtain any meaningful results from the DENZ04 measurements, which extend only up to $43.30$ MeV. Given the characteristics of the DENZ04 data, there is no room for questioning the theoretical background on which this work relies; we strongly believe that we should have been able to obtain meaningful results from these data using both our $K$-matrix parameterisations and the ETH model. \[sec:Discussion\]Discussion and Summary ======================================== This paper presents the results of an analysis of the DENZ04 [@chaos; @denz] low-energy $\pi^\pm p$ differential cross sections. 
Given the size of the data acquired in this experiment (a total of $546$ data points), the self-consistency of these measurements must be addressed prior to their inclusion in our database, which we have carefully established and analysed in our last two phase-shift analyses (PSAs) of Refs. [@mworg; @mrw1]. The DENZ04 data were analysed as if they comprised the entire $\pi^\pm p$ elastic-scattering database at low energies, by following the method set forth in Ref. [@mworg]. The analysis of the DENZ04 $\pi^+ p$ measurements on the basis of standard low-energy parameterisations of the $s$- and $p$-wave $K$-matrix elements led to the identification of eight outliers in a total of $275$ data points (see Table \[tab:DBpi+p\]), whereas that of the DENZ04 $\pi^- p$ elastic-scattering measurements led to the removal of three out of a total of $271$ data points (see Table \[tab:DBpi-p\]). We subsequently subjected the truncated DENZ04 combined $\pi^\pm p$ elastic-scattering databases to a common optimisation scheme, using the ETH model [@glmbg]. The ability of the model to account for the low-energy $\pi N$ interaction has been demonstrated during the past two decades of research in this field. To our surprise, the optimisation failed to yield reasonable values for the model parameters. The phase-shift solution, extracted from the fit to the DENZ04 data, is far from the results of Refs. [@mworg; @mrw1; @abws]. We next tried to trace the origin of these problems by investigating the reproduction of the DENZ04 data on the basis of the results of our recent PSA [@mrw1]. We found that the absolute normalisation of the DENZ04 $\pi^- p$ elastic-scattering data is in good agreement with our ZUAS12 solution, as is the normalisation of the DENZ04 $\pi^+ p$ data sets at backward angles. 
On the other hand, large effects in the normalisation of the DENZ04 $\pi^+ p$ data sets have been seen at forward and medium scattering angles, i.e., in the region where the electromagnetic (em) effects are important. Therefore, the DENZ04 data seem to suggest modifications of the em part of the $\pi^+ p$ reaction, whereas the DENZ04 $\pi^- p$ elastic-scattering data are compatible with the em part as it currently stands. Given the relation between the em amplitudes for the two reactions, the two aforementioned suggestions are mutually incompatible. An explanation of the failure of the model to account for the DENZ04 data has been advanced on the basis of the interpretation of Figs. \[fig:sfpip\] and \[fig:sfpim\]. Given that there are no adjustable parameters in the em part of the $\pi N$ interaction, the adjustable hadronic part attempts to compensate for unexpected features in the data. In an effort to model large differences in a region where the sensitivity of the DCS to the hadronic part of the interaction is expected to be low, the model parameters drift away from their ‘established’ values, to unreasonable ones. Given all these problems, we are currently unable to include the experimental data of Refs. [@chaos; @denz] in our database. Additionally, we would like to remark that the use of these data in low-energy PSAs will surely lead to bias. We hope that the findings of the present work will be helpful in future analyses of the $\pi N$ data. We acknowledge helpful discussions with G.J. Wagner on the acquisition and treatment of the experimental data of the CHAOS Collaboration. We thank G.R. Smith and I.I. Strakovsky for their remarks. Finally, we acknowledge the exchange of interesting ideas with W.S. Woolcock (deceased) on a number of issues connected with the present work. [99]{} H. Denz *et al.*, Phys. Lett. B 633 (2006) 209-13. H. Denz, Ph.D. dissertation, Tübingen University, 2004; http://tobias-lib.uni-tuebingen.de/dbt/volltexte/2004/1323/. E. 
Matsinos, W.S. Woolcock, G.C. Oades, G. Rasche, A. Gashi, Nucl. Phys. A 778 (2006) 95-123. E. Matsinos, G. Rasche, J. Mod. Phys. 3 (2012) 1369-87. P.F.A. Goudsmit, H.J. Leisi, E. Matsinos, B.L. Birbrair, A.B. Gridnev, Nucl. Phys. A 575 (1994) 673-706. R.A. Arndt, W.J. Briscoe, I.I. Strakovsky, R.L. Workman, Phys. Rev. C 74 (2006) 045205; SAID PSA Tool, http://gwdac.phys.gwu.edu. F. James, ‘MINUIT - Function Minimization and Error Analysis’, CERN Program Library Long Writeup D506. P.Y. Bertin , Nucl. Phys. B 106 (1976) 341-54. E. Matsinos, Phys. Rev. C 56 (1997) 3014-25. **** The list of outliers in the DENZ04 $\pi^+ p$ database. The rows represent steps in the outlier-identification/elimination process. The columns indicate: the $\chi^2$ value, the number of degrees of freedom NDF in the fit, and the worst data point at that step; the worst data point was then removed and the fit to the remaining data was made. No data can be marked for removal at step $9$. The worst data point is identified on the basis of the corresponding pion laboratory kinetic energy $T$ (in MeV) and the centre-of-mass scattering angle $\theta$. The presence of an angular interval at step $6$ indicates that the corresponding data set (i.e., the backward-angle data set at $25.80$ MeV) was freely floated in all subsequent fits. 
Step $\chi^2$ NDF Worst data point ($T$, $\theta$) ------ ---------- ------- ---------------------------------- $1$ $486.9$ $268$ $25.80$, $165.78^\circ$ $2$ $474.0$ $267$ $37.10$, $93.16^\circ$ $3$ $463.2$ $266$ $19.90$, $20.35^\circ$ $4$ $450.7$ $265$ $37.10$, $54.59^\circ$ $5$ $441.3$ $264$ $19.90$, $84.16^\circ$ $6$ $431.1$ $263$ $25.80$, $150.48 - 167.46^\circ$ $7$ $421.6$ $262$ $19.90$, $42.75^\circ$ $8$ $411.9$ $261$ $37.10$, $169.23^\circ$ $9$ $401.2$ $260$ : \[tab:Pi+p\] **** The data sets comprising the truncated DENZ04 $\pi^+ p$ database, the pion laboratory kinetic energy $T$ (in MeV), the corresponding angular interval ($\theta$) of the data set (f: forward, m: medium, b: backward), the number of degrees of freedom (NDF)$_j$ for each data set, the scale factor $z_j$ which minimises $\chi_j^2$, the values of $(\chi_j^2)_{min}$, and the p-value of the fit for each data set. The numbers of this table correspond to the final fit to the data using the $K$-matrix parameterisations (see Section \[sec:K-Matrix\_pi+p\]). 
$T$, $\theta$ (NDF)$_j$ $z_j$ $(\chi_j^2)_{min}$ p-value Comments ------------------ ----------- ---------- -------------------- ---------- ---------------------------------------- $19.90$, f $5$ $1.0591$ $9.3280$ $0.0967$ $20.35^\circ$ removed $19.90$, m $25$ $0.9378$ $40.8771$ $0.0236$ $42.75$, $84.16^\circ$ removed $25.80$, f $5$ $1.0359$ $12.7392$ $0.0260$ $25.80$, m $27$ $1.0739$ $33.0017$ $0.1970$ $25.80$, b $9$ $0.8150$ $16.3025$ $0.0608$ $165.78^\circ$ removed, freely floated $32.00$, f $5$ $1.0299$ $6.3159$ $0.2767$ $32.00$, m $28$ $1.0595$ $44.4274$ $0.0252$ $32.00$, b $13$ $1.0149$ $17.9694$ $0.1587$ $37.10$, f $8$ $0.8957$ $4.8626$ $0.7722$ $37.10$, m $26$ $0.8824$ $41.6963$ $0.0264$ $54.59$, $93.16^\circ$ removed $37.10$, b $12$ $0.8940$ $16.2488$ $0.1801$ $169.23^\circ$ removed $43.30$, f $12$ $0.9905$ $15.9922$ $0.1916$ $43.30$, m $28$ $0.9771$ $37.8442$ $0.1014$ $43.30$, b $13$ $0.9498$ $25.6257$ $0.0191$ $43.30$(rot.), f $12$ $1.0214$ $23.2806$ $0.0254$ $43.30$(rot.), m $27$ $0.9934$ $42.8706$ $0.0270$ $43.30$(rot.), b $12$ $0.9512$ $11.8162$ $0.4606$ : \[tab:DBpi+p\] **** The equivalent of Table \[tab:Pi+p\] for the truncated DENZ04 $\pi^- p$ elastic-scattering database. Step $\chi^2$ NDF Worst data point ($T$, $\theta$) ------ ---------- ------- ---------------------------------- $1$ $388.2$ $264$ $25.80$, $11.08^\circ$ $2$ $371.5$ $263$ $43.30$, $152.55^\circ$ $3$ $358.9$ $262$ $25.80$, $76.00^\circ$ $4$ $350.2$ $261$ : \[tab:Pi-p\] **** The equivalent of Table \[tab:DBpi+p\] for the truncated DENZ04 $\pi^- p$ elastic-scattering database; m/b in the corresponding angular interval ($\theta$) of the data set indicates combined medium and backward angles. The numbers of this table correspond to the final fit to the data using the $K$-matrix parameterisations (see Section \[sec:K-Matrix\_pi-p\]). 
$T$, $\theta$ (NDF)$_j$ $z_j$ $(\chi_j^2)_{min}$ p-value Comments -------------------- ----------- ---------- -------------------- ---------- ------------------------ $19.90$, f $6$ $1.0075$ $3.6609$ $0.7225$ $19.90$, m $25$ $0.9959$ $18.7750$ $0.8078$ $25.80$, f $6$ $1.0186$ $15.1307$ $0.0193$ $11.08^\circ$ removed $25.80$, m/b $37$ $1.0074$ $54.0761$ $0.0346$ $76.00^\circ$ removed $32.00$, f $5$ $0.9525$ $5.6430$ $0.3425$ $32.00$, m/b $40$ $0.9918$ $38.8705$ $0.5210$ $37.10$, f $9$ $0.9513$ $8.7386$ $0.4617$ $37.10$, m/b $41$ $0.9620$ $59.0529$ $0.0336$ $43.30$, f $12$ $1.0554$ $23.3763$ $0.0247$ $43.30$, m/b $38$ $1.0861$ $52.6021$ $0.0579$ $152.55^\circ$ removed $43.30$(rot.), f $12$ $1.1054$ $21.0420$ $0.0498$ $43.30$(rot.), m/b $37$ $1.1103$ $49.2107$ $0.0864$ : \[tab:DBpi-p\] **** The reproduction of the DENZ04 data sets using the ZUAS12 solution as reference (i.e., yielding the ‘theoretical values’ $y_{ij}^{th}$ in Eq. (1) of Ref. [@mrw1]). The columns represent: the pion laboratory kinetic energy $T$ (in MeV), the number of degrees of freedom (NDF)$_j$ for each data set, the reported normalisation uncertainty [@chaos], the scale factor $z_j$ minimising $\chi_j^2$ with its uncertainty $\Delta z_j$ (combining in quadrature $\delta z_j$ and the statistical uncertainty derived from Eq. (2) of Ref. [@mrw1]), the scale factor for free floating $\hat{z}_j$ with its uncertainty $\Delta \hat{z}_j$, the minimal $\chi_j^2$ value of the reproduction, the part of $(\chi_j^2)_{min}$ which corresponds to the statistical fluctuation in the data, and the part of $(\chi_j^2)_{min}$ which corresponds to the scaling of each data set as a whole. All definitions have been given in Section 2.2 of Ref. [@mrw1]. The table corresponds to the original DENZ04 data sets; all the data, which have been obtained at one condition (target configuration, energy), are assumed to comprise one data set in this table. 
The outliers, detailed in Tables \[tab:Pi+p\] and \[tab:Pi-p\], have been removed. [|c|c|c|c|c|c|c|c|]{} $T$ & (NDF)$_j$ & $\delta z_j$ & $z_j(\Delta z_j)$ & $\hat{z}_j(\Delta \hat{z}_j)$ & $(\chi_j^2)_{min}$ & $(\chi_j^2)_{st}$ & $(\chi_j^2)_{sc}$\ \ $19.90$ & $33$ & $0.050$ & $1.077(50)$ & $1.078(50)$ & $118.1$ & $115.7$ & $2.4$\ $25.80$ & $43$ & $0.050$ & $1.045(50)$ & $1.046(50)$ & $1208.8$ & $1208.0$ & $0.8$\ $32.00$ & $46$ & $0.050$ & $1.129(50)$ & $1.130(50)$ & $340.0$ & $333.3$ & $6.7$\ $37.10$ & $49$ & $0.070$ & $0.982(70)$ & $0.982(70)$ & $315.0$ & $314.9$ & $0.1$\ $43.30$ & $53$ & $0.070$ & $1.061(70)$ & $1.062(70)$ & $235.6$ & $234.8$ & $0.8$\ $43.30$(rot.) & $51$ & $0.070$ & $1.039(70)$ & $1.039(70)$ & $228.5$ & $228.2$ & $0.3$\ \ $19.90$ & $31$ & $0.050$ & $0.963(50)$ & $0.963(50)$ & $54.8$ & $54.3$ & $0.5$\ $25.80$ & $45$ & $0.050$ & $1.019(50)$ & $1.019(50)$ & $160.5$ & $160.4$ & $0.1$\ $32.00$ & $45$ & $0.050$ & $1.041(50)$ & $1.042(50)$ & $68.6$ & $67.9$ & $0.7$\ $37.10$ & $50$ & $0.070$ & $0.999(70)$ & $0.999(70)$ & $84.7$ & $84.7$ & $0.0$\ $43.30$ & $51$ & $0.070$ & $1.052(70)$ & $1.052(70)$ & $83.1$ & $82.5$ & $0.6$\ $43.30$(rot.) & $49$ & $0.070$ & $1.087(71)$ & $1.088(71)$ & $96.0$ & $94.5$ & $1.6$\ **** The equivalent of Table \[tab:Reproduction1\] in the case that the original $12$ data sets are split in a total of $29$ segments (see Section \[sec:General\]). The outliers, detailed in Tables \[tab:Pi+p\] and \[tab:Pi-p\], have been removed. The corresponding angular interval ($\theta$) of each data set (f: forward, m: medium, b: backward, m/b: combined medium and backward angles) is indicated in the first column. 
[|c|c|c|c|c|c|c|c|]{} $T$, $\theta$ & (NDF)$_j$ & $\delta z_j$ & $z_j(\Delta z_j)$ & $\hat{z}_j(\Delta \hat{z}_j)$ & $(\chi_j^2)_{min}$ & $(\chi_j^2)_{st}$ & $(\chi_j^2)_{sc}$\ \ $19.90$, f & $6$ & $0.050$ & $1.084(51)$ & $1.087(51)$ & $24.1$ & $21.1$ & $2.9$\ $19.90$, m & $27$ & $0.050$ & $1.070(51)$ & $1.072(51)$ & $95.1$ & $93.1$ & $2.0$\ $25.80$, f & $5$ & $0.050$ & $1.063(51)$ & $1.065(51)$ & $21.2$ & $19.6$ & $1.6$\ $25.80$, m & $27$ & $0.050$ & $1.208(51)$ & $1.213(51)$ & $62.8$ & $45.0$ & $17.8$\ $25.80$, b & $11$ & $0.050$ & $0.824(51)$ & $0.819(51)$ & $42.0$ & $29.2$ & $12.7$\ $32.00$, f & $5$ & $0.050$ & $1.157(54)$ & $1.199(56)$ & $19.0$ & $6.5$ & $12.5$\ $32.00$, m & $28$ & $0.050$ & $1.193(50)$ & $1.197(50)$ & $73.7$ & $58.5$ & $15.2$\ $32.00$, b & $13$ & $0.050$ & $1.023(51)$ & $1.024(51)$ & $19.4$ & $19.1$ & $0.2$\ $37.10$, f & $8$ & $0.070$ & $1.076(72)$ & $1.080(72)$ & $9.5$ & $8.2$ & $1.2$\ $37.10$, m & $28$ & $0.070$ & $1.008(70)$ & $1.008(70)$ & $110.1$ & $110.1$ & $0.0$\ $37.10$, b & $13$ & $0.070$ & $0.912(70)$ & $0.911(70)$ & $27.3$ & $25.7$ & $1.6$\ $43.30$, f & $12$ & $0.070$ & $1.225(77)$ & $1.314(83)$ & $28.8$ & $14.4$ & $14.4$\ $43.30$, m & $28$ & $0.070$ & $1.130(71)$ & $1.132(71)$ & $45.0$ & $41.5$ & $3.5$\ $43.30$, b & $13$ & $0.070$ & $0.983(71)$ & $0.983(71)$ & $23.4$ & $23.3$ & $0.1$\ $43.30$(rot.), f & $12$ & $0.070$ & $1.265(77)$ & $1.360(82)$ & $48.7$ & $29.2$ & $19.5$\ $43.30$(rot.), m & $27$ & $0.070$ & $1.063(70)$ & $1.064(70)$ & $87.3$ & $86.4$ & $0.8$\ $43.30$(rot.), b & $12$ & $0.070$ & $0.984(71)$ & $0.984(71)$ & $10.2$ & $10.2$ & $0.1$\ \ $19.90$, f & $6$ & $0.050$ & $1.004(51)$ & $1.005(51)$ & $3.6$ & $3.6$ & $0.0$\ $19.90$, m & $25$ & $0.050$ & $0.940(51)$ & $0.939(51)$ & $31.0$ & $29.5$ & $1.5$\ $25.80$, f & $7$ & $0.050$ & $1.011(50)$ & $1.011(50)$ & $38.5$ & $38.5$ & $0.0$\ $25.80$, m/b & $38$ & $0.050$ & $1.026(50)$ & $1.027(50)$ & $118.9$ & $118.7$ & $0.3$\ $32.00$, f & $5$ & $0.050$ & $0.992(52)$ & $0.991(52)$ & 
$4.4$ & $4.4$ & $0.0$\ [**Table 6 continued**]{} $T$, $\theta$ (NDF)$_j$ $\delta z_j$ $z_j(\Delta z_j)$ $\hat{z}_j(\Delta \hat{z}_j)$ $(\chi_j^2)_{min}$ $(\chi_j^2)_{st}$ $(\chi_j^2)_{sc}$ -------------------- ----------- -------------- ------------------- ------------------------------- -------------------- ------------------- ------------------- $32.00$, m/b $40$ $0.050$ $1.050(50)$ $1.050(50)$ $49.0$ $48.0$ $1.0$ $37.10$, f $9$ $0.070$ $0.986(71)$ $0.985(71)$ $7.8$ $7.8$ $0.0$ $37.10$, m/b $41$ $0.070$ $1.002(70)$ $1.002(70)$ $75.4$ $75.4$ $0.0$ $43.30$, f $12$ $0.070$ $1.052(71)$ $1.054(71)$ $23.4$ $22.8$ $0.6$ $43.30$, m/b $39$ $0.070$ $1.051(71)$ $1.052(71)$ $60.2$ $59.7$ $0.5$ $43.30$(rot.), f $12$ $0.070$ $1.102(71)$ $1.106(71)$ $21.2$ $19.0$ $2.2$ $43.30$(rot.), m/b $37$ $0.070$ $1.066(71)$ $1.069(72)$ $73.1$ $72.2$ $0.9$ **** The equivalent of Table \[tab:Reproduction2\] in the case that the outliers, detailed in Tables \[tab:Pi+p\] and \[tab:Pi-p\], have been removed. [|c|c|c|c|c|c|c|c|]{} $T$, $\theta$ & (NDF)$_j$ & $\delta z_j$ & $z_j(\Delta z_j)$ & $\hat{z}_j(\Delta \hat{z}_j)$ & $(\chi_j^2)_{min}$ & $(\chi_j^2)_{st}$ & $(\chi_j^2)_{sc}$\ \ $19.90$, f & $5$ & $0.050$ & $1.094(51)$ & $1.098(51)$ & $14.8$ & $11.1$ & $3.7$\ $19.90$, m & $25$ & $0.050$ & $1.070(51)$ & $1.072(51)$ & $66.0$ & $64.0$ & $2.0$\ $25.80$, f & $5$ & $0.050$ & $1.063(51)$ & $1.065(51)$ & $21.2$ & $19.6$ & $1.6$\ $25.80$, m & $27$ & $0.050$ & $1.208(51)$ & $1.213(51)$ & $62.8$ & $45.0$ & $17.8$\ $25.80$, b & $9$ & $0.050$ & $0.830(51)$ & $0.830(51)$ & $17.3$ & $17.3$ & $0.0$\ $32.00$, f & $5$ & $0.050$ & $1.157(54)$ & $1.199(56)$ & $19.0$ & $6.5$ & $12.5$\ $32.00$, m & $28$ & $0.050$ & $1.193(50)$ & $1.197(50)$ & $73.7$ & $58.5$ & $15.2$\ $32.00$, b & $13$ & $0.050$ & $1.023(51)$ & $1.024(51)$ & $19.4$ & $19.1$ & $0.2$\ $37.10$, f & $8$ & $0.070$ & $1.076(72)$ & $1.080(72)$ & $9.5$ & $8.2$ & $1.2$\ $37.10$, m & $26$ & $0.070$ & $1.009(70)$ & $1.009(70)$ & $101.6$ & $101.6$ & $0.0$\ 
$37.10$, b & $12$ & $0.070$ & $0.907(70)$ & $0.906(70)$ & $17.3$ & $15.5$ & $1.8$\ $43.30$, f & $12$ & $0.070$ & $1.225(77)$ & $1.314(83)$ & $28.8$ & $14.4$ & $14.4$\ $43.30$, m & $28$ & $0.070$ & $1.130(71)$ & $1.132(71)$ & $45.0$ & $41.5$ & $3.5$\ $43.30$, b & $13$ & $0.070$ & $0.983(71)$ & $0.983(71)$ & $23.4$ & $23.3$ & $0.1$\ $43.30$(rot.), f & $12$ & $0.070$ & $1.265(77)$ & $1.360(82)$ & $48.7$ & $29.2$ & $19.5$\ $43.30$(rot.), m & $27$ & $0.070$ & $1.063(70)$ & $1.064(70)$ & $87.3$ & $86.4$ & $0.8$\ $43.30$(rot.), b & $12$ & $0.070$ & $0.984(71)$ & $0.984(71)$ & $10.2$ & $10.2$ & $0.1$\ \ $19.90$, f & $6$ & $0.050$ & $1.004(51)$ & $1.005(51)$ & $3.6$ & $3.6$ & $0.0$\ $19.90$, m & $25$ & $0.050$ & $0.940(51)$ & $0.939(51)$ & $31.0$ & $29.5$ & $1.5$\ $25.80$, f & $6$ & $0.050$ & $1.029(51)$ & $1.030(51)$ & $19.1$ & $18.7$ & $0.3$\ $25.80$, m/b & $37$ & $0.050$ & $1.023(50)$ & $1.024(50)$ & $109.9$ & $109.7$ & $0.2$\ $32.00$, f & $5$ & $0.050$ & $0.992(52)$ & $0.991(52)$ & $4.4$ & $4.4$ & $0.0$\ [**Table 7 continued**]{} $T$, $\theta$ (NDF)$_j$ $\delta z_j$ $z_j(\Delta z_j)$ $\hat{z}_j(\Delta \hat{z}_j)$ $(\chi_j^2)_{min}$ $(\chi_j^2)_{st}$ $(\chi_j^2)_{sc}$ -------------------- ----------- -------------- ------------------- ------------------------------- -------------------- ------------------- ------------------- $32.00$, m/b $40$ $0.050$ $1.050(50)$ $1.050(50)$ $49.0$ $48.0$ $1.0$ $37.10$, f $9$ $0.070$ $0.986(71)$ $0.985(71)$ $7.8$ $7.8$ $0.0$ $37.10$, m/b $41$ $0.070$ $1.002(70)$ $1.002(70)$ $75.4$ $75.4$ $0.0$ $43.30$, f $12$ $0.070$ $1.052(71)$ $1.054(71)$ $23.4$ $22.8$ $0.6$ $43.30$, m/b $38$ $0.070$ $1.050(71)$ $1.051(71)$ $51.4$ $50.8$ $0.5$ $43.30$(rot.), f $12$ $0.070$ $1.102(71)$ $1.106(71)$ $21.2$ $19.0$ $2.2$ $43.30$(rot.), m/b $37$ $0.070$ $1.066(71)$ $1.069(72)$ $73.1$ $72.2$ $0.9$ ![image](PIPPELow.eps){width="15.5cm"} ![\[fig:PIPPE\]The DENZ04 $\pi^+ p$ measurements ($y_{ij}^{exp}$), normalised to the corresponding ZUAS12 predictions 
($y_{ij}^{th}$); the outliers detailed in Table \[tab:Pi+p\] are also contained. The normalisation uncertainties of the DENZ04 data sets (see Section \[sec:General\]) are not shown. The statistical uncertainties of the ZUAS12 predictions (below $5 \%$ in all cases, for the energies considered herein) are also not shown.](PIPPEHigh.eps){width="15.5cm"} ![image](PIMPELow.eps){width="15.5cm"} ![\[fig:PIMPE\]The DENZ04 $\pi^- p$ elastic-scattering measurements ($y_{ij}^{exp}$), normalised to the corresponding ZUAS12 predictions ($y_{ij}^{th}$); the outliers detailed in Table \[tab:Pi-p\] are also contained. The normalisation uncertainties of the DENZ04 data sets (see Section \[sec:General\]) are not shown. The statistical uncertainties of the ZUAS12 predictions (typically at the few-percent level; between $6$ and about $23 \%$ in the very backward direction, for the energies considered herein) are also not shown.](PIMPEHigh.eps){width="15.5cm"} ![\[fig:sfpip\]The scale factors for free floating $\hat{z}_j$ for the DENZ04 $\pi^+ p$ data, obtained on the basis of the ZUAS12 solution. To improve the display, the $\hat{z}_j$ values at $43.30$ MeV are shown slightly shifted (horizontally). The labels ‘forward’, ‘medium’, and ‘backward’ indicate the corresponding angular interval of the measurements.](sfpip.eps){width="15.5cm"} ![\[fig:sfpim\]The scale factors for free floating $\hat{z}_j$ for the DENZ04 $\pi^- p$ elastic-scattering data, obtained on the basis of the ZUAS12 solution. To improve the display, the $\hat{z}_j$ values at $43.30$ MeV are shown slightly shifted (horizontally). The labels ‘forward’ and ‘medium/backward’ indicate the corresponding angular interval of the measurements.](sfpim.eps){width="15.5cm"} [^1]: The $546$ DENZ04 DCS values have been given in tabulated form in Ref. [@denz], and were available two years prior to the main publication of the CHAOS Collaboration [@chaos]. 
[^2]: At the time of submission of the present work, the SAID group were still using the normalisation uncertainties as they appeared in the captions of the tables of Appendix B of Ref. [@denz], instead of those quoted in the final publication of the CHAOS Collaboration [@chaos]. [^3]: The SAID group have excluded one data point belonging to the $37.10$ MeV medium-angle data set. [^4]: Note that the backward-angle $25.80$ MeV data set had to be freely floated when the self-consistency of the DENZ04 DB$_+$ was investigated using the $K$-matrix parameterisations, e.g., see Section \[sec:K-Matrix\_pi+p\], as well as Tables \[tab:Pi+p\] and \[tab:DBpi+p\]; therefore, the normalisation of this data set is questionable even when the data set is compared only to the rest of the DENZ04 $\pi^+ p$ measurements. This implies that the seemingly poor reproduction of the absolute normalisation of this data set on the basis of the ZUAS12 solution is less problematic than it appears to be. Indeed, if the scale factor of $0.8150$ of Table \[tab:DBpi+p\] is applied to this data set, its resulting absolute normalisation will be compatible with our ZUAS12 solution. [^5]: It must be added that the absolute normalisation is not the only problem of the DENZ04 measurements. When using the ZUAS12 solution as reference, the shapes of $11$ out of the $29$ data sets (after all the outliers are removed) do not pass the test for $\mathrm{p}_{min} \approx 1.24 \cdot 10^{-2}$. The disagreement in shape is very pronounced in the $\pi^+ p$ medium-angle $37.10$ MeV, in the $\pi^- p$ medium/backward-angle $25.80$ MeV, and in the $\pi^+ p$ medium-angle $43.30$(rot.) MeV data sets. None of the backward-angle measurements of the DENZ04 DB$_+$ shows any inconsistency in shape when compared with the ZUAS12 solution.
--- abstract: 'We examine the low-energy physics of graphene in the presence of a circularly polarized electric field in the terahertz regime. Specifically, we derive a general expression for the dynamical polarizability of graphene irradiated by an ac electric field. Several approximations are introduced that allow one to develop a semianalytical theory for the weak-field regime. The ac field qualitatively changes the single- and many-electron excitations of graphene: undoped samples may exhibit collective excitations (in contrast to the equilibrium situation), and the properties of the excitations in doped graphene are strongly influenced by the ac field. We also show that the intensity of the external field is the critical control parameter for the stability of these excitations.' author: - 'Maria Busl$^1$, Gloria Platero$^1$, and Antti-Pekka Jauho$^2$' bibliography: - 'bibliographie\_graphene.bib' title: Dynamical polarizability of graphene irradiated by circularly polarized ac electric fields --- Introduction ============ Graphene is a genuinely two dimensional material whose peculiar properties have received a lot of attention since its first isolation in 2004.[@geimSC04; @zhangNA05] Structurally, it is a single atom thick layer of graphite, i.e. a two dimensional crystal, that remains stable both when it is deposited on a substrate and when it is suspended. Its electronic properties have attracted huge interest: the low energy excitations are chiral massless Dirac electrons in two dimensions, thereby providing a new platform for testing the basic tenets of solid state physics. 
This fact, which ultimately arises from the honeycomb structure of the graphene crystal lattice, is responsible for a strikingly different electronic behavior as compared to the conventional two dimensional electron gases (e.g., in semiconductor heterostructures) studied extensively in the laboratory.[@guineaRMP09; @andoRMP82] The effect of external fields on the low energy properties of the charge carriers in graphene has been a topic of extensive research since the early days, as the discovery of the anomalous quantum Hall effect witnesses.[@geimNA05; @zhangNA05] Understanding the behavior of graphene in the presence of electrical and magnetic fields is of major relevance both from a fundamental and an applied point of view. The former, since new exotic behavior may arise in the presence of external fields, and the latter, because external fields can be used to manipulate its properties, for instance by opening gaps in the electronic spectrum, which is essential for applications in the semiconductor industry. 
The effect of radiation on both monolayer and bilayer graphene has been analyzed only recently, and has led to the prediction of a variety of phenomena, such as the photovoltaic Hall effect,[@aokiPRB09] metal-insulator transition of graphene,[@kibisPRB10] valley-polarized currents in both monolayer and bilayer graphene,[@chakrabortyAPL09; @chakrabortyNT11] and photoinduced quantum Hall effect in the absence of magnetic fields.[@demlerPRB11] Other theoretical works include the analysis of ac transport properties through graphene ribbons,[@fertigPRL11] graphene-based $pn$-junctions,[@efetovPRB08] graphene-based Fabry-Pérot devices[@cunibertiPRB10] and the recent proposal of quantum pumping in graphene by an external ac field.[@kohlerPRB11] Experimentally it has been found that a circularly polarized ac field induces a dynamic Hall effect in graphene.[@olbrichPRL10] Several studies have been devoted to the theoretical analysis of the quasienergy spectrum of graphene and graphene dots under ac fields,[@calvoAPL11; @naumisPRB08; @riveraPRB09; @zhangNJofPhys09] and the optical properties of graphene have been studied by calculating the optical conductivity.[@zhouPRB11] One of the earliest and yet most important findings in all these studies is that a circularly polarized field induces a band-gap at the Dirac point, along with dynamical gaps at other momenta, all of which are tunable by the field intensity. This is, however, not the case for a linearly polarized field: there the [*anisotropic*]{} quasienergy spectrum shows dynamical gaps at non-zero momentum only in certain directions, and especially no gap is induced at the Dirac point.[@zhouPRB11; @calvoAPL11] In this paper we study theoretically the effect of a circularly polarized ac electric field in the terahertz regime on the electron excitation spectrum, and on the electron-electron interaction. 
The interactions are found to be affected qualitatively by the external field, altering the nature of the single particle excitations as well as the many-particle excitations, both in doped and undoped graphene. Special attention is paid to the existence of a plasmon in [*undoped*]{} graphene, which is not present in its field-free counterpart. In order to perform this investigation, the natural object to study is the dynamical polarizability, which has already been studied extensively in graphene without an ac field.[@shungPRB86; @dassarmaPRB07; @wunschNJoP06; @mishchenkopolPRL08; @sabiofsumPRB08; @stauberanalyticalpolPRB10; @roldandynpolmagPRB09] The structure of the paper is the following: In Sect. \[sec2\], we briefly introduce the Hamiltonian of graphene in the presence of a circularly polarized electric field, emphasizing the role of Floquet theory in Sect. \[sec2subsec1\], and present several approximations to the single electron Hamiltonian valid for weak fields in Sect. \[sec2subsec2\]. Section \[sec3\] is dedicated to the analysis of the dynamical polarizability: we derive a general expression for the polarizability of graphene in an ac electric field in Sect. \[sec3subsec1\], and compare it with the corresponding expression for the two dimensional electron gas.[@chinodynscreeningPRB02] Finally in section \[sec3subsec2\], the general formula is combined with the analytical approximations in order to evaluate it both for the non-interacting system and for the interacting system in the Random Phase Approximation (RPA). Single electron properties of graphene under a circularly polarized ac electric field {#sec2} ===================================================================================== Model and technique {#sec2subsec1} ------------------- In the low-energy regime, the Hamiltonian for single electron excitations in graphene is the famous Dirac Hamiltonian. 
In order to introduce a time-dependent electric field we choose a gauge in which the latter is represented via a gauge potential $\mathbf{A}(t)$, whose time dependence is that of a single monochromatic and circularly polarized wave of frequency $\Omega$: $$\begin{aligned} \mathbf{A}(t) &= -\frac{E_0}{\Omega\sqrt{2}}[\hat{x}\sin(\Omega t) - \hat{y}\cos(\Omega t)].\end{aligned}$$ By using a minimal coupling scheme, the Hamiltonian for graphene irradiated by this electric field reads: $$H(t) = \begin{pmatrix} 0 & k_x - i k_y + iAe^{-i\Omega t}\\ k_x + i k_y-iA e^{i\Omega t} & 0 \end{pmatrix}, \label{hamilcirc}$$ with $A = eE_{0}/(\sqrt{2}\Omega)$ and $v_{\text{F}} = \hbar = 1$. Here the Hamiltonian is expressed in terms of Bloch states of momentum $\mathbf{k}$, which is defined with respect to one of the valleys. Note that the electric field does not couple the spin and valley degrees of freedom in graphene, which remain as an extra degeneracy $N_v N_s = 4$. The eigenstates of graphene in the absence of the external field are two-dimensional spinors representing the two components of the unit cell of the honeycomb lattice in graphene, that once diagonalized give rise to two bands (or Dirac cones). In the Dirac Hamiltonian, the pseudospin has a scalar coupling with the momentum, and its eigenstates are those whose pseudospin is either parallel or antiparallel to its momentum. In fact, the mathematical structure of the Hamiltonian coincides with that of an electronic spin coupled through Rashba interaction to a magnetic field. In this analogy, the momentum in graphene plays the role of the magnetic field, and the pseudospin operator is the analogue of the ordinary spin, both having the same representation in terms of Pauli matrices. This allows one to write the Hamiltonian $H = {\bf \sigma} \cdot {\bf k}$. 
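Before turning to a systematic treatment, the quasienergies implied by Eq. (\[hamilcirc\]) can already be extracted numerically from the one-period time-evolution operator, whose eigenvalues are the phases $e^{-i\epsilon T}$ with $T=2\pi/\Omega$. A minimal sketch at $\mathbf{k}=0$ (the parameter values and step count are illustrative choices, not taken from the paper):

```python
import numpy as np

# One-period propagator for Eq. (hamilcirc) at k = 0, where
# H(t) = A sin(Omega t) sigma_x - A cos(Omega t) sigma_y   (v_F = hbar = 1)
def quasienergies_k0(A, Omega, steps=20000):
    T = 2 * np.pi / Omega
    dt = T / steps
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    U = np.eye(2, dtype=complex)
    for n in range(steps):
        t = (n + 0.5) * dt                       # midpoint rule
        nx, ny = np.sin(Omega * t), -np.cos(Omega * t)
        # exp(-i dt H(t)) in closed form, since |H(t)| = A at all times
        U = (np.cos(A * dt) * np.eye(2)
             - 1j * np.sin(A * dt) * (nx * sx + ny * sy)) @ U
    eps = -np.angle(np.linalg.eigvals(U)) / T    # eigenphases e^{-i eps T}
    return np.sort(eps)                          # already in (-Omega/2, Omega/2]

A, Omega = 0.1, 1.0
eps = quasienergies_k0(A, Omega)
gap = eps[1] - eps[0]
print(gap, np.sqrt(4 * A**2 + Omega**2) - Omega)  # both ~0.0198
```

The splitting found this way coincides with the field-induced gap $\Delta=\sqrt{4A^2+\Omega^2}-\Omega$ derived analytically in the following subsections.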
In the presence of an external electric field an extra term of the same nature arises in the Hamiltonian, now coupling the pseudospin and the electric field, and inducing transitions between the eigenstates for the isolated system. In a sense the momentum and the electric field are competing dynamically for the direction of the pseudospin, but no compromise can be reached due to the time-dependence of the field, which no longer allows for an analysis of the problem in terms of stationary eigenstates.\ To proceed, we apply the Floquet theorem, which is the most suitable way to address time periodic Hamiltonians (detailed accounts can be found in Refs. \[\]). Floquet theory states that for a Hamiltonian that is periodic in time – $H(t + 2\pi/\Omega) = H(t)$ – a complete set of solutions of the time-dependent Schrödinger equation $$H(t)|\psi(t)\rangle = i\frac{d}{dt}|\psi(t)\rangle \label{sgltime}$$ can be written as $$\begin{aligned} |\psi_{\alpha}(t)\rangle &= e^{-i\epsilon_{\alpha}t}|\phi_{\alpha}(t)\rangle\nonumber\\ |\phi_{\alpha}(t)\rangle &= |\phi_{\alpha}(t+T)\rangle, \label{sglsol}\end{aligned}$$ where $\alpha$ contains the quantum numbers of the problem and the so-called Floquet index, which we will label as $l$. The role of this index is to classify the different [*sidebands*]{}, since $\epsilon_{\alpha}$, the quasienergies, are defined $\mod \hbar\Omega$ and are related by the simple transformation: $$\epsilon_{\alpha(l)} = \epsilon_{\alpha(0)} + l \Omega \label{transformation1}$$ In analogy to the Bloch theorem, the quasienergies can be mapped into a first time Brillouin zone, which is $[-\Omega/2, \Omega/2]$, and therefore corresponds to $l = 0$. The Floquet states $|\phi_{\alpha}(t)\rangle$ have the same periodicity as the driving field (see Eq.) and can therefore be expanded into a Fourier series: $$|\phi_{\alpha}(t)\rangle = \sum_{n=-\infty}^{\infty}e^{in\Omega t}|\phi_{\alpha}^n\rangle. 
\label{fourierfloquet}$$ The Floquet states are also defined in different branches of solutions, and are related to one another by the transformation: $$|\phi_{\alpha(l)}^n\rangle = |\phi_{\alpha(0)}^{l + n}\rangle \label{transformation2}$$ Substituting Eq. into Eq. and using Eq. yields a static eigenvalue equation of the form $$\sum_m (H^{nm} - n \Omega \delta_{mn}) |\phi_\alpha^m\rangle = \epsilon_\alpha|\phi_\alpha^n\rangle. \label{floquet}$$ Defining now the Floquet Hamiltonian as $H_{\text{F}} ^{nm}= H^{nm} - m\Omega \delta_{mn}$, we see that a significant simplification has been achieved: the time-dependent problem has been transformed to a static problem, and, consequently, one can apply the intuition about equilibrium problems to make statements about a dynamical problem. The resulting equilibrium-like observables derived within this framework have to be understood as time averages over a period of the external field.\ Let us now apply the Floquet formalism to the Hamiltonian of graphene . In this case, the solutions are characterized by indices $\alpha=(\mathbf{k},\sigma,l)$, where $\sigma = \pm$ is the pseudospin index: $$\begin{aligned} n\Omega\phi_{\alpha}^{n, a} + (k_x - ik_y)\phi_{\alpha}^{n,b} + i A\phi_{\alpha}^{n+1, b} &= \epsilon_{\alpha}\phi_{\alpha}^{n,a}\nonumber\\ n\Omega\phi_{\alpha}^{n,b} + (k_x + ik_y)\phi_{\alpha}^{n,a} - i A\phi_{\alpha}^{n-1,a} &= \epsilon_{\alpha}\phi_{\alpha}^{n,b}\end{aligned}$$ Notice that $a$ and $b$ are the indices for the sublattices of the honeycomb lattice. 
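These recursion relations can be checked numerically by truncating the photon index at $|n|\le N$ and diagonalizing the resulting finite matrix; for a weak field the two quasienergies inside the first Brillouin zone at $\mathbf{k}=0$ sit at $\pm\Delta/2$, with $\Delta=\sqrt{4A^2+\Omega^2}-\Omega$ the field-induced gap derived below. A minimal sketch (the parameter values and the truncation $N$ are illustrative choices, not from the paper):

```python
import numpy as np

def floquet_matrix(kx, ky, A, Omega, N=8):
    """Truncated Floquet Hamiltonian built from the recursion relations,
    with photon indices n = -N..N and basis ordering |n,a>, |n,b>."""
    dim = 2 * (2 * N + 1)
    H = np.zeros((dim, dim), dtype=complex)
    idx = lambda n, s: 2 * (n + N) + s        # s = 0 for a, 1 for b
    for n in range(-N, N + 1):
        H[idx(n, 0), idx(n, 0)] = n * Omega
        H[idx(n, 1), idx(n, 1)] = n * Omega
        H[idx(n, 0), idx(n, 1)] = kx - 1j * ky
        H[idx(n, 1), idx(n, 0)] = kx + 1j * ky
        if n < N:
            H[idx(n, 0), idx(n + 1, 1)] = 1j * A    # + iA phi^{n+1,b}
            H[idx(n + 1, 1), idx(n, 0)] = -1j * A   # - iA phi^{n-1,a}
    return H

A, Omega = 0.1, 1.0
eps = np.linalg.eigvalsh(floquet_matrix(0.0, 0.0, A, Omega))
# quasienergies inside the first Brillouin zone (-Omega/2, Omega/2)
central = eps[np.abs(eps) < Omega / 2]
gap = central[central > 0].min() - central[central < 0].max()
print(gap, np.sqrt(4 * A**2 + Omega**2) - Omega)  # both ~0.0198
```

Scanning $k_x$ with the same matrix reproduces the quasienergy bands of Fig. \[fig:quasikxcirc\], including the anticrossings at $|\mathbf{k}|\approx n\Omega/2$.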
These equations can be written in matrix form, where the infinite Floquet Hamiltonian reads $$H_{\text{F}} = \begin{pmatrix} \ddots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \iddots \\ \cdots & -\Omega & (k_x - ik_y) & 0 & i A & 0 & 0 & \cdots\\ \cdots & (k_x + ik_y) & -\Omega & 0 & 0 & 0 & 0 & \cdots\\ \cdots & 0 & 0 & 0 & (k_x - ik_y) & 0 & i A & \cdots\\ \cdots & -i A & 0 & (k_x + ik_y) & 0 & 0 & 0 & \cdots\\ \cdots & 0 & 0 & 0 & 0 & \Omega & (k_x - ik_y) & \cdots\\ \cdots & 0 & 0 & -i A & 0 & (k_x + ik_y) & \Omega & \cdots\\ \iddots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \label{hamilfloquetcirc} \end{pmatrix}.$$ The structure of this Hamiltonian deserves a few comments. The ac field $A$ connects $(2\times2)$ graphene Hamiltonians with energies $n\Omega$ and $(n+1)\Omega$ and so on. Each of these building blocks contributes with its own dispersion relation (that of a Dirac cone) to the energy spectrum, and the field introduces transitions between these cones. These transitions are expressed as anticrossings in the spectrum, and become exact crossings for $A\to0$. It can easily be seen that the anticrossings occur at $|\mathbf{k}|\approx n\Omega/2$, with $n=0,1,2,\dots$. At $|\mathbf{k}|\approx\Omega/2$ e.g., the $(+,0)$ and the $(-,1)$ sideband anticross, which would be a so-called [*one–photon resonance*]{}. Analytical approximations to the single particle Hamiltonian {#sec2subsec2} ------------------------------------------------------------ The Floquet Hamiltonian, Eq. (\[hamilfloquetcirc\]) can be diagonalized numerically in order to analyze the energy spectrum and its features. However, in order to simplify calculations and to illuminate the main physics, here we resort to analytical approximations which capture the main features whenever the electric field intensity is sufficiently weak. We will show in Sect. 
\[sec2subsec3\] the full numerical results for the quasienergy spectrum in order to compare it with the analytical approximations. Before introducing such approximations, it is convenient to project the Hamiltonian into another basis. As can be seen in Fig. \[chaincirc\], for $\mathbf{k}=0$, the Floquet chain breaks up into a series of disconnected two-level systems. ![\[chaincirc\]Sketch of the Hamiltonian for the circularly polarized field. Note that if $\mathbf{k}=0$, the Hamiltonian breaks up into disconnected two-level systems, in which site $a_n$ is coupled to site $b_{n+1}$. ](graphene_fig1.eps){width="3.4in"} We therefore diagonalize the Hamiltonian for the two-level-system, and then write the full Hamiltonian in the resulting basis. The Hamiltonian for $\mathbf{k}=0$ reads $$\begin{gathered} H^{\mathbf{k=0}}_{\text{F}} = \sum_n n\Omega\left[|\phi^{n,a}_{\mathbf{k} = 0}\rangle\langle \phi^{n,a}_{\mathbf{k} = 0}| + |\phi^{n,b}_{\mathbf{k} = 0}\rangle\langle \phi^{n,b}_{\mathbf{k} = 0}|\right] \\ + iA\left[|\phi^{n,a}_{\mathbf{k} = 0} \rangle\langle \phi^{n+1,b}_{\mathbf{k} = 0}| - |\phi^{n+1, b}_{\mathbf{k} = 0}\rangle\langle \phi^{n,a}_{\mathbf{k} = 0}|\right]. %+ (k_x - ik_y)|a_n\rangle\langle b_n| + (k_x + ik_y)|b_n\rangle\langle a_n|\end{gathered}$$ An excerpt of the series of $(2\times2)$ Hamiltonians is $$H^{\mathbf{k}=0}_{\text{F}} = \begin{pmatrix} n\Omega & iA & 0 & 0\\ -iA & (n+1)\Omega & 0 & 0\\ 0 & 0 & (n-1)\Omega & iA\\ 0 & 0 & -iA & n\Omega \end{pmatrix}. 
\label{circh0}$$ Out of the four eigenenergies of this matrix we are interested in $$\begin{aligned} \epsilon_{l}^{\pm} &= l\Omega \pm \frac{1}{2}\Delta,\end{aligned}$$ with $$\begin{aligned} \Delta &= \widetilde{\Omega}-\Omega\nonumber\\ \widetilde{\Omega} &= \sqrt{4A^2 + \Omega^2}.\end{aligned}$$ These two energies fulfill $\lim_{A\to0}\epsilon_{0}^{\pm}=0$; thus, we associate the first Brillouin zone, $l = 0$, of the Floquet solutions with the solutions corresponding to graphene in the absence of an external field. The corresponding eigenvectors are $$\begin{aligned} |\phi_{l}^{+}\rangle &= \frac{1}{N}\left(2iA |\phi^{l-1, a}_{\mathbf{k} = 0}\rangle + (\Delta+2\Omega)|\phi^{l,b}_{\mathbf{k} = 0}\rangle\right)\\ |\phi_{l}^{-}\rangle &= \frac{1}{N}\left((\Delta+2\Omega)|\phi^{l,a}_{\mathbf{k} = 0}\rangle+2iA|\phi^{l+1,b}_{\mathbf{k} = 0}\rangle \right),\end{aligned}$$ where $N=\sqrt{4A^2+(\Delta+2\Omega)^2}$. From here on, we omit the index $\mathbf{k}$ in the energies and vectors, unless we have to distinguish between $\mathbf{k}$ and $\mathbf{k+q}$. 
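The $2\times2$ blocks of Eq. (\[circh0\]) are easily verified numerically: the block coupling $|\phi^{n,a}\rangle$ and $|\phi^{n+1,b}\rangle$ has eigenvalues $n\Omega-\Delta/2$ and $(n+1)\Omega+\Delta/2$, consistent with $\epsilon_{l}^{\pm}=l\Omega\pm\Delta/2$ for $l=n$ and $l=n+1$. A short sketch (parameter values are illustrative):

```python
import numpy as np

def two_level_block(n, A, Omega):
    # Block of Eq. (circh0) coupling |phi^{n,a}> (energy n*Omega)
    # to |phi^{n+1,b}> (energy (n+1)*Omega) through the field term iA
    return np.array([[n * Omega, 1j * A],
                     [-1j * A, (n + 1) * Omega]])

A, Omega = 0.1, 1.0
Delta = np.sqrt(4 * A**2 + Omega**2) - Omega
for n in (-2, -1, 0, 1):
    lo, hi = np.linalg.eigvalsh(two_level_block(n, A, Omega))
    # both differences vanish to machine precision
    print(n, lo - (n * Omega - Delta / 2), hi - ((n + 1) * Omega + Delta / 2))
```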
Using these eigenvectors as a basis, we rewrite the full Floquet Hamiltonian $$\begin{aligned} H_{\text{F}}= \small &\begin{pmatrix}[c|cc|cc|cc|c] \ddots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \iddots\\[5pt] \hline \hdots & \epsilon_{n-1}^{+}& F_0k e^{i\Theta} & F_1 k e^{i\Theta} & 0 & 0 & 0 & \hdots\\[5pt] \hdots & F_0k e^{-i\Theta} & \epsilon_{n-1}^{-} & 0 & F_1^{*}k e^{i\Theta} & F_{2}k e^{i\Theta} & 0 & \hdots\\[5pt] \hline \hdots & F_{1}^{*}ke^{-i\Theta} & 0 & \epsilon_{n}^{+} & F_0 k e^{i\Theta} & F_{1}k e^{i\Theta} & 0 & \hdots\\[5pt] \hdots & 0 & F_{1}ke^{-i\Theta} & F_{0}ke^{-i\Theta} & \epsilon_{n}^{-} & 0 & F_{1}^{*}ke^{i\Theta} & \hdots\\[5pt] \hline \hdots & 0 & F_{2}ke^{-i\Theta} & F_{1}^{*}k e^{-i\Theta} & 0 & \epsilon_{n+1}^{+} & F_{0}ke^{i\Theta}& \hdots\\[5pt] \hdots & 0 & 0 & 0 & F_{1}ke^{-i\Theta} & F_{0}ke^{-i\Theta} & \epsilon_{n+1}^{-}& \hdots\\[5pt] \hline \iddots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \label{HF0F1F2}\end{aligned}$$ where $k=|\mathbf{k}|$, $\Theta=\arctan{k_y/k_x}$ and we introduced the three functions $F_0$, $F_1$ and $F_2$ that will form the basis for our approximations: $$\begin{aligned} F_0 &= \frac{(\Delta+2\Omega)^2}{4A^2+(\Delta+2\Omega)^2}\nonumber\\ F_1 &= \frac{2iA(\Delta+2\Omega)}{4A^2+(\Delta+2\Omega)^2}\nonumber\\ F_2 &= \frac{4A^2}{4A^2+(\Delta+2\Omega)^2}. \label{functionsF0F1F2}\end{aligned}$$ All three functions in Eq. are functions of $A$ and $\Omega$. For small $A/\Omega\ll1$, one finds that $F_0\approx1$, and $F_{1,2}\approx0$, however, $F_1$ increases linearly whereas $F_2$ increases quadratically with $A$. Note that for a two-level system driven by a linearly polarized field, the $n$th order Bessel function $J_{n}\left(A/\Omega\right)$ plays the role of the function $F_{0,1,2}$ presented here, and for a complete analysis one has to consider Bessel functions up to infinite order, see e.g. Ref. \[\]. 
Here, however, the complete information lies in $F_{0,1,2}$. In the subsequent analysis, we will at first only consider the couplings given by $F_0$, and then include also the couplings given by $F_1$. We will neglect $F_2$ in general, which is valid for small $A/\Omega$. ### $F_0$–approximation {#sectionF0} A first approximation consists in neglecting both $F_1$ and $F_2$ and considering only $F_0$, which connects energies with the same photon number $n$. This approximation is valid for the calculation of many observables as long as the dimensionless quantity $A/\Omega\ll1$ – i.e. the field intensity is small compared to the frequency – and we are interested in excitations in the low energy sector, as we will see below, when we analyze the excitation spectrum and the generalized density of states. The resulting Hamiltonian is then block diagonal with building blocks $H_{\text{F}_0}^{n}$, where the matrix $H_{\text{F}_0}^{n}$ reads $$\begin{aligned} H_{\text{F}_0}^{n} = \begin{pmatrix} \epsilon_{n}^{+} & F_0ke^{i\Theta}\\ F_0ke^{-i\Theta} & \epsilon_{n}^{-} \end{pmatrix}.\end{aligned}$$ Its eigenvalues and eigenvectors are $$\begin{aligned} \epsilon_{l,\text{F}_0}^{\pm} &= l\Omega \pm \frac{1}{2}\sqrt{4F_0^2k^2+\Delta^2} \label{quasiF0}\end{aligned}$$ $$\begin{aligned} |\chi_{l,\text{F}_0}^{+}\rangle &= \frac{1}{\sqrt{|\chi_a|^2+|\chi_b|^2}}\left(\chi_a|\phi_{l}^{+}\rangle + \chi_b|\phi_{l}^{-}\rangle\right)\nonumber\\ |\chi_{l,\text{F}_0}^{-}\rangle &= \frac{1}{\sqrt{|\chi_a|^2+|\chi_b|^2}}\left(\chi_{b}^{*}|\phi_{l}^{+}\rangle - \chi_{a}^{*}|\phi_{l}^{-}\rangle\right), \label{quasiF0vec}\end{aligned}$$ where $$\begin{aligned} \chi_a &= 2F_{0}ke^{i\frac{\Theta}{2}}\\ \chi_b &= \left(\sqrt{4F_{0}^{2}k^2+\Delta^2}-\Delta\right)e^{-i\frac{\Theta}{2}}. 
\label{chiachib}\end{aligned}$$ The main virtue of this approximation is the fact that it captures the gap $\Delta$ produced at $\mathbf{k}=0$ by the ac electric field, giving an analytical expression for its magnitude, $\Delta = \sqrt{4A^2+\Omega^2} - \Omega$, so this gap can be tuned by varying the field strength of the applied ac field, see also Refs. \[\]. We point out that an analogous phenomenon occurs in the optics of semiconductors in a strong THz-field: there the dynamical Franz-Keldysh effect[@anttiPRL96; @nordstromPRL98] blue-shifts the conduction band edge (or, equivalently, the optical absorption edge) by the ponderomotive energy, which also depends quadratically on the ac-field amplitude. This so called [*$F_0$–approximation*]{} neglects the coupling between Hamiltonians with a different number of photons, and is therefore not useful once we are interested in the anticrossings of the Floquet quasienergies for non-zero momentum. ### $F_1$–approximation {#sectionF1} In order to analyze higher order processes, we go one step further and take into account the coupling elements $F_1$, which capture the one-photon resonances, yielding a much more robust approximation for the Hamiltonian $H_{\text{F}}$ . At the resonances the relevant couplings are the ones between $\epsilon_{n-1}^+$ and $\epsilon_{n}^-$, $\epsilon_{n}^+$ and $\epsilon_{n+1}^-$ etc., see Eq. . 
By applying the unitary matrix that diagonalizes $H_{\text{F}_0}^{n}$, $$\begin{aligned} U_{n}=\frac{1}{\sqrt{|\chi_a|^2+|\chi_b|^2}}\begin{pmatrix} \chi_a & \chi_{b}^{*}\\ \chi_b & -\chi_{a}^{*} \end{pmatrix} \label{UF0F1}\end{aligned}$$ we can construct a new effective infinite Hamiltonian which includes the features of the one–photon resonance, and which is again block diagonal, now mixing the sectors that differ by one photon in the [$F_0$–approximation]{}: $$\begin{aligned} H_{\text{F}_1}^{\text{eff},n} = \begin{pmatrix} \epsilon_{n-1, \text{F}_0}^{+} & \frac{2}{S_k}F_0F_1k^2e^{i\Theta} \\ \frac{2}{S_k}F_0F_1^{*}k^2e^{-i\Theta} & \epsilon_{n, \text{F}_0}^{-} \end{pmatrix},\end{aligned}$$ where $S_{k} = \sqrt{4F_0^2k^2+\Delta^2}$. The intensity of the coupling is proportional to $F_1$, as expected. Diagonalizing this Hamiltonian yields the following Floquet quasienergies: $$\epsilon_{l,\text{F}_1}^{+} = \left\{ \begin{array}{l l} l\Omega + \frac{1}{2}\left(\Omega-\sqrt{\left(\Omega-S_{k}\right)^2 + \frac{16}{S_k^2}F_0^2|F_1|^2k^4}\right) & \quad \text{if $k<k_c$}\\ (l+1)\Omega - \frac{1}{2}\left(\Omega-\sqrt{\left(\Omega-S_{k}\right)^2 + \frac{16}{S_k^2}F_0^2|F_1|^2k^4}\right) & \quad \text{if $k>k_c$}\\ \end{array} \right. \label{quasiF1a}$$ $$\epsilon_{l,\text{F}_1}^{-} = \left\{ \begin{array}{l l} l\Omega - \frac{1}{2}\left(\Omega-\sqrt{\left(\Omega-S_{k}\right)^2 + \frac{16}{S_k^2}F_0^2|F_1|^2k^4}\right) & \quad \text{if $k<k_c$}\\ (l-1)\Omega + \frac{1}{2}\left(\Omega-\sqrt{\left(\Omega-S_{k}\right)^2 + \frac{16}{S_k^2}F_0^2|F_1|^2k^4}\right) & \quad \text{if $k>k_c$}, \end{array} \right. \label{quasiF1b}$$ where $k_c$ is the momentum at which the one–photon resonance takes place. 
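The piecewise form of Eqs. (\[quasiF1a\]) and (\[quasiF1b\]) can be verified against a direct diagonalization of $H_{\text{F}_1}^{\text{eff},n}$: below $k_c$ the branch $\epsilon^{+}_{0,\text{F}_1}$ is the lower eigenvalue of the $n=1$ block, above $k_c$ it is the upper one, and the splitting at $k_c$ is the one-photon gap. A minimal sketch (parameter values are illustrative; the phase of $F_1$ drops out of the eigenvalues, so only $|F_1|$ enters):

```python
import numpy as np

A, Omega = 0.1, 1.0
Delta = np.sqrt(4 * A**2 + Omega**2) - Omega
den = 4 * A**2 + (Delta + 2 * Omega)**2
F0 = (Delta + 2 * Omega)**2 / den
F1 = 2 * A * (Delta + 2 * Omega) / den               # |F_1|
S = lambda k: np.sqrt(4 * F0**2 * k**2 + Delta**2)
R = lambda k: np.sqrt((Omega - S(k))**2
                      + 16 * F0**2 * F1**2 * k**4 / S(k)**2)
kc = np.sqrt(Omega**2 - Delta**2) / (2 * F0)         # resonance: S(kc) = Omega

def eff_block(n, k):
    # H_F1^{eff,n} with the off-diagonal phase e^{i Theta} dropped
    c = 2 * F0 * F1 * k**2 / S(k)
    return np.array([[(n - 1) * Omega + S(k) / 2, c],
                     [c, n * Omega - S(k) / 2]])

for k in (0.2, 0.4, 0.6, 0.8):
    lo, hi = np.linalg.eigvalsh(eff_block(1, k))
    eps_plus = 0.5 * (Omega - R(k)) if k < kc else Omega - 0.5 * (Omega - R(k))
    print(k, eps_plus, lo if k < kc else hi)  # the two columns coincide
print(kc, R(kc))  # resonance momentum and one-photon gap, ~0.505 and ~0.099
```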
For the Floquet eigenvectors, it is more convenient to write them in the basis that diagonalizes the Hamiltonian for $\mathbf{k} = 0$, reading $$|\xi_{l,\text{F}_1}^{+}\rangle = \left\{ \begin{array}{l l} \frac{1}{\sqrt{(|\chi_{a}|^2+|\chi_{b}|^2)(|\xi_{a}|^2+|\xi_{b}|^2)}} \left[\xi_{a}\chi_{a}|\phi_{l}^{+}\rangle+\xi_{a}\chi_{b}|\phi_{l}^{-}\rangle - \xi_{b}\chi_{b}^{*}|\phi_{l+1}^{+}\rangle + \xi_{b}\chi_{a}^{*}|\phi_{l+1}^{-}\rangle\right]& \quad \text{if $k<k_c$}\\ \frac{1}{\sqrt{(|\chi_{a}|^2+|\chi_{b}|^2)(|\xi_{a}|^2+|\xi_{b}|^2)}} \left[\xi_{b}^{*}\chi_{a}|\phi_{l}^{+}\rangle+\xi_{b}^{*}\chi_{b}|\phi_{l}^{-}\rangle + \xi_{a}^{*}\chi_{b}^{*}|\phi_{l+1}^{+}\rangle - \xi_{a}^{*}\chi_{a}^{*}|\phi_{l+1}^{-}\rangle\right]& \quad \text{if $k>k_c$} \end{array} \right. \label{quasiF1avec}$$ $$|\xi_{l,\text{F}_1}^{-}\rangle = \left\{ \begin{array}{l l} \frac{1}{\sqrt{(|\chi_{a}|^2+|\chi_{b}|^2)(|\xi_{a}|^2+|\xi_{b}|^2)}} \left[\xi_{b}^{*}\chi_{a}|\phi_{l-1}^{+}\rangle+\xi_{b}^{*}\chi_{b}|\phi_{l-1}^{-}\rangle + \xi_{a}^{*}\chi_{b}^{*}|\phi_{l}^{+}\rangle - \xi_{a}^{*}\chi_{a}^{*}|\phi_{l}^{-}\rangle\right]& \quad \text{if $k<k_c$}\\ \frac{1}{\sqrt{(|\chi_{a}|^2+|\chi_{b}|^2)(|\xi_{a}|^2+|\xi_{b}|^2)}} \left[\xi_{a}\chi_{a}|\phi_{l-1}^{+}\rangle+\xi_{a}\chi_{b}|\phi_{l-1}^{-}\rangle - \xi_{b}\chi_{b}^{*}|\phi_{l}^{+}\rangle + \xi_{b}\chi_{a}^{*}|\phi_{l}^{-}\rangle\right]& \quad \text{if $k>k_c$}, \end{array} \right. \label{quasiF1bvec}$$ where we have introduced $$\begin{aligned} \xi_a &= \left(\left(\Omega-S_k\right)+\sqrt{\left(\Omega-S_k\right)^2+\frac{16}{S_k^2}F_0^2|F_1|^2k^4}\right)e^{\frac{i}{2}\Theta}\\ \xi_b &= \frac{4}{S_k}F_0F_1^*k^2e^{-\frac{i}{2}\Theta}.\end{aligned}$$ The [*$F_1$–approximation*]{} captures the gap at $\mathbf{k} = 0$ as well as the first resonance. 
The latter gives rise to the opening of new gaps, whose expression can be obtained analytically in this approximation, yielding $$\begin{aligned} \Delta_{\text{F}_1} = \sqrt{\left(S_{k_c}-\Omega\right)^2 + \frac{16}{S_{k_c}^2}F_0^2F_1^2k_{c}^{4}}. \label{gapfirstresonance}\end{aligned}$$ For a frequency $\Omega\approx150$meV in the mid-infrared regime, and field intensity $E_{0}\approx4.8$MV/m (so that $A/\Omega=0.1$), the size of the two gaps would be $\Delta\approx3$meV, $\Delta_{\mathrm{F}_1}\approx15$meV. Single particle properties of the Hamiltonian derived from the analytical approximations {#sec2subsec3} ---------------------------------------------------------------------------------------- We next consider the quasienergy spectrum for circularly polarized field, both the full numerical result and the analytical approximation derived in the previous sections. With increasing field strength, zero–photon, one–photon, two–photon and higher order resonances appear. In Fig. \[fig:quasikxcirc\] we compare the numerical (upper panel) and the analytical results (middle and lower panels) for the quasienergy spectrum as a function of the wavevector $k_x$ and for weak fields. In the middle panel we plot the $F_0$–approximation, which reproduces very well the gap at $k_x=0$, but no other features induced by the ac field show up. The lower panel shows the results for the $F_1$–approximation, which in addition to the $F_0$ result captures nicely the one–photon resonance at $k_c\approx\pm0.5$. ![\[fig:quasikxcirc\]Quasienergy spectrum as a function of $k_x$ for $k_y=0$. The solid lines represent the $l=0$ band, the dashed lines the $l=1$, and the dashdotted lines the $l=-1$ sideband. [*Upper panel*]{}: The full numerical result of the quasienergies. [*Middle panel*]{}: The quasienergies for the zero–photon approximation $F_0$. [*Lower panel*]{}: The quasienergies for the one–photon approximation $F_1$. 
Parameters: $k_y=0$, $A = 0.2$, $\Omega=1$.](graphene_fig2.eps){width="3.4in"} The analytical approximations can be tested by computing the density of states (DOS), which was already analyzed numerically using the full Floquet Hamiltonian by Oka [*et al.*]{},[@aokiPRB09] Calvo [*et al.*]{},[@calvoAPL11] and Zhou [*et al.*]{}.[@zhouPRB11] The generalized (time averaged) density of states can be calculated as: $$D(\omega) = 4\sum_{\mathbf{k},\sigma}\delta(\omega - \epsilon_{\mathbf{k}, \sigma, 0}),$$ where the quasienergies are those defined in the first Brillouin zone of the Floquet spectrum. In the $F_0$–approximation, $\epsilon_{\mathbf{k}, \sigma, l=0} = \epsilon_{0,\text{F}_0}^{\pm}$, see Eq. , and the density of states can be calculated analytically yielding $$\begin{aligned} D_{\text{F}_0}(\omega) &= \frac{2}{\pi F_0^2}|\omega|\Theta\left(|\omega|-\frac{\Delta}{2}\right).\end{aligned}$$ Notice the presence of the gap at zero energy in the density of states. The simplicity of this zero-photon approximation allows for analytical computations of many physical quantities, something that no longer happens in general in the one–photon approximation, for which we have to resort to numerical calculations in most of the cases. For the generalized density of states, by using the analytical quasienergies, Eqs.  and , the results of both the $F_0$– and $F_1$–approximation are plotted in Fig. \[fig:doscircF0F1\] (upper panel). As a comparison, the lower panel of Fig. \[fig:doscircF0F1\] shows the density of states calculated numerically by diagonalizing the full Floquet Hamiltonian . Once again, notice that the $F_0$–approximation works very well for energies of the order of the first gap, while the $F_1$–approximation is required to study the first resonance, which it captures very well. 
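The closed form $D_{\text{F}_0}(\omega)$ can also be cross-checked by evaluating the $k$ sum directly, replacing the delta function by a narrow Gaussian on a dense grid. A minimal sketch (grid, broadening, and parameter values are arbitrary choices, not from the paper):

```python
import numpy as np

A, Omega = 0.1, 1.0
Delta = np.sqrt(4 * A**2 + Omega**2) - Omega
F0 = (Delta + 2 * Omega)**2 / (4 * A**2 + (Delta + 2 * Omega)**2)

k = np.linspace(1e-4, 1.2, 200001)
dk = k[1] - k[0]
eps = 0.5 * np.sqrt(4 * F0**2 * k**2 + Delta**2)    # epsilon^{+}_{0,F0}

def dos(w, sigma=0.005):
    # D(w) = 4 * int d^2k/(2 pi)^2 delta(w - eps(k)); the factor 4 is the
    # spin/valley degeneracy, and only the + band contributes for w > 0
    delta = np.exp(-(w - eps)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return 4 * np.sum(k * delta) * dk / (2 * np.pi)

w = 0.3
print(dos(w), 2 * w / (np.pi * F0**2))  # numerical vs analytic, both ~0.195
```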
Higher resonances – visible in the numerical result for the density of states at around $\omega\approx1$ – are almost negligible for the field strength we are considering here.\ Although the density of states has already been obtained numerically by various authors,[@aokiPRB09; @calvoAPL11; @zhouPRB11] the developed approximations are useful both to obtain analytical results when this is possible and to simplify the numerical complexity of the problem, as happens when we use the one–photon approximation. In principle, these approximations are valid for arbitrary $\mathbf{k}$, as long as $A/\Omega\ll1$. The results for the density of states show that in both analytical approximations and in the full numerical case the gaps remain stable independently of the range of integration in the momentum included in the density of states, i.e., the inclusion of higher momentum states does not close the gaps in our case, contrary to what was found in Ref. \[\]. ![\[fig:doscircF0F1\]Density of states versus energy for the analytical approximations $F_0$ and $F_1$ (upper panel) and considering the full Floquet Hamiltonian Eq.  (lower panel). The $F_0$–approximation reproduces the gap given by $\Delta$, see text. For the one–photon resonance in the $F_1$–approximation, the gap at $\omega=0.5$ is reproduced, where the field couples modes with $n$ and $n+1$ photons. For the same field strength as considered in the $F_0$ and $F_1$ approximations, the full numerical density of states is identical up to an additional resonance at $\omega\approx1$, which is due to two–photon processes. In the inset, the region around the gap $\omega=0$ is blown up for better visibility. 
Parameters: $A=0.1$, $\Omega = 1$.](graphene_fig3.eps){width="3.2in"} Single and many-particle excitations in graphene in a circularly polarized ac electric field {#sec3} ============================================================================================ Electron interactions and the formula for the dynamical polarizability {#sec3subsec1} ---------------------------------------------------------------------- So far we have analyzed the single particle properties of the Hamiltonian. However, a full description of electron excitations in graphene requires understanding the role of electron-electron interactions in the system. The Hamiltonian of the interacting system in the presence of an ac field now reads, in second quantization, $$H(t) = v_{\mathrm{F}}\sum_{\mathbf{k}} \Psi_{\mathbf{k}}^{\dagger}\boldsymbol\sigma \cdot (\mathbf{k} - e \mathbf{A}(t))\Psi_{\mathbf{k}} + \sum_{\mathbf{q}} v_{\mathbf{q}} n_{\mathbf{q}}^{\dagger} n_{\mathbf{q}}, \label{eq:graphene_hamilV}$$ where $v_{\mathbf{q}}= 2\pi e^2/\epsilon_0 q$ is the 2D unscreened Coulomb interaction. In the absence of the external ac field, this Hamiltonian has been extensively studied (for a review, see ref. ). For doped samples of graphene, the Coulomb interaction becomes screened, yielding a system whose low-energy excitations around the Fermi surface are barely interacting, i.e., electrons in graphene behave as a Fermi liquid. Moreover, a collective excitation, a plasmon, exists.[@dassarmaPRB07; @wunschNJoP06] This is no longer true when the level of doping is zero or very small, where the role of interactions is controversial due to the singular nature of the Dirac point, at which screening is ineffective.[@gonzalezNPB94; @mishchenkopolPRL08] In order to understand the effect of interactions between electrons when an external ac field is applied, we will compute the dynamical polarizability, which tells us about the response of the system to probes that couple to the electric charge. 
This function will yield information about the full spectrum of electron interactions of the system, which contains both single particle and collective excitations. The dynamical polarizability of graphene in the presence of an ac electric field shows particular features that differ from its counterpart, the two dimensional electron gas, as well as from the one derived for graphene in the absence of the field.[@shungPRB86] The detailed derivation of the polarizability function is given in the Appendix, whereas here we present the final result: $$\begin{aligned} \Pi(\mathbf{q},\omega) = \sum_{\sigma,\sigma'}\sum_{\mathbf{k}}\sum_{l}\frac{f_{\mathbf{k},\sigma}-f_{\mathbf{k+q},\sigma'}}{\omega - \epsilon_{\mathbf{k+q},\sigma',l}+\epsilon_{\mathbf{k},\sigma,0}+i\eta}\nonumber\\ \times \sum_{n}|\phi_{\mathbf{k+q},\sigma',l}^{n,a,*}\phi_{\mathbf{k},\sigma,0}^{n,a}+\phi_{\mathbf{k+q},\sigma',l}^{n,b,*}\phi_{\mathbf{k},\sigma,0}^{n,b}|^2 \label{polarizability}\end{aligned}$$ The index $n$ stands for the Fourier component of a solution in the sideband $l$ of the infinite Floquet Hamiltonian. The summation over $n$ constitutes the scalar product of the solution $|\phi_{\mathbf{k+q},\sigma',l}\rangle$ with $|\phi_{\mathbf{k},\sigma,0}\rangle$, where we have used the fact that solutions belonging to the $l$th Brillouin zone in the Floquet spectrum are those of the first Brillouin zone shifted by $l$ units, see Eqs.  and . The solution $l=0$ is the one which fulfills the condition that for $A\rightarrow0$, $\Pi(\mathbf{q},\omega)$ becomes the polarizability for an isolated graphene sheet. Mathematically, it is important to notice that the analytical properties of the dynamical polarizability do not change in the presence of the external ac field. It is a complex function, analytic in the upper half plane, whose real and imaginary parts are not independent, but related via the Kramers-Kronig relations, see e.g. ref. . 
These relations can also be seen as a consequence of causality in the response of the system to the external probe. The polarizability is written in terms of the single particle excitations of the system, as in conventional linear response theory. In the presence of the ac electric field, there is an infinite set of single particle excitations that differ from one another in the number of photons exchanged. Once the external field is switched on, the system is no longer isolated, and the field can pump energy into or extract energy from the system in the form of photons of frequency $\Omega$. Therefore, the polarizability can be seen as a linear combination of polarizabilities, each describing excitations in which the number of photons in the system changes by a certain integer number $l$. In other words, the response of the system to an external probe of energy $\omega$ and momentum $q$ can arise from excitations in which no extra photons are introduced in the system, as happens in the absence of the field, or from excitations in which a given number $l$ of photons is added to or removed from the system. Results similar to those shown here have been derived in the context of low-dimensional semiconductors,[@anttiPRL99] and the 2DEG.[@chinodynscreeningPRB02] The latter is the usual benchmark against which results for graphene are compared, and therefore we include the formula here for ease of comparison: $$\Pi^{2{\rm DEG}}_{ac}(\mathbf{q},\omega) = \sum_{\mathbf{k}}\sum_{l}\frac{f_{\mathbf{k}}-f_{\mathbf{k+q}}}{\omega - \epsilon_{\mathbf{k+q},l}+\epsilon_{\mathbf{k},0} + i\eta}$$ Note in this expression that the effects due to the ac electric field appear in two different places: (i) in the index $l$ of the sideband, reflecting a change in the number of photons, and (ii) as a modification of the single particle excitations in $\epsilon_{\mathbf{k},\sigma,l}$. The situation is more complicated in the case of graphene, Eq.
(\[polarizability\]), where a momentum-dependent overlap between excitations of different momenta must also be included. This overlap is reflected in the polarizability via the scalar product between quasieigenstates, and it is in turn a consequence of the existence of the pseudospin in graphene. The effect of the electric field on the electronic system can be understood in terms of transfer of spectral weight, as we shall show in the next section. As the electric field is switched on, the spectral weight is reorganized, although in a way that still preserves the conservation rule imposed by the $f$-sum rule, which was derived and analyzed in the context of low-energy graphene by Sabio [*et al.*]{}[@sabiofsumPRB08]. As a last remark, since we are dealing with a system in which a polarized ac electric field is already present, one has to wonder about the possible influence of the polarization of the external probe to which the system responds. In fact, since we are analyzing the dynamical polarizability, which arises from the coupling between the electronic density and the potential induced by the external probe, the response of the system in the linear regime is insensitive to the polarization of the probe field. In order to see a response that depends on the polarization of the probe, we would have to analyze the response of the electronic current, which couples to the electric field, and whose linear response function is the conductivity. Notice, however, that the response function itself will not be altered due to this polarization, since in linear response it only depends on the properties of the system in the absence of the external probe by virtue of the fluctuation-dissipation theorem. Analytic approximations for the dynamical polarizability {#sec3subsec2} -------------------------------------------------------- We now evaluate the dynamical polarizability (\[polarizability\]) using the analytic approximations developed in Sect.
\[sec2subsec2\]. We first consider the imaginary part of (\[polarizability\]). This yields the response of the non-interacting system to an external probe of energy $\omega$ and momentum $q$, and is a building block for the Random Phase Approximation (RPA). RPA is known to work well for doped graphene,[@castronetoRMP11] where Landau’s theory of the Fermi liquid provides a good description of the low energy excitations. In the case of undoped graphene, the issue is more complicated due to the lack of screening near the Dirac point.[@mishchenkopolPRL08] In our case, due to the opening of a gap at $\mathbf{k}=0$, we expect RPA to be sufficient to describe the main features of the response of the system once electron-electron interactions are taken into account. ### $F_0$–approximation {#f_0approximation} In the $F_0$–approximation a gap opens up at zero momentum, so it can be used for a qualitative description of the response to an external ac probe. We set $T=0$, and for the moment we will restrict ourselves to undoped graphene. In this case, the dynamical polarization involves only the term that accounts for transitions between the Floquet bands $(0, -)$ and $(0, +)$, i.e., transitions in which the number of photons is conserved: $$\begin{aligned} \Pi_{\text{F}_0}(\mathbf{q},\omega) = \sum_{\mathbf{k}} \frac{|\langle\chi_{\mathbf{k+q},0}^{+}|\chi_{\mathbf{k},0}^{-}\rangle|^2}{\omega-\epsilon_{\mathbf{k+q},0}^{+}+\epsilon_{\mathbf{k},0}^{-}+i\eta}, \label{polF0}\end{aligned}$$ where $|\chi_{\mathbf{k},0}^{\pm}\rangle$ are the eigenvectors from the $F_0$–approximation, see Eq. . We find $$\begin{gathered} \operatorname{Im}\Pi_{\text{F}_0}(\mathbf{q},\omega) = -\frac{1}{4}\frac{F_{0}^2q^2}{\sqrt{\omega^2-F_0^2q^2}}\\ \times\left(1+\frac{\Delta^2}{\omega^2-F_0^2q^2}\right)\Theta\left(\omega^2-F_0^2q^2-\Delta^2\right), \label{impolF0}\end{gathered}$$ with $\Delta = \sqrt{4A^2+\Omega^2}-\Omega$ being the gap opened at the first resonance. 
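Equation (\[impolF0\]) is straightforward to evaluate numerically. The following sketch (illustrative only: it treats the renormalization factor $F_0$ as an input parameter, with $F_0\lesssim 1$ for the weak fields considered here) returns zero below the shifted threshold $\omega=\sqrt{F_0^2q^2+\Delta^2}$ and the gapped-graphene form above it.

```python
import numpy as np

def im_pol_F0(q, omega, F0, A, Omega):
    """Im Pi_F0(q, w), Eq. (impolF0):
    -(1/4) F0^2 q^2 / sqrt(w^2 - F0^2 q^2) * (1 + Delta^2/(w^2 - F0^2 q^2))
    for w^2 - F0^2 q^2 > Delta^2, and zero otherwise,
    with Delta = sqrt(4 A^2 + Omega^2) - Omega the first-resonance gap."""
    omega = np.atleast_1d(np.asarray(omega, dtype=float))
    Delta = np.sqrt(4*A**2 + Omega**2) - Omega
    s = omega**2 - (F0*q)**2
    out = np.zeros_like(omega)
    ok = s > Delta**2                     # Theta(w^2 - F0^2 q^2 - Delta^2)
    out[ok] = -0.25*(F0*q)**2/np.sqrt(s[ok]) * (1 + Delta**2/s[ok])
    return out
```

With the text's parameters $A=0.1$, $\Omega=1$, the threshold shift away from $\omega=F_0 q$ is visible only for small $q$.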
Equation (\[impolF0\]) is actually the dynamical polarizability for “gapped graphene”; now the gap is due to the presence of the circularly polarized ac field. Gapped graphene has been studied extensively by Pyatkovskiy,[@russeJoPCM09] who also derived analytical expressions for the real part of the polarization, for both doped and undoped graphene. We show the imaginary part of the polarizability in the $F_0$–approximation in Fig. \[figimpolF0\]. For a given momentum of the external probe, the energy threshold required to produce single particle excitations is increased due to the existence of the gap, being located now at $\omega = \sqrt{F_0^2 q^2 + \Delta^2}$. This yields a rearrangement of the spectral weight of the excitations, which might allow for the existence of more complex excitations in the spectrum of the interacting system. We investigate this question within RPA. The polarizability in the RPA is $$\Pi_{{\rm RPA}}(\mathbf{q}, \omega) = \frac{\Pi_{0}(\mathbf{q}, \omega)}{1 - v_q \Pi_{0}(\mathbf{q}, \omega)}, \label{RPAPol}$$ where the denominator is the dielectric function in RPA with $v_q$ being the 2D unscreened Coulomb potential. In order to have long-lived collective excitations, i.e. plasmons, the dielectric function must vanish at certain points $\omega_{p}(q)$, which leads to the conditions $v_q \operatorname{Re} \Pi_{0}(\mathbf{q}, \omega) = 1$ and $\operatorname{Im}\Pi_{0}(\mathbf{q}, \omega) = 0$. In Fig. \[figRPAF0\] the imaginary and real part of the polarizability in the RPA are plotted, where we use in Eq.  $\Pi_{0}(\mathbf{q},\omega)=\Pi_{\text{F}_0}(\mathbf{q},\omega)$ (Eq. ). It can be seen that the divergence in the threshold of excitations found for the non-interacting polarizability has disappeared, a feature that is also observed in graphene in the absence of ac fields. However, the real part of the polarizability does not develop a resonance, leading to the absence of plasmons, at least within the RPA of the $F_0$–approximation.
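Given a noninteracting polarizability, Eq. (\[RPAPol\]) and the two plasmon conditions can be applied mechanically. In the sketch below (a hedged illustration: $g$ stands for graphene's fine-structure constant $e^2/v_F\epsilon_0\hbar$, and both the value $g\approx 2.2$ and the dimensionless form $v_q = 2\pi g/q$ are assumptions of this example, not taken from the text):

```python
import numpy as np

def rpa_polarizability(pi0, q, g=2.2):
    """Pi_RPA = Pi0 / (1 - v_q Pi0), Eq. (RPAPol), with v_q = 2 pi g / q."""
    return pi0 / (1.0 - (2*np.pi*g/q)*pi0)

def is_undamped_plasmon(pi0, q, g=2.2, tol=1e-3):
    """Long-lived plasmon conditions: v_q Re Pi0 = 1 while Im Pi0 = 0."""
    v_q = 2*np.pi*g/q
    return np.isclose(v_q*np.real(pi0), 1.0, atol=tol) & (np.abs(np.imag(pi0)) < tol)
```

Scanning `is_undamped_plasmon` over a table of $\Pi_0(q,\omega)$ values locates the candidate plasmon branch $\omega_p(q)$.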
In order to test if this is still true when higher order photon resonances are included, we analyze the $F_1$–approximation in the next section. ![\[figimpolF0\]Imaginary part of the polarizability in the $F_0$–approximation as a function of $\omega$ and for different values of $q$, Eq. . For $\omega<\sqrt{F_{0}^2q^2+\Delta^2}$, no excitations are possible, as compared to free graphene, where no excitations are possible for $\omega<q$. Note that only for small $q = 0.05$, the effect of the gap is visible as a shift of the divergence away from $\omega=q$. Parameters: $A=0.1$, $\Omega=1$.](graphene_fig4.eps){width="3.4in"} ![\[figRPAF0\]Imaginary and real part of the polarizability in the $F_0$–approximation as a function of $\omega$ for different $q$ in the RPA. The divergence in the imaginary part of the polarizability has disappeared, but the real part has not developed a resonance, which would be the signature of the existence of collective electronic excitations. Parameters: $A=0.1$, $\Omega=1$.](graphene_fig5.eps){width="3.4in"} Before that, let us analyze the case of doped graphene. As already shown in Ref. \[\] for the case of gapped graphene, the plasmon already present in the system without ac fields is still robust once a weak field is introduced. The main effect of the external ac field is to modify the plasmon dispersion: $$\begin{aligned} \omega_{p}^{\text{F}_0}(q) = \sqrt{\frac{g N_s N_v F_{0} q \mu}{2}\left(1-\frac{\Delta^2}{4\mu^2}\right)},\end{aligned}$$ where $g = e^2/v_F \epsilon_0 \hbar$ is the fine structure constant of graphene. The correction affects the plasmon frequency $\omega_0 = \sqrt{\frac{g N_s N_v F_{0} \mu}{2}(1-\frac{\Delta^2}{4\mu^2})}$, but not the dependence on momentum, which still follows the law $\omega_{p}^{\text{F}_0}(q) \propto \sqrt{q}$. The plasmon frequency is diminished due to the effect of the external ac field, since $F_0 < 1$ and $1- \Delta^2/(4 \mu^2) < 1$. 
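The renormalized dispersion can be tabulated directly. The sketch below (illustrative: $N_s=N_v=2$ are the usual spin and valley degeneracies of graphene, while the values of $g$ and $F_0$ are assumed inputs) also makes the $\sqrt{q}$ law and the suppression for $\mu\le\Delta/2$ explicit.

```python
import numpy as np

def plasmon_wp_F0(q, mu, F0, A, Omega, g=2.2, Ns=2, Nv=2):
    """F0-approximation plasmon dispersion:
    w_p(q) = sqrt(g Ns Nv F0 q mu / 2 * (1 - Delta^2/(4 mu^2))),
    suppressed (returns 0) when mu <= Delta/2, i.e. empty upper cone."""
    Delta = np.sqrt(4*A**2 + Omega**2) - Omega
    factor = 1.0 - Delta**2/(4.0*mu**2)
    if factor <= 0.0:
        return 0.0*np.asarray(q, dtype=float)   # plasmon completely suppressed
    return np.sqrt(g*Ns*Nv*F0*q*mu/2.0 * factor)
```

Quadrupling $q$ doubles $\omega_p$, and the result lies below the field-free value since $F_0<1$ and $\Delta>0$.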
The correction coming from the factor $F_0$ is essentially due to the renormalization of the Fermi velocity due to the ac field. The second correction depends on the relation of the chemical potential to the gap at zero momentum, and it is maximal for a chemical potential below $\Delta / 2$, where the plasmon is completely suppressed since no electrons are populating the upper Dirac cone. For chemical potentials above this value, the correction tends to be smaller, being almost negligible for $\mu \gg \Delta/2$. We point out that similar results hold in a quite different context, that of graphene anti-dot lattices,[@anttiPRL08; @anttiPRB11] where it was found that in the limit of low doping, gapped graphene models reproduce very well the plasmon dispersion of the anti-dot lattice. In short, the results from the $F_0$–approximation yield a similar picture to that of graphene in the absence of an external field, and the main effect of the ac field is a renormalization of the single and many-particle spectrum, with a shift of the threshold for excitations due to the gap at zero momentum.\ ### $F_1$–approximation {#f_1approximation} The $F_1$–approximation accounts for non–zero–photon processes, neglected in the $F_0$–approximation. It is not fully analytically tractable in the calculation of many observables. In the $F_1$–approximation using Eqs. 
- the polarizability for undoped graphene at $T=0$ becomes $$\begin{aligned} \Pi_{F_{1}}(\mathbf{q},\omega) = \sum_{\mathbf{k}}&\frac{|\langle\xi_{\mathbf{k+q},0}^{+}|\xi_{\mathbf{k},0}^{-}\rangle|^2}{\omega-\epsilon_{\mathbf{k+q},0}^{+}+\epsilon_{\mathbf{k},0}^{-}+i\eta}\nonumber\\ +&|\langle\xi_{\mathbf{k+q},-1}^{+}|\xi_{\mathbf{k},0}^{-}\rangle|^2\left(\frac{1}{\omega-\epsilon_{\mathbf{k+q},-1}^{+}+\epsilon_{\mathbf{k},0}^{-}+i\eta}-\frac{1}{\omega-\epsilon_{\mathbf{k+q},1}^{-}+\epsilon_{\mathbf{k},0}^{+}+i\eta}\right)\nonumber\\ +&|\langle\xi_{\mathbf{k+q},-2}^{+}|\xi_{\mathbf{k},0}^{-}\rangle|^2\left(\frac{1}{\omega-\epsilon_{\mathbf{k+q},-2}^{+}+\epsilon_{\mathbf{k},0}^{-}+i\eta}-\frac{1}{\omega-\epsilon_{\mathbf{k+q},2}^{-}+\epsilon_{\mathbf{k},0}^{+}+i\eta}\right) \label{polF1}\end{aligned}$$ Here, there are three different contributions to the polarizability, coming from the Floquet bands $l=0$, $l=\pm1$ and $l=\pm2$, i.e., from excitations that involve the exchange of up to two photons from the external field. However, for the electric fields in which this approximation holds, the contribution from the $l=\pm2$ components is essentially negligible, and therefore only zero and one–photon processes will be considered. In what follows we evaluate (\[polF1\]) numerically, first integrating the imaginary part and then computing the real part via the Kramers-Kronig relations. Figure \[impol\] shows the imaginary part of the polarizability $\Pi_{F_{1}}$ for fixed $q$ as a function of $\omega$. In the upper panel, the components $l=0$ and $l=\pm1$ and their sum are shown for $q = 0.1$, in order to illustrate where its structure comes from. The two lower panels represent the total polarizability for two different wavevectors $q$, divided into two regions for better visibility of the different features. ![\[impol\]Imaginary part of the polarizability $\Pi_{\text{F}_1}$ as a function of $\omega$.
[*Upper panel*]{}: $q=0.1$; the components $l=0,\pm1$ of the polarizability and their sum are shown, see Eq. (\[polF1\]). [*Lower panels*]{}: Imaginary part of the total polarizability as a function of $\omega$ for two different $q$: $q_1 = 0.05$, $q_2 = 0.1$. Left plot for $\omega<0.4$, right plot for $\omega>0.7$. Parameters: $A=0.1$, $\Omega=1$. ](graphene_fig6.eps){width="3.4in"} Several new features emerge from the $F_1$–approximation. As shown in Fig. \[impol\], at the level of zero–photon processes, there is a gap at zero momentum, already captured in the $F_0$–approximation. In addition, gaps appear at higher momenta, where the first anticrossing of Floquet sidebands occurs (see Fig. \[fig:quasikxcirc\]). For a sufficiently small $q$ this second gap translates into two small gaps in the single particle excitation spectrum around $\omega\approx 1$, which are eventually closed for higher momenta, as shown in Fig. \[impol\] (lower panel, right plot). The first of those gaps, for $0.9<\omega<1$, is due to the fact that for electrons from the lower cone of graphene no states are available in the upper band for those values of $q$ and $\omega$ due to the anticrossing of Floquet sidebands. The second one, at $1<\omega<1.1$, is due to the lack of states in the lower cone in the region where this anticrossing occurs with lower Floquet sidebands. The most important new features of the response of the system, however, come from the contribution of one–photon processes, in which transitions from the $l = 0$ to the $l= \pm 1$ sidebands are taken into account. New single particle excitations appear below $\omega = \sqrt{F_0^2q^2 + \Delta^2}$, leaving only a small region of energies where no excitations are found, a region which again is closed for sufficiently large $q$ (dashed line for $q=0.1$). ![\[imRPAreRPA\]Imaginary and real part of the polarizability $\Pi_{\text{F}_1}$ in the RPA approximation.
Notice the resonance in the real part for small momenta $q_1=0.035$ and $q_2=0.05$ (lower panel), where no excitations in the imaginary part exist (upper panel), pointing to the existence of collective excitations. Parameters: $A=0.1$, $\Omega=1$. ](graphene_fig7.eps){width="3.4in"} One–photon processes introduce new single particle excitations into the response of the system, and we next examine the effect of these processes on the collective excitations of the system. The RPA polarizability $\Pi_{\text{F}_1,\text{RPA}}$ is shown in Fig. \[imRPAreRPA\] for different values of the external momentum $q$. One–photon processes have an important effect on the response of the interacting system, allowing for the existence of collective excitations for small enough momentum, see curves for $q_1=0.035$, $q_2=0.05$ in Fig. \[imRPAreRPA\]. For those, the plasmon conditions are fulfilled, which is reflected in the development of a resonance in the real part of the RPA polarizability. For example, for an external momentum $q = 0.05$, the resonance is located at $\omega \simeq 0.021$, which is already in the region where the imaginary part of the polarizability is zero, allowing for an undamped plasmon. It is important to remark that this plasmon becomes unstable in two different scenarios. (i) For large enough momentum of the external probe, where the resonance is weakened and occurs in a region where single particle excitations exist, so the plasmon can decay into those excitations, see $q_3=0.15$ in Fig. \[imRPAreRPA\]. (ii) When two–photon processes are considered, there is no longer a region where the imaginary part of the polarizability is zero. For weak fields, however, these processes are negligible and their effect on the plasmon should be essentially irrelevant. However, this suggests that as we increase the intensity of the electric field, and higher order photon processes become important, there is no region of momenta $q$ in which the plasmon is stable.
![\[impoldoped\]Imaginary part of the polarizability for doped graphene as a function of $\omega$ for $q=0.05$. Both the $F_{0}$- (solid red) and the $F_{1}$-approximation (dashed dark red) are shown. In the lower panels, the upper plot is split in two parts in order to better visualize the different regions of $\omega$. Parameters: $A=0.1$, $\Omega=1$, $\mu=0.2$. ](graphene_fig8.eps){width="3.4in"} For doped graphene, the $F_1$–approximation introduces similar features as those described for undoped graphene, see Fig. \[impoldoped\]. The effect of the anticrossing of Floquet sidebands is to induce gaps in the response of the system for $\omega \sim 1$, and processes including the exchange of one photon give rise to new excitations for small energies. In order to quantify the effect of these processes, in this figure the polarizability is compared to the one for doped graphene in the $F_0$–approximation, where only zero–photon processes are considered. The RPA response of the interacting system is shown for a couple of representative external momenta in Fig. \[impoldopedRPA\]. As in the undoped case, for small external momentum ($q=0.05$ in Fig. \[impoldopedRPA\]) there is a resonance in the real part, signaling the existence of a plasmon, which again has a renormalized dispersion relation due to the effect of the external ac field. However, non–zero–photon processes are responsible again for the appearance of low-energy excitations that tend to make the plasmon unstable for large enough momenta $q$ ($q=0.15$ in Fig. \[impoldopedRPA\]) and for larger intensities of the field, as discussed for the undoped case. The momenta $q$ for which plasmons become unstable are still smaller than those for which the plasmon is damped in graphene without an ac field. ![\[impoldopedRPA\]Polarizability in the RPA approximation for doped graphene in the $F_1$–approximation. [*Upper panel*]{}: imaginary part. [*Lower panel*]{}: real part.
The results are shown for two different momenta $q = 0.05$ and $q = 0.15$. In both figures the results are compared with the $F_0$–approximation. Notice the existence, in both approximations, of the resonance in the real part of the polarizability, signaling the existence of a collective excitation. For large momentum $q = 0.15$, however, the imaginary part shows no gap and therefore the plasmon can decay into single particle excitations. Parameters: $A=0.1$, $\Omega=1$, $\mu=0.2$. ](graphene_fig9.eps){width="3.4in"} Summarizing, the inclusion of non–zero–photon processes is crucial in order to capture the physics of the response of graphene to an external probe in the presence of a weak ac field. This is due to the appearance of excitations in the low energy spectrum of the system, not included in the $F_0$–approximation, that allow for the existence of collective excitations in undoped graphene, but make those plasmons unstable for smaller momenta than their counterparts in graphene with no ac fields. Conclusions =========== In this work we have analyzed the properties of graphene under an external circularly polarized ac field in the weak field regime. We have developed analytical approximations to the Hamiltonian, the so-called $F_0$ (see section \[sectionF0\]) and $F_1$–approximations (see section \[sectionF1\]), that allow a certain analytical tractability of many relevant objects. The $F_0$–approximation includes only zero–photon excitations in the system, and is useful to calculate certain observables in the low energy sector. The $F_1$–approximation includes higher order photon processes, allowing for the analysis of a wider range of observables and a larger energy sector. However, in many cases it requires numerical calculations to extract the observables. Special emphasis has been put on the calculation of the polarizability of the system, which can be used to analyze the spectra of single and many-particle excitations of the system.
We have derived a general expression for the polarizability of graphene in the presence of an ac electric field, which we have analyzed in the context of the $F_0$ and $F_1$–approximations. While the former allows for analytical expressions, and captures well the effect of the zero-momentum gap in the system, it misses the non–zero–photon processes, which are captured by the $F_1$–approximation, and in turn are responsible for the emergence of collective excitations even for undoped graphene, as long as the Random Phase Approximation remains valid. However, it also points out that these collective excitations are less stable when compared to graphene with no external ac field: for large enough external momenta and ac field intensities, these excitations become damped and acquire a finite lifetime. We have shown that circularly polarized ac fields can be used to modify the properties of graphene in several ways: (i) they open up gaps at zero momentum that can be exploited in practical applications, (ii) they permit the existence of plasmons (in both undoped and doped graphene), (iii) the plasmon frequency is tunable with the external field, and, finally, (iv) for large enough fields the plasmons become unstable. Moreover, we have developed and tested analytical tools to analyze theoretically the behavior of graphene in the presence of ac electric fields, which should be useful in future works in this field. The Center for Nanostructured Graphene is sponsored by the Danish National Research Foundation. We would like to thank J. Sabio for fruitful discussions and T. Stauber for helpful comments. We are grateful to Prof. M. W. Wu for communicating his results[@zhouPRB11] prior to publication. We acknowledge financial support through Grant No. MAT2011-24331 (MEC), from JAE (CSIC) (M.B.), and from ITN under Grant No. 234970 (EU). M. B. and A. P. J. are grateful to the FiDiPro program of the Academy of Finland for support during the early stages of this project.
Derivation of the polarizability for circularly polarized field {#appendix} =============================================================== The derivation of the formula for the dynamical polarizability follows the lines of its counterpart in the 2DEG.[@chinodynscreeningPRB02] The wavefunction for graphene under a periodic driving can be written by use of the Floquet theorem as $$\begin{aligned} \psi_{\mathbf{k},\sigma}(\mathbf{r},t) = \frac{1}{\sqrt{2}}e^{i\mathbf{kr}}e^{-i\epsilon_{\mathbf{k},\sigma}t}\phi_{\mathbf{k},\sigma}(t),\end{aligned}$$ where $\epsilon_{\mathbf{k},\sigma}$ is the quasienergy and $\phi_{\mathbf{k},\sigma}(t)$ are the Floquet states which fulfill the time-periodicity of the driving field, and we have chosen the solution corresponding to the first Brillouin zone. After applying a weak probe potential, these wavefunctions are no longer eigenfunctions of the full Hamiltonian, but we can use them as a basis to write the new wavefunction: $$\begin{aligned} \Psi_{\mathbf{k},\sigma}(\mathbf{r},t) = \sum_{\mathbf{k'}\sigma'}a_{\mathbf{k'},\sigma'}(t)\psi_{\mathbf{k'},\sigma'}(\mathbf{r},t)\end{aligned}$$ Inserting this into the Schrödinger equation for the Hamiltonian $H_0(t) + H_1(t)$, where $H_0(t)$ is the Hamiltonian for the periodically driven graphene, and $H_1(t) = V(\mathbf{r},t)$ represents the weak probe potential, we are left with a differential equation for the coefficients $a_{\mathbf{k},\sigma}(t)$: $$\begin{aligned} i\sum_{\mathbf{k'}\sigma'}\dot{a}_{\mathbf{k'},\sigma'}(t)\psi_{\mathbf{k'},\sigma'}(\mathbf{r},t) = \sum_{\mathbf{k'}\sigma'}a_{\mathbf{k'},\sigma'}(t)V(\mathbf{r},t)\psi_{\mathbf{k'},\sigma'}(\mathbf{r},t)\end{aligned}$$ We can now project this equation onto a state $\psi_{\mathbf{k''},\sigma''}$, yielding $$\dot{a}_{\mathbf{k''},\sigma''}(t) =
-i\sum_{\mathbf{k'}\sigma'}a_{\mathbf{k'},\sigma'}(t)e^{i(\epsilon_{\mathbf{k''},\sigma''}-\epsilon_{\mathbf{k'},\sigma'})t}\phi_{\mathbf{k''},\sigma''}^{*}(t)\phi_{\mathbf{k'},\sigma'}(t)V(\mathbf{k''-k'},t),$$ where $V(\mathbf{k''-k'},t)$ is the projection of the probe potential onto the states $\mathbf{k'}$ and $\mathbf{k''}$. Now we can expand this equation in powers of the external potential; keeping only the first order, we are left with $$\begin{aligned} \dot{a}_{\mathbf{k''},\sigma''}^{(1)}(t) &= -ie^{i(\epsilon_{\mathbf{k''},\sigma''}-\epsilon_{\mathbf{k},\sigma})t}\phi_{\mathbf{k''},\sigma''}^{*}(t)\phi_{\mathbf{k},\sigma}(t)V(\mathbf{k''-k},t).\end{aligned}$$ This equation can be solved by Fourier transforming it, yielding $$a_{\mathbf{k''},\sigma''}(t) = \int \frac{d\omega}{2\pi}V(\mathbf{k''-k},\omega) e^{-i\omega t}e^{i(\epsilon_{\mathbf{k''},\sigma''}-\epsilon_{\mathbf{k},\sigma})t}e^{\eta t}\\ \sum_{nn'}\frac{e^{i(n'-n)\Omega t}\left[\phi_{\mathbf{k''},\sigma''}^{n',a*}\phi_{\mathbf{k},\sigma}^{n,a}+\phi_{\mathbf{k''},\sigma''}^{n',b*}\phi_{\mathbf{k},\sigma}^{n,b}\right]}{\omega-(n'-n)\Omega-(\epsilon_{\mathbf{k''},\sigma''}-\epsilon_{\mathbf{k},\sigma})+i\eta}.$$ In order to get the response of the system to the external probe in linear response, we write down the expression of the induced charge density: $$\begin{aligned} \rho_{\mathbf{k},\sigma}^{\text{ind}}(\mathbf{r},t) &= \Psi_{\mathbf{k},\sigma}^{*}(\mathbf{r},t)\Psi_{\mathbf{k},\sigma}(\mathbf{r},t) - \psi_{\mathbf{k},\sigma}^{*}(\mathbf{r},t)\psi_{\mathbf{k},\sigma}(\mathbf{r},t)\nonumber\\ &= \sum_{\mathbf{k'}\sigma'}a_{\mathbf{k'},\sigma'}^{*}(t)\psi_{\mathbf{k'},\sigma'}^{*}(\mathbf{r},t)\psi_{\mathbf{k},\sigma}(\mathbf{r},t)+a_{\mathbf{k'},\sigma'}(t)\psi_{\mathbf{k},\sigma}^{*}(\mathbf{r},t)\psi_{\mathbf{k'},\sigma'}(\mathbf{r},t)\end{aligned}$$ and insert the result obtained for $a_{\mathbf{k},\sigma}(t)$.
After some algebra we arrive at $$\begin{aligned} \rho^{\text{ind}}(\mathbf{r},t) = \sum_{\mathbf{q}}\int\frac{d\omega}{2\pi}V^{\text{ext}}(\mathbf{q},\omega)e^{-i\omega t}e^{i\mathbf{qr}}\sum_{\sigma\sigma'}\sum_{\mathbf{k}}f_{\mathbf{k},\sigma}\mathcal{F}_{\mathbf{k},\sigma,\sigma'},\end{aligned}$$ where we have introduced the short notation $$\begin{aligned} \mathcal{F}_{\mathbf{k},\sigma,\sigma'} = \sum_{nn'}\sum_{mm'}&\left[\frac{1}{2}\frac{e^{i(n'-n)\Omega t}e^{i(m'-m)\Omega t} \left(\phi_{\mathbf{k+q},\sigma'}^{n',a*}\phi_{\mathbf{k},\sigma}^{n,a}+\phi_{\mathbf{k+q},\sigma'}^{n',b*}\phi_{\mathbf{k},\sigma}^{n,b}\right) \left(\phi_{\mathbf{k},\sigma}^{m',a*}\phi_{\mathbf{k+q},\sigma'}^{m,a}+\phi_{\mathbf{k},\sigma}^{m',b*}\phi_{\mathbf{k+q},\sigma'}^{m,b}\right)}{\omega-(n'-n)\Omega-(\epsilon_{\mathbf{k+q},\sigma'}-\epsilon_{\mathbf{k},\sigma})+i\eta}\right.+ \nonumber\\ &\left.\frac{1}{2}\frac{e^{-i(n'-n)\Omega t}e^{-i(m'-m)\Omega t} \left(\phi_{\mathbf{k-q},\sigma'}^{n',a}\phi_{\mathbf{k},\sigma}^{n,a*}+\phi_{\mathbf{k-q},\sigma'}^{n',b}\phi_{\mathbf{k},\sigma}^{n,b*}\right) \left(\phi_{\mathbf{k},\sigma}^{m',a}\phi_{\mathbf{k-q},\sigma'}^{m, a*}+\phi_{\mathbf{k},\sigma}^{m',b}\phi_{\mathbf{k-q},\sigma'}^{m,b*}\right)} {-\omega-(n'-n)\Omega-(\epsilon_{\mathbf{k-q},\sigma'}-\epsilon_{\mathbf{k},\sigma})-i\eta}\right].\end{aligned}$$ By comparing with the Poisson equation $$\rho^{\text{ind}}(\mathbf{r},t) = \sum_{\mathbf{q}}\int\frac{d\omega}{2\pi}V^{\text{ind}}(\mathbf{q},\omega)e^{-i\omega t}e^{i\mathbf{qr}}\frac{q^2}{4\pi}$$ we see that the induced potential must fulfill $$V^{\text{ind}}(\mathbf{q},\omega) = \frac{4\pi}{q^2}V^{\text{ext}}(\mathbf{q},\omega)\sum_{\sigma\sigma'}\sum_{\mathbf{k}}f_{\mathbf{k},\sigma}\mathcal{F}_{\mathbf{k},\sigma,\sigma'}.$$ Adding $V^{\text{ext}}(\mathbf{q},\omega)$ to both sides, we get $$V^{\text{tot}}(\mathbf{q},\omega) =
\left(1+\frac{4\pi}{q^2}\sum_{\sigma\sigma'}\sum_{\mathbf{k}}f_{\mathbf{k},\sigma}\mathcal{F}_{\mathbf{k},\sigma,\sigma'}\right)V^{\text{ext}}(\mathbf{q},\omega),$$ where the dielectric function is given by $$\varepsilon(\mathbf{q},\omega) = \frac{1}{1+\frac{4\pi}{q^2}\sum_{\sigma\sigma'}\sum_{\mathbf{k}}f_{\mathbf{k},\sigma}\mathcal{F}_{\mathbf{k},\sigma,\sigma'}}.$$ In the RPA approximation we obtain therefore $$\varepsilon(\mathbf{q},\omega)_{\text{RPA}} = 1 - \frac{4\pi}{q^2} \sum_{\sigma\sigma'}\sum_{\mathbf{k}}f_{\mathbf{k},\sigma}\mathcal{F}_{\mathbf{k},\sigma,\sigma'}.$$ After substituting the expression for $\mathcal{F}_{\mathbf{k},\sigma,\sigma'}$ and some straightforward manipulations, we arrive at our desired result for the dynamical polarizability: $$\Pi(\mathbf{q},\omega) = \sum_{\sigma\sigma'}\sum_{\mathbf{k}}\sum_{l}\frac{f_{\mathbf{k},\sigma}-f_{\mathbf{k+q},\sigma'}}{\omega -\epsilon_{\mathbf{k+q},\sigma', l}+\epsilon_{\mathbf{k},\sigma,0}+i\eta} \sum_{n} |\phi_{\mathbf{k+q},\sigma',l}^{n,a,*}\phi_{\mathbf{k},\sigma,0}^{n,a}+\phi_{\mathbf{k+q},\sigma',l}^{n,b,*}\phi_{\mathbf{k},\sigma,0}^{n,b}|^2$$ Notice that now we have simplified the expression by writing it as the scalar product between different Floquet sidebands by using Eqs.  and .
--- abstract: | We consider a family of linear viscoelastic shells with thickness $2{\varepsilon}$ (where ${\varepsilon}$ is a small parameter), clamped along a portion of their lateral face, all having the same middle surface $S$. We formulate the three-dimensional mechanical problem in curvilinear coordinates and provide existence and uniqueness of (weak) solution of the corresponding three-dimensional variational problem. We are interested in studying the limit behavior of both the three-dimensional problems and their solutions (displacements ${\mbox{\boldmath{$u$}}}^{\varepsilon}$ of covariant components $u_i^{\varepsilon}$) when ${\varepsilon}$ tends to zero. To do that, we use asymptotic analysis methods. First, we formulate the variational problem in a fixed domain independent of ${\varepsilon}$. Then we assume an asymptotic expansion of the scaled displacements field ${\mbox{\boldmath{$u$}}}({\varepsilon})=(u_i({\varepsilon}))$. Identifying the terms of the proposed asymptotic expansion we characterize the zeroth order term as the solution of a two-dimensional scaled limit problem. Moreover, on one hand, we find that if the applied body force density is $O(1)$ with respect to ${\varepsilon}$ and surface tractions density is $O({\varepsilon})$, the limit of the field ${\mbox{\boldmath{$u$}}}({\varepsilon})$ is the solution of a two-dimensional system of variational equations called viscoelastic membrane problem. On the other hand, if the applied body force density is $O({\varepsilon}^2)$ and surface tractions density is $O({\varepsilon}^3)$, the limit of the field ${\mbox{\boldmath{$u$}}}({\varepsilon})$ is the solution of a different system of two-dimensional variational equations called viscoelastic flexural problem. In both cases, we find a model which presents a long-term memory that takes into account the deformations at previous times. 
We finally comment on the existence and uniqueness of solution for the two-dimensional variational problems found and announce convergence results in forthcoming papers. address: - 'Departamento de Matemática Aplicada, Univ. de Santiago de Compostela, Spain' - 'Departamento de Métodos Matemáticos e Representación, Univ. da Coruña, Spain' author: - 'G. Castiñeira' - 'Á. Rodríguez-Arós' title: Derivation of models for linear viscoelastic shells by using asymptotic analysis --- Asymptotic Analysis, Viscoelasticity, Shells, Membrane, Flexural, Time dependent 34K25, 35O30, 35Q74, 34E05, 34E10, 41A60, 74K25, 74K15, 74D05, 35J15 Introduction ============ In solid mechanics, the obtention of models for rods, beams, plates and shells is based on [*a priori*]{} hypotheses on the displacement and/or stress fields which, upon substitution in the three-dimensional equilibrium and constitutive equations, lead to useful simplifications. Nevertheless, from both constitutive and geometrical points of view, there is a need to justify the validity of most of the models obtained in this way. For this reason a considerable effort has been made in the past decades by many authors in order to derive new models and justify the existing ones by using the asymptotic expansion method, whose foundations can be found in [@Lions]. Indeed, the first applied results were obtained with the justification of the linearized theory of plate bending in [@CD; @Destuynder]. The theories of beam bending and rod stretching also benefited from the extensive use of asymptotic methods and so the justification of the Bernoulli-Navier model for the bending-stretching of elastic thin rods was provided in [@BV]. In the following years, the nonlinear case was studied in [@CGLRT2] and the analysis and error estimation of higher-order terms in the asymptotic expansion of the scaled unknowns was given in [@iv].
In [@TraViano], the authors use the asymptotic method to justify the Saint-Venant, Timoshenko and Vlassov models of elastic beams. A description of the mathematical models for the three-dimensional elasticity, including the nonlinear aspects, together with a mathematical analysis of these models, can be found in [@Ciarlet2]. A justification of the two-dimensional equations of a linear plate can be found in [@CD]. An extensive review concerning plate models can be found in [@Ciarlet3], which also contains the justification of the models by using asymptotic methods. The existence and uniqueness of solution of elliptic membrane shell equations can be found in [@CiarletLods] and in [@CiarletLods2]. These two-dimensional models are completely justified with convergence theorems. A complete theory regarding elastic shells can be found in [@Ciarlet4b], where models for elliptic membranes, generalized membranes and flexural shells are presented. It contains a full description of the asymptotic procedure that leads to the corresponding sets of two-dimensional equations. Also, the dynamic case has been studied in [@Limin_mem; @Limin_flex; @Limin_koit], concerning the justification of dynamic equations for membrane, flexural and Koiter shells. More recently, in [@ArosObs], the obstacle problem for an elastic elliptic membrane has been identified and justified as the limit problem for a family of unilateral contact problems for elastic elliptic shells. A large number of real problems have made it necessary to study new models which take into account effects such as hardening and memory of the material. Examples of these are the viscoelasticity models (see [@DL; @LC1990; @Pipkin]). Regarding the derivation and justification of viscoelastic models by using asymptotic expansion methods, we find several models for the bending-stretching of viscoelastic rods in [@AV; @AV2].
For a family of shells made of a long-term memory viscoelastic material, asymptotic analysis is used in [@Viscoshells; @Viscoshellsf; @ViscoshellsK] to justify, with convergence results, the limit two-dimensional membrane, flexural and Koiter equations. In this work, we analyse the asymptotic behaviour of the scaled three-dimensional displacement field of a shell made of a viscoelastic short-term memory material (Kelvin-Voigt) as the thickness approaches zero. We consider that the displacements vanish in a portion of the lateral face of the shell, obtaining the equations of a viscoelastic membrane shell or of a viscoelastic flexural shell depending on the order of the forces and the geometry. We will follow the notation and style of [@Ciarlet4b], where linear elastic shells are studied. For this reason, we shall reference auxiliary results which apply in the same manner to the viscoelastic case. One of the major differences with respect to previous works in elasticity is the time dependence, which leads to ordinary differential equations that must be solved in order to find the zeroth-order approximation of the solution. The structure of the paper is the following: in Section \[problema\] we shall describe the mechanical problem in the original domain, while in Section \[seccion\_dominio\_ind\] we will use a projection map onto a reference domain, introduce the scaled unknowns and forces, and state the assumptions on the coefficients. In Section \[preliminares\] we recall some technical results which will be needed in what follows; moreover, we include the theoretical results that support the existence and uniqueness of solution for the problems presented in this paper. In Section \[procedure\] we show the asymptotic analysis leading to the formulation of the variational equations of the viscoelastic shells.
In Section \[Existencia\] we first recall the classification of the shells according to their boundary conditions and the geometry of the middle surface $S$, and then we study the existence and uniqueness of solution of the de-scaled problems derived from the asymptotic procedure. In Section \[conclusiones\] we shall present some conclusions, including a comparison between the viscoelastic models and the elastic case studied in [@Ciarlet4b], and announce the convergence results in forthcoming papers. The three-dimensional shell problem =================================== \[problema\] We denote by $\mathbb{S}^d$, where $d=2,3$ in practice, the space of second-order symmetric tensors on $\mathbb{R}^d$, while $\cdot$ will represent the inner product and $|\cdot|$ the usual norm in $\mathbb{S}^d$ and $\mathbb{R}^d$. In what follows, unless otherwise explicitly stated, we will use the summation convention on repeated indices. Moreover, Latin indices $i,j,k,l,...$, take their values in the set $\{1,2,3\}$, whereas Greek indices $\alpha,\beta,\sigma,\tau,...$, take their values in the set $\{1,2\}$. We use standard notation for the Lebesgue and Sobolev spaces. Moreover, for a time-dependent function $u$, we denote by $\dot{u}$ the first derivative of $u$ with respect to the time variable. Let ${\Omega}^*$ be a domain of $\mathbb{R}^3$, with a Lipschitz-continuous boundary ${\Gamma^*}={\partial}{\Omega^*}$. Let ${{\mbox{\boldmath{$x$}}}^*}=({x}_i^*)$ be a generic point of its closure $\bar{\Omega}^*$ and let ${{\partial}}^*_i$ denote the partial derivative with respect to ${x}_i^*$. Let $d{\mbox{\boldmath{$x$}}}^*$ denote the volume element in $\Omega^*$, $d\Gamma^*$ denote the area element along $\Gamma^*$ and ${\mbox{\boldmath{$n$}}}^*$ denote the unit outer normal vector along $\Gamma^*$.
Finally, let $\Gamma^*_0$ and $\Gamma_1^*$ be subsets of $\Gamma^*$ such that $meas(\Gamma_0^*)>0$ and $\Gamma^*_0 \cap \Gamma_1^*=\emptyset.$ The set $\Omega^*$ is the region occupied by a deformable body in the absence of applied forces. We assume that this body is made of a Kelvin-Voigt viscoelastic material, which is homogeneous and isotropic, so that the material is characterized by its Lamé coefficients $\lambda\geq0, \mu>0$ and its viscosity coefficients $\theta\geq 0,\rho\geq 0$ (see for instance [@DL; @LC1990; @Shillor]). Let $T>0$ be the time period of observation. Under the effect of applied forces, the body is deformed and we denote by $u_i^*:[0,T]\times \bar{\Omega}^*\rightarrow \mathbb{R}$ the Cartesian components of the displacements field, defined as ${\mbox{\boldmath{$u$}}}^*:=u_i^* {\mbox{\boldmath{$e$}}}^{i}:[0,T]\times\bar{\Omega}^* \rightarrow \mathbb{R}^3$, where $\{{\mbox{\boldmath{$e$}}}^i\}$ denotes the Euclidean canonical basis in $\mathbb{R}^3$. Moreover, we consider that the displacement field vanishes on the set $\Gamma^*_0$. Hence, the displacements field ${\mbox{\boldmath{$u$}}}^*=(u_i^*):[0,T]\times\Omega^*\longrightarrow \mathbb{R}^3$ is the solution of the following three-dimensional problem in Cartesian coordinates.
\[problema\_mecanico\] Find ${\mbox{\boldmath{$u$}}}^*=(u_i^*):[0,T]\times\Omega^*\longrightarrow \mathbb{R}^3$ such that, $$\begin{aligned} \label{equilibrio} -{\partial}_j^*\sigma^{ij,*}({\mbox{\boldmath{$u$}}}^*)&=f^{i,*} { \ \textrm{in} \ }\Omega^*, \\\label{Dirichlet} u_i^*&=0 { \ \textrm{on} \ }\Gamma^*_0, \\\label{Neumann} \sigma^{ij,*}({\mbox{\boldmath{$u$}}}^*)n_j^*&=h^{i,*} { \ \textrm{on} \ }\Gamma_1^*,\\ \label{condicion_inicial} {\mbox{\boldmath{$u$}}}^*(0,\cdot)&={\mbox{\boldmath{$u$}}}_0^* { \ \textrm{in} \ }\Omega^*, \end{aligned}$$ where the functions $$\begin{aligned} \sigma^{ij,*}({\mbox{\boldmath{$u$}}}^*):=A^{ijkl,*}e_{kl}^*({\mbox{\boldmath{$u$}}}^*)+ B^{ijkl,*}e_{kl}^*(\dot{{\mbox{\boldmath{$u$}}}}^*), \end{aligned}$$ are the components of the linearized stress tensor field and where the functions $$\begin{aligned} & A^{ijkl,*}:= \lambda \delta^{ij}\delta^{kl} + \mu\left(\delta^{ik}\delta^{jl} + \delta^{il}\delta^{jk}\right) , \\ & B^{ijkl,*}:= \theta \delta^{ij}\delta^{kl} + \frac{\rho}{2}\left(\delta^{ik}\delta^{jl} + \delta^{il}\delta^{jk}\right) , \end{aligned}$$ are the components of the three-dimensional elasticity and viscosity fourth order tensors, respectively, and $$\begin{aligned} e^*_{ij}({\mbox{\boldmath{$u$}}}^*):= \frac1{2}({\partial}^*_ju^*_{i}+ {\partial}^*_iu^*_{j}), \end{aligned}$$ designate the components of the linearized strain tensor associated with the displacement field ${\mbox{\boldmath{$u$}}}^*$ of the set $\bar{\Omega}^*$. We now proceed to describe the equations in Problem \[problema\_mecanico\]. Expression (\[equilibrio\]) is the equilibrium equation, where $f^{i,*}$ are the components of the volume force densities. The equality (\[Dirichlet\]) is the Dirichlet condition of place, (\[Neumann\]) is the Neumann condition, where $h^{i,*}$ are the components of the surface force densities and (\[condicion\_inicial\]) is the initial condition, where ${\mbox{\boldmath{$u$}}}_0^*$ denotes the initial displacements.
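The isotropic tensors $A^{ijkl,*}$ and $B^{ijkl,*}$ above can be assembled directly from Kronecker deltas, and the algebraic identity $A^{ijkl,*}t_{kl}t_{ij}=\lambda(\mathrm{tr}\,{\mbox{\boldmath{$t$}}})^2+2\mu|{\mbox{\boldmath{$t$}}}|^2$, which underlies the ellipticity used later, can be checked numerically. The following NumPy sketch is our own illustration (the coefficient values are arbitrary, not taken from the paper):

```python
import numpy as np

def iso_tensor(c1, c2):
    """C^{ijkl} = c1*d^{ij}d^{kl} + c2*(d^{ik}d^{jl} + d^{il}d^{jk}), d the Kronecker delta."""
    d = np.eye(3)
    return (c1 * np.einsum('ij,kl->ijkl', d, d)
            + c2 * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

lam, mu = 2.0, 1.0        # Lame coefficients (lambda >= 0, mu > 0); illustrative values
theta, rho = 0.5, 0.3     # viscosity coefficients (theta >= 0, rho >= 0); illustrative values
A = iso_tensor(lam, mu)         # elasticity tensor A^{ijkl,*}
B = iso_tensor(theta, rho / 2)  # viscosity tensor B^{ijkl,*}

# major and minor symmetries of both tensors
for C in (A, B):
    assert np.allclose(C, C.transpose(1, 0, 2, 3))   # C^{ijkl} = C^{jikl}
    assert np.allclose(C, C.transpose(2, 3, 0, 1))   # C^{ijkl} = C^{klij}

# A : t : t = lambda*(tr t)^2 + 2*mu*|t|^2 >= 2*mu*|t|^2 for any symmetric t
rng = np.random.default_rng(0)
t = rng.standard_normal((3, 3))
t = 0.5 * (t + t.T)
q = np.einsum('ijkl,kl,ij->', A, t, t)
assert np.isclose(q, lam * np.trace(t) ** 2 + 2 * mu * np.sum(t * t))
assert q >= 2 * mu * np.sum(t * t) - 1e-12
```

Note that the viscosity tensor is obtained from the elasticity tensor by replacing $(\lambda,\mu)$ with $(\theta,\rho/2)$, which is why the properties proved for one transfer to the other.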
Note that, for the sake of briefness, we omit the explicit dependence on the space and time variables when there is no ambiguity. Let us define the space of admissible unknowns, $$\begin{aligned} V(\Omega^*)=\{{\mbox{\boldmath{$v$}}}^*=(v_i^*)\in [H^1(\Omega^*)]^3; {\mbox{\boldmath{$v$}}}^*=\mathbf{{\mbox{\boldmath{$0$}}}} \ on \ \Gamma_0^* \}. \end{aligned}$$ Therefore, assuming enough regularity, the unknown ${\mbox{\boldmath{$u$}}}^*=(u_i^*)$ satisfies the following variational problem in Cartesian coordinates: \[problema\_cartesian\] Find ${\mbox{\boldmath{$u$}}}^*=(u_i^*):[0,T]\times {\Omega}^* \rightarrow \mathbb{R}^3$ such that, $$\begin{aligned} \displaystyle \nonumber & {\mbox{\boldmath{$u$}}}^*(t,\cdot)\in V(\Omega^*) {\ \forall \ t\in[0,T]}, \\ \nonumber &\int_{\Omega^*}A^{ijkl,*}e^*_{kl}({\mbox{\boldmath{$u$}}}^*(t))e^*_{ij}({\mbox{\boldmath{$v$}}}^*) dx^*+ \int_{\Omega^*} B^{ijkl,*}e^*_{kl}(\dot{{\mbox{\boldmath{$u$}}}}^*(t))e_{ij}^*({\mbox{\boldmath{$v$}}}^*) dx^* \\ & \quad= \int_{\Omega^*} f^{i,*}(t) v_i^* dx^* + \int_{\Gamma_1^*} h^{i,*}(t) v_i^* d\Gamma^* \quad \forall {\mbox{\boldmath{$v$}}}^*\in V(\Omega^*), {\ a.e. \ \textrm{in} \ (0,T)}, \\\displaystyle & {\mbox{\boldmath{$u$}}}^*(0,\cdot)= {\mbox{\boldmath{$u$}}}_0^*(\cdot).\end{aligned}$$ Let us consider that $\Omega^*$ is a viscoelastic shell of thickness $2{\varepsilon}$ and middle surface $S$. Now, we shall express the equations of the Problem \[problema\_cartesian\] in terms of curvilinear coordinates. Let $\omega$ be a domain of $\mathbb{R}^2$, with a Lipschitz-continuous boundary $\gamma={\partial}\omega$. Let ${\mbox{\boldmath{$y$}}}=(y_\alpha)$ be a generic point of its closure $\bar{\omega}$ and let ${\partial}_\alpha$ denote the partial derivative with respect to $y_\alpha$. 
Let ${\mbox{\boldmath{$\theta$}}}\in\mathcal{C}^2(\bar{\omega};\mathbb{R}^3)$ be an injective mapping such that the two vectors ${\mbox{\boldmath{$a$}}}_\alpha({\mbox{\boldmath{$y$}}}):= {\partial}_\alpha {\mbox{\boldmath{$\theta$}}}({\mbox{\boldmath{$y$}}})$ are linearly independent. These vectors form the covariant basis of the tangent plane to the surface $S:={\mbox{\boldmath{$\theta$}}}(\bar{\omega})$ at the point ${\mbox{\boldmath{$\theta$}}}({\mbox{\boldmath{$y$}}})={\mbox{\boldmath{$y$}}}^*.$ We can consider the two vectors ${\mbox{\boldmath{$a$}}}^\alpha({\mbox{\boldmath{$y$}}})$ of the same tangent plane defined by the relations ${\mbox{\boldmath{$a$}}}^\alpha({\mbox{\boldmath{$y$}}})\cdot {\mbox{\boldmath{$a$}}}_\beta({\mbox{\boldmath{$y$}}})=\delta_\beta^\alpha$, that constitute the contravariant basis. We define the unit vector, $$\begin{aligned} \label{a_3} {\mbox{\boldmath{$a$}}}_3({\mbox{\boldmath{$y$}}})={\mbox{\boldmath{$a$}}}^3({\mbox{\boldmath{$y$}}}):=\frac{{\mbox{\boldmath{$a$}}}_1({\mbox{\boldmath{$y$}}})\wedge {\mbox{\boldmath{$a$}}}_2({\mbox{\boldmath{$y$}}})}{| {\mbox{\boldmath{$a$}}}_1({\mbox{\boldmath{$y$}}})\wedge {\mbox{\boldmath{$a$}}}_2({\mbox{\boldmath{$y$}}})|},\end{aligned}$$ normal vector to $S$ at the point ${\mbox{\boldmath{$\theta$}}}({\mbox{\boldmath{$y$}}})={\mbox{\boldmath{$y$}}}^*$, where $\wedge$ denotes vector product in $\mathbb{R}^3.$ We can define the first fundamental form, given as metric tensor, in covariant or contravariant components, respectively, by $$\begin{aligned} a_{\alpha\beta}:={\mbox{\boldmath{$a$}}}_\alpha\cdot {\mbox{\boldmath{$a$}}}_\beta, \qquad a^{\alpha\beta}:={\mbox{\boldmath{$a$}}}^\alpha\cdot {\mbox{\boldmath{$a$}}}^\beta,\end{aligned}$$ the second fundamental form, given as curvature tensor, in covariant or mixed components, respectively, by $$\begin{aligned} b_{\alpha\beta}:={\mbox{\boldmath{$a$}}}^3 \cdot {\partial}_\beta {\mbox{\boldmath{$a$}}}_\alpha, \qquad b_{\alpha}^\beta:=a^{\beta\sigma} 
b_{\sigma\alpha},\end{aligned}$$ and the Christoffel symbols of the surface $S$ by $$\begin{aligned} \Gamma^\sigma_{\alpha\beta}:={\mbox{\boldmath{$a$}}}^\sigma\cdot {\partial}_\beta {\mbox{\boldmath{$a$}}}_\alpha.\end{aligned}$$ The area element along $S$ is $\sqrt{a}dy=dy^*$ where $$\begin{aligned} \label{definicion_a} a:=\det (a_{\alpha\beta}).\end{aligned}$$ Let $\gamma_0$ be a subset of $\gamma$, such that $meas (\gamma_0)>0$. For each $\varepsilon>0$, we define the three-dimensional domain $\Omega^\varepsilon:=\omega \times (-\varepsilon, \varepsilon)$ and its boundary ${\Gamma^{\varepsilon }}={\partial}\Omega^{\varepsilon}$. We also define the following parts of the boundary, $$\begin{aligned} \Gamma^\varepsilon_+:=\omega\times \{\varepsilon\}, \quad \Gamma^\varepsilon_-:= \omega\times \{-\varepsilon\},\quad \Gamma_0^\varepsilon:=\gamma_0\times[-\varepsilon,\varepsilon].\end{aligned}$$ Let ${\mbox{\boldmath{$x$}}}^\varepsilon=(x_i^\varepsilon)$ be a generic point of $\bar{\Omega}^\varepsilon$ and let ${\partial}_i^{\varepsilon}$ denote the partial derivative with respect to $x_i^\varepsilon$. Note that $x_\alpha^\varepsilon=y_\alpha$ and ${\partial}_\alpha^\varepsilon ={\partial}_\alpha$. 
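As an aside, the surface quantities introduced above (covariant basis, unit normal, fundamental forms, area element) are straightforward to compute symbolically for a concrete chart. The sketch below is our own illustration; the unit cylinder $\theta(y_1,y_2)=(\cos y_1,\sin y_1,y_2)$ is a hypothetical choice of middle surface, not one used in the paper:

```python
import sympy as sp

y1, y2 = sp.symbols('y1 y2', real=True)
theta = sp.Matrix([sp.cos(y1), sp.sin(y1), y2])   # hypothetical chart: unit cylinder

a_ = [theta.diff(y1), theta.diff(y2)]             # covariant basis a_1, a_2
n = a_[0].cross(a_[1])
a3 = (n / n.norm()).applyfunc(sp.simplify)        # unit normal a_3 = a^3

# first fundamental form a_{ab} and second fundamental form b_{ab} = a^3 . d_b a_a
a_cov = sp.Matrix(2, 2, lambda i, j: a_[i].dot(a_[j])).applyfunc(sp.simplify)
b_cov = sp.Matrix(2, 2, lambda i, j: a3.dot(a_[i].diff([y1, y2][j]))).applyfunc(sp.simplify)
a_det = sp.simplify(a_cov.det())                  # the 'a' in the area element sqrt(a) dy
```

For this chart one obtains $a_{\alpha\beta}=\delta_{\alpha\beta}$, $b_{11}=-1$ as the only nonzero curvature component (the sign being fixed by the convention $b_{\alpha\beta}={\mbox{\boldmath{$a$}}}^3\cdot{\partial}_\beta{\mbox{\boldmath{$a$}}}_\alpha$), and $a=1$, while the Christoffel symbols of the surface vanish.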
Let ${\mbox{\boldmath{$\Theta$}}}:\bar{\Omega}^\varepsilon\rightarrow \mathbb{R}^3$ be the mapping defined by $$\begin{aligned} \label{bTheta} {\mbox{\boldmath{$\Theta$}}}({\mbox{\boldmath{$x$}}}^\varepsilon):={\mbox{\boldmath{$\theta$}}}({\mbox{\boldmath{$y$}}}) + x_3^\varepsilon {\mbox{\boldmath{$a$}}}_3({\mbox{\boldmath{$y$}}}) \ \forall {\mbox{\boldmath{$x$}}}^\varepsilon=({\mbox{\boldmath{$y$}}},x_3^\varepsilon)=(y_1,y_2,x_3^\varepsilon)\in\bar{\Omega}^\varepsilon.\end{aligned}$$ The next theorem shows that if the injective mapping ${\mbox{\boldmath{$\theta$}}}:\bar{\omega}\rightarrow\mathbb{R}^3$ is smooth enough, the mapping ${\mbox{\boldmath{$\Theta$}}}:\bar{\Omega}^{\varepsilon}\rightarrow\mathbb{R}^3$ is also injective for ${\varepsilon}>0$ small enough (see Theorem 3.1-1 of [@Ciarlet4b]). \[var\_0\] Let $\omega$ be a domain in $\mathbb{R}^2$. Let ${\mbox{\boldmath{$\theta$}}}\in\mathcal{C}^2(\bar{\omega};\mathbb{R}^3)$ be an injective mapping such that the two vectors ${\mbox{\boldmath{$a$}}}_\alpha={\partial}_\alpha{\mbox{\boldmath{$\theta$}}}$ are linearly independent at all points of $\bar{\omega}$ and let ${\mbox{\boldmath{$a$}}}_3$ be defined by (\[a\_3\]). Then there exists ${\varepsilon}_0>0$ such that the mapping ${\mbox{\boldmath{$\Theta$}}}:\bar{\Omega}_0 \rightarrow\mathbb{R}^3$ defined by $$\begin{aligned} {\mbox{\boldmath{$\Theta$}}}({\mbox{\boldmath{$y$}}},x_3):={\mbox{\boldmath{$\theta$}}}({\mbox{\boldmath{$y$}}}) + x_3 {\mbox{\boldmath{$a$}}}_3({\mbox{\boldmath{$y$}}}) \ \forall ({\mbox{\boldmath{$y$}}},x_3)\in\bar{\Omega}_0, \ \textrm{where} \ \Omega_0:=\omega\times(-{\varepsilon}_0,{\varepsilon}_0),\end{aligned}$$ is a $\mathcal{C}^1$-diffeomorphism from $\bar{\Omega}_0$ onto ${\mbox{\boldmath{$\Theta$}}}(\bar{\Omega}_0)$ and $\det ({\mbox{\boldmath{$g$}}}_1,{\mbox{\boldmath{$g$}}}_2,{\mbox{\boldmath{$g$}}}_3)>0$ in $\bar{\Omega}_0$, where ${\mbox{\boldmath{$g$}}}_i:={\partial}_i{\mbox{\boldmath{$\Theta$}}}$.
For each ${\varepsilon}$, $0<{\varepsilon}\le{\varepsilon}_0$, the set ${\mbox{\boldmath{$\Theta$}}}(\bar{\Omega}^{\varepsilon})=\bar{\Omega}^*$ is the reference configuration of a viscoelastic shell, with middle surface $S={\mbox{\boldmath{$\theta$}}}(\bar{\omega})$ and thickness $2\varepsilon>0$. Furthermore for $\varepsilon>0,$ ${\mbox{\boldmath{$g$}}}_i^\varepsilon({\mbox{\boldmath{$x$}}}^\varepsilon):={\partial}_i^\varepsilon{\mbox{\boldmath{$\Theta$}}}({\mbox{\boldmath{$x$}}}^\varepsilon)$ are linearly independent and the mapping ${\mbox{\boldmath{$\Theta$}}}:\bar{\Omega}^\varepsilon\rightarrow \mathbb{R}^3$ is injective for all ${\varepsilon}$, $0<{\varepsilon}\le{\varepsilon}_0$, as a consequence of injectivity of the mapping ${\mbox{\boldmath{$\theta$}}}$. Hence, the three vectors ${\mbox{\boldmath{$g$}}}_i^\varepsilon({\mbox{\boldmath{$x$}}}^\varepsilon)$ form the covariant basis of the tangent space at the point ${\mbox{\boldmath{$x$}}}^*={\mbox{\boldmath{$\Theta$}}}({\mbox{\boldmath{$x$}}}^\varepsilon)$ and ${\mbox{\boldmath{$g$}}}^{i,\varepsilon}({\mbox{\boldmath{$x$}}}^\varepsilon) $ defined by the relations ${\mbox{\boldmath{$g$}}}^{i,\varepsilon}\cdot {\mbox{\boldmath{$g$}}}_j^\varepsilon=\delta_j^i$ form the contravariant basis at the point ${\mbox{\boldmath{$x$}}}^*={\mbox{\boldmath{$\Theta$}}}({\mbox{\boldmath{$x$}}}^\varepsilon)$. We define the metric tensor, in covariant or contravariant components, respectively, by $$\begin{aligned} g_{ij}^\varepsilon:={\mbox{\boldmath{$g$}}}_i^\varepsilon \cdot {\mbox{\boldmath{$g$}}}_j^\varepsilon,\quad g^{ij,\varepsilon}:={\mbox{\boldmath{$g$}}}^{i,\varepsilon} \cdot {\mbox{\boldmath{$g$}}}^{j,\varepsilon},\end{aligned}$$ and Christoffel symbols by $$\begin{aligned} \label{simbolos3D} \Gamma^{p,\varepsilon}_{ij}:={\mbox{\boldmath{$g$}}}^{p,\varepsilon}\cdot{\partial}_i^\varepsilon {\mbox{\boldmath{$g$}}}_j^\varepsilon. 
$$ The volume element in the set ${\mbox{\boldmath{$\Theta$}}}(\bar{\Omega}^\varepsilon)=\bar{\Omega}^*$ is $\sqrt{g^\varepsilon}dx^{\varepsilon}=dx^*$ and the surface element in ${\mbox{\boldmath{$\Theta$}}}(\Gamma^\varepsilon)=\Gamma^*$ is $\sqrt{g^\varepsilon}d{\Gamma^{\varepsilon }}=d\Gamma^*$ where $$\begin{aligned} \label{g} g^\varepsilon:=\det (g^\varepsilon_{ij}).\end{aligned}$$ Therefore, for a field ${{\mbox{\boldmath{$v$}}}}^*$ defined in ${\mbox{\boldmath{$\Theta$}}}(\bar{\Omega}^{\varepsilon})=\bar{\Omega}^*$, we define its covariant curvilinear coordinates $v_i^{\varepsilon}$ by $${{\mbox{\boldmath{$v$}}}}^*({{\mbox{\boldmath{$x$}}}}^*)={v}^*_i({{\mbox{\boldmath{$x$}}}}^*){{\mbox{\boldmath{$e$}}}}^i=:v_i^{\varepsilon}({\mbox{\boldmath{$x$}}}^{\varepsilon}){\mbox{\boldmath{$g$}}}^{i,\varepsilon}({\mbox{\boldmath{$x$}}}^{\varepsilon}),\ {\rm with}\ {{\mbox{\boldmath{$x$}}}}^*={\mbox{\boldmath{$\Theta$}}}({\mbox{\boldmath{$x$}}}^{\varepsilon}).$$ Besides, we denote by $u_i^\varepsilon:[0,T]\times \bar{\Omega}^\varepsilon \rightarrow \mathbb{R}$ the covariant components of the displacements field, that is, ${\mbox{\boldmath{$\mathcal{U}$}}}^{\varepsilon}:=u_i^\varepsilon {\mbox{\boldmath{$g$}}}^{i,\varepsilon}:[0,T]\times\bar{\Omega}^\varepsilon \rightarrow \mathbb{R}^3$. For simplicity, we define the vector field ${\mbox{\boldmath{$u$}}}^\varepsilon=(u_i^\varepsilon):[0,T]\times {\Omega}^\varepsilon \rightarrow \mathbb{R}^3$, which will be called the vector of unknowns. Recall that we assumed that the shell is subjected to a boundary condition of place; in particular, the displacements field vanishes on a portion of the lateral face of the shell, that is, on ${\mbox{\boldmath{$\Theta$}}}(\Gamma_0^\varepsilon)=\Gamma_0^*$.
Accordingly, let us define the space of admissible unknowns, $$\begin{aligned} V(\Omega^\varepsilon)=\{{\mbox{\boldmath{$v$}}}^\varepsilon=(v_i^\varepsilon)\in [H^1(\Omega^\varepsilon)]^3; {\mbox{\boldmath{$v$}}}^\varepsilon=\mathbf{{\mbox{\boldmath{$0$}}}} \ on \ \Gamma_0^\varepsilon \}.\end{aligned}$$ This is a real Hilbert space with the induced inner product of $[H^1(\Omega^{\varepsilon})]^3$. The corresponding norm is denoted by $||\cdot||_{1,\Omega^{\varepsilon}}$. Therefore, we can find the expression of Problem \[problema\_cartesian\] in curvilinear coordinates (see [@Ciarlet4b] for details). Hence, the “displacements” field ${\mbox{\boldmath{$u$}}}^{\varepsilon}=(u_i^{\varepsilon})$ verifies the following variational problem of a three-dimensional viscoelastic shell in curvilinear coordinates: \[problema\_eps\] Find ${\mbox{\boldmath{$u$}}}^\varepsilon=(u_i^\varepsilon):[0,T]\times {\Omega}^\varepsilon \rightarrow \mathbb{R}^3$ such that, $$\begin{aligned} \displaystyle \nonumber & {\mbox{\boldmath{$u$}}}^\varepsilon(t,\cdot)\in V(\Omega^\varepsilon) {\ \forall \ t\in[0,T]}, \\ \nonumber &\int_{\Omega^\varepsilon}A^{ijkl,\varepsilon}e^\varepsilon_{k||l}({\mbox{\boldmath{$u$}}}^\varepsilon(t))e^\varepsilon_{i||j}({\mbox{\boldmath{$v$}}}^\varepsilon)\sqrt{g^\varepsilon} dx^\varepsilon+ \int_{\Omega^\varepsilon} B^{ijkl,\varepsilon}e^\varepsilon_{k||l}(\dot{{\mbox{\boldmath{$u$}}}}^\varepsilon(t))e_{i||j}^{\varepsilon}({\mbox{\boldmath{$v$}}}^\varepsilon) \sqrt{g^\varepsilon} dx^\varepsilon \\ \label{Pbvariacionaleps} & \quad= \int_{\Omega^\varepsilon} f^{i,\varepsilon}(t) v_i^\varepsilon \sqrt{g^\varepsilon} dx^\varepsilon + \int_{\Gamma_+^\varepsilon\cup\Gamma_-^\varepsilon} h^{i,\varepsilon}(t) v_i^\varepsilon\sqrt{g^\varepsilon} d\Gamma^\varepsilon \quad \forall {\mbox{\boldmath{$v$}}}^\varepsilon\in V(\Omega^\varepsilon), {\ a.e.
\ \textrm{in} \ (0,T)}, \\\displaystyle \nonumber & {\mbox{\boldmath{$u$}}}^\varepsilon(0,\cdot)= {\mbox{\boldmath{$u$}}}_0^\varepsilon(\cdot),\end{aligned}$$ where the functions $$\begin{aligned} \label{TensorAeps} & A^{ijkl,\varepsilon}:= \lambda g^{ij,\varepsilon}g^{kl,\varepsilon} + \mu(g^{ik,\varepsilon}g^{jl,\varepsilon} + g^{il,\varepsilon}g^{jk,\varepsilon} ), \\ \label{TensorBeps} & B^{ijkl,\varepsilon}:= \theta g^{ij,\varepsilon}g^{kl,\varepsilon} + \frac{\rho}{2}(g^{ik,\varepsilon}g^{jl,\varepsilon} + g^{il,\varepsilon}g^{jk,\varepsilon} ), \end{aligned}$$ are the contravariant components of the three-dimensional elasticity and viscosity tensors, respectively. We assume that the Lamé coefficients $\lambda\geq0, \mu>0$ and the viscosity coefficients $\theta\geq 0,\rho\geq 0$ are all independent of ${\varepsilon}$. Moreover, the terms $$\begin{aligned} e^\varepsilon_{i||j}({\mbox{\boldmath{$u$}}}^{\varepsilon}):= \frac1{2}(u^\varepsilon_{i||j}+ u^\varepsilon_{j||i})=\frac1{2}({\partial}^\varepsilon_ju^\varepsilon_i + {\partial}^\varepsilon_iu^\varepsilon_j) - \Gamma^{p,\varepsilon}_{ij}u^\varepsilon_p, \end{aligned}$$ designate the covariant components of the linearized strain tensor associated with the displacement field ${\mbox{\boldmath{$\mathcal{U}$}}}^{\varepsilon}$ of the set ${\mbox{\boldmath{$\Theta$}}}(\bar{\Omega}^\varepsilon)$. Moreover, $f^{i,{\varepsilon}}$ denotes the contravariant components of the volume force densities, $h^{i,{\varepsilon}}$ denotes the contravariant components of the surface force densities and ${\mbox{\boldmath{$u$}}}_0^{\varepsilon}$ denotes the initial “displacements” (actually, the initial displacement is ${\mbox{\boldmath{$\mathcal{U}$}}}_0^{\varepsilon}:=(u_0^{\varepsilon})_i{\mbox{\boldmath{$g$}}}^{i,{\varepsilon}}$).
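It is worth noting what the Kelvin-Voigt structure $\sigma=A e({\mbox{\boldmath{$u$}}})+B e(\dot{{\mbox{\boldmath{$u$}}}})$ does in time: testing the variational equation against a fixed function reduces it, schematically, to a linear ODE $a\,u(t)+b\,\dot{u}(t)=f(t)$ with $u(0)=u_0$. The scalar caricature below is our own illustration with arbitrary coefficients; it is not derived from the shell problem:

```python
import numpy as np

# scalar caricature of a*u(t) + b*u'(t) = f, u(0) = u0, with a constant load f:
# the exact solution is u(t) = f/a + (u0 - f/a) * exp(-a*t/b)
a, b, f, u0 = 2.0, 0.5, 1.0, 0.0   # illustrative values only

def u_exact(t):
    return f / a + (u0 - f / a) * np.exp(-a * t / b)

# forward-Euler integration of u' = (f - a*u)/b confirms the closed form
t = np.linspace(0.0, 3.0, 3001)
dt = t[1] - t[0]
u = np.empty_like(t)
u[0] = u0
for k in range(len(t) - 1):
    u[k + 1] = u[k] + dt * (f - a * u[k]) / b

assert abs(u[-1] - f / a) < 1e-3                 # relaxes to the elastic equilibrium f/a
assert np.max(np.abs(u - u_exact(t))) < 1e-2     # Euler matches the exact solution
```

The solution relaxes exponentially, at rate $a/b$, towards the elastic equilibrium $f/a$: this is the short-term memory character of the Kelvin-Voigt law, as opposed to the long-term (integral) memory appearing in the limit models.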
Note that the following additional relations are satisfied, $$\begin{aligned} \nonumber \Gamma^{3,\varepsilon}_{\alpha 3}=\Gamma^{p,\varepsilon}_{33}&=0 \ \textrm{in} \ \bar{\Omega}^\varepsilon, \\ \label{tensor_terminos_nulos} A^{\alpha\beta\sigma 3,\varepsilon}=A^{\alpha 333,\varepsilon}=B^{\alpha\beta\sigma 3 , \varepsilon}&=B^{\alpha 333, \varepsilon}=0 \ \textrm{in} \ \bar{\Omega}^\varepsilon, \end{aligned}$$ as a consequence of the definition of ${\mbox{\boldmath{$\Theta$}}}$ in (\[bTheta\]). The definitions of the fourth order tensors (\[TensorAeps\]) and (\[TensorBeps\]) imply that (see Theorem 1.8-1 of [@Ciarlet4b]) for ${\varepsilon}>0$ small enough, there exist two constants $C_e>0$ and $C_v>0$, independent of ${\varepsilon}$, such that $$\begin{aligned} \label{elipticidadA} \sum_{i,j}|t_{ij}|^2\leq C_e A^{ijkl,{\varepsilon}}({\mbox{\boldmath{$x$}}}^{\varepsilon})t_{kl}t_{ij},\\\label{elipticidadB} \sum_{i,j}|t_{ij}|^2\leq C_v B^{ijkl,{\varepsilon}}({\mbox{\boldmath{$x$}}}^{\varepsilon})t_{kl}t_{ij}, \end{aligned}$$ for all ${\mbox{\boldmath{$x$}}}^{\varepsilon}\in\bar{\Omega}^{\varepsilon}$ and all ${\mbox{\boldmath{$t$}}}=(t_{ij})\in\mathbb{S}^3$. Note that the proof for the scaled viscosity tensor $\left(B^{ijkl,{\varepsilon}}\right)$ would follow the steps of the proof for the elasticity tensor $\left(A^{ijkl,{\varepsilon}} \right)$ in Theorem 1.8-1 of [@Ciarlet4b], since, qualitatively, their expressions differ only in replacing the Lamé constants by the two viscosity coefficients. The proof that Problem \[problema\_eps\] has a unique solution for ${\varepsilon}>0$ small enough is left to Section \[preliminares\] (see Theorem \[Thexistunic\]). The scaled three-dimensional shell problem ========================================== \[seccion\_dominio\_ind\] For convenience, we consider a reference domain independent of the small parameter ${\varepsilon}$.
Hence, let us define the three-dimensional domain $\Omega:=\omega \times (-1, 1) $ and its boundary $\Gamma={\partial}\Omega$. We also define the following parts of the boundary, $$\begin{aligned} \Gamma_+:=\omega\times \{1\}, \quad \Gamma_-:= \omega\times \{-1\},\quad \Gamma_0:=\gamma_0\times[-1,1]. \end{aligned}$$ Let ${\mbox{\boldmath{$x$}}}=(x_1,x_2,x_3)$ be a generic point in $\bar{\Omega}$ and we consider the notation ${\partial}_i$ for the partial derivative with respect to $x_i$. We define the following projection map, $$\begin{aligned} \pi^\varepsilon:{\mbox{\boldmath{$x$}}}=(x_1,x_2,x_3)\in \bar{\Omega} \longrightarrow \pi^\varepsilon({\mbox{\boldmath{$x$}}})={\mbox{\boldmath{$x$}}}^\varepsilon=(x_i^\varepsilon)=(x_1^{\varepsilon},x_2^{\varepsilon},x_3^{\varepsilon})=(x_1,x_2,\varepsilon x_3)\in \bar{\Omega}^\varepsilon, \end{aligned}$$ hence, ${\partial}_\alpha^\varepsilon={\partial}_\alpha $ and ${\partial}_3^\varepsilon=\frac1{\varepsilon}{\partial}_3$. We consider the scaled unknown ${\mbox{\boldmath{$u$}}}(\varepsilon)=(u_i(\varepsilon)):[0,T]\times \bar{\Omega}\longrightarrow \mathbb{R}^3$ and the scaled vector fields ${\mbox{\boldmath{$v$}}}=(v_i):\bar{\Omega}\longrightarrow \mathbb{R}^3 $ defined as $$\begin{aligned} u_i^\varepsilon(t,{\mbox{\boldmath{$x$}}}^\varepsilon)=:u_i(\varepsilon)(t,{\mbox{\boldmath{$x$}}}) \ \textrm{and} \ v_i^\varepsilon({\mbox{\boldmath{$x$}}}^\varepsilon)=:v_i({\mbox{\boldmath{$x$}}}) \ \forall {\mbox{\boldmath{$x$}}}^\varepsilon=\pi^\varepsilon({\mbox{\boldmath{$x$}}})\in \bar{\Omega}^\varepsilon, \ \forall \ t\in[0,T]. \end{aligned}$$ We remind that, by hypothesis, the Lamé and viscosity constants are independent of $\varepsilon$. 
Also, let the functions, $\Gamma_{ij}^{p,\varepsilon}, g^\varepsilon, A^{ijkl,\varepsilon}, B^{ijkl,\varepsilon}$ defined in (\[simbolos3D\]), (\[g\]), (\[TensorAeps\]) and (\[TensorBeps\]), be associated with the functions $\Gamma_{ij}^p(\varepsilon), g(\varepsilon), A^{ijkl}(\varepsilon), B^{ijkl}(\varepsilon)$ defined by $$\begin{aligned} \label{escalado_simbolos} &\Gamma_{ij}^p(\varepsilon)({\mbox{\boldmath{$x$}}}):=\Gamma_{ij}^{p,\varepsilon}({\mbox{\boldmath{$x$}}}^\varepsilon),\\\label{escalado_g} & g(\varepsilon)({\mbox{\boldmath{$x$}}}):=g^\varepsilon({\mbox{\boldmath{$x$}}}^\varepsilon),\\\label{tensorA_escalado} & A^{ijkl}(\varepsilon)({\mbox{\boldmath{$x$}}}):=A^{ijkl,\varepsilon}({\mbox{\boldmath{$x$}}}^\varepsilon),\\\label{tensorB_escalado} & B^{ijkl}(\varepsilon)({\mbox{\boldmath{$x$}}}):=B^{ijkl,\varepsilon}({\mbox{\boldmath{$x$}}}^\varepsilon), \end{aligned}$$ for all ${\mbox{\boldmath{$x$}}}^\varepsilon=\pi^\varepsilon({\mbox{\boldmath{$x$}}})\in\bar{\Omega}^\varepsilon$. 
For all ${\mbox{\boldmath{$v$}}}=(v_i)\in [H^1(\Omega)]^3$, we associate the scaled linearized strains $({e_{i||j}}({\varepsilon};{\mbox{\boldmath{$v$}}}))\in L^2(\Omega)$, defined by $$\begin{aligned} &{e_{\alpha||\beta}}(\varepsilon;{\mbox{\boldmath{$v$}}}):=\frac{1}{2}({\partial}_\beta v_\alpha + {\partial}_\alpha v_\beta) - \Gamma_{\alpha\beta}^p(\varepsilon)v_p,\\ & {e_{\alpha||3}}(\varepsilon;{\mbox{\boldmath{$v$}}}):=\frac{1}{2}(\frac{1}{{\varepsilon}}{\partial}_3 v_\alpha + {\partial}_\alpha v_3) - \Gamma_{\alpha 3}^p(\varepsilon)v_p,\\ & {e_{3||3}}(\varepsilon;{\mbox{\boldmath{$v$}}}):=\frac1{\varepsilon}{\partial}_3v_3.\end{aligned}$$ Note that with these definitions it is verified that $$\begin{aligned} {e_{i||j}}^{\varepsilon}({\mbox{\boldmath{$v$}}}^{\varepsilon})(\pi^{\varepsilon}({\mbox{\boldmath{$x$}}}))={e_{i||j}}({\varepsilon};{\mbox{\boldmath{$v$}}})({\mbox{\boldmath{$x$}}}) \ \forall{\mbox{\boldmath{$x$}}}\in\Omega.\end{aligned}$$ The functions $\Gamma_{ij}^p(\varepsilon), g(\varepsilon), A^{ijkl}(\varepsilon), B^{ijkl}(\varepsilon)$ converge in $\mathcal{C}^0(\bar{\Omega})$ when $\varepsilon$ tends to zero. When we consider $\varepsilon=0$, the functions will be defined with respect to ${\mbox{\boldmath{$y$}}}\in\bar{\omega}$. We shall distinguish the three-dimensional Christoffel symbols from the two-dimensional ones by using $\Gamma_{\alpha \beta}^\sigma(\varepsilon)$ and $ \Gamma_{\alpha\beta}^\sigma$, respectively. The next result is an adaptation of $(b)$ in Theorem 3.3-2 of [@Ciarlet4b] to the viscoelastic case. We will study the asymptotic behavior of the scaled contravariant components $A^{ijkl}({\varepsilon}), B^{ijkl}({\varepsilon})$ of the three-dimensional elasticity and viscosity tensors defined in (\[tensorA\_escalado\])–(\[tensorB\_escalado\]), as ${\varepsilon}\rightarrow0$.
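The relation ${e_{i||j}}^{\varepsilon}({\mbox{\boldmath{$v$}}}^{\varepsilon})(\pi^{\varepsilon}({\mbox{\boldmath{$x$}}}))={e_{i||j}}({\varepsilon};{\mbox{\boldmath{$v$}}})({\mbox{\boldmath{$x$}}})$ is, when the Christoffel symbols vanish, just the chain rule applied to $x_3^{\varepsilon}={\varepsilon}x_3$. A SymPy spot check (our own sketch; the test field is an arbitrary hypothetical choice):

```python
import sympy as sp

x1, x2, x3, x3e, eps = sp.symbols('x1 x2 x3 x3e epsilon', positive=True)

# arbitrary smooth test field v on the fixed domain Omega (hypothetical choice)
v = [x1 * x3, sp.sin(x2) * x3, x1 * x2 + x3**2]
# the same field read on the thin domain through pi^eps: v^eps(x^eps) := v(x), x3^eps = eps*x3
veps = [c.subs(x3, x3e / eps) for c in v]

# with zero Christoffel symbols: e^eps_{3||3}(v^eps) = d3^eps v3^eps vs. e_{3||3}(eps; v)
lhs = sp.diff(veps[2], x3e).subs(x3e, eps * x3)
rhs = sp.diff(v[2], x3) / eps
assert sp.simplify(lhs - rhs) == 0

# and e^eps_{1||3}(v^eps) = (1/2)(d3^eps v1^eps + d1^eps v3^eps) vs. e_{1||3}(eps; v)
lhs = sp.Rational(1, 2) * (sp.diff(veps[0], x3e) + sp.diff(veps[2], x1))
lhs = lhs.subs(x3e, eps * x3)
rhs = sp.Rational(1, 2) * (sp.diff(v[0], x3) / eps + sp.diff(v[2], x1))
assert sp.simplify(lhs - rhs) == 0
```

The factors $1/{\varepsilon}$ in ${e_{\alpha||3}}$ and ${e_{3||3}}$ thus simply record that a transverse derivative on the thin domain is $1/{\varepsilon}$ times a transverse derivative on the fixed domain.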
We show their uniform positive definiteness not only with respect to ${\mbox{\boldmath{$x$}}}\in\bar{\Omega}$, but also with respect to ${\varepsilon}$, $0<{\varepsilon}\leq{\varepsilon}_0$. Finally, their limits are functions of ${\mbox{\boldmath{$y$}}}\in\bar{\omega}$ only, that is, independent of the transversal variable $x_3$. \[Th\_comportamiento asintotico\] Let $\omega$ be a domain in $\mathbb{R}^2$ and let ${\mbox{\boldmath{$\theta$}}}\in\mathcal{C}^2(\bar{\omega};\mathbb{R}^3)$ be an injective mapping such that the two vectors ${\mbox{\boldmath{$a$}}}_\alpha={\partial}_\alpha{\mbox{\boldmath{$\theta$}}}$ are linearly independent at all points of $\bar{\omega}$, let $a^{\alpha\beta}$ denote the contravariant components of the metric tensor of $S={\mbox{\boldmath{$\theta$}}}(\bar{\omega})$. In addition to that, let the other assumptions on the mapping ${\mbox{\boldmath{$\theta$}}}$ and the definition of ${\varepsilon}_0$ be as in Theorem \[var\_0\]. The contravariant components $A^{ijkl}({\varepsilon}), B^{ijkl}({\varepsilon})$ of the scaled three-dimensional elasticity and viscosity tensors, respectively, defined in (\[tensorA\_escalado\])–(\[tensorB\_escalado\]) satisfy $$\begin{aligned} A^{ijkl}({\varepsilon})= A^{ijkl}(0) + O({\varepsilon}) \ \textrm{and} \ A^{\alpha\beta\sigma 3}({\varepsilon})=A^{\alpha 3 3 3}({\varepsilon})=0, \\ B^{ijkl}({\varepsilon})= B^{ijkl}(0) + O({\varepsilon}) \ \textrm{and} \ B^{\alpha\beta\sigma 3}({\varepsilon})=B^{\alpha 3 3 3}({\varepsilon})=0 ,\end{aligned}$$ for all ${\varepsilon}$, $0<{\varepsilon}\leq {\varepsilon}_0$, and $$\begin{aligned} A^{\alpha\beta\sigma\tau}(0)&= \lambda a^{\alpha\beta}a^{\sigma\tau} + \mu(a^{\alpha\sigma}a^{\beta\tau} + a^{\alpha\tau}a^{\beta\sigma}), & A^{\alpha\beta 3 3}(0)&= \lambda a^{\alpha\beta}, \\ A^{\alpha 3\sigma 3}(0)&=\mu a^{\alpha\sigma} ,& A^{33 3 3}(0)&= \lambda + 2\mu, \\ A^{\alpha\beta\sigma 3}(0) &=A^{\alpha 333}(0)=0, \\ B^{\alpha\beta\sigma\tau}(0)&= \theta 
a^{\alpha\beta}a^{\sigma\tau} + \frac{\rho}{2}(a^{\alpha\sigma}a^{\beta\tau} + a^{\alpha\tau}a^{\beta\sigma}),& B^{\alpha\beta 3 3}(0)&= \theta a^{\alpha\beta}, \\ B^{\alpha 3\sigma 3}(0)&=\frac{\rho}{2} a^{\alpha\sigma} ,& B^{33 3 3}(0)&= \theta + \rho, \\ B^{\alpha\beta\sigma 3}(0) &=B^{\alpha 333}(0)=0.\end{aligned}$$ Moreover, there exist two constants $C_e>0$ and $C_v>0$, independent of the variables and ${\varepsilon}$, such that $$\begin{aligned} \label{elipticidadA_eps} \sum_{i,j}|t_{ij}|^2\leq C_e A^{ijkl}(\varepsilon)({\mbox{\boldmath{$x$}}})t_{kl}t_{ij},\\\label{elipticidadB_eps} \sum_{i,j}|t_{ij}|^2 \leq C_v B^{ijkl}(\varepsilon)({\mbox{\boldmath{$x$}}})t_{kl}t_{ij}, \end{aligned}$$ for all ${\varepsilon}$, $0<{\varepsilon}\leq{\varepsilon}_0$, for all ${\mbox{\boldmath{$x$}}}\in\bar{\Omega}$ and all ${\mbox{\boldmath{$t$}}}=(t_{ij})\in\mathbb{S}^3$. Note that the proof for the scaled viscosity tensor $\left(B^{ijkl}(\varepsilon)\right)$ would follow the steps of the proof for the elasticity tensor $\left(A^{ijkl}({\varepsilon})\right)$ in Theorem 3.3-2 of [@Ciarlet4b], since, qualitatively, their expressions differ only in replacing the Lamé constants by the two viscosity coefficients. The asymptotic behavior of $g({\varepsilon})$ and the contravariant components of elasticity and viscosity tensors, $A^{ijkl}({\varepsilon})$, $B^{ijkl}({\varepsilon})$ also implies that $$\begin{aligned} \label{tensorA_tildes} A^{ijkl}({\varepsilon})\sqrt{g({\varepsilon})}= A^{ijkl}(0)\sqrt{a} + {\varepsilon}\tilde{A}^{ijkl,1} + {\varepsilon}^2 \tilde{A}^{ijkl,2} + o({\varepsilon}^2), \\ \label{tensorB_tildes} B^{ijkl}({\varepsilon})\sqrt{g({\varepsilon})}= B^{ijkl}(0)\sqrt{a} + {\varepsilon}\tilde{B}^{ijkl,1} + {\varepsilon}^2 \tilde{B}^{ijkl,2} + o({\varepsilon}^2),\end{aligned}$$ for some regular contravariant components $\tilde{A}^{ijkl,\alpha}, \tilde{B}^{ijkl,\alpha}$ of certain tensors.
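In the flat case $a^{\alpha\beta}=\delta^{\alpha\beta}$ (a plate), the limit components $A^{ijkl}(0)$ listed in the theorem coincide with the corresponding entries of the Cartesian isotropic elasticity tensor, which gives a quick consistency check. The sketch below is our own illustration with arbitrary Lamé coefficients:

```python
import numpy as np

lam, mu = 2.0, 1.0   # illustrative Lame coefficients
d = np.eye(3)
A3 = (lam * np.einsum('ij,kl->ijkl', d, d)
      + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

# limit components of the theorem, specialized to a^{alpha beta} = delta^{alpha beta}
assert np.isclose(A3[2, 2, 2, 2], lam + 2 * mu)        # A^{3333}(0) = lambda + 2*mu
assert np.allclose(A3[:2, :2, 2, 2], lam * np.eye(2))  # A^{ab33}(0) = lambda*a^{ab}
assert np.allclose(A3[:2, 2, :2, 2], mu * np.eye(2))   # A^{a3s3}(0) = mu*a^{as}
assert np.allclose(A3[:2, :2, :2, 2], 0.0)             # A^{abs3}(0) = 0
assert np.allclose(A3[:2, 2, 2, 2], 0.0)               # A^{a333}(0) = 0
```

The same check applies verbatim to $B^{ijkl}(0)$ after the substitution $(\lambda,\mu)\mapsto(\theta,\rho/2)$.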
Let the scaled applied forces ${\mbox{\boldmath{$f$}}}(\varepsilon):[0,T]\times \Omega\longrightarrow \mathbb{R}^3$ and ${\mbox{\boldmath{$h$}}}(\varepsilon):[0,T]\times (\Gamma_+\cup\Gamma_-)\longrightarrow \mathbb{R}^3$ be defined by $$\begin{aligned} {\mbox{\boldmath{$f$}}}^{\varepsilon}&=(f^{i,\varepsilon})(t,{\mbox{\boldmath{$x$}}}^\varepsilon)=:{\mbox{\boldmath{$f$}}}({\varepsilon})= (f^i(\varepsilon))(t,{\mbox{\boldmath{$x$}}}) \\ \nonumber &\forall {\mbox{\boldmath{$x$}}}\in\Omega, \ \textrm{where} \ {\mbox{\boldmath{$x$}}}^\varepsilon=\pi^\varepsilon({\mbox{\boldmath{$x$}}})\in \Omega^\varepsilon \ \textrm{and} \ \forall t\in[0,T], \\ {\mbox{\boldmath{$h$}}}^{\varepsilon}&=(h^{i,\varepsilon})(t,{\mbox{\boldmath{$x$}}}^\varepsilon)=:{\mbox{\boldmath{$h$}}}({\varepsilon})= (h^i(\varepsilon))(t,{\mbox{\boldmath{$x$}}}) \\ \nonumber &\forall {\mbox{\boldmath{$x$}}}\in\Gamma_+\cup\Gamma_-, \ \textrm{where} \ {\mbox{\boldmath{$x$}}}^\varepsilon=\pi^\varepsilon({\mbox{\boldmath{$x$}}})\in \Gamma_+^\varepsilon\cup\Gamma_-^\varepsilon \ \textrm{and} \ \forall t\in[0,T]. \end{aligned}$$ Also, we introduce ${\mbox{\boldmath{$u$}}}_0({\varepsilon}): \Omega \longrightarrow \mathbb{R}^3$ as $$\begin{aligned} {\mbox{\boldmath{$u$}}}_0({\varepsilon})({\mbox{\boldmath{$x$}}}):={\mbox{\boldmath{$u$}}}_0^{\varepsilon}({\mbox{\boldmath{$x$}}}^{\varepsilon}) \ \forall {\mbox{\boldmath{$x$}}}\in\Omega, \ \textrm{where} \ {\mbox{\boldmath{$x$}}}^\varepsilon=\pi^\varepsilon({\mbox{\boldmath{$x$}}})\in \Omega^\varepsilon, \end{aligned}$$ and define the space $$\begin{aligned} V(\Omega)=\{{\mbox{\boldmath{$v$}}}=(v_i)\in [H^1(\Omega)]^3; {\mbox{\boldmath{$v$}}}=\mathbf{0} \ on \ \Gamma_0\}, \end{aligned}$$ which is a Hilbert space, with associated norm denoted by $||\cdot||_{1,\Omega}$. 
The scaled variational problem can then be written as follows: \[problema\_escalado\] Find ${\mbox{\boldmath{$u$}}}(\varepsilon):[0,T]\times \Omega\longrightarrow \mathbb{R}^3$ such that, $$\begin{aligned} \nonumber &{\mbox{\boldmath{$u$}}}(\varepsilon)(t,\cdot)\in V(\Omega) {\ \forall \ t\in[0,T]}, \\ \nonumber &\int_{\Omega}A^{ijkl}(\varepsilon)e_{k||l}(\varepsilon;{\mbox{\boldmath{$u$}}}(\varepsilon))e_{i||j}(\varepsilon;{\mbox{\boldmath{$v$}}})\sqrt{g(\varepsilon)} dx + \int_{\Omega} B^{ijkl}(\varepsilon)e_{k||l}(\varepsilon;\dot{{\mbox{\boldmath{$u$}}}}(\varepsilon))e_{i||j}(\varepsilon;{\mbox{\boldmath{$v$}}}) \sqrt{g(\varepsilon)} dx \\ \label{ec_problema_escalado} & \quad= \int_{\Omega} {{{f}}}^{i}(\varepsilon) v_i \sqrt{g(\varepsilon)} dx + \frac{1}{\varepsilon}\int_{\Gamma_+\cup\Gamma_-} {{{h}}}^{i}(\varepsilon) v_i\sqrt{g(\varepsilon)} d\Gamma \quad \forall {\mbox{\boldmath{$v$}}}\in V(\Omega), {\ a.e. \ \textrm{in} \ (0,T)}, \\\displaystyle \nonumber & {\mbox{\boldmath{$u$}}}({\varepsilon})(0,\cdot)= {\mbox{\boldmath{$u$}}}_0({\varepsilon})(\cdot). \end{aligned}$$ Note that the order of the applied forces has not been determined yet. The proof that Problem \[problema\_escalado\] has a unique solution is deferred to Section \[preliminares\] (see Theorem \[Theorema\_exist\_escalado\_sin\_orden\]). Technical preliminaries ======================= \[preliminares\] We first present some theorems concerning geometrical and mechanical preliminaries, which will be used in the following sections. We then show some new results related to the existence and uniqueness of solutions of the problems presented in this paper. First, we recall Theorem 3.3-1 of [@Ciarlet4b].
\[Th\_simbolos2D\_3D\] Let $\omega$ be a domain in $\mathbb{R}^2$, let ${\mbox{\boldmath{$\theta$}}}\in\mathcal{C}^3(\bar{\omega};\mathbb{R}^3)$ be an injective mapping such that the two vectors ${\mbox{\boldmath{$a$}}}_\alpha={\partial}_\alpha{\mbox{\boldmath{$\theta$}}}$ are linearly independent at all points of $\bar{\omega}$ and let ${\varepsilon}_0>0$ be as in Theorem \[var\_0\]. The functions $\Gamma^p_{ij}({\varepsilon})=\Gamma^p_{ji}({\varepsilon})$ and $g({\varepsilon})$ are defined in (\[escalado\_simbolos\])–(\[escalado\_g\]), the functions $b_{\alpha\beta}, b_\alpha^\sigma, \Gamma_{\alpha\beta}^\sigma, a$ are defined in Section \[problema\] and the covariant derivatives $b_\beta^\sigma|_\alpha$ are defined by $$\begin{aligned} \label{b_barra} b_\beta^\sigma|_\alpha:={\partial}_\alpha b_\beta^\sigma +\Gamma^\sigma_{\alpha\tau}b_\beta^\tau - \Gamma^\tau_{\alpha\beta}b^\sigma_\tau.\end{aligned}$$ The functions $b_{\alpha\beta}, b_\alpha^\sigma, \Gamma_{\alpha\beta}^\sigma, b_\beta^\sigma|_\alpha$ and $a$ are identified with functions in $\mathcal{C}^0(\bar{\Omega})$.
Then $$\begin{aligned} \begin{aligned}[c] \Gamma_{\alpha\beta}^\sigma({\varepsilon})&= \Gamma_{\alpha\beta}^\sigma -{\varepsilon}x_3b_\beta^\sigma|_\alpha + O({\varepsilon}^2), \\ {\partial}_3 \Gamma_{\alpha\beta}^p({\varepsilon})&= O({\varepsilon}), \\ \Gamma_{\alpha3}^3({\varepsilon})&=\Gamma_{33}^p({\varepsilon})=0, \end{aligned} \qquad \begin{aligned}[c] \Gamma_{\alpha\beta}^3({\varepsilon})&=b_{\alpha\beta} - {\varepsilon}x_3 b_\alpha^\sigma b_{\sigma\beta}, \\ \Gamma_{\alpha3}^\sigma({\varepsilon})& = -b_\alpha^\sigma - {\varepsilon}x_3 b_\alpha^\tau b_\tau^\sigma + O({\varepsilon}^2), \\ g(\varepsilon)&=a + O(\varepsilon), \end{aligned}\end{aligned}$$ for all ${\varepsilon}$, $0<{\varepsilon}\leq{\varepsilon}_0$, where the order symbols $O({\varepsilon})$ and $O({\varepsilon}^2)$ are meant with respect to the norm $||\cdot||_{0,\infty,\bar{\Omega}}$ defined by $$\begin{aligned} ||w||_{0,\infty,\bar{\Omega}}=\sup \{|w({\mbox{\boldmath{$x$}}})|; {\mbox{\boldmath{$x$}}}\in\bar{\Omega}\}.\end{aligned}$$ Finally, there exist constants $a_0, g_0$ and $g_1$ such that $$\begin{aligned} & 0<a_0\leq a({\mbox{\boldmath{$y$}}}) \ \forall {\mbox{\boldmath{$y$}}}\in \bar{\omega}, \\ & 0<g_0\leq g(\varepsilon)({\mbox{\boldmath{$x$}}}) \leq g_1 \ \forall {\mbox{\boldmath{$x$}}}\in\bar{\Omega} \ \textrm{and} \ \forall \ {\varepsilon}, 0<\varepsilon\leq \varepsilon_0. \end{aligned}$$ We now include the following result that will be used repeatedly in what follows (see Theorem 3.4-1, [@Ciarlet4b], for details). \[th\_int\_nula\] Let $\omega$ be a domain in $\mathbb{R}^2$ with boundary $\gamma$, let $\Omega=\omega\times (-1,1)$, and let $g\in L^p(\Omega)$, $p>1$, be a function such that $$\begin{aligned} {\int_{\Omega}}g {\partial}_3v dx=0, \ \textrm{for all} \ v\in \mathcal{C}^{\infty}(\bar{\Omega}) \ \textrm{with} \ v=0 { \ \textrm{on} \ }\gamma\times[-1,1]. 
\end{aligned}$$ Then $g=0.$ This result holds if ${\int_{\Omega}}g {\partial}_3v dx=0$ for all $v\in H^1(\Omega)$ such that $v=0$ on $\Gamma_0$. This is the form in which we will use the result in what follows. Below we present several results related to the existence and uniqueness of the solutions of the problems presented in this paper. Moreover, we show the regularity of these solutions depending on the regularity of the data provided. Let $V$ be a Hilbert space. We denote by $(\cdot,\cdot)_V$ and $||\cdot||_V$ the corresponding inner product and associated norm. Consider the bounded operators $B:V\longrightarrow V$, $A:V\longrightarrow V$ and a function $f:(0,T)\longrightarrow V$. Let also $u_0\in V$. We are interested in studying the problem \[problema\_apendice\] Find $u: [0,T]\to V$ such that, $$\begin{aligned} & B \dot{u}(t)+ A u(t) =f(t) {\ a.e. \ t\in(0,T)}, \\ & u(0)=u_0. \end{aligned}$$ \[teorema\_existenciayunicidad\] Assume that $B:V\longrightarrow V$ is a strongly monotone, Lipschitz-continuous operator and $A:V\longrightarrow V$ is a Lipschitz-continuous operator. Also, let $u_0\in V$ and $f\in L^2(0,T;V)$. Then, Problem \[problema\_apendice\] has a unique solution $u\in W^{1,2}(0,T;V)$. The proof of this theorem can be found in Theorem 3.3 of [@pto_fijo], where the author uses the inverse of the operator $A$ and the Banach fixed point theorem. Alternatively, we can prove the result without explicitly using the inverse of the operator by using its Lipschitz-continuity instead. The existence and uniqueness of solutions of inhomogeneous evolution equations, when the operator $B$ is the identity, can be found in Chapter 6 of [@Yosida]. In addition, in [@Mascarenhas] the author proves the scalar version for the quasi-static case and with no body loadings. In Chapter 6 of [@SanchezPhy], it is shown that these restrictions can be dropped, obtaining the existence of a unique solution in the framework of semigroup theory.
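In finite dimensions Problem \[problema\_apendice\] becomes a linear system of ordinary differential equations, and its structure can be made concrete with an implicit Euler scheme. The following sketch uses symmetric positive definite matrices as stand-ins for the strongly monotone operators $B$ and $A$; all matrices, data and step counts are illustrative assumptions.

```python
import numpy as np

# Implicit Euler for the finite-dimensional analogue of the problem
#   B u'(t) + A u(t) = f(t),  u(0) = u0,
# with B, A symmetric positive definite (hence strongly monotone and Lipschitz).
rng = np.random.default_rng(1)
n, T, steps = 4, 1.0, 1000
M = rng.standard_normal((n, n)); B = M @ M.T + n * np.eye(n)
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)
f = lambda t: np.array([np.sin(t), np.cos(t), t, 1.0])
u0 = np.zeros(n)

dt = T / steps
u_prev, u = u0, u0
for k in range(1, steps + 1):
    u_prev = u
    # (B + dt*A) u_k = B u_{k-1} + dt f(t_k): one strongly monotone linear solve per step
    u = np.linalg.solve(B + dt * A, B @ u_prev + dt * f(k * dt))

# by construction each step satisfies B (u_k - u_{k-1})/dt + A u_k = f(t_k)
residual = B @ (u - u_prev) / dt + A @ u - f(T)
assert np.max(np.abs(residual)) < 1e-8
```

Since $B + dt\,A$ inherits positive definiteness, each step is uniquely solvable, mirroring the role of strong monotonicity in the existence theorem.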
\[Cor\_ex\_un\_reg\] Under the assumptions of the previous theorem if, in addition, $\dot{f}\in L^2(0,T;V)$ and the operators $A$ and $B$ are linear, then Problem \[problema\_apendice\] has a unique solution $u\in{W}^{2,2}(0,T;V)$. The existence and uniqueness of $u\in W^{1,2}(0,T;V)$ is a consequence of Theorem \[teorema\_existenciayunicidad\]. Let us now establish the additional regularity of the solution. To do so, consider the equation $$\label{Bz} B\dot{z}(t) + A{z}(t)=\dot{f}(t), {\ a.e. \ t\in(0,T)},$$ with the initial condition $B{z}(0)=f(0) - A u_0\in V$. By Theorem \[teorema\_existenciayunicidad\] there exists a unique solution $z\in W^{1,2}(0,T;V)$ of (\[Bz\]). Now, if we integrate the equation and substitute the initial condition, by the linearity of the operator $B$ we find that $$\begin{aligned} B({{z}}(t)) - B({z}(0))+ \int_{0}^{t}A {z}(s)ds= f(t) - f(0).\end{aligned}$$ Let ${w}(t)=u_0 + \int_{0}^{t}{z}(s)ds$, so that $\dot{w}(t)={z}(t)$ and $w(0)=u_0$. Due to the linearity of the operator $A$ we find that $$\begin{aligned} B\dot{w}(t)+ A ({w}(t)-{u}_0)=f(t)-Au_0,\end{aligned}$$ hence, $$\begin{aligned} B\dot{w}(t)+ A {w}(t)=f(t).\end{aligned}$$ Since by Theorem \[teorema\_existenciayunicidad\] there is a unique solution of this equation, we deduce that $u={w}\in {W}^{1,2}(0,T;V)$. Moreover, as $z$ is a solution of (\[Bz\]), $\dot{u}=\dot{w}=z\in{W}^{1,2}(0,T;V)$. Therefore, we conclude $u\in{W}^{2,2}(0,T;V)$.
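The integration trick in the proof above can also be checked numerically: with illustrative symmetric positive definite matrices $B$, $A$ and smooth data, the solution $z$ of the differentiated problem integrates back to the solution $u$ of the original one. A minimal sketch, where discretization errors of order $dt$ are expected:

```python
import numpy as np

# Check, in a finite-dimensional analogue, that if z solves B z' + A z = f' with
# B z(0) = f(0) - A u0, then w(t) = u0 + Int_0^t z(s) ds coincides with the
# solution u of B u' + A u = f, u(0) = u0.  All data are illustrative assumptions.
rng = np.random.default_rng(2)
n, T, steps = 3, 1.0, 4000
M = rng.standard_normal((n, n)); B = M @ M.T + n * np.eye(n)
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)
u0 = rng.standard_normal(n)
f  = lambda t: np.array([np.sin(t), np.exp(-t), 1.0 + t])
df = lambda t: np.array([np.cos(t), -np.exp(-t), 1.0])
dt = T / steps

u = u0.copy()
z = np.linalg.solve(B, f(0.0) - A @ u0)   # initial condition for the z-problem
w = u0.copy()
for k in range(1, steps + 1):
    t = k * dt
    u = np.linalg.solve(B + dt * A, B @ u + dt * f(t))    # implicit Euler for u
    z = np.linalg.solve(B + dt * A, B @ z + dt * df(t))   # implicit Euler for z
    w = w + dt * z                                        # w(t) = u0 + Int z

assert np.max(np.abs(w - u)) < 1e-2   # w ≈ u up to discretization error
```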
\[Thexistunic\] Let $\Omega^{\varepsilon}$ be a domain in $\mathbb{R}^3$ defined as in Section \[problema\] and let ${\mbox{\boldmath{$\Theta$}}}$ be a $\mathcal{C}^2$-diffeomorphism of $\bar{\Omega}^{\varepsilon}$ in its image ${\mbox{\boldmath{$\Theta$}}}(\bar{\Omega}^{\varepsilon})$, such that the three vectors ${\mbox{\boldmath{$g$}}}_i^{\varepsilon}({\mbox{\boldmath{$x$}}})={\partial}_i^{\varepsilon}{\mbox{\boldmath{$\Theta$}}}({\mbox{\boldmath{$x$}}}^{\varepsilon})$ are linearly independent for all ${\mbox{\boldmath{$x$}}}^{\varepsilon}\in\bar{\Omega}^{\varepsilon}$. Let $\Gamma_0^{\varepsilon}$ be a $d\Gamma^{\varepsilon}$-measurable subset of $\Gamma^{\varepsilon}={\partial}\Omega^{\varepsilon}$ such that $meas(\Gamma_0^{\varepsilon})>0.$ Let ${{{f}}}^{i,{\varepsilon}}\in L^{2}(0,T; L^2(\Omega^{\varepsilon})) $, ${{{h}}}^{i,{\varepsilon}}\in L^{2}(0,T; L^2(\Gamma_1^{\varepsilon}))$, where $\Gamma_1^{\varepsilon}:= \Gamma_+^{\varepsilon}\cup\Gamma_-^{\varepsilon}$. Let ${\mbox{\boldmath{$u$}}}_0^{\varepsilon}\in V(\Omega^{\varepsilon}). $ Then, there exists a unique solution ${\mbox{\boldmath{$u$}}}^{\varepsilon}=(u_i^{\varepsilon}):[0,T]\times\Omega^{\varepsilon}\rightarrow \mathbb{R}^3$ satisfying the Problem \[problema\_eps\]. Moreover ${\mbox{\boldmath{$u$}}}^{\varepsilon}\in W^{1,2}(0,T;V(\Omega^{\varepsilon}))$. In addition to that, if $\dot{{{{f}}}}^{i,{\varepsilon}}\in L^{2}(0,T; L^2(\Omega^{\varepsilon})) $, $\dot{{{{h}}}}^{i,{\varepsilon}}\in L^{2}(0,T; L^2(\Gamma_1^{\varepsilon}))$, then ${\mbox{\boldmath{$u$}}}^{\varepsilon}\in W^{2,2}(0,T;V(\Omega^{\varepsilon}))$. Let $V=V(\Omega^{\varepsilon})$ for simplicity. 
By the Riesz Representation Theorem we find that there exist bounded linear operators $B:V\longrightarrow V ,$ $A:V\longrightarrow V$ and ${\mbox{\boldmath{$f$}}}\in V$ such that $$\begin{aligned} (B{\mbox{\boldmath{$u$}}}^{\varepsilon},{\mbox{\boldmath{$v$}}}^{\varepsilon})_{V}&:=\int_{\Omega^\varepsilon} B^{ijkl,\varepsilon}e^\varepsilon_{k||l}({{\mbox{\boldmath{$u$}}}}^\varepsilon)e_{i||j}^{\varepsilon}({\mbox{\boldmath{$v$}}}^\varepsilon) \sqrt{g^\varepsilon} dx^\varepsilon, \\ (A {\mbox{\boldmath{$u$}}}^{\varepsilon},{\mbox{\boldmath{$v$}}}^{\varepsilon})_{V}&:=\int_{\Omega^\varepsilon}A^{ijkl,\varepsilon}e^\varepsilon_{k||l}({\mbox{\boldmath{$u$}}}^\varepsilon)e^\varepsilon_{i||j}({\mbox{\boldmath{$v$}}}^\varepsilon)\sqrt{g^\varepsilon} dx^\varepsilon, \\ ({\mbox{\boldmath{$f$}}},{\mbox{\boldmath{$v$}}}^{\varepsilon})_{V}&:=\int_{\Omega^\varepsilon} f^{i,\varepsilon} v_i^\varepsilon \sqrt{g^\varepsilon} dx^\varepsilon + \int_{\Gamma_1^\varepsilon} h^{i,\varepsilon} v_i^\varepsilon\sqrt{g^\varepsilon} d\Gamma^\varepsilon ,\end{aligned}$$ for all ${\mbox{\boldmath{$u$}}}^{\varepsilon}, {\mbox{\boldmath{$v$}}}^{\varepsilon}\in V$. The operators $B$ and $A$ are strongly monotone as a consequence of the ellipticity of the fourth order tensors $(A^{ijkl,{\varepsilon}})$ and $(B^{ijkl,{\varepsilon}})$ in (\[elipticidadA\])–(\[elipticidadB\]). Hence, Problem \[problema\_eps\] can be written as: Find ${\mbox{\boldmath{$u$}}}^{\varepsilon}:[0,T]\times\Omega^{\varepsilon}\longrightarrow \mathbb{R}^3$ such that, $$\begin{aligned} & {\mbox{\boldmath{$u$}}}^{\varepsilon}(t)\in V {\ \forall \ t\in[0,T]},\\ & B \dot{{\mbox{\boldmath{$u$}}}}^{\varepsilon}(t) + A {\mbox{\boldmath{$u$}}}^{\varepsilon}(t)={\mbox{\boldmath{$f$}}}(t) {\ a.e.
\ t\in(0,T)},\\ &{\mbox{\boldmath{$u$}}}^{\varepsilon}(0)={\mbox{\boldmath{$u$}}}_0^{\varepsilon}\ \textrm{in}\ V.\end{aligned}$$ Therefore, we can apply Theorem \[teorema\_existenciayunicidad\] and conclude that ${\mbox{\boldmath{$u$}}}^{\varepsilon}\in{W}^{1,2}(0,T;V)$. Moreover, if $\dot{{{{f}}}}^{i,{\varepsilon}}\in L^{2}(0,T; L^2(\Omega^{\varepsilon})) $, $\dot{{{{h}}}}^{i,{\varepsilon}}\in L^{2}(0,T; L^2(\Gamma_1^{\varepsilon}))$, then the hypotheses of Corollary \[Cor\_ex\_un\_reg\] are satisfied and we conclude that ${\mbox{\boldmath{$u$}}}^{\varepsilon}\in W^{2,2}(0,T;V)$. \[Theorema\_exist\_escalado\_sin\_orden\] Let $\Omega$ be a domain in $\mathbb{R}^3$ defined as in Section \[seccion\_dominio\_ind\] and let ${\mbox{\boldmath{$\Theta$}}}$ be a $\mathcal{C}^2$-diffeomorphism of $\bar{\Omega}$ onto its image ${\mbox{\boldmath{$\Theta$}}}(\bar{\Omega})$, such that the three vectors ${\mbox{\boldmath{$g$}}}_i={\partial}_i{\mbox{\boldmath{$\Theta$}}}({\mbox{\boldmath{$x$}}})$ are linearly independent for all ${\mbox{\boldmath{$x$}}}\in\bar{\Omega}$. Let ${{{f}}}^{i}({\varepsilon})\in L^{2}(0,T; L^2(\Omega)) $, ${{{h}}}^{i}({\varepsilon})\in L^{2}(0,T; L^2(\Gamma_1))$, where $\Gamma_1:= \Gamma_+\cup\Gamma_-$. Let ${\mbox{\boldmath{$u$}}}_0({\varepsilon})\in V(\Omega). $ Then, there exists a unique solution ${\mbox{\boldmath{$u$}}}({\varepsilon})=(u_i({\varepsilon})):[0,T]\times\Omega \rightarrow \mathbb{R}^3$ satisfying Problem \[problema\_escalado\]. Moreover, ${\mbox{\boldmath{$u$}}}({\varepsilon})\in W^{1,2}(0,T;V(\Omega))$. In addition to that, if $\dot{{{{f}}}}^{i}({\varepsilon})\in L^{2}(0,T; L^2(\Omega)) $, $\dot{{{{h}}}}^{i}({\varepsilon})\in L^{2}(0,T; L^2(\Gamma_1))$, then ${\mbox{\boldmath{$u$}}}({\varepsilon})\in W^{2,2}(0,T;V(\Omega))$.
The proof of this theorem is analogous to the proof of Theorem \[Thexistunic\], taking into account the ellipticity of the scaled fourth-order tensors in (\[elipticidadA\_eps\])–(\[elipticidadB\_eps\]) and applying Theorem \[teorema\_existenciayunicidad\] with $V=V(\Omega)$. Moreover, if $\dot{{{{f}}}}^{i}({\varepsilon})\in L^{2}(0,T; L^2(\Omega)) $, $\dot{{{{h}}}}^{i}({\varepsilon})\in L^{2}(0,T; L^2(\Gamma_1))$, then the hypotheses of Corollary \[Cor\_ex\_un\_reg\] are satisfied and we conclude that ${\mbox{\boldmath{$u$}}}({\varepsilon})\in W^{2,2}(0,T;V(\Omega))$. Now, let $\tilde{V}:= W^{1,2}(0,T; { \mathcal{Q}})$, where $\mathcal{Q}:=\{ (\Phi_{\alpha\beta})\in \mathbb{S}^2 ; \Phi_{\alpha\beta}\in L^2(\omega)\}$. Notice that $\left({ \mathcal{Q}}, (\cdot, \cdot) \right)$ is a Hilbert space, where $ (\cdot, \cdot) $ denotes its inner product. We define the operators $a: { \mathcal{Q}}\times { \mathcal{Q}}\longrightarrow \mathbb{R}$, $b: { \mathcal{Q}}\times { \mathcal{Q}}\longrightarrow \mathbb{R}$ and $c: { \mathcal{Q}}\times { \mathcal{Q}}\longrightarrow \mathbb{R}$ by $$\begin{aligned} \label{operador_a} a(\Sigma, \Phi):= {\int_{\omega}}{a^{\alpha\beta\sigma\tau}}\Sigma_{\sigma \tau} \Phi_{\alpha \beta} \sqrt{a} dy, \\ \label{operador_b} b(\Sigma, \Phi):= {\int_{\omega}}{b^{\alpha\beta\sigma\tau}}\Sigma_{\sigma \tau} \Phi_{\alpha \beta} \sqrt{a} dy, \\ \label{operador_c} c(\Sigma, \Phi):= {\int_{\omega}}{c^{\alpha\beta\sigma\tau}}\Sigma_{\sigma \tau} \Phi_{\alpha \beta} \sqrt{a} dy,\end{aligned}$$ for all $ \Sigma, \Phi \in{ \mathcal{Q}}, $ where ${a^{\alpha\beta\sigma\tau}}, {b^{\alpha\beta\sigma\tau}}$ and ${c^{\alpha\beta\sigma\tau}}$ denote the contravariant components of three fourth order two-dimensional elliptic tensors. \[teorema\_existencia\_bidimensional\] Let $f\in L^p( 0, T ;{ \mathcal{Q}})$ with $p \geq 2$, let $\Sigma_0 \in { \mathcal{Q}}$ and let $k>0$ be a constant.
Consider the strongly monotone, Lipschitz-continuous operators $a,b,c :{ \mathcal{Q}}\times { \mathcal{Q}}\longrightarrow \mathbb{R}$ defined in (\[operador\_a\])–(\[operador\_c\]). Then, there exists $\Sigma:[0,T]\longrightarrow{ \mathcal{Q}}$ unique solution to the problem $$\begin{aligned} \label{ecuacion_operadores} &a(\Sigma, \Phi) + b(\dot{\Sigma}, \Phi) - c\left( \int_0^t e^{-k(t-s)} \Sigma(s) ds, \Phi\right) = \left( f(t), \Phi\right), \ \forall \Phi\in{ \mathcal{Q}}, {\ a.e. \ \textrm{in} \ (0,T)}, \\ \label{condicion_operadores} &\Sigma(0)=\Sigma_0.\end{aligned}$$ Moreover, $\Sigma\in \tilde{V}$. In addition, if $\dot{{{{f}}}}\in L^{2}(0,T; { \mathcal{Q}}) $, then $\Sigma\in W^{2,2}(0,T;{ \mathcal{Q}})$. We first consider the auxiliary problem $$\begin{aligned} \label{ecuacion_auxiliar} &a(\Sigma_\theta, \Phi) + b(\dot{\Sigma}_\theta, \Phi) = \left( f(t), \Phi\right) + c\left( \theta, \Phi\right), \ \forall \Phi\in{ \mathcal{Q}}{\ a.e. \ \textrm{in} \ (0,T)}, \\ \label{condicion_auxiliar} &\Sigma_\theta(0)=\Sigma_0,\end{aligned}$$ where $\theta\in \tilde{V}$. Notice that by the Riesz Representation Theorem we find that there exist bounded linear operators $\tilde{B}:{ \mathcal{Q}}\longrightarrow { \mathcal{Q}},$ $\tilde{A}:{ \mathcal{Q}}\longrightarrow { \mathcal{Q}}$ and $\tilde{f}\in { \mathcal{Q}}$ such that $$\begin{aligned} (\tilde{B}\Sigma_\theta,\Phi)&:=b({\Sigma}_\theta, \Phi), \\ (\tilde{A }\Sigma_\theta,\Phi)&:= a(\Sigma_\theta, \Phi), \\ (\tilde{f},\Phi)&:=\left( f(t), \Phi\right) + c\left( \theta, \Phi\right),\end{aligned}$$ for all $\Sigma_\theta,\Phi \in { \mathcal{Q}}$. Moreover, the operators $\tilde{A}$ and $\tilde{B}$ are strongly monotone by the definitions (\[operador\_a\])–(\[operador\_b\]). Therefore, following similar arguments as in the proof of Theorem \[teorema\_existenciayunicidad\], we conclude that there exists a unique solution of the auxiliary problem satisfying $\Sigma_\theta\in \tilde{V}$. 
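Before completing the argument, the scheme that this proof formalizes can be sketched numerically in a scalar setting: given a memory term $\theta$, solve the auxiliary problem, map the result through the exponential kernel, and repeat until a fixed point is reached. All coefficients, data and the time discretization below are illustrative assumptions.

```python
import numpy as np

# Scalar sketch of the fixed-point scheme: solve
#   a*S(t) + b*S'(t) - c * Int_0^t exp(-k(t-s)) S(s) ds = f(t),  S(0) = S0,
# by iterating  theta -> Sigma_theta (auxiliary problem) -> kernel update.
a, b, c, k = 2.0, 1.0, 0.5, 1.5
T, n = 1.0, 400
t = np.linspace(0.0, T, n + 1); dt = t[1] - t[0]
f = np.cos(3 * t); S0 = 1.0

def solve_aux(theta):
    # implicit Euler for the auxiliary problem  b*S' + a*S = f + c*theta
    S = np.empty(n + 1); S[0] = S0
    for j in range(1, n + 1):
        S[j] = (b * S[j - 1] + dt * (f[j] + c * theta[j])) / (b + dt * a)
    return S

def Psi(S):
    # trapezoidal rule for  Int_0^t exp(-k(t-s)) S(s) ds
    out = np.empty(n + 1); out[0] = 0.0
    ek = np.exp(-k * dt)
    for j in range(1, n + 1):
        out[j] = ek * out[j - 1] + 0.5 * dt * (S[j] + ek * S[j - 1])
    return out

theta = np.zeros(n + 1)
for _ in range(100):                        # Banach iteration; converges geometrically
    theta_new = Psi(solve_aux(theta))
    if np.max(np.abs(theta_new - theta)) < 1e-12:
        break
    theta = theta_new
Sigma = solve_aux(theta)                    # fixed point: theta = Psi(Sigma)

assert np.max(np.abs(Psi(Sigma) - theta)) < 1e-9
```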
Now, we consider the operator $\Psi: \tilde{V}\longrightarrow \tilde{V}$ given by $$\begin{aligned} \Psi\theta(t)= \int_0^t e^{-k(t-s)}\Sigma_\theta(s) ds,\end{aligned}$$ where $\Sigma_\theta$ is the solution of (\[ecuacion\_auxiliar\])–(\[condicion\_auxiliar\]). Let $\theta_1,\theta_2\in \tilde{V}$ and let $\Sigma_{\theta_1}, \Sigma_{\theta_2} \in \tilde{V}$ be the corresponding solutions; then by (\[ecuacion\_auxiliar\]) we find that $$\begin{aligned} a(\Sigma_{\theta_1}- \Sigma_{\theta_2}, \Sigma_{\theta_1}- \Sigma_{\theta_2}) + \frac{1}{2}\frac{{\partial}}{{\partial}t}\left(b({\Sigma}_{\theta_1} - {\Sigma}_{\theta_2}, \Sigma_{\theta_1}- \Sigma_{\theta_2}) \right) = - c\left( \theta_1 - \theta_2, \Sigma_{\theta_2}- \Sigma_{\theta_1}\right).\end{aligned}$$ Since the operator $a$ is strongly monotone we find that $$\begin{aligned} \frac{1}{2}\frac{{\partial}}{{\partial}t}\left(b({\Sigma}_{\theta_1} - {\Sigma}_{\theta_2}, \Sigma_{\theta_1}- \Sigma_{\theta_2}) \right) \leq - c\left( \theta_1 - \theta_2, \Sigma_{\theta_2}- \Sigma_{\theta_1}\right).\end{aligned}$$ Integrating with respect to the time variable and using that $\Sigma_{\theta_1}(0)=\Sigma_{\theta_2}(0)=\Sigma_0$, we find that $$\begin{aligned} \label{refA} \frac{1}{2}b({\Sigma}_{\theta_1} - {\Sigma}_{\theta_2}, \Sigma_{\theta_1}- \Sigma_{\theta_2}) \leq - \int_0^t c\left( \theta_1 - \theta_2, \Sigma_{\theta_2}- \Sigma_{\theta_1}\right) ds.\end{aligned}$$ In what follows let $||\cdot||$ denote the norm induced by the inner product in ${ \mathcal{Q}}$.
Moreover, by the continuity of the operator $c$, there exists a constant $c_1>0$ such that $$\begin{aligned} \nonumber &- \int_0^t c\left( \theta_1 - \theta_2, \Sigma_{\theta_2}- \Sigma_{\theta_1}\right) ds \leq \left| \int_0^t c\left( \theta_1 - \theta_2, \Sigma_{\theta_2}- \Sigma_{\theta_1}\right) ds\right| \\ \nonumber \qquad &\leq \int_0^t \left|c\left( \theta_1 - \theta_2, \Sigma_{\theta_2}- \Sigma_{\theta_1}\right)\right| ds \leq c_1 \int_0^t || \theta_1 - \theta_2|| || \Sigma_{\theta_2}- \Sigma_{\theta_1}|| ds \\ \label{refA2} & \qquad \leq \frac{c_1}{2} \int_0^t\left( || \theta_1 - \theta_2||^2 + || \Sigma_{\theta_2}- \Sigma_{\theta_1}||^2 \right)ds.\end{aligned}$$ On the other hand, since $b$ is a strongly monotone operator, there exists a constant $c_2>0$ such that $$\begin{aligned} \frac{1}{2}b({\Sigma}_{\theta_1} - {\Sigma}_{\theta_2}, \Sigma_{\theta_1}- \Sigma_{\theta_2}) \geq c_2 ||{\Sigma}_{\theta_1} - {\Sigma}_{\theta_2}||^2, \end{aligned}$$ hence, together with (\[refA\])–(\[refA2\]) we obtain the following inequality, $$\begin{aligned} c_2 ||{\Sigma}_{\theta_1} - {\Sigma}_{\theta_2}||^2 \leq \frac{c_1}{2} \int_0^t || \theta_1 - \theta_2||^2ds + \frac{c_1}{2} \int_0^t|| \Sigma_{\theta_2}- \Sigma_{\theta_1}||^2 ds.\end{aligned}$$ Applying Gronwall’s inequality we find that there exists a constant $C>0$ such that $$\begin{aligned} ||{\Sigma}_{\theta_1}(t) - {\Sigma}_{\theta_2}(t)||^2 \leq C \int_0^t || \theta_1(s) - \theta_2(s)||^2ds,\end{aligned}$$ for all $t\in[0,T].$ Therefore, $$\begin{aligned} ||\Psi\theta_1(t) - \Psi\theta_2(t)||^2 \leq C \int_0^t || \theta_1(s) - \theta_2(s)||^2ds,\end{aligned}$$ for all $t\in[0,T].$ Furthermore, $$\begin{aligned} \frac{{\partial}}{{\partial}t}\left( \Psi\theta(t)\right)= \Sigma_\theta(t)- k \int_0^t e^{-k(t-s)}\Sigma_\theta(s)ds.\end{aligned}$$ As a consequence, iterating these estimates, there exists an $n\in \mathbb{N}$ such that $\Psi^n$ is a contraction, that is, $||\Psi^n\theta_1- \Psi^n\theta_2||_{\tilde{V}}\leq q\, || \theta_1-\theta_2||_{\tilde{V}}$ for some $0<q<1$.
By the Banach fixed point theorem, there exists a unique $\theta^*$ such that $\Psi\theta^*(t)=\theta^*(t)$, $\forall t \in [0,T]$. Hence, the auxiliary problem (\[ecuacion\_auxiliar\])–(\[condicion\_auxiliar\]) for $\theta=\theta^*$ is a reformulation of the original problem (\[ecuacion\_operadores\])–(\[condicion\_operadores\]). Therefore, there exists a unique solution of the original problem satisfying $\Sigma\in\tilde{V}$. Moreover, if $\dot{{{{f}}}}\in L^{2}(0,T; { \mathcal{Q}}) $, applying a suitably modified version of the arguments in Corollary \[Cor\_ex\_un\_reg\] we conclude that $\Sigma\in W^{2,2}(0,T;{ \mathcal{Q}})$. Formal Asymptotic Analysis ========================== \[procedure\] In this section, we highlight some relevant steps in the construction of the formal asymptotic expansion of the scaled unknown variable ${\mbox{\boldmath{$u$}}}({\varepsilon})$, including the characterization of the zeroth-order term and the derivation of some key results which lead to the two-dimensional equations of the viscoelastic shell problems. We define the scaled applied forces as $$\begin{aligned} & {\mbox{\boldmath{$f$}}}(\varepsilon)(t, {\mbox{\boldmath{$x$}}})=\varepsilon^p{\mbox{\boldmath{$f$}}}^p(t,{\mbox{\boldmath{$x$}}}) \ \forall {\mbox{\boldmath{$x$}}}\in \Omega \ \textrm{and} \ \forall t\in[0,T], \\ & {\mbox{\boldmath{$h$}}}(\varepsilon)(t, {\mbox{\boldmath{$x$}}})=\varepsilon^{p+1}{\mbox{\boldmath{$h$}}}^{p+1}(t,{\mbox{\boldmath{$x$}}}) \ \forall {\mbox{\boldmath{$x$}}}\in \Gamma_+\cup\Gamma_- \ \textrm{and} \ \forall t\in[0,T], \end{aligned}$$ where $p$ is a natural number that determines the order of the volume and surface forces, respectively.
We substitute in (\[ec\_problema\_escalado\]) to obtain the following problem: \[problema\_orden\_fuerzas\] Find ${\mbox{\boldmath{$u$}}}(\varepsilon):[0,T]\times\Omega\longrightarrow \mathbb{R}^3$ such that, $$\begin{aligned} \nonumber & {\mbox{\boldmath{$u$}}}(\varepsilon)(t,\cdot)\in V(\Omega) {\ \forall \ t\in[0,T]}, \\ \nonumber &\int_{\Omega}A^{ijkl}(\varepsilon)e_{k||l}(\varepsilon;{\mbox{\boldmath{$u$}}}(\varepsilon))e_{i||j}(\varepsilon;{\mbox{\boldmath{$v$}}})\sqrt{g(\varepsilon)} dx + \int_{\Omega} B^{ijkl}(\varepsilon)e_{k||l}(\varepsilon;\dot{{\mbox{\boldmath{$u$}}}}(\varepsilon))e_{i||j}(\varepsilon;{\mbox{\boldmath{$v$}}}) \sqrt{g(\varepsilon)} dx \\\label{ecuacion_orden_fuerzas} &\quad= \int_{\Omega} {\varepsilon}^p {{{f}}}^{i,p}v_i \sqrt{g(\varepsilon)} dx + \int_{\Gamma_+\cup\Gamma_-} {\varepsilon}^{p}{{{h}}}^{i,p+1} v_i\sqrt{g(\varepsilon)} d\Gamma \quad \forall {\mbox{\boldmath{$v$}}}\in V(\Omega), {\ a.e. \ \textrm{in} \ (0,T)}, \\\displaystyle \nonumber & {\mbox{\boldmath{$u$}}}({\varepsilon})(0,\cdot)= {\mbox{\boldmath{$u$}}}_0({\varepsilon})(\cdot). \end{aligned}$$ The existence and uniqueness of solution of Problem \[problema\_orden\_fuerzas\] follows from arguments analogous to those in Theorem \[Theorema\_exist\_escalado\_sin\_orden\]. Assume that ${\mbox{\boldmath{$\theta$}}}\in\mathcal{C}^3(\bar{\omega};\mathbb{R}^3)$ and that the scaled unknown ${\mbox{\boldmath{$u$}}}(\varepsilon)$ and scaled initial displacement ${\mbox{\boldmath{$u$}}}_0({\varepsilon})$ admit an asymptotic expansion of the form $$\begin{aligned} \label{desarrollo_asintotico} {\mbox{\boldmath{$u$}}}(\varepsilon)&= {\mbox{\boldmath{$u$}}}^0 + \varepsilon {\mbox{\boldmath{$u$}}}^1 + \varepsilon^2 {\mbox{\boldmath{$u$}}}^2 +... \quad \textrm{with} \ {\mbox{\boldmath{$u$}}}^0\neq \mathbf{0}, \\ \nonumber {\mbox{\boldmath{$u$}}}_0(\varepsilon)&= {\mbox{\boldmath{$u$}}}_0^0 + \varepsilon {\mbox{\boldmath{$u$}}}_0^1 + \varepsilon^2 {\mbox{\boldmath{$u$}}}_0^2 +...
\quad \textrm{with} \ {\mbox{\boldmath{$u$}}}^0_0={\mbox{\boldmath{$u$}}}^0(0,\cdot),\end{aligned}$$ where ${\mbox{\boldmath{$u$}}}^0(t)\in V(\Omega),$ $ {\mbox{\boldmath{$u$}}}^q(t)\in [H^1(\Omega)]^3 {\ a.e. \ t\in(0,T)}$ and ${\mbox{\boldmath{$u$}}}^0_0\in V(\Omega),$ $ {\mbox{\boldmath{$u$}}}^q_0\in [H^1(\Omega)]^3$ with $q\geq1$. The assumption (\[desarrollo\_asintotico\]) implies an asymptotic expansion of the scaled linear strain as follows $$\begin{aligned} {e_{i||j}}({\varepsilon})\equiv{e_{i||j}}(\varepsilon;{\mbox{\boldmath{$u$}}}(\varepsilon))&=\frac1{\varepsilon}{e_{i||j}}^{-1}+ {e_{i||j}}^0 + \varepsilon{e_{i||j}}^1 + \varepsilon^2{e_{i||j}}^2 + \varepsilon^3{e_{i||j}}^3+...\end{aligned}$$ where, $$\begin{aligned} \left\{\begin{aligned}[c] {e_{\alpha||\beta}}^{-1}&=0, \\ {e_{\alpha||3}}^{-1}&=\frac{1}{2}{\partial}_3u_\alpha^0, \\ {e_{3||3}}^{-1}&={\partial}_3u_3^0, \end{aligned}\right. \qquad \qquad \qquad \left\{\begin{aligned}[c] {e_{\alpha||\beta}}^0&=\frac{1}{2}({\partial}_\beta u_\alpha^0 + {\partial}_\alpha u_\beta^0) - \Gamma_{\alpha\beta}^\sigma u_\sigma^0 - b_{\alpha\beta}u_3^0, \\ {e_{\alpha||3}}^0&=\frac{1}{2}({\partial}_3 u_\alpha^1 + {\partial}_\alpha u_3^0) + b_{\alpha}^\sigma u_\sigma^0, \\ {e_{3||3}}^0&={\partial}_3u_3^1, \end{aligned}\right.\qquad\end{aligned}$$ $$\label{eij_terminos_expansion_u}$$ $$\begin{aligned} \left\{\begin{aligned}[c] {e_{\alpha||\beta}}^1&=\frac{1}{2}({\partial}_\beta u_\alpha^1 + {\partial}_\alpha u_\beta^1) - \Gamma_{\alpha\beta}^\sigma u_\sigma^1 - b_{\alpha\beta}u_3^1 + x_3(b_{\beta|\alpha}^\sigma u_\sigma^0 + b_\alpha^\sigma b_{\sigma\beta}u_3^0), \\ {e_{\alpha||3}}^1&=\frac{1}{2}({\partial}_3 u_\alpha^2 + {\partial}_\alpha u_3^1) + b_{\alpha}^\sigma u_\sigma^1 + x_3b_\alpha^\tau b_\tau^\sigma u_\sigma^0, \\ {e_{3||3}}^1&={\partial}_3 u_3^2. \end{aligned}\right. 
\qquad \quad \qquad \end{aligned}$$ In addition, the functions ${e_{i||j}}(\varepsilon;{\mbox{\boldmath{$v$}}}) $ admit the following expansion $$\begin{aligned} {e_{i||j}}(\varepsilon;{\mbox{\boldmath{$v$}}})=\frac{1}{\varepsilon}{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}}) + {e_{i||j}}^0({\mbox{\boldmath{$v$}}}) + \varepsilon{e_{i||j}}^1({\mbox{\boldmath{$v$}}})+...\end{aligned}$$ where, $$\begin{aligned} \left\{\begin{aligned}[c] {e_{\alpha||\beta}}^{-1}({\mbox{\boldmath{$v$}}})&=0,\\ {e_{\alpha||3}}^{-1}({\mbox{\boldmath{$v$}}})&=\frac{1}{2}{\partial}_3v_\alpha, \\ {e_{3||3}}^{-1}({\mbox{\boldmath{$v$}}})&={\partial}_3v_3, \end{aligned}\right. \qquad \qquad \quad \left\{\begin{aligned}[c] {e_{\alpha||\beta}}^0({\mbox{\boldmath{$v$}}})&=\frac{1}{2}({\partial}_\beta v_\alpha + {\partial}_\alpha v_\beta) - \Gamma_{\alpha\beta}^\sigma v_\sigma - b_{\alpha\beta}v_3, \\ {e_{\alpha||3}}^0({\mbox{\boldmath{$v$}}})&=\frac{1}{2} {\partial}_\alpha v_3 + b_{\alpha}^\sigma v_\sigma, \\ {e_{3||3}}^0({\mbox{\boldmath{$v$}}})&=0, \end{aligned}\right.\end{aligned}$$ $$\label{eij_terminos_expansion}$$ $$\begin{aligned} \left\{\begin{aligned}[c] {e_{\alpha||\beta}}^1({\mbox{\boldmath{$v$}}})&= x_3b_{\beta|\alpha}^\sigma v_\sigma + x_3b_\alpha^\sigma b_{\sigma\beta}v_3, \\\nonumber {e_{\alpha||3}}^1({\mbox{\boldmath{$v$}}})&= x_3b_\alpha^\tau b_\tau^\sigma v_\sigma, \\\nonumber {e_{3||3}}^1({\mbox{\boldmath{$v$}}})&=0. \end{aligned}\right. \qquad \quad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \end{aligned}$$ Upon substitution in (\[ecuacion\_orden\_fuerzas\]), we proceed to characterize the different terms involved in the asymptotic expansions considering different values for $p$, that is, taking different orders for the applied forces. Assume that $$\label{condicion_inicial_indep_3} {\partial}_3 {\mbox{\boldmath{$u$}}}_0^0={\mbox{\boldmath{$0$}}},$$ that is, the zeroth-order term of the initial displacement is independent of the transversal variable.
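As a side check, the identification of the coefficients in the expansions above can be verified symbolically. The sketch below treats $e_{\alpha||3}({\varepsilon};{\mbox{\boldmath{$v$}}})$, assuming the standard scaled expression $e_{\alpha||3}({\varepsilon};{\mbox{\boldmath{$v$}}})=\frac{1}{2}\left(\frac{1}{{\varepsilon}}{\partial}_3v_\alpha + {\partial}_\alpha v_3\right)-\Gamma^\sigma_{\alpha 3}({\varepsilon})v_\sigma$ (not restated in this section) together with the truncated expansion of $\Gamma^\sigma_{\alpha 3}({\varepsilon})$ from Theorem \[Th\_simbolos2D\_3D\]; the contractions $b_\alpha^\sigma v_\sigma$ and $b_\alpha^\tau b_\tau^\sigma v_\sigma$ are abbreviated by single symbols.

```python
import sympy as sp

# Symbolic check of the eps-expansion of e_{alpha||3}(eps; v).
# Assumed (not restated here): e_{a||3}(eps; v) = (1/2)(eps^{-1} d3_va + da_v3)
#                                                 - Gamma^s_{a3}(eps) * v_s,
# with Gamma^s_{a3}(eps) = -b_a^s - eps*x3*b_a^t b_t^s + O(eps^2).
# Abbreviations:  bv stands for b_a^s v_s,  bbv for b_a^t b_t^s v_s.
eps, x3 = sp.symbols('varepsilon x3', positive=True)
d3_va, da_v3, bv, bbv = sp.symbols('d3_va da_v3 bv bbv')

Gamma = -bv - eps * x3 * bbv                      # truncated Christoffel term (times v_s)
e_a3 = (d3_va / eps + da_v3) / 2 - Gamma

expanded = sp.expand(e_a3 * eps)                  # multiply by eps to read off coefficients
assert expanded.coeff(eps, 0) == d3_va / 2        # e^{-1}_{a||3}(v) = (1/2) d3 v_a
assert expanded.coeff(eps, 1) == da_v3 / 2 + bv   # e^{0}_{a||3}(v)  = (1/2) da v3 + b_a^s v_s
assert expanded.coeff(eps, 2) == x3 * bbv         # e^{1}_{a||3}(v)  = x3 b_a^t b_t^s v_s
```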
Also, we assume that the initial condition for the scaled linear strains is such that $$\label{condicion_inicial_def} {e_{i||j}}^0(0,\cdot)={e_{i||j}}^1(0,\cdot)=0,$$ that is, the strains at the beginning of the period of observation are at least of order $O({\varepsilon}^2)$ (since by (\[eij\_terminos\_expansion\_u\]) and (\[condicion\_inicial\_indep\_3\]) we have that ${e_{i||j}}^{-1}(0,\cdot)=0$). We shall now identify the leading term ${\mbox{\boldmath{$u$}}}^0$ of the expansion (\[desarrollo\_asintotico\]) by cancelling the factors of the successive powers of ${\varepsilon}$ in the equations of Problem \[problema\_orden\_fuerzas\]. We will show that ${\mbox{\boldmath{$u$}}}^0$ is the solution of a two-dimensional problem of a viscoelastic membrane or flexural shell, depending on several factors, and that the orders of the applied forces are determined in both cases. Given ${\mbox{\boldmath{$\eta$}}}=(\eta_i)\in [H^1(\omega)]^3,$ let $$\label{def_gab} {\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}}):= \frac{1}{2}({\partial}_\beta\eta_\alpha + {\partial}_\alpha\eta_\beta) - \Gamma_{\alpha\beta}^\sigma\eta_\sigma - b_{\alpha\beta}\eta_3,$$ denote the covariant components of the linearized change of metric tensor associated with a displacement field $\eta_i{\mbox{\boldmath{$a$}}}^i$ of the surface $S$. Let us define the spaces $$\begin{aligned} \nonumber V(\omega)&:=\{{\mbox{\boldmath{$\eta$}}}=(\eta_i)\in[H^1(\omega)]^3 ; \eta_i=0 \ \textrm{on} \ \gamma_0 \}, \\ \nonumber V_0(\omega)&:=\{{\mbox{\boldmath{$\eta$}}}=(\eta_i)\in V(\omega), \gamma_{\alpha\beta}({\mbox{\boldmath{$\eta$}}})=0 \ \textrm{in} \ \omega \}, \\ \nonumber V_F(\omega)&:= \{ {\mbox{\boldmath{$\eta$}}}=(\eta_i) \in H^1(\omega)\times H^1(\omega)\times H^2(\omega) ; \eta_i={\partial}_\nu \eta_3=0 \ \textrm{on} \ \gamma_0, {\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})=0 { \ \textrm{in} \ }\omega \}.
\end{aligned}$$ Consider the Problem \[problema\_orden\_fuerzas\] upon substitution of the expansion for ${\mbox{\boldmath{$u$}}}({\varepsilon})$ proposed in (\[desarrollo\_asintotico\]). Identifying the terms multiplied by the same powers of ${\varepsilon}$ we find that: 1. The main leading term ${\mbox{\boldmath{$u$}}}^0$ of the asymptotic expansion is independent of the transversal variable $x_3$. Therefore, it can be identified with a function ${\mbox{\boldmath{$\xi$}}}^0\in [H^1(\omega)]^3$ such that ${\mbox{\boldmath{$\xi$}}}^0={\mbox{\boldmath{$0$}}}$ on $\gamma_0$ and also we can identify ${\mbox{\boldmath{$u$}}}_0^0$ with a function ${\mbox{\boldmath{$\xi$}}}^0_0(\cdot)={\mbox{\boldmath{$\xi$}}}^0(0,\cdot)$. As a consequence, $${e_{i||j}}^{-1}(t)=0 { \ \textrm{in} \ }\Omega, {\ \forall \ t\in[0,T]}.$$ 2. The following zeroth-order terms of the scaled linearized strains are identified. On one hand, $${e_{\alpha||3}}^0(t)=0 { \ \textrm{in} \ }\Omega, {\ \forall \ t\in[0,T]}.$$ On the other hand, if we assume $\theta>0$ we obtain that $$\begin{aligned} \label{edtres_cero} {e_{3||3}}^0(t)= - \frac{\theta}{\theta + \rho} \left( a^{\alpha \beta }{e_{\alpha||\beta}}^0(t) + \Lambda\int_0^te^{-k(t-s)}a^{\alpha\beta}{e_{\alpha||\beta}}^0(s) ds \right), { \ \textrm{in} \ }\Omega, {\ \forall \ t\in[0,T]},\end{aligned}$$ where, $$\begin{aligned} \label{Constantes} \Lambda:=\left( \frac{\lambda}{\theta} - \frac{\lambda+ 2 \mu}{\theta + \rho} \right), \quad k:=\frac{\lambda+ 2 \mu}{\theta + \rho} .\end{aligned}$$ Moreover, $$\begin{aligned} {\dot{e}_{3||3}}^0(t)= - \frac{\lambda}{\theta + \rho} a^{\alpha \beta} {e_{\alpha||\beta}}^0(t)- \frac{\lambda + 2\mu}{\theta + \rho} {e_{3||3}}^0(t) - \frac{\theta}{\theta + \rho} a^{\alpha\beta}{\dot{e}_{\alpha||\beta}}^0(t),\end{aligned}$$ ${ \ \textrm{in} \ }\Omega {\ a.e. \ t\in(0,T)}$. 3. 
The following equality holds: $$\begin{aligned} &\frac{1}{2}{\int_{\Omega}}{a^{\alpha\beta\sigma\tau}}{e_{\sigma||\tau}}^0{e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}})\sqrt{a}dx + \frac{1}{2}{\int_{\Omega}}{b^{\alpha\beta\sigma\tau}}{\dot{e}_{\sigma||\tau}}^0 {e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}})\sqrt{a}dx \\ & \qquad- \frac{1}{2}\int_0^te^{-k(t-s)}{\int_{\Omega}}{c^{\alpha\beta\sigma\tau}}{e_{\sigma||\tau}}^0(s){e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}})\sqrt{a}dx ds \\ & \quad = {\int_{\Omega}}f^{i,0}\eta_i \sqrt{a}dx + {\int_{\Gamma_+\cup\Gamma_-}}h^{i,1} \eta_i \sqrt{a} d \Gamma, \ \forall{\mbox{\boldmath{$\eta$}}}\in V(\omega) {\ a.e. \ \textrm{in} \ (0,T)},\end{aligned}$$ where ${a^{\alpha\beta\sigma\tau}}$, ${b^{\alpha\beta\sigma\tau}}$ and ${c^{\alpha\beta\sigma\tau}}$ denote the contravariant components of the fourth-order two-dimensional tensors, defined as follows: $$\begin{aligned} \label{tensor_a_bidimensional} {a^{\alpha\beta\sigma\tau}}&:=\frac{2\lambda\rho^2 + 4\mu\theta^2}{(\theta + \rho)^2}a^{\alpha\beta}a^{\sigma\tau} + 2\mu {(a^{\alpha \sigma}a^{\beta \tau} + a^{\alpha \tau}a^{\beta\sigma})}, \\ \label{tensor_b_bidimensional} {b^{\alpha\beta\sigma\tau}}&:=\frac{2\theta\rho}{\theta + \rho}a^{\alpha\beta}a^{\sigma\tau} + \rho{(a^{\alpha \sigma}a^{\beta \tau} + a^{\alpha \tau}a^{\beta\sigma})}, \\ \label{tensor_c_bidimensional} {c^{\alpha\beta\sigma\tau}}&:=\frac{2 \left(\theta \Lambda \right)^2}{\theta + \rho} a^{\alpha\beta}a^{\sigma\tau}. \end{aligned}$$ Moreover, $$\begin{aligned} \label{et3} {e_{\alpha||\beta}}^0(t)={\gamma_{\alpha\beta}}({\mbox{\boldmath{$\xi$}}}^0(t)) \ \textrm{and} \ {e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}}(t))={\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}}(t)) \ \textrm{for all}\ {\mbox{\boldmath{$\eta$}}}\in V(\omega) {\ \forall \ t\in[0,T]}.\end{aligned}$$ 4. Assume that $V_0(\omega)=\{{\mbox{\boldmath{$0$}}}\}$.
Then we have that ${\mbox{\boldmath{$\xi$}}}^0$ is a solution of the two-dimensional limit equations, known as the viscoelastic membrane shell equations: Find ${\mbox{\boldmath{$\xi$}}}^0:[0,T] \times\omega \longrightarrow \mathbb{R}^3$ such that, $$\begin{aligned} \nonumber & {\mbox{\boldmath{$\xi$}}}^0(t,\cdot)\in V(\omega) {\ \forall \ t\in[0,T]},\\ \nonumber &\int_{\omega} {a^{\alpha\beta\sigma\tau}}{\gamma_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^0){\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy +\int_{\omega}{b^{\alpha\beta\sigma\tau}}{\gamma_{\sigma\tau}}(\dot{{\mbox{\boldmath{$\xi$}}}}^0){\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy \\ &- \int_0^te^{-k(t-s)}{\int_{\omega}}{c^{\alpha\beta\sigma\tau}}{\gamma_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^0(s)){\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dyds \\ &\quad=\int_{\omega}p^{i,0}\eta_i\sqrt{a}dy \ \forall {\mbox{\boldmath{$\eta$}}}=(\eta_i)\in V(\omega), {\ a.e. \ \textrm{in} \ (0,T)}, \\ &{\mbox{\boldmath{$\xi$}}}^0(0,\cdot)={\mbox{\boldmath{$\xi$}}}^0_0(\cdot), \end{aligned}$$ where, $$\begin{aligned} \label{p0} p^{i,0}(t):=\int_{-1}^{1}{{{f}}}^{i,0}(t)dx_3+h_+^{i,1}(t)+h_-^{i,1}(t) \ \textrm{and} \ h_{\pm}^{i,1}(t)={{{h}}}^{i,1}(t,\cdot,\pm 1) {\ \forall \ t\in[0,T]}. \end{aligned}$$ 5. Assume that $ V_0(\omega)\neq\{{\mbox{\boldmath{$0$}}}\} $. We find that $$\begin{aligned} {e_{i||j}}^0(t)&=0 { \ \textrm{in} \ }\Omega, {\ \forall \ t\in[0,T]},\\ {\mbox{\boldmath{$\xi$}}}^0(t)&\in V_F(\omega) {\ \forall \ t\in[0,T]}.\end{aligned}$$ Moreover, assume that ${\mbox{\boldmath{$u$}}}^1(t)\in V(\Omega) {\ \forall \ t\in[0,T]}$.
Then, there exists a function ${\mbox{\boldmath{$\xi$}}}^1(t)=(\xi_i^1(t))\in V(\omega) {\ \forall \ t\in[0,T]}$, such that $$\begin{aligned} u_\alpha^1(t)&=\xi_\alpha^1(t) - x_3( {\partial}_\alpha\xi_3^0(t) + 2 b_\alpha^\sigma \xi_\sigma^0(t)),\\ u_3^1(t)&=\xi_3^1(t),\end{aligned}$$ ${\ \forall \ t\in[0,T]}.$ Also, the following first-order terms of the scaled linearized strains are identified. On one hand, $$\begin{aligned} {e_{\alpha||3}}^1(t)=0 { \ \textrm{in} \ }\Omega, {\ \forall \ t\in[0,T]}.\end{aligned}$$ On the other hand, we obtain that, $$\begin{aligned} {e_{3||3}}^1(t)= - \frac{\theta}{\theta + \rho} \left( a^{\alpha \beta }{e_{\alpha||\beta}}^1(t) + \Lambda\int_0^te^{-k(t-s)}a^{\alpha\beta}{e_{\alpha||\beta}}^1(s) ds \right), { \ \textrm{in} \ }\Omega, {\ \forall \ t\in[0,T]},\end{aligned}$$ and where $\Lambda$ and $k$ are defined as in (\[Constantes\]). Moreover, $$\begin{aligned} {\dot{e}_{3||3}}^1(t)= - \frac{\lambda}{\theta + \rho} a^{\alpha \beta} {e_{\alpha||\beta}}^1(t)- \frac{\lambda + 2\mu}{\theta + \rho} {e_{3||3}}^1(t) - \frac{\theta}{\theta + \rho} a^{\alpha\beta}{\dot{e}_{\alpha||\beta}}^1(t),\end{aligned}$$ ${ \ \textrm{in} \ }\Omega, {\ a.e. \ t\in(0,T)}$. Furthermore, let $$\label{rab} \rho_{\alpha\beta}({\mbox{\boldmath{$\eta$}}}):= {\partial}_{\alpha\beta}\eta_3 - \Gamma_{\alpha\beta}^\sigma {\partial}_\sigma\eta_3 - b_\alpha^\sigma b_{\sigma\beta} \eta_3 + b_\alpha^\sigma ({\partial}_\beta\eta_\sigma- \Gamma_{\beta\sigma}^\tau \eta_\tau) + b_\beta^\tau({\partial}_\alpha\eta_\tau-\Gamma_{\alpha\tau}^\sigma\eta_\sigma ) + b^\tau_{\beta|\alpha} \eta_\tau,$$ denote the covariant components of the linearized change of curvature tensor associated with a displacement field $\eta_i {\mbox{\boldmath{$a$}}}^i$ of the surface $S$. Then $$\label{ref1} {e_{\alpha||\beta}}^1(t)={\gamma_{\alpha\beta}}({\mbox{\boldmath{$\xi$}}}^1(t))- x_3{\rho_{\alpha\beta}}({\mbox{\boldmath{$\xi$}}}^0(t)) {\ \forall \ t\in[0,T]}.$$ 6. 
Assume that $ V_0(\omega)\neq\{{\mbox{\boldmath{$0$}}}\} $. Then $$\begin{aligned} {\mbox{\boldmath{$\xi$}}}^1(t) \in V_0(\omega) {\ \forall \ t\in[0,T]}.\end{aligned}$$ 7. For the case where $V_0(\omega)\neq\{{\mbox{\boldmath{$0$}}}\}$, we find that ${\mbox{\boldmath{$\xi$}}}^0$ is a solution of the two-dimensional limit equations known as the viscoelastic flexural shell equations: Find ${\mbox{\boldmath{$\xi$}}}^0:[0,T] \times\omega \longrightarrow \mathbb{R}^3$ such that, $$\begin{aligned} \nonumber & {\mbox{\boldmath{$\xi$}}}^0(t,\cdot)\in V_F(\omega) {\ \forall \ t\in[0,T]},\\ \nonumber &\frac{1}{3}\int_{\omega} {a^{\alpha\beta\sigma\tau}}{\rho_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^0){\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy +\frac{1}{3}\int_{\omega}{b^{\alpha\beta\sigma\tau}}{\rho_{\sigma\tau}}(\dot{{\mbox{\boldmath{$\xi$}}}}^0){\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy \\ \nonumber &- \frac{1}{3}\int_0^te^{-k(t-s)}{\int_{\omega}}{c^{\alpha\beta\sigma\tau}}{\rho_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^0(s)){\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dyds \\ &\quad=\int_{\omega}p^{i,2}\eta_i\sqrt{a}dy \ \forall {\mbox{\boldmath{$\eta$}}}=(\eta_i)\in V_F(\omega), {\ a.e. \ \textrm{in} \ (0,T)}, \\ &{\mbox{\boldmath{$\xi$}}}^0(0,\cdot)={\mbox{\boldmath{$\xi$}}}^0_0(\cdot), \end{aligned}$$ where, $$\begin{aligned} \label{p2} p^{i,2}(t):=\int_{-1}^{1}{{{f}}}^{i,2}(t)dx_3+h_+^{i,3}(t)+h_-^{i,3}(t) \ \textrm{and} \ h_{\pm}^{i,3}(t)={{{h}}}^{i,3}(t,\cdot,\pm 1) {\ \forall \ t\in[0,T]}. \end{aligned}$$ To prove this theorem, we first take particular values of $p$ in Problem \[problema\_orden\_fuerzas\]. Then we group the terms multiplied by the same powers of ${\varepsilon}$, canceling the terms of the proposed expansion. 1. Let $p=-2$ in (\[ecuacion\_orden\_fuerzas\]).
Hence, grouping the terms multiplied by ${\varepsilon}^{-2}$ (see (\[tensorA\_tildes\])–(\[tensorB\_tildes\])) we find that $$\begin{aligned} \nonumber &\int_{\Omega} A^{ijkl}(0){e_{k||l}}^{-1}{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}}) \sqrt{a}dx + \int_{\Omega} B^{ijkl}(0){\dot{e}_{k||l}}^{-1}{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}}) \sqrt{a}dx \\ \label{et2} &\quad=\int_{\Omega } f^{i,-2} v_i \sqrt{a} dx + {\int_{\Gamma_+\cup\Gamma_-}}h^{i,-1} v_i \sqrt{a}d\Gamma.\end{aligned}$$ Considering ${\mbox{\boldmath{$v$}}}\in V(\Omega)$ independent of $x_3$ (see (\[eij\_terminos\_expansion\])), the left-hand side of equation (\[et2\]) vanishes. Hence, in order to avoid compatibility conditions between the applied forces we must take $f^{i,-2}=0$ and $h^{i,-1}=0$. Then, returning to equation (\[et2\]) and using (\[eij\_terminos\_expansion\_u\]), (\[eij\_terminos\_expansion\]) and Theorem \[Th\_comportamiento asintotico\], we obtain $$\begin{aligned} \nonumber &{\int_{\Omega}}A^{ijkl}(0){e_{k||l}}^{-1}{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}})\sqrt{a} dx + {\int_{\Omega}}B^{ijkl}(0){\dot{e}_{k||l}}^{-1}{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}})\sqrt{a}dx \\\nonumber &\quad = {\int_{\Omega}}\left(4A^{\alpha 3 \sigma 3}(0){e_{\sigma||3}}^{-1}{e_{\alpha||3}}^{-1}({\mbox{\boldmath{$v$}}}) + A^{3333}(0){e_{3||3}}^{-1} {e_{3||3}}^{-1}({\mbox{\boldmath{$v$}}}) \right) \sqrt{a}dx \\\nonumber & \qquad +{\int_{\Omega}}\left(4B^{\alpha 3 \sigma 3}(0){\dot{e}_{\sigma||3}}^{-1}{e_{\alpha||3}}^{-1}({\mbox{\boldmath{$v$}}}) + B^{3333}(0){\dot{e}_{3||3}}^{-1} {e_{3||3}}^{-1}({\mbox{\boldmath{$v$}}}) \right) \sqrt{a}dx \\\nonumber &\quad = {\int_{\Omega}}\left( \mu a^{\alpha\sigma} {\partial}_3u_\sigma^0 {\partial}_3v_\alpha + (\lambda + 2\mu) {\partial}_3u_3^0 {\partial}_3 v_3 \right) \sqrt{a} dx \\ \label{ecuacion_int_ui} & \qquad + {\int_{\Omega}}\left( \frac{\rho}{2} a^{\alpha\sigma} {\partial}_3\dot{u}_\sigma^0 {\partial}_3v_\alpha + (\theta + \rho) {\partial}_3\dot{u}_3^0 {\partial}_3 v_3
\right) \sqrt{a} dx=0,\end{aligned}$$ for all ${\mbox{\boldmath{$v$}}}=(v_i)\in V(\Omega), {\ a.e. \ \textrm{in} \ (0,T)}$. Let ${\mbox{\boldmath{$v$}}}=(v_i)\in V(\Omega)$ such that $v_\alpha=0$. By Theorem \[th\_int\_nula\], we obtain the following differential equation $$(\lambda + 2 \mu) {\partial}_3 u_3^0 + (\theta + \rho) {\partial}_3 \dot{u}_3^0=0.$$ This equation, together with the initial condition $(\ref{condicion_inicial_indep_3})$, leads to $${\partial}_3u_3^0(t)=0 { \ \textrm{in} \ }\Omega, \ \textrm{for all} \ t\in[0,T].$$ Now, taking $v_\alpha=u_\alpha^0$ in (\[ecuacion\_int\_ui\]), we have $$\begin{aligned} {\int_{\Omega}}\mu a^{\alpha\sigma} {\partial}_3u_\sigma^0 {\partial}_3 u_\alpha^0 \sqrt{a} dx + {\int_{\Omega}}\frac{\rho}{2} a^{\alpha\sigma} {\partial}_3 \dot{u}_\sigma^0{\partial}_3u_\alpha^0 \sqrt{a}dx=0, \ \textrm{a.e. in} \ (0,T),\end{aligned}$$ which is equivalent to $$\begin{aligned} {\int_{\Omega}}\mu a^{\alpha\sigma} {\partial}_3u_\sigma^0 {\partial}_3 u_\alpha^0 \sqrt{a} dx + \frac{{\partial}}{{\partial}t}{\int_{\Omega}}\frac{\rho}{4} a^{\alpha\sigma} {\partial}_3 {u}_\sigma^0{\partial}_3u_\alpha^0 \sqrt{a}dx=0, \ \textrm{a.e. in} \ (0,T).\end{aligned}$$ Since the matrix $\left(a^{\alpha\sigma}\right)$ is positive definite, we have $$\begin{aligned} \frac{{\partial}}{{\partial}t}{\int_{\Omega}}\frac{\rho}{4} a^{\alpha\sigma} {\partial}_3 {u}_\sigma^0{\partial}_3u_\alpha^0 \sqrt{a}dx\leq0, \ \textrm{a.e. in} \ (0,T).\end{aligned}$$ By integrating with respect to the time variable and by (\[condicion\_inicial\_indep\_3\]), we deduce $$\begin{aligned} {\int_{\Omega}}a^{\alpha\sigma} {\partial}_3 {u}_\sigma^0{\partial}_3u_\alpha^0 \sqrt{a}dx\leq0 {\ \forall \ t\in[0,T]},\end{aligned}$$ and using again the positive definiteness of $\left(a^{\alpha\sigma}\right)$ we conclude $${\partial}_3 u_\alpha^0(t)=0 { \ \textrm{in} \ }\Omega,{\ \forall \ t\in[0,T]}.$$ Therefore, we have found that the main term ${\mbox{\boldmath{$u$}}}^0$ of the
asymptotic expansion is independent of the transversal variable ${\ \forall \ t\in[0,T]}$; hence, it can be identified with a function ${\mbox{\boldmath{$\xi$}}}^0(t)\in [H^1(\omega)]^3 {\ \forall \ t\in[0,T]}$ such that ${\mbox{\boldmath{$\xi$}}}^0={\mbox{\boldmath{$0$}}}$ on $\gamma_0$, that is, ${\mbox{\boldmath{$\xi$}}}^0(t)\in V(\omega) {\ \forall \ t\in[0,T]}$. Moreover, since by (\[condicion\_inicial\_indep\_3\]) ${\mbox{\boldmath{$u$}}}^0_0$ does not depend on $x_3$ either, we can identify ${\mbox{\boldmath{$u$}}}_0^0$ with a function ${\mbox{\boldmath{$\xi$}}}^0_0\in V(\omega)$, and ${\mbox{\boldmath{$\xi$}}}^0_0(\cdot)={\mbox{\boldmath{$\xi$}}}^0(0,\cdot)$. Moreover, by (\[eij\_terminos\_expansion\]) we obtain that $${e_{i||j}}^{-1}(t)=0 { \ \textrm{in} \ }\Omega, {\ \forall \ t\in[0,T]}.$$ 2. Now let $p=-1$ in (\[ecuacion\_orden\_fuerzas\]). Grouping the terms multiplied by ${\varepsilon}^{-1}$, we find (taking into account the results from the previous step $(i)$) that $$\begin{aligned} \nonumber &{\int_{\Omega}}A^{ijkl}(0){e_{k||l}}^0{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}})\sqrt{a}dx + {\int_{\Omega}}B^{ijkl}(0){\dot{e}_{k||l}}^0{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}})\sqrt{a}dx \\ \label{ec_step2} & \quad={\int_{\Omega}}f^{i,-1}v_i \sqrt{a}dx + {\int_{\Gamma_+\cup\Gamma_-}}h^{i,0}v_i \sqrt{a} d\Gamma,\end{aligned}$$ for all ${\mbox{\boldmath{$v$}}}\in V(\Omega), {\ a.e. \ \textrm{in} \ (0,T)}$. Analogously to step $(i)$, considering a test function ${\mbox{\boldmath{$v$}}}$ independent of $x_3$, we obtain that $f^{i,-1}$ and $h^{i,0}$ must be zero.
Therefore, from the left-hand side of the last equation we have $$\begin{aligned} \nonumber &{\int_{\Omega}}A^{ijkl}(0){e_{k||l}}^0{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}})\sqrt{a}dx + {\int_{\Omega}}B^{ijkl}(0){\dot{e}_{k||l}}^0{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}})\sqrt{a}dx \\\nonumber &\quad ={\int_{\Omega}}4A^{\alpha 3 \sigma 3}(0){e_{\alpha||3}}^0{e_{\sigma||3}}^{-1}({\mbox{\boldmath{$v$}}})\sqrt{a}dx + {\int_{\Omega}}\left( A^{\alpha\beta 33}(0){e_{\alpha||\beta}}^0 + A^{3333}(0){e_{3||3}}^0 \right){e_{3||3}}^{-1}({\mbox{\boldmath{$v$}}}) \sqrt{a}dx \\\nonumber & \qquad + {\int_{\Omega}}4B^{\alpha 3 \sigma 3}(0){\dot{e}_{\alpha||3}}^0{e_{\sigma||3}}^{-1}({\mbox{\boldmath{$v$}}})\sqrt{a}dx + {\int_{\Omega}}\left( B^{\alpha\beta 33}(0){\dot{e}_{\alpha||\beta}}^0 + B^{3333}(0){\dot{e}_{3||3}}^0 \right){e_{3||3}}^{-1}({\mbox{\boldmath{$v$}}}) \sqrt{a}dx \\\nonumber &\quad= {\int_{\Omega}}\left(2\mu a^{\alpha \sigma}{e_{\alpha||3}}^0{\partial}_3v_\sigma + \left(\lambda a^{\alpha\beta}{e_{\alpha||\beta}}^0+ (\lambda + 2\mu){e_{3||3}}^0 \right){\partial}_3v_3\right)\sqrt{a}dx \\ \label{ecuacion_integral1} &\qquad+ {\int_{\Omega}}\left(\rho a^{\alpha \sigma}{\dot{e}_{\alpha||3}}^0{\partial}_3v_\sigma + \left(\theta a^{\alpha\beta}{\dot{e}_{\alpha||\beta}}^0+ (\theta + \rho){\dot{e}_{3||3}}^0 \right){\partial}_3v_3\right)\sqrt{a}dx=0.\end{aligned}$$ On one hand, if we take ${\mbox{\boldmath{$v$}}}\in V(\Omega)$ such that $v_2=v_3=0$ and using the Theorem \[th\_int\_nula\], we have $$\begin{aligned} \label{ec_dif1} 2\mu a^{\alpha 1} {e_{\alpha||3}}^0 + \rho a^{\alpha 1} {\dot{e}_{\alpha||3}}^0 =0 {\ a.e. \ \textrm{in} \ (0,T)}.\end{aligned}$$ On the other hand, if we take ${\mbox{\boldmath{$v$}}}\in V(\Omega)$ such that $v_1=v_3=0$ and using the Theorem \[th\_int\_nula\], we have $$\begin{aligned} \label{ec_dif2} 2\mu a^{\alpha 2} {e_{\alpha||3}}^0 + \rho a^{\alpha 2} {\dot{e}_{\alpha||3}}^0 =0 {\ a.e. 
\ \textrm{in} \ (0,T)}.\end{aligned}$$ Multiplying (\[ec\_dif1\]) by $a^{22}$ and (\[ec\_dif2\]) by $-a^{21}$ and adding both expressions, we have $$\begin{aligned} 2\mu \left(a^{22}a^{11} - a^{21}a^{12} \right) e_{1||3}^0 + \rho \left( a^{22}a^{11} - a^{21}a^{12}\right)\dot{e}_{1||3}^0 = 2\mu ae_{1||3}^0 + \rho a\dot{e}_{1||3}^0 = 0,\end{aligned}$$ ${\ a.e. \ \textrm{in} \ (0,T)}$, by (\[definicion\_a\]). Now, by the initial condition in (\[condicion\_inicial\_def\]) we conclude $$\begin{aligned} e_{1||3}^0(t) =0 { \ \textrm{in} \ }\Omega, {\ \forall \ t\in[0,T]}.\end{aligned}$$ Multiplying (\[ec\_dif1\]) by $a^{12}$ and (\[ec\_dif2\]) by $-a^{11}$ and adding both expressions, we have $$\begin{aligned} 2\mu ae_{2||3}^0 + \rho a\dot{e}_{2||3}^0 = 0 {\ a.e. \ \textrm{in} \ (0,T)}.\end{aligned}$$ Now, by the initial condition in (\[condicion\_inicial\_def\]) we conclude $$\begin{aligned} e_{2||3}^0(t) =0 { \ \textrm{in} \ }\Omega, {\ \forall \ t\in[0,T]}.\end{aligned}$$ Taking ${\mbox{\boldmath{$v$}}}\in V(\Omega)$ in (\[ecuacion\_integral1\]) such that $v_\alpha=0$, we obtain $$\begin{aligned} \nonumber &{\int_{\Omega}}\left(\lambda a^{\alpha\beta}{e_{\alpha||\beta}}^0+ (\lambda + 2\mu){e_{3||3}}^0 \right){\partial}_3v_3\sqrt{a}dx \\ &\qquad + {\int_{\Omega}}\left(\theta a^{\alpha\beta}{\dot{e}_{\alpha||\beta}}^0+ (\theta + \rho){\dot{e}_{3||3}}^0 \right){\partial}_3v_3\sqrt{a}dx=0,\end{aligned}$$ for all $v_3\in H^1(\Omega)$ with $v_3=0 { \ \textrm{in} \ }\Gamma_0, {\ a.e. \ \textrm{in} \ (0,T)}$.
By Theorem \[th\_int\_nula\], we obtain the following differential equation $$\begin{aligned} \label{ecuacion_casuistica} \lambda a^{\alpha\beta} {e_{\alpha||\beta}}^0 + (\lambda+2\mu) {e_{3||3}}^0 +\theta a^{\alpha \beta} {\dot{e}_{\alpha||\beta}}^0 + (\theta + \rho) {\dot{e}_{3||3}}^0=0.\end{aligned}$$ \[nota\_desvio\] Note that removing the time dependence and the viscosity, that is, taking $\theta=\rho=0$, the equation reduces to the one studied in [@Ciarlet4b], namely, the elastic case. In order to solve equation (\[ecuacion\_casuistica\]) in the general case, we assume that the viscosity coefficient $\theta$ is strictly positive. Moreover, we can prove that this equation is equivalent to $$\begin{aligned} \theta e^{-\frac{\lambda}{\theta}t} \frac{{\partial}}{{\partial}t}\left(a^{\alpha\beta}{e_{\alpha||\beta}}^0 e^{\frac{\lambda}{\theta}t}\right)=-\left(\theta + \rho \right)e^{-\frac{\lambda + 2\mu}{\theta + \rho}t} \frac{{\partial}}{{\partial}t}\left({e_{3||3}}^0e^{\frac{\lambda + 2\mu}{\theta + \rho}t}\right).\end{aligned}$$ Integrating with respect to the time variable and using (\[condicion\_inicial\_def\]) we find that $$\begin{aligned} {e_{3||3}}^0e^{\frac{\lambda + 2\mu}{\theta + \rho}t}= - \frac{\theta}{\theta + \rho} \int_0^t e^{\left(\frac{\lambda +2 \mu}{\theta + \rho} - \frac{\lambda}{\theta}\right) s} \frac{{\partial}}{{\partial}s} \left(a^{\alpha\beta}{e_{\alpha||\beta}}^0(s) e^{\frac{\lambda}{\theta}s}\right)ds,\end{aligned}$$ and, integrating by parts and simplifying, we conclude that $$\begin{aligned} {e_{3||3}}^0(t)= - \frac{\theta}{\theta + \rho} \left( a^{\alpha \beta }{e_{\alpha||\beta}}^0(t) + \Lambda\int_0^te^{-k(t-s)}a^{\alpha\beta}{e_{\alpha||\beta}}^0(s) ds \right),\end{aligned}$$ in $\Omega$, ${\ \forall \ t\in[0,T]}$, with the definitions introduced in (\[Constantes\]).
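The closed-form expression just obtained for ${e_{3||3}}^0$ can be checked numerically. The following Python sketch (a spot-check, not part of the proof: the parameter values and the scalar history standing in for $a^{\alpha\beta}{e_{\alpha||\beta}}^0$ are arbitrary test choices) evaluates the convolution formula by quadrature and verifies that it satisfies equation (\[ecuacion\_casuistica\]) with the initial condition (\[condicion\_inicial\_def\]):

```python
import math

# Numerical spot-check of the formula for e_{3||3}^0 obtained in step (ii).
# We treat E(t) := a^{alpha beta} e_{alpha||beta}^0(t) as a given scalar
# history and check that
#   e33(t) = -th/(th+rho) * ( E(t) + Lam * int_0^t exp(-k(t-s)) E(s) ds )
# satisfies  lam*E + (lam+2*mu)*e33 + th*E' + (th+rho)*e33' = 0,  e33(0) = 0.
# All parameter values and the choice E(t) = t^2 are arbitrary test data.
lam, mu, th, rho = 1.0, 1.0, 1.0, 1.0
k = (lam + 2.0*mu)/(th + rho)
Lam = lam/th - (lam + 2.0*mu)/(th + rho)

def E(t):  return t**2
def dE(t): return 2.0*t

def conv(t, n=4000):
    """Trapezoid rule for int_0^t exp(-k*(t-s)) E(s) ds."""
    h = t/n
    vals = [math.exp(-k*(t - i*h))*E(i*h) for i in range(n + 1)]
    return h*(sum(vals) - 0.5*(vals[0] + vals[-1]))

def e33(t):
    return -th/(th + rho)*(E(t) + Lam*conv(t))

def residual(t, dt=1e-4):
    de33 = (e33(t + dt) - e33(t - dt))/(2.0*dt)   # central difference
    return lam*E(t) + (lam + 2.0*mu)*e33(t) + th*dE(t) + (th + rho)*de33

print(max(abs(residual(t)) for t in (0.5, 1.0, 2.0)))  # small: zero up to discretization error
```

For the exact formula the residual vanishes identically; the printed value only reflects quadrature and finite-difference error.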
Moreover, from (\[ecuacion\_casuistica\]) we obtain that $$\begin{aligned} {\dot{e}_{3||3}}^0(t)= - \frac{\lambda}{\theta + \rho} a^{\alpha \beta} {e_{\alpha||\beta}}^0(t)- \frac{\lambda + 2\mu}{\theta + \rho} {e_{3||3}}^0(t) - \frac{\theta}{\theta + \rho} a^{\alpha\beta}{\dot{e}_{\alpha||\beta}}^0(t),\end{aligned}$$ ${ \ \textrm{in} \ }\Omega, {\ a.e. \ t\in(0,T)}$. 3. Let $p=0$ in (\[ecuacion\_orden\_fuerzas\]). Grouping the terms multiplied by ${\varepsilon}^0$, taking into account (\[tensorA\_tildes\])–(\[tensorB\_tildes\]) and step $(i)$, we find $$\begin{aligned} \nonumber &{\int_{\Omega}}A^{ijkl}(0) \left({e_{k||l}}^0 {e_{i||j}}^0({\mbox{\boldmath{$v$}}}) + {e_{k||l}}^1{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}}) \right) \sqrt{a} dx + {\int_{\Omega}}\tilde{A}^{ijkl,1}{e_{k||l}}^0{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}}) dx \\ \nonumber &\qquad + {\int_{\Omega}}B^{ijkl}(0) \left({\dot{e}_{k||l}}^0 {e_{i||j}}^0({\mbox{\boldmath{$v$}}}) + {\dot{e}_{k||l}}^1{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}}) \right) \sqrt{a} dx + {\int_{\Omega}}\tilde{B}^{ijkl,1}{\dot{e}_{k||l}}^0{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}})dx \\ \label{ec_step3} & \quad = {\int_{\Omega}}f^{i,0} v_i \sqrt{a} dx + {\int_{\Gamma_+\cup\Gamma_-}}h^{i,1}v_i \sqrt{a} d\Gamma,\end{aligned}$$ for all ${\mbox{\boldmath{$v$}}}\in V(\Omega), {\ a.e. \ \textrm{in} \ (0,T)}$. Taking ${\mbox{\boldmath{$v$}}}\in V(\Omega)$ such that it is independent of the transversal variable $x_3$, that is, such that we can identify ${\mbox{\boldmath{$v$}}}$ with a function ${\mbox{\boldmath{$\eta$}}}\in V(\omega)$, we have by (\[eij\_terminos\_expansion\]) that ${e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}})=0$.
Moreover, since ${e_{\alpha||3}}^0=0$ by step $(ii)$, we have $$\begin{aligned} \nonumber &{\int_{\Omega}}A^{ijkl}(0){e_{k||l}}^0 {e_{i||j}}^0({\mbox{\boldmath{$\eta$}}}) \sqrt{a} dx + {\int_{\Omega}}B^{ijkl}(0){\dot{e}_{k||l}}^0{e_{i||j}}^0({\mbox{\boldmath{$\eta$}}}) \sqrt{a} dx \\ \nonumber & \quad = {\int_{\Omega}}\left( \lambda a^{\alpha \beta} a^{\sigma \tau} + \mu (a^{\alpha \sigma}a^{\beta \tau} + a^{\alpha \tau}a^{\beta\sigma}) \right) {e_{\sigma||\tau}}^0 {e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}}) \sqrt{a} dx + {\int_{\Omega}}\lambda a^{\alpha\beta}{e_{3||3}}^0 {e_{\alpha||\beta}}^0 ({\mbox{\boldmath{$\eta$}}})\sqrt{a} dx \\ \nonumber & \qquad + {\int_{\Omega}}\left( \theta a^{\alpha \beta} a^{\sigma \tau} + \frac{\rho}{2} (a^{\alpha \sigma}a^{\beta \tau} + a^{\alpha \tau}a^{\beta\sigma}) \right) {\dot{e}_{\sigma||\tau}}^0 {e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}}) \sqrt{a} dx + {\int_{\Omega}}\theta a^{\alpha\beta}{\dot{e}_{3||3}}^0 {e_{\alpha||\beta}}^0 ({\mbox{\boldmath{$\eta$}}})\sqrt{a} dx \\ \label{ref2} &\quad = {\int_{\Omega}}f^{i,0} \eta_i \sqrt{a} dx + {\int_{\Gamma_+\cup\Gamma_-}}h^{i,1}\eta_i \sqrt{a} d\Gamma .\end{aligned}$$ Using the expressions of ${e_{3||3}}$ and its time derivative found in step $(ii)$, we have that $$\begin{aligned} \nonumber &{\int_{\Omega}}\left( \lambda a^{\alpha \beta} a^{\sigma \tau} + \mu (a^{\alpha \sigma}a^{\beta \tau} + a^{\alpha \tau}a^{\beta\sigma}) \right) {e_{\sigma||\tau}}^0 {e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}}) \sqrt{a} dx + {\int_{\Omega}}\lambda a^{\alpha\beta}{e_{3||3}}^0 {e_{\alpha||\beta}}^0 ({\mbox{\boldmath{$\eta$}}})\sqrt{a} dx \\ \nonumber & \qquad + {\int_{\Omega}}\left( \theta a^{\alpha \beta} a^{\sigma \tau} + \frac{\rho}{2} (a^{\alpha \sigma}a^{\beta \tau} + a^{\alpha \tau}a^{\beta\sigma}) \right) {\dot{e}_{\sigma||\tau}}^0 {e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}}) \sqrt{a} dx + {\int_{\Omega}}\theta a^{\alpha\beta}{\dot{e}_{3||3}}^0 {e_{\alpha||\beta}}^0 
({\mbox{\boldmath{$\eta$}}})\sqrt{a} dx \\ &\quad = {\int_{\Omega}}\left( \lambda a^{\alpha \beta} a^{\sigma \tau} + \mu (a^{\alpha \sigma}a^{\beta \tau} + a^{\alpha \tau}a^{\beta\sigma}) \right) {e_{\sigma||\tau}}^0 {e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}}) \sqrt{a} dx \\ & \qquad + {\int_{\Omega}}\left( \theta a^{\alpha \beta} a^{\sigma \tau} + \frac{\rho}{2} (a^{\alpha \sigma}a^{\beta \tau} + a^{\alpha \tau}a^{\beta\sigma}) \right) {\dot{e}_{\sigma||\tau}}^0 {e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}}) \sqrt{a} dx \\ \nonumber & \qquad + {\int_{\Omega}}\left(\lambda - \theta \frac{\lambda + 2\mu}{\theta + \rho } \right) \left( - \frac{\theta}{\theta + \rho} \left( a^{\sigma \tau }{e_{\sigma||\tau}}^0 + \Lambda\int_0^te^{-k(t-s)}a^{\sigma \tau}{e_{\sigma||\tau}}^0(s) ds \right) \right) a^{\alpha\beta} {e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}}) \sqrt{a} dx \\ & \qquad - {\int_{\Omega}}\frac{\theta}{\theta + \rho} \left( \lambda a^{\sigma \tau}{e_{\sigma||\tau}}^0 + \theta a^{\sigma \tau}{\dot{e}_{\sigma||\tau}}^0 \right) a^{\alpha \beta} {e_{\alpha||\beta}}^0 ({\mbox{\boldmath{$\eta$}}}) \sqrt{a} dx,\end{aligned}$$ which is equivalent to, $$\begin{aligned} &{\int_{\Omega}}\left( \left(\lambda - \frac{\theta}{\theta + \rho} \left( \theta \Lambda + \lambda \right)\right)a^{\alpha\beta}a^{\sigma \tau} + \mu{(a^{\alpha \sigma}a^{\beta \tau} + a^{\alpha \tau}a^{\beta\sigma})}\right) {e_{\sigma||\tau}}^0{e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}})\sqrt{a}dx \\ &\qquad + {\int_{\Omega}}\left( \frac{\theta \rho }{\theta + \rho} a^{\alpha \beta} a^{\sigma \tau} + \frac{\rho}{2}{(a^{\alpha \sigma}a^{\beta \tau} + a^{\alpha \tau}a^{\beta\sigma})}\right) {\dot{e}_{\sigma||\tau}}^0{e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}})\sqrt{a}dx \\ & \qquad - {\int_{\Omega}}\frac{\left( \theta \Lambda\right)^2}{\theta + \rho} \int_0^t e^{-k(t-s)}a^{\sigma \tau} {e_{\sigma||\tau}}^0(s) ds a^{\alpha \beta} {e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}}) 
\sqrt{a} dx \\ &\quad = {\int_{\Omega}}f^{i,0} \eta_i \sqrt{a} dx + {\int_{\Gamma_+\cup\Gamma_-}}h^{i,1}\eta_i \sqrt{a} d\Gamma,\end{aligned}$$ hence, we obtain that $$\begin{aligned} &\frac{1}{2}{\int_{\Omega}}{a^{\alpha\beta\sigma\tau}}{e_{\sigma||\tau}}^0{e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}})\sqrt{a}dx + \frac{1}{2}{\int_{\Omega}}{b^{\alpha\beta\sigma\tau}}{\dot{e}_{\sigma||\tau}}^0 {e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}})\sqrt{a}dx \\ & \qquad- \frac{1}{2}\int_0^te^{-k(t-s)}{\int_{\Omega}}{c^{\alpha\beta\sigma\tau}}{e_{\sigma||\tau}}^0(s){e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}})\sqrt{a}dx ds \\ & \quad = {\int_{\Omega}}f^{i,0}\eta_i \sqrt{a}dx + {\int_{\Gamma_+\cup\Gamma_-}}h^{i,1} \eta_i \sqrt{a} d \Gamma, \ \forall{\mbox{\boldmath{$\eta$}}}\in V(\omega) {\ a.e. \ \textrm{in} \ (0,T)},\end{aligned}$$ where ${a^{\alpha\beta\sigma\tau}}$ , ${b^{\alpha\beta\sigma\tau}}$ and ${c^{\alpha\beta\sigma\tau}}$ denote the contravariant components of the fourth order two-dimensional tensors, defined in (\[tensor\_a\_bidimensional\])–(\[tensor\_c\_bidimensional\]). Note that if ${\mbox{\boldmath{$\eta$}}}=(\eta_i)\in H^1(\omega)\times H^1(\omega) \times L^2(\omega),$ then $$\begin{aligned} {\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\in L^2(\omega).\end{aligned}$$ Hence, the equalities in (\[et3\]) $$\begin{aligned} {e_{\alpha||\beta}}^0(t)={\gamma_{\alpha\beta}}({\mbox{\boldmath{$\xi$}}}^0(t)) \ \textrm{and} \ {e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}}(t))={\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}}(t)) \ \textrm{for all}\ {\mbox{\boldmath{$\eta$}}}\in V(\omega) {\ \forall \ t\in[0,T]},\end{aligned}$$ follow from the definitions (\[eij\_terminos\_expansion\_u\]), (\[eij\_terminos\_expansion\]) and (\[def\_gab\]). 4. Assume that $V_0(\omega)=\{{\mbox{\boldmath{$0$}}}\}$. 
By the previous step we have the following variational problem: Find ${\mbox{\boldmath{$\xi$}}}^0:[0,T] \times\omega \longrightarrow \mathbb{R}^3$ such that, $$\begin{aligned} \nonumber &{\mbox{\boldmath{$\xi$}}}^0(t)\in V(\omega) {\ \forall \ t\in[0,T]}, \\ \nonumber &{\int_{\omega}}{a^{\alpha\beta\sigma\tau}}{\gamma_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^0){\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy + {\int_{\omega}}{b^{\alpha\beta\sigma\tau}}{\gamma_{\sigma\tau}}(\dot{{\mbox{\boldmath{$\xi$}}}}^0) {\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy \\ \nonumber & \qquad- \int_0^te^{-k(t-s)}{\int_{\omega}}{c^{\alpha\beta\sigma\tau}}{\gamma_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^0(s)){\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy ds \\ \label{ec_paso4} & \quad = {\int_{\omega}}p^{i,0}\eta_i \sqrt{a}dy, \ \forall{\mbox{\boldmath{$\eta$}}}\in V(\omega) {\ a.e. \ \textrm{in} \ (0,T)}, \\ \nonumber & {\mbox{\boldmath{$\xi$}}}^0(0,\cdot)={\mbox{\boldmath{$\xi$}}}_0^0(\cdot),\end{aligned}$$ where $p^{i,0}$ is defined in (\[p0\]). This problem will be known as the two-dimensional variational problem for a viscoelastic membrane shell. 5. Assume that $V_0(\omega)\neq\{{\mbox{\boldmath{$0$}}}\}$. Taking ${\mbox{\boldmath{$\eta$}}}\in (V_0(\omega)\setminus\{{\mbox{\boldmath{$0$}}}\})$ in (\[ec\_paso4\]) we have that $$\begin{aligned} {\int_{\omega}}p^{i,0}\eta_i \sqrt{a}dy = {\int_{\Omega}}f^{i,0}\eta_i \sqrt{a}dx + {\int_{\Gamma_+\cup\Gamma_-}}h^{i,1} \eta_i \sqrt{a} d \Gamma=0.\end{aligned}$$ Hence, in order to avoid compatibility conditions between the applied forces we must take $f^{i,0}=0$ and $h^{i,1}=0$. 
Therefore, taking ${\mbox{\boldmath{$\eta$}}}={\mbox{\boldmath{$\xi$}}}^0$ in equation (\[ec\_paso4\]) leads to $$\begin{aligned} \nonumber &{\int_{\omega}}{a^{\alpha\beta\sigma\tau}}{\gamma_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^0){\gamma_{\alpha\beta}}({\mbox{\boldmath{$\xi$}}}^0)\sqrt{a}dy + {\int_{\omega}}{b^{\alpha\beta\sigma\tau}}{\gamma_{\sigma\tau}}(\dot{{\mbox{\boldmath{$\xi$}}}}^0) {\gamma_{\alpha\beta}}({\mbox{\boldmath{$\xi$}}}^0)\sqrt{a}dy \\ & \qquad- \int_0^te^{-k(t-s)}{\int_{\omega}}{c^{\alpha\beta\sigma\tau}}{\gamma_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^0(s)){\gamma_{\alpha\beta}}({\mbox{\boldmath{$\xi$}}}^0)\sqrt{a}dy ds =0.\end{aligned}$$ By (\[condicion\_inicial\_def\]) and the first equality in (\[et3\]), we have that ${\gamma_{\alpha\beta}}({\mbox{\boldmath{$\xi$}}}^0(0))=0.$ This initial condition, together with Theorem \[teorema\_existencia\_bidimensional\], implies that ${\gamma_{\alpha\beta}}({\mbox{\boldmath{$\xi$}}}^0(t))=0 {\ \forall \ t\in[0,T]}$, that is, ${\mbox{\boldmath{$\xi$}}}^0\in V_0(\omega)$. Therefore, again by (\[et3\]), we find that ${e_{\alpha||\beta}}^0={\gamma_{\alpha\beta}}({\mbox{\boldmath{$\xi$}}}^0)=0$.
Moreover, by (\[eij\_terminos\_expansion\_u\]) and (\[edtres\_cero\]) we have that $${\partial}_3 u_3^1(t)={e_{3||3}}^0(t)=0 { \ \textrm{in} \ }\Omega, {\ \forall \ t\in[0,T]}.$$ By the definition of ${e_{\alpha||3}}^0$ in (\[eij\_terminos\_expansion\_u\]) and steps $(i)$–$(ii)$ we have $$\begin{aligned} {e_{\alpha||3}}^0=\frac{1}{2}\left( {\partial}_\alpha \xi^0_3+ {\partial}_3 u_\alpha^1 \right)+ b_\alpha^\sigma \xi^0_\sigma =0,\end{aligned}$$ hence, $$\begin{aligned} {\partial}_3 u_\alpha^1(t)=- \left({\partial}_\alpha \xi_3^0(t) + 2b_\alpha^\sigma \xi^0_\sigma(t)\right) { \ \textrm{in} \ }\Omega, {\ \forall \ t\in[0,T]}.\end{aligned}$$ Since we are assuming that ${\mbox{\boldmath{$u$}}}^1(t)\in V(\Omega) {\ \forall \ t\in[0,T]}$ and since ${\mbox{\boldmath{$\xi$}}}^0$ is independent of $x_3$ by step $(i)$, there exists a field ${\mbox{\boldmath{$\xi$}}}^1(t)\in V(\omega) {\ \forall \ t\in[0,T]}$ such that $$\begin{aligned} u_\alpha^1 (t)&= \xi^1_\alpha(t) - x_3 \left({\partial}_\alpha\xi_3^0(t) + 2 b_\alpha^\sigma \xi_\sigma^0(t) \right), \\ u_3^1(t)&=\xi_3^1(t),\end{aligned}$$ in $\Omega, {\ \forall \ t\in[0,T]}$. Notice that this implies that $\xi^0_3(t)\in H^2(\omega) {\ \forall \ t\in[0,T]}$. Now, since ${\mbox{\boldmath{$u$}}}^1$ vanishes on $\Gamma_0$ and $\xi_\alpha^0=0$ on $\gamma_0$, it follows that ${\partial}_\nu\xi_3^0=0$ on $\gamma_0$, where ${\partial}_\nu$ denotes the outer normal derivative along the boundary. Therefore, we have ${\mbox{\boldmath{$\xi$}}}^0(t)\in V_F(\omega) {\ \forall \ t\in[0,T]}$. Since ${e_{i||j}}^0=0$, returning to the terms multiplied by ${\varepsilon}^0$ (see (\[ec\_step3\]) in step $(iii)$), we have $$\begin{aligned} {\int_{\Omega}}A^{ijkl}(0) {e_{k||l}}^1{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}}) \sqrt{a} dx + {\int_{\Omega}}B^{ijkl}(0) {\dot{e}_{k||l}}^1{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}}) \sqrt{a} dx =0,\end{aligned}$$ for all ${\mbox{\boldmath{$v$}}}\in V(\Omega), {\ a.e. \ \textrm{in} \ (0,T)}$.
Notice that this equation is analogous to the one obtained in step $(ii)$, involving the terms ${e_{i||j}}^1$ instead of the terms ${e_{i||j}}^0$ (see (\[ec\_step2\])). Therefore, using similar arguments, we conclude that $$\begin{aligned} {e_{\alpha||3}}^1(t)=0 { \ \textrm{in} \ }\Omega, {\ \forall \ t\in[0,T]},\end{aligned}$$ and moreover, $$\begin{aligned} {e_{3||3}}^1(t)= - \frac{\theta}{\theta + \rho} \left( a^{\alpha \beta }{e_{\alpha||\beta}}^1(t) + \Lambda\int_0^te^{-k(t-s)}a^{\alpha\beta}{e_{\alpha||\beta}}^1(s) ds \right), { \ \textrm{in} \ }\Omega, {\ \forall \ t\in[0,T]}, \end{aligned}$$ where $\Lambda$ and $k$ are defined in (\[Constantes\]). Furthermore, $$\begin{aligned} {\dot{e}_{3||3}}^1(t)= - \frac{\lambda}{\theta + \rho} a^{\alpha \beta} {e_{\alpha||\beta}}^1(t)- \frac{\lambda + 2\mu}{\theta + \rho} {e_{3||3}}^1(t) - \frac{\theta}{\theta + \rho} a^{\alpha\beta}{\dot{e}_{\alpha||\beta}}^1(t), \end{aligned}$$ ${ \ \textrm{in} \ }\Omega, {\ a.e. \ t\in(0,T)}$. Now, by the definitions in (\[eij\_terminos\_expansion\_u\]), written in terms of $\xi^0_i$ and $\xi^1_i$, and replacing the ${\partial}_\beta b_\alpha^\sigma$ terms from (\[b\_barra\]), after some computations we have that $$\begin{aligned} \nonumber {e_{\alpha||\beta}}^1&= \frac{1}{2} \left( {\partial}_\beta \xi_\alpha^1 + {\partial}_\alpha \xi_\beta^1 \right) - \Gamma_{\alpha\beta}^\sigma \xi_\sigma^1 - b_{\alpha\beta} \xi_3^1 - x_3\left( {\partial}_{\alpha\beta}\xi_3^0 - \Gamma_{\alpha\beta}^\sigma{\partial}_\sigma\xi_3^0 - b_\alpha^\sigma b_{\sigma\beta}\xi_3^0 \right. \\ \label{ec_paso5} &\qquad \left.
+ b_\alpha^\sigma\left({\partial}_\beta\xi_\sigma^0 - \Gamma_{\beta\sigma}^\tau\xi_\tau^0 \right) + b_\beta^\tau\left( {\partial}_\alpha\xi_\tau^0 - \Gamma_{\alpha\tau}^\sigma \xi_\sigma^0 \right)+ b^\tau_{\beta|\alpha}\xi_\tau^0 \right).\end{aligned}$$ Note that if ${\mbox{\boldmath{$\eta$}}}=(\eta_i)\in H^1(\omega)\times H^1(\omega) \times L^2(\omega),$ then (see (\[rab\])) $$\begin{aligned} \rho_{\alpha\beta} ({\mbox{\boldmath{$\eta$}}}) \in L^2(\omega).\end{aligned}$$ Hence, by (\[def\_gab\]) for ${\mbox{\boldmath{$\eta$}}}={\mbox{\boldmath{$\xi$}}}^1(t)$ and (\[rab\]) for ${\mbox{\boldmath{$\eta$}}}={\mbox{\boldmath{$\xi$}}}^0(t)$, it follows from (\[ec\_paso5\]) that $${e_{\alpha||\beta}}^1(t)={\gamma_{\alpha\beta}}({\mbox{\boldmath{$\xi$}}}^1(t))- x_3{\rho_{\alpha\beta}}({\mbox{\boldmath{$\xi$}}}^0(t)) { \ \textrm{in} \ }\Omega, {\ \forall \ t\in[0,T]}.$$ 6. Assume that $V_0(\omega)\neq\{{\mbox{\boldmath{$0$}}}\}$. Let $p=1$ in (\[ecuacion\_orden\_fuerzas\]). Grouping the terms multiplied by ${\varepsilon}$, taking into account steps $(i)$–$(v)$, we have $$\begin{aligned} \nonumber &{\int_{\Omega}}A^{ijkl}(0)\left({e_{k||l}}^1{e_{i||j}}^0({\mbox{\boldmath{$v$}}}) + {e_{k||l}}^2{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}}) \right) \sqrt{a} dx+ {\int_{\Omega}}\tilde{A}^{ijkl,1}{e_{k||l}}^1{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}})dx \\ \nonumber &\qquad + {\int_{\Omega}}B^{ijkl}(0)\left({\dot{e}_{k||l}}^1{e_{i||j}}^0({\mbox{\boldmath{$v$}}}) + {\dot{e}_{k||l}}^2{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}}) \right) \sqrt{a} dx+ {\int_{\Omega}}\tilde{B}^{ijkl,1}{\dot{e}_{k||l}}^1{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}})dx \\ \label{ref3} &\quad ={\int_{\Omega}}f^{i,1}v_i \sqrt{a}dx + {\int_{\Gamma_+\cup\Gamma_-}}h^{i,2} v_i \sqrt{a}d\Gamma,\end{aligned}$$ for all ${\mbox{\boldmath{$v$}}}\in V(\Omega), {\ a.e. \ \textrm{in} \ (0,T)}$.
Taking ${\mbox{\boldmath{$v$}}}={\mbox{\boldmath{$\eta$}}}\in V(\omega)$, that is, ${\mbox{\boldmath{$v$}}}$ independent of $x_3$, by (\[eij\_terminos\_expansion\]) we obtain $$\begin{aligned} &{\int_{\Omega}}A^{ijkl}(0){e_{k||l}}^1{e_{i||j}}^0({\mbox{\boldmath{$\eta$}}}) \sqrt{a} dx + {\int_{\Omega}}B^{ijkl}(0){\dot{e}_{k||l}}^1{e_{i||j}}^0({\mbox{\boldmath{$\eta$}}}) \sqrt{a} dx \\ &\quad ={\int_{\Omega}}f^{i,1}\eta_i \sqrt{a}dx + {\int_{\Gamma_+\cup\Gamma_-}}h^{i,2} \eta_i \sqrt{a}d\Gamma,\end{aligned}$$ for all ${\mbox{\boldmath{$\eta$}}}\in V(\omega), {\ a.e. \ \textrm{in} \ (0,T)}$. Since ${e_{\alpha||3}}^1=0$ by $(v)$ we obtain $$\begin{aligned} &{\int_{\Omega}}A^{ijkl}(0){e_{k||l}}^1{e_{i||j}}^0({\mbox{\boldmath{$\eta$}}}) \sqrt{a} dx + {\int_{\Omega}}B^{ijkl}(0){\dot{e}_{k||l}}^1{e_{i||j}}^0({\mbox{\boldmath{$\eta$}}}) \sqrt{a} dx \\ &\quad ={\int_{\Omega}}\left( \lambda a^{\alpha \beta} a^{\sigma \tau} + \mu (a^{\alpha \sigma}a^{\beta \tau} + a^{\alpha \tau}a^{\beta\sigma}) \right) {e_{\sigma||\tau}}^1 {e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}}) \sqrt{a} dx + {\int_{\Omega}}\lambda a^{\alpha\beta}{e_{3||3}}^1 {e_{\alpha||\beta}}^0 ({\mbox{\boldmath{$\eta$}}})\sqrt{a} dx \\ \nonumber & \qquad + {\int_{\Omega}}\left( \theta a^{\alpha \beta} a^{\sigma \tau} + \frac{\rho}{2} (a^{\alpha \sigma}a^{\beta \tau} + a^{\alpha \tau}a^{\beta\sigma}) \right) {\dot{e}_{\sigma||\tau}}^1 {e_{\alpha||\beta}}^0({\mbox{\boldmath{$\eta$}}}) \sqrt{a} dx + {\int_{\Omega}}\theta a^{\alpha\beta}{\dot{e}_{3||3}}^1 {e_{\alpha||\beta}}^0 ({\mbox{\boldmath{$\eta$}}})\sqrt{a} dx \\ &\quad = {\int_{\Omega}}f^{i,1} \eta_i \sqrt{a} dx + {\int_{\Gamma_+\cup\Gamma_-}}h^{i,2}\eta_i \sqrt{a} d\Gamma ,\end{aligned}$$ for all ${\mbox{\boldmath{$\eta$}}}\in V(\omega), {\ a.e. \ \textrm{in} \ (0,T)}$, which is analogous to the expression obtained in (\[ref2\]).
Therefore, following the same arguments made there, taking into account $(v)$, we find that $$\begin{aligned} \nonumber &{\int_{\omega}}{a^{\alpha\beta\sigma\tau}}{\gamma_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^1){\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy + {\int_{\omega}}{b^{\alpha\beta\sigma\tau}}{\gamma_{\sigma\tau}}(\dot{{\mbox{\boldmath{$\xi$}}}}^1) {\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy \\ \nonumber & \qquad- \int_0^te^{-k(t-s)}{\int_{\omega}}{c^{\alpha\beta\sigma\tau}}{\gamma_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^1(s)){\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy ds \\ \label{et6} &\quad = {\int_{\Omega}}f^{i,1} \eta_i \sqrt{a} dx + {\int_{\Gamma_+\cup\Gamma_-}}h^{i,2}\eta_i \sqrt{a} d\Gamma ,\end{aligned}$$ for all ${\mbox{\boldmath{$\eta$}}}\in V(\omega), {\ a.e. \ \textrm{in} \ (0,T)}$, where the contravariant components of the fourth order two-dimensional tensors ${a^{\alpha\beta\sigma\tau}},{b^{\alpha\beta\sigma\tau}},{c^{\alpha\beta\sigma\tau}}$ are defined in (\[tensor\_a\_bidimensional\])–(\[tensor\_c\_bidimensional\]). Taking ${\mbox{\boldmath{$\eta$}}}\in \left( V_0(\omega)\setminus\{{\mbox{\boldmath{$0$}}}\} \right)$ we have that $$\begin{aligned} {\int_{\Omega}}f^{i,1} \eta_i \sqrt{a} dx + {\int_{\Gamma_+\cup\Gamma_-}}h^{i,2}\eta_i \sqrt{a} d\Gamma =0,\end{aligned}$$ hence, in order to avoid compatibility conditions between the applied forces we must take $f^{i,1}=0$ and $h^{i,2}=0$. 
Therefore, letting ${\mbox{\boldmath{$\eta$}}}={\mbox{\boldmath{$\xi$}}}^1$ in (\[et6\]) leads to $$\begin{aligned} &{\int_{\omega}}{a^{\alpha\beta\sigma\tau}}{\gamma_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^1){\gamma_{\alpha\beta}}({\mbox{\boldmath{$\xi$}}}^1)\sqrt{a}dy + {\int_{\omega}}{b^{\alpha\beta\sigma\tau}}{\gamma_{\sigma\tau}}(\dot{{\mbox{\boldmath{$\xi$}}}}^1) {\gamma_{\alpha\beta}}({\mbox{\boldmath{$\xi$}}}^1)\sqrt{a}dy \\ & \qquad- \int_0^te^{-k(t-s)}{\int_{\omega}}{c^{\alpha\beta\sigma\tau}}{\gamma_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^1(s)){\gamma_{\alpha\beta}}({\mbox{\boldmath{$\xi$}}}^1(t))\sqrt{a}dy ds =0.\end{aligned}$$ By (\[condicion\_inicial\_def\]) and the relation (\[ref1\]) found in step $(v)$, we obtain that $ {\gamma_{\alpha\beta}}({\mbox{\boldmath{$\xi$}}}^1(0))=0$, hence, by Theorem \[teorema\_existencia\_bidimensional\] we deduce that $ {\gamma_{\alpha\beta}}({\mbox{\boldmath{$\xi$}}}^1(t))=0 {\ \forall \ t\in[0,T]}$. Therefore, $$\begin{aligned} {\mbox{\boldmath{$\xi$}}}^1(t)\in V_0(\omega) {\ \forall \ t\in[0,T]}.\end{aligned}$$ 7.
On one hand, coming back to the equation (\[ref3\]), with $f^{i,1}=0$ and $h^{i,2}=0$, leads to $$\begin{aligned} \nonumber &{\int_{\Omega}}A^{ijkl}(0)\left({e_{k||l}}^1{e_{i||j}}^0({\mbox{\boldmath{$v$}}}) + {e_{k||l}}^2{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}}) \right) \sqrt{a} dx+ {\int_{\Omega}}\tilde{A}^{ijkl,1}{e_{k||l}}^1{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}})dx \\ \nonumber &\qquad + {\int_{\Omega}}B^{ijkl}(0)\left({\dot{e}_{k||l}}^1{e_{i||j}}^0({\mbox{\boldmath{$v$}}}) + {\dot{e}_{k||l}}^2{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}}) \right) \sqrt{a} dx+ {\int_{\Omega}}\tilde{B}^{ijkl,1}{\dot{e}_{k||l}}^1{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}})dx =0\end{aligned}$$ Given ${\mbox{\boldmath{$\eta$}}}\in V_F(\omega)$, we define ${\mbox{\boldmath{$v$}}}({\mbox{\boldmath{$\eta$}}})=(v_i({\mbox{\boldmath{$\eta$}}}))$ as $$\begin{aligned} v_\alpha({\mbox{\boldmath{$\eta$}}})&:= x_3 \left( 2b_\alpha^\sigma \eta_\sigma + {\partial}_\alpha\eta_3 \right),\\ v_3({\mbox{\boldmath{$\eta$}}})&:=0,\end{aligned}$$ and take ${\mbox{\boldmath{$v$}}}={\mbox{\boldmath{$v$}}}({\mbox{\boldmath{$\eta$}}})$ in the previous equation, leading to (see (\[eij\_terminos\_expansion\])) $$\begin{aligned} \nonumber &{\int_{\Omega}}A^{ijkl}(0){e_{k||l}}^1{e_{i||j}}^0({\mbox{\boldmath{$v$}}}({\mbox{\boldmath{$\eta$}}})) \sqrt{a}dx + 4{\int_{\Omega}}A^{\alpha 3\sigma 3}(0){e_{\sigma||3}}^2\left( b_\alpha^\tau\eta_\tau + \frac{1}{2} {\partial}_\alpha\eta_3 \right) \sqrt{a} dx \\\nonumber & \qquad + 4{\int_{\Omega}}\tilde{A}^{\alpha 3 \sigma 3,1}{e_{\sigma||3}}^1\left( b_\alpha^\tau\eta_\tau + \frac{1}{2} {\partial}_\alpha\eta_3 \right)dx \\ \nonumber &\qquad +{\int_{\Omega}}B^{ijkl}(0){\dot{e}_{k||l}}^1{e_{i||j}}^0({\mbox{\boldmath{$v$}}}({\mbox{\boldmath{$\eta$}}})) \sqrt{a}dx + 4{\int_{\Omega}}B^{\alpha 3\sigma 3}(0){\dot{e}_{\sigma||3}}^2\left( b_\alpha^\tau\eta_\tau + \frac{1}{2} {\partial}_\alpha\eta_3 \right) \sqrt{a} dx \\ \label{ref4} & \qquad + 4{\int_{\Omega}}\tilde{B}^{\alpha 3 \sigma 
3,1}{\dot{e}_{\sigma||3}}^1\left( b_\alpha^\tau\eta_\tau + \frac{1}{2} {\partial}_\alpha\eta_3 \right)dx =0,\end{aligned}$$ for all ${\mbox{\boldmath{$\eta$}}}\in V_F(\omega), {\ a.e. \ \textrm{in} \ (0,T)}$. On the other hand, let $p=2$ in (\[ecuacion\_orden\_fuerzas\]). Grouping the terms multiplied by ${\varepsilon}^2$ and using steps $(i)$ and $(v)$ we find that $$\begin{aligned} &{\int_{\Omega}}A^{ijkl}(0) \left({e_{k||l}}^1{e_{i||j}}^1({\mbox{\boldmath{$v$}}}) + {e_{k||l}}^2{e_{i||j}}^0({\mbox{\boldmath{$v$}}}) + {e_{k||l}}^{3}{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}}) \right)\sqrt{a}dx \\ &\qquad + {\int_{\Omega}}\tilde{A}^{ijkl,1}\left({e_{k||l}}^1{e_{i||j}}^0({\mbox{\boldmath{$v$}}}) + {e_{k||l}}^2{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}}) \right) dx + {\int_{\Omega}}\tilde{A}^{ijkl,2} {e_{k||l}}^1{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}}) dx \\ & \qquad +{\int_{\Omega}}B^{ijkl}(0) \left({\dot{e}_{k||l}}^1{e_{i||j}}^1({\mbox{\boldmath{$v$}}}) + {\dot{e}_{k||l}}^2{e_{i||j}}^0({\mbox{\boldmath{$v$}}}) + {\dot{e}_{k||l}}^{3}{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}}) \right)\sqrt{a}dx \\ &\qquad + {\int_{\Omega}}\tilde{B}^{ijkl,1}\left({\dot{e}_{k||l}}^1{e_{i||j}}^0({\mbox{\boldmath{$v$}}}) + {\dot{e}_{k||l}}^2{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}}) \right) dx + {\int_{\Omega}}\tilde{B}^{ijkl,2} {\dot{e}_{k||l}}^1{e_{i||j}}^{-1}({\mbox{\boldmath{$v$}}}) dx \\ & \quad = {\int_{\Omega}}f^{i,2} v_i \sqrt{a} dx + {\int_{\Gamma_+\cup\Gamma_-}}h^{i,3}v_i \sqrt{a} d\Gamma ,\end{aligned}$$ for all ${\mbox{\boldmath{$v$}}}\in V(\Omega), {\ a.e. \ \textrm{in} \ (0,T)}$. 
Consider now any ${\mbox{\boldmath{$v$}}}$ which can be identified with a function ${\mbox{\boldmath{$\eta$}}}\in V_F(\omega)$; hence by steps $(i)$, $(v)$ and (\[eij\_terminos\_expansion\]) we have $$\begin{aligned} &{\int_{\Omega}}A^{ijkl}(0) {e_{k||l}}^1{e_{i||j}}^1({\mbox{\boldmath{$\eta$}}})\sqrt{a}dx + 4{\int_{\Omega}}A^{\alpha 3 \sigma 3}(0) {e_{\sigma||3}}^2 \left( b_\alpha^\tau\eta_\tau + \frac{1}{2} {\partial}_\alpha\eta_3 \right) \sqrt{a}dx \\ &\qquad + {\int_{\Omega}}\tilde{A}^{ijkl,1}{e_{\sigma||3}}^1 \left( b_\alpha^\tau\eta_\tau + \frac{1}{2} {\partial}_\alpha\eta_3 \right) dx \\ & \qquad +{\int_{\Omega}}B^{ijkl}(0) {\dot{e}_{k||l}}^1{e_{i||j}}^1({\mbox{\boldmath{$\eta$}}})\sqrt{a}dx + 4{\int_{\Omega}}B^{\alpha 3 \sigma 3}(0) {\dot{e}_{\sigma||3}}^2 \left( b_\alpha^\tau\eta_\tau + \frac{1}{2} {\partial}_\alpha\eta_3 \right) \sqrt{a}dx \\ &\qquad + {\int_{\Omega}}\tilde{B}^{ijkl,1}{\dot{e}_{\sigma||3}}^1 \left( b_\alpha^\tau\eta_\tau + \frac{1}{2} {\partial}_\alpha\eta_3 \right) dx \\ & \quad = {\int_{\omega}}p^{i,2} \eta_i \sqrt{a} dy , \end{aligned}$$ for all ${\mbox{\boldmath{$\eta$}}}\in V_F( \omega), {\ a.e. \ \textrm{in} \ (0,T)}$, where $p^{i,2}$ is defined in (\[p2\]). By subtracting (\[ref4\]), we obtain $$\begin{aligned} \nonumber &{\int_{\Omega}}A^{ijkl}(0) {e_{k||l}}^1 \left({e_{i||j}}^1({\mbox{\boldmath{$\eta$}}}) - {e_{i||j}}^0({\mbox{\boldmath{$v$}}}({\mbox{\boldmath{$\eta$}}})) \right) \sqrt{a}dx + {\int_{\Omega}}B^{ijkl}(0) {\dot{e}_{k||l}}^1 \left({e_{i||j}}^1({\mbox{\boldmath{$\eta$}}}) - {e_{i||j}}^0({\mbox{\boldmath{$v$}}}({\mbox{\boldmath{$\eta$}}})) \right) \sqrt{a}dx \\ \label{ref5} & \quad = {\int_{\omega}}p^{i,2} \eta_i \sqrt{a} dy ,\end{aligned}$$ for all ${\mbox{\boldmath{$\eta$}}}\in V_F( \omega), {\ a.e. 
\ \textrm{in} \ (0,T)}.$ Now, by step $(v)$ and (\[eij\_terminos\_expansion\]) we have that $$\begin{aligned} A^{ijkl}(0){e_{k||l}}^1 \left( {e_{i||j}}^1({\mbox{\boldmath{$\eta$}}}) - {e_{i||j}}^0({\mbox{\boldmath{$v$}}}({\mbox{\boldmath{$\eta$}}})) \right) &= A^{\alpha\beta\sigma\tau}(0){e_{\sigma||\tau}}^1\left({e_{\alpha||\beta}}^1 ({\mbox{\boldmath{$\eta$}}})- {e_{\alpha||\beta}}^0({\mbox{\boldmath{$v$}}}({\mbox{\boldmath{$\eta$}}}) \right) \\ &\quad+ A^{\alpha\beta 33}(0){e_{3||3}}^1\left({e_{\alpha||\beta}}^1 ({\mbox{\boldmath{$\eta$}}})- {e_{\alpha||\beta}}^0({\mbox{\boldmath{$v$}}}({\mbox{\boldmath{$\eta$}}})) \right).\end{aligned}$$ We also have the analogous equality for the components of the viscosity tensor multiplying the time derivatives of the strain components. Moreover, by steps $(v)$ and $(vi)$ we have $$\begin{aligned} \label{est_1_rst} {e_{\sigma||\tau}}^1(t)= -x_3 {\rho_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^0(t)) {\ \forall \ t\in[0,T]}. \end{aligned}$$ Furthermore, by (\[eij\_terminos\_expansion\]) we also find that $$\begin{aligned} {e_{\alpha||\beta}}^1({\mbox{\boldmath{$\eta$}}})-{e_{\alpha||\beta}}^0({\mbox{\boldmath{$v$}}}({\mbox{\boldmath{$\eta$}}}))&= x_3 \left( b^\sigma_{\beta|\alpha} \eta_\sigma + b_\alpha^\sigma b_{\sigma\beta}\eta_3 \right) \\ & \quad- x_3 \left({\partial}_\alpha(b_\beta^\tau \eta_\tau) + {\partial}_\beta(b_\alpha^\sigma\eta_\sigma) + {\partial}_{\alpha\beta}\eta_3 - \Gamma_{\alpha\beta}^\sigma {\partial}_\sigma \eta_3 -2\Gamma_{\alpha\beta}^\sigma b_\sigma^\tau \eta_\tau \right),\end{aligned}$$ and making some calculations we conclude that $$\begin{aligned} {e_{\alpha||\beta}}^1({\mbox{\boldmath{$\eta$}}}) - {e_{\alpha||\beta}}^0({\mbox{\boldmath{$v$}}}({\mbox{\boldmath{$\eta$}}})) = - x_3 {\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}}), \ \forall {\mbox{\boldmath{$\eta$}}}\in V_F(\omega).\end{aligned}$$ Therefore, the left-hand side of the equation (\[ref5\]) leads to $$\begin{aligned} \nonumber 
&{\int_{\Omega}}A^{ijkl}(0) {e_{k||l}}^1 \left({e_{i||j}}^1({\mbox{\boldmath{$\eta$}}}) - {e_{i||j}}^0({\mbox{\boldmath{$v$}}}({\mbox{\boldmath{$\eta$}}})) \right) \sqrt{a}dx + {\int_{\Omega}}B^{ijkl}(0) {\dot{e}_{k||l}}^1 \left({e_{i||j}}^1({\mbox{\boldmath{$\eta$}}}) - {e_{i||j}}^0({\mbox{\boldmath{$v$}}}({\mbox{\boldmath{$\eta$}}})) \right) \sqrt{a}dx \\ \nonumber & \quad ={\int_{\Omega}}\left( \lambda a^{\alpha \beta} a^{\sigma \tau} + \mu (a^{\alpha \sigma}a^{\beta \tau} + a^{\alpha \tau}a^{\beta\sigma}) \right){e_{\sigma||\tau}}^1\left( -x_3 {\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\right) \sqrt{a} dx \\ \nonumber &\qquad + {\int_{\Omega}}\lambda a^{\alpha\beta}{e_{3||3}}^1 \left(-x_3 {\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}}) \right)\sqrt{a} dx \\ \nonumber & \qquad + {\int_{\Omega}}\left( \theta a^{\alpha \beta} a^{\sigma \tau} + \frac{\rho}{2} (a^{\alpha \sigma}a^{\beta \tau} + a^{\alpha \tau}a^{\beta\sigma}) \right) {\dot{e}_{\sigma||\tau}}^1 \left( -x_3 {\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\right) \sqrt{a} dx \\ \label{ref6} &\qquad + {\int_{\Omega}}\theta a^{\alpha\beta}{\dot{e}_{3||3}}^1 \left(-x_3 {\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}}) \right)\sqrt{a} dx .\end{aligned}$$ Now, by the findings in step $(v)$, we have that (\[ref6\]) leads to $$\begin{aligned} &{\int_{\Omega}}\left( \left(\lambda - \frac{\theta}{\theta + \rho} \left( \theta \Lambda + \lambda \right)\right)a^{\alpha\beta}a^{\sigma \tau} + \mu{(a^{\alpha \sigma}a^{\beta \tau} + a^{\alpha \tau}a^{\beta\sigma})}\right) {e_{\sigma||\tau}}^1\left(-x_3 {\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}}) \right)\sqrt{a}dx \\ &\qquad + {\int_{\Omega}}\left( \frac{\theta \rho }{\theta + \rho} a^{\alpha \beta} a^{\sigma \tau} + \frac{\rho}{2}{(a^{\alpha \sigma}a^{\beta \tau} + a^{\alpha \tau}a^{\beta\sigma})}\right) {\dot{e}_{\sigma||\tau}}^1\left(-x_3 {\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}}) \right)\sqrt{a}dx \\ & \qquad - {\int_{\Omega}}\frac{\left( \theta 
\Lambda\right)^2}{\theta + \rho} \int_0^t e^{-k(t-s)}a^{\sigma \tau} {e_{\sigma||\tau}}^1(s) ds \left(-x_3 a^{\alpha \beta} {\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}}) \right) \sqrt{a} dx,\end{aligned}$$ which using (\[est\_1\_rst\]) is equivalent to $$\begin{aligned} &{\int_{\Omega}}\frac{x_3^2}{2}{a^{\alpha\beta\sigma\tau}}{\rho_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^0){\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dx + {\int_{\Omega}}\frac{x_3^2}{2}{b^{\alpha\beta\sigma\tau}}{\rho_{\sigma\tau}}(\dot{{\mbox{\boldmath{$\xi$}}}}^0){\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dx \\ & \qquad- \int_0^te^{-k(t-s)}{\int_{\Omega}}\frac{x_3^2}{2}{c^{\alpha\beta\sigma\tau}}{\rho_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^0(s)){\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dx ds \\ & \quad = \frac{1}{3}{\int_{\omega}}{a^{\alpha\beta\sigma\tau}}{\rho_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^0){\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy + \frac{1}{3}{\int_{\omega}}{b^{\alpha\beta\sigma\tau}}{\rho_{\sigma\tau}}(\dot{{\mbox{\boldmath{$\xi$}}}}^0){\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy \\ & \qquad- \frac{1}{3}\int_0^te^{-k(t-s)}{\int_{\omega}}{c^{\alpha\beta\sigma\tau}}{\rho_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^0(s)){\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy ds, \end{aligned}$$ for all ${\mbox{\boldmath{$\eta$}}}\in V_F(\omega), {\ a.e. \ \textrm{in} \ (0,T)},$ where ${a^{\alpha\beta\sigma\tau}}$ , ${b^{\alpha\beta\sigma\tau}}$ and ${c^{\alpha\beta\sigma\tau}}$ denote the contravariant components of the fourth order two-dimensional tensors, defined in (\[tensor\_a\_bidimensional\])–(\[tensor\_c\_bidimensional\]). 
Hence, we have obtained the following variational problem: Find ${\mbox{\boldmath{$\xi$}}}^0:[0,T] \times\omega \longrightarrow \mathbb{R}^3$ such that $$\begin{aligned} \nonumber &{\mbox{\boldmath{$\xi$}}}^0(t)\in V_F(\omega) {\ \forall \ t\in[0,T]}, \\ \nonumber &\frac{1}{3}{\int_{\omega}}{a^{\alpha\beta\sigma\tau}}{\rho_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^0){\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy + \frac{1}{3}{\int_{\omega}}{b^{\alpha\beta\sigma\tau}}{\rho_{\sigma\tau}}(\dot{{\mbox{\boldmath{$\xi$}}}}^0){\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy \\ \nonumber & \qquad- \frac{1}{3}\int_0^te^{-k(t-s)}{\int_{\omega}}{c^{\alpha\beta\sigma\tau}}{\rho_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^0(s)){\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy ds \\ \label{et1} & \quad= {\int_{\omega}}p^{i,2}\eta_i \sqrt{a}dy \ \forall {\mbox{\boldmath{$\eta$}}}\in V_F(\omega), {\ a.e. \ \textrm{in} \ (0,T)}, \\ \nonumber & {\mbox{\boldmath{$\xi$}}}^0(0,\cdot)={\mbox{\boldmath{$\xi$}}}^0_0(\cdot).\end{aligned}$$ This problem will be referred to as the two-dimensional variational problem for a viscoelastic flexural shell. The variational models found in (\[ec\_paso4\]) and in (\[et1\]) exhibit a long-term memory, represented by an integral in the time variable, which takes the deformations at previous times into account. Notice that the exponentially decaying kernel $e^{-k(t-s)}$ makes older strain states less influential than newer ones. Analogous behavior has been presented in beam models for the bending-stretching of viscoelastic rods [@AV], obtained by using asymptotic methods as well. Also, this kind of viscoelasticity has been described in [@DL; @Pipkin], for example.
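To make the fading-memory effect explicit, consider a purely illustrative strain history held constant in time, say ${\rho_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^0(s))\equiv\bar{\rho}_{\sigma\tau}$ for $s\in[0,t]$ (an assumption made only for this example). The memory integral can then be evaluated in closed form: $$\int_0^t e^{-k(t-s)}\,\bar{\rho}_{\sigma\tau}\, ds=\frac{1-e^{-kt}}{k}\,\bar{\rho}_{\sigma\tau}\longrightarrow\frac{1}{k}\,\bar{\rho}_{\sigma\tau} \quad \textrm{as} \ t\to\infty,$$ so the contribution of the history saturates, while a strain state attained at time $s$ enters the equation with weight $e^{-k(t-s)}$, that is, its influence decays exponentially with the elapsed time $t-s$.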
Existence and uniqueness of the solution of the two-dimensional problems ======================================================================== \[Existencia\] In what follows, we study the existence and uniqueness of the solution of the two-dimensional limit problems found in the previous section: the membrane and flexural shell cases. To that aim, we first give the following result regarding the ellipticity of the fourth order two-dimensional tensors defined by their contravariant components in (\[tensor\_a\_bidimensional\])–(\[tensor\_c\_bidimensional\]). Let $\omega$ be a domain in $\mathbb{R}^2$, let ${\mbox{\boldmath{$\theta$}}}\in\mathcal{C}^1(\bar{\omega};\mathbb{R}^3)$ be an injective mapping such that the two vectors ${\mbox{\boldmath{$a$}}}_\alpha={\partial}_\alpha{\mbox{\boldmath{$\theta$}}}$ are linearly independent at all points of $\bar{\omega}$, let $a^{\alpha\beta}$ denote the contravariant components of the metric tensor of $S={\mbox{\boldmath{$\theta$}}}(\bar{\omega})$. Let us consider the contravariant components of the scaled fourth order two-dimensional tensors of the shell, ${a^{\alpha\beta\sigma\tau}},{b^{\alpha\beta\sigma\tau}},$ defined in (\[tensor\_a\_bidimensional\])–(\[tensor\_b\_bidimensional\]). Assume that $\lambda\geq0$ and $\mu,\theta,\rho>0$. Then there exist two constants $c_e>0$ and $c_v>0$, independent of the variables and of ${\varepsilon}$, such that $$\begin{aligned} \label{tensor_a_elip} \sum_{\alpha,\beta}|t_{\alpha\beta}|^2\leq c_e {a^{\alpha\beta\sigma\tau}}({\mbox{\boldmath{$y$}}})t_{\sigma\tau}t_{\alpha\beta}, \\\label{tensor_b_elip} \sum_{\alpha,\beta}|t_{\alpha\beta}|^2 \leq c_v {b^{\alpha\beta\sigma\tau}}({\mbox{\boldmath{$y$}}})t_{\sigma\tau}t_{\alpha\beta}, \end{aligned}$$ for all ${\mbox{\boldmath{$y$}}}\in\bar{\omega}$ and all ${\mbox{\boldmath{$t$}}}=(t_{\alpha\beta})\in\mathbb{S}^2$. The proof of this result is straightforward, following arguments similar to those in Theorem 3.3-2, [@Ciarlet4b].
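As an illustrative consistency check (not part of the proof), suppose the tensor has the generic form ${a^{\alpha\beta\sigma\tau}}=A\,a^{\alpha\beta}a^{\sigma\tau}+B\left(a^{\alpha\sigma}a^{\beta\tau}+a^{\alpha\tau}a^{\beta\sigma}\right)$ with coefficients $A\geq0$ and $B>0$; this generic form is assumed here only for illustration, the actual coefficients being those of (\[tensor\_a\_bidimensional\]). In the flat case $a^{\alpha\beta}=\delta^{\alpha\beta}$, for every symmetric ${\mbox{\boldmath{$t$}}}=(t_{\alpha\beta})\in\mathbb{S}^2$ we have $$ {a^{\alpha\beta\sigma\tau}}t_{\sigma\tau}t_{\alpha\beta}=A\,(t_{11}+t_{22})^2+2B\sum_{\alpha,\beta}|t_{\alpha\beta}|^2\geq 2B\sum_{\alpha,\beta}|t_{\alpha\beta}|^2,$$ so (\[tensor\_a\_elip\]) holds with $c_e=(2B)^{-1}$; in the general case, the uniform positive definiteness of $(a^{\alpha\beta})$ on $\bar{\omega}$ plays the role of $\delta^{\alpha\beta}$.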
We shall present the limit problems in a de-scaled form. The details of the convergence and the physical interpretation of the solutions of those problems are the subject of forthcoming papers ([@eliptico; @flexural; @generalizada]). There we shall see that, in fact, the subspace which plays the key role in distinguishing viscoelastic membrane shells from viscoelastic flexural shells is $V_F(\omega)$ instead of $V_0(\omega)$, as happened in the elastic case (see [@Ciarlet4b]). Viscoelastic membrane shell --------------------------- Let us first consider that $V_F(\omega)=\{{\mbox{\boldmath{$0$}}}\}$. In order to obtain a well-posed problem we must consider a larger space, the completion of $V(\omega)$, which will be denoted by $V_M(\omega)$. Specifically, we will distinguish the different types of membranes depending on the type of middle surface of the family of shells and on the subset where the boundary condition of place is considered. For example, if the middle surface $S$ is elliptic and $\gamma=\gamma_0$, we take $V_M(\omega):=H^1_0(\omega)\times H^1_0(\omega)\times L^2(\omega)$. For this type of membrane the following two-dimensional Korn-type inequality holds (see, for example, Theorem 2.7-3, [@Ciarlet4b]): there exists a constant $c_M=c_M(\omega,{\mbox{\boldmath{$\theta$}}})$ such that $$\begin{aligned} \label{Korn_elipticas} \left( \sum_\alpha||\eta_\alpha||^2_{1,\omega} + ||\eta_3||_{0,\omega}^2 \right)^{1/2} \leq c_M \left( \sum_{\alpha,\beta} ||{\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})||_{0,\omega}^2 \right)^{1/2} \ \forall {\mbox{\boldmath{$\eta$}}}\in V_M(\omega). \end{aligned}$$ Complete studies will be presented in detail in two forthcoming papers ([@eliptico; @generalizada]).
We can enunciate the de-scaled variational problem for a viscoelastic membrane shell: \[problema\_ab\_eps\] Find ${\mbox{\boldmath{$\xi$}}}^{\varepsilon}:[0,T] \times\omega \longrightarrow \mathbb{R}^3$ such that, $$\begin{aligned} \nonumber & {\mbox{\boldmath{$\xi$}}}^{\varepsilon}(t,\cdot)\in V_M(\omega) {\ \forall \ t\in[0,T]},\\ \nonumber &{\varepsilon}\int_{\omega} {a^{\alpha\beta\sigma\tau,\varepsilon}}{\gamma_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^{\varepsilon}){\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy +{\varepsilon}\int_{\omega}{b^{\alpha\beta\sigma\tau,\varepsilon}}{\gamma_{\sigma\tau}}(\dot{{\mbox{\boldmath{$\xi$}}}}^{\varepsilon}){\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy \\\nonumber & \qquad- {\varepsilon}\int_0^te^{-k(t-s)}{\int_{\omega}}{c^{\alpha\beta\sigma\tau,\varepsilon}}{\gamma_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^{\varepsilon}(s)){\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy ds \\ &\quad=\int_{\omega}p^{i,{\varepsilon}}\eta_i\sqrt{a}dy \ \forall {\mbox{\boldmath{$\eta$}}}=(\eta_i)\in V_M(\omega), {\ a.e. 
\ \textrm{in} \ (0,T)}, \\ &{\mbox{\boldmath{$\xi$}}}^{\varepsilon}(0,\cdot)={\mbox{\boldmath{$\xi$}}}^{\varepsilon}_0(\cdot), \end{aligned}$$ where, $$\begin{aligned} &{\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}}):= \frac{1}{2}({\partial}_\alpha\eta_\beta + {\partial}_\beta\eta_\alpha) - \Gamma_{\alpha\beta}^\sigma\eta_\sigma -b_{\alpha\beta}\eta_3, \\ & p^{i,{\varepsilon}}(t):=\int_{-{\varepsilon}}^{{\varepsilon}}{{{f}}}^{i,{\varepsilon}}(t)dx_3^{\varepsilon}+h_+^{i,{\varepsilon}}(t)+h_-^{i,{\varepsilon}}(t) \ \textrm{and} \ h_{\pm}^{i,{\varepsilon}}(t)={{{h}}}^{i,{\varepsilon}}(t,\cdot,\pm {\varepsilon}), \end{aligned}$$ and where the contravariant components of the fourth order two-dimensional tensors ${a^{\alpha\beta\sigma\tau,\varepsilon}},$ ${b^{\alpha\beta\sigma\tau,\varepsilon}}, $ ${c^{\alpha\beta\sigma\tau,\varepsilon}}$ are defined as rescaled versions of (\[tensor\_a\_bidimensional\])–(\[tensor\_c\_bidimensional\]). The space $V_M(\omega)$ denotes a completion of $V(\omega)$ in which the viscoelastic membrane problem is well posed (to be detailed in forthcoming papers). \[Th\_exist\_unic\_bid\_cero\] Let $\omega$ be a domain in $\mathbb{R}^2$, let ${\mbox{\boldmath{$\theta$}}}\in\mathcal{C}^2(\bar{\omega};\mathbb{R}^3)$ be an injective mapping such that the two vectors ${\mbox{\boldmath{$a$}}}_\alpha={\partial}_\alpha{\mbox{\boldmath{$\theta$}}}$ are linearly independent at all points of $\bar{\omega}$. Let ${{{f}}}^{i,{\varepsilon}}\in L^{2}(0,T; L^2(\Omega^{\varepsilon})) $, ${{{h}}}^{i,{\varepsilon}}\in L^{2}(0,T; L^2(\Gamma_1^{\varepsilon}))$, where $\Gamma_1^{\varepsilon}:= \Gamma_+^{\varepsilon}\cup\Gamma_-^{\varepsilon}$. Let ${\mbox{\boldmath{$\xi$}}}_0^{\varepsilon}\in V_M(\omega). $ Then Problem \[problema\_ab\_eps\] has a unique solution ${\mbox{\boldmath{$\xi$}}}^{\varepsilon}\in W^{1,2}(0,T;V_M(\omega))$.
In addition to that, if $\dot{{{{f}}}}^{i,{\varepsilon}}\in L^{2}(0,T; L^2(\Omega^{\varepsilon})) $, $\dot{{{{h}}}}^{i,{\varepsilon}}\in L^{2}(0,T; L^2(\Gamma_1^{\varepsilon}))$, then ${\mbox{\boldmath{$\xi$}}}^{\varepsilon}\in W^{2,2}(0,T;V_M(\omega))$. Let us consider the bilinear forms $a^{\varepsilon},b^{\varepsilon},c^{\varepsilon}: V_M(\omega)\times V_M(\omega)\longrightarrow \mathbb{R}$ defined by, $$\begin{aligned} a^{\varepsilon}({\mbox{\boldmath{$\xi$}}}^{\varepsilon},{\mbox{\boldmath{$\eta$}}})&: = {\varepsilon}\int_{\omega} {a^{\alpha\beta\sigma\tau,\varepsilon}}{\gamma_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^{\varepsilon}){\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy, \\ b^{\varepsilon}({\mbox{\boldmath{$\xi$}}}^{\varepsilon},{\mbox{\boldmath{$\eta$}}})&:={\varepsilon}\int_{\omega}{b^{\alpha\beta\sigma\tau,\varepsilon}}{\gamma_{\sigma\tau}}({{\mbox{\boldmath{$\xi$}}}}^{\varepsilon}){\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy, \\ c^{\varepsilon}({\mbox{\boldmath{$\xi$}}}^{\varepsilon},{\mbox{\boldmath{$\eta$}}})&:={\varepsilon}{\int_{\omega}}{c^{\alpha\beta\sigma\tau,\varepsilon}}{\gamma_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^{\varepsilon}){\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy ,\end{aligned}$$ for all ${\mbox{\boldmath{$\xi$}}}^{\varepsilon},{\mbox{\boldmath{$\eta$}}}\in V_M(\omega) $ and for each ${\varepsilon}>0$. Therefore Problem \[problema\_ab\_eps\] can be cast into a framework analogous to that of the formulation (\[ecuacion\_operadores\])–(\[condicion\_operadores\]), since $p^{i,{\varepsilon}}\in L^2(0,T;L^2(\omega))$ and by the ellipticity of the two-dimensional tensors in (\[tensor\_a\_elip\])–(\[tensor\_b\_elip\]).
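The key coercivity estimate behind this claim can be sketched as follows for the elliptic-membrane choice of $V_M(\omega)$ above, ignoring for simplicity the uniformly positive weight $\sqrt{a}$ and assuming that the rescaled tensor ${b^{\alpha\beta\sigma\tau,\varepsilon}}$ inherits the ellipticity (\[tensor\_b\_elip\]): for every ${\mbox{\boldmath{$\eta$}}}\in V_M(\omega)$, $$b^{\varepsilon}({\mbox{\boldmath{$\eta$}}},{\mbox{\boldmath{$\eta$}}})\geq \frac{{\varepsilon}}{c_v}\sum_{\alpha,\beta}||{\gamma_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})||^2_{0,\omega}\geq \frac{{\varepsilon}}{c_v\,c_M^2}\left(\sum_\alpha||\eta_\alpha||^2_{1,\omega}+||\eta_3||^2_{0,\omega}\right),$$ where the last step is the Korn-type inequality (\[Korn\_elipticas\]). Hence $b^{\varepsilon}$ is $V_M(\omega)$-elliptic for each fixed ${\varepsilon}>0$, which is the property required of the form multiplying the time derivative in the abstract formulation.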
Therefore, combining a Korn-type inequality (see (\[Korn\_elipticas\]) for the elliptic case) with arguments similar to those in the proof of Theorem \[teorema\_existencia\_bidimensional\], we find that Problem \[problema\_ab\_eps\] has a unique solution ${\mbox{\boldmath{$\xi$}}}^{\varepsilon}\in W^{1,2}(0,T;V_M(\omega))$. Moreover, with the additional regularity of $f^{i,{\varepsilon}}$ and $h^{i,{\varepsilon}}$, we conclude that ${\mbox{\boldmath{$\xi$}}}^{\varepsilon}\in W^{2,2}(0,T;V_M(\omega))$. Viscoelastic flexural shell --------------------------- Let us consider now that the space $ V_F(\omega) $ contains non-zero functions. Therefore, we can enunciate the de-scaled variational problem for a viscoelastic flexural shell: \[problema\_flexural\_eps\] Find ${\mbox{\boldmath{$\xi$}}}^{\varepsilon}:[0,T] \times\omega \longrightarrow \mathbb{R}^3$ such that, $$\begin{aligned} \nonumber & {\mbox{\boldmath{$\xi$}}}^{\varepsilon}(t,\cdot)\in V_F(\omega) {\ \forall \ t\in[0,T]},\\ \nonumber &\frac{{\varepsilon}^3}{3}\int_{\omega} {a^{\alpha\beta\sigma\tau,\varepsilon}}{\rho_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^{\varepsilon}){\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy +\frac{{\varepsilon}^3}{3}\int_{\omega}{b^{\alpha\beta\sigma\tau,\varepsilon}}{\rho_{\sigma\tau}}(\dot{{\mbox{\boldmath{$\xi$}}}}^{\varepsilon}){\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy \\ \nonumber &- \frac{{\varepsilon}^3}{3}\int_0^te^{-k(t-s)}{\int_{\omega}}{c^{\alpha\beta\sigma\tau,\varepsilon}}{\rho_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^{\varepsilon}(s)){\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dyds \\ &\quad=\int_{\omega}p^{i,{\varepsilon}}\eta_i\sqrt{a}dy \ \forall {\mbox{\boldmath{$\eta$}}}=(\eta_i)\in V_F(\omega),{\ a.e.
\ \textrm{in} \ (0,T)}, \\ &{\mbox{\boldmath{$\xi$}}}^{\varepsilon}(0,\cdot)={\mbox{\boldmath{$\xi$}}}^{\varepsilon}_0(\cdot), \end{aligned}$$ where, $$\begin{aligned} &\rho_{\alpha\beta}({\mbox{\boldmath{$\eta$}}}):= {\partial}_{\alpha\beta}\eta_3 - \Gamma_{\alpha\beta}^\sigma {\partial}_\sigma\eta_3 - b_\alpha^\sigma b_{\sigma\beta} \eta_3 + b_\alpha^\sigma ({\partial}_\beta\eta_\sigma- \Gamma_{\beta\sigma}^\tau \eta_\tau) + b_\beta^\tau({\partial}_\alpha\eta_\tau-\Gamma_{\alpha\tau}^\sigma\eta_\sigma ) + b^\tau_{\beta|\alpha} \eta_\tau, \\ & p^{i,{\varepsilon}}(t):=\int_{-{\varepsilon}}^{{\varepsilon}}{{{f}}}^{i,{\varepsilon}}(t)dx_3^{\varepsilon}+h_+^{i,{\varepsilon}}(t)+h_-^{i,{\varepsilon}}(t) \ \textrm{and} \ h_{\pm}^{i,{\varepsilon}}(t)={{{h}}}^{i,{\varepsilon}}(t,\cdot,\pm {\varepsilon}), \end{aligned}$$ and where the contravariant components of the fourth order two-dimensional tensors ${a^{\alpha\beta\sigma\tau,\varepsilon}},$ ${b^{\alpha\beta\sigma\tau,\varepsilon}}, $ ${c^{\alpha\beta\sigma\tau,\varepsilon}}$ are defined as rescaled versions of (\[tensor\_a\_bidimensional\])–(\[tensor\_c\_bidimensional\]). If ${\mbox{\boldmath{$\theta$}}}\in\mathcal{C}^3(\bar{\omega};\mathbb{R}^3)$, the following Korn-type inequality holds (see, for example, Theorem 2.6-4, [@Ciarlet4b]): there exists a constant $c=c(\omega, \gamma_0, {\mbox{\boldmath{$\theta$}}})$ such that $$\begin{aligned} \label{Korn_flexural} \left( \sum_\alpha||\eta_\alpha||^2_{1,\omega} + ||\eta_3||_{2,\omega}^2 \right)^{1/2} \leq c \left( \sum_{\alpha,\beta} ||{\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})||_{0,\omega}^2 \right)^{1/2} \ \forall {\mbox{\boldmath{$\eta$}}}\in V_F(\omega).
\end{aligned}$$ \[Th\_exist\_unic\_bid\_dos\] Let $\omega$ be a domain in $\mathbb{R}^2$, let ${\mbox{\boldmath{$\theta$}}}\in\mathcal{C}^3(\bar{\omega};\mathbb{R}^3)$ be an injective mapping such that the two vectors ${\mbox{\boldmath{$a$}}}_\alpha={\partial}_\alpha{\mbox{\boldmath{$\theta$}}}$ are linearly independent at all points of $\bar{\omega}$. Let ${{{f}}}^{i,{\varepsilon}}\in L^{2}(0,T; L^2(\Omega^{\varepsilon})) $, ${{{h}}}^{i,{\varepsilon}}\in L^{2}(0,T; L^2(\Gamma_1^{\varepsilon}))$, where $\Gamma_1^{\varepsilon}:= \Gamma_+^{\varepsilon}\cup\Gamma_-^{\varepsilon}$. Let ${\mbox{\boldmath{$\xi$}}}_0^{\varepsilon}\in V_F(\omega). $ Then Problem \[problema\_flexural\_eps\] has a unique solution ${\mbox{\boldmath{$\xi$}}}^{\varepsilon}\in W^{1,2}(0,T;V_F(\omega))$. In addition to that, if $\dot{{{{f}}}}^{i,{\varepsilon}}\in L^{2}(0,T; L^2(\Omega^{\varepsilon})) $, $\dot{{{{h}}}}^{i,{\varepsilon}}\in L^{2}(0,T; L^2(\Gamma_1^{\varepsilon}))$, then ${\mbox{\boldmath{$\xi$}}}^{\varepsilon}\in W^{2,2}(0,T;V_F(\omega))$.
Let us consider the bilinear forms $a^{\varepsilon},b^{\varepsilon},c^{\varepsilon}: V_F(\omega)\times V_F(\omega)\longrightarrow \mathbb{R}$ defined by $$\begin{aligned} a^{\varepsilon}({\mbox{\boldmath{$\xi$}}}^{\varepsilon},{\mbox{\boldmath{$\eta$}}})&: = \frac{{\varepsilon}^3}{3}\int_{\omega} {a^{\alpha\beta\sigma\tau,\varepsilon}}{\rho_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^{\varepsilon}){\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy, \\ b^{\varepsilon}({\mbox{\boldmath{$\xi$}}}^{\varepsilon},{\mbox{\boldmath{$\eta$}}})&:=\frac{{\varepsilon}^3}{3}\int_{\omega}{b^{\alpha\beta\sigma\tau,\varepsilon}}{\rho_{\sigma\tau}}({{\mbox{\boldmath{$\xi$}}}}^{\varepsilon}){\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy, \\ c^{\varepsilon}({\mbox{\boldmath{$\xi$}}}^{\varepsilon},{\mbox{\boldmath{$\eta$}}})&:=\frac{{\varepsilon}^3}{3}{\int_{\omega}}{c^{\alpha\beta\sigma\tau,\varepsilon}}{\rho_{\sigma\tau}}({\mbox{\boldmath{$\xi$}}}^{\varepsilon}){\rho_{\alpha\beta}}({\mbox{\boldmath{$\eta$}}})\sqrt{a}dy ,\end{aligned}$$ for all ${\mbox{\boldmath{$\xi$}}}^{\varepsilon},{\mbox{\boldmath{$\eta$}}}\in V_F(\omega) $ and for each ${\varepsilon}>0$. Hence, the Problem \[problema\_flexural\_eps\] can be cast into a framework analogous to that of the formulation (\[ecuacion\_operadores\])–(\[condicion\_operadores\]), since $p^{i,{\varepsilon}}\in L^2(0,T;L^2(\omega))$ and by the ellipticity of the two-dimensional tensors in (\[tensor\_a\_elip\])–(\[tensor\_b\_elip\]). Therefore, combining a Korn-type inequality (see (\[Korn\_flexural\])) with arguments similar to those in the proof of Theorem \[teorema\_existencia\_bidimensional\], we find that the Problem \[problema\_flexural\_eps\] has a unique solution ${\mbox{\boldmath{$\xi$}}}^{\varepsilon}\in W^{1,2}(0,T;V_F(\omega))$.
Moreover, with the additional regularity of $f^{i,{\varepsilon}}$ and ${{{h}}}^{i,{\varepsilon}}$, we conclude that ${\mbox{\boldmath{$\xi$}}}^{\varepsilon}\in W^{2,2}(0,T;V_F(\omega))$. Conclusions =========== \[conclusiones\] We have derived limit two-dimensional models for viscoelastic membrane shells and viscoelastic flexural shells. To this end, we used the asymptotic expansion method to identify the variational equations from the scaled three-dimensional viscoelastic shell problem. We have analyzed the existence and uniqueness of solution for the three-dimensional problems and announced the corresponding results for the two-dimensional limit problems as well. Particularly interesting is that, in the process of passing to the limit, a long-term memory arises naturally (see (\[ec\_paso4\]) and (\[et1\])). Long-term memory is a well-known phenomenon, associated with a variety of viscoelastic materials, that takes into account the deformations at previous times and is represented by an integral in the time variable. Analogous behavior has been presented in beam models for the bending-stretching of viscoelastic rods [@AV], also obtained by using asymptotic methods. This kind of viscoelasticity has been described in [@DL; @Pipkin], for example. Since the viscoelastic case differs from the elastic case in its time-dependent constitutive law and external forces, we must consider the possibility that these models generalize the elastic case (studied in [@Ciarlet4b]). However, as the reader can easily check, when the ordinary differential equation (\[ecuacion\_casuistica\]) was presented, we had to use assumptions that make it impossible to consider the elastic case. For instance, we could try to reduce the viscoelastic model to the elastic case by neglecting the viscosity constants and considering the various functions involved to be stationary.
In Remark \[nota\_desvio\], the last step where these arguments still apply, we show that we would indeed obtain the models derived in [@Ciarlet4b] for the corresponding elastic cases. Nevertheless, beyond that point the viscosity coefficient $\theta$ cannot be zero, so the same proof cannot be carried over. Hence, the viscoelastic and elastic problems must be treated separately in order to reach reasonable and justified conclusions. The asymptotic approaches need to be mathematically justified in order to ensure robust results. To this end, guided by the formal analysis developed in this paper, a deeper and more rigorous study, including convergence theorems, will be presented in forthcoming papers ([@eliptico; @flexural; @generalizada]), regarding the different cases that have appeared in this work. The formal asymptotic procedure carried out in this work has placed the two-dimensional limit equations for the membrane case in spaces where the problems are not well posed, so we need to find completions of these spaces. This will be done by taking into account the type of the middle surface of the family of shells and the subset where the boundary condition of place is considered. Therefore, on one hand, we shall study in [@eliptico] the case when $S$ is elliptic and $\gamma_0=\gamma$, that is, $V_0(\omega)= \{{\mbox{\boldmath{$0$}}}\}$ (which implies $V_F(\omega)=\{{\mbox{\boldmath{$0$}}}\}$). These are known as viscoelastic elliptic membrane shells. On the other hand, in [@generalizada] we shall consider the cases when the membrane is not elliptic or $\gamma_0\neq\gamma$ but still $V_F(\omega)= \{{\mbox{\boldmath{$0$}}}\}$. For these cases, additional spaces must be considered in order to obtain well-posed problems. They are the so-called viscoelastic generalized membranes, where we also distinguish the cases where $V_0(\omega)$ contains only the zero function (first kind) or not (second kind).
Further, regarding the case where the space $V_F(\omega)$ contains non-zero functions, in [@flexural] we shall study the problem of viscoelastic flexural shells. Acknowledgements {#acknowledgements .unnumbered} ================ [This research was partially supported by Ministerio de Economía y Competitividad of Spain, under grants MTM2012-36452-C02-01 and MTM2016-78718-P, with the participation of FEDER.]{} References {#references .unnumbered} ========== [10]{} A. Bermúdez and J. M. Via[ñ]{}o. Une justification des équations de la thermoélasticité de poutres à section variable par des méthodes asymptotiques. , 18(4):347–376, 1984. G. Casti[ñ]{}eira and Rodr[í]{}guez-Ar[ó]{}s. On the justification of the viscoelastic elliptic membrane shell equations. . G. Casti[ñ]{}eira and Rodr[í]{}guez-Ar[ó]{}s. On the justification of the viscoelastic flexural shell equations. . G. Casti[ñ]{}eira and Rodr[í]{}guez-Ar[ó]{}s. On the justification of the viscoelastic generalized membrane shell equations. . P. G. Ciarlet. , volume 20 of [*Studies in Mathematics and its Applications*]{}. North-Holland Publishing Co., Amsterdam, 1988. P. G. Ciarlet. , volume 27 of [*Studies in Mathematics and its Applications*]{}. North-Holland Publishing Co., Amsterdam, 1997. P. G. Ciarlet. , volume 29 of [*Studies in Mathematics and its Applications*]{}. North-Holland Publishing Co., Amsterdam, 2000. P. G. Ciarlet and P. Destuynder. A justification of the two-dimensional linear plate model. , 18(2):315–344, 1979. P. G. Ciarlet and V. Lods. Asymptotic analysis of linearly elastic shells. justification of membrane shell equations. , 136:119–161, 1996. P. G. Ciarlet and V. Lods. On the ellipticity of linear membrane shell equations. , 75:107–124, 1996. A. Cimeti[è]{}re, G. Geymonat, H. Le Dret, A. Raoult, and Z. Tutek. Asymptotic theory and analysis for displacements and stress distribution in nonlinear elastic straight slender rods. , 19(2):111–161, 1988. P. Destuynder. . PhD thesis, Univ. P. et M. 
Curie, Paris, 1980. G. Duvaut and J.-L. Lions. . Springer Berlin, 1976. L. Fushan. Asymptotic analysis of linearly viscoelastic shells. , 36:21–46, 2003. L. Fushan. Asymptotic analysis of linearly viscoelastic shells - justification of flexural shell equations. , 28A:71–84, 2007. L. Fushan. Asymptotic analysis of linearly viscoelastic shells - justification of koiter’s shell equations. , 54:51–70, 2007. H. Irago and J. M. Via[ñ]{}o. Error estimation in the [B]{}ernoulli-[N]{}avier model for elastic rods. , 21(1):71–87, 1999. J. Lemaitre and J. L. Chaboche. . Cambridge University Press, 1990. X. Li-ming. Asymptotic analysis of dynamic problems for linearly elastic shells - justification of equations for dynamic membrane shells. , 17:121–134, 1998. X. Li-ming. Asymptotic analysis of dynamic problems for linearly elastic shells - justification of equations for dynamic flexural shells. , 22B:13–22, 2001. X. Li-ming. Asymptotic analysis of dynamic problems for linearly elastic shells - justification of equations for dynamic koiter shells. , 22B:267–274, 2001. J.-L. Lions. . Lecture Notes in Mathematics, Vol. 323. Springer-Verlag, Berlin-New York, 1973. M.-L. Mascarenhas. Homogenisation of a viscoelastic equation with non-periodic coefficients. , 106:143–160, 1987. A. C. Pipkin. . Springer-Verlag, New York, 1972. . Rodr[í]{}guez-Ar[ó]{}s. Mathematical justification of an elastic elliptic membrane obstacle problem. , accepted, 2016. . Rodr[í]{}guez-Ar[ó]{}s and J. M. Via[ñ]{}o. Mathematical justification of viscoelastic beam models by asymptotic methods. , 370(2):607–634, 2010. . Rodr[í]{}guez-Ar[ó]{}s and J. M. Via[ñ]{}o. Mathematical justification of [K]{}elvin-[V]{}oigt beam models by asymptotic methods. , 63(3):529–556, 2012. E. Sanchez-Palencia. . Springer-Verlag, Berlin-New York, 1980. M. Shillor, M. Sofonea, and J. Telega. , volume 655. Springer Berlin, 2004. M. Sofonea and A. Matei. , volume 398. London Mathematical Society, 2012. L. Trabucho and J. M. Via[ñ]{}o. 
Mathematical modelling of rods. In [*Handbook of numerical analysis, [V]{}ol. [IV]{}*]{}, Handb. Numer. Anal., IV, pages 487–974. North-Holland, Amsterdam, 1996. K. Yosida. . Springer-Verlag, 1966.
--- abstract: 'We present results from the Austral Winter 2003 observing campaign of SPARO, a 450 $\mu$m polarimeter used with a two-meter telescope at South Pole. We mapped large-scale magnetic fields in four Giant Molecular Clouds (GMCs) in the Galactic disk: NGC 6334, the Carina Nebula, G333.6-0.2 and G331.5-0.1. We find a statistically significant correlation of the inferred field directions with the orientation of the Galactic plane. Specifically, three of the four GMCs (NGC 6334 is the exception) have mean field directions that are within 15$\degr$ of the plane. The simplest interpretation is that the field direction tends to be preserved during the process of GMC formation. We have also carried out an analysis of published optical polarimetry data. For the closest of the SPARO GMCs, NGC 6334, we can compare the field direction in the cloud as measured by SPARO with the field direction in a larger region (several hundred pc) surrounding the cloud, as determined from optical polarimetry. For purposes of comparison, we also use optical polarimetry to determine field directions for 9-10 other regions of similar size. We find that the region surrounding NGC 6334 is an outlier in the distribution of field directions determined from optical polarimetry, just as the NGC 6334 cloud is an outlier in the distribution of cloud field directions determined by SPARO. In both cases the field direction corresponding to NGC 6334 is rotated away from the direction of the plane by a large angle ($>$ 60$\degr$). This finding is consistent with our suggestion that field direction tends to be preserved during GMC formation. Finally, we compare the disorder in our magnetic field maps with the disorder seen in magnetic field maps derived from MHD turbulence simulations. We conclude from these comparisons that the magnetic energy density in our clouds is comparable to the turbulent energy density.' author: - 'H. Li, G. S. Griffin, M. Krejny, and G. Novak' - 'R. F. Loewenstein, and M. G. Newcomb' - 'P. G.
Calisse' - 'D. T. Chuss' title: 'Results of SPARO 2003: Mapping Magnetic Fields in Giant Molecular Clouds' --- Introduction ============ It is generally believed that the ionization fraction in interstellar clouds is sufficiently high for flux-freezing to apply, and that interstellar magnetic fields are thus well coupled to the gas. Furthermore, the fields may be strong enough to be dynamically important, and thus may play important roles in star formation. These issues have been reviewed by Crutcher (2004). In addition, as reviewed by Mac Low & Klessen (2004), magnetohydrodynamic (MHD) turbulence is probably an important player in the star formation process. Submillimeter polarimetry is a technique for mapping interstellar magnetic fields (Lazarian 2000) that is well suited for studying dense star-forming clouds. In particular, this method appears to have an advantage over the related technique of optical/near-IR polarimetry of background stars. Specifically, it has been shown by Whittet et al. (2001) and Arce et al. (1998) that grain populations residing in regions that are shielded from the interstellar radiation field have very low polarizing efficiency at optical/near-IR wavelengths. Moderate shielding ($A_{v} = 1-2$ mag) seems to be sufficient to sharply reduce the efficiency. By contrast, when we study the submillimeter emission from highly obscured regions ($A_{v}$ $\sim$ 30 mag) in a dense cloud, we find significant polarization (Crutcher et al. 2004). Thus, submillimeter polarimetry is especially useful for such regions. This discrepancy between optical/near-IR polarization efficiency and submillimeter efficiency, for highly obscured regions, is explained by Cho and Lazarian (2005) as an effect of grain size. Under the assumption that grains are aligned by the radiative torque mechanism (Dolginov 1972, Draine & Weingartner 1996, 1997), they show that in highly obscured regions only the largest grains will be aligned.
They argue that these large aligned grains dominate the submillimeter emission but are relatively less important for the optical/near-IR extinction. Maps of submillimeter/far-IR polarization usually cover relatively small sky areas; typically of order ten square arcminutes (Greaves et al. 2003, Hildebrand 2002, Dotson et al. 2000). Because Giant Molecular Clouds (GMCs) usually extend over much larger sky areas, approaching a square degree, these small-scale maps have not been very useful for probing the large-scale, or global, magnetic field of a GMC. Not every submillimeter polarization map is small on the sky, however: The ARCHEOPS balloon-borne submillimeter polarimeter (Benoît et al. 2004) mapped the degree-scale submillimeter (850 $\mu$m) polarization of large sections of the Galactic disk. Binning their data to a resolution of several degrees, they obtain significant detections of polarization with magnetic field directions generally running parallel to the Galactic plane. Because the large-scale Galactic magnetic field determined from optical polarimetry is also known to run parallel to the plane (Mathewson & Ford 1970), the ARCHEOPS result shows that the Galactic field penetrates into the dense gas that dominates the submillimeter emission. In contrast, the smaller scale far-IR/submillimeter polarization maps of GMCs in the disk show a wide range of inferred field directions and no preferred orientation with respect to the Galactic plane (see Fig. 21 of Hildebrand 2002). The South Pole polarimeter SPARO is optimized for submillimeter polarimetry on angular scales larger than those typically accessible from ground-based submillimeter dishes, though not as large as those studied by ARCHEOPS. With SPARO we can map fields over a significant fraction of a GMC while still resolving field structure within the cloud. Here we present SPARO polarization maps for four GMCs in the Galactic disk.
Each map extends over a sky area corresponding to several hundreds of square arcminutes. Observations and Results ======================== The observations were made at Amundsen-Scott South Pole station, using the Viper telescope (Peterson et al. 2000, Kuo et al. 2004) together with Northwestern University’s submillimeter polarimeter, SPARO (Dotson et al. 1998, Renbarger et al. 2004). Radiation entering SPARO passes through a rotatable half-wave plate, and is then divided into two orthogonal polarization components, each of which is detected by a separate $^3$He-cooled 9-pixel detector array. SPARO thus measures polarization simultaneously at nine sky positions, arranged in a 3 by 3 square pattern. Because the South Pole is an exceptionally good submillimeter site (Lane 1998), SPARO obtains extremely good sensitivity to spatially extended, low surface-brightness emission. SPARO’s first observations, made during Austral Winter 2000, were reported by Novak et al. (2003). Here we present results from our second observing campaign that took place during April–August 2003. For the 2003 observations, the beam FWHM was determined to be 4$\pm$ 0.5 arcmin, and the pixel-to-pixel separation was 3.3 arcmin. The pointing accuracy was $\pm$ 1 arcmin. SPARO’s spectral passband is centered at $\lambda_0 = $ 450 $\mu$m, with fractional bandwidth $\Delta\lambda/\lambda_0=$ 0.10. Our data acquisition scheme involves carrying out standard “photometric integrations” at each of six half-wave plate rotation angles, successively (Hildebrand et al. 2000). Each photometric integration, in turn, involves rapidly switching the array footprint back and forth on the sky between the source position and a reference position. Reference signals are subtracted from source signals. We used two reference positions (Hildebrand et al. 2000), separated from the source position by +0.65$\degr$ and $-$0.65$\degr$ in cross-elevation, respectively. Our data reduction procedures follow closely those described by Hildebrand et al.
(2000) with one important difference. Hildebrand et al. (2000) describes how the “polarization signal” is computed from each photometric integration (see above) by taking the difference between signals from corresponding pixels of the two detector arrays, and then dividing this by the sum of these signals (see their equation 7). This corresponds to dividing polarized flux by total flux. We found that for our observations the total flux was often not well determined by a single photometric integration, due to variations in atmospheric emission significantly above the level of the photon noise. These variations are usually referred to as “sky noise”, and they affect the total flux measurement but not the difference signals. In order to circumvent the problem, we developed a new method for computing the polarization signal, which we now describe. We grouped our data into “sets” consisting of nine identical “half-wave plate cycles”. As described above, each such cycle involved six photometric integrations, each taken at a different half-wave plate angle. For each set, we computed a single value for the total flux, or signal sum, for each of the nine pixels. We also computed, for each pixel, six values of the difference signal, one for each half-wave plate position. These six difference signals were then divided by the set-average total flux for the corresponding pixel. Our new technique for determining the polarization signal requires stable atmospheric transmission over time scales of about one hour, corresponding to the duration of one set. On the basis of opacity measurements obtained with the CMU/NRAO tipper (Peterson et al. 2003), we estimate that signals were stable to $\pm$ 10%. A more detailed description of the new data analysis procedure is given by Li et al. (2005), and a similar procedure that was applied to data from the Hertz polarimeter is described by Kirby et al. (2005).
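The set-based normalization just described can be sketched in a few lines of Python. This is only a schematic reconstruction of the procedure in the text (nine half-wave plate cycles per set, six angles per cycle, nine pixels); the function name and array layout are assumptions, not the actual SPARO pipeline of Li et al. (2005).

```python
import numpy as np

def polarization_signals(diff, total):
    """Normalize difference signals by a set-average total flux.

    diff, total : arrays of shape (9, 6, 9) giving, for one "set",
    the difference and sum signals per (cycle, HWP angle, pixel).
    Returns the six normalized polarization signals for each pixel.
    """
    # One total-flux value per pixel for the whole set; averaging over
    # cycles and HWP angles suppresses "sky noise" in the denominator.
    set_avg_total = total.mean(axis=(0, 1))   # shape (9,): per pixel
    # Average the difference signal over the nine cycles, per HWP angle.
    mean_diff = diff.mean(axis=0)             # shape (6, 9)
    return mean_diff / set_avg_total
```

The point of the construction is that atmospheric-emission fluctuations enter the sum signals but largely cancel in the differences, so only the stable set-averaged total flux appears in the denominator.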
The instrumental polarization was determined by calibrating on the Moon and on the intensity peak of Sgr B2, following the same procedure that we used for our 2000 observations (Novak et al. 2003, Chuss 2002). The level of systematic error in our measurements is $<$ 0.3%, which translates into an uncertainty in polarization angle of $< (9\degr)(1.0\%/P)$. We observed four GMCs: NGC 6334, the Carina Nebula, G333.6-0.2, and G331.5-0.1. Three of our targets were chosen for their high column density and large angular extent as determined from the dust opacity map of Schlegel et al. (1998). This map is derived by combining COBE/DIRBE and IRAS/ISSA maps. One of our targets, Carina, has somewhat lower dust opacity than the others. It was chosen for its high elevation ($\sim 60\degr $ at S. Pole) which corresponds to higher atmospheric transmission. In Tables 1 – 4, we give the measured degree and angle of polarization and associated statistical errors for all sky positions having P $> 2\sigma _{P}$. Note that $\sim 80\%$ of the measurements have P $> 3\sigma _{P}$. All measured degrees of polarization are larger than 0.3%, i.e. they are above the level of systematic error. Figure 1 shows the inferred magnetic field directions (that are orthogonal to the measured polarization directions) and the magnitudes of polarization (denoted by the lengths of the bars) superposed on IRAS 100 $\mu$m maps. Discussion ========== In this section, we first give an overview of the characteristics of the four GMCs we observed (§ 3.1). Then we discuss the measured degrees of polarization (§ 3.2). In the remaining four sections we deal with the inferred magnetic field directions, beginning with an overview of the statistics of these directions (§ 3.3), next comparing with mid-IR intensity maps (§ 3.4) and with information obtained from optical polarimetry of stars (§ 3.5), and finally discussing the relevance to theoretical ideas concerning formation and structure of GMCs (§ 3.6).
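For reference, converting measured (normalized) Stokes parameters into a degree and angle of polarization of the kind reported in Tables 1 – 4, with a first-order noise-bias correction and the P $> 2\sigma_{P}$ cut, might look like the following sketch. This is a generic textbook recipe, not necessarily the exact estimator used by the authors.

```python
import numpy as np

def degree_and_angle(q, u, sigma):
    """Illustrative polarization estimates from normalized Stokes q, u.

    sigma is the (assumed common) uncertainty on q and u.  Returns the
    debiased degree of polarization, the E-vector position angle in
    degrees, and a boolean mask implementing the P > 2*sigma_P cut.
    The inferred field direction is this angle rotated by 90 degrees.
    """
    p_raw = np.hypot(q, u)
    # First-order debiasing: noise biases the measured P upward.
    p = np.sqrt(np.maximum(p_raw**2 - sigma**2, 0.0))
    angle = 0.5 * np.degrees(np.arctan2(u, q))
    keep = p > 2.0 * sigma
    return p, angle, keep
```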
Characteristics of the observed clouds {#bozomath} -------------------------------------- ### The Carina Nebula Perhaps the best studied of our four targets is the Carina Nebula (NGC 3372), a very bright emission nebula excited by OB star clusters and lying at a distance of about 2.7 kpc (§ 3.1.5) in the Carina spiral arm. Optical nebulosity and mid-IR emission extend over $\sim 3\degr$ in Galactic latitude and $\sim 2\degr$ in longitude (Smith et al. 2000), but most of the far-IR and radio continuum flux is concentrated in the central $1\degr \times 1\degr$ region shown in Figure 1 (Shaver & Goss 1970; Zhang et al. 2001). The associated molecular clouds are referred to collectively as the Carina Molecular Cloud Complex (Zhang et al. 2001). This complex is elongated in a direction roughly parallel to the Galactic plane, with an extent of about three degrees ($\thicksim$ 140 pc) in Galactic longitude. The complex has a mass of about $7 \times 10^{5}$ $M_{\odot}$ (Grabelsky et al. 1988). The bright central radio peak of the Carina Nebula consists of two main parts, Carina I and Carina II (Davidson & Humphreys 1997; Whiteoak 1994; Brooks, Storey, & Whiteoak 2001), whose sky locations are indicated in Figure 1 with filled and open star symbols, respectively. Carina I (a.k.a. G287.4-0.6) is powered by the OB star cluster Tr 14, located a few arcminutes to the East of Carina I. Carina II (a.k.a. G287.6-0.6) is powered by another OB cluster, Tr 16, that contains Eta Carinae, a very massive ($>$ 100 $M_{\odot}$) evolved star. Eta Carinae is visible in Figure 1 as the flux peak located a few arcminutes SouthEast of Carina II, and this unusual star can also be seen in Figure 5, as the dominant source near the lower left corner of the Carina map. Tr 14 and Tr 16 contain large numbers of unusually massive stars. Six of the 17 known O3-type stars in our Galaxy are found in these two clusters (Davidson & Humphreys 1997).
Tr 14, which powers the region we mapped polarimetrically, is estimated to be only about one million years old (Walborn 1995). There is little evidence for ongoing formation of massive stars within the region encompassed by Tr 14, Tr 16, Carina I, and Carina II (Davidson & Humphreys 1997), but evidence for ongoing star formation has been found surrounding this region, especially to the SouthEast of Eta Carinae (Megeath et al. 1996, Smith et al. 2000). For this reason, Carina is often cited as an example of sequential star formation (de Graauw et al. 1981, Smith et al. 2000). ### G333.6-0.2 and NGC 6334 The HII region G333.6-0.2, although unimpressive at optical wavelengths, is one of the brightest radio sources in the Southern sky (Goss & Shaver 1970). When its infrared counterpart was discovered by Becklin et al. (1973), they noted that its luminosity in the 1 - 25 $\mu$m range was unsurpassed by any other HII region. G333.6-0.2 is the bright peak at (0, 0) in Figure 1. McGee, Newton, & Butler (1979) mapped many other radio sources near this peak. These are often collectively referred to as “the $l = 333\degr$ complex” (de Graauw et al. 1981). At least four of these other sources can be seen in Figure 1, but only about half of the emission from the $l = 333\degr$ complex falls within the boundaries of Figure 1, as the complex extends beyond the Southern and Western edges of the image. Maps of the associated CO emission have been obtained by de Graauw et al. (1981) and Bronfman et al. (1989). The radial velocity of the $l = 333\degr$ complex is about -50 km/s and the distance is about 3.0 kpc (Sollins & Megeath 2004; § 3.1.5). The complex is elongated parallel to the Galactic plane (McGee et al. 1979; Russeil et al. 2005; Fig. 1) and has an extent of $\sim$ 1.5$\degr$ in longitude (estimated from either radio or molecular maps) corresponding to 80 pc. Cheung et al. (1980) estimate a mass of $10^{5}$ $M_{\odot}$ for a region $\thicksim$ 4 pc in extent centered on G333.6-0.2.
The total mass of the complex must be much larger, perhaps comparable to that of the Carina Molecular Cloud Complex (§ 3.1.1). NGC 6334 is an optically visible HII region lying at a distance of 1.7 kpc (Neckel 1978; § 3.2). The radio map of Goss & Shaver (1970) shows ionized gas distributed over a region $\thicksim$ 10 pc in size, while the optical nebulosity extends over a region about 20 pc in extent (Gardner & Whiteoak 1975; Straw & Hyland 1989). The extent of the associated molecular gas is also about 20 pc (Dickel, Dickel, & Wilson 1977). From these maps we see that gas associated with NGC 6334 may not extend much beyond the region shown in Figure 1. Based on the CO map of Dickel et al. (1977), Straw & Hyland (1989) estimate a mass of $1.5 \times 10^{5}$ $M_{\odot}$ for the entire cloud. Comparing the size and mass of NGC 6334 with values given above for the Carina molecular cloud complex and the $l = 333\degr$ complex, we see that NGC 6334 may be somewhat smaller and less massive. Sollins & Megeath (2004) studied both G333.6-0.2 and NGC 6334. For both sources they present strong evidence of ongoing formation of massive stars. In NGC 6334, they studied the source NGC 6334 I, which lies toward the Northern edge of the dense ridge of emission seen in Figure 1. For both G333.6-0.2 and NGC 6334 I, they find molecular cores containing $\thicksim$ $10^{5}$ $M_{\odot}$ of dense ($\ga$ $10^{6}$ $cm^{-3}$) gas, as well as young HII regions. Specifically, G333.6-0.2 corresponds to a compact HII region and NGC 6334 I to an ultra-compact HII region. Based on their distances (§ 3.1.5) and on the Galactic spiral arm model of Taylor & Cordes (1993), NGC 6334 and G333.6-0.2 are located in the Carina-Sagittarius and Scutum-Crux spiral arms, respectively. ### G331.5-0.1 Of our four targets, G331.5-0.1 is the one that has been studied least.
This HII region and six other nearby radio sources form a group having an extent of about a degree in Galactic longitude, and about 0.5$\degr$ in latitude (Amaral & Abraham 1991). The group is centered near $l = 331\degr$. Several of these radio sources can be seen in Figure 1, and several others lie beyond the Southern and Western edges of the image. Not all the members of this group have the same line-of-sight velocity, so it is generally believed that the group is a superposition of two unrelated complexes (Russeil et al. 2005, Amaral & Abraham 1991, Caswell & Haynes 1987). Both complexes are elongated parallel to the Galactic plane, and they are separated not only in velocity but also in Galactic latitude. The complex containing G331.5-0.1, which is seen at (0, 0) in Figure 1, has generally more positive latitudes and a more negative velocity. This complex is referred to by Russeil et al. (2005) as “the -87 km/s complex”. The other complex, which contains the source G331.3-0.3, seen at (0.0, -0.3) in Figure 1, has generally more negative latitude and a less negative velocity. It is referred to by Russeil et al. (2005) as the -65 km/s complex. Except for G331.3-0.3, all major flux peaks seen in Figure 1 correspond to the -87 km/s complex. For the most part, sky locations where we obtained polarimetry data correspond to the -87 km/s complex. About seven vectors lying at the SouthEast corner of our polarization map are closer to G331.3-0.3 than to any source in the -87 km/s complex, so they probably correspond to the -65 km/s complex. Note that many of these vectors are more nearly perpendicular to the Galactic plane, while the vectors for the rest of the map are more nearly parallel to it. The distance to the -87 km/s complex is still uncertain, but Russeil et al. (2005) argue for a distance of 5.3 kpc (§ 3.1.5). The -65 km/s complex lies at 4.2 kpc, according to these authors.
Molecular emission corresponding to the -87 km/s complex can be seen in the CO maps of Bronfman et al. (1989). The size and shape of this CO structure are similar to those measured for the $l = 333\degr$ complex that contains G333.6-0.2 (§ 3.1.2) but the integrated intensity is somewhat lower. Factoring in the greater distance, the molecular mass of the -87 km/s complex could be comparable to that of the $l = 333\degr$ complex. Despite the overall paucity of observations for G331.5-0.1, evidence for ongoing formation of massive stars does seem to have been found. For example, at the position of this source we find a massive ammonia core (Vilas-Boas & Abraham 2000), CS emission (Bronfman, Nyman, & May 1996), and strong methanol maser emission (Walsh et al. 1997). ### Comparison of the four GMCs observed All four SPARO targets are associated with massive GMCs ($10^{5}$ - $10^{6}$ $M_{\odot}$) and ongoing or recent formation of massive stars. Using IRAS results as their database, Kuiper et al. (1987) did a comparative study of 65 of the brightest southern molecular clouds in the Galaxy, including our four targets. They calculated 60/100 $\mu$m color temperatures, and column densities. For NGC 6334, G333.6-0.2, and G331.5-0.1, temperatures were in the range 40-43 K. These values are near the peak of the distribution for the 65 sources. For the Carina Nebula, a color temperature of 48 K was found. This value lies in the upper tail of the distribution. Column densities for Carina were significantly lower than for our other three targets. One explanation for these differences is that Carina is more evolved than the others, so more gas has been dispersed and more energy has been released to heat the dust. This would be consistent with the observed fact that massive stars do not seem to be presently forming within the region corresponding to our polarimetric map of Carina (§ 3.1.1).
### Methods used to determine cloud distances For purposes of comparing our results with optical polarimetry (§ 3.5) and estimating physical scales corresponding to the SPARO beam size (§ 3.6) we require accurate estimates of the distances to our targets. Distance determinations for these targets fall into two categories: stellar distance estimates (e.g. spectroscopic parallax) and kinematic distance estimates, based on the Galactic rotation curve. The latter solution is multi-valued, providing a “near distance” and a “far distance”. Using the spectroscopic parallax method, Neckel (1978) obtained distance estimates for OB stars presumed to be associated with NGC 6334. On this basis, they derive a distance of ($1.7 \pm 0.3$) kpc for this cloud. The star clusters Tr 16 and Tr 14 that power the Carina Nebula are separated by about 15 pc on the sky. The maps in Smith et al. (2000) show that this nebula is reasonably isolated on the sky, so it seems likely that the two clusters lie at roughly the same distance. Distances to these clusters and to specific stars within them have been obtained by Carraro et al. (2004), Davidson et al. (2001), Freyhammer et al. (2001), Rauw et al. (2001), Tapia et al. (2003), and Vazquez et al. (1996). These six independent estimates have a mean of 2.7 kpc and a standard deviation of 0.4 kpc. Our distance estimate does not change significantly if we drop the requirement that the two clusters must lie at the same distance and instead simply estimate the distance to Tr 14, which is the cluster that powers the far-IR flux peak that we mapped polarimetrically with SPARO. For G333.6-0.2 and G331.5-0.1, distance estimates are kinematic. For G333.6-0.2, we adopt the value 3.0 kpc, given by Sollins & Megeath (2004) and by Colgan et al. (1993). Other values found in the recent literature are 3.5 kpc (Russeil et al. 2005) and 2.8 kpc (Vilas-Boas & Abraham 2000). The far kinematic distance of about 11 kpc can be ruled out because Russeil et al. 
(2005) detect optical emission from six sources in the $l = 333\degr$ complex (which they refer to as “CO cloud I”). For G331.5-0.1, it is again the near distance that is almost always quoted, but in this case it is not as easy to rule out the far distance. The strongest discriminant seems to be the comparison of absorption and emission spectra carried out by Dickey et al. (2003), which places G331.5-0.1 at the near kinematic distance; this distance is given by Russeil et al. (2005) as 5.3 kpc.

Measured polarization magnitudes
--------------------------------

Hildebrand et al. (1999) show the distribution of measured magnitudes of 350 μm polarization for a large sample of measurements, obtained from polarization maps that generally sample smaller spatial scales in comparison with our SPARO maps. The mean magnitude in the Hildebrand et al. (1999) sample is 1.6%, while ours is slightly higher at 2.0%. Several factors could account for this difference. First, note that in Figure 1, the higher degrees of polarization are usually found in regions with lower column density, a trend that has also been seen in smaller-scale maps (Matthews et al., 2000; Henning et al., 2001; Lai et al., 2002; Crutcher et al., 2004). Since the SPARO data sample regions of generally lower column density in comparison with the regions studied by Hildebrand et al. (1999), this effect could explain why SPARO sees higher polarization. On the other hand, SPARO’s large beam averages over more field disorder, which would tend to reduce the polarization magnitude. Another effect that could be important is our improved data analysis method (§ 2; Li et al. 2005), which removes the bias toward low polarization magnitude. Finally, note that Vaillancourt (2002) presents evidence for structure in the polarization spectrum, so the modest wavelength difference (350 μm vs. 450 μm) could be important.
Statistics of the inferred magnetic field directions
----------------------------------------------------

Figure 2 shows the distribution of projected magnetic field position angles, for the entire sample of measurements for all four clouds. The position angle is given in Galactic coordinates, with 0° corresponding to Galactic North-South, and with position angle increasing in the counter-clockwise direction. Note that the measurements tend to cluster near the dark vertical line at position angle 90°, corresponding to magnetic fields running parallel to the Galactic plane. Relatively few measurements are found near the left- and right-hand extremes of the histogram, corresponding to fields orthogonal to the plane. We can explain this by supposing that large-scale magnetic fields in GMCs are generally parallel to the even larger scale Galactic fields in the regions surrounding these clouds, which are known to run preferentially parallel to the Galactic plane (Mathewson & Ford 1970). Much of the remainder of this paper is devoted to exploring the validity of this conclusion, and its implications. As a first step, we break the histogram in Figure 2 into four separate histograms, one for each of our four target GMCs. This is shown in Figure 3, where we also show vertical dotted lines indicating the mean position angle for each cloud. The method we use to calculate this mean is described in the paragraphs below. For three of the four clouds, the mean field direction is nearly parallel to the Galactic plane, but for NGC 6334 the mean is almost perpendicular to the plane. Because the field direction measurements wrap around, with 0° = 180°, the mean position angle is actually undefined. However, it is clear that each cloud has a well defined peak in the distribution of angles (Fig. 3). We have devised a technique for estimating the position of this peak, based on the formalism of Stokes’ parameters (e.g., see Jackson 1999).
Two of the four Stokes’ parameters, Q and U, contain information about the state of linear polarization of a light beam. They are related to the intensity of linearly polarized flux $I_{PF}$ and angle of polarization $\phi$ via $$\begin{aligned} Q = I_{PF}\cos(2\phi)\ ;\ U = I_{PF}\sin(2\phi). \end{aligned}$$ Our method is as follows: For a given cloud, we first compute $Q_{i}$ and $U_{i}$ for each sky position $i$ where we have detected polarization. Note that by summing these to form $Q_{sum}$ and $U_{sum}$, one can determine the polarization state that would be measured by an imaginary polarimeter that acts by combining all flux from the region studied by SPARO and making one polarization measurement on it. However, since our goal is to study the global field of each cloud, we do not want regions having high column density to be given more weight than regions having low column density. Furthermore, we do not want regions having high grain alignment efficiency to be given more weight than regions with low efficiency. Accordingly, to ensure that each sky position will be given equal weight, we compute $$\begin{aligned} Q'_{i} \equiv\ Q_{i} / I_{PF}\ ;\ U'_{i} \equiv\ U_{i} / I_{PF}, \end{aligned}$$ and then average all values of $Q'_{i}$ and $U'_{i}$ for a given cloud to form $\bar{Q'}$ and $\bar{U'}$. From these “equal weight” average values we can derive a mean polarization angle for the cloud using the formalism in equation (1). We refer to the mean magnetic field direction obtained in this way as the “equal weight Stokes mean”, $\theta_{ewsm}$. Note that because the total flux and degree of polarization cancel out in equation (2), $\theta_{ewsm}$ is actually determined using only the measured angles of polarization. No other information is required. Our method for computing $\theta_{ewsm}$ also yields another parameter that we refer to as the “order parameter”, defined as o.p. = $\sqrt{(\bar{Q'})^2 + (\bar{U'})^2}$.
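As a concrete illustration, here is a minimal sketch (Python with NumPy; the function name and interface are ours, not from the paper) of the equal-weight Stokes mean and order parameter defined above:

```python
import numpy as np

def equal_weight_stokes_mean(angles_deg):
    """Equal-weight Stokes mean of a set of polarization position
    angles (degrees), plus the order parameter o.p.

    Each measurement is given unit polarized flux, so only the
    measured angles matter: Q'_i = cos(2*phi_i), U'_i = sin(2*phi_i).
    """
    phi = np.radians(np.asarray(angles_deg, dtype=float))
    q_bar = np.mean(np.cos(2.0 * phi))   # mean of Q'_i
    u_bar = np.mean(np.sin(2.0 * phi))   # mean of U'_i
    # Invert equation (1) for the mean angle; position angles wrap
    # at 180 degrees, so map the result into [0, 180).
    theta_ewsm = np.degrees(0.5 * np.arctan2(u_bar, q_bar)) % 180.0
    order_param = np.hypot(q_bar, u_bar)
    return theta_ewsm, order_param
```

For identical input angles the order parameter is 1, and for angles uniformly covering 180° it vanishes, matching the limiting cases described in the text that follows.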
For a set of polarization measurements all having the same position angle, we obtain o.p. = 1. For a set of measurements having position angles uniformly spaced over a full 180°, we obtain o.p. = 0. We will use this parameter in our analysis of optical polarimetry data (§ 3.5). Note that for three of our four clouds, the mean field angle $\theta_{ewsm}$ is within 15° of the Galactic plane (Figs. 3 & 4). For a set of clouds having a random distribution of $\theta_{ewsm}$ we would expect on average only one cloud in six to have $\theta_{ewsm}$ within 15° of the plane. If we choose four random angles between 0° and 180°, the probability of obtaining at least three of the four within the interval $90\degr \pm 15\degr$ is only $(4 \times 5 + 1)/6^{4} = 0.0162$. Therefore this is unlikely to be a chance alignment. The most likely explanation is the one we advanced in the first paragraph of this section: Large-scale magnetic fields in GMCs are preferentially parallel to the fields in the even larger scale regions of the Galaxy that surround these clouds. We explore this issue further in § 3.5, where we also examine the case of the discrepant cloud NGC 6334.

Comparisons with MSX maps
-------------------------

Polycyclic aromatic hydrocarbon molecules (PAHs) can be used as a tracer of massive star formation (Peeters 2004). These molecules absorb far-UV photons emitted by stars and transfer the energy into vibrational modes associated with molecular stretching and bending. The PAHs then cool by emitting IR radiation, mostly in the wavelength range 3–11 μm. The 8 μm emission shown in Figure 5 is mostly, though not entirely, due to PAHs. The main exception is that some of the point sources seen in these images correspond to stars surrounded by hot dust. The 3–11 μm PAH spectrum depends significantly on the charge states (see Fig. 2 of Allamandola et al.
1999; Peeters 2002), which are determined by the ratio of the illuminating UV radiation field to the electron density, $G_{0}/n_{e}$. In regions having a high $G_{0}/n_{e}$ ratio the ionized states of the PAHs dominate, while for a low $G_{0}/n_{e}$ ratio it is the neutral states that are more common. In the ionized state, the intensity of the 8 μm emission is much higher. In crossing the boundary of an HII region, $n_{e}$ changes dramatically but $G_{0}$ changes relatively little. The result is a sharp change in the 8 μm brightness at the boundary of an HII region. This effect can be seen in the 8 μm map of Carina shown in Figure 5. The shapes of the bipolar HII bubbles seen in this figure are very close to shapes seen in the superbubble simulations of Silich & Franco (1999). OB stars ionize their surroundings, and the ionization pressure wave propagates outwards from the star-forming region. The speed of propagation is inversely proportional to the density of the ambient interstellar medium, so HII regions usually grow fastest along the direction perpendicular to the Galactic plane. This explains the bipolar morphology of the HII bubbles (Silich & Franco 1999, Smith et al. 2000). Note that the magnetic field directions we determined for Carina closely follow the boundaries of the two expanding HII bubbles (Fig. 5). This can be understood as the effect of the expansion of the bubble upon the magnetic field. In this model, we must assume that the expansion of the ionization front is accompanied by bulk gas motion, such that gas is generally flowing in a direction that points away from the center of the bubble. This is reasonable as the ionized gas will be heated and will tend to expand. In this case the gas just beyond the edge of each bubble will be compressed in a direction perpendicular to the bubble edge. Under flux-freezing conditions, such compression will have a strong effect on the magnetic field.
The result will be that the field will tend to be parallel to the compression front (e.g., see Novak et al. 2000), precisely as we observe in Carina. However, there is an alternative explanation that could account for the observed parallelism between the edges of the bubble and the field without requiring a compression front. To see this, note that gas expansion, under flux-freezing conditions, will always cause distortions in the ambient magnetic field regardless of the existence or non-existence of a compression front external to the bubble. A concrete example is provided by the model of Tomisaka (1998), where it is the magnetic tension rather than external gas pressure that acts to resist the expansion of the bubble. The bubble-like structures seen in Carina are much more striking than the structures seen in the three other 8 μm maps shown in Figure 5. This may indicate that, in comparison with our other three targets, Carina has suffered a relatively greater amount of disruption induced by star formation. If so, that would explain why Carina shows more variation in inferred magnetic field direction in comparison with the other SPARO targets (Fig. 3). This interpretation is consistent with what we learned from the comparison between the four clouds (§ 3.1.4). High-resolution radio polarization measurements of galaxies show that the magnetic field is well aligned with spiral arms (Neininger 1992, Neininger & Horellou 1996, Patrickeyev et al. 2005). Because field disorder is magnified when observed along the field direction, one might argue that the lesser degree of uniformity of the fields in Carina is a result of the near-coincidence of the line-of-sight and the tangent to the Carina spiral arm at the location of the cloud (the angle between the line-of-sight and the arm tangent is $\sim$ 20°, much smaller than for NGC 6334 or G333.6-0.2).
However, the strong correlation between the fields and the bubble boundaries shows that the main reason for the field disorder is star formation. G333.6-0.2 also contains bipolar PAH bubbles. These can be seen in the image shown in Figure 5, at positions (0.10, -0.05) and (0.0, 0.05). They are much easier to see in the recently obtained Spitzer/GLIMPSE images (B. Whitney, private communication). In contrast to the case of Carina, our polarization map for G333.6-0.2 covers a region significantly larger than the area containing the bubbles. The field in G333.6-0.2 is basically uniform, but note that the field lines running near the “Galactic North” edge of the northern bubble appear to be pushed outwards by the bubble. NGC 6334 also shows bubble-like structures in Figure 5, but the strongest such structures occur at locations for which we have no SPARO polarization data.

Comparison with stellar polarization data
-----------------------------------------

It has been shown that Galactic magnetic fields tend to be parallel to the Galactic disk on large scales. For example, optical polarization measurements for stars more distant than 2 kpc show inferred field directions that are mostly parallel to the plane (Mathewson & Ford 1970). As discussed in § 3.3, our observations suggest that on the scales of our SPARO polarization maps (much smaller than 1 kpc), this tendency is still evident. This can be understood if the processes that act to accumulate diffuse gas and thereby form a GMC do not significantly alter the mean magnetic field direction in the gas. But in this case, how do we account for the outlier NGC 6334? One answer to this question is suggested by the fact that even though the large-scale Galactic field is parallel to the disk, there are nevertheless spatial fluctuations in the Galactic field that occur on scales that are larger than the size of a GMC (e.g., Mathewson & Ford 1970).
If NGC 6334 happened to form in a region where such fluctuations had resulted in a field significantly different from the large-scale average, then this could explain the discrepancy. In principle, this idea can be tested by comparing SPARO results for each cloud with the field direction in the surrounding diffuse medium as sampled by optical polarimetry of stars. In practice, we have only been able to do this for NGC 6334, the closest of our four targets. Here we describe the results of this analysis, which was carried out using a stellar polarimetry database published by Heiles (2000). Since this database was created by combining many previous polarization catalogs, it is referred to by the author as an “agglomeration”. When computing the field direction in the surrounding diffuse medium, we must choose a length scale to characterize this larger region. The natural choice is the accumulation length, defined so that the cube of this length corresponds to the volume of diffuse ISM containing a gas mass equal to the mass of a GMC. Williams et al. (2000) estimate an accumulation length of 400 pc for the Galactic disk. A stellar polarization measurement gives the mean field direction, weighted by the density of dust particles, along the line of sight. To determine the field in a particular region, one must subtract the foreground polarization, inferred from foreground stars, from that measured for background stars (e.g., Marraco et al. 1993). In our analysis of the Heiles (2000) database, we carry out such subtractions. Stellar distance measurements are important because we will rely on them to define the foreground and background stars for a given region. Based on comparisons among the various catalogs that he has agglomerated, Heiles (2000) estimates a typical distance uncertainty of about 20%.

### Evidence of Malmquist bias in the Heiles database

Besides stellar distance, another factor that is relevant to the problem of foreground-effect subtraction is extinction.
For the Heiles (2000) database, Figure 6 shows how the selective extinction, E(B-V), varies with distance. Each point represents the mean E(B-V) for stars within a 100 pc distance interval. In making this figure, we have included only stars having $\left|b\right|\ \leq\ 0.1\degr$, where $b$ is the Galactic latitude. The figure shows that the selective extinction grows nearly linearly up to $\sim\ 1.5$ kpc, with a slope close to the 0.6 mag/kpc value given by Spitzer (1978). Beyond this distance, the extinction tends to level off. There are three effects that could contribute to this flattening of the extinction curve: (1) There is more dust nearer to the Sun than further from it (i.e., we live within a local density enhancement). (2) The database contains Malmquist bias. This refers to the fact that in a flux-limited sample, observers preferentially see atypically bright objects at larger distances. Two factors that affect brightness are luminosity and extinction. From the point of view of this discussion, the latter possibility is interesting, because if there is a bias toward lower extinction at large distance then this can explain the flattening of the extinction curve. (3) For a fixed latitude, sight-lines to more distant stars extend to greater heights above the Galactic plane, where there is less dust. Because we have restricted $b$ to $\pm\ 0.1\degr$, we can be reasonably sure that explanation (3) does not play a large role. This is because no sight line in our sample extends further than 10 pc from the Galactic plane, for distances up to 3 kpc, and 10 pc is much smaller than the scale height of the HI gas (Spitzer 1978). Given that the dust density should be highest at the location of the spiral arms, and that the Sun is situated about half-way between two such arms, each about 2 kpc distant (Xu et al. 2006), it seems highly unlikely that explanation (1) could account for all of the flattening seen in Figure 6. Thus, explanation (2) must play a significant role.
Malmquist bias can introduce uncertainties in the correction for foreground polarization. For this reason, we believe that using the data of Heiles (2000) to study regions much beyond 1.5 kpc will give unreliable results. We have chosen to study regions as far as 1.7 kpc so that we can include the vicinity of NGC 6334 (d = 1.7 kpc; § 3.1.5). The next closest source, Carina, lies at a distance of 2.7 kpc, well into the flat part of the extinction curve. In § 3.5.4, we show that Malmquist bias does not affect our main conclusions for regions as far as 1.7 kpc.

### Choice of cells for analysis

In our analysis of the Heiles (2000) database, we divide the nearby ($d \leq\ 1.7$ kpc) regions of the Galactic plane into “cells” with dimensions roughly corresponding to the accumulation length ($\sim\ 400$ pc; second paragraph of § 3.5). One such cell is centered on NGC 6334. For each cell, we collect data from Heiles (2000) and apply a correction for foreground polarization in order to estimate the mean magnetic field direction within each cell. For the NGC 6334 cell, we compare this result with the mean field direction inferred from the SPARO data. The other cells serve as comparison regions. Our cells are cuboids in ($l$, $b$, $d$)-space, where $l$ and $b$ are Galactic longitude and latitude and $d$ is distance from the Sun. All cells are centered on $b = 0\degr$, and centered on one of three possible values of $d$: 0.7 kpc, 1.2 kpc, and 1.7 kpc. As viewed from far above the Galactic plane, they tile the local region of the Galactic disk, forming three complete 360° rings of cells, with the whole pattern centered on the Sun. The angular size of a cell along its $l$-dimension decreases with distance so that the corresponding spatial scale (measured in pc) remains approximately constant. Thus, the number of cells in a given ring increases with the $d$-value of that ring. The cells’ angular size along the $b$-dimension also decreases with distance, for the same reason.
The cells’ linear sizes deviate somewhat from 400 pc, especially along the $b$ dimension, for reasons explained below. For each cell, we construct foreground (background) cells that are similar to the main cell but centered on the near (far) face of the main cell (see Fig. 7). As described in the next section, the polarization measurements for “foreground stars” (stars lying within the foreground cell) are used to correct the polarization measurements for each “background star” (star lying in the background cell), thus providing a set of rough estimates for the stellar polarization induced by dust lying within the main cell. As noted above, our intent was to have cells with dimensions corresponding to the accumulation scale, estimated to be 400 pc. We modified the in-plane spatial footprint of each cell, ($\delta l, \delta d$), from (400 pc, 400 pc) to (300 pc, 500 pc) in order to take into account the uncertainty in stellar distances. This is why the three rings are separated by 500 pc in $d$. At the distance of our furthest background cells, centered at $d$ = (1.7 + 0.25 = 1.95) kpc, the 20% distance uncertainty translates to approximately 400 pc. Thus, even after increasing $\delta d$ from 400 to 500 pc, distance uncertainty is still an important limitation to the accuracy of our analysis. However, increasing $\delta d$ to 600 pc or higher while preserving non-overlapping cells would have resulted in a sample containing only two rings of cells rather than three, as there would not have been enough space for the foreground cells corresponding to the closest ring of main cells. For each ring, we adjusted $\delta l$ slightly from 300 pc so that the full 360° are covered by an integer number of cells. For the 1.7 kpc ring, the longitudinal dividing lines of the cells are chosen so that one cell is centered on NGC 6334. For the two closer rings, we have arbitrarily chosen $l = 0\degr$ as a dividing line.
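In the small-angle approximation, the angular footprint of a ring's cells follows directly from the linear footprint and the ring distance. A quick check (Python; the function name is ours) recovers the nominal widths before each ring's slight adjustment of $\delta l$:

```python
import math

def angular_size_deg(linear_pc, distance_pc):
    """Angle (degrees) subtended by a linear extent at a given
    distance, using the small-angle approximation."""
    return math.degrees(linear_pc / distance_pc)

# Nominal 300 pc in-plane footprint at the three ring distances:
# about 24.6, 14.3, and 10.1 degrees at 0.7, 1.2, and 1.7 kpc,
# before the per-ring adjustment that makes an integer number of
# cells tile the full 360 degrees.
ring_widths = {d: angular_size_deg(300.0, d) for d in (700.0, 1200.0, 1700.0)}
```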
The size of the cells along the direction perpendicular to the Galactic plane ($\delta b$) has been set at 120 pc. The reason that we chose such a relatively small value for $\delta b$ is related to the scale height of Galactic gas. Since the thickness of the HI disk is 240 pc (Spitzer 1978), setting $\delta b$ to 400 pc would have resulted in the inclusion of relatively high-latitude background stars having relatively little dust within the main cell. The polarization induced by this relatively small column of dust would have been very small, so the inferred foreground-corrected polarization angles corresponding to these stars would have been very uncertain. Because our method for determining the mean magnetic field angle for each cell (§ 3.5.3) gives equal weight to each background star, the use of $\delta b$ = 400 pc would have thus resulted in loss of accuracy. We chose $\delta b$ to equal half of the thickness of the HI disk in order to preferentially sample the denser part of this gas layer. Note also that all four of the GMCs observed with SPARO lie within 30 pc of the Galactic plane. The resulting $\delta l \times\ \delta b$ sizes are $24\degr\ \times\ 10\degr$, $15\degr\ \times\ 6\degr$, and $\ 10\degr\ \times\ 4\degr$ for rings at $d$ = 0.7, 1.2, and 1.7 kpc, respectively. Finally, note that for the NGC 6334 cell we have explored the effect of increasing $\delta b$ (§ 3.5.5). ### Selection and processing of stellar polarization data for a single cell In order to correct for the effects of foreground polarization, we use the method introduced by Marraco et al. (1993). For each star in the background cell, we first collect a subset of the stars in the foreground cell, chosen to have sky coordinates reasonably close to those of the background star (Fig. 7). 
Then linear functions of $l$ and $b$, $q(l,b) = c_{1} + c_{2}l + c_{3}b$ and $u(l,b) = c_{4} + c_{5}l + c_{6}b$, are fit to the normalized Stokes parameters, $q_{f}$ and $u_{f}$, of the subset of foreground stars. Next, the contributions of the dust in the main cell to the Stokes parameters of the background star, $q_{b}$ and $u_{b}$, are estimated by $q_{b}-q(l_{o},b_{o})$ and $u_{b}-u(l_{o},b_{o})$, where ($l_{o},b_{o}$) are the coordinates of the background star. In this way we can estimate the polarization introduced to the background star by dust in the particular main cell under study. We will refer to this as the polarization residue, and it contains information about the magnetic field direction in the cell. During the analysis, several criteria are used to reject stars. First, stars with nonpositive values for selective extinction are not used. Also, a background star is rejected if it has fewer than four corresponding foreground stars (Fig. 7). Although three stars are sufficient for the fit, we found that occasionally they will lie close to a straight line on the sky thus making the fit very unreliable. A background star having polarization residue below 0.2% is also rejected as being too close to the 0.1% accuracy of the Heiles (2000) database. For three cells in our sample, we show in Figure 8 the polarization residues for all unrejected background stars. The mean field direction for a given cell is estimated by the equal-weight-Stokes mean $\theta _{ewsm}$ (§ 3.3) of the polarization residues. Cells containing fewer than five polarization residues are rejected. 22 of the 75 cells in our study survive this cut. A further cut is made based on the order parameter (o.p.; § 3.3). Figure 8 illustrates how the order parameter is related to the degree of uniformity in the field directions given by the polarization residues. We see that for the cell with o.p. 
= 0.21, one would not conclude that the overall field direction for the cell is constrained in any way, so the mean field direction determined for this cell has little meaning. For the cell with o.p. = 0.41, however, the vectors suggest that there is a well-defined mean field direction. The criterion we used is o.p. $\geq$ 0.3, which results in rejection of half of the remaining 22 cells. Using the analysis described above, we obtain mean field directions for 11 cells, and their distribution is shown in Figure 9. More than half of the cells have magnetic field angles within 30° of the Galactic plane, and the equal weight Stokes’ mean of these 11 angles is 103°, which is reasonably close to the direction of the Galactic plane. The cell corresponding to NGC 6334 is shaded in Figure 9. Note that NGC 6334 falls in the same bin in this figure as it did in Figure 4, where we showed the distribution of the GMC field directions determined by SPARO. The polarization residues for the NGC 6334 cell are shown in the top panel of Figure 8.

### Correction for the effects of Malmquist bias

As discussed in § 3.5.1, there is evidence for Malmquist bias in the Heiles (2000) database. If we assume that this bias is the only effect contributing to the flattening of the extinction curve of Figure 6, then we can derive a “bias correction” that we can apply to the Heiles database in order to remove the effects of the Malmquist bias. Although it is not clear that all of the flattening is due to Malmquist bias (§ 3.5.1), we will nevertheless apply this correction in order to get a rough idea of the possible effects of the bias on our analysis. Our bias correction is based on the reasonable assumption that the bias affects the normalized Stokes parameters $q$ and $u$ in the same way that it affects the measured values of selective extinction.
The correction works as follows: First we extend an extrapolation of the linear portion of the extinction curve over the full range of distances (see dotted line in Fig.  6). We assume that this would be the measured extinction curve in the absence of bias. Next, we multiply all of the normalized Stokes’ parameters measured for stars having $d \geq 1.5$ kpc by a correction factor that depends on distance and is determined by taking the ratio of the extinction value given by the extrapolation (dotted line) to that given by the data themselves (estimated by the horizontal line). If we apply this bias correction to the Heiles’ data before using it in our analysis, we find that 10 cells (rather than 11) survive the various cuts. The resulting distribution of magnetic field directions for these cells is shown in Figure 10. Note that the peak corresponding to fields parallel to the Galactic plane is somewhat stronger. The NGC 6334 cell is again indicated as a shaded box, and its field direction is essentially unchanged. (Note that many of the stars we use in our analysis have distances less than 1.5 kpc, so for these stars the bias correction has no effect.) There are two reasons to suspect that our correction for Malmquist bias may be too strong. First, if Malmquist bias is not the only reason for the flattening of the extinction curve, then we will be over-correcting. The second reason is that some of the cuts we made on the data may have served to eliminate cells with relatively stronger Malmquist bias. (This might be expected, as bias tends to dilute the “real” polarization residues, due to actual grain alignment by real magnetic fields, with spurious polarization effects due to bias.) This in turn might be expected to reduce the agreement among vectors in a given cell and lead to rejection of the cell. In this case, applying the correction to the surviving cells, that have relatively weaker Malmquist bias, will result in over-correction. 
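A minimal sketch of the correction described above (Python; the function name is ours, the 0.6 mag/kpc slope is the Spitzer 1978 value quoted in § 3.5.1, and treating the flat portion of the curve as a constant equal to its 1.5 kpc value is our reading of the "horizontal line" estimate):

```python
def bias_corrected_stokes(q, u, d_kpc, slope=0.6, d_break=1.5):
    """Scale a star's normalized Stokes parameters by the ratio of the
    extrapolated linear extinction, slope * d, to the flattened
    observed value, slope * d_break (taken as constant beyond the
    break). Stars nearer than the break are left unchanged."""
    if d_kpc < d_break:
        return q, u
    factor = (slope * d_kpc) / (slope * d_break)  # reduces to d_kpc / d_break
    return q * factor, u * factor
```

For example, a star at 1.95 kpc has its $q$ and $u$ boosted by a factor of 1.3, while stars inside 1.5 kpc are untouched, consistent with the note above that the correction has no effect on them.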
Keeping these possibilities in mind, all we can conclude from our analysis is that our best estimate for the true distribution of field directions probably lies somewhere between the histograms of Figures 9 and 10. In § 3.3, we noted that three of our four GMCs have mean magnetic field directions lying very close to the Galactic plane, and that the probability of this being due to chance alignment is very low. There remained the problem of understanding the outlier, NGC 6334. The result of our study of the local Galactic field on $\sim 400$ pc scales is that the NGC 6334 region again turns out to be an outlier. Specifically, in terms of the angular displacement of its field from the orientation of the Galactic plane, the NGC 6334 cell is either the third highest of 11 (Fig. 9) or the highest of 10 (Fig. 10), depending on whether or not the bias correction is applied. We conclude that the most likely explanation for the discrepant SPARO result for NGC 6334 is that the cloud formed in a somewhat unusual region of the Galaxy, where the field direction (on $\sim\ 400$ pc scales) is nearly perpendicular to the Galactic plane. As a whole, our results for the four clouds suggest that the mean field direction inside a GMC is roughly parallel to that in a surrounding region having size approximately given by the accumulation length.

### Dependence on assumed distance of NGC 6334 and cell size

We concluded above that the cell corresponding to NGC 6334 is an outlier with respect to the distribution of cell field directions. Next we show that this conclusion is reasonably robust in the sense that it does not depend sensitively on the precise distance assumed for NGC 6334, nor on the precise cell dimensions. As noted in § 3.2, the uncertainty in the distance to NGC 6334 is given as 0.3 kpc. This is due to systematic, not statistical, errors (Neckel 1978).
We have repeated the analysis for this cell while displacing its center in 100 pc increments along the line-of-sight away from the nominal value of 1.7 kpc. For distance values of 1.5, 1.6, 1.7, 1.8, and 1.9 kpc, we find magnetic field directions (measured counter-clockwise from Galactic North) of -21°, 5°, 20°, 37°, and -23°. For cells centered at 1.4 kpc and 2.0 kpc, there are fewer than 5 background stars, which is below our threshold for analysis as discussed in § 3.5.3. Although there is thus significant variation with distance, in all five cases the field direction is closer to being perpendicular to the Galactic plane than to being parallel to the plane. We changed the size of the cell centered on NGC 6334 to examine the effect on the mean field direction. Shrinking any of the three dimensions to half the original size will not change the mean field direction by more than 8°. Increasing the size along either the line-of-sight or latitude directions will not change the mean field direction by more than 15°. Finally, if we increase the dimension along the longitudinal direction by a factor of two, the o.p. drops from 0.41 to 0.25, leading to rejection of the cell (see § 3.5.3).

Comparisons with simulations
----------------------------

### Estimating magnetic field strength from the degree of field disorder

Better constraints on magnetic field strengths in molecular clouds are needed in order to constrain theories of star formation (Crutcher 2004). In principle, the method of Chandrasekhar & Fermi (1953) can be used to derive the field strength from the dispersion in measured submillimeter polarization directions (Ostriker, Gammie, & Stone 2001, Heitsch et al. 2001). In practice the effects of beam dilution complicate this issue (Heitsch et al. 2001, Houde 2004), and this problem is especially severe for SPARO due to the 4′ beam size.
For this reason, we will instead make a direct comparison of the degree of disorder in field angle seen in our measurements with that seen in the three-dimensional MHD turbulence simulations of GMCs presented by Ostriker et al. (2001), after first smoothing the latter to the coarse resolution of SPARO. We will again use the order parameter (o.p.; § 3.3) to quantify the disorder in magnetic field direction. Ostriker et al. (2001) use their numerical simulations to follow the time evolution of initially smooth, self-gravitating, isothermal gas. The initial velocity field corresponds to Kolmogorov-type turbulence with Mach number equal to 14. The initial magnetic field is uniform, but three models are considered with three different field strengths corresponding to $\beta$ = 1, 0.1, and 0.01, where $\beta$ = $c_{s}^2/v_{a}^2$, $c_{s}$ is the sound velocity, and $v_{a}$ is the Alfvén velocity. During the evolution, the turbulence decays and the Mach number drops. Snapshots of density, velocity, and magnetic field structure are shown for various Mach numbers. Ostriker et al. (2001) give simulations of optical polarization measurements for background stars observed through their simulated clouds. It is assumed that all grains along the line of sight have equal polarization efficiency. Two such optical polarization maps are given, both corresponding to M = 7, with $\beta$ = 1 and 0.01, respectively (Figs. 22 and 23 of Ostriker et al. 2001). The corresponding $\beta$ = 0.1 map is not shown, but the authors note that it is very similar to the $\beta$ = 1 case. We have obtained the $\beta$ = 0.1 map from E. Ostriker (private communication), and we average the o.p. values obtained from the $\beta$ = 1 and $\beta$ = 0.1 maps, referring to the average as the o.p. for the “$\beta \geq 0.1$ case”.
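Smoothing such maps to SPARO resolution requires combining many polarization vectors into one; as discussed below, the vectors must be summed as polarized flux via Stokes parameters, since position angles are degenerate by 180°. A minimal sketch of that binning step (equal weighting assumed; the function and its inputs are illustrative, not code from this work):

```python
import math

def bin_polarization(p_list, psi_deg_list):
    """Combine polarization vectors as polarized flux via Stokes parameters.

    The 180-degree degeneracy of the position angle psi is handled by
    the standard 2*psi doubling.  Returns (p, psi_deg) for the binned
    vector, with equal weights assumed for all inputs.
    """
    q = sum(p * math.cos(2.0 * math.radians(psi))
            for p, psi in zip(p_list, psi_deg_list))
    u = sum(p * math.sin(2.0 * math.radians(psi))
            for p, psi in zip(p_list, psi_deg_list))
    n = len(p_list)
    p_bin = math.hypot(q, u) / n
    psi_bin = 0.5 * math.degrees(math.atan2(u, q)) % 180.0
    return p_bin, psi_bin

# Aligned vectors combine coherently; orthogonal angles cancel:
print(bin_polarization([1.0, 1.0], [30.0, 30.0]))  # p ~ 1, psi ~ 30 deg
print(bin_polarization([1.0, 1.0], [0.0, 90.0]))   # p -> 0
```

Averaging the angles directly (without the $2\psi$ doubling) would give wrong results near the 0°/180° wrap, which is why the Stokes-parameter route is used.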
For grains aligned by a uniform magnetic field, the degree of optical polarization is linearly proportional to column density, while the submillimeter polarization magnitude has no dependence on column density (for low optical depth). It is therefore the submillimeter polarized flux, rather than the submillimeter polarization magnitude, that will generally be proportional to the optical polarization magnitude. Thus, when we bin optical polarization vectors to SPARO’s resolution, we should combine them as if they were polarized flux, using the method of Stokes parameters. In principle, we should combine Stokes parameters derived from each simulation resolution element that lies within a 4′ SPARO beam, but Ostriker et al. (2001) only give optical polarization vectors for a 12 $\times$ 12 grid of points. However, this grid is dense enough to provide many optical vectors per SPARO beam (see below), so it should be sufficient for reasonable estimates. The length L of each side of the cubical simulation box of Ostriker et al. (2001) is not well defined, but by assuming a temperature of 10 K and a mean density of 100 cm$^{-3}$ they obtain L = 8 pc for the M = 7 simulations. For NGC 6334, where our beam size is approximately 2 pc, we thus obtain 16 simulated SPARO vectors from each simulated cloud map (each vector is derived from binning nine simulated optical vectors). The resulting o.p. values are 0.97 and 0.15 for $\beta$ = 0.01 and $\beta \geq 0.1$, respectively. For G333.6-0.2, we obtain only four simulated SPARO vectors (each comes from binning 36 simulated optical vectors) and we find o.p. values of 0.98 and 0.35 for the low and high $\beta$ cases, respectively. G331.5-0.1 is too distant for meaningful comparisons. Our results for the Carina nebula are not suitable for comparison with the turbulence simulations of Ostriker et al. (2001).
The reason is that these authors simulated the inertial range of the turbulence, with a Kolmogorov-type energy spectrum over the entire 8 pc simulation cube. Our observation of Carina, on the other hand, shows field structure determined by the $\sim$ 10 pc scale curvature of the HII bubbles (Fig. 5), not a turbulent cascade in the inertial range. The field structure is the result of energy injection, with bubbles depositing energy directly into the field on $\sim$ 10 pc scales. The simulated turbulence maps correspond to the case where the initial field direction is perpendicular to the line of sight, so the o.p. values we have derived from the simulations are in fact upper limits. Fig. 24 of Ostriker et al. (2001) shows that the angle between the field and the line of sight has a dramatic effect on the degree of disorder in the inferred field directions. From our maps of NGC 6334 and G333.6-0.2, we obtain o.p. values of 0.73 and 0.80, respectively, indicating that $\beta$ = 0.01 is preferred over $\beta \geq 0.1$. The product of $\beta$ with the square of the Mach number gives the “turbulent beta”, $\beta_{t}$, which provides a direct comparison of magnetic and turbulent energy densities. We see that our comparison strongly favors $\beta_{t}$ = 0.5 over $\beta_{t} \geq 5$, implying that the magnetic energy density is comparable to the turbulent energy density. There are two main limitations to our comparison of SPARO data with simulations. First, there is relatively little overlap between the smaller spatial scales of the simulations and the larger scales of the SPARO data. Second, the assumption that all grains have equal polarizing efficiency is probably wrong (Cho & Lazarian 2005). Despite these questions, it seems difficult to reconcile our observations of reasonably uniform fields with models having very weak fields.
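The “turbulent beta” comparison above is simple arithmetic: with $\beta_{t} = \beta M^2$ and the M = 7 snapshots used for the comparison, $\beta$ = 0.01 gives $\beta_{t} = 0.49 \approx 0.5$, while $\beta$ = 0.1 gives $\beta_{t} = 4.9 \approx 5$. A one-line check:

```python
def beta_turbulent(beta, mach):
    # beta_t = beta * M^2 compares turbulent to magnetic energy density
    return beta * mach ** 2

mach = 7  # Mach number of the simulation snapshots used for comparison
print(beta_turbulent(0.01, mach))  # strong-field model: ~0.5
print(beta_turbulent(0.1, mach))   # weakest of the high-beta models: ~5
```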
### Comparing fields inside GMCs with larger scale Galactic fields We showed in § 3.3 that three of our four target GMCs have mean field directions, determined from SPARO data, that are within 15° of the Galactic plane. As we noted, the probability for obtaining this result by pure chance is below 2%. This suggests a correlation between GMC fields and the larger-scale Galactic field. NGC 6334 is the outlier in our sample of four GMCs, having its mean field rotated 70° clockwise from the plane. In § 3.5 we used the Heiles (2000) stellar polarization database to examine Galactic fields on scales roughly corresponding to the GMC accumulation length ($\sim$ 400 pc). We obtained mean field directions for a set of 10-11 regions, one of which is centered on the closest of our four targets, NGC 6334; none of the other SPARO targets is close enough to obtain reliable determinations of field direction from the Heiles database. Within this sample of 10-11 regions, we again find a tendency for the “accumulation-scale field” to be parallel to the Galactic plane (Figs. 9 and 10), and we find that NGC 6334 is again an outlier in the distribution, having its “accumulation-scale field” more aligned with the Galactic latitude direction than with the Galactic longitude direction. Both the tendency for fields in GMCs to align with the Galactic plane and the agreement of the field direction in NGC 6334 with the direction of the Galactic field local to NGC 6334 can be explained by supposing that fields in GMCs are generally parallel to the local Galactic fields. Is this effect seen in simulations of GMC formation? MHD simulations of the process whereby gas uniformly spread through a galactic disk becomes concentrated into GMCs via self-gravitating instabilities have been carried out, for example, by Kim & Ostriker (2001) and Kim, Ostriker, & Stone (2003).
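As an aside, the sub-2% chance probability quoted above can be reproduced with a simple binomial estimate (a sketch under the assumption of independent, uniformly random position angles; not necessarily the paper's exact calculation): a direction within 15° of the plane covers 30° of the 180° range of position angles, giving a per-cloud probability of 1/6.

```python
from math import comb

def p_at_least(k, n, p):
    """Probability of >= k successes in n independent Bernoulli trials."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Within 15 deg of the plane spans 30 deg of the 180-deg angle range,
# so p = 1/6 per cloud; ask for at least 3 aligned clouds out of 4.
p_chance = p_at_least(3, 4, 1.0 / 6.0)
print(p_chance)  # 21/1296 ~ 0.016, i.e. below 2%
```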
The work by Kim & Ostriker (2001) included only two dimensions (the thin-disk limit), and the particular instabilities that they studied for the Galactic disk (as opposed to the Galactic center) took too long to develop. They were thus judged to be unlikely candidates for GMC formation unless they could be enhanced by additional agents. Nevertheless, for purposes of comparison with our observations we note that their model M23 (Fig. 13 of their paper) shows continuity and alignment between fields inside dense clouds and those in the lower density surrounding medium, while their model M10 (Fig. 12) shows no such correlation. The main difference is the field strength. In principle, comparisons of such models with our results could test theories for GMC formation, but in practice such comparisons may lead to ambiguous results for present simulations. The problem is that the simulations seem to produce only clouds having angular momentum perpendicular to the Galactic plane. For the 2d simulations this is true by definition, but even in the 3d simulations of Kim et al. (2003) we find that the angular momentum of collapsing gas clouds is orthogonal to the plane, presumably because it is inherited from Galactic shear. In this case the predominant mode for field distortion is to shear the fields out into spiraling patterns, with the spiral lying in the Galactic plane (Fig. 12 of Kim & Ostriker 2001). Unfortunately, because SPARO sightlines extending from the Sun to disk GMCs are parallel to the plane, the in-plane spiral field distortion patterns described above will be indistinguishable from the case of continuous, ordered fields extending into GMCs with little or no change in direction. Could the correlation we observed between GMC fields and local Galactic fields be merely a consequence of field twisting induced by clouds rotating preferentially about axes orthogonal to the Galactic plane?
While we cannot rule this out, it is worth noting that observations of GMCs in M 33 (Rosolowsky et al. 2003) have failed to find a strong tendency for cloud rotational axes to be perpendicular to the disk of this spiral galaxy. Thus it seems that the more likely explanation for the correlations we have discovered is that Galactic fields do indeed pass into GMCs with little change in direction. When more realistic MHD simulations are available that better match the recent observational data on cloud rotation axes, it will be interesting to compare them with our observations of magnetic field structure. This work was supported by NSF Award OPP-0130389. We thank Eve Ostriker for the courtesy of the simulated polarimetry map and Roger Hildebrand and Ellen Zweibel for helpful comments. Amaral, L. H., & Abraham, Z. 1991, A&A, 251, 259 Allamandola, L. J., Hudgins, D. M., & Sandford, S. A. 1999, ApJ, 511, 115 Arce, H. G., Goodman, A. A., Bastien, P., Manset, N., & Sumner, M. 1998, ApJ, 499, 93 Benoît, A., et al. 2004, A&A, 424, 571 Bronfman, L., Alvarez, H., Cohen, R. S., & Thaddeus, P. 1989, ApJS, 71, 481 Bronfman, L., Nyman, L. A., & May, J. 1996, A&AS, 115, 81 Brooks, K. J., Storey, J. W. V., & Whiteoak, J. B. 2001, MNRAS, 327, 46 Carraro, G. 2002, MNRAS, 331, 785 Carraro, G., Romaniello, M., Ventura, P., & Patat, F. 2004, A&A, 428, 525 Caswell, J. L., & Haynes, R. F. 1987, A&A, 171, 261 Cheung, L. H., Frogel, J. A., Hauser, M. G., & Gezari, D. Y. 1980, ApJ, 240, 74 Cho, J., & Lazarian, A. 2005, ApJ, 631, 361 Chuss, D. T. 2002, Ph.D. thesis, Northwestern Univ. Colgan, S. W. J., Haas, M. R., Erickson, E. F., Rubin, R. H., Simpson, J. P., & Russell, R. W. 1993, ApJ, 413, 237 Crutcher, R. M. 2004, Astrophys. & Space Sci., 292, 225 Crutcher, R. M., Nutter, D. J., Ward-Thompson, D., & Kirk, J. M. 2004, ApJ, 600, 279 Davidson, K., & Humphreys, R. M. 1997, ARA&A, 35, 1 Davidson, K., Smith, N., Gull, T. R., Ishibashi, K., & Hillier, D. J. 2001, AJ, 121, 1569 Dickel, H. 
R., Dickel, J. R., & Wilson, W. J. 1977, ApJ, 217, 56 Dickey, J. M., McClure-Griffiths, N. M., Gaensler, B. M., & Green, A. J. 2003, ApJ, 585, 801 Dolginov, A. Z. 1972, Astrophys. & Space Sci., 18, 337 de Graauw, T., Lidholm, S., Fitton, B., Beckman, J., Israel, F. P., Nieuwenhuijzen, H., & Vermue, J. 1981, A&A, 102, 257 Dotson, J. L., Novak, G., Renbarger, T., Pernic, D., & Sundwall, J. L. 1998, Proc. SPIE, 3357, 543 Dotson, J. L., Davidson, J., Dowell, C. D., Schleuning, D. A., & Hildebrand, R. H. 2000, ApJS, 128, 335 Draine, B. T., & Weingartner, J. C. 1996, ApJ, 470, 551 Draine, B. T., & Weingartner, J. C. 1997, ApJ, 480, 633 Freyhammer, L. M., Clausen, J. V., Arentoft, T., & Sterken, C. 2001, A&A, 369, 561 Gardner, F. F., & Whiteoak, J. B. 1975, MNRAS, 173, 131 Goss, W. M., & Shaver, P. A. 1970, AuJPA, 14, 1 Grabelsky, D. A., Cohen, R. S., Bronfman, L., & Thaddeus, P. 1988, ApJ, 331, 181 Greaves, J. S., Holland, W. S., Jenness, T., Chrysostomou, A., Berry, D. S., Murray, A. G., Tamura, M., Robson, E. I., Ade, P. A. R., Nartallo, R., Stevens, J. A., Momose, M., Morino, J.-I., Moriarty-Schieven, G., Gannaway, F., & Haynes, C. V. 2003, MNRAS, 340, 353 Heiles, C. 2000, AJ, 119, 923 Henning, Th., Wolf, S., Launhardt, R., & Waters, R. 2001, ApJ, 561, 871 Heitsch, F., Zweibel, E. G., Mac Low, M., Li, P., & Norman, M. L. 2001, ApJ, 561, 800 Hildebrand, R. H., Dotson, J. L., Dowell, C. D., Schleuning, D. A., & Vaillancourt, J. E. 1999, ApJ, 516, 834 Hildebrand, R. H., Davidson, J. A., Dotson, J. L., Dowell, C. D., Novak, G., & Vaillancourt, J. E. 2000, PASP, 112, 1215 Hildebrand, R. 2002, in Astrophysical Spectropolarimetry, Proceedings of the XII Canary Islands Winter School of Astrophysics, eds. J. Trujillo-Bueno, F. Moreno-Insertis, & F. Sanchez (Cambridge: Cambridge University Press), 265 Houde, M. 2004, ApJ, 616, 111 Jackson, J. D. 1999, Classical Electrodynamics, 3rd ed. (New York: Wiley) Kim, W., & Ostriker, E. C. 
2001, ApJ, 559, 70 Kim, W., Ostriker, E. C., & Stone, J. M. 2003, ApJ, 599, 1157 Kirby, L., Davidson, J. A., Dotson, J. L., Dowell, C. D., & Hildebrand, R. H. 2005, PASP, 117, 991 Kuo, C. L., Ade, P. A. R., Bock, J. J., Cantalupo, C., Daub, M. D., Goldstein, J., Holzapfel, W. L., Lange, A. E., Lueker, M., Newcomb, M., Peterson, J. B., Ruhl, J., Runyan, M. C., & Torbet, E. 2004, ApJ, 600, 32 Kuiper, T. B. H., Whiteoak, J. B., Fowler, J. W., & Rice, W. 1987, MNRAS, 227, 1013 Lai, S., Crutcher, R. M., Girart, J. M., & Rao, R. 2002, ApJ, 566, 925 Lane, A. P. 1998, in ASP Conf. Ser. 141, Astrophysics from Antarctica, ed. G. Novak & R. H. Landsberg (San Francisco: ASP), 289 Lazarian, A. 2000, in ASP Conf. Ser. 215, Cosmic Evolution and Galaxy Formation: Structure, Interactions, and Feedback, The Third Guillermo Haro Astrophys. Conf., ed. J. Franco, L. Terlevich, O. López-Cruz, & I. Aretxaga (San Francisco: ASP), 69 Li, H., Griffin, G. S., Krejny, M., Novak, G., Lowenstein, R. F., Newcomb, M. G., Calisse, P. G., & Chuss, D. T. 2005, in ASP Conf. Ser. 343, Astronomical Polarimetry, Current Status and Future Directions, ed. A. Adamson (San Francisco: ASP) Mac Low, M., & Klessen, R. 2004, Rev. Mod. Phys., 76, 125 Marraco, H. G., Vega, E. I., & Vrba, F. J. 1993, AJ, 105, 258 Mathewson, D. S., & Ford, V. L. 1970, Mem. R. Astron. Soc., 74, 139 Matthews, B. C., & Wilson, C. D. 2000, ApJ, 531, 868 McGee, R. X., Newton, L. M., & Butler, P. W. 1979, MNRAS, 189, 413 Megeath, S. T., Cox, P., Bronfman, L., & Roelfsema, P. R. 1996, A&A, 305, 296 Neckel, T. 1978, A&A, 69, 51 Neininger, N., & Horellou, C. 1996, in ASP Conf. Ser. 97, Polarimetry of the Interstellar Medium, ed. W. G. Roberge & D. C. B. Whittet (San Francisco: ASP), 592 Neininger, N. 1992, A&A, 263, 30 Novak, G., Dotson, J. L., Dowell, C. D., Hildebrand, R. H., Renbarger, T., & Schleuning, D. A. 2000, ApJ, 529, 241 Novak, G., Chuss, D. T., Renbarger, T., Griffin, G. S., Newcomb, M. G., Peterson, J. B., Loewenstein, R. 
F., Pernic, D., & Dotson, J. L. 2003, ApJ, 583, L83 Ostriker, E. C., Stone, J. M., & Gammie, C. F. 2001, ApJ, 546, 980 Paladini, R., De Zotti, G., Davies, R. D., & Giard, M. 2005, MNRAS, 360, 1545 Patrickeyev, I., Fletcher, A., Beck, R., Berkhuijsen, E., Frick, P., & Horellou, C. 2005, in The Magnetized Plasma in Galaxy Evolution, eds. K. Chyzy, K. Otmianowska-Mazur, M. Soida, & R.-J. Dettmar (Krakow: Jagiellonian University), 130 Peeters, E. 2002, Ph.D. thesis, Rijksuniversiteit Groningen Peeters, E., Spoon, H. W. W., & Tielens, A. G. G. M. 2004, ApJ, 613, 986 Peterson, J. B., Griffin, G. S., Newcomb, M. G., Alvarez, D. L., Cantalupo, C. M., Morgan, D., Miller, K. W., Ganga, K., Pernic, D., & Thoma, M. 2000, ApJ, 532, L83 Peterson, J. B., Radford, S. J. E., Ade, P. A. R., Chamberlin, R. A., O’Kelly, M. J., Peterson, K. M., & Schartman, E. 2003, PASP, 115, 383 Rauw, G., Sana, H., Antokhin, I. I., Morrell, N. I., Niemela, V. S., Albacete Colombo, J. F., Gosset, E., & Vreux, J.-M. 2001, MNRAS, 326, 1149 Renbarger, T., Chuss, D. T., Dotson, J. L., Griffin, G. S., Hanna, J. L., Loewenstein, R. F., Malhotra, P. S., Marshall, J. L., Novak, G., & Pernic, R. J. 2004, PASP, 116, 415 Rosolowsky, E., Plambeck, R., Engargiola, G., & Blitz, L. 2003, ApJ, 599, 258 Russeil, D., Adami, C., Amram, P., Le Coarer, E., Georgelin, Y. M., Marcelin, M., & Parker, Q. 2005, A&A, 429, 497 Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525 Shaver, P. A., & Goss, W. M. 1970, AuJPA, 14, 77 Silich, S., & Franco, J. 1999, ApJ, 522, 863 Smith, N., Egan, M. P., Carey, S., Price, S. D., Morse, J. A., & Price, P. A. 2000, ApJ, 532, 145 Sollins, P. K., & Megeath, S. T. 2004, AJ, 128, 2374 Spitzer, L., Jr. 1978, Physical Processes in the Interstellar Medium (New York: Wiley) Straw, S. M., & Hyland, A. R. 1989, ApJ, 340, 318 Tapia, M., Roth, M., Vazquez, R. A., & Feinstein, A. 2003, MNRAS, 339, 44 Taylor, J. H., & Cordes, J. M. 1993, ApJ, 411, 674 Tomisaka, K. 
1998, MNRAS, 298, 797 Vaillancourt, J. E. 2002, ApJS, 142, 53 Vazquez, R. A., Baume, G., Feinstein, A., & Prado, P. 1996, A&AS, 116, 75 Vilas-Boas, J. W. S., & Abraham, Z. 2000, A&A, 355, 1115 Walborn, N. R. 1995, Rev. Mex. Astron. Astrof., 2, 51 Walsh, A. J., Hyland, A. R., Robinson, G., & Burton, M. G. 1997, MNRAS, 291, 261 Whiteoak, J. B. Z. 1994, ApJ, 429, 225 Whittet, D. C. B., Gerakines, P. A., Hough, J. H., & Shenoy, S. S. 2001, ApJ, 547, 872 Williams, J. P., Blitz, L., & McKee, C. F. 2000, in Protostars and Planets IV, ed. V. Mannings, A. P. Boss, & S. S. Russell (Tucson: Univ. Arizona Press), 97 Wolf, S., Launhardt, R., & Henning, T. 2003, ApJ, 592, 233 Xu, Y., Reid, M. J., Zheng, X. W., & Menten, K. M. 2006, Science, 311, 54 Zhang, X., Lee, Y., Bolatto, A., & Stark, A. 2001, ApJ, 553, 274 ![Histograms of magnetic field directions for all SPARO measurements, broken down by cloud. For each panel, a vertical dotted line shows the mean magnetic field direction for the cloud, computed as described in § 3.3. 
Position angle is measured in Galactic coordinates, as in Figure 2.](f3.eps) (Higher resolution figure can be downloaded at lennon.astro.northwestern.edu/f5.eps) [cccccc]{} 3.01 & -1.34& 1.8 & 0.16 & 3.0 & 2.6\ 4.36 & 1.67& 1.49 & 0.24 & 16.6 & 4.7\ -1.34 & -3.01& 0.84 & 0.12 & 40.1 & 4.2\ 0 & 0 & 1.16 & 0.05 & 47.4 & 1.3\ 1.34 & 3.01 & 0.38 & 0.13 & 55.5 & 9.8\ -3.01 & 1.34& 2.05 & 0.16 & 58.3 & 2.3\ -1.67 & 4.36& 3.99 & 0.49 & 57.3 & 3.5\ -7.36 & -0.34& 3.18 & 1.18 & 61.3 & 10.6\ -6.02 & 2.68& 3.59 & 1.26 & 42.4 & 10.0\ -4.67 & 5.69& 8.56 & 3.24 & 39.4 & 10.9\ -9.03 & 4.02& 10.13 & 4.46 & 59.2 & 12.6\ -2.35 & -13.39& 1.43 & 0.27 & 84.4 & 5.4\ -1.01 & -10.37& 1.18 & 0.35 & 46.8 & 8.6\ 0.34 & -7.36& 0.75 & 0.25 & 65.1 & 9.5\ -5.36 & -12.04& 1.76 & 0.17 & 101.8 & 2.7\ -4.02 & -9.03& 1.72 & 0.21 & 72.4 & 3.5\ -2.68 & -6.02& 1.24 & 0.21 & 51.9 & 4.9\ -7.03 & -7.69& 2.49 & 0.23 & 75.0 & 2.6\ -5.69 & -4.67& 2.37 & 0.58 & 42.0 & 7.0\ -8.7 & -3.33& 3.49 & 0.9 & 61.1 & 7.4\ -10.37 & 1.03& 7.85 & 2.36 & 98.6 & 8.6\ [cccccc]{} 4.36 & 1.67 & 1.74 & 0.41 & 50.6 & 6.7\ 5.71 & 4.68 & 1.61 & 0.27 & 71.2 & 4.8\ 7.05 & 7.7 & 0.71 & 0.3 & 3.6 & 11.9\ 1.34 & 3.01 & 1.4 & 0.36 & 42.4 & 7.3\ 2.68 & 6.02 & 1.12 & 0.17 & 30.7 & 4.3\ 4.02 & 9.03 & 1.98 & 0.23 & 163.9 & 3.4\ -0.33 & 7.38 & 1.66 & 0.21 & 177.2 & 3.6\ -6.02 & 2.68 & 5.45 & 0.41 & 8.5 & 2.2\ -4.68 & 5.71 & 2.67 & 0.34 & 15.4 & 3.6\ -3.34 & 8.72 & 2.04 & 0.41 & 35.2 & 5.8\ -9.03 & 4.02 & 4.46 & 0.44 & 12.2 & 2.8\ -7.7 & 7.05 & 4.28 & 0.45 & 26.7 & 3.0\ -6.35 & 10.06 & 3.67 & 0.46 & 21.7 & 3.6\ -10.7 & 8.38 & 1.99 & 0.76 & 37.7 & 10.9\ -9.36 & 11.37 & 2.27 & 0.87 & 52.2 & 11.1\ 3.34 & -8.72 & 1.74 & 0.41 & 50.6 & 6.7\ 4.68 & -5.71 & 1.61 & 0.27 & 71.2 & 4.8\ 6.02 & -2.68 & 0.71 & 0.3 & 3.6 & 11.9\ 0.33 & -7.38 & 1.4 & 0.36 & 42.4 & 7.3\ 1.67 & -4.36 & 1.12 & 0.17 & 30.7 & 4.3\ 3.01 & -1.34 & 1.98 & 0.23 & 163.9 & 3.4\ -1.34 & -3.01 & 1.66 & 0.21 & 177.2 & 3.6\ 0 & 0 & 2.18 & 0.12 & 169.1 & 1.6\ -5.71 & -4.68 & 2.2 & 0.5 & 168.8 
& 6.5\ -4.36 & -1.67 & 4.79 & 0.31 & 4.3 & 1.8\ -10.06 & -6.53 & 4.5 & 0.93 & 148.2 & 5.9\ -8.72 & -3.34 & 4.6 & 0.57 & 168.7 & 3.5\ -7.38 & -0.33 & 6.14 & 0.51 & 179.1 & 2.4\ -11.73 & -2 & 2.96 & 0.7 & 1.4 & 6.8\ -10.39 & 1.02 & 4.56 & 0.62 & 20.4 & 3.9\ [cccccc]{} 5.71 & 4.68 & 1.04 & 0.25 & 111.3 & 7.0\ 7.05 & 7.7 & 1.69 & 0.21 & 127.2 & 3.5\ 8.38 & 10.7 & 2.21 & 0.22 & 125.2 & 2.9\ 2.68 & 6.02 & 1.35 & 0.29 & 139.9 & 6.1\ 4.02 & 9.03 & 1.67 & 0.29 & 139.2 & 5.0\ 5.36 & 12.04 & 1.87 & 0.27 & 138.1 & 4.2\ 1.02 & 10.39 & 2.41 & 0.52 & 156.1 & 6.2\ 2.35 & 13.39 & 2.4 & 0.48 & 158.3 & 5.7\ 3.01 & -1.34 & 1.2 & 0.19 & 132.8 & 4.6\ 4.36 & 1.67 & 1.2 & 0.18 & 133.4 & 4.4\ -1.34 & -3.01 & 0.57 & 0.12 & 144.2 & 5.8\ 0 & 0 & 0.49 & 0.07 & 59.9 & 93.9\ 1.34 & 3.01 & 0.57 & 0.11 & 133.0 & 5.3\ -1.67 & 4.36 & 0.78 & 0.2 & 142.5 & 7.5\ -7.38 & -0.33 & 1.06 & 0.14 & 108.4 & 3.9\ -6.02 & 2.68 & 1.41 & 0.14 & 114.6 & 2.8\ -4.68 & 5.71 & 0.83 & 0.28 & 110.2 & 9.7\ -10.39 & 1.02 & 0.89 & 0.23 & 135.0 & 7.6\ -9.03 & 4.02 & 1.5 & 0.3 & 113.4 & 5.7\ -7.7 & 7.05 & 1.32 & 0.25 & 103.3 & 5.4\ -12.04 & 5.36 & 1.25 & 0.39 & 130.7 & 8.9\ -10.7 & 8.38 & 0.72 & 0.26 & 107.7 & 10.2\ -2.35 & -13.39 & 0.68 & 0.13 & 116.9 & 5.6\ -1.02 & -10.39 & 0.52 & 0.24 & 148.1 & 13.4\ 0.33 & -7.38 & 0.53 & 0.23 & 40.3 & 12.2\ -5.36 & -12.04 & 0.4 & 0.17 & 107.4 & 12.0\ -4.02 & -9.03 & 1.74 & 0.4 & 145.9 & 6.6\ -7.05 & -7.7 & 1.32 & 0.35 & 127.0 & 7.6\ -5.71 & -4.68 & 0.65 & 0.15 & 124.2 & 6.5\ [cccccc]{} 1.67 & -4.36 & 0.4 & 0.07 & 62.2 & 4.7\ 3.01 & -1.34 & 0.48 & 0.04 & 167.3 & 2.5\ 4.36 & 1.67 & 0.47 & 0.04 & 133.1 & 2.6\ -1.34 & -3.01 & 0.44 & 0.07 & 120.8 & 4.6\ 0 & 0 & 0.3 & 0.05 & 5.8 & 4.7\ 1.34 & 3.01 & 0.46 & 0.04 & 132.4 & 2.8\ -3.01 & 1.34 & 1.14 & 0.11 & 123.2 & 2.8\ -1.67 & 4.36 & 1.87 & 0.11 & 119.2 & 1.8\ 5.71 & 4.68 & 1.37 & 0.14 & 137.5 & 2.8\ 7.05 & 7.7 & 1.12 & 0.19 & 121.3 & 4.8\ 8.38 & 10.7 & 1.97 & 0.24 & 116.3 & 3.5\ 2.68 & 6.02 & 1.26 & 0.11 & 129.9 & 2.4\ 4.02 & 9.03 & 1.06 & 0.25 
& 121.8 & 6.6\ 5.36 & 12.04 & 4.07 & 0.59 & 117.9 & 4.2\ 1.02 & 10.39 & 3.32 & 0.38 & 122.3 & 3.3\ 2.35 & 13.39 & 4.02 & 0.6 & 114.6 & 4.3\ 10.7 & -8.38 & 1.65 & 0.55 & 30.3 & 9.5\ 12.04 & -5.36 & 1.5 & 0.56 & 33.0 & 10.6\ 6.02 & -2.68 & 1.43 & 0.29 & 151.6 & 5.8\ -6.02 & 2.68 & 2.4 & 0.38 & 120.6 & 4.6\ -7.7 & 7.05 & 2.45 & 0.77 & 113.8 & 9.1\ -12.04 & 5.36 & 1.14 & 0.44 & 120.2 & 11.0\ -10.7 & 8.38 & 3.11 & 0.84 & 127.0 & 7.7\ -2.35 & 13.39 & 1.23 & 0.37 & 132.2 & 8.6\ -1.02 & 10.39 & 1.23 & 0.29 & 116.0 & 6.7\ 0.33 & -7.38 & 0.69 & 0.3 & 94.8 & 12.7\ 10.39 & -1.02 & 0.81 & 0.29 & 158.7 & 10.2\ 11.73 & 2 & 1.23 & 0.31 & 138.8 & 7.3\ 13.08 & 5.01 & 1.29 & 0.26 & 133.5 & 5.9\ 7.38 & 0.33 & 3.55 & 0.42 & 125.7 & 3.4\ 8.72 & 3.34 & 2.81 & 0.43 & 124.4 & 4.4\ 10.06 & 6.35 & 3.27 & 0.45 & 124.9 & 3.9\ -0.33 & 7.38 & 1.94 & 0.23 & 125.6 & 3.4\ -4.68 & 5.71 & 3.55 & 0.42 & 125.7 & 3.4\ -3.34 & 8.72 & 2.81 & 0.43 & 124.4 & 4.4\ -2 & 11.73 & 3.27 & 0.45 & 124.9 & 3.9\ -6.35 & 10.06 & 2.94 & 0.44 & 121.4 & 4.3\ -5.01 & 13.08 & 2.88 & 0.76 & 131.0 & 7.6\ 5.01 & -13.08 & 0.91 & 0.42 & 110.3 & 13.2\ 6.35 & -10.06 & 1.04 & 0.26 & 114.1 & 7.2\ 7.7 & -7.05 & 0.62 & 0.25 & 127.6 & 11.6\ 2 & -11.73 & 1.54 & 0.34 & 128.1 & 6.4\ 3.34 & -8.72 & 1.26 & 0.25 & 112.4 & 5.8\ -7.05 & -7.7 & 0.91 & 0.42 & 110.3 & 13.2\ -5.71 & -4.68 & 1.04 & 0.26 & 114.1 & 7.2\ -4.36 & -1.67 & 0.62 & 0.25 & 127.6 & 11.6\ -10.06 & -6.35 & 2.22 & 0.52 & 123.2 & 6.7\ -8.72 & -3.34 & 1.54 & 0.34 & 128.1 & 6.4\ -7.38 & -0.33 & 1.26 & 0.25 & 112.4 & 5.8\ -11.73 & -2 & 2.42 & 0.48 & 135.4 & 5.6\ -10.39 & 1.02 & 2.35 & 0.46 & 126.5 & 5.6\
--- abstract: 'The change of the electromagnetic field in a particular place due to the event of a change in the motion of a charged particle can occur only after the light signal from the event can reach this place. Naive calculations of the electromagnetic energy and the work performed by the electromagnetic fields might lead to paradoxes of apparent non-conservation of energy. A few paradoxes of this type for a simple motion of two charges are presented and resolved in a quantitative way providing deeper insight into various relativistic effects in the classical electromagnetic theory.' author: - 'A. Kislev' - 'L. Vaidman' title: | Relativistic Causality and Conservation of Energy\ in Classical Electromagnetic Theory --- INTRODUCTION {#int} ============ Starting from Einstein’s work on special relativity [@Eins] it became clear that classical electromagnetic theory is consistent with relativity, and no true paradoxes can be found. However, several apparent paradoxes have been extensively discussed, and these discussions have enriched our understanding of the electromagnetic theory. Some of these controversial topics are: hidden momentum [@ShJa], Feynman’s disk [@FD], the Trouton-Noble experiment [@TN], and the 4/3 factor for the self energy of an electron.[@43] Here we present another situation which, analyzed in a naive way, leads to paradoxical conclusions. The paradoxes are relevant to recent discussions of covariance in the electromagnetic theory [@Je99; @Hn98; @Co00; @Hn00; @Go00]. Let us start with the paradox which is simplest to present; its resolution will be shown at the end of the paper. [**Paradox I**]{} There are two particles of charge $q$ initially separated by distance $l$. We consider two ways to bring the particles, initially and finally at rest, to a shorter distance $l-x$, see Fig. 1. (i) We move one particle the distance $x$ toward the other particle. 
The work required for this is: $$\label{W} W^{\rm i} = U_{NEW} - U_{OLD} = {q^2 \over {l-x}} - {q^2 \over {l}} .$$ (ii) We move both particles toward each other for the distance $x/2$. We do it simultaneously and fast enough such that the motion of each particle ends before the signal about this motion can reach the location of the other particle. In this case, the work should be the sum of the amounts of work performed by external forces exerted on the two particles, calculated as if the other particle has not moved: $$W^{\rm ii} = W_1 + W_2 =2 \left( {q^2 \over {l-{x\over 2}}} - {q^2 \over {l}}\right) . \label{W'}$$ [Space-time diagram of the motion in the two processes: (i) one particle moves; (ii) two particles move.]{} After the procedure is ended, we obtain the same situation in both cases, but we applied less work when we moved both particles: $W^{\rm ii}<W^{\rm i}$. We can get the energy equal to the work $W^{\rm i}$ back from the system when we reverse process (i), moving one of the charges to the original distance $l$. We can repeat the cycle of process (ii) followed by reversed process (i), gaining every time the amount of energy: $$\label{gain} W^{\rm i}- W^{\rm ii} \approx {q^2 x^2 \over {2l^3}}.$$ Of course, there must be an error in the above argument. We have not taken all relevant effects into account. However, before explaining this paradox we present and resolve a few other apparent paradoxes demonstrating the relevance of various effects. In Section II we present and in Section III resolve a paradox based upon part of the process (ii) described above. Instead of starting with particles at rest, accelerating, moving with some velocity, and then bringing the particles to rest, we start with the two particles moving with constant velocity and coming to rest at different times. In Section IV we further analyze the setup of Paradox II, obtaining and resolving another paradox. 
Section V explains the same point using an example of a large number of charged particles. In Section VI we consider the remaining part of the processes involved in Paradox I: accelerating a pair of particles from rest. In Section VII we consider acceleration of particles moving in parallel. This setup leads to a paradox whose resolution is due to yet another surprising effect. In Section VIII we return to the analysis of Paradox I and resolve it in a quantitative way. PARADOX II: CONSERVATION OF ENERGY FOR TWO STOPPING PARTICLES {#para1} ============================================================== Consider two particles of charge $q$ and mass $m$ located on the $x$ axis at the separation $l$ and moving in the $x$ direction with a constant velocity $v$. At time $t_1=0$ we stop the first particle and at time $t_2 =t$ we stop the second particle: see Fig. 2. The time $t$ is small enough such that signals about the change of the velocity of one particle cannot reach the location of the other while it is still moving: $$\label{tbound} {-l\over {c - v}} < t < {l\over {c + v}}.$$ We also require that $\tau$, the time of “stopping” a particle, is small: $\tau \ll |t|$. Let us consider conservation of energy for this process. The initial energy should be equal to the final energy plus the work of the forces which the particles exert on external systems: $$\label{conserv} E_{in} = E_{fin} + W_1 +W_2 +\tilde W,$$ where $W_1$ and $W_2$ are the works due to the forces the particles exert during the process of stopping; $\tilde W$ is the work performed by the particle moving with velocity $v$ during the time that the other particle is at rest (for $t>0$, the work $\tilde W$ is performed by particle 2 and for $t<0$, by particle 1). Of course, no work is performed when both particles are at rest, and, also, the net work vanishes during the time when both particles are moving with velocity $v$. 
For making the relativistic analysis more convenient, we include the rest mass energy. Then, the final energy of the system is $$\label{Efin} E_{fin}= 2mc^2 + {q^2 \over {l - x}} ,$$ where $x$ is the change in the distance between the charges: $$\label{xvt} x = vt .$$ The distance might decrease or increase (negative $t$ and $x$) depending on which particle stopped first. When a particle moves with constant velocity, the total force exerted on it is zero. Therefore, the force it exerts on an external system is equal to the electromagnetic force the other particle exerts on it. Since, in the laboratory frame, the distance between the particles is $l$, in the Lorentz frame in which the charges are at rest, the distance between them is $\gamma l$ (where $\gamma~\equiv~1/\sqrt{1-v^2/c^2}$). The Lorentz transformation for the force in the $x$ direction between the rest frame and the laboratory frame is $F_x= F'_x$ and, therefore, the forces the particles exert on the external systems are: $$\label{force} {F_1}_x = - {F_2}_x = {q^2 \over {(\gamma l)^2}}.$$ Thus, the work $\tilde W$ is: $$\label{tilda} \tilde W = - {{q^2 x}\over {(\gamma l)^2}} .$$ This formula is correct both for $x > 0$, when the work is done by particle 2, and for $x < 0$, when the work is done by particle 1. [Space-time diagram of the motion of the two particles.]{} Thus, the equation of conservation of energy (\[conserv\]) becomes $$\label{conserv1} E_{in} = 2 mc^2 + {q^2 \over {l - x}}+ W_1 +W_2 - {{q^2 x}\over {(\gamma l)^2}}.$$ The initial energy $E_{in}$ obviously does not depend on $x$. By the causality argument, the works $ W_1$ and $W_2$ do not depend on $x$ either. The equation thus has only two terms depending on $x$. Therefore, Eq. (\[conserv1\]) represents a paradox: it must be true for all $x$ in the interval $[{-lv\over {c - v}},{lv\over {c + v}}]$ (corresponding to (\[tbound\])), but it cannot be, since its only two $x$-dependent terms do not balance each other. 
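The imbalance can be verified numerically. The sketch below uses units with $c = q = l = 1$; the specific values of $v$ and $t$ are arbitrary illustrative choices inside the causal window.

```python
# Units with c = q = l = 1.  The only x-dependent terms in the naive
# energy balance are the final Coulomb energy q^2/(l-x) and the work
# term W~ = -q^2 x/(gamma l)^2.
def x_terms(x, v):
    return 1.0 / (1.0 - x) - x * (1.0 - v * v)  # 1/(1-x) - x/gamma^2

v = 0.01
for t in (-0.5, 0.25, 0.5):  # stopping delays inside the causal window
    x = v * t
    residual = x_terms(x, v) - x_terms(0.0, v)
    # the residual x dependence equals q^2 x^2 / l^3 to leading order
    assert abs(residual / (x * x) - 1.0) < 0.1
print("naive bookkeeping leaves an unbalanced term ~ q^2 x^2 / l^3")
```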
RESOLUTION OF PARADOX II: INTERFERENCE OF RADIATION {#PARII} =================================================== In Paradox II, according to our naive calculation, the final energy together with the obtained work had two terms depending on $x$, the change in the distance between the particles, which do not sum up to a constant. There is cancellation of the $x$ dependence in the first order of the parameter ${x\over l}~\sim~{v \over c}$; therefore, the leading unbalanced $x$-dependent term is $ {{q^2 x^2}\over {l^3}}$. Indeed, $$\label{2term} {q^2 \over {l - x}} - {{q^2 x}\over {(\gamma l)^2}} = {q^2\over l} + {{q^2 x^2}\over {l^3}} + ...$$ The process of stopping cannot be arbitrarily slow, since it has to be finished before the signal about stopping the other particle can arrive. Moreover, we choose $\tau \ll |t|$. Therefore, we should expect a significant contribution due to [*radiation*]{}, which we have not taken into account. The charges accelerate during the process of stopping and the magnitude of the acceleration is $a = {v \over \tau}$. According to the Larmor formula, the total energy radiated by a single charge during the whole process of stopping is $$\label{rad} R_1=R_2= {2 \over 3} {{q^2 a^2} \over c^3} \tau = {2 \over 3} {{q^2 v^2} \over { c^3\tau}}.$$ The $x$-dependent term which we have to balance is much smaller than $R_1$ and $R_2$: $$\label{2term<} {{q^2 x^2}\over {l^3}} = {{q^2 (vt)^2}\over {l^3}} \lesssim {{q^2 v^2}\over {c^2 l}} \ll {{q^2 v^2}\over {c^3 \tau}}.$$ However, everything that happens in the close vicinity of the charges cannot depend on $x$, and, in particular, the radiation which each charge emits does not depend on $x$. How, then, can the radiation energy balance the $x$-dependent terms in the equation of conservation of energy? The effect is due to the [*interference of radiation*]{}. 
The total radiated energy is $$\label{Rtot} R_{tot}= R_1+R_2+R_{int} .$$ The interference term $R_{int}$ depends on $x$ and restores the balance. Now we will show this in detail. The radiation of the stopping charge propagates inside a spherical shell of width $c \tau$ and the energy flux [**S**]{} is given by: [@Grif] $$\label{radS} {\rm \bf S} = {{q^2 a^2 \sin^2 \theta} \over {4 \pi c^3 r^2}} {\bf \hat r},$$ where $r$ is the radius of the shell and $\theta$ specifies the direction relative to the $x$ axis in our setup. Since we have two accelerated charges, the radiation field due to the two charges will interfere in the region of the overlap, see Fig. 3. The complete overlap will take place for the angle $\theta$ defined by: $$\label{theta} \sin (\theta -{\pi\over2}) ={{c t} \over{ l-vt}}= {{c x} \over{v(l-x)}}.$$ Since the width of the shells is $c \tau $, the overlap will be nullified beyond the deviation $\delta \theta$ from the angle $\theta$ given by $$\label{thetaD} \delta \theta ={{c \tau} \over {(l-x) \sin \theta}} ~,$$ which is obtained from $$\label{thetaDer} c \tau =\delta \left[(l-x)\sin (\theta -{\pi\over2})\right] = [(l-x)\sin \theta ]\ ~\delta \theta.$$ Due to the interference, the total energy radiated in the direction of the overlap is twice as much as if the two charges were radiating separately. At the interval $[(\theta - \delta \theta),(\theta + \delta \theta)]$ the overlap increases and then decreases linearly. Therefore, the interference term of the radiation energy is $$\label{rad-int} R_{int} = 2 {{q^2 a^2 \sin^2 \theta} \over {4 \pi c^3 r^2}} 2\pi r \sin \theta \, r \tau \int_{- \delta \theta}^{\delta \theta} { {\delta \theta - |\phi |}\over \delta \theta} d\phi = {{q^2 v^2 \sin^2 \theta} \over { c^2 (l-x)}} .$$ [ Electromagnetic radiation of the two stopping particles. 
The area of constructive interference of the radiation field is painted in black.]{} After expressing $\sin^2 \theta = 1- \sin^2 (\theta -{\pi\over 2})$, using (\[theta\]), and making an approximation up to the lowest order in the parameter ${x\over l} \approx {v \over c}$, we obtain: $$\label{Eradint} R_{int}= {{q^2 v^2 } \over { c^2 (l-x)}} - {{q^2 x^2}\over {(l-x)^3}} \approx {{q^2 v^2 } \over { c^2 l}} - {{q^2 x^2}\over {l^3}} .$$ Thus, we have shown that (at least up to a second order in the parameter ${v \over c}$, the precision to which we made our calculations) the equation of conservation of energy which takes into account the electromagnetic radiated energy does not have $x$ dependence. The corrected equation of conservation of energy (which replaces Eq. 5), $$\label{conserv+rad} E_{in} = E_{fin} + W_1 +W_2 +\tilde W + R_{tot},$$ leads, after the approximation, to the expression which does not exhibit $x$ dependence: $$\label{conserv3} E_{in} = 2 mc^2 + W_1 +W_2 +R_1 +R_2 +{{q^2 } \over { l}}\left( 1 + {{ v^2 } \over { c^2}}\right).$$ Thus we have resolved Paradox II with regard to the unbalanced $x$ dependence. But can we show that the LHS and the RHS of the equation of conservation of energy (\[conserv3\]) are equal? We will analyze this in the next section. The paradox of non-conservation of energy of the system of two charged particles when radiation is neglected has been considered in another example.[@scat73] The calculation of the scattering on the basis of the Coulomb forces yields an energy of the charged particles after the collision that is larger than the initial energy. The advantage of the scattering example is that no external forces have to be taken into account. 
However, the resolution of this paradox by taking into account radiation has been shown only qualitatively.[@scat76] LORENTZ TRANSFORMATION FOR ELECTROMAGNETIC ENERGY {#PARIII} ================================================== Without knowing the mass and without knowing the local works $W_1$ and $W_2$, it seems that we cannot test (\[conserv3\]). However, we can test the consistency of this equation of conservation of energy with the single-particle conservation of energy equations. We can write down the conservation of energy for each particle assuming that it performs exactly the same motion (stopping from velocity $v$ during the time $\tau$) in case the other particle is not present. We can argue that the local works $W_i$ and the emitted radiations $R_i$ are the same as in the original example. This argument is not as strong as the causality argument, i.e., the argument that these quantities do not depend on $x$, but it seems that since we can take the time $\tau$ of the local processes very small, the effect of the external field can be neglected. In order to reduce any doubt that this is a valid approach, we will consider, in the next section, a similar situation with a large number of charged particles. It exhibits the same problem, but we will not need this assumption. Our approach to finding the initial energy is finding the total energy of the two charges in their rest frame $E_0$ and multiplying it by the factor $\gamma$. In the rest frame of the moving particles, the distance between them is $\gamma l$.
Therefore, the total initial energy of the particles is: $$\label{Ein} E_{in}= \gamma \left( 2mc^2 + {q^2 \over {\gamma l}}\right) .$$ Thus, the equation of conservation of energy (\[conserv3\]) becomes $$\label{conserv4} 2\gamma mc^2+ {{q^2 }\over {l}} = 2 mc^2 + W_1 +W_2 +R_1 +R_2 + {{q^2 } \over { l}}\left( 1 + {{ v^2 } \over { c^2}}\right).$$ We can write two (identical) single-particle equations of conservation of energy: $$\begin{aligned} \label{conserv1p} \gamma mc^2 = mc^2 + W_1 +R_1 , \nonumber \\ \gamma mc^2 = mc^2 + W_2 +R_2 .\end{aligned}$$ But when we subtract these equations from the two-particle conservation of energy equation (\[conserv4\]) we see that there is inconsistency: the term ${{q^2 v^2}\over {l c^2}}$ is unbalanced! The inconsistency does not follow from the approximations we made in deriving (\[conserv3\]). The algebraic approximations can be made irrelevant if we consider simultaneous stopping of the charges corresponding to $x=0$, in which case (\[conserv3\]) follows without approximation. We have reached another paradox. There must be another error in our analysis. (In fact, this paradox will appear also for large $|x|$ when there is no interference of radiation; such a case will be considered in the next section.) The paradox arises from the error which we made in Eq. (\[Ein\]). We have claimed that if the total energy of a system of charges in their rest frame is $E_0$, then its energy in the Lorentz frame in which the system moves with velocity $v$ is $$E = \gamma E_0 . \label{gama}$$ Equation (\[gama\]) is, of course, correct when the system is an elementary particle. It is also true when the system is a finite stationary isolated body. But it is, in general, not true for a composite system with external forces such as the system of charged particles we consider here. In order to obtain the correct transformation of the electromagnetic energy from the rest frame to the moving frame, we consider two charges connected by a rigid rod. 
The energy of the whole system, charges and the rod, does transform according to (\[gama\]). Therefore, the anomalous term of the transformation of the electromagnetic energy equals the negative of the anomalous part of the mechanical energy of the rod. The latter is easier to calculate and we will do it now. In order to calculate the expression for the transformation of the energy of the rod, we express it as a volume integral of the energy density $u$: $$\label{enden} E_{in} = \int u dv ,$$ and use the Lorentz transformation for the energy density, the 00 component of the stress-energy tensor: $$u = \gamma ^2 \left(u' + {v\over c^2} S'_x -{v^2\over c^2} \sigma'_{xx}\right), \label{uu}$$ where ${\bf S}$ is the energy flux and $\sigma$ is the stress tensor. The transformation of the energy due to the first term leads to the usual expression (\[gama\]): the energy density is multiplied by $\gamma^2$, but due to the Lorentz contraction the volume is multiplied by the factor $\gamma^{-1}$. The second term does not contribute since the energy flux in the rest frame vanishes. Therefore, the last term contributes the anomalous term. The tension in the rod prevents the charges, separated by the distance $\gamma l$, from moving; therefore, it equals ${q^2 \over {\gamma^2 l^2}}$. Thus, in the rest frame of the rod, the stress tensor component is: $$\label{tens} \sigma'_{xx} = {q^2\over {s \gamma ^2l^2}}~,$$ where $s$ is the cross-section of the rod.
Therefore, the contribution to the energy in the laboratory frame due to the tension of the rod is: $$\label{cont} -{{\gamma^2 v^2}\over c^2}\int \sigma'_{xx} dv = -{{\gamma ^2 v^2}\over c^2} \sigma'_{xx} ls = - {{v^2 q^2}\over {c^2 l}} .$$ Thus, the correct expression for the initial energy of the electromagnetic field of the two charges (including the anomalous term of the transformation of the electromagnetic energy, which equals the negative of that of the mechanical energy) is: $$\label{Einmod} E_{in}= \gamma\left[ 2mc^2 + {q^2 \over {\gamma l}} \left(1 + {{ v^2}\over { c^2}}\right) \right] .$$ The correction we found for the initial energy of the system of two charges equals exactly the interference term of the radiation energy, thus restoring the equation of conservation of energy. CONSERVATION OF ENERGY FOR $N$ STOPPING PARTICLES {#Npar} ================================================== In this section, we strengthen the arguments of the previous section by considering a large number of charged particles in a row. In this case, we do not subtract single-particle equations of conservation of energy from the $N$-particle equation of conservation of energy. Thus, we do not need the assumption of the previous section that the terms of single-particle equations are equal to the corresponding terms of the $N$-particle equation. However, since this section does not describe conceptually new effects, it can be omitted on the first reading. Consider a chain of a large number $N$ of identical particles of charge $q$ and mass $m$ separated by the distance $l$ one from the other, all moving with velocity $v$ on the $x$ axis. The first particle stops at $t_1=0$ during a short time $\tau$. The second particle stops in the same manner at $t_2=t$, the latest possible moment such that the information about stopping of the first particle cannot reach it. This corresponds to $$x = vt = {vl\over {c+v}} .
\label{xmax}$$ The third particle stops in the same way at time $t_3 = 2t$, just before the information about the stopping of the first two should arrive; the particle $n$ stops at time $t_n = (n-1)t$, etc., until the stopping of the last particle. This is illustrated in Fig. 4. One difference from the case of two particles is that, due to the particular choice of $x$, we have no interference of radiation from different charges. The overlap of the radiation fields takes place only on the $x$ axis, where the intensity is zero. Another change from the case of two particles to the case of $N$ is reflected in the calculations of the initial and final potential energies, the work $\tilde W$ (the work made by the particles during motion with constant velocity), and the anomalous energy transformation term: in all these terms appears the factor $\eta$: $$\label{eta} \eta= \sum_{n=1}^{N-1} \sum_{i=1}^n {1\over i}.$$ The appearance of the factor $\eta$ is obvious for the potential energy, since ${q^2\over {\gamma l}}\sum_{i=1}^n {1\over i}$ is the energy of bringing the particle number $n+1$ when there are $n$ particles in the row. Let us show that the same factor appears for $\tilde W$. When all particles move with velocity $v$, no net work is done on the system and, of course, no net work is done when all particles are at rest. Consider the forces between particle $n$ and all particles $i$ to the right of it (i.e., $i<n$). The distance the particle $n$ moves while $i$ is at rest is $(n-i)x$, and the force between them is ${{q^2} \over {(\gamma l (n-i))^2}}$.
Therefore, the contribution to $\tilde W$ due to the interaction between particle $n$ and particle $i$ is $$\label{con} {{q^2 x} \over {(\gamma l)^2(n-i)}}.$$ [ Space-time diagram of the motion of $N$ particles.]{} We obtain that the contribution to $\tilde W$ due to the forces between particle $n$ and all particles $i$ such that $i<n$ is: $$\label{contri} \sum_{i=1}^{n-1} {{q^2 x} \over {(\gamma l)^2(n-i)}} ={{q^2 x} \over {(\gamma l)^2}} \sum_{i=1}^{n-1} {1\over i}.$$ The sum of the works made by all particles, starting from the moment when the first particle stops, is: $$\label{tild} \tilde W ={{q^2 x} \over {(\gamma l)^2}} \sum_{n=2}^{N} \sum_{i=1}^{n-1} {1\over i} =\eta {{q^2 x} \over {(\gamma l)^2}}.$$ Instead of performing a direct calculation of the anomalous transformation term (deviation from (\[gama\])), we can make the following observation. The anomalous term can be found by calculation of the contribution due to the tension of the rod (compare with (\[cont\])): $$\label{contN} -{{\gamma^2 v^2}\over c^2}\int \sigma'_{xx} dv = -{{\gamma ^2 v^2}\over c^2}\sum_{n=1}^{N-1} (\sigma'_{xx})_n ls = - {\gamma ^2 v^2\over c^2} \sum_{n=1}^{N-1} T_n l .$$ Consider a rod with $N$ charges separated by distance $l$ at rest. We will show that $\sum_{n=1}^{ N-1} T_n l$ is equal to the potential energy of the charges. Since the latter is multiplied by $\eta$ in the transition from 2 to $N$, the former is multiplied by $\eta$ too. The potential energy is equal to the work which the electromagnetic forces do in the process of uniform extension of the length of the rod from $(N-1)l$ to a very large length, which, in turn, is equal to the negative of the mechanical work made by the tension forces of the rod. When the separation between the charges is $\tilde l$, the tension in the parts of the rod can be expressed as $$T_n (\tilde l)= T_n (l) { l^2\over \tilde l^2}~,$$ because this tension compensates the Coulomb force, which is proportional to $\tilde l^{-2}$.
Therefore, the work (equal to the potential energy) can, indeed, be expressed as $$\label{work} \sum_{n=1}^{N-1} \int_l^{\infty} T_n (\tilde l) d\tilde l =\sum_{n=1}^{N-1} \int_l^{\infty} T_n (l) { l^2\over \tilde l^2} d\tilde l =\sum_{n=1}^{N-1} T_n l .$$ Now we can write the equation of conservation of energy for $N$ charges including the anomalous transformation term (the LHS is the modification of (\[Einmod\]) and the RHS is the modification of the RHS of (\[conserv1\])): $$\begin{aligned} \label{conserv2} \nonumber N\gamma mc^2+ \eta {{q^2 }\over {l}}\left( 1 + {{ v^2 } \over { c^2}}\right) =~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \\ N mc^2 +\eta {q^2 \over {l - x}}+ W_1 +W_2 + ... + W_N - \eta {{q^2 x}\over {(\gamma l)^2}}. \end{aligned}$$ In order to estimate $\eta$ for large $N$ we can replace the sums by the integrals (and $N-1$ by $N$): $$\label{eta-in} \eta = \sum_{n=1}^{N-1} \sum_{i=1}^n {1\over i} \approx \int_{1}^{N}\left ( \int_{1}^z {{dy}\over y} \right ) ~ dz \approx \int_{1}^{N} \ln z dz \approx N\ln N .$$ (We omitted $N$, since it is negligible relative to $N \ln N$). Since for large $N$ we have $\eta \gg N$, all terms in (\[conserv2\]) which do not have the factor $\eta$ can be neglected. These include all terms $W_n$, the amount of work obtained in the process of stopping particle $n$. Indeed, for small enough $\tau$, these terms will not have strong dependence on $N$ and, therefore, their sum will be approximately proportional to $N$. After substitution of (\[xmax\]), our choice of $x$, we can see that the equation of energy with the anomalous transformation term is balanced in the leading $\eta$ proportional terms (and it is not balanced if we use the transformation of energy (\[gama\])). 
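The asymptotic estimate $\eta \approx N \ln N$ is easy to check numerically. The following sketch (our addition, not part of the paper; the choice $N=10^6$ is arbitrary) computes $\eta$ exactly in $O(N)$ time using the harmonic numbers $H_n$ and compares it with $N\ln N$:

```python
# Numerical sketch of the estimate eta ≈ N*ln(N):
# eta = sum_{n=1}^{N-1} H_n, with H_n the n-th harmonic number.
import math

def eta(N):
    """Compute eta = sum_{n=1}^{N-1} sum_{i=1}^{n} 1/i in O(N) time."""
    total, harmonic = 0.0, 0.0
    for n in range(1, N):
        harmonic += 1.0 / n   # harmonic now equals H_n
        total += harmonic
    return total

N = 10**6
ratio = eta(N) / (N * math.log(N))
print(ratio)  # slowly approaches 1 as N grows; already eta >> N here
```

The subleading correction is of order $N$, which is why the convergence of the ratio to 1 is slow; for the argument in the text only $\eta \gg N$ matters.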
ACCELERATING PARTICLES FROM REST {#acc} ================================== In order to prepare ourselves for the analysis of Paradox I presented in the Introduction, let us consider a step which has not been discussed yet: the acceleration of the two charged particles from rest to velocity $v$. Of course, in the frame of reference moving with velocity $v$, this is deceleration and stopping of particles moving with equal velocities, the process we analyzed above. However, the transformation from one frame to another might be a difficult task and, as in many other examples [@BK; @Nami], an analysis in a different Lorentz frame allows seeing new physical phenomena; in our case it provides yet another possibility to make an error leading to a paradox. We accelerate the particles in the same way as in the process of simultaneous deceleration discussed above, i.e., we perform simultaneous acceleration from rest to velocity $v$ during time $\tau$. The radiated energy during the acceleration, then, should be the same as in the process of stopping, see Eq. (\[Eradint\]). For simultaneous acceleration $x=0$, so the interference term is $$\label{Era1} R_{int} = {{q^2 v^2 } \over { c^2 l}} .$$ The initial energy of the particles is $$E_{in} = 2mc^2 + {{q^2 }\over {l}} . \label{EinN}$$ Since the final state of the particles (motion with velocity $v$ and separation $l$) is identical to the initial state of the particles in the previous example, the expression for the final energy $E_{fin}$ is given by the RHS of (\[Einmod\]). Again, for each particle we can write the equation of conservation of energy for the process when the other particle is not present: $$\begin{aligned} \label{conserv1p2} mc^2 = \gamma mc^2 + W_1 +R_1 ,\nonumber \\ mc^2 = \gamma mc^2 + W_2 +R_2 ,\end{aligned}$$ where $W_1$, $W_2$ are defined as the work performed by the particles and are negative in this case.
Since the charges start to move together, it seems that the net work is done only during the acceleration period. Therefore, the equation of conservation of energy is: $$E_{in} = E_{fin}+ W_1 +W_2 +R_1 +R_2 + R_{int} . \label{Conserv4}$$ Substituting all the terms and subtracting the single-particle equations we obtain: $$\label{contr} {{q^2 }\over l }= {{ q^2} \over l}\left(1 + {v^2\over c^2}\right) + {{ q^2 v^2}\over {l c^2}}.$$ The final energy is larger than the initial energy: contradiction! The error we made here is more transparent. It appears in the sentence stating that the only net work of the charges is done during the acceleration period. It is true that in a stationary case (particles keep their motion all the time) the net work of charges moving with constant velocity vanishes. However, at the beginning of the motion, the fields at the vicinity of the charges are different from the Coulomb field of the stationary motion: each particle feels the [ *static*]{} field of the other particle (i.e., as if it has not moved) until the signal from the motion of the other particle can arrive. Let us calculate the contribution to the work due to the forces between the particles. Particle 2 moves in the static field of particle 1 during the time $t={{l}\over {c+v}}$ after which it feels the stationary field of particle $1$ which is ${q \over {\gamma ^2 l^2}}$. Similarly, particle 1 moves in the static field of particle 2 during the time $t'={{l}\over {c-v}}$, after which it feels the stationary field of particle 2 of the same strength but in the opposite direction. After time $t'$, there is no contribution to the net work due to the forces between the two particles. Until this time, particle 1 covers distance $x'=vt'$ in the static field of particle 2. Therefore, the contribution to the work from particle 1 is: $$\label{icont} { {q^2 }\over {l} }- {{q^2 }\over {l+x'} } ={ {vq^2 }\over {cl} } .$$ The work performed by particle 2 until time $t'$ has two parts. 
Until time $t$, it is: $$\label{ncont} { {q^2 }\over {l} }- {{q^2 }\over {l-x }}= -{ {vq^2 }\over {cl} } .$$ Between time $t$ and $t'$ it feels a constant field so the contribution to the work is: $$\label{ncontt1} -{q^2 \over {\gamma^2 l^2}}v(t'-t)= -{ {2 v^2 q^2 }\over {c^2 l} } .$$ The contributions (\[icont\]) and (\[ncont\]) cancel each other, so the net contribution is given by (\[ncontt1\]). The net work performed by the particles during the time of motion with constant velocity is: $$\label{ncontt} \tilde W = -{ {2 v^2 q^2 }\over {c^2 l} } .$$ This restores the balance in the equation of conservation of energy canceling the unbalanced terms in (\[contr\]). We have resolved the contradiction by taking into account the work performed by charged particles during the transition period from static field to stationary field which lasts ${{l}\over {c-v}}$. But what happens if we skip the intermediate stage? We accelerate the particles to velocity $v$ and shortly after (before the particles finished performing the work (\[ncontt\])) stop them in a similar manner. What is the source of the radiation energy (\[Era1\]) in this case? In fact, since we have two processes with the acceleration, it seems that we are missing even more, twice this amount! No, this is another error. We need not look for the source of the radiation energy, because the radiation field due to the acceleration and the radiation field due to stopping interfere destructively. (Note a more bizarre example of a destructive interference of radiation field from a moving body.[@nonrad]) We are not going to analyze the destructive interference in a quantitative way in this case. The same effect yields the resolution of Paradox I which is demonstrated in a quantitative way in Section \[PARI\]. Before this, in the next section, we present a paradox arising from yet another subtle effect. 
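The bookkeeping of the transition-period works can be confirmed exactly with sympy (a sketch we add here, not part of the original): the contributions (\[icont\]) and (\[ncont\]) cancel, and the net work equals $-2v^2q^2/(c^2 l)$, precisely what is needed to cancel the unbalanced terms in (\[contr\]):

```python
# Exact symbolic check of the transition-period works in
# Eqs. (icont), (ncont), (ncontt1): the static-field contributions
# cancel and the net work is -2*v^2*q^2/(c^2*l).
import sympy as sp

q, l, c, v = sp.symbols('q l c v', positive=True)
gamma2 = 1 / (1 - v**2 / c**2)          # gamma^2

t = l / (c + v)       # particle 2 feels the static field until t
tp = l / (c - v)      # particle 1 feels the static field until t'
x = v * t
xp = v * tp

W_particle1 = q**2 / l - q**2 / (l + xp)                      # Eq. (icont)
W_particle2_static = q**2 / l - q**2 / (l - x)                # Eq. (ncont)
W_particle2_between = -q**2 / (gamma2 * l**2) * v * (tp - t)  # Eq. (ncontt1)

assert sp.simplify(W_particle1 - v * q**2 / (c * l)) == 0
assert sp.simplify(W_particle2_static + v * q**2 / (c * l)) == 0
net = sp.simplify(W_particle1 + W_particle2_static + W_particle2_between)
print(net)  # equals -2 q^2 v^2 / (c^2 l), restoring the energy balance
```

Note that the cancellation here is exact, not merely to second order in $v/c$.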
ACCELERATING PARTICLES MOVING IN PARALLEL {#accc} =========================================== Let us consider the acceleration of two charged particles lined up along the $y$ axis instead of the $x$ axis. The particles accelerate simultaneously from rest to velocity $v$ in the $x$ direction. The expression for the initial energy is again (\[EinN\]). The final energy, however, is different: $$\label{Efin2} E_{fin}= \gamma\left( 2mc^2 + {q^2 \over { l}} \right) .$$ Indeed, in the rest frame of the moving particles the distance between them is $l$. In this case the electromagnetic energy is transformed in the usual way (\[gama\]) because the energy of the composite system of charges and the rod is transformed according to (\[gama\]) and the energy of the rod with the tension in the $y$ direction is transformed according to (\[gama\]). The anomalous behavior of the rod in the previous case followed from the presence of the $\sigma_{xx}$ component of the stress tensor, which vanishes in the vertical configuration. The interference term of the radiation energy is modified too. In the $x$ axis configuration, the interference is in the $y$-$z$ plane and it is always at the angle $\pi \over 2$ relative to the direction of the acceleration. In the $y$ axis configuration, the interference is in the $x$-$z$ plane with varying angle $\theta$ relative to the direction of the acceleration. Therefore, the interference is not always in the direction of the maximal intensity. The intensity is proportional to $\sin^2\theta$, see (\[radS\]), and, therefore, averaging over $\theta$ reduces the interference term relative to the $x$ configuration (\[Era1\]) by a factor of 2: $$\label{Era11} R_{int} = {{q^2 v^2 } \over {2 c^2 l}} .$$ Although during the stationary motion the charges do not exert forces in the direction of motion, in the transition period, when the charges move in the static field, there is a small component of the force in the direction of motion.
The particles move in the static field during the time $t$ which fulfills $$ct=\sqrt{ l^2 +t^2 v^2}. \label{time1}$$ Therefore $ ct=\gamma l$. The total work each particle performs is $$\tilde W_1= \tilde W_2={ q^2 \over l} - { q^2 \over{\gamma l}}. \label{time11}$$ The single-particle equations of conservation of energy, of course, remain the same. Thus, putting together all terms of the conservation of energy equation $$E_{in} = E_{fin}+ W_1 +W_2+ \tilde W_1 + \tilde W_2 +R_1 +R_2 + R_{int}, \label{Conserv44}$$ and subtracting single particle equations (\[conserv1p2\]), we obtain: $$\label{contr5} {{q^2 }\over l }= {\gamma q^2 \over l} + {2 q^2\over l} \left(1 -{1\over \gamma}\right)+ {{q^2 v^2 } \over {2 c^2 l}} .$$ Contradiction again: The final energy is larger than the initial energy. Calculating up to the second order in the parameter $v^2 \over c^2$ we see three contributions in the units of ${ {v^2q^2 } \over { c^2 l}}$. The increase in the potential energy contributes $1\over 2$, the work of the static fields during the transition period contributes $2\cdot {1\over 2} =1$, and the interference of radiation contributes $1\over 2$. All terms together contribute ${ {2v^2q^2 } \over { c^2 l}}$. The effect we missed here is, probably, the most subtle one. We have not taken into account the work of the radiation field. The electric field at the point ${\bf r}$ (relative to the charge), due to the radiation of the charge $q$ moving with acceleration ${\bf \it a}$, is: [@Grif] $$\label{Efrad} {\bf E} = {q\over {c^2 r}} {\bf \hat r} \times ({\bf \hat r} \times {\bf \it a}).$$ For our configuration, the field is: $$\label{Efrad1} {\bf E} = -{{q a}\over {c^2 l}} { \hat x} .$$ This field exerts force during the time $\tau$ during which the particle moves distance $\tau v$. 
Taking into account that $a = {v\over \tau}$ we obtain that the force of the radiation field changes the energy of each particle by $$W_{rad}= - {{q^2 v^2 } \over { c^2 l}}.$$ Both particles lose their energy in this way and, therefore, we lose two units of ${{v^2 q^2 } \over { c^2 l}}$, restoring the equation of conservation of energy. Note that in the $x$ configuration of charges, the work of the radiation field vanishes because in this case the radiation fields at the locations of the particles vanish. RESOLUTION OF PARADOX I {#PARI} =========================== Now we have learned all the effects necessary for the resolution of Paradox I. In fact, we have seen a few effects which do not play a role in this case, but understanding them increases our confidence that our explanation is the correct one. The anomalous transformation of the electromagnetic energy is not relevant because the charges are at rest at the beginning and at the end of the process. The subtle effect of the work performed by the radiation field is not present either, since the radiation fields at the locations of the particles vanish. It is the interference of radiation, which we have not taken into account, that resolves the paradox. The radiation energy (\[rad\]) is much larger than the term (\[gain\]) which we have to compensate, see (\[2term<\]). However, we would like to obtain a quantitative resolution of this paradox showing how the missing term (\[gain\]) arises from the calculation of the radiation energy. In order to obtain a quantitative result we specify how we perform the process described in Section I. We assume that we perform it exactly as in all other setups described here: we accelerate particles during a small time $\tau$ until they reach velocity $v$. In case (i), the particle moves with constant velocity the distance $x$ and then stops in the same manner as it was accelerated.
In case (ii), both particles reach velocity $v$ (absolute value) and stop at time $t$ after passing the distance $x/2$. The time $t$ is short enough such that each particle cannot receive a signal during its motion about the motion of the other particle. Thus, $$\label{param} \tau \ll t = {x \over {2v}} < {l \over {c+v}}.$$ In case (i) the radiation energy is created in the same amount due to the acceleration and due to the stopping of the particle and, therefore, it is twice the amount given by the Larmor formula (\[rad\]): $$\label{radi} R^{\rm i} = {4 \over 3} {{q^2 v^2} \over { c^3\tau}}.$$ In case (ii) there are four events of changing velocity of a particle by amount $v$ and, therefore, there are four spherical shells of radiation field of the width $\tau c$, see Fig. 5. The radiation energy is four times the Larmor energy (\[rad\]) with the correction due to the interference. The correction due to the interference has four terms. The interferences are due to radiation emitted during the acceleration of the two particles, stopping of the two particles, acceleration of the first and stopping of the second, and acceleration of the second and stopping of the first. All these terms can be calculated in the same way as we have calculated the interference of radiation energy of two stopping charged particles in Section IV. Accelerations and decelerations of the particles are performed simultaneously, therefore, the direction of interference is $\theta = {\pi\over 2}$. This is the direction of the maximal power of the radiation energy, see (\[radS\]). 
The range of the angles for which there is interference is given by (\[thetaD\]) and, thus, similarly to the derivation of (\[Eradint\]), we obtain that the interference term due to simultaneous acceleration is equal to the term due to simultaneous stopping, and it equals $$\label{aass} - {{q^2 v^2 } \over { c^2 l}},$$ where the minus sign is because the particles accelerate in opposite directions, and the second term of (\[Eradint\]) does not appear since simultaneity corresponds to $x=0$ in the notation of Section III. The interference between the acceleration of the first and the stopping of the second takes place in the direction $\theta_1$ defined by $$\label{theta1} \sin (\theta_1 -{\pi\over2}) ={{c t} \over{ l-vt}} .$$ For calculating this correction we can use (\[Eradint\]) again, taking into account that the particle stops after passing the distance $vt={x\over2}$. Therefore, this contribution is: $$\label{cont2} {{q^2 v^2 } \over { c^2 l}} - {{q^2 x^2}\over {4l^3}} .$$ The contribution to the correction of the radiation energy due to the interference between the acceleration of the second and the stopping of the first particles takes place in the direction defined by $$\label{theta2} \sin (\theta_2 -{\pi\over2}) ={{c t} \over{ l+vt}}.$$ [ Electromagnetic radiation of the two charged particles which are simultaneously accelerated toward each other and after time $t$ stopped, case (ii). The shadowed area signifies destructive interference and the area painted in black signifies constructive interference.]{} In our case $l\gg vt$, so we can make the approximation $l-vt~\approx l +vt \approx l$ and, therefore, we get the same expression again.
Summing up all the expressions, we obtain: $$\label{radii} R^{\rm ii} = {8 \over 3} {{q^2 v^2} \over { c^3\tau}} - {{2q^2 v^2 } \over { c^2 l}}+ 2\left( {{q^2 v^2 } \over { c^2 l}} - {{q^2 x^2}\over {4l^3}}\right)= {8 \over 3} {{q^2 v^2} \over { c^3\tau}} - {{q^2 x^2}\over {2l^3}} .$$ Now we are able to analyze the setup of Paradox I taking into account the radiation energy. In case (i) the work performed by the external forces should include the radiation energy (\[radi\]). Thus, instead of (\[W\]) we obtain $$\label{W+rad} W^{\rm i} = U_{NEW} - U_{OLD}+ R^{\rm i} = {q^2 \over {l-x}} - {q^2 \over {l}} + {4 \over 3} {{q^2 v^2} \over { c^3\tau}}.$$ In case (ii), following the structure of Paradox I, we have to calculate the work taking into account the causality argument: each particle “does not know” that the other particle moved. Therefore, the work against the field and the radiated energy should be calculated as if the other particle has not moved. The work is twice the amount of work in case (i) with the change of $x \rightarrow {x\over2}$. Thus, instead of (\[W’\]), we obtain $$W^{\rm ii} = W_1 + W_2 =2 \left( {q^2 \over {l-{x\over 2}}} - {q^2 \over l} + {4 \over 3} {{q^2 v^2} \over { c^3\tau}}\right) . \label{W'+rad}$$ Clearly we cannot gain energy from constructing a machine with a cycle of process (ii) and reversed process (i). The work required for reversed process (i) is: $$\label{w+radneg} W^{\rm \tilde i} = {q^2 \over {l}}- {q^2 \over {l-x}} + {4 \over 3} {{q^2 v^2} \over { c^3\tau}}.$$ Thus, the work during the whole cycle is: $$\begin{aligned} \nonumber W_{tot} = W^{\rm \tilde i}+W^{\rm ii}={q^2 \over l}- {q^2 \over {l-x}} + {4 \over 3} {{q^2 v^2} \over { c^3\tau}}+~~~~~~~\\ ~~~~~~~2\left( {q^2 \over {l-{x\over 2}}} - {q^2 \over l} + {4 \over 3} {{q^2 v^2} \over { c^3\tau}}\right) \approx - {{q^2 x^2} \over {2l^3}} + {{4q^2 v^2} \over { c^3\tau}}. 
\label{Wtot}\end{aligned}$$ This work is greater than zero, since the radiation term is much larger than the gain in the potential energy; this can be seen explicitly using (\[param\]). However, even if we collect the radiation energy, we still cannot gain energy. Indeed, the total radiation energy is: $$\label{radtot} R_{tot}= R^{\rm i}+ R^{\rm ii} = {4 \over 3} {{q^2 v^2} \over { c^3\tau}}+{8 \over 3} {{q^2 v^2} \over { c^3\tau}} - {{q^2 x^2}\over {2l^3}}= {{4q^2 v^2} \over { c^3\tau}} - {{q^2 x^2}\over {2l^3}} .$$ We obtain exactly the same expression, i.e., our calculations show (to order ${v^2\over c^2}$) that during the complete cycle $ W_{tot}= R_{tot}$. This completes the analysis of the paradox presented at the beginning of this paper.

Is it a simple task to demonstrate conservation of energy to higher order in $v\over c$? It is not difficult to expand the algebraic expressions to higher order, but this is not enough. We have used additional approximations; in particular, the formulas for the radiation of the charged particles are correct only in the limit of small accelerations and small velocities. Indeed, Eq. (\[rad\]) cannot be universally correct, since it implies that by reducing $\tau$, the time of stopping the charged particle, we can obtain an unlimited amount of radiation energy: clearly we cannot get more energy than the particle has. Thus, a higher-order verification of the conservation of energy is an elaborate task which goes beyond the scope of this paper.

In this paper we have analyzed some relativistic features of the classical electromagnetic theory. We demonstrated in a quantitative way the relevance of several effects to the balance of conservation of energy in a system of charges.
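The central algebraic step in the cycle analysis above, namely that the potential-energy part of the cycle reduces to $-{{q^2 x^2}\over{2l^3}}$, exactly the interference deficit appearing in $R_{tot}$, can be verified symbolically. A minimal sketch, again assuming SymPy is available (the variable names are ours):

```python
import sympy as sp

q, l, x = sp.symbols('q l x', positive=True)

# Potential-energy part of the work over the full cycle
# (radiation terms set aside; they contribute 4*q^2*v^2/(c^3*tau) to
# both W_tot and R_tot and cancel in the comparison):
#   reversed process (i):  q^2/l - q^2/(l - x)
#   process (ii):          2 * ( q^2/(l - x/2) - q^2/l )
U_cycle = q**2 / l - q**2 / (l - x) + 2 * (q**2 / (l - x / 2) - q**2 / l)

# Expand in the small parameter x/l:
approx = sp.series(U_cycle, x, 0, 3).removeO()

# The linear terms cancel and the quadratic term is -q^2 x^2 / (2 l^3):
assert sp.simplify(approx + q**2 * x**2 / (2 * l**3)) == 0
```

Since the radiation terms in $W_{tot}$ and $R_{tot}$ are identical by construction, this expansion confirms $W_{tot}=R_{tot}$ to the stated order.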
These effects are: the change of the electromagnetic field subject to relativistic causality constraints, the anomalous transformation of the electromagnetic energy, the energy radiated by accelerated charges, the interference of radiation, and the work performed by the radiation field. We believe that presenting the subject in the form of “paradoxes” helps to achieve a deeper understanding. Obtaining quantitative resolutions of the presented paradoxical situations builds confidence in applying the equation of conservation of energy to indirect calculations of various effects.

**ACKNOWLEDGMENTS**

It is a pleasure to thank Shmuel Nussinov and Philip Pearle for helpful discussions. This research was supported in part by the EPSRC grant GR/N33058.